id | title | abstract | authors | published_date | link | markdown
---|---|---|---|---|---|---
2310.00503 | Black-box Attacks on Image Activity Prediction and its Natural Language
Explanations | Explainable AI (XAI) methods aim to describe the decision process of deep
neural networks. Early XAI methods produced visual explanations, whereas more
recent techniques generate multimodal explanations that include textual
information and visual representations. Visual XAI methods have been shown to
be vulnerable to white-box and gray-box adversarial attacks, with an attacker
having full or partial knowledge of and access to the target system. As the
vulnerabilities of multimodal XAI models have not been examined, in this paper
we assess for the first time the robustness to black-box attacks of the natural
language explanations generated by a self-rationalizing image-based activity
recognition model. We generate unrestricted, spatially variant perturbations
that disrupt the association between the predictions and the corresponding
explanations to mislead the model into generating unfaithful explanations. We
show that we can create adversarial images that manipulate the explanations of
an activity recognition model by having access only to its final output. | Alina Elena Baia, Valentina Poggioni, Andrea Cavallaro | 2023-09-30T21:56:43Z | http://arxiv.org/abs/2310.00503v1 | # Black-Box Attacks on Image Activity Prediction and Its Natural Language Explanations
###### Abstract
Explainable AI (XAI) methods aim to describe the decision process of deep neural networks. Early XAI methods produced visual explanations, whereas more recent techniques generate multimodal explanations that include textual information and visual representations. Visual XAI methods have been shown to be vulnerable to white-box and gray-box adversarial attacks, with an attacker having full or partial knowledge of and access to the target system. As the vulnerabilities of multimodal XAI models have not been examined, in this paper we assess for the first time the robustness to black-box attacks of the natural language explanations generated by a self-rationalizing image-based activity recognition model. We generate unrestricted, spatially variant perturbations that disrupt the association between the predictions and the corresponding explanations to mislead the model into generating unfaithful explanations. We show that we can create adversarial images that manipulate the explanations of an activity recognition model by having access only to its final output.
## 1 Introduction
Deep neural models are generally black-box systems whose decision-making process is obscure. Explainable artificial intelligence (XAI) aims to make the decisions of deep neural models transparent, i.e. understandable by a human [4]. An XAI model provides insights into the decision-making process by identifying the contribution of individual features, which facilitates error analysis and the identification of uncertain cases.
XAI systems favor the assessment of the vulnerabilities of a model [2] and interactions with people to support their decisions [26].
XAI approaches may generate visual (V-XAI), textual (T-XAI) or multimodal (M-XAI) explanations. Visual explanations highlight the most relevant pixel information used by the model [46, 49, 55]. Examples include superpixels based visualizations (e.g. LIME [46]), heatmaps [49], saliency maps [55], and feature contribution methods inspired by game theory (e.g. SHAP [35]). However, V-XAI outputs may be difficult to comprehend for non-expert users when no information is provided on how highlighted pixels influence the decision. Textual explanations describe the reasons for a decision in a more human-interpretable form through natural language [14, 21, 25, 33, 36]. Finally, multimodal explanations jointly generate textual rationales and visual evidence in the form of attention maps [41, 47, 65]. A recent self-rationalizing M-XAI method [47] simultaneously predicts the decision and justifies, textually and visually, what led to that decision.
Figure 1: Sample adversarial images generated against NLX-GPT [47] from a clean image (left) by changing the activity prediction while maintaining the textual explanation (middle) and by maintaining the activity prediction while changing the textual explanation (right).

Various studies have addressed the vulnerability of V-XAI methods to adversarial attacks [3, 15, 18, 22, 27, 59, 56]; however, no previous work has explicitly considered T-XAI or M-XAI models.
In this paper we present a black-box1, content-based and unrestricted2 attack against a natural language XAI model for image classification [47]. We generate the adversarial attack against a vision-language model with unrestricted semantic colorization. Our attack uses only the final output, namely the textual output and/or the visual maps of the model, to determine the adversarial perturbations. We consider two attack scenarios, namely changing the activity prediction while keeping the textual explanation similar and maintaining the same activity prediction while changing the textual explanation (see Figure 1). To the best of our knowledge, no related work explicitly performs black-box attacks on the prediction-generation mechanism of a natural language-based explanation system.

Footnote 1: A black-box attack simulates a realistic threat since there is no need for model-specific information and access to the target model is limited (i.e. only its final output).

Footnote 2: Unrestricted perturbations allow for more freedom in modifying the image, which improves attack effectiveness and transferability [50, 51, 52, 58, 74], and can evade defense mechanisms more effectively [52, 60].
In summary, our contributions are as follows:
* We propose the first black-box attack against the prediction-explanation mechanism of a natural language explanation model for image classification. We evaluate the robustness of the target model against adversarial image colorization techniques under two scenarios: changing the prediction while keeping the explanation similar, and keeping the same prediction while changing the explanation.
* We create adversarial examples by combining image semantics and the information provided by a visual explanation map to localize the most relevant areas for the prediction and to adapt to different image regions.
## 2 Related works
V-XAI models are susceptible to adversarial attacks that may, for example, preserve the prediction of the original image but change the explanation [3, 15, 18, 22, 27, 56, 59]. Examples of attacks include restricted adversarial perturbations [18], structured manipulations that change the explanation maps to match an arbitrary target map [15], and adversarial classifiers [56] that fool post-hoc explanations methods such as LIME [46] and SHAP [35]. Other works use simple constant shift transformation of the input data [27], model parameter randomization and data randomization [3], and network fine-tuning with adversarial loss [22] to manipulate visual explanations models.
Table 1 shows a summary of existing attacks on vision-language models. Several studies covered V-XAI methods, however no work has yet explicitly considered textual explanations of self-rationalizing multimodal explanations models. Existing similar research on vision-language models focuses on attacking image captioning or visual question answering models. The attacks use \(L_{p}\)-norm restricted perturbations and are primarily conducted in a white-box [10, 24, 30, 54, 62, 64, 67, 68, 72] or gray-box [1, 9, 31, 62, 64] setup. These attacks are less practical in a real-world scenario since they require prior knowledge about the victim model, which is not readily available, and are often designed for specific model architectures. Also, restricted perturbations are often not semantically meaningful [38, 57] and can create visible artifacts that can be detected by defenses [16, 53, 66].
Attacks on image-to-text generation models may treat the structured output as a single label and design the attack as a targeted complete sentence [1, 31, 67]. This idea was extended to targeted keywords attacks that encourage the adversarial caption to include a predefined set of keywords in any order [10, 72] or at specified positions in the caption [68]. Methods may mask out targeted keywords while preserving the caption quality for the visual content [24]. Untargeted attacks may use attention maps of the underlying target model to focus the adversarial noise on the regions attended by the model [54]. Generative adversarial models have also been used to create adversarial perturbations [1, 62, 64]. Alternatively, adversarial images may be generated by perturbing an image so that its features resemble those of a target image, forcing the model to output the same caption [1, 9, 31].
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline Reference & Task & Box & R & \(\overline{\mathbf{R}}\) & T & \(\overline{\mathbf{T}}\) \\ \hline Chen et al. [10] & IC & \(\mathcal{O}\) & ✓ & ✓ & \\ Zhang et al. [72] & IC & \(\mathcal{O}\) & ✓ & ✓ & \\ Ji et al. [24] & IC & \(\mathcal{O}\) & ✓ & ✓ & \\ Kwon et al. [30] & IC & \(\mathcal{O}\) & ✓ & ✓ & ✓ \\ Xu et al. [68] & IC & \(\mathcal{O}\) & ✓ & ✓ & ✓ \\ Bhattad et al. [7] & IC & \(\mathcal{O}\) & ✓ & ✓ & \\ Wu et al. [64] & IC & \(\mathcal{O}\) & ✓ & ✓ & ✓ \\ Wang et al. [62] & IC & \(\mathcal{O}\) & ✓ & ✓ & ✓ \\ Sharma et al. [54] & VQA & \(\mathcal{O}\) & ✓ & ✓ & ✓ \\ Huang et al. [23] & SG & \(\mathcal{O}\) & ✓ & & ✓ \\ Xu et al. [67] & IC, VQA & \(\mathcal{O}\) & ✓ & ✓ & ✓ \\ Lapid et al. [31] & IC & \(\mathcal{O}\) & ✓ & ✓ & ✓ \\ Aafafaq et al. [1] & IC & \(\mathcal{O}\) & ✓ & ✓ & \\ Chaturvedi et al. [9] & IC, VQA & \(\mathcal{O}\) & ✓ & ✓ & \\ Zhao et al. [73] & IC, VQA, IG & \(\mathcal{O}\) & ✓ & ✓ & \\
**Ours** & ACT-X & \(\mathcal{O}\) & & ✓ & ✓ \\ \hline \hline \end{tabular}
\end{table}
Table 1: Adversarial attacks against vision-language models. KEY – \(\mathcal{O}\): white-box, \(\bullet\): black-box, \(\blacklozenge\): gray-box, T: targeted, \(\overline{\mathbf{T}}\): untargeted, R: restricted, \(\overline{\mathbf{R}}\): unrestricted, IC: image captioning, SG: story ending text generation, IG: image generation, VQA: visual question answering, ACT-X: activity recognition explanation.
Multimodal vision-language models for classification tasks are vulnerable to white-box and gray-box adversarial perturbations on a single modality [69] (i.e. image input) or multiple modalities [17, 39, 71] (i.e. image and text input). A multimodal white-box iterative attack [23] that fuses image and text modalities attacks has also been introduced to change the output sentence of a multimodal story-ending generation model. A recent black-box attack [73] deceives large vision-language models assuming a targeted adversarial goal. First, a surrogate model is used to craft adversarial examples with restricted perturbations and transfers the adversarial examples to the victim model; then a query-based attacking strategy generates responses more similar to the targeted text.
In this work, we focus on a self-rationalizing model and empirically analyze the robustness against black-box content-based unrestricted attacks by changing either the activity prediction or the explanation. We do not consider the scenario of attacking both activity and explanation since this would be similar to image-captioning attacks that aim to change the entire textual output of a model. The proposed methodology uses only the final decision of the explanation model and does not rely on any surrogate models. Moreover, considering the attack scenarios, our problem is more challenging since multiple conditions need to be satisfied for an attack to be successful.
## 3 Methodology
### Problem definition
Let \(\text{I}\in\mathbb{R}^{h\times\text{w}\times 3}\) be an RGB image with height \(h\) and width \(w\). Let \(M_{E}\) be an encoder-decoder M-XAI model such that \(M_{E}(I)=\big{(}a,e,I_{e}\big{)}\), where \(a=(a_{1},a_{2},\dots,a_{p})\) represents the generated textual description of the activity and \(e=(e_{1},e_{2},\dots,e_{n})\) is the generated textual explanation that justifies the activity decision; \(a_{i}\) and \(e_{j}\) are words, and \(p\) and \(n\) are variable sentence lengths, which depend on the type of activity illustrated in the image \(I\). The set of possible activities is not fixed if \(M_{E}\) uses as decoder a language prediction model that generalizes to activity categories unseen during training. \(I_{e}\) is the visual explanation map generated for the predicted activity using the cross-attention weights of \(M_{E}\).
We define an adversarial example for the explainable model \(M_{E}\), the image \(\hat{I}\), such that \(M_{E}(\hat{I})=\big{(}\hat{a},\hat{e},\hat{I}_{e}\big{)}\), where \(\hat{a}\), \(\hat{e}\), and \(\hat{I}_{e}\) are the activity prediction, textual explanation, and visual explanation generated for the image \(\hat{I}\). In this work, we focus on the textual explanations and we do not set any conditions on \(\hat{I}_{e}\). Under the assumption of faithful explanations (i.e. explanations that accurately reflect the process behind a prediction) the label-rationale should be strongly associated [63]: changing the activity prediction implies a change in its explanation. Therefore, our objective is to break the correlation between activity prediction and its explanation by changing one part while keeping the other unchanged.
We therefore consider two attack scenarios, namely \(S1\), for which the activity predictions are different (\(a\neq\hat{a}\)) but the explanations are similar (\(e\simeq\hat{e}\)), and \(S2\), for which the activity predictions are the same (\(a=\hat{a}\)) but the explanations are different (\(e\nsim\hat{e}\)).
### Black-box unrestricted attacks
We condition the perturbation generation on the activity prediction and textual explanation. We craft region-specific unrestricted perturbations and generate adversarial examples following the image semantics-based idea proposed in [52]. To determine the adversarial perturbations we use the (dis)similarity between textual explanations. We consider two strategies for perturbing the semantic areas accordingly and extend them to our problem. The first strategy is a random colorization approach [52] and the second is a strategy that combines photo editing techniques [5].
**Explanation similarity.** We measure the difference between \(e\), the textual explanation generated for the clean image \(I\), and \(\hat{e}\), the explanation generated for the perturbed image \(\hat{I}\). Let \(E(\cdot)\) be a transformer-based network [45] that computes the vector embedding of a sentence. Then we calculate the similarity between \(e\) and \(\hat{e}\), \(Q_{\hat{T}}(I,\hat{I})\), as the cosine similarity3 normalized in the range [0,1]:

Footnote 3: We use a cosine-similarity measure with neural sentence embeddings because it correlates best with human judgement [12, 25, 45] and outperforms other measures such as METEOR [6] or BLEU [40].
\[Q_{\hat{T}}(I,\hat{I})=\frac{1}{2}\left(\frac{\sum_{i=1}^{n}E(e)_{i}E(\hat{e })_{i}}{\sqrt{\sum_{i=1}^{n}E(e)_{i}^{2}}\sqrt{\sum_{i=1}^{n}E(\hat{e})_{i}^{ 2}}}+1\right), \tag{1}\]
where \(n\) is the size of the embedding vector. The larger the similarity \(Q_{\hat{T}}(I,\hat{I})\), the more similar the explanations for \(I\) and \(\hat{I}\). For example, let us consider the sentences \(e_{1}\): _he is standing on a bridge with a backpack on his back_, \(e_{2}\): _he is wearing a backpack and standing on a bridge_, and \(e_{3}\): _he is standing in a field with a frisbee in his hand_. Sentences \(e_{1}\) and \(e_{2}\) have the same meaning and their similarity is 0.97. Sentences \(e_{1}\) and \(e_{3}\) describe different scenarios (although they share a few words) and their difference is reflected in a lower similarity of 0.69. Sentences \(e_{2}\) and \(e_{3}\) also have a low similarity of 0.70.
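The score \(Q_{\hat{T}}\) of Eq. 1 is straightforward to compute once a sentence encoder is available. Below is a minimal Python sketch; the choice of the sentence-transformers library and the all-MiniLM-L6-v2 model is an assumption standing in for the embedding network \(E(\cdot)\) of [45], not necessarily the authors' exact setup.

```python
import numpy as np
from sentence_transformers import SentenceTransformer

# Hypothetical choice of encoder; any sentence-embedding model can play the role of E(.).
_encoder = SentenceTransformer("all-MiniLM-L6-v2")

def q_text(e: str, e_hat: str) -> float:
    """Cosine similarity of the two explanation embeddings, rescaled to [0, 1] (Eq. 1)."""
    u, v = _encoder.encode([e, e_hat])
    cosine = float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))
    return 0.5 * (cosine + 1.0)
```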
**Image partitioning.** We use a multi-step segmentation approach to partition an image into sensitive regions, \(R^{s}_{i}\), and non-sensitive regions, \(R^{n}_{j}\). Sensitive regions correspond to objects whose unrealistic colors and appearance could raise suspicion (e.g. human skin), whereas non-sensitive regions
can have their colors arbitrarily modified without necessarily making the image look unnatural. We represent an image \(I\) as:
\[I=\bigcup R_{i}^{s}\cup\bigcup R_{j}^{n}. \tag{2}\]
First, we use semantic segmentation to partition an image into semantic regions, such as person, sky, car, building [11]. Next, we detect skin4 areas on top of semantic regions representing people and mark the skin as sensitive and unalterable. Finally, we further partition each semantic region into smaller areas and obtain the non-sensitive regions with color-based oversegmentation [32]. An example is shown in Figure 2.
Footnote 4: Skin Segmentation Network[https://github.com/WillBrennan/SemanticSegmentation](https://github.com/WillBrennan/SemanticSegmentation)
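As a rough illustration of this partitioning step, the sketch below combines a given semantic label map and skin mask into a sensitive mask and over-segments the remaining pixels with SLIC superpixels. The semantic-segmentation and skin-detection models are assumed to be external, and SLIC is used here only as a stand-in for the color-based over-segmentation of [32].

```python
import numpy as np
from skimage.segmentation import slic

def partition_regions(image, semantic_labels, skin_mask, person_label):
    # Sensitive regions: skin pixels inside areas labelled as "person".
    sensitive = (semantic_labels == person_label) & skin_mask
    # Non-sensitive regions: over-segment the remaining pixels per semantic region,
    # so that superpixels never cross semantic boundaries.
    non_sensitive = np.zeros(semantic_labels.shape, dtype=np.int32)
    next_id = 1
    for label in np.unique(semantic_labels):
        region = (semantic_labels == label) & ~sensitive
        if not region.any():
            continue
        superpixels = slic(image, n_segments=50, compactness=10,
                           mask=region, start_label=1)
        for sp in np.unique(superpixels[region]):
            non_sensitive[(superpixels == sp) & region] = next_id
            next_id += 1
    return sensitive, non_sensitive  # boolean mask and labelled non-sensitive areas
```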
**Optimization process.** We find an adversarial example for \(I\) in \(S1\), \(\hat{I}_{S1}\), whose generated explanation has the highest similarity with that generated for \(I\), while also having a different activity prediction, as follows:
\[\hat{I}_{S1}=\operatorname*{argmax}_{\hat{I}}\bigl{(}Q_{\hat{T}}(I,\hat{I}) \mathbbm{1}_{\{(a,\hat{a}):a\neq\hat{a}\}},Q_{\hat{I}}(I,\hat{I})\bigr{)}, \tag{3}\]
where \(Q_{\hat{I}}(I,\hat{I})\) is used to reduce the noticeability of the perturbation and is implemented as SSIM [61] between the clean image, \(I\), and the candidate adversarial example, \(\hat{I}\), and \(\mathbbm{1}_{\{(a,\hat{a}):a\neq\hat{a}\}}\) is the indicator function whose value is 1 only if the predicted activity of \(\hat{I}\) is different from the activity of \(I\).
Similarly, we find an adversarial example for \(I\) in \(S2\), \(\hat{I}_{S2}\), whose generated explanation has the lowest similarity with the explanation generated for the clean image \(I\), while also having the same activity prediction as \(I\), as:
\[\hat{I}_{S2}=\operatorname*{argmax}_{\hat{I}}\bigl{(}1-Q_{\hat{T}}(I,\hat{I}) \mathbbm{1}_{\{(a,\hat{a}):a=\hat{a}\}},Q_{\hat{I}}(I,\hat{I})\bigr{)}, \tag{4}\]
where \(\mathbbm{1}_{\{(a,\hat{a}):a=\hat{a}\}}\) is the indicator function whose value is 1 only if the predicted activity of \(\hat{I}\) is the same as the activity of \(I\).
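One simple way to realize the selection of Eq. 3 over a pool of candidate perturbations is sketched below. The functions predict_and_explain (querying the black-box model for its activity and explanation) and q_text (the similarity of Eq. 1) are placeholders, and resolving the two objectives lexicographically is a simplification of the multi-objective search used in the paper.

```python
from skimage.metrics import structural_similarity as ssim

def best_s1_candidate(clean_img, candidates, predict_and_explain, q_text):
    """Pick the S1 adversarial example: the activity must change, then prefer high Q_T and Q_I.

    clean_img and the candidates are assumed to be uint8 RGB arrays.
    """
    a, e = predict_and_explain(clean_img)          # activity and textual explanation of I
    best, best_key = None, (-1.0, -1.0)
    for cand in candidates:
        a_hat, e_hat = predict_and_explain(cand)
        if a_hat == a:                             # indicator of Eq. 3 is 0: not usable for S1
            continue
        key = (q_text(e, e_hat), ssim(clean_img, cand, channel_axis=-1))
        if key > best_key:
            best, best_key = cand, key
    return best                                    # None if no candidate changed the activity
```

For \(S2\) (Eq. 4) the same loop applies with the test inverted (keep candidates with \(\hat{a}=a\)) and the explanation term replaced by \(1-Q_{\hat{T}}\).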
**Random colorization.** We extend ColorFool [52] to consider the explanation similarity \(Q_{\hat{T}}\), as defined in Eq. 1. We refer to this method as ColorFoolX (CFX). In this case, we do not use the image similarity \(Q_{\hat{I}}\) in the process of finding the adversarial example. We rely only on the image region semantics and prior information about color perception in each region to generate the adversarial images. ColorFool uses the semantic regions computed in the first step of the multi-step segmentation scheme and defines four types of sensitive regions: person, sky, vegetation, and water. Adversarial images are generated by randomly modifying the \(a\) and \(b\) components of the regions in the perceptually uniform \(Lab\) color space within specific color ranges, which depend on the semantics of a region, without changing the lightness \(L\). ColorFool avoids perturbing regions representing people.
**Combining editing filters.** We extend a combination of image editing filters method [5] to perform localized attention-based attacks. The method manipulates image attributes like saturation, contrast, brightness, sharpness, and applies edge enhancement, gamma correction or soft light gradients. We restrict the perturbations to non-sensitive areas \(R_{j}^{n}\) using the information from \(I_{e}\): we select the non-sensitive areas that are the most important for the activity prediction, \(R_{a}^{n}\), for \(S1\), and the least important non-sensitive areas for the activity prediction, \(R_{na}^{n}\), for \(S2\).
We generate \(\hat{I}\), through a sequence of \(L\) filters on \(I\), for \(S1\) as:
\[\hat{I}=R_{i}^{s}\cup f_{t_{1}}^{\alpha_{t_{1}},\beta_{t_{1}}}\circ f_{t_{2}}^{\alpha_{t_{2}},\beta_{t_{2}}}\circ\cdots\circ f_{t_{L}}^{\alpha_{t_{L}},\beta_{t_{L}}}(R_{a}^{n})\cup R_{na}^{n}, \tag{5}\]
and for \(S2\) as:
\[\hat{I}=R_{i}^{s}\cup R_{a}^{n}\cup f_{t_{1}}^{\alpha_{t_{1}},\beta_{t_{1}}}\circ f_{t_{2}}^{\alpha_{t_{2}},\beta_{t_{2}}}\circ\cdots\circ f_{t_{L}}^{\alpha_{t_{L}},\beta_{t_{L}}}(R_{na}^{n}), \tag{6}\]
where each \(f_{i}^{\alpha_{i},\beta_{i}}\) is selected from a set of \(F\) predefined filters parameterized with \(\beta_{i}\) that controls the amount of change of each property (intensity), and \(\alpha_{i}\), the parameter of the alpha blending between the clean image and the filtered image. The optimal filter configuration is found with a nested evolutionary algorithm consisting of an outer optimization step that determines the sequence of \(L\) filters with \(f_{t_{i}}\in F\) with a genetic algorithm (GA) [37], and an inner optimization step that determines the values of \((\alpha_{t_{i}},\beta_{t_{i}})\) of each selected filter in the outer step with an Evolutionary Strategies (ES) [44].
We consider both the \(Q_{\hat{T}}\) and \(Q_{\hat{I}}\) functions to find the adversarial examples, as defined in Eqs. 3 and 4. Considering the conflicting nature of the two functions we formulate the optimization process as multi-objective optimization, handled by the NSGA-II algorithm [13], to find the best trade-off between \(Q_{\hat{T}}\) and \(Q_{\hat{I}}\).
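The filter chains of Eqs. 5 and 6 can be sketched with standard image-editing primitives; the snippet below uses Pillow enhancers for saturation, contrast, brightness and sharpness and alpha-blends each filtered result with its input. It ignores the restriction to the selected regions and the evolutionary search over \((t_{i},\alpha_{i},\beta_{i})\), which are assumed to be handled elsewhere.

```python
from PIL import Image, ImageEnhance

FILTERS = {
    "saturation": ImageEnhance.Color,
    "contrast":   ImageEnhance.Contrast,
    "brightness": ImageEnhance.Brightness,
    "sharpness":  ImageEnhance.Sharpness,
}

def apply_filter_chain(img: Image.Image, chain):
    """chain: list of (filter_name, alpha, beta) tuples chosen by the optimizer."""
    out = img
    for name, alpha, beta in chain:
        filtered = FILTERS[name](out).enhance(beta)   # beta controls the intensity
        out = Image.blend(out, filtered, alpha)       # alpha-blend unfiltered vs. filtered
    return out

# Example candidate of the evolutionary search (values are illustrative only):
# adv = apply_filter_chain(Image.open("input.jpg"), [("contrast", 0.7, 1.4), ("saturation", 0.5, 0.6)])
```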
Figure 2: Example of semantic regions obtained after the first step (middle) and last step (right) of the multi-step segmentation scheme. Regions in brown are considered sensitive to color changes.
## 4 Validation
### Experimental setup
**Multimodal explanation model.** We perform the attacks on the multimodal explanation model NLX-GPT for activity recognition [47], which textually explains its prediction using CLIP [43] as vision encoder and the distilled GPT-2 pre-trained model [48, 8] as decoder. NLX-GPT also generates a visual explanation map based on the cross-attention weights of the model. The distilled GPT-2 was pre-trained on image-caption pairs (COCO captions [34], Flickr30k [42], Visual Genome [29] and image-paragraph captioning [28]). NLX-GPT was fine-tuned on the activity recognition dataset ACT-X [41] (18k images). The encoder is fixed for both the pre-training and fine-tuning stages.
**Dataset.** We use the test set of ACT-X [41], a 3,620-image dataset used to explain decisions of activity recognition models. Each image is labeled with an activity and three explanations. We perform the attack on the 1,829 images whose activity is correctly predicted by NLX-GPT.
**Cases.** We compare different filtering approaches and objective functions. We analyze the following cases: full image filtering (FL-s) and localized filtering (LC-s, as described in Section 3.2) with a single objective (\(Q_{\hat{T}}\)) for explanation (dis)similarity; full image filtering (FL-m) and localized filtering (LC-m) with the multi-objective function (\(Q_{\hat{T}}\), \(Q_{\hat{I}}\)); and ColorFoolX (CFX). Note that CFX does not account for image similarity during the attack.
**Parameters.** For CFX we allow a maximum of 1000 trials. For FL-s and LC-s we follow the CFX iterative approach. For FL-m and LC-m we use the multi-objective evolutionary optimization with the configuration proposed in [5]. We set the size of the outer population to \(N_{out}=10\), the number of outer generations to \(G_{out}=10\), and the mutation probability to \(\rho=0.5\). The inner population size is \(\lambda=5\), inner generations \(G_{in}=3\) with initial learning rate \(lr=0.1\) and decay rate \(\beta=0.75\).
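For reference, the search hyper-parameters listed above can be collected in a single configuration object, e.g. (the field names are ours, the values are those reported above):

```python
from dataclasses import dataclass

@dataclass
class SearchConfig:
    max_trials: int = 1000      # iteration budget for CFX, FL-s and LC-s
    n_outer: int = 10           # outer (GA) population size N_out
    g_outer: int = 10           # outer generations G_out
    mutation_p: float = 0.5     # mutation probability rho
    lambda_inner: int = 5       # inner (ES) population size
    g_inner: int = 3            # inner generations G_in
    lr: float = 0.1             # initial learning rate
    decay: float = 0.75         # decay rate beta
```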
### Performance evaluation
**Success of the attacks.** We measure the success rate, \(S_{r}\), of the adversarial attacks as:
\[S_{r}=\frac{1}{N_{a}}\sum\nolimits_{j=1}^{N_{a}}\mathbb{1}_{\omega}, \tag{7}\]
where \(N_{a}\) is the total number of images and, for \(S1\):
\[\omega\triangleq\{(a_{j},\hat{a}_{j}):a_{j}\neq\hat{a}_{j}\wedge Q_{\hat{T}}( I_{j},\hat{I}_{j})\geq t\}, \tag{8}\]
where \(t\) is a threshold; and, for \(S2\):
\[\omega\triangleq\{(a_{j},\hat{a}_{j}):a_{j}=\hat{a}_{j}\wedge Q_{\hat{T}}(I_{ j},\hat{I}_{j})<t\}. \tag{9}\]
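Given per-image attack outcomes, the success rate of Eqs. 7–9 reduces to a simple count; a small helper might look as follows (the threshold \(t\) is discussed next).

```python
def success_rate(results, scenario, t):
    """results: list of (a, a_hat, q_t) tuples, one per attacked image; scenario: 'S1' or 'S2'."""
    if scenario == "S1":
        hits = [a != a_hat and q_t >= t for a, a_hat, q_t in results]
    else:  # S2
        hits = [a == a_hat and q_t < t for a, a_hat, q_t in results]
    return sum(hits) / len(results)
```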
We determined the value of \(t\) with a subjective human evaluation of the similarity of explanations pairs. We created nine groups for the explanations based on their similarity, such that \(G_{i}=\{(e,\hat{e}):Q_{\hat{T}}\in(1-0.05i,1-0.05(i-1)]\}\) with \(i\in\{1,2,\ldots,8\}\) and \(G_{9}=\{(e,\hat{e}):Q_{\hat{T}}\in(0,0.6]\}\). From each group, we randomly selected ten \((e,\hat{e})\) pairs that were rated on semantic similarity on a 5-level Likert scale: _not similar at all_; _a little similar_; _somehow similar_; _very similar_; and _they are the same_. We used majority voting to assign each pair of explanations to a similarity class. Likewise, we labeled each group with the most frequent similarity class of the questions within the group. Eleven people who did not see the data prior to the test rated the similarity and could change their rating before completing the test. The mapping between explanation groups and similarity classes is shown in Figure 3. We choose _somehow similar_ class as similarity breaking point. This similarity class maps to group G4, which corresponds to \(Q_{\hat{T}}<0.85\). Thus, we set the threshold \(t=0.85\).
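For completeness, the mapping from a similarity score to the groups used in the human study can be written directly from the group definition:

```python
def similarity_group(q_t: float) -> int:
    """Return i such that q_t falls in G_i: G_i = (1 - 0.05*i, 1 - 0.05*(i-1)] for i <= 8, G9 = (0, 0.6]."""
    for i in range(1, 9):
        if 1 - 0.05 * i < q_t <= 1 - 0.05 * (i - 1):
            return i
    return 9
```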
**Image quality.** We evaluate the quality of the adversarial images with MANIQA [70], a transformer-based no reference image quality assessment metric that won the NTIRE2022 NR-IQA challenge [19]. MANIQA scores \(\in\) [0,1] and the higher the score, the better the quality.
**Image colorfulness.** We also analyze the colorfulness [20] of the adversarial images and compare it with the colorfulness of original images in order to evaluate whether the colorization attacks generate images with color vividness in accordance with human perception. Given an RGB image, first the opponent color space representation is computed as:
\[rg=R-G,\quad yb=\frac{1}{2}(R+G)-B, \tag{10}\]
where \(R,G,B\) are the red, green, and blue channels. Next, the standard deviation \(\sigma\) and the mean pixel values \(\mu\) are calculated as:
\[\sigma=\sqrt{\sigma_{rg}^{2}+\sigma_{yb}^{2}},\quad\mu=\sqrt{\mu_{rg}^{2}+ \mu_{yb}^{2}}. \tag{11}\]
Finally, the colorfulness metric is defined as:
\[C=\sigma+0.3\mu. \tag{12}\]
The higher the \(C\) score, the more colorful the image.
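The colorfulness measure of Eqs. 10–12 is easy to reproduce in NumPy:

```python
import numpy as np

def colorfulness(img) -> float:
    """img: H x W x 3 RGB array; higher values mean a more colorful image (Eqs. 10-12)."""
    r, g, b = (img[..., c].astype(float) for c in range(3))
    rg = r - g                      # red-green opponent channel
    yb = 0.5 * (r + g) - b          # yellow-blue opponent channel
    sigma = np.sqrt(rg.std() ** 2 + yb.std() ** 2)
    mu = np.sqrt(rg.mean() ** 2 + yb.mean() ** 2)
    return float(sigma + 0.3 * mu)
```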
Figure 3: Mapping between explanation groups and similarity classes. KEY – C1: not similar at all, C2: a little similar, C3: somehow similar, C4: very similar, C5: they are the same. Explanations pairs with \(Q_{\hat{T}}>0.85\) (i.e. G1-G3) are rated as highly similar.
### Results and Discussion
**Success of the attacks.** Table 2 reports the success rates for all methods under both scenarios. Methods considering only the explanation similarity (i.e. CFX, FL-s) achieve the best success rate with \(S_{r}\) of 64.62% for CFX and 63.09% for FL-s for \(S1\), and \(S_{r}\) of 73.82 % for CFX and 77.53% for FL-s in \(S2\). CFX and FL-s apply the perturbation across wider areas of the image than LC-s, which perturbs small regions selected by combining over-segmentation and visual map information. Moreover, the perturbation is only limited by the semantic region information, which allows more intense modifications than in the case of the multi-objective setup where we use an image similarity metric, \(Q_{\hat{I}}\), to calibrate the perturbation. The \(S_{r}\) decreases as we focus on more localized areas (LC-s) and as we limit the freedom of the attack with the image similarity function (LC-m). This behavior could be caused by the noisiness and inaccuracy of the cross-attention visual maps, which may fail to accurately explain visually why the model made a certain decision. Since we use the visual maps to localize the areas to attack, inaccurate visual maps lead to selecting areas that are irrelevant for the prediction. These model-intrinsic visual attention maps require more investigation to fully assess their relevance for the localized attacks. We further notice a decrease in attack performance as we enforce an additional constraint on the optimization. On top of the area restriction we also control the applied perturbation using \(Q_{\hat{I}}\). Thus, the algorithm has to find a trade-off between explanation (dis)similarity and image similarity. The found solution may sometimes prioritize image similarity over explanation similarity leading to a decrease of the attack success rate. We also notice that the methods are more effective in \(S2\) achieving a \(S_{r}\) of up to 77.53% for FL-s. In this scenario, the selected alterable areas are more numerous since we focus on regions that are not highly attended by the explanation model, and thus in general the adversarial perturbation is applied on larger image areas than in the case of LC methods. Moreover, we observe that in the case of localized attacks, LC-s and LC-m, the visual attention maps relative to the activity prediction are less noisy and the attention is primarily focused in one area of the image, whereas for CFX more image regions are attended to, similarly to the original image. Localized attacks, for their nature, are more effective in altering the attention of the model.
**Image quality.** Both methods produce comparable results, however, the generated adversarial images have different visual characteristics and aesthetics. In general, the image filtering attacks produce images with more toned-down soft vintage looks while most of the images generated by CFX have vivid colors (see Figure 4). Table 3 reports the average MANIQA and standard deviation scores for the adversarial
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline Scenario & CFX & LC-s & FL-s & LC-m & FL-m \\ \hline \(S1\) & 64.62 & 51.33 & 63.09 & 43.47 & 47.62 \\ \(S2\) & 73.82 & 67.47 & 77.53 & 51.76 & 49.45 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Success rate (%) for the two scenarios. KEY – CFX: ColorFoolX, LC-s: localized filtering with a single objective, FL-s: full image filtering with a single objective, LC-m: localized filtering with multi-objective, FL-m: full image filtering with multi-objective.
Figure 4: Adversarial images generated for a clean image (top left). The visual explanation maps for the activity prediction are shown next to each image. For \(S1\) the images have a different activity and the textual explanations are similar. For \(S2\) the images have the same activity but different textual explanations. The MANIQA scores for the images are 0.69, 0.63, 0.70, 0.72, 0.64, from top to bottom, respectively.
images and their corresponding clean versions. The average MANIQA score varies from 0.65 for FL-s and LC-s to 0.68 for CFX, LC-m, and FL-m. As a reference, the average score on the clean images is 0.70. This suggests that the adversarial perturbations do not substantially degrade the image quality.
**Image colorfulness.** Figure 5 shows the distribution of colorfulness scores of adversarial images and their corresponding original versions. LC-m and FL-m generate images with colors most similar to the original images, whereas LC-s and FL-s tend to generate images with more faded colors. This indicates that the image similarity objective contributes toward the generation of more natural-looking images, as also shown by the SSIM scores in Figure 6. On the contrary, CFX generates very colorful images that diverge the most from the original distribution (Figure 5). However, images different from the original ones do not necessarily imply worse quality. Thus, a human subjective evaluation remains the best way to assess the perceptual realism, which we will address in future work.
**Ablation study.** We perform an ablation study to verify the contribution of each part of the multi-objective function of FL-m and LC-m in both attack success rate and SSIM values (Table 4). We start with a random approach, where we randomly perturb the images while only considering changing the activity prediction, disregarding explanation and image similarity. Then we consider each objective separately. For the image similarity objective, \(Q_{\hat{I}}\), the aim is to find the image that changes the activity prediction and has the highest SSIM. For the explanation objective, \(Q_{\hat{T}}\), the goal is to find an image that changes the activity prediction and has the highest explanation similarity. When using both objectives, the goal is to find an adversarial image that has a different activity prediction, high explanation similarity and high image similarity. We consider both full image filtering, FL, and localized image filtering, LC. In the case of the image similarity objective only, adversarial images have the highest SSIM scores but a low \(S_{r}\). However, the textual explanation objective achieves the highest \(S_{r}\) at the expense of the image similarity. This is the main justification for using the version with both objectives to find a trade-off between \(S_{r}\) and SSIM.
Figure 5: Colorfulness scores distribution for \(S1\) (top row) and for \(S2\) (bottom row). The adversarial examples generated with LC-m and FL-m have colors similar to the original images. In the case of CFX, the colors of adversarial examples diverge from the distribution of original images. The higher the score, the more colorful the image.
\begin{table}
\begin{tabular}{c c c c c} \hline \hline \multirow{2}{*}{Attack} & \multicolumn{2}{c}{\(S1\)} & \multicolumn{2}{c}{\(S2\)} \\ \cline{2-5} & Clean & Adversarial & Clean & Adversarial \\ \hline CFX &.70 \(\pm\).05 &.68 \(\pm\).06 &.70 \(\pm\).05 &.67 \(\pm\).06 \\ LC-s &.69 \(\pm\).05 &.66 \(\pm\).06 &.70 \(\pm\).04 &.65 \(\pm\).07 \\ FL-s &.70 \(\pm\).05 &.65 \(\pm\).07 &.70 \(\pm\).05 &.65 \(\pm\).06 \\ LC-m &.69 \(\pm\).05 &.67 \(\pm\).07 &.70 \(\pm\).04 &.68 \(\pm\).05 \\ FL-m &.70 \(\pm\).05 &.66 \(\pm\).07 &.70 \(\pm\).05 &.68 \(\pm\).05 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Average (and standard deviation) of MANIQA scores for the adversarial images and their corresponding clean images.
Figure 6 shows the generally large SSIM values of the adversarial examples obtained with LC-s, FL-s, LC-m, FL-m, and CFX for \(S1\). The results show that using SSIM for the optimization of the FL and LC is useful for generating images with higher similarity since the type of modification applied can alter the structural similarity. Among all, CFX generates images with the highest SSIM values because it does not directly target the lightness attribute in the images, which can degrade the structural similarity.
We also conducted the analysis for CFX to assess its attack capabilities with respect to the original version of ColorFool (CF) [52]. CFX searches for the adversarial example that satisfies two conditions, while CF only considers the activity prediction. The \(S_{r}\) is \(\sim\) 80% when we attack only the activity prediction. When considering also the explanation similarity, as in \(S1\), CF reaches \(S_{r}\) of 37.23 %, whereas CFX reaches \(S_{r}\) of 64.62 % (Eq. 7). Similarly, when using FL-s to attack only the activity, we observed that \(\sim\) 66% of images with different activity have also different explanations (\(Q_{\hat{T}}<0.85\)).
## 5 Conclusion
We presented a black-box attack on a self-rationalizing multimodal explanation system and evaluated the robustness of its prediction-explanation mechanism under two scenarios: changing the activity prediction while keeping the textual explanations similar and preserving the activity prediction while modifying the textual explanation. The adversarial examples are generated through semantic colorization or through image filtering. We showed that the prediction-explanation mechanism is vulnerable to black-box attacks that use only the final output of the target model. As future work, we will conduct a subjective evaluation of the adversarial examples to inform the attention mechanism. The proposed approach could be used to develop model-agnostic evaluation metrics to enable comparative and fair assessment of the faithfulness of different vision-language explanation systems.
|
2309.14401 | Derivative Based Extended Regular Expression Matching Supporting
Intersection, Complement and Lookarounds | Regular expressions are widely used in software. Various regular expression
engines support different combinations of extensions to classical regular
constructs such as Kleene star, concatenation, nondeterministic choice (union
in terms of match semantics). The extensions include e.g. anchors, lookarounds,
counters, backreferences. The properties of combinations of such extensions
have been subject of active recent research.
In the current paper we present a symbolic derivatives based approach to
finding matches to regular expressions that, in addition to the classical
regular constructs, also support complement, intersection and lookarounds (both
negative and positive lookaheads and lookbacks). The theory of computing
symbolic derivatives and determining nullability given an input string is
presented that shows that such a combination of extensions yields a match
semantics that corresponds to an effective Boolean algebra, which in turn opens
up possibilities of applying various Boolean logic rewrite rules to optimize
the search for matches.
In addition to the theoretical framework we present an implementation of the
combination of extensions to demonstrate the efficacy of the approach
accompanied with practical examples. | Ian Erik Varatalu, Margus Veanes, Juhan-Peep Ernits | 2023-09-25T17:48:20Z | http://arxiv.org/abs/2309.14401v1 | Derivative Based Extended Regular Expression Matching Supporting Intersection, Complement and Lookarounds
###### Abstract.
Regular expressions are widely used in software. Various regular expression engines support different combinations of extensions to classical regular constructs such as Kleene star, concatenation, nondeterministic choice (union in terms of match semantics). The extensions include e.g. anchors, lookarounds, counters, backreferences. The properties of combinations of such extensions have been subject of active recent research.
In the current paper we present a symbolic derivatives based approach to finding matches to regular expressions that, in addition to the classical regular constructs, also support complement, intersection and lookarounds (both negative and positive lookaheads and lookbacks). The theory of computing symbolic derivatives and determining nullability given an input string is presented that shows that such a combination of extensions yields a match semantics that corresponds to an effective Boolean algebra, which in turn opens up possibilities of applying various Boolean logic rewrite rules to optimize the search for matches.
In addition to the theoretical framework we present an implementation of the combination of extensions to demonstrate the efficacy of the approach accompanied with practical examples.
## 1. Introduction
Regular expressions are supported by standard libraries of all major programming languages and software development kits. They are essential in string manipulation, membership testing, text extraction tasks on strings, etc. In recent years symbolic-derivative-based regular expression matching has seen rapid development, as it is more predictable performance-wise than backtracking engines and modern optimized regular expression engines such as Hyperscan (Wang et al., 2019) and RE2 (Google, 2023), which all, in essence, rely on algorithms by (Glushkov, 1961) or (Thompson, 1968).
The extended regular expressions supported by backtracking engines often involve support for combinations of extensions, e.g. backreferences, lookarounds and balancing groups, yielding a family of languages that has been shown by (Carle and Narendran, 2009) not to be closed under intersection. Recent work by (Chida and Terauchi, 2023) has explored the expressive power of the combination of regular expressions with backreferences and positive lookaheads and devised a new flavor of memory automata to express their behavior.
We present a derivative-based symbolic extended regular expression engine, which supports _intersection_ (&), _complement_ (\(\sim\)) and _lookaround_ ((?=), (?!), (?<=) and (?<!)) operations, i.e. both negative and positive lookahead and lookback operations. To remind the reader, by example of a positive lookahead, the regex a(?=c) will match a in the string "ac" but nothing in "ab", i.e. lookarounds do not match text but positions in strings in a more general way than anchors (e.g. word boundary \(\backslash\)b) (Friedl, 2006). We show that the extended engine can be used to solve problems that are currently unfeasible to solve with existing regular expression engines. We also show that all context dependent regex anchors can be expressed using lookarounds. We have implemented the extended engine as a library for the .NET platform, which extends the existing .NET nonbacktracking engine (Moseley et al., 2023), but without preserving backtracking semantics.
We demonstrate the usefulness of the extended engine by examples of precise text extraction in a way that is to our knowledge not possible with standard regexes or involves a factorial blowup of the regex pattern. We make the case that intersection and complement are useful tools in concise definition of regular expressions, showing examples of incremental specification of regexes by use of complement, and exclusion of unwanted matches by use of negation.
Our work is motivated by the fact that the current state-of-the-art symbolic regex engine (Moseley et al., 2023) does not support intersection because its semantics is difficult to combine with all the other features in a meaningful way while maintaining backtracking semantics. It is also a problem of backwards compatibility to, for example, support & as a new operator.
### Contributions
This paper makes the following contributions:
* We build on the.NET 7 nonbacktracking regular expression engine (Moseley et al., 2023) to develop a theory for symbolic derivatives based regular expression matching that, in addition to _Kleene star, bounded loops_ (_counters_), _alternation_ (which in our case is synonymous to _union_) and _concatenation_, supports _intersection_, _complement_ and _both positive and negative lookaround operations_ in regexes. The result is a significant increment over (Moseley et al., 2023) in terms of Theorem 1, Theorem 2 and Theorem 3 accompanied by proofs.
* The theoretical results are accompanied by an implementation (Varatalu, 2023) of an extended regular expression engine that supports all Boolean operations on regexes, including intersection and complement using a natural syntax making it convenient to use.
* Due to the match semantics of such a regular expression language corresponding to an effective Boolean algebra, it is now possible to define and apply numerous Boolean rewrites and optimizations that yield better runtime performance. It is thus also possible to reason about the validity of such optimizations in terms of Boolean algebra.
* We show that the extended engine can be used to solve problems that are currently either impossible or infeasible to solve with standard regular expressions.
* We show that all regex anchors can be expressed using lookarounds as conjectured in (Moseley et al., 2023). We prove the conjecture together with providing an implementation as mentioned above.
Before proceeding to introducing the theory, let us look at some practical examples that illustrate how the proposed combination extensions can be used to establish the existence of matches but also locate them in strings.
## 2. Motivating Examples
In this section we explain the intuition behind the intersection (conjunction) and complement (negation) operators in regexes and give some motivating examples of how they can be useful. The examples are also available in the accompanying web application 1. The following examples are written in the syntax of the.NET regex engine, with the addition of &, \(\sim\), and the symbol \(\top\), to denote a predicate that is true on any character. The syntax is explained in more detail in Section 3.
Footnote 1: [https://ieviev.github.io/sbre/](https://ieviev.github.io/sbre/)
Example 2.1 (Password extraction).Consider the following regex
^(?=.*[a-z])(?=.*[A-Z])(?=.*\d)[a-zA-Z\d]{8,}$
that originates from a StackOverflow question 2 asking for a regex for validating a password. The regex in principle is an intersection of three lookaheads, each of which checks for a certain condition and a loop that checks for the length of the password.
Footnote 2: [http://stackoverflow.com/questions/19605150/regex-for-password-must-contain-at-least-eight-characters-at-least-one-number-a](http://stackoverflow.com/questions/19605150/regex-for-password-must-contain-at-least-eight-characters-at-least-one-number-a)
* (?=.*[a-z]) checks for at least one lowercase letter
* (?=.*[A-Z]) checks for at least one uppercase letter
* (?=.*\d) checks for at least one digit
* [a-zA-Z\d]{8,} checks for at least 8 characters
The regex is a good example of emulating conjunctive conditions using lookaheads for checking if there exists a match. However, the regex is not suitable for extracting a password from an arbitrary string, even when removing the anchors. The reason is that the regex engine will try to match the lookaheads independently, and then backtrack to the beginning of the match to try the next lookahead. This is an example of a limitation of what can be expressed in traditional regular expressions. And note that, due to the lack of support for lookarounds in RE2 [9], Hyperscan [21] and the nonbacktracking engine in .NET, to name a few optimized open-source regex libraries, it cannot be expressed concisely using an engine that does not support lookaheads.
Now consider the following regular expression, composed of intersections which we denote by the & symbol:
.*[a-z].*&.*[A-Z].*&.*\d.*&[a-zA-Z\d]{8,}

The regular expression has a number of differences from the previous one.
* The proposed regex engine will check for all four conditions at the same time, without any backtracking.
* Upon meeting one of the conditions, the rest of the alternation is matched with one less condition to check. For example, after matching the lowercase letter a at the beginning, the set [a-z] is satisfied, and the regex engine will match the rest of the input with the regex.*[A-Z].*&.*\d.*&[a-zA-Z\d]{7,}, by effectively removing the first condition from the regex, as the remaining.* will be subsumed by the other intersections. In Sections 4.2 and 5.1 we show how it is achieved.
* Because all checks are performed together at the same position, intersections allow precise text extraction in a single pass, which is also highly amenable to parallelization in the form of vectorization. Such optimizations could be beneficial for tasks such as scanning for potential credential leaks.
* The pattern does not currently check for the validity of the entire string, but it can be done by appending the intersection ^.*$ to the regex, which uses lookarounds to constrain the regex to match the entire line. Similarly, the regex could be constrained to match text between word borders by appending the intersection \b\S*\b to the regex.
Now consider modifying the regular expression to find potential password substrings while excluding certain ones, such as those having 2 consecutive digits or the word "password" in it. Such conditions cannot be checked effectively with positive lookaheads but can be achieved with negative ones. For example, the following regular expression uses negative lookaheads to match a password that does not have 2 consecutive digits:
.*[a-z].*&.*[A-Z].*&.*\d.*&(?!\S*\d\d)[a-zA-Z\d]{8,}

However, the range checked by the negative lookahead is not tied to the range of the potential match: if there is, for example, "@11" on the same line, after the occurrence of the potential match, it would be falsely considered a non-match, as \S* travels further than [a-zA-Z\d]{8,}, while the loop would stop at @.
A better way of writing the pattern would be to convert the negative lookahead (?!\S*\d\d) to (?![a-zA-Z\d]*\d\d), that is, adjust the loop in the negative lookahead to the entire following regular expression body, to guarantee that the constraints apply to the same range. This is where the use of complement (~) can help.
.*[a-z].*&.*[A-Z].*&.*\d.*&[a-zA-Z\d]{8,}&~(.*\d\d.*)
The regex above is equivalent to the previous one, but it is constrained to _the exact range of the potential match_, as \(\sim\)(.*\d\d.*) remains nullable exactly until two sequential digits are found, after which it turns the entire intersection of regexes into \(\bot\). The negation operator is explained in detail in Section 4.2.
Example 2.2 (Incremental specification of regexes through composition): Consider the following regex:
.*A.*&.*B.*&.*C.*
The regex matches any line that contains the substrings A, B and C in any order.
Now consider creating an equivalent regular expression without using intersections. The result would be similar to the following:
.*A.*B.*C.*|.*A.*C.*B.*|.*B.*A.*C.*|.*B.*C.*A.*|.*C.*A.*B.*|.*C.*B.*A.*
The regex is equivalent to the previous one, but it is much more verbose, as it requires enumerating all possible permutations of the substrings as separate alternatives. In such scenarios the use of intersection (&) becomes very helpful. The number of alternations required is equal to the factorial of the number of substrings.
Matching substrings A,B,C,D,E,F this way would require 720 alternations, while the equivalent regex with intersections would only require 6 intersections. Note that positive lookaheads can be used for such examples strictly as long as it is possible to define a clear lookup range for all the substrings, which is not always the case.
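The blow-up is easy to see programmatically; the short Python sketch below builds the intersection-free equivalent by enumerating all orderings of the required substrings (this script only illustrates the size of the pattern and is not part of the proposed engine).

```python
from itertools import permutations

def intersection_free(substrings):
    """Plain-regex equivalent of '.*A.*&.*B.*&...' via one alternative per ordering."""
    return "|".join(".*" + ".*".join(p) + ".*" for p in permutations(substrings))

print(intersection_free(["A", "B", "C"]))          # 6 alternatives
print(len(list(permutations("ABCDEF"))))           # 720 alternatives for six substrings
```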
Another important advantage of intersections is that they are not interlinked with each other, and can be added or removed independently. For example, if we wanted to add a fourth substring D, we would only need to add one more intersection, instead of adding 18 alternations and modifying the existing 6. And if we wanted to remove the substring C, we would only need to remove one intersection, instead of reducing and modifying all alternations down to 4.
Similarly, if we wanted exclude strings containing the substring D, we would only need append the negated intersection \(\sim\)(.*D.*), instead of modifying all existing alternations to exclude the substring D.
Such incremental aspect can be significant when dealing with large regexes, and can be useful for tasks such as machine learning based text extraction with regular expressions, where the regex is incrementally built by adding and removing positive or negative intersection terms. Such approach also improves the readability of regular expressions, as the regexes are built in a modular way.
Example 2.3 (Matching paragraphs of text containing multiple substrings): Another practical use case for the new features is matching paragraphs of text that meet certain conditions, as both intersection and negation introduce new ways of matching ranges of text with complex conditions. While it is possible to envisage sequential use of e.g. grep for positively and negatively matching lines of text on the input string when each paragraph is on a separate line, we generalize the example to paragraphs spanning over multiple lines and paragraphs being separated by double newlines, as is often done in texts, e.g. in Project Gutenberg books in plain text format. The example
generalizes to any substring start and end condition and provides a mechanism to also extract the match in addition to checking for the existence of it.
Note that, for the sake of improved readability, we use the \(\top\) predicate to denote the set of all characters, which can be thought of as a canonical representation of a character class that matches any character (e.g. [\s\S]).
For example, consider the following regular expression:
(?<=\n\n|\A)~(⊤*\n\n⊤*)(?=\n\n|\Z)&⊤*charity⊤*

The lookarounds together with the complement ~(⊤*\n\n⊤*) match a complete paragraph (a maximal span that does not contain a blank line), and the last intersection term requires the word "charity" to occur in it. Replacing that term with the implication ~(⊤*charity⊤*)|(⊤*honor⊤*) changes the condition.
The result now matches all paragraphs where there is no word "charity" or there exists a word "honor" (according to the definition of the standard Boolean implication operation). To locate the paragraph where "charity" is present, but "honor" is not, we need to take the complement of the implication and write
(?<=\n\n|\A)~(⊤*\n\n⊤*)(?=\n\n|\Z)&~(~(⊤*charity⊤*)|(⊤*honor⊤*))

which in turn can be rewritten by applying de Morgan's rule into

(?<=\n\n|\A)~(⊤*\n\n⊤*)(?=\n\n|\Z)&(⊤*charity⊤*)&~(⊤*honor⊤*)

The resulting regular expression matches the paragraph that contains the word "charity" but does not contain "honor".
It is possible to utilize the If-Then-Else conditionals (?(?=\(R_{1}\))(\(R_{2}\))|(\(R_{3}\))) in traditional regex engines supporting the construct. The explanation of If-Then-Else is nontrivial [20]: \(R_{1}\) acts as a test and if the test yields true then \(R_{2}\) gets applied, otherwise \(R_{3}\). \(R_{1}\) can either be a reference to a capturing group and test if _the group participated in a match_ (which is different from the actual match) or be a lookaround and yield a Boolean result at a particular location. As mentioned before, there are differences in the extent of the string in which lookarounds get evaluated and the extent of loops, as was shown in the Example 2.1. As a result, writing a conditional regular expression in terms of If-Then-Else corresponding to the above requirements is error prone and requires a great deal of care. We argue that the use of complement and intersection (and consequently a whole Boolean algebra) together with lookarounds simplifies the task significantly.
We now proceed to establishing the theory for regular expressions extended with lookarounds, complement and intersection.
## 3. Preliminaries
Here we introduce the notation and main concepts used in the paper. The general meta-notation, notation for denoting strings and locations is based on the approach taken in [14].
As a general meta-notation throughout the paper we write \(\mathit{lhs}\stackrel{{\mathrm{def}}}{{=}}\mathit{rhs}\) to let \(\mathit{lhs}\) be _equal by definition_ to \(\mathit{rhs}\). Let \(\mathbb{B}=\{\mathbf{false},\mathbf{true}\}\) denote Boolean values. Let \(\Sigma\) be a domain of _characters_. \(\langle x,y\rangle\) stands for pairs of elements and let \(\pi_{1}(\langle x,y\rangle)\stackrel{{\mathrm{def}}}{{=}}x\) and \(\pi_{2}(\langle x,y\rangle)\stackrel{{\mathrm{def}}}{{=}}y\).
_Strings._ Let \(\epsilon\) or "" denote the empty string and let \(\Sigma^{*}\) denote the set of all strings over \(\Sigma\). Let \(s\in\Sigma^{*}\). The length of \(s\) is denoted by \(|s|\). Individual characters and strings of length \(1\) are not distinguished. Let \(i\) and \(l\) be nonnegative integers such that \(i+l\leq|s|\). Then \(s_{i,l}\) denotes the substring of \(s\) that starts from index \(i\) and has length \(l\), where the first character has index \(0\). In particular \(s_{i,0}=\epsilon\). For \(0\leq i<|s|\) let \(s_{i}\stackrel{{\mathrm{def}}}{{=}}s_{i,1}\). Let also \(s_{-1}=s_{|s|}\stackrel{{\mathrm{def}}}{{=}}\epsilon\). E.g., "abcde"\({}_{1,3}=\) "bcd" and "abcde"\({}_{5,0}=\epsilon\). \(s^{r}\) denotes the _reverse_ of \(s\), so that \(s^{r}_{i}=s_{|s|-1-i}\) for \(0\leq i<|s|\).
_Locations._ Let \(s\) be a string. A _location in_ \(s\) is a pair \(\langle s,i\rangle\), where \(-1\leq i\leq|s|\). We use \(s\langle i\rangle\stackrel{{\mathrm{def}}}{{=}}\langle s,i\rangle\) as a dedicated notation for locations, where \(s\) is the _string_ and \(i\) the _position_ of the location. Since \(s\langle i\rangle\) is a pair, note also that \(\pi_{1}(s\langle i\rangle)=s\) and \(\pi_{2}(s\langle i\rangle)=i\). If \(x\) and \(y\) are locations, then \(x<y\) iff \(\pi_{2}(x)<\pi_{2}(y)\). A location \(s\langle i\rangle\) is _valid_ if \(0\leq i\leq|s|\). A location \(s\langle i\rangle\) is called _final_ if \(i=|s|\) and _initial_ if \(i=0\). Let \(\mathit{Final}(s\langle i\rangle)\stackrel{{\mathrm{def}}}{{=}}i=|s|\) and \(\mathit{Initial}(s\langle i\rangle)\stackrel{{\mathrm{def}}}{{=}}i=0\). We let \(\lightning\stackrel{{\mathrm{def}}}{{=}}\epsilon\langle-1\rangle\) that is going to be used to represent _match failure_ and in general \(s\langle-1\rangle\) is used as a _pre-initial_ location. The _reverse_ \(s\langle i\rangle^{r}\) of a valid location \(s\langle i\rangle\) in \(s\) is the valid location \(s^{r}\langle|s|-i\rangle\) in \(s^{r}\). For example, the reverse of the final location in \(s\) is the initial location in \(s^{r}\). When working with sets \(S\) of locations over the same string we let \(\max(S)\) (\(\min(S)\)) denote the maximum (minimum) location in the set according to the location order above. In this context we also let \(\max(\emptyset)=\min(\emptyset)\stackrel{{\mathrm{def}}}{{=}}\lightning\) and \(\lightning^{r}\stackrel{{\mathrm{def}}}{{=}}\lightning\).
Valid locations in a string \(s\) can be illustrated as the \(|s|+1\) positions before, between, and after the characters of \(s\); for example, "abc" has the valid locations "abc"\(\langle 0\rangle\), "abc"\(\langle 1\rangle\), "abc"\(\langle 2\rangle\) and "abc"\(\langle 3\rangle\).
The regexes \(\mathcal{R}\) considered here are built from character predicates \(\psi\in\Psi\), the empty-match regex (), alternation \(R\mid S\), intersection \(R\,\&\,S\), concatenation \(R\cdot S\), counted loops \(R\{m,n\}\), complement \(\mathop{\sim}R\), and the four lookarounds (?=\(R\)), (?<=\(R\)), (?!\(R\)) and (?<!\(R\)).
\(R\) is the _body_, \(m\) the _lower bound_, and \(n\) the _upper bound_ of the loop. If \(n=\infty\) then the loop is _infinite_ else _finite_. We let \(R\{0,0\}\stackrel{{\text{\tiny{\tt{def}}}}}{{=}}()\) for convenience in recursive definitions. We use the common abbreviations \(R\star\) for \(R\{0,\infty\}\), \(R\)+ for \(R\{1,\infty\}\).
The regex denoting _nothing_ is just the predicate \(\bot\). The concatenation operator \(\cdot\) is usually left implicit, using juxtaposition. The expressions in the second row are called _lookarounds_: (?=\(R\)) is _lookahead_, (?<=\(R\)) is _lookback_, (?!\(R\)) is _negative lookahead_, and (?<!\(R\)) is _negative lookback_.
The _reverse_\(R^{r}\) of \(R\in\mathcal{R}\) is defined as follows:
\[\begin{array}{rclcrcl}
\psi^{r}&\stackrel{{\mathrm{def}}}{{=}}&\psi&\qquad&(?=R)^{r}&\stackrel{{\mathrm{def}}}{{=}}&(?<=R^{r})\\
()^{r}&\stackrel{{\mathrm{def}}}{{=}}&()&&(?<=R)^{r}&\stackrel{{\mathrm{def}}}{{=}}&(?=R^{r})\\
(R\mid S)^{r}&\stackrel{{\mathrm{def}}}{{=}}&R^{r}\mid S^{r}&&(?!R)^{r}&\stackrel{{\mathrm{def}}}{{=}}&(?<!R^{r})\\
(R\,\&\,S)^{r}&\stackrel{{\mathrm{def}}}{{=}}&R^{r}\,\&\,S^{r}&&(?<!R)^{r}&\stackrel{{\mathrm{def}}}{{=}}&(?!R^{r})\\
(R\cdot S)^{r}&\stackrel{{\mathrm{def}}}{{=}}&S^{r}\cdot R^{r}\\
R\{m,n\}^{r}&\stackrel{{\mathrm{def}}}{{=}}&R^{r}\{m,n\}\\
(\mathop{\sim}R)^{r}&\stackrel{{\mathrm{def}}}{{=}}&\mathop{\sim}(R^{r})
\end{array}\]
Note that the reverse \(R^{r}\) of a regex \(R\) has exactly the same size as \(R\). The size of a regex, \(|R|\), is defined recursively as the number of subexpressions, where each predicate \(\psi\in\Psi\) is considered to have size one, i.e., the actual representation size of predicates is irrelevant in this context. It also follows by induction from the definition that \((R^{r})^{r}=R\).
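As an illustration of how mechanical the reversal is, here is a minimal Python sketch of structural reversal over a toy regex AST mirroring the table above; the tuple encoding of nodes is our own assumption, not the data structures of the implementation.

```python
# nodes: ('pred', p), ('eps',), ('bot',), ('or', r, s), ('and', r, s),
#        ('cat', r, s), ('loop', r, m, n), ('not', r), ('look', ahead, neg, r)
def rev(r):
    tag = r[0]
    if tag in ('pred', 'eps', 'bot'):
        return r                                   # psi^r = psi, ()^r = ()
    if tag in ('or', 'and'):                       # (R|S)^r, (R&S)^r
        return (tag, rev(r[1]), rev(r[2]))
    if tag == 'cat':                               # (R.S)^r = S^r . R^r
        return ('cat', rev(r[2]), rev(r[1]))
    if tag == 'loop':                              # R{m,n}^r = R^r{m,n}
        return ('loop', rev(r[1]), r[2], r[3])
    if tag == 'not':                               # (~R)^r = ~(R^r)
        return ('not', rev(r[1]))
    if tag == 'look':                              # lookahead <-> lookback
        ahead, neg, body = r[1], r[2], r[3]
        return ('look', not ahead, neg, rev(body))

# (?=ab)^r is (?<=ba): reversal swaps the direction and reverses the body
la = ('look', True, False, ('cat', ('pred', 'a'), ('pred', 'b')))
print(rev(la))   # ('look', False, False, ('cat', ('pred', 'b'), ('pred', 'a')))
```

As the prose above notes, this transformation preserves the size of the expression.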
### Derivatives
In the following definition let \(x=s\langle i\rangle\) be a valid nonfinal location. The nullability test \(\mathit{Null}_{x}(R)\) and the location derivative \(\partial_{x}(R)\) are defined by mutual recursion over the structure of \(R\); intuitively, \(\mathit{Null}_{x}(R)\) holds if \(R\) can match the empty string at location \(x\), and \(\partial_{x}(R)\) is the regex that remains to be matched from location \(x+1\) after reading the character \(s_{i}\) at \(x\). Based on these, \(\mathit{MatchEnd}(x,R)\) computes the latest match end location reachable from a valid location \(x\), or \(\lightning\) if none exists. Note that \(\max(x,\lightning)=\max(\lightning,x)=x\).
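The full definition is lengthy, so the following deliberately simplified, classical-style Python sketch of derivatives and of a latest-match-end computation for a tiny fragment (predicates, union, concatenation, unbounded loops) is only meant to fix intuition; it ignores location-dependence, intersection, complement, bounded loops and lookarounds, and all names are ours.

```python
# nodes: ('eps',), ('bot',), ('pred', f), ('or', r, s), ('cat', r, s), ('star', r)
def nullable(r):
    tag = r[0]
    if tag in ('eps', 'star'):
        return True
    if tag == 'or':
        return nullable(r[1]) or nullable(r[2])
    if tag == 'cat':
        return nullable(r[1]) and nullable(r[2])
    return False                                   # 'pred', 'bot'

def deriv(r, c):
    tag = r[0]
    if tag in ('eps', 'bot'):
        return ('bot',)
    if tag == 'pred':
        return ('eps',) if r[1](c) else ('bot',)
    if tag == 'or':
        return ('or', deriv(r[1], c), deriv(r[2], c))
    if tag == 'cat':
        left = ('cat', deriv(r[1], c), r[2])
        return ('or', left, deriv(r[2], c)) if nullable(r[1]) else left
    if tag == 'star':
        return ('cat', deriv(r[1], c), r)

def match_end(s, i, r):
    # latest j >= i such that r matches s[i:j], or None if there is none
    best = i if nullable(r) else None
    for j in range(i, len(s)):
        r = deriv(r, s[j])
        if nullable(r):
            best = j + 1
    return best

A = ('pred', lambda c: c == 'a')
B = ('pred', lambda c: c == 'b')
print(match_end("aab#", 0, ('cat', ('star', A), B)))   # 3
```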
* \(x\xrightarrow{R\,\&\,S}y\stackrel{{(x<y)}}{{\Leftrightarrow}}x+1\xrightarrow{\partial_{x}(R\,\&\,S)}y\stackrel{{\text{(def)}}}{{\Leftrightarrow}}x+1\xrightarrow{\partial_{x}(R)\,\&\,\partial_{x}(S)}y\stackrel{{\text{(IH)}}}{{\Leftrightarrow}}x+1\xrightarrow{\partial_{x}(R)}y\ \mathbf{and}\ x+1\xrightarrow{\partial_{x}(S)}y\stackrel{{(x<y)}}{{\Leftrightarrow}}x\xrightarrow{R}y\ \mathbf{and}\ x\xrightarrow{S}y\)
* \(x\xrightarrow{\mathop{\sim}R}y\stackrel{{(x<y)}}{{\Leftrightarrow}}x+1\xrightarrow{\partial_{x}(\mathop{\sim}R)}y\stackrel{{\text{(def)}}}{{\Leftrightarrow}}x+1\xrightarrow{\mathop{\sim}\partial_{x}(R)}y\stackrel{{\text{(IH)}}}{{\Leftrightarrow}}\mathbf{not}(x+1\xrightarrow{\partial_{x}(R)}y)\stackrel{{(x<y)}}{{\Leftrightarrow}}\mathbf{not}(x\xrightarrow{R}y)\)
_Proof of 1(7)._ By induction over \(\pi_{2}(y)-\pi_{2}(x)\) with \(y\) fixed. If \(x=y\) then both sides reduce to \(\mathit{Null}_{x}(R)\ \mathbf{and}\ \mathit{Null}_{x}(S)\). If \(x<y\) then, using the definition of \(\partial_{x}(R\cdot S)\) and the IH for the smaller distance,
\[x\xrightarrow{R\cdot S}y\Leftrightarrow x+1\xrightarrow{\partial_{x}(R\cdot S)}y\Leftrightarrow x+1\xrightarrow{\partial_{x}(R)\cdot S}y\ \mathbf{or}\ (\mathit{Null}_{x}(R)\ \mathbf{and}\ x+1\xrightarrow{\partial_{x}(S)}y)\Leftrightarrow\exists z\,(x\xrightarrow{R}z\xrightarrow{S}y)\]
_Proof of 1(8)._ Let \(L=R\{m,n\}\) where \(m>0\). We prove the statement by proving (for all \(m\) and \(n\))
\[x\xrightarrow{L},\,y\Leftrightarrow x\xrightarrow{R\cdot(L-1)},\,y\]
The statement holds trivially when \(m=n=1\). Assume \(n>1\).
We prove the statement by induction over \(\pi_{2}(y)-\pi_{2}(x)\). In each induction step we also get, by using the IH, that \(R(L-1)\equiv R((L-2)R)\equiv(R(L-2))R\equiv(L-1)R\), where \((L-2)\) is well-defined because \(n>1\).
Consider first the case \(\mathit{Null}_{\nu}(R)=\mathbf{false}\). If \(\mathit{Null}_{x}(R)=\mathbf{true}\) then \(\partial_{x}(L)=\partial_{x}(R\cdot(L-1))\) by definition. If \(\mathit{Null}_{x}(R)=\mathbf{false}\) then \(\partial_{x}(L)=\partial_{x}(R)\cdot(L-1)\) but then also \(\partial_{x}(R\cdot(L-1))=\partial_{x}(R)\cdot(L-1)\). So, in either case it follows that \(L\equiv R\cdot(L-1)\) without induction.
The remaining case is \(\mathit{Null}_{\nu}(R)=\mathbf{true}\). The base case of \(x=y\) follows directly because \(x=y\) iff \(\mathit{Null}_{x}(R)=\mathbf{true}\) iff \(\mathit{Null}_{x}(L)\) iff \(\mathit{Null}_{x}(R\cdot(L-1))\).
For the induction case let \(x<y\). If \(m>1\) then let \(L^{\prime}=(L-1)\) else if \(m=1\), using that \(R\{0,k\}\equiv R\{1,k\}\) when \(\mathit{Null}_{\nu}(R)=\mathbf{true}\), let \(L^{\prime}=R\{1,n-1\}\) in order to maintain that the lower bound remains positive. Observe that the induction is over \(\pi_{2}(y)-\pi_{2}(x)\) and in the induction steps there exists \(z\geq x+1\) such that \(z\xrightarrow{L^{\prime}}y\) where \(\pi_{2}(y)-\pi_{2}(z)<\pi_{2}(y)-\pi_{2}(x)\).
\[\begin{array}{r@{\quad}l@{\quad}l}x\xrightarrow{R\cdot(L-1)},\,y\quad& \Leftrightarrow&x+1\xrightarrow{\partial_{x}(R\cdot L^{\prime})},\,y\\ \quad&\Leftrightarrow&x+1\xrightarrow{\partial_{x}(R)\cdot L^{\prime}}\,\,y \quad&\\ \quad&\Leftrightarrow&x+1\xrightarrow{\partial_{x}(R)\cdot L^{\prime}}\,\,y \quad&x+1\xrightarrow{\partial_{x}(L^{\prime})},\,y\\ \quad&\Leftrightarrow&x+1\xrightarrow{\partial_{x}(R)\cdot(L-2)\cdot R},\,y \quad&x+1\xrightarrow{\partial_{x}(L^{\prime})},\,y\\ \quad&\Leftrightarrow&x+1\xrightarrow{\partial_{x}(L^{\prime})\cdot R},\,y \quad&x+1\xrightarrow{\partial_{x}(L^{\prime})},\,y\\ \quad&\Leftrightarrow&x+1\xrightarrow{\partial_{x}(L^{\prime})\cdot R},\,y \quad&x+1\xrightarrow{\partial_{x}(R)\cdot(L-2)\cdot R},\,y\\ \quad&\Leftrightarrow&x+1\xrightarrow{\partial_{x}(R)\cdot(L^{\prime})},\,y \quad&\\ \quad&\Leftrightarrow&x\xrightarrow{L},\,y\end{array}\]
_Proof of 1(9)._ The case \(\mathit{Null}_{\nu}(R)=\mathbf{true}\) is similar to the proof of 1(8); observe that in this case \(R\equiv R\mid()\). Assume \(\mathit{Null}_{\nu}(R)=\mathbf{false}\). Then, whether \(\mathit{Null}_{x}(R)=\mathbf{true}\) or \(\mathit{Null}_{x}(R)=\mathbf{false}\), the proof follows directly because in either case \(\partial_{x}(R\{0,n\})=\partial_{x}(R\cdot R\{0,n-1\})\):
\[x\xrightarrow{R\{0,n\}}y\Leftrightarrow x+1\xrightarrow{\partial_{x}(R\cdot R\{0,n-1\})}y\Leftrightarrow x\xrightarrow{R\cdot R\{0,n-1\}}y\Leftrightarrow x\xrightarrow{R\cdot R\{0,n-1\}\mid()}y\]
where \(R\cdot R\{0,n-1\}\equiv R\{1,n\}\) by 1(8). The theorem follows by the induction principle over location distances.
It is possible to prove Theorem 1 by induction over \(R\) but the proof becomes much more involved, while using induction over location distances takes full advantage of the definition of location derivatives. We get the following key characterization of \(\mathcal{R}\) as a corollary of Theorem 1. Let
\[\begin{array}{rcl}
\mathcal{U}&\stackrel{{\mathrm{def}}}{{=}}&\{\langle s\langle i\rangle,s\langle j\rangle\rangle\mid s\in\Sigma^{*},0\leq i\leq j\leq|s|\}\\
\langle x,y\rangle\models R&\stackrel{{\mathrm{def}}}{{=}}&x\xrightarrow{R}y\quad\text{for }\langle x,y\rangle\in\mathcal{U}\\
\mathcal{M}(R)&\stackrel{{\mathrm{def}}}{{=}}&\{M\in\mathcal{U}\mid M\models R\}
\end{array}\]
We say that \(M\in\mathcal{U}\) is a _match_ of \(R\) if \(M\models R\), and \(\mathcal{M}(R)\) is called the _match set_ or _match semantics_ of \(R\). Observe that \(R\equiv S\Leftrightarrow\mathcal{M}(R)=\mathcal{M}(S)\).
**Corollary 1**.: \(\mathfrak{M}=(\mathcal{U},\mathcal{R},\mathcal{M},\bot,\top\star,\mid,\&,\,\,\, \sim)\) _is an effective Boolean algebra over \(\mathcal{U}\)._
Proof.: Recall that the _complement_ of \(X\subseteq\mathcal{U}\) in \(\mathfrak{M}\) is \(\mathcal{U}\setminus X\). By Theorem 1, \(\mathcal{M}(R\mid S)=\mathcal{M}(R)\cup\mathcal{M}(S)\), \(\mathcal{M}(R\,\&\,S)=\mathcal{M}(R)\cap\mathcal{M}(S)\) and \(\mathcal{M}(\mathop{\sim}R)=\mathcal{U}\setminus\mathcal{M}(R)\), while clearly \(\mathcal{M}(\bot)=\emptyset\) and \(\mathcal{M}(\top\star)=\mathcal{U}\).
\begin{tabular}{l l l l}
 & Name & Definition & Effective meaning \\ \hline
\A & start & (?<!\(\top\)) & _initial_ location of input (\(i=0\)) \\
\z & end & (?!\(\top\)) & _final_ location of input (\(i=|s|\)) \\
^ & start-of-line & (\A|(?<=\n)) & \(i=0\) or \(s_{i-1}=\)\n \\
$ & end-of-line & (\z|(?=\n)) & \(i=|s|\) or \(s_{i}=\)\n \\
\Z & end-or-last-$ & (\z|(?=\n\z)) & \(i=|s|\) or \(i=|s|-1\) and \(s_{i}=\)\n \\
\b & word-border & (?<=\w)(?!\w)|(?<!\w)(?=\w) & \(s_{i-1}\in[\![\psi_{\backslash\mathtt{w}}]\!]\Leftrightarrow s_{i}\notin[\![\psi_{\backslash\mathtt{w}}]\!]\) \\
\B & non-word-border & (?<=\w)(?=\w)|(?<!\w)(?!\w) & \(s_{i-1}\in[\![\psi_{\backslash\mathtt{w}}]\!]\Leftrightarrow s_{i}\in[\![\psi_{\backslash\mathtt{w}}]\!]\) \\ \hline
\end{tabular}
Table 1. Anchors defined in terms of lookarounds.
_Induction case \(R=X\&Y\)._ Then
\[x\xrightarrow{R}y\stackrel{{\text{Thm 1}}}{{\Leftrightarrow}}x\xrightarrow{X}y\ \mathbf{and}\ x\xrightarrow{Y}y\stackrel{{\text{(IH)}}}{{\Leftrightarrow}}y^{r}\xrightarrow{X^{r}}x^{r}\ \mathbf{and}\ y^{r}\xrightarrow{Y^{r}}x^{r}\stackrel{{\text{Thm 1}}}{{\Leftrightarrow}}y^{r}\xrightarrow{X^{r}\,\&\,Y^{r}}x^{r}\Leftrightarrow y^{r}\xrightarrow{R^{r}}x^{r}\]
In other words, (?!\w) \(\not\equiv\) (?=\W). In general, negative lookarounds of predicates _cannot be converted_ into positive lookarounds by delegating negation to the underlying character algebra \(\mathcal{A}\) by using \(\neg\) of \(\mathcal{A}\).
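Both observations can be checked directly in any lookaround-capable engine; a small sketch using Python's re module (chosen here purely for illustration) also confirms the lookaround definition of \b from Table 1:

```python
import re

s = "ab, cd"
# \b expressed with lookarounds as in Table 1
border = r"(?<=\w)(?!\w)|(?<!\w)(?=\w)"
assert [m.start() for m in re.finditer(border, s)] == \
       [m.start() for m in re.finditer(r"\b", s)]        # positions 0, 2, 4, 6

# (?!\w) and (?=\W) are not equivalent: at the end of the input the negative
# lookahead succeeds while the positive one has no character left to test
print(re.search(r"(?!\w)", "abc"))   # zero-width match at position 3
print(re.search(r"(?=\W)", "abc"))   # None
```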
It is also important to note that _complement_ (\(\neg\)) of a positive lookahead does not correspond to a negative lookahead (and vice versa). Below we show that \(\neg\)\b \(\not\equiv\) \B, i.e. they are not complements of each other in match semantics. Thus, negations of combinations of lookarounds must be used with caution as their semantics needs careful analysis. In fact, \(\neg\)\b \(\equiv\) \B\(\mid\top\)+, as shown below.
While it follows from the definitions that, for all valid locations \(x\), \(\mathit{Null}_{x}((?!R))\Leftrightarrow\mathbf{not}\ \mathit{Null}_{x}((?=R))\), the _derivatives_ of (?!\(R\)) and \(\neg\)(?=\(R\)) are fundamentally different: they are _complements_ of each other. In particular,
\[\begin{array}{rcl}\partial_{x}((?!R))&=&\bot\\ \partial_{x}(\neg(?=\!R))&=&\neg\partial_{x}((?=\!R))=\neg\bot\equiv\top\star \end{array}\]
The same applies to lookbacks. The derivative of a lookaround is always \(\bot\) because lookarounds are context conditions for locations with no forward progress on their own, and essentially operate as blocking conditions inside concatenations. It follows from the definition of derivatives of concatenations that, if \(\ell\) is a lookaround then
\[\partial_{x}(\ell\cdot R)\equiv\begin{cases}\partial_{x}(R),&\text{if }\mathit{Null}_{x}(\ell);\\ \bot,&\text{otherwise.}\end{cases}\]
A natural question that arises, and that we are investigating, is whether there exists a general way to encode any negative lookaround (?!\(R\)) (or (?<!\(R\))) by an equivalent positive lookaround (?=\(R^{\prime}\)) (or (?<=\(R^{\prime}\))), by converting \(R\in\mathcal{R}\) into a regex \(R^{\prime}\in\mathcal{R}\) using \(\neg\) in some manner. A possible algorithmic advantage is that this transformation may enable further transformations. For example, (?!\(R\))\(\cdot\)(?=\(S\)) could then be transformed into (?=\(R^{\prime}\cdot\top\star\ \&\ S\cdot\top\star\)).
The anchor denoting the end of the previous match, \G [21], is a non-traditional anchor as it relies on metadata rather than on regular expressions. While it could be supported by adding implementation details, we have decided to omit it as we have not observed it being pervasively used in real-life applications. The anchor \a in [19], which is needed internally for reversal, can be defined here as (\Z)\({}^{r}\); \a is not supported in the concrete syntax of .NET regular expressions.
### Top-Level Match Algorithm
The top-level match algorithm \(\mathit{llMatch}(s,R)\) takes a string \(s\in\Sigma^{*}\) and a regex \(R\in\mathcal{R}\), and either returns \(\lightning\) if no match of \(R\) exists in \(s\), or else returns a match \(\langle s\langle i\rangle,s\langle j\rangle\rangle\in\mathcal{M}(R)\) such that \(i\) is minimal (leftmost) and then \(j\) is maximal for the given \(i\) (in particular all loops are eager). Semantically the implementation corresponds to the following algorithm, which therefore provides the leftmost and longest match, also known as POSIX semantics,
\[\mathit{llMatch}(s,R)\quad\stackrel{{\mathrm{def}}}{{=}}\quad\mathbf{let}\ x=\mathit{MatchEnd}(s^{r}\langle 0\rangle,\top\star R^{r})\ \mathbf{in}\ \begin{cases}\lightning,&\mathbf{if}\ x=\lightning;\\ \langle x^{r},\mathit{MatchEnd}(x^{r},R)\rangle,&\mathbf{otherwise}.\end{cases}\]
In the actual implementation the first phase initially locates the end location \(s\langle j\rangle\) by simulating a PCRE "lazy" loop \(\top\)\(\star\)? concatenated with \(R\), and subsequently locates the start location backwards from \(s\langle j\rangle\) by using \(R^{\prime}\). It is typically more economical to start the search from the start of the string rather than backwards from the end of the string, although both variants are possible.
Example 4.1 ().: Consider \(s=\)"\(\#\#\)abacar\(\,\)abacar\(\,\)aba\(\#\)" and \(R=\)abacar\(\,\)aba. So \(R^{r}=\)abaracaba and \(s^{r}=\)"\(\#\)abaracabaracaba\(\#\)". Then \(\mathit{MatchEnd}(s^{r}\langle 0\rangle,\top\star R^{r})=s^{r}\langle 17\rangle=s\langle|s|-17\rangle^{r}=s\langle 3\rangle^{r}\) and \(\mathit{MatchEnd}(s\langle 3\rangle,R)=s\langle 12\rangle\). So \(\mathit{llMatch}(s,R)=\langle s\langle 3\rangle,s\langle 12\rangle\rangle\). Observe that \(\mathit{MatchEnd}(s\langle 0\rangle,\top\star R)=s\langle 18\rangle\) would be the end location of the _second_ match. Therefore, when starting with the end location search, the initial \(\star\)-loop must be treated or simulated as a lazy loop.
If we instead let \(R=\)abacar\(\,\)aba\(\backslash\)b so that the match must end at a word border, then in \(R^{r}=\)\(\backslash\)babaracaba the match must start at a word border. In that case \(\mathit{MatchEnd}(s^{r}\langle 0\rangle,\top\star R^{r})=s^{r}\langle 11\rangle=s\langle 9\rangle^{r}\) because \(\#\) is not a word-letter, and then \(\mathit{llMatch}(s,R)=\langle s\langle 9\rangle,s\langle 18\rangle\rangle\).
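A much-simplified Python sketch of the two-pass idea follows; it uses re as a stand-in for MatchEnd, so it only coincides with the POSIX result for simple patterns, and the input string is a hypothetical one in the spirit of Example 4.1.

```python
import re

def ll_match(s: str, r: str, r_rev: str):
    # Phase 1: locate some match end by scanning forward (stand-in for
    # simulating a lazy T*? prefix loop concatenated with R).
    m = re.search(r, s)
    if m is None:
        return None
    end = m.end()
    # Phase 2: run the reversed regex over the reversed prefix to find
    # the match start location (stand-in for using R^r backwards).
    start = end - re.match(r_rev, s[:end][::-1]).end()
    # Phase 3: extend from the start to the longest end (POSIX semantics).
    return start, start + re.match(r, s[start:]).end()

print(ll_match("###abacarabacaraba##", "abacaraba", "abacaraba"[::-1]))  # (3, 12)
```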
The _POSIX match for \(R\) in \(s\)_, if a match exists, is a pair \(\langle s\langle i\rangle,s\langle j\rangle\rangle\) where \(s\langle i\rangle\) is the minimal location in \(s\) such that \(\exists y:s\langle i\rangle\xrightarrow{R}y\) and \(s\langle j\rangle\) is the maximal location in \(s\) such that \(s\langle i\rangle\xrightarrow{R}s\langle j\rangle\). The following is the correctness theorem of _llMatch_.
Theorem 3 (Match).: _For \(s\in\Sigma^{*}\) and \(R\in\mathcal{R}\) the following statements hold:_
1. \(\mathit{llMatch}(s,R)=\lightning\) if and only if there exists no match of \(R\) in \(s\);
2. if \(\mathit{llMatch}(s,R)\neq\lightning\) then \(\mathit{llMatch}(s,R)\) is the POSIX match for \(R\) in \(s\).
The core parser was taken directly from the .NET runtime, but was modified to read the symbols & and \(\sim\) as intersection and negation respectively. Fortunately the escaped variants '\&' and '\~' were not assigned to any regex construct, and all existing regex patterns can be used by escaping these two characters.
Another important component we directly took from the .NET runtime is the alphabet compression of the nonbacktracking engine that builds the minimal number of predicates on the underlying character set by applying appropriate Boolean combinations. For example, all input characters in the regex ab can be represented with just 3 predicates: \(a\), \(b\) and [^\(ab\)]. It is an essential preprocessing step for such a large alphabet (16-bit Unicode Basic Multilingual Plane in our case), which allows us to represent the input predicates as bits, and then use bitwise operations to check if a character is in the set of characters represented by a predicate.
The engine itself is implemented in mostly high-level F#, without caching any derivatives, which does not give it optimal performance characteristics at the time of writing. The aim is rather to provide a working prototype that, despite having a much higher number of memory allocations than, e.g., the nonbacktracking or backtracking alternatives of .NET 7, is still capable of solving problems that neither the backtracking nor nonbacktracking variant can currently solve.
To make the modifications possible, the System.Text.RegularExpressions library was copied to another namespace and the visibility of the library was changed from internal to public. The only other meaningful modification to the library is the extension of the parser to parse negation and intersection symbols appropriately, as described above.
The concatenation and \(\epsilon\)-nodes were implemented as a singly linked list of regex nodes, which allows traversal to the next node through a tail pointer alone, without any further bookkeeping. There is never any need to traverse a concatenation backwards, so we can take full advantage of what a concatenation fundamentally is: a singly linked list. And () is just an empty concatenation, i.e. an empty list.
### Rewrite Rules and Subsumption
Our system implements a number of regex rewrite rules, which are essential for the efficiency of the implementation. Figure 1 illustrates the basic rewrite rules that are always applied when regular expressions are constructed. Intersection and union are implemented as commutative, associative and idempotent operators, so changing the order of their arguments does not change the result.
There are many further derived rules that can be beneficial in reducing the state space. Unions and intersections are both implemented by sets. If a union contains a regex \(S\), such as a predicate \(\psi\), that is trivially subsumed by another regex \(R\), such as \(\psi\star\), then \(S\) is removed from the union. This is an instance of the loop rule in Figure 1 that rewrites \(\psi\{0,\infty\}\,|\,\psi\{1,1\}\) to \(\psi\{0,\infty\}\).
A further simplification rule for unions is that if a union contains a regex \(\psi\star\) and all the other alternatives only refer to elements from \(\llbracket\psi\rrbracket\) then the union reduces to \(\psi\star\). This rule rewrites any union such as (.*ab.*|.*) to just .* (recall that . \(\equiv\) [^\n]), which significantly reduces the number of alternations in total. Such a rewrite could potentially be extended even further, e.g., rewriting (.*ab.*|.*b.*) to .*b.*, but the detection overhead is more complex for rewrites like this and has not been evaluated yet.
### POSIX Semantics
One key difference from the .NET 7 nonbacktracking engine is that we treat both alternations and intersections as sets, which makes the semantics different from the backtracking one. For example, matching the regular expression (a\(\mid\)ab)* on the string "abab" with the current .NET engines results in the prefix "a", as the leftmost alternation finds a successful match. Our engine
will match the whole string "abab", as all the alternations are explored in parallel, and the longest match is in the right alternation. The behavior of \(\mathit{IsMatch}(x,R)\) is identical in both engines.
In the nonbacktracking engine, PCRE semantics is achieved by prepending \(\top\)*? to the regex pattern, which is always nullable and has only two potential derivatives. For example, in the case of the pattern \(\top\)*?ab, the derivative can be either \(\texttt{b}\,|\top\)*?ab, when the character matches \(\psi_{a}\) (the initial pattern creates a new alternation), or \(\top\)*?ab, (i.e. just the initial pattern) when it does not.
In our case we meet POSIX semantics by keeping the initial pattern as is, but starting a potential match on every input position that has a valid starting predicate. Then we keep track of the nullability of all alternations the initial pattern produced separately. This is done by having a set of "top-level alternation" data structures, each of which consists of a regex and the maximum nullable position of the alternation. Once an alternation turns into \(\bot\), we check if the alternation has a nullable position, and if it does, we return the match. If the alternation does not have a nullable position, we remove it from the set and continue matching the remaining alternations.
### Conjunctive Starset Lookup and Skipping
A very important optimization, which on average avoids a large amount of unnecessary work (depending on the input string and regular expression), is the startset optimization. It enables searching for characters belonging to the set of initial characters of a regex, denoted by the _startset_ predicate, by using dedicated string traversal constructs which can be several orders of magnitude faster than checking characters one by one.
The result of evaluating the predicate _startset_ is captured in the state of the regex node. Starset optimization often amounts to checking whether the first character of the regex pattern is present in the input string. As mentioned in Section 2.3, we make use of the fact that determining the startset of a regex with derivatives is a relatively cheap operation, and we can use it to incorporate efficient vectorized string-search procedures into the matching algorithm.
A key difference from other regex engines is that we don't just skip until the initial startset of the regex, but perform intermediate startset computations and appropriate skipping in the inner loop as well. To take advantage of the fast _startset_ check, we first need to determine if a regex is "skippable" or not, where we determine if taking the derivative changes the node on only a subset of the input alphabet. Such regexes start with a *-loop.
A skippable regex is defined as a regex that starts with a \({}^{*}\)-loop, or one in which all child regexes recursively start with a \({}^{*}\)-loop, such as an alternation, lookahead, intersection or negation of regexes starting only with \({}^{*}\)-loops. Lookbacks are more complex and do not currently take advantage of this optimization. The key insight of this skippable check is that \({}^{*}\)-loops keep producing the same regex derivative until either the loop terminator or tail of one of the \({}^{*}\)-loops matches.
Figure 1. Basic rewrite rules.
For example, all regexes starting with \(\top\star\) are skippable, as according to the derivation rules in Section 4.2: the derivative of the regex does not change unless the tail of the concatenation produces a non-\(\bot\) derivative. The same is true for all regexes starting with .*, as the only two predicates that produce a non-initial derivative are \(\psi_{\backslash\mathtt{n}}\), that is the loop termination predicate for .*, and the starting predicate of the concatenation tail. For example, the regex .*ab has two predicates that change the state: \(\psi_{\backslash\mathtt{n}}\), in which case the regex becomes \(\bot\), and \(\psi_{a}\), in which case the regex becomes \(\mathbf{b}\mid\).*ab. All other predicates produce the initial regex.
Such skipping becomes useful when we want to match over input text with multiple intersections or alternations. For example, consider the regex (12.*)&(.*\d), which is strictly non-skippable, as the transition from 12.* to \(\bot\) or to 2.* is an important part of the matching process. However, the regex (.*12.*)&(.*\d) is skippable, allowing us to look up the occurrence of a character satisfying the predicate \(\psi_{[1\backslash\mathtt{d}\backslash\mathtt{n}]}\) (the disjunction of the predicates 1, \d and the loop exit condition \n), and continue matching directly at that position.
The in-match starset computation heuristic itself is very simple, and only considers the first node of the regex, or the first node and the second node recursively in the case of a star. In the case of a regex starting with a predicate, we take the predicate itself. In the case of a predicate star loop, we take the negation of the loop body predicate, and union it with the concatenation tail predicate, as done in the previous example. In the case of an intersection, alternation or complement, we take the union of the starsets of the contained regex bodies. Notably, the starset of a complement is not the complement of the starset of the contained regex, but the same starset, as the derivation rules apply the same way inside negation.
After computing the joint starset of the regex, we use it in a vectorized index-of lookup in the following input string. Currently, we perform this lookup in an overestimated manner, by taking the union of all starsets, but this can be improved upon by taking the intersection in some cases. For example, if we have an intersection with the starsets \(\psi_{\backslash\mathtt{d}}\) and \(\psi_{1}\) strictly in the same position, then we know ahead-of-time that the starset of the intersection is constrained to only \(\psi_{1}\) - the conjunction of the two starsets.
The starset optimization is very powerful as it allows us to skip over large parts of the input string in a single step, and only perform the more expensive derivative computation on the remaining positions. A future improvement would be to extend this to multi-character predicates and intersections, which would allow us to skip over even more of the input string.
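A minimal sketch of the skip step, assuming startsets are given as plain character sets and using a compiled character class as a stand-in for a vectorized IndexOfAny-style scan:

```python
import re

def skip_to_startset(s: str, i: int, startset: str) -> int:
    # jump to the next position whose character can change the current state
    m = re.compile("[" + re.escape(startset) + "]").search(s, i)
    return m.start() if m else len(s)

# e.g. for (.*12.*)&(.*\d) the relevant characters are the digits and the
# loop terminator \n; everything else is skipped in a single scan
print(skip_to_startset("lorem ipsum x7 dolor", 0, "0123456789\n"))   # 13
```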
## 6. Evaluation
The evaluation of the engine is still in its early stages, and we have not yet implemented all the optimizations that are possible, such as caching transition regexes, or skipping over multi-character predicates. However, despite being only a research prototype, we can already see that the engine is capable of solving problems that neither of the available.NET regex engines can currently solve.
An industrial implementation of the engine should reach performance characteristics comparable to (Moseley et al., 2023) over standard regexes, because the same.NET framework is used, with potential marginal gains in space/time due to commutative \(\mid\), as non-commutativity of \(\mid\) disallows some useful rewrites in (Moseley et al., 2023). Moreover, all initial fixed-prefix/suffix-search optimizations are shared across all backends. For a more comprehensive evaluation on standard regexes outside.NET, see the evaluation of (Moseley et al., 2023) directly.
Note that we do not use the caching mechanism from the nonbacktracking engine in our prototype, as it is not yet fully implemented, and would require a significant amount of work to integrate in the presence of lookarounds. The lack of caching is the main reason why our engine is not yet competitive with the nonbacktracking engine on the benchmarks.
We have evaluated the performance of our engine against the .NET default backtracking engine and the symbolic-automata-based nonbacktracking version. The benchmarks were run on a machine with an AMD Ryzen 7 5800X 8-Core CPU and 32 GB of RAM, using .NET version 7.0.305.
We compare the performance of extracting paragraphs containing multiple substrings from the collected works of Mark Twain [1]: first on a 9 kB string containing 34 paragraphs, extracted from lines 188589 to 188771, and then on the entire 20 MB file, containing 379897 lines in total.
The paragraphs are extracted with three kinds of patterns: one that uses a negative lookahead to match the end of the paragraph, one that uses a line loop until the occurrence of two sequential newlines, and one that uses negation to constrain the paragraph range and intersections to constrain the paragraph contents.
Examples of the patterns used to match paragraphs containing the word "King":
1. Negative lookahead: \n\n((?!\n\n)[\s\S])*?(King)((?!\n\n)[\s\S])*?\n\n
2. Loop: \n\n((.+\n)\n+?(.*King.*\n)(.+\n)+?)\n
3. Conjunction and negation: the paragraph body between the \n\n delimiters is expressed as the intersection of [\s\S]*King[\s\S]* with the complement ~([\s\S]*\n\n[\s\S]*), so that the body must contain "King" and must not contain an empty line.
Using the intersections produces a more efficient match, since there is no blowup in the match orderings that conventional regular expressions require; a small runnable sketch of the single-substring lookahead variant follows.
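Only the first two variants can be expressed in engines without intersection and complement; a minimal Python sketch of the single-substring lookahead variant (the text is a made-up stand-in for the Mark Twain corpus) is shown below. Matching several substrings in arbitrary order with such patterns requires enumerating all orderings, which is the source of the blowup in Table 2, whereas each extra substring adds only one conjunct to the intersection-based pattern.

```python
import re

text = ("\n\nThe King rode out at dawn.\nHis men followed him east."
        "\n\nThis paragraph does not mention the word.\n\n")

pattern = r"\n\n((?!\n\n)[\s\S])*?King((?!\n\n)[\s\S])*?\n\n"
m = re.search(pattern, text)
print(repr(m.group(0)))
# '\n\nThe King rode out at dawn.\nHis men followed him east.\n\n'
# The conjunction/negation variant cannot be expressed in Python's re,
# which supports neither intersection (&) nor complement (~).
```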
In the 9 kB sample text, our engine, SBRE (Varatalu, 2023) starts becoming faster than the rest at 3 substrings, but due to lack of caching has 4 times higher memory allocations than the nonbacktracking engine and about 9 times higher allocations than the backtracking implementation.
After a certain number of substrings, the number of possible match orderings (and with it the pattern length, Table 2) becomes so large that both the backtracking and nonbacktracking engines stop being able to find a match within 60 seconds, while our engine still manages to complete the match with minor slowdowns.
In the full 20 MB text, our research prototype (SBRE) takes significantly more unnecessary derivatives than the nonbacktracking variant, as it suffers from the lack of caching, but after a certain number of substrings, it is still the only engine that can complete the match within 60 seconds. The results are shown in Fig. 3.
A notable observation in the full text is that the lookahead pattern of the backtracking engine falls off significantly earlier than in the 9 kB text, where it showed promising results up to 6 substrings. In the full text, the nonbacktracking engine shows the best performance up to 6 substrings, but then falls off completely, as the allocated memory blows up from 74.41 MB with 5 substrings to 3.39 GB with 6 substrings. The nonbacktracking engine was not able to produce a result with 7 substrings within 10 minutes.
The results show that our engine is able to search for a large amount of substrings in a paragraph efficiently, while the other engines suffer from the exponential blowup in match orderings. In the full text, adding a substring to the SBRE pattern after 2 substrings currently causes a constant memory allocation growth of 800 MB, which is a significant amount of memory, but still allows for the match to complete, while the other engines are not able to complete the match at all. The memory usage of the SBRE engine could be significantly reduced by caching the derivatives, which would make the engine more competitive overall.
## 7. Related Work
Regular expressions have in practice many extensions, such as _backreferences_ and _balancing groups_, that reach far beyond _regular_ languages in their expressive power. Such extensions, see (Loring et al., 2019), fall outside the scope of this paper. The focus on related work here is solely on automata and derivative based matching algorithms, tools, and techniques, that in some form or shape maintain, at least in principle, a finite state based view corresponding to regular languages. In particular, _lookaheads_ maintain regularity (Morihata, 2012) and regular expressions with lookaheads can
\begin{table}
\begin{tabular}{l l l l} n of substrings & Neg. lookahead & Loop & \& and \(\sim\) \\ \hline
1 & 50 & 34 & 46 \\
2 & 101 & 77 & 66 \\
3 & 363 & 295 & 88 \\
4 & 1869 & 1513 & 108 \\
5 & 11805 & 9385 & 127 \\
6 & 87885 & 69145 & 148 \\
7 & 740925 & 579625 & 170 \\
8 & 6854445 & 5322265 & 190 \\
9 & 69310125 & 53343385 & 208 \\
10 & 769305645 & 587865625 & 226 \\ \hline \end{tabular}
\end{table}
Table 2. Paragraph extraction pattern lengths in characters
be converted to Boolean automata [Berglund et al., 2021]. [Chida and Terauchi, 2023] consider extended regular expressions in the context of backreferences and lookaheads. They build on [Carle and Narendran, 2009] to show that extended regular expressions involving backreferences and both positive and negative lookaheads lead to _undecidable_ emptiness but, when restricted to positive lookaheads only, are closed under complement and intersection. [Miyazaki and Minamide, 2019] present an approach to find match end with derivatives in regular expressions with lookaheads. The semantics of derivatives in [Miyazaki and Minamide, 2019] uses _Kleene algebras with lookahead_ as an extension of Kleene algebras with tests [Kozen, 1997], and is fundamentally different from our formulation in several aspects: the underlying semantic concatenation is _commutative_ and _idempotent_, the difference to derivatives of concatenations in relation to [Brzozowski, 1964] is also pointed out, and the question of supporting lookbacks and reverse is left unclear. Derivatives in combination with Kleene algebras are also studied in [Pous, 2015]. Our approach is a conservative extension of the theory in [Moseley et al., 2023] with intersection and complement, as well as positive and negative lookbacks and lookaheads, that builds on [Brzozowski, 1964], leading to the core fundamental results of Theorem 1 and Theorem 2.
In functional programming, derivatives were studied in [Fischer et al., 2010; Owens et al., 2009] for _IsMatch_. [Ausaf et al., 2016; Sulzmann and Lu, 2012] study _MatchEnd_ with Antimirov derivatives and POSIX semantics, and [Ausaf et al., 2016] also covers Brzozowski derivatives with a formalization in Isabelle/HOL. The algorithm of [Sulzmann and Lu, 2012] has been recently further studied in [Tan and Urban, 2023] as a recursive functional program and also formalized in Isabelle/HOL. The fundamental difference to the theory here is that lookarounds (which in particular enable the definition of anchors) imply that certain classical laws, such as several of the _inhabitation relation rules_ in [Tan and Urban, 2023], become invalid. In [Wingbrant, 2019] anchors are considered as special symbols in an extended alphabet, using classical derivatives. This approach has the drawback that anchors are intended as specialized lookarounds, and treating them as special symbols conflicts with match semantics in terms of locations. Some aspects of our work here, such
Figure 3. Time taken to find a matching paragraph containing all required substrings in any order in the full 20MB sample text.
as support for intersection, are related to SRM [Saarikivi et al., 2019], which is the predecessor of the NonBacktracking regex backend of .NET [Moseley et al., 2023], but SRM lacks support for lookarounds as well as anchors. The top-level matcher of SRM is also different and more costly because it uses three passes over the input to locate a match, instead of two.
State-of-the-art nonbacktracking regular expression matchers based on automata, such as RE2 [Cox, 2010] and grep [GNU, 2023] using variants of [Thompson, 1968], and Hyperscan [Wang et al., 2019] using a variant of [Glushkov, 1961], as well as the derivative based NonBacktracking engine in .NET, make heavy use of _state graph memoization_. None of these engines currently supports lookarounds, intersection or complement. An advantage of using derivatives is that they often minimize the state graph (but do not guarantee minimization), as was already shown in [Owens et al., 2009, Table 1] for DFAs. Further evidence of this is also provided in [Sulzmann and Lu, 2012, Section 5.4] where NFA sizes are compared for Thompson's and Glushkov's versus Antimirov's constructions, showing that Antimirov's construction consistently yields a smaller state graph. In automata-based engines an upfront DFA minimization is undesirable because it is too costly, while derivatives allow DFA-minimizing optimizations to be applied essentially on-the-fly. In general, the rewrite rules applied in our framework are not feasible in automata caching, because the corresponding semantic language-level checks would require global analysis: in traditional automata-based representations the relationship between states and regular expressions has been lost. In our case we preserve the relationship between regular expressions and states.
The two phases of the top-level matching algorithm, finding the match end location and finding the match start location, are similar in RE2 [Cox, 2010] as well as in .NET NonBacktracking, which our implementation builds on top of. It therefore also benefits from switching to NFA mode when a certain threshold of DFA states is reached, having an overall effect similar to that of RE2 [Cox, 2010], but the derivatives can switch to Antimirov-style derivatives without any prior bookkeeping. The top-level loop is in some sense oblivious to the fact that intersection, complement, and lookarounds are being used inside the regular expressions.
The two main standards for matching are PCRE (backtracking semantics) and POSIX [Berglund et al., 2021; Laurikari, 2000]. A _greedy_ matching algorithm for backtracking semantics was originally introduced in [Frisch and Cardelli, 2004], based on \(\epsilon\)-NFAs, while maintaining matches for eager loops. We should point out that [Frisch and Cardelli, 2004, Proposition 2] assumes the axiom \(L(R\cdot S)=L(R)\cdot L(S)\), which fails with anchors or lookarounds. While backtracking semantics is needed in .NET for compatibility across all the backends - including NonBacktracking [Moseley et al., 2023] - here we use POSIX semantics, which allows us to treat alternations and intersections as commutative operations. The rationale behind using POSIX is that it is semantically unclear what de Morgan's laws and the laws of distributivity would mean in the context of backtracking semantics if intersection were treated as a noncommutative operation.
The results [Moseley et al., 2023, Theorem 3.3 and Theorem 3.8] form a _strict subset_ of our Theorem 1 and Theorem 2, whose proofs are novel and nonobvious because the definitions of nullability and derivatives are mutually recursive. It was far from obvious whether regex complement and intersection could even be combined in any meaningful way with lookarounds. It remains unclear to us whether the main result [Moseley et al., 2023, Theorem 4.5], which builds on formally linking the semantics of derivatives to _backtracking_ (PCRE) semantics, can be extended to cover the extended fragment of regexes defined here by \(\mathcal{R}\), due to the noncommutativity of alternations in backtracking semantics; the proof of [Moseley et al., 2023, Theorem 4.5] is much more complicated than that of Theorem 3. Intersection was also included as an experimental feature in the initial version of SRM [Saarikivi et al., 2019] by building directly on derivatives in [Brzozowski, 1964], and used an encoding via regular expression _conditionals_ (if-then-else) that unfortunately conflicts with the intended semantics of conditionals and therefore has, to the best of our knowledge, never been used or evaluated.
The conciseness of using intersection and complement in regular expressions is also demonstrated in (Gelade and Neven, 2012) where the authors show that using intersection and complement in regular expressions can lead to a double exponentially more succinct representation of regular expressions.
## 8. Future Work
The theory of derivatives based on locations that is developed here can be used to extend regular expressions with lookarounds in SMT solvers that support derivative based lazy exploration of regular expressions as part of the sequence theory; such solvers include CVC4 (CVC4, 2020; Liang et al., 2015) and Z3 (de Moura and Bjorner, 2008; Stanford et al., 2021). A further extension is to lift the definition of location derivatives to a fully _symbolic_ form as is done with _transition regexes_ in Z3 (Stanford et al., 2021). (Chen et al., 2022) mention that the OSTRICH string constraint solver could be extended with backreferences and lookaheads by some form of alternating variants of prioritized streaming string transducers (PSSTs), but this has, to our knowledge, not been done. Such extensions would widen the scope of analysis of string verification problems that arise from applications that involve regexes using anchors and lookarounds. It would then also be beneficial to extend the SMT-LIB (SMT-LIB, 2021) format to support lookarounds.
Counters are a well-known Achilles heel of essentially all nonbacktracking state-of-the-art regular expression matching engines, as recently also demonstrated in (Turonova et al., 2022), which makes any algorithmic improvements in the handling of counters highly valuable. In (Turonova et al., 2020), Antimirov-style derivatives (Antimirov, 1996) are used to extend NFAs with counting to provide a more succinct symbolic representation of states by grouping together states that have similar behavior for different stages of counter values, using a data-structure called a _counting-set_. It is an intriguing open problem to investigate if this technique can be adapted to work with location derivatives within our current framework. (Glaunec et al., 2023) point out that it is important to optimize specific steps of regular expression matching to address particular performance bottlenecks. Their BVA-Scan algorithm is aimed at making matching with regular expressions containing counters more efficient. (Holik et al., 2023) report on a subset of regexes with counters, called synchronizing regexes, that allows for fast matching.
Further work on improved rewrite rules is also needed in our current implementation to reduce the number of states that arise in the matching engine, as well as on enhanced caching techniques that record transitions that have already been seen for lookarounds. Currently, no special handling of transitions arising from lookarounds takes place. And, as intersections allow precise text extraction in a single pass, the approach can be optimized further by applying vectorization.
One of the main drawbacks of lookarounds is that they are not cacheable in the same way as other regexes, as they depend on the position of the matcher in the string. This means that the regex engine cannot statically determine if a lookaround will match at a given position and has to try the lookaround at every position.
This is not a problem for the short context-lookups, like the anchor lookarounds in Table 1, as they do not move around in the string, and retain the linear time complexity of the regex. However, this is a problem for the other lookarounds, such as the lookback (?<=a.*)b, where the occurrence of the character a has to be re-checked at every occurrence of b. This leads to a non-ideal quadratic time complexity.
However, these kinds of lookarounds could be cached by storing the loop predicate \(\psi_{.}\) (denoting [^\n]) and passing it along with the derivative. This way, the regex engine will only invalidate the cache upon finding an occurrence of a character that does not match the loop predicate, i.e., in the case of (?<=a.*) this cached predicate will be invalidated by \n, which is the only character not in \(\llbracket\psi_{.}\rrbracket\).
This caching technique could lead to a linear time complexity for many lookarounds but needs more research.
One notable feature that is missing from our regexes is support for laziness. However, this feature can often be emulated by using complement and negation of character classes. For example, the regex a.*?b, which matches the shortest span from a to b, can be emulated by a[^b\n]*b, which has the exact same semantics but without using laziness. Another way to emulate laziness is by using intersection and complement, as in (a.*)&(~(.*b.*)b), which is also equivalent to a.*?b. Note that ~(.*b.*), which means "does not contain b", matches _before_ b, where the final match character must be b in (~(.*b.*)b).
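The first emulation can be checked with a standard engine; a short Python sketch (the intersection-based form cannot be run here, since re lacks & and ~):

```python
import re

s = "xx a123b yy a456b"
lazy  = re.search(r"a.*?b", s)        # lazy loop
nolaz = re.search(r"a[^b\n]*b", s)    # negated character class, no laziness
print(lazy.span(), nolaz.span())      # both (3, 8): the same leftmost-shortest span
```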
## 9. Conclusion
We have presented both a theory and an implementation for a combination of extensions to regular expressions, including complement, intersection and positive and negative lookarounds, that has not previously been explored in depth in such a combination. Prior work has analyzed various other sets of extensions and their properties, but several such combinations veer out of the scope of regular languages. The work combines and extends the symbolic-derivative based approaches presented in (Saarikivi et al., 2019) and (Moseley et al., 2023) by showing how the carefully selected set of extensions to regular expressions yields an effective Boolean algebra on the match set semantics, producing a regular language with interesting new applications and opening up a principled way of defining rewrite rules for optimizations that play an important role in practical applications. In addition, we have provided a precisely defined approach to finding matches using symbolic derivatives.
When considering context-sensitive anchors, we showed how anchors can be represented in terms of lookarounds and are thus supported by our approach. Moreover, such generalization provides possibilities for defining custom anchors.
The implementation reuses several components of the .NET 7 nonbacktracking regular expression backend and adds support for the newly introduced extensions. To demonstrate the efficacy of the approach we showed that the task of locating a substring containing a set of matches in arbitrary order can be solved by the proposed engine, while neither the nonbacktracking nor the backtracking engine of .NET 7 scaled, due to the factorial blowup of the possible orderings of the matches.
|
2309.10667 | Learning Tri-modal Embeddings for Zero-Shot Soundscape Mapping | We focus on the task of soundscape mapping, which involves predicting the
most probable sounds that could be perceived at a particular geographic
location. We utilise recent state-of-the-art models to encode geotagged audio,
a textual description of the audio, and an overhead image of its capture
location using contrastive pre-training. The end result is a shared embedding
space for the three modalities, which enables the construction of soundscape
maps for any geographic region from textual or audio queries. Using the
SoundingEarth dataset, we find that our approach significantly outperforms the
existing SOTA, with an improvement of image-to-audio Recall@100 from 0.256 to
0.450. Our code is available at https://github.com/mvrl/geoclap. | Subash Khanal, Srikumar Sastry, Aayush Dhakal, Nathan Jacobs | 2023-09-19T14:49:50Z | http://arxiv.org/abs/2309.10667v1 | # Learning Tri-modal Embeddings for Zero-Shot Soundscape Mapping
###### Abstract
We focus on the task of soundscape mapping, which involves predicting the most probable sounds that could be perceived at a particular geographic location. We utilise recent state-of-the-art models to encode geotagged audio, a textual description of the audio, and an overhead image of its capture location using contrastive pre-training. The end result is a shared embedding space for the three modalities, which enables the construction of soundscape maps for any geographic region from textual or audio queries. Using the SoundingEarth dataset, we find that our approach significantly outperforms the existing SOTA, with an improvement of image-to-audio Recall@100 from 0.256 to 0.450. Our code is available at [https://github.com/mvrl/geoclap](https://github.com/mvrl/geoclap).
## 1 Introduction
Sound is one of the fundamental senses that helps us reason about our environment. There exists an intricate relationship between the visual appearance and sound of a location [15, 16]. Learning about the type of sound at a geographic location allows one to understand many high-level concepts of the area. For example, just by hearing the sound of traffic, we can imagine the location to be an urban setting with a rush of cars and people, whereas the sound of sea waves might elicit the beautiful scenery of a beach.
There have been several studies conducted on different cities around the world attempting to understand human perception of various types of environmental sound [1, 3, 15, 22, 28, 30]. Moreover, it has been established that there is a strong correlation between the physiological and psychological health of a person and the environmental sound condition they live in [8, 21, 33]. Therefore, understanding the soundscape for a given geographic area can be of great importance to policymakers focused on urban planning and environmental noise management. Soundscapes also serve value to the general public for whom environmental sound plays a vital role in decisions such as buying a house or setting up a business.
Most of the existing works on creating soundscape focus on crowd-sourcing human perception of sound in their surroundings [1, 3, 22, 30, 40]. While serving as an important tool for understanding the sound distribution of a region, such approaches have two major limitations. First, the abstraction of sound into a fixed set of indicators and psycho-acoustic descriptors limits our ability to have a complete picture of underlying physical factors associated with sound. Second, such soundscapes are usually created for only highly visited places in the world, creating massive sparsity of soundscapes on a global scale. In order to solve both of these limitations, we propose to leverage the intrinsic relationship between sound and visual cues of the location and learn to directly predict the most probable sound that could be heard at any given location. Specifically, we train a multi-modal deep learning framework that learns a shared embedding space where the sound that is most likely to come from a given location, is pulled closer while pushing other unlikely sounds farther apart. We represent the location (latitude, longitude) by an overhead image of size \(H\times W\) centered around it. Once trained, our multi-modal embedding space and free availability of overhead imagery makes it possible for us to create soundscape maps for any area in the world.
One of the successful approaches to learning shared embedding space between different modalities is contrastive learning. In recent years, contrastive learning between image and text [32]; image, text, and audio [17]; text, audio [9, 12, 38]; overhead image and audio [19] has been an effective self-supervised training objective to learn a multi-modal embedding space. Such a space has an understanding of the correspondence between the modalities that can be transferred to various downstream tasks, where impressive results have been observed. Motivated by these works, we also adopt contrastive learning as our pre-training strategy to learn a multi-modal embedding space. However, unlike the prior works, we are interested in incorporating geographic knowledge into the embedding space learned by audio-language pre-training. We achieve this by adding an overhead image, capturing the geographic context of a scene, as an additional modality in our contrastive learning framework. With the shared embedding space that has knowledge of correspondence between audio and its corresponding overhead image, we can then formulate the task of soundscape mapping as a cross-modal retrieval problem, where the objective is to predict the most likely sound from a gallery of \(N\) sounds given an overhead image.
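A minimal sketch of this retrieval formulation, assuming L2-normalized embeddings in the shared space are already available (the names, shapes, and random tensors below are purely illustrative):

```python
import torch
import torch.nn.functional as F

N, d = 10_000, 512
audio_gallery = F.normalize(torch.randn(N, d), dim=-1)   # N candidate sounds
image_query   = F.normalize(torch.randn(1, d), dim=-1)   # one overhead image

scores = image_query @ audio_gallery.T                    # cosine similarities, (1, N)
top100 = scores.topk(k=100, dim=-1).indices               # 100 most likely sounds
# Recall@100 then asks whether the ground-truth audio index appears in top100.
```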
Our work builds upon a prior work [19] that introduced the _SoundingEarth_ dataset containing over \(50k\) geotagged audios paired with their corresponding overhead image. The objective of work by Heidler _et al_. [19] was to learn a good audio-visual embedding space useful to be transferred for different downstream tasks in remote sensing. However, in the interest of learning an embedding space to create accurate soundscapes, our work is focused on improving the task of cross-modal retrieval. In this regard, we utilise weights of publicly available modality-specific SOTA models. Moreover, unlike Heidler _et al_., who build an embedding space capturing two modalities (overhead-image and audio), we propose to also incorporate textual description of audio into the embedding space. This essentially creates a tri-modal embedding space with richer understanding of three modalities: overhead-image, audio, and text. We call our framework GeoCLAP: Geography-Aware Contrastive Language Audio Pre-training. As demonstrated by our results adding the textual modality improves the representational capability of both overhead-image and audio encoders. Moreover, with an understanding of three modalities, we are now able to create soundscapes either from a textual or audio query for any geographic region. The main contributions of our work are as follows:
* We significantly improve the prior baseline on the task of cross-modal retrieval of overhead image to sound and vice-versa.
* We build a tri-modal embedding space that has an understanding of overhead image, audio, and textual description of audio at a given location.
* We demonstrate a simple and scalable way of creating soundscape for any geographic area using either a textual or audio query.
## 2 Related Work
### Soundscape Mapping
The soundscape of a geographic region can be defined as the acoustic environment perceived by individuals within its context [14]. There exists a large body of work focusing on the problem of soundscape mapping [1, 3, 13, 16, 22, 26, 28, 30, 40, 41]. In these works, soundscape mapping is formulated as a framework containing three components: indicators, descriptors, and a predictive model that maps indicators to descriptors. Indicators are psycho-acoustic measures (for example, sound pressure level, loudness, spectral slope, etc.) which determine the perceived value of descriptors (for example, pleasant, unpleasant, eventful, etc.). In this paper, we refer to this line of work as perceptual soundscape mapping.
One of the common findings from the literature on perceptual soundscape mapping is that there exists a strong correlation between the human perception of sound and the environmental variables of the scene, such as buildings, road category, etc. [15]. Utilising this correlation between sound and visual cues, a few works have used deep learning to learn a shared embedding space between sound and either a ground-level image [29] or an overhead image [20] of the scene. This multimodal learning approach leads to improved performance on visual tasks such as aerial scene recognition [20], image classification [29], and object detection [29]. Closer to our work, a few prior works [6, 25, 27, 39] focus on the task of cross-modal image-to-voice retrieval. Such tasks require datasets containing overhead imagery paired with spoken audio captions, which are very limited. Moreover, instead of learning from speech, we are interested in learning from free-form audio such as field recordings, natural sounds, etc., which capture diverse concepts of the location. Another closely related work, by Salem _et al_. [35], proposed learning a shared embedding space between audio, overhead image, and ground-level image, enabling the prediction of a distribution over sound clusters from an overhead image. The problem formulation of soundscape mapping in our work is similar to [35]. However, the striking difference as well as the strength of our work is that, leveraging the power of contrastive language audio pre-training (CLAP), we are able to create soundscapes conditioned on any textual or audio query. In doing so, we still retain the ability to create soundscapes for any desired set of sound categories in a zero-shot manner.
### Contrastive Learning
Radford _et al_., in their seminal work CLIP [32], trained on a large image-text dataset using a contrastive loss and demonstrated its impressive zero-shot performance on many computer vision tasks. AudioCLIP [17] extends CLIP to three modalities: image, text, and audio. Such a tri-modal embedding space enables queries between three pairs of modalities. Wav2clip [37] distilled the knowledge of the CLIP embedding space by freezing the image encoder of CLIP and contrastively training an audio encoder to learn a new embedding space shared by audio and a corresponding image. With a similar training objective as CLIP, another work, CLAP [12], performs contrastive learning between audio and natural language. CLAP training has proven to be an effective strategy with impressive audio retrieval performance [9]. Inspired by this, Wu _et al_. [38] further improved CLAP's performance by training on large-scale data with effective audio feature fusion and text augmentation strategies. We refer to the work by Wu _et al_. [38] as L-CLAP in our paper and use the pre-trained encoders from L-CLAP to embed audio and text for GeoCLAP pre-training.
Our work takes motivation from the proven performance of contrastive learning as an effective pre-training strategy. The focus of our work is soundscape mapping. The embedding space for such a task should have an understanding of the geography of the location the sound is coming from [4]. Therefore, we propose to learn an embedding space trained contrastively on three modalities: overhead image, text, and audio.
### Pretrained Models
The availability of modality-specific pre-trained models trained with various self-supervision objectives has proven crucial in bringing performance improvements to various tasks in remote sensing [36]. In recent years, masked auto-encoder (MAE) [18] based models trained on satellite imagery have been demonstrated to be good starting checkpoints to be fine-tuned for various downstream tasks [7, 34]. In our work, we start with the pre-trained weights of the Vision Transformer (ViT) [11] encoder of SATMAE [7] as the overhead-image encoder for GeoCLAP. SATMAE [7] was pre-trained on large-scale (over 700K images) satellite imagery of the world. To learn representations for audio and text, we use L-CLAP's pre-trained encoders. It uses HTSAT [5] as the audio encoder and RoBERTa [23] as the text encoder. HTSAT is a swin-transformer [24] based model with SOTA performance on various audio classification tasks. RoBERTa is a powerful transformer-based language model trained with improved design choices compared to BERT [10]. L-CLAP [38] was contrastively pre-trained on a paired audio-text dataset of over 630K samples.
## 3 Approach
We present a detailed description of our approach, including the high-level problem formulation, a description of our primary evaluation dataset, and a detailed description of the network architecture and training procedure for our method, GeoCLAP.
### Problem formulation
The objective of our work is to learn a shared embedding space that allows us to predict the most probable sounds that can be heard at a given geographic location. This can be represented as \(s^{*}=\arg\max_{s}P(s|l)\), where \(P(s|l)\) is the conditional distribution of sounds for a given location \(l\) and \(s^{*}\) is the most likely sound. Unfortunately, direct conditioning on location does not generalize to regions without a large number of training samples, which means truly global mapping would not be possible. On the other hand, overhead imagery has a strong correlation to the type of sound at a given location and is freely available across the globe. Therefore, in our work, we represent the location indirectly, using an overhead image \(I(l)\) of the location. We learn a conditional distribution \(P(s|I(l))\), which is able to make high-resolution predictions even for regions without training samples.
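For illustration, the retrieval view of this formulation can be expressed in a few lines of PyTorch. This is a minimal, hypothetical sketch (the function and variable names are ours, not part of any released code), assuming the embeddings have already been l2-normalized by the encoders described below.

```python
import torch

def most_likely_sound(image_emb: torch.Tensor, sound_gallery: torch.Tensor) -> int:
    """Return the index of the most probable sound for one overhead image I(l).

    image_emb:     (d,)   l2-normalized embedding of the overhead image
    sound_gallery: (N, d) l2-normalized embeddings of N candidate sounds
    """
    # For normalized embeddings, cosine similarity is a plain dot product;
    # the gallery sound with the highest score is the retrieved s*.
    scores = sound_gallery @ image_emb  # (N,)
    return int(torch.argmax(scores).item())
```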
### Dataset
We use the _SoundingEarth_ dataset to train and evaluate our method. The dataset contains more than \(50k\) geotagged audio recordings from \(136\) countries, paired with corresponding overhead images. The overhead images have a size of \(1024\times 1024\) pixels, were collected from _Google Earth_, and have an approximate ground-sample distance (GSD) of \(0.2\) meters (m). Audio data in the dataset was collected from the project _Radio Aporee::Maps_ [2], which hosts an online platform dedicated to creating a global soundmap. It contains diverse audio recordings from urban, rural, and natural environments, published under the creative commons license. For our project, we remove the audio files with a sampling frequency of less than \(16\) kHz. This yields a dataset size of \(50\,792\) samples.
The high-resolution _Google Earth_ imagery is not available to be used freely. Therefore, in order to have the ability to globally scale soundscape mapping, we augment the existing _SoundingEarth_ dataset by including freely available lower-resolution images. Specifically, we use the RGB bands of the _Sentinel-2 cloudless_ imagery with \(10m\)_GSD_. For each location, we download a \(256\times 256\) image tile with the coverage radius of \(512m\) centered at that location.
### GeoCLAP
Figure 1 represents the overall framework of GeoCLAP. Given a geotagged audio \(X^{a}_{k}\), a textual description of the audio \(X^{t}_{k}\), and an overhead image of the corresponding location \(X^{i}_{k}\), the tuple (\(X^{a}_{k}\),\(X^{t}_{k}\),\(X^{i}_{k}\)) forms one audio-text-image triplet. We obtain embeddings for each modality by passing it through a modality-specific encoder and a linear projection layer, yielding embeddings of the same dimension for audio, text, and overhead image, respectively.
\[E^{a}_{k}=g_{audio}(f_{audio}(X^{a}_{k})) \tag{1}\]
\[E^{t}_{k}=g_{text}(f_{text}(X^{t}_{k})) \tag{2}\]
\[E^{i}_{k}=g_{image}(f_{image}(X^{i}_{k})) \tag{3}\]
where \((f_{audio},g_{audio})\), \((f_{text},g_{text})\), \((f_{image},g_{image})\) are (encoder, linear projection layer) pairs producing \(l2\)-normalized \(d\) dimensional embeddings: \(E^{a}_{k}\), \(E^{t}_{k}\), and \(E^{i}_{k}\), for audio, text, and overhead image respectively.
GeoCLAP is trained on embedding triplets using contrastive learning objective similar to CLIP [32] for all three pairs of embeddings:
Figure 1: GeoCLAP: A tri-modal contrastive learning framework to learn shared embedding space between overhead image, sound, and textual description of the corresponding sound.
\[L_{at}=-\frac{1}{2N}\sum_{k=1}^{N}\left(\log\frac{\exp(E_{k}^{a}\cdot E_{k}^{t}/\tau_{at})}{\sum_{j=1}^{N}\exp(E_{k}^{a}\cdot E_{j}^{t}/\tau_{at})}+\log\frac{\exp(E_{k}^{t}\cdot E_{k}^{a}/\tau_{at})}{\sum_{j=1}^{N}\exp(E_{k}^{t}\cdot E_{j}^{a}/\tau_{at})}\right) \tag{4}\]
\[L_{ai}=-\frac{1}{2N}\sum_{k=1}^{N}\left(\log\frac{\exp(E_{k}^{a}\cdot E_{k}^{i}/\tau_{ai})}{\sum_{j=1}^{N}\exp(E_{k}^{a}\cdot E_{j}^{i}/\tau_{ai})}+\log\frac{\exp(E_{k}^{i}\cdot E_{k}^{a}/\tau_{ai})}{\sum_{j=1}^{N}\exp(E_{k}^{i}\cdot E_{j}^{a}/\tau_{ai})}\right) \tag{5}\]
\[L_{ti}=-\frac{1}{2N}\sum_{k=1}^{N}\left(\log\frac{\exp(E_{k}^{t}\cdot E_{k}^{i}/\tau_{ti})}{\sum_{j=1}^{N}\exp(E_{k}^{t}\cdot E_{j}^{i}/\tau_{ti})}+\log\frac{\exp(E_{k}^{i}\cdot E_{k}^{t}/\tau_{ti})}{\sum_{j=1}^{N}\exp(E_{k}^{i}\cdot E_{j}^{t}/\tau_{ti})}\right) \tag{6}\]
where \(N\) is the training batch size and \(\tau_{at}\), \(\tau_{ai}\), and \(\tau_{ti}\) are learnable temperature parameters used to scale the logits in the loss computation for each pair of embeddings.
Combining equations 4, 5, and 6, the final loss for which GeoCLAP is trained is as follows:
\[L=L_{at}+L_{ai}+L_{ti} \tag{7}\]
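A minimal PyTorch sketch of this objective is given below. It is an illustrative re-implementation of Eqs. (4)-(7), not the actual training code, and the helper names are ours; it assumes the per-modality embeddings are already l2-normalized, and it parameterizes each temperature through its logarithm so that it stays positive (a common practice, not a detail stated above).

```python
import torch
import torch.nn.functional as F

def pair_contrastive_loss(E_x, E_y, log_tau):
    """Symmetric contrastive loss between two batches of l2-normalized embeddings.

    E_x, E_y: (N, d) embeddings of one modality pair (e.g. audio and text).
    log_tau:  learnable scalar; tau = exp(log_tau) is the temperature.
    """
    logits = (E_x @ E_y.t()) / torch.exp(log_tau)            # (N, N) similarities
    targets = torch.arange(E_x.size(0), device=E_x.device)   # matching pairs on the diagonal
    # Cross-entropy in both directions (x -> y and y -> x), averaged.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

def geoclap_loss(E_a, E_t, E_i, log_tau_at, log_tau_ai, log_tau_ti):
    """Total loss of Eq. (7): sum of the three pairwise contrastive losses."""
    return (pair_contrastive_loss(E_a, E_t, log_tau_at)
            + pair_contrastive_loss(E_a, E_i, log_tau_ai)
            + pair_contrastive_loss(E_t, E_i, log_tau_ti))
```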
## 4 Experimental Details
### Data Preprocessing
For audio preprocessing, we convert each audio sample into a mel-spectrogram using the default settings {feature_size=64, sampling_rate=48000, hop_length=480, max_length_s=10, fft_window_size=1024} provided by the Hugging Face wrapper ClapProcessor for the pre-trained L-CLAP model clap-htsat-fused.
In the _SoundingEarth_ dataset, most of the audio recordings (all except 6333 samples) are also accompanied by a brief description and a title uploaded by the contributor. In order to have a textual description for all audio recordings, as well as to further encode geographic information in text, we use the Python client geopy to obtain the address of the location and append an additional sentence, _"The location of the sound is: [address]."_, to the textual description of each sample. For example, for the geolocation (52.509663, 13.376481), the added sentence would be _"The location of the sound is: Potsdamer Platz, Tiergarten, Mitte, Berlin, 10785, Germany"_. Following L-CLAP, we use the RobertaTokenizer with the parameter max_length set to 77.
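The address lookup can be reproduced with a short snippet like the following. This is a hypothetical sketch using geopy's Nominatim geocoder; the choice of geocoding backend and the function name are our assumptions, not details stated above.

```python
from geopy.geocoders import Nominatim

geolocator = Nominatim(user_agent="soundscape-preprocessing")  # backend choice is illustrative

def add_address(description: str, lat: float, lon: float) -> str:
    """Append the reverse-geocoded address of (lat, lon) to an audio description."""
    location = geolocator.reverse((lat, lon))
    address = location.address if location is not None else "unknown"
    return f"{description} The location of the sound is: {address}."

# Example: add_address("Street ambience near a square.", 52.509663, 13.376481)
```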
For overhead imagery, we adopt the same data augmentation as SATMAE [7]. During training, we perform a _RandomResizedCrop_ with parameters {input_size=224, scale=(0.2,1.0)}, followed by a _RandomHorizontalFlip_. During inference, we extract a \(224\times 224\) center crop of the image.
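In torchvision these augmentations can be expressed as in the sketch below (using the parameters quoted above; the tensor conversion is added for completeness and normalization statistics, which would depend on the chosen pre-trained encoder, are omitted).

```python
from torchvision import transforms

# Training-time augmentation for overhead images (SATMAE-style settings).
train_transform = transforms.Compose([
    transforms.RandomResizedCrop(224, scale=(0.2, 1.0)),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
])

# Inference: deterministic 224 x 224 center crop.
eval_transform = transforms.Compose([
    transforms.CenterCrop(224),
    transforms.ToTensor(),
])
```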
### Implementation and metrics
We implement our code in PyTorch and utilise HuggingFace for loading the L-CLAP encoders and their respective data pre-processing wrappers. We split the dataset with a 70:10:20 ratio, yielding 35 554, 5079, and 10 159 samples in the training, validation, and test splits, respectively. For the baseline experiments, we ran the publicly available code of [19] using the data splits of our study. We used the experimental setting for their best reported results on the cross-modal retrieval task, which is as follows: {batch_size=256, encoders=ResNet18, latent_dim=128, loss=SymmetricCL, tau=0.2}. The baseline was trained for 300 epochs with the Adam optimizer and a learning rate of \(1e-3\).
#### 4.2.1 Encoders
We use the pre-trained model clap-htsat-fused [38] to encode audio and text. The audio encoder used in our study, HTSAT, has 4 swin-transformer blocks with a hidden feature dimension of 768. The text encoder, RoBERTa from [38], has 12 transformer blocks with a hidden feature dimension of 768. For both the audio and text encoders, we take the output of their respective L-CLAP projection layers, producing 512-dimensional embeddings. For encoding the overhead image, we use the pre-trained vit_base_patch16 encoder of SATMAE [7]. It processes the input as a sequence of \(16\times 16\) image patches passing through 12 layers of transformer blocks. In order to match the dimension of the audio and text embeddings, we pass the output of the SATMAE encoder through a ReLU activation followed by a 512-dimensional linear layer. Starting from the weights of these pre-trained encoders, we conduct two sets of experiments. First, we allow only the overhead-image encoder to train while freezing L-CLAP. Second, we allow fine-tuning of all encoders in our framework.
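The projection from the overhead-image encoder into the shared 512-dimensional space can be sketched as below; this is an illustrative module (the class name and the assumed 768-dimensional ViT-Base feature size are ours), not the exact implementation.

```python
import torch.nn as nn

class OverheadImageProjection(nn.Module):
    """Map pooled SATMAE ViT features to the 512-d shared embedding space."""
    def __init__(self, in_dim: int = 768, out_dim: int = 512):
        super().__init__()
        # ReLU activation followed by a linear layer, as described above.
        self.head = nn.Sequential(nn.ReLU(), nn.Linear(in_dim, out_dim))

    def forward(self, satmae_features):
        # satmae_features: (batch, in_dim) pooled encoder output
        return self.head(satmae_features)
```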
#### 4.2.2 Training
We train GeoCLAP using the contrastive loss objective presented in Equation 7. We initialize all three learnable temperature parameters to 0.07. We also run experiments with and without using _text_ in our framework. When using text, we further experiment with the impact of adding an additional sentence containing the detailed address of the location to the text. For experiments where we use the overhead image and audio only, we train our model with the image-audio contrastive loss represented by Equation 5. Moreover, for experiments using overhead image, audio, and text while keeping the L-CLAP encoders frozen, we train with \(L=L_{ai}+L_{ti}\). We use a training batch size of 256 for the baseline and for our experiments with frozen L-CLAP, and a batch size of 128 for experiments allowing fine-tuning of L-CLAP. We use the Adam optimizer and set the initial learning rate to \(5e-5\). We use weight_decay=0.2 and betas=(0.9,0.98). We use a cosine annealing learning rate scheduler with the number of warm-up iterations set to 2000. We set max_epochs to 100 for experiments with frozen L-CLAP and 30 for experiments allowing fine-tuning of L-CLAP.
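The optimizer and schedule can be set up as in the sketch below. The exact shape of the warm-up and decay is not fully specified above, so this linear-warm-up-plus-cosine-decay variant is one plausible realization rather than the exact implementation; the function name is ours.

```python
import math
import torch

def build_optimizer_and_scheduler(model, total_steps, warmup_steps=2000,
                                  lr=5e-5, weight_decay=0.2, betas=(0.9, 0.98)):
    """Adam with linear warm-up followed by cosine annealing of the learning rate."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr,
                                 betas=betas, weight_decay=weight_decay)

    def lr_lambda(step):
        if step < warmup_steps:                       # linear warm-up
            return step / max(1, warmup_steps)
        progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
        return 0.5 * (1.0 + math.cos(math.pi * progress))  # cosine decay to zero

    scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda)
    return optimizer, scheduler
```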
#### 4.2.3 Metrics
Following Heidler _et al_. [19], we use Recall@100 and Median Rank (Median-R) of the ground-truth as the evaluation metrics of our approach. We use the test set containing 10 159 samples as the gallery for both image-to-sound and sound-to-image retrieval evaluation.
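Both metrics can be computed directly from the pairwise similarity matrix over the test gallery. The sketch below is illustrative (names are ours) and assumes the \(i\)-th query's ground-truth match is the \(i\)-th gallery item.

```python
import torch

def retrieval_metrics(query_emb: torch.Tensor, gallery_emb: torch.Tensor, k: int = 100):
    """Recall@k and median rank for cross-modal retrieval with l2-normalized embeddings."""
    sims = query_emb @ gallery_emb.t()              # (N, N) cosine similarities
    gt_scores = sims.diag().unsqueeze(1)            # score of the ground-truth pair
    # Rank of the ground truth = number of gallery items scoring at least as high.
    ranks = (sims >= gt_scores).sum(dim=1).float()  # 1-based ranks
    recall_at_k = (ranks <= k).float().mean().item()
    median_rank = ranks.median().item()
    return recall_at_k, median_rank
```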
## 5 Evaluation
### Experiments with SoundingEarth data
Table 1 shows the results of our experiments with the _SoundingEarth_ dataset using the original overhead imagery of \(0.2m\) resolution. One of the interesting results from this table is that by just using the frozen pre-trained audio encoder from L-CLAP [38], while allowing only the overhead-image encoder to be fine-tuned, we already get an improvement of about 10 points in cross-modal retrieval. This highlights the advantage of leveraging the rich representation space of pre-trained models like L-CLAP. However, when we introduce the text modality into training, while still keeping both the text and audio encoders frozen, the image-to-sound Recall@100
drops to 0.32. L-CLAP was trained on a large corpus of text-audio pairs in which the textual descriptions of audio have relatively high quality. However, the primary focus of the _SoundingEarth_ dataset has been to collect geotagged audio from all around the world and associate it with high-resolution overhead imagery. We observed that the textual descriptions of audio in the _SoundingEarth_ dataset are noisy and do not reflect the type of textual prompts the L-CLAP models were trained on. In our experiments, we use three different types of text: the textual description of the audio, only the address of the audio, and text containing both the description and the address of the audio. We observed that, for any type of text, learning with frozen text representations lowers the performance compared to learning with frozen audio representations alone. With this observation, we decided to allow fine-tuning of the L-CLAP encoders. Accordingly, the performance of our approach noticeably improves to an image-to-sound Recall@100 of 0.384 when learning with overhead image and audio. The performance further improves to a Recall@100 of 0.423 with a Median Rank of 172 when we learn with overhead image, audio, and text, and improves again to a Recall@100 of 0.434 with a Median Rank of 159 when we add the address of the audio location to the text. This is an absolute improvement over the baseline of 0.178 points in image-to-sound Recall@100 and of 655 in Median Rank. We see similar trends on the sound-to-image retrieval task.
### Experiments with Sentinel data
Table 2 shows the results of our experiments with _Sentinel-2 cloudless_ imagery with \(10m\)_GSD_. We found that the performance in all of our experiments noticeably improved when using the lower-resolution overhead imagery. This choice also brought a 12.89% relative improvement in the baseline Recall@100 performance. We believe the reason for this improvement is the larger coverage of geographic area in a single overhead image with 10m _GSD_. Moreover, the lower-resolution Sentinel imagery is inherently blurry, offering some regularization effect during training and leading to improved generalizability of our models. Following similar trends as in Table 1, an absolute Recall@100 improvement of about 10 points is observed when using the pre-trained frozen audio encoder from L-CLAP. Similarly, the retrieval performance improves to 0.396 when the audio encoder is allowed to be fine-tuned. We also observe a gain in performance of fine-tuned GeoCLAP models trained with text containing the address. The best performance for GeoCLAP trained with all three modalities yields (Recall@100, Median Rank) of (0.450, 143) and (0.447, 144) for image-to-sound and sound-to-image retrieval, respectively. Compared to the baseline, this is a relative gain of 55.71% and 57.95% in Recall@100 on the image-to-sound and sound-to-image retrieval tasks, respectively.
| Experiment | Image Encoder | Text-Audio Encoder | Text | Address | R@100 (Image2Sound) | Median-R (Image2Sound) | R@100 (Sound2Image) | Median-R (Sound2Image) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Baseline [19] | ResNet18 | ResNet18 | ✗ | ✗ | 0.256 | 814 | 0.250 | 816 |
| ours | SATMAE | L-CLAP-frozen | ✗ | ✗ | 0.352 | 360 | 0.348 | 369 |
| ours | SATMAE | L-CLAP-frozen | ✓ | ✗ | 0.328 | 428 | 0.325 | 428 |
| ours | SATMAE | L-CLAP-frozen | ✗ | ✓ | 0.298 | 546 | 0.295 | 544 |
| ours | SATMAE | L-CLAP-frozen | ✓ | ✓ | 0.317 | 439 | 0.311 | 443 |
| ours | SATMAE | L-CLAP | ✗ | ✗ | 0.384 | 230 | 0.385 | 237 |
| ours | SATMAE | L-CLAP | ✓ | ✗ | 0.423 | 172 | 0.419 | 175 |
| ours | SATMAE | L-CLAP | ✗ | ✓ | 0.432 | 166 | 0.431 | 167 |
| ours | SATMAE | L-CLAP | ✓ | ✓ | **0.434** | **159** | **0.434** | **167** |

Table 1: Cross-modal retrieval performance for models using 0.2m GSD overhead imagery.
### Zero-Shot Soundscape Mapping
Utilising the rich representation space of our best-performing GeoCLAP model, we demonstrate zero-shot soundscape mapping using both text and audio queries. Soundscape maps, in our work, are the similarity-score heatmaps for a given query. Specifically, we use the appropriate encoder from GeoCLAP to produce an embedding of the query and embeddings for a dense set of overhead images in the region of interest. Then, the cosine similarity score between the query embedding and all overhead image embeddings is overlaid on the corresponding region to yield a soundscape map (Figure 2). In Figure 3, we demonstrate a country-scale soundscape map for the Netherlands. For this, we compute soundscapes for three prompts: {_This is a sound of car horn; This is a sound of chirping birds; This is a sound of animal farm_} and overlay them together to create a composite pseudo-color map. We compare this soundscape with ESRI's _Sentinel-2 land cover_ classes. We observe a strikingly high correlation between the related land-cover classes and the category of sound likely to be heard at the location. More such soundscape maps can be found in the supplemental material of this paper.
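Conceptually, producing such a map only requires ranking tile embeddings against the query embedding. The following is a minimal sketch; the tile ordering, function name, and arguments are illustrative assumptions, and the resulting score grid would then be overlaid on the map extent.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def soundscape_heatmap(query_emb: torch.Tensor, tile_embs: torch.Tensor, grid_shape):
    """Cosine-similarity heatmap for one text or audio query over a region.

    query_emb:  (d,)      embedding of the query from the matching GeoCLAP encoder
    tile_embs:  (H*W, d)  embeddings of the overhead tiles covering the region,
                          ordered row-major so they can be reshaped to the grid
    grid_shape: (H, W)    number of tiles along the two map axes
    """
    query_emb = F.normalize(query_emb, dim=-1)
    tile_embs = F.normalize(tile_embs, dim=-1)
    scores = tile_embs @ query_emb          # one similarity score per tile
    return scores.reshape(grid_shape)       # overlay on the map extent
```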
| Experiment | Image Encoder | Text-Audio Encoder | Text | Address | R@100 (Image2Sound) | Median-R (Image2Sound) | R@100 (Sound2Image) | Median-R (Sound2Image) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Baseline [19] | ResNet18 | ResNet18 | ✗ | ✗ | 0.289 | 620 | 0.283 | 635 |
| ours | SATMAE | L-CLAP-frozen | ✗ | ✗ | 0.384 | 274 | 0.381 | 271 |
| ours | SATMAE | L-CLAP-frozen | ✓ | ✗ | 0.340 | 369 | 0.338 | 367 |
| ours | SATMAE | L-CLAP-frozen | ✗ | ✓ | 0.311 | 453 | 0.304 | 461 |
| ours | SATMAE | L-CLAP-frozen | ✓ | ✓ | 0.337 | 378 | 0.331 | 370 |
| ours | SATMAE | L-CLAP | ✗ | ✗ | 0.396 | 199 | 0.396 | 205 |
| ours | SATMAE | L-CLAP | ✓ | ✗ | 0.441 | 152 | 0.441 | 155 |
| ours | SATMAE | L-CLAP | ✗ | ✓ | 0.441 | 153 | 0.440 | 156 |
| ours | SATMAE | L-CLAP | ✓ | ✓ | **0.450** | **143** | **0.447** | **144** |

Table 2: Cross-modal retrieval performance for models using 10m GSD overhead imagery.
Figure 2: Soundscape maps along with reference overhead image for two regions. Soundscape created for queries: (a) A textual prompt: _This is a sound of sea waves_; (b) randomly selected sound from the class chirping_birds from ESC50 database [31] (green: more probable, white: less probable).
## 6 Conclusion
We proposed GeoCLAP, a contrastive-learning framework capable of embedding the modalities of overhead imagery, audio, and text into a common space. Our approach significantly improves the state of the art for cross-modal retrieval between overhead imagery and audio. We utilise the learned, multi-modal representation space for soundscape mapping, demonstrating a simple and scalable way to create soundscape maps for any geographic area using only satellite imagery and audio or textual queries. With this approach, we can construct global, high-resolution soundmaps with minimal effort.
|
2309.08028 | Properties of stable ensembles of Euclidean random matrices | We study the spectrum of a system of coupled disordered harmonic oscillators
in the thermodynamic limit. This Euclidean random matrix ensemble has been
suggested as model for the low-temperature vibrational properties of glass.
Exact numerical diagonalization is performed in three and two spatial
dimensions, which is accompanied by a detailed finite size analysis. It reveals
a low-frequency regime of sound waves that are damped by Rayleigh scattering.
At large frequencies localized modes exist. In between, the central peak in the
vibrational density of states is well described by Wigner's semicircle law for
not too large disorder, as is expected for simple random matrix systems. We
compare our results with predictions from two recent self-consistent field
theories. | Philipp Baumgärtel, Florian Vogel, Matthias Fuchs | 2023-09-14T20:58:51Z | http://arxiv.org/abs/2309.08028v2 | # Properties of stable ensembles of Euclidean random matrices
###### Abstract
We study the spectrum of a system of coupled disordered harmonic oscillators in the thermodynamic limit. This Euclidean random matrix ensemble has been suggested as model for the low-temperature vibrational properties of glass. Exact numerical diagonalization is performed in three and two spatial dimensions. It reveals a low-frequency regime of sound waves that are damped by Rayleigh scattering. At large frequencies localized modes exist. In between, the central peak in the vibrational density of states is well described by Wigner's semicircle law expected for random matrix systems. We compare our results with predictions from two recent self-consistent field theories.
## I Introduction
The nature of the vibrational excitations in athermal amorphous solids remains an open and important topic affecting inter alia thermal properties of glasses at low temperatures [1]. The vibrational properties of glassy materials differ strongly from the ones of crystals [2] as is well established by scattering experiments [3; 4; 5; 6; 7]. Computer simulations of various particle models have shown that the preparation of the glass state matters and a number of diverse phenomena have been discovered [8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19]. As a consistent comprehensive theoretical picture is still lacking [20], a simple idealized model appears desirable, where a part of the phenomena can be studied in detail.
In 1999, Mezard, Parisi, and Zee introduced ensembles of Euclidean random matrices (ERM) [21], which were studied as simple models for low temperature glasses by Grigera and colleagues [22; 23; 24; 25], and by Schirmacher and colleagues [26; 27; 28; 29]. Random matrix ensembles have successfully been employed in a wide variety of physical systems with disorder [30; 31; 32], and the special ERM ensemble may arguably be considered the most idealized model for the vibrations in glass. _Particles perform harmonic motion around random positions. The restoring forces depend on the distances between the positions via a positive spring function, and translational invariance is postulated. Only properties averaged over the random positions are studied._ Based on diagrammatic perturbation expansions in field theoretic approaches, self-consistent theories for the central Greens functions have been developed [23; 24; 26; 27; 33]. They allow the (approximate) calculation of the vibrational density of states (vDOS) and of the dynamical structure factor.
While ERMs had been studied intensively up to a decade ago, open questions remained on the sound damping and on the spatial characteristics of the eigenmodes. In the present contribution, we investigate the simplest ERM model, already considered in [23; 27]: a homogeneous and isotropic system with a positive Gaussian spring function. The only state parameter turns out to be the rescaled density \(n\), which encodes the amount of disorder. We perform large-scale numerical investigations, including studies of finite size corrections, in order to reveal the complete characteristics of this specific ERM ensemble. We also compare with predictions from two self-consistent theories [23; 33], where different series of diagrams in the perturbation expansion [25] were re-summed.
## II Model
In this work we study an Euclidean random matrix (ERM) model [21; 29; 31] in which we consider \(N\) particles which are randomly placed in a box of volume \(V=L^{d}\). Here, \(d\) denotes the dimension of the system, and \(L\) is its length. We apply periodic boundary conditions to the system and study a uniform distribution of particles. The set of random positions \(\{\mathbf{r}_{i}\}\) will be called inherent positions. We consider a harmonic motion of the particles around their inherent positions, which leads us to the definition of a random matrix \(\mathbf{M}\) via the interaction potential \(U\)
\[U(\{\phi_{i}\})=\frac{1}{4}\sum_{i,j}f(\mathbf{r}_{i}-\mathbf{r}_{j})(\phi_{i}-\phi_{j})^{2}=\frac{1}{2}\sum_{i,j}M_{ij}\phi_{i}\phi_{j}, \tag{1}\]
with
\[M_{ij}=-f(\mathbf{r}_{i}-\mathbf{r}_{j})+\delta_{ij}\sum_{k}f(\mathbf{r}_{i}-\mathbf{r_{k}}). \tag{2}\]
Here, \(f(\mathbf{r})\) is called spring function and \(\phi_{i}\) is a small scalar displacement of particle \(i\) from its inherent site. In this work we consider the simple case, where the spring function is isotropic and given by the Gaussian
\[f(r)=\exp(-r^{2}/2), \tag{3}\]
with \(r\) the dimensionless distance. For positive spring functions, \(f(r)>0\), the potential \(U\) is positive and thus the matrix \(\mathbf{M}\) is positive semi-definite.
In the limit of large systems, a single state parameter \(n=N/V\) determines the properties of quantities averaged over the disorder.
In the harmonic approximation, the equations of motion of the system are given by
\[\ddot{\phi}_{i}=-\sum_{j=1}^{N}M_{ij}\phi_{j}\quad,\,\text{for}\,\,1\leq i\leq N \tag{4}\]
Translational invariance and hence momentum conservation follow immediately from the potential \(U(\{\phi_{i}\})\). Consequently, \(\mathbf{M}\) has the eigenvalue zero. The associated eigenvector \(\mathbf{e}^{0}\) corresponds to the uniform shift \(\mathbf{e}^{0}=(1,1,....,1)/\sqrt{N}\).
Note that we already set the length scale of our system to one by the definition of the spring function in Eq. (3), and frequency and time are also chosen as dimensionless quantities in Eq. (4).
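The construction of \(\mathbf{M}\) can be illustrated by a short NumPy sketch (a dense, minimum-image variant for small \(N\); the cut-off radius \(\sigma\) anticipates Sect. IV, and the function name is ours):

```python
import numpy as np

def build_erm_matrix(n: float, N: int, d: int = 3, sigma: float = 4.0, seed=None):
    """Dynamical matrix M of Eq. (2) for N particles at dimensionless density n.

    Positions are uniform in a periodic box of side L = (N/n)^(1/d); distances use
    the minimum-image convention and the Gaussian spring f(r) = exp(-r^2/2) is
    truncated at the cut-off radius sigma (cf. Sect. IV).
    """
    rng = np.random.default_rng(seed)
    L = (N / n) ** (1.0 / d)
    pos = rng.uniform(0.0, L, size=(N, d))        # inherent positions

    diff = pos[:, None, :] - pos[None, :, :]
    diff -= L * np.round(diff / L)                # minimum-image convention
    r2 = (diff ** 2).sum(axis=-1)

    f = np.exp(-0.5 * r2)
    f[r2 > sigma ** 2] = 0.0                      # truncate the spring function
    np.fill_diagonal(f, 0.0)                      # no self-interaction

    M = -f
    np.fill_diagonal(M, f.sum(axis=1))            # rows sum to zero: translational invariance
    return M

# The uniform shift e0 = (1, ..., 1)/sqrt(N) is then a zero mode, M @ e0 ≈ 0.
```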
## III Methods
We use two methods to study the characteristics of the system. The first one, in which we diagonalize the random matrix \(M_{ij}\), will be called normal mode analysis. The second one, where we solve the equations of motion, will be called excited wave analysis. In both cases, averages over the disorder are finally performed by sampling different inherent positions.
### Normal mode analysis
In the normal mode analysis [3; 8; 10; 11; 12; 13; 14], we calculate the eigenvalues \(\lambda^{k}\), corresponding to the eigenfrequencies \(\omega^{k}=\sqrt{\lambda^{k}}\), and the eigenvectors \(\mathbf{e}^{k}\) of the random matrix \(\mathbf{M}\). For this, we use the standard diagonalization routine of _matlab_ and a routine called lobpcg [34], which can handle sparse matrices efficiently. Note that the symmetry and positive semi-definiteness of \(\mathbf{M}\) ensure \(\lambda^{k}\geq 0\) and that the \(\mathbf{e}^{k}\) form an orthonormal basis.
The density of states per particle in the energy domain is calculated by [35]
\[g_{\lambda}(\lambda)=\frac{1}{N}\overline{\sum_{k}\delta(\lambda-\lambda^{k})} \tag{5}\]
and can be transformed into the frequency domain with \(\lambda=\omega^{2}\) leading to
\[g(\omega)=2\omega\;g_{\lambda}(\lambda(\omega)). \tag{6}\]
Here, the overbar denotes the average over disorder. As we expect discrete eigenfrequencies in finite systems, the density of states \(g(\omega)\) can be subject to inaccuracies occurring due to the binning process if the bin size is chosen incorrectly. A quantity which resolves this issue is called integrated density of states [36; 37; 38] and can be calculated by
\[I(\omega)=\int_{0}^{\omega}g(\omega^{\prime})\;\mathrm{d}\omega^{\prime}. \tag{7}\]
The integrated density of states counts the number of eigenfrequencies up to a frequency \(\omega\).
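Both quantities follow directly from the eigenfrequencies; a short NumPy sketch (illustrative, with function names of our choosing) reads:

```python
import numpy as np

def vdos(eigvals, bins=200):
    """Density of states g(omega) per particle, Eqs. (5)-(6), from the eigenvalues of M."""
    omega = np.sqrt(np.clip(eigvals, 0.0, None))          # eigenfrequencies
    hist, edges = np.histogram(omega, bins=bins, density=True)
    centers = 0.5 * (edges[1:] + edges[:-1])
    return centers, hist                                   # normalized to unit area

def integrated_dos(eigvals, omega_grid):
    """Integrated density of states I(omega), Eq. (7): fraction of modes below omega."""
    omega = np.sort(np.sqrt(np.clip(eigvals, 0.0, None)))
    return np.searchsorted(omega, omega_grid, side="right") / omega.size
```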
We calculate the dynamical structure factor by \(S(q,\omega)=2\omega\,S_{\lambda}(q,\lambda)\), where
\[S_{\lambda}(q,\lambda) =\sum_{k}Q^{k}(q)\delta(\lambda-\lambda^{k}), \tag{8}\] \[Q^{k}(q) =\frac{1}{N}\left|\sum_{j}e_{j}^{k}\;\exp(i\,qx_{j})\right|^{2}. \tag{9}\]
Here, we have assumed an excitation along the \(x\)-axis with \(\mathbf{q}=q\,\mathbf{\hat{e}}_{x}\). Note that throughout all analyses we only allow discrete wavevectors \(q_{l}=l\,2\pi/L\) with \(l=\pm 1,\pm 2,\ldots\), satisfying the periodic boundary conditions. The dynamic structure factor can be used to extract the dispersion relation \(\Omega(q)\) and the damping \(\Gamma(q)\) by fitting \(S(q,\omega)\) to a damped harmonic oscillator model [39; 40]
\[S(q,\omega)\propto\frac{\Omega^{2}(q)\Gamma(q)}{(\omega^{2}-\Omega^{2}(q))^{2} +\omega^{2}\Gamma^{2}(q)} \tag{10}\]
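In practice, the weights \(Q^{k}(q)\) and the damped-harmonic-oscillator fit can be obtained as in the following NumPy/SciPy sketch (illustrative names; the frequency binning of \(S(q,\omega)\) is omitted):

```python
import numpy as np
from scipy.optimize import curve_fit

def mode_weights(eigvecs, x, q):
    """Q^k(q) of Eq. (9) for an excitation along x; eigvecs has shape (N, N_modes)."""
    phase = np.exp(1j * q * x)              # (N,)
    amps = phase @ eigvecs                  # sum_j e_j^k exp(i q x_j) for every mode k
    return np.abs(amps) ** 2 / x.size       # (N_modes,)

def dho(omega, Omega, Gamma, A):
    """Damped harmonic oscillator line shape of Eq. (10) with free amplitude A."""
    return A * Omega**2 * Gamma / ((omega**2 - Omega**2)**2 + omega**2 * Gamma**2)

def fit_dho(omega, S, p0):
    """Fit Omega(q) and Gamma(q) to a measured S(q, omega); p0 = (Omega, Gamma, A)."""
    (Omega, Gamma, A), _ = curve_fit(dho, omega, S, p0=p0)
    return Omega, Gamma, A
```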
Another quantity which is used to characterise the eigenmodes of the systems is the participation ratio [10; 11; 13; 15]
\[P^{k}=\frac{1}{N}\frac{1}{\sum_{i}(e_{i}^{k}\,e_{i}^{k})^{2}}. \tag{11}\]
The participation ratio \(P^{k}=1/N\ll 1\) is small for an ideal localized mode involving only one particle, while \(P^{k}=2/3\) for an ideal plane wave.
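For orthonormal eigenvectors this is a one-line computation (NumPy sketch, function name ours):

```python
import numpy as np

def participation_ratios(eigvecs):
    """P^k of Eq. (11); eigvecs has shape (N, N_modes) with orthonormal columns."""
    N = eigvecs.shape[0]
    return 1.0 / (N * np.sum(eigvecs ** 4, axis=0))
```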
### Excited wave analysis
If the eigenvectors \(\mathbf{e}^{k}\) of the system are known, we can also easily solve the equations of motion by
\[\phi_{i}(t)=\sum_{k}u^{k}(t)e_{i}^{k}, \tag{12}\]
where
\[u^{k}(t)=\mathbf{e}^{k}\cdot\mathbf{\phi}(0)\cos(\omega^{k}t)+\mathbf{e}^{k}\cdot\dot{\bm {\phi}}(0)\frac{\sin(\omega^{k}t)}{\omega^{k}}. \tag{13}\]
The scalar product abbreviates the sum over particles, \(\mathbf{e}^{k}\cdot\mathbf{a}=\sum_{i}e_{i}^{k}\,a_{i}\). We can excite a standing wave with wavenumber \(q\) in our inherent structure by choosing, for example, \(\phi_{i}(0)=0\) and \(\dot{\phi}_{i}(0)=\sin(qx_{i}+\Phi)\) as initial conditions [12; 14; 9; 41], with \(\Phi=0,\pi/2\). Equivalent to fitting the damped harmonic oscillator model to the dynamic structure factor in the frequency domain, we can calculate the correlation function
\[C(q,t) =\overline{R(q,t)}, \tag{14}\] \[R(q,t) =\frac{\sum_{i}\dot{\phi}_{i}(0)\,\dot{\phi}_{i}(t)}{\sum_{i}\dot {\phi}_{i}(0)\,\dot{\phi}_{i}(0)} \tag{15}\]
and fit it with
\[C(q,t)=\exp(-\Gamma(q)\,t/2)\cos(\Omega(q)t). \tag{16}\]
Note that the average now also includes both phases \(\Phi=0,\pi/2\) for each set of inherent positions. This again allows us to extract the dispersion relation \(\Omega(q)\) and the attenuation \(\Gamma(q)\). While the dynamic structure factor includes a binning process in its calculation, the correlation function does not, yielding better results for small wavevectors \(q\).
We can rewrite the correlation function in terms of "hybridization" coefficients
\[\xi^{k}(q)=\frac{\mathbf{e}^{k}\cdot\dot{\mathbf{\phi}}(0)}{\sqrt{\dot{\mathbf{\phi}}(0) \cdot\dot{\mathbf{\phi}}(0)}} \tag{17}\]
by
\[R(q,t) =\frac{\sum_{k}\mathbf{e}^{k}\cdot\dot{\mathbf{\phi}}(0)\cos(\omega^{k}t) \mathbf{e}^{k}\cdot\dot{\mathbf{\phi}}(0)}{\dot{\mathbf{\phi}}(0)\cdot\dot{\mathbf{\phi}}(0)} \tag{18}\] \[=\sum_{k}(\xi^{k}(q))^{2}\cos(\omega^{k}t), \tag{19}\]
where \(\sum_{k}(\xi^{k}(q))^{2}=1\) must hold.
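Expressed in code, the correlation function of Eq. (19) follows directly from the eigen-decomposition. The sketch below (NumPy, illustrative names) evaluates \(R(q,t)\) for one realization and one phase \(\Phi\); averaging over realizations and both phases then yields \(C(q,t)\).

```python
import numpy as np

def velocity_correlation(eigvals, eigvecs, x, q, t, phase=0.0):
    """R(q, t) of Eq. (19) for the standing-wave excitation phi_dot(0) = sin(q x + phase).

    eigvals: (N_modes,) eigenvalues lambda^k of M
    eigvecs: (N, N_modes) corresponding eigenvectors as columns
    x:       (N,) particle coordinates along the excitation direction
    t:       (T,) times at which R(q, t) is evaluated
    """
    omega = np.sqrt(np.clip(eigvals, 0.0, None))
    v0 = np.sin(q * x + phase)                       # initial velocities
    xi = eigvecs.T @ v0 / np.linalg.norm(v0)         # hybridization coefficients, Eq. (17)
    weights = xi ** 2                                # sum to 1 for a complete mode set
    return np.cos(np.outer(t, omega)) @ weights      # (T,) correlation values
```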
## IV Numerical details
We study systems with periodic boundary conditions. In our case this means that the periodic boundaries influence the calculation of the dynamical matrix \(\mathbf{M}\). A particle at the boundary of the simulation box also interacts with the particles of the periodic copies of the simulation box. This enforces the selection of wavevectors \(q_{l}\) introduced above.
In order to use sparse matrices (storing fewer than \(N^{2}\) entries) for the large systems, we introduce a cut-off radius \(\sigma\) at which the spring function \(f(r)\) is truncated. We choose \(\sigma=4\). See below for a discussion of the accruing errors.
In order to obtain the randomly generated inherent structures we use the standard random number generator of _matlab_ which is the Mersenne Twister algorithm [42].
The main simulation parameter we will vary is the dimensionless density \(n\). At a given number of particles \(N\) the density determines the size of the periodic simulation box _via_\(L=(N/n)^{1/d}\). Hence, if we increase the number of particles \(N\) we also increase the size of the simulation box \(L\). At the same time we also get access to smaller wave vectors \(q\).
We calculate the full set of eigenvalues for system sizes up to \(N=4\times 10^{4}\) particles and the smallest 2000 eigenvalues for larger systems. This drastically reduces the computation time and especially the storage consumption. See below for a discussion of the accruing errors.
For each density \(n\) and system size \(N\) we perform calculations in 250 realizations of the random inherent positions and present the averages. When calculating the 3D participation ratios, we tested an ensemble size of \(5\times 10^{5}\) simulations and found qualitatively the same result.
## V Results and discussions
First we show the results obtained _via_ the normal mode analysis and afterwards we show the excited wave analysis.
Fig. 1a) shows the density of states for systems of size \(N=4\times 10^{4}\) for varying densities \(n\). As is often done, it is divided by the Debye behaviour \(g_{\rm D}(\omega)=A_{\rm D}\,\omega^{2}=\omega^{2}/\omega_{\rm D}^{3}\). Here \(A_{\rm D}\) is the Debye level and \(\omega_{\rm D}\) the Debye frequency.
We can observe a Debye spectrum for \(\omega\to 0\). It is expected because of the breaking of translational invariance in the solid inherent structures which leads to the existence of sound waves for small wavevector. For lowering \(n\), viz. increasing
Figure 1: **(a)** Density of states \(g(\omega)\) and **(b)** integrated density of states \(I(\omega)\) divided by the Debye behaviour for different densities \(n\) (see legend) and as function of the rescaled frequency. The dashed lines indicate the Debye levels \(nA_{\rm D}/3\).
average separation of the inherent positions, the elastic restoring forces become weaker and the Debye-frequency decreases. We observe an excess in the vDOS above the Debye-law for larger frequencies \(\omega\), which we call the boson peak of the ERM. This interpretation will be discussed in the Sect. VI. The location of the boson peak \(\omega_{\rm BP}\) scales with \(\sqrt{n}\). Fig. 1b) shows the integrated density of states \(I(\omega)\) divided by the Debye behaviour for the same densities \(n\). The dashed lines indicate the Debye levels \(A_{\rm D}/3\), which can most directly be extracted from these rescaled \(I(\omega)\) data.
Fig. 2 shows the vDOS presented as function of \(\omega-\omega_{\rm BP}\). In this plot the curves coincide quite well [43], highlighting that the majority of eigenfrequencies is accumulated in the boson peak. The overall shape of the vDOS is well described by Wigner's semicircle law, which would hold if the entries of \({\bf M}\) were independent and identically distributed zero-mean Gaussian entries [31]. The semicircle law is shown by the dashed lines in Fig. 2; note that for simplicity its normalization to unity is not adjusted to fit the data best. The semicircle is (in energy space) located around \(\omega_{\rm BP}^{2}\) and has a radius of \(R=n\sqrt{2a}\), where \(a=\hat{f}(0)/(\sqrt{8}n)\). In frequency space the explicit form is given by [33]
\[g_{\rm wigner}(\omega)=\frac{4\,\omega}{\pi R^{2}}\sqrt{R^{2}-(\omega^{2}- \omega_{\rm BP}^{2})^{2}}. \tag{20}\]
Note that \(g_{\rm wigner}(\omega_{\rm BP})=4\,\omega_{\rm BP}/(\pi R)=4/\pi\,2^{1/4}\propto n ^{0}\). A shift of the boson peak frequency is the dominant effect when changing the disorder via changing \(n\). The plot hides the \(n\)-dependence of the vDOS for small frequencies, as the Debye-level is very low compared to the boson peak amplitude for the considered \(n\).
From \(g(\omega)\) and \(I(\omega)\) we extract the density dependence of the relevant frequencies, \(\omega_{\rm D}\) and \(\omega_{\rm BP}\) shown in Fig. 3. The uncertainty of \(\omega_{\rm D}\) is calculated from the confidence interval of the fit of the Debye-level \(A_{\rm D}\) and the uncertainty of \(\omega_{\rm BP}\) is estimated by evaluating the frequencies of the bins left and right of the maximum of the vDOS. Since the uncertainties in \(\omega_{\rm BP}\) are very small they are omitted throughout this work. The energy scale of the boson peak arises from the pairwise interaction among all particles. This explains the scaling \(\omega_{\rm BP}\propto n^{1/2}\). For high enough \(n\), the boson peak position is the square-root of the mean value of the diagonal entries \(\omega_{\rm BP}^{2}=\overline{M_{ii}}=n\hat{f}(0)\)[33].
Considering the amplitude of the Debye-law, \(A_{D}=1/\omega_{D}^{3}\), we observe that \(A_{D}\propto n^{-5/2}\) and thus \(\omega_{\rm D}\) scales with \(n^{5/6}\). The increase of the Debye frequency with decreasing \(n\) is a non-trivial effect of the increasing disorder, which will be explained based on the dispersion relations shown in Fig. 5 below. While the obtained boson peak positions \(\omega_{\rm BP}\) perfectly match with the values from the theory of [33], we find small deviations in the Debye frequency \(\omega_{\rm D}\) and therefore also in the ratios \(\omega_{\rm BP}/\omega_{\rm D}\). Still, the scaling of \(\omega_{\rm D}\) and \(\omega_{\rm BP}/\omega_{\rm D}\) with the density \(n\) matches well with the theory.
Fig. 4 shows the dynamic structure factors \(S(q,\omega)\) at \(n=1.0\) for 4 different \(q\) values. The structure factors are characterised by a pronounced peak which shifts to higher frequencies \(\omega\) with increasing wave vector \(q\). At the same time the peak broadens and its height decreases. In the limit of \(q\to\infty\), \(S(q,\omega)\) approaches the vDOS \(g(\omega)\) shown by the dashed line.
Figure 2: Density of states \(g(\omega)\) for different densities \(n\) (see legend) plotted as function of \(\omega-\omega_{\rm BP}\). The dashed line shows the predicted Wigner semicircle law [33].
We use Eq. (10) to extract the dispersion relation \(\Omega(q)\) from the structure factor. The results are shown in Fig. 5a). We observe a linear dispersion relation \(\Omega(q)=c_{T}q\) for small wave vectors, and that \(\Omega(q)\) saturates towards \(\omega_{\rm BP}\) for \(q\rightarrow\infty\). Here, \(c_{\rm T}\) denotes the speed of sound; it inherits the scaling \(c_{T}\propto\sqrt{n}\) from the dispersion relation. For increasing density \(n\) the slope of the dispersion relation, and therefore the speed of sound \(c_{\rm T}\), gets larger; the system gets stiffer as the average separation of particles gets smaller. The dispersion relation obtained _via_ the excited wave analysis is shown exemplarily for \(n=1.0\) by the purple circles and agrees perfectly with the dispersion relation from the structure factor.
We compare the dispersion relation with the bare dispersion, \(\Omega_{0}(q)=\sqrt{\epsilon_{0}(q)n}=\sqrt{(\hat{f}(0)-\hat{f}(q))n}\), at the smallest density \(n=0.5\). The largest deviations from the bare dispersion should be observed at the biggest disorder, i.e. the smallest density. Yet, at the largest simulated disorder we still observe a good agreement between the bare dispersion and \(\Omega(q)\). Comparing with the self-consistent theories [24; 33], one notices that both overestimate the effects of disorder; see Fig. 1 in Ref. [33], where dressed and bare dispersion relations are shown. Both predict a stronger change of \(\Omega(q)\) than observed, yet the more elaborate resummation including non-planar diagrams lies closer to the data.
In Fig. 5b) we show the rescaled dispersion relation \(\Omega(q)\,\omega_{\rm BP}^{-1}\) as function of the rescaled wave vector \(q/q_{BP}\) with \(q_{BP}=\omega_{\rm BP}/\,c_{\rm T}\). The rescaled data collapse very well indicating that the boson peak frequency \(\omega_{\rm BP}\) is the relevant frequency for the rescaling. Using the Debye-frequency does not lead to a comparable rescaling (not shown).
The scaling of the sound velocity with density, \(c_{T}\propto\sqrt{n}\), also explains the scaling of the Debye frequency, as \(A_{D}\approx 1/(2n\pi^{2}c_{T}^{3})\) approximately holds for high enough densities [22; 33]. The Debye amplitude \(A_{D}\) grows for increasing disorder because the sound velocity softens. Additionally, \(A_{D}\) is inversely proportional to the number of degrees of freedom, which become fewer with lowering \(n\) at fixed volume. Both effects together cause the dependence \(\omega_{D}\propto n^{5/6}\) in \(d=3\) which is appreciably stronger than the dependence of the boson peak frequency on disorder.
Fig. 6 shows the participation ratios \(P^{k}\) for a single realisation of the inherent positions at different system sizes \(N\) at \(n=1.0\). We can clearly see frequencies with large participation ratios of magnitude close to the ideal plane wave at small frequencies \(\omega^{k}\). These frequencies have distinct gaps between them and belong to the
Figure 4: Dynamic structure factor \(S(q,\omega)\) for \(q_{4},q_{6},q_{8}\) and \(q_{25}\) (marked in Fig. 5; \(q\) increases from left to right) at \(n=1.0\). The density of states is shown by the black dashed line, which agrees with \(S(q_{25},\omega)\).
Figure 5: **(a)** Dispersion relation \(\Omega(q)\) for varying densities \(n\) (see legend in Fig. 2). The black arrows at \(n=1.0\) mark the \(q\) values at which the dynamic structure factors in Fig. 4 are shown. The black circles at \(n=0.5\) indicate the bare dispersion \(\sqrt{\epsilon_{0}(q)n}=\sqrt{(\hat{f}(0)-\hat{f}(q))n}\) and the purple diamonds show the dispersion relation obtained from the excited wave analysis. **(b)** Rescaled dispersion relation \(\Omega(q)\,\omega_{\rm BP}^{-1}\) in dependency of the rescaled wave vector \(q\,c_{\rm T}/\omega_{\rm BP}\). Additionally, the prediction from Ref. [33] (black dashed line) and from Ref. [23] (green dashed line) both at \(n=1.0\) are shown.
phonon bands which have discrete frequencies due to the finite size of our system. The first participation ratios are therefore located at \(\omega=c_{\rm T}2\pi/L\).
With increasing \(\omega\) the participation ratio gets smaller, and at the boson peak it drops distinctly. Above the boson peak we observe localized modes. The behaviour is qualitatively the same for different system sizes. We have run additional simulations for an increased ensemble size of \(5\times 10^{5}\) systems each with \(N=10^{4}\). We calculate the average, minimum and maximum participation ratio in distinct frequency bins. The maximum and minimum participation ratio per bin is shown in Fig. 6 by the red dash-dotted lines and the average participation ratio by the full line. The large ensemble confirms the overall shape of the participation ratio distribution. Contrary to the interpretation in [33], we do not find (quasi-) localised modes. This may be a consequence of studying scalar excitations [20; 44].
We can now turn to the results obtained by the excited wave analysis. Fig. 7 shows the velocity correlation function \(C(q,t)\) for \(q_{4}\) and \(q_{7}\). One can estimate the uncertainties of \(C(q,t)\) by the standard deviation. For better visibility we do not show them in Fig. 7 because they are smaller than the symbols.
Clearly, a damped oscillation can be observed where the damping and the frequency becomes larger for large \(q\). For some \(q\) there are beats with large magnitude for larger times \(t\) which can be somewhat eliminated by averaging over different inherent structures. However, we get a more reliable result for the damping if we fit \(\exp(-\Gamma(q)t)\) to the envelope \(C_{\rm env}(q,t)\) of the correlation function.
Fig. 8a) and Fig. 8b) show the envelope \(C_{\rm env}(q,t)\) for the wavevectors \(q_{4}\) and \(q_{7}\) at \(N=4\times 10^{5}\). Here, the uncertainties are those of \(C(q,t)\) evaluated at the respective maxima and minima. The initial decay of the envelope is exponential (a fit is shown by the dashed black lines), but large deviations from the exponential decay are visible. This effect was already observed and discussed in [12]. They argued that this is a finite size effect and that the deviations start at a system-size dependent time and are stronger for smaller wavevectors. Our results confirm this observation, as can be seen in Fig. 8c), where the envelope of the correlation function for three different system sizes at a similar wavevector \(q\) is shown. The uncertainty of the damping \(\Gamma(q)\) therefore is due to the uncertainty of the range in which the envelope can be fitted by an exponential.
Figure 6: Participation ratio \(P^{k}\) in dependency of the eigenfrequencies \(\omega^{k}\) for system sizes \(N=10^{4}\) (purple), \(N=3\times 10^{4}\) (green) and \(N=3\times 10^{5}\) (cyan). Note that in the last case, only the 2000 lowest participation ratios can be calculated. The black dashed line is located at the boson peak frequency. The red lines show the average, maximum and minimum participation ratio of a large ensemble of \(N=10^{4}\) systems.
Figure 7: Velocity correlation function \(C(q,t)\) of \(N=4\times 10^{5}\) systems at \(n=1.0\) for \(q_{4}\)**(a)** and \(q_{7}\)**(b)**. The black dashed lines correspond to a fitted damped oscillation.
So far, we neglected the fact that only the 2000 smallest eigenvectors are calculated for systems with \(N>4\times 10^{4}\). We will now argue that the 2000 smallest eigenvectors are sufficient to capture the small \(q\) behaviour of the velocity correlation function. For this we look at the hybridization coefficients shown in Fig. 9.
As can be seen, for small values \(q\) we get very large hybridization coefficients only for a narrow band of frequencies. As \(q\) increases, a wider range of frequencies is involved in the response of the system to a standing wave. At a certain value of \(q\) the frequencies with large hybridization coefficients start to lie outside of the smallest 2000 eigenvalues. At this point the approximation of using only the 2000 smallest eigenvectors necessarily fails. In Fig. 9b) we show the sum over the hybridization coefficients for different values of \(q\). As long as this sum is 1 our approximation for \(C(q,t)\) is valid. If the sum distinctively differs from 1 we have not included enough frequencies and the approximation fails. For the shown system this is the case at \(q>q_{7}\).
Finally, we show the obtained damping \(\Gamma(q)\) in Fig. 10. We observe that the attenuation becomes larger for increasing wave vectors \(q\). The damping saturates for large \(q\). For very small \(q\) we observe a weak quartic (Rayleigh) damping which can be fitted by \(\Gamma(q)\sim q^{4}\). The theoretically predicted prefactor \(B_{R}\) [33] lies within a factor of two of the exact prefactor. The theory prediction \(B_{\rm R}q^{4}\) is included in Fig. 10. In the high-density regime, \(B_{R}\approx\frac{7}{48\pi}\frac{\omega_{R}^{4}}{\hbar\omega}\) holds around the sound pole.
Figure 8: Envelope of the velocity correlation functions shown in Fig.7 for \(q_{4}\)**(a)** and \(q_{7}\) at \(n=1.0\)**(b)**. An exponential decay is fitted and is drawn by the black dashed line. **(c)** Envelope \(C_{\text{env}}(q,t)\) of three different system sizes at a similar wavevector.
Figure 9: **(a)** Hybridization coefficients for the 7 smallest values of \(q\) in a system at size \(N=4\times 10^{5}\). The larger \(q\) becomes, the wider the range becomes over which non-vanishing hybridization coefficients are spread. The corresponding values of \(\Omega(q)\) are indicated by the dashed grey lines. **(b)** The sum over the smallest 2000 hybridization coefficients in this system drastically drops as soon as \(q\) exceeds \(q_{7}\).
We have also studied 2D systems. Compared to their 3D counterparts, the box size \(L\) of the 2D systems at our maximum particle number of \(N=10^{6}\) is larger. Hence, smaller wavevectors \(q\) are accessible in the 2D systems. In general, the 2D systems behave quite similarly to their 3D counterparts. Note that the Debye behaviour becomes \(g(\omega)=A_{\rm D}\,\omega\) in 2D with \(A_{D}=1/\omega_{D}^{2}\).
In Fig. 11 the density of states \(g(\omega)\), the dependences of the frequencies on density, \(\hat{\omega}(n)\), the damping \(\Gamma(q)\), and the participation ratios \(P^{k}\) of the 2D systems are shown. We observe a Debye spectrum for \(\omega\to 0\). Lowering the density \(n\) again increases the Debye level, i.e., decreases the Debye frequency. The boson peak frequency \(\omega_{\rm BP}\) again scales with \(\sqrt{n}\), while the Debye frequency scales with \(n^{1}\). We observe Rayleigh damping for small wavevectors \(q\). Note that in 2D the Rayleigh damping becomes \(\Gamma(q)\propto q^{3}\) [41].
In Fig. 11d) the corresponding participation ratios are shown. Again, as in 3D, a crossover from extended to localized modes can be observed, which in 2D happens already at frequencies below \(\omega_{BP}\). An ensemble of 500 systems allows us to characterize the distribution of \(P^{k}\), which lies lower at the boson peak frequency in 2D than in 3D; compare Fig. 11d) with Fig. 6.
Fig. 12 shows the contact number \(\overline{z}=M/N-1\), where \(M\) is the number of nonzero entries of \(\mathbf{M}\). Recall from Sect. IV that a finite cut-off \(\sigma\) was chosen in order to speed up the matrix diagonalizations. Thus the contact number, which would be infinite for the untruncated Gaussian spring function in Eq. (3), becomes finite.
Figure 11: 2D Systems. **(a)** Density of states \(g(\omega)\) for different densities \(n\) divided by the Debye behaviour. **(b)** Obtained and rescaled Debye frequency \(\hat{\omega}_{\rm D}=\omega_{\rm D}\,n^{-1}\) and boson peak frequency \(\hat{\omega}_{\rm BP}=\omega_{\rm BP}\,n^{-1/2}\) in dependency of the density \(n\). Additionally, the ratio \(\hat{\omega}_{\rm BP}/\hat{\omega}_{\rm D}\). **(c)** Sound attenuation for the system sizes \(N=10^{4}\) and \(N=4\times 10^{5}\) at \(n=1\). A \(q^{3}\) fit is shown for the small \(q\) regime. **(d)** Participation ratio \(P^{k}\) in dependency of the eigenfrequencies \(\omega^{k}\) at \(n=1\) for \(N=3\times 10^{4}\) (green) and \(N=4\times 10^{5}\) (purple). Only the 2000 lowest participation ratios can be calculated for \(N=4\times 10^{5}\). The black dashed line is located at \(\omega_{BP}\). The red lines show the average, maximum and minimum participation ratio of an ensemble of systems with \(N=3\times 10^{4}\).
Figure 10: Sound attenuation \(\Gamma(q)\) in dependency of the wave vector \(q\) at \(n=1.0\). The data obtained from different system sizes is combined. Note that the small systems are used to obtain the large \(q\) behaviour, while the large systems are used to obtain the small \(q\) behaviour. The magenta colored line corresponds to Rayleigh damping \(\Gamma(q)=B_{\rm R}q^{4}\), where the strength \(B_{\rm R}\) is taken from the theory [33]. The blue line shows the attenuation as calculated from the dynamic structure factor \(S(q,\omega)\).
The contact number depends linearly on the density \(n\). This dependency becomes evident if one considers a \(d\)-dimensional sphere with the cut-off radius \(\sigma\) and volume \(V_{d}\) around a test particle. The contact number \(z\) of the test particle is given by the number of other particles in the sphere and thus simply reads \(V_{d}\,n\). In Fig. 12 the expected contact number is shown by the dashed black lines. At all densities \(n\) the contact number is above Maxwell's isostatic stability criterion \(\overline{z}_{c}=2\,d\), which is indicated by the dash-dotted lines [45]. Thus we conclude that the cut-off leads to negligible errors only.
## VI Conclusions and Outlook
The aim of the present contribution is to argue that the ERM system at large contact numbers captures the pertinent phenomena of transverse vibrations in stable glass at vanishing temperature [3; 12; 13; 41]. Two spatial dimensions, \(d=3\) and \(d=2\), were studied. We used dimensionless frequencies and wavevectors, which can be mapped onto experimental systems in the following way: The frequency or time scale of the ERM model is given by the boson peak frequency, which was denoted \(\omega_{BP}\). The corresponding length scale is obtained using the (transverse) sound velocity, \(q_{BP}=\omega_{BP}/c_{T}\). Both scales allow us to map the vDOS and the dynamical structure factor onto measured data. The strength of the disorder then is the only free parameter. Within the ERM model with a Gaussian spring function it is quantified by the dimensionless density \(n\). When comparing to real systems, the amplitude of the Debye-level at the position of the boson peak, viz. \(g_{D}(\omega_{BP})=A_{D}\,\omega_{BP}^{2}\), relative to the amplitude of the boson peak, viz. \(g(\omega_{BP})\), can be used to match \(n\). This gives the dominant variation with disorder in the ERM system, when rescaled variables \(\omega/\omega_{BP}\) and \(q/q_{BP}\) are employed. Importantly, all parameters of the ERM model are thus easily accessible by experiment and by well-defined procedures.
Our numerical results should be compared to earlier numerical work on the same ERM system using diagonalizations and approximate calculations with the method of moments [22; 24]. We extend the range of accessible wavevectors so that clear statements on the damping of sounds modes becomes possible. Considering the slow crossover to \(\Gamma(q)\propto q^{d+1}\) exhibited in Fig. 10 and panel c) of Fig 11 for \(q\to 0\), we conclude that previously no statements on Rayleigh damping had been possible in the numerical ERM solutions.
We also compared our numerical results to predictions obtained from two self-consistent theories, one where all diagrams of first order in a diagrammatic perturbative expansion in \(1/n\) were resummed [23; 24], and a more recent one, where all diagrams of second order in \(1/n\) were re-summed [33]. Both theories qualitatively agree in the predictions of a Wigner-semicircle law for the boson peak and of a Debye-law at low frequencies, with the crucial difference of the sound damping. Non-planar diagrams in the diagrammatic perturbation expansion are required to correctly predict Rayleigh damping of sound. The non-planar diagrams capture non-local correlations of elastic fluctuations and arise first in second order in \(1/n\). Thus they were missed in the older approach which consequently predicts hydrodynamic damping \(\Gamma(q)\propto q^{2}\). See Ref. [33] for more details of the comparison of both theories.
Finally, let us address the interpretation of the main peak in the vDOS of the ERM model. When discussing Fig. 1 and Fig. 2, we called this peak the boson peak and suggested considering it as a simple model of the (transverse contribution to the) boson peak in the vDOS of real glasses of simple particle systems. This interpretation, which differs from the older literature on the ERM system [23; 24; 26], rests on the following arguments:
\((i)\) The boson peak in the vDOS of real glasses survives in the zero-temperature limit and thus arguably should be contained in a harmonic approach such as the ERM model.
\((ii)\) The reported universality of the boson peak [44; 46] is mirrored in its origin in the disorder, which is the single conceptual extension of the harmonic approach to disordered solids beyond the classical Born-Debye theory of crystals: The nature of the dominant normal modes changes in the frequency region of the boson peak. While the vibrational modes are mainly extended below \(\omega_{BP}\), they are localised above the peak [47; 48; 49], which can be clearly seen in Fig. 6. This transition in the nature of modes has been related to the anomalous thermal properties of glasses
Figure 12: Contact number \(\overline{z}\) as a function of the density \(n\) for the 3D (red circles) and 2D (purple diamonds) systems. The dashed lines show the expected contact numbers and the dash-dotted lines are located at the isostatic stability criterion \(\overline{z}_{c}=2\,d\).
[47; 50; 51].
(\(iii\)) It was argued in [48; 49] that the normal modes in the region of the boson peak follow the statistics of a Gaussian orthogonal random matrix ensemble (GOE). This is why the shape of the boson peak resembles Wigner's semicircle law, which is one of the hallmarks of a GOE matrix [52]. Clearly, our peak shown and analysed in Figs. 1, 2, 6, and 11 is in accordance with this characteristic of the boson peak. This also hints at why the planar ERM model [23; 24] describes this part of the spectrum rather well, both quantitatively and qualitatively. It is known that only the planar diagrams survive in the thermodynamic limit for GOE matrices [52]. Again, see [33] for a detailed comparison of the planar and non-planar self-consistent models.
(\(iv\)) The density of states of a stable ERM system exhibits only a single peak as seen in simulations of stable glasses [41; 13]. Arguably, the vibrational phenomena we presented agree qualitatively and even semi-quantitatively with these simulations.
(\(v\)) An alternative explanation for the origin of the prominent peak in the ERM model is that it is a smeared-out van Hove singularity [53]. But this cannot rationalise the observed transition from extended to localised eigenmodes at \(\omega_{BP}\). Almost all eigenmodes above \(\omega_{BP}\) are localized (for the considered simple ERM spring function) and, while the dispersion relation approaches \(\omega_{BP}\) in the limit of \(q\rightarrow\infty\), the number of extended modes in the boson peak region is far too small to justify its interpretation as a smeared van Hove singularity. This is further supported by the analytic derivation of Wigner's semicircle law. It rests on the solution of the self-consistency equation for \(g(\omega)\) neglecting the coupling to other modes [23; 33]. Again, this is in accordance with the observed transition in the nature of normal modes.
The present study considered the simplest ERM system of a Gaussian spring function. Work is under way to extend it to richer spring functions. Additionally, recent works have shown that the vectorial character of the displacement field in glass is important as vortex-like eigenmodes become possible [11; 20; 44; 54]. These structures can be studied in appropriately generalized ERM systems.
## VII Acknowledgments
We thank Giancarlo Ruocco, Walter Schirmacher, and Grzegorz Szamel for fruitful discussions. The work was supported by the Deutsche Forschungsgemeinschaft (DFG) via SFB 1432 project CO7.
|
2309.09336 | Unleashing the Power of Dynamic Mode Decomposition and Deep Learning for
Rainfall Prediction in North-East India | Accurate rainfall forecasting is crucial for effective disaster preparedness
and mitigation in the North-East region of India, which is prone to extreme
weather events such as floods and landslides. In this study, we investigated
the use of two data-driven methods, Dynamic Mode Decomposition (DMD) and Long
Short-Term Memory (LSTM), for rainfall forecasting using daily rainfall data
collected from India Meteorological Department in northeast region over a
period of 118 years. We conducted a comparative analysis of these methods to
determine their relative effectiveness in predicting rainfall patterns. Using
historical rainfall data from multiple weather stations, we trained and
validated our models to forecast future rainfall patterns. Our results indicate
that both DMD and LSTM are effective in forecasting rainfall, with LSTM
outperforming DMD in terms of accuracy, revealing that LSTM has the ability to
capture complex nonlinear relationships in the data, making it a powerful tool
for rainfall forecasting. Our findings suggest that data-driven methods such as
DMD and deep learning approaches like LSTM can significantly improve rainfall
forecasting accuracy in the North-East region of India, helping to mitigate the
impact of extreme weather events and enhance the region's resilience to climate
change. | Paleti Nikhil Chowdary, Sathvika P, Pranav U, Rohan S, Sowmya V, Gopalakrishnan E A, Dhanya M | 2023-09-17T17:58:06Z | http://arxiv.org/abs/2309.09336v1 | Unleashing the Power of Dynamic Mode Decomposition and Deep Learning for Rainfall Prediction in North-East India.
###### Abstract
Accurate rainfall forecasting is crucial for effective disaster preparedness and mitigation in the North-East region of India, which is prone to extreme weather events such as floods and landslides. In this study, we investigated the use of two data-driven methods, Dynamic Mode Decomposition (DMD) and Long Short-Term Memory (LSTM), for rainfall forecasting using daily rainfall data collected from India Meteorological Department in northeast region over a period of 118 years. We conducted a comparative analysis of these methods to determine their relative effectiveness in predicting rainfall patterns. Using historical rainfall data from multiple weather stations, we trained and validated our models to forecast future rainfall patterns. Our results indicate that both DMD and LSTM are effective in forecasting rainfall, with LSTM outperforming DMD in terms of accuracy, revealing that LSTM has the ability to capture complex nonlinear relationships in the data, making it a powerful tool for rainfall forecasting. Our findings suggest that data-driven methods such as DMD and deep learning approaches like LSTM can significantly improve rainfall forecasting accuracy in the North-East region of India, helping to mitigate the impact of extreme weather events and enhance the region's resilience to climate change.
Keywords:DMD LSTM rainfall forecasting Data driven methods
## 1 Introduction
Rainfall is one of the most important climatic variables that affect various aspects of human life and natural ecosystems. Accurate rainfall forecasting is crucial for effective disaster preparedness and mitigation. The North-East region of India, commonly known as the "Seven Sisters", is a topographically diverse
region with a unique mix of flora and fauna. However, it is also one of the most vulnerable regions in the world, with a high incidence of natural disasters, particularly floods and landslides. In this region, accurate rainfall forecasting is essential for effective disaster preparedness and mitigation, particularly in the face of increasing occurrences of extreme weather events.
The North-East region receives the highest annual rainfall in India, with its hilly terrain increasing its susceptibility to landslides and flash floods. The region also experiences cyclones and thunderstorms, which have the potential to cause extensive damage to infrastructure and agriculture. Climate change has exacerbated the frequency and intensity of these weather events, leading to prolonged dry spells and erratic rainfall patterns, further exacerbating the challenges faced by the region.
Rainfall prediction methods in the past relied on empirical relationships between atmospheric variables (temperature, humidity, wind, and pressure) using statistical [13] and dynamical approaches involving computer simulations [9]. However, their effectiveness was limited due to complex interactions and uncertainty. Recently, there has been increasing interest in using machine learning (ML) and data-driven techniques to enhance rainfall predictions [4].
The present work aims to build data driven models to predict rainfall in mm using monthly average rainfall data. The work involved implementing Deep Learning and Dynamic Mode Decomposition techniques and performing various experiments to determine the hyperparameters that yield the best results.
The dataset from the India Meteorological Department (IMD) is considered in this study. The dataset is processed to construct monthly average data, and a DMD model is built using it. A few key locations are considered from the chosen region, and a sliding-window based Deep Learning (DL) model is built using them. The models are then evaluated on the RMSE and MAE metrics.
Our results show that both the DMD and LSTM techniques are capable of capturing the patterns in the data, allowing the models to forecast rainfall effectively. Our DMD method obtained RMSE values ranging from 150.44 mm to 263.34 mm and MAE values ranging from 91.34 mm to 154.61 mm, while the DL approach on average obtained a normalized MAE value of 0.35 and a normalized RMSE value of 0.534.
The paper is structured as follows: section 2 provides an overview of the existing literature, section 3 outlines the proposed methodology, section 4 presents the results and corresponding discussion, and section 5 provides the conclusion.
## 2 Literature review
In recent years, several deep learning-based approaches have been proposed for rainfall forecasting. In the paper "a deep convolutional neural network with bi-directional long short-term memory model for short-term rainfall prediction" Convolutional Neural Networks (CNNs) combined with Long Short-Term Memory (LSTM) networks have been used to create a new network called Tiny-RainNet which can be used for rainfall forecasting from radar images (Zhang et
al)[14]. However, this approach predicts rainfall only one or two hours into the future.
Some recent studies have proposed incorporating location information into deep learning-based rainfall forecasting models. For example, (Men et al)[6] in the paper "Spatio-temporal Analysis of Precipitation and Temperature: A Case Study Over the Beijing-Tianjin-Hebei Region, China" proposed a deep learning-based framework that incorporates both spatial and temporal features for rainfall forecasting. In this approach, the spatial features are extracted using a Convolutional Neural Network (CNN), while the temporal features are extracted using a Long Short-Term Memory (LSTM) network. This paper presents a deep learning-based spatial-temporal modeling approach for rainfall prediction. The authors used a Convolutional Neural Network (CNN) to extract spatial features from rainfall data, and an LSTM network to capture temporal patterns. They evaluated their model on a large-scale dataset from the Beijing-Tianjin-Hebei region, and demonstrated its superior performance compared to traditional methods.
Another recent study by (Luo et al)[5] in the paper "PredRANN: The spatiotemporal attention Convolution Recurrent Neural Network for precipitation nowcasting" proposed a deep learning-based approach for rainfall prediction that incorporates a spatial-temporal attention mechanism. They used a combination of CNN and LSTM networks to extract spatiotemporal features from rainfall data, and then applied an attention mechanism to weight these features. Their experiments on real-world rainfall datasets showed that the proposed approach outperformed existing deep learning models for rainfall prediction.
Deep learning and data-driven methods show promise in enhancing rainfall predictions by incorporating spatiotemporal features and location information. They outperform traditional approaches and have the potential to mitigate risks associated with extreme weather events. This paper explores DL and DMD forecasting, tuning parameters to enhance accuracy.
## 3 Methodology
### Dataset
The dataset used in this study was collected from the India Meteorological Department (IMD) [[https://imdpune.gov.in/lrfindex.php](https://imdpune.gov.in/lrfindex.php)] [8]. It contains gridded rainfall data with a spatial resolution of \(0.25^{\circ}\times 0.25^{\circ}\) across India for 122 years, from 1901 to 2022. We focused on the North-East region of India from 1901 to 2018, considering 429 grid points between longitudes \(89.81^{\circ}\)E to \(98^{\circ}\)E and latitudes \(21.89^{\circ}\)N to \(30^{\circ}\)N (Fig. 1).
The daily rainfall data is averaged per month to create a new dataset of monthly average rainfall for each of the 429 grid points, used in DMD analysis. For DL techniques, we selected four key locations: Agartala, Guwahati, Imphal, and Itanagar, using monthly average rainfall data from the corresponding four grid points.
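A minimal pandas sketch of this preprocessing step is given below; the file name and the column names ("date", "lat", "lon", "rain_mm") are assumptions about the data layout, not the actual IMD format.

```python
import pandas as pd

# Assumed layout: one row per grid point per day.
df = pd.read_csv("imd_gridded_rainfall.csv", parse_dates=["date"])

# Keep the North-East window considered in the paper.
ne = df[df.lon.between(89.81, 98.0) & df.lat.between(21.89, 30.0)]

# Daily values -> monthly averages per grid point (rows: months, columns: grid points).
monthly = (ne.groupby([ne.date.dt.to_period("M"), "lat", "lon"])["rain_mm"]
             .mean()
             .unstack(["lat", "lon"]))
print(monthly.shape)   # (number of months, number of grid points); 429 grid points expected
```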
### Data Driven Modelling
Data-driven modeling uses data to develop accurate predictions and insights in various fields, like engineering and computer science. Unlike traditional methods, it doesn't rely solely on theoretical assumptions, making it flexible and applicable to a wide range of problems. Data-driven models handle large datasets efficiently, enabling researchers to understand complex systems and make precise predictions. In this study, we explore two data-driven techniques, DMD and DL, for time series rainfall forecasting.
#### 3.2.1 Dynamic Mode Decomposition (DMD)
Dynamic mode decomposition (DMD) is a data-driven method used to extract the underlying dynamic structures and patterns from complex, high-dimensional systems. DMD works by decomposing the data into a series of modes, each of which represents a spatial-temporal pattern of motion. These modes are determined by the eigenvectors of a matrix constructed from the data, and the associated eigenvalues represent the temporal dynamics of each mode (Tu et al)[12]. DMD has been successfully applied in a variety of fields: in [3] the authors used DMD to identify rice leaf disease, and in [7] the authors used DMD to both detect and classify defects in cantilever beams. Overall, DMD is rising in popularity across domains and has emerged as a powerful tool for understanding complex, high-dimensional systems, with the potential to revolutionize our understanding of a wide range of phenomena.
The steps for performing DMD are as follows:
1. Collect data from the system you want to analyze. This data should consist of time series measurements of the state variables of the system.
2. Create a matrix of "snapshots" from the data. Each column of the matrix represents a single snapshot of the system's state at a particular point in time. Let the matrix be denoted by \(\mathbf{X}=[\mathbf{x}_{1},\mathbf{x}_{2},\ldots,\mathbf{x}_{N}]\), where \(\mathbf{x}_{i}\in\mathbb{C}^{n}\).
3. Perform Singular Value Decomposition on the matrix of snapshots to obtain its left singular vectors, right singular vectors, and singular values. These will be used to construct the Dynamic Mode Decomposition.
Figure 1: Selected Grid Points in North-East India.
Let the SVD of \(\mathbf{X}\) be given by \(\mathbf{X}=\mathbf{U}\boldsymbol{\Sigma}\mathbf{V}^{*}\), where \(\mathbf{U}\) is a matrix of left singular vectors, \(\boldsymbol{\Sigma}\) is a diagonal matrix of singular values, and \(\mathbf{V}\) is a matrix of right singular vectors.
4. Using the SVD results, construct the DMD modes. These modes are the building blocks of the system's dynamics and describe how the system evolves over time. The DMD modes are given by \(\boldsymbol{\Phi}=\mathbf{X}^{\prime}\mathbf{V}\boldsymbol{\Sigma}^{-1} \mathbf{U}^{*}\), where \(\mathbf{X}^{\prime}\) is the matrix of snapshots with the last snapshot removed.
5. Calculate the eigenvalues of the DMD modes. These eigenvalues represent the frequencies and growth rates of the system's dynamics. The eigenvalues are given by the diagonal elements of the matrix \(\boldsymbol{\Lambda}=\mathbf{V}\boldsymbol{\Sigma}^{-1}\mathbf{U}^{*}\mathbf{ X}^{\prime}\mathbf{X}\).
6. Using the DMD modes and eigenvalues, reconstruct the system's dynamics. The time evolution of the system can be approximated as \(\mathbf{x}(t)\approx\sum_{k=1}^{r}\boldsymbol{\phi}_{k}e^{\omega_{k}t}b_{k}\), where \(\boldsymbol{\phi}_{k}\) is the \(k\)-th DMD mode, \(\omega_{k}\) is the \(k\)-th eigenvalue, \(r\) is the number of significant modes, and \(b_{k}\) is a set of coefficients that can be calculated from the initial conditions of the system.
Overall, Dynamic Mode Decomposition is a powerful tool for analyzing and understanding complex dynamical systems. It has applications in many fields, including physics, engineering, biology, and finance. Our workflow for DMD analysis is presented in Fig 2.
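For concreteness, a compact NumPy sketch of the procedure is given below. It follows the standard exact-DMD formulation of Tu et al. [12] with rank truncation, so the intermediate expressions may differ slightly from the notation in the steps above; the snapshot matrix used here is random placeholder data.

```python
import numpy as np

def dmd(X, r):
    """Exact DMD of a snapshot matrix X (columns are snapshots in time), truncated at rank r.

    Returns the DMD modes Phi, the discrete-time eigenvalues lam, and the amplitudes b.
    """
    X1, X2 = X[:, :-1], X[:, 1:]                       # consecutive snapshot pairs
    U, S, Vh = np.linalg.svd(X1, full_matrices=False)  # step 3: SVD of the snapshot matrix
    U, S, Vh = U[:, :r], S[:r], Vh[:r, :]              # rank-r truncation
    Atilde = U.conj().T @ X2 @ Vh.conj().T @ np.diag(1.0 / S)
    lam, W = np.linalg.eig(Atilde)                     # step 5: eigenvalues and eigenvectors
    Phi = X2 @ Vh.conj().T @ np.diag(1.0 / S) @ W      # step 4: DMD modes
    b = np.linalg.lstsq(Phi, X[:, 0], rcond=None)[0]   # step 6: amplitudes b_k from the initial snapshot
    return Phi, lam, b

# Tiny placeholder example.
X = np.random.rand(50, 60)
Phi, lam, b = dmd(X, r=10)
print(Phi.shape, lam.shape, b.shape)   # (50, 10) (10,) (10,)
```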
### Deep Learning
Deep learning is a subset of machine learning that uses artificial neural networks to learn from vast datasets. Its key advantages are automatic feature extraction and handling complex data like images, speech, and language without manual engineering. It excels in tasks like image recognition, speech processing, NLP, and autonomous driving.
Recurrent Neural Networks (RNNs) are a type of neural network designed to process sequential data by retaining and utilizing information from previous time steps. RNNs have been applied in a wide range of applications, including speech recognition, natural language processing, and time series forecasting. For example, in [10], the authors used an RNN-based LSTM model to predict COVID-19 in different states in India, and in [10], the authors used RNN-based models and a sliding-window based CNN model to predict stock prices.
Figure 2: Applying DMD to Rainfall data
One of the main disadvantages of RNNs is the vanishing gradient problem, which occurs when the gradients used to update the weights in the network become too small to have a significant impact on the network's performance. As a result, RNNs may have difficulty learning long-term dependencies and may perform poorly on tasks that require such dependencies. Additionally, RNNs may suffer from overfitting, where the network becomes too complex and begins to memorize the training data instead of generalizing to new data.
Researchers have developed various techniques to mitigate the vanishing gradient problem, including gating mechanisms such as LSTM (Hochreiter & Schmidhuber)[2] and GRU (Chung et al)[1] networks, as well as methods for gradient clipping and weight regularization (Pascanu et al)[11].
Long Short Term Memory (LSTM) is a type of recurrent neural network designed to handle the vanishing gradient problem that occurs in traditional RNNs when trying to propagate information over many time steps. LSTM networks consist of a set of memory cells that can selectively store or erase information over time, as well as input, forget, and output gates that control the flow of information into and out of the memory cells.
The proposed model architecture, as shown in Fig. 3, consists of the following key components: an LSTM layer with 64 units, capturing long-term dependencies in sequential data; a Dropout layer with a rate of 0.2, reducing overfitting; a Dense layer with 1 unit, initialized with zeros; and a Reshape layer to convert the output shape to [1, 1].
This architecture processes sequential data, applies regularization, and outputs a reshaped result.
The input sequence is passed through the LSTM layer, followed by the Dropout layer, Dense layer, and Reshape layer, ultimately resulting in the output.
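A hedged TensorFlow/Keras sketch of this architecture is shown below; the input window length and the optimizer are the hyper-parameters varied in Section 4, and the exact training code of the paper is not available to us.

```python
import tensorflow as tf

window_size = 14  # one of the input window sizes explored in Section 4

# LSTM-64 -> Dropout(0.2) -> Dense(1, zero-initialised) -> Reshape([1, 1]), as described above.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(window_size, 1)),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(1, kernel_initializer="zeros"),
    tf.keras.layers.Reshape([1, 1]),
])

# Learning rate 0.01 and MAE loss follow Table 2; the optimizer is one of those compared later.
model.compile(optimizer=tf.keras.optimizers.Nadam(learning_rate=0.01),
              loss=tf.keras.losses.MeanAbsoluteError())
model.summary()
```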
Figure 4: Deep Learning approach to Rainfall Data
Figure 3: Model Architecture
Fig. 4 shows the overall workflow of the LSTM approach. The monthly average data is first normalized and split into windows, which are then used to train the LSTM network. After training the network, given a past history window, we are able to predict rainfall for a future month.
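A small sketch of the normalisation and windowing step follows; the window length and the synthetic series are placeholders.

```python
import numpy as np

def make_windows(series, in_len, out_len=1):
    """Split a 1-D rainfall series into (input window, next value) training pairs."""
    X, y = [], []
    for i in range(len(series) - in_len - out_len + 1):
        X.append(series[i:i + in_len])
        y.append(series[i + in_len:i + in_len + out_len])
    return np.array(X)[..., None], np.array(y)[..., None]   # add a feature dimension

rain = np.random.rand(1416)                  # placeholder: 118 years x 12 monthly averages
rain = (rain - rain.min()) / (rain.max() - rain.min())      # min-max normalisation
split = int(0.8 * len(rain))                 # 80-20 train/test split, as in Sec. 3.4
X_train, y_train = make_windows(rain[:split], in_len=14)
X_test, y_test = make_windows(rain[split:], in_len=14)
print(X_train.shape, y_train.shape)
```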
### Model Training
#### 3.4.1 DMD
For the dynamic mode decomposition analysis, all locations in the dataset were used, and a subset of locations was also analyzed to test the sensitivity of the results to the number of locations included in the analysis. We also take the monthly average measurements corresponding to each location. The rows of the data matrix \(X\) correspond to the locations and the columns to the timestamps, which in our case are months. Using this data, we predict a period of 1 year from a period of 10 years of data, for different intervals.
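In code, this set-up reduces to building the locations-by-months matrix and propagating the DMD modes forward. The sketch below assumes the dmd() helper from the sketch in Sec. 3.2.1 is in scope and uses random placeholder data; the rank value is only an example taken from Table 1.

```python
import numpy as np

n_locations, n_months = 429, 120             # 10 years of monthly averages, one row per grid point
X = np.random.rand(n_locations, n_months)    # placeholder for the real monthly-average matrix

Phi, lam, b = dmd(X, r=118)                  # assumes the dmd() helper defined earlier; rank as in Table 1
t = np.arange(n_months, n_months + 12)       # the 12 months following the training window
forecast = np.real(Phi @ (b[:, None] * lam[:, None] ** t))
print(forecast.shape)                         # (429, 12): one year of forecasts per grid point
```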
#### 3.4.2 Deep Learning
Four locations, Agartala, Imphal, Guwahati, and Itanagar, were selected for deep learning analysis in the northeast region of India. The dataset covered a wide area, spanning from \(89.8^{\circ}\)E to \(98^{\circ}\)E and \(21.89^{\circ}\)N to \(30^{\circ}\)N. Each location's latitude and longitude values were used to identify it in the dataset. We conducted separate model training for each location using an 80-20 data split. Experiments with various optimizers and dropout layers were performed to improve model accuracy.
#### 3.4.3 Evaluation metrics
RMSE is a common performance metric for regression models that measures the average difference between predicted and actual values. A lower RMSE indicates better predictive performance.
The formula for the root mean squared error (RMSE) is:
\[\text{RMSE}=\sqrt{\frac{1}{n}\sum_{i=1}^{n}(y_{i}-\hat{y_{i}})^{2}}\]
Here, \(n\) represents the number of data points, \(y_{i}\) represents the actual value of the \(i\)-th data point, and \(\hat{y_{i}}\) represents the predicted value of the \(i\)-th data point.
MAE calculates the average absolute difference between actual and predicted values. It is suitable for data with high variability or outliers, but other metrics and qualitative factors may be considered for comprehensive evaluation.
The formula for the mean absolute error (MAE) can be written as:
\[\text{MAE}=\frac{1}{n}\sum_{i=1}^{n}|y_{i}-\hat{y_{i}}|\]
Here, \(n\) represents the number of data points, \(y_{i}\) represents the actual value of the \(i\)-th data point, and \(\hat{y_{i}}\) represents the predicted value of the \(i\)-th data point.
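Both metrics translate directly into NumPy; a minimal sketch:

```python
import numpy as np

def rmse(y_true, y_pred):
    return np.sqrt(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2))

def mae(y_true, y_pred):
    return np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred)))

print(rmse([1.0, 2.0, 3.0], [1.5, 2.0, 2.0]), mae([1.0, 2.0, 3.0], [1.5, 2.0, 2.0]))
```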
## 4 Results and Discussion
This section discusses the results obtained using Deep Learning and Dynamic Mode Decomposition (DMD) techniques to predict rainfall. Our results showed that both methods were able to accurately predict rainfall, demonstrating the potential of advanced computational methods to improve weather forecasting and better prepare for extreme weather events.
### Dmd
Table 1 shows results from Dynamic Mode Decomposition (DMD) experiments on a rainfall data matrix, constructed from 10 years of data, with predictions made for a single year. DMD was conducted at various projection ranks to obtain a low-dimensional representation of the data. The RMSE and MAE values indicate reasonably accurate rainfall predictions, ranging from 150.44 mm to 263.34 mm and 91.34 mm to 154.61 mm, respectively. Notably, the best performance was observed for the data from 1995-2005.
The rainfall prediction results for the year 2016, for Agartala, Guwahati, Imphal, and Itanagar, were obtained by constructing a matrix \(\mathcal{X}\) from rainfall data between 2005 and 2015, using a projection rank of 123. The forecasted precipitation for the entire year of 2016 is shown in Figure 5. The prediction performance was evaluated using the Mean Absolute Error (MAE) and Root Mean Square Error (RMSE), which were calculated to be 97.9195 mm and 158.5830 mm, respectively.
### Deep Learning
This section presents results from the TensorFlow time series prediction model for rainfall in four Indian cities: Itanagar, Guwahati, Agartala, and Imphal. The dataset is split into training and testing sets, and a window generator function is defined to create input and output windows for the model. The model's core includes a single layer of LSTM cells with 64 hidden units, followed by a dropout layer (rate=0.2) to prevent overfitting. Experiments with different optimizers, input/output window sizes, and dropout values were conducted to identify the
| Start Year | Stop Year | Predicted Duration | Rank | RMSE | MAE |
| --- | --- | --- | --- | --- | --- |
| 1929 | 1939 | 1 Year | 106 | 263.3429 | 154.6123 |
| 1941 | 1951 | 1 Year | 123 | 260.2758 | 144.6304 |
| 1954 | 1964 | 1 Year | 127 | 177.3236 | 107.8259 |
| 1973 | 1983 | 1 Year | 128 | 170.6351 | 109.3051 |
| 1995 | 2005 | 1 Year | 118 | 150.4379 | 91.3362 |
| 2000 | 2010 | 1 Year | 100 | 236.3857 | 124.9998 |
| 2005 | 2015 | 1 Year | 123 | 158.5830 | 97.9195 |

Table 1: DMD Results for forecasting 1 year
best-performing model. Evaluation was done using RMSE and MAE metrics. Table 2 presents various model parameters.
Table 3 shows the results of the experiments for Itanagar. The best-performing optimizer and parameter combination for MAE is Nadam with an input window size of 13, an output window size of 1, and a dropout of 0.2, with a value of 0.3707, and for RMSE it is AdamW with an input window size of 14, an output window size of 1, and a dropout of 0.2, with a value of 0.4527. Figure 6 shows the performance of the model while predicting on the test set.
Table 4 shows the results of the experiments for Imphal. The results suggest that the combination of the Nadam optimizer, an input window size of 14, an output window size of 1, and a dropout rate of 0.2 achieved the lowest MAE and RMSE values. Figure 7 shows the performance of the model while predicting on the test set.
| Learning Rate | Hidden Units | Loss Function |
| --- | --- | --- |
| 0.01 | 64 | MAE |

Table 2: Model Parameters
Figure 5: Predictions for regions of North East India in 2016 using DMD
Table 5 shows the results of the experiments for Guwahati. The results suggest that the combination of the Nadam optimizer, an input window size of 13, an output window size of 1, and a dropout rate of 0.2 achieved the lowest MAE and RMSE values. Figure 8 shows the performance of the model while predicting on the test set.
\begin{table}
\begin{tabular}{|l|l|l|l|l|l|} \hline \multirow{2}{*}{**Optimiser**} & \multirow{2}{*}{\begin{tabular}{l} **Input** \\ **Window** \\ \end{tabular} } & \multirow{2}{*}{\begin{tabular}{l} **Output** \\ **Window** \\ \end{tabular} } & \multirow{2}{*}{
\begin{tabular}{l} **Dropout** \\ **Dropout** \\ \end{tabular} } & \multicolumn{2}{l|}{**Metrics**} \\ \cline{5-6} & & & & 0 & 0.4422 & 0.6505 \\ \hline \multirow{4}{*}{AdamW} & 13 & 1 & 0.2 & 0.4421 & 0.6508 \\ \cline{2-6} & 14 & 1 & 0 & 0.4448 & 0.6533 \\ \cline{2-6} & 14 & 1 & 0.2 & 0.4477 & 0.6561 \\ \cline{2-6} & 15 & 1 & 0 & 0.4513 & 0.6593 \\ \hline \multirow{4}{*}{**Nadam**} & 13 & 1 & 0 & 0.4434 & 0.6534 \\ \cline{2-6} & 13 & 1 & 0.2 & 0.4378 & 0.6449 \\ \cline{1-1} \cline{2-6} & **14** & **1** & 0 & **0.424** & 0.6490 \\ \cline{1-1} \cline{2-6} & 15 & 1 & 0 & 0.4381 & 0.6446 \\ \cline{1-1} \cline{2-6} & 15 & 1 & 0.2 & 0.4423 & 0.6502 \\ \hline \end{tabular}
\end{table}
Table 4: Performance Analysis of the proposed method for Imphal
\begin{table}
\begin{tabular}{|l|l|l|l|l|l|l|} \hline \multirow{2}{*}{**Optimiser**} & \multirow{2}{*}{\begin{tabular}{l} **Input** \\ **Window** \\ \end{tabular} } & \multirow{2}{*}{\begin{tabular}{l} **Output** \\ **Window** \\ \end{tabular} } & \multirow{2}{*}{\begin{tabular}{l} **Dropout** \\ **MAE** \\ \end{tabular} } & \multirow{2}{*}{
\begin{tabular}{l} **Metrics** \\ **RMSE** \\ \end{tabular} } \\ \hline \multirow{4}{*}{AdamW} & 13 & 1 & 0 & 0.3637 & 0.5285 \\ \cline{2-6} & 13 & 1 & 0.2 & 0.3570 & 0.5249 \\ \cline{2-6} & 14 & 1 & 0 & 0.3650 & 0.5356 \\ \cline{2-6} & 14 & 1 & 0.2 & 0.3612 & 0.4527 \\ \cline{2-6} & 15 & 1 & 0 & 0.3792 & 0.5505 \\ \hline \multirow{4}{*}{**Nadam**} & 13 & 1 & 0.2 & 0.3812 & 0.5459 \\ \cline{2-6} & 13 & 1 & 0 & **0.3526** & **0.5260** \\ \cline{1-1} \cline{2-6} & 14 & 1 & 0.2 & 0.3550 & 0.5258 \\ \cline{1-1} \cline{2-6} & 14 & 1 & 0 & 0.3569 & 0.5344 \\ \cline{1-1} \cline{2-6} & 14 & 1 & 0.2 & 0.3695 & 0.5372 \\ \cline{1-1} \cline{2-6} & 15 & 1 & 0 & 0.3670 & 0.5412 \\ \hline \end{tabular}
\end{table}
Table 3: Performance Analysis of the proposed method for Itanagar
Figure 6: Prediction Visualisation for Itanagar
Table 6 shows the results of the experiments for Agartala. The results suggest that the combination of the Nadam optimizer, an input window size of 14, an output window size of 1, and a dropout rate of 0 achieved the lowest MAE and RMSE values. Figure 9 shows the performance of the model while predicting on the test set.
\begin{table}
\begin{tabular}{|l|l|l|l|l|l|} \hline \multirow{2}{*}{**Optimiser**} & \multirow{2}{*}{\begin{tabular}{l} **Input** \\ **Window** \\ \end{tabular} } & \multirow{2}{*}{\begin{tabular}{l} **Output** \\ **Window** \\ \end{tabular} } & \multirow{2}{*}{\begin{tabular}{l} **Dropout** \\ **MAE** \\ \end{tabular} } & \multirow{2}{*}{
\begin{tabular}{l} **Metrics** \\ **RMSE** \\ \end{tabular} } \\ \hline \multirow{4}{*}{AdamW} & 13 & 1 & 0 & 0.3402 & 0.4763 \\ \cline{3-6} & 14 & 1 & 0.2 & 0.3368 & 0.4827 \\ \cline{3-6} & 14 & 1 & 0 & 0.3341 & 0.4707 \\ \cline{3-6} & 15 & 1 & 0.2 & 0.3374 & 0.4668 \\ \hline \multirow{4}{*}{**Nadam**} & **13** & **1** & 0 & 0.3351 & 0.4746 \\ \cline{3-6} & 13 & **1** & **0.2** & **0.3264** & 0.4665 \\ \cline{3-6} & 14 & 1 & 0.2 & 0.3115 & 0.4555 \\ \cline{3-6} & 15 & 1 & 0.2 & 0.3091 & 0.4559 \\ \hline \end{tabular}
\end{table}
Table 5: Performance Analysis of the proposed method for Guwahati
Figure 8: Prediction Visualisation for Guwahati
Figure 7: Prediction Visualisation for Imphal
## 5 Conclusion
Our findings suggest that both Dynamic Mode Decomposition (DMD) and the proposed LSTM neural network are capable of accurately predicting rainfall. However, the LSTM model's ability to use memory cells to analyze and capture longer-term patterns and spatiotemporal dependencies allows it to forecast rainfall peaks more effectively than the DMD model. This enables it to give early warning for potential flood events. Early warning triggers can provide crucial information to decision-makers and emergency responders, allowing them to take timely actions to mitigate the impact of natural disasters. Overall, the results of this study demonstrate the potential of LSTM to improve rainfall prediction accuracy and to provide early warning triggers for effective disaster preparedness and mitigation, especially in the more vulnerable communities.
|
2309.11579 | On the quasi polynomiality of extremal homology of configuration spaces | Consider the unordered configuration spaces of manifolds. Knudsen, Miller and
Tosteson proved that the extremal homology groups of configuration spaces of
manifold are eventually quasi polynomials. In this paper, we give the precise
degree of top non-trivial quasi polynomials. This shows that the upper bound of
Knudsen, Miller and Tosteson for the degree of quasi polynomials is sharp for
every manifold. | Muhammad Yameen | 2023-09-20T18:31:55Z | http://arxiv.org/abs/2309.11579v1 | ###### Abstract
Consider the unordered configuration spaces of manifolds. Knudsen, Miller and Tosteson proved that the extremal homology groups of configuration spaces of manifolds are eventually quasi-polynomials. In this paper, we give the precise degree of the top non-trivial quasi-polynomials. This shows that the upper bound of Knudsen, Miller and Tosteson for the degree of the quasi-polynomials is sharp for every manifold.
**On the quasi polynomiality of extremal homology of configuration spaces**
by
Muhammad Yameen
**Key Words**: Configuration spaces, quasi polynomials, extremal stability
**Mathematics Subject Classification**: Primary 55R80, Secondary 55P62.
## 1 Introduction
For any manifold \(M,\) let
\[F_{n}(M):=\{(x_{1},\ldots,x_{n})\in M^{n}|x_{i}\neq x_{j}\,for\,i\neq j\}\]
be the configuration space of \(n\) distinct ordered points in \(M\) with the induced topology. The symmetric group \(S_{n}\) acts on \(F_{n}(M)\) by permuting the coordinates. The quotient
\[B_{n}(M):=F_{n}(M)/S_{n}\]
is the unordered configuration space with the quotient topology.
It is a well-known fact that the homology groups \(H_{i}(B_{n}(M);\mathbb{Q})\) vanish for \(i\geq\nu_{n},\) where \(\nu_{n}=(d-1)n+1.\) In the paper [5] (see also [6]), Knudsen, Miller and Tosteson prove that the extremal homology groups of configuration spaces of a manifold are eventually quasi-polynomials:
**Theorem 1**.: _Let \(M\) be a manifold of even dimension \(d\geq 2.\) For each \(i\geq 0,\) there is a quasi-polynomial in \(n\) of degree at most dim \(H_{d-1}(M;\mathbb{Q}^{w})-1\) and period at most 2, which coincides with dim \(H_{\nu_{n}-i}(B_{n}(M);\mathbb{Q})\) for all \(n\) sufficiently large._
If \(H_{d-1}(M;\mathbb{Q})=0,\) then the extremal homology groups \(H_{\nu_{n}-i}(B_{n}(M);\mathbb{Q})\) eventually vanish. Equivalently, Theorem 1 states that there are two polynomials \(p_{M}^{\nu_{n}-i}(n)\) and \(q_{M}^{\nu_{n}-i}(n)\) such that
\[Q_{M}^{\nu_{n}-i}(n)=\begin{cases}p_{M}^{\nu_{n}-i}(n)&\text{$n$ is even}\\ q_{M}^{\nu_{n}-i}(n)&\text{$n$ is odd.}\end{cases}\]
where \(Q_{M}^{\nu_{n}-i}(n)=\text{dim}H_{\nu_{n}-i}(B_{n}(M);\mathbb{Q}).\) The degree of the quasi-polynomial \(Q_{M}^{\nu_{n}-i}(n)\) is the maximum of \(deg(p_{M}^{\nu_{n}-i}(n))\) and \(deg(q_{M}^{\nu_{n}-i}(n)).\) We give the precise degree of the top non-trivial quasi-polynomials for every orientable manifold.
**Theorem 2**.: _Let \(M\) be a closed orientable manifold of even dimension \(d\geq 2.\) If \(H_{d-1}(M;\mathbb{Q})\) is non-trivial then the degree of quasi-polynomial \(Q_{M}^{\nu_{n}}(n)\) is dim\(H_{d-1}(M;\mathbb{Q})-1.\)_
If \(M\) is not closed then the homology group \(H_{\nu_{n}}(B_{n}(M);\mathbb{Q})\) vanishes. In this case, we will focus on the homology group \(H_{\nu_{n}-1}(B_{n}(M);\mathbb{Q}).\)
**Theorem 3**.: _Let \(M\) be an orientable manifold of even dimension \(d\geq 2.\) If \(M\) is not closed and \(H_{d-1}(M;\mathbb{Q})\) is non-trivial then the degree of quasi-polynomial \(Q_{M}^{\nu_{n}-1}(n)\) is dim\(H_{d-1}(M;\mathbb{Q})-1.\)_
In light of the main theorems, we formulate the following conjecture:
**Conjecture 1**.: _Let \(M\) be an orientable manifold of even dimension \(d\geq 2.\) If \(H_{d-1}(M;\mathbb{Q})\) and \(Q_{M}^{\nu_{n}-i}(n)\) are non-trivial then the degree of quasi-polynomial \(Q_{M}^{\nu_{n}-i}(n)\) is dim\(H_{d-1}(M;\mathbb{Q})-1\) for \(i\geq 0.\)_
**Remark 1**.: _Drummond-Cole and Knudsen [1] computed all the Betti numbers of configuration spaces of surfaces. From their computations, we see that Conjecture 1 is true for surfaces._
### Notations
\(\bullet\) We work throughout with finite dimensional graded vector spaces. The degree of an element \(v\) is written \(|v|.\)
\(\bullet\) The symmetric algebra \(Sym(V^{*})\) is the tensor product of a polynomial algebra and an exterior algebra:
\[Sym(V^{*})=\bigoplus_{k\geq 0}Sym^{k}(V^{*})=Poly(V^{even})\bigotimes Ext(V^{odd }),\]
where \(Sym^{k}\) is generated by the monomials of length \(k.\)
\(\bullet\) The \(n\)-th suspension of the graded vector space \(V\) is the graded vector space \(V[n]\) with \(V[n]_{i}=V_{i-n},\) and the element of \(V[n]\) corresponding to \(a\in V\) is denoted \(s^{n}a;\) for example
\[H^{*}(S^{2};\mathbb{Q})[n]=\begin{cases}\mathbb{Q},&\text{if $*\in\{n,n+2\}$}\\ 0,&\text{otherwise}.\end{cases}\]
\(\bullet\) We write \(H_{-*}(M;\mathbb{Q})\) for the graded vector space whose degree \(-i\) part is the \(i\)-th homology group of \(M\); for example
\[H^{-*}(\text{CP}^{m};\mathbb{Q})=\begin{cases}\mathbb{Q},&\text{if $*\in\{-2m,-2m+2,\ldots,0\}$}\\ 0,&\text{otherwise}.\end{cases}\]
## 2 Chevalley-Eilenberg complex
Felix-Thomas [3] (see also [2]) constructed the model for the rational cohomology of unordered configuration spaces of closed orientable even-dimensional manifolds. More recently, the identification was established in full generality by Knudsen in [4]. We will restrict our attention to the case of orientable even-dimensional manifolds.
Let us introduce some notations. Consider two graded vector spaces
\[V^{*}=H_{c}^{-*}(M;\mathbb{Q})[d],\quad W^{*}=H_{c}^{-*}(M;\mathbb{Q})[2d-1]:\]
where
\[V^{*}=\bigoplus_{i=0}^{d}V^{i},\quad W^{*}=\bigoplus_{j=d-1}^{2d-1}W^{j}.\]
We choose bases in \(V^{i}\) and \(W^{j}\) as
\[V^{i}=\mathbb{Q}\langle v_{i,1},v_{i,2},\ldots\rangle,\quad W^{j}=\mathbb{Q} \langle w_{j,1},w_{j,2},\ldots\rangle\]
(the degree of an element is marked by the first lower index; \(x_{i}^{l}\) stands for the product \(x_{i}\wedge x_{i}\wedge\ldots\wedge x_{i}\) of \(l\) factors). We always take \(V^{0}=\mathbb{Q}\langle v_{0}\rangle\). Now consider the graded algebra
\[\Omega_{n}^{*,*}(M)=\bigoplus_{i\geq 0}\bigoplus_{\omega=0}^{\lfloor\frac{n }{2}\rfloor}\Omega_{n}^{i,\omega}(M)=\bigoplus_{\omega=0}^{\lfloor\frac{n}{2} \rfloor}(Sym^{n-2\omega}(V^{*})\otimes Sym^{\omega}(W^{*}))\]
where the total degree \(i\) is given by the grading of \(V^{*}\) and \(W^{*}\). We call \(\omega\) the weight grading. The differential \(\partial:Sym^{2}(V^{*})\to W^{*}\) is defined as a coderivation by the equation
\[\partial(s^{d}a\wedge s^{d}b)=(-1)^{(d-1)|b|}s^{2d-1}(a\cup b),\]
where
\[\cup\,:H_{c}^{-*}(M;\mathbb{Q})^{\otimes 2}\to H_{c}^{-*}(M;\mathbb{Q})\]
(here \(H_{c}^{-*}\) denotes compactly supported cohomology of \(M\)). The degree of \(\partial\) is \(-1.\) It can be easily seen that \(s^{d}a,\,s^{d}b\in V^{*}\) and \(s^{2d-1}(a\cup b)\in W^{*}.\) The differential \(\partial\) extends over \(\Omega_{n}^{*,*}(M)\) by co-Leibniz rule. By definition the elements in \(V^{*}\) have length \(1\) and weight \(0\) and the elements in \(W^{*}\) have length \(2\) and weight \(1\). By definition of differential, we have
\[\partial:\Omega_{n}^{*,*}(M)\longrightarrow\Omega_{n}^{*-1,*+1}(M).\]
**Theorem 4**.: _If \(d\) is even, \(H_{*}(B_{n}(M);\mathbb{Q})\) is isomorphic to the homology of the complex_
\[(\Omega_{n}^{*,*}(M),\partial).\]
For a closed manifold, the compactly supported cohomology is the ordinary cohomology. In this case the two graded vector spaces are
\[V^{*}=H_{-*}(M;\mathbb{Q})[d],\quad W^{*}=H_{-*}(M;\mathbb{Q})[2d-1].\]
Now, we will define the dual complex of \((\Omega_{n}^{*,*}(M),\partial).\) First, we define a dual differential \(D\) on \(\Omega_{n}^{*,*}(M).\) The dual differential is defined as
\[D|_{V^{*}}=0,\quad D|_{W^{*}}:\,W^{*}\simeq H_{*}(M;\mathbb{Q})\xrightarrow{ \Delta}Sym^{2}(V^{*})\simeq Sym^{2}(H_{*}(M;\mathbb{Q})),\]
where \(\Delta\) is a diagonal comultiplication corresponding to cup product. By definition of differential, we have
\[D:\Omega_{n}^{*,*}(M)\longrightarrow\Omega_{n}^{*+1,*-1}(M).\]
**Theorem 5**.: _If \(d\) is even and \(M\) is closed, then \(H^{*}(B_{n}(M);\mathbb{Q})\) is isomorphic to the cohomology of the complex_
\[(\Omega_{n}^{*,*}(M),D).\]
## 3 Reduced Chevalley-Eilenberg complex
In this section, we define an acyclic subcomplex of \((\Omega_{n}^{*,*}(M),D).\)
**Theorem 6**.: _Let \(M\) be a closed orientable manifold of dimension \(d.\) The subspace_
\[\Omega_{n-2}^{*,*}(M).(v_{d}^{2},w_{2d-1})<\Omega_{n}^{*,*}(M)\]
_is acyclic for \(n\geq 2.\)_
Proof.: Let \(M\) be closed and orientable. An element in \(\Omega_{n-2}^{*,*}(M).(v_{d}^{2},w_{2d-1})\) has a unique expansion \(v_{d}^{2}A+Bw_{2d-1},\) where \(A\) and \(B\) have no monomial containing \(w_{2d-1}.\) The operator
\[h(v_{d}^{2}A+Bw_{2d-1})=Bv_{d}^{2}\]
gives a homotopy \(id\simeq 0.\)
We denote the reduced complex \((\Omega_{n}^{*,*}(M)/\Omega_{n-2}^{*,*}(M).(v_{d}^{2},w_{2d-1}),D_{\text{ induced}})\) by
\[(^{r}\Omega_{n}^{*,*}(M),D).\]
**Corollary 1**.: _If \(n\geq 2\) and \(M\) is closed orientable, then we have an isomorphism_
\[H^{*}(^{r}\Omega_{n}^{*,*}(M),D)\cong H^{*}(B_{n}(M)).\]
**Remark 2**.: _If \(M\) is not closed then the subspace \(\Omega_{n-2}^{*,*}(M).(v_{d}^{2},w_{2d-1})\) vanishes._
## 4 Proof of Theorem 2
In this section, we give the proof of Theorem 2.
Proof of Theorem 2.: Let \(M\) be closed and orientable. Assume \(H_{d-1}(M;\mathbb{Q})\) is non-trivial and \(\dim H_{d-1}(M;\mathbb{Q})=k.\) The corresponding two vector spaces are the following
\[V^{*}=\oplus_{i=0}^{d}V^{i},\quad W^{*}=\oplus_{i=d-1}^{2d-1}W^{i}\]
where
\[W^{2d-2}=\langle w_{2d-2,1},\ldots,w_{2d-2,k}\rangle,\quad V^{d-1}=\langle v_{d-1,1 }\ldots,v_{d-1,k-1}\rangle.\]
There is no element of degree greater than \(\nu_{n}\) in the reduced complex. Therefore, we just focus on the degree \(\nu_{n}.\) We will use the notation \(\mathrm{J}=\langle w_{2d-2,1},\ldots,w_{2d-2,k}\rangle.\) Let \(n\) be odd. We have
\[{}^{r}\Omega_{n}^{\nu_{n},\lfloor\frac{n}{2}\rfloor}(M)=v_{d}J^{\lfloor\frac{ n}{2}\rfloor}.\]
The cardinality of the basis elements of \(\mathrm{J}^{l}\) is \(\binom{l+k-1}{k-1}.\) The cardinality of the basis elements of \({}^{r}\Omega_{n}^{\nu_{n},\lfloor\frac{n}{2}\rfloor}(M)\) is \(\binom{\frac{n-2}{2}+k-1}{k-1}.\) Moreover, the differential of each basis element of \(\mathrm{J}\) is
\[D(w_{2d-2,i})=v_{d}v_{d-1,i}.\]
We have
\[D(v_{d}w_{2d-2,i})=0,\quad\text{for $i\in\{1,\ldots,k-1\}$}.\]
Note that \(v_{d}^{\geqslant 2}=0\) in the reduced complex. Also, \({}^{r}\Omega_{n}^{*,j>\lfloor\frac{n}{2}\rfloor}(M)=0.\) The differential has bi-degree \((1,-1).\) Therefore each \(v_{d}w_{2d-2,i}\) gives a cohomology class. We can write
\[q_{M}^{\nu_{n}}(n)=\binom{\frac{n-2}{2}+k-1}{k-1}+\overline{q}_{M}^{\nu_{n}}( n).\]
where \(\overline{q}_{M}^{\nu_{n}}(n)\) is a polynomial in \(n.\) We can write
\[\binom{\frac{n-2}{2}+k-1}{k-1}=\frac{(\frac{n-2}{2}+1)(\frac{n-2}{2}+2))\ldots (\frac{n-2}{2}+k-1)}{(k-1)!}.\]
This implies that the degree of \(q_{M}^{\nu_{n}}(n)\) is at least \(k-1.\) From Theorem 1, the degree of quasi-polynomial \(Q_{M}^{\nu_{n}}\) is at most \(k-1.\) Hence the degree of \(Q_{M}^{\nu_{n}}\) is \(k-1.\)
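As a quick sanity check (our own illustration, not part of the original argument), in the simplest non-trivial case \(k=2\) the count reads

\[\binom{\tfrac{n-2}{2}+1}{1}=\frac{n}{2},\]

which is linear in \(n\), so \(q_{M}^{\nu_{n}}(n)\) has degree at least \(1=k-1\), in agreement with Theorem 2.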
## 5 Proof of Theorem 3
In this section, we give the proof of Theorem 3.
Proof of Theorem 3.: Let \(M\) be an orientable manifold. Assume \(M\) is not closed and \(H_{d-1}(M;\mathbb{Q})\) is non-trivial. Furthermore, \(\dim H_{d-1}(M;\mathbb{Q})=k.\) The corresponding two vector spaces are the following
\[V^{*}=\oplus_{i=0}^{d}V^{i},\quad W^{*}=\oplus_{i=d-1}^{2d-1}W^{i}\]
where
\[W^{2d-2}=\langle w_{2d-2,1},\ldots,w_{2d-2,k}\rangle,\quad V^{d-1}=\langle v _{d-1,1}\ldots,v_{d-1,k-1}\rangle.\]
There is no element of degree greater than \(\nu_{n}-1\) in the complex. Therefore, we just focus on the degree \(\nu_{n}-1.\) We will use the notation
\[\mathrm{I}=\langle v_{d-1,1},\ldots,v_{d-1,k}\rangle,\quad\mathrm{J}=\langle w _{2d-2,1},\ldots,w_{2d-2,k}\rangle.\]
Let \(n\) be odd. We have
\[{}^{r}\Omega_{n}^{\nu_{n}-1,\lfloor\frac{n}{2}\rfloor}(M)=\mathrm{I}^{\lfloor \frac{n}{2}\rfloor}.\]
The cardinality of the basis elements of \({}^{r}\Omega_{n}^{\nu_{n}-1,\lfloor\frac{n}{2}\rfloor}(M)\) is \(k\binom{\frac{n-2}{2}+k-1}{k-1}.\) We have
\[\partial(v_{d-1,j}w_{2d-2,i})=0,\quad\text{for $i,j\in\{1,\ldots,k-1\}$.}\]
The differential has bi-degree \((-1,1)\) and \({}^{r}\Omega_{n}^{j\geq\nu_{n},*}(M)=0.\) Therefore each \(v_{d-1,j}w_{2d-2,i}\) gives a homology class. We can write
\[q_{M}^{\nu_{n}-1}(n)=k\binom{\frac{n-2}{2}+k-1}{k-1}+\overline{q}_{M}^{\nu_{n} -1}(n).\]
where \(\overline{q}_{M}^{\nu_{n}-1}(n)\) is a polynomial in \(n.\) We can write
\[\binom{\frac{n-2}{2}+k-1}{k-1}=\frac{(\frac{n-2}{2}+1)(\frac{n-2}{2}+2))\ldots (\frac{n-2}{2}+k-1)}{(k-1)!}.\]
This implies that the degree of \(q_{M}^{\nu_{n}-1}(n)\) is at least \(k-1.\) From Theorem 1, the degree of quasi-polynomial \(Q_{M}^{\nu_{n}-1}\) is at most \(k-1.\) Hence the degree of \(Q_{M}^{\nu_{n}-1}\) is \(k-1.\)
**Acknowledgement**. The author gratefully acknowledges the support from the ASSMS, GC University Lahore. This research is partially supported by the Higher Education Commission of Pakistan.
|
2309.06005 | Distributed Scheduling of Quantum Circuits with Noise and Time
Optimization | Quantum computers are noisy at present in the absence of error correction and
fault tolerance. Interim methods such as error suppression and mitigation find
wide applicability. Another method, which is independent of other error
suppression and mitigation, and can be applied in conjunction with them to
further lower the noise in the system, is circuit cutting. In this paper, we
propose a scheduler that finds the optimum schedule for the subcircuits
obtained by circuit cutting on the available set of hardware to (i) maximize
the overall fidelity, and (ii) ensure that the predefined maximum execution
time for each hardware is not exceeded. The fidelity obtained by this method on
various benchmark circuits is significantly better than that of the uncut
circuit executed on the least noisy device. The average increase in the
fidelity obtained by our method are respectively ~12.3% and ~21% for 10-qubit
benchmark circuits without and with measurement error mitigation, even when
each hardware was allowed the minimum possible execution time. This noise and
time optimized distributed scheduler is an initial step towards providing the
optimal performance in the current scenario where the users may have limited
access to quantum hardware. | Debasmita Bhoumik, Ritajit Majumdar, Amit Saha, Susmita Sur-Kolay | 2023-09-12T07:02:21Z | http://arxiv.org/abs/2309.06005v2 | # Distributed Scheduling of Quantum Circuits with Noise and Time Optimization
###### Abstract
Quantum computers are noisy at present in the absence of error correction and fault tolerance. Interim methods such as error suppression and mitigation find wide applicability. Another method, which is independent of other error suppression and mitigation, and can be applied in conjunction with them to further lower the noise in the system, is circuit cutting. In this paper, we propose a scheduler that finds the optimum schedule for the subcircuits obtained by circuit cutting on the available set of hardware to (i) maximize the overall fidelity, and (ii) ensure that the predefined maximum execution time for each hardware is not exceeded. The fidelity obtained by this method on various benchmark circuits is significantly better than that of the uncut circuit executed on the least noisy device. The average increase in the fidelity obtained by our method are respectively \(\sim 12.3\%\) and \(\sim 21\%\) for 10-qubit benchmark circuits without and with measurement error mitigation, even when each hardware was allowed the minimum possible execution time. This noise and time optimized distributed scheduler is an initial step towards providing the optimal performance in the current scenario where the users may have limited access to quantum hardware.
## I Introduction
Quantum computers have been shown to be able to perform certain computations faster and/or more accurately than their classical counterparts [9; 25]. However, these algorithms assume the existence of fault-tolerant quantum computers, which are still not available. Current quantum computers are noisy, which limits the computation achievable by them. In the absence of error correction and fault-tolerance, other methods to suppress [19; 33] and mitigate [18; 29; 31; 32] the noise in the system have been studied widely. Several studies exploit one or more of these methods to perform reliable computation involving hundreds of qubits [13; 26].
Apart from these, circuit cutting is another approach that has been proven to reduce the noise in the system. This method of partitioning a circuit into multiple subcircuits, independently computing each of these and then using classical postprocessing to retrieve the uncut output distribution, was proposed primarily as a method to compute larger circuits on smaller devices [21]. However, since then, multiple studies [3; 15; 24] have established its capability to reduce the noise in the system since each of the subcircuits involves fewer qubits and/or gates. In [12], the authors obtained a more accurate estimation of the ground state energy of the nearest neighbour Hamiltonian by leveraging circuit cutting and computing each subcircuit on the least-busy device. However, they did not consider the noise profile of the subcircuit. In [5], the authors provided the framework of a scheduler to assign circuits to multiple hardware to minimize the overall execution time without considering the noise profile of the hardware. In this study, we propose a noise and time-aware scheduler that leverages circuit cutting and then schedules the subcircuits to multiple hardware to maximize the overall fidelity while restricting the execution time on each hardware below a predefined value \(\tau\).
_Motivation:_ Ideally, a user would want to (i) reduce the effect of noise on the quantum circuit, and (ii) execute their quantum circuit on the least noisy hardware. These two requirements are not independent, but we leverage circuit cutting and perform noise-aware scheduling respectively. Circuit cutting is known to lower the effective noise on the system, but it also leads to an increased number of subcircuit instances to be executed (refer to Sec. II.1). A user may not have the desired execution time available on the hardware of their choice due to limited access, or due to the access time being shared among multiple users. In our approach, we tackle the challenge of enhancing fidelity while reducing execution time by combining circuit cutting and selecting the best available hardware for each subcircuit.
_Main contribution:_ Achieving the optimal trade-off between noise and execution time is a complex task since these two requirements are often orthogonal to each other. In order to minimize the noise, we would want to execute all the subcircuits on the least noisy device available, leading to an increased execution time. Conversely, to minimize the execution time, we can opt to distribute the subcircuits across all available devices without considering their individual noise profiles, leading to low fidelity. In order to address this optimization challenge, we design an integer linear program (ILP) that seeks to maximize the fidelity while conforming to a fixed maximum allowable execution time \(\tau\) for each hardware. The uncut probability distribution is obtained through classical postprocessing over the outcomes of the individual subcircuits. The results obtained through our Noise and Time Aware Distributed Scheduler (NoTaDS) demonstrate significantly better fidelity for different benchmark circuits, compared to the scenario where the uncut circuit was executed on the least noisy device. Our method represents an initial step towards noise and time-minimized distributed quantum computing, showcasing promising outcomes in improving the performance of quantum computing in real-world applications.
For this study, we have not considered the queuing delay of the hardware since there is currently no known relationship between queue time and hardware noise.
The rest of the paper is organized as follows: In Section II, we present the basic concepts of quantum circuit fragmentation, circuit placement, and how to select "good" qubits. Section III describes the hardware schedule for subcircuits. Section IV proposes the time and noise-optimized distributed scheduler. Section V discusses the experimental results of the proposed methodology. Section VI states the conclusions.
## II Background
Noise is arguably the primary hindrance to the scalability and applicability of quantum computers for problems of interest. While error correction is the primary goal to achieve in the long run, current quantum devices do not have the necessary qubit count and/or low enough noise profile for it. Therefore, methods to suppress [19; 33] or mitigate [13; 18; 29; 31; 32] the effect of noise are widely studied for near-term quantum devices. In this study, we have made use of two error suppression methods - circuit cutting [21; 28] and selection of _good_ qubits [19] for mapping the circuit onto the hardware in order to cope with the noise in the system. We make use of both of these methods to propose an optimized scheduling of circuits on the available hardware which aims to minimize both the noise and the overall execution time. In the next two subsections, we briefly discuss the two mentioned methods.
### Circuit cutting
Due to limitation in the size of current hardware, methods to partition a circuit into multiple smaller subcircuits have been studied extensively. These methods include splitting the problem itself to execute multiple smaller subcircuits (e.g. entanglement forging [8]), cutting the circuit between two gates to create multiple tomographic instances of smaller subcircuits (called wire cutting [21]) or replacing two-qubit gates by multiple instances of single qubit operation and feedforward classical communication (called gate cutting [16]). In this manuscript, we shall stick to wire cutting only, and use the term _circuit cutting_ to imply wire cutting.
Given a circuit \(\Phi\), let us denote the expectation value of some observable \(A\) as \(\Phi(A)\). Note that, for any observable \(A\), it is possible to write [28]
\[A=\tfrac{Tr\{A.I\}I+Tr\{A.X\}X+Tr\{A.Y\}Y+Tr\{A.Z\}Z}{2}\]
where \(I,X,Y,Z\) are the Pauli operators [20]. In other words, \(\Phi(A)=\frac{1}{2}\sum_{P\in\{I,X,Y,Z\}}c_{P}\Phi_{P}(A)\), where \(\Phi_{P}(A)=Tr\{AP\}\rho_{P}\). Here \(\rho_{P}\) denotes the eigenstates of the Pauli operator \(P\) and \(c_{P}\) denotes the eigenvalue. Note that the mathematical expression \(Tr\{AP\}\rho_{P}\) takes instances of both subcircuits into account where the former is measured in basis \(P\) and the latter is prepared in the state \(\rho_{P}\). Since there are two eigenstates corresponding to each Pauli operator, this method results in four subcircuit instances for measurement basis and eight for preparation state. The uncut expectation value (or probability distribution) is obtained via classical postprocessing.
In [28] the authors showed that the previous representation of the observable \(A\) is tomographically over-complete; It is possible to have a more succinct representation of \(\Phi(A)=\sum_{i}Tr\{AO_{i}\}\rho_{i}\), where \(O_{i}\in\{X,Y,Z\}\) and \(\rho_{i}\in\{\ket{0},\ket{1},\ket{+},\ket{+i}\}\). These two sets \(O_{i}\) and \(\rho_{i}\) are tomographically complete and hence denote the minimum number of subcircuits necessary. Here, there are three subcircuit instances for measurement basis and four for preparation state. A general drawback of cutting is that the classical postprocessing time scales exponentially in the number of cuts when the full probability distribution needs to be reconstructed. Therefore, this method is suitable only for circuits that can be split into disjoint subcircuits using a small (ideally constant) number of cuts only.
Let us consider a RealAmplitudes [11] circuit with reverse-linear entanglement and a single repetition. An \(n\)-qubit RealAmplitudes circuit consists of \(n-1\) CNOT gates and two layers of \(R_{y}\) gates, resulting in \(2n\) parameters. Fig. 1 shows circuit cutting of a 6-qubit RealAmplitudes circuit resulting in two subcircuits. The cut is denoted by the dotted red line. Here \(\rho_{i}\) and \(O_{i}\) have the same meaning as discussed above. Therefore, there are three variants of the first subcircuit for \(O_{i}=X,Y,Z\), and four of the second for \(\rho_{i}=\ket{0},\ket{1},\ket{+},\ket{+i}\).
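A small, hedged sketch of this construction follows; the 'reverse_linear' entanglement string requires a reasonably recent Qiskit version, and the enumeration below only lists the subcircuit variants implied by one cut rather than performing the cut itself.

```python
from itertools import product
from qiskit.circuit.library import RealAmplitudes

# Uncut 6-qubit ansatz of Fig. 1: single repetition, reverse-linear entanglement.
ansatz = RealAmplitudes(6, entanglement="reverse_linear", reps=1)
print(ansatz.num_parameters)                      # 2n = 12 parameters for n = 6

meas_bases = ["X", "Y", "Z"]                      # O_i measured on the upstream subcircuit
prep_states = ["0", "1", "+", "+i"]               # rho_i prepared on the downstream subcircuit
variants = list(product(meas_bases, prep_states))
print(len(variants))                              # 12 subcircuit-pair combinations per cut
```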
Since each subcircuit has a lower number of qubits and/or gates, the noise on each subcircuit is expected to be lower. Hence circuit cutting is often used as a method to lower the noise in the system [3; 12; 15; 28]. In other words, the motivation for circuit cutting is not only the ability to run bigger circuits on smaller hardware but also to lower the noise in the system at the cost of some classical post-processing.
### Circuit placement and selection of _good_ qubits
In current quantum devices, a two-qubit operation is possible only between nearest neighbours. For example, Fig. 2 shows the coupling map of a 5-qubit IBM Quantum device. Here a two-qubit operation is possible between qubits 0 and 1, but not between 0 and 2 since the latter are not neighbours. In order to perform a two-qubit operation between qubits 0 and 2, they must be made adjacent to each other using SWAP gates. A general requirement of placement and scheduling algorithms [14; 17; 23; 27; 34; 35; 6] is to minimize the number of SWAP gates.
Although the aim of placement is to minimize the number of SWAP gates, Fig. 2 clearly shows that the noise profile is not the same for all qubits. Therefore, it is important to involve the less noisy, or _good_, qubits of the hardware in the placement. However, selecting _good_ qubits for placement may lead to an increased number of SWAP gates if the _good_ qubits are not adjacent. Therefore, minimization of SWAP gates and selection of _good_ qubits can often be contradictory requirements in placement.
In [19], the authors proposed a two-step solution for this. In the first step, also known as _transpilation_, the placement algorithm focuses on minimizing the number of SWAP gates without considering the noise profile of the hardware. As a second step, a list of isomorphisms of the transpiled circuit graph on the hardware graph is generated (refer to Fig. 3). Each of these isomorphisms is also called _layout_. Finally, the noise profile of each layout is calculated from the calibration data of the hardware to assign a score \(Q\) which is an indicator of the quality of the layout. The layout having the lowest score, which corresponds to the best quality, is selected. This entire process has been named _mapomatic_ by the authors. In this paper, we use _mapomatic_ for the selection of the best qubit placement for a given circuit.
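A minimal sketch of this two-step procedure using the publicly available mapomatic package is given below; function names follow mapomatic's documented interface, though exact signatures may vary between versions:

```python
import mapomatic as mm
from qiskit import transpile

# Step 1: noise-unaware transpilation that minimizes SWAP insertion
trans_qc = transpile(circuit, backend, optimization_level=3)

# Step 2: enumerate isomorphic layouts on the device graph and score them
small_qc = mm.deflate_circuit(trans_qc)           # remove idle qubits
layouts = mm.matching_layouts(small_qc, backend)  # all subgraph isomorphisms
scores = mm.evaluate_layouts(small_qc, layouts, backend)
best_layout, best_score = scores[0]               # lowest score = least noisy layout
```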
For a set of hardware, Table 1 shows the least mapomatic score and the corresponding layout for placement of a 6-qubit RealAmplitudes circuit (Fig. 1). In other words, each layout shown in the table implies that both the number of SWAP gates and the noise will be minimized if the circuit is placed on those qubits of the hardware. A layout is generally represented as an array \(l\), where \(l[k]\) denotes the qubit of the hardware on which the \(k\)-th qubit of the circuit is mapped. For example, from Table 1, in IBMQ Hanoi, the qubits 0, 1, 2, 3, 4, and 5 of the 6-qubit RealAmplitudes circuit are respectively
Figure 1: Circuit cutting for a 6-qubit RealAmplitudes ansatz, with single repetition and reverse-linear entanglement [11], into two subcircuits
Figure 3: An example of _mapomatic_[18] to find the best placement of a circuit on hardware. This figure is obtained from the GitHub repository of mapomatic ([https://github.com/Qiskit-Partners/mapomatic](https://github.com/Qiskit-Partners/mapomatic)).
Figure 2: The coupling map and error distribution of a 5 qubit IBM Quantum device \(Belem\)
mapped to physical qubits 0, 1, 2, 4, 7 and 6. From this table, we see that IBMQ Kolkata, with layout [22, 25, 26, 24, 23, 21], is the best hardware on which to execute the 6-qubit RealAmplitudes circuit.
In the next section, we leverage circuit cutting and _mapomatic_ to maximize the fidelity of a (or a set of) circuits given a set of hardware while minimizing the overall execution time.
## III Hardware Schedule for Subcircuits
In this paper, we study the problem of scheduling jobs on different hardware with a focus on maximizing the fidelity and minimizing the execution time. First, we want to emphasize that in this paper we consider circuit cutting primarily as a method to suppress the effect of noise. It has been shown in multiple studies that circuit cutting itself can lower the noise in the system [2; 3; 15; 24]. Therefore, we resort to circuit cutting even if the circuit is small enough to be executed on the hardware. This approach allows us to improve the fidelity as well as to schedule the subcircuits obtained after cutting onto multiple hardware in parallel, thus lowering the execution time [5].
Consider a list of hardware \(H\) and a list of circuits (or subcircuits) \(C\). For a (sub) circuit \(i\in C\), let \(l_{ij}\) be the optimum (least noisy) layout on a hardware \(j\in H\). Ideally, each (sub) circuit can be assigned to the best layout corresponding to it, in terms of noise. However, cutting the circuit increases the number of executable circuits by creating multiple instances for each subcircuit (refer to Fig. 1). Therefore, the trade-off for error suppression using circuit cutting is the increased execution time to execute all the subcircuit instances.
If the number of available hardware is at least as large as the number of subcircuit instances, then a polynomial time algorithm for finding a minimum-weight maximum matching in the bipartite graph, having an edge between a subcircuit and a hardware weighted by (say) the \(mapomatic\) score, can provide the required assignment. This also comes with the inherent assumption that the maximum execution time of each hardware can accommodate no more than one subcircuit instance. However, if the number of subcircuit instances is larger than the number of available hardware, or the allowed execution time on each hardware can accommodate more than one subcircuit instance, then job scheduling has to be performed.
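For completeness, the matching-based assignment in this simple regime can be sketched with an off-the-shelf solver; the score matrix below is a hypothetical example, not data from our experiments:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# score[i, j]: mapomatic score of subcircuit instance i on hardware j
# (rows: instances, columns: hardware; here #hardware >= #instances)
score = np.array([[0.09, 0.18, 0.11],
                  [0.13, 0.08, 0.21]])

rows, cols = linear_sum_assignment(score)  # minimum-weight maximum matching
assignment = dict(zip(rows, cols))         # instance -> hardware index
```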
Many of these subcircuits may have their least noisy layout on the same hardware. Therefore, the best assignment may lead to _sequential_ execution of a large number of subcircuits, leading to a large execution time. A user often has limitations on the execution time on a particular hardware, barring this sequential approach. On the other hand, the execution time can be minimized if we opt for as much parallelization as possible, i.e., equally distribute the (sub)circuits to all the available hardware without considering the noise profile.
In this study, we delve into finding the optimum scheduling of the (sub) circuits on the available hardware, when an upper limit on the execution time for each hardware is imposed, such that the overall fidelity is maximized. The problem statement can be formally stated as follows:
**Problem Statement.**_Given a list of circuits \(C\), a list of hardware \(H\) and the corresponding execution time limit \(\tau_{j}\) for \(j\in H\), find an assignment \(X_{ij}\)\(\forall\)\(i\in C\) such that the fidelity of the circuits are maximized and the execution time \(t_{j}\leq\tau_{j}\)\(\forall\)\(j\in H\)._
This problem is not known to have a polynomial time solution. In the next section, we elaborate on each step of our proposed framework as given in Fig. 4.
## IV Proposed Framework
We start with the premise where a list of circuits \(C\) and a list of hardware \(H\) are provided. The list of hardware can either be provided by the user or can be determined from their credential for a particular vendor. For each \(c\in C\), we first fragment it into \(k\) subcircuits via circuit cutting. Note that as stated before, we resort to cutting all the circuits, irrespective of whether these can be executed on a single hardware, in order to reduce the noise, and thus improve the fidelity. Henceforth, \(C\) denotes the set of all subcircuits obtained via circuit-cutting, and \(i\in C\) implies a subcircuit.
The steps in the flowchart of Fig. 4 are described in the following subsections.
### Selection of appropriate hardware
As stated before, let \(C\) be the set of all subcircuits. Naturally, tagging is required to keep track of which subcircuit corresponds to which circuit for the classical recombination over the cuts (refer to Sec. II.1) which follows later. First, for each subcircuit \(i\in C\), the set of hardware \(H_{i}\subseteq H\) is determined such that for all \(j\in H_{i}\) the number of qubits in \(j\) is at least as large as the number of
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline Backend & \# Qubits & Corresponding layout & Mapomatic score \\ \hline IBMQ Hanoi & 27 & [0, 1, 2, 4, 7, 6] & 0.099 \\ IBMQ Mumbai & 27 & [6, 7, 4, 10, 12, 13] & 0.183 \\ IBMQ Cairo & 27 & [13, 12, 10, 15, 18, 17] & 0.105 \\ IBMQ Kolkata & 27 & [22, 25, 26, 24, 23, 21] & 0.084 \\ IBMQ Guadalupe & 16 & [15, 12, 13, 10, 7, 6] & 0.142 \\ IBMQ Lagos & 7 & [0, 1, 2, 3, 5, 6] & 0.093 \\ IBMQ Nairobi & 7 & [0, 1, 2, 3, 5, 4] & 0.193 \\ \hline \end{tabular}
\end{table}
Table 1: \(Mapomatic\) score for the best layout of the 6-qubit RealAmplitudes circuit (Fig: 1) corresponding to each of the available hardware
qubits in the subcircuit \(i\). If \(H_{i}=\{\}\) for any subcircuit \(i\), then \(i\) needs to be partitioned again such that at least one hardware can accommodate each subcircuit.
At the end of this step, we obtain a list of feasible hardware \(H_{i}\) for each subcircuit \(i\).
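A sketch of this feasibility filter is given below; `subcircuits` and `backends` are assumed to be dictionaries mapping identifiers to Qiskit circuits and BackendV2-style backend objects respectively:

```python
# subcircuits: {subcircuit_id: QuantumCircuit}, backends: {hardware_name: Backend}
feasible = {
    i: [j for j, b in backends.items() if b.num_qubits >= sub.num_qubits]
    for i, sub in subcircuits.items()
}

# any subcircuit with no feasible hardware must be partitioned again
needs_recut = [i for i, hw in feasible.items() if not hw]
```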
### Scoring each hardware as per noise profile
Next, we use mapomatic [30] for each \(i\in C\) and \(j\in H_{i}\). The action of mapomatic here can be considered as a function
\[\mathcal{F}:\{j,i\}\rightarrow\{l_{ij},Q_{ij}\}\]
where \(l_{ij}\) is the optimum layout and \(Q_{ij}\) is the mapomatic score for this circuit-layout pair. In other words, given a hardware \(j\) and a circuit \(i\), mapomatic returns a list of physical qubits \(l_{ij}\), which is the initial layout for the placement of \(i\) on \(j\), and a score \(Q_{ij}\), which is an indicator of the noise. For each (subcircuit, hardware) pair \((i,j)\), we thus obtain a score \(Q_{ij}\). Therefore, for each subcircuit \(i\), this step produces a list of hardware \(j\in H_{i}\) ordered by the score \(Q_{ij}\). In the usual scoring technique of mapomatic, a lower score implies a lower noise profile. Therefore, a hardware \(h_{1}\) is better than \(h_{2}\) for a circuit \(i\) if \(Q_{ih_{1}}<Q_{ih_{2}}\). However, one may define a custom scoring technique which may imply the opposite. At the end of this step, we obtain an ordering of the list of feasible hardware for each subcircuit in terms of their noise profile.
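In practice, the function \(\mathcal{F}\) can be realized by invoking mapomatic once per (subcircuit, hardware) pair; the following hedged sketch reuses the names introduced in the previous snippets:

```python
layout, score = {}, {}
for i, sub in subcircuits.items():
    for j in feasible[i]:
        trans = transpile(sub, backends[j], optimization_level=3)
        small = mm.deflate_circuit(trans)
        candidates = mm.evaluate_layouts(
            small, mm.matching_layouts(small, backends[j]), backends[j])
        layout[(i, j)], score[(i, j)] = candidates[0]  # l_ij and Q_ij (lowest = best)

# ordering of feasible hardware for each subcircuit i by noise profile
ranked = {i: sorted(feasible[i], key=lambda j: score[(i, j)]) for i in subcircuits}
```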
### Noise and Time Aware Distributed Scheduler (NoTaDS)
Now, we propose an integer linear program (ILP) to schedule each subcircuit from \(C\) to quantum hardware such that the fidelity is maximized while conforming to the upper bound on the execution time for each hardware. Note that the list of feasible hardware and their ordering as per mapomatic score may vary from circuit to circuit. Therefore, the optimization needs to take into account this variation for each subcircuit.
After cutting, each subcircuit corresponds to multiple subcircuit instances (refer to Sec. II.1). The number of instances of a subcircuit \(i\) depends on the number of preparation qubits \(\rho_{i}\) and the number of measurement qubits \(O_{i}\). We associate a value \(\eta_{i}\) with each subcircuit \(i\) such that
\[\eta_{i}=\begin{cases}1&\text{if all instances are scheduled individually}\\ \nu(\rho_{i},O_{i})&\text{otherwise}\end{cases}\]
where \(\nu(\rho_{i},O_{i})\) denotes the total number of subcircuit instances for subcircuit \(i\). In Sec. V we discuss the advantages and disadvantages of these two choices.
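Following the instance counts of Sec. II.1 (three measurement bases per cut wire measured out and four preparation states per cut wire fed in), \(\nu(\rho_{i},O_{i})\) can be computed as in the sketch below, where `meas_cuts` and `prep_cuts` are assumed bookkeeping dictionaries produced during cutting:

```python
def num_instances(n_meas_cuts, n_prep_cuts):
    # 3 measurement bases per cut wire measured out,
    # 4 preparation states per cut wire fed in
    return (3 ** n_meas_cuts) * (4 ** n_prep_cuts)

eta = {i: num_instances(meas_cuts[i], prep_cuts[i]) for i in subcircuits}
```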
Next, we define the variables, constraints, and the objective function for the ILP.
1. Variables: We associate a variable \(X_{ij}\) for each subcircuit \(i\in C\) and hardware \(j\in H_{i}\) such that \[X_{ij}=\begin{cases}1&\text{if subcircuit $i$ is scheduled to hardware $j$}\\ 0&\text{otherwise.}\end{cases}\] In other words, \(X_{ij}\) acts as a decision variable for the scheduling. Moreover, a score variable \(Q_{ij}\) is associated with each \(X_{ij}\) which indicates the mapomatic score when subcircuit \(i\) is placed on hardware \(j\).
Figure 4: A flowchart of our proposed noise and time optimized scheduler, including circuit cutting, scoring of (circuit, hardware) pair, noise and time optimized scheduling, and final reconstruction of the entire probability distribution from those of the subcircuits
2. Constraints: Next, we fix the constraints for the ILP. 1. The first requirement is that every subcircuit \(i\) is assigned to some hardware. Formally, this constraint can be represented as \[\sum_{j\in H_{i}}X_{ij}=1.\] (1) Note that this constraint should hold for all subcircuits \(i\in C\), and therefore, Constraint 1 essentially results in \(|C|\) constraints. 2. Now, as discussed before, there is some time restriction for each hardware. We associate a maximum execution time \(\tau_{j}\) for each \(j\in H\). The value of \(\tau_{j}\) can be provided by the user or can be determined from the user's access plan for a particular vendor. Let \(t_{i}\) denote the execution time for each subcircuit \(i\in C\). Then, the total execution time of all the subcircuits scheduled to a particular hardware \(j\) should not exceed \(\tau_{j}\). Formally, this is represented as \[\sum_{i\in C}\eta_{i}\cdot t_{i}\cdot X_{ij}\leq\tau_{j}.\] (2) Note that while the summation of this constraint goes over the entire set of subcircuits, the indicator variable \(X_{ij}\) ensures that the time for a particular subcircuit is added to the execution time only if it is scheduled to that hardware. This constraint holds for each hardware, and therefore Constraint 2 essentially results in \(|H|\) constraints.
3. Objective Function: The objective of this optimization problem is to maximize the overall fidelity, which translates to minimizing the overall score \(Q\). Therefore, the objective function for this is defined as \[\min\sum_{i\in C,j\in H}X_{ij}\cdot Q_{ij}\] (3) The final ILP formulation, thus, is \[\min \sum_{i\in C,j\in H}X_{ij}\cdot Q_{ij}\] subject to Constraints 1-2 \[X_{ij}\in\{0,1\}.\]
Note on the linear objective function. The objective function of Eq. 3 is linear in \(Q_{ij}\). A question may arise whether it is sufficient to have a linear objective function when minimizing over multiple subcircuits and their multiple instances. As mentioned earlier, the score computed by \(mapomatic\) is an indicator of the hardware noise profile. However, the quality of the result for a shallow subcircuit running on a noisy layout may exceed that of a very deep circuit running on a less noisy layout. As long as the subcircuits are roughly equal in the number of qubits and in depth, the ordering of the hardware layouts according to the \(mapomatic\) score primarily depends on the hardware noise profile. In such a scenario, a linear objective function that minimizes the overall score over all the subcircuits is sufficient. However, if the subcircuits are largely imbalanced in width and/or depth, then both the size of the circuit and the noise profile of the layout affect the \(mapomatic\) score. In that case, a nonlinear objective function may be needed to ensure good scheduling for _all_ the subcircuits. However, previous results on circuit cutting show optimal performance when the cutting is more or less balanced [3]. Therefore, in this study, we primarily stick to cuts that lead to roughly balanced subcircuits, and hence a linear objective function suffices.
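The ILP translates almost line-by-line into CPLEX's Python interface (docplex). The sketch below assumes the dictionaries `eta`, `times`, `tau`, `score` and `feasible` built in the previous steps; it is illustrative rather than a verbatim excerpt of our implementation:

```python
from docplex.mp.model import Model

mdl = Model(name="NoTaDS")

# decision variables X_ij, defined only for feasible (subcircuit, hardware) pairs
X = {(i, j): mdl.binary_var(name=f"x_{i}_{j}")
     for i in subcircuits for j in feasible[i]}

# Constraint 1: every subcircuit is assigned to exactly one hardware
for i in subcircuits:
    mdl.add_constraint(mdl.sum(X[i, j] for j in feasible[i]) == 1)

# Constraint 2: total execution time on each hardware within its limit tau_j
for j in backends:
    mdl.add_constraint(
        mdl.sum(eta[i] * times[i] * X[i, j]
                for i in subcircuits if (i, j) in X) <= tau[j])

# Objective: minimize the total mapomatic score, i.e. maximize fidelity
mdl.minimize(mdl.sum(score[i, j] * X[i, j] for (i, j) in X))

solution = mdl.solve()
schedule = {i: j for (i, j) in X if solution.get_value(X[i, j]) > 0.5}
```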
## V Experimental results
In this section, we show the experimental results of our NoTaDS scheduler for different types of circuits. We have used the Circuit-knitting-toolbox [4] for circuit cutting and reconstruction, and CPLEX Optimization Studio to solve the ILP from Sec. IV. Table 2 shows the set of quantum hardware used for our experiments and their noise profile. Some of the parameters for the noise profile include the probability of faulty gates and measurement, and the rate of spontaneous decay of a qubit, characterized by \(T_{1}\) and \(T_{2}\). The noise profile of the hardware varies with time. The values for each type of error in the table are the average over all the qubits in that hardware. Moreover, the readout error probability for each qubit is the average of \(p(0|1)\) and \(p(1|0)\) where \(p(i|j)\) denotes the probability of measuring \(i\) when the outcome was originally \(j\), \(i,j\in\{0,1\}\).
In the following subsections, first, we discuss the selection of the maximum execution time \(\tau\) for each hardware, and then our method for estimating the execution time of a subcircuit. Finally, we show the fidelity obtained by our scheduling method for a range of quantum circuits.
### Selection of maximum execution time \(\tau\)
In Sec. IV, we defined the maximum execution time for a hardware \(j\in H\) as \(\tau_{j}\). There are \(\eta_{i}\) instances for each subcircuit \(i\in C\). Let \(t^{(i)}\) be the execution time for subcircuit \(i\). The maximum execution time \(\tau_{max}\) is required if we are to execute all the instances of all the subcircuits on one particular hardware. Then
\[\tau_{max}=\sum_{i\in C}\eta_{i}\cdot t^{(i)}.\]
Note that allowing any time in excess of \(\tau_{max}\) does not change the scheduling or the execution time. Therefore, we stick to equality instead of \(\geq\).
The minimum time that any hardware should allow is that required to run at least one subcircuit. Here, we want to mention once more that one subcircuit \(i\in C\) consists of \(\eta_{i}\) subcircuit _instances_. One can choose to schedule each instance or each subcircuit. We tested the former, which resulted in a drop in fidelity of \(\sim 9\%\) over the latter. This is expected since the different instances together form a tomography of the subcircuit [15; 22]. Therefore, running the instances of a subcircuit on different hardware (i.e., under different noise models) leads to an incorrect tomography, which makes the reconstruction fallible. Therefore, for this study, we stick to the scheduling of subcircuits, and not of instances. There may be scenarios where scheduling the instances in an intelligent way leads to a smaller decrease in fidelity; we postpone that to future studies.
Now, the minimum time \(\tau_{min}\) for each hardware should ensure that all the subcircuits can be executed. The maximum time required to execute one subcircuit is
\[t_{max}=\max_{i\in C}\eta_{i}\cdot t^{(i)}.\]
For our experimental settings, the number of hardware is always greater than the number of subcircuits. Therefore, it is sufficient to ensure that each hardware should have execution time \(\geq t_{max}\). Therefore,
\[\tau_{min}=\max_{i\in C}\eta_{i}\cdot t^{(i)}.\]
In all our experiments, we fix \(\tau_{j}=\tau_{min}\ \forall\ j\in H\). Later in Sec. V.7 we show the change in fidelity and execution time if we allow \(\tau_{j}>\tau_{min}\). Selection of \(\tau_{min}\) requires the execution time \(t^{(i)}\) for all the subcircuits \(i\in C\). In the following subsection, we discuss the method we used to decide the value of \(t^{(i)}\ \forall\ i\in C\).
### Estimation of the execution time of a circuit
There are sophisticated methods for estimating the run-time of a quantum circuit [10]. However, for our experiment, we stick to a simple method of calculating the time of each level of a circuit. We define a _level_ of a circuit as a time step in which a set of gates is executed in parallel. In Fig. 5 we separate the different levels of the circuit by red lines.
The time duration of each level is determined by the longest gate in that level. Naturally, 2-qubit gates have a much larger execution time than 1-qubit gates. Therefore, if a level contains a 2-qubit gate, then the time duration of that level is \(t_{2}\), which is the execution time of a single 2-qubit gate. Note that it doesn't matter if the level contains multiple 2-qubit gates since they are operated parallelly. On the other hand, if a level contains only single qubit gates then the time duration of that level is \(t_{1}\), which is the execution time of a single 1-qubit gate. Therefore, if a circuit contains \(\kappa_{1}\) levels
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|} \hline Hardware & \# Qubits & 2-qubit gate error probability & 1-qubit gate error probability & \(T_{1}\ (\mu s)\) & \(T_{2}\ (\mu s)\) & Readout error probability \\ \hline IBMQ Hanoi & 27 & \(8.3\times 10^{-3}\) & \(2.1\times 10^{-4}\) & 156.69 & 137.7 & \(10^{-2}\) \\ IBMQ Mumbai & 27 & \(7.5\times 10^{-3}\) & \(2.5\times 10^{-4}\) & 118.01 & 161.97 & \(1.8\times 10^{-2}\) \\ IBMQ Cairo & 27 & \(9.4\times 10^{-3}\) & \(2.2\times 10^{-4}\) & 94.62 & 116.42 & \(1.3\times 10^{-2}\) \\ IBMQ Kolkata & 27 & \(8.7\times 10^{-3}\) & \(2\times 10^{-4}\) & 117.42 & 92.97 & \(1.2\times 10^{-2}\) \\ IBMQ Guadalupe & 16 & \(9.74\times 10^{-3}\) & \(2.64\times 10^{-4}\) & 86.72 & 118.73 & \(1.64\times 10^{-2}\) \\ IBMQ Lagos & 7 & \(7.2\times 10^{-3}\) & \(2\times 10^{-4}\) & 112.51 & 84.42 & \(1.4\times 10^{-2}\) \\ IBMQ Nairobi & 7 & \(8.7\times 10^{-3}\) & \(3.5\times 10^{-4}\) & 114.75 & 71.42 & \(2.7\times 10^{-2}\) \\ IBMQ Jakarta & 7 & \(7.3\times 10^{-3}\) & \(1.03\times 10^{-4}\) & 136.95 & 38.99 & \(2.09\times 10^{-2}\) \\ IBMQ Manila & 5 & \(7.7\times 10^{-3}\) & \(2.46\times 10^{-4}\) & 141.15 & 56.53 & \(2.2\times 10^{-2}\) \\ IBMQ Lima & 5 & \(9.58\times 10^{-3}\) & \(3.76\times 10^{-4}\) & 98.68 & 115.32 & \(2.41\times 10^{-2}\) \\ IBMQ Belem & 5 & \(8.89\times 10^{-3}\) & \(3.88\times 10^{-4}\) & 101.42 & 98.85 & \(2.39\times 10^{-2}\) \\ IBMQ Quito & 5 & \(7.9\times 10^{-3}\) & \(2.88\times 10^{-4}\) & 96.83 & 104.39 & \(4.15\times 10^{-2}\) \\ \hline \end{tabular}
\end{table}
Table 2: Number of qubits and noise profile of the hardware used in our experiments
Figure 5: An example of a 6-qubit RealAmplitudes circuit with the levels separated by red lines
where only 1-qubit gates are present and \(\kappa_{2}\) levels where 2-qubit gates are also present, then the overall runtime is \(\kappa_{1}\cdot t_{1}+\kappa_{2}\cdot t_{2}\). In Fig. 5, in the whole circuit \(\kappa_{1}=2\) and \(\kappa_{2}=5\). In the first subcircuit, \(\kappa_{1}=2\) and \(\kappa_{2}=2\), and in the second subcircuit \(\kappa_{1}=2\) and \(\kappa_{2}=3\).
In current IBM Quantum devices, the execution time of a CNOT gate is \(\sim 10\times\) that of single-qubit gates. For this study, we assume \(t_{1}=1\), making \(t_{2}=10\). From the abstraction, the execution time of the circuit in Fig. 5 is \(2\cdot t_{1}+5\cdot t_{2}\). This abstract calculation of the execution time keeps the method simple. Since the values of \(\tau_{min}\) and \(\tau_{max}\) depend on the execution time, if some other method for determining the execution time is used, or if absolute execution times are selected, then the values of \(\tau\) will change accordingly without hampering the assignment of the subcircuits on the hardware. Note that if absolute values are used instead of \(t_{1}\), \(t_{2}\), then one should also account for the fact that the absolute values for the execution time of gates are not always the same on different hardware.
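A sketch of this level-counting estimate, using Qiskit's DAG representation to identify the levels (barriers and measurements are ignored for simplicity, and the circuit is assumed to be decomposed into basis gates), together with the resulting \(\tau_{min}\) and \(\tau_{max}\), is:

```python
from qiskit.converters import circuit_to_dag

def estimate_time(circ, t1=1, t2=10):
    """Abstract execution time: a level costs t2 if it contains a 2-qubit gate, else t1."""
    total = 0
    for layer in circuit_to_dag(circ).layers():
        ops = [n for n in layer["graph"].op_nodes()
               if n.op.name not in ("barrier", "measure")]
        if not ops:
            continue
        total += t2 if any(len(n.qargs) >= 2 for n in ops) else t1
    return total

times = {i: estimate_time(sub) for i, sub in subcircuits.items()}
tau_max = sum(eta[i] * times[i] for i in subcircuits)  # all instances on a single device
tau_min = max(eta[i] * times[i] for i in subcircuits)  # largest single subcircuit job
```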
### Experimental results on 6-qubit circuits
In Table 3 we consider four benchmark circuits having 6 qubits each. These circuits are small enough to be executed on any hardware with \(\geq 7\) qubits, and hence distributed scheduling using circuit cutting may not be deemed necessary here. However, Table 3 shows that distributed scheduling using circuit cutting still helps in the improvement of fidelity. For each of the circuits, we provide its fidelity with the ideal simulation both without any error mitigation and with measurement error mitigation (MEM). For MEM, we have used the default _MThree_ mitigation [18] provided in _Qiskit Runtime_ by setting the _resilience level_ option to 1.
Naturally, MEM improves the fidelity over no mitigation. However, we observe that distributed scheduling with circuit cutting without any error mitigation outperforms the fidelity of the uncut circuit with MEM. In this experiment, we partitioned each circuit into two subcircuits. We want to emphasize here that (i) as discussed before, \(\tau\) for all hardware was fixed to \(\tau_{min}\), and (ii) the uncut circuit was always executed on the best hardware and its corresponding layout as per mapomatic. We obtain an average improvement in fidelity for distributed scheduling using circuit cutting over no cutting of \(\sim 5.2\%\) when no mitigation was used, and \(\sim 4.89\%\) when MEM was used. The average is taken over the four circuits in Table 3.
Next, we dive deeper into the exact details of the experiment for the 6-qubit Ripple carry adder circuit [7]. This is meant to provide an overall idea for recreating the experimental steps for the circuit in Table 3 and also those in later subsections. The steps follow from the flowchart provided in Fig. 4.
#### iii.3.1 Experiment details for the 6-qubit Ripple carry adder circuit
Fig. 6 shows the circuit for the 6-qubit ripple carry adder and its two subcircuits obtained using the Circuit-knitting-toolbox [4]. After obtaining the subcircuits, we found the hardware big enough to accommodate each of them. In this particular scenario, all the hardware from Table 2 can accommodate each subcircuit.
Next, we use mapomatic to find the score for each subcircuit against each hardware and its layout. Then we use the optimization in Sec. IV to schedule the subcircuits to the hardware. Finally, we use mapomatic to find the best hardware and its layout for the uncut circuit as well. In
Figure 6: Two subcircuits of 6-qubit ripple-carry-adder circuit obtained by using Circuit-knitting-toolbox [4]
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{Benchmark circuit} & \multirow{2}{*}{\# qubits} & \multirow{2}{*}{Cut size} & \multirow{2}{*}{\# subcircuits} & \multicolumn{4}{c|}{Fidelity} \\ \cline{5-8} & & & & & \multicolumn{2}{c|}{Uncut} & \multicolumn{2}{c|}{Cut} \\ \cline{5-8} & & & & No mit & MEM & No mit & MEM \\ \hline Ripple carry adder [7] & 6 & 2 & 2 & 0.759 & 0.787 & 0.792 & 0.843 \\ \hline RealAmplitudes [11] & 6 & 1 & 2 & 0.959 & 0.983 & 0.987 & 0.997 \\ \hline Trotterized [15] & 6 & 2 & 2 & 0.922 & 0.951 & 0.965 & 0.974 \\ \hline Bernstein Vazirani [1] & 6 & 1 & 2 & 0.81 & 0.869 & 0.882 & 0.944 \\ \hline \end{tabular}
\end{table}
Table 3: Fidelity for 6-qubit circuits by scheduling over the hardware in Table 2 with and without circuit cutting for no error mitigation (no mit) and measurement error mitigation (MEM)
Table 5 we show the layout, backend, and the mapomatic score for the uncut circuit and the two subcircuits.
Note that the cut size to partition the 6-qubit adder circuit into two subcircuits is 2. Therefore, each subcircuit has 4 qubits. We note from Table 5 that our scheduler has scheduled the two subcircuits on two different hardware, each of which has a mapomatic score lower than that of the hardware where the original circuit has been scheduled. This explains why the fidelity obtained via cutting exceeds that of the uncut circuit.
### Experimental results on 10-qubit circuits
Next, in Table 4 we take a few 10-qubit circuits. These circuits are too big to be executed on 5 or 7-qubit devices but can be executed without cutting on 16 or 27-qubit devices. However, as before, we show that the fidelity can be improved by using our NoTaDS scheduler. Table 4 shows the fidelity of four 10-qubit circuits with and without measurement error mitigation, where the value of \(\tau\) is set to \(\tau_{min}\) for all hardware.
Once more we observe that the fidelity between the noisy and the ideal outcome obtained without any error mitigation via our scheduling method outperforms (sometimes significantly, e.g., see the RealAmplitudes and Trotterized circuits) the fidelity of the uncut circuit with MEM. We obtain an average improvement in fidelity for distributed scheduling using circuit cutting over no cutting of \(\sim 12.38\%\) when no mitigation was applied, and \(\sim 21\%\) when MEM was used. The average is taken over the four circuits in Table 4.
In the following subsection, we take a deeper dive into the improvement in fidelity with the variation in the number and size of the subcircuits.
### Variation in fidelity with the number and size of subcircuits
In Fig. 7 we plot the fidelity of 20-qubit RealAmplitudes and Bernstein-Vazirani circuits as the number of subcircuits is increased from 2 to 6. In each case, the subcircuits are scheduled using the NoTaDS scheduler, and the fidelity is computed with respect to the ideal outcome; the result is bootstrapped over 10 trials.
We notice that the fidelity increases roughly linearly with an increasing number of subcircuits. As the number of subcircuits increases, each subcircuit becomes smaller and hence accumulates less noise, and therefore the fidelity increases. However, with an increase in the number of subcircuits, the cut size also increases, leading to an exponential increase in the classical postprocessing time for reconstructing the full probability distribution from the subcircuits [21, 28]. We verify this in Fig. 8. Therefore, the number of cuts cannot be increased beyond a certain point to keep the
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline Circuit & \# qubits & Layout & Scheduled backend & Mapomatic score \\ \hline Uncut & 6 & [26, 25, 24, 23, 21, 18] & IBMQ Kolkata & 0.28 \\ \hline Subcircuit 1 & 4 & [26, 25, 22, 19] & IBMQ Mumbai & 0.13 \\ \hline Subcircuit 2 & 4 & [4, 1, 2, 3] & IBMQ Hanoi & 0.11 \\ \hline \end{tabular}
\end{table}
Table 5: Scheduling details of the 6-qubit ripple carry adder circuit and its two subcircuits obtained after cutting
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{Benchmark circuit} & \multirow{2}{*}{\# qubits} & \multirow{2}{*}{Cut size} & \multirow{2}{*}{\# subcircuits} & \multicolumn{4}{c|}{Fidelity} \\ \cline{5-8} & & & & & \multicolumn{2}{c|}{Uncut} & \multicolumn{2}{c|}{Cut} \\ \cline{5-8} & & & & No mit & MEM & No mit & MEM \\ \hline Ripple carry adder [7] & 10 & 2 & 2 & 0.315 & 0.325 & 0.375 & 0.5138 \\ \hline Bernstein Vazirani [1] & 10 & 1 & 2 & 0.702 & 0.714 & 0.728 & 0.749 \\ \hline RealAmplitudes [11] & 10 & 1 & 2 & 0.806 & 0.876 & 0.977 & 0.994 \\ \hline Trotterized [15] & 10 & 2 & 2 & 0.878 & 0.891 & 0.927 & 0.960 \\ \hline \end{tabular}
\end{table}
Table 4: Fidelity for 10-qubit circuits by scheduling over the hardware in Table 2 with and without circuit cutting for no error mitigation (no mit) and measurement error mitigation (MEM)
Figure 7: Fidelity obtained by the \(NoTaDS\) scheduler with an increasing number of subcircuits for 20-qubit RealAmplitudes and Bernstein Vazirani (BV) circuits
classical postprocessing time in check.
We show a complementary result in Fig. 9, where we increase the size of the circuit and partition each circuit into two subcircuits. The two subcircuits are then scheduled by our proposed NoTaDS scheduler. We notice that the fidelity decreases with an increase in the size of the circuit. The result is bootstrapped over 10 trials.
### Experimental results on 28-qubit circuit
In our chosen set of hardware (Table 2), the largest hardware contains 27 qubits. Hence, here we consider one circuit that is too big to be executed on any of the hardware. However, via circuit cutting and NoTaDS scheduling, we can still execute such a circuit. In Table 6 we show the fidelity obtained with and without MEM for a 28-qubit RealAmplitudes circuit. We do not have any fidelity value for uncut since it is too big to be executed on our set of devices. We observe that the fidelity is poor without error mitigation, but improves significantly in the presence of MEM. This is obvious since the measurement is the most dominant noise in current quantum devices (see Table 2 for the probabilities of different types of noise). Therefore, the larger the circuit, the stronger the effect of measurement error, leading to poor fidelity.
We have selected hardware devices up to 27-qubit devices for our experiments. Currently IBM has hardware with 433 qubits, and our proposed method is independent of the size of the hardware.
### Change in fidelity with and without scheduling
As stated before, till now in all our experiments we have fixed \(\tau_{j}=\tau_{min}\)\(\forall\)\(j\in H\). Naturally, this makes the scheduling restrictive. It may be possible to execute all the subcircuits on the best device to obtain the best fidelity at the cost of execution time. Our restriction over the maximum allowable execution time \(\tau\) prevented NoTaDS from doing so.
In Fig. 10 we consider a 16-qubit RealAmplitudes circuit which is partitioned into two balanced subcircuits (i.e., the number of qubits and gate count are roughly equal for both). We show the fidelity obtained when the two subcircuits are executed in all possible hardware pairs \((j,k)\), \(j,k\in H\). Note that since there are only two subcircuits,
\[\text{\emph{execution time}}=\begin{cases}\tau_{max}&\text{for }j=k\\ \tau_{min}&\text{otherwise.}\end{cases}\]
The partition being balanced, we have \(\tau_{max}\simeq 2\cdot\tau_{min}\).
The maximum fidelity in Fig. 10 is obtained when both subcircuits are executed on _ibm_hanoi_, whereas if the two subcircuits are executed on _ibm_hanoi_ and
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{Benchmark circuit} & \multirow{2}{*}{\# qubits} & \multirow{2}{*}{Cut size} & \multirow{2}{*}{\# subcircuits} & \multicolumn{4}{c|}{Fidelity} \\ \cline{5-8} & & & & \multicolumn{2}{c|}{Uncut} & \multicolumn{2}{c|}{Cut} \\ \cline{5-8} & & & & No mit & MEM & No mit & MEM \\ \hline RealAmplitudes & 28 & 1 & 2 & - & - & 0.31 & 0.7 \\ \hline \end{tabular}
\end{table}
Table 6: Fidelity for 28-qubit circuits by scheduling over the hardware in Table 2 with and without circuit cutting for no error mitigation (no mit) and measurement error mitigation (MEM)
Figure 8: The increase in classical reconstruction time of the full probability distribution from the subcircuits with increasing number of subcircuits
Figure 9: Fidelity obtained by the NoTaDS scheduler with increasing size of the circuit where each circuit is partitioned into two subcircuits.
_ibmq_kolkata_, the fidelity is slightly lower but the execution time is reduced to half. The reduction in fidelity is only \(\sim 1\%\).
On the other hand, the result here indicates that if it is not possible to execute both subcircuits on _ibm_hanoi_ due to restrictions on the execution time, it is more useful to schedule the subcircuits to two different hardware using NoTaDS than to execute both of them together on any other single hardware. This holds true even if there is some hardware whose maximum execution time can accommodate both subcircuits. For example, if we leave _ibm_hanoi_ out of consideration, it is better to distribute the two subcircuits to, say, _ibmq_kolkata_ and _ibm_cairo_ than to execute both of them on the latter, even if it can accommodate both. This is because _ibmq_kolkata_ has a lower noise profile than _ibm_cairo_. Therefore, NoTaDS will find this distributed scheduling, and improve the final fidelity of the circuit.
In Fig. 10, the number of subcircuits is 2, so for \(\tau_{max}\) both of them can be executed on the best hardware and for \(\tau_{min}\) distinct devices need to be assigned. In this scenario, allowing an execution time of \(\tau_{min}<\tau<\tau_{max}\) to one or more hardware cannot change the scheduling, and hence the fidelity. However, if the number of subcircuits is more than 2, then \(NoTaDS\) may be able to find even better schedules for maximum execution time \(\tau_{min}<\tau<\tau_{max}\) so that the difference in fidelity obtained from the scheduling with that when all the subcircuits are executed on the best device is less than even 1%.
Naturally, the answers to questions such as (i) what is the best schedule, and (ii) whether it is better to schedule all the subcircuits to the same hardware, change with time (since noise varies with time), with the list of available hardware, and with the circuits. NoTaDS automates this process by finding the optimum scheduling based on the hardware noise profile and the upper bound on the execution time of each hardware.
## VI Concluding remarks
In this paper, we propose a noise and time optimized distributed scheduler that schedules the subcircuits obtained after circuit cutting to hardware such that the fidelity is maximized while the execution time on each hardware is restricted by a pre-specified limit. Note that this same scheduler can be used to schedule a set of circuits to hardware even without circuit cutting. We show that our method outperforms the fidelity of the uncut circuit executed on the least noisy device, and yet requires significantly lower execution time on each quantum processor. This method combines inter-device parallelization with noise-aware scheduling to optimize the fidelity of the circuit. It is expected to be particularly useful in the near term, when the devices are noisy and the execution time available to a user on a quantum device is limited. The scheduling of circuits for which balanced partitioning may be too costly may be explored in future work.
###### Acknowledgements.
This research used resources of the National Energy Research Scientific Computing Center, a DOE Office of Science User Facility supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231 using NERSC award NERSC DDR-ERCAP0023266.
## Code availability
The code to find the optimal scheduling using our proposed NoTaDS scheduler is available in [https://github.com/debasmita2102/NoTODS](https://github.com/debasmita2102/NoTODS).
## Conflict of interest
The authors report no conflict of interest. Ritajit Majumdar started working on this project when he was affiliated with Indian Statistical Institute and continued the project in his current affiliation.
Figure 10: Fidelity of a 16-qubit RealAmplitudes circuit, when partitioned into 2 subcircuits, and executed on all possible hardware pair. Since there are only two subcircuits, when the subcircuits are scheduled to two different hardware, the execution time is \(\tau_{min}\), and when scheduled to the same hardware the execution time is \(\tau_{max}\). |
2308.16659 | Autoencoder-based Online Data Quality Monitoring for the CMS
Electromagnetic Calorimeter | The online Data Quality Monitoring system (DQM) of the CMS electromagnetic
calorimeter (ECAL) is a crucial operational tool that allows ECAL experts to
quickly identify, localize, and diagnose a broad range of detector issues that
would otherwise hinder physics-quality data taking. Although the existing ECAL
DQM system has been continuously updated to respond to new problems, it remains
one step behind newer and unforeseen issues. Using unsupervised deep learning,
a real-time autoencoder-based anomaly detection system is developed that is
able to detect ECAL anomalies unseen in past data. After accounting for spatial
variations in the response of the ECAL and the temporal evolution of anomalies,
the new system is able to efficiently detect anomalies while maintaining an
estimated false discovery rate between $10^{-2}$ to $10^{-4}$, beating existing
benchmarks by about two orders of magnitude. The real-world performance of the
system is validated using anomalies found in 2018 and 2022 LHC collision data.
Additionally, first results from deploying the autoencoder-based system in the
CMS online DQM workflow for the ECAL barrel during Run 3 of the LHC are
presented, showing its promising performance in detecting obscure issues that
could have been missed in the existing DQM system. | Abhirami Harilal, Kyungmin Park, Michael Andrews, Manfred Paulini | 2023-08-31T11:58:13Z | http://arxiv.org/abs/2308.16659v1 | # Autoencoder-based Online Data Quality Monitoring for the CMS Electromagnetic Calorimeter
###### Abstract
The online Data Quality Monitoring system (DQM) of the CMS electromagnetic calorimeter (ECAL) is a crucial operational tool that allows ECAL experts to quickly identify, localize, and diagnose a broad range of detector issues that would otherwise hinder physics-quality data taking. Although the existing ECAL DQM system has been continuously updated to respond to new problems, it remains one step behind newer and unforeseen issues. Using unsupervised deep learning, a real-time autoencoder-based anomaly detection system is developed that is able to detect ECAL anomalies unseen in past data. After accounting for spatial variations in the response of the ECAL and the temporal evolution of anomalies, the new system is able to efficiently detect anomalies while maintaining an estimated false discovery rate between \(10^{-2}\) to \(10^{-4}\), beating existing benchmarks by about two orders of magnitude. The real-world performance of the system is validated using anomalies found in 2018 and 2022 LHC collision data. Additionally, first results from deploying the autoencoder-based system in the CMS online DQM workflow for the ECAL barrel during Run 3 of the LHC are presented, showing its promising performance in detecting obscure issues that could have been missed in the existing DQM system.
## 1 Introduction
In a large-scale high energy physics experiment like CMS [1], ensuring timely detection of issues that could affect the detector performance and the quality of data taken is a major task that requires significant personpower and time. Presently, the CMS online DQM [2] consists of a set of histograms that are filled based on a first-pass analysis of a subset of data collected by the detector. These histograms are monitored continuously by a DQM _shifter_ who reports on any apparent irregularities observed. Depending on the severity of the problem, various mitigation measures up to stopping of the data taking would be performed by the relevant experts. While this system has proven to be dependable, the changing running conditions and increasing collision rates, along with aging electronics, bring forth failure modes that are newer and harder to predict. Machine learning (ML) based approaches to anomaly detection in DQM have been adopted by previous efforts in CMS [3, 4]. In this paper, an unsupervised method of anomaly detection for the online DQM of the CMS ECAL is presented utilizing an autoencoder (AE) [5] on ECAL data processed as two-dimensional (2D) images. In a novel approach, correction strategies are implemented to account for spatial differences in the ECAL response as well as the time-dependent nature of anomalies in the detector. This system is deployed in the online DQM for the ECAL barrel during LHC Run 3 collisions, complementing
the existing DQM plots. First results from the AE-based anomaly detection system are reported indicating it to be a highly valuable diagnostic tool for ECAL experts involved in real-time data taking operations. All the figures shown here are from the approved CMS public results in Refs. [6, 7].
## 2 The CMS Electromagnetic Calorimeter and Data Quality Monitoring
The CMS ECAL [8] is a hermetic calorimeter which measures the energy, time, and position of photons, electrons and electromagnetic fraction of jets. It played a crucial role in the discovery of the Higgs Boson [9] as well as in the measurement of the Higgs properties. ECAL is made up of scintillating lead tungstate crystals arranged in a cylindrical central barrel (EB) section closed by two endcaps (EE+ and EE-).
The DQM system in ECAL mainly consists of two kinds of histograms: occupancy-style histograms as shown in Fig. 1(a), filled with real-time data of vital quantities, and quality-style histograms as shown in Fig. 1(b), which are drawn based on thresholds and rules applied on the quantity in the occupancy-style histograms. Quality-style histograms are color-coded maps that are easy to interpret, so that it is possible to tell at a glance if something is wrong in the ECAL. The color code used is as follows: green for _good_, red for _bad_, brown for a _known problem_, and yellow for _no data_, which may or may not be an issue depending on the context. The histograms are often plotted at the granularity of a _trigger tower_ (TT), which is defined as a set of 5x5 crystals. Each single square in Fig. 1 represents a TT, and 68 such TTs form a _supermodule_ (SM) in the barrel, corresponding to the numbered rectangles in Fig. 1. In online DQM, these histograms are accumulated over a CMS data acquisition _run_, with each histogram plotted at every _lumisection_ (LS), which corresponds to a time interval of about 23 seconds, over which the luminosity is considered to remain approximately constant.
With increasing luminosity and harsher radiation environment, it becomes impossible to anticipate all failure modes in a complex detector like ECAL. Though there are multiple alarms set in the current DQM framework to catch various types of errors, they are prone to high false positives and are limited by having to define hard-coded rules for every possible detector geometry. To mitigate these issues, an automated approach to anomaly detection is explored using unsupervised ML.
## 3 Autoencoder-based Anomaly Detection System
### Network and Anomaly Detection Strategy
An unsupervised anomaly detection method is developed using an AE trained on occupancy-style histograms from the ECAL DQM processed as 2D images. The AE built with a ResNet [10] architecture using Pytorch[11] is trained with manually certified _good_ data. The encoder network of the AE encodes the input image into a lower dimensional latent space, and the decoder network tries to reconstruct the original image from the encoded space. The goodness
Figure 1: DQM plots for ECAL barrel: (a) occupancy-style and (b) quality-style histogram.
of reconstruction is measured by the reconstruction loss \(\mathcal{L}\), computed as the Squared Error between the input (\(x\)) and the AE-reconstructed output (\(x^{\prime}\)) as defined in Eq. 1:
\[\mathcal{L}(x,x^{\prime})=|(x-x^{\prime})|^{2} \tag{1}\]
The network trained on good data reconstructs the nominal detector image well with minimal reconstruction loss. When fed with anomalous data, however, it fails to reconstruct the anomalies and gives a higher loss. The squared error on each tower is calculated and plotted as a 2D loss map, on which some post-processing steps are applied as explained in the next sections. A threshold is then derived from the anomalous loss values that can efficiently catch 99% of the anomalies.
### Dataset, Training and Validation
Each input image to the AE is the digitized hit occupancy map from a single LS. The dataset used for training and validation is taken from the 2018 runs during LHC Run 2. It consists of 100 000 2D occupancy images, with training and validation dataset split in 9-to-1 ratio. In order to make the quality interpretation consistent across different run conditions, occupancy maps are normalized with respect to _pileup_ (PU), which are additional proton-proton interactions within the same proton bunch crossing. After pre-processing with the PU correction, AE models are trained and validated separately for the barrel and each endcap.
In addition to the nominal validation, fake anomaly validation is performed, using the good images from the nominal validation set but with synthetic anomalies introduced in random towers. Three types of anomaly scenarios are studied:
1. Missing SM/sector: Entire SMs for the barrel and sectors for endcaps are randomly set to have zero occupancy values in each LS.
2. Single zero occupancy tower: A single tower is set to have zero occupancy at random in each LS.
3. Single hot tower: A single tower is set to be _hot_, or having higher-than-nominal occupancy.
### Spatial Response Correction
In the ECAL, the occupancy close to the beam pipe tends to be higher. This is clearly visible in the average occupancy map shown in Fig. 2(a). This effect is reflected in the loss map shown in Fig. 2(b) in the case of a missing SM, where the towers in the higher \(|\eta|\) region have higher loss than those in the lower \(|\eta|\) region, for an anomaly affecting all the towers in the SM equally. In order to interpret the towers as equally anomalous across the SM, the loss map is normalized by the average occupancy map, as a spatial response correction, to obtain a uniform loss map as illustrated in Fig. 2(c).
Figure 2: (a) Average occupancy map of EB. Loss maps for missing SM scenario (b) before and (c) after spatial response correction.
### Time correction
A correction to further reduce false positives is implemented by utilizing the time-dependent nature of the anomalies in the detector. Unlike random fluctuations, real anomalies would persist with time. Figure 3 describes the time correction strategy, where loss maps that have been corrected for spatial effects from three consecutive LSs are multiplied with one another at a tower level to boost the anomaly and suppress the fluctuations.
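A schematic of the combined post-processing, assuming `inputs` and `outputs` hold the per-LS input and AE-reconstructed occupancy maps for three consecutive LSs (the exact array handling in the production code may differ), is:

```python
import numpy as np

def corrected_loss_map(inputs, outputs, avg_occupancy):
    """Squared-error loss maps (Eq. 1) for three consecutive LSs, normalized by the
    average occupancy map (spatial correction) and multiplied tower-by-tower
    (time correction)."""
    maps = [(x - x_rec) ** 2 / avg_occupancy for x, x_rec in zip(inputs, outputs)]
    return maps[0] * maps[1] * maps[2]

quality_map = corrected_loss_map(inputs, outputs, avg_occ) > threshold  # True = anomalous tower
```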
## 4 Results
An anomaly tagging threshold is chosen based on the validation set with fake anomalies, such that 99% of the anomalies are detected. The performance of the AE is measured using the metric False Discovery Rate (FDR) defined as:
\[\text{FDR}=\frac{\text{Number of good towers above the anomaly threshold}}{\text{Number of good and bad towers above the anomaly threshold}} \tag{2}\]
The lower the FDR, the fewer the false detections and the better the performance of the AE.
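Equivalently, for a flagged set of towers the FDR can be computed as in the following sketch, where `truth_bad` is an assumed boolean map of the injected fake anomalies:

```python
import numpy as np

def false_discovery_rate(loss_map, truth_bad, threshold):
    flagged = loss_map > threshold
    false_pos = np.sum(flagged & ~truth_bad)    # good towers above the threshold
    return false_pos / max(np.sum(flagged), 1)  # over all towers above the threshold
```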
### Testing on Fake Anomalies
Table 1 summarizes the FDR values from each anomaly scenario. For the barrel and the endcaps, single zero occupancy towers are most challenging to detect. Spatial correction improves the AE performance, with a much greater effect for the endcaps. This is due to the presence of a larger effective gradient in the occupancy values across the endcap region compared to the barrel. Further improvement in the FDRs by an order of magnitude is achieved with the time correction. Final performance scores after applying all post-processing are around 5% for the zero occupancy tower scenario and sub-percent for the missing SM (for the barrel) and hot tower scenarios. Remaining false positives have likely come from real anomalous towers that have fallen into the manually certified good dataset. Based on the performance for each scenario, a single anomaly tagging threshold is chosen for each sub-detector part, such that it can efficiently tag all anomalies.
### Testing on Real Anomalies
The AE model is further tested using real anomalous data from 2018 and 2022 LHC runs as illustrated in Fig. 4. Figures 4(a) and (b) respectively show occupancy plots from Run 2 with
Figure 3: Time correction strategy: loss maps from three consecutive LSs (top panel) are multiplied at a tower level to make a time-corrected loss map (bottom).
a missing SM EB\(-\)03, and with a ring of hot towers with a zero occupancy tower in the center in EB\(-\)01 and EB\(-\)18. Their corresponding final quality plots from the AE loss map identify the bad towers in red in Figures 4(d) and 4(e) respectively. For the endcaps, a real anomaly case from Run 3 is illustrated in Fig. 4(c), with two zero occupancy towers and a hot tower around the edge. The AE quality plot in Fig. 4(f) spots the two zero occupancy towers correctly. The hot tower does not show up in the quality plot, as it is previously identified as problematic and therefore masked in the DQM system.
The performance of the AE on real-life anomalies demonstrates its ability to detect various kinds of anomalies using a single threshold, without the need for hard-coded rules based on the type of anomaly or the geometry, and emphasizes the power of unsupervised ML as an efficient, adaptable anomaly detection tool.
## 5 Deployment in Online DQM for LHC Run3
The AE-based anomaly detection system, named MLDQM, has been deployed in the ECAL online DQM workflow for the ECAL barrel since the start of Run 3 of the LHC. This introduces
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|} \hline Scenario & Missing SM & \multicolumn{2}{c|}{Zero Occupancy Tower} & \multicolumn{3}{c|}{Hot Tower} \\ \hline & Barrel & Barrel & EE+ & EE– & Barrel & EE+ & EE– \\ \hline AE & 3.6\% & 51\% & 86\% & 87\% & 2.8\% & 0.01\% & 0.00\% \\ no correction & 3.1\% & 49\% & 13\% & 14\% & 2.9\% & 0.06\% & 0.05\% \\ \hline AE after & 0.13\% & 4.1\% & 5.6\% & 6.3\% & 0.00\% & 0.00\% & 0.00\% \\ spatial and time correction & 0.13\% & 4.1\% & 5.6\% & 6.3\% & 0.00\% & 0.00\% & 0.00\% \\ \hline \end{tabular}
\end{table}
Table 1: Summary of FDR at 99% anomaly detection in fake anomaly validation.
Figure 4: Input occupancy images with real anomalies (top) and corresponding AE quality plots (bottom) for the barrel and endcaps.
a new ML quality plot from the AE in the ECAL DQM workspace (see Fig. 5(a)) complementing the existing plots. The model inference is done by exporting the trained Pytorch model to Onnx[12] and run in CMS production using Onnx Runtime[13]. The deployment of the models for the endcaps is underway and will be ready for the 2023 runs.
It has been observed that the MLDQM is able to correctly identify consistently bad towers, as well as transient bad towers, which may point to deteriorating channels in the ECAL that are not easily detected by the existing DQM plots. Figure 5(a) shows the ML quality plot with two towers in EB+08 marked red: Tower 1 closest to \(i\eta=0\) and Tower 2 closest to \(i\eta=85\). Figure 5(b) shows the total digitized hit occupancy map accumulated over all LSs of the run in the online DQM. Here, Tower 1 is visible with very low occupancy compared to other towers, indicating that it is a persistent zero occupancy tower, while Tower 2 is faintly visible, indicating that it likely had zero occupancy in several LSs but not in all, thus pointing to a transient anomaly. This feature also shows up in the occupancy map in Fig. 5(c), produced offline by averaging over several runs in Run 3. The low occupancy of Tower 2 in this plot implies that the tower indeed had zero occupancy transiently for many LSs in these runs, and that it is likely not a stand-alone random occurrence. This tower could be pointing to channels on the way to becoming permanently faulty. This feature of the MLDQM which detects potentially deteriorating channels would be immensely helpful to detector experts monitoring the health of the detector. By keeping track of how often a particular tower is flagged as bad by the AE and defining a threshold on this frequency, experts can choose to mask the transient tower proactively.
## 6 Summary
A robust, deployment-ready AE-based anomaly detection and localization system has been developed for the CMS ECAL using unsupervised ML. An efficient pileup-based normalization strategy is applied to images made from the raw detector data, making quality interpretations consistent across changing experimental conditions. The application of novel techniques of spatial and time corrections gives an order of magnitude improvement in performance. The autoencoder-based system demonstrates efficient detection of anomalies of various degrees, shapes, and positions in the detector up to a tower-level granularity, using a single efficient threshold for each sub-detector part. The MLDQM system deployed in the online DQM workflow for the ECAL barrel performs well on real-time data from Run 3 by detecting anomalies as well as identifying potentially degrading channels that go undetected by the existing DQM plots. This ML-based DQM system is designed to complement and improve the existing DQM by helping detector experts make more accurate decisions and reduce false alarms. This autoencoder-based
Figure 5: (a) ML quality plot accumulated over 9 LSs, from the MLDQM deployed in the online DQM workflow for EB from a 2022 run, showing two bad towers in red. (b) The digitized hit occupancy plot accumulated over all the LSs in the full run from the online DQM for the same run. (c) PU-corrected average occupancy plot over several runs in 2022.
anomaly detection system can be generalized and adapted to other particle physics experiments for data quality monitoring.
|
2309.10101 | JWST lensed quasar dark matter survey I: Description and First Results | The flux ratios of gravitationally lensed quasars provide a powerful probe of
the nature of dark matter. Importantly, these ratios are sensitive to
small-scale structure, irrespective of the presence of baryons. This
sensitivity may allow us to study the halo mass function even below the scales
where galaxies form observable stars. For accurate measurements, it is
essential that the quasar's light is emitted from a physical region of the
quasar with an angular scale of milli-arcseconds or larger; this minimizes
microlensing effects by stars within the deflector. The warm dust region of
quasars fits this criterion, as it has parsec-size physical scales and
dominates the spectral energy distribution of quasars at wavelengths greater
than 10$\mu$m. The JWST Mid-Infrared Instrument (MIRI) is adept at detecting
redshifted light in this wavelength range, offering both the spatial resolution
and sensitivity required for accurate gravitational lensing flux ratio
measurements. Here, we introduce our survey designed to measure the warm dust
flux ratios of 31 lensed quasars. We discuss the flux-ratio measurement
technique and present results for the first target, DES J0405-3308. We find
that we can measure the quasar warm dust flux ratios with 3% precision. Our
simulations suggest that this precision makes it feasible to detect the
presence of 10$^7$ M$_\odot$ dark matter halos at cosmological distances. Such
halos are expected to be completely dark in Cold Dark Matter models. | A. M. Nierenberg, R. E. Keeley, D. Sluse, D. Gilman, S. Birrer, T. Treu, K. N. Abazajian, T. Anguita, A. J. Benson, V. N. Bennert, S. G. Djorgovski, X. Du, C. D. Fassnacht, S. F. Hoenig, A. Kusenko, C. Lemon, M. Malkan, V. Motta, L. A. Moustakas, D. Stern, R. H. Wechsler | 2023-09-18T19:23:26Z | http://arxiv.org/abs/2309.10101v1 | # JWST lensed quasar dark matter survey I: Description and First Results.
###### Abstract
The flux ratios of gravitationally lensed quasars provide a powerful probe of the nature of dark matter. Importantly, these ratios are sensitive to small-scale structure, irrespective of the presence of baryons. This sensitivity may allow us to study the halo mass function even below the scales where galaxies form observable stars. For accurate measurements, it is essential that the quasar's light is emitted from a physical region of the quasar with an angular scale of milli-arcseconds or larger; this minimizes microlensing effects by stars within the deflector. The warm dust region of quasars fits this criterion, as it has parsec-size physical scales and dominates the spectral energy distribution of quasars at wavelengths greater than 10\(\mu\)m. The JWST Mid-Infrared Instrument (MIRI) is adept at detecting redshifted light in this wavelength range, offering both the spatial resolution and sensitivity required for accurate gravitational lensing flux ratio measurements. Here, we introduce our survey designed to measure the warm dust flux ratios of 31 lensed quasars. We discuss the flux-ratio measurement technique and present results for the first target, DES J0405-3308. We find that we can measure the quasar warm dust flux ratios with 3% precision. Our simulations suggest that this precision makes it feasible to detect the presence of \(10^{7}\) M\({}_{\odot}\) dark matter halos at cosmological distances. Such halos are expected to be completely dark in Cold Dark Matter models.
keywords: dark matter - gravitational lensing: strong - quasars: general -
## 1 Introduction
Understanding the properties and behavior of dark matter (DM) is essential to our understanding of structure formation and galaxy formation. Dark matter is a central ingredient of our current best model for the structure and evolution of the universe, on scales ranging from the cosmic microwave background (Planck Collaboration, 2020) to the rotation curves of spiral galaxies and the dispersion support of spheroidal dwarf galaxies (see, e.g. Weinberg et al., 2015; Bullock & Boylan-Kolchin, 2017, and references therein). In this model, baryonic galaxies form within extended dark matter halos (White & Rees, 1978; White & Frenk, 1991). Direct detection of these dark halos would provide robust evidence for dark matter's existence. Moreover, the particle properties of dark matter, such as its mass, formation mechanism, and possible self-interactions, determine the abundance and internal structure of halos (see e.g. Buckley & Peter, 2018, and references therein). As dark matter continues to evade laboratory detection and is not guaranteed to be detected directly through non-gravitational interactions, observations of the properties of dark matter halos provide a crucial way to test hypotheses about its particle properties.
The 'Cold' dark matter scenario and cosmological theory, \(\Lambda\)CDM, predicts the existence of dark halos down to planet masses (Wang et al., 2020) in many models. Detecting these dark objects, below the expected scale of galaxy formation, would provide strong evidence in support of CDM and rule out entire classes of theories in which these low-mass objects do not exist. For example, warm dark matter (WDM) refers categorically to scenarios in which free-streaming suppresses the matter power spectrum below a characteristic scale, suppressing the concentration of halos and precluding their formation below a certain mass scale (Bode et al., 2001; Schneider et al., 2012; Bose et al., 2016; Ludlow et al., 2016). Self-interacting dark matter (SIDM) models introduce a self-interaction cross section between dark matter particles small enough to preserve the successes of CDM on large scales, but large enough to drive heat conduction through dark matter halos. This results in a dynamic evolution of halo density profiles that begins with core formation and eventual core collapse (Spergel & Steinhardt, 2000; Balberg et al., 2002; Kaplinghat et al., 2016; Yang et al., 2023, 2023). Models in which an extremely light boson with a mass \(\sim 10^{-22}\)eV comprises all or part of the dark matter, usually referred to as "ultra-light dark matter" (ULDM) or fuzzy dark matter, predict suppression of small-scale structure similar to WDM, and manifest quantum-mechanical interference effects on galactic scales due to the kpc-scale de Broglie wavelength of the particles (Schive et al., 2014; Mocz et al., 2017; Chan et al., 2020; Laroche et al., 2022; Powell et al., 2023). More generally, any theory that modifies the linear matter power spectrum on scales \(k>5\) Mpc\({}^{-1}\) impacts the abundance and internal structure of dark matter halos. This includes certain models of inflation, primordial non-Gaussianity, late-decaying DM particles, or a non-zero running spectral index in slow-roll inflation (Zentner & Bullock, 2002; Stafford et al., 2020; Gilman et al., 2022; Ando et al., 2022; Maria Ezquiaga et al., 2022; Esteban et al., 2023). Primordial black holes (PBH) are another potential DM candidate that primarily affect the internal structure of subhalos (Afshordi et al., 2003; Ricotti et al., 2008; Carr et al., 2016; Carr & Kuhnel, 2020; Dike et al., 2023)
Galaxy-scale strong gravitational lensing can reveal dark matter structure through its gravitational effects on sub-galactic scales, and thus provide insight into its properties (see Vegetti et al. (2023) for a comprehensive review). In a galaxy-scale strong gravitational lens, multiple images of a background source appear due to the deflection of light by a foreground galaxy and its surrounding dark matter halo. An extended background source, such as a galaxy, will appear warped and distorted by strong lensing, and will often partially encircle the foreground deflector. A more compact source, such as a quasar, typically appears two or four times from the perspective of the observer1. The first derivative of the gravitational potential determines the relative positions of the lensed images, while the second derivative of the potential determines their magnifications. Thus, the positions and magnifications of lensed images constrain the mass distribution of the deflector across a range of scales, spanning the size of the Einstein radius (typically \(\sim 1\) arcsec) down to the mill-arcsecond scales probed by the image magnifications. These data are therefore sensitive to the abundance and internal structure of dark matter halos several orders of magnitude less massive than the main deflector and its host halo. The sensitivity of strong lensing observables to both the abundance and internal structure of halos has led to constraints on warm dark matter (Vegetti et al., 2018; Hsueh et al., 2020; Gilman et al., 2020; Zel'ko et al., 2022), fuzzy dark matter (Laroche et al., 2022; Powell et al., 2023), self-interacting dark matter (Minor et al., 2021; Gilman et al., 2021, 2023), primordial density fluctuations (Gilman et al., 2022), and primordial black holes (Dike et al., 2023).
Footnote 1: If the source is a quasar surrounded by a galaxy, both extended arcs and multiple images of the quasar appear.
The state of the field has evolved considerably since Mao & Schneider (1998) and Dalal & Kochanek (2002) showed that low-mass dark matter halos could explain the relative magnifications (or flux ratios) of quadruply imaged radio-loud quasars. In the ensuing decades, the sample of known galaxy-scale strong lenses has grown by an order of magnitude, both through the discovery of new systems and the use of radio-quiet quasars observed at optical and infrared wavelengths. The modeling frameworks used to analyze and interpret data from strong lens systems now include more accurate models for the population of dark matter halos perturbing the lenses, including dark halos along the line of sight (Xu et al., 2012; Despali et al., 2018; Gilman et al., 2019, 2020; Sengul et al., 2022), correlated structure around the host halo (Gilman et al., 2019), and the tidal evolution of dark subhalos. The calibration of the substructure models implemented in lensing analyses come from the predictions of numerical simulations of structure formation in early-type galaxies (Fiacconi et al., 2016; Nadler et al., 2023) and semi-analytic models, including galacticus (Benson, 2012) and SatGen (Jiang et al., 2021). Advances in the modeling of strong lens systems have been enabled by software packages such as lensmodel2, lensronomy3(Birrer & Amara, 2018; Birrer et al., 2021), GLEe (Suyu & Halkola, 2010),
PyAutoLens4(Nightingale et al., 2021), Herculens5(Galan et al., 2022), and the codes of Vegetti & Koopmans (2009) and Vernardos & Koopmans (2022), which include capabilities to forward-model lensing observables through multi-plane lensing computations and simultaneous reconstruction of lensed images and background sources. Finally, open-source packages such as pyHalo6 and paltas7(Wagner-Carena et al., 2023) interface between lensing codes and dark matter models to quickly generate populations of dark matter halos for lensing simulations.
Footnote 4: [https://github.com/Jammy2211/PyAutoLens](https://github.com/Jammy2211/PyAutoLens)
Footnote 5: [https://github.com/austinpeel/herculens](https://github.com/austinpeel/herculens)
Footnote 6: [https://github.com/dangilman/pyHalo](https://github.com/dangilman/pyHalo)
Footnote 7: [https://github.com/swagnercarena/paltas](https://github.com/swagnercarena/paltas)
The background source plays a key role in gravitational lensing inferences of dark matter structure from image flux ratios because its spatial extent imposes a particular angular and temporal scale on the problem. For substructure lensing studies, the source must be extended enough that the light-crossing time exceeds the arrival time difference between lensed images (typically days to months) so that intrinsic variations in the source produce a negligible change in the flux ratios. For a typical time delay of \(\sim 10\) days, this implies a spatial extent of at least 0.1 parsec. The source must also be extended enough to be insensitive to microlensing by stars in the main deflector. The perturbation of an image magnification caused by a halo depends on the deflection angle produced by the halo relative to the angular size of the source (Dobler & Keeton, 2006; Metcalf & Amara, 2012). Stars produce deflection angles of order \(\sim\mu\)as. Given typical galaxy-scale lensing configurations, this implies a minimum required source size of \(\sim\) mas, which corresponds to physical scales of \(\sim 1\) parsec at a typical source redshift of \(z=2\). Quasar radio and narrow-line emission are extended enough to meet these criteria (Metcalf & Madau, 2001), and these sources have yielded some of the strongest constraints to date on a turnover in the halo mass function (Gilman et al., 2018, 2019; Hsueh et al., 2020), with an upper limit of \(\rm M_{lim}<10^{7.8}\)\(\rm M_{\odot}\)(2\(\sigma\)) (Gilman et al., 2020a). Improvements in this measurement can be made by increasing the sample of lenses, improving the lens modeling techniques applied to interpret the data, improving flux-ratio measurement sensitivity, and choosing sources with intrinsically smaller sizes.
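For reference, the parsec-to-milliarcsecond conversion quoted above can be checked with a standard flat \(\Lambda\)CDM cosmology. The sketch below uses astropy with \(h=0.7\) and \(\Omega_{\rm m}=0.3\), the values adopted later in this paper; it is an illustration, not part of the lensing analysis.

```python
from astropy.cosmology import FlatLambdaCDM
import astropy.units as u

cosmo = FlatLambdaCDM(H0=70, Om0=0.3)
z_source = 2.0   # typical source redshift quoted in the text

# Proper transverse scale subtended by one milliarcsecond at the source redshift.
kpc_per_arcsec = cosmo.kpc_proper_per_arcmin(z_source).to(u.kpc / u.arcsec)
pc_per_mas = kpc_per_arcsec.to(u.pc / u.mas)
print(f"At z = {z_source}, 1 mas corresponds to ~{pc_per_mas:.1f}")   # parsec scales
```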
Quasar warm dust serves as an attractive light source for flux-ratio anomaly measurements. This dust component has temperatures of hundreds of Kelvin and dominates the quasar spectral energy distribution at rest-frame wavelengths of \(\sim 8-12\mu\)m. It has typical sizes of \(\sim 0.1-10\) pc (Burtscher et al., 2013; Leftley et al., 2019), with minimal scaling with quasar luminosity. This is much smaller than the nuclear narrow-line emission with FWHM\(\sim\)100 pc (Muller-Sanchez et al., 2011; Nierenberg et al., 2014, 2017). Figure 1 demonstrates an example of the magnification induced by a perturbing subhalo on a source with a characteristic size scale of the narrow-line emission compared with the warm dust emission. The size of the quasar warm dust emission region is excellent for dark matter studies, as it is large enough to be unaffected by microlensing while still being small enough to be significantly magnified by individual low-mass dark matter halos. It is also bright and ubiquitous.
Quasar warm dust has long been recognized as a potential source for analyses of dark matter through strong lensing. Several studies have undertaken IR studies of strongly lensed quasars out to _observed frame_ 10 \(\mu\)m (Agol et al., 2000; Chiba et al., 2005; Fadely & Keeton, 2011; Jones et al., 2019; MacLeod et al., 2009, 2013; Ross et al., 2009). Chiba et al. (2005) and MacLeod et al. (2009) both measured flux ratios to be consistent with results from lensed radio jets. These studies probed _rest-frame_ wavelengths of \(\sim\)3-5 \(\mu\)m, where we expect light from the quasar accretion disk as well as both hot and warm dust components (see e.g. Stalevski et al., 2012; Sluse et al., 2013, and references therein). Measuring the flux ratios at even redder wavelengths, where the warm dust dominates the SED, may provide an even more robust constraint of dark matter structure. This has now become possible with JWST, which has both the spatial resolution and sensitivity to measure lensed quasar flux ratios to _rest-frame_ 8 \(\mu\)m given typical source redshifts.
Here we introduce our survey JWST-GO-2056 (PI: Nierenberg) of 31 quadruply lensed quasars in which we use multi-band Mid-Infrared Instrument (MIRI) imaging with JWST to measure the warm dust flux ratios. Given typical source sizes of \(1-10\) pc, and target flux ratio precision of 3%, dark matter halos with masses below \(10^{7}\)\(\rm M_{\odot}\) can cause a significant perturbation to the flux ratios. No existing dataset has demonstrated the capability to reveal the presence of dark halos on these scales across cosmological distances. Detecting a population of halos at \(10^{7}\rm M_{\odot}\) would have profound consequences for dark matter physics. Independent confirmation of the presence of dark halos through lensing would verify a key prediction of the \(\Lambda\)CDM paradigm, complementing other probes of low-mass dark matter structure, such as studies of dwarf galaxies (e.g. Nadler et al., 2021; Dekker et al., 2022; Slone et al., 2023) and stellar streams (Bovy et al., 2017; Banik et al., 2021). Non-detection of these low-mass halos would falsify CDM, and an inference of their central density profiles and concentrations would improve existing bounds from lensing on self-interacting dark matter, fuzzy dark matter, and the matter power spectrum (see Vegetti et al., 2023, and references therein).
This paper is organised as follows. In Section 2, we describe the survey design and sample selection. In Section 3 we present measurements for the first target observed for our program, DES J0405-3308 (Anguita et al., 2018). In Section 4, we describe how we measure the light components. In Section 5 we present our model for fitting the quasar spectral energy distribution. In Section 6, we discuss our results in light of previous measurements of this system. In section 7 we estimate our sensitivity to dark matter halos for the full survey. In Section 8 we provide a summary of the major conclusions of this paper. In order to calculate physical sizes, we assume a flat \(\Lambda\)CDM cosmology with \(h=0.7\) and \(\Omega_{\rm m}=0.3\).
## 2 The quasar mid-IR spectral energy distribution, and survey design
The goal of this program is to measure the flux ratios of strongly lensed warm dust emission of quasars in order to constrain the properties of dark matter. Quadruply imaged quasars were selected from the current known sample of
\(\sim\)50 systems (Inada et al. 2012; Lemon et al. 2017; Agnello et al. 2018; Agnello & Spiniello 2019; Delchambre et al. 2019; Lemon et al. 2019; Stern et al. 2021). These systems were discovered through a combination of data from wide-field surveys including the Sloan Digital Sky Survey (York et al. 2000), the Panoramic Survey Telescope and Rapid Response System (Chambers et al. 2016), Gaia (Gaia Collaboration et al. 2023), the Wide-field Infrared Survey Explorer (Wright et al. 2010), and the Dark Energy Survey (Dark Energy Survey Collaboration et al. 2016). We first describe the properties of the quasar mid-infrared spectral energy distribution that are relevant to our measurement and explain how this impacted our observation strategy and lens selection. After selecting based on the criteria outlined in the following subsections, the final sample contains 31 lenses. We will provide detailed information for each target in the papers that present flux ratios for those targets.
### Photometric requirements for spectral energy distribution fitting
Our goal is to isolate emission coming from physical regions more extended than \(\sim 0.1\) pc in order to ensure that these regions subtend an angular size of \(\sim\)mas, and are therefore not contaminated by stellar microlensing in the lens galaxy. This in turn ensures that the flux ratios we measure are sensitive only to the presence of low-mass dark matter halos rather than stellar microlensing or intrinsic variability.
The current picture of the mid-IR emitting region of quasars has been built up using a combination of narrow-band imaging, reverberation mapping, and high-resolution interferometric measurements. One model is consistent with all of these observations. In this model, the mid-IR SED of quasars is composed of three relatively distinct sources of emission. At wavelengths below 2 microns, there is significant emission from the quasar accretion disk, which has physical scales of light-days (e.g. Wambsganss et al. 1990; Wanders et al. 1997; Anguita et al. 2008; Fausnaugh et al. 2016), corresponding to angular sizes of \(\mu\)as at typical source redshifts. At redder wavelengths, the spectral energy distribution becomes dominated by a 'hot' dust region with peak flux emitted at temperatures ranging from 1000-1400 K (\(\sim\)3 \(\mu\)m) (e.g. Bosman et al. 2023). This emission is associated with dust near the sublimation temperature that marks the inner boundary of the dusty region of the quasar and has characteristic size scales of order 0.05-0.2 pc (Suganuma et al. 2004, 2006; Mor & Trakhtenbrot 2011; GRAVITY Collaboration et al. 2020), depending on quasar luminosity. In addition to this, there is a 'warm' dust component (see e.g. Honig 2019, and references therein), which dominates the SED at wavelengths of 8-12 \(\mu\)m. This component is observed to subtend scales of \(\sim 0.1-10\) pc, with little or no scaling with luminosity (Burtscher et al. 2013; Leflley et al. 2019).
The size of the warm torus makes it both insensitive to microlensing and relatively more sensitive to low-mass perturbations than the larger narrow-line region used in previous flux-ratio anomaly studies (Nierenberg et al. 2020; Gilman et al. 2020a). Figure 1 illustrates this for the case of a saddle image in a quadruply imaged quasar. Saddle images are located at a saddle point in the time-delay surface of the lens and are therefore particularly sensitive to the effects of small-scale perturbations. The smaller source with FWHM of 5 pc, characteristic of the quasar warm dust emitting region, is significantly more perturbed by the subhalo than is the more extended source with FWHM of 80 pc, characteristic of the quasar nuclear narrow-line region. We are aiming for measurements that are sensitive to the presence of individual \(10^{7}\) M\({}_{\odot}\) NFW halos8. We selected this mass target as it is below the threshold at which the majority of halos are believed to contain detectable galaxies (e.g. Nadler et al. 2021a). Based on these simulations, we aim for a target flux-ratio measurement precision of 3%.
Footnote 8: In CDM, we expect large numbers of such subhalos and therefore we will model their collective effects.
Sluse et al. (2013) performed microlensing analyses of simulated lensed quasars spectral energy distributions and demonstrated that lensed quasar images could be significantly affected by microlensing at rest-frame wavelengths blue-ward of 8 \(\mu\)m, because of the small physical size of the hot dust emitting region, and the quasar accretion disk. Therefore, ideally, a flux-ratio study of quasars would probe only the warm dust emission at rest-frame wavelengths beyond 10 \(\mu\)m and redder in order to avoid contamination. The reddest MIRI imaging filter is 25.5 \(\mu\)m. Such a restriction on rest-frame wavelength would enable us to study only lensed quasars with redshifts below 1.5.
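The redshift limit quoted above follows from simple redshifting of the warm-dust wavelength into the reddest imaging filter; a one-line check using the values from the text:

```python
lambda_reddest_filter = 25.5   # reddest MIRI imaging filter, microns (F2550W)
lambda_rest = 10.0             # rest-frame wavelength dominated by warm dust, microns
z_max = lambda_reddest_filter / lambda_rest - 1
print(f"z_max = {z_max:.2f}")  # ~1.55, i.e. only quasars below z ~ 1.5 qualify
```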
Figure 1: Illustration of the differential magnification of a saddle image of a quadruply imaged quasar with a Gaussian light distribution by a perturbing NFW subhalo with a mass of \(10^{6}\) (dashed lines) and \(10^{7}\) M\({}_{\odot}\) (solid lines), as a function of the position of the subhalo relative to the center of the lensed image. Per cent differences in flux are relative to a model without a subhalo. The subhalo significantly alters the flux of the smaller source (blue lines) with FWHM typical of the quasar warm dust region, but it is not massive enough to significantly affect the larger source (black lines) with FWHM typical of the quasar nuclear narrow-line region. The JWST program described in this work aims to have sensitivity to the effects of \(10^{7}\) M\({}_{\odot}\) subhalos, which are not expected to contain detectable gas or stars. Our final measurements will be made statistically by generating populations of dark matter halos both in the lens and along the line of sight, and by marginalizing over uncertainties in the deflector macromodel and source properties as described in (Gilman et al. 2019, 2020a).
In order to expand our sample to higher source redshifts, and to ensure a lack of microlensing contamination at lower redshifts, we use multi-band imaging spanning the near-to-mid-IR SED of the quasar to constrain the relative contributions of the quasar accretion disk and the hot and warm dust for each lensed image. Based on simulations presented in a companion paper (Sluse et al., in prep.), such multi-band imaging enables the identification of lensed images affected by significant microlensing and can be used to reduce systematic uncertainties relative to single-band imaging only.
We adopted the following strategy to measure the spectral energy distribution of lensed quasar images. For all lenses, we obtained imaging in F560W, F1280W, and F1800W to constrain the relative brightness of the quasar accretion disk and hot dust emission. We also required the reddest filter to measure rest-frame 6 \(\mu\)m or redder. Thus, for quasars with redshifts \(z>2\), we required the faintest lensed image to be detectable in F2550W. Our target signal-to-noise was 100. Using the pre-launch JWST Exposure Time Calculator, this corresponded to a minimum lensed image flux of 1 mJy. The faintest lensed image fluxes were estimated by distributing the unresolved total flux measured in WISE W4 (22.4 \(\mu\)m) among the images according to the optical flux ratios.
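The faint-image flux estimate described above amounts to distributing the unresolved WISE W4 flux among the four images according to the optical flux ratios. The sketch below illustrates this bookkeeping; the ratios are placeholders loosely based on DES J0405-3308, not a general prescription.

```python
# Placeholder optical flux ratios relative to image A, and an unresolved W4 flux (mJy).
optical_ratios = {"A": 1.00, "B": 0.70, "C": 1.05, "D": 1.25}
total_w4_mJy = 7.7

norm = total_w4_mJy / sum(optical_ratios.values())
predicted = {img: norm * ratio for img, ratio in optical_ratios.items()}
faintest = min(predicted, key=predicted.get)
print({img: round(flux, 2) for img, flux in predicted.items()})
print(f"faintest image {faintest}: {predicted[faintest]:.2f} mJy "
      "(compare to the ~1 mJy detectability requirement)")
```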
For source quasars with redshifts \(z<2\), F2100W (rest-frame 8 \(\mu\)m or redder) provides sufficiently red wavelength coverage to mitigate microlensing. This filter is much more sensitive than F2550W given the lower background and more compact point-spread function (PSF), and thus we did not impose a minimum flux requirement for these targets beyond an unresolved detection of the lens in W4 (total W4 flux for all four images greater than \(\sim 3\) mJy).
Given typical quasar SEDs, and the sensitivity of MIRI imaging as a function of wavelength, these criteria were sufficient to ensure that the quasar flux ratios could be measured with adequate signal-to-noise in the three bluer filters.
In addition to the sensitivity requirements, we selected lenses with a minimum image separation of \(0\farcs 1\) for accurate image deblending, given that the highest resolution imaging is in F560W with a PSF FWHM of \(0\farcs 2\).
### Macromodel Requirements
Lenses were selected to have four images to constrain the smooth mass distribution, which is used as a baseline for flux-ratio anomaly studies. Furthermore, we required that the lens have a 'simple' deflector light distribution with no significant disk, and that only a single massive deflector be needed to reproduce the observed image positions.
## 3 Observations and Initial Reduction
The first system to be observed was DESJ040559.7-330851.00 (Anguita et al., 2018). This lens has a source redshift of \(z_{s}=1.713\) and a photometrically estimated deflector redshift of \(z_{d}\sim 0.3\) (Gilman et al., 2020). DESJ0405-3308 has an unresolved W4 flux of 7.7 mJy. Assuming the optical flux ratios are identical to the F2550W flux ratios, this would indicate an expected faint image flux of approximately 1.3 mJy. Based on our photometric criteria, this was bright enough to use F2550W as the reddest filter for this target, enabling us to measure fluxes at rest-frame \(\sim\)9.4 \(\mu\)m, where we expect little to no contamination from microlensing. For this system, the spectral energy distribution will provide a useful test of our SED fitting method.
Observations for DESJ0405-3308 were obtained on October 27, 2022. Exposure times were 58 s in F560W, F1280W, and F1800W and 574 s in F2550W. All exposures were divided into a three-point dither pattern to improve spatial resolution and mitigate cosmic rays.
Initial calibration was performed using the default JWST data calibration pipeline10 (Greenfield & Miller, 2016; Bushouse et al., 2022). Sky subtraction of Level 2 data products was performed using customized routines11 before drizzling to produce the final images. The final pixel scale was set to \(0\farcs 11\) per pixel, identical to the native detector pixel scale. Reduced images in each filter are shown in Figures 3-6.
Footnote 10: Using the jwst_1041.pmap context file.
Footnote 11: Based on [https://github.com/STScI-MIRI/Imaging_ExampleNB](https://github.com/STScI-MIRI/Imaging_ExampleNB)
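A schematic of the reduction flow described above is sketched below. The stage and step names follow the public jwst pipeline, but the file names are hypothetical, the step parameters are assumptions, and the custom sky subtraction is only indicated by a placeholder comment.

```python
from jwst.pipeline import Detector1Pipeline, Image2Pipeline, Image3Pipeline

# Stage 1: ramps -> rate images (repeat for each of the three dithered exposures).
Detector1Pipeline.call("jw02056_obs1_mirimage_uncal.fits", save_results=True)

# Stage 2: instrumental calibration of each rate image.
Image2Pipeline.call("jw02056_obs1_mirimage_rate.fits", save_results=True)

# (Custom sky subtraction of the Level 2 products, following the STScI-MIRI
#  example notebooks referenced above, would be applied at this point.)

# Stage 3: drizzle the dithered, sky-subtracted exposures onto a common grid,
# matching the native 0.11 arcsec per pixel scale quoted in the text.
Image3Pipeline.call(
    "obs1_image3_asn.json",                      # hypothetical association file
    steps={"resample": {"pixel_scale": 0.11}},
    save_results=True,
)
```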
## 4 Image Flux Measurement
Our goal was to accurately measure the lensed image fluxes in the presence of other light components including the lensed quasar host galaxy (which appears as a lensed arc) and the deflector galaxy light. We adopted a forward modelling approach to measure the lensed quasar image fluxes in all four filters. The model consisted of a combination of up to four light components depending on the filter, as described below.
**Lensed quasar images:** The quasar light is dominated by the accretion disk and hot and warm dust on angular size scales of micro- to milli-arcseconds. Given that this is much smaller than the FWHM of the sharpest imaging PSF (\(0\farcs 2\) in F560W), we treated these components as point sources. We wished our measurement to have as little dependence as possible on the gravitational lensing model, as the image fluxes will later be used to constrain this model with dark matter substructure. Therefore, we did not associate the point source fluxes or positions with a lens model but rather treated them as completely independent. This is the same procedure one might adopt if, for example, there were foreground stars in the data.
**Deflector light distribution:** The lens galaxy is detected in F560W and F1280W. We modelled this light distribution as an elliptical Sersic profile (Sersic, 1963).
**Lensed quasar host galaxy:** The lensed host galaxy of the quasar is apparent as an extended arc in F560W, F1280W and F1800W. We modelled the unlensed quasar
host galaxy light distribution as an elliptical Sersic profile. To produce the observed gravitationally lensed arc, we included a gravitational lensing model for the deflector mass distribution. We adopted an elliptical power-law model (Tessore & Metcalf, 2015), with external shear.
### Point spread function fitting
We used webbPSF12(Perrin et al., 2012, 2014) to fit the PSF in our data. We used a super-sampling of 3 in order to enable improved astrometric precision, and because of the large detector pixel scale relative to the sizes of the light features such as the lensed quasar host galaxy. At the time of writing, this software was in active development to update the models to match observed optics and detector properties. The default parameters provided a poor fit to the observed data due to detector-level effects. The dominant discrepancy was due to inter-pixel capacitance and charge diffusion in the detector (e.g. Argyriou et al., 2023). A preliminary model for the charge diffusion effect has been implemented. However, at the time of writing, this was only in the detector-sampled PSF models, while we required a super-sampled PSF model given the large pixel size relative to the light features.
Footnote 12: Development branch 1.2.1.
As an alternative, we found that the PSF could be modelled by varying the webbPSF Gaussian 'jitter_sigma' parameter. The 'jitter' effect is implemented in webbPSF by convolving the PSF model with a Gaussian kernel to account for spacecraft motion. In practice, the jitter effect has a nearly equivalent impact on the data, as does charge diffusion.13 The jitter_sigma value was optimized for each filter as described in the following.
Footnote 13: M. Perrin: Private communication.
We used blackbodies at the redshift of the quasar to account for the wavelength dependence of the PSF. The temperature of each blackbody was optimized separately for each filter. Although in principle the PSF spectrum should be connected to the SED of the quasar (rather than a single blackbody), we found that a single blackbody model for the PSF source provided an excellent fit to the data. We defer incorporating additional complexity in the PSF simulation until the PSF model has been further refined based on in-flight results.
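A rough sketch of this PSF construction with webbpsf is given below, using the F1280W values later reported in Table 1. The use of a synphot blackbody as the calc_psf source spectrum is our assumption about the interface; it stands in for, rather than reproduces, the exact procedure used here.

```python
import webbpsf
from synphot import SourceSpectrum
from synphot.models import BlackBodyNorm1D

miri = webbpsf.MIRI()
miri.filter = "F1280W"

# Extra Gaussian blur ("jitter") standing in for detector charge diffusion.
miri.options["jitter"] = "gaussian"
miri.options["jitter_sigma"] = 0.061   # arcsec; best-fit value for F1280W (Table 1)

# Weight the monochromatic PSFs with a blackbody at the fitted temperature.
bb_source = SourceSpectrum(BlackBodyNorm1D, temperature=700)   # K, from Table 1

psf = miri.calc_psf(source=bb_source, oversample=3)   # super-sampling of 3, as in the text
psf.writeto("miri_f1280w_psf.fits", overwrite=True)
```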
In addition to the charge diffusion effects, F560W displays a prominent 'cruciform' artifact (Gaspar et al., 2021; Wright et al., 2023), a wavelength-dependent, detector-level feature apparent beyond the first Airy ring. The second extension output of webbPSF provides a model for this feature that improves the fit relative to the PSF model without it. However, residuals owing to the cross artifact were still prominent in our data. We therefore fit the F560W data in a relatively small region where the cross feature was sub-dominant.
### Modelling Procedure
We adopted an iterative approach to fitting our images, switching between optimizing the PSF parameters (jitter_sigma and blackbody temperature), and the parameters associated with light sources and gravitational lensing model until both inferences were returning stable results. Due to the small number of stars in the field of view, and their very different SED from our quasar images, we fit the PSF parameters directly using our lensed quasar images.
We fit the three images that contain the lensed quasar host galaxy simultaneously. We required the image positions, gravitational lens model, and the centroids of the deflector and source light to be the same between the three filters but allowed all other model parameters to vary between the three filters. F2550W, which contained only four point sources, was fit independently with no lens model and only the four independent PSFs.
After finding the best-fitting model parameters, uncertainties were estimated using a Markov Chain Monte Carlo with the PSF held fixed at the best-fitting value obtained from the previous steps. Given that the flux ratios show no variation over a broad range of PSF model parameters (including those that provide a poor overall fit to the data), we do not anticipate that this choice will make a significant impact on the estimate of the flux-ratio uncertainties. We used lenstronomy(Birrer et al., 2015; Birrer & Amara, 2018; Birrer et al., 2021) for all image fitting and simulation.
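A minimal sketch of this model configuration, expressed with lenstronomy profile names, is given below. The parameter values are placeholders taken from Table 2 for illustration, and the actual fit additionally ties image positions and centroids across filters and samples all parameters.

```python
from lenstronomy.LensModel.lens_model import LensModel
from lenstronomy.LightModel.light_model import LightModel
from lenstronomy.PointSource.point_source import PointSource

# Elliptical power-law deflector plus external shear (used only to reproduce the arc).
lens_model = LensModel(lens_model_list=["EPL", "SHEAR"])

# Sersic profiles for the deflector light and the (unlensed) quasar host galaxy.
lens_light = LightModel(light_model_list=["SERSIC_ELLIPSE"])
source_light = LightModel(light_model_list=["SERSIC_ELLIPSE"])

# The four quasar images are treated as independent, unlensed point sources, so their
# fluxes and positions carry no information from the lens model.
point_sources = PointSource(point_source_type_list=["UNLENSED"])
kwargs_ps = [{
    "ra_image": [1.065, 0.0, 0.721, -0.153],   # arcsec offsets, Table 2
    "dec_image": [0.318, 0.0, 1.152, 1.018],
    "point_amp": [1.06, 0.656, 1.08, 1.34],    # e.g. F1280W fluxes in mJy, Table 2
}]
```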
### Results of forward modelling and uncertainty estimation
The best-fit PSF parameters are given in Table 1, and the measured image fluxes and positions are given in Table 2. Figure 2 shows the measured flux ratios as a function of wavelength.
We do not report the lens model parameters.
Figure 2: The measured flux ratios with respect to image A as a function of rest-frame wavelength. Colored bands indicate the 68% confidence interval of the corresponding warm torus component, which is not expected to be microlensed. The labels indicate whether the image is located at a minimum or saddle point of the time delay surface. Image A is a minimum. Wavelengths blue-ward of rest-frame 8 \(\mu\)m have significant contributions from the hot dust and accretion disk, which are compact enough to be microlensed by stars in the lens galaxy and/or time variable on day-to-week time scales.
Owing to the limitations of the current PSF model, as well as the fact that the quasar images are treated as independent foreground objects, the lens and light model parameters we infer cannot be meaningfully compared to other studies of this system, which were based on Hubble Space Telescope data with a well-modelled PSF (Shajib et al., 2019; Schmidt et al., 2023). Ultimately, for our gravitational lensing dark matter measurement, we will apply the approach used by Gilman et al. (2019, 2020a) in which only the image positions and flux ratios are used to constrain the mass distribution of the deflector. This allows for a high degree of flexibility in the smooth mass distribution used as the baseline for the flux-ratio comparison (see also Nierenberg et al., 2020). Below we discuss our tests for the dependence of measurement uncertainty on model choices.
The formal statistical uncertainties for the image fluxes, positions, and flux ratios were extremely small. Here we describe how we estimated systematic uncertainties due to model choices. When estimating uncertainties, it is important to make the distinction between _absolute fluxes_, which are relevant to SED fitting described in Section 5, and _flux ratios_, which are the key quantity for gravitational lensing estimates.
**Position Uncertainties:** We estimate the systematic uncertainties by comparing the measured relative image positions with those measured in HST WFC3-IR F140W direct imaging from Nierenberg et al. (2020), and find maximum relative offsets of \(0\farcs 007\) in the lensed image positions. This is much smaller than the pixel sizes of \(0\farcs 11\) for JWST MIRI and \(0\farcs 13\) for HST WFC3-IR.
**Light component modelling:** We performed several tests of the systematic uncertainties on measured image fluxes and flux ratios. These included: 1) Fitting the light in the imaging bands together and requiring the model light components to have the same parameters except amplitude in all three bands; 2) Performing the fits in the three filters separately and allowing the lens model to be different in each filter; 3) Restricting the source light to be round in shape; 4) Restricting the host mass profile to have a slope of \(\gamma_{\rm p}=2\) rather than allowing it to vary freely; and 5) Fixing the image positions to those specified by the lens model, rather than treating them as completely independent foreground light sources. As an additional test on the flux ratios, we measured the flux ratios before and after including the lensed quasar host galaxy.
The extended source was most significant in F560W, contributing approximately 40% of the flux at the location of the quasar images. In F1280W and F1800W the flux was less than 10% at the location of the quasar images. This is reflected in the systematic uncertainties from the tests above, in which we found that the absolute fluxes varied by 5% in F560W and F1280W and 2% in F1800W, and the flux ratios varied by up to 6% in F560W and 1% in F1280W and F1800W.
**PSF uncertainties:** We found that variations in the choice of PSF model impacted the _absolute_ image fluxes by 10% or less. We also tested for variation of the PSF within a filter as a function of image brightness: we performed an additional fit of the F2550W data, allowing each point source to have a different jitter_sigma value. We found no significant variation in the value of this parameter between the four images, indicating that the adoption of a single PSF model was sufficient for this system. Furthermore, even with the variable PSF, the flux ratios and fluxes varied by less than 1% relative to a fit in which the PSF was the same for all four images.
**Instrument Calibration:** The absolute flux calibration uncertainties for MIRI have not been estimated at the time of writing. In August 2023 a significant wavelength-dependent loss in sensitivity of 3% for F1280W, 8% for F1800W and 18% for F2550W was reported for the MIRI imager relative to the commissioning sensitivity measured in Summer 2022.14 The sensitivity loss seems to have occurred over time. At the time of writing it is not known what the sensitivity loss was at the time of the observations for this program (October 2022), therefore we include the August 2023 reported loss values as an additional systematic uncertainty in our absolute flux measurements.
Footnote 14: [https://www.stsci.edu/contents/news/jwst/2023/miri-imager-reduced-count-ratePage-1&keyword=MIRI](https://www.stsci.edu/contents/news/jwst/2023/miri-imager-reduced-count-ratePage-1&keyword=MIRI)
**Conclusion of uncertainty estimate testing:** Based on our tests of systematic sources of uncertainty, we find that the absolute flux uncertainty is likely dominated by the uncertainty in the instrument calibration. For this work, we adopt 15% flux uncertainties in F560W, F1280W, and F1800W, and 20% flux uncertainties in F2550W based on our current knowledge of the detector calibration. We expect these uncertainties to become smaller in the near future as the instrument behavior is better understood.
The dominant source of flux ratio uncertainty in F560W was 6% from modelling the lensed quasar host galaxy, while the uncertainties related to PSF modelling and lensed quasar host galaxy modelling were comparable for the flux ratio measurements in F1280W and F1800W. We adopt flux ratio uncertainties of 2% in these filters. For F2550W, which did not have an apparent lensed quasar host galaxy, we estimate 1% flux ratio uncertainties.
## 5 SED fitting
In this section, we describe how we used the MIRI four-band photometry to fit the multi-component SED and isolate light coming from the warm dust region of the quasar, which is extended enough to avoid contamination from micro-lensing as described in Section 2.
We followed Sluse et al. (2013) and adopted a simple three-component model of the quasar spectral energy distribution. This is composed of power-law continuum emission from the quasar accretion disk combined with two blackbodies representing the hot dust component, with prior temperature range of 500-1800 K, and the warm dust component, with prior temperature range 100-500 K. We did not include emission lines such as PAH emission, which we expect to make a small contribution to the broad-band fluxes (e.g. Garcia-Bernete et al., 2022).
Our SED model allowed for independent variation of each component amplitude for each lensed image to account for the fact that both the quasar accretion disk and the hot dust are small enough to be affected by microlensing. This also accounts for intrinsic flux variation of the accretion disc on timescales shorter than the time delay between the lensed images (of order days) (Schmidt et al., 2023). We performed the SED fit simultaneously for all four images. The temperatures of the hot and warm dust blackbodies were allowed to
vary as free parameters but were restricted to be the same for all images. The overall SED amplitudes were also allowed to vary independently to account for different overall magnifications for the lensed images.
When fitting the lensed quasar SEDs, we computed the joint likelihood that each set of model parameters would reproduce the observed _flux ratios_ (B/A, C/A, and D/A) in each filter as well as the likelihood that the model matched the _absolute fluxes_ for image A in each filter. Model SEDs were transformed into band fluxes following Gordon et al. (2022)15. We used emcee (Foreman-Mackey et al., 2013) to estimate the posterior probability distribution.
Footnote 15: [https://github.com/STScI-NIRI/ImagingFluxCal/blob/main/model_fluxes.py](https://github.com/STScI-NIRI/ImagingFluxCal/blob/main/model_fluxes.py)
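The structure of this fit can be sketched as below: a power-law continuum plus two blackbodies with shared temperatures, sampled with emcee. To keep the example compact it fits only the absolute fluxes of image A and evaluates the model at the filter pivot wavelengths, whereas the paper fits all four images jointly through their flux ratios and integrates the SED over the full bandpasses following Gordon et al. (2022); all numerical choices are illustrative.

```python
import numpy as np
import emcee

h, c, k_B = 6.626e-34, 2.998e8, 1.381e-23
z_s = 1.713                                     # source redshift of DES J0405-3308
pivot_um = np.array([5.6, 12.8, 18.0, 25.5])    # approximate filter pivot wavelengths
nu_rest = c / (pivot_um * 1e-6) * (1.0 + z_s)   # rest-frame frequencies probed

obs_flux_A = np.array([0.396, 1.06, 1.38, 2.647])           # image A fluxes, mJy (Table 2)
sigma_A = obs_flux_A * np.array([0.15, 0.15, 0.15, 0.20])   # adopted absolute uncertainties

def planck(nu, temperature):
    return nu**3 / np.expm1(h * nu / (k_B * temperature))

def model_flux(theta):
    a_pl, a_hot, a_warm, alpha, t_hot, t_warm = theta
    cont = 10**a_pl * (nu_rest / nu_rest[0]) ** alpha
    hot = 10**a_hot * planck(nu_rest, t_hot) / planck(nu_rest[0], t_hot)
    warm = 10**a_warm * planck(nu_rest, t_warm) / planck(nu_rest[0], t_warm)
    return cont + hot + warm

def log_prob(theta):
    t_hot, t_warm = theta[4], theta[5]
    if not (500 < t_hot < 1800 and 100 < t_warm < 500):      # prior temperature ranges
        return -np.inf
    resid = (model_flux(theta) - obs_flux_A) / sigma_A
    return -0.5 * np.sum(resid**2)

ndim, nwalkers = 6, 32
start = np.array([-0.5, -0.5, 0.0, -1.0, 1200.0, 300.0])
scatter = np.array([0.01, 0.01, 0.01, 0.01, 10.0, 10.0])
p0 = start + scatter * np.random.randn(nwalkers, ndim)
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob)
sampler.run_mcmc(p0, 2000)
```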
Figure 7 shows accepted model draws from the Markov Chain Monte Carlo, while the inferred component flux ratios for the hot and warm dust blackbodies are presented in Table 3. The flux ratios are computed by dividing the normalization of the blackbody component for a given image by the normalization of the corresponding blackbody component for image A. Although we included the continuum emission in our model to estimate the uncertainty it might contribute, we do not present the continuum flux ratios as its contribution to the fluxes was small (\(<\)10%) in the observed band-passes.
## 6 Discussion
Here we discuss the results of SED fitting and flux ratio measurements in light of other studies of this system.
Figure 4: _Upper panels_: From left to right, comparison of original F1280W image, best-fit model, and residuals. _Lower panels_: Separate light model components. From left to right, model point sources, lensed quasar host galaxy, and deflector light distribution. The yellow bar in the lower left of the data image indicates 1 arcsecond.
Figure 3: _Upper panels_: From left to right, comparison of original F560W image, best-fit model, and residuals. _Lower panels_: Separate light model components. From left to right, model point sources, lensed quasar host galaxy, and deflector light distribution. The yellow bar in the lower left of the data image indicates 1 arcsecond. The arrow indicates North.
| Parameter | F560W | F1280W | F1800W | F2550W |
| --- | --- | --- | --- | --- |
| jitter_sigma | \(0\farcs 063\) | \(0\farcs 061\) | \(0\farcs 073\) | \(0\farcs 075\) |
| Temperature (K) | 1120 | 700 | 680 | 250 |

Table 1: Best fit webbPSF jitter_sigma and blackbody temperature for each filter. These values were inferred for our data using webbPSF development version 1.2.1, and do not include inter-pixel capacitance effects.
| Image | dRA | dDec | F560W | F1280W | F1800W | F2550W |
| --- | --- | --- | --- | --- | --- | --- |
| A | 1.065 | 0.318 | 0.396 | 1.06 | 1.38 | 2.647 |
| B | 0 | 0 | 0.279 | 0.656 | 0.875 | 1.787 |
| C | 0.721 | 1.152 | 0.459 | 1.08 | 1.42 | 2.790 |
| D | -0.153 | 1.018 | 0.536 | 1.34 | 1.73 | 3.357 |

Table 2: Measured image positions (arcsec) and fluxes (mJy). Image positions are measured from the F2550W data. Image naming follows Shajib et al. (2019), and image labels are shown in Figure 3. We estimate the flux ratio (absolute flux) uncertainties to be 6% (15%), 2% (15%), 2% (15%), and 1% (20%) in F560W, F1280W, F1800W, and F2550W respectively. The Right Ascension and Declination offsets of the quasar images with respect to image B are within \(0\farcs 007\) of those measured by Nierenberg et al. (2020).
Figure 5: _Upper panels_: From left to right, comparison of original F1800W image, best-fit model, and residuals. _Lower panels_: Separate light model components. From left to right, model point sources and lensed quasar host galaxy. The deflector light is not detected in this filter, thus it is not included in the model. The yellow bar in the lower left of the data image indicates 1 arcsecond.
Figure 6: From left to right, comparison of original image, model, and residuals. The light model in F2550W consists of only the point source contribution. The yellow bar in the lower left of the data image indicates 1 arcsecond.
### SED Fitting Results
The hot dust temperature was inferred to be 1200\(\pm\)100 K, and the warm dust temperature was 300\(\pm\)100 K. Interestingly, these values are consistent with the best-fit webbPSF blackbody temperature parameters for F560W (1130 K) and F2550W (250 K, Table 1). In these filters, the SED model predicts the flux is dominated by the hot and warm dust components respectively.
The hot dust flux ratios are significantly different from the warm dust flux ratios for images B and C. This is reflected in the flux ratios displayed in Figure 2, which are nearly achromatic for D/A but show small chromatic changes for B and C. A microlensing explanation would be consistent with results from Nierenberg et al. (2020), who found clear signatures of microlensing in image C, which had a wider H\(\beta\) emission line relative to the other three images. Deformation of the broad emission line profile (such as H\(\beta\)) is a noted signature of microlensing (e.g. Sluse et al., 2012; Fian et al., 2021), and reflects the differential lensing by stars of the higher velocity wings emitted from the smaller parts of the broad-line region.
From the SED fitting, we see that the flux from the warm dust is dominant relative to the hot dust. This is consistent with typical quasar SEDs which find a lower covering fraction of hot dust relative to warm dust (Mor & Trakhtenbrot, 2011). From this result, we expect little contamination in F2550W from the more compact hot dust region. We find that the warm dust flux ratios for this system are consistent with the F2550W flux ratios.
| Ratio | Hot | Warm | F2550W | [OIII] |
| --- | --- | --- | --- | --- |
| B/A | \(0.58^{+0.04}_{-0.07}\) | \(0.70\pm0.02\) | \(0.70\pm0.007\) | \(0.65\pm0.04\) |
| C/A | \(0.96^{+0.04}_{-0.04}\) | \(1.07^{+0.03}_{-0.02}\) | \(1.05\pm0.01\) | \(1.25\pm0.03\) |
| D/A | \(1.23^{+0.04}_{-0.06}\) | \(1.27\pm0.02\) | \(1.27\pm0.01\) | \(1.17\pm0.04\) |

Table 3: Flux ratios and one sigma uncertainties measured through SED fitting (hot and warm dust components), F2550W imaging, and narrow-line [OIII] from Nierenberg et al. (2020).
Figure 7: Results for SED fitting for separate lensed quasar images fit to a model with continuum plus hot and warm blackbody components with variable temperature. These components represent the quasar accretion disk (gray) and the hot (orange) and warm dust (red) contributions respectively. The amplitude of each model component varied freely between images to accommodate size-dependent microlensing, intrinsic variability, and lensing by the main deflector and potential dark matter substructure. The fits are required to reproduce the observed absolute fluxes as well as the flux ratios in each filter. See Section 5 for a description of the model. Each line represents an accepted MCMC draw to illustrate the variations in models.
### Comparison with past results
There is a significant difference between the warm dust flux ratios and the [OIII] flux ratios for image C measured by Nierenberg et al. (2020). As discussed in the Introduction, the [OIII] and warm dust emission regions are both extended and not subject to microlensing or time-variability on the day-to-month time scales relevant to galaxy-scale lenses. Therefore, the differences in flux ratios cannot be explained by these phenomena. Furthermore, differential dust extinction is not a likely explanation as the [OIII] emission is redshifted to \(\sim\)1 \(\mu\)m at the redshift of the deflector, and the quasar warm torus light is redshifted well beyond this. Assuming all measurement uncertainties have been accurately characterized, we explore two possible explanations below.
An offset between the centroid of the [OIII] emission and the warm torus emission could create a small difference in the flux ratios. Offsets of order tens of parsecs have been observed between the nuclear narrow-line region and the quasar accretion disk (Singha et al., 2022). We tested the impact such an offset would make by choosing a macro model that fits the measured image positions and flux ratios, and offsetting the source from the best-fit position. A 10 pc offset, for example, would create a flux-ratio difference of up to 2% and change the image positions by up to \(0\farcs 007\). However, the flux-ratio changes are not independent of each other and there is no source offset that reproduces both the image positions and flux ratios for the [OIII] and warm dust in this system. Further investigation of the two-dimensional grism data from Nierenberg et al. (2020), with simulated offsets between the continuum and the [OIII] region, would enable limits to be placed on the possible magnitude of such an offset for this system.
Another explanation for the difference in flux ratios is differential milli-lensing by low-mass perturbers. The mid-IR and [OIII] sources have intrinsically different characteristic sizes. The two sources could be magnified differently by the same mass perturber. A qualitative example of this effect is provided in Figure 1, in which a small source like the warm torus is strongly de-magnified by a perturbing subhalo, while a larger narrow-line region source is not. As with the example lensed image in Figure 1, image C is a saddle image and we would therefore typically expect it to be de-magnified by a local perturbation to the macromodel; thus the observed difference in the flux ratios could be explained by this type of phenomenon. The differential effect of such a perturbation on the narrow-line and warm dust flux ratios would depend on a variety of factors, including both the mass of the perturbation and the intrinsic size of the narrow-line region. Based on the grism spectra, Nierenberg et al. (2020) placed an approximate upper limit of \(\sim\) 100 pc on the FWHM of the narrow-line region for this system based on a lack of differential extension in the spectra of the four lensed images. Such a differential extension would be observed in the grism spectrum if the narrow-line emission was partially resolved (see also Nierenberg et al., 2017). As a test, we started with a macromodel that fits the observed [OIII] flux ratios and image positions. Assuming the [OIII] emitting region has a FWHM of 50 pc, a perturbation with mass scale \(10^{7}\) M\({}_{\odot}\) could reproduce the observed warm torus flux ratios for this system while leaving the [OIII] flux ratios unchanged.
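The kind of single-perturber test described above can be sketched with lenstronomy by adding an NFW subhalo near one image and comparing point-source magnifications with and without it. The macromodel and subhalo parameters below are placeholders (not the macromodel actually fit to the [OIII] data), and a realistic comparison would also integrate the magnification over the finite warm-dust and narrow-line source sizes, as in Figure 1.

```python
import numpy as np
from lenstronomy.LensModel.lens_model import LensModel

# Placeholder macromodel: elliptical power law plus external shear.
macro = [
    {"theta_E": 0.8, "gamma": 2.0, "e1": 0.05, "e2": 0.0, "center_x": 0.0, "center_y": 0.0},
    {"gamma1": 0.02, "gamma2": 0.01},
]
# Placeholder NFW subhalo (lensing parameters in arcsec), placed near image C.
subhalo = {"Rs": 0.01, "alpha_Rs": 5e-4, "center_x": 0.72, "center_y": 1.15}

# Image positions from Table 2 (arcsec offsets).
x_img = np.array([1.065, 0.0, 0.721, -0.153])
y_img = np.array([0.318, 0.0, 1.152, 1.018])

smooth = LensModel(["EPL", "SHEAR"])
perturbed = LensModel(["EPL", "SHEAR", "NFW"])

mu_smooth = smooth.magnification(x_img, y_img, macro)
mu_perturbed = perturbed.magnification(x_img, y_img, macro + [subhalo])
print("fractional change in image magnifications:", mu_perturbed / mu_smooth - 1.0)
```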
In reality, we expect many low-mass halos in the lens and along the line of sight, potentially perturbing all four images simultaneously, therefore we defer a more meaningful physical interpretation of the discrepancy between the [OIII] flux ratios and the mid-IR flux ratios until we have included the effects of full populations of halos and subhalos (Keeley et al. in prep).
## 7 Dark matter constraints forecast
Given the flux-ratio precision measured in this work, we can estimate the constraint on dark matter properties obtainable from the full sample based on the scaling simulations by Gilman et al. (2019). The current WDM constraint is based on a sample of 8 lenses with approximately 6% measurement precision. Extrapolating to 31 lenses with a 3% measurement precision for the relative flux ratios yields an estimated 95% upper limit on a turnover in the half mode mass M\({}_{\rm{hm}}\) of below \(10^{7}\) M\({}_{\odot}\) if dark matter is cold. This would correspond to a limit on a thermal relic particle mass above 9.7 keV. The current limit from lensing is M\({}_{\rm{hm}}<10^{7.8}\) (\(M_{\rm{WDM}}>\)5.2 keV) based on 8 lenses with narrow-line measurements (Gilman et al., 2020). Constraining the half mode mass to be below \(10^{7}\) M\({}_{\odot}\) would imply the existence of completely dark subhalos and provide a validation of a major prediction of Cold Dark Matter.
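The half-mode mass to thermal-relic mass conversion used above follows the approximate scaling \(M_{\rm hm}\approx 3\times 10^{8}\,(m_{\rm WDM}/3.3\,{\rm keV})^{-3.33}\) M\({}_{\odot}\) commonly adopted in the lensing literature (e.g. Schneider et al. 2012); the coefficient varies slightly between works, so the snippet below reproduces the quoted numbers only to within rounding.

```python
def m_wdm_from_mhm(m_hm_msun, m_hm_ref=3e8, m_wdm_ref_kev=3.3, slope=-3.33):
    """Approximate thermal-relic mass (keV) corresponding to a half-mode mass (M_sun)."""
    return m_wdm_ref_kev * (m_hm_msun / m_hm_ref) ** (1.0 / slope)

print(f"M_hm = 10^7.8 M_sun -> m_WDM ~ {m_wdm_from_mhm(10**7.8):.1f} keV")  # ~5 keV
print(f"M_hm = 10^7.0 M_sun -> m_WDM ~ {m_wdm_from_mhm(10**7.0):.1f} keV")  # ~9-10 keV
```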
In addition to WDM, Gilman et al. (2021) showed that the compact sources in the JWST dataset make these data highly sensitive to the internal structure of halos. This has particularly relevant consequences for self-interacting dark matter, which can cause halos to undergo core collapse, raising their central densities and therefore their lensing efficiency. Based on the forecasts by Gilman et al. (2021) and the analysis with existing data performed by Gilman et al. (2023), the sample size of lenses obtained through this JWST program should enable constraints on self-interaction cross sections in which \(>\) 40% of halos core collapse. The properties of the SIDM cross section required to produce this quantity of collapsed objects depend on the degree to which tidal stripping and evaporation alter the collapse times for subhalos and on the nature of the self-interaction itself. Keeley et al. (2023) demonstrated that this data set will enable the detection of a mixture of dark matter made of 50% WDM with half mode mass of \(10^{8.5}\) M\({}_{\odot}\) and 50% CDM. Similarly, major improvements will be obtained for limits on all dark matter models that produce observed consequences on these scales, including, for example, fuzzy dark matter and PBHs.
## 8 Summary
We present flux-ratio measurements for DES J0405-3308, the first of 31 systems to be observed in our program to measure rest-frame mid-IR flux ratios of quadruply imaged quasars with JWST.
Our main conclusions are as follows:
1. We find that the MIRI point spread function is well fit when significant additional jitter is added to the model, and when the source spectrum is treated as a blackbody with variable temperature in each filter.
2. The flux ratios can be measured to an estimated 6%, 2%, 2%, and 1% precision in F560W, F1280W, F1800W, and F2550W, with the dominant source of uncertainty coming from modelling the lensed quasar host galaxy light in the three bluer filters and from the point spread function in F2550W. The absolute flux uncertainties are estimated to be dominated by ongoing instrument calibrations. For this work, we adopt 15% uncertainties in F560W, F1280W, and F1800W, and 20% in F2550W, but we expect these to improve in the future.
3. We introduce an SED-fitting method that enables us to take into account the high flux-ratio precision and the relatively uncertain absolute flux precision. This model fits for the temperatures of the dust components as well as the relative amplitudes of each component in each lensed image.
4. We estimate the hot and warm dust temperatures for the source to be 1200\(\pm\)100 K and 300\(\pm\)100 K. The hot dust region shows substantial microlensing relative to the warm dust region, confirming the sub-parsec size of this region.
5. The flux ratios inferred from the warm dust component of SED fitting are consistent with the flux ratios measured in F2550W. Given current absolute and flux-ratio measurement uncertainties, the warm dust emission flux ratios can be measured to 3% with one-sigma uncertainty. This sensitivity will enable us to infer population-level statistics of dark matter halos below masses of \(10^{7}\) M\({}_{\odot}\) in future work, thus providing a test of a key prediction of CDM.
6. The F2550W and warm dust flux ratios are inconsistent at a 20% level with narrow-line flux ratios measured by Nierenberg et al. (2020). This can potentially be explained by the presence of a low-mass dark matter halo magnifying the smaller warm torus light, but not significantly affecting the more extended narrow-line region image fluxes. Full modeling of the substructure and finite size effects, to be presented in a future paper, will be used to study the origin of the discrepancy in more detail.
## Acknowledgments
We thank Crystal Mannfolk, Greg Sloan, and Blair Porterfield for help with observation planning. We thank Karl Gordon, Mattia Libralato, Jane Morrison, and Sarah Kendrew for their help in answering questions about the data reduction. We thank Marshall Perrin for helpful conversations about webbPSF.
This work is based on observations made with the NASA/ESA/CSA James Webb Space Telescope. The data were obtained from the Mikulski Archive for Space Telescopes at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-03127 for JWST. These observations are associated with program #2046. Support for program #2046 was provided by NASA through a grant from the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-03127.
AN and TT acknowledge support from the NSF through AST-2205100 "Collaborative Research: Measuring the physical properties of dark matter with strong gravitational lensing". The work of LAM and DS was carried out at Jet Propulsion Laboratory, California Institute of Technology, under a contract with NASA. TA acknowledges support from the Millennium Science Initiative ICN12_009 and the ANID BASAL project FB210003. DS acknowledges the support of the Fonds de la Recherche Scientifique-FNRS, Belgium, under grant No. 4.4503.1. VM acknowledges support from ANID FONDECYT Regular grant number 1231418 and Centro de Astrofisica de Valparaiso. VNB gratefully acknowledges assistance from a National Science Foundation (NSF) Research at Undergraduate Institutions (RUI) grant AST-1909297. Note that findings and conclusions do not necessarily represent views of the NSF. KNA is partially supported by the U.S. National Science Foundation (NSF) Theoretical Physics Program, Grants PHY-1915005 and PHY-2210283. AK was supported by the U.S. Department of Energy (DOE) Grant No. DE-SC000937, by the UC Southern California Hub, with funding from the UC National Laboratories division of the University of California Office of the President, by the World Premier International Research Center Initiative (WPI), MEXT, Japan, and by Japan Society for the Promotion of Science (JSPS) KAKENHI Grant No. JP20H05853. SB acknowledges support from Stony Brook University. DG acknowledges support for this work provided by the Brinson Foundation through a Brinson Prize Fellowship grant, and from the Schmidt Futures organization through a Schmidt AI in Science Fellowship.
|
2305.19596 | Pattern Formation by Electric-field Quench in Mott Crystal | The control of Mott phase is intertwined with the spatial reorganization of
the electronic states. Out-of-equilibrium driving forces typically lead to
electronic patterns that are absent at equilibrium, whose nature is however
often elusive. Here, we unveil a nanoscale pattern formation in the
Ca$_2$RuO$_4$ Mott insulator. We demonstrate how an applied electric field
spatially reconstructs the insulating phase that, uniquely after switching off
the electric field, exhibits nanoscale stripe domains. The stripe pattern has
regions with inequivalent octahedral distortions that we directly observe
through high-resolution scanning transmission electron microscopy. The
nanotexture depends on the orientation of the electric field, it is
non-volatile and rewritable. We theoretically simulate the charge and orbital
reconstruction induced by a quench dynamics of the applied electric field
providing clear-cut mechanisms for the stripe phase formation. Our results open
the path for the design of non-volatile electronics based on voltage-controlled
nanometric phases. | Nicolas Gauquelin, Filomena Forte, Daen Jannis, Rosalba Fittipaldi, Carmine Autieri, Giuseppe Cuono, Veronica Granata, Mariateresa Lettieri, Canio Noce, Fabio Miletto Granozio, Antonio Vecchione, Johan Verbeeck, Mario Cuoco | 2023-05-31T06:40:00Z | http://arxiv.org/abs/2305.19596v1 | # Pattern Formation by Electric-field Quench in Mott Crystal
###### Abstract
The control of Mott phase is intertwined with the spatial reorganization of the electronic states. Out-of-equilibrium driving forces typically lead to electronic patterns that are absent at equilibrium, whose nature is however often elusive. Here, we unveil a nanoscale pattern formation in the Ca\({}_{2}\)RuO\({}_{4}\) Mott insulator. We demonstrate how an applied electric field spatially reconstructs the insulating phase that, uniquely after switching off the electric field, exhibits nanoscale stripe domains. The stripe pattern has regions with inequivalent octahedral distortions that we directly observe through high-resolution scanning transmission electron microscopy. The nanotexture depends on the orientation of the electric field, it is non-volatile and rewritable. We theoretically simulate the charge and orbital reconstruction induced by a quench dynamics of the applied electric field providing clear-cut mechanisms for the stripe phase formation. Our results open the path for the design of non-volatile electronics based on voltage-controlled nanometric phases.
There are various paths to drive a changeover of the Mott insulating state [1] by either applying pressure or strain, changing the temperature near the Mott transition or doping the system away from integer filling, corresponding to bandwidth, temperature and density control, respectively [2; 3; 4; 5; 6; 7; 8]. The resulting phenomena have broad impact in condensed matter physics for both fundamental [2; 3] and technological perspectives [4; 5].
There are two scenarios that are often encountered in proximity of Mott phases: i) the occurrence of superconductivity when the insulating phase is destroyed, as for the emblematic case of cuprates [9] with magnetism playing an important role too, and ii) the tendency to form inhomogeneous electronic patterns due to the first order character of the Mott transition and the competing length scales of localized and itinerant electronic degrees of freedom [10; 11; 12]. Recently, it has been pointed out that the application of an electric field, both static or dynamic, can be an ideal knob to control the conducting properties of correlated materials by inducing insulator-to-metal transitions and novel quantum phases of matter [13; 14; 15; 16]. Depending on its amplitude, an applied gate voltage can yield a dielectric breakdown and electronic avalanches [17] or activate collective low-energy lattice and spin-orbital excitations [18]. The transitions which emerge from the Mott insulating state can hence involve multiple degrees of freedom and be marked or not by significant changes in the crystal structure, as is the case for V\({}_{2}\)O\({}_{3}\)[19], VO\({}_{2}\)[20; 21; 22; 23] and Fe\({}_{3}\)O\({}_{4}\) systems [24]. In this framework, Ca\({}_{2}\)RuO\({}_{4}\) (CRO) represents a paradigmatic material platform to assess the interplay of electron correlations and electron-lattice coupling in the presence of multi-orbital physics [25; 26; 27] with spin-orbit and Hund's interactions [28]. Indeed, CRO is a Mott-insulator at room temperature and on heating through T\({}_{\rm MI}\)=83 \({}^{\circ}\)C it undergoes an insulator-to-metal transition accompanied by an abrupt structural change [29], without varying the crystal symmetry. The structural transition involves an orbital reconstruction from a preferential out-of-plane orbital occupancy of the 4d states \((xz,yz)\) to a dominant orbital configuration with in-plane character \((xy)\). At lower temperature, this redistribution turns into an orbitally ordered state [18; 30]. The application of an electric field, through current and optical pulses, or the use of a thermal quench has been shown to melt the insulating phase [31; 32; 33; 34], resulting in the formation of phase coexistence [33], including nanometric regions [34] at the boundaries of micrometric domains having metallic and insulating character. Nevertheless, while spatial inhomogeneities can form and inequivalent structural components compete, the origin and the mechanisms for the spatial reorganization remain mostly unexplained.
The problem of domain formation is particularly challenging in the context of Mott transitions, as they are of first-order type in real materials. Domain formation is a general and complex phenomenon [35; 36; 37], with modulations that often result from the competition between short-range attractive and long-range repulsive interactions. In dynamical conditions, domains can arise from the quench of the interactions or by quenching the temperature from above to below the ordering transition [38]. Whether similar reorganization phenomena after electric or
orbital quench can be encountered in correlated systems exhibiting an insulator-to-metal transition is an outstanding problem that has not yet been fully explored.
In this manuscript, we face this challenge and unveil a novel path to induce as well as turn on and off the formation of nano-textures by means of an applied electric field in a Mott insulator, focusing on the case of the CRO system. The emergent phase remarkably depends on the electric field orientation. It is a stable configuration that can be erased by voltage or temperature and regenerated with the same voltage quench protocol. The formation of the domains is ascribed to a nontrivial orbital dynamics that is activated by the electric field. We demonstrate that the electric field is able to affect and reduce the orbital population unbalance among the \(xy\) and \((xz,yz)\) states. The nontrivial orbital relaxation allows for the formation of interfaces of long and short octahedra. We show that these interfaces are electrically active, thus they can interact by stabilizing a stripe pattern.
Let us start by considering the structural evolution across the thermally induced insulator-to-metal transition. Nakamura et al. [31] reported the change of the lattice parameters as measured upon heating of a bulk specimen. Hence, to set the reference, the first issue we aim to address is how the impact of the thermal effects on the structure manifests at the nanoscale. Figure 1a and Figure 1b compare the High-Resolution Scanning Transmission Electron Microscopy High Angle Annular Dark Field (HRSTEM-HAADF) images of the CRO sample taken at room temperature (20\({}^{\circ}\)C) and at a representative higher temperature (200\({}^{\circ}\)C). Note that we do not have access to the b-axis as it is in the direction of the electron beam. Although the change of lattice parameter from the short (S-) to the elongated (L-) phase is almost invisible by figure inspection, the fitting of the Ru atomic columns with a 2D Gaussian distribution can provide the amplitude of the in-plane and out-of-plane lattice parameters. We find an expansion of the \(c\) lattice parameter from 11.9 to 12.2 Å, as shown in Figure 1d, while the variation of the in-plane \(a\) lattice parameter is, on the other hand, small (Figure 1c) in amplitude, with almost no change in the histogram displayed in Figure 1e. This analysis indicates that, by increasing the temperature, the RuO\({}_{6}\) octahedra and the unit cell elongate. Nanobeam electron diffraction (NBED) was used to locally determine the lattice parameter. This method collects a diffraction pattern, from the transmitted electrons, at each probe position while raster scanning the nanometer electron beam over the specimen. The information on the lattice parameters comes from the positions of the peaks in each diffraction pattern. The peaks in a diffraction pattern arise from the constructive interference of the electron wave coming from the interaction of the electrons with the crystal. The position of the diffraction peaks can be translated into a scattering angle from which the lattice parameters can be determined via the use of Bragg's law. Since the transmitted electrons are collected, the measured lattice parameters arise from the entire thickness of the specimen (\(\sim\) 100 nm). These diffraction patterns were collected along the [010] zone axis in the case of Figure 1. Figure 1c and Figure 1f report the evolution of the \(a\) and \(c\) lattice parameters as measured in-situ from the diffraction pattern every 10 \({}^{\circ}\)C. The analysis clearly signals the phase transition and its hysteresis loop of almost 30\({}^{\circ}\)C between the heating and cooling cycles, which is consistent with the previously reported findings [31].
Interestingly, the application of a constant electric current induces the presence of a domain structure with stripes at the interface between a metallic and an insulating domain [33]. Although inhomogeneities occur, in Ref. [39] it is shown that locally a metal-to-insulator changeover always occurs at the same transition temperature, irrespective of being driven by temperature or by current. These phenomena thus indicate a tight connection between thermal and electric-current-driven effects. Here, we aim to verify whether the electronic phase reconstruction happens or not at the nanoscale, in the presence of an applied electric field when the electric current is not allowed to flow through the specimen. The experiment is performed by employing a capacitor-like geometry, as described in the supplementary material, for a specimen where the electric field is applied along the a-crystallographic axis. In this experimental configuration, the voltage has been increased with a saw-tooth profile having the following sequence: 0;+1 V;-1 V;+1.2 V;-1.2 V...+3.2 V;-3.2 V (see inset in Figure 1h). Such voltage variation leads to a progressive increase of the \(c\) lattice constant as a function of the applied voltage accompanied by an almost unchanged behavior of the \(a\) lattice parameter amplitude. Here, two relevant remarks are in order: 1) the lattice parameter measured at a specific voltage does not depend on the orientation (sign) of the electric field but solely on its amplitude, 2) the lattice parameter value achieved at 3.2 V (corresponding to approximately 3.2 kV/cm), at room temperature, has the same value as that found at 200\({}^{\circ}\)C at zero voltage. This is also consistent with our measured bulk value and with published data [31]. Additionally, the analysis of the spatially resolved maps demonstrates that the distribution of the lattice parameter is uniform and, thus, no pattern formation in real space is observed at any applied voltage.
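For readers who want to reproduce the stepping protocol, the short sketch below simply regenerates the quoted voltage sequence; it is purely illustrative (the function name, default step size, and endpoint tolerance are our own choices, not taken from the acquisition software).

```python
# Minimal sketch (not the authors' acquisition script): build the saw-tooth
# gate-voltage sequence 0, +1, -1, +1.2, -1.2, ..., +3.2, -3.2 V quoted above.
import numpy as np

def sawtooth_sequence(v_start=1.0, v_max=3.2, v_step=0.2):
    """Return the list of applied voltages: 0 V, then alternating +/- amplitudes."""
    amplitudes = [round(float(v), 1) for v in np.arange(v_start, v_max + 1e-9, v_step)]
    sequence = [0.0]
    for v in amplitudes:
        sequence.extend([+v, -v])
    return sequence

if __name__ == "__main__":
    # [0.0, 1.0, -1.0, 1.2, -1.2, ..., 3.2, -3.2]
    print(sawtooth_sequence())
```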
An unexpected and different behaviour is instead achieved when the voltage is quenched to zero from the maximum voltage configuration. This observation is obtained in the devised capacitor geometry (see supplementary material), where the Joule heating is expected to play a minor role compared to the electric current flow setup [32; 33; 39].
Single-crystalline specimens were cut from the bulk CRO single crystal using the focused ion beam (FIB) and attached to one of the capacitor plates so that the electric field could be applied along the three different crystallographic directions, as shown in the supplementary material. For this setup, distinct spatial patterns are observed depending on the direction of the applied electric field before the quench to zero voltage.
For clarity only the \(c\) lattice parameter is reported as it gives the most significant variation. When the electric field is applied along the \(b\) crystallographic direction, two large domains are observed (Figure 2a) with short c-axis in the center of the specimen and longer c-axis in the regions which are closer to the gate contact and the surface, i.e. along the bottom and left side of the Figure 2a, respectively. Figure 2d (similarly to panels e and f) represents the histogram of the \(c\) lattice parameters corresponding to 0 voltage, maximum voltage
Figure 1: High Angle Annular Dark Field (HAADF-STEM) images of the **(a)** low and **(b)** high temperature structural phases of CRO, respectively. **(d)** The histogram of the \(c\) lattice constant for low and high temperature; the lattice constant is determined by fitting the atomic positions of the Ru atoms with a 2D Gaussian function. **(e)** Similar to **(d)** but for the \(a\) lattice constant. **(c,f)** The evolution of the two lattice parameters as a function of temperature when heating and cooling the specimen. The lattice constants are determined from the NBED experiments. **(g,h)** The lattice constants as a function of the applied voltage. In the inset of panel **(h)** the sequence of applied voltages is shown. The setup of the contacts between the electrodes and the sample is reported in the Extended Figure 1. The applied voltage leads to an electric field that is oriented along the \(b\) axis.
and quenched state as shown in the supplementary material. When the applied voltage induces an electric field along the \(c\)-crystallographic direction and then is quenched to zero, stripes with a periodic sequence of regions with short and long octahedra appear along the c-direction of the specimen, as shown in Figs. 2b and 2e. The direction of the stripe modulation is perpendicular to the electric field orientation. On the other hand, when the voltage is applied along the \(a\)-crystallographic axis (Figures 2c and 2f), stripes are observed parallel to the applied electric field and mostly close to the interface between the sample and the electrode corresponding to the \(ac\) plane. Note that this stripe configuration has both a different generating mechanism and a different orientation when compared to the findings in Ref. [33].
All those stripe patterns can be erased by switching on the voltage and bringing it to the highest probed amplitude necessary to have the uniform high voltage state, or by heating the sample above the metal-insulator transition temperature \(\mathrm{T_{MI}}\). In both cases, the stripes can be systematically generated again with a similar spatial distribution when switching back the system to the zero voltage state. Hence, this patterned state is a stable configuration for the system which can be reproduced in a controlled manner. It is important to point out that, when considering a specimen attached to both sides of the capacitor, thus allowing a current flow through the sample, the crystallographic dependence of the pattern, as shown in Figure 2, is suppressed. Indeed, quenching the specimen from \(200^{\circ}\mathrm{C}\) to room temperature (within 1 second) does not lead to the formation of any pattern (see Figure 3 top panels). Similarly, when the voltage is quenched to zero after bringing the specimen to the more conducting high-voltage configuration, no stripes are observed (see Figure 3 bottom panels). When electrical current is allowed to flow through the specimen, only fringes between a more conducting region (with elongated phase) and an insulating domain, similar to the results reported previously [33], are observed. We attribute this difference to Joule heating [39], which yields gradients of temperature in time during the temperature quench experiment that destabilize the orbital ordering. Hence, we argue that Joule heating or temperature gradients having a different relaxation time during the quench prevent the formation of the patterns.
Let us focus on the nature and formation mechanism of the observed pattern. Our first aim is to demonstrate that an orbital reconstruction can occur after the quench of the electric field. For this purpose, we perform a time-dependent simulation of the many-body state on a finite-size cluster with two effective RuO\({}_{6}\) octahedra. The analysis aims to capture on an equal footing the correlated dynamics activated by the electric field both in space, on a short-range length scale, and in time. The employed model Hamiltonian (see Supplemental Info) effectively includes all the local interactions at the Ru site and Ru-O charge transfer processes which are relevant to describing the correlated ground state in the CRO Mott phase. The electric field is introduced through a time-dependent vector potential that enters as a Peierls factor in the phase of the \(d-p\) hopping amplitude (see Supplemental Information). For a quench dynamics, it can be generally expressed by the profile in Figure 4a.
We start by considering the insulating state in the regime of flattened octahedra with the crystal field potential favoring the charge occupation of the planar \(xy\) orbital. In this configuration, it is known [27; 30; 40; 41; 42] that there is an orbital unbalance with excess charge in the \(xy\) band compared to the \((xz,yz)\) states, with direct correspondence with the octahedral distortions. We track the orbital dynamics in the time frame before and after the quench of the electric field for different amplitudes of the maximum applied electric field, \(E_{max}\), expressed by the parameter \(\eta=E_{max}/E_{M}\), with \(E_{M}=0.01\) eV/Å. \(E_{M}\) is a reference scale of the amplitude of the electric field that, for convenience, we introduced when scanning the phase diagram in our simulation. We have chosen the value \(10^{-2}\) eV/Å because, for the considered cluster, it is a characteristic scale that separates different regimes in terms of realized electronic configurations after the quench of the electric field. For values of \(\eta\) smaller than \(10^{-2}\) there are no significant orbital variations in the time dynamics. Namely, the orbital occupation stays substantially unchanged with small fluctuations around the ground state values. For an applied electric field corresponding to \(\eta\sim 10^{-2}\) the orbital dynamics starts to manifest significant fluctuations with a distribution having a broad variance around the ground state averaged occupation. For this regime of weak electric field the orbital unbalance is dynamically softened, although the averaged values are not much affected. The dynamics indicates that the charge and orbital distributions become broader in amplitude during the ramp up (Figure 4b) and evolve into a different orbital distribution with larger spread (Figure 4c) after the quench of the electric field. The increase of the maximal applied electric field before its switching-off has a significant impact on the orbital dynamics pre- and post-quench. The emergent orbital configuration is such that the orbital unbalance is completely suppressed (Figure 4d) before the quench, while after the quench (Figure 4e) the distribution shrinks in amplitude and the difference of the averaged charge occupation in the \(xy\) and \((xz,yz)\) states tends to vanish. A further increase of the maximal electric field amplitude (i.e. \(\eta\sim 1\)) affects the profile of the orbital distribution before the quench while keeping the qualitative trend of suppressing the charge separation in the \(xy\) and \((xz,yz)\) orbital sectors.
Hence, the analysis demonstrates that the application of an electric field yields non-equilibrium orbital configurations that are compatible with short and long octahedra depending on the parameter \(\eta\). After the quench, above a critical threshold of the electric field, the orbital unbalance is substantially suppressed thus favoring the formation of more elongated octahedra. This result implies that, due to the presence of spatial electric field gradients, the system tends to rearrange by forming interfaces between long and short octahedra.
Having established that the quench of the electric field results in a reduction of the orbital unbalance that is compatible with a long unit cell, we consider how the formation of interfaces between long and short octahedra interacts with the electric field. For this purpose, we simulate a superlattice configuration with long (L) and short (S) CRO unit cells (Fig. 5a). We determine the optimized structure by the relaxation of the Ru atoms along the c-axis and we obtain a head-to-head
displacement of the Ru layers, as reported in Fig. 5a. This configuration implies that the overall displacement is vanishing. The Ru layers at the interface between the L- and the S-phase move towards the S-phase. Notably, qualitatively similar results are observed for supercells of different size. We calculate the free energies of the superlattice and the bulk as a function of the displacements of the Ru atoms with respect to their centrosymmetric positions.
As we can see from Fig. 5b, the equilibrium energy of the bulk is achieved when the Ru atoms are in the centrosymmetric positions, while for the superlattice the situation changes, namely the equilibrium energy is when the Ru atoms are displaced by approximately 0.47 pm with respect to their centrosymmetric positions. These shifts of the Ru atoms along the (001) direction at the interface between different Ru-based compounds stacked along (001) are similar to those predicted in the metallic phase of the Sr-based ruthenate compounds [43]. While larger displacements are found in the metallic phase, we expect that the size of the displacements mainly depends on the electronic mismatch between the two structurally distinct phases within the superlattice. These displacements make the interface electrically active because they can sustain a non-vanishing electric dipole. Hence, the resulting physical scenario is that the application of an electric field activates the orbital and lattice dynamics which allows for deviations of the structural configurations from the equilibrium. When interface configurations with inequivalent octahedral distortions form, the system tends to stabilize them and lower the energy by having a periodic alternation of short and long unit cells (Fig. 5). The head-to-head interface configuration is compatible with an averaged net vanishing electric field. This behavior resembles the phenomenology of magnetic systems described by the kinetic Ising model [38] with the orbital-lattice degrees of freedom replacing the spin ones.
To summarize, we have demonstrated that the ramp up to a critical amplitude of the applied voltage and its subsequent quench to zero are able to induce a stripe phase in the CRO Mott insulator. The stripe phase is marked by long and short octahedra that periodically alternate along the \(c\)-axis with a nanometric length scale. This pattern, together with the way it is generated (i.e. by a gate-voltage quench), marks the difference between the observed stripe phase and the stripe domains realized at the boundary of micrometric regions with metallic and insulating character [34]. The configuration is stable and can be controlled by varying and switching off the applied gate. The stripe formation mechanism depends on the orientation
Figure 2: **(a-c)** The real space map of the \(c\) lattice parameter after the voltage is quenched to zero amplitude for three different orientations of the electric field. The orientation of the crystal with respect to the electric field is indicated in each panel. In the inset images, the average diffraction pattern is shown. **(d-f)** The histogram of the lattice parameter at zero voltage and maximum applied voltage indicates the distribution of the lattice parameter amplitudes for the corresponding voltage configurations. We find that after the quench the distribution exhibits a bimodal lineshape that reflects the occurrence of stripes or domains with unit cells having short and long \(c\) lattice parameters. The geometry of the electrical contacts corresponds to an open circuit with the sample gated only on one side. The details of the electrical setup are reported in the Extended Figure 1. The bottom side in the panels a-c corresponds to the region of the contact of the sample with the electrode through which the electric field is applied. The white region on the left side of the panels (a) and (b) refers to the interface with the vacuum at the boundary of the sample.
of the applied electric field, thus underlining the role of the orbital degrees of freedom in the stripe phase formation. We argue that the reason for having a different response for an electric field oriented along the \(a\) and \(b\) axes might arise from the character of the activated orbital excitation due to the presence of an orbital easy axis for the orthorhombic crystalline symmetry of CRO.
Besides, in the insulating phase the ground state is orbitally and structurally correlated. The electric field tends to destroy the orbital pattern by deforming the octahedra and brings the system into a new state with interfaces between orbitally and structurally inequivalent configurations. The interfaces carry an electric dipole, so that they can interact and stabilize a periodic arrangement in the form of stripes. The observed phenomena may have a high impact on innovative types of switching memories: once the stripe pattern has been written, it can be erased and rewritten at will just by applying a small-amplitude voltage. The fact that the stripe phase becomes the new zero-voltage state is also very beneficial as it is more energy efficient than having to switch back to a fully insulating state. These findings can thus pave the way for the construction of low-energy-consumption, non-volatile nanoscale electronics and, in perspective, be integrated with other functional devices employing photonic effects.
**Acknowledgement** This project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 823717 - ESTEEM3. The Merlin camera used in the experiment received funding from the FWO-Hercules fund G0H4316N 'Direct electron detector for soft matter TEM'. C. A. and G. C. are supported by the Foundation for Polish Science through the International Research Agendas program co-financed by the European Union within the Smart Growth Operational Programme. C. A. and G. C. acknowledge the access to the computing facilities of the Interdisciplinary Center of Modeling at the University of Warsaw, Grant No. GB84-0, GB84-1 and GB84-7, and Poznan Supercomputing and Networking Center Grant No. 609. C. A. and G. C. acknowledge the CINECA award under the ISCRA initiative IScS "TOP-MOST" Grant, for the availability of high-performance computing resources and support. We acknowledge A. Guarino and C. Elia for providing support with the electrical characterization of the sample. M.C., R.F., and A.V. acknowledge support from the EU's Horizon 2020 research and innovation program under Grant Agreement No. 964398 (SUPERGATE).
## I Supplemental Info
In the Supplemental Information we provide details about the experimental methods and setup, the real space maps for various electric field configurations, and the electrical-structural characterization. Furthermore, we describe the methodology for the simulation of the electric field quench, and aspects of the density functional theory employed to
Figure 3: **(a-c)** The real space map of the \(c\) lattice parameter at room temperature, at 200\({}^{\circ}\)C and after quenching the specimen within 0.5 s back to room temperature. **(d)** Additionally, the histogram of the lattice parameter showing the two room temperature measurements and the high temperature measurement of the c-crystallographic axis. We can notice that with temperature alone the insulator-to-metal transition is fully reversible. **(e-g)** The real space map of the \(c\) lattice parameter at 0 V, at 1.5 V and after quenching back to 0 V of a specimen contacted to both electrodes with the field applied along the a axis (Joule heating is playing a role in this geometry as current can flow through the specimen). **(h)** The histogram which corresponds to the image in panel **(g)**, where the colors of the bars correspond to the colors in the image, is displayed next to the right-most panel. Additionally, histograms of the lattice parameter at 0 voltage **(e)** and maximum voltage **(f)** are shown to indicate the lattice constant in both insulating and metallic initial states. We notice that stripes are observed at the interface between two domains, one metallic and one with smaller lattice parameter (almost insulating) (see Supplemental Info for the I-V behavior before and after the voltage quench).
investigate the superlattice.
## II Experimental Methods
In this Section we describe the methods related to the crystal fabrication, the structural characterization, and the preparation and the analysis for transmission electron microscopy.
#### ii.0.1 Fabrication and structural characterization
Single crystals were grown by the floating zone technique using an infrared image furnace with two mirrors (NEC Machinery, model SC1-MDH11020). CRO single crystals used in this experiment were carefully selected prior to HRSTEM analysis. The morphology and composition were inspected by scanning electron microscopy using a SEM Leo EVO 50 Zeiss, coupled with an energy dispersive spectrometer (Oxford INCA Energy 300). The structural characterization was performed by high-angle X-ray diffraction measurements using a Panalytical X-Pert MRD PRO diffractometer
Figure 4: **(a)** The electric field is introduced via a time-dependent potential \(A(t)\) as given by the Maxwell relation \(E(t)=-\partial_{t}A(t)\) that is switched off after a characteristic time interval \(\tau_{0}\). The \(\eta\) parameter sets the strength of the electric field and it is defined as \(E_{max}/E_{M}\), where \(E_{max}\) is the maximum absolute value of \(E(t)=-\partial_{t}A(t)\) and \(E_{M}=0.01\) eV/Angstrom. Time is scaled in units of \(\tau_{0}=100\) picoseconds. The details of the model parameters are reported in the Methods section. **(b-g)** Time distribution of the density of the \(d_{xy}\) and averaged \((d_{xz},d_{yz})\) orbitals at the ruthenium site. The distribution is evaluated on the time interval preceding (left panels (b),(d),(f)) or following (right panels (c),(e),(g)) the quench of the electric field, for increasing values of electric field amplitude through \(\eta\). Vertical dotted lines mark the value of the orbital densities at zero voltage (left panels (b),(d),(f)), while dashed lines mark the values of the time average of the orbital densities after the quench (right panels (c),(e),(g)). **(h)** Schematics of the evolution of the orbital population, before and after the quench, in the limit of weak (small \(\eta\)) or strong electric field (large \(\eta\)).
Figure 5: **(a)** Superlattice of CRO composed of four RuO\({}_{2}\) layers. Two layers are in the L- and two in the S-phase. Grey, red and blue spheres indicate the Ru, O and Ca atoms, respectively. The blue arrows indicate the displacements, \(\delta\tau\), of Ru atoms as due to structural relaxation. \(l1\), \(l2\), \(l3\) and \(l4\) label the layers in the superlattice in the \(L\)- and \(S\)-regions. **(b)** Energies of the superlattice and the bulk as a function of displacements \(\delta\tau\) of the Ru atoms with respect to the centrosymmetric positions.
and the electrical characterization was obtained by a two-terminal method, applying a DC current along the c-axis of the crystal at room temperature (see supplementary material).
#### iii.2.2 Specimen preparation for Transmission Electron microscopy
Cross-sectional cuts of the samples along the [100] and [010] directions of the Ca\({}_{2}\)RuO\({}_{4}\) c-oriented single crystal were prepared using a Thermofisher Scientific Helios 650 dual-beam Focused Ion Beam device on dedicated DENS biasing chips, as shown in the supplementary material. To get a sample with the field applied along the c-crystallographic axis, the lift-out lamella was rotated by 180\({}^{\circ}\) using the omniprobe nanomanipulator before attaching it to the chip; this resulted in a slight angle between the electric field and the c-axis of the crystal, which has been neglected. The sample thickness was kept around 100-150 nm. Biasing and heating experiments were carried out in a DENS Solutions Lightning double-tilt holder with the help of a Keithley 2400 Source Meter and an in-house control program.
#### iii.2.3 Scanning Transmission Electron microscopy
The electron microscopy characterization was performed on the X-Ant-Em instrument at the University of Antwerp. The Electron Microscope used consists of an FEI Titan G3 electron microscope equipped with an aberration corrector for the probe-forming lens as well as a high-brightness gun operated at 300 kV acceleration voltage with a beam current of around 100 pA for all experiments to reduce acquisition time. The STEM convergence semi-angle used was 21 mrad for HRSTEM-HAADF imaging, providing a probe size of 0.8 A. The collection semi-angle ranges from 29-160 mrad for annular dark field (ADF) imaging.
Diffraction patterns used for Fig. 1 (main text) were acquired in nano-beam electron diffraction (NBED) mode with a convergence angle of 0.25 mrad, resulting in a spatial resolution of \(\sim\) 1 nm, and a collection angle of 21 mrad using a camera length of 285 mm and a 256\(\times\)256 pixel Quantum Detectors Merlin direct electron detection camera with an acquisition time of 2 ms/pixel. Similar conditions were used to acquire the 2D maps presented in Fig. 2 and Fig. 2b (main text).
#### iii.2.4 Determination of the lattice parameters using HRSTEM-HAADF (direct space).
The HR-STEM images were used to determine the lattice parameters of the CRO crystal. Ten frames were acquired with a dwell time of 2 \(\mu\)s. Since each individual image contains enough signal, it is possible to align them with the cross-correlation method [47]. Multiple fast scans were acquired to reduce the effect of the sample drift while retaining the same signal-to-noise as doing one slow scan. After the images were aligned, a peak-finding routine implemented in StatSTEM [48] was used to extract the initial guess of the atomic positions. These initial positions were refined by fitting 2D Gaussians to each atomic column. The fitted values of the centres were used to determine the lattice constants of the material.
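A minimal version of this column-fitting step is sketched below; it is our own illustration (window size, initial guesses, and function names are arbitrary), not the StatSTEM routine itself.

```python
# Sketch of the 2D Gaussian refinement of an atomic-column position in a
# HAADF image: fit a Gaussian in a small window around an initial peak guess.
import numpy as np
from scipy.optimize import curve_fit

def gaussian_2d(coords, amplitude, x0, y0, sigma, offset):
    x, y = coords
    g = amplitude * np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2 * sigma ** 2)) + offset
    return g.ravel()

def refine_column_position(image, x_guess, y_guess, window=7):
    """Return the refined (x0, y0) centre, in pixels, of one atomic column."""
    half = window // 2
    patch = np.asarray(image, dtype=float)[y_guess - half:y_guess + half + 1,
                                           x_guess - half:x_guess + half + 1]
    y, x = np.mgrid[y_guess - half:y_guess + half + 1,
                    x_guess - half:x_guess + half + 1]
    p0 = (patch.max() - patch.min(), x_guess, y_guess, 1.5, patch.min())
    popt, _ = curve_fit(gaussian_2d, (x, y), patch.ravel(), p0=p0)
    return popt[1], popt[2]
    # Lattice constants then follow from the spacings between neighbouring refined centres.
```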
#### iii.2.5 Determination of the lattice parameters using NBED (reciprocal space)
For NBED, a diffraction pattern is acquired at each probe position, making it possible to map the lattice parameter across the scan. In order to do this, a local 2D peak-finding algorithm is used to determine the position of each diffraction peak [49]. Once these positions are determined, the two lattice vectors which describe the diffraction peak positions are obtained by performing a linear fitting procedure. Once the lattice vectors are determined, it is straightforward to retrieve the norm of each vector, which corresponds to the length of the lattice parameters.
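The linear fit can be sketched in a few lines; peak indexing and the calibration from reciprocal pixels to physical lattice parameters are assumed to be done elsewhere, and the function below is an illustration rather than the implementation of Ref. [49].

```python
# Sketch of the lattice-vector fit: given diffraction-peak positions (relative
# to the transmitted beam) and their integer indices (h, k), solve
# peak ≈ h*g1 + k*g2 in the least-squares sense.
import numpy as np

def fit_lattice_vectors(peak_xy, hk):
    """peak_xy: (N, 2) peak positions in pixels; hk: (N, 2) integer indices."""
    hk = np.asarray(hk, dtype=float)
    peak_xy = np.asarray(peak_xy, dtype=float)
    G, *_ = np.linalg.lstsq(hk, peak_xy, rcond=None)   # rows of G are g1, g2
    g1, g2 = G[0], G[1]
    # Norms of the reciprocal vectors; conversion to real-space lattice
    # parameters uses the camera-length calibration and Bragg's law.
    return np.linalg.norm(g1), np.linalg.norm(g2)
```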
## III Experimental setup and electrical-structural characterization
In this Section we present extra results about the experimental setup (Fig. S1), the real space maps for various electric field configurations (Fig. S2), and the electrical-structural characterization (Fig. S3).
## IV Theoretical modelling of the electric field quench
In CRO, the bands close to the Fermi level stem mostly from the \(t_{2g}\) orbitals \((d_{yz},d_{zx},d_{xy})\), which hybridize with the oxygen \((p_{x},p_{y},p_{z})\) bands. Hence, one can build an effective model Hamiltonian for the propagating electrons within the ruthenium-oxygen plane, by considering the interaction terms at the ruthenium and oxygen sites and the kinetic term responsible for the ruthenium-oxygen hybridization.
The non-interacting part of the Hamiltonian for the Ru-O bond along the \(x\) ([100]) direction comprises the following terms:
\[H_{Ru_{i}-O}[x] = t_{d_{\alpha},p_{\beta}}\left[d_{i,\alpha\sigma}^{\dagger}p_{\beta\sigma}+h.c.\right] \tag{1}\] \[H_{el}^{O} = \varepsilon_{x}n_{p_{x}}+\varepsilon_{y}n_{p_{y}}+\varepsilon_{z}n_{p_{z}} \tag{2}\] \[H_{el}^{Ru} = \sum_{i}\varepsilon_{yz}n_{id_{yz}}+\varepsilon_{xz}n_{id_{xz}}+\varepsilon_{xy}n_{id_{xy}}. \tag{3}\]
Eq. (1) is the Ru-O hopping along a given symmetry direction, e.g. the \(x\)-axis, \(t_{d_{\alpha},p_{\beta}}\) is the hopping amplitude, \(\alpha,\beta\) are orbital indices running over the three orbitals in the \(t_{2g}\) sector, and \(d_{i\alpha\sigma}^{\dagger}\) is the creation operator of an electron with spin \(\sigma\) at the site \(i\) in the orbital \(\alpha\). Here, we include all the hopping terms which are allowed according to the Slater-Koster rules, assuming that the Ru-O bond can form an angle \(\theta\) with the \(x\) axis, due to the rotation of the octahedra around the \(c\)-axis. Eqs. (2) and (3) describe the orbital-dependent on-site energy terms, which take into account the offset between the occupied orbitals of O and Ru. In particular, Eq. (3) includes the on-site crystal-field splitting of the \(t_{2g}\) manifold in the octahedral environment, which can be expressed in terms of the amplitude \(\Delta_{CF}\), with \(\Delta_{CF}=(\varepsilon_{xy}-\varepsilon_{z})\). For flat octahedra below the structural transition temperature of Ca\({}_{2}\)RuO\({}_{4}\), \(\Delta_{CF}\) is negative. We also consider the possibility of having a small orthorhombic splitting, \(\delta_{or}\), of the \(d_{xz}\),\(d_{yz}\) orbitals by assuming that \(\varepsilon_{yz}=\varepsilon_{z}+\delta_{or}\) and \(\varepsilon_{xz}=\varepsilon_{z}-\delta_{or}\).
For interacting electrons, we restrict ourselves to the local Hamiltonian \(H_{\text{el-el}}^{Ru}\)[40; 41; 50] at the Ru sites, which includes the complete Coulomb interaction projected onto the \(t_{2g}\) subspace. This is given by the intra-orbital \(U\), and the inter-orbital Coulomb and exchange elements, \(U^{\prime}\) and \(J_{\text{H}}\). We assume a rotationally invariant condition for the Coulomb amplitudes, so that \(U=U^{\prime}+2J_{H}\), and \(J^{\prime}=J_{H}\)
\[H_{\text{el-el}}^{Ru}= U\sum_{i,\alpha}n_{i\alpha\uparrow}n_{i\alpha\downarrow}-2J_{H}\sum_{i,\alpha<\beta}\mathbf{S}_{i\alpha}\cdot\mathbf{S}_{i\beta}+ \tag{4}\] \[+\left(U^{\prime}-\frac{J_{\text{H}}}{2}\right)\sum_{i,\alpha<\beta}n_{i\alpha}n_{i\beta}+\] \[+J_{H}\sum_{i,\alpha<\beta}d_{i\alpha\uparrow}^{\dagger}d_{i\alpha\downarrow}^{\dagger}d_{i\beta\uparrow}d_{i\beta\downarrow}. \tag{5}\]
Moreover, we consider the spin-orbit coupling \(H_{\text{SOC}}^{Ru}\)
\[H_{\text{SOC}}^{Ru} = \lambda\sum_{i\alpha,\sigma}\sum_{\beta,\sigma^{\prime}}d_{i\alpha\sigma}^{\dagger}(\mathbf{l}_{\alpha\beta}\cdot\mathbf{s}_{\sigma\sigma^{\prime}})d_{i\beta\sigma^{\prime}} \tag{6}\]
where \(\lambda\) is the spin-orbit coupling strength and \((\mathbf{l}_{\alpha\beta}\cdot\mathbf{s}_{\sigma\sigma^{\prime}})\) are the matrix elements of the atomic SOC in the \(t_{2g}\) basis. Note that the \(t_{2g}\) orbitals have an effective orbital momentum \(l=1\), whose components in the basis \((d_{yz},d_{xz},d_{xy})\) can be expressed as \((l_{k})_{pq}=i\epsilon_{kpq}\).
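As an illustration of this construction, the short sketch below (our own, not code distributed with the paper) builds the 6\(\times\)6 atomic SOC matrix of Eq. (6) in the \(t_{2g}\otimes\) spin basis from \((l_{k})_{pq}=i\epsilon_{kpq}\); the value of \(\lambda\) is only representative.

```python
# Sketch: atomic SOC term lambda * l . s of Eq. (6) in the
# t2g (d_yz, d_xz, d_xy) ⊗ spin-1/2 basis, with (l_k)_{pq} = i * eps_{kpq}.
import numpy as np

eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[i, k, j] = 1.0, -1.0        # Levi-Civita symbol

l_ops = [1j * eps[k] for k in range(3)]           # effective l = 1 matrices (3x3)
s_ops = [0.5 * np.array(m, dtype=complex) for m in
         ([[0, 1], [1, 0]], [[0, -1j], [1j, 0]], [[1, 0], [0, -1]])]

def h_soc(lam=0.075):
    """Return the 6x6 matrix lambda * sum_k l_k ⊗ s_k (orbital ⊗ spin)."""
    return lam * sum(np.kron(l_ops[k], s_ops[k]) for k in range(3))

# The six levels split into two multiplets separated by 3*lambda/2.
print(np.round(np.linalg.eigvalsh(h_soc()), 4))
```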
For the examined cluster with two ruthenium ions Ru\({}_{1}\) and Ru\({}_{2}\) and one oxygen atom O, the total Hamiltonian finally reads as:
\[H=H_{Ru_{1}-O}[x]+H_{Ru_{2}-O}[x]+H_{el}^{O}+H_{el}^{Ru}+H_{el-el}^{Ru}+H_{ \text{SOC}}^{Ru}. \tag{7}\]
For the present analysis we adopt material-specific reference values: \(\lambda\) = 0.075 eV, \(U\) in the range [2.0, 2.2] eV, and \(J_{H}\) in the range [0.35, 0.5] eV. Similar values for \(\Delta_{CF}\), \(U\) and \(J_{H}\) have been used for calculations of electronic spectra in CRO, and the ratio \(g=\Delta_{CF}/(2\lambda)\) is typically considered to lie in the range
\(\sim\)[1.5,2] for modelling the spin excitations observed by neutron scattering [27, 28, 51, 52]. For the hopping amplitudes, we assume that the basic \(p-d\) hopping amplitudes in the tetragonal (\(\theta=0\)) symmetry have the following value: \(t_{p,d}^{0}\)=1.5 eV.
Let us describe the methodology for investigating the consequence of a time-dependent electric field that is switched off after a given time interval. In the presence of an applied voltage, the effect of the external electric field can be incorporated in the microscopic model by the standard Peierls substitution to the hopping matrix elements,
\[t_{d_{\alpha},p_{\beta}}(t)=t_{d_{\alpha},p_{\beta}}\exp\left[-i\frac{e}{\hbar}\int_{r_{Ru}}^{r_{O}}\mathbf{A}(t)\cdot d\mathbf{r}\right] \tag{8}\]
where \(r_{O}\) and \(r_{Ru}\) are the positions of the O and Ru atoms, \(e\) is the electron charge and \(\hbar\) the reduced Planck constant, while the vector potential is related to the electric field by \(\mathbf{E}(t)=-\partial_{t}\mathbf{A}(t)\). The electric field in this formalism corresponds to a time-dependent deformation of the Hamiltonian, and the present approach avoids dealing with electrodes in the system. Assuming that the electric field is static, lying in the Ru-O plane, and taking only one projection along the Ru-O-Ru axis, one can describe the evolution of the ground state by introducing a scalar vector potential \(A(t)\). We model the quench behavior of the electric field by assuming the time profile for \(A(t)\) displayed in Fig. 4(a) (main text). In the time interval preceding the quench, \(t<t_{Q}\), \(A(t)\) grows from zero to a maximum value, showing a super-linear dependence in time. Specifically, it is obtained as a cubic polynomial interpolation between linear functions, where the strength of the electric field is gradually increased up to an absolute value \(E_{max}\). In order to explore different coupling regimes compatible with the experimental values of the applied voltage, we considered several cases
which are parametrized by the constant \(\eta=E_{max}/E_{M}\), with \(E_{M}=0.01eV/\text{\AA}\) and \(E_{max}\) in the range \([10^{-4},10^{-2}]\,\text{eV}/\text{\AA}\). At \(t_{Q}\) of the order of 0.8 ns, \(A\) is suddenly reduced to zero over a time interval of 0.1 ns.
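A minimal sketch of such a quench protocol is given below; the quadratic ramp of \(A(t)\) (giving a linearly growing field), the bond length, and the unit handling are our own illustrative choices standing in for the cubic-polynomial interpolation used by the authors.

```python
# Illustrative quench profile (not the authors' implementation): A(t) ramps
# super-linearly so that |E(t)| = |dA/dt| grows from 0 to eta*E_M for t < t_Q,
# then A is switched off over t_off; the Peierls factor of Eq. (8) follows.
import numpy as np

E_M = 0.01               # eV/Angstrom, reference field scale
HBAR = 6.582e-16         # eV*s

def vector_potential(t, eta=1.0, t_q=0.8e-9, t_off=0.1e-9):
    a_max = -eta * E_M * t_q / 2.0                 # chosen so that max|E| = eta*E_M
    if t < t_q:
        return a_max * (t / t_q) ** 2              # quadratic ramp -> E grows linearly
    if t < t_q + t_off:
        return a_max * (1.0 - (t - t_q) / t_off)   # quench of A back to zero
    return 0.0

def peierls_hopping(t_hop0, t, d_ru_o=2.0, **kw):
    """Time-dependent Ru-O hopping t0 * exp(-i A(t) d / hbar); e absorbed in the eV units."""
    return t_hop0 * np.exp(-1j * vector_potential(t, **kw) * d_ru_o / HBAR)
```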
From a methodological point of view, we need to solve the time-dependent Schrödinger equation, \(i\hbar\frac{\partial}{\partial t}|\Psi(t)\rangle=H(t)|\Psi(t)\rangle\), which rules the time evolution of the quantum system at zero temperature starting from the ground state of the Hamiltonian obtained by exact diagonalization. Due to the large dimension of the Hilbert space, the time dynamics of the many-body ground state is performed by means of the Crank-Nicolson method, which guarantees a unitary time evolution where the evolved wave function is expressed in an infinitesimal interval as
\[|\Psi(t+\Delta t)\rangle=\exp\left[-i\hbar^{-1}\int_{t}^{t+\Delta t}H(t)dt\right]|\Psi(t)\rangle\approx\frac{\left[1-i\frac{\Delta t/2}{\hbar}H(t+\Delta t/2)\right]}{\left[1+i\frac{\Delta t/2}{\hbar}H(t+\Delta t/2)\right]}|\Psi(t)\rangle\,. \tag{9}\]
Hence, by means of Eq. (9) we determine the time dependent evolution of the ground state by discretizing the time interval. Here, the time step is considered to be \(dt=1.0\times 10^{-2}\hbar/t_{p,d}^{0}\), with \(t_{p,d}^{0}\) being the amplitude of the p-d \(\pi\) hybridization hopping process for \(\theta=0\). The choice of the time step is small enough to guarantee the convergence for the solution.
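A compact dense-matrix version of this propagation scheme is sketched below; it is our own illustration for a small Hilbert space (the actual calculation starts from the exact-diagonalization ground state of the cluster Hamiltonian).

```python
# Crank-Nicolson propagation implementing Eq. (9) for a dense Hamiltonian.
import numpy as np

def crank_nicolson_step(psi, h_mid, dt, hbar=1.0):
    """One unitary step: solve [1 + i*dt/(2*hbar)*H] psi' = [1 - i*dt/(2*hbar)*H] psi,
    with H evaluated at the midpoint t + dt/2."""
    eye = np.eye(h_mid.shape[0], dtype=complex)
    a = eye + 0.5j * dt / hbar * h_mid
    b = eye - 0.5j * dt / hbar * h_mid
    return np.linalg.solve(a, b @ psi)

def evolve(psi0, hamiltonian_of_t, t_grid, hbar=1.0):
    """Propagate |psi0> over a time grid; hamiltonian_of_t(t) returns H(t) as a matrix."""
    psi, states = np.array(psi0, dtype=complex), []
    for t0, t1 in zip(t_grid[:-1], t_grid[1:]):
        dt = t1 - t0
        psi = crank_nicolson_step(psi, hamiltonian_of_t(t0 + dt / 2), dt, hbar)
        states.append(psi.copy())
    return states
```

Expectation values such as the orbital densities \(n_{xy}\) and \(\frac{1}{2}(n_{xz}+n_{yz})\) discussed next are then evaluated on the stored states at each time step.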
Finally, we provide the description of the out-of-equilibrium dynamics of the on-site orbital occupancy of the \(d\)-orbitals in the ground-state following the quench, by calculating the time dependent expectation value of the electron density \(n_{xy}\) in the \(d_{xy}\), and averaged (\(d_{xz}\),\(d_{yz}\)) orbitals as given by \(\frac{1}{2}(n_{xz}+n_{yz})\), respectively. These quantities are the most relevant to identify the modification of the orbital configuration after quenching the applied electric field.
## V Density functional theory for cro superlattice
We have performed DFT calculations by using the Vienna ab-initio simulation package (VASP) [53; 54; 55]. The core and the valence electrons were treated within the projector augmented wave (PAW) [56] method with a cutoff of 480 eV for the plane-wave basis. We have used the PBEsol exchange-correlation method [57], a revised Perdew-Burke-Ernzerhof (PBE) that improves equilibrium properties of solids. PBEsol+\(U\) is the approach that we have followed to take into account the correlations associated with the Ru-4\(d\) states. We have considered \(U\)=3 eV for the antiferromagnetic insulating phase of ruthenates[42; 58], and regarding the Hund coupling we have used the value \(J_{H}=0.15\ U\) in agreement with approaches based on the constrained random phase approximation for 4\(d\)-electrons [59]. The values of the lattice constants are \(a_{S}\)=5.3945 A, \(b_{S}\)=5.5999 A, \(c_{S}\)=11.7653 A in the S-Pbca phase and \(a_{L}\)=5.3606 A, \(b_{L}\)=5.3507 A, \(c_{L}\)=12.2637 A in the L-Pbca phase [60]. To simulate the stripe phase, we built a superlattice composed of four RuO\({}_{2}\) layers, with two layers in the L- and two layers in the S-phase stacked along the \(c\)-axis. The lattice constants of the superlattice are obtained by averaging the lattice parameters of the bulk; we have that \(a_{superlattice}\)=(\(a_{S}\)+\(a_{L}\))/2, \(b_{superlattice}\)=(\(b_{S}\)+\(b_{L}\))/2 and \(c_{superlattice}\)=\(c_{S}\)+\(c_{L}\). A 11\(\times\)11\(\times\)4 k-point grid has been used for the bulk [61], while a 11\(\times\)11\(\times\)2 k-point grid has been used for the superlattice.
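As a quick numerical check of the cell construction just described (illustrative arithmetic only, using the lattice constants quoted above):

```python
# Superlattice cell from the bulk S-Pbca and L-Pbca constants quoted above:
# in-plane constants are bulk averages, the c axis is the sum of the two values.
a_S, b_S, c_S = 5.3945, 5.5999, 11.7653   # Angstrom
a_L, b_L, c_L = 5.3606, 5.3507, 12.2637   # Angstrom
a_sl, b_sl, c_sl = (a_S + a_L) / 2, (b_S + b_L) / 2, c_S + c_L
print(a_sl, b_sl, c_sl)   # ~5.378, ~5.475, 24.029
```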
|
2309.14367 | Design of Novel Loss Functions for Deep Learning in X-ray CT | Deep learning (DL) shows promise of advantages over conventional signal
processing techniques in a variety of imaging applications. The networks' being
trained from examples of data rather than explicitly designed allows them to
learn signal and noise characteristics to most effectively construct a mapping
from corrupted data to higher quality representations. In inverse problems, one
has options of applying DL in the domain of the originally captured data, in
the transformed domain of the desired final representation, or both.
X-ray computed tomography (CT), one of the most valuable tools in medical
diagnostics, is already being improved by DL methods. Whether for removal of
common quantum noise resulting from the Poisson-distributed photon counts, or
for reduction of the ill effects of metal implants on image quality,
researchers have begun employing DL widely in CT. The selection of training
data is driven quite directly by the corruption on which the focus lies.
However, the way in which differences between the target signal and measured
data is penalized in training generally follows conventional, pointwise loss
functions.
This work introduces a creative technique for favoring reconstruction
characteristics that are not well described by norms such as mean-squared or
mean-absolute error. Particularly in a field such as X-ray CT, where
radiologists' subjective preferences in image characteristics are key to
acceptance, it may be desirable to penalize differences in DL more creatively.
This penalty may be applied in the data domain, here the CT sinogram, or in the
reconstructed image. We design loss functions for both shaping and selectively
preserving frequency content of the signal. | Obaidullah Rahman, Ken D. Sauer, Madhuri Nagare, Charles A. Bouman, Roman Melnyk, Jie Tang, Brian Nett | 2023-09-23T15:39:28Z | http://arxiv.org/abs/2309.14367v1 | # Design of Novel Loss Functions for Deep Learning in X-ray CT
###### Abstract
Deep learning (DL) shows promise of advantages over conventional signal processing techniques in a variety of imaging applications. The networks' being trained from examples of data rather than explicitly designed allows them to learn signal and noise characteristics to most effectively construct a mapping from corrupted data to higher quality representations. In inverse problems, one has options of applying DL in the domain of the originally captured data, in the transformed domain of the desired final representation, or both.
X-ray computed tomography (CT), one of the most valuable tools in medical diagnostics, is already being improved by DL methods. Whether for removal of common quantum noise resulting from the Poisson-distributed photon counts, or for reduction of the ill effects of metal implants on image quality, researchers have begun employing DL widely in CT. The selection of training data is driven quite directly by the corruption on which the focus lies. However, the way in which differences between the target signal and measured data is penalized in training generally follows conventional, pointwise loss functions.
This work introduces a creative technique for favoring reconstruction characteristics that are not well described by norms such as mean-squared or mean-absolute error. Particularly in a field such as X-ray CT, where radiologists' subjective preferences in image characteristics are key to acceptance, it may be desirable to penalize differences in DL more creatively. This penalty may be applied in the data domain, here the CT sinogram, or in the reconstructed image. We design loss functions for both shaping and selectively preserving frequency content of the signal.
Keywords: Deep learning, neural network, X-ray CT, novel loss functions, spectral shaping.
Further author information: (Send correspondence to O.R.)
O.R.: E-mail: [email protected]
## 1 Introduction
Artificial neural networks (ANN) have been increasingly finding success in X-ray computed tomography (CT) [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]. ANN in imaging are designed by adjusting strengths of interconnections among artificial neurons with the goal of making the network's output, on the average, as close as possible to the ideal form of the image. This ideal form may be well known in training phase of the ANN, in which one may start with a perfect signal as the "target" and then corrupt it according to the character of noises and artifacts typically encountered in application. Alternatively, the target image may be imperfect, but far less afflicted with error than those encountered as measurements. In training, simple multiplicative coefficients or other representations of neural interconnections are iteratively adjusted to minimize some average measured error, or loss, between an ensemble of network-processed input data and their respective target images, as represented in Figure 1. The measured loss is backpropagated through the ANN to provide gradients to correct the connections and reduce loss, thus "learning" the inverse operator. Following training, the network may be applied to new data sets in order to reduce their content of error as described by the system's loss function. The process is, with increasing frequency, titled "deep learning" because more powerful computational resources have allowed more layers in the ANN, hence a "deeper" network.
Probably the most common loss function applied has been mean-squared error. Let us define \(Y\) as the input data, which we model as a function of some ideal, target image \(X\), or \(Y=h(X)\). The task of the ANN is to extract from \(Y\) a rendering close to the unknown, ideal image. If we define \(g=h^{-1}\), our training would seek to learn \(g\) to produce \(X=g(Y)\). Equality is seldom achievable due to noise or other corruption, and we optimize in the sense of average, possibly weighted, error. If we use the variable \(k\) to index among training pairs, \(n\) to index entries in vectors \(X_{k}\) and \(Y_{k}\), and \(\theta\) to represent the variable parameters of the ANN, our DL-trained mapping \(g_{\theta}\) for the mean-squared error case may be expressed in terms of
\[\hat{\theta}=\underset{\theta}{\text{\emph{argmin}}}\ \sum_{k,n}w_{k,n}[X_{k,n}-(g_{\theta}(Y_{k}))_{n}]^{2} \tag{1}\]
in which the weightings \(w_{k,n}\) may be fixed in either or both variables, or may be adapted according to relative local characteristics of data. This weighted, mean-squared penalty on the standard error, \(S_{k}\triangleq X_{k}-g_{\theta}(Y_{k})\), has a number of potential advantages, including being statistically well-matched to Gaussian noise. In cases where less severe penalization of large errors is desired, squared error may be replaced by absolute error, similarly to penalty adjustment in edge-preserving regularization.
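In a typical DL framework, Eq. (1) corresponds to a training loss of the following form; this PyTorch sketch is illustrative only (the default all-ones weighting is a placeholder, not a recommended choice of \(w_{k,n}\)).

```python
# Weighted mean-squared error of Eq. (1), averaged over the batch and pixels.
import torch

def weighted_mse_loss(output, target, weights=None):
    """Mean of w * (target - output)^2; pass weights=None for unweighted MSE."""
    if weights is None:
        weights = torch.ones_like(target)
    return torch.mean(weights * (target - output) ** 2)
```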
While simple norms such as expressed above provide highly useful loss metrics, it has long been recognized in the image processing community that they may be less than ideal for applications in which the final receiver for the system's output is a human observer. Various metrics for perceptual loss have been designed in hopes of optimizing the elusive human-interpreted quality of audio[1] and visual data[2]. For diagnostic CT imaging, in which much analysis is performed by radiologists, more subjective quality metrics are applied by the end users of the technology, and spectral content of residual noise, plateauing of image levels in low-contrast areas and other context-dependent evaluations must be addressed.
This work introduces a novel class of loss metrics which may expand the usefulness of DL in X-ray CT. We generalize the sense of optimality to
\[\hat{\theta}=\underset{\theta}{\text{\emph{argmin}}}\ \sum_{k}L[X_{k},Y_{k},g_{ \theta}(Y_{k})], \tag{2}\]
where \(L\) is now a function that may capture any number of spectral and spatial characteristics in the error. In the X-ray CT arena, we may choose to improve the signal in either the sinogram domain, where measurements are made directly, or in the image domain after reconstruction by any existing algorithm. The signal and error statistics in these two differ, leading to designs tailored for each case. In the following, we describe one embodiment of the design.
## 2 Method
Conventional, point-wise mean-squared error as loss may be thought of as a flat spectral penalty. However, in cases where we wish to focus on removing artifacts with low or medium spatial frequency content, penalizing all frequencies equally may be counter-productive. Given that many well-developed, edge-preserving techniques are available for removing high-frequency noise, particularly in the image domain, low-signal correction in CT may in some cases be better served by training the network to remove errors only in lower frequencies. In this case, we propose a loss function \(L\) in Eq. (3) that may take the form
\[L[S_{k}]\triangleq\phi[f_{1}(S_{k})], \tag{3}\]
Figure 1: Training of neural network. Parameters governing system behavior are denoted by \(\theta\). The gradient of the loss function’s penalization of error (\(L\)), as a function of \(\theta\), is used to improve the averaged match between target and output of network during training.
where \(\phi\) is a suitable error metric applied only within the passband of the lowpass filter \(f_{1}\). The higher frequency error becomes a "don't care" element for the network. Alternatively, band-pass or high-pass filtering may focus loss on those portions of the error spectrum. Particularly in three-dimensional image vectors, frequencies may be treated differently along the three axes. This forms the first part of our novel loss function.
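A band-limited loss of the form of Eq. (3) could be sketched as follows, where a simple moving-average kernel stands in for whatever low-pass response \(f_{1}\) is chosen in practice, \(\phi\) is taken to be the mean-squared norm, and one-dimensional signals of shape (batch, length) are assumed purely for brevity.

```python
import torch
import torch.nn.functional as F

def lowpass_error_loss(X, Y_hat, kernel_size=15):
    """phi[f1(S)]: penalize only the low-frequency part of the error S = X - Y_hat."""
    S = (X - Y_hat).unsqueeze(1)                                        # (batch, 1, length)
    f1 = torch.ones(1, 1, kernel_size, device=S.device, dtype=S.dtype) / kernel_size
    S_low = F.conv1d(S, f1, padding=kernel_size // 2)                   # error within the passband of f1
    return torch.mean(S_low ** 2)                                       # higher-frequency error is "don't care"
```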
The discussion above is most commonly addressed to conventional CT imagery in two or three dimensions, in which spatial frequency has roughly equivalent meaning in all dimensions. However, the present methods are intended at least as importantly for use in the native domain of the data, the sinogram. Application of the type of loss function in Eq. (3) in the sinogram requires modeling behavior in such coordinates as row, channel and view, where the first two index in the detector panel of the CT gantry, and the last indexes the distinct rotating, two-dimensional views of patient or object. In this case, the error filtering operation will need to be spatially adapted, as statistics of both the underlying signal and the corrupting noise vary spatially in the sinogram domain.
It has been widely observed in the DL community that networks appear to have a strong tendency toward elimination of high frequencies in the output and this may occur even when the penalized loss is restricted to low frequency error as in Eq. (3). An example application is using DL for low signal correction, where some of the most problematic artifacts are of low to medium spatial frequency. Here, it may be advantageous to retain parts of the error spectrum in the output when the correction network is applied in the sinogram domain. Powerful, adaptive denoisers in the image domain can capitalize on the relatively stationary underlying image statistics to remove higher frequency noise with little damage to edge resolution. Thus, we may wish to actively discourage suppression of this part of the error signal in the first stage of processing in order to preserve both resolution and desirable texture. We propose a second part of the loss function that will penalize removal of components of the signal \(Y_{k}\) according to their spectral content. This component of the loss may be expressed similarly to Eq. (3), but with the argument redefined as
\[T_{k}\triangleq Y_{k}-g_{\theta}(Y_{k})\,, \tag{4}\]
\[L[S_{k},\ T_{k}]\triangleq\phi[f_{1}(S_{k})+\alpha f_{2}(T_{k})]\,. \tag{5}\]
A realization of the system is shown below in Figure 2. It includes the two loss components discussed previously. The first, realized by the right branch, penalizes the error from Eq. (3) filtered by \(f_{1}\). The left branch features the error from Eq. (4), where a different portion of the spatial frequency spectrum of the error, within the passband of \(f_{2}\), is penalized. The two error signals are combined before the application of the norm \(\phi\) and the computation of the gradient for backpropagation. The weighting factor \(\alpha\) may be any positive value, with larger values resulting in more of the desired frequency components being preserved in the output.
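A compact sketch of this two-branch loss, corresponding to Eq. (5) and Figure 2, is given below; here \(f_{1}\) and \(f_{2}\) are passed in as arbitrary callable filters (for instance the low-pass filter above and a complementary high-pass filter), and the default value of \(\alpha\) is purely illustrative.

```python
import torch

def dual_band_loss(X, Y, Y_hat, f1, f2, alpha=0.5):
    """phi[f1(S) + alpha * f2(T)] with S = X - g(Y) and T = Y - g(Y)."""
    S = f1(X - Y_hat)    # right branch: filtered mismatch to the target
    T = f2(Y - Y_hat)    # left branch: penalize removal of selected input signal content
    return torch.mean((S + alpha * T) ** 2)
```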
The responses of filters \(f_{1}\) and \(f_{2}\) plus the parameter \(\alpha\) appear to provide a great deal of control over the inference behavior of the network. In an extremely conservative case, with \(f_{1}=f_{2}=1.0\ \forall\ \omega\) and \(\alpha=1\), the composite error becomes
\[X_{k}+Y_{k}-2g_{\theta}(Y_{k}), \tag{6}\]
Figure 2: Training of system to encourage the output to mimic the target content as selected by filter \(f_{1}\), but refrain from removal of input signal content as selected by filter \(f_{2}\)
which will simply place the optimum output midway between the target and the input.
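To make the midway claim explicit, any norm \(\phi\) of the composite error in Eq. (6) is minimized when its argument vanishes, i.e.,

\[X_{k}+Y_{k}-2\,g_{\theta}(Y_{k})=0\quad\Longrightarrow\quad g_{\theta}(Y_{k})=\tfrac{1}{2}\left(X_{k}+Y_{k}\right)\,,\]

so a fully converged network in this limiting case would output the average of target and input.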
## 3 Results
Parts of this method have been preliminarily tested with phantom and clinical data. Below are a few results with the latter. In this configuration, the training loss was the weighted sum of the low-pass (LP) filtered error between output and target, and the high-pass (HP) filtered error between input and output. The filters are shown in Figure 3. The DL network was trained to operate in the original data domain, i.e., the counts domain. Training data consisted of high-dosage Kyoto phantom scans as targets, with synthetic photon-counting and electronic noise added to form input sinograms. Figure 4 shows the increase in fine-grained texture, i.e., high-frequency components, in the output as \(\alpha\) increases.
The noise power spectra (NPS), shown in Figure 5, were measured in the liver region of reconstructed clinical images. The NPS resulting from the use of only the low-pass term in the loss function (\(\alpha=0\)) can be seen to lack much high-frequency content. Use of the high-pass filter on the error between the input and the output preserves some of the high-frequency components, retaining resolution along with high-frequency noise. The value of \(\alpha\) can be adjusted to balance NPS qualities against noise tolerance in the image.
To assess the flatness of the NPS curve, entropy measurement was performed as
\[\mathrm{Entropy}=\sum_{\omega_{i}=0}^{\omega_{s}/2}\mathrm{NPS}(\omega_{i})\log_{2}\frac{1}{\mathrm{NPS}(\omega_{i})}\,, \tag{7}\]
where \(\omega_{i}\) is the discrete spatial frequency and \(\omega_{s}\) is the spatial sampling frequency. It can be seen in Table 1 that the flatness of the NPS increases with \(\alpha\) as far as 0.8, but it suffers from excessive high frequency emphasis for \(\alpha\) of 1.0. This case exhibits undesirable streaks in the image as well.
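A direct numerical transcription of Eq. (7), assuming the measured NPS has been normalised to unit total power so that it behaves like a probability distribution, might look like:

```python
import numpy as np

def nps_entropy(nps):
    """Spectral entropy of a noise power spectrum sampled from zero to the Nyquist frequency."""
    p = np.asarray(nps, dtype=float)
    p = p / p.sum()                     # normalise to unit total power
    p = p[p > 0]                        # skip empty bins to avoid log(0)
    return -np.sum(p * np.log2(p))      # flatter spectra yield larger entropy
```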
## 4 Conclusion
This paper presents a combination of two frequency-weighted loss function components for a deep network, furnishing potentially better control of the behavior of the network in removing signal corruption. The first part of the DL loss function employed here restricts the training loss to lower-frequency error between a target data set and the input set processed by the network. The second component of the loss ensures the preservation of select error content from the uncorrected data, with the intent of delegating any removal of that error to a later stage of processing. This results in the network's ability to retain desired traits in the data according to chosen models for training loss. In our example application, improvement in the texture of the reconstructed image was observed and confirmed with the NPS metric. Further work will test the value of this design in improving the noise/resolution trade-off in the presence of image-domain postprocessing. We have developed this novel DL loss function design for X-ray CT imaging, but it can easily find application in other areas.
|
2310.00271 | Virial Black Hole Mass Estimates of Quasars in the XQ-100 Legacy Survey | The black hole (BH) mass and luminosity are key factors in determining how a
quasar interacts with its environment. In this study, we utilise data from the
European Southern Observatory Large Programme XQ-100, a high-quality sample of
100 X-shooter spectra of the most luminous quasars in the redshift range $3.5 <
z < 4.5$, and measure the properties of three prominent optical and ultraviolet
broad emission-lines present in the wide wavelength coverage of X-shooter: CIV,
MgII, and H$\beta$. The line properties of all three broad lines are used for
virial estimates of the BH mass and their resulting mass estimates for this
sample are tightly correlated. The BH mass range is
$\log{(\rm{M_{BH}}/\rm{M_\odot})} = 8.6-10.3$ with bolometric luminosities
estimated from the 3000A continuum in the range
$\log{(\rm{L_{bol}}/\rm{erg\,s^{-1}})} = 46.7-48.0$. Robustly determined
properties of these quasars enable a variety of follow-up research in quasar
astrophysics, from chemical abundance and evolution in the broad-line region to
radiatively driven quasar outflows. | Samuel Lai, Christopher A. Onken, Christian Wolf, Fuyan Bian, Guido Cupani, Sebastian Lopez, Valentina D'Odorico | 2023-09-30T06:20:53Z | http://arxiv.org/abs/2310.00271v1 | # Virial Black Hole Mass Estimates of Quasars in the XQ-100 Legacy Survey
###### Abstract
The black hole (BH) mass and luminosity are key factors in determining how a quasar interacts with its environment. In this study, we utilise data from the European Southern Observatory Large Programme XQ-100, a high-quality sample of 100 X-shooter spectra of the most luminous quasars in the redshift range \(3.5<z<4.5\), and measure the properties of three prominent optical and ultraviolet broad emission-lines present in the wide wavelength coverage of X-shooter: C iv, Mg ii, and H\(\beta\). The line properties of all three broad lines are used for virial estimates of the BH mass and their resulting mass estimates for this sample are tightly correlated. The BH mass range is \(\log\left(\rm M_{BH}/\rm M_{\odot}\right)=8.6-10.3\) with bolometric luminosities estimated from the 3000 A continuum in the range \(\log\left(\rm L_{bol}/\rm erg\ s^{-1}\right)=46.7-48.0\). Robustly determined properties of these quasars enable a variety of follow-up research in quasar astrophysics, from chemical abundance and evolution in the broad-line region to radiatively driven quasar outflows.
keywords: galaxies: active - galaxies: high-redshift - quasars: emission lines
## 1 Introduction
Hundreds of thousands of quasar (QSO) sources have now been confirmed through massive surveys (e.g., Flesch, 2015; Yao et al., 2019; Lyke et al., 2020) up to a redshift of \(z=7.642\)(Wang et al., 2021). Despite the abundance of sources, high-quality echelle spectroscopy is available for only a few thousand unique QSOs, of which only a fraction contain data in the near-infrared (NIR). As the redshift increases, more of the rest-frame ultraviolet (UV) and optical atomic transitions shift into the infrared, which renders NIR observations invaluable for QSO emission and absorption-line studies.
The European Southern Observatory Large Programme "Quasars and their absorption lines: a legacy survey of the high-redshift universe with VLT/X-shooter" (hereafter referred to as XQ-100, PI: S. Lopez, programme number 189.A-0424) is a publicly available and high-quality sample of echelle spectra from 100 luminous QSOs in the redshift range \(3.5<z<4.5\)(Lopez et al., 2016). The simultaneous full spectral coverage is from 315 nm to 2500 nm with resolving power \(R\sim 5400-8900\) and median signal-to-noise ratio (SNR) of 24, measured across the whole spectrum and entire sample of 100 QSOs. Prior to XQ-100, the largest NIR spectroscopic survey, conducted using the FIRE spectrograph at Magellan, comprised 50 QSOs at \(2<z<5\)(Matejek and Simcoe, 2012) with a median SNR per-pixel of 13 across the entire QSO sample. The XQ-100 survey with its high SNR and broad spectral coverage provides a unique and statistically significant sample to study the rest-frame UV and optical spectral properties of 100 high-redshift QSOs.
Among the scientific themes of the XQ-100 programme is the study of galactic absorption. Sub-damped (subDLA) or damped Ly \(\alpha\) systems (DLA; Wolfe et al., 2005) are used to determine the cosmic density of neutral gas as they are the main reservoirs for neutral gas in the Universe (e.g., Prochaska and Wolfe, 2009; Noterdaeme et al., 2012; Sanchez-Ramirez et al., 2016; Berg et al., 2019). The same systems can be used to probe metal abundances of QSO hosts by tracing gaseous absorbers along QSO sightlines (Berg et al., 2016, 2021). Similarly, intrinsic narrow absorption lines (NALs) in XQ-100 data are probes of the physical conditions of the QSO immediate environment and the energetics of its outflow (Perrotta et al., 2016), where absorption-line diagnostics indicate metallicity, absorber covering fraction, and ionisation structure (Perrotta et al., 2018). In addition, the XQ-100 spectra also address cosmological questions through independent constraints of the Ly \(\alpha\) forest power spectrum at high redshift (Irsic et al., 2017; Yeche et al., 2017).
The study of active galactic nuclei (AGN) properties is also one of the scientific themes from the XQ-100 programme. The high-quality spectra can be used for accurate measurements of \(z>3.5\) black hole masses using line profiles of rest-frame UV C iv, Mg ii, or rest-frame optical H\(\beta\) emission-lines and the continuum luminosity (e.g.,
McLure & Dunlop, 2004; Greene & Ho, 2005; Vestergaard & Peterson, 2006; Vestergaard & Osmer, 2009). Flux ratios of emission-lines in the rest-frame UV, such as N v/C iv or (Si iv+O iv)/C iv, provide estimates of the metallicity in the QSO broad-line region (BLR), which probes the chemical enrichment history in high-redshift galactic nuclear regions (e.g., Hamann & Ferland, 1999; Hamann et al., 2002; Nagao et al., 2006; Wang et al., 2012; Xu et al., 2018; Wang et al., 2022; Lai et al., 2022). In the local universe, black hole masses and galactic bulge masses are strongly correlated (the M\({}_{\rm BH}-\) M\({}_{\rm bulge}\) relation; Marconi & Hunt, 2003; Haring & Rix, 2004; Greene et al., 2010), indicating that host galaxies and their central supermassive black holes co-evolve. Determining the black hole masses of high-redshift QSOs is valuable for studies that aim to investigate how properties of host galaxies and their black holes came to be strongly coupled (e.g., Croton et al., 2006; McConnell & Ma, 2013; Terrazas et al., 2020).
In this study, we estimate the black hole masses of every source in XQ-100 using single-epoch virial estimates based on the prominent broad C iv\(\lambda\)1549A, Mg ii\(\lambda\)2799A, and H\(\beta\)\(\lambda\)4863A lines. We measure emission-line properties utilising the high SNR, resolving power, and wide spectral coverage of the X-shooter data to tightly constrain the observed spectral profiles. This study produces a large catalogue of bright QSOs with robustly measured emission-line properties, black hole masses, and luminosity estimates at high redshift (\(z>3.5\)).
The content of this paper is organised as follows: in Section 2, we describe the XQ-100 data and their further processing. In Section 3, we present our approach to modelling prominent emission-lines in the observed spectra. In Section 4, we describe virial mass estimates based on the measured line properties. In Section 5, we discuss measurements of the emission-lines, black hole mass, and QSO luminosity. We compare the different virial mass estimates against each other and contextualise our results with large low-redshift samples. We summarize and conclude in Section 6. Throughout the paper, we adopt a flat \(\Lambda\)CDM cosmology with H\({}_{0}=70\) km s\({}^{-1}\) Mpc\({}^{-1}\) and (\(\Omega_{\rm m},\Omega_{\Lambda}\)) = \((0.3,0.7)\). All referenced wavelengths of emission-lines are measured in vacuum.
## 2 XQ-100 Sample Data and Processing
Targets in the XQ-100 sample were initially selected from the NASA/IPAC Extragalactic Database (NED) with declinations \(\delta<+15^{\circ}\) and redshifts \(z>3.5\). An additional twelve targets were obtained from the literature with declination \(+15^{\circ}<\delta<+30^{\circ}\). Deliberate steps were taken to avoid targets with known broad absorption features and to avoid intrinsic colour selection bias. A full description of the target selection process can be found in Lopez et al. (2016).
### Sample Description and Data Reduction
The targets span the redshifts from \(z=3.508\) to \(z=4.716\)(Lopez et al., 2016), although all but four are within the redshift range \(3.5<z<4.5\). The sample is biased towards bright sources, covering a magnitude range in _Gaia DR3_ Gp band (Gaia Collaboration et al., 2021) of 16.78 to 19.00 Vegamag. Observations were carried out between 2012 April 1, and 2014 March 26 by the X-shooter instrument (Vernet et al., 2011) on the Very Large Telescope (VLT) using all three spectroscopic arms: UVB (300\(-\)559.5 nm), VIS (559.5\(-\)1024 nm), and NIR (1024\(-\)2480 nm). The wide wavelength coverage ensures that the C iv and Mg ii emission-lines are always observed within the VIS and NIR arms for the range of redshifts in the sample. Additional information on the requested observing conditions and instrumental setup is available in Lopez et al. (2016). We briefly summarise the reduction and processing procedures behind the XQ-100 data products, as described in Lopez et al. (2016).
Extraction of XQ-100 spectra was performed using an IDL-based custom pipeline (Becker et al., 2012). The strategy of the custom pipeline follows techniques described in Kelson (2003). Flux calibration uses response curves generated from observations of spectrophotometric standard stars, observed close in time to the science frames (Lopez et al., 2016), where a fiducial response curve was used if the temporally closest standard star observation was not optimal. Newer versions of this pipeline have been used in other QSO studies, such as XQR-30 (D'Odorico et al., 2023). While XQ-100 data from all three spectrograph arms are available, for the present study, we consider only the VIS and NIR arms, because they contain all emission-lines of interest. The velocity resolutions chosen to rebin the spectra are 11 km s\({}^{-1}\) and 19 km s\({}^{-1}\) for the VIS and NIR arms, respectively.
The absolute flux calibration is a crucial step in determining the luminosity of the QSO continuum. A comparison between XQ-100 and Sloan Digital Sky Survey (SDSS; York et al., 2000) spectra showed a systematic underestimation of flux for the X-shooter spectra due to slit losses. However, the slit losses appear to be roughly achromatic, such that the spectral shape is correctly reconstructed, but the flux calibration should be taken as order-of-magnitude estimates (Lopez et al., 2016). Thus, we describe our independent calibration of the XQ-100 spectra to observed photometry in Section 2.2.
Telluric absorption features appear prominently in both the VIS and NIR arms. Corrections to the spectra are derived using model transmission spectra based on the ESO SKYCALC Cerro Paranal Advanced Sky Model, version 1.3.5 (Noll et al., 2012; Jones et al., 2013), which are applied to individual-epoch spectra of all XQ-100 QSOs. After extraction and telluric correction, the median per-pixel SNR for the whole QSO sample are 33, 25, and 43, measured at rest-frame wavelengths 1700, 3000, and 3600 A, respectively (Lopez et al., 2016), computed in \(\pm 10\)A windows.
The processed XQ-100 data products, including reduced spectra and telluric models, are publicly available through the ESO Science Archive Facility. However, the spliced spectra and the multi-epoch averaged spectra are not telluric corrected.
### Data Post-Processing
We obtain individual VIS and NIR single-epoch frames from the ESO Science Archive Facility for all XQ-100 sources and apply the following post-processing procedure:
(i) We use the respective telluric model included in each frame to obtain the telluric-corrected spectra and use the emission redshift to transform the spectra into the rest-frame.
(ii) We identify pixels for which the per-pixel SNR is 5 or below and mask them from further processing and modelling.
(iii) We apply a mask by sigma-clipping with a \(3\sigma\)-threshold along a box width of 40 pixels to remove some of the narrow absorption features and noise above \(3\sigma\). The absorption features are not desired when modelling the intrinsic profile of the broad emission-lines and the sigma-clipped spectrum also helps constrain the continuum. While this procedure alone will not remove the base of absorption troughs, we follow the procedure in Shen et al. (2011), which defines our single-epoch virial mass calibration of Mg ii. In Section 3.2, we describe an additional mask buffer window to remove the base of absorption features embedded in the C iv line profile, but this is not applied throughout the entire spectrum.
(iv) We crossmatch the XQ-100 sample with UKIRT Infrared Deep Sky Survey (UKIDSS; Lawrence et al., 2007) DR11, UKIRT Hemisphere Survey (UHS; Dye et al., 2018) DR1, VISTA Hemisphere Survey (VHS; McMahon et al., 2013) DR6, VISTA Kilodegree Infrared Galaxy Survey (VIKING; Edge et al., 2013) DR5, and Two Micron All-Sky Survey (2MASS; Skrutskie et al., 2006) to obtain near-infrared \(J\)-band photometry. We also crossmatch all targets with the SkyMapper Southern Survey (SMSS; Onken et al., 2019) DR3, Panoramic Survey Telescope and Rapid Response System (Pan-STARRS; Chambers et al., 2016) DR1, Sloan Digital Sky Survey (SDSS; York et al., 2000) DR16, and Dark Energy Sky Survey (DES; Abbott et al., 2021) DR2 to obtain optical \(i\)-band photometry. We obtain the transmission profile of the broadband filters using the SVO Filter Profile Service (Rodrigo and Solano, 2020) and integrate the observed-frame spectra across the profile, obtaining a flux ratio between the photometry and spectrum with an associated uncertainty, which is used to calibrate the observed spectra to the photometry. There is one target, SDSS J004219.74\(-\)102009.4, for which no publicly available \(J\)-band photometry was found in the above surveys. In this case, we scale the flux of the NIR arm to match the flux of the VIS arm within the overlapping wavelength coverage. The magnitudes used for calibration are provided in the online supplementary table. We note that the median correction required to match the spectrum to photometry is a 42% flux increase with an error of 2-3%, which is higher than the \(\sim 30\)% flux underestimation on X-shooter's part compared to SDSS spectra estimated in Lopez et al. (2016). As the photometry is taken from a separate epoch from the spectroscopic data, the additional uncertainty from the photometric calibration is insignificant compared to QSO variability, which we quantify and discuss in Section 5.1.
(v) We standardise the rest-frame wavelength domain for all of the spectra. Every spectrum is resampled using a flux-conserving algorithm (SPECTRES; Carnall, 2017) into rest-frame bins with a common velocity dispersion of 50 km s\({}^{-1}\). The resampling calculation and error propagation are described in detail in Carnall (2017). Then the VIS and NIR arms are spliced together without rescaling, using the inverse variance weighted mean flux for the superposition between arms. In a few cases, we observe a discontinuity between the VIS and NIR arms, located between 1860A to 2275A for the redshift range of our sample. The median flux difference between arms as measured in the overlapping region is 0.6%, albeit with a large standard deviation of 24%. However, we emphasise that the data in the overlapping region between arms are naturally at the edge of the wavelength coverage of each arm and is particularly noisy, so the flux difference measured in this fashion can be exaggerated. Nevertheless, we flag all targets with higher than 25% flux difference between the VIS and NIR arms in the supplementary table under the column "NIR_VIS_Flag". We rely on the flux calibration in each respective arm and only use data within one arm at a time to fit the QSO continuum. Thus, the flux discontinuity between arms does not affect our continuum or emission-line models.
(vi) If there are repeated observations of a single source, we make use of all the available data and stack the resampled telluric-corrected spectra together, using the mean weighted by the inverse variance to define the value at each 50 km s\({}^{-1}\) velocity bin and propagate the uncertainty. Because of the calibration in step (iv), the flux densities at each velocity bin between repeated observations are in good agreement. The temporal separations between repeated observations range from 10 days to 1.5 years. However, we are interested in the average spectrum in order to determine representative properties of the black hole mass and luminosity. Prior to rescaling the flux level of the spectra to photometry, the median flux difference between exposures measured at every velocity bin is 7.3% with a standard deviation of 7.6%. After rescaling, our flux level is more consistent, measured at 2.3% with a standard deviation of 0.6%. We also quantify the uncertainty from QSO variability in Section 5.1.
(vii) We use \(R_{v}=3.1\) and the Schlegel, Finkbeiner and Davis (SFD; Schlegel et al., 1998) extinction map to apply a correction for the Milky Way extinction in the observed frame. However, the normalisation of the colour excess based on the SDSS footprint and fits to the blue tip of the stellar locus suggests that SFD systematically over-predicts \(E(B-V)\) by 14% (Schlafly et al., 2010). Thus, we apply a 14% re-calibration factor to the colour excess, such that \(E(B-V)=0.86\times E(B-V)_{\rm SFD}\)(Schlafly and Finkbeiner, 2011).
After the post-processing procedure, the median SNR per 50 km s\({}^{-1}\) for the whole QSO sample measured at rest-frame 1700, 3000, and 3600 A is 76, 52, and 74, respectively, measured from the median SNR within \(\pm 10\)A windows. Much of the increase in signal originates from the SNR floor and consolidating the flux from its native resolution into the rest frame 50 km s\({}^{-1}\) grid.
For the wavelength range redder than rest-frame 3600 A, a similar post-processing procedure is applied, but the sigma-clip mask of step (iii) is omitted to preserve narrow emission-line features of H\(\beta\) and [O iii]. Due to the redshift range of this sample, only a subset of sources contains the H\(\beta\) line within the X-shooter coverage. We visually inspect the data to ensure that the H\(\beta\) line is distinguishable from the additional noise of the thermal background and second-order contamination at the edge of the NIR arm wavelength coverage. We also ensure that the H\(\beta\) line is observed with sufficient SNR (\(>10\) per resolution element), which produces a sub-sample of 21 QSOs, where the median 50 km s\({}^{-1}\) SNR across all 21 QSOs is 13. In this case, the SNR of each QSO is measured from the median SNR between 5090\(-\)5110 A.
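The photometric rescaling of step (iv) and the inverse-variance stacking of step (vi) reduce, in essence, to a few array operations. The sketch below assumes a photon-weighted synthetic-photometry convention and that the catalogue magnitude has already been converted to a flux density in the same units as the spectrum; it is illustrative only, not the exact implementation used for XQ-100.

```python
import numpy as np

def photometric_scale_factor(wave, flux, filt_wave, filt_trans, catalog_flux):
    """Factor that rescales a spectrum so its synthetic photometry matches the catalogue flux."""
    trans = np.interp(wave, filt_wave, filt_trans, left=0.0, right=0.0)
    synth = np.trapz(trans * flux * wave, wave) / np.trapz(trans * wave, wave)
    return catalog_flux / synth

def inverse_variance_stack(fluxes, errors):
    """Inverse-variance weighted mean of repeated, already-resampled spectra."""
    fluxes, errors = np.asarray(fluxes), np.asarray(errors)
    w = 1.0 / errors ** 2
    mean = np.sum(w * fluxes, axis=0) / np.sum(w, axis=0)
    err = 1.0 / np.sqrt(np.sum(w, axis=0))    # propagated uncertainty per velocity bin
    return mean, err
```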
## 3 Spectral Modelling
Our objective in this study is to measure the properties of the following QSO broad emission-lines: C iv\(\lambda\)1549A, Mg ii\(\lambda\)2799A, and H\(\beta\lambda\)4863A. In the XQ-100 sample, both C iv and Mg ii can be located in all spectra, while H\(\beta\) is observable only in lower redshift targets with sufficient signal. In this section, we describe our approach towards modelling emission-lines, using a publicly available code (PyQSpecFit1; Lai, 2023) designed specifically for modelling QSO spectral lines.
Footnote 1: [https://github.com/samlahei/PyQSpecFit](https://github.com/samlahei/PyQSpecFit)
### Continuum Modelling
Although a continuum model is provided as part of the XQ-100 data products, we elect to use our own continuum model due to how sensitive the broad emission-line models are to the local continuum. Our model follows similar studies (e.g., Wang et al., 2009) in that the underlying continuum is built from two components: a power-law continuum and Fe ii template, simultaneously fit to selected pseudo-continuum-modelled wavelength regions. We briefly comment on the Balmer continuum later in this section and quantify its effect in Appendix A. All components of the pseudo-continuum are used in measuring the Mg ii and H\(\beta\) emission-lines, but the flux contribution from the Fe ii continuum is less significant in the wavelength region of C iv. Thus, we only use a power-law to constrain the continuum in the vicinity of the C iv line.
The power-law continuum is defined by the following function normalised at rest-frame 3000 A,
\[F_{\rm pl}(\lambda;F_{\rm pl,0},\gamma)=F_{\rm pl,0}\left(\frac{\lambda}{3000\,\rm{\AA}}\right)^{\gamma}\,, \tag{1}\]
where \(F_{\rm pl,0}\) and \(\gamma\) are the normalization and power-law slope, respectively.
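In code, the normalised power-law continuum of Eq. (1) is a one-liner; a minimal sketch is shown below, and the monochromatic luminosities used later are evaluated on this model.

```python
import numpy as np

def powerlaw_continuum(wave, f_pl0, gamma, wave0=3000.0):
    """Power-law continuum normalised at rest-frame 3000 A (Eq. 1); wave in Angstroms."""
    return f_pl0 * (np.asarray(wave) / wave0) ** gamma
```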
The Fe ii continuum is of considerable importance to the Mg ii and H\(\beta\) models, as both lines are sensitive to the features of the Fe ii contribution underneath the emission-line. To account for the Fe ii emission where it is strong, we convolve the Fe ii model with a Gaussian broadening kernel \(G(\lambda,\sigma)\) of standard deviation \(\sigma\) in order to match the variety of features observed in our spectra. The Gaussian broadening follows,
\[F_{\rm Fe}(\lambda;\zeta_{0},\delta,\sigma)=\zeta_{0}\,F_{\rm template}\big(\lambda(1+\delta)\big)\otimes G(\lambda,\sigma)\,, \tag{2}\]
where the free parameters of the Fe ii contribution include the flux scaling factor denoted by \(\zeta_{0}\), the standard deviation of the broadening kernel \(\sigma\), and a small multiplicative wavelength shift \(\delta\). Furthermore, we consider a variety of empirical and semi-empirical Fe ii emission templates: Vestergaard & Wilkes (2001, VW01) and Mejia-Restrepo et al. (2016, M16) cover the rest-frame UV while Boroson & Green (1992, BG92) and Park et al. (2022, P22) cover the rest-frame optical. Bruhweiler & Verner (2008, BV08) and Tsuzuki et al. (2006, T06) cover both regions. We use a VW01 template spliced with the Salviander et al. (2007) template, which extrapolates underneath the Mg ii line from rest-frame 2200\(-\)3090A. Furthermore, the wavelength range 3090\(-\)3500A is augmented with the T06 template (Shen & Liu, 2012). This version of VW01 is also used in other QSO modelling codes such as PyQSOFit (Guo et al., 2018). We find the typical value of the Gaussian broadening dispersion \(\sigma\) to be 1600 km s\({}^{-1}\) in the rest-frame UV and 1300 km s\({}^{-1}\) in the rest-frame optical.
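The scaling, shifting, and broadening of Eq. (2) can be sketched as follows, assuming the spectrum is sampled on an approximately logarithmically uniform wavelength grid so that a constant-velocity Gaussian kernel corresponds to a fixed width in pixels; this is a simplified stand-in for the template handling in PyQSpecFit rather than its actual implementation.

```python
import numpy as np

def feii_continuum(wave, zeta0, delta, sigma_kms, tmpl_wave, tmpl_flux):
    """Scaled, shifted and Gaussian-broadened Fe II pseudo-continuum (Eq. 2)."""
    c_kms = 299792.458
    shifted = np.interp(wave * (1.0 + delta), tmpl_wave, tmpl_flux, left=0.0, right=0.0)
    dlog = np.median(np.diff(np.log(wave)))        # assumed near-constant in log-lambda
    sig_pix = (sigma_kms / c_kms) / dlog           # kernel width in pixels
    half = max(int(4 * sig_pix), 1)
    x = np.arange(-half, half + 1)
    kernel = np.exp(-0.5 * (x / sig_pix) ** 2)
    kernel /= kernel.sum()
    return zeta0 * np.convolve(shifted, kernel, mode="same")
```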
In this work, we are not concerned with the specific properties of the Fe ii emission, and thus we will not discuss the physical interpretation of the dispersion and velocity shifts of the Fe ii emission. The Fe ii pseudo-continuum is used solely as an approximation to remove iron emission when significant in the spectra. Figure 1 shows the spectrum of J110352+100403 close to the Mg ii region with the four UV Fe ii templates models overplotted along with the resulting emission-line models in a separate panel. Properties of the Mg ii line model depend sensitively on the assumed Fe ii model. In the extreme case of SDSS J093556.91+002255.6, differences in the Fe ii model alone are responsible for shifting the measured full-width half-maximum (FWHM) in a range from 3700 to 5200 km s\({}^{-1}\). Similarly, the H\(\beta\) line model is also sensitive to the optical Fe ii model. In Section 3.2, we discuss how the differences in measured line properties resulting from various Fe ii models inform the measurement uncertainty.
The full pseudo-continuum is the sum of all contributing components, which is uniquely defined by 5 free parameters. All components of the continuum are fit simultaneously to selected pseudo-continuum windows close to the emission-line of interest. For each emission feature, the local underlying continuum is fit separately. We do not fit a "global" continuum across the spectral range from C iv to H\(\beta\) in order to avoid biases due to deviations from a single power-law model, such as dust reddening (e.g., Richards et al., 2003) and host galaxy contributions (e.g., Vanden Berk et al., 2001). Outside irregular circumstances, such as a discontinuity in the flux-calibrated spectrum between the VIS and NIR arms, the pseudo-continuum modelling windows are selected from: 1275-1290A, 1348-1353A, 1445-1455A, 1687-1697A, 1973-1983A, 2200-2750A, 2820-3300A, 3500-3800A, 4200-4230A, and 4435-4700A with occasional \(\pm 30\)A deviations to suit specific features of the spectra, avoid telluric regions, or to accommodate the properties of particular emission or absorption features.
The Balmer continuum is also often included in the pseudo-continuum model when modelling QSO spectra, but it is not always well-constrained and is degenerate with the power-law and Fe ii continuum (e.g., Wang et al., 2009; Shen & Liu, 2012), such that the Balmer contribution is not considered for the underlying continuum in some other QSO studies (e.g., Shen et al., 2011). For the XQ-100 sample, we find that the Balmer continuum properties are not well-constrained and the broad emission-line decomposition is not strongly affected by either the inclusion or exclusion of the Balmer continuum. Therefore, in the following sections, we present our results without the Balmer continuum, but we quantify the effect of its inclusion in Appendix A.
Figure 1: Example model of the Mg ii emission feature of J110352+100403 with the combined pseudo-continuum from the power-law, Balmer, and Fe ii components. The line models differ from one another by the applied Fe ii template as indicated in the legend. The resulting continuum for each template is plotted in solid lines in the top panel and the continuum-subtracted line profile model is plotted in the bottom panel. In this example, the Mg ii FWHM between models with different Fe ii templates ranges from 4000 to 4400 km s\({}^{-1}\).
### Line Modelling
Broad emission-line profiles exhibit a wide range of properties and complexities from asymmetries to multiple peaks and plateaus, making single Gaussian models unsuitable. Instead, many QSO spectral modelling studies use a multiple Gaussian approach to fit each emission feature (e.g., Greene and Ho, 2005; Shen et al., 2011; Rakshit et al., 2020). Following these studies, we fit each broad emission-line with multiple (\(N_{\rm{gaussian}}\)) symmetric Gaussian functions having \(3\times N_{\rm{gaussian}}\) free parameters, to obtain smooth realisations of the observed line profile. Similar to Rakshit et al. (2020), we choose \(N_{\rm{gaussian}}\) = (3, 4, 4) for C iv, Mg ii, and H\(\beta\), respectively. The 4 Gaussian components of the Mg ii and H\(\beta\) lines are divided into 3 broad and 1 narrow component. For high luminosity QSOs, such as the targets in the XQ-100 sample, [O iii] lines with FWHMs exceeding 1000 km s\({}^{-1}\) are more common (e.g. Shen and Liu, 2012; Coatman et al., 2019), so we adopt a FWHM upper threshold of 1200 km s\({}^{-1}\) for the H\(\beta\) narrow lines. The Mg ii narrow lines are often ambiguous and poorly constrained. Thus, we further constrain the upper FWHM threshold for Mg ii to 1000 km s\({}^{-1}\), ensuring that the modelled components are indeed narrow. To measure the broad-line properties of each line, we subtract the narrow-line contribution from the total line profile, using only the 3 broad components to model the emission-line.
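As an illustration of this decomposition, a multi-Gaussian broad-line profile and a numerical FWHM measurement might be sketched as below; velocities are relative to the line centre, and the single-contiguous-peak assumption in the FWHM measurement is a simplification of what a full fitting code would do.

```python
import numpy as np

def multi_gaussian(v, params):
    """Sum of symmetric Gaussians; params = [amplitude, centre, width] per component."""
    profile = np.zeros_like(v, dtype=float)
    for amp, mu, sd in np.reshape(params, (-1, 3)):
        profile += amp * np.exp(-0.5 * ((v - mu) / sd) ** 2)
    return profile

def profile_fwhm(v, profile):
    """FWHM of a reconstructed line profile, assuming a single region above half maximum."""
    above = v[profile >= 0.5 * profile.max()]
    return above.max() - above.min()
```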
The adjoining [O iii] lines can present a challenge for modelling the redder wing of the H\(\beta\) line profile, but they are also useful to constrain the width of the narrow H\(\beta\) component. However, these adjoining lines are infrequently detected in the XQ-100 spectra with J133254+005250 presented in Figure 2 as one of the only two cases with detectable [O iii], alongside J101818+054822. Without the presence of the [O iii] lines, the decomposition of the total H\(\beta\) line profile into its broad and narrow emission may not be unique.
While all broad emission-line profiles are affected by embedded narrow absorption features, the effect that they have on the resulting model is more significant for C iv. In order to obtain more appropriate models of the intrinsic C iv broad emission profile, we apply an additional 2500 km s\({}^{-1}\) box-width sigma-clip mask with a \(3\sigma\)-threshold. Every contiguous masked region has a masked buffer window of 3 pixels, equivalent to 150 km s\({}^{-1}\), applied on each end. Most broad C iv emission profiles can be fit automatically, but some QSOs contain features which are visually inspected and masked.
If not accounted for, neighbouring lines can influence the measurement of the intrinsic C iv broad-line properties. In order to disentangle the C iv line model from its neighbours, we simultaneously model the broad Si iv\(\lambda\)1398A, O iv\(\lambda\)1402A, N iv]\(\lambda\)1486A, He ii\(\lambda\)1640A, and [O iii]\(\lambda\)1663A lines, using one broad component or one broad and one narrow component to constrain each neighbouring line. We find that the C iv line properties are not sensitive to whether narrow features, if present, in adjacent lines are modelled as a separate component. The neighbouring lines are only modelled in order to account for their influence on the C iv line properties. As such, we do not tabulate properties of the neighbouring lines in our catalogue.
For both Mg ii and H\(\beta\), the final emission-line properties, tabulated in Table 3, are determined as the average of properties measured from the resulting line models, created by applying in turn each of the four Fe ii templates. We consider two primary sources of uncertainty in the measurement of emission-line and continuum properties. One source is the uncertainty from the various Fe ii emission templates and another is the measurement uncertainty. We estimate the uncertainty from the Fe ii template by independently modelling each spectrum with four models; VW01, T06, BV08, and M16 at UV wavelengths and BG92, T06, P22, and BV08 at optical wavelengths. Then we measure the line properties for a given model and quote the standard deviation. We also estimate the measurement uncertainty using a Monte Carlo approach by creating 50 synthetic spectra for individual target (e.g., Shen et al., 2011), where the flux at each pixel is resampled from a symmetric distribution with a standard deviation equivalent to the pixel flux error. We assume that the noise in the spectrum follows a normal distribution. After modelling all of the synthetic spectra independently and varying the Fe ii template, the final measurement uncertainty is determined from these two sources added in quadrature. Each of the two sources contributes roughly equivalent uncertainty to emission-line FWHM, but the choice of Fe ii emission template dominates the variance in the measured continuum luminosity.
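The Monte Carlo part of this error budget amounts to re-measuring the line properties on noise-resampled copies of the spectrum; a minimal sketch, assuming Gaussian per-pixel noise as stated above and with `measure_fn` standing in for a full re-fit of the line model, is:

```python
import numpy as np

def monte_carlo_scatter(wave, flux, flux_err, measure_fn, n_trials=50, seed=0):
    """Standard deviation of a line-property measurement over synthetic spectra."""
    rng = np.random.default_rng(seed)
    values = []
    for _ in range(n_trials):
        synthetic = flux + rng.normal(0.0, flux_err)    # perturb each pixel by its error
        values.append(measure_fn(wave, synthetic))      # e.g. FWHM from a re-fitted model
    return np.std(values)
```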
For the C iv line properties, we use only the Monte Carlo uncertainty estimated by modelling 50 synthetic spectra. In this case, we do not consider the uncertainty from modelling different Fe ii templates, because the Fe ii emission is weak in this wavelength region and it is not used to define the continuum. The different realisations of the resampled spectra and their resulting line models help to capture degeneracies in the way that flux can be distributed between C iv and its neighbouring lines, propagating that degeneracy into the uncertainty of the line properties.
We provide a data quality flag, indicated by "Quality_Flag", for each emission-line model which is used to identify where the median SNR per 50 km s\({}^{-1}\) resolution element of the data within the emission-line modeling region is below 20. We manually flag additional targets with the Mg ii data quality flag to indicate poor quality fits or that significant residual telluric features are evident in the spectrum. As there are no targets below the SNR threshold for the data quality flag for C iv, we instead use the flag to indicate where we have manually adjusted the fit, by choosing more appropriate continuum windows or manually masking absorption features. Additionally, we flag targets, using "Hbeta_Truncation_Flag", for which the red wings of the H\(\beta\) profile is clearly truncated by the edge of X-shooter's NIR arm wavelength coverage, which would reduce the reliability of the line model. However, we do not exclude flagged targets from further analysis and contextualisation of the XQ-100 sample in Section 5.
Figure 2 shows samples of emission-line models of SDSS J092041.76+072544.0 and J133254+005250, which are both lower redshift for our sample and contain the H\(\beta\) emission feature. Models of C iv, Mg ii, and H\(\beta\) are presented. In this figure, both Mg ii and H\(\beta\) models use an underlying T06 Fe ii template. Examples of all line models of C iv, Mg ii, and H\(\beta\), showing the emission-line models in greater detail, are provided as online supplementary material, where models of Mg ii and H\(\beta\) are separated by Fe ii template. Due to the lack of a continuum redward of H\(\beta\), we can only use the blue side to constrain the continuum.
## 4 Single-epoch virial mass estimate
We measure the black hole mass from single-epoch spectroscopic data using the virial estimate, which is a method routinely applied to QSO spectra (e.g., Vestergaard, 2002; McLure and Jarvis, 2002; McLure and Dunlop, 2004; Greene and Ho, 2005; Vestergaard and Peterson, 2006). The model assumes that the motion of gas around the black hole is virialized and its dynamics are dominated by the central gravitational field. The velocity-broadened line profile measures the gas velocity and the nuclear continuum luminosity is used as a proxy for the radius of the BLR. The radius-luminosity (\(R\)-\(L\)) relationship is an empirical correlation derived from reverberation mapping experiments which tightly links the radius of the BLR to the continuum luminosity
(e.g., Kaspi et al., 2000, 2005; Bentz et al., 2006, 2013). Common emission-lines used to estimate gas velocity include H\(\beta\), C iv, and Mg ii, but the H\(\beta\) line is redshifted out of the X-shooter NIR coverage at \(z\gtrsim 4\). Instead, the Mg ii emission-line profile is found to generally be correlated with H\(\beta\) and can be used as its substitute in single-epoch virial black hole mass estimates (e.g., Salviander et al., 2007; Shen et al., 2008; Wang et al., 2009; Shen & Liu, 2012). Additionally, there are indications that the Mg ii-based estimator is more reliable for QSOs with large (> 4000 km s\({}^{-1}\)) H\(\beta\) FWHM (see Marziani et al., 2013). The following equation describes the single-epoch virial mass estimate,
\[\left(\frac{M_{\rm{BH,vir}}}{M_{\odot}}\right)=10^{a}\left[\frac{\lambda L_{\lambda}}{10^{44}\,{\rm erg\,s^{-1}}}\right]^{b}\,\left[\frac{\rm{FWHM_{line}}}{1000\,{\rm km\,s^{-1}}}\right]^{2}\,, \tag{3}\]
where \(\lambda L_{\lambda}\) is the monochromatic luminosity of the QSO continuum, which we measure from the power-law continuum model, and
Figure 2: Example models of the C iv, Mg ii, and H\(\beta\) emission-lines from SDSS J092041.76+072544.0 and J133254+005250. The spectroscopic data are plotted in black, its error spectrum in grey, and the power-law continuum is in orange. The error spectrum is shifted vertically such that the bottom of the panel represents zero flux error. The combined pseudo-continuum which includes a power-law and the Tsuzuki et al. (2006) Fe ii template is plotted in blue. The red lines indicate the total line profile and the dashed lines are the multiple Gaussian decomposition. We show the narrow-line model with the green dotted line and in the C iv panel, green highlights the line models of neighbouring lines. The residuals are shown normalized by the flux error, \(\sigma\), in order to represent the quantity minimised by the line-modelling algorithm, (data-model)/error. We represent \(\pm 3\sigma\) in the residual panel with the dashed lines and we also show the result of the automated sigma-clipping in the C iv residual, where masked features remain in blue rather than red or green. Telluric absorption windows are denoted by the solid grey bar in the residual panel. These figures are available for all targets in the online supplementary material.
FWHM\({}_{\rm line}\) is the measured line full-width half-maximum of the total broad line profile. We opt to use the FWHM for the virial mass estimate instead of the line dispersion, i.e., the second moment of the line profile. Although the dispersion is well-defined for arbitrary line profiles and may have advantages over the FWHM (e.g., Fromerth and Melia, 2000; Peterson et al., 2004; Collin et al., 2006; Rafiee and Hall, 2011; Dalla Bonta et al., 2020), in practice, the line dispersion is sensitive to the wings of the line profile, which are naturally low in flux and can often be difficult to constrain independently from noise due to the accretion disk and Fe ii continuum. We determine all line properties, including the FWHM of Mg ii and H\(\beta\), as the average of the measurements obtained from spectral decomposition using each of the four Fe ii templates. We further quantify the deviation of the measured black hole mass using each template in Appendix B.
The exponents (a, b) in Equation 3 depend on the choice of line and luminosity and are empirically calibrated by reverberation mapping experiments. For the Mg ii line and a monochromatic luminosity at rest-frame 3000 A, (a, b) are calibrated to the values (6.86, 0.5) in Vestergaard and Osmer (2009) and (6.74, 0.62) in Shen et al. (2011). On average, the differences between these different calibrations are 0.1 dex, but virial mass estimators show an intrinsic scatter of \(\sim 0.3\) dex around their reverberation mapping counterparts (Dalla Bonta et al., 2020), while the reverberation-based estimates exhibit an intrinsic scatter of \(\sim 0.4\) dex around the \(M_{\rm BH}-\sigma_{*}\) relation (Bennert et al., 2021), meaning the virial mass estimates could have errors as large as \(\sim 0.5\) dex. We adopt 0.5 dex as our single-epoch virial black hole mass uncertainty. In this study, we use the Mg ii-based calibration from Shen et al. (2011), which is anchored to a high-luminosity subset of local reverberation mapping determinations from H\(\beta\), making it better suited to the XQ-100 sources. Other broad emission-lines present in our spectra can be used to obtain virial estimates of the black hole mass as well. Compared with the Mg ii line, the C iv line is more likely to be affected by non-virial motions, such as radiatively driven outflows (e.g., Proga et al., 2000; Saturni et al., 2018), making it potentially a biased black hole mass estimator (e.g., Baskin and Laor, 2005; Sulentic et al., 2007; Shen et al., 2008). We provide a measure of the C iv blueshift, a signature of outflowing emission (Richards et al., 2011), in order to quantify how much the C iv-based black hole masses may be biased by non-virial components. The velocity shifts of C iv are measured relative to the systemic redshifts from Lopez et al. (2016).
Table 1 presents the virial relations and specific calibrations used in this study for determining the black hole mass using C iv, Mg ii, and H\(\beta\) emission-lines. Using the 3000 A luminosity, we also estimate the bolometric luminosity by adopting a fixed bolometric correction factor of 5.15, which can lead to errors as large as 50%, or \(\sim\)0.3 dex, for individual QSOs (Richards et al., 2006).
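Combining Eq. (3) with the calibrations of Table 1 and the fixed bolometric correction gives a compact estimator; the short sketch below is only a convenience wrapper around those published relations, and the example numbers in the final comment are purely illustrative.

```python
import numpy as np

# (a, b) calibrations from Table 1
CALIBRATIONS = {"CIV": (6.66, 0.53), "MgII": (6.74, 0.62), "Hbeta": (6.91, 0.50)}

def virial_log_mbh(line, fwhm_kms, log_lum):
    """log10(M_BH / M_sun) from Eq. (3); log_lum is log10(lambda L_lambda / erg s^-1)
    at 1450, 3000, or 5100 A for C IV, Mg II, or H-beta, respectively."""
    a, b = CALIBRATIONS[line]
    return a + b * (log_lum - 44.0) + 2.0 * np.log10(fwhm_kms / 1000.0)

def log_lbol(log_l3000):
    """Bolometric luminosity from the 3000 A continuum with a fixed correction of 5.15."""
    return log_l3000 + np.log10(5.15)

# e.g. virial_log_mbh("MgII", 4000.0, 46.5) gives roughly 9.5
```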
The typical final measurement uncertainties are \(\sim\)240 km s\({}^{-1}\) for the Mg ii FWHM and 0.01 dex for the 3000 A monochromatic luminosity, resulting in an average of 0.06 dex uncertainty in the Mg ii black hole mass estimate. Similarly for H\(\beta\), the average uncertainty is \(\sim\)640 km s\({}^{-1}\) for the FWHM and 0.02 dex for the 5100 A luminosity, resulting in 0.12 dex mean uncertainty in \(M_{\rm BH}\). Therefore, the measurement uncertainty for both estimates are well below the errors of the virial mass estimator. Without the additional uncertainties introduced by the multiple Fe ii templates, the mean final measurement uncertainty in C iv FWHM is \(\sim\)130 km s\({}^{-1}\) with negligible uncertainty in the 1450A luminosity. The typical black hole mass uncertainty from the C iv virial estimator is thus 0.02 dex. However, line asymmetries and contribution from QSO outflows or disk winds should imply a greater uncertainty of C iv-based black hole masses.
## 5 Results and Discussion
For each of the 3 broad emission-lines (C iv, Mg ii, and H\(\beta\)) used for virial black hole mass estimates in this study, we measure 6 properties of the broad line profile, described in Table 2. The FWHM of each line is used in the virial mass estimate. The line dispersion, Sigma, is the second moment of the line profile. We also measure the Blueshift, equivalent width (EW), and wavelength of the line profile peak (pWavelength). The blueshift is measured from the median wavelength bisecting the total flux of the line profile, and can be a useful indicator of QSO orientation, particularly with the C iv line (e.g., Richards et al., 2002; Yong et al., 2020). We measure the integrated line luminosity (iLuminosity) from the reconstructed broad emission-line profile. Ratios of the integrated luminosity may be used for chemical abundance estimates (e.g., Hamann and Ferland, 1999; Hamann et al., 2002; Nagao et al., 2006), while the EWs may be used in studies of the Baldwin effect (e.g., Baldwin, 1977; Patino Alvarez et al., 2016). We present a sample of measured emission-line properties for 5 selected QSOs in Table 3, while the full table is available as online supplementary material.
### QSO Variability
Ever since the identification of the first QSOs, it has been recognized that QSOs are intrinsically variable (Matthews and Sandage, 1963). Variations of QSO brightness occur on a large range of timescales from hours to years, where short timescales are typically associated with higher energy X-ray flux and longer timescales to the disk emission (Edelson et al., 2015; Lira et al., 2015). Models of QSO variability focus on its stochastic origin, comparing the ensemble variability structure function (SF) to damped random walk (DRW) models (e.g., Kelly et al., 2009; MacLeod et al., 2010; Kozlowski, 2016; Suberlak et al., 2021).
Many studies have shown that the amplitude of QSO variability is anti-correlated with the QSO luminosity, with little apparent dependence on the redshift (e.g., Vanden Berk et al., 2004; MacLeod et al., 2010; Kozlowski, 2016; Caplar et al., 2017). For a high-redshift and high-luminosity sample, such as XQ-100, the long-term asymptotic variability amplitude (SF\({}_{\infty}\)) is measured to be low, from 0.1 mag (e.g., MacLeod et al., 2010; Suberlak et al., 2021) to 0.25 mag (e.g., Kozlowski, 2016), where these studies made use of SDSS Stripe 82
| Emission-line | Luminosity | a | b | Ref |
| --- | --- | --- | --- | --- |
| C iv | 1450 Å | 6.66 | 0.53 | 1 |
| Mg ii | 3000 Å | 6.74 | 0.62 | 2 |
| H\(\beta\) | 5100 Å | 6.91 | 0.50 | 1 |

Table 1: Virial relations used in this study (see Equation 3). References: (1) Vestergaard and Peterson (2006); (2) Shen et al. (2011).
| Suffix | Description | Units |
| --- | --- | --- |
| FWHM | Full-width half-maximum of profile | km s\({}^{-1}\) |
| Sigma | Second moment of profile | km s\({}^{-1}\) |
| Blueshift | Defined by the flux-bisecting wavelength | km s\({}^{-1}\) |
| EW | Equivalent width in rest-frame | Å |
| pWavelength | Peak wavelength | Å |
| iLuminosity | Integrated log luminosity | erg s\({}^{-1}\) |

Table 2: Description of measured properties for each broad emission-line.
(Jiang et al., 2014), an equatorial region imaged repeatedly during 2005, 2006, and 2007.
In this study, our sample of XQ-100 QSOs is flux calibrated to photometry observed at a separate epoch, thus our results are susceptible to QSO variability. In order to constrain variability in the XQ-100 sample, we crossmatch all sources with the Pan-STARRS DR2 detections table (Flewelling et al., 2020), removing all cases for which the number of multi-epoch \(i\)-band detections is less than 5. This results in a nearly complete sample of 82 QSOs, each with up to 33 independent \(i\)-band detections across a period of 3-5 years from 2009-2015. Models of QSO variability characteristic timescales typically find a best-fit parameter of a few hundred days for supermassive black holes in the rest-frame (e.g., MacLeod et al., 2010; Burke et al., 2021; Suberlak et al., 2021), so the Pan-STARRS detections cover little more than one rest-frame characteristic timescale. For this analysis, we estimate the observed variability amplitude using the structure function, SF\({}_{\rm obs}(\Delta t)\) = rms [m(t) - m(t + \(\Delta t\))], where rms is the root-mean-square deviation. We calculate SF\({}_{\rm obs}(\Delta t_{\rm obs})\) from the ensemble of 82 QSOs with multi-epoch photometric measurements by considering the distribution of \(\Delta m\), the measured magnitude difference, for each pair of measurements separated by a time-lag, \(\Delta t_{\rm obs}\sim\) 2, 3, or 4 years in the observed frame.
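For a single object, the observed structure function defined above can be computed by pooling magnitude differences over all epoch pairs whose separation falls near the chosen lag; pooling the differences over all 82 QSOs before taking the rms gives the ensemble value. A minimal sketch, with an arbitrary lag tolerance, is:

```python
import numpy as np

def structure_function(times, mags, lag, tol=0.25):
    """rms magnitude difference over epoch pairs separated by approximately `lag` (same units as times)."""
    dm = []
    for i in range(len(times)):
        for j in range(i + 1, len(times)):
            if abs(abs(times[j] - times[i]) - lag) < tol:
                dm.append(mags[j] - mags[i])
    return np.sqrt(np.mean(np.square(dm))) if dm else np.nan
```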
Using Pan-STARRS DR2 \(i\)-band detections, we find the ensemble variability amplitude of the XQ-100 sample to be SF\({}_{\rm obs}(\Delta t_{\rm obs})\) = (0.125, 0.132, 0.150) mag for observed frame \(\Delta t_{\rm obs}\sim\) (2, 3, 4) years, which is consistent with a luminous QSO asymptotic long-term variability of SF\({}_{\infty}<\) 0.20 (MacLeod et al., 2010; Kozlowski, 2016). The photometric calibrations we have used span an even wider timeframe relative to the spectroscopic observations taken between 2012-2014. Therefore, we assume the asymptotic variability as our uncertainty in the overall flux normalisation for each spectrum. A 0.20 mag variability amplitude between the X-shooter and photometric observation would manifest as \(<\) 0.1 dex uncertainty in the measured luminosities. As a consequence, we expect a black hole mass uncertainty up to \(\sim\)0.05 dex may be present from variability.
### XQ-100 Sample Properties
We now examine the black hole mass estimates from the C iv, Mg ii, and H\(\beta\)-based virial estimators and contextualise the results. For both the H\(\beta\) and Mg ii lines, we subtract the narrow component to obtain the pure broad emission profile (e.g., Kovacevic-Dojcinovic et al., 2017) and measure the broad-line properties. Figure 3 compares the three mass estimates to each other. As we do not exclude any flagged targets from further analysis and contextualisation, the C iv and Mg ii comparison contains all 100 QSOs in the sample and the comparisons to H\(\beta\) are limited to 21 measurements. All three panels show data dispersed around the 1:1 relation denoted by the black dashed line, where the total sample variance is smaller than the adopted 0.5 dex uncertainty of the virial mass relation, which is shown for scale on the top-left of each plot. The mean differences and the standard deviation between black hole mass estimates are \(\log\left({\rm M_{\rm Mg\,II}}/{\rm M_{\rm C\textsc{IV}}}\right)=-0.05\pm 0.34\), \(\log\left({\rm M_{\rm H\beta}}/{\rm M_{\rm C\textsc{IV}}}\right)=-0.12\pm 0.36\), and \(\log\left({\rm M_{\rm Mg\,II}}/{\rm M_{\rm H\beta}}\right)=0.06\pm 0.22\). In the online supplementary table, we provide an averaged black hole mass estimate from all measured lines and determine a "Mbh_Flag" for when the averaged masses differ from the Mg ii-based masses by more than 0.3 dex. Throughout the XQ-100 sample, 8% of QSOs are flagged in this way, and only one (SDSSJ1042+1957) has a H\(\beta\) virial mass estimate to shed light on the discrepancy between C iv and Mg ii-based masses. For SDSSJ1042+1957, the H\(\beta\)-based mass estimate is much more consistent with C iv than Mg ii. The reason may be that the emission-lines are relatively narrow (FWHM\(-\)2000 km s\({}^{-1}\)) compared to the rest of the sample, and only the Mg ii line models consistently contain a narrow component.
Although the mass measurement for the XQ-100 sample relies on an extrapolation of the well-determined and lower luminosity H\(\beta\) reverberation mapping \(R\)-\(L\) relation (e.g., Bentz et al., 2013), we find all three virial estimators to remain consistent with each other within the measurement uncertainties in the high luminosity regime. This shows that the relative physical geometry of the three line-emitting regions does not change significantly with luminosity. Additionally, there are minimal systematic differences between our models to individual emission-lines and this increases our confidence in the resulting mass estimate.
In the case of outliers such as SDSSJ1202-0054, where M\({}_{\rm H\beta}\sim\) 9.8 and M\({}_{\rm C\textsc{IV}}\sim\) 9.0, or the inverse scenario for J1320299-052335, where M\({}_{\rm H\beta}\sim\) 9.2 and M\({}_{\rm C\textsc{IV}}\sim\) 9.8, their H\(\beta\) profiles are truncated and broad components are not well constrained. We present SDSSJ1202-0054 in additional detail in Appendix B. Other outliers, such as SDSS J074711.15+273903.3 which exhibits a \(>\) 1 dex mass difference between different lines, are characterised by relatively poor data quality in the wavelength regions surrounding the Mg ii line, resulting in weaker constraints on the Fe ii continuum model and a narrower emission-line FWHM. Residual telluric features from an insufficient telluric correction can corrupt the continuum model. Additionally, for targets with 3.8 \(\leq z\leq\) 4.2, the Mg ii line overlaps with a wide H\({}_{2}\)O telluric absorption band at 1.4 \(\mu\)m which deteriorates the quality of its detection.
We compare the distribution of black hole masses and bolometric luminosities in XQ-100 to the SDSS DR7 QSO catalogue from Shen et al. (2011) in Figure 4, using mean masses from at least two virial mass estimates to represent the XQ-100 sample. The QSOs in the SDSS DR7 catalogue cover \(0.06<z<\) 5.47 in redshift and their black hole masses are primarily based on the Mg ii emission-line with the same virial mass calibration we have used, but \(\sim\)40% of the sample utilise either C iv or H\(\beta\) with calibrations from Vestergaard & Peterson (2006). The median black hole mass and bolometric luminosity for the SDSS DR7 QSO catalogue are \(\log\left({\rm M_{\rm BH}}/{\rm M_{\odot}}\right)=9.0^{+0.5}_{-0.6}\) and \(\log\left({\rm L_{bol}}/{\rm erg\,s^{-1}}\right)=46.4^{+0.5}_{-0.7}\), where the asymmetric dispersion is set by the 16th and 84th percentiles.
We also identify the sub-sample of the SDSS DR7 QSO catalogue consisting of 3127 QSOs within the \(3.5<z<4.5\) redshift range of the XQ-100 sample. Relative to the full catalogue of 104746 objects, the sub-sample has higher median black hole mass and luminosity with \(\log\left({\rm M_{\rm BH}}/{\rm M_{\odot}}\right)=9.3^{+0.5}_{-0.8}\) and \(\log\left({\rm L_{bol}}/{\rm erg\,s^{-1}}\right)=47.0^{+0.3}_{-0.3}\). The XQ-100 sample is more tightly distributed at the high-mass and high-luminosity tail of the redshift-selected SDSS DR7 QSO sample with \(\log\left({\rm M_{\rm BH}}/{\rm M_{\odot}}\right)=9.6^{+0.3}_{-0.4}\) and \(\log\left({\rm L_{bol}}/{\rm erg\,s^{-1}}\right)=47.5^{+0.2}_{-0.2}\). A sub-sample (27%) of the XQ-100 sample exhibits mildly super-Eddington accretion rates. We also plot J2157\(-\)3602, one of the most luminous known QSO (Onken et al., 2020), which is at a comparable redshift (\(z=4.692\)), with a black hole mass of \(\log\left({\rm M_{\rm BH}}/{\rm M_{\odot}}\right)=10.33\) and bolometric luminosity \(\log\left({\rm L_{bol}}/{\rm erg\,s^{-1}}\right)=48.4\), measured with the same approach used here (Lai et al., 2023). The full range of XQ-100 QSO properties is measured to span \(\log\left({\rm M_{\rm BH}}/{\rm M_{\odot}}\right)=8.6-10.3\) in black hole mass and \(\log\left({\rm L_{bol}}/{\rm erg\,s^{-1}}\right)=46.7-48.0\) in bolometric luminosity, where over 85% of the sample lies within \(\log({\rm M_{\rm BH}}/{\rm M_{\odot}})=9-10\) and \(\log\left({\rm L_{bol}}/{\rm erg\,s^{-1}}\right)=47-48\).
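For context, the Eddington luminosity of a black hole of mass \(\rm M_{BH}\) is \(L_{\rm Edd}\simeq 1.26\times 10^{38}\,(\rm M_{BH}/M_{\odot})\) erg s\({}^{-1}\), so the Eddington ratios quoted above follow directly from the tabulated masses and bolometric luminosities. The snippet below is an illustrative calculation, not part of this paper's analysis code.

```python
def eddington_ratio(log_mbh_msun, log_lbol_cgs):
    """Eddington ratio L_bol / L_Edd with L_Edd ~ 1.26e38 (M_BH/M_sun) erg/s."""
    log_ledd = 38.1 + log_mbh_msun  # log10(1.26e38) ~ 38.1
    return 10.0 ** (log_lbol_cgs - log_ledd)

# Median XQ-100 values quoted above:
print(eddington_ratio(9.6, 47.5))  # ~0.6, i.e. a substantial fraction of Eddington
```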
We find that 55 of the targets have C iv measurements in the Shen et al. (2011) catalogue, which has also produced C iv-based virial mass estimates using the same Vestergaard & Peterson (2006)
calibration. The mean and standard deviation of differences between the mass estimates from our work and from Shen et al. (2011) is \(\log(\mathrm{M_{C\textsc{iv}}}/\mathrm{M_{Shen}})=-0.07\pm 0.18\) using the C iv-based mass from our sample and \(\log(\mathrm{M_{avg}}/\mathrm{M_{Shen}})=-0.12\pm 0.24\) using the mean mass, which is a small systematic adjustment towards lower masses on average. We find no significant correlations between these mass differences and other measurable line properties.
## 6 Summary and Conclusion
Infrared echelle spectroscopic observations of high-redshift QSOs provide an opportunity to investigate their optical and ultraviolet atomic transitions. The XQ-100 legacy survey provides a high-quality sample of 100 QSOs in the redshift range of \(z=3.5-4.5\) with high SNR, wide spectroscopic coverage between its three observation arms, and moderate resolving power.
In this study, we examine rest-frame UV and optical broad-emission-lines from all 100 QSOs in the XQ-100 legacy survey. We measure properties of the C iv, Mg ii, and H\(\beta\) emission-lines as well as the QSO continuum to estimate QSO luminosities and black hole masses through virial relations. The main results of this study are as follows:
* We measure the C iv and Mg ii line for all 100 QSOs and the H\(\beta\) line for 21 QSOs, using multiple templates to estimate the underlying Fe ii emission. The virial mass estimate is based on the measured FWHM of all three broad emission-lines and the continuum luminosity measured near each respective emission-line at 1450, 3000, and 5100 A. We provide an averaged black hole mass estimate from all measured emission-lines for each QSO in the online supplementary table2 and determine the black hole masses of the XQ-100 sample to be \(\log\) (\(\mathrm{M_{BH}}/\mathrm{M_{\odot}}\)) = \(8.6-10.3\). A comparison of mass measurements between the Mg ii virial mass estimate and the C iv and H\(\beta\) virial estimates show a mean difference and standard deviation of \(-0.05\pm 0.34\) dex and \(0.06\pm 0.22\) dex, respectively, which are both well below the 0.5 dex uncertainty of the virial estimate. There is a general consistency between the mass estimates derived from the C iv, Mg ii, and H\(\beta\) broad emission lines. Using a fixed 5.15 bolometric correction factor applied to the 3000 A continuum luminosity, we estimate the bolometric luminosity range of the XQ-100 sample to be \(\log\) (\(\mathrm{L_{bol}}/\mathrm{erg\,s^{-1}}\)) = \(46.7-48.0\).
Footnote 2: also available at [https://github.com/samlaihei/XQ-100](https://github.com/samlaihei/XQ-100)
* Compared to the SDSS DR7 QSO catalogue, QSOs in the XQ-100 legacy survey occupy the high-mass and high-luminosity tail of the distribution. A sizable sub-sample consisting of 27% of the XQ-100 QSOs are accreting at mildly super-Eddington rates.
* For each broad emission-line from C iv, Mg ii, and H\(\beta\), we measure 6 properties from the broad line profile and release the full set of measurements as online supplementary material. The measured properties of each line include the full-width half maximum (FWHM), line dispersion, blueshift, equivalent width (EW), wavelength of the peak line profile, and integrated luminosity. We also release example figures of all line models in the sample as online material2.
Characterising basic properties of the XQ-100 QSOs enables a variety of follow-up research in QSO astrophysics, from chemical enrichment history using emission-line diagnostics to black hole orientation and QSO outflows. As a sample of some of the most luminous QSOs in the redshift range \(3.5<z<4.5\), the XQ-100 targets are among the most massive, rapidly accreting black holes in the early universe and likely harbour the most massive and active host galaxies as well. These targets can potentially be used to further investigate the relationship between black holes and their host galaxies in the high-redshift universe.
## Acknowledgements
We thank the anonymous referee for their constructive comments and suggestions which have improved this manuscript. We also thank the authors of Vestergaard & Wilkes (2001), Tsuzuki et al. (2006), Bruhweiler & Verner (2008), Mejia-Restrepo et al. (2016), Boroson
Figure 3: Comparison of virial black hole masses based on Mg ii, C iv, and H\(\beta\) relations in Table 1. The blue shaded contours represent the two-dimensional continuous probability density distribution calculated with a kernel density estimator (Waskom, 2021). Each subsequent contour level marks density iso-proportions increasing by an additional 10% up to 90% enclosed. The red error bars plotted on the top left of each plot show the extent of the 0.5 dex uncertainty, which is a conservative estimate of the uncertainty inherent in the virial mass estimation method. Comparisons between all three virial mass estimates are scattered around the 1:1 relation, indicated by the black dashed line. The mean and standard deviation listed in the top-left of each panel are based on the residual of the mass measure on the y-axis minus the mass measure on the x-axis.
& Green (1992), and Park et al. (2022) for producing and sharing their Fe ii emission templates.
S.L. is grateful to the Research School of Astronomy & Astrophysics at Australian National University for funding his Ph.D. studentship.
CAO was supported by the Australian Research Council (ARC) through Discovery Project DP190100252.
This paper is based on observations made with ESO Telescopes at the La Silla Paranal Observatory under programme ID 189.A-0424.
The national facility capability for SkyMapper has been funded through ARC L1EF grant LE130100104 from the Australian Research Council, awarded to the University of Sydney, the Australian National University, Swinburne University of Technology, the University of Queensland, the University of Western Australia, the University of Melbourne, Curtin University of Technology, Monash University and the Australian Astronomical Observatory. SkyMapper is owned and operated by The Australian National University's Research School of Astronomy and Astrophysics. The survey data were processed and provided by the SkyMapper Team at ANU. The SkyMapper node of the All-Sky Virtual Observatory (ASVO) is hosted at the National Computational Infrastructure (NCI). Development and support of the SkyMapper node of the ASVO has been funded in part by Astronomy Australia Limited (AAL) and the Australian Government through the Commonwealth's Education Investment Fund (EIF) and National Collaborative Research Infrastructure Strategy (NCRIS), particularly the National eResearch Collaboration Tools and Resources (NeCTAR) and the Australian National Data Service Projects (ANDS).
The Pan-STARRS1 Surveys (PS1) and the PS1 public science archive have been made possible through contributions by the Institute for Astronomy, the University of Hawaii, the Pan-STARRS Project Office, the Max-Planck Society and its participating institutes, the Max Planck Institute for Astronomy, Heidelberg and the Max Planck Institute for Extraterrestrial Physics, Garching, The Johns Hopkins University, Durham University, the University of Edinburgh, the Queen's University Belfast, the Harvard-Smithsonian
Figure 4: Distribution of XQ-100 black hole masses and luminosities compared to the SDSS DR7 QSO catalogue from Shen et al. (2011). The mean virial black hole mass measurements of XQ-100 are shown in purple and the SDSS DR7 data points are shown in blue. The contours delineate iso-proportions in the continuous probability distribution of the higher-redshift SDSS sub-sample calculated with a kernel density estimator (Waskom, 2021). Each contour encloses an additional 10% up to a 50% threshold. The black hole mass and bolometric luminosity histograms of the XQ-100 sample are normalised to the same area. Compared to the SDSS DR7 QSO catalogue, the XQ-100 sample occupies the high-mass and high-luminosity tail. The orange point is J2157\(-\)3602 (\(z=4.692\)), one of the most luminous known QSOs (Onken et al., 2020).
Center for Astrophysics, the Las Cumbres Observatory Global Telescope Network Incorporated, the National Central University of Taiwan, the Space Telescope Science Institute, the National Aeronautics and Space Administration under Grant No. NNX08AR22G issued through the Planetary Science Division of the NASA Science Mission Directorate, the National Science Foundation Grant No. AST-1238877, the University of Maryland, Eotvos Lorand University (ELTE), the Los Alamos National Laboratory, and the Gordon and Betty Moore Foundation.
The VISTA Hemisphere Survey data products served at Astro Data Lab are based on observations collected at the European Organisation for Astronomical Research in the Southern Hemisphere under ESO programme 179.A-2010, and/or data products created thereof.
This work is based in part on data obtained as part of the UKIRT Infrared Deep Sky Survey and the UKIRT Hemisphere Survey.
This publication has made use of data from the VIKING survey from VISTA at the ESO Paranal Observatory, programme ID 179.A-2004. Data processing has been contributed by the VISTA Data Flow System at CASU, Cambridge and WFAU, Edinburgh.
This publication makes use of data products from the Two Micron All Sky Survey, which is a joint project of the University of Massachusetts and the Infrared Processing and Analysis Center/California Institute of Technology, funded by the National Aeronautics and Space Administration and the National Science Foundation.
Funding for the SDSS and SDSS-II has been provided by the Alfred P. Sloan Foundation, the Participating Institutions, the National Science Foundation, the U.S. Department of Energy, the National Aeronautics and Space Administration, the Japanese Monbukagakusho,
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline & Units & HB89 0000-263 & PMN J0100-2708 & BRI 0241-0146 & J112634-012436 & J1401+0244 \\ \hline OBJECT & HB89 0000-263 & PMN J0100-2708 & BRI 0241-0146 & J112634-012436 & J1401+0244 \\ RA & & 00:03:22.79 & 01:00:12.47 & 02:44:01.83 & 11:263:44.2 & 14:01:46.52 \\ Dec & & -26:03:19.40 & -27:08:52.10 & -01:34:06.30 & -01:24:38.00 & 02:44:37.70 \\ redshift & & 4.125 & 3.546 & 4.055 & 3.765 & 4.408 \\ Source\_i & SkyMapper & SkyMapper & SkyMapper & SkyMapper & SkyMapper \\ image & mag & 17.075 \(\pm\) 0.006 & 18.928 \(\pm\) 0.026 & 18.099 \(\pm\) 0.028 & 19.038 \(\pm\) 0.071 & 18.395 \(\pm\) 0.014 \\ Source\_j & & VHS & VIKINGDR5 & VHS & UKIDSS & UKIDSS \\ Jmag & mag & 16.023 \(\pm\) 0.008 & 17.590 \(\pm\) 0.011 & 16.916 \(\pm\) 0.015 & 18.053 \(\pm\) 0.038 & 17.419 \(\pm\) 0.032 \\ CIV\_FWHM & km s\({}^{-1}\) & 5275 \(\pm\) 34 & 6103 \(\pm\) 228 & 8387 \(\pm\) 418 & 5746 \(\pm\) 64 & 6048 \(\pm\) 195 \\ CIV\_Sigma & km s\({}^{-1}\) & 3820 \(\pm\) 91 & 2888 \(\pm\) 367 & 3568 \(\pm\) 229 & 3662 \(\pm\) 134 & 3888 \(\pm\) 276 \\ CIV\_Blueshift & km s\({}^{-1}\) & 1206 \(\pm\) 45 & 1618 \(\pm\) 110 & 1833 \(\pm\) 228 & 2191 \(\pm\) 44 & 821 \(\pm\) 97 \\ CIV\_EW & Å & 27.370 \(\pm\) 0.270 & 23.930 \(\pm\) 1.060 & 23.310 \(\pm\) 1.100 & 23.270 \(\pm\) 0.490 & 39.040 \(\pm\) 0.450 \\ CIV\_pWavelength & Å & 1543.380 \(\pm\) 0.140 & 1540.830 \(\pm\) 0.510 & 1540.860 \(\pm\) 0.740 & 1539.290 \(\pm\) 0.670 & 1545.790 \(\pm\) 0.770 \\ CIV\_Luminosity & erg s\({}^{-1}\) & 45.754 \(\pm\) 0.004 & 44.833 \(\pm\) 0.019 & 45.256 \(\pm\) 0.021 & 44.819 \(\pm\) 0.008 & 45.430 \(\pm\) 0.005 \\ CIV\_PL\_slope & & -1.622 \(\pm\) 0.008 & -1.352 \(\pm\) 0.020 & -1.525 \(\pm\) 0.016 & -1.340 \(\pm\) 0.025 & -1.374 \(\pm\) 0.008 \\ MgII\_FWHM & km s\({}^{-1}\) & 3396 \(\pm\) 111 & 3599 \(\pm\) 238 & 6378 \(\pm\) 537 & 4574 \(\pm\) 288 & 4319 \(\pm\) 313 \\ MgII\_Sigma & km s\({}^{-1}\) & 3379 \(\pm\) 113 & 2810 \(\pm\) 297 & 3488 \(\pm\) 386 & 3761 \(\pm\) 75 & 3410 \(\pm\) 245 \\ MgII\_Blueshift & km s\({}^{-1}\) & 217 \(\pm\) 71 & 434 \(\pm\) 171 & -239 \(\pm\) 100 & -57 \(\pm\) 132 & -289 \(\pm\) 103 \\ MgII\_EW & Å & 21.620 \(\pm\) 1.670 & 28.940 \(\pm\) 3.150 & 40.640 \(\pm\) 3.270 & 36.500 \(\pm\) 2.280 & 37.360 \(\pm\) 2.110 \\ MgII\_Wavelength & Å & 2798.540 \(\pm\) 2.310 & 2792.860 \(\pm\) 0.099 & 2808.220 \(\pm\) 4.840 & 2794.430 \(\pm\) 0.650 & 2804.110 \(\pm\) 0.450 \\ MgII\_iLuminosity & erg s\({}^{-1}\) & 45.210 \(\pm\) 0.039 & 44.502 \(\pm\) 0.059 & 45.083 \(\pm\) 0.038 & 44.502 \(\pm\) 0.028 & 44.961 \(\pm\) 0.032 \\ MgII\_PL\_slope & & -1.339 \(\pm\) 0.087 & -1.145 \(\pm\) 0.023 & -1.519 \(\pm\) 0.140 & -1.685 \(\pm\) 0.087 & -1.400 \(\pm\) 0.155 \\ Hbeta\_FWHM & km s\({}^{-1}\) & & 5308 \(\pm\) 924 & & & \\ Hbeta\_Sigma & km s\({}^{-1}\) & & 3864 \(\pm\) 637 & & & \\ Hbeta\_Blueshift & km s\({}^{-1}\) & & -634 \(\pm\) 330 & & & \\ Hbeta\_EW & Å & 86.440 \(\pm\) 25.040 & & & & \\ Hbeta\_PWavelength & Å & 4856.340 \(\pm\) 4.640 & & & & \\ Hbeta\_Luminosity & erg s\({}^{-1}\) & & 44.494 \(\pm\) 0.099 & & & & \\ Hbeta\_Pl\_slope & & -2.426 \(\pm\) 0.395 & & & & \\ log\_L1450 & erg s\({}^{-1}\) & 47.522 \(\pm\) 0.001 & 46.651 \(\pm\) 0.001 & 47.090 \(\pm\) 0.001 & 46.647 \(\pm\) 0.001 & 47.038 \(\pm\) 0.001 \\ log\_L3000 & erg s\({}^{-1}\) & 47.312 \(\pm\) 0.018 & 46.486 \(\pm\) 0.021 & 46.907 \(\pm\) 0.023 & 46.369 \(\pm\) 0.013 & 46.824 \(\pm\) 0.019 \\ log\_L1500 & erg s\({}^{-1}\) & & 46.240 \(\pm\) 0.057 & & & \\ logMBH\_CIV & M\({}_{\odot}\) & 9.971 
\(\pm\) 0.006 & 9.636 \(\pm\) 0.032 & 10.145 \(\pm\) 0.043 & 9.582 \(\pm\) 0.010 & 9.833 \(\pm\) 0.028 \\ CIV\_Quality\_Flag & & 1 & & & \\ logMBH\_MgII & M\({}_{\odot}\) & 9.855 \(\pm\) 0.030 & \(\ldots\) \\ \hline \hline \end{tabular}
\end{table}
the Max Planck Society, and the Higher Education Funding Council for England. The SDSS Web Site is [http://www.sdss.org/](http://www.sdss.org/).
The SDSS is managed by the Astrophysical Research Consortium for the Participating Institutions. The Participating Institutions are the American Museum of Natural History, Astrophysical Institute Potsdam, University of Basel, University of Cambridge, Case Western Reserve University, University of Chicago, Drexel University, Fermilab, the Institute for Advanced Study, the Japan Participation Group, Johns Hopkins University, the Joint Institute for Nuclear Astrophysics, the Kavli Institute for Particle Astrophysics and Cosmology, the Korean Scientist Group, the Chinese Academy of Sciences (LAMOST), Los Alamos National Laboratory, the Max-Planck-Institute for Astronomy (MPIA), the Max-Planck-Institute for Astrophysics (MPA), New Mexico State University, Ohio State University, University of Pittsburgh, University of Portsmouth, Princeton University, the United States Naval Observatory, and the University of Washington.
This project used public archival data from the Dark Energy Survey (DES). Funding for the DES Projects has been provided by the U.S. Department of Energy, the U.S. National Science Foundation, the Ministry of Science and Education of Spain, the Science and Technology Facilities Council of the United Kingdom, the Higher Education Funding Council for England, the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign, the Kavli Institute of Cosmological Physics at the University of Chicago, the Center for Cosmology and Astro-Particle Physics at the Ohio State University, the Mitchell Institute for Fundamental Physics and Astronomy at Texas A&M University, Financiadora de Estudos e Projetos, Fundacao Carlos Chagas Filho de Amparo a Pesquisa do Estado do Rio de Janeiro, Conselho Nacional de Desenvolvimento Cientifico e Tecnologico and the Ministerio da Ciencia, Tecnologia e Inovacao, the Deutsche Forschungsgemeinschaft, and the Collaborating Institutions in the Dark Energy Survey.
The Collaborating Institutions are Argonne National Laboratory, the University of California at Santa Cruz, the University of Cambridge, Centro de Investigaciones Energeticas, Medioambientales y Tecnologicas-Madrid, the University of Chicago, University College London, the DES-Brazil Consortium, the University of Edinburgh, the Eidgenossische Technische Hochschule (ETH) Zurich, Fermi National Accelerator Laboratory, the University of Illinois at Urbana-Champaign, the Institut de Ciencies de l'Espai (IEEC/CSIC), the Institut de Fisica d'Altes Energies, Lawrence Berkeley National Laboratory, the Ludwig-Maximilians Universitat Munchen and the associated Excellence Cluster Universe, the University of Michigan, the National Optical Astronomy Observatory, the University of Nottingham, The Ohio State University, the OzDES Membership Consortium, the University of Pennsylvania, the University of Portsmouth, SLAC National Accelerator Laboratory, Stanford University, the University of Sussex, and Texas A&M University.
Based in part on observations at Cerro Tololo Inter-American Observatory, National Optical Astronomy Observatory, which is operated by the Association of Universities for Research in Astronomy (AURA) under a cooperative agreement with the National Science Foundation.
Software packages used in this study include Numpy (van der Walt et al., 2011), Scipy (Virtanen et al., 2020), Astropy (Astropy Collaboration et al., 2013), Specutils (Earl et al., 2022), Matplotlib (Hunter, 2007), and seaborn (Waskom, 2021).
## Data Availability
The data underlying this article will be shared on reasonable request to the corresponding author. The post-processed spectra, supplementary table, and figures can be downloaded from a GitHub repository: [https://github.com/samlaihei/XQ-100](https://github.com/samlaihei/XQ-100).
|
2302.00060 | Interaction and Decision Making-aware Motion Planning using Branch Model
Predictive Control | Motion planning for autonomous vehicles sharing the road with human drivers
remains challenging. The difficulty arises from three challenging aspects:
human drivers are 1) multi-modal, 2) interacting with the autonomous vehicle,
and 3) actively making decisions based on the current state of the traffic
scene. We propose a motion planning framework based on Branch Model Predictive
Control to deal with these challenges. The multi-modality is addressed by
considering multiple future outcomes associated with different decisions taken
by the human driver. The interactive nature of humans is considered by modeling
them as reactive agents impacted by the actions of the autonomous vehicle.
Finally, we consider a model developed in human neuroscience studies as a
possible way of encoding the decision making process of human drivers. We
present simulation results in various scenarios, showing the advantages of the
proposed method and its ability to plan assertive maneuvers that convey intent
to humans. | Rui Oliveira, Siddharth H. Nair, Bo Wahlberg | 2023-01-31T19:51:14Z | http://arxiv.org/abs/2302.00060v1 | # Interaction and Decision Making-aware Motion Planning using Branch Model Predictive Control
###### Abstract
Motion planning for autonomous vehicles sharing the road with human drivers remains challenging. The difficulty arises from three challenging aspects: human drivers are 1) multi-modal, 2) interacting with the autonomous vehicle, and 3) actively making decisions based on the current state of the traffic scene. We propose a motion planning framework based on Branch Model Predictive Control to deal with these challenges. The multi-modality is addressed by considering multiple future outcomes associated with different decisions taken by the human driver. The interactive nature of humans is considered by modeling them as reactive agents impacted by the actions of the autonomous vehicle. Finally, we consider a model developed in human neuroscience studies as a possible way of encoding the decision making process of human drivers. We present simulation results in various scenarios, showing the advantages of the proposed method and its ability to plan assertive maneuvers that convey intent to humans.
## I Introduction
Autonomous vehicles (AVs) must drive in the presence of other traffic participants, such as human-driven vehicles (HVs) and pedestrians. To date, sharing the road with human traffic participants is one of the biggest challenges hindering AVs from being deployed at a large scale.
Current state-of-the-art sensor and perception technology already provides an accurate understanding of the traffic scene's current state. However, the irregularity of human behavior means that predictions of how the traffic scene will evolve are reliable only a few seconds into the future.
Besides being hard to predict, the traffic scene evolution is also directly impacted by the decisions taken by the AV. Human drivers will react differently depending on other surrounding vehicles' maneuvers.
Most planning approaches treat the predictions of other traffic participants as fixed, so the autonomous vehicle only performs maneuvers to avoid those predictions. The result is overly conservative driving. To account for the fact that the AV's decisions influence how the traffic scene evolves, and to avoid this conservativeness, one must solve the joint prediction and planning problem.
This work presents a novel motion planning approach to tackle the joint prediction and planning problem, making the following contributions:
* Proposal of a framework for handling interaction-heavy scenarios, considering the aspects of _multi-modality_, _interaction_, and _decision making_ of human drivers;
* Approximating human drivers' decision making process through models developed in neuroscience studies, allowing the autonomous vehicle to take proactive and assertive maneuvers that convey intent to human drivers;
* Performance evaluation and comparison against relevant works in the area, showing an increase in average performance without sacrificing safety.
### _Related work_
We start by introducing three challenging aspects of human drivers and existing works relating to them.
#### I-A1 Multi-modality
Consider a human driver arriving at an intersection (_initial state_ of Fig. 1a). A defensive driver slows down and stops to check for oncoming vehicles safely (_outcome 1_). On the other hand, an aggressive driver speeds through the intersection to achieve a shorter traveling time (_outcome 2_). A planner must consider different outcomes and plan a motion that guarantees safety for all.
Model Predictive Control (MPC) approaches for tackling uncertainties stemming from the multi-modality are introduced in [1, 2]. However, they are limited to uncertainties stemming from the unknown existence of static obstacles or intents of pedestrians.
The works [3, 4] use Scenario MPC to consider the multi-modality arising from the uncertainty over different maneuvers types of other drivers (such as keep or change lanes). [5] combines Scenario MPC and Stochastic MPC to improve other vehicles' motion predictions.
Contingency MPC is an approach that tracks a desired nominal plan while maintaining a contingency plan to deal with possible emergencies [6, 7]. Multi-modality is tackled by considering both the nominal and contingency outcomes.
#### I-A2 Interaction
Driving is a highly interactive task, where drivers adapt their actions in response to those of other drivers. Consider the HV (red car) and AV (yellow car) approaching the intersection shown in _initial state_ of Fig. 1b. If the AV turns
right, the HV will slow down to avoid a collision (_outcome 1_). On the other hand, if the AV proceeds through the intersection, the HV keeps its speed (_outcome 2_). Interactions occur in most traffic situations, and considering them is crucial to reducing AVs' conservativeness.
The problem of driving an autonomous vehicle through an intersection is tackled with Stochastic MPC in [8]. The approach considers the interaction aspects of driving by modeling human drivers as closed-loop predictions tracking a constant headway to the AV in front of them.
The work in [9] introduces a formulation of interaction with HVs as an underactuated dynamical system. This formulation allows the AV to perform complex interaction behaviors, such as accelerating or slowing down, to show intent to humans.
#### I-B3 Decision making
Humans often make decisions while driving, as shown in the initial state of Fig. 1c, where the HV (red car) has to decide if it will cross the intersection. The decision is affected by the perceived intended behavior of the AV (yellow car). If the AV approaches at high speed (_outcome 1_), the human decides to stop at the intersection. On the other hand, if the AV slows down (_outcome 2_), the human drives through the intersection. This example shows how the traffic scene, namely the AV's position and velocity, affects the HV's decision [10]. This example significantly differs from the interaction aspects shown in Fig. 1b, as no imminent collision forces the HV to stop. Instead, the AV shows an intent to drive through or stop at the intersection, leading the HV to react accordingly.
Branch MPC [11] is used to tackle the multi-modality arising from other human driver's decision making. The human is modeled using a finite set of policies that build a scenario tree. The planned solution is a feedback policy in the form of a trajectory tree accounting for all possible scenarios. The HV policies are propagated independently of other agents, which can lead to the freezing robot problem [12]. Realizing this drawback, the authors of [13] consider closed-loop models to propagate the control policies of other vehicles. However, both works lack a driver decision making model based on human behavior research.
#### I-B4 Summary
Table I outlines the previous works regarding their ability to tackle the mentioned challenges. Only recently has [13] considered all challenges. We build upon [11] by considering the reactive behavior of humans and addressing the _interaction_ challenge. Moreover, we model human _decision making_ using neuroscience studies, adding a sound sociological model lacking in [13].
## II Modeling
### _Scenarios_
We consider two scenarios that force the AV to interact with a HV. In the first scenario, the AV merges onto a road, as shown in Fig. 2a. The second scenario considers a non-signalized intersection, _i.e._, an intersection without traffic lights (Fig. 2b). Both scenarios lack priority rules, and thus no vehicle has the right of way over the other, requiring interaction and unspoken negotiation between them.
We assume that a vehicle \(i\), \(i\in\{H,A\}\), human-driven (\(H\)) or autonomous (\(A\)), moves along the road centerline along a path length \(s^{i}\). In both scenarios in Figs. 2a and 2b, the vehicles are on separate lanes until the merging point \(s_{\text{conflict}}\). After \(s_{\text{conflict}}\) the vehicles are on the same lane (merging), or share a conflict region (intersection), and must keep a safe distance to avoid a collision. We assume that as the HV approaches the conflict point, it eventually makes a decision at \(s_{\text{br}}\), possibly changing its behavior. \(s_{\text{br}}\) is a point located before \(s_{\text{conflict}}\), chosen to represent the point on the road where the HV becomes aware of the AV and takes a decision according to its preferences and driving style [14], and even the intention shown by the AV [10].
\begin{table}
\begin{tabular}{l c c c} Approach & Multi-modal & Interaction & Decision making \\ \hline
[1, 2, 4, 7] & ✓ & \(\times\) & \(\times\) \\
[11] & ✓ & \(\times\) & ✓ \\
[9] & \(\times\) & ✓ & ✓ \\
[8] & ✓ & ✓ & \(\times\) \\
[13], our approach & ✓ & ✓ & ✓ \\ \end{tabular}
\end{table} TABLE I: Comparison of different planning approaches according to the challenges identified in Section I-A.
Fig. 1: Challenges associated with an autonomous vehicle (yellow car) driving in the presence of human drivers (red car).
### _Vehicle models_
We assume the AV to drive along the lane center and only plan for its longitudinal motion. This assumption is justified as there is little to no advantage in considering the possibility of steering the vehicle laterally in these scenarios. Both vehicles follow the model
\[x=\begin{bmatrix}s&v\end{bmatrix}^{\intercal},\quad\dot{x}=\begin{bmatrix}v&u \end{bmatrix}^{\intercal}, \tag{1}\]
where \(s\) is the current position along the centerline path, \(v\) is the vehicle longitudinal velocity, and \(u\) is the acceleration, corresponding to the control input.
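The planning framework in Section III propagates this model in discrete time. As an illustration, a simple forward-Euler discretization can be written as below; the integration scheme and step size are assumptions, since the text only states that a discretized version of Equation (1) is used.

```python
import numpy as np

def step(x, u, dt):
    """One forward-Euler step of the double-integrator model (1):
    x = [s, v], s_dot = v, v_dot = u.  Euler integration and dt are assumptions."""
    s, v = x
    return np.array([s + dt * v, v + dt * u])
```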
The AV input \(u^{A}\) is determined by the planning framework presented in Section III. The HV input \(u^{H}\) follows a certain driving policy \(\pi^{\text{before}}\), up to \(s_{\text{br}}\), and afterwards \(\pi^{\text{after}}\):
\[u^{H}=\begin{cases}\pi^{\text{before}}(s^{H},v^{H})&\text{if }s^{H}<s_{\text{br}} \\ \pi^{\text{after}}(s^{H},v^{H},s^{A},v^{A})&\text{if }s^{H}\geq s_{\text{br}} \end{cases} \tag{2}\]
Note that \(\pi^{\text{after}}\) is a function of \(s^{A}\) and \(v^{A}\), due to the interaction behavior between the HV and AV.
### _Human driving policies_
We assume two types of policies for the HV, _velocity tracking_ when there is no vehicle ahead, and _vehicle following_ when there is a vehicle ahead.
#### Iii-C1 Velocity tracking
When there is no leading vehicle ahead, the HV will track a desired reference speed \(v_{\text{ref}}\). We then have \(u^{H}=\pi^{v_{\text{ref}},\text{track}}\), where
\[\pi^{v_{\text{ref}},\text{track}}\left(v^{H},v_{\text{ref}}\right)=K_{v} \left(v_{\text{ref}}-v^{H}\right), \tag{3}\]
and \(K_{v}>0\) is a constant gain. Policy \(\pi^{v_{\text{ref}},\text{track}}\) takes as inputs the current vehicle velocity \(v^{H}\) and the desired reference velocity \(v_{\text{ref}}\), and outputs an acceleration command proportional to their difference.
#### Iii-C2 Vehicle following
If there is another vehicle ahead, the HV adapts its speed to avoid a rear-end collision. In this case, \(u^{H}=\pi^{\text{va}}\), where va stands for _vehicle ahead_, and:
\[\pi^{\text{va}}=\begin{cases}\pi^{v_{\text{ref}},\text{track}}\left(v^{H},v_{ \text{ref}}\right)&\text{if }v^{H}\geq v_{\text{ref}}\\ K_{v}(v^{A}-v^{H})+K_{d}\left(d-d_{\text{ref}}\right)&\text{if }v^{H}<v_{\text{ref}} \end{cases} \tag{4}\]
where \(d=s^{A}-s^{H}\) corresponds to the distance from the human-driven vehicle to the vehicle ahead, and \(K_{d}>0\) is a constant gain.
When \(v^{H}\geq v_{\text{ref}}\), the vehicle tracks its desired velocity \(v_{\text{ref}}\), resulting in braking. When \(v^{H}<v_{\text{ref}}\), the vehicle speeds up while taking into account the vehicle ahead. The term \(K_{v}(v^{A}-v^{H})\) performs velocity tracking, and \(K_{d}\left(d-d_{\text{ref}}\right)\) keeps a safe distance \(d_{\text{ref}}\) to the vehicle in front. Policy \(\pi^{\text{va}}\) is inspired by the Intelligent Driver Model [15].
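For concreteness, the two driving policies can be sketched as follows; the gain and gap values are illustrative assumptions rather than the values used in the experiments.

```python
def pi_track(v_h, v_ref, K_v=0.5):
    """Velocity-tracking policy (3); the gain value is an illustrative assumption."""
    return K_v * (v_ref - v_h)

def pi_vehicle_ahead(s_h, v_h, s_a, v_a, v_ref, d_ref=20.0, K_v=0.5, K_d=0.1):
    """Vehicle-following policy (4): brake towards v_ref when too fast,
    otherwise track the leader's speed while keeping the gap near d_ref."""
    if v_h >= v_ref:
        return pi_track(v_h, v_ref, K_v)
    d = s_a - s_h  # distance to the vehicle ahead
    return K_v * (v_a - v_h) + K_d * (d - d_ref)
```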
### _Human decision making_
In the considered scenarios we assume the human has two different behaviors, one before arriving at \(s_{\text{br}}\), and another after it, as given by Equation (2). At \(s_{\text{br}}\) the HV starts interacting with the AV and changes its behavior. The new behavior depends on the type of driver profile and on the traffic scene, and it is not known in advance. In the merge scenario we consider behaviors corresponding to three types of driver:
* egoistic driver who speeds up (\(v_{\text{ref}}=v_{\text{ref}}^{\text{fast}}\)),
* neutral driver who keeps speed (\(v_{\text{ref}}=v_{\text{ref}}^{\text{keep}}\)),
* altruistic driver who slows down (\(v_{\text{ref}}=v_{\text{ref}}^{\text{slow}}\)).
In the intersection scenario we consider two types of driver:
* driver keeps speed (\(v_{\text{ref}}=v_{\text{ref}}^{\text{keep}}\)),
* driver slows down to a stop.
At point \(s_{\text{br}}\) the human decides on one of the policies to follow. We make use of research in the field of neuroscience, namely we consider the work in [10], where the authors study the decision making process of human drivers approaching an intersection. The authors propose that drivers' decision making depends on the degree of safety of the two co-existing possibilities of either crossing or stopping at the intersection. The degree of safety can be quantified by the critical time to cross \(CT_{\text{cross}}\), and the critical time to stop \(CT_{\text{stop}}\). The probability of the human choosing to cross, _i.e._, applying policy \(\pi^{\text{cross}}\), is defined as [10]:
\[P_{\pi^{\text{cross}}}=\frac{w}{1+e^{-aCT_{\text{cross}}}}+\frac{1-w}{1+e^{- bCT_{\text{stop}}}}. \tag{5}\]
The parameters \(a\), \(b\), and \(w\) are found by fitting the model in Equation (5) to experimental data, obtained from thirty experienced drivers whose decision making was studied in a driving simulator.
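A direct transcription of Equation (5) is sketched below; the fitted values of \(a\), \(b\), and \(w\) are not reproduced here, so they must be supplied from driver data.

```python
import numpy as np

def p_cross(ct_cross, ct_stop, a, b, w):
    """Probability of the human choosing to cross, Eq. (5).
    a, b, w must be fit to experimental driver data; no fitted values are given here."""
    return w / (1.0 + np.exp(-a * ct_cross)) + (1.0 - w) / (1.0 + np.exp(-b * ct_stop))
```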
Equation (5) provides a decision making model that can be used to estimate the probabilities of the human driver taking different decisions. This allows the AV to understand
Fig. 2: The scenarios considered in this work.
the likelihood of different future scene outcomes stemming from different decision taken by the human driver. Moreover, the dependency of Equation (5) on both the HV and AV states allows the planner to reason about how different maneuvers affect the likelihood of different scenarios. This is fundamental in order for the AV to achieve assertive behavior. The following section proposes a framework that can take advantage of model Equation (5), providing the planner with a better knowledge of HV behavior and allowing the AV to drive assertively.
## III Motion Planning Framework
### _Tree formulation - human-driven vehicle_
In order to take into account the possible future decisions of the human, and the resulting states of its vehicle, we use a tree structure [11]. Figure 3 shows a tree with \(J+1\) branches (branches \([2,\ldots,J-1]\) not visible). Each branch \(j\) in the tree has an associated human driver policy \(\pi^{j}\). Within a branch \(j\), the HV states are propagated assuming they follow the associated policy \(\pi^{j}\). The evolution of states corresponds to a discretized model of Equation (1) so that
\[x_{t+1}^{H,\pi^{j}}=f^{H,\pi^{j}}\left(x_{t}^{H,\pi^{j}},x_{t}^{A,\pi^{j}} \right), \tag{6}\]
where the vehicle input \(u_{t}^{H,\pi^{j}}\) is determined according the active policy \(\pi^{j}\), and can depend on the AV state, as in Equation (4). For the transition states between branches
\[x_{t_{\text{br}}+1}^{H,\pi^{j}}=f^{H,\pi^{j}}\left(x_{t_{\text{br}}}^{H,\pi^{ 0}},x_{t_{\text{br}}}^{A,\pi^{0}}\right), \tag{7}\]
where \(u_{t_{\text{br}}}^{H,\pi^{j}}\) already follows the policy \(\pi^{j}\).
The tree in Fig. 3 is composed of a root branch and \(J\) child branches, corresponding to the set of branches \(\mathcal{J}=\{0,1,2,\ldots,J\}\). The root branch splits into different branches at time \(t_{\text{br}}\), corresponding to the state at which the HV crosses the path length \(s_{\text{br}}\). At this path length \(s_{\text{br}}\), the human changes to a policy that interacts with the AV. Since the human can take multiple policies, \(x_{t_{\text{br}}}^{H,\pi^{0}}\) propagates into \(J\) different states \(x_{t_{\text{br}}+1}^{H,\pi^{j}}\), corresponding to the different branches with policies \(\pi^{j}\). Each branch has an associated probability \(P_{\pi^{j}}\). For the first branch \(P_{\pi^{0}}=1\), and for the remaining branches \(\sum_{j=1}^{J}P_{\pi^{j}}=1\).
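The branch-wise propagation of Equations (6) and (7) can be sketched as a forward rollout over the tree; the policy and AV-state interfaces below are assumptions made for illustration, not the implementation used in this work.

```python
def rollout_hv_tree(x_h0, av_state, policies, t_br, T, dt):
    """Propagate HV states along each branch of the scenario tree (Sec. III-A).
    policies[0] is the pre-branch policy pi^0; policies[1..J] are the post-branch
    policies pi^j.  Every policy takes (s_h, v_h, s_a, v_a) and returns u_h, and
    av_state(j, t) -> (s_a, v_a) supplies the AV state seen by branch j at step t
    (both signatures are assumed interfaces)."""
    root = [x_h0]
    for t in range(t_br):                 # shared root branch, policy pi^0
        s, v = root[-1]
        u = policies[0](s, v, *av_state(0, t))
        root.append((s + dt * v, v + dt * u))
    tree = {}
    for j in range(1, len(policies)):     # child branches, policies pi^j
        branch = list(root)
        for t in range(t_br, T):
            s, v = branch[-1]
            u = policies[j](s, v, *av_state(j, t))
            branch.append((s + dt * v, v + dt * u))
        tree[j] = branch
    return tree
```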
### _Tree formulation - autonomous vehicle_
The AV state evolution follows a similar tree structure as the HV, illustrated in Fig. 4, and corresponding to a discretized model of Equation (1) so that
\[x_{t+1}^{A,\pi^{j}}=f^{A,\pi^{j}}\left(x_{t}^{A,\pi^{j}},u_{t}^{A,\pi^{j}} \right). \tag{8}\]
The vehicle input \(u_{t}^{A,\pi^{j}}\) is determined by the solution to the optimal control problem introduced later in Section III-C. Similarly, for the branching states, we have:
\[x_{t_{\text{br}}+1}^{A,\pi^{j}}=f^{A,\pi^{j}}\left(x_{t_{\text{br}}}^{A,\pi^{0}},u_{t_{\text{br}}}^{A,\pi^{j}}\right). \tag{9}\]
In a practical setting, it is not possible to immediately estimate the policy decision made by the human-driven vehicle, as prediction systems have a delay until accurately estimating the new human behavior. Therefore, we force the autonomous vehicle states to be equal between the different branches \([1,\ldots,J]\) for the first \(\Delta t_{\text{obs}}\) seconds of the branch:
\[x_{t_{i}}^{A,\pi^{j}}=x_{t_{i}}^{A,\pi^{j^{\prime}}},\quad\forall t_{i}\in[t_{\text{br}}+1,\ldots,t_{\text{br}}+\Delta t_{\text{obs}}],\ \{\forall j,j^{\prime}\in\mathcal{J}\,|\,j\neq j^{\prime}\}. \tag{10}\]
Equation (10) forces the solutions to not assume immediate knowledge of the HV policy, and instead delay by \(\Delta t_{\text{obs}}\) the adaption to the new behavior. \(\Delta t_{\text{obs}}\) is tuned based on considerations of expected time to perceive a new vehicle behavior and feasibility of the planning problem.
### _MPC formulation_
For each branch \(j\in\mathcal{J}\), consider the vectors
\[\mathbf{x}_{j}^{H} =[x_{t_{i}}^{H,\pi^{j}},x_{t_{i}^{j}+1}^{H,\pi^{j}},\ldots,x_{t_{f }}^{H,\pi^{j}}],\] \[\mathbf{x}_{j}^{A} =[x_{t_{i}^{j}}^{A,\pi^{j}},x_{t_{i}^{j}+1}^{A,\pi^{j}},\ldots,x_{ t_{f}^{j}}^{A,\pi^{j}}],\] \[\mathbf{u}_{j}^{A} =[u_{t_{i}^{j}}^{A,\pi^{j}},u_{t_{i}^{j}+1}^{A,\pi^{j}},\ldots,u_{ t_{f}^{j}}^{A,\pi^{j}}],\]
where \(t_{i}^{j}\) and \(t_{f}^{j}\) are initial and final times associated with the first and last states in branch \(j\). \(\mathbf{x}_{j}^{H}\) corresponds to the HV states, and \(\mathbf{x}_{j}^{A}\), \(\mathbf{u}_{j}^{A}\) to the AV states, and inputs, respectively.
Fig. 4: Diagram of the scenario tree used to model the AV possible future states. The arcs between states in branch \(1\) and branch \(J\) correspond to autonomous vehicle states that are forced to be equal, according to Equation (10).
Fig. 3: Diagram of the scenario tree used to model the HV possible future states.
The MPC is solved at the beginning of every planning cycle at time \(t\); its formulation is as follows:
\[\underset{\{\mathbf{u}_{j}^{A}\}_{j\in\mathcal{J}}}{\text{min}}\ \sum_{j\in\mathcal{J}}~{}P_{\pi^{j}}J(\mathbf{x}_{j}^{H},\mathbf{x}_{j}^{A},\mathbf{u}_{j}^{A})\] (11a)
\[\text{s.t.}\ \ \text{Eqs.~(6)--(10)},\] (11b)
where each branch cost \(J(\mathbf{x}_{j}^{H},\mathbf{x}_{j}^{A},\mathbf{u}_{j}^{A})\) is weighted by the probability \(P_{\pi^{j}}\) of the corresponding human policy.
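Evaluating the objective in Equation (11a) for a candidate input tree amounts to rolling out each branch and weighting its cost by the corresponding policy probability, as in the sketch below; the rollout and cost interfaces are assumptions for illustration only.

```python
def expected_branch_cost(u_tree, p_branch, rollout_fn, cost_fn):
    """Objective of Eq. (11a): branch costs weighted by policy probabilities.
    u_tree[j] is the AV input sequence for branch j (the shared root inputs and the
    first Delta-t_obs post-branch inputs are assumed to be tied across branches
    upstream, per Eq. (10)).  rollout_fn(j, u_j) -> (x_h, x_a) returns the HV/AV
    trajectories for branch j and cost_fn evaluates the branch cost J; these
    interfaces are assumptions made for illustration."""
    total = 0.0
    for j, p_j in p_branch.items():
        x_h, x_a = rollout_fn(j, u_tree[j])
        total += p_j * cost_fn(x_h, x_a, u_tree[j])
    return total
```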
Since the Robust MPC assumes that the obstacle exists, it achieves an acceptable performance when the probability of the obstacle is high enough. However, in most cases, it has the worst performance. On the other hand, the Prescient MPC consistently achieves the best performance as it plans its maneuver according to the actual (unknown) state of the traffic light.
The Contingency MPC performance is comparable to the Branch and Prescient MPCs for small obstacle probabilities. However, as the obstacle probability increases, its performance degrades. The Contingency MPC is an optimistic planner, assuming a green light until proven otherwise. This results in abrupt braking maneuvers when the light is red, leading to poor performance.
Our proposed method achieves, on average, a performance that is always better than Robust MPC while being equally as safe. For lower probabilities of red light, the Branch MPC performance is comparable to that of Contingency MPC. For higher probabilities, Branch MPC outperforms Contingency MPC due to not being optimistic about the traffic light state.
Figure 7 shows a single planning cycle for the different MPC approaches starting at identical initial vehicle states. In this initial state, the AV is aware of the traffic light but does not know its state yet. Robust MPC plans a single velocity profile assuming a red light and bringing the vehicle to a stop. Contingency MPC plans two velocity profiles to deal with the possibility of green and red lights. The velocity profile associated with a red light is not penalized in the cost function, resulting in an abrupt and uncomfortable contingency maneuver. Branch MPC reduces its driving speed, at the cost of increasing travel time, to deal with the possibility of a red light comfortably. Therefore, Branch MPC chooses a tradeoff between the performances in both future possible scenarios.
### _Multi-modality_
We consider the merging scenario in Fig. 2a, where the HV can have three possible future velocity tracking policies \(\pi^{\text{va,fast}}\), \(\pi^{\text{va,keep}}\), or \(\pi^{\text{va,slow}}\). The tree structure consists of a root node where the human driver follows a constant-speed policy \(\pi^{0}\) until the branching point, and three child nodes corresponding to the HV taking the three possible policies. Although chosen manually in this study, the tracking policies could be given by a prediction module providing a multi-modal set of outcomes, as in [18, 19].
Figure 8 (bottom) shows the planned velocity profiles of a planning cycle occurring when the HV and AV are heading toward the merging point. The HV is predicted to have the possibility to take either of the three different velocity profiles. The AV plans a set of velocity profiles, going behind the HV in case it decides to keep its speed or accelerate or going ahead in case the HV slows down. The AV plan is better visualized by looking at the planned path lengths in Fig. 8 (top). The path lengths (subtracted by the predicted path length of the HV with policy \(\pi^{\text{va,keep}}\)) show the distinctive decisions made by the planner to either go behind or ahead of different possible policies of the HV.
Figure 8 also shows the velocity planned by a Robust MPC approach. The planned maneuver is conservative, opting to go behind all possible realizations of future human policies. Considering the multi-modality and planning feedback policies dependent on the mode that the human eventually decides upon allows the planned profiles to squeeze in between different possible policies, reducing conservativeness compared to a robust approach.
### _Non-interaction vs. interaction-aware human models_
We now consider an AV that accelerates from a standstill and merges onto a road with an oncoming vehicle driving at
Fig. 8: Planned autonomous vehicle trajectory (solid line) and predicted human-driven vehicle trajectory (dashed line) for the Branch MPC and Robust MPC cases. Top: Path lengths (centered around the path executed by \(\pi^{\text{va,keep}}\)). The rounded markers correspond to the instant when the vehicles cross the conflict point. Bottom: Velocity profiles.
Fig. 7: Single planning cycle for the autonomous vehicle approaching an uncertain obstacle under different planning frameworks: Branch, Robust, and Contingency MPCs.
high speed. The AV can decide to go ahead of the other vehicle or wait for it to pass and go behind it.
Figure 9 shows a single planning cycle, where the AV is at a standstill, and an oncoming vehicle drives faster than the AV's desired cruising velocity. In the non-interacting vehicle model case, the AV predicts the HV to keep its speed constant and decides to wait for it to pass and, afterward, go behind it. In the case of packed traffic with several oncoming vehicles, the AV could be stuck indefinitely at the junction [12]. However, when considering an interaction-aware model, the AV decides to go ahead of the oncoming vehicle, as it predicts that the HV will slow down and adapt its speed. We remark that the planner assumes a limit to the HV's braking capabilities to guarantee that the AV does not act discourteously to other traffic participants [20].
### _Intersection scenario_
We consider the non-signalized intersection scenario in Fig. 2b. Since there are no priority rules, the vehicles must negotiate who goes first. The tree structure consists of a root node and two child nodes corresponding to the HV keeping its speed, \(\pi^{\text{cross}}\), or stopping, \(\pi^{\text{stop}}\).
Figure 10a shows the AV planned velocities when assuming that both HV policies have an equal and fixed probability \(P_{\pi^{\text{cross}}}=P_{\pi^{\text{stop}}}=0.5\). The AV decides to slow down to deal with both possible outcomes of the human decision.
We note that this intersection scenario resembles the one considered in [10], and therefore we use the human decision making model Equation (5) to determine the probabilities \(P_{\pi^{\text{cross}}}\) and \(P_{\pi^{\text{stop}}}\). Figure 10b shows the results for this scenario when considering that the human policy probabilities follow Equation (5). In this case, the planned velocity profile increases at around \(t=5\) s to assertively indicate to the human driver that the AV intends to cross. This results in a higher predicted probability of the human deciding to slow down and give way to the autonomous vehicle, \(P_{\pi^{\text{stop}}}=0.856\). With a lower probability of the human keeping its speed, \(P_{\pi^{\text{cross}}}=0.144\), the AV plans a more abrupt maneuver in the unlikely event of this outcome occurring.
These results show that considering the human decision making model Equation (5) allows the planner to make assertive maneuvers and improve driving performance. The AV drives in a way that shows intent to other HVs, intending to optimize its expected driving outcomes. This behavior comes out naturally due to minimizing the objective cost of Equation (11a), and therefore, does not require manually tuned driving strategies for different traffic situations.
## V Conclusions
We presented a motion planning framework tackling the challenges of autonomous vehicles in the presence of human drivers, namely, multi-modality, interaction, and decision making. A Branch MPC problem is formulated by combining scenario trees and research from the neuroscience field to model the human driver's decision making process. We show that Branch MPC and interaction-aware models achieve better average performance than alternative formulations in the literature. Furthermore, we show that using a human decision making model leads the planner to find proactive maneuvers that convey intent to the human driver. The proposed framework plans assertive maneuvers that influence human drivers to make decisions favorable to the autonomous vehicle.
In future work, it is interesting to analyze the impact of the problem modeling assumptions and to consider behavioral decision making for more general driving scenarios. Finally, practical implementation is needed to validate the suitability of the proposed approach to model and solve the joint prediction and planning problem accurately.
Fig. 10: Intersection scenario for different configurations of human-vehicle policy probabilities.
Fig. 9: Predicted and planned velocity profiles when considering non-interacting and interaction-aware HV models. |
2309.16558 | Revealing the Landscape of Globally Color-Dual Multi-loop Integrands | We report on progress in understanding how to construct color-dual multi-loop
amplitudes. First we identify a cubic theory, semi-abelian Yang-Mills, that
unifies many of the color-dual theories studied in the literature, and provides
a prescriptive approach for constructing $D$-dimensional color-dual numerators
through one-loop directly from Feynman rules. By a simple weight counting
argument, this approach does not further generalize to two-loops. As a first
step in understanding the two-loop challenge, we use a $D$-dimensional
color-dual bootstrap to successfully construct globally color-dual local
two-loop four-point nonlinear sigma model (NLSM) numerators. The double-copy of
these NLSM numerators with themselves, pure Yang-Mills, and $\mathcal{N}=4$
super-Yang-Mills correctly reproduce the known unitarity constructed integrands
of special Galileons, Born-Infeld theory, and Dirac-Born-Infeld-Volkov-Akulov
theory, respectively. Applying our bootstrap to two-loop four-point pure
Yang-Mills, we exhaustively search the space of local numerators and find that
it fails to satisfy global color-kinematics duality, completing a search
previously initiated in the literature. We pinpoint the failure to the bowtie
unitarity cut, and discuss a path forward towards non-local construction of
color-dual integrands at generic loop order. | Alex Edison, James Mangan, Nicolas H. Pavao | 2023-09-28T16:13:21Z | http://arxiv.org/abs/2309.16558v1 | # Revealing the Landscape of Globally Color-Dual Multi-loop Integrands
###### Abstract
We report on progress in understanding how to construct color-dual multi-loop amplitudes. First we identify a cubic theory, semi-abelian Yang-Mills, that unifies many of the color-dual theories studied in the literature, and provides a prescriptive approach for constructing \(D\)-dimensional color-dual numerators through one-loop directly from Feynman rules. By a simple weight counting argument, this approach does not further generalize to two-loops. As a first step in understanding the two-loop challenge, we use a \(D\)-dimensional color-dual bootstrap to successfully construct globally color-dual _local_ two-loop four-point nonlinear sigma model (NLSM) numerators. The double-copy of these NLSM numerators with themselves, pure Yang-Mills, and \(\mathcal{N}=4\) super-Yang-Mills correctly reproduce the known unitarity constructed integrands of special Galileons, Born-Infeld theory, and Dirac-Born-Infeld-Volkov-Akulov theory, respectively. Applying our bootstrap to two-loop four-point pure Yang-Mills, we exhaustively search the space of local numerators and find that it fails to satisfy global color-kinematics duality, completing a search previously initiated in the literature. We pinpoint the failure to the bowtie unitarity cut, and discuss a path forward towards _non-local_ construction of color-dual integrands at generic loop order.
###### Contents
* 1 Introduction
* 2 Background
* 2.1 Color-dual Amplitudes
* 2.2 Color-dual Lagrangians
* 2.3 Color-dual Bootstrap
* 3 One-loop cubic construction
* 3.1 Semi-abelian Yang-Mills theory
* 3.2 One-loop color-dual integrands
* 3.3 Two-loop obstruction
* 4 Two-loop four-point bootstrap
* 4.1 Two-loop NLSM
* 4.2 Double-copy verification
* 4.3 Two-loop Yang-Mills revisited
* 5 Conclusions and Outlook
* 5.1 Non-local construction of scattering amplitudes
* 5.2 Future Directions
* A Catalog of color-dual theories
* B Spinor-helicity and conventions
* C Regulating BEL integrals
## 1 Introduction
Since the turn of the century, our understanding of the \(S\)-matrix and its concealed structures has expanded dramatically. At the heart of this progress is the mantra that physical observables are simpler when studied on-shell [1; 2; 3]. However, while inserting on-shell states certainly offers a path towards taming the factorial growth of Feynman diagrams, it also obscures the off-shell simplicity at the heart of many quantum field theories. A shining example of this hidden structure is the duality between color and kinematics [4; 5; 6], which states that the kinematic numerators of gauge theory amplitudes can be rearranged to obey the same algebraic identities as the constituent color factors. When this duality is realized globally
off-shell, it dramatically decreases the combinatorial complexity posed by integrand construction, and offers a path towards efficient assembly of quantum gravity integrands directly from simpler gauge theory building blocks via the double copy [4; 5].
Despite the tremendous success in leveraging this duality to compute gauge and gravity observables to high orders in perturbation theory [7; 8; 9], color-kinematics remains a conjecture at loop level. Indeed, there are many examples in the literature where identifying color-dual representations beyond one-loop has posed a formidable challenge [10; 11; 12; 13]. In this work, we use the nonlinear sigma model (NLSM) and Yang-Mills, two theories proven to permit color-dual representations at tree-level [14; 15], as case studies to advance our understanding of the kinematic algebra at the multi-loop level.
We begin with an overview of color-kinematics duality in section 2, and define the notion of "globally" color-dual integrands in section 2.3. We then construct a manifestly color-dual theory in section 3, which generates \(D\)-dimensional color-dual \(n\)-point numerators at both tree-level and one-loop. When plugging in appropriate on-shell states, we find that these numerators underpin color-dual representations of self-dual Yang-Mills, NLSM and Chern-Simons theory through one-loop. The construction of this theory, which we dub _semi-abelian Yang-Mills_, relies on isolating the cubic sector of pure Yang-Mills amplitudes from the four-point contacts that are needed to fully realize non-abelian gauge symmetry. Despite the potency of this theory through one-loop, the construction runs into an obstruction at two-loop, which we describe in section 3.3.
Faced with this obstruction, we study the multi-loop sector of both NLSM and Yang-Mills in section 4 using an ansatz-based color-dual bootstrap. In contrast to much of the available literature on color-dual integrand construction at two-loop [10; 11; 16], our results are completely agnostic to the spacetime dimension; all the polarizations and momenta appearing in our construction will remain formally \(D\)-dimensional. This makes our methods and results particularly well suited for algorithmically extracting rational terms from the \(D\)-dependence of dimensionally regulated loop momenta. The ansatz approach to constructing numerators generally results in an explosion of terms, but in the case of scalar theories there is an added difficulty because the linear equations become dense. We were able to overcome this barrier by employing a custom solver, FiniteFieldSolve, which will soon be made public.
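The elimination step behind this bottleneck is ordinary exact linear algebra over a prime field, which avoids the rational-number blow-up of exact solving over \(\mathbb{Q}\). As a rough illustration (a minimal Python sketch of our own, not the interface of FiniteFieldSolve, which is not yet public), a dense system \(Ax=b\) can be row-reduced modulo a large prime:

```python
# Minimal sketch: solve A x = b exactly over the prime field GF(p).
# Bootstrap constraints on ansatz coefficients reduce to dense systems of this type.
p = 2**31 - 1  # a large prime

def solve_mod_p(A, b, p=p):
    """Gauss-Jordan elimination of the augmented matrix [A|b] over GF(p)."""
    n_rows, n_cols = len(A), len(A[0])
    M = [[a % p for a in row] + [bi % p] for row, bi in zip(A, b)]
    pivots, r = [], 0
    for c in range(n_cols):
        piv = next((i for i in range(r, n_rows) if M[i][c]), None)
        if piv is None:
            continue                        # free column
        M[r], M[piv] = M[piv], M[r]
        inv = pow(M[r][c], p - 2, p)        # modular inverse via Fermat's little theorem
        M[r] = [x * inv % p for x in M[r]]
        for i in range(n_rows):
            if i != r and M[i][c]:
                f = M[i][c]
                M[i] = [(x - f * y) % p for x, y in zip(M[i], M[r])]
        pivots.append(c)
        r += 1
    x = [0] * n_cols
    for row, c in zip(M, pivots):
        x[c] = row[-1]                      # free variables are set to zero
    return x

print(solve_mod_p([[1, 2], [3, 4]], [5, 6]))  # solution of a toy system mod p
```

A production solver additionally reconstructs rational solutions from results modulo one or more primes; the toy routine above is only meant to indicate the nature of the elimination that FiniteFieldSolve automates at scale.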
With this solver, we are able to compute a two-loop integrand for NLSM that globally manifests the duality off-shell. This represents a new benchmark for color-dual representations in non-supersymmetric gauge theories. However, in sharp contrast, we find that Yang-Mills does _not_ permit a globally color-dual representation, even when considering the most general polynomial ansatz of Lorentz covariant kinematics.1 Concretely, the failure point can be
pinpointed to a conflict between the "bowtie" cut and the following Jacobi triple:
\[\big[\text{bowtie cut}\big]\;\Rightarrow\;\big[\text{Jacobi triple of graphs}\big]\qquad\text{(diagrams not recoverable)}\]

## 2 Background

### Color-dual Amplitudes

Gauge-theory loop integrands can be organized as sums over purely cubic graphs \(\Gamma\),

\[{\cal A}_{n}^{L}=\sum_{\Gamma}\frac{1}{S_{\Gamma}}\int\frac{d^{LD}\ell}{(2\pi)^{LD}}\frac{C_{\Gamma}N_{\Gamma}}{d_{\Gamma}}\,, \tag{1}\]

with symmetry factors \(S_{\Gamma}\), color factors \(C_{\Gamma}\), kinematic numerators \(N_{\Gamma}\), and propagator denominators \(d_{\Gamma}\). The color factors obey three-term Jacobi identities of the form

\[C_{i}+C_{j}+C_{k}=0, \tag{2}\]
where \(i\), \(j\), and \(k\) are three graphs. Example Jacobi relations are presented in detail in section 2.3. The duality between color and kinematics is the statement that there exists a way to rearrange factors between the various kinematic numerators \(N_{\Gamma}\) such that the numerators obey the same Jacobi identities as the color factors
\[N_{i}+N_{j}+N_{k}=0. \tag{3}\]
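For orientation (an illustration of ours, not part of the original text), the color side of this statement can be checked numerically for \(\mathfrak{su}(2)\), whose structure constants are \(f^{abc}=\epsilon^{abc}\): the three four-point color factors obey exactly this three-term Jacobi identity.

```python
import numpy as np
from itertools import permutations

# su(2) structure constants f^{abc} = epsilon^{abc}
f = np.zeros((3, 3, 3))
for a, b, c in permutations(range(3)):
    f[a, b, c] = np.linalg.det(np.eye(3)[[a, b, c]])  # sign of the permutation

# Four-point color factors of the s-, t- and u-channel cubic graphs:
#   C_s = f^{a1 a2 b} f^{b a3 a4},  and cyclic in (a1, a2, a3).
Cs = np.einsum('abx,xcd->abcd', f, f)
Ct = np.einsum('bcx,xad->abcd', f, f)
Cu = np.einsum('cax,xbd->abcd', f, f)

# Color Jacobi identity: C_s + C_t + C_u = 0 for every choice of adjoint indices.
print(np.allclose(Cs + Ct + Cu, 0))   # True
```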
If color-kinematics duality can be achieved at the off-shell level, then the kinematic numerators should satisfy an algebra just like the color factors. In theories where the kinematic algebra is known, it is often the algebra of volume preserving diffeomorphisms [24; 25; 26; 27; 28]. When the kinematic algebra is not known explicitly, the kinematic numerators are typically constructed from an ansatz. As an example, the result of such a calculation for four-point NLSM scattering at tree level produces the numerators
\[N_{s}^{\rm NLSM}=s(t-u)\,, \tag{4}\]
\[N_{t}^{\rm NLSM}=t(u-s)\,, \tag{5}\]
\[N_{u}^{\rm NLSM}=u(s-t)\,, \tag{6}\]
where the coupling has been normalized away. These numerators sum to zero by explicit calculation. Note that the \(s\)-channel numerator contains an explicit factor of \(s\) that cancels with the propagator and the same goes for the \(t\)- and \(u\)-channels. The net result is that the amplitude is a local function, just as one would expect for the pion four-point tree amplitude.
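A few lines of computer algebra (a sketch of ours, not from the paper) make both statements explicit: the numerators of eqs. (4)-(6) sum to zero, and the explicit factors of \(s\), \(t\), \(u\) cancel the corresponding propagators, leaving local expressions.

```python
import sympy as sp

s, t = sp.symbols('s t')
u = -s - t                      # massless four-point kinematics: s + t + u = 0

Ns = s * (t - u)
Nt = t * (u - s)
Nu = u * (s - t)

# Kinematic Jacobi identity, mirroring C_s + C_t + C_u = 0
print(sp.simplify(Ns + Nt + Nu))                    # 0

# The explicit factor of s, t, u in each numerator cancels the propagator,
# leaving local (polynomial) contributions to the amplitude
print(sp.simplify(Ns / s), sp.simplify(Nt / t), sp.simplify(Nu / u))
```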
Once color-dual numerators \(N_{\Gamma}\) have been found, they can be used to replace the color factors (\(C_{\Gamma}\to N_{\Gamma}\)) in a _different_ amplitude to produce the corresponding amplitude in a new theory,
\[\tilde{\cal A}_{n}^{L}=\sum_{\Gamma}\frac{1}{S_{\Gamma}}\int\frac{d^{LD}\ell}{(2\pi)^{LD}}\frac{C_{\Gamma}\tilde{N}_{\Gamma}}{d_{\Gamma}}\quad\xrightarrow{\;C_{\Gamma}\to N_{\Gamma}\;}\quad{\cal M}_{n}^{L}=\sum_{\Gamma}\frac{1}{S_{\Gamma}}\int\frac{d^{LD}\ell}{(2\pi)^{LD}}\frac{N_{\Gamma}\tilde{N}_{\Gamma}}{d_{\Gamma}}. \tag{7}\]
This is known as the double-copy construction [4; 5; 6]. Importantly, the \(\tilde{N}_{\Gamma}\) do not need to respect color-kinematics duality. The prototypical example of the double-copy is that YM double copied with YM results in gravity where the gauge invariance of each separate gluon produces the diffeomorphism invariance of the graviton. The double copy can be proven at tree level using the Kawai-Lewellen-Tye (KLT) relations but there is significant evidence that the double copy persists to all loop orders [7; 8; 9; 29; 30]. In addition to pointing at undiscovered structure hidden in the Lagrangian for gravity, the double copy is immensely useful at a utilitarian level since it is much more efficient at producing \({\cal M}_{n}^{L}\) than traditional methods like Feynman rules.
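As a minimal worked instance of eq. (7) at four points (a sketch using the NLSM numerators of eqs. (4)-(6); the normalization conventions are ours), squaring the numerators collapses every propagator pole and leaves a purely local result proportional to \(stu\), as expected for the four-point special Galileon amplitude obtained from the NLSM double copy.

```python
import sympy as sp

s, t = sp.symbols('s t')
u = -s - t                                   # massless four-point kinematics

Ns, Nt, Nu = s*(t - u), t*(u - s), u*(s - t)

# Double copy at four points: replace C_Gamma -> N_Gamma in sum_Gamma C_Gamma N_Gamma / d_Gamma
M4 = Ns**2 / s + Nt**2 / t + Nu**2 / u

print(sp.factor(M4))   # 9*s*t*(s + t) = -9*s*t*u: all poles cancel, leaving a local contact term
```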
Many of the one-loop results presented in section 3 are informed by the tree-level double-copy so we will review the tree case here. The color structure of tree-level amplitudes is particularly simple. A fully color dressed tree amplitude \({\cal A}_{n}\) can be decomposed in terms of the trace basis
\[{\cal A}_{n}=\sum_{\sigma\in S_{n-1}}{\rm Tr}(T^{a_{\sigma_{1}}}T^{a_{\sigma_ {2}}}...T^{a_{\sigma_{n-1}}}T^{a_{n}})A_{n}[\sigma_{1},\sigma_{2},...\sigma_{n -1},n], \tag{8}\]
where \(T^{a}\) are generators of the gauge group and, because of the cyclicity of the trace, one of the legs is held fixed so that the sum runs over the \((n-1)!\) permutations of the remaining external legs. The coefficients of the color factors are the color-ordered partial amplitudes \(A_{n}[...]\). Any tree-level color factor can be converted into a linear combination of Del Duca-Dixon-Maltoni (DDM) half-ladder color factors
\[\text{DDM}[a_{1},a_{2},...a_{n}]\equiv f^{a_{1}a_{2}b_{1}}f^{b_{1}a_{3}b_{2}}f^ {b_{2}a_{4}b_{3}}...f^{b_{n-3}a_{n-1}a_{n}} \tag{9}\]
associated with the graph
\[\big[\text{half-ladder graph with ordered external legs }a_{1},a_{2},\dots,a_{n}\big] \tag{10}\]
by repeated application of the Jacobi identity [31]. The fully color-dressed tree amplitude can then be re-expressed as
\[\mathcal{A}_{n}=\sum_{\sigma\in S_{n-2}}\text{DDM}[a_{1},a_{\sigma_{2}},a_{\sigma_{3}},\dots,a_{\sigma_{n-1}},a_{n}]\,A_{n}[1,\sigma_{2},\dots,\sigma_{n-1},n] \tag{11}\]
with legs \(1\) and \(n\) fixed. Since the sum only contains \((n-2)!\) terms, there must be additional relations - the Kleiss-Kuijf (KK) relations [32] - amongst the partial amplitudes appearing in the right-hand side of eq. (8). The KK relations are generic to any theory with purely adjoint particles. Color-kinematics duality further implies that the KK basis is overcomplete and can be reduced to a basis of \((n-3)!\) amplitudes via the fundamental Bern-Carrasco-Johansson (BCJ) identities [4; 14]
\[\sum_{i=2}^{n-1}k_{1}\cdot(k_{2}+...+k_{i})A_{n}[2,...,i,1,i+1,...,n]=0. \tag{12}\]
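As a quick sanity check (ours, not from the paper), the \(n=4\) instance of this relation can be verified with the four-point pion partial amplitude, which is proportional to the Mandelstam invariant of its first and third legs, \(A[a,b,c,d]\propto(k_{a}+k_{c})^{2}\):

```python
import sympy as sp

s, t = sp.symbols('s t')
u = -s - t   # massless four-point kinematics, s = (k1+k2)^2, t = (k2+k3)^2, u = (k1+k3)^2

# Four-point NLSM partial amplitude, up to overall normalization:
# A[a,b,c,d] is proportional to the Mandelstam invariant of legs a and c.
def A(order):
    pairs = {frozenset({1, 2}): s, frozenset({3, 4}): s,
             frozenset({2, 3}): t, frozenset({1, 4}): t,
             frozenset({1, 3}): u, frozenset({2, 4}): u}
    return pairs[frozenset({order[0], order[2]})]

# Fundamental BCJ relation, eq. (12), at n = 4:
#   k1.k2 * A[2,1,3,4] + k1.(k2+k3) * A[2,3,1,4] = 0
k1k2 = s / 2            # 2 k1.k2 = (k1+k2)^2 = s
k1k23 = -t / 2          # k1.(k2+k3) = -k1.k4 = -t/2
print(sp.simplify(k1k2 * A([2, 1, 3, 4]) + k1k23 * A([2, 3, 1, 4])))   # 0
```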
In terms of constructing explicit color-dual solutions at tree level, it is enough to specify the numerator of the half-ladder because any diagram can be reduced to a half-ladder through successive applications of the Jacobi identity. At one-loop, every diagram can be related to one topology as well, in this case the \(n\)-gon, which can be understood as a forward limit of the half-ladder. Two-loop calculations play an important role in our understanding of color-kinematics duality because this is the first order where multiple basis graphs are required, at least for a generic theory. We refer the reader to Ref. [6] for more background on the topic.
### Color-dual Lagrangians
While the BCJ relations are an on-shell statement about the color-dual nature of scattering amplitudes, one might aspire to trivialize the duality by encoding it in an off-shell Lagrangian of the theory. The double-copy construction of eq. (7) would suggest a manifestly cubic Lagrangian of the form
\[\mathcal{L}=P_{V}^{ab}P_{W}^{\mu\nu}\mathcal{O}_{\mu}^{a}\Box\mathcal{O}_{\nu }^{b}+V^{abc}W^{\mu\nu\rho}\mathcal{O}_{\mu}^{a}\mathcal{O}_{\nu}^{b}\mathcal{ O}_{\rho}^{c} \tag{13}\]
where \(V^{abc}\) and \(W^{\mu\nu\rho}\) are cubic Feynman rules mixing field operators \(\mathcal{O}^{a}_{\mu}\) indexed by quantum numbers \(a\) and \(\mu\). The Feynman rules for this theory are simply:
\[a,\mu\;\xrightarrow{\;k\;}\;b,\nu\;=\;\frac{i}{k^{2}}\,(P_{V}^{-1})^{ab}(P_{W}^{-1})^{\mu\nu}\,,\qquad\big[\text{cubic vertex with legs }(a,\mu),\,(b,\nu),\,(c,\rho)\text{; diagram not recoverable}\big] \tag{14}\]
We have introduced \(P_{V}^{ab}\) and \(P_{W}^{\mu\nu}\) projection operators to encode non-local kinetic structure that could in principle participate in the construction, as is the case for \(DF^{2}\) theory [17]. Generally, these quadratic dressings are simply delta functions, \(P_{V}^{ab}=\delta^{ab}\), or flat space metrics, \(P_{W}^{\mu\nu}=\eta^{\mu\nu}\). As written, the theory will be color dual if the off-shell cubic vertex is antisymmetric and satisfies the Jacobi identity. Specifically,
\[\text{antisymmetry}:\qquad{}_{\mu_{1}}\langle W^{\mu_{2}}\rangle_{\mu_{3}}+\text{cyc}(23)=0 \tag{15}\]
\[\text{Jacobi identity}:\qquad{}_{\mu_{1}}\langle W^{\mu_{2}}W^{\mu_{3}}\rangle_{\mu_{4}}+\text{cyc}(234)=0 \tag{16}\]
where, using the Feynman rules of eq. (14), the bracketed expression is the half-ladder numerator of the following diagram,
\[{}_{\mu_{1}}\langle W^{\mu_{2}}W^{\mu_{3}}\cdots W^{\mu_{n-1}}\rangle_{\mu_{n}}=N\big[\text{half-ladder with ordered legs }\mu_{1},\mu_{2},\dots,\mu_{n}\big]\,, \tag{17}\]
and likewise for the \(V^{abc}\) dressing. Off-shell descriptions of color-dual theories are rare, and only a few examples are available in the literature [24; 25; 26; 27; 33; 34; 35]. Typically, when one goes about constructing color-dual theories directly from a set of cubic interactions, higher point Jacobi and antisymmetry constraints require introducing additional operators [36; 37] or propagating fields [34]. Indeed, as we will show in section 3, Yang-Mills requires introducing a cubic two-form interaction in order to manifest the kinematic algebra just at four-point. Absent information about the kinematic algebra, one can make progress in the construction of loop-level color-dual amplitudes by employing a color-dual bootstrap.
### Color-dual Bootstrap
When an off-shell color-dual Lagrangian is not known, the loop-level amplitude integrand must be constructed from an ansatz. The four conditions we impose on an integrand ansatz are summarized here and elaborated on below.3 After imposing the four constraints below, any remaining coefficients in the ansatz encode "generalized gauge freedom", meaning that the choice of coefficients does not affect the physical integrand or its double copy.
1. **Off-shell Locality**: Diagram numerators are polynomials in momenta and polarizations, e.g., the diagrams only have poles given by the propagators of the diagrams.
2. **Color-kinematics duality**: The numerators obey the same Jacobi identities as the color factors. This implicitly requires that the integrand is expressed in terms of purely cubic diagrams.
3. **Graph symmetries**: A diagram's numerator is only a function of the diagram's topology and labeling. Furthermore, the diagram numerators are invariant under the automorphisms of the diagrams including signs that compensate for color-factor sign changes.
4. **On-shell Unitarity**: The cuts of the ansatz must reproduce the physical unitarity cuts of the theory when internal momenta are taken on-shell.
In order to clarify general statements, the following subsections include examples from a four-point two-loop integrand for some generic color-dual theory like YM. This is the simplest case that demonstrates the full machinery of the color-dual bootstrap. All tree and one-loop processes are generated by single basis diagrams (the half ladder and _n_-gon respectively) so they lack examples of three-term "boundary" Jacobi relations to be described shortly. Choosing the four-point two-loop integrand also has the advantage that this process appears prominently in this paper.
**Off-shell Locality.** The kinematic numerators must be \(D\)-dimensional, Lorentz invariant, local (polynomial) functions of the graph kinematics with the correct power counting and external states. Locality is physically motivated since Feynman rules produce polynomial numerators. Indeed, the most natural way to guarantee locality and avoid spurious poles is to require the off-shell numerators to be polynomial functions of kinematics. However, insisting on a local numerator is partly a matter of convenience as the ansatz for a rational function would quickly grow out of control without a guiding principle for what terms to include. While these assumptions are well motivated from a physical and complexity standpoint, the literature has featured numerators that violate locality, modify naive power counting through the inclusion of higher spin modes, and break manifest Lorentz invariance [33, 34, 38, 10, 39, 40, 11]. These developments suggest that relaxing locality is a viable alternative when the condition proves too stringent for an ansatz. However, simply requiring on-shell locality may suffice. This alternative will be discussed in section 5.1.
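In practice, "the most general local ansatz" is just the linear span of all monomials of the correct mass dimension in the available Lorentz invariants. The sketch below (illustrative only; the invariant names are placeholders rather than the variables used in the paper) enumerates such monomials for a scalar-theory numerator and attaches a free coefficient to each; the bootstrap constraints then become linear equations on these coefficients.

```python
import sympy as sp
from itertools import combinations_with_replacement

# A toy set of Lorentz invariants that could dress a scalar-theory numerator:
# external Mandelstams and a few loop-momentum invariants (placeholder names).
invariants = sp.symbols('s t l1sq l2sq k1l1 k1l2')

def local_ansatz(degree):
    """All monomials of the given degree in the invariants, each with a free coefficient."""
    monomials = [sp.Mul(*combo) for combo in
                 combinations_with_replacement(invariants, degree)]
    coeffs = sp.symbols(f'a0:{len(monomials)}')
    return sum(c * m for c, m in zip(coeffs, monomials)), coeffs

ansatz, coeffs = local_ansatz(3)
print(len(coeffs), "free coefficients, e.g.", ansatz.args[0])
```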
**Color-kinematics duality.** Color-kinematics duality is a statement about color factors, kinematic numerators, and the accompanying Jacobi relations. Adjoint-type color factors are associated with cubic graphs, so satisfying color-kinematics duality implicitly requires expressing the integrand in terms of such graphs. As mentioned earlier, this is possible to do even for a theory with quartic (or higher multiplicity) vertices by multiplying and dividing by propagators. In the case of the four-point two-loop example promised above, there are 14 such cubic graphs (ignoring tadpoles and bubbles on external legs).
With the integrand expressed in terms of cubic graphs, it is possible to discuss the main ingredient in color-kinematics duality - the Jacobi relations. The color factors in an integrand obey a set of Jacobi identities. Color-kinematics duality states that there _exists_ a way of writing the kinematic numerators of the integrand so that they obey the same set of Jacobi identities. The Jacobi identities can be divided into two categories: _defining_ relations and _boundary_ relations. Defining Jacobi relations can be used to express any graph in terms of a basis. For example, the double box, crossed box, and penta-triangle are related to each other as follows
\[\big[\text{diagrammatic defining Jacobi relation among the double box, crossed box, and penta-triangle; figures not recoverable}\big] \tag{18}\]

The double box and penta-triangle can be taken as the basis. Each of the two basis
graphs receives its own ansatz and the numerator of any other graph can be expressed in terms of these two.
After using the defining Jacobi relations to relate every graph to the basis, any remaining Jacobi relations will be referred to as boundary relations. In other words, boundary relations (indirectly) relate basis elements to themselves. For example, the crossed box numerator is related to itself via the following Jacobi relation
\[\big[\text{diagrammatic boundary Jacobi relation equating the crossed-box numerator to relabeled copies of itself; figures not recoverable}\big]\]
which are just functional relabelings of external legs, \(\{1234\}\to\{2134\}\) and \(\{1234\}\to\{1243\}\). In general, internal edge labels must be tracked as well. Furthermore, care must be taken with the signs in a symmetry relation. To compensate for the antisymmetric color factors, \(f^{abc}\), graph vertices are all totally antisymmetric and hence have signs built into their orientations. When used in conjunction with the Jacobi relations, e.g., eq. (18), the symmetry constraints of non-basis graphs still impose conditions on the basis ansatz. In fact, since the symmetry properties of a graph are directly linked with transformations of its color dressing, the resulting constraints can be thought of as a subclass of the _boundary_ Jacobi relations.
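Enumerating these automorphism-induced relabelings can be mechanized. The sketch below (ours; it uses the one-loop box rather than a two-loop topology to keep the output short) builds a cubic graph with external legs as degree-one nodes and lists the relabelings of \(\{1,2,3,4\}\) induced by its graph automorphisms; the numerator must be invariant under each of them, up to the sign picked up by the color factor.

```python
import networkx as nx
from networkx.algorithms.isomorphism import GraphMatcher

# One-loop box: internal cubic vertices v1..v4 on a cycle, external legs e1..e4.
G = nx.Graph()
nx.add_cycle(G, ['v1', 'v2', 'v3', 'v4'])
for i in range(1, 5):
    G.add_edge(f'v{i}', f'e{i}')

# Self-isomorphisms = automorphisms; record how each one permutes the external legs.
relabelings = set()
for auto in GraphMatcher(G, G).isomorphisms_iter():
    relabelings.add(tuple(auto[f'e{i}'] for i in range(1, 5)))

for r in sorted(relabelings):
    print(r)     # the dihedral relabelings, e.g. (e1,e2,e3,e4) -> (e2,e3,e4,e1)
```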
**On-shell Unitarity.** The final constraints come from ensuring that the ansatz correctly reproduces the generalized unitarity cuts of the desired theory, thus guaranteeing that the numerators encode the correct physical, theory-dependent information. Generalized unitarity has been reviewed in many places [13; 40; 41; 42; 45; 46; 47; 48] and was recently optimized for effective field theories like NLSM [49]. Generalized unitarity equates products of on-shell tree amplitudes, encoded via a graph \(\gamma\), to sums over compatible4 diagram numerators evaluated on the support of the on-shell conditions and weighted by their uncut propagators
Footnote 4: A diagram \(g\) is compatible with the cut diagram \(\gamma\) if \(g\) is isomorphic to \(\gamma\) or to any of the additional factorization channels of \(\gamma\).
\[\text{Cut}[\gamma]=\sum_{\begin{subarray}{c}\text{states}\\ \text{crossing }E(\gamma)\end{subarray}}\ \prod_{v\in V(\gamma)}A^{\text{tree}}(v)=\sum_{g\ \text{compatible with}\ \gamma}\frac{N[g]\big|_{\text{cut}}}{\prod_{\text{uncut}}\ell^{2}}\,, \tag{21}\]
where \(V(\gamma)\) are the vertices of \(\gamma\) and \(E(\gamma)\) its edges. The sum over states hides much of the complexity of cut construction: for scalars the process is trivial while spinning states greatly complicate matters (see Refs. [42; 50] for recent discussion and detailed examples of evaluating state sums). As an example of the two types of expressions appearing in eq. (21), the iterated two-particle cut can be represented as a product of trees via
\[\big[\text{diagrammatic equations (22) and (23): the iterated two-particle cut, drawn with gray tree blobs on the left-hand sides and expanded in terms of cut diagram numerators; figures not recoverable}\big]\]
When drawing cut diagrams, we will use gray blobs as in the left-hand sides of eqs. (22) and (23) and all drawn legs are assumed to be on-shell. When drawing numerator contributions to cuts, we will not use blobs on vertices, and will use dashed lines bisecting edges to denote cut lines and use colored edges to highlight the uncut propagators. From now on we will leave the cut \(\delta(\ell^{2})\) factors implicit.
In color-charged theories, both sides of the cut can be further decomposed according to the color algebra. For theories charged under the adjoint of \(SU(N)\) via \(f^{abc}\)-dressed amplitudes, it is convenient to project onto a specific element of the Del Duca-Dixon-Maltoni (DDM) color basis [31]. This is done by only inserting color-ordered amplitudes, _a la_ the right hand side of eq. (11) in the product of trees, and restricting the set of compatible diagrams to those whose color factors reduce to the appropriate DDM element when summing over _only the uncut color contractions_. In this situation, the color becomes an overall factor on the entire cut equality and thus can be ignored. Cuts that are organized in this manner are known as "color-ordered" cuts, in the same sense as color-ordered tree amplitudes. Since the DDM basis (and dual Kleiss-Kuijf amplitude basis [32]) have \((n-2)!\) elements, one generally needs to evaluate \((n_{1}-2)!(n_{2}-2)!...\) separate color-orderings of the same cut, where \(n_{1},\,n_{2}...\) are the multiplicities of the amplitudes making up the cut. As an example, consider the "bowtie" cut topology - one of the factorization channels of the iterated unitarity cut - that has two topologically distinct color-ordered expansions
\[\big[\text{diagrammatic equations: the two topologically distinct color-ordered expansions of the bowtie cut; figures not recoverable}\big]\]

A further complication comes from tadpole and bubble-on-external-leg (BEL) graphs. In a BEL graph, an
uncut propagator is set to zero by an _internal cut condition_, for instance,
\[\big[\text{cut of a graph with a bubble on an external leg; figure not recoverable}\big] \tag{26}\]
Fully resolving both of these problems is beyond the scope of the current work, so we will simply not impose any constraints that require handling these types of cuts.
## 3 One-loop cubic construction
Here we will describe why constructing the full Yang-Mills kinematic algebra (outside the self-dual sector) is generically hard, and why the same reason makes NLSM beyond one-loop hard as well. The major obstacle stems from the vector state sum mixing with higher-point contacts. Starting with the Yang-Mills Lagrangian,
\[\mathcal{L}^{\text{YM}}=-\frac{1}{2}(\partial_{\mu}A_{\nu})^{2}+f^{abc}\partial_{\mu}A^{a}_{\nu}A^{b}_{\mu}A^{c}_{\nu}+f^{abe}f^{ecd}A^{a}_{\mu}A^{b}_{\nu}A^{c}_{\mu}A^{d}_{\nu} \tag{3.1}\]
we can express the YM three-point interaction in terms of the following vertex in Lorenz gauge:
\[\big[\text{cubic vertex diagram with legs }i,j,k\big]=(\varepsilon_{i}\varepsilon_{j})(\varepsilon_{k}p_{i})\,. \tag{3.2}\]
This kinematic vertex is antisymmetric in \(1\leftrightarrow 2\) exchange, and from it we can construct the full Lorenz gauge Yang-Mills vertex by summing over cyclic permutations,
\[V^{(123)}_{\text{YM}}=\big[\text{cubic vertex diagram with legs }1,2,3\big]+\text{cyc}(123)\,. \tag{3.3}\]
Contracting this with the vector state sum, we can see that the cubic Yang-Mills vertex does not satisfy the four-point Jacobi identity,
\[{}_{1}\langle V^{2}_{\text{YM}}V^{3}_{\text{YM}}\rangle_{4}+\text{cyc}(234)=\varepsilon_{(12)}\varepsilon_{(34)}(s_{13}-s_{23})+\text{cyc}(234)\neq 0 \tag{3.4}\]
where \(\varepsilon_{(ij)}=\varepsilon_{i}\cdot\varepsilon_{j}\). One way to absorb the remainder is with the four-point contact of eq. (27), which preserves non-abelian gauge invariance off-shell. However, this additional term can also be absorbed into the definition of the four-point kinematic numerator in a way that preserves the cubic graph construction as follows,
\[{}_{1}\langle V^{2}_{\text{YM}+B}V^{3}_{\text{YM}+B}\rangle_{4}={}_{1}\langle V ^{2}_{\text{YM}}V^{3}_{\text{YM}}\rangle_{4}+s_{12}(\varepsilon_{(13)} \varepsilon_{(24)}-\varepsilon_{(14)}\varepsilon_{(23)}). \tag{31}\]
Note that this new definition of the \(s\)-channel numerator contains two factors of \(\varepsilon_{(ij)}\) that contract spacetime indices _across_ the factorization channel, suggesting the inclusion of a spin-2 mode. As such, we use \(V_{\text{YM}+B}\) to denote the inclusion of an additional two-form, as was studied in Ref. [34] for constructing the NMHV Yang-Mills Lagrangian. After the introduction of the two-form, the numerators now satisfy the Jacobi identity
\[{}_{1}\langle V_{\text{YM}+B}^{2}V_{\text{YM}+B}^{3}\rangle_{4}+\text{cyc}(234 )=0\,, \tag{3.6}\]
where the new cubic Lagrangian takes the form,
\[\mathcal{L}^{\text{YM}+B}=-\frac{1}{2}(\partial_{\mu}A_{\nu})^{2}-B_{\mu\nu} \Box\tilde{B}_{\mu\nu}+f^{abc}\partial_{\mu}A_{\nu}^{a}A_{\mu}^{b}A_{\nu}^{c}+ f^{abc}(B_{\mu\nu}+\Box\tilde{B}_{\mu\nu})^{a}A_{\mu}^{b}A_{\nu}^{c}\,. \tag{3.7}\]
Of course, higher multiplicity would likely require further redefinition of the three-point vertex to satisfy Jacobi identities on all internal edges. While introducing successively higher spin states could in principle work at tree level, we will see that it is not consistent with what we find at general loop order. We will comment on this in section 5.2. For now, we will study how to construct one-loop integrands consistent with color-kinematics by isolating the cubic sector of the state sum above.
As can be seen above, the Jacobi identity of eq. (3.4) fails only in terms with two factors of polarization dot products, \((\varepsilon\varepsilon)^{2}\). If we restricted ourselves to just considering factors with a single polarization dot product, the Jacobi identity of Yang-Mills would be satisfied _off-shell_ and in arbitrary spacetime dimensions
\[{}_{1}\langle V_{\text{YM}}^{2}V_{\text{YM}}^{3}\rangle_{4}\big{|}^{( \varepsilon\varepsilon)^{1}}+\text{cyc}(234)=0\,. \tag{3.8}\]
This restriction to the \((\varepsilon\varepsilon)^{1}\) cubic sector of Yang-Mills is closely related to the MHV decomposition of color-dual numerators [55; 56; 2, 57]. In the MHV sector it is possible to choose the reference vectors so that all dot products of polarization vectors vanish, \((\varepsilon\varepsilon)\to 0\), except those involving one of the positive helicity polarizations, \((\varepsilon_{1}^{+}\varepsilon_{i}^{-})\neq 0\). We describe this in detail in appendix B. By focusing on \((\varepsilon\varepsilon)\) structure rather than specific 4D helicity states, we will be able to construct dimension agnostic color dual integrands at one-loop.
Thus for our approach at one-loop, we aim to build color-dual integrands directly from \(D\)-dimensional kinematic factors that are \(\mathcal{O}((\varepsilon\varepsilon)^{1})\) at tree-level, and \(\mathcal{O}((\varepsilon\varepsilon)^{0})\) at one-loop. The \(D\)-dimensional organizational principle underlying this construction can be understood in terms of a vector amplitude decomposition introduced by one of the authors [58],
\[A_{(\sigma)}^{\text{YM}}=\sum_{k=0}^{\lfloor|\sigma|/2\rfloor}\sum_{\rho\in S_{\sigma}^{2|k}}\varepsilon_{(\rho)}\Delta_{(\sigma)}^{(\rho)}\,. \tag{3.9}\]
where we have introduced the shorthand notation, \(\varepsilon_{(ij)\cdots(kl)}\equiv(\varepsilon_{i}\varepsilon_{j})\cdots(\varepsilon_{k}\varepsilon_{l})\), and \(S_{\sigma}^{2|k}\) is the set of ways of choosing \(k\) disjoint pairs of external legs from the color-ordered label list, \(\sigma\). For example, at four-point, \(S_{\sigma}^{2|1}=\{(12),(13),(14),(23),(24),(34)\}\), and \(S_{\sigma}^{2|2}=\{(12)(34),(13)(24),(14)(23)\}\).
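The sets \(S^{2|k}_{\sigma}\) are simply the ways of selecting \(k\) disjoint pairs from the external labels, and the four-point examples above can be reproduced in a few lines (a sketch, not code from the paper):

```python
from itertools import combinations

def perfect_matchings(labels):
    """All ways of splitting an even list of labels into unordered disjoint pairs."""
    if not labels:
        return [()]
    first, rest = labels[0], labels[1:]
    result = []
    for partner in rest:
        remaining = [x for x in rest if x != partner]
        for tail in perfect_matchings(remaining):
            result.append(((first, partner),) + tail)
    return result

def pair_sets(labels, k):
    """S^{2|k}: choose 2k of the labels and split them into k disjoint pairs."""
    out = []
    for chosen in combinations(labels, 2 * k):
        out.extend(perfect_matchings(list(chosen)))
    return out

labels = (1, 2, 3, 4)
print(pair_sets(labels, 1))   # (12),(13),(14),(23),(24),(34)
print(pair_sets(labels, 2))   # (12)(34), (13)(24), (14)(23)
```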
The sum thus selects out kinematic building blocks, \(\Delta^{(\rho)}_{(\sigma)}\), that are each weighted by different polarization dot products. Some simple four-point examples include,
\[\Delta^{(13)}_{(1234)}=\frac{(k_{1}\varepsilon_{2})(k_{3}\varepsilon_{4})}{s_{12 }}+\frac{(k_{1}\varepsilon_{4})(k_{3}\varepsilon_{2})}{s_{14}},\qquad\Delta^{( 13)(24)}_{(1234)}=1,\qquad\Delta^{(12)(34)}_{(1234)}=\frac{s_{13}}{s_{12}}\,. \tag{3.10}\]
We direct the reader to Ref. [58] for further details. The expansion of eq. (3.9) comes with the added advantage of making the transmutation relations of [59] absolutely manifest. By a simple mass-dimension argument [55], we know the tree-level cubic sector of Yang-Mills must be at \(\mathcal{O}((\varepsilon\varepsilon)^{1})\),
\[A^{\text{cubic-YM}}_{(\text{tree})}=\sum_{\rho\in S^{2|1}_{\sigma}}\varepsilon_{(\rho)}\Delta^{(\rho)}_{(\sigma)}\,, \tag{3.11}\]
while at one-loop, the cubic sector corresponds to \(\mathcal{O}((\varepsilon\varepsilon)^{0})\) in polarization dot products,
\[A^{\text{cubic-YM}}_{(\text{1-loop})}=\sum_{\rho\in S^{2|0}_{\sigma}} \varepsilon_{(\rho)}\Delta^{(\rho)}_{(\sigma)}\,. \tag{3.12}\]
When decomposed in this way, the polarization stripped building blocks must obey a set of Ward-identities between different "helicity", or \((\varepsilon\varepsilon)^{n}\), sectors in order for the full amplitude to be gauge invariant,
\[\Delta^{(\rho)}_{(\sigma)}\Big{|}_{\epsilon_{i}\to k_{i}}=-\sum_{j\in\rho^{c}} (k_{i}\varepsilon_{j})\Delta^{(\rho\cup(ij))}_{(\sigma)}\,. \tag{3.13}\]
With this in hand, we can construct a Lagrangian description of the cubic sector of Yang-Mills and will demonstrate that the resulting amplitudes with manifestly color-dual Feynman rules are equivalent to both SDYM and NLSM through one-loop.
### Semi-abelian Yang-Mills theory
As we argued above, if we include factors of \((\varepsilon\varepsilon)^{n\geq 1}\) at tree-level, then we need to keep the Yang-Mills four-point contact for color-kinematics duality to be restored on-shell. However, the contrapositive is also true - if we omit the four-point Yang-Mills vertex, then we only need terms that contribute to the manifestly color-dual cubic sector of the theory. We call this manifestly color-dual theory _semi-abelian Yang-Mills_,
\[\mathcal{L}^{\text{semi-YM}}=-\frac{1}{2}\text{tr}\left[\bar{F}_{\mu\nu}F^{ \mu\nu}\right] \tag{3.14}\]
where
\[\bar{F}^{a}_{\mu\nu}=\partial_{\mu}\bar{A}^{a}_{\nu}-\partial_{\nu}\bar{A}^{a}_ {\mu} \tag{3.15}\]
\[F^{a}_{\mu\nu}=\partial_{\mu}A^{a}_{\nu}-\partial_{\nu}A^{a}_{\mu}+f^{abc}A^{ b}_{\mu}A^{c}_{\nu} \tag{3.16}\]
and \(A_{\mu}\) is in Lorenz gauge. We can construct this Lagrangian from eq. (3.1) by keeping the right field strength covariant under \(U(N)\), and making the left field strength gauge covariant under \(U(1)^{N^{2}}\). Note that since \(U(1)\) covariance is identical to \(U(1)\) invariance, the amplitudes
of this semi-abelian theory vanish under \(\bar{\varepsilon}(k)\to k\), where \(\bar{\varepsilon}\) is the polarization of an external \(\bar{A}_{\mu}\) vector.
Here we can think of the abelian vector as a background field that sources on-shell \(A_{\mu}\) currents. Indeed, as we describe in appendix A, semi-abelian Yang-Mills theory is at the heart of \(YZ\)-theory [33], \(J\)-theory [26, 27], self-dual Yang-Mills [24], and Chern-Simons theory [25]. Specifically, semi-abelian YM is simply a clever reinterpretation of the Lagrangian obtained by integrating an auxiliary field into the \(J\)-theory equations of motion. The Feynman rule for the cubic vertex is
\[\big[\text{semi-abelian cubic vertex diagram}\big]=(\varepsilon_{1}\bar{\varepsilon}_{3})(\varepsilon_{2}k_{3})-(\varepsilon_{2}\bar{\varepsilon}_{3})(\varepsilon_{1}k_{3}) \tag{3.17}\]
where the incoming arrow is the abelianized gauge field, and the outgoing arrows the on-shell non-abelian vectors. The propagator is simply
\[A^{\nu}\ \xrightarrow{k\to}\bar{A}^{\mu}\ =\frac{i}{k^{2}}\eta^{\mu\nu}. \tag{3.18}\]
We can use the Ward identity of eq. (3.13) to check that the amplitudes are indeed gauge invariant when the _abelian_ polarizations are taken to be longitudinal, \(\bar{\varepsilon}\to k\). Furthermore, a simple calculation shows that the four-point correlation function of this theory satisfies the Jacobi identity of eq. (16) _off-shell_. This ensures that color-kinematics duality holds to all multiplicity and loop order. We also note that semi-YM does not have any ghosts from non-abelian gauge symmetry that could spoil color-kinematics at loop level. Due to the Feynman rules of this theory, the amplitudes are non-vanishing only at tree-level and one-loop. We demonstrate the implications of this property in section 3.3.
As noted above, we can select out the manifestly cubic sector of semi-abelian Yang-Mills from the full theory of eq. (3.1) by selecting only \((\varepsilon^{+}\varepsilon^{+})\) in light-cone gauge (i.e., SDYM) and one-minus at tree-level or by plugging in the on-shell states of \(J\)-theory [26, 27] and \(YZ\)-theory [33]. Indeed, semi-YM is just the following sum over building blocks in the expansion of Yang-Mills given in Ref. [58],
\[A(A_{1},...,\bar{A}_{i},...,A_{n})_{\text{tree}}=\sum_{i\neq j}(\varepsilon_{ i}\varepsilon_{j})\Delta^{(ij)}_{\text{tree}}\sim\sum_{i\neq j}(\varepsilon_{i} \varepsilon_{j})\sum_{a}\,[(\varepsilon k)^{n-2}(kk)^{3-n}]_{a}\,, \tag{3.19}\]
and similarly so at one-loop,
\[A(A_{1},...,A_{n})_{\text{1-loop}}=\Delta^{(\varnothing)}_{\text{1-loop}} \sim\sum_{a}\,[(\varepsilon k)^{n}(kk)^{-n}]_{a}\,, \tag{3.20}\]
where \([\cdots]_{a}\) are terms with appropriate powers of \((\varepsilon k)\) and \((kk)\). As we describe in detail in appendix A, to recover NLSM at one-loop we need to extract the \(D\)-dependent part that corresponds to an internal \(\bar{Y}Y\)-loop from extra-dimensional scalars
\[A^{\rm NLSM}_{\rm 1\text{-loop}}\equiv\partial_{D}\Delta^{(2)}_{\rm 1\text{-loop}} \big{|}_{\varepsilon\to k}\,. \tag{3.21}\]
Why must we take the derivative with respect to \(D\)? After all, \(J\)-theory and \(YZ\)-theory both have well defined propagators for internal \(\bar{J}J\) and \(\bar{Z}Z\) propagators. However, as we discuss in appendix A, all the \(J\) and \(Z\) states must be on-shell in order to produce NLSM amplitudes. Thus, the unitarity cuts of \(J\) theory at one-loop will not produce NLSM amplitudes. However, the internal \(YY\)-loop is a valid forward limit for producing NLSM amplitudes, since as constructed \(YZ\)-theory matches to NLSM for off-shell \(Y\)-particles. We now provide explicit expressions for these one-loop amplitudes in the next section.
### One-loop color-dual integrands
Our first application of this theory for color-dual construction at loop level is for self-dual Yang-Mills (SDYM). As we have done throughout the text, we will set all coupling constants to unity. At tree-level, the off-shell cubic vertices can be written in terms of light cone coordinates [24]
\[X(p,k)=p_{u}k_{w}-p_{w}k_{u}. \tag{3.22}\]
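A short symbolic check (ours, not from the paper) makes the underlying kinematic algebra explicit: with \(X\) viewed as structure constants of area-preserving diffeomorphisms, the three-term identity \(X(k_{1},k_{2})X(k_{1}{+}k_{2},k_{3})+\text{cyc}(123)=0\) holds identically in the light-cone components.

```python
import sympy as sp

# Light-cone components (u, w) of three momenta
k = [sp.symbols(f'k{i}u k{i}w') for i in (1, 2, 3)]

def X(p, q):
    """SDYM cubic vertex, eq. (3.22): X(p,q) = p_u q_w - p_w q_u."""
    return p[0]*q[1] - p[1]*q[0]

def add(p, q):
    return (p[0] + q[0], p[1] + q[1])

k1, k2, k3 = k
jacobi = (X(k1, k2)*X(add(k1, k2), k3)
          + X(k2, k3)*X(add(k2, k3), k1)
          + X(k3, k1)*X(add(k3, k1), k2))
print(sp.expand(jacobi))   # 0: the vertex obeys a kinematic Jacobi identity off-shell
```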
A derivation of this Feynman rule and the definition of light cone coordinates can be found in appendix A. At loop-level, this construction needs to be analytically continued to general dimension in order to apply dimensional regularization at one-loop. This can be achieved by using the cubic semi-YM vertices of the previous section,
\[\big[\text{cubic vertex diagram}\big]=\mathcal{X}(k_{1},k_{2})=(\varepsilon_{1}\bar{\varepsilon}_{3})(\varepsilon_{2}k_{3})-(\varepsilon_{2}\bar{\varepsilon}_{3})(\varepsilon_{1}k_{3}). \tag{3.23}\]
Plugging in on-shell all-plus helicity states in light-cone gauge will yield precisely the 4D SDYM vertex, up to an unphysical phase
\[\mathcal{X}(k_{1}^{+},k_{2}^{+})=\frac{\langle 12\rangle^{3}}{\langle 23 \rangle\langle 31\rangle}\sim\langle 12\rangle=k_{1,u}k_{2,w}-k_{1,w}k_{2,u}\,, \tag{3.24}\]
where \(\mathcal{X}(k_{1}^{+},k_{2}^{+})=A^{\rm YM}(1^{+},2^{+},3^{-})\) in light cone gauge. In the second equality, momentum conservation has been applied to redefine the spinor bracket, \(\langle 12\rangle\to X(k_{1},k_{2})\), whose definition is given in appendix B. Of course, the form of eq. (3.23) has the advantage of permitting a \(D\)-dimensional construction of the one-loop integrand. As an example, the four-point box numerator is
\[\big[\text{box diagram}\big]=\langle\mathcal{X}(k_{1},\ell_{1})\mathcal{X}(k_{2},\ell_{2})\mathcal{X}(k_{3},\ell_{3})\mathcal{X}(k_{4},\ell_{4})\rangle\,, \tag{3.25}\]
where \(\ell_{i}=\ell-(k_{1}+k_{2}+\cdots+k_{i})\) and the bracket \(\langle\cdots\rangle\) indicates that we have applied the gauge fixed state projector, \(\sum\varepsilon^{\mu}_{(+\ell_{i})}\varepsilon^{\nu}_{(-\ell_{i})}=\eta^{\mu\nu}\), on all internal polarizations. In general, the \(n\)-gon diagram is
\[N^{\rm SDYM}_{n\text{-gon}}=\big\langle\,\mathcal{X}(k_{1},\ell_{1})\,\mathcal{X}(k_{2},\ell_{2})\cdots\mathcal{X}(k_{n},\ell_{n})\,\big\rangle \tag{3.26}\]
Recall that all other numerators can be obtained from the \(n\)-gon through Jacobi. Due to the state-sum of internal loop factors, \(\sum\varepsilon_{(+\ell)}\varepsilon_{(-\ell)}\sim D\), the integrand above depends explicitly on the spacetime dimension, \(D\). By taking a derivative5, we can recover the integrand needed for the all-plus one-loop amplitudes with an internal scalar
Footnote 5: Another way to understand this derivative is that it selects the large \(D\) behavior of the integrand.
\[\partial_{D}N^{\rm SDYM}_{n\text{-gon}}=\big[\,n\text{-gon numerator with an internal scalar running in the loop; diagrammatic expression not recoverable}\,\big] \tag{3.27}\]
This integrand numerator with internal scalar loop is precisely what one would obtain from the Feynman rules of \(YZ\)-theory, absent the \(\bar{Z}Z\) internal vector loop. For more background, we refer the reader to appendix A. When plugging in the on-shell states of \(YZ\)-theory, we thus obtain the following expression for the NLSM one-loop \(n\)-gon numerator
\[N^{\rm NLSM}_{n\text{-gon}}=\left[\partial_{D}N^{\rm SDYM}_{n\text{-gon}}\right]^{\varepsilon\to k}=\big[\,\text{expression built from the variables }\llbracket ij\rrbracket\text{; not recoverable}\,\big] \tag{3.28}\]
where we have defined the antisymmetric kinematic variable, \(\llbracket ij\rrbracket=\ell_{i}^{2}-\ell_{j}^{2}=2(k_{i}\cdot\ell_{i})\). We have verified through 10-point one-loop that this \(n\)-gon expression is a valid color-dual representation for NLSM. Thus, composing the \(n\)-gon numerators of eq. (3.26) and eq. (3.28) would yield \(D\)-dimensional integrands that project down to the 4D all-plus Born-Infeld one-loop amplitudes studied in Ref. [60].
Before proceeding, we note that the above definition does give rise to "pathological" bubble-on-external-leg (BEL) diagrams, discussed previously in section 2.3. However, one can show that these diagrams integrate to zero for spacetime dimension, \(D>2\), and thus can be disregarded as unphysical. For the interested reader, in appendix C we provide a detailed overview of this dimensional regularization of the relevant BEL diagram.
### Two-loop obstruction
At two-loop, introducing terms that conspire with four-point contacts is unavoidable. At one-loop, we were able to avoid internal contractions of \(\bar{A}_{\mu}A^{\mu}\) by selecting appropriate external
states. However, at two-loop when choosing all external \(A_{\mu}\) states, the amplitude vanishes in semi-YM theory
\[{\cal A}_{\text{2-loop}}^{\text{semi-YM}}(A_{\mu},A_{\mu},A_{\mu},A_{\mu})=0, \tag{3.29}\]
that is, the theory is one-loop exact. In order to produce non-vanishing interactions, we would need to reintroduce \(D\)-dimensional vertices from the full Yang-Mills Lagrangian of eq. (3.1) that we dropped in our construction of semi-abelian YM. In terms of cubic graphs, the additional interaction must necessarily carry the opposite assignment of barred \(\bar{A}_{\mu}\) and unbarred \(A_{\mu}\) fields relative to that of eq. (3.17). Reintroducing these oppositely oriented vertices allows for new internal contractions of \(\bar{A}_{\mu}A^{\mu}\),
\[{\cal A}_{\text{2-loop}}^{\text{YM}}(A_{\mu},A_{\mu},A_{\mu},A_{\mu})= \tag{3.30}\]
where we used white dots to indicate interaction vertices of weight6\({\cal W}[\bar{A}\bar{A}A]=-1\), rather than the isolated semi-YM vertex which always carries weight \({\cal W}[\bar{A}AA]=+1\). This immediately runs into the difficulty of introducing the four-point contact needed for color-kinematics to be satisfied on all internal edges. Therefore, by a simple weight counting argument, one can see that including these wrong-sign interactions is unavoidable at two-loop and higher.
Footnote 6: We choose the convention that the abelianized gauge fields carry weight \({\cal W}[\bar{A}]=-1\), and non-abelian vectors carry \({\cal W}[A]=+1\).
Thus, to construct two-loop numerators prescriptively, as we have done at one-loop, would require knowledge of the full kinematic algebra for Yang-Mills off-shell. Since this is presently unavailable, we will tackle the two-loop integrand using an ansatz approach.
## 4 Two-loop four-point bootstrap
As we have just seen, it is possible to coerce tree numerators into one-loop numerators at any multiplicity for pions and related theories, but these methods cannot be reapplied to generate higher-loop numerators. For pions in particular, the two-loop no-go statement is only for _one particular representation of the theory_, so it does not completely preclude the existence of a two-loop color-dual integrand. We thus turn to the color-dual bootstrap method described in section 2.3 to construct a color-dual representation of two-loop four-point NLSM. Because of the similarity of the problem setup, we will also use the opportunity to revisit the construction of a color-dual representation of pure YM, extending the search space beyond what was covered in Ref. [13] to include the most general local ansatz.
Both NLSM and YM are built on the same set of cubic graphs, and thus their defining Jacobi relations lead to the same basis graphs. Ignoring tadpole and BEL graphs, there are 14 cubic four-point two-loop graphs (see fig. 1), which are related to each other via 21 Jacobi relations.
As mentioned in section 2.3, color-kinematics duality ensures that the numerator of every graph can be expressed in terms of a basis of the double box and penta-triangle,
[Eqs. (4.1)–(4.2), which display the double-box and penta-triangle basis numerators and define the irreducible scalar products used to build their ansätze, could not be recovered from the extracted source.]
The residual generalized gauge freedom leaves the option for enforcing additional aesthetic constraints. All remaining generalized gauge parameters could be set to zero, but a simpler and more insightful result can be obtained through physical arguments. \(\mathcal{N}=4\) SYM provides several hints for further conditions to impose on the ansatz. For example, for maximally supersymmetric gauge theory it is possible to enforce the no-triangle hypothesis, manifest loop power counting, and, for four-point and up to at least six loops, strip off a factor of \(stA^{\rm tree}\) from the integrand [7; 61; 62; 63; 64; 65; 66]. For the NLSM integrand it is desirable to make color-kinematics duality as manifest as possible. One hope would be to factor out some piece of the one-loop numerator, since this at least manifests antisymmetry for the vertices involving external legs. However, a more fruitful direction is to match onto the one known theory with pion power counting that manifests color-kinematics duality to all loop orders, namely, Zakharov-Mikhailov (ZM) theory [67; 28]. ZM theory is governed by the Lagrangian
\[\mathcal{L}^{\rm ZM}=\frac{1}{2}(\partial\varphi)^{2}+gf^{abc}\varphi^{a} \varepsilon^{\mu\nu}(\partial_{\mu}\varphi^{b})(\partial_{\nu}\varphi^{c}), \tag{4.3}\]
where more details can be found in appendix A. The color-stripped Feynman rule for the vertex is \(V(k_{1},k_{2},k_{3})\propto\varepsilon_{\mu\nu}k_{1}^{\mu}k_{2}^{\nu}\equiv \langle k_{1}k_{2}\rangle\) where off-shell color-kinematics duality to all orders in perturbation theory simply follows from the Schouten identity in 2D. Since the theory is purely cubic and manifests off-shell color-kinematics duality, it is trivial to read off the color-dual numerator for any graph. From the presence of the Levi-Civita tensor \(\varepsilon^{\mu\nu}\), the theory clearly resides in two spacetime dimensions where scattering is notoriously plagued by infrared regulation issues. Every on-shell particle is either a right mover, with momentum proportional to \(k_{R}^{\mu}\equiv(1,1)\), or a left mover, with momentum proportional to \(k_{L}^{\mu}\equiv(1,-1)\). On-shell ZM amplitudes naturally divide into sectors corresponding to the configuration of left and right movers, where the scattering in many sectors is rather subtle. Only color-ordered amplitudes can be defined unambiguously from the naive Feynman rules. At four-point, only the alternating sector is free of subtleties and the amplitude for this process vanishes. In equations, \(A[LRLR]=0\) where \(R\) corresponds to a right mover and \(L\) corresponds to a left mover. In every other configuration of right and left movers, such as \(A[RRRR]\), one of the internal propagators is accidentally on-shell since \(k_{R}^{2}=k_{L}^{2}=0\).
It is tempting to try to match the pion four-point two-loop integrand to ZM on the cuts, but given that the cuts only probe the on-shell four-point amplitude, which is either subtle or vanishes for ZM, it is better to compare the off-shell numerators directly. The numerators
Figure 2: The two physical cut topologies shared by NLSM, sGal, BI, and DBIVA
of the two basis graphs are
\[N^{\text{ZM}}\left[\begin{array}{c}\includegraphics[width=145. 524409pt]{201.eps}\end{array}\right] =\langle\ell_{1}\ell_{2}\rangle\langle k_{4}\ell_{1}\rangle\langle\ell_{2}k_ {1}\rangle\langle k_{3}(\ell_{1}+k_{4})\rangle \tag{4.4}\] \[N^{\text{ZM}}\left[\begin{array}{c}\includegraphics[width=145. 524409pt]{201.eps}\end{array}\right] =\langle\ell_{1}\ell_{2}\rangle\langle k_{4}\ell_{1}\rangle \langle\ell_{2}k_{1}\rangle\langle k_{3}(\ell_{1}+k_{4})\rangle \tag{4.5}\]
where \(k_{12}=k_{1}+k_{2}\) and, again, \(\langle ab\rangle\equiv p_{a}^{\mu}\varepsilon_{\mu\nu}p_{b}^{\nu}\) is the color-stripped ZM vertex. As local functions, the numerators never suffer from any of the subtleties of on-shell 2D kinematics. We introduce a proportionality constant \(z\) between the numerators of the two theories, \(N_{\text{NLSM}}|_{2D}=z\ N_{\text{ZM}}\), because we are only interested in how the kinematical structure of \(N_{\text{ZM}}\) can be used to eliminate gauge freedom in \(N_{\text{NLSM}}\).8 Mechanically, the pion numerator is matched to ZM by first taking every possible assignment of right and left movers for the external particles and then restricting the pion numerator to 2D. The loop momenta are restricted to 2D but left off-shell. After performing the 2D matching, there are only 365 parameters of generalized gauge freedom.
Footnote 8: Another reason for introducing the parameter \(z\) is because the ZM and NLSM are believed to differ at loop level even though they are dual classically in 2D [68]. Once all of the constraints from this section are imposed, \(z\) is fixed to 216/565 where the four-point pion tree amplitude is normalized to \(-k_{1}\cdot k_{2}\) and the ZM vertex is normalized to \(k_{1}^{\mu}\varepsilon_{\mu\nu}k_{2}^{\nu}\).
The loop momentum structure of ZM theory provides one final hint for simplifying the pion numerators. A generic term in the ZM numerator (of either basis graph) looks schematically like
\[\ell_{1}^{m}\ell_{2}^{n}k^{12-(m+n)}\text{ where }5\leq m+n\leq 8, \tag{4.6}\]
whereas a generic set of local, cubic Feynman rules could have produced terms with up to \(m+n=12\) powers of loop momenta. When the pion numerators are forced to have the loop power counting structure in eq. (4.6) of ZM theory, the number of generalized gauge freedom parameters reduces to 58. The pion numerators appearing in the ancillary files make exactly this choice.
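As a rough illustration of how such a power-counting filter prunes an ansatz, the following Python sketch enumerates degree-six monomials in the scalar products of three external momenta and two loop momenta and keeps only those in the window of eq. (4.6). The momentum labels are illustrative, and the raw counts do not reproduce the 10,010 or 58 quoted above, since the sketch ignores momentum conservation, on-shell conditions, and the symmetry and cut constraints of the actual construction.

```python
from itertools import combinations_with_replacement

# Independent momenta entering a double-box numerator ansatz: three external
# momenta (k4 eliminated by momentum conservation) and the two loop momenta.
# The labels are illustrative, not the paper's actual bookkeeping.
momenta = ["k1", "k2", "k3", "l1", "l2"]
dots = list(combinations_with_replacement(momenta, 2))   # all Lorentz dot products p.q

def loop_powers(monomial):
    """Total powers of l1 and l2 in a monomial (a tuple of dot products)."""
    m = sum(p == "l1" for dot in monomial for p in dot)
    n = sum(p == "l2" for dot in monomial for p in dot)
    return m, n

# An NLSM-type monomial is a product of six dot products; keep only those whose
# loop-momentum powers fall in the ZM-inspired window of eq. (4.6).
kept = sum(1 for mono in combinations_with_replacement(dots, 6)
           if 5 <= sum(loop_powers(mono)) <= 8)
print("monomials surviving the power-counting window:", kept)
```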
### Double-copy verification
With a cubic color-dual pion representation in hand, we can perform double copies with many theories to extract colorless gravity-like amplitudes. In particular, we produce numerators for special Galileons (sGal), Born-Infeld (BI) theory, and Dirac-Born-Infeld-Volkov-Akulov (DBIVA) by double-copying against pions, pure YM, and \(\mathcal{N}=4\) super-Yang-Mills (sYM) respectively [15; 69; 70]. Such double-copy constructions are important nontrivial checks on the underlying single-copy theories as the Jacobi relations in the single copy conspire to produce the double-copy theory's version of linear diffeomorphism invariance: enhanced
shift symmetry for special Galileons and gauge invariance of the BI photon and DBIVA supermultiplet [71]. All three of these double-copy theories were recently studied extensively by one of the authors and Carrasco from the perspective of direct unitarity cut construction [49]. One of the shared features of these three theories is that they all have the same physical cut topologies in 4D: the two diagrams shown in fig. 2, which are only composed of four-point amplitudes.
To construct the double-copy theories, we source the cubic pure YM numerators from Ref. [13] (which additionally satisfies a relaxed form of color-kinematics duality), and the cubic \(\mathcal{N}=4\) sYM representation from the well-known \(n_{\rm 2box}=n_{\rm cross-box}=s^{2}t\,A^{\rm tree}\)[72]. On the other hand, we use the methods of Ref. [49] to directly compute the needed basis cuts in all three theories without relying on a color-dual pion representation. We find exact agreement in each of the three theories for both physical cuts. Since Ref. [49] has already exhaustively explored the properties of four-point loop amplitudes in these theories, we direct interested readers there for more information.
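For readers unfamiliar with the mechanics of the double copy, the following minimal sympy sketch illustrates the tree-level four-point version of the construction used above: color factors of a cubic-graph expansion are replaced by a second set of numerators, and the kinematic Jacobi identity of either copy is what removes the dependence on generalized gauge choices. The numerators here are generic placeholder symbols, not those of any specific theory in this paper.

```python
import sympy as sp

s, t, alpha = sp.symbols('s t alpha')
u = -s - t                                  # massless four-point kinematics
n_s, n_t = sp.symbols('n_s n_t')            # copy 1 numerators ...
n_u = -n_s - n_t                            # ... with the kinematic Jacobi identity built in
m_s, m_t, m_u = sp.symbols('m_s m_t m_u')   # copy 2 numerators, left completely generic

def double_copy(a_s, a_t, a_u):
    """Cubic-graph double copy: color factors replaced by the second set of numerators."""
    return a_s*m_s/s + a_t*m_t/t + a_u*m_u/u

M = double_copy(n_s, n_t, n_u)

# A generalized gauge shift of copy 1, n_i -> n_i + D_i*alpha, which drops out of the
# gauge-theory amplitude by the color Jacobi identity c_s + c_t + c_u = 0.
M_shifted = double_copy(n_s + s*alpha, n_t + t*alpha, n_u + u*alpha)

delta = sp.simplify(M_shifted - M)
print(delta)                                     # alpha*(m_s + m_t + m_u)
print(sp.simplify(delta.subs(m_u, -m_s - m_t)))  # 0 once copy 2 also satisfies Jacobi
```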
### Two-loop Yang-Mills revisited
Given the close ties between pion and gluon scattering for trees and at one loop, the existence of the two-loop pion numerator prompts us to investigate the most general local numerator for YM, without any of the loop power counting assumptions of Ref. [13]. We follow the same general procedure as in section 4.1, again identifying the double-box and penta-triangle as the Jacobi basis graphs and building the most general ansatz for each of their numerators compatible with the assumptions in section 2.3. Because we are now considering pure Yang-Mills, each monomial in the local ansatz must consist of five Lorentz scalar dot products instead of the six for NLSM, and each monomial must be linear in each of the four external gluon polarizations. Thus, the numerator ansatz for both diagrams will be built from terms of the form
\[N^{\rm YM}\left[\begin{array}{c}\includegraphics[width=142.36475pt]{figs.eps}\end{array}\right]\,\,\text{and}\,\,N^{\rm YM}\left[\begin{array}{c} \includegraphics[width=142.36475pt]{figs.eps}\end{array}\right]\in{\rm span} \left\{(\varepsilon_{i}\varepsilon_{j}),(k_{i}\varepsilon_{j}),(k_{i}k_{j})\right\} \tag{4.7}\]
where the \(k_{i}\) take the same definition as described near eq. (4.2). Without any power counting restrictions imposed, both the double-box and the penta-triangle have an ansatz with 10,010 terms each. Imposing diagram automorphisms and maximal-cut gauge invariance on the two basis diagrams, we reduce the number of terms to 2,235 for the double-box, and 4,133 for the penta-triangle. The minimal set of spanning physical cuts is shown in fig. 3, but we choose to work within the framework of the method of maximal cuts [73] in order to identify the simplest cut that is in tension with the kinematic Jacobi relations. Maximal cuts and symmetries are then imposed on the 14 non-pathological cubic diagrams shown above in fig. 1. Doing so, we are left with 596 total parameters in the ansatz.
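The following toy sympy sketch illustrates the basic linear-algebra step of the bootstrap: write the most general local ansatz, impose a symmetry (or cut) condition as linear equations on its coefficients, and count the surviving free parameters. The six-term ansatz and the single relabeling constraint are purely illustrative stand-ins for the 10,010-term ansätze and the full set of automorphism and cut conditions described above.

```python
import sympy as sp

s12, s13, s23 = sp.symbols('s12 s13 s23')
coeffs = sp.symbols('a0:6')
monomials = [s12**2, s13**2, s23**2, s12*s13, s12*s23, s13*s23]
# Toy ansatz: the most general quadratic polynomial in three Mandelstam-like invariants.
ansatz = sum(c*m for c, m in zip(coeffs, monomials))

# Toy constraint: invariance under the relabeling s12 <-> s13, standing in for a
# diagram automorphism (cut conditions would be imposed the same way).
swapped = ansatz.subs({s12: s13, s13: s12}, simultaneous=True)
conditions = sp.Poly(sp.expand(ansatz - swapped), s12, s13, s23).coeffs()

A, _ = sp.linear_eq_to_matrix(conditions, coeffs)
print("ansatz parameters:        ", len(coeffs))
print("independent constraints:  ", A.rank())
print("surviving free parameters:", len(coeffs) - A.rank())
```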
Proceeding to the next-to-maximal cuts, we continue to avoid pathological diagrams including the newly-appearing type discussed in eq. (2.26). After discarding these types
of cuts as well as those involving cuts of external Mandelstams, there are 8 one-particle-irreducible cut topologies (see fig. 4). Seven of these next-to-maximal cuts are consistent with the kinematic Jacobi relations and symmetries.
Critically, the "bowtie" cut cannot be satisfied by the ansatz once Jacobi relations and symmetries are applied.
In fact, we can make the failure extremely precise. First, start with the diagrams shown in the Jacobi relation from eq. (18), which includes the double-box, crossed-box, and penta-triangle. Then _without_ defining the crossed-box in terms of the other two diagrams, write down the most generic parity-even local ansatz involving four polarizations and six momenta (any combination of external or loop) for each of the three diagrams. Each ansatz will have 10,010 terms initially. Next impose the symmetry constraints on each of the three diagrams, leaving 2,761 free parameters on the double-box, 2,576 free parameters on the crossed-box, and 5,040 free parameters on the penta-triangle. Finally impose the "bowtie" next-to-maximal
Figure 4: The 8 non-pathological 1-particle-irreducible next-to-maximal-cut diagrams used in our cut constraints on the Yang-Mills integrand.
Figure 3: The three spanning physical unitarity cuts of pure Yang-Mills at two loops
cut, which in the _non-planar t-u_ color channel only receives contributions from the double-box
[Eqs. (4.8)–(4.9), giving the bowtie-cut condition and the resulting inconsistency, together with the opening of the concluding section, could not be recovered from the extracted source.]
This failure is captured by eq. (4.9). Thus, obtaining a globally color-dual integrand beyond one-loop requires sacrificing more than just Bose symmetry, and in this paper we have argued that off-shell locality9 must be abandoned for color-dual constructions of multi-loop Yang-Mills. To expand the available function space, we propose that one should consider including rational functions of kinematics, rather than just polynomials.
Footnote 9: An example of this can already be found in the literature, where locality is relaxed in Ref. [10] in order to obtain color-dual two-loop integrands with on-shell 4D states.
We concede that building loop-level numerators from rational functions of the kinematics is rather unnatural from the perspective of point-like quantum field theories. After all, operators that produce rational functions are typically non-local in their construction. The archetypal example of non-local quantum operators is provided by string vertex operators with \(\alpha^{\prime}\) corrections. These operators promote local tree-level amplitudes to those of stringy extended objects via disc integrals over the open string worldsheet
\[\mathcal{A}^{\rm YM}\stackrel{{\alpha^{\prime}}}{{ \longrightarrow}}\mathcal{A}^{\rm OS}\supset t_{8}F^{4}\int_{0}^{1}dx\,\frac{x ^{\alpha^{\prime}s_{12}}(1-x)^{\alpha^{\prime}s_{23}}}{x(1-x)(s_{12}+s_{23})}\,. \tag{5.1}\]
The resulting Veneziano factor of Gamma functions produces rational functions of kinematics, while preserving color-kinematics duality [74; 75; 76; 77]. While introducing some type of worldsheet formulation of color-dual numerators might seem unjustified given the results of this work, our findings certainly suggest that we must do _something_ to relax the constraint of off-shell locality, and realize locality only in the limit of _on-shell_ kinematics. This could be achieved by either modifying the kinetic term or potentially something new and more exotic. Below we provide some simple examples at tree-level of what one might consider for implementing such a construction.
### Non-local construction of scattering amplitudes
As an exemplar of an off-shell non-local structure, consider the simple four-point example of a color-dual representation of Yang-Mills theory. We can define a functional numerator, \(N^{\rm YM}_{(12|34)}\), as follows,
\[N^{\rm YM}_{(12|34)}=\frac{t_{8}F^{4}}{3}\frac{s_{13}-s_{12}}{s_{23}s_{13}} \tag{5.2}\]
where \(N_{s}=N_{(12|34)}\), \(N_{t}=N_{(14|23)}\), and \(N_{u}=N_{(13|42)}\). By construction, this functional numerator is antisymmetric and obeys the Jacobi identity
\[N^{\rm YM}_{(12|34)}+N^{\rm YM}_{(13|42)}+N^{\rm YM}_{(14|23)}=0. \tag{5.3}\]
However, it must also factorize to kinematics that are consistent with the local Feynman rules of eq. (10). Evaluating the residue on the \(s=s_{12}\to 0\) pole we find
\[\text{Res}\left[\frac{N_{s}^{\text{YM}}}{s}\right]^{s=0}=\frac{2}{3}A_{(12|l)}^{\text{YM}}A_{(-l|34)}^{\text{YM}} \tag{5.4}\] \[\text{Res}\left[\frac{N_{t}^{\text{YM}}}{t}\right]^{s=0}=-\frac{1}{3}A_{(12|l)}^{\text{YM}}A_{(-l|34)}^{\text{YM}} \tag{5.5}\] \[\text{Res}\left[\frac{N_{u}^{\text{YM}}}{u}\right]^{s=0}=-\frac{1}{3}A_{(12|l)}^{\text{YM}}A_{(-l|34)}^{\text{YM}}\,, \tag{5.6}\]
with \(t=s_{23}\) and \(u=s_{13}\). Plugging these into a cubic graph representation of Yang-Mills, the tree level version of eq. (1), we find
\[\text{Res}\left[\mathcal{A}_{4}^{\text{YM}}\right]^{s=0}=\frac{1}{3}A_{(12|l)}^{\text{YM}}A_{(-l|34)}^{\text{YM}}(2c_{s}-c_{t}-c_{u})=\mathcal{A}_{(12|l)}^{\text{YM}}\mathcal{A}_{(-l|34)}^{\text{YM}}\,, \tag{5.7}\]
where we have applied the color structure Jacobi identity, \(c_{s}+c_{t}+c_{u}=0\). In light of our findings in eq. (19), building color-dual numerators in this way where locality is only realized on-shell might be a more natural approach. While we can absolutely apply a generalized gauge transformation [4] to restore field theoretic locality to the cubic numerators, this could merely be an aesthetic choice that is only permissible at tree-level.
Moreover, by abandoning traditional notions of off-shell locality, we incidentally have gained enough functional freedom to massage the color-dual numerators into a form that is manifestly gauge invariant for all particles. As an organizational principle, pulling out overall gauge-invariant factors comes with the added advantage of possibly simplifying a loop-level construction of color-dual numerators for Yang-Mills.
### Future Directions
With an eye towards generalizing the non-local construction above to future multi-loop studies, we note that the four-point half-ladder of Yang-Mills secretly makes use of the color-dual structure of NLSM. At four-point, we can construct permutation invariants from the single BCJ basis amplitude of both NLSM and pure YM theory as follows
\[stA_{(s,t)}^{\text{YM}}=t_{8}F^{4}\qquad stA_{(s,t)}^{\text{NLSM}}=stu\,. \tag{5.8}\]
Using this, we can redefine the non-local numerators of eq. (22) so that the vector structure of YM is captured in a permutation invariant prefactor and the kinematic Jacobi identity is entirely due to the NLSM numerators
\[N_{(12|34)}^{\text{YM}}\equiv t_{8}F^{4}\,\frac{N_{(12|34)}^{\text{NLSM}}}{stu}\,, \tag{5.9}\]
where the four-point pion numerator, \(N_{s}^{\text{NLSM}}\), is given in eq. (4). One advantage of this construction is that it reduces some of the \(D\)-dimensional complexity of vector theories that arises
due to the mixing between external polarization and internal loop momenta. But maybe more importantly, it puts all the heavy lifting of functional Jacobi relations on the scalar kinematic numerator, \(N^{\rm NLSM}_{(12|34)}\). Thus, rather than building an ansatz from the irreducible scalar products of eq. (4.7), one might instead consider the following construction
\[N^{\rm YM}_{m,L}=\sum_{i}{\cal O}_{i}{\cal R}^{(i)}_{m,L} \tag{5.10}\]
where \({\cal O}_{i}\) are the on-shell gauge invariant tensor basis elements of [77; 78], and \({\cal R}^{(i)}_{m,L}\) are rational functions of irreducible scalar products of eq. (4.2). It would be interesting if a similar construction as our tree-level example could be uplifted to two-loop using the globally color-dual NLSM integrand that we have computed in this work. We see this as a natural future direction worth investigating that is now made possible by our findings.
###### Acknowledgements.
The authors would like to thank John Joseph Carrasco, Sasank Chava, Clifford Cheung, Kezhu Guo, Nia Robles, Aslan Seifi, Fei Teng, and Suna Zekioglu for insightful conversations, feedback on earlier drafts, and encouragement throughout the completion of this work. This work was supported by the DOE under contract DE-SC0015910 and by the Alfred P. Sloan Foundation. Additionally we would like to acknowledge the Northwestern University Amplitudes and Insight group, the Department of Physics and Astronomy, and Weinberg College for their generous support. Feynman diagrams were typeset using TikZ-Feynman [79].
## Appendix A Catalog of color-dual theories
Below we review the five manifestly color-dual theories that we mention in the text: self-dual Yang-Mills (SDYM) [24], two formulations of NLSM [26; 27; 33], ZM theory [28; 67], and Chern-Simons theory [25]. All of these theories are closely tied together by the semi-abelian YM theory in eq. (3.14), which gives the cubic sector of pure YM. The manifestly color-dual equations of motion for SDYM and NLSM are derived by picking a gauge and then placing a constraint directly on the field strength. In almost all of these theories, Bose symmetry is broken or obscured at the Lagrangian level. Since Bose symmetry is broken, the ensuing kinetic mixing prevents color-dual representations beyond one-loop without the introduction of additional interactions.
### Self-Dual Yang-Mills
The first example in the literature of color-dual Feynman rules is that of Yang-Mills theory in the self-dual sector (SDYM). We summarize the approach of Ref. [24] for extracting the kinematic algebra. To obtain the Lagrangian for self-dual Yang-Mills from standard Yang-Mills, we apply the self-duality condition that equates the non-abelian \(SU(N)\) field strength, \(F_{\mu\nu}=\partial_{\mu}A_{\nu}-\partial_{\nu}A_{\mu}+g[A_{\mu},A_{\nu}]\), to its dual two-form,
\[F_{\mu\nu}=\frac{1}{2}\varepsilon_{\mu\nu\rho\sigma}F^{\rho\sigma}\,.\] (A.1)
Solutions to the self-dual condition automatically satisfy the Yang-Mills equation of motion due to the Bianchi identity,
\[D^{\mu}F_{\mu\nu}=\frac{1}{2}\varepsilon_{\mu\nu\rho\sigma}D^{\mu}F^{\rho\sigma}=0\,.\] (A.2)
Furthermore, field configurations that satisfy eq. (A.1) correspond to instantons which are probed by the path integral in the full non-perturbative description of Yang-Mills. To construct a manifestly perturbative action, one begins by transforming from Cartesian coordinates to light cone coordinates, \((t,x,y,z)\to(u,v,w,\bar{w})\), via
\[u=t+x,\quad v=t-x,\quad w=y+iz,\quad\bar{w}=y-iz\,,\] (A.3)
which yields the invariant line element,
\[ds^{2}=dudv-dwd\bar{w}=dt^{2}-d\vec{x}\cdot d\vec{x}\,.\] (A.4)
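As a quick consistency check of eqs. (A.3) and (A.4), the light-cone combination of differentials reproduces the Minkowski line element; the following short sympy snippet verifies this.

```python
import sympy as sp

dt, dx, dy, dz = sp.symbols('dt dx dy dz', real=True)
du, dv = dt + dx, dt - dx                    # differentials of eq. (A.3)
dw, dwb = dy + sp.I*dz, dy - sp.I*dz

# The light-cone combination reproduces the Minkowski line element of eq. (A.4).
print(sp.expand(du*dv - dw*dwb - (dt**2 - dx**2 - dy**2 - dz**2)))   # 0
```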
Applying this coordinate transformation to the self-dual condition of eq. (A.1), produces three constraint equations on the field strength,
\[F_{uw}=0\] (A.5) \[F_{uv}=F_{w\bar{w}}\] (A.6) \[F_{v\bar{w}}=0\,.\] (A.7)
In light-cone gauge, \(A_{u}=0\), the self-duality conditions restrict the gauge field as follows:
\[A_{u}=0\text{ and }(A.5)\quad\Rightarrow\quad A_{w}=0\] (A.8) \[A_{w}=0\,,\,A_{u}=0\text{ and }(A.6)\quad\Rightarrow\quad \partial_{u}A_{v}=\partial_{w}A_{\bar{w}}\,.\] (A.9)
The last constraint is satisfied by writing the gauge field in terms of a single scalar field \(\Psi\) via
\[A_{v}=\frac{1}{2}\partial_{w}\Psi\quad\text{ and }\quad A_{\bar{w}}=\frac{1}{2} \partial_{u}\Psi\,.\] (A.10)
Using this definition on the first self-dual constraint equation yields the following equation of motion for the dynamical scalar field:
\[(A.5)\text{ and }(A.10)\quad\Rightarrow\quad\Box\Psi+ig[\partial_{u}\Psi, \partial_{w}\Psi]=0\,.\] (A.11)
Adding back in the anti-holomorphic field \(\bar{\Psi}\) as a Lagrange multiplier, we obtain the following Lagrangian for self-dual Yang-Mills theory:
\[\mathcal{L}^{\text{SDYM}}=(\partial\bar{\Psi})(\partial\Psi)-ig\bar{\Psi}[ \partial_{u}\Psi,\partial_{w}\Psi]\,.\] (A.12)
The color-ordered cubic Feynman rule for this theory can be immediately read off as
\[X(k_{1},k_{2})=k_{1,u}k_{2,w}-k_{1,w}k_{2,u}\,,\] (A.13)
which is precisely the bracket \(X(k_{1},k_{2})\) appearing in eq. (3.24).
In this form, the Lagrangian and Feynman rules are manifestly color-dual _off-shell_. The effect of eq. (A.1) on the full YM Lagrangian is to decouple the anti-MHV three-point vertex from the theory, leaving only the MHV three-point vertex. Before proceeding, it is important to note that the mass-dimension of the theory appears to differ from that of Yang-Mills - we comment on this in detail in appendix B.
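The off-shell statement can be checked directly: the light-cone bracket that plays the role of the SDYM structure constant satisfies the kinematic Jacobi identity for arbitrary, not necessarily on-shell, momenta. The following small numerical sketch (random momenta, conventions of eq. (B.7)) illustrates this.

```python
import random

def X(a, b):
    """Light-cone bracket X(a,b) = a_w b_u - a_u b_w (cf. eq. (B.7)); a, b are (u, w) pairs."""
    return a[1]*b[0] - a[0]*b[1]

def add(a, b):
    return (a[0] + b[0], a[1] + b[1])

random.seed(1)
k1, k2, k3 = [(random.random(), random.random()) for _ in range(3)]

# Kinematic Jacobi identity of the cubic SDYM vertex; no on-shell condition is used.
jacobi = (X(k1, k2)*X(add(k1, k2), k3)
          + X(k2, k3)*X(add(k2, k3), k1)
          + X(k3, k1)*X(add(k3, k1), k2))
print(abs(jacobi) < 1e-12)   # True
```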
### YZ-theory
Self-dual Yang-Mills involves explicit reliance on the spacetime dimension, which makes the theory poorly suited for constructing dimensionally-regulated loop-level amplitudes. On the other hand, the \(YZ\)-theory of [33] is an honest \(D\)-dimensional theory that captures the classical physics of NLSM while manifesting the duality between color and kinematics.
One can understand \(YZ\)-theory as a particular dimensional reduction of Yang-Mills in \(D=2d+1\) dimensions [80], down to \(d\)-dimensions. Starting with the Yang-Mills Lagrangian of eq. (3.1), one can redefine the gauge fields, \(A_{M}\), in terms of \(X\), \(Y\) and \(Z\) fields:
\[X_{M} =(X_{\mu},0,-iX_{\mu})\] (A.14) \[Y_{M} =(0,Y,0)\] (A.15) \[Z_{M} =(Z_{\mu},0,iZ_{\mu})\] (A.16)
where \(X\) and \(Z\) are \(d\)-vectors and \(Y\) is a scalar field. Considering the conjugate nature of the \(XZ\) propagator, in this work we make the replacement \(X\to\bar{Z}\). Plugging in this redefinition of the gauge fields produces the following Lagrangian up to cubic order in the interactions
\[\mathcal{L}^{\text{YZ}}=\frac{1}{2}(\partial Y)^{2}+(\partial Z)(\partial\bar {Z})-gf^{abc}\left(\bar{Z}_{\mu\nu}Z^{\mu}Z^{\nu}+[Y,\partial_{\mu}Y]Z^{\mu} \right)\,.\] (A.17)
The Feynman rules for this Lagrangian are simply
\[\begin{array}{c}\includegraphics[width=142.26378pt]{fig/diagram_1.eps}=i( \varepsilon_{3}p_{2})\end{array}\qquad\begin{array}{c}\includegraphics[width=142.26378pt]{fig/diagram_2.eps}=i(\varepsilon_{1}p_{3})(\varepsilon_{2}\bar{ \varepsilon}_{3})-i(\varepsilon_{2}p_{3})(\varepsilon_{1}\bar{\varepsilon}_{3}) \,.\] (A.18)
Note that the pure vector vertex on the right is _exactly_ what was obtained from semi-abelian YM in eq. (3.14). We now review the construction of NLSM numerators at tree-level from \(YZ\)-theory. In this construction, we can define \(D\)-dimensional generators of the kinematic algebra as follows,
\[T^{a}_{ij}=i\varepsilon_{a}(p_{i}-p_{j})\,,\] (A.19)
where momentum conservation requires,
\[p_{a}+p_{i}+p_{j}=0\,.\] (A.20)
The kinematic half-ladder diagrams then take on the following concise form
\[n^{\text{NLSM}}_{(i|a_{1}a_{2}\dots a_{n}|j)}={}_{i}\langle T^{a_{1}}T^{a_{2}} \cdots T^{a_{n}}\rangle_{j}\,.\] (A.21)
Since there are no pole cancelling factors of \(s_{ij}=(p_{i}+p_{j})^{2}\), this definition of the kinematic algebra is manifestly cubic. Thus, the kinematic structure constants defined in terms of these generators are invariant under generalized gauge freedom because there are no quartic vertices. They can be defined implicitly through
\[[T^{a},T^{b}]_{ij}=F^{a}_{b|c}T^{c}_{ij}\,.\] (A.22)
Given this definition, the Feynman rule associated with the kinematic structure constant is
\[iF^{a}_{b|c}=(\varepsilon_{b}p_{ab})(\varepsilon_{a}\bar{\varepsilon}_{c})-(\varepsilon_{a}p_{ab})(\varepsilon_{b}\bar{\varepsilon}_{c})\,,\] (A.23)
where \(p_{ab}=p_{a}+p_{b}\) and \(\varepsilon\) and \(\bar{\varepsilon}\) are the polarizations of the \(Z\)-vector particle and its conjugate field, respectively. The gauge-fixed state sum for this theory is simply
\[\sum_{\rm states}\varepsilon^{\,\mu}_{(p)}\bar{\varepsilon}^{\,\nu}_{(-p)}=\eta^{\mu\nu}\,.\] (A.24)
Notice that the vector state sum is gauge fixed since the \(YZ\) model explicitly chooses Lorenz gauge for the \(Z\) particles, \(\partial_{\mu}Z^{\mu}=0\). Tree-level NLSM amplitudes are recovered from the kinematic structure constants by plugging in
\[\varepsilon^{\mu}_{(p)}=p^{\mu}\qquad\bar{\varepsilon}^{\mu}_{(p)}=\frac{q^{\mu}}{pq}\] (A.25)
for the on-shell polarizations for \(Z\) and \(\bar{Z}\), respectively, where \(q^{2}=0\) is some null reference momentum. The tree-level pion amplitude can then be defined in two equivalent ways,
\[A^{\rm NLSM}=A(...,Y,...,Y,...)\quad\mbox{ and }\quad A^{\rm NLSM}=A(...,\bar{Z},...)\,,\] (A.26)
where the ellipses denote additional on-shell \(Z\)-particles. In a suitable gauge, the kinematic numerators in the latter definition for pion scattering are equivalent to those of \(J\)-theory, where \(\bar{Z}\) corresponds to the root leg of \(J\)-theory [27].
It is instructive to see how both of the constructions in eq. (A.26) produce valid tree-level amplitudes for the pion. First we will start with \(Y\) particles on legs 1 and 4. Applying the Feynman rules above and plugging in on-shell states for the \(Z\)-particles produces the following \(s\)- and \(t\)-channel numerators:
\[n^{YY}_{s} =(T^{2}T^{3})_{14}=s_{12}^{2}\] (A.27) \[n^{YY}_{t} =F^{3}_{2|X}T^{X}_{14}=s_{14}(s_{13}-s_{12})\,.\] (A.28)
Plugging these numerators into the ordered amplitudes \(A(s,t)\) yields the desired result,
\[A^{YY}_{(s,t)}=\frac{n^{YY}_{s}}{s_{12}}+\frac{n^{YY}_{t}}{s_{14}}=s_{13}\,.\] (A.29)
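This assembly is easy to verify symbolically: using only four-point massless kinematics, \(s_{12}+s_{13}+s_{14}=0\), the numerators of eqs. (A.27) and (A.28) reproduce eq. (A.29). A minimal sympy check:

```python
import sympy as sp

s12, s13 = sp.symbols('s12 s13')
s14 = -s12 - s13               # four-point massless kinematics: s12 + s13 + s14 = 0

n_s = s12**2                   # eq. (A.27)
n_t = s14*(s13 - s12)          # eq. (A.28)

A = n_s/s12 + n_t/s14          # ordered amplitude assembled from the two cubic graphs
print(sp.simplify(A - s13))    # 0, i.e. eq. (A.29)
```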
For the \(Z\) and \(\bar{Z}\) configuration, the numerators are
\[n^{\bar{Z}Z}_{s} ={}_{4}\langle F^{3}F^{2}\rangle_{1}=s_{12}(s_{14}-s_{13})p_{2}^{\mu_{1}}+s_{12}^{2}(p_{3}-p_{4})^{\mu_{1}}\] (A.30) \[n^{\bar{Z}Z}_{t} ={}_{2}\langle F^{3}F^{4}\rangle_{1}=s_{14}(s_{12}-s_{13})p_{4}^{\mu_{1}}+s_{14}^{2}(p_{3}-p_{2})^{\mu_{1}}\,,\] (A.31)
where we have used the shorthand
\[{}_{x}\langle F^{a_{1}}F^{a_{2}}\cdots F^{a_{n}}\rangle_{y}\equiv F^{a_{1}}_{x|b_{ 2}}F^{a_{2}}_{b_{2}|b_{3}}\cdots F^{a_{n}}_{b_{n}|y}\,.\] (A.32)
The polarization vector of the \(\bar{Z}\) particle has not been contracted into the numerators above, resulting in the free \(\mu_{1}\) index. These numerators produce the partial amplitude
\[A^{\bar{Z}Z}_{(s,t)}=\frac{n_{s}^{\bar{Z}Z}}{s_{12}}+\frac{n_{t}^{\bar{Z}Z}}{s_{14}}=-s_{13}(p_{2}+p_{3}+p_{4})^{\mu_{1}}=s_{13}\,p_{1}^{\mu_{1}}\,.\] (A.33)
Plugging in the on-shell polarization of the conjugate field in eq. (A.25) produces precisely the desired result of eq. (A.29). Indeed, this construction is valid to all multiplicity at tree-level. One can see this by considering the only two possible factorization channels that contribute to each of these amplitudes, the \(YY\) cut and the \(\bar{Z}Z\) cut,
\[A(...,Y,...,Y,...) \to A(...,Y,...,Y)A(Y,...,Y,...)\] (A.34) \[\to A(...,Y,...,Y,...,Z)A(\bar{Z},...)\,.\] (A.35)
Both factorization channels are valid descriptions of NLSM amplitudes when plugging in the \(Z\) and \(\bar{Z}\) on-shell states. As long as we plug in valid on-shell states for \(Z\) and \(\bar{Z}\) from eq. (A.25), these numerators will produce NLSM ordered amplitudes. Given this special property of \(YZ\)-theory (namely that the \(Z\bar{Z}\) and \(YY\) constructions are equivalent), the \(Y\) particle need not be considered when constructing NLSM amplitudes on shell. This is essentially what \(J\)-theory does, as we will show below.
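The conjugate-vector construction can be checked the same way. Treating \(p_{2}\), \(p_{3}\), \(p_{4}\) as formal placeholders for the four-vectors carrying the free \(\mu_{1}\) index (the numerators of eqs. (A.30)-(A.31) are linear in them), a short sympy computation reproduces eq. (A.33):

```python
import sympy as sp

s12, s13, p2, p3, p4 = sp.symbols('s12 s13 p2 p3 p4')
s14 = -s12 - s13               # four-point massless kinematics

# Numerators of eqs. (A.30)-(A.31); p2, p3, p4 stand in for the four-vectors
# carrying the free mu_1 index, in which the expressions are linear.
n_s = s12*(s14 - s13)*p2 + s12**2*(p3 - p4)
n_t = s14*(s12 - s13)*p4 + s14**2*(p3 - p2)

A = n_s/s12 + n_t/s14
print(sp.simplify(A + s13*(p2 + p3 + p4)))   # 0, i.e. eq. (A.33)
```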
### \(J\)-theory
\(J\)-theory is obtained through a first order reformulation of NLSM in terms of the chiral current [27].10 Two constraints are placed on the chiral current. First, the field strength associated with \(J_{\mu}\) should vanish,
Footnote 10: For related formulations of NLSM that do not treat color-kinematics duality see [81; 82].
\[F_{\mu\nu}(J)=\partial_{\mu}J_{\nu}-\partial_{\nu}J_{\mu}+g[J_{\mu},J_{\nu}]= 0\,,\] (A.36)
which then implies that \(J^{\mu}\) is pure gauge \(J_{\mu}=U\partial_{\mu}U^{-1}\). The second condition is that the chiral current should be in Lorenz gauge,
\[\partial_{\mu}J^{\mu}=0\,,\] (A.37)
which then yields the NLSM equation of motion \(\partial_{\mu}(U\partial^{\mu}U^{-1})=0\). Taking a linear combination of eq. (A.36) and eq. (A.37) yields
\[\Box J^{a\mu}+gf^{abc}J^{b\nu}\partial_{\nu}J^{c\mu}=0\,.\] (A.38)
Integrating in an auxiliary field \(\bar{J}^{\mu}\) as a Lagrange multiplier trivially produces the Lagrangian of semi-abelian YM eq. (3.14) but with the gauge fields renamed to \(J_{\mu}\) and \(\bar{J}_{\mu}\). Since both
theories are in Lorenz gauge, \(J\)_-theory and semi-abelian YM are identical_. Furthermore, this theory is one-to-one with the amplitudes of \(YZ\)-theory when only considering on-shell \(Z\bar{Z}\) states. As such, the single vertex in this theory is the same as the pure vector vertex of \(YZ\) theory in Lorenz gauge,
\[i(\varepsilon_{1}p_{3})(\varepsilon_{2}\bar{\varepsilon}_{3})-i(\varepsilon_{2}p_{3})(\varepsilon_{1}\bar{\varepsilon}_{3})\,.\] (A.39)

### Zakharov-Mikhailov theory

ZM theory, reviewed around eq. (4.3), is a cubic theory of adjoint scalars in two dimensions whose color-stripped vertex is the single bracket \(\langle k_{1}k_{2}\rangle\equiv k_{1}^{\mu}\varepsilon_{\mu\nu}k_{2}^{\nu}\), which at the vertex equals \(\langle k_{2}k_{3}\rangle\) and \(\langle k_{3}k_{1}\rangle\).
This key property is due to momentum conservation alone. To prove off-shell color-kinematics duality at any multiplicity and loop order, it is enough to observe that the off-shell \(s\)-, \(t\)-, and \(u\)-channel numerators sum to zero
\[\langle ab\rangle\langle cd\rangle+\langle ac\rangle\langle db\rangle+\langle ad\rangle\langle bc\rangle=0\] (A.43)
by virtue of the 2D Levi-Civita identity \(\varepsilon_{\mu\nu}\varepsilon_{\rho\sigma}=\eta_{\mu\rho}\eta_{\nu\sigma}-\eta_{\mu\sigma}\eta_{\nu\rho}\). Amusingly, eq. (A.43) is simply a manifestation of the Schouten identity. Since ZM theory manifests color-kinematics duality without any reference to the on-shell condition, a mass term can be added to ZM theory without spoiling color-kinematics duality [28]. In general, if the on-shell conditions are not used, color-kinematics duality is unaffected by the details of the propagator structure.
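Since eq. (A.43) involves no on-shell or momentum-conservation input, it can be checked as a purely algebraic identity in the components of four arbitrary 2D vectors; the short sympy sketch below does exactly that.

```python
import sympy as sp

a0, a1, b0, b1, c0, c1, d0, d1 = sp.symbols('a0 a1 b0 b1 c0 c1 d0 d1')
A, B, C, D = (a0, a1), (b0, b1), (c0, c1), (d0, d1)

def br(x, y):
    """<xy> = x^mu eps_{mu nu} y^nu in 2D, i.e. a 2x2 determinant."""
    return x[0]*y[1] - x[1]*y[0]

# The Schouten identity of eq. (A.43) holds for arbitrary off-shell 2D vectors.
print(sp.expand(br(A, B)*br(C, D) + br(A, C)*br(D, B) + br(A, D)*br(B, C)))   # 0
```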
Of course, dramatically altering the pole structure could spoil locality in the double-copied theory. As already mentioned, the only difference classically between ZM and SDYM is the propagator structure. While both theories are color dual for the same reason, altering the propagators has profound consequences since SDYM tree amplitudes vanish and manifest color-kinematics does not persist to all loop orders. Color dual theories with the same interactions but different kinetic terms have appeared in the literature before. For example, \(J\)-theory and the theory of non-Abelian fluids presented in [26] differ only in their propagators. Specifically, taking the static limit of the fluid's \((D+1)\)-dimensional kinetic term \(\partial_{t}-\nabla^{2}\) and Wick rotating one of the coordinates yields the relativistic d'Alembertian of \(D\)-dimensional \(J\)-theory.
While the color-dual nature of ZM theory is rather elegant, the loop-level construction introduces complications when applying dimensional regularization. Similar to the difficulties of renormalizing chiral fermions [83], there is ambiguity in promoting the integrands to formal \(D\)-dimensional expressions. As such, in our two-loop bootstrap of NLSM we only used the ZM integrands as a mechanism for reducing the residual gauge freedom in our final solution.
### Chern-Simons theory
The final theory that is unified - through one loop - by the semi-abelian Yang-Mills construction is Chern-Simons (CS) theory. As we will show below, plugging in appropriate on-shell states to the semi-YM numerators that we constructed in the text will produce precisely the kinematic numerators for CS theory. Much like SDYM and ZM theory, the CS action has a preferred integer dimension of \(D=3\),
\[S_{\rm CS}=\int d^{3}x\,{\rm tr}\left[\epsilon^{\mu\nu\rho}\left(A_{\mu}\partial_{\nu}A_{\rho}+\frac{2}{3}A_{\mu}A_{\nu}A_{\rho}\right)\right]\] (A.44)
where \(\epsilon^{\mu\nu\rho}\) is a 3D Levi-Civita symbol. The color-dual properties of this theory were previously studied in Ref. [25], which demonstrated that color-kinematics duality is manifest off-shell. At the classical level, CS is clearly related to \(J\)-theory (and thus semi-YM) since the CS equation of motion,
\[\epsilon^{\rho\mu\nu}F_{\mu\nu}=0\Leftrightarrow F_{\mu\nu}=0\,,\] (A.45)
and the Lorenz gauge choice,
\[\partial_{\mu}A^{\mu}=0\,,\] (A.46)
are the same as the \(J\)-theory conditions in eqs. (A.36) and (A.37), respectively.
The correspondence between CS and semi-YM can be made more precise at the level of the Feynman rules. The color-stripped three-point vertex and propagator can be read off from the CS action in eq. (A.44) producing
\[\raisebox{-14.226378pt}{\includegraphics[scale=0.4]{figures/diagram.eps}}= \epsilon^{\mu\nu\rho}\varepsilon_{1}^{\mu}\varepsilon_{2}^{\nu}\varepsilon_{3}^ {\rho}\qquad\qquad A^{\nu}\ \raisebox{-14.226378pt}{\includegraphics[scale=0.4]{figures/diagram.eps}}\ A^{\mu}\ =\frac{i}{k^{2}}\epsilon^{\mu\nu\rho}k^{\rho}\,.\] (A.47)
At first glance, this seems far removed from the semi-abelian Yang-Mills analogs in eq. (3.17) and eq. (3.18). After all, semi-YM makes no reference to an antisymmetric 3-tensor and the mass dimensions of the vertices do not agree. However, to construct the full on-shell amplitudes of CS theory, we must plug in the Fourier transforms of the on-shell currents, \(\bar{\varepsilon}(p)\), as described in Ref. [25]. These are related to the polarizations, \(\varepsilon_{i}\), of the Feynman rules above as follows:
\[\varepsilon^{\mu}(k)=\epsilon^{\mu\nu\rho}k^{\nu}\bar{\varepsilon}^{\rho}(k)\,.\] (A.48)
With this definition of the on-shell states, it is clear how the Feynman rules of eq. (A.47) are related to those of eq. (3.17) and eq. (3.18) - we must simply canonicalize the CS kinetic term by absorbing \(\epsilon^{\mu\nu\rho}k^{\rho}\) into the definition of the three-point vertex:
\[A^{\nu}\ \raisebox{-14.226378pt}{\includegraphics[scale=0.4]{figures/diagram.eps}}\ A^{\mu}\ =\frac{i}{k^{2}}\epsilon^{\mu\nu\rho}k^{\rho}\quad \longrightarrow\quad A^{\nu}\ \raisebox{-14.226378pt}{\includegraphics[scale=0.4]{figures/diagram.eps}}\ \ \bar{A}^{\mu}\ =\frac{i}{k^{2}}\eta^{\mu\nu}\,.\] (A.49)
This has the effect of orienting the three-point vertex of eq. (A.47) by making the replacement \(\varepsilon_{i}\to\epsilon^{\mu\nu\rho}k^{\nu}_{i}\bar{\varepsilon}^{\rho}_{i}\) on one of the legs,
\[\raisebox{-14.226378pt}{\includegraphics[scale=0.4]{figures/diagram.eps}} \ \rightarrow\ \ \raisebox{-14.226378pt}{\includegraphics[scale=0.4]{figures/diagram.eps}} \ =\epsilon^{\mu\nu\rho}\varepsilon_{1}^{\mu}\varepsilon_{2}^{\nu}( \epsilon^{\rho\alpha\beta}k_{3}^{\alpha}\bar{\varepsilon}_{3}^{\beta})=( \varepsilon_{1}\bar{\varepsilon}_{3})(\varepsilon_{2}k_{3})-(\varepsilon_{2} \bar{\varepsilon}_{3})(\varepsilon_{1}k_{3})\,,\] (A.50)
where we have used the identity \(\epsilon^{\mu\nu\rho}\epsilon^{\rho\alpha\beta}=\eta^{\mu\alpha}\eta^{\nu\beta }-\eta^{\mu\beta}\eta^{\nu\alpha}\). As desired, the redefined CS vertex is precisely the three-point vertex of semi-YM in eq. (3.17). Clearly the \(D\)-dimensional amplitudes of semi-YM can be mapped to the 3D observables of CS theory by making the following replacement on all of the external states of semi-abelian YM:
\[\varepsilon_{i}^{\mu}\to\epsilon^{\mu\nu\rho}k_{i}^{\nu}\bar{ \varepsilon}_{i}^{\rho}\qquad\qquad\bar{\varepsilon}_{i}^{\mu}\to\bar{ \varepsilon}_{i}^{\mu}\,.\] (A.51)
Upon making this replacement, the amplitudes of \(D\)-dimensional semi-YM are mapped onto CS theory in \(D=3\), \(\mathcal{A}^{\text{semi-YM}}\to\mathcal{A}^{\text{CS}}\). While this may seem like a marginal gain at tree-level, by using our \(D\)-dimensional construction at one loop, the manifestly color-dual integrands of eq. (3.26) can resolve the regularization ambiguity that CS theory suffers from due to its explicit dependence on \(D=3\). For more background on CS theory, we refer the reader to Ref. [25].
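As a quick cross-check of the index algebra used in eq. (A.50), the Levi-Civita contraction identity can be verified numerically. The sketch below checks the Euclidean (symbol) form with Kronecker deltas; metric-signature and index-placement conventions, which differ in the Minkowski setting, are ignored here.

```python
import itertools
import numpy as np

# 3D Levi-Civita symbol eps[i, j, k]
eps = np.zeros((3, 3, 3))
for i, j, k in itertools.permutations(range(3)):
    # Sign of the permutation (i, j, k) of (0, 1, 2)
    eps[i, j, k] = np.sign(np.linalg.det(np.eye(3)[[i, j, k]]))

delta = np.eye(3)

# Check eps^{mu nu rho} eps^{rho alpha beta}
#   = delta^{mu alpha} delta^{nu beta} - delta^{mu beta} delta^{nu alpha}
lhs = np.einsum('mnr,rab->mnab', eps, eps)
rhs = np.einsum('ma,nb->mnab', delta, delta) - np.einsum('mb,na->mnab', delta, delta)
assert np.allclose(lhs, rhs)
print("Levi-Civita contraction identity verified (Euclidean conventions).")
```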
## Appendix B Spinor-helicity and conventions
We use the same conventions as Ref. [84], which we quote now. The component-wise definitions of the spinor brackets are
\[\langle ab\rangle =\frac{(a_{1}+ia_{2})(b_{0}+b_{3})-(b_{1}+ib_{2})(a_{0}+a_{3})}{ \sqrt{(a_{0}+a_{3})(b_{0}+b_{3})}}\,, \tag{114}\] \[[ab] =\frac{(b_{1}-ib_{2})(a_{0}+a_{3})-(a_{1}-ia_{2})(b_{0}+b_{3})}{ \sqrt{(a_{0}+a_{3})(b_{0}+b_{3})}}\,, \tag{115}\]
where the \(a_{i}\) are components of the four-vector, \(k_{a}^{\mu}=(a_{0},a_{1},a_{2},a_{3})\). For massless momenta, \(k_{a}\) and \(k_{b}\), one can apply the on-shell condition, \(k_{a}^{2}=k_{b}^{2}=0\), to show that
\[s_{ab}=(k_{a}+k_{b})^{2}=\langle ab\rangle[ba]\,. \tag{116}\]
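The bracket definitions above can be checked numerically. The following sketch, assuming a mostly-minus metric and momenta with \(a_{0}+a_{3}>0\) (to keep the square roots real), confirms \(s_{ab}=\langle ab\rangle[ba]\) for random massless momenta.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_massless():
    # Random null momentum with positive energy and a0 + a3 > 0.
    while True:
        n = rng.normal(size=3)
        k = np.array([np.linalg.norm(n), n[0], n[1], n[2]])
        if k[0] + k[3] > 1e-3:
            return k

def angle(a, b):   # <ab> per the component-wise definition above
    num = (a[1] + 1j * a[2]) * (b[0] + b[3]) - (b[1] + 1j * b[2]) * (a[0] + a[3])
    return num / np.sqrt((a[0] + a[3]) * (b[0] + b[3]))

def square(a, b):  # [ab] per the component-wise definition above
    num = (b[1] - 1j * b[2]) * (a[0] + a[3]) - (a[1] - 1j * a[2]) * (b[0] + b[3])
    return num / np.sqrt((a[0] + a[3]) * (b[0] + b[3]))

def minkowski_sq(p):  # mostly-minus signature (+,-,-,-)
    return p[0]**2 - p[1]**2 - p[2]**2 - p[3]**2

ka, kb = random_massless(), random_massless()
s_ab = minkowski_sq(ka + kb)
print(np.isclose(s_ab, (angle(ka, kb) * square(kb, ka)).real))  # True
```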
Four-dimensional polarization dot products with fixed helicity states are mapped to
\[k_{a}\cdot\varepsilon_{b}^{(+)} =\frac{\langle qa\rangle[ab]}{\sqrt{2}\langle qb\rangle}\,, k_{a}\cdot\varepsilon_{b}^{(-)} =-\frac{[qa]\langle ab\rangle}{\sqrt{2}[qb]}\,, \tag{117}\] \[\varepsilon_{a}^{(-)}\cdot\varepsilon_{b}^{(+)} =-\frac{\langle qa\rangle[qb]}{[qa]\langle qb\rangle}\,, \varepsilon_{a}^{(\pm)}\cdot\varepsilon_{b}^{(\pm)} =0\,,\]
Note that given the above definition, the spinor helicity variables carry the same mass dimension for both the angle and square brackets. In terms of the light-cone coordinates that we employed in appendix A, the angles and squares can be rewritten as
\[\langle ab\rangle =\frac{a_{w}b_{u}-a_{u}b_{w}}{\sqrt{a_{u}b_{u}}}\,, \tag{118}\] \[[ab] =\frac{b_{\bar{w}}a_{u}-a_{\bar{w}}b_{u}}{\sqrt{a_{u}b_{u}}}\,. \tag{119}\]
The conventions we use in the text, where the holomorphic spinor \(X(p,k)\) carries extra mass dimension, amount to shifting the brackets by an overall factor,
\[\langle ab\rangle \to a_{w}b_{u}-a_{u}b_{w}\equiv X(a,b)\,, \tag{120}\] \[[ab] \to\frac{b_{\bar{w}}a_{u}-a_{\bar{w}}b_{u}}{a_{u}b_{u}}\equiv Q(a,b)\,. \tag{121}\]
With this choice we still obtain the same completeness relation for the SDYM spinors,
\[X(a,b)Q(b,a)=s_{ab}\,. \tag{122}\]
## Appendix C Regulating BEL integrals
This appendix covers the bubble-on-external-leg (BEL) integrals that result from the \(n\)-gon numerator of \(Y\!Z\) theory. Before covering the BEL integrals themselves, we will need the
appropriate numerator. Weight counting tells us that the \(n\)-gon master numerator must have \(n\) on-shell \(Z\)-particles. Unitarity requires that there are three distinct contributions to the \(n\)-gon: one from an off-shell \(Y\)-loop particle and then two more from different orientations of a \(\bar{Z}Z\)-loop. Thus, \(YZ\) theory gives us the following one-loop \(n\)-gon numerator:
\[N^{\text{n-gon},YZ}_{(12\ldots n)}=\langle T^{1}T^{2}\cdots T^{n}\rangle+ \langle F^{1}F^{2}\cdots F^{n}\rangle \tag{124}\]
where \(\langle\cdots\rangle\) indicates an internal contraction over the \(YY\) and \(\bar{Z}Z\) loops and \(T\) and \(F\) were given in eq. (117) and eq. (118), respectively. In terms of the external momenta \(k_{i}\) and the loop momenta \(\ell_{i}\), where \(\ell_{i}\) flows into \(k_{i}\) and out of \(k_{i-1}\), the NLSM numerator is
\[N^{\text{NLSM}}_{(12\ldots n)}=\langle T^{1}T^{2}\cdots T^{n}\rangle=2^{n}( \ell_{1}k_{1})(\ell_{2}k_{2})\cdots(\ell_{n}k_{n}) \tag{125}\]
which is the same as eq. (3.28) after using \(2(k_{i}\cdot\ell_{i})=\ell_{i}^{2}-\ell_{i-1}^{2}\). Similarly, there is a pure vector contribution that is identical to the semi-abelian YM \(n\)-gon numerator that we constructed in eq. (3.26) after projecting external states along longitudinal modes \(\varepsilon\to k\),
\[\langle F^{1}F^{2}\cdots F^{n}\rangle=D(\ell_{1}k_{1})(\ell_{2}k_{2})\cdots( \ell_{n}k_{n})+\mathcal{O}(D^{0})\,. \tag{126}\]
The dimension-dependent factor essentially counts the number of internal vector states. While this \(n\)-gon is manifestly color-dual, it does not produce the right cuts for NLSM. However, the scalar contribution, which comes dressed with an overall factor of \(D\), _does_ manifest the duality globally, and satisfies all the desired pion cuts due to the factorization of eq. (119) and eq. (120). In order for the Feynman rules of \(Y\!Z\) theory to compute one-loop color-dual numerators consistent with NLSM cuts, one would have to add additional states to cancel off the spurious poles, while preserving color-kinematics duality. We leave this as a direction of future work.
While the \(\bar{Z}Z\) vector loop spoils color-kinematics off-shell, the \(\bar{Y}Y\) loop alone gives us the desired expression for the \(n\)-gon. However, as we noted in the text, the \(n\)-gon numerator has the strange property that it produces _non-vanishing_ bubble-on-external-leg (BEL) graphs. Nevertheless, the BEL diagram integrates to zero after integral reduction. As an example, the four-point BEL diagram can be reconstructed from the \(n\)-gon,
\[N^{\text{BEL}}_{1|2,34}=N^{\text{NLSM}}_{(1234|\ell)}-N^{\text{ NLSM}}_{(1243|\ell)}-N^{\text{NLSM}}_{(1342|\ell)}+N^{\text{NLSM}}_{(1432| \ell)}\,, \tag{127}\]
where we define the loop momentum to be in between the left most and right most leg on the box. Plugging in particular values for \(\ell_{i}\), the BEL reduces to
\[N^{\text{BEL}}_{1|2,34}=s_{12}(\ell+k_{1})^{2}\ell^{\mu}k_{1}^{\nu}k_{2}^{[\mu }k_{[34]}^{\nu]}\,. \tag{128}\]
Notice there is an overall factor that cancels one of the propagators. Plugging this numerator into the amplitude produces an integral of the form
\[\mathcal{I}^{\text{BEL}}_{1|2,34}=s_{12}k_{1}^{\nu}k_{2}^{[\mu}k_{[34]}^{\nu]} \int\frac{d^{D}\ell}{i\pi^{D/2}}\frac{\ell^{\mu}}{\ell^{2}-\mu^{2}}\sim s_{12} (s_{13}-s_{14})(\mu^{2})^{D/2}\,, \tag{129}\]
where we have introduced a mass regulator that will be proportional to the on-shell momentum inside the BEL, \(\mu^{2}\equiv k_{1}^{2}\). Thus, in sufficiently large dimension, \(D>2\), this integral suppresses the \(\mu^{-2}\) divergence appearing in the denominator of the BEL diagram. |
2309.07455 | Influence of Initial Entangled States on the Temperature-Dependent CHSH
Inequality | We demonstrate that the temperature affects the validity of the CHSH
inequality in an open bipartite two-qubit system. Specifically, for initial
entangled states within the decoherence-free subspace (DFS), the CHSH
inequality remains temperature-independent. In contrast, other entangled states
exhibit a temperature threshold beyond which the inequality holds. | Esteban Marulanda, Andrés Gómez | 2023-09-14T06:29:32Z | http://arxiv.org/abs/2309.07455v1 | # Influence of Initial Entangled States on the Temperature-Dependent CHSH Inequality.
###### Abstract
We demonstrate that the temperature affects the validity of the CHSH inequality in an open bipartite two-qubit system. Specifically, for initial entangled states within the decoherence-free subspace (DFS), the CHSH inequality remains temperature-independent. In contrast, other entangled states exhibit a temperature threshold beyond which the inequality holds.
## I Introduction
The advent of quantum information theory and the 50th-anniversary celebrations of the original publication of Bell's theorem in 2014 have only further elevated the importance of Bell inequalities in modern physics [1]. Bell inequalities have played a pivotal role in the foundational analysis of quantum mechanics, leading to insights that challenge our classical intuitions about the nature of reality. Introduced as part of Bell's theorem in 1964, these inequalities stem from considerations of local causality, the idea that correlations between distant events should be explainable in terms of local factors, such as the common source of the particles in question [2]. Although initially confined to discussions among a small group of physicists and philosophers, interest in Bell's theorem has surged, particularly following the groundbreaking experiments by Aspect and colleagues. Nowadays, Bell inequalities are used to test entanglement in systems like photonic chips that generate entangled states [3].
However, those tests are primarily designed for systems with pure states. In reality, most systems interact with an environment, and in general the state that describes the system of interest must be mixed. Therefore, investigating quantum correlations and their dependence on environmental parameters is a task of vital importance. In recent years, much research has focused on this. To highlight a few examples: [4], contrary to conventional wisdom in a continuous-variable setting, finds a high-temperature entangled system; [5] investigates various measures of quantum correlations in mixed states that interact with an environment through non-demolition and dissipative mechanisms; [6] presents an example of a system with a quantum correlation robust against noise; and [7] addresses a system of N spins in thermal equilibrium, calculating a temperature bound for the multipartite Bell inequality.
While there is extensive literature on the topic, the relationship between the CHSH inequality for an open system and temperature remains unclear, as do the possible underlying mechanisms that permit the existence of entangled states in high-temperature regimes. Could this phenomenon depend on the initial entangled state coupled to the environment? This article explores the CHSH inequality for an open quantum system composed of two qubits and discusses the conditions under which an initial entangled state remains entangled after coupling with an environment, even in the high-temperature regime. If an initial state does not maintain its entanglement in this high-temperature context, we establish a specific temperature threshold beyond which the CHSH inequality is satisfied.
This paper is organized as follows: Section 2 presents an example of a system defined by a Bell-type state. In the absence of external interactions, this system violates the CHSH inequality. We explore how interaction with a bosonic bath and temperature variation can affect such inequalities. Section 3 is dedicated to analyzing why some initial states have a violation of the CHSH inequality that is not affected by temperature. We conclude the study in Section 4.
## II Methodology.
Let us consider the version of the EPR paradox initially proposed by David Bohm [8]. This involves a system of two spin-1/2 particles that move far enough apart that their mutual interaction becomes negligible. Among the possible final states, let us consider a Bell-type state,
\[\ket{\psi_{AB}}=\frac{1}{\sqrt{2}}\left(\ket{0_{A}1_{B}}-\ket{1_{A}0_{B}} \right), \tag{1}\]
where \(A\) and \(B\) represent the spin to be measured by Alice and Bob, respectively. Since both are unaware of the orientation in which the state (1) is prepared, they measure the received spin in arbitrary bases rotated by \(\theta\) and \(\phi\) with respect to the preparation reference frame. Considering a statistical ensemble of Bell states given by (1), the expected value for a simultaneous measurement by A and B is [9]:
\[\left\langle\hat{W}_{\theta}^{A}\hat{W}_{\phi}^{B}\right\rangle=-\cos\left( \theta-\phi\right)\text{,} \tag{2}\]
where \(\hat{W}_{\theta}^{j}=\sin\theta\hat{\sigma}_{x}^{j}+\cos\theta\hat{\sigma}_{z} ^{j}\), with \(j=A,B\) and \(\hat{\sigma}_{x}^{j}\), \(\hat{\sigma}_{z}^{j}\) represent the spin operators of \(j\) in the directions indicated by the subscript. In the state specified by equation 1, the CHSH inequality is violated for certain specific values of \(\theta\) and \(\phi\),
\[\begin{split} S&=\left\langle\hat{W}_{0}^{A}\hat{W} _{\pi/4}^{B}\right\rangle-\left\langle\hat{W}_{0}^{A}\hat{W}_{3\pi/4}^{B} \right\rangle+\left\langle\hat{W}_{\pi/2}^{A}\hat{W}_{\pi/4}^{B}\right\rangle \\ &+\left\langle\hat{W}_{\pi/2}^{A}\hat{W}_{3\pi/4}^{B}\right\rangle =-2\sqrt{2}<-2\text{.}\end{split} \tag{3}\]
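Equation (3) can be checked directly from the definitions of \(\hat{W}_{\theta}\); the snippet below is a minimal numerical illustration (our own, using numpy) that builds the singlet state and evaluates the CHSH combination.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def W(theta):
    # Measurement operator rotated by theta in the x-z plane.
    return np.sin(theta) * sx + np.cos(theta) * sz

# Singlet state (|01> - |10>)/sqrt(2) in the basis {|00>, |01>, |10>, |11>}.
psi = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

def corr(theta, phi):
    return np.real(psi.conj() @ np.kron(W(theta), W(phi)) @ psi)

S = (corr(0, np.pi/4) - corr(0, 3*np.pi/4)
     + corr(np.pi/2, np.pi/4) + corr(np.pi/2, 3*np.pi/4))
print(S, -2*np.sqrt(2))  # both approximately -2.828
```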
This example highlights a well-established understanding: a theory based on Bell locality cannot account for the results of quantum mechanics in every instance [10]. Nonetheless, in the context of open quantum systems, we often examine the reduced system due to the impracticality of achieving complete isolation. Consequently, questions arise regarding the sufficient conditions for a reduced system to violate the CHSH inequality.
To observe this, let us consider the simplified experiment illustrated in Figure 1: two spins that have interacted become entangled at a moment \(t<0\); at \(t=0\) they couple to a thermal bath consisting of bosons, such that the interaction is now solely between the spins and the bath. In this scenario, we have the following initial state of the entire system, composed of the pair of spins and the bath:
\[\hat{\rho}(0)=\hat{\rho}_{S}(0)\otimes\hat{\rho}_{E}\text{,} \tag{4}\]
in this expression, \(\hat{\rho}_{S}(0)=\left|\psi\right\rangle\left\langle\psi\right|\) represents the density matrix of the bipartite system composed of two spins whose Hamiltonian is given by \(\hat{H}_{S}=\sum_{m\in\{A,B\}}\frac{\omega_{0}^{m}}{2}\hat{\sigma}_{z}^{m}\), where \(\omega_{0}^{m}\) is the energy difference between the eigenstates \(\left|0_{m}\right\rangle\) and \(\left|1_{m}\right\rangle\). \(\hat{\sigma}_{z}^{m}\) is the spin operator in the z-direction for spin \(m\). On the other hand, \(\hat{\rho}_{E}=\frac{\exp(-\beta\hat{H}_{E})}{Z_{E}}\) characterizes the thermal state of the bosonic bath, with a Hamiltonian defined by \(\hat{H}_{E}=\sum_{s}\omega_{s}\hat{b}_{s}^{\dagger}\hat{b}_{s}\). The interaction is assumed to be of the oscillator type, that is,
\[\hat{H}_{int}=\frac{1}{2}\sum_{m\in\{A,B\}}\sum_{s}\hat{\sigma}_{z}^{m}\left( g_{s}\hat{b}_{s}^{\dagger}+g_{s}^{*}\hat{b}_{s}\right)\text{,} \tag{5}\]
where \(g_{s}\) represents the coupling constants. Physically, this model represents a situation of decoherence without dissipation, as \(\left[\hat{H},\hat{\sigma}_{z}^{m}\right]=0\). Our focus now shifts to calculating equation 2 for the bipartite system, that is,
\[\begin{split}&\left\langle\hat{W}_{\theta}^{A}\hat{W}_{\phi}^{B} \right\rangle_{S}=\text{tr}_{S}\left(\hat{\rho}_{S}(t)\hat{W}_{\theta}^{A}\hat {W}_{\phi}^{B}\right)=\\ &=\sin\theta\sin\phi\left(\rho_{S}^{1100}(t)+\rho_{S}^{0011}(t)+ \rho_{S}^{0101}(t)+\rho_{S}^{1010}(t)\right)\\ &+\sin\theta\cos\phi\left(\rho_{S}^{0100}(t)+\rho_{S}^{0010}(t)- \rho_{S}^{1101}(t)-\rho_{S}^{0111}(t)\right)\\ &+\cos\theta\sin\phi\left(\rho_{S}^{1000}(t)+\rho_{S}^{0001}(t)- \rho_{S}^{1101}(t)-\rho_{S}^{0111}(t)\right)\\ &+\cos\theta\cos\phi\left(\rho_{S}^{0000}(t)+\rho_{S}^{1111}(t)- \rho_{S}^{1001}(t)-\rho_{S}^{0110}(t)\right)\text{,}\end{split} \tag{6}\]
here, \(\hat{\rho}_{S}(t)=\text{tr}_{E}\left(\hat{U}(t)\hat{\rho}(0)\hat{U}^{\dagger}( t)\right)\) represents the reduced density matrix of the bipartite system and \(\rho_{S}^{ijkl}(t)=\left\langle ij\right|\hat{\rho}_{S}(t)\left|kl\right\rangle\). The computation of the matrix elements has been studied in the literature when the coupling with a single spin is considered [11]. However, since there is no interaction between the spins, it is feasible to generalize the result for two spins, thus obtaining,
\[\begin{split}\left\langle ji\right|\hat{\rho}_{S}(t)\left|lk \right\rangle&=\rho_{S}^{jilk}(0)\times\\ &\text{tr}_{E}[\exp\left(\frac{1}{2}p_{ij}\sum_{s}\left(\alpha_{s }\hat{b}_{s}^{\dagger}-\alpha_{s}^{*}\hat{b}_{s}\right)\right)\hat{\rho}_{E}(0) \\ &\exp\left(\frac{1}{2}p_{lk}\sum_{s}\left(\alpha_{s}^{*}\hat{b}_ {s}-\alpha_{s}\hat{b}_{s}^{\dagger}\right)\right)]\text{,}\end{split} \tag{7}\]
we have defined \(\alpha_{k}=2g_{k}\frac{1-\exp\left(i\omega_{k}t\right)}{\omega_{k}}\) and also \(p_{ij}=(-1)^{i}+(-1)^{j}\).
Figure 1: Schematic showing the entangled state of two distant spins (measured by Alice and Bob) with negligible interaction, coupled to a thermal bath at temperature \(T\) via \(g_{k}\).
Thus, for the state \(\left|\psi\right\rangle=\left|\psi_{AB}\right\rangle\) given by equation 1, only the matrix elements \(\rho_{S}^{jilk}(0)\) that satisfy \(l\neq k\) and \(i\neq j\) are non-zero. Therefore, we observe no temperature dependence in the bipartite system at any point for this initial state, since equation 6 matches equation 2; hence, the Bell inequalities will be violated at any temperature.
Now, among the possible initially entangled states, consider \(\ket{\psi_{AB}}=\frac{1}{\sqrt{2}}\left(\ket{1_{A}1_{B}}+\ket{0_{A}0_{B}}\right)\), which violates the CHSH inequality when considering the same \(\theta\) and \(\phi\) from equation 3. For this new Bell-type state, under the same physical conditions as in the previous case, we obtain for the reduced system
\[\begin{split}\left\langle\hat{W}_{\theta}^{A}\hat{W}_{\phi}^{B} \right\rangle_{S}&=\cos{(\theta-\phi)}\times\\ &\times\exp{\left(-8\sum_{s}\abs{\alpha_{s}}^{2}\coth{\left( \frac{\omega_{s}\beta}{2}\right)}\right)}.\end{split} \tag{8}\]
Moreover, in this case, equation 3 exhibits a temperature dependence for the bipartite system. Explicitly,
\[S=2\sqrt{2}\times\exp{\left(-8\sum_{s}\abs{\alpha_{s}}^{2}\coth{\left(\frac{ \omega_{s}\beta}{2}\right)}\right)}. \tag{9}\]
This expression demonstrates how the Bell inequality would be satisfied as a function of temperature and the parameters associated with the thermal bath and interaction. For instance, if we assume that the number of bosons tends to infinity, equation 9 can be expressed in terms of the spectral density of the thermal bath as,
\[\begin{split} S&=2\sqrt{2}\times\\ &\times\exp{\left(-32\int_{0}^{\infty}d\omega\;J(\omega)\left| \frac{\sin{\left(\frac{\omega t}{2}\right)}}{\omega/2}\right|^{2}\coth{\left( \frac{\omega\beta}{2}\right)}\right)},\end{split} \tag{10}\]
where \(J(\omega)=\sum_{s}\abs{g_{s}}^{2}\delta(\omega_{s}-\omega)\). For this example, we see that, given the spectral density of the bath, we can find a temperature threshold \(T_{c}\) above which \(S<2\). Let us consider the limit \(t\rightarrow\infty\); in this case, \(\sin^{2}{\left(\frac{\omega t}{2}\right)}/{\omega^{2}}\rightarrow\pi t\delta(\omega)\)[12]. Commonly, spectral densities satisfy \(J(\omega=0)=0\)[13]. Then, we have,
\[\begin{split} S&=2\sqrt{2}\times\exp\left(-256k_{b}T\left.\frac{\mathrm{d}J}{\mathrm{d}\omega}\right|_{\omega=0}\right)\\ &\Rightarrow T_{c}=\frac{\ln\sqrt{2}}{256k_{b}\left.\frac{\mathrm{d}J}{\mathrm{d}\omega}\right|_{\omega=0}}.\end{split} \tag{11}\]
The first line of equation 11 shows the expected physical situation: in the high-temperature limit, the CHSH inequality is anticipated to hold since \(S\to 0\). The second line gives the critical temperature and shows how it depends on the spectral density.
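As an illustration of equation 11, the sketch below assumes an Ohmic spectral density \(J(\omega)=\eta\,\omega\,e^{-\omega/\omega_{c}}\) (our choice of example, not fixed by the text), for which \(\mathrm{d}J/\mathrm{d}\omega|_{\omega=0}=\eta\), and evaluates \(S(T)\) and the resulting threshold \(T_{c}\).

```python
import numpy as np

k_b = 1.0    # Boltzmann constant in natural units (assumption)
eta = 0.01   # Ohmic coupling strength, J(w) = eta * w * exp(-w/w_c) (assumption)

def S_of_T(T):
    # Long-time CHSH parameter from equation 11: S = 2*sqrt(2)*exp(-256 k_b T J'(0))
    return 2 * np.sqrt(2) * np.exp(-256 * k_b * T * eta)

T_c = np.log(np.sqrt(2)) / (256 * k_b * eta)
print(f"T_c = {T_c:.4f}")
print(S_of_T(0.5 * T_c) > 2)   # True: inequality still violated below T_c
print(S_of_T(2.0 * T_c) > 2)   # False: inequality satisfied above T_c
```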
## III Discussion.
In the previous section, we showed that the CHSH inequality can be independent of temperature, even when the system interacts with a bosonic bath, for certain initial entangled states of the bipartite system. We find that this is because these initial states belong to the decoherence-free subspace (DFS), which is not influenced by the bath [14]. That is, the density matrix associated with the bipartite system preserves unitary evolution \(\hat{\rho}_{S}(t)=\hat{U}_{S}\hat{\rho}_{S}(0)\hat{U}_{S}^{\dagger}\) with \(\hat{U}_{S}=\exp(-it\hat{H}_{S})\). In our previous example, the states belonging to the DFS are those which are eigenstates of the system part of the coupling, \(\frac{1}{2}\sum_{m\in\{A,B\}}\hat{\sigma}_{z}^{m}\). In short, initial entangled states satisfying this condition, as in the case of equation 1, violate the CHSH inequality independently of temperature. However, other states, such as \(\ket{\psi_{AB}}=\frac{1}{\sqrt{2}}\left(\ket{1_{A}1_{B}}+\ket{0_{A}0_{B}}\right)\), exhibit a critical temperature beyond which the Bell inequality is satisfied. Therefore, the environment constrains the validity of using a Bell test as a measure of entanglement. That is, violating the Bell inequality is a sufficient condition to ensure entanglement but not a necessary one [15]. In this scenario, employing other, more effective entanglement tests is essential.
## IV Conclusion.
Starting from initial states that violate the CHSH inequality, we show that states residing in the Decoherence-Free Subspace (DFS) allow two qubits, when interacting with an oscillator environment, to consistently violate the CHSH inequality irrespective of temperature. For states outside the DFS, we determine a temperature threshold beyond which the CHSH inequality is no longer violated. Such a threshold exists for the various spectral densities of the environment.
2309.13420 | DenMune: Density peak based clustering using mutual nearest neighbors | Many clustering algorithms fail when clusters are of arbitrary shapes, of
varying densities, or the data classes are unbalanced and close to each other,
even in two dimensions. A novel clustering algorithm, DenMune is presented to
meet this challenge. It is based on identifying dense regions using mutual
nearest neighborhoods of size K, where K is the only parameter required from
the user, besides obeying the mutual nearest neighbor consistency principle.
The algorithm is stable for a wide range of values of K. Moreover, it is able
to automatically detect and remove noise from the clustering process as well as
detecting the target clusters. It produces robust results on various low and
high-dimensional datasets relative to several known state-of-the-art clustering
algorithms. | Mohamed Abbas, Adel El-Zoghobi, Amin Shoukry | 2023-09-23T16:18:00Z | http://arxiv.org/abs/2309.13420v1 | # DenMune: Density Peak Based Clustering Using Mutual Nearest Neighbors
###### Abstract
Many clustering algorithms fail when clusters are of arbitrary shapes, of varying densities, or the data classes are unbalanced and close to each other, even in two dimensions. A novel clustering algorithm "DenMune" is presented to meet this challenge. It is based on identifying dense regions using mutual nearest neighborhoods of size \(K\), where \(K\) is the only parameter required from the user, besides obeying the mutual nearest neighbor consistency principle. The algorithm is stable for a wide range of values of \(K\). Moreover, it is able to automatically detect and remove noise from the clustering process as well as detecting the target clusters. It produces robust results on various low and high dimensional datasets relative to several known state of the art clustering algorithms.
keywords: clustering, mutual neighbors, dimensionality reduction, arbitrary shapes, pattern recognition, nearest neighbors, density peak +
## 1 Introduction
Data clustering, which is the process of gathering similar data samples into groups/clusters, has been found useful in different fields such as medical imaging (to differentiate between different types of tissues [1]), market research (to partition consumers into perceptual market segments [2]), document retrieval (to find documents that are relevant to a user query in a collection of documents [3]), and fraud detection (to detect suspicious fraudulent patterns) [4]), as well as many others [5]. In general, Clustering algorithms can be divided into the following types:
### Partitioning-based Clustering Algorithms
In this category, data objects are divided into non-overlapping subsets (clusters) such that each object lies in exactly one subset. The most well-known and commonly used algorithm in this class is K-means. K-means is heavily dependent on the initial cluster centers, which are badly affected by noise and outliers. A well-known variant is K-medoid. K-medoid selects the most centrally located point in a cluster, namely its medoid, as its representative point. Another well-known variant of K-means is KMeans++. It chooses centers at random, but weights them according to the squared distance from the closest already chosen center.
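As a reference for the seeding strategy just described, the snippet below is a minimal sketch of KMeans++-style D² sampling; it is a simplified illustration, not the implementation used in the works cited here.

```python
import numpy as np

def kmeanspp_seeds(X, n_clusters, rng=None):
    """Pick initial centers: the first at random, the rest sampled with
    probability proportional to the squared distance to the closest
    already-chosen center."""
    rng = np.random.default_rng(rng)
    centers = [X[rng.integers(len(X))]]
    for _ in range(n_clusters - 1):
        d2 = np.min([np.sum((X - c) ** 2, axis=1) for c in centers], axis=0)
        probs = d2 / d2.sum()
        centers.append(X[rng.choice(len(X), p=probs)])
    return np.array(centers)

X = np.random.default_rng(0).normal(size=(500, 2))
print(kmeanspp_seeds(X, n_clusters=3).shape)  # (3, 2)
```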
A recent algorithm in this area is the RS algorithm [6]. It belongs to the class of swap-based clustering algorithms that aim at using a sequence of prototype swaps to deal with the inability of K-means to fine-tune the cluster boundaries globally, although it succeeds locally. By adopting a random swap strategy, the computational complexity is reduced and the results are better than those obtained by K-means. Its main limitation is that there is no clear rule for how long the algorithm should be iterated.
Another recent algorithm in this category is CBKM [7]. It investigates the extent to which better initialization (poor initialization can cause the algorithm to get stuck at an inferior local minimum) and repeats can improve the K-means algorithm. It is found that, when the clusters overlap, the furthest point heuristic (Maxmin) can reduce the number of erroneous clusters from 15
### Proximity-based Clustering Algorithms
Neighborhood construction is useful in discovering the hidden interrelations between connected patterns [8]. Proximity can be identified using the k-nearest-neighbor approach (cardinality-based) or using an \(\epsilon\)-neighbourhood (distance-based).
A recent algorithm in this category is the FastDP algorithm [9]. It focuses on improving the quadratic time complexity of the popular "Density peaks" clustering algorithm by using a fast and generic construction of an approximate k-nearest neighbor graph, both for the density and for the delta calculation (the distance to the nearest point with higher density). The cluster centers are selected so that they have a high value of both delta and density. After that, the remaining points are allocated (joined) to the already formed clusters by merging with the nearest higher-density point. The algorithm inherits the problems associated with the original "Density peaks" algorithm, which include: (1) how to select the initial k cluster centers based on density and delta (the algorithm adopts the gamma strategy, which uses the points with a high product of the two features, density and delta); and (2) the problem of how to threshold the density and delta features.
Another recent algorithm is the NPIR algorithm [10]. It finds the nearest neighbors of the points that are already clustered, based on the Euclidean distance between them, and clusters them accordingly. Different nearest neighbors are selected, from the kNN lists of the already clustered points, at different iterations of the algorithm. Therefore, the algorithm relies on the random and iterative behavior of partitional clustering algorithms to give quality clustering results. It performs Election, Selection, and Assignment operations to assign data points to appropriate clusters. Three parameters are needed: the number of clusters, the indexing ratio (which controls the amount of possible reassignment of points) and the number of iterations.
CMUNE [11], a predecessor of DPC, uses the MNN graph to calculate the density of each point and selects the high-density points (also called strong points) as the seeds from which clusters may grow. A cutoff parameter is also needed to differentiate between strong and weak points. Similar to DPC, the constructed clusters are very sensitive to variations in this parameter. The notion of weak/isolated points has been introduced in ([11], [12] and [13]) to define points which are prone to be classified as noise and, consequently, excluded from the clusters' formation.
### Hierarchical Clustering Algorithms
In this category, data objects are organized into a tree of group-of-objects. The tree is constructed either from top to bottom or from bottom to top leading to divisive or agglomerative type of algorithms, respectively. Hierarchical clustering has been extensively applied in pattern recognition. Some known examples are Chameleon [14] and CURE [15]. The scalability of hierarchical
methods is generally limited due to their time complexity. To address this issue, [16] proposed a fast hierarchical clustering algorithm based on topology training. PHA [17] uses both local and global data distribution information during the clustering process. It can deal with overlapping clusters, clusters of non-spherical shapes and clusters containing noisy data, by making good use of the similarity between the iso-potential contours of a potential field and hierarchical clustering. A more successful density-based hierarchical variant is HDBSCAN [18]. HDBSCAN provides a clustering hierarchy from which a simplified tree of significant clusters is constructed; a flat partition composed of clusters is then extracted from optimal local cuts through the cluster tree. Unlike DBSCAN, it can find clusters of variable densities.
RCC [19] is a clustering algorithm that achieves high accuracy across multiple domains and scales efficiently to high dimensions and large datasets. It optimizes a smooth continuous objective function that allows the algorithm to be extended to perform joint clustering and dimensionality reduction.
A recent algorithm in this category is the FINCH algorithm [20]. It is a fully parameter-free clustering algorithm, i.e. it does not require any user-defined parameters such as similarity thresholds, the number of clusters or a priori knowledge about the data distribution. The algorithm is based on the clustering equation, which defines an adjacency link matrix that links two points i and j if j is the first neighbor of i, if i is the first neighbor of j, or if both i and j share the same first nearest neighbor. The algorithm belongs to the family of hierarchical agglomerative methods, has low computational overhead and is fast.
In this paper, a novel clustering algorithm, DenMune, is presented for the purpose of finding complex clusters of arbitrary shapes and densities in a two-dimensional space. Higher dimensional spaces are first reduced to 2-D using the t-sne algorithm. It can be considered as a variation of the CMUNE algorithm [11]. DenMune requires only one parameter from the user across its two phases. Other advantages include its ability to automatically detect, remove and exclude noise from the clustering process. It adopts a voting system where all data points are voters but only those that receive the highest votes are considered clusters' constructors. Moreover, it automatically detects the target clusters and produces robust results with no cutoff parameter needed.
### Outline of the Paper
The rest of this paper is organized as follows. Section 2 describes the key concepts of the DenMune clustering algorithm. Section 3 describes the algorithm itself and its time complexity analysis. Section 4 presents the data sets used in the experiments conducted to evaluate the performance of the algorithm. Section 5 presents the conclusion and possible future work.
## 2 Basic Definitions and Mechanisms Underlying the Proposed Algorithm
In this section we describe the basic concepts used in the proposed algorithm and its underlying mechanisms.
### K-Mutual-Neighbors Consistency
The principle of K-Mutual-Neighbors (K-MNN) consistency [21], which states that for any data point in a cluster its _MNN_ should also be in the same cluster, is stronger than the K-nearest Neighbors (KNN) consistency concept. In CMUNE and CSharp ([11], [12]) the concept of K-MNN is used to develop a clustering framework based on "Reference Points", defined in section 2.2, in which dense regions are identified using mutual nearest neighborhoods of size \(K\), where \(K\) is a user-parameter. Next, sets of points sharing common mutual nearest neighborhoods are considered in an agglomerative process to form the final clusters. This process is controlled by two threshold parameters. In contrast, by properly partitioning the data points into classes (section 2.3) and guided by the principle of K-Mutual-Neighbors consistency (K-MNN), DenMune is able to get rid of these threshold parameters in performing its clustering task (section 2.6).
### Refer-To-List, Reference-List and Reference Point
Given a set of points \(P=\{p_{1},p_{2},\)\(\ldots p_{n-2},p_{n-1},p_{n}\)\(\}\), let \(KNN_{p_{i}\rightarrow}=\{p_{1},p_{2},p_{3},\ldots,p_{k}\}\) be the K-nearest neighbors of point \(p_{i}\). In this paper, we consider that points in a \(KNN\) set are sorted, ascendingly, according to their distances from a given reference point. Therefore, \(KNN_{p_{i}\rightarrow}\) represents the ordered list of points that \(p_{i}\) refers-to, namely, the "Refer-To List". If \(p_{i}\in KNN_{p_{j}\rightarrow}\), then \(p_{i}\) is referred-to by \(p_{j}\). In this case, \(p_{j}\in KNN_{p_{i}\leftarrow}\), the set of points considering \(p_{i}\) among their K-nearest neighbors. The set \(KNN_{p_{i}\rightarrow}\cap KNN_{p_{i}\leftarrow}\), is the set \(MNN_{p_{i}}\) of mutual nearest neighbors of \(p_{i}\). It represents a set of dense points associated with point \(p_{i}\). Point \(p_{i}\) is said to be the "Representative-Point" or "Reference-Point" of \(MNN_{p_{i}}\).
As shown in Fig. 1, although the Euclidean distance is a symmetric metric, from the SNN [22] perspective (and considering \(K=4\)), point \(A\) is in \(KNN_{B\rightarrow}\), however, \(B\) is not in \(KNN_{A\rightarrow}\).
### DenMune classification of data points into Strong, Weak and Noise Points
According to the value of the non-negative ratio \(r=\frac{|KNN_{p\leftarrow}|}{|KNN_{p\rightarrow}|}=\frac{|KNN_{p\leftarrow}|}{K}\), since \(|KNN_{p\rightarrow}|=K\) (by definition), from DenMune's point of view each data point \(p\) in a dataset belongs to one of the types described in Eq. (1):
\[p.Type=\begin{cases}\text{Strong point}&\text{if }r\geq 1\\ \text{Weak point}&\text{if }r<1\\ \text{Noise point}&\text{if }0\leq r\ll 1\end{cases} \tag{1}\]
* Strong Points: satisfy the condition \(|KNN_{p\leftarrow}|\geq|KNN_{p\rightarrow}|\), or \(|KNN_{p\leftarrow}|\geq K\). This implies that \(|MNN_{p}|=|KNN_{p\rightarrow}\cap KNN_{p\leftarrow}|=K\). Strong points are also called seed points. Seed points that share non-empty _MNN_-sets of seeds are the clusters' constructors in the proposed algorithm.
Figure 1: Asymmetry of the _K_-nearest neighborhood relation.
* Weak points: satisfy the condition \(|KNN_{p\leftarrow}|<|KNN_{p\rightarrow}|\). From Eq.(1), it is clear that the boundaries of the set defining the weak points are fuzzy. Fig. 2 illustrates the idea that in DenMune, a weak point either succeeds in joining a cluster or it is considered as noise. For this reason, weak-points are called non-strong (non-seed) points. Hence, the following lemma can be concluded: \(\underline{\text{Lemma}}\): The set of weak points is a fuzzy set. Its boundaries with the sets of strong and noise points are fuzzy. The rule governing the assignment of a weak point to a cluster or rejecting it as noise is, in general, data as well as algorithm dependent.
* Noise points: represent points either with empty \(MNN\)s (corresponding to \(r=0\)), which are removed early in phase I of the DenMune algorithm and named noise of type-1, or weak points that fail to merge with any formed cluster (corresponding to \(r\ll 1\)), which are removed in phase II of the algorithm and named noise of type-2.
Figure 2: Fuzziness of the set \(W\) of weak points. \(N\) and \(S\) denote the noise (\(r=0\)) and strong (\(r\geq 1\)) points, respectively. \(T\) is some threshold that partitions the set \(W\) into \(W_{N}\) and \(W_{S}\). Both sets are automatically detected by DenMune.
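To make the classification in Eq. (1) concrete, the following sketch, our own illustration using scikit-learn's NearestNeighbors rather than the authors' implementation, computes \(KNN_{p\rightarrow}\), \(KNN_{p\leftarrow}\) and \(MNN_{p}\) and labels each point as strong, weak or type-1 noise.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def classify_points(X, K):
    # K+1 neighbours because each point is its own nearest neighbour.
    nn = NearestNeighbors(n_neighbors=K + 1).fit(X)
    knn_to = nn.kneighbors(X, return_distance=False)[:, 1:]   # KNN_{p->}
    knn_from = [[] for _ in range(len(X))]                    # KNN_{p<-}
    for i, neigh in enumerate(knn_to):
        for j in neigh:
            knn_from[j].append(i)
    labels = []
    for i in range(len(X)):
        mnn = set(knn_to[i]) & set(knn_from[i])               # MNN_p
        if len(mnn) == 0:
            labels.append("noise-1")      # r = 0
        elif len(knn_from[i]) >= K:
            labels.append("strong")       # r >= 1
        else:
            labels.append("weak")         # 0 < r < 1
    return labels

X = np.random.default_rng(0).normal(size=(200, 2))
print(classify_points(X, K=10)[:5])
```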
### Proposed Algorithm: Overview
DenMune is based on a voting-system framework where points that receive the largest number of votes (i.e. they belong to the K-nearest neighbors of at least \(K\) other points) are marked as dense/seed points and are used to construct the backbone of the target clusters in phase I of the algorithm. Points that receive no votes are considered as noise of type-1 and are eliminated from the clustering process. Phase II deals with the weak points, which either survive by merging with the existing clusters or are eliminated by being considered as noise of type-2.
Table 1 shows the distribution of strong/seed and weak/non-seed points in the Chameleon DS7 dataset, which includes 10,000 data points, while Fig. 3 illustrates how strong points determine the shapes/structures of the clusters, where weak points can only merge with them.
### Proposed Algorithm: Steps
DenMune involves the following steps:
* Canonical ordering: Clustering results obtained by DenMune are deterministic, as it orders the set of points \(P\) according to \(|KNN_{p\leftarrow}|\) in a descending order.
| Algorithm | Parameters | Strong Points | Weak Points | Noise of type-1 | Noise of type-2 |
| --- | --- | --- | --- | --- | --- |
| DenMune | \(K\)=39 | 5858 | 3471 | 0 | 671 |

Table 1: Strong and weak points found by DenMune in the Chameleon DS7 dataset.
* Noise Removal: Noise points of type-1 as well as those of type-2 are detected and removed in phase I and phase II of the algorithm, respectively, as illustrated in Table 2.
* Skeleton Construction and Propagation: after removal of type-1 noise points, the remaining points are partitioned into two groups: dense points (seeds) and low-dense points (non-seeds). Only seed points are eligible to construct the skeleton of the target clusters (i.e. the number of seed points represent an upper bound on the number of clusters), while low-dense points are considered in the next phase.
Figure 3: Phases of DenMune
To further illustrate the process of cluster propagation, Chameleon's dataset DS7 1 is used. Several snapshots of the clustering process are shown in Fig. 4 to illustrate how clusters propagate agglomeratively, and in parallel, in CSharp and DenMune.
\begin{table}
\begin{tabular}{c c c c c} \hline \(K\) & Strong Points & Weak Points & Noise of type-1 & Noise of type-2 \\ \hline \hline
[MISSING_PAGE_POST]
\end{tabular}
\end{table}
Table 2: Distribution of the different type of points, detected by DenMune, vs the number \(K\) of nearest neighbors in Chameleon DS7 dataset.
Figure 4: Cluster formation and propagation in DenMune and CSharp. Cluster seeds in DenMune are sparser but their propagation speed is slower. Also, DenMune results are more noise-free.
### Conservative Nature of DenMune
* Clusters formation in Phase I: Fig. 5(a) illustrates the evolution of the number of clusters with the number of iterations for Chameleon's dataset. DenMune merges clusters conservatively, in contrast to CSharp, which is eager to merge clusters. Table 1 indicates that 5858 strong points are found by DenMune during this phase. Therefore, the process of clusters formation stabilizes after 5858 iterations at the end of phase I, after which no more clusters can be constructed.
* Slow Merging of Weak Points in Phase II: weak points are merged one by one, each to the cluster with which it shares the largest number of \(MNN\)-seeds. Table 1 indicates that, out of the 4142 (\(3471+671\)) weak points, 3471 succeed in merging with the clusters formed in the first phase. The remaining 671 points are considered as noise points of type-2. It is worth noting that DenMune overcomes the lack of the noise threshold \(L\) and the merge parameter \(M\), used in CSharp, by (1) strengthening the \(MNN\) relationship to involve only seed points, (2) considering the weak points individually, i.e. one by one, during the propagation process, and (3) detecting and removing as noise the weak points that fail to merge with the formed clusters. As shown in Fig. 5(b), for the DS7 dataset, after 1000 iterations CSharp clustered 80% of the data points, while DenMune clustered only 50% of them. This is due to the fact that clusters in DenMune are initially sparse.
## 3 DenMune Algorithm
Algorithm 1 describes the proposed algorithm, followed by a detailed discussion of its time complexity.
Figure 5: DenMune vs CSharp:(a) Number of clusters and (b) number of clustered data points vs number of iterations.
```
Input: Data points \(P=\{p_{1},p_{2}\ldots,p_{n}\}\), \(K\) // size of the neighborhood of a point Output:\(C\) // set of generated clusters
1 Construct distance matrix \(D\) // Construct the Refer-To-List, \(KNN_{p_{i}\rightarrow}\), for each point \(p_{i}\in P\)
2\(KNN_{p_{i}\rightarrow}\leftarrow\{j|d(p_{i},p_{j})\leq d(p_{i},p_{k})\}\) // For each point \(p_{i}\) construct \(KNN_{p_{i}\leftarrow}\) by scanning \(KNN_{p_{j}\rightarrow}\) and selecting points \(j\) having point \(i\) in their \(KNN_{p_{j}\rightarrow}\)
3 foreach \(p_{i}\in P\) do
4     foreach \(p_{j}\in P\) do
5         if \(p_{i}\in KNN_{p_{j}\rightarrow}\) then
6             \(KNN_{p_{i}\leftarrow}\leftarrow KNN_{p_{i}\leftarrow}\cup\{p_{j}\}\) // From \(KNN_{p_{i}\rightarrow}\) and \(KNN_{p_{i}\leftarrow}\), construct \(MNN_{p_{i}}\)
7 \(MNN_{p_{i}}\gets KNN_{p_{i}\rightarrow}\cap KNN_{p_{i}\leftarrow}\)
8 Remove the set \(O\), of noise points \(p_{i}\) of type-1, satisfying \(|MNN_{p_{i}}|=0\)
9 Form the sorted list \(P\), The sorting is in a descending order according to \(|KNN_{p_{i}\leftarrow}|\) // \(P\) = \(P\) - \(O\)
10 Form the sorted list \(S\subset P\) = \(\{p_{i}|p_{i}\) satisfies \(|KNN_{p_{i}\leftarrow}|\geq|KNN_{p_{i}\rightarrow}|\}\)
11 Form the set \(Q\) of non-seed points, where \(Q=P-S\) // Note that \(Q\subset P\) = \(\{p_{i}|p_{i}\) satisfies \(|KNN_{p_{i}\leftarrow}|<|KNN_{p_{i}\rightarrow}|\}\)
12 CreateClustersSkeleton(S) // Phase I of the algorithm
13 AssignWeakPoints(Q) // Phase II of the algorithm
```
**Algorithm 1** DenMune Algorithm
```
Input: Sorted list \(S\) of Seed points Output: Sorted list \(L\) of the m generated clusters // Loop through all seed points and create clusters skeleton from seeds that share non-empty sets of MNN-seeds
1  \(i\gets 1\) // \(i\) is a seed index
2  \(L\leftarrow\phi\) // List of clusters so far
3  \(C\leftarrow\cup(s_{1},MNN(s_{1}))\)
4  \(L\).append(\(C\))
5  \(i\gets i+1\) // increment i
6  foreach \(s_{i}\in S\) do
7      \(C_{intersect}\leftarrow\phi\)
8      \(C\leftarrow\cup(s_{i},MNN(s_{i}))\)
9      foreach \(l\in L\) do
10         if \(l\cap C\neq\phi\) then
11             \(C_{intersect}\leftarrow\cup(C_{intersect},l)\)
12             \(L\).delete(\(l\))
13     if \(C_{intersect}\neq\phi\) then
14         \(C\leftarrow\cup(C,C_{intersect})\)
15     \(L\).append(\(C\))
16     \(i\gets i+1\) // increment i
17 \(m\gets Length(L)\)
18 for \(j\) from 1 to \(m\) do
19     \(\ell(s\in C_{j})\gets j\) // label each seed point in \(C_{j}\) as belonging to cluster \(j\)
// Output the set of generated clusters, \(m\) the number of clusters and label each seed point \(s\) belonging to a cluster \(C_{j}\) by its corresponding cluster index
```
**Algorithm 2** CreateClustersSkeleton(S)
```
Input: Sorted lists \(L\) of m clusters and \(Q\) of non-seed points. Output: Updated list \(L\) of the m generated clusters. // Loop through all non-seed points and assign each of them to the cluster with which it shares the largest number of \(MNN\)-seeds
1\(i\gets 1\)// \(i\) is an index for non-seed points
2foreach\(q_{i}\in Q\)do
3      Select \(j\) such that \(|\{q_{i}\cup MNN_{q_{i}}\}\cap C_{j}|\) is maximum, where \(j=1,2,\cdots,m\) and \(C_{j}\in L\)
4      \(\ell(q_{i})\gets j\) // label non-seed point \(q_{i}\) as belonging to cluster \(C_{j}\)
5      \(i\gets i+1\)
6  Output the formed clusters. The remaining unlabeled points are noise of type-2.
```
**Algorithm 3** AssignWeakPoints(Q)
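A compact Python rendering of the assignment rule in Algorithm 3 is sketched below; the data structures are hypothetical (`clusters` is a list of sets of seed indices and `mnn` maps a point to its set of mutual nearest neighbors), and weak points that share no MNN-seeds with any cluster are simply left unlabeled as type-2 noise.

```python
def assign_weak_points(weak_points, clusters, mnn):
    """Label each weak point q with the cluster sharing the largest
    number of points with {q} union MNN(q); no overlap -> type-2 noise."""
    labels = {}
    for q in weak_points:
        candidate = {q} | set(mnn[q])
        overlaps = [len(candidate & c) for c in clusters]
        best = max(range(len(clusters)), key=overlaps.__getitem__)
        if overlaps[best] > 0:
            labels[q] = best
        # points left out of `labels` are noise of type-2
    return labels

# Toy usage: two clusters of seeds and one weak point.
print(assign_weak_points([7], [{1, 2, 3}, {4, 5, 6}], {7: {2, 3, 9}}))  # {7: 0}
```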
### Time Complexity
Given \(N\) the number of data points, \(K\) the number of nearest neighbors, \(D\) the number of dimensions and \(C\) the number of constructed clusters, the time complexity for computing the similarity matrix between the data points is \(O(N^{2}D)=O(N^{2})\), since \(D=2\) (after dimensionality reduction). This complexity can be reduced to \(O(N\log N)\) by the use of a data structure such as a k-d tree [23] and [24], which works efficiently with low dimensional data. The space complexity of this preprocessing phase is \(O(ND)\). The time complexity of the algorithm can be analyzed as follows:
* line 2, finding \(KNN_{p_{i}\rightarrow}\): needs K iterations for each data point, hence it has a complexity of \(O(NK)\)
* lines 3-6, finding \(KNN_{p_{i}\leftarrow}\): needs K iterations for each of the N data points, hence it has a complexity of \(O(KN)\)
* line 7, finding \(MNN\): for each of the N data points, a search for mutual neighborhood is done within its K-nearest neighbors.
* line 9, sorting points: has a complexity of \(O(N\log N)\), using binary sort.
* CreateClustersSkeleton algorithm has a complexity of \(O(|S|*|R|*\log K)\), where R is an upper bound on the number of temporarily generated clusters, \(m\leq R\leq|S|\). Letting O(R) \(\approx|S|\) and O(\(|\)S\(|\)) \(\approx N\), then this complexity becomes \(\approx O(N^{2}\log K)\).
* Similarly, the AssignWeakPoints algorithm has a complexity of \(O(|Q|*|R|*K)\), since we iterate through each of the \(|Q|\) weak data points, searching
for the maximum intersection between its MNN and each of the formed clusters. Therefore, this complexity becomes \(\approx O(N^{2}K)\).
The overall time complexity for DenMune algorithm is O(\(N^{2}K\)) and its space complexity is \(O(NK)\).
## 4 Experimental Results
We have conducted extensive experiments on the datasets described in Table 3, which include: (1) fifteen real datasets obtained from the UCI repository 2, the MNIST dataset3 and the KEEL datasets4; (2) twenty-one synthetic datasets from 5 and 6. In total, thirty-six datasets have been used to assess the results obtained by DenMune with respect to the ground truth as well as to the results obtained by nine known algorithms: NPIR [10], CBKM [7], FastDP [9], FINCH [20], RS [6], RCC [19], HDBSCAN [18], KMeans++ [25] and Spectral clustering.
Footnote 2: [https://archive.ics.uci.edu/ml/index.php](https://archive.ics.uci.edu/ml/index.php)
Footnote 3: [http://yann.lecun.com/exdb/mnist/](http://yann.lecun.com/exdb/mnist/)
Footnote 4: [https://sci2s.ugr.es/keel/dataset.php?cod=183](https://sci2s.ugr.es/keel/dataset.php?cod=183)
Footnote 5: [http://cs.joensuu.fi/sipu/datasets/](http://cs.joensuu.fi/sipu/datasets/)
Footnote 6: [https://elki-project.github.io/datasets/](https://elki-project.github.io/datasets/)
The Euclidean distance has been adopted as a similarity metric for all datasets.
### Dimensionality Reduction
Datasets often contain a large number of features, which may even outnumber the observations, as in the Arcene dataset. Due to the computational and theoretical challenges associated with high dimensional data, reducing the dimension while maintaining the structure of the original data is desirable [26]. Also, high dimensional data may contain many irrelevant dimensions that suppress each other. These issues can confuse any clustering algorithm by hiding clusters, especially in noisy data. For these reasons, all datasets have been reduced to two dimensions, using the t-sne algorithm [27], before applying the examined algorithms on them. DenMune has been examined on the ten real datasets listed in Table 3, using various dimensionality reduction techniques. In general, as shown in Table 5, the algorithm's performance on a dataset projected to 2-D is better than its performance on the same dataset in its original high-dimensional version. Table 4 shows that DenMune attains its best performance when t-sne is used for dimensionality reduction. t-sne outperforms Principal Component Analysis (PCA), Factor Analysis (FA) and Non-negative Matrix Factorization (NMF) by a large margin.
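A minimal sketch of this preprocessing step, assuming scikit-learn's t-SNE implementation with illustrative (not the authors') parameter values, is shown below.

```python
import numpy as np
from sklearn.manifold import TSNE

def reduce_to_2d(X, random_state=0):
    # Project high-dimensional data to 2-D before clustering.
    return TSNE(n_components=2, init="pca", random_state=random_state).fit_transform(X)

X = np.random.default_rng(0).normal(size=(300, 50))  # toy high-dimensional data
X_2d = reduce_to_2d(X)
print(X_2d.shape)  # (300, 2)
```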
### Agorithms' Implementation and Parameters' setting
For HDBSCAN, Spectral Clustering and KMeans++, the implementations provided by SKlearn7 have been adopted. For all other algorithms (NPIR, CBKM, RS, FINCH, FastDP and RCC), the implementations provided by the authors themselves have been used. The DenMune algorithm has been implemented in C++ and integrated with SKlearn, to benefit from its libraries in computing various validation indexes.
Footnote 7: [https://scikit-learn.org/stable/](https://scikit-learn.org/stable/)
The parameters for each algorithm have been selected according to each algorithm's defaults and recommendations: (1) for NPIR, the IR parameter is selected in the range [0.01, 0.05, 0.10, 0.15, 0.20], with ten iterations for each run; (2) for HDBSCAN, the primary and most intuitive parameter, min-cluster-size, is selected in the range [2..100]; (3) for Spectral clustering and KMeans++, the default parameters in SKlearn have been adopted. The number of clusters is set equal to the ground truth. Each algorithm is run 100 times for each dataset and the best performance is recorded. (4) for DenMune, the only parameter used, \(K\), is selected in the range [1..50] for small datasets and [1..200] for big datasets. For the MNIST dataset, NPIR failed to scale to this big dataset, even on a cloud server with 128 GB memory; thus all MNIST results were removed from the ranking process for all other algorithms.
### Results and Discussion
The twenty-one synthetic datasets listed in Table 3 have been used to demonstrate the efficiency of our proposed algorithm. All datasets are 2-D except the DIM datasets, which are reduced from 32, 128 and 512 dimensions to 2-D to make them easy to visualize. They are of different sizes (G2 and DIM datasets). They have clusters of different densities, shapes (Spiral, Compound, Flame and Pathbased datasets) and degrees of overlapping (S1 and G2 datasets). Revealing the inherent structure of these datasets is challenging for most heuristic algorithms. Three metrics have been recorded: (1) the F1 scores, recorded in Tables 6 and 7 for synthetic and real datasets, respectively; (2) the Normalized Mutual Information (NMI), recorded in Tables 8 and 9 for synthetic and real datasets, respectively; and (3) the Adjusted Rand Index (ARI), recorded in Tables 10 and 11 for synthetic and real datasets, respectively.
We adopt a ranking system to order algorithms, based on their clustering performance, as measured by F1, NMI and ARI scores. The lower the rank of an algorithm, the better its clustering quality for the datasets examined. Three ranking values are added to the bottom of Tables (6 : 11) as follows: (1) Total rank: sum of the ranks of an algorithm over the examined datasets (2) Average rank: Total rank divided by the number of datasets and (3) rank: the algorithm ranking among the set of examined algorithms, given in ascending order (lower ranks preferred).
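The ranking scheme can be reproduced with a few lines of numpy; the sketch below uses toy scores (not the paper's numbers) to rank algorithms per dataset by F1 and then aggregate them into the total, average and final ranks.

```python
import numpy as np

# Rows: datasets, columns: algorithms (toy F1 scores, not the paper's numbers).
f1 = np.array([[0.91, 0.85, 0.88],
               [0.76, 0.80, 0.71],
               [0.95, 0.90, 0.89]])

# Rank within each dataset: higher F1 -> lower (better) rank.
per_dataset_rank = (-f1).argsort(axis=1).argsort(axis=1) + 1
total_rank = per_dataset_rank.sum(axis=0)         # "Total rank" row
average_rank = total_rank / f1.shape[0]           # "Average rank" row
final_order = total_rank.argsort().argsort() + 1  # overall "rank" row
print(total_rank, average_rank, final_order)
```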
The ground truth for all datasets is visualised using the t-sne algorithm as shown in Figs. 14 and 15 for synthetic and real datasets, respectively.
In general, the results show that DenMune outperforms all other algorithms for the majority of the datasets examined. DenMune has the lowest rank values for each of the three validity indexes used in the assessment, for both synthetic (Tables 6, 8 and 10) and real datasets (Tables 7, 9 and 11).
Based on the F1-score, DenMune outperforms the other algorithms for twenty-eight out of the thirty-six datasets. For the remaining datasets:
(1) Arcene dataset: all algorithms (except for the FINCH and RCC algorithms) outperform DenMune (+8%). (2) G2-2-50 dataset: the CBKM, RS and FastDP algorithms outperform DenMune (+2%). (3) Iris dataset: NPIR, CBKM, RS and Spectral outperform DenMune (+1% : +8%). (4) Glass dataset: Spectral Clustering outperforms DenMune slightly (+2%). (5) SCC dataset: the RS algorithm outperforms all other algorithms for this dataset with a noticeable F1-score (84%), followed by RCC with 77%, while DenMune scores only 68%. (6) Seeds dataset: NPIR and CBKM outperform DenMune (+1% : +2%). (7) WDBC dataset: NPIR, RS and FastDP outperform DenMune (+2% : +7%). (8) Yeast dataset: CBKM, FINCH, RCC and KMeans++ outperform DenMune (+1% : +5%).
We now investigate why DenMune outperforms the other algorithms for the majority of the datasets, and then illustrate why some algorithms outperform DenMune for some datasets.
DenMune has a noticeable better performance over other algorithms for many datasets such as (1) A1 dataset (+33%), (2) A2 dataset (+14%), (3) Compound dataset (+8%), (4) D31 dataset (+34%), (5) Dim-128 dataset
(+20%), (6) Pathbased dataset (+10%), (7) S1 dataset (+22%), (8) S2 dataset (+25%), (9) Optical-digits dataset (+27%), (10) Pendigits dataset (+24%), (11) Ecoli dataset (+6%) and (12) MNIST dataset (+6%). This is due to the framework DenMune adopts in its clustering process, which allows it to distinguish real clusters in noisy data, even if they are attached to each other or overlapping, as long as they are of distinguishable densities.
In contrast to DenMune, density-based algorithms such as HDBSCAN fail when clusters have different densities; that is why HDBSCAN performs badly on the Compound and Pathbased datasets, Fig. 10i and Fig. 12i, respectively. Also, on the Aggregation dataset it merges some spherical shapes incorrectly due to the strong linkage between them, Fig. 8i. Nevertheless, it performs well on the Spiral dataset, Fig. 9i, since the clusters are well separated.
FastDP speeds up the clustering process by building an approximate k-nearest neighbor (kNN) graph using an iterative algorithm. Its main advantage is that it removes the quadratic time complexity limitation of density peaks and allows clustering of very large datasets. FastDP cannot select the right cluster centers on the Pathbased and Spiral datasets, Figs. 12f and 9f, respectively. Its performance goes down when working on datasets with extremely uneven distributions, as in the Compound dataset, Fig. 10f. In contrast to the speed achieved by the algorithm, the results obtained show lower clustering quality. This is obvious from the validity index values achieved by the algorithm.
Centroid-based algorithms fail when the centroid of a cluster is closer to other data points than to the data points of its representative cluster. That is why KMeans++ and spectral clustering perform badly on arbitrarily shaped data. They can detect clusters of globular shapes, specifically when clusters are well separated as in the DIM datasets. Datasets with varying clusters' overlap degrade the validation scores even if the clusters are of globular shapes, as in the G2 datasets. Although they have an advantage over traditional K-means, they perform badly on noisy data or data with overlapping clusters.
NPIR uses an indexing ratio, IR, to control the amount of possible reassignment of points: a higher IR value means that assigned points have a greater chance of being reassigned. The reassignment process does not guarantee that the algorithm assigns points to the correct clusters, specifically when the data are noisy, as in the A1 and A2 datasets (F1 = 0.48 and 0.40, respectively), or when clusters overlap to different degrees, as in the S1 and S2 datasets (F1 = 0.43 and 0.41, respectively). NPIR clearly performs badly when data are noisy or when clusters of different densities are attached to each other. It performs better when clusters are well separated, even if they are of different densities, as in the Jain and Aggregation datasets (Fig. 11c and Fig. 8c, respectively). A noticeable issue we experienced while examining the algorithm is that it could not scale to the MNIST dataset and failed to run even on a cloud server with 128 GB of memory. We tested NPIR for IR in the range [0.01, 0.05, 0.10, 0.15, 0.20] and found that it generally performs well when IR is set to 0.01; however, it achieved its highest score on some datasets, such as Mouse, Ecoli and Compound, for IR = 0.15, and on the Flame dataset for IR = 0.20. Tuning NPIR to yield the best results is not an easy task.
We can easily observe that the performance of all clustering algorithms, except DenMune, decreases on datasets with cluster overlap, as in the S1, S2, A1 and A2 datasets. DenMune can deal with overlapping clusters as long as they are of distinguishable densities, and it outperformed the other algorithms on these datasets by a remarkable margin.
RCC performs well on some datasets, such as the G2, Spiral and R15 datasets, but it performs very badly on the DIM datasets, which are high-dimensional datasets whose clusters are well separated even in the higher-dimensional space (Figs. 14f to 14h).
On the synthetic datasets (Table 6), FINCH has the F1-score closest to that of DenMune while being faster. However, on the same datasets its performance in terms of the NMI and ARI metrics (Tables 8 and 10, as well as Figs. 8e to 13e) is poor. The same applies to the real datasets (Table 7).
The RS algorithm adopts a randomized search strategy, which is simple to implement and efficient. It achieved good clustering quality and, if iterated longer, would find the correct clustering with high probability. The CBKM algorithm uses a better initialization technique and/or repeats (restarts) the algorithm to improve the clustering quality of KMeans and to overcome issues with cluster overlap and clusters of unbalanced sizes. The authors of CBKM observed that choosing an initialization technique like Maxmin can compensate for the weaknesses of k-means, and recommended repeating k-means 10-100 times, each time taking a random point as the first centroid and selecting the rest using the Maxmin heuristic, to improve the clustering quality. We believe that increasing the number of runs (from 100 to, say, 1000) would slightly increase the clustering quality of RS, CBKM and NPIR, since they have random initial states. We found that the RS and CBKM algorithms achieve the most reasonable results on both the real and synthetic datasets, as assessed by the F1, NMI and ARI scores. In general, they have the closest rank to DenMune.
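The following is a minimal sketch of the repeat-with-Maxmin strategy described above. It uses scikit-learn's `KMeans` with an explicit initial-centroid array; the helper `maxmin_init` and the default of 100 restarts are illustrative choices, not the exact CBKM implementation.

```python
import numpy as np
from sklearn.cluster import KMeans

def maxmin_init(X, k, rng):
    """Maxmin heuristic: first centroid at random, then repeatedly add
    the point farthest from all centroids chosen so far."""
    centers = [X[rng.integers(len(X))]]
    for _ in range(k - 1):
        dists = np.min([np.linalg.norm(X - c, axis=1) for c in centers], axis=0)
        centers.append(X[np.argmax(dists)])
    return np.array(centers)

def repeated_maxmin_kmeans(X, k, restarts=100, seed=0):
    """Run k-means `restarts` times with Maxmin initialisation and keep
    the solution with the lowest inertia (sum of squared errors)."""
    rng = np.random.default_rng(seed)
    best = None
    for _ in range(restarts):
        km = KMeans(n_clusters=k, init=maxmin_init(X, k, rng), n_init=1).fit(X)
        if best is None or km.inertia_ < best.inertia_:
            best = km
    return best.labels_
```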
We also found that DenMune performs only moderately on small datasets, where it does not have enough points to build a robust \(KNN\) framework with which to distinguish the clusters; this is the case for the Iris and Arcene datasets.
Finally, we recorded the homogeneity and completeness of DenMune in Tables 12 and 13. A clustering result satisfies homogeneity if each cluster contains only members of a single class, and it satisfies completeness if all members of a given class are assigned to the same cluster. It is easy to observe that DenMune has high homogeneity and completeness scores, which explains the quality of its clustering.
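These two scores can be computed directly with scikit-learn; the sketch below uses toy ground-truth and predicted labels purely for illustration.

```python
from sklearn.metrics import homogeneity_score, completeness_score

# Toy ground-truth classes and predicted cluster labels (illustrative only).
labels_true = [0, 0, 0, 1, 1, 1]
labels_pred = [0, 0, 1, 1, 2, 2]

# Homogeneity is 1.0 when every cluster contains members of a single class;
# completeness is 1.0 when all members of a class land in the same cluster.
print("homogeneity :", homogeneity_score(labels_true, labels_pred))
print("completeness:", completeness_score(labels_true, labels_pred))
```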
\begin{table}
\begin{tabular}{l c c c c} \hline Dataset & PCA & FA & NMF & t-sne \\ \hline \hline Optical Digits & 0.43 & 0.46 & 0.31 & 0.95 \\ Pen Digits & 0.48 & 0.46 & 0.36 & 0.88 \\ MNIST & 0.25 & 0.25 & 0.24 & 0.89 \\ \hline \end{tabular}
\end{table}
Table 4: Best NMI scores, obtained by DenMune, when applying different dimensionality reduction methods on three real N-D datasets
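The reduce-then-cluster-then-score pipeline behind Tables 4 and 5 can be sketched as follows. Scikit-learn's k-means is used here only as a stand-in clusterer, since the DenMune call itself is not reproduced, and the small `load_digits` set is a stand-in for Optical Digits, so the NMI values will differ from those in the tables.

```python
from sklearn.datasets import load_digits
from sklearn.manifold import TSNE
from sklearn.cluster import KMeans
from sklearn.metrics import normalized_mutual_info_score

X, y = load_digits(return_X_y=True)                 # 64-dimensional digits data
X_2d = TSNE(n_components=2, random_state=0).fit_transform(X)

for name, data in [("original 64-D", X), ("t-SNE 2-D", X_2d)]:
    labels = KMeans(n_clusters=10, n_init=10, random_state=0).fit_predict(data)
    print(name, round(normalized_mutual_info_score(y, labels), 2))
```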
### Speed Performance
The speed of DenMune has been compared to that of CMune and CSharp, as shown in Fig. 6a. The dataset considered is the MNIST dataset (70000 patterns), divided into subsets of 1000 patterns each. The subsets are added incrementally, and the running time of the algorithm is recorded with each increment. The time considered is the time required for running the core clustering algorithms, excluding the pre-processing time for computing the proximity matrix and for dimensionality reduction, and it is measured in seconds. The adopted algorithms, as well as the proposed algorithm, were executed on a cloud server with the following configuration: Intel E5 processor, up to 128 GB RAM, running the Linux operating system (Ubuntu 18.04 LTS). Another test examines speed versus the number of K-nearest neighbors used, as shown in Fig. 6b.
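A minimal sketch of this incremental timing protocol is given below. The `cluster` argument is a placeholder for whichever core routine (DenMune, CMune or CSharp) is being measured, and any pre-processing such as proximity-matrix computation or dimensionality reduction is assumed to be done outside the timed call, as in the text.

```python
import time

def time_incremental(X, cluster, step=1000):
    """Time the core clustering call on growing prefixes of X (in seconds),
    excluding any pre-processing performed outside `cluster`."""
    timings = []
    for n in range(step, len(X) + 1, step):
        t0 = time.perf_counter()
        cluster(X[:n])            # placeholder for the core clustering routine
        timings.append((n, time.perf_counter() - t0))
    return timings

# Illustrative use with a stand-in clusterer on random 2-D data:
# import numpy as np
# from sklearn.cluster import KMeans
# X = np.random.rand(70000, 2)
# results = time_incremental(X, lambda S: KMeans(n_clusters=10, n_init=1).fit(S))
```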
\begin{table}
\begin{tabular}{l c c c c} \hline \multirow{2}{*}{Dataset} & \multicolumn{2}{c|}{Original dataset} & \multicolumn{2}{c}{Reduced dataset} \\ \cline{2-5} & Dimensions & NMI & Dimensions & NMI \\ \hline \hline Optical Digits & 64 & 0.75 & 2 & 0.95 \\ SCC & 60 & 0.85 & 2 & 0.85 \\ Arcene & 10000 & 0.19 & 2 & 0.20 \\ iris & 4 & 0.73 & 2 & 0.81 \\ Breast Cancer & 9 & 0.75 & 2 & 0.80 \\ Ecoli & 8 & 0.01 & 2 & 0.71 \\ Pen Digits & 16 & 0.81 & 2 & 0.88 \\ \hline \end{tabular}
\end{table}
Table 5: Best NMI scores when applying DenMune on seven real datasets before and after dimensionality reduction
## 5 Conclusion and Future Work
In this paper, a novel shared-nearest-neighbors clustering algorithm, DenMune, is presented. It utilizes the MNN size to calculate the density of each point and chooses the high-density points as the seeds from which clusters may grow. In contrast to recent similar algorithms, such as DPC and CMune, no cut-off parameter is required from the user of DenMune. Guided by the principle of Mutual Nearest-Neighbors (_MNN_) consistency, DenMune prioritizes points according to a voting system and partitions them into seeds
Figure 6: (a) Speed of DenMune compared to the speed of CMune and CSharp on the MNIST dataset. (b) Speed of DenMune vs number of \(K\)- nearest neighbors.
and non-seeds. Seed points determine the number as well as the skeleton of the clusters, while non-seed points either merge with the formed clusters or are considered noise. The algorithm automatically detects the number of clusters and has shown robustness on datasets of different shapes and densities. We examined the sensitivity of DenMune to changes in K, the number of nearest neighbors (the only parameter required by the algorithm), on three real datasets with \(K\) in the range [1..200] and recorded the NMI for each dataset, as shown in Fig. 7. The stability of DenMune with respect to K makes it a good candidate for data exploration and visualization, since it works in a two-dimensional feature space. Algorithms that rely on several parameters, such as CSharp, CMune, HDBSCAN and DPC, can offer more flexibility than single-parameter algorithms such as DenMune, but at the expense of the time needed to tune them.
In addition to the logical motivations behind the algorithm (the scheme it adopts to partition the points of a given dataset into three
Figure 7: DenMune Results stability over changes in K, measured in NMI
types (seed, noise and potential noise points) and the MNN consistency principle that governs cluster growth), the experiments conducted on a variety of datasets have shown its efficiency and robustness in detecting clusters of different sizes, shapes and densities in the presence of noise. In summary, DenMune is conceptually simple, logically sound and relies on a single parameter. As future work, we intend to implement a parallel version of it, since cluster propagation in the algorithm is inherently parallel, as shown in Fig. 4, and to investigate its performance on other types of datasets.
## Acknowledgement
The authors would like to thank the anonymous reviewers for their valuable comments and suggestions which contributed to the improvement of the manuscript. Also, the authors would like to thank Dr. Saquib Sarfraz, Dr. Sami Sieranoja and Dr. Pasi Franti, the authors of FINCH, CBKM and RS algorithms, respectively.
\begin{table}
\begin{tabular}{l c c c c c c c c c} \hline \hline Dataset & DenMune & NPIR & CBKM & RS & FastDP & FINCH & HDBSCAN & RCC & Spectral & KM++ \\ \hline \hline A1 & 0.93 & 0.48 & 0.55 & 0.60 & 0.44 & 0.80 & 0.40 & 0.57 & 0.48 & 0.63 \\ A2 & 0.95 & 0.40 & 0.45 & 0.41 & 0.48 & 0.81 & 0.38 & 0.61 & 0.46 & 0.57 \\ Aggregation & 1.00 & 0.65 & 0.77 & 0.73 & 0.70 & 0.69 & 0.66 & 0.81 & 0.67 & 0.69 \\ Compound & 0.97 & 0.62 & 0.58 & 0.57 & 0.54 & 0.77 & 0.63 & 0.89 & 0.40 & 0.65 \\ D31 & 0.97 & 0.42 & 0.52 & 0.48 & 0.41 & 0.63 & 0.38 & 0.59 & 0.46 & 0.54 \\ Dim-32 & 1.00 & 0.52 & 0.75 & 0.58 & 0.51 & 0.99 & 0.45 & 0.07 & 0.54 & 0.92 \\ Dim-128 & 1.00 & 0.53 & 0.58 & 0.54 & 0.59 & 0.80 & 0.52 & 0.10 & 0.58 & 0.94 \\ Dim-512 & 1.00 & 0.53 & 0.67 & 0.59 & 0.59 & 0.22 & 0.36 & 0.01 & 0.68 & 0.97 \\ Flame & 1.00 & 1.00 & 0.86 & 0.86 & 1.00 & 0.99 & 0.91 & 0.80 & 0.98 & 0.85 \\ G2-2-10 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 0.90 & 0.84 & 1.00 & 0.33 & 0.86 \\ G2-2-30 & 0.99 & 0.97 & 0.99 & 0.33 & 0.99 & 0.96 & 0.99 & 0.33 & 0.99 \\ G2-2-50 & 0.90 & 0.90 & 0.92 & 0.92 & 0.92 & 0.69 & 0.65 & 0.67 & 0.33 & 0.53 \\ Jain & 1.00 & 1.00 & 0.80 & 0.80 & 0.93 & 0.77 & 0.97 & 0.89 & 1.00 & 0.79 \\ Mouse & 0.98 & 0.88 & 0.66 & 0.82 & 0.76 & 0.82 & 0.77 & 0.96 & 0.95 & 0.67 \\ Pathbased & 0.97 & 0.84 & 0.49 & 0.53 & 0.50 & 0.73 & 0.77 & 0.87 & 0.48 & 0.69 \\ R15 & 1.00 & 0.56 & 0.56 & 0.65 & 0.39 & 0.91 & 0.47 & 0.99 & 0.56 & 0.62 \\ S1 & 1.00 & 0.43 & 0.58 & 0.49 & 0.42 & 0.78 & 0.49 & 0.75 & 0.67 & 0.72 \\ S2 & 0.97 & 0.41 & 0.58 & 0.58 & 0.22 & 0.72 & 0.58 & 0.49 & 0.63 & 0.70 \\ Spiral & 1.00 & 0.48 & 0.28 & 0.36 & 0.55 & 0.46 & 1.00 & 0.56 & 1.00 & 0.44 \\ Unbalance & 1.00 & 0.92 & 0.98 & 0.98 & 0.86 & 0.67 & 0.93 & 0.97 & 0.85 & 0.63 \\ Vary density & 1.00 & 1.00 & 0.95 & 0.95 & 0.56 & 0.67 & 0.95 & 0.89 & 1.00 & 0.77 \\ \hline Total Rank & 24 & 120 & 110 & 107 & 139 & 99 & 147 & 102 & 125 & 117 \\ Avg rank & 1.14 & 5.71 & 5.24 & 5.10 & 6.62 & 4.71 & 7.00 & 4.86 & 6.0 & 5.57 \\ \hline \hline Rank & 1 & 7 & 5 & 4 & 9 & 2 & 10 & 3 & 8 & 6 \\ \hline \hline \end{tabular}
\end{table}
Table 6: Comparison of the performance of DenMune with other nine algorithms, based on F1-score, on twenty-one synthetic datasets.
\begin{table}
\begin{tabular}{l c c c c c c c c c c} \hline \hline Dataset & DenMune & NPIR & CBKM & RS & FastDP & FINCH & HDBSCAN & RCC & Spectral & KM++ \\ \hline \hline Appendicitis & 0.89 & 0.89 & 0.75 & 0.75 & 0.88 & 0.76 & 0.86 & 0.73 & 0.71 & 0.78 \\ \multirow{2}{*}{Arrene} & 0.58 & 0.66 & 0.66 & 0.66 & 0.66 & 0.50 & 0.66 & 0.58 & 0.66 & 0.66 \\ \multirow{2}{*}{Breast cancer} & 0.97 & 0.97 & 0.88 & 0.91 & 0.97 & 0.48 & 0.96 & 0.54 & 0.51 & 0.96 \\ \multirow{2}{*}{Optical digits} & 0.97 & 0.49 & 0.69 & 0.58 & 0.59 & 0.54 & 0.70 & 0.58 & 0.58 & 0.59 \\ \multirow{2}{*}{Pendigits} & 0.89 & 0.53 & 0.65 & 0.65 & 0.41 & 0.57 & 0.52 & 0.62 & 0.52 & 0.67 \\ \multirow{2}{*}{Ecoli} & 0.77 & 0.69 & 0.71 & 0.65 & 0.28 & 0.52 & 0.21 & 0.53 & 0.51 & 0.56 \\ \multirow{2}{*}{Glass} & 0.57 & 0.51 & 0.55 & 0.54 & 0.36 & 0.52 & 0.47 & 0.52 & 0.59 & 0.46 \\ \multirow{2}{*}{Iris} & 0.90 & 0.97 & 0.91 & 0.94 & 0.90 & 0.90 & 0.56 & 0.90 & 0.98 & 0.88 \\ \multirow{2}{*}{MNIST} & 0.90 & N/A\({}^{*}\) & 0.66 & 0.64 & 0.46 & 0.61 & 0.84 & 0.18 & 0.83 & 0.61 \\ \multirow{2}{*}{Libras movement} & 0.46 & 0.27 & 0.29 & 0.33 & 0.24 & 0.44 & 0.27 & 0.41 & 0.26 & 0.39 \\ \multirow{2}{*}{Robot navigation} & 0.60 & 0.43 & 0.49 & 0.46 & 0.38 & 0.57 & 0.56 & 0.27 & 0.43 & 0.56 \\ \multirow{2}{*}{SCC} & 0.68 & 0.65 & 0.65 & 0.84 & 0.49 & 0.64 & 0.36 & 0.77 & 0.64 & 0.55 \\ \multirow{2}{*}{Seeds} & 0.89 & 0.91 & 0.90 & 0.89 & 0.54 & 0.76 & 0.82 & 0.89 & 0.52 & 0.77 \\ \multirow{2}{*}{WDBC} & 0.84 & 0.91 & 0.83 & 0.86 & 0.89 & 0.82 & 0.82 & 0.56 & 0.48 & 0.81 \\ \multirow{2}{*}{Yeast} & 0.40 & 0.35 & 0.44 & 0.40 & 0.27 & 0.41 & 0.31 & 0.45 & 0.40 & 0.42 \\ \hline \hline Total Rank & 37 & 60 & 54 & 57 & 92 & 86 & 88 & 80 & 91 & 72 \\ \multirow{2}{*}{Avg rank} & 2.64 & 4.29 & 3.86 & 4.07 & 6.57 & 6.14 & 6.29 & 5.71 & 6.5 & 5.14 \\ \hline \hline Rank & 1 & 4 & 2 & 3 & 10 & 7 & 8 & 6 & 9 & 5 \\ \hline \end{tabular}
* NPIR failed to scale to the MNIST dataset even on a cloud server with 128 GB of memory.
\end{table}
Table 7: Comparison of the performance of DenMune with other nine algorithms, based on F1-score, on fifteen real datasets.
\begin{table}
\begin{tabular}{c c c c c c c c c c} \hline \hline Dataset & DenMune & NPIR & CBKM & RS & FastDP & FINCH & HDBSCAN & RCC & Spectral & KM++ \\ \hline \hline A1 & 0.98 & 0.84 & 0.87 & 0.92 & 0.83 & 0.85 & 0.74 & 0.78 & 0.88 & 0.89 \\ A2 & 0.98 & 0.86 & 0.89 & 0.88 & 0.89 & 0.89 & 0.75 & 0.80 & 0.90 & 0.85 \\ Aggregation & 0.99 & 0.80 & 0.90 & 0.90 & 0.87 & 0.76 & 0.86 & 0.82 & 0.90 & 0.73 \\ Compound & 0.94 & 0.79 & 0.58 & 0.62 & 0.46 & 0.83 & 0.66 & 0.87 & 0.62 & 0.66 \\ D31 & 0.96 & 0.84 & 0.87 & 0.87 & 0.84 & 0.85 & 0.71 & 0.85 & 0.85 & 0.84 \\ Dim-32 & 1.00 & 0.91 & 0.95 & 0.92 & 0.89 & 0.99 & 0.85 & 0.16 & 0.87 & 0.90 \\ Dim-128 & 1.00 & 0.87 & 0.92 & 0.87 & 0.91 & 0.90 & 0.87 & 0.22 & 0.92 & 0.91 \\ Dim-512 & 1.00 & 0.88 & 0.93 & 0.91 & 0.91 & 0.59 & 0.83 & 0.00 & 0.93 & 0.94 \\ Flame & 1.00 & 1.00 & 0.46 & 0.48 & 1.00 & 0.94 & 0.61 & 0.47 & 0.85 & 0.55 \\ G2-2-10 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 0.75 & 0.53 & 0.99 & 0.00 & 0.80 \\ G2-2-30 & 0.94 & 0.83 & 0.92 & 0.92 & 0.00 & 0.92 & 0.76 & 0.94 & 0.00 & 0.92 \\ G2-2-50 & 0.51 & 0.53 & 0.59 & 0.59 & 0.58 & 0.36 & 0.25 & 0.23 & 0.00 & 0.26 \\ Jain & 1.00 & 1.00 & 0.37 & 0.37 & 0.64 & 0.47 & 0.88 & 0.70 & 1.00 & 0.50 \\ Mouse & 0.94 & 0.68 & 0.58 & 0.58 & 0.85 & 0.56 & 0.55 & 0.87 & 0.85 & 0.61 \\ Pathbased & 0.89 & 0.66 & 0.51 & 0.55 & 0.52 & 0.56 & 0.56 & 0.70 & 0.50 & 0.58 \\ R15 & 0.99 & 0.86 & 0.89 & 0.91 & 0.84 & 0.90 & 0.85 & 0.99 & 0.90 & 0.79 \\ S1 & 0.99 & 0.84 & 0.89 & 0.87 & 0.84 & 0.80 & 0.83 & 0.86 & 0.91 & 0.88 \\ S2 & 0.94 & 0.79 & 0.83 & 0.82 & 0.62 & 0.75 & 0.74 & 0.80 & 0.87 & 0.77 \\ Spiral & 1.00 & 0.28 & 0.00 & 0.00 & 0.74 & 0.26 & 1.00 & 0.50 & 1.00 & 0.31 \\ Unbalance & 1.00 & 0.94 & 0.99 & 0.99 & 0.82 & 0.97 & 0.95 & 0.95 & 0.88 & 0.90 \\ Vary density & 1.00 & 1.00 & 0.86 & 0.86 & 0.73 & 0.53 & 0.87 & 0.70 & 1.00 & 0.56 \\ \hline Total Rank & 25 & 109 & 97 & 99 & 127 & 129 & 151 & 125 & 99 & 130 \\ Avg rank & 1.19 & 5.19 & 4.62 & 4.71 & 6.05 & 6.14 & 7.19 & 5.95 & 4.7 & 6.19 \\ \hline \hline Rank & 1 & 5 & 2 & 3 & 7 & 8 & 10 & 6 & 3 & 9 \\ \hline \hline \end{tabular}
\end{table}
Table 8: Comparison of the performance of DenMune with other nine algorithms, based on NMI-score, on twenty-one synthetic datasets.
\begin{table}
\begin{tabular}{l c c c c c c c c c c} \hline \hline Dataset & DenMune & NPIR & CBKM & RS & FastDP & FINCH & HDBSCAN & RCC & Spectral & KM++ \\ \hline \hline Appendicitis & 0.37 & 0.37 & 0.18 & 0.18 & 0.33 & 0.18 & 0.26 & 0.21 & 0.00 & 0.16 \\ Arecne & 0.07 & 0.09 & 0.09 & 0.09 & 0.09 & 0.07 & 0.09 & 0.02 & 0.09 & 0.09 \\ Breast cancer & 0.80 & 0.80 & 0.53 & 0.61 & 0.80 & 0.20 & 0.77 & 0.24 & 0.00 & 0.77 \\ Optical digits & 0.94 & 0.83 & 0.86 & 0.81 & 0.82 & 0.73 & 0.83 & 0.82 & 0.80 & 0.81 \\ Pendigits & 0.88 & 0.72 & 0.78 & 0.71 & 0.57 & 0.76 & 0.77 & 0.74 & 0.70 & 0.75 \\ Ecoli & 0.71 & 0.56 & 0.53 & 0.54 & 0.52 & 0.49 & 0.51 & 0.43 & 0.49 & 0.55 \\ Glass & 0.39 & 0.34 & 0.37 & 0.35 & 0.31 & 0.37 & 0.35 & 0.36 & 0.39 & 0.34 \\ Iris & 0.81 & 0.90 & 0.79 & 0.82 & 0.81 & 0.81 & 0.73 & 0.80 & 0.92 & 0.82 \\ MNIST & 0.86 & N/A\({}^{*}\) & 0.74 & 0.76 & 0.61 & 0.75 & 0.87 & 0.54 & 0.85 & 0.66 \\ Libras movement & 0.67 & 0.52 & 0.56 & 0.52 & 0.52 & 0.63 & 0.57 & 0.63 & 0.54 & 0.63 \\ Robot navigation & 0.43 & 0.24 & 0.24 & 0.23 & 0.06 & 0.37 & 0.33 & 0.37 & 0.17 & 0.43 \\ SCC & 0.82 & 0.78 & 0.68 & 0.79 & 0.66 & 0.71 & 0.72 & 0.76 & 0.72 & 0.66 \\ Seeds & 0.69 & 0.71 & 0.73 & 0.71 & 0.56 & 0.60 & 0.62 & 0.68 & 0.53 & 0.60 \\ WDBC & 0.46 & 0.56 & 0.41 & 0.45 & 0.50 & 0.47 & 0.41 & 0.32 & 0.00 & 0.47 \\ Yeast & 0.27 & 0.22 & 0.26 & 0.25 & 0.25 & 0.23 & 0.22 & 0.28 & 0.19 & 0.18 \\ \hline \hline Total Rank & 33 & 52 & 66 & 71 & 83 & 81 & 73 & 81 & 99 & 72 \\ Avg rank & 2.36 & 3.71 & 4.71 & 5.07 & 5.93 & 5.79 & 5.21 & 5.79 & 7.1 & 5.14 \\ \hline \hline Rank & 1 & 2 & 3 & 4 & 9 & 7 & 6 & 7 & 10 & 5 \\ \hline \hline \end{tabular}
* NPIR failed to scale to the MNIST dataset even on a cloud server with 128 GB of memory.
\end{table}
Table 9: Comparison of the performance of DenMune with other nine algorithms, based on NMI-score, on fifteen real datasets.
\begin{table}
\begin{tabular}{l c c c c c c c c c c} \hline \hline Dataset & DenMune & NPIR & CBKM & RS & FastDP & FINCH & HDBSCAN & RCC & Spectral & KM++ \\ \hline \hline A1 & 0.94 & 0.56 & 0.62 & 0.75 & 0.51 & 0.69 & 0.42 & 0.53 & 0.64 & 0.70 \\ A2 & 0.96 & 0.55 & 0.62 & 0.61 & 0.61 & 0.73 & 0.37 & 0.53 & 0.64 & 0.60 \\ Aggregation & 0.99 & 0.65 & 0.90 & 0.90 & 0.81 & 0.57 & 0.77 & 0.65 & 0.84 & 0.54 \\ Compound & 0.97 & 0.73 & 0.56 & 0.59 & 0.30 & 0.79 & 0.62 & 0.86 & 0.36 & 0.44 \\ D31 & 0.94 & 0.55 & 0.64 & 0.65 & 0.56 & 0.63 & 0.33 & 0.65 & 0.55 & 0.60 \\ Dim-32 & 1.00 & 0.73 & 0.83 & 0.74 & 0.67 & 0.97 & 0.55 & 0.02 & 0.58 & 0.84 \\ Dim-128 & 1.00 & 0.57 & 0.74 & 0.58 & 0.70 & 0.76 & 0.62 & 0.03 & 0.74 & 0.89 \\ Dim-512 & 1.00 & 0.60 & 0.78 & 0.70 & 0.70 & 0.16 & 0.54 & 0.00 & 0.74 & 0.93 \\ Flame & 1.00 & 1.00 & 0.50 & 0.51 & 1.00 & 0.97 & 0.69 & 0.40 & 0.92 & 0.52 \\ G2-2-10 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 0.72 & 0.63 & 1.00 & 0.00 & 0.75 \\ G2-2-30 & 0.97 & 0.89 & 0.96 & 0.96 & 0.00 & 0.96 & 0.84 & 0.97 & 0.00 & 0.96 \\ G2-2-50 & 0.63 & 0.64 & 0.70 & 0.70 & 0.69 & 0.30 & 0.21 & 0.17 & 0.00 & 0.17 \\ Jain & 1.00 & 1.00 & 0.32 & 0.32 & 0.71 & 0.38 & 0.95 & 0.59 & 1.00 & 0.44 \\ Mouse & 0.97 & 0.69 & 0.54 & 0.54 & 0.92 & 0.53 & 0.40 & 0.92 & 0.90 & 0.37 \\ Pathbased & 0.92 & 0.61 & 0.42 & 0.47 & 0.43 & 0.54 & 0.50 & 0.69 & 0.42 & 0.46 \\ R15 & 0.99 & 0.65 & 0.67 & 0.72 & 0.60 & 0.83 & 0.66 & 0.98 & 0.72 & 0.50 \\ S1 & 0.99 & 0.58 & 0.68 & 0.65 & 0.58 & 0.61 & 0.63 & 0.71 & 0.73 & 0.74 \\ S2 & 0.93 & 0.56 & 0.58 & 0.56 & 0.27 & 0.54 & 0.52 & 0.52 & 0.72 & 0.55 \\ Spiral & 1.00 & 0.22 & 0.00 & 0.00 & 0.57 & 0.13 & 1.00 & 0.26 & 1.00 & 0.14 \\ Unbalance & 1.00 & 0.98 & 1.00 & 1.00 & 0.78 & 0.97 & 0.99 & 0.97 & 0.85 & 0.90 \\ Vary density & 1.00 & 1.00 & 0.87 & 0.87 & 0.57 & 0.41 & 0.87 & 0.73 & 1.00 & 0.51 \\ \hline Total Rank & 25 & 111 & 100 & 97 & 133 & 120 & 144 & 120 & 113 & 132 \\ Avg rank & 1.19 & 5.29 & 4.76 & 4.62 & 6.33 & 5.71 & 6.86 & 5.71 & 5.4 & 6.29 \\ \hline \hline Rank & 1 & 4 & 3 & 2 & 9 & 6 & 10 & 6 & 5 & 8 \\ \hline \hline \end{tabular}
\end{table}
Table 10: Comparison of the performance of DenMune with other nine algorithms, based on ARI-score, on twenty-one synthetic datasets.
\begin{table}
\begin{tabular}{l c c c c c c c c c c} \hline \hline Dataset & DenMune & NPIR & CBKM & RS & FastDP & FINCH & HDBSCAN & RCC & Spectral & KM++ \\ \hline \hline Appendicitis & 0.56 & 0.55 & 0.19 & 0.19 & 0.52 & 0.25 & 0.42 & 0.20 & 0.00 & 0.23 \\ Arecne & 0.09 & 0.10 & 0.10 & 0.10 & 0.10 & 0.01 & 0.10 & 0.03 & 0.10 & 0.10 \\ Breast cancer & 0.88 & 0.88 & 0.55 & 0.67 & 0.87 & 0.06 & 0.86 & 0.12 & 0.00 & 0.86 \\ Optical digits & 0.94 & 0.68 & 0.75 & 0.61 & 0.62 & 0.43 & 0.71 & 0.67 & 0.58 & 0.66 \\ Pendigits & 0.83 & 0.48 & 0.63 & 0.56 & 0.30 & 0.58 & 0.61 & 0.46 & 0.50 & 0.60 \\ Ecoli & 0.75 & 0.60 & 0.46 & 0.47 & 0.30 & 0.27 & 0.49 & 0.23 & 0.38 & 0.45 \\ Glass & 0.26 & 0.23 & 0.25 & 0.25 & 0.22 & 0.31 & 0.17 & 0.20 & 0.30 & 0.13 \\ Iris & 0.76 & 0.92 & 0.77 & 0.83 & 0.76 & 0.76 & 0.57 & 0.75 & 0.94 & 0.81 \\ MNIST & 0.84 & N/A\({}^{*}\) & 0.62 & 0.64 & 0.41 & 0.59 & 0.83 & 0.08 & 0.78 & 0.46 \\ Libras movement & 0.32 & 0.25 & 0.31 & 0.26 & 0.26 & 0.26 & 0.23 & 0.23 & 0.27 & 0.22 \\ Robot navigation & 0.26 & 0.12 & 0.16 & 0.15 & 0.03 & 0.24 & 0.20 & 0.08 & 0.05 & 0.25 \\ SCC & 0.66 & 0.62 & 0.54 & 0.68 & 0.45 & 0.51 & 0.48 & 0.61 & 0.56 & 0.47 \\ Seeds & 0.69 & 0.75 & 0.74 & 0.71 & 0.48 & 0.56 & 0.58 & 0.71 & 0.42 & 0.54 \\ WDBC & 0.49 & 0.66 & 0.41 & 0.53 & 0.60 & 0.46 & 0.41 & 0.20 & 0.00 & 0.46 \\ Yeast & 0.13 & 0.11 & 0.16 & 0.14 & 0.17 & 0.13 & 0.15 & 0.18 & 0.10 & 0.11 \\ \hline \hline Total Rank & 41 & 53 & 59 & 62 & 82 & 87 & 74 & 98 & 94 & 83 \\ Avg rank & 2.93 & 3.79 & 4.21 & 4.43 & 5.86 & 6.21 & 5.29 & 7.00 & 6.7 & 5.93 \\ \hline \hline Rank & 1 & 2 & 3 & 4 & 6 & 8 & 5 & 10 & 9 & 7 \\ \hline \hline \end{tabular}
* NPIR failed to scale to the MNIST dataset even on a cloud server with 128 GB of memory.
\end{table}
Table 11: Comparison of the performance of DenMune with other nine algorithms, based on ARI-score, on fifteen real datasets.
Figure 8: Visualization of the results obtained by the ten algorithms for the Aggregation dataset.
Figure 9: Visualization of the results obtained by the ten algorithms for the Spiral dataset.
Figure 11: Visualization of the results obtained by the ten algorithms for the Jain dataset.
Figure 10: Visualization of the results obtained by the ten algorithms for the Compound dataset.
Figure 12: Visualization of the results obtained by the ten algorithms for the Pathbased dataset.
Figure 13: Visualization of the results obtained by the ten algorithms for the Mouse dataset.
Figure 14: Ground truths of the fifteen synthetic datasets, visualized using t-sne, and used for algorithms’ comparisons.
Figure 15: Ground truths of the fifteen real datasets, visualized using t-sne, and used for algorithm’s comparisons.
|
2302.00048 | Multilinear oscillatory integrals and estimates for coupled systems of
dispersive PDEs | We establish sharp global regularity of a class of multilinear oscillatory
integral operators that are associated to nonlinear dispersive equations with
both Banach and quasi-Banach target spaces. As a consequence we also prove the
(local in time) continuous dependence on the initial data for solutions of a
large class of coupled systems of dispersive partial differential equations. | Aksel Bergfeldt, Salvador Rodriguez-Lopez, David Rule, Wolfgang Staubach | 2023-01-31T19:28:33Z | http://arxiv.org/abs/2302.00048v1 | # Multilinear oscillatory integrals and estimates for coupled systems of dispersive PDEs
###### Abstract.
We establish sharp global regularity of a class of multilinear oscillatory integral operators that are associated to nonlinear dispersive equations with both Banach and quasi-Banach target spaces. As a consequence we also prove the (local in time) continuous dependence on the initial data for solutions of a large class of coupled systems of dispersive partial differential equations.
Key words and phrases: Multilinear oscillatory integral operators, systems of dispersive PDEs. 2020 Mathematics Subject Classification: 35S30, 35G20, 35G50, 42B20, 42B25. The second author has been partially supported by the Grant PID2020-113048GB-I00. The third author was partially supported by the Research School in Interdisciplinary Mathematics at Linköping University.
In proving boundedness results with quasi-Banach target spaces, although an appropriate frequency-space decomposition of the operators into various frequency regimes is available (see e.g. [15]), the classical duality method of R. Coifman and Y. Meyer [6] is no longer applicable. Therefore, in this paper we introduce an approach based on
1. vector-valued inequalities,
2. maximal functions of Hardy-Littlewood, Peetre and Park,
3. estimates for linear oscillatory integral operators,
which enable us to prove the desired boundedness results.
### Some results concerning boundedness of multilinear oscillatory integral operators
We start by giving an overview of the previously known regularity results for OIOs that are relevant to the operators considered here.
**Definition 1.1**.: _For integers \(n,N\geqslant 1\) and \(m\in\mathbb{R}\), the set of (multilinear) amplitudes \(S^{m}(n,N)\) is the set of functions \(\sigma\in\mathcal{C}^{\infty}(\mathbb{R}^{n}\times\mathbb{R}^{nN})\) that satisfy_
\[|\partial^{\alpha}_{\Xi}\partial^{\beta}_{x}\sigma(x,\Xi)|\leqslant C_{\alpha,\beta}\langle\Xi\rangle^{m-|\alpha|},\]
_for all multi-indices \(\alpha\) and \(\beta\). Here and in what follows_
\[\langle\Xi\rangle=\left(1+\sum_{j=1}^{N}|\xi_{j}|^{2}\right)^{1/2}\quad\text{for}\;\;\Xi=(\xi_{1},\ldots,\xi_{N})\in\mathbb{R}^{nN}\text{ with }\xi_{j}\in\mathbb{R}^{n},\;j=1,\ldots,N.\]
_The parameter \(m\) is referred to as the order or decay of the amplitude._
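A simple model example, included here only for illustration, is the \(x\)-independent amplitude \(\sigma(x,\Xi)=\langle\Xi\rangle^{m}\): since each \(\Xi\)-derivative of \(\langle\Xi\rangle^{m}\) gains one power of decay, one has
\[|\partial^{\alpha}_{\Xi}\langle\Xi\rangle^{m}|\leqslant C_{\alpha}\,\langle\Xi\rangle^{m-|\alpha|},\qquad\Xi\in\mathbb{R}^{nN},\]
so that \(\sigma\in S^{m}(n,N)\).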
In what follows, we shall also use
\[\widehat{f}(\xi)=\int_{\mathbb{R}^{n}}f(x)e^{-ix\cdot\xi}\;\mathrm{d}x\]
as the definition of Fourier transform of \(f\). We now consider multilinear OIOs of the form
\[T^{\Phi}_{\sigma}(f_{1},\ldots,f_{N})(x)=\int_{\mathbb{R}^{nN}}\sigma(x,\Xi) \prod_{j=1}^{N}\widehat{f}_{j}(\xi_{j})\,e^{i\Phi(x,\Xi)}\;\mathrm{d}\Xi, \tag{1}\]
where \(\sigma\in S^{m}(n,N)\) and \(\,\mathrm{d}\Xi\) denotes the normalised Lebesgue measure \(\,\mathrm{d}\Xi/(2\pi)^{nN}\).
The main goal here is to show that the operator \(T^{\Phi}_{\sigma}\), initially defined by (1) for \(f_{1},\ldots,f_{N}\in\mathscr{S}\) (the Schwartz class), extends to a bounded multilinear operator from \(X^{p_{1}}\times\ldots\times X^{p_{N}}\) to \(X^{p_{0}}\), where \(X^{p_{j}}\) are certain Banach or quasi-Banach spaces. Now, in the case that \(\frac{1}{p_{0}}=\sum_{j=1}^{N}\frac{1}{p_{j}}\), we shall refer to the corresponding regularity results as _Holder-type_ (HT for short), otherwise _non-Holder-type_ (NHT for short).
Given \(\sigma\in S^{m}(n,N)\), the phases \(\Phi\) in \(T^{\Phi}_{a}\) for which regularity results are currently known take one of the following forms:
1. \(N=2\), \(\Phi(x,\Xi)=\lambda\varphi_{0}(\Xi)+\sum_{j=1}^{2}x\cdot\xi_{j},\;\Xi\in \mathbb{R}^{2n}\) and \(\lambda\) a parameter;
2. \(N=2\), \(\Phi(x,\Xi)=\sum_{j=1}^{2}\varphi_{j}(x,\xi_{j}),\;\;\Xi\in\mathbb{R}^{2n}\); and
3. \(N\geqslant 1\), \(\Phi(x,\Xi)=\varphi_{0}(\xi_{1}+\cdots+\xi_{N})+\sum_{j=1}^{N}(x\cdot\xi_{j}+ \varphi_{j}(\xi_{j}))\).
For the phase functions of the form _a_), one is aiming at non-Holder-type boundedness of \(T^{\Phi}_{a}\) where part of the goal is also to obtain optimal powers of \(\lambda\) in the boundedness estimates. In this case Bernicot and Germain [2] proved optimal global NHT regularity results in Lebesgue spaces, under suitable conditions on the rank of various Hessians of \(\varphi_{0}\). Their analysis also accommodates quadratic phases.
For case \(b)\), D. Rule, S. Rodriguez-Lopez and W. Staubach [14] proved optimal HT local regularity results, under the conditions that the mixed Hessians of the phase functions \(\varphi_{j}(x,\xi_{j})\) are non-vanishing (the non-degeneracy condition) and that each of these phases is positively homogeneous of degree one in \(\xi_{j}\). Note that this case only accommodates examples that are relevant to the study of the nonlinear wave equation. In [13] it was shown that for bilinear operators whose phase functions are also allowed to behave quadratically, one can prove an \(L^{2}\times L^{2}\to L^{1}\) boundedness result. A. Bergfeldt and W. Staubach [1] extended this to the case of globally defined multilinear operators and all possible Banach target spaces.
For case \(c)\), Rule, Rodriguez-Lopez and Staubach [15] proved optimal HT global regularity results in the general multilinear case, under the condition that the phase functions \(\varphi_{j}\) are positively homogeneous of degree one. In this context, only the case of Banach target spaces was investigated.
### Synopsis of the results of the paper
Given our previous discussions, there are quite a few problems that remain in the context of the regularity of oscillatory integral operators. Generally speaking, these problems are related to the nature of the amplitudes \(a(x,\Xi)\) and that of the phase functions \(\Phi(x,\Xi)\) for which one can prove various regularity results. For example one could lower the regularity of amplitudes or allow the phases to depend in a particular way on the spatial and/or frequency variables. In this paper we have chosen to look at the problem of global regularity for multilinear operators with phase functions of form \(c)\) above, partly because of its relevance to the method of space-time resonance and partly because it is a tractable halfway house that should lead to an understanding of more general phases.
To implement our agenda, and motivated by examples related to dispersive PDEs, we consider the following class of phase functions:
**Definition 1.2**.: _Let \(0<s<\infty\). A function \(\varphi\colon\mathbb{R}^{n}\to\mathbb{R}\) which belongs to \(\mathcal{C}^{\infty}(\mathbb{R}^{n}\setminus\{0\})\) and satisfies_
\[|\partial^{\alpha}\varphi(\xi)|\leqslant c_{\alpha}\,|\xi|^{s-|\alpha|}\;\; \text{for}\;\;\xi\neq 0\,\text{and}\;|\alpha|\geqslant 0, \tag{2}\]
_is called a phase function (or phase) of order \(s\)._
We note that the case of the water wave equation corresponds to the case \(s=\frac{1}{2}\), capillary waves to the case \(s=\frac{3}{2}\), the Schrodinger equation to the case \(s=2\) and the Airy equation to the case \(s=3\).
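As an illustration, the model phase \(\varphi(\xi)=|\xi|^{s}\) behind these examples is indeed a phase of order \(s\) in the sense of Definition 1.2: \(\partial^{\alpha}\varphi\) is positively homogeneous of degree \(s-|\alpha|\) and smooth on the unit sphere, so
\[|\partial^{\alpha}\varphi(\xi)|=|\xi|^{s-|\alpha|}\,\big|(\partial^{\alpha}\varphi)(\xi/|\xi|)\big|\leqslant c_{\alpha}\,|\xi|^{s-|\alpha|},\qquad\xi\neq 0.\]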
We shall say that \(0<p_{j}\leqslant\infty\) satisfy the Holder condition if
\[\frac{1}{p_{0}}=\sum_{j=1}^{N}\frac{1}{p_{j}}. \tag{3}\]
Now defining the functions spaces \(X^{p}\) as
\[X^{p}:=\begin{cases}h^{p}&\text{ if }p\leqslant 1\\ L^{p}&\text{ if }1<p<\infty\\ \text{bmo}&\text{ if }p=\infty,\end{cases} \tag{4}\]
where \(L^{p}\) is the usual Lebesgue space, \(h^{p}\) is the local Hardy space defined in Definition 2.2 below, and bmo is the dual space of \(h^{1}\), and considering phase functions of the form
\[\Phi(x,\Xi)=\varphi_{0}(\xi_{1}+\cdots+\xi_{N})+\sum_{j=1}^{N}(x\cdot\xi_{j}+ \varphi_{j}(\xi_{j})), \tag{5}\]
we have the following HT boundedness result.
**Theorem 1.3**.: _For integers \(n,N\geqslant 2\), let the exponents \(p_{j}\in(\frac{n}{n+1},\infty]\)\((j=0,\ldots,N)\) satisfy (3). Moreover let_
\[m\leqslant-(n-1)\left(\sum_{j=1}^{N}\left|\frac{1}{p_{j}}-\frac{1}{2}\right|+ \left|\frac{1}{p_{0}}-\frac{1}{2}\right|\right). \tag{6}\]
_If \(\sigma\in S^{m}(n,N)\) and \(\Phi\) is of the form (5) with each phase \(\varphi_{j}\) being smooth outside the origin and positively homogeneous of degree one, then the multilinear operator \(T_{\sigma}^{\Phi}\) initially defined by (1) for \(f_{1},\ldots,f_{N}\in\mathscr{S}\)\((\)the Schwartz class\()\), extends to a bounded multilinear operator from \(X^{p_{1}}\times\ldots\times X^{p_{N}}\) to \(X^{p_{0}}\). Moreover, the same result holds in case each \(\varphi_{j}(\xi)\) is equal to \(\left\langle\xi\right\rangle\), which is an inhomogeneous phase related to the Klein-Gordon equation, for the range \(p_{j}\in(0,\infty]\)._
For the range \(p_{0}\in[1,\infty]\) (the Banach target spaces), this theorem was proven in [15] for phase functions that are positively homogeneous of degree one (the case of the wave equation). Therefore Theorem 1.3 extends our previous result to quasi-Banach target spaces as well as to the Klein-Gordon case. Note also that the admissible dimensions in the case \(N\geqslant 2\) are necessarily greater than or equal to two (see [14]); however, if \(N=1\) then of course \(n=1\) is also allowed, since this is just the well-known boundedness result for linear Fourier integral operators [11], [12]. Our second HT boundedness result is the following.
**Theorem 1.4**.: _For integers \(N,n\geqslant 1\), and a real number \(s\in(0,\infty)\), assume that the exponents \(p_{j}\in(\frac{n}{n+\min(1,s)},\infty]\)\((j=0,\ldots,N)\) satisfy (3). Suppose also that \(\sigma\in S^{m}(n,N)\) and \(\Phi\) is of the form (5) with each phase \(\varphi_{j}\)\((j=0,1,\ldots,N)\) of order \(s\) and_
\[m\leqslant-sn\left(\sum_{j=1}^{N}\left|\frac{1}{p_{j}}-\frac{1}{2}\right|+ \left|\frac{1}{p_{0}}-\frac{1}{2}\right|\right). \tag{7}\]
_Then the multilinear operator \(T_{\sigma}^{\Phi}\) initially defined by (1) for \(f_{1},\ldots,f_{N}\in\mathscr{S}\), extends to a bounded multilinear operator from \(X^{p_{1}}\times\ldots\times X^{p_{N}}\) to \(X^{p_{0}}\). Moreover, if the functions \(\varphi_{j}\) are all in \(\mathcal{C}^{\infty}(\mathbb{R}^{n})\)\((\)the Schrodinger case is such an example\()\), then the ranges of the exponents \(p_{j}\) in the theorem can be extended to \(p_{j}\in(0,\infty]\)._
**Remark 1.5**.: _If in Theorems 1.3 and 1.4, the phase function \(\varphi_{0}=0\)\((\)and \(n>1\) in the case of multilinear FIOs\()\), then the order of the decay \(m\) can be improved by just removing the term \(-sn|1/p_{0}-1/2|\)\((\)or \(-(n-1)|1/p_{0}-1/2|)\) from the \(m\)'s given in those theorems._
Theorem 1.4 has no predecessor in the literature and covers the cases of water wave, capillary wave, Schrodinger, Korteweg-de Vries and many other higher order dispersive equations. Moreover, this result, in contrast to Theorem 1.3, applies in all dimensions, when \(N\geqslant 1\).
In proving Theorems 1.3 and 1.4, we make use of several global boundedness results: Those for linear Klein-Gordon equations, proved by J. Peral [12] (for \(X^{p}\) with \(1<p<\infty\)); those for linear wave equations, proved by S. Rodriguez-Lopez, D. Rule and W. Staubach [13] (for \(X^{p}\) with \(n/(n+1)<p\leqslant\infty\)); and those for higher order equations, proved by A.J. Castro, A. Israelsson, W. Staubach and M. Yerlanov [5] (for \(X^{p}\) with \(n/(n+\min(1,s))<p\leqslant\infty\)).
The methods involved in proving the multilinear results in the realm of Banach spaces are essentially the same as the ones used by us to prove the boundedness of multilinear FIOs in [13], which are based on non-trivial extensions of the Coifman-Meyer methods in [6] to the case of multilinear operators with nonlinear phase functions.
Thus, one writes the multilinear operator as a sum of operators whose amplitudes have specific support properties in the frequency variable \(\Xi\). One term has compact frequency support, and for the other terms one has either that some \(|\xi_{j}|\) dominates \(\Xi\), or that \(|\xi_{j}|\approx|\xi_{k}|\) for certain \(j\) and \(k\) on the support of the amplitude in question. Thereafter one identifies the end-points that are needed to apply complex interpolation; proving these end-point results in turn creates a number of cases, which are dealt with according to whether the target spaces are Banach or quasi-Banach.
In the case of Banach target spaces and for the term that is compactly supported in the frequency, and those where \(|\xi_{j}|\) dominates \(\Xi\), the machinery of [15] can be used without difficulty. However for the parts where \(|\xi_{j}|\approx|\xi_{k}|\) (and for the target spaces bmo and \(L^{2}\)), one needs a result, provided in Proposition 4.4, that demonstrates how certain oscillatory integral operators give rise to Carleson measures, with an estimate on their Carleson-norms. With this result at hand, the rest of the analysis is as in the case of multilinear FIOs in [15].
The major hindrance to overcome here is that, in the realm of quasi-Banach spaces, all Coifman-Meyer-type approaches, including the ones used in [15] or [14], fail because of the impossibility of using duality arguments. Thus, to prove results in the quasi-Banach realm, it behoves us to use a different method, and this is one of the novelties of the approach developed in this paper. To obtain the end-point results of this paper, our approach will be mainly based on various vector-valued inequalities. To our knowledge, using this type of estimate to derive bounds for multilinear oscillatory integral operators is new. The treatment that we describe here is rather technical; however, it is fairly general in nature and can be used in other contexts as well. We should mention, however, that this approach requires some degree of decay in the terms that represent the portion of the multilinear operators where \(|\xi_{j}|\approx|\xi_{k}|\). As such, the case of \(L^{2}\)-target spaces cannot be subsumed in the quasi-Banach methods, due precisely to that lack of decay. In addition, the lack of a convenient vector-valued characterisation of bmo means that the method developed here also cannot be applied in the case of a bmo-target space. Fortunately, the \(L^{2}\) and bmo-target space cases can be handled by the strategies mentioned earlier, so that, in the end, we arrive at all the desired results for both Banach and quasi-Banach targets, albeit with a slightly longer proof than one might have hoped.
The main motivation for our work was provided by a series of papers of F. Bernicot and P. Germain [2, 3, 4] regarding coupled systems of dispersive PDEs, where the authors derived bilinear dispersive estimates for these systems in dimensions 1, 2 and 3, in light of the method of space-time resonances. To briefly recall the setting of Bernicot-Germain's investigation, let \(\zeta(\Xi)\) be a smooth symbol and let \(T_{\zeta}\) be the associated multilinear paraproduct defined by
\[T_{\zeta}(f_{1},\ldots,f_{N})(x):=\int_{\mathbb{R}^{nN}}\zeta(\Xi)\prod_{j=1} ^{N}\left(\widehat{f}_{j}(\xi_{j})e^{ix\cdot\xi_{j}}\right)\,\mathrm{d}\Xi, \tag{8}\]
where \(\xi_{j}\in\mathbb{R}^{n}\)\((j=1,\ldots,N)\) and \(\Xi=(\xi_{1},\ldots,\xi_{N})\in\mathbb{R}^{nN}\). Furthermore, for \(j=0,\ldots,N\), let
\[\varphi_{j}(D)\,f(x)=\int_{\mathbb{R}^{n}}\varphi_{j}(\xi)\,\widehat{f}(\xi)\, e^{ix\cdot\xi}\,\,\mathrm{d}\xi,\]
where \(\,\mathrm{d}\xi\) denotes the normalised Lebesgue measure \(\,\mathrm{d}\xi/(2\pi)^{n}\). Consider now the coupled system of dispersive equations
\[\left\{\begin{array}{l}i\partial_{t}u+\varphi_{0}(D)\,u=T_{\zeta}\left(v_{1},\ldots,v_{N}\right)\\ i\partial_{t}v_{j}+\varphi_{j}(D)\,v_{j}=0,\,\,\,j=1,\ldots,N\end{array}\right.\quad\text{with}\quad\left\{\begin{array}{l}u(0,x)=0\\ v_{j}(0,x)=f_{j}(x),\,\,\,j=1,\ldots,N.\end{array}\right.\]
The functions \(u\) and \(v_{j}\) are complex valued, and each \(f_{j}\) maps \(\mathbb{R}^{n}\) to \(\mathbb{C}\).
The above system is used in order to study the nonlinear interaction of free waves, as a first step towards understanding a nonlinear dispersive equation \(i\partial_{t}u+\varphi(D)u=F(u)\), with a suitable nonlinearity. Thus given \(f_{j}\) in some function spaces, one would like to understand the behaviour of \(u\) in some other function spaces.
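To make the connection with the operators (1) explicit, we sketch the standard Duhamel computation (the signs and constants below follow the conventions of this section and are included only as an illustration). Since each free wave is \(v_{j}(\tau)=e^{i\tau\varphi_{j}(D)}f_{j}\), Duhamel's formula gives
\[u(t,x)=-i\int_{0}^{t}e^{i(t-\tau)\varphi_{0}(D)}\,T_{\zeta}\big(v_{1}(\tau),\ldots,v_{N}(\tau)\big)(x)\,\mathrm{d}\tau=-i\int_{0}^{t}\int_{\mathbb{R}^{nN}}\zeta(\Xi)\,e^{i\Phi_{t,\tau}(x,\Xi)}\prod_{j=1}^{N}\widehat{f}_{j}(\xi_{j})\,\mathrm{d}\Xi\,\mathrm{d}\tau,\]
with
\[\Phi_{t,\tau}(x,\Xi)=(t-\tau)\,\varphi_{0}(\xi_{1}+\cdots+\xi_{N})+\sum_{j=1}^{N}\big(x\cdot\xi_{j}+\tau\,\varphi_{j}(\xi_{j})\big),\]
so that, for each fixed \(t\) and \(\tau\), the inner integral is a multilinear oscillatory integral operator of the form (1) with a phase of the form (5).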
Using this setting and our estimates for multilinear oscillatory integrals we are able to establish the validity of the following regularity theorem.
**Theorem 1.6**.: _Let \(s\in(0,\infty)\), \(\sigma_{k}\geqslant 0\), \(k=1,\dots,N\), \(\varkappa=\min\sigma_{k}\), \(p_{j}\in(1,\infty)\), \(j=0,\dots,N\), satisfying the Holder condition (3), and assume that \(\varphi_{k}\in\mathcal{C}^{\infty}(\mathbb{R}^{n}\backslash 0)\) are positively homogeneous of degree \(s\), and \(f_{k}\in H^{\sigma_{k},p_{k}}\). Assume further that \(T_{\zeta}\) is the multilinear multiplier given by (8) with symbol \(\zeta(\Xi)\in S^{m_{\zeta}}(n,N)\) and set \(m_{c}(s):=-ns\sum_{j=0}^{N}\left|\frac{1}{p_{j}}-\frac{1}{2}\right|\) for \(s\neq 1\), and \(m_{c}(1)=-(n-1)\sum_{j=0}^{N}\left|\frac{1}{p_{j}}-\frac{1}{2}\right|\). Then for any \(q\in[1,\infty]\) and any \(T>0\), there exists a constant \(C_{T}>0\) such that the solution \(u(t,x)\) satisfies the regularity estimate_
\[\|u\|_{L^{q}([0,T];\,H^{\varkappa+m_{c}(s)-m_{\zeta},\,p_{0}}(\mathbb{R}^{n}))}\leqslant C_{T}\prod_{j=1}^{N}\|f_{j}\|_{H^{\sigma_{j},p_{j}}},\]
_provided that \(\varkappa+m_{c}(s)-m_{\zeta}\geqslant 0\)\((\)which is needed in order to land in a space of functions rather than a space of distributions\()\)._
Here for \(1<p<\infty\), \(\sigma\in\mathbb{R}\), \(H^{\sigma,p}=\{f\in\mathscr{S}^{\prime};\,(1-\Delta)^{\sigma/2}f\in L^{p}(\mathbb{R}^{n})\}\) is the \(L^{p}\)-based Sobolev space with the norm \(\|f\|_{H^{\sigma,p}}:=\|(1-\Delta)^{\sigma/2}f\|_{L^{p}}\).
The paper is organised as follows. In Section 2 we recall the basic notions and tools from Fourier analysis and state some fairly general results that will also be used in the proof of Theorems 1.3 and 1.4. In Section 3 we briefly discuss the sharpness of the order of the decay of the operators in the bilinear setting. In Section 4 we state and prove several results in the vector-valued setting for linear OIOs as well as a key proposition regarding the OIOs giving rise to Carleson measures. Section 5 recalls briefly the frequency decomposition that was introduced in [15], and which will be used throughout the paper. In Section 6 we briefly discuss the endpoint cases that are going to be considered in the Banach-target case. Section 7 contains the proofs of Theorems 1.3 and 1.4. Finally Section 8 is devoted to the proof of Theorem 1.6 on the Sobolev regularity of the solutions to coupled systems of dispersive partial differential equations.
## 2. Definitions and Preliminaries
Here we collect all the definitions and basic results that will be used in the forthcoming sections, in order to make the paper essentially self-contained.
We shall denote constants which can be determined by known parameters in a given situation, but whose values are not crucial to the problem at hand, by \(C\) or \(c\), sometimes adding a subscript, for example \(c_{\alpha}\), to emphasise a dependency on a given parameter \(\alpha\). Such parameters are those which determine function spaces, such as \(p\) or \(m\) for example, the dimension \(n\) of the underlying Euclidean space, and the constants connected to the seminorms of various amplitudes or phase functions. The value of the constants may differ from line to line, but in each instance could be estimated if necessary. We also write \(a\lesssim b\) as shorthand for \(a\leqslant Cb\) and \(a\approx b\) when \(a\lesssim b\) and \(b\lesssim a\). By
\[B(x,r):=\{y\in\mathbb{R}^{n}\,:\,|y-x|<r\}\]
we denote the open ball of radius \(r>0\) centred at \(x\in\mathbb{R}^{n}\).
We also recall the definition of the _Littlewood-Paley_ partition of unity which is a basic tool in harmonic analysis and theory of partial differential equations.
**Definition 2.1**.: _Let \(\vartheta\colon\mathbb{R}^{n}\to\mathbb{R}\) be a positive, radial, radially decreasing, smooth cut-off function which satisfies \(\vartheta(\xi)=1\) if \(|\xi|\leqslant 1\) and \(\vartheta(\xi)=0\) if \(|\xi|\geqslant 2\). We set \(\vartheta_{0}:=\vartheta\) and_
\[\vartheta_{j}(\xi):=\vartheta\left(2^{-j}\xi\right)-\vartheta(2^{-(j-1)}\xi),\]
_for integers \(j\geqslant 1\). Then one has the following Littlewood-Paley partition of unity:_
\[\sum_{j=0}^{\infty}\vartheta_{j}(\xi)=1\quad\text{for all $\xi\in\mathbb{R}^{n}$}.\]
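For the reader's convenience we record the elementary verification of this identity: the sum telescopes, since for every integer \(J\geqslant 1\)
\[\sum_{j=0}^{J}\vartheta_{j}(\xi)=\vartheta\big(2^{-J}\xi\big),\]
which equals \(1\) as soon as \(|\xi|\leqslant 2^{J}\); letting \(J\to\infty\) yields the partition of unity. Note also that each \(\vartheta_{j}\) with \(j\geqslant 1\) is supported in the dyadic annulus \(2^{j-1}\leqslant|\xi|\leqslant 2^{j+1}\).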
Using the definition above, let \(s\in\mathbb{R}\) and \(0<p<\infty\), \(0<q\leqslant\infty\). The Triebel-Lizorkin space is defined as
\[F_{p,q}^{s}(\mathbb{R}^{n}):=\Big{\{}f\in\mathscr{S}^{\prime}(\mathbb{R}^{n})\,:\,\|f\|_{F_{p,q}^{s}(\mathbb{R}^{n})}:=\Big{\|}\Big{\{}\sum_{j=0}^{\infty}2^{jsq}\,|\vartheta_{j}(D)f|^{q}\,\Big{\}}^{1/q}\Big{\|}_{L^{p}(\mathbb{R}^{n})}<\infty\Big{\}},\]
where \(\mathscr{S}^{\prime}(\mathbb{R}^{n})\) denotes the space of tempered distributions.
In our analysis of the boundedness of oscillatory integral operators which is based on multilinear interpolation, the end-points often involve local Hardy spaces which were introduced by D. Goldberg [8]. One of the main advantages of these spaces is that they are mapped into themselves under the action of the linear oscillatory integral operators that are considered in this paper.
**Definition 2.2**.: _The local Hardy space \(h^{p}(\mathbb{R}^{n})\), \((0<p<\infty)\) is the Triebel-Lizorkin space \(F_{p,2}^{0}\)\((\)see, for example [18]\()\) with the norm_
\[\|f\|_{h^{p}(\mathbb{R}^{n})}\approx\|\vartheta_{0}(D)f\|_{L^{p}(\mathbb{R}^{ n})}+\left\|\left(\sum_{j=1}^{\infty}|\vartheta_{j}(D)f|^{2}\right)^{\frac{1}{2} }\right\|_{L^{p}(\mathbb{R}^{n})}. \tag{9}\]
Note that the usual Hardy space \(\mathscr{H}^{p}(\mathbb{R}^{n})\) is defined by the condition
\[\|f\|_{\mathscr{H}^{p}}:=\left(\int\sup_{t>0}|\vartheta(tD)f(x)|^{p}\,\,\mathrm{ d}x\right)^{\frac{1}{p}}<\infty.\]
The dual of \(\mathscr{H}^{1}\) is the John-Nirenberg space of functions of bounded mean oscillation BMO, which consists of all functions \(f\in L^{1}_{\mathrm{loc}}\) such that
\[\|f\|_{\mathrm{BMO}}:=\sup_{Q}\frac{1}{|Q|}\int_{Q}|f(x)-\mathrm{avg}_{Q}f|\, dx<\infty,\]
where \(\mathrm{avg}_{Q}f=|Q|^{-1}\int_{Q}f\), and \(Q\) ranges over cubes in \(\mathbb{R}^{n}\). The dual of the local Hardy space \(h^{1}\) is the _local_ BMO space, which is denoted by bmo and consists of locally integrable functions that verify
\[\|f\|_{\mathrm{bmo}}\approx\|f\|_{\mathrm{BMO}}+\|\vartheta(D)f\|_{L^{\infty} }<\infty,\]
where \(\vartheta\) is the cut-off function introduced in Definition 2.1.
In the analysis of multilinear operators, a basic tool is a certain type of measure whose definition we now recall. A Borel measure \(\mathrm{d}\mu(x,t)\) on \(\mathbb{R}^{n+1}_{+}\) is called a _Carleson measure_ if
\[\|\mathrm{d}\mu\|_{\mathcal{C}}:=\sup_{Q}\frac{1}{|Q|}\int_{0}^{\ell(Q)}\int_ {Q}|\mathrm{d}\mu(x,t)|<\infty\]
where the supremum is taken over cubes \(Q\subset\mathbb{R}^{n}\) and \(\ell(Q)\) denotes the side length of \(Q\) and \(|Q|\) its Lebesgue measure. The quantity \(\|\mathrm{d}\mu\|_{\mathcal{C}}\) is called the _Carleson norm_ of \(\,\mathrm{d}\mu\). An equivalent norm is given if cubes are replaced with balls. In this paper we are exclusively
interested in Carleson measures which are supported on lines parallel to the boundary of \(\mathbb{R}^{n+1}_{+}\). More precisely, in what follows all Carleson measures will be supported on the set
\[E:=\{(x,t)\,:\,x\in\mathbb{R}^{n}\text{ and }t=2^{-k}\text{ for some }k\in\mathbb{Z}\}\]
so they take the form
\[\sum_{k\in\mathbb{Z}}\mathrm{d}\mu(x,t)\delta_{2^{-k}}(t),\]
where \(\delta_{2^{-k}}(t)\) is a Dirac measure at \(2^{-k}\). This will be assumed throughout without further comment.
The following basic results concerning the Carleson measure and the quadratic estimate are very useful in the context of multilinear operators. See E. M. Stein [16] for the proofs.
**Lemma 2.3**.: _Let \(\,\mathrm{d}\mu(x,t)\) be a Carleson measure. If \(\varphi\) satisfies \(|\varphi(x)|\lesssim\langle x\rangle^{-n-\varepsilon}\)_ (for some \(0<\varepsilon<\infty\)), then
\[\sum_{k}\int_{\mathbb{R}^{n}}|\varphi(2^{-k}D)f(x)|^{2}\,\,\mathrm{d}\mu(x,2^ {-k})\leqslant C_{n}\left\|\mathrm{d}\mu\right\|_{\mathcal{C}}\left\|f\right\| _{L^{2}}^{2}, \tag{10}\]
_and if \(\varphi\) is a bump function supported in a ball near the origin with \(\varphi(0)=1\) then one also has_
\[\sum_{k}\int_{\mathbb{R}^{n}}|\varphi(2^{-k}D)f(x)|\,\,\mathrm{d}\mu(x,2^{-k} )\leqslant C_{n}\left\|\,\mathrm{d}\mu\right\|_{\mathcal{C}}\left\|f\right\| _{h^{1}}. \tag{11}\]
_If \(\varphi\in\mathscr{S}\) is such that \(\varphi(0)=0\), then_
\[\sum_{k}\int\left|\varphi(2^{-k}D)f(x)\right|^{2}\,\mathrm{d}x\lesssim\left\| f\right\|_{L^{2}}^{2}. \tag{12}\]
In our investigations we will also confront three types of maximal operators. The first one is the Hardy-Littlewood maximal operator
\[\mathcal{M}f(x):=\sup_{B\ni x}\frac{1}{|B|}\int_{B}|f(y)|dy,\]
where the supremum is taken over all balls \(B\) containing \(x\). For \(0<p<\infty\), one also defines \(\mathcal{M}_{p}f(x):=(\mathcal{M}\left(|f|^{p}\right))^{1/p}\).
The second one is J. Peetre's maximal operator [18].
\[\mathfrak{M}_{a,b}(f)(x):=\left\|\frac{f(x-\cdot)}{(1+b\left|\cdot\right|)^{a }}\right\|_{L^{\infty}} \tag{13}\]
where \(0<a,b<\infty\). For any \(x\in\mathbb{R}^{n}\), \(f\in\mathscr{S}^{\prime}\) with \(\operatorname{supp}\hat{f}\subset\{\xi;\,|\xi|\leqslant 2b\}\) and \(a\geqslant\frac{n}{p}\) one has that
\[\mathfrak{M}_{a,b}u(x)\lesssim\mathcal{M}_{p}u(x). \tag{14}\]
The third type of maximal operator that will be used in this paper is B.J. Park's maximal operator [10]: For \(j\in\mathbb{Z}\), \(s>0\) and \(0<p\leqslant\infty\)
\[\mathfrak{M}_{s,2^{j}}^{p}f(x):=2^{jn/p}\left\|\frac{f(x-\cdot)}{(1+2^{j}| \cdot|)^{s}}\right\|_{L^{p}}. \tag{15}\]
Park's maximal operator has the following properties: If \(0<p<\infty\) and \(s>n/p\), then
\[\mathfrak{M}_{s,2^{j}}^{p}f(x)\lesssim\mathcal{M}_{p}f(x), \tag{16}\]
uniformly in \(j\in\mathbb{Z}.\) Moreover if the set of all dyadic cubes in \(\mathbb{R}^{n}\) is denoted by \(\mathcal{D},\) and for each \(j\in\mathbb{Z}\) one denotes the elements of \(\mathcal{D}\) with side length \(2^{-j}\) by \(\mathcal{D}_{j},\) then for every dyadic cube \(J\in\mathcal{D}_{j}\) and for every \(s>0,\)\(0<p<\infty\) and \(f,\)
\[\sup_{y\in J}\mathfrak{M}_{s,2^{j}}^{p}f(y)\lesssim\inf_{y\in J}\mathfrak{M}_{ s,2^{j}}^{p}f(y), \tag{17}\]
with constants independent of \(f\) and \(j.\)
Using the maximal operator \(\mathfrak{M}_{a,b},\) Park has given a useful characterisation of the Hardy and BMO spaces, in the following theorem.
**Theorem 2.4**.: _[_10_]__. Let \(\Lambda\in\mathscr{S}\) be a function whose Fourier transform is supported in the annulus \(1/2\leqslant|\xi|\leqslant 2\) and set \(\widehat{\Lambda}\left(\cdot/2^{j}\right)=\widehat{\Lambda_{j}}\) so that one has the partition of unity \(\sum_{j\in\mathbb{Z}}\widehat{\Lambda}_{j}\left(\xi\right)=1\) for \(\xi\neq 0.\) Assume that \(0<p\leqslant\infty\), \(0<q\leqslant\infty\), \(0<\gamma<1\), and \(s>n/\min(p,2,q)\). Then for each dyadic cube \(Q\in\mathcal{D}\), there exists a proper measurable subset \(S_{Q}\) of \(Q\), depending on \(\gamma,s,q\) and \(f\), such that \(|S_{Q}|>\gamma|Q|\) and_
\[\|f\|_{Y^{p}}\approx\left\|\left\{\sum_{Q\in\mathcal{D}_{j}}\left(\inf_{y\in Q }\mathfrak{M}_{s,2^{j}}^{q}\left(\Lambda_{j}\ast f\right)(y)\right)\chi_{S_{Q }}\right\}_{j\in\mathbb{Z}}\right\|_{L^{p}(\ell^{2})}\]
_where \(Y^{p}=\mathscr{H}^{p}\) for \(0<p<\infty\) and \(Y^{\infty}=\mathrm{BMO}.\)_
Now in connection to the Hardy-Littlewood maximal operator defined above, a useful device in proving multilinear estimates is the Fefferman-Stein vector-valued maximal inequality [7, Theorem 1], which states that for \(r<p,\)\(q<\infty,\) or \(0<p<\infty,\)\(q=\infty\) or for \(p=q=\infty,\) one has
\[\left\|\left\{\mathcal{M}_{r}f_{j}\right\}_{j\in\mathbb{Z}}\right\|_{L^{p}( \ell^{q})}\lesssim\left\|\left\{f_{j}\right\}_{j\in\mathbb{Z}}\right\|_{L^{p} (\ell^{q})}. \tag{18}\]
The following theorem gives a corresponding vector-valued inequality involving Park's maximal operator.
**Theorem 2.5**.: _[_10_]__. Let \(0<p,q,r\leqslant\infty\) and \(s>n/\min(p,q,r).\) Suppose that the Fourier transform of \(f_{j}\) is supported in a ball of radius \(A2^{j}\) for some \(A>0.\) Then for \(0<p<\infty\) and \(0<q\leqslant\infty\) or for \(p=q=\infty\), one has_
\[\left\|\left\{\mathfrak{M}_{s,2^{j}}^{r}f_{j}\right\}_{j\in\mathbb{Z}}\right\| _{L^{p}(\ell^{q})}\lesssim\left\|\left\{f_{j}\right\}_{j\in\mathbb{Z}}\right\| _{L^{p}(\ell^{q})} \tag{19}\]
We will also need the following vector valued inequality due to H. Triebel [18, Theorem 2, Section 2.4.9].
**Theorem 2.6**.: _If \(G_{k}\) is a sequence of functions with \(\operatorname{supp}\widehat{G_{k}}\subset B(0,2^{k}R)\), for \(k=0,1,\dots\) and \(R\geqslant 1\), then for \(0<r<\infty\) and \(0<q<\infty\) one has the following vector-valued inequality: For \(\mathfrak{m}\in H^{\alpha}(\mathbb{R}^{n})\)\((\)the Sobolev space \(H^{\alpha,2}\) of order \(\alpha\) defined in the introduction section\()\), and \(\widehat{\mathfrak{m}(2^{-k}D)f}(\xi)=\mathfrak{m}(2^{-k}\xi)\hat{f}(\xi)\), with_
\[\alpha>n\left(\frac{1}{\min(1,r,q)}-\frac{1}{2}\right),\]
_there is a constant \(C>0\) independent of \(R\) and \(G_{k}\)'s, such that_
\[\left\|\left\{\mathfrak{m}(2^{-k}D)G_{k}\right\}_{k\in\mathbb{Z}}\right\|_{L^{r }(\ell^{q})}\leqslant C\|\mathfrak{m}\|_{H^{\alpha}}\left\|\left\{G_{k} \right\}_{k\in\mathbb{Z}}\right\|_{L^{r}(\ell^{q})}. \tag{20}\]
We note that the multilinear amplitudes defined in Definition 1.1 reduce to the classical Hormander classes \(S^{m}\) of _amplitudes_ (or _symbols_) in the case \(N=1,\) that is to say
\(S^{m}=S^{m}(n,1)\). The linear OIOs are the special case of (1) when \(N=1\), in which case we have
\[T_{a}^{\varphi}f(x):=\int_{\mathbb{R}^{n}}e^{ix\cdot\xi+i\varphi(\xi)}a(x,\xi) \widehat{f}(\xi)\,\mathrm{d}\xi, \tag{21}\]
for a given amplitude \(a\in S^{m}\) and phase function \(\varphi\). In the proofs in the forthcoming sections we will also use the notion of _multilinear pseudodifferential operators_ which are operators of the form
\[T_{\sigma}(f_{1},\dots,f_{N})(x)=\int_{\mathbb{R}^{nN}}\sigma(x,\Xi)\prod_{j=1 }^{N}\widehat{f}_{j}(\xi_{j})\,e^{i\sum_{j=1}^{N}x\cdot\xi_{j}}\,\,\mathrm{d}\Xi.\]
For the analysis of the low-frequency portion of the operators, where the singularity of the phase functions usually lies, we recall a linear result proved in [5], which established the \(h^{p}\)-boundedness of the low-frequency portions of oscillatory integral operators whose multilinear generalisations are considered in this paper.
**Lemma 2.7**.: _Let \(s>0\), \(s_{c}:=\min(s,1)\), \(a(x,\xi)\) be a symbol that is compactly supported and smooth outside the origin in the \(\xi\)-variable and \(\varphi(\xi)\in\mathcal{C}^{\infty}(\mathbb{R}^{n}\setminus\{0\})\) be a phase function. Also assume that the following conditions hold:_
\[\begin{cases}\|\partial_{\xi}^{\alpha}a(\cdot,\xi)\|_{L^{\infty}(\mathbb{R}^{ n})}\leqslant c_{\alpha},&|\alpha|\geqslant 0,\\ |\partial_{\xi}^{\alpha}\varphi(\xi)|\leqslant c_{\alpha}|\xi|^{s-|\alpha|},&| \alpha|\geqslant 0,\end{cases}\]
_for \(\xi\neq 0\) and on the support of \(a(x,\xi).\) Let_
\[K(x,y):=\int_{\mathbb{R}^{n}}a(x,\xi)\,e^{i\varphi(\xi)+i(x-y)\cdot\xi}\, \mathrm{d}\xi.\]
_Then one has:_
1. \(|K(x,y)|\lesssim\langle x-y\rangle^{-n-\varepsilon s_{c}}\) _for any_ \(0\leqslant\varepsilon<1\)_._
2. _For every_ \(r\in(n/(n+\varepsilon s_{c}),1]\) _one has, for every_ \(f\in\mathscr{S}^{\prime}\) _with frequency support inside the unit ball and_ \(T_{a}^{\varphi}\) _defined as in (_21_), that_ \[|T_{a}^{\varphi}f(x)|\lesssim\mathcal{M}_{r}f(x),\quad x\in\mathbb{R}^{n}.\]
3. _For every_ \(\frac{n}{n+s_{c}}<p\leqslant\infty\)_, and all_ \(f\in X^{p}\)_,_ \[\|T_{a}^{\varphi}f\|_{X^{p}}\lesssim\|f\|_{X^{p}}\,.\]
Proof.: The proof of the first statement can be found in [5, Lemma 4.3].
For the second statement we can apply (i) to obtain that
\[|T_{a}^{\varphi}f(x)|\lesssim|(\vartheta(D)f)*\langle\cdot\rangle^{-n- \varepsilon s_{c}}|\lesssim\mathcal{M}_{r}(\vartheta(D)f)(x)\]
for all \(f\in\mathscr{S}\), \(r\in(\frac{n}{n+\varepsilon s_{c}},1]\) and \(\varepsilon\in(0,1)\).
We can prove the third statement by choosing \(\frac{n}{n+s_{c}}<r<p\) and making use of the boundedness of the maximal operator \(\mathcal{M}\) on \(L^{p/r}\) to obtain
\[\|T_{a}^{\varphi}f\|_{h^{p}}\lesssim\|T_{a}^{\varphi}f\|_{L^{p}}\lesssim\| \mathcal{M}(|\vartheta(D)f|^{r})\|_{L^{p/r}}^{1/r}\lesssim\|\vartheta(D)f\|_{L ^{p}}\lesssim\|f\|_{h^{p}},\]
where the last inequality follows by (9) in Definition 2.2. In the case of \(p=\infty\) for which \(X^{p}=\mathrm{bmo}\), we just observe that the integral kernel of the adjoint of \(T_{a}^{\varphi}\) is given by \(\int_{\mathbb{R}^{n}}a(y,\xi)\,e^{-i\varphi(\xi)-i(x-y)\cdot\xi}\,\mathrm{d}\xi\), for which one can deduce a similar decay estimate as in \((i)\). Therefore by the same reasoning as above one has that \(\|(T_{a}^{\varphi})^{*}f\|_{h^{1}}\lesssim\|f\|_{h^{1}}\) and hence \(T_{a}^{\varphi}\) is bounded on \(\mathrm{bmo}\).
As was mentioned earlier, the proofs of Theorems 1.3 and 1.4 also use the following linear results:
**Theorem 2.8**.: _Let \(m=-(n-1)\left|\frac{1}{p}-\frac{1}{2}\right|\) and \(\frac{n}{n+1}<p\leqslant\infty\). Then any \(\mathrm{FIO}\) of the form_
\[T_{\sigma}^{\varphi}f(x)=\int_{\mathbb{R}^{n}}\sigma(x,\xi)\,e^{ix\cdot\xi+i \varphi(\xi)}\widehat{f}(\xi)\,\mathrm{d}\xi,\]
_with an amplitude \(\sigma(x,\xi)\in S^{m}\) and a real-valued phase function \(\varphi\in\mathcal{C}^{\infty}(\mathbb{R}^{n}\setminus\{0\})\) that is positively homogeneous of degree one, satisfies the estimate_
\[\|T_{\sigma}^{\varphi}f\|_{X^{p}}\leqslant C\,\|f\|_{X^{p}}\,,\]
_where \(X^{p}\) is defined in (4). Moreover, the same result also holds for \(0<p<\infty,\) if \(\varphi(\xi)\) is equal to the inhomogeneous phase function \(\langle\xi\rangle\) (the case of the Klein-Gordon equation)._
Proof.: For homogeneous phase functions, this result was established in [15, Theorem 3.1]. For the proof for \(\varphi(\xi)=\langle\xi\rangle\) we sketch an argument from [9]. One first separates the amplitude \(\sigma(x,\xi)\) into low and high frequency portions. For the low frequency part we have the result thanks to Lemma 2.7, and for the high frequency part one can write \(\sigma(x,\xi)e^{i\langle\xi\rangle}=\tilde{\sigma}(x,\xi)e^{i|\xi|}\) with \(\tilde{\sigma}\in S^{m}\) and thereafter apply Theorem 3.1 from [15] once again.
For other classes of OIOs, the following theorem was proven in [5, Theorem 3.5].
**Theorem 2.9**.: _Let \(0<s<\infty\), \(m=-ns\left|\frac{1}{p}-\frac{1}{2}\right|\) and \(\frac{n}{n+\min(s,1)}<p\leqslant\infty\). Then any linear oscillatory integral operator_
\[T_{\sigma}^{\varphi}f(x)=\int_{\mathbb{R}^{n}}\sigma(x,\xi)\,e^{ix\cdot\xi+i \varphi(\xi)}\widehat{f}(\xi)\,\mathrm{d}\xi,\]
_with an amplitude \(\sigma(x,\xi)\in S^{m}\) and a phase function \(\varphi\) satisfying (2), satisfies the estimate_
\[\|T_{\sigma}^{\varphi}f\|_{X^{p}}\leqslant C\,\|f\|_{X^{p}}\,.\]
_Moreover, if the phase function \(\varphi\) is in \(\mathcal{C}^{\infty}(\mathbb{R}^{n}),\) then the range of \(p\) in the theorem can be extended to \(p\in(0,\infty]\)._
## 3. On the sharpness of the orders of the operators
Here, building on the example in [15] and the sharpness results in [11], we construct examples which show the sharpness of [14, Theorem 2.7] for certain values of the function space exponents. They also serve as examples which show the sharpness of our main results here (Theorems 1.3 and 1.4) when the target space is \(L^{2}\). As such, we consider the case of bilinear operators with \(\varphi_{0}=0\), and the failure of \(L^{p}\times L^{q}\to L^{r}\) boundedness (in the cases \(p,q\leqslant 2\) and \(p,q\geqslant 2\)). At the very end of the section we consider the case of \(\varphi_{0}\neq 0\) but only for \(r=2\).
So let us first consider the operator
\[B(f,g)(x)=\int_{\mathbb{R}^{2n}}a(\xi,\eta)\widehat{f}(\xi)\,\widehat{g}(\eta )\,e^{ix\cdot(\xi+\eta)}\,e^{i\varphi(\xi)-i\varphi(\eta)}\,\,\mathrm{d}\xi\, \,\mathrm{d}\eta,\]
with \(\varphi(\xi)=|\xi|^{s}\),
\[a(\xi,\eta)=\sum_{k=0}^{\infty}\vartheta_{k}(\xi)\overline{\vartheta_{k}(- \eta)}b_{1}(\xi)\overline{b_{2}(-\eta)},\]
and \(b_{j}(\xi)=(1-\vartheta_{0}(\xi))|\xi|^{m_{j}}\)\((j=1,2)\), so that \(a\in S^{m}_{1,0}(n,2)\), with \(m=m_{1}+m_{2}\).
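Indeed, with \(\vartheta_{k}\) as in Definition 2.1, on the support of each summand one has \(|\xi|\approx|\eta|\approx 2^{k}\) (with only finitely many overlaps in \(k\)), so that
\[|\partial_{\xi}^{\alpha}\partial_{\eta}^{\beta}a(\xi,\eta)|\lesssim|\xi|^{m_{1}-|\alpha|}\,|\eta|^{m_{2}-|\beta|}\lesssim\langle(\xi,\eta)\rangle^{m-|\alpha|-|\beta|}.\]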
The parameter \(m\) and the order \(s\) of \(\varphi\) will be specified later, but we have in mind that \(m\) should fail to satisfy either (6) or alternatively (7) depending on \(s\). We compute
\[B(f,\overline{g})(x)\] \[=\int_{\mathbb{R}^{2n}}\left(\sum_{k=0}^{\infty}\vartheta_{k}( \xi)\overline{\vartheta_{k}(-\eta)}b_{1}(\xi)\overline{b_{2}(-\eta)}\right) \widehat{f}(\xi)\,\widehat{g}(-\eta)\,e^{ix\cdot(\xi+\eta)}\,e^{i\varphi(\xi) -i\varphi(\eta)}\,\,\mathrm{d}\xi\,\,\mathrm{d}\eta\] \[=\sum_{k=0}^{\infty}\left(\int_{\mathbb{R}^{n}}\vartheta_{k}(\xi )b_{1}(\xi)\widehat{f}(\xi)\,e^{ix\cdot\xi}\,e^{i\varphi(\xi)}\,\,\mathrm{d} \xi\right)\left(\int_{\mathbb{R}^{n}}\overline{\vartheta_{k}(-\eta)}b_{2}(- \eta)\widehat{g}(-\eta)\,e^{ix\cdot\eta}\,e^{-i\varphi(\eta)}\,\,\mathrm{d} \eta\right)\] \[=\sum_{k=0}^{\infty}\left(\int_{\mathbb{R}^{n}}\vartheta_{k}(\xi )b_{1}(\xi)\widehat{f}(\xi)\,e^{ix\cdot\xi}\,e^{i\varphi(\xi)}\,\,\mathrm{d} \xi\right)\overline{\left(\int_{\mathbb{R}^{n}}\vartheta_{k}(\xi)b_{2}(\xi) \widehat{g}(\xi)\,e^{ix\cdot\xi}\,e^{i\varphi(\xi)}\,\,\mathrm{d}\xi\right)}. \tag{22}\]
### Fourier integral operators
Consider \(s=1\) and
\[m=-(n-1)\left(\left|\frac{1}{p}-\frac{1}{2}\right|+\left|\frac{1}{q}-\frac{1 }{2}\right|\right)+\varepsilon\]
for some \(\varepsilon>0\).
If \(p,q\geqslant 2\) (so \(2r\geqslant 1\)) we choose
\[\lambda_{1} =\frac{n+1}{2}-\frac{1}{p}+\frac{\varepsilon}{4},\] \[\lambda_{2} =\frac{n+1}{2}-\frac{1}{q}+\frac{\varepsilon}{4},\] \[m_{1} =-\frac{n-1}{2}+\frac{n}{2r}-\frac{1}{p}+\frac{\varepsilon}{2}, \quad\text{and}\] \[m_{2} =-\frac{n-1}{2}+\frac{n}{2r}-\frac{1}{q}+\frac{\varepsilon}{2}.\]
We see directly that \(m=m_{1}+m_{2}\) and if we define \(\widehat{f}(\xi)=(1-\vartheta_{0}(\xi))|\xi|^{-\lambda_{1}}e^{-i|\xi|}\) and \(\widehat{g}(\xi)=(1-\vartheta_{0}(\xi))|\xi|^{-\lambda_{2}}e^{-i|\xi|}\), fact (II-i) from [11, page 302] shows us that \(f\in L^{p}\) and \(g\in L^{q}\). We see also that
\[b_{1}(\xi)\widehat{f}(\xi)e^{i\varphi(\xi)}=b_{2}(\xi)\widehat{g}(\xi)e^{i \varphi(\xi)}=(1-\vartheta_{0}(\xi))^{2}|\xi|^{-n(1-1/(2r))+\varepsilon/4}=: \widehat{F}(\xi).\]
so we can compute from (22) that
\[B(f,\overline{g})(x)=\sum_{k=0}^{\infty}\left|\int_{\mathbb{R}^{n}}\vartheta _{k}(\xi)\widehat{F}(\xi)\,e^{ix\cdot\xi}\,\,\mathrm{d}\xi\right|^{2}=\sum_{k =0}^{\infty}|\vartheta_{k}(D)(F)(x)|^{2}\,. \tag{23}\]
If we assume \(B\) is bounded from \(L^{p}\times L^{q}\) to \(L^{r}\), then the Littlewood-Paley characterisation of \(h^{2r}\) and the fact that \(F\) is high-frequency localised, yield
\[\|F\|_{H^{2r}}\sim\|F\|_{h^{2r}}\lesssim\left\|\left(\sum_{k=1}^{\infty}|\vartheta_{k}(D)(F)|^{2}\right)^{1/2}\right\|_{L^{2r}}=\|B(f,\overline{g})\|_{L^{r}}^{1/2}\lesssim\|f\|_{L^{p}}^{1/2}\|g\|_{L^{q}}^{1/2}.\]
However, fact (II-i) from [11, page 302] shows us that \(F\not\in H^{2r}\).
So we arrive at a contradiction, and \(B\) cannot be a bounded operator from \(L^{p}\times L^{q}\) to \(L^{r}\).
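For the reader's convenience we record the elementary arithmetic behind the choices above: since \(\frac{1}{r}=\frac{1}{p}+\frac{1}{q}\),
\[m_{1}+m_{2}=-(n-1)+\frac{n}{r}-\frac{1}{p}-\frac{1}{q}+\varepsilon=-(n-1)\Big(1-\frac{1}{p}-\frac{1}{q}\Big)+\varepsilon=m,\]
and \(m_{1}-\lambda_{1}=m_{2}-\lambda_{2}=-n\big(1-\frac{1}{2r}\big)+\frac{\varepsilon}{4}\), which is precisely the exponent appearing in the definition of \(\widehat{F}\).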
If \(p,q\leqslant 2\) we can apply a similar argument but choose instead
\[\lambda_{1} =n\left(1-\frac{1}{p}\right)+\frac{\varepsilon}{4},\] \[\lambda_{2} =n\left(1-\frac{1}{q}\right)+\frac{\varepsilon}{4},\] \[m_{1} =\frac{n-1}{2}-\frac{n}{p}+\frac{1}{2r}+\frac{\varepsilon}{2}, \quad\text{and}\] \[m_{2} =\frac{n-1}{2}-\frac{n}{q}+\frac{1}{2r}+\frac{\varepsilon}{2}.\]
We still have that \(m=m_{1}+m_{2}\), and if this time we define \(\widehat{f}(\xi)=(1-\vartheta_{0}(\xi))|\xi|^{-\lambda_{1}}\) and \(\widehat{g}(\xi)=(1-\vartheta_{0}(\xi))|\xi|^{-\lambda_{2}}\), fact (II-ii) from [11, page 302] shows us that \(f\in L^{p}\) and \(g\in L^{q}\). We once again obtain (23) but with
\[\widehat{F}(\xi)=(1-\vartheta_{0}(\xi))^{2}|\xi|^{-(n+1)/2+1/(2r)+\varepsilon /4}e^{-i|\xi|},\]
so the proof of fact (II-ii) from [11, page 302] reveals that \(F(x)\sim(1-|x|)^{-1/(2r)-\varepsilon/4}\) as \(|x|\to 1\), and hence again \(F\not\in H^{2r}\). We have therefore shown that, also for \(p,q\leqslant 2\), \(B\) is not a bounded operator from \(L^{p}\times L^{q}\) to \(L^{r}\).
### Oscillatory integral operators
We consider now either \(0<s<1\) or \(s>1\) and
\[m=-sn\left(\left|\frac{1}{p}-\frac{1}{2}\right|+\left|\frac{1}{q}-\frac{1}{2} \right|\right)+\varepsilon\]
for some \(\varepsilon>0\).
If \(p,q\geqslant 2\) we choose
\[\lambda_{1} =n\left(1-\frac{s}{2}\right)-n\frac{(1-s)}{p}+\frac{\varepsilon}{4},\] \[\lambda_{2} =n\left(1-\frac{s}{2}\right)-n\frac{(1-s)}{q}+\frac{\varepsilon}{4},\] \[m_{1} =-sn\left(\frac{1}{2}-\frac{1}{p}\right)-n\left(\frac{1}{p}-\frac{1}{2r}\right)+\frac{\varepsilon}{2},\quad\text{and}\] \[m_{2} =-sn\left(\frac{1}{2}-\frac{1}{q}\right)-n\left(\frac{1}{q}-\frac{1}{2r}\right)+\frac{\varepsilon}{2}.\]
Then we can carry out an analogous argument to that above for FIOs with \(\widehat{f}(\xi)=(1-\vartheta_{0}(\xi))|\xi|^{-\lambda_{1}}e^{-i|\xi|^{2}}\) and \(\widehat{g}(\xi)=(1-\vartheta_{0}(\xi))|\xi|^{-\lambda_{2}}e^{-i|\xi|^{2}}\). We use (I-i) instead of (II-i) from [11] to conclude that \(f\in L^{p}\) and \(g\in L^{q}\) but \(B(f,g)\not\in L^{r}\).
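As in the case of FIOs, one checks directly that these choices are consistent with the order of the amplitude: since \(\frac{1}{p}+\frac{1}{q}=\frac{1}{r}\),
\[m_{1}+m_{2}=-sn\Big(1-\frac{1}{p}-\frac{1}{q}\Big)-n\Big(\frac{1}{p}+\frac{1}{q}-\frac{1}{r}\Big)+\varepsilon=-sn\Big(1-\frac{1}{r}\Big)+\varepsilon=m.\]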
If \(p,q\leqslant 2\) we choose
\[\lambda_{1} =n\left(1-\frac{1}{p}\right)+\frac{\varepsilon}{4},\] \[\lambda_{2} =n\left(1-\frac{1}{q}\right)+\frac{\varepsilon}{4},\] \[m_{1} =-sn\left(\frac{1}{2r}-\frac{1}{2}\right)-n\left(\frac{1}{p}-\frac{1}{2r}\right)+\frac{\varepsilon}{2},\quad\text{and}\] \[m_{2} =-sn\left(\frac{1}{2r}-\frac{1}{2}\right)-n\left(\frac{1}{q}-\frac{1}{2r}\right)+\frac{\varepsilon}{2}.\]
Once again we can carry out the same argument, this time with the help of (II-ii) from [11], taking \(\widehat{f}(\xi)=(1-\vartheta_{0}(\xi))|\xi|^{-\lambda_{1}}\) and \(\widehat{g}(\xi)=(1-\vartheta_{0}(\xi))|\xi|^{-\lambda_{2}}\). We conclude that \(f\in L^{p}\) and \(g\in L^{q}\) but \(B(f,g)\not\in L^{r}\).
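Once more the choices are consistent with the order of the amplitude, since
\[m_{1}+m_{2}=-sn\Big(\frac{1}{r}-1\Big)-n\Big(\frac{1}{p}+\frac{1}{q}-\frac{1}{r}\Big)+\varepsilon=-sn\Big(\frac{1}{r}-1\Big)+\varepsilon=m.\]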
Finally turning to the case of bilinear operators of the form
\[T(f,g)(x)=\int_{\mathbb{R}^{2n}}a(\xi,\eta)\widehat{f}(\xi)\,\widehat{g}(\eta)\,e ^{ix\cdot(\xi+\eta)}\,e^{i\varphi(\xi)-i\varphi(\eta)+i\varphi_{0}(\xi+\eta)}\, \,\mathrm{d}\xi\,\,\mathrm{d}\eta,\]
we observe that \(T(f,g)(x)=e^{i\varphi_{0}(D)}(B(f,g))(x)\), with \(B(f,g)\) as above. Therefore the unitarity of the operator \(e^{i\varphi_{0}(D)}\) on \(L^{2}\) yields that the boundedness of \(T\) from \(L^{p}\times L^{q}\to L^{2}\) is equivalent to the \(L^{p}\times L^{q}\to L^{2}\) boundedness of \(B(\cdot,\cdot)\), and the discussion above establishes the sharpness of the parameters involved, in the case \(1/p+1/q=1/2\) and \(r=2\).
## 4. Basic Vector-Valued and Carleson Estimates for Oscillatory Integral Operators
Before proceeding to the boundedness results, we need the following lemma, which was proved in the case of FIOs in [15]. We also include the proof, both for the sake of completeness and for later reference.
**Lemma 4.1**.: _Let \(\vartheta\colon\mathbb{R}^{n}\to\mathbb{R}\) be a positive, radial, radially decreasing, smooth cut-off function which satisfies \(\vartheta(\xi)=1\) if \(|\xi|\leqslant 1\) and \(\vartheta(\xi)=0\) if \(|\xi|\geqslant 2\) (as defined in Definition 2.1), and set \(\theta_{k}(\xi):=\vartheta(2^{3-k}\xi)\). Furthermore let \(\omega_{k}(\xi)\) be a bump function equal to one on the support of \(\theta_{k}\). Now assume that_
\[s>0,\qquad s_{c}=\min(s,1),\qquad n/(n+s_{c})<p\leqslant\infty,\qquad m=-ns \left|\frac{1}{p}-\frac{1}{2}\right|,\]
_and for a fixed but arbitrary vector \(u\in\mathbb{R}^{n}\) set_
\[b(k,\xi):=2^{km}\omega_{k}(\xi)\quad\text{and}\quad\widehat{P_{k}^{u}(g)}(\xi ):=\theta_{k}(\xi)e^{i2^{-k}\xi\cdot u}\widehat{g}(\xi).\]
_If \(\varphi\) is a phase function of order \(s\), then one has_
\[\sup_{k}\left\|(P_{k}^{u}\circ T_{b}^{\varphi})(f)\right\|_{h^{p}}\lesssim \left\|f\right\|_{h^{p}}, \tag{24}\]
_and for \(n\geqslant 1\) one also has for \(m=-ns/2\)_
\[\sup_{k}\left\|(P_{k}^{u}\circ T_{b}^{\varphi})(f)\right\|_{L^{\infty}} \lesssim\left\|f\right\|_{\mathrm{bmo}}\quad\text{and}\quad\sup_{k}\left\|(P_ {k}^{u}\circ T_{b}^{\varphi})(f)\right\|_{h^{1}}\lesssim\left\|f\right\|_{L^{1 }}. \tag{25}\]
_The same conclusion holds for FIOs, that is, when \(s=1\) and \(\varphi\) is positively homogeneous of degree one. In that case (24) is valid for \(m=-(n-1)\left|\frac{1}{p}-\frac{1}{2}\right|\) and \(n/(n+1)<p\leqslant\infty\), and (25) is valid when \(n\geqslant 2\) and \(m=-(n-1)/2\)._
Proof.: The proof of (24) follows from the fact that the amplitude of \(P_{k}^{u}\circ T_{b}^{\varphi}\) is in \(S^{m}\) uniformly in \(k\), together with Theorem 2.9 (Theorem 2.8 in the case of FIOs).
In order to establish the first inequality in (25), we write \(b=b^{\flat}+b^{\sharp}\) where
\[b^{\flat}(k,\xi)=b(k,\xi)(1-\lambda(\xi)),\quad\text{and}\quad b^{\sharp}(k, \xi)=b(k,\xi)\lambda(\xi). \tag{26}\]
Here \(\lambda\) is a smooth function that vanishes in a neighbourhood of the origin and is equal to one outside a larger neighbourhood of the origin. Now since \(m\leqslant 0\) and \(1-\lambda\) is a low frequency cut-off, one can essentially throw away the \(\omega\) in the definition of \(b\), which would then make \(b^{\flat}\) equal to \(2^{km}(1-\lambda(\xi))\). Then by the kernel estimates for the OIOs with amplitude \(b^{\flat}\) (see e.g. Lemma 2.7), for \(f\in\mathrm{bmo}\) we have that
\[\left\|P_{k}^{u}T_{b^{\flat}}^{\varphi}(f)\right\|_{L^{\infty}}\lesssim\left\| T_{b}^{\varphi}(f)\right\|_{L^{\infty}}\lesssim\left\|(1-\lambda)(D)f\right\|_{L^{ \infty}}\lesssim\left\|f\right\|_{\mathrm{bmo}}.\]
In order to ameliorate \((P_{k}^{u}\circ T_{b^{\sharp}}^{\varphi})(f)\) so that we can better understand its action on \(\mathrm{bmo}\) functions, we employ an argument from [14, page 27]. According to that argument, for
\(n\geqslant 1\) and \(m=\frac{-ns}{2}\), one introduces the operator \(R_{k}(G)(x)=\int K_{k}(x-y)G(y)\,\mathrm{d}y\), with
\[K_{k}(z)=\sum_{\kappa\leqslant j\leqslant k}2^{jm}\Psi\left(2^{k-j}z\right)2^{n(k-j)},\quad\text{for some fixed}\;\;\kappa\in\mathbb{Z},\]
and
\[\widehat{\Psi}(\eta)=\widehat{\psi}(\eta)^{2}\left|\eta\right|^{m}=:\widehat{\psi}(\eta)\,\widehat{\tilde{\Psi}}(\eta), \tag{27}\]
where \(\widehat{\psi}\) is smooth, radial and positive with
\[\operatorname{supp}\widehat{\psi}\subset\left\{\xi:\,2^{-2}\leqslant\left| \xi\right|\leqslant 1\right\},\]
and
\[\sum_{j\in\mathbb{Z}}\widehat{\psi}(2^{j}\eta)^{2}=1\qquad\text{for any}\,\eta\neq 0.\]
Moreover, by [14, Lemma 4.8], the kernel \(K_{k}\) has the following properties:
\[\int K_{k}(z)\,\mathrm{d}z=0;\]
and for each \(0<\delta<\frac{ns}{2}\) the estimates
\[\left|K_{k}(x-y)\right|\lesssim 2^{kn}\bigg{(}1+\frac{\left|x-y\right|}{2^{-k}} \bigg{)}^{-n-\delta}\]
and
\[\left|K_{k}(x-y)-K_{k}(x-y^{\prime})\right|\lesssim 2^{k(n+1)}\left|y-y^{ \prime}\right|\]
hold for all \(x,y,y^{\prime}\in\mathbb{R}^{n}\) and \(k\in\mathbb{Z}\). Therefore the operator \(R_{k}\) satisfies
\[\sup_{k\in\mathbb{Z}}\left\|R_{k}f\right\|_{L^{q}}\lesssim\left\|f\right\|_{L^{q}},\qquad 1\leqslant q<\infty,\]
and
\[\sup_{k\in\mathbb{Z}}\left\|R_{k}f\right\|_{L^{\infty}}\lesssim\left\|f\right\| _{\mathrm{BMO}}.\]
The consequence of the above discussion is that we can write
\[R_{k}=\sum_{\kappa\leqslant j\leqslant k}Q_{j}2^{(k-j)m} \tag{28}\]
where \(Q_{j}(D):=\widehat{\Psi}(2^{-j}D)\); this enables one to replace \((P_{k}^{u}\circ T_{b^{\sharp}}^{\varphi})(f)\) by \(P_{k}^{u}\circ R_{k}\circ T_{\gamma}^{\varphi}(f)\), for \(n\geqslant 1\), where \(\gamma(\xi):=\lambda(\xi)|\xi|^{m}\in S^{-ns/2}\).
Using the BMO-\(L^{\infty}\) boundedness above, the global bmo-boundedness of OIOs with amplitudes in \(S^{-ns/2}\) (i.e. Theorem 2.9) and the \(L^{\infty}\)-boundedness of \(P_{k}^{u}\), we obtain that
\[\sup_{k}\left\|P_{k}^{u}T_{b^{\sharp}}^{\varphi}(f)\right\|_{L^{\infty}}=\sup_{k}\left\|P_{k}^{u}\circ R_{k}\circ T_{\gamma}^{\varphi}(f)\right\|_{L^{\infty}}\lesssim\left\|\lambda(D)f\right\|_{\mathrm{BMO}}\leqslant\left\|f\right\|_{\mathrm{bmo}}.\qed\]
Another useful tool in our analysis is the following lemma.
**Lemma 4.2**.: _Let_
\[s>0,\quad n\geqslant 1,\quad n/(n+s_{c})<p<\infty,\quad p\neq 2\quad\text{and} \quad m=-ns\left|\frac{1}{p}-\frac{1}{2}\right|.\]
_Assume that \(b(k,\xi)\) and \(P_{k}^{u}\) are given by the same expressions as in Lemma 4.1. Then for an OIO\(T_{b}^{\varphi}\) one has_
\[\left\|\Big{(}\sum_{k=0}^{\infty}\left|P_{k}^{u}T_{b}^{\varphi}(f)\right|^{2} \Big{)}^{1/2}\right\|_{L^{p}}\lesssim\left\|f\right\|_{h^{p}}.\]
_For an_ FIO _\(T_{b}^{\varphi}\) the same result is valid under the conditions that_ \(n\geqslant 2\), \(m=-(n-1)\left|\frac{1}{p}-\frac{1}{2}\right|\) _and_ \(n/(n+1)<p\leqslant\infty\) _with_ \(p\neq 2\).
Proof.: We only prove the result for the case of OIOs since the corresponding proof for FIOs is carried out in a similar manner. Observe that \(P_{k}^{u}T_{b}^{\varphi}\) is an oscillatory integral with amplitude
\[e^{i2^{-k}\eta\cdot u}\,2^{km}\,\omega_{k}(\eta)\,\theta_{k}(\eta)\]
and phase function \(x\cdot\eta+\varphi(\eta).\) One can also write the amplitude as
\[e^{i2^{-k}\eta\cdot u}2^{km}\,\omega_{k}(\eta)\,\theta_{k}(\eta)\] \[=\lambda(\eta)e^{i2^{-k}\eta\cdot u}2^{km}\,\omega_{k}(\eta)\,\theta_{k}(\eta)+(1-\lambda(\eta))e^{i2^{-k}\eta\cdot u}2^{km}\,\omega_{k}(\eta)\,\theta_{k}(\eta)\] \[:=\alpha_{k}+\beta_{k},\]
where \(\lambda\) is the high frequency localisation introduced in (26).
We first consider the case of \(\left\|\left(\sum_{k=0}^{\infty}|P_{k}^{u}T_{\alpha_{k}}^{\varphi}(f)|^{2} \right)^{1/2}\right\|_{L^{p}}\). Replacing \(T_{\alpha_{k}}^{\varphi}\) with \(P_{k}^{u}\circ R_{k}\circ T_{\gamma}^{\varphi}\), with \(\gamma\in S_{1,0}^{m}\), matters reduce to proving the desired boundedness for
\[(\sum_{k\geqslant 0}|(P_{k}^{u}\circ R_{k}\circ T_{\gamma}^{\varphi})f|^{2})^{ 1/2},\]
where \(T_{\gamma}^{\varphi}\) and \(R_{k}\) are as in Lemma 4.1. Now, introducing a smooth cut-off function \(\chi\) such that \(R_{k}=R_{k}(1-\chi(D))\) and using Theorem 2.9 for OIOs (or Theorem 2.8 in the case of FIOs), it is enough to prove
\[\left(\int\left(\sum_{k\geqslant 0}|P_{k}^{u}R_{k}G(x)|^{2}\right)^{p/2}\mathrm{d}x\right)^{1/p}\lesssim\|G\|_{h^{p}}. \tag{29}\]
At this point, for the sake of simplicity of the notation, we replace \(P_{k}^{u}\) by \(P_{k}\) in what follows. This modification will not cause any problems since the difference between the two operators only lies in a harmless factor \(e^{i(\cdot)\cdot u}\). Observe now that using the integral representation of \(R_{k}\), one has
\[P_{k}R_{k}G(x)=\sum_{\kappa\leqslant j\leqslant k}2^{jm}\int\big{(}\theta_{k} \ast\tilde{\Psi}_{2^{j-k}}\big{)}(y)\big{(}\psi_{2^{j-k}}\ast G\big{)}(x-y) \,\mathrm{d}y,\]
where \(\tilde{\Psi}\) is defined in (27) and \(\tilde{\Psi}_{(\cdot)}(x):=(\cdot)^{-n}\tilde{\Psi}(\frac{x}{(\cdot)})\), and \(\psi_{2^{j-k}}\) is defined in a similar way. Then for any \(\nu>0\) (to be later determined)
\[|P_{k}R_{k}G(x)|\leqslant\sum_{\kappa\leqslant j\leqslant k}2^{jm}\left(\int \Big{|}\theta_{k}\ast\tilde{\Psi}_{2^{j-k}}(y)\Big{|}\left(1+\frac{|y|}{2^{j-k }}\right)^{\nu}\,\mathrm{d}y\right)\,\mathcal{M}_{\nu,2^{k-j}}(\psi_{2^{j-k}} \ast G)(x),\]
where \(\mathcal{M}_{\nu,2^{k-j}}\) is the Peetre maximal function as defined in (13).
Now by (14) we have
\[\mathcal{M}_{\nu,2^{k-j}}(\psi_{2^{j-k}}\ast G)(x)\lesssim\mathcal{M}_{n/\nu}(\psi_{2^{j-k}}\ast G)(x),\]
for any \(x\in\mathbb{R}^{n}\). Moreover by fairly standard estimates for convolution-type integrals one has for any \(N>n+\nu\)
\[\Big{|}\theta_{k}\ast\tilde{\Psi}_{2^{j-k}}(y)\Big{|}\lesssim(2^{-k}\max\big{(} 2^{j},1\big{)})^{-n}\left(1+\frac{|y|}{2^{-k}\max\big{(}2^{j},1\big{)}}\right)^ {-N},\]
which in turn implies that
\[\sup_{\kappa\leqslant j\leqslant k}\int\Big{|}\theta_{k}\ast\tilde{\Psi}_{2^{ j-k}}(y)\Big{|}\left(1+\frac{|y|}{2^{j-k}}\right)^{\nu}\,\mathrm{d}y\lesssim 1.\]
Thus, we have the pointwise inequality
\[|P_{k}R_{k}G(x)|\lesssim\sum_{\kappa\leqslant j\leqslant k}2^{jm}\left[ \mathcal{M}\left(|\psi_{2^{j-k}}\ast G|^{\frac{n}{\nu}}\right)(x)\right]^{ \frac{\nu}{n}}.\]
Therefore, for any \(q>\max\left(\frac{n}{\nu},1\right)\)
\[\left(\sum_{k>0}|P_{k}R_{k}G(x)|^{q}\right)^{1/q} \lesssim\sum_{j=\kappa}^{\infty}2^{jm}\left(\sum_{k\geqslant j} \left[\mathcal{M}\left(|\psi_{2^{j-k}}\ast G|^{\frac{n}{\nu}}\right)(x)\right]^ {\frac{q\nu}{n}}\right)^{\frac{1}{q}}\] \[\leqslant C_{m}\left(\sum_{k\geqslant 0}\left[\mathcal{M}\left(| \psi_{k}\ast G|^{\frac{n}{\nu}}\right)(x)\right]^{\frac{q\nu}{n}}\right)^{ \frac{1}{q}}\] \[=C_{m}\left(\sum_{k\geqslant 0}\left[\mathcal{M}_{n/\nu}\left( \psi_{k}\ast G\right)(x)\right]^{q}\right)^{\frac{1}{q}}\]
where \(C_{m}=\sum_{j=\kappa}^{\infty}2^{jm}<+\infty\).
Hence, for any \(p>\frac{n}{\nu}\), the Fefferman-Stein estimate (18) yields that
\[\left\|\left(\sum_{k\geqslant 0}|P_{k}R_{k}G|^{q}\right)^{1/q}\right\|_{L^{p} \left(\mathbb{R}^{n}\right)}\lesssim\left[\int\left(\sum_{k\geqslant 0}|\psi_{k} \ast G(x)|^{q}\right)^{\frac{p}{q}}\mathrm{d}x\right]^{\frac{1}{p}}.\]
Finally the last term is equal to
\[\left[\int\left(\sum_{k\geqslant 0}|\psi_{k}\ast G(x)|^{q}\right)^{\frac{p}{ q}}\mathrm{d}x\right]^{\frac{1}{p}}\lesssim\left\|G\right\|_{F_{p,q}^{0}}.\]
Taking \(q=2\) and \(\nu>n/p\), and using Definition 2.2, we obtain
\[\left(\int\left(\sum_{k\geqslant 0}|P_{k}R_{k}G(x)|^{2}\right)^{p/2}\mathrm{d}x\right)^{1/p}\lesssim\left\|G\right\|_{h^{p}}.\]
This proves (29).
Now to treat \(\left\|\Big{(}\sum_{k=0}^{\infty}\,|P_{k}^{u}T_{\beta_{k}}^{\varphi}(f)|^{2} \Big{)}^{1/2}\right\|_{L^{p}}\), we observe that by an argument similar to the proof of Lemma 4.1 (that is to say, essentially use (24)) we have
\[\left\|\Big{(}\sum_{k=0}^{\infty}\,|P_{k}^{u}T_{\beta_{k}}^{\varphi}(f)|^{2}\Big{)}^{1/2}\right\|_{L^{p}}^{p} \lesssim\left\|\Big{(}\sum_{k=0}^{\infty}\,2^{km}|P_{k}^{u}T_{1-\lambda}^{\varphi}f|^{2}\Big{)}^{1/2}\right\|_{L^{p}}^{p}\] \[\lesssim\sum_{k=0}^{\infty}\,2^{pkm/2}\left\|P_{k}^{u}T_{1-\lambda}^{\varphi}f\right\|_{L^{p}}^{p}\] \[\lesssim\left(\sum_{k=0}^{\infty}2^{pkm/2}\right)\sup_{k\geqslant 0}\left\|P_{k}^{u}T_{1-\lambda}^{\varphi}f\right\|_{L^{p}}^{p}\] \[\lesssim\left\|f\right\|_{h^{p}}^{p}.\qed\]
**Remark 4.3**.: _A re-examination of the proofs of Lemmas 4.1 and 4.2 reveals that if \(b(k,\xi)=2^{km_{0}}\omega_{k}(\xi)\) with \(m_{0}<0\), \(1<p<\infty\) and \(m(p)=-ns\Big{|}\frac{1}{p}-\frac{1}{2}\Big{|}\), then one has_
\[\left\|\Big{(}\sum_{k=0}^{\infty}\,|P_{k}^{u}T_{b}^{\varphi}(f)|^{2}\Big{)}^{1/ 2}\right\|_{L^{p}}\lesssim\left\|f\right\|_{H^{m_{0}-m(p),p}}.\]
_where \(H^{s,p}=F_{p,2}^{s}\) is the \(L^{p}\)-based Sobolev space._
For the boundedness of multilinear OIOs with target spaces \(L^{2}\) or bmo, we need the following result about oscillatory integrals giving rise to Carleson measures, whose counterpart in the case of FIOs was proven in [15]. In contrast to the case of FIOs, the proposition below does not require any homogeneity of the phase function.
**Proposition 4.4**.: _Let \(s>0\), \(d\in S^{-ns/2}\), \(u\in\mathbb{R}^{n}\) and let_
\[Q_{k}^{u}f(x)=\frac{1}{(2\pi)^{n}}\int\psi_{k}(\xi)e^{i2^{-k}\xi\cdot u}\widehat {f}(\xi)e^{ix\cdot\xi}\,\mathrm{d}\xi,\]
_where \(k\geqslant k_{0}\in\mathbb{Z}\) and_
\[\psi_{k}(\xi)^{2}:=\vartheta(2^{-1-k}\xi)^{2}-\vartheta(2^{2-k}\xi)^{2}, \tag{30}\]
_and \(\vartheta\) is as in_ Lemma 4.1_. Then if \(\varphi\) is a phase function of order \(s>0\) and \(f\in\mathrm{bmo}\) one has that_
\[\mathrm{d}\mu_{k}(x,t)=\sum_{\ell=0}^{\infty}|(Q_{k+\ell}^{u}\circ T_{d}^{ \varphi})(f)(x)|^{2}\delta_{2^{-\ell}}(t)\,\mathrm{d}x\]
_is a Carleson measure with Carleson norm bounded by \(C_{\varepsilon}2^{-\varepsilon k}\|f\|_{\mathrm{bmo}}^{2}\). Here, for any \(\delta\in(0,1)\), \(\varepsilon\) is given by \(\min(ns/2,n\delta)\)._
Proof.: Since we can write \(Q_{k+\ell}^{u}\circ T_{d}^{\varphi}=Q_{k+\ell}^{u}\circ T_{d}^{\varphi}\circ \tilde{Q}_{k+\ell}^{u}\), where \(\tilde{Q}_{k+\ell}^{u}:\mathrm{bmo}\to L^{\infty}\) uniformly in \(k\), we first consider the case of \(f\in L^{\infty}\). Also for simplicity of the exposition we set \(u=0\) in what follows.
Now since the operator \(Q_{k+\ell}\circ T_{d}^{\varphi}\) is essentially the \((k+\ell)\)-th component of the Littlewood-Paley decomposition of the operator \(T_{d}^{\varphi}\), setting \(j=k+\ell\geqslant k_{0}\) we carry out a second microlocalisation of \(Q_{j}\circ T_{d}^{\varphi}\) in the following way.
Take a non-negative real number \(\mu\), to be fixed later, and for each \(j\), fix \(O(2^{n\mu j})\) vectors \(\xi_{j}^{\nu}\), \(\nu=1,\dots,O(2^{n\mu j})\), distributed evenly in \(\operatorname{supp}\psi_{j}\). Let \(\{\rho_{j}^{\nu}\}_{\nu}\) be a family of smooth functions, where \(\operatorname{supp}\rho_{j}^{\nu}\) is a ball of radius \(2^{(1-\mu)j}\) centred at \(\xi_{j}^{\nu}\), chosen in such a way that the supports of \(\{\rho_{j}^{\nu}\}_{\nu}\) cover \(\operatorname{supp}\psi_{j}\). One may for example take a smooth bump function \(\beta\) supported in a ball of radius \(1\) about the origin and from this form \(\rho_{j}^{\nu}(\xi)=\beta(2^{(\mu-1)j}(\xi-\xi_{j}^{\nu}))/\sum_{\kappa}\beta(2^{(\mu-1)j}(\xi-\xi_{j}^{\kappa}))\).
It is clear that these cut-off-functions satisfy
\[|\partial^{\alpha}\rho_{j}^{\nu}(\xi)|\leqslant C_{\alpha}2^{|\alpha|(\mu-1)j}.\]
With this partition of unity, we may therefore write the integral kernel of \(Q_{j}\circ T_{d}^{\varphi}\) as \(K_{j}(x,y)=\sum_{\nu}K_{j}^{\nu}(x,y)\), with
\[K_{j}^{\nu}(x,y)=\int d(\xi)\rho_{j}^{\nu}(\xi)\psi_{j}(\xi)e^{i(x-y)\cdot\xi+ i\varphi(\xi)}\,\mathrm{d}\xi.\]
In order to get desired estimates for the kernel, we rewrite the phase of this integral as
\[(x-y)\cdot\xi+\varphi(\xi) =(x-y+\nabla\varphi(\xi_{j}^{\nu}))\cdot\xi+h_{j}^{\nu}(\xi),\] \[\text{with}\qquad h_{j}^{\nu}(\xi) =\varphi(\xi)-\nabla\varphi(\xi_{j}^{\nu})\cdot\xi,\]
which in turn yields
\[K_{j}^{\nu}(x,y)=\int b_{j}^{\nu}(\xi)e^{i(x-y+\nabla\varphi(\xi_{j}^{\nu})) \cdot\xi}\,\mathrm{d}\xi,\]
where \(b_{j}^{\nu}(\xi)=d(\xi)\rho_{j}^{\nu}(\xi)\psi_{j}(\xi)e^{ih_{j}^{\nu}(\xi)}\). The mean-value theorem then yields that \(\partial_{i}h_{j}^{\nu}(\xi)=\nabla\partial_{i}\varphi(\eta)\cdot(\xi-\xi_{j} ^{\nu})\) for some \(\eta\) on the line segment between \(\xi\) and \(\xi_{j}^{\nu}\). On \(\operatorname{supp}\psi_{j}\,\rho_{j}^{\nu}\), we therefore have from (2) that
\[|\partial^{\alpha}h_{j}^{\nu}(\xi)|\lesssim\begin{cases}2^{(s-\mu-1)j}&|\alpha| =1\\ 2^{(s-|\alpha|)j}&|\alpha|>1.\end{cases}\]
If we take \(\mu\leqslant s/2\), the worst terms of \(|\partial_{\xi}^{\alpha}e^{ih_{j}^{\nu}(\xi)}|\) are hence bounded by a constant times \(2^{(s-\mu-1)|\alpha|j}\).
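Indeed, in the Leibniz-Faà di Bruno expansion of \(\partial_{\xi}^{\alpha}e^{ih_{j}^{\nu}(\xi)}\), a term containing one second-order derivative of \(h_{j}^{\nu}\) and \(|\alpha|-2\) first-order ones is of size
\[\lesssim 2^{(s-2)j}\,2^{(s-\mu-1)(|\alpha|-2)j}\leqslant 2^{(s-\mu-1)|\alpha|j}\quad\text{precisely when }2\mu\leqslant s,\]
and terms containing derivatives of \(h_{j}^{\nu}\) of order three or higher are no larger.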
With these estimates at hand, we find that on the support of \(\psi_{j}\,\rho_{j}^{\nu}\),
\[|\partial^{\alpha}b_{j}^{\nu}(\xi)| \leqslant C_{\alpha}\sum_{\sum\alpha_{\ell}=\alpha}\big{|}\partial ^{\alpha_{1}}d(\xi)\partial^{\alpha_{2}}\rho_{j}^{\nu}(\xi)\partial^{\alpha_{ 3}}\psi_{j}(\xi)\partial_{\xi}^{\alpha_{4}}(e^{ih_{j}^{\nu}(\xi)})\big{|}\] \[\leqslant C_{\alpha}\sum_{\sum\alpha_{\ell}=\alpha}2^{(-ns/2-| \alpha_{1}+\alpha_{3}|+(\mu-1)|\alpha_{2}|+(s-\mu-1)|\alpha_{4}|)j}\leqslant C _{\alpha}2^{(-ns/2+(s/2-1)|\alpha|)j},\]
where we have fixed \(\mu\) to be the optimal \(\mu=s/2\). For later convenience, we define \(\lambda=s/2-1\).
We are now ready to take on the Carleson norm estimates. To that end, we fix a ball \(B\) of radius \(r<1\) and centre \(x_{0}\). Let then \(\tau\in(0,1)\) be given by
\[\tau=\begin{cases}1-s/2&\text{if }s<2\\ 1-\delta&\text{otherwise},\end{cases}\]
where \(\delta\in(0,1)\) is arbitrary, and let \(R_{\nu}\) be the ball of radius \(2\cdot 2^{(\lambda+\tau)(j-k_{0})}r^{\tau}\) and centre \(x_{0}+\nabla\varphi(\xi_{j}^{\nu})\). Clearly then,
\[Q_{j}T_{d}^{\varphi}f(x)=\sum_{\nu}S_{j}^{\nu}(\chi_{R_{\nu}}f)(x)+\sum_{\nu} \int_{R_{\nu}^{c}}K_{j}^{\nu}(x,y)f(y)\,\mathrm{d}y,\]
where \(S_{j}^{\nu}\) is the operator with kernel \(K_{j}^{\nu}\).
For the parts inside the balls \(R_{\nu}\), we use that \(|\psi(2^{-j}\xi)d(\xi)|\lesssim 2^{-nsj/2}\), and hence \(S_{j}^{\nu}\) is bounded \(L^{2}\to L^{2}\) with operator norm estimated by \(2^{-nsj/2}\). Using this and the fact that the symbols have almost disjoint supports (that is, with a finite number of overlaps), we find that
\[\int_{B}\Big{|}\sum_{\nu}S_{j}^{\nu}(\chi_{R_{\nu}}f)(x)\Big{|}^ {2}\,\mathrm{d}x \lesssim\int\Big{|}\sum_{\nu}S_{j}^{\nu}(\chi_{R_{\nu}}f)(x) \Big{|}^{2}\,\mathrm{d}x\lesssim\sum_{\nu}2^{-nsj}\|\chi_{R_{\nu}}f\|_{L^{2}}^ {2}\] \[\leqslant\sum_{\nu}2^{-nsj}|R_{\nu}|\|f\|_{L^{\infty}}^{2} \lesssim(2^{j}r)^{(\tau-1)n}|B|\|f\|_{L^{\infty}}^{2}.\]
To find a similar estimate for the parts outside \(R_{\nu}\) we start by noting that the triangle inequality and the fact that \(\lambda+\tau\geqslant 0\) and \(j\geqslant k_{0}\) yield that for \(r\leqslant 1\), any \(x\in B\) and any \(y\) with \(y+x_{0}+\nabla\varphi(\xi_{j}^{\nu})\in R_{\nu}^{c}\) we have
\[|x-x_{0}-y|\geqslant|y|-r\geqslant\frac{1}{2}|y|+2^{(\lambda+\tau)(j-k_{0})}r^{\tau}+r\geqslant\frac{1}{2}|y|\gtrsim 2^{(\lambda+\tau)j}r^{\tau}.\]
We therefore have for any \(x\in B\) and non-negative integer \(N\) that
\[\int_{R_{\nu}^{c}}|K_{j}^{\nu}(x,y)|\,\mathrm{d}y =\int_{R_{\nu}^{c}}\Big{|}\int b_{j}^{\nu}(\xi)\,\Big{(}\frac{(x+\nabla\varphi(\xi_{j}^{\nu})-y)\cdot\nabla_{\xi}}{|x+\nabla\varphi(\xi_{j}^{\nu})-y|^{2}}\Big{)}^{N}e^{i(x+\nabla\varphi(\xi_{j}^{\nu})-y)\cdot\xi}\,\mathrm{d}\xi\Big{|}\,\mathrm{d}y\] \[=\int_{R_{\nu}^{c}}\Big{|}\int e^{i(x+\nabla\varphi(\xi_{j}^{\nu})-y)\cdot\xi}\Big{(}\frac{(x+\nabla\varphi(\xi_{j}^{\nu})-y)\cdot\nabla_{\xi}}{|x+\nabla\varphi(\xi_{j}^{\nu})-y|^{2}}\Big{)}^{N}b_{j}^{\nu}(\xi)\,\mathrm{d}\xi\Big{|}\,\mathrm{d}y\] \[\lesssim\int_{R_{\nu}^{c}}\frac{2^{-nsj/2+N\lambda j}|{\rm supp\,}\rho_{j}^{\nu}|}{|x+\nabla\varphi(\xi_{j}^{\nu})-y|^{N}}\,\mathrm{d}y\] \[=\int_{y+x_{0}+\nabla\varphi(\xi_{j}^{\nu})\in R_{\nu}^{c}}\frac{2^{-nsj/2+N\lambda j}|{\rm supp\,}\rho_{j}^{\nu}|}{|x-x_{0}-y|^{N}}\,\mathrm{d}y\] \[\lesssim\int_{|y|\geqslant 2^{(\lambda+\tau)j}r^{\tau}}\frac{2^{-nsj/2+(N-n)\lambda j}}{|y|^{N}}\,\mathrm{d}y\ \lesssim\ 2^{-nsj/2}(2^{j}r)^{(n-N)\tau}.\]
Now choose \(N\) large enough to make \(2(N-n)\tau\geqslant n(1-\tau)=:\varepsilon\). Note that from the definition of \(\tau\), we have that \(\varepsilon=\min(ns/2,n\delta)\), where \(\delta\in(0,1)\) is arbitrary. Combining this with the estimate for the part inside \(R_{\nu}\) and summing over the \(O(2^{nsj/2})\) balls, this then yields that \(\int_{B}|Q_{j}T_{d}^{\varphi}(f)(x)|^{2}\,\mathrm{d}x\lesssim(2^{j}r)^{-\varepsilon}|B|\,\|f\|_{L^{\infty}}^{2}\). Hence
\[\int_{B\times[0,r]}|\mathrm{d}\mu_{k}(x,t)| =\sum_{2^{-\ell}\leqslant r}\int_{B}|Q_{k+\ell}\circ T_{d}^{\varphi}\circ\tilde{Q}_{k+\ell}(f)(x)|^{2}\,\mathrm{d}x\] \[\lesssim\sum_{2^{-\ell}\leqslant r}(2^{k+\ell}r)^{-\varepsilon}|B|\,\|\tilde{Q}_{k+\ell}(f)\|_{L^{\infty}}^{2}\] \[\lesssim 2^{-\varepsilon k}\sum_{2^{-\ell}\leqslant r}(2^{-\varepsilon})^{\ell}r^{-\varepsilon}|B|\,\|f\|_{\mathrm{bmo}}^{2}\lesssim 2^{-\varepsilon k}|B|\,\|f\|_{\mathrm{bmo}}^{2}, \tag{31}\]
which shows the requested Carleson estimate for balls of radius smaller than \(1\).
Now if the radius \(r\) of \(B\) is larger than one, then we cover \(B\) by balls \(B_{j}\) of radius \(1/2\), observing that there are \(O(r^{n})\) such balls needed for this covering. Furthermore we observe that for \(r>1\), (31) yields that
\[\int_{B\times[0,r]}|\,\mathrm{d}\mu_{k}(x,t)| =\int_{B\times[0,1]}|\,\mathrm{d}\mu_{k}(x,t)|\leqslant\sum_{O(r^ {n})}\int_{B_{j}\times[0,1]}|\,\mathrm{d}\mu_{k}(x,t)|\] \[\lesssim\sum_{O(r^{n})}2^{-\varepsilon k}2^{-n}\left\|f\right\|_ {\mathrm{bmo}}^{2}\lesssim 2^{-\varepsilon k}|B|\left\|f\right\|_{\mathrm{bmo}}^{2}.\qed\]
## 5. Frequency decomposition of the oscillatory integral operator
Following the method in [15] for the decomposition of the amplitude \(\sigma(x,\Xi)\in S^{m}(n,N)\), we reduce the problem of regularity of \(T_{\sigma}^{\Phi}\) into considering three frequency regimes: When \(\Xi\) lies inside a compact set; when one component of \(\Xi=(\xi_{1},\ldots,\xi_{N})\) dominates the others; and when two fixed components of \((\xi_{1},\ldots,\xi_{N})\) are comparable to each other. In what follows we only describe the aspects of the amplitude decomposition which are crucial to the later sections of the paper. For the remaining details, we refer the reader to [15].
Here and in all that follows we take \(N>1\). First we define the component of \(\sigma\) with frequency support contained in a compact set. We introduce a cut-off function \(\chi\colon\mathbb{R}^{nN}\to\mathbb{R}\), such that \(\chi(\Xi)=1\) for \(|\Xi|\leqslant 1/8\) and \(\chi(\Xi)=0\) for \(|\Xi|\geqslant 1/4\) and define
\[\sigma_{0}(x,\Xi)=\chi(\Xi)\,\sigma(x,\Xi). \tag{32}\]
To define the components of \(\sigma\) where one frequency dominates all the others, we construct a cut-off function \(\nu\colon\mathbb{R}^{nN}\to\mathbb{R}\) such that \(\nu(\Xi)=0\) for \(|\xi_{1}|\leqslant 32\sqrt{N-1}\,|\Xi^{\prime}|\) and \(\nu(\Xi)=1\) for \(64\sqrt{N-1}\,|\Xi^{\prime}|\leqslant|\xi_{1}|\), where \(\Xi^{\prime}:=(\xi_{2},\ldots,\xi_{N})\). This can be done by taking \(\Lambda\in\mathcal{C}^{\infty}(\mathbb{R})\) such that \(\Lambda(t)=1\) if \(t\leqslant c_{1}\) and \(\Lambda(t)=0\), if \(t\geqslant c_{2}\) for two suitably chosen real numbers \(0<c_{1}<c_{2}<1\).
Define
\[\nu(\Xi)=1-\Lambda\left(\frac{|\xi_{1}|^{2}}{|\Xi|^{2}}\right)\in\mathcal{C} ^{\infty}(\mathbb{R}^{nN}\setminus 0). \tag{33}\]
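One admissible choice of the constants is, for instance,
\[c_{1}:=\frac{A^{2}}{1+A^{2}},\qquad c_{2}:=\frac{4A^{2}}{1+4A^{2}},\qquad A:=32\sqrt{N-1};\]
indeed, if \(|\xi_{1}|\leqslant A\,|\Xi^{\prime}|\) then \(|\xi_{1}|^{2}/|\Xi|^{2}\leqslant c_{1}\) (so that \(\nu(\Xi)=0\)), while if \(|\xi_{1}|\geqslant 2A\,|\Xi^{\prime}|\) then \(|\xi_{1}|^{2}/|\Xi|^{2}\geqslant c_{2}\) (so that \(\nu(\Xi)=1\)).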
Now given \(j=1,\ldots N\) we define \(\Xi^{\prime}_{j}:=(\xi_{1},\ldots,\xi_{j-1},\xi_{j+1},\ldots,\xi_{N})\) and
\[\nu_{j}(\Xi):=\nu(\xi_{j},\Xi^{\prime}_{j}),\]
for all \(\Xi\in\mathbb{R}^{nN}\). We then define the component of \(\sigma\) for which \(\xi_{j}\) dominates the other frequency components to be
\[\sigma_{j}(x,\Xi)=(1-\chi(\Xi))\,\nu_{j}(\Xi)\,\sigma(x,\Xi),\quad\text{for }j =1,\ldots N. \tag{34}\]
What remains of \(\sigma\) will be split into functions on whose support two frequency components are comparable (see [15] pages 22-23 for the details). Thus \(\sigma\) can be finally decomposed as
\[\sigma(x,\Xi)=\sigma_{0}(x,\Xi)+\sum_{j=1}^{N}\sigma_{j}(x,\Xi)+\sum_{j\neq k} \sigma_{j,k}(x,\Xi),\]
where \(\sigma_{0}\) has compact \(\Xi\)-support, \(|\xi_{j}|\) dominates \(|\Xi|\) on the \(\Xi\)-support of \(\sigma_{j}\), and \(|\xi_{j}|\approx|\xi_{k}|\) on the \(\Xi\)-support of \(\sigma_{j,k}\). More specifically, \(\sigma_{j,k}\) and \(\sigma_{j}\) are supported away from the origin, and
\[c\left|\xi_{j}\right|^{2}\geqslant|\Xi|^{2} \tag{35}\]
on the \(\Xi\)-support of \(\sigma_{j}\), for a suitably chosen \(c>1\).
One can also check that if \(\sigma\in S^{m}(n,N)\) then \(\sigma_{j}\) and \(\sigma_{j,k}\) are also in \(S^{m}(n,N)\) for all \(j,k=1,\ldots,N\) and \(\sigma_{0}\in S^{\mu}(n,N)\) for all \(\mu\in\mathbb{R}\).
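Indeed, \(\sigma_{0}\) has compact \(\Xi\)-support and is therefore trivially in \(S^{\mu}(n,N)\) for every \(\mu\in\mathbb{R}\); moreover the cut-offs \(\nu_{j}\) (and the analogous cut-offs used in the construction of \(\sigma_{j,k}\)) are homogeneous of degree zero away from the origin, so that
\[|\partial_{\Xi}^{\alpha}\nu_{j}(\Xi)|\lesssim|\Xi|^{-|\alpha|},\qquad\Xi\neq 0,\]
and multiplying \(\sigma\) by \((1-\chi)\nu_{j}\) does not affect its membership in \(S^{m}(n,N)\).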
We shall now proceed by giving explicit representations for the multilinear OIOs \(T_{\sigma_{0}}^{\Phi}\), \(T_{\sigma_{1}}^{\Phi}\) and \(T_{\sigma_{1,2}}^{\Phi}\), which, as will be clarified in Section 6, are the prototypes of the operators for which the boundedness results will be established here. Moreover, the boundedness of \(T_{\sigma}^{\Phi}\) can be reduced to the boundedness of these three types of operators. However, further reductions are needed to make the representations of the aforementioned operators amenable to the vector-valued- and maximal-function-based proofs that are utilised in this paper.
### Representation of \(T_{\sigma_{0}}^{\Phi}\)
We note that by (32), the support of \(\sigma_{0}\) is in a fixed compact set. Therefore as was demonstrated in [15, page 44] the operator \(T_{\sigma_{0}}^{\Phi}\) can be written as
\[T_{\sigma_{0}}^{\Phi}(f_{1},\ldots,f_{N})(x)=\sum_{K\in\mathcal{I}^{nN}}a_{K} (x)T_{\theta(\cdot/\sqrt{N})}^{\varphi_{0}}\left(\prod_{j=1}^{N}T_{\theta}^{ \varphi_{j}}\circ\tau_{\frac{2\pi k_{j}}{L}}(f_{j})\right)(x), \tag{36}\]
where \(\tau_{h}f(x):=f(x-h)\), \(\theta\in\mathcal{C}_{c}^{\infty}(\mathbb{R}^{n})\) and \(a_{K}(x)\) is a smooth function satisfying
\[|\partial^{\alpha}a_{K}(x)|\lesssim(1+\sum_{j=1}^{N}|k_{j}|^{2})^{-M} \tag{37}\]
for all \(x\in\mathbb{R}^{n}\) and \(M\geqslant 0\), with \(K=(k_{1},\ldots,k_{N})\).
### Representation of \(T_{\sigma_{1}}^{\Phi}\)
Let \(\vartheta\) be the function introduced in Definition 2.1 and recall or define
* \(\theta_{k}(\xi):=\vartheta(2^{3-k}\xi)\),
* \(\psi_{k}(\xi)^{2}:=\vartheta(2^{-1-k}\xi)^{2}-\vartheta(2^{2-k}\xi)^{2}\),
* \(\phi_{k}(\xi)^{2}:=\vartheta(2^{-3-k}\xi)^{2}-\vartheta(2^{4-k}\xi)^{2}\).
From the support properties of \(\sigma_{1}\), it follows that if \(\psi_{k}\left(\xi_{1}\right)\neq 0\) and \(\sigma_{1}(x,\Xi)\neq 0\) then
\[\left|2^{-k}\Xi_{1}^{\prime}\right|\leqslant\frac{\left|2^{-k}\xi_{1}\right| }{32\sqrt{N-1}}\leqslant\frac{2^{-3}}{\sqrt{N-1}},\]
which implies that \(\theta_{k}\left(\xi_{j}\right)=1\) for \(j=2,\ldots,N\), and one also has that
\[\begin{split}\frac{1}{8}\leqslant&\left|2^{-k}(\xi_ {1}+\cdots+\xi_{N})\right|<8\\ &\text{which implies}\quad\phi_{k}(\xi_{1}+\cdots+\xi_{N})=1. \end{split} \tag{38}\]
Using these facts, there exists \(k_{0}\in\mathbb{Z}\) (independent of \(x\)) such that we can write \(T^{\Phi}_{\sigma_{1}}\) as
\[T^{\Phi}_{\sigma_{1}}(f_{1},\dots,f_{N})(x)\] \[=\int_{\mathbb{R}^{nN}}\sum_{k\geqslant k_{0}}\psi_{k}(\xi_{1})^{ 2}\prod_{j=2}^{N}\theta_{k}(\xi_{j})^{2}\phi_{k}(\xi_{1}+\dots+\xi_{N})^{2} \sigma_{1}(x,\Xi)\widehat{f}_{1}(\xi_{1})\] \[\times\prod_{j=2}^{N}\widehat{f}_{j}(\xi_{j})\,e^{ix\cdot(\xi_{1} +\dots+\xi_{N})}e^{i\Phi(\Xi)}\,\mathrm{d}\Xi.\]
See [15, page 24] for the details of all these deductions.
We also introduce a high frequency cut-off \(\chi_{0}\) that satisfies
\[\begin{cases}\chi_{0}(\xi)=1,&\quad\text{for $|\xi|\geqslant 2^{k_{0}-4}$, and}\\ \chi_{0}(\xi)=0,&\quad\text{for $|\xi|\leqslant 2^{k_{0}-5}$},\end{cases}\]
where \(k_{0}\) can be chosen appropriately, and let \(m_{0},\dots,m_{N}\) be a (non-integer) partition of the decay \(m\) of the amplitude \(\sigma\), so that
\[m=\sum_{j=0}^{N}m_{j},\]
and \(m_{j}=-ns|1/p_{j}-1/2|\). Based on these frequency cut-offs, we introduce the following localisation operators as well as amplitudes
\[\widehat{Q^{0}_{k}(f)}(\xi) =\phi_{k}(\xi)\widehat{f}(\xi), b_{0}(\xi) =|\xi|^{m_{0}}\chi_{0}(\xi),\] \[\widehat{Q^{u_{1}}_{k}(f)}(\xi) =|2^{-k}\xi|^{m-m_{0}-m_{1}}\psi_{k}(\xi)e^{i2^{-k}\xi\cdot u_{1} }\widehat{f}(\xi), b_{1}(\xi) =|\xi|^{m_{1}}\chi_{0}(\xi),\] \[\widehat{P^{u_{j}}_{k}(f)}(\xi) =\theta_{k}(\xi)e^{i2^{-k}\xi\cdot u_{j}}\widehat{f}(\xi), b_{j,k}(\xi) =2^{km_{j}}\omega_{k}(\xi),\]
for \(j=2,\dots,N\), where \(\omega_{k}(\xi):=\theta_{k}(\xi/2)\) is the bump function introduced in Lemma 4.1, equal to one on the support of \(\theta_{k}\).
Also note that for any \(m\leqslant 0\) the symbol \(2^{km}\omega_{k}(\xi)\in S^{m}\) uniformly in \(k\), since when \(m\leqslant 0\) one has that \(|2^{km}\omega(2^{-k}\xi)|\lesssim 2^{km}\langle 2^{-k}\xi\rangle^{m}\leqslant \langle\xi\rangle^{m}\), since \(\omega\) is Schwartz, and moreover we also have that for any \(N\geqslant 0\) and \(|\alpha|>0\)
\[|\partial^{\alpha}(2^{km}\omega(2^{-k}\xi))|\lesssim 2^{km}2^{-k|\alpha|}(1+2^{-k }|\xi|)^{-N}\lesssim 2^{km}2^{-k|\alpha|}2^{kN}(1+|\xi|)^{-N},\]
which by choosing \(N=|\alpha|-m\geqslant 0\), yields that \(|\partial^{\alpha}(2^{km}\omega(2^{-k}\xi))|\lesssim\langle\xi\rangle^{m-| \alpha|}\).
Using these operators one can show [15, page 26] that for any \(M\geqslant 0\), the operator \(T^{\Phi}_{\sigma_{1}}\) can be written as
\[T^{\Phi}_{\sigma_{1}}(f_{1},\dots,f_{N})(x)\] \[\qquad\qquad=\int\sum_{k\geqslant k_{0}}^{\infty}M_{\mathfrak{m}}\circ T^{\varphi_{0}}_{b_{0}}\circ P^{0}_{k}\left[(Q^{u_{1}}_{k}\circ T^{\varphi_{1}}_{b_{1}})(f_{1})\,\prod_{j=2}^{N}(P^{u_{j}}_{k}\circ T^{\varphi_{j}}_{b_{j,k}})(f_{j})\right](x)\] \[\qquad\qquad\qquad\qquad\qquad\qquad\times\frac{1}{(1+|U|^{2})^{M}}\;\mathrm{d}U, \tag{39}\]
where \(M_{\mathfrak{m}}\) denotes the operator of multiplication by \(\mathfrak{m}=\mathfrak{m}(k,x,U)\) with \(U=(u_{1},\dots,u_{N})\), and \(\mathfrak{m}\) is a smooth function depending on \(\sigma_{1}\), with uniformly bounded derivatives of all orders. It was shown in [15, page 26] that the boundedness of \(T^{\Phi}_{\sigma_{1}}\) can be reduced to showing the boundedness of
\[B(f_{1},\dots,f_{N})(x)\] \[:=\sum_{k\geqslant k_{0}}\chi_{0}(2D)\,Q^{0}_{k}\left[(Q^{u_{1}}_{k} \circ T^{\varphi_{1}}_{b_{1}})(f_{1})\prod_{j=2}^{N}(P^{u_{j}}_{k}\circ T^{ \varphi_{j}}_{b_{j,k}})(f_{j})\right](x), \tag{40}\]
where the symbol of the high-frequency cut-off \(\chi_{0}\) belongs to \(S^{0}\).
### Representation of \(T_{\sigma_{1,2}}^{\Phi}\)
With the same choice of \(\psi_{k}\), \(\theta_{k}\), \(\chi_{0}\) and \(\omega_{k}\) as above, and with a suitable choice of the integer \(k_{1}\) and setting
\[\zeta_{k}(\xi)^{2}:=\vartheta\big{(}2^{-k-k_{1}-2}\xi\big{)}^{2}-\vartheta \big{(}2^{3+k_{1}-k}\xi\big{)}^{2},\]
it was demonstrated in [15, page 42] that for some \(k_{0}\in\mathbb{Z}\) one has the representation
\[T_{\sigma_{1,2}}^{\Phi}\left(f_{1},\ldots,f_{N}\right)(x)\] \[=\int_{\mathbb{R}^{nN}}\sum_{k\geqslant k_{0}}\psi_{k}\left(\xi_ {1}\right)^{2}\zeta_{k}\left(\xi_{2}\right)^{2}\sigma_{1,2}(x,\Xi)\chi_{0} \left(\xi_{1}\right)\widehat{f}_{1}\left(\xi_{1}\right)\times\] \[\chi_{0}\left(\xi_{2}\right)\widehat{f}_{2}\left(\xi_{2}\right) \prod_{j=3}^{N}\theta_{k}\left(\xi_{j}\right)^{2}\widehat{f}_{j}\left(\xi_{j} \right)e^{ix\cdot\left(\xi_{1}+\cdots+\xi_{N}\right)+i\Phi\left(\Xi\right)} \mathrm{d}\Xi.\]
Now we introduce the following localisation operators and amplitudes:
\[\widehat{P_{k}^{0}(f)}(\xi) =\theta_{k}(\xi)\widehat{f}(\xi), d_{0}(\xi) =2^{km_{0}}\omega_{k}(\xi),\] \[\widehat{Q_{k}^{u_{1}}(f)}(\xi) =\left|2^{-k}\xi\right|^{m-m_{1}-m_{2}}\psi_{k}(\xi)e^{i2^{-k}\xi\cdot u_{1}}\widehat{f}(\xi), d_{1}(\xi) =|\xi|^{m_{1}}\chi_{0}(\xi),\] \[\widehat{Q_{k}^{u_{2}}(f)}(\xi) =\psi_{k}(\xi)e^{i2^{-k}\xi\cdot u_{2}}\widehat{f}(\xi), d_{2}(\xi) =|\xi|^{m_{2}}\chi_{0}(\xi),\] \[\widehat{P_{k}^{u_{j}}(f)}(\xi) =\theta_{k}(\xi)e^{i2^{-k}\xi\cdot u_{j}}\widehat{f}(\xi), d_{j,k}(\xi) =2^{km_{j}}\omega_{k}(\xi), \tag{41}\]
for \(j=3,\ldots,N\).
Using these operators one can show [15, page 26] that for any \(M\geqslant 0\), the operator \(T_{\sigma_{1,2}}^{\Phi}\) can be written as
\[T_{\sigma_{1,2}}^{\Phi}(f_{1},\ldots,f_{N})\] \[\qquad=\int\sum_{k\geqslant k_{0}}^{\infty}M_{\mathfrak{m}} \circ T_{d_{0}}^{\varphi_{0}}\circ P_{k}^{0}\left[(Q_{k}^{u_{1}}\circ T_{d_{1 }}^{\varphi_{1}})(f_{1})\,(Q_{k}^{u_{2}}\circ T_{d_{2}}^{\varphi_{2}})(f_{2}) \,\prod_{j=3}^{N}(P_{k}^{u_{j}}\circ T_{d_{j,k}}^{\varphi_{j}})(f_{j})\right]\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \times\frac{1}{(1+|U|^{2})^{M}}\;\mathrm{d}U,\]
for a certain smooth function \(\mathfrak{m}\) depending on \(\sigma_{1,2}\), with uniformly bounded derivatives of all orders. Therefore one can reduce the analysis of the boundedness of \(T_{\sigma_{1,2}}^{\Phi}\) to the study of the boundedness of the multilinear operator
\[D(f_{1},\ldots,f_{N})(x)\] \[=\sum_{k\geqslant k_{0}}^{\infty}M_{\mathfrak{m}}\circ T_{d_{0}}^ {\varphi_{0}}\circ P_{k}^{0}\left[(Q_{k}^{u_{1}}\circ T_{d_{1}}^{\varphi_{1}}) (f_{1})\,(Q_{k}^{u_{2}}\circ T_{d_{2}}^{\varphi_{2}})(f_{2})\,\prod_{j=3}^{N} (P_{k}^{u_{j}}\circ T_{d_{j,k}}^{\varphi_{j}})(f_{j})\right](x), \tag{42}\]
see [15] for further details.
## 6. A catalogue of end-point cases
The method by which we prove the boundedness of the components \(T_{\sigma_{0}}^{\Phi}\), \(T_{\sigma_{1}}^{\Phi}\) and \(T_{\sigma_{1,2}}^{\Phi}\) splits into four separate cases. For \(T_{\sigma_{1}}^{\Phi}\) and \(T_{\sigma_{1,2}}^{\Phi}\) we use vector-valued inequality techniques to deal with almost all function spaces \(X^{p}\). However, as mentioned in the introduction, this method fails when \(p=2\) or \(p=\infty\), so we make use of different techniques when these functions spaces are present. This failure is due in the first case to a lack of usable decay in the amplitude and in the second case due to a lack of a suitable characterisation
of bmo; as a result, we use three different techniques to deal with \(T_{\sigma_{1}}^{\Phi}\) and \(T_{\sigma_{1,2}}^{\Phi}\). Finally, we make use of a fourth method, which deals with \(T_{\sigma_{0}}^{\Phi}\) for all values of the function space exponents.
As far as boundedness of \(T_{\sigma_{1}}^{\Phi}\) is concerned, due to the symmetry of (40) in the indices \(j=2,\ldots,N,\) as was shown in [15], we only need to consider endpoint cases \((p_{0},\ldots,p_{N})\) which are distinct within the equivalence class of permutations of \((p_{2},\ldots,p_{N})\). Thus, there are three possibilities for the function space with exponent \(p_{1}\): \(h^{p_{1}}\), \(L^{2}\) or bmo. Then for the exponents \(p_{2},\ldots,p_{N}\) we can have a Cartesian product of the same spaces:
\[\left(\prod_{j\in\mathcal{I}_{2}}L^{2}\right)\times\left(\prod_{j\in\mathcal{ I}_{\infty}}\text{bmo}\right)\times\left(\prod_{j\in\mathcal{I}_{f}}h^{p_{j}} \right),\]
where the index sets \(\mathcal{I}_{2}\), \(\mathcal{I}_{\infty}\) and \(\mathcal{I}_{f}\) are the sets of all \(j\) such that \(p_{j}=2\), \(p_{j}=\infty\) and \(p_{j}\) is any other value, respectively.
Similarly, regarding \(T_{\sigma_{1,2}}^{\Phi}\), due to the symmetry of the form of (42) in the indices \(j=1,2\) and \(j=3,\ldots,N\), we only need to consider endpoint cases \((p_{0},\ldots,p_{N})\) which are distinct within the equivalence class of permutations of \((p_{1},p_{2})\) and \((p_{3},\ldots,p_{N})\). (We have, therefore, \((3+3)\times(3+3+1)-3=39\) cases, since the possibility of three or more copies of \(L^{2}\) appearing is ruled out because \(p_{0}>2/3\).)
For the Banach target spaces, both for later use in Section 8 and for the convenience of the reader, we recall the endpoint cases that need to be considered and the orders of decay of the amplitude that are involved in each case. This is of course quite similar to the analysis that was carried out in [15, Section 5], with the only difference that here we also consider the cases of various multilinear OIOs. However, the interpolation procedure towards the establishment of Banach-target results remains the same. We summarise this in the following lemma:
**Lemma 6.1**.: _Let \(m=\sum_{j=0}^{N}m_{j}\), \(\frac{1}{p_{0}}=\sum_{j=1}^{N}\frac{1}{p_{j}}\), and \(\sigma(x,\Xi)\in S^{m}(n,N)\) and \(\varphi_{j}\) be phase functions of order \(s\) with \(s>0.\) Let also_
\[m(p)=\begin{cases}-(n-1)\left|\frac{1}{p}-\frac{1}{2}\right|,\,n>1,\,\varphi_{ j}{}^{\prime}\text{s positively homogeneous of degree }1,\,\,\frac{n}{n+1}<p<\infty\\ -ns\left|\frac{1}{p}-\frac{1}{2}\right|,\,\,\varphi_{j}{}^{\prime}\text{s of order }s,\,\,\frac{n}{n+\min(1,s)}<p<\infty.\end{cases} \tag{43}\]
_For Banach-target spaces \((\)i.e. \(X^{p}\) with \(p\in[1,\infty]\)\()\), it is enough to prove Theorem 1.4 for the following values of exponents:_
* **Target bmo.**__\(\prod_{j=1}^{N}\text{bmo}\to\text{bmo},\) _i.e._ \((p_{j},m_{j})=(\infty,m(\infty))\) _for all_ \(j=1,\ldots N;\)__
* **Target \(\mathbf{L^{2}}\).**__\((p_{0},m_{0})=(2,0)\)_, and for each_ \(1\leqslant j\leqslant N\)_,_ \((p_{j},m_{j})=(2,0)\) _and_ \((p_{k},m_{k})=(\infty,m_{k}(\infty))\) _for_ \(k\neq j;\)__
* **Target \(\mathbf{h^{1}}\).**__\((p_{0},m_{0})=(1,m(1))\) _and any pair_ \(1\leqslant j_{1}<j_{2}\leqslant N\)_,_ \((p_{j_{1}},m_{j_{1}})=(p_{j_{2}},m_{j_{2}})=(2,0)\) _and_ \((p_{k},m_{k})=(\infty,m(\infty))\) _for_ \(j_{1}\neq k\neq j_{2}\)_; and_
* **Target \(\mathbf{h^{1}}\).**__\((p_{0},m_{0})=(1,m(1))\) _and for any_ \(1\leqslant j\leqslant N\)_,_ \((p_{j},m_{j})=(1,m(1))\)_, and_ \((p_{k},m_{k})=(\infty,m(\infty))\) _for_ \(k\neq j\)_._
Proof.: This is a standard application of multilinear interpolation, as was also done in [15]. In short, we take two end points, \(P_{A}=(p_{A,1},\ldots,p_{A,N})\) and \(P_{B}\), from the list above, with corresponding amplitude orders \(m_{A}=\sum_{j=1}^{N}m(p_{A,j})\) and likewise for \(m_{B}\). We then form the amplitude family \(\sigma_{z}\) given by \(\sigma_{z}(x,\Xi)=\hat{\sigma}(x,\Xi)\langle\Xi\rangle^{(1-z)m_{A}+zm_{B}}\), where
\(\hat{\sigma}\in S^{0}_{1,0}(n,N)\) is arbitrary, so that \(\sigma_{0}\in S^{m_{A}}_{1,0}\) and \(\sigma_{1}\in S^{m_{B}}_{1,0}\). Notice that for any Schwartz \(f_{1},\ldots,f_{N}\), the map \(z\mapsto T^{\Phi}_{\sigma_{z}}(f_{1},\ldots,f_{N})\) is analytic, and that the bounds in our proof depend polynomially on \(\operatorname{Im}z\). This ensures that we can use the mentioned interpolation result, showing the boundedness of \(T^{\Phi}_{\sigma_{z}}\) for \(z\in[0,1]\). Since \(\hat{\sigma}\) was arbitrary boundedness holds for any \(T^{\Phi}_{\sigma_{z}}\) with amplitude \(\sigma_{z}\in S^{(1-z)m_{A}+zm_{B}}_{1,0}(n,N)\), \(z\in[0,1]\) and source space \(X^{p_{1}}\times\cdots\times X^{p_{N}}\) where \(P^{-1}=(p_{1}^{-1},\ldots,p_{N}^{-1})=(1-z)P_{A}^{-1}+zP_{B}^{-1}\). One can then do this for any two points \(P_{A}\) and \(P_{B}\) in the convex polygon of studied \(P\) to get the full range of exponents, but it suffices to show boundedness at corners and where the function \(m\) ceases to be linear.
## 7. Boundedness of the multilinear operators
In this section we shall very briefly indicate the modifications that are needed in the proofs that were provided in [15], in order to prove the corresponding results for multilinear OIOs.
As far as boundedness results are concerned, due to the symmetry of (40) in the indices \(j=2,\ldots,N\), as was shown in [15], we only need to consider endpoint cases \((p_{0},\ldots,p_{N})\) which are distinct within the equivalence class of permutations of \((p_{2},\ldots,p_{N})\). Similarly, due to the symmetry of the form of (42) in the indices \(j=1,2\) and \(j=3,\ldots,N\), we only need to consider endpoint cases \((p_{0},\ldots,p_{N})\) which are distinct within the equivalence class of permutations of \((p_{1},p_{2})\) and \((p_{3},\ldots,p_{N})\).
This reduces the analysis of the boundedness of \(T^{\Phi}_{\sigma}\) to the investigation of just one of the \(T^{\Phi}_{\sigma_{j}}\)'s, say \(T^{\Phi}_{\sigma_{1}}\), one of the \(T^{\Phi}_{\sigma_{j,k}}\)'s, say \(T^{\Phi}_{\sigma_{1,2}}\), and of course also the boundedness of the low-frequency part \(T^{\Phi}_{\sigma_{0}}\). All the other cases can be studied in an essentially identical manner.
In each case we fix
\[\frac{1}{p_{0}}=\sum_{j=1}^{N}\frac{1}{p_{j}},\qquad 1\leqslant p_{j}\leqslant \infty,\quad j=0,\ldots,N,\]
and \(m_{j}:=m(p_{j})\), \(j=0,\ldots,N\), with \(m(p_{j})\) given as in (43), and consider \(f_{j}\in X^{p_{j}}\) for \(j=1,\ldots,N\). The rest of the analysis is identical to that of multilinear FIOs as carried out in Section 8 of [13]. Having Lemma 6.1 at our disposal, we can run the machinery of the proofs in the case of multilinear FIOs and obtain the desired results.
### Boundedness of \(T^{\Phi}_{\sigma_{0}}\)
Here, due to the localised nature of the amplitude and in contrast to the other parts of the OIO, we can furnish a proof which covers both the quasi-Banach and Banach target spaces cases. In order to control \(T^{\Phi}_{\sigma_{0}}\) defined in (36), we observe that since \(\theta\in\mathcal{C}^{\infty}_{c}(\mathbb{R}^{n})\), Lemma 2.7 yields that
\[\left\|T^{\varphi_{j}}_{\theta}(f)\right\|_{X^{p}}\lesssim\left\|f\right\|_{X^{p}}\quad\text{and}\quad\left\|T^{\varphi_{0}}_{\theta(\cdot/\sqrt{N})}(f)\right\|_{X^{p}}\lesssim\left\|f\right\|_{X^{p}}\]
for \(n/(n+s_{c})<p\leqslant\infty\). Applying these two estimates, the fact that each term is frequency localised, the translation invariance of the norms and Hölder's inequality (using the Littlewood-Paley characterisation of local Hardy spaces) altogether yield
\[\left\|\left(\prod_{j=1}^{N}T^{\varphi_{j}}_{\theta}\circ\tau_{\frac{2\pi k_{j}}{L}}(f_{j})\right)\right\|_{h^{r}}\lesssim\prod_{j=1}^{N}\left\|f_{j}\right\|_{h^{p_{j}}}.\]
Combining these estimates one has
\[\left\|T^{\varphi_{0}}_{\theta(\cdot/\sqrt{N})}\left(\prod_{j=1}^{N}T^{\varphi_{j}}_{\theta}\circ\tau_{\frac{2\pi k_{j}}{L}}(f_{j})\right)\right\|_{X^{p_{0}}}\lesssim\prod_{j=1}^{N}\left\|f_{j}\right\|_{X^{p_{j}}},\]
for all the endpoint cases of \(p_{0},p_{1},\ldots,p_{N}\) in Lemma 6.1. Finally, the boundedness of \(T_{\sigma_{0}}^{\Phi}\) follows by applying (37) with the inclusions \(\mathcal{C}_{b}^{1}\cdot h^{p}\subseteq h^{p}\), \(L^{\infty}\cdot L^{2}\subseteq L^{2}\) and \(\mathcal{C}_{b}^{1}\cdot\operatorname{bmo}\subseteq\operatorname{bmo}\) (see [8]).
Therefore, for the purely low-frequency portion of the operator, we have now established the boundedness with both Banach and quasi-Banach target spaces.
### Boundedness of \(T_{\sigma_{1}}^{\Phi}\)
Due to the symmetry of the representation (40) of \(T_{\sigma_{1}}^{\Phi}\) (in the indices \(j=2,\ldots,N\)) we only need to consider endpoint cases \((p_{0},\ldots,p_{N})\) which are distinct within the equivalence class of permutations of \((p_{2},\ldots,p_{N})\).
#### 7.2.1. Boundedness with Banach targets
Given this, all the boundedness results with target spaces \(L^{2}\) and \(\operatorname{bmo}\) (in accordance with Lemma 6.1) are proven in exactly the same way as in the case of multilinear FIOs in [15], where one replaces the order \(-(n-1)/2\) of multilinear FIOs by \(-ns/2\) for multilinear OIOs, noting that no restriction on the dimension (as in the FIO case) is necessary, since \(-ns/2<0\).
#### 7.2.2. Boundedness with quasi-Banach targets
As discussed earlier in connection to representation (40), matters can be reduced to the study of the regularity of the multilinear operator
\[\mathbf{I}:=\sum_{k=k_{0}}^{\infty}\,Q_{k}^{0}\,\left[(Q_{k}^{u_{1}}\circ T_{ b_{1}}^{\varphi_{1}})(f_{1})\,\prod_{j=2}^{N}(P_{k}^{u_{j}}\circ T_{b_{j,k}}^{ \varphi_{j}})(f_{j})\right]. \tag{44}\]
Our goal is to prove the boundedness of \(T_{\sigma_{1}}^{\Phi}\) with target in \(h^{p_{0}}\) with \(n/(n+s_{c})<p_{0}<\infty\) and \(p_{0}\neq 2\). We also note that the cases \(p_{0}\geqslant 1\) are all Banach, but our method of proof will cover these cases as well. Using (44), we infer that the boundedness of \(T_{\sigma_{1}}^{\Phi}\) could, via (9), be investigated by considering
\[\vartheta_{0}(D)(\mathbf{I})=\sum_{k=k_{0}}^{3}\vartheta_{0}(D)\,Q_{k}^{0}(D) \left[(Q_{k}^{u_{1}}\circ T_{b_{1}}^{\varphi_{1}})(f_{1})\,\prod_{\ell=2}^{N} (P_{k}^{u_{\ell}}\circ T_{b_{\ell,k}}^{\varphi_{\ell}})(f_{\ell})\right], \tag{45}\]
and for \(j\geqslant 1\)
\[\vartheta_{j}(D)(\mathbf{I}) =\sum_{k=k_{0},|k-j|\leqslant 4}^{\infty}\vartheta_{j}(D)\,Q_{k}^{0 }(D)\left[(Q_{k}^{u_{1}}\circ T_{b_{1}}^{\varphi_{1}})(f_{1})\,\prod_{\ell=2}^ {N}(P_{k}^{u_{\ell}}\circ T_{b_{\ell,k}}^{\varphi_{\ell}})(f_{\ell})\right]\] \[=\sum_{\ell=-4}^{4}\mathbb{1}_{[k_{0},\infty)}(\ell+j)\vartheta(2 ^{-j}D)\,\phi(2^{-(j+\ell)}D)[F_{j+\ell}^{U}],\]
where for all \(k\in\mathbb{Z}\)
\[F_{k}^{U}=(Q_{k}^{u_{1}}\circ T_{b_{1}}^{\varphi_{1}})(f_{1})\,\prod_{\ell=2} ^{N}(P_{k}^{u_{\ell}}\circ T_{b_{\ell,k}}^{\varphi_{\ell}})(f_{\ell}).\]
Now given an \((N-1)\)-tuple \((p_{2},\ldots,p_{N})\), we define
\[\mathfrak{I}_{2}=\{j\in\{2,\ldots,N\}:\,p_{j}=2\},\,\mathfrak{I}_{f}=\{j\in\{2,\ldots,N\}:\,2\neq p_{j}<\infty\}\]
and
\[\mathfrak{I}_{\infty}=\{j\in\{2,\ldots,N\}:\,p_{j}=\infty\}.\]
Using this notation we can write
\[F_{k}^{U}:=(Q_{k}^{u_{1}}\circ T_{b_{1}}^{\varphi_{1}})(f_{1})\,\prod_{j\in \mathfrak{I}_{2}}(P_{k}^{u_{j}}\circ T_{b_{j,k}}^{\varphi_{j}})(f_{j})\prod_{j \in\mathfrak{I}_{f}}(P_{k}^{u_{j}}\circ T_{b_{j,k}}^{\varphi_{j}})(f_{j})\prod_ {j\in\mathfrak{I}_{\infty}}(P_{k}^{u_{j}}\circ T_{b_{j,k}}^{\varphi_{j}})(f_{ j}). \tag{46}\]
Taking (9) into account for a generic piece of \(F_{k}^{U}\), we shall see that the following proposition will be useful in dealing with various cases that arise in connection with the proof of the \(h^{p_{0}}\)-regularity of \(\mathbf{I}\) (given by (44)).
**Proposition 7.1**.: _Given \(p_{1}>0\) and \(N_{1}\geqslant 2\), assume that \(\frac{1}{r_{0}}=\frac{1}{p_{1}}+\frac{N_{1}-1}{2}\) and let \(X^{p_{1}}\) be defined as in (4). Then one has_
\[\left\|\Big{(}\sum_{k=k_{0}}^{\infty}\,\Big{|}(Q_{k}^{u_{1}}f_{1})\prod_{j=2}^{N_{1}}(P_{k}^{u_{j}}f_{j})\Big{|}^{2}\Big{)}^{1/2}\right\|_{L^{r_{0}}}\lesssim\left\|f_{1}\right\|_{X^{p_{1}}}\prod_{j=2}^{N_{1}}\left\|f_{j}\right\|_{L^{2}}.\]
Proof.: By the translation invariance of the norm of the spaces \(X^{p}\), we can reduce the study to the case where \(u_{1}=\ldots=u_{N_{1}}=0\).
Consider the multilinear pseudodifferential operator
\[T_{k}\left(f_{1},\ldots,f_{N_{1}}\right)(x):=Q_{k}f_{1}\,\prod_{j=2}^{N_{1}}P_ {k}f_{j}\]
with the symbol
\[\rho^{k}(\Xi)=\psi(2^{-k}\xi_{1})\,\prod_{j=2}^{N_{1}}\theta(2^{-k}\xi_{j}).\]
Now since, in addition to the frequency localisations \(|2^{-k}\xi_{1}|\sim 1\), \(|2^{-k}\xi_{j}|\lesssim 1\) for \(2\leqslant j\leqslant N_{1}\), one also has that \(|\Xi|\leqslant c|\xi_{1}|\) on the support of \(\sigma_{1}\) (note that the latter follows from (33) and (34)), then the Leibniz rule, the aforementioned support properties, and finally (35) yield
\[|\partial_{\Xi}^{\alpha}(\rho^{k}(\Xi))|\lesssim\langle\Xi\rangle^{-|\alpha|},\]
which yields that \(\rho^{k}\in S^{0}_{1,0}(n,N)\), uniformly in \(k\).
Let us first assume that \(p_{1}<\infty\). Khinchin's inequality yields that
\[\left\|\Big{(}\sum_{k=-5}^{\infty}\,|(Q_{k}f_{1})\prod_{j=2}^{N_{ 1}}(P_{k}f_{j})|^{2}\Big{)}^{1/2}\right\|_{L^{r_{0}}} \approx\left\|\sum_{k\geqslant-5}\varepsilon_{k}(t)Q_{k}f_{1} \prod_{j=2}^{N_{1}}P_{k}f_{j}\right\|_{L^{r_{0}}_{x_{i},t}(\mathbb{R}^{n} \times[0,1])}\] \[=\left\|\sum_{k\geqslant-5}\varepsilon_{k}(t)T_{k}(f_{1},\ldots,f _{N_{1}})\right\|_{L^{r_{0}}_{x_{i},t}(\mathbb{R}^{n}\times[0,1])},\]
where \(\{\varepsilon_{j}(t)\}_{j}\) are the Rademacher functions. Now the family of multilinear pseudodifferential operators \(\sum_{k=-5}^{\infty}\varepsilon_{k}(t)T_{k}(f_{1},\ldots,f_{N_{1}})\), has the symbol
\[\rho_{t}(\xi_{1},\ldots,\xi_{N_{1}}):=\sum_{k=-5}^{\infty}\varepsilon_{k}(t) \rho^{k}(\xi_{1},\ldots,\xi_{N_{1}})\in S^{0}_{1,0}(n,N),\]
uniformly in \(t\). Therefore, the boundedness of multilinear pseudodifferential operators of order zero from \(\prod_{j=1}^{N}h^{l_{j}}\) to \(L^{r}\) [17, Theorem 1.1] yields
\[\left\|\sum_{k\geqslant-5}\varepsilon_{k}(t)T_{k}(f_{1},\ldots,f_{N_{1}}) \right\|_{L^{r_{0}}_{x_{i},t}(\mathbb{R}^{n}\times[0,1])}\lesssim\left\|f_{1} \right\|_{h^{p_{1}}}\prod_{j=2}^{N_{1}}\left\|f_{j}\right\|_{L^{2}}.\]
Let us now assume that \(p_{1}=\infty\). Note that we are also allowed to assume (38) on the support of \(\rho^{k}\), which yields that
\[T_{k}(f_{1},\ldots,f_{N_{1}})(x)\] \[=\int_{\mathbb{R}^{nN_{1}}}2^{N_{1}nk}\phi^{\vee}\left(2^{k}\left(x-y_{1}\right),\ldots,2^{k}\left(x-y_{N_{1}}\right)\right)\prod_{j=2}^{N_{1}}P_{k}\left(f_{j}\right)\left(y_{j}\right)Q_{k}\left(f_{1}\right)\left(y_{1}\right)\,\mathrm{d}Y.\]
Holder's inequality, the translation invariance of the Lebesgue measure and the definition of the maximal operator \(\mathfrak{M}_{a,b}^{p}\) in (15) yield
\[|T_{k}(f_{1},\ldots,f_{N_{1}})(x)|\leqslant\] \[2^{N_{1}nk}\left\{\int_{\mathbb{R}^{n}N_{1}}\langle 2^{k}Y \rangle^{sq^{\prime}}\left|\phi^{\vee}(2^{k}Y)\right|^{q}\,\mathrm{d}Y\right\} ^{1/q^{\prime}}\left\{\int_{\mathbb{R}^{n}N_{1}}\prod_{j=2}^{N_{1}}\frac{|P_{ k}\left(f_{j}\right)(y_{j})|^{q}}{\langle 2^{k}(x-y_{j})\rangle^{sq/N_{1}}} \frac{|Q_{k}\left(f_{1}\right)(y_{1})|^{q}}{\langle 2^{k}(x-y_{1})\rangle^{sq/N_{1}}} \,\mathrm{d}Y\right\}^{1/q}\] \[\lesssim 2^{-N_{1}nk/q^{\prime}}2^{N_{1}nk}\mathfrak{M}_{s/N_{1},2 ^{k}}^{q}(Q_{k}f_{1})(x)2^{-kn/q}\prod_{j=2}^{N_{1}}\mathfrak{M}_{s/N_{1},2^{ k}}^{q}(P_{k}f_{j})(x)2^{-kn/q}\] \[\lesssim\mathfrak{M}_{s/N_{1},2^{k}}^{q}(Q_{k}f_{1})(x)\prod_{j=2 }^{N_{1}}\mathfrak{M}_{s/N_{1},2^{k}}^{q}(P_{k}f_{j})(x)\]
where we have also used that for all \(z\in\mathbb{R}^{nN_{1}}\)
\[(1+2^{2\ell}\left|z_{1}\right|^{2}+\ldots+2^{2\ell}\left|z_{N_{1}}\right|^{2} )^{N_{1}}\geqslant\prod_{k=1}^{N_{1}}(1+2^{2\ell}\left|z_{k}\right|^{2}).\]
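For completeness, this elementary bound follows by dominating each factor on the right-hand side by the full sum:

\[\prod_{k=1}^{N_{1}}(1+2^{2\ell}\left|z_{k}\right|^{2})\leqslant\prod_{k=1}^{N_{1}}\big{(}1+2^{2\ell}\left|z_{1}\right|^{2}+\ldots+2^{2\ell}\left|z_{N_{1}}\right|^{2}\big{)}=(1+2^{2\ell}\left|z_{1}\right|^{2}+\ldots+2^{2\ell}\left|z_{N_{1}}\right|^{2})^{N_{1}}.\]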
Now denoting the set of all dyadic cubes in \(\mathbb{R}^{n}\) by \(\mathcal{D}\), and denoting for each \(k\in\mathbb{Z}\) the elements of \(\mathcal{D}\) with side length \(2^{-k}\) by \(\mathcal{D}_{k}\), we have by inequality (17) that for every dyadic cube \(J\in\mathcal{D}_{k}\) and every \(f\)
\[\sup_{y\in J}\mathfrak{M}_{s/N,2^{k}}^{q}(f)(y)\lesssim\inf_{y\in J}\mathfrak{ M}_{s/N,2^{k}}^{q}(f)(y),\]
with constants independent of \(f\) and \(k\).
Therefore, since there is no overlap between \(\mathcal{D}_{k}\)'s, we have
\[\left\|\Big{(}\sum_{k=-5}^{\infty}|T_{k}(f_{1},\ldots,f_{N_{1}})|^{2}\Big{)}^{1/2}\right\|_{L^{r_{0}}}=\left\|\Big{(}\sum_{k=-5}^{\infty}\sum_{J\in\mathcal{D}_{k}}|T_{k}(f_{1},\ldots,f_{N_{1}})|^{2}\chi_{J}\Big{)}^{1/2}\right\|_{L^{r_{0}}}\] \[\leqslant\left\|\left(\sum_{k\geqslant-5}\sum_{J\in\mathcal{D}_{k}}\prod_{j=2}^{N_{1}}\left|\mathfrak{M}_{s/N_{1},2^{k}}^{q}(P_{k}f_{j})(x)\right|^{2}\left|\mathfrak{M}_{s/N_{1},2^{k}}^{q}(Q_{k}f_{1})(x)\right|^{2}\chi_{J}\right)^{1/2}\right\|_{L^{r_{0}}}\] \[\leqslant\left\|\left(\sum_{k\geqslant-5}\sum_{J\in\mathcal{D}_{k}}\prod_{j=2}^{N_{1}}\inf_{y\in J}\left|\mathfrak{M}_{s/N_{1},2^{k}}^{q}(P_{k}f_{j})(y)\right|^{2}\left|\inf_{y\in J}\mathfrak{M}_{s/N_{1},2^{k}}^{q}(Q_{k}f_{1})\right|^{2}\chi_{J}\right)^{1/2}\right\|_{L^{r_{0}}}\] \[\leqslant\left\|\left(\sum_{k\geqslant-5}\left(\sum_{J\in\mathcal{D}_{k}}\prod_{j=2}^{N_{1}}\inf_{y\in J}\mathfrak{M}_{s/N_{1},2^{k}}^{q}(P_{k}f_{j})(y)\inf_{y\in J}\mathfrak{M}_{s/N_{1},2^{k}}^{q}(Q_{k}f_{1})\chi_{J}\right)^{2}\right)^{1/2}\right\|_{L^{r_{0}}}. \tag{47}\]
Now by Theorem 2.4, given \(r_{0}\in(0,\infty]\), \(0<q\leqslant\infty\), \(\gamma\in(0,1)\) and \(s/N_{1}>n/(\min{(2,q,r_{0})})\), for any dyadic cube \(J\in\mathcal{D}\) there exists a measurable subset \(S_{J}\subset J\), depending on \(\gamma,f_{k},q,s,N_{1}\), such that \(|S_{J}|>\gamma\left|J\right|\). For this \(S_{J}\) and any \(0<\rho<\infty\) one has for \(x\in J\) that
\[\chi_{J}(x)=1<\frac{1}{\gamma^{1/\rho}}\frac{|S_{J}|^{1/\rho}}{|J|^{1/\rho}}= \frac{1}{\gamma^{1/\rho}}\left(\frac{1}{|J|}\int_{J}\chi_{S_{J}}^{\rho}(y)dy \right)^{1/\rho}\leqslant\gamma^{-1/\rho}\mathcal{M}_{\rho}\left(\chi_{S_{J}} \right)(x).\]
Hence, using this and the vector-valued maximal inequality (18), one can bound the last term in (47) by
\[\left\|\left(\sum_{k\geqslant-5}\left(\sum_{J\in\mathcal{D}_{k}}\prod_{j=2}^{N_{1}}\inf_{y\in J}\mathfrak{M}_{s/N_{1},2^{k}}^{q}(P_{k}f_{j})(y)\inf_{y\in J}\mathfrak{M}_{s/N_{1},2^{k}}^{q}(Q_{k}f_{1})\chi_{S_{J}}\right)^{2}\right)^{1/2}\right\|_{L^{r_{0}}}. \tag{48}\]
We also note that the characterisation of BMO given in Theorem 2.4 implies that, given \(0<q\leqslant\infty\), \(\gamma\in(0,1)\) and \(s/N_{1}>n/(\min{(2,q)})\), one has
\[\left\|\left(\sum_{k\geqslant-5}\left(\sum_{J\in\mathcal{D}_{k}}\inf_{y\in J} \mathfrak{M}_{s/N_{1},2k}^{q}(Q_{k}f_{1})\chi_{S_{J}}\right)^{2}\right)^{1/2} \right\|_{L^{\infty}}\approx\left\|\Gamma(D)f_{1}\right\|_{\mathrm{BMO}} \leqslant\left\|f_{1}\right\|_{\mathrm{bmo}}.\]
where \(\Gamma(D)\) is a high-frequency cut-off.
Therefore, Holder's inequality, Theorem 2.5 and the \(L^{2}\)-boundedness of Hardy-Littlewood's maximal functions yield that the expression in (48) is bounded by
\[\left\|\prod_{j=2}^{N_{1}}\sup_{k}\mathfrak{M}_{s/N_{1},2^{k}}^{q }(P_{k}f_{j})\right\|_{L^{r_{0}}}\left\|\left(\sum_{k\geqslant-5}\left(\sum_{J \in(\mathcal{D}_{k})}\inf_{y\in J}\mathfrak{M}_{s/N_{1},2^{k}}^{q}(Q_{k}f_{1}) \chi_{S_{J}}\right)^{2}\right)^{1/2}\right\|_{L^{\infty}}\] \[\lesssim\prod_{j=2}^{N_{1}}\left\|\sup_{k}\mathfrak{M}_{s/N_{1},2 ^{k}}^{q}(P_{k}f_{j})\right\|_{L^{2}}\left\|f_{1}\right\|_{\mathrm{bmo}} \lesssim\prod_{j=2}^{N_{1}}\left\|\sup_{k}|P_{k}f_{j}|\right\|_{L^{2}}\left\| f_{1}\right\|_{\mathrm{bmo}}\] \[\lesssim\prod_{j=2}^{N_{1}}\left\|\mathcal{M}f_{j}\right\|_{L^{2}} \left\|f_{1}\right\|_{\mathrm{bmo}}\lesssim\prod_{j=2}^{N_{1}}\left\|f_{j} \right\|_{L^{2}}\left\|f_{1}\right\|_{\mathrm{bmo}}.\]
Now we turn to the study of the regularity of the multilinear operators associated to \(T_{\sigma_{1}}^{\Phi}\). This will be divided in the following cases:
**Case I. \(\mathfrak{I}_{2}\neq\emptyset\).** Observe that by our previous considerations the frequency support of \(F_{k}^{U}\) (given by (46)) is contained in \(B(0,2^{k}R)\), for some \(R\geqslant 1\). Therefore, for \(\ell\in[-4,4]\), the frequency support of \(F_{j+\ell}^{U}\) is contained in \(B(0,2^{j}(2^{\ell}R))\). Hence using (20) we have
\[\left\|\Big{(}\sum_{j=1}^{\infty}|\vartheta_{j}(D)(\mathbf{I})|^{2}\Big{)}^{1/ 2}\right\|_{L^{p_{0}}}\lesssim\left\|\Big{(}\sum_{k=-5}^{\infty}|F_{k}^{U}|^{2 }\Big{)}^{1/2}\right\|_{L^{p_{0}}}.\]
Note that for \(j\in\mathfrak{I}_{2}\), the \(b_{j,k}\)'s dependence on \(k\) could be suppressed due to the fact that for these terms the corresponding \(m_{j}\)'s are equal to zero and one can replace the amplitudes by the constant function one. Hence using the uniform boundedness given in (25), the embedding \(\ell^{1}(\mathbb{N})\subset\ell^{2}(\mathbb{N})\) jointly with the Cauchy-Schwarz inequality, Holder's inequality, Lemma 4.2, Proposition 7.1 and the boundedness of linear oscillatory integrals
given in Theorem 2.9, yield
\[\begin{split}&\Big{\|}\Big{(}\sum_{k=-5}^{\infty}|F_{k}^{U}|^{2}\Big{)}^{1/2}\Big{\|}_{L^{p_{0}}}\\ &\lesssim\Big{\|}\Big{(}\sum_{k=-5}^{\infty}|(Q_{k}^{u_{1}}T_{b_{1}}^{\varphi_{1}})(f_{1})\prod_{\mathfrak{I}_{2}}(P_{k}^{u_{j}}T_{1}^{\varphi_{j}})(f_{j})|^{2}\Big{)}^{1/2}\Big{\|}_{L^{r_{0}}}\\ &\qquad\times\prod_{\mathfrak{I}_{f}}\Big{\|}\Big{(}\sum_{k=-5}^{\infty}|(P_{k}^{u_{j}}T_{b_{j,k}}^{\varphi_{j}})(f_{j})|^{2}\Big{)}^{1/2}\Big{\|}_{L^{p_{j}}}\prod_{\mathfrak{I}_{\infty}}\|f_{j}\|_{\mathrm{bmo}}\\ &\lesssim\Big{\|}T_{b_{1}}^{\varphi_{1}}f_{1}\Big{\|}_{X^{p_{1}}}\prod_{\mathfrak{I}_{2}}\big{\|}T_{1}^{\varphi_{j}}(f_{j})\big{\|}_{L^{2}}\prod_{\mathfrak{I}_{f}}\|f_{j}\|_{h^{p_{j}}}\prod_{\mathfrak{I}_{\infty}}\|f_{j}\|_{\mathrm{bmo}}\lesssim\prod_{j=1}^{N}\|f_{j}\|_{X^{p_{j}}}\,,\end{split} \tag{49}\]
where
\[\frac{1}{p_{0}}=\frac{1}{r_{0}}+\sum_{\ell\in\mathfrak{I}_{f}}\frac{1}{p_{\ell }},\qquad\frac{1}{r_{0}}=\frac{1}{p_{1}}+\frac{|\mathfrak{I}_{2}|}{2}.\]
**Case II. \(\mathfrak{I}_{2}=\emptyset\).** In this case \(r_{0}=p_{1}\), and if moreover \(p_{1}<\infty\) then we have
\[\Big{\|}\Big{(}\sum_{k=-1}^{\infty}|(Q_{k}^{u_{1}}T_{b_{1}}^{\varphi_{1}})(f_{1})|^{2}\Big{)}^{1/2}\Big{\|}_{L^{r_{0}}}\lesssim\|f_{1}\|_{h^{p_{1}}}\,, \tag{50}\]
and we can proceed as in (49) to reach the desired estimate. However, if \(p_{1}=\infty\) then the classical Fefferman-Stein estimate yields that
\[\sup_{k}\big{\|}Q_{k}^{u_{1}}(f_{1})\big{\|}_{\infty}\lesssim\|f_{1}\|_{ \mathrm{bmo}}\,. \tag{51}\]
Hence Lemma 4.2 yields
\[\begin{split}&\Big{\|}\Big{(}\sum_{k=-5}^{\infty}|F_{k}^{U}|^{2}\Big{)}^{1/2}\Big{\|}_{L^{p_{0}}}\\ &\lesssim\|f_{1}\|_{\mathrm{bmo}}\prod_{\mathfrak{I}_{f}}\Big{\|}\Big{(}\sum_{k=-5}^{\infty}|(P_{k}^{u_{j}}T_{b_{j,k}}^{\varphi_{j}})(f_{j})|^{2}\Big{)}^{1/2}\Big{\|}_{L^{p_{j}}}\prod_{\mathfrak{I}_{\infty}}\|f_{j}\|_{\mathrm{bmo}}\lesssim\prod_{j=1}^{N}\|f_{j}\|_{X^{p_{j}}}\,.\end{split}\]
Finally, for the low frequency part (45), we only need to estimate the \(L^{p_{0}}\) norm of \(F_{k}^{U}\). To that end, we use the following generalised Holder's inequality
\[\|\prod_{j=1}^{N}f_{j}\|_{L^{p_{0}}}\lesssim\prod_{\mathfrak{I}_{2}\cup \mathfrak{I}_{f}}\|f_{j}\|_{h^{p_{j}}}\prod_{\mathfrak{I}_{\infty}}\|f_{j}\|_{L^ {\infty}}, \tag{52}\]
where \(\frac{1}{p_{0}}=\sum_{j=1}^{N}\frac{1}{p_{j}}\), which is a consequence of [17, Theorem 1.1], together with Lemma 4.1, which concludes the proof.
### Boundedness of \(T_{\sigma_{1,2}}^{\Phi}\)
In the analysis of the boundedness of \(T_{\sigma_{j,k}}^{\Phi}\), the symmetry of the operator's form under permutations of the frequency variables allows us to restrict our attention to just one of the \(\sigma_{j,k}\), the argument for all the others being identical. For definiteness, we choose to study \(\sigma_{1,2}\), so we have that \(|\xi_{1}|\) and \(|\xi_{2}|\) are comparable to each other.
#### 7.3.1. Boundedness with Banach targets
The proofs of the boundedness of \(T^{\Phi}_{\sigma_{1,2}}\) with target spaces bmo and \(L^{2}\) are identical to those for multilinear FIOs as carried out in Section 8 of [13]. However, the analysis in [13] required a result about Carleson measures associated to linear FIOs. The analogue of that result was provided in Proposition 4.4 above, and with that proposition at hand, we can run the machinery of the proofs in the case of multilinear FIOs in [15] and obtain the boundedness of \(T^{\Phi}_{\sigma_{1,2}}\) with target spaces bmo and \(L^{2}\).
#### 7.3.2. Boundedness with quasi-Banach targets
Using the representation (42), we are dealing with the \(h^{p_{0}}\)-regularity of the multilinear operator
\[D(f_{1},\ldots,f_{N})(x)=\sum_{k=k_{0}}^{\infty}M_{\mathfrak{m}}\circ T^{\varphi _{0}}_{d_{0}}\circ P^{0}_{k}\left[G^{U}_{k}\right](x), \tag{53}\]
where
\[G^{U}_{j}:=(Q^{u_{1}}_{j}\circ T^{\varphi_{1}}_{d_{1}})(f_{1})( Q^{u_{2}}_{j}\circ T^{\varphi_{2}}_{d_{2}})(f_{2})\prod_{\iota\in\mathfrak{I}_{2 }}(P^{u_{\iota}}_{j}\circ T^{\varphi_{\iota}}_{d_{\iota,j}})(f_{\iota})\] \[\times\prod_{\iota\in\mathfrak{J}_{f}}(P^{u_{\iota}}_{j}\circ T^ {\varphi_{\iota}}_{d_{\iota,j}})(f_{\iota})\prod_{\iota\in\mathfrak{I}_{ \infty}}(P^{u_{\iota}}_{j}\circ T^{\varphi_{\iota}}_{d_{\iota,j}})(f_{\iota})\]
Now given an \((N-2)\)-tuple \((p_{3},\ldots,p_{N})\), we define
\[\mathfrak{J}_{2}=\{j\in\{3,\ldots,N\}:\,p_{j}=2\},\,\mathfrak{J}_{f}=\{j\in\{3,\ldots,N\}:\,2\neq p_{j}<\infty\}\]
and
\[\mathfrak{J}_{\infty}=\{j\in\{3,\ldots,N\}:\,p_{j}=\infty\}.\]
Now, by using (26) we can rewrite
\[d_{0}(k,\xi)=d^{\flat}_{0}(k,\xi)+d^{\sharp}_{0}(k,\xi),\]
and observe that the supports of \(\omega_{k}\) and \(\chi_{0}\) allow us to write
\[T^{\varphi_{0}}_{d^{\flat}_{0}}\circ P^{0}_{k}=2^{km_{0}}T^{\varphi_{0}}_{ \Omega_{0}}\circ P^{0}_{k_{0}},\]
where \(\Omega_{0}:=1-\chi_{0}\).
**The analysis concerning \(d^{\sharp}_{0}\).** For \(d^{\sharp}_{0}(k,\xi)\), we replace \((T^{\varphi_{0}}_{d^{\sharp}_{0}}\circ P^{0}_{k})(f)\) by \(T^{\varphi_{0}}_{\gamma}\circ R_{k}\circ P^{0}_{k}(f)\), where \(\gamma(\xi):=\chi_{0}(\xi)|\xi|^{m_{0}}\in S^{m_{0}}\) with \(m_{0}<0\), and \(R_{k}\) is as in (28). This yields that
\[\begin{split}&\sum_{k=k_{0}}^{\infty}M_{\mathfrak{m}}\circ T^{\varphi_{0}}_{d^{\sharp}_{0}}\circ P^{0}_{k}\left[G^{U}_{k}\right](x)\\ &=\sum_{j\geqslant k_{0}}\sum_{k\geqslant j}2^{(k-j)m_{0}}M_{\mathfrak{m}}T^{\varphi_{0}}_{\gamma}Q_{j}P^{0}_{k}\left[G^{U}_{k}\right](x)\\ &=\sum_{k\geqslant 0}2^{km_{0}}\sum_{j\geqslant k_{0}}M_{\mathfrak{m}_{j+k}}T^{\varphi_{0}}_{\gamma}Q_{j}P^{0}_{k+j}\left[G^{U}_{k+j}\right](x).\end{split} \tag{54}\]
**Remark 7.2**.: _Note that here the fact that \(m_{0}<0\)\((\)which excludes target-space \(L^{2})\) is crucial in the analysis that follows below._
Now we observe that one can write
\[M_{\mathfrak{m}_{j+k}}\circ T^{\varphi_{0}}_{\gamma}\circ Q_{j}=\left(\sum_{j ^{\prime}=j-1}^{j+1}T^{U}_{j,j^{\prime},k}\right)\circ Q_{j}=\left(\sum_{\ell=- 1}^{1}\sum_{j^{\prime}-j=\ell\ (\text{mod }3)}T^{U}_{j^{\prime}+\ell,j^{\prime},k}\right) \circ Q_{j}\]
where \(T^{U}_{j,j^{\prime},k}\) is the oscillatory integral with amplitude \(\mathfrak{m}(j+k,x,U)\,\gamma(\xi)\,\phi_{j^{\prime}}(\xi)\) and phase \(\varphi_{0}\). Observe that
\[\mathcal{T}^{U}_{j,k}:=\sum_{\ell=-1}^{1}\sum_{j^{\prime}-j\equiv\ell\ (\text{mod }3 )}T^{U}_{j^{\prime}+\ell,j^{\prime},k}\]
is periodic in \(j\) with period \(3\), and is an oscillatory integral with amplitude in \(S^{m_{0}}\) uniformly in \(k\).
Thus (54) can be rewritten as
\[\sum_{k}2^{km_{0}}\sum_{\ell=0}^{2}\mathcal{T}^{U}_{\ell,k}\left(D_{\ell,k}(f_ {1},\ldots,f_{N})\right)(x),\]
where
\[D_{\ell,k}(f_{1},\ldots,f_{N})(x):=\chi_{0}(2D)\sum_{j\equiv\ell\ (\text{mod }3 ),\,j\geqslant k_{0}}Q^{0}_{j}P^{0}_{k+j}\left[G^{U}_{j+k}\right](x),\]
and \(\chi_{0}\) is the same high-frequency cut-off introduced previously (with a symbol in \(S^{0}\)).
For the high-frequency part of the multilinear operator we observe that
\[\|D_{\ell,k}(f_{1},\ldots,f_{N})\|_{h^{p_{0}}}\lesssim\Big{\|}\sum_{j\equiv \ell\ (\text{mod }3),\,j\geqslant k_{0}}Q^{0}_{j}P^{0}_{k+j}\left[G^{U}_{j+k}\right] \Big{\|}_{h^{p_{0}}}.\]
Now since the spectrum of \(Q^{0}_{j}P^{0}_{k+j}\left[G^{U}_{j+k}\right]\) is inside an annulus of size \(2^{j}\), a theorem in Section 2.5.2 on page 79 of [18], together with estimate (20) and finally the Cauchy-Schwarz inequality (using the boundedness of the operators \(T^{\varphi_{1}}_{d_{1}}\) and \(T^{\varphi_{2}}_{d_{2}}\)), yield that
\[\begin{split}&\Big{\|}\sum_{j\equiv\ell\ (\text{mod }3),\,j\geqslant k_{0}}Q^{0}_{j}P^{0}_{k+j}\left[G^{U}_{j+k}\right] \Big{\|}_{h^{p_{0}}}\\ &\lesssim\Big{\|}\Big{(}\sum_{j\equiv\ell\ (\text{mod }3),\,j \geqslant k_{0}}\left|Q^{0}_{j}P^{0}_{k+j}\left[G^{U}_{j+k}\right]\Big{|}^{2} \right|^{\frac{1}{2}}\Big{\|}_{L^{p_{0}}}\\ &\lesssim\Big{\|}\Big{(}\sum_{j\geqslant k+k_{0}}\left|G^{U}_{j} \right|^{2}\Big{)}^{\frac{1}{2}}\Big{\|}_{L^{p_{0}}}.\end{split} \tag{55}\]
Now we proceed by dividing the regularity results into cases which we shall deal with accordingly.
**Case I. \(p_{1}<\infty\)**
Let us first assume that \(p_{2}<\infty\). Here we use the same reasoning as in the paragraph preceding the displayed equation (49) and note that the left-hand side term of (55) is bounded by
\[\begin{split}&\left\|\left(\sum_{j\geqslant 2k_{0}}\left|(Q^{u_{1}}_{j}\circ T^{\varphi_{1}}_{d_{1}})(f_{1})\right|^{2}\right)^{1/2}\right\|_{L^{p_{1}}}\\ &\qquad\times\Big{\|}\Big{(}\sum_{j\geqslant 2k_{0}}\left|(Q^{u_{2}}_{j}\circ T^{\varphi_{2}}_{d_{2}})(f_{2})\prod_{\iota\in\mathfrak{J}_{2}\cup\mathfrak{J}_{f}\cup\mathfrak{J}_{\infty}}(P^{u_{\iota}}_{j}\circ T^{\varphi_{\iota}}_{d_{\iota,j}})(f_{\iota})\right|^{2}\Big{)}^{\frac{1}{2}}\Big{\|}_{L^{r_{1}}},\end{split}\]
where
\[\frac{1}{p_{0}}=\frac{1}{p_{1}}+\frac{1}{r_{1}},\]
and the term \((Q_{j}^{u_{2}}\circ T_{d_{2}}^{\varphi_{2}})(f_{2})\prod_{\iota\in\mathfrak{J}_{2}\cup\mathfrak{J}_{f}\cup\mathfrak{J}_{\infty}}(P_{j}^{u_{\iota}}\circ T_{d_{\iota,j}}^{\varphi_{\iota}})(f_{\iota})\) is essentially \(F_{k}^{U}\) given in (46). Now since we have that
\[\left\|\left(\sum_{j\geqslant 2k_{0}}\,\left|(Q_{j}^{u_{1}}\circ T_{d_{1}}^{\varphi_{1}})(f_{1})\right|^{2}\right)^{1/2}\right\|_{L^{p_{1}}}\lesssim\left\|T_{d_{1}}^{\varphi_{1}}(f_{1})\right\|_{h^{p_{1}}}\lesssim\left\|f_{1}\right\|_{h^{p_{1}}},\]
the same argument as the one involved in deriving estimate (50) for the case \(N=2\) and (49) for \(N\geqslant 3\), yield the desired bound.
Now if \(p_{2}=\infty\), then by using Fefferman-Stein's estimate (51), one extracts the \(\|f_{2}\|_{\mathrm{bmo}}\) from the left-hand side of (55) and the remaining term will be
\[\left\|\Big{(}\sum_{j\geqslant 2k_{0}}\,\left|(Q_{j}^{u_{1}}\circ T_{d_{1}}^{\varphi_{1}})(f_{1})\prod_{\iota\in\mathfrak{J}_{2}\cup\mathfrak{J}_{f}\cup\mathfrak{J}_{\infty}}(P_{j}^{u_{\iota}}\circ T_{d_{\iota,j}}^{\varphi_{\iota}})(f_{\iota})\right|^{2}\Big{)}^{\frac{1}{2}}\right\|_{L^{r_{1}}},\]
for which the boundedness can be established as was done previously.
**Case II. \(\mathbf{p_{1}=\infty}\).**
In this case, applying Fefferman-Stein's estimate (51), one has that (55) is bounded by
\[\|f_{1}\|_{\mathrm{bmo}}\,\Big{\|}\Big{(}\sum_{j\geqslant 2k_{0}}\,\left|(Q_{j}^ {u_{2}}\circ T_{d_{2}}^{\varphi_{2}})(f_{2})\prod_{\iota\in\mathfrak{J}_{2} \cup\mathfrak{J}_{f}\cup\mathfrak{J}_{\infty}}(P_{j}^{u_{\iota}}\circ T_{d_{ \iota,j}}^{\varphi_{\iota}})(f_{\iota})\right|^{2}\Big{)}^{\frac{1}{2}}\Big{\|} _{L^{p_{0}}}.\]
If we assume that \(p_{2}<\infty\), we just proceed as in the analysis of (49) (or **Case I** above).
Now if \(p_{2}=\infty\), since \(p_{0}<\infty\), then \(\mathfrak{J}_{2}\cup\mathfrak{J}_{f}\neq\emptyset\). If \(\mathfrak{J}_{2}\neq\emptyset\), (55) is bounded by
\[\|f_{1}\|_{\mathrm{bmo}}\,\Big{\|}\Big{(}\sum_{j\geqslant 2k_{0}}\,\left|(Q_{j}^{u_{2}}\circ T_{d_{2}}^{\varphi_{2}})(f_{2})\prod_{\iota\in\mathfrak{J}_{2}}(P_{j}^{u_{\iota}}\circ T_{1}^{\varphi_{\iota}})(f_{\iota})\right|^{2}\Big{)}^{\frac{1}{2}}\Big{\|}_{L^{r_{2}}}\] \[\times\prod_{\mathfrak{J}_{f}}\left\|\Big{(}\sum_{j\geqslant 2k_{0}}\,\left|(P_{j}^{u_{\iota}}\circ T_{d_{\iota,j}}^{\varphi_{\iota}})(f_{\iota})\right|^{2}\Big{)}^{\frac{1}{2}}\right\|_{L^{p_{\iota}}}\prod_{\mathfrak{J}_{\infty}}\|f_{\iota}\|_{\mathrm{bmo}}\,,\]
where
\[\frac{1}{r_{2}}=\frac{|\mathfrak{J}_{2}|}{2}.\]
Therefore, the same analysis as in (49) yields the result.
If \(\mathfrak{J}_{2}=\emptyset\), then \(\mathfrak{J}_{f}\neq\emptyset\). Therefore applying Fefferman-Stein's estimate (51), yields that (55) is bounded by
\[\|f_{1}\|_{\mathrm{bmo}}\,\|f_{2}\|_{\mathrm{bmo}}\,\prod_{\mathfrak{J}_{f}}\left\|\Big{(}\sum_{j\geqslant 2k_{0}}\,\left|(P_{j}^{u_{\iota}}\circ T_{d_{\iota,j}}^{\varphi_{\iota}})(f_{\iota})\right|^{2}\Big{)}^{\frac{1}{2}}\right\|_{L^{p_{\iota}}}\prod_{\mathfrak{J}_{\infty}}\|f_{\iota}\|_{\mathrm{bmo}}\]
and using Lemma 4.2 concludes the discussion of this case.
**The analysis concerning \(d_{0}^{\flat}\).** The following lemma will be useful in proving the desired regularity result.
**Lemma 7.3**.: _Let \(k_{0}\) be fixed, \(0<p_{0}<\infty\) and \(0<p_{j}\leqslant\infty\) so that_
\[\frac{1}{p_{0}}=\sum_{j=1}^{N}\frac{1}{p_{j}}.\]
_Then one has that_
\[\sup_{k}\left\|P_{k_{0}}^{0}\left(G_{k}^{U}\right)\right\|_{h^{p_{0}}}\lesssim c _{k_{0}}\prod_{j=1}^{N}\|f_{j}\|_{X^{p_{j}}}\,.\]
Proof.: We shall give the proof for the case that \(p_{0}\leqslant 1\). A small modification of the argument yields the case for \(p_{0}>1\).
First we assume that \(p_{1}+p_{2}<\infty\). We use the Littlewood-Paley characterisation of \(h^{p_{0}}\), and the inclusion \(\ell^{p_{0}}\subset\ell^{1}\subset\ell^{2}\). Then applying [18, p.17] and the fact that the frequency support of \(\vartheta_{j}(D)P_{k_{0}}^{0}\) is included in a ball of radius \(O(2^{k_{0}})\) followed by Holder's inequality (52) and Lemma 4.1, we find that
\[\left\|P_{k_{0}}^{0}\left(G_{k}^{U}\right)\right\|_{h^{p_{0}}} \sim\left\|\left(\sum_{j=0}^{N(k_{0})}\left|\vartheta_{j}(D)P_{k_{0}}^{0}\left(G_{k}^{U}\right)\right|^{2}\right)^{1/2}\right\|_{p_{0}}\] \[\lesssim\left(\sum_{j=0}^{N(k_{0})}\left\|\vartheta_{j}(D)P_{k_{0}}^{0}\left(G_{k}^{U}\right)\right\|_{p_{0}}^{p_{0}}\right)^{1/p_{0}}\lesssim\left\|G_{k}^{U}\right\|_{p_{0}}\] \[\lesssim\prod_{\iota=1}^{2}\left\|Q_{k}^{u_{\iota}}\circ T_{d_{\iota}}^{\varphi_{\iota}}(f_{\iota})\right\|_{h^{p_{\iota}}}\prod_{\iota\in\mathfrak{J}_{2}\cup\mathfrak{J}_{f}}\left\|P_{k}^{u_{\iota}}\circ T_{d_{\iota,k}}^{\varphi_{\iota}}(f_{\iota})\right\|_{h^{p_{\iota}}}\prod_{\iota\in\mathfrak{J}_{\infty}}\left\|P_{k}^{u_{\iota}}\circ T_{d_{\iota,k}}^{\varphi_{\iota}}(f_{\iota})\right\|_{L^{\infty}}\] \[\lesssim\prod_{\iota=1}^{2}\left\|f_{\iota}\right\|_{h^{p_{\iota}}}\prod_{\iota\in\mathfrak{J}_{2}\cup\mathfrak{J}_{f}}\left\|f_{\iota}\right\|_{h^{p_{\iota}}}\prod_{\iota\in\mathfrak{J}_{\infty}}\left\|f_{\iota}\right\|_{\mathrm{bmo}}.\]
In the case that \(p_{1}=\infty\) or \(p_{2}=\infty\), a modification of the argument above, where one just uses that \(\sup_{k\geqslant k_{0}}\left\|Q_{k}G\right\|_{L^{\infty}}\lesssim\left\|G \right\|_{\mathrm{bmo}}\), yields the result.
Finally to deal with \(\sum_{k=k_{0}}^{\infty}M_{\mathfrak{m}}\circ T_{d_{0}}^{\varphi_{0}}\circ P_{k }^{0}\left[G_{k}^{U}\right](x)\) we observe that Lemma 7.3 yields
\[\left\|\sum_{k=k_{0}}^{\infty}2^{km_{0}}M_{\mathfrak{m}}\circ T_ {\Omega_{0}}^{\varphi_{0}}\circ P_{k_{0}}^{0}\left[G_{k}^{U}\right]\right\|_{ h^{p_{0}}}^{p_{0}}\] \[\leqslant\sum_{k=k_{0}}^{\infty}2^{km_{0}p_{0}}\left\|P_{k_{0}}^{ 0}\left[G_{k}^{U}\right]\right\|_{h^{p_{0}}}^{p_{0}}\lesssim\prod_{j=1}^{N} \left\|f_{j}\right\|_{X^{p_{j}}}.\]
Summing up and using the fact that \(m_{0}<0\), we deduce the boundedness of \(D(f_{1},\ldots,f_{N})\), with target \(h^{p_{0}}\).
## 8. Space-time estimates for systems of dispersive PDEs
In this section we shall prove Theorem 1.6, which amounts to showing Sobolev estimates for the solution \(u\) of the system of coupled PDEs
\[\left\{\begin{array}{l}i\partial_{t}u+\varphi_{0}(D)\,u=T_{\zeta}\left(v_{1},\ldots,v_{N}\right)\\ i\partial_{t}v_{j}+\varphi_{j}(D)\,v_{j}=0,\,\,\,j=1,\ldots,N\end{array}\right. \quad\text{with}\quad\left\{\begin{array}{l}u(0,x)=0\\ v_{j}(0,x)=f_{j}(x),\,\,\,j=1,\ldots,N,\end{array}\right.\]
where the \(\varphi_{j}\in\mathcal{C}^{\infty}(\mathbb{R}^{n}\setminus 0)\), \(j=0,\ldots,N\), are assumed to be positively homogeneous of degree \(s\in(0,\infty)\), \(f_{j}\in H^{\sigma_{j},p_{j}}\) with \(\sigma_{j}\geqslant 0\), and \(T_{\zeta}\) is the multilinear multiplier given by (8) with symbol \(\zeta\in S^{m_{\zeta}}(n,N)\) for some \(m_{\zeta}\leqslant 0\), to be specified later. The solution can be represented using the Duhamel formula as
\[u(t,x)=\int_{0}^{t}\int_{\mathbb{R}^{nN}}\zeta(\Xi)\,\prod_{j=1}^{N}\left( \widehat{f}_{j}(\xi_{j})\,e^{ix\cdot\xi_{j}+ir\varphi_{j}(\xi_{j})}\right)\,e^ {i(t-r)\varphi_{0}(\xi_{1}+\cdots+\xi_{N})}\,\mathrm{d}\Xi\,\,\mathrm{d}r. \tag{56}\]
This formula contains a multilinear oscillatory integral, and should therefore be suitable for analysis with the results of this paper. There are, however, two reasons why we cannot directly apply Theorems 1.4 and 1.3. Firstly, we must deal with the time dependency of \(u\), and secondly, proving bounds in Sobolev spaces introduces more complicated amplitudes, which are a product of the multilinear amplitudes we have seen earlier and linear
amplitudes in each variable. The following two results solve the first problem and extend regularity estimates of oscillatory integral operators with space-dependent phases to the corresponding time-dependent operators. We then proceed to prove Theorem 1.6 as a scholium to Theorems 1.4 and 1.3.
We shall start with the following lemma which yields time-dependent \(L^{p}\) estimates for linear evolutions.
**Lemma 8.1**.: _Let \(\varphi\in\mathcal{C}^{\infty}(\mathbb{R}^{n}\setminus 0)\) be a phase function positively homogeneous of degree \(s>0\). If \(s\neq 1\), then for all \(t\geqslant 0\),_
\[\|\langle D\rangle^{-sn|1/p-1/2|}e^{it\varphi(D)}u\|_{L^{p}}\lesssim\langle t \rangle^{n|1/p-1/2|}\|u\|_{L^{p}}, \tag{57}\]
_and for \(s=1\)_
\[\|\langle D\rangle^{-(n-1)|1/p-1/2|}e^{it\varphi(D)}u\|_{L^{p}}\lesssim\langle t \rangle^{(n-1)|1/p-1/2|}\|u\|_{L^{p}}. \tag{58}\]
Proof.: We only prove the case \(s\neq 1\); the remaining case is proven in a similar manner using Theorem 2.8. First note that Theorem 2.9 yields that for \(1<p<\infty\)
\[\|e^{i\varphi(D)}\langle D\rangle^{-ns|\frac{1}{p}-\frac{1}{2}|}u\|_{L^{p}} \lesssim\|u\|_{L^{p}}.\]
To include \(t\)-dependence, we first note that in the case \(t\leqslant 1\), \(t\varphi(\xi)\) is a phase of order \(s\) uniformly in \(t\) and therefore satisfies the estimate
\[\|e^{it\varphi(D)}u\|_{L^{p}}\lesssim\|\langle D\rangle^{sn|1/p-1/2|}u\|_{L^{p }}\leqslant\langle t\rangle^{a}\|\langle D\rangle^{sn|1/p-1/2|}u\|_{L^{p}}, \tag{59}\]
for any \(a\geqslant 0\). When \(t>1\), we write \(m(p,s)=-ns|1/p-1/2|\) and perform a change of variables (and using homogeneity of \(\varphi\)), finding
\[\begin{split}\int e^{ix\cdot\xi+it\varphi(\xi)}\langle\xi \rangle^{m(p,s)}\hat{u}(\xi)\,\mathrm{d}\xi&=t^{-n/s}\int e^{it^ {-1/s}x\cdot\xi+i\varphi(\xi)}\langle t^{-1/s}\xi\rangle^{m(p,s)}\hat{u}(t^{- 1/s}\xi)\,\mathrm{d}\xi\\ &=t^{-m(p,s)/s}\int e^{it^{-1/s}x\cdot\xi+i\varphi(\xi)}\sigma_{ t}(\xi)\widehat{u(t^{1/s}\cdot)}(\xi)\,\mathrm{d}\xi,\end{split} \tag{60}\]
where \(\sigma_{t}(\xi)=t^{m(p,s)/s}\langle t^{-1/s}\xi\rangle^{m(p,s)}\) satisfies \(|\partial_{\xi}^{\alpha}\sigma_{t}(\xi)|\leqslant C_{\alpha}\langle\xi\rangle ^{m(p,s)-|\alpha|}\) when \(t\geqslant 1\) and \(m(p,s)\geqslant 0\). Therefore the \(L^{p}-\)bound given by Theorem 2.9, (59) and (60) yield the desired result.
A useful multilinear generalisation of this result is the following.
**Lemma 8.2**.: _Let \(\varphi_{j}\in\mathcal{C}^{\infty}(\mathbb{R}^{n}\setminus 0)\), \(j=0,\ldots,N\), be phase functions that are homogeneous of degree \(s>0\) and \(\sigma\in S^{m}(n,N)\) with \(m\in\mathbb{R}\). Define_
\[T_{\sigma}^{(t)}(f_{1},\ldots,f_{N}):=\int_{\mathbb{R}^{Nn}}e^{it\varphi_{0}( \xi_{1}+\cdots+\xi_{N})}\sigma(\Xi)\,\prod_{j=1}^{N}\widehat{f}_{j}(\xi_{j}) \,e^{ix\cdot\xi_{j}+it\varphi_{j}(\xi_{j})}\,\mathrm{d}\Xi.\]
_Assume that for some \(1<p_{0},\ldots,p_{N}<\infty\) and \(r_{0},\ldots,r_{N}\in\mathbb{R}\) one has the estimate_
\[\|\langle D\rangle^{-r_{0}}T_{\sigma}^{(1)}(f_{1},\ldots,f_{N})\|_{L^{p}} \leqslant C(\sigma,\Phi)\prod_{j=1}^{N}\|\langle D\rangle^{r_{j}}f_{j}\|_{L^{ p_{j}}},\]
_where \(C(\sigma,\Phi)\) only depends on a finite number of seminorms of \(\sigma\) and upper bounds on the size of a finite number of derivatives of \(\varphi_{j}\). Then it follows that, for all \(t\geqslant 0\)_
\[\|\langle D\rangle^{-r_{0}}T_{\sigma}^{(t)}(f_{1},\ldots,f_{N})\|_{L^{p}} \leqslant C(\sigma,\Phi)\langle t\rangle^{(\max(-m,0)+\sum_{j=0}^{N}\max(r_{j },0))/s}\prod_{j=1}^{N}\|\langle D\rangle^{r_{j}}f_{j}\|_{L^{p_{j}}}.\]
Proof.: For \(0\leqslant t\leqslant 1\), there is an upper bound on the derivatives of \(t\Phi\) that is uniform in \(t\), so this case is clear. When \(t>1\), we let \(g_{j}=\langle D\rangle^{r_{j}}f_{j}\), so that
\[\langle D\rangle^{-r_{0}}T_{\sigma}^{(t)}(f_{1},\dots,f_{N})(x)=\] \[\int_{\mathbb{R}^{Nn}}e^{ix\cdot(\xi_{1}+\dots+\xi_{N})+it\varphi_ {0}(\xi_{1}+\dots+\xi_{N})+\sum_{j=1}^{N}it\varphi_{j}(\xi_{j})}\] \[\qquad\times\ \langle\xi_{1}+\dots+\xi_{N}\rangle^{-r_{0}}\sigma( \Xi)\prod_{j=1}^{N}\langle\xi_{j}\rangle^{-r_{j}}\hat{g}_{j}(\xi_{j})\,\mathrm{ d}\Xi\] \[=t^{-Nn/s}\int_{\mathbb{R}^{Nn}}e^{it^{-1/s}x\cdot(\xi_{1}+\dots+ \xi_{N})+i\varphi_{0}(\xi_{1}+\dots+\xi_{N})+\sum_{j=1}^{N}i\varphi_{j}(\xi_{ j})}\] \[\qquad\times\ \langle t^{-1/s}(\xi_{1}+\dots+\xi_{N})\rangle^{-r_{0}} \sigma(t^{-1/s}\Xi)\prod_{j=1}^{N}\langle t^{-1/s}\xi_{j}\rangle^{-r_{j}}\hat {g}_{j}(t^{-1/s}\xi_{j})\,\mathrm{d}\Xi\] \[=t^{(\max(-m,0)+\sum_{j=0}^{N}\max(r_{j},0)-Nn)/s}\int_{\mathbb{R }^{Nn}}e^{it^{-1/s}x\cdot(\xi_{1}+\dots+\xi_{N})+i\varphi_{0}(\xi_{1}+\dots+ \xi_{N})+\sum_{j=1}^{N}i\varphi_{j}(\xi_{j})}\] \[\qquad\times\ \langle\xi_{1}+\dots+\xi_{N}\rangle^{-r_{0}}t^{ \min(m,0)/s}\sigma(t^{-1/s}\Xi)\bigg{(}\prod_{j=1}^{N}\langle\xi_{j}\rangle^{ -r_{j}}\hat{g}_{j}(t^{-1/s}\xi_{j})\bigg{)}\] \[\qquad\times\ \frac{t^{-\max(r_{0},0)/s}\langle t^{-1/s}(\xi_{1}+ \dots+\xi_{N})\rangle^{-r_{0}}}{\langle\xi_{1}+\dots+\xi_{N}\rangle^{-r_{0}}} \prod_{j=1}^{N}\frac{t^{-\max(r_{j},0)/s}\langle t^{-1/s}\xi_{j}\rangle^{-r_{ j}}}{\langle\xi_{j}\rangle^{-r_{j}}}\,\mathrm{d}\Xi\] \[=t^{(\max(-m,0)+\sum_{j=0}^{N}\max(r_{j},0))/s}\] \[\qquad\times\ S_{0}\langle D\rangle^{-r_{0}}T_{\sigma_{t}}^{(1)} (\langle D\rangle^{-r_{1}}S_{1}g_{1}(t^{1/s}\cdot),\,\dots,\,\langle D\rangle ^{-r_{N}}S_{N}g_{N}(t^{1/s}\cdot))(t^{-1/s}x),\]
where
\[S_{j} =t^{-\max(r_{j},0)/s}\langle t^{-1/s}D\rangle^{-r_{j}}\langle D \rangle^{r_{j}},\] \[\sigma_{t}(\Xi) =t^{\min(m,0)/s}\sigma(t^{-1/s}\Xi).\]
Now, \(\sigma_{t}\in S^{m}(n,N)\) uniformly in \(t\), so we can use the known boundedness of \(\langle D\rangle^{-r_{0}}T_{\sigma_{t}}^{(1)}\). The operators \(S_{j}\) are furthermore Mikhlin multipliers uniformly in \(t\) and hence bounded \(L^{p}\to L^{p}\). It follows that
\[\|\langle D\rangle^{-r_{0}}T_{\sigma}^{(t)}(f_{1},\dots,f_{N})\|_{L^{p_{0}}} \lesssim t^{(\max(-m,0)+\sum_{j=0}^{N}\max(r_{j},0))/s}\prod_{j=1}^{N}\| \langle D\rangle^{r_{j}}f_{j}\|_{L^{p_{j}}}.\qed\]
Now let us return to the Duhamel representation (56). Here we set
\[T_{\zeta}^{(r)}(f_{1},\dots,f_{N})(x):=\int_{\mathbb{R}^{nN}}\zeta(\Xi)\,\prod _{j=1}^{N}\Big{(}\widehat{f}_{j}(\xi_{j})\,e^{ix\cdot\xi_{j}+ir\varphi_{j}(\xi_ {j})}\Big{)}\ \mathrm{d}\Xi,\]
and observe that
\[u(t,x)=\int_{0}^{t}e^{i(t-r)\varphi_{0}(D)}\langle D\rangle^{m(p_{0},s)} \langle D\rangle^{-m(p_{0},s)}T_{\zeta}^{(r)}(f_{1},\dots,f_{N})(x)\,\mathrm{ d}r.\]
Let
\[\sigma_{0}=\varkappa+m_{c}-m_{\zeta},\qquad\varkappa:=\min_{j=1,\dots,N} \sigma_{j},\]
where \(m_{c}=m_{c}(s)\) is as in the statement of Theorem 1.6. From this and Lemma 8.1 we immediately obtain
\[\|u\|_{H^{\sigma_{0},p_{0}}}\lesssim\int_{0}^{t}\langle t-r\rangle^{-m(p_{0},s )/s}\|\langle D\rangle^{-m(p_{0},s)}T_{\zeta}^{(r)}(f_{1},\dots,f_{N})\|_{H^{ \sigma_{0},p_{0}}}\,\mathrm{d}r, \tag{61}\]
for \(1<p_{0}<\infty\). Using Lemma 8.2 it will therefore be enough for us to study the right-hand norm in the case where \(r=1\). Now, using the decomposition of Section 5 we can decompose \(\zeta\) and reduce the analysis of \(T_{\zeta}^{(1)}\) to the study of multilinear operators \(T_{\zeta_{0}}\), \(T_{\zeta_{1}}\) and \(T_{\zeta_{1,2}}\). It should however be noted that for these terms the method only takes advantage of the added regularity on the first argument (i. e. \(f_{1}\)). For the similar terms \(T_{\zeta_{2}}\), \(T_{\zeta_{2,3}}\), etc. one can take advantage of a different \(\sigma_{j}\). This is the reason why \(\varkappa\) is the minimum of the \(\sigma_{j}\), \(j=1,\ldots,N\).
### Treatment of \(T_{\zeta_{0}}\)
Here we make use of the representation given in (36), which in our case with \(x\)-independent amplitude translates to
\[\mathbf{I} :=\langle D\rangle^{-m(p_{0},s)}T_{\zeta_{0}}(f_{1},\ldots,f_{N})\] \[=\sum_{K\in\mathbb{Z}^{nN}}a_{K}\langle D\rangle^{-m(p_{0},s)} \theta(D/\sqrt{N})\bigg{(}\prod_{j=1}^{N}T_{\theta}^{\varphi_{j}}\circ\tau_{ \frac{2\pi k_{j}}{L}}(f_{j})\bigg{)}.\]
The method in Subsection 7.1 can then be carried out to show that for any \(\sigma\in\mathbb{R}\)
\[\|\mathbf{I}\|_{H^{\sigma_{0},p_{0}}}\lesssim\prod_{j=1}^{N}\|f_{j}\|_{X^{p_{j}}}\lesssim\prod_{j=1}^{N}\|f_{j}\|_{H^{\sigma_{j},p_{j}}}.\]
### Treatment of \(T_{\zeta_{1}}\)
Using (39) and (40), with the same notation as was introduced there, its \(L^{p}\)-boundedness can be inferred from that of the multilinear operator
\[\mathbf{II}:=\langle D\rangle^{-m(p_{0},s)}T_{\zeta_{1}}(f_{1},\ldots,f_{N})= \int\widetilde{\mathbf{II}}_{U}\,\frac{1}{(1+|U|^{2})^{M}}\ \mathrm{d}U,\]
where
\[\widetilde{\mathbf{II}}_{U} =\langle D\rangle^{-m(p_{0},s)}\sum_{k\geqslant k_{0}}\chi_{0}(2 D)\,Q_{k}^{0}\left[(Q_{k}^{u_{1}}\circ T_{b_{1}}^{\varphi_{1}})(f_{1})\prod_{j=2} ^{N}(P_{k}^{u_{j}}\circ T_{b_{j,k}}^{\varphi_{j}})(f_{j})\right]\] \[=\sum_{k\geqslant k_{0}}\chi_{0}(2D)\langle D\rangle^{-m(p_{0},s )}\circ Q_{k}^{0}\left[(Q_{k}^{u_{1}}\circ T_{b_{1}|\cdot|-\sigma_{1}}^{\varphi _{1}})(|D|^{\sigma_{1}}f_{1})\prod_{j=2}^{N}(P_{k}^{u_{j}}\circ T_{b_{j,k}}^{ \varphi_{j}})(f_{j})\right]\] \[=\chi_{0}(2D)\langle D\rangle^{-m(p_{0},s)}\,|D|^{m_{\zeta}+m(p_ {0},s)-m_{c}(s)-\sigma_{1}}\] \[\circ\sum_{k\geqslant k_{0}}Q_{k}^{1}\left[(Q_{k}^{2}\circ T_{b_ {1}}^{\varphi_{1}})(|D|^{\sigma_{1}}\,\chi_{0}(2D)f_{1})\prod_{j=2}^{N}(P_{k} ^{u_{j}}\circ T_{b_{j,k}}^{\varphi_{j}})(f_{j})\right],\]
where \(b_{1}\in S^{m_{1}}\) and \(b_{j,k}\in S^{m_{j}}\). \(Q_{k}^{1}\) has symbol
\[\phi_{k}(\xi)|2^{-k}\xi|^{-m_{\zeta}+m_{c}(s)-m(p_{0},s)+\sigma_{1}},\]
\(Q_{k}^{2}\) has symbol
\[\psi_{k}(\xi)|2^{-k}\xi|^{m_{\zeta}-m(p_{1},s)-\sigma_{1}}e^{2^{-k}\xi\cdot u _{1}},\]
and we define
\[\widetilde{b_{1}}(\xi) =\chi_{0}(\xi)\,|\xi|^{m(p_{1},s)}\in S^{m(p_{1},s)},\] \[\widetilde{b_{j,k}}(\xi) =2^{m(p_{j},s)k}\omega_{k}(\xi),\qquad j=2,\ldots,N.\]
Now since the operator \(\sum_{k\geqslant k_{0}}Q_{k}^{1}\left[(Q_{k}^{2}\circ T_{b_{1}}^{\varphi_{1}}) (f_{1})\prod_{j=2}^{N}(P_{k}^{u_{j}}\circ T_{b_{j,k}}^{\varphi_{j}})(f_{j})\right]\) is of the form (44) the boundedness of the latter yields that
\[\|\mathbf{II}\|_{H^{\sigma_{0},p_{0}}}\lesssim\|\mathbf{II}\|_{H^{\sigma_{1}+m_{c}-m_{\zeta},p_{0}}}\lesssim\|f_{1}\|_{H^{\sigma_{1},p_{1}}}\prod_{j=2}^{N}\|f_{j}\|_{X^{p_{j}}}\lesssim\prod_{j=1}^{N}\|f_{j}\|_{H^{\sigma_{j},p_{j}}}.\]
### _Treatment of \(T_{\zeta_{1,2}}\)_

For this part we will need to invoke an interpolation argument. To that end, we fix \(\sigma_{j}\)'s, \(0\leqslant j\leqslant N\), with \(\sigma_{0}\geqslant 0\). Then the goal is to show that the \(N\)-linear operator \(W\) given by
\[W(f_{1},\ldots,f_{N}):=\langle D\rangle^{\sigma_{0}-m(p_{0},s)}T_{\zeta_{1,2}} (\langle D\rangle^{-\sigma_{1}}f_{1},\ldots,\langle D\rangle^{-\sigma_{N}}f_{N})\]
is bounded \(\prod_{j}X^{p_{j}}\to X^{p_{0}}\), provided that \(m_{\zeta}=m_{c}-\sigma_{0}+\varkappa\). Observe that \(m_{c}\) depends linearly on the \(1/p_{j}\) between any two adjacent endpoints in Lemma 6.1, and hence the same goes for \(m_{\zeta}\). We can therefore use the interpolation argument in Lemma 6.1 on \(W\).
Just as in the treatment of **II**, we only use the Sobolev regularity in \(f_{1}\), and that of \(f_{2},\ldots,f_{N}\) will only be used in the analogous estimates for other \(\zeta_{i,j}\).
Now, using the representation (53), we need to study the boundedness of
\[\textbf{III}:=\langle D\rangle^{-m(p_{0},s)}T_{\zeta_{1,2}}(f_{1},\ldots,f_{N} )=\int\widetilde{\textbf{III}}_{U}\,\frac{1}{(1+|U|^{2})^{M}}\ \mathrm{d}U,\]
where \(U=(u_{1},\ldots,u_{N})\) and
\[\widetilde{\textbf{III}}_{U}=\sum_{k\geqslant k_{0}}^{\infty}M_{\mathfrak{m}} d_{0}(D)\langle D\rangle^{-m(p_{0},s)}P_{k}^{0}\left[(Q_{k}^{u_{1}}\circ T_{d_{1}}^{ \varphi_{1}})(f_{1})\,(Q_{k}^{u_{2}}\circ T_{d_{2}}^{\varphi_{2}})(f_{2})\, \prod_{j=3}^{N}(P_{k}^{u_{j}}\circ T_{d_{j,k}}^{\varphi_{j}})(f_{j})\right],\]
with \(M_{\mathfrak{m}}\) being the operation of multiplication by \(\mathfrak{m}(k,U)\), which is uniformly bounded in \(k\). Moreover \(d_{0}=2^{k(m_{\zeta}-m_{c}+m(p_{0},s))}\omega_{k}(\xi)\) and the amplitudes for each OIO are defined by (41).
We shall consider the norm of \(\widetilde{\textbf{III}}_{U}\) in \(H^{\sigma_{0},p_{0}}\) where \(p_{0}=2\), \(p_{0}=1\) and \(p_{0}=\infty\), which by duality corresponds to estimating
\[S:=\int\widetilde{\textbf{III}}_{U}(x)f_{0}(x)\ \mathrm{d}x\]
with \(f_{0}\in H^{-\sigma_{0},p_{0}^{\prime}}\) (\(p_{0}^{\prime}\) is the Holder dual of \(p_{0}\)). First we observe that
\[P_{k}^{0}=P_{k_{0}}+\sum_{\ell=k_{0}+1}^{k}Q_{\ell}\]
and therefore one can write \(S=S_{\mathrm{\tiny P}}+S_{\mathrm{\tiny Q}}\) with
\[S_{\mathrm{\tiny P}} =\int\sum_{k\geqslant k_{0}}^{\infty}M_{\mathfrak{m}}d_{0}(D) \langle D\rangle^{-m(p_{0},s)}P_{k_{0}}f_{0}(x)\,Q_{k}^{u_{1}}T_{d_{1}}^{ \varphi_{1}}f_{1}(x)\,Q_{k}^{u_{2}}T_{d_{2}}^{\varphi_{2}}f_{2}(x)\,\prod_{j=3 }^{N}P_{k}^{u_{j}}T_{d_{j,k}}^{\varphi_{j}}f_{j}(x)\ \mathrm{d}x\] \[S_{\mathrm{\tiny Q}} =\int\sum_{k\geqslant k_{0}}^{\infty}\sum_{\ell=k_{0}+1}^{k}M_{ \mathfrak{m}}Q_{\ell}d_{0}(D)\langle D\rangle^{-m(p_{0},s)}f_{0}(x)\,Q_{k}^{u _{1}}T_{d_{1}}^{\varphi_{1}}f_{1}(x)\,Q_{k}^{u_{2}}T_{d_{2}}^{\varphi_{2}}f_{2 }(x)\,\prod_{j=3}^{N}P_{k}^{u_{j}}T_{d_{j,k}}^{\varphi_{j}}f_{j}(x)\ \mathrm{d}x\]
To show the needed boundedness of these parts, we shall rely on the method laid out in detail in Section 8.1 of [15]. The terms \(S\) and \(S_{\mathrm{\tiny P}}\) correspond in that text to the expressions (60) and (61), respectively. For the term \(S_{\mathrm{\tiny Q}}\) we note that using the condition
\(\varkappa+m_{c}-m_{\zeta}\geqslant 0\), we have that
\[\sum_{k\geqslant k_{0}}\sum_{\ell=k_{0}+1}^{k}\Big{|}\int\Big{(}M_ {\mathfrak{m}}Q_{\ell}d_{0}(D)\langle D\rangle^{-m(p_{0},s)}f_{0}\Big{)}(x) \left(Q_{k}^{u_{1}}\circ T_{d_{1}}^{\varphi_{1}}\right)\left(f_{1}\right)(x)\\ \times\left(Q_{k}^{u_{2}}\circ T_{d_{2}}^{\varphi_{2}}\right)(f_ {2})\left(x\right)\prod_{j=3}^{N}\left(P_{k}^{u_{j}}\circ T_{d_{j,k}}^{\varphi_ {j}}\right)(f_{j})(x)\mathrm{d}x\Big{|}\\ \leqslant\sum_{k\geqslant k_{0}}\sum_{\ell=k_{0}+1}^{k}2^{(k-\ell )(m(p_{0},s)-\sigma_{1}-m_{c}+m_{\zeta})}\Big{|}\int\Big{(}M_{\mathfrak{m}}2^{ -km(p_{0},s)}Q_{\ell}\tilde{d_{0}}(D)\langle D\rangle^{-\sigma_{1}-m_{c}+m_{ \zeta}}f_{0}\Big{)}(x)\\ \times\left(Q_{k}^{u_{1}}\circ T_{d_{1}}^{\varphi_{1}}\right)(| D|^{\sigma_{1}}f_{1})\left(x\right)\left(Q_{k}^{u_{2}}\circ T_{d_{2}}^{ \varphi_{2}}\right)(f_{2})\left(x\right)\prod_{j=3}^{N}\left(P_{k}^{u_{j}} \circ T_{d_{j,k}}^{\varphi_{j}}\right)(f_{j})\left(x\right)\mathrm{d}x\Big{|} \\ \leqslant\sum_{k\geqslant k_{0}}\sum_{\ell=k_{0}+1}^{k}\Big{|} \int\Big{(}M_{\mathfrak{m}}Q_{\ell}\tilde{d_{0}}(D)\langle D\rangle^{-\sigma_ {1}-m_{c}+m_{\zeta}}f_{0}\Big{)}(x)\left(Q_{k}^{u_{1}}\circ T_{d_{1}}^{\varphi _{1}}\right)(|D|^{\sigma_{1}}f_{1})\left(x\right)\\ \times\left(Q_{k}^{u_{2}}\circ T_{d_{2}}^{\varphi_{2}}\right)(f_ {2})\left(x\right)\prod_{j=3}^{N}\left(P_{k}^{u_{j}}\circ T_{d_{j,k}}^{\varphi _{j}}\right)(f_{j})\left(x\right)\mathrm{d}x\Big{|}\\ =\sum_{\ell=k_{0}}^{\infty}\sum_{k=0}^{\infty}\Big{|}\int\Big{(} M_{\mathfrak{m}}Q_{\ell}\tilde{d_{0}}(D)\langle D\rangle^{\varkappa- \sigma_{0}-\sigma_{1}}f_{0}\Big{)}(x)\left(Q_{k+\ell}^{u_{1}}\circ T_{d_{1}}^{ \varphi_{1}}\right)(|D|^{\sigma_{1}}f_{1})\left(x\right)\\ \times\left(Q_{k+\ell}^{u_{2}}\circ T_{d_{2}}^{\varphi_{2}} \right)(f_{2})\left(x\right)\prod_{j=3}^{N}\left(P_{k+\ell}^{u_{j}}\circ T_{d _{j,k}}^{\varphi_{j}}\right)(f_{j})\left(x\right)\mathrm{d}x\Big{|} \tag{62}\]
where \(\tilde{d}_{0}(\xi)=\omega_{k}(\xi)\). This last expression corresponds to the sum in \(k\) of expression (62) in [15].
With this set, one can follow the procedure in [15] to show the required end-point estimates. However, in order for every step of that proof to translate to this setting, we need to show some additional facts about our terms.
First we consider the target space \(H^{\sigma_{0},2}\), and to make use of duality take \(f_{0}\) such that \(\langle D\rangle^{-\sigma_{0}}f_{0}\in L^{2}\). To deal with \(S_{\mathrm{p}}\) we hence have to estimate
\[\sum_{k\geqslant k_{0}}\Big{|}\int\Big{(}M_{\mathfrak{m}}P_{k_{0} }d_{0}(D)\langle D\rangle^{-m(p_{0},s)}f_{0}\Big{)}(x)\\ \times(Q_{k}^{u_{1}}\circ T_{d_{1}}^{\varphi_{1}})(f_{1})(x) \left(Q_{k}^{u_{2}}\circ T_{d_{2}}^{\varphi_{2}})(f_{2})(x)\,\prod_{j=3}^{N}(P_ {k}^{u_{j}}\circ T_{d_{j}}^{\varphi_{j}})(f_{j})(x)\,\,\mathrm{d}x\Big{|}\]
Now since \(k_{0}\) is fixed, the symbol of the multiplier \(P_{k_{0}}\) is a Schwartz function and therefore
\[M_{\mathfrak{m}}P_{k_{0}}d_{0}(D)\langle D\rangle^{-m(p_{0},s)}f _{0}\] \[=(P_{k_{0}}\langle D\rangle^{-m(p_{0},s)}\langle D\rangle^{\sigma_ {1}+m_{c}-m_{\zeta}}\circ d_{0}(D)\circ P_{k}\circ M_{\mathfrak{m}})(\langle D \rangle^{-\sigma_{1}-m_{c}+m_{\zeta}}f_{0})\] \[=K\ast((P_{k}\circ M_{\mathfrak{m}})(\langle D\rangle^{-\sigma_ {1}-m_{c}+m_{\zeta}}f_{0})),\]
for \(k\geqslant k_{0}\), with \(|K(\cdot)|\lesssim\langle\cdot\rangle^{-N}\), for any \(N\geqslant 0\), which shows that this term has the required form for the steps on page 36 in [15] to go through.
Following those steps, we therefore see that **III** is bounded in \(L^{2}\) provided that for \(j=1,2\) and \(f\in\mathrm{bmo}\) the measure
\[\mathrm{d}\mu_{k}(x,t)=\sum_{\ell=0}^{\infty}\Big{|}\Big{(}Q_{k+\ell}^{u_{j}} \circ T_{d_{j}}^{\varphi_{j}}\Big{)}\left(f\right)(x)\Big{|}^{2}\,\delta_{2^{- \ell}}(t)\mathrm{d}x\]
is Carleson with a decay in \(\ell\) in the Carleson norm. However, in Proposition 4.4 it was shown that the Carleson norm is bounded by a multiple of \(2^{-\varepsilon k}\|f\|_{\mathrm{bmo}}^{2}\), for some \(\varepsilon>0\). This decay in \(k\) is needed to be able to deal with the double sum in (62) in the various cases that are handled below.
This fact enables us to use the arguments in Section 8.1 on page 35 of [15], in accordance with case (_ii_) of Lemma 6.1 to conclude that
\[\|\langle D\rangle^{\sigma_{0}}\mathbf{III}\|_{L^{2}}\lesssim\|\langle D\rangle^{\sigma_{1}}f_{1}\|_{X^{p_{1}}}\prod_{j=2}^{N}\|f_{j}\|_{X^{p_{j}}}\lesssim\prod_{j=1}^{N}\|\langle D\rangle^{\sigma_{j}}f_{j}\|_{X^{p_{j}}},\]
and therefore
\[\|W(f_{1},\ldots,f_{N})\|_{L^{2}}\lesssim\prod_{j=1}^{N}\|f_{j}\|_{X^{p_{j}}}.\]
Next, we deal with the target space with norm \(\|\langle D\rangle^{\sigma_{0}}\cdot\|_{h^{1}}\), and therefore take \(f_{0}\) such that \(\langle D\rangle^{-\sigma_{0}}f_{0}\in\mathrm{bmo}\). Therefore if \(\langle D\rangle^{\sigma_{1}}f_{1}\in\mathrm{bmo}\) and \(f_{j}\in\mathrm{bmo}\) for \(j=2,\ldots,M\), then for any \(3\leqslant M\leqslant N\) it is not hard (mainly using Proposition 4.4) to see that the measure
\[\mathrm{d}\mu_{k}(x,t) :=\sum_{\ell=0}^{\infty}\left(Q_{\ell}M_{\mathfrak{m}}\tilde{d}_ {0}(D)\langle D\rangle^{-\sigma_{1}-m_{c}+m_{\zeta}}f_{0}(x)\right)\] \[\times\left[\left(Q_{k+\ell}^{u_{1}}\circ T_{d_{1}}^{\varphi_{1} }\right)\left(|D|^{\sigma_{1}}f_{1}\right)(x)\left(Q_{k+\ell}^{u_{1}}\circ T_ {d_{2}}^{\varphi_{2}}\right)(f_{2})\left(x\right)\prod_{j=3}^{M}\left(P_{k+ \ell}^{u_{j}}\circ T_{d_{j}}^{\varphi_{j}}\right)(f_{j})\left(x\right)\right] \mathrm{d}x\,\delta_{2^{-l}}(t)\]
is a Carleson measure with the Carleson norm bounded by a multiple of
\[2^{-\varepsilon k}\left\|\langle D\rangle^{-\sigma_{0}}f_{0}\right\|_{\mathrm{bmo }}\|\langle D\rangle^{\sigma_{1}}f_{1}\|_{\mathrm{bmo}}\prod_{j=2}^{M}\|f_{j} \|_{\mathrm{bmo}},\]
for some \(\varepsilon>0\). Moreover by estimate (25) we also have that
\[\sup_{\ell>b_{0}}\left\|Q_{\ell}M_{\mathfrak{m}}\tilde{d}_{0}(D)\langle D \rangle^{-\sigma_{1}-m_{c}+m_{\zeta}}f_{0}\right\|_{L^{\infty}}\lesssim\left\| \langle D\rangle^{-\sigma_{0}}f_{0}\right\|_{\mathrm{bmo}}\]
and
\[\sup_{\ell>b_{0}}\left\|\left(Q_{k+\ell}^{u_{j}}\circ T_{d_{j}}^{\varphi_{j}} \right)(f_{j})\right\|_{L^{\infty}}\lesssim\left\|f_{j}\right\|_{\mathrm{bmo}} \quad\text{ for }j=1,2\text{ when }p_{j}=\infty,\]
where the hidden constant in the above estimate is uniform in \(k\). These facts together with estimates (10), (11) and (12) enable us to run the arguments of Section 8.2 on page 40 of [15] to prove various boundedness results corresponding to the cases (_iii_) and (_iv_) of Lemma 6.1 and finally arrive at
\[\|\langle D\rangle^{\sigma_{0}}\mathbf{III}\|_{h^{1}}\lesssim\|\langle D\rangle^{\sigma_{1}}f_{1}\|_{X^{p_{1}}}\prod_{j=2}^{N}\|f_{j}\|_{X^{p_{j}}}\lesssim\prod_{j=1}^{N}\|\langle D\rangle^{\sigma_{j}}f_{j}\|_{X^{p_{j}}},\]
and hence
\[\|W(f_{1},\ldots,f_{N})\|_{h^{1}}\lesssim\prod_{j=1}^{N}\|f_{j}\|_{X^{p_{j}}}.\]
The last case to deal with is when \(f_{0}\) in the duality arguments above has the property that \(\langle D\rangle^{-\sigma_{0}}f_{0}\in h^{1}\). Here we observe that the measure
\[\mathrm{d}\mu_{k}(x,t)=\sum_{\ell=0}^{\infty}\left(Q_{k+\ell}^{u_{1}}\circ T_ {d_{1}}^{\varphi_{1}}\right)\left(|D|^{\sigma_{1}}f_{1}\right)(x)\left(Q_{k+ \ell}^{u_{1}}\circ T_{d_{2}}^{\varphi_{2}}\right)(f_{2})\left(x\right)\prod_{j =3}^{M}\left(P_{k+\ell}^{u_{j}}\circ T_{d_{j}}^{\varphi_{j}}\right)(f_{j}) \left(x\right)\mathrm{d}x\delta_{2^{-l}}(t)\]
is Carleson with Carleson norm bounded by a multiple of
\[2^{-\varepsilon k}\,\|\langle D\rangle^{\sigma_{1}}f_{1}\|_{\mathrm{bmo}}\prod_{ j=2}^{N}\|f_{j}\|_{\mathrm{bmo}},\]
for some \(\varepsilon>0\). Therefore (11) yields that
\[\|\langle D\rangle^{\sigma_{0}}\mathbf{III}\|_{\mathrm{bmo}}\lesssim\|\langle D\rangle^{\sigma_{1}}f_{1}\|_{X^{p_{1}}}\prod_{j=2}^{N}\|f_{j}\|_{X^{p_{j}}}\lesssim\prod_{j=1}^{N}\|\langle D\rangle^{\sigma_{j}}f_{j}\|_{X^{p_{j}}},\]
yielding
\[\|W(f_{1},\ldots,f_{N})\|_{\mathrm{bmo}}\lesssim\prod_{j=1}^{N}\|f_{j}\|_{X^{ p_{j}}}.\]
With all the end point estimates set, we can by interpolation finally deduce that
\[\|W(f_{1},\ldots,f_{N})\|_{X^{p_{0}}}\lesssim\prod_{j=1}^{N}\|f_{j}\|_{X^{p_{j}}},\quad p_{0},\ldots,p_{N}\in(0,\infty],\]
which means that
\[\|\mathbf{III}\|_{H^{\sigma_{0},p_{0}}}\lesssim\prod_{j=1}^{N}\|f_{j}\|_{H^{\sigma_{j},p_{j}}},\quad p_{0},\ldots,p_{N}\in(0,\infty).\]
Returning now to (61), we recall that \(T_{\zeta}^{(1)}\) is a sum of operators, of the type \(T_{\zeta_{0}}\), \(T_{\zeta_{1}}\) and \(T_{\zeta_{1,2}}\) and the bounds obtained above for \(\mathbf{I}\), \(\mathbf{II}\) and \(\mathbf{III}\) can therefore be used to show that
\[\|\langle D\rangle^{-m(p_{0},s)}T_{\zeta}^{(1)}(f_{1},\ldots,f_{N})\|_{H^{\sigma_{0},p_{0}}}\lesssim\prod_{j=1}^{N}\|f_{j}\|_{H^{\sigma_{j},p_{j}}}.\]
Lemma 8.2 then yields that
\[\|\langle D\rangle^{-m(p_{0},s)+\varkappa+m_{c}-m_{\zeta}}T_{\zeta}^{(r)}(f_{1},\ldots,f_{N})\|_{L^{p_{0}}}\lesssim\langle r\rangle^{(-m_{\zeta}+\varkappa^{\prime}+\sum_{j=1}^{N}\sigma_{j})/s}\prod_{j=1}^{N}\|f_{j}\|_{H^{\sigma_{j},p_{j}}},\]
where \(\varkappa^{\prime}:=\max(m(p_{0},s)-\varkappa-m_{c}+m_{\zeta},0).\) Thus we conclude that for the solution \(u\) in (61) one has
\[\|u(t,\cdot)\|_{H^{\sigma_{0},p_{0}}(\mathbb{R}^{n})}\lesssim\int_{0}^{t}\langle t-r\rangle^{-m(p_{0},s)/s}\langle r\rangle^{(-m_{\zeta}+\varkappa^{\prime}+\sum_{j=1}^{N}\sigma_{j})/s}\,\mathrm{d}r\,\prod_{j=1}^{N}\|f_{j}\|_{H^{\sigma_{j},p_{j}}}\]
from which one obtains the space-time estimate
\[\|u\|_{L^{q}([0,T])\,H^{\varkappa+m_{c}-m_{\zeta},p_{0}}(\mathbb{R}^{n})}\leqslant C_{T}\prod_{j=1}^{N}\|f_{j}\|_{H^{\sigma_{j},p_{j}}},\]
which is valid for any \(q\in[1,\infty]\), any \(T\in(0,\infty).\) Theorem 1.6 is thereby proven.
|
2309.05242 | A Flexible Architecture for Broadcast Broadband Convergence in Beyond 5G | There has been an exponential increase in the usage of multimedia services in
mobile networks in recent years. To address this accelerating data demand,
mobile networks are experiencing a subtle transformation in their architecture.
One of the changes in this direction is the support of Multicast/Broadcast
Service (MBS) in the Third Generation Partnership Project (3GPP) Fifth
Generation (5G) network. The MBS has been introduced to enhance resource
utilization and user experience in 3GPP 5G networks. However, there are certain
limitations in the 3GPP 5G MBS architecture, such as the selection of the
delivery method (unicast or broadcast) by the core network (may result in
sub-optimal radio resource utilization) and no provision for converging
non-3GPP broadcast technologies (like digital terrestrial television) with
cellular (3GPP 5G) broadband. In this context, we propose a new architecture
for the convergence of cellular broadband and non-3GPP broadcast networks. A
novelty of the architecture is that it treats signalling exchange with User
Equipment (UE) as data (service) which results in improved scalability of
mobile networks. The architecture supports enhanced flexibility in choosing a
delivery method (3GPP 5G unicast, 3GPP 5G broadcast, or non-3GPP broadcast) for
user data. We evaluate the performance of the proposed architecture using
process algebra-based simulations, demonstrating a significant reduction in the
number of signalling messages exchanged between the UE and the network for MBS
session establishment as compared to the 3GPP 5G network. | Rashmi Yadav, Rashmi Kamran, Pranav Jha, Abhay Karandikar | 2023-09-11T05:28:01Z | http://arxiv.org/abs/2309.05242v4 | # 6G Unleashed: Transforming Broadcast via service based architecture
###### Abstract
There has been an exponential increase in the usage of multimedia services in mobile networks in recent years. To address this accelerating data demand, mobile networks are experiencing a subtle transformation in their architecture. One of the changes in this direction is the support of Multicast/Broadcast Service (MBS) in the Third Generation Partnership Project (3GPP) Fifth Generation (5G) network. The MBS has been introduced to enhance resource utilization and user experience in 3GPP 5G networks. However, there are certain limitations in the 3GPP 5G MBS architecture, such as the selection of the delivery method (unicast or broadcast) by the core network (may result in sub-optimal radio resource utilization) and no provision for converging non-3GPP broadcast technologies (like digital terrestrial television) with cellular (3GPP 5G) broadband. In this context, we propose a new architecture for the convergence of cellular broadband and non-3GPP broadcast networks. A novelty of the architecture is that it treats signalling exchange with User Equipment (UE) as data (service) which results in improved scalability of mobile networks. The architecture supports enhanced flexibility in choosing a delivery method (3GPP 5G unicast, 3GPP 5G broadcast, or non-3GPP broadcast) for user data. We evaluate the performance of the proposed architecture using process algebra-based simulations, demonstrating a significant reduction in the number of signalling messages exchanged between the UE and the network for MBS session establishment as compared to the 3GPP 5G network.
Fifth Generation (5G) network, Beyond 5G (B5G), Multicast Broadcast Services (MBS), Digital Terrestrial Television (DTT), Broadcast Broadband Convergence.
## I Introduction
In recent years, there has been a remarkable rise in multimedia content utilization over mobile networks. As highlighted in the Ericsson mobility report [1], video content, especially in social media and video-on-demand services, comprises the largest and fastest-growing segment of mobile data traffic globally, which accounts for approximately 70% of traffic share in 2022. Highlights from the same report show an annual growth of about 30% by the end of 2028, further increasing the global mobile data traffic's video share to 80%. Moreover, Qualcomm's broadcast report [2] predicts an enormous rise in live streaming content on social media, with approximately 800 million users expected to participate in daily live streams. These statistics reflect the significant impact and importance of multimedia services in the present-day mobile network.
The deployment of Fifth Generation (5G) mobile networks brings the possibility to increase the usage of Multicast/Broadcast Service (MBS). MBS is a crucial use case to address the increasing data demands within the framework of 5G technology. Apropos to this, Release 17 of the Third Generation Partnership Project (3GPP) 5G standards has introduced the support for MBS to enhance the 3GPP 5G architecture. Nevertheless, there are some architectural limitations of the 3GPP 5G MBS support. These include the selection of delivery methods (unicast or broadcast) by the Core Network (CN) that might result in sub-optimal utilization of resources, limited handling for user mobility for MBS, and no provision for the convergence of Non-3GPP Broadcast Networks (N3BNs) within the 3GPP 5G (such as Digital Terrestrial Television (DTT)).
This section provides the literature survey related to MBS architecture and mechanisms. The work in [3] presents various architectural concepts and mechanisms to optimize network loading and traffic patterns for MBS delivery. The paper [4] presents a comprehensive review on the convergence of broadcast and broadband in the 5G network. The authors in [5] proposed an enhanced Next Generation Radio Access Network (NG-RAN) architecture with architectural and functional enhancements to provide the efficient delivery of terrestrial broadcast services. The work in [6] explores a mixed transmission mode that utilizes shared multicast, broadcast, and unicast resources over the same physical channel. In [7], authors review the upcoming 3GPP 5G standards, discuss limitations of 3GPP 5G MBS architecture and present state-of-the-art standardization initiatives towards integrating N3BNs with
the 5G. Furthermore, the latest release of the 3GPP standard (Release 18) does not include the support for integration of the N3BN and the 3GPP broadcast network.
To the best of our knowledge, the prior art lacks comprehensive architectural solutions for the convergence of cellular broadband (3GPP 5G) and N3BNs, and for scalability enhancements in the context of MBS delivery in mobile networks. We propose a flexible architecture for broadcast broadband convergence in which UE signalling is treated as data (a service). Therefore, we call the proposed architecture a Signalling Service-Based Architecture (SSBA) for broadcast broadband convergence. This work builds on the prior research in [8] and extends it to the convergence of broadcast and broadband, an aspect that was not addressed there. The SSBA presents a scalable network architecture, making it a promising solution for the Beyond 5G (B5G) landscape through enhanced resource utilization in a converged network. SSBA provides the flexibility to choose between broadcast and broadband delivery methods based on resource availability. An illustration of the convergence of an N3BN in the proposed SSBA is provided in Section V. In Section IV, we evaluate the performance of the proposed architecture using the Eclipse plug-in [9], a tool for modelling distributed systems with the help of Performance Evaluation Process Algebra (PEPA) [10], a modelling language.
The rest of the paper is organized as follows: Section II presents the architectural details of the proposed signalling service-based architecture, Section III presents the system model, Section IV presents the performance evaluation, and Section V illustrates the convergence of an N3BN in the proposed SSBA. We conclude in Section VI along with future directions.
## II Proposed Signalling Service-Based Architecture
In this section, we present an overview of the proposed SSBA, as shown in Fig. 1. Before delving into the details of the proposed SSBA, let us first understand the basic principles of an SSBA (a detailed explanation is available in [8]). In an SSBA, a Service Function (SF) handles signalling exchange with the UE through the user plane for network services (i.e. RRC/NAS service, broadcast service). The SF then communicates with the network control plane to establish the data path through the network data plane.
In the context of Fig.1, the preceding explanation can be linked as follows: the Broadcast Service Function (BSF) functions as an independent SF responsible for exchanging multicast/broadcast-related signalling messages with UEs. These include membership requests, content delivery requests, and other associated operations. The BSF interfaces with the control plane (Broadcast Controller (BCC)) to initiate data path establishment. Further, the BCC communicates with the RAN/CN controller to configure the data path for efficient content delivery. However, the decision for delivery methods (unicast or broadcast) is governed by BCC and switching between delivery methods is handled by the Broadcast Data Plane (BDP).
The information flow in the proposed SSBA is as follows: UE-1 is associated with the broadcast RAN (RAN-DP (BC)), whereas UE-2 is connected to the unicast (UC) RAN (RAN-DP (UC)). Notably, the BSF facilitates UE (both UE-1 and UE-2) interaction via the DP. Therefore, the signalling path for UC/BC transmission involves the following: UE-2/UE-1 - RAN-DP (UC) - User Plane Function (UPF) - BSF as shown in Fig. 1 with the blue dotted line. Furthermore, the RAN controller controls the RAN data planes (RAN-DP (UC) and RAN-DP (BC)), while the CN controller supervises the CN data planes (UPF and Multicast/Broadcast UPF (MB-UPF)).
### _MBS Session Establishment Call flow for the Proposed SSBA_
Fig. 2 illustrates the MBS session establishment call flow for the proposed SSBA. For comparison, the call flow for MBS session establishment in the 3GPP 5G architecture is available in [11] (Sections 7.1.1.2 and 7.2.1.3). This section provides a concise comparison between the call flows of the proposed and 3GPP 5G MBS session establishment, highlighting the significant reduction in the number of signalling messages achieved in the proposed SSBA.
A Temporary Mobile Group Identity (TMGI) allocation process (messages 1-2) is carried out between Application Function (AF) and BCC as shown in Fig. 2. The session create request (message 3) is sent to BCC by AF. In response, BCC forwards this request to the CN controller. CN controller confirms the resource availability from the RAN controller (messages 5 and 7) and the RAN controller sends the session command (message 6) to RAN-DP to set up the session on the RAN side. On receiving confirmation from the RAN controller, the CN controller sends the session command to MB-UPF (message 8) to set up the session on the core network side as well. Messages 13-15 are related to UE joining the MBS session. Here, UE initiates the MBS session join request to BSF, which is then forwarded to the RRC/NAS SF and eventually processed by this SF to send the Radio Resource Control (RRC) reconfiguration message in response, to UE via RAN-DP.
The controllers in the proposed SSBA do not call for response messages from the user plane, as they have global information about the user plane resources. Therefore, messages 6, 8 and 11 (as shown in Fig. 2) are sent as commands without requiring corresponding response messages. As a result, the modified call flow significantly reduces the total number of signalling messages. Similarly, message 14 (Fig. 2) also does not require a response message. Therefore, the procedure for the MBS session establishment call flow is appreciably simplified, enhancing the overall modularity of the network.

Fig. 1: Proposed signalling service-based mobile network architecture.
## III System Model
In this section, we describe the system modelling of the proposed SSBA using PEPA [10], a high-level language for modelling distributed systems. The PEPA modelling facilitates system performance evaluation using the Eclipse plug-in [9], a tool for performance analysis. In Table I, we present the modelling of the MBS session establishment call flow (Fig. 2).
To explain the behaviour of various Network Functions (NFs) involved in the MBS session establishment call flow, we model each NF as a PEPA component. This approach allows us to represent the tasks executed by each PEPA component/NF through different states. Each state is denoted by the NF's name followed by a number corresponding to its state (\(NF_{1}\)). For instance, the NF _UE_ possesses two states, \(Ue_{1}\) and \(Ue_{2}\), representing different states of the UE to perform various tasks. As shown in the table, similar representations of states are provided for each NF, such as RAN-DP, SF, RAN controller, CN controller, MB-UPF, BCC, BSF, BDP, and AF.
Furthermore, we model each task of the NF as an action type, indicated in lowercase letters. To provide further details, subscripts are added to the action types (\(actiontype_{detail}\)). For instance, the action type \(tmgi_{req}\) signifies the TMGI allocation request, where \(tmgi\) is the action type, and \(req\) corresponds to its specific detail as a \(request\). Each action type is also associated with a rate value, denoted as \(r\). These rates represent the expected duration of a particular action type in the PEPA component, and their values are taken from [12, 13, 14].
By using PEPA modelling, we can analyze the performance of the proposed SSBA for MBS session establishment call flow. To understand the system modelling, consider an example of SF as an NF to model it as a PEPA component as shown in Table I. This NF (SF) is represented with two states: \(Sf_{1}\) and \(Sf_{2}\), each performing specific tasks (as shown in Fig. 2). In the first state, \(Sf_{1}\), the action type \(setup_{req}\) signifies the "PDU session resource setup request" as a task performed during this state.
Moving to the second state, \(Sf_{2}\), two actions are being performed within this state. The first action, denoted as \(get_{sfp}\), represents the SF NF's attempt to access the SF processor (SFP) to process the received request. The second action, denoted as \(reconfig\), represents the specific action of RRC reconfiguration involving the PDU session modification command. For the processing of requests, each NF is associated with a corresponding processor. For this purpose, we define processors using a two-state model as defined in [15, 16]. However, the following are the associated processors considered in the example: UE processor (_UEP_), RAN-DP processor (_RANDPP_), _SFP_, RAN controller processor (_RANP_), CN controller processor (_CNP_), MB-UPF processor (_MB-UPFP_), BCC processor (_BCCP_), BSF processor (_BSFP_), BDP processor (_BDPP_) and AF processor (_AFP_). For instance, the AFP is defined with two states. The first state, \(Afp_{1}\), represents the action of getting access to the AF processor (\(get_{afp}\)), while the second state is dedicated to performing actions associated with the processor (\(tmgireq\) and \(sessionreq\)). Similarly, other processors (as shown in Table I) are defined with their corresponding NFs.
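As a rough illustration of this modelling style, the snippet below sketches how the SF component and its processor could be written down. It is a hand-made sketch in PEPA-like notation embedded in Python strings, with hypothetical rate symbols, and is not the exact model of Table I.

```python
# Illustrative rendering of the SF component described above.
# State names and action types follow the text; rate symbols are placeholders.
sf_component = {
    "Sf1": "(setup_req, r_setup).Sf2",                 # PDU session resource setup request
    "Sf2": "(get_sfp, r_get).(reconfig, r_rrc).Sf1",   # access the SF processor, then RRC reconfiguration
}

# The associated two-state processor, following the pattern described for AFP;
# the exact actions assigned to each processor state in Table I may differ.
sfp_processor = {
    "Sfp1": "(get_sfp, r_get).Sfp2",    # grant access to the SF processor
    "Sfp2": "(reconfig, r_rrc).Sfp1",   # perform the processing action
}
```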
The system equation (as shown in Table I) describes the interactions between the NFs in the proposed SSBA. These interactions are expressed through the PEPA cooperation operator (\(\bowtie\)) between the NF components, where \(N_{ue}\) represents the number of UEs. \(N_{nf}\) denotes the number of specific NFs, such as \(N_{randp}\), \(N_{sf}\), \(N_{ran}\), \(N_{cn}\), \(N_{mbupf}\), \(N_{bcc}\), \(N_{bsf}\), \(N_{bdp}\), and \(N_{af}\), representing the number of RAN-DP, SF, RAN controller, CN controller, MB-UPF, BCC, BSF, BDP, and AF NFs, respectively. Furthermore, each processor can handle a set of concurrent threads, denoted as \(N_{t}\), while the number of processors allocated to each NF is represented as \(N_{nfp}\). Consequently, \(N\) = \(N_{nf}\cdot N_{nfp}\cdot N_{t}\) gives the total number of threads for a specific NF, and \(N_{p}\) = \(N_{nf}\cdot N_{nfp}\) denotes the total number of processors allocated to a particular NF type.
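As a small worked example of these definitions, the snippet below computes \(N\) and \(N_{p}\) for one NF type; the numeric values are placeholders chosen for illustration, not the values used in the evaluation.

```python
def total_threads(n_nf: int, n_nfp: int, n_t: int) -> int:
    """N = N_nf * N_nfp * N_t: total number of threads for one NF type."""
    return n_nf * n_nfp * n_t

def total_processors(n_nf: int, n_nfp: int) -> int:
    """N_p = N_nf * N_nfp: total number of processors for one NF type."""
    return n_nf * n_nfp

# Illustrative values only: 3 instances of an NF (as in configuration b2),
# each with 1 processor that can run 4 concurrent threads.
n_nf, n_nfp, n_t = 3, 1, 4
print(total_threads(n_nf, n_nfp, n_t))   # 12 threads in total for this NF type
print(total_processors(n_nf, n_nfp))     # 3 processors in total for this NF type
```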
Following this modelling procedure, simulations were performed for both the 3GPP 5G and the proposed SSBA's MBS session establishment call flows, enabling a comparative analysis of their performance in terms of the number of signalling messages, modularity, and scalability.
## IV Performance Evaluation
In this section, we present the performance evaluation of both the proposed SSBA and the 3GPP 5G architecture using the Eclipse plug-in tool [9]. The evaluation is based on several parameters, such as the number of MBS sessions established per unit time, average response time (ART), and processor utilization. These parameters are significant in evaluating the network's scalability, one of the key aspects we consider. The MBS session establishment rate measures the rate at which MBS sessions are established with respect to specific actions, such as \(reconfig\) representing RRC reconfiguration (PDU session modification command). This specific action represents the completion of the MBS session establishment call flow. ART evaluates the average waiting time for UE's MBS session establishment process. Processor utilization evaluates the NF's processor capacity utilization during the entire process.
Since the proposed SSBA, with its separate controllers, user plane functions, broadcast service functions and other NFs, can be viewed as a distributed system similar to the 3GPP 5G architecture, we use the scalability parameter of a distributed system to evaluate and compare their scalability. The scalability (_S_) of a distributed system, based on productivity as defined in [17], is given by the ratio of the productivity of the system at two different configurations with different scales \(b_{1}\) and \(b_{2}\)[13]. In this context, the configurations (\(b_{1}\) and \(b_{2}\)) refer to the different numbers of NFs used in the network. For example, \(b_{1}\) = (1,1,1,1,1,1,1,1,1) and \(b_{2}\) = (3,3,3,3,3,3,3,3,3). Configuration \(b_{1}\) represents the basic configuration with a single NF assigned to each of RAN-DP, SF, RAN controller, CN controller, MB-UPF, BCC, BSF, BDP and AF in the proposed SSBA, i.e., (\(N_{randp}\), \(N_{sf}\), \(N_{ran}\), \(N_{cn}\), \(N_{mbupf}\), \(N_{bcc}\), \(N_{bsf}\), \(N_{bdp}\), \(N_{af}\)) = (1,1,1,1,1,1,1,1,1). On the other hand, configuration \(b_{2}\) represents a scaled system with three NFs of each type, i.e., (\(N_{randp}\), \(N_{sf}\), \(N_{ran}\), \(N_{cn}\), \(N_{mbupf}\), \(N_{bcc}\), \(N_{bsf}\), \(N_{bdp}\), \(N_{af}\)) = (3,3,3,3,3,3,3,3,3). Please note that we provide an equal number of total processors in each case (3GPP 5G and the proposed SSBA). The mathematical expression for scalability can be referred from [18]. By evaluating scalability, we can
compare the performance of the proposed SSBA and the 3GPP 5G architecture under different scaling configurations, demonstrating how efficiently the proposed SSBA handles an increased number of users.
Fig. 3 and 4 illustrate the number of MBS sessions established per unit time for both architectures under two different configurations, denoted as \(b_{1}\) and \(b_{2}\). It is observed that the proposed SSBA achieves a higher saturation point than the 3GPP 5G architecture. In the basic configuration (\(b_{1}\)), the 3GPP 5G architecture saturates at 14,000 users, while the proposed SSBA saturates at 30,000 users. Similarly, in the scaled configuration (\(b_{2}\)), the 3GPP 5G architecture saturates at 42,000 users, while the proposed SSBA saturates at 90,000 users. The saturation point indicates the maximum number of UEs served by the network before it becomes overloaded.
The ART and processor utilization results show a similar trend: the NFs in the 3GPP 5G architecture saturate earlier than in the proposed SSBA due to the higher number of signalling messages in the 3GPP 5G architecture.
Based on the obtained results for the MBS session rate, ART, and processor utilization, the scalability is evaluated using the equation provided in [18]. The scalability results are plotted in Fig. 7 for configurations \(b_{1}\) and \(b_{2}\). The proposed SSBA outperforms the 3GPP 5G architecture, as it can serve more concurrent users with the same scaling configuration. It is evident from the results that the proposed SSBA is more scalable and performs better than the 3GPP 5G architecture.
## V Convergence of 3GPP 5G and N3BN in SSBA
The proposed SSBA can also be easily extended to converge 3GPP 5G cellular broadband and an N3BN (e.g., DTT). The simplified integration of the N3BN (DTT) with 3GPP 5G is shown in Fig. 8. The MBS-related signalling is taken care of by the BSF, while the data path in the case of DTT content delivery is as follows: AF - BDP - DTT CN - DTT RAN - UE-3. The BSF serves as the integration point for the signalling interplay between 3GPP 5G and the DTT broadcasting network, while the BDP is the point of data path integration.
## VI Conclusion
In this paper, we have proposed a signalling service-based architecture for MBS that offers enhanced flexibility to select a delivery method based on resource availability, and results in improved network scalability by handling UE signalling as a service. Besides, it also facilitates the convergence of 3GPP 5G cellular broadband and N3BNs in the landscape of B5G. Simulations and performance evaluations demonstrate that the proposed SSBA outperforms the 3GPP 5G architecture, exhibiting enhanced modularity, scalability, and a reduced number of signalling messages in the MBS session establishment procedure. In the future, we would like to evaluate the converged SSBA mobile network.
|
2303.18248 | Towards Flexible Multi-modal Document Models | Creative workflows for generating graphical documents involve complex
inter-related tasks, such as aligning elements, choosing appropriate fonts, or
employing aesthetically harmonious colors. In this work, we attempt at building
a holistic model that can jointly solve many different design tasks. Our model,
which we denote by FlexDM, treats vector graphic documents as a set of
multi-modal elements, and learns to predict masked fields such as element type,
position, styling attributes, image, or text, using a unified architecture.
Through the use of explicit multi-task learning and in-domain pre-training, our
model can better capture the multi-modal relationships among the different
document fields. Experimental results corroborate that our single FlexDM is
able to successfully solve a multitude of different design tasks, while
achieving performance that is competitive with task-specific and costly
baselines. | Naoto Inoue, Kotaro Kikuchi, Edgar Simo-Serra, Mayu Otani, Kota Yamaguchi | 2023-03-31T17:59:56Z | http://arxiv.org/abs/2303.18248v1 | # Towards Flexible Multi-modal Document Models
###### Abstract
Creative workflows for generating graphical documents involve complex inter-related tasks, such as aligning elements, choosing appropriate fonts, or employing aesthetically harmonious colors. In this work, we attempt at building a holistic model that can jointly solve many different design tasks. Our model, which we denote by FlexDM, treats vector graphic documents as a set of multi-modal elements, and learns to predict masked fields such as element type, position, styling attributes, image, or text, using a unified architecture. Through the use of explicit multi-task learning and in-domain pre-training, our model can better capture the multi-modal relationships among the different document fields. Experimental results corroborate that our single FlexDM is able to successfully solve a multitude of different design tasks, while achieving performance that is competitive with task-specific and costly baselines. 1
Footnote 1: Please find the code and models at: [https://cyberagentaillab.github.io/flex-dm](https://cyberagentaillab.github.io/flex-dm).
## 1 Introduction
Vector graphic documents are composed of diverse multi-modal elements such as text or images and serve as the dominant medium for visual communication today. The graphical documents are created through many different design tasks, _e.g_., filling in a background image, changing font and color, adding a decoration, or aligning texts. While skilled designers perform tasks based on their design knowledge and expertise, novice designers often struggle to make decisions to create an effective visual presentation. To assist such novice designers, interactive frameworks based on models that learn design knowledge from completed designs have been proposed [12, 38]. Our present work proposes models that can be used in such systems, with a particular focus on developing holistic models that can flexibly switch between design tasks.
Design tasks are characterized by 1) the variety of possible actions and 2) the complex interaction between multi-modal elements. As discussed above, a designer can make almost any edit to the appearance of a vector graphic document, ranging from basic layout to nuanced font styling. While there have been several studies in solving specific tasks of a single modality, such as layout generation [3, 13, 26, 30, 23], font recommendation [56], or colorization [54, 22, 40], in realistic design applications, we believe it is essential to build a _flexible_ model that can consider multiple design tasks in a principled manner to make automated decisions on creative workflow.
In this work, we refer to a certain attribute of an element as a _field_ and formulate the various design tasks as a unified _masked field prediction_, which is inspired by the recent masked autoencoders [9, 15] and multi-task models [19, 36]. The key idea is to utilize masking patterns to switch among different design tasks within a single model; _e.g_., element filling can be formulated as predicting all the fields of the newly added element. Our flexible document model, denoted by _FlexDM_, consists of an encoder-decoder architecture with a multi-modal head dedicated to handling different fields within a visual element. After pre-training with random masking strategy, we train FlexDM by explicit multi-task learning where we randomly sample tasks in the form of masking patterns corresponding to the target design task. We illustrate in Figs. 1 and 2 an overview of FlexDM, with emphasis on the correspondence between design tasks and masking patterns.
Through our carefully designed experiments, we show that our proposed FlexDM performs favorably against baselines in five design tasks using the Rico [7] and Crello [52] datasets. We also study how different modeling approaches affect the final task performance in the ablation study. Finally, we apply our framework to several previously studied design tasks with minimal modifications and show that the performance matches or even surpasses the current task-specific approaches.
Our contributions can be summarized in the following.
* We formulate multiple design tasks for vector graphic documents by masked multi-modal field prediction in a
set of visual elements.
* We build a flexible model to solve various design tasks jointly in a single Transformer-based model via multi-task learning.
* We empirically demonstrate that our model constitutes a strong baseline for various design tasks.
## 2 Related Work
### Vector Graphic Generation
There has been a growing interest in vector graphics to realize resolution/artifact-free rendering that is easy to interpret and edit, such as Scalable Vector Graphics (SVG) [8]. Modeling documents in a vector format is much more complex than the stroke or path level vector graphics [5, 14, 35] since each element contains multi-modal features such as text and image. CanvasVAE [52] tackles the document-level unconditional generation of vector graphics, but is not a multi-task model and cannot solve specific design tasks such as element filling. Doc2PPT [11] generates slides given a longer and more detailed multi-modal document, but it is a summarization task and cannot infer what is missing in the incomplete document.
Obtaining transferable representation for downstream tasks learned from multi-modal large-scale data is getting popular. Domains closest to our setting are document understanding [32, 49, 50, 51] and UI understanding [4, 16], where the data consist of elements with multi-modal attributes. Despite the generalizable representation, all the methods fine-tune different parameters for each downstream task (mainly in classification). In contrast, we aim to solve many essential tasks for design creation in a single model.
### Multi-task Learning
Multi-task learning (MTL) [2, 6, 10] aims at solving different tasks at the same time while sharing information and computation among them, which is crucial for deployment. MTL methods achieve a good tradeoff between performance and computational cost by (i) multiple lightweight heads at the top of shared backbone [25, 55] and (ii) efficient use of task-specific parameters [43, 44, 33]. On the contrary, our model obtains the task information from the masking patterns of the input fields and we empirically show that extra task-specific parameters are not necessary.
Training a single model that generalizes to many different tasks has been a long-standing goal. 12-in-1 [37] and UniT [17] handle multiple tasks in vision and language domain with small task-specific parameters. In a more unified manner, Perceiver [20] and Perceiver IO [19] treat different modalities as the same data format, OFA [48] and Unified-IO [36] consider similar attempts in the sequence-to-sequence framework, resulting in a single model or architecture with no task-specific tuning. We are highly inspired by these works and explore how to unify the design tasks in vector graphic document domain.
### Computational Assistance for Graphic Design
There is a long history of automatic graphic design [1, 34, 53]. Recent approaches rely on a learning-based formulation, where the primary focus is on predicting layouts given label sets [21, 30] or in an unconditional manner [13, 3], and avoid the manual design of the energy functions seen in
Figure 1: Examples of the design tasks that can be solved by our proposed FlexDM model, which is designed to process a vector graphic document consisting of an arbitrary number of elements (_e.g_., text). Each element is composed of multi-modal fields indicating its attribute properties (_e.g_., text content, position, font color, etc.).
the earlier work [39]. Some works additionally take positional/relational constraints [24, 27, 31] or textual descriptions [57] for finer design control, but are not applicable in a more complex scenario. In contrast, our multi-task approach solves many conditional tasks thanks to the flexible multi-modal fields in both inputs and targets.
Considering multi-modal features is essential to go beyond layout generation for intelligent graphic design assistance. Wang _et al_. [47] retrieve images from layout information and keywords for each element to obtain a visually pleasing design by reinforcement learning. Zhao _et al_. [56] predict font properties of a text on a webpage over a background image considering metadata. Li _et al_. [29] predict the position and size of a single text box over a background image considering saliency. We demonstrate that we can apply our flexible model to solve these tasks with almost no modification, and our model performs favorably against the task-specific, well-tuned approaches.
## 3 Approach
We first describe the formal definition of the vector graphic document and notations in Sec. 3.1. We then introduce the idea of masked field prediction and a model for it in Sec. 3.2 and Sec. 3.3. Finally, we describe how we train FlexDM in Sec. 3.4.
### Preliminary
**Document Structure**: In this work, a vector graphic document \(X\) consists of a set of elements \(X=(X_{1},X_{2},\dots,X_{S})\), where \(S\) is the number of elements in \(X\). Each element \(X_{i}\) consists of a set of multi-modal fields and is denoted by \(X_{i}=\{x_{i}^{k}\mid k\in\mathcal{E}\}\), where \(\mathcal{E}\) indicates the indices for all the attributes. Each field \(x_{i}^{k}\) can be either a categorical or a numerical variable such as element type, position, text content, or image embedding. For ease of explanation, we illustrate \(X\) by a 2D array as shown in the top of Fig. 2. Note that the order in the array does not matter because \(X\) is a set of sets. Since processing high-dimensional data such as raw images and texts during optimization is computationally intensive, we extract a low-dimensional numerical vector from such data for \(x_{i}^{k}\) using pre-trained models.
**Special Tokens**: In a similar spirit to the masked language model [9], we use a few special tokens to represent \(x_{i}^{k}\).
[NULL]: appears when \(x_{i}^{k}\) is inevitably missing (_e.g_., font type for an image element), or when padding variable-length sequences within a mini-batch during training.
[MASK]: appears when \(x_{i}^{k}\) is masked for prediction.
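To make the data structure and the special tokens concrete, the following is a minimal sketch of how a partially masked document could be represented in Python; the attribute names and values are illustrative placeholders and do not correspond to an actual Crello or Rico record or to the released implementation.

```python
MASK = "[MASK]"   # field to be predicted
NULL = "[NULL]"   # field that is inherently missing (e.g., font of an image element)

# A document X is a set of elements; each element is a set of multi-modal fields.
document = [
    {   # text element with its position and size masked (e.g., for POS prediction)
        "type": "text", "position": MASK, "size": MASK,
        "font": "Montserrat", "color": (20, 20, 20),
        "text_feature": [0.12, -0.03, 0.44],   # low-dim embedding of the raw text
        "image_feature": NULL,
    },
    {   # image element; text/font-related fields are inherently [NULL]
        "type": "image", "position": (0.1, 0.2), "size": (0.8, 0.5),
        "font": NULL, "color": NULL,
        "text_feature": NULL,
        "image_feature": [0.31, 0.08, -0.22],  # low-dim embedding of the raw image
    },
]
```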
### Masked Field Prediction
Given an incomplete document \(X\) containing [MASK] as context, our goal is to predict values for all the fields filled with [MASK] and generate a complete document \(\hat{X}\). We refer to this problem as _masked field prediction_, where a model has to predict the masked field considering the different multi-modal relations between the fields. While the masking approach is similar to the masked language model [9], there is a key distinction in that we process an order-less set of multi-modal items (_i.e_., the document \(X\)). For
Figure 3: The architecture of FlexDM. E, T, and D are short for Encoder, Transformer blocks, and Decoder, respectively.
Figure 2: **Top**: example of a vector graphic document consisting of five elements. The array is used to illustrate the data structure of the document. Each column corresponds to a single visual element. Each row corresponds to an attribute or a group of attributes consisting the element. **Bottom**: Correspondence between design tasks and masking patterns for our masked field prediction.
this reason, we design our architecture to 1) efficiently capture inter-field relationships of vector graphic attributes, and 2) ensure that the model works without positional encodings commonly used to model an ordered sequence.
### FlexDM Architecture
As shown in Fig. 3, our architecture consists of three modules; encoder, Transformer blocks, and decoder. Given a document, we first project a set of partially masked fields (_e.g_., position or font) into embeddings using the encoder, and then feed the output to the intermediate Transformer blocks. The final decoder takes the transformed embeddings and projects them back to the original fields space. The Transformer blocks only process \(S\) embeddings, which is efficient compared to architecture processing \(S\times N\) fields with off-the-shelf Transformer [46] directly, when there are \(N\) attributes. In the following, let us denote all model parameters by \(\theta\).
**Encoder**: The encoder takes a document input \(X\) and embeds it into \(h^{\text{enc}}=\{h^{\text{enc}}_{1},h^{\text{enc}}_{2},\dots,h^{\text{enc}}_{S}\}\) with element-wise operations. The encoder first maps each field \(x^{k}_{i}\) to a fixed dimensional vector with \(f^{\text{enc},k}\), and sums up all the fields in the element to produce a latent vector for the \(i\)-th element with:
\[h^{\text{enc}}_{i}=\sum_{k\in\mathcal{E}}f^{\text{enc},k}(x^{k}_{i};\theta), \tag{1}\]
where \(f^{\text{enc},k}\) is an embedding function that retrieves learnable dense embeddings for each category id if \(x^{k}_{i}\) is a categorical variable, or a simple linear projection layer if \(x^{k}_{i}\) is a numerical variable. We treat the special tokens (_i.e_., [NULL] and [MASK]) in the same manner as the categorical variables.
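A minimal PyTorch-style sketch of the element-wise encoder in Eq. (1) is given below, with only one categorical and one numerical attribute shown; the dimensions, attribute names, and vocabulary sizes are placeholders, and the released implementation may organize the embedding functions differently.

```python
import torch
import torch.nn as nn

class ElementEncoder(nn.Module):
    """Embeds each field of an element and sums them, as in Eq. (1)."""

    def __init__(self, d_model=256, num_font_ids=260, image_feat_dim=768):
        super().__init__()
        # categorical attribute -> learnable embedding table
        # (two extra ids are reserved here for the [MASK] and [NULL] special tokens)
        self.font_emb = nn.Embedding(num_font_ids + 2, d_model)
        # numerical attribute -> simple linear projection
        self.image_proj = nn.Linear(image_feat_dim, d_model)

    def forward(self, font_id, image_feat):
        # font_id: (S,) long tensor, image_feat: (S, image_feat_dim) float tensor
        h = self.font_emb(font_id) + self.image_proj(image_feat)
        return h  # (S, d_model): one latent vector per element
```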
**Transformer Blocks**: Transformer blocks take \(h^{\text{enc}}\) as input and transform it to \(h^{\text{dec}}=\{h^{\text{dec}}_{1},h^{\text{dec}}_{2},\dots,h^{\text{dec}}_{ S}\}\). We stack these intermediate blocks to process complex inter-element relations. Our model can stack any off-the-shelf Transformer layer to build up the blocks \(f^{\text{trans}}\):
\[h^{\text{dec}}=f^{\text{trans}}(h^{\text{enc}};\theta) \tag{2}\]
**Decoder**: The final decoder takes \(h^{\text{dec}}\) and decodes them back into a document \(\hat{X}=(\hat{X}_{1},\hat{X}_{2},\dots,\hat{X}_{S})\), where \(\hat{X}_{i}=\{\hat{x}^{k}_{i}\mid k\in\mathcal{E}\}\). We compute each \(\hat{x}^{k}_{i}\) by a linear layer \(f^{\text{dec},k}\) for both categorical and numerical variables:
\[\hat{x}^{k}_{i}=f^{\text{dec},k}(h^{\text{dec}}_{i};\theta). \tag{3}\]
**Loss**: We train our model using reconstruction losses. Let us denote by \(X^{*}\) the ground truth of the incomplete document \(X\), and also denote by \(M\) a set of tuples indicating the indices for [MASK] tokens in \(X\). We define the loss function by:
\[\mathcal{L}=\sum_{(i,k)\in M}l^{k}(\hat{x}^{k}_{i},x^{*}{}^{k}_{i}), \tag{4}\]
where \(l^{k}\) is the loss function for the \(k\)-th attribute. For each \(l^{k}\), we use softmax cross-entropy loss for categorical variables and mean squared error for numerical variables.
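The loss in Eq. (4) can be sketched as follows in simplified PyTorch style, where the set \(M\) of masked (element, attribute) indices is assumed to be given and attributes are distinguished only by their tensor dtype; this is an illustration, not the released implementation.

```python
import torch
import torch.nn.functional as F

def masked_field_loss(preds, targets, masked_indices):
    """Sum of per-field losses over the masked fields only (Eq. (4)).

    preds / targets: dict mapping attribute name -> tensor indexed by element.
    masked_indices: list of (element_index, attribute_name) tuples, i.e. the set M.
    """
    loss = torch.tensor(0.0)
    for i, attr in masked_indices:
        if targets[attr].dtype == torch.long:   # categorical attribute: cross-entropy
            loss = loss + F.cross_entropy(preds[attr][i].unsqueeze(0),
                                          targets[attr][i].unsqueeze(0))
        else:                                   # numerical attribute: mean squared error
            loss = loss + F.mse_loss(preds[attr][i], targets[attr][i])
    return loss
```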
### FlexDM Training
Masked field prediction allows us to represent diverse design tasks having various input/output formats just by altering the masking pattern. The pattern can be both deterministic or stochastic. The bottom of Fig. 2 illustrates example tasks and the corresponding masking patterns. Although we can formulate arbitrary tasks with masked field prediction, we consider several subsets of representative design tasks for our evaluation and analyses in Sec. 4.
We describe typical masking patterns in the following. Note that fields already filled with [NULL] are never replaced, taking priority over the masking operations. _Element masking_ randomly selects elements and masks all the fields within each selected element; _i.e_., we can formulate the element filling task by single element masking. _Attribute masking_ randomly selects attributes and masks the fields across all the elements; _e.g_., masking position and size of all the elements becomes layout prediction, and masking fonts becomes font prediction. The _random masking_ strategy masks fields with some probability without considering the data structure, which is similar to BERT [9].
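The three masking strategies can be summarized by the toy routines below, which return the set \(M\) of (element, attribute) indices to be replaced by [MASK]; skipping fields that already hold [NULL] is omitted for brevity, and the function names are ours rather than from the released code.

```python
import random

def element_masking(num_elements, attributes, k=1):
    """Mask all attributes of k randomly chosen elements (e.g., element filling)."""
    chosen = random.sample(range(num_elements), k)
    return [(i, a) for i in chosen for a in attributes]

def attribute_masking(num_elements, target_attrs):
    """Mask the chosen attributes across all elements (e.g., POS or font prediction)."""
    return [(i, a) for i in range(num_elements) for a in target_attrs]

def random_masking(num_elements, attributes, p=0.15):
    """BERT-style masking: each field is masked independently with probability p."""
    return [(i, a) for i in range(num_elements) for a in attributes
            if random.random() < p]
```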
**Pre-training**: To learn the initial model, we employ a pre-training by ordinary random masking similar to the prevailing pre-training strategy of BERT [9]. One distinction is that our pre-training happens in the same, in-domain dataset, unlike the common setup where a model is pre-trained on a larger dataset in a different domain and then fine-tuned on a target task in a target dataset. We show in Sec. 4 that this in-domain pre-training moderately improves the final task performance.
**Explicit Multi-task Learning**: The random masking pre-training above is a solid baseline for any task. Radford _et al_. [42] hypothesize that this implicit multi-task training leads to the astonishingly strong zero-shot performance of large language models. However, the random masking strategy produces any specific task only with an extraordinarily low probability as the number of attributes and elements increases. Instead, we employ an _explicit_ masking strategy to maximize the performance on all the target tasks. During training, we randomly sample a task from the target tasks, sample a complete document \(X^{*}\), and make the triplet (\(X\), \(X^{*},M\)) by using the masking pattern associated with the task. We repeat this procedure to build each mini-batch when training FlexDM.
## 4 Experiments
### Dataset
We mainly use two datasets containing vector graphic documents, Rico [7] and Crello [52], to evaluate FlexDM. We basically follow the setting used in [52]. Due to memory limitations, we discard documents having more than fifty elements. Position, size, and color information are discretized in order to enhance the implicit alignment of multiple elements. We describe the overview of each dataset.
**Rico [7]:** The dataset collects UI designs from mobile apps. We follow previous works [27, 30] and exclude elements whose labels are not in the most frequent 13 labels. We divide the dataset into 45,012 / 5,565 / 5,674 examples for train, validation, and test splits.
**Crello [52]:** The dataset provides design templates from an online design service. Crello contains various design formats such as social media posts, banner ads, blog headers, or printed posters. We divide the dataset into 18,738 / 2,313 / 2,271 examples for train, validation, and test splits. Please refer to the original paper [52] for the definition of each attribute. For image and text features, we extract 768-dimensional features using CLIP [41]. We additionally extract categorical font information (called Font). We group the attributes into several groups based on their properties. **TYPE** denotes the Type attribute. **POS** denotes the Position and Size attributes. **IMG** denotes the Image attribute. **TXT** denotes the Text attribute. **ATTR** denotes the attributes not listed above; these attributes have a large impact on fine-grained appearance.
### Tasks
We carefully select tasks to evaluate how our model performs in various design tasks. We select evaluation tasks such that (i) they are practical, (ii) they have various combinations of input/output modalities, and (iii) the masking ratio is modest. We impose the masking ratio requirement because the extreme masking ratio makes the task too difficult or trivial to solve and makes the baseline comparison impossible.
**Element Filling (ELEM):** This task is to predict a new element that can enhance the document. We mask all the attributes of a single element in a complete document during training and evaluation.
**Attribute Prediction:** This task is to predict missing attributes at once in the document, which is very challenging. We apply attribute masking on a complete document to make the masked inputs during training and evaluation. We select an attribute group discussed in Sec. 4.1 and apply the attribute masking for all the attributes in the group. We consider each group-level prediction task as an individual task. Note that we do not consider TYPE prediction since it is too trivial and unrealistic. Therefore, we have two (POS and ATTR) and four (POS, ATTR, IMG, and TXT) attribute prediction tasks for Rico and Crello, respectively.
### Evaluation Metrics
For each task, we quantitatively evaluate the reconstruction performance. The score \(\mathcal{S}\) for each document is computed by:
\[\mathcal{S}=\frac{1}{|M|}\sum_{(i,k)\in M}s^{k}(\hat{x}_{i}^{k},{x^{*}}_{i}^{ k}), \tag{5}\]
where \(s^{k}\in[0,1]\) is a scoring function for \(k\)-th attribute. If the attribute is categorical, \(s^{k}\) is an indicator function that takes 1 if \(\hat{x}_{i}^{k}\) and \({x^{*}}_{i}^{k}\) are identical, otherwise 0. For image and text features that are the only numerical attributes in our experiments, we use cosine similarity in \([0,1]\) scale.
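A sketch of the scoring rule in Eq. (5) is shown below; mapping cosine similarity into \([0,1]\) via \((1+\cos)/2\) is an assumption made here for illustration, as the text only states that cosine similarity is used on a \([0,1]\) scale.

```python
import numpy as np

def field_score(pred, target, categorical: bool) -> float:
    """Per-field score s^k in Eq. (5)."""
    if categorical:
        return float(pred == target)              # exact-match indicator
    pred, target = np.asarray(pred, float), np.asarray(target, float)
    cos = pred @ target / (np.linalg.norm(pred) * np.linalg.norm(target))
    return (1.0 + cos) / 2.0                      # assumed rescaling to [0, 1]

def document_score(per_field_scores) -> float:
    """Average over the masked fields of one document (the outer sum in Eq. (5))."""
    return sum(per_field_scores) / len(per_field_scores)
```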
### Training Details
We use 256-dimensional latent representations within the encoder, Transformer blocks, and decoder. For the Transformer blocks we use the one from DeepSVG [5]. We apply a dropout probability of 0.1 to all the dropout layers. We train the model with a batch size of 256 sequences for 500 epochs in all the experiments. We use Adam with learning rate of 1e-4, \(\beta_{1}=0.9\), \(\beta_{2}=0.99\), and L2 weight decay of 1e-2. In experiments on Rico, we make FlexDM take positional embedding as the additional input, since otherwise the model is unable to distinguish elements having a completely similar set of attributes, which often occurs in POS prediction.
### Quantitative Evaluation
We test three models based on our proposed framework to clarify the contribution of both explicit multi-task learning and pre-training.
**Ours-IMP**: As in the standard masked language modeling such as BERT [9], we randomly mask 15% of the fields during training. Since this randomized training is called implicit multi-task learning [42], we call it Ours-IMP.
**Ours-EXP**: All the tasks are explicitly and jointly trained in a single model by sampling the masking patterns corresponding to each task. For simplicity, \(T\) tasks introduced in Sec. 4.2 are uniformly sampled in a mini-batch.
**Ours-EXP-FT**: This is our entire model. We initialize it with the weights of the model trained with the implicit random-masking strategy (Ours-IMP) and fine-tune it. The rest of the training is the same as Ours-EXP.
We compare these models with the following baselines, some of which are adapted from existing task-specific models to our multi-task, multi-attribute, and arbitrary masking setting with minimal modification.
**Expert**: We train the network individually for each task. Note that the number of the parameters used in this variant is \(T\) times larger than our models.
**Most-frequent**: We calculate the statistics of the training dataset. For a categorical attribute, we count the occurrences and pick the most frequent category. For a numerical attribute, we compute the average because the numerical attributes that we use are only image and text features.
**BERT [9]**: We convert all the fields into a single sequence and process them with Transformer blocks. This evaluates the effect of element-wise embedding discussed in Sec. 3.3.
**BART [28]**: BART employs an encoder-decoder-based sequence-to-sequence model for pre-training text generation models by masked language modeling. We replace our Transformer blocks with the blocks from BART.
**CVAE [21, 27]**: Recent methods for conditional layout generation such as LayoutVAE [21] and NDN [27] employ Conditional VAE [45] in an auto-regressive manner. We replace our Transformer block and decoder parts with CVAE variants used in [21, 27] and predict the fields in an element-by-element manner. Note that the full version of NDN contains relation prediction and layout refinement modules in addition to CVAE modules. We omit the full NDN pipeline evaluation due to their specific approach.
**CanvasVAE [52]**: CanvasVAE is for an unconditional generation. Although direct comparison is impossible, we adapt CanvasVAE to our setting, similar to other baselines.
Table 1 summarizes the performance of all the models. Our full model (Ours-EXP-FT) is almost comparable to Expert model while being much more efficient in the number of parameters. Ours-IMP exhibits moderate performance, resulting in a better initial weight for fine-tuning in Ours-EXP-FT. We can see that most of the compared baselines perform clearly worse compared to Ours-EXP. The result suggests that applying existing Transformer models for sequence modeling or conditional layout generation models is not enough in our challenging setting. POS-prediction in Rico is the exceptional case, where most of the methods fail because of the larger number of elements compared to the benchmark setup in the literature [24] (nine at maximum).
### Qualitative Evaluation
We show the prediction quality of our full FlexDM (Ours-EXP-FT) for Rico dataset in the element-filling task in Fig. 4. For Rico, we show a color map indicating the position and type information. In Fig. 5, we show the prediction of our full FlexDM (Ours-EXP-FT) on all the target design tasks. For visualizing predicted low-dimensional image and text features, we conduct a nearest neighbor search to retrieve actual images and texts using the assets in the test subset, following CanvasVAE [52].
### Ablation Study
In this section, we perform several ablation experiments in the Crello dataset, as shown in Tab. 2. We demonstrate that our design choices non-trivially affect the final performance of FlexDM.
**Task-specific Embedding**: The previous work [17] on unifying multiple tasks in a single Transformer uses small task-specific learnable query embedding to feed information of the current task explicitly. We append the query as \(h^{\text{enc}}_{0}\) at the beginning of \(h^{\text{enc}}=\{h^{\text{enc}}_{1},h^{\text{enc}}_{2},\dots,h^{\text{enc}}_{ S}\}\) and train the model. The result suggests the benefit of the embedding is marginal. We conjecture that the model implicitly captures the task information from the masked inputs in our setting.
**Attention**: Here we study the importance of self-attention to model the inter-element relationship by training a model without self-attention. We increase the number of layers to eight to roughly match the total number of parameters with Ours-EXP. As expected, the result clearly suggests the importance of modeling the inter-element relationship.
**Additional Loss**: Our objective function in Eq. (4) only considers reconstruction. One may argue that incorporating
\begin{table}
\begin{tabular}{l c c c c c c c c} \hline \hline \multicolumn{3}{c}{Dataset} & Rico [7] & \multicolumn{5}{c}{Crello [52]} \\ \cline{2-9} Model & \#par. & ELEM & POS & ATTR & \#par. & ELEM & POS & ATTR & IMG & TXT \\ \hline Most-frequent & 0.0x & 0.461 & 0.213 & 0.830 & 0.0x & 0.402 & 0.134 & 0.382 & 0.922 & 0.932 \\ BERT [9] & 1.0x & 0.517 & 0.238 & 0.847 & 1.0x & **0.524** & 0.155 & 0.632 & 0.935 & 0.949 \\ BART [28] & 1.2x & 0.515 & 0.220 & 0.714 & 1.2x & 0.469 & 0.156 & 0.615 & 0.932 & 0.945 \\ CVAE [21, 27] & 1.1x & 0.511 & 0.214 & 0.917 & 1.0x & 0.499 & 0.197 & 0.587 & 0.942 & 0.947 \\ CanvasVAE [52] & 1.2x & 0.437 & 0.192 & 0.790 & 1.2x & 0.475 & 0.138 & 0.586 & 0.912 & 0.946 \\ Ours-IMP & 1.0x & 0.505 & **0.259** & 0.923 & 1.0x & 0.483 & 0.197 & 0.607 & 0.945 & 0.949 \\ Ours-EXP & 1.0x & 0.540 & 0.226 & 0.937 & 1.0x & 0.499 & 0.218 & 0.679 & 0.948 & 0.952 \\ Ours-EXP-FT & 1.0x & **0.552** & 0.215 & **0.945** & 1.0x & 0.508 & **0.227** & **0.688** & **0.950** & **0.954** \\ \hline Expert & 3.0x & 0.575 & 0.228 & 0.952 & 5.0x & 0.534 & 0.255 & 0.703 & 0.948 & 0.955 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Quantitative evaluation in two datasets. A higher score indicates the better performance. Top two results are highlighted in **bold** and underline, respectively. LGAN++ is short for LayoutGAN++.
Figure 4: Results in element filling using Rico dataset. The red dotted box indicates the target element to be predicted.
adversarial losses such as those used in LayoutGAN++ [24] could improve the model. While we tried our best in implementing and tuning the additional adversarial loss, we did not find a clear benefit in adversarial training.
### Comparison with Task-specific Baselines
In this section, we show that our data-driven masked field prediction model can match or even surpass task-specific approaches. We perform experiments on two tasks: 1) single text styling and 2) single text box placement. Since the two tasks use partially overlapping sets of attributes, we train our model on each single task for a fair comparison. Note that we are unable to compare with contextual image filling [47] discussed in Sec. 2.3 due to their task setup, in which an image is retrieved only from pre-defined sets used during training.
#### 4.8.1 Single Text Styling
Zhao _et al_. [56] propose an MLP-based model to predict desirable font properties for a _single_ text box (_i.e_., font emb., color, and size), given the context in web designs. We consider that each design is a document with one text and two image elements, and regard all the context information as attributes of the elements so that we can directly apply FlexDM. We implement Zhao _et al_. [56] with the following minor differences, since the code is not publicly available. We quantize the color and size into 16 bins and 64 bins, respectively. We did not apply data augmentation using the external dataset, since the dataset used for the augmentation is not available. We show the results in Tab. 3. The metrics are accuracy for font color and size, and cosine similarity for font type, which is represented by a low-dimensional embedding. We can clearly see that our model is comparable to the task-specific model.
#### 4.8.2 Single Text Box Placement
Li _et al_. [29] propose to predict the size and position of a single text box given a natural image and the aspect ratio of
\begin{table}
\begin{tabular}{c l c c c c c} \hline \hline & Model & ELEM & POS & ATTR & IMG & TXT \\ \hline \multirow{3}{*}{(i)} & Ours-EXP & **0.499** & 0.218 & **0.679** & 0.948 & 0.952 \\ & w/ task-ID & 0.496 & **0.222** & 0.674 & **0.949** & **0.953** \\ & w/o attention & 0.446 & 0.208 & 0.605 & 0.939 & 0.947 \\ & w/ adv. & **0.499** & 0.215 & 0.677 & 0.948 & 0.952 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Ablation study results in Crello dataset. Top two results are highlighted in **bold** and underline, respectively.
Figure 5: Prediction of FlexDM (Ours-EXP-FT trained on Crello). FlexDM jointly handles a large variety of design tasks with a single Transformer-based model. In the input of ATTR/TXT/IMG prediction, the target fields assigned [MASK] are visualized using fixed default values (, black for text color, gray for image and solid fill, “TEXT” for text). In POS prediction, we additionally show the layout of the elements. The correspondence between the color and type of the element is as follows: green = _vector shape_, magenta = _image_, purple = _text_, yellow = _solid fill_. Best viewed with zoom and color.
the text box. We perform the comparison on the Crello dataset, since the dataset used for their model training and evaluation is not publicly available. We evaluate the performance in terms of the intersection over union (IoU) and boundary displacement error (BDE) [29]. As shown in the upper half of Tab. 4, our model clearly outperforms Li _et al_. [29]'s model. To measure the contribution of multi-modal features to the prediction, we exclude each of them and train the model. The results in the lower half of Tab. 4 suggest that these features contribute to the better performance. Some results are shown in Fig. 6.
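For reference, the IoU reported in Tab. 4 can be computed for two axis-aligned boxes as sketched below, assuming boxes are given as (x, y, w, h) in normalized canvas coordinates; the exact BDE implementation follows [29] and is not reproduced here.

```python
def iou(box_a, box_b) -> float:
    """Intersection over union of two (x, y, w, h) boxes."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    ix = max(0.0, min(ax + aw, bx + bw) - max(ax, bx))   # overlap width
    iy = max(0.0, min(ay + ah, by + bh) - max(ay, by))   # overlap height
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union > 0 else 0.0
```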
## 5 Limitation and Discussion
As image and text generation quality is astonishingly improving, one may want to generate images and texts directly. However, retrieval-based generation is still a practical option. For instance, due to clients' requests, designers often need to use images from private collections or public photo stock services such as Adobe Stock or Shutterstock. Moreover, some people avoid using generated images or text as there are controversies about the legal and ethical issues of AI-generated images.
Our model does not support design tasks that cannot be framed as masked field prediction. We do not consider unconditional generation; _i.e_., generating a complete document without input. Extending FlexDM to an unconditional scenario requires us to apply a generative formulation instead of BERT-style masked modeling, and we leave such formulation as future work. However, we believe that our model nicely fits in a common application scenario where there exist initial design materials to start with.
The model's performance decreases when the input document has more elements. Whether bigger models or datasets alleviate the issue is worth investigating. Developing other evaluation metrics would be helpful for further analysis since current metrics simply evaluate reconstruction performance. In conditional generation, the input context may correspond to multiple possible outputs, especially when the input context is sparse (_e.g_., label sets). Modeling such variability as in layout generation models [21, 18, 24] would be an exciting direction.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline & \multicolumn{2}{c}{Single} & \multicolumn{2}{c}{Multiple} \\ \cline{2-5} & IoU \(\uparrow\) & BDE \(\downarrow\) & IoU \(\uparrow\) & BDE \(\downarrow\) \\ \hline SmartText+ [29] & 0.047 & 0.262 & 0.023 & 0.300 \\ Ours & **0.357** & **0.098** & **0.110** & **0.141** \\ w/o image & 0.355 & 0.100 & 0.103 & 0.156 \\ w/o text & 0.350 & 0.106 & 0.086 & 0.178 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Quantitative evaluation of models for single text box placement in Crello dataset. The samples are divided into two groups: no other text box available (Single) and some text boxes available as the context (Multiple).
\begin{table}
\begin{tabular}{l c c c c} \hline \hline Model & Color & Size & Emb. & Avg. \\ \hline Zhao _et al_. [56] & 45.8\({}_{\pm 29}\) & 19.9\({}_{\pm 1.1}\) & **79.2\({}_{\pm 5}\)** & 48.2\({}_{\pm 1.2}\) \\ Ours & **54.2\({}_{\pm 0.7}\)** & **24.2\({}_{\pm 0.1}\)** & 77.7\({}_{\pm 1.3}\) & **52.0\({}_{\pm 65}\)** \\ \hline \hline \end{tabular}
\end{table}
Table 3: Comparison of models for font properties prediction in CTXFont dataset [56]. The average and standard deviation of three runs are reported. The values are multiplied by 100x for visibility.
Figure 6: Qualitative comparison of single text box placement with SmartText+ [29]. Best viewed with zoom and color. |
2309.03155 | Harmonic chain far from equilibrium: single-file diffusion, long-range
order, and hyperuniformity | In one dimension, particles can not bypass each other. As a consequence, the
mean-squared displacement (MSD) in equilibrium shows sub-diffusion ${\rm
MSD}(t)\sim t^{1/2}$, instead of normal diffusion ${\rm MSD}(t)\sim t$. This
phenomenon is the so-called single-file diffusion. Here, we investigate how the
above equilibrium behaviors are modified far from equilibrium. In particular,
we want to uncover what kind of non-equilibrium driving force can suppress
diffusion and achieve the long-range crystalline order in one dimension, which
is prohibited by the Mermin-Wagner theorem in equilibrium. For that purpose, we
investigate the harmonic chain driven by the following four types of driving
forces that do not satisfy the detailed balance: (i) temporally correlated
noise with the noise spectrum $D(\omega)\sim \omega^{-2\theta}$, (ii)
conserving noise, (iii) periodic driving force, and (iv) periodic deformations
of particles. For the driving force (i) with $\theta>-1/4$, we observe ${\rm
MSD}(t)\sim t^{1/2+2\theta}$ for large $t$. On the other hand, for the driving
forces (i) with $\theta<-1/4$ and (ii)-(iv), MSD remains finite. As a
consequence, the harmonic chain exhibits the crystalline order even in one
dimension. Furthermore, the density fluctuations of the model are highly
suppressed in a large scale in the crystal phase. This phenomenon is known as
hyperuniformity. We discuss that hyperuniformity of the noise fluctuations
themselves is the relevant mechanism to stabilize the long-range crystalline
order in one dimension and yield hyperuniformity of the density fluctuations. | Harukuni Ikeda | 2023-09-06T16:52:04Z | http://arxiv.org/abs/2309.03155v4 | **Harmonic chain far from equilibrium: single-file diffusion, long-range order, and hyperuniformity**
## Abstract
**In one dimension, particles can not bypass each other. As a consequence, the mean-squared displacement (MSD) in equilibrium shows sub-diffusion \(\mathtt{MSD}(t)\sim t^{1/2}\), instead of normal diffusion \(\mathtt{MSD}(t)\sim t\). This phenomenon is the so-called single-file diffusion. Here, we investigate how the above equilibrium behaviors are modified far from equilibrium. In particular, we want to uncover what kind of non-equilibrium driving force can suppress diffusion and achieve the long-range crystalline order in one dimension, which is prohibited by the Mermin-Wagner theorem in equilibrium. For that purpose, we investigate the harmonic chain driven by the following four types of driving forces that do not satisfy the detailed balance: (i) temporally correlated noise with the noise spectrum \(D(\omega)\sim\omega^{-2\theta}\), (ii) center-of-mass conserving noise, (iii) periodic driving force, and (iv) periodic deformations of particles. For the driving force (i) with \(\theta>-1/4\), we observe \(\mathtt{MSD}(t)\sim t^{1/2+2\theta}\) for large \(t\). On the other hand, for the driving forces (i) with \(\theta<-1/4\) and (ii)-(iv), MSD remains finite. As a consequence, the harmonic chain exhibits the crystalline order even in one dimension. Furthermore, the density fluctuations of the model are highly suppressed in a large scale for the driving forces (i) with \(\theta<0\) and (ii)-(iv). This phenomenon is known as hyperuniformity. We discuss that hyperuniformity of the noise fluctuations themselves is the relevant mechanism to stabilize the long-range crystalline order in one dimension and yield hyperuniformity of the density fluctuations.**
###### Contents
* 1 Introduction
* 2 Model and physical quantities
* 2.1 Model
* 2.2 Mean-squared displacement
* 2.3 Order parameter
* 2.4 Hyperuniformity
* 3 Temporally correlated noise
* 3.1 Settings
* 3.2 Mean-squared displacement
* 3.3 Order parameter
* 3.4 Giant number fluctuations and hyperuniformity
* 4
## 1 Introduction
In one-dimensional many-particle systems, the particles cannot bypass one another. As a consequence, the mean-squared displacement (MSD) in equilibrium shows sub-diffusion \(\text{MSD}(t)\thicksim t^{1/2}\)[1, 2, 3, 4, 5, 6], instead of normal diffusion \(\text{MSD}(t)\thicksim t\)[7]. This phenomenon is known as single-file diffusion. The simplest model to observe single-file diffusion is the one-dimensional harmonic chain, where point-like particles on a line are connected by harmonic springs [1]. The harmonic chain is often recognized as a toy model of a one-dimensional crystal [8, 9]. However, as proved by Mermin and Wagner, the long-range order cannot exist in one and two dimensions in equilibrium [10, 11, 9]. This implies that the particles will diffuse away from their lattice positions after a sufficiently long time. As a consequence, MSD of the harmonic chain grows as \(\text{MSD}\thicksim t^{1/2}\), as in the case of standard single-file diffusion [1]. The aim of this manuscript is to discuss how the above equilibrium behaviors are changed if the model is driven by athermal fluctuations violating the detailed balance. In particular, we show that for specific types of athermal fluctuations, the diffusion is strongly suppressed, and as a consequence, the harmonic chain can have the long-range crystalline order even in one dimension.
Our model also provides an ideal playground to investigate hyperuniformity far from equilibrium. Hyperuniformity is a phenomenon in which the large-scale fluctuations of physical quantities are anomalously suppressed. In particular, hyperuniformity of the density fluctuations is characterized by the vanishing of the static structure factor \(S(q)\) in the limit of the small wave number \(q\): \(\lim_{q\to 0}S(q)=0\)[12]. Hyperuniformity has been observed in perfect crystals at zero temperature [13], quasicrystals [14, 15], ground states of quantum systems [16, 17, 18], periodically driven emulsions [19], chiral active matter [20, 21, 22, 23], and so on [24, 25]. Interestingly, a recent numerical study reported that hyperuniformity of out-of-equilibrium systems can also suppress the critical fluctuations and stabilize the crystalline order even in two dimensions [26], which is prohibited by the Mermin-Wagner theorem in equilibrium [10, 11]. So far, most theoretical studies of hyperuniformity far from equilibrium are based on fluctuating hydrodynamics, which can be justified only at sufficiently low densities and cannot be applied in the crystal phase [20, 23, 27]. We believe that the harmonic chain plays the role of a minimal model for investigating how hyperuniformity appears and stabilizes the crystalline order in low-dimensional systems far from equilibrium.
The manuscript is organized as follows. In Sec. 2, we introduce the model and define a few important physical quantities. Then, we perform case studies for the following four types of driving forces that do not satisfy the detailed balance.
Firstly, in Sec. 3, we consider the temporally correlated noise with the power-law noise spectrum \(D_{q}(\omega)\sim\omega^{-2\theta}\). Although the model may appear somewhat artificial, it allows us to systematically investigate how the temporal correlations enhance or suppress the diffusion and yield long-range crystalline order and hyperuniformity by continuously changing the value of \(\theta\). We show that for \(\theta>-1/4\), the mean-squared displacement behaves as \(\text{MSD}\sim t^{1/2+2\theta}\) for large \(t\). For \(\theta<-1/4\), on the contrary, the diffusion is completely suppressed, and MSD converges to a finite value in the long time limit. As a consequence, the model exhibits the long-range crystalline order even in one dimension, which is prohibited in equilibrium [10, 11]. Furthermore, we show that the static structure factor \(S(q)\) for a small wave number \(q\) behaves as \(S(q)\sim q^{-4\theta}\). For \(\theta>0\), \(S(q)\) diverges in the limit \(q\to 0\), meaning that the large-scale density fluctuations are anomalously enhanced. This property is referred to as giant number fluctuations [28, 29]. On the contrary, for \(\theta<0\), \(S(q)\to 0\) in the limit \(q\to 0\), meaning that the density fluctuations show hyperuniformity [12]. We discuss that hyperuniformity of the density fluctuations is a consequence of temporal hyperuniformity of the noise, _i.e._, the fluctuations of the noise vanish on the long-time scale, \(\lim_{\omega\to 0}D_{q}(\omega)=0\).
Secondly, in Sec. 4, we consider the noise that conserves the center of mass to investigate the effects of the spatial correlation of the noise. In previous work, Hexner and Levine have shown that for a system conserving the center of mass, the density fluctuations are highly suppressed and exhibit hyperuniformity [25], due to hyperuniformity of the noise itself [30]. Recently, Galliano _et al._[26] have shown that the suppression of the fluctuations also stabilizes the long-range crystalline order even in two dimensions, which is prohibited by the Mermin-Wagner theorem in equilibrium. Does the crystalline order also emerge in one dimension? We show that the harmonic chain driven by the center-of-mass conserving dynamics indeed possesses the crystalline order [26]. We also show that the static structure factor for small \(q\) behaves as \(S(q)\sim q^{2}\), meaning that the density fluctuations show hyperuniformity [12], as observed in previous works [25, 26]. We discuss that hyperuniformity of the density fluctuations is a consequence of spatial hyperuniformity of the noise itself, _i.e._, \(\lim_{q\to 0}D_{q}(\omega)=0\).
Thirdly, in Sec. 5, we investigate a periodically driven system. For that purpose, we consider chiral active particles confined in a narrow one-dimensional channel and connected with harmonic springs [31, 32]. We show that MSD oscillates with the same frequency as that of the driving force, and the crystalline order parameter takes a finite value. We also show \(S(q)\sim q^{2}\) for \(q\ll 1\), meaning that the model shows hyperuniformity, as previously observed in chiral active matter in two dimensions [20, 21, 22, 23]. We discuss that hyperuniformity of the density fluctuations is a consequence of temporal hyperuniformity of the noise itself, _i.e._, \(\lim_{\omega\to 0}D_{q}(\omega)=0\).
Finally, in Sec. 6, we consider periodically deforming particles in one dimension, which were originally introduced as a model to describe dense biological tissues [33]. The driving force of the model oscillates with a constant frequency and simultaneously conserves the center of mass. The Fourier spectrum of the driving force satisfies \(\lim_{q\to 0}D_{q}(\omega)=0\) and \(\lim_{\omega\to 0}D_{q}(\omega)=0\), meaning that the driving force is spatio-temporally hyperuniform. Under the harmonic approximation, the model can be reduced to the one-dimensional harmonic chain with oscillating natural lengths. We show that MSD oscillates with the same frequency as that of the driving force, and the crystalline order parameter takes a finite value. We also show that the model exhibits stronger hyperuniformity than those of the center-of-mass conserving noise and periodic driving forces: \(S(q)\sim q^{4}\) for \(q\ll 1\)[12].
The above four case studies demonstrate that temporal and/or spatial hyperuniformity of the driving force yields hyperuniformity of the density fluctuations and can stabilize the crystalline order even in one dimension [34, 30], while positive temporal correlations enhance the diffusion. In Sec. 7, we summarize these results and give a more quantitative discussion of the connection between the strength of hyperuniformity and the existence of the crystalline order in low-dimensional systems.
## 2 Model and physical quantities
Here, we introduce the model and define a few important physical quantities.
### Model
We consider the harmonic chain driven by the following dynamics [7]:
\[\dot{x}_{j}=K(x_{j+1}+x_{j-1}-2x_{j})+\xi_{j}(t),\ j=1,\cdots,N\,, \tag{1}\]
where \(\{x_{j}\}_{j=1,\cdots,N}\), \(\{\xi_{j}\}_{j=1,\cdots,N}\), \(K\), and \(N\) denote the positions of the particles, the driving forces, the spring constant, and the number of particles, respectively. We impose the periodic boundary condition \(x_{N+1}=x_{1}\). Let \(a\) be the lattice constant, \(R_{j}=ja\) be the equilibrium position of the \(j\)-th particle, and \(u_{j}=x_{j}-R_{j}\) be the displacement from the equilibrium position. The dynamical equation for \(u_{j}\) is then written as
\[\dot{u}_{j}=K(u_{j+1}+u_{j-1}-2u_{j})+\xi_{j}(t). \tag{2}\]
It is convenient to introduce the Fourier and inverse Fourier transformations of the displacement \(u_{j}(t)\)[7, 8]:
\[u_{j}(t)=\frac{1}{\sqrt{N}}\sum_{q}\tilde{u}_{q}(t)e^{iqR_{j}},\qquad\tilde{u}_{q}(t)=\frac{1}{\sqrt{N}}\sum_{j=1}^{N}u_{j}(t)e^{-iqR_{j}}, \tag{3}\]
where \(q\in\{\frac{2\pi k}{Na}\}_{k=1,\cdots,N}\). Eq. (2) is diagonalized in the Fourier space:
\[\dot{\tilde{u}}_{q}(t)=-\lambda_{q}\tilde{u}_{q}(t)+\tilde{\xi}_{q}(t), \tag{4}\]
where
\[\lambda_{q}=2K\left[1-\cos(aq)\right], \tag{5}\]
and
\[\tilde{\xi}_{q}(t)=\frac{1}{\sqrt{N}}\sum_{j=1}^{N}\xi_{j}(t)e^{-iqR_{j}}. \tag{6}\]
We assume that the mean and variance of \(\tilde{\xi}_{q}(t)\) are given by
\[\big{\langle}\tilde{\xi}_{q}(t)\big{\rangle} =0, \big{\langle}\tilde{\xi}_{q}(t)\tilde{\xi}_{q^{\prime}}(t^{\prime} )\big{\rangle} =\delta_{q,-q^{\prime}}D_{q}(t-t^{\prime}). \tag{7}\]
Using the Fourier transformation w.r.t \(t\), Eq. (4) can be solved as
\[\tilde{u}_{q}(\omega)=\frac{\tilde{\xi}_{q}(\omega)}{i\omega+ \lambda_{q}}. \tag{8}\]
The two point correlation is then calculated as
\[\big{\langle}\tilde{u}_{q}(\omega)\tilde{u}_{q^{\prime}}(\omega^ {\prime})\big{\rangle} =2\pi\delta_{q,-q^{\prime}}\delta(\omega+\omega^{\prime})\frac{D _{q}(\omega)}{\omega^{2}+\lambda_{q}^{2}}. \tag{9}\]
The inverse Fourier transform w.r.t. \(\omega\) yields
\[\big{\langle}\tilde{u}_{q}(t)\tilde{u}_{-q}(0)\big{\rangle} =\frac{1}{2\pi}\int_{-\infty}^{\infty}d\omega e^{i\omega t}\frac{D _{q}(\omega)}{\omega^{2}+\lambda_{q}^{2}}=\frac{1}{\pi}\int_{0}^{\infty}d \omega\frac{D_{q}(\omega)\cos(\omega t)}{\omega^{2}+\lambda_{q}^{2}}, \tag{10}\]
where we used the time-reversal symmetry of the correlation: \(D_{q}(\omega)=D_{q}(-\omega)\).
### Mean-squared displacement
Using Parseval's identity, the mean-squared displacement in the thermodynamic limit \(N\to\infty\) is calculated as follows:
\[\text{MSD}(t) =\frac{1}{N}\sum_{j=1}^{N}\big{\langle}\big{(}u_{j}(t)-u_{j}(0) \big{)}^{2}\big{\rangle}\] \[=\frac{1}{N}\sum_{q}\big{\langle}\big{(}\tilde{u}_{q}(t)-\tilde{ u}_{q}(0)\big{)}\big{(}\tilde{u}_{-q}(t)-\tilde{u}_{-q}(0)\big{)}\big{\rangle}\] \[=\frac{2}{N}\sum_{q}\frac{1}{\pi}\int_{0}^{\infty}d\omega D_{q}( \omega)\frac{1-\cos(\omega t)}{\omega^{2}+\lambda_{q}^{2}}\] \[=\frac{1}{\pi^{2}}\int_{0}^{2\pi/a}adq\int_{0}^{\infty}d\omega D_ {q}(\omega)\frac{1-\cos(\omega t)}{\omega^{2}+\lambda_{q}^{2}}, \tag{11}\]
where we have replaced the summation for \(q\in\big{\{}\frac{2\pi k}{Na}\big{\}}_{k=1,\cdots,N}\) with an integral for \(q\in(0,2\pi/a]\).
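As a quick numerical illustration of Eq. (11) (a minimal sketch, not part of the original analysis; numpy and scipy are assumed, and the values \(K=T=a=1\) are arbitrary test choices), one can take the white-noise spectrum \(D_{q}(\omega)=2T\), which corresponds to the equilibrium \(\theta=0\) case discussed in Sec. 3, perform the \(\omega\)-integral analytically, and recover the single-file scaling \(\mathrm{MSD}\sim t^{1/2}\).

```python
import numpy as np
from scipy.integrate import quad

# Sketch of Eq. (11) for white noise D_q(omega) = 2T (the equilibrium case).
# The omega-integral is analytic:
#   int_0^inf (1 - cos(w t)) / (w^2 + lam^2) dw = pi (1 - exp(-lam t)) / (2 lam),
# so Eq. (11) reduces to MSD(t) = (T/pi) int_0^{2pi/a} a dq (1 - e^{-lam_q t})/lam_q.
K, T, a = 1.0, 1.0, 1.0   # arbitrary test values

def lam(q):
    return 2.0 * K * (1.0 - np.cos(a * q))

def msd(t):
    f = lambda q: a * (1.0 - np.exp(-lam(q) * t)) / lam(q)
    eps = 1e-8   # avoid the removable 0/0 at the endpoints where lam_q = 0
    val, _ = quad(f, eps, 2.0 * np.pi / a - eps, limit=500)
    return T * val / np.pi

ts = np.array([1e2, 1e3, 1e4])
msds = np.array([msd(t) for t in ts])
print(np.diff(np.log(msds)) / np.diff(np.log(ts)))  # ~0.5: single-file MSD ~ t^{1/2}
```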
### Order parameter
To quantify the crystalline order, we observe the Fourier component of the density at the reciprocal wave number \(q=2\pi/a\)[10]:
\[R=\frac{1}{N}\left\langle\sum_{j=1}^{N}e^{i\frac{2\pi}{a}x_{j}}\right\rangle=\left\langle e^{i\frac{2\pi u_{1}}{a}}\right\rangle=\left\langle\cos\left(\frac{2\pi u_{1}}{a}\right)\right\rangle. \tag{12}\]
The order parameter vanishes \(R=0\) for disordered liquid-like configurations, while \(R>0\) for crystals [10]. In equilibrium, the distribution of \(u_{1}\) becomes a Gaussian. Therefore, the order parameter is calculated as \(R=\exp\left[-\frac{2\pi^{2}\big{\langle}u_{1}^{2}\big{\rangle}}{a^{2}}\right]\). However, at finite temperature, the fluctuation diverges \(\left\langle u_{1}^{2}\right\rangle\to\infty\) in the thermodynamic limit \(N\to\infty\), leading to \(R\to 0\) [35]. Therefore, the thermal fluctuations destroy the crystalline order in one dimension, which is consistent with the Mermin-Wagner theorem [10, 11]. Of course, the theorem does not hold in systems far from equilibrium. Indeed, in later sections, we will show several examples of out-of-equilibrium driving forces that preserve the crystalline order even in one dimension.
### Hyperuniformity
For perfect crystals at zero temperature, the static structure factor \(S(q)\) in the limit of the small wave number vanishes: \(\lim_{q\to 0}S(q)=0\). In other words, the density fluctuations are highly suppressed for small \(q\). This property is referred to as hyperuniformity [12]. In equilibrium, on the contrary, \(\lim_{q\to 0}S(q)\) converges to a finite value, namely, the thermal fluctuations destroy hyperuniformity [13]. Does hyperuniformity survive under the athermal fluctuations considered in this work? To answer this question, we calculate \(S(q)\) for \(q\ll 1\) as follows [13]:
\[S(q)= \left\langle\frac{1}{N}\left|\sum_{j=1}^{N}e^{iqx_{j}}\right|^{2}\right\rangle\] \[\approx \frac{1}{N}\left\langle\left|\sum_{j=1}^{N}e^{iqR_{j}}\right|^{2} \right\rangle+q^{2}\left\langle\frac{1}{N}\left|\sum_{j=1}^{N}u_{j}e^{iqR_{j}} \right|^{2}\right\rangle\] \[= S_{0}(q)+q^{2}\left\langle\tilde{u}_{q}\tilde{u}_{-q}\right\rangle\] \[\approx q^{2}\left\langle\tilde{u}_{q}\tilde{u}_{-q}\right\rangle. \tag{13}\]
where \(S_{0}(q)=\left\langle\left|\sum_{j=1}^{N}e^{iqR_{j}}\right|^{2}\right\rangle/N\) denotes the static structure factor of the one-dimensional lattice, which has delta peaks at \(q=2\pi n/a\), \(n=0,1,2,\cdots\) and can be ignored for sufficiently small but finite \(q\). Eq. (13) allows us to discuss hyperuniformity from the scaling of \(\left\langle\tilde{u}_{q}\tilde{u}_{-q}\right\rangle\) for small \(q\).
The interaction potential of the harmonic chain is diagonalized in the Fourier space: \(V_{N}=\sum_{q}\frac{\lambda_{q}}{2}\tilde{u}_{q}\tilde{u}_{-q}\). From the law of equipartition, one obtains \(\left\langle\tilde{u}_{q}\tilde{u}_{-q}\right\rangle=T/\lambda_{q}\) in equilibrium at temperature \(T\). This leads to \(S(q)\approx T/(a^{2}K)\) for \(q\ll 1\), meaning that the thermal fluctuations destroy hyperuniformity. On the contrary, hyperuniformity is often observed in systems driven by athermal fluctuations, where the law of equipartition does not hold. The simplest and well-known example is the quantum harmonic chain: \(H=\sum_{q}\frac{\tilde{p}_{q}\tilde{p}_{-q}}{2}+\sum_{q}\frac{\lambda_{q}}{2} \tilde{u}_{q}\tilde{u}_{-q}\), where the momentum \(\tilde{p}_{q}\) satisfies the canonical commutation relation \(\left[\tilde{u}_{q},\tilde{p}_{q^{\prime}}\right]=i\delta_{q,-q^{\prime}} \hbar\)[16, 17]. On the ground state, the distribution of \(\tilde{u}_{q}\) is a Gaussian of zero mean and variance \(\left\langle\tilde{u}_{q}\tilde{u}_{-q}\right\rangle=\hbar/(2\sqrt{\lambda_{ q}})\)[36]. Therefore, the static structure factor is approximated as \(S(q)\approx q\hbar/(2a\sqrt{K})\) for \(q\ll 1\)[16, 17]: the model exhibits hyperuniformity \(\lim_{q\to 0}S(q)=0\). In the subsequent sections, we will show that hyperuniformity can also emerge in classical systems far from equilibrium.
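The two limiting behaviors quoted above can be checked directly. The short sketch below (assuming \(K=T=a=\hbar=1\), arbitrary test values) evaluates \(S(q)\approx q^{2}\left\langle\tilde{u}_{q}\tilde{u}_{-q}\right\rangle\) for the classical and quantum chains and compares the small-\(q\) limits with \(T/(a^{2}K)\) and \(q\hbar/(2a\sqrt{K})\).

```python
import numpy as np

K, T, a, hbar = 1.0, 1.0, 1.0, 1.0   # arbitrary test values
q = np.array([1e-1, 1e-2, 1e-3])
lam = 2.0 * K * (1.0 - np.cos(a * q))

S_classical = q**2 * T / lam                       # equipartition: <u_q u_-q> = T / lam_q
S_quantum = q**2 * hbar / (2.0 * np.sqrt(lam))     # ground state: <u_q u_-q> = hbar / (2 sqrt(lam_q))

print(S_classical, T / (a**2 * K))                 # flat -> T/(a^2 K): no hyperuniformity
print(S_quantum, q * hbar / (2.0 * a * np.sqrt(K)))  # -> q hbar/(2 a sqrt(K)): vanishes as q -> 0
```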
## 3 Temporally correlated noise
As the first example of athermal fluctuations, we here consider temporally correlated noise. The model allows us to understand how the time correlations of the noise yield giant number fluctuations or hyperuniformity of the density fluctuations, and how these properties affect the diffusion and the long-range order.
### Settings
Here we consider Gaussian colored noise with zero mean and variance
\[\left\langle\xi_{i}(t)\xi_{j}(t^{\prime})\right\rangle=\delta_{ij}D(t-t^{\prime}). \tag{14}\]
In previous work, single-file diffusion of active particles has been investigated [37]. In that case, the correlation of the noise \(D(t)\) decays exponentially, which leads to the same scaling as
that in equilibrium \(\mathrm{MSD}(t)\sim t^{1/2}\) for \(t\gg 1\)[37]. As we will see below, the scaling is altered if \(D(t)\) has the power-law tail. We assume that the noise spectrum is written as
\[D(\omega)=\frac{2T\left|\omega\right|^{-2\theta}}{\sec(\theta\pi)}, -1/2<\theta<1/2. \tag{15}\]
Here, the pre-factor \(1/\sec(\theta\pi)\) has been chosen to simplify the final result, and \(\theta\) is restricted to \(-1/2<\theta<1/2\) so that the correlation function converges, as we will see later. When \(\theta=0\), the model satisfies the detailed balance [7], and thus \(\xi_{i}\) can be identified with the thermal noise at temperature \(T\). For \(\theta>0\), the noise has a positive correlation that decays algebraically for large \(t\): \(D(t)\sim 1/t^{1-2\theta}\). On the contrary, for \(\theta<0\), \(\lim_{\omega\to 0}D(\omega)=0\), meaning that the noise fluctuations are highly suppressed on long time scales. Namely, the noise is temporally hyperuniform.
The power-law spectrum Eq. (15) of the noise appears for non-equilibrium systems showing self-organized criticality [38, 39] and is often referred to as the \(1/f\) noise [40]. The power-law spectrum also appears for the Fourier spectrum of quasi-periodic patterns. In Ref. [15], the authors showed that the Fourier spectrum of one-dimensional quasi-periodic patterns exhibits the power-law behavior for small \(\omega\) with \(\theta\in[-3/2,1]\). Also, in Ref. [13], the authors argued that small perturbations to one-dimensional periodic patterns yield the power-law spectrum for \(\theta\in[-1,0]\). Therefore, the model driven by the noise with the correlation Eq. (15) would give useful insights for quasi-periodically and periodically driven systems.
### Mean-squared displacement
In the thermodynamic limit \(N\to\infty\), the mean-squared displacement Eq. (11) is calculated as
\[\mathrm{MSD}(t)=\frac{2T}{\pi^{2}\sec(\pi\theta)}\int_{0}^{2\pi/a} adq\int_{0}^{\infty}d\omega\left|\omega\right|^{-2\theta}\frac{1-\cos(\omega t)}{ \omega^{2}+\left[2K(1-\cos(aq))\right]^{2}}. \tag{16}\]
The integral w.r.t \(q\) can be performed as
\[\int_{0}^{2\pi/a}\frac{adq}{\omega^{2}+\left[2K(1-\cos(aq))\right]^{2}}=2\pi \sqrt{\frac{\omega+\sqrt{\omega^{2}+16K^{2}}}{2\omega^{3}(\omega^{2}+16K^{2}) }}\sim\begin{cases}\frac{\pi}{\sqrt{2K\omega^{3}}}&\left|\omega\right|\ll 1\\ \frac{2\pi}{\omega^{2}}&\left|\omega\right|\gg 1.\end{cases} \tag{17}\]
Using this result, one can deduce the scaling of \(\mathrm{MSD}\) for \(t\ll 1\) as follows:
\[\mathrm{MSD}(t)\sim At^{1+2\theta}\ (t\ll 1), \tag{18}\]
where \(A\) denotes a constant. This scaling agrees with that of a free-particle driven by the temporally correlated noise [41]. For \(t\gg 1\) and \(\theta>-1/4\), we get
\[\mathrm{MSD}(t)\sim Bt^{\frac{1}{2}+2\theta}\ (t\gg 1), \tag{19}\]
where \(B\) denotes a constant. For \(\theta=0\), one recovers the scaling of single-file diffusion in equilibrium \(\mathrm{MSD}\sim t^{1/2}\) [1, 5]. For \(\theta<-1/4\), \(\mathrm{MSD}\) in the long time limit converges to a finite value: \(\lim_{t\to\infty}\mathrm{MSD}(t)=2\left\langle u_{1}^{2}\right\rangle\). We plot \(\mathrm{MSD}\) for several \(\theta\) in Fig. 1.
Eq. (19) can be understood from a simple scaling argument. To see this, we consider the continuum limit of Eq. (2):
\[\dot{u}(x,t)=K\nabla^{2}u(x,t)+\xi(x,t), \tag{20}\]
where the noise correlation is given by \(\left\langle\xi(x,t)\xi(x^{\prime},t^{\prime})\right\rangle=\delta(x-x^{ \prime})D(t-t^{\prime})\). To analyze the model in the large spatio-temporal scale, we consider the following scaling transformations: \(x\to bx\), \(t\to b^{z_{t}}t\), \(u\to b^{z_{u}}u\). Assuming that all terms in Eq. (20) have the same scaling dimension, we obtain \(z_{t}=2\) and \(z_{u}=1/2+2\theta\)[30, 42]. This leads to \(\mathrm{MSD}\sim u^{2}\sim b^{2z_{u}}\sim t^{2z_{u}/z_{t}}\sim t^{1/2+2\theta}\), which is consistent with Eq. (19).
### Order parameter
The equal time correlation in the Fourier space is
\[\left\langle\tilde{u}_{q}\tilde{u}_{-q}\right\rangle=\frac{1}{\pi}\int_{0}^{ \infty}\frac{D(\omega)d\omega}{\omega^{2}+\lambda_{q}^{2}}=\frac{2T}{\pi\sec( \theta\pi)(\lambda_{q})^{1+2\theta}}\int_{0}^{\infty}dx\frac{|x|^{-2\theta}}{ x^{2}+1}=\frac{T}{(\lambda_{q})^{1+2\theta}}. \tag{21}\]
Note that the integral converges only when \(-1/2<\theta<1/2\). When \(\theta=0\), we recover the law of equipartition:
\[\left\langle\tilde{u}_{q}\tilde{u}_{-q}\right\rangle=\frac{T}{\lambda_{q}}. \tag{22}\]
In real space, we get
\[\left\langle u_{1}^{2}\right\rangle=\frac{1}{N}\sum_{j=1}^{N}\left\langle u_{j} ^{2}\right\rangle=\frac{1}{N}\sum_{q}\left\langle\tilde{u}_{q}\tilde{u}_{-q} \right\rangle=\frac{T}{N}\sum_{q}\frac{1}{\lambda_{q}^{1+2\theta}}=\frac{T}{ \pi}\int_{0}^{2\pi/a}adq\frac{1}{\left[2K(1-\cos(aq))\right]^{1+2\theta}}. \tag{23}\]
For \(\theta<-1/4\), the integral converges to
\[\frac{\left\langle u_{1}^{2}\right\rangle}{T}=-\frac{\theta\Gamma(-1/2-2 \theta)}{2^{1+4\theta}K^{1+2\theta}\pi^{1/2}\Gamma(1-2\theta)}, \tag{24}\]
while for \(\theta\geq-1/4\), \(\left\langle u_{1}^{2}\right\rangle\rightarrow\infty\), see Fig. 2 (a). Since \(\xi_{i}\) is a Gaussian random number, the solution of the linear differential equation Eq. (2), \(u_{1}\), also becomes a Gaussian random number [7]. Therefore, the order parameter can be calculated as
\[R=\left\langle e^{i\frac{2\pi u_{1}}{a}}\right\rangle=\exp\left[-\frac{2\pi^{2}}{a^{2}}\left\langle u_{1}^{2}\right\rangle\right]. \tag{25}\]
We plot \(R\) in Fig. 2 (b). The order parameter \(R\) has a finite value for \(\theta<-1/4\), meaning that the model has the long-range crystalline order even in one dimension. For \(\theta>-1/4\), \(R=0\), implying that the diffusion destroys the crystalline order.
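The trend of Fig. 2 can be reproduced by evaluating the closed-form expressions Eqs. (24) and (25) as written; the minimal sketch below (with the arbitrary test values \(K=a=T=1\)) does so for a few values of \(\theta<-1/4\).

```python
import numpy as np
from scipy.special import gamma

# Closed forms Eq. (24) and Eq. (25), valid for theta < -1/4.
# K = a = T = 1 are arbitrary test values.
K, a, T = 1.0, 1.0, 1.0

def u1_sq(theta):
    return (-T * theta * gamma(-0.5 - 2.0 * theta)
            / (2.0 ** (1.0 + 4.0 * theta) * K ** (1.0 + 2.0 * theta)
               * np.sqrt(np.pi) * gamma(1.0 - 2.0 * theta)))

def R(theta):
    # Eq. (25): u_1 is Gaussian, so R = exp(-2 pi^2 <u_1^2> / a^2)
    return np.exp(-2.0 * np.pi ** 2 * u1_sq(theta) / a ** 2)

for theta in (-0.45, -0.40, -0.35, -0.30, -0.26):
    print(f"theta = {theta:+.2f}:  <u1^2> = {u1_sq(theta):.3f},  R = {R(theta):.3e}")
# <u1^2> grows and R drops towards zero as theta -> -1/4 from below, as in Fig. 2
```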
Figure 1: Mean-squared displacement of harmonic chain driven by temporally correlated noise. Markers denote exact results. Dashed and solid lines represent short and long-time asymptotic behaviors, respectively. For simplicity, we set \(K=1\) and \(T=1\).
### Giant number fluctuations and hyperuniformity
For \(q\ll 1\), \(S(q)\) is approximated as
\[S(q)\approx q^{2}\left\langle\tilde{u}_{q}\tilde{u}_{-q}\right\rangle=T\frac{q^{2}}{\lambda_{q}^{1+2\theta}}\approx\frac{Tq^{-4\theta}}{(Ka^{2})^{1+2\theta}}. \tag{26}\]
For \(\theta>0\), \(S(q)\to\infty\) in the limit \(q\to 0\), meaning that the large-scale density fluctuations are anomalously enhanced. This property is referred to as giant number fluctuations [28, 29]. On the contrary, for \(\theta<0\), \(\lim_{q\to 0}S(q)=0\), meaning that the large-scale fluctuations are highly suppressed. This property is referred to as hyperuniformity [12]. For \(\theta<0\), the noise spectrum satisfies \(\lim_{\omega\to 0}D(\omega)=0\), meaning that the noise is temporally hyperuniform. The above result implies that temporal hyperuniformity of the noise leads to hyperuniformity of the density fluctuations.
In summary, we found three qualitatively distinct regimes depending on \(\theta\). For \(\theta>0\), the model exhibits giant number fluctuations. For \(-1/4<\theta<0\), the model exhibits hyperuniformity, but does not have the crystalline order \(R=0\). This property is often referred to as disordered hyperuniformity [12]. For \(\theta<-1/4\), the model exhibits hyperuniformity and has the crystalline order \(R>0\). This property is referred to as ordered hyperuniformity [12].
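The small-\(q\) power in Eq. (26) and the regime classification above can be checked with a compact numerical sketch (arbitrary test values \(K=T=a=1\); the \(\theta\) values are illustrative choices).

```python
import numpy as np

# Log-log slope of S(q) = T q^2 / lambda_q^{1+2 theta} (Eq. (26)) at small q,
# together with the regime classification of the text.
K, T, a = 1.0, 1.0, 1.0

def S(q, theta):
    lam = 2.0 * K * (1.0 - np.cos(a * q))
    return T * q ** 2 / lam ** (1.0 + 2.0 * theta)

q = np.array([1e-2, 1e-3])
for theta in (0.2, -0.1, -0.35):
    slope = (np.diff(np.log(S(q, theta))) / np.diff(np.log(q)))[0]
    regime = ("giant number fluctuations" if theta > 0
              else "disordered hyperuniform" if theta > -0.25
              else "ordered hyperuniform")
    print(f"theta = {theta:+.2f}: S(q) ~ q^{slope:.2f} (expected {-4 * theta:+.2f}), {regime}")
```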
The above analysis is limited to \(\theta>-1/2\) because the correlation Eq. (21) diverges for \(\theta\leq-1/2\). This ultraviolet divergence can be removed by introducing a phenomenological cut-off to the spectrum. For that purpose, we consider the modified power spectrum:
\[D(\omega)=\begin{cases}C\left|\omega\right|^{-2\theta}&\left|\omega\right|< \omega_{c}\\ 0&\text{otherwise}\end{cases}, \tag{27}\]
where \(C\) denotes a constant. The correlation Eq. (21) for \(q\ll 1\) can be calculated as
\[\left\langle\tilde{u}_{q}\tilde{u}_{-q}\right\rangle=\frac{T}{\pi}\int_{0}^{\omega_{c}}d\omega\frac{C\left|\omega\right|^{-2\theta}}{\omega^{2}+\lambda_{q}^{2}}\sim\frac{T}{\pi}\int_{0}^{\omega_{c}}d\omega\left|\omega\right|^{-2\theta-2},\qquad\qquad\theta<-1/2, \tag{28}\]
Figure 2: Physical quantities of harmonic chain driven by temporally correlated noise. (a) \(\theta\) dependence of the fluctuation \(\left\langle u_{1}^{2}\right\rangle\). \(\left\langle u_{1}^{2}\right\rangle\) has a finite value for \(\theta<-1/4\) and diverges at \(\theta=-1/4\). (b) \(\theta\) dependence of the order parameter \(R\) for several temperatures. For \(\theta<-1/4\), \(R>0\), while for \(\theta\geq-1/4\), \(R=0\). For simplicity, we here set \(K=1\) and \(a=1\).
where \(\omega_{c}\) denotes the cut-off frequency. The integral converges to a constant value for \(\theta<-1/2\). Therefore, we get \(S(q)\approx q^{2}\left\langle\tilde{u}_{q}\tilde{u}_{-q}\right\rangle\thicksim q ^{2}\), which is consistent with the limit \(\theta\to-1/2\) of Eq. (26).
One can also investigate the effects of the power-law spatial correlation \(D_{q}(\omega)\thicksim q^{-2\rho}\). Since the analysis closely parallels that of this section, we only briefly summarize the main consequences of the power-law spatial correlation in Sec. 7. In the following sections, we focus on more concrete examples.
## 4 Center-of-mass conserving dynamics
In the previous section, we have observed that temporal hyperuniformity of the noise leads to hyperuniformity of the density fluctuations and also stabilizes the crystalline order even in one dimension. In this section, we shall show that spatial hyperuniformity of the noise can also yield hyperuniformity of the density fluctuations and stabilize the crystalline order [25].
### Settings
Hyperuniformity is a phenomenon in which the fluctuations of physical quantities become much smaller than what would be expected from the central limit theorem. Hyperuniformity has been reported in various systems, such as crystals, quasicrystals [13, 15, 43], and chiral active matter [21, 22, 23]. In general, the physical mechanisms causing hyperuniformity can differ depending on the details of the systems. However, Hexner and Levine have pointed out that hyperuniformity can universally appear for out-of-equilibrium systems conserving the center of mass [25]. Recently, Galliano _et al._ [26] argued that the suppression of the density fluctuations also stabilizes the long-range crystalline order even in two dimensions, which is prohibited by the Mermin-Wagner theorem in equilibrium [10, 11]. Here, we argue that the same scenario also holds in a one-dimensional system driven by the center-of-mass conserving dynamics.
To preserve the center of mass \(X\equiv\sum_{j=1}^{N}x_{j}\), the noise should satisfy \(\sum_{j=1}^{N}\xi_{j}=0\). A simple implementation of this condition is
\[\xi_{j}(t)=\eta_{j}(t)-\eta_{j-1}(t), \tag{29}\]
where \(\eta_{j}(t)\) is a Gaussian random number of zero mean and variance:
\[\left\langle\eta_{i}(t)\eta_{j}(t^{\prime})\right\rangle=T\delta_{ij}\delta(t -t^{\prime}). \tag{30}\]
Then, one can show that the dynamics Eq. (1) preserves the center of mass \(\dot{X}=0\) under the periodic boundary condition. The Fourier component of \(\xi_{j}(t)\) satisfies
\[\left\langle\tilde{\xi}_{q}(t)\right\rangle=0,\] \[\left\langle\tilde{\xi}_{q}(t)\tilde{\xi}_{q^{\prime}}(t^{\prime})\right\rangle=4\delta_{q,-q^{\prime}}T\left[1-\cos(aq)\right]\delta(t-t^{\prime})=\delta_{q,-q^{\prime}}D_{q}(t-t^{\prime}), \tag{31}\]
where
\[D_{q}(t)=\frac{2T\lambda_{q}}{K}\delta(t). \tag{32}\]
For \(q\ll 1\), \(\lambda_{q}\thicksim q^{2}\) and thus \(D_{q}(t)\thicksim q^{2}\), meaning that the large-scale spatial fluctuations of the noise are highly suppressed. In other words, the noise is spatially hyperuniform.
### Mean-squared displacement
In the thermodynamic limit \(N\rightarrow\infty\), the mean-squared displacement is calculated as
\[\text{MSD}(t)=\frac{2T}{\pi^{2}K}\int_{0}^{\infty}d\omega(1-\cos(\omega t))\int_ {0}^{2\pi/a}adq\frac{2K(1-\cos(aq))}{\omega^{2}+[2K(1-\cos(aq))]^{2}}. \tag{33}\]
We plot MSD in Fig. 3. MSD converges to a finite value in the long-time limit, \(\lim_{t\rightarrow\infty}\text{MSD}=2\left\langle u_{1}^{2}\right\rangle\). This means that the particles are localized around their lattice positions, and thus the model is expected to have the crystalline order. Below, we confirm that this intuition is correct.
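A minimal numerical sketch of Eq. (33) (arbitrary test values \(K=T=a=1\)): performing the \(\omega\)-integral analytically, \(\int_{0}^{\infty}d\omega\,(1-\cos\omega t)\,\lambda_{q}/(\omega^{2}+\lambda_{q}^{2})=(\pi/2)(1-e^{-\lambda_{q}t})\), leaves a single \(q\)-integral, whose evaluation shows the saturation of MSD at \(2\left\langle u_{1}^{2}\right\rangle=2T/K\) seen in Fig. 3.

```python
import numpy as np
from scipy.integrate import quad

# Eq. (33) with the omega-integral done analytically, which leaves
#   MSD(t) = (T/(pi K)) int_0^{2pi/a} a dq (1 - exp(-lam_q t)).
# K = T = a = 1 are arbitrary test values.
K, T, a = 1.0, 1.0, 1.0

def lam(q):
    return 2.0 * K * (1.0 - np.cos(a * q))

def msd_com(t):
    f = lambda q: a * (1.0 - np.exp(-lam(q) * t))
    val, _ = quad(f, 0.0, 2.0 * np.pi / a, limit=200)
    return T * val / (np.pi * K)

for t in (0.1, 1.0, 10.0, 100.0):
    print(t, msd_com(t))
print("saturation value 2<u1^2> = 2T/K =", 2.0 * T / K)   # cf. Eq. (35) and Fig. 3
```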
### Order parameter
Repeating the same analysis as in Eq. (10), we get
\[\left\langle\tilde{u}_{q}\tilde{u}_{-q}\right\rangle=\frac{1}{\pi}\int_{0}^{ \infty}d\omega\frac{D_{q}(\omega)}{\omega^{2}+\lambda_{q}^{2}}=\frac{2T}{K\pi} \int_{0}^{\infty}d\omega\frac{\lambda_{q}}{\omega^{2}+\lambda_{q}^{2}}=\frac{ T}{K}. \tag{34}\]
The squared deviation from the lattice position is then calculated as
\[\left\langle u_{1}^{2}\right\rangle=\frac{1}{N}\sum_{q}\left\langle\tilde{u}_ {q}\tilde{u}_{-q}\right\rangle=\frac{T}{K}. \tag{35}\]
Since \(\xi_{j}(t)\) is a Gaussian random variable and the model has only linear interactions, \(u_{1}\) also follows a Gaussian distribution. Thus, the order parameter is
\[R=\frac{1}{N}\left\langle\sum_{j=1}^{N}e^{\frac{2\pi i}{a}x_{j}}\right\rangle= \left\langle e^{\frac{2\pi i}{a}u_{1}}\right\rangle=\exp\left[-\frac{2\pi^{2}T }{a^{2}K}\right]. \tag{36}\]
The order parameter has a finite value, meaning that the model driven by the center-of-mass conserving noise has the crystalline order even in one dimension.
Figure 3: Mean-squared displacement of harmonic chain driven by center-of-mass conserving noise. Markers denote exact results. Solid lines represent long time asymptotic behavior: MSD \(\sim 2\left\langle u_{1}^{2}\right\rangle\). For simplicity, we set \(K=1\) and \(T=1\).
### Hyperuniformity
Hexner and Levine argued that the density fluctuations are anomalously suppressed in the center-of-mass conserving systems [25]. To see this, we calculate \(S(q)\) for small \(q\ll 1\):
\[S(q)\approx q^{2}\left\langle\tilde{u}_{q}\tilde{u}_{-q}\right\rangle=\frac{T}{K }q^{2}. \tag{37}\]
\(S(q)\) vanishes in the limit \(q\to 0\), meaning that the large-scale density fluctuations are highly suppressed. This is the signature of hyperuniformity [12].
Overall, the above results imply that spatial hyperuniformity of the noise \(\lim_{q\to 0}D_{q}(\omega)=0\) yields hyperuniformity of the density fluctuations and stabilizes the long-range crystalline order even in one dimension.
### Mapping to Einstein model
Interestingly, the current model can be mapped into an equilibrium model. This can be seen by rewriting Eq. (4) as follows:
\[\frac{\partial\tilde{u}_{q}(t)}{\partial t}=-\Gamma_{q}\frac{\partial V_{\rm eff}}{\partial\tilde{u}_{-q}(t)}+\tilde{\xi}_{q}(t),\qquad\quad\left\langle\tilde{\xi}_{q}(t)\tilde{\xi}_{q^{\prime}}(t^{\prime})\right\rangle=2\delta_{q,-q^{\prime}}\,T\Gamma_{q}\delta(t-t^{\prime}), \tag{38}\]
where \(\Gamma_{q}=\lambda_{q}/K\) and \(V_{\rm eff}=\sum_{i=1}^{N}\frac{K}{2}u_{i}^{2}\). Eq. (38) is the equilibrium Langevin equation satisfying the detailed balance with the friction coefficient \(\Gamma_{q}\)[7]. Then, the steady state distribution follows the Boltzmann distribution:
\[P(u_{1},\cdots,u_{N})=\frac{e^{-\frac{V_{\rm eff}}{T}}}{\int\prod_{i=1}^{N}du _{i}e^{-\frac{V_{\rm eff}}{T}}}. \tag{39}\]
This is nothing but the Einstein model consisting of \(N\) independent harmonic oscillators of the same frequency \(\omega=\sqrt{K}\). The Einstein model is known to exhibit hyperuniformity [13], which is consistent with Eq. (37).
## 5 Periodically driven system
In Sec. 3, we have investigated the effects of temporal hyperuniformity of the driving force \(\lim_{\omega\to 0}D(\omega)=0\). For the extreme case of temporal hyperuniformity, here we study a periodically driven system, where the Fourier spectrum of the driving force is strictly zero, \(D(\omega)=0\), for \(\omega<\omega_{0}\).
### Settings
Here, we consider the periodic driving force. For a concrete example, we consider chiral active particles in one dimension. Chiral active particles are particles that exhibit circular motions [32]. A popular mathematical model to describe this motion is [31]
\[\dot{x} =\sqrt{2T}\cos\phi+\xi_{x},\] \[\dot{y} =\sqrt{2T}\sin\phi+\xi_{y},\] \[\dot{\phi} =\omega_{0}+\xi_{\phi}, \tag{40}\]
where \(\xi_{x,y,\phi}\) denotes the noise. We are particularly interested in the limit \(\xi_{x,y,\phi}\to 0\), where a chiral active particle undergoes a purely periodic motion. If the particle is confined in a
one-dimensional channel along the \(x\) direction, one can only consider the motion along that direction: \(\dot{x}=\sqrt{2T}\cos(\omega_{0}t+\phi(0))\). How does this periodic nature of the driving force affect the collective motion? To model the collective excitation of chiral active particles in one dimension, we consider the harmonic chain Eq. (1) driven by the following periodic function [34]:
\[\xi_{j}(t)=\sqrt{2T}\cos(\omega_{0}t+\psi_{j}), \tag{41}\]
where \(\psi_{j}\) denotes a random number uniformly distributed in \([0,2\pi]\). The mean and variance of \(\xi_{j}(t)\) are then given by
\[\left\langle\xi_{j}(t)\right\rangle=0,\] \[\left\langle\xi_{i}(t)\xi_{j}(t^{\prime})\right\rangle=\delta_{ ij}TD(t-t^{\prime}), \tag{42}\]
where \(D(t)=\cos(\omega_{0}t)\). The noise spectrum \(D(\omega)=\pi\delta(|\omega|-\omega_{0})\) vanishes in the limit of the small frequency: \(\lim_{\omega\to 0}D(\omega)=0\). Thus the noise is temporally hyperuniform. When \(\omega_{0}=0\), \(\xi_{j}(t)=\sqrt{2T}\cos\psi_{j}\) plays the role of the random field and destroys the long-range order in \(d\leq 4\) as predicted by Imry and Ma [44]. What will happen when \(\omega_{0}\neq 0\)?
### Mean-squared displacement
By using Eq. (11), MSD is calculated as
\[\text{MSD}(t)=2\left[1-\cos(\omega_{0}t)\right]\left\langle u_{1}^{2}\right\rangle. \tag{43}\]
MSD oscillates with the frequency of the driving force \(\omega_{0}\), see Fig. 4.
The fluctuation around the lattice position \(\left\langle u_{1}^{2}\right\rangle\) is calculated as
\[\left\langle u_{1}^{2}\right\rangle=\frac{T}{\pi}\int_{0}^{2\pi/a}adq\frac{1} {\omega_{0}^{2}+\left[2K(1-\cos(aq))\right]^{2}}, \tag{44}\]
which has a finite value for \(\omega_{0}\neq 0\) and diverges at \(\omega_{0}=0\), see Fig. 5. Therefore, the model is expected to have the crystalline order for \(\omega_{0}\neq 0\).
Figure 4: Mean-squared displacement of the periodically driven harmonic chain.
### Order parameter
Because the current driving force Eq. (41) is not a Gaussian random variable, one cannot easily calculate \(R\). Nevertheless, one can prove the existence of the crystalline order by using the following inequality:
\[R=\left\langle e^{i\frac{2\pi u_{1}}{a}}\right\rangle=\left\langle\cos\left( \frac{2\pi u_{1}}{a}\right)\right\rangle\geq 1-\frac{2\pi^{2}}{a^{2}}\left\langle u _{1}^{2}\right\rangle. \tag{45}\]
Since the dynamics Eq. (2) does not depend on \(a\), whether or not the crystalline order exists is also independent of \(a\). We therefore choose \(a\) so that \(a>\sqrt{2\pi^{2}\left\langle u_{1}^{2}\right\rangle}\). Then Eq. (45) leads to \(R>0\), meaning that the model has the crystalline order even in one dimension.
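This argument can be made concrete numerically. The sketch below (arbitrary test values \(K=T=1\); Eq. (44) does not depend on the value of \(a\) used inside the integral) evaluates \(\left\langle u_{1}^{2}\right\rangle\) for several \(\omega_{0}\) and reports the smallest lattice constant for which the bound Eq. (45) certifies \(R>0\).

```python
import numpy as np
from scipy.integrate import quad

# Cage size Eq. (44) and the lower bound Eq. (45); K = T = 1 are arbitrary test values.
K, T = 1.0, 1.0

def u1_sq(omega0, a=1.0):
    f = lambda q: a / (omega0 ** 2 + (2.0 * K * (1.0 - np.cos(a * q))) ** 2)
    val, _ = quad(f, 0.0, 2.0 * np.pi / a, limit=400)
    return T * val / np.pi

for omega0 in (2.0, 1.0, 0.5, 0.1):
    u2 = u1_sq(omega0)
    a_min = np.sqrt(2.0 * np.pi ** 2 * u2)   # Eq. (45) gives R > 0 whenever a > a_min
    print(f"omega0 = {omega0}:  <u1^2> = {u2:.3f},  R > 0 certified for a > {a_min:.2f}")
# <u1^2> grows as omega0 -> 0 (Fig. 5); at omega0 = 0 the integral diverges
```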
### Hyperuniformity
Chiral active particles are known to exhibit hyperuniformity in two dimensions [20, 21, 22, 23]. Does the one-dimensional system also exhibit hyperuniformity? For \(q\ll 1\), the static structure factor is calculated as
\[S(q)\sim q^{2}\left\langle\tilde{u}_{q}\tilde{u}_{-q}\right\rangle=\frac{Tq^ {2}}{\omega_{0}^{2}+\lambda_{q}^{2}}\sim\begin{cases}Tq^{2}/\omega_{0}^{2}& \omega_{0}>0,\\ Tq^{-2}/(Ka^{2})^{2}&\omega_{0}=0.\end{cases} \tag{46}\]
For \(\omega_{0}\neq 0\), the model indeed exhibits hyperuniformity \(S(q)\sim q^{2}\), as in chiral active matter in two dimensions [20, 21, 22, 23]. The result is also consistent with the temporally correlated noise with \(\theta\leq-1/2\), see Eq. (28). This is a reasonable result because the modified power-law spectrum Eq. (27) converges to \(D(\omega)\propto\delta(\omega-\omega_{c})\) in the limit \(\theta\to-\infty\), which agrees with the Fourier spectrum of the driving force Eq. (42). For \(\omega_{0}=0\), on the contrary, one observes \(S(q)\sim q^{-2}\). Therefore, \(S(q)\) diverges in the limit of small \(q\). This anomalous enhancement of the large-scale density fluctuations is referred to as giant number fluctuations [28, 29]. A similar power-law divergence of \(S(q)\) has been previously reported for active matter in quenched random potentials [45].
Figure 5: \(\left\langle u_{1}^{2}\right\rangle\) of the periodically driven harmonic chain. \(\left\langle u_{1}^{2}\right\rangle\) has a finite value for \(\omega_{0}\neq 0\) and diverges in the limit \(\omega_{0}\to 0\).
## 6 Periodically deforming particles
What will happen if the driving force is a periodic function and simultaneously conserves the center of mass? To answer this question, we here consider the model introduced by Tjhung and Berthier [33].
### Settings
Tissues are often fluidized by periodic deformations of cells [46]. To model this behavior, Tjhung and Berthier introduced periodically deforming particles [33]. The one-dimensional version of the model is written as
\[\dot{x}_{j}(t)=-\frac{\partial V_{N}}{\partial x_{j}}, V_{N}=\sum_{i<j}^{N}v(h_{ij}), \tag{47}\]
where \(v(h_{ij})\) denotes the one-sided harmonic potential [47]:
\[v(h_{ij})=\frac{Kh_{ij}^{2}\Theta(-h_{ij})}{2}, h_{ij}=\left|x_{i}-x_{j}\right|-\frac{r_{i}(t)+r_{j}(t)}{2}. \tag{48}\]
Here the diameter of the \(i\)-th particle \(r_{i}(t)\) oscillates with the frequency \(\omega_{0}\)[33]:
\[r_{i}(t)=a+\sigma\cos(\omega_{0}t+\psi_{i}), \tag{49}\]
where \(\psi_{i}\) is a random number distributed uniformly in \([0,2\pi]\). When \(\omega_{0}=0\), \(\sigma\cos\psi_{i}\) plays the role of polydispersity, and thus the model cannot have the crystalline order.\({}^{1}\) The force term in Eq. (47) satisfies Newton's third law and thus conserves the center of mass [48].
Footnote 1: For \(\omega_{0}=0\), the driving force Eq. (53) becomes a quenched randomness of zero mean and variance \(\left\langle\tilde{\xi}_{q}\tilde{\xi}_{q^{\prime}}\right\rangle=T\delta_{q,-q^{\prime}}\sin(aq)^{2}\). For \(q\ll 1\), \(\left\langle\tilde{\xi}_{q}\tilde{\xi}_{q^{\prime}}\right\rangle\propto q^{2}\delta_{q,-q^{\prime}}\). The Imry-Ma argument [44] for the correlated disorder predicts that this type of disorder prohibits the continuous symmetry breaking for \(d\leq 2\) [30]. Therefore, the polydispersity would destroy the crystalline order in one and two dimensions even without thermal fluctuations.
For sufficiently high density and small \(\sigma\), the harmonic approximation would be justified, and thus the one-sided harmonic potential would be replaced by the harmonic potential (see Fig. 6):
\[v(h_{ij})\approx\frac{Kh_{ij}^{2}}{2}. \tag{50}\]
Taking only the nearest neighbor interactions, one can approximate Eq. (47) as
\[\dot{x}_{j}\approx K(x_{j+1}+x_{j-1}-2x_{j})+K\frac{r_{j+1}-r_{j-1}}{2}. \tag{51}\]
Figure 6: Schematic figures of (a) periodically deforming particles in one dimension and (b) harmonic chain where particle interactions are replaced by linear springs.
Then, the equation of motion of the displacement \(u_{j}\) is
\[\dot{u}_{j}=K(u_{j+1}+u_{j-1}-2u_{j})+\xi_{j}, \tag{52}\]
where
\[\xi_{j}(t)=\frac{K\sigma}{2}\big{[}\cos(\omega_{0}t+\psi_{j+1})-\cos(\omega_{0}t +\psi_{j-1})\big{]}. \tag{53}\]
The mean and variance of \(\tilde{\xi}_{q}\) are
\[\big{\langle}\tilde{\xi}_{q}(t)\big{\rangle}=0,\] \[\big{\langle}\tilde{\xi}_{q}(t)\tilde{\xi}_{q^{\prime}}(t^{\prime})\big{\rangle}=T\delta_{q,-q^{\prime}}D_{q}(t-t^{\prime}), \tag{54}\]
where \(T=(K\sigma)^{2}/4\) and
\[D_{q}(t)=\sin(aq)^{2}\cos(\omega_{0}t). \tag{55}\]
The noise spectrum \(D_{q}(\omega)=\pi\sin(aq)^{2}\delta(|\omega|-\omega_{0})\) vanishes in the limits of small \(\omega\) and/or \(q\). Therefore, the noise is spatio-temporally hyperuniform.
### Mean-squared displacement
Repeating the same analysis as in the previous sections, we get
\[\text{MSD}(t)=2(1-\cos(\omega_{0}t))\big{\langle}u_{1}^{2}\big{\rangle}, \tag{56}\]
where
\[\big{\langle}u_{1}^{2}\big{\rangle}=\frac{T}{\pi}\int_{0}^{\pi/a} adq\frac{\left(\sin(aq)\right)^{2}}{\omega_{0}^{2}+\left[2K(1-\cos(aq)) \right]^{2}}. \tag{57}\]
Eq. (56) implies that MSD shows periodic motion, as in the case of the model considered in Sec. 5. A similar periodic motion of MSD has been previously reported in a numerical simulation of periodically deforming particles in two dimensions [49].
Figure 7: \(\big{\langle}u_{1}^{2}\big{\rangle}\) of periodically deforming particles. \(\big{\langle}u_{1}^{2}\big{\rangle}\) has a finite value for \(\omega_{0}>0\) and diverges in the limit \(\omega_{0}\to 0\).
### Order parameter
We plot \(\left\langle u_{1}^{2}\right\rangle\) in Fig. 7. The cage size \(\left\langle u_{1}^{2}\right\rangle\) has a finite value for \(\omega_{0}>0\). In this case, using Eq. (45) and repeating the same argument as in the previous section, we can conclude that the model possesses the crystalline order. In the limit \(\omega_{0}\to 0\), the cage size diverges \(\left\langle u_{1}^{2}\right\rangle\to\infty\), and thus one cannot prove the existence of the crystalline order. This is a natural result because when \(\omega_{0}=0\), the polydispersity \(\sigma\cos\psi_{i}\) destroys the crystalline order.
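The trend of Fig. 7 can be checked by direct integration of Eq. (57); in the sketch below, \(K=a=1\) and \(\sigma=0.1\) are arbitrary test values, and \(T=(K\sigma)^{2}/4\) follows the definition below Eq. (54).

```python
import numpy as np
from scipy.integrate import quad

# Cage size Eq. (57) for the periodically deforming chain.
K, a, sigma = 1.0, 1.0, 0.1     # arbitrary test values
T = (K * sigma) ** 2 / 4.0      # as defined below Eq. (54)

def cage_size(omega0):
    f = lambda q: a * np.sin(a * q) ** 2 / (omega0 ** 2 + (2.0 * K * (1.0 - np.cos(a * q))) ** 2)
    val, _ = quad(f, 0.0, np.pi / a, limit=400)
    return T * val / np.pi

for omega0 in (2.0, 1.0, 0.5, 0.1, 0.02):
    print(f"omega0 = {omega0}:  <u1^2> = {cage_size(omega0):.4f}")
# the cage size stays finite for omega0 > 0 and grows as omega0 -> 0, as in Fig. 7
```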
### Hyperuniformity
For small \(q\ll 1\), the static structure factor is
\[S(q)\sim q^{2}\left\langle\tilde{u}_{q}\tilde{u}_{-q}\right\rangle=\frac{Tq^{ 2}\sin(aq)^{2}}{\omega_{0}^{2}+\lambda_{q}^{2}}\sim\begin{cases}Ta^{2}q^{4}/ \omega_{0}^{2}&\omega_{0}>0\\ T/(Ka)^{2}&\omega_{0}=0.\end{cases} \tag{58}\]
For \(\omega_{0}>0\), we observe \(S(q)\sim q^{4}\), which is much smaller than that of the center-of-mass conserving dynamics Eq. (37) and periodic driving force Eq. (46). This is a consequence of the fact that the driving force Eq. (53) is a periodic function and simultaneously conserves the center of mass. For \(\omega_{0}=0\), \(S(q)\) converges to a finite value in the limit \(q\to 0\), meaning that the polydispersity destroys hyperuniformity.
Note that we used the fact that \(S_{0}(q)=0\) to derive Eq. (58), see Eq. (13). However, this condition is not satisfied for amorphous solids [50] or for the polydisperse systems studied in previous works [33, 49]. Our theory predicts that hyperuniformity is observed only in crystal phases of monodisperse systems.
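Before turning to the summary, the small-\(q\) exponents obtained in Secs. 3–6 can be collected in a single numerical check; the sketch below (arbitrary test values \(K=T=a=\omega_{0}=1\) and \(\theta=-0.3\) for case (i)) fits the log-log slope of the exact expressions in Eqs. (26), (37), (46), and (58).

```python
import numpy as np

# Log-log slopes of the small-q expressions Eqs. (26), (37), (46), (58).
K, T, a, omega0, theta = 1.0, 1.0, 1.0, 1.0, -0.3   # arbitrary test values
q = np.array([1e-2, 1e-3])
lam = 2.0 * K * (1.0 - np.cos(a * q))

models = {
    "(i)   correlated noise, Eq. (26)": T * q**2 / lam**(1.0 + 2.0 * theta),
    "(ii)  center-of-mass noise, Eq. (37)": (T / K) * q**2,
    "(iii) periodic driving, Eq. (46)": T * q**2 / (omega0**2 + lam**2),
    "(iv)  deforming particles, Eq. (58)": T * q**2 * np.sin(a * q)**2 / (omega0**2 + lam**2),
}
for name, S in models.items():
    nu = (np.diff(np.log(S)) / np.diff(np.log(q)))[0]
    print(f"{name}: S(q) ~ q^{nu:.2f}")
# expected exponents: -4*theta = 1.2, 2, 2, and 4, respectively
```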
## 7 Summary and discussions
### Summary
In this work, we investigated the one-dimensional harmonic chain far from equilibrium. We considered the four types of driving forces that do not satisfy the detailed balance: (i) temporally correlated noise with power-law spectrum \(D(\omega)\sim\omega^{-2\theta}\), (ii) center-of-mass conserving noise, (iii) periodic driving force, and (iv) periodic deformation. For the driving force (i) with \(\theta>-1/4\), the model undergoes the anomalous diffusion \(\mathrm{MSD}(t)\sim t^{1/2+2\theta}\). On the contrary, for the driving forces (i) with \(\theta<-1/4\), and (ii)-(iv), MSD(t) remains finite. As a consequence, the crystalline order parameter has a finite value, unlike the equilibrium systems where the Mermin-Wagner theorem prohibits the long-range crystalline order in one dimension. We also discussed hyperuniformity of the density fluctuations by observing the small \(q\) behavior of the static structure factor \(S(q)\). We found \(S(q)\sim q^{-4\theta}\) for the driving force (i), \(S(q)\sim q^{2}\) for (ii) and (iii), and \(S(q)\sim q^{4}\) for (iv). Therefore, the driving forces (i) with \(\theta<0\), and (ii)-(iv) yield hyperuniformity. Given the simplicity of the model, it is remarkable to obtain such rich results. We hope our work will stimulate further interest and progress of the long-range order [51, 52, 53, 30, 54, 55, 26] and hyperuniformity [13, 15, 57] in non-equilibrium low-dimensional systems.
### Hyperuniformity
For the driving forces (i) with \(\theta<0\) and (iii), the power spectrum of the noise vanishes in the limit of small frequency: \(\lim_{\omega\to 0}D_{q}(\omega)=0\). This means that the fluctuations of the noise are highly suppressed on long time scales, _i.e._, the noise is temporally hyperuniform. For the driving force (ii), \(\lim_{q\to 0}D_{q}(\omega)=0\), implying that the noise is spatially hyperuniform.
For the driving force (iv), \(D_{q}(\omega)\) vanishes in the limits \(\omega\to 0\) and/or \(q\to 0\), _i.e._, the noise is spatio-temporally hyperuniform. Our work demonstrated that these spatial and temporal hyperuniformity of the noise yield hyperuniformity of the density fluctuations [30].
For more general and quantitative discussions, we consider the spatio-temporally correlated noise whose Fourier spectrum, for \(\omega\ll 1\) and \(q\ll 1\), is given by
\[D_{q}(\omega)\sim\omega^{-2\theta}q^{-2\rho}. \tag{59}\]
The driving force (i) corresponds to \(\rho=0\), (ii) corresponds to \(\rho=-1\) and \(\theta=0\), (iii) corresponds to \(\rho=0\) and \(\theta\to-\infty\), and (iv) corresponds to \(\rho=-1\) and \(\theta\to-\infty\). For the noise to be hyperuniform \(\lim_{q\to 0,\omega\to 0}D_{q}(\omega)=0\), \(\rho\) and \(\theta\) should satisfy \(\rho\leq 0\), \(\theta\leq 0\) and \((\rho,\theta)\neq(0,0)\). The static structure factor \(S(q)\) for \(q\ll 1\) is calculated as [30]
\[S(q)\approx q^{2}\left\langle\tilde{u}_{q}\tilde{u}_{-q}\right\rangle=\frac{q^{2}}{\pi}\int_{0}^{\infty}d\omega\frac{D_{q}(\omega)}{\omega^{2}+\lambda_{q}^{2}}\sim\begin{cases}q^{-2\rho-4\theta}&\theta>-1/2\\ q^{2-2\rho}&\theta\leq-1/2,\end{cases} \tag{60}\]
where the phenomenological cut-off \(\omega_{c}\) is needed for the integral to converge when \(\theta\leq-1/2\), see Eq. (28). The above equation implies \(\lim_{q\to 0}S(q)=0\) if the noise is hyperuniform.
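The exponent in the \(\theta>-1/2\) branch of Eq. (60) can also be checked by performing the \(\omega\)-integral numerically; in the sketch below, \((\rho,\theta)=(-0.5,-0.2)\) and \(K=a=1\) are arbitrary test choices.

```python
import numpy as np
from scipy.integrate import quad

# Numerical check of S(q) ~ q^{-2 rho - 4 theta} for the theta > -1/2 branch of Eq. (60).
K, a = 1.0, 1.0
theta, rho = -0.2, -0.5   # arbitrary test values with |theta| < 1/2

# substitute w = lam_q * x:  int_0^inf w^{-2theta}/(w^2 + lam_q^2) dw = lam_q^{-1-2theta} * I
I, _ = quad(lambda x: x ** (-2.0 * theta) / (x ** 2 + 1.0), 0.0, np.inf, limit=200)

def S_of_q(q):
    lam = 2.0 * K * (1.0 - np.cos(a * q))
    return (q ** 2 / np.pi) * q ** (-2.0 * rho) * lam ** (-1.0 - 2.0 * theta) * I

qs = np.array([1e-1, 1e-2])
Ss = np.array([S_of_q(q) for q in qs])
print(np.diff(np.log(Ss)) / np.diff(np.log(qs)))   # -> -2*rho - 4*theta = 1.8
```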
### Anomalous diffusion of spatio-temporally correlated noise
Here we briefly discuss the anomalous diffusion of a one-dimensional system driven by the spatio-temporally correlated noise Eq. (59). For that purpose, we investigate the model in the continuum limit Eq. (20). The scaling transformations of Eq. (20), \(x\to bx\), \(t\to b^{z_{t}}t\), \(u\to b^{z_{u}}u\), lead to \(z_{t}=2\) and \(z_{u}=1/2+2\theta+\rho\) [30]. Then, we get the anomalous diffusion MSD \(\sim t^{2z_{u}/z_{t}}\sim t^{1/2+2\theta+\rho}\) for \(1/2+2\theta+\rho>0\); for \(1/2+2\theta+\rho<0\), on the contrary, the diffusion is completely suppressed and the model has the long-range crystalline order.
### Hyperuniformity and crystalline order
For the existence of the crystalline order, \(\left\langle u_{1}^{2}\right\rangle=\sum_{q}\left\langle\tilde{u}_{q}\tilde{ u}_{-q}\right\rangle/N\) should remain finite in the thermodynamic limit \(N\to\infty\). A necessary condition is \(\left\langle\tilde{u}_{q}\tilde{u}_{-q}\right\rangle\propto q^{-\mu}\) with \(\mu<1\) for \(q\ll 1\), which is tantamount to \(S(q)\approx q^{2}\left\langle\tilde{u}_{q}\tilde{u}_{-q}\right\rangle\sim q ^{2-\mu}\) for \(q\ll 1\). In other words, for the existence of the crystalline order in one dimension, the density fluctuations should exhibit sufficiently strong hyperuniformity \(S(q)\sim q^{\nu}\) with \(\nu>1\). This condition is more stringent than in two-dimensional systems, where \(\nu>0\) is enough to stabilize the long-range crystalline order [26]. For \(0<\nu<1\), the one-dimensional system does not have the crystalline order, but still exhibits hyperuniformity, or more specifically, disordered hyperuniformity [12].
The generalization of the above argument to higher dimensions \(d\) is straightforward. Let \(\tilde{u}(\mathbf{q})=\{\tilde{u}_{a}(\mathbf{q})\}_{a=1,\cdots,d}\) be the Fourier component of the displacement vector. Assuming that the system is isotropic, \(\left\langle\tilde{u}_{a}\tilde{u}_{b}\right\rangle=\delta_{ab}\left\langle\tilde{u}^{2}\right\rangle\), one obtains \(S(\mathbf{q})\approx\left|\mathbf{q}\right|^{2}\left\langle\tilde{u}(\mathbf{q})\tilde{u}(-\mathbf{q})\right\rangle\) for the harmonic lattice in \(d\) dimensions, see Ref. [13]. Then, hyperuniformity \(S(\mathbf{q})\sim\left|\mathbf{q}\right|^{\nu}\) (\(\nu>0\)) implies \(\left\langle\tilde{u}(\mathbf{q})\tilde{u}(-\mathbf{q})\right\rangle\approx\left|\mathbf{q}\right|^{-2}S(\mathbf{q})\sim\left|\mathbf{q}\right|^{\nu-2}\). For the long-range crystalline order to exist, the particles should localize around their lattice positions. In other words, the mean-squared displacement from the lattice position \(\left\langle u(\mathbf{x})^{2}\right\rangle\) should remain finite. A rough estimation of this quantity in \(d\) dimensions is [26]
\[\left\langle u(\mathbf{x})^{2}\right\rangle=\frac{1}{(2\pi)^{d}}\int d\mathbf{q}\left\langle \tilde{u}(\mathbf{q})\tilde{u}(-\mathbf{q})\right\rangle\sim\int_{0}^{q_{D}}dqq^{d-3+\nu}, \tag{61}\]
where \(q_{D}\) denotes the Debye cut-off. Eq. (61) remains finite below the lower critical dimension
\[d_{\rm low}=2-\nu. \tag{62}\]
Therefore, the crystalline order can exist for \(\nu>0\) in \(d=2\) [26], and for \(\nu>1\) in \(d=1\). The above argument also implies that giant number fluctuations, \(S(\mathbf{q})\sim|\mathbf{q}|^{\nu}\) with \(\nu<0\), increase \(d_{\rm low}\). Using Eq. (60), we get the lower critical dimension for crystallization of the systems driven by the spatio-temporally correlated noise
\[d_{\rm low}=\begin{cases}2+2\rho+4\theta&\theta>-1/2\\ 2\rho&\theta\leq-1/2\end{cases}. \tag{63}\]
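The consistency between Eq. (62) and Eq. (63) can be checked with a few lines of code; the representative \((\rho,\theta)\) pairs below are illustrative choices, with \(\theta\to-\infty\) approximated by \(\theta=-1\) (any \(\theta\leq-1/2\) gives the same branch).

```python
def nu(rho, theta):
    # small-q exponent of S(q) ~ q^nu from Eq. (60)
    return -2.0 * rho - 4.0 * theta if theta > -0.5 else 2.0 - 2.0 * rho

def d_low(rho, theta):
    # lower critical dimension for the crystalline order, Eq. (63)
    return 2.0 + 2.0 * rho + 4.0 * theta if theta > -0.5 else 2.0 * rho

cases = {"(i), theta=-0.3": (0.0, -0.3), "(ii)": (-1.0, 0.0),
         "(iii)": (0.0, -1.0), "(iv)": (-1.0, -1.0)}
for name, (r, t) in cases.items():
    print(f"{name}: nu = {nu(r, t)}, d_low = {d_low(r, t)} = 2 - nu = {2.0 - nu(r, t)}")
```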
### Comparison with \(O(n)\) model
In a previous work, we have investigated the \(O(n)\) model driven by the correlated noise with the noise spectrum \(D(\omega,q)\sim\omega^{-2\theta}q^{-2\rho}\) [30]. For the model-A dynamics [58], the lower critical dimension for the continuous symmetry breaking is \(d_{\rm low}=2+2\rho+4\theta\) for \(\theta>-1/2\) and \(d_{\rm low}=2\rho\) for \(\theta\leq-1/2\), which agrees with Eq. (63). This is a reasonable result because the order parameter of the crystallization is a non-conserved quantity. On the contrary, since the density is a conserved quantity [58], the prediction for hyperuniformity Eq. (60) agrees with that of the model-B dynamics of the \(O(n)\) model. As a consequence, the relation between hyperuniformity and \(d_{\rm low}\), Eq. (62), is not consistent with either the model-A or the model-B dynamics of the \(O(n)\) model [30]. This result highlights an essential difference between the crystallization of particle systems and the ferromagnetic phase transition of lattice spin systems. Further studies would be beneficial to elucidate the similarities and differences between these models.
### Does fluid-solid transition occur in one dimension?
Our analysis for the driving forces (i) for \(\theta<-1/4\) and (ii)-(iv) showed that the crystalline order can emerge even in one dimension, which is prohibited in equilibrium by the Mermin-Wagner theorem. One natural question is whether the systems driven by these driving forces exhibit liquid-solid phase transitions on increasing density. For the harmonic potential studied in this manuscript, the dynamics of the relative displacement \(u_{j}\), Eq. (2), does not depend on the lattice spacing \(a\). Therefore, the qualitative behavior of the model is also density-independent. What will happen for more realistic interaction potentials, such as the Lennard-Jones potential [59], one-sided harmonic potential, Hertzian potential [47], and so on? Extensive numerical simulations of center-of-mass conserving systems [25, 26], chiral active particles [31, 32], and periodically deforming particles [33] for a wide range of density would be beneficial to elucidate this point.
## Acknowledgements
We thank Y. Nishikawa and Y. Kuroda for useful comments.
Funding information: This project was supported by JSPS KAKENHI Grant Number 23K13031.
|
2301.13564 | Superconducting Diode Effect -- Fundamental Concepts, Material Aspects,
and Device Prospects | Superconducting diode effect, in analogy to the nonreciprocal resistive
charge transport in semiconducting diode, is a nonreciprocity of
dissipationless supercurrent. Such an exotic phenomenon originates from
intertwining between symmetry-constrained supercurrent transport and intrinsic
quantum functionalities of helical/chiral superconductors. In this article,
research progress of superconducting diode effect including fundamental
concepts, material aspects, device prospects, and theoretical/experimental
development is reviewed. First, fundamental mechanisms to cause superconducting
diode effect including simultaneous space-inversion and time-reversal symmetry
breaking, magnetochiral anisotropy, interplay between spin-orbit interaction
energy and the characteristic energy scale of supercurrent carriers, and
finite-momentum Cooper pairing are discussed. Second, the progress of
superconducting diode effect from theoretical predictions to experimental
observations are reviewed. Third, interplay between various system parameters
leading to superconducting diode effect with optimal performance is presented.
Then, it is explicitly highlighted that nonreciprocity of supercurrent can be
characterized either by current-voltage relation obtained from resistive
direct-current measurements in the metal-superconductor fluctuation region
($T\approx T_c$) or by current-phase relation and nonreciprocity of superfluid
inductance obtained from alternating-current measurements in the
superconducting phase ($T<T_c$). Finally, insight into future directions in
this active research field is provided with a perspective analysis on
intertwining between band-topology and helical superconductivity, which could
be useful to steer the engineering of emergent topological superconducting
technologies. | Muhammad Nadeem, Michael S. Fuhrer, Xiaolin Wang | 2023-01-31T11:32:44Z | http://arxiv.org/abs/2301.13564v1 | # Superconducting Diode Effect
###### Abstract
Superconducting diode effect, in analogy to the nonreciprocal resistive charge transport in semiconducting diode, is a nonreciprocity of dissipationless supercurrent. Such an exotic phenomenon originates from intertwining between symmetry-constrained supercurrent transport and intrinsic quantum functionalities of helical/chiral superconductors. In this article, research progress of superconducting diode effect including fundamental concepts, material aspects, device prospects, and theoretical/experimental development is reviewed. First, fundamental mechanisms to cause superconducting diode effect including simultaneous space-inversion and time-reversal symmetry breaking, magnetochiral anisotropy, interplay between spin-orbit interaction energy and the characteristic energy scale of supercurrent carriers, and finite-momentum Cooper pairing are discussed. Second, the progress of superconducting diode effect from theoretical predictions to experimental observations are reviewed. Third, interplay between various system parameters leading to superconducting diode effect with optimal performance is presented. Then, it is explicitly highlighted that nonreciprocity of supercurrent can be characterized either by current-voltage relation obtained from resistive direct-current measurements in the metal-superconductor fluctuation region (\(T\approx T_{c}\)) or by current-phase relation and nonreciprocity of superfluid inductance obtained from alternating-current measurements in the superconducting phase (\(T<T_{c}\)). Finally, insight into future directions in this active research field is provided with a perspective analysis on intertwining between band-topology and helical superconductivity, which could be useful to steer the engineering of emergent topological superconducting technologies.
**Keywords:** Superconducting diode effect, Josephson diode effect, Nonreciprocal transport, Magnetochiral anisotropy, Spin-orbit coupling, Helical superconductivity, Chiral superconductors
Superconducting diode effect (SDE), a recently observed quantum phenomenon in noncentrosymmetric superconductors (SCs) with finite-momentum Cooper pairing, refers to the nonreciprocity of supercurrent [1; 2; 3]. As depicted by the word _'nonreciprocity'_ and _'diode effect'_, the system allows supercurrent to flow only in one direction. Similar to the role of semiconducting diode [4; 5], which is one of the central building blocks for (opto-)electronic technologies, e.g., current rectifiers, voltage-controlled oscillators, alternating-direct current converters, LEDs, photodetectors, and solar cells etc., SDE envisions novel device applications in superconducting electronics [6; 7], superconducting spintronics [8; 9], and quantum information and communication technology (QICT) [10; 11].
After the recent observation of SDE for the critical current (fluctuation regime) in a symmetric superconductor [12] and for the supercurrent (far below the fluctuation regime) in the Josephson junction (JJ) version [13], nonreciprocity has emerged as an active research topic in the field of superconductivity. For instance, after the seminal observation of SDE in an artificially fabricated junction-free superconducting [Nb/V/Ta]\({}_{n}\) superlattice, reported for the first time by F. Ando et al. [12] in 2020, SDE has been experimentally observed in a number of junction-free SCs [14; 15; 16; 17; 18]. Similarly, the JJ version of SDE in a symmetric Al/InAs-2DEG/Al junction, first reported by Baumgartner et al. [13] in 2022, has been followed by SDE experiments on various JJs utilizing different materials acting as a normal barrier or weak link sandwiched between conventional SCs [19; 20; 21; 22; 23; 24]. In addition, observation of SDE has also been demonstrated in engineered superconducting systems, e.g., superconducting thin films with conformal-mapped nanoholes [25]. The interest in nonreciprocal supercurrent transport has been further advanced by the recent demonstration of SDE in unconventional/topological superconducting materials. For instance, apart from conventional SCs, SDE has also been observed in unconventional SCs such as magic-angle-twisted bilayer-graphene (MATBLG) [24] and small-twist-angle trilayer graphene (STATLG) [18]. Furthermore, SDE has also been demonstrated in topological SCs [16; 17; 23] where superconductivity coexists with nontrivial band-topology, e.g., a topological JJ [23] where the type-II Dirac semimetal NiTe\({}_{2}\) is sandwiched between the conventional s-wave spin-singlet superconductor Nb, and topological insulator-superconductor interfaces such as the Bi\({}_{2}\)Te\({}_{3}\)/FeTe heterostructure [16] and the Bi\({}_{2}\)Te\({}_{3}\)/PdTe\({}_{2}\) heterostructure [17].
Following these intriguing experimental observations, and inspired by theoretical work by V. M. Edelstein [26; 27; 28], SDE has been theorized by a number of research groups. For instance, by employing mean-field (MF), Bogoliubov-de Gennes (BdG) and Ginzburg-Landau (GL) theories, theoretical insights have recently been presented for SDE in junction-free bulk SCs [29; 30; 31; 32; 33; 34; 35; 36; 37; 38] as well as for its JJ version [39; 40; 41; 42; 43; 44; 45]. Although the intrinsic mechanism causing SDE in junction-free SCs, i.e., nonreciprocity of the depairing critical current, has recently been clarified, theoretical modelling for potential spin-orbit coupled bulk SCs is still in its infancy [29; 30; 31; 32; 33; 34; 35; 36; 37; 38]. In comparison, the underlying mechanism of nonreciprocal supercurrent and SDE is better understood in engineered systems. For instance, the diode effect can be engineered in a JJ by controlling Andreev bound states in the normal-metal barrier or weak link. M. Davydova et al. [40] showed that such effects in a short JJ can arise from both the Doppler energy shift in the Andreev bound states due to finite-momentum Cooper pairing and the asymmetric current from the continuum of states due to a phase-independent contribution. It has also been shown that SDE in JJs [13] and in conformal-mapped nanoholes [25] is well simulated by BdG [13] and time-dependent GL [25] theories. Furthermore, even before the experimental demonstration of SDE in artificial devices [13; 25], similar nonreciprocal effects had been recognized in several engineered systems [46; 47; 48; 49; 50; 51; 52; 53; 54; 55; 56], e.g., conventional JJs [46; 47; 48; 49; 50; 51], the domain-wall superconducting state [52], a ferromagnetic JJ with a spin-flipper weak link acting as a quantized Josephson phase battery [53], and topological JJs [54; 55; 56].
In nonreciprocal quantum materials (NRQM) lacking space-inversion symmetry, direction-selective charge transport can generally be realized whether time-reversal symmetry is broken or not. However, thus far, experimental observation of nonreciprocal supercurrent has only been reported in SCs with simultaneous space-inversion and time-reversal symmetry breaking leading to magnetochiral effects. Space-inversion symmetry is either intrinsically broken or it can be broken by applying an electric field externally. Similarly, time-reversal symmetry can be broken either by applying an external magnetic field or through intrinsic magnetization, leading to the observation of field-free SDE [15; 18; 19; 20; 21; 24]. In time-reversal asymmetric SCs, nonreciprocity of supercurrent is guaranteed if the following symmetry-imposed constraint, inducing finite-momentum Cooper pairing, is satisfied: both the orientation along which inversion/mirror symmetry is broken and the direction along which (super)current is flowing must be perpendicular to the magnetic-field orientation or magnetization polarization. Thus, owing to the magnetochiral effects, nonreciprocal supercurrent can be switched by reversing the orientation/polarization of magnetic-field/magnetization.
In this article, recent theoretical and experimental progress on SDE, including fundamental concepts, material aspects, and device prospects, is reviewed. In section I, fundamental concepts and various mechanisms to cause SDE, i.e., nonreciprocal charge transport, magnetochiral anisotropy (MCA), breaking of space-inversion/time-reversal symmetry, type of associated spin-orbit interaction (SOI), origin and orientation of magnetization, upper critical field, and finite-momentum Cooper pairing or helical/chiral superconductivity are discussed. We highlight how all these mechanisms are closely related to each other and, especially, their intertwining with SOI, which is a fundamental relativistic quantum functionality. In section II, theoretical progress is reviewed for bulk and engineered SCs. Section III covers the material aspects. Various superconducting materials, in which observation of SDE has been reported, are classified based on the geometric structure of the diode device, nature of SOI, origin and orientation of magnetization, and their topological character. Section IV demonstrates how the strength/efficiency of SDE depends on a range of parameters such as magnetic field, temperature, Cooper pairing momentum, SOI, chemical potential [31], next-nearest neighbour hopping [29], disorder [32], and design or characteristics of a JJ [13; 45]. Section V covers two main techniques employed for the observation of SDE: nonreciprocity of critical current via resistive direct-current (dc) measurements and nonreciprocity of supercurrent via inductive alternating-current (ac) measurements. In section VI, the article is concluded with a perspective on future directions and device prospects of SDE. Since SDE is a novel quantum mechanical phenomenon and the system hosting this effect may prove to be a key component of emergent quantum technologies, we hope this review article may provide a deeper understanding of the fundamental mechanisms/concepts of SDE and may facilitate the search for novel superconducting systems for emergent superconducting technologies such as electronics, spintronics, optoelectronics, and fault-tolerant quantum computing.
## I Mechanisms of superconducting diode effect
The origin of SDE manifests in a number of physical phenomena, imposed by transport mechanisms, symmetry constraints, and underlying quantum functionalities of superconducting materials. In this section, it is explicitly demonstrated how nonreciprocity of supercurrent is intertwined with underlying symmetries of noncentrosymmetric systems, e.g., nonreciprocity driven by MCA in time-reversal asymmetric systems and that induced by shift current or Coulomb interactions in time-reversal symmetric systems. It is also highlighted how nonreciprocity of supercurrent is associated with nonreciprocal behaviour of the physical quantities characterizing the current-voltage (I-V) relation and the current-phase relation (CPR), e.g., resistance and inductance, respectively. Quantum functionalities of SCs, such as SOI, Berry phase, and band-topology, and their effects on the SDE efficiency are also discussed. Finally, intertwining between nonreciprocity and helical superconductivity with finite-momentum spin-singlet or spin-triplet Cooper pairing is presented.
Figure 1: **Diode effect in semiconductors and SCs.** Here straight black lines represent supercurrent flowing due to coherent Cooper pairs while the wiggly black lines represent normal current flowing due to depaired electrons. **(a)** Diode effects in noncentrosymmetric bulk semiconductors and pn junctions. (_Left_) Diode effects, such as rectification, can be realized in a junction-free and noncentrosymmetric bulk electrical conductor (top) and at a semiconducting pn junction (bottom). (_Right_) I-V curve for noncentrosymmetric bulk semiconductors (dashed) and pn junctions (solid). **(b)** Superconducting diode effect in junction-free noncentrosymmetric bulk crystals and JJs. (_Left_) SDE in a bulk crystal with an order parameter \(\Delta e^{i\phi}\) (top) and SDE in a JJ between two SCs with phases \(\phi_{L}\) and \(\phi_{R}\), which are separated by a normal barrier (bottom). (_Right_) I-V curves for SDE in a bulk crystal (solid red lines) and a JJ (solid red lines and dashed red curves) show that an SDE occurs when \(I_{c+}\neq I_{c-}\), while the superconductor becomes a normal metal when \(I\) is larger than the critical current \(I_{c+}\) (along the positive direction) or \(I_{c-}\) (along the negative direction). For a JJ version, \(I_{r+}\) and \(I_{r-}\) represent two critical return currents in the downward sweep measurements and lead to another nonreciprocal effect when \(I_{r+}\neq I_{r-}\). **(c)** Schematic illustration of nonreciprocal current/supercurrent in noncentrosymmetric bulk crystals. (_Left_) In the normal state of a noncentrosymmetric crystal, whose MCA coefficient (\(\gamma_{N}\)) is usually tiny, non-linear I-V curves (red and blue) show a small deviation from the linear I-V curve (gray), indicating a small nonreciprocal current. (_Right_) In the fluctuation regime (resistive superconducting state) of a noncentrosymmetric crystal, whose MCA coefficient (\(\gamma_{S}\)) becomes much larger than that of the normal state, the I-V curve shows a large nonreciprocal current below the critical current (\(I_{c}\)), whereas it resembles that of the normal state and remains unchanged at \(I>I_{c}\). Here \(\eta\) represents the efficiency of a superconducting diode, which changes its sign when the polarity of the magnetic field (B) is reversed. **(d)** Strength of superconducting diode effect. (_Left_) The superconducting rectification becomes maximal when \(+I_{c}\) (or \(-I_{c}\)) remains finite but \(-I_{c}\) (or \(+I_{c}\)) becomes zero. As defined by the diode efficiency, equation (7), the maximum difference of critical depairing currents \(+I_{c}\) and \(-I_{c}\) can be about a factor of 2. (_Right_) In the junction-free noncentrosymmetric bulk material, rectification can be induced by applying a magnetic field perpendicular to the directions of both the polar axis and the current.
### Nonreciprocity and magnetochirality
In condensed matter, nonreciprocity refers to the dependence of physical quantities on the direction of propagation or current flow. A prototypical example of nonreciprocal transport is the diode effect, which refers to highly direction-selective electron transport in systems lacking a spatial inversion center. Until recently, nonreciprocity was thought to be a transport phenomenon associated with dissipative materials. For instance, in conventional semiconductors, where resistance is the nonreciprocal quantity, nonreciprocity refers to charge transport that is sensitive to the polarity of the current or bias potential. Such nonreciprocal charge transport leads to a diode effect in a spatially asymmetric pn junction [4; 5], in which the spatial asymmetry of the junction is associated with electron-hole asymmetry across the contact of n- and p-type semiconductors.
In modern quantum condensed matter physics, in addition to electron-hole asymmetric junctions, nonreciprocal charge transport can be induced in spatially symmetric devices, in which resistance is direction-selective when inversion and/or time-reversal symmetry are broken. This can be realized, for instance, by externally applying an electric and/or a magnetic field orthogonal to each other and to the direction along which current is traversing. It implies that nonreciprocal transport can be treated as a bulk property of noncentrosymmetric quantum materials [57; 58]. In noncentrosymmetric systems, i.e., in which inversion symmetry is broken, nonreciprocal responses can be classified into four categories [57]: (i) linear- and (ii) nonlinear-response in time-reversal symmetric systems, and (iii) linear- and (iv) nonlinear-response in time-reversal asymmetric systems.
When both inversion and time-reversal symmetry are simultaneously broken, a closely related phenomenon leading to a nonlinear nonreciprocal response is MCA [16; 27; 59; 60; 58; 59; 70]. In the linear response regime of noncentrosymmetric systems, broken time-reversal symmetry produces a finite magnetochiral effect, as recognized by Onsager's reciprocal theorem [71; 57; 66; 72], and the longitudinal transport coefficients become dependent on the polarity of the current. Onsager's reciprocal theorem, and thus the magnetochiral effect and direction-selective transport, can be generalized to the nonlinear regime of both (semi)conductors [59; 62] and SCs [27; 28].
#### ii.1.1 Nonreciprocity of supercurrent
In 1996, even before the prediction/observation of nonreciprocity in (semi)conductors by Rikken et al. [59; 62], V. M. Edelstein [28] proposed nonreciprocity of the critical supercurrent. Following his earlier work characterizing Cooper pairing in noncentrosymmetric SCs [26] and describing magnetoelectric effects in polar SCs [27], V. M. Edelstein [28] proposed that if the mixed product \((\mathbf{c}\times\mathbf{B})\cdot\mathbf{\hat{j}_{c}}\) is non-vanishing in polar SCs, then the magnitude of the critical current \(j_{c}(B)\) depends on the sign of this mixed product, i.e., the critical current appears to be different for the two opposite directions. By employing GL theory for a thin film of a polar superconductor, the expression for the nonreciprocity of the critical current reads [28]
\[j_{c}(B)=j_{c}(0)[1+\gamma_{j}(\mathbf{c}\times\mathbf{B})\cdot\mathbf{\hat{j}}] \tag{1}\]
Here \(\mathbf{c}\) is the unit vector along the polar axis, \(\mathbf{\hat{j}}\) is the unit vector along the supercurrent, and \(\mathbf{B}\) is an in-plane magnetic field. The exact expression for the observable can be found in reference [28].
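As a minimal numerical illustration of equation (1), the following Python sketch (with purely illustrative parameter values, not taken from ref. [28] or any experiment) evaluates the critical current for the two opposite current directions:

```python
import numpy as np

# Minimal sketch of the critical-current anisotropy of eq. (1):
# j_c(B) = j_c(0) [1 + gamma_j (c x B) . j_hat].
# All numbers are illustrative placeholders.
jc0 = 1.0                            # critical current at B = 0 (arbitrary units)
gamma_j = 0.05                       # MCA-like coefficient (arbitrary units per tesla)
c_hat = np.array([0.0, 0.0, 1.0])    # polar axis (out of plane)
B = np.array([0.0, 0.5, 0.0])        # in-plane field, 0.5 T along y

def jc(direction):
    """Critical current for a supercurrent flowing along `direction`."""
    j_hat = direction / np.linalg.norm(direction)
    return jc0 * (1.0 + gamma_j * np.dot(np.cross(c_hat, B), j_hat))

print(jc(np.array([1.0, 0.0, 0.0])), jc(np.array([-1.0, 0.0, 0.0])))
# The two values differ because (c x B) . j_hat changes sign with the current
# direction; for B parallel to the current the mixed product vanishes.
```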
MCA and nonreciprocity have been observed in (semi)conductors [60; 61; 62; 63; 64; 69] that show resistive current as well as in SCs [67; 68; 69; 70] that display dissipationless supercurrent. So a question arises naturally: how can nonreciprocity be defined consistently in these two systems with completely contrasting behaviour? As first pointed out by Rikken et al. [59], when both inversion and time-reversal symmetries are broken, the finite MCA coefficient \(\gamma\) gives rise to different resistances for electric currents traversing in different (opposite) directions. That is, MCA can be defined as the inequivalence of \(R(+I)\) and \(R(-I)\). In (semi)conductors, the resistances along opposite directions differ, i.e. \(R(+I)\neq R(-I)\), but both \(R(+I)\) and \(R(-I)\) normally take finite values. On the other hand, in SCs, the situation becomes more drastic: either one of \(R(\pm I)\) remains finite while the other vanishes completely.
With this consideration, it becomes more appropriate in SCs to define nonreciprocity in terms of the (super)current. That is, as shown in figure 1, nonreciprocity in SCs means that a supercurrent flows along one direction while a normal current flows along the opposite one. Observation of such a situation is most probable near the critical temperature \(T_{c}\), i.e., in the fluctuation regime of the metal-superconductor resistive transition, where the critical current differs along opposite directions, i.e. \(I_{c+}\neq I_{c-}\). Thus, if the current is tuned between \(I_{c+}\) and \(I_{c-}\), the system displays zero resistance for the supercurrent but nonzero resistance for the normal current.
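As a toy illustration of this rectification window (all numbers below are assumed for illustration only), a current bias chosen between \(|I_{c-}|\) and \(I_{c+}\) is dissipationless in one direction but resistive in the other:

```python
def resistance(I, Ic_plus=1.2, Ic_minus=-0.8, R_normal=10.0):
    """Zero-resistance window bounded by two unequal critical currents (toy model)."""
    return 0.0 if Ic_minus < I < Ic_plus else R_normal

I_bias = 1.0                 # chosen between |Ic_minus| = 0.8 and Ic_plus = 1.2
print(resistance(+I_bias))   # 0.0  -> supercurrent, no dissipation
print(resistance(-I_bias))   # 10.0 -> normal, resistive current
```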
One can understand how the conductance varies while going from the normal to the superconducting phase by comparing the relevant energy scales. The linear resistance \(R_{0}\) is normally scaled by the Fermi energy \(E_{F}\), i.e., the kinetic energy of the electrons, while the MCA coefficient \(\gamma\) depends upon the strength of the SOI and the magnetic field. Correspondingly, the nonlinear resistance induced by MCA may be treated as a perturbation to \(R_{0}\). In the normal conducting phase, because the SOI energy (\(E_{SOI}\)) and the Zeeman energy (\(\mu_{B}B\)) are usually much smaller (by many orders of magnitude) than \(E_{F}\), the MCA coefficient \(\gamma\rightarrow\gamma_{N}\) is typically very small, usually of the order of \(\sim 10^{-3}\) to \(10^{-2}\) T\({}^{-1}\) A\({}^{-1}\) in typical metals [59; 61; 67]. However, as the superconducting phase develops, the superconducting transition temperature \(T_{c}\) or the superconducting gap \(\Delta_{sc}\) appears as a new energy scale. That is, the energy scale in SCs against which the strength of the SOI has to be compared is the superconducting gap and not the Fermi energy. Since the energy scale (\(\sim\)meV) in
the SCs is much smaller than the Fermi energy (\(\sim\)eV) in metals, the effects of the SOI and Zeeman energy are greatly enhanced in the superconducting phase [66; 67]. As a result, near the superconducting transition temperature \(T\gtrsim T_{c}\), the MCA coefficient becomes reasonably large [57] and, thus, the paraconductivity [73] above \(T_{c}\) becomes nonreciprocal. In the superconducting fluctuation region, i.e. when \(T\to T_{c}\) and the superconducting order parameter \(\Delta_{sc}\) develops, a sizable enhancement in the MCA coefficient \(\gamma_{S}\) is found (refs. [16; 66; 67]) and robust nonreciprocal charge transport has been demonstrated in noncentrosymmetric SCs [67; 70]. For instance, by employing GL theory for the Ising-type SC MoS\({}_{2}\), R. Wakatsuki et al. [67] showed that the ratio of the MCA coefficients in the superconducting resistive region (\(\gamma_{S}\)) and the normal resistive region (\(\gamma_{N}\)) is quite large
\[\frac{\gamma_{S}}{\gamma_{N}}\sim\left(\frac{E_{F}}{k_{B}T_{c}}\right)^{3} \tag{2}\]
Such anomalous enhancement of the MCA coefficient, as it is associated with the energy-scale difference between the superconducting gap and the Fermi energy, can be considered an intrinsic feature of both Rashba- and Ising-type noncentrosymmetric SCs [66]. However, mainly due to a gradual decrease in the linear resistance \(R_{0}\) during the metal-superconducting transition, \(R_{0}\) remains larger (by orders of magnitude) than the nonlinear resistance in low-dimensional superconducting materials such as MoS\({}_{2}\) (ref. [67]), WS\({}_{2}\) (ref. [68]) and Bi\({}_{2}\)Te\({}_{3}\)/FeTe (ref. [16]). As a result, the rectification ratio in these superconducting materials is too low for device implementation. In this regard, it is highly desirable to search for novel mechanisms/principles that enlarge the rectification effect and guide the design of efficient superconducting diodes.
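To get a feel for the scale of the enhancement in equation (2), the following back-of-the-envelope estimate uses illustrative values of \(E_{F}\) and \(T_{c}\) (assumed here, not quoted from the references):

```python
# Order-of-magnitude estimate of gamma_S / gamma_N ~ (E_F / k_B T_c)^3, eq. (2).
k_B = 8.617e-5   # Boltzmann constant in eV/K
E_F = 0.1        # assumed Fermi energy, eV
T_c = 10.0       # assumed transition temperature, K
print(f"gamma_S/gamma_N ~ {(E_F / (k_B * T_c))**3:.1e}")   # ~1e6 for these inputs
```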
#### ii.1.2 From resistance to supercurrent
Rikken et al. [59; 62] generalized Onsager's reciprocal theorem to the nonlinear regime and gave a heuristic argument for nonreciprocity and MCA in two-dimensional diffusive conductors. In their seminal proposal of MCA in (semi)conductors, Rikken et al. [59] suggested that the nonreciprocal nonlinear resistive response, characterized by direction-dependent I-V characteristics, can be described by a current-dependent resistance \(R(I)\) as
\[R(I)=R_{0}[1+\beta B^{2}+\gamma(\mathbf{B}\times\mathbf{r})\cdot\mathbf{I}] \tag{3}\]
Here \(R\), \(\mathbf{B}\), and \(\mathbf{I}\) are the resistance, the magnetic field, and the electric current, respectively. The unit vector \(\mathbf{r}\) represents the direction along which mirror symmetry is broken. On the right-hand side, the first term is the resistance at zero magnetic field, the second term denotes the normal magnetoresistance, and the third term corresponds to the MCA. The dependence of the MCA coefficient \(\gamma\) on the electric current and magnetic field, as well as on their mutual orientation relative to the direction along which mirror symmetry is broken, gives access to various functionalities and aspects of noncentrosymmetric materials.
First, the dependence of MCA on the electric current leads to a current-dependent resistance which generally causes nonlinear nonreciprocal transport, i.e., a nonlinear voltage drop. Such nonlinear nonreciprocal transport can be detected by measuring the second-harmonic signal through lock-in techniques; see further details in section (V.1). Second, the dependence of MCA on the magnetic field implies that its coefficient \(\gamma\) remains non-zero only when time-reversal symmetry is broken. In addition, the orientation of the magnetic field must be orthogonal to both the current and the direction along which mirror symmetry is broken. This implies that not only is a finite magnetic field required, but its orientation also matters, depending upon the nature of the SOI associated with the broken mirror symmetry. Here we discuss the key mechanisms associated with nonreciprocity in superconducting systems.
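The following sketch illustrates why the current-dependent term in equation (3) appears as a second-harmonic voltage under a.c. excitation; the drive amplitude, frequency, and the lumped coefficient `gamma_eff` (standing in for \(\gamma|\mathbf{B}\times\mathbf{r}|\) at fixed field) are illustrative assumptions:

```python
import numpy as np

# Lock-in picture of MCA: V(t) = R(I) I with R(I) = R0 (1 + gamma_eff * I).
R0, gamma_eff, I0, f = 1.0, 0.02, 1.0, 13.0        # ohm, 1/A, A, Hz (assumed)
t = np.linspace(0.0, 1.0, 20000, endpoint=False)   # one second, integer number of periods
I = I0 * np.sin(2 * np.pi * f * t)
V = R0 * (1.0 + gamma_eff * I) * I

# Project onto the first and second harmonics (what a lock-in amplifier measures)
V_1w = 2 * np.mean(V * np.sin(2 * np.pi * f * t))
V_2w = 2 * np.mean(V * np.cos(2 * np.pi * 2 * f * t))
print(V_1w)   # ~ R0 * I0                 (linear response)
print(V_2w)   # ~ -R0*gamma_eff*I0**2/2   (nonreciprocal, second-harmonic response)
```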
The conventional semiconducting diode is not favorable for energy-efficient technologies with ultralow power consumption. At high temperatures relevant for thermionic transport, owing to their finite resistance, energy loss is inevitable in semiconductors. At low (sub-Kelvin) temperatures, on the other hand, relevant for cryogenic electronics [7] and ultrasensitive (sub-THz frequencies) optoelectronics and detection [74], semiconductors cease to work due to their large energy gap. Therefore, owing to their dissipationless supercurrents, intrinsically low impedance and thereby very high rectification of supercurrents, and the low energy scales associated with the superconducting gap (\(\sim\)meV) as compared to the semiconductor energy gap (\(\sim\)eV), a superconducting diode is highly desired for energy-efficient cryogenic electronic/optoelectronic devices [6; 7]. However, as broken electron-hole symmetry is required, the physical realization of a junction-free superconducting diode turns out to be difficult with the electron-hole symmetric Bardeen-Cooper-Schrieffer (BCS) superconducting state.
In light of this, SCs with broken spatial-inversion and time-reversal symmetry can offer bright perspectives for a supercurrent diode effect via MCA. However, for the implementation of the simplest possible device displaying the SDE intrinsically, it is worthwhile to pin down the intertwining between superconductivity and MCA. First, unlike rectification due to self-field effects in asymmetric superconducting quantum interference devices (SQUIDs) [75; 76], MCA is expected to induce an intrinsic SDE in symmetric devices with spatially homogeneous supercurrent density. Second, the intertwining between superconductivity and MCA can lead to a spin-filtering diode effect in a spin-selective Al/EuS/Cu superconducting tunnel junction [77] and thus to superconducting spintronic technologies [8]. However, even such a promising ferromagnetic superconducting structure, in which electron-hole symmetry can possibly be broken when both spin-filtering and spin-splitting are present to induce opposite shifts in the BCS density of states (DOS), is not suitable for the intrinsic SDE with nonreciprocal supercurrent transport. Finally, we could see the light at the end of the tunnel: an intrinsic SDE with nonreciprocal supercurrent transport can be realized in a helical superconductor with finite-momentum Cooper pairing, which can be induced by antisymmetric Rashba/Ising SOI and Zeeman exchange spin-splitting. Further details on this key mechanism are discussed in section I.4.
#### ii.1.3 From inductance to supercurrent
Nonreciprocity in the fluctuation regime of the metal-superconductor resistive transition confines the SDE to a narrow temperature window near \(T_{c}\). Baumgartner et al. [13] pointed out that the temperature window in which the MCA coefficient becomes sizeable must be widened for a sustainable fabrication of devices showing the SDE. To achieve this milestone, the authors demonstrated supercurrent rectification in the superconducting phase, i.e., far below the transition temperature \(T_{c}\). Since a d.c. measurement of the resistance-current (R-I) curve is not viable at low temperatures, where the resistance vanishes, the supercurrent response to an alternating-current (a.c.) excitation is studied instead; this response is described by the superfluid stiffness and can thus be detected through kinetic-inductance measurements.
If mirror symmetry is broken along the out-of-plane direction (\(\hat{e}_{z}\)), whereas the current \(\mathbf{I}\) and magnetic field \(\mathbf{B}\) are directed in-plane, the MCA or nonreciprocity of the superfluid response can be described by an equation similar to that for the resistance, eq. (3), i.e.,
\[L(I)=L_{0}[1+\gamma_{L}\,\hat{e}_{z}\cdot(\mathbf{B}\times\mathbf{I})] \tag{4}\]
Here the kinetic inductance (\(L\)) takes the place of the resistance (\(R\)). The nonreciprocity of the supercurrent can then be characterized by a new observable, the MCA coefficient \(\gamma_{L}\).
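A hypothetical extraction of \(\gamma_{L}\) from inductances measured at the two current polarities can be sketched as follows; the geometry (field along y, current along x, mirror symmetry broken along z) and all numerical values are assumptions for illustration:

```python
# Toy extraction of gamma_L from eq. (4): L(+/-I) = L0 [1 -/+ gamma_L * B * I]
# for B along y and I along x, so that e_z . (B x I) = -B*I.
L0, gamma_L_true, B, I = 1.0e-9, 5.0, 0.1, 1.0e-3   # H, 1/(T*A), T, A (assumed)

def L(I_signed):
    return L0 * (1.0 + gamma_L_true * (-B * I_signed))

gamma_L_est = (L(-I) - L(+I)) / (2.0 * L0 * B * I)
print(gamma_L_est)   # recovers the assumed gamma_L_true = 5.0
```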
### Nonreciprocity without magnetochirality
In noncentrosymmetric but time-reversal symmetric systems, a nonreciprocal nonlinear response can be realized via shift current (photovoltaic effect) [78; 79; 80], via Coulomb interactions [81], and via the asymmetric Hall effect of vortices and antivortices [82]. Shift current is a nontrivial contribution from the Berry phase of the electronic states [83]. That is, unlike conventional charge transport, which comes from intraband transitions [78; 79; 80] and depends only on the energy dispersion, the interband shift current depends not only on the energy dispersion but also on the Bloch wavefunction and plays an essential role in modern quantum transport phenomena [83; 84]. Following theoretical proposals [78; 79], shift current has been studied for a semiconductor (GaAs) [85], a ferroelectric semiconductor (SbSI) [86], and the Dirac surface states of a 3D topological insulator (Bi\({}_{2}\)X\({}_{3}\)(X=Te, Se)) with hexagonal warping [87]. This shows that shift current is a ubiquitous phenomenon in noncentrosymmetric quantum materials, and that a nonreciprocal nonlinear response can also be realized without breaking time-reversal symmetry.
T. Morimoto and N. Nagaosa [81] theoretically showed that nonreciprocal nonlinear I-V characteristics can be induced by electron correlations in noncentrosymmetric multiband systems without time-reversal symmetry breaking. According to general symmetry considerations, the nonreciprocal nonlinear response in such time-reversal symmetric systems is generally constrained by the presence of two ingredients: (i) dissipation, and (ii) interactions (e.g., electron-electron and electron-phonon interactions). First, the generalization of Onsager's reciprocal theorem to nonlinear current responses shows that dissipation is crucial for nonreciprocity. Second, a gauge-invariant formulation of the Keldysh Green's function shows that nonreciprocity disappears without interactions. A general formula for the nonreciprocity ratio (\(\gamma_{c}\)), derived by employing nonequilibrium Green's functions for two-band systems with onsite Coulomb interaction, reads [81]
\[\gamma_{c}=\frac{\delta J}{J}\simeq\frac{U}{E_{g,k_{F}}}\frac{eEa}{W} \tag{5}\]
where \(U\) is the Coulomb interaction energy (\(\gamma_{c}\to 0\) for \(U\to 0\)), \(E_{g,k_{F}}\) is the band gap, \(k_{F}\) is the Fermi momentum, \(e\) is the electron charge, \(E\) is the applied electric field, \(a\) is the lattice constant, and \(W\) is the bandwidth. Here \(J\) is the linear current response (the part of the current response proportional to \(E\)) while \(\delta J\) is the nonlinear current response (the part of the current response proportional to \(E^{2}\)). When \(U\approx E_{g,k_{F}}\), the nonreciprocal response can be estimated by quantifying the ratio \(eEa/W\) between the electric potential (\(eEa\)) in the unit cell and the bandwidth (\(W\)).
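As a rough numerical estimate of equation (5) under the assumption \(U\approx E_{g,k_{F}}\), with illustrative (not source-quoted) values for the field, lattice constant, and bandwidth:

```python
# gamma_c ~ (U / E_g) * (e E a / W); all inputs below are illustrative.
U, E_g = 0.5, 0.5        # eV, so U/E_g = 1
E_field = 1.0e4          # applied electric field, V/m
a = 5.0e-10              # lattice constant, m
W = 1.0                  # bandwidth, eV
eEa = E_field * a        # potential drop over one unit cell, in eV
print(f"gamma_c ~ {(U / E_g) * (eEa / W):.1e}")   # ~5e-6: a weak effect
```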
First, the nonreciprocity induced by electron correlations [81] is smaller than that induced by MCA, in both typical metals [59; 61] and resistive semiconductors [62]. Second, the requirement of dissipation means that the nonreciprocal response induced by Coulomb interactions is only measurable in the resistive fluctuation regime of the metal-superconductor transition, and not in the superconducting phase below the transition temperature. On the other hand, nonreciprocity of the supercurrent via the asymmetric Hall effect of vortices and antivortices in time-reversal symmetric trigonal superconductors (PbTaSe\({}_{2}\)) [82] promises another nonlinear transport phenomenon for studying the SDE. However, thus far, experimental observation of nonreciprocity of supercurrent has only been reported in noncentrosymmetric systems with broken time-reversal symmetry, while the observation of supercurrent nonreciprocity in time-reversal symmetric SCs remains scarce.
### Role of spin-orbit coupling
Apart from the strength of the SOI, since broken inversion symmetry is required (\(\gamma=0\) for centrosymmetric systems), the MCA coefficient \(\gamma\) also depends on the nature of the associated SOI. That is, based on the lattice symmetry, a finite \(\gamma\) may be realized in noncentrosymmetric condensed matter systems [58] such as polar or Rashba SCs and trigonal or Ising SCs. In polar systems, where the Rashba SOI is generated by broken \(\mathcal{M}_{z}\) and the electron spin is locked to in-plane orientations, the nonreciprocal supercurrent is controlled by an in-plane magnetic field. On the other hand, in trigonal systems with \(D_{3h}\) symmetry, where the Ising or valley-Zeeman SOI originates from broken \(\mathcal{M}_{x/y}\) and the electron spin is locked to out-of-plane orientations, the nonreciprocal supercurrent is controlled by an out-of-plane magnetic field.
In addition, it would be interesting to study the effects on the SDE of a crossover between the various SOI types associated with broken inversion symmetry. For instance, Baumgartner et al. [42] studied the effects of Rashba and Dresselhaus SOI on supercurrent rectification and MCA by fabricating Al/InAs-2DEG/Al ballistic JJs. Similarly, Pekerten et al. [44] studied the interplay between Rashba and Dresselhaus SOI and investigated the effects of magnetic and crystalline anisotropies on the topological superconductivity in JJs. If only Rashba-type SOI is present in the JJs, the topological phase diagram strongly depends on the magnetic field orientation but remains insensitive to the supercurrent polarity. On the other hand, when both Rashba- and Dresselhaus-type SOIs coexist, the phase diagram exhibits a strong dependence on the magnetic field as well as on the junction crystallographic orientation. These studies illustrate the role of SOI, both for the search of materials leading to the SDE with the best performance and for probing the phase diagram of topological/helical SCs.
Furthermore, H. Yi recently showed a crossover from Ising- to Rashba-type superconductivity in an epitaxial topological insulator/monolayer Ising superconductor heterostructure [88] (Bi\({}_{2}\)Se\({}_{3}\)/NbSe\({}_{2}\)). Upon altering the thickness of the Bi\({}_{2}\)Se\({}_{3}\) film, the emergence of topological superconductivity coincides with a considerable suppression of the upper critical in-plane magnetic field. While the former transition is marked by the emergence of spin-non-degenerate surface states and Rashba-type quantum-well bands in the bulk, the latter signals a crossover from Ising- to Rashba-type superconductivity. This system represents a classic example and sheds light on the role of SOI while searching for new systems to engineer the SDE.
Based on the above discussion, one can conclude that Ising/trigonal topological SCs, such as NbSe\({}_{2}\), which display exceptional upper critical fields exceeding the Pauli limit [89; 90; 91], can be identified as suitable materials for the realization of the SDE via magnetic-field-driven MCA. On the other hand, owing to the nontrivial Berry phase intertwined with band topology, time-reversal symmetric polar/Rashba SCs can be identified as promising materials for the realization of the SDE via shift current. This qualitative analogy needs further quantitative investigation, as the performance of the SDE also depends upon the strength of the SOI, interband transitions, and photoresponse.
### Helical superconductivity
To observe the SDE via MCA in a noncentrosymmetric superconductor, breaking of time-reversal symmetry (\(\mathcal{T}\)) is necessary but not sufficient. First, the SDE is not necessarily present in all magnetic SCs; rather, the orientation of the magnetic field or magnetization should be such that it breaks all possible inversion symmetries \(\mathcal{P}_{i}\)\((i=x,y,z)\). Second, time-reversal symmetry should be broken such that finite-momentum Cooper pairing or helical superconductivity emerges. Third, the magnetic field (or magnetization) should have a component perpendicular to the polarity of the applied current such that a finite pairing momentum emerges parallel/anti-parallel to the current direction. In this section, after a brief overview of helical superconductivity, we discuss the desired orientation of the magnetic field or magnetization and its intertwining with the nature of the SOI, the polarity of the applied current, the direction along which structural mirror symmetry is broken, and the momentum-space orientation of the Cooper pairing momentum.
#### iii.4.1 Fulde-Ferrell-Larkin-Ovchinnikov state
In the field of conventional superconductivity, following from the fact that Cooper pairing is formed between Kramers partners and that most known conventional SCs are characterized by the Bardeen-Cooper-Schrieffer theory [93], the presence of time-reversal symmetry is a key ingredient, and the preserved Kramers degeneracy is the fundamental criterion that stabilizes the superconducting phase in so many systems at sufficiently low temperatures [94; 95; 96]. Thus, such a conventional superconducting state with spin-singlet pairing is suppressed or destroyed, through electron pair breaking, by time-reversal symmetry breaking perturbations arising from an applied magnetic field, doped magnetic impurities, or an intrinsic magnetic instability leading to spontaneous magnetization.
On the other hand, beyond conventional BCS paradigm, unconventional superconductivity allows coexistence of more exotic superconducting order parameters with magnetic order. For instance, as predicted independently by Peter Fulde and Richard Ferrell (FF) [97] and Anatoly Larkin and Yuri Ovchinnikov (LO) [98], magnetic fields can give rise to a superconducting state with FF-type order parameter \(\Delta(x)=\Delta e^{iqx}\) and/or spatially inhomogeneous LO-type pair potential \(\Delta(x)=\Delta\cos qx\). The underlying physical mechanism of the Fulde-Ferrell-Larkin-Ovchinnikov (FFLO) state
[97; 98], owing to the opposite energy shift in the electronic spin bands as shown in Fig. 2(B), induces a non-zero centre-of-mass momentum of the Cooper pairs and leads to a spatially-modulated order parameter. The FF state exists ubiquitously in noncentrosymmetric SCs and is particularly known as helical superconductivity [99; 100; 101].
The FFLO states, and/or signatures of helical superconductivity, have been obtained in the heavy-fermion SC CeCoIn\({}_{5}\)[112; 113; 114], organic SCs [92], pure single crystals of FeSe [115; 116], thin films of Pb [110] and doped SrTiO\({}_{3}\)[117], a heavy-fermion Kondo superlattice [118; 119], and a three-dimensional topological
Figure 2: **(A)** Schematics of the Ising- and Rashba-type superconducting pairing symmetry. **(a)** Ising-type pairing symmetry originates from spin-singlet Cooper pairs formed between the electrons near the K and K\({}^{\prime}\) valleys with opposite spins pinned to the out-of-plane direction. **(b)** Rashba-type pairing symmetry originates from spin-singlet Cooper pairs formed between the electrons near the \(\Gamma\) point with opposite momentum and opposite spins pinned to the in-plane direction. Figure (A) is reproduced with permission from ref. [88]. **(B)** (a) Schematic sketch showing magnetic field driven spin-splitting of the free-electron parabola, inducing Pauli paramagnetism, and leading to different Fermi momenta for spin-up (\(k_{F}^{\uparrow}\)) and spin-down (\(k_{F}^{\downarrow}\)) electrons. (b) Schematic representation of the conventional spin-singlet BCS pairing state (left) with zero center-of-mass momentum and the spin-singlet FFLO pairing state (right) with a finite center-of-mass momentum (q). The red (blue) circle represents the Fermi surface for electrons with spin-up (spin-down). Figure (B) is reproduced with permission from ref. [92]. **(C)** Band splitting and Fermi contours under Rashba SOI and exchange field in a JJ Nb/Pt/Nb with a Pt barrier proximity-magnetized by a ferrimagnetic insulating Y\({}_{3}\)Fe\({}_{5}\)O\({}_{12}\) (YIG) film. **(a)** Rashba SOI splits the conduction bands laterally (along the momentum (k) axis) by \(\Delta k_{R}\) while the Zeeman exchange field splits them vertically (along the energy (E) axis) by \(\Delta E_{ex}\) such that the Kramers degeneracy is removed. Here \(E_{F}\) represents the Fermi level while \(k_{x/y}\) stands for the in-plane momentum components. **(b)** I-V curve representing the SDE at T = 2 K (\(<T_{c}\)) for different orientations of the Pt magnetization (M\({}_{Pt}\)), parallel (yellow) and antiparallel (cyan) with respect to the x-axis. Here the Pt magnetization orientation, and thus the direction of the exchange field, reverses when the magnetization orientation of the proximity-coupled YIG is inverted. The diode symbols in the yellow (cyan) shaded regime indicate that the Josephson supercurrent flows only in the positive (negative) y-direction. Figure (C) is reproduced with permission from ref. [19], Springer Nature Ltd. **(D)** Supercurrent diode effect under external current source J and in-plane magnetic field B in a noncentrosymmetric Rashba superconductor. (a and c) Schematics of device plots showing Rashba- and Zeeman-split normal state Fermi surfaces (denoted by circles) and the directions along which J and B are applied. (b and d) Schematic phase diagrams in the B-J plane corresponding to the device configurations shown in (a and c), respectively. Figure (D) is taken from ref. [30].
insulator Bi\({}_{2}\)Se\({}_{3}\)[120]. While the existence of FFLO-like states is well established in proximity-coupled SCs and ferromagnets [121], the experimental observation of FFLO states has been reported in nonmagnetic SCs under applied external magnetic fields [112; 113] as well as in intrinsic ferromagnetic SCs [122; 123; 124; 125; 126; 127; 128].
#### iii.4.2 Pairing in Rashba/Ising SCs
To understand the SDE via finite-momentum Cooper pairing in noncentrosymmetric superconducting materials, it is instructive to quickly review the pairing phenomenon in Ising- and Rashba-type superconductivity. In this regard, the (5QL)Bi\({}_{2}\)Se\({}_{3}\)/NbSe\({}_{2}\)(ML) heterostructure is a promising example where a crossover from Ising- to Rashba-type superconductivity has been reported recently [88]. The NbSe\({}_{2}\) bulk crystal with the 2H phase is a well-studied superconductor with Fermi-surface-sheet-dependent s-wave superconductivity [129]. 2H-NbSe\({}_{2}\) bulk crystals covered by molecular-beam epitaxy (MBE)-grown films of Bi\({}_{2}\)Se\({}_{3}\) or Bi\({}_{2}\)Te\({}_{3}\) topological insulators are the most successful topological superconductor interfaces [130; 131; 132; 133; 134; 135]. It is also well known that monolayer NbSe\({}_{2}\) with the Se-Nb-Se trilayer structure, with preserved out-of-plane mirror symmetry but broken in-plane inversion symmetry, is a prototypical Ising-type superconductor [89; 90], and it is preferred over 2H-NbSe\({}_{2}\) bulk crystals for device fabrication and technological applications. On the other hand, Bi\({}_{2}\)Se\({}_{3}\) with the Se-Bi-Se-Bi-Se quintuple-layered structure is a prototypical 3D strong TI hosting a single surface Dirac cone intertwined with nontrivial bulk Rashba bands at the \(\Gamma\)-point [136; 137].
In monolayer NbSe\({}_{2}\), the broken in-plane inversion symmetry generates an out-of-plane spin polarization and gives rise to the Ising-type SOI, which induces a valley-dependent Zeeman-type spin-splitting, as shown in figure 2(A-a). Such opposite spin-splitting in the bulk valence bands around the valleys K and K\({}^{\prime}\) leads to the Ising-type superconducting pairing symmetry in monolayer NbSe\({}_{2}\), i.e., intervalley spin-momentum-locked spin-singlet Cooper pairing between two electrons with opposite momenta and opposite out-of-plane spins. On the other hand, as shown in figure 2(A-b), owing to the emergence of Rashba-split low-energy conduction bands at the \(\Gamma\)-point and the corresponding Dirac surface states, (5QL)Bi\({}_{2}\)Se\({}_{3}\)/NbSe\({}_{2}\)(ML) heterostructures become proximity-coupled topological SCs with Rashba-type superconducting pairing symmetry, i.e., Cooper pairing between two Rashba-split electrons with opposite momenta and opposite spins pinned to the in-plane direction.
#### iii.4.3 Nonreciprocity in FFLO states
Such momentum-dependent spin-splitting of the low-energy electronic bands, caused by broken inversion symmetry in noncentrosymmetric bulk crystals, surfaces, and interfaces, is crucial for the emergence of nonreciprocal transport. However, in order to avoid the cancellation of this effect due to the superposition of degenerate Kramers pairs, one also needs an energy-dependent spin-splitting such that electrons with opposite momenta and opposite spins become Kramers non-degenerate, or simply non-equivalent. Typically, this can be achieved by breaking time-reversal symmetry. In the presence of an external magnetic field or intrinsic magnetization parallel to the electron spin orientation, the BCS-type zero-momentum Cooper pairing (symmetric around the \(\Gamma\)-point) becomes asymmetric around the \(\Gamma\)-point due to the opposite energy shifts, and FFLO-type finite-momentum Cooper pairing originates in both the Ising- and Rashba-type superconducting phases.
Recent theoretical studies [29; 30; 31; 32] revealed how a nontrivial interplay of antisymmetric Rashba SOI, magnetic field, and helical supercurrent leads to an intrinsic SDE in noncentrosymmetric bulk SCs. It implies that intrinsic SDE is closely related to the FFLO state [97; 98] with a periodically modulating phase of the superconducting order parameter \(\Delta_{sc}(r)=\Delta_{sc}e^{iq\cdot r}\): Rashba SOI splits Fermi surfaces while the finite pairing momentum \(\mathbf{q}_{0}\) is induced by the magnetic field and varies continuously with its strength and orientation. In terms of charge transport, when an in-plane magnetic field is applied, Cooper pairs in noncentrosymmetric Rashba SCs acquire a finite-momentum \(\mathbf{q}_{0}\), and, as a result, critical currents traversing along the direction parallel and antiparallel to \(\mathbf{q}_{0}\) become unequal.
Recently, N. Yuan and L. Fu [30] explicitly demonstrated the effect of an in-plane magnetic field on the Rashba spin-split bands and, thus, the emergence of finite-momentum Cooper pairing. As shown in figure 2 (D-a) and (D-c), a finite magnetic field (B) displaces the centers of the Rashba-split inner(+) and outer(-) Fermi pockets from \(\mathbf{k}=0\) to opposite momenta, \(\pm\mathbf{k}_{0}=\pm\hat{z}\times B/v_{F}\), respectively, and leads to a finite intrapocket Cooper pair momentum \(\mathbf{q}_{0}=\pm 2\mathbf{k}_{0}\). Owing to the larger DOS in the outer pocket, the energetically favored state is usually the one with the Cooper pair momentum \(\mathbf{q}_{0}=-2\mathbf{k}_{0}\). Figure 2 (D-b) and (D-d) show the magnetic field dependence of the depairing critical current in the fluctuation regime of the metal-superconductor resistive transition. When \(\mathbf{B}\parallel\mathbf{J}\), as shown in figure 2 (D-b), the phase diagram in the B-J plane remains symmetric with respect to both the B and J axes, and thus there is no nonreciprocity in the critical current \(J_{c}\) or the critical magnetic field \(B_{c}\). However, when \(\mathbf{B}\perp\mathbf{J}\), as shown in figure 2 (D-d), the phase diagram becomes asymmetric/skewed, indicating nonreciprocity in the critical current, \(J_{c}^{+}\neq J_{c}^{-}\), and polarity-dependence of the critical field, \(B_{c}^{+}\neq B_{c}^{-}\). That is, the maximum critical currents flowing in the directions parallel and antiparallel to \(\mathbf{q}_{0}\) are different, which leads to the SDE. On the same footing, in the presence of a supercurrent, the polarity-dependence of the in-plane critical fields is also a direct consequence of the finite-momentum
Cooper pairing. A similar mechanism has been realized in Rashba SCs with intrinsic magnetization. Figure 2(C) demonstrates the Rashba spin-splitting and the SDE controlled by the magnetization orientation in a JJ Nb/Pt/Nb with a proximity-magnetized Pt barrier (Pt/Y\({}_{3}\)Fe\({}_{5}\)O\({}_{12}\) (YIG)) [19].
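For orientation, one can estimate the scale of the field-induced pairing momentum \(\mathbf{q}_{0}=2\mathbf{k}_{0}\). The relation above is written in natural units; the sketch below restores \(\mu_{B}\) and \(\hbar\) by assumption and uses illustrative values of \(B\) and \(v_{F}\):

```python
# Order-of-magnitude estimate of q0 = 2 k0 with k0 ~ mu_B B / (hbar v_F).
mu_B = 9.274e-24     # J/T
hbar = 1.055e-34     # J*s
B = 1.0              # tesla (assumed in-plane field)
v_F = 1.0e5          # m/s (assumed Fermi velocity)
q0 = 2.0 * mu_B * B / (hbar * v_F)
print(f"q0 ~ {q0:.2e} 1/m")   # ~1e6 1/m, tiny compared with a typical k_F of 1e9-1e10 1/m
```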
#### iii.4.4 From spin-singlet to spin-triplet pairing
It is also crucial to consider the competition between spin-singlet and spin-triplet pairing. Note that, in the absence of a magnetic field, the pairing momentum remains zero for spin-singlet symmetry even in the presence of SOI, whereas SOI induces a finite momentum for spin-triplet symmetry, \(q^{\pm}=q_{0}^{\pm}\). However, owing to the symmetric shift of the pairing momentum (\(q_{0}^{+}=q_{0}^{-}\)) in the absence of a magnetic field, even the finite momentum of spin-triplet pairing does not induce nonreciprocity of the supercurrent. This can be explained by noticing that the q-linear term in the kinetic energy of the Cooper pairs can only shift the momentum-space positions of the optimal critical currents (maximum \(I^{+}\) and minimum \(I^{-}\)) while keeping the \(I^{\pm}\) values unchanged, and thus cannot induce nonreciprocity. To induce nonreciprocity, one needs magnetic-field-dependent (higher order) q-terms in the GL free energy of a SC [31].
When a magnetic field is applied, the energy-dependent spin-splitting lifts the Kramers degeneracy, such that electrons with opposite spins are no longer momentum-symmetric around the \(\Gamma\)-point, and nonreciprocity emerges in both cases. As a result, the contributions from Cooper pairs with opposite center-of-mass momenta become non-equivalent and the cancellation of their effect is avoided. For the spin-singlet pairing symmetry, the magnetic field changes both the magnitude and the momentum-space position of the center-of-mass momentum: enlarging and moving \(q_{0}^{+}\) along the current direction while reducing and moving \(q_{0}^{-}\) opposite to the current direction. On the other hand, for the spin-triplet pairing symmetry, the SOI shifts the momentum of the Cooper pairs from \(q_{0}=0\) (symmetric around the \(\Gamma\)-point) to \(q=q_{0}^{\pm}\) (asymmetric around the \(\Gamma\)-point) due to the opposite momentum shifts. Unlike the spin-singlet case, the magnetic field cannot change the momentum-space position of \(q=q_{0}^{\pm}\) but rather modifies their magnitudes: \(q_{0}^{+}\) enlarges whereas \(q_{0}^{-}\) reduces with increasing magnetic field.
There is, of course, a threshold limit for the magnetic field. For the spin-triplet pairing, when the bottom of one conduction band passes above the Fermi level, \(q_{0}^{-}\to 0\) while \(q_{0}^{+}\) becomes maximal. On the other hand, for the spin-singlet pairing, the magnitude of \(q_{0}^{-}\) becomes constant once the inner Fermi circle has shifted completely to one side of the \(\Gamma\)-point. Furthermore, one needs to keep an eye on the curvature of the parabolic bands, as it governs how the pairing momentum changes when the magnetic field is increased, which means that the Fermi velocity plays a central role.
## II Theory of superconducting diode effects
Before turning to the recently reported theoretical analyses of the SDE, it is useful to briefly review the theoretical studies in which nonreciprocity of the supercurrent was reported and the interesting functionalities of SCs intertwined with broken inversion symmetry and SOI were highlighted. Interestingly, V. M. Edelstein has discussed the characteristics of the Cooper pairing in two-dimensional noncentrosymmetric electron systems [26], the magnetoelectric effect in polar SCs [27], and the nonreciprocity of the supercurrent by studying the Ginzburg-Landau equation for SCs of polar symmetry [28]. In other words, the SDE has been known theoretically since the 1990s, and has only recently been demonstrated experimentally.
In 1996, following his earlier work characterizing Cooper pairing in noncentrosymmetric SCs [26] and describing magnetoelectric effects in polar SCs [27], V. M. Edelstein [28] explicitly proposed nonreciprocity of the supercurrent: when the applied magnetic field (\(B\)), the electric current (\(j\)), and the polar axis (\(\hat{r}\)) are orthogonal to each other, the magnitude of the critical current \(j_{c}(B)\) depends on the sign of the mixed product \((\hat{r}\times\hat{B})\cdot j_{c}\), i.e., the critical current should be different for the two opposite directions.
Recently, intriguing experimental demonstrations of the SDE, especially in the Rashba-type bulk superconducting [Nb/V/Ta]\({}_{n}\) superlattice [12] or Al/InAs-2DEG/Al JJs [13] and in Ising-type superconducting JJs such as an NbSe\({}_{2}\) constriction [22] or a Nb/NiTe\({}_{2}\)/Nb junction [23], have stimulated theoretical research on nonreciprocal supercurrent transport in a number of exotic quantum materials. In addition, they have also sparked the discussion on the fundamental mechanisms that cause nonreciprocal charge transport in SCs. For instance, how could nonreciprocal charge transport in a semiconductor with finite resistance be generalized to a superconductor carrying a supercurrent with zero resistance? More specifically, which physical quantity displays nonreciprocal behaviour in the zero-resistance superconducting state?
By employing mean-field (MF), Bogoliubov-de Gennes (BdG), and time-dependent Ginzburg-Landau (GL) theories, Daido et al. [29], J. He et al. [31], N. Yuan and L. Fu [30], and S. Ilic and F. S. Bergeret [32] theorized the SDE in junction-free Rashba/polar SCs. A. Daido et al. [29] studied the Rashba-Zeeman-Hubbard model for helical superconductivity and proposed that nonreciprocity of the depairing critical current is the intrinsic mechanism of the SDE in the fluctuation regime of the metal-superconductor resistive transition. A. Daido et al. [29] also showed that such a mechanism of intrinsic SDE can be employed as a microscopic probe to study and explore the phase diagram of helical superconductivity. A similar proposal has been made by N. Yuan and L. Fu [30], who studied an effective Rashba-Zeeman-Hubbard model and reported that the nonreciprocal depairing critical current and the polarity-dependent critical magnetic field are consequences of finite-momentum Cooper pairing. On the same footing,
mainly using GL theory and a phenomenological theory of the SDE, J. He et al. [31] presented a detailed discussion of the symmetry-breaking phenomena and of the intertwining between the polar axis, the magnetic field orientation, and the current direction that is required for the realization of the SDE. The theory of the SDE has been generalized to Rashba SCs with arbitrary disorder by S. Ilic and F. S. Bergeret [32].
Thus far, theoretical discussions of nonreciprocal supercurrent and predictions of an intrinsic SDE have also been extended to other junction-free polar superconducting systems. For instance, H. D. Scammell et al. presented a theory of the zero-field SDE in twisted trilayer graphene [33]. Zhai et al. [34] predicted a reversible SDE in ferroelectric SCs. The experimental demonstration of nonreciprocal transport in chiral SCs, e.g., the Ru-Sr\({}_{2}\)RuO\({}_{4}\) eutectic system [138; 139] and WS\({}_{2}\) nanotubes [68], has recently been followed by B. Zinkl et al. [35], who discussed the detailed symmetry conditions for the SDE in various chiral superconducting models/systems. The theory of nonreciprocal charge transport and the intertwining between the SDE and band topology have also been presented for topological SCs [36; 37; 38]. For instance, N. Yuan and L. Fu [36] uncovered an intertwining between finite-momentum superconductivity and topological band theory, i.e., Cooper pairing with finite momentum depends closely on the nontrivial topological spin texture of nondegenerate Fermi surfaces, driven by the combined effect of SOI and Zeeman fields. Recently, H. F. Legg et al. [37] theorized the SDE due to MCA in topological insulators and Rashba nanowires, while K. Takasan et al. [38] discussed supercurrent-induced topological phase transitions.
In addition, the basic mechanisms of the SDE (first envisioned by J. Hu et al. [39]) have also been theorized for JJs [40], e.g., the conventional superconducting NbSe\({}_{2}\)/Nb\({}_{3}\)Br\({}_{8}\)/NbSe\({}_{2}\) JJ [41] and the Al/InAs-2DEG/Al JJ [42], graphene-based JJs [43], and topological superconducting JJs [44; 45]. Furthermore, the effect of Rashba and Dresselhaus SOI on supercurrent rectification and MCA has also been studied for JJs based on conventional SCs [42] and topological SCs [44]. In recent theoretical studies of the topological JJ dS/FI/dS (dS: d-wave superconductor, FI: ferromagnetic insulator) on a 3D topological insulator surface, Y. Tanaka and N. Nagaosa [45] also demonstrated the relevance of the Majorana bound states (MBS), i.e., spin-momentum locked energy-zero Andreev bound states (ABS) at the interface [140; 141].
## III Materials for superconducting diode effects
In the last two years, the SDE has been experimentally observed in a number of superconducting structures, ranging from junction-free SCs [12; 14; 15; 16; 17; 18] and JJs [19; 20; 21; 22; 23; 24] to other engineered structures such as superconducting tunnelling junctions [77] and superconducting devices with pinning centres of asymmetric pattern [25]. JJs, mainly due to the presence of a junction, can be thought of as the symmetric, superconducting analogue of the asymmetric semiconducting pn junction. On the other hand, junction-free SCs can be thought of as the symmetric, superconducting analogue of symmetric semiconductors.
The observation of the SDE originating from nonreciprocal charge transport driven by MCA in symmetric SCs, whether junction-free or JJs, relies on simultaneously broken spatial-inversion and time-reversal symmetries, similar to that in symmetric semiconductors. Furthermore, similar to that in topologically nontrivial semiconductors/semimetals, the SDE can be realized in time-reversal symmetric systems where nonreciprocal charge transport is associated with a nontrivial Berry phase. Since both MCA and the nontrivial Berry phase are strongly associated with the strength and nature of the SOI originating from broken inversion symmetry, noncentrosymmetric SCs can be classified as Rashba SCs [12; 13; 14; 15; 16; 17; 19; 20] or Ising SCs [21; 22; 23].
If spatial-inversion symmetry is broken, the SDE can be realized in three-dimensional bulk materials, quasi-two-dimensional thin films and van der Waals heterostructures, and atomically thin superconducting materials. Thus far, the SDE has been reported in several materials, ranging from conventional SCs such as the [Nb/V/Ta]\({}_{n}\) superlattice [12; 14], the Al/InAs-2DEG/Al junction [13], Nb SCs [20], the Cu/EuS/Al tunnel junction [77], and superconducting thin films with conformal-mapped nanoholes [25], to ferromagnetic SCs [15], twisted-angle bilayer [24] and trilayer [18] graphene with unconventional superconductivity, TMDCs with Ising superconductivity [21; 22; 23], and topological superconducting materials [16; 17; 23] where superconductivity coexists with nontrivial band topology.
For the device fabrication of superconducting electronics, and especially for the search/utilization of novel superconducting materials with high working temperatures and large magnetic fields, it is important to categorize the materials hosting the SDE. Superconducting materials/structures displaying the SDE can be classified as junction-free or JJs based on the device structure, as Rashba or Ising SCs based on the nature of the SOI, and as trivial or nontrivial based on band topology. Furthermore, the SDE can be classified as magnetic-field-driven or field-free depending on the magnetic character of the superconducting materials. Finally, depending upon the origin of the nonreciprocity of charge transport, whether MCA or a nontrivial Berry phase, SDE materials can be classified as time-reversal-symmetric or time-reversal-asymmetric.
## IV Efficiency of superconducting diode
Let us consider a superconducting sheet with pairing potential \(\Delta(q)\), where \(\mathbf{q}=q\hat{x}\) is the center-of-mass momentum. The metal-superconducting transition, and thus the distinction between a supercurrent, a depairing current, and
a normal current, can be conveniently described by introducing the condensation energy \(F(q)\equiv F_{n}(q)-F_{s}(q)\) for each \(q\), i.e., the difference between the free energies per unit area in the normal (n) and superconducting (s) states. The sheet current density, as an expectation value of the current operator, can be obtained by \(j(q)=2\partial_{q}F(q)\). If a current source supplies an electric current \(j_{ex}\), a superconducting state with pairing momentum \(\mathbf{q}\) should be realized when \(j_{ex}=j(q)\). However, when \(j_{ex}<j_{c}^{-}\equiv\min_{q}j(q)\) or \(j_{ex}>j_{c}^{+}\equiv\max_{q}j(q)\), the superconducting state cannot sustain \(j_{ex}\) and turns into a normal state. Thus, the depairing critical current along the direction parallel (\(+\hat{x}\)) and antiparallel (\(-\hat{x}\)) to the pairing momentum \(\mathbf{q}\) is given by the maximum (\(j_{c}^{+}\)) and minimum (\(j_{c}^{-}\)) value of \(j(q)\), respectively. The SDE in such a helical superconductor is identified and characterized by a finite \(\Delta j_{c}\) given by
\[\Delta j_{c}\equiv j_{c}^{+}+j_{c}^{-}=j_{c}^{+}-|j_{c}^{-}| \tag{6}\]
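The construction above can be made concrete with a toy (purely illustrative) condensation energy that is asymmetric about its maximum; the sketch below differentiates it numerically to obtain \(j(q)\) and reads off the two depairing critical currents. The functional form of \(F(q)\) is an assumption, not a model from the references:

```python
import numpy as np

# Toy condensation energy with a helical shift q0 and a small asymmetry lam.
q = np.linspace(-1.0, 1.0, 4001)
q0, lam = 0.15, 0.25                                      # assumed parameters
x = q - q0
F = np.clip(1.0 - x**2, 0.0, None)**2 * (1.0 + lam * x)   # arbitrary units

j = 2.0 * np.gradient(F, q)            # sheet current density j(q) = 2 dF/dq
jc_plus, jc_minus = j.max(), j.min()   # depairing currents along +x / -x
print(f"j_c^+ = {jc_plus:.3f}, j_c^- = {jc_minus:.3f}, Delta j_c = {jc_plus + jc_minus:.3f}")
```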
Although a huge current density is generally required to reach the depairing limit in a typical superconductor, the depairing critical current density (\(j_{c}\)) has recently been reported in superconducting microbridge devices [142; 143; 144]. For an optimal performance of the SDE, it is instructive to analyse the behavior of the depairing \(j_{c}\) and of \(\Delta j_{c}(T)\) from various perspectives. Examples include the dependence of \(j_{c}\) on temperature and on the orientation of the magnetic field reported for an Fe-based Ba\({}_{0.5}\)K\({}_{0.5}\)Fe\({}_{2}\)As\({}_{2}\) microbridge with nanoscale thickness (see, e.g., Fig. 3 and Fig. 4 in ref. [143]); the critical current density as a function of bridge width and length reported for a Cu-based YBa\({}_{2}\)Cu\({}_{3}\)O\({}_{7-\delta}\) microbridge with nanoscale thickness (see, e.g., Fig. 3 in ref. [142]); a comparison of the critical current density obtained from Ginzburg-Landau (GL) theory [\(\propto(T_{c}-T)^{3/2}\)] with that from Kupriyanov-Lukichev (KL) theory for an Fe-based Fe\({}_{1+y}\)Te\({}_{1-x}\)Se\({}_{x}\) microbridge with microscale thickness (see figure 3 in ref. [144]); and the sign reversal of \(\Delta j_{c}\) upon increasing the magnetic field at low temperatures (see, e.g., Fig. 4 and Fig. 5 in ref. [29]). As a figure of merit, the strength of the nonreciprocal response, or the superconducting diode efficiency, can be expressed as the ratio between \(\Delta j_{c}\) and the averaged critical current \(j_{c}^{avg}\) [29; 30; 31; 32]
\[\eta\equiv\frac{j_{c}^{+}-|j_{c}^{-}|}{j_{c}^{+}+|j_{c}^{-}|}=\frac{\Delta j_ {c}}{2j_{c}^{avg}} \tag{7}\]
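In code, the figure of merit of equation (7) is a one-liner; the critical-current values used below are placeholders (they could equally be replaced by the \(j_{c}^{\pm}\) obtained from the toy \(F(q)\) sketch above):

```python
def diode_efficiency(jc_plus, jc_minus):
    """Superconducting diode efficiency eta of eq. (7) from signed critical currents."""
    return (jc_plus - abs(jc_minus)) / (jc_plus + abs(jc_minus))

print(diode_efficiency(1.2, -0.8))   # 0.2, i.e. a 20% efficiency for these placeholders
```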
Recent theoretical studies [29; 30; 31; 32] show that the strength of \(\eta\) depends on a range of relevant system parameters: the applied magnetic field, the working temperature, the induced Cooper pairing momentum, the intrinsic SOI, and the intertwining between them [29; 30; 31; 32]. In addition, the strength of \(\eta\) also depends on two other related but distinct parameters, the chemical potential [31] and the next-nearest neighbour hopping [29], which break particle-hole symmetry. Furthermore, though the SDE persists even in the presence of disorder, the strength of \(\eta\) is also affected by disorder, as it may change the nature of the two helical bands by introducing mixing between them [32]. Thus, for energy-efficient and high-performance superconducting device applications, it is crucial to find optimal system parameter regimes where the strength of \(\eta\) is maximal. Recently, S. Ilic and F. S. Bergeret [32] theoretically predicted that the SDE efficiency may exceed \(\eta=40\%\) (in the ballistic limit) at optimal magnetic field, temperature, and SOI in Rashba SCs. Interestingly, an SDE with optimal efficiency can also be engineered by tailoring the characteristics and design of a JJ [13; 45].
At a fixed temperature, the SDE efficiency shows a nonmonotonic magnetic field dependence [29; 30; 32]: \(\eta\) increases (almost linearly) for weak to moderate fields and is then suppressed beyond a certain breakdown/threshold field \(B_{max,\eta}\), see, e.g., Fig. 3(D) in ref. [30] and Fig. 4 in ref. [32]. For Rashba SCs, the threshold field is theoretically predicted [30; 32] to be of the order of the Pauli paramagnetic limit, i.e., much larger than the breakdown limit observed in recent experiments [12; 13; 22]. Along with this nonmonotonic behavior, the SDE efficiency changes its sign with increasing magnetic field [29; 30; 32]. Such a change in sign of the SDE efficiency appears approximately at the Pauli limit \(B\approx B_{P}\), see, e.g., Fig. 3(F) in ref. [30], Fig. 3 in ref. [32], and Fig. 4 in ref. [29]. Such a magnetic-field-driven sign reversal of the SDE, accompanied by the crossover between the weak and strong helical phases, is a general feature of helical SCs irrespective of their details [145].
Unlike the magnetic field dependence, recent theoretical studies predict quite diverse behaviour for the temperature dependence of the SDE efficiency. For instance, at a fixed magnetic field, the Rashba-Zeeman-Hubbard model [29; 30; 31] predicts that the SDE efficiency shows a monotonic square-root-like temperature dependence near the transition temperature which saturates at low temperatures, see, e.g., Fig. 2 in ref. [29] and Fig. 3 in ref. [31]. On the other hand, the quasiclassical Eilenberger equation for a 2D disordered Rashba superconductor [111] shows that the temperature dependence of the SDE efficiency is critically affected by the strength of the fixed magnetic field and may display a nonmonotonic temperature dependence [32]. For instance, the SDE efficiency shows a monotonic temperature dependence for \(B\gtrapprox B_{P}\) but becomes nonmonotonic when \(B\lessapprox B_{P}\), see, e.g., Fig. 4 in ref. [32]. That is, in the latter case, the SDE efficiency first increases with decreasing temperature but is gradually suppressed when the temperature is lowered further beyond a certain breakdown limit. Recent observations of the SDE show that the monotonic [13; 22] or nonmonotonic [22] temperature dependence of the SDE efficiency may also depend on the sample fabrication [22]. A similar transition from monotonic to nonmonotonic temperature dependence may also be realized by varying the strength of disorder, see, e.g., Fig. 6 in ref. [32].
Next, we turn to the dependence of SDE efficiency on the momentum of Cooper pairs or the nature of helical phase. For spin-orbit coupled Rashba SCs in magnetic
field, the nature of the helical phase can be characterized by quantifying the contribution of the two helical bands to the helical superconductivity [111]. Owing to the opposite energy shift induced by the magnetic field, the two helical bands, denoted with the index \(\lambda=\pm\) and characterized by the same Fermi velocity \(v=\sqrt{2\mu/m+\alpha^{2}}\) but different densities of states \(\nu_{\lambda}=\nu(1-\lambda\alpha/v)\), prefer opposite modulation vectors: \(q_{0}^{\lambda}v=-2\lambda h\). Here, \(m\) is the effective electron mass, \(\mu\) is the chemical potential, \(\nu=m/(2\pi)\), and \(\alpha=\Delta_{so}/\sqrt{2m\mu}\) characterizes the SOI strength. Figure 3(R-a) illustrates the crossover from a "weak" to a "strong" helical phase for different ratios of the Fermi velocity (\(v\)) and the velocity associated with the Rashba SOI (\(\alpha\)). In the "weak" or long-wavelength helical phase, at low magnetic fields, the contribution of both bands to the helical superconductivity yields a modulation vector \(q_{0}v\approx 2(\alpha/v)h\). In the "strong" or short-wavelength helical phase, at large magnetic fields, owing to the dominance (suppression) of the contribution from the band with the higher (lower) density of states, only one of the bands contributes to the helical superconductivity, which leads to the modulation vector \(q_{0}v\approx 2h\).
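The band-resolved quantities defined above can be tabulated directly; in the sketch below, \(m\), \(\mu\), \(\alpha\), and \(h\) are dimensionless illustrative inputs rather than parameters of a specific material:

```python
import numpy as np

m, mu, alpha, h = 1.0, 1.0, 0.3, 0.05
v = np.sqrt(2.0 * mu / m + alpha**2)        # common Fermi velocity of the helical bands
nu = m / (2.0 * np.pi)                      # 2D density of states
for lam in (+1, -1):
    nu_lam = nu * (1.0 - lam * alpha / v)   # band-resolved density of states
    q0_lam_v = -2.0 * lam * h               # preferred modulation vector times v
    print(f"lambda={lam:+d}: nu_lambda={nu_lam:.3f}, q0*v={q0_lam_v:+.3f}")

# Limiting modulation vectors of the helical phase quoted in the text:
print("weak helical phase:   q0*v ~", 2.0 * (alpha / v) * h)
print("strong helical phase: q0*v ~", 2.0 * h)
```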
S. Ilic and F. S. Bergeret [32], based on the quasiclassical Eilenberger equation for a 2D Rashba superconductor [111], predicted that the maximum of \(\eta\) emerges when both the bands contribute to the helical superconductivity and the magnetic field is close to the critical value \(h^{*}\) at which a crossover between "weak" and "strong" helical phase. In the "strong" helical phase, the maximum of \(\eta\) is \(\eta\), and the maximum of \(\eta\) is \(\eta\), and the maximum of \(\eta\) is \(\eta\). The "strong" helical phase is \(\eta
This can be explained from the self-consistent calculation of \(\Delta(q)\), \(j(q)\) and \(\eta\) versus the Cooper pairing momentum under various magnetic field strengths, as shown in Fig. 3(R-b), or from the h-T phase diagram under various strengths of Rashba SOI, as shown in Fig. 3(R-d). In the absence of a magnetic field (\(h=0\)), as shown in the upper left panel of Fig. 3(R-b), there is no helical phase (\(q_{0}=0\)) and thus no nonreciprocity of the critical current. In the presence of a finite magnetic field (\(h\neq 0\)), a finite Cooper pairing momentum (\(q_{0}\neq 0\)) leads to nonreciprocity of the critical current both in the "weak" helical state induced by sufficiently low h, as shown in the upper right panel of Fig. 3(R-b), and in the "strong" helical state induced by large h, as shown in the two lower panels of Fig. 3(R-b). The momentum dependence of \(\Delta(q)\) and \(j(q)\) is markedly different in these three superconducting states, and thus depicts completely different supercurrent transport: no SDE in the BCS state (\(j_{c}^{+}=|j_{c}^{-}|,\eta=0\)), negative SDE in the "weak" helical state (\(j_{c}^{+}<|j_{c}^{-}|,\eta<0\)), and positive SDE in the "strong" helical state (\(j_{c}^{+}>|j_{c}^{-}|,\eta>0\)). In addition, the different strength and opposite sign of the SDE under different magnetic field values hint that there must be some optimal field at which \(\eta\) is maximum.
This can be depicted by plotting \(\eta\) for every point in the h-T phase diagram; in addition, the effect of other parameters can be visualised. For instance, as shown in Fig. 3(R-d), S. Ilic and F. S. Bergeret [32] plotted the h-T phase diagram and calculated \(\eta\) for different strengths of SOI. Here the black curve corresponds to the upper critical field \(h_{c2}\), while the orange and purple colors clearly illustrate the two distinct regimes in which the SDE is driven by the "weak" and "strong" helical phases, respectively. First, it shows that the maximum efficiency appears at the crossover between the "weak" and "strong" helical phases. Second, the maximum efficiency exceeding 40% at the crossover corresponds to the optimal SOI. Third, the maximum efficiency also corresponds to an optimal temperature in the superconducting phase.
Such momentum dependence, yielding a maximum of \(\eta\) when the optimal magnetic field and SOI drive the system to the crossover between the "weak" and "strong" helical phases, implies that the competition between, and the contribution of, both helical bands is central to the SDE. This can be explained by noticing that the MCA is proportional to both the magnetic field and the SOI, and thus becomes strongest when both of these parameters are maximal. The maximal magnetic field and SOI borne by the system, together with the constraint that both helical bands contribute, is ensured at the crossover between the "weak" and "strong" helical phases. This can further be explained by analysing the regimes of the h-T phase diagram, as illustrated in Fig. 3(R-d), where too large a magnetic field and too large an SOI both suppress the SDE efficiency. For instance, the SDE efficiency vanishes when the magnetic field is increased beyond the crossover to the "strong" phase, where only one of the helical bands dominates. Similarly, when the SOI is increased such that \(\alpha/v\to 1\), only one helical band with a large DOS (\(\nu_{-}\approx 2\nu\)) exists, while the other helical band with vanishingly small DOS (\(\nu_{+}\to 0\)) is fully suppressed, and the SDE disappears.
This phase diagram also helps to understand the intertwining of the optimal temperature with the magnetic field and SOI. At weak SOI, as depicted in the upper left panel of Fig. 3(R-d), the SDE becomes strongest at the tricritical point (\(T^{*},h^{*}\)) where the "weak" helical phase meets the "strong" helical phase and the normal phase. This is in good qualitative agreement with the results predicted by N. Yuan and L. Fu [30], where (\(T^{*},h^{*}\)) denotes the tricritical point at which the FF phase meets the normal phase and the BCS phase. However, with increasing strength of the SOI, the region of the h-T phase diagram hosting the maximum SDE moves towards zero temperature, i.e., where \(T\ll T^{*}\).
Similarly, as shown in Fig. 3(L-b), Daido et al. [29] plotted the h-T phase diagram and calculated \(\eta\) for different strengths of the next-nearest neighbour hopping \(t_{2}\) in the Rashba-Zeeman-Hubbard model. It depicts the sign change of \(\eta\) with increasing magnetic field. In addition, at some magnetic field, the sign of the SDE efficiency found at \(t_{2}=0\) (left panel) also switches when a finite \(t_{2}\neq 0\) is considered (right panel). Furthermore, the magnetic field dependence of the pairing momentum, as shown in the left panel of Fig. 3(L-c), and of the SDE efficiency, as shown in the right panel of Fig. 3(L-c), shows that the maximum of \(\eta\) appears at the crossover between the "weak" and "strong" helical phases. This implies that the results obtained from the numerical study of the Rashba-Zeeman-Hubbard model [29] and those from the quasiclassical Eilenberger equation [32] are in good qualitative agreement. However, as mentioned above, there are considerable differences between these two studies when it comes to the temperature dependence of the SDE efficiency, i.e., the numerical study of the Rashba-Zeeman-Hubbard model shows monotonic behaviour, while the quasiclassical Eilenberger equation shows that the temperature dependence can be either monotonic or nonmonotonic depending on the strength of the magnetic field. Based on the above analysis, one can conclude that the nonmonotonic behaviour, for both the magnetic field and temperature dependence, and the change of sign of \(\eta\) with increasing magnetic field are related to the magnetic field driven evolution of the helical phase. That is, \(\eta\) becomes maximum at a particular field \(h^{*}\) and an optimal temperature, and then decreases away from these values.
Similar to the dependence on the next-nearest neighbour hopping [29], and consistent with the analogy discussed for the magnetic field and SOI intertwined with the variation in the DOS of the two helical bands [32], He et al. [31] theoretically predicted that the SDE efficiency shows a strong dependence on the chemical potential. For a Rashba superconductor with a Zeeman field, where the free energy includes all terms up to linear order in \(h\sqrt{\epsilon}\), GL theory
results in the SDE efficiency [31]:
\[\eta=\frac{2.7\lambda_{R}}{|\lambda_{R}|}\frac{h\sqrt{\epsilon}}{T_{c}}\times \begin{cases}(1+\tilde{\mu})^{-1/2}&\text{if }\tilde{\mu}>0\\ \frac{8}{7}+\frac{16}{21}\tilde{\mu}+(1+\tilde{\mu})^{-1/2}&\text{if }-1<\tilde{\mu}<0\end{cases} \tag{8}\]
Here \(\lambda_{R}\) is the Rashba SOI strength, \(\epsilon=1-T/T_{c}\), and \(\tilde{\mu}=\mu/E_{R}\), where \(E_{R}=\frac{1}{2}m\lambda_{R}^{2}\) is the energy difference between the band crossing point (\(\mu=0\)) of the Rashba-split bands and the conduction band edge (\(\mu=-E_{R}\)), and \(m\) denotes the effective electron mass. At some fixed magnetic field, temperature, and SOI, the SDE efficiency reaches its maximum strength at \(\mu=0\), whereas it decreases when the Fermi level moves away from the band crossing point, either towards the large-\(\mu\) limit (\(\mu\gg E_{R}\)) or towards the conduction band edge (\(\mu=-E_{R}\)). It is important to note that there are several constraints, and thus limitations, on these GL theory calculations. For instance, the expression (8) is derived by assuming \(|h|\ll T_{c}\ll E_{R}\) and treating the problem in the band basis, where only the intra-band pairing \(\Delta_{t}\) is considered while the inter-band pairing \(\Delta_{s}\) is neglected. As a consequence of taking the limit \(T_{c}/E_{R}\to 0\) and neglecting the inter-band pairing, there exists a discontinuity in \(\eta\) at \(\mu=0\). In addition, owing to the consideration of intra-band pairing only, such a discontinuity also reflects the flip of the spin-momentum locking helicity. However, the features of the SDE efficiency obtained numerically from a self-consistent Bogoliubov-de Gennes mean-field Hamiltonian [31] are in good qualitative agreement with those displayed by the SDE efficiency obtained from the analytic generalized GL theory calculations. In addition, the discontinuity of \(\eta\) at \(\mu=0\) is smoothed out when \(T_{c}/E_{R}\) is not so small, and \(\eta\) shows a square-root dependence on \(\mu\), \(\eta\sim\mu^{1/2}\), when \(\mu\) is large.
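As a quick numerical illustration of this chemical-potential dependence, the short Python sketch below evaluates expression (8) on a grid of \(\tilde{\mu}\); the values of \(h\), \(T_{c}\), \(\lambda_{R}\) and \(\epsilon\) are arbitrary placeholders, and the discontinuity at \(\mu=0\) discussed above shows up as a jump between neighbouring grid points.

```python
import numpy as np

def sde_efficiency_gl(mu_tilde, h=0.05, T_c=1.0, lambda_R=1.0, epsilon=0.1):
    """SDE efficiency from the generalized GL expression (8); mu_tilde = mu / E_R."""
    mu_tilde = np.asarray(mu_tilde, dtype=float)
    prefactor = 2.7 * np.sign(lambda_R) * h * np.sqrt(epsilon) / T_c
    eta = np.full_like(mu_tilde, np.nan)                 # undefined below the band edge
    upper = mu_tilde > 0                                 # Fermi level above the crossing point
    lower = (mu_tilde > -1) & (mu_tilde < 0)             # between band edge and crossing point
    eta[upper] = (1.0 + mu_tilde[upper]) ** -0.5
    eta[lower] = 8.0 / 7.0 + (16.0 / 21.0) * mu_tilde[lower] + (1.0 + mu_tilde[lower]) ** -0.5
    return prefactor * eta

if __name__ == "__main__":
    mu = np.array([-0.9, -0.5, -0.1, 0.1, 0.5, 1.0, 4.0])
    print(np.round(sde_efficiency_gl(mu), 4))
```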
Finally, it is important to emphasise that the SDE efficiency also depends upon the characteristics and the design of a JJ [13; 45]. In general, with a macroscopic phase difference \(\phi\) between two SCs, the standard CPR of the Josephson supercurrent \(I(\phi)\) between two SCs is \(I(\phi)\sim\sin\phi\). That is, when either space-inversion symmetry or time-reversal symmetry is preserved, the purely sinusoidal term leads to an antisymmetric CPR, \(I(\phi)=-I(-\phi)\), and the Josephson current vanishes for \(\phi=0\). On the other hand, when both time-reversal symmetry and space-inversion symmetry are simultaneously broken, an anomalous CPR [46; 49; 141; 146; 147; 148; 149; 150] (displaying a finite anomalous Josephson current even at zero phase difference) contains cosine terms as well. However, even the presence of such a cosine term does not suffice to obtain SDE, because it simply introduces an anomalous phase shift into the purely sinusoidal CPR and thus the Josephson inductance remains reciprocal (symmetric across zero current). Thus, in order to realize SDE, it is mandatory that an asymmetry is induced in the CPR by higher-order phase (especially sine) terms such that the cosine terms are not absorbed into a mere phase shift [13; 45].
By fabricating Al/InAs-2DEG/Al ballistic JJs, Baumgartner et al. [13] observed supercurrent rectification. When an in-plane magnetic field is applied perpendicular to the current, the Rashba superconducting system shows an anomalous Josephson supercurrent due to even (cosine) terms in the CPR [156]. Such an anomalous CPR contains higher-harmonic sine terms if the junction transparency is high [159; 161], and thus leads to SDE. By theoretically studying a dS/FI/dS JJ made of d-wave SCs (dS) and a ferromagnetic insulator (FI) on the surface of a 3D topological insulator, Y. Tanaka and N. Nagaosa [45] showed that an asymmetric CPR containing a wide variety of phase terms leads to a high-quality SDE [45]. Apart from the conventional \(\sin\phi\) phase term in the Josephson current, the zero-energy Andreev bound state (ABS) at the dS/FI/dS interface enhances the \(\sin 2\phi\) component of \(I(\phi)\) [162; 163]. When the dS/FI/dS junction is placed on the surface of a topological insulator [164], simultaneous space-inversion and time-reversal symmetry breaking allows a \(\cos\phi\) phase term [141; 148], leading to an exotic current-phase relation with \(I(\phi)\neq-I(-\phi)\) [155], while the zero-energy ABS become MBS due to the spin-momentum locking [140; 141]. The simultaneous existence, with almost the same order of magnitude, of \(\sin\phi\), \(\cos\phi\), and \(\sin 2\phi\) phase terms promises a maximum value of the SDE efficiency (\(\eta=\pm 2\)) for the d-wave SC junction on the surface of a topological insulator [45]. In light of this, an optimal supercurrent rectification effect of a JJ can be realized by exploiting the exotic characteristics of unconventional SCs as well as by optimizing the junction transparency.
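The role of the competing harmonics can be made concrete with a short numerical sketch. Assuming a model CPR of the form \(I(\phi)=A_{1}\sin\phi+A_{2}\cos\phi+A_{3}\sin 2\phi\) (the amplitudes below are illustrative placeholders, not values from refs. [13; 45]) and using the normalization \(\eta=(I_{c}^{+}-|I_{c}^{-}|)/(I_{c}^{+}+|I_{c}^{-}|)\), which may differ from the convention of the main text by a constant factor, the nonreciprocity of the critical current follows directly from the extrema of \(I(\phi)\).

```python
import numpy as np

def diode_efficiency(A1, A2, A3, n_phi=20001):
    """Critical currents and efficiency for I(phi) = A1 sin(phi) + A2 cos(phi) + A3 sin(2 phi)."""
    phi = np.linspace(-np.pi, np.pi, n_phi)
    current = A1 * np.sin(phi) + A2 * np.cos(phi) + A3 * np.sin(2.0 * phi)
    ic_plus = current.max()                 # forward critical current
    ic_minus = abs(current.min())           # magnitude of the reverse critical current
    return ic_plus, ic_minus, (ic_plus - ic_minus) / (ic_plus + ic_minus)

if __name__ == "__main__":
    print(diode_efficiency(1.0, 0.0, 0.0))  # purely sinusoidal CPR: no diode effect
    print(diode_efficiency(1.0, 0.3, 0.0))  # cosine term alone only shifts the phase
    print(diode_efficiency(1.0, 0.3, 0.3))  # cosine + second harmonic: finite nonreciprocity
```

The first two calls return a vanishing efficiency, confirming that a cosine term alone is absorbed into a phase shift; only the combination of cosine and higher-harmonic sine terms yields a finite diode effect.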
## V Observation of Supercurrent Diode Effect
SDE is associated with the metal-superconductor transition and is defined as the nonreciprocity of the depairing critical current, i.e., the depairing critical currents in the directions parallel (\(j_{c}^{+}\)) and antiparallel (\(j_{c}^{-}\)) to the pairing momentum differ (\(j_{c}^{+}\neq j_{c}^{-}\)). An ideal SDE would be one in which either \(j_{c}^{+}\) or \(j_{c}^{-}\) is zero, so that \(\Delta j_{c}\) is maximal. Such a resistive transition between a supercurrent and a normal current can be realized either by extrinsic stimuli or via mechanisms that are intrinsic to the superconducting materials. For instance, the resistive transition can be caused by vortex motion, usually realized under out-of-plane magnetic fields. Owing to the dependence of the dynamics and the statistical mechanics of the vortex system on the device setup, such as impurity concentrations and thermal/quantum fluctuations [165], such an extrinsic mechanism promises tunability of the resistive transition by nanostructure engineering [166; 56]. Apart from the resistance caused by extrinsic mechanisms, the metal-superconductor resistive transition can also be caused by the dissociation of the Cooper pairs, resulting in a transition from a supercurrent to a normal current [167; 6]. This occurs at the maximum critical current, which is known as the depairing current. In other words, the depairing critical current is directly associated with
the closing of the superconducting gap, which shrinks and eventually closes with increasing supercurrent. As the depairing limit, or the upper limit of the critical current, is unique to each superconducting material, the depairing current is an intrinsic material parameter for characterizing SCs [165]. Thus, the intrinsic mechanism responsible for the SDE centres on the nonreciprocity of the depairing critical current in the fluctuation regime of the metal-superconductor resistive transition. In this picture, like many exotic characteristics of quantum materials, the intrinsic SDE is a nontrivial quantum mechanical effect.
Based on the working temperature, or the working regime of the phase diagram representing the metal-superconductor resistive transition, observations of the SDE can be classified into two main categories: (i) SDE based on the nonreciprocity of the depairing current near the superconducting transition temperature (\(T\approx T_{c}\)), i.e., in the fluctuation regime of the metal-superconductor resistive transition, and (ii) SDE based on the nonreciprocity of the supercurrent at sub-Kelvin temperatures (\(T\ll T_{c}\)), i.e., deep in the superconducting phase regime.
### Magnetochiral anisotropy of the resistance
In the fluctuation regime of the resistive transition close to \(T_{c}\), the SDE can be described by the MCA of the resistance (\(\gamma_{S}\), as defined in equation (3)), similar to that in semiconductors, and may be characterized by I-V curves. In this regime, the MCA coefficient \(\gamma_{S}\) can be found by measuring the second-harmonic signal in lock-in measurements.
Figure 4: **Magnetochiral anisotropy of the resistance.** **(T)** Nonreciprocal transport measurements of the critical current in the resistive fluctuation regime of a [Nb/V/Ta]\({}_{n}\) superlattice. **a** Magnetic field dependence of the first-harmonic (\(R_{\omega}\)) and second-harmonic (\(R_{2\omega}\)) sheet resistances. \(R_{\omega}\) vanishes in the superconducting region (white shading) while becoming finite in the normal conducting region (blue shading). \(R_{2\omega}\) is enhanced when the magnetic field orientation is orthogonal to the current direction and becomes maximal in the fluctuation region. **b** Temperature dependence of the second-harmonic sheet resistance. **c** Temperature dependence of the coefficient of magnetochiral anisotropy (\(\gamma\)) calculated from \(R_{2\omega}/R_{\omega}\). The plot roughly shows that \(\gamma\) increases with temperature and becomes maximal in the vicinity of \(T_{c}\), except for a dip appearing at 4.2 K and 4.3 K reflecting the small \(R_{2\omega}\) values at these temperatures. Figure is reproduced with permission from ref. [12]. **(B)** Nonreciprocal transport measurements of the critical current in the resistive fluctuation regime of a Rashba-type Al/InAs-2DEG/Al JJ array. **(a)** Temperature dependence of the first harmonics \(R_{\omega}(T,\theta)\) showing the resistive transition for different angles (\(\theta\)) of the in-plane magnetic field (\(B_{ip}\)). **(b)** Temperature dependence of the second harmonics \(R_{2\omega}(T,\theta)=V_{2\omega}(T,\theta)/I_{ac}\) of the I-V characteristics for different \(\theta\) values with an a.c. current bias of \(I_{ac}\) = 20 nA. **c** The coefficient of magnetochiral anisotropy \(2R_{2\omega}^{max}/R_{\omega}\) versus the orientation/angle \(\theta\) of the in-plane magnetic field. Here \(R_{2\omega}^{max}\) are the maxima of the second harmonics displayed in (b) and \(R_{\omega}\) is the corresponding linear resistance displayed in (a). The red data point shown at \(\theta=90^{\circ}\) is obtained by switching the orientation of \(B_{ip}\) at \(\theta=90^{\circ}\) (the data point in blue), which is equivalent to setting \(\theta=270^{\circ}\). The maximal coefficient of magnetochiral anisotropy, extracted from a sine fit of the data, is \(\gamma_{S}\simeq 4.1\times 10^{6}\) T\({}^{-1}\)A\({}^{-1}\). Figure is reprinted with permission from ref. [13]
That is, for an ac current (\(I_{in}=I\sin\omega t\)) with an amplitude of \(I\) and a frequency of \(\omega\) applied as input, the nonlinear voltage-drop and current-dependent resistance can be derived from the nonlinear resistance term in equation (3) as:
\[\begin{split} V_{2\omega}(t)&=\gamma BR_{\omega}I^{2 }\sin^{2}\omega t\\ &=\frac{1}{2}\gamma BR_{\omega}I^{2}\left[1+\sin\left(2\omega t- \frac{\pi}{2}\right)\right]\\ R_{2\omega}&=\frac{1}{2}\gamma BR_{\omega}I\end{split} \tag{9}\]
Here \(R_{\omega}\) corresponds to the current-independent linear resistance \(R_{0}\), while \(R_{2\omega}\) represents the second-order nonlinear resistance, which depends on both the current and the magnetic field. Thus, by measuring the first- (\(R_{\omega}\)) and second-harmonic (\(R_{2\omega}\)) sheet/junction resistances through the \(2\omega\) voltage response, \(\gamma_{S}\) can be estimated as \(\gamma_{S}=\frac{2R_{2\omega}}{BIR_{\omega}}\).
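For illustration, the harmonic decomposition in equation (9) can be mimicked numerically: generate the nonlinear voltage response to an a.c. drive and demodulate it at \(\omega\) and \(2\omega\). The parameter values in this Python sketch are arbitrary placeholders.

```python
import numpy as np

def lockin_harmonics(gamma=1.0e3, B=0.1, R0=5.0, I0=1.0e-3,
                     omega=2.0 * np.pi * 17.0, n_periods=200, n_pts=20000):
    """Recover R_omega, R_2omega and gamma from V(t) = R0*(1 + gamma*B*I(t))*I(t), cf. eq. (9)."""
    t = np.linspace(0.0, n_periods * 2.0 * np.pi / omega, n_pts, endpoint=False)
    I = I0 * np.sin(omega * t)
    V = R0 * (1.0 + gamma * B * I) * I                               # nonlinear (magnetochiral) response
    V_1w = 2.0 * np.mean(V * np.sin(omega * t))                      # first-harmonic amplitude
    V_2w = 2.0 * np.mean(V * np.sin(2.0 * omega * t - np.pi / 2.0))  # second-harmonic amplitude
    R_w, R_2w = V_1w / I0, V_2w / I0
    gamma_est = 2.0 * R_2w / (B * I0 * R_w)                          # gamma_S = 2 R_2w / (B I R_w)
    return R_w, R_2w, gamma_est

if __name__ == "__main__":
    print(lockin_harmonics())
```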
However, such resistive measurements cannot capture the intrinsic SDE at temperatures well below \(T_{c}\), since there is no measurable resistance in this regime (\(R_{0}=0\)). Thus, the SDE efficiency extracted from such measurements is expected to be finite only at \(T\approx T_{c}\), while negligibly small both at temperatures well below \(T_{c}\) and above \(T_{c}\) (\(\gamma_{N}\ll\gamma_{S}\)). For instance, Ando et al. [12] measured the MCA of the resistance by performing a.c. harmonic measurements on a Rashba-type bulk superconducting [V/Nb/Ta]\({}_{n}\) superlattice [12]. The MCA coefficient \(\gamma_{S}\) shows a sharp increase in the fluctuation regime and reaches its maximal value \(\gamma_{S}\simeq 550\) T\({}^{-1}\)A\({}^{-1}\) at \(T_{c}\). However, \(\gamma_{S}\) remains negligibly small at temperatures well below \(T_{c}\). Though this observation seems to be at variance with the theoretical predictions for the intrinsic SDE [29; 30; 31; 32] and with the temperature dependence of the experimentally measured MCA in JJs [13; 22], it is an expected outcome of resistive measurements. On the other hand, by fabricating symmetric Rashba-type Al/InAs-2DEG/Al JJs, Baumgartner et al. [13] measured the MCA for both the inductance (\(\gamma_{L}\)) and the resistance (\(\gamma_{S}\)). The finite MCA coefficient \(\gamma_{S}\simeq 4.1\times 10^{6}\) T\({}^{-1}\)A\({}^{-1}\) observed through resistive measurements near \(T_{c}\sim 1.45\) K is of the same order (namely, in the range of \(10^{6}\) T\({}^{-1}\)A\({}^{-1}\)) as the corresponding MCA coefficient observed for the inductance (measured at \(T=100\) mK), \(\gamma_{L}\simeq 0.77\times 10^{6}\) T\({}^{-1}\)A\({}^{-1}\).
### Magnetochiral anisotropy of the inductance
Unlike the fluctuation regime, where the nonreciprocity of the depairing critical current is tied to the nonlinear resistance, the nonreciprocity of the sub-Kelvin supercurrent promises a fully superconducting/dissipationless nonreciprocal circuit element. Deep in the sub-Kelvin superconducting regime of the phase diagram, i.e., far below the transition temperature where the resistance is zero (so DC measurements are not feasible), the supercurrent MCA and the corresponding SDE (supercurrent rectification/nonreciprocity) are characterized instead by measuring the kinetic (or Josephson) inductance (necessarily with AC measurements). By measuring the Josephson inductance, the nonreciprocal supercurrent can be linked to an asymmetry in the current-phase relation, induced by the simultaneous breaking of inversion and time-reversal symmetry such that B is not parallel to I, and the MCA coefficient (\(\gamma_{L}\)) for the supercurrent can be directly derived from equation (4).
This mechanism can be understood from a semiquantitative model [13; 42; 161] in which the Josephson inductance can be derived from the CPR \(I=I_{c0}f(\varphi)\) (where \(f\) is a \(2\pi\)-periodic function) and the second Josephson equation \(\dot{\varphi}=2\pi V/\Phi_{0}\) (where \(\Phi_{0}=h/(2e)\) is the magnetic flux quantum) as
\[L(I)=\frac{V}{\frac{dI}{dt}}=\frac{V}{\frac{dI}{d\varphi}\dot{\varphi}}=\frac{ \Phi_{0}}{2\pi I_{c0}\frac{df(\varphi)}{d\varphi}}=\frac{\Phi_{0}}{2\pi}\left[ \frac{dI(\varphi)}{d\varphi}\right]^{-1} \tag{10}\]
It shows that the Josephson inductance is a convenient probe for studying the symmetry of the current-phase relation (CPR), i.e., for investigating the effects of space-inversion/time-reversal symmetry breaking on the CPR. Let us assume a JJ configuration in which the electric current flows along the x-direction, while inversion and time-reversal symmetry are broken by applying an out-of-plane electric field \(\mathbf{E}=E_{z}\hat{z}\) and an in-plane magnetic field \(\mathbf{B}_{ip}=B_{x}\hat{x}+B_{y}\hat{y}\), respectively.
Equation (10) shows that \(L(I)\) is inversely proportional to the derivative of the CPR; therefore, the minimum of the Josephson inductance occurs at the inflection point of the CPR. In the absence of an in-plane magnetic field component along the y-direction (\(B_{y}=0\)), the CPR remains symmetric around the inflection point at zero phase, that is, \((i,\varphi)=(0,0)\). As a result, the minimum inductance occurs at zero current, around which \(L(I)\) is symmetric. On the other hand, in the presence of an in-plane magnetic field component along the y-direction (\(B_{y}\neq 0\)), the CPR becomes asymmetric around the inflection point (\(i^{*},\varphi^{*}\)), mainly associated with the broken Kramers degeneracy between the oppositely polarized spin components of the Andreev bound states (ABS) leading to finite-momentum pairing. As a result, the current dependence of the Josephson inductance \(L(I)\) also becomes asymmetric and the minimum of \(L(I)\) appears at a finite current \(i^{*}\), corresponding to the shifted inflection point (\(i^{*},\varphi^{*}\)) of the CPR.
Such a pronounced asymmetry in the skewed CPR and, thus, in the Josephson inductance L(I), signals the supercurrent MCA (as defined in equation (4)) and hence the supercurrent SDE. First, for a given orientation of the electric field and polarity of the applied current, the shift in the inflection point switches together with the sign of \(B_{y}\): \((i^{*},\varphi^{*})\) for \(+B_{y}\) and \((-i^{*},-\varphi^{*})\) for \(-B_{y}\), as shown in figure 5(Top(d,e)). Second, for a given orientation of \(B_{y}\), the CPR becomes more skewed with increasing \(B_{y}\), implying an increase in the value of \(i^{*}\) with increasing strength of \(B_{y}\), as shown in figure 5(Bottom-a). As a result, as shown in figure 5(Top-d), the extremal values of \(i^{*}\) (which are
the critical currents \(I_{c}^{+}\) and \(I_{c}^{-}\)) differ for positive (\(\varphi_{c}^{+}\)) and negative (\(\varphi_{c}^{-}\)) phase differences, signaling the existence of a certain bias-current range in which the SDE can be observed, with the supercurrent behaviour differing for opposite phase-difference polarities. That is, the junction allows a supercurrent (\(I<I_{c}^{+}\) (red curve) or \(|I|<|I_{c}^{-}|\) (blue curve)) along one current direction, while it enters a resistive state (\(|I|>|I_{c}^{-}|\) (red curve) or \(I>I_{c}^{+}\) (blue curve)) along the other current direction.
Figure 5: **Current-phase relation and nonreciprocity of inductance in a JJ array.** **(T)** Device fabrication, current-phase relation, and measurement of inductance. **(a)** A JJ array is made of 2,250 Al islands (grey), of width w=3.15 \(\mu\)m, length a=1.0 \(\mu\)m and separated by d=0.1 \(\mu\)m, on top of a Rashba-type InAs quantum well (yellow) sandwiched between InGaAs barriers. Red and blue arrows represent the spontaneous supercurrents, with zero phase difference, via spin-split pairs of Andreev bound states, denoted by black and white particles representing an oppositely spin-polarized electron and hole. The strength and direction of these spontaneous supercurrents depend on that of an in-plane magnetic field \(B_{ip}\). Counterpropagating circles of black arrows represent the Rashba spin texture in the InAs quantum well. **(b)** Fabricated device showing the growth sequence of the heterostructure. The Al layer induces a superconducting gap \(\Delta^{*}\), via the proximity effect, in the InAs quantum well. **(c)** Scanning electron micrograph of the array with a scale bar of 1 \(\mu\)m. **(d)** Illustrative current-phase relation for a short-ballistic JJ, with high transparency (\(\tau\)=0.94) and strong SOI, in the absence (black) and presence (red/blue) of an in-plane magnetic field \(\mathbf{B}_{y}\parallel\hat{\mathbf{y}}\) (red, \(B_{y}>0\); blue, \(B_{y}<0\)). The finite magnetic field (\(\pm B_{y}\)) reduces the critical current by a factor 0.8, \(I_{c}=0.8I_{c0}\), and adds a cosine term \(\pm 0.2I_{c}\cos(\phi)\) to the current-phase relation's Fourier series. The red dots represent the inflection points (\(i^{*},\phi^{*}\)) of the current-phase relation. **(e)** Josephson inductance (in units of \(\Phi_{0}/2\pi I_{c0}\)) as a function of current (in units of \(I_{c0}\)), corresponding to the current-phase relation in (d). **(f)** Resonance curves for the RLC circuit, measured at 500 mK, for different values of the bias current. **(g)** Current dependence of the measured Josephson inductance (at \(B=0\)). Coloured dots correspond to the spectra in (f). **(B)** Measurements of inductance and supercurrent anisotropy. **(a)** Kinetic inductance versus current, for different orientations of an in-plane magnetic field of 100 mT. **(b,c)** Constant (b) and linear (c) coefficients of the polynomial expansion of the kinetic inductance L(I) as a function of the angle (\(\theta\)) between the in-plane magnetic field \(\mathbf{B}_{ip}\) and the supercurrent density directed along \(\hat{x}\). **(d)** Measured supercurrent magnetochiral anisotropy (coloured lines and symbols) \(-2L_{0}^{\prime}/(L_{0}B_{ip})\) versus the in-plane magnetic field orientation (\(\theta\)). The maximum magnetochiral anisotropy coefficient, \(\gamma_{L}\simeq 0.77\times 10^{6}\) T\({}^{-1}\)A\({}^{-1}\), is extracted from a sinusoidal fit of the data. The fitted supercurrent magnetochiral anisotropy (grey-scale lines) is computed within the semiquantitative model (eq. (10)) for different values of the confinement potential \(V_{conf}\). The three fitted curves are perfect sinusoidal functions. All measurements are performed at T = 100 mK. Figure is reproduced with permission from ref. [13].
The MCA of the inductance can be quantified by measuring the constant (\(L_{0}\)) and the linear (\(L_{0}^{\prime}\)) junction inductance terms, which appear as the leading terms in the polynomial expansion of L(I) around zero current: \(L(I)\approx L_{0}+L^{\prime}I+L^{\prime\prime}I^{2}/2\) with \(L^{\prime}\equiv\partial_{I}L|_{I=0}\) and \(L^{\prime\prime}\equiv\partial_{I}^{2}L|_{I=0}\). As shown in figure 5(Bottom(b,c)), \(L_{0}\) and \(L_{0}^{\prime}\) are plotted as functions of the angle \(\theta\) between the direction of the supercurrent \(\hat{\mathbf{x}}\) and the orientation of the applied in-plane magnetic field \(\mathbf{B}_{ip}\). In the Hall-bar geometry of Al/InAs-2DEG/Al junctions with a Ti-Au global top gate, the constant term \(L_{0}\) strongly depends on the gate voltage, reaches its maximum when \(B_{y}=0\), and shows relatively small anisotropy. In contrast, the linear term \(L_{0}^{\prime}\) shows a relatively weak dependence on the gate voltage, completely vanishes when \(B_{y}=0\), reaches its maximum when \(B_{x}=0\), and thus shows strongly anisotropic behaviour. As shown in figure 5(Bottom-d), the MCA coefficient for the inductance \(\gamma_{L}=2L_{0}^{\prime}/(L_{0}B_{ip})\) shows a sinusoidal \(\theta\)-dependence, that is, proportional to \((B\times I)\cdot\hat{z}=BI\sin\theta\), and agrees with the numerical results obtained from the semiquantitative model. In addition, \(\gamma_{L}\) remains nearly independent of the gate voltage and its maximum, extracted from the amplitude of the sine, reads \(\gamma_{L}\simeq 0.77\times 10^{6}\) T\({}^{-1}\)A\({}^{-1}\). This value of \(\gamma_{L}\), obtained from measurements performed at T = 100 mK, far below the transition temperature (\(T_{c}\)), is of the same order as that of \(\gamma_{S}\) obtained from resistive measurements at \(T_{c}\).
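The semiquantitative picture of equation (10) and of the expansion coefficients \(L_{0}\) and \(L_{0}^{\prime}\) introduced above can be reproduced with a few lines of code. The Python sketch below assumes the standard zero-temperature short-junction CPR \(\propto\sin\phi/\sqrt{1-\tau\sin^{2}(\phi/2)}\) together with the illustrative amplitudes quoted in Fig. 5(T-d); the in-plane field value used to form \(\gamma_{L}\) is a placeholder.

```python
import numpy as np

PHI0 = 2.067833848e-15  # magnetic flux quantum (Wb)

def cpr(phi, tau=0.94, a=0.2):
    """Toy CPR: transparent short-junction term plus a field-induced cosine term (units of I_c0)."""
    ballistic = lambda p: np.sin(p) / np.sqrt(1.0 - tau * np.sin(p / 2.0) ** 2)
    norm = np.max(np.abs(ballistic(np.linspace(-np.pi, np.pi, 4001))))
    return 0.8 * ballistic(phi) / norm + a * np.cos(phi)

def expansion_coefficients(tau=0.94, a=0.2, Ic0=1e-6, window=0.25):
    """L0 and L0' from the low-current expansion of the Josephson inductance, eq. (10)."""
    phi = np.linspace(-1.2, 1.2, 8001)
    I = Ic0 * cpr(phi, tau, a)
    L = PHI0 / (2.0 * np.pi * np.gradient(I, phi))   # eq. (10)
    mask = np.abs(I) < window * Ic0                  # keep a window around zero current
    c2, c1, c0 = np.polyfit(I[mask], L[mask], 2)     # L(I) ~ L0 + L0'*I + L0''*I^2/2
    return c0, c1

if __name__ == "__main__":
    B_ip = 0.1                                       # in-plane field in tesla (placeholder)
    for a in (0.0, 0.2, -0.2):                       # a = 0 mimics B_y = 0 (no cosine term)
        L0, L0p = expansion_coefficients(a=a)
        print(f"a={a:+.1f}:  L0={L0:.3e} H,  L0'={L0p:.3e} H/A,"
              f"  gamma_L={2.0 * L0p / (L0 * B_ip):.3e} T^-1 A^-1")
```

With the cosine term switched off the linear coefficient vanishes, while a finite amplitude of either sign produces an \(L_{0}^{\prime}\) of opposite sign, mirroring the \(\pm B_{y}\) behaviour described above.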
## VI Outlook
SDE is a captivating phenomenon and could be a promising building block of superconducting dissipationless technologies. Thus far, by characterizing the type/nature of the SOI and optimizing/matching the SOI energy with the characteristic energy scale (superconducting gap) of the charge carriers [57], SDE has been observed in both Rashba SCs and Ising SCs. Recent theoretical studies show that SDE is strongest (i) when the Cooper pairing momentum lies at the crossover between the weak and the strong helical superconducting phase in the vicinity of a high critical field, which may be realized by optimizing the magnetic field (or intrinsic magnetization), temperature, and SOI [32], and/or (ii) when the Fermi level lies at the band crossing point of the two helical bands, which may be tuned by gating [31]. From here on, one of the prime goals is to expand the existing platforms and mechanisms for the observation of SDE. For instance, considering the discussion on the optimization of the SDE originating from MCA, one of the remaining challenges is to identify suitable superconducting materials which may provide the best performance. Thus far, in addition to conventional superconducting structures, the SDE has also been predicted and/or observed in unconventional superconducting structures such as twisted few-layer graphene, ferroelectric materials, topological semimetals, and topological insulators. The recent observation of extremely long-range and high-temperature Josephson coupling across a half-metallic ferromagnet [168] and the prediction of SDE in a JJ with half-metals [169] open another route for the search and utilization of a promising quantum material class, known as spin-gapless materials [170; 171; 172; 173].
In passing, it is interesting to note that the realization of SDE via Rashba SOI and Zeeman exchange interaction in ferromagnetic SCs has a close connection to the realization of QAHE via Rashba SOI and Zeeman exchange interaction in ferromagnetic topological insulators. In the latter case, the combined effect of Rashba SOI and Zeeman exchange leads to a spin splitting of the low-energy bands such that only one of the spin sectors displays nontrivial band topology with an inverted band structure, while the other spin sector remains trivial with a normal band structure. As a result, when the Fermi level is tuned inside the energy band gap, the spin-momentum locked chiral edge state leads to a quantized conductance. In the former case, however, the low-energy bands in both spin sectors play a role, mainly due to the formation of intra-(Fermi)surface and inter-(conduction)band spin-singlet Cooper pairing. As a result, when the Fermi level is tuned inside the superconducting gap, the locking between the magnetization orientation and the finite momentum of the Cooper pairing leads to finite MCA and nonreciprocity of the supercurrent. Such a fundamental connection between the realization of QAHE and SDE may allow searching for suitable topological superconducting materials based on heterostructures of s-wave SCs and QAH insulators [174; 175; 176; 177]. In addition, intrinsic iron-based SCs, where Rashba SOI-driven band topology and superconductivity coexist [178], may also provide a promising platform for the realization of SDE in topological superconducting materials [36; 37; 38; 54; 55; 56].
However, regarding the orientation and the strength of the exchange interaction, it is important to remember two differences between the realization of SDE and QAHE. (i) The magnetization orientation needs to be in-plane (at an angle to the polar axis) for SDE, while out-of-plane for QAHE. (ii) The nontrivial QAH gap saturates beyond a critical strength of the exchange interaction. On the other hand, the strength of the SDE decreases beyond the critical value of the exchange interaction \(h^{*}\), corresponding to the crossover between the weak and strong helical phase, and vanishes for too high values.
On the other hand, considering the reliance of SDE on intrinsic system parameters, the search for novel mechanisms may open new routes towards the observation of an ideal SDE. In a broader sense, SDE is a manifestation of the interplay between superconductivity and spatial inversion asymmetry. Apart from its realization via MCA induced by time-reversal symmetry breaking, it could also be realized via shift currents induced by a nontrivial Berry phase in time-reversal symmetric systems. Furthermore, for JJs, M. Davydova et al. [40] recently proposed that finite-momentum Cooper pairing, which elucidates the origin of SDE, can also be achieved without relying on SOI.
Similar to the gate controllability of the Fermi level and thus the tunability of the SDE strength [31], it would be intriguing to understand electric-field effects on the intrinsic properties of a superconducting structure, the switching of the SDE, and its utilization for dissipationless logic/memory applications. For instance, from the material aspect, SOI, critical current, and pair breaking are the most important intrinsic properties directly impacting the SDE. Antisymmetric SOI, Rashba and Zeeman SOI, and thus the corresponding spin splitting, can be tuned via an electric field. Superconducting pair breaking shows a strong dependence on the strength and the frequency/wavelength of the electric field [179]. Similarly, it has been shown that the gate-tunable critical current in NbN micro- and nano-superconducting bridges [180] can be enhanced by up to 30%. Electric-field tunability of superconducting properties has recently been discussed for various ionic-gated superconducting materials, including cuprates, iron-based SCs, and honeycomb structures such as transition-metal dichalcogenides and bilayer SCs [181; 182]. From the device perspective, it would be intriguing to replicate the magnetic field (or intrinsic magnetization) driven switching of the SDE with electric-field driven switching via electrical control of the magnetization orientation. Electric-field driven switching of the SDE may also be realized by devising a reversible SDE via an electric switch of ferroelectricity [34]. Furthermore, the gate-controlled barrier transparency in a Rashba-semiconductor-based JJ (Al/InAs/Al) [159] and the gate-controlled asymmetry of the highly skewed CPR in a topological insulator (BiSbTeSe\({}_{2}\)) based JJ [183] demonstrate potential routes for controlling the SDE in gate-controlled JJs.
The plausible electric-field controllability of SDE and the intertwining between band topology and superconductivity may allow the search for new mechanisms/functionalities [184; 185; 186; 187] of topological quantum materials for steering the engineering of low-power and low-dimensional topological superconducting technologies. We hope this article provides a route to understanding and achieving the optimal performance of SDE and its utilization for superconducting logic/memory device applications.
###### Acknowledgements.
This research is supported by the Australian Research Council (ARC) Centre of Excellence in Future Low-Energy Electronics Technologies (FLEET Project No. CE170100039) and funded by the Australian Government.
|
2309.17120 | On Direction Preserving Discretizations for Computing Phase-Space
Densities | Ray flow methods provide efficient tools for modelling wave energy transport
in complex systems at high-frequencies. We compare two Petrov-Galerkin
discretizations of a phase-space boundary integral model for stationary wave
energy densities in two-dimensional domains. The directional dependence is
approximated using a finite set of directions oriented into the domain from the
boundary. The propagation direction can be preserved across multi-component
domains when the directions within the local set for a given region of the
boundary are taken as a subset of a global direction set. In this work we
compare the use of piecewise constant and piecewise linear test functions,
which physically corresponds to the interpolation scheme used when the
transport is in a direction not belonging to the finite global set. | David J. Chappell, Martin Richter, Gregor Tanner | 2023-09-29T10:31:36Z | http://arxiv.org/abs/2309.17120v1 | # On Direction Preserving Discretizations for Computing Phase-Space Densities
###### Abstract
Ray flow methods provide efficient tools for modelling wave energy transport in complex systems at high-frequencies. We compare two Petrov-Galerkin discretizations of a phase-space boundary integral model for stationary wave energy densities in two-dimensional domains. The directional dependence is approximated using a finite set of directions oriented into the domain from the boundary. The propagation direction can be preserved across multi-component domains when the directions within the local set for a given region of the boundary are taken as a subset of a global direction set. In this work we compare the use of piecewise constant and piecewise linear test functions, which physically corresponds to the interpolation scheme used when the transport is in a direction not belonging to the finite global set.
## 1 Introduction
Dynamical energy analysis (DEA) is an approach for modelling wave energy densities at high-frequencies that was first proposed just over ten years ago [1, 2, 3]. DEA is based on a linear integral operator description of phase-space density transport along ray trajectories between positions on the boundary of a domain or sub-domain. Recent developments have seen the capability of DEA extended to industrial applications [1, 2], as well as stochastic propagation through uncertain structures [4, 5, 6].
A Petrov-Galerkin method that is efficient for modelling densities with strong directional dependence was recently proposed [7]. This class of problems has usually proved problematic for the DEA method [1]. In [7], a basis approximation using Dirac delta distributions was proposed to approximate the directional dependence. The direction of transport can be preserved throughout multi-domains provided that the Dirac delta specified direction set local to any part of the boundary is inherited from a common global set of directions; the propagation is then able to continue along rays with the same global direction through different sub-domains. A beneficial consequence is that the proposed methodology can be applied directly on complicated domains formed from a potentially large number of simpler sub-domains as is typically the case with the finite element type meshes used for industrial applications.
In this study we investigate the effect of modifying the choice of test functions from the piecewise constant indicator functions employed in the Petrov-Galerkin scheme proposed in [7]. For classical potential problems, it is known that the convergence of Petrov-Galerkin schemes based on Dirac delta distribution basis approximations together with splines as test functions crucially depends on this choice [8, 9, 10]. In particular, higher order test functions were shown to lower the regularity requirements on the boundary data. In comparison to more standard projection methods for integral equations, such as the collocation and Galerkin methods, these Petrov-Galerkin schemes combine the efficiency and ease of implementation of collocation based schemes with potentially even lower regularity requirements for convergence than the Galerkin method [9]. These features are important in this work since ray tracing solutions can often have low regularity and the added dimensionality of working in phase-space means that a faster implementation with fewer integrals to compute is desirable.
## 2 Phase-Space Boundary Integral Model
We are concerned with transporting densities along ray trajectories through multi-domains \(\Omega=\cup_{j=1}^{K}\Omega_{j}\). The ray dynamics in \(\Omega_{j}\) is defined by the Hamiltonian \(H_{j}(\mathbf{r},\mathbf{p})=|\mathbf{p}|/\eta(\mathbf{r})\equiv 1\) for \(j=1,\ldots,K\). The phase-space coordinates \((\mathbf{r},\mathbf{p})\) denote the position \(\mathbf{r}\) and momentum \(\mathbf{p}\) vectors, respectively. Each \(\Omega_{j}\), \(j=1,\ldots,K\) is assumed to be a convex polygon containing a homogeneous medium. As a consequence we may write \(\eta(\mathbf{r})=\eta_{j}\) when \(\mathbf{r}\in\Omega_{j}\) for \(j=1,\ldots,K\), where \(\eta_{j}\), \(j=1,\ldots,K\) are
constants. When \(\eta_{j}=c_{j}^{-1}\) is taken to be the inverse of the phase speed in \(\Omega_{j}\), then \(H_{j}\) defines the ray trajectories obtained via leading order high-frequency asymptotics for the Helmholtz equation
\[c_{j}^{2}\,\Delta u+\omega^{2}u=0, \tag{1}\]
at angular frequency \(\omega\). The choice of \(\eta\) can easily be modified for flexural waves with nonlinear dispersion - see [7].
In order to express our model in boundary integral form it is convenient to introduce the phase-space boundary coordinates \(Y_{j}=(s_{j},p_{j})\) on \(\Gamma_{j}=\partial\Omega_{j}\), \(j=1,\ldots,K\), where \(s_{j}\) is an arclength parameter for \(\Gamma_{j}\) and \(p_{j}=\eta_{j}\sin(\theta_{j})\) is the tangential component of \({\bf p}\) at \(s_{j}\). Here \(\theta_{j}\) is the angle formed between the ray (oriented into \(\Omega_{j}\)) and the inward normal vector at \(s_{j}\) as shown in Fig. 1. We next introduce a local boundary flow map \(\varphi_{i,j}(s_{j},p_{j})=(s_{i}^{\prime}(s_{j},p_{j}),p_{i}^{\prime}(s_{j},p _{j}))\), which describes the discrete evolution of the rays at times coinciding with boundary intersections. As written here, \(\varphi_{i,j}\) maps the boundary phase-space coordinate \((s_{j},p_{j})\) on the boundary of \(\Omega_{j}\) to \((s_{i}^{\prime}(s_{j},p_{j}),p_{i}^{\prime}(s_{j},p_{j}))\) on the boundary of \(\Omega_{i}\). To write \(\varphi_{i,j}\) in this form we have implicitly assumed that either \(i=j\), or \(\Gamma_{i}\) and \(\Gamma_{j}\) share a common edge through which the ray can transmit. As before, \(p_{i}^{\prime}(s_{j},p_{j})=\eta_{i}\sin(\theta_{i}^{\prime}(s_{j},p_{j}))\) denotes the tangential slowness at \(s_{i}^{\prime}\), and \(\theta_{i}^{\prime}\) is the angle formed between the outgoing ray and the inward normal vector to \(\Gamma_{i}\) at \(s_{i}^{\prime}\). In the simplest cases, \(\theta_{i}^{\prime}\) is obtained from either a specular reflection when \(i=j\), or if \(i\neq j\) and \(\eta_{i}=\eta_{j}\) then the ray will continue in the same direction into \(\Omega_{i}\).
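To make the boundary map concrete, the short Python sketch below (an illustrative construction, not code from [7]) traces a ray from a boundary point of a convex polygon to its next boundary intersection and returns the arrival coordinates together with the specularly reflected direction; this corresponds to one application of \(\varphi_{j,j}\) for a single homogeneous sub-domain.

```python
import numpy as np

def boundary_map(vertices, edge, s_local, theta):
    """One step of the boundary flow for a convex polygon with vertices listed anticlockwise.

    edge, s_local : index of the starting edge and arclength along it
    theta         : ray direction measured from the inward normal of the starting edge
    Returns (arrival edge, local arclength, reflected angle, chord length).
    """
    V = np.asarray(vertices, float)
    n_edges = len(V)
    a, b = V[edge], V[(edge + 1) % n_edges]
    tangent = (b - a) / np.linalg.norm(b - a)
    normal = np.array([-tangent[1], tangent[0]])      # inward normal for an anticlockwise polygon
    start = a + s_local * tangent
    direction = np.cos(theta) * normal + np.sin(theta) * tangent
    best = (np.inf, None, None)
    for k in range(n_edges):                          # find the first edge hit by the ray
        if k == edge:
            continue
        p, q = V[k], V[(k + 1) % n_edges]
        M = np.column_stack((direction, p - q))
        if abs(np.linalg.det(M)) < 1e-12:
            continue
        t, u = np.linalg.solve(M, p - start)
        if t > 1e-9 and 0.0 <= u <= 1.0 and t < best[0]:
            best = (t, k, u * np.linalg.norm(q - p))
    t, k, s_new = best
    tan_k = V[(k + 1) % n_edges] - V[k]
    tan_k /= np.linalg.norm(tan_k)
    # specular reflection preserves the tangential component of the direction
    theta_new = np.arcsin(np.clip(np.dot(direction, tan_k), -1.0, 1.0))
    return k, s_new, theta_new, t

if __name__ == "__main__":
    square = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
    print(boundary_map(square, edge=0, s_local=0.25, theta=np.pi / 6))
```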
Phase-space densities are transported throughout \(\Omega\) using a local boundary operator \({\cal B}_{j}\), which transports a density \(f\) along the boundary flow \(\varphi_{i,j}\) as follows [1, 7]
\[{\cal B}_{j}[f](X_{i}):=\int e^{-\mu_{j}D(X_{i},Y_{j})}w_{i,j}(Y_{j})\delta(X_ {i}-\varphi_{i,j}(Y_{j}))f(Y_{j})\,{\rm d}Y_{j}. \tag{2}\]
Here \(X_{i}\in\Gamma_{i}\times(-\eta_{i},\eta_{i})\) for some \(i=1,2,\ldots,K\) and \(w_{i,j}\) has been introduced to incorporate reflection/transmission coefficients. An exponential damping term with coefficient \(\mu_{j}>0\) has also been introduced, where \(D(X_{i},Y_{j})\) denotes the distance between \(s_{j}\) and the solution point. The global boundary operator \({\cal B}\) is then given by \({\cal B}=\sum_{j}{\cal B}_{j}\), where the sum is taken over each \(\Omega_{j}\) that shares an edge with \(\Omega_{i}\), including \(\Omega_{i}\) itself.
The stationary boundary density \(\rho\) can be expressed in terms of a Neumann series
\[\rho=\sum_{n=0}^{\infty}{\cal B}^{n}[\rho_{0}]=(I-{\cal B})^{-1}[\rho_{0}], \tag{3}\]
where \(\rho_{0}\) is a given initial boundary density and \({\cal B}^{n}\) represents \(n\) iterates of the operator \({\cal B}\). Once \(\rho\) has been evaluated using (3), the interior density \(\rho_{\Omega}\) can be calculated by projecting onto a prescribed solution point \({\bf r}\in\Omega_{j}\) using [2]
\[\rho_{\Omega}({\bf r})=\eta_{j}^{2}\int_{0}^{2\pi}e^{-\mu_{j}D({\bf r},s_{j})} \rho(s_{j}({\bf r},\Theta),p_{j}({\bf r},\Theta))\,{\rm d}\Theta. \tag{4}\]
Here, \(\Theta\in[0,2\pi)\) is the polar angle parametrising trajectories approaching \({\bf r}\) from \(s_{j}({\bf r},\Theta)\in\Gamma_{j}\) and \(D\) is used to represent the length of the trajectory between \({\bf r}\in\Omega_{j}\) and \(s_{j}\in\Gamma_{j}\).
## 3 Petrov-Galerkin discretization
In this section we introduce two direction preserving discretizations of the boundary operator (2) using Petrov-Galerkin projections in order to numerically solve for \(\rho\) via a discretised form of equation (3). We first split \(\Gamma_{j}\) into boundary elements \(E_{m}^{j}\) for \(m=1,2,\ldots,M_{j}\) and define the global ray directions \(\Phi_{l}\in[0,2\pi)\) for \(l=1,2,\ldots,L\). These global directions are defined anti-clockwise relative to the positive \(x_{1}\)-axis as specified in [7]. Here we take \(\Phi_{l}=2\pi(l-1)/L\), but note that this choice is flexible and can easily be amended to include dominant transmission paths, where known. Let \(\phi_{n}(s_{j})\in(-\pi/2,\pi/2)\),
\(n=1,2,\ldots,N_{m}\) denote the local ray directions at \(s_{j}\in\Gamma_{j}\). The local directions at \(s_{j}\) are simply a subset of the global directions taken as all directions which satisfy the property of being directed into \(\Omega_{j}\) from \(s_{j}\). The local directions are also given a local numbering based on their direction relative to the interior normal at \(s_{j}\) - see Fig. 2.
We apply an approximation of the form
\[\rho(s_{j},p_{j})\approx\sum_{m=1}^{M_{j}}\sum_{n=1}^{N_{m}}\rho_{(j,m,n)}b_{m} (s_{j})\delta(p_{j}-\tilde{p}_{n}(s_{j})),\qquad j=1,\ldots,K, \tag{5}\]
where \(\tilde{p}_{n}(s_{j})=\eta_{j}\sin(\phi_{n}(s_{j}))\) and \(b_{m}(s_{j})=|E_{m}^{j}|^{-1/2}:=(\mathrm{diam}(E_{m}^{j}))^{-1/2}\) for \(s_{j}\in E_{m}^{j}\) or \(b_{m}(s_{j})=0\) otherwise. We impose a standard (Bubnov) Galerkin projection in \(s_{j}\) with the orthonormal basis \(b_{m}\), \(m=1,2,\ldots,M_{j}\). It is therefore only in the momentum variable \(p_{j}\) that we apply a Petrov-Galerkin projection and compare two possible choices of test functions that are orthonormal in the \(L^{2}\) inner product to \(\delta(p_{j}-\tilde{p}_{n}(s_{j}))\) for \(n=1,2,\ldots,N_{m}\). The first set of test functions that we consider are the indicator functions
\[\chi_{n}(p_{j})=\tilde{\chi}_{n}(\arcsin(p_{j}/\eta_{j}))\]
introduced in [7], where \(\tilde{\chi}_{n}(\theta_{j})=1\) if \(\theta_{j}\in((\phi_{n-1}+\phi_{n})/2,(\phi_{n}+\phi_{n+1})/2)\) and zero otherwise. The second set of test functions that we propose are the piecewise linear hat functions \(B_{n}(p_{j})=\tilde{B}_{n}(\arcsin(p_{j}/\eta_{j}))\), where
\[\tilde{B}_{n}(\theta_{j})=\left\{\begin{array}{ccc}1+\frac{ \theta_{j}-\phi_{n}}{\phi_{n}-\phi_{n-1}}&\mbox{if}&\theta_{j}\in(\phi_{n-1}, \phi_{n}],\\ 1+\frac{\phi_{n}-\theta_{j}}{\phi_{n+1}-\phi_{n}}&\mbox{if}&\theta_{j}\in(\phi _{n},\phi_{n+1}],\\ 0&\mbox{otherwise}.\end{array}\right.\]
In both cases we note that
\[\langle\delta(\cdot-\tilde{p}_{n}(s_{j})),\chi_{n^{\prime}}\rangle_{L^{2}(- \eta_{j},\eta_{j})}=\langle\delta(\cdot-\tilde{p}_{n}(s_{j})),B_{n^{\prime}} \rangle_{L^{2}(-\eta_{j},\eta_{j})}=0 \tag{6}\]
for all \(n\neq n^{\prime}\); when \(n=n^{\prime}\), both inner products in (6) instead equal one.
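A small numerical check of the two families of momentum test functions may be helpful here; since the basis functions are Dirac distributions, the duality relation (6) amounts to evaluating each test function at the directions \(\phi_{n}\), which reproduces the identity for both choices. The Python sketch below uses an arbitrary set of seven equispaced local directions.

```python
import numpy as np

def chi_tilde(theta, n, phi):
    """Indicator test function associated with the local direction phi[n]."""
    lo, hi = 0.5 * (phi[n - 1] + phi[n]), 0.5 * (phi[n] + phi[n + 1])
    return np.where((theta > lo) & (theta <= hi), 1.0, 0.0)

def B_tilde(theta, n, phi):
    """Piecewise linear (hat) test function centred on phi[n]."""
    up = 1.0 + (theta - phi[n]) / (phi[n] - phi[n - 1])
    down = 1.0 + (phi[n] - theta) / (phi[n + 1] - phi[n])
    out = np.where((theta > phi[n - 1]) & (theta <= phi[n]), up, 0.0)
    return np.where((theta > phi[n]) & (theta <= phi[n + 1]), down, out)

if __name__ == "__main__":
    phi = np.linspace(-np.pi / 2, np.pi / 2, 9)[1:-1]   # seven local directions
    interior = list(range(1, len(phi) - 1))             # directions with both neighbours defined
    for name, f in (("chi", chi_tilde), ("B", B_tilde)):
        duality = np.array([[float(f(phi[k], n, phi)) for k in range(len(phi))]
                            for n in interior])
        print(name, "satisfies (6):", np.allclose(duality, np.eye(len(phi))[interior]))
```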
We now apply the various Galerkin projections described above to the operator \(\mathcal{B}\). Using piecewise linear test functions in direction leads to a matrix representation \(B\) of \(\mathcal{B}\) as follows:
\[\begin{split} B_{I,J}&=\int_{\Gamma_{j}\times(-\eta_{j},\eta_{j})}\int_{\Gamma_{i}\times(-\eta_{i},\eta_{i})}e^{-\mu_{j}D(X_{i},Y_{j})}w_{i,j}(Y_{j})\delta(X_{i}-\varphi_{i,j}(Y_{j}))\,b_{m}(s_{j})\delta(p_{j}-\tilde{p}_{n}(s_{j}))\,b_{m^{\prime}}(s_{i})B_{n^{\prime}}(p_{i})\,\mathrm{d}X_{i}\,\mathrm{d}Y_{j}\\ &=\int_{\Gamma_{j}\times(-\eta_{j},\eta_{j})}e^{-\mu_{j}D(\varphi_{i,j}(Y_{j}),Y_{j})}w_{i,j}(Y_{j})\,b_{m}(s_{j})\delta(p_{j}-\tilde{p}_{n}(s_{j}))\,b_{m^{\prime}}(s_{i}^{\prime}(Y_{j}))B_{n^{\prime}}(p_{i}^{\prime}(Y_{j}))\,\mathrm{d}Y_{j}\\ &=\frac{w_{i,j}(\tilde{p}_{n})}{|E_{m}^{j}|^{1/2}}\int_{E_{m}^{j}}e^{-\mu_{j}D_{i}(s_{j})}\,b_{m^{\prime}}(s_{i}^{\prime}(s_{j},\tilde{p}_{n}(s_{j})))\,B_{n^{\prime}}(p_{i}^{\prime}(s_{j},\tilde{p}_{n}(s_{j})))\,\mathrm{d}s_{j}.\end{split} \tag{7}\]

Here the multi-indices \(I\) and \(J\) label the test and basis functions, respectively. The delta distributions have been used to evaluate the integrals over \(X_{i}\) and \(p_{j}\), and so the phase-space integral has
been simplified to just one integral over \(E_{m}^{j}\subset\Gamma_{j}\). Note that in the third line of (7) we have made the additional assumption that the reflection/transmission coefficients \(w_{i,j}(s_{j},\tilde{p}_{n}(s_{j}))=w_{i,j}(\tilde{p}_{n})\) depend only on the arrival direction of the ray along an interface and not on the position along the interface. For homogeneous polygonal sub-domains, the Euclidean distance function \(D_{i}(s_{j})\) is linear in \(s_{j}\in E_{m}^{j}\) and hence the one integral remaining in the third line of (7) can be evaluated relatively simply. One can also apply analytic spatial integration for higher order (Legendre polynomial) spatial basis functions as detailed in [11].
The expansion coefficients in (5) can now be obtained by solving a linear system
\[\boldsymbol{\rho}=(I-B)^{-1}\boldsymbol{\rho}_{0}\]
corresponding to the discretized operator equation (3). Here, the vectors \(\boldsymbol{\rho}_{0}\) and \(\boldsymbol{\rho}\) contain the expansion coefficients for \(\rho_{0}\) and \(\rho\), respectively. The source vector \(\boldsymbol{\rho}_{0}\) can be evaluated from the prescribed initial density \(\rho_{0}\) by making use of the orthonormality properties of the bases in both position and momentum as follows
\[[\boldsymbol{\rho}_{0}]_{J}=\int_{\Gamma_{j}\times(-\eta_{j},\eta_{j})}\rho_ {0}(s_{j},p_{j})b_{m}(s_{j})B_{n}(p_{j})\mathrm{d}Y_{j}=\frac{\eta_{j}}{|E_{m}^ {j}|^{1/2}}\int_{E_{m}^{j}}\int_{\phi_{n-1}}^{\phi_{n+1}}\rho_{0}(s_{j},p_{j}( \theta_{j}))\tilde{B}_{n}(\theta_{j})\cos(\theta_{j})\mathrm{d}\theta_{j} \mathrm{d}s_{j}.\]
The corresponding expression when using the test functions \(\chi_{n}\) instead of \(B_{n}\) is slightly simpler, since the indicator functions are constant on their support and hence
\[[\boldsymbol{\rho}_{0}]_{J}=\int_{\Gamma_{j}\times(-\eta_{j},\eta_{j})}\rho_ {0}(s_{j},p_{j})b_{m}(s_{j})\chi_{n}(p_{j})\mathrm{d}Y_{j}=\frac{\eta_{j}}{|E_ {m}^{j}|^{1/2}}\int_{E_{m}^{j}}\int_{(\phi_{n-1}+\phi_{n})/2}^{(\phi_{n}+\phi_ {n+1})/2}\rho_{0}(s_{j},p_{j}(\theta_{j}))\cos(\theta_{j})\mathrm{d}\theta_{j} \mathrm{d}s_{j}.\]
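For illustration, one way in which such source-vector entries might be evaluated numerically is sketched below; the initial density \(\rho_{0}\) is an arbitrary smooth placeholder concentrated around normal incidence and the integrals are approximated by a simple rectangle rule.

```python
import numpy as np

def source_entry(rho0, s_lims, phi, n, eta=1.0, hat=True, n_quad=200):
    """Approximate [rho_0]_J for the element [s_lims[0], s_lims[1]] and local direction phi[n]."""
    s0, s1 = s_lims
    s = np.linspace(s0, s1, n_quad)
    if hat:   # piecewise linear test function B_n: integrate over (phi[n-1], phi[n+1])
        th = np.linspace(phi[n - 1], phi[n + 1], n_quad)
        w = np.where(th <= phi[n],
                     1.0 + (th - phi[n]) / (phi[n] - phi[n - 1]),
                     1.0 + (phi[n] - th) / (phi[n + 1] - phi[n]))
    else:     # indicator test function chi_n: integrate over the midpoint interval
        th = np.linspace(0.5 * (phi[n - 1] + phi[n]), 0.5 * (phi[n] + phi[n + 1]), n_quad)
        w = np.ones_like(th)
    S, TH = np.meshgrid(s, th, indexing="ij")
    integrand = rho0(S, eta * np.sin(TH)) * w[None, :] * np.cos(TH)
    val = integrand.sum() * (s[1] - s[0]) * (th[1] - th[0])
    return eta * val / np.sqrt(s1 - s0)

if __name__ == "__main__":
    rho0 = lambda s, p: np.exp(-25.0 * p ** 2)           # placeholder density around p = 0
    phi = np.linspace(-np.pi / 2, np.pi / 2, 9)[1:-1]
    for n in (2, 3, 4):
        print(n, source_entry(rho0, (0.0, 0.5), phi, n, hat=True),
                 source_entry(rho0, (0.0, 0.5), phi, n, hat=False))
```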
Numerical results comparing these discretization schemes will be presented during the conference and included in an extended version of this paper.
## 4 Acknowledgments
Support from the EPSRC (grant no. EP/R012008/1) is gratefully acknowledged.
|
2310.03036 | A quantum system control method based on enhanced reinforcement learning | Traditional quantum system control methods often face different constraints,
and are easy to cause both leakage and stochastic control errors under the
condition of limited resources. Reinforcement learning has been proved as an
efficient way to complete the quantum system control task. To learn a
satisfactory control strategy under the condition of limited resources, a
quantum system control method based on enhanced reinforcement learning
(QSC-ERL) is proposed. The states and actions in reinforcement learning are
mapped to quantum states and control operations in quantum systems. By using
new enhanced neural networks, reinforcement learning can quickly achieve the
maximization of long-term cumulative rewards, and a quantum state can be
evolved accurately from an initial state to a target state. According to the
number of candidate unitary operations, the three-switch control is used for
simulation experiments. Compared with other methods, the QSC-ERL achieves close
to 1 fidelity learning control of quantum systems, and takes fewer episodes to
quantum state evolution under the condition of limited resources. | Wenjie Liu, Bosi Wang, Jihao Fan, Yebo Ge, Mohammed Zidan | 2023-09-30T03:22:44Z | http://arxiv.org/abs/2310.03036v1 | # A Quantum System Control Method Based on Enhanced Reinforcement Learning
###### Abstract
Traditional quantum system control methods often face different constraints, and can easily cause both leakage and stochastic control errors under the condition of limited resources. Reinforcement learning has been proved to be an efficient way to complete the quantum system control task. To learn a satisfactory control strategy under the condition of limited resources, a quantum system control method based on enhanced reinforcement learning (QSC-ERL) is proposed. The states and actions in reinforcement learning are mapped to quantum states and control operations in quantum systems. By using new enhanced neural networks, reinforcement learning can quickly achieve the maximization of long-term cumulative rewards, and a quantum state can be evolved accurately from an initial state to a target state. According to the number of candidate unitary operations, the three-switch control is used for simulation experiments. Compared with other methods, the QSC-ERL achieves close to 1 fidelity learning control of quantum systems, and takes fewer episodes for quantum state evolution under the condition of limited resources.
Keywords:quantum system controlreinforcement learning quantum computing machine learning neural networks Msc: 81Q93 81P68 68T07
## 1 Introduction
Quantum system control is one of the keys to the development of quantum information technology, and has been applied in many fields, such as multi-photon interference measurement (Vedaie et al., 2018), quantum error correction (Fosel et al., 2018), and quantum state preparation (Bukov et al., 2018). Since most quantum systems cannot satisfy the two constraint conditions, namely a strongly regular free Hamiltonian and a fully connected interaction Hamiltonian (Meng and Cong, 2022), it is difficult to implement active manipulation or control. In order to drive quantum systems to an ideal performance, different control methods have been developed (Patsch et al., 2020; An et al., 2021; Torosov et al., 2021). For a quantum system with limited control resources, it is a challenge to effectively and accurately control quantum state evolution under perturbation.
Traditional learning algorithms (such as gradient algorithms (Chakrabarti and Rabitz, 2007; Roslund and Rabitz, 2009) and genetic algorithms (Tsubouchi and Momose, 2008)) have shown excellent control effects in specific experimental environments. But in practice, the quantum system to be manipulated usually has different restrictions. There is a class of quantum system
control problem with limited control resources. In this case, the gradient algorithms are not suitable for solving the above problems, and the genetic algorithms need a lot of experimental data to optimize the control performance that complicates the resolution of the problem.
With the advent of quantum information technology and the upsurge of machine learning (Abualigah et al. 2021; Abualigah et al. 2021; Abualigah et al. 2021; Abualigah et al. 2021), many researchers have found that machine learning can effectively help to find the optimal strategy for solving the control problem of quantum systems (Chunlin et al. 2012; Chen et al. 2013; Palittapongarnpim et al. 2017). In particular, studies on quantum system control based on reinforcement learning have been increasing gradually. Reinforcement learning (Fang et al. 2020) interacts with the environment in the form of rewards and punishments. Vedaie et al. (2018) applied reinforcement learning to realize multi-photon interference measurement. Cardenas-Lopez et al. (2018) proposed a protocol for quantum reinforcement learning, which does not require coherent feedback during the learning process and can be implemented in a variety of quantum systems. Fosel et al. (2018) showed how a network-based "agent" can discover a complete quantum error correction method to protect qubits from noise. In addition, Bukov et al. (2018) used reinforcement learning to prepare desired quantum states. They also successfully used Q-learning (Watkins et al. 1992) to control quantum systems (Bukov 2018). Yu et al. (2019) used quantum reinforcement learning to make a qubit "agent" adapt to an unknown quantum system "environment" to achieve maximum overlap. Niu et al. (2019) used deep reinforcement learning and proposed a quantum control framework for fast and high-fidelity quantum gate control optimization. Zhang et al. (2019) successfully used a reinforcement learning algorithm to solve a class of quantum state control problems, and provided a theoretical analysis. However, the above methods have high requirements on hardware resources in practice and are not effective for solving a class of resource-constrained quantum system control problems.
In order to complete the evolution of quantum states quickly and efficiently when hardware conditions are insufficient and the number and types of available unitary operations are limited, a quantum system control method based on enhanced reinforcement learning (QSC-ERL) is proposed. The quantum system control problem under limited resources is modeled with a reinforcement learning algorithm. By using the proposed enhanced neural network, reinforcement learning can maximize the long-term cumulative reward more quickly, and a quantum state can be evolved accurately from the initial state to the target state. The simulation experiment is implemented in the Python programming language with the linalg tool library. The results show that, compared with other methods, QSC-ERL achieves high-fidelity learning control of quantum systems and takes fewer episodes to achieve quantum state evolution under the condition of limited resources.
The main contributions of this paper are: (1) Various reinforcement learning algorithms are used to validate the effectiveness and generality of quantum system control methods based on reinforcement learning. (2) A quantum system control method based on enhanced reinforcement learning (QSC-ERL) is proposed to efficiently solve the control problem of quantum systems with limited control resources.
The rest of this paper is structured as follows. In Sec. II, we briefly overview the preliminaries about quantum system control and reinforcement learning. In Sec. III, we model the quantum system control problem and present our novel method. In Sec. IV and V, we respectively show the results of simulation experiments and draw our conclusions.
## 2 Preliminaries
### Learning control of quantum systems
Learning control methods are powerful for solving quantum system control problems (Ma and Chen 2020). These methods are usually optimized over multiple iterations to realize the evolution of qubits from an initial state to the desired target state. In this paper, the task of quantum system control is set as the pure-state transition control problem of an N-level quantum system. For the free Hamiltonian \(H_{0}\) of an N-level quantum system, its eigenstates can be collected in the set \(D=\left\{\left|\phi_{i}\right\rangle\right\}_{i=1}^{N}\). The quantum state \(\left|\psi_{(t)}\right\rangle\) of the controlled system can be expanded in terms of the eigenstates in \(D\):
\[\left|\psi_{(t)}\right\rangle=\sum_{i=1}^{N}c_{i}(t)\left|\phi_{i}\right\rangle, \tag{1}\]
where the complex number \(c_{i}(t)\) satisfies \(\sum_{i=1}^{N}\left|c_{i}(t)\right|^{2}=1\).
In order to achieve active control of the quantum system, a control Hamiltonian \(H_{c}\), coupled to the control field \(u(t)\in L^{2}(\mathbf{R})\), is introduced; \(H_{c}\) is independent of time and interacts with the quantum system. The initial state \(\left|\psi_{(t=0)}\right\rangle\) is denoted \(\left|\psi_{0}\right\rangle\). The coefficient vector \(C(t)=\left(c_{i}(t)\right)_{i=1}^{N}\) evolves according to the Schrödinger equation:
\[\left\{\begin{aligned} &\imath\hbar\dot{C}(t)=[A+u(t)B]C(t)\,\\ & C(t=0)=C_{0}\end{aligned}\right. \tag{2}\]
where \(\imath=\sqrt{-1}\), \(C_{0}=\left(c_{0i}\right)_{i=1}^{N}\), \(c_{0i}=\left\langle\phi_{i}\mid\psi_{0}\right\rangle\), \(\sum_{i=1}^{N}|c_{0i}|^{2}=1\), \(\hbar\) is the reduced Planck constant, and the matrices \(A\) and \(B\) correspond to the free Hamiltonian \(H_{0}\) and the control Hamiltonian \(H_{c}\) of the quantum system, respectively. \(U_{(t_{1}\to t_{2})}\) denotes a unitary operation acting on any state \(\left|\psi_{(t_{1})}\right\rangle\) of the quantum system; \(\left|\psi_{(t_{2})}\right\rangle=U_{(t_{1}\to t_{2})}\left|\psi_{(t_{1})}\right\rangle\) is the state obtained when \(\left|\psi_{(t_{1})}\right\rangle\) evolves from time \(t=t_{1}\) to time \(t=t_{2}\). In addition, \(U_{(t_{1}\to t_{2})}\) can also be written as \(U_{(t)}\), \(t\in[t_{1},t_{2}]\).
In fact, if a quantum system evolves freely without limited control resources, it can also arrive at the target state from an initial state. However, such free evolution has two drawbacks. First, its conditions are difficult to satisfy in practice, and a large amount of control resources would be wasted in evolving from an initial state to the desired target state. Second, free evolution follows no definite control law, so it cannot be determined when the quantum system reaches the target state. Our study mainly aims at solving a class of control-resource-limited quantum system control problems.
### Quantum control landscapes
The quantum control landscape (Chakrabarti and Rabitz 2007) provides a theoretical basis for analyzing the learning control problem of quantum systems; it is defined as the mapping between the control Hamiltonian and the value of the control performance function. The task of quantum system control can be defined as the problem of maximizing the target performance function. In other words, it can be transformed into the problem of maximizing the state transition probability from the initial state to the desired target state. For the state transition control problem, the performance function can be defined as
\[J(u)=tr\big(U_{(\varepsilon,T)}|\psi_{initial}\rangle\langle\psi_{initial}|U_{(\varepsilon,T)}^{\dagger}|\psi_{target}\rangle\langle\psi_{target}|\big), \tag{3}\]
where \(tr(\cdot)\) is the trace operation, \(U^{\dagger}\) is the adjoint of \(U\), \(|\psi_{initial}\rangle\) is the initial quantum state, and \(|\psi_{target}\rangle\) is the desired target quantum state.
In this paper, it is assumed that the control set \(\{u_{j},j=1,2,\ldots,m\}\) of operations allowed on the controlled quantum system is given in advance, where each control \(u_{j}\) corresponds to a unitary operation \(U_{j}\). The goal of learning control is to drive the system from the initial state \(|\psi_{initial}\rangle\) to the desired target state \(|\psi_{target}\rangle\) and to learn a globally optimal control sequence \(u^{*}\):
\[u^{*}=\operatorname*{arg\,max}_{u}J(u). \tag{4}\]
### Reinforcement learning
Reinforcement learning (Fang et al. 2020) is described by a Markov Decision Process (MDP), which is usually defined by the quadruple \(\langle S,A,P,R\rangle\). Here \(S\) is the set of states, \(A\) is the set of actions, and the state \(s\in S\), the action \(a\in A\). The state transition function \(P\left(s,a,s^{\prime}\right)\) gives the probability of a state transition, and \(R\left(s,a,s^{\prime}\right)\) is the reward function. Both \(P\left(s,a,s^{\prime}\right)\) and \(R\left(s,a,s^{\prime}\right)\) depend only on the current state \(s\) and action \(a\), not on earlier states and actions. An MDP adopting the discount criterion is denoted \(M=\left(S,A,P,\gamma,R\right)\), where \(\gamma\) is the discount factor.
Reinforcement learning agents learn by interacting with external environment. Specifically, the agent observes the state \(s_{t}\in S\) at each discrete time step \(t\in[0,T]\), where T is the end time, and selects an action \(a_{t}\in A\) used for transitioning the state \(s_{t}\in S\) to the next state \(s_{t+1}\in S\) with the probability \(p\). After performing an action, the agent is usually given a scalar reward signal \(r_{t+1}\), which reflects how good or bad the action was. The learning process mentioned above is repeated continuously until the agent can learn an optimal strategy, which is a mapping from the state space \(S\) to the action set \(A\).
Q-learning, proposed by Watkins et al. (1992), is an off-policy reinforcement learning algorithm and is described in **Algorithm 1**. The iteration of the Q-value function and the strategy selection are independent of each other. The approximation target of Q-learning can be defined as \(r+\gamma\max_{a^{\prime}}Q\left(s^{\prime},a^{\prime}\right)\). The agent can choose actions according to the greedy algorithm or other non-optimal strategies.
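For concreteness, the tabular Q-learning loop of **Algorithm 1** can be written in a few lines of Python. The environment interface (`env.reset`, `env.step`) and the use of integer indices for discretized states and actions are assumptions made purely for illustration; they are not taken from the original algorithm listing.

```python
import numpy as np

def q_learning(env, n_states, n_actions, episodes=500,
               alpha=0.1, gamma=0.99, epsilon=0.1):
    """Tabular Q-learning with an epsilon-greedy behaviour policy (hypothetical env API)."""
    Q = np.zeros((n_states, n_actions))
    for _ in range(episodes):
        s = env.reset()
        done = False
        while not done:
            if np.random.rand() < epsilon:          # explore
                a = np.random.randint(n_actions)
            else:                                   # exploit
                a = int(np.argmax(Q[s]))
            s_next, r, done = env.step(a)
            # off-policy target: r + gamma * max_a' Q(s', a')
            Q[s, a] += alpha * (r + gamma * np.max(Q[s_next]) - Q[s, a])
            s = s_next
    return Q
```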
## 3 Methods
### Problem modeling
The two-level quantum system (D'Alessandro and Dahleh 2001) is representative in the field of quantum system control. The spin 1/2 system is one of the typical two-level quantum systems for theoretical and practical research. The state \(|\psi\rangle\) of the spin 1/2 system can be defined as:
\[|\psi\rangle=\cos\frac{\theta}{2}|0\rangle+e^{i\phi}\sin\frac{\theta}{2}|1\rangle, \tag{5}\]
where \(\theta\in[0,\pi]\) and \(\phi\in[0,2\pi]\) represent the polar and phase angles respectively. A point \(\vec{a}\) on the unit sphere can be defined as
\[\vec{a}=(x,y,z)=(\sin\theta\cos\phi,\sin\theta\sin\phi,\cos\theta). \tag{6}\]
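The two equations above translate directly into code. The following minimal Python helpers (the function names are illustrative, not from the paper) build the state vector of Eq. (5) and the Bloch-sphere point of Eq. (6) from the angles \(\theta\) and \(\phi\):

```python
import numpy as np

def spin_half_state(theta, phi):
    """State vector of Eq. (5): cos(theta/2)|0> + exp(i*phi) sin(theta/2)|1>."""
    return np.array([np.cos(theta / 2), np.exp(1j * phi) * np.sin(theta / 2)])

def bloch_point(theta, phi):
    """Bloch-sphere coordinates of Eq. (6)."""
    return np.array([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)])
```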
The aim is to design the control of a two-level quantum system based on reinforcement learning. In the following, the problem of quantum system control based on reinforcement learning is modeled and described.
The agent in reinforcement learning learns through continuous interaction with the environment. For the quantum system environment, our method divides the state space of the quantum system into a finite discrete set of states \(S\). The set \(A=\{u_{j},j=1,2,\ldots,m\}\) is defined as the limited set of executable actions (unitary operations) in the quantum environment; for three-switch control, \(m\) is set to 3. Whenever the agent performs an action \(a\) and the state is transformed from \(s\) to \(s^{\prime}\), it receives a feedback value, with the fidelity used to define the reward:
\[r=\left\{\begin{array}{l}10,\ fidelity\leq 0.5\\ 100,\ 0.5<fidelity\leq 0.7\\ 10000,\ fidelity>0.7\end{array}\right. \tag{7}\]
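A direct reading of Eq. (7) as a Python function might look as follows; this is only an illustrative sketch of the piecewise reward, with names chosen here:

```python
def reward(fidelity):
    """Piecewise reward of Eq. (7), favouring high-fidelity states."""
    if fidelity > 0.7:
        return 10000
    if fidelity > 0.5:
        return 100
    return 10
```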
The goal of reinforcement learning is to obtain an optimal policy \(\pi^{*}\) and the globally optimal control sequence \(u^{*}\) of Eq. (4).
For quantum systems, the reinforcement learning agent obtains the optimal policy by maximizing the long-term cumulative reward while interacting with the quantum system environment. Therefore, the agent needs to constantly interact with the external environment and learn through trial and error. Specifically, the permitted controls at each control step for any quantum state are \(U_{1}\) (no control), \(U_{2}\) (positive impulse control), and \(U_{3}\) (negative impulse control), which are defined as follows:
\[\begin{split} U_{1}&=e^{-iI_{z}\frac{1}{15}},\\ U_{2}&=e^{-i(I_{z}+0.5I_{x})\frac{1}{15}},\\ U_{3}&=e^{-i(I_{z}-0.5I_{x})\frac{1}{15}},\end{split} \tag{8}\]
where \(I_{z}=\frac{1}{2}\left(\begin{array}{cc}1&0\\ 0&-1\end{array}\right),I_{x}=\frac{1}{2}\left(\begin{array}{cc}0&1\\ 1&0\end{array}\right).\) The evolution of the quantum system is restricted by the three-switch control: the reinforcement learning agent learns under this constraint while interacting with the quantum system environment, which is mainly reflected in the agent's action selection in any quantum state. Under three-switch control, the actions the agent can perform are \(U_{1}\), \(U_{2}\) and \(U_{3}\).
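Assuming the reading of Eq. (8) given above (a drift term \(I_{z}\) plus a positive or negative \(0.5I_{x}\) pulse, with time factor 1/15), the three permitted propagators can be generated numerically with a matrix exponential. This sketch uses SciPy and is not taken from the paper's code:

```python
import numpy as np
from scipy.linalg import expm

# Spin-1/2 operators defined below Eq. (8)
I_z = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)
I_x = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
dt = 1 / 15  # time factor appearing in Eq. (8)

U1 = expm(-1j * I_z * dt)                 # no control
U2 = expm(-1j * (I_z + 0.5 * I_x) * dt)   # positive impulse control
U3 = expm(-1j * (I_z - 0.5 * I_x) * dt)   # negative impulse control
actions = [U1, U2, U3]
```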
Under the above control conditions, a globally optimal control method is obtained by using the proposed reinforcement learning algorithm to minimize the number of control sequences, so that the spin 1/2 system can reach the target state from the initial state.
### Enhanced reinforcement learning
In order to improve the learning efficiency of the Q-learning algorithm (Watkins and Dayan 1992) without prior knowledge, it is important to improve the foresight ability of the learning agent. In practice this brings two problems: 1) the state space increases, causing the "curse of dimensionality", which greatly reduces the learning efficiency; 2) the visible space of the learning agent is reduced, making the agent's search process more blind.
To solve these two problems, a new enhanced reinforcement learning algorithm is proposed. The enhanced reinforcement learning shown in Fig. 1 consists of a quantitative \(Q\) table and a qualitative \(V\)-value heuristic function obtained from an enhanced neural network; the algorithm is described in **Algorithm 2**. As the action \(a\) is executed, the reward \(r\) is obtained and the states \(s\) and \(s^{\prime}\) change accordingly. The agent trains an enhanced neural network to learn the table space, which gradually forms a heuristic function that guides the agent toward optimal strategies for the evolution of quantum states.
#### 3.2.1 Heuristic function based on enhanced neural network
To build generalization and foresight capacity and avoid blind behavior of the agent, a heuristic function based on an enhanced neural network is proposed. The Q-table in enhanced reinforcement learning is updated as actions are executed. At the same time, the enhanced neural network shown in Fig. 2 is trained, and a \(V\)-value fitting surface gradually develops. The resulting heuristic function guides the optimization and updating of the new quantum system control method based on enhanced reinforcement learning (QSC-ERL). Inspired by the common convolutional neural network (Gu et al., 2018) and the residual neural network (He et al., 2016), the enhanced neural network can make full use of the extracted features. The state \(s\) is the input of the neural network, and the Q values of the actions are the output, where \(N\) is the number of actions. In order to capture nonlinear characteristics more comprehensively, the Leaky ReLU is selected as the activation function, giving all negative values a non-zero slope. The heuristic function \(F\left(s,a,s^{\prime}\right)\) participating in the update of the Q table takes \(s\) and \(s^{\prime}\) as input and returns the \(V\) values, defined as \(V_{NN}(s)\) and \(V_{NN}\left(s^{\prime}\right)\) for states \(s\) and \(s^{\prime}\) respectively. The heuristic function is defined as
\[F\left(s,a,s^{\prime}\right)=\gamma V_{NN}\left(s^{\prime}\right)-V_{NN}(s). \tag{9}\]
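A minimal sketch of Eq. (9) is given below. The exact enhanced network of Fig. 2 is not specified layer by layer in the text, so a small NumPy multilayer perceptron with a Leaky ReLU activation is used here as a stand-in, and the state is assumed to be encoded as a real feature vector; all names are illustrative.

```python
import numpy as np

def leaky_relu(x, slope=0.01):
    return np.where(x > 0, x, slope * x)

class EnhancedNetSketch:
    """MLP stand-in for the enhanced network of Fig. 2 (architecture assumed)."""

    def __init__(self, state_dim, n_actions, hidden=32, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0.0, 0.1, (hidden, state_dim))
        self.W2 = rng.normal(0.0, 0.1, (n_actions, hidden))

    def q_values(self, s):
        # state features in, one Q value per action out
        return self.W2 @ leaky_relu(self.W1 @ s)

    def v_value(self, s):
        # V_NN(s) = max_a Q_NN(s, a), cf. Eq. (11)
        return float(np.max(self.q_values(s)))

def heuristic(net, s, s_next, gamma=0.99):
    """Heuristic of Eq. (9): F(s, a, s') = gamma * V_NN(s') - V_NN(s)."""
    return gamma * net.v_value(s_next) - net.v_value(s)
```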
#### 3.2.2 The parameters updating method
To build an effective parameter-updating method and accelerate the training of QSC-ERL, the eligibility trace (Singh and Sutton, 1996) is introduced. The error obtained in an update can be passed back several steps to speed up the learning of the enhanced neural network and provide effective guidance for the whole algorithm.
The learning of the Q table can be defined as
\[\begin{split} Q(s,a)=& Q(s,a)+\alpha[r(s,a,s^{ \prime})+F(s,a,s^{\prime})\\ &+\gamma\max_{a^{\prime}}Q(s^{\prime},a^{\prime})-Q(s,a)],\end{split} \tag{10}\]
Figure 1: An overview of enhanced reinforcement learning: the orange rectangle is given by the environment. \(S_{t}\) and \(S_{t+1}\) are input into the enhanced neural network, which is abbreviated as the E network. The algorithm selects \(Q_{1}^{*}\) according to \(a_{t}\) from \(Q_{S_{t}}\) and \(\max Q_{t+1}\) from \(Q_{S_{t+1}}\), respectively. The loss between the blue rectangles is then calculated to update the enhanced neural network.
and the updating of \(V\) values can be defined as
\[V(s)=\max_{a}Q(s,a). \tag{11}\]
When the agent performs a non-greedy action, its next state \(s^{\prime}\) often does not yield the largest Q value. The QSC-ERL updates the Q value of the current state-action pair according to the \(V\) value of the next state obtained by the greedy strategy. For the enhanced neural network, when the agent acquires the \(V\) value according to the greedy strategy, the eligibility trace is also updated; when the agent follows a non-greedy strategy, the eligibility trace is set to 0, preventing the error from propagating backward.
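The tabular part of the update, Eqs. (10) and (11), can be summarized by the following illustrative helper (the heuristic value \(F\) is computed separately, for example with the sketch after Eq. (9)):

```python
import numpy as np

def update_q_table(Q, s, a, r, s_next, F, alpha=0.1, gamma=0.99):
    """One tabular update following Eq. (10); returns the new V(s) of Eq. (11)."""
    target = r + F + gamma * np.max(Q[s_next])
    Q[s, a] += alpha * (target - Q[s, a])
    return np.max(Q[s])   # V(s) = max_a Q(s, a)
```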
To update the weights of the enhanced neural network, the gradient descent method is adopted which can be defined as
\[\begin{split}\Delta w_{t}=&\beta(r(s_{t})+\gamma V_{\text{NN}}(s_{t+1})-V_{\text{NN}}(s_{t}))\times\\ &\sum_{k=0}^{t}(\gamma\lambda)^{t-k}\frac{\partial}{\partial w}V_{\text{NN}}(s_{k}),\end{split} \tag{12}\]
where \(\beta\) is the learning rate, \(0<\beta<1\), \(\lambda\) is the eligibility trace coefficient, \(0<\lambda<1\).
The agent updates the weights of the neural network through the difference \(r\left(s_{t}\right)+\gamma V_{\text{NN}}\left(s_{t+1}\right)-V_{\text{NN}}\left(s_{t}\right)\) between the predicted \(V\) value of the next state and the current target \(V\) value. This difference can also be used to update the \(V\) values of other states. If the eligibility trace is defined as
\[e_{t}=\sum_{k=0}^{t}(\gamma\lambda)^{t-k}\frac{\partial}{\partial w}V_{\text{NN}}(s_{k})=\gamma\lambda e_{t-1}+\frac{\partial}{\partial w}V_{\text{NN}}(s_{t}), \tag{13}\]
Eq. (12) can be rewritten as
\[\Delta w_{t}=\beta(r(s_{t})+\gamma V_{\text{NN}}(s_{t+1})-V_{\text{NN}}(s_{t} ))e_{t}. \tag{14}\]
It is straightforward to modify the weights from the hidden layer of the neural network to the output layer, and then to modify the weights from the input layer to the hidden layer through backpropagation.
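To make Eqs. (13) and (14) concrete, the following sketch performs one eligibility-trace update for a linear value approximator \(V_{\text{NN}}(s)=w\cdot x(s)\), for which the gradient \(\partial V/\partial w\) is simply the feature vector; the general neural-network case replaces that gradient with the result of backpropagation. Apart from \(\beta=0.01\) and \(\gamma=0.9\) (Table 1), the parameter values are assumptions.

```python
import numpy as np

def td_lambda_step(w, e, x_t, x_next, r_t, beta=0.01, gamma=0.9, lam=0.8):
    """One eligibility-trace update, Eqs. (13)-(14), for a linear V_NN(s) = w . x(s).

    For a linear approximator the gradient dV/dw is just the feature vector x_t,
    so the recursion e_t = gamma*lam*e_{t-1} + dV/dw is explicit.
    """
    delta = r_t + gamma * (w @ x_next) - (w @ x_t)   # TD error of Eq. (14)
    e = gamma * lam * e + x_t                        # eligibility trace, Eq. (13)
    w = w + beta * delta * e                         # weight update, Eq. (14)
    return w, e
```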
The QSC-ERL carries out the learning of the Q-table and of the enhanced neural network synchronously. The Q-table based reinforcement learning can obtain more accurate results, but its learning speed is slow. The enhanced neural network is less accurate but has better generalization performance. In the initial stage of learning its effect is not obvious, but as learning continues, by using the parameter-updating method, the enhanced neural network gradually establishes the trend information, and the convergence speed can be greatly improved.
## 4 Simulation experiments
### Settings
Since it is difficult to verify the validity and efficiency of the algorithm on real quantum computers, the experiment relies on the quantum control landscape (Chakrabarti and Rabitz 2007). The simulation experiment is implemented in the Python programming language with the linalg tool library. Full training for a given scenario can be achieved on a single CPU+GPU workstation (CPU: Intel Xeon Gold 5218, GPU: GeForce RTX 2080 Ti 11G). The state space of the quantum system is constructed from the initial state \(s_{initial}=|\psi_{initial}\rangle\) to the target state \(s_{target}=|\psi_{target}\rangle\). The state set is \(S=\{s_{i}=|\psi_{i}\rangle,i=1,2,\ldots,n\}\), and the executable action set is \(A=\{a_{j}=U_{j},j=1,2,\ldots,m\}\). For the spin 1/2 system, the initial state is set as \(|\psi_{initial}\rangle(\theta=\pi/60,\phi=\pi/30)\), and the target state is \(|\psi_{target}\rangle(\theta=41\pi/60,\phi=29\pi/30)\). Eq. (3) and Eq. (5) are used to construct the whole quantum simulation environment. The reward in QSC-ERL is set according to Eq. (7). The parameter settings are shown in Table 1.
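The simulation environment can be assembled from the quantities above. The sketch below (illustrative only) builds the initial and target states from the stated angles and applies one permitted unitary per step, using the pure-state overlap as the fidelity that enters the reward of Eq. (7):

```python
import numpy as np

def spin_half_state(theta, phi):
    """State of Eq. (5): cos(theta/2)|0> + exp(i*phi) sin(theta/2)|1>."""
    return np.array([np.cos(theta / 2), np.exp(1j * phi) * np.sin(theta / 2)])

# Initial and target states quoted in the settings above
psi_initial = spin_half_state(np.pi / 60, np.pi / 30)
psi_target = spin_half_state(41 * np.pi / 60, 29 * np.pi / 30)

def apply_control(psi, U):
    """Apply one permitted unitary of Eq. (8) and score the result by fidelity."""
    psi_next = U @ psi
    fidelity = np.abs(np.vdot(psi_target, psi_next)) ** 2   # pure-state overlap
    return psi_next, fidelity
```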
### Evaluation index
Fidelity is an evaluation index that measures the distance between density operators. It allows us to compare how
Figure 2: The enhanced neural network architecture: For each state s fed into the network, the network extracts features and outputs Q values.
different the state of the system at a given moment is from the initial state, or how the state of a system differs from a reference state; it quantifies how different two states really are. For two density matrices \(\rho\), \(\sigma\), the fidelity is generalized as the largest overlap between any two purifications of the given states, and the fidelity function can be defined as
\[F(\rho,\sigma)=(tr\sqrt{\sqrt{\rho}\sigma\sqrt{\rho}})^{2}, \tag{15}\]
where \(\rho\) and \(\sigma\) are the density matrices of the source and target states, respectively.
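For completeness, Eq. (15) can be evaluated directly with a matrix square root; for pure states it reduces to the squared overlap \(|\langle\psi|\varphi\rangle|^{2}\). The following sketch uses SciPy and is not part of the original implementation:

```python
import numpy as np
from scipy.linalg import sqrtm

def fidelity(rho, sigma):
    """Uhlmann fidelity of Eq. (15): (tr sqrt(sqrt(rho) sigma sqrt(rho)))^2."""
    sq = sqrtm(rho)
    return np.real(np.trace(sqrtm(sq @ sigma @ sq))) ** 2
```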
### Results and analysis
The simulation experiments are carried out under the three-switch control paradigm. The goal of the experiment is to control the spin 1/2 system from the initial state \(|\psi_{initial}\rangle\) to the target state \(|\psi_{target}\rangle\). The main purpose is to explore the effectiveness of reinforcement learning algorithms for solving the quantum control problem.
Therefore, the simulation experiments are divided into two parts: first, tabular Q-learning (TQL) (Sutton and Barto 2018), deep Q-learning (DQL) (Mnih et al. 2015) and policy gradient (PG) (Sutton et al. 2000) are applied to explore the effectiveness of reinforcement learning algorithms for the quantum control problem; second, NN-QSC (Fosel et al. 2018) and DRL-QSC (An and Zhou 2019) are compared to verify that the proposed QSC-ERL performs better than its peers. The parameters of the reinforcement learning algorithms involved in the experiment are set as follows: for all state-action pairs, the Q value is initialized to 0, the discount factor is \(\gamma=0.99\), the learning rate is \(\alpha=0.1\), and the action selection probability is initialized to 1/3.
Fig. 3 shows the comparison of fidelity between algorithms, where the X-axis is the episode number and the Y-axis is the fidelity. It can be seen from Fig. 3 that reinforcement learning is effective for solving quantum system control problems. Since the TQL algorithm cannot converge rapidly during training, its fidelity is the lowest. The PG algorithm converges, but improves only slightly per episode. The other four methods give good results. The DQL algorithm adopts a convolutional neural network to guide the learning of the Q-learning algorithm; although this solves the problem of updating the Q table when there are many actions, it is difficult for simple neural networks to learn useful features of quantum systems, and reaching a high fidelity between the final state and the target state requires training with more episodes and data. Due to the high correlation between states during training, NN-QSC and DRL-QSC may fall into local optima or be difficult to converge. Our QSC-ERL uses the enhanced neural network to exploit the differential features before and after the evolution of the quantum state, and by introducing the eligibility trace to update parameters, it can quickly find the optimal control strategy of the quantum system. Table 2 shows the number of episodes at which the fidelity reaches its maximum; the total number of episodes is set to 500, and the data are averaged over 100 experiments. This reflects the ability of each algorithm to drive the quantum system from the initial state to the desired target state. The experimental results show that most methods converge after training and drive the quantum system from the initial state to the desired target state. Specifically, for TQL the maximum fidelity is 0.73, while the others reach 0.99 after training. PG requires about 311 episodes and DQL about 135 episodes, which indicates that reinforcement learning based on neural networks performs better, to some degree, than the common RL algorithm. NN-QSC requires about 171 episodes to control the evolution of the quantum system from the initial state to the target state, DRL-QSC requires about 60 episodes, and QSC-ERL requires 42 episodes. Our proposed QSC-ERL algorithm is therefore faster than NN-QSC and DRL-QSC at controlling the evolution of the quantum system from the initial state to the target state.
| Name | Value |
| --- | --- |
| maximum episode | 500 |
| learning rate | 0.01 |
| reward decay | 0.9 |
| e greedy | 0.99 |
| memory size | 2000 |

Table 1: The parameter settings of the QSC-ERL
| Name | Episodes | Fidelity |
| --- | --- | --- |
| TQL | 452 | 0.73 |
| PG | 311 | 0.99 |
| DQL | 135 | 0.99 |
| NN-QSC | 171 | 0.99 |
| DRL-QSC | 60 | 0.99 |
| QSC-ERL | 42 | 0.99 |

Table 2: The comparison of the number of episodes between algorithms
## 5 Conclusion
In this paper, a quantum system control method based on enhanced reinforcement learning (QSC-ERL) is proposed to achieve the learning control of the spin 1/2 system. A satisfactory control strategy is obtained through enhanced reinforcement learning so that the quantum system can be evolved accurately from the initial state to the target state. Compared with other methods, our method achieves quantum system control with high fidelity and improves the control efficiency of quantum systems.
It should be noted that our method is sufficient for the evolution of quantum states in the spin 1/2 system. Other, more difficult quantum control problems include quantum error correction based on bosonic codes (Michael et al., 2016) and quantum state preparation in the single-photon manifold (Vrajitoarea et al., 2020). Providing solutions to such problems using learning theories (Li et al., 2018; Zhang and Wang, 2020) and neural networks (Xu et al., 2019; Hu et al., 2020) is valuable work and is one of our next research directions.
###### Acknowledgements.
The authors would like to thank the anonymous reviewers and editors for their comments that improved the quality of this paper. This work is supported by the National Natural Science Foundation of China (62071240, 61802175), the Natural Science Foundation of Jiangsu Province (BK20171458), and the Priority Academic Program Development of Jiangsu Higher Education Institutions (PAPD).
## Declarations
**Conflict of interest** The authors declare that they have no conflict of interest.
**Ethical statement** This article does not rely on clinical trials.
**Human and animal participants** This research does not involve human participants and/or animal experimentation.
|
2309.10897 | Anomalous microwave response in the dissipative regime of topological
superconducting devices based on Bi2Te2.3Se0.7 | Superconducting proximity junctions based on topological insulators are
widely believed to harbor Majorana-like bound states. The latter serve as a
paradigm for non-local topological quantum computation protocols. Nowadays, a
search for topological phases in different materials, promising for a
realization of topological qubits, is one of the central efforts in quantum
physics. It is motivated, in particular, by recent observation of anomalous ac
Josephson effect, which is a signature of Majorana physics. Its
manifestations, such as a fractional Josephson frequency and the absence of the
first (or several odd in more rare cases), Shapiro steps, were reported for
different materials. Here we study Shapiro steps in Nb/Bi2Te2.3Se0.7/Nb
junctions, based on ultrasmall single crystals of a 3D topological insulator
synthesized by a physical vapor deposition (PVD) technique. We present evidence
that our junctions are ballistic. When subjected to microwave radiation, the
junctions exhibit Shapiro steps, but the first step is missing. Typically it is
assumed that the missing first step (MFS) effect cannot be observed in the
presence of quasiparticle poisoning due to suppression of the 4{\pi}-periodic
component. Our findings within the context of the RSJ-model of Josephson
junction dynamics show that such behaviour of samples corresponds to a specific
condition, requiring a minimum of 5% of the 4{\pi}-component for disappearance
of the first Shapiro step. | Vasily Stolyarov, Sergei Kozlov, Dmitry Yakovlev, Nicolas Bergeal, Cheryl Feuillet-Palma, Dmitry Lvov, Olga Skryabina, Mikhail Kupriyanov, Alexander Golubov, Dimitri Roditchev | 2023-09-19T19:43:43Z | http://arxiv.org/abs/2309.10897v1 | Anomalous microwave response in the dissipative regime of topological superconducting devices based on Bi\({}_{2}\)Te\({}_{2.3}\)Se\({}_{0.7}\)
###### Abstract
Superconducting proximity junctions based on topological insulators are widely believed to harbor Majorana-like bound states. The latter serve as a paradigm for non-local topological quantum computation protocols. Nowadays, the search for topological phases in different materials, promising for a realization of topological qubits, is one of the central efforts in quantum physics. It is motivated, in particular, by the recent observation of the anomalous ac Josephson effect, which is a signature of Majorana physics. Its manifestations, such as a fractional Josephson frequency and the absence of the first (or, in rarer cases, several odd) Shapiro steps, were reported for different materials.
Here we study Shapiro steps in Nb/Bi\({}_{2}\)Te\({}_{2.3}\)Se\({}_{0.7}\)/Nb junctions based on ultrasmall single crystals of a 3D topological insulator synthesized by a physical vapor deposition (PVD) technique. We present evidence that our junctions are ballistic. When subjected to microwave radiation, the junctions exhibit Shapiro steps, but the first step is missing. Typically it is assumed that the missing first step (MFS) effect cannot be observed in the presence of quasiparticle poisoning due to suppression of the \(4\pi\)-periodic component. Our findings within the context of the RSJ model of Josephson junction dynamics show that such behaviour of the samples corresponds to a specific condition, requiring a minimum of 5% of the \(4\pi\)-component for the disappearance of the first Shapiro step.
Missing Shapiro step, Topological insulator, Superconductivity, Ballistic transport, \(4\pi\)-periodic component
## I Introduction
Andreev Bound States (ABS) emerge in Superconductor/Normal metal/Superconductor (SNS) Josephson junctions as localized solutions of the Bogoliubov-De Gennes equations within the normal region [1]. In SNS junctions with topological order in the normal part, Majorana Zero Modes (MZM) are believed to appear [2; 3; 4; 5; 6]. Localized Majorana fermions encode topological Andreev Bound States (\(T\)-ABS) which coherently transfer \(1e\) charge and, correspondingly, their current-phase relationship (CPR) is \(4\pi\)-periodic, at least according to the theory. Such periodicity is possible since the topological superconductors can accept single electrons onto a pair of Majorana modes, and such single electrons have zero energy, unlike the usual quasiparticles (bogoliubons).
Distinct features of \(T\)-ABS originate from the fact that a pair of Majorana real fermions corresponds to a complex fermion level. This level defines the fermion parity \(\sigma\) in the junction where, say, \(\sigma=+1\) if the level is occupied and \(\sigma=-1\) if it is empty. Energies of \(T\)-ABS, given by \(\epsilon_{\sigma}(\varphi)=\sigma E(\varphi)\), are non-degenerate with respect to \(\sigma\), and the function \(E(\varphi)\) is \(4\pi\)-periodic and odd. Unlike ABS in trivial SNS junctions, the branches \(\epsilon_{\sigma}(\varphi)\) and \(\epsilon_{-\sigma}(\varphi)\) have topologically protected crossings at zero energy at the phases \(\varphi=\pi+2\pi n,n\in\mathbb{Z}\) (the ground state is degenerate at these points with respect to \(\sigma\)). Due to these protected crossings the current-phase relationship acquires an anomalous \(4\pi\)-component, in addition to the conventional \(2\pi\)-periodic one. The sign of the \(4\pi\)-component can be positive or negative depending on
the sign of the parity \(\sigma\).
The \(4\pi\)-component can be observed experimentally only if the parity does not change over the duration of the measurement. Otherwise, if the parity changes frequently, the \(4\pi\)-component contribution is averaged to zero. This explains why rapid dynamics measurements are preferable. Thus, previously, the \(4\pi\)-Josephson effect was probed in the high-frequency regime in which \(\varphi\) grows in time and crosses the degeneracy points faster than a parity relaxation occurs [7].
The initial experimental confirmation of the absence of the first Shapiro step at a voltage \(V_{1}=\frac{hf}{2e}\) (where \(h\) is Planck's constant, \(e\) the electron charge, and \(f\) the radiation frequency) was demonstrated in spin-orbit coupled nanowires [2; 3; 4; 8], as well as in three-dimensional topological insulators [9]. Later, HgTe/CdTe-based junction experiments unveiled the absence of the first nine odd steps [10] and a fractional Josephson radiation frequency. The interpretation in terms of the anomalous ac Josephson effect remains debated because time-reversal symmetry is unbroken, challenging expectations for localized Majorana states in Ref. [10].
The absence of the \(n=\pm 1\) Shapiro step was shown in exfoliated topological insulator Bi\({}_{2}\)Se\({}_{3}\) flakes [11] and in Dirac semimetals [12; 13]. In particular, a residual supercurrent was observed under high-intensity drives [11], confirming the \(4\pi\)-periodic nature of the ac Josephson effect. A partial observation of the fractional Josephson effect, linked to Joule overheating and the resulting decrease in parity lifetime, was evident in hysteretic \(V(I)\)-curves.
To explain the Missing First Step (MFS) effect, the phase winding frequency, \(f_{Jt}=eV/h\), is assumed to be half of the usual one, \(f_{J}=2eV/h\), as a consequence of coherent \(1e\) transport. Therefore, for a given radiation frequency \(f\), the first microwave-induced step should occur at a voltage \(V_{1t}=hf/e\), which is twice the usual voltage of the first step, \(V_{1}=hf/2e\). Thus, it appears that the first step is missing, which is the MFS effect [5; 6; 9; 11]. The observation of the absence of odd Shapiro steps, as well as of a fractional Josephson radiation frequency, is believed to be one of the possible routes to prove the existence of Majorana zero modes [14; 15; 16].
Compared to trivial SNS junctions, ballistic SNS junctions exhibit ABS energies with a \(4\pi\)-periodic dependence [17] \(\pm\Delta\cos(\phi/2)\), where \(\phi\) is the superconducting phase difference and \(\Delta\) is the induced energy gap. However, recent theory indicates a \(2\pi\)-periodic static CPR [17] in this case.
The MFS effect could also originate from trivial ballistic junctions, which makes it challenging to differentiate between the possibilities. The MFS effect is ubiquitous, with observations even in amorphous superconducting nanowires and nanobridges [18]. The first step, or the first few steps, could be masked by the junction switching or retrapping currents. Junction bistability, characterized by abrupt voltage jumps, can hinder the visibility of the first step, which can be recovered by increasing the microwave frequency or power.
Distinguishing topological and trivial effects involves a proposed method [11] based on quasiparticle poisoning due to Joule heating. Quasiparticles cause rapid changes of the Majorana Zero Mode (MZM) contribution, averaging it to zero. This explains why only the first step disappears while the third step remains, as Joule heating creates a high quasiparticle population. Such quasiparticles can occupy the Majorana fermion modes, altering the parity according to theory [19]. Thus, sweeping the current from a high value down to zero should retain the presence of the first step.
In this letter, we demonstrate the anomalous Josephson effect, namely the absence of the first Shapiro step. The studied SNS junction involves two Nb electrodes coupled through an ultrasmall single nanocrystallite of a 3D topological insulator Bi\({}_{2}\)Te\({}_{2.3}\)Se\({}_{0.7}\)[20], acting as the normal (N) region of the SNS junction. We present Shapiro step maps as the differential conductance \(dI/dV\) plotted versus the DC current bias and the RF drive
Figure 1: **Experimental observation of supercurrent in Josephson junction based on ultra small single crystal of Bi\({}_{2}\)Te\({}_{2.3}\)Se\({}_{0.7}\).****a** Scanning electron microscopy image of the Nb/Bi\({}_{2}\)Te\({}_{2.3}\)Se\({}_{0.7}\)/Nb Josephson junction. The superconducting Nb leads (S-regions) are shown in blue color; the flake of Bi\({}_{2}\)Te\({}_{2.3}\)Se\({}_{0.7}\) in the N-region is colored light green. **b** Schematic view of the device along with the external electrical circuit. DC bias current \(I\) is applied and the voltage drop \(V\) across the junction is measured. **c** Hysteretic \(V(I)\) characteristic of the junction measured at \(T=20\,\mathrm{mK}\). Supercurrent at \(V_{RF}=0\,\mathrm{\mu V}\) is observed with a critical current \(I_{C}=0.46\,\mathrm{\mu A}\) and a retrapping current \(I_{R}=0.22\,\mathrm{\mu A}\), respectively. The solid black line corresponds to the hysteretic \(V(I)\) curve measured at \(f_{RF}=1.5\,\mathrm{GHz}\) and \(V_{RF}=20\,\mathrm{\mu V}\) with Shapiro steps. **d** Critical \(I_{C}\) and retrapping \(I_{R}\) currents as function of \(T\). The ballistic fit was performed using the Eilenberger equations. The retrapping current was fitted by t-RSJ model (see text for details).
Figure 2: **Shapiro maps dV/dI for different RF drive frequencies.** **a, d, g and j** The drive frequencies \(f_{RF}=0.9\), 1, 1.3, and 2 GHz are plotted. The maps cover negative (switching) and positive (retrapping) polarities of the bias current and demonstrate a hysteresis. The left-hand side of a given map, for \(I<0\), corresponds to the switching current, and the \(I>0\) segment corresponds to the retrapping current. **b, e, h** and **k** \(V(I)\)-curves in normalized voltage units. The negative polarity shows the switching current, and the positive polarity shows the retrapping current. For \(I>0\), the even-odd effect is observed. The panel on the right shows dI/dV as a function of bias current at different RF powers, giving a clearer view of the effect. The second Shapiro step emerges at a lower radiation power than the first step. The first step is recovered at sufficiently large RF drive amplitudes.
power, in the 0.5 to 3 GHz frequency range. Strongly hysteretic \(V(I)\) curves and the \(I_{C}\)(T) dependence are also presented. The \(I_{C}\)(T) function is well described by the ballistic electron theory of SNS junctions (see also Ref. [20]). To describe the Shapiro steps and their dependence on the RF drive power, the two-channel thermal resistively shunted junction (t-RSJ) model is adopted. Our important conclusion is that the first step is missing even in the retrapping branch of the \(V(I)\) curve, i.e., when the Joule heating is quite significant. The model we use to analyze the data includes (i) quasiparticle overheating due to the Joule effect [11], and (ii) thermally activated poisoning of the MBS. We demonstrate a good agreement of the results with this extended t-RSJ model.
## II Experiment
The Bi\({}_{2}\)Te\({}_{2.3}\)Se\({}_{0.7}\) nanocrystalline ingots used to make our junctions were synthesized by the Physical Vapor Deposition (PVD) method [21]. Our detailed XRD analysis shows that the ingots contain only a single phase of Bi\({}_{2}\)Te\({}_{2.3}\)Se\({}_{0.7}\). The studied SNS junction is shown in Fig. 1(a). The measured thickness of the flake is \(t=20\) nm, the length of the N-region is \(L=150\) nm, and the width of the Nb electrodes is \(w=500\) nm. Several junctions of different sizes were fabricated and studied in the experiments. Fig. 1(b) shows a schematic view of the Josephson junction together with the external electrical circuit. The device is measured in a regime of DC bias current \(I\) where the voltage drop \(V\) across the junction is recorded; the RF drive is applied through a microwave generator.
The normal state resistance of the junction at room temperature is \(R_{N}\approx 1.2\) k\(\Omega\). The resistance shows a conventional metallic behavior as the temperature \(T\) is decreased. An abrupt drop in resistance was detected at \(T=8\) K, attributed to the superconducting phase transition in the Nb electrodes that form the junction. Further cooling of the junction shows a smooth decrease of the resistance in the domain from 5 to 1 K. The latter is characteristic of the superconducting proximity effect induced in the TI flake by the closely spaced Nb electrodes [20].
Fig. 1(c) illustrates a hysteretic voltage-current \(V(I)\) relation, where the critical current (\(I_{C}\)) and the retrapping current (\(I_{R}\)) are not equal. The red curve represents the increasing bias current and shows the switching (critical) current, a jump-like transition from the superconducting to the normal state. The blue curve corresponds to the opposite sweep direction of the current \(I\) and shows the retrapping current. The black curve illustrates the voltage-current characteristic under the influence of an RF signal. Fig. 1(d) displays the temperature dependence of both the critical and retrapping currents. Hysteresis appears at temperatures below 0.7 K. Additionally, the absence of a low-temperature saturation in \(I_{C}(T)\) suggests ballistic transport [20], which is further supported by fits generated using the ballistic-limit Eilenberger equations and their solutions obtained by Galaktionov and Zaikin [17].
The presence of an RF antenna (Fig. 1(b)), which is not ideally matched and is located close to the sample, leads to an elevated background noise level, since the system is directly connected to the room-temperature electronics. This is in contrast to our previous work, Ref. [22], where the contributions of s-wave and p-wave superconductivity were investigated: there, the critical current did not reach the higher magnitudes related to p-wave superconductivity, the sample was meticulously shielded from all forms of electromagnetic radiation within the cavity, and it featured supplementary low-pass filters integrated with silver paste. Nevertheless, in the present experiment the critical current remains large (at the s-wave level, Fig. 1(d)), and the transport remains predominantly ballistic, even in the presence of an additional noise source such as the antenna.
The hysteretic behavior of the \(V(I)\)-curves indicates Joule overheating of quasiparticles [23], which is considered a source of quasiparticle poisoning [24]. In addition, we discuss the effect of microwave radiation in detail. In Figs. 2(a,b,c) we show the results obtained at \(f_{RF}=0.9\) GHz. Five representative \(V(I)\)-curves, corresponding to different amplitudes of the RF radiation, are shown in Fig. 2(b); the inset shows \(dV/dI\), which emphasizes the steps of the \(V(I)\)-curve. The corresponding map of the differential conductance, plotted versus the bias current and the RF signal amplitude, is shown in Fig. 2(a). The darker regions represent voltage plateaus corresponding to a lock-in synchronization effect between the RF field and the evolution of the superconducting phase. In other words, the darker regions are Shapiro steps, which are numbered in Fig. 2(a).
A prominent feature is the large black central region, which represents the perfect superconductivity (zero voltage) of the sample. In this zero-resistance region (0-region) no steps can be observed. On the left side, this superconducting region is bounded by the critical current, which depends on the RF amplitude, \(I_{C}(V_{RF})\). On the right side the 0-region is bounded by the retrapping current, which also depends on the RF amplitude, \(I_{R}(V_{RF})\). If the 0-region overlaps some of the Shapiro steps, it can completely suppress them, since the voltage is exactly zero in the 0-region.
For example, on the left side, and at low bias, the first visible step is number 10, that is, the first through the ninth steps are suppressed (see the inset of Fig. 2(b)). The dependence of the critical current on the RF amplitude and the dependence of the steps (bright regions) on the RF amplitude are both linear, but the slope of the critical current is larger. Therefore more step boundaries cross the critical current line at higher driving amplitudes. Thus, at higher RF amplitudes, it is possible to observe lower-order steps. For example, on the critical current branch, at the lowest power the lowest step is \(n=10\), while at the highest power the lowest step is \(n=4\) (Fig. 2(a)).
The situation with the retrapping current is more favorable for the observation of the lower-order steps. This is simply because \(I_{R}<I_{C}\), so the voltage explores lower values on the retrapping branch of the \(V(I)\)-curve (positive region in Fig.2 (a)), i.e., on the branch on which the current is ramped down. Because of this, all steps are observed on the retrapping branch, except the first one.
The general condition for the observation of a step number \(n\) is: \(V(I_{C})<V_{n}=nhf/(2e)\) on the critical current branch and \(V(I_{R})<V_{n}=nhf/(2e)\) on the retrapping branch. On the other hand, as the amplitude of the RF signal increases, both the critical and the retrapping currents are reduced. Thus the number of steps which can be detected increases with the amplitude of the RF signal.
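To make this visibility condition concrete, the conventional step voltages \(V_{n}=nhf/2e\) for the drive frequencies used here can be evaluated directly; the short script below is an illustrative calculation and not part of the original analysis.

```python
from scipy.constants import h, e

def shapiro_voltage(n, f):
    """Conventional Shapiro step voltage V_n = n*h*f/(2e)."""
    return n * h * f / (2 * e)

for f in (0.9e9, 1.0e9, 1.3e9, 2.0e9):
    v1 = shapiro_voltage(1, f)
    print(f"f = {f / 1e9:.1f} GHz: V_1 = {v1 * 1e6:.2f} uV, V_2 = {2 * v1 * 1e6:.2f} uV")
```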
Another map of differential conductance is shown in Fig.2(c), which represents \(dV/dI\) plotted versus the RF amplitude and the DC voltage on the sample, \(V\), normalized by the fundamental mode voltage of the phase evolution, \(hf/2e\). Interestingly, it shows that the second step is present at very low amplitudes. However, the first step appears only when the driving amplitude is quite large and approaches \(170\)\(\mu\)V.
When the critical current branch is considered, up to the first 9 steps appear suppressed. This suppression is most probably due to the switching-current voltage jump being larger than \(nhf/2e\). Thus, in what follows, we focus on the retrapping branch, where only the first step is absent while the second is observed. There could be multiple explanations for the disappearance of the first step. It could be due to the topological nature of the proximitized material in these samples: as noted previously, Majorana fermions support coherent transfer of single electrons and the corresponding CPR is proportional to \(\sin(\phi/2)\), where \(\phi\) is the phase difference across the junction, and such a CPR could lead to the disappearance of the first Shapiro step. Another possibility is that the ballistic character of the electron transport through the junction causes the second step to become more prominent than the first one; according to the recent Galaktionov-Zaikin model [17], the second step could occur at a lower driving power than the first step if the phenomenon of MAR is taken into account. Yet another possible explanation is that the second step boundary crosses the curve \(I_{R}(V_{RF})\) at a lower \(V_{RF}\), where \(V_{RF}\) is the amplitude of the RF driving signal. This seems quite plausible based on the results shown in Fig. 2(a).
Considering the results obtained at higher frequencies, we focus on the retrapping branch (positive current bias on all plots), since the switching current branch produces a large jump at the critical current, such that the voltage immediately goes to a value much higher than needed for the observation of low-order steps. Yet, it is interesting to note that, at \(f=2\,\)GHz, even on the critical current branch, the first observed step is \(n=2\), while at a higher RF drive \(n=1\) is also visible (Fig. 2(j)).
Let us now return to the discussion regarding the retrapping branch. In this case, we will focus on the data acquired at \(1\,\)GHz, as depicted in Fig.2(d), (e), and (f), following the same format as previously discussed. Similarly to the results of \(0.9\,\)GHz, the second step at this frequency appears at a lower RF amplitude compared to the first step, as shown in Fig.2(f). This outcome is an exact analogy to the behavior observed at \(0.9\,\)GHz.
However, the behavior of the sample changes noticeably when it moves to \(1.3\,\)GHz, as illustrated in Figs.2(g), (h), and (i). At this frequency, no steps are observed in the retrapping current branch at the lowest \(V_{RF}\) value. As the driving amplitude increases, the first and second steps become visible approximately at the same level of the RF drive, as depicted in Fig. 2(i). This behavior aligns with the general expectation that an increase in the frequency of the RF drive leads to larger step sizes (\(hf/2e\)) capable of reaching the voltage level on the sample just before the retrapping event occurs and causes the voltage to drop to zero.
Finally, the data acquired at \(2\,\)GHz is presented in Figs. 2(j), (k), and (l). At this frequency, the multiple fractional Shapiro steps effect is not observed. As the current increases, the first step appears at a voltage of \(V=hf/2e\), followed by another step at a higher current corresponding to \(V=hf/e\). In other words, both the first and second steps are observed. This qualitatively distinct behavior arises from the fact that the slope of the \(I_{R}(V_{RF})\) line is now lower than the slope of the first step, whereas at \(1.3\,\)GHz, the slope of the first step line was lower than the slope of the retrapping current line.
## III Discussion
The studied SNS junction involves two Nb electrodes coupled through an ultrasmall single nanocrystallite of a 3D topological insulator Bi\({}_{2}\)Te\({}_{2.3}\)Se\({}_{0.7}\)[20], acting as
Figure 3: **Retrapping current overlapping Shapiro steps.** **a** and **b** Differential resistance versus normalized bias current and RF drive amplitude. The retrapping currents and the switching currents (blue dashed lines) form the main borders of a trapezoidal zero-resistance region. Microwave-induced steps are marked by yellow lines. (**a**) \(f=2\,\)GHz. The first step (\(V=hf/2e\)) is observable at low RF drive. (**b**) \(f=1\,\)GHz. The first step is not always apparent. At low RF drive the junction transitions from zero resistance to the second step (\(V=hf/e\)), without entering the first step.
the normal (N) region of the SNS junction. We present Shapiro step maps as differential conductance \(dI/dV\) plotted versus DC current bias and RF drive power, in the frequency range from \(0.9\,\mathrm{GHz}\) to \(3\,\mathrm{GHz}\). Strongly hysteretic I(V) curves as well as the \(I_{C}\)(T) dependence are also shown. The \(I_{C}\)(T) function is well described by the ballistic electron theory of SNS junctions (see also Ref. [20]). In such measurements the Joule heating is always present, down to the retrapping occurrence. Thus, the expectation is that the parity of the MZM (if an MZM is indeed present) should change very rapidly, and so the first step should be observed. Our important conclusion is that the first step is still missing in the retrapping current branch, i.e., a strong Joule heating, which is responsible for a pronounced hysteresis of the V-I curves, is not able to eliminate the MFS effect. We also analyze the critical current as a function of temperature and demonstrate that our SNS junctions are ballistic, because of the high structural perfection of the PVD-grown topological insulator crystals. Thus, our results provide evidence that the incomplete even-odd effect (the MFS effect) can occur in ballistic junctions even under strong quasiparticle poisoning. We compare the results to the models outlined above.
A general feature, observed at all frequencies, is that all steps are visible if the RF signal is sufficiently strong (see the top parts of the plots in Figs.3(a,b)). The reason is that at sufficiently large \(V_{RF}\) the critical current and the retrapping current are both zero, while the superconducting order parameter is not suppressed to zero. Under such conditions, the DC voltage increases gradually as the DC bias current increases. Thus, all low-order steps are visible and there are no missing steps at sufficiently large amplitudes of the RF drive, \(V_{RF}\). In Figs.3(a,b) we mark all steps with yellow lines; the switching and retrapping currents are shown by blue dashed lines. The main difference between the high-frequency data (\(2\,\mathrm{GHz}\)) (Fig.3(a)) and the low-frequency data (\(1\,\mathrm{GHz}\)) (Fig.3(b)) is that in the first case the slope of the yellow lines is larger than the slope of the retrapping current blue dashed lines, while in the latter case the slopes are about equal. Also, the spacing between the yellow lines (the steps) is naturally lower if the RF signal has a lower frequency, since the voltage difference between the steps is \(hf/2e\), where \(f\) is the frequency of the RF signal. On the other hand, at a high RF frequency, there is a significant region between the retrapping line (blue) and the second yellow line. Thus, a part of the first voltage plateau can be observed experimentally even at low RF drive (Figs.3(a,b)).
An alternative and more traditional explanation of the observed results is given below. The hysteretic behaviour is linked to the Joule overheating effect related to the phase winding and the corresponding voltage. This is considered to be a source of quasiparticle poisoning. The poisoning results in the partial observation of \(4\pi\)-periodic Josephson effect where only \(n=\pm 1\) Shapiro steps are absent. Our results differ from those previously reported in that we observe the absence of the first step in the retrapping current, when the quasiparticle poisoning is not negligible.
We model our system using the two-channel thermal RSJ model proposed in Refs. [11; 25]. This model consists of two parts: an additional coherent superconducting channel originating from Majorana bound states and contributing a \(\sin(\phi/2)\) supercurrent, and a self-consistent thermal balance, which is manifested in the hysteresis of the \(V(I)\) curves. In Fig. 4(a) the scheme of the two-channel thermal RSJ model is shown. There are two parallel Josephson junctions in parallel with the shunt resistance \(R\). The junction J\({}_{1}\) is a conventional junction with a \(2\pi\)-CPR, and the second junction J\({}_{2}\) is a
Figure 4: **a** Scheme of the two-channel thermal RSJ model. One of the two parallel Josephson junctions (J\({}_{1}\)) stands for the trivial supercurrent and the other one (J\({}_{2}\)) stands for the topological channel in parallel with shunting resistance \(R_{n}\). **b** Phase diagram of odd and even Shapiro steps in two-channel thermal RSJ model. Simulated differential resistance map as a function of the RF current amplitude for RF current frequencies (**c**) \(2\) GHz and (**e**) \(0.9\) GHz with \(2\pi\) and \(5\%\)\(4\pi\)-component. Corresponding Shapiro bins maps for RF current frequencies (**d**) \(2\) GHz and (**f**) \(0.9\) GHz.
topological one with \(4\pi\)-CPR.
Accordingly, the total superconducting current is \(I_{s}(\varphi)=I_{c}^{2\pi}\sin(\varphi)+I_{c}^{4\pi}\sin(\varphi/2)\), and the equation for the phase can be written as
\[\frac{\hbar d\varphi}{2eRdt}=I_{DC}+I_{RF}\sin{(\omega_{RF}t)}-\left(I_{c}^{2 \pi}\sin(\varphi)+I_{c}^{4\pi}\sin(\varphi/2)\right), \tag{1}\]
where I\({}_{DC}\) is the DC bias current, I\({}_{RF}\) is the RF current applied by a generator, \(\omega_{RF}\) is the frequency of the RF current, \(I_{c}^{2\pi}\left(I_{c}^{4\pi}\right)\) is the critical current of the \(2\pi\) (\(4\pi\)) component, and \(R\) is the normal state resistance of the junction.
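A minimal numerical sketch of Eq. (1) is given below: the phase is integrated with a standard ODE solver and the DC voltage is obtained from the time-averaged phase winding, \(\langle V\rangle=(\hbar/2e)\langle\dot{\varphi}\rangle\). The integration window, the step control and the neglect of the initial transient are simplifications assumed here, not taken from the paper.

```python
import numpy as np
from scipy.constants import hbar, e
from scipy.integrate import solve_ivp

def rsj_voltage(I_dc, I_rf, f_rf, R, Ic_2pi, Ic_4pi, t_max=2e-7):
    """DC voltage from the two-channel RSJ equation, Eq. (1) (illustrative sketch)."""
    w = 2 * np.pi * f_rf

    def dphi_dt(t, phi):
        I_s = Ic_2pi * np.sin(phi) + Ic_4pi * np.sin(phi / 2)
        return (2 * e * R / hbar) * (I_dc + I_rf * np.sin(w * t) - I_s)

    sol = solve_ivp(dphi_dt, (0.0, t_max), [0.0], max_step=1e-11, rtol=1e-8)
    # <V> = (hbar / 2e) <dphi/dt>, estimated from the total phase winding
    return hbar / (2 * e) * (sol.y[0, -1] - sol.y[0, 0]) / t_max
```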
To take into account the hysteretic behavior of \(V(I)\) characteristics due to Joule overheating, we write the heat balance equation:
\[\left\langle P(t)\right\rangle=\Sigma U\left(T_{e}^{5}-T_{ph}^{5}\right), \tag{2}\]
where \(\Sigma\) is the electron-phonon coupling constant of the normal material, \(U\) is the effective volume of the sample, and \(T_{e}\) and \(T_{ph}\) are the electron and phonon temperatures, respectively. The self-consistent scheme proceeds as follows: at given I\({}_{DC}\), I\({}_{RF}\) we begin by assuming a temperature \(T\) (at the first iteration \(T=T_{ph}\), the bath temperature of the cryostat), which gives a critical current \(I_{c}(T)=I_{c}^{2\pi}(T)+I_{c}^{4\pi}(T)\). Next we solve Eq. (1), from which we estimate the Joule power \(P=\left\langle I(t)V(t)\right\rangle\); this gives the temperature of the electron subsystem \(T_{e}=\sqrt[5]{T_{ph}^{5}+\frac{P}{\Sigma U}}\) and hence a new critical current \(I_{c}(T_{e})\). This sequence is repeated until the difference between the newly estimated temperature \(T^{n+1}\) and that from the previous step, \(T^{n}\), is small enough. The temperature dependence of the critical current \(I_{c}(T)\) is taken from the ballistic fit, and the values of the parameters \(\Sigma\) and \(U\) are extracted from the fit of the retrapping current \(I_{r}\) (see Fig. 1).
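The self-consistent loop described above can be sketched as follows; `rsj_voltage` refers to the integrator sketched after Eq. (1), and `Ic_of_T` stands for the ballistic-fit temperature dependence of the two critical-current components. All names and the convergence criterion are illustrative assumptions.

```python
def self_consistent_voltage(I_dc, I_rf, f_rf, R, Ic_of_T, Sigma, U, T_ph,
                            tol=1e-4, max_iter=50):
    """Self-consistent electron-temperature loop around the RSJ integrator.

    Ic_of_T(T) must return (Ic_2pi, Ic_4pi) at temperature T (ballistic fit);
    rsj_voltage is the sketch given after Eq. (1).
    """
    T_e, V = T_ph, 0.0
    for _ in range(max_iter):
        Ic_2pi, Ic_4pi = Ic_of_T(T_e)
        V = rsj_voltage(I_dc, I_rf, f_rf, R, Ic_2pi, Ic_4pi)
        P = abs(I_dc * V)                              # Joule power estimate <IV>
        T_new = (T_ph ** 5 + P / (Sigma * U)) ** 0.2   # heat balance, Eq. (2)
        if abs(T_new - T_e) < tol:
            return V, T_new
        T_e = T_new
    return V, T_e
```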
As shown in the work by Le Calvez et al. [11], there are two characteristic frequencies in this system, \(f^{2\pi}=\frac{1}{\hbar}2eRI_{c}^{2\pi}\) and \(f^{4\pi}=\frac{1}{\hbar}eRI_{c}^{4\pi}\). They define the 2D phase diagram, see Fig. 4(b). In Fig. 4(c,e), a numerical calculation was performed using the model for a frequency of \(0.9\,\mathrm{GHz}\). The results demonstrate that at low power levels of the RF signal the second step appears first, while at high power levels the first step also appears. To achieve the closest resemblance to the experimental data, it was necessary to include a minimum of \(5\%\) of the \(4\pi\)-periodic component. Fig. 4(e,f) depicts the fitting results for a frequency of \(2\,\mathrm{GHz}\), which reproduce the behavior observed in the experimental data. An interesting result of the model is that it reproduces the zero-resistance region at low DC and RF bias and predicts a linear shape of its boundary.
Our study sheds light on the perplexing phenomenon of selective missing of the first Shapiro step exclusively in topological Josephson junctions. In our ballistic Josephson junction, constructed using the 3D topological insulator Bi\({}_{2}\)Te\({}_{2.3}\)Se\({}_{0.7}\), this phenomenon is aptly characterized and can be effectively explained by a two-channel thermal RSJ model.
**ACKNOWLEDGMENTS**
|
2301.13570 | Conjugacy for certain automorphisms of the one-sided shift via
transducers | We address the following open problem, implicit in the 1990 article
"Automorphisms of one-sided subshifts of finite type" of Boyle, Franks and
Kitchens (BFK):
"Does there exists an element $\psi$ in the group of automorphisms of the
one-sided shift $\operatorname{Aut}(\{0,1,\ldots,n-1\}^{\mathbb{N}},
\sigma_{n})$ so that all points of $\{0,1,\ldots,n-1\}^{\mathbb{N}}$ have
orbits of length $n$ under $\psi$ and $\psi$ is not conjugate to a
permutation?"
Here, by a 'permutation' we mean an automorphism of one-sided shift dynamical
system induced by a permutation of the symbol set $\{0,1,\ldots,n-1\}$.
We resolve this question by showing that any $\psi$ with properties as above
must be conjugate to a permutation.
Our techniques naturally extend those of BFK using the strongly synchronizing
automata technology developed here and in several articles of the authors and
collaborators (although, this article has been written to be largely
self-contained). | Collin Bleak, Feyishayo Olukoya | 2023-01-31T11:47:34Z | http://arxiv.org/abs/2301.13570v2 | # Conjugacy for certain automorphisms of the one-sided shift via transducers
###### Abstract
We address the following open problem, implicit in the 1990 article _Automorphisms of one-sided subshifts of finite type_ of Boyle, Franks and Kitchens (BFK):
Does there exist an element \(\psi\) in the group of automorphisms of the one-sided shift \(\mathrm{Aut}(\{0,1,\ldots,n-1\}^{\mathbb{N}},\sigma_{n})\) so that all points of \(\{0,1,\ldots,n-1\}^{\mathbb{N}}\) have orbits of length \(n\) under \(\psi\) and \(\psi\) is not conjugate to a permutation?
Here, by a _permutation_ we mean an automorphism of one-sided shift dynamical system induced by a permutation of the symbol set \(\{0,1,\ldots,n-1\}\).
We resolve this question by showing that any \(\psi\) with properties as above must be conjugate to a permutation.
Our techniques naturally extend those of BFK using the strongly synchronizing automata technology developed here and in several articles of the authors and collaborators (although, this article has been written to be largely self-contained).
###### Contents
* 1 Introduction
* 2 Preliminaries
* 2.1 The natural numbers and some of its subsets
* 2.2 Words and infinite sequences
* 2.3 Automata and transducers
* 2.4 Increasing alphabet size and the dual automaton
* 2.5 Synchronizing automata and bisynchronizing transducers
* 2.6 De Bruijn graphs and folded automata
* 2.7 Automorphisms of digraphs underlying de Bruijn graphs and \(\mathcal{H}_{n}\)
* 2.8 Synchronizing sequences and collapse chains
* 3 Minimal actions of finite order elements of \(\mathcal{H}_{n}\)
* 3.1 Duals and Splits
* 3.2 Notational inconvenience.
* 3.3 Finite order elements of \(\mathcal{H}_{n}\)
* 3.3.1 Building \(\mathscr{A}(A_{k}^{\vee})\) from \(A\)
* 3.3.2 Duals, automata, and automorphisms
* 4 - shrinking conjugacy class representatives
* 4.1 Relabellings and automata sequences
* 4.1.1 Constructing discriminant permutations \(\operatorname{disc}(s,t,Q)\)
* 4.1.2 Discriminant permutations and amalgamation sequences
* 4.2 Relabellings along orbits
* 4.3 Shadow states
* 4.4 Relabelling through shadows
* 5 Conjugate to an \(n\)-cycle
* 5.1 An Example
## 1 Introduction
Let \(n\) be a positive integer and set \(X_{n}:=\{0,1,\ldots,n-1\}\). We will use \(X_{n}\) to represent our standard alphabet of size \(n\) and we will denote by \(\sigma_{n}\) the usual shift map on \(X_{n}^{\mathbb{N}}\). The group \(\operatorname{Aut}(X_{n}^{\mathbb{N}},\sigma_{n})\) of homeomorphisms of \(X_{n}^{\mathbb{N}}\) which commute with the shift map is called _the group of automorphisms of the shift dynamical system_. This is a well-studied group in symbolic dynamics, with the special property (first given by Hedlund in [10]) that if \(\phi\in\operatorname{Aut}(X_{n}^{\mathbb{N}},\sigma_{n})\) has \((x_{0}x_{1}x_{2}\ldots)\phi=y_{0}y_{1}y_{2}\ldots\) then there is an integer \(k\) so that for all indices \(i\), the value \(y_{i}\) is determined by the finite word \(x_{i}x_{i+1}\ldots x_{i+k}\).
The paper [6] characterises all of the finite subgroups of the group \(\operatorname{Aut}(X_{n}^{\mathbb{N}},\sigma_{n})\), shows that this group contains non-abelian free groups whenever \(n\geq 3\), and investigates other algebraic structures of the group. The papers [7, 5] develop a conjugacy invariant for the group \(\operatorname{Aut}(X_{n}^{\mathbb{N}},\sigma_{n})\), arising from the action of the group on periodic words, which for an automorphism \(\phi\) we will denote as \(\operatorname{Sp}(\phi)\) (this invariant consists of a tuple: the well-known _gyration_ and _sign_ functions, together with _first return_ data: bundled data associated to the permutation representation on prime words of length \(k\)).
This article resolves the following open problem, implicit in [6], which Mike Boyle suggested to us for its own sake, and, as a test of our approach.
Let \(\Sigma_{n}\) represent the group of permutations of the set \(X_{n}\). By a mild abuse of language, we say \(\phi\in\operatorname{Aut}(X_{n}^{\mathbb{N}},\sigma_{n})\) is a _permutation_ if there is a fixed permutation \(\alpha\in\Sigma_{n}\) so that if \((x_{0}x_{1}x_{2}\ldots)\phi=y_{0}y_{1}y_{2}\ldots\) then we have \(y_{i}=(x_{i})\alpha\) for all \(i\). We say a permutation is a _rotation_ if the permutation from \(\Sigma_{n}\) is an \(n\)-cycle. We can now state the problem:
Does there exist an automorphism \(\psi\in\operatorname{Aut}(X_{n}^{\mathbb{N}},\sigma_{n})\) of order \(n\) so that all points
of \(X_{n}^{\mathbb{N}}\) travel on orbits of size \(n\), where \(\psi\) is not conjugate to a rotation?
In [6] Boyle, Franks and Kitchens show that if \(n\) is prime then any such \(\psi\) is in fact conjugate to a rotation. We show that the Boyle, Franks, and Kitchens result holds for general \(n\).
We have written this article so that it is essentially self-contained for general researchers working with automorphisms of the shift. In particular, we gather definitions and key constructions from [15] and [4] here to simplify the presentation without insisting the reader peruse those articles to follow our discussion. We use the highlighted technology to enhance the key method in the article [6]. The paper [4] shows how to represent any automorphism \(\phi\) of the one-sided shift by a particularly nice family of transducers (finite state machines that transform inputs sequentially) while [15] investigates the order problem for that same family of transducers. A key idea of [4] is that any such transducer \(T\) representing \(\phi\) can be thought of as a triple \((D,R,\phi_{*})\), where \(D\) and \(R\) are _strongly synchronizing automata_ (edge-labelled directed graphs with the particularly nice property of having a _synchronizing sequence_) with \(D\) representing the domain and \(R\) representing the range, and where \(\phi_{*}\) is an isomorphism of the underlying digraphs \(\Gamma(D)\) and \(\Gamma(R)\) of \(D\) and \(R\) determined by the action of \(\phi\) on periodic words. In the case of a finite order element, the domain and range automata can also be chosen to be identical.
In the article [6] the central method for studying finite subgroups of \(\operatorname{Aut}(X_{n}^{\mathbb{N}},\sigma_{n})\) is firstly to find an action of the group on the underlying digraph of an automaton (now understood to be a strongly synchronizing automaton). Once the first step is accomplished, the group is decomposed as a composition series where each composition factor is isomorphic to a subgroup of the symmetric group \(\Sigma_{n}\) on \(n\)-points. This is accomplished by pushing the action down along what is called an "amalgamation sequence" (see Section 4.1.2 here) of the digraph until one has an action by automorphisms on a particularly nice digraph. The construction typically requires passing through the automorphism groups of various one-sided shifts of finite type via topological conjugations induced by the amalgamations.
Our first step simplifies this process. In particular we show that we can always find an action of a finite subgroup of \(\operatorname{Aut}(X_{n}^{\mathbb{N}},\sigma_{n})\) on the underlying digraph of a strongly synchronizing automaton whose amalgamation and synchronizing sequences cohere (Section 4.1.2), thus we can push down along the synchronizing sequence of that automaton without needing to possibly change alphabet. This is already enough, when \(n\) is prime, to show that every element of order \(p\) in \(\operatorname{Aut}(X_{n}^{\mathbb{N}},\sigma_{n})\) is conjugate in \(\operatorname{Aut}(X_{n}^{\mathbb{N}},\sigma_{n})\) to a rotation.
However, to answer the open problem above, we need to go beyond this. Suppose \(\phi\in\operatorname{Aut}(X_{n}^{\mathbb{N}},\sigma_{n})\) has order \(n\) and with the condition (\(\star\)) that all points of \(X_{n}^{\mathbb{N}}\) travel on orbits of size \(n\). It turns out that (\(\star\)) is equivalent to the condition that for any transducer \((A,A,\phi_{*})\) representing \(\phi\), the action of \(\phi_{*}\) on \(\Gamma(A)\) has the property that for every (based) circuit \(C\) of \(\Gamma(A)\) the orbit length of \(C\) under this action is \(n\). (We are using based circuits here to avoid a circuit returning to itself with some non-trivial rotation as counting as completing the orbit.) When \(n\) is a prime \(p\), it is not hard to see that the action of \(\phi_{*}\) on the underlying digraph is limited in orbit lengths for edges and vertices to \(1\) and \(p\). When \(n\) is not prime, orbit lengths of edges and vertices can be any divisor of \(n\) even though all circuits have orbit length \(n\). This last issue creates problems when trying to implement the approach successfully carried out by Boyle et al for \(n\) prime.
We overcome this issue for such a \(\phi\) with representative transducer \((A,A,\phi_{*})\) with several technical lemmas. These aim to show that the automaton \(A\) can be "fluffed up" by adding
_shadow states_ (Section 4.3) to create a new strongly synchronizing automaton \(B\) with an induced and more informative action \(\psi_{*}\) on \(\Gamma(B)\) so that \((B,B,\psi_{*})\) still represents \(\phi\). By 'more informative' we mean that the correct addition of shadow states results in states and edges originally on orbits of length \(<n\) having resulting orbits of length \(n\). This new action makes it possible to find a conjugate action of \(\phi\) on a strongly synchronizing automaton of strictly smaller size than \(A\) (where the conjugacy occurs entirely within \(\text{Aut}(X_{n}^{\mathbb{N}},\sigma_{n})\)).
Our approach can now be summarised as follows. First we conjugate to get a (conjugate) action of \(\phi\) on a strongly synchronizing automaton whose synchronizing sequence coheres with the amalgamation sequence of its underlying digraph. Then we have a series of "fluffing up" moves followed by reductions via conjugation. Eventually, these processes result in a conjugate action given by a transducer over a single state automaton with \(n\) labelled loops, where each edge is on an orbit of length \(n\); our original element \(\phi\) must then be conjugate to a rotation.
The example in Section 5.1 might prove helpful to the reader as an illustration of our approach and of the difficulties discussed above.
The property of being a strongly synchronizing automaton is equivalent to that of being a _folded de Bruijn graph_. Crucial to the approach we have sketched out is the process: given a finite order element \(\phi\in\text{Aut}(X_{n}^{\mathbb{N}},\sigma_{n})\), find the minimal folded de Bruijn graph \(\Gamma\) so that \(\phi\) acts faithfully on \(\Gamma\) by automorphisms. The following is essentially a result from [4] stated in our context (see Lemma 3.4 and Theorem 3.5, below).
**Theorem 1.1**.: _Let \(n\geq 2\) be an integer and suppose \(\phi\in\text{Aut}(X_{n}^{\mathbb{N}},\sigma_{n})\) is a finite order element. There is an effective process for determining \(\Gamma_{\phi}\), the minimal folded de Bruijn graph on an \(n\) letter alphabet, so that \(\phi\) induces a natural automorphism of \(\Gamma_{\phi}\)._
Finally, we can state the theorem which answers the question of Boyle.
**Theorem 1.2**.: _Let \(\phi\in\text{Aut}(X_{n}^{\mathbb{N}},\sigma_{n})\) be an element of finite order. The following are equivalent:_
* \(\phi\) _is conjugate to a rotation;_
* _every element of_ \(X_{n}^{\mathbb{N}}\) _is on an orbit of length_ \(n\) _under the action of_ \(\phi\)_; and_
* _for any folded de Bruijn graph_ \(\Gamma_{\phi}\) _admitting a faithful action by_ \(\phi\) _via an automorphism_ \(\phi_{*}\)_, every (based) circuit of_ \(\Gamma_{\phi}\) _is on an orbit of length_ \(n\) _under_ \(\phi_{*}\)_._
It is unclear at the moment how much our approach depends on the condition that "all circuits are on orbits of length \(n\)". In work in progress we aim to extend our current ideas towards resolving the conjugacy problem for finite order elements of \(\text{Aut}(X_{n}^{\mathbb{N}},\sigma_{n})\).
### Acknowledgements
The authors are grateful for partial support from EPSRC research grant EP/R032866/1. The second author is additionally grateful for support from Leverhulme Trust Research Project Grant RPG-2017-159 and LMS ECF grant ECF-1920-105. Finally, we are also grateful to Mike Boyle for conversations on the question we address here.
## 2 Preliminaries
### The natural numbers and some of its subsets
We use the notation \(\mathbb{N}\) for the set \(\{0,1,2,\ldots\}\); for \(j\in\mathbb{N}\) we write \(\mathbb{N}_{j}\) for the set \(\{i\in\mathbb{N}:i\geq j\}\) of all natural numbers which are bigger than or equal to \(j\).
### Words and infinite sequences
In this subsection we set up necessary notation for words and sequences.
Firstly, we employ all of the usual notation around finitary words over the alphabet \(X\). Namely, for a base set \(X\), and natural \(n\), \(X^{n}\) is the set of ordered \(n\)-tuples with coordinates from \(X\). We call these the _words of length \(n\) (over alphabet \(X\))_. By convention, we set \(X^{0}:=\{\varepsilon\}\) and we refer to \(\varepsilon\) as _the empty word_ or _empty string_, proclaiming this to be the same object, independent of the (non-empty) set \(X\) used as our alphabet. We set \(X^{*}:=\cup_{n\in\mathbb{N}}X^{n}\), the words of finite length over \(X\) (this is the Kleene-star operator). We also set \(X^{+}:=X^{*}\backslash\{\varepsilon\}\), the non-trivial/non-empty finite length words over \(X\). If \(w\in X^{*}\) we set \(|w|=n\) where \(w\in X^{n}\), and we call \(|w|\) the _length of \(w\)_. If \(X\) has a linear order \(<\), then we give \(X^{*}\) the induced dictionary order. If \(u\in X^{n}\) then we implicitly set values \(u_{i}\in X\) for \(0\leq i<n\) so that \(u=(u_{0},u_{1},\ldots,u_{n-1})\). In this context, from here forward we will simply write \(u=u_{0}u_{1}\ldots u_{n-1}\). For \(u\in X^{n}\) and \(i\leq|u|\), we write \(u_{[1,i]}\) for the prefix \(u_{1}\ldots u_{i}\) of \(u\). Finally, if \(u,v\in X^{*}\), so that \(u=u_{0}u_{1}\ldots u_{r-1}\) and \(v=v_{0}v_{1}\ldots v_{s-1}\) then \(uv\) will represent the concatenation of these words: \(uv:=u_{0}u_{1}\ldots u_{r-1}v_{0}v_{1}\ldots v_{s-1}\), which is a word of length \(r+s\) over \(X\).
As in the paper [4], we take \(X_{n}^{-\mathbb{N}}:=\{\ldots x_{-2}x_{-1}x_{0}\mid x_{i}\in X_{n}\}\) as our shift space, with the shift operator \(\sigma_{n}\) defined by \((x_{i})_{i\in-\mathbb{N}}\sigma_{n}=(y_{i})_{i\in-\mathbb{N}}\) where we have \(y_{i}=x_{i-1}\). We use the characterisation of elements of \(\mathcal{H}_{n}\) as strongly synchronizing transducers corresponding to shift commuting automorphisms of \(X_{n}^{-\mathbb{N}}\). For a finite-length word over \(X_{n}\) we may index this word with negative or positive indices as seems natural at the time. When we are explicitly thinking of a finite subword \(w\in X_{n}^{k}\) of a point \(x\in X_{n}^{-\mathbb{N}}\) we will ordinarily index \(w\) as \(w=w_{i-k+1}w_{i-k+2}\ldots w_{i}\) for some \(i\in-\mathbb{N}\).
Suppose \(k\) is a positive integer and \(u=u_{-(k-1)}u_{-(k-2)}\ldots u_{-1}u_{0}\in X_{n}^{k}\). Define \(u^{\omega}\in X_{n}^{-\mathbb{N}}\), by which notation we mean the point \(\ldots x_{m}x_{m-1}\ldots x_{-1}x_{0}=:x\) where \(x_{i}=u_{i\pmod{k}}\). The word \(x\in X_{n}^{-\mathbb{N}}\) is called a _periodic word_. The _period_ of \(x\) is the smallest \(j\in\mathbb{N}\) such that \((x)\sigma_{n}^{j}=x\). If the length \(|u|\) is the period of the word \(x\), then \(u\) is called _prime_. Alternatively \(u\) is prime if there is no smaller word \(\gamma\in X_{n}^{+}\) such that \(u=\gamma^{i}\) for some \(i\geq 2\).
Write \(\mathsf{X}_{n}^{k}\) for the full set of prime words of length \(k\) over the alphabet \(X_{n}\).
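As a small illustration of this notion (a sketch of ours, not code from the paper), prime words can be enumerated by discarding proper powers of shorter words.

```python
from itertools import product

def is_prime_word(u):
    """u (a tuple of letters) is prime iff it is not gamma^i for a shorter gamma, i >= 2."""
    n = len(u)
    return not any(u == u[:d] * (n // d) for d in range(1, n) if n % d == 0)

def prime_words(n, k):
    """The prime words of length k over X_n = {0, ..., n-1}."""
    return [w for w in product(range(n), repeat=k) if is_prime_word(w)]

# For example, over X_2 the prime words of length 2 are (0,1) and (1,0).
assert prime_words(2, 2) == [(0, 1), (1, 0)]
```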
Given two words \(u,v\in X_{n}^{+}\) such that \(|u|=|v|=r\), we call \(v\) a _rotation_ of \(u\) if there is an \(i\in\mathbb{N}\) with \((u^{\omega})\sigma_{n}^{i}=v^{\omega}\). In this case, we may refer to \(v\) as the \(i^{th}\)_-rotation of \(u\)_ (even if \(i>|u|\)).
It is a well-known fact that an element \(\phi\in\operatorname{Aut}(X_{n}^{-\mathbb{N}},\sigma_{n})\) preserves the period of a periodic element of \(X_{n}^{-\mathbb{N}}\). In this way, the action of \(\operatorname{Aut}(X_{n}^{-\mathbb{N}},\sigma_{n})\) on periodic words gives a representation from \(\operatorname{Aut}(X_{n}^{-\mathbb{N}},\sigma_{n})\) to the group \(\Pi_{k\in\mathbb{N}}\operatorname{Sym}(\mathsf{X}_{n}^{k})\). For \(\phi\in\operatorname{Aut}(X_{n}^{-\mathbb{N}},\sigma_{n})\), write
\(\overline{\phi}_{k}\) for the action of \(\phi\) on prime words of length \(k\) and write \(\overline{\phi}\) for the element \((\overline{\phi}_{k})_{k\in\mathbb{N}}\in\Pi_{k\in\mathbb{N}}\operatorname{Sym}(\mathsf{X}_{n}^{k})\). The map \(\overline{\phi}\) is the _periodic point representation_ of \(\operatorname{Aut}(X_{n}^{-\mathbb{N}},\sigma_{n})\), introduced in [7].
### Automata and transducers
An _automaton_, in our context, is a triple \(A=(X_{A},Q_{A},\pi_{A})\), where
* \(X_{A}\) is a finite set called the _alphabet_ of \(A\) (we assume that this has cardinality \(n\), and identify it with \(X_{n}\), for some \(n\));
* \(Q_{A}\) is a finite set called the _set of states_ of \(A\);
* \(\pi_{A}\) is a function \(X_{A}\times Q_{A}\to Q_{A}\), called the _transition function_.
The _size_ of an automaton \(A\) is the cardinality of its state set. We use the notation \(|A|\) for the size of the \(A\).
We regard an automaton \(A\) as operating as follows. If it is in state \(q\) and reads symbol \(a\) (which we suppose to be written on an input tape), it moves into state \(\pi_{A}(a,q)\) before reading the next symbol. As this suggests, we can imagine that the automaton \(A\) is in the middle of an input word, reads the next letter and moves to the right, possibly changing state in the process.
We can extend the notation as follows. For \(w\in X_{n}^{m}\), let \(\pi_{A}(w,q)\) be the final state of the automaton which reads the word \(w\) from initial state \(q\). Thus, if \(w=x_{0}x_{1}\ldots x_{m-1}\), then
\[\pi_{A}(w,q)=\pi_{A}(x_{m-1},\pi_{A}(x_{m-2},\ldots,\pi_{A}(x_{0},q)\ldots)).\]
By convention, we take \(\pi_{A}(\varepsilon,q)=q\).
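To make this concrete, the following minimal Python sketch (ours, not the authors' code) encodes an automaton together with its extended transition function; the later sketches in this section re-use this `Automaton` class.

```python
class Automaton:
    """An automaton (X_A, Q_A, pi_A); pi is a dict mapping (letter, state) to a state."""

    def __init__(self, alphabet, states, pi):
        self.alphabet, self.states = list(alphabet), list(states)
        self.pi = dict(pi)

    def step(self, x, q):
        return self.pi[(x, q)]

    def run(self, word, q):
        """The extended transition pi_A(w, q): read w letter by letter from state q;
        by convention the empty word leaves the state unchanged."""
        for x in word:
            q = self.step(x, q)
        return q

# Example: a two-state automaton over X_2 that remembers the last letter it read.
A = Automaton(alphabet=[0, 1], states=["s0", "s1"],
              pi={(0, "s0"): "s0", (1, "s0"): "s1",
                  (0, "s1"): "s0", (1, "s1"): "s1"})
assert A.run([1, 0, 0], "s1") == "s0"
```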
For a given state \(q\in Q_{A}\), we call the automaton \(A\) which starts in state \(q\) an _initial automaton_, denoted by \(A_{q}\), and say that it is _initialised_ at \(q\).
An automaton \(A\) can be represented by a labeled directed graph \(G_{A}\), whose vertex set \(V_{A}\) is \(Q_{A}\). For this directed graph there is a directed edge labeled by \(a\in X_{A}\) from \(p\) to \(q\) if \(\pi_{A}(a,p)=q\). Representing this, we determine the set \(E_{A}\) of edges of \(G_{A}\) to be the set of triples
\[E_{A}:=\{(p,a,q)\mid\exists p,q\in Q_{A},a\in X_{A},\text{ so that }\pi_{A}(a,p)=q\}.\]
In what follows, the labelled directed graph \(G_{A}\) will be referred to as the _underlying digraph for the automaton \(A\)_.
A _transducer_ is a quadruple \(T=(X_{T},Q_{T},\pi_{T},\lambda_{T})\), where
* \((X_{T},Q_{T},\pi_{T})\) is an automaton;
* \(\lambda_{T}:X_{T}\times Q_{T}\to X_{T}^{*}\) is the _output function_.
Formally such a transducer is an automaton which can write as well as read; after reading symbol \(a\) in state \(q\), it writes the string \(\lambda_{T}(a,q)\) on an output tape, and makes a transition into state \(\pi_{T}(a,q)\). Thus, the size of a transducer is the size of its underlying automaton. An _initial transducer_\(T_{q}\) is simply a transducer which starts processing input from state \(q\). Transducers which are _synchronous_ (i.e., which always write one letter whenever they read one letter) are also known as _Mealy machines_ (see [9]), although we generally will not use that language here. Transducers which are not synchronous are described as _asynchronous_ when this aspect of the transducer is being highlighted. In this paper, we will only work with synchronous transducers without an initial state, and, henceforth **we simply call these transducers**.
In the same manner as for automata, we can extend the notation to allow transducers to act on finite strings: we let \(\pi_{T}(w,q)\) and \(\lambda_{T}(w,q)\) be, respectively, the final state and the concatenation of all the outputs obtained when a transducer \(T\) reads a string \(w\) from a state \(q\).
A transducer \(T\) can also be represented as an edge-labeled directed graph. Again the vertex set is \(Q_{T}\); now, if \(\pi_{T}(a,q)=r\), we put an edge with label \(a|\lambda_{T}(a,q)\) from \(q\) to \(r\). In other words, the edge label describes both the input and the output associated with that edge. We call \(a\) the _input label_ of the edge and \(\lambda_{T}(a,q)\) the _output label_ of the edge.
For example, Figure 1 describes a synchronous transducer over the alphabet \(X_{2}\).
In what follows, we only use the language _automaton_ for those automata which are not transducers. This allows us characterise a synchronous transducer \(T\) as a pair of automata together with a directed graph isomorphism "gluing" the two automata together as a domain automaton and a range automaton (we split any edge label '\(x|y\)' of \(T\) as specifying the domain automaton edge with label \(x\) and the range automaton edge with label \(y\)).
We can regard any state \(q\) of a transducer as acting on an infinite string from \(X_{n}^{\mathbb{N}}\) where \(X_{n}\) is the alphabet. This action is given by iterating the action on a single symbol; so the output string is given by
\[\lambda_{T}(xw,q)=\lambda_{T}(x,q)\lambda_{T}(w,\pi_{T}(x,q)).\]
Thus \(T_{q}\) induces a map \(w\mapsto\lambda_{T}(w,q)\) from \(X_{n}^{\mathbb{N}}\) to itself; it is easy to see that this map is continuous. If it is a homeomorphism, then we call the state \(q\) a _homeomorphism state_. We write \(\operatorname{Im}(q)\) for the image of the map induced by \(T_{q}\).
Figure 1: A transducer over \(X_{2}\)
Two states \(q_{1}\) and \(q_{2}\) are said to be \(\omega\)_-equivalent_ if the transducers \(T_{q_{1}}\) and \(T_{q_{2}}\) induce the same continuous map. (This can be checked in finite time, see [9].) More generally, we say that two initial transducers \(T_{q}\) and \(T^{\prime}_{q^{\prime}}\) are \(\omega\)_-equivalent_ if they induce the same continuous map on \(X_{n}^{\mathbb{N}}\).
A transducer is said to be _minimal_ if no two states are \(\omega\)-equivalent. For a transducer \(T\), two states \(q_{1}\) and \(q_{2}\) are \(\omega\)-equivalent if \(\lambda_{T}(a,q_{1})=\lambda_{T}(a,q_{2})\) for any finite word \(a\in X_{n}^{*}\). Moreover, if \(q_{1}\) and \(q_{2}\) are \(\omega\)-equivalent states of a synchronous transducer, then for any finite word \(a\in X_{n}^{p}\), \(\pi_{T}(a,q_{1})\) and \(\pi_{T}(a,q_{2})\) are also \(\omega\)-equivalent states.
Two minimal non-initial transducers \(T\) and \(U\) are said to be \(\omega\)_-equal_ if there is a bijection \(f:Q_{T}\to Q_{U}\), such that for any \(q\in Q_{T}\), \(T_{q}\) is \(\omega\)-equivalent to \(U_{(q)f}\). Two minimal initial transducers \(T_{p}\) and \(U_{q}\) are said to be \(\omega\)-equal if they are \(\omega\)-equal as non-initial transducers and there is a bijection \(f:Q_{T}\to Q_{U}\) witnessing this which satisfies the equality \((p)f=q\). We use the symbol '\(=\)' to represent \(\omega\)-equality of initial and non-initial transducers. Two non-initial transducers \(T\) and \(U\) are said to be \(\omega\)_-equivalent_ if they have \(\omega\)-equal minimal representatives, and in this case we might instead say \(T\)_and \(U\) represent the same transformation_.
In the class of synchronous transducers, the \(\omega\)-equivalence class of any transducer has a unique minimal representative.
Throughout this article, as a matter of convenience, we shall not distinguish between \(\omega\)-equivalent transducers. Thus, for example, we introduce various groups as if the elements of those groups are transducers, whereas the elements of these groups are in fact \(\omega\)-equivalence classes of transducers.
Given two transducers \(T=(X_{n},Q_{T},\pi_{T},\lambda_{T})\) and \(U=(X_{n},Q_{U},\pi_{U},\lambda_{U})\) with the same alphabet \(X_{n}\), we define their product \(T*U\). The intuition is that the output for \(T\) will become the input for \(U\). Thus we take the alphabet of \(T*U\) to be \(X_{n}\), the set of states to be \(Q_{T*U}=Q_{T}\times Q_{U}\), and define the transition and rewrite functions by the rules
\[\pi_{T*U}(x,(p,q)) = (\pi_{T}(x,p),\pi_{U}(\lambda_{T}(x,p),q)),\] \[\lambda_{T*U}(x,(p,q)) = \lambda_{U}(\lambda_{T}(x,p),q),\]
for \(x\in X_{n}\), \(p\in Q_{T}\) and \(q\in Q_{U}\). Here we use the earlier convention about extending \(\lambda\) and \(\pi\) to the case when the transducer reads a finite string. If \(T\) and \(U\) are initial with initial states \(q\) and \(p\) respectively then the state \((q,p)\) is considered the initial state of the product transducer \(T*U\).
In automata theory a synchronous (not necessarily initial) transducer \(T=(X_{n},Q_{T},\pi_{T},\lambda_{T})\) is _invertible_ if for any state \(q\) of \(T\), the map \(\rho_{q}:=\lambda_{T}(\centerdot,q):X_{n}\to X_{n}\) is a bijection. In this case the inverse of \(T\) is the transducer \(T^{-1}\) with state set \(Q_{T^{-1}}:=\{q^{-1}\mid q\in Q_{T}\}\), transition function \(\pi_{T^{-1}}:X_{n}\times Q_{T^{-1}}\to Q_{T^{-1}}\) defined by \((x,p^{-1})\mapsto q^{-1}\) if and only if \(\pi_{T}((x)\rho_{p}^{-1},p)=q\), and output function \(\lambda_{T^{-1}}:X_{n}\times Q_{T^{-1}}\to X_{n}\) defined by \((x,p)\mapsto(x)\rho_{p}^{-1}\). Thus, in the graph of the transducer \(T\) we simply switch the input labels with the output labels and append '\({}^{-1}\)' to the state names.
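These two operations translate directly into code. The sketch below (ours) mirrors the conventions of the `Automaton` sketch above; since we only deal with synchronous transducers, outputs are single letters, and the inverse keeps the original state names rather than appending an inverse sign to them.

```python
class Transducer:
    """A synchronous transducer (X, Q, pi, lam): pi and lam are dicts on (letter, state)."""

    def __init__(self, alphabet, states, pi, lam):
        self.alphabet, self.states = list(alphabet), list(states)
        self.pi, self.lam = dict(pi), dict(lam)

def transducer_product(T, U):
    """The product T * U: the (single-letter) output of T is fed to U as input."""
    states = [(p, q) for p in T.states for q in U.states]
    pi, lam = {}, {}
    for x in T.alphabet:
        for (p, q) in states:
            y = T.lam[(x, p)]                                  # output of T at (x, p)
            pi[(x, (p, q))] = (T.pi[(x, p)], U.pi[(y, q)])
            lam[(x, (p, q))] = U.lam[(y, q)]
    return Transducer(T.alphabet, states, pi, lam)

def transducer_inverse(T):
    """Inverse of an invertible synchronous transducer: swap input and output labels.
    (We keep the original state names rather than appending '-1' to them.)"""
    pi, lam = {}, {}
    for x in T.alphabet:
        for p in T.states:
            y = T.lam[(x, p)]            # edge p --x|y--> T.pi[(x, p)]
            pi[(y, p)] = T.pi[(x, p)]    # becomes p --y|x--> the same target
            lam[(y, p)] = x
    return Transducer(T.alphabet, T.states, pi, lam)
```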
We are concerned only with **invertible, synchronous transducers** in this article.
### Increasing alphabet size and the dual automaton
We require a couple of standard constructions in the theory of synchronous automata in this work.
First we consider the 'paths to letters' construction. Let \(T\) be a transducer over the alphabet \(X_{n}\). Let \(m\in\mathbb{N}_{1}\). Write \(T(m)\) for the transducer over the alphabet \(X_{n}^{m}\) with state set \(Q_{T}\) and transition and output functions \(\pi_{T(m)}\), \(\lambda_{T(m)}\) satisfying the following conditions. For \(x\in X_{n}^{m}\) and \(q\in Q_{T}\) we set \(\pi_{T(m)}(x,q)=p\) if and only if \(\pi_{T}(x,q)=p\) in \(T\); we set \(\lambda_{T(m)}(x,q):=\lambda_{T}(x,q)\). It is clear that if \(T\) is minimal and invertible, then \(T(m)\) is also minimal and invertible.
The other construction we require is the _dual automaton_ (see [1, 14]).
Again let \(T\) be a transducer over the alphabet \(X_{n}\). Set \(T^{\vee}=\langle Q_{T},X_{n},\pi_{T}^{\vee},\lambda_{T}^{\vee}\rangle\), that is the state set of \(T^{\vee}\) is the set \(X_{n}\), the alphabet of \(T^{\vee}\) is the state set \(Q_{T}\) of \(T\), and the transition \(\pi_{T}^{\vee}\) and output functions \(\lambda_{T}^{\vee}\) are defined as follows. For \(q\in Q_{T}\) and \(x\in X_{n}\), \(\pi_{T}^{\vee}(q,x)=y\) and \(\lambda_{T}^{\vee}(q,x)=p\) if and only if \(\pi_{T}(x,q)=p\) and \(\lambda_{T}(x,q)=y\).
There is a connection between the two constructions. The following is standard in the theory of synchronous automata and provides a key insight in the analysis of [1].
**Lemma 2.1**.: _Let \(T\) be a synchronous transducer over alphabet \(X_{n}\). For positive natural \(m\), we have \((T^{\vee})^{m}=T(m)^{\vee}\)._
Note that to lighten our notation below, we may use the notation \(T_{m}^{\vee}\) for the transducer \(T(m)^{\vee}\).
Also observe that \(T^{-1^{\vee}}\) is obtained from \(T^{\vee}\) by 'reversing the arrows'. That is, if \(x,y\in X_{n}\), \(q,p\in Q_{T}\) are such that \(\pi_{T}^{\vee}(q,x)=y\) and \(\lambda_{T}^{\vee}(q,x)=p\), then \(\pi_{T^{-1}}^{\vee}(q^{-1},y)=x\) and \(\lambda_{T^{-1}}^{\vee}(q^{-1},y)=p^{-1}\).
### Synchronizing automata and bisynchronizing transducers
Given a natural number \(k\), we say that an automaton \(A\) with alphabet \(X_{n}\) is _synchronizing at level \(k\)_ if there is a map \(\mathfrak{s}_{k}:X_{n}^{k}\mapsto Q_{A}\) such that, for all \(q\) and any word \(w\in X_{n}^{k}\), we have \(\pi_{A}(w,q)=\mathfrak{s}_{k}(w)\). In other words, \(A\) is synchronizing at level \(k\) if, after reading a word \(w\) of length \(k\) from a state \(q\), the final state depends only on \(w\) and not on \(q\). (Again we use the extension of \(\pi_{A}\) to allow the reading of an input string rather than a single symbol.) We call \(\mathfrak{s}_{k}(w)\) the state of \(A\)_forced_ by \(w\); the map \(\mathfrak{s}_{k}\) is called the _synchronizing map at level \(k\)_. An automaton \(A\) is called _strongly synchronizing_ if it is synchronizing at level \(k\) for some \(k\).
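A direct, if naive, test of synchronization at level \(k\) simply reads every word of length \(k\) from every state. The sketch below (ours) assumes the `Automaton` class introduced earlier and returns the synchronizing map \(\mathfrak{s}_{k}\) when it exists.

```python
from itertools import product as iproduct

def synchronizing_map(A, k):
    """Return the synchronizing map s_k : X^k -> Q_A if A is synchronizing at level k,
    and None otherwise.  A is an Automaton as in the sketch above."""
    s = {}
    for word in iproduct(A.alphabet, repeat=k):
        targets = {A.run(word, q) for q in A.states}
        if len(targets) != 1:        # some pair of states is not collapsed by this word
            return None
        s[word] = targets.pop()      # the state of A forced by the word
    return s
```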
We remark here that the notion of synchronization occurs in automata theory in considerations around the _Cerny conjecture_, in a weaker sense. A word \(w\) is said to be a _reset word_ for \(A\) if \(\pi_{A}(w,q)\) is independent of \(q\); an automaton is called _synchronizing_ if it has a reset word [16, 2]. Our definition of "synchronizing at level \(k\)"/"strongly synchronizing" requires every word of length \(k\) to be a reset word for the automaton.
If the automaton \(A\) is synchronizing at level \(k\), we define the _core_ of \(A\) to be the maximal sub-automaton with set of states those states in the image of the map \(\mathfrak{s}\). It is an easy observation that, if \(A\) is synchronizing at level \(k\), then its core is an automaton in its own
right using the same alphabet, and is also synchronizing at level \(k\). We denote this automaton by \(\operatorname{core}(A)\). We say that an automaton or transducer is _core_ if it is equal to its core.
Clearly, if \(A\) is synchronizing at level \(k\), then it is synchronizing at level \(l\) for all \(l\geq k\).
Let \(T_{q}\) be an initial transducer which is invertible with inverse \(T_{q}^{-1}\). If \(T_{q}\) is synchronizing at level \(k\), and \(T_{q}^{-1}\) is synchronizing at level \(l\), we say that \(T_{q}\) is _bisynchronizing_ at level \((k,l)\). If \(T_{q}\) is invertible and is synchronizing at level \(k\) but not bisynchronizing, we say that it is _one-way synchronizing_ at level \(k\).
For a non-initial invertible transducer \(T\) we also say \(T\) is bi-synchronizing (at level \((k,l)\)) if both \(T\) and its inverse \(T^{-1}\) are synchronizing at levels \(k\) and \(l\) respectively.
Note that if \(T\) is a strongly synchronizing transducer, then for any \(m\in\mathbb{N}\), \(T(m)\) is also strongly synchronizing. Moreover, if \(k\) is the minimal synchronizing level of \(T\), then \(T(m)\) is synchronizing at level \(1\) for any \(m\geq k\) and, more generally, is synchronizing at level \(\lceil k/m\rceil\).
**Notation 2.2**.: Let \(T\) be a transducer which is synchronizing at level \(k\) and let \(l\geq k\) be any natural number. Then for any word \(w\in X_{n}^{l}\), we write \(q_{w}\) for the state \(\mathfrak{s}_{l}(w)\), where \(\mathfrak{s}_{l}:X_{n}^{l}\to Q_{T}\) is the synchronizing map at level \(l\).
The following result was proved in Bleak _et al._[3].
**Proposition 2.3**.: _Let \(T\), \(U\) be transducers which (as automata) are synchronizing at levels \(j\), \(k\) respectively. Then \(T*U\) is synchronizing at level \(j+k\)._
Note that in the statement of Proposition 2.3, the lowest synchronizing level of \(T*U\) might actually be less than \(j+k\).
Let \(T\) be a transducer which (regarded as an automaton) is synchronizing at level \(k\), then the core of \(T\) (similarly denoted \(\operatorname{core}(T)\)) induces a continuous map
\[f_{T}:X_{n}^{-\mathbb{N}}\to X_{n}^{-\mathbb{N}}\]
as follows. Let \(x\in X_{n}^{-\mathbb{N}}\) and set \(y\in X_{n}^{-\mathbb{N}}\) to be the sequence defined by
\[y_{i}=\lambda_{T}(x_{i},q_{x_{i-k}x_{i-(k-1)}\dots x_{i-1}}).\]
Note that
\[\pi_{T}(x_{i},q_{x_{i-k}x_{i-(k-1)}\dots x_{i-1}})=q_{x_{i-(k-1)}\dots x_{i-1}x_{i}}.\]
Set
\[(x)f_{T}=y.\]
Thus, from the point of view of the transition function of \(T\) we in fact begin processing \(x\) at \(-\infty\) and move towards \(x_{0}\). (This is in keeping with our interpretation of transducer as representing machines applying sliding block codes, where here, we are thinking of \(\operatorname{Aut}(X_{n}^{-\mathbb{N}},\sigma_{n})\) as consisting of the sliding block code transformations that require past information only to determine what to do with a digit.) Note, moreover, that the map \(f_{T}\) is independent of the (valid) synchronizing level chosen to define it. We have the following result:
**Proposition 2.4**.: _[_4_]_ _Let \(T\) be a minimal transducer which is synchronizing at level \(k\) and which is core. Then \(f_{T}\in\operatorname{End}(X_{n}^{-\mathbb{N}},\sigma_{n})\)._
The transducer in Figure 1 induces the shift map on \(X_{n}^{-\mathbb{N}}\).
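For a finite word carrying at least \(k\) letters of past context, the map \(f_{T}\) can be applied as a sliding block code. In the sketch below (ours, building on the `Transducer` sketch above and on a precomputed level-\(k\) synchronizing map of the domain automaton), only positions with a full window of \(k\) preceding letters are rewritten.

```python
def apply_core_transducer(T, sync_map, k, x):
    """Apply the sliding block code f_T to a finite word x (a list of letters).

    T is a synchronous Transducer as in the sketch above and sync_map is the level-k
    synchronizing map of its underlying (domain) automaton.  Only positions i >= k are
    rewritten: the first k letters merely supply the past context fixing the state."""
    y = list(x[:k])
    for i in range(k, len(x)):
        q = sync_map[tuple(x[i - k:i])]      # state forced by the previous k letters
        y.append(T.lam[(x[i], q)])
    return y
```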
In [3], the authors show that the set \(\widetilde{\mathcal{H}_{n}}\) of minimal finite synchronizing invertible synchronous core transducers is a monoid; the monoid operation consists of taking the product of transducers and reducing it by removing non-core states and identifying \(\omega\)-equivalent states to obtain a minimal and synchronous representative.
Let \(\mathcal{H}_{n}\) be the subset of \(\widetilde{\mathcal{H}_{n}}\) consisting of transducers which are bi-synchronizing. A chief result of [4] is that \(\operatorname{Aut}(X_{n}^{-\mathbb{N}},\sigma_{n})\cong\mathcal{H}_{n}\).
### De Bruijn graphs and folded automata
The _de Bruijn graph_\(G(n,m)\) can be defined as follows, for integers \(m\geq 1\) and \(n\geq 2\). The vertex set is \(X_{n}^{m}\), where \(X_{n}\) is the alphabet \(\{0,\ldots,n-1\}\) of cardinality \(n\). There is a directed arc from \(a_{0}\ldots a_{m-1}\) to \(a_{1}a_{2}\ldots a_{m}\), with label \(a_{m}\).
Note that, in the literature, the directed edge is also from \(a_{0}a_{1}\ldots a_{m-1}\) to \(a_{1}\ldots a_{m-1}a_{m}\) and the label on this edge is often given as the \((m+1)\)-tuple \(a_{0}a_{1}\ldots a_{m-1}a_{m}\). However, the labelling given above produces an isomorphic graph and is better suited for our purposes.
Figure 2 shows the de Bruijn graph \(G(3,2)\).
Observe that the de Bruijn graph \(G(n,m)\) describes an automaton over the alphabet \(X_{n}\). Moreover, this automaton is synchronizing at level \(m\): when it reads the string \(b_{0}b_{1}\ldots b_{m-1}\) from any initial state, it moves into the state labeled \(b_{0}b_{1}\ldots b_{m-1}\).
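The construction of \(G(n,m)\) as an automaton, and the fact that it synchronizes at level \(m\), can be checked mechanically; the sketch below (ours) re-uses the `Automaton` and `synchronizing_map` sketches above.

```python
from itertools import product as iproduct

def de_bruijn_automaton(n, m):
    """The de Bruijn graph G(n, m) viewed as an automaton over X_n = {0, ..., n-1}:
    states are words of length m, and reading a from state w leads to w[1:] + (a,)."""
    alphabet = list(range(n))
    states = list(iproduct(alphabet, repeat=m))
    pi = {(a, w): w[1:] + (a,) for w in states for a in alphabet}
    return Automaton(alphabet, states, pi)

# G(n, m) is synchronizing at level m: after reading b_0 ... b_{m-1} from any state,
# the automaton is in the state labelled (b_0, ..., b_{m-1}).
assert synchronizing_map(de_bruijn_automaton(3, 2), 2) is not None
```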
The de Bruijn graph is, in a sense we now describe, the universal automaton over \(X_{n}\) which is synchronizing at level \(m\).
We define a _folding_ of an automaton \(A\) over the alphabet \(X_{n}\) to be an equivalence relation \(\equiv\) on the state set of \(A\) with the property that, if \(a\equiv a^{\prime}\) and \(\pi_{A}(x,a)=b\), \(\pi_{A}(x,a^{\prime})=b^{\prime}\), then \(b\equiv b^{\prime}\). That is, reading the same letter from equivalent states takes the automaton to equivalent states. If \(\equiv\) is a folding of \(A\), then we can uniquely define the _folded automaton_\(A/\equiv\): the state set is the set of \(\equiv\)-classes of states of \(A\); and, denoting the \(\equiv\)-class of \(a\) by \([a]\), we have \(\pi_{A/\equiv}(x,[a])=[\pi_{A}(x,a)]\) (note that this is well-defined).
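Given the equivalence classes, a folding can be carried out mechanically. The sketch below (ours, continuing the `Automaton` sketch) also verifies the defining condition of a folding.

```python
def fold(A, classes):
    """Fold the automaton A along an equivalence relation, given as a list of disjoint
    sets of states.  The relation must be a folding: equivalent states must transition
    to equivalent states on every letter (checked below)."""
    cls = {q: frozenset(c) for c in classes for q in c}
    new_states = [frozenset(c) for c in classes]
    pi = {}
    for x in A.alphabet:
        for c in new_states:
            targets = {cls[A.step(x, q)] for q in c}
            if len(targets) != 1:
                raise ValueError("the given equivalence relation is not a folding")
            pi[(x, c)] = targets.pop()
    return Automaton(A.alphabet, new_states, pi)
```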
**Proposition 2.5**.: _[_4_]_ _The following are equivalent for an automaton \(A\) on the alphabet \(X_{n}\):_
* \(A\) _is synchronizing at level_ \(m\)_, and is core;_
* \(A\) _is the folded automaton from a folding of the de Bruijn graph_ \(G(n,m)\)_._
We may think of a de Bruijn graph \(G(n,m)\) as determining a finite category, with objects the foldings of \(G(n,m)\) and with arrows digraph morphisms which commute with the transition maps of the given automata. It is immediate in that point of view that all such arrows are surjective digraph morphisms (and indeed, these are folding maps).
### Automorphisms of digraphs underlying de Bruijn graphs and \({\cal H}_{n}\)
In this section we describe finite order elements of \({\cal H}_{n}\) as automorphisms of folded de Bruijn graphs.
Let \(A\) be a finite automaton on edge-alphabet \(X_{n}\). Recall (Section 2.3) that an automaton \(A\) may be regarded as labeled directed graph with vertex set \(Q_{A}\), and edge set \(E_{A}\subset Q_{A}\times X_{n}\ \times Q_{A}\). We let \(G_{A}\) denote the unlabeled directed graph corresponding to an automaton \(A\), but we retain the triple \((p,x,q)\) to denote the edge of \(G_{A}\) underlying the edge \((p,x,q)\) of \(A\).
Let \(\phi\) be an automorphism of the directed graph \(G_{A}\). Let \(H(A,\phi)\) be a transducer with
* state set \(Q_{H(A,\phi)}:=Q_{A}\),
Figure 2: The de Bruijn graph \(G(3,2)\).
* alphabet set \(X_{n}\),
* transition function \(\pi_{H(A,\phi)}:=\pi_{A}\), and
* output function \(\lambda_{H(A,\phi)}:X_{n}\times Q_{H(A,\phi)}\to X_{n}\),
where \(\lambda_{H(A,\phi)}(x,p)=y\) if and only if there are edges \((p,x,q)\) and \((r,y,s)\) of \(G_{A}\) so that \((p,x,q)\) is taken to \((r,y,s)\) by \(\phi\).
The transducer \(H(A,\phi)\) can be thought of as the result of gluing the automaton \(A\) to a copy of itself along the map \(\phi\). That is, if \(p,q\in Q_{A}\) and \((p,x,q)\) is an edge from \(p\) to \(q\) with label \(x\) in \(A\), and if \(y\) is the label of the edge \(((p,x,q))\phi\) in \(A\), then the vertex \(p\) is identified with the vertex \((p)\phi\), the vertex \(q\) with the vertex \((q)\phi\), the edge \((p,x,q)\) is identified with the edge \(((p,x,q))\phi\) and has input label \(x\) and the output label \(y\).
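Assuming the `Automaton` and `Transducer` sketches above, the gluing construction can be rendered as follows; `phi_vertex` and `phi_edge` are hypothetical callables of ours describing the digraph automorphism \(\phi\).

```python
def glue(A, phi_vertex, phi_edge):
    """Build the transducer H(A, phi) from an automaton A and an automorphism phi of
    the digraph G_A.  Here phi_vertex maps states to states and phi_edge maps an edge
    (p, x, q) to its image edge (p', y, q'); reading x at p then outputs y.
    (A sketch of ours; phi_vertex and phi_edge are hypothetical callables.)"""
    pi, lam = {}, {}
    for x in A.alphabet:
        for p in A.states:
            q = A.step(x, p)
            p_img, y, q_img = phi_edge((p, x, q))
            assert p_img == phi_vertex(p) and q_img == phi_vertex(q)  # phi respects G_A
            pi[(x, p)] = q
            lam[(x, p)] = y
    return Transducer(A.alphabet, A.states, pi, lam)
```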
**Remark 2.6**.: We make a few observations:
* For each state \(q\in Q_{H(A,\phi)}\), the map \(\lambda_{H(A,\phi)}(\centerdot,q):X_{n}\to X_{n}\) is a bijection. This follows from the definition of \(G_{A}\): for each \(x\in X_{n}\) there is precisely one edge of the form \(((q)\phi,x,p)\) based at the vertex \((q)\phi\). It follows that the transducer \(H(A,\phi)\) is invertible.
* If \(A\) is synchronizing at level \(k\) (and so a folding of \(G(n,k)\) by Proposition 2.5), then both \(H(A,\phi)\) and \(H(A,\phi)^{-1}\) are synchronizing at level \(k\), hence the minimal representative \(\overline{H(A,\phi)}\) of \(H(A,\phi)\) is an element of \(\mathcal{H}_{n}\).
* In fact, for a state \(q\in Q_{A}\), if \(W_{k,q}\) is the set of words of length \(k\) that force the state \(q\), i.e., \[W_{k,q}:=\{a\in X_{n}^{k}:\pi_{H(A,\phi)}(a,q)=q\},\] then \(\{\lambda_{H(A,\phi)}(a,p)\mid a\in W_{k,q},p\in Q_{H(A,\phi)}\}\) is equal to \(W_{k,(q)\phi}\).
* An element of \(\mathcal{H}_{n}\) which can be represented by a transducer \(H(A,\phi)\) for some folded de Bruijn graph \(A\) and digraph automorphism \(\phi\) of \(G_{A}\) must have finite order.
If \(A\in\mathcal{H}_{n}\) and \(B\) is an automaton so that there is a digraph automorphism \(\phi:G_{B}\to G_{B}\) so that \(A\) and \(H(B,\phi)\) represent the same transformation then we say \(A\)_is induced from \((B,\phi)\)_.
### Synchronizing sequences and collapse chains
We require an algorithm given in [3] for detecting when an automaton is strongly synchronizing. We state a version below.
Let \(A=(X_{n},Q_{A},\pi_{A})\) be an automaton. Define an equivalence relation \(\sim_{A}\) on the states of \(A\) by \(p\sim_{A}q\) if and only if the maps \(\pi_{A}(\cdot,p):X_{n}\to Q_{A}\) and \(\pi_{A}(\cdot,q):X_{n}\to Q_{A}\) are equal. For a state \(q\in Q_{A}\) let \(\mathfrak{q}\) represent the equivalence class of \(q\) under \(\sim_{A}\). Further set \(\mathsf{Q_{A}}:=\{\mathfrak{q}\mid q\in Q_{A}\}\) and let \(\pi_{\mathsf{A}}:X_{n}\times\mathsf{Q_{A}}\to\mathsf{Q_{A}}\) be defined by \(\pi_{\mathsf{A}}(x,\mathfrak{q})=\mathfrak{p}\) where \(p=\pi_{A}(x,q)\).
Observe that \(\pi_{\mathsf{A}}\) is a well defined map. Define a new automaton \(\mathsf{A}=(X_{n},\mathsf{Q_{A}},\pi_{\mathsf{A}})\) noting that \(|\mathsf{Q_{A}}|\leq|Q_{A}|\) and \(|\mathsf{Q_{A}}|=|Q_{A}|\) implies that \(A\) is isomorphic to \(\mathsf{A}\).
Given an automaton \(A\), let \(A_{0}:=A,A_{1},A_{2},\ldots\) be the sequence of automata such that \(A_{i}=\mathsf{A}_{i-1}\) for all \(i\geq 1\). We call the sequence \((A_{i})_{i\in\mathbb{N}}\) the _synchronizing sequence of \(A\)_. We make a few observations.
By definition each term in the synchronizing sequence is a folding of the automaton which precedes it; therefore there is a \(j\in\mathbb{N}\) such that all the \(A_{i}\) for \(i\geq j\) are isomorphic to one another. By a simple induction argument, for each \(i\), the states of \(A_{i}\) correspond to a partition of \(Q_{A}\). We identify the states of \(A_{i}\) with this partition. For two states \(q,p\in Q_{A}\) that belong to a state \(P\) of \(A_{i}\), \(\pi_{A}(x,q)\) and \(\pi_{A}(x,p)\) belong to the same element of \(Q_{A_{i}}\) for all \(x\in X_{n}\). We will use the language '_two states of \(A\) are identified at level \(i\)_' if the two named states belong to the same element of \(Q_{A_{i}}\).
If the automaton \(A\) is strongly synchronizing and core, then an easy induction argument shows that all the terms in its synchronizing sequence are core and strongly synchronizing as well (since they are all foldings of \(A\)). For example if \(A=G(n,m)\), then the first \(m\) terms of the synchronizing sequence of \(A\) are \(G(n,m),G(n,m-1),G(n,m-2),\ldots,G(n,1)\); after this, all the terms in the sequence are the single state automaton on \(X_{n}\).
The result below is from [3].
**Theorem 2.7**.: _Let \(A\) be an automaton and \(A_{0}:=A,A_{1},A_{2},\ldots\) be the sequence of automata such that \(A_{i}=\mathsf{A}_{i-1}\) for all \(i>1\). Then_
1. _a pair of states_ \(p,q\in Q_{A}\)_, belong to the same element_ \(t\in Q_{A_{i}}\) _if and only if for all words_ \(a\in X_{n}^{i}\)_,_ \(\pi_{A}(a,p)=\pi_{A}(a,q)\)_, and_
2. \(A\) _is strongly synchronizing if and only if there is a_ \(j\in\mathbb{N}\) _such that_ \(|Q_{A_{j}}|=1\)_. The minimal_ \(j\) _for which_ \(|A_{j}|=1\) _is the minimal synchronizing level of_ \(A\)_._
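The synchronizing sequence, and with it the criterion of Theorem 2.7, can be computed by iterating the collapse. The sketch below (ours) re-uses the `fold`, `Automaton` and `de_bruijn_automaton` sketches from earlier.

```python
def collapse_once(A):
    """One step A -> A' of the synchronizing sequence: states p, q are merged when
    their one-letter transition maps pi_A(., p) and pi_A(., q) agree."""
    key = {q: tuple(A.step(x, q) for x in A.alphabet) for q in A.states}
    classes = {}
    for q in A.states:
        classes.setdefault(key[q], []).append(q)
    return fold(A, list(classes.values()))

def minimal_synchronizing_level(A):
    """Iterate the collapse: by Theorem 2.7, A is strongly synchronizing iff a single
    state is eventually reached, and the number of steps is the minimal level."""
    level, current = 0, A
    while len(current.states) > 1:
        collapsed = collapse_once(current)
        if len(collapsed.states) == len(current.states):
            return None   # the sequence has stabilised, so A is not strongly synchronizing
        current, level = collapsed, level + 1
    return level

# Example: the de Bruijn automaton G(3, 2) has minimal synchronizing level 2.
assert minimal_synchronizing_level(de_bruijn_automaton(3, 2)) == 2
```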
We also require the notion of a _collapse chain_ from [4]. Let \(A\) and \(B\) be strongly synchronizing automata. Let \(A=A_{0},A_{1},\ldots,A_{k}=B\) be a sequence such that \(A_{i+1}\) is obtained from \(A_{i}\) by identifying pairs of states \(p\sim_{A_{i}}q\). We note that as distinct from the synchronizing sequence, we do not necessarily make all possible identifications. Such a sequence is called a _collapse chain_ if at each step, we make the maximal number of collapses possible relative to the final automaton \(B\). That is, for \(u,v\in Q_{A}\) belonging to the same state of \(B\), in the minimal \(A_{i}\) such that \([u]\sim_{A_{i}}[v]\), we have \([u]=[v]\) in \(A_{i+1}\). We note that this condition means that a collapse chain is unique. Therefore, for \(B\) a strongly synchronizing automaton, we say that \(B\) _belongs to a collapse chain of_ \(A\) if there is a collapse chain \(A=A_{0},A_{1},\ldots,A_{k}=B\). In this case, we call the collapse chain \(A=A_{0},A_{1},\ldots,A_{k}=B\) the _collapse chain from \(A\) to \(B\)_. If \(B\) is a single state automaton, the collapse chain from \(A\) to \(B\) is precisely the synchronizing sequence of \(A\). Thus a collapse chain can be thought of as a synchronizing sequence relative to its end point.
The following facts are straightforward. Let \(A\) be a strongly synchronizing automaton, and \(B\) be an automaton which is a folding of \(A\), then there is a collapse chain from \(A\) to \(B\). Therefore \(B\) belongs to a collapse chain of \(A\) if and only if \(B\) is a folding of \(A\). In particular
if \(B\) belongs to a collapse chain of \(A\), then \(B\) is synchronizing at the minimal synchronizing level of \(A\).
The following result about collapse chains is proved similarly to Theorem 2.7.
**Theorem 2.8**.: _Let \(A\) be an automaton and \(B\) be a folding of \(A\). Let \(A_{0}:=A,A_{1},A_{2},\ldots,A_{m}=B\) be the collapse chain from \(A\) to \(B\). Then a pair of states \(p,q\in Q_{A}\) belong to the same element \(t\in Q_{A_{i}}\) if and only if \(p,q\) belong to the same state of \(Q_{B}\) and for all words \(a\in X_{n}^{i}\), \(\pi_{A}(a,p)=\pi_{A}(a,q)\)._
## 3 Minimal actions of finite order elements of \(\mathcal{H}_{n}\)
For this section, we will work using facts related to dual transducers for strongly synchronizing transducers.
It has been shown in [12, 1, 14] that the dual transducer \(T^{\vee}\) of a synchronous transducer \(T\) contains much information about the order of \(T\) and, implicitly in those works, also much information about the conjugacy class of \(T\). In [15] the dual is considered for strongly synchronizing transducers, where it is shown that for infinite order strongly synchronizing transducers the powers of the dual grow in size asymptotically exponentially, while for finite order transducers the dual generates a finite semigroup with a zero. In this section we bring in some of the methods and results of those works. See [4, 15] for more details than we give below.
### Duals and Splits
Recall our definition of the dual of a transducer from Subsection 2.4. We will mostly be working in a power of the dual of a transducer \(T\), below.
We introduce the following notation. Let \(T\) be a strongly synchronizing transducer, and \(q\in Q_{T}\) be a state. Then we write \(W_{q}\) for the set of words \(\gamma\in X_{n}^{+}\) such that the map \(\pi_{T}(\gamma,\cdot):Q_{T}\to Q_{T}\) has image \(\{q\}\).
Let \(A\) be an element of \(\mathcal{H}_{n}\), with synchronizing level \(k\). Then for \(r\geq k\), \(A_{r}^{\vee}\) has a _split_ \(((p_{1},\ldots,p_{l}),(q_{1},\ldots,q_{l}),\Gamma)\) if and only if the following depiction (see Figure 3) of the transitions in \(A_{r}^{\vee}\) at the state \(\Gamma\) is valid:
More formally, we have the following.
**Definition 3.1** (Splits).: Let \(A\) be an element of \(\mathcal{H}_{n}\), with synchronizing level \(k\) and let \(r\geq k\). Suppose there are
* \(l\in\mathbb{N}_{1}\),
* elements \((p_{1},p_{2},\ldots,p_{l}),(q_{1},q_{2},\ldots,q_{l}),(s_{1},s_{2},\ldots,s_{l })\in Q_{A}^{l}\),
* a word \(\Gamma\in X_{n}^{r}\cap W_{s_{1}}\), and
* distinct states \(t_{1},t_{2}\in Q_{A}\)
such that when we define sequences \(\Gamma_{1},\Gamma_{2},\ldots,\Gamma_{l}\) and \(\Lambda_{1},\Lambda_{2},\ldots,\Lambda_{l}\) by
* \(\Gamma_{1}=\lambda_{A}(\Gamma,p_{1})\) and \(\Lambda_{1}=\lambda_{A}(\Gamma,q_{1})\), and
* for \(1<i\leq l\), \(\Gamma_{i}=\lambda_{A}(\Gamma_{i-1},p_{i})\) and \(\Lambda_{i}=\lambda_{A}(\Lambda_{i-1},q_{i})\),
then \(\Gamma_{i},\Lambda_{i}\in W_{s_{i+1}}\) for all \(1\leq i\leq l-1\), \(\Gamma_{l}\in W_{t_{1}}\) and \(\Lambda_{l}\in W_{t_{2}}\). In this case we say that \(A_{r}^{\vee}\)_splits_.
We also say that the \(l\)-tuples \((p_{1},\ldots,p_{l})\) and \((q_{1},\ldots,q_{l})\)_split \(A_{r}^{\vee}\) (at \(\Gamma\))_. We call \(\{p_{1},q_{1}\}\) the _top of the split_, \(\{t_{1},t_{2}\}\) the _bottom of the split_, and the triple \(((p_{1},\ldots,p_{l}),(q_{1}\ldots,q_{l}),\Gamma)\) a _split of \(A_{r}^{\vee}\) (of length \(l\))_.
N.B.: if we took \(r<k\) in the definition of a split above, then a word \(\Gamma\) of length \(r\) need not be synchronizing, and indeed there may be no synchronizing word of length \(r\) at all, so the definition above breaks down.
The following concept appears implicitly in the proof of Lemma 3.8.
**Definition 3.2**.: Let \(A\) be an element of \(\mathcal{H}_{n}\), with synchronizing level \(k\). Let \(r\geq k\) and \(((p_{1},\ldots,p_{l}),(q_{1}\ldots,q_{l}),\Gamma)\) be a split of \(A_{r}^{\vee}\). Let \(\{t_{1},t_{2}\}\) be the bottom of this split. Then we say that _the bottom of the split \(((p_{1},\ldots,p_{l}),(q_{1}\ldots,q_{l}),\Gamma)\) depends only on the top_ if the following conditions hold for any other tuples \(U_{1},U_{2}\in Q_{A}^{l-1}\):
* the triple \(((p_{1},U_{1}),(q_{1},U_{2}),\Gamma)\) is also a split with bottom \(\{t_{1},t_{2}\}\) and,
* if \(\lambda_{A^{l}}(\Gamma,(p_{1},\ldots,p_{l}))\in W_{t_{1}}\) and \(\lambda_{A^{l}}(\Gamma,(q_{1},\ldots,q_{l}))\in W_{t_{2}}\) then \(\lambda_{A^{l}}(\Gamma,(p_{1},U_{1}))\in W_{t_{1}}\) and \(\lambda_{A^{l}}(\Gamma,(q_{1},U_{2}))\in W_{t_{2}}\), and vice-versa.
Observe that if, for \(r\geq k\), \(A_{r}^{\vee}\) has a split \(((p_{1},\ldots,p_{l}),(q_{1}\ldots,q_{l}),\Gamma)\) whose bottom depends only on the top, then \(p_{1}\neq q_{1}\).
Splitting length as defined below is used explicitly in Lemma 3.8.
**Definition 3.3**.: For a transducer \(A\), we define the \(r\)_-splitting length of \(A\)_ (for \(r\) greater than or equal to the minimal synchronizing length) to be minimal \(l\) such that there is a split of \(A_{r}^{\vee}\) of length \(l\). If there is no such split then we set the \(r\)-splitting length of \(A\) to be \(\infty\).
Figure 3: A split; the symbols \(*\) and \(\sharp\) represent arbitrary elements of \(Q_{A}\).
Note that if, for \(r\geq k\), \(A_{r}^{\vee}\) has \(r\)-splitting length \(l<\infty\), then any split of length \(l\) has the property that the bottom depends only on the top as otherwise one can find a shorter split (see [15]).
### Notational inconvenience.
We are soon to run into some collisions of notation.
Firstly, if \(\phi\in\operatorname{Aut}(X_{n}^{-\mathbb{N}},\sigma_{n})\), then we can represent \(\phi\) by a (minimal) transducer \(A_{\phi}\in\mathcal{H}_{n}\).
Secondly, if \(A\in\mathcal{H}_{n}\), then \(A\) represents an element \(\phi_{A}\in\operatorname{Aut}(X_{n}^{-\mathbb{N}},\sigma_{n})\).
Finally, if \(\phi\in\operatorname{Aut}(X_{n}^{-\mathbb{N}},\sigma_{n})\) has finite order, then as we will see from Theorem 3.5 there is an automaton \(\mathscr{A}(A_{k}^{\vee})\) and an automorphism \(\psi\) of the underlying digraph \(G_{\mathscr{A}(A_{k}^{\vee})}\) so that \(A_{\phi}\) and \(H(\mathscr{A}(A_{k}^{\vee}),\psi)\) represent the same element. It happens that there is a way to define \(\psi\) from \(\phi\), and also, from \(A_{\phi}\). Similarly, we could have begun this paragraph with an element \(A\in\mathcal{H}_{n}\), in which case \(\psi\) would be defined from both \(\phi_{A}\) and from \(A\).
In order to unify our notation here, we will simply denote \(\psi\) in the above situation as \(\phi_{A}\). This of course means that \(\phi_{A}\) will represent two different things (an automorphism of the one-sided shift, or alternatively, an automorphism of a digraph underlying a folded de Bruijn graph). We hope that confounding the notation in this way will not cause confusion as it should be clear what is meant from context, noting as well that the digraph homomorphism \(\phi_{A}\) is the induced digraph homomorphism that arises on \(G_{\mathscr{A}(A_{k}^{\vee})}\) by considering how \(\phi\) maps infinite paths on \(\mathscr{A}(A_{k}^{\vee})\) to other infinite paths on \(\mathscr{A}(A_{k}^{\vee})\).
### Finite order elements of \(\mathcal{H}_{n}\)
In this subsection we build, for a finite order element \(A\in\mathcal{H}_{n}\) and corresponding \(\phi_{A}\in\operatorname{Aut}(X_{n}^{-\mathbb{N}},\sigma_{n})\), the minimal strongly synchronizing automaton \(\mathscr{A}(A_{k}^{\vee})\) which \(\phi_{A}\) can act on as an automorphism of the underlying directed graph with \(A\) being the minimal representative of \(H(\mathscr{A}(A_{k}^{\vee}),\phi_{A})\).
Note that we will retain the notation \(\phi_{A}\) for both the element of \(\operatorname{Aut}(X_{n}^{-\mathbb{N}},\sigma_{n})\) corresponding to \(A\) as well as the digraph automorphism \(\phi_{A}\) that is induced by this automorphism of the shift.
Note that the process of determining \(A_{\phi}\in\mathcal{H}_{n}\) from a given element \(\phi\in\operatorname{Aut}(X_{n}^{-\mathbb{N}},\sigma_{n})\) is not difficult: one simply relates states to different maps as determined by the fixed viewing window (it is common for differing viewing-window strings to correspond to the same map; the set of all such strings corresponding to the same map can be used effectively as the name of that state) and then one records the local letter transformations as the edge labels. For details, see [4].
#### 3.3.1 Building \(\mathscr{A}(A_{k}^{\vee})\) from \(A\)
Let \(A\in\mathcal{H}_{n}\) be of finite order.
In this subsubsection, we learn a process for building a strongly synchronizing automaton \(\mathscr{A}(A_{k}^{\vee})\) so that \(\phi_{A}\) acts on the underlying digraph of \(\mathscr{A}(A_{k}^{\vee})\) by automorphisms in such a way
that \(H(\mathscr{A}(A_{k}^{\vee}),\phi_{A})\) has \(A\) as its minimal representative transducer. This process is essential in the proof that follows for finding simplified elements in the conjugacy class representing our given finite order element \(A\). We will also give an example of the process for a specific element.
In [15, Proposition 4.15] it is shown that there is \(k\in\mathbb{N}\) such that \(A_{k}^{\vee}\) is the zero of the semigroup generated by \(A^{\vee}\). Fix the minimal such \(k\in\mathbb{N}\) so that \(A_{k}^{\vee}\) is the zero of the semigroup generated by \(A^{\vee}\), and let \(\overline{A_{k}^{\vee}}\) be the minimal representative of \(A_{k}^{\vee}\).
The following is a **very useful fact** from [15]: for every state \([\gamma]\) of the zero \(\overline{A_{k}^{\vee}}\), there is a word \(W([\gamma])\in Q_{A}^{+}\) such that for any input word \(s\in Q_{A}^{+}\), the output when \(s\) is processed from the state \([\gamma]\) of \(\overline{A_{k}^{\vee}}\) is the word \((W([\gamma]))^{l}W([\gamma])_{[1,m]}\), where \(|s|=l|W([\gamma])|+m\) and \(W([\gamma])_{[1,m]}\) is the length \(m\) prefix of \(W([\gamma])\). It follows from this that \(\overline{A_{k}^{\vee}}\) has the following structure: for each state \([\gamma]\) (for \(\gamma\in X_{n}^{k}\)) there is \(q_{[\gamma]}\in Q_{A}\) so that for all \(p\in Q_{A}\) we have
* \(\pi_{\overline{A_{k}^{\vee}}}(p,[\gamma])=[\gamma]\cdot A\), and
* \(\lambda_{\overline{A_{k}^{\vee}}}(p,[\gamma])=q_{[\gamma]}\).
We will call this the \(|Q_{A}|\)_-parallel cycle structure of \(\overline{A_{k}^{\vee}}\)_, or less formally, the _cyclical structure of \(\overline{A_{k}^{\vee}}\)_.
Form the automaton \(\mathscr{A}(A_{k}^{\vee})\) as follows. The states of \(\mathscr{A}(A_{k}^{\vee})\) are the states \([\gamma]\) of \(\overline{A_{k}^{\vee}}\) and the transitions are given by the rule that for \(x\in X_{n}\), and \([\gamma]\) a state of \(\overline{A_{k}^{\vee}}\), we set \(\pi_{\mathscr{A}(A_{k}^{\vee})}(x,[\gamma])=[\gamma_{[2,|\gamma|]}x]\). (For this construction it does not matter which \(\gamma\in X_{n}^{k}\) one picks from the class \([\gamma]\), even though we use \(\gamma\) explicitly in the formula of the transition function: this follows since states of \(\overline{A_{k}^{\vee}}\) are \(\omega\)-equivalence classes of \(A_{k}^{\vee}\).)
The following lemma is immediate from the definitions:
**Lemma 3.4**.: _Let \(A\in\mathcal{H}_{n}\) be of finite order and let \(k\in\mathbb{N}\) be such that \(A_{k}^{\vee}\) is the zero of the semigroup generated by \(A^{\vee}\). Then the automaton \(\mathscr{A}(A_{k}^{\vee})\) is strongly synchronizing at level \(k\)._
We also have the following translation (into our context) of the statement of Theorem 4.5 of [4], where \(A(G)\) in that theorem corresponds to \(\mathscr{A}(A_{k}^{\vee})\) here, and where the group \(G\) there is the group \(\langle A\rangle\) here.
**Theorem 3.5**.: _Let \(A\in\mathcal{H}_{n}\) be of finite order and let \(k\in\mathbb{N}\) be such that \(A_{k}^{\vee}\) is the zero of the semigroup generated by \(A^{\vee}\)._
* \(A\) _acts as an automorphism_ \(\phi_{A}\) _of the digraph underlying_ \(\mathscr{A}(A_{k}^{\vee})\) _by mapping an edge_ \(([\gamma],x,[\gamma_{[2,|\gamma|]}x])\) _to the edge_ \((([\gamma])A,\lambda_{A}(x,q_{[\gamma]}),([\gamma_{[2,|\gamma|]}x])A)\) _where_ \(([\gamma])A=[\lambda_{A}(\gamma,q)]\) _for some_ \(q\in Q_{A}\)_,_
* _The minimal representative of the transducer_ \(H(\mathscr{A}(A_{k}^{\vee}),\phi_{A}^{i})\) _is the transducer_ \(A^{i}\)_._
_Example 3.6_.: Consider the transducer \(A\) of Figure 4, which is bi-synchronizing at the second level. Its level \(2\) dual has \(36\) states, so we do not depict it here. However, utilising the AutomGrp package [13] in GAP [8], together with the AutomGrp
function "MinimizationOfAutomaton( )" (which returns an \(\omega\)-equivalent automaton) applied to the second power of the dual automaton, we obtain \(\overline{A_{2}^{\vee}}\), depicted in Figure 5; it is the zero of the semigroup generated by the dual.
Considering the states \(\{q_{0},q_{1},q_{2},p_{0},p_{1},p_{2}\}\) of \(\overline{A_{2}^{\vee}}\) as a partition of the words of length \(2\) over the alphabet \(\{0,1,2,3,4,5\}\), it is easy to see that
\[q_{0} =\{00,01,10,11,40,41,50,51\} p_{0} =\{20,21,30,31\}\] \[q_{1} =\{24,25,34,35,44,45,54,55\} p_{1} =\{04,05,14,15\}\] \[q_{2} =\{02,03,12,13,22,23,32,33\} p_{2} =\{42,43,52,53\}\]
by verifying the \(36\) transitions from \(q_{0}\) in \(A\) using these input words, and cross-checking the state-change results against the transitions of \(\overline{A_{2}^{\vee}}\). From this we can calculate the transitions of \(\mathscr{A}(A_{2}^{\vee})\); the resulting automaton is depicted in Figure 6.
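The computation in this example can be checked mechanically. The following Python sketch (ours; the dictionary layout and class names are merely a convenient encoding of the partition above) recomputes the transitions of \(\mathscr{A}(A_{2}^{\vee})\) via the rule \(\pi_{\mathscr{A}(A_{2}^{\vee})}(x,[\gamma])=[\gamma_{[2,2]}x]\), and verifies that the result does not depend on the chosen representative \(\gamma\).

```python
# Recompute the transitions of A(A_2^v) of Example 3.6 from the partition above.
classes = {
    "q0": ["00", "01", "10", "11", "40", "41", "50", "51"],
    "q1": ["24", "25", "34", "35", "44", "45", "54", "55"],
    "q2": ["02", "03", "12", "13", "22", "23", "32", "33"],
    "p0": ["20", "21", "30", "31"],
    "p1": ["04", "05", "14", "15"],
    "p2": ["42", "43", "52", "53"],
}
word_to_class = {w: c for c, ws in classes.items() for w in ws}
for cls, words in classes.items():
    row = {}
    for x in "012345":
        targets = {word_to_class[w[1:] + x] for w in words}
        assert len(targets) == 1  # the target class is representative-independent
        row[x] = targets.pop()
    print(cls, row)
```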
Figure 4: An element \(A\in\mathcal{H}_{6}\) of order \(6\).
Figure 5: The level \(2\) dual of \(A\).
Notice that both the domain and range automaton of \(A\) are foldings of \(\mathscr{A}(A_{2}^{\vee})\). This phenomenon generalises: for a strongly synchronizing transducer \(A\) representing an element of \(\mathcal{H}_{n}\) of finite order, both the domain and range automata of \(A\) are foldings of \(\mathscr{A}(A_{k}^{\vee})\) (where \(k\) is appropriately chosen).
We revisit this example in Section 5.1 where we show that \(A\) is conjugate to a 6-cycle. \(\bigcirc\)
#### 3.3.2 Duals, automata, and automorphisms
In this subsubsection, we will prove that for finite order \(A\in\mathcal{H}_{n}\) and minimal \(k\) so that \(A_{k}^{\vee}\) is the zero of the semigroup generated by \(A^{\vee}\), the automaton \(\mathscr{A}(A_{k}^{\vee})\) as defined above is the minimal (strongly synchronizing) automaton so that \(A\) can act on \(\mathscr{A}(A_{k}^{\vee})\) as an automorphism \(\phi_{A}\), with \((\mathscr{A}(A_{k}^{\vee}),\phi_{A})\) inducing \(A\).
We first require lemmata exploring the relationship between properties of \(\mathscr{A}(A_{k}^{\vee})\) and of \(A_{k}^{\vee}\).
Our first step is the following useful lemma about the automaton \(\mathscr{A}(A_{k}^{\vee})\) constructed as above from a finite order element \(A\in\mathcal{H}_{n}\). In essence, it says that if two states \([\delta]\) and \([\gamma]\) are distinct in \(\mathscr{A}(A_{k}^{\vee})\) but their two transition functions are the same, then by following the cycles of the level \(k\) dual (by iteratively acting by \(A\)), we will eventually get to a pair of states which have different output letters in the level \(k\) dual, and at that pair of locations, the states of \(\mathscr{A}(A_{k}^{\vee})\) will still transition the same way, but the output functions of \(H(\mathscr{A}(A_{k}^{\vee}),\phi_{A})\) at these states will disagree at the first letter.
**Lemma 3.7**.: _Let \(A\in\mathcal{H}_{n}\) be finite order. Let \(\gamma,\delta\in X_{n}^{k}\) be such that the states \([\gamma],[\delta]\) of \(\mathscr{A}(A_{k}^{\vee})\) are distinct. Suppose moreover that the maps \(\pi_{\mathscr{A}(A_{k}^{\vee})}(\cdot,[\gamma])\) and \(\pi_{\mathscr{A}(A_{k}^{\vee})}(\cdot,[\delta])\) coincide. Then there is a natural \(i\) with \(0\leq i<o(A)\) and \(x,y,y^{\prime}\in X_{n}\) such that \(y\neq y^{\prime}\), \(\pi_{\mathscr{A}(A_{k}^{\vee})}(\cdot,[\gamma]A^{i})=\pi_{\mathscr{A}(A_{k}^{ \vee})}(\cdot,[\delta]A^{i})\) but \(A\) maps the edges_
\[([\gamma]A^{i},x,\pi_{\mathscr{A}(A_{k}^{\vee})}(x,[\gamma]A^{i}))\text{ and }([\delta]A^{i},x,\pi_{\mathscr{A}(A_{k}^{\vee})}(x,[\delta]A^{i}))\]
_respectively to the edges_
\[([\gamma]A^{i+1},y,\pi_{\mathscr{A}(A_{k}^{\vee})}(y,[\gamma]A^{i+1}))\text{ and }([\delta]A^{i+1},y^{\prime},\pi_{\mathscr{A}(A_{k}^{\vee})}(y^{\prime},[ \delta]A^{i+1})).\]
Proof.: Let \(w=W([\gamma])\) and \(v=W([\delta])\). Since \([\gamma]\neq[\delta]\), we may find words \(u,w_{2},v_{2}\in Q_{A}^{*}\) and letters \(t\neq t^{\prime}\in Q_{A}\) such that \(w=utw_{2}\) and \(v=ut^{\prime}v_{2}\). Set \(i-1:=|u|\). We note that for any \(j\in\mathbb{N}\) with \(j\leq i-1\), a straightforward induction argument shows that the edges \(([\gamma],a,\pi_{\mathscr{A}(A_{k}^{\vee})}(a,[\gamma]))\) and \(([\delta],a,\pi_{\mathscr{A}(A_{k}^{\vee})}(a,[\delta]))\) map respectively under \(A^{j}\) to the edges \(([\gamma]A^{j},b,\pi_{\mathscr{A}(A_{k}^{\vee})}(a,[\gamma])A^{j})\) and \(([\delta]A^{j},b,\pi_{\mathscr{A}(A_{k}^{\vee})}(a,[\delta])A^{j})\), where \(b=\lambda_{A^{j}}(a,u_{[1,j]})\) (if \(j=0\), take \(b=a\)). In particular it follows that \(\pi_{A}(\cdot,t)=\pi_{A}(\cdot,t^{\prime})\) and so, since \(t\neq t^{\prime}\), there is an \(a\in X_{n}\) such that \(y:=\lambda_{A}(a,t)\neq\lambda_{A}(a,t^{\prime})=y^{\prime}\). Let \(x\in X_{n}\) be such that \(\lambda_{A^{i-1}}(x,u)=a\). Then it follows that the edges
\[([\gamma]A^{i},x,\pi_{\mathscr{A}(A_{k}^{\vee})}(x,[\gamma]A^{i}))\text{ and }([\delta]A^{i},x,\pi_{\mathscr{A}(A_{k}^{\vee})}(x,[\delta]A^{i}))\]
are mapped respectively under \(A\), to the edges
\[([\gamma]A^{i+1},y,\pi_{\mathscr{A}(A_{k}^{\vee})}(y,[\gamma]A^{i+1}))\text{ and }([\delta]A^{i+1},y^{\prime},\pi_{\mathscr{A}(A_{k}^{\vee})}(y^{\prime},[\delta]A^{i+1 })).\]
Recall from Subsection 2.7 that if \(A\in\mathcal{H}_{n}\) and \(B\) is an automaton such that there is a digraph automorphism \(\phi:G_{B}\to G_{B}\) for which \(A\) and \(H(B,\phi)\) represent the same transformation, then we say \(A\)_is induced from \((B,\phi)\)_.
Let \(A\in\mathcal{H}_{n}\). We say a strongly synchronizing automaton \(B\)_is an automaton supporting \(A\)_ if there is a digraph automorphism \(\phi\) of the digraph \(G_{B}\), with \(A\) induced from \((B,\phi)\). In this situation, if there is no proper folding \(B^{\prime}\) of \(B\) and digraph automorphism \(\phi^{\prime}:G_{B^{\prime}}\to G_{B^{\prime}}\) so that \(A\) is induced from \((B^{\prime},\phi^{\prime})\), then we say \(B\)_is a minimal automaton supporting \(A\)_ (or simply, that \(B\) is minimal).
In the next lemma, we show that there is precisely one minimal automaton (up to isomorphism of automata) supporting a finite order element \(A\) of \(\mathcal{H}_{n}\).
**Lemma 3.8**.: _Let \(A\in\mathcal{H}_{n}\) be an element of finite order and let \(k\in\mathbb{N}\) be minimal such that \(A^{\vee}_{k}\) is the zero of the semigroup generated by the dual. Then (up to isomorphism of automata) \(\mathscr{A}(A^{\vee}_{k})\) is the minimal strongly synchronizing automaton admitting an automorphism \(\phi\) of \(G_{\mathscr{A}(A^{\vee}_{k})}\) so that \(A\) is induced by \((\mathscr{A}(A^{\vee}_{k}),\phi)\). Furthermore, \(\phi\) is the automorphism \(\phi_{A}\) of Theorem 3.5._
Proof.: Let \(A\in\mathcal{H}_{n}\) be of finite order \(o(A)\). We note that by results in [15, 4], \(k\) is minimal such that all of the elements \(A,A^{2},\ldots,A^{o(A)-1}\) are strongly synchronizing at level \(k\) (\(A^{i}\) being the product in \(\mathcal{H}_{n}\) of \(A\) with itself \(i\) times).
It follows from Theorem 3.5 that \(\mathscr{A}(A^{\vee}_{k})\) is an automaton supporting \(A\) and indeed that \((\mathscr{A}(A^{\vee}_{k}),\phi_{A})\) induces \(A\). We argue below that \(\mathscr{A}(A^{\vee}_{k})\) is a minimal such automaton, and further, that any minimal automaton supporting \(A\) is isomorphic to a folding of \(\mathscr{A}(A^{\vee}_{k})\), and hence, must actually be \(\mathscr{A}(A^{\vee}_{k})\) up to isomorphism.
Now suppose that there is another automaton \(B\), such that \(A\) acts as an automorphism \(\psi_{A}\) of the underlying digraph of \(B\) so that \(H(B,\psi_{A})\) has minimal representative \(A\). Additionally suppose that \(B\) is a minimal strongly synchronizing automaton on which \(A\) acts as an automorphism. We note that the minimal synchronizing level \(l\) of \(B\) is greater than or equal to \(k\), since \(H(B,\psi^{i}_{A})\) is strongly synchronizing at level \(l\) and has minimal representative \(A^{i}\).
Suppose for a contradiction that \(B\neq\mathscr{A}(A^{\vee}_{k})\). There are two cases.
In the first case, for every state \(q\in Q_{B}\) there is a state \(p\) of \(\mathscr{A}(A^{\vee}_{k})\) such that the set \(W(q,j)\) of words of length \(j\) which force \(q\) is contained in the set \(W(p,j)\) of words of length \(j\) which force the state \(p\) of \(\mathscr{A}(A^{\vee}_{k})\). In this case, one observes that \(\mathscr{A}(A^{\vee}_{k})\) is a folding of \(B\), contradicting the minimality of \(B\).
Thus we must be in the negation of the first case. That is, we assume that there is a pair of words \(\gamma,\delta\in X^{k}_{n}\) such that the state of \(B\) forced by \(\gamma\) is the same as the state of \(B\) forced by \(\delta\), but \(\gamma\) and \(\delta\) force different states of \(\mathscr{A}(A^{\vee}_{k})\). We may further assume that the states \([\gamma],[\delta]\) of \(\mathscr{A}(A^{\vee}_{k})\) also satisfy \(\pi_{\mathscr{A}(A^{\vee}_{k})}(\cdot,[\gamma])=\pi_{\mathscr{A}(A^{\vee}_{k})}(\cdot,[\delta])\). This is because if \(\pi_{\mathscr{A}(A^{\vee}_{k})}(\cdot,[\gamma])\neq\pi_{\mathscr{A}(A^{\vee}_{k})}(\cdot,[\delta])\), then we may find a word \(\nu\in X^{+}_{n}\) such that the distinct states \([\gamma^{\prime}]:=\pi_{\mathscr{A}(A^{\vee}_{k})}(\nu,[\gamma])\) and \([\delta^{\prime}]:=\pi_{\mathscr{A}(A^{\vee}_{k})}(\nu,[\delta])\) satisfy \(\pi_{\mathscr{A}(A^{\vee}_{k})}(\cdot,[\gamma^{\prime}])=\pi_{\mathscr{A}(A^{\vee}_{k})}(\cdot,[\delta^{\prime}])\). Thus \(\gamma\nu\) and \(\delta\nu\) force the same state of \(B\) but force, respectively, the states \([\gamma^{\prime}]\) and \([\delta^{\prime}]\) of \(\mathscr{A}(A^{\vee}_{k})\). We may then replace \(\gamma,\delta\) with \(\gamma^{\prime},\delta^{\prime}\).
Let \(z_{1}\) be the state of \(B\) forced by \(\gamma\) and \(\delta\) and let \(z_{1},z_{2},\ldots,z_{o(A)}\) be the orbit of \(z_{1}\) under the action of \(A\). As \(H(B,\psi_{A})=H(\mathscr{A}(A^{\vee}_{k}),\phi_{A})\), it must be the case that if \(a,b\in X_{n}\) are such that the edge \(([\gamma],a,[\Gamma])\) maps to \((([\gamma])A,b,([\Gamma])A)\), then the edge \(([\delta],a,[\Gamma])\) also maps to \((([\delta])A,b,([\Gamma])A)\). Thus we conclude that \(\pi_{\mathscr{A}(A^{\vee}_{k})}(\cdot,([\gamma])A)=\pi_{\mathscr{A}(A^{\vee}_{k})}(\cdot,([\delta])A)\). Now observe that since \(\gamma\) and \(\delta\) are representatives of \([\gamma]\) and \([\delta]\), respectively, and since for any \(q\in Q_{A}\), the state of \(B\) forced by \(\lambda_{A}(\gamma,q)\) and the state of \(B\) forced by \(\lambda_{A}(\delta,q)\) are both equal to \(z_{2}\), it follows that there are representatives of \(([\gamma])A\) and \(([\delta])A\) respectively such that the states of \(B\) forced by these representatives are \(z_{2}\). We may thus repeat the argument in the \(z_{1}\) case. By induction we therefore see that for any \(1\leq i\leq o(A)\), the points \(([\gamma])A^{i}\) and \(([\delta])A^{i}\) satisfy that \(\pi_{\mathscr{A}(A^{\vee}_{k})}(\cdot,([\gamma])A^{i})=\pi_{\mathscr{A}(A^{\vee}_{k})}(\cdot,([\delta])A^{i})\), and whenever there are \(a,b\in X_{n}\) such that \((([\gamma])A^{i},a,\nu)\) is an edge mapping under \(A\) to the edge \((([\gamma])A^{i+1},b,(\nu)A)\), then the
edge \((([\delta])A^{i},a,\nu)\) also maps under \(A\) to \((([\delta])A^{i+1},b,(\nu)A)\). This contradicts Lemma 3.7. \(\Box\)
## 4 Water for the witch - shrinking conjugacy class representatives
Suppose we have a finite order element \(A\in{\cal H}_{n}\), induced by \((B,\phi_{A})\) for some strongly synchronizing \(B\) with a minimal number of states. Under certain conditions we may employ a two-step process to find a new element \(C\in{\cal H}_{n}\), where \(C\) is conjugate to \(A\), and \(C\) is induced by \((D,\psi)\) for some strongly synchronizing \(D\) with \(D\) having fewer states than \(B\). In what follows we describe this process of finding "smaller" conjugacy class representatives of \(A\).
The first (and main) step in this process is to employ "relabelling." This is a conjugacy which, for a pair of states that would be identified in the collapse sequence of the domain automaton, relabels inputs and outputs on edges from this pair of states, with the goal of making this pair of states represent the same local map. If this is possible, then we can collapse the carrying transducer to a smaller one than we started with.
The conditions for a successful relabelling include that the orbits of these states have the same lengths, and that for any two corresponding outgoing edges, the orbit lengths of these edges are also the same. In the case where some of these orbit lengths differ, then in certain circumstances we can employ the second step of the overall process. This step "fluffs up" the carrying automaton by executing some splittings, creating what we call _shadow states_, after which we can employ relabelling to the result. In either case, after a relabelling, the whole resultant transducer can be minimised so as to be carried by a transducer with strictly fewer states than \(T_{A}\).
In the case that \(A\) is conjugate to an \(n\)-cycle, this process will eventually result in a single state transducer representing an \(n\)-cycle.
### Relabellings and automata sequences
**Definition 4.1**.: Let \(A\) be a strongly synchronizing automaton and \(A=A_{0},A_{1},\ldots,A_{m}\) be a collapse chain of \(A\). Let \(0\leq k\leq m\) and \(\phi_{k}\) be a vertex fixing automorphism of \(G_{A_{k}}\). Define \(A^{\prime}\) to be the automaton with \(Q_{A^{\prime}}=Q_{A}\) and transition function defined as follows: for \(p\in Q_{A}\), set \(\pi_{A^{\prime}}(x^{\prime},p)=q\) if and only if there is an \(x\in X_{n}\) such that \(\pi_{A}(x,p)=q\) with \(\lambda_{H(A_{k},\phi_{k})}(x,[p])=x^{\prime}\). We call \(A^{\prime}\) the _relabelling of \(A\) by \((A_{k},\phi_{k})\)_ or _the relabelling of \(A\) by (the transducer) \(H(A_{k},\phi_{k})\)_.
Note that if we relabel \(A\) by \((A_{k},\phi_{k})\), then the resulting automaton \(A^{\prime}\) is _strongly isomorphic_ to \(A\) in the sense that there is a natural digraph isomorphism from the underlying digraph of \(A^{\prime}\) to the underlying digraph of \(A\) that fixes states and which maps the relabelled edges of \(A^{\prime}\) to the original edges in \(A\). More precisely, if \((p,x,q)\) is an edge of \(G_{A}\) and \(\lambda_{H(A_{k},\phi_{k})}(x,[p])=x^{\prime}\), then the natural digraph isomorphism maps the edge \((p,x^{\prime},q)\) of
\(A^{\prime}\) to the edge \((p,x,q)\) of \(A\). The point of view one should have in mind is that we have renamed/relabelled the edges of \(A\) by switching edge labels on edges which are parallel edges in \(A_{k}\). Notice that if we relabel by \((A_{0},\phi_{0})\), then all we do is switch labels on parallel edges in \(A\); thus the resulting underlying digraph would not change, but a "fixed" drawing of it would be relabelled.
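For concreteness, the following Python sketch (ours, with an assumed dictionary encoding of the data) carries out the relabelling of Definition 4.1: `trans` maps pairs \((x,p)\) to \(\pi_{A}(x,p)\), `state_class` sends a state \(p\) of \(A\) to its state \([p]\) of \(A_{k}\), and `lam` records the output function \(\lambda_{H(A_{k},\phi_{k})}\).

```python
# Minimal sketch of Definition 4.1: relabel A by the transducer H(A_k, phi_k).
# The edge (p, x, q) of A becomes the edge (p, lam[(x, [p])], q) of A'.
def relabel(trans, state_class, lam):
    new_trans = {}
    for (x, p), q in trans.items():
        x_new = lam[(x, state_class[p])]   # new letter on the edge from p to q
        new_trans[(x_new, p)] = q          # states and targets are unchanged
    return new_trans
```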
**Lemma 4.2**.: _Let \(A\) be a strongly synchronizing automaton and \(A=A_{0},A_{1},\ldots,A_{m}\) be a collapse chain for \(A\). Let \(0\leq k\leq m\) and \(\phi\) be a vertex fixing automorphism of \(G_{A_{k}}\). Let \(A^{\prime}\) be the relabelling of \(A\) by \((A_{k},\phi)\). Then \(A^{\prime}\) has underlying digraph strongly isomorphic to the underlying digraph of \(A\) and \(A_{m}\) remains a folding of \(A^{\prime}\). More specifically, writing \(A^{\prime}=A^{\prime}_{0},A^{\prime}_{1},\ldots,A^{\prime}_{l}\) for the collapse chain from \(A^{\prime}\) to \(A_{m}\), then \(l\leq m\) and two states \(u,v\) of \(A\) belong to the same state of \(A_{i}\) if and only if for some \(i^{\prime}\leq i\) we have \(u\) and \(v\) belong to the same state of \(A^{\prime}_{i^{\prime}}\)._
Proof.: We may consider \(A\) as a non-minimal synchronizing transducer where each state induces the identity transformation of the set \(X_{n}\). Consider the core of the product \(A*H(A_{k},\phi)\). Let \(p\in Q_{A}\), and \(\gamma\in X_{n}^{+}\) be such that the state of \(A\) forced by \(\gamma\) is \(p\). Then, by definition of \(A_{k}\), the state of \(A_{k}\) forced by \(\gamma\) is the state \([p]\) containing \(p\). Thus the set of states of \(\mathrm{core}(A*H(A_{k},\phi))\) is the set \(\{(p,[p])\mid p\in Q_{A}\}\). Let \(x\in X_{n}\) and \(p,q\in Q_{A}\) such that \(\pi_{A}(x,p)=q\). Then we have, \(\pi_{A}(x,(p,[p]))=(q,[q])\) and \(\lambda_{A}(x,(p,[p]))=\lambda_{H(A_{k},\phi)}(x,[p])\). Thus setting \(A^{\prime}\) to be the output automaton of \(\mathrm{core}(A*H(A_{k},\phi))\) we see that \(A^{\prime}\) is the relabelling of \(A\) by \((A_{k},\phi)\). From this it follows that the underlying digraph of \(A^{\prime}\) is strongly isomorphic to the underlying digraph of \(A\).
Let \(u,v\) be two states of \(A\) which belong to the same state of \(A_{m}\) and which transition identically on all words of length \(j\) and suppose \(j\) is minimal for which this happens. Let \(p\in Q_{A}\) be an arbitrary state and let \(W(p)\subseteq X_{n}^{j}\) consist of those words \(\gamma\) such that \(\pi_{A}(\gamma,u)=\pi_{A}(\gamma,v)=p\). We break into cases based on whether or not \(k\geq j\) or \(k<j\).
First suppose that \(k\geq j\). This means that in \(A_{k}\), the states \([u]\) and \([v]\) are equal. Thus, \(\lambda_{H(A_{k},\phi)}(\gamma,[u])=\lambda_{H(A_{k},\phi)}(\gamma,[v])\) for any \(\gamma\in X_{n}^{*}\). Therefore in \(A^{\prime}\) we see that the set of words \(\nu\in X_{n}^{j}\) for which \(\pi_{A^{\prime}}(\nu,u)=\pi_{A^{\prime}}(\nu,v)=p\) is precisely the set \(\{\lambda_{H(A_{k},\phi)}(\gamma,[u])\mid\gamma\in W(p)\}\).
Now suppose that \(k<j\). This means that the states \([u]\) and \([v]\) are distinct states of \(A_{k}\) such that \(\pi_{A_{k}}(\cdot,[u])\) and \(\pi_{A_{k}}(\cdot,[v])\) coincide on \(X_{n}^{j-k}\). Let \(\gamma\in W(p)\) be arbitrary. Set \(\gamma_{1}\) to be the length \(j-k\) prefix of \(\gamma\) and set \(\gamma_{2}\in X_{n}^{k}\) such that \(\gamma_{1}\gamma_{2}=\gamma\). Set \([r]=\pi_{A_{k}}(\gamma_{1},[u])=\pi_{A_{k}}(\gamma_{1},[v])\) and set \(\kappa\in X_{n}^{k}\) such that \(\lambda_{H(A_{k},\phi)}(\kappa,[r])=\gamma_{2}\). For \(t\in\{u,v\}\), set \(\delta_{t}\in X_{n}^{j-k}\) to be such that \(\lambda_{H(A_{k},\phi)}(\delta_{t},[t])=\gamma_{1}\). Then since \(\phi\) is a vertex fixing automorphism of \(A_{k}\) we notice that \(\pi_{A}(\delta_{u},u)\), \(\pi_{A}(\delta_{v},v)\), \(\{\pi_{A}(\gamma_{1},t)\mid t\in\{u,v\}\}\) all belong to the same state of \(A_{k}\). This means that \(\pi_{A}(\kappa,\pi_{A}(\delta_{u},u))=\pi_{A}(\kappa,\pi_{A}(\delta_{v},v))=s\). (Note that \([s]=[p]\) in \(A_{k}\) by the vertex fixing property of \(\phi\).) Therefore \(\pi_{A*H(A_{k},\phi)}(\delta_{u}\kappa,(u,[u]))=(s,[p])=\pi_{A*H(A_{k},\phi)}(\delta_{v}\kappa,(v,[v]))\). Since \(\lambda_{A*H(A_{k},\phi)}(\delta_{u}\kappa,(u,[u]))=\gamma=\lambda_{A*H(A_{k},\phi)}(\delta_{v}\kappa,(v,[v]))\), we see that in \(A^{\prime}\), \(\pi_{A^{\prime}}(\gamma,u)=\pi_{A^{\prime}}(\gamma,v)\).
Therefore, in \(A^{\prime}\), \(\pi_{A^{\prime}}(\cdot,u)\) and \(\pi_{A^{\prime}}(\cdot,v)\) coincide on the set \(W(p)\). Since \(p\) was chosen arbitrarily and \(\sqcup_{p\in Q_{A}}W(p)=X_{n}^{j}\) we conclude that \(\pi_{A^{\prime}}(\cdot,u)\) and \(\pi_{A^{\prime}}(\cdot,v)\) coincide on the set \(X_{n}^{j}\).
The result now follows by Theorem 2.8.
Now suppose that there are states \(u,v\) of \(A\) which belong to the same state \(A^{\prime}_{i^{\prime}}\) for some \(0\leq i^{\prime}\leq l\). Then since \(u,v\) belong to the same state of \(A_{m}\), there is an \(i\) between \(0\) and \(m\) such that \(u\) and \(v\) belong to the same state of \(A_{i}\). The preceding paragraph and Theorem 2.8 show that \(i^{\prime}\) must be less than or equal to \(i\).
Let \(A\) be a strongly synchronizing automaton and \(A^{\prime}\) be the relabelling of \(A\) by \((A_{k},\phi_{k})\). Set \(\iota:G_{A}\to G_{A^{\prime}}\) to be the natural digraph isomorphism. If \(\varphi\) is an automorphism of the underlying digraph \(G_{A}\) of \(A\), then we will mean by the _induced automorphism_\(\varphi^{\prime}\) of \(G_{A^{\prime}}\) precisely the map \(\iota^{-1}\varphi\iota\).
**Lemma 4.3**.: _Let \(A\in\mathcal{H}_{n}\) be an element of finite order and let \(B\) be a strongly synchronizing automaton such that there is an automorphism \(\phi_{A}\) of the underlying digraph of \(B\) with \(A\) the minimal representative of \(H(B,\phi_{A})\). Let \(B_{k}\) be an element of the synchronizing sequence of \(B\) and \(\psi\) a vertex fixing automorphism of \(B_{k}\). Let \(B^{\prime}\) be the relabelling of \(B\) according to \((B_{k},\psi)\) and \(\varphi\) be the induced isomorphism from the underlying digraph of \(B\) to the underlying digraph of \(B^{\prime}\). Set \(P\) to be the minimal representative of the transducer \(H(B,B^{\prime},\varphi)\). Then \(P^{-1}AP\) is the minimal representative of \(H(B^{\prime},\phi_{A}^{\varphi})\)._
Proof.: This is a straightforward application of the definitions.
In the situation of Lemma 4.3 we refer to the resulting transducer \(P^{-1}AP\) as the _transducer induced from \(A\) by the relabelling \(B\mapsto B^{\prime}\)_.
**Lemma 4.4**.: _Let \(A\) be a strongly synchronizing automaton, and let \(s,t\) be distinct states of \(A\). Let \((A_{i})_{1\leq i\leq m}\) be a collapse chain of \(A\) such that \(s,t\) belong to the same state of \(A_{m}\). Let \(1\leq k<m\) be minimal such that \(t\) belongs to the state \([s]\) of the automaton \(A_{k+1}\). Then for all \(x,x^{\prime}\in X_{n}\) such that \(\pi_{A}(x,s)=\pi_{A}(x^{\prime},t)\) and \([v]\in\{[s],[t]\}\), states of \(A_{k}\), we have \(\pi_{A_{k}}(x,[v])=\pi_{A_{k}}(x^{\prime},[v])\)._
Proof.: By minimality of \(k\), it must be the case that the states \([s]\) and \([t]\) of \(A_{k}\) are distinct and the equality \(\pi_{A_{k}}(\cdot,[s])=\pi_{A_{k}}(\cdot,[t])\) holds.
Let \(x,x^{\prime}\in X_{n}\) and \(u\in Q_{A}\) be such that \(\pi_{A}(x,s)=\pi_{A}(x^{\prime},t)=u\). Then by definition of \(A_{k}\), \(\pi_{A_{k}}(x,[s])=[u]=\pi_{A_{k}}(x^{\prime},[t])\). However, the equality \(\pi_{A_{k}}(\cdot,[s])=\pi_{A_{k}}(\cdot,[t])\), now implies that \(\pi_{A_{k}}(x^{\prime},[s])=[u]=\pi_{A_{k}}(x,[t])\) also.
#### 4.1.1 Constructing discriminant permutations \(\operatorname{disc}(s,t,Q)\)
Let \(A\) be an automaton and let \(s,t\in Q_{A}\). Set the notation:
\[\operatorname{E}_{A}(s,t) :=\{(s,x,t)\in\operatorname{E}_{A}\},\quad\text{and}\] \[\operatorname{Letters}_{A}(s,t) :=\{x\in X_{n}\mid(s,x,t)\in\operatorname{E}_{A}(s,t)\}.\]
We may leave out the explicit mention of the automaton \(A\) when it is clear from context, writing simply \(\operatorname{E}(s,t)\) and \(\operatorname{Letters}(s,t)\) for these sets in this case.
Let \(Q\subseteq Q_{A}\) and \(s\in Q_{A}\). Set the notation
\[X_{s,Q}:=\bigsqcup_{p\in Q}\operatorname{Letters}(s,p).\]
Now, suppose \(s,t\in Q_{A}\) and suppose further there is a subset \(Q\subseteq Q_{A}\) so that
1. \(X_{s,Q}=X_{t,Q}\), and
2. for all \(p\in Q\) we have \(\big{|}\operatorname{Letters}(s,p)\big{|}=\big{|}\operatorname{Letters}(t,p) \big{|}\).
Then to describe this situation we say \(s\)_and \(t\) distribute similarly over \(Q\)_. (Note in passing that for some choices of \(s\) and \(t\) the only possible such set \(Q\) may be empty.) For any states \(s\) and \(t\) and set \(Q\subset Q_{A}\) so that \(s\) and \(t\) distribute similarly over \(Q\), we denote by \(X_{Q}\) the set \(X_{s,Q}=X_{t,Q}\). We call \(X_{Q}\subseteq X_{n}\)_the agreement alphabet (of \(s\) and \(t\) on \(Q\))_ noting that if \(Q=Q_{A}\), then \(X_{Q}=X_{n}\).
Define a bijection \(\operatorname{disc}(s,t,Q):X_{Q}\to X_{Q}\) as follows.
First, let \(p_{1},\dots,p_{r}\in Q\) be a maximal sequence of distinct states such that for \(1\leq i\leq r\) there is an \(x\in X_{Q}\) with \(\pi_{A}(x,s)=p_{i}\). Observe that the sets
\[\{\operatorname{Letters}(s,p_{i})\mid 1\leq i\leq r\}\]
and
\[\{\operatorname{Letters}(t,p_{i})\mid 1\leq i\leq r\}\]
form partitions of \(X_{Q}\), where corresponding parts (those with the same index \(i\)) have equal size.
Now, for \(1\leq i\leq r\), set \(\operatorname{disc}(s,t,Q)\) to act as the identity on \(\operatorname{Letters}(s,p_{i})\cap\operatorname{Letters}(t,p_{i})\). Set
\[\operatorname{Letters}(s,p_{i})^{\prime}:=\operatorname{Letters}(s,p_{i}) \backslash(\operatorname{Letters}(s,p_{i})\cap\operatorname{Letters}(t,p_{i}))\]
and
\[\operatorname{Letters}(t,p_{i})^{\prime}:=\operatorname{Letters}(t,p_{i}) \backslash(\operatorname{Letters}(s,p_{i})\cap\operatorname{Letters}(t,p_{i})).\]
We note that \(|\operatorname{Letters}(s,p_{i})^{\prime}|=|\operatorname{Letters}(t,p_{i})^{ \prime}|\) and indeed that
\[Y_{s,t}:=\bigcup_{1\leq i\leq r}\operatorname{Letters}(s,p_{i})^{\prime}= \bigcup_{1\leq i\leq r}\operatorname{Letters}(t,p_{i})^{\prime}.\]
Order the elements of \(\operatorname{Letters}(s,p_{i})^{\prime}\) and \(\operatorname{Letters}(t,p_{i})^{\prime}\) with the order induced from \(X_{n}\). For \(1\leq i\leq r\) and \(x\in\operatorname{Letters}(s,p_{i})^{\prime}\) we write \(x^{\prime}\) for the corresponding element of \(\operatorname{Letters}(t,p_{i})^{\prime}\), that is, in the ordering of \(\operatorname{Letters}(s,p_{i})^{\prime}\) and \(\operatorname{Letters}(t,p_{i})^{\prime}\) induced from \(X_{n}\), \(x\) and \(x^{\prime}\) have the same index.
Using the definitions and facts above we extend the definition of \(\operatorname{disc}(s,t,Q)\) over the set \(Y_{s,t}\) by the rule \(x\mapsto x^{\prime}\). One easily checks that the resulting function
\[\operatorname{disc}(s,t,Q):X_{Q}\to X_{Q}\]
is a well-defined bijection. Further, observe that for \(x_{0}\in Y_{s,t}\) the function \(\operatorname{disc}(s,t,Q)\) contains a cycle \((x_{0}\ x_{1}\ x_{2}\ \dots\ x_{k-1})\) in its disjoint cycle decomposition, where for all \(i\) we have \(x_{i+1}=x_{i}^{\prime}\) (indices taken mod \(k\)). Recall as well that \(\operatorname{disc}(s,t,Q)\) acts as the identity over the set \(X_{Q}\backslash Y_{s,t}\).
For \(s\) and \(t\) satisfying points (1) and (2) above for some set \(Q\) we call \(\operatorname{disc}(s,t,Q)\)_the discriminant of \(s\) and \(t\)_; it is a permutation that encodes the difference in transitions between \(s\) and \(t\) amongst the set of states \(Q\). In the case that \(Q=Q_{A}\), we will write \(\operatorname{disc}(s,t)\) for the bijection \(\operatorname{disc}(s,t,Q_{A})\). As with the notation \(\operatorname{Letters}(p,q)\), we often run into situations where we compute discriminant permutations in distinct automata sharing the same state set; in such cases we use the notation \(\operatorname{disc}_{A}(s,t,Q)\) and \(\operatorname{disc}_{A}(s,t)\) to emphasise the automaton in which the permutation is computed.
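The construction of the discriminant is straightforward to mechanise. The Python sketch below (ours; it assumes the transitions are stored as a dictionary `trans` from pairs \((x,u)\) to states, and that \(s\) and \(t\) distribute similarly over \(Q\)) computes \(\operatorname{disc}(s,t,Q)\) as a dictionary on the agreement alphabet \(X_{Q}\).

```python
# Minimal sketch of disc(s, t, Q): identity on Letters(s,p) ∩ Letters(t,p), and the
# remaining letters matched up, target by target, in the order induced from X_n.
def disc(trans, alphabet, s, t, Q):
    def letters(u, p):
        return [x for x in alphabet if trans.get((x, u)) == p]
    perm = {}
    for p in Q:
        Ls, Lt = letters(s, p), letters(t, p)
        common = set(Ls) & set(Lt)
        for x in common:
            perm[x] = x
        rest_s = [x for x in Ls if x not in common]
        rest_t = [x for x in Lt if x not in common]
        for x, x_prime in zip(rest_s, rest_t):
            perm[x] = x_prime
    return perm
```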
**Lemma 4.5**.: _Let \(A\) be a strongly synchronizing automaton, and let \(s,t\) be distinct states of \(A\). Let \(Q\subseteq Q_{A}\) be such that \(s\) and \(t\) distribute similarly over \(Q\) with agreement alphabet \(X_{Q}\). Let \((A_{i})_{1\leq i\leq m}\) be a collapse chain of \(A\) such that \(s,t\) belong to the same state of \(A_{m}\). Let \(1\leq k<m\) be minimal such that \(\pi_{A_{k}}(\cdot,[t])\) and \(\pi_{A_{k}}(\cdot,[s])\) are equal on \(X_{Q}\). Then for \(x,y\in X_{Q}\) which belong to the same disjoint cycle of \(\operatorname{disc}(s,t,Q)\),_
\[\pi_{A_{k}}(x,[s])=\pi_{A_{k}}(y,[s])=\pi_{A_{k}}(y,[t])=\pi_{A_{k}}(x,[t]).\]
Proof.: By assumption \(\pi_{A_{k}}(\cdot,[s])=\pi_{A_{k}}(\cdot,[t])\).
An easy induction argument using the definition of \(\operatorname{disc}(s,t,Q)\) now shows that for any \(x,y\in X_{n}\) such that \(y\) belongs to the orbit of \(x\) under the action of \(\operatorname{disc}(s,t,Q)\),
\[\pi_{A_{k}}(x,[s])=\pi_{A_{k}}(y,[s])=\pi_{A_{k}}(x,[t])=\pi_{A_{k}}(y,[t]).\]
This follows since for any \(x\in X_{n}\), \(\pi_{A}(x,s)=\pi_{A}((x)\operatorname{disc}(s,t,Q),t)\).
#### 4.1.2 Discriminant permutations and amalgamation sequences
Let \(B\) be an automaton and \(G:=G_{B}\) be the underlying digraph of \(B\). Define a sequence \(G:=G_{0},G_{1},\dots\) as follows. Assuming \(G_{i}\) is defined, \(G_{i+1}\) is obtained from \(G_{i}\) in the following manner. Let \(\sim\) be the equivalence relation on the vertices \(Q_{G_{i}}\) of \(G_{i}\) that relates two vertices \(p,q\) precisely when for every vertex \(t\in Q_{G_{i}}\) the number of edges from \(q\) to \(t\) is precisely the number of edges from \(p\) to \(t\). If \(p\in Q_{G_{i}}\) write \([p]_{i+1}\) for the equivalence class of \(p\) under the relation \(\sim\). Set \(Q_{G_{i+1}}=\{[p]_{i+1}\mid p\in Q_{G_{i}}\}\). Now suppose \(p,q\in Q_{G_{i}}\) and enumerate those elements of \([q]_{i+1}\) which have an incoming edge from a vertex in \([p]_{i+1}\) in some order as \(q_{1},q_{2},\dots,q_{r}\). For \(1\leq j\leq r\), let \(k_{j}\) be the number of edges from \(p\) to \(q_{j}\) and set \(ec(i+1,p,q):=\sum_{1\leq j\leq r}k_{j}\). Set \(G_{i+1}\) to be the directed graph with vertices \(Q_{G_{i+1}}\) and with \(ec(i+1,p,q)\) many edges from \([p]_{i+1}\) to \([q]_{i+1}\) for each \([p]_{i+1}\), \([q]_{i+1}\in Q_{G_{i+1}}\).
We refer to the resulting sequence \(G_{0}\), \(G_{1}\),..., as defined above as the _amalgamation sequence of \(G\)_ (see [17]). Note that for each natural \(i\) the construction above induces an identification of the states of \(G_{i}\) with a partition of the states of \(B\). It follows that after finitely many steps, the amalgamation sequence stabilises to a fixed digraph.
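One step of the amalgamation sequence can be computed as in the following Python sketch (ours; `edge_count` is an assumed encoding sending a pair \((p,q)\) to the number of edges from \(p\) to \(q\) in \(G_{i}\)).

```python
# Minimal sketch of one amalgamation step G_i -> G_{i+1}: merge vertices with the
# same number of edges into every vertex, and count edges between the classes
# from a single representative of the source class.
def amalgamation_step(vertices, edge_count):
    def profile(p):
        return tuple(edge_count.get((p, t), 0) for t in vertices)
    grouped = {}
    for p in vertices:
        grouped.setdefault(profile(p), []).append(p)
    blocks = list(grouped.values())
    new_counts = {}
    for i, blk in enumerate(blocks):
        rep = blk[0]  # any representative gives the same counts, by definition of ~
        for j, blk2 in enumerate(blocks):
            c = sum(edge_count.get((rep, q), 0) for q in blk2)
            if c:
                new_counts[(i, j)] = c
    return blocks, new_counts
```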
The lemma below says that for a given automaton \(B\), there is a relabelling of \(B\) such that the synchronizing sequence coincides with the amalgamation sequence.
**Lemma 4.6**.: _Let \(B\) be an automaton with underlying digraph \(G\) and synchronizing sequence \(B=B_{0},B_{1},\ldots\). Let \(G=G_{0},G_{1},\ldots\) be the amalgamation sequence of \(G\). Then there is a relabelling \(D\) of \(B\) such that if \(D=D_{0},D_{1},\ldots\) is the synchronizing sequence of \(D\), the underlying digraph of \(D_{i}\) is \(G_{i}\); in particular, the partition of the state set of \(B\) induced by \(D_{i}\) is the same partition induced by \(G_{i}\)._
Proof.: Since \(B\) is strongly synchronizing there is a minimal \(l\in\mathbb{N}\) for which \(G_{l}=G_{l+1}\) and both have a single vertex with \(n\) loops. We proceed by induction on the amalgamation sequence.
We begin with the base case. Let \(s,t\in Q_{B}\) be distinct such that \(s\) and \(t\) belong to the same state of \(G_{1}\). This means that \(s\) and \(t\) distribute similarly over \(Q_{B}\). Suppose that \(\operatorname{disc}(s,t)\) is not trivial.
Let \(k\in\mathbb{N}\) be minimal such that \(s\) and \(t\) belong to the same state of \(B_{k+1}\). Note that since \(\operatorname{disc}(s,t)\) is not trivial, then \(k\geq 1\). By Lemma 4.5, for any \(x,y\) which belong to the same orbit under \(\operatorname{disc}(s,t)\),
\[\pi_{B_{k}}(x,[s])=\pi_{B_{k}}(y,[s])=\pi_{B_{k}}(x,[t])=\pi_{B_{k}}(y,[t]).\]
Let \(\lambda_{B_{k}}\) be defined such that \(\lambda_{B_{k}}(\cdot,[q]):X_{n}\to X_{n}\) is trivial whenever \([q]\) is not equal to \([t]\). We set \(\lambda_{B_{k}}(\cdot,[t])=\operatorname{disc}(s,t)^{-1}\). We note that the transducer \(B_{k}\) is induced by a vertex fixing automorphism of \(B_{k}\). Furthermore, for any pair \((u,v)\neq(s,t)\) such that \(u,v\) belong to the same state of \(G_{1}\) and \(\operatorname{disc}(u,v)\) is trivial, \([u]=[v]\) in \(B_{k}\).
Let \(E\) be the relabelling of \(B\) by the transducer \(B_{k}\). Note that \(Q_{E}=Q_{B}\) and \(G\) remains the underlying digraph of \(E\). It therefore follows that for any pair \(u,v\) in \(B\) which distribute similarly over \(Q_{B}\), \(u,v\) still distribute similarly over \(Q_{B}\) in \(E\). If, moreover, \(\operatorname{disc}_{B}(u,v)\) is trivial, then by the construction of \(\lambda_{B_{k}}\), \(\operatorname{disc}_{E}(u,v)\) remains trivial. Lastly we note that \(s,t\) distribute similarly over \(Q_{B}\) in \(E\), and \(\operatorname{disc}_{E}(s,t)\) is trivial.
Applying an induction argument, there is an automaton \(E_{1}\), a relabelling of \(B\), such that for any pair \(s,t\in Q_{B}\) which belong to the same state of \(G_{1}\), \(\operatorname{disc}_{E_{1}}(s,t)\) is trivial. In particular, such \(s,t\) satisfy \(\pi_{E_{1}}(\cdot,s)=\pi_{E_{1}}(\cdot,t)\).
Now assume by induction that there is a relabelling \(E\) of \(B\) with synchronizing sequence \(E=E_{0},E_{1},\ldots\) possessing the following property: for \(0\leq i\leq k<l\) two states \(s,t\in Q_{B}\) belonging to the same state of \(G_{i}\) belong to the same state of \(E_{i}\).
Let \(s,t\in Q_{B}\) and suppose \(s\) and \(t\) belong to the same state of \(G_{k+1}\) but do not belong to the same state of \(E_{k+1}\). We note that since the underlying digraph of \(E_{k}\) is the same as \(G_{k}\) and they induce the same partition of the state set \(Q_{B}\), then \(s\) and \(t\) belong to distinct states of \(E_{k}\) and so to distinct states of \(G_{k}\). The fact that \(s\) and \(t\) belong to the same state of \(G_{k+1}\) means that \([s]\) and \([t]\) distribute similarly over \(Q_{E_{k}}\) but \(\operatorname{disc}_{E_{k}}([s],[t])\) is not trivial.
Let \(k<j\leq l\) be minimal such that \(s\) and \(t\) belong to the same state of \(E_{j+1}\). By Lemma 4.5 once more, we have the equalities: for any \(x,y\) which belong to the same orbit under \(\operatorname{disc}_{E_{k}}([s],[t])\),
\[\pi_{E_{j}}(x,[s])=\pi_{E_{j}}(y,[s])=\pi_{E_{j}}(x,[t])=\pi_{E_{j}}(y,[t]).\]
Let \(\lambda_{E_{j}}\) be defined such that \(\lambda_{E_{j}}(\cdot,[q]):X_{n}\to X_{n}\) is trivial whenever \([q]\) is not equal to \([t]\). We set \(\lambda_{E_{j}}(\cdot,[t])=\operatorname{disc}_{E_{k}}([s],[t])^{-1}\). We note that the transducer \(E_{j}\) is induced by a vertex fixing automorphism of \(E_{j}\). Furthermore, for any pair \((u,v)\neq(s,t)\) such that \(u,v\) belong to the same state of \(G_{k+1}\) and \(\operatorname{disc}_{E_{k}}([u],[v])\) is trivial, \([u]=[v]\) in \(E_{j}\).
Let \(F\) be the relabelling of \(E\) by the transducer \(E_{j}\). Let \(F=F_{0},F_{1},F_{2},\ldots\) be the synchronizing sequence of \(F\).
Let \(u,v\in Q_{B}\) belong to the same state of \(G_{i}\) for some \(0\leq i\leq k<l\). Then by the inductive assumption and Lemma 4.2, \(u,v\) belong to the same state of \(F_{i}\).
Let \(u,v\in Q_{B}\) belong to the same state of \(G_{k+1}\) and suppose that \(\operatorname{disc}_{E_{k}}([u],[v])\) is trivial. Note that since \(\operatorname{disc}_{E_{k}}([u],[v])\) is trivial and \([u],[v]\) distribute similarly over \(Q_{E_{k}}\), \([u]=[v]\) in \(E_{k+1}\). Therefore, Lemma 4.2 implies that \([u]=[v]\) in \(F_{k+1}\) as well.
Lastly observe that \([s]=[t]\) in \(F_{k+1}\) by construction of \(\lambda_{E_{j}}\) and the fact that states which are identified in \(E_{k}\) remain identified in \(F_{k+1}\).
The result now follows by induction.
### Relabellings along orbits
For the lemma below, we give stronger hypotheses than appear to be required, as per the following observation. Let \(B\) be a strongly synchronizing automaton, \(\phi\) an automorphism of the underlying digraph \(G_{B}\) of \(B\), and \(s\) and \(p\) states of \(B\). Every edge from \(s\) to a state in the orbit of \(p\) (under the action of \(\phi\)) is on an orbit of length \(N\) (when such an edge exists) if and only if every edge from any state in the orbit of \(s\) to a state in the orbit of \(p\) is on an orbit of length \(N\) (when such an edge exists). We state the lemma with the stronger hypotheses below to ease understanding.
**Lemma 4.7**.: _Let \(B\) be a strongly synchronizing automaton and \(\phi\) an automorphism of the underlying digraph \(G_{B}\) of \(B\). Let \(s,p\) be vertices of \(G_{B}\) so that \(\operatorname{Letters}_{B}(s,p)\) is non-empty. Suppose_
* _there is_ \(N\in\mathbb{N}_{1}\) _so that for every edge_ \(e\) _from a vertex in the orbit of_ \(s\) _to a vertex in the orbit of_ \(p\)_, the orbit length of_ \(e\) _is_ \(N\)_, and secondly_
* _there is_ \(r\in\mathbb{N}_{1}\) _so that if_ \((s\phi^{i},y,p\phi^{j})\) _is any edge from the orbit of_ \(s\) _to the orbit of_ \(p\)_, then we have_ \(\operatorname{Letters}(s\phi^{i},p\phi^{j})=\operatorname{Letters}(s\phi^{i+r },p\phi^{j})\)_._
_Then there is a relabelling \(B^{\prime}\) of \(B\) such that the induced automorphism \(\phi^{\prime}\) of \(G_{B^{\prime}}\) satisfies the following: for any \(i,j\in\mathbb{N}\),_
* \(\operatorname{Letters}_{B^{\prime}}(s\phi^{i},p\phi^{j})=\operatorname{Letters }_{B^{\prime}}(s\phi^{i+r},p\phi^{j})\)_, and,_
* _if_ \(x\in\operatorname{Letters}_{B^{\prime}}(s\phi^{i},p\phi^{j})\)_, then the labels of the edges_ \((s\phi^{\prime i},x,p\phi^{\prime j})\phi^{\prime}\) _and_ \((s\phi^{\prime i+r},x,p\phi^{\prime j})\phi^{\prime}\) _are equal._
Proof.: We first set up some notational convenience. Given an edge \((u,x,v)\) of \(B\), with respect to this edge, we shall write \(x\phi^{i}\) for the label of its image \((u,x,v)\phi^{i}\) so that we have
the equality \((u,x,v)\phi^{i}=(u\phi^{i},x\phi^{i},v\phi^{i})\). Note that in general we do not have an induced action of \(\phi\) on \(X_{n}\), but the notation will be well-defined in the context of a base edge \((u,x,v)\) being understood.
Let \(B\), \(\phi\), \(s\), \(p\), \(N\) and \(r\) be as in the hypotheses, and assume \(r\) is minimal. It follows that \(r\) divides the orbit length of \(s\) (by minimality). Write \(mr\) for the orbit length of \(s\). Write \(s_{1},s_{2},\ldots,s_{mr}\) for the orbit of \(s=s_{1}\) (we note that \(mr|N\) and we make this explicit below).
Let \(\widetilde{P}\) be those states \(p^{\prime}\) in the orbit of \(p\) which have \(\operatorname{Letters}(s,p^{\prime})\) non-empty. The set \(\widetilde{P}\) can be partitioned according to the orbits under the action of \(\phi^{mr}\), that is, two elements of \(\widetilde{P}\) belong to the same part if they belong to the same orbit. Choose \(T\subset\widetilde{P}\) so that \(T\) has exactly one representative from each block of this partition. Note, by definition, for any edge \((s(\phi^{r})^{i},x^{\prime},p\phi^{j})\), \(i,j\in\mathbb{N}\), there is a unique \(p^{\prime\prime}\in T\) and an element \(x\in\operatorname{Letters}(s,p^{\prime\prime})\) so that the orbit of \((s,x,p^{\prime\prime})\) under \(\phi^{r}\) contains \((s(\phi^{r})^{i},x^{\prime},p\phi^{j})\).
We inductively define a map \(\lambda_{B}\) (i.e., induced by a vertex fixing automorphism of \(B\)) along the orbit of an edge \((s,x,q)\) for some \(q\in T\). The map \(\lambda_{B}\) will then determine a transducer \((X_{n},Q_{B},\pi_{B},\lambda_{B})\) which can be used (as in Definition 4.1) to carry out the required relabelling. To this end fix \(q\in T\) and set \(k=|\{q\phi^{imr}|i\in\mathbb{N}\}|\).
For \(0\leq a\leq m-1\), partition \(\operatorname{Letters}(s_{ar+1},q)\) via the equivalence relation relating two edge labels whose corresponding edges are in the same orbit under \(\phi^{kmr}\). Recall there is an order on the elements of each equivalence class induced from the standard \(\leq\) ordering on \(X_{n}\). Use this ordering to determine a transversal for the equivalence classes, choosing as representative of each class the least element in that class. Write \(\beta(s_{ar+1})\) for this transversal. For \(b\in\beta(s_{ar+1})\) we use the phrase _the equivalence class of \(b\) at \(s_{ar+1}\)_ to mean the edge labels in \(\operatorname{Letters}(s_{ar+1},q)\) which are orbit equivalent to \(b\).
Let \(0\leq a,a^{\prime}\leq m-1\). We note that since \(N\) is the orbit length of any edge from a state in the orbit of \(s\) to a state in the orbit of \(q\), we have \(|\beta(s_{ar+1})|=|\beta(s_{a^{\prime}r+1})|\). Let \(\alpha\in\mathbb{N}\) be such that for \(b\in\beta(s_{ar+1})\) and \(b^{\prime}\in\beta(s_{a^{\prime}r+1})\), the size of the equivalence class of \(b\) at \(s_{ar+1}\) and the size of the equivalence class of \(b^{\prime}\) at \(s_{a^{\prime}r+1}\) are both equal to \(\alpha\). We fix a bijection between the sets \(\beta(s_{ar+1})\) and \(\beta(s_{a^{\prime}r+1})\) induced by the ordering of the elements. We note that \(\alpha kmr=N\).
For \(1\leq j\leq r\), and \(0\leq a\leq m-1\) we write \(\beta(s_{ar+j})\) for the set \(\{b\phi^{j-1}|b\in\beta(s_{ar+1})\}\). We note that the orbit equivalence class of \(b\phi^{j-1}\) at \(s_{ar+j}\), \(b\in\beta(s_{ar+1})\), is precisely the image of the equivalence class of \(b\) at \(s_{ar+1}\) under \(\phi^{j-1}\). We transport using \(\phi^{j-1}\) the orderings of \(\beta(s_{ar+1})\), and the equivalence classes of elements \(b\in\beta(s_{ar+1})\), to the set \(\beta(s_{ar+j})\) and the equivalence classes of its elements. That is, for instance, if \(b<b^{\prime}\in\beta(s_{ar+1})\), then \(b\phi^{j-1}<b^{\prime}\phi^{j-1}\) in \(\beta(s_{ar+j})\).
Let \(1\leq l\leq m\) be minimal such that \(\{q\phi^{imr}|i\in\mathbb{N}\}\phi^{rl}=\{q\phi^{imr}|i\in\mathbb{N}\}\). We note that by minimality \(l|m\) since: \(\{q\phi^{imr}|i\in\mathbb{N}\}\phi^{mr}=\{q\phi^{imr}|i\in\mathbb{N}\}\). Moreover \(\phi^{lr}:\{q\phi^{imr}|i\in\mathbb{N}\}\to\{q\phi^{imr}|i\in\mathbb{N}\}\) is a \(k\)-cycle since \(\phi^{mr}:\{q\phi^{imr}|i\in\mathbb{N}\}\to\{q\phi^{imr}|i\in\mathbb{N}\}\) is a \(k\)-cycle and \(\phi^{mr}\) is a power of \(\phi^{lr}\). Let \(M\in\mathbb{N}\) be such that \(Ml=m\) so that \(\alpha kMlr=N\).
Further observe that if \(\{q\phi^{imr}|i\in\mathbb{N}\}\phi^{rd}\cap\{q\phi^{imr}|i\in\mathbb{N}\}\neq\emptyset\) for some \(d\in\mathbb{N}\), then \(\{q\phi^{imr}|i\in\mathbb{N}\}\phi^{rd}=\{q\phi^{imr}|i\in\mathbb{N}\}\). For suppose \(q\phi^{fmr}\in\{q\phi^{imr}|i\in\mathbb{N}\}\phi^{rd}\) for some \(1\leq f\leq k\). Then there is some \(1\leq j\leq k\) such that \(q\phi^{jmr}\phi^{dr}=q\phi^{fmr}\); this now means that
\(q\phi^{dr}\phi^{jmr}\in\{q\phi^{imr}|i\in\mathbb{N}\}\). However, since \(\phi^{mr}\) is a \(k\)-cycle on the set \(\{q\phi^{imr}|i\in\mathbb{N}\}\), it follows that \(q\phi^{dr}\in\{q\phi^{imr}|i\in\mathbb{N}\}\).
Thus we conclude that the sets \(\{q\phi^{imr}|i\in\mathbb{N}\}\phi^{ar}\) for \(0\leq a\leq l-1\) are pairwise disjoint.
We define a relabelling map \(\lambda_{B}\) inductively as follows.
Let \(b=b_{0}\in\beta(s_{1})\) be the smallest element such that \(\lambda_{B}(b,s)\) is undefined and, for all \(1\leq i\leq l-1\), \(\lambda_{B}(b_{i},s_{ir+1})\) is undefined for the element \(b_{i}\) of \(\beta(s_{ir+1})\) corresponding to \(b\). (Note that \(b_{i}\) is the least element of \(\beta(s_{ir+1})\) such that \(\lambda_{B}(b_{i},s_{ir+1})\) is undefined.) In the inductive process which follows, we will define \(\lambda_{B}(b_{i},s_{ir+1})\) for all \(0\leq i\leq l-1\) in order.
Define a \(kl\)-by-\(r\) matrix \(\mathfrak{r}\) with entries tuples of size \(\alpha\) as follows. Set
\[\mathfrak{r}_{0,0}=(b=b_{1,1},b_{1,2},\ldots,b_{1,\alpha})\]
where \((b_{1,1},\ldots,b_{1,\alpha})\) is the ordered tuple of elements of the equivalence class of \(b\) at \(s_{1}\). For \(0\leq i<kl\) and \(0\leq j<r\) set \(\mathfrak{r}_{i,j}=(b_{1,1},b_{1,2},\ldots,b_{1,\alpha})\phi^{ir+j}=(b_{1,1}\phi^{ir+j},b_{1,2}\phi^{ir+j},\ldots,b_{1,\alpha}\phi^{ir+j})\).
Define a matrix \(\mathfrak{R}\) of dimension \(Mkl\)-by-\(r\) such that \(\mathfrak{R}_{i,j}\) for \(0\leq i<Mkl\) and \(0\leq j<r\) has entry
\[((b_{1,1},s),\ldots,(b_{1,\alpha},s))\phi^{ir+j}:=((b_{1,1}\phi^{ir+j},s\phi^ {ir+j}),\ldots,(b_{1,\alpha}\phi^{ir+j},s\phi^{ir+j})).\]
For \(0\leq d<M\), set \(\mathfrak{R}(d)\) to be the \(kl\)-by-\(r\) matrix corresponding to rows \(dkl\) to row \((d+1)kl-1\). For \(0\leq d<M\), \(0\leq i<kl\) and \(0\leq j<r\) we set \(\lambda_{B}\mathfrak{R}(d)_{i,j}=\mathfrak{r}_{i,j}\), where we extend \(\lambda_{B}\) naturally to act on tuples \((X_{n}\times Q_{B})^{\alpha}\) to produce tuples in \(X_{n}^{\alpha}\).
Let \(1\leq i<l\). We note that for the element \(b^{\prime}\in\beta(s_{ir+1})\), the function \(\lambda_{B}(b^{\prime},s_{ir+1})\) remains undefined. Let the matrix \(\mathfrak{r}\) be exactly as above and define the matrix \(\mathfrak{R}\) as above but with \(b^{\prime}\) playing the role of \(b\) and \(s_{ir+1}\) playing the role of \(s_{1}=s\). For \(0\leq d<M\) define the component \(\mathfrak{R}(d)\) as above. Then once more for \(0\leq d<M\), \(0\leq i<kl\) and \(0\leq j<r\) we set \(\lambda_{B}\mathfrak{R}(d)_{i,j}=\mathfrak{r}_{i,j}\).
Continuing on in this way across the set \(T\), we define \(\lambda_{B}\) on all pairs \((x,s\phi^{i})\) where \(i\in\mathbb{N}\) and there is a \(j\in\mathbb{N}\) such that \((s\phi^{i},x,p\phi^{j})\) is an edge. We set \(\lambda_{B}\) to be projection onto the first coordinate on all other pairs in \(X_{n}\times Q_{B}\).
By construction \(\lambda_{B}\) is induced by a vertex fixing automorphism and induces the required relabelling of \(B\).
**Remark 4.8**.: Note that the relabelling \(B^{\prime}\) of \(B\) given by Lemma 4.7 is in fact isomorphic as an automaton to \(B\), since the relabelling is by a vertex fixing automorphism of \(B\). This means we may instead write \((B,\phi)\) for the pair \((B^{\prime},\phi^{\prime})\).
**Lemma 4.9**.: _Let \(B\) be a strongly synchronizing automaton and \(\phi\) an automorphism of the underlying digraph \(G_{B}\) of \(B\). Let \(s,t,p\) be states of \(B\) such that there is an \(x\in X_{n}\) with \(\pi_{B}(x,s)=p\). Suppose_
* _for_ \(i,j\in\mathbb{N}\)_,_ \(\pi_{B}(x,s\phi^{i})=p\phi^{j}\) _if and only if_ \(\pi_{B}(x,t\phi^{i})=p\phi^{j}\)_;_
* _the orbits of_ \(s\) _and_ \(t\) _are distinct and have equal length_ \(l\)_;_
* _there is an_ \(N\in\mathbb{N}\) _such that for any_ \(j\in\mathbb{N}\)_, all edges_ \((s,x,p\phi^{j})\) _and_ \((t,x,p\phi^{j})\) _are on orbits of length_ \(N\)_._
_Then there is a relabelling \(B^{\prime}\) of \(B\) such that the induced automorphism \(\phi^{\prime}\) of \(G_{B^{\prime}}\) satisfies: for any \(i,j\in\mathbb{N}\), \(\mathrm{Letters}(s\phi^{i},p\phi^{j})=\mathrm{Letters}(t\phi^{i},p\phi^{j})\), and for any \(x\in\mathrm{Letters}(s\phi^{i},p\phi^{j})\), the labels of the edges \((s\phi^{i},x,p\phi^{j})\phi^{\prime}\) and \((t\phi^{i},x,p\phi^{j})\phi^{\prime}\) coincide._
Proof.: This is a more straightforward relabelling operation than the previous case. We simply match the orbits of \(t\) along \(p\) with those of \(s\) along \(p\). We define the relabelling map \(\lambda_{B}\) inductively. As before, throughout we observe the following notation. Let \(u,v\in Q_{B}\) and \(x\in X_{n}\) such that \((u,x,v)\) is an edge. For \(i\in\mathbb{N}\) we write \(x\phi^{i}\), whenever there is no ambiguity, for the label of the edge \((u,x,v)\phi^{i}\).
First, for any pair \((c,d)\in X_{n}\times Q_{B}\) such that \((d,c,\pi(c,d))\) is not an edge from a state in the orbit of \(t\) to a state in the orbit of \(p\), set \(\lambda_{B}(c,d)=c\).
Let \(x\in X_{n}\) be smallest such that \((t,x,p\phi^{i})\) is an edge for some \(i\) and \(\lambda_{B}(x,t)\) is not defined. Let \(y\in X_{n}\) be minimal such that \((s,y,p\phi^{i})\) is an edge and \(y\) is not equal to \(\lambda_{B}(z,t)\) for \((t,z,p\phi^{i})\) an edge. For \(0\leq j<N\) set \(\lambda_{B}(x\phi^{j},t\phi^{j})=y\phi^{j}\) where \(y\phi^{j}\) is the label of the edge \((s,y,p\phi^{i})\phi^{j}\).
This inductively defined relabelling map \(\lambda_{B}\) is given by a vertex fixing automorphism and induces the required relabelling of \(B\).
**Remark 4.10**.: We note that, once more, \(B^{\prime}\) and \(B\) are isomorphic as automata and so we may write \((B,\phi^{\prime})\) for \((B^{\prime},\phi^{\prime})\).
### Shadow states
In this second part of our process, we find new states to add to the transducer via splitting operations, to provide more room for relabelling.
Let \(A\in\mathcal{H}_{n}\) have finite order and let \(B\) be a minimal strongly synchronizing automaton such that there is an automorphism \(\phi_{A}\) of the underlying digraph \(G_{B}\) of \(B\) with \(A\) the minimal representative of \(H(B,\phi_{A})\).
The following definition is motivated by considering paths into a vertex \(t\) that might provide an obstruction to a collapse through relabelling of \(B\), as described in the next paragraph.
Suppose there is a state \(q\) of \(B\) so that there is a minimal length \(r\) so that all paths of length \(r\) that end on \(q\) have orbits of length \(n\) under the action of \(\phi_{A}\), and for this choice of \(q\) we have \(r>1\). Let \(\mathcal{P}=e_{1}e_{2}\ldots e_{r}\) be a path of length \(r\) terminating at \(q\), where the orbit of \(\mathcal{P}\) has length \(n\) but the orbit of \(e_{2}e_{3}\ldots e_{r}\) is of size \(c\) for some \(c<n\). For indices \(1\leq i\leq j\leq r\) set \(\mathcal{P}_{i,j}:=e_{i}e_{i+1}\ldots e_{j}\). By construction, the least common multiple of the orbit size of the edge \(e_{1}\) and of \(c\) is \(n\), and further, the orbit length of \(\mathcal{P}_{2,r-1}\) must divide \(c<n\). As will become clear later, if this situation arises, it may be an obstruction to collapse of a transducer through a relabelling process.
In the definition that follows, the state \(t\) corresponds to the target of \(e_{1}\) from the path mentioned above, while \(b\) is some integer multiple of the orbit length of \(t\), but which still properly divides \(n\).
**Definition 4.11**.: Let \(A\in\mathcal{H}_{n}\), \(B\) be a minimal strongly synchronizing automaton such that there is an automorphism \(\phi_{A}\) of the underlying digraph \(G_{B}\) of \(B\) with \(A\) the minimal representative of \(H(B,\phi_{A})\), and \(t\in Q_{B}\). We say \(t\)_is heavy (for the pair \((B,\phi_{A})\))_ if the following conditions hold:
* there is a proper divisor \(b\) of \(n\), where \(b\) is divisible by the length of the orbit of \(t\);
* there is at least one pair \((x,s)\in X_{n}\times Q_{B}\) such that \((s,x,t)\) is an edge;
* for any \(x\in X_{n}\) and any \(s\in Q_{B}\) such that \((s,x,t)\) is an edge of \(B\), the lowest common multiple of \(b\) and the length of the orbit of \((s,x,t)\) under \(\phi_{A}\) is \(n\).
In this case, we call the value \(b\) above a _divisibility constant for \(t\)_ and observe that the set of valid divisibility constants for \(t\) might have more than one element.
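Checking heaviness of a state is a finite computation. The following Python sketch (ours; it assumes we are given the orbit length of \(t\) and the orbit lengths, under \(\phi_{A}\), of the incoming edges of \(t\)) searches for a divisibility constant.

```python
# Minimal sketch of Definition 4.11: t is heavy if some proper divisor b of n is
# divisible by the orbit length of t and satisfies lcm(b, k) = n for the orbit
# length k of every edge (s, x, t) into t (and t has at least one incoming edge).
from math import lcm

def is_heavy(n, orbit_len_t, in_edge_orbit_lengths):
    if not in_edge_orbit_lengths:
        return False, None
    for b in range(orbit_len_t, n):                  # proper divisors of n only
        if n % b == 0 and b % orbit_len_t == 0:
            if all(lcm(b, k) == n for k in in_edge_orbit_lengths):
                return True, b                       # b is a divisibility constant
    return False, None
```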
In our overall process, we will apply the Lemma 4.13 (directly below) in a situation where we cannot simplify a transducer \(H(B,\phi_{A})\) directly by a relabelling operation. Specifically, this lemma is useful in situations where we can carry out an in-split of the domain automaton \(B\) along the orbit of a heavy state \(t\) to create a new automaton \(B^{\prime}\) with automorphism \(\psi_{A}\) so that \(A\) is a minimal representative of both \(H(B,\phi_{A})\) and of \(H(B^{\prime},\psi_{A})\), and where the new pair \((B^{\prime},\psi_{A})\) has a reduced obstruction to the existence of a helpful relabelling. Note that here, we mean "in-split" in the normal sense of that operation for edge-shift equivalences, see, e.g. [11].
The following lemma characterises how to perform an _in-split along the orbit of a heavy state \(t\)_. The new automaton that is created has all of the old states, together with new states which we call _shadow states (from the orbit of \(t\))_.
The following two lemmas address the same set of hypotheses, but we split the results into two statements as the lemma of primary interest is the second one.
Any such number \(n^{\prime}\) which arises as in Lemma 4.12 below will be referred to as a _valid splitting length for (the heavy state) \(t\) with respect to divisibility constant \(b\)_.
**Lemma 4.12**.: _Let \(A\in\mathcal{H}_{n}\) and let \(B\) be a minimal strongly synchronizing automaton such that there is an automorphism \(\phi_{A}\) of the underlying digraph \(G_{B}\) of \(B\) with \(A\) the minimal representative of \(H(B,\phi_{A})\). Suppose there are \(b\in\mathbb{N}\) and \(t\in Q_{B}\) so that \(t\) is heavy for the pair \((B,\phi_{A})\) with \(b\) a divisibility constant for \(t\), and where \(\left|\{t\phi_{A}^{p}|p\in\mathbb{Z}\}\right|=r\)._
_In these circumstances, there is \(n^{\prime}\in\mathbb{N}\) a number which divides the lengths of orbits of all edges \((s,x,t)\) and satisfies the following conditions:_
* _the lowest common multiple of_ \(n^{\prime}\) _and_ \(b\) _is_ \(n\)_,_
* _there is_ \(m>1\) _so that_ \(n^{\prime}=mr\)_._
Proof.: Let \(N\) be the greatest common divisor of the orbit lengths of all edges \((s,x,t)\) in \(B\). Let \((s,x,t)\) be an edge of \(B\) with orbit length \(c\) under the action of \(\langle\phi_{A}\rangle\). Now, by the third bullet point of the definition of the state \(t\) being heavy we see that \(\operatorname{lcm}(c,b)=n\). It follows, as \((s,x,t)\) is an arbitrary incoming edge for \(t\), that \(\operatorname{lcm}(N,b)=n\) as well. Since \(r\) is the orbit length of \(t\) we see that \(r|c\), and since \((s,x,t)\) is an arbitrary incoming edge for \(t\) we therefore have \(r|N\). By assumption, \(r|b\), so if \(r=N\) we would have \(\operatorname{lcm}(N,b)=b<n\), which is a contradiction. It then follows that \(N=kr\) for some integer \(k>1\). Thus the set of numbers \(n^{\prime}\) which divide the orbit lengths of all edges \((s,x,t)\) and satisfy points i) and ii) is non-empty. Now let \(n^{\prime}\) be an element of this set and determine \(m\in\mathbb{N}\) so that \(mr=n^{\prime}\) (noting that \(1<m\) by construction).
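Concretely, the valid splitting lengths can be enumerated as in the following Python sketch (ours); under the hypotheses of Lemma 4.12 the returned list is non-empty, since the greatest common divisor \(N\) of the incoming-edge orbit lengths belongs to it.

```python
# Minimal sketch: all valid splitting lengths n' for t, given n, the orbit length r
# of t, a divisibility constant b, and the orbit lengths of the edges into t.
from math import gcd, lcm

def valid_splitting_lengths(n, r, b, in_edge_orbit_lengths):
    N = 0
    for k in in_edge_orbit_lengths:
        N = gcd(N, k)                  # gcd of all incoming-edge orbit lengths
    return [d for d in range(1, N + 1)
            if N % d == 0              # d divides every incoming-edge orbit length
            and lcm(d, b) == n         # condition i)
            and d % r == 0 and d > r]  # condition ii): d = m*r with m > 1
```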
**Lemma 4.13**.: _Let \(A\in\mathcal{H}_{n}\) and let \(B\) be a minimal strongly synchronizing automaton such that there is an automorphism \(\phi_{A}\) of the underlying digraph \(G_{B}\) of \(B\) with \(A\) the minimal representative of \(H(B,\phi_{A})\). Suppose there are \(b\in\mathbb{N}\) and \(t\in Q_{B}\) so that \(t\) is heavy for the pair \((B,\phi_{A})\) with \(b\) a divisibility constant for \(t\). Set \(t_{0,0}=t\) and let \(t_{0,0},t_{0,1},\ldots,t_{0,r-1}\) be the orbit of \(t\) under iteration by \(\phi_{A}\). Let \(n^{\prime}\) be a valid splitting length of \(t\) and let \(m>1\) be determined by \(n^{\prime}=mr\)._
_In these circumstances we may form a new strongly synchronizing automaton \(B^{\prime}\) with_
\[Q_{B^{\prime}}=Q_{B}\sqcup\{t_{a,0},\ldots,t_{a,r-1}\mid 1\leq a<m\}\]
_such that we have_
* \(\pi_{B^{\prime}}(x,s):=\pi_{B}(x,s)\) _for those pairs_ \((x,s)\in X_{n}\times Q_{B}\) _where_ \(\pi_{B}(x,s)\) _is not in the orbit of_ \(t\)_;_
* \(\pi_{B^{\prime}}(\cdot,t_{a,i}):=\pi_{B^{\prime}}(\cdot,t_{0,i})\) _for all_ \(0\leq a<m\)_,_ \(0\leq i<r\)_;_
* _The incoming transitions of_ \(B^{\prime}\) _to the set of vertices_ \(\{t_{a,i}\mid 0\leq a<m,0\leq i<r\}\) _are determined by the above rules, and by an automorphism_ \(\psi_{A}\) _of the underlying digraph_ \(G_{B^{\prime}}\) _of_ \(B^{\prime}\) _satisfying:_ \((t_{0,0})\psi_{A}^{ar+i}=t_{a,i}\) _for_ \(0\leq a<m\)_,_ \(0\leq i<r\)_,_ \((t_{0,0})\psi_{A}^{n^{\prime}}=t_{0,0}\)_, and_ \(H(B^{\prime},\psi_{A})=A\)_._
Proof.: Let \(n^{\prime}\) be a valid splitting length for the heavy state \(t_{0,0}\) with divisibility constant \(b\), and let \(m>1\) be an integer so that \(n^{\prime}=mr\).
Set
\[T:=\{t_{0,0},t_{0,1},\ldots,t_{0,r-1}\}\]
and build a set of new objects (the extra "shadow states" arising from the splitting along the orbit of \(t_{0,0}\))
\[T^{\prime}=\{t_{a,0},t_{a,1},\ldots,t_{a,r-1}\mid 1\leq a<m\}.\]
Note that \(|T\cup T^{\prime}|=n^{\prime}\).
We will define an action \(\psi_{A}\) on
\[Q_{B^{\prime}}:=Q_{B}\cup T^{\prime}\]
as follows.
For \(s\in Q_{B^{\prime}}\), set
\[s\psi_{A}=\begin{cases}s\phi_{A}&\text{ if }\quad s\in Q_{B}\backslash T\\ t_{a,i+1}&\text{ if }\quad s=t_{a,i}\text{ and }i<r-1\\ t_{a+1,0}&\text{ if }\quad s=t_{a,r-1}\text{ and }a<m-1\\ t_{0,0}&\text{ if }\quad s=t_{m-1,r-1}.\end{cases}\]
It is immediate by construction that this is an action, and also that the orbit of \(t_{0,0}\) has size \(n^{\prime}\). We will specify transitions for \(B^{\prime}\) by steadily expanding the definition of the underlying digraph \(G_{B^{\prime}}\) of \(B^{\prime}\) through adding edges of the form \((p,x,q)\) for \(p,q\in Q_{B^{\prime}}\) and \(x\in X_{n}\) (thus adding the transition \((x,p)\pi_{B^{\prime}}=q\) to \(B^{\prime}\)), while simultaneously extending the function \(\psi_{A}\) on the corresponding edges of \(G_{B^{\prime}}\). Ultimately, \(B^{\prime}\) will be a strongly synchronizing automaton and \(\psi_{A}\) will be an automorphism of the digraph \(G_{B^{\prime}}\) with \(H(B^{\prime},\psi_{A})\) being equivalent to \(A\).
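The combinatorics of \(\psi_{A}\) on the orbit of \(t_{0,0}\) can be pictured as a pair of nested counters, as in the following Python sketch (ours, purely illustrative).

```python
# Minimal sketch of the action of psi_A on the indices (a, i) of the states t_{a,i}:
# i advances mod r, and a advances mod m each time i wraps around.
def psi_on_shadow(a, i, r, m):
    if i < r - 1:
        return a, i + 1
    return (a + 1) % m, 0

# Check that t_{0,0} has orbit length n' = m*r (here with r = 3, m = 2):
r, m = 3, 2
a, i, steps = 0, 0, 0
while True:
    a, i = psi_on_shadow(a, i, r, m)
    steps += 1
    if (a, i) == (0, 0):
        break
print(steps)  # -> 6
```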
Important in what follows will be a graph homomorphism \(\iota:G_{B^{\prime}}\to G_{B}\), which we will automatically extend to the (new) edges of \(G_{B^{\prime}}\) whenever they are added. On the set \(Q_{B^{\prime}}\), \(\iota\) is defined as follows: for \(s\in Q_{B^{\prime}}\backslash(T\cup T^{\prime})\) set \(s\iota:=s\), and for any \(t_{a,i}\in T\cup T^{\prime}\) set \(t_{a,i}\iota:=t_{0,i}\).
Below, whenever we extend \(G_{B^{\prime}}\) by adding new edges, we also extend the graph homomorphisms \(\iota:G_{B^{\prime}}\to G_{B}\) and \(\psi_{A}:G_{B^{\prime}}\to G_{B^{\prime}}\) so as to maintain _rsc_, the _rule of semi-conjugacy_, which we define here.
\(\underline{rsc}\):
1. for all \(q\in Q_{B^{\prime}}\) we have \(q\psi_{A}\iota=q\iota\phi_{A}\), and
2. for all edges \(e\) of \(G_{B^{\prime}}\) we further require \(e\psi_{A}\iota=e\iota\phi_{A}\).
Of course we have part (a) of the rule because we have already defined \(\iota\) and \(\psi_{A}\) over \(Q_{B^{\prime}}\) to satisfy this rule.
In the above construction of \(\iota\), if \(e\iota=(p^{\prime},x,q^{\prime})\) then we will identify \(e\) as \((p,x,q)\) where \(p\) is the source of \(e\) and \(q\) is the target of \(e\), so after any extension we can always think of the new \(G_{B^{\prime}}\) as an edge-labelled directed graph with edge labels "lifted" from \(G_{B}\) by the map \(\iota\).
Note that below we will sometimes add a large collection of edges in one go, but in this case there is always a well-defined triple \((p,x,q)\) for each new edge, as we add in edges along an orbit under \(\psi_{A}\) which always contains a well-defined edge \((p^{\prime},y,q^{\prime})\), from which we can detect the correct letter labelling of all edges along the orbit by using rsc.
It follows that if \(B^{\prime}\) is a strongly synchronizing automaton then \(H(B,\phi_{A})\) will represent the same element of \(\mathcal{H}_{n}\) as \(H(B^{\prime},\psi_{A})\), since the map \(\iota\) never changes edge labels, and the map \(\psi_{A}\) will have to change edge labels in the corresponding fashion as \(\phi_{A}\) in order to uphold rsc.
We now begin to specify the edges of \(G_{B^{\prime}}\), and hence the transition function \(\pi_{B^{\prime}}\). Recall below that \(Q_{B}\backslash T=Q_{B^{\prime}}\backslash(T\cup T^{\prime})\).
Partition the edges of \(G_{B}\) into the following four sets.
\[N_{T}:= \{(p,x,q)\mid p,q\not\in T\},\] \[B_{T}:= \{(p,x,q)\mid p,q\in T\},\] \[D_{T}:= \{(p,x,q)\mid p\in T,q\not\in T\},\text{ and}\] \[R_{T}:= \{(p,x,q)\mid p\not\in T,q\in T\}.\]
We observe in passing that \(\phi_{A}\) acts on each of the sets \(N_{T}\), \(B_{T}\), \(D_{T}\), and \(R_{T}\).
For \((p,x,q)\) an edge in \(N_{T}\), let \((p,x,q)\) also be an edge of \(G_{B^{\prime}}\) (and so \((x,p)\pi_{B^{\prime}}=q\) as well) and set \((p,x,q)\psi_{A}:=(p,x,q)\phi_{A}\).
Recall that for a group \(H\) acting on a set \(X\), a transversal for the orbits is a subset \(\mathscr{Y}\subset X\) so that each orbit under the group action has a unique representative in the set \(\mathscr{Y}\).
Let \(\mathscr{Y}_{B}\) be a transversal for the orbits of the edges in \(B_{T}\) such that each edge in \(\mathscr{Y}_{B}\) is of the form \((t_{0,0},x,t_{0,i})\). Similarly set \(\mathscr{Y}_{D}\) to be a transversal for the orbits of the edges in \(D_{T}\) so that each edge of \(\mathscr{Y}_{D}\) is of the form \((t_{0,0},x,s)\) for some \(s\in Q_{B}\backslash T\). Finally set \(\mathscr{Y}_{R}\) to be a transversal for the orbits of the edges in \(R_{T}\) so that each edge of \(\mathscr{Y}_{R}\) is of the form \((s,x,t_{0,0})\) for some \(s\in Q_{B}\backslash T\).
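The partition of the edge set and the choice of the transversals \(\mathscr{Y}_{B}\), \(\mathscr{Y}_{D}\), \(\mathscr{Y}_{R}\) can be carried out mechanically. The sketch below is our own illustration (the edge encoding and the helper names are assumptions, not taken from the text): edges are triples \((p,x,q)\), and the digraph automorphism is given as a dictionary on edges.

```python
def edge_orbit(edge_map, e):
    """The orbit of the edge e under the automorphism, as a list."""
    orbit, f = [e], edge_map[e]
    while f != e:
        orbit.append(f)
        f = edge_map[f]
    return orbit

def partition_edges(edges, T):
    """Split the edges into N_T, B_T, D_T, R_T exactly as displayed above."""
    N, B, D, R = set(), set(), set(), set()
    for (p, x, q) in edges:
        if p in T and q in T:
            B.add((p, x, q))
        elif p in T:
            D.add((p, x, q))
        elif q in T:
            R.add((p, x, q))
        else:
            N.add((p, x, q))
    return N, B, D, R

def transversal(edge_map, edge_set, t00, position):
    """One representative per orbit meeting edge_set, chosen with its source
    (position = 0) or target (position = 2) equal to t00."""
    reps, seen = [], set()
    for e in edge_set:
        if e in seen:
            continue
        orbit = edge_orbit(edge_map, e)
        seen.update(orbit)
        reps.append(next(f for f in orbit if f[position] == t00))
    return reps

# Usage: with (N, B, D, R) = partition_edges(edges, T), take
# Y_B = transversal(edge_map, B, t00, 0), Y_D = transversal(edge_map, D, t00, 0),
# and Y_R = transversal(edge_map, R, t00, 2).
```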
Extend \(G_{B^{\prime}}\) to include \(\mathscr{Y}_{D}\cup\mathscr{Y}_{R}\cup\mathscr{Y}_{B}\) as edges incident on \(t_{0,0}\) (we will add more edges incident on \(t_{0,0}\) later). Furthermore, use the action of \(\psi_{A}\) on the set \(Q_{B^{\prime}}\) together with the map \(\iota\) to uniquely determine new edges (of the form \((p,x,q)\) for \(x\in X_{n}\)) that must be added to \(G_{B^{\prime}}\) so that the resulting digraph is closed under the action of \(\psi_{A}\), contains the transversal edges \(\mathscr{Y}_{D}\cup\mathscr{Y}_{R}\cup\mathscr{Y}_{B}\) and satisfies rsc. Note that this process extends the definition of \(\iota\) and \(\psi_{A}\) to these new edges as well, but these extensions are inductively well defined. Now we may use the new edges of \(G_{B^{\prime}}\) in the obvious way to also extend the definition of the transition function \(\pi_{B^{\prime}}\) so as to create a correspondingly larger automaton \(B^{\prime}\).
Observe that for any state \(s\in Q_{B^{\prime}}\backslash(T\cup T^{\prime})=Q_{B}\backslash T\), the process above now has created a unique edge of the form \((s,x,q)\) for each \(x\in X_{n}\) (which are the "lifts" of \(N_{T}\) and \(R_{T}\) to \(G_{B^{\prime}}\) by \(\iota\)). For edges in \(N_{T}\) this is simply by definition. For an edge \((s,x,t_{0,i})\in R_{T}\), there is an edge \((s^{\prime},x^{\prime},t_{0,0})\in\mathscr{Y}_{R}\) so that there is a minimal non-negative integer \(k\) with \((s^{\prime},x^{\prime},t_{0,0})\phi_{A}^{k}=(s,x,t_{0,i})\). It follows that \((s^{\prime},x^{\prime},t_{0,0})\psi_{A}^{k}=(s,x,t_{a,i})\) for the unique non-negative \(a\) so that \(k=ar+i\). Now suppose there is an edge \((s,x,t_{b,j})\) of \(G_{B^{\prime}}\). By rsc, we see that \((s,x,t_{b,j})\iota=(s,x,t_{0,j})\) but as there is a unique outgoing edge in \(G_{B}\) from \(s\) with letter \(x\) we see that \(t_{0,j}=t_{0,i}\) and in particular, \(i=j\). We assume without meaningful loss of generality that \(b\geq a\) and that \(|b-a|\) is minimal amongst all such differences. Thus by rsc the orbit length of the edge \((s,x,t_{0,i})\) under \(\phi_{A}\) is precisely \((b-a)r\) or else \(b=a\). However, \((b-a)r<n^{\prime}\) and \(n^{\prime}\) divides the length of the orbit of \((s,x,t_{0,i})\) by the definition of \(n^{\prime}\). It follows that \((s,x,t_{a,i})\) is the unique pre-image of \((s,x,t_{0,i})\) under \(\iota\).
There remains a special concern that we must address. Specifically, there are now pairs \((x,t_{a,i})\in X_{n}\times(T\cup T^{\prime})\) so that there are no edges of the form \((t_{a,i},x,q)\) in \(G_{B^{\prime}}\). This happens as the orbit of \(t_{0,0}\) has length \(r\) under \(\phi_{A}\) but length \(n^{\prime}=mr>r\) under \(\psi_{A}\). Also, to verify the coherence of the rsc condition for edges in \(\mathscr{Y}_{B}\), recall that \(n^{\prime}\) divides the orbit length of these edges as they are in the orbit of an edge incident to \(t\).
Let us now deal with the "missing edges" issue. Observe that for an edge \((t_{0,0},x,s)\in\mathscr{Y}_{D}\cup\mathscr{Y}_{B}\), its orbit under \(\phi_{A}\) may contain multiple edges of the form \((t_{0,0},y,q)\) (for various \(y\in X_{n}\) and \(q\in Q_{B}\)). Let us organise these as the sequence of pairwise distinct edges \((e_{0},e_{1},\ldots,e_{k})\) where \(e_{i}=(t_{0,0},x,s)\phi_{A}^{ri}\), and with \(e_{k}\phi_{A}^{r}=e_{0}=(t_{0,0},x,s)\). In this context, the orbit of \((t_{0,0},x,s)\) under \(\phi_{A}\) has length \((k+1)r\). Let us set notation \(e_{i}=:(t_{0,0},x_{i},q_{i})\) so we can understand the letter \(x_{i}\) associated to \(e_{i}\) for each valid index \(i\). The concern is that in our current graph \(G_{B^{\prime}}\) we see for any index \(0<i\leq k\) that there is no edge of the form \((t_{0,0},x_{i},q)\in G_{B^{\prime}}\). For the letter \(x_{i}\) observe that there is an edge of the form \((t_{a,0},x_{i},q_{i}^{\prime})\) of \(G_{B^{\prime}}\) for the index \(a\) with \(0\leq a<m\) and \(a\equiv i\pmod{m}\) (so that \(t_{0,0}\psi_{A}^{ri}=t_{a,0}\)) and some state \(q_{i}^{\prime}\) with \(q_{i}^{\prime}\iota=q_{i}\). The rule of modification is: add the edge \((t_{0,0},x_{i},q_{i}^{\prime})\) to \(G_{B^{\prime}}\), for all indices \(0<i\leq k\). Repeat this same procedure across all of the transversal elements \((t_{0,0},x^{\prime},s^{\prime})\in\mathscr{Y}_{D}\cup\mathscr{Y}_{B}\) and as a consequence, for each letter \(y\in X_{n}\), we see that the vertex \(t_{0,0}\) now has a unique outgoing edge of the form \((t_{0,0},y,q)\). Finally, we again use the action of \(\psi_{A}\) on vertices and the action of \(\phi_{A}\) on \(G_{B}\) along with the rsc condition to extend the definitions of \(\psi_{A}\) and \(\iota\) to the necessary edges we have to add to \(G_{B^{\prime}}\) in order to complete the orbits of our newly-added edges based at \(t_{0,0}\), and to discern what letters needed to be associated to these new edges. Now induce from \(G_{B^{\prime}}\) the enlarged automaton \(B^{\prime}\).
One observes that for any valid indices \(a\) and \(b\) and fixed index \(i\), the states \(t_{a,i}\) and \(t_{b,i}\) of \(B^{\prime}\) have all the same outgoing transitions, and indeed, that the automaton \(B^{\prime}\) collapses back down to \(B\) by identifying these states for each fixed \(i\). In particular \(G_{B^{\prime}}\) is strongly synchronizing as it admits a collapse sequence to the \(n\)-leafed rose. Further, the rsc condition implies that \(\psi_{A}\) acts as an automorphism of the directed graph \(G_{B^{\prime}}\) in a fashion locally emulating how \(\phi_{A}\) acts on \(G_{B}\) so that \(H(G_{B^{\prime}},\psi_{A})\) represents \(A\).
Let \(A,B,t=t_{0,0}\) be as in the statement of Lemma 4.13. Assume we applied Lemma 4.13 to lengthen the orbit of \(t\) as in the lemma statement to create automaton \(B^{\prime}\) with automorphism \(\psi_{A}\) so that \(H(B^{\prime},\psi_{A})\) has minimal representative \(A\) and where the orbit of \(t\) in \(G_{B^{\prime}}\) under the action of \(\psi_{A}\) is the set \(\{t_{a,i}\mid 0\leq a<m,0\leq i<r\}\). Now for each state \(t_{0,i}\) for \(0\leq i<r\) (these states are in the original orbit of \(t\) in \(G_{B}\) under the action of \(\phi_{A}\)), we call the set of states
\[\{t_{a,i}\mid 0<a\leq m-1\}\subsetneq Q_{B^{\prime}}\]
the _shadow states for \(t_{0,i}\) (in \(Q_{B^{\prime}}\))_. Note that these are precisely the states of \(H(B^{\prime},\psi_{A})\) with local maps equivalent to the local map at \(t_{0,i}\) for the transducer \(H(B,\phi_{A})\). If we apply Lemma 4.13 inductively and perhaps repeatedly on states on the now extended orbit of \(t\), we extend the definition of the shadow states of \(t_{0,i}\) to be the union of the sets of states added in each round of applying Lemma 4.13 which have local maps equivalent to the local map at \(t_{0,i}\) for the transducer \(H(B,\phi_{A})\). Note that this process cannot go on forever as each application of Lemma 4.13 lengthens the orbit of \(t_{0,0}\) with \(n\) an upper bound on the length of this orbit. Also note that each added state will be a shadow state of one of the original states \(t_{0,j}\) after any number of iterated applications of Lemma 4.13.
**Lemma 4.14**.: _Let \(A\in\mathcal{H}_{n}\) and let \(B\) be the minimal strongly synchronizing automaton such that there is an automorphism \(\phi_{A}\) of the underlying digraph of \(B\) with \(A\) the minimal
representative of \(H(B,\phi_{A})\). Suppose that all circuits in \(B\) are on orbits of length \(n\) under the action of \(A\). Then there is a strongly synchronizing transducer \(\widehat{B}\) such that \(A\) acts as an automorphism \(\widehat{\psi}_{A}\) of \(\widehat{B}\) and all edges of \(\widehat{B}\) are on orbits of length \(n\) under the action of \(A\)._
Proof.: We first observe that all states of \(B\) must be on orbits of length dividing \(n\) as the underlying digraph of \(B\) is strongly synchronizing and therefore each state is visited by some circuit which is on an orbit of length \(n\). Also, we may assume that \(|B|>1\) otherwise all loops at the state of \(B\) will be on orbits of length \(n\) and we would be done.
Let \(s\in Q_{B}\) be such that \(s\) is on an orbit of length strictly less than \(n\) under \(\phi_{A}\). (If all states of \(Q_{B}\) were on orbits of length \(n\) then all edges of \(B\) would be on orbits of length \(n\) as well and we would be done.)
Inductively define states as follows.
Set \(Q_{B}(0,s):=\{s\}\). Assume \(Q_{B}(i,s)\) is defined for some \(i\in\mathbb{N}\). We now define \(Q_{B}(i+1,s)\subseteq Q_{B}\). An element \(q\in Q_{B}\) belongs to \(Q_{B}(i+1,s)\) if the following conditions hold:
1. there are elements \(x_{1},x_{2},\ldots,x_{i+1}\in X_{n}\), such that, for all \(1\leq j\leq i+1\), we have \(\pi_{B}(x_{i+1}\ldots x_{j},q)\in Q_{B}(j-1,s)\); and
2. the path \((q,x_{i+1}x_{i}\ldots x_{1},s)\) is on an orbit of length strictly less than \(n\) under \(\phi_{A}\).
We observe that for any state \(q\) in a set \(Q_{B}(i+1,s)\), the orbit of \(q\) under \(\phi_{A}\) has size properly dividing \(n\). If \(q\in Q_{B}(i+1,s)\) and \(x_{i+1},x_{i},\ldots,x_{1}\in X_{n}\) satisfies points (a) and (b) then we call the path \((q,x_{i+1}x_{i}\ldots x_{1},s)\)_conformant for \(Q_{B}(i+1,s)\)_.
Let \(k\in\mathbb{N}\) be minimal so that \(Q_{B}(k+1,s)=\emptyset\). If such \(k\) did not exist then there would be a long path (as in point (b) of the definition of the sets \(Q_{B}(i,s)\)) which is long enough that it must contain a circuit in \(B\). Any such circuit would be on an orbit of length strictly less than \(n\) under the action of \(\phi_{A}\), which is a contradiction.
From the argument directly above it also follows that whenever \(j>0\), \(s\notin Q_{B}(j,s)\).
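Before carrying out the induction, it may help to see how the sets \(Q_{B}(i,s)\) can be computed in practice. The following sketch is our own illustration (the data representation is an assumption, not taken from the text): an automaton is given by its edge set, the automorphism acts on edges via a dictionary, and a path is a tuple of consecutive edges read from its start state towards \(s\).

```python
def path_orbit_length(edge_map, path):
    """Orbit length of a path (a tuple of consecutive edges) under the
    automorphism acting edge by edge."""
    image, length = tuple(edge_map[e] for e in path), 1
    while image != path:
        image = tuple(edge_map[e] for e in image)
        length += 1
    return length

def Q_sets(edges, edge_map, s, n, max_i):
    """Compute Q_B(j, s) for 0 <= j <= max_i.  P[j] holds the length-j paths
    into s whose non-start states lie in the appropriate smaller sets;
    Q[j] collects the start states of those paths whose full orbit length
    under the automorphism is strictly less than n."""
    into = {}
    for e in edges:                       # index the incoming edges by target
        into.setdefault(e[2], []).append(e)
    P, Q = [[()]], [{s}]
    for j in range(max_i):
        new_P, new_Q = [], set()
        for p in P[j]:
            u = p[0][0] if p else s       # the state one step along the path
            if u not in Q[j]:             # it must already lie in Q_B(j, s)
                continue
            for e in into.get(u, []):
                candidate = (e,) + p
                new_P.append(candidate)
                if path_orbit_length(edge_map, candidate) < n:
                    new_Q.add(e[0])
        P.append(new_P)
        Q.append(new_Q)
    return Q                              # Q[j] is Q_B(j, s)
```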
We now apply an induction argument using Lemma 4.13 to reduce \(k\) to \(0\).
Let \(t\in Q_{B}(k,s)\) and fix \(x_{k},x_{k-1},\ldots,x_{1}\in X_{n}\), such that the path \((t,x_{k}x_{k-1}\cdots x_{1},s)\) is conformant for \(Q_{B}(k,s)\) and where the orbit of this path under the action of \(\phi_{A}\) is of length \(b<n\) (note that \(b|n\) and also that the length of the orbit of \(t\) divides \(b\)). Moreover, by choice of \(k\), for any pair \((x,p)\in X_{n}\times Q_{B}\) with \(\pi_{B}(x,p)=t\), the lowest common multiple of the length of the orbit of the edge \((p,x,t)\) and \(b\) is \(n\).
Therefore \(t\) is heavy for the pair \((B,\phi_{A})\) and \(b\) is a divisibility constant for \(t\), so we may apply Lemma 4.13 with the state \(t\) and constant \(b\) to add states in the orbit of \(t\) and necessary edges to form a new strongly synchronizing automaton \(B^{\prime}\) with \(\psi_{A}\) an automorphism of \(G_{B^{\prime}}\), and so that \(H(B^{\prime},\psi_{A})\) still represents \(A\) (and so in particular, all circuits of \(G_{B^{\prime}}\) are still on orbits of length \(n\) under the action of \(\psi_{A}\)).
Recall that by the construction of \(B^{\prime}\), the orbit of \(t\), which includes all of its shadow states, is now of larger size \(n^{\prime}_{t}\) under the action of the resulting digraph automorphism \(\psi_{A}\). Thus, for any \(t^{\prime}\) in the orbit of \(t\) (including the shadow states we have just added), if there is a path from \(t^{\prime}\) to \(s\) which is conformant for \(Q_{B^{\prime}}(k,s)\), then \(t^{\prime}\) (and therefore \(t\)) is heavy for \((Q_{B^{\prime}},\psi_{A})\), so we may again inductively increase the length of the orbit of \(t\) by adding
more shadow states until there are no paths from a point in the orbit of \(t\) to \(s\) which are conformant for \(Q_{B^{\prime}}(k,s)\) (note this happens to be a consequence of \(t\) no longer being heavy for \((B^{\prime},\psi_{A})\) which must happen eventually as the orbit of \(t\) is getting longer and is bounded above by \(n\)). Note that if \(p\in Q_{B^{\prime}}(k,s)\) but \(p\) is not in the orbit of \(t\), then \(p\in Q_{B}(k,s)\). In particular, we have \(|Q_{B^{\prime}}(k,s)|<|Q_{B}(k,s)|\) as this count of states for \(B^{\prime}\) no longer includes the state \(t\) nor any state in its orbit.
We can now inductively repeat this process for \(s\) until \(|Q_{B^{\prime}}(k,s)|=0\).
Note that we can repeat this process for any state \(p\) with \(|Q_{B^{\prime}}(k,p)|\neq 0\). Thus we may now proceed inductively in this fashion until finally we have constructed an automaton \(B^{\prime}\) and an automorphism \(\psi_{A}\) of \(G_{B^{\prime}}\) so that \(B^{\prime}\) folds onto \(B\) and \(H(B^{\prime},\psi_{A})\) represents \(A\), and where, if \(q\) is any state and \(j\) is minimal so that \(Q_{B^{\prime}}(j,q)=\emptyset\), then \(j=1\).
We set \(\widehat{B}=B^{\prime}\) and \(\widehat{\psi}_{A}=\psi_{A}\) in this final case, noting that the orbit of every edge of \(\widehat{B}\) under the action of \(\widehat{\psi}_{A}\) is of length \(n\).
**Remark 4.15**.: Let \(A\in\mathcal{H}_{n}\) be an element of finite order and suppose that every point in \(X_{n}^{-\mathbb{N}}\) is on an orbit of length \(n\) under the action of \(A\). By Lemma 4.14, there is a minimal (in size) strongly synchronizing automaton \(B\) such that \(A\) acts as an automorphism \(\phi_{A}\) of the underlying digraph of \(B\) and all edges of \(B\) are on orbits of length \(n\) under the action of \(A\).
### Relabelling through shadows
In this section we make use of Lemma 4.13 to deal with situations in which we are unable to directly apply Lemma 4.7 or Lemma 4.9.
The lemma below says that applying Lemma 4.13 and then suitably relabelling does not move us out of the conjugacy class of the considered finite order element \(A\in\mathcal{H}_{n}\).
**Definition 4.16**.: Let \(A\in\mathcal{H}_{n}\) and let \(B\) be the minimal strongly synchronizing automaton such that there is an automorphism \(\phi_{A}\) of the underlying digraph of \(B\) with \(A\) the minimal representative of \(H(B,\phi_{A})\). Let \(D\) be a strongly synchronizing automaton which is obtained by repeated applications of Lemma 4.13 to the automaton \(B\). Let \(\psi_{A}\) be the automorphism of the underlying digraph of \(D\) with \(A\) the minimal representative of \(H(D,\psi_{A})\). A relabelling \(D^{\prime}\) of \(D\) is called _a relabelling through shadows_ if, for the induced automorphism \(\psi_{A}^{\prime}\) of the underlying digraph of \(D^{\prime}\), for any state \(q\in Q_{B}\) the set \(S_{q,D}\) of shadow states of \(q\) are all \(\omega\)-equivalent in \(H(D^{\prime},\psi_{A}^{\prime})\) to the state \(q\).
**Lemma 4.17**.: _Let \(A\in\mathcal{H}_{n}\) and let \(B\) be the minimal strongly synchronizing automaton such that there is an automorphism \(\phi_{A}\) of the underlying digraph of \(B\) with \(A\) the minimal representative of \(H(B,\phi_{A})\). Let \(D\) be a strongly synchronizing automaton which is obtained by repeated applications of Lemma 4.13 to the automaton \(B\). Let \(\psi_{A}\) be the automorphism of the underlying digraph of \(D\) with \(A\) the minimal representative of \(H(D,\psi_{A})\). Let \(D^{\prime}\) be a relabelling through shadows of \(D\), let \(\psi_{A^{\prime}}\) be the induced automorphism of the underlying digraph of \(D^{\prime}\), and let \(A^{\prime}\) be the minimal representative of \(H(D^{\prime},\psi_{A^{\prime}})\). Then \(A^{\prime}\) is conjugate to \(A\) and there is a strongly synchronizing automaton \(B^{\prime}\) and an automorphism \(\phi_{A^{\prime}}\) of the underlying
digraph \(G_{B^{\prime}}\) so that \(A^{\prime}\) is the minimal representative of \(H(B^{\prime},\phi_{A^{\prime}})\), with \(G_{B^{\prime}}\) equal to the underlying digraph of \(B\)._
_Thus, the minimal strongly synchronizing automaton \(C\) on which \(A^{\prime}\) acts is carried by a digraph \(G_{C}\) that is a graph quotient of \(G_{B}\) and we have \(|G_{C}|\leq|G_{B}|\)._
Proof.: As the relabelling process employed is a relabelling through shadows, it preserves the equivalence of the local maps induced by the graph automorphism across all shadow states shadowing any particular original state of \(B\). In particular the collapse of each state with all of its shadow states results in an automaton \(B^{\prime}\) which still admits an automorphism \(\psi_{A^{\prime}}\) of its underlying graph (which graph is isomorphic to \(G_{B}\)) so that \(A^{\prime}\) is the minimal representative of \(H(B^{\prime},\psi_{A^{\prime}})\). The result now follows.
**Lemma 4.18**.: _Let \(A\in\mathcal{H}_{n}\) and let \(B\) be the minimal strongly synchronizing automaton such that there is an automorphism \(\phi_{A}\) of the underlying digraph of \(B\) with \(A\) the minimal representative of \(H(B,\phi_{A})\). Suppose that all circuits in \(B\) are on orbits of length \(n\) under the action of \(A\). Let \(p\in Q_{B}\), let \(i\in\mathbb{N}\) be less than or equal to the orbit length of \(p\), and let \(\gamma\in X_{n}^{*}\) be such that \((p,\gamma,p\phi_{A}^{i})\) is a path in \(B\) from \(p\) to \(p\phi_{A}^{i}\). Then \((p,\gamma,p\phi_{A}^{i})\) is on an orbit of length \(n\) under \(\phi_{A}\)._
Proof.: Let \(k\) be the orbit length of \(p\) and let \(r\) be the order of \(i\) in the additive group \(\mathbb{Z}_{k}\). For \(1\leq a<r\) write \(\gamma_{a}\) for the label of the path \((p,\gamma,p\phi_{A}^{i})\phi_{A}^{ia}\) and set \(\gamma_{0}=\gamma\). Write \(\Gamma=\gamma_{0}\gamma_{1}\ldots\gamma_{r-1}\); then the circuit \((p,\Gamma,p)\) is on an orbit of length \(n\) by assumption. From this it follows that the path \((p,\gamma,p\phi_{A}^{i})\) is also on an orbit of length \(n\), since every edge of the circuit \((p,\Gamma,p)\) is the image under a power of \(\phi_{A}\) of an edge of \((p,\gamma,p\phi_{A}^{i})\) and so has the same orbit length as that edge.
**Lemma 4.19**.: _Let \(A\in\mathcal{H}_{n}\) and suppose there is a minimal strongly synchronizing automaton \(B\) such that there is an automorphism \(\phi_{A}\) of the underlying digraph of \(B\) with \(A\) the minimal representative of \(H(B,\phi_{A})\) and so that all circuits in \(B\) are on orbits of length \(n\) under \(\phi_{A}\). Let \(s,t,p\in Q_{B}\) be such that \(\mathrm{Letters}(s,p)=\mathrm{Letters}(t,p)\neq\emptyset\). Then by repeated application of Lemma 4.13 one may obtain from \(B\) a strongly synchronizing automaton \(D\) with automorphism \(\psi_{A}\) so that \(A\) is the minimal representative of \(H(D,\psi_{A})\) and where the pair \((D,\psi_{A})\) satisfies the following conditions:_
1. _there are no shadow states of any element in the orbit of_ \(p\) _in_ \(D\) _(that is, each application of Lemma_ 4.13 _creates no shadow states for elements in the orbit of_ \(p\)_),_
2. \[\mathrm{Letters}_{D}(s,p)=\mathrm{Letters}_{D}(t,p)=\mathrm{Letters}_{B}(s,p),\]
3. _for_ \(u\in\{s,t\}\) _and any shadow state_ \(u^{\prime}\) _of_ \(u\) _we have_ \[\mathrm{Letters}_{D}(u^{\prime},p)=\mathrm{Letters}_{D}(s,p)=\mathrm{Letters}_{B}(s,p),\]
4. _for any_ \(x\in\mathrm{Letters}_{B}(s,p)\)_, the length of the orbits of the edges_ \((s,x,p)\)_,_ \((t,x,p)\) _under the action of_ \(\psi_{A}\) _on_ \(D\) _is_ \(n\)
Proof.: We proceed in a similar way to Lemma 4.14. If \(|B|=1\), then we are done. Therefore we assume that \(|B|>1\).
Inductively define subsets of \(Q_{B}\) as follows.
Set \(Q_{B}(0,p)=\{p\}\). Assume that \(Q_{B}(i,p)\) is defined for some \(i\in\mathbb{N}\). Define \(Q_{B}(i+1,p)\) as follows. A state \(q\in Q_{B}\) belongs to \(Q_{B}(i+1,p)\), if there are elements \(x_{0},x_{1},\ldots,x_{i}\in X_{n}\), such that \(\pi_{B}(x_{i}x_{i-1}\ldots x_{j},q)\in Q_{B}(j,p)\) for \(0\leq j\leq i\), and the path \((q,x_{i}x_{i-1}\ldots x_{0},p)\) is on an orbit of length strictly less than \(n\) under \(\phi_{A}\).
As in the proof of Lemma 4.14, there is a \(k\in\mathbb{N}\) such that \(Q_{B}(k+1,p)=\emptyset\). Set \(k\in\mathbb{N}\) to be minimal such that \(Q_{B}(k+1,p)=\emptyset\). If \(Q_{B}(1,p)\cap\{s,t\}=\emptyset\), then we are done. Thus we may assume that at least one of \(s,t\) belongs to \(Q_{B}(1,p)\) (and so \(k\geq 1\)).
Observe that by Lemma 4.18 for any \(j\in\mathbb{N}\), \(p\phi^{j}\) is not an element of \(Q_{B}(i,p)\) for any \(1\leq i\leq k\).
We now repeatedly apply Lemma 4.13, as in the proof of Lemma 4.14, until we have an automaton \(D\) such that \(Q_{D}(1,p)\cap\{s,t\}=\emptyset\). We note that, since \(p\) is always the single element of \(Q_{D}(0,p)\) and since, for any \(j\in\mathbb{N}\), \(p\phi^{j}\) is not an element of \(Q_{B}(i,p)\) for any \(1\leq i\leq k\), we do not create a shadow state of \(p\) or of elements in the orbit of \(p\) in an application of Lemma 4.13.
We spell out one step of the induction to illustrate how the proof goes. Let \(q\in Q_{B}(k,p)\). We may find \(x_{0},x_{1},\ldots,x_{k}\) such that for any \(0\leq j\leq k\), \(\pi_{B}(x_{k}x_{k-1}\ldots x_{j},q)\in Q_{B}(j,p)\). Let \(b\) be the length of the orbit of the path \((q,x_{k}x_{k-1}\ldots x_{0},p)\). Then for any state \(u\in Q_{B}\) for which there is an edge \((u,x,q)\), the lowest common multiple of the length of the orbit of \((u,x,q)\) and \(b\) is \(n\). We may now apply Lemma 4.13 to form a new transducer \(B^{\prime}\) by adding shadow states of \(q\). We may define the sets \(Q_{B^{\prime}}(i,p)\) as before, noting that \(Q_{B^{\prime}}(k+1,p)=\emptyset\). This follows as for any edge \((u,x,q)\) in \(B\), the orbit of the edge \((u^{\prime},x,q^{\prime})\) in \(B^{\prime}\) (for \(u^{\prime}\) and \(q^{\prime}\) either equal to \(u\) and \(q\) or shadow states of \(u\) and \(q\) respectively) has the same length as the orbit of the edge \((u,x,q)\). Moreover, as in the proof of Lemma 4.14, the number of paths \((q^{\prime},x_{k}x_{k-1}\ldots x_{0},p)\), where \(q^{\prime}\) is either \(q\) or one of its shadow states, witnessing that \(q^{\prime}\in Q_{B^{\prime}}(k,p)\) is strictly smaller than the number of witness paths for \(q\) in \(B\). Thus inductively applying Lemma 4.13, we find an automaton, which we again denote \(B^{\prime}\), in which \(|Q_{B^{\prime}}(k,p)|<|Q_{B}(k,p)|\) since neither \(q\) nor any of its shadow states belong to \(Q_{B^{\prime}}(k,p)\). Thus, replacing \(B\) by \(B^{\prime}\), we may repeat the process.
Eventually we reach an automaton \(D\) such that \(Q_{D}(1,p)\cap\{s,t\}=\emptyset\). Moreover, since, by Lemma 4.13, shadow states transition identically to their original counterparts on edges into states which have no shadow states added (and this transition mirrors the transition in \(B\)), the automaton \(D\) satisfies the requirements of the lemma.
In what follows, we write \(\operatorname{ol}_{\tau}(\star)\) for the orbit length of \(\star\) under the action of a digraph automorphism \(\tau\) of some digraph \(G\), where \(\star\) is a vertex, edge, or path in \(G\).
For an automaton \(C\) over alphabet \(Y\) with \(a,b\in Q_{C}\), recall that \(\operatorname{E}_{C}(a,b)\) represents the set of edges of \(G_{C}\) from \(a\) to \(b\), while
\[\operatorname{Letters}_{C}(a,b):=\{y\in Y\mid\exists(a,y,b)\in\operatorname{E}_ {C}(a,b)\}\]
represents the set of letters from \(Y\) which are the labels of these edges.
**Lemma 4.20**.: _Let \(A\in\mathcal{H}_{n}\) and let \(B\) be the minimal strongly synchronizing automaton such that there is an automorphism \(\phi_{A}\) of the underlying digraph of \(B\) with \(A\) the minimal representative of \(H(B,\phi_{A})\). Suppose that all circuits in \(B\) are on orbits of length \(n\) under \(\phi_{A}\). Let \(s,t,p\in Q_{B}\) be such that \(s\) and \(t\) belong to distinct orbits but have the same orbit length under \(\phi_{A}\), and so that \(\operatorname{Letters}(s,p)=\operatorname{Letters}(t,p)\). Then there is \(A^{\prime}\in\mathcal{H}_{n}\) and an automorphism \(\phi_{A^{\prime}}\) of the underlying digraph of \(B\) so that the following hold:_
1. _the element_ \(A\) _is conjugate to_ \(A^{\prime}\) _in_ \(\mathcal{H}_{n}\)_, where_ \(A^{\prime}\) _is the minimal representative of_ \(H(B,\phi_{A^{\prime}})\)_; and,_
2. _there is an_ \(N\in\mathbb{N}\) _such that for all_ \(x\in\operatorname{Letters}(s,p)\) _the edges_ \((s,x,p)\) _and_ \((t,x,p)\) _have orbit length_ \(N\) _under_ \(\phi_{A^{\prime}}\)_._
Proof.: The strategy is to apply Lemma 4.19 to add shadow states to \(B\) to obtain an automaton \(D\) and an automorphism \(\psi_{A}\) of \(D\), such that \(H(D,\psi_{A})\) has minimal representative \(A\), \(\psi_{A}\) preserves the orbit length of the state \(p\), and, the orbit length of the edges from \(s\) and \(t\) into \(p\) is \(n\). We then apply a relabelling through shadows of \(D\) by Lemma 4.17 to obtain a conjugate element \(A^{\prime}\) to \(A\) represented by a transducer \(H(B,\phi_{A^{\prime}})\) for \(\phi_{A^{\prime}}\) an automorphism of the underlying digraph \(G_{B}\). The key ingredient is that the relabelling of \(D\) is chosen such that the orbits of the edges of \(s\) and \(t\) into \(p\) now all have the same length under \(\phi_{A^{\prime}}\). In order to find a relabelling achieving this goal, we will need to track numerous integer constants.
Set \(r=\operatorname{ol}_{\phi_{A}}(p)\) and determine \(m\) so that \(mr=\operatorname{lcm}(\operatorname{ol}_{\phi_{A}}(s),\operatorname{ol}_{ \phi_{A}}(p))=\operatorname{lcm}(\operatorname{ol}_{\phi_{A}}(t),\operatorname {ol}_{\phi_{A}}(p))\). For any edge \((s,x,p)\in\operatorname{E}_{B}(s,p)\) we have \((s,x,p)\phi_{A}^{k}\in\operatorname{E}_{B}(s,p)\) if and only if \(k\) is an integer multiple of \(mr\). For each \(x\in\operatorname{Letters}_{B}(s,p)=\operatorname{Letters}_{B}(t,p)\), determine integers \(u_{x}\), \(v_{x}\) so that \(u_{x}mr=\operatorname{ol}_{\phi_{A}}((s,x,p))\) and \(v_{x}mr=\operatorname{ol}_{\phi_{A}}((t,x,p))\). In particular, for any \(x\in\operatorname{Letters}_{B}(s,p)\), the number of edges in \(\operatorname{E}_{B}(s,p)\) which belong to the orbit of \((s,x,p)\) under \(\phi_{A}\) is precisely \(u_{x}\), while \(v_{x}\) defines the analogous number for \((t,x,p)\). It follows that there are permutations \(\theta_{s}:\operatorname{Letters}_{B}(s,p)\to\operatorname{Letters}_{B}(s,p)\) and \(\theta_{t}:\operatorname{Letters}_{B}(t,p)\to\operatorname{Letters}_{B}(t,p)\) induced from the permutations of edges from \(s\) to \(p\) (and from \(t\) to \(p\) respectively) achieved by applying \(\phi_{A}^{mr}\). In particular, for \(x\in\operatorname{Letters}_{B}(s,p)(=\operatorname{Letters}_{B}(t,p))\), we have the cycle of \(\theta_{s}\) containing \(x\) has length \(u_{x}\) and the cycle of \(\theta_{t}\) containing \(x\) has length \(v_{x}\).
_Adding shadows_:
Apply Lemma 4.19 to the quadruple \((s,t,p,B)\) to obtain a strongly synchronizing automaton \(D\) and a corresponding automorphism \(\psi_{A}\) of the digraph \(G_{D}\) underlying \(D\) so that \(A\) is the minimal representative of \(H(D,\psi_{A})\), and the conclusions of Lemma 4.19 are satisfied. In particular, for any \(x\in\operatorname{Letters}_{B}(s,p)\) we have \(\operatorname{ol}_{\psi_{A}}((s,x,p))=\operatorname{ol}_{\psi_{A}}((t,x,p))=n\).
We now determine various constants arising from the construction so far.
By construction, \(r\) remains the length of the orbit of \(p\) in \(D\); however, the orbits of \(s\) and \(t\) have possibly been padded out with shadow states, so their lengths may have increased. Determine \(e\), \(f\) so that \(emr=\operatorname{lcm}(\operatorname{ol}_{\psi_{A}}(s),\operatorname{ol}_{ \psi_{A}}(p))\) and \(fmr=\operatorname{lcm}(\operatorname{ol}_{\psi_{A}}(t),\operatorname{ol}_{ \psi_{A}}(p))\).
As \(\operatorname{Letters}_{B}(s,p)=\operatorname{Letters}_{B}(t,p)=\operatorname{ Letters}_{D}(t,p)=\operatorname{Letters}_{D}(s,p)\) we will often use the notation \(\operatorname{Letters}\left(s|\!|t,p\right)\) for this set, although, we might use one of the other names if we
specifically wish to emphasise that we are considering the action of \(\phi_{A}\) or of \(\psi_{A}\) in that case. Set as well \(\zeta:=|\operatorname{Letters}\left(s|\!|t,p\right)|\).
_Determining constants and orbit blocks from \(\operatorname{Letters}_{D}(s,p)\) and \(\operatorname{Letters}_{D}(t,p)\)_:
As above, consider the permutation of \(\operatorname{Letters}_{D}(s,p)\) induced by applying \(\psi_{A}^{emr}\) to \(G_{D}\). Note that this permutation is \(\theta_{s}^{e}\). Similarly \(\psi_{A}^{fmr}\) induces \(\theta_{t}^{f}\) on \(\operatorname{Letters}_{D}(t,p)\). Set \(q_{e}\) to be the order of \(\theta_{s}^{e}\), so that \(\theta_{s}^{q_{e}e}\) is the identity permutation. Analogously define \(q_{f}\) to be the order of \(\theta_{t}^{f}\). Note in passing that \(q_{e}=n/(emr)\) is the number of times the orbit of an edge of the form \((s,x,p)\) intersects \(\operatorname{E}_{D}(s,p)\) under the action of \(\langle\psi_{A}\rangle\) (and that this number is independent from the choice of such edge), and similarly, \(q_{f}=n/(fmr)\) counts the cardinality of the intersection of the orbit of an edge of the form \((t,x,p)\) with \(\operatorname{E}_{D}(t,p)\) under the action of \(\langle\psi_{A}\rangle\) (again, independent of the choice of such an edge). It follows that all cycles of \(\theta_{s}^{e}\) have length \(q_{e}\) and that all the cycles of \(\theta_{t}^{f}\) have length \(q_{f}\).
We note that for all \(x\in\operatorname{Letters}_{D}(s,p)\) we have \(q_{e}e=\operatorname{lcm}(u_{x},e)\) and \(q_{f}f=\operatorname{lcm}(v_{x},f)\). Therefore, for any \(x\in\operatorname{Letters}\left(s|\!|t,p\right)\), we have \(q_{e}|u_{x}\) and \(q_{f}|v_{x}\). Further, as \(q_{e}emr=q_{f}fmr=n\) we have \(q_{e}e=q_{f}f\).
Let \(u=\gcd\{u_{x}:x\in\operatorname{Letters}\left(s|\!|t,p\right)\}\) and \(v=\gcd\{v_{x}:x\in\operatorname{Letters}\left(s|\!|t,p\right)\}\). As \(q_{e}|u_{x}\) and \(q_{f}|v_{x}\) for all \(x\in\operatorname{Letters}\left(s|\!|t,p\right)\), we see that \(q_{e}|u\) and \(q_{f}|v\). Let \(\overline{u}\) and \(\overline{v}\) be such that \(\overline{u}q_{e}=u\) and \(\overline{v}q_{f}=v\). It also follows that \(\operatorname{lcm}(u,e)=q_{e}e=q_{f}f=\operatorname{lcm}(v,f)\).
Let \(\tau\in\{s,t\}\) and set \(\sim_{\tau,p}\) to be the equivalence relation on the set of edges from \(\tau\) to \(p\), where two edges are equivalent under \(\sim_{\tau,p}\) if they are in the same orbit under \(\phi_{A}\). Let \(X(\tau,p)\) be a transversal for this equivalence relation. It follows that
\[\sum_{(s,x,p)\in X(s,p)}u_{x}=\sum_{(t,x,p)\in X(t,p)}v_{x}=|\operatorname{ Letters}\left(s|\!|t,p\right)|=\zeta\]
so in particular we see that both \(u\) and \(v\) divide \(\zeta\).
Set \(w:=\operatorname{lcm}(u,v)\). Since \(u\) and \(v\) divide \(\zeta\) we have that \(w|\zeta\). Let \(\alpha\) be such that \(\alpha w=\zeta\). Further, as \(u|w\) and \(v|w\) there are \(\mu,\nu\in\mathbb{N}\) such that \(w=\mu u=\mu\overline{u}q_{e}\) and \(w=\nu v=\nu\overline{v}q_{f}\). Further, \(q_{e}e=\operatorname{lcm}(u,e)|\operatorname{lcm}(w,e)\) and \(q_{f}f=\operatorname{lcm}(v,f)|\operatorname{lcm}(w,f)\). Since both \(u\) and \(v\) divide \(q_{e}e=q_{f}f\), it follows that \(w|q_{e}e\) and so \(\operatorname{lcm}(w,e)=q_{e}e=q_{f}f=\operatorname{lcm}(w,f)\). In particular, \(wmr|q_{e}emr=n\) and \(wmr|q_{f}fmr=n\) and we have \(\operatorname{lcm}(wmr,emr)=n=\operatorname{lcm}(wmr,fmr)\).
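As a sanity check on this bookkeeping, the small script below (our own illustration; the numerical values are an invented toy example, not data from the paper) verifies the chain of divisibility relations just derived for one consistent choice of constants.

```python
from math import gcd
from functools import reduce

def lcm(a, b):
    return a * b // gcd(a, b)

# Toy, internally consistent constants: r = ol(p) = 2, m = 1, n = 12, and
# after adding shadows e = f = 2, so q_e = q_f = n / (e*m*r) = 3.
m, r, n = 1, 2, 12
e, f = 2, 2
q_e, q_f = n // (e * m * r), n // (f * m * r)
u_xs, v_xs = [3, 6], [3, 3, 3]      # orbit multiplicities u_x, v_x over transversals

zeta = sum(u_xs)
assert zeta == sum(v_xs)            # both sums count |Letters(s||t, p)|
assert all(q_e * e == lcm(u, e) for u in u_xs)
assert all(q_f * f == lcm(v, f) for v in v_xs)

u = reduce(gcd, u_xs)
v = reduce(gcd, v_xs)
assert u % q_e == 0 and v % q_f == 0      # q_e | u and q_f | v
assert zeta % u == 0 and zeta % v == 0    # u and v divide zeta

w = lcm(u, v)
alpha = zeta // w                          # zeta = alpha * w
assert lcm(w, e) == q_e * e == q_f * f == lcm(w, f)
assert lcm(w * m * r, e * m * r) == n == lcm(w * m * r, f * m * r)
print("all divisibility relations hold:", u, v, w, alpha)
```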
_Building the relabelling_:
Our relabelling will be a relabelling through shadows which will create a pattern of labels along an edge orbit that repeats after every \(wmr\) steps under iteration of the new automorphism \(\psi_{A^{\prime}}\) of \(G_{D}\).
To be a relabelling through shadows we will need that, for any \(0\leq i<mr\) and any integer \(k\), the local map at \(s\psi_{A^{\prime}}^{i+kmr}\) agrees with the local map at \(s\psi_{A^{\prime}}^{i}\) (and similarly for the orbit of \(t\)).
We will now define a new labelling \(\lambda_{D}:X_{n}\times Q_{D}\to X_{n}\). Recall that a relabelling function is always induced from a vertex fixing automorphism of the underlying digraph of an automaton in the collapse sequence of the original. In our particular construction of \(\lambda_{D}\)
the reader will see that the automorphism employed in its creation is simply a vertex fixing automorphism of the underlying digraph \(G_{D}\) of \(D\).
Suppose \((l,x^{\prime},p^{\prime})\) is an edge of \(D\) which does not belong to the orbit of an edge \((\tau,x,p)\) for any pair \((\tau,x)\in\{s,t\}\times\operatorname{Letters}_{D}(\tau,p)\). In this case we set \((x^{\prime},l)\lambda_{D}=x^{\prime}\).
For the moment we focus on edges in the orbit of some edge \((s,x,p)\). The edges in the orbit of \((t,x,p)\) are dealt with analogously.
As \(|\operatorname{Letters}\left(s|\!|t,p\right)|=\zeta=w\alpha\) we may partition \(\operatorname{Letters}\left(s|\!|t,p\right)\) into \(\alpha\) blocks of size \(w\), which partition we organise through some labelling of the set's elements as follows:
\[\operatorname{Letters}\left(s|\!|t,p\right)=\{x_{i}^{c}\mid 0\leq i<w,0\leq c< \alpha\},\]
where here, the \(c^{th}\) part, denoted \(x^{c}\), has \(w\) elements arranged as the ordered sequence \((x_{i}^{c})_{0\leq i<w}\).
For \(0\leq i<mr\), \(q=s\psi_{A}^{i}\) and \(x\in X_{n}\) we define \((x,q)\lambda_{D}=x\). Let \(0\leq i<w\), \(0\leq c<\alpha\) and determine \(u_{i}^{c}\in X_{n}\) so that \((s,x_{i}^{c},p)\psi_{A}^{mr}=(s\psi_{A}^{mr},u_{i}^{c},p\psi_{A}^{mr})\). Now set \((u_{i}^{c},s\psi_{A}^{mr})\lambda_{D}:=x_{(i+1)\mod w}^{c}\).
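The pattern imposed on \(\operatorname{Letters}\left(s|\!|t,p\right)\) by this definition is simply a cyclic shift inside each block \(x^{c}\). The sketch below (our own illustration, with invented letter names) builds the corresponding order-\(w\) permutation \(x_{i}^{c}\mapsto x_{(i+1)\bmod w}^{c}\).

```python
def block_shift(letters, w):
    """Partition `letters` (length alpha*w) into consecutive blocks of size w
    and return the permutation sending the i-th letter of each block to the
    ((i+1) mod w)-th letter of the same block, as in the relabelling above."""
    assert len(letters) % w == 0
    shift = {}
    for c in range(len(letters) // w):
        block = letters[c * w:(c + 1) * w]
        for i, x in enumerate(block):
            shift[x] = block[(i + 1) % w]
    return shift

# e.g. with zeta = 6 letters and w = 3 the two blocks are shifted independently:
# block_shift("abcdef", 3) == {'a': 'b', 'b': 'c', 'c': 'a', 'd': 'e', 'e': 'f', 'f': 'd'}
```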
Now, the fact that we are relabelling through shadows determines the rest of the relabelling function \(\lambda_{D}\), as the local functions of \(H(G_{D},\psi_{A^{\prime}})\) have to agree for each shadow state with that occurring at the state being shadowed, that is, we need the local function of the transducer \(H(G_{D},\psi_{A^{\prime}})\) at any state \(s\psi_{A^{\prime}}^{a+mr}\) to agree with the local function at \(s\psi_{A^{\prime}}^{a}\) (where \(mr\) suffices as that is the orbit length of the pair \((s,p)\) in \(G_{B}\) under \(\phi_{A}\), and our relabelling only impacts the labels of edges in the orbit of edges from \(s\) to \(p\) in \(G_{D}\)).
The following inductive definition of \(\lambda_{D}\) enforces this agreement.
In particular, suppose \(mr<a<emr\) and for all \(x\in X_{n}\) and \(0\leq i<a\) we have \((x,s\psi_{A}^{i})\lambda_{D}\) defined. Suppose further that \((s\psi_{A}^{a-1},u,p\psi_{A}^{a-1})\) is an edge of \(G_{D}\), \(v\in X_{n}\) with \((s\psi_{A}^{a-1},u,p\psi_{A}^{a-1})\psi_{A}=(s\psi_{A}^{a},v,p\psi_{A}^{a})\), and that \(u^{\prime},v^{\prime},x,y\in X_{n}\) so that:
\[(u,s\psi_{A}^{a-1})\lambda_{D} =u^{\prime}\] \[(x,s\psi_{A}^{a-mr-1})\lambda_{D} =u^{\prime}\] \[(s\psi_{A}^{a-mr-1},x,p\psi_{A}^{a-mr-1})\psi_{A} =(s\psi_{A}^{a-mr},y,p\psi_{A}^{a-mr})\] \[(y,s\psi_{A}^{a-mr})\lambda_{D} =v^{\prime}\]
then set \((v,s\psi_{A}^{a})\lambda_{D}:=v^{\prime}\).
We define \(\lambda_{D}\) analogously on all edges in the orbit of an edge from \(t\) to \(p\).
We now determine \(\psi_{A^{\prime}}\) acting on \(G_{D}\) by the rules that \(\psi_{A^{\prime}}\) agrees with \(\psi_{A}\) on the vertices of \(G_{D}\), and if \((u,x,v),(u\psi_{A},y,v\psi_{A})\) are edges of \(G_{D}\) so that \((u,x,v)\psi_{A}=(u\psi_{A},y,v\psi_{A})\) then \((u,(x,u)\lambda_{D},v)\psi_{A^{\prime}}=(u\psi_{A^{\prime}},(y,u\psi_{A^{ \prime}})\lambda_{D},v\psi_{A^{\prime}})\).
Note in passing that while the orbit of any edge of \(G_{D}\) from \(s\) to \(p\) (or, from \(t\) to \(p\), respectively) has length \(n\) under \(\psi_{A^{\prime}}\), the pattern of labels taken by such an edge repeats every \(wmr\) steps, and so, as the vertex pair \((s,p)\) (resp. \((t,p)\)) is on an orbit of length \(emr\) (respectively \(fmr\)) and \(\operatorname{lcm}(wmr,emr)=n\) (resp. \(\operatorname{lcm}(wmr,fmr)=n\)) we see that in the induced automorphism \(\phi_{A^{\prime}}\) of \(G_{B}\) (so that \(A^{\prime}\) is represented by both \(H(G_{B},\phi_{A^{\prime}})\) and \(H(G_{D},\psi_{A^{\prime}})\)) the orbit of any edge from \(s\) or \(t\) to \(p\) is now on an orbit of length \(wmr\).
## Conjugate to an \(n\)-cycle
A circuit of length \(k\) in an automaton \(B\) is carried by \(k\) directed edges in \(G_{B}\) in some order \((e_{0},e_{1},\ldots,e_{k-1})\) where the target of \(e_{i}\) is the source of \(e_{i+1}\) (indices modulo \(k\)) for each index \(i\). In the next lemma, an automorphism \(\phi\) of \(G_{B}\) carries a circuit \(C\) to itself if and only if the image of each edge of \(C\) is itself under the automorphism. Specifically, a "rotation" of a circuit is not the circuit itself.
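With this convention the orbit length of a circuit is determined by its edges alone: the least \(j>0\) for which \(\phi^{j}\) fixes every edge of the circuit is the least common multiple of the orbit lengths of those edges. The sketch below (our own illustration; the edge encoding is an assumption) computes it in that way.

```python
from math import gcd
from functools import reduce

def edge_orbit_length(edge_map, e):
    """Orbit length of a single edge under the digraph automorphism."""
    f, length = edge_map[e], 1
    while f != e:
        f, length = edge_map[f], length + 1
    return length

def circuit_orbit_length(edge_map, circuit):
    """Orbit length of a circuit, given as a tuple of consecutive edges.
    Since a rotation of a circuit is not the circuit itself, this is the lcm
    of the orbit lengths of the circuit's edges."""
    lengths = [edge_orbit_length(edge_map, e) for e in circuit]
    return reduce(lambda a, b: a * b // gcd(a, b), lengths, 1)
```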
**Lemma 5.1**.: _Let \(A\in\mathcal{H}_{n}\) be an element of finite order. Let \(B\) be a strongly synchronizing automaton for which there is an automorphism \(\phi_{A}\) of the underlying digraph \(G_{B}\) of \(B\) with \(A\) the minimal representative of \(H(G_{B},\phi_{A})\). Then every point of \(X_{n}^{-\mathbb{N}}\) is on an orbit of length \(n\) under the action of \(A\) if and only if all circuits in \(B\) are on orbits of length \(n\) under the action of \(\phi_{A}\)._
Proof.: This proof follows straightforwardly from the observation that the orbits of circuits in \(B\) under the action of \(\phi_{A}\) correspond to the action of \(A\) on periodic points of \(X_{n}^{-\mathbb{N}}\). Now as periodic points are dense in \(X_{n}^{-\mathbb{N}}\), the following chain of equivalences is true: all points of \(X_{n}^{-\mathbb{N}}\) are on orbits of length \(n\) under the action of \(A\) if and only if all periodic points of \(X_{n}^{-\mathbb{N}}\) are on orbits of length \(n\) under the action of \(A\) if and only if all circuits of \(B\) are on orbits of length \(n\) under the action of \(A\).
**Lemma 5.2**.: _Let \(A\in\mathcal{H}_{n}\) be an element of finite order and let \(B\) be the minimal strongly synchronizing automaton such that \(A\) acts as an automorphism \(\phi_{A}\) of \(G_{B}\), the underlying digraph of \(B\). Suppose that all circuits in \(B\) are on orbits of length \(n\). Then \(A\) is conjugate to an \(n\)-cycle._
Proof.: We proceed by induction. In each iteration, we successively replace \(A\) with a conjugate \(C\) that acts as an automorphism of the underlying digraph of a smaller strongly synchronizing automaton.
By Lemma 4.6 we may assume, replacing \(A\) with a conjugate if necessary, that the amalgamation and collapse sequences of \(B\) cohere. Thus, a pair of states \(s,t\) which distribute similarly over \(Q_{B}\) satisfies \(\operatorname{Letters}(s,p)=\operatorname{Letters}(t,p)\) for all \(p\in Q_{B}\).
Suppose that \(|B|>1\) (as otherwise we are done). Since \(B\) is strongly synchronizing, we may find a pair of distinct states \(s,t\) which distribute similarly over \(Q_{B}\).
We consider two cases.
First suppose that \(s\) and \(t\) belong to the same orbit. Fix a state \(p\) for which there is an edge from \(s\) (and so from \(t\)) into \(p\).
We apply Lemma 4.19 to the triple \((s,t,p)\) to obtain a transducer \(D^{\prime\prime}\) and automorphism \(\psi^{\prime}_{A}\) such that there are no shadow states for elements in the orbit of \(p\), and edges from \(s\) and \(t\) into \(p\) are on orbits of length \(n\).
Now since there are no shadow states for elements in the orbit of \(p\) in \(D^{\prime\prime}\), we may repeatedly apply Lemma 4.19 to \(s,t\) and states in the orbit of \(p\) in turn, until we obtain an automaton \(D^{\prime}\) and automorphism \(\psi_{A}\) which has no shadow states for elements in the orbit of \(p\), such that \(\operatorname{Letters}_{D^{\prime}}(s,p\psi^{i}_{A})=\operatorname{Letters}_{D^{ \prime}}(t,p\psi^{i}_{A})=\operatorname{Letters}_{B}(s,p\phi^{i}_{A})\) for all \(i\in\mathbb{N}\), and such that any edge from \(s\) or \(t\) into a state in the orbit of \(p\) has orbit length \(n\).
Now we apply Lemma 4.7 to the automaton \(D^{\prime}\). This results in an automaton \(D\) and a conjugate automorphism \(\psi_{A^{\prime}}\) with the following properties. For \(r\) minimal such that, for any \(i,j\in\mathbb{N}\), \(\operatorname{Letters}_{D}(s\psi_{A^{\prime}}^{i},p\psi_{A^{\prime}}^{j})= \operatorname{Letters}_{D}(s\psi_{A^{\prime}}^{i+r},p\psi_{A^{\prime}}^{j+r})\), the labels of the edges \((s\psi_{A^{\prime}}^{i},x,p\psi_{A^{\prime}}^{j})\psi_{A^{\prime}}\) and \((s\psi_{A^{\prime}}^{i+r},x,p\psi_{A^{\prime}}^{j})\psi_{A^{\prime}}\) coincide. Notice that, by minimality of \(r\), shadow states remain \(\omega\)-equivalent to the state they shadow and \(t\) is an element of the orbit of \(s\) under the action of \(\psi_{A^{\prime}}^{r}\).
We now apply Lemma 4.17 to collapse down to the automaton \(B\) with a conjugate automorphism \(\phi_{A^{\prime}}\). The conjugate automorphism \(\phi_{A^{\prime}}\) has the following properties. For edges not belonging to the orbit of an edge from \(s\) or \(t\) into a state in the orbit of \(p\), the action of \(\phi_{A^{\prime}}\) coincides with the action of \(A\). By construction of \(D\), the labels of the edges \((s\phi_{A^{\prime}}^{i},x,p\phi_{A^{\prime}}^{j})\phi_{A^{\prime}}\) and \((t\phi_{A^{\prime}}^{i},x,p\phi_{A^{\prime}}^{j})\phi_{A^{\prime}}\) coincide for any \(i,j\in\mathbb{N}\).
We now repeat this process across all states of \(B\) which have an edge from \(s\). Thus we end up with a conjugate automorphism \(\psi_{C}\) such that the labels of the edges \((s\psi_{C}^{i},x,q)\psi_{C}\) and \((t\psi_{C}^{i},x,q)\psi_{C}\) coincide, for any state \(q\) and any incoming edge from \(s\) and \(t\) into \(q\) labelled \(x\).
Let \(B^{\prime}\) be the automaton obtained from \(B\) by identifying the pair of states \((s\psi_{C}^{i},t\psi_{C}^{i})\) for all \(i\in\mathbb{N}\). Since \(\psi_{C}\) induces the same action on labels for corresponding edges from elements \(s\psi_{C}^{i}\) and \(t\psi_{C}^{i}\), there is an induced action \(\phi_{C}\) of \(\psi_{C}\) on the underlying digraph of \(B^{\prime}\). That is, there is an element \(C\in\mathcal{H}_{n}\) which is a conjugate of \(A\) such that \(C\) is the minimal representative of \(H(B^{\prime},\phi_{C})\) and of \(H(B,\psi_{C})\).
Now consider the case that \(s,t\) belong to distinct orbits. We may assume that the orbit lengths of \(s\) and \(t\) coincide: otherwise we may find a state \(\tau\) distinct from \(s\) and \(t\) which has the same orbit length and agrees with one of \(s\) or \(t\) on \(Q_{B}\), whereby we apply the previous case to the pair \((s,\tau)\) or \((t,\tau)\).
Fix a state \(p\in Q_{B}\) with an edge from \(s\) (and so from \(t\)). We apply Lemma 4.20 to obtain a conjugate automorphism \(\phi_{A^{\prime\prime}}\) of the underlying digraph of \(B\) such that all edges from \(s\) and \(t\) into \(p\) have the same orbit length.
We repeat the process along all states of \(B\) with an incoming edge from \(s\). This yields a conjugate automorphism \(\phi_{A^{\prime}}\) of \(B\), whereby, for a given state \(q\in Q_{B}\) all edges from \(s\) and \(t\) into \(q\) have the same orbit length under \(\phi_{A^{\prime}}\).
We now repeatedly apply Lemma 4.9 to the quadruple \((s,t,B,\phi_{A^{\prime}})\) to obtain a conjugate automorphism \(\psi_{C}\) of \(B\) which satisfies the following. For any pair of edges \((s,x,q)\) and \((t,x,q)\), \((s,x,q)\psi_{C}\) and \((t,x,q)\psi_{C}\) have the same labels. This means we may once more identify the pair of states \((s\psi_{C}^{i},t\psi_{C}^{i})\) to obtain an action \(\phi_{C}\) of \(C\), the minimal representative of \(H(B,\psi_{C})\), on a smaller automaton \(B^{\prime}\).
If \(|C|>1\), then as \(C\) is conjugate to \(A\) we may now repeat the process with \(C\) instead of \(A\).
Eventually we end up with the single state transducer.
We recall that by Theorem 3.5, for an element \(A\in\mathcal{H}_{n}\) of finite order, there is a strongly synchronizing automaton \(B\) on which \(A\) acts as an automorphism \(\phi_{A}\) of the underlying digraph of \(B\) so that \(H(B,\phi_{A})\) has minimal representative \(A\).
**Theorem 5.3**.: _Let \(A\in\mathcal{H}_{n}\) be an element of finite order. Then \(A\) is conjugate to an \(n\)-cycle if and only if every element of \(X_{n}^{-\mathbb{N}}\) is on an orbit of length \(n\) under the action of \(A\) if and only if for any strongly synchronizing automaton \(B\) on which \(A\) acts as an automorphism \(\phi_{A}\) of the underlying digraph of \(B\), every circuit of \(B\) is on an orbit of length \(n\) under the action of \(A\)._
Proof.: The equivalences follow from Lemmas 5.1 and 5.2.
### An Example
In this section we work through an example that illustrates the key ideas of the proof.
Consider the automaton \(A\) of Figure 7 which we encountered already in Example 3.6.
This is an element of \(\mathcal{H}_{6}\) of order \(6\), where every point in the Cantor space \(X_{6}^{-\mathbb{N}}\) is on an orbit of length \(6\) under the action of \(A\). Following the construction in Subsection 3.3.1 (see Example 3.6), the minimal strongly synchronizing automaton \(B\) which admits an automorphism \(\phi_{A}\) of \(G_{B}\) such that \(H(B,\phi_{A})\) has minimal representative \(A\), is as depicted in Figure 8, where each drawn edge represents two edges with labels as listed; the map \(\phi_{A}\) on the vertices of \(G_{B}\) is the permutation which in cycle notation is
\[(p_{0}\;p_{1}\;p_{2})(q_{0}\;q_{1}\;q_{2});\]
the action of \(\phi_{A}\) on the vertices and edges of \(G_{B}\) is uniquely determined from the fact that \(A\) is the minimal representative of \(H(G_{B},\phi_{A})\). We refer to the vertices \(q_{0},q_{1},q_{2}\) as the vertices of the "inner triangle" and the vertices \(p_{0},p_{1},p_{2}\) as the vertices of the "outer triangle".
Figure 7: The element \(A\)
We notice that the automaton \(B\) has the property that its synchronizing and amalgamation sequences cohere. In particular both reduce to the single vertex with 6 looped edges after 2 steps.
The fact that every circuit of \(G_{B}\) is on an orbit of length 6 can be seen as follows. A circuit of \(G_{B}\) which is not formed by repeating the circuit (or a cyclic rotation of it) \(p_{0}\to p_{1}\to p_{2}\to p_{0}\) a finite number of times must have an edge leaving a vertex in the inner triangle -- any such edge has orbit length 6. On the other hand, the circuits formed by repeating \(p_{0}\to p_{1}\to p_{2}\to p_{0}\) are also on orbits of length 6, since the edges of the outer triangle are themselves on orbits of length 6.
Therefore \(A\) satisfies the hypothesis of Theorem 5.3. We work through Lemma 5.2 to find an element of \(\mathcal{H}_{6}\) which conjugates \(A\) to a 6-cycle.
Figure 8: Minimal automaton witnessing finite order of \(A\).
In the first step we find two states of \(G_{B}\) which can be collapsed, i.e. which distribute similarly over \(Q_{B}\). We may take the pair \((p_{0},q_{0})\) (any other valid pair belongs to the orbit of this one). The orbits of \(p_{0}\) and \(q_{0}\) are distinct so we are in the second case of Lemma 5.2. Now all edges leaving any vertex in the orbit of \(q_{0}\) have orbit length 6, whereas the edges from \(p_{0}\) to \(q_{0}\) and \(p_{0}\) to \(q_{2}\) have orbit length 3 while the edges from \(p_{0}\) to \(p_{1}\) have orbit length 6. Thus we apply Lemma 4.20. We add shadow states using Lemma 4.19. Focusing on the vertex \(q_{0}\) as our vertex \(p\) (in the notation of Lemma 4.19), we see that
\[Q(0,q_{0}) =\{q_{0}\}\] \[Q(1,q_{0}) =\{p_{0},p_{1}\}\] \[Q(2,q_{0}) =\emptyset.\]
The last follows since any incoming edge to a vertex on the outer triangle is on an orbit of length 6.
We may take either \(p_{0}\) or \(p_{1}\) as the heavy state (since they belong to the same orbit). Our divisibility constant is 3 (the orbit length of each of the two orbits of edges from the outer triangle into the inner triangle: the edge from \(p_{0}\) into \(q_{0}\) represents one such orbit, and the edge from \(p_{1}\) into \(q_{0}\) represents the other); the number \(n^{\prime}\) is precisely 6, since every incoming edge into either \(p_{0}\) or \(p_{1}\) has orbit length 6. (We note as an aside that since the edge from \(p_{1}\) to \(q_{0}\) is in the orbit of the edge from \(p_{0}\) to \(q_{2}\), we only need one round of adding shadow states in order to fix the orbit lengths of both of these edges; in other words, we need not consider \(p=q_{2}\) as a separate case).
Our new automaton \(B^{\prime}\) will have shadow states \(p^{\prime}_{0},p^{\prime}_{1}\) and \(p^{\prime}_{2}\), as depicted in Figure 9. There is a lift \(\psi_{A}\) of \(\phi_{A}\) to \(G_{B^{\prime}}\). The action of \(\psi_{A}\) is uniquely determined by the facts that the orbit of \(p_{0}\) under \(\psi_{A}\) is \((p_{0}\;p_{1}\;p_{2}\;p^{\prime}_{0}\;p^{\prime}_{1}\;p^{\prime}_{2})\) and \(H(B^{\prime},\psi_{A})\) has minimal representative \(A\).
Figure 9: Adding shadows to form \(B^{\prime}\).
We can now apply Lemma 4.20 to the orbit of the edge from \(p_{2}\) to \(q_{0}\) and from \(p_{1}\) to \(q_{0}\). Notice that since the orbit lengths of edges leaving the inner triangle are \(6\), the relabelling map of Lemma 4.20 will simply wrap around the orbits of the relevant edges from \(p_{2}\) and \(p_{1}\) to increase to \(6\) their orbit lengths after re-identifying shadow states. This can be achieved by relabelling such that the action on letters along the orbit of the edge \((p_{0},\{1,0\},q_{0})\) mirrors the action along the orbit of the corresponding edge \((q_{0},\{0,1\},q_{0})\) (similarly for the pair \((p_{1},\{1,0\},q_{0})\) and \((q_{1},\{1,0\},q_{0})\)). One such relabelling is that induced by the vertex fixing automorphism of \(G_{B^{\prime}}\) that swaps the edges from \(p_{1}\) to \(q_{1}\), the edges from \(p_{2}\) to \(q_{2}\), the edges from \(p^{\prime}_{0}\) to \(q_{0}\); the edges from \(p_{1}\) to \(q_{0}\), from \(p_{2}\) to \(q_{1}\) and from \(p^{\prime}_{0}\) to \(q_{2}\). This gives rise to the element \(C\) in Figure 10.
The reader can verify that the conjugate of \(A\) by \(C\) is the automaton \(D\) to the left of Figure 11.
The automaton \(E\) to the right of Figure 11 admits an automorphism \(\phi_{D}\) of its underlying digraph \(G_{E}\) such that \(H(E,\phi_{D})\) has minimal representative \(D\). The map \(\phi_{D}\) is uniquely determined by the fact that \(H(E,\phi_{D})\) has minimal representative \(D\). Notice that all edges of \(G_{E}\) are on orbits of length \(6\) and the collapse and amalgamation sequences of \(G_{E}\) coincide. Following Lemma 5.2, we find a pair of vertices which distribute similarly over \(Q_{E}\); any pair of distinct vertices works -- we choose \((a_{1},a_{3})\). Now we are in the first case of Lemma 5.2 and the relabelling protocol we apply is that given by Lemma 4.7. Essentially we want to relabel such that the actions of \(a_{1}\) and \(a_{3}\) on \(X_{6}\) coincide along their orbits. A relabelling that achieves this is obtained by swapping the edges between \(a_{1}\) and \(a_{5}\) and between \(a_{5}\) and \(a_{3}\). This relabelling gives rise to the conjugator \(F\) in Figure 12.
The reader can verify that conjugating \(D\) by \(F\) results in the single state transducer corresponding to the \(6\)-cycle \((0\;4\;2\;1\;5\;3)\).
Figure 10: Conjugator \(C\).
Therefore, the element \(CF\) of \(\mathcal{H}_{6}\) conjugates \(A\) to the 6-cycle \((0\;4\;2\;1\;5\;3)\).
|
2309.17134 | Promoting Generalized Cross-lingual Question Answering in Few-resource
Scenarios via Self-knowledge Distillation | Despite substantial progress in multilingual extractive Question Answering
(QA), models with high and uniformly distributed performance across languages
remain challenging, especially for languages with limited resources. We study
cross-lingual transfer mainly focusing on the Generalized Cross-Lingual
Transfer (G-XLT) task, where the question language differs from the context
language - a challenge that has received limited attention thus far. Our
approach seeks to enhance cross-lingual QA transfer using a high-performing
multilingual model trained on a large-scale dataset, complemented by a few
thousand aligned QA examples across languages. Our proposed strategy combines
cross-lingual sampling and advanced self-distillation training in generations
to tackle the previous challenge. Notably, we introduce the novel mAP@k
coefficients to fine-tune self-knowledge distillation loss, dynamically
regulating the teacher's model knowledge to perform a balanced and effective
knowledge transfer. We extensively evaluate our approach to assess XLT and
G-XLT capabilities in extractive QA. Results reveal that our self-knowledge
distillation approach outperforms standard cross-entropy fine-tuning by a
significant margin. Importantly, when compared to a strong baseline that
leverages a sizeable volume of machine-translated data, our approach shows
competitive results despite the considerable challenge of operating within
resource-constrained settings, even in zero-shot scenarios. Beyond performance
improvements, we offer valuable insights through comprehensive analyses and an
ablation study, further substantiating the benefits and constraints of our
approach. In essence, we propose a practical solution to improve cross-lingual
QA transfer by leveraging a few data resources in an efficient way. | Casimiro Pio Carrino, Carlos Escolano, José A. R. Fonollosa | 2023-09-29T10:54:59Z | http://arxiv.org/abs/2309.17134v1 | Promoting Generalized Cross-lingual Question Answering in Few-resource Scenarios via Self-knowledge Distillation
###### Abstract
Despite substantial progress in multilingual extractive Question Answering (QA), models with high and uniformly distributed performance across languages remain challenging, especially for languages with limited resources. We study cross-lingual transfer mainly focusing on the Generalized Cross-Lingual Transfer (G-XLT) task, where the question language differs from the context language -- a challenge that has received limited attention thus far. Our approach seeks to enhance cross-lingual QA transfer using a high-performing multilingual model trained on a large-scale dataset, complemented by a few thousand aligned QA examples across languages. We build our techniques upon the analysis of the cross-lingual transfer capabilities of a pre-trained multilingual BERT model fine-tuned on English-language SQuAD-v1.1. Our proposed strategy combines cross-lingual sampling and advanced self-distillation training in generations to tackle the previous challenge. Notably, we introduce the novel _mAP@k coefficients_ to fine-tune self-knowledge distillation loss, dynamically regulating the teacher's model knowledge to perform a balanced and effective knowledge transfer. We extensively evaluate our approach using various QA datasets, including MLQA, XQuAD, and TyDiQA-goldp, to assess XLT and G-XLT capabilities in extractive QA. Results reveal that our self-knowledge distillation approach outperforms standard cross-entropy fine-tuning by a significant margin. Importantly, when compared to a strong baseline that leverages a sizeable volume of machine-translated data, our approach shows competitive results despite the considerable challenge of operating within resource-constrained settings, even in zero-shot scenarios. Beyond performance improvements, we offer valuable insights through comprehensive analyses and an ablation study, further substantiating the benefits and constraints of our approach. In essence, we propose a practical solution to improve cross-lingual QA transfer by leveraging a few data resources in an efficient way.
## 1 Introduction
Significant advancements have been made in cross-lingual Question Answering (QA) in recent years, attributed to the emergence of powerful cross-lingual representations acquired through multilingual Pre-trained Language Models (PLMs) [1, 2, 3, 4, 5]. Transfer learning techniques have enriched Cross-lingual Transfer (XLT) capabilities by seamlessly enabling PLMs to operate across languages. Enhancing XLT for QA has witnessed a diverse array of methodologies, including zero-shot and few-shot transfer techniques [6, 7, 8, 9, 10], machine-translation and data augmentation strategies [11, 12, 8, 13, 14], as well as more sophisticated approaches such as meta-learning [7], leveraging knowledge graphs [15], employing contrastive learning [16], incorporating adversarial training [17], and defining auxiliary tasks for PLMs [18]. Concurrently, substantial efforts have been invested in the development of comprehensive and rigorous benchmarks that assess cross-lingual transferability in the context of the QA task [19, 9, 20, 21, 22, 23].
However, attaining cross-lingual QA performance characterized by high and uniformly distributed proficiency across languages remains a formidable challenge, especially for languages constrained by limited linguistic resources. Of particular significance is the aspiration to achieve generalized cross-lingual transfer (G-XLT), where the language of the posed question differs from the language of the answer. This avenue of inquiry is important for addressing the information scarcity and linguistic asymmetry affecting languages with limited resources [21], and gains relevance in situations where language mismatch could hinder effective information extraction [24]. However, this area of investigation remains largely unexplored, with only a handful of studies dedicated to it.
This study focuses on the enhancement of cross-lingual abilities for the extractive QA task, with particular emphasis on advancing the G-XLT capabilities. The methodology employed involves the transfer of QA knowledge extracted from a proficient multilingual QA model, which has undergone fine-tuning using a large-scale QA dataset in a high-resource language. Remarkably, we delve into attaining effective knowledge transfer by harnessing as few as a thousand QA examples aligned across languages to aid and promote the transfer. We commence by scrutinizing the zero-shot knowledge of an mBERT model fine-tuned on the SQuAD-v1.1 dataset in English, which provides essential insights for the design of our strategy. Then, we tackle the challenge by proposing a customized cross-lingual QA fine-tuning strategy that involves cross-lingual sampling and self-distillation training, a special case of knowledge distillation where the teacher model becomes the student model itself. Importantly, we introduce the _mAP@k coefficients_ to modulate the self-knowledge distillation loss. These coefficients dynamically regulate the influence of the teacher's cross-lingual knowledge during the fine-tuning process, thereby facilitating a balanced knowledge transfer. Ultimately, we conduct a comprehensive assessment of our methodology, employing a diverse array of QA benchmarks such as MLQA, XQuAD, and TyDiQA-goldp. Our objective is to scrutinize the extent of XLT and G-XLT capabilities that our approach demonstrates in the context of extractive QA. Additionally, our work provides valuable insights, supported by thorough analyses and an ablation study, emphasizing the strengths and limitations of our approach.
In summary, our study's key contributions are as follows:
* We introduce effective self-knowledge distillation techniques tailored for cross-lingual fine-tuning, utilizing aligned multilingual QA data and cross-lingual sampling to bolster knowledge transfer between languages.
* We propose the mAP@k loss coefficients to better handle wrong teacher predictions, making the cross-lingual transfer more robust and resulting in enhanced XLT and G-XLT performances, including zero-shot scenarios.
* We perform a comprehensive analysis and ablation study, unveiling the influence of distinct components and design choices to elucidate the underlying mechanisms behind our approach.
Therefore, our investigation lays the foundation for enhancing cross-lingual QA transfer efficacy in data-scarce scenarios. Moreover, we believe the introduction of the mAP@k coefficients may be of interest in knowledge distillation settings beyond QA applications.
## 2 Related Works
In this section, we describe the studies most similar to ours, which utilize fine-tuning techniques for extractive QA to enhance G-XLT and XLT performance on common benchmarks. In [8], the authors present the concept of Language Branch Machine Reading Comprehension (LBMRC), which employs language-specific branches to pair passages in a single language with questions in various other languages using machine translation, along with a novel multilingual multi-teacher distillation framework. This approach effectively improves cross-lingual performance and enhances robustness against noise in low-resource languages such as Arabic, Hindi, and Vietnamese. The proposed methodology achieves remarkable outcomes on the XQuAD [20] and MLQA [9] benchmarks for cross-lingual QA, both in translation and zero-shot scenarios, underscoring its efficacy in addressing the challenges associated with QA tasks. In [17], the authors enhance cross-lingual QA transfer by augmenting the training data 14-fold through machine translation, language adversarial training, and a Language Arbitration Framework (LAF). They empirically validate their approach on the MLQA [9] and TyDiQA [19] datasets, demonstrating significant improvements over the zero-shot baseline in [9] and highlighting its limitations. In contrast to these investigations, our approach relies solely on a few thousand cross-lingual QA examples, without a substantial machine-translated dataset, thus posing extra challenges. Additionally, we adopt a novel self-distillation approach that leverages customized mAP@k loss coefficients to further enhance the efficiency of the transfer learning procedure.
## 3 Solving the Extractive QA task
The extractive question-answering (QA) task involves a question \(q\), a context \(c\), and an answer \(a\) that corresponds to a span within the context \(c\). To solve the task, the standard method in [1] employs a classification layer on top of a transformer-based pre-trained encoder. First, the input question and the context are concatenated into a single sequence
and encoded into contextualized embeddings \(T_{k}\in\mathbb{R}^{h}\) of dimension \(h\). Then, for each context token \(k\), we compute the probabilities \(p_{k}^{start}\) and \(p_{k}^{end}\) of being the start and end token of the answer span \(a\), respectively, with a softmax over all the tokens \(T_{m}\) in the context:
\[\begin{split} p_{k}^{start}&=softmax(S\cdot T_{k};t)=\frac{e^{S\cdot T_{k}/t}}{\sum_{m}e^{S\cdot T_{m}/t}}\\ p_{k}^{end}&=softmax(E\cdot T_{k};t)=\frac{e^{E\cdot T_{k}/t}}{\sum_{m}e^{E\cdot T_{m}/t}}\end{split} \tag{1}\]
Here, \(T_{k}\) is the contextualized embedding of token \(k\), \(S\in\mathbb{R}^{h}\) and \(E\in\mathbb{R}^{h}\) are the start and end vectors representing the trainable parameters of the classification layer, and \(t\) is the temperature of the softmax function. Then, for each example, the total loss to optimize is the sum over the context tokens \(i\in\{1,\dots,N\}\) and over the start and end positions of the cross-entropy (CE) between the ground-truth labels \(\{a_{i}^{l}\}\) and the model probabilities \(\{p_{i}^{l}\}\), as follows:
\[L_{ce}=-\sum_{l=start}^{end}\sum_{i=1}^{N}a_{i}^{l}\,log(p_{i}^{l})=\sum_{l=start}^{end}\sum_{i=1}^{N}CE(a_{i}^{l},p_{i}^{l}) \tag{2}\]
Following training, the model identifies the answer by selecting the span defined by the start and end tokens with the highest probabilities, excluding candidate spans whose end token is predicted before the start token.
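To make the span selection concrete, the following sketch (a minimal PyTorch illustration of the procedure just described, not the authors' code; the `max_answer_length` cut-off and variable names are our own assumptions) scores all valid spans from the start/end probabilities of Equation 1 and returns the best one.

```python
import torch

def best_span(start_logits, end_logits, max_answer_length=30, temperature=1.0):
    """Pick the highest-probability valid answer span for a single example.

    start_logits, end_logits: 1-D tensors with the scores S·T_k and E·T_k
    for every context token k.
    """
    p_start = torch.softmax(start_logits / temperature, dim=-1)  # Equation 1
    p_end = torch.softmax(end_logits / temperature, dim=-1)

    seq_len = p_start.size(0)
    # scores[i, j] = p_start[i] * p_end[j] for a span starting at i and ending at j.
    scores = p_start.unsqueeze(1) * p_end.unsqueeze(0)

    # Keep only spans with start <= end and length <= max_answer_length.
    upper = torch.triu(torch.ones(seq_len, seq_len))
    too_long = torch.triu(torch.ones(seq_len, seq_len), diagonal=max_answer_length)
    scores = scores * (upper - too_long)

    flat_idx = scores.argmax().item()
    start_idx, end_idx = divmod(flat_idx, seq_len)
    return start_idx, end_idx, scores[start_idx, end_idx].item()

# Toy usage with random logits over a 20-token context.
torch.manual_seed(0)
print(best_span(torch.randn(20), torch.randn(20)))
```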
## 4 Proposed Approach
In this section, we introduce our self-distillation approach to achieve cross-lingual QA performance by transferring the QA knowledge from a high-resource language, such as English, to multiple low-resource target languages. We initiate our study by analyzing the zero-shot cross-lingual performance on the extractive QA task to assess its limitations and gain valuable insights that form the basis of our work. Subsequently, we introduce a tailored cross-lingual sampling method that leverages parallel QA datasets aligned across multiple languages to foster the development of robust G-XLT capabilities. Importantly, we propose self-knowledge distillation fine-tuning of a pre-trained model, endowing the loss function with the mAP@k loss coefficients to balance the contributions of the loss terms.
### A close look at Zero-shot Cross-lingual Transfer
We choose the widely employed mBERT [25] model as our baseline and utilize the SQuAD v1.1 dataset [26], a well-established high-resource extractive QA dataset in English. Specifically, we fine-tune the mBERT model on SQuAD v1.1 with the standard approach described in Section 3. We fine-tune for 3 epochs, with a learning rate of 3e-5 and a batch size of 24. The rest of the hyperparameters are set to their default values1 as implemented in the popular Hugging Face Transformers library [27]. We refer to the resulting model as the mBERT-qa-en model.
Footnote 1: This standard configuration of the hyperparameters aims to provide a standard reference point for comparison and ensures consistency with previous research.
**Measuring the G-XLT performance.** As stated in the Introduction, our primary objective is to achieve cross-lingual extractive QA performance that surpasses the limitations of the same-language setting, particularly in scenarios where questions and the corresponding contextual information are expressed in different languages. To assess this capability, we commence by measuring the generalized cross-lingual transfer (G-XLT) performance of the mBERT-qa-en model on the MLQA benchmark [9]. Results on the dev and test splits, as illustrated in Figure 1, reveal a high degree of transferability for languages closely related to English, such as Spanish and German, with a maximum F1 score of 71.0. However, when confronted with less similar languages, particularly in cases where the language of the question differs from the language of the context, the performance diminishes significantly, reaching a minimum F1 score as low as 29.7. In Appendix 10, we show similar patterns in the corresponding Exact Match (EM).
**Hidden Knowledge in the Top-k Answer Predictions.** To further measure the extent of cross-lingual knowledge embedded within the mBERT-qa-en model, we analyze the quality of the top-\(k\) answer predictions, partially building upon the methodology in [16]. Specifically, we quantify the number of predictions among the top 10 ranked answers that are correct, indicating the presence of hidden knowledge that may be advantageous for subsequent cross-lingual fine-tuning. We then calculate the distribution of these predictions across their respective ranks within the top 10 positions. Moreover, we group the predictions by the language of the corresponding questions to gain insights into their distribution across different languages. As shown in Table 1, our findings highlight the presence of a significant number of correct predictions hidden beyond the top-1 position, exhibiting a decreasing trend towards the tenth position.
On one hand, these initial findings in the zero-shot scenario validate prior research on cross-lingual QA knowledge acquisition by multilingual models fine-tuned with English data, such as mBERT, which tends to prioritize languages closely related to English over more distant ones. Nevertheless, despite the suboptimal performance in languages other than English, our analysis reveals the presence of valuable cross-lingual knowledge within the top 10 ranked positions, which holds promise for enhancing cross-lingual transfer. In the forthcoming sections, we put together various ideas and methodologies rooted in these observations, with the aim of enhancing generalized cross-lingual QA transfer.
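The counting behind Table 1 can be sketched as follows (our own simplified illustration: it uses exact string matching of candidate answers, whereas the paper's evaluation relies on the MLQA normalization script; the data-structure names are assumptions):

```python
from collections import Counter

def topk_hidden_knowledge(predictions, gold_answers, question_langs, k=10):
    """Count, per question language and rank, how often a correct answer appears
    among the top-k ranked predictions (cf. Table 1).

    predictions: dict example_id -> list of candidate answer strings, best first.
    gold_answers: dict example_id -> set of acceptable gold answer strings.
    question_langs: dict example_id -> language code of the question.
    """
    counts = Counter()
    for ex_id, candidates in predictions.items():
        lang = question_langs[ex_id]
        for rank, answer in enumerate(candidates[:k], start=1):
            if answer in gold_answers[ex_id]:
                counts[(lang, rank)] += 1
                break  # only the best-ranked correct answer is counted
    return counts

# Toy usage: one Spanish question whose correct answer is ranked 3rd.
preds = {"q1": ["1947", "in 1950", "1945"]}
golds = {"q1": {"1945"}}
langs = {"q1": "es"}
print(topk_hidden_knowledge(preds, golds, langs))  # Counter({('es', 3): 1})
```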
### Cross-lingual Sampling
Recalling the extractive QA task of Section 3, we denote an example as \(x_{i,j}=(q_{i},(c_{j},a_{j}))\), a notation suitable for cross-lingual applications: the first index \(i\) indicates the language of the question \(q\), and the second index \(j\) indicates the language of the context \(c\) (and hence the language from which the answer \(a\) is extracted). Assuming the availability of a parallel QA dataset, where examples are aligned across multiple languages, usually via a translation process, we construct semantically equivalent QA examples by simply mixing question and context languages.
| top \(k\) | en | es | de | ar | vi | hi | zh |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | 1773 | 720 | 684 | 492 | 525 | 442 | 592 |
| 2 | 460 | 215 | 195 | 136 | 175 | 144 | 170 |
| 3 | 272 | 99 | 106 | 82 | 98 | 89 | 94 |
| 4 | 188 | 86 | 89 | 57 | 55 | 72 | 77 |
| 5 | 135 | 60 | 43 | 68 | 48 | 47 | 61 |
| 6 | 92 | 34 | 54 | 28 | 42 | 35 | 44 |
| 7 | 68 | 42 | 32 | 45 | 46 | 31 | 30 |
| 8 | 68 | 28 | 36 | 15 | 24 | 23 | 38 |
| 9 | 54 | 25 | 22 | 30 | 32 | 30 | 24 |
| 10 | 38 | 28 | 26 | 21 | 27 | 33 | 27 |

Table 1: Number of correct predictions on the MLQA-dev dataset distributed within the top-10 positions and across different question languages.
Figure 1: Zero-shot F1 performance of the mBERT-qa-en for the G-XLT evaluation on the MLQA dev and test datasets.
Formally, given a set of \(N_{seed}\) seed examples in a source language and their translations into \(N_{tl}\) target languages, we randomly sample \(ntl\) target languages and generate all possible cross-lingual question-context combinations for each example. With this approach, the total number of sampled cross-lingual examples grows quadratically with the number of sampled target languages, following the formula:
\[N_{cross-lingual}=N_{seed}\cdot(1+ntl)^{2} \tag{3}\]
Our approach leads to a quadratic increase in the number of potentially relevant examples, allowing for cross-lingual fine-tuning with mixed question and context languages across \(N_{tl}\) target languages. Consequently, our method encourages generalized cross-lingual performance directly at the level of the data. Note that we anticipate a certain degree of redundancy as the number of sampled target languages \(ntl\) increases, potentially leading to overfitting. Therefore, in the forthcoming sections, we conduct experiments by varying \(ntl\) and analyze its impact on model performance.
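The sampling procedure can be illustrated with the following sketch (our own rendering, assuming English as the source language and a dictionary-per-example representation of the aligned translations). With \(N_{seed}=1190\) XQuAD examples and \(ntl=5\), it yields \(1190\cdot(1+5)^{2}=42{,}840\) cross-lingual examples, matching the value reported in Table 2.

```python
import random
from itertools import product

def cross_lingual_sample(aligned_examples, target_languages, ntl, source_lang="en", seed=0):
    """Build cross-lingual QA examples by mixing question and context languages.

    aligned_examples: list of dicts mapping a language code to the tuple
                      (question, context, answer) of the same seed example.
    target_languages: candidate target language codes (source language excluded).
    ntl: number of target languages to sample.
    """
    rng = random.Random(seed)
    languages = [source_lang] + rng.sample(target_languages, ntl)

    dataset = []
    for example in aligned_examples:
        # All (1 + ntl)^2 question/context language combinations (Equation 3).
        for q_lang, c_lang in product(languages, repeat=2):
            question, _, _ = example[q_lang]
            _, context, answer = example[c_lang]
            dataset.append({"q_lang": q_lang, "c_lang": c_lang,
                            "question": question, "context": context, "answer": answer})
    return dataset
```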
### Self-knowledge Distillation Objective
Our work builds on knowledge distillation techniques [28] to perform effective cross-lingual transfer for the extractive QA task. Specifically, we employ self-distillation in generations [29; 30; 31]. Here, the teacher and student models share identical architectures and sizes, and the teacher's parameters are synchronized with the student's parameters after specific steps or epochs while remaining fixed in between.
Following Equation 1, we indicate with \(\hat{p}^{start}_{k}\) and \(\hat{p}^{end}_{k}\) the start and end probabilities of the teacher model (also referred to as soft labels), and with \(a^{start}_{k}\) and \(a^{end}_{k}\) the ground-truth labels. We then define a knowledge distillation term expressed by the Kullback-Leibler (KL) divergence between the teacher's soft labels \(\{\hat{p}^{l}_{i}\}\) and the student's probabilities \(\{p^{l}_{i}\}\). Hence, for each QA example, the total loss is a linear combination of the sums over the context tokens \(i\in\{1,\dots,N\}\) and the start and end positions \(l\in\{start,end\}\) of the cross-entropy and KL terms, as follows:
\[\begin{split} L_{skd}&=-\,\alpha_{ce}\sum_{l=start}^{end}\sum_{i=1}^{N}a^{l}_{i}\,log(p^{l}_{i})+\alpha_{kl}\sum_{l=start}^{end}\sum_{i=1}^{N}\hat{p}^{l}_{i}\,log\Big(\frac{\hat{p}^{l}_{i}}{p^{l}_{i}}\Big)\\ &=\alpha_{ce}\sum_{l=start}^{end}\sum_{i=1}^{N}CE(a^{l}_{i},p^{l}_{i})+\alpha_{kl}\sum_{l=start}^{end}\sum_{i=1}^{N}KL(\hat{p}^{l}_{i},p^{l}_{i})\end{split} \tag{4}\]
The \(\alpha_{ce}\) and \(\alpha_{kl}\) are hyperparameters that weight the contribution of each loss term independently. Finally, in our self-distillation-in-generations implementation, we update the teacher's parameters with the student's parameters after each epoch, as follows:
\[\begin{cases}\{\hat{p}^{l}_{i}\}=\{p^{l}_{i}\},&\text{at each epoch}\\ \{\hat{p}^{l}_{i}\}\neq\{p^{l}_{i}\},&\text{between epochs}\end{cases}\]
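A minimal sketch of the resulting objective and the per-epoch teacher synchronization is given below (PyTorch-style pseudocode written by us, not the released implementation; the temperature default of 2 follows the value used later in the analysis, and the KL term is assumed to be computed on temperature-softened distributions; the helper name `sync_teacher` is our own).

```python
import torch.nn.functional as F

def skd_loss(student_logits, teacher_logits, gold_positions,
             alpha_ce=1.0, alpha_kl=1.0, temperature=2.0):
    """Self-knowledge distillation loss of Equation 4, summed over start and end.

    student_logits / teacher_logits: tuples (start_logits, end_logits),
        each of shape (batch, seq_len).
    gold_positions: tuple (start_positions, end_positions), each of shape (batch,).
    """
    loss = 0.0
    for s_logits, t_logits, gold in zip(student_logits, teacher_logits, gold_positions):
        # Hard-label cross-entropy term (Equation 2).
        ce = F.cross_entropy(s_logits, gold)
        # Soft-label term: KL(teacher || student) on softened distributions.
        log_p_student = F.log_softmax(s_logits / temperature, dim=-1)
        p_teacher = F.softmax(t_logits / temperature, dim=-1)
        kl = F.kl_div(log_p_student, p_teacher, reduction="batchmean")
        loss = loss + alpha_ce * ce + alpha_kl * kl
    return loss

def sync_teacher(student, teacher):
    """Self-distillation in generations: at the end of every epoch the teacher is
    overwritten with the current student weights and kept frozen in between."""
    teacher.load_state_dict(student.state_dict())
    for p in teacher.parameters():
        p.requires_grad_(False)
```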
### mAP@k Coefficients to Adjust Knowledge Transfer
In Equation 4, we formulate the total loss as a linear combination with coefficients \(\alpha_{ce}\) and \(\alpha_{kl}\), which serve as hyperparameters to weigh the contributions of the KL and CE loss terms. The proper configuration of these coefficients is essential for achieving an optimal learning process. Although various approaches for knowledge distillation have been developed in different deep learning applications [32; 33], finding a dynamic method suitable for general cases remains an open problem. In this study, we propose a customized technique specifically tailored for the extractive QA task based on the _mAP@k coefficients_. Our approach dynamically adjusts the loss coefficients for each batch of examples to address the issue of incorrect or uninformative teacher predictions that may lead to weak student training. We base our technique on the insights gained from the analysis conducted in Section 4.1. Specifically, we observe that the teacher model's degree of cross-lingual QA knowledge is not evenly distributed across different combinations of question-and-answer languages. Additionally, we find that a significant number of correct predictions are spread among the top-10 ranked positions. The uneven and poorly ranked cross-lingual knowledge of the teacher model can result in inaccurate predictions, negatively impacting the performance of the student model. In addressing this challenge, our objective is to create a dynamic formula for selecting teacher-generated predictions that exhibit acceptable cross-lingual knowledge. Subsequently, we enhance these selected predictions by incorporating the cross-entropy term derived from the ground-truth hard labels. We accomplish this task by adopting an information retrieval perspective. Specifically, we
introduce a heuristic criterion for identifying the relevance of a teacher's prediction: it is deemed relevant if it falls within a predefined interval around the actual ground truth position. Consequently, we utilize the count of relevant predictions within the top \(k\) predictions as a surrogate measure for assessing the quality of the teacher's probability distribution. This assessment, in turn, guides the weighting of the corresponding loss term. We visualize an example of this interval for the probability distribution on the start position in Figure 2.
More formally, for each QA example, and each start and end position over the context, we calculate the well-established Mean Average Precision at k (mAP@k) metric to define the \(\alpha_{kl}\) coefficient, as provided by the equation below:
\[\alpha_{kl}^{l}=\frac{\sum_{i=1}^{N}AP@k_{i}}{N}=\frac{1}{N}\sum_{i=1}^{N}\frac{\sum_{j=1}^{k}P(j)\,\delta(j\in l\pm\Delta_{l})}{2\Delta_{l}} \tag{5}\]
In the above equation, \(N\) denotes the total number of examples, \(k\) is the rank of predictions at which we stop, \(P(j)\) represents the precision at the cut-off position \(j\), and the indicator function \(\delta(j\in l\pm\Delta_{l})\) takes the value of 1 if the prediction at position \(j\) falls within an interval of length \(2\Delta_{l}\) centred around the ground-truth start or end position \(l\). By applying these dynamic coefficients to the self-knowledge distillation loss of Equation 4, for each QA example, we compute the total loss as follows:
\[L_{skd}^{mAP@k}=\sum_{l=start}^{end}\sum_{i=1}^{N}CE(a_{i}^{l},p_{i}^{l})+\alpha_{kl}^{mAP@k}\sum_{l=start}^{end}\sum_{i=1}^{N}KL(\hat{p}_{i}^{l},p_{i}^{l}) \tag{6}\]
Here, \(\alpha_{kl}^{mAP@k}\) is computed as the average of the \(\alpha_{kl}^{start}\) and \(\alpha_{kl}^{end}\) coefficients, and \(\alpha_{ce}\) is set to 1. The relevance of the aforementioned equation lies in the dynamic scaling of the self-knowledge distillation term, which is directly proportional to the quality of the teacher's predictions. Consequently, higher coefficients result in a more substantial contribution to the overall loss. We assert that employing the mAP@k metric allows for a more accurate assessment of the quality of the teacher's distribution, which is crucial for optimizing cross-lingual transfer efficiency.
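A sketch of how the coefficient of Equation 5 can be computed per batch is shown below (our own illustration; the interval half-width `delta` and the cut-off `k=10` are assumptions for demonstration, since the paper only states that \(\Delta\) spans several tokens and that the top-10 predictions are considered).

```python
def ap_at_k(ranked_positions, gold_position, delta=5, k=10):
    """Average precision at k for one example and one position (start or end).

    A ranked prediction is 'relevant' if it falls within ±delta tokens of the
    ground-truth position, following the indicator function of Equation 5.
    """
    num_relevant = 0
    precision_sum = 0.0
    for j, pos in enumerate(ranked_positions[:k], start=1):
        if abs(pos - gold_position) <= delta:
            num_relevant += 1
            precision_sum += num_relevant / j   # precision P(j) at cut-off j
    return precision_sum / (2 * delta)          # normalization as in Equation 5

def alpha_kl_map_at_k(ranked_starts, gold_starts, ranked_ends, gold_ends, delta=5, k=10):
    """Batch-level coefficient: mean AP@k over examples, averaged over start/end."""
    per_example = [
        0.5 * (ap_at_k(rs, gs, delta, k) + ap_at_k(re, ge, delta, k))
        for rs, gs, re, ge in zip(ranked_starts, gold_starts, ranked_ends, gold_ends)
    ]
    return sum(per_example) / len(per_example)

# Toy usage: three of the top-10 predicted start positions lie near the gold start (token 42).
print(ap_at_k([40, 120, 43, 44, 7, 90, 2, 150, 60, 30], gold_position=42))  # ≈ 0.24
```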
## 5 Experimental Setting
We provide details of the experimental setting encompassing the datasets, the training details and the evaluations conducted.
### Datasets
For our investigation into cross-lingual transfer in the extractive QA task and the evaluation of our models across diverse languages, we utilized three well-suited datasets.
Figure 2: Probability distribution for the start position of the mBERT model on an example with a low F1 score. The figure shows a \(\Delta\) interval of several tokens around the ground-truth start position. As expected, the peaks of the distribution are not located around the ground truth, thus producing a low F1 score.
**XQuAD.** We incorporate the XQuAD dataset [20], consisting of 240 paragraphs and 1,190 question-answer pairs from SQuAD v1.1 [34] translated into ten languages: Spanish, German, Greek, Russian, Turkish, Arabic, Vietnamese, Thai, Chinese, and Hindi. These translations were performed by professional human translators.
**MLQA.** The MLQA benchmark [9] includes QA instances in seven languages: English, Arabic, German, Spanish, Hindi, Vietnamese, and Simplified Chinese. It comprises over 12,000 instances in English and approximately 5,000 instances in each of the other languages, with an average of four language pairs for each instance. The dataset is divided into development and test splits, consisting of 4,199 and 41,244 examples, respectively. The creation of MLQA involved a meticulous process encompassing professional translation and human annotation. It consists of three steps: parallel sentence mining, English QA annotation, and target language QA annotation. MLQA aims to drive research in cross-lingual QA and bridge the gap between training and testing language performance.
**TyDiQA-goldp.** TyDiQA [19] is a comprehensive question-answering dataset that spans 11 typologically diverse languages and includes three tasks: Passage Selection (SelectP), Minimal Answer Span (MinSpan), and Gold Passage (GoldP). For our study, we focus on the Gold Passage task (GoldP), which involves predicting the contiguous span of characters that answer the question when a passage containing the answer is provided. This task enables comparison with prior works and allows compatibility with existing datasets for extractive QA. The dataset aims to evaluate the models' ability to generalize across a wide range of languages by including linguistic phenomena that are not typically found in English-only corpora. Additionally, the dataset is collected directly in each language, without relying on translations.
In our work, we use the XQuAD for training, selecting the languages present in MLQA. The MLQA-dev benchmark serves as the development set for model selection and hyperparameter tuning. Finally, we employ the MLQA-test, the remaining portion of XQuAD, and TyDiQA-goldp for evaluation. This selection is based on several considerations:
* The XQuAD dataset, with a limited number of samples (1,190 examples), is suitable for exploring training with a restricted number of aligned labelled examples across multiple languages. Since it is fully aligned across a wide range of languages, it represents a suitable choice to apply our cross-lingual fine-tuning approach. Additionally, its creation process is straightforward, involving the translation of the original SQuAD dataset without requiring annotation.
* The MLQA-test dataset provides an ideal evaluation framework for assessing cross-lingual question-answering performance. It supports both the standard cross-lingual transfer (XLT) task and the more challenging generalized cross-lingual transfer (GXLT) task, which involves different question and context languages. Importantly, to our knowledge, the MLQA-dev set has not been used by the community for cross-lingual QA transfer experiments, even though it is valuable for hyperparameter optimization and model selection as emphasized by the dataset authors.
* The TyDiQA-goldp dataset provides an excellent zero-shot testbed for thoroughly evaluating the generalization capabilities of our cross-lingual fine-tuning approach on languages not encountered during training. Importantly, it avoids the use of translation, better reflecting the characteristics of the languages and reducing the potential exploitation of translation artifacts to enhance system performance, as discussed in [35].
### Training
The training process is divided into two phases, namely: 1) large-scale QA fine-tuning, involving the use of numerous QA examples to imbue the model with proficient QA capabilities, and 2) cross-lingual QA fine-tuning, designed to enhance and amplify cross-lingual QA transfer by utilizing a few labelled QA examples that are aligned across multiple languages. In the first phase, we fine-tune the widely adopted multilingual language model mBERT [1] for the extractive QA task. We solve the task as in Section 3 and implement the loss function defined in Equation 4, setting the coefficients \(\alpha_{ce}\) and \(\alpha_{kl}\) to 1. The optimization is performed using stochastic gradient descent with the Adam optimizer [36], utilizing a learning rate of \(10^{-3}\), a batch size of 12, and a maximum sequence length of 384 tokens. The training is conducted over 3 epochs.
In the second phase, we continue fine-tuning the previously trained model, named mBERT-qa-en, by leveraging the QA data sampled from the XQuAD dataset with the cross-lingual sampling described in Section 4.2, obtaining a total number of cross-lingual examples \(N_{cross-lingual}\) that follows Equation 3. We maintain the same hyperparameters as in the previous phase and evaluate the model's performance on the MLQA-dev set. We assess the average cross-lingual transfer performance across all languages by computing both the XLT and G-XLT F1 and EM metrics on the concatenation of all cross-lingual examples in the MLQA-dev split. Model selection is then performed by choosing the set of hyperparameters that yields the highest G-XLT F1 score.
### Evaluation
Our evaluation is categorized into two types: in-language, which pertains to languages utilized during cross-lingual fine-tuning, and zero-shot, which, as the name implies, encompasses languages that were not encountered during the training phase. In the in-language evaluation, we use the MLQA languages to evaluate the XLT and G-XLT tasks. We recall that the former involves examples with the same question and context language, while the latter uses examples with different question and context languages. We compute the standard F1 score using the official MLQA evaluation script2 to properly handle language-specific modifications. In the zero-shot evaluation, we employ the TyDiQA-goldp dataset and the remaining XQuAD languages that we do not utilize during training.
Footnote 2: [https://github.com/facebookresearch/MLQA/blob/main/mlqa_evaluation_v1.py](https://github.com/facebookresearch/MLQA/blob/main/mlqa_evaluation_v1.py)
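The distinction between the two metrics can be summarized in a short sketch (our own illustration of the aggregation, following the definitions above; the input format is an assumption):

```python
from statistics import mean

def xlt_gxlt_scores(f1_by_pair):
    """Aggregate per-language-pair F1 scores into XLT and G-XLT averages.

    f1_by_pair: dict mapping (question_lang, context_lang) -> average F1 over
    that pair's examples. XLT averages the same-language pairs; G-XLT averages
    the pairs whose question and context languages differ.
    """
    xlt = mean(f1 for (q, c), f1 in f1_by_pair.items() if q == c)
    gxlt = mean(f1 for (q, c), f1 in f1_by_pair.items() if q != c)
    return xlt, gxlt

# Toy usage with two languages.
pairs = {("en", "en"): 68.0, ("en", "es"): 63.0, ("es", "en"): 62.0, ("es", "es"): 65.0}
print(xlt_gxlt_scores(pairs))  # (66.5, 62.5)
```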
#### 5.3.1 Baselines
To ensure a direct comparison with our models, we consider as pertinent baselines the methods that involve the mBERT model, applying the same task resolution and evaluating the XLT and G-XLT tasks for the in-language and zero-shot settings. Thus, we exclude approaches that add other variables in terms of training techniques or models. Therefore, to the best of our knowledge, we identify only two baselines that match these criteria. The first baseline is our implementation of the mBERT-qa-en used as a zero-shot baseline, as proposed in [9], to point out the gain in performance when cross-lingual fine-tuning is used. Moreover, we adopt as a strong baseline the best-performing mBERT model in [17]3. These models were tailored for cross-lingual QA through fine-tuning of the mBERT model. Subsequently, they are evaluated on both the MLQA dataset and the TyDiQA-goldp benchmark to assess their performance on the XLT and G-XLT tasks, spanning both in-language and zero-shot settings.
Footnote 3: We refer to the model obtained with the LAF, PSA+QS (en-all) method, as reported in the original paper.
## 6 Results and Discussions
Following the setting introduced in the previous section, we compare the best model configurations on the development set of the MLQA dataset for the standard cross-entropy and self-distillation training, as described in Section 4. Since we are primarily interested in generalized cross-lingual QA performance, we consider as best models the ones obtaining the highest F1 score on the G-XLT metric.
### In-Language Performance
We present the results in various tables and figures, providing both average scores, obtained by taking the average performance across questions in all available languages for each answer language, and single-language scores, using examples with the same question and answer language. Tables 2 and 3 demonstrate the superiority of our distillation methods over the standard cross-entropy fine-tuning approach. The distillation methods exhibit a substantial improvement of more than 2 F1 points in the XLT and G-XLT scores for both the MLQA dev and test sets. Compared to the zero-shot baseline, the distillation methods achieve a remarkable gain of over 9 F1 points in the G-XLT scores for both the dev and test sets of MLQA. Crucially, the results highlight the benefits of utilizing the mAP@k technique to balance the loss terms, leading to improved average cross-lingual scores. The distillation approaches consistently outperform the cross-entropy and zero-shot baselines at the single-language level. Importantly, we are able to achieve more than 90% of the F1 score of the strong baseline established in [17] on the MLQA-test set despite utilizing only 3.5% of their training data4. This observation emphasizes the effectiveness of our approaches in scenarios where obtaining large-scale and high-quality translated data is challenging or unfeasible. Additionally, to further investigate the G-XLT capabilities achieved by our method, we examine how the improvement in F1 scores is distributed across different language pairs. Thus, we compare the F1 scores of our best fine-tuned models with the zero-shot performances of the mBERT-qa-en model for each pair of languages. To visualize these differences, we use heatmaps depicting the variations in F1 scores across language pairs, as shown in Figure 3. The heatmaps clearly demonstrate the superiority of our self-distillation methods over standard cross-entropy fine-tuning, particularly in dissimilar language pairs such as hi-es and ar-vi. Notably, when the question is presented in English, the self-distillation fine-tuning approach exhibits a significantly reduced decline in performance, indicating its propensity to transfer the English QA knowledge to other languages while maintaining its initial performance. In Appendix 9, we report an analogous trend for the EM score. This finding provides further evidence to support the higher average G-XLT values discussed earlier, thereby reinforcing the notion of enhanced generalized cross-lingual transfer achieved by our proposed methods.
Footnote 4: The percentage is calculated as the ratio of the \(N_{cross-lingual}\) between our methods and the strong baseline, as reported in Table 2.
### Zero-shot Performance
The findings of the zero-shot evaluation, detailed in Table 4, underscore the superior efficacy of self-distillation fine-tuning in contrast to standard cross-entropy fine-tuning. This superiority is manifest in the higher XLT F1 scores obtained for both the XQuAD and TyDiQA-goldp benchmarks. This encompasses both the average XLT score and the individual language levels, with a notable margin. Surprisingly, on the TyDiQA dataset, our results also stand in competitive alignment with the robust baseline proposed in [17]. Specifically, we surpass this baseline by approximately 2-4 F1 points for all languages except Russian (ru) and Swahili (sw), despite having one order of magnitude less training data, as discussed in the preceding section. Overall, while the substantial performance boost on the XQuAD dataset may be partly due to subtle translation artifacts between training and test sets, as discussed in [35], our positive results on the TyDiQA-goldp dataset validate the efficacy of our self-distillation approach. Notably, TyDiQA stands apart from MLQA and XQuAD, as it necessitates genuine information-seeking questions crafted by individuals who do not possess the answers. Furthermore, this dataset is sourced directly in each language, obviating the need for translations.
| model | ntl | t | \(N_{cross-lingual}\) | MLQA-dev G-XLT | MLQA-dev XLT | MLQA-test G-XLT | MLQA-test XLT |
| --- | --- | --- | --- | --- | --- | --- | --- |
| mBERT-qa-en | - | - | - | 49.7/33.6 | 59.7/42.6 | 49.7/33.8 | 59.4/33.8 |
| w/ skd | 5 | 2 | 42,840 | 59.1/42.6 | 64.1/46.2 | 58.4/41.6 | **64.0/46.3** |
| w/ skd, mAP@k | 5 | 2 | 42,840 | **59.2/42.7** | **64.7/47.1** | **58.5/41.8** | 63.7/45.9 |
| w/ ce | 3 | - | 19,040 | 57.5/40.8 | 62.2/44.8 | 56.3/39.8 | 61.2/44.0 |
| LAF, PSA+QS (en-all) [17] | - | - | 1,233,776 | - | - | **61.9\({}^{\dagger}\)**/- | **65.7\({}^{\dagger}\)**/- |

Table 2: Average scores (F1/EM) of the best-on-dev for the G-XLT and XLT tasks on the MLQA datasets. We also provide the number of target languages \(ntl\), the temperature \(t\) and the total number of cross-lingual fine-tuning examples \(N_{cross-lingual}\) for each model. To distinguish the comparisons between our models and the strong baseline from [17], we employ the symbol \(\dagger\) to represent the scores of the latter.
MLQA-dev:

| model | en | es | de | ar | vi | hi | zh |
| --- | --- | --- | --- | --- | --- | --- | --- |
| mBERT-qa-en | 59.3/66.8 | 55.1/50.2 | 49.5/41.0 | 43.3/30.4 | 48.5/37.0 | 44.2/35.5 | 48.1/37.5 |
| w/ skd | **68.6/65.0** | **65.4/51.2** | 57.7/45.5 | 52.1/34.2 | 58.9/40.7 | **54.9**/44.0 | 56.2/43.0 |
| w/ skd, mAP@k | 68.4/64.6 | 64.8/50.8 | **57.7/47.3** | **53.3/36.2** | **58.9/42.7** | 54.4/**45.2** | **57.1/43.2** |
| w/ ce | 66.7/62.4 | 63.6/50.8 | 56.3/45.5 | 50.7/33.0 | 56.1/41.0 | 53.6/42.0 | 55.3/39.3 |

MLQA-test:

| model | en | es | de | ar | vi | hi | zh |
| --- | --- | --- | --- | --- | --- | --- | --- |
| mBERT-qa-en | 58.0/67.4 | 53.1/47.8 | 49.8/45.8 | 44.9/30.4 | 50.3/39.0 | 43.8/31.2 | 47.6/37.0 |
| w/ skd | **67.5/64.4** | 62.3/**49.2** | 58.5/**48.6** | 51.9/**37.0** | 59.8/42.8 | **53.4/41.9** | 55.7/**40.5** |
| w/ skd, mAP@k | 67.3/63.9 | **62.6**/49.1 | **58.7**/47.8 | **52.0**/36.6 | **60.1/43.1** | 53.3/40.8 | **56.0**/40.0 |
| w/ ce | 65.2/61.5 | 60.0/47.8 | 56.4/45.9 | 49.9/33.5 | 57.9/41.0 | 50.7/38.6 | 53.9/39.3 |
| LAF, PSA+QS (en-all) [17] | **74.3\({}^{\dagger}\)**/- | **66.1\({}^{\dagger}\)**/- | **61.5\({}^{\dagger}\)**/- | **54.8\({}^{\dagger}\)**/- | **64.3\({}^{\dagger}\)**/- | **54.9\({}^{\dagger}\)**/- | **57.6\({}^{\dagger}\)**/- |

Table 3: Single-language scores (F1/EM) of the best-on-dev for the G-XLT and XLT tasks on the MLQA datasets. We drop the number of target languages \(ntl\), the temperature \(t\) and the total number of training examples \(N_{cross-lingual}\) for space reasons. The scores are computed by taking the average performance across questions in all available languages for each answer language. To distinguish the comparisons between our models and the strong baseline from [17], we employ the symbol \(\dagger\) to represent the scores of the latter.
XQuAD:

| model | XLT | el | ru | tr | th |
| --- | --- | --- | --- | --- | --- |
| mBERT-qa-en | 55.3/40.9 | 62.7/45.5 | 70.5/53.2 | 51.2/36.9 | 36.6/27.9 |
| w/ skd | 72.1/60.1 | **81.6/67.5** | 86.8/74.9 | **71.0/57.0** | 48.9/40.9 |
| w/ skd, mAP@k | **72.7/60.8** | 81.4/66.7 | **87.6/76.3** | 70.5/56.0 | **51.2/44.1** |
| w/ ce | 68.8/56.2 | 75.1/59.4 | 85.2/72.9 | 66.2/51.8 | 48.7/40.8 |

TyDiQA-goldp:

| model | XLT | bn | fi | in | ko | ru | sw | te |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| mBERT-qa-en | 54.0 | 55.9 | 54.7 | 58.1 | 47.5 | 64.3 | 50.2 | 47.3 |
| w/ skd | **59.5** | 60.9 | **61.6** | 65.4 | 55.1 | 65.0 | 59.0 | 49.3 |
| w/ skd, mAP@k | 58.9 | **63.3** | 54.8 | **68.7** | 53.8 | **65.0** | 55.3 | **51.0** |
| w/ ce | 58.6 | 58.1 | 59.0 | 66.4 | **55.8** | 64.1 | 56.7 | 50.0 |
| LAF, PSA+QS (en-all) [17] | **61.9\({}^{\dagger}\)** | 59.9 | 57.3 | 64.1 | 55.3 | **65.9\({}^{\dagger}\)** | **63.8\({}^{\dagger}\)** | 49.1 |

Table 4: Single-language zero-shot F1/EM scores of the best-on-dev for the XQuAD and F1 score on the TyDiQA-goldp datasets. To distinguish the comparisons between our models and the strong baseline from [17], we employ the symbol \(\dagger\) to represent the scores of the latter.
Figure 3: Difference in F1 scores on the MLQA-dev and MLQA-test sets between the zero-shot performances of the mBERT-qa-en and our best models, namely, a) cross-entropy and b) self-knowledge distillation plus mAP@k.
## 7 Analysis
To understand the relationships and the impact on training performance of crucial variables such as the number of parallel target languages \(ntl\) in cross-lingual sampling, the temperature \(t\) in knowledge distillation, and the mAP@k loss coefficients \(\alpha_{kl}^{mAP@k}\), we analyze the experiments conducted on the development set of the MLQA dataset.
### Evolution of the mAP@k Coefficients
We commence by investigating the temporal evolution of the mAP@k coefficient, denoted as \(\alpha_{kl}^{mAP@k}\), throughout the training process to gain insights into its influence on self-distillation fine-tuning. The mAP@k method we propose in Section 4.4 dynamically weighs the significance of the teacher's predictions. It is important to emphasize that in our self-knowledge distillation experiments, the teacher is synchronized with the student model at each epoch. Consequently, we anticipate that, given proper learning of the student model, \(\alpha_{kl}^{mAP@k}\) increases in value, thus increasingly relying on improved teacher predictions. Indeed, Figure 4 illustrates the gradual increase of the coefficient during training, starting from an initial value of approximately 0.6 and progressing until it attains its maximal value of 1 at approximately half of the training duration. Interestingly, we observe oscillatory behaviour in the values of \(\alpha_{kl}^{mAP@k}\) during the initial epoch, which subsequently stabilizes over time, as a consequence of successful student training. This empirical evidence is consistent with the overall superior performance achieved by the models employing the mAP@k method, underscoring its efficacy in promoting the regularization of self-knowledge distillation training.
### Impact of the Number of Parallel Target Languages
Equation 3 illustrates how the total number of training examples used for cross-lingual fine-tuning scales quadratically with the number of sampled parallel target languages \(ntl\), resulting in a substantial increase in dataset size. Nonetheless, while each combination is, by definition, a distinct QA instance5, we recall that they are all derived from the same set of \(N_{seed}\) seed examples in English. Consequently, it is plausible to argue that a multilingual model equipped with powerful cross-lingual representations may map two parallel examples into highly similar representations, thereby not fully harnessing the benefits of these additional examples. Therefore, we aim to assess the impact of \(ntl\) in Equation 3 on the performances of our self-knowledge distillation models. In Figure 5, we present the F1 scores of the G-XLT and XLT metrics for both the self-distillation and cross-entropy models, plotted as a function of the number \(ntl\) while keeping the temperature set to the default distillation value of 2. The curves corresponding to the G-XLT setting exhibit an increasing trend in F1 scores for the self-distillation models, whereas the cross-entropy models experience a decline for values of \(ntl\) greater than 3. Conversely, the XLT metric shows a decreasing trend as the number of parallel target languages increases, with a steep decline observed for the cross-entropy method. Additionally, the curves demonstrate that the
self-distillation models using the mAP@k loss coefficients \(\alpha_{kl}^{mAP@k}\) achieve better performance for both the G-XLT and XLT scores. We report the corresponding analysis for the EM score in Appendix 10, confirming the same trend. Overall, these results lead us to two conclusions: i) the cross-entropy method tends to overfit when fine-tuning with a cross-lingual sampled dataset, likely due to the increased redundancy of the data; and ii) the self-distillation methods effectively leverage the cross-lingual sampled dataset to enhance G-XLT performance while mitigating overfitting for the XLT metrics, in contrast to the observed trend with the cross-entropy approach.

Figure 4: Evolution of the \(\alpha_{kl}^{mAP@k}\) coefficient during training. Each vertical line indicates a training epoch.
### Effect of Temperature on the Number of Target Languages
The knowledge distillation temperature \(t\) is a key hyperparameter that directly affects cross-lingual transfer by influencing the shape of the teacher's distribution and consequently the Kullback-Leibler term in the loss. In order to explore this impact, we consider the models trained with self-knowledge distillation and investigate the relationship between the F1 score in the G-XLT setting, the distillation temperature, and the number of target languages used in cross-lingual sampling. Figure 6 presents the results of this analysis. We observe that the self-knowledge distillation models improve as the number of target languages \(ntl\) increases, with a sharp increase when \(ntl>2\), while exhibiting relatively small variations across the different temperature values. Moreover, the optimal configuration depends on the use of the mAP@k loss coefficients, making it more challenging to determine. We display similar patterns for the EM scores in Appendix 10. Therefore, this analysis highlights the importance of using a number of target languages greater than 2 while underscoring the need for careful tuning of the knowledge distillation temperature to maximize cross-lingual transfer performance.
## 8 Ablation Study: Unsupervised Self-Distillation
In this section, we investigate the impact of unsupervised self-distillation, where only the Kullback-Leibler divergence term is utilized, thus excluding the cross-entropy term. We conduct an ablation experiment by using our best-performing models in Table 2 and removing the cross-entropy term. The aim is to explore the potential of this unsupervised approach and gain further insights into how to accomplish beneficial cross-lingual fine-tuning using limited labelled examples. The results of the ablation study, presented in Table 5, reveal a significant decrease in performance by several F1 points in both the in-language and zero-shot evaluations. Surprisingly, the performance even drops several F1/EM points below the baseline achieved by the cross-entropy-only experiments. Furthermore, the inclusion of the mAP@k loss coefficients does not mitigate this performance decline. This observation is supported by the behaviour of the \(\alpha_{kl}^{mAP@k}\) coefficient, which keeps oscillating around 0.6 and does not exhibit an increasing trend throughout the training process, as depicted in Figure 7. This is in stark contrast to the trend exhibited by the coefficient in Figure 4, where a progressive increase is evident over the course of training. These findings underscore the critical role played by the cross-entropy term in guiding and enhancing the model's self-knowledge, leading to overall improved cross-lingual transfer performance. We conclude that the inclusion of cross-entropy is crucial for effective knowledge distillation and achieving superior cross-lingual transfer capabilities.
Figure 5: F1 scores on the MLQA-dev set for the G-XLT and XLT settings as a function of the number of parallel target languages \(ntl\) sampled during cross-lingual fine-tuning for our self-distillation models and standard cross-entropy fine-tuning.
Figure 6: F1 scores for the G-XLT setting of the self-distillation models on the MLQA-dev set as a function of both the self-distillation temperature and the number of target languages sampled for training. Minimum and maximum values are relative to each figure to better visualise the variation of the scores.
| model | MLQA-dev G-XLT | MLQA-dev XLT | MLQA-test G-XLT | MLQA-test XLT | XQuAD XLT (zero-shot) | TyDiQA-goldp XLT (zero-shot) |
| --- | --- | --- | --- | --- | --- | --- |
| mBERT-qa-en | - | - | - | - | - | - |
| w/ skd, mAP@k w/o ce | 52.5/36.6 | 61.0/43.9 | 53.1/36.0 | 60.4/43.6 | 57.7/43.6 | 53.9/41.9 |
| w/ skd w/o ce | 53.8/37.9 | 61.2/44.2 | 52.3/37.0 | 60.5/43.6 | 56.8/42.7 | 53.9/41.5 |
| w/ ce | **57.5/40.8** | **62.2/44.8** | **56.3/39.8** | **61.2/44.0** | **68.8/56.2** | **58.6/45.3** |

Table 5: Average scores (F1/EM) of the best-on-dev models without the cross-entropy term for the MLQA dev, MLQA test, XQuAD and TyDiQA-goldp datasets. We intentionally do not report the mBERT-qa-en scores to better stress the comparison between the models in the ablation study.
Figure 7: Evolution of the \(\alpha_{kl}^{mAP@k}\) coefficient during training. Each vertical line indicates a training epoch.
## 9 Limitations and Future Works
Our study on cross-lingual question-answering (QA) techniques has shown encouraging results; nevertheless, we acknowledge the following main limitations:
**Availability of Parallel QA Datasets.** Although our experiments employed as few as a thousand examples per target language, the availability of high-quality parallel QA datasets ultimately relies on human annotations that can be resource-intensive to obtain for a wide range of languages. Therefore, the scarcity of such data may limit the generalizability and scalability of our approach to a larger set of low-resource languages.
**Challenges with Distant Language Families.** Our methods assume that the underlying structure and patterns of the source language, English in our case, can be effectively transferred to target languages. However, as expected, our findings indicate that achieving high-quality cross-lingual QA transfer becomes more challenging when dealing with languages from distant language families. We believe the linguistic dissimilarities among these languages can hinder effective knowledge transfer, leading to performance degradation for specific language pairs.
**Sensitivity to the Hyperparameters.** Our self-distillation techniques and cross-lingual sampling performance are sensitive to various method-specific training hyperparameters. Determining the optimal values for these hyperparameters requires proper tuning.
Acknowledging these limitations and addressing them through future research will contribute to further advancements in cross-lingual QA, allowing for more effective and efficient cross-lingual QA across a diverse range of low-resource languages.
## 10 Conclusion
This study presents an effective approach for achieving cross-lingual QA transfer abilities with limited data aligned across languages. By leveraging advanced self-knowledge distillation techniques and cross-lingual sampling, our method offers promising prospects for bridging the language gap in QA tasks and enhancing G-XLT performances in data-scarce scenarios. Importantly, the introduction of a novel dynamic loss weighting technique with mAP@k coefficients adds an essential contribution to cross-lingual transfer methods, allowing for more optimized knowledge distillation during fine-tuning. Therefore, our approach proves beneficial for languages where high-quality machine-translated data is scarce, but small-scale annotation efforts are feasible. Overall, our research contributes to the advancement of cross-lingual QA models, providing valuable insights and methodologies for future work in this domain.
## Acknowledgments
This work was supported by the project PID2019-107579RB-I00 (MICINN).
|
2309.11545 | Galaxy archaeology for wet mergers: Globular cluster age distributions
in the Milky Way and nearby galaxies | Identifying past wet merger activity in galaxies has been a longstanding
issue in extragalactic formation history studies. Gaia's 6D kinematic
measurements in our Milky Way (MW) have vastly extended the possibilities for
Galactic archaeology, leading to the discovery of early mergers in the MW's
past. As recent work has established a link between young globular clusters
(GCs) and wet galaxy merger events, the MW provides an ideal laboratory for
testing how GCs can be used to trace galaxy formation histories. To test the
hypothesis that GCs trace wet mergers, we relate the measured GC age
distributions of the MW and three nearby galaxies to their merger histories and
interpret the connection with wet mergers through an empirical model for GC
formation. For the MW, we cross-match the GCs with their associated progenitor
host galaxies to disentangle the connection to the GC age distribution. We find
that the MW GC age distribution is bimodal, mainly caused by younger GCs
associated with Gaia-Sausage/Enceladus (GSE) and in part by unassociated
high-energy GCs. The GSE GC age distribution also appears to be bimodal. We
propose that the older GSE GCs were accreted together with GSE, while the
younger ones formed through the merger. For the nearby galaxies, we find that
peaks in the GC age distributions coincide with early gas-rich mergers. Even
small signatures in the GC age distributions agree well with the formation
histories of the galaxies inferred through other observed tracers. From the
models, we predict that the involved cold gas mass can be estimated from the
number of GCs found in the formation burst. Multimodal GC age distributions can
trace massive wet mergers as a result of GCs being formed through them. From
the laboratory of our own MW and nearby galaxies we conclude that the ages of
younger GC populations of galaxies can be used to infer the wet merger history
of a galaxy. | Lucas M. Valenzuela, Rhea-Silvia Remus, Madeleine McKenzie, Duncan A. Forbes | 2023-09-20T18:00:01Z | http://arxiv.org/abs/2309.11545v2 | Galaxy Archaeology for Wet Mergers: Globular Cluster Age Distributions in the Milky Way and Nearby Galaxies
###### Abstract
Context:Identifying past wet merger activity in galaxies has been a longstanding issue in extragalactic formation history studies. Gaia's 6D kinematic measurements in our Milky Way have vastly extended the possibilities for Galactic archaeology, leading to the discovery of a multitude of early mergers in the Milky Way's past. As recent work has established a link between young globular clusters (GCs) and wet galaxy merger events, the Milky Way provides an ideal laboratory for testing which GC properties can be used to trace extragalactic galaxy formation histories.
Aims:To test the hypothesis that GCs trace wet mergers, we relate the measured GC age distributions of the Milky Way and three nearby galaxies, M 31, NGC 1407, and NGC 3115, to their merger histories and interpret the connection with wet mergers through an empirical model for GC formation.
Methods:The GC ages of observed galaxies are taken from a variety of studies to analyze their age distributions side-by-side with the model. For the Milky Way, we additionally cross-match the GCs with their associated progenitor host galaxies to disentangle the connection to the GC age distribution. For the modeled GCs, we take galaxies with similar GC age distributions as observed to compare their accretion histories with those inferred through observations.
Results:We find that the Milky Way GC age distribution is bimodal, mainly caused by younger GCs associated with Gaia-Sausage/Enceladus (GSE) and in part by unassociated high-energy GCs. The GSE GC age distribution also appears to be bimodal. We propose that the older GSE GCs were accreted together with GSE, while the younger ones formed as a result of the merger. For the nearby galaxies, we find that clear peaks in the GC age distributions coincide with active early gas-rich merger phases. Even small signatures in the GC age distributions agree well with the expected wet formation histories of the galaxies inferred through other observed tracers. From the models, we predict that the involved cold gas mass can be estimated from the number of GCs found in the formation burst.
Conclusions:Multimodal GC age distributions can trace massive wet mergers as a result of GCs being formed through them. From the laboratory of our own Milky Way and nearby galaxies we conclude that the ages of younger GC populations of galaxies can be used to infer the wet merger history of a galaxy.
## 1 Introduction
Over the course of galaxy formation and evolution, in-situ formed structures will mix with accreted matter, concealing their origin and thereby the formation history of the galaxy. Through precise measurements of star and gas properties, such as their distribution in space, their velocity, their chemical compositions, and the stellar ages, it is possible to disentangle many of the individual clues on the formation history. This is the aim of _galaxy archaeology_(e.g., Freeman & Bland-Hawthorn, 2002; Binney, 2013; Helmi, 2020). In particular, stellar structures and overdensities, such as stellar streams and other tidal features, can reveal details on a galaxy's history, where especially stellar streams are generally the remains of a tidally disrupted satellite galaxy. This has been done extensively for the Milky Way (MW; e.g., Helmi et al., 1999; Belokurov et al., 2006, 2007; Bell et al., 2008; Shipp et al., 2018), aided particularly by the _Gaia_ mission (Gaia Collaboration et al., 2016, 2018, 2023) in recent years with 6D phase-space data (e.g., Helmi et al., 2018; Helmi, 2020; Prudil et al., 2022; Malhan et al., 2022; Ruiz-Lara et al., 2022). It is also possible to detect tidal features for other galaxies in a more limited way through photometric and integral field unit (IFU) observations. Such identified structures have also been connected to the merger history of their host galaxies in observations (e.g., Bliek et al., 2020, 2023; Chandra et al., 2023) and simulations (e.g., Bullock and Johnston, 2005; Johnston et al., 2008; Amorisco, 2015; Hendel and Johnston, 2015; Pop et al., 2018; Karademir et al., 2019; Valenzuela and Remus, 2022). However, all of these are tracers for stellar-dominated merger events and cannot trace the gas that has been accreted through such mergers.
For our own Galaxy, data is available in unprecedented detail, including measured proper motions. By detecting kinematic substructures of stars clustered in phase space, it has been possible to identify a number of past and ongoing mergers for the MW. The Sagittarius Dwarf Spheroidal Galaxy was the first merger of the MW to be discovered, through positional and line-of-sight kinematic data, by Ibata et al. (1994). Using Hipparcos data and line-of-sight distances and velocities, Helmi et al. (1999) discovered substructures in the inner halo now known as the _Helmi streams_. Through the Gaia mission and the 6D kinematic data that were obtained for stars in the MW, a large number of halo stars were found to have distinct kinematics in phase space, which were attributed to a major merger event that is expected to have taken place around 10 Gyr ago, _Gaia-Sausage/Enceladus_ (GSE; Belokurov et al., 2018; Haywood et al., 2018; Helmi et al., 2018; Mackereth et al., 2019). It is the last major merger that the MW experienced, as well as the most massive one, forming a large part of the stellar halo. Finally, Myeong et al. (2019) found a second group of halo stars kinematically and chemically different from GSE stars with overall retrograde motions, which they attributed to a separate merger event, referred to as _Sequoia_. Further groups of stars in phase space have been found, which are likely the debris of disrupted galaxies that fell into the MW, though the connection to specific merger events is not yet clear (see Dodd et al., 2023 and Horta et al., 2023 for a current overview of structures found in phase space).
Through Gaia data, globular clusters (GCs) in the MW have also been used to further disentangle the Galactic formation history. Based on their 6D phase-space properties and their age-metallicity relations, some studies have linked the GCs with their likely progenitor host galaxies (Myeong et al., 2018; Massari et al., 2019; Forbes, 2020; Horta et al., 2020; Callingham et al., 2022), such as to the MW itself as in-situ GCs, or to some of the inferred accreted galaxies. These studies also revealed unassociated groups of GCs with low and high orbital energies, where it has been proposed that a group of unassociated low-energy GCs is part of a further past merger event (Massari et al., 2019; Forbes, 2020; Callingham et al., 2022). Similar structures in phase space with overlapping properties have been identified through different methods (e.g., Kruijssen et al., 2019; Forbes, 2020; Horta et al., 2021, 2023). However, it has also been shown that GCs can migrate in phase space over time, potentially making it difficult to disentangle the origin of GCs based on their phase-space properties alone (Pagnini et al., 2023).
Because of their intrinsic brightness, GCs have also been used as tracer populations in the outskirts of other galaxies to study their mass distribution, kinematics, and formation history (e.g., Coccato et al., 2013). Through their old age, GCs in particular have experienced a large part of a galaxy's history, making them valuable tracers for past merger events. However, the formation of GCs themselves is still poorly understood. For this reason, models and simulations have been developed to help constrain the details of their formation process. Highly-resolved hydrodynamic simulations help study the resolved formation of individual GCs (e.g., Kravtsov & Gnedin, 2005; Lahen et al., 2019; McKenzie & Bekki, 2021), and sub-grid models for GCs applied to isolated or cosmological simulations allow one to follow GC properties and their spatial distribution through time, making comparisons with observations of nearby galaxies possible (e.g., Bekki et al., 2005; Kruijssen et al., 2011; Pfeffer et al., 2018; Chen & Gnedin, 2022; De Lucia et al., 2023). Finally, empirical and semi-analytic GC formation models applied to cosmological merger trees of galaxies enable one to test the parameter space of a limited number of free parameters for a large number of galaxies (e.g., Beasley et al., 2002; Choksi et al., 2018; El-Badry et al., 2019; Valenzuela et al., 2021; Chen & Gnedin, 2023). Such models can lead to a better understanding of the statistical properties of GC formation that are necessary to reproduce GC properties and relations as they are observed today (e.g., Spitler & Forbes, 2009; Harris et al., 2015, 2017; Forbes et al., 2018; Burkert & Forbes, 2020).
The stars of GCs can be individually observed in the MW, such that reasonably good measurements of their ages can be determined through the color-magnitude diagram (CMD), which has been done for various GC subsamples in the MW (e.g., Salaris & Weiss, 1998; Dotter et al., 2008; Marin-Franch et al., 2009). For extragalactic GCs, the ages have much larger uncertainties associated with them because only the integrated GC properties can be measured. For this reason, stellar population models and evolutionary tracks have to be used to determine the ages, albeit they typically have biases towards younger ages compared to the CMD method. This has been done for GCs in galaxies in the Local Group (e.g., Beasley et al., 2005; Schiavon et al., 2013; Wang et al., 2021) and for selected nearby galaxies (e.g., Usher et al., 2019).
In this work, we use a recent empirical GC formation model with two formation pathways (Valenzuela et al., 2021; Valenzuela, 2023, the first pathway forms GCs in small halos, the second forms GCs in gas-rich mergers) to study in what way its second pathway of forming GCs in gas-rich wet mergers (e.g., Ashman & Zepf, 1992) can help shed light on the formation history of the MW and other nearby galaxies. The model has been shown to agree well with the observed numbers of GCs in galaxies from dwarf to galaxy cluster masses, where a linear relation has been found to exist between the number of GCs and the dark matter (DM) halo virial mass (e.g., Blakeslee et al., 1997; Harris et al., 2017; Forbes et al., 2018), as well as with GC age distributions of the MW and nearby galaxies. We introduce the GC model and observational data in Sect. 2. In Sect. 3, we present a bimodal feature found in the observed GC age distribution of the MW and how it could be related to the predictions of the GC model. We then test and discuss these predictions in detail for the MW in Sect. 4 and for other nearby galaxies in Sect. 5. Finally, we summarize and conclude the results in Sect. 6.
## 2 Data & Method
In the following, the empirical GC model and the GC observational data used in this work are presented. The main property studied is the GC age distribution of galaxies.
### Globular Cluster Model
In this work, we use the empirical GC formation model introduced by Valenzuela et al. (2021), which builds on previous models and investigations by Boylan-Kolchin (2017), Choksi et al. (2018), and Burkert & Forbes (2020). The model employs two formation pathways for GCs: The first, the _small halo pathway_, forms GCs in small haloes as soon as a halo's virial mass surpasses a given threshold value, \(M_{\rm seedGC}\). With equal probability, 0, 1, or 2 GCs are formed. The second, the _wet merger pathway_, is the formation pathway introduced by Choksi et al. (2018) and triggers GC formation when the relative halo mass accretion rate surpasses a given threshold value, \(A_{\rm min}\). The formed number of GCs is then determined by converting the available cold gas mass \(M_{\rm gas}\) to a total GC mass via a conversion factor, \(\eta_{\rm GC}\)(Kravtsov & Gnedin, 2005; Li & Gnedin, 2014; Choksi et al., 2018; Valenzuela et al., 2021):
\[M_{\rm GC}=1.8\times 10^{-4}\eta_{\rm GC}M_{\rm gas}. \tag{1}\]
By assuming a cluster initial mass function of
\[\frac{dN}{dM}\propto M^{-2}, \tag{2}\]
the expected number of formed GCs is obtained as (combining eqs. 3, 6, and 7 of Valenzuela et al.2021)
\[\langle N\rangle=\exp\left(W\left(\frac{1.8\times 10^{-4}\eta_{GC}M_{\rm gas}}{M_{ \rm min}}\right)\right)-1, \tag{3}\]
where \(W\) is the Lambert \(W\) function, \(\eta_{\rm GC}=0.5\) for the best-fitting model, and \(M_{\rm min}=10^{5}\,{\rm M}_{\odot}\) is the minimum mass that a GC needs at formation time to survive for a few Gyr (Li & Gnedin 2014; Choksi et al. 2018). For more details on the models, see Valenzuela et al. (2021). Note that recent work by Chen & Gnedin (2023) has now used a smaller value of \(M_{\rm min}=10^{4}\,{\rm M}_{\odot}\), although they note that GCs with low initial masses of, for example, \(10^{4}\,{\rm M}_{\odot}\) will have an estimated lifetime of around 1 Gyr at a distance of 3 kpc from the center of a MW-mass galaxy. The model only tracks the numbers of GCs per galaxy and their formation times, but does not include metallicities. This limits the comparison with observations to only the GC ages, though the available measured metallicities can be used as indicators for the formation sites of the observed GCs. In contrast, for the modeled GCs this information is already known.
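To illustrate how Eq. (3) translates a cold gas mass into an expected GC yield, the following short Python sketch evaluates the formula numerically. It only assumes the availability of `numpy` and `scipy` (for the Lambert \(W\) function); the function name `expected_gcs_from_wet_merger` and the example gas mass are ours for illustration and are not part of the original model code.

```python
import numpy as np
from scipy.special import lambertw

def expected_gcs_from_wet_merger(m_gas, eta_gc=0.5, m_min=1e5):
    """Expected number of GCs formed in a wet merger, Eq. (3):
    <N> = exp(W(1.8e-4 * eta_GC * M_gas / M_min)) - 1,
    with the cold gas mass M_gas and M_min given in solar masses."""
    arg = 1.8e-4 * eta_gc * m_gas / m_min
    return np.exp(lambertw(arg).real) - 1.0

# Example: a cold gas mass of 1e10 Msun with the best-fitting eta_GC = 0.5
# and M_min = 1e5 Msun yields roughly 4-5 expected GCs.
print(expected_gcs_from_wet_merger(1.0e10))
```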
The GC model was applied to the merger tree of a DM-only simulation of side length 30 Mpc with a DM particle mass of \(m_{\rm DM}=7.90\times 10^{6}\,{\rm M}_{\odot}\) that was run with the TreePM code Gadget-3 (Springel2005). The empirical model emerge(Moster et al.2018) provides the model with the baryonic matter content per galaxy. Because the model only tracks the number of GCs in each galaxy and at what times they were formed, the model parameters were fit to match observations of the numbers of GCs, since those are available for a sufficiently large sample: the GC numbers are taken from Burkert & Forbes (2020). The GC age distributions are consistent with those found by Usher et al. (2019) for the MW and three SLUGGS galaxies (_SAGES Legacy Unifying Globulars and GalaxieS_; Brodie et al.2014), and the fractions of GCs formed through the wet merger pathway agree with the red GC fractions measured by Harris et al. (2015). For more information on the comparability to observations and how the model parameters affect the resulting GC properties, see Valenzuela et al. (2021).
### Globular Cluster Observational Data
A variety of observed GC age measurements from the literature are used in this work for selected galaxies in the Local Universe. These use different methods to obtain the GC ages and are presented in the following. For all of the measured samples, it is important to keep in mind that while the _absolute_ ages have large uncertainties and are difficult to measure even in the MW itself (e.g., Ying et al.2023, for M 92), within a given GC sample they are subjected to the same systematic uncertainties, resulting in much more precise _relative_ ages. This is important for the study of GC age distributions, where the features in the distribution itself are given by the relative ages as opposed to the absolute ones.
#### 2.2.1 Milky Way
The Milky Way is the only galaxy besides its own satellites for which it is currently possible to obtain accurate CMDs of its GCs. This allows for a much more exact determination of their ages and has been done in multiple studies for different sized samples of GCs. In this work, we consider three of these as compiled by Kruijssen et al. (2019), and additionally investigate the mean ages of those three studies in Appendix A.1:
* Forbes & Bridges (2010) with a sample of 92 GCs,
* Dotter et al. (2010, 2011) with a sample of 68 GCs,
* VandenBerg et al. (2013) with a sample of 54 GCs.
* Kruijssen et al. (2019) with a sample of 96 GCs, which contains the mean GC ages from the previous three studies.
The sample from Forbes & Bridges (2010) is based on a number of previous age measurement studies (Salaris & Weiss 1998; Bellazzini et al. 2002; Catelan et al. 2002; De Angeli et al. 2005; Carraro et al. 2007; Dotter et al. 2008; Carraro 2009; Marin-Franch et al. 2009), of which 64 GCs were measured using the _Advanced Camera for Surveys_ (ACS) from the _Hubble Space Telescope_ (HST) through the ACS survey for Galactic GCs (Sarajedini et al. 2007; Marin-Franch et al. 2009) to obtain relative ages. They were normalized to absolute ages with the Dartmouth models of Dotter et al. (2007). While that sample is restricted to the inner 20 kpc of the MW, the age measurements of further GCs were supplemented from the other works. The sample from Dotter et al. (2010, 2011) is for the most part also based on the GCs measured through the ACS survey of Galactic GCs using the photometric catalog from Anderson et al. (2008), and the remaining GCs were observed with further HST/ACS measurements. Lastly, the GC age measurements from VandenBerg et al. (2013) were computed from the same photometric catalog of the ACS survey of Galactic GCs as Marin-Franch et al. (2009) used, but employing the stellar evolutionary tracks from VandenBerg et al. (2012). It should be noted that not all of these objects are necessarily actual GCs; some are in part nuclear star clusters, metal-complex clusters, or a combination thereof (e.g., McKenzie et al. 2022). We nevertheless refer to all of them simply as GCs in this work.
In addition to the three CMD age samples of the MW GCs, we also include two GC age samples obtained through integrated measurements, as this is also what one is restricted to for other galaxies:
* Usher et al. (2019) with a sample of 61 GCs,
* Cabrera-Ziri & Conroy (2022) with a sample of 32 GCs, of which we remove the 3 spurious young GCs (see their section 6.1), which have been shown to be much older from CMD measurements.
The measurements from Usher et al. (2019) are obtained through combining photometry and spectroscopy, to which stellar population models are fitted using a Markov chain Monte Carlo (MCMC) method. In contrast, Cabrera-Ziri & Conroy (2022) used spectroscopy only, but additionally took the hot horizontal branch (HB) stars into account in their modeling of the integrated stellar population measurements.
For the MW, additional GC properties can be determined that are not possible to obtain for other galaxies at the moment. Six-dimensional phase space measurements have been made available for many MW GCs through _Gaia_(Gaia Collaboration et al.2018; Vasiliev2019), which Massari et al. (2019) used to assign the likely origin of the individual GCs. Their list of progenitors for the GCs consist of the MW itself (i.e., in-situ formed GCs in the disk or bulge), the GSE galaxy, the Sagittarius dwarf, the Helmi Streams, the Sequoia galaxy, and unassociated high- and low-energy GCs. Forbes (2020) used the age-metallicity relation (AMR) to improve these progenitor assignments and proposed that the unassociated low-energy GCs belong to a single progenitor dwarf satellite, which they dubbed _Koala_ and is likely related to or overlaps with _Kraken_(Kruijssen et al.2019) and _Heracles_(Horta et al.2021, 2023). Further work was recently done by Callingham et al. (2022), who used a chemo-dynamical model and
hydrodynamical simulations of MW-like galaxies to associate the GCs with their progenitor hosts. Their assumed accretion events largely align with those used by Massari et al. (2019) and Forbes (2020). It should be noted that due to the ongoing observations and work on this topic, this is a rapidly evolving field. The identifiers of the GCs were available to us for the CMD age samples and for the sample of Cabrera-Ziri & Conroy (2022), such that we could cross-match the ages for those four samples to the progenitor assignments. For this work, we use the assignments made by Forbes (2020); repeating the analysis with the assignments from Massari et al. (2019) or Callingham et al. (2022) does not change the statistical findings presented in this work (also see Appendix A.2 for an analysis using the associations from Callingham et al. 2022).
#### 2.2.2 M31
As the nearest more massive galaxy to the MW, the Andromeda galaxy (M 31) is an ideal galaxy for which GCs can be identified and analyzed, since observations have much better resolution and better signal-to-noise values than for more distant galaxies. For M 31, we use two different studies for the GC ages:
* Wang et al. (2021) with a sample of 343 clusters, of which we use the 293 old GCs (\(t_{\rm age}>1.5\) Gyr) in this work,
* Cabrera-Ziri et al. (in prep.) with a sample of 286 GCs, of which we use the 136 GCs whose ages are sufficiently constrained.
The sample from Wang et al. (2021) was observed with the _Large Sky Area Multi-Object Fiber Spectroscopic Telescope_ (LAMOST; Cui et al. 2012; Luo et al. 2015). The GC ages were then determined based on the obtained integrated spectra and multi-band photometry from _Beijing-Arizona-Taiwan-Connecticut_ (BATC; Ma et al. 2015) and _Sloan Digital Sky Survey_ (SDSS; Peacock et al. 2010). For the old clusters, they fit the spectra using an empirical stellar spectra library and the measured colors using an MCMC method. Their ages are all younger than 10 Gyr, however, which is likely a consequence of their underlying models and almost certainly not actually the case for the majority of M 31's GCs (e.g., Beasley et al. 2005; Caldwell et al. 2011; Schiavon et al. 2013). Their ages should therefore be taken with caution.
The integrated spectra measurements of Cabrera-Ziri et al. (in prep.) use the same method as applied for the MW GCs presented by Cabrera-Ziri & Conroy (2022). Their GC sample is selected from the inner halo of M 31, of which 150 have only a lower bound on their age, leaving 136 GCs with sufficiently constrained ages to study their distribution. The reason for the large number of lower age bounds is that their method is able to determine that the metal-poor GCs with a horizontal branch are very old, but not what exact age they have (\(>\)9 Gyr in all cases, and \(>\)12.5 Gyr is the median lower bound) due to degeneracies between the ages and horizontal branch properties. This also means that the resulting age distribution has a selection bias towards metal-rich GCs, which means that the metal-poor GCs typically accreted through smaller galaxies are removed. This should be kept in mind, though we believe that the consequences for this work are not severe since we focus on GCs formed during mergers, which produce younger GCs at generally higher metallicities.
#### 2.2.3 NGC 3115 and NGC 1407
The age measurements for 116 GCs in NGC 3115 and 213 GCs in NGC 1407 were made by Usher et al. (2019) by combining photometry and spectroscopy to fit the stellar spectra using an MCMC method. The spectral data were obtained through the SLUGGS survey (Brodie et al. 2014) and the photometric data is from Arnold et al. (2011) using the Suprime-Cam of the Subaru telescope.
## 3 Age Dating Wet Mergers With GCs in the Milky Way
Having the observational data at hand, we investigate some recurring features in the GC age distributions with the aim of verifying whether such features can be explained by the model. In the following, we present our initial findings from the observations and what predictions the model makes with respect to these.
### GC Age Distribution Observations in the Milky Way
As the galaxy for which the best data exists, we first consider the GC age distribution of the MW. In two of the available samples, the ages appear to have a bimodal or even multimodal distribution, which are shown in Fig. 1. For Forbes & Bridges (2010), there is a slight second peak in the distribution at around 11 Gyr, while for Usher et al. (2019) there are two small peaks between 8 Gyr and 10 Gyr. Studies of the age-metallicity relationship for MW GCs have shown that there are multiple branches of GCs corresponding to the in-situ formed GCs and different progenitor galaxies that fell into the MW (e.g., Kruijssen et al. 2019; Forbes 2020), which could be what is seen as a multimodal age distribution. The peaks in the age distributions align well with the estimated infall times of Sagittarius (8-9 Gyr ago) and Sequoia (\(\sim\)10 Gyr ago; Forbes 2020), and of the GSE merger event, the last major merger that the MW experienced (Helmi et al. 2018; Haywood et al. 2018): around 8 Gyr to 11 Gyr ago according to Belokurov et al. (2018) and around 10 Gyr ago according to Helmi et al. (2018). With an estimated merger mass ratio of 1:4 (Helmi et al. 2018; Gallart et al. 2019), GSE is expected to have been a gas-rich major merger that triggered a starburst in the disk of the MW (e.g., Helmi 2020; Ciuca et al. 2023). There is also evidence for this from stellar population measurements, where Gallart et al. (2019) found a clear peak of high star formation at around 9.5 Gyr ago.
Of course, the findings from the observed age distributions are accompanied by some caveats: First of all, GC age measurements are subject to large uncertainties, especially for the older ages (see the solid lines in Fig. 1, which are the distributions smoothed by Gaussian kernels given by the measurement uncertainties). Still, a large part of these uncertainties are systematic effects, such that the relative ages can still be trusted more than individual absolute ages. Second, the sample sizes are not large enough to make any kind of statistically significant statements, in particular for the few GCs that make up the minor peaks in the age distributions. Third, bringing together the estimated time of the GSE merger event and the time of the GC age distribution peaks is far from having established a causal relation between the two. However, since the features are present in the data and the time also aligns with that of the GSE merger event, GC formation models can be employed to investigate if there is a theoretical basis for a causal connection.
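The uncertainty-smoothed distributions shown as solid lines in Fig. 1 (and in the later figures) are simply a summation over normal distributions, one per cluster, with the measured ages as means and the measurement uncertainties as standard deviations. A minimal sketch of this smoothing is given below; the ages and uncertainties used in the example are made-up values for illustration, not the measured samples, and the code is ours rather than the analysis scripts of the cited studies.

```python
import numpy as np

def smoothed_age_distribution(ages, errors, grid):
    """Sum of normal distributions, one per GC, centred on the measured
    age (mean) with the measurement uncertainty as standard deviation."""
    ages = np.asarray(ages, dtype=float)[:, None]
    errors = np.asarray(errors, dtype=float)[:, None]
    pdf = np.exp(-0.5 * ((grid[None, :] - ages) / errors) ** 2)
    pdf /= errors * np.sqrt(2.0 * np.pi)
    return pdf.sum(axis=0)   # divide by len(ages) for a normalized density

# Illustrative (made-up) GC ages and uncertainties in Gyr
grid = np.linspace(6.0, 14.0, 400)
ages = [12.8, 12.6, 12.4, 11.9, 11.0, 10.3, 9.8]
errs = [0.5, 0.6, 0.7, 0.8, 0.9, 1.0, 1.1]
smoothed = smoothed_age_distribution(ages, errs, grid)
```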
### Model Predictions
The model introduced by Valenzuela et al. (2021) consists of two GC formation pathways, of which one allows GCs to form
through wet mergers containing a sufficient amount of cold gas. To study what the model predicts for galaxies such as the MW, we first extracted MW-like galaxies from the simulation to which the GC model was applied. For this, we selected MW-mass galaxies by their virial mass of \(M_{\rm vir}=1\)-\(2\times 10^{12}\,{\rm M}_{\odot}\), which applies to 21 simulated galaxies. Out of these, there are two with GC age distributions that best match the distribution measured by Usher et al. (2019) for the MW in terms of their cumulative distributions, which is shown in the top panel of Fig. 2. The analogs are selected using the measurements by Usher et al. (2019) instead of one of the CMD measurements because the model was originally calibrated by Valenzuela et al. (2021) against the ages measured by Usher et al. (2019), which provides a comparison with multiple galaxies. A recalibration of the model to the CMD age distributions is unsuitable because it would exceed the scope of this work and would only provide one single galaxy with a sufficient number of CMD-measured GC ages. The conclusions drawn in the following are still applicable to the model in general, independent of the age calibration that was used.
Both of the simulated galaxies have roughly bimodal GC age distributions with a minor peak at around 9 Gyr to 10 Gyr (middle panel of Fig. 2). Assuming a measurement uncertainty of 0.75 Gyr for each GC age continues to show a clear bimodality in one case (orange distribution), but only leaves a hint for the other case (red distribution). This shows that even if there is a bimodal age distribution, the large measurement uncertainties for GC ages can make it difficult to actually confirm it in practice.
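One way to make such statements more quantitative, sketched here purely for illustration (it is not the analysis performed in this work, uses a toy age sample, and assumes the availability of `scikit-learn`), is to compare one- and two-component Gaussian mixture fits to the GC ages via their Bayesian information criterion, before and after adding the assumed 0.75 Gyr measurement scatter:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# Toy bimodal GC age sample (Gyr): an old population plus a younger, merger-related one
true_ages = np.concatenate([rng.normal(12.5, 0.4, 40), rng.normal(10.0, 0.4, 15)])
observed = true_ages + rng.normal(0.0, 0.75, true_ages.size)  # add 0.75 Gyr scatter

for label, sample in (("true ages", true_ages), ("with scatter", observed)):
    X = sample.reshape(-1, 1)
    bic = [GaussianMixture(n_components=k, random_state=0).fit(X).bic(X) for k in (1, 2)]
    # A positive difference means the two-component (bimodal) model is preferred.
    print(label, "BIC(1) - BIC(2) =", round(bic[0] - bic[1], 1))
```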
The underlying reason for these peaks in the age distributions becomes apparent when studying the accretion histories of the two galaxies: both of them experience a major merger in the same time period as the GC formation bursts (bottom panel of Fig. 2). The mergers occur at the same time as the GSE merger is estimated to have happened (8-11 Gyr ago). The two accretion histories differ strongly in their later evolution, however: while the galaxy with the more pronounced bimodal GC age distribution
Figure 1: Globular cluster age distributions in the MW from Forbes and Bridges (2010) and Usher et al. (2019), with sample sizes of 92 and 61 GCs, respectively. The boxy lines are the actual histograms, while the smooth curves show the distributions smoothed by the measurement uncertainties. These are computed through a summation over normal distributions with the respective ages as the means and their uncertainties as the standard deviations. The shaded region between 8 Gyr and 11 Gyr indicates the estimated time of the GSE merger (Belokurov et al., 2018; Helmi et al., 2018).
Figure 2: _Top_: Cumulative GC age distributions of the MW from Usher et al. (2019) (blue) and of two modeled GC populations in simulated MW-mass galaxies from Valenzuela et al. (2021) (red and orange). _Middle_: Age distributions of the same three GC populations as in the top panel. The smooth distributions are smoothed using an assumed uncertainty of 0.75 Gyr for the GC ages. These are computed through a summation over normal distributions with the respective ages as the means and their uncertainties as the standard deviations. For the sample from Usher et al. (2019), the distribution was scaled by a factor of four to be comparable to the modeled populations. _Bottom_: Dark matter virial mass evolution of the two modeled galaxies, showing their accretion histories. The shaded region between 8 Gyr and 11 Gyr indicates the estimated time of the GSE merger (Belokurov et al., 2018; Helmi et al., 2018).
only experiences mini to minor mergers afterwards (orange line in the bottom panel of Fig. 2), the other galaxy has a second major merger at a later time of around 2-4 Gyr ago (red line). The lack of GCs having formed around that time is a clear indication that the merger was rather gas-poor (i.e., dry). The formation history of the first galaxy is therefore more similar to that inferred for the MW ("MW-analog"), while the other galaxy has had a much more violent recent history ("non-MW-analog"). The fact that the non-MW-analog GC age distribution seen in the middle panel of Fig. 2 is more similar to the observed one by Usher et al. (2019) only indicates that the major merger around 10 Gyr ago is more similar to the GSE merger in terms of their GCs formed. However, the second major merger around 2-4 Gyr ago is in no way comparable to the MW, making it the non-MW-analog.
In conclusion, the model makes the following predictions: wet mergers with a large amount of cold gas are capable of producing a bimodal or even multimodal GC age distribution for a galaxy, providing an indication for a type of event that is generally difficult to trace through other means. However, dry mergers with little to no cold gas do not result in noticeable signatures in the GC age distributions. This is the case for the majority of late-time mergers, but at higher redshifts gas-rich mergers are increasingly more common and beyond \(z\approx 2\) the most common kind of merger event (e.g., Bournaud et al., 2011). Thus, GC ages provide a means to probe the very early turbulent formation times of galaxies by tracing their massive wet merger events.
## 4 Discussion: GCs in the Milky Way
Having the model prediction that wet mergers leave an imprint on the GC age distribution of a galaxy, we will test it through the available GC measurements in the MW. For this we can make use of additional properties currently unavailable for GCs around other galaxies, such as kinematic phase-space information and more accurate age measurements.
### Milky Way Diagnostics
To address some of the caveats brought up in Sect. 3.1, we use further data available on the MW GCs to study to what extent the model prediction can be applied to the MW and its GC population. Using the GC progenitor assignments by Forbes (2020), the age distributions for four of the samples (Forbes & Bridges, 2010; Dotter et al., 2010, 2011; VandenBerg et al., 2013; Cabrera-Ziri & Conroy, 2022) can be split up by those assignments. Figure 3 shows the age distributions for the GCs formed in-situ in the MW and those associated with GSE and the other GC host progenitors, for each of the four samples. This figure uses the age uncertainties to smooth the distributions. For the total GC age distributions see Fig. A.1, which shows that for the smoothed distributions, only the total age distribution from Forbes & Bridges (2010) shows a hint at a bimodality. This bimodality is smoothed out when taking the mean GC ages from Kruijssen et al. (2019), which may lead to GC age biases (discussed in Appendix A.1). See Appendix A.3 for the unsmoothed age distributions plotted as histograms for the most relevant GC host progenitor groups that correspond to Fig. 3.
The two smaller samples from VandenBerg et al. (2013) and Cabrera-Ziri & Conroy (2022) do not reveal further information besides an indication for Koala GCs to have formed at slightly later times for the measurements by VandenBerg et al. (2013). In contrast, both of the other samples show that there is a physical reason for the bimodal GC age distribution shown in Fig. 1: in the largest sample by Forbes & Bridges (2010), it is clearly seen that the younger peak is dominated by GSE associated GCs, with contributions from the Helmi Streams and the high-energy GCs. This is less pronounced in the sample by Dotter et al. (2010, 2011), where the high-energy GCs contribute most of the young GCs, albeit there is also a significant contribution from GSE. The fact that these GCs have a different origin than the other MW GCs is known through the age-metallicity relation (e.g., Leaman et al., 2013; Kruijssen et al., 2019; Forbes, 2020), in which the GCs associated with GSE and the other progenitor galaxies follow their own tracks. While the Sagittarius dwarf is expected to have fallen into the MW around 8-9 Gyr ago on a most likely rather circular orbit, its extended time range of GC formation could be a result of tidally induced star cluster formation during its orbit around the MW while there was still enough cold gas available (e.g., Williams et al., 2022, for a study on young clusters in the Small Magellanic Cloud, which are suggested to have formed through tidal interactions with the Large Magellanic Cloud).
It is curious, however, that there appears to be a hint at a non-unimodal age distribution within the GSE GCs, which can be seen more clearly for Forbes & Bridges (2010) than for Dotter et al. (2010, 2011) in Fig. 3. The bimodality is also present when using the mean GC ages from Kruijssen et al. (2019), which is even slightly more prominent when using those ages, despite the overall distribution being more smoothed out (Appendix A.1). Additionally, it is also present when using the GC progenitor associations from both Massari et al. (2019) and Callingham et al. (2022), and it is actually much more clearly visible for their classifications (Appendix A.2). The GCs associated with GSE are therefore shown to have an age bimodality across multiple different studies of their ages and association models with the GSE. The reason the bimodality is not seen in the samples from Dotter et al. (2010, 2011) and VandenBerg et al. (2013) is that they each only contain 13 GCs associated with GSE, compared to 21 GCs from Forbes & Bridges (2010). Considering the raw age data without uncertainties, both distributions are actually bimodal (Fig. A.4), with the caveat of there being a very small amount of GCs involved.
Still, if the signal is real, it can be brought together with the model prediction in a straightforward manner. As the model forms GCs in small halos at early times (first pathway as described in Sect. 2.1), a GSE satellite galaxy will host its own GCs as it falls into the MW. We will refer to these GCs as the _accreted GCs_ in the following. Assuming there is enough cold gas available, the merger event will then trigger GC formation through the gas collision and tidal forces between the two galaxies (_merger-induced GCs_ in the following, second pathway as described in Sect. 2.1; Ashman & Zepf, 1992; Williams et al., 2022). This scenario leads to multiple properties arising for the GCs: (1) The age distribution of the combined early-formed accreted and merger-induced GCs will be bimodal. (2) Assuming that GSE brings in its own cold gas, many of the merger-induced GCs will also be formed from that gas, or a mixture of that and the MW's gas and therefore have similar metallicities and phase-space properties as those of the accreted GCs. This would then also lead to such GCs being associated with GSE through an analysis of phase-space and the age-metallicity relation. (3) Assuming the merger leads to violent gas interactions, it is possible that some recently formed merger-induced GCs are ejected from the overall orbit of GSE, leading to unassociated high-energy GCs. In that case, the high-energy GCs could also be associated with GSE and thus the bimodality in the GSE GC age distribution would be even more enhanced. It is also possible for GCs to distribute themselves further apart in phase space through other dynamical processes as shown by Pagnini
et al. (2023), potentially ending up in the high-energy regime. This may occur in a similar fashion to the _Splash_ (e.g., Bonaca et al. 2017; Belokurov et al. 2020), a group of more metal-rich stars in the MW halo that appear to have been formed in-situ and dynamically ejected around the time of the GSE merger (e.g., Belokurov et al. 2020; Ciuca et al. 2023). Of course, this would not result in a change in age and metallicity of the formed GCs, though Ciuca et al. (2023) argue that the GSE merger could have first driven down the metallicity, which was then again enriched by the induced starburst.
While it is not possible to prove this theory with the current state of GC age precision and the difficulties of kinematic associations, the GC formation model indicates that the GCs associated with GSE are not only those that were brought in by the accreted galaxy, but also those that were formed through the wet merger. Additionally, it is possible that some of the unassociated high-energy GCs were also formed in the process of the GSE merger, though it should be noted that many of those GCs have lower measured metallicities than those associated with GSE (Forbes 2020). In turn, this could be a result of the metallicity lowering through the merger (Ciuca et al. 2023). However, analyzing and modeling the details of GC metallicities in such a gas-rich major merger scenario are beyond the scope of this work and will be addressed in a future study.
### Model Diagnostics
From Eq. 3 it is possible to obtain the number of GCs expected from the model to form through a merger event based on the cold gas mass, \(M_{\rm gas}\). Vincenzo et al. (2019) estimated GSE to have brought in a cold gas mass of \(6.62\times 10^{9}\,{\rm M}_{\odot}\). For the MW, we estimated a range of possible cold gas masses by determining the cold gas masses of galaxies at \(z=2\) in the hydrodynamical
Figure 3: Globular cluster age distributions in the MW from Forbes & Bridges (2010), Dotter et al. (2010, 2011), VandenBerg et al. (2013), and Cabrera-Ziri & Conroy (2022), split up according to their likely progenitors from Forbes (2020). The shown classifications are the Milky Way (MW), Gaia-Sausage/Enceladus (GSE), unassociated high-energy GCs (H-E), unassociated low-energy GCs (L-E, which was given the name _Koala_ by Forbes 2020), the Helmi Streams (H99), Sequoia (Seq), and Sagittarius (Sag). The total number of GCs in the respective sample is indicated in the top left. The distributions are computed from the GC ages and their uncertainties through a summation over normal distributions with the respective ages as the means and their uncertainties as the standard deviations. The shaded region between 8 Gyr and 11 Gyr indicates the estimated time of the GSE merger (Belokurov et al. 2018; Helmi et al. 2018). See Fig. 4 for the corresponding histograms of the most relevant GC progenitor groups without taking the uncertainties into account.
cosmological simulation _Magneticum Pathfinder_1 Box4 (uhr) (with a side length of \(68\,\mathrm{Mpc}\) and a gas particle mass of \(m_{\mathrm{gas}}=1.0\times 10^{7}\,\mathrm{M}_{\odot}\)). The galaxies contained within it have been shown to agree well with observations across a broad range of properties (see Teklu et al., 2015 for details on the implementations and Valenzuela & Remus, 2022 for an overview of comparisons to observations). For the galaxies at \(z=2\) with virial masses \(5\times 10^{11}\,\mathrm{M}_{\odot}\lesssim M_{\mathrm{vir}}\leq 8\times 10^{11}\,\mathrm{M}_{\odot}\), the typical cold gas mass (which we define as star-forming gas particles with temperatures below \(10^{5}\,\mathrm{K}\)) is \(M_{\mathrm{gas}}=(2.1\pm 0.6)\times 10^{10}\,\mathrm{M}_{\odot}\). We have computed the expected numbers of formed GCs for a range of different MW cold gas masses between \(1.0\times 10^{10}\,\mathrm{M}_{\odot}\) and \(7.5\times 10^{10}\,\mathrm{M}_{\odot}\), leading to 6 to 22 GCs being formed (Table 1).
Footnote 1: www.magneticum.org
As seen in the GC age distribution from Forbes & Bridges (2010), there are 12 out of 21 GCs with available ages associated with GSE and 7 out of 9 unassociated high-energy GCs in the time range of GSE and the secondary peak of the overall MW GC age distribution. While the latter GCs will surely not all be related to the GSE merger, the GC sample also does not include all GCs found in the MW (around 50-60%). Overall, the number of GCs expected from the model to be formed through such a merger aligns well with the observed number of GCs found to be associated with GSE or to be unassociated with high energies, while also taking into account the statistical incompleteness of the GC sample. This supports the prediction that multimodal GC age distributions can be used to trace wet mergers.
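For reference, the values in Table 1 can be reproduced directly from Eq. (3). The short script below is a sketch under the assumption (ours) that the cold gas available to the merger is the sum of the assumed MW reservoir and the GSE cold gas mass of Vincenzo et al. (2019); under this assumption the rounded numbers match Table 1. It only requires `numpy` and `scipy`.

```python
import numpy as np
from scipy.special import lambertw

eta_gc, m_min = 0.5, 1e5                # best-fitting model parameters (Sect. 2.1)
m_gas_gse = 6.62e9                      # GSE cold gas mass (Vincenzo et al. 2019), in Msun

for m_gas_mw in (1.0e10, 2.5e10, 5.0e10, 7.5e10):
    m_gas = m_gas_mw + m_gas_gse        # assumed total cold gas involved in the merger
    n_exp = np.exp(lambertw(1.8e-4 * eta_gc * m_gas / m_min).real) - 1.0
    print(f"M_gas,MW = {m_gas_mw:.1e} Msun  ->  <N> = {n_exp:.1f}")
# Rounded, this gives 6, 11, 17, and 22 GCs for the four assumed MW gas masses.
```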
## 5 Discussion: Extension to Nearby Galaxies
While the GC age measurements are by far the most accurate for the MW due to resolved measurements of the clusters, the general properties from integrated GC age measurements can still give indications about the wet merger history of the host galaxy, through the relative ages between the GCs. One of the galaxies studied by Usher et al. (2019) is NGC 3115, a fast-rotating S0 field galaxy with stellar mass \(M_{\ast}=9\times 10^{10}\,\mathrm{M}_{\odot}\) and virial mass \(M_{\mathrm{vir}}=1.2\times 10^{12}\,\mathrm{M}_{\odot}\)(Forbes et al., 2016, assuming an NFW profile such that the virial mass is a factor 10 larger than the DM mass within \(8\,R_{e}\), where \(R_{e}\) is the effective radius, the radius within which half the light of the galaxy is emitted). It has \(550\pm 80\) GCs (Harris et al., 2013) and features multimodal age distributions (based on 116 measured GC ages from Usher et al., 2019; top panel of Fig. 4). The multimodal behavior of the ages is even retained when smoothing the histogram with a Gaussian kernel of 1.5 Gyr (smooth line), which is a value we select to illustrate the effect of smoothing that can be caused by measurement uncertainties. There is in fact a galaxy in the simulation that has a very similar GC age distribution as NGC 3115, which can here be seen in the middle panel of Fig. 4 (the similarity of the distributions is seen especially well in the cumulative distributions shown in fig. 17 of Valenzuela et al., 2021). The simulated galaxy has a virial mass of \(M_{\mathrm{vir}}=1.8\times 10^{12}\,\mathrm{M}_{\odot}\), similar to that of NGC 3115, and a total of 525 GCs, consistent with NGC 3115, which places it 0.2 dex above the mean linear scaling relation from Burkert and Forbes (2020), but still within the observed scatter. Interestingly, both the observed and simulated ages feature three minor peaks in the distribution without smoothing over it, although this could very well be a coincidence given the large uncertainties for the observational measurements. For the simulation this means that there were overall three especially gas-rich mergers leading to an increased amount of GC formation. This occurred during a time of steady assembly between 6 Gyr and 12 Gyr ago (bottom panel of Fig. 4).
This agrees well with the inferred formation history of NGC 3115 obtained through kinematics from IFU data and tracer populations, the metallicity profiles, its mass distribution, and the study of its morphological components: it is believed to have experienced an early gas-rich accreting phase followed by a lack of significant mergers thereafter (Arnold et al., 2011; Guerou et al., 2016; Poci et al., 2019; Buzzo et al., 2021). In particular, this supports the prediction of GC ages being able to trace wet mergers, also for galaxies outside the Local Group.
Such behavior is not the norm, however. This could already be seen from the median age distributions presented by Valenzuela et al. (2021) for all the virial mass bins, which show that generally GC populations are dominated by old GCs like those of the MW. For most galaxies, GC formation bursts are too close in time to the oldest GC populations to be able to distinguish them without having further properties available like in the MW, or the bursts are not significant enough due to a lack of cold gas as gas-rich merger events become less and less frequent with time.
One such case is M 31, for which the GC age measurements of Wang et al. (2021) and Cabrera-Ziri et al. (in prep.) show no clear bimodal distribution (Fig. 5). At most, there could be a hint at some additional modes between 6 Gyr and 10 Gyr ago in the data from Cabrera-Ziri et al. (in prep.) (lower panel). However, the numbers of GCs in these modes are small (6 around 6-8 Gyr ago and 8 around 8-10 Gyr ago), so we cannot exclude that they originate from statistical uncertainties. If they do not, this would indicate only small gas-rich mergers given the small numbers of GCs produced. As discussed in Sect. 2.2.2, the absolute ages between the two measured distributions should not be compared with each other due to differences in the measurement techniques, which led to the much younger determined ages for Wang et al. (2021). In terms of their relative ages, however, both samples show that significant features in the GC age distributions are not found for all galaxies and strong features are only visible for mergers that have a sufficient amount of gas. Additionally, which features are identifiable in GC age distributions will always depend on how large the measurement errors are, which smooth out the data, which first removes the signatures from smaller mergers or those with smaller gas fractions. Thus, the method is the most reliable in detecting wet mergers with large mass fractions. Finally, note that the sample from Cabrera-Ziri et al. (in prep.) is biased towards metal-rich GCs (Sect. 2.2.2). However, gas-rich major mergers are expected to involve the more metal-rich GCs as opposed to metal-poor ones, such that we believe the implications of our analysis to be unaffected.
Observationally, it has been proposed that M 31 experienced a major merger at around 2 Gyr ago, possibly with the progenitor of M 32 (D'Souza and Bell, 2018, 2018; Hammer et al., 2018). A large peak in star formation between 2 Gyr and 4 Gyr ago in the disk and outskirts of M 31 (Bernard et al., 2012, 2015; Williams et al., 2017) is likely related to this event. Using planetary nebulae,
| \(M_{\mathrm{gas,MW}}/\mathrm{M}_{\odot}\) | \(N_{\mathrm{GC,formed}}\) |
| --- | --- |
| \(1.0\times 10^{10}\) | 6 |
| \(2.5\times 10^{10}\) | 11 |
| \(5.0\times 10^{10}\) | 17 |
| \(7.5\times 10^{10}\) | 22 |

Table 1: Predicted number of GCs, \(N_{\mathrm{GC,formed}}\), formed through the GSE merger with the MW for different assumed MW total cold gas masses, \(M_{\mathrm{gas,MW}}\). The GSE galaxy is assumed to have had a cold gas mass of \(6.62\times 10^{9}\,\mathrm{M}_{\odot}\) (Vincenzo et al., 2019).
Bhattacharya et al. (2023) found further evidence for a wet major merger of M 31 2.5 Gyr to 4 Gyr ago. In fact, the full sample by Wang et al. (2021) also includes young stellar clusters with ages below 1.5 Gyr, which could potentially also have been formed as a result of the gas brought in by the merger and would not be referred to as GCs yet. Similarly, the distribution by Cabrera-Ziri et al. (in prep.) also finds one GC with an age of 2.5 Gyr, which could coincide with that merger. However, one single object is not significant enough to be conclusive on its own.
It has also been determined that the star formation rate was very low before the recent peak, with most stars having formed prior to 8 Gyr ago (Williams et al., 2017). Due to the smooth and very massive stellar halo observed for M 31, it is estimated that the merger history was dominated by many smaller accretion events (e.g., Ibata et al., 2014; Mackey et al., 2019). This scenario is supported by the model prediction presented in this work, in which no single sufficiently massive merger exists that would lead to a significant GC formation burst.
Finally, the massive elliptical galaxy NGC 1407 also lacks a significant second peak in its GC age distribution, though there
Figure 4: _Top and middle_: Age distributions of the GC population in NGC 3115 from Usher et al. (2019), with a sample of 116 GCs, and of the modeled GC population in an NGC 3115-analog galaxy from Valenzuela et al. (2021) with 525 GCs. The smooth distributions are smoothed using an assumed uncertainty of 1.5 Gyr for the GC ages. These are computed through a summation over normal distributions with the respective ages as the means and their uncertainties as the standard deviations. _Bottom:_ Dark matter virial mass evolution of the simulated NGC 3115-analog galaxy, showing its accretion history.
Figure 5: Globular cluster age distribution in M 31 from Wang et al. (2021) and Cabrera-Ziri et al. (in prep.), with sample sizes of 293 GCs and 136 GCs, respectively. The ages in the sample of Wang et al. (2021) correspond to their determined old clusters (\(t_{\rm age}>1.5\) Gyr). For further comparisons with previous GC age determinations in M 31, see fig. 8 of Wang et al. (2021). The smooth lines show the distributions smoothed by the measurement uncertainties. These are computed through a summation over log-normal distributions with the respective ages as the means and their uncertainties as the standard deviations. Note that for the logarithmic uncertainties, the visualization in linear space is skewed with respect to the peak.
is a tail with a slight peak towards younger ages around 6-9 Gyr ago and potentially another around 10 Gyr ago (Fig. 6). Due to the integrated age measurements, this could be the result of underestimated ages for those GCs as the peak again consists of only very few GCs. Overall, it appears that from the GC ages NGC 1407 has not experienced any massive wet mergers since the early buildup phase of the galaxy, and at most a later merger with a low cold gas fraction. In the latter case, it should be expected that there would also be a sign of late star formation activity in the stellar populations themselves. In fact, Spolaro et al. (2008) found from their stellar population measurements of NGC 1407 that the stars are uniformly old, having formed around 12 Gyr ago.
A kinematically decoupled core (KDC) hints at a major merger of NGC 1407 with gas fractions between 15% and 40% (Hoffman et al., 2010; Schulze et al., 2017). While Forbes and Remus (2018) found that a simulated galaxy from _Magneticum_ with a similar size and mass as NGC 1407 had a late major merger around 8 Gyr ago, it had not been selected based on further properties such as stellar ages or metallicity gradients. However, Ferre-Mateu et al. (2019) found in their observations of NGC 1407 that the KDC is slightly younger than the rest of the galaxy (we estimate a difference of around 1 Gyr based on their fig. 5), suggesting that a wet major merger could have occurred slightly later than the early buildup of the galaxy. This scenario is compatible with the GC age distribution found by Usher et al. (2019): the lack of late gas-rich mergers is consistent with no large amount of late GC formation, and the slight peak in the GC ages could be related to the wet major merger that formed the KDC. Such a merger is expected to have had a relatively high gas fraction (Hoffman et al., 2010), potentially forming many GCs as a result. However, due to the violent nature of such a merger, many of those systems would likely be disrupted in the same process, leading to a smaller peak in the age distribution. Since the GC sample analyzed by Usher et al. (2019) is biased towards central GCs due to the SLUGGS survey having been focused on the inner regions, it is expected that it would be more likely to pick up signatures from major mergers.
## 6 Conclusion
In this work, we have used the GC formation model from Valenzuela et al. (2021) with dual formation pathways to predict that massive wet mergers with enough cold gas can leave an imprint on the age distribution of the GC population in the host galaxy. This imprint results in a bimodal or even multimodal distribution, indicating when the wet mergers occurred. This prediction is in part also a consequence of the idea that red GCs tend to form through mergers that also induce star formation, thus resulting in properties that overall trace the underlying stellar component, such as spatial, kinematic, or chemical properties (e.g., Brodie and Strader, 2006; Pota et al., 2013; Dolfi et al., 2021). In contrast, mergers with little to no gas are not traced by the GC ages since the lack of gas means that no significant number of GCs could be formed in the process.
The prediction is discussed for the MW in detail. We find that a hint at bimodality visible in the pure GC age distributions compiled by Forbes and Bridges (2010) can be further disentangled by combining the data with phase-space information on the GCs to map which galaxy progenitors the GCs are likely associated with (e.g., Massari et al., 2019; Forbes, 2020; Callingham et al., 2022). We find the later peak in the GC age distribution to correspond to the GCs associated with GSE, the last major merger of the MW (Belokurov et al., 2018; Helmi et al., 2018), and in part also to the unassociated high-energy GCs and the GCs of the Helmi Streams. In fact, the age distribution of the GSE GCs appears to also have a bimodality. Since GSE is expected to have been a massive and gas-rich merger, we suggest that GSE not only brought in its own older GCs, but also formed a second group of GCs through the merger with the MW. The second group of GCs would be located near the first in phase space and lie on the same age-metallicity relation, as is also found for them in the observations. We further suggest that some of the unassociated high-energy GCs may also originate from the GSE merger, since it is possible that the violent merger dynamics would eject some GCs, or that GCs migrate away in phase space over time (Bonaca et al., 2017; Belokurov et al., 2020; Pagnini et al., 2023).
Two simulated MW-mass galaxies with GC age distributions similar to that of the MW are found to both have had a wet major merger around the same time as GSE (around 8 Gyr to 10 Gyr ago). One of the two then evolved only through smooth accretion and mini and minor mergers, as is believed to have been the case for the MW. In contrast, the other simulated galaxy encountered a dry merger at later times that is not traced by the GC ages (for those kinds of mergers, other kinds of indicators have to be used).
We also tested the model with three other observed galaxies, NGC 3115, M 31, and NGC 1407, for which GC ages have been obtained. For NGC 3115, the observed GC age distribution is clearly multimodal with a considerable population of younger GCs. We were able to find a simulated galaxy of comparable virial mass and a similar GC age distribution. It underwent an early phase of multiple wet mini and minor mergers that led to the formation of the younger GC population, after which it experienced no further significant mergers. This agrees well with the expected formation history of NGC 3115 inferred from other tracers (Arnold et al., 2011; Guerou et al., 2016; Poci et al., 2019; Buzzo et al., 2021) and supports the prediction that additional modes in the GC age distribution trace wet mergers.
Figure 6: Globular cluster age distribution in NGC 1407 from Usher et al. (2019) with a sample size of 213 GCs. The smooth line shows the distribution smoothed using an assumed uncertainty of 1.5 Gyr for the GC ages. These are computed through a summation over normal distributions with the respective ages as the means and their uncertainties as the standard deviations.
The GC age distribution of M 31 features no bimodality, indicating that it experienced no significant late wet mergers. However, it shows two or even three small peaks in the GC age distributions at 2-3 Gyr and at 7 and 9 Gyr, indicating either small merger events with large gas fractions or large merger events with low gas fractions, with the former being more likely than the latter. Since these peaks are small and the uncertainties are large, this is not conclusive, however. Similarly, NGC 1407 features no clear bimodality, but a possible second late peak in the GC age distribution may be related to an early wet major merger that led to the kinematically distinct core found in the overall very old galaxy. These are examples for the most common case for galaxies: the simulation predicts that on average, galaxies do not feature strong indications for wet mergers in their GC age distributions, but small wet mergers can still leave minor peaks in the GC age distribution.
To conclude, the age distribution of GCs can be used as a tracer for wet mergers of galaxies, which are generally more difficult to infer from observations. While the old age peak of the GC age distribution does not help in constraining the merger history, as the old populations from the main progenitor of a galaxy mix with the old GCs brought in through other merging galaxies of all kinds, the young GCs with ages less than around 11 Gyr are formed in the otherwise untraceable wet merger events. However, due to the current large observational uncertainties in determining GC ages from integrated measurements, further development of the age determination techniques will be essential to better understand individual galaxies' formation histories. Finally, increasing the number of galaxies with accurate GC age measurements will also help improve our understanding of GC formation and set more constraints on current GC formation models. We thus propose that this is a good method to infer the wet merger times of external galaxies from observations.
###### Acknowledgements.
We thank Ivan Cabrera-Ziri and Giulia Pagnini for helpful discussions. LMV acknowledges support by the German Academic Scholarship Foundation (Studienstiftung des deutschen Volkes), the Marianne-Plehn-Program of the Elite Network of Bavaria, and the COMPLEX project from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program grant agreement ERC-2019-AdG 882679. This research was supported by the Excellence Cluster ORIGINS, funded by the Deutsche Forschungsgemeinschaft under Germany's Excellence Strategy - EXC-2094-390783311. The following software was used for this work: astropy (Astropy Collaboration et al., 2013, 2018), jupyter (Kluyver et al., 2014), matplotlib (Hunter, 2007), numpy (Harris et al., 2020), pandas (Wes McKinney, 2010; The Pandas Development team, 2023), Julia (Bezanson et al., 2017), CSV.jl (Quinn et al., 2023), and DataFrames.jl (Kamiński et al., 2023).
|
2309.03627 | Precise Deviations for a discrete Hawkes process | In this paper, we study precise deviations including precise large deviations
and moderate deviations for discrete Hawkes processes for large time
asymptotics by using mod-$\phi$ convergence theory. | Ying-Li Wang, Ping He | 2023-09-07T10:48:09Z | http://arxiv.org/abs/2309.03627v1 | # Precise deviations for a discrete Hawkes process
###### Abstract.
In this paper, we study precise deviations including precise large deviations and moderate deviations for discrete Hawkes processes for large time asymptotics by using mod-\(\phi\) convergence theory.
Key words and phrases:precise deviations, discrete Hawkes processes, mod-\(\phi\) convergence theory
## 1. Introduction
### Continuous-time Hawkes processes and their limit theorems
The Hawkes process is a continuous-time stochastic model that captures temporal self-exciting phenomena; it was first introduced by Hawkes[12]. In particular, the linear Hawkes process has been well studied and widely used in practice because of its mathematical tractability, especially the immigration-birth representation. There are applications in neuroscience, e.g. Johnson[13], DNA modeling, e.g. Gusto and Schbath[11], finance, and many other fields. Applications of the Hawkes process in finance include market order modeling, e.g. Bauwens and Hautsch[2], Bowsher[4] and Large[15], value-at-risk, e.g. Chavez-Demoulin et al.[6], and credit risk, e.g. Errais et al.[8].
Let us introduce the Hawkes processes. Let \(N\) be a simple point process on \(\mathbb{R}\), and let \(\mathcal{F}_{t}^{-\infty}:=\sigma(N(C),C\in\mathcal{B}(\mathbb{R}),C\subset(- \infty,t])\) be an increasing family of \(\sigma\)-algebras. Any nonnegative \(\mathcal{F}_{t}^{-\infty}\)-progressively measurable \(\lambda_{t}\) with
\[\mathbb{E}\left[N(a,b]|\mathcal{F}_{a}^{-\infty}\right]=\mathbb{E}\left[\int_ {a}^{b}\lambda_{s}ds|\mathcal{F}_{a}^{-\infty}\right]\]
a.s. for all intervals \((a,b]\) is called an \(\mathcal{F}_{t}^{-\infty}\)-intensity of \(N\). We use the notation \(N_{t}:=N(0,t]\) to denote the number of points in the interval \((0,t]\).
A general Hawkes process is a simple point process \(N\) admitting an \(\mathcal{F}_{t}^{-\infty}\) intensity
\[\lambda_{t}:=\lambda\left(\int_{-\infty}^{t}h(t-s)N(ds)\right),\]
where \(\lambda(\cdot):\mathbb{R}^{+}\to\mathbb{R}^{+}\) is locally integrable and left continuous, \(h(\cdot):\mathbb{R}^{+}\to\mathbb{R}^{+}\), and we always assume that \(\|h\|_{L^{1}}=\int_{0}^{\infty}h(t)dt<\infty\). We always assume that \(N(-\infty,0]=0\), i.e. the Hawkes process has an empty history. In the literature, \(h(\cdot)\) and \(\lambda(\cdot)\) are usually referred to as the exciting function and the rate function, respectively. The Hawkes process is linear if \(\lambda(\cdot)\) is linear and nonlinear otherwise. In the linear case, the stochastic intensity can be written as
\[\lambda_{t}=\nu+\int_{0}^{t-}h(t-s)N(ds).\]
Because of the lack of an immigration-birth representation and of computational tractability, nonlinear Hawkes processes are much less studied. Nonlinear Hawkes processes were first introduced by Bremaud et al. [5].
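To make the self-excitation mechanism above concrete, the following sketch (ours, not part of the cited works) simulates a linear Hawkes process with an exponential exciting function via Ogata's thinning algorithm; the kernel shape and all parameter values are illustrative assumptions.

```python
import numpy as np

def simulate_exp_hawkes(nu=1.0, branching=0.5, beta=2.0, T=1000.0, seed=0):
    """Ogata thinning for a linear Hawkes process with exciting function
    h(u) = branching * beta * exp(-beta * u), so that ||h||_{L1} = branching < 1."""
    rng = np.random.default_rng(seed)
    t, excitation, events = 0.0, 0.0, []
    while True:
        lam_bar = nu + excitation              # upper bound: the intensity only decays until the next event
        w = rng.exponential(1.0 / lam_bar)     # candidate waiting time
        excitation *= np.exp(-beta * w)        # decay the self-excitation to the candidate time
        t += w
        if t >= T:
            break
        if rng.uniform() * lam_bar <= nu + excitation:   # thinning acceptance step
            events.append(t)
            excitation += branching * beta     # the intensity jumps by h(0) at each accepted event
    return np.array(events)

events = simulate_exp_hawkes()
print(len(events) / 1000.0, 1.0 / (1.0 - 0.5))  # empirical rate vs nu / (1 - ||h||_{L1}) = 2
```

The printed empirical rate should be close to the law-of-large-numbers limit recalled in the next paragraph.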
Let us review the limit theorems for linear Hawkes processes in the literature. It is well known that we have the law of large numbers \(\frac{N_{t}}{t}\rightarrow\frac{\nu}{1-\left\|h\right\|_{L^{1}}}\) as \(t\rightarrow\infty\). Bacry et al.[1] obtain a functional central limit theorem for multivariate Hawkes processes, and as a special case of their result,
\[\frac{N_{t}-\frac{\nu t}{1-\left\|h\right\|_{L^{1}}}}{\sqrt{t}}\to N\left(0,\frac{\nu}{(1-\left\|h\right\|_{L^{1}})^{3}}\right), \tag{1.1}\]
in distribution as \(t\rightarrow\infty\) under the assumption that \(\int_{0}^{\infty}t^{1/2}h(t)dt<\infty\). Bordenave and Torrisi[3] prove that \(\mathbb{P}(\frac{N_{t}}{t}\in\cdot)\) satisfies a large deviation principle with the rate function:
\[I(x)=x\log\left(\frac{x}{\nu+x\left\|h\right\|_{L^{1}}}\right)-x+x\left\|h \right\|_{L^{1}}+\nu, \tag{1.2}\]
if \(x\geq 0\), and \(I(x)=+\infty\) otherwise. The rate function is given in its Legendre-transform form in Bordenave and Torrisi [3], and (1.2) was first mentioned in Karabash and Zhu [14]. Moderate deviations for linear Hawkes processes are studied in Zhu [21].
For nonlinear Hawkes processes, Zhu [22] provides a comprehensive introduction. Zhu [20] studies the central limit theorem, and [24] obtains a level-3 large deviation principle, which yields the scalar large deviations as a by-product. When the system is Markovian, Zhu [25] obtains an expression for the rate function. Zhu [23] also studies limit theorems for a CIR process with Hawkes jumps.
The large deviations and moderate deviations for linear Hawkes processes are of the Donsker-Varadhan type, which only gives the leading-order term. On many occasions, more accurate estimates are desired, i.e. precise deviations. Gao and Zhu [10] use the recent mod-\(\phi\) convergence theory [9] to compute precise deviations, including precise large deviations and moderate deviations, for continuous-time linear Hawkes processes. The mod-\(\phi\) convergence theory shows that if we can characterize the convergence speed of the moment generating function and verify that the limit corresponds to an infinitely divisible distribution, then we obtain mod-\(\phi\) convergence, from which the precise deviations follow.
### Discrete Hawkes processes
In practical applications, data always come from discrete-time observations. As a result, there is a growing literature on discrete Hawkes models. Discrete Hawkes processes were first introduced by Seol [16], who studies limit theorems for a discrete-time Hawkes-type model with 0-1 arrivals, including the law of large numbers, the central limit theorem, and the invariance principle. Wang [17] studies the variant with Poisson arrivals and random marks, whose large and moderate deviations are established in [18].
In this paper, we also use mod-\(\phi\) convergence theory to study precise deviations, including precise large deviations and moderate deviations, for the discrete Hawkes processes introduced in [17]. Our results extend the results in [18] to some extent. Two important proof techniques used to characterize the mod-\(\phi\) convergence are another form of Abel's lemma and a discrete generalized Gronwall's inequality. We will show that the discrete case is very similar to the continuous case treated in [10].
## 2. Main Results
Before we introduce the discrete model and precise deviation results, let us first recall the definition of mod-\(\phi\) convergence in [9].
### Mod-\(\phi\) convergence
Let \((X_{n})_{n\geq 1}\) be a sequence of real-valued random variables and \(\mathbb{E}[e^{zX_{n}}]\) exist in a strip \(\mathcal{S}_{(c,d)}:=\{z\in\mathbb{C}:c<\mathcal{R}(z)<d\}\), with \(c<0<d\) extended real numbers, i.e. we allow \(c=-\infty\) and \(d=+\infty\) and \(\mathcal{R}(z)\) denotes
the real part of \(z\in\mathbb{C}\) throughout this paper. We assume that there exists a non-constant infinitely divisible distribution \(\phi\) with \(\int_{\mathbb{R}}e^{zx}\phi(dx)=e^{\eta(z)}\), which is well defined on \(\mathcal{S}_{(c,d)}\), and an analytic function \(\psi(z)\) that does not vanish on the real part of \(\mathcal{S}_{(c,d)}\) such that locally uniformly in \(z\in\mathcal{S}_{(c,d)}\),
\[e^{-t_{n}\eta(z)}\mathbb{E}[e^{zX_{n}}]\to\psi(z),\]
where \(t_{n}\to\infty\) as \(n\to\infty\). Then we say that \(X_{n}\) converges mod-\(\phi\) on \(\mathcal{S}_{(c,d)}\) with parameters \((t_{n})_{n\geq 1}\) and limiting function \(\psi\). Assume that \(\phi\) is a lattice distribution i.e., a distribution with support included in \(\gamma+\lambda\mathbb{Z}\) for some constants \(\gamma,\lambda>0\). Also assume that the sequence of random variables \((X_{n})_{n\geq 1}\) converges mod-\(\phi\) at speed \(O(t_{n}^{-v})\), that is
\[\sup_{z\in K}\left|e^{-t_{n}\eta(z)}\mathbb{E}[e^{zX_{n}}]-\psi(z)\right|\leq C _{K}t_{n}^{-v},\]
where \(C_{K}>0\) is some constant, for any compact set \(K\subset\mathcal{S}_{(c,d)}\). Then Theorem 3.2.2 in [9] states that for any \(x\in\mathbb{R}\) in the interval \((\eta^{\prime}(c),\eta^{\prime}(d))\) such that \(t_{n}x\in\mathbb{N}\), we have
\[\mathbb{P}(X_{n}=t_{n}x)=\frac{e^{-t_{n}F(x)}}{\sqrt{2\pi t_{n}\eta^{\prime \prime}(\theta^{*})}}\left(\psi(\theta^{*})+\frac{a_{1}}{t_{n}}+\frac{a_{2}}{ t_{n}^{2}}+\cdots+\frac{a_{v-1}}{t_{n}^{v-1}}+O\left(\frac{1}{t_{n}^{v}}\right) \right),\]
as \(n\to\infty\), where \(\theta^{*}\) is defined via \(\eta^{\prime}(\theta^{*})=x\), and \(F(x):=\sup_{\theta\in\mathbb{R}}\{\theta x-\eta(\theta)\}\) is the Legendre transform of \(\eta(\cdot)\), and if \(x\in\mathbb{R}\) and \(x\in(\eta^{\prime}(0),\eta^{\prime}(d))\), then, as \(n\to\infty\),
\[\mathbb{P}(X_{n}\geq t_{n}x)=\frac{e^{-t_{n}F(x)}}{\sqrt{2\pi t_{n}\eta^{ \prime\prime}(\theta^{*})}}\frac{1}{1-e^{-\theta^{*}}}\left(\psi(\theta^{*})+ \frac{b_{1}}{t_{n}}+\frac{b_{2}}{t_{n}^{2}}+\cdots+\frac{b_{v-1}}{t_{n}^{v-1}} +O\left(\frac{1}{t_{n}^{v}}\right)\right),\]
where \((a_{k})_{k\geq 1}\), \((b_{k})_{k\geq 1}\) are rational fractions in the derivatives of \(\eta\) and \(\psi\) at \(\theta^{*}\).
### The discrete model
For \(t\in\mathbb{N}\), let \(\alpha_{t}:=\alpha(t):\mathbb{N}\to\mathbb{R}_{+}\) be a positive function on \(\mathbb{N}\). The process has an empty history and \(X_{0}=N_{0}=0\). It is worth mentioning that \(\alpha(\cdot)\) is an exponential function in [19], and the model proposed in [17] is in fact an extension of the model in [19]. Define \(\left\|\alpha\right\|_{1}:=\sum_{t=0}^{\infty}\alpha_{t}\) (for convenience, set \(\alpha_{0}=0\)) as the \(l_{1}\) norm of \(\alpha\). Conditional on \(X_{t-1},X_{t-2},...,X_{1}\), we define \(X_{t}\) as a Poisson random variable with mean
\[\lambda_{t}:=\nu+\sum_{s=1}^{t-1}\alpha_{s}X_{t-s}.\]
Finally, we define \(N_{t}:=\sum_{s=1}^{t}X_{s}\). The moment generating function can be obtained directly from [17]:
\[\mathbb{E}[e^{zN_{t}}]= \exp\left(\nu\left(-t+\sum_{i=0}^{t-1}e^{f_{i}(z)}\right)\right), \tag{2.1}\]
where
\[f_{0}(z)=z,f_{1}(z)=z+(e^{z}-1)\alpha_{1},\ f_{s}(z)=z+\sum_{i=1}^{s}\alpha_{i }(e^{f_{s-i}(z)}-1).\]
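Before turning to the asymptotics of this moment generating function, the model just defined is easy to simulate directly from its definition; the finite-support choice of \(\alpha\) and the parameter values below are our own illustrative assumptions.

```python
import numpy as np

def simulate_discrete_hawkes(nu, alpha, T, seed=0):
    """alpha[s] is the weight on X_{t-s}; alpha[0] = 0 by the convention in the text."""
    rng = np.random.default_rng(seed)
    X = np.zeros(T + 1)
    for t in range(1, T + 1):
        lam_t = nu + sum(alpha[s] * X[t - s] for s in range(1, min(t, len(alpha))))
        X[t] = rng.poisson(lam_t)              # X_t | past ~ Poisson(lambda_t)
    return X

nu, alpha = 1.0, np.array([0.0, 0.3, 0.1])     # ||alpha||_1 = 0.4 < 1
X = simulate_discrete_hawkes(nu, alpha, T=50000)
print(X[1:].mean(), nu / (1.0 - alpha.sum()))  # N_t/t vs the LLN limit nu/(1 - ||alpha||_1)
```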
Wang [18] proves that, pointwise, we have
\[f_{\infty}(z)=z+(e^{f_{\infty}(z)}-1)||\alpha||_{1}.\]
Setting \(x(z):=e^{f_{\infty}(z)}\), we see that \(x(z)\) satisfies the algebraic equation
\[x(z)=e^{z+\|\alpha\|_{1}(x(z)-1)}.\]
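As a numerical illustration (the values of \(\alpha\) and \(z\) below are arbitrary choices of ours), the recursion for \(f_{s}(z)\) and the fixed-point equation for \(x(z)\) can be evaluated directly; for real \(z\leq\theta_{c}\) the iterates \(e^{f_{s}(z)}\) approach the root of the fixed-point equation satisfying \(x(z)\left\|\alpha\right\|_{1}\leq 1\).

```python
import numpy as np
from scipy.optimize import brentq

alpha = np.array([0.0, 0.3, 0.1])        # alpha_0 = 0; ||alpha||_1 = 0.4
a1 = alpha.sum()
theta_c = a1 - 1.0 - np.log(a1)          # ||alpha||_1 - 1 - log||alpha||_1 ~= 0.316
z = -0.2                                 # any real z <= theta_c

# f_s(z) = z + sum_{i=1}^{s} alpha_i (e^{f_{s-i}(z)} - 1)
f = [z]
for s in range(1, 200):
    f.append(z + sum(alpha[i] * (np.exp(f[s - i]) - 1.0) for i in range(1, min(s + 1, len(alpha)))))

# x(z) solves x = exp(z + ||alpha||_1 (x - 1)); take the root with x * ||alpha||_1 <= 1
x_z = brentq(lambda u: u - np.exp(z + a1 * (u - 1.0)), 1e-12, 1.0 / a1)
print(np.exp(f[-1]), x_z)                # e^{f_s(z)} converges to x(z)
```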
We notice that in [18], the rate function is written in Legendre transform expression since random marks are included. After removing marks, the rate function in [18] can be expressed explicitly as follows,
\[I(x)=x\log\left(\frac{x}{\nu+x\left\|\alpha\right\|_{1}}\right)-x+x\left\|\alpha \right\|_{1}+\nu, \tag{2.2}\]
if \(x\geq 0\), and \(I(x)=+\infty\) otherwise; we refer the reader to [3] and [14] for this result.
Choose \(\eta(z)=\nu\left(e^{z+(e^{f_{\infty}(z)}-1)||\alpha||_{1}}-1\right)=\nu\left( x(z)-1\right)\), and we have the following lemma.
**Lemma 2.1**.: _Assume there is a random variable \(Y\) such that \(\mathbb{E}[e^{zY}]=e^{\eta(z)}=e^{\nu(x(z)-1)}\); then \(Y\) has an infinitely divisible distribution._
Proof.: After replacing the \(L^{1}\) norm of the exciting function \(h\) in [10] by the \(l^{1}\) norm of \(\alpha\), this is a direct consequence of Lemma 6 in [10], which shows that \(\eta(\cdot)\) is the moment generating function of an infinitely divisible random variable \(Y\), i.e.
\[\mathbb{E}[e^{zY}]=e^{\eta(z)}=e^{\nu(x(z)-1)}.\]
We will show that \(\eta(\cdot)\) is exactly the function needed to characterize the mod-\(\phi\) convergence, i.e.
\[e^{-t\eta(z)}\mathbb{E}[e^{zN_{t}}]\longrightarrow\psi(z):=e^{\nu\varphi(z)},\]
as \(t\rightarrow\infty\), locally uniformly in \(z\) for \(\mathcal{R}(z)\leq\theta_{c}:=\left\|\alpha\right\|_{1}-1-\log\left\|\alpha \right\|_{1}\), where the limit function
\[\psi(z)=\exp\left(\nu\sum_{i=0}^{\infty}\left(e^{f_{i}(z)}-e^{f_{\infty}(z)} \right)\right).\]
**Proposition 2.2**.: _For any \(\theta\in\mathbb{R}\), and \(\theta\leq\theta_{c}\), where \(\theta_{c}:=\left\|\alpha\right\|_{1}-1-\log\left\|\alpha\right\|_{1}\), we have (i) \(x(\theta)\left\|\alpha\right\|_{1}\leq 1.\) (ii)\(x^{\prime}(\theta)\rightarrow\infty\) as \(\theta\uparrow\theta_{c}\)._
Proof.: The proof is exactly the same as [10] after replacing \(\left\|h\right\|_{L^{1}}\) with \(\left\|\alpha\right\|_{1}\).
**Lemma 2.3** (Another form of Abel's lemma).: _Assume \((b_{i})_{i\geq 1}\in l^{1}\) and denote \(B_{k}=\sum_{i=k+1}^{\infty}b_{i}\), \(k\geq 0\); then_
\[\sum_{k=1}^{p}a_{k}b_{k}=a_{1}B_{0}+\sum_{k=1}^{p-1}(a_{k+1}-a_{k})B_{k}-a_{p} B_{p},\ p\geq 2.\]
Proof.: The proof is similar to the proof of the classical Abel's lemma,
\[\sum_{k=1}^{p}a_{k}b_{k}= \sum_{k=1}^{p}a_{k}(B_{k-1}-B_{k})\] \[= \sum_{k=1}^{p}a_{k}B_{k-1}-\sum_{k=1}^{p}a_{k}B_{k}\] \[= a_{1}B_{0}+\sum_{k=1}^{p-1}a_{k+1}B_{k}-\sum_{k=1}^{p-1}a_{k}B_{ k}-a_{p}B_{p}\] \[= a_{1}B_{0}+\sum_{k=1}^{p-1}(a_{k+1}-a_{k})B_{k}-a_{p}B_{p}.\]
**Lemma 2.4** (Discrete generalized Gronwall's inequality).: _Let \(\left(p(n)\right)_{n\geq 1}\) and \(\left(q(n)\right)_{n\geq 1}\) be two nonnegative \(l^{1}\) sequences. If \(p(1)\leq g(1)\) and_
\[p(i)\leq\sum_{j=1}^{i-1}q(i-j)p(j)+g(i),\ i\geq 2,\]
_then_
\[p(i)\leq \sum_{j=1}^{i-1}Q(i-j)g(j)+g(i),\ i\geq 2, \tag{2.3}\]
_where_
\[Q(i)=\sum_{j=1}^{\infty}q^{*j}(i),\qquad q^{*j}(i)=\sum_{m=1}^{i-1}q^{*(j-1)}(m)\,q(i-m),\qquad q^{*1}(i)=q(i).\]
Proof.: We can prove it following the method in [7]:
\[p(i)\leq \sum_{j=1}^{i-1}q(i-j)\left(\sum_{m=1}^{j-1}q(j-m)p(m)+g(j)\right) +g(i) \tag{2.4}\] \[= \sum_{j=1}^{i-1}q(i-j)\sum_{m=1}^{j-1}q(j-m)p(m)+\sum_{j=1}^{i-1}q (i-j)g(j)+g(i)\] (2.5) \[= \sum_{j=1}^{i-1}p(j)q^{*2}(i-j)+\sum_{j=1}^{i-1}q^{*1}(i-j)g(j)+g (i). \tag{2.6}\]
By iterating, we have
\[p(i)\leq \sum_{j=1}^{i-1}Q(i-j)g(j)+g(i),\ i\geq 2. \tag{2.7}\]
**Lemma 2.5**.: _For any \(\mathcal{R}(z)\leq\theta_{c}\), where \(\theta_{c}:=\left\|\alpha\right\|_{1}-1-\log\left\|\alpha\right\|_{1}\),_
\[\varphi(z)=\sum_{i=0}^{\infty}\left(e^{f_{i}(z)}-x(z)\right)\]
_is well-defined and analytic, and as \(t\to\infty\),_
\[e^{-t\eta(z)}\mathbb{E}[e^{zN_{t}}]\longrightarrow\psi(z):=e^{\nu\varphi(z)},\]
_locally uniformly in \(z\). In addition, if \(\sum_{i=0}^{\infty}i^{v+1}\alpha_{i}<\infty\), then for any compact set \(K\), there exists some \(C_{K}>0\) such that \(\sup_{z\in K}|e^{-t\eta(z)}\mathbb{E}[e^{zN_{t}}]-e^{\nu\varphi(z)}|\leq C_{K }t^{-v}\)._
Proof.: First, it is obvious that \(x(z)\) is analytic in \(\mathcal{S}_{(-\infty,\theta_{c})}\), and for any positive integer \(t\), \(\sum_{i=0}^{t}\left(e^{f_{i}(z)}-x(z)\right)\) are analytic in \(\mathcal{S}_{(-\infty,\theta_{c})}\). To show \(\varphi(z)\) is well-defined and analytic in \(\mathcal{S}_{(-\infty,\theta_{c})}\), we need to prove
\[\sum_{i=0}^{t}\left(e^{f_{i}(z)}-x(z)\right)\to\varphi(z)\]
as \(t\to\infty\), locally uniformly in \(z\) for \(\mathcal{R}(z)<\theta_{c}\). In other words, we need to prove that for any compact set \(K\subset\{z\in\mathbb{C};\mathcal{R}(z)<\left\|\alpha\right\|_{1}-1-\log\left\| \alpha\right\|_{1}\}\),
\[\sum_{i=0}^{\infty}\sup_{z\in K}|e^{f_{i}(z)}-x(z)|<\infty.\]
In fact, from pages 11-12 of [18], we know that \(e^{f_{i}(z)}\to x(z)\) pointwise as \(i\to\infty\). Since
\[e^{f_{i}(z)}-e^{f_{\infty}(z)}= e^{f_{\infty}(z)}\cdot\left(e^{\sum_{j=1}^{i}\alpha_{j}(e^{f_{i-j }(z)}-e^{f_{\infty}(z)})-\sum_{j=i+1}^{\infty}\alpha_{j}(e^{f_{\infty}(z)}-1)}- 1\right),\]
which yields that
\[\sum_{j=1}^{i}\alpha_{j}(e^{f_{i-j}(z)}-e^{f_{\infty}(z)})\to 0,\text{ as }i\to\infty.\]
Furthermore, for any fixed \(\delta>0\) such that \((1+\delta)|x(z)|\left\|\alpha\right\|_{1}<1\), there exists \(M>0\), so that for any \(i\geq M\) and \(z\in K\), we have
\[\begin{split}\left|e^{f_{i}(z)}-x(z)\right|\leq&( 1+\delta)|x(z)|\left(\sum_{j=1}^{i}\alpha_{j}|e^{f_{i-j}(z)}-x(z)|+|x(z)-1| \sum_{j=i+1}^{\infty}\alpha_{j}\right).\end{split} \tag{2.8}\]
Therefore, we get that for any \(T>M\),
\[\begin{split}&\sum_{i=M}^{T}\sup_{z\in K}\left|e^{f_{i}(z)}-x(z) \right|\\ \leq&(1+\delta)\sup_{z\in K}|x(z)|\sum_{i=M}^{T} \sum_{j=1}^{i}\sup_{z\in K}|e^{f_{i-j}(z)}-x(z)|\alpha_{j}+(1+\delta)\sup_{z \in K}|x(z)||x(z)-1|\sum_{i=M}^{T}\sum_{j=i+1}^{\infty}\alpha_{j}\\ \leq&(1+\delta)\sup_{z\in K}|x(z)|\sum_{i=1}^{T} \sum_{j=1}^{i}\sup_{z\in K}|e^{f_{i-j}(z)}-x(z)|\alpha_{j}+(1+\delta)\sup_{z \in K}|x(z)||x(z)-1|\sum_{i=1}^{\infty}\sum_{j=i+1}^{\infty}\alpha_{j}\\ =&(1+\delta)\sup_{z\in K}|x(z)|\sum_{j=1}^{T}\alpha_ {j}\sum_{i=0}^{T-j}\sup_{z\in K}|e^{f_{i}(z)}-x(z)|+(1+\delta)\sup_{z\in K}|x (z)||x(z)-1|\sum_{j=1}^{\infty}j\alpha_{j},\end{split}\]
which implies that
\[\begin{split}&\sum_{i=0}^{T}\sup_{z\in K}\left|e^{f_{i}(z)}-x(z) \right|\\ \leq&\sum_{i=0}^{M}\sup_{z\in K}\left|e^{f_{i}(z)}-x( z)\right|+(1+\delta)\sup_{z\in K}|x(z)|\sum_{j=1}^{T}\alpha_{j}\sum_{i=0}^{T-j} \sup_{z\in K}|e^{f_{i}(z)}-x(z)|\\ &+(1+\delta)\sup_{z\in K}|x(z)||x(z)-1|\sum_{j=1}^{\infty}j\alpha _{j}\\ \leq&\sum_{i=0}^{M}\sup_{z\in K}\left|e^{f_{i}(z)}-x (z)\right|+(1+\delta)\sup_{z\in K}|x(z)|\left\|\alpha\right\|_{1}\sum_{i=0}^{ T}\sup_{z\in K}|e^{f_{i}(z)}-x(z)|\\ &+(1+\delta)\sup_{z\in K}|x(z)|\left(\sup_{x\in K}|x(z)|+1\right) \sum_{j=1}^{\infty}j\alpha_{j}.\end{split}\]
Let \(T\to\infty\), we have
\[\sum_{i=0}^{\infty}\sup_{z\in K}\left|e^{f_{i}(z)}-x(z)\right| \tag{2.9}\] \[\leq \frac{\sum_{i=0}^{M}\sup_{z\in K}\left|e^{f_{i}(z)}-x(z)\right|+( 1+\delta)\sup_{z\in K}|x(z)|\left(\sup_{x\in K}|x(z)|+1\right)\sum_{j=1}^{ \infty}j\alpha_{j}}{1-(1+\delta)\sup_{z\in K}|x(z)|\left\|\alpha\right\|_{1}}. \tag{2.10}\]
Hence, we conclude that \(\sum_{i=t}^{\infty}\sup_{z\in K}\left|e^{f_{i}(z)}-x(z)\right|\to 0\) as \(t\to\infty\), and so
\[\sum_{i=0}^{t}\left(e^{f_{i}(z)}-x(z)\right)\to\sum_{i=0}^{\infty}\left(e^{f_{i} (z)}-x(z)\right)=\varphi(z), \tag{2.11}\]
as \(t\to\infty\), locally uniformly in \(z\) for \(\mathcal{R}(z)<\theta_{c}\). Hence, \(\varphi(z)\) is well-defined and is analytic in \(\mathcal{S}_{(-\infty,\theta_{c})}\). By equations (2.1), (2.11) and the definitions of \(\psi(z)\) and \(\varphi(z)\), we have proved that locally uniformly in \(z\) for \(\mathcal{R}(z)<\theta_{c}\),
\[e^{-t(\nu(x(z)-1))}\mathbb{E}[e^{zN_{t}}]=\exp\left(\nu\sum_{i=0}^{t-1}\left(e ^{f_{i}(z)}-x(z)\right)\right)\longrightarrow\psi(z):=e^{\nu\varphi(z)},\ \text{as}\ t\to\infty.\]
To show the mod-\(\phi\) convergence at speed \(O(t^{-v})\), that is, that for any compact set \(K\) there exists some \(C_{K}>0\) such that \(\sup_{z\in K}\left|e^{-t(\nu(x(z)-1))}\mathbb{E}[e^{zN_{t}}]-e^{\nu\varphi(z)}\right|\leq C_{K}t^{-v}\), it suffices to show that for any compact set \(K\subset\{z\in\mathbb{C};\mathcal{R}(z)<\left\|\alpha\right\|_{1}-1-\log\left\|\alpha\right\|_{1}\}\),
\[\sum_{i=0}^{\infty}i^{v}\sup_{z\in K}\left|e^{f_{i}(z)}-x(z)\right|<\infty. \tag{2.12}\]
To see this, notice that
\[\sup_{z\in K}\left|e^{-t(\nu(x(z)-1))}\mathbb{E}[e^{zN_{t}}]-e^{ \nu\varphi(z)}\right|= \sup_{z\in K}\left|e^{\nu\varphi(z)}\left(e^{\nu\sum_{i=t}^{ \infty}\left(e^{f_{i}(z)}-x(z)\right)}-1\right)\right|\] \[\leq \sup_{z\in K}e^{\nu|\varphi(z)|}\left(e^{\nu\sum_{i=t}^{\infty} \sup_{z\in K}\left|e^{f_{i}(z)}-x(z)\right|}-1\right).\]
Thus it suffices to show that
\[\sum_{i=t}^{\infty}\sup_{z\in K}\left|e^{f_{i}(z)}-x(z)\right|\leq c_{K}t^{-v}, \tag{2.13}\]
for some \(c_{K}>0\). By Lemma 2.3,
\[\sum_{i=0}^{T}i^{v}\sup_{z\in K}\left|e^{f_{i}(z)}-x(z)\right| \tag{2.14}\] \[= \sum_{i=0}^{\infty}\sup_{z\in K}\left|e^{f_{i}(z)}-x(z)\right|+ \sum_{i=0}^{T-1}((i+1)^{v}-i^{v})\sum_{j=i+1}^{\infty}\sup_{z\in K}\left|e^{f _{j}(z)}-x(z)\right|\] (2.15) \[-T^{v}\sum_{i=T+1}^{\infty}\sup_{z\in K}\left|e^{f_{i}(z)}-x(z)\right|. \tag{2.16}\]
On the other hand, by Tonelli's theorem,
\[\sum_{i=0}^{T-1}((i+1)^{v}-i^{v})\sum_{j=i+1}^{\infty}\sup_{z\in K}\left|e^{f _{j}(z)}-x(z)\right|=\sum_{j=0}^{\infty}j^{v}\sup_{z\in K}\left|e^{f_{j}(z)}- x(z)\right|<\infty. \tag{2.17}\]
Hence, we get \(\lim_{T\to\infty}T^{v}\sum_{i=T+1}^{\infty}\sup_{z\in K}\left|e^{f_{i}(z)}-x(z )\right|=0\) by letting \(T\to\infty\) in (2.14) and applying (2.17). This implies (2.13).
Next, let us prove (2.12). From (2.8), we obtain that there exists \(M>0\) so that for any \(i\geq M\),
\[\sup_{z\in K}\left|e^{f_{i}(z)}-x(z)\right|\] \[\leq (1+\delta)\sup_{z\in K}\left|x(z)\right|\sum_{j=1}^{i}\alpha_{j} \sup_{z\in K}\left|e^{f_{i-j}(z)}-x(z)\right|+(1+\delta)\sup_{z\in K}\left|x(z) \right|\sup_{z\in K}\left|x(z)-1\right|\sum_{j=i+1}^{\infty}\alpha_{j}.\]
Therefore, for every \(i\geq 2\),
\[\sup_{z\in K}|e^{f_{i}(z)}-x(z)| \tag{2.18}\] \[\leq (1+\delta)\sup_{z\in K}|x(z)|\sum_{j=1}^{i}\alpha_{j}\sup_{z\in K}| e^{f_{i-j}(z)}-x(z)|+g(i), \tag{2.19}\]
where
\[g(i):=C_{1}\sum_{j=i+1}^{\infty}\alpha_{j}+C_{2}1_{\{i\leq M\}}, \tag{2.20}\]
where
\[C_{1}:=(1+\delta)\sup_{z\in K}|x(z)|\sup_{z\in K}|x(z)-1|,\ C_{2}=\sup_{0\leq i \leq M}\sup_{z\in K}|e^{f_{i}(z)}-x(z)|.\]
Let \(p(i)=\sup_{z\in K}|e^{f_{i}(z)}-x(z)|\) and \(q(i):=(1+\delta)\sup_{z\in K}|x(z)|\alpha_{i}\) for every \(i\geq 2\). Then (2.18) can be re-written as
\[p(i)\leq\sum_{j=1}^{i-1}q(i-j)p(j)+g(i). \tag{2.21}\]
By Lemma 2.4, we conclude that
\[p(i)\leq \sum_{j=1}^{i-1}Q(i-j)g(j)+g(i), \tag{2.22}\]
where
\[Q(i)=\sum_{j=1}^{\infty}q^{*j}(i)=\sum_{j=1}^{\infty}((1+\delta)\sup_{z\in K} |x(z)|)^{j}\alpha_{i}^{*j}.\]
It is equivalent to
\[\sup_{z\in K}|e^{f_{i}(z)}-x(z)|\leq\sum_{j=1}^{i-1}Q(i-j)g(j)+g(i),\ i\geq 2.\]
Hence, it remains to show that
\[\sum_{i=1}^{\infty}i^{v}g(i)<\infty, \tag{2.23}\]
\[\sum_{i=2}^{\infty}i^{v}\sum_{j=1}^{i-1}Q(i-j)g(j)<\infty. \tag{2.24}\]
Let us first prove (2.23). Note that
\[\sum_{i=1}^{\infty}i^{v}g(i)= \sum_{i=1}^{\infty}i^{v}\left(C_{1}\sum_{j=i+1}^{\infty}\alpha_{ j}+C_{2}1_{\{i\leq M\}}\right) \tag{2.25}\] \[= C_{1}\sum_{i=1}^{\infty}i^{v}\sum_{j=i+1}^{\infty}\alpha_{j}+C_ {2}\sum_{i=1}^{M}i^{v}, \tag{2.26}\]
and by our assumption \(\sum_{j=1}^{\infty}j^{v+1}\alpha_{j}<\infty\), we obtain
\[\sum_{i=1}^{\infty}i^{v}\sum_{j=i+1}^{\infty}\alpha_{j}= \sum_{j=2}^{\infty}\alpha_{j}\sum_{i=1}^{j-1}i^{v}\leq\frac{1}{v+1} \sum_{j=1}^{\infty}j^{v+1}\alpha_{j}<\infty, \tag{2.27}\]
thus (2.23) follows.
Next, let us prove (2.24). Note that
\[\sum_{i=2}^{\infty}i^{v}\sum_{j=1}^{i-1}Q(i-j)g(j) \tag{2.28}\] \[= C_{1}\sum_{i=2}^{\infty}i^{v}\sum_{j=1}^{i-1}Q(i-j)\sum_{m=j+1}^{ \infty}\alpha_{m}+C_{2}\sum_{i=2}^{\infty}i^{v}\sum_{j=1}^{i-1}Q(i-j)1_{\{j\leq M \}}, \tag{2.29}\]
and it is easy to check that
\[\sum_{i=2}^{\infty}i^{v}\sum_{j=1}^{i-1}Q(i-j)1_{\{j\leq M\}}= \sum_{j=1}^{M}\sum_{i=j+1}^{\infty}Q(i-j)\cdot i^{v}= \sum_{j=1}^{M}\sum_{i=1}^{\infty}Q(i)(i+j)^{v}\] \[\leq 2^{v-1}\sum_{j=1}^{M}\sum_{i=1}^{\infty}Q(i)(i^{v}+j^{v})\leq 2^{v-1}M\sum_{i=1}^{\infty}Q(i)(i^{v}+M^{v}),\]
where we use the inequality \((a+b)^{v}\leq 2^{v-1}(a^{v}+b^{v})\) for any \(a,b>0\) and \(v\geq 1\). We can compute that
\[\sum_{i=1}^{\infty}Q(i)= \sum_{i=1}^{\infty}\sum_{j=1}^{\infty}\left((1+\delta)\sup_{z\in K }|x(z)|\right)^{j}\alpha_{i}^{*j} \tag{2.30}\] \[= \sum_{j=1}^{\infty}\left((1+\delta)\sup_{z\in K}|x(z)|\right)^{j }\|\alpha\|_{1}^{j}, \tag{2.31}\]
which is finite since \((1+\delta)\sup_{z\in K}|x(z)|\left\|\alpha\right\|_{1}<1\). Next, let us show that
\[\sum_{i=1}^{\infty}i^{v}Q(i)<\infty.\]
Notice that
\[\sum_{i=1}^{\infty}i^{v}Q(i)= \sum_{i=1}^{\infty}i^{v}\sum_{j=1}^{\infty}q^{*j}(i) \tag{2.32}\] \[= \sum_{i=1}^{\infty}\sum_{j=1}^{\infty}\left((1+\delta)\sup_{z\in K }|x(z)|\right)^{j}\alpha_{i}^{*j}\] (2.33) \[= \sum_{j=1}^{\infty}\left((1+\delta)\sup_{z\in K}|x(z)|\right)^{j }\sum_{i=1}^{\infty}i^{v}\alpha_{i}^{*j}\] (2.34) \[= \sum_{j=1}^{\infty}\left((1+\delta)\sup_{z\in K}|x(z)|\right)^{j }\sum_{i=1}^{\infty}i^{v}\sum_{m=0}^{i}\alpha_{m}^{*j-1}\alpha_{i-m}\] (2.35) \[= \sum_{j=1}^{\infty}\left((1+\delta)\sup_{z\in K}|x(z)|\right)^{j }\sum_{m=0}^{\infty}\alpha_{m}^{*j-1}\sum_{i=0}^{\infty}(i+m)^{v}\alpha_{i}. \tag{2.36}\]
Note that for any \(\delta^{\prime}>0\), there exists some \(C(\delta^{\prime})>0\) such that for any \(s,u\geq 0\),
\[(s+u)^{v}\leq C(\delta^{\prime})s^{v}+(1+\delta^{\prime})u^{v}.\]
Therefore,
\[\sum_{m=0}^{\infty}m^{v}\alpha_{m}^{*k}\leq C(\delta^{\prime})\sum_{m=0}^{\infty}\alpha_{m}^{*k-1}\sum_{i=0}^{ \infty}i^{v}\alpha_{i}+(1+\delta^{\prime})\sum_{m=0}^{\infty}m^{v}\alpha_{m}^{*k -1}\left\|\alpha\right\|_{1}. \tag{2.37}\]
Let us define \(A_{k}:=\sum_{m=0}^{\infty}m^{v}\alpha_{m}^{*k}\). Then, we have:
\[A_{k}\leq C(\delta^{\prime})\left\|\alpha\right\|_{1}^{k-1}A_{1}+(1+\delta^{ \prime})A_{k-1}\left\|\alpha\right\|_{1}.\]
It follows that
\[A_{k}\leq C(\delta^{\prime})\left\|\alpha\right\|_{1}^{k-1}\left[1+(1+ \delta^{\prime})\left\|\alpha\right\|_{1}+((1+\delta^{\prime})\left\|\alpha \right\|_{1})^{2}+\cdots+((1+\delta^{\prime})\left\|\alpha\right\|_{1})^{k-2 }\right]A_{1}\] \[+((1+\delta^{\prime})\left\|\alpha\right\|_{1})^{k-1}A_{1}.\]
Choose \(\delta^{\prime}>0\) to be sufficiently small so that \((1+\delta^{\prime})\left\|\alpha\right\|_{1}<1\). Then, we have
\[A_{k}\leq \frac{C(\delta^{\prime})}{1-(1+\delta^{\prime})\left\|\alpha \right\|_{1}}\left\|\alpha\right\|_{1}^{k-1}A_{1}+(1+\delta^{\prime})^{k-1} \left\|\alpha\right\|_{1}^{k-1}A_{1}\] \[\leq \left(\frac{C(\delta^{\prime})}{1-(1+\delta^{\prime})\left\| \alpha\right\|_{1}}+1\right)(1+\delta^{\prime})^{k-1}\left\|\alpha\right\|_{1 }^{k-1}A_{1}.\]
Choose \(\delta^{\prime}>0\) to be sufficiently small so that \((1+\delta)(1+\delta^{\prime})\sup_{z\in K}\left|x(z)\right|\left\|\alpha\right\| _{1}<1\). Hence, we conclude that
\[\sum_{i=0}^{\infty}i^{v}Q(i)\] \[= \sum_{j=1}^{\infty}\left((1+\delta)\sup_{z\in K}\left|x(z)\right| \right)^{j}A_{j}\] \[\leq (1+\delta)\sup_{z\in K}\left|x(z)\right|A_{1}\left(\frac{C(\delta ^{\prime})}{1-(1+\delta^{\prime})\left\|\alpha\right\|_{1}}+1\right)\sum_{j=1 }^{\infty}\left((1+\delta)(1+\delta^{\prime})\sup_{z\in K}\left|x(z)\right| \left\|\alpha\right\|_{1}\right)^{j-1}\] \[= \frac{(1+\delta)\sup_{z\in K}\left|x(z)\right|A_{1}\left(\frac{C (\delta^{\prime})}{1-(1+\delta^{\prime})\left\|\alpha\right\|_{1}}+1\right)} {1-(1+\delta)(1+\delta^{\prime})\sup_{z\in K}\left|x(z)\right|\left\|\alpha \right\|_{1}}<\infty.\]
Finally, we can compute that
\[\sum_{i=2}^{\infty}i^{v}\sum_{j=1}^{i-1}Q(i-j)\sum_{m=j+1}^{\infty} \alpha_{m}\] \[= \sum_{j=1}^{\infty}\sum_{i=j+1}^{\infty}i^{v}Q(i-j)\sum_{m=j+1}^{ \infty}\alpha_{m}\] \[= \sum_{j=1}^{\infty}\sum_{i=1}^{\infty}(i+j)^{v}Q(i)\sum_{m=j+1}^{ \infty}\alpha_{m}\] \[\leq \sum_{j=1}^{\infty}\sum_{i=1}^{\infty}2^{v-1}(i^{v}+j^{v})Q(i) \sum_{m=j+1}^{\infty}\alpha_{m}\] \[= \sum_{j=1}^{\infty}\sum_{m=j+1}^{\infty}\alpha_{m}\left(\sum_{i=1 }^{\infty}2^{v-1}i^{v}Q(i)\right)+\sum_{j=1}^{\infty}j^{v}\sum_{m=j+1}^{ \infty}\alpha_{m}\left(\sum_{i=1}^{\infty}2^{v-1}Q(i)\right)\] \[= \sum_{m=2}^{\infty}m\alpha_{m}\left(\sum_{i=1}^{\infty}2^{v-1}i^ {v}Q(i)\right)+\sum_{m=2}^{\infty}\sum_{j=0}^{m-1}\alpha_{m}j^{v}\left(\sum_{ i=1}^{\infty}2^{v-1}Q(i)\right)\] \[\leq 2^{v-1}\sum_{m=1}^{\infty}m\alpha_{m}\left(\sum_{i=1}^{\infty}i ^{v}Q(i)\right)+\frac{2^{v-1}}{v+1}\sum_{m=1}^{\infty}m^{v+1}\alpha_{m}\left( \sum_{i=1}^{\infty}Q(i)\right)<\infty.\]
This completes the proof.
### Precise Large Deviations
**Theorem 2.6**.: _Given \(v\in\mathbb{N}\). Assume \(\left\|\alpha\right\|_{1}<\infty\) and the following condition holds:_
\[\sum_{i=1}^{\infty}i^{v+1}\alpha_{i}<\infty. \tag{2.38}\]
1. _For any_ \(x>0\)_, and_ \(tx\in\mathbb{N}\)_, as_ \(t\to\infty\)_,_ \[\mathbb{P}(N_{t}=tx)=e^{-tI(x)}\sqrt{\frac{I^{\prime\prime}(x)}{2\pi t}}\left( \psi(\theta^{*})+\frac{a_{1}}{t}+\frac{a_{2}}{t^{2}}+\cdots+\frac{a_{v-1}}{t^{ v-1}}+O\left(\frac{1}{t^{v}}\right)\right),\] _where for any_ \(\mathcal{R}(z)\leq\left\|\alpha\right\|_{1}-1-\log\left\|\alpha\right\|_{1}\)_,_ \[\psi(z):=e^{\nu\varphi(z)},\text{ and }\varphi(z)=\sum_{i=0}^{\infty}\left(e^{f_{i}(z)}-x(z)\right),\] _which is analytic in_ \(\mathcal{S}_{(-\infty,\left\|\alpha\right\|_{1}-1-\log\left\|\alpha\right\|_{1 })}\)_, where_ \[f_{0}(z)=z,f_{1}(z)=z+(e^{z}-1)\alpha_{1},\ f_{s}(z)=z+\sum_{i=1}^{s}\alpha_{i} (e^{f_{s-i}(z)}-1),\] _and_ \(x(z)=e^{f_{\infty}(z)}\) _exists and it satisfies the equation_ \[x(z)=e^{z+\left\|\alpha\right\|_{1}(x(z)-1)},\] _and it is analytic in_ \(\mathcal{S}_{(-\infty,\left\|\alpha\right\|_{1}-1-\log\left\|\alpha\right\|_{1 })}\)_. And_ \(I(x)\) _is defined in (_2.2_),_ \(I^{\prime\prime}(x)=\frac{\nu^{2}}{x(\nu+\left\|\alpha\right\|_{1}x)^{2}}\)_, and_ \[\theta^{*}=\log\left(\frac{x}{\nu+\left\|\alpha\right\|_{1}x}\right)-\frac{ \left\|\alpha\right\|_{1}x}{\nu+\left\|\alpha\right\|_{1}x}+\left\|\alpha\right\| _{1},\]
_where_ \((a_{k})_{k\geq 1}\) _are rational fractions in the derivatives of_ \(\eta\) _and_ \(\psi\) _at_ \(\theta^{*}\)_, whose formulas are given in Proposition 1(i) of [10], namely_
\[a_{k}= \sum_{l=0}^{2k}\frac{\psi^{(2k-l)}(\theta^{*})}{(2k-l)!}\sum_{\mathcal{S}_{l}}\frac{(-1)^{m_{1}+\cdots+m_{l}}}{m_{1}!1!^{m_{1}}m_{2}!2!^{m_{2}}\cdots m_{l}!l!^{m_{l}}}\] \[\cdot\prod_{j=1}^{l}\left(\frac{1}{\eta^{\prime\prime}(\theta^{*})}\frac{\eta^{(j+2)}(\theta^{*})}{(j+2)(j+1)}\right)^{m_{j}}\frac{(-1)^{k}(2(k+m_{1}+\cdots+m_{l})-1)!!}{(\eta^{\prime\prime}(\theta^{*}))^{k}},\ k\geq 1, \tag{2.39}\] _where_ \(\eta(z):=\nu(x(z)-1)\)._
2. _For any_ \(x>\frac{\nu}{1-\left\|\alpha\right\|_{1}}\) _and_ \(tx\in\mathbb{N}\)_, as_ \(t\to\infty\)_,_ \[\mathbb{P}(N_{t}\geq tx)=e^{-tI(x)}\sqrt{\frac{I^{\prime\prime}(x)}{2\pi t}}\frac{1}{1-e^{-\theta^{*}}}\left(\psi(\theta^{*})+\frac{b_{1}}{t}+\frac{b_{2}}{t^{2}}+\cdots+\frac{b_{v-1}}{t^{v-1}}+O\left(\frac{1}{t^{v}}\right)\right),\] _where_ \((b_{k})_{k\geq 1}\) _are rational fractions in the derivatives of_ \(\eta\) _and_ \(\psi\) _at_ \(\theta^{*}\)_, whose formulas are given in Proposition 1(ii) of [10], namely_ \[b_{k}= \sum_{n=0}^{2k}\sum_{\mathcal{S}_{n}}\frac{e^{-(m_{1}+\cdots+m_{n})\theta^{*}}(m_{1}+\cdots+m_{n})!(1-e^{-\theta^{*}})^{-(m_{1}+\cdots+m_{n})-1}}{m_{1}!1!^{m_{1}}m_{2}!2!^{m_{2}}\cdots m_{n}!n!^{m_{n}}}\cdot\prod_{j=1}^{n}(-1)^{j\cdot m_{j}}\] \[\cdot\prod_{l=0}^{2k-n}\frac{\psi^{(2k-n-l)}(\theta^{*})}{(2k-n-l)!}\sum_{\mathcal{S}_{l}}\frac{(-1)^{m_{1}+\cdots+m_{l}}}{m_{1}!1!^{m_{1}}m_{2}!2!^{m_{2}}\cdots m_{l}!l!^{m_{l}}}\] \[\cdot\prod_{j=1}^{l}\left(\frac{1}{\eta^{\prime\prime}(\theta^{*})}\frac{\eta^{(j+2)}(\theta^{*})}{(j+2)(j+1)}\right)^{m_{j}}\frac{(-1)^{k}(2(k+m_{1}+\cdots+m_{l})-1)!!}{(\eta^{\prime\prime}(\theta^{*}))^{k}}.\]
Proof.: By Lemma 2.1, and Lemma 2.5, we have established the mod-\(\phi\) convergence. The proof is exactly the same as [10] after replacing \(\left\|h\right\|_{L^{1}}\) with \(\left\|\alpha\right\|_{1}\).
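As a purely numerical sanity check of the quantities appearing in Theorem 2.6 (the values of \(\nu\), \(\left\|\alpha\right\|_{1}\) and the deviation level below are our own illustrative choices), one can verify that the closed-form \(\theta^{*}\) satisfies \(\eta^{\prime}(\theta^{*})=x\) and that \(I(x)=\theta^{*}x-\eta(\theta^{*})\), with \(x(z)\) computed from the fixed-point equation of Section 2.2.

```python
import numpy as np
from scipy.optimize import brentq

nu, a1, x = 1.0, 0.4, 3.0                 # deviation level x > nu/(1 - ||alpha||_1) = 5/3

I     = x*np.log(x/(nu + a1*x)) - x + a1*x + nu           # rate function, eq. (2.2)
theta = np.log(x/(nu + a1*x)) - a1*x/(nu + a1*x) + a1     # theta* from Theorem 2.6

def x_of(z):  # solution of x(z) = exp(z + ||alpha||_1 (x(z) - 1)) with x(z)*||alpha||_1 <= 1
    return brentq(lambda u: u - np.exp(z + a1*(u - 1.0)), 1e-12, 1.0/a1)

eta = lambda z: nu*(x_of(z) - 1.0)
h = 1e-6
print((eta(theta + h) - eta(theta - h)) / (2*h), x)   # eta'(theta*) should equal x
print(theta*x - eta(theta), I)                        # and I(x) = theta* x - eta(theta*)
```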
### Precise Moderate Deviations
**Theorem 2.7**.: \(I(\cdot)\) _is defined in (2.2) and for any \(i\geq 2\),_
\[I^{(i)}(x)=(i-2)!(-1)^{i-2}x^{1-i}\left((i-1)\left(\frac{\left\|\alpha\right\| _{1}x}{\nu+\left\|\alpha\right\|_{1}x}\right)^{i}-i\left(\frac{\left\|\alpha \right\|_{1}x}{\nu+\left\|\alpha\right\|_{1}x}\right)^{i-1}+1\right).\]
_Assume \(\left\|\alpha\right\|_{1}<\infty\) holds, if \(y=o(t^{1/2-1/m})\), where \(m\geq 3\), then as \(t\to\infty\),_
\[\mathbb{P}\left(N_{t}\geq\frac{\nu}{1-\left\|\alpha\right\|_{1}}t+\sqrt{t}\frac{\sqrt{\nu}}{(1-\left\|\alpha\right\|_{1})^{3/2}}y\right)=\frac{(1+o(1))}{y\sqrt{2\pi}}e^{-\sum_{i=2}^{m-1}\frac{I^{(i)}(\eta^{\prime}(0))}{i!}\frac{(\eta^{\prime\prime}(0))^{i/2}y^{i}}{t^{(i-2)/2}}},\]
_where \(\eta^{\prime}(0)=\frac{\nu}{1-\left\|\alpha\right\|_{1}}\), and \(\eta^{\prime\prime}(0)=\frac{\nu}{(1-\left\|\alpha\right\|_{1})^{3}}\)._
Proof.: The proof is exactly the same as [10] after replacing \(\left\|h\right\|_{L^{1}}\) with \(\left\|\alpha\right\|_{1}\).
**Proposition 2.8**.:
1. \(\eta(\theta^{*})=\nu(x(\theta^{*})-1)\)_, and for_ \(k\geq 1\)_,_ \(\eta^{(k)}(\theta^{*})=\nu x^{(k)}(\theta^{*})\)_._ _For_ \(k\geq 1\)_,_ \(x^{(k)}(\theta^{*})\) _can be computed recursively as:_ \[x^{(k)}(\theta^{*})= \frac{x(\theta^{*})}{1-\left\|\alpha\right\|_{1}x(\theta^{*})}\sum_{\mathcal{T}_{k}}\frac{k!\cdot\left\|\alpha\right\|_{1}^{m_{1}+\cdots+m_{k-1}}}{m_{1}!1!^{m_{1}}m_{2}!2!^{m_{2}}\cdots m_{k-1}!(k-1)!^{m_{k-1}}}\cdot\prod_{j=1}^{k-1}(x^{(j)}(\theta^{*}))^{m_{j}}\] \[+\frac{x(\theta^{*})}{1-\left\|\alpha\right\|_{1}x(\theta^{*})}\sum_{l=0}^{k-1}\binom{k}{l}\sum_{\mathcal{S}_{l}}\frac{l!\cdot\left\|\alpha\right\|_{1}^{m_{1}+\cdots+m_{l}}}{m_{1}!1!^{m_{1}}m_{2}!2!^{m_{2}}\cdots m_{l}!l!^{m_{l}}}\cdot\prod_{j=1}^{l}(x^{(j)}(\theta^{*}))^{m_{j}},\] _where_ \(\mathcal{T}_{k}\) _denotes the set of_ \((k-1)\)_-tuples of nonnegative integers_ \((m_{1},\cdots,m_{k-1})\) _satisfying the constraint_ \(1\cdot m_{1}+2\cdot m_{2}+3\cdot m_{3}+\cdots+(k-1)\cdot m_{k-1}=k-1\)_._
2. _For every_ \(k\geq 1\)_,_ \[\psi^{(k)}(\theta^{*})=\sum_{\mathcal{S}_{k}}\frac{k!\cdot\nu^{m_{1}+\cdots+m_{k}}\cdot\psi(\theta^{*})}{m_{1}!1!^{m_{1}}m_{2}!2!^{m_{2}}\cdots m_{k}!k!^{m_{k}}}\cdot\prod_{j=1}^{k}\left(\sum_{i=0}^{\infty}\left(\left(e^{f_{i}(z)}\right)^{(j)}-x^{(j)}(z)\right)\right)^{m_{j}}, \tag{2.41}\] _where_ \[\left(e^{f_{i}(z)}\right)^{(j)}=\sum_{\mathcal{S}_{j}}\frac{j!e^{f_{i}(z)}}{q_{1}!1!^{q_{1}}q_{2}!2!^{q_{2}}\cdots q_{j}!j!^{q_{j}}}\prod_{r=1}^{j}\left(f_{i}^{(r)}(z)\right)^{q_{r}}.\]
Proof.: The expression for \(x^{(k)}(\theta^{*})\) is a direct result of [10] after replacing \(\left\|h\right\|_{L^{1}}\) with \(\left\|\alpha\right\|_{1}\). Next, let us compute the expression for \(\psi^{(k)}(\theta^{*})\); recall that
\[\psi(z)=e^{\nu\sum_{i=0}^{\infty}\left(e^{f_{i}(z)}-x(z)\right)}, \tag{2.42}\]
where
\[f_{0}(z)=z,f_{1}(z)=z+(e^{z}-1)\alpha_{1},\ f_{s}(z)=z+\sum_{i=1}^{s}\alpha_{i }(e^{f_{s-i}(z)}-1).\]
Let \(\left(e^{f_{i}(z)}\right)^{(k)}\) denote the \(k\)-th derivative of \(e^{f_{i}(z)}\). By Faa di Bruno's formula, we have
\[\psi^{(k)}(\theta^{*})=\sum_{\mathcal{S}_{k}}\frac{k!\cdot\nu^{m_{1}+\cdots+m _{k}}\cdot\psi(\theta^{*})}{m_{1}!1!^{m_{1}}m_{2}!2!^{m_{2}}\cdots m_{k}!k!^{m_ {k}}}\cdot\prod_{j=1}^{k}\left(\sum_{i=0}^{\infty}\left(\left(e^{f_{i}(z)} \right)^{(j)}-x^{(j)}(z)\right)\right)^{m_{j}}, \tag{2.43}\]
where \(\mathcal{S}_{k}\) consists of all the \(k\)-tuples of nonnegative integers \((m_{1},...,m_{k})\) satisfying the constraint \(1\cdot m_{1}+2\cdot m_{2}+3\cdot m_{3}+\cdots+k\cdot m_{k}=k\), \(\left(e^{f_{i}(z)}\right)^{(j)}\) can be computed again by Faa di Bruno's formula,
\[\left(e^{f_{i}(z)}\right)^{(j)}=\sum_{\mathcal{S}_{j}}\frac{j!e^{f_{i}(z)}}{q_{1}!1!^{q_{1}}q_{2}!2!^{q_{2}}\cdots q_{j}!j!^{q_{j}}}\prod_{r=1}^{j}\left(f_{i}^{(r)}(z)\right)^{q_{r}},\]
where \(\mathcal{S}_{j}\) consists of all the \(j\)-tuples of nonnegative integers \((q_{1},...,q_{j})\) satisfying the constraint \(1\cdot q_{1}+2\cdot q_{2}+3\cdot q_{3}+\cdots+j\cdot q_{j}=j\).
## 3. Acknowledgement
The first author Yingli Wang would like to thank Professor Lingjiong Zhu for bringing up this topic and Assistant Professor Qinghua Wang for helpful discussions.
|
2309.15045 | Modeling Evacuee Behavior for Robot-Guided Emergency Evacuation | This paper considers the problem of developing suitable behavior models of
human evacuees during a robot-guided emergency evacuation. We describe our
recent research developing behavior models of evacuees and potential future
uses of these models. This paper considers how behavior models can contribute
to the development and design of emergency evacuation simulations in order to
improve social navigation during an evacuation. | Mollik Nayyar, Alan Wagner | 2023-09-26T16:24:57Z | http://arxiv.org/abs/2309.15045v1 | # Modeling Evacue Behavior for Robot-Guided Emergency Evacuation
###### Abstract
This paper considers the problem of developing suitable behavior models of human evacuees during a robot-guided emergency evacuation. We describe our recent research developing behavior models of evacuees and potential future uses of these models. This paper considers how behavior models can contribute to the development and design of emergency evacuation simulations in order to improve social navigation during an evacuation.
## I Introduction
Our research is attempting to develop robots that are capable of helping people evacuate during an emergency [1]. In order to accomplish this task, the robot must not only navigate social environments such as hallways, entryways, and exits, but may also need to avoid collisions or, perhaps, even feint collisions in order to get people to move out of its way. Managing the collision avoidance and social navigation problems in such environments is extremely challenging for a variety of reasons [2]. First, emergencies are rare events and difficult to predict. Hence, running real-world experiments in which robots attempt to guide evacuees during actual emergencies is not practical. Moreover, emergencies can be unique. Earthquakes can damage buildings, making navigation maps unusable. Fires may require evacuation to distant or improvised exits and avoidance of certain areas of a building. Finally, people do not always react rationally during an emergency [3, 4]. Depending on the type of emergency, humans may not evacuate at all, freeze and remain motionless, or blindly comply with obviously incorrect information. Given the wide range of potential environments, emergencies, robot designs, and evacuees, evaluating the effectiveness of a robot's guidance requires the use of simulation experiments. Although simulation modeling of evacuations [5, 6, 7, 8] and human behavior modeling during emergencies are active areas of research [9, 10, 11, 3], relatively few groups are exploring the development of robot emergency guides [12, 13, 14].
Yet using simulation to evaluate robot designs, behaviors, or evacuation strategies is also problematic for a number of reasons. For instance, the behavior of other evacuees has a strong influence on the experiment's human subject (Fig. 1) [15]. In fact, during an evacuation, the behavior of other evacuees is often a determining factor impacting when and how quickly a person evacuates [4]. Our previous simulation experiments demonstrate that people often follow the crowd, in spite of the robot's suggestions [15]. Moreover, an important aspect of designing a robot that guides people during an emergency is understanding how people will react and respond to the robot [16]. Yet, simulations may not generate the same visceral response as actual emergencies [17]. Specifically, will evacuees follow an emergency guide robot? If so, how closely will they follow the robot? Will they follow it through closed doors? Will they hold the door open for the robot? Will they follow the robot if it signals a change in direction? Clearly we have an ethical obligation to thoroughly evaluate emergency evacuation robots prior to their deployment [18].
Our research is currently working to address these problems. We are currently in the process of developing behavior models of evacuees that we will then use to create more realistic simulation environments (discussed more below). We are also developing realistic simulations in virtual reality to improve the realism of the experience for the purpose of better understanding social navigation during an emergency. The remainder of this paper presents our preliminary data and ongoing research on this topic.
## II Modeling Evacue Behavior
This section details our method for creating and using behavior models of evacuees.
### _Creating Behavior Model_
Given that evacuee behavior is strongly influenced by the other people experiencing the emergency, in order to begin to improve the accuracy of an emergency simulation one must also simulate the behavior of the other evacuees [8]. To do this we are currently running human subject experiments that capture and record an evacuee's behavior in response to a robot's guidance directions when an alarm has sounded in a physical environment. Specifically, we have naive human subjects arrive at the experimental site (Fig. 2). These subjects are briefly introduced to a guidance robot which leads them to a cubicle in order to read and comment on an article. While reading the article a smoke alarm is sounded. The subject has not been informed that a smoke alarm will go off during the experiment. The subject then chooses whether to follow the robot's guidance directions to an unfamiliar exit or to exit the way they came.
We are using this general experimental paradigm to test two different types of emergency evacuation robot systems.
The first is a shepherding robot which is a single robot that guides individuals or groups to exits. The second system is a handoff system in which robots are deployed to the important evacuation decision points (e.g. hall intersections). Each robot remains at that position and points to either the next robot along an evacuation path or to the desired exit. These two types of robot systems are being tested with either single evacuees or groups of evacuees. We are recording video data capturing the person's movement through the environment. Soon we will explore some variations of a simple evacuation. Specifically, we intend to capture the subject's behavior when the robot's guidance directions change mid-evacuation, when the robot must guide through a closed door, when confederates ignore the robot's directions, and when the robot instructs the subject to shelter in place.
From the recorded video data we will extract descriptive statistics, permitting hypothesis testing of a limited number of hypotheses. These descriptive statistics include measures of the person's speed, local distance from the robot, evacuation time, and the response time of the participant between the sounding of the smoke alarm and the evacuation action. These measures will allow us to create parameterized behavior models representing each human subject's behavior during the evacuation [19]. Although the form of these models has yet to be determined, we envision some form of parameterized finite state machine that can be used in conjunction with a specific set of experimental parameters to generate motion commands representing prototypical evacuee behavior (average motions), exemplar behavior (behavior tied to specific individual subjects), or variations which randomize elements of the subject's behavior.
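As a purely hypothetical illustration of what such a parameterized finite state machine could look like (the states, parameter names, and default values below are placeholders of ours rather than measured results), consider the following sketch:

```python
import random
from dataclasses import dataclass

@dataclass
class EvacueeParams:
    # placeholder values; in practice each field would be fitted to a subject's recorded data
    response_time_s: float = 4.0       # delay between alarm onset and the first movement
    follow_probability: float = 0.8    # chance of following the robot rather than the known exit
    walking_speed_mps: float = 1.2     # average evacuation speed
    follow_distance_m: float = 1.5     # preferred gap kept behind the robot

class EvacueeFSM:
    """Minimal finite state machine emitting coarse motion commands for one simulated evacuee."""

    def __init__(self, params: EvacueeParams):
        self.p, self.state, self.timer = params, "idle", 0.0

    def step(self, dt: float, alarm_on: bool, robot_visible: bool) -> str:
        self.timer += dt
        if self.state == "idle" and alarm_on:
            self.state, self.timer = "deciding", 0.0
        elif self.state == "deciding" and self.timer >= self.p.response_time_s:
            follows = robot_visible and random.random() < self.p.follow_probability
            self.state = "follow_robot" if follows else "use_known_exit"
        if self.state == "follow_robot":
            return f"follow_robot(speed={self.p.walking_speed_mps}, gap={self.p.follow_distance_m})"
        if self.state == "use_known_exit":
            return f"move_to_known_exit(speed={self.p.walking_speed_mps})"
        return "stay"

evacuee = EvacueeFSM(EvacueeParams())
for t in range(8):
    print(evacuee.step(dt=1.0, alarm_on=t >= 2, robot_visible=True))
```

In a crowd simulation, one instance of such a machine per simulated evacuee, with parameters drawn from the fitted models, would emit the motion commands consumed by the navigation layer.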
### _Using Behavior Models_
As mentioned, we intend to use these behavior models as a tool to facilitate the evaluation of emergency evacuation robots. In the simulation environment pictured in Figure 1, the evacuating crowd runs at a fixed speed to an out-of-sight exit. While this type of crowd behavior is possible, it is unlikely given that most emergencies generate an initial sense of confusion accompanied by information-seeking behaviors [3].
The most straightforward use of these models is to vary the type of environment (number of exits, number of corridors, etc.) and to use the behavior models to estimate the evacuation time or other evacuation statistics. For example, given a floor plan for school, we might use the behavior models to estimate how long it will take students located in particular classrooms to follow a robot to an exit. These estimates could then be compared to a limited number of physical experiments.
Another potential way that these models could be used is to create different demographic groups and estimate the amount of time a robot-guided evacuation would require. For example, estimating how long it would take to evacuate able-bodied older adults from a nursing home, or pregnant mothers, could again inform the design and use of the robots (e.g. the speed of the robot, the number of robots, etc.). Of course, in order to evaluate these different demographic groups we need to study participants of each type. Currently, several older adults have participated, but we are not aware of any pregnant women who have joined our study.
One final method for using these behavior models is to combine different models to generate estimates of evacuation time for different scenarios. For example, using a behavioral model for a shelter in place scenario, followed by a behavior model for an evacuation, followed by a behavior model captured when the robot redirects the person to a different exit. Although the individual behavior models will have been generated by three different subjects, it may be possible to produce a rough estimate of the evacuee's behavior by simply using different models for different events during the evacuation.
Experimentally, the resulting data could serve as a baseline estimate for in-person experiments or it could govern the behavior of Non-Player Characters (NPCs) in a simulation with a human subject. Hence, the human subject would be surrounded by NPCs using these behavior models with the reactions of the human subject being experimentally observed and recorded. Presented in an immersive virtual reality simulation we believe that we will be able to produce realistic environments with NPC behavior that reflects the behavior of true human subjects.
Fig. 3: A subject following the robot to an exit.
## III Limitations
We are making a number of important assumptions and there are certainly limitations to our approach. One assumption is that the data generated by one or many evacuees during a robot-guided evacuation will be correlated with and predictive of other evacuees' behavior during different emergencies. This assumption only holds to a limited degree, however. Evacuee behavior after an explosion and fire is unlikely to predict evacuee behavior during a small, non-threatening fire. Nevertheless, we believe that our research can provide a type of foothold for representing how people evacuate in the presence of a robot, which can then be expanded to new situations and evacuee types. Specifically, by capturing people's behavior towards the robot in response to an alarm, we record information about following speed, likelihood of following, and response to robot guidance actions. This information is necessary for designing evacuation robots and could be useful for related applications, such as path clearing for evacuees [20].
Furthermore, we recognize that changes to the robot or the environment could influence the resulting models and parameters. It will be necessary to occasionally perform in-person experiments to evaluate if and how much the behavior model has drifted from real world results. Simulation experiments may inform robot and experimental design and focus the research on the most promising hypotheses, yet cannot serve as a complete substitute for in-person experiments.
Our long-term hope for this work is that a catalog of pedestrian behavior models could be compiled allowing researchers to more accurately simulate the actual social navigation behavior of individuals and groups during emergencies. This catalog could serve as an important tool, perhaps allowing HRI researchers to compare algorithms, systems, and behaviors for a fixed context and reasonably accurate set of behavior models.
## IV Conclusions
This paper has briefly introduced our ongoing research to create behavior models of human evacuees while being guided to an exit by a robot during a simulated emergency. The resulting behavior models will contribute to the realism of our emergency evacuation simulations. This added realism will assist in the design and use of emergency evacuation robots. Over time, we hope to create a catalog of behavior models representing different evacuation scenarios. This catalog could serve as a method to evaluate potential robot social navigation algorithms applied to the emergency evacuation domain.
|
2309.05256 | Examining the Effect of Pre-training on Time Series Classification | Although the pre-training followed by fine-tuning paradigm is used
extensively in many fields, there is still some controversy surrounding the
impact of pre-training on the fine-tuning process. Currently, experimental
findings based on text and image data lack consensus. To delve deeper into the
unsupervised pre-training followed by fine-tuning paradigm, we have extended
previous research to a new modality: time series. In this study, we conducted a
thorough examination of 150 classification datasets derived from the Univariate
Time Series (UTS) and Multivariate Time Series (MTS) benchmarks. Our analysis
reveals several key conclusions. (i) Pre-training can only help improve the
optimization process for models that fit the data poorly, rather than those
that fit the data well. (ii) Pre-training does not exhibit the effect of
regularization when given sufficient training time. (iii) Pre-training can only
speed up convergence if the model has sufficient ability to fit the data. (iv)
Adding more pre-training data does not improve generalization, but it can
strengthen the advantage of pre-training on the original data volume, such as
faster convergence. (v) While both the pre-training task and the model
structure determine the effectiveness of the paradigm on a given dataset, the
model structure plays a more significant role. | Jiashu Pu, Shiwei Zhao, Ling Cheng, Yongzhu Chang, Runze Wu, Tangjie Lv, Rongsheng Zhang | 2023-09-11T06:26:57Z | http://arxiv.org/abs/2309.05256v1 | # Examining the Effect of Pre-training Followed by Fine-tuning on Time Series Classification
###### Abstract
Although the pre-training followed by fine-tuning paradigm is used extensively in many fields, there is still some controversy surrounding the impact of pre-training on the fine-tuning process. Currently, experimental findings based on text and image data lack consensus. To delve deeper into the unsupervised pre-training followed by fine-tuning paradigm, we have extended previous research to a new modality: time series. In this study, we conducted a thorough examination of 150 classification datasets derived from the Univariate Time Series (UTS) and Multivariate Time Series (MTS) benchmarks. Our analysis reveals several key conclusions. (i) Pre-training can only help improve the optimization process for models that fit the data poorly, rather than those that fit the data well. (ii) Pre-training does not exhibit the effect of regularization when given sufficient training time. (iii) Pre-training can only speed up convergence if the model has sufficient ability to fit the data. (iv) Adding more pre-training data does not improve generalization, but it can strengthen the advantage of pre-training on the original data volume, such as faster convergence. (v) While both the pre-training task and the model structure determine the effectiveness of the paradigm on a given dataset, the model structure plays a more significant role.
T -Series Classification, Unsupervised Pre-training, Optimization
## 1 Introduction
_The pre-training then fine-tuning paradigm_ continues to shine in the Natural Language Processing field, owing to the immense data and extra-large model sizes, with ultra-large models dominating the SuperGlue Benchmark Wang et al. (2019). Recently, the masked autoencoding pre-training scheme has also demonstrated its viability in Computer Vision He et al. (2022). Deep Learning seems to be moving towards a grand unification of pre-training. Nonetheless, there is still a lack of consensus on how exactly this paradigm manifests itself. While earlier work Bengio et al. (2006) has shown that unsupervised pre-training helps
optimization, the later milestone research Erhan et al. (2010) argues that unsupervised pre-training improves generalization by acting as a form of regularization. In contrast, a recent study He et al. (2019) claims that supervised pre-training only improves convergence speed without benefiting generalization. Another study Abnar et al. (2021) showcases a scenario where the downstream performance is at odds with the supervised pre-training accuracy. Other works suggest pre-training offers more obvious advantages of combating label noise Hendrycks et al. (2019), alleviating catastrophic forgetting Mehta et al. (2021), and dealing with imbalanced datasets Liu et al. (2021).
The divergent conclusions drawn from prior research have prompted us to investigate this issue using a different approach. Our curiosity lies in determining which findings demonstrate cross-modal consistency. Time series possess two distinct properties that complicate their analysis. Firstly, the features of time series data vary significantly across different domains, posing a challenge to domain transfer Eldele et al. (2021). Secondly, within the same domain, the distribution of time series data can shift over time Tonekaboni et al. (2020), rendering the use of long-standing data less effective. It is likely due to these factors that _the pre-training and then fine-tuning paradigm_ has yet to prosper in the field of time series. Nonetheless, we believe that further investigation into this direction is worthwhile. Specifically, we focus on the widely concerned problem of Time Series Classification (TSC) to understand how this paradigm operates. Notwithstanding its high accuracy on the UCR benchmark Dau et al. (2019), the current best ensemble model of Time Series Classification, HIVE-COTE 2.0 Middlehurst et al. (2021), suffers from slow training speed and challenging deployment. Moreover, given the complexity of time series data Tonekaboni et al. (2020), it is difficult for non-specialists to directly annotate the raw time series data Eldele et al. (2021); Zerveas et al. (2021), even as time series data relevant to our daily lives continues to accumulate at an unprecedented rate 1. We posit that _the unsupervised pre-training and fine-tuning paradigm_ holds promise for the future of Time Series Classification. Considering the characteristics of time series data, we design a pre-training setup with in-domain datasets, conducting experiments on 150 datasets with three model structures and five pre-training tasks. **Our study makes a threefold contribution**. Firstly, we verify the feasibility of the paradigm of unsupervised pre-training followed by fine-tuning on TSC. Secondly, we re-validate some existing conclusions concerning the impact of unsupervised pre-training on fine-tuning for time series data. Additionally, we provide novel insights into which factors - the pre-training task or model structure - are more critical in enhancing the efficacy of pre-training on fine-tuning. Lastly, we attempt to find correlates of successful pre-training. **Our key findings are as follows:**
Footnote 1: [https://www.forbes.com/sites/forbescommunicationscouncil/2022/06/16/the-ubiquity-of-time-series-data-isnt-coming-its-already-here](https://www.forbes.com/sites/forbescommunicationscouncil/2022/06/16/the-ubiquity-of-time-series-data-isnt-coming-its-already-here)
- Pre-training can enhance optimization for under-fitted models of few parameters (Consistent with Bengio et al. (2006) under constraints), but it does not improve optimization for models that already have adequate ability to fit the data.
- Given sufficient training time, pre-training does not significantly improve a model's generalization ability (Consistent with He et al. (2019)), i.e., it has no regularization effect (Contradict with Erhan et al. (2010)).
- Pre-training can accelerate the convergence speed (Consistent with He et al. (2019)), but only if the model has the capability to fit the data well.
- Increasing the amount of pre-training data does not aid generalization (Contradict with Paine et al. (2014)), but it can strengthen the existing advantages of pre-training for the original data sizes.
- The effectiveness of _the pre-training followed by fine-tuning paradigm_ on a given dataset is determined by both the model structure and the pre-training task, with the model structure being more crucial.
## 2 Related Work
Unsupervised pre-training is gaining more and more attention in the field of time series. Feature-wise, most of the work is based on temporal features, considering both inter-sample dissimilarity and intra-temporal resemblance to design pre-training tasks around contrastive learning Yue et al. (2022); Eldele et al. (2021); Tonekaboni et al. (2020). The time-frequency interchangeability motivates other work to exploit the frequency domain to obtain proper representations of the seasonal trend Woo et al. (2021); Zhang et al. (2022). However, most of the work above uses linear probing to verify the effectiveness of representations on tasks such as classification Eldele et al. (2021); Franceschi et al. (2019), forecasting Yue et al. (2022), regression Zerveas et al. (2021), and anomaly detection Yue et al. (2022). In the field of time series, no work has investigated how unsupervised pre-training affects fine-tuning.
In other fields such as Natural Language Processing and Computer Vision, it is still controversial how unsupervised pre-training works on fine-tuning. Some work suggests that unsupervised pre-training is beneficial for optimization Hao et al. (2019); Neyshabur et al. (2020), while other work argued that unsupervised pre-training only serves the function of regularization Erhan et al. (2010). There is also evidence suggesting that unsupervised pre-training can only speed up convergence He et al. (2019). In the face of adversarial samples, imbalanced datasets Liu et al. (2021), and label corruption, some work also argues that pre-training has a great advantage and can improve the model's uncertainty estimates as well Hendrycks et al. (2019). In addition, in the life-long learning scenario, one work suggests that pre-training can alleviate the effects of catastrophic forgetting Mehta et al. (2021). Our work extends the above efforts by validating some existing ideas in the field of time series while doing new research on finding potential correlates of effective pre-training, the inductive bias of pre-training tasks, etc.
## 3 Problem Formulation
Our paper focuses on the validity of the _paradigm of unsupervised pre-training followed by fine-tuning_ for the Time Series Classification task. To this end, we verify its effectiveness on 150 datasets. In the following paragraphs, we first introduce the notations of the time series data and the encoder, then we present how experiments are designed and what issues we specifically analyze.
**Time Series Data** We define a group of continuous variables changing over time as \(s\), where \(s\in\mathbb{R}^{d}\), with \(d\) corresponding to the number of variables or, alternatively, the dimension of the features. Based on this, we define a time series of length \(\ell\) as \(S=(s_{1},\ldots,s_{\ell})\). When \(d\) is 1, the time series \(S\) belongs to the UTS, while when \(d\) is greater than 1, the time series \(S\) belongs to the MTS.
**Time Series Encoder and Classification Head** We define the time series encoder as \(f_{\theta_{1}}:S\to E\), where \(E\) is the output of the encoder's last layer. The output \(E=(e_{1},\dots,e_{\ell})\), where \(e_{i}\in\mathbb{R}^{h}\) and \(h\) is the hidden dimension of the output layer. To adapt to different classification datasets, we introduce the classification head \(g_{\theta_{2}}:E\rightarrow\{1,\dots,C\}\) such that \((g_{\theta_{2}}\circ f_{\theta_{1}}):S\to y\in\{1,\dots,C\}\), where \(C\) is the total number of categories and \(y\) denotes the categorical output. More specifically, the classification head \(g_{\theta_{2}}\) consists of a one-dimensional convolutional layer followed by a Multiple Layer Perceptron (MLP). By considering the length \(\ell\) of the sequence as the number of channels, we apply the one-dimensional convolutional layer to transform the encoder's output tensor \(\mathrm{T_{E}}\in\mathbb{R}^{\ell\times h}\) into a length uncorrelated vector \(\mathbf{v}_{E}\in\mathbb{R}^{h}\). Using the vector \(\mathbf{v}_{E}\in\mathbb{R}^{h}\) as the input, the MLP layer produces the output as the predicted class label \(y\).
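A minimal PyTorch sketch of this head is shown below; the kernel size, the single output channel of the convolution, and the MLP width are illustrative assumptions, since only the overall structure is specified above.

```python
import torch
import torch.nn as nn

class ClassificationHead(nn.Module):
    """g_theta2: a 1-D convolution over the length dimension followed by an MLP."""

    def __init__(self, seq_len: int, hidden_dim: int, num_classes: int, mlp_dim: int = 128):
        super().__init__()
        # The sequence length is treated as channels, so the convolution collapses
        # the encoder output T_E (batch, seq_len, hidden_dim) into v_E (batch, hidden_dim).
        self.conv = nn.Conv1d(in_channels=seq_len, out_channels=1, kernel_size=1)
        self.mlp = nn.Sequential(
            nn.Linear(hidden_dim, mlp_dim),
            nn.ReLU(),
            nn.Linear(mlp_dim, num_classes),
        )

    def forward(self, encoder_out: torch.Tensor) -> torch.Tensor:
        v = self.conv(encoder_out).squeeze(1)   # (batch, hidden_dim)
        return self.mlp(v)                      # (batch, num_classes) class logits
```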
**Research Problems and Experimental Procedure** We choose three sequence encoder structures and five pre-training tasks for our study, with a pre-training task denoted as \(\mathcal{PT}\). We denote an unlabelled pre-training dataset as \(\mathcal{D}_{pre}\), where \(\mathcal{D}_{pre}=(S_{1},\dots,S_{|D_{pre}|})\). Similarly, we define the training set and test set of a classification dataset as \(\mathcal{D}_{train}\) and \(\mathcal{D}_{test}\). There are 150 experimented datasets in total; for each of them, we iterate through all combinations of different model structures and pre-training tasks. For each combination, we first pre-train \(f_{\theta_{1}}\) on \(\mathcal{D}_{pre}\) with \(\mathcal{PT}\) and obtain a specific set of pre-trained parameters \(\theta_{pre}\), after which the encoder \(f_{\theta_{1}}\) initialized with \(\theta_{pre}\) and \(g_{\theta_{2}}\) are simultaneously fine-tuned on \(\mathcal{D}_{train}\) (the domain of \(\mathcal{D}_{pre}\) is the same as that of \(\mathcal{D}_{train}\), and we set \(\mathcal{D}_{train}\subseteq\mathcal{D}_{pre}\) for every dataset; the reason is described in Section 5). Based on the results of all test sets and the recorded training processes, we perform a significance analysis on whether pre-training is advantageous over random initialization in various aspects, including optimization, generalization, convergence speed, etc. We verify whether the role of pre-training changes when the model is significantly under-fitted, and whether adding additional pre-training data is beneficial. In addition, we also analyze the results to answer the question: which aspect has more influence on the fine-tuning results for an arbitrary dataset, the model structure or the pre-training task?
## 4 Research Subjects
### Time Series Encoder
While ensemble models continue to set new records on the UCR benchmark Middlehurst et al. (2021), we select three classical model structures of moderate complexity as the time series encoder (the number of parameters is around 500,000). There are two reasons. First, models designed for time series need to be scalable and efficient because signals are often long and high-dimensional Tonekaboni et al. (2020); Franceschi et al. (2019). Second, under limited computational resources, we choose to experiment on as many datasets as possible to make the conclusions informative and credible. It is worth noting that the model structures presented below all model the time series in both directions.
**LSTM**: Previous work Sagheer and Kotb (2019) shows the usefulness of unsupervised pre-training of LSTM-based autoencoder for MTS prediction tasks. Here we have simplified the model by constructing a vanilla bidirectional LSTM of two layers.
**Dilated Convolutional Neural Network (D.Conv)**: The Convolutional Neural Network performs well on time series forecasting Yue et al. (2022) and demonstrates its strength in representation learning on the UTS and MTS datasets Franceschi et al. (2019). _D.Conv_ consists of layers of dilated convolutions. Compared to the previous work Franceschi et al. (2019), we do not use causal convolutions, so as to incorporate information from both before and after time step \(i\) when conducting convolution operations.
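A minimal PyTorch sketch of such a non-causal dilated encoder is shown below; the kernel size, channel width, and depth are illustrative assumptions rather than the exact configuration used in our experiments.

```python
import torch.nn as nn

class DilatedConvEncoder(nn.Module):
    """Sketch of D.Conv: stacked non-causal dilated 1-D convolutions.
    Symmetric padding lets each position see context before and after it."""

    def __init__(self, in_dim: int, hidden_dim: int = 64, num_layers: int = 4):
        super().__init__()
        layers = []
        for k in range(num_layers):
            dilation = 2 ** k
            layers += [
                nn.Conv1d(in_dim if k == 0 else hidden_dim, hidden_dim,
                          kernel_size=3, dilation=dilation, padding=dilation),
                nn.ReLU(),
            ]
        self.net = nn.Sequential(*layers)

    def forward(self, s):                # s: (batch, time, in_dim)
        # Conv1d expects (batch, channels, time), so transpose in and out.
        return self.net(s.transpose(1, 2)).transpose(1, 2)
```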
**Time-Series Transformer (TsTransformer)**: The Time-Series Transformer has proved a success in representing the MTS data type Zerveas et al. (2021). It has the same structure as the original transformer encoder Vaswani et al. (2017), except that it replaces the Layer Normalization layer with the Batch Normalization layer and the embedding layer with the linear projection layer.
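The sketch below illustrates the normalization swap and the linear input projection; the head count, feed-forward width, layer count, and the omission of positional encodings are simplifying assumptions, not the exact configuration.

```python
import torch
import torch.nn as nn

class TsTransformerBlock(nn.Module):
    """One encoder block with BatchNorm1d in place of LayerNorm."""

    def __init__(self, hidden_dim: int = 128, num_heads: int = 8, ff_dim: int = 256):
        super().__init__()
        self.attn = nn.MultiheadAttention(hidden_dim, num_heads, batch_first=True)
        self.ff = nn.Sequential(nn.Linear(hidden_dim, ff_dim), nn.ReLU(),
                                nn.Linear(ff_dim, hidden_dim))
        self.norm1 = nn.BatchNorm1d(hidden_dim)   # normalizes each feature channel over (batch, time)
        self.norm2 = nn.BatchNorm1d(hidden_dim)

    def forward(self, x):                          # x: (batch, time, hidden_dim)
        a, _ = self.attn(x, x, x)
        x = self.norm1((x + a).transpose(1, 2)).transpose(1, 2)
        f = self.ff(x)
        return self.norm2((x + f).transpose(1, 2)).transpose(1, 2)

class TsTransformer(nn.Module):
    """Raw features enter through a linear projection instead of an embedding layer."""

    def __init__(self, in_dim: int, hidden_dim: int = 128, num_layers: int = 3):
        super().__init__()
        self.proj = nn.Linear(in_dim, hidden_dim)
        self.blocks = nn.ModuleList(TsTransformerBlock(hidden_dim) for _ in range(num_layers))

    def forward(self, s):                          # s: (batch, time, in_dim)
        e = self.proj(s)
        for blk in self.blocks:
            e = blk(e)
        return e                                   # (batch, time, hidden_dim)
```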
### Pre-training Task
Representation learning for time series has attracted increasing attention in recent years. We select several recently proposed unsupervised pre-training tasks of demonstrated efficacy, while also taking diversity into account.
**(Baseline) Random-Cls Maennel et al. (2020)**: _Random-Cls_ is the abbreviation of training with Random Class Labels. Because one paper claims that pre-training the model with random labels can effectively improve the convergence speed of fine-tuning Maennel et al. (2020), we choose it as the baseline pre-training task.
**Ts2Vec Yue et al. (2022)**: Ts2vec is a contrastive learning-based approach. It highlights the construction of positive and negative samples by taking into account both sample and temporal differences. The other strength of Ts2Vec lies in modeling time series at hierarchical levels of granularity. This method has been evaluated in great detail on the UTS and MTS datasets, but it only validates its effectiveness in the linear-probing setting.
**Ts-Tcc Eldele et al. (2021)**: Ts-Tcc is another contrastive learning framework, integrating two contrasting modules, a temporal one and a contextual one. It starts with generating a strong and a weak view from the time series \(S\) via augmentation. During temporal contrasting, the model predicts the future series \((S_{>t})\) of one view from the other \((S_{\leq t}^{\prime})\), where \(S^{\prime}\) denotes another view of \(S\) and \(t\) is the time step. The contextual contrasting minimizes the distance between \(f_{C}(S_{\leq t})\) and \(f_{C}(S_{\leq t}^{\prime})\), where \(f_{C}\) summarizes the context up to \(t\), while maximizing the distance between \(f_{C}(S_{\leq t})\) and the contexts of other augmented views of different instances.
**Mvts Zerveas et al. (2021)**: The abbreviation Mvts comes from the title 'A Transformer-based Framework for **M**ultivariate **T**ime **S**eries Representation Learning'. Given a sample of time series \(S\), Mvts obtains corrupted inputs by zeroing. For instance, we can mask (zero) every variable independently (for the MTS data type) or mask a subseries of \(S\). The pre-training task is to impute these zeroed inputs, which is analogous to Masked Language Modeling Devlin et al. (2019) in NLP. The loss function of Mvts is the Mean Squared Error.
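A minimal sketch of this denoising objective is given below; the element-wise Bernoulli masking and the assumption that `model` maps a corrupted series back to a reconstruction of the same shape are simplifications of the original scheme.

```python
import torch

def mvts_pretraining_loss(model, series: torch.Tensor, mask_ratio: float = 0.15):
    """Zero out random entries of the input and regress the original values
    at the masked positions with MSE (a simplified Mvts objective)."""
    mask = torch.rand_like(series) < mask_ratio     # True where values are hidden
    corrupted = series.masked_fill(mask, 0.0)
    reconstruction = model(corrupted)               # encoder + reconstruction head
    return ((reconstruction - series)[mask] ** 2).mean()
```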
**Srlt Franceschi et al. (2019)**: The abbreviation Srlt comes from the paper title -- Unsupervised **S**calable **R**epresentation **L**earning for Multivariate **T**ime Series. Inspired by how word2vec Mikolov et al. (2013) is trained, this method proposes a novel triplet loss for modeling the time series data. Roughly speaking, given a time series \(S\), its subseries
of varying lengths are selected as positive samples, while other subseries of some arbitrary time series are selected as negative samples.
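The sampling procedure can be sketched as follows; `dataset` is assumed to be a list of (time, dim) arrays, and the uniform length choices only follow the original scheme in spirit.

```python
import random

def sample_srlt_triplet(dataset, num_negatives: int = 5):
    """Anchor: a random sub-series; positive: a sub-series inside the anchor;
    negatives: sub-series of other, randomly chosen series."""
    series = random.choice(dataset)
    T = len(series)
    a_len = random.randint(2, T)
    a_start = random.randint(0, T - a_len)
    anchor = series[a_start:a_start + a_len]

    p_len = random.randint(1, a_len)
    p_start = random.randint(a_start, a_start + a_len - p_len)
    positive = series[p_start:p_start + p_len]

    negatives = []
    for _ in range(num_negatives):
        other = random.choice(dataset)
        n_len = random.randint(1, len(other))
        n_start = random.randint(0, len(other) - n_len)
        negatives.append(other[n_start:n_start + n_len])
    return anchor, positive, negatives
```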
Given a dataset, how should we weigh the choice of the model structure against the design of the pre-training task?
The pre-training tasks for time series are built on some implicit or explicit assumptions. Ts-Tcc, Ts2Vec, and Mvts all assume that time series are contextually related and predictable, and Ts-Tcc assumes that time series have stretch and shuffle invariance properties. Therefore, we believe that different pre-training tasks have their specific applicability scenarios and limitations. Likewise, different model structures also have different inductive biases Tuli et al. (2021). The model structure determines the parameter space, while the pre-training process determines the starting point of fine-tuning. Given an arbitrary dataset, which one is more critical for successful fine-tuning? The pre-training task or the model structure?
Can we determine in advance whether we need to pre-train, and when can we trust the pre-trained parameters?
In the practical application of time series data, we may have abundant unlabeled data while often lacking sufficient annotated data Eldele et al. (2021). When the real demand is for fast iteration or instant migration to new scenarios, it becomes critical to understand in advance whether \(\theta_{pre}\) has the potential to fit the data of the downstream task and to generalize beyond it. To this end, we collect potential correlates from three perspectives, the data, the pre-training process, and the model parameters, hoping to uncover some indications of effective pre-training. The correlation factors include: the length of the time series, the size of the pre-training data, the \(\ell_{2}\) norms Neyshabur et al. (2017), the \(\ell_{2}\)-path norm Jiang et al. (2019), the sharpness value Mehta et al. (2021); Hao et al. (2019), the convergence state of pre-training, and the \(\ell_{2}\) distance that \(\theta_{pre}\) has traveled from its initial point. The \(\ell_{2}\) norms, the \(\ell_{2}\)-path norm, and the sharpness value are three generalization measures validated by previous work. Recent work Gouk et al. (2020); Mao (2020) suggests that the traveled distance is associated with generalization performance.
To reveal potential correlates, we calculate Spearman's rank correlation coefficient Schober et al. (2018) between different variables and the relative test-set accuracy gaps, where the gap equals the test-set accuracy of the pre-trained model \(f_{\theta_{pre}}\) minus the accuracy of the randomly initialized model \(f_{\theta_{1}}\). We take sequence length as an example. We denote \(M\) as the total number of datasets and \(b_{i}\) as the average sequence length of the \(i\)-th dataset. We then have the sequence-length variable \(B\), where \(B=(b_{1},\ldots,b_{M})\). The accuracy-gap variable \(A\) can be defined similarly. If, for instance, the correlation coefficient between \(A\) and \(B\) is \(0.55\) with a \(p\)-value of \(0.03\) (\(<0.05\)), we can conclude that \(A\) is positively correlated with \(B\) and that the result is statistically significant.
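The computation itself is a one-liner with SciPy; the sketch below uses placeholder values for the per-dataset sequence lengths and accuracy gaps.

```python
from scipy.stats import spearmanr

# Placeholder per-dataset values: average sequence length and the accuracy gap
# (pre-trained minus randomly initialized) of the same dataset.
seq_lengths = [128, 640, 96, 1024, 315, 80]
acc_gaps = [0.01, 0.04, -0.02, 0.05, 0.00, -0.01]

rho, p_value = spearmanr(seq_lengths, acc_gaps)
if p_value < 0.05:
    print(f"significant correlation: rho={rho:.2f}, p={p_value:.3f}")
else:
    print(f"no significant correlation: rho={rho:.2f}, p={p_value:.3f}")
```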
## 5 Datasets and Experimental settings
We use a total of 150 datasets, 125 of which are of the UTS data type and 25 of the MTS data type. Due to hardware constraints, we discard six datasets that contain excessively long and high-dimensional sequences. In most of the datasets the sequence lengths are equal; in the few datasets where they are not, we pad the sequences to the maximum length with zeros. We present basic statistics of the UTS and MTS datasets in Table 1.
Due to the complexity, diversity, and timeliness of time series data Zhang et al. (2022), collecting in-domain data requires strong expert knowledge Eldele et al. (2021). It is thus impractical to collect a large amount of unlabeled data for each dataset. To simplify the experimental setup, we set \(\mathcal{D}_{pre}=\mathcal{D}_{train}\) in our main experiments, a setup focusing on the low-resource scenario, an important topic in today's machine learning community Rotman and Reichart (2019). We divide each \(\mathcal{D}_{pre}\) into a training set \(\mathcal{D}_{pre}^{train}\) and a validation set \(\mathcal{D}_{pre}^{val}\) (10% of \(\mathcal{D}_{pre}\)). To avoid over-fitting, we retain the parameters with the lowest validation loss as \(\theta_{pre}\). We set the learning rate, batch size, and number of pre-training epochs to \(1e^{-4}\), 32, and 100, respectively. Some pre-training works mention the sensitivity of their methods to hyperparameters Yue et al. (2022); Zerveas et al. (2021), so we randomly select 30 UTS datasets, perform a grid search of hyperparameters for each pre-training task, and pick the best ones.
For fine-tuning, we split 10% of \(\mathcal{D}_{train}\) as the validation set, on which we keep the model with the highest accuracy. The learning rates of \(f_{\theta_{1}}\) and \(g_{\theta_{2}}\) are both set to \(1e^{-3}\), and the batch size is set to 32. We apply gradient clipping with a clip value of 4.0. We train encoders and classifiers for 200 epochs with the cross-entropy loss to ensure convergence. To evaluate our models, we use the official split Dau et al. (2019) of training and test data for each dataset. For a given combination of model structure and pre-training task, we train the model five times with different random seeds (controlling weight initialization). The final test result for each dataset is the average of the five runs. The pre-training and supervised training processes share the same Adam optimizer Kingma and Ba (2014). For other detailed configurations of the model structures and hyperparameters, please refer to the code repository.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline & Num.Class & Num.Sample & Series.Length & Feature.Dim \\ \hline UTS & 9/2/60 & 473/16/8926 & 794/15/13167 & 1/1/1 \\ MTS & 9/2/39 & 1866/12/25000 & 1159/8/17984 & 99/2/1345 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Basic statistics of the UTS and the MTS datasets. We present the average/min/max value for each aspect. **Feature.Dim** means feature dimension.
## 6 Experimental Results and Analysis
The Wilcoxon signed-rank test Rey and Neuhauser (2011) is performed between the results of each pre-trained model and the results of its train-from-scratch counterpart. In Table 2 and Table 3, the symbol \(\dagger\) indicates a \(p\)-value \(<0.05\), and the down arrow \(\downarrow\) indicates that the pre-trained result is significantly worse (\(p<0.05\)) than the non-pre-trained one.
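The test pairs the per-dataset accuracies of the two settings; a sketch with placeholder numbers is shown below.

```python
from scipy.stats import wilcoxon

# Placeholder paired per-dataset accuracies.
acc_pretrained = [0.71, 0.68, 0.80, 0.74, 0.66, 0.90]
acc_from_scratch = [0.69, 0.68, 0.78, 0.75, 0.63, 0.88]

stat, p_value = wilcoxon(acc_pretrained, acc_from_scratch)
print(f"Wilcoxon statistic={stat:.2f}, p={p_value:.3f}")  # mark with a dagger when p < 0.05
```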
### Effect of pre-training
**Effects on optimization** Non-convex optimization in high-dimensional space has been a major challenge in deep learning since the overall process is affected by many factors Goodfellow et al. (2016), such as the initial parameters, the choice of the optimizer, the model structure, etc. Here we study whether the pre-trained parameters \(\theta_{pre}\) lead to a lower training loss at convergence. Some studies suggest pre-training simplifies optimization Hao et al. (2019); Neyshabur et al. (2020), while other work Erhan et al. (2010) claims that pre-training does not benefit the optimization procedure.
Analyzing Table 2, we find that no pre-training task obtains a significantly lower training loss than random initialization, that is, the pre-trained parameters do not enable the model to fit the data better.
We also analyze whether pre-training helps optimization by considering two types of under-fitted models. Both types are poorly fitted (high training loss) but vary in structural complexity. The first type is a model with a complex structure, i.e., the TsTransformer (with a parameter count of around 400,000). The other type is a model with a simpler structure, including LSTM-underfit and D.Conv-underfit (with around 3,000 parameters). LSTM-underfit and D.Conv-underfit share the same structures as LSTM and D.Conv but differ in hidden dimensions and number of layers. Interestingly, observing Table 3 and Table 2, we find that for the models with simple structures, pre-training can, in some cases, improve the model's ability to fit the data. In contrast, for a poorly fitting model with a large number of parameters, pre-training does not bring any benefit to the final optimization result.
**Availability of regularization** A more general definition of regularization is a technique aimed at improving the generalization ability of a model Tian and Zhang (2022); Goodfellow et al. (2016). One previous work states that the main role of pre-training is to regularize Erhan et al. (2010). To verify this belief, we choose three metrics to reflect the generalization ability of the model: the accuracy under the early-stopping condition, the accuracy of the last epoch, and the highest accuracy among all epochs.
We can see in Table 2 that no pre-training task significantly improves the generalization ability of the model on both the UTS and the MTS datasets. Relatively speaking, Ts2Vec is the most effective pre-training scheme for improving generalization, but its effectiveness is only limited to some datasets and model structures, while the Mvts pre-training task even significantly degrades the generalization performance in some settings -- e.g., it degrades the performance of LSTM on the UTS dataset in the early stopping condition. Although Ts2Vec is effective in some cases, we conclude that most of the pre-training tasks for time series fail to improve the generalization ability of the model, i.e., the regularization effect of pre-training is not significant.
\begin{table}
\begin{tabular}{l l l l l l l l} \hline \hline & & \multicolumn{2}{c}{**Not.P**} & \multicolumn{1}{l}{**R.Cls**} & \multicolumn{1}{l}{**Ts2Vec**} & \multicolumn{1}{l}{**Ts-Tcc**} & \multicolumn{1}{l}{**Mvts**} & \multicolumn{1}{l}{**Srlt**} \\ \hline \multicolumn{6}{c}{**Training Loss (Min)**} \\ \hline \multirow{3}{*}{MTS} & LSTM & 0.045 & **0.043** & 0.044 & 0.046 & 0.044 & 0.044 \\ & TsTransformer & 0.389 & 0.391 & **0.384** & 0.401 & 0.429 & 0.388 \\ & D.Conv & 0.012 & 0.010 & 0.010 & **0.008** & 0.012 & **0.008** \\ \hline \multirow{3}{*}{UTS} & LSTM & 0.038 & 0.039 & **0.037** & **0.037** & 0.041 & **0.037** \\ & TsTransformer & 0.487 & 0.501 & **0.476** & 0.490 & 0.483 & 0.487 \\ & D.Conv & 0.012 & **0.011** & 0.013 & 0.015 & 0.013 & 0.020 \\ \hline \multicolumn{6}{c}{**Accuracy (Early Stopping)**} \\ \hline \multirow{3}{*}{MTS} & LSTM & 0.679 & **0.727** & 0.683 & 0.724 & 0.716 & 0.708 \\ & TsTransformer & 0.685 & 0.700 & **0.707\(\dagger\)** & 0.689 & 0.695 & 0.693 \\ & D.Conv & 0.728 & 0.716 & 0.723 & 0.711 & **0.751\(\dagger\)** & 0.728 \\ \hline \multirow{3}{*}{UTS} & LSTM & **0.704** & 0.696 & 0.692 & 0.700 & 0.683\(\downarrow\) & 0.692 \\ & TsTransformer & 0.665 & 0.660 & 0.662 & 0.661 & 0.653 & **0.668** \\ & D.Conv & 0.724 & 0.721 & **0.743\(\dagger\)** & 0.732 & 0.741 & 0.726 \\ \hline \multicolumn{6}{c}{**Accuracy (Last Epoch)**} \\ \hline \multirow{3}{*}{MTS} & LSTM & 0.701 & 0.711 & 0.714 & 0.716 & **0.722** & 0.719 \\ & TsTransformer & 0.680 & 0.672 & **0.696\(\dagger\)** & 0.685 & 0.680 & 0.681 \\ & D.Conv & 0.730 & 0.728 & 0.732 & 0.724 & **0.742** & 0.723 \\ \hline \multirow{3}{*}{UTS} & LSTM & **0.698** & 0.688 & 0.683 & 0.683 & 0.677 & 0.689 \\ & TsTransformer & **0.660** & 0.650 & **0.660** & 0.656 & 0.648\(\downarrow\) & 0.648 \\ & D.Conv & 0.739 & 0.740 & 0.746 & 0.745 & **0.754** & 0.738 \\ \hline \multicolumn{6}{c}{**Accuracy (Max)**} \\ \hline \multirow{3}{*}{MTS} & LSTM & 0.754 & 0.764 & 0.769 & 0.772 & 0.770 & **0.775\(\dagger\)** \\ & TsTransformer & 0.744 & 0.739 & 0.750 & 0.746 & 0.748 & **0.753** \\ & D.Conv & 0.779 & 0.772 & 0.783 & 0.779 & **0.790** & 0.775 \\ \hline \multirow{3}{*}{UTS} & LSTM & 0.770 & 0.765 & 0.763 & 0.764 & 0.761\(\downarrow\) & **0.772** \\ & TsTransformer & 0.727 & 0.722 & **0.730** & 0.725 & 0.721\(\downarrow\) & 0.726 \\ & D.Conv & 0.794 & 0.796 & **0.804\(\dagger\)** & 0.799\(\dagger\) & 0.800 & 0.790 \\ \hline \multicolumn{6}{c}{**Accuracy (Epoch 1)**} \\ \hline \multirow{3}{*}{MTS} & LSTM & 0.355 & 0.376 & **0.456\(\dagger\)** & 0.395 & 0.444\(\dagger\) & 0.435\(\dagger\) \\ & TsTransformer & 0.411 & 0.386 & **0.414** & 0.361\(\downarrow\) & 0.389 & 0.397 \\ & D.Conv & 0.441 & 0.436 & 0.470 & 0.468\(\dagger\) & 0.461 & **0.476** \\ \hline \multirow{3}{*}{UTS} & LSTM & 0.338 & 0.340 & **0.416\(\dagger\)** & 0.388\(\dagger\) & 0.396\(\dagger\) & 0.385\(\dagger\) \\ & TsTransformer & **0.399** & 0.361\(\downarrow\) & 0.392 & 0.374 & 0.377 & 0.374 \\ & D.Conv & 0.421 & 0.409 & **0.474\(\dagger\)** & 0.460\(\dagger\) & 0.458\(\dagger\) & 0.460\(\dagger\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Main results of using different pre-training tasks on the UTS (125) and MTS (25) datasets. ‘R.Cls’ refers to the pre-training task of Random-Cls. ‘Not.P’ refers to the Pytorch ([https://pytorch.org](https://pytorch.org)) default weight initialization scheme. The best results are in **bold** and the second best are underlined.
\begin{table}
\begin{tabular}{l l c c c c c c} \hline \hline & & **Not.P** & **R.Cls** & **Ts2Vec** & **Ts-Tcc** & **Mvts** & **Splt** \\ \hline \multicolumn{6}{c}{**Training Loss (Min)**} \\ \hline MTS & LSTM-underfit & 0.391 & 0.372\(\dagger\) & 0.378 & 0.391 & 0.411\(\dagger\) & 0.387 \\ & D.Conv-underfit & 0.306 & 0.311 & 0.319 & 0.317 & 0.319 & 0.317 \\ \hline UTS & LSTM-underfit & 0.490 & 0.485 & 0.468 & 0.476 & 0.462\(\dagger\) & 0.464\(\dagger\) \\ & D.Conv-underfit & 0.414 & 0.396 & 0.401\(\dagger\) & 0.410 & 0.404 & 0.404 \\ \hline \multicolumn{6}{c}{**Accuracy (Epoch 1)**} \\ \hline MTS & LSTM-underfit & 0.264 & 0.260 & 0.265 & 0.265 & 0.239 & 0.283 \\ & D.Conv-underfit & 0.258 & 0.276 & 0.271 & 0.297\(\dagger\) & 0.308\(\dagger\) & 0.303\(\dagger\) \\ \hline UTS & LSTM-underfit & 0.312 & 0.303 & 0.320 & 0.313 & 0.312 & 0.311 \\ & D.Conv-underfit & 0.312 & 0.301 & 0.323 & 0.334 & 0.321 & 0.334 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Training loss and epoch-1 accuracy of under-fitted LSTM and D.Conv.
Figure 1: Gaps between each unsupervised pre-training task and random initialization in terms of convergence speed and generalization. Only the results on the UTS datasets are shown (the trend is similar on the MTS datasets). Each point is averaged across all datasets, with the length of the error bars being a 95% confidence interval for the mean. The _Top row_ presents the decrease in training loss of the pre-trained model compared to the randomly initialized model. The _Bottom row_ depicts the increase in test-set accuracy.
**Effects on convergence speed** As we can see in Figure 1, except when applied to TsTransformer, all pre-training schemes improve accuracy in the first few epochs and also reduce the training loss. In addition, observing the accuracy of the first epoch in Table 2, most of the pre-training tasks significantly boost LSTM and D.Conv at the beginning of fine-tuning. In contrast, the pre-training for TsTransformer does not show similar advantages. Since TsTransformer is under-fitted (high training loss) on most datasets, we conjecture that pre-training cannot speed up the convergence of the model when it is under-fitted. The results in Table 3 largely validate our conjecture. Only a small number of cases of under-fitted D.Conv are accelerated by pre-training (using the Ts-Tcc, Mvts, and Srlt pre-training schemes). We summarize as follows: when the model's fitting ability is sufficient, pre-training mostly improves the convergence speed and enables fast generalization, a conclusion similar to previous observations in the CV domain He et al. (2019); when the model is under-fitted, pre-training usually fails to improve the convergence rate.
**Effects of extra pre-training data** We experiment with adding more pre-training data on three data types, _Food_, _ECG_ and _Image_ (all UTS). We specifically select these three types because they contain datasets that are relatively similar to each other, which does
\begin{table}
\begin{tabular}{l l l l l} \hline \hline & **Ts2vec** & **Ts-Tcc** & **Mvts** & **Srlt** \\ \hline \multicolumn{5}{c}{**Training Loss (Min)**} \\ \hline LSTM & 0.010/0.006 & 0.012/0.007 & **0.012/0.008†** & 0.013/0.009 \\ LSTM-underfit & 0.184/0.176 & 0.193/0.181 & **0.194/0.180†** & **0.193/0.179†** \\ TsTransformer & 0.302/0.282 & 0.301/0.287 & 0.302/0.288 & 0.303/0.296 \\ D.Conv & 0.038/0.024 & 0.046/0.025 & 0.045/0.026 & **0.056/0.021†** \\ D.Conv-underfit & **0.239/0.179†** & **0.250/0.201†** & **0.247/0.197†** & **0.246/0.202†** \\ \hline \multicolumn{5}{c}{**Accuracy (Early Stopping)**} \\ \hline LSTM & 0.726/0.769 & 0.737/0.787 & 0.741/0.778 & 0.736/0.780 \\ LSTM-underfit & 0.712/0.724 & 0.721/0.737 & 0.732/0.741 & 0.736/0.739 \\ TsTransformer & 0.756/0.756 & 0.745/0.745 & 0.743/0.748 & 0.746/0.742 \\ D.Conv & 0.793/0.800 & 0.786/0.807 & 0.790/0.815 & **0.783/0.819†** \\ D.Conv-underfit & 0.736/0.755 & 0.725/0.762 & 0.722/0.761 & 0.728/0.759 \\ \hline \multicolumn{5}{c}{**Accuracy (Epoch1)**} \\ \hline LSTM & 0.513/0.557 & 0.487/0.512 & **0.473/0.508†** & **0.459/0.500†** \\ LSTM-underfit & 0.391/0.394 & 0.378/0.376 & 0.381/0.385 & 0.377/0.383 \\ TsTransformer & 0.433/0.453 & 0.414/0.437 & 0.421/0.436 & 0.421/0.427 \\ D.Conv & 0.525/0.567 & **0.506/0.569†** & **0.505/0.565†** & **0.502/0.569†** \\ D.Conv-underfit & 0.397/0.422 & 0.408/0.413 & 0.406/0.417 & 0.405/0.414 \\ \hline \hline \end{tabular}
\end{table}
Table 4: The values before and after the slash(/) are the dataset-averaged (21 in total) accuracy of using original pre-training datasets and of using the expanded ones. The symbol † indicates a significant advantage of using extra data.
not violate the experimental setup of in-domain pre-training. The _Food_, _ECG_ and _Image_ data types contain 6, 7 and 32 datasets, respectively. On average, the amount of pre-training data is increased by 31.41-, 27.5-, and 37.8-fold. We illustrate the expansion process with an example. Suppose the target dataset, Beef, falls under the _Food_ type; we then add the samples of the other _Food_-type datasets (both their training and test sets) to \(\mathcal{D}_{pre}\).
In Table 4 we show the impact of adding extra pre-training data. Two key points can be summarized. First, in the majority of cases, increasing the pre-training data several-fold does not markedly improve the generalization ability of the model, which contradicts the previous work Paine et al. (2014) claiming that unsupervised pre-training helps when the ratio of unsupervised to supervised samples is high. Second, except for Ts2Vec, adding more pre-training data reinforces the advantages stemming from pre-training. For instance, pre-training on the original dataset speeds up the convergence of D.Conv in the early stage (Table 2); with additional pre-training data, the convergence is further accelerated.
Which factor, the model structure or the pre-training task, is more critical for pre-training to work on a specific dataset?
Since the training of neural networks is strongly influenced by the initialized parameters Summers and Dinneen (2021), the pre-trained model sometimes outperforms the randomly initialized one by a large margin simply because the random initialization leads to bad local minima. Thus we need a stronger baseline for locating the truly advantageous
Figure 2: We present the intersection ratio \(\varphi_{ij}\) of advantageous dataset between different pre-training tasks with the fixed model structure. The shades of color present the values of \(\varphi_{ij}\). The value in parentheses represents the size of the advantageous dataset \(\mathcal{A}_{pt}\). _Rand-init_ in this figure and _R.Init_ in Table 5 correspond to sets of randomly initialized parameters that do not contribute to \(acc_{max}\).
datasets of the pre-training task. To this end, for each dataset, we additionally train with random initialization four more times for each model structure and select the highest accuracy among the five sets of test results as the new baseline, noted as \(acc_{max}\). Given a pre-training task \(\mathcal{PT}\), we define its advantageous dataset set \(\mathcal{A}_{pt}\) of a determined model structure as follows: for any dataset \(\mathcal{D}\), if the pre-trained accuracy \(acc_{pt}\) achieves a relative improvement of more than 15% compared to \(acc_{max}\), then \(\mathcal{D}\in\mathcal{A}_{pt}\). We define \(\varphi_{ij}=|\mathcal{A}_{pt_{i}}\cap\mathcal{A}_{pt_{j}}|/max(|\mathcal{A}_{ pt_{i}}|,|\mathcal{A}_{pt_{j}}|)\) as the intersection ratio between two sets of advantageous dataset \(\mathcal{A}_{pt_{i}}\) and \(\mathcal{A}_{pt_{j}}\), derived from two different pre-training tasks--\(\mathcal{PT}_{i}\) and \(\mathcal{PT}_{j}\). We also define their intersection set as \(\Omega_{ij}=\mathcal{A}_{pt_{i}}\cap\mathcal{A}_{pt_{j}}\). We analogously define the model-wise intersection ratio \(\omega_{ij}\) by substituting the pre-training task for the model structure, where \(i\) and \(j\) indicate different model structures.
In Figure 2, given the same model structure, except for TsTransformer, there is a high intersection ratio \(\varphi_{ij}\) between each pair of the pre-training tasks (random-cls and rand-init excluded), mostly above 0.4 and in some cases up to 0.8 or more. More interestingly, according to statistics, such overlap is not due to the close vicinity between the pre-trained parameters. In fact, for some datasets within \(\Omega_{ij}\), the \(\ell_{2}\) distances between parameter \(\theta_{i}\) and parameter \(\theta_{j}\) are even further apart than their average distance across all datasets. We also calculate the common subset ratio \(\phi\) among all pre-training tasks (random-cls and rand-init excluded), which is formally defined as
\[\phi=|\bigcap_{i=1}^{n}A_{pt_{i}}|/|\bigcup_{i=1}^{n}A_{pt_{i}}|, \tag{1}\]
where \(n\) is the number of pre-training tasks. On the UTS datasets, the common subset ratios \(\phi\) of LSTM, D.Conv, and TsTransformer are 0.1186, 0.06, and 0.1568, respectively, which is relatively low compared with the pair-wise intersection ratio \(\varphi\). This result shows that the model is not the only determinant and that different pre-training tasks can bring out their advantages on different datasets.
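Both overlap measures can be computed directly from the advantageous-dataset sets; the sketch below uses placeholder dataset names.

```python
def intersection_ratio(adv_i: set, adv_j: set) -> float:
    """phi_ij: overlap of two advantageous-dataset sets, normalized by the larger one."""
    if not adv_i or not adv_j:
        return 0.0
    return len(adv_i & adv_j) / max(len(adv_i), len(adv_j))

def common_subset_ratio(adv_sets: list) -> float:
    """phi: datasets advantageous for all pre-training tasks over those advantageous for any."""
    common = set.intersection(*adv_sets)
    union = set.union(*adv_sets)
    return len(common) / len(union) if union else 0.0

# Placeholder advantageous-dataset sets of two pre-training tasks for one encoder.
adv_ts2vec = {"Beef", "Coffee", "ECG200", "Wine"}
adv_mvts = {"Beef", "ECG200", "Ham"}
print(intersection_ratio(adv_ts2vec, adv_mvts))     # 0.5
print(common_subset_ratio([adv_ts2vec, adv_mvts]))  # 0.4
```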
Analyzed from another perspective, Table 5 shows that the model-wise overlap \(\omega_{ij}\) is relatively small when we fix the pre-training task. Nevertheless, we still find that the Mvts and Ts2Vec methods significantly outperform the two baselines, R.Cls and R.Init, demonstrating the potential to make the model perform better on certain datasets regardless of its structure.
Observing Table 5 and Figure 2, if the question is how to make pre-training benefit fine-tuning on a certain dataset, it is more important to consider the fit of the model structure rather than designing a specific pre-training task, because the model structure has a much
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline & **Ts2Vec** & **Ts-Tcc** & **Mvts** & **Splt** & **R.Cls** & **R.Init** \\ \hline MTS & **.233\(\pm\).047** &.125\(\pm\).177 &.151\(\pm\).117 &.125\(\pm\).102 &.139\(\pm\).104 &.042\(\pm\).059 \\ \hline UTS &.252\(\pm\).086 &.193\(\pm\).099 & **.301\(\pm\).085** &.272\(\pm\).046 &.198\(\pm\).070 &.136\(\pm\).037 \\ \hline \hline \end{tabular}
\end{table}
Table 5: We present overlap ratio \(\omega_{ij}\) of advantageous datasets between different model structures when the pre-training task is fixed. Here we interpret \(i\) and \(j\) as two different model structures.
greater impact than the pre-training task for squeezing the potential of unsupervised pre-training on a given dataset.
### Correlation Factors
From Table 6, we can see that there is not a single factor that is significantly associated with the effectiveness of pre-training. There are some sporadic significant associations, but they are only observed on a small number of model structures, which do not provide much of a reference. This result is rather disappointing, and we have yet to find an indicator to guide us on whether to trust the pre-trained parameters or to decide whether to pre-train on certain datasets.
\begin{table}
\begin{tabular}{c c c c c c c c c c} \hline \hline & & **Seq.L** & **Pre.S** & \(\ell_{2}\)**-norm** & \(\ell_{2}\)**-path** & **Sharp.** & **P.T** & **P.V** & **Dis.** \\ \hline \multicolumn{10}{c}{**Mvts**} \\ \hline \multirow{3}{*}{UTS} & Lstm & 0.045 & -0.020 & 0.017 & 0.017 & -0.095 & 0.162 & **0.198\(\dagger\)** & 0.027 \\ & Transf. & -0.036 & 0.137 & -0.078 & -0.086 & 0.172 & -0.069 & -0.080 & -0.027 \\ & D.CNN & 0.0270 & -0.147 & 0.093 & 0.094 & **0.229\(\dagger\)** & 0.109 & 0.115 & -0.130 \\ \hline \multicolumn{10}{c}{**Srlt**} \\ \hline \multirow{3}{*}{UTS} & Lstm & -0.037 & 0.058 & 0.040 & 0.037 & 0.174 & -0.060 & -0.043 & 0.045 \\ & Transf. & -0.002 & -0.035 & 0.049 & 0.047 & -0.133 & 0.001 & -0.106 & -0.102 \\ & D.CNN & -0.127 & -0.004 & -0.040 & -0.042 & 0.112 & -0.009 & -0.071 & -0.050 \\ \hline \multicolumn{10}{c}{**Ts-Tcc**} \\ \hline \multirow{3}{*}{UTS} & Lstm & -0.070 & 0.080 & 0.182 & **0.218\(\dagger\)** & 0.085 & 0.002 & -0.052 & 0.009 \\ & Transf. & 0.0381 & -0.061 & -0.022 & -0.021 & -0.094 & 0.176 & -0.047 & 0.016 \\ & D.CNN & -0.037 & -0.067 & 0.049 & 0.051 & 0.131 & -0.109 & -0.066 & -0.046 \\ \hline \multicolumn{10}{c}{**Ts2Vec**} \\ \hline \multirow{3}{*}{UTS} & Lstm & -0.211 & 0.053 & 0.039 & 0.041 & 0.169 & -0.024 & -0.024 & 0.040 \\ & Transf. & -0.010 & 0.078 & -0.002 & 0.001 & 0.039 & -0.005 & 0.033 & -0.066 \\ \cline{1-1} & D.CNN & 0.057 & -0.225 & -0.104 & -0.109 & 0.046 & 0.087 & 0.102 & **-0.195\(\dagger\)** \\ \hline \hline \end{tabular}
\end{table}
Table 6: Spearman correlation coefficients between different factors and the accuracies (early-stopping) of the test set. The results in this table are derived from the UTS datasets only, but the situation is similar on the MTS datasets. **Seq.L** is sequence length. **Pre.S** is the size of the pre-training set. **Sharp.** means sharpness. **P.T** represents the convergence state of the model on the pre-training set and is equal to the lowest training loss divided by the training loss of the first epoch. **P.V** is defined similarly on the validation set. **Dis.** is the \(\ell_{2}\) distance between the pre-trained parameters and their initial point.
## 7 Conclusion and Future Work
This paper studies whether unsupervised pre-training is beneficial for fine-tuning on the Time Series Classification task, making an empirical contribution to the study of when and how unsupervised pre-training helps fine-tuning. We conclude that pre-training does not significantly enhance the generalization performance, but it can improve the convergence speed when the model fits the data well. It can also improve the optimization process for simple models with a small number of parameters. Increasing the amount of pre-training data does not benefit generalization, but it amplifies the existing advantages of pre-training, such as fast convergence. When dealing with a new time series dataset and aiming to enhance the model through pre-training, it is more important to focus on designing an appropriate model architecture than on designing the pre-training task itself.
The experimental procedures and conclusions presented in this paper are primarily grounded in low-resource settings, utilizing medium-sized models and limited pre-training data. This approach is particularly relevant to the current discourse surrounding artificial intelligence, as the development of AI with lower resources has become a pressing concern for various fields, including natural language processing Rotman and Reichart (2019), healthcare Wahl et al. (2018); Pokaprakarn et al. (2022), edge computing Merenda et al. (2020), and green AI Schwartz et al. (2020). To extend the scope of our work, we encourage future studies to explore larger models and more data in this domain, as well as to do new research on catastrophic forgetting Mehta et al. (2021), intrinsic dimension Aghajanyan et al. (2021), and other aspects not covered in this paper.
|
2305.19541 | Few-Shot Speaker Identification Using Lightweight Prototypical Network
with Feature Grouping and Interaction | Existing methods for few-shot speaker identification (FSSI) obtain high
accuracy, but their computational complexities and model sizes need to be
reduced for lightweight applications. In this work, we propose a FSSI method
using a lightweight prototypical network with the final goal to implement the
FSSI on intelligent terminals with limited resources, such as smart watches and
smart speakers. In the proposed prototypical network, an embedding module is
designed to perform feature grouping for reducing the memory requirement and
computational complexity, and feature interaction for enhancing the
representational ability of the learned speaker embedding. In the proposed
embedding module, audio feature of each speech sample is split into several
low-dimensional feature subsets that are transformed by a recurrent
convolutional block in parallel. Then, the operations of averaging, addition,
concatenation, element-wise summation and statistics pooling are sequentially
executed to learn a speaker embedding for each speech sample. The recurrent
convolutional block consists of a block of bidirectional long short-term
memory, and a block of de-redundancy convolution in which feature grouping and
interaction are conducted too. Our method is compared to baseline methods on
three datasets that are selected from three public speech corpora (VoxCeleb1,
VoxCeleb2, and LibriSpeech). The results show that our method obtains higher
accuracy under several conditions, and has advantages over all baseline methods
in computational complexity and model size. | Yanxiong Li, Hao Chen, Wenchang Cao, Qisheng Huang, Qianhua He | 2023-05-31T04:09:50Z | http://arxiv.org/abs/2305.19541v1 | Few-Shot Speaker Identification Using Lightweight Prototypical Network with Feature Grouping and Interaction
###### Abstract
Existing methods for few-shot speaker identification (FSSI) obtain high accuracy, but their computational complexities and model sizes need to be reduced for lightweight applications. In this work, we propose a FSSI method using a lightweight prototypical network with the final goal to implement the FSSI on intelligent terminals with limited resources, such as smart watches and smart speakers. In the proposed prototypical network, an embedding module is designed to perform feature grouping for reducing the memory requirement and computational complexity, and feature interaction for enhancing the representational ability of the learned speaker embedding. In the proposed embedding module, audio feature of each speech sample is split into several low-dimensional feature subsets that are transformed by a recurrent convolutional block in parallel. Then, the operations of averaging, addition, concatenation, element-wise summation and statistics pooling are sequentially executed to learn a speaker embedding for each speech sample. The recurrent convolutional block consists of a block of bidirectional long short-term memory, and a block of de-redundancy convolution in which feature grouping and interaction are conducted too. Our method is compared to baseline methods on three datasets that are selected from three public speech corpora (VoxCeleb1, VoxCeleb2, and LibriSpeech). The results show that our method obtains higher accuracy under several conditions, and has advantages over all baseline methods in computational complexity and model size.
Few-shot learning, speaker identification, feature grouping, feature interaction, prototypical network
## I Introduction
Speaker recognition is a critical technique for many practical applications, such as criminal investigation [1] and financial services [2]. It can be mainly divided into two classes: speaker identification (SI) and speaker verification (SV) [3]. The SI is a task to decide which speaker utters a given speech sample [4], while the SV is a task to reject or accept the identity claim of a speaker based on the speaker's voice [5].
In some applications, it is difficult to acquire enough speech samples for building a reliable SI system. For instance, law enforcement agencies often have difficulty in collecting enough speech samples spoken by the suspects for forensic SI [1]. However, few samples are needed for humans to learn a new task well [6]. To reduce the performance gap between machine learning and human learning when the speech samples are very few, a task of FSSI is proposed [7, 8]. The FSSI is a newly emerging task to identify speakers in unlabeled speech samples (query set) based on few labeled speech samples (support set).
In this paper, we propose a FSSI method using a lightweight prototypical network with feature grouping and interaction. The rest of this paper is structured as follows. Sections II and III describe related works and our contributions, respectively. Section IV introduces the proposed FSSI method. Section V presents the experiments and discussions, and the conclusions are drawn in Section VI.
## II Related Works
In this section, we introduce related works from three aspects, including general speaker recognition, few-shot speaker recognition, and lightweight speaker recognition.
### _General Speaker Recognition_
Many efforts were made on speaker recognition. They mainly focused on solving two problems: how to learn a front-end feature with strong representational ability, and how to build a back-end classifier with high accuracy for recognition.
Hand-crafted features were designed to represent properties of different speakers, such as constant Q cepstral coefficients [9], Mel-frequency cepstral coefficients [10], linear prediction coding coefficients [10], Gaussian supervector [11], eigenvoice motivated vectors [12], and I-vector [13]. Each one of these features was often designed for a specific scenario. Hence, they would not perform well in other scenarios. In addition, they are shallow-model based features, instead of deep-model based features. They cannot effectively represent the differences of deep-level properties among various speakers. To overcome the shortcomings of hand-crafted features, deep neural network (DNN) was proposed to learn deep-model based features. The DNN can learn discriminative embeddings from speech samples. Hence, the deep-model based features exceeded the hand-crafted features for speaker recognition. The deep-model based features include the X-vector learned by a time-delay neural network (TDNN) [14, 15]; the d-vector learned by a DNN [16]; and the S-vector learned by a Transformer encoder speaker authenticator [17]. In addition, other networks were used to learn or adapt embeddings for speaker recognition, such as long short-term memory network [18], convolutional neural network [19], Siamese neural network (SNN) [20], Transformer [21], and multi-scale convolutional recurrent network [22].
On the other side, many works were also done on the design of back-end classifiers. Typical classifiers adopted in previous works mainly include: DNN [23], vector quantization [24], Gaussian mixture model [25], support vector machine [26], hidden Markov model [27], probabilistic linear discriminant analysis (PLDA) [28], and cosine distance [10].
### _Few-Shot Speaker Recognition_
Since the DNN requires a huge amount of training data for achieving satisfactory results, a few works were done for few-shot speaker recognition with various DNNs. In these works, a common practice is to obtain an end-to-end neural network for speaker embedding learning [29, 7, 30]. For instance, Wang et al. [7] built an end-to-end neural network to learn speaker embedding by a prototypical loss for few-shot speaker recognition. Recently, Wang et al. [31] proposed an end-to-end neural network with an attention corrected prototype using the relation based indefinite distance metric for few-shot speaker recognition. In addition, Li et al. [32] designed a depthwise separable convolutional network with channel attention for FSSI. Their neural network was trained with a prototypical loss, which can alleviate the overfitting problem.
The technique of adversarial learning [33] is to generate two neural networks in an adversarial way for enhancing training efficiency, which is beneficial for few-shot speaker recognition. For example, Li et al. [34] designed an adversarial model for few-shot speaker recognition. The technique of transfer learning [35] was used for few-shot speaker recognition too. For instance, Anand et al. [8] utilized a capsule network [36] for few-shot speaker recognition. In addition, the technique of meta-learning was applied for few-shot speaker recognition. For example, Li et al. [37] built the bridging mixture density networks with meta learning to identify speakers using a few samples. Kye et al. [38] adopted the meta-learning to tackle the problem of imbalance length of training and testing samples. Mishra et al. [39] used a SNN [40] and 3-dimensional convolution to tackle the problem of few-shot speaker verification.
These methods above for few-shot speaker recognition can solve the problem of performance degradation caused by the lack of samples, but model size and computational complexity of these methods are relatively high and are not explicitly considered in previous works. It is very challenging to deploy these methods on intelligent terminals with limited resources, such as smart watches, smart speakers, service robots.
### _Lightweight Speaker Recognition_
To deploy speaker recognition systems on intelligent terminals with limited resources, it is necessary to study the problems of either lightweight model design or model compression while maintaining the performance of speaker recognition. These problems belong to the scope of lightweight speaker recognition.
Inspired by the successes of MobileNet [41] and ShuffleNet [42] in computer vision, some efforts were made to improve convolutional operations for obtaining a lightweight model for speaker recognition [43, 44]. For example, Koluguri et al. [43] proposed a SpeakerNet for the tasks of speaker recognition and verification. The SpeakerNet is composed of residual blocks with 1-dimensional depth-wise separable convolutions, batch normalization, and ReLU layers, which is a lighter model with 5 million parameters. Similarly, Nunes et al. [44] proposed a portable model called additive margin MobileNet1D for implementing speaker identification on mobile devices. The MobileNet1D takes 11.6 megabytes on disk storage. In short, such improved convolutional operations can reduce the model size and computational complexity, but they generally lead to different levels of performance degradation. In addition, the tradeoff between the complexity reduction and performance degradation requires to be carefully considered.
Some works were made on manual design of a model with better architecture or organization for realizing lightweight speaker recognition [45]-[48]. For example, inspired by the success of residual network (ResNet) in image recognition [49], Oneata et al. [45] replaced the original trunk of the SincNet [50] with a lightweight (ResNet-inspired) trunk. They found that their trunk was lighter (2.8 million instead of 22 million parameters) with better performance. Recently, inspired by the favorable geometry of the hyperbolic geometry, Lee et al. [48] proposed a lighter model called hyperbolic ResNet for speaker recognition. They found that the learned speaker embeddings were more compact and were at the same level of performance. In summary, the minimum size of all such proposals for designing a model with better architecture or organization is at the level of millions of parameters. Therefore, there is still much room to reduce the size of the model in these methods.
In addition, knowledge distillation [51] and neural architecture search (NAS) [52] were used to realize lightweight speaker recognition [53]-[55]. For instance, Ng et al. [53] investigated the framework of teacher-student training for knowledge distillation in the text-independent speaker recognition, and obtained competitive result with 88-93% smaller models. Lin et al. [54] proposed an asymmetric structure, which took a big model of the ECAPA-TDNN (Emphasized Channel Attention, Propagation and Aggregation in TDNN) for enrollment and a small-scale model of the ECAPA-TDNN-Lite for verification. The ECAPA-TDNN-Lite obtained competitive equal error rates with 11.6 million FLOPS (floating-point operations per second). Wang et al. [55] used the NAS to automatically design an efficient network (EfficientTDNN) which consisted of a TDNN based super-net and a TDNN-NAS algorithm. Their neural network obtained competitive equal error rates with Multiply-Accumulate operations (MACs) from 204 million to 1.45 giga. Although these methods can obtain a relatively small model for testing, they require to pre-train a very large model or to search the proper network with large number of architecture settings. Hence, the training cost and overall requirement for realizing speaker recognition are still heavy.
## III Our Contributions
Based on the descriptions above, it can be seen that many efforts have been made on few-shot or lightweight speaker recognition. However, to the best of our knowledge, there has been almost no work on lightweight few-shot speaker recognition so far.
The work in this paper focuses on the SI task only. We propose a FSSI method by a lightweight prototypical network in which an embedding module is designed to realize feature grouping and interaction. In the embedding module, the input feature with \(H\)-dimension is evenly split into \(I\) feature subsets with \(J\)-dimension. Afterwards, these \(I\) feature subsets are independently fed into a recurrent convolutional block (RCB) which consists of a block of bidirectional long short-term memory (BLSTM) and a block of de-redundancy convolution (DRC). Then, some typical operations (e.g., averaging, addition, concatenation, statistics pooling) are sequentially executed on the output of the RCB to produce a speaker embedding for each speech sample. Finally, a Softmax layer is introduced to the proposed prototypical network for realizing the FSSI task.
Three datasets are constructed by randomly selecting speech samples from three speech corpora (VoxCeleb1, VoxCeleb2, and LibriSpeech) for evaluating various methods. The results show that our method is effective for lightweight FSSI. In conclusion, the main contributions of this work are as follows:

1. To efficiently learn speaker embeddings with strong representational ability, we design an embedding module mainly consisting of a RCB to implement feature grouping and interaction. The RCB is composed of a BLSTM block and a DRC block, which can capture global sequential information and local spatial information during speaker embedding learning. To the best of our knowledge, the architecture of the proposed embedding module (including the RCB and DRC) is novel and not used in previous works.

2. The idea of both feature grouping and feature interaction for speaker embedding learning is not considered or used in previous works for speaker recognition. The operation of feature grouping can reduce the model size and computational complexity, while the operation of feature interaction can enhance the representational ability of the learned speaker embedding. In addition, these two operations are executed twice during speaker embedding learning: the first time at the preceding and succeeding blocks of the RCB, and the second time at the DRC block. In short, the adoption of these two operations is the most critical reason why our proposed embedding module is effective for lightweight FSSI and achieves better performance.

3. We propose a FSSI method using a lightweight prototypical network for solving the problem of lightweight FSSI, which is not considered in previous works. We thoroughly validate the effectiveness of our proposed method, and compare it with the baseline methods on three experimental datasets under different conditions. Experimental results show that our proposed method has advantages over the baseline methods in accuracy, model size and computational complexity.
## IV Method
In this section, we describe the proposed method, including the descriptions of problem definition, network architecture, embedding module, and recurrent convolutional block.
### _Problem Definition_
In this study, we focus on investigating the problem of text-independent FSSI by an episode-based strategy [56]. In each episodic training step, \(K\) samples from each one of \(N\) classes (speakers) are randomly selected from the training subset as the support set (i.e., \(K\times N\) support samples), and then another \(K\) samples from each one of the \(N\) speakers are also randomly selected from the remaining samples of the training subset to form the query set (i.e., \(K\times N\) query samples). The \(N\) speakers in the support set are the same as those in the query set, whereas the \(K\times N\) support samples are completely different from the \(K\times N\) query samples. These \(2K\times N\) samples form one batch of support and query samples that are fed into the prototypical network in one episodic learning procedure. In each episodic testing step, the speaker of each query sample is identified after a set of support samples is fed into the network for enrollment. It is supposed that each query sample belongs to one of the speakers in the support set during the episodic testing step. We evaluate with variable numbers of both support speakers (\(N\)-way) and support samples (\(K\)-shot). This is the \(N\)-way \(K\)-shot task.
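A minimal sketch of this episodic sampling is shown below; `speaker_to_samples` is an assumed mapping from a speaker identity to its list of speech samples, each of which must contain at least \(2K\) samples.

```python
import random

def sample_episode(speaker_to_samples: dict, n_way: int, k_shot: int):
    """Pick N speakers, then K disjoint support and K query samples per speaker."""
    speakers = random.sample(list(speaker_to_samples), n_way)
    support, query = [], []
    for label, spk in enumerate(speakers):
        drawn = random.sample(speaker_to_samples[spk], 2 * k_shot)  # disjoint draws
        support += [(x, label) for x in drawn[:k_shot]]
        query += [(x, label) for x in drawn[k_shot:]]
    return support, query
```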
After extracting speaker embeddings from support and query samples by the embedding module of the network, the support set and query set are denoted as \(\mathbf{S}=\{\mathbf{X}_{1},\ldots,\mathbf{X}_{n},\ldots,\mathbf{X}_{N}\}\) and \(\mathbf{Q}=\{\mathbf{X}_{1},\ldots,\mathbf{X}_{n},\ldots,\mathbf{X}_{N}\}\), respectively. Furthermore, \(\mathbf{X}_{n}=\{(\mathbf{x}_{n,1},y_{n}),\ldots,(\mathbf{x}_{n,k},y_{n}),\ldots,(\mathbf{x}_{n,K},y_{n})\}\) denotes the support (or query) set of the \(n\)th speaker, where \(1\leq n\leq N\) and \(1\leq k\leq K\). In each \(\mathbf{X}_{n}\), \(\mathbf{x}_{n,k}\) and \(y_{n}\) denote the \(k\)th speaker embedding and the speaker label of the \(n\)th speaker in the support (or query) set, respectively.
The prototype vector \(\mathbf{\overline{x}}_{n}\) of the \(n\)th speaker is defined by the mean vector of speaker embeddings in the support set of the \(n\)th speaker, namely,
\[\mathbf{\overline{x}}_{n}\!=\!\frac{1}{K}\sum_{k=1}^{K}\mathbf{x}_{n,k}. \tag{1}\]
The probability produced by the prototypical network over speakers for a query speaker embedding \(\mathbf{x}\) based on a Softmax over distances to the prototype vectors is defined by
\[p(y=y_{n}\mid\mathbf{x})=\frac{\exp\left(-Dist(\mathbf{\overline{x}}_{n},\,\mathbf{x})\right)}{\sum_{n^{\prime}=1}^{N}\exp\left(-Dist(\mathbf{\overline{x}}_{n^{\prime}},\,\mathbf{x})\right)}, \tag{2}\]
where \(\mathbf{\overline{x}}_{n}\) and \(\mathbf{\overline{x}}_{n^{\prime}}\) denote the prototype vectors of the \(n\)th and \(n^{\prime}\)th speakers in the support set; \(\mathbf{x}\) and \(y\) denote one speaker embedding in the query set and the predicted speaker label, respectively; \(Dist(\cdot)\) denotes a distance function, such as the Euclidean distance or the cosine distance.
Learning proceeds by minimizing a loss function \(\ell\) via the stochastic gradient descent algorithm [57]. The loss function \(\ell\) is defined by
\[\ell=-\log\left(p(y=y_{n}\mid\mathbf{x})\right). \tag{3}\]
A prototypical network is trained to minimize the loss function \(\ell\) by repeatedly feeding batches of speaker embeddings of both support and query samples to the network, so that each query sample corresponds to one of the speakers in the support set.
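A compact sketch of Eqs. (1)-(3) is given below; the squared Euclidean distance is one admissible choice of \(Dist(\cdot)\), and the ordering of support embeddings by speaker is an assumption of this sketch.

```python
import torch
import torch.nn.functional as F

def prototypical_loss(support_emb, query_emb, query_labels, n_way, k_shot):
    """support_emb: (n_way * k_shot, dim) grouped by speaker;
    query_emb: (num_query, dim); query_labels: (num_query,) speaker indices."""
    prototypes = support_emb.view(n_way, k_shot, -1).mean(dim=1)   # Eq. (1)
    dists = torch.cdist(query_emb, prototypes) ** 2                # squared Euclidean Dist(.)
    log_p = F.log_softmax(-dists, dim=1)                           # Eq. (2)
    return F.nll_loss(log_p, query_labels)                         # Eq. (3), averaged over queries
```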
### _Network Architecture_
The proposed prototypical network for FSSI is illustrated in Fig. 1, which is mainly composed of two parts: one embedding module and one Softmax layer. The embedding module is designed to learn speaker embedding from support samples and query samples, while the Softmax layer is adopted to make a decision for speaker identification.
Fig. 1: The proposed prototypical network for FSSI. PV: prototype vectors; SS: support samples; SE: speaker embeddings; QS: query samples.
The design of the embedding module is motivated by two considerations. First, the key part of the embedding module is the RCB, a recurrent convolutional architecture (BLSTM + DRC). The embedding module captures global sequential information with the BLSTM block from the log Mel-spectrum [58] and local spatial information with the DRC block from the transformed feature (the output of the BLSTM block). These two kinds of information are complementary to each other and enhance the representational ability of the speaker embedding. In addition, embeddings learned by recurrent convolutional neural networks have outperformed those learned by purely recurrent or purely convolutional networks in other audio processing tasks [59, 60]. Second, the embedding module performs feature grouping and feature interaction during speaker embedding learning. Feature grouping decreases the model size and computational complexity, while feature interaction captures correlation information among feature subsets and helps improve the representational ability of the speaker embedding.
The motivation for using the Softmax layer is based on two reasons. First, the Softmax layer can be stacked at the top of the embedding module to seamlessly form a prototypical network with discriminability for realizing the FSSI task. Second, the network with the proposed embedding module can be efficiently updated under the guidance of the loss function \(\ell\) without tuning complex hyper-parameters.
### _Embedding Module_
The framework of the embedding module is depicted in Fig. 2.
The input audio feature, a log Mel-spectrum \(\mathbf{F}\in\mathbb{R}^{H}\)[58], is extracted from each support sample or query sample. Along the frequency dimension, the log Mel-spectrum is split into \(I\) (taking 4 as an example, as shown in Fig. 2) feature subsets \(\mathbf{F}_{i}\in\mathbb{R}^{J}\), where \(H=JI\), \(I\in\mathbb{Z}^{*}\), \(J\in\mathbb{Z}^{*}\) and \(1\leq i\leq I\). The \(I\) feature subsets \(\mathbf{F}_{i}\), instead of \(\mathbf{F}\), are independently fed into the RCB for further transformation. That is, the RCB is shared among the \(I\) feature subsets \(\mathbf{F}_{i}\). Compared with the feature \(\mathbf{F}\), each feature subset \(\mathbf{F}_{i}\) has a lower dimension. Hence, the number of parameters of the RCB (e.g., the numbers of hidden units of the BLSTM and of convolutional kernels) can be reduced when \(\mathbf{F}_{i}\) is used as the input of the RCB. However, the correlation information among the \(I\) feature subsets cannot be utilized when each \(\mathbf{F}_{i}\) is independently fed into the RCB. As a result, the representational ability of the learned speaker embedding would be weakened.
To acquire the correlation information among the \(I\) feature maps (subsets) \(\mathbf{G}_{i}\), we design a feature interaction block which consists of two operations: the calculation of the mean vector \(\mathbf{\overline{G}}\), and the addition of the mean vector \(\mathbf{\overline{G}}\) to each feature map \(\mathbf{G}_{i}\). The \(I\) transformed feature maps \(\mathbf{G}_{i}=\{g^{i}_{m,n}\}\) are fed into the "Mean vector" block for calculating the mean vector \(\mathbf{\overline{G}}=\{\overline{g}_{m,n}\}\) by

\[\overline{g}_{m,n}=\frac{1}{I}\sum_{i=1}^{I}g^{i}_{m,n}, \tag{4}\]

where \(1\leq m\leq h\), \(1\leq n\leq w\); \(h\) and \(w\) denote the height and width of the feature maps \(\mathbf{G}_{i}\), respectively. The mean vector \(\mathbf{\overline{G}}\) is a transformation of all feature maps \(\mathbf{G}_{i}\), and thus contains the correlation information among all \(\mathbf{G}_{i}\). Each \(\mathbf{G}_{i}\) is further transformed to \(\mathbf{G}_{i}^{\prime}=\{g^{\prime i}_{m,n}\}\) by adding \(\mathbf{G}_{i}\) and \(\mathbf{\overline{G}}\), namely

\[g^{\prime i}_{m,n}=g^{i}_{m,n}+\overline{g}_{m,n}. \tag{5}\]
The \(I\) feature maps \(\mathbf{G}_{i}^{\prime}\) are concatenated together to obtain a transformed feature map \(\mathbf{G}^{\prime}=\{\mathbf{G}_{1}^{\prime},\,...,\,\mathbf{G}_{i}^{\prime},\,...,\,\mathbf{G}_{I}^{\prime}\}\). The log Mel-spectrum \(\mathbf{F}\) is convolved with convolution kernels of size \(1\times 1\) (i.e., \(1\times 1\) Conv in Fig. 2) to obtain the transformed feature map \(\mathbf{F}^{\prime}\). An operation of statistics pooling [15] is conducted on the addition of \(\mathbf{G}^{\prime}\) and \(\mathbf{F}^{\prime}\), namely \(\mathbf{x}^{\prime}=\mathbf{G}^{\prime}+\mathbf{F}^{\prime}\), to produce the speaker embedding \(\mathbf{x}\). Specifically, in the statistics pooling layer, both the mean vector \(\mathbf{x}^{\prime}_{mean}\) and the standard deviation vector \(\mathbf{x}^{\prime}_{std}\) of \(\mathbf{x}^{\prime}\) are calculated, and then \(\mathbf{x}^{\prime}_{mean}\) and \(\mathbf{x}^{\prime}_{std}\) are concatenated to form the speaker embedding \(\mathbf{x}=[\mathbf{x}^{\prime}_{mean},\,\mathbf{x}^{\prime}_{std}]\).
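The data flow of the embedding module can be sketched as follows (a simplified PyTorch sketch: the shared RCB is stubbed with a size-preserving convolutional block, and tensor shapes and channel counts are illustrative rather than the configuration of Table II):

```python
import torch
import torch.nn as nn

class EmbeddingModule(nn.Module):
    """Feature grouping, a shared RCB, feature interaction (Eqs. 4-5),
    a 1x1-conv path on the full spectrum, and statistics pooling."""

    def __init__(self, n_subsets=4, channels=32):
        super().__init__()
        self.n_subsets = n_subsets
        # Placeholder for the shared recurrent convolutional block (BLSTM + DRC).
        self.rcb = nn.Sequential(
            nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
        )
        self.skip = nn.Conv2d(1, channels, kernel_size=1)          # "1x1 Conv" on F

    def forward(self, spec):                                       # spec: (B, 1, H, T)
        # Feature grouping: split along frequency (assumes H divisible by n_subsets).
        subsets = torch.chunk(spec, self.n_subsets, dim=2)
        g = [self.rcb(s) for s in subsets]                         # shared RCB -> G_i
        g_mean = torch.stack(g, dim=0).mean(dim=0)                 # Eq. 4: mean over subsets
        g = [gi + g_mean for gi in g]                              # Eq. 5: interaction -> G'_i
        x = torch.cat(g, dim=2) + self.skip(spec)                  # x' = G' + F'
        stats = x.flatten(2)                                       # statistics pooling:
        return torch.cat([stats.mean(dim=2), stats.std(dim=2)], dim=1)  # [mean, std]
```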
### _Recurrent Convolutional Block_
The RCB is composed of one BLSTM block and one DRC block, as shown in Fig. 3.
Fig. 3: The recurrent convolutional block.
Fig. 2: The framework of embedding module for learning speaker embedding.
The BLSTM block consists of one long short-term memory block (LSTMB) and one bidirectional recurrent block (BRB), which acquires global sequential information from two directions with a forward layer and a backward layer [61, 62]. Capturing this global sequential information is necessary for better speaker identification performance. In addition, a deep convolutional neural network (DCNN) consists of many regular convolutions, which leads to a heavy computational load and large memory usage [63]. There is much redundant information in the feature maps output by the convolutional layers of a DCNN, and most of these feature maps are similar to each other. The output feature map \(\mathbf{Z}\) of a regular convolutional layer is defined as the convolution of the input feature map \(\mathbf{Y}\) and the convolution kernels \(\mathbf{K}\), namely
\[\mathbf{Z}=\mathbf{Y}\ast\mathbf{K}, \tag{6}\]

where \(\mathbf{Z}\in\mathbb{R}^{h^{\prime}\times w^{\prime}\times M}\) and \(\mathbf{Y}\in\mathbb{R}^{h\times w\times C}\) denote the output and input feature maps, respectively, and \(\mathbf{K}\) denotes the convolution kernels; \(\ast\) denotes the convolution operation; \(h^{\prime}\) and \(w^{\prime}\) denote the height and width of the output feature map, respectively; \(h\) and \(w\) stand for the height and width of the input feature map, respectively; \(M\) and \(C\) represent the numbers of output channels and input channels, respectively. In Eq. 6, the bias vector is omitted for simplicity.
In practice, it is unnecessary to produce all feature maps using regular convolutions. It is computationally efficient and less redundant if some representative feature maps are produced by regular convolutions and the remaining (derivative) feature maps are generated by computationally cheap linear transformations of the representative feature maps. Suppose that \(L\) representative feature maps \(\mathbf{Z}^{\prime}\) are produced by regular convolutions and defined by
\[\mathbf{Z}^{\prime}=\mathbf{Y}\ast\mathbf{K}^{\prime}, \tag{7}\]

where \(\mathbf{Z}^{\prime}\in\mathbb{R}^{h^{\prime}\times w^{\prime}\times L}\) and \(\mathbf{K}^{\prime}\) denotes the corresponding convolution kernels; \(L\) is the number of output channels and \(L\leq M\); the bias vector is again omitted for simplicity.
To generate the rest (\(M\)-\(L\)) output feature maps, linear transformations are conducted on each representative feature map to produce its derivative feature maps. These linear transformations are defined by
\[\mathbf{Z}^{\prime\prime}_{l,j}=\Phi_{l,j}\left(\mathbf{Z}^{\prime}_{l}\right),\quad 1\leq l\leq L,\;1\leq j\leq\frac{M-L}{L}, \tag{8}\]

where \(\mathbf{Z}^{\prime}_{l}\) denotes the \(l\)th representative feature map and \(\Phi_{l,j}\) denotes the \(j\)th computationally cheap linear transformation applied to \(\mathbf{Z}^{\prime}_{l}\) to produce its derivative feature maps. The \(L\) representative feature maps and the \(M-L\) derivative feature maps are concatenated to form the \(M\) output feature maps of the DRC block.
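A minimal PyTorch sketch of this representative/derivative decomposition (the layer choices and the use of depthwise convolutions as the cheap linear transformations are illustrative assumptions, not the exact DRC configuration):

```python
import torch
import torch.nn as nn

class DRCBlock(nn.Module):
    """L representative feature maps from a regular convolution (Eq. 7) and
    M - L derivative feature maps from cheap per-channel transforms;
    assumes out_ch is divisible by ratio (the M/L ratio)."""

    def __init__(self, in_ch, out_ch, ratio=2, kernel=3):
        super().__init__()
        rep_ch = out_ch // ratio
        self.primary = nn.Sequential(                              # regular convolution -> Z'
            nn.Conv2d(in_ch, rep_ch, kernel, padding=kernel // 2),
            nn.BatchNorm2d(rep_ch), nn.ReLU(),
        )
        self.cheap = nn.Sequential(                                # cheap linear transforms -> Z''
            nn.Conv2d(rep_ch, out_ch - rep_ch, kernel,
                      padding=kernel // 2, groups=rep_ch),
            nn.BatchNorm2d(out_ch - rep_ch), nn.ReLU(),
        )

    def forward(self, y):
        z_rep = self.primary(y)                                    # representative maps
        z_der = self.cheap(z_rep)                                  # derivative maps
        return torch.cat([z_rep, z_der], dim=1)                    # M output channels
```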
In each episodic training step, \(N\) speakers are randomly selected and \(K\) samples per speaker are randomly chosen from the training subset to generate the support set, and then another \(N\cdot K\) different samples of the same \(N\) speakers (\(K\) samples per speaker) are randomly chosen from the training subset to form the query set. In each episodic testing step, the selections of query samples and support samples from the testing subset are the same as those in the episodic training step. The selection procedures of speakers and speech samples per speaker are repeated until all speakers and their speech samples in the training and testing subsets have been selected. The speech samples selected in different episodes are different from each other. The average score over the repeated episodic testing steps is used as the final result of a test. In addition, we construct the testing subset ten times by randomly selecting speech samples from each speech corpus, and conduct ten tests. The final results are the average of the ten test results.
### _Experimental Setup_
Our experiments are conducted on a machine with the following configuration: an Intel(R) Core(TM) i7-6700 CPU at 3.10 GHz, 64 GB of RAM, and an NVIDIA 1080 Ti GPU. All experiments are implemented with the PyTorch toolkit.
The metric of accuracy is used to measure the identification performances of various methods, which is defined as the ratio of the number of correctly identified samples to the total number of samples. The higher the value of accuracy, the better the identification performance of the methods. In addition, memory requirement and computational complexity of various methods are measured by the metrics of model size (MS) and MACs, respectively. The MS is defined as the total number of parameters of a neural network. The MACs is defined as the number of multiplication and addition operations of a neural network. The lower the value of MACs (or MS), the lower computational complexity (or memory requirement) of the methods. Main parameters of our method are listed in Table II.
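For reference, the two complexity metrics can be obtained for a PyTorch model roughly as sketched below (MS as a parameter count, and per-layer MACs for a standard 2-D convolution; full-network MACs are usually accumulated layer by layer or measured with a profiler):

```python
import torch.nn as nn

def model_size(model: nn.Module) -> int:
    """MS: total number of trainable parameters of the network."""
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

def conv2d_macs(layer: nn.Conv2d, out_h: int, out_w: int) -> int:
    """MACs of one Conv2d for a single input, given its output spatial size:
    one multiply-accumulate per kernel element, per (grouped) input channel,
    per output channel and output position."""
    k_h, k_w = layer.kernel_size
    return (layer.in_channels // layer.groups) * k_h * k_w * layer.out_channels * out_h * out_w
```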
### _Ablation Experiments_
In this subsection, we discuss the settings of two parameters that have direct influences on the performance of our method. These two parameters are the number of the feature subsets (i.e., _I_), and the ratio of the number of output feature maps to the number of representative feature maps (i.e., _M/L_). In the experiments, the value of _N_-way _K_-shot is set to 5-way 5-shot.
We first discuss the performance of our method with different numbers of feature subsets. The number of feature subsets, \(I\), ranges from 1 to 16 in steps of powers of 2. Meanwhile, the ratio _M/L_ is set to 2 and the other parameters are configured as given in Table II. The MS, MACs and accuracy scores obtained by our method with different numbers of feature subsets are presented in Table III.
When the number of feature subsets is equal to 4, our method obtains the highest accuracy scores of 92.89%, 92.74%, and 98.51% on the V2-set, V1-set, and L-set, respectively, with relatively low values of both MS and MACs. When the number of feature subsets deviates from 4, the accuracy scores steadily decrease. Hence, the number of feature subsets is set to 4 in the experiments of the following sections.
With the increase of the number of feature subsets (e.g., from 1 to 4), the dimension of each feature subset which is fed into the RCB becomes lower. As a result, the number of parameters required for the RCB is reduced, and the neural network becomes lightweight. Thanks to the operation of feature interaction executed in the embedding module, the correlation information among all feature subsets is effectively captured for enhancing the representational ability of the learned speaker embedding. Hence, the accuracy score obtained by our method is increased. However, when the input feature is segmented into too many feature subsets (e.g., >4), the correlation information among all feature subsets is too fragmented to be effectively acquired by the operation of feature interaction. Hence, the representational ability of the learned speaker embedding is weakened. As a result, the accuracy score obtained by our method is reduced and lower than that when \(I\) is equal to 4.
In addition, we discuss the impact of the ratio of _M/L_ on the performance of our method. The values of the ratio of _M/L_ range from 1 to 4. In this experiment, the number of feature subsets is set to 4 and other parameters are configured as shown in Table II. The scores of MS, MACs and accuracy obtained by our method with different values of _M/L_ are listed in Table IV.
When the value of _M/L_ is equal to 1, the output feature maps of the DRC consist of representative feature maps only (without derivative feature maps). That is, there is regular convolution only (without linear transformation) and thus without the interactions of representative feature maps in the DRC. In this case, our method obtains satisfactory results on three datasets in accuracy, but the values of MS and MACs reach the maximum.
When the value of _M/L_ is equal to 2, our method obtains the highest accuracy scores of 92.89%, 92.74%, and 98.51% on the V2-set, V1-set, and L-set, respectively, with relatively low values of both MS and MACs. With the increase of the value of _M/L_, the proportion of the derived feature maps in the output
feature maps increases and thus the neural network becomes lighter, but the accuracy scores decrease. Hence, the value of _M/L_ is set to 2 in the following experiments.
### _Qualitative Analysis_
In this subsection, we make a qualitative analysis about the influence of feature interaction on the representational ability of the learned feature subsets. The t-SNE [67] is utilized to map the feature subsets \(\mathbf{G}_{i}\) and \(\mathbf{G}^{{}^{\prime}}_{i}\) into a two-dimensional space. \(\mathbf{G}_{i}\) and \(\mathbf{G}^{{}^{\prime}}_{i}\) (\(1\leq i\leq I\)) are the input and output feature subsets of the feature interaction block of the proposed embedding module, respectively, as given in Fig. 2 (where \(I\)=4). We adopt the Python library _scikit-learn_ to reduce the dimensionality of \(\mathbf{G}_{i}\) and \(\mathbf{G}^{{}^{\prime}}_{i}\), and the Python library _matplotlib_ to plot the distributions of \(\mathbf{G}_{i}\) and \(\mathbf{G}^{{}^{\prime}}_{i}\) in the two-dimensional space. Without loss of generality, five speakers (5-way) are randomly selected from the V2-set for demonstrating the distributions of their corresponding feature subsets. The distributions of \(\mathbf{G}_{i}\) and \(\mathbf{G}^{{}^{\prime}}_{i}\) in the two-dimensional space are depicted in Fig. 4, where \(1\leq i\leq 2\) for simplicity.
It can be seen from Fig. 4 (a) and Fig. 4 (b) that the distance between the feature subsets \(\mathbf{G}^{{}^{\prime}}_{i}\) of different speakers is greater than that between the feature subsets \(\mathbf{G}_{i}\) of different speakers. That is, compared to the feature subsets \(\mathbf{G}_{i}\) (without feature interaction), the feature subsets \(\mathbf{G}^{{}^{\prime}}_{i}\) (with feature interaction) are shifted away from the confusion region in the two-dimensional space to obtain more discriminative decision boundaries. Accordingly, the confusion between the five \(\mathbf{G}^{{}^{\prime}}_{i}\) are expected to be less than that between the five \(\mathbf{G}_{i}\). In other words, after transformation by the feature interaction block of the proposed embedding module, the representation ability of the feature subsets can be improved.
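A minimal sketch of this visualization step with scikit-learn and matplotlib (the `features` and `speaker_ids` arrays are assumed to hold the flattened feature subsets and their speaker labels):

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

def plot_tsne(features, speaker_ids, title):
    """Project feature subsets (one flattened row per sample) to 2-D with t-SNE
    and colour each point by its speaker, as in the qualitative analysis."""
    coords = TSNE(n_components=2, init="pca", random_state=0).fit_transform(features)
    for spk in np.unique(speaker_ids):
        mask = speaker_ids == spk
        plt.scatter(coords[mask, 0], coords[mask, 1], s=10, label=f"speaker {spk}")
    plt.legend()
    plt.title(title)
    plt.show()
```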
### _Comparison of Different Methods in Accuracy_
In this subsection, our method is compared to ten baseline FSSI methods by the episode-based strategy. These baseline methods are briefly described as follows.
The method in [44] was inspired by the success of MobileNet [41], where a portable MobileNet1D (MN1D) model was used for speaker recognition. The technique of group convolution (GC) [63] was proposed to reduce the complexity of neural networks. In the GC-based method for speaker identification, each input feature map was split into several sub-feature-maps to execute the convolution operation in each convolutional layer. Afterwards, the convolved sub-feature-maps were concatenated to obtain the output feature maps for further transformations. Lin et al. [54] designed an ECAPA-TDNN-Lite (ETL) model using knowledge distillation for lightweight speaker recognition. A prototypical network (PN) was proposed by Snell et al. [56] for few-shot classification. The method in [68] was a model-agnostic meta-learning (MAML) algorithm and was used for FSSI. Vinyals et al. [69] proposed a matching network (MN) for few-shot learning. Snyder et al. [14] proposed a TDNN framework to learn the X-vector for speaker recognition. Gao et al. [70] designed a multi-scale convolutional neural network, namely Res2Net, where multi-scale convolution instead of single-scale convolution was adopted to learn speaker embeddings. The technique of principal component analysis (PCA) was discussed in [71] for reducing the dimensionality of the input feature. In addition, a filter-based feature selection method (filter-based) [72] was employed for selecting dominant feature subsets from the original feature. In the Res2Net-based, PCA-based and filter-based methods, the input feature was processed by the modules of multi-scale convolution, dimensionality reduction and feature selection, respectively. Afterwards, the processed features in these three methods were fed into a back-end prototypical network for making decisions. Based on the descriptions above, the main technical merits of the different methods are listed in Table V.
As shown in Table VI, our method obtains accuracy scores of 92.89%, 78.31%, 88.87% and 68.88% on the V2-set when the values of \(N\)-way \(K\)-shot are set to 5-way 5-shot, 5-way 1-shot, 10-way 5-shot, and 10-way 1-shot, respectively. These accuracy scores are higher than the counterparts produced by the baseline methods. Hence, our method exceeds baseline methods in accuracy when evaluated on the V2-set. The same conclusions can be made from the results of Table VIII. When different methods are evaluated on the V1-set, as shown in Table VII, the accuracy scores obtained by our method are higher than the counterparts obtained by other methods, except the case of 10-way 1-shot. In summary, our method exceeds all baseline methods on three datasets across different settings of \(N\)-way \(K\)-shot, except the case of 10-way 1-shot on the V1-set. The advantage of our method in accuracy over the baseline methods mainly benefits from the designs of feature interaction operation and the RCB in the embedding module.
In addition, the accuracy scores obtained by different methods on the L-set are consistently higher than that on the V2-set and V1-set. The reason is probably that the background noise in the L-set is much lower than that in the V2-set and V1-set. Hence, the variations of time-frequency properties among the speech samples of the same speaker from the L-set are smaller than that among the speech samples of the same speaker from the V2-set and V1-set.
### _Comparison on Truncated Segments in Accuracy_
In this subsection, we compare the performance of different methods on the truncated testing segments with various lengths. Each truncated segment is generated by randomly splitting each testing speech sample (7 seconds) into speech segments with lengths of 1 second, 3 seconds or 5 seconds. These truncated segments in the V2-set, V1-set, and L-set, are adopted as testing data for evaluating the robustness of different methods to the length of truncated testing segments. In this experiment, we discuss the performance of different methods when the value of \(N\)-way \(K\)-shot is set to 5-way 5-shot without loss of generality. Table IX lists accuracy scores obtained by different methods on the testing segments with different durations, in which "whole" represents the length of the entire testing speech sample, namely 7 seconds.
Based on the accuracy scores obtained by different methods in Table IX, the following four observations can be obtained.
First, the accuracy scores obtained by all methods on all testing subsets constantly decrease with the decrease of the lengths of testing speech segments. Furthermore, the decrease of accuracy scores produced by our method is smaller than that obtained by most baseline methods. For example, when the lengths of speech segments in the V2-set decrease from 5 seconds to 1 second, the absolute reduction of the accuracy score achieved by our method is 5.66% (92.80% - 87.14%). This value (5.66%) is smaller than the counterparts produced by all baseline methods, except the filter based method (80.82% - 76.14% = 4.68%).
Second, the shorter the testing speech segment is, the smaller the accuracy scores obtained by different methods are. For
example, the accuracy scores obtained by our method decrease from 92.89% to 87.14% when the lengths of testing speech segments in the V2-set decrease from "whole" to 1 second. Similar results are obtained for the baseline methods when they are evaluated on different testing subsets.
Third, our method outperforms all baseline methods in accuracy when they are evaluated on truncated segments with different lengths. For example, our method obtains the highest accuracy score of 87.14% when evaluated on the speech segments with 1 second in the V2-set. However, the maximum of the corresponding accuracy scores obtained by the baseline methods is 81.44%. The same observations can be obtained when our method and baseline methods are evaluated on the speech segments with different lengths in all testing subsets.
Fourth, the shorter the length of testing speech segment is, the larger the accuracy margins between our method and most of the baseline methods are. For instance, the absolute margin of accuracy score between our method and the MAML based method is 28.29% (92.89% - 64.60%), when these two methods are evaluated on the speech segments with length of "whole" in the V2-set. However, the counterpart between these two methods becomes 32.44% (87.14% - 54.70%) when the speech segments with length of 1 second are adopted as testing data.
In conclusion, our proposed method still exceeds the baseline methods when they are assessed on the truncated testing segments with various lengths in terms of accuracy. In addition, our method is robust to the length of the truncated testing segments, since it still produces higher accuracy scores on the truncated testing segments. The reason is that the speaker embedding learned by the proposed embedding module can effectively represent both the global sequential information and the local spatial information. Accordingly, our proposed method generalizes well across truncated testing segments with different lengths instead of overfitting on the segments with single length.
### _Comparison of Different Methods in Complexity_
In this subsection, we measure the memory requirements of different methods using the metric of MS. In addition, computational complexities of different methods are measured by the metric of MACs when the lengths of speech segments are equal to 1 second, 3 seconds and 5 seconds. The values of MS and MACs of different methods are presented in Table X.
In terms of memory requirement, the MS of our proposed method is 54.14 kilo which is smaller than that of all baseline methods. In terms of computational complexity, the values of the MACs of our proposed method are 5.54 million, 16.63 million and 27.71 million on speech segments with 1 second, 3 seconds and 5 seconds, respectively. Moreover, the values of the MACs of our method are lower than the counterparts of all baseline methods, when they are evaluated on speech segments with different lengths.
In summary, our proposed method has advantage over all baseline methods in terms of both memory requirement and computational complexity. The advantages of our proposed method in these two aspects over all baseline methods mainly benefit from the designs of feature grouping and the DRC block in the proposed embedding module.
### _Generalization across Datasets_
In all experiments above, the training subset and testing subset are chosen from the same dataset. To evaluate the generalization performance of various methods across datasets, the training subset and the testing subset are from different datasets. That is, when the training subset is from a dataset (e.g., V2-set), the testing subset is from the remaining two datasets (e.g., V1-set and L-set). In this experiment, the value of \(N\)-way \(K\)-shot is set to 5-way 5-shot without loss of generality, and each input sample is a whole speech sample.
In the first row of Table XI, the item on the left side (e.g., "V2" in "V2\(\rightarrow\)V1") and the item on the right side (e.g., "V1" in "V2\(\rightarrow\)V1") of the arrow denote the training subset and the testing subset, respectively. The accuracy scores obtained by different methods across datasets are listed in Table XI.
Our proposed method obtains accuracy scores of 92.65%, 99.41%, 92.17%, 98.41%, 87.52%, and 88.32% when datasets are V2\(\rightarrow\)V1, V2\(\rightarrow\)L, V1\(\rightarrow\)V2, V1\(\rightarrow\)L, L\(\rightarrow\)V2, and L\(\rightarrow\)V1, respectively. These accuracy scores are higher than the counterparts obtained by baseline methods, except the cases of V1\(\rightarrow\)L and L\(\rightarrow\)V2. Hence, our proposed method still performs well when the training and testing subsets are from different datasets.
As given in the second row and second column of Tables VI, VII, and VIII, our proposed method obtains accuracy scores of 92.89%, 92.74%, and 98.51% when datasets are V2\(\rightarrow\)V2, V1\(\rightarrow\)V1, and L\(\rightarrow\)L (training and testing subsets from the same datasets), respectively. The accuracy score of 92.89% (V2\(\rightarrow\)V2) is higher than the accuracy score of 92.65% (V2\(\rightarrow\)V1) but is lower than the accuracy score of 99.41% (V2\(\rightarrow\)L). Similarly, the accuracy score of 92.74% (V1\(\rightarrow\)V1) is higher than the
accuracy scores of 92.17% (V1\(\rightarrow\)V2), but is lower than the accuracy score of 98.41% (V1\(\rightarrow\)L). However, the accuracy score of 98.51% (L\(\rightarrow\)L) is higher than the accuracy scores of 87.52% (L\(\rightarrow\)V2) and 88.32% (L\(\rightarrow\)V1). In short, our proposed method obtains better performance when the training subset and the testing subset (except the L-set) are from the same datasets. In addition, when the testing subset is the L-set, even if the training subset is different from the testing subset, our proposed method obtains higher accuracy scores. The reason is probably that the speech samples in the L-set are clean (without evident background noise). Accordingly, the distribution of time-frequency properties of speech samples in the L-set is relatively simpler and may overlap with that of speech samples in the V1-set and V2-set.
In summary, our proposed method generalizes well across datasets instead of overfitting on a single dataset.
### _Comparison of Different Methods for Speaker Verification_
In this subsection, we conduct an extended experiment comparing different methods on the three datasets for speaker verification. In this experiment, the value of _N_-way _K_-shot is set to 5-way 5-shot, and each feature is learned from a whole speech sample. The typical metric of equal error rate (EER) is adopted to measure the performance of all methods for speaker verification. The EER is defined as the operating point at which the false acceptance rate equals the false rejection rate. The lower the EER score, the better the performance of the method.
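A common way of computing the EER from verification scores is sketched below (assuming binary trial labels with higher scores for target trials; this is a generic recipe rather than the exact scoring script used here):

```python
import numpy as np
from sklearn.metrics import roc_curve

def equal_error_rate(labels, scores):
    """labels: 1 for target (same-speaker) trials, 0 for non-target trials;
    scores: higher means more likely to be the same speaker.
    Returns the rate at which false acceptance and false rejection rates meet."""
    fpr, tpr, _ = roc_curve(labels, scores)
    fnr = 1.0 - tpr
    idx = np.nanargmin(np.abs(fnr - fpr))        # operating point closest to FAR == FRR
    return (fpr[idx] + fnr[idx]) / 2.0
```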
Under the same conditions, the EER scores that are produced by different methods on the datasets of the V2-set, V1-set and L-set are given in Table XII. Our method obtains EER scores of 12.98%, 15.32% and 8.32% on the V2-set, V1-set and L-set, respectively. These EER scores are lower than the counterparts obtained by all baseline methods. In other words, our proposed method outperforms all baseline methods in terms of EER when they are evaluated on the datasets of the V2-set, V1-set and L-set. Therefore, the proposed embedding module for learning speaker embedding is also effective in the task of speaker verification.
## VI Conclusions
In this work, we have investigated a newly emerging problem of lightweight FSSI. Moreover, we have tackled this problem by designing a lightweight prototypical network with the operations of both feature grouping and feature interaction. Based on the description of our proposed method and the discussions of experimental results, we can draw the following three conclusions.
First, our proposed method generally outperforms the baseline methods in accuracy under many experimental conditions, such as variable \(N\)-way \(K\)-shot settings, various lengths of speech segments, and different datasets. Hence, our proposed method is a state-of-the-art method for the problem of lightweight FSSI.
Second, the proposed method has advantage over all baseline methods in terms of memory requirement and computational complexity. In addition, the minimum margins of the values of both MS and MACs between the proposed method and the baseline methods are 5.12 (59.26 - 54.14) kilo and 0.34 (5.88 - 5.54) million, respectively.
Third, we design a computationally-efficient embedding module which is used to learn speaker embedding with strong representational ability. The proposed embedding module acquires both global sequential information and local spatial information, and carries out the operations of both feature grouping and feature interaction. The acquisition of the two kinds of information above and the operation of feature interaction have enhanced the representation ability of the learned speaker embedding. The designs of the DRC block and the operation of feature grouping have been beneficial for reducing the computational complexity and model size of the proposed prototypical network. The extended experiments have further verified our proposed method's robustness to the length of truncated testing segments, the generalization across datasets, and the effectiveness for speaker verification.
Although our proposed method has obtained satisfactory results for lightweight FSSI, there is still room for improvement. For example, the BLSTM block in the RCB cannot be executed in parallel, which increases the computational cost of the proposed embedding module. In addition, we do not consider an attention mechanism in the proposed embedding module, which also limits the representational ability of the learned speaker embedding. In future work, we will optimize the architecture of the proposed network to further reduce the computational complexity and enhance the representational ability of the speaker embedding by introducing additional effective strategies or modules. Specifically, we will consider designing an attention block that can be executed in parallel to replace the BLSTM block. Other techniques, such as network quantization and linear transformations with higher computational efficiency, will be considered as well. Accordingly, the proposed network will become lighter and perform better when deployed on intelligent terminals with limited resources.
|
2309.08799 | SHAPNN: Shapley Value Regularized Tabular Neural Network | We present SHAPNN, a novel deep tabular data modeling architecture designed
for supervised learning. Our approach leverages Shapley values, a
well-established technique for explaining black-box models. Our neural network
is trained using standard backward propagation optimization methods, and is
regularized with realtime estimated Shapley values. Our method offers several
advantages, including the ability to provide valid explanations with no
computational overhead for data instances and datasets. Additionally,
prediction with explanation serves as a regularizer, which improves the model's
performance. Moreover, the regularized prediction enhances the model's
capability for continual learning. We evaluate our method on various publicly
available datasets and compare it with state-of-the-art deep neural network
models, demonstrating the superior performance of SHAPNN in terms of AUROC,
transparency, as well as robustness to streaming data. | Qisen Cheng, Shuhui Qu, Janghwan Lee | 2023-09-15T22:45:05Z | http://arxiv.org/abs/2309.08799v1 | # SHAPNN: Shapley Value Regularized Tabular Neural Network
###### Abstract
We present SHAPNN, a novel deep tabular data modeling architecture designed for supervised learning. Our approach leverages Shapley values, a well-established technique for explaining black-box models. Our neural network is trained using standard backward propagation optimization methods, and is regularized with real-time estimated Shapley values. Our method offers several advantages, including the ability to provide valid explanations with no computational overhead for data instances and datasets. Additionally, prediction with explanation serves as a regularizer, which improves the model's performance. Moreover, the regularized prediction enhances the model's capability for continual learning. We evaluate our method on various publicly available datasets and compare it with state-of-the-art deep neural network models, demonstrating the superior performance of SHAPNN in terms of AUROC, transparency, as well as robustness to streaming data.
## 1 Introduction
Tabular data is widely used in real-world applications like scientific analysis [Kehrer and Hauser (2012)], financial transactions [Andriosopoulos et al. (2019)], industrial planning [Hecklau et al. (2016)], etc. Tabular data are commonly presented in a structured and heterogeneous form [Borisov et al. (2022)], with data points or samples in rows, and features in columns, corresponding to particular dimensions of information.
In the past decade, machine learning algorithms have been used to efficiently analyze tabular data, with most research focusing on classification and regression tasks [Athmaja et al. (2017)]. Gradient-boosted decision trees (GBDT) [Chen and Guestrin (2016)] and its extensions, such as LightGBM [Ke et al. (2017)] and CatBoost [Dorogush et al. (2018)], have emerged as dominant methods. However, these methods have limitations in practice due to their data-specific learning paradigm [Arik and Pfister (2021)]. Firstly, gradient-based tree structures impede continual learning, which is crucial in situations where live data streams in. Secondly, these models are typically data-specific and must be learned in a fully supervised manner, which hinders their ability to fuse with other models and data modalities under different degrees of label availability [Ke et al. (2019)].
Recently, deep learning has been explored as an alternative to GBDT-based models for analyzing tabular data [Huang et al. (2020)]. DNN employs adaptable weights that can be gradually updated
to learn almost any mapping from inputs to targets, and it has proven to be effective and flexible in handling various types of data modalities. DNN models can also conveniently learn from and adapt to continuously streaming data [Ke et al. (2019)]. However, despite these promising features, DNN's performance on tabular data often falls short compared to that of GBDT-based methods [Gorishniy et al. (2021)]. Additionally, DNN models are often considered a "black box" approach, lacking transparency in how they transform input data into model outputs [Klambauer et al. (2017)]. Due to these limitations of both GBDT and DNN, there is no clear winner for tabular data tasks [Kadra et al. (2021), Shwartz-Ziv and Armon (2022)]. In comparison to GBDT-based models, DNN lacks two crucial capabilities, which degrade its performance on various tabular tasks: (1) the ability to effectively utilize the most informative features through the splitting mechanism based on information gain and (2) the capacity to progressively discover feature sets that lead to fine-grained enhancements through the boosting-based ensemble. We could contend that both capabilities contribute to evaluating feature utility and selecting relevant features during model training [Grinsztajn et al. (2022)].
In this study, we aim to address these challenges faced by current deep learning methods for tabular data. Our objective is to develop a DNN-based model that accomplishes the following goals: (i) achieves superior performance on tabular data tasks, (ii) provides quantitative explanations of model decisions, and (iii) facilitates effective continual learning. To achieve these goals, we introduce SHAPNN, which leverages the Shapley value as a bridge between GBDTs and DNNs. The Shapley value is a model-agnostic approach used for generating post-hoc explanations by quantifying the influence of each feature on predictions based on game theory. In SHAPNN, we incorporate Shapley value estimation into the DNN training process and use it as additional supervision to transfer feature evaluation and selection guidelines from GBDT-based priors. However, Shapley value estimation is time-consuming due to the exponentially growing number of feature subsets [Lundberg and Lee (2017)]. To overcome this obstacle, we utilize the recent FastSHAP framework (Jethani et al. (2021)) to efficiently estimate Shapley values and generate model predictions in a single forward propagation. Our approach also allows us to ensemble multiple prior models to provide comprehensive feature evaluation and selection guidelines. Moreover, at inference time, we utilize the estimated Shapley values to obtain feature-level explanations of how the model makes decisions. We extend the utilization of Shapley values to enhance the continual learning of DNNs by using them as proxies that memorize the mapping from features to predictions at a given time step. We can then use them to regulate the updating of models to achieve overall stability, eliminating the need for collecting and accessing all historical data during inference. Our extensive experiments demonstrate the effectiveness of the SHAPNN approach. Our contributions are threefold: 1) to the best of our knowledge, this is the first work to incorporate Shapley value estimation in DNN training for tabular data; 2) we demonstrate that the approach can improve overall stability in continual learning of DNNs; 3) the method can be applied to different backbone models, resulting in performance improvements and quantitative explanations in a single feedforward pass.
In this paper, our motivations are introduced in section 2. The background of Shapley values is presented in section 3. Our proposed methodology is shown in section 4. The experiment details and results are presented in section 5. Related work is shown in section 6. Section 7 concludes our paper.
## 2 An empirical study on tabular data
We provide an empirical example to explain our motivation for introducing Shapley-based regularization into deep neural network (DNN) training for tabular data. This example illustrates the shortcomings of using a Multilayer Perceptron (MLP) for feature evaluation and selection compared to a Gradient Boosting Decision Tree (GBDT) model. We compare the classification accuracy of LGBM (a GBDT-based model) and an MLP on a customized Iris dataset [Fisher (1936)], where we purposefully attach extra numerical features (columns) whose values are sampled from a uniform distribution. As demonstrated in Figure 1(a), we observe a significant decrease in the MLP's classification accuracy as the percentage of extra features increases. We further investigate the effect of each feature on model prediction by examining their Shapley values using KernelSHAP [Lundberg and Lee (2017)]. As shown in Figure 1(b), we observe that the extra features have a larger impact on the MLP's predictions, which explains its performance degradation. In contrast, the GBDT model almost completely disregards the extra features, and its performance remains stable even with the introduction of the new features.
The aforementioned example suggests a potential remedy for the comparatively weaker feature evaluation and selection ability of DNNs. As Shapley values provide a measure of the contribution of each feature, we can align the values obtained by DNNs with those obtained by GBDTs, in order to supervise the training process. This approach has the potential to enhance DNNs training by reducing the impact of irrelevant features and prioritizing the learning of useful ones.
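A minimal sketch of this kind of noisy-feature experiment (illustrative hyper-parameters and model settings, not the exact setup behind Figure 1):

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from lightgbm import LGBMClassifier

# Append uniformly distributed noise columns to Iris and compare a GBDT with an MLP.
X, y = load_iris(return_X_y=True)
rng = np.random.default_rng(0)
for extra_ratio in [0.0, 0.5, 1.0, 2.0]:                  # extra features vs. original count
    n_extra = int(extra_ratio * X.shape[1])
    noise = rng.uniform(X.min(), X.max(), size=(X.shape[0], n_extra))
    X_aug = np.hstack([X, noise]) if n_extra else X
    X_tr, X_te, y_tr, y_te = train_test_split(X_aug, y, test_size=0.3, random_state=0)
    gbdt = LGBMClassifier(n_estimators=100).fit(X_tr, y_tr)
    mlp = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=2000).fit(X_tr, y_tr)
    print(extra_ratio, gbdt.score(X_te, y_te), mlp.score(X_te, y_te))
```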
## 3 Background
### Shapley value
The Shapley value aims to distribute the gain and cost fairly among the players in a coalition game to achieve a desired outcome or payoff. In a coalition game, there are \(N\) players and a characteristic function \(v(S)\) that maps the subset of players \(S\in\{0,1\}^{N}\) to a real number, representing the expected sum of the payoffs of the subset S that can be obtained through cooperation. The Shapley value \(\phi(v)\in R^{N}\) distributes the gains to players. The contribution \(\phi_{i}(v)\) of player \(i\) is calculated:
\[\phi_{i}(v)=\frac{1}{N}\sum_{S\subseteq N\setminus i}\binom{N-1}{|S|}^{-1}\left(v(S\cup i)-v(S)\right) \tag{1}\]
In the context of machine learning explanation, the characteristic function shows how the prediction of a sample changes when different subsets of features are removed. More specifically, given a sample \((x,y)\in\mathcal{D}\) from dataset \(\mathcal{D}\), where \(x=(x_{1},...,x_{N})\) is the input vector, and \(y\in{1,...,K}\) is the output of \(K\) classes for a classification problem, the characteristic function \(v\) is defined as follows:
\[v_{x,y}(S)=E_{p(x_{1-S})}\left[\mathrm{Softmax}\left(f_{\theta}(x_{S},x_{1-S})\right)_{y}\right] \tag{2}\]
Here, \(f_{\theta}\) represents the machine learning model. The cost of exactly computing the Shapley value grows exponentially with the number of players (features) \(N\). Various approximation solutions have been proposed to improve efficiency [Lundberg and Lee (2017)]. Despite these, accurately estimating the Shapley value can still be extremely slow for large-scale and high-dimensional cases.
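For small \(N\), Eq. (1) can be evaluated exactly by enumerating all subsets, as in the sketch below (the characteristic function `value_fn` is assumed to be supplied by the caller, e.g., by evaluating the model with the out-of-subset features replaced by a reference value):

```python
from itertools import combinations
from math import comb

def exact_shapley(value_fn, n_features):
    """Exact Shapley values by brute-force enumeration (Eq. 1); feasible only for
    small N since the number of subsets grows as 2^N.
    value_fn: maps a tuple of feature indices (the subset S) to the value v(S)."""
    phi = [0.0] * n_features
    players = range(n_features)
    for i in players:
        others = [j for j in players if j != i]
        for size in range(len(others) + 1):
            for s in combinations(others, size):
                weight = 1.0 / (n_features * comb(n_features - 1, size))
                phi[i] += weight * (value_fn(tuple(sorted(s + (i,)))) - value_fn(s))
    return phi
```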
### FastSHAP
Due to the computational cost of Shapley value estimation, we adopt the FastSHAP approach introduced in [Jethani et al. (2021)] to perform amortized estimation of Shapley values. Specifically, we first learn a FastSHAP function \(\phi_{fast,\gamma}(x,y):X\times Y\to R^{N}\), with \(\gamma\) being the Shapley value generation model that maps each feature to a Shapley value. The function is learned in a single forward pass by penalizing predictions using the following loss:
\[\mathcal{L}_{\gamma}=E_{p(x)}E_{U(y)}E_{p(S)}[(v_{x,y}(S)-v_{x,y}(\emptyset)-S ^{T}\phi_{fast,\gamma}(x,y))^{2}] \tag{3}\]
Figure 1: Comparison between LGBM and MLP under the impact of noisy features. Panel (a) shows the prediction accuracy of the two models for different amounts of noisy features; panel (b) shows the Shapley values of each feature for both models.
where \(S\) denotes a subset of features, \(p(S)\propto\frac{N-1}{\binom{N}{\mathbb{1}^{T}S}\,\mathbb{1}^{T}S\,(N-\mathbb{1}^{T}S)}\) is the Shapley kernel distribution over subsets, and \(U(y)\) denotes the uniform distribution over the \(K\) classes. To further improve training efficiency, we use additive efficient normalization to obtain the Shapley value estimation function \(\phi_{fast,\gamma}^{eff}(x,y)\):
\[\phi_{fast,\gamma}^{eff}(x,y)=\phi_{fast,\gamma}(x,y)+\frac{1}{N}\left(v_{x,y}(\mathbb{1})-v_{x,y}(\emptyset)-\mathbb{1}^{T}\phi_{fast,\gamma}(x,y)\right) \tag{4}\]
Here, \(\phi_{fast,\gamma}(x,y)\) denotes the original FastSHAP function, and \(v_{x,y}(\mathbb{1})\) and \(v_{x,y}(\emptyset)\) are the values of the characteristic function with all features included and with no features included, respectively. FastSHAP consists of three steps: 1) train the machine learning model \(f_{\theta}(x)\to y\) to be explained; 2) train a surrogate model \(f_{surr,\beta}(x,m)\to y\) that approximates the original prediction model under a masking function \(m(x,S)\), which replaces each feature \(x_{i}\) not in the support of \(S\) with a default value; 3) train the Shapley value generation model \(\phi_{\gamma}(x)\to v_{x}(S)\).
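A minimal PyTorch sketch of one evaluation of the FastSHAP objective in Eqs. (3)-(4) (it assumes the sampled subset masks and the surrogate-model values for the subset, the empty set and the full feature set have already been computed):

```python
import torch

def fastshap_loss(phi, s_mask, v_s, v_empty, v_full):
    """phi:     (B, N) estimated Shapley values for the sampled class.
    s_mask:  (B, N) binary subsets S drawn from the Shapley kernel p(S).
    v_s, v_empty, v_full: (B,) characteristic-function values for S, the empty
    set and the full feature set, obtained from the surrogate model."""
    n = phi.size(1)
    # Additive efficient normalization (Eq. 4): estimates sum to v(1) - v(0).
    gap = v_full - v_empty - phi.sum(dim=1)
    phi = phi + gap.unsqueeze(1) / n
    # Squared-error objective of Eq. 3.
    pred = (s_mask * phi).sum(dim=1)
    return ((v_s - v_empty - pred) ** 2).mean()
```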
## 4 Methodology
### SHAPNN
This section presents the Shapley-based Neural Network (SHAPNN), which is built upon FastSHAP. By utilizing estimated Shapley values as intermediate features, SHAPNN is designed to construct a machine-learning prediction model that achieves both high prediction accuracy and interpretability. Both model predictions and Shapley value estimations are obtained in a single forward pass.
The neural network serves as the foundation for (1) the Shapley value generation model \(\phi_{\gamma}(x)\in R^{N\times K}\), which takes the input feature vector and generates the Shapley value vector for each possible class, and (2) the surrogate model \(f_{\beta}(x,S)\to y\), which takes the input feature vector and support \(S\) to produce the predicted label.
The Concat SHAPNN \(f_{w}(x)\) is constructed by incorporating the estimated Shapley value \(v_{x}(S)\) as part of the input feature to the prediction model \(f_{w^{\prime}}:f_{w^{\prime}}(f_{w\setminus w^{\prime}}(x),v_{x}(S))\to y\), where \(v_{x}(S)=\phi_{\gamma}(x)\in R^{N\times K}\) represents the estimated Shapley value. The Concat SHAPNN's loss function (\(\mathcal{L}\)) is:
\[\mathcal{L}=\mathcal{L}_{\gamma}+CE(y^{\prime},y) \tag{5}\]
Here, \(CE\) denotes the cross-entropy loss function for classification.
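The wiring described above can be sketched as follows (an illustrative PyTorch sketch: the backbone, layer sizes and the exact concatenation point are assumptions rather than the exact architecture):

```python
import torch
import torch.nn as nn

class ConcatSHAPNN(nn.Module):
    """A Shapley-estimation head produces per-feature, per-class values that are
    concatenated with the backbone representation before the prediction head."""

    def __init__(self, n_features, n_classes, hidden=512):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(n_features, hidden), nn.ReLU(),
                                      nn.Linear(hidden, hidden), nn.ReLU())
        self.shap_head = nn.Linear(hidden, n_features * n_classes)    # phi_gamma(x)
        self.pred_head = nn.Linear(hidden + n_features * n_classes, n_classes)

    def forward(self, x):
        h = self.backbone(x)
        shap = self.shap_head(h)                            # estimated Shapley values
        logits = self.pred_head(torch.cat([h, shap], 1))    # prediction uses them as input
        return logits, shap.view(-1, x.size(1), self.pred_head.out_features)
```

The overall loss of Eq. (5) would then combine a cross-entropy term on the logits with the Shapley-estimation loss computed on the reshaped per-feature, per-class estimates.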
### Ensemble prior
The SHAPNN with ensemble prior is developed by aligning estimated Shapley values to a series of GBDT models, such as an ensemble prior that combines Xgboost and LightGBM. These Shapley value
Figure 2: The SHAPNN framework involves generating Surrogate Models from prior models. During the training of the DNN, the input data \(X\) is perturbed, and the Shapley values for the perturbed dimensions are estimated in an intermediate block. The original data and the estimated Shapley values are then passed to the prediction block for classification.
estimations are then integrated into the input feature for the prediction model, \(f_{w^{\prime}}(f_{w\setminus w^{\prime}},v_{x}(S))\to y^{\prime}\), where \(v_{x}(S)=\phi_{\gamma}(x)\in R^{N\times K}\) is the estimated Shapley value. The overall SHAPNN loss function is defined as:
\[\mathcal{L}=agg_{k}(\mathcal{L}_{\gamma}^{k})+CE(y^{\prime},y) \tag{6}\]
where the aggregation operator, \(agg\), combines the losses from each prior model of the ensemble, indexed by \(k\). In practice, we use a weighted sum for aggregation. This design of the SHAPNN enables explainability while also achieving higher performance.
### Continual learning
The concept of continual learning can be framed as follows: given a data stream composed of a series of data batches \(x^{t}\), indexed by \(t\in[0,1,...,T]\), and a model \(f_{w}(x)\to y\) that is sequentially trained on each data batch \(x^{t}\) and recorded at each time step as \(f_{w}^{t}\), the task is to make two predictions at each time step. Firstly, using the most recent recorded model (\(f_{w}^{t-1}\)), we make predictions (\(\hat{y^{t}}\)) on the current data batch \(x^{t}\). Note that this batch of data is not available for model training before making the prediction. Secondly, we make backward predictions (\(y^{t-1}\)) on data batches that precede \(t-1\) using \(f_{w}^{t-1}\). Our aim is to ensure that both \(\hat{y^{t}}\) and \(y^{t-1}\) are accurate predictions of their respective true labels \(y\).
During each time step \(t\), the model is trained using a combination of model prediction loss and Shapley estimation regularization, as described in previous sections. To ensure the model remains robust to concept drift, we generate pseudo labels for time step \(t\) by applying mixup (Zhang et al. (2017)) between the true label and all predictions from surrogate models of previous steps. This involves combining the true label (\(y^{t}\)) with a weighted average of the predictions (\(f_{w}^{i}(x^{t})\)) from previous steps \(i\in\{1,...,t-1\}\), where the weight is controlled by a parameter \(\alpha\):
\[\widetilde{y^{t}}=\alpha\cdot y^{t}+(1-\alpha)\cdot\sum_{i}^{t-1}f_{w}^{i}(x^ {t}) \tag{7}\]
To ensure stable feature selection and evaluation during continual learning, we also extend the regularization by including all the explanation models \(\gamma_{t}\) from previous time steps. Thus, the model at time step \(t\) is trained using the following loss:
\[\mathcal{L}^{t}=\sum_{i}^{t-1}\lambda^{i}\cdot\mathcal{L}_{\gamma}^{i}+CE( \hat{y^{t}},\widetilde{y^{t}}) \tag{8}\]
where \(\lambda_{i}\) is a discount factor of the losses from each time step, and \(\sum_{i}^{t-1}\lambda^{i}=1\). In practice, we use a decaying schedule that emphasizes recent steps and reduces the effect of distant steps.
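A minimal PyTorch sketch of this objective (Eqs. (7)-(8)), assuming the frozen surrogate predictions and the per-step Shapley regularization losses are available; the decaying schedule below is an illustrative choice:

```python
import torch
import torch.nn.functional as F

def continual_loss(logits, y_true, past_probs, shap_losses, alpha=0.8, decay=0.5):
    """logits:      (B, K) current-model outputs for batch x^t.
    y_true:      (B,) integer labels of x^t.
    past_probs:  list of (B, K) probability outputs of frozen surrogate models
                 from previous time steps.
    shap_losses: list of scalar Shapley-regularization losses L_gamma^i, oldest first."""
    y_onehot = F.one_hot(y_true, logits.size(1)).float()
    soft = alpha * y_onehot
    if past_probs:                                          # Eq. 7: mixup-style pseudo label
        soft = soft + (1 - alpha) * torch.stack(past_probs).sum(0)
    ce = -(soft * F.log_softmax(logits, dim=1)).sum(1).mean()
    reg = 0.0
    if shap_losses:                                         # Eq. 8: decaying weights, sum to 1
        w = torch.tensor([decay ** (len(shap_losses) - 1 - i) for i in range(len(shap_losses))])
        w = w / w.sum()
        reg = sum(wi * li for wi, li in zip(w, shap_losses))
    return ce + reg
```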
## 5 Experiments
### Implementation and setup
To evaluate the generalizability of our SHAPNN approach, we conducted our experiments on two popular DNN models for processing tabular data: Multi-Layer Perceptron (Kadra et al. (2021)) and recently published FT-Transformer (Gorishniy et al. (2021)), which has demonstrated state-of-the-art performance on various tabular datasets. The MLP has 3 hidden layers, each containing 512 neurons, while the FT-Transformer's hyperparameter follows (Gorishniy et al. (2021)). The Shapley estimation block for both implementations consists of a 2-layer MLP with an output dimension equal to the number of features in each dataset. The prediction layer is a linear projection layer without nonlinear activation functions. We employed the standard Stochastic Gradient Descent (SGD) optimizer and followed the hyper-parameter settings outlined in (Gorishniy et al. (2021)), including the learning rate selection. More detail is shown in Appendix.
### Tabular data analysis and datasets
We conducted experiments on several well-known benchmark datasets, including: 1) the Adult Income dataset (Kohavi et al. (1996)), which comprises 48842 instances of adult income data with
14 attributes; 2) the Electricity dataset [Hoiem et al. (2009)], which contains 45312 instances of electricity consumption with 8 real-valued attributes; 3) the Iris dataset [Fisher (1936)], consisting of 3 types of Iris flowers, each with 50 samples; 4) the Epsilon dataset [Blackard and Dean (1999)], comprising 40000 objects with 2001 columns of simulated experiments; and 5) the Covertype dataset [Hulten et al. (2001)], which includes 581012 instances of tree samples, each with 54 attributes. We specifically chose the Epsilon and Covertype datasets for their higher dimensionality, which allowed us to demonstrate the efficiency and scalability of our method. The evaluation metric used for all analyses in this section is the Area Under the Receiver Operating Characteristic curve (AUROC). We chose this metric to ensure a fair comparison and to account for label imbalance bias.
### Model prediction results
Table 1 shows that our SHAPNN approach applied to MLP consistently improves performance over the vanilla MLP baseline on all tabular data benchmarks. The magnitude of improvement appears to be associated with the difficulty of the datasets. On the challenging Adult Income dataset, which has missing values and mixed data types in its features (Shwartz-Ziv and Armon (2022)), we achieve an improvement in AUROC of 1.3%. We observe a 0.6% increase in AUROC over the original 94.6% on the Iris dataset, which has the smallest size and fewest features among the five datasets.
Table 1 also shows the test results on the FT-Transformer backbone, where we also observe improvements over the baseline model on all 5 test cases. Notably, FT-Transformer is a stronger baseline compared to MLP, potentially due to its attention mechanism that effectively weighs the features based on their pairwise correlation. Nevertheless, our approach still benefits FT-Transformer by enhancing feature evaluation and selection. Additionally, we compare the performance of two widely used models, Logistic Regression (LR) and Random Forest (RF), for tabular classification tasks to further evaluate our FT-Transformer's performance. The results show that FT-Transformer's performance is comparable to, or better than, that of LR and RF.
\begin{table}
\begin{tabular}{l l c c c c c} \hline \hline \multicolumn{2}{l}{Datasets} & Adult & Electricity & Iris & Epsilon & Covertype \\ \hline \multirow{6}{*}{Models} & Logistic Regression & 0.793 & 0.774 & 0.935 & 0.854 & 0.945 \\ & Random Forest & 0.837 & 0.822 & 0.959 & 0.892 & 0.957 \\ \cline{2-7} & MLP & 0.839 & 0.790 & 0.946 & 0.883 & 0.955 \\ & **SHAPNN (MLP)** & **0.852** & **0.818** & **0.952** & **0.892** & **0.961** \\ \cline{2-7} & FT-Transformer & 0.849 & 0.824 & 0.954 & 0.890 & 0.960 \\ & **SHAPNN (FT-Transformer)** & **0.857** & **0.835** & **0.957** & **0.894** & **0.969** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Prediction Results (AUROC)
\begin{table}
\begin{tabular}{l l c c c c c} \hline \hline \multicolumn{2}{l}{Dataset} & Adult & Electricity & Iris & Epsilon & Covertype \\ \hline \multirow{2}{*}{Models} & SHAPNN (single prior) & 0.849 & 0.807 & 0.952 & 0.889 & 0.961 \\ & SHAPNN (ensemble prior) & **0.852** & **0.818** & 0.952 & **0.892** & 0.961 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Performance comparison between single and ensemble prior (AUROC)
\begin{table}
\begin{tabular}{l c c} \hline \hline Dataset & \multicolumn{2}{c}{Models} \\ \cline{2-3} & SHAPNN & KernelSHAP \\ \hline Epsilon & **4.7 s** & 34.9 s \\ Covertype & **0.8 s** & 5.2 s \\ \hline \hline \end{tabular}
\end{table}
Table 3: Inference speed comparison
#### 5.3.1 Single prior vs. ensemble priors
The performance comparison between a DNN trained with a single prior model and an ensemble of prior models is presented in Table 2. The results show that on 3 of the 5 datasets, including the more challenging Adult Income and Electricity datasets, using ensemble priors leads to better performance compared to using a single prior. However, on the Iris and Covertype datasets, where the original performance is already high, the performance of using ensemble priors is the same as using a single prior. The observed improvement in performance may be attributed to the ensemble priors providing a more comprehensive evaluation of features compared to a single prior.
### Model explanation results
Figures 3(a) and 3(b) illustrate SHAPNN's ability to provide quantitative explanations at the sample-wise and population-wise levels, respectively, using the Adult Income dataset as an example. For each type of explanation, SHAPNN presents the impact of each feature on the model prediction, along with its magnitude and polarity.
In sample-wise explanations, the magnitude indicates the importance of each feature, while the polarity reflects the direction in which a feature influences the model prediction for a particular sample. For example, the education length feature seems to be important for predicting personal income, with a positive contribution to high earners and a negative contribution to low earners. Notably, negative class samples (i.e., low earners) are associated with features of overwhelmingly negative impacts, while positive class samples have more diverse feature influences.
Similarly, population-wise explanations demonstrate the general relationship between feature values and their influence within a given population. In this example, relationship and marital status are identified as two important factors. We can interpret from the plot that not being in a relationship or being married almost always contributes positively to earning status, whereas the influence is more diverse for opposite conditions. It is also worth mentioning that only a few features have high Shapley values, which could be an effect of the proposed regularization.
To evaluate the efficiency of our method in generating explanations, we conducted a wall-clock experiment comparing the inference time consumed by SHAPNN and KernelSHAP Lundberg and Lee (2017) for generating sample-wise explanations. We tested the Covertype and Epsilon datasets due to their relatively higher dimensionality. We report the average inference time over 100 randomly sampled data points in Table 3. Our method provides a 7-8X speedup over KernelSHAP.
Figure 3: Explanation examples of Adult Income Dataset. (a) shows the sample-wise explanation examples, and (b) gives the population-wise explanation examples.
### Continual learning analysis
We further analyze the ability of SHAPNN to handle streaming data through the continual learning framework. Continual learning presents two conflicting challenges (De Lange et al. (2021)): the model should quickly adapt to incoming data that often exhibits concept drift, but it should not forget the knowledge learned from previous data and become biased towards the newest data. To comprehensively evaluate the model's performance in both aspects, we conduct both online adaptation and retrospective tests.
We use three synthetic streaming datasets with controlled levels of concept drift for this analysis: STA dataset Gama et al. (2004), SEA dataset Street and Kim (2001), and ROT dataset Hulten et al. (2001). In all three datasets, the mapping between features and predictors changes over time with different concept drifts defined by certain functions. Recurring and abrupt concept drift is introduced into each time window by randomly shuffling the parameter of the functions. The function definitions can be found in Appendix. These datasets pose a significant challenge to the model.
#### 5.5.1 Online adaptation
For all the datasets that follow, we conduct an adaptation test by assuming that only the most recent data is available for re-training. This means that we test the model on each time step \(t\) after updating it with the most recent data (i.e., data batch \(t-1\)). We compare two scenarios: one with SHAPNN and one without SHAPNN, using MLP as the backbone model (see Appendix) in both cases.
Figures 4(a) to 4(c) depict the online adaptation results on these streaming datasets. The comparison between the baseline case and the SHAPNN approach reveals that the latter provides much more stable performance across all time steps. The fluctuations are reduced, and the average performance is substantially higher. These results demonstrate SHAPNN's capability for online adaptation to streaming data.
#### 5.5.2 Retrospective test
For this test, we update the MLP model (see Appendix) using the same approach as in the online adaptation test. We assess the model's performance by predicting on the historical data batches it was previously trained on and report the average AUROC over all past time steps.
The retrospective testing outcomes are displayed in Table 4. The test outcomes are reported at timesteps 10 and 50. Since no historical data is used in model retraining, the MLP baseline model performs poorly on previous data batches after updating its weights at the evaluation time step. At timestep 50, the MLP model barely outperforms random guessing, which clearly indicates the catastrophic forgetting issue. The model's weights are biased toward the latest data and lose previously learned concepts. On the other hand, SHAPNN consistently maintains a higher model
Figure 4: Online adaptation performance for (a) STA, (b) SEA, and (c) ROT datasets.
performance on previous data batches, which shows the efficacy of SHAPNN in mitigating the catastrophic forgetting issue.
## 6 Related work
**Neural networks for tabular data** Several approaches have been proposed to enhance the performance of tree-based models for analyzing tabular data, either by extending them with deep learning techniques or by designing new neural architectures Borisov et al. (2022). Two main categories of model architectures have emerged from these efforts: differentiable trees and attention-based models. For instance, TabNet leverages sequential attention to perform feature selection and learning Arik and Pfister (2021), while NODE uses an ensemble of shallow neural nets connected in a tree fashion Popov et al. (2019). Another example is Net-NDF, which utilizes disjunctive normal neural form blocks to achieve feature splitting and selection Katzir et al. (2020). More recently, researchers have explored applying transformer-based models to tabular data, with TabTransformer being the first attempt to do so Huang et al. (2020). This approach has been further improved upon in SAINT, which introduced additional row-wise attention Sompenalli et al. (2021). The state-of-the-art method in this category is the Feature-tokenizer Transformer, which enhances the learning of embedding from tabular data with a tailored tokenizer Gorishniy et al. (2021).
**Interpretable machine learning** The importance of generating interpretable tabular neural networks has gained increasing attention in recent years, particularly for critical applications where explanations are essential (Sahakyan et al. (2021)). Existing work in this area often relies on attention-based mechanisms to generate feature-level explanations (Konstantinov and Utkin (2022)). Another line of research involves using model-agnostic approaches to explain trained models, such as KernelSHAP and its extensions (Lundberg and Lee (2017); Covert and Lee (2021)). While most Shapley-based explanations are performed post-hoc, Wang et al. (2021) proposed a Shapley Explanation Network that incorporates Shapley values during training by adding extra Shapley value estimation modules to the neural net. In contrast, our approach uses amortized estimation to generate and leverage Shapley-based representations, which largely reduces the complexity of incorporating Shapley values.
**Continual learning** Concept drift handling and adapting to new data after model training have been extensively discussed and explored even before the advent of deep learning (Widmer and Kubat (1996); Gama et al. (2014)). Typically, existing work relies on collectively re-training a new model on the aggregated historical data. With deep learning, this concept has been extended to continual learning, which focuses on learning new tasks while preventing the model from forgetting what has been learned on old tasks (Chen and Liu (2018)). As summarized in (De Lange et al. (2021)), prior work has introduced more regularization terms during training (Aljundi et al. (2018); Zhang et al. (2020)), learned separate sets of parameters for different tasks (Aljundi et al. (2017); Rosenfeld and Tsotsos (2018)), or retained sampled historical data in a memory buffer to compensate for new task data during re-training (Rolnick et al. (2019); Lopez-Paz and Ranzato (2017)). For instance, ASER (Shim et al. (2021)) leverages Shapley values to adversarially select buffered data samples for effective re-training. In a similar vein, we also utilize Shapley values for continual learning. However, unlike ASER, we directly leverage the Shapley value estimators of past models as a medium for retaining knowledge from past training without accessing any historical data. Since the Shapley value estimators already contain the information on the mapping between features and predictions, we use them to regularize the parameter updating.
\begin{table}
\begin{tabular}{l l c c c c c c} \hline \hline Dataset & \multicolumn{2}{c}{STA} & \multicolumn{2}{c}{SEA} & \multicolumn{2}{c}{ROT} \\ \hline Timestep & 10 & 50 & 10 & 50 & 10 & 50 \\ \hline \multirow{2}{*}{Models} & MLP & 0.647 & 0.493 & 0.627 & 0.563 & 0.692 & 0.583 \\ & SHAPNN (MLP) & **0.715** & **0.673** & **0.902** & **0.757** & **0.881** & **0.785** \\ \hline \hline \end{tabular}
\end{table}
Table 4: Retrospective testing results (AUROC)
## 7 Conclusion, Broader Impact, Limitations, LLM Statement
We introduce SHAPNN, a new deep-learning architecture for supervised learning tasks on tabular data. The neural network incorporates a real-time Shapley value estimation module, which is trained through standard backward propagation. The estimation module provides enhanced regularization for model training that leads to performance improvements and enables valid explanations at no extra computational cost. Furthermore, the Shapley-based regularization improves the ability to perform continual learning. We extensively evaluate SHAPNN on publicly available datasets and compare it to state-of-the-art deep learning models, demonstrating its superior performance. We also show that SHAPNN is effective in continual learning, adapting to concept drifts and being robust to noisy data.
Our work could potentially facilitate general data analysis and improve the transparency and trustworthiness of AI. Some limitations of our method include: 1) prior models need to be trained separately, ahead of the training of the neural network; 2) our model may have an upper limit on its capacity to adapt to new concepts or drifts. In this paper, we used an LLM to correct grammatical mistakes.
|
2310.20176 | Predicting Astrometric Microlensing Events from Gaia DR3 | Currently astrometric microlensing is the only tool that can directly measure
the mass of a single star, it can also help us to detect compact objects like
isolated neutron stars and black holes. The number of microlensing events that
are being predicted and reported is increasing. In the paper, the potential
lens stars are selected from three types of stars, high-proper-motion stars,
nearby stars and high-mass stars. For each potential lens star, we select a
larger search scope to find possible matching sources to avoid missing events
as much as possible. Using Gaia DR3 data, we predict 4500 astrometric
microlensing events with signal>0.1mas that occur between J2010.0 and J2070.0,
where 1664 events are different from those found previously. There are 293 lens
stars that can cause two or more events, where 5 lens stars can cause more than
50 events. We find that 116 events have the distance of background stars from
the proper motion path of lens stars more than 8 arcsec in the reference epoch,
where the maximum distance is 16.6 arcsec, so the cone search method of
expanding the search range of sources for each potential lens star can reduce
the possibility of missing events. | Jie Su, Jiancheng Wang, Yigong Zhang, Xiangming Cheng, Lei Yang | 2023-10-31T04:55:58Z | http://arxiv.org/abs/2310.20176v1 | # Predicting Astrometric Microlensing Events from Gaia DR3
###### Abstract
Currently astrometric microlensing is the only tool that can directly measure the mass of a single star, it can also help us to detect compact objects like isolated neutron stars and black holes. The number of microlensing events that are being predicted and reported is increasing. In the paper, the potential lens stars are selected from three types of stars, high-proper-motion stars, nearby stars and high-mass stars. For each potential lens star, we select a larger search scope to find possible matching sources to avoid missing events as much as possible. Using Gaia DR3 data, we predict 4500 astrometric microlensing events with \(\delta\theta_{*}>0.1mas\) that occur between J2010.0 and J2070.0, where 1664 events are different from those found previously. There are 293 lens stars that can cause two or more events, where 5 lens stars can cause more than 50 events. We find that 116 events have the distance of background stars from the proper motion path of lens stars more than 8\({}^{\prime\prime}\) in the reference epoch, where the maximum distance is 16.6\({}^{\prime\prime}\), so the cone search method of expanding the search range of sources for each potential lens star can reduce the possibility of missing events.
keywords: astrometry - gravitational lensing: micro - methods: data analysis
## 1 Introduction
Gravitational lensing describes the deflection and magnification of background sources when a massive object (lens) passes in front of them. When the lens is a stellar-mass object, the deflection is less than a few milliarcseconds (mas) and the effect is referred to as microlensing. Microlensing describes the positional deflection (astrometric microlensing) and magnification (photometric microlensing) of a background source over time (Paczynski, 1986; Hog et al., 1995; Miyamoto & Yoshii, 1995; Walker, 1995). Photometric microlensing has been investigated by surveys such as the Optical Gravitational Lensing Experiment (OGLE) (Udalski, 2003) or the Microlensing Observations in Astrophysics (MOA) (Bond et al., 2001), whereas astrometric microlensing was detected for the first time only recently (Sahu et al., 2017; Zurlo et al., 2018; McGill et al., 2023).
Astrometric microlensing provides the possibility to directly measure the mass of a single star (Paczynski, 1995; Kains et al., 2017); it can also detect faint and compact lenses such as isolated neutron stars and black holes, because the luminosity of the lens does not need to be measured. Recently, an isolated stellar-mass black hole was detected by Sahu et al. (2022) through astrometric microlensing based on Hubble Space Telescope (HST) astrometry and ground-based photometry, and Lam et al. (2022) proposed that the lens of the microlensing event they analysed is a compact object.
Microlensing events are intrinsically rare occurrences that depend on close alignments of the source and lens stars; for example, the all-sky averaged value of the astrometric optical depth is \(2.5\times 10^{-5}\) (Belokurov & Evans, 2002). Predicting when and where they will occur is therefore highly advantageous for the collection of data throughout an event.
Gaia data have been proven to be ideal for predicting astrometric microlensing events (Gaia Collaboration et al., 2016), and about 25 000 sources will have a significant variation of the centroid shift during the Gaia mission period (Belokurov & Evans, 2002). Using Gaia DR1 (Gaia Data Release 1), McGill et al. (2018) predicted an astrometric microlensing event with the white dwarf LAWD 37 as the lens star, and recently McGill et al. (2023) measured the astrometric deflection of the background source and obtained the mass of the lens star LAWD 37. After Gaia DR2 (Gaia Data Release 2) was released, many research works were carried out. Bramich & Nielsen (2018) and Bramich (2018) predicted the microlensing events between 25th July 2014 and the end of the century. In addition, McGill et al. (2019) used the data of Gaia DR2 and the Vista Variables in the Via Lactea Infrared Astrometric Catalog (Smith et al., 2018) to predict two astrometric microlensing events, and McGill et al. (2019) searched for two upcoming photometric microlensing events. Mustill et al. (2018) predicted the photometric microlensing events between 2015.5 and 2035.5. Ofek (2018) searched for astrometric microlensing events between pulsars and stars in Gaia DR2. Kluter et al. (2018, 2018) predicted 3914 astrometric microlensing events caused by 2875 different lenses between 2010 and 2065. In 2020, Gaia Early Data Release 3 (Gaia eDR3) was released, and Kluter et al. (2022) updated their prediction results and added 1758 new microlensing events between 2010 and 2066. In addition, Luberto et al. (2022) searched for astrometric microlensing events by nearby brown dwarfs, but this work
did not reveal any upcoming microlensing events. Wyrzykowski et al. (2022) identified 363 photometric microlensing events in Gaia Data Release 3 (Gaia DR3) covering the years of 2014 - 2017 in all over the sky, and Jablonska et al. (2022) found that one of the events discovered by Wyrzykowski et al. (2022) may lead to a measurable astrometric signal.
In June 2022, Gaia Data Release 3 (Gaia DR3) was released, and its astrometry and broad-band photometry content are the same as those of Gaia eDR3. Gaia DR3 contains 585 million sources with five-parameter astrometry (two positions, the parallax, and two proper motion components), and about 882 million sources with six-parameter astrometry, including an additional pseudocolour parameter given by Collaboration et al. (2023). Gaia DR3 also released many new data products, such as the astrophysical parameters (mass and age) of 128 million stars (Creevey et al., 2023; Fouesneau et al., 2023), and the mass estimates of some lens stars used in this paper.
In this paper, we search for microlensing events using Gaia DR3, and potential lens stars are selected from the three types of stars, high-proper-motion stars (HPMS), nearby stars (NS) and high-mass stars (HMS). It should be noted that NS do not contain HPMS.
In Section 2, we briefly outline the theoretical background for astrometric microlensing. In Section 3, we present the lens and background star catalogs used in the paper. In Section 4, we detail the methods of searching for background star matched with lens star, calculate the source-lens closest approach, and estimate lens masses. We present our results in Section 5. In Section 6, we give summary and conclusions.
## 2 Theoretical background
### Astrometric Microlensing Signals
The theory of astrometric microlensing is described in detail in the literature (Belokurov & Evans, 2002; Dominik & Sahu, 2000; Paczynski, 1996) and we briefly introduce important concepts and equations directly relevant to this paper.
When the background source (S), the lens (L), and the observer are perfectly collinear, the lensed image of the source will form a so-called Einstein ring. The characteristic size of this ring is given by the Einstein radius as
\[\theta_{\rm E}=\sqrt{\frac{4GM_{L}}{c^{2}}\,\frac{D_{S}-D_{L}}{D_{L}D_{S}}}, \tag{1}\]
where \(G\) is the gravitational constant, \(c\) is the speed of light, \(M_{L}\) is the mass of the lens, \(D_{S}\) and \(D_{L}\) are the distances between the observer and the background source or the lens.
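As a quick numerical check of equation (1), the sketch below evaluates \(\theta_{E}\) with astropy units; the function name and the example values (a \(\sim 0.65M_{\odot}\) white dwarf at 20 pc in front of a source at 2 kpc) are illustrative.

```python
import numpy as np
from astropy import units as u
from astropy import constants as const

def einstein_radius(M_L, D_L, D_S):
    """Angular Einstein radius of Eq. (1), returned in milliarcseconds."""
    ratio = (4 * const.G * M_L / const.c**2 * (D_S - D_L) / (D_L * D_S)).decompose()
    return (np.sqrt(ratio) * u.rad).to(u.mas)

# e.g. a ~0.65 Msun white dwarf at 20 pc lensing a source at 2 kpc
# gives theta_E of order 10 mas:
theta_E = einstein_radius(0.65 * u.Msun, 20 * u.pc, 2 * u.kpc)
```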
The angular position vector \(\mathbf{\varphi}\) on the celestial sphere for star is given:
\[\mathbf{\varphi}=\left(\begin{array}{c}\alpha_{0}\\ \delta_{0}\end{array}\right)+\left(\begin{array}{c}\mu_{\alpha^{*}}/\cos \delta_{0}\\ \mu_{\delta}\end{array}\right)\cdot\left(t-t_{ref}\right)+\mathbf{\varpi}\cdot\mathbf{P }(t), \tag{2}\]
where \(\mathbf{P}(t)\) is expressed by
\[\mathbf{P}(t)=\left(\begin{array}{c}\left[X(t)\sin\alpha_{0}-Y(t)\cos\alpha_{0 }\right]/\cos\delta_{0}\\ X(t)\cos\alpha_{0}\sin\delta_{0}+Y(t)\sin\alpha_{0}\sin\delta_{0}-Z(t)\cos \delta_{0}\end{array}\right), \tag{3}\]
and \(\alpha_{0}\), \(\delta_{0}\), \(\mu_{\alpha^{*}}\), \(\mu_{\delta}\) and \(\varpi\) represent the right ascension, declination, proper motion in the right ascension direction, proper motion in the declination direction and annual parallax, respectively. \(t_{ref}\) is the reference epoch, and \(X(t)\), \(Y(t)\) and \(Z(t)\) are the Cartesian barycentric Solar-system coordinates in au of the Earth on the ICRF at time \(t\). In this paper, the above coordinates are calculated with the astropy Python package from NASA JPL's Horizons Ephemeris (Astropy Collaboration et al., 2013, 2018; Collaboration et al., 2022).
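The parallax factors \(\mathbf{P}(t)\) of equation (3) can be evaluated, for instance, with astropy as sketched below; this sketch uses astropy's built-in solar-system ephemeris rather than the JPL Horizons ephemeris quoted above, and the function name and example coordinates are illustrative.

```python
import numpy as np
import astropy.units as u
from astropy.time import Time
from astropy.coordinates import get_body_barycentric, solar_system_ephemeris

def parallax_factors(alpha0_rad, delta0_rad, t):
    """Parallax factors of Eq. (3) from the Earth's barycentric position at Time t."""
    with solar_system_ephemeris.set('builtin'):
        pos = get_body_barycentric('earth', t)
    X, Y, Z = pos.xyz.to(u.au).value
    P_alpha = (X * np.sin(alpha0_rad) - Y * np.cos(alpha0_rad)) / np.cos(delta0_rad)
    P_delta = (X * np.cos(alpha0_rad) * np.sin(delta0_rad)
               + Y * np.sin(alpha0_rad) * np.sin(delta0_rad)
               - Z * np.cos(delta0_rad))
    return P_alpha, P_delta

# example: parallax factors at the Gaia DR3 reference epoch J2016.0
P_a, P_d = parallax_factors(np.radians(266.4), np.radians(-29.0),
                            Time(2016.0, format='jyear'))
```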
Let \(\varphi_{S}\) and \(\varphi_{L}\) represent the angular positions of the source and the lens, respectively. One can define the dimensionless distance vector as
\[\mathbf{u}=\frac{\mathbf{\varphi_{S}}-\mathbf{\varphi_{L}}}{\theta_{E}}. \tag{4}\]
Usually two images of the source are formed when the source is not perfectly aligned with the lens. The bright image (+) is close to the source and the faint image (-) is close to the lens. Their distances relative to the lens are given by
\[\theta_{\pm}=\frac{u\pm\sqrt{u^{2}+4}}{2}\cdot\theta_{E}, \tag{5}\]
where \(u=|\mathbf{u}|\). When the separation of the images is too small to be resolved, only the centroid of light formed by the images can be measured. This can be described by
\[\theta_{C}=\frac{u^{2}+3}{u^{2}+2}u\cdot\theta_{E}, \tag{6}\]
and the corresponding shift is the astrometric signal given by
\[\delta\theta_{C}=\frac{u}{u^{2}+2}\cdot\theta_{E}. \tag{7}\]
The maximum shift of the center of light occurs at \(u=\sqrt{2}\).
When the lens is luminous and unresolved from the source, the shift between lensed and unlensed position (position of the combined center of light) can be determined by
\[\delta\theta_{C,lum}=\frac{u\cdot\theta_{E}}{1+f_{LS}}\cdot\frac{1+f_{LS}(u^{ 2}+3-u\sqrt{u^{2}+4})}{u^{2}+2+f_{LS}u\sqrt{u^{2}+4}}, \tag{8}\]
where \(f_{LS}\) is the flux ratio between the lens and the source. When the separation between the lens and the source is large enough for lens and the brighter image (+) to be resolved, the brighter image (+) will be measured. The corresponding shift compared to the unlensed position can be expressed by
\[\delta\theta_{+}=\frac{\sqrt{u^{2}+4}-u}{2}\cdot\theta_{E}. \tag{9}\]
Therefore \(\delta\theta_{C,lum}\), \(\delta\theta_{C}\) and \(\delta\theta_{+}\) are the astrometric microlensing signals.
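A small sketch collecting the three signals of equations (7)-(9) as functions of \(u\) and \(\theta_{E}\) is given below; the function name is illustrative. As a sanity check, the centroid shift of equation (7) peaks at \(u=\sqrt{2}\) with the value \(\theta_{E}/(2\sqrt{2})\).

```python
import numpy as np

def microlensing_signals(u, theta_E, f_LS=None):
    """Astrometric signals for dimensionless separation u and Einstein radius theta_E."""
    d_centroid = u / (u**2 + 2) * theta_E                      # Eq. (7), dark lens
    d_plus = (np.sqrt(u**2 + 4) - u) / 2 * theta_E             # Eq. (9), resolved + image
    d_lum = None
    if f_LS is not None:                                       # Eq. (8), luminous blended lens
        d_lum = (u * theta_E / (1 + f_LS)
                 * (1 + f_LS * (u**2 + 3 - u * np.sqrt(u**2 + 4)))
                 / (u**2 + 2 + f_LS * u * np.sqrt(u**2 + 4)))
    return d_centroid, d_plus, d_lum
```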
### The change of astrometric microlensing signal with time
Referring to the derivation of Dominik & Sahu (2000), we deduce the relationships between the diurnal variation of the astrometric signals (\(\delta\theta_{C,lum}\), \(\delta\theta_{C}\) and \(\delta\theta_{+}\)) and the parameters \(u\), \(f_{LS}\), \(\mu_{LS}\) and \(\varpi_{LS}\), where \(\mu_{LS}\) and \(\varpi_{LS}\) are the relative proper motion and parallax between the source and the lens, respectively. First we obtain the following equation from equations (4) and (9):
\[\frac{d\delta\theta_{+}}{d\varphi}=\frac{d\delta\theta_{+}}{du}\cdot\frac{du}{d\varphi}=\frac{u}{2\sqrt{u^{2}+4}}-\frac{1}{2}, \tag{10}\]
and then we have the equations from the equations (2) and (3):
\[\frac{d\varphi}{dt}\approx\left|\frac{d\varphi}{dt}\right|=\sqrt{(\Delta\alpha_{\mu,\varpi})^{2}+(\Delta\delta_{\mu,\varpi})^{2}}, \tag{11}\] \[\Delta\alpha_{\mu,\varpi}=\mu_{\alpha^{*},LS}+\varpi_{LS}\cdot\Delta P_{\alpha^{*}},\] \[\Delta\delta_{\mu,\varpi}=\mu_{\delta,LS}+\varpi_{LS}\cdot\Delta P_{\delta}\]
where \(\Delta P_{\alpha^{*}}=\Delta x\cdot\sin(\alpha_{0,L})-\Delta y\cdot\cos(\alpha_{0,L})\), \(\Delta P_{\delta}=\Delta x\cdot\cos(\alpha_{0,L})\cdot\sin(\delta_{0,L})+\Delta y\cdot\sin(\alpha_{0,L})\cdot\sin(\delta_{0,L})-\Delta z\cdot\cos(\delta_{0,L})\),
and \(\Delta x=-0.0171AU\), \(\Delta y=-0.0036AU\), \(\Delta z=-0.0016AU\) are the daily variations of \(x\), \(y\), \(z\), respectively. We take the daily variation from 2455199.5JD to 2455200.5JD near the perihelion, and set \(\mu_{\alpha^{*},LS}=\mu_{\alpha^{*},L}-\mu_{\alpha^{*},S}\), \(\mu_{\delta,LS}=\mu_{\delta,L}-\mu_{\delta,S}\), and \(\varpi_{LS}=\varpi_{L}-\varpi_{S}\). It should be noted that since the projection onto the parallactic motion is position dependent, the equation underestimates the effect for stars with \(\alpha\sim 0^{\circ}\) or \(\alpha\sim 180^{\circ}\). However, the effect only causes a small change in \(\frac{d\varphi}{dt}\), so it does not affect our conclusion. Then we obtain
\[\frac{d\delta\theta_{+}}{dt}=\frac{d\delta\theta_{+}}{d\varphi}\cdot\frac{d\varphi}{dt}=\left(\frac{u}{2\sqrt{u^{2}+4}}-\frac{1}{2}\right)\cdot\frac{d\varphi}{dt}. \tag{12}\]
According to the equations (7), we then have
\[\frac{d\delta\theta_{\mathcal{C}}}{dt}=\frac{d\delta\theta_{\mathcal{C}}}{d \varphi}\cdot\frac{d\varphi}{dt}=\frac{2-u^{2}}{\left(u^{2}+2\right)^{2}} \cdot\frac{d\varphi}{dt}, \tag{13}\]
According to equations (12) and (13), the diurnal variation of the astrometric signal depends on \(\mu_{LS}\), \(\varpi_{LS}\) and \(u\). For \(\delta\theta_{C}\) and \(\delta\theta_{+}\), the maximum diurnal variations increase with \(\mu_{LS}\) and \(\varpi_{LS}\). The maximum diurnal variation of \(\delta\theta_{+}\) can exceed 0.1 mas only when \(u<5\). Therefore, for most astrometric microlensing events, the change of the astrometric signal within 24 hours (or even longer) cannot be detected by space telescopes, e.g. the Hubble Space Telescope (Bellini et al., 2011) and the James Webb Space Telescope (Gardner et al., 2023). Consequently, we do not need high accuracy in finding the closest approach (\(u_{0}\) and \(t_{0}\)), because the shift does not vary by an appreciable amount on a small timescale (e.g. a daily timescale). We just need to sample the calculation points more densely during certain periods.
In this paper, exploiting the fact that the astrometric signals of microlensing events change slowly with time, we only take a small number of data points when calculating the stellar trajectory and can still find the closest approach (\(u_{0}\) and \(t_{0}\)). A more detailed description is given in Section 4.3.
## 3 Data Sources
### Lens star selection
High proper motion stars are typically selected as potential lenses (Kluter et al., 2018, 2022; McGill et al., 2019) because they traverse large areas of the sky in a given time period, which increases the rate of close alignments with background sources. In addition, according to equation (1), for events with \(D_{S}\gg D_{L}\), the closer the lens star is to the Earth or the larger its mass, the larger the radius of the Einstein ring. For a given lens-source angular separation, a larger \(\theta_{E}\) corresponds to a larger astrometric signal, and a larger signal is more likely to be observed by a telescope. Therefore, in this paper, the potential lens stars are selected from three types of stars, HPMS, NS and HMS (as shown in Table 1).
The potential lens stars come from Gaia DR3 and must have positions, parallaxes, and proper motions. To ensure a good astrometric solution, we require parallaxes with relatively small errors, parallax_over_error\(>\)5. However, sources with parallax_over_error\(>\)5 may still include about 1.6% spurious astrometric solutions (Fabricius et al., 2021). In order to separate valid from spurious astrometric solutions, Rybizki et al. (2022) trained a neural network to obtain an "astrometric fidelity" parameter between 0 and 1 for 1.47 billion sources (with five- or six-parameter solutions) in Gaia eDR3. A value of 1.0 means a perfectly trustworthy solution, and the lowest value of 0.0 indicates many issues in the astrometric solution. Using this parameter to eliminate spurious astrometric solutions is more effective than simple quality cuts. Rybizki et al. (2022) suggested that "In most regimes, the use of the astrometric fidelity should yield a purer and more complete sample of sources with reliable astrometric solutions." Because the astrometric data of Gaia DR3 are the same as those of Gaia eDR3, the "astrometric fidelity" parameter is still applicable to Gaia DR3. In this paper, lenses with an "astrometric fidelity" parameter greater than 0.8 (fidelity_v2\(>\)0.8) are selected. This limit is a condition that all potential lens stars need to meet.
For HPMS, \(\mu\geq 100mas/yr\), where \(\mu\) is the total proper motion, their number is about 470000.
For NS, \(\varpi>10mas\), and \(\mu<100mas/yr\) is also needed to avoid the overlap with HPMS. Their number is about 160000.
For HMS, their mass provided by Gaia DR3 is limited as \(M_{L}>5M_{\odot}\), and \(\mu<100mas/yr\) and \(\varpi<10mas\) are needed to avoid the overlap with HPMS and NS. Their number is about 190000.
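For illustration, the HPMS selection of Table 1 could be expressed as an ADQL query against the Gaia archive via astroquery, as sketched below; the fidelity_v2 cut is applied afterwards by cross-matching with the Rybizki et al. (2022) catalogue, which is not a column of gaiadr3.gaia_source, and the exact query used in this work may differ.

```python
from astroquery.gaia import Gaia

# Illustrative ADQL for the high-proper-motion (HPMS) lens sample of Table 1.
# The fidelity_v2 > 0.8 cut is applied afterwards by cross-matching the returned
# source_ids with the Rybizki et al. (2022) astrometric-fidelity catalogue.
query = """
SELECT source_id, ra, dec, parallax, pmra, pmdec, phot_g_mean_mag
FROM gaiadr3.gaia_source
WHERE SQRT(pmra*pmra + pmdec*pmdec) >= 100
  AND parallax_over_error > 5
  AND phot_g_mean_mag IS NOT NULL
  AND (astrometric_params_solved = 31 OR astrometric_params_solved = 95)
"""
job = Gaia.launch_job_async(query)
hpms_candidates = job.get_results()
```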
### Background star selection
Background star (BGS) data are also taken from Gaia DR3. The selection conditions are listed in Table 2. The condition is fidelity_v2\(>\)0.7 for background stars with five- and six-parameter solutions. For background stars with a two-parameter solution, Rybizki et al. (2022) did not provide the "astrometric fidelity" parameter, and Gaia DR3 also did not provide the ruwe parameter (Lindegren et al., 2021). However, we can estimate the ruwe parameter according to the Gaia DR3 documentation (see section 20.1.1 of Gaia DR3 1), where the equations include
Footnote 1: [https://gea.esac.esa.int/archive/documentation/GDR3/Gaia_archive/chap_datamodel/sec_dm_main_source_catalogue/ssec_dm_gaia_source.html](https://gea.esac.esa.int/archive/documentation/GDR3/Gaia_archive/chap_datamodel/sec_dm_main_source_catalogue/ssec_dm_gaia_source.html)
\[astrometric\_gof\_al=(9\cdot v/2)^{1/2}\cdot\left[ruwe^{2/3}+2/(9\cdot v)-1\right], \tag{14}\]
and
\[v=astrometric\_n\_good\_obs\_al-N. \tag{15}\]
Then the ruwe parameter is given by
\[ruwe=\left(\frac{astrometric\_gof\_al}{\sqrt{9\cdot v/2}}-\frac{2}{9\cdot v}+1\right)^{3/2}, \tag{16}\]
\begin{table}
\begin{tabular}{l|l|l} \hline \hline HPMS & NS & HMS \\ \hline \(\mu\geq\)100mas/yr & \(\varpi>\)10mas & \(\varpi<\)10mas \\ & \(\mu<\)100mas/yr & \(\mu<\)100mas/yr \\ & & \(M_{L}>5M_{\odot}\) \\ \(\sim\)470000 & \(\sim\)160000 & \(\sim\)190000 \\ \hline parallax\_over\_error\(>\)5 & & \\ phot\_g\_mean\_mag & IS NOT NULL & \\ astrometric\_params\_solved=31(OR astrometric\_params\_solved=95) & \\ fidelity\_v2\(>\)0.8 & & \\ \hline Notes. \(\mu\) is the total proper motion, \(\varpi\) is parallax, & \\ \(M_{L}\) is the mass estimation of star (mass\_flame) in the appendix & \\ Astrophysical parameters provided by Gaia DR3, & \\ parallax\_over\_error\(\}\) is parallax divided by its standard deviation, & \\ phot\_g\_mean\_mag & is 6-band mean magnitude, & \\ astrometric\_params\_solved is astrometric solutions, & \\ astrometric\_params\_solved=31 is five-parameter solutions, & \\ astrometric\_params\_solved=95 is six-parameter solutions, & \\ fidelity\_v2 is “astrometric fidelity” parameter from Rybizki et al. (2022) & \\ \hline \end{tabular}
\end{table}
Table 1: The conditions of the potential lens stars
where \(astrometric\_gof\_al\) is the goodness-of-fit statistic of the model with respect to the along-scan observations, \(v\) is the number of degrees of freedom for a source update, and \(N=5\). Therefore, the condition for background stars with a two-parameter solution is \(ruwe<2\) and \(\sqrt{\sigma_{\alpha^{*}}^{2}+\sigma_{\delta}^{2}}<10mas\). In the subsequent calculations, we set the parallax and proper motion to 0 for background stars with a two-parameter solution and assume standard errors of \(\sigma_{\mu_{\alpha^{*},S}}=\sigma_{\mu_{\delta,S}}=5mas/yr\) and \(\sigma_{\varpi_{S}}=2mas\). Roughly 90% of BGSs with five- or six-parameter solutions have standard errors in proper motion and parallax below the above values.
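A direct transcription of equations (14)-(16) into code, useful for reproducing the approximate ruwe of two-parameter sources, might look as follows (the function name is illustrative):

```python
import numpy as np

def approx_ruwe(astrometric_gof_al, astrometric_n_good_obs_al, N=5):
    """Approximate ruwe for two-parameter sources by inverting Eq. (14),
    with the degrees of freedom of Eq. (15) and the closed form of Eq. (16)."""
    v = astrometric_n_good_obs_al - N
    return (astrometric_gof_al / np.sqrt(9 * v / 2) - 2 / (9 * v) + 1) ** 1.5
```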
## 4 Predicting microlensing events
The selection process is shown in Figure 1. First, the potential lenses (blue) are selected, then the background stars are matched (green), and finally the predicted events that meet the conditions are selected (red).
### Initial lens-source matching
For \(\sim 820000\) potential lens stars, we query for all sources using the cone search method, where the search time range is J2010.0 \(\sim\) J2070.0. The center of the circle is the position of the lens at the reference epoch (J2016.0) of Gaia DR3. For HPMS, the search radius is \(T\cdot\mu+6^{\prime\prime}\), and \(\sim 1240000\) star pairs are found. For NS and HMS, the search radius is \(T\cdot\mu+8^{\prime\prime}\), and \(\sim 260000\) and \(\sim 760000\) star pairs are found respectively. \(T=54yr\) is from J2070.0 to J2016.0. To avoid missing events as much as possible, we use the cone search method that has larger searching area than that of the rectangular search method (Kluiter et al., 2018, 2022; McGill et al., 2019), and the subsequent steps can further constrain the star pairs.
The number of pairs found is still large, so we further select star pairs according to the characteristics of their relative motion. The relative motion of a lens-background pair can be approximated as linear (without considering parallax), and the relative position (\(\Delta\varphi\)) of the star pair over time can be obtained from equation (2) as
\[\begin{array}{l}\Delta\varphi\approx\left(\begin{array}{c}\alpha_{0L}- \alpha_{0S}\\ \delta_{0L}-\delta_{0S}\end{array}\right)+\left[t-t_{ref}\right]\left(\begin{array} []{c}\left[\mu_{\alpha^{*}L}-\mu_{\alpha^{*}S}\right]/\cos\delta_{0S}\\ \mu_{\delta L}-\mu_{\delta S}\end{array}\right)\\ =\left(\begin{array}{c}\alpha_{0,\;\rm LS}\\ \delta_{0,\;\rm LS}\end{array}\right)+\left[t-t_{\rm ref}\right]\left(\begin{array} []{c}\mu_{\alpha^{*},\;\rm LS}\left/\cos\delta_{0S}\\ \mu_{\delta,\;\rm LS}\end{array}\right.\right),\end{array} \tag{17}\]
where \(\alpha_{0L}\), \(\delta_{0L}\), \(\alpha_{0S}\) and \(\delta_{0S}\) are the right ascensions and declinations of the lens star and the background star at the reference epoch, respectively. \(\mu_{\alpha^{*}L}\), \(\mu_{\delta L}\), \(\mu_{\delta S}\) and \(\mu_{\alpha^{*}S}\) are the proper motions in right ascension and declination directions of the lens star and the background star. \(\alpha_{0,LS}\) and \(\delta_{0,LS}\) are the relative right ascension and relative declination of the star pair, \(\mu_{\alpha^{*},LS}\) and \(\mu_{\delta,LS}\) are proper motions in right ascension and declination directions of the star pair.
Therefore, the relative angular distance (\(\theta_{sep}\)) of the star pair can be approximated as
\[\theta_{sep}\approx\sqrt{\left(\Delta\alpha\cdot cos\delta_{0S}\right)^{2}+ \left(\Delta\delta\right)^{2}}, \tag{18}\]
where \(\Delta\alpha=\alpha_{0,LS}+\left(t-t_{ref}\right)\cdot\mu_{\alpha^{*},LS}/cos \delta_{0S}\) and \(\Delta\delta=\delta_{0,LS}+\left(t-t_{ref}\right)\cdot\mu_{\delta,LS}\). Based on \(\frac{\partial\theta_{sep}}{\partial t}=0\), we can get the minimum value \(\theta_{sep,min}\) of \(\theta_{sep}\) and its corresponding time \(t_{min}\) as
\[\theta_{sep,min}=\left|\frac{\mu_{\delta,LS}\cdot\alpha_{0,LS}\cdot\cos\delta _{0S}-\mu_{\alpha^{*},LS}\cdot\delta_{0,LS}}{\sqrt{\mu_{\alpha^{*},LS}^{2}+\mu_ {\delta,LS}^{2}}}\right|, \tag{19}\]
and
\[t_{min}=-\frac{\alpha_{0,LS}\cdot\cos\delta_{0S}\cdot\mu_{\alpha^{*},LS}+ \delta_{0,LS}\cdot\mu_{\delta,LS}}{\mu_{\alpha^{*},LS}^{2}+\mu_{\delta,LS}^{2} }+t_{ref}. \tag{20}\]
For all star pairs searched above, we use equation (20) to calculate \(t_{min}\). If \(t_{min}\) is within the range J2010.0-J2070.0, equation (19) is used to calculate \(\theta_{sep,min}\). If \(t_{min}>2070\), the relative angular distance \(\theta_{sep,2070}\) is calculated at \(t=2070\) according to equation (18). If \(t_{min}<2010\), the relative angular distance \(\theta_{sep,2010}\) is calculated at \(t=2010\) based on equation (18). If \(\theta_{sep,min}\), \(\theta_{sep,2070}\) or \(\theta_{sep,2010}\) is less than \(\frac{20}{mas}\cdot\theta_{E}^{2}\), i.e. \(\delta\theta_{*,max}>0.05mas\) (since for large \(u\), \(\delta\theta_{*}\approx\frac{\theta_{E}}{u}=\frac{\theta_{E}^{2}}{\theta_{sep}}\)), where \(\delta\theta_{*,max}\) is the approximate maximum value of \(\delta\theta_{*}\) in the range J2010.0-J2070.0, the star pair is initially matched; otherwise the star pair is excluded. \(\theta_{E}\) is calculated according to equation (1), where the lens star masses are estimated in Section 4.2.
After applying the above criteria, we find 19401, 3742 and 3914 pairs for HPMS, NS and HMS, respectively. It is noted that no restriction has been placed on the background stars at this stage.
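A compact sketch of the linear closest-approach estimate of equations (19)-(20), used here only for the initial match, is given below; the names and unit conventions are illustrative.

```python
import numpy as np

def linear_closest_approach(alpha0_LS, delta0_LS, pm_ra_LS, pm_dec_LS,
                            delta0_S, t_ref=2016.0):
    """Time and separation of closest approach for purely linear relative motion.

    alpha0_LS, delta0_LS : relative RA/Dec offsets at t_ref (mas)
    pm_ra_LS, pm_dec_LS  : relative proper motions mu_alpha*, mu_delta (mas/yr)
    delta0_S             : declination of the background star (rad)
    """
    a = alpha0_LS * np.cos(delta0_S)                 # RA offset projected on the sky
    mu2 = pm_ra_LS**2 + pm_dec_LS**2
    t_min = -(a * pm_ra_LS + delta0_LS * pm_dec_LS) / mu2 + t_ref          # Eq. (20)
    theta_min = abs(pm_dec_LS * a - pm_ra_LS * delta0_LS) / np.sqrt(mu2)   # Eq. (19)
    return t_min, theta_min
```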
### Mass estimation of lens stars
The masses of these potential lens stars are estimated in three cases :
First case, the masses are searched in the "Astrophysical parameters" table of 128 million stars released by Gaia DR3 (Creevey et al., 2023; Fouesneau et al., 2023). The mass estimation of all HMS comes from this table. In addition, the masses of about 40000 HPMS and NS are given in this table (see upper-right panel of Figure 2, the mass estimation of the red dots in the panel comes from this table).
Second case, the masses are estimated by matching the white dwarf catalog (Gentile Fusillo et al., 2021). The catalog provides the parameter 'PWD' as a measure of the probability of the source being a white dwarf. In this paper, the potential lens stars that do not belong to the first case are matched with the white dwarf catalog. About 13000 potential lens stars are matched with the catalog (see the upper-right panel of Figure 2; the mass estimates of the yellow dots in the panel come from the white dwarf catalog), where 99% of the potential lens stars have 'PWD'>0.7. Gentile Fusillo et al. (2021) listed three types of estimated mass: M_H, M_He and M_mix, corresponding to the mass estimated with the pure-H, pure-He and mixed hydrogen/helium (H/He) composition models, respectively. The order of preference for the mass estimate is M_H, M_He and M_mix: if there is no mass estimate of the first kind, the second is used, and so on. The fractions of lenses using M_H, M_He and M_mix are \(\sim 96\%\), \(\sim 1\%\), and \(\sim 2\%\), respectively. According to the statistics of the lenses, \(\sim 93\%\) of lens stars have a difference between M_H and M_He of less than \(3\sigma\), and \(\sim 70\%\) of lens stars have a difference between M_H and M_mix of less than \(3\sigma\). It is noted that about 200 potential lens stars have no mass estimate, so we use the classical white dwarf mass, \(M_{WD}=(0.65\pm 0.15)M_{\odot}\), as their mass.
Third case, the masses are estimated by the mass-luminosity relations. For potential lens stars (about 270000) that do not belong to the above two cases, we use the method of Kluiter et al. (2022) to estimate the mass. These lens stars are divided into white dwarf, red giant, brown dwarf and main sequence stars (see upper-left panel of
\begin{table}
\begin{tabular}{c c} \hline \hline two-parameter solution & five - and six-parameter solutions \\ \hline ruwe\textless{2}, & \(\sqrt{\sigma_{\alpha^{*}}^{2}+\sigma_{\delta}^{2}}<10mas\) & fidelity\_v2\textgreater{0.7} \\ \hline \end{tabular}
\end{table}
Table 2: The conditions of background stars
Figure 2). The masses of white dwarfs, red giants and brown dwarfs are estimated as \(M_{WD}=(0.65\pm 0.15)M_{\odot}\), \(M_{RG}=(1.0\pm 0.5)M_{\odot}\) and \(M_{BD}=(0.07\pm 0.03)M_{\odot}\) respectively. The masses of the main sequence stars are estimated using the mass-luminosity relations, where a 10 per cent error is assumed.
It should be noted that mass estimation is important as it can affect follow-up planning decisions (McGill et al., 2019). If we can find a more accurate mass estimate in the literature, we prefer to use it to improve the accuracy of the prediction.
### the source-lens closest approach
In this paper, the source-lens closest approaches are searched for by calculating the relative motion of the star pairs. According to Section 2.2,
Figure 1: Illustration of the selection processes. They are the selection of potential lens stars (blue), the selection of background stars (green), the determination of the closest approach, and the exclusion of co-moving stars and the estimation of the expected microlensing effect (red).
Figure 2: Color-magnitude Diagrams. Upper-left panel: Color-magnitude diagram of the third case potential lens with \(G\), \(G_{BP}\), \(G_{RP}\) from Section 4.2, \(M_{GBP}\) is the absolute magnitude at \(G_{BP}\) band, the stars below the green line are considered to be WDs, and the stars above the red line are considered to be RGs. Upper right panel: Hertzsprung-Russell diagram of \(M_{G}\) versus \(G_{BP}-G_{RP}\) for all potential lens with \(G\), \(G_{BP}\), \(G_{RP}\) from Section 4.2. The red dots indicate the first case, the yellow indicate the second case and the blue indicate the third case. Lower left panel: Same as the upper right panel for all potential lens with \(G\), \(G_{BP}\), \(G_{RP}\). The lens masses are indicated by the colour of the points (see the scale at the right of the panel). Lower right panel: Same as lower left panel, the grey indicate all potential lens with \(G\), \(G_{BP}\), \(G_{RP}\) and the colour dots indicate the lenses of predicted events.
the signals of astrometric microlensing events change slowly with time. Therefore, we only take a small number of data points when calculating the trajectory of the star, and denser sampling is carried out only over the short periods when the signals change rapidly. The specific steps are as follows (see Figure 3):
Step 1: we use equations (2) and (3) to calculate the source-lens separation (\(u\)) every 30 days, giving 732 data points in the range J2010.0-J2070.0. From these data points we find the minimum source-lens separation (\(u=u_{01}\)) and the time of closest approach (\(t=t_{01}\)) between the source and the lens. If the difference between the source-lens separation (\(u_{i}\)) and the minimum source-lens separation \(u_{01}\) is less than \(\frac{5mas}{\theta_{E}}\), we record its corresponding time as \(t_{i}\), where \(i\) denotes the i-th data point.
Step 2: we calculate the source-lens separation every day from 30 days before \(t_{01}\) to 30 days after \(t_{01}\), and from 30 days before \(t_{i}\) to 30 days after \(t_{i}\), where there are 60 - 120 data points. From these data points we find the minimum source-lens separation (\(u=u_{02}\)) and the time of closest approach (\(t=t_{02}\)) between the source and the lens. According to equations (13) and (12), we calculate \(\frac{d\delta\theta_{C}}{dt}\) and \(\frac{d\delta\theta_{+}}{dt}\) with \(u=u_{02}\). If the absolute value of either of them is greater than \(80\mu as/day\), we go to step 3. Otherwise, the calculation ends, and we obtain the results \(u_{0}=u_{02}\) and \(t_{0}=t_{02}\).
Step3: we calculate the source-lens separation every two hours from 24 hours before \(t=t_{02}\) to 24 hours after \(t=t_{02}\), where there are 24 data points. From these data points we find the minimum source-lens separation (\(u=u_{0}\)) and the time of closest approach (\(t=t_{0}\)) between the source and the lens.
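The coarse-to-fine search can be summarised in a few lines of code, as sketched below under the assumption of a user-supplied function `separation_u(t)` implementing equations (2)-(4); for brevity the sketch refines around a single coarse minimum and always applies the two-hourly step, whereas the full procedure also refines around every epoch \(t_{i}\) and only enters step 3 when the daily signal change exceeds \(80\mu as/day\).

```python
import numpy as np

def closest_approach(separation_u, t_start=2010.0, t_end=2070.0):
    """Three-step coarse-to-fine search for (t_0, u_0); epochs in Julian years."""
    # step 1: coarse sampling every 30 days
    coarse = np.arange(t_start, t_end, 30 / 365.25)
    u_coarse = np.array([separation_u(t) for t in coarse])
    t01 = coarse[np.argmin(u_coarse)]
    # step 2: daily sampling within +/- 30 days of the coarse minimum
    daily = np.arange(t01 - 30 / 365.25, t01 + 30 / 365.25, 1 / 365.25)
    u_daily = np.array([separation_u(t) for t in daily])
    t02 = daily[np.argmin(u_daily)]
    # step 3: two-hourly sampling within +/- 24 hours of the daily minimum
    fine = np.arange(t02 - 1 / 365.25, t02 + 1 / 365.25, 2 / 24 / 365.25)
    u_fine = np.array([separation_u(t) for t in fine])
    j = np.argmin(u_fine)
    return fine[j], u_fine[j]
```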
### Checking for microlensing events
Using \(u_{0}\) and \(t_{0}\) obtained in Section 4.3, we calculate the observation signal \(\delta\theta_{+}\) through equations (1), (4) and (9), and find the events with \(\delta\theta_{+}>0.1mas\) and a background star meeting the conditions in Table 2. For these events, if the background star has a five- or six-parameter solution, it is necessary to exclude binary stars or co-moving stars. We refer to the conditions for excluding these star pairs used by many authors (Bramich and Nielsen, 2018; Bramich, 2018; Kluter et al., 2022) and the method of selecting binary stars of El-Badry et al. (2021), and retain the star pairs meeting the following conditions (as shown in Figure 4):
\[\frac{\Delta\varpi}{\sigma_{\Delta\varpi}}=\frac{\varpi_{L}-\varpi_{S}}{\left(\sigma_{\varpi_{L}}^{2}+\sigma_{\varpi_{S}}^{2}\right)^{1/2}}>3, \tag{21}\]
\[|\mu_{S}-\mu_{L}|>0.7\cdot\mu_{L}, \tag{22}\]
\(\mu_{S}<0.8\cdot\mu_{L}\).
In addition, McGill et al. (2020) indicate that events with bright background sources (\(G<18mag\)) which are predicted to happen during the Gaia mission are likely not genuine. For the background stars with a two-parameter solution, we check these events using a method similar to that of Kluter et al. (2022). First, we search for matched sources using the gaiadr3.dr2_neighbourhood catalog (Torra et al., 2021). For the one-to-many match case, we only consider the DR3-DR2 pair with the smallest angular distance. Then we only select the matches that meet the following criteria:
\[\Delta\varphi_{match}<400mas, \tag{23}\]
\(\Delta G_{match}<1mag\),
where \(\Delta\varphi_{match}\) and \(\Delta G_{match}\) are the position and magnitude differences of the source between DR3 and DR2, respectively. We notice that 40 sources have different source_ids in the DR3 and DR2 releases. To ensure a correct match, we check these sources again. Specifically, we match their dr2_source_id to dr3_source_id and find that one of them is an erroneous match. We obtain 1281 sources with a good match and 1117 sources with a bad or no match. Second, we estimate the proper motion (\(\mu_{S}\)) of background stars with a good match as
\[\mathbf{\mu}_{S}=\left[\left(\begin{array}{c}\alpha_{S,DR3}\\ \delta_{S,DR3}\end{array}\right)-\left(\begin{array}{c}\alpha_{S,DR2}\\ \delta_{S,DR2}\end{array}\right)\right]\cdot\left(\begin{array}{c}\cos\delta_{S,DR3}\\ 1\end{array}\right)/\Delta t, \tag{24}\]
Figure 4: The proper motion difference between the lens star and the background star. The red points are star pairs that do not meet the conditions (21) and (22). The blue dots are the final selected star pairs.
Figure 3: Flow chart of calculating the source-lens closest approach.
where \(\alpha_{S,DR3}\), \(\delta_{S,DR3}\), \(\alpha_{S,DR2}\) and \(\delta_{S,DR2}\) are the positions of the BGS in Gaia DR3 and Gaia DR2 at their respective reference epochs, and \(\Delta t\) is the difference between the catalogue epochs of Gaia eDR3 and Gaia DR2, taken as 0.5 yr. We then determine whether these events satisfy equation (22).
We then find 1070 events satisfying the above conditions. Third, we still need to match the external catalogues for the events (42 events) with two-parameter-solution BGS that are brighter than 18 mag and satisfy equation (22). It is also required that the angular distance between the source of Gaia DR3 and the external catalogues is less than 1 arcsecond, and that the lens and background star can be matched in the same catalogues. The Gaia DR3 catalogue includes pre-computed cross-matches with optical/near-infrared photometric and spectroscopic surveys (Marrese et al., 2017, 2019). These external catalogues matched with Gaia DR3 are Pan-STARRS1 DR1 (Flewelling et al., 2020), SkyMapper DR2 (Onken et al., 2019), SDSS DR13 (Albareti et al., 2017), URAT1 (Zacharias et al., 2015), Tycho2 (Hog et al., 2000), Hipparcos-2 (van Leeuwen, 2007), 2MASS (Skrutskie et al., 2006), AllWISE (Mainzer et al., 2011), APASS DR9 (Henden et al., 2016), GSC 2.3 (Lasker et al., 2008), RAVE DR5 (Kunder et al., 2017) and RAVE DR6 (Steinmetz et al., 2020). We exclude 33 events that cannot be matched with any of the above external catalogues. Fourth, from the 1117 events whose background stars have a bad or no DR3-DR2 match, we exclude 993 events that cannot be matched with any of the above external catalogues.
In addition, we recalculate 360 events occurred before 2010 or after 2070 to estimate the exact epoch, and remove 75 events occurred before 2005 or after 2075.
For the final star pairs, we determine the uncertainties of these predictions using a Monte Carlo method, where we draw 1000 samples from appropriate Gaussian distributions for the lens position, proper motion and parallax. We do not include any covariances between different input parameters. It should be noted that for 95 events with \(u_{0}<10\) and \(\sigma_{u_{0}}/u_{0}>5\), we provide the lower confidence level (16%) of \(\delta\theta_{+}\), \(\delta\theta_{C}\), and \(\delta\theta_{C,lum}\), respectively, rather than the standard deviation.
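A minimal sketch of this Monte Carlo propagation, assuming independent Gaussian errors and a user-supplied `predict_signal` function that maps one parameter draw to \(\delta\theta_{+}\) (the parameter names are illustrative), is given below.

```python
import numpy as np

def monte_carlo_signal(lens_params, lens_errors, predict_signal, n=1000, seed=0):
    """Propagate lens astrometric uncertainties through the event prediction."""
    rng = np.random.default_rng(seed)
    draws = ({k: rng.normal(lens_params[k], lens_errors[k]) for k in lens_params}
             for _ in range(n))
    signals = np.array([predict_signal(d) for d in draws])
    # report the 16th percentile as a lower confidence level for strongly
    # skewed cases (small u_0), otherwise the standard deviation
    return np.median(signals), np.std(signals), np.percentile(signals, 16)
```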
## 5 Results
### Predicted Astrometric-microlensing Events
In this paper, the searching time range is J2010.0-J2070.0, and the result is provided as a spreadsheet available online, where each event occupies a single row with 43 columns. Table 3 is a sample table consisting of 5 rows of result data. Column meanings in Table 3 are defined in Table 4. Finally, 4500 events caused by 3558 lens stars are found, where 4279 events are caused by HPMS (including 907, 1453 and 1919 events with two-, five- and six-parameter solutions for the background stars, respectively). There are 220 events caused by NS (including 71, 50 and 99 events with two-, five- and six-parameter solutions for the background stars, respectively). Only one event is caused by HMS, and it has a two-parameter background star. The details are shown in Table 5. In Sections 5.1.1, 5.1.2 and 5.1.3, our results are compared with those of Kluter et al. (2022) in detail.
In this paper, the masses of the lens stars are mainly below \(1.2M_{\odot}\). No event is found with a lens mass in the range \(3.2M_{\odot}<M_{L}<5M_{\odot}\). For the events found by other authors (Bramich & Nielsen, 2018; Bramich, 2018; Kluter et al., 2018, 2022), the masses of the lens stars are less than \(2.5M_{\odot}\). The mass threshold of HMS is set to \(M_{L}>5M_{\odot}\), and only one event is finally determined for HMS; a more detailed description is given in Section 5.1.3. Therefore, future work could reduce the mass threshold of lens stars to \(3M_{\odot}\), which could help us to find more events.
There are 293 lens stars that can cause two or more events, as shown in Figure 5. Surprisingly, there are five lens stars that cause more than 50 events. In Section 5.2, we will discuss two of these lens stars and their corresponding events.
About 48% of the events have \(0.1mas<\delta\theta_{*}<0.2mas\), 457 events have \(\delta\theta_{*}>1mas\), and 7 events have \(\delta\theta_{*}>10mas\), as shown in Figure 6 and Figure 7. The events around the Gaia DR3 epoch of J2016.0 are the fewest because of the angular resolution limitation of Gaia DR3. From 2018 to 2035, the number of events will gradually increase and reach a rate of about \(96\,events/yr\), as shown in Figure 8.
\(\theta_{sep}\), where the values with subscript 'ext' are the results of Kluter et al. (2022) and \(\theta_{sep}\) is our estimated distance at closest approach.
There are 2352 events with five- or six-parameter solutions for the background stars. For \(\Delta t_{0}\), about 92% of the events have \(\Delta t_{0}<3d\) (as shown in Figure 9), but there are two events with \(\Delta t_{0}\approx 140d\), where their \(\Delta\delta\theta_{+}\) are very small (\(-0.002mas\) and \(-0.031mas\) respectively, as shown in the lower-right panel of Figure 10). For \(\Delta\delta\theta_{+}\), about 71% of the events have \(|\Delta\delta\theta_{+}|\leq 0.010mas\) (as shown in Figure 11). However, about 5.9% of the events have \(|\Delta\delta\theta_{+}|\geq 0.1mas\), and we note that \(u_{0}\) of these events is relatively small; in particular, when \(|\Delta\delta\theta_{+}|\geq 0.5mas\), \(u_{0}\) is less than 10 (see the lower-left panel of Figure 10). It can be seen from equation (9) that \(\delta\theta_{+}\) is related to \(u\) and \(\theta_{E}\), and \(\delta\theta_{+}\) changes rapidly with \(u\) when \(u\) is small. There are four events with \(M_{R}>2\) (HPMS1160, HPMS3441, HPMS1742 and HPMS1703), and their mass estimates are from the white dwarf catalogue (Gentile Fusillo et al. (2021)), \(M_{L}=(0.87\pm 0.09)M_{\odot}\), \((0.44\pm 0.16)M_{\odot}\), \((1.32\pm 0.01)M_{\odot}\) and \((1.38\pm 0.06)M_{\odot}\), respectively. For the four events, \(\Delta\delta\theta_{+}\) are 5.174 mas, 1.412 mas, 0.827 mas and 0.562 mas, respectively (see the upper-right panel of Figure 10). For these events with \(|\Delta\delta\theta_{+}|\geq 0.1mas\), our estimated lens masses are not the same as those of Kluter et al. (2022) (for 67% of them the mass difference is more than 10%), which causes the differences in \(\theta_{E}\). However, 54% of the events with \(|\Delta\delta\theta_{+}|\geq 0.1mas\) still have \(\delta\theta_{+}\) differences within 1\(\sigma\).
For the 484 events with two-parameter background source stars, the differences between the two sets of predicted results are obvious, because, in addition to the estimated lens mass, the parallax and proper motion adopted for the background stars are different: Klüter et al. (2022) used the estimated values of the parallax and proper motion, whereas we set them to 0. The comparisons of the two results are shown in Figure 12. It is noted that there are 106 events with \(\Delta t_{0}<5d\) and 215 events with \(|\Delta\delta\theta_{+}|\leq 0.05mas\).
#### 5.1.2 The events only searched by Klüter et al. (2022)
2006 events in Klüter et al. (2022) are not included in our results; 1578 of them are excluded because they do not meet the selection conditions in Table 1, Table 2, or equation (21). There are 166 events that do not satisfy the selection of two-parameter background stars in Section 4.4. In addition, we find that \(\delta\theta_{+}\) of 262 events does not reach the threshold of 0.1 mas. These differences could be caused by the lens mass estimation and by the parallax and proper motion of the background
Figure 8: Top panel: the histogram of the closest-approach time for star pairs. The green, yellow and cyan bars show the events with two-, five- and six-parameter-solution BGS, respectively. Bottom panel: the histogram of the closest-approach time for HPMS and NS. The blue and red bars show the events with HPMS and NS lenses, respectively.
Figure 7: Cutouts for the events with \(\delta\theta_{+}>10mas\), obtained using online observation tools: ESAsky (Baines et al. 2017; Giordano et al. 2018) (\(\sim\)J2012 for PanSTARRS DR1 color (i, r, g) and \(\sim\)J1990 for DSS2 color) and the Legacy Surveys (Collaboration et al. 2023a) (\(\sim\)J2017 for DECaPS2 images (r, i, Y)). The lens is shown as a red rectangle, and the source is indicated by a green circle. There are 7 events with \(\delta\theta_{+}>10mas\). We do not show HPMS862 and HPMS3274, whose times of closest approach are after J2070.0; their cutouts can be found through ESAsky and the Legacy Surveys.
Figure 9: The \(\Delta t_{0}\)-statistic diagram of overlap events with the five- or six-parameter background source stars. About 92% of events have \(\Delta t_{0}<3d\).
stars. See Table 6 for details. However, \(\sim 80\%\) of these 262 events have a difference in \(\delta\theta_{+}\) within \(1\sigma\).
#### 5.1.3 The events only searched by us (the new predicted event)
We have found 1664 new predicted events; about 86% of them are not included in the search range of Klüter et al. (2022) in terms of the lens, the background star and the \(t_{0}\) of the event. It is noted that the sources of some events are not within the rectangular box set by Klüter et al. (2022) during their initial search. Other events could be related to differences in the selection parameters, such as the lens mass estimation and the parallax and proper motion of the background stars. See Table 7 for details.
We will discuss these new events in detail based on our classification of lens.
(1) Microlensing events by NS
There are 220 events caused by NS (see Table 5), of which 45 events have \(\delta\theta_{\star}>0.5mas\) and 20 events have \(\delta\theta_{\star}>1mas\) (including 4, 5 and 11 events with two-, five- and six-parameter BGS, respectively). 27 events will occur between J2010.0 and J2035.0, of which 2 events have \(\delta\theta_{\star}>1mas\). After J2035.0, the number of events will increase to about 5 events per year (see Figure 8 and Figure 13). Regarding the timescales of the events with \(\delta\theta_{\star}>0.1mas\), 62 events are within 5 years, 67 events are between 5 and 10 years, and 91 events are longer than 10 years (see Figure 13).
NS180. This event peak will occur in \(2032.21\pm 0.03\), the timescale is \(\sim 6.2yr\) and the maximum shift of the major image is \((1.45\pm 0.24)mas\). The lens mass is estimated to be \(0.32M_{\odot}\) by the mass-luminosity relations in the third case of Section 4.2. Its total proper motion and parallax are \(95.777mas/yr\) and \(11.334mas\), respectively. The lens has \(G\approx 14.77mag\), which is \(\sim 2.78mag\) brighter than the
Figure 10: Comparison of \(|\Delta\delta\theta_{\star}|\) for overlap events with the five- or six-parameter background source stars. Upper-left panel: the relationship diagram of \(|\Delta\delta\theta_{\star}|\) and \(M_{R}\). There are four events with \(M_{R}>2\) (HPMS1160, HPMS3441, HPMS1742 and HPMS1703), and their mass estimates are from the white dwarf catalogue (Gentile Fusillo et al. (2021)). For the four events, \(\Delta\delta\theta_{\star}\) are 5.174 mas, 1.412 mas, 0.827 mas and 0.562 mas, respectively. Lower left panel: the relationship diagram of \(|\Delta\delta\theta_{\star}|\) and \(u_{0}\). For events with large difference in \(\delta\theta_{\star}\), \(u_{0}\) is very small. Lower right panel: the relationship diagram of \(|\Delta\delta\theta_{\star}|\) and \(\Delta t_{0}\). There is not obvious difference in \(\delta\theta_{\star}\) for events with large difference of \(t_{0}\).
source star, which has a six-parameter solution. This event is listed in the first row of Table 3.
(2) Microlensing events by HMS
Initially, for HMS we obtained 558 pairs with \(\delta\theta_{*}>0.1mas\) (including 4 events with two-parameter BGS and 554 events with five- or six-parameter BGS). Of the 554 events with five- or six-parameter BGS, 487 pairs cannot meet the conditions of Table 2 (fidelity_v2>0.7), and
Figure 14: The changes of \(u\) (black) and \(\delta\theta_{*}\) (blue) with time for the event HMS1.
Figure 12: Comparison of \(\delta\theta_{*}\) for overlap events with the two-parameter background source stars. Upper-left panel: The \(|\Delta\delta\theta_{*}|\)-statistic diagram. The contents of other three panels are similar with those of Figure 10.
Figure 13: Top panel: the relationship diagram of \(t_{0}\) and \(\delta\theta_{*}\) for microlensing events by NS. The green, red and blue dots show the events with two-, five- and six-parameter-solution BGS, respectively. Bottom panel: the histogram of the timescales for star pairs by NS. Four events with a timescale of more than 35 years are not shown in the figure.
the remaining 67 pairs did not pass the criteria in equation (21). Of the 4 events with two-parameter BGS, 3 events have a bad match with Gaia DR2. In the end, we only get one event (HMS1) that satisfies all the conditions.
HMS1. The lens star (source_id=4090028395985895552) is a single star, its spectral type is B, and its mass is estimated as \(5.160^{+0.098}_{-0.089}M_{\odot}\), taken from the Gaia DR3 astrophysical parameters. Its parallax and its proper motions in the right ascension and declination directions are \((0.489\pm 0.017)mas\), \((-22.466\pm 0.018)mas/yr\) and \((-38.676\pm 0.015)mas/yr\), respectively. The lens has \(G\approx 12.77mag\), which is \(\sim 3.1mag\) brighter than the source star. The event has a time of closest lens-source approach of \(2046.0\pm 3.3yr\) and an Einstein radius of \((4.54\pm 3.19)mas\). The maximum shift of the major image is \(0.53mas\) and its lower confidence level (16%) is \(0.05mas\). The changes of \(u\) and \(\delta\theta_{*}\) with time are shown in Figure 14; the timescale is \(\sim 9yr\). More details are shown in the second row of Table 3.
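As a rough cross-check of the quoted Einstein radius, one can use the standard relation \(\theta_{E}=\sqrt{\kappa M\pi_{rel}}\) with \(\kappa=4G/(c^{2}\mathrm{AU})\approx 8.144\ mas\,M_{\odot}^{-1}\). The sketch below assumes that the source parallax is negligible, so that \(\pi_{rel}\) is approximated by the lens parallax; that assumption is ours, since the source parallax is not quoted here.

```python
import math

KAPPA = 8.144  # 4G/(c^2 AU) expressed in mas per solar mass

def einstein_radius_mas(lens_mass_msun, pi_rel_mas):
    # theta_E = sqrt(kappa * M * pi_rel), with M in solar masses and pi_rel in mas
    return math.sqrt(KAPPA * lens_mass_msun * pi_rel_mas)

# HMS1: M_L ~ 5.160 Msun, lens parallax 0.489 mas; source parallax assumed ~0
print(f"theta_E ~ {einstein_radius_mas(5.160, 0.489):.2f} mas")
# ~4.5 mas, close to the quoted (4.54 +/- 3.19) mas
```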
(3) Microlensing events by HPMS
We find 1443 new events by HPMS, of which 240 events have \(\delta\theta_{+}>0.5mas\) and 123 events have \(\delta\theta_{+}>1mas\) (including 83 events with five- or six-parameter BGS). There are 845 events occurring between J2010.0 and J2067.0, of which 100 will occur within the next 10 years. In 116 events, the distance of the background star from the proper motion track of the lens star exceeds \(8^{\prime\prime}\), as shown in Figure 15, with a maximum of \(16^{\prime\prime}\). About 97% of these events have an Einstein radius of more than 30 mas, with a maximum of \(\sim 40.66mas\), so these lenses can produce astrometric microlensing effects with \(\delta\theta_{+}>0.1mas\) on background stars that are far away, and \(\sim 83\%\) of these lenses can cause ten or more events because of their high proper motion. However, since \(\delta\theta_{+}\) decreases as \(\theta_{sep}\) increases, the expected shift is small (\(\delta\theta_{+}<0.2mas\)).
There are three examples listed in Table 3:
HPMS1816. The maximum shift of the major image is more than 10 mas (\(\delta\theta_{+}=10.94mas\)) and its lower confidence level (16%) is \(0.79mas\). The event has a time of closest lens-source approach of \(2068.48\pm 1.62yr\). The lens (CD-29 5220) has \(G\approx 9.95mag\), which is \(\sim 11mag\) brighter than the source star, and the source star only has a two-parameter solution. The lens mass is estimated to be \(0.65M_{\odot}\) by the mass-luminosity relations in the third case of Section 4.2. Its total proper motion and parallax are \(162.919mas/yr\) and \(23.662mas\), respectively.
HPMS3274. The maximum shift of the major image is the largest of all events (\(\delta\theta_{+}=21.38mas\)) and its lower confidence level (16%) is \(4.82mas\). The Einstein radius is \((30.54\pm 0.89)mas\). However, the event peak will occur in \(2073.23\pm 0.23\), and the lens (V\({}^{*}\) V2215 Oph) is very bright (\(G\approx 5.89mag\)), being \(\sim 14.87mag\) brighter than the source star, which has a two-parameter solution. The lens star is a single star, its spectral type is K, and its mass is estimated as \(0.682^{+0.040}_{-0.040}M_{\odot}\), taken from the Gaia DR3 astrophysical parameters.
HPMS4600. The maximum shift of the major image is \((6.72\pm 2.17)mas\), the event peak will occur in \(2069.47\pm 0.12\) and the timescale is \(\sim 6.4yr\). The lens mass is estimated as \((0.42\pm 0.02)M_{\odot}\) from the white dwarf catalogue (Gentile Fusillo et al. 2021), where it is labeled 'WDJ221800.59+560214.92'. The lens, with \(G\approx 18.03mag\), is \(\sim 2.44mag\) brighter than the source star, which has a six-parameter solution.
### Multiple astrometric microlensing events caused by a single lens star
(1) 71 astrometric microlensing events caused by 61 CYG A (Gaia source_id 18720460934556480)
In Gaia DR3, the lens star 61 CYG A will cause 71 astrometric microlensing events, as shown in Figure 16 and Figure 17, and the detailed data are listed in the Appendix (event names: HPMS4708 - HPMS4783). The lens star is a main-sequence star of spectral type K, taken from the Gaia DR3 astrophysical parameters. It belongs to a binary system, and it has an annual parallax of 286 mas and a proper motion of 5282 mas/yr. We compare our results with those of Klüter et al. (2022). Of the 26 events found by Klüter et al. (2022), one event is not included in our results, because the background star cannot be matched to Gaia DR2 or external catalogues (see Section 4.4). The mass of the lens star is estimated from the table "Astrophysical parameters" to be \((0.680\pm 0.04)M_{\odot}\), which is slightly larger than that of Klüter et al. (2022), \((0.621\pm 0.06)M_{\odot}\). There are 3 events with \(\delta\theta_{+}>1mas\). It is noted from Figure 16 that the distance between the background star and the proper motion track of the lens star exceeds \(10^{\prime\prime}\) in about 45% of the events, and finding these events increases the completeness of the predicted events. It should be emphasized that the events with the distance between the background star and the proper motion track of the lens star below \(7^{\prime\prime}\) were all predicted by Klüter et al. (2022), except 3 events occurring before 2010 or after 2068.
(2) 52 astrometric microlensing events caused by LAWD 37 (Gaia source_id 5332606522595645952)
In Gaia DR3, the lens star LAWD 37 will cause 52 astrometric microlensing events, as shown in Figure 18 and Figure 19. The detailed data are shown in the Appendix (event names: HPMS2109 - HPMS2132). The lens star is a white dwarf of spectral type B, and it has an annual parallax of 216 mas and a proper motion of 2684 mas/yr, taken from Gaia DR3. Klüter et al. (2022) had searched for 36 events, of which 7 events are not included in our results because the "astrometric fidelity" parameter value of the background star is less than 0.7, which does not meet the conditions in Table 2. We originally estimated the mass of the lens star as \((0.77\pm 0.01)M_{\odot}\) from the white dwarf catalogue (Gentile Fusillo et al. 2021), where it is labeled 'WDJ114542.92-645029.46'. The mass of the lens star measured by McGill et al. (2023) is \((0.56\pm 0.08)M_{\odot}\), so we update the mass estimation and delete 23 events with \(\delta\theta_{+}<0.1mas\) due to the mass change, while Klüter et al. (2022) estimated the mass of the lens star as \((0.65\pm 0.15)M_{\odot}\). For 10 events, the distance between the background star and the proper motion track of the lens star exceeds \(8^{\prime\prime}\). There are 4 events with \(\delta\theta_{+}>1mas\), of which 2 events (HPMS2079 and HPMS2080) have \(\delta\theta_{+}>8mas\), and the minimum separation
Figure 15: The distances of background stars from the proper motion paths of lens stars for all astrometric microlensing events. There are 116 events with the distances more than 8 arcseconds.
of these star pairs is greater than 105 mas, as shown in Figures 19 and 20. The events with the distance between the background star and the proper motion track of the lens star below 7\({}^{\prime\prime}\) were all predicted by Klüter et al. (2022), except 4 events occurring before 2010 or after 2068.
As seen from the above two examples, the events found by us and by Klüter et al. (2022) are different, because different constraints for the star pairs and different mass estimations for the lens stars are used; in particular, we use the cone search method to find more background stars. It is also seen from Figure 15 that there is a small number of events with a large distance of the background star from the proper motion track of the lens star. Therefore, we suggest that our cone search method is helpful to find more events. It is noted that two lens stars will cause multiple events due to their high proper motion and parallax; the detection of many events caused by one lens will help us to improve the lens mass accuracy.
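The gain from the cone search can be illustrated with a small sketch: instead of pairing a lens only with stars near its catalogue position, one scans the whole proper-motion path over J2010.0-J2070.0 and keeps background stars whose minimum distance to that path is small. This is a simplified tangent-plane version that ignores parallax; the positions and proper motions below are made-up illustrative values, and pmra is assumed to already include the cos(dec) factor. It is not the selection code actually used in the paper.

```python
import numpy as np

def separations_to_pm_path_arcsec(lens_ra, lens_dec, pmra_masyr, pmdec_masyr,
                                  star_ra, star_dec, epoch=2016.0,
                                  t_start=2010.0, t_end=2070.0, steps=6000):
    # Reference-epoch separation and minimum angular distance (arcsec) between a
    # background star and the lens proper-motion path, in a local tangent plane
    # centred on the lens position. Parallax is ignored.
    t = np.linspace(t_start, t_end, steps)
    cosd = np.cos(np.radians(lens_dec))
    lens_x = pmra_masyr * (t - epoch) / 1000.0   # arcsec along RA*cos(dec)
    lens_y = pmdec_masyr * (t - epoch) / 1000.0  # arcsec along Dec
    star_x = (star_ra - lens_ra) * 3600.0 * cosd
    star_y = (star_dec - lens_dec) * 3600.0
    sep_epoch = float(np.hypot(star_x, star_y))
    sep_min = float(np.min(np.hypot(star_x - lens_x, star_y - lens_y)))
    return sep_epoch, sep_min

# Made-up lens with a ~5 arcsec/yr proper motion and a star ~36 arcsec away at J2016.0:
print(separations_to_pm_path_arcsec(150.0, -60.0, 4000.0, -3000.0, 150.016, -60.006))
# -> (~36, ~0): the star is far from the lens at the reference epoch but is crossed by
#    the proper-motion path, which a cone around the catalogue position alone would miss.
```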
### The events with large astrometric signals (\(\delta\theta_{+}\))
From the discussion in Section 2.1, it can be seen that the signal \(\delta\theta_{+}\) can be detected only when \(\theta_{sep}\) is large, for example \(\theta_{sep}>103mas\) for Gaia with FWHM=103mas (Fabricius et al., 2016). Therefore, the events with large \(\delta\theta_{+}\) discussed in this section should meet the condition \(\theta_{sep}>103mas\). We find that 348 events have \(\delta\theta_{+}>0.5mas\), including 97 events with \(\delta\theta_{+}>1mas\) and the 2 events with \(\delta\theta_{+}>8mas\) discussed in Section 5.2. The masses of the lens stars in these events are concentrated in \(0.4M_{\odot}<M_{L}<1M_{\odot}\). However, we have no new predicted events with \(\delta\theta_{+}>1mas\) and \(\theta_{sep}>103mas\).
## 6 Summary and conclusions
In this paper, we select about 820000 potential lens stars from the Gaia DR3 data for three types of stars, HPMS, NS and HMS, numbering about 470000, 160000 and 190000, respectively. Based on the cone search method, about 2260000 star pairs are initially selected, including 1240000, 260000 and 760000 star pairs with HPMS, NS and HMS lens stars, respectively. The relative motion of each star pair is approximated as linear motion (not considering parallax). The mass of the lens star is estimated according to the "Astrophysical parameters" table, the white dwarf catalogue (Gentile Fusillo et al. 2021) or the mass-luminosity relations, and then the minimum angular distance of the star pair in the period J2010.0-J2070.0 is estimated. These star pairs are further selected to obtain 27057 star pairs, of which 19401, 3742 and 3914 have HPMS, NS and HMS lenses, respectively. For these star pairs, we search for their minimum angular distance, retain the star pairs with \(\delta\theta_{+}>0.1mas\), and remove unqualified star pairs according to Table 2 and equations (21) and (22). We finally get 4500 events caused by 3558 lens stars, where 4279 events are caused by HPMS (their background stars with two-, five- and six-parameter solutions number 907, 1453 and 1919, respectively), 220 events are caused by NS (their background stars with two-, five- and six-parameter solutions number 71, 50 and 99, respectively), and only one event is caused by an HMS (its background star has a two-parameter solution). There are 293 lens stars that cause two or more events, of which 5 lens stars will cause more than 50 events; in particular, two lens stars (61 CYG A and LAWD 37) are found to cause 71 and 52 events, respectively. It is noted that two events, HPMS2079 and HPMS2080, have \(\delta\theta_{+}>8mas\) and \(\theta_{sep}>103mas\).
\begin{table}
\begin{tabular}{l c c c c c c c c c c c c c} \hline \hline sep & e\_sep & E & e\_E & u0 & e\_u0 & d+ & e\_d+ & d+\_lower & dC & e\_dC & dC\_lower & dC\_lum & e\_dC\_lum \\ mas & mas & mas & mas & - & - & mas & mas & mas & mas & mas & mas & mas & mas \\ \hline
18.53 & 2.78 & 5.38 & 0.27 & 3.44 & 0.55 & 1.45 & 0.24 & - & 1.34 & 0.18 & - & 0.11 & 0.02 \\
38.32 & 91.18 & 4.54 & 3.19 & 8.45 & 52.11 & 0.53 & - & 0.05 & 0.52 & - & 0.04 & 0.03 & - \\
0.50 & 153.89 & 11.18 & 0.60 & 0.04 & 14.36 & 10.94 & - & 0.79 & 3.95 & - & 0.78 & 0.00 & - \\
22.52 & 166.31 & 30.54 & 0.89 & 0.73 & 5.52 & 21.38 & - & 4.82 & 10.80 & - & 4.70 & 0.00 & - \\
5.73 & 19.37 & 9.14 & 0.26 & 0.63 & 2.13 & 6.72 & 2.17 & - & 3.23 & 0.78 & - & 0.77 & 0.22 \\ \hline \end{tabular}
\end{table}
Table 3: _continued_ 5 rows of the predicted astrometric-microlensing events in this paper, and the rest are available online.
Figure 16: The information on the 71 astrometric microlensing events caused by 61 CYG A. Upper-left panel: the distance of the background stars at the reference epoch from the proper motion path of the lens star. The black line is the proper motion path of the lens star during J2010.0-J2070.0, the red asterisks are the background stars (positions at the reference epoch) present in the result of Klüter et al. (2022), and the blue dots are the background stars (positions at the reference epoch) that are not present in the result of Klüter et al. (2022). Upper-right panel: the histogram of the distance between the background star and the proper motion track of the lens star. Lower-left panel: the histogram of \(\delta\theta_{\star}\) for the 71 events, with 3 events having \(\delta\theta_{\star}>1mas\); the maximum value of \(\delta\theta_{\star}\) is 1.51 mas. Lower-right panel: the histogram of \(t_{0}\).
\begin{table}
\begin{tabular}{l c c c} \hline \hline dC\_lum\_lower & t0 & e\_t0 & new events \\ mas & yr & yr & \\ \hline - & 2032.2067 & 0.0277 & * \\
0.00 & 2046.0000 & 3.3156 & * \\
0.00 & 2068.4830 & 1.6180 & * \\
0.00 & 2073.2950 & 0.2298 & * \\ - & 2069.4857 & 0.1241 & * \\ \hline \end{tabular}
\end{table}
Table 3: _continued_ 5 rows of the predicted astrometric-microlensing events in this paper, and the rest are available online.
Because we use different conditions for the lens stars, the background stars, and binary or co-moving stars, as well as a different mass estimation for the lens stars and different search scopes for the star pairs, the events we found are not the same as those found by Klüter et al. (2022). We obtain 2836 events that are in common with Klüter et al. (2022), and 1664 new predicted events.
Among the events we found, 116 events have distances of the background star from the proper motion path of the lens star of more than 8'' at the reference epoch, with a maximum distance of 16.6''. Although \(\delta\theta_{+}\) does not exceed 0.2 mas for these 116 events, they increase the completeness of the predicted events. Therefore, the cone search method
\begin{table}
\begin{tabular}{l c c} \hline \hline & \multicolumn{2}{c}{BGS} \\ \cline{2-3} Exclusion reasons & two-parameter solutions & five- or six-parameter solutions \\ \hline Lens fidelity\_v2 \(<0.8\) & - & 2 \\ BGS fidelity\_v2 \(<0.7\) & - & 1560 \\ Not satisfying equation (21) & - & 16 \\ No match with DR2 or external catalogues & 166 & - \\ \(\delta\theta_{+}<0.1mas\) & 173 & 89 \\ \hline \end{tabular}
\end{table}
Table 6: Reasons why the results in Klüter et al. (2022) are not included in this paper.
\begin{table}
\begin{tabular}{c c c} \hline \hline Column Number & Column Name & Description \\ \hline
1 & Event\_name & The name of the predicted astrometric microlensing event. \\
2 & ref\_epoch & Reference epoch \\
3 & lens\_source\_id & Gaia DR3 source\_id of the lens \\
4 & lens\_ra & Right ascension of the lens \\
5 & lens\_dec & Declination of the lens \\
6 & lens\_parallax & Parallax of the lens \\
7 & lens\_pm & Total proper motion of the lens \\
8 & lens\_pmra & Proper motion in right ascension direction of the lens \\
9 & lens\_pmdec & Proper motion in declination direction of the lens \\
10 & lens\_g\_mean & G-band mean magnitude of the lens \\
11 & lens\_M & The estimated mass of the lens \\
12 & WDI\_name & The name of a white dwarf from ‘the white dwarf catalog’ (Gentile Fusillo et al., 2021) \\
13 & Pwd & the probability of the source becoming a white dwarf from Gentile Fusillo et al. (2021) \\
14 & type & Type of the lensing star: WD = White Dwarf, MS = Main Sequence, RG = Red Giant, BD = Brown Dwarf \\
15 & lens\_fidelity\_v2 & Astrometric fidelity of the lens from Rybizki et al. (2022) \\ \hline
16 & source\_source\_id & Gaia DR3 source\_id of the background source stars \\
17 & source\_ra & Right ascension of the background source stars \\
18 & source\_dec & Declination of the background source stars \\
19 & source\_parallax & Parallax of the background source stars \\
20 & source\_pm & Total proper motion of the background source stars \\
21 & source\_pmra & Proper motion in right ascension direction of the background source stars \\
22 & source\_pmdec & Proper motion in declination direction of the background source stars \\
23 & source\_g\_mean & G-band mean magnitude of the background source stars \\
24 & source\_ruwe(new) & Renormalised unit weight error for the background source star with a two-parameter solution, which can be computed with formula (16) \\
25 & source\_fidelity\_v2 & Astrometric fidelity of the background source stars from Rybizki et al. (2022) \\ \hline
26 & sep & Estimated distance at closest approach \\
27 & e\_sep & Error in sep \\
28 & E & Einstein radius of the event \\
29 & e\_E & Error in Einstein radius of the event \\
30 & u0 & Estimated distance at closest approach in Einstein radii \\
31 & e\_u0 & Error in u0 \\
32 & d+ & Maximal astrometric shift of brighter image \\
33 & e\_d+ & Error in d+ \\
34 & d+\_lower & Lower confidence level (16\%) of d+ \\
35 & dC & Maximal astrometric shift of center of light \\
36 & e\_dC & Error in dC \\
37 & dC\_lower & Lower confidence level (16\%) of dC \\
38 & dC\_lum & Maximal astrometric shift including lens-luminosity effects \\
39 & e\_dC\_lum & Error in dC\_lum \\
40 & dC\_lum\_lower & Lower confidence level (16\%) of dC\_lum \\
41 & t0 & Estimated time of the closest approach \\
42 & e\_t0 & Error in t0 \\
43 & new events & Marked as * represents a new event different from Klüter et al. (2022) \\ \hline \end{tabular}
\end{table}
Table 4: Column meanings in Table 3.
\begin{table}
\begin{tabular}{c c c c c} \hline \hline & & HPMS & NS & HMS \\ \hline & two-parameter solutions & 907 & 71 & 1 \\ BGS & five-parameter solutions & 1453 & 50 & - \\ & six-parameter solutions & 1919 & 99 & - \\ \hline \end{tabular}
\end{table}
Table 5: Statistical results of events caused by three types of lens stars and their corresponding background stars.
we used can help us to find more events. In addition, we have not found any event with the lens star mass in \(3.2M_{\odot}<M_{L}<5M_{\odot}\); we will consider reducing the mass threshold for HMS in future work and expect to find more events.
In the future, Gaia DR4 will provide the epoch astrometry\({}^{2}\), which can be used to measure the shift of a handful of events. The predicted events are expected to be followed up using space facilities such as HST, JWST or the Chinese Space Station Telescope (CSST), which will be launched in the near future (Cao et al., 2018; Gong et al., 2019).
Footnote 2: [https://www.cosmos.esa.int/web/gaia/newsletter/contents](https://www.cosmos.esa.int/web/gaia/newsletter/contents)
## Acknowledgements
We thank the referee for useful comments and suggestions to improve the paper. This research work is financially supported by the National Natural Science Foundation of China (Grant Nos. 11403101, 12173085), the Training Object Project of Technological Innovation Talents in Yunnan Province (No. 202305AD160004) and the China Manned Space Project. We use data from the European Space Agency (ESA) mission Gaia ([https://www.cosmos.esa.int/gaia](https://www.cosmos.esa.int/gaia)), processed by the Gaia Data Processing and Analysis Consortium (DPAC, [https://www.cosmos.esa.int/web/gaia/dpac/consortium](https://www.cosmos.esa.int/web/gaia/dpac/consortium)). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the Gaia Multilateral Agreement.
This research has made use of the data of ESASky, developed by the ESAC Science Data Centre (ESDC) team and maintained alongside other ESA science missions' archives at ESA's European Space Astronomy Centre (ESAC, Madrid, Spain) (Baines et al., 2017; Giordano et al., 2018), the images of the DESI Legacy Imaging Surveys, consisting of three individual and complementary projects: the Dark Energy Camera Legacy Survey (DECaLS), the Beijing-Arizona Sky Survey (BASS), and the Mayall z-band Legacy Survey (MzLS) (Collaboration et al., 2023), and the images of SkyMapper (Wolf et al., 2016, 2018; Onken et al., 2019).
## Data Availability
The data underlying this article are available in the article and in its spreadsheet available online.
|
2309.03814 | Crossing numbers of cable knots | We use the degree of the colored Jones knot polynomials to show that the
crossing number of a $(p,q)$-cable of an adequate knot with crossing number $c$
is larger than $q^2\, c$. As an application we determine the crossing number of
$2$-cables of adequate knots. We also determine the crossing number of the
connected sum of any adequate knot with a $2$-cable of an adequate knot. | Efstratia Kalfagianni, Rob Mcconkey | 2023-09-07T16:11:11Z | http://arxiv.org/abs/2309.03814v3 | # Crossing numbers of cable knots
###### Abstract.
We use the degree of the colored Jones knot polynomials to show that the crossing number of a \((p,q)\)-cable of an adequate knot with crossing number \(c\) is larger than \(q^{2}\,c\). As an application we determine the crossing number of \(2\)-cables of adequate knots.
We also determine the crossing number of the connected sum of any adequate knot with a \(2\)-cable of an adequate knot.
The authors acknowledge partial research support through NSF Grants DMS-2004155 and DMS-2304033 and from the NSF/RTG grant DMS-2135960.
Corollary 1.2 allows us to compute the crossing number of \((\pm 1,2)\)-cables of adequate knots that are equivalent to their mirror images (a.k.a. amphicheiral), since such knots are known to have \(\operatorname{wr}(K)=0\). In particular, since for any adequate knot \(K\) with mirror image \(K^{*}\) the connected sum \(K\#K^{*}\) is adequate and amphicheiral, we have the following:
**Corollary 1.3**.: _For any adequate knot \(K\) with crossing number \(c(K)\) and mirror image \(K^{*}\) let \(K^{2}:=K\#K^{*}\). Then \(c(K_{\pm 1,2}^{2})=8\,c(K)+1\)._
Our results also have an application to the open conjecture on the additivity of crossing numbers under connected sums [6, Problem 1.68]. The conjecture has been proved in the cases where each summand is adequate [5, 10, 13], where both summands are torus knots [2], and where one summand is adequate and the other is an untwisted Whitehead double of an adequate knot with zero writhe number [4]. To these we add the following:
**Theorem 1.4**.: _Suppose that \(K\) is an adequate knot and let \(K_{1}=K_{p,2}\), where \(p=2\operatorname{wr}(K)\pm 1\). Then for any adequate knot \(K_{2}\), the connected sum \(K_{1}\#K_{2}\) is non-adequate and we have_
\[c(K_{1}\#K_{2})=c(K_{1})+c(K_{2}).\]
It may be worth noting that out of the 2977 prime knots with up to 12 crossings, 1851 are listed as adequate on Knotinfo [9] and thus our results above can be applied to them.
## 2. Crossing numbers of cables of adequate knots
### Preliminaries
A _Kauffman state_\(\sigma\) on a knot diagram \(D\) is a choice of either the \(A\)-resolution or the \(B\)-resolution for each crossing of \(D\), as shown in Figure 1. The result of applying \(\sigma\) to \(D\) is a collection \(\sigma(D)\) of disjoint simple closed curves called _state circles_. The _all-\(A\) state_, denoted by \(\sigma_{A}\), is the state where the \(A\)-resolution is chosen at every crossing of \(D\). Similarly, the _all-\(B\) state_, denoted by \(\sigma_{B}\), is the state where the \(B\)-resolution is chosen at every crossing of \(D\).
For a knot diagram \(D\) we use the following notation:
* \(c(D)\) is the number of crossings \(D\), and with an orientation on \(D\), \(c_{+}(D)\) and \(c_{-}(D)\) are respectively the number of positive crossings and negative crossings of \(D\), following the convention of Figure 2. The _writhe_ of \(D\), is given by \(\operatorname{wr}(D):=c_{+}(D)-c_{-}(D)\).
* The graphs \(\mathbb{G}_{A}(D)\) and \(\mathbb{G}_{B}(D)\) have as vertices the state circles of the all-\(A\) and all-\(B\) state, respectively, and as edges the segments recording the original location of the crossings, as indicated in Figure 1. We will denote by \(v_{A}(D)\) and \(v_{B}(D)\) the number of vertices of \(\mathbb{G}_{A}(D)\) and \(\mathbb{G}_{B}(D)\), respectively.
Figure 1. The \(A\)- and \(B\)-resolution at a crossing and the corresponding edges of \(\mathbb{G}_{A}(D)\) and \(\mathbb{G}_{B}(D)\).
**Definition 2.1**.: A knot diagram \(D=D(K)\) is called \(A\)_-adequate_ (resp. \(B\)_-adequate_ ) if \(\mathbb{G}_{A}(D)\) (resp. \(\mathbb{G}_{B}(D)\)) has no one-edged loops. A knot is _adequate_ if it admits a diagram \(D=D(K)\) that is both \(A\)- and \(B\)-adequate [8, 7].
We recall that if \(D=D(K)\) is an adequate diagram then the quantities \(c(D)\) and \(c_{\pm}(D)\)[7, 5, 10, 13] are minimal over all diagrams representing \(K\), and \(\operatorname{wr}(D)\) is also constant for \(K\). Thus they are invariants of \(K\), and we will denote them by \(c(K)\), \(c_{\pm}(K)\) and \(\operatorname{wr}(K)\), respectively.
Given a knot \(K\) let \(J_{K}(n)\) denote its \(n\)-th unreduced colored Jones polynomial, which is a Laurent polynomial in a variable \(t\). The value on the unknot \(u\) is given by \(J_{U}(n)(t)=(-1)^{n-1}\frac{t^{-n/2}-t^{n/2}}{t^{-1/2}-t^{1/2}}\) for \(n\geq 2\). Let \(d_{+}[J_{K}(n)]\) and \(d_{-}[J_{K}(n)]\) denote the maximal and minimal degree of \(J_{K}(n)\) in \(t\) and set
\[d[J_{K}(n)]:=4d_{+}[J_{K}(n)]-4d_{-}[J_{K}(n)].\]
For the purposes of this paper we will assume that the set of cluster points \(\left\{|n^{-2}d[J_{K}(n)]|\right\}_{n\in\mathbb{N}}^{\prime}\) consists of a single point, denoted by \(dj_{K}\). This number is called the _Jones diameter_ of \(K\).
**Theorem 2.2**.: _[_4_]__Let \(K\) be a knot with Jones diameter \(dj_{K}\) and crossing number \(c(K)\). Then,_
\[dj_{K}\leq 2\,c(K),\]
_with equality \(dj_{K}=2\,c(K)\) if and only if \(K\) is adequate._
_In particular, if \(K\) is a non-adequate knot admitting a diagram \(D=D(K)\) such that \(dj_{K}=2(c(D)-1)\), then we have \(c(D)=c(K)\)._
Next we recall a couple of results from [3, 1] that give the extreme degrees of the colored Jones polynomials for knots where the degrees \(d_{\pm}[J_{K}(n)]\) are quadratic polynomials.
**Proposition 2.3**.: _[_3, 1_]_ _Suppose that \(K\) is a knot such that \(d_{+}[J_{K}(n)]=a_{2}n^{2}+a_{1}n+a_{0}\) and \(d_{-}[J_{K}(n)]=a_{2}^{*}n^{2}+a_{1}^{*}n+a_{0}^{*}\) are quadratic polynomials for all \(n>0\). Suppose, moreover, that \(a_{1}\leq 0\), \(a_{1}^{*}\geq 0\) and that \(\frac{p}{q}<4a_{2}\) and \(\frac{-p}{q}<4a_{2}^{*}\)._
_Then for \(n\) large enough,_
\[4d_{+}[J_{K_{p,q}}(n)]=q^{2}4a_{2}n^{2}+(q4a_{1}+2(q-1)(p-4qa_{2}))n+A,\]
\[4d_{-}[J_{K_{p,q}}(n)]=-q^{2}4a_{2}^{*}n^{2}-(q4a_{1}^{*}+2(q-1)(p-4qa_{2}^{* }))n+A^{*},\]
_where \(A,A^{*}\in\mathbb{Q}\) depend only on \(K\) and \(p,q\)._
Proof.: The first equation is shown in [3] (see also [1]). To obtain the second equation we use the fact that, since \(K_{-p,q}^{*}=(K_{p,q})^{*}\), we have \(d_{-}[J_{K_{p,q}}(n)]=-d_{+}[J_{K_{-p,q}^{*}}(n)]\) and apply the first equation to \(K_{-p,q}^{*}\).
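A small symbolic sketch (Python/SymPy) of the first displayed formula may help when evaluating cables: it expands the quadratic and linear coefficients of \(4d_{+}[J_{K_{p,q}}(n)]\) in terms of \(4a_{2}\) and \(4a_{1}\); the knot-dependent constant \(A\) is omitted, and the variable names below are ours.

```python
import sympy as sp

n, p, q, A1, A2 = sp.symbols('n p q A1 A2')  # A1 stands for 4*a_1, A2 for 4*a_2

# 4 d_+[J_{K_{p,q}}(n)] modulo the constant term, as displayed in Proposition 2.3
deg_plus = q**2 * A2 * n**2 + (q*A1 + 2*(q - 1)*(p - q*A2)) * n

print(sp.expand(deg_plus))
# For a 2-cable (q = 2) the linear coefficient reduces to 2*A1 + 2*p - 4*A2:
print(sp.expand(deg_plus.subs(q, 2)))
```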
Now we recall the second result promised earlier.
Figure 2. A positive crossing and a negative crossing.
**Lemma 2.4**.: _[_3_, 1_]__Let the notation and setting be as in Proposition 2.3._
_If \(\frac{p}{q}>4a_{2}\), then_
\[4d_{+}[J_{K_{p,q}}(n)]=pqn^{2}+B,\]
_where \(B\in\mathbb{Q}\) depends only on \(K\) and \(p,q\)._
_Similarly, if \(\frac{-p}{q}>4a_{2}^{*}\), then_
\[4d_{-}[J_{K_{p,q}}(n)]=-pqn^{2}+B^{*},\]
_where \(B^{*}\in\mathbb{Q}\) depends only on \(K\) and \(p,q\)._
### Lower bounds and admissible knots
We will say that a knot \(K\) is _admissible_ if there is a diagram \(D=D(K)\) such that we have \(dj_{K}=2\,(c(D)-1)\). Our interest in admissible knots comes from the fact that if \(K\) is admissible and non-adequate, then by Theorem 2.2, \(D\) is a minimal diagram (i.e. \(c(D)=c(K)\)).
**Theorem 2.5**.: _Let \(K\) be an adequate knot and let \(c(K)\), \(c_{\pm}(K)\) and \(\operatorname{wr}(K)\) be as above._
1. _For any coprime integers_ \(p,q\)_, we have_ (1)__ \[c(K_{p,q})\geq q^{2}\cdot c(K).\]
2. _The cable_ \(K_{p,q}\) _is admissible if and only if_ \(q=2\) _and_ \(p=q\operatorname{wr}(K)\pm 1\)_._
Proof.: Since \(K\) is adequate we have
\[4d_{+}[J_{K}(n)]-4d_{-}[J_{K}(n)]=2c(K)n^{2}+O(n). \tag{2}\]
for every \(n\geq 0\)[7]. We distinguish three cases.
**Case 1.** Suppose that \(\frac{p}{q}<2c_{+}(K)\) and \(\frac{-p}{q}<2c_{-}(K)\). Then, \(d_{+}[J_{K}(n)]\) satisfies the hypothesis of Proposition 2.3 with \(4a_{2}=2c_{+}(K)>0\) and \(d_{-}[J_{K}(n)]=-d_{+}[J_{K^{*}}(n)]\), where \(d_{+}[J_{K^{*}}(n)]\) satisfies that hypothesis of Proposition 2.3 with \(4a_{2}^{*}=2c_{+}(K^{*})=2c_{-}(K)\). The requirement that \(a_{1}\leq 0\) is satisfied since for adequate knots the linear terms of the degree of \(J_{K}^{*}(n)\) are multiples of Euler characteristics of spanning surfaces of \(K\). See [3, Lemmas 3.6, 3.7]. Now Proposition 2.3 implies that for sufficiently large \(n\) we have that \(d_{\pm}[J_{K_{p,q}}(n)]\) is a quadratic polynomial and the Jones diameter of \(K_{p,q}\) is \(dj_{K}=q^{2}c(K)\). Hence by Theorem 2.2 we get \(c(K_{p,q})\geq q^{2}\cdot c(K)\) which proves part (a) of Theorem 1.1 in this case.
For part (b), we recall that a diagram \(D_{p,q}\) of \(K_{p,q}\) is obtained as follows: Start with an adequate diagram \(D=D(K)\) and take \(q\) parallel copies to obtain a diagram \(D^{q}\). In other words, take the \(q\)-cabling of \(D\) following the blackboard framing. To obtain \(D_{p,q}\) add \(t\)_-twists_ to \(D^{q}\), where \(t:=p-q\operatorname{wr}(K)\) as follows: If \(t<0\) then a twist takes the leftmost string in \(D^{q}\) and slides it over the \(q-1\) strings to the right; then we repeat the operation \(|t|\)-times. If \(t>0\) a twist takes the rightmost string in \(D^{q}\) and slides it over the \(q-1\) strings to the left; then we repeat the operation \(|t|\)-times. Now
\[c(D_{p,q})=q^{2}\,c(K)+|t|(q-1)=q^{2}\,c(K)+|p-q\operatorname{wr}(K)|(q-1),\]
while \(dj_{K}=2q^{2}\,c(K)\). Now setting \(2c(D_{p,q})-2=dj_{K}\), we get \(|p-q\operatorname{wr}(K)|(q-1)=1\) which gives that \(q=2\) and \(p=q\operatorname{wr}(K)\pm 1\). Similarly, if we set \(p=q\operatorname{wr}(K)\pm 1\) we find that \(2c(D_{p,q})-2=dj_{K}\) must also be true. Hence in this case both (a) and (b) hold.
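The admissibility condition derived above can also be checked numerically. The sketch below uses the crossing count \(c(D_{p,q})=q^{2}c(K)+|p-q\operatorname{wr}(K)|(q-1)\) and the Jones diameter \(2q^{2}c(K)\) from the proof; the figure-eight knot (an adequate knot with \(c=4\) and \(\operatorname{wr}=0\)) is used as an assumed example.

```python
def cable_diagram_crossings(c, wr, p, q):
    # c(D_{p,q}) = q^2 c(K) + |p - q wr(K)| (q - 1), as in the proof of Theorem 2.5
    return q**2 * c + abs(p - q * wr) * (q - 1)

def is_admissible(c, wr, p, q):
    # dj_{K_{p,q}} = 2 q^2 c(K) equals 2 (c(D_{p,q}) - 1) iff |p - q wr(K)| (q - 1) = 1
    return 2 * q**2 * c == 2 * (cable_diagram_crossings(c, wr, p, q) - 1)

c, wr = 4, 0  # figure-eight knot: adequate, four crossings, zero writhe
for (p, q) in [(1, 2), (-1, 2), (3, 2), (1, 3), (2, 3)]:
    print(p, q, cable_diagram_crossings(c, wr, p, q), is_admissible(c, wr, p, q))

# Only q = 2 with p = q*wr(K) +/- 1 matches the Jones diameter, and then
# c(D_{p,2}) = 4 c(K) + 1 = 17 for this example, in line with Corollary 1.2.
```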
**Case 2.** Suppose that \(\frac{p}{q}>2c_{+}(K)\). Then by Lemma 2.4, \(4d_{+}[J_{K_{p,q}}(n)]=pqn^{2}+B\), where \(B\in\mathbb{Q}\) depends only on \(K\) and \(p,q\). Since \(\frac{p}{q}>2c_{+}(K)\), we get \(pq>2c_{+}(K)q^{2}\). On the other hand, since \(\frac{-p}{q}<0\), we clearly have \(\frac{-p}{q}<2c_{-}(K)\), and Proposition 2.3 applies to \(d_{+}[J_{K^{*}_{-p,q}}(n)]\) for \(4a_{2}^{*}=2c_{-}(K)\). Then
\[4d_{+}[J_{K_{p,q}}(n)]-4d_{-}[J_{K_{p,q}}(n)]=4d_{+}[J_{K_{p,q}}(n)]+4d_{+}[J_{K^{*}_{-p,q}}(n)]>2q^{2}\,c(K)\,n^{2}+O(n),\]
as desired. This finishes the proof for part (a) of the theorem. In this case, we don't get any admissible knots: first note that \(p>2qc_{+}(K)>q\operatorname{wr}(K)\). As in Case 1 we get a diagram \(D_{p,q}\) of \(K_{p,q}\) with
\[c(D_{p,q})=q^{2}\,c(K)+(p-q\operatorname{wr}(K))(q-1),\]
while \(d\!j_{K}=2q^{2}\,c_{-}(K)+p\,q\). Now setting \(2c(D_{p,q})-2=d\!j_{K}\), and after some straightforward algebra, we find that in order for \(K_{p,q}\) to be admissible we must have
\[2(q^{2}-q)\,c_{-}(K)+2q\,c_{+}(K)+p\,(q-2)-2=0.\]
However, since \(p,c(K)>0\) and \(q\geq 2\), above equation is never satisfied.
**Case 3.** The case where \(\frac{-p}{q}>2c_{-}(K)>0\) is similar to Case 2.
In [11] inequality (1) is also verified, for some choices of \(p\) and \(q\), using crossing number bounds obtained from the ordinary Jones polynomial in [12] and also from the 2-variable Kauffman polynomial. Theorem 1.1 shows that the colored Jones polynomial and the results of [4] provide better bounds for crossing numbers of satellite knots, allowing in particular exact computations.
## 3. Non-adequacy results
To prove the stronger version of inequality (1), stated in Theorem 1.1, we need to know that the cables \(K_{p,q}\) are not adequate. This is the main result in this section.
**Theorem 3.1**.: _Let \(K\) be an adequate knot with crossing number \(c(K)>0\) and suppose that \(\frac{p}{q}<2c_{+}(K)\) and \(\frac{-p}{q}<2c_{-}(K)\). Then, the cable \(K_{p,q}\) is non-adequate._
To prove Theorem 3.1 we need the following lemma:
**Lemma 3.2**.: _Let \(K\) be an adequate knot with crossing number \(c(K)>0\) and suppose that \(\frac{p}{q}<2c_{+}(K)\) and \(\frac{-p}{q}<2c_{-}(K)\). If \(K_{p,q}\) is adequate, then \(c(K_{p,q})=q^{2}\,c(K)\)._
Proof.: By our earlier discussion, for \(n\) large enough,
\[4d_{+}[J_{(K_{p,q}}(n)]-4d_{-}[J_{K_{p,q}}(n)]=d_{2}n^{2}+d_{1}n+d_{0},\]
with \(d_{i}\in\mathbb{Q}\). By Proposition 2.3, we compute \(d_{2}=q^{2}(4a_{2}+4a_{2}^{*})=2q^{2}c(K)\). Now if \(K_{p,q}\) is adequate, since \(d_{2}=2c(K_{p,q})\), we must have \(c(K_{p,q})=q^{2}c(K)\).
We now give the proof of Theorem 3.1:
Proof.: First, we let \(K\), \(p\), and \(q\) be such that \(t:=p-q\operatorname{wr}(K)<0\).
Recall that if \(K\) has an adequate diagram \(D=D(K)\) with \(c(D)=c_{+}(D)+c_{-}(D)\) crossings and the all-\(A\) (rep. all-\(B\)) resolution has \(v_{A}=v_{A}(D)\) (resp. \(v_{B}=v_{B}(D)\)) state circles, then
\[4\,d_{-}[J_{K}(n)]=-2c_{-}(D)n^{2}+2(c(D)-v_{A}(D))n+2v_{A}(D)-2c_{+}(D), \tag{3}\]
Figure 3. Three positive (left) and three negative (right) twists on four strands.
\[4\,d_{+}[J_{K}(n)]=2c_{+}(D)n^{2}+2(v_{B}(D)-c(D))n+2c_{-}(D)-2v_{B}(D). \tag{4}\]
Equation (3) holds for \(A\)-adequate diagrams \(D=D(K)\). Thus in particular the quantities \(c_{-}(D),v_{A}(D)\) are invariants of \(K\) (independent of the particular \(A\)-adequate diagram). Similarly, Equation (4) holds for \(B\)-adequate diagrams \(D=D(K)\) and hence \(c_{+}(D),v_{B}(D)\) are invariants of \(K\). Recall also that \(c(D)=c(K)\) since \(D\) is adequate.
Now we start with a knot \(K\) that has an adequate diagram \(D\); then \(\operatorname{wr}(D)=\operatorname{wr}(K)\). Hence we have \(c_{+}(D)=c_{-}(D)+\operatorname{wr}(K)\). Since \(D\) is \(B\)-adequate and \(t<0\), the cable \(D_{p,q}\) is a \(B\)-adequate diagram of \(K_{p,q}\) with \(v_{B}(D_{p,q})=qv_{B}(D)\) and \(c_{+}(D_{p,q})=q^{2}c_{+}(D)\). See Figure 4. Furthermore, since, as said above, these quantities are invariants of \(K_{p,q}\), they remain the same for all \(B\)-adequate diagrams of \(K_{p,q}\).
Now assume, for a contradiction, that \(K_{p,q}\) is adequate: Then, it has a diagram \(\bar{D}\) that is both \(A\) and \(B\)-adequate. By above observation we must have \(v_{B}(\bar{D})=v_{B}(D_{p,q})=qv_{B}(D)\) and \(c_{+}(\bar{D})=c_{+}(D_{p,q})=q^{2}c_{+}(D)\).
By Lemma 3.2, \(c(\bar{D})=c(K_{p,q})=q^{2}c(K)\). Write
\[4\,d_{+}[J_{K_{p,q}}(n)]=xn^{2}+yn+z,\]
for some \(x,y,z\in\mathbb{Q}\).
For sufficiently large \(n\) we have two different expressions for \(x,y,z\). On one hand, because \(\bar{D}\) is adequate, we can use Equation (4) to determine \(x,y,z\). On the other hand, using \(4\,d_{+}[J_{K^{*}_{-p,q}}(n)]\), \(x,y,z\) can be determined using Proposition 2.3 with \(a_{2}\) and \(a_{1}\) coming from Equation (4).
We will use these two ways to find the quantity \(y\). Applying Equation (4) to \(\bar{D}\) we obtain
\[y=2(v_{B}(\bar{D})-c(\bar{D}))=2qv_{B}(D)-2q^{2}c(D) \tag{5}\]
On the other hand, using Proposition 2.3 with \(a_{2}\) and \(a_{1}\) coming from Equation (4) we have: \(4a_{2}=2c_{+}(D)=c(D)+\operatorname{wr}(K)\). Also, we have \(4a_{1}=2v_{B}(D)-2c(D)\). We obtain
\[y=q(4a_{1})-2q(q-1)(4a_{2})+2(q-1)p=2qv_{B}(D)-2q^{2}c(D)+2(q-1)p-2q(q-1) \operatorname{wr}(K). \tag{6}\]
For the two expressions derived for \(y\) from Equations (5) and (6) to agree we must have \(2(q-1)(p-q\operatorname{wr}(K))=0\), that is \(p=q\operatorname{wr}(K)\). However this is impossible since \(q>1\) and \(p,q\) are coprime. This contradiction shows that \(K_{p,q}\) is non-adequate.
To deduce the result for \(K_{p,q}\) with \(t(K,p,q):=p-q\operatorname{wr}(K)>0\), let \(K^{*}\) denote the mirror image of \(K\). Note that \((K_{p,q})^{*}=K^{*}_{-p,q}\), and since being adequate is a property that is preserved under taking mirror images, it is enough to show that \(K^{*}_{-p,q}\) is non-adequate. Since \(t(K^{*},-p,q):=-p-q\operatorname{wr}(K^{*})=-t(K,p,q)<0\), the latter result follows from the argument above.
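A quick symbolic check (SymPy) of the Case 1 computation: subtracting the two displayed expressions for \(y\) in Equations (5) and (6) leaves \(2(q-1)(p-q\operatorname{wr}(K))\), which cannot vanish for coprime \(p,q\) with \(q>1\). The symbol names below are ours.

```python
import sympy as sp

q, p, wr, vB, c = sp.symbols('q p wr vB c')

y_eq5 = 2*q*vB - 2*q**2*c                                  # Equation (5)
y_eq6 = 2*q*vB - 2*q**2*c + 2*(q - 1)*p - 2*q*(q - 1)*wr   # Equation (6)

print(sp.factor(y_eq6 - y_eq5))   # -> 2*(q - 1)*(p - q*wr)
```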
Figure 4. A diagram of the (-1,2)-cable of the figure eight knot and its all-\(B\) state graph.
### Proof of Theorem 1.1 and Corollary 1.2
By Theorem 2.5, we have \(c(K_{p,q})\geq q^{2}c(K)\). We need to show that this inequality is actually strict. Recall that by the proof of Theorem 2.5, if \(\frac{p}{q}>2c_{+}(K)\) or \(\frac{-p}{q}>2c_{-}(K)\), then the above inequality is strict so we need to only consider when \(\frac{p}{q}<2c_{+}(K)\) and \(\frac{-p}{q}<2c_{-}(K)\). By Theorem 3.1, \(K_{p,q}\) is non-adequate. Hence by Theorem 2.2 again we have \(2c(K_{p,q})\neq dj_{K}\) and the strict inequality follows.
Next we discuss how to deduce Corollary 1.2:
Proof.: If \(q=2\) and \(p=q\operatorname{wr}(K)\pm 1\), then by Theorem 2.5\(K_{p,q}\) is admissible. Thus by Theorem 2.2, the diagram \(D_{p,2}\) constructed in the proof of Theorem 2.5 is minimal. That is \(c(K_{p,2})=c(D_{p,2})=4\,c(K)+1\).
## 4. Composite non-adequate knots
In this section we prove Theorem 1.4.
Given a knot \(K\), such that for \(n\) large enough the degrees of the colored Jones polynomials of \(K\) are quadratic polynomials with rational coefficients, we will write
\[4\,d_{+}[J_{K}(n)]=x(K)n^{2}+y(K)n+z(K)\text{ and }4d_{+}[J_{K}(n)]-4d_{-}[J_{K} (n)]=d_{2}(K)n^{2}+d_{1}(K)n+d_{0}(K).\]
Now let \(K_{1}\), \(K_{2}\) be as in the statement of Theorem 1.4. For the proof we need the following well known lemma:
**Lemma 4.1**.: _[_7_]_ _For large enough \(n\), the degrees \(d_{\pm}[J_{K_{1}\#K_{2}}(n)]\) are polynomials, and we have:_
1. \(x(K_{1}\#K_{2})=x(K_{1})+x(K_{2})\)_._
2. \(y(K_{1}\#K_{2})=y(K_{1})+y(K_{2})-2\)_._
3. \(d_{2}(K_{1}\#K_{2})=d_{2}(K_{1})+d_{2}(K_{2})\)_._
The second ingredient we need for the proof of Theorem 1.4 is the following lemma.
**Lemma 4.2**.: _Let \(K\) be a non-trivial adequate knot, \(p=2\operatorname{wr}(K)\pm 1\) and let \(K_{1}:=K_{p,2}\). Then for any adequate knot \(K_{2},\) the connected sum \(K_{1}\#K_{2}\) is non-adequate._
Proof.: The claim is proven by applying the arguments used for \(K_{1}=K_{p,2}\) in the proofs of Lemma 3.2 and Theorem 3.1 to the knot \(K_{1}\#K_{2}\), using the fact that the degrees of the colored Jones polynomial are additive under connected sum.
First we claim that if \(K_{1}\#K_{2}\) were adequate then we would have
\[c(K_{1}\#K_{2})=4c(K)+c(K_{2}). \tag{7}\]
Note that as \(p=2\operatorname{wr}(K)\pm 1\), we have \(\frac{p}{2}<2c_{+}(K)\) and \(\frac{-p}{2}<2c_{-}(K)\). Hence Proposition 2.3 applies to \(K_{1}\). Now write
\[4d_{+}[J_{K_{1}\#K_{2}}(n)]-4d_{-}[J_{K_{1}\#K_{2}}(n)]=d_{2}(K_{1}\#K_{2})n^{ 2}+d_{1}(K_{1}\#K_{2})n+d_{0}(K_{1}\#K_{2}).\]
Since we assumed that \(K_{1}\#K_{2}\) is adequate, we have \(d_{2}(K_{1}\#K_{2})=2c(K_{1}\#K_{2})\). On the other hand by Lemma 4.1, \(d_{2}(K_{1}\#K_{2})=d_{2}(K_{1})+d_{2}(K_{2})=2\cdot 4c(K)+2c(K_{2})\) which leads to (7).
**Case 1.** Suppose that \(p-2\operatorname{wr}(K)=-1<0\).
Start with \(D=D(K)\) an adequate diagram and let \(D_{1}:=D_{p,2}\) be constructed as in the proof of Theorem 2.5. Also let \(D_{2}\) be an adequate diagram of \(K_{2}\). As in the proof of Theorem 3.1 conclude that \(D_{1}\#D_{2}\) is a \(B\)-adequate diagram for \(K_{1}\#K_{2}\) and that the quantities \(v_{B}(D_{1}\#D_{2})=2v_{B}(D)+v_{B}(D_{2})-1\) and \(c_{+}(D_{1}\#D_{2})=4c_{+}(D)+c_{+}(D_{2})\) are invariants of \(K_{1}\#K_{2}\).
Let \(\bar{D}\) be an adequate diagram. Then
\[v_{B}(\bar{D})=v_{B}(D_{1}\#D_{2})=2v_{B}(D)+v_{B}(D_{2})-1\text{ and }c_{+}(\bar{D})=4c_{+}(D)+c_{+}(D_{2}).\]
Next we will calculate the quantity \(y(K_{1}\#K_{2})\) of Lemma 4.1 in two ways: Firstly, since we assumed that \(\bar{D}\) is an adequate diagram for \(K_{1}\#K_{2}\), applying Equation (4), we get
\[y(K_{1}\#K_{2})=2(v_{B}(\bar{D})-c(\bar{D}))=2(2v_{B}(D)+v_{B}(D_{2})-1-4c(D)-c(D _{2})).\]
Secondly, using Proposition 2.3 we get \(y(K_{1})=2(2v_{B}(D)-4c(D)+p-2\operatorname{wr}(K))\).
Then by Lemma 4.1,
\[y(K_{1}\#K_{2})=y(K_{1})+y(K_{2})-2=2(2v_{B}(D)-4c(D)+p-2\operatorname{wr}(K)+v _{B}(D_{2})-c(D_{2})-1).\]
Now note that in order for the two resulting expressions for \(y(K_{1}\#K_{2})\) to be equal we must have \((p-2\operatorname{wr}(K))=0\) which contradicts our assumption that \(p-2\operatorname{wr}(K)=-1\). We conclude that \(K_{1}\#K_{2}\) is non-adequate.
**Case 2.** Assume now that \(p-2\operatorname{wr}(K)=1\). Since \((K_{p,2})^{*}=K_{-p,2}^{*}\) and being adequate is preserved under taking mirror images, it is enough to show that \(K_{-p,2}^{*}\#K_{2}^{*}\) is non-adequate. Since \(-p-2\operatorname{wr}(K^{*})=-(p-2\operatorname{wr}(K)))=-1\), the later result follows from Case 1.
### Proof of Theorem 1.4
Note that if \(K\) is the unknot then so is \(K_{p,2}\) and the result follows trivially. Suppose that \(K\) is a non-trivial knot. Then by Lemma 4.2 we obtain that \(K_{1}\#K_{2}\) is non-adequate.
As discussed above, \(dj_{K_{1}}=2(4c(K))=2(c(D_{1})-1)\), where \(D_{1}=D_{p,2}\) is the diagram constructed in the proof of Theorem 2.5. On the other hand, \(dj_{K_{2}}=2c(D_{2})=2c(K_{2})\), where \(D_{2}\) is an adequate diagram for \(K_{2}\). Hence, by Lemma 4.1, \(dj_{K_{1}\#K_{2}}=2(c(D_{1}\#D_{2})-1)\). By Theorem 2.2,
\[c(K_{1}\#K_{2})=c(D_{1}\#D_{2})=c(D_{1})+c(D_{2})=c(K_{1})+c(K_{2}),\]
where the last equality follows since, by Theorem 1.1, we have \(c(K_{1})=c(D_{1})=c(D_{p,2})\).
|
2301.00281 | Lightmorphic Signatures Analysis Toolkit | In this paper we discuss the theory used in the design of an open source
lightmorphic signatures analysis toolkit (LSAT). In addition to providing a
core functionality, the software package enables specific optimizations with
its modular and customizable design. To promote its usage and inspire future
contributions, LSAT is publicly available. By using a self-supervised neural
network and augmented machine learning algorithms, LSAT provides an easy-to-use
interface with ample documentation. The experiments demonstrate that LSAT
improves the otherwise tedious and error-prone tasks of translating
lightmorphic associated data into usable spectrograms, enhanced with parameter
tuning and performance analysis. With the provided mathematical functions, LSAT
validates the nonlinearity encountered in the data conversion process while
ensuring suitability of the forecasting algorithms. | D. Damian | 2022-12-31T20:37:05Z | http://arxiv.org/abs/2301.00281v1 | # Lightmorphic Signatures Analysis Toolkit
###### Abstract
In this paper we discuss the theory used in the design of an open source lightmorphic signatures analysis toolkit (LSAT). In addition to providing a core functionality, the software package enables specific optimizations with its modular and customizable design.
To promote its usage and inspire future contributions, LSAT is publicly available. By using a self-supervised neural network and augmented machine learning algorithms, LSAT provides an easy-to-use interface with ample documentation.
The experiments demonstrate that LSAT improves the otherwise tedious and error-prone tasks of translating lightmorphic associated data into usable spectrograms, enhanced with parameter tuning and performance analysis.
With the provided mathematical functions, LSAT validates the nonlinearity encountered in the data conversion process while ensuring suitability of the forecasting algorithms.
lightmorphic, machine learning, spectrogram, graph chord, neural network
## 1 Introduction
It is common practice in the machine learning domain to use differential values, since they provide a simple way to model the data. However, such algorithms may not fit the lightmorphic signature properly, leading to a reduced quality of the obtained results. Training a neural network to predict the lightmorphic signature can significantly increase the data quality. This is the task that LSAT tries to accomplish.
As such, we define lightmorphic metric learning (LML) as a branch of machine learning algorithms whose purpose is to learn lightmorphic signatures from multiple datasets through the usage of vibrating graph chords.
In the following sections we describe the main features of the toolkit, explain the general mathematical concepts and finally detail the plans regarding future functionalities.
## 2 General mathematical concepts
In this section we expand the mathematical concepts and link them with the reasoning encountered in the implemented code.
We define the lightmorphic signature as a function of: light intensity (I) that varies according to seasons and local weather conditions, trajectory distribution characteristics
(D), and specific adjustments (T):
\[f_{L_{\odot}}=\int\limits_{1}^{I}\int\limits_{1}^{D}\int\limits_{1}^{T}\Gamma_{t }\zeta_{t}dt \tag{1}\]
where:
* \(\Gamma_{t}\) - trajectory tensor
* \(\zeta_{t}\) - point in time specificity
Storage of these trajectory specific lightmorphic signatures is done in a database (\(\Theta\)). The segments containing isochronous surfaces with similarities are stored in another database (\(\Phi\)) that serves as a baseline for training the neural network implementation.
The isochronous surfaces that constitute the lightmorphic signature are interlinked through the definition and usage of graph chords (\(\delta(t)\)). Observing their vibrational amplitude allows the prediction of alternative lightmorphic signatures and, at the same time, the correction of the already known values.
Since the primary light source considered is the Earth's Sun, specific spacetime metrics (ex. \(g_{\mu\nu},\eta_{\mu\nu},h_{+},h_{\times},G_{\mu\nu}\)) have to be used in order to describe the encountered anisotropies. These are implemented as a function of distant astrophysical forces that stretch and compress the fabric of spacetime.
According to special relativity, spacetime is seen as a four dimensional manifold described by a flat Minkowski metric defined in Cartesian coordinates (t, x, y, z, c = 1) as:
\[\eta_{\mu\nu}=\begin{pmatrix}-1&0&0&0\\ 0&1&0&0\\ 0&0&1&0\\ 0&0&0&1\end{pmatrix}, \tag{2}\]
When considering the geometry of curved space, we have made use of the metric \(g_{\mu\nu}\), that replaces the flat Minkowski metric \(\eta_{\mu\nu}\). This substitution was done considering that the geometry of curved space will eventually reduce to the flat spacetime of special relativity at a sufficiently small scale.
The interaction between curvature of spacetime and the mass distribution was modeled following (Blackburn (2010)) work, as:
\[G_{\mu\nu}=kT_{\mu\nu} \tag{3}\]
where \(G_{\mu\nu}\) is defined as the Einstein curvature tensor, \(T_{\mu\nu}\) is the stress-energy tensor and represents the mass-energy distribution, while k describes the Einstein constant of gravitation defined as:
\[k=\frac{8\pi G}{c^{4}} \tag{4}\]
where c is the speed of light in a vacuum.
At the same time, in order to improve the results quality, the Einstein tensor was also considered under the form:
\[G_{\mu\nu}=R_{\mu\nu}-\frac{1}{2}g_{\mu\nu}R, \tag{5}\]
where \(R_{\mu\nu}\) is the Riemann tensor for the local spacetime, and R is the Ricci scalar.
Since there is not one general solution for the complex Einstein equations, but a large variety of possible solutions that apply to particular circumstances, we've considered a weak-field approximation, where the nonlinear Einstein equations where approximated towards linearity.
For example, a very small perturbation specific to a gravitational wave, will impact the flat spacetime and it is defined as \(h_{\mu\nu}(x)\) and it's value will be \(|h_{\mu\nu}|<<1\).
Thus, the Einstein equation becomes:
\[g_{\mu\nu}(x)=\eta_{\mu\nu}+h_{\mu\nu}(x). \tag{6}\]
or by simply considering the induced strain variations:
\[\Box h_{\mu\nu}(x)=0, \tag{7}\]
By further pursuing such linearization, we can represent in the TT gauge, a propagating wave, under the following form:
\[h_{\mu\nu}^{TT}=\begin{pmatrix}0&0&0&0\\ 0&h_{+}&h_{\times}&0\\ 0&h_{\times}&-h_{+}&0\\ 0&0&0&0\end{pmatrix}, \tag{8}\]
where the constant amplitudes (\(h_{+}\), \(h_{\times}\)) represent the two gravitational wave polarizations, the plus- and cross-polarization.
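A minimal numerical sketch of equations (2), (6) and (8): building the perturbed metric \(g=\eta+h^{TT}\) for a wave travelling along z. The strain amplitude used here is an arbitrary illustrative number, and this snippet is not part of the LSAT package itself.

```python
import numpy as np

def perturbed_metric(h_plus, h_cross):
    # g_{mu nu} = eta_{mu nu} + h^{TT}_{mu nu} for a wave propagating along z
    eta = np.diag([-1.0, 1.0, 1.0, 1.0])
    h = np.zeros((4, 4))
    h[1, 1], h[2, 2] = h_plus, -h_plus
    h[1, 2] = h[2, 1] = h_cross
    return eta + h

g = perturbed_metric(1e-21, 0.0)   # illustrative strain amplitude
print(g[1, 1], g[2, 2])            # 1 + h_plus and 1 - h_plus
```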
We represent the distance between two neighboring points, as defined by Berit (2013) for a flat spacetime perturbed by a gravitational wave, through the following expression:
\[ds^{2}=-c^{2}dt^{2}+dx^{2}+dy^{2}+dz^{2}\;\rightarrow\;ds^{2}=-c^{2}dt^{2}+[1+h_{+}(t)]dx^{2}+[1-h_{+}(t)]dy^{2}+dz^{2}\]
That allows us to model, in the TT gauge, the gravitational wave stretching along the x axis and compression along the y axis with the specific factor \(\sqrt{1\pm h_{+}(t)}\simeq 1\pm\frac{1}{2}h_{+}(t)\).
Having modeled the photon's traveling path in outer space, in order to simplify the inherent path inhomogeneities, we separated the domains into outer space domain, atmospheric domain and Earth specific domains (lithosphere, hydrosphere, biosphere, noises, etc).
We further define the phase of an electromagnetic wave of frequency \(\omega_{0}\) as \(\phi\). Following Driggers (2015)'s work, we consider that the starting light phase is at 0 and it travels at the speed of light c. After a distance L it will have a phase \(\delta_{\phi_{space}}\) that can be expressed as a distance integral over the spacetime metric,
\[\delta_{\phi_{space}}=\frac{\omega_{0}}{c}\int_{0}^{L}gdx, \tag{9}\]
with \(g(t)=\eta+h(t)\), where \(\eta\) is the Minkowski metric and h(t) is the dimensionless spacetime strain.
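A sketch of equation (9) as a numerical integral, with the metric factor along the propagation direction taken as \(1+h(x)\). The waveform, wavelength and path length below are arbitrary illustrative choices and are not values used by LSAT.

```python
import numpy as np

def accumulated_phase(omega0, length_m, h_of_x, samples=20001, c=299792458.0):
    # delta_phi = (omega0 / c) * integral_0^L (1 + h(x)) dx   (equation (9))
    x = np.linspace(0.0, length_m, samples)
    return omega0 / c * np.trapz(1.0 + h_of_x(x), x)

omega0 = 2 * np.pi * 2.82e14                            # ~1064 nm light, as an example
h = lambda x: 1e-21 * np.sin(2 * np.pi * x / 1.0e7)     # toy strain profile along the path
print(accumulated_phase(omega0, 4.0e3, h))              # phase accumulated over a 4 km path
```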
Summing the light phase shift \(\delta_{\phi_{atm}}\) and \(\delta_{\phi_{Earth}}\), the latter derived from noise sources such as seismic or electromagnetic interference, leads to the dataset of trajectory specific lightmorphic signatures:
\[\Phi_{\Gamma_{IDT}}=\sum_{j=1}^{\mathbb{N}}\Gamma_{IDT}^{j} \tag{10}\]
The signature parameter estimation is performed considering a prior distribution \(\mathrm{p}(\Phi|L_{\odot})\) that is updated upon receiving the new data d to give a posterior distribution \(\mathrm{p}(\Phi|d,L_{\odot})\)
\[p(\Phi|d,L_{\odot})=\frac{p(\Phi|L_{\odot})p(d|\Phi,L_{\odot})}{p(d|L_{\odot})} \tag{11}\]
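On a discrete parameter grid, equation (11) reduces to normalising the product of prior and likelihood. The following is a minimal NumPy sketch with made-up numbers; it is not LSAT's actual implementation.

```python
import numpy as np

def posterior(prior, likelihood):
    # p(Phi | d) is proportional to p(Phi) * p(d | Phi);
    # the evidence p(d) is the sum over the parameter grid
    unnorm = prior * likelihood
    return unnorm / unnorm.sum()

prior = np.array([0.25, 0.25, 0.25, 0.25])    # flat prior over 4 candidate signatures
likelihood = np.array([0.1, 0.6, 0.2, 0.1])   # illustrative p(d | Phi_k)
print(posterior(prior, likelihood))            # updated probabilities, summing to 1
```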
While observing the distribution of multiple light segments within the dataset \(\Phi_{\Gamma_{IDT}}\), it will be possible to estimate the probability for trajectory specific lightmorphic evolution:
\[p_{\Phi}=f(\rho_{k}\cdot p_{\Phi_{k}}) \tag{12}\]
where \(p_{\Phi_{k}}\) is the database's k-th segment specific probability, \(\rho_{k}\) is the prediction weight for the k-th segment.
## 3 Software package design
The distribution matrices specific to the isochronous segmentation surfaces, which define the lightmorphic signature model, form the LSAT core.
As such we've used a design principle that ensures simplicity for the whole package, while making the source codes easy to read and maintain. As the toolkit is written in a modular way, new functionalities can be easily plugged in. This makes the LSAT not only a lightmorphic signature machine learning tool but also an experimental platform.
LSAT comes with plenty of documentation for all the interface functionalities and related data structures. The README file describes the installation process and interface usage. For developers who use the toolkit in their applications, the API documentation can provide additional information related to functionality calls.
## 4 Practical Usage
In the examples, we provide sample values for the lightmorphic signature updates as a function of \(\delta_{\phi_{atm}}\), derived by the neural network from a large dataset of atmospheric meteorological data for 317 cities in Romania, comprising hundreds of thousands of data points. Automatic learning is supported through API calls to the domain-specific data providers.
Beyond this simple way of running the lightmorphic signatures analysis toolkit, there are several enhancement options for advanced usage. As an example, one may activate additional functionalities that consider input parameters such as complex space weather forecasting, different electromagnetic wave disturbances, or lithosphere, hydrosphere and biosphere specific localized data.
## 5 Conclusion and Future Work
With the lightmorphic signatures analysis toolkit we provided an open-source software package that is simple and easy to use.
Experiments and analysis indicate that the modular design and customization support perform well in practice and can provide the basis for additional research on lightmorphic signatures.
The toolkit is constantly being improved through new research results and user feedback, with the ultimate goal of an automated toolkit for maintaining and updating a large database of high-quality light signatures.
Future work will focus on probability estimates, additional functionalities that mitigate the large uncertainties in the available observational input data which arise from the complex interaction processes. In addition, the inclusion of artificial intelligence (AI) options will be considered while building a national/international network for lightmorphic signature analysis. |
2310.20610 | Weak Limit of $W^{1,2}$ Homeomorphisms in $\mathbb{R}^3$ Can Have Any
Degree | In this paper for every $k\in\mathbb{Z}$ we construct a sequence of weakly
converging homeomorphisms $h_m\colon B(0,10)\to\mathbb{R}^3$,
$h_m\rightharpoonup h$ in $W^{1,2}(B(0,10))$, such that $h_m(x)=x$ on $\partial
B(0,10)$ and for every $r\in \left(\tfrac5{16},\tfrac{7}{16}\right)$ the degree
of $h$ with respect to the ball $B(0,r)$ is equal to $k$ on a set of positive
measure. | Ondřej Bouchala, Stanislav Hencl, Zheng Zhu | 2023-10-31T16:44:52Z | http://arxiv.org/abs/2310.20610v1 | # Weak limit of \(W^{1,2}\) homeomorphisms in \(\mathbb{R}^{3}\) can have any degree
###### Abstract.
In this paper for every \(k\in\mathbb{Z}\) we construct a sequence of weakly converging homeomorphisms \(h_{m}\colon B(0,10)\to\mathbb{R}^{3}\), \(h_{m}\rightharpoonup h\) in \(W^{1,2}(B(0,10))\), such that \(h_{m}(x)=x\) on \(\partial B(0,10)\) and for every \(r\in\big{(}\frac{5}{16},\frac{7}{16}\big{)}\) the degree of \(h\) with respect to the ball \(B(0,r)\) is equal to \(k\) on a set of positive measure.
Key words and phrases: limits of Sobolev homeomorphisms, topological degree. The first two authors were supported by the grant GACR P201/21-01976S. The third author was supported by the NSFC grant no. 12301111. This research was done when Z. Zhu was visiting the Department of Mathematical Analysis, Faculty of Mathematics and Physics, Charles University. He wishes to thank Charles University for its hospitality.
## 1. Introduction
The image of a spherical cap \(C_{m,r}\subseteq\partial B(0,r)\) (of size roughly \(1/m\)) by \(u_{m}\) is the outer "dashed" bubble (of size roughly \(1\)) and this bubble disappears in the limit as \(\operatorname{diam}(C_{m,r})\stackrel{{ m\to\infty}}{{\to}}0\). The limiting mapping has a degree equal to \(-1\) inside the inner bubble as the orientation of the bubble is reversed with respect to the original ball. The derivative of \(h_{m}\) on \(C_{m,r}\) is comparable to
\[\int_{C_{m,r}}|Dh_{m}|^{2}\approx\mathcal{H}^{2}(C_{m,r})\left|\frac{1}{\frac {1}{m}}\right|^{2}\approx\frac{1}{m^{2}}|m|^{2}=1\]
and thus it remains uniformly bounded in \(m\) even when we integrate over \(r\), so the maps have a weak limit in \(W^{1,2}\). Of course, one needs to extend the map as a homeomorphism on all neighboring spheres, make it the identity on \(\partial B(0,10)\), and estimate derivatives in all directions, but the essential idea is in the figure. To obtain degree \(k=2\) in our example, we need to achieve something like the following behavior for the limit mapping \(h\), see Fig. 2 (again everything is rotated around the \(x\)-axis), which is more complicated. We need more than one spherical cap with strange behavior, and our \(h_{m}\) are not radially symmetric.
Figure 1. Construction of Conti and De Lellis - everything is radially symmetric. Vectors denote the orientation of the sphere and its image.
Let us also note that the class of weak limits of Sobolev homeomorphisms was recently characterized in the planar case by Iwaniec and Onninen [15, 16] and De Philippis and Pratelli [9]. For some other kind of pathological examples of limits of \(W^{1,2}\cap L^{\infty}\) homeomorphisms where the limit fails to be one to one a.e., we refer the reader to Bouchala, Hencl and Molchanova [5]. For other interesting examples of non-injective mappings see [17].
## 2. Preliminaries
We use \(B(x,r)\) for a ball in \(\mathbb{R}^{3}\) with the center \(x\in\mathbb{R}^{3}\) and with the radius \(r>0\). By \(\operatorname{int}(A)\) we denote the interior of the set \(A\). Throughout the paper, \(C\) will be a generic positive constant, which may even be different in a single string of estimates.
### Degree for continuous mappings
Let \(\Omega\subseteq\mathbb{R}^{3}\) be a bounded open set. Given a continuous map \(f\colon\overline{\Omega}\to\mathbb{R}^{3}\) and \(y\in\mathbb{R}^{3}\setminus f(\partial\Omega)\), we can define the _topological degree_ as
\[\deg(f,\ \Omega,\ y)=\sum_{x\in\Omega\cap f^{-1}(y)}\operatorname{sgn}(J_{f}(x))\]
if \(f\) is smooth in \(\Omega\) and \(J_{f}(x)\neq 0\) for each \(x\in\Omega\cap f^{-1}(y)\). By uniform approximation, this definition can be extended to an arbitrary continuous mapping \(f\colon\overline{\Omega}\to\mathbb{R}^{3}\). Note that the degree depends only on values of \(f\) on \(\partial\Omega\).
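As a standard illustration of this definition (included only for the reader's convenience), consider the orientation-reversing map \(f(x)=-x\) on \(\Omega=B(0,1)\subseteq\mathbb{R}^{3}\). Then \(J_{f}\equiv(-1)^{3}=-1\) and \(f^{-1}(y)=\{-y\}\) is a single point of \(\Omega\) for every \(y\in B(0,1)\), so
\[\deg(f,\ B(0,1),\ y)=-1\qquad\text{for every }y\in B(0,1),\]
in accordance with the sense-reversing case below.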
If \(f\colon\overline{\Omega}\to\mathbb{R}^{n}\) is a homeomorphism, then either \(\deg(f,\Omega,y)=1\) for all \(y\in f(\Omega)\) (\(f\) is _sense preserving_), or \(\deg(f,\Omega,y)=-1\) for all \(y\in f(\Omega)\) (\(f\) is _sense reversing_). If, in addition, \(f\in W^{1,n-1}(\Omega,\mathbb{R}^{n})\), then this topological orientation corresponds to the sign of the Jacobian. More precisely, we have
**Proposition 2.1** ([14]).: _Let \(f\in W^{1,n-1}(\Omega,\mathbb{R}^{n})\) be a homeomorphism on \(\overline{\Omega}\) with \(J_{f}>0\) a.e. Then_
\[\deg(f,\ \Omega,\ y)=1,\qquad y\in f(\Omega).\]
Figure 2. Our construction of limit mappings - everything is radially symmetric.
### Degree for \(W^{1,2}\cap L^{\infty}\) mappings
Let \(B\) be a ball, \(f\in W^{1,2}(\partial B,\mathbb{R}^{3})\cap\mathcal{C}(\partial B,\mathbb{R}^{3})\), \(|f(\partial B)|=0\), and \(\mathbf{u}\in\mathcal{C}^{1}(\mathbb{R}^{3},\mathbb{R}^{3})\), then (see [18, Proposition 2.1])
\[\int_{\mathbb{R}^{3}}\deg(f,B,y)\operatorname{div}\mathbf{u}(y)\,\mathrm{d}y= \int_{\partial B}(\mathbf{u}\circ f)\cdot(\Lambda_{2}D_{\tau}f)\nu\,\mathrm{d} \mathcal{H}^{2}, \tag{2.1}\]
where \(D_{\tau}f\) denotes the tangential gradient and \(\Lambda_{2}D_{\tau}f\) is the restriction of \(\operatorname{cof}Df\) to the corresponding subspace (see [10] for details).
Following [8] (see also [6]) we need a more general version of the degree which works for mappings in \(W^{1,2}\cap L^{\infty}\) that are not necessarily continuous.
**Definition 2.2**.: _Let \(B\subseteq\mathbb{R}^{3}\) be a ball and let \(f\in W^{1,2}(\partial B,\mathbb{R}^{3})\cap L^{\infty}(\partial B,\mathbb{R}^ {3})\). Then we define \(\operatorname{Deg}(f,B,\cdot)\) as the distribution satisfying_
\[\int_{\mathbb{R}^{3}}\operatorname{Deg}(f,B,y)\psi(y)\,\mathrm{d}y=\int_{ \partial B}(\mathbf{u}\circ f)\cdot(\Lambda_{2}D_{\tau}f)\nu\,\mathrm{d} \mathcal{H}^{2} \tag{2.2}\]
_for every test function \(\psi\in\mathcal{C}^{\infty}_{c}(\mathbb{R}^{3})\) and every \(\mathcal{C}^{\infty}\) vector field \(\mathbf{u}\) on \(\mathbb{R}^{3}\) satisfying \(\operatorname{div}\mathbf{u}=\psi\)._
As in [8] (see also [10]) it can be verified that the right-hand side does not depend on the way \(\psi\) is expressed as \(\operatorname{div}\mathbf{u}\) and that the distribution \(\operatorname{Deg}(f,B,\cdot)\) can be represented as a \(BV\) function.
_Remark 2.3_.: Let \(B\) be a ball and \(f\in W^{1,2}(\partial B,\mathbb{R}^{3})\cap\mathcal{C}(\overline{B},\mathbb{R }^{3})\). If \(|f(\partial B)|=0\), then \(\operatorname{Deg}(f,B,y)=\deg(f,B,y)\) for a.e. \(y\in\mathbb{R}^{3}\).
### Matrix of derivatives in different coordinates
Let \((x_{1},r,\varphi)\) denote the usual cylindrical coordinates in \(\mathbb{R}^{3}\) and let \(a\colon\mathbb{R}^{3}\to\mathbb{R}^{3}\) be a mapping from cylindrical to cylindrical coordinates, i.e.
\[a(x_{1},r,\varphi)=(a^{x_{1}}(x_{1},r,\varphi),a^{r}(x_{1},r,\varphi),a^{ \varphi}(x_{1},r,\varphi))\]
It is well-known that the matrix of derivatives of \(a\) in this coordinate system is
\[Da(x_{1},r,\varphi)=\begin{pmatrix}\frac{\partial a^{x_{1}}}{\partial x_{1}}& \frac{\partial a^{x_{1}}}{\partial r}&\frac{1}{r}\frac{\partial a^{x_{1}}}{ \partial\varphi}\\ \frac{\partial a^{r}}{\partial x_{1}}&\frac{\partial a^{r}}{\partial r}& \frac{1}{r}\frac{\partial a^{r}}{\partial\varphi}\\ a^{r}\cdot\frac{\partial a^{\varphi}}{\partial x_{1}}&a^{r}\cdot\frac{ \partial a^{\varphi}}{\partial r}&\frac{a^{r}}{r}\frac{\partial a^{\varphi}} {\partial\varphi}\end{pmatrix}. \tag{2.3}\]
Let \((r,\theta,\varphi)\) denote the usual spherical coordinates in \(\mathbb{R}^{3}\), i.e. \(\varphi\in(0,2\pi)\) and \(\theta\in(0,\pi)\). Let \(b\colon\mathbb{R}^{3}\to\mathbb{R}^{3}\) be a mapping from cylindrical to spherical coordinates, i.e.
\[b(x_{1},r,\varphi)=(b^{r}(x_{1},r,\varphi),b^{\theta}(x_{1},r,\varphi),b^{ \varphi}(x_{1},r,\varphi)).\]
It is well-known that the matrix of derivatives of \(b\) in this coordinate system is
\[Db(x_{1},r,\varphi)=\begin{pmatrix}\frac{\partial b^{r}}{\partial x_{1}}& \frac{\partial b^{r}}{\partial r}&\frac{1}{r}\frac{\partial b^{r}}{\partial \varphi}\\ b^{r}\cdot\frac{\partial b^{\theta}}{\partial x_{1}}&b^{r}\cdot\frac{\partial b^{ \theta}}{\partial r}&\frac{b^{r}}{r}\cdot\frac{\partial b^{\theta}}{\partial \varphi}\\ b^{r}\sin b^{\theta}\cdot\frac{\partial b^{\varphi}}{\partial x_{1}}&b^{r}\sin b^ {\theta}\cdot\frac{\partial b^{\varphi}}{\partial r}&\frac{b^{r}}{r}\cdot \frac{\partial b^{\varphi}}{\partial\varphi}\end{pmatrix}. \tag{2.4}\]
## 3. Proof of main theorem for \(k=2\)
### Overview
In this subsection we roughly explain the construction of the mappings \(h\) and \(h_{m}\) from the Theorem 1.1. To create degree two somewhere we need to go around that area twice (imagine the planar case). To achieve that for the limit function \(h\), we will define \(h_{m}\) to go around three times, twice in the positive and once in the negative direction (or orientation). Using the idea and the mapping \(u_{m}\) from [8, Section 6] we can create such loops (or bubbles). And by preparing the set before applying the mapping \(u_{m}\) we can control which bubbles will disappear in the limit.
The mappings \(h_{m}\) will be the composition of three homeomorphisms,
\[h_{m}(x):=u_{m}\circ l_{m}\circ b(x).\]
The Figure 3 roughly explains what these three homeomorphisms do.
The mapping \(b\) is bi-Lipschitz and it does not depend on \(m\). It moves the dashed part of the circle/sphere (that will not disappear) and the dotted part (which will disappear) close together so that the image of the dashed bubble will be inside the image of the dotted one at the end.
The mappings \(l_{m}\) are Lipschitz, and they squash the dashed part (of size comparable to \(1\)) to something smaller (of size comparable to \(\frac{1}{m}\)). In this way, the dashed arc does not disappear in the limit (as it was big at the start) even after applying the Conti
Figure 3. Overview of the construction.
Later we add additional requirements about the behavior on spheres near \(\partial B(0,\frac{3}{8})\) so that their images do not cross some "forbidden" regions but this can be clearly satisfied as well.
### Definition of \(l_{m}\)
First, we define a thick cylinder (containing the sets \(K=b(K)\) and \(b(C_{m})\) from earlier) by setting
\[D_{0}:=\left\{(x_{1},x_{2},x_{3})\in B(0,10):\frac{1}{8}<x_{1}<\frac{7}{8},\ 0\leq \sqrt{x_{2}^{2}+x_{3}^{2}}<\frac{2}{8}\right\}.\]
The mapping \(l_{m}\) is the identity outside of this cylinder. For every \(m\in\mathbb{N}\), we define a slim cylinder by setting
\[D_{m}:=\left\{(x_{1},x_{2},x_{3})\in B(0,10):\frac{1}{8}<x_{1}<\frac{7}{8},\ 0\leq \sqrt{x_{2}^{2}+x_{3}^{2}}<\frac{1}{8m}\right\},\]
and we define two cones \(T_{m}^{L}\) and \(T_{m}^{R}\) by setting (see Fig. 5)
\[T_{m}^{L}:=\left\{(x_{1},x_{2},x_{3})\in B(0,10):\frac{1}{8}\leq x _{1}\leq\frac{2}{8},\ 0\leq\sqrt{x_{2}^{2}+x_{3}^{2}}<\left(1-\frac{1}{m}\right)x_{1}+\frac{2-m}{8m} \right\},\] \[T_{m}^{R}:=\left\{(x_{1},x_{2},x_{3})\in B(0,10):\frac{4}{8}\leq x _{1}<\frac{5}{8},\ 0\leq\sqrt{x_{2}^{2}+x_{3}^{2}}<\left(\frac{1}{m}-1\right)x_{1}-\frac{5m-4}{8m }\right\}.\]
Then, we define a "bonbon-like" domain \(B_{m}\) by setting
\[B_{m}:=T_{m}^{L}\cup b(K)\cup T_{m}^{R}\cup b(C_{m}).\]
The mapping \(l_{m}\) squeezes \(B_{m}\) into the slim cylinder \(D_{m}\) and stretches the set \(D_{0}\setminus B_{m}\) onto \(D_{0}\setminus D_{m}\).
To be precise, we define the homeomorphism \(l_{m}\) inside \(D_{0}\) using the cylindrical coordinates \((x_{1},r,\varphi)\), that is \((x_{1},x_{2},x_{3})=(x_{1},r\cos(\varphi),r\sin(\varphi))\):
The definition of \(l_{m}\) for \(x=(x_{1},r,\varphi)\in B_{m}\) is
\[l_{m}(x):=\begin{cases}\left(x_{1},\ \frac{r}{(8m-8)x_{1}+2-m},\ \varphi\right),&x\in T_{m}^{L},\\ \left(x_{1},\ \frac{r}{m},\ \varphi\right),&x\in b(K),\\ \left(x_{1},\ \frac{r}{(8-8m)x_{1}+5m-4},\ \varphi\right),&x\in T_{m}^{R},\\ x,&x\in b(C_{m}).\end{cases}\]
Figure 5. The set \(B_{m}\) in the plane \(\{x_{1},x_{2}\}\).
On the set \(D_{0}\setminus B_{m}\) we define \(l_{m}\) to be linear, with respect to the radius \(r\) (and mapping \(x_{1}\) to \(x_{1}\) and \(\varphi\) to \(\varphi\)), such that it maps the set \(D_{0}\setminus B_{m}\) onto the "annulus" \(D_{0}\setminus D_{m}\). It is not difficult to see that \(l_{m}\) is Lipschitz on \(D_{0}\setminus B_{m}\).
Next, we compute the matrix of derivatives of \(l_{m}\). Obviously, this matrix is the identity on \(C_{m}\). The matrix of derivatives of \(l_{m}\) on \(T_{m}^{L}\), with respect to the cylindrical coordinates \((x_{1},r,\varphi)\), is (see (2.3))
\[Dl_{m}(x)=\begin{pmatrix}1&0&0\\ \frac{(8-8m)r}{((8m-8)x_{1}+2-m)^{2}}&\frac{1}{(8m-8)x_{1}+2-m}&0\\ 0&0&\frac{1}{(8m-8)x_{1}+2-m}\end{pmatrix}. \tag{3.1}\]
The matrix of derivatives of \(l_{m}\) on \(b(K)\) with respect to the cylindrical coordinates is
\[Dl_{m}(x)=\begin{pmatrix}1&0&0\\ 0&\frac{1}{m}&0\\ 0&0&\frac{1}{m}\end{pmatrix}. \tag{3.2}\]
And on \(T_{m}^{R}\) the matrix of derivatives w.r.t. the cylindrical coordinates is
\[Dl_{m}(x)=\begin{pmatrix}1&0&0\\ \frac{(8m-8)r}{((8-8m)x_{1}+5m-4)^{2}}&\frac{1}{(8-8m)x_{1}+5m-4}&0\\ 0&0&\frac{1}{(8-8m)x_{1}+5m-4}\end{pmatrix}. \tag{3.3}\]
### Definition of \(u_{m}\)
The mapping \(u_{m}\) is the same as in [8, Section 6] with \(\varepsilon=\frac{1}{8m}\). They define it piecewise in several regions. For convenience, we include their picture here as Fig. 6. The mapping is axially symmetric with respect to \(x_{1}\)-axis so Fig. 6 is rotated around it and the regions a, a', b, c, d, e, e' on the left are mapped by the mapping to the corresponding regions on the right. In the limit the regions a', c, e' disappear and the region a which was outside of e is mapped inside the image of the boundary of e.
Figure 6. The areas as in [8, Figure 2.].
For us it is important how it works in the region that contains our bonbons \(B_{m}\), that is in the region \(\mathsf{c}\),
\[\mathsf{c}:=\left\{(x_{1},r,\varphi)\in\mathbb{R}^{3}:0\leq x_{1}\leq 1,\ 0\leq r \leq\tfrac{1}{8m}\right\}.\]
The mapping \(u_{m}\) in the region \(\mathsf{c}\) is defined from cylindrical to spherical coordinates as
\[u_{m}(x_{1},r,\varphi) :=\left(u_{m}^{r}(x_{1},r,\varphi),u_{m}^{\theta}(x_{1},r,\varphi ),\varphi\right), \tag{3.4}\] \[u_{m}^{r}(x_{1},r,\varphi) :=\left(1+\frac{1}{8m}\right)\cos\left(2\arctan(8m\cdot r)\right) +\frac{2}{8m}+\frac{x_{1}}{64m^{2}},\] \[u_{m}^{\theta}(x_{1},r,\varphi) :=2\arctan(8m\cdot r).\]
Obviously \(l_{m}(B_{m})\) is a subset of \(\mathsf{c}\). Since \(u_{m}^{r}\) does not depend on \(\varphi\) and \(u_{m}^{\theta}\) does not depend on \(x_{1}\) or \(\varphi\), the matrix of derivatives of \(u_{m}\) at the point \((x_{1},r,\varphi)\in\mathsf{c}\) with respect to the cylindrical coordinates is (see (2.4))
\[Du_{m}(x_{1},r,\varphi)=\begin{pmatrix}\frac{\partial u_{m}^{r}}{\partial x_{ 1}}&\frac{\partial u_{m}^{r}}{\partial r}&0\\ 0&u_{m}^{r}\frac{\partial u_{m}^{\theta}}{\partial r}&0\\ 0&0&\frac{u_{m}^{r}\sin(u_{m}^{\theta})}{r}\end{pmatrix}. \tag{3.5}\]
It is easy to check that
\[\frac{\partial u_{m}^{r}}{\partial x_{1}} =\frac{1}{64m^{2}},\] \[\frac{\partial u_{m}^{r}}{\partial r} =-\left(1+\frac{1}{8m}\right)\sin\left(2\arctan(8m\cdot r)\right) \frac{16m}{1+64m^{2}r^{2}},\] \[u_{m}^{r}\frac{\partial u_{m}^{\theta}}{\partial r} =\left(\left(1+\frac{1}{8m}\right)\cos(2\arctan(8m\cdot r))+ \frac{1}{4m}+\frac{x_{1}}{64m^{2}}\right)\frac{16m}{1+64m^{2}r^{2}},\] \[\frac{u_{m}^{r}\sin(u_{m}^{\theta})}{r} =\left(\left(1+\frac{1}{8m}\right)\cos(2\arctan(8m\cdot r))+ \frac{1}{4m}+\frac{x_{1}}{64m^{2}}\right)\frac{\sin(2\arctan(8m\cdot r))}{r}.\]
For \(x=(x_{1},r,\varphi)\in\mathsf{c}\) there is a positive constant \(C\) independent of \(m\) such that
\[0\leq\left|\frac{\partial u_{m}^{r}(x)}{\partial x_{1}}\right| \leq\tfrac{C}{m^{2}} 0\leq\left|\frac{\partial u_{m}^{r}(x)}{\partial r}\right| \leq C\cdot m, \tag{3.6}\] \[0\leq\left|u_{m}^{r}(x)\frac{\partial u_{m}^{\theta}(x)}{\partial r }\right| \leq C\cdot m, 0\leq\left|\frac{u_{m}^{r}(x)\sin(u_{m}^{\theta}(x))}{r} \right| \leq C\cdot m.\]
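For the reader's convenience, the bounds in (3.6) that involve \(r\) in the denominator can be checked directly from the elementary identity \(\sin(2\arctan s)=\frac{2s}{1+s^{2}}\) with \(s=8m\cdot r\); this is only a spelled-out version of the estimate, not an additional assumption:
\[\left|\frac{\partial u_{m}^{r}}{\partial r}\right|\leq\left(1+\frac{1}{8m}\right)\cdot 1\cdot\frac{16m}{1+64m^{2}r^{2}}\leq 32m,\qquad\left|\frac{u_{m}^{r}\sin(u_{m}^{\theta})}{r}\right|\leq C\cdot\frac{1}{r}\cdot\frac{16m\,r}{1+64m^{2}r^{2}}\leq 16\,C\,m,\]
where we used \(|u_{m}^{r}|\leq\left(1+\frac{1}{8m}\right)+\frac{1}{4m}+\frac{x_{1}}{64m^{2}}\leq C\) on \(\mathsf{c}\).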
In their paper [8] they do not need the mappings \(u_{m}\) to be identity on the boundary. But it is not difficult to observe that their mappings are bi-Lipschitz and "well-behaved" on \(\partial B(0,3)\), so we can extend them by hand to be uniformly bi-Lipschitz on \(B(0,10)\setminus B(0,3)\) and identity on \(\partial B(0,10)\). So we get a sequence of homeomorphisms with
\[\sup_{m\in\mathbb{N}}\int_{B(0,10)}\left|Du_{m}(y)\right|^{2}\, \mathrm{d}y<\infty. \tag{3.7}\]
As we have already mentioned we need a small additional requirement about the behavior of our first map \(b\) on the set \(A:=B(0,\frac{7}{16})\setminus B(0,\frac{5}{16})\). We need \(A\) to be mapped to the regions b, d and f from the Figure 6 on the left (i.e. they do not enter \(a\), \(a^{\prime}\), \(e\) and \(e^{\prime}\)), except for the intersection of \(A\) with \(b(K)\) and \(b(C_{1})\), which can be mapped into the set c from the same figure. This could be easily achieved as seen in Fig. 7.
We shall also need that \(l_{m}(D_{0}\setminus B_{m})\) is a subset of d (see Fig. 7) and that the derivative of \(u_{m}\) is bounded there by a constant (independent of \(m\)), as the \(u_{m}\) are uniformly Lipschitz there (see [8, construction of d - pages 544-545]).
### Computation of the \(\mathbf{W^{1,2}}\) norm
Now, we set
\[h_{m}(x):=u_{m}\circ l_{m}\circ b(x).\]
Since all three mappings are homeomorphisms, it is easy to see that \(h_{m}\) is a homeomorphism as well. Analogously, it is easy to see that \(h_{m}(x)=x\) for \(x\in\partial B(0,10)\).
Since \(b\) is bi-Lipschitz, in order to show
\[\sup_{m\in\mathbb{N}}\int_{B(0,10)}\left|Dh_{m}(x)\right|^{2}\,\mathrm{d}x<\infty,\]
it suffices to show
\[\sup_{m\in\mathbb{N}}\int_{B(0,10)}\left|D(u_{m}\circ l_{m})(x)\right|^{2}\, \mathrm{d}x<\infty.\]
Since \(l_{m}\) is identity on \(B(0,10)\setminus D_{0}\), from (3.7), we obtain
\[\sup_{m\in\mathbb{N}}\int_{B(0,10)\setminus D_{0}}\left|D(u_{m}\circ l_{m})(x )\right|^{2}\,\mathrm{d}x<\infty. \tag{3.8}\]
As mentioned at the end of the previous subsection \(u_{m}\) are uniformly Lipschitz on \(l_{m}(D_{0}\setminus B_{m})\). Hence, for every \(x\in D_{0}\setminus\overline{B}_{m}\), we have
\[|D(u_{m}\circ l_{m})(x)|\leq|Du_{m}(l_{m}(x))|\cdot|Dl_{m}(x)|\leq C.\]
Therefore
\[\sup_{m\in\mathbb{N}}\int_{D_{0}\setminus\overline{B}_{m}}|D(u_{m}\circ l_{m} )(x)|^{2}\ \mathrm{d}x<\infty. \tag{3.9}\]
It remains to consider the derivative on \(B_{m}\). For almost every \(x\in B_{m}\) the chain rule (with respect to the correct system of coordinates) gives
\[D(u_{m}\circ l_{m})(x)=Du_{m}(l_{m}(x))\cdot Dl_{m}(x). \tag{3.10}\]
Since \(l_{m}\) is always identity on \(C_{m}\), by (3.7), we have
\[\sup_{m}\int_{C_{m}}|D(u_{m}\circ l_{m})(x)|^{2}\,\mathrm{d}x<\infty. \tag{3.11}\]
By (3.2), (3.5) and (3.10) we know that for every \(x\in b(K)\)
\[D(u_{m}\circ l_{m})(x)=\begin{pmatrix}\frac{\partial u_{m}^{r}}{\partial x_{1}}&\frac{1}{m}\cdot\frac{\partial u_{m}^{r}}{\partial r}&0\\ 0&\frac{u_{m}^{r}}{m}\cdot\frac{\partial u_{m}^{\theta}}{\partial r}&0\\ 0&0&\frac{u_{m}^{r}}{mr}\sin(u_{m}^{\theta})\end{pmatrix}\]
and (3.6) now imply \(|D(u_{m}\circ l_{m})(x)|\leq C.\) Hence we have
\[\sup_{m\in\mathbb{N}}\int_{b(K)}\left|D(u_{m}\circ l_{m})(x)\right|^{2}\, \mathrm{d}x<\infty. \tag{3.12}\]
Let
\[\diamondsuit:=(8m-8)x_{1}+2-m.\]
By (3.1), (3.5) and (3.10), for every \(x\in\mathrm{int}(T_{m}^{L})\) it holds that
\[D(u_{m}\circ l_{m})(x)=\begin{pmatrix}\frac{\partial u_{m}^{r}}{\partial x_{1}}+\frac{(8-8m)r}{\diamondsuit^{2}}\cdot\frac{\partial u_{m}^{r}}{\partial r}&\frac{1}{\diamondsuit}\cdot\frac{\partial u_{m}^{r}}{\partial r}&0\\ \frac{(8-8m)r\cdot u_{m}^{r}}{\diamondsuit^{2}}\cdot\frac{\partial u_{m}^{\theta}}{\partial r}&\frac{u_{m}^{r}}{\diamondsuit}\cdot\frac{\partial u_{m}^{\theta}}{\partial r}&0\\ 0&0&\frac{u_{m}^{r}\sin(u_{m}^{\theta})}{r\cdot\diamondsuit}\end{pmatrix}. \tag{3.13}\]
By definition of \(T_{m}^{L}\) we know that for \(x\in\mathrm{int}(T_{m}^{L})\) we have
\[0\leq r<\left(1-\frac{1}{m}\right)x_{1}+\frac{2-m}{8m}=\frac{\diamondsuit}{8m}, \tag{3.14}\]
so
\[\left|\frac{(8-8m)r}{\diamondsuit}\right|\leq C\]
for a constant \(C\) independent of \(x_{1}\), \(r\) and \(m\). By (3.13) and (3.6), for every \(x\in\mathrm{int}(T_{m}^{L})\), we have
\[|D(u_{m}\circ l_{m})(x)|\leq\frac{C\cdot m}{\diamondsuit}.\]
Hence the Fubini theorem implies (see also (3.14))
\[\sup_{m\in\mathbb{N}}\int_{\mathrm{int}(T_{m}^{L})}\left|D(u_{m}\circ l_{m}) (x)\right|^{2}\,\mathrm{d}x\leq C\int_{\frac{1}{8}}^{\frac{2}{8}}\left(\frac{ \diamondsuit}{8m}\right)^{2}\frac{m^{2}}{\diamondsuit^{2}}\,\mathrm{d}x_{1}<\infty. \tag{3.15}\]
Since \(T_{m}^{R}\) is essentially the same as \(T_{m}^{L}\), similar computation gives
\[\sup_{m\in\mathbb{N}}\int_{\mathrm{int}(T_{m}^{R})}|D(u_{m}\circ l_{m})(x)|^{2}\; \mathrm{d}x<\infty. \tag{3.16}\]
The boundaries of \(D_{0}\), \(C_{m}\), \(b(K)\), \(T_{m}^{L}\) and \(T_{m}^{R}\) have zero measure, so after summing the inequalities (3.8), (3.9), (3.11), (3.12), (3.15) and (3.16) we obtain the desired inequality
\[\sup_{m\in\mathbb{N}}\int_{B(0,10)}|D(u_{m}\circ l_{m})(x)|^{2}\,\mathrm{d}x<\infty.\]
It implies that \(\{h_{m}\}\) is a bounded and (possibly after taking a subsequence) weakly convergent sequence in the space \(W^{1,2}(B(0,10),\;B(0,10))\), \(h_{m}\rightharpoonup h\), where \(h\) is the pointwise limit of \(h_{m}\).
### Degree satisfies \(\mathrm{Deg}(\mathbf{h},\mathbf{B}(0,\mathbf{r}),\mathbf{y})=\mathbf{2}\)
We claim that for every point \(y\in B\big{(}(\frac{1}{2},0,0),\frac{1}{2}\big{)}\) and for every radius \(r\in(\frac{5}{16},\frac{7}{16})\) we have
\[\mathrm{Deg}\;(h,B(0,r),y)=2.\]
Indeed, for every \(r\in(\frac{5}{16},\frac{7}{16})\) it holds that \(h\in W^{1,2}(\partial B(0,r))\cap L^{\infty}(\partial B(0,r))\). Fix such an \(r\). The mappings \(h_{m}\) map the sphere \(\partial B(0,r)\) onto three bubbles, see Fig. 3. In the limit, the filled and the dashed bubbles become topological spheres with the same orientation as the original sphere \(B(0,r)\), and the dotted bubble disappears, see Fig. 2. Therefore it is not difficult to see that the degree of \(h\) is \(2\) inside the smaller topological sphere.
## 4. For other degrees
In this section, we explain an idea of how to construct a sequence of bounded and weakly convergent homeomorphisms \(\{h_{m}\}\subseteq W^{1,2}(B(0,10),B(0,10))\) which are identity on the boundary \(\partial B(0,10)\), such that the weak limit has degree \(k\in\mathbb{Z}\) on a subset of positive measure. For \(k=0,-1,1\), the original construction by Conti and De Lellis in [8] already gives the desired result. For \(k=2\) please see our construction above.
For degrees \(|k|\geq 2\) the construction is similar to the case \(k=2\). We need to modify our mappings \(b\) and \(l_{m}\) from Fig. 7. Instead of three, we need to create the appropriate number of bubbles to achieve the desired degree \(k\). To do that we use a bi-Lipschitz mapping \(b\) that maps the sphere onto a sufficiently wiggly shape as in Fig. 8. And by choosing which arrows to shrink using \(l_{m}\) we can decide which bubbles will disappear in the limit. Arrows that are shrunk before applying the "bubble making function \(u_{m}\)" will not disappear and can change the final degree, while the arrows that are not changed in size by \(l_{m}\) will disappear in the limit and so will not affect the final degree. |
2309.10868 | (110) Facet of MgTe Zinc Blende Semiconductor: A Holy Grail for Modern
Spintronics | Unlike, momentum-dependent Rashba spin-splitting, materials exhibiting
intrinsic momentum-independent unidirectional spin polarization also known as
persistent spin texture (PST) in the full Brillouin zone are scarce. In this
work, a list of characteristic electronic properties for identifying an ideal
PST material is provided based on earlier analytical models, and a new
semiconductor, the MgTe(110) facet is proposed which satisfies all these
conditions and exhibits PST in the full Brillouin zone. The atomic arrangement
in this particular facet exhibits three basic symmetries found in nature:
rotation, reflection, and translation. Using the method of invariance, an
effective Hamiltonian is constructed which reproduces the results obtained
using the density functional theory. Further, mono/few layers of MgTe (110)
facets of the zinc-blende structure are proposed for a ferromagnet-free
non-ballistic spin-field effect transistor (s-FET) that combines both the
spin-Hall effect and inverse spin-Hall effect, thus harmonizing spintronics
with conventional electronics. Although only quantum well structures have been
experimentally studied for nonballistic s-FET under the stringent condition of
equal Rashba and Dresselhaus strength, PST originating intrinsically in the
proposed 2D structures makes them an ideal alternate. | Manish Kumar Mohanta, Puru Jena | 2023-09-19T18:27:05Z | http://arxiv.org/abs/2309.10868v3 | # (110) Facet of MgTe Zinc Blende Semiconductor: A Holy Grail for Modern Spintronics
###### Abstract
In this work, we propose a new material, the MgTe(110) facet, which exhibits momentum-independent unidirectional spin polarization known as persistent spin texture (PST). The atomic arrangement in this particular facet exhibits three basic symmetries found in nature: rotation, reflection, and translation. An effective Hamiltonian obtained using the method of invariants is also discussed, which supplements the exact results obtained using density functional theory. Further, mono/few layers of MgTe (110) facets of the zinc-blende structure are proposed for a ferromagnet-free non-ballistic spin-field-effect transistor that combines both the spin-Hall effect and the inverse spin-Hall effect, harmonizing spintronics with conventional electronics. Although only quantum well structures have been experimentally studied for nonballistic s-FETs under the stringent condition of equal Rashba and Dresselhaus strength, the intrinsically originating PST in these 2D structures makes them an ideal alternative.
## I Introduction
The precise control of spin degrees of freedom for data storage and processing has been of great interest and a hot topic of research following the proposal of a spin-field-effect transistor (s-FET) by Datta and Das. [1; 2; 3] The s-FET device consists of a lateral semiconducting channel exhibiting strong spin-orbit coupling (SOC) and two ferromagnets used for spin generation and detection. The spin transport is controlled by the gate voltage in the semiconducting region. Depending on the spin transport, ballistic (impurity-free) and non-ballistic s-FETs have been proposed, but the latter have been less explored. In a ballistic s-FET, the spin direction is maintained in the channel without any scattering. Nonmagnetic Rashba semiconductors exhibit momentum-dependent Rashba spin splitting and are thus prone to impurity scattering in the non-ballistic regime. However, the spin-orbit interaction can be engineered to produce a momentum-independent unidirectional spin configuration, which is also known as persistent spin texture (PST). This has been theoretically shown in two-dimensional quantum well systems having equal Rashba and Dresselhaus strength. [4; 5] Under this condition, enforced by SU(2) symmetry, [6] spins exhibit an extraordinarily long lifetime even in the presence of disorder or imperfection. The spin-helix state in a two-dimensional electron gas system is found to be robust against D'yakonov-Perel' spin relaxation, which makes a Datta-Das-type s-FET operable in the nonballistic transport regime. [7] For a conventional s-FET, interfacial scattering, band mismatch, ferromagnetic materials with 100% spin-polarized current, spin injection efficiency, and long spin lifetime [8] are major challenges that hinder realizing the s-FET. The control of spin precession using a gate voltage requires materials having large SOC for a ballistic s-FET. However, quantum well structures with relatively low SOC can be used in a nonballistic s-FET, as demonstrated by Eberle et al. [9]. Although quantum well structures have been extensively studied for nonballistic s-FETs [10; 11; 12; 13], it is necessary to explore 2D materials which exhibit intrinsic PST.
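As a quick numerical illustration of the equal-Rashba-and-Dresselhaus condition mentioned above (a generic two-band model in one common sign convention, not the MgTe Hamiltonian derived later in this work), the spin expectation value of each eigenstate stays pinned to a single in-plane axis for every momentum when \(\alpha=\beta\):

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)

def rashba_dresselhaus(kx, ky, alpha, beta):
    """Linear Rashba + Dresselhaus Hamiltonian for a [001]-grown 2D system."""
    return alpha * (kx * sy - ky * sx) + beta * (kx * sx - ky * sy)

rng = np.random.default_rng(0)
for kx, ky in rng.uniform(-1.0, 1.0, size=(5, 2)):
    H = rashba_dresselhaus(kx, ky, alpha=1.0, beta=1.0)   # equal strengths
    _, vecs = np.linalg.eigh(H)
    v = vecs[:, 0]                                        # lower branch
    spin = np.real([v.conj() @ sx @ v, v.conj() @ sy @ v])
    # The in-plane spin is always collinear with the [110] axis (up to sign).
    print(np.round(spin / np.linalg.norm(spin), 3))
```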
Recently, Tao and Tsymbal have proposed the existence of intrinsic PST in bulk non-symmorphic crystal structures [14], which has generated a surge of interest in exploring 2D materials. In this regard, only a handful of 2D semiconductors have been identified to exhibit PST to date: MX monolayers [15; 16; 17] (M: Sn, Ge; X: S/Se/Te) and the Bi(110) monolayer [18]. These proposed monolayers are van der Waals solids and hence odd/even effects may arise under different stacking configurations. In this work, we propose the MgTe (110) facet, which is a direct band gap semiconductor having band edges located at the \(\Gamma\)-point. PST is shown to exist at the Brillouin zone center using DFT, and since MgTe is a century-old semiconductor whose experimental synthesis [19; 20] is well known, this facet is very interesting and appealing to explore for nonballistic s-FETs.
## II Results and Discussion
### Symmetries Associated with Atomic Arrangements in MgTe (110) Facets and Ferroelectricity
The cubic zinc-blende (ZB) structure of MgTe has many facets. A brief crystallographic description with geometrical views is provided in Table S1 and Figure S1 in the Supplemental Material (SM). In this work we are particularly interested in the nonsymmorphic (110) facet, where the atomic arrangement shows all three basic types of transformation: rotation (R), reflection (M), and translation (t). The recently discovered black phosphorene is a typical example of a non-symmorphic space group. Since MgTe is a non-van der Waals solid, the focus of this work is to explore the electronic properties associated with a two-atom-thick layer (2L), which is the basic building block of the (110) facet (see Figure S1(d)). Geometric top and side views of 2L-MgTe are presented in Figure 1. Considering the 2D system, the crystallographic symmetry operations under which 2L-MgTe remains invariant are: |
2306.00226 | Human-centric Literature on Trust for SfTI Veracity Spearhead | This article summarizes the literature on trust of digital technologies from
a human-centric perspective. We summarize literature on trust in face-to-face
interactions from other fields, followed by a discussion of organizational
trust, technology-mediated trust, trust of software products, trust of AI, and
blockchain. This report was created for the Science for Technological
Innovation Veracity Spearhead supported by New Zealand's National Science
Challenges. | Kelly Blincoe, Markus Luczak-Roesch, Tim Miller, Matthias Galster | 2023-05-31T22:46:44Z | http://arxiv.org/abs/2306.00226v1 | # Human-centric Literature on Trust for SfTI Veracity Spearhead
###### Abstract
This article summarizes the literature on trust of digital technologies from a human-centric perspective. We summarize literature on trust in face-to-face interactions from other fields, followed by a discussion of organizational trust, technology-mediated trust, trust of software products, trust of AI, and blockchain. This report was created for the Science for Technological Innovation Veracity Spearhead supported by New Zealand's National Science Challenges.
## 1 Trust in General
_Disclaimer: This report does not include a Maori perspective on trust. Work is in progress to combine this Western view with a Maori perspective._
Trust has been described as facilitating cooperative behaviour [1]. Trust has been examined extensively in the fields of experimental psychology, philosophy, sociology, and political science [2]. In sociology, trust is considered to be multi-faceted with distinct cognitive, emotional, and behavioral dimensions [2]. The cognitive dimension says that trust is based on rational decisions, while the emotional dimension, which is also referred to as affect-based trust, says that emotional relationships between people form a basis for trust [3, 4]. The behavioral component of trust is the action of doing something with uncertain outcomes while assuming that all people involved in the action will act with integrity [5]. Trust is only needed when there is some level of risk or uncertainty involved, so trust also involves vulnerability [6]. Research in psychology and philosophy also describes trust as having both rational and emotional aspects [7].
Much research has considered how initial trust is formed. McKnight and Chervany identified a set of characteristics based on a review of literature across multiple disciplines (see Figure 1) [8]. First, a person's own disposition to trust, or their willingness or tendency to depend on others, impacts how trust is formed. Second, there must be conditions in place that could lead to success,
this is called institutional-based trust. Finally, there are three main trusting behaviours: competence, benevolence, and integrity. Competence is defined as the belief that the other party has the required skills, benevolence is the belief that the other party wants to do good, and integrity relates to the belief that the other party has good values or character.
Studies have shown that people are influenced by a "truth bias", meaning that they are more likely to assess things as truth than lies, even when being deceived [10, 11, 12]. People are not very good at identifying deception, which is defined as someone purposely misleading someone else [13]. Research in psychology has found that people assume all information as truth initially and only later change their assessment if they find the information is false [14]. This is in line with the Truth-Default theory, which states that people tend to believe each other by default [15].
When trust is damaged, there are negative consequences [16, 17]. Researchers have studied how trust is repaired after it has been damaged [18, 19]. Unlike initial trust, significant effort is often required to rebuild trust after a trust violation. Various factors impact trust repair, including the strength of the initial trust [20]. A theory which is important for trust repair is attribution theory [21]. Attribution theory considers the cause of the trust violation. It considers three main dimensions: locus of control (internal or external), controllability, and stability [19]. Attribution theory states that the outcomes and reactions of trust violations will vary based on these dimensions. The theory also suggests that the outcomes and reactions are not permanent and that trust can be repaired following violations [22].
## 2 Organizational Trust
While most research in the field of psychology has focused on interpersonal trust, organizational trust has been studied extensively in other domains like management and marketing [6]. Organizational trust has been defined as "the belief that the decision makers will produce outcomes favorable to the person's interests without any influence by the person" [23]. Management researchers argue that trust can improve business performance [6, 24]. Trust within an organization can result in improved productivity and satisfaction of employees [25]. In the field of marketing, researchers have found that consumers' trust of a business is impacted by both the people within that business that they interact with and the business's management practices and policies [26].
Figure 1: Characteristics of Trust [9]
## 3 Technology-Mediated Trust
Trust has primarily been studied from a perspective of human, face-to-face interactions [27]. When interactions occur through technology, signals of trust are different [1]. Riegelsberger et al. proposed a framework of trust in technology-mediated interactions (see Figure 2), which included both contextual and intrinsic properties of trust [9]. The contextual properties included in the framework are temporal, social, and institutional embeddedness. Temporal embeddedness considers likely future encounters, since repeated interactions can both encourage trustworthy behaviour and provide signals to make decisions around trust. Social embeddedness considers the reputation of the person or organization who is being trusted, which can be discussed and shared across the trustors (those doing the trusting). Institutional embeddedness refers to the institutions that govern behaviour, such as judicial systems or organizations, since the rules imposed by these institutions will influence trust. Yet, there is an acknowledgement that new technology can disrupt trust formation, since new technology has the potential to transform the way in which people interact, which can lead to uncertainty and vulnerability until new norms are established [28].
The intrinsic properties included in Riegelsberger et al.'s framework are ability, internalized norms, and benevolence, which are in line with the trusting behaviours of competence, integrity, and benevolence of McKnight and Chervany described above [8]. Here, ability refers to the capabilities and characteristics of the person or organization who is being trusted that will enable them to fulfill the promised outcomes. Internalized norms includes attributes such as honesty, credibility, reliability, dependability, openness, and good will. Benevolence represents the enjoyment obtained by person or organization who is being trusted when good outcomes are experienced by the person doing the trusting.
While this framework was created to conceptualise trust in technology-mediated interactions, the elements of trust are still focused on the people or organizations involved in the trust relationship. Shneiderman also recognized this in his definition of trust: "If users rely on a computer and it fails, they may get frustrated or vent their anger by smashing a keyboard, but there is no relationship of trust with a computer. If users depend on a network and it breaks, they cannot get compensation
Figure 2: Characteristics of trust from Riegelsberger et al. framework for technology-mediated interactions [9]
from the network. However, they can seek compensation from people or organizations they trusted to supply a correctly functioning computer or communication service." [1] Based on this definition, Shneiderman developed a set of guidelines for developers of online services, such as e-commerce or e-services, which are underlined by two key principles. First, the organization providing the service should ensure trust both by providing evidence of past trustworthy performance and providing strong assurances of trust. Second, the organization should clarify responsibilities and obligations by providing full disclosure of terms, guarantees, and mechanisms for disputes.
Of course, when considering technology-mediated trust, it is also important to know that people have different perceptions of and attitudes towards technology, and so technology-mediated trust will be subjective [29]. This is in line with the characteristics of trust in general by McKnight and Chervany which state that a person's own disposition to trust impacts trust formation.
## 4 Trust of Software
In line with this, studies have shown that for software products, trust is based on both a trust of the creators of the software product and a trust of the software itself [30, 31]. Similarly, Siau and Wang argue that trust in technology is determined by three main factors: human characteristics, environment characteristics, and technology characteristics [32]. Users of software products assess the trustworthiness of software in different ways [31, 33, 34]. Yang et al. propose a software trust framework which considers software correctness, security, and reliability as measures of trustworthiness [33]. Jackson equates trust to dependability of the software product to perform a particular task [31]. These definitions all relate to the intrinsic ability and internalized norms of the software. On the other hand, Wang et al. show that user feedback of software products is useful to determining levels of trust, which relates to the reputation of the software [35]. Mercuri considered the view of transparency as it relates to trust of software, defining different ways that transparency can be achieved [36]. For example, through open sourced code, certifications, and assurances. Provenance, defined as "metadata about the origin, context or history of data", can also promote transparency [37].
Another perspective comes from the field of human computer interaction where the relationship between trust and user interface design has been studied. Interface designers argue that the visual design of the interface forms the first impressions of trust [38]. Rendell et al. found that inclusion of nature imagery on websites positively influenced users' perceptions of trust [39]. Xiling found that simple and well laid out interfaces promoted trust [40]. They also found that familiarity, being able to clearly relate the "offline" brand and experience to the online interface, was important for trust. Xiling found that usability was important for building trust [40]. Systems that were easy to use, consistent, and logically structured were more trusted. While researchers have investigated the use of particular colors in an interface and their relationship with user trust, no relationship was identified [41].
## 5 Trust of AI
With the rise of Artificial Intelligence (AI) to perform decision-making, it is also important to consider trust as it relates to these systems in particular. Glikson and Woolley find that the
representation of an AI system (e.g., robot, humanoid, embedded) and the system's capabilities are important factors in developing trust [42]. Siau and Wang present a list of factors that are used for both building initial trust in AI systems and developing continuous trust in those systems [32]. They find that understanding how AI works (its transparency and explainability) and being able to trial the AI system before adopting it (trialability) are important for initial trust formation. In addition, the visual appearance of the AI (its representation), reviews of the AI system written by other users, and the users' perceptions of AI in general based on exposure to things like media coverage or Sci-fi books will also impact initial trust.
While many AI systems operate as black boxes (there is no way to understand why decisions are being made), transparency, explainability, and interpretability are still seen as important for trust of AI systems [43]. Some research defines interpretable as understandable or transparent [44]. Others define interpretable as providing explanations for decisions. In these cases, the model may not be transparent, but some understandable reasons for decisions are provided by the black box AI model [45]. Thus, interpretability may be defined as both explainable or transparent. Explanations are often proposed to improve trust in AI systems [46, 47] and recent research shows that software users do want explanations when complex decisions are being made [48].
For developing continuous trust in AI systems, Siau and Wang [32] find that usability and reliability, collaboration and communication, sociability and bonding, security and privacy protection, interpretability, concerns about job replacement, and goal congruence are important factors. Of course, accuracy is also important. Yin et al. found that people considered both a model's stated accuracy and its observed accuracy in determining their trust of the model [49]. Siau and Wang say "trust in AI takes time to build, seconds to break, and forever to repair once it is broken!" [32]
We have seen many examples where AI has gone wrong, and Winfield and Jirotka argue that ethical governance is critical to building trust in AI [50]. Through a literature review of trust and AI, Lockey et al. identified five main challenges: 1) transparency and explainability, 2) accuracy and reliability, 3) automation resulting in job loss, 4) anthropomorphism (or including human-like characteristics) leading to over-estimation of the AI system, and 5) privacy concerns related to mass data extraction [51].
Another important factor related to trust in AI is fairness [52]. While one might assume machines can make more fair decisions that are free from human bias, it is well known that AI systems actually amplify existing bias [53, 54]. Historical bias in training data can cause AI systems to learn this bias and make biased decisions.
Accountability is also important [52] for trust in AI. This factor considers who will be held responsible for the decisions made by AI systems. There is currently not a clear answer to who should be held accountable. The Law Commission of England and Wales and the Scottish Law Commission recently proposed that self driving car users should not be held responsible for crashes and other driving offenses.1. However, in the US, self driving car users are considered responsible. Research suggests more auditing of AI is needed to reduce corporate reputation damage and assure AI is legal, ethical, and safe [55].
Footnote 1: [https://www.forbes.com/sites/zacharysmith/2022/01/25/self-driving-car-users-shouldnt-be-held-responsible-for-crashes-uk-report-says](https://www.forbes.com/sites/zacharysmith/2022/01/25/self-driving-car-users-shouldnt-be-held-responsible-for-crashes-uk-report-says)
Trust of AI also comes down to perceptions of how decisions are made. Machines make decisions which are rule-based and algorithmic [56]. Machines do not consider emotions in their decision-making nor can they learn in the same way as humans [57]. These differences can lead to algorithmic aversion, where people prefer human made decisions even if the decisions are inferior to those made
by a machine [58].
Jussupow et al. defined four characteristics of algorithms that influence aversion: 1) algorithm agency which describes the level at which the algorithm behaves autonomously; 2) algorithm performance which considers the accuracy and failures of the algorithm; 3) perceived algorithm capabilities which describes the algorithm's perceived ability to perform the task; and 4) human involvement which relates to how much humans (but not the end user) are involved in training and using the algorithm [58].
Recent research has found that people do not want AI to make moral decisions [59]. Through a series of studies, Bigman and Gray found that people distrust AI to make moral decisions even when the outcome is favorable since "machines can neither fully think nor feel" [59]. They suggest that limiting AI to providing only advice and increasing the AI's perceived experience and expertise are ways to improve trust in AI for moral decisions [59].
Figure 3 summarizes the factors that influence trust for digital technologies, including software products and AI.
## 6 Blockchain
A discussion on trust of software is not complete without a mention of Blockchain. Blockchain has been nicknamed a "trustless" technology [60, 61]. It emerged due to a growing lack of trust in centralized systems which relied on trust of institutions (e.g. banks) [62]. The idea of a trustless technology goes back to the theory of Wang and Emurian that claims that trust is only needed when there is some level of risk or uncertainty involved [6]. Blockchain is a distributed, immutable ledger. This means transactions cannot be modified once they are written to the ledger and all participants have access to the shared ledger. Thus, "users subject themselves to the authority of a technological system that they are confident is immutable, rather than to the authority of centralized institutions which are deemed untrustworthy." [62]
Figure 3: Factors that affect trust of digital technology
De Filippi et al. prefer to label Blockchains as "confidence machines", since their underlying technology creates shared expectations and confidence in the correctness of its transactions. However, while many Blockchains remove the need to trust a single organization, they still require "distributed trust" since there are often a large number of actors who require a low level of trust [62]. There still needs to be trust that the data going into the Blockchain is correct, since compromised data cannot be corrected. These actors must be trusted not to collude and cause collective harm. It should also be noted that not all Blockchains follow this same model and some (e.g. The Linux Foundation's Hyperledger and Amazon's QLDB) are maintained by organizations, which means these organizations will also still require trust. Using blockchain may lead to trade-offs between trust and other concerns like energy consumption and sustainability [63]. These tradeoffs could in turn compromise trust because benevolence and integrity can be adversely affected if a Blockchain is perceived to be non-sustainable.
There does not appear to be literature on trust violations and repair in relation to blockchain technology.
|
2301.00295 | Packing Meets Topology | This note initiates an investigation of packing links into a region of
Euclidean space to achieve a maximal density subject to geometric constraints.
The upper bounds obtained apply only to the class of homotopically essential
links and even there seem extravagantly large, leaving much working room for
the interested reader. | Michael H. Freedman | 2022-12-31T21:42:19Z | http://arxiv.org/abs/2301.00295v2 | # Packing meets topology
###### Abstract.
This note initiates an investigation of packing links into a region of Euclidean space to achieve a maximal density subject to geometric constraints. The upper bounds obtained apply only to the class of homotopically essential links and even there seem extravagantly large, leaving much working room for the interested reader.
## 1. Introduction and Theorems
Optimal packing of balls into Euclidean space has a long history and recent astonishing successes, including Hales's resolution of the Kepler conjecture [1] and the optimality of the \(E_{8}\) and Leech lattices [17, 13], resulting in a 2022 Fields Medal to Maryna Viazovska.
In this note, we introduce the idea of packing links, rather than points, again with the goal of achieving the highest possible density subject to the geometric constraints that _certain_ link components must maintain a distance \(\geq\epsilon\) from _certain_ other components. There will be an observation about higher dimensions, but let us begin with packing classical links into Euclidean 3-space. In the classical sphere packing problem, _all_ points are constrained to have distance \(\geq\epsilon\) from each other. The analogous stipulation for links, that all components must maintain a distance \(\geq\epsilon\) from each other, is also of potential interest, but in that case, the coarse outline of the subject is broadly similar to point packings. That is, in both cases, each component takes up a definite amount of volume so no more than \(O(\epsilon^{-3})\) link components can be \(\epsilon\)-embedded into the unit cube, where \(\epsilon\)-embedded means no two components approach within \(\epsilon\) of each other. While this coarse upper bound holds for all link types, complicated links almost surely have smaller upper bounds. For example, we conjecture that if \(L_{n}\) is the link type consisting of \(n\)-fibers of the Hopf map \(S^{3}\to S^{2}\) then \(n\) can grow no more quickly than \(O(\epsilon^{-2})\).
However, this note focuses on a regime, _partial-\(\epsilon\)-embeddings_, where even the coarse answer can be quite mysterious. By a partial-\(\epsilon\)-embedding we mean that only certain specified pairs of components must stay \(\epsilon\) apart. In this context we are often left puzzled as to whether the number of link components that can fit into the unit cube is: (1) countably infinite1, (2) finite but unbounded, (3) exponential or super-exponential in \(\epsilon^{-1}\), or (4) polynomial in \(\epsilon^{-1}\).
Footnote 1: This case, of course, would require slightly relaxing the definition of “embedding” to a 1-1 map which, when restricted to any finite collection of circles, is a smooth embedding of the expected link type.
The theme of this note is well illustrated by our first example: \(H^{n}\), which by definition is the \(2n\)-component link type formed by taking \(n\) unlinked (split) copies of the Hopf link, the \(j^{\text{th}}\) copy having components \(r_{j}\) and \(b_{j}\), \(1\leq j\leq n\). We call an embedding of \(H^{n}\) _diagonally-\(\varepsilon\)-embedded_ if, for each \(j\), the components \(r_{j}\) and \(b_{j}\) maintain distance \(\geq\varepsilon\) from each other; no constraint is placed on the remaining pairs of components.
We use \(\Omega\)-notation, \(g(x)=\Omega(f(x))\), to mean that, for some \(a>0\) and sufficiently large \(x\), \(g(x)\leq af(x)\).
**Theorem 1**.: _If \(H^{n}\) has a diagonal-\(\varepsilon\)-embedding into the unit cube then \(n(\varepsilon)=\Omega\left(e^{a\varepsilon^{-3}}\right)\) for some \(a>0\)._
Proof.: Let \(I^{3}\) be the unit cube. Tile \(I^{3}\) by cells dual to a triangulation of \(I^{3}\). The cells should have the property that they are somewhat regular: each cell should have an inscribed sphere of radius \(>\frac{\varepsilon}{20}\) and a circumscribed sphere of radius \(<\frac{\varepsilon}{2}\). These dual cells have the property that any union of them is a PL 3-manifold with boundary. We prefer not to use the obvious coordinate sub-cubes of \(I^{3}\), because they fail to have this property.2 The number of cells in this tiling \(\tau\) is \(O(\varepsilon^{-3})\).
Footnote 2: For this first proof, the manifold property is actually not necessary, but for Theorems 2 and 6 the manifold property is an added convenience.
Assume \(H^{n}\) is diagonally-\(\varepsilon\)-embedded. For each \(j\), \(1\leq j\leq n\), 3-color the tiling \(\tau\) according to the rule that a cell is red if it meets \(r_{j}\), blue if it meets \(b_{j}\), and white otherwise. Call this coloring \(c_{j}\). Now we decorate \(c_{j}\) with additional homological information. Let \(R_{j}\) (\(B_{j}\)) be the union of the red (blue) cells under \(c_{j}\). The first homology \(H_{1}(R_{j};\mathbb{Z}_{2})\) is a vector space over \(\mathbb{Z}_{2}\) of dimension \(d_{j}=\Omega(\varepsilon^{-3})\), for which we choose a basis \(b_{j1},\ldots,b_{jd_{j}}\). Similarly, \(H_{1}(B_{j};\mathbb{Z}_{2})\) has dimension \(e_{j}=\Omega(\varepsilon^{-3})\) with basis \(f_{j1},\ldots,f_{je_{j}}\). Let \(L_{j}\) be the \(d_{j}\times e_{j}\) matrix of \(\mathbb{Z}_{2}\)-linking numbers, \(L_{j}^{pq}=\operatorname{Link}(b_{jp},f_{jq})\).
Homologically, we may express the class of \(r_{j}\) in \(H_{1}(R_{j})\) (the class of \(b_{j}\) in \(H_{1}(B_{j})\)) as \(r_{j}=\sum_{p=1}^{d_{j}}x_{jp}b_{jp}\) (\(b_{j}=\sum_{q=1}^{e_{j}}y_{jq}f_{jq}\)). The mod 2 linking number of the Hopf link \(H_{j}\) can then be recovered as:
\[1=\operatorname{Link}(b_{j},r_{j})=L_{j}^{pq}x_{jp}y_{jq}, \tag{1}\]
Einstein summation convention in effect.
Whereas \(c_{j}\) is the \(j^{\text{th}}\) coloring, let \(\hat{c}_{j}\) be the decorated \(j^{\text{th}}\) coloring, where the decoration amounts to fixing the mod 2 numbers \(\{x_{jp}\}\), \(1\leq p\leq d_{j}\), and \(\{y_{jq}\}\), \(1\leq q\leq e_{j}\), which express, within the arbitrarily chosen bases, how \(b_{j}\) and \(r_{j}\) lie homologically in \(H_{1}(B_{j};\mathbb{Z}_{2})\) and \(H_{1}(R_{j};\mathbb{Z}_{2})\).
How many possible decorated colorings, \(\#_{\text{DC}}(\varepsilon)\) can there be?
\[\#_{\text{DC}}(\varepsilon)=\Omega\left(3^{q^{\prime}\varepsilon^{-3}}\cdot 2 ^{q^{\prime\prime}\varepsilon^{-3}}\cdot 2^{q^{\prime\prime}\varepsilon^{-3}} \right)=\Omega\left(e^{a\varepsilon^{-3}}\right), \tag{2}\]
all constants \(>0\).
The first factor bounds the number of 3-colorings and the second two factors the possible values of the binary strings \(\{x_{jp}\}\) and \(\{y_{jq}\}\), respectively.
Now, by the pigeonhole principle, if \(n(\varepsilon)\) were _not_ \(\Omega(e^{a\varepsilon^{-3}})\), two Hopf links \(H_{i}\) and \(H_{j}\), \(i\neq j\), within \(H^{n}\) must determine the _same_ decorated coloring \(\hat{c}_{i}=\hat{c}_{j}\). But then \(H_{i}\) and \(H_{j}\) have identical thickenings, \(B_{i}=B_{j}\) and \(R_{i}=R_{j}\), and identical homological data, so (1) can also be read as a computation of the _off-diagonal_ linking number:
\[0=\operatorname{Link}(b_{j},r_{i})=L_{j}^{pq}x_{jp}y_{iq}=L_{i}^{pq}x_{jp}y_{ jq}=L_{i}^{pq}x_{ip}y_{iq}=1 \tag{3}\]
This contradiction proves the theorem.
Before leaving this example, what packings can we imagine to supply a lower bound on \(n(\varepsilon)\), for \(H^{n}\)? The simplest starting point would be to link two circles of radius \(\varepsilon\) into a small, rigid, Hopf link, and then throw copies of these "linked key rings" into a unit box, shaking gently until full. This seems to yield \(n=O(\varepsilon^{-3})\). But then we realize the box is not as full as we thought. We can sprinkle in a second generation of orthogonally linked pairs of radius \(3\varepsilon\) circles, ignoring the presence of the first generation. By ignoring the first generation, we will create many unwanted linkings (linking number \(=1\)) with the first generation, but these can be undone by "finger moves" of length \(\leq\varepsilon\) applied to the second generation. By the triangle inequality, the second generation will still satisfy the diagonal-\(\varepsilon\)-embedded condition after all finger moves. We are still not done; we can add a third generation of orthogonal Hopf links of radius \(7\varepsilon\), which will retain the diagonal-\(\varepsilon\) condition after finger moves of length \(\leq 3\varepsilon\) recover the correct link type, \(H^{n}\). We can of course iterate with Hopf links of radii \(\{r_{i}\}\), \(r_{0}=\varepsilon\), \(r_{i+1}=2r_{i}+\varepsilon\), until \(r_{i}\) approaches unit size. From scale considerations, but ignoring unimportant boundary effects, we see that if \(n_{i}\) is the number of \(i^{\text{th}}\) generation Hopf links in the box, then \(n_{0},n_{1},n_{2},\ldots\) is dominated by the geometric series \(n_{0},2^{-3}n_{0},4^{-3}n_{0},\ldots\). So summing this series we find that the total number \(n=\sum n_{i}\) of the Hopf links satisfies
\[n(\varepsilon)<\frac{8}{7}n_{0} \tag{4}\]
So, in the end, all our extra work only changed (slightly) the leading coefficient. Not being able to find anything more clever, this leaves the huge gap between \(O(\varepsilon^{-3})\) and \(O(e^{\alpha\varepsilon^{-3}})\), in which the truth must lie. Our conjecture is that \(n=O(\varepsilon^{-3})\), but the proof calls out for a new idea.
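A minimal numerical sketch of the generational construction above (all values illustrative): the radii follow \(r_{0}=\varepsilon\), \(r_{i+1}=2r_{i}+\varepsilon\), the per-generation counts are dominated by \(8^{-i}n_{0}\), and the total therefore stays below \(\frac{8}{7}n_{0}\).

```python
# Sketch of the generational Hopf-link construction (illustrative values only).
eps = 1e-3
n0 = 1.0                      # normalize the count of first-generation links
total, r, i = 0.0, eps, 0
while r < 1.0:                # iterate until the radius approaches unit size
    total += n0 * 8.0 ** (-i) # dominating geometric bound on the i-th generation count
    r = 2 * r + eps           # r_{i+1} = 2 r_i + eps
    i += 1
print(total, 8 / 7)           # the partial sum stays below 8/7 ~= 1.1429
```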
Before discussing other link types, let us make a quick remark regarding higher dimensions. If \(d=p+q+1\), then two disjoint closed submanifolds of \(\mathbb{R}^{d}\) have a well-defined mod 2 linking number if they have dimensions \(p\) and \(q\) respectively. Now in \(\mathbb{R}^{d}\) let \(H^{n}\) denote any link of \(2n\) components \(\{r_{j},b_{j}\}\), \(1\leq j\leq n\), with mod 2 linking numbers given by:
\[L(r_{i},r_{j})=0,\;L(b_{i},b_{j})=0,\;i\neq j,\;\text{and}\;L(r_{i},b_{j})= \delta_{ij}\]
Identical reasoning shows that the maximum possible \(n(\varepsilon)\), \(n_{\max}(\varepsilon)\), satisfies:
\[O(\varepsilon^{-d})\leq n_{\max}(\varepsilon)\leq e^{a(d)\varepsilon^{-d}} \tag{5}\]
for some \(a>0\), which actually generalizes Theorem 1 even when \(d=3\). \(n_{\max}(\varepsilon)\) is the largest number such that a \(2n(\varepsilon)\)-component link can be embedded in the unit \(d\)-cube with the specified linking and \(\text{dist}(r_{j},b_{j})\geq\varepsilon\), \(1\leq j\leq n\).
Returning to dimension \(d=3\), let us give a further example, which steps away, slightly, from linking number. Consider the problem of packing the disjoint union \(B^{n}\) of \(n\) copies of a three-component link \(B\) (disjoint union again meaning that smoothly embedded spheres separate the copies), such as the Borromean rings, which has all linking numbers \(0\) and Milnor's \(\overline{\mu}\)-invariant \(\overline{\mu}_{123}(B)\not\equiv 0\) mod 3 [Mil54]. \(B\) has components \(l_{1},l_{2},l_{3}\), and the \(j^{\text{th}}\) copy is \(B_{j}=(l_{j1},l_{j2},l_{j3})\). Again, colors \(r_{1},r_{2},r_{3}\) are associated to the three components. We now enforce the diagonal-\(\varepsilon\)-condition: for each \(j\), \(1\leq j\leq n\), \(\text{dist}(l_{ji},l_{ji^{\prime}})\geq\varepsilon\) whenever \(i\neq i^{\prime}\).
Let \(n_{B}(\varepsilon)\) be the largest \(n\) for which such an embedding exists, or \(\infty\) if no such bound exists.
**Theorem 2**.: _For all \(\varepsilon>0\), the Borromean packing number \(n_{B}(\varepsilon)\) is indeed a finite integer, with \(n_{B}(\varepsilon)=\Omega(e^{\alpha\varepsilon^{-9}})\)._
Proof.: We begin, as before, with a generic tessellation of \(I^{3}\) of scale between \(\frac{\epsilon}{20}\) and \(\frac{\epsilon}{2}\). Now, for each \(j\), \(1\leq j\leq n\), make a 4-coloring \(c_{j}\) of \(I^{3}\) by the rule that a cell gets the color \(r_{i}\), \(1\leq i\leq 3\), of the component it meets; if it meets none then it is white. But now we proceed differently, for the _decoration_: homology is wholly insufficient. To motivate our new decoration recall a classic:
**Theorem 3** (Burnside).: _Any finitely generated group of exponent 3 is finite._
_Note._ To estimate \(n_{B}(\epsilon)\) in Theorem 2, we used the calculation of [10] that the order of the free, restricted Burnside group is \(|B(m,3)|=3^{m+\binom{m}{2}+\binom{m}{3}}\). This will imply the bound stated in Theorem 2.
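To see how the \(\varepsilon^{-9}\) in Theorem 2 arises from this order formula, note that with \(m=O(\varepsilon^{-3})\) generators the exponent \(m+\binom{m}{2}+\binom{m}{3}\) grows like \(m^{3}/6=O(\varepsilon^{-9})\); a small numerical sketch (illustrative values of \(m\)):

```python
from math import comb

# log_3 |B(m,3)| = m + C(m,2) + C(m,3) grows cubically in the number m of generators,
# so m = O(eps^-3) generators give 3^(O(eps^-9)) possible decorations.
for m in (10, 100, 1000):
    exponent = m + comb(m, 2) + comb(m, 3)
    print(m, exponent, exponent / m**3)   # the ratio approaches 1/6
```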
We create a bespoke invariant to exploit Burnside's theorem.
**Definition**.: Define _3-link-homotopy_ to be Milnor's classical link-homotopy [14] (individual components may cross themselves during the homotopy but not other components) with the additional ad hoc relation: at any moment during the homotopy, any component may be band summed to \(g^{3}\), where \(g\) is a free loop in the complement of the other components. The cube means wrap 3 times around \(g\).
Whereas before, the coloring \(c_{j}\) was decorated with homological information, now the decoration \(\hat{c}_{j}\) assigns to the submanifold \(C_{ji}\) colored \(r_{i}\) (according to our rule for the \(j^{\text{th}}\) coloring \(c_{j}\)) the conjugacy class \([l_{ji}]\) of the component \(l_{ji}\) in the Burnside group \(\pi_{1}^{3}(C_{ji})\), where by definition, \(\pi_{1}^{3}(X)\) means \(\pi_{1}(X)\) with the additional relations that all elements cube to the identity.
**Lemma 4**.: _The 3-link-homotopy class of a link \(B_{j}\) in \(I^{3}\) can be recovered from the decorated coloring \(\hat{c}_{j}\)._
Proof.: Since \(C_{ji}\) and \(C_{ji^{\prime}}\) are disjoint for \(i\neq i^{\prime}\), a homotopy of \(B_{j}\) in which each component \(l_{ji}\) stays within its \(C_{ji}\) is a link-homotopy. Furthermore, if each \(l_{ji}\) is permitted to vary in \(C_{ji}\) within its \(\pi_{1}^{3}(C_{ji})\) conjugacy class, this is a special case of 3-link-homotopy. Thus, if each \(l_{ji}\) is rechosen within its \(\pi_{1}^{3}(C_{ji})\) conjugacy class, the 3-link-homotopy class is preserved.
**Lemma 5**.: _For a 3-component link with vanishing linking numbers3, \(\overline{\mu}_{123}(B)\) is conserved mod 3 under 3-link-homotopy._
Footnote 3: It is actually only necessary to assume \(3\mid\gcd(\text{link}(l_{i},l_{j}))\), \(i\neq j\).
Proof.: We may assume that during the 3-link-homotopy only one component moves or is altered at any given time. The "cyclic symmetry" theorem ([14] Theorem 6) says that w.l.o.g. we may assume that the active component is the one being Magnus-expanded in the link group of the others. To recall, for any \(k\)-component link \(L\), \(\overline{\mu}_{I}(L)\), \(I=i_{1},\ldots,i_{k}\) distinct indices, is computed by expanding, as below, the component \(l_{i_{k}}\) in the polynomial ring denoted by \(R[x_{i_{1}},\ldots,x_{i_{k-1}}]\) [14]. This is Milnor's notation for the integers with \(k-1\) non-commuting variables adjoined which are also "non-repeating," meaning that one divides out by the ideal generated by monomials in which any variable occurs more than once.
\[\begin{array}{c}[l_{i_{k}}]\in\pi_{1}(I^{3}\setminus(l_{i_{1}}\cup\cdots\cup l_{i_{k-1}}))\\ \downarrow\\ M(I^{3}\setminus(l_{i_{1}}\cup\cdots\cup l_{i_{k-1}}))\\ \uparrow\\ FM_{k-1}(m_{i_{1}},\ldots,m_{i_{k-1}})\xrightarrow{\text{Magnus}}R[x_{i_{1}},\ldots,x_{i_{k-1}}]\\ m_{i_{j}}\mapsto 1+x_{i_{j}}\\ m_{i_{j}}^{-1}\mapsto 1-x_{i_{j}}\end{array} \tag{6}\]
where \(m_{i_{j}}\) are meridians to \(l_{i_{j}}\), \(M\) denotes the Milnor link group obtained by adding the relations that each meridian commutes with all its conjugates, and FM is the corresponding free Milnor group generated by \(m_{i_{1}},\ldots,m_{i_{k-1}}\) subject only to these commutation relations. As the diagram indicates, \([l_{i_{k}}]\) is first projected, then lifted to \(FM_{k-1}\), and finally expanded.
Then by definition, \(\overline{\mu}_{I}(L)=\) the coefficient of \(x_{i_{1}}\cdots x_{i_{k-1}}\) in \(\operatorname{Magnus}([l_{i_{k}}])\). Any ambiguity in the expansion due to the choice of lifting constitutes the indeterminacy of that \(\overline{\mu}_{I}\). For general background on \(\overline{\mu}\) invariants see [10, 11, 12].
As the \(k^{\text{th}}\) component moves by link-homotopy the element and its expansion are constant. Adding the cube \(g^{3}\) of a loop \(g\) to \(l_{i_{k}}\) multiplies its Magnus expansion \(M\) by the Magnus expansion \(M_{g^{3}}\) of \(g^{3}\), \(M\to MM_{g^{3}}=M(M_{g})^{3}\).
Since \(B\) has 3 components, \(k-1=2\), \((M_{g})^{3}\) is the cube of some monic polynomial in two variables \(x_{1}\) and \(x_{2}\): \((M_{g})^{3}=(1+c_{1}x_{1}+c_{2}x_{2}+c_{3}x_{1}x_{2}+\ldots)^{3}\). A brute force consideration of the 27 possible coefficient values mod 3 shows that in all cases the coefficients of \(x_{1}\), \(x_{2}\), and \(x_{1}x_{2}\) in \((M_{g})^{3}\) are all divisible by 3. Multiplying out we see that \(\overline{\mu}_{123}(B)\) mod 3 is invariant under 3-link-homotopy.
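The brute-force check is small enough to spell out. A minimal Python sketch follows; it loops over all residues mod 3 of the four non-constant coefficients of \(M_{g}\) (81 cases, which subsumes the 27 mentioned above), since divisibility by 3 of the cube's coefficients depends only on these residues.

```python
from itertools import product

# Non-repeating monomials in the two non-commuting variables x1, x2.
MONOMIALS = [(), (1,), (2,), (1, 2), (2, 1)]

def mul(a, b):
    """Multiply two elements of R[x1,x2] given as {monomial: coefficient} dicts;
    any product monomial in which a variable repeats lies in the repeating ideal and is dropped."""
    out = {m: 0 for m in MONOMIALS}
    for ma, ca in a.items():
        for mb, cb in b.items():
            m = ma + mb
            if len(set(m)) == len(m):   # no repeated variable
                out[m] += ca * cb
    return out

# For every residue choice of the coefficients of M_g, the cube (M_g)^3 has the
# coefficients of x1, x2, x1*x2 (and x2*x1) divisible by 3.
for c1, c2, c12, c21 in product(range(3), repeat=4):
    Mg = {(): 1, (1,): c1, (2,): c2, (1, 2): c12, (2, 1): c21}
    Mg3 = mul(mul(Mg, Mg), Mg)
    assert all(Mg3[m] % 3 == 0 for m in [(1,), (2,), (1, 2), (2, 1)])
print("all residue choices verified")
```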
The number \(\#_{c}(\varepsilon)\) of possible colorings is \(\#_{c}(\varepsilon)=\Omega(e^{a^{\prime}\varepsilon^{-3}})\), and the number of decorations possible for a coloring \(c\) is bounded by the product of the orders of the Burnside groups, \(\Omega(e^{a^{\prime\prime}\varepsilon^{-9}})\), over the colored (not white) regions. Thus the number of possible decorated colorings \(\#_{\hat{c}}(\varepsilon)\) has a similar bound as a function of \(\varepsilon\). As in Theorem 1, the pigeonhole principle tells us that if we could place \(n>\#_{\hat{c}}(\varepsilon)\) copies of \(B\) in \(I^{3}\), obeying the diagonal-\(\varepsilon\)-condition, then for some \(1\leq i<j\leq n\), \(B_{i}\) and \(B_{j}\) will determine identical decorated colorings.
But Lemma 4 now tells us three things: \(B_{i}\) has 3-link-homotopy type \(B\), \(B_{j}\) has 3-link-homotopy type \(B\), and \(B_{ij}\) has 3-link-homotopy type \(B\), where \(B_{ij}\) is the link obtained by starting with \(B_{i}\) and then swapping out any one component of \(B_{i}\) for the corresponding component of \(B_{j}\). The first two conclusions are as we expect, but the third sounds wrong: because \(B_{i}\) and \(B_{j}\) are split (separated by a smoothly embedded 2-sphere), \(B_{ij}\) is a split link and its \(\overline{\mu}_{123}\)-invariant must vanish. But this vanishing contradicts Lemma 5, which says any link (including \(B_{ij}\)) in the 3-link-homotopy class of \(B\) has its \(\overline{\mu}_{123}\)-invariant not congruent to 0 mod 3. This proves Theorem 2.
Replacing Burnside groups with the mod \(p\) lower central series (\(p\)-lcs) quotients allows a joint extension of Theorems 1 and 2, although with an exponentially weaker upper bound.
**Theorem 6**.: _Let \(E\) be any homotopically essential link of \(k\) components, \(E=(e_{1},\ldots,e_{k})\), and let \(E^{n}\) be the disjoint union of \(n\) copies of \(E\). For every \(\varepsilon>0\) there is a largest \(n\), \(n_{\max}\), such that \(E^{n}\) embeds in the unit cube \(I^{3}\) with the property that for all \(j\), \(1\leq j\leq n\), \(\operatorname{dist}(e_{ji},e_{ji^{\prime}})\geq\varepsilon\) for \(1\leq i\neq i^{\prime}\leq k\). Moreover, \(n_{\max}=\Omega\Big{(}e^{a\,p^{(a^{\prime}\varepsilon^{-3})^{k}}}\Big{)}\), where \(p\) is the smallest prime not dividing the first nontrivial non-repeating \(\overline{\mu}\)-invariant of \(E\), and \(a,a^{\prime}>0\) are fixed constants._
Proof.: Begin in the familiar fashion by creating a \((k+1)\)-coloring \(c_{j}\) of a fixed \(\varepsilon\)-scale tessellation of \(I^{3}\) in which each cell meeting \(e_{ji}\) is colored \(r_{i}\) and the remaining cells are colored white. Similar to Theorems 1 and 2, we need to specify some finite amount of data about \(e_{ji}\) in its \(r_{i}\)-colored region \(R_{ji}\) sufficient to (1) certify the homotopically essential nature of \(E_{j}\) and (2) create the contradiction that a related split link \(E_{ij}\) would also be homotopically essential.
By induction, it suffices to consider the case that \(E\) is almost homotopically trivial, meaning all of its proper sub-links are homotopically trivial, or more algebraically, that all non-repeating \(\overline{\mu}\)-invariants of length \(<k\) vanish.
As in the proof of Theorem 2, cyclic symmetry implies that we may focus on a single "active" component \(l_{ji_{k}}\) (and going forward drop the \(j\)-index for the embedding and replace \(i_{k^{\prime}}\) by a single index), project \(l_{ji_{k}}\), now denoted simply by \(l_{k}\), to the Milnor group, and choose a lift \(\alpha_{k}\) to the free Milnor group \(FM_{k-1}\). The finite data we consider is the image of \(\alpha_{k}\) in \(Q_{k}^{p}\coloneqq FM_{k-1}/[FM_{k-1}]_{k}^{p}\), where for any group \(G\), \([G]_{n}^{p}\) is the \(n^{\text{th}}\)-term of the mod \(p\) lower central series of \(G\). This is defined by saying \(G_{1}=G\), and \(G_{m}\) is generated (\(=\) normally generated) by the words \(aua^{-1}u^{-1}v^{p}\), \(a\in G\), and \(u,v\in G_{m-1}\).
Regarding the bound, its essential ingredient is that the order \(\left|Q_{k}^{p}\right|=\Omega\left(p^{\left((a^{\prime}\varepsilon^{-3})k\right)}\right)\). \(\pi_{1}(R_{ji})\) has \(g=\Omega(\varepsilon^{-3})\) generators, so this also bounds the size of the free Milnor group under consideration. The quotient \(Q_{k}^{p}\) is \((k-1)\)-stage nilpotent with at most \(g^{s}\) new (twisted) \(\mathbb{Z}_{p}\) factors added during the \(s^{\text{th}}\) central extension. Thus the total number of copies of \(\mathbb{Z}_{p}\) twisted together to make the \(p\)-group \(Q_{k}^{p}\) is \(\Omega(\varepsilon^{-3})^{k-1}\), giving the order bound.
Returning to the main line of the proof, we need:
**Lemma 7**.: _Suppose \(\beta\in[FM(m_{1},\ldots,m_{k-1})]_{i}^{p}\), \(1\leq i\leq k-1\), then \(\operatorname{Magnus}(\beta)\) maps to \((1+\text{monomials of degree}\geq i)\) under reduction of coefficients \(\mathbb{Z}\to\mathbb{Z}_{p}\), inducing \(R[x_{1},\ldots,x_{k-1}]\xrightarrow{\pi_{p}}R_{p}[x_{1},\ldots,x_{k-1}] \coloneqq\mathbb{Z}_{p}[x_{1},\ldots,x_{k-1}]/(\text{repeating ideal})\), i.e. \(\pi_{p}(\operatorname{Magnus}(\beta))=(1+\text{terms of degree}\geq i)\)._
Proof.: By induction. When \(i=1\) the statement is that a \(p^{\text{th}}\) power has no linear terms when expanded into \(R_{p}[x_{1},\ldots,x_{k-1}]\). Now assume that Lemma 7 is true for \(i-1\) and expand \(aua^{-1}u^{-1}v^{p}\), where \(a\in FM(m_{1},\ldots,m_{k-1})\), and \(u,v\in[FM(m_{1},\ldots,m_{k-1})]_{i-1}^{p}\). The lowest positive degree (\(=i-1\)) monomials in \(\operatorname{Magnus}(u)\) and \(\operatorname{Magnus}(u^{-1})\) are identical except for reversed signs; consequently, the \(aua^{-1}u^{-1}\) factor expands to \((1+\text{monomials of degree}\geq i)\) as the degree \(=i-1\) terms all cancel. The \(v^{p}\) factor has the same form since the degree \(i-1\) terms are now repeated \(p\) times each. Consequently, the product \(aua^{-1}u^{-1}v^{p}\) also expands to this form.
The \(p\)-lcs subgroups are characteristic: they map to each other under homomorphisms and if \(F\to G\) is an epimorphism then \([F]_{k}^{p}\) maps epimorphically to \([G]_{k}^{p}\). Apply these facts to the maps:
\[\pi_{1}(R_{k})\to\pi_{1}(I^{3}\setminus e_{1}\cup\cdots\cup e_{k-1})\to M(I^{3}\setminus e_{1}\cup\cdots\cup e_{k-1})\gets FM(I^{3}\setminus e_{1}\cup\cdots\cup e_{k-1}) \tag{7}\]
## 3. Acknowledgements
The question studied here arose while working with Michael Starbird on [11]. An \(\Omega(\varepsilon^{-3})\) bound for the Hopf link problem might offer an alternative proof strategy for that paper's main theorem. I would also like to thank Slava Krushkal for insightful discussions.
|
2308.16842 | Non-standard power grid frequency statistics in Asia, Australia, and
Europe | The power-grid frequency reflects the balance between electricity supply and
demand. Measuring the frequency and its variations allows monitoring of the
power balance in the system and, thus, the grid stability. In addition, gaining
insight into the characteristics of frequency variations and defining precise
evaluation metrics for these variations enables accurate assessment of the
performance of forecasts and synthetic models of the power-grid frequency.
Previous work was limited to a few geographical regions and did not quantify
the observed effects. In this contribution, we analyze and quantify the
statistical and stochastic properties of self-recorded power-grid frequency
data from various synchronous areas in Asia, Australia, and Europe at a
resolution of one second. Revealing non-standard statistics of both empirical
and synthetic frequency data, we effectively constrain the space of possible
(stochastic) power-grid frequency models and share a range of analysis tools to
benchmark any model or characterize empirical data. Furthermore, we emphasize
the need to analyze data from a large range of synchronous areas to obtain
generally applicable models. | Xinyi Wen, Mehrnaz Anvari, Leonardo Rydin Gorjao, G. Cigdem Yalcin, Veit Hagenmeyer, Benjamin Schafer | 2023-08-31T16:22:43Z | http://arxiv.org/abs/2308.16842v1 | # Non-standard power grid frequency statistics in Asia, Australia, and Europe
###### Abstract
The power-grid frequency reflects the balance between electricity supply and demand. Measuring the frequency and its variations allows monitoring of the power balance in the system and, thus, the grid stability. In addition, gaining insight into the characteristics of frequency variations and defining precise evaluation metrics for these variations enables accurate assessment of the performance of forecasts and synthetic models of the power-grid frequency. Previous work was limited to a few geographical regions and did not quantify the observed effects. In this contribution, we analyze and quantify the statistical and stochastic properties of self-recorded power-grid frequency data from various synchronous areas in Asia, Australia, and Europe at a resolution of one second. Revealing non-standard statistics of both empirical and synthetic frequency data, we effectively constrain the space of possible (stochastic) power-grid frequency models and share a range of analysis tools to benchmark any model or characterize empirical data. Furthermore, we emphasize the need to analyze data from a large range of synchronous areas to obtain generally applicable models.
bimodal, frequency, linear test, correlation, SDE modeling, power grid, Hurst exponent, statistics, heavy tails +
Footnote †: Submitted to the 23rd Power Systems Computation Conference (PSCC 2024).
## I Introduction
A power grid is a complex and interconnected network that enables the transmission and distribution of electricity from generators to consumers [1]. It is a vital infrastructure ensuring homes, companies, and industries have access to a robust supply of electricity [2]. To operate, a power grid must maintain a constant balance between electricity supply and demand. Any deviation from this balance can lead to grid instability, blackouts, or infrastructure damage. The power-grid frequency is a measurable quantity that indicates the operational status of a power grid. It reflects the rotational speed of the numerous synchronous machines within one area so that we refer to a region with one shared frequency as a synchronous area. An excessive feed-in of power into the grid causes an increase in frequency, whereas an insufficient supply results in a decrease in frequency. Sudden changes in this frequency can cause grid instability, which is why maintaining consistent frequency levels is important. The power-grid frequency typically remains within a few percentage points of a reference value of 50 Hz or 60 Hz through the installation of various balancing and control systems in place [3]. These control measures monitor and stabilize the frequency to keep it within a permissible range [4, 5, 6].
To use expensive control measures as efficiently as possible, e.g. via forecasting algorithms, a thorough understanding and modeling of the power-grid frequency is necessary. Therefore, the analysis of the stochastic nature of the power-grid frequency has garnered significant interest and attention from mathematicians, statisticians, and physicists alike [3, 7, 8]. Non-Gaussian frequency distributions have been discussed for European synchronous areas [9, 10, 11]. Furthermore, there have been studies that analyze frequency deviations and make predictions using the Fokker-Planck equation [12]. However, a thorough characterization of the stochastic properties constraining potential models is missing. Moreover, previous works have often focused on European areas, while for example measurements from Asia have not been thoroughly investigated or compared.
We substantially expand previous research [11, 13] by conducting a rigorous quantitative analysis of the statistical properties of a large class of synchronous areas. Our main objective is to establish quantitative measures that enable the comparison of different synchronous areas with each other and with synthetic models. In particular, consider an equation of the form
\[\frac{df}{dt}=g(f,t)-\xi(f,t), \tag{1}\]
where \(f\) is power-grid frequency, \(g\) is the (unknown) intrinsic dynamics of the system, and \(\xi\) represents noise, e.g. \(\xi=\frac{dW}{dt}\) where \(W\) could be the Wiener process. Both \(g\) and \(\xi\) are potentially explicitly dependent on the frequency value \(f\) and time \(t\). We now wish to understand how the empirical data constrains potential deterministic functions \(g\) or stochastic contributions \(\xi\).
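For concreteness, a minimal Euler-Maruyama sketch of (1) with an Ornstein-Uhlenbeck ansatz, \(g(f)=-\lambda(f-f_{\mathrm{ref}})\) and white noise \(\xi\), is given below; all parameter values are purely illustrative and are not fitted to any of the recordings analyzed here.

```python
import numpy as np

f_ref = 50.0        # reference frequency [Hz]
lam = 0.02          # linear damping / primary control constant [1/s] (assumed)
eps = 0.005         # noise amplitude [Hz / sqrt(s)] (assumed)
dt = 1.0            # time step matching the 1-s resolution of the recordings
steps = 24 * 3600   # one day of synthetic frequency data

rng = np.random.default_rng(42)
f = np.empty(steps)
f[0] = f_ref
for t in range(steps - 1):
    drift = -lam * (f[t] - f_ref)                        # deterministic part g(f)
    noise = eps * np.sqrt(dt) * rng.standard_normal()    # stochastic part xi
    f[t + 1] = f[t] + drift * dt + noise
```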
This article is structured as follows: First, we give an overview of the multi-continent dataset (II). We then investigate fundamental statistical properties and the frequency distribution of empirical power grid data. Additionally, we compare the degree of bimodality across different synchronous areas (III). Furthermore, we calculate the one-step increment in order to evaluate frequency fluctuations (IV). To establish a benchmark for comparison, we employ three distinct datasets derived from various stochastic differential models [14]. We then evaluate and compare the degree of linearity between empirical power grid data and synthetic data (V). Lastly, we delve into a detailed analysis of the correlations in the system, suggesting non-Markovian behavior in the recorded frequency data (VI). In the concluding section (VII), we present our findings and engage in a comprehensive discussion to provide a deeper understanding of the observed statistical properties and dynamics of the power-grid frequency.
## II Data Overview
Many previous studies, in particular ones discussing open data, have focused on European regions [12, 15, 16, 17]. Meanwhile, there is limited research [18] that systematically and quantitatively compares the frequency characteristics of power grids across Asia, Australia, and Europe. One important reason is the limited availability of public data on power-grid frequency, making it difficult for researchers to comprehensively analyze the frequency behavior of power grids in different regions. To address this issue, it is necessary to encourage data sharing and collaboration among industry actors and academics - as well as to support initiatives that collect and disseminate such data. Furthermore, conducting comparative analyses that consider these diverse geographical areas becomes essential. Indeed, by undertaking a systematic study of power-grid frequency characteristics across different geographical areas, we can examine the parallels, discrepancies, and underlying causes influencing power-grid frequency behavior.
To collect our dataset, we utilize a GPS-synchronized frequency acquisition device called an Electrical Data Recorder (EDR) developed at the Institute for Automation and Applied Informatics, Karlsruhe Institute of Technology, Germany [19, 20]. The EDR provides data similar to a Phasor Measurement Unit while allowing easy transfer and processing of the raw and processed signals. Our primary mode of data collection involves connecting the EDR to conventional power sockets in an office or at a hotel to capture the voltage waveforms as experienced on the low-voltage distribution grid. Such local voltage phasor measurements allow the extraction of frequency values, which are essentially identical on the low-voltage and the high-voltage grid and thereby indicate the state of the entire synchronous area [16, 21]. Saving the original waveform information allows us to increase the accuracy of our data with further post-processing [22, 23]. In the present study, we collected power-grid recordings from various locations in the Southeast Asian region, including Indonesia, Malaysia, and Singapore, as well as measurements from the Australia National Electricity Market (NEM area).
The data collection period at each location spanned 10 to 25 days, from October 30th, 2022 to January 9th, 2023. We evaluate the raw data to obtain frequency data with a resolution of one second. The selection of these countries in this study is based on their geographical diversity and distinct characteristics in terms of generation mixture and grid configurations. Each recording corresponds to a distinct synchronous region, wherein different countries exhibit a unique combination of energy sources, including fossil fuels and renewables. This diversity in energy sources along with different operational control or market rules in each synchronous area contributes to variations in grid behavior and dynamics across these regions. Furthermore, we also obtained power-grid frequency data from European areas, namely Iceland, Ireland, and the Balearic Islands. For the European areas, the data collection timeframe was longer, ranging from September 29th, 2019 to February 22nd, 2022, covering at least three months in each location. In Fig. 1, we highlight the location of the measurement points on a world map. To perform an accurate analysis, we remove any intervals lacking EDR-recorded frequency data, ensuring a continuous dataset.
Additionally, we employ stochastic models, such as a Langevin process [14] and a fractional Brownian motion-based model [24] to generate synthetic data. Incorporating these synthetically generated datasets in our analysis serves a dual purpose: firstly, it allows us to gain a more comprehensive understanding of the underlying processes, and secondly, it enables us to validate and verify the efficacy of the methods we have employed in our study.
## III Quantify bimodality
To gain an initial impression of the distribution of the power-grid frequency, we utilize kernel density estimation to display the probability density function (PDF). This approach visualizes the shape of the distribution and identifies the central tendencies. In addition, we compute the normalized third and fourth moments, i.e. the skewness and the kurtosis of the data. The skewness is a measure of the asymmetry of a distribution, with a value of zero indicating a symmetric (though not necessarily Gaussian) distribution. Meanwhile, the kurtosis measures the behavior of the tails, i.e. of large deviations. A Gaussian distribution has a kurtosis of 3, so values below 3 indicate light tails and values above 3 indicate heavy tails. We consistently observe kurtosis values below 3, in contrast to earlier results for European data [9, 13].
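A minimal sketch of these descriptive statistics; the file name is a placeholder for any of the 1-s resolution recordings and is not part of the released data set.

```python
import numpy as np
from scipy import stats

f = np.loadtxt("frequency_singapore.txt")   # hypothetical file with one frequency value [Hz] per second

print("skewness:", stats.skew(f))                     # 0 for a symmetric distribution
print("kurtosis:", stats.kurtosis(f, fisher=False))   # 3 for a Gaussian; <3 light tails, >3 heavy tails

# Kernel density estimate of the probability density function around 50 Hz.
kde = stats.gaussian_kde(f)
grid = np.linspace(f.min(), f.max(), 500)
pdf = kde(grid)
```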
All regions in our study operate at a reference value of 50 Hz around which the grid frequency fluctuates. Naively, we could expect that the most probable value of the grid
frequency would be 50 Hz. Surprisingly, Fig. 2 shows that the PDF of the grid frequencies across Asian regions, with the exception of Indonesia, displays two peaks. This indicates that the power-grid frequency follows a bimodal distribution, rather than a unimodal distribution, potentially due to deadbands in the control [12, 14].
In order to illustrate the disparity in the distributional properties of our power-grid frequency data, Fig. 3(a) showcases three distinct density curves. The curves correspond to two synthetic baseline models and one empirical distribution observed in the data from Singapore.
To quantify the distributional properties of our power-grid frequency data, we calculate the dip statistic [25], which measures the degree of bimodality, or equivalently, the deviation from unimodality. Specifically, it quantifies the distance between the empirical distribution and the closest unimodal distribution, with larger values indicating a greater departure from unimodality and providing more evidence for bimodality. Fig. 3(b) provides an overview of the dip statistic values for the power-grid frequency data collected in our study, as well as two synthetic datasets that follow a non-standard distribution and a unimodal distribution respectively, for reference. These plots allow us to compare the different distribution characteristics of the datasets. Singapore demonstrates the highest degree of bimodality among the datasets examined, as indicated by the largest value of the dip statistic. This result is consistent with our expectations, given the frequency distribution plot for Singapore shows the most pronounced double-peak pattern, see Fig. 2.
Conversely, both Indonesia and Iceland display zero values of dip statistics, implying a unimodal distribution. This aligns with our expectations because their PDFs are more normally distributed compared to the other datasets [15].
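A sketch of the dip-statistic computation, assuming the third-party `diptest` package (a Python implementation of Hartigan & Hartigan's dip test) is available; any other implementation of the test would serve equally well.

```python
import numpy as np
import diptest   # assumed third-party dependency implementing Hartigan's dip test

f = np.loadtxt("frequency_singapore.txt")   # hypothetical file name

# The dip statistic is the maximal distance between the empirical CDF and the closest
# unimodal CDF; larger values give stronger evidence against unimodality.
dip, pval = diptest.diptest(f)
print(f"dip = {dip:.4f}, p-value = {pval:.3f}")
```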
The observed bimodality indicates that the deterministic dynamics \(g\) in (1) could originate from a double-well potential or from a superposition of single-well statistics (superstatistics) [9].
## IV Frequency increments
To understand whether there exist sudden and extreme ramps or jumps in the frequency over time, we consider the distribution of the frequency increments for each region.
For this purpose, first, we calculate the frequency increment \(\Delta f_{\tau}=f(t+\tau)-f(t)\), where \(\tau=1\,\mathrm{s}\), that is, the sampling rate of the data. When we examine these increments, we observe that the PDF of the frequency increments \(\Delta f=\Delta f_{\tau=1}\) of each region, as shown in Fig. 4, exhibits deviations from a strict Gaussian distribution by displaying heavy tails for both negative and positive frequency increments.
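A minimal sketch of the increment analysis; the file name is again a placeholder for a 1-s resolution recording.

```python
import numpy as np
from scipy import stats

f = np.loadtxt("frequency_ireland.txt")   # hypothetical 1-s resolution recording [Hz]

tau = 1                                   # lag in samples, i.e. 1 s
df = f[tau:] - f[:-tau]                   # increments Delta f_tau

print("skewness:", stats.skew(df))                     # sign indicates ramp-up vs. ramp-down asymmetry
print("kurtosis:", stats.kurtosis(df, fisher=False))   # values > 3 signal heavy (leptokurtic) tails
```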
Additionally, to provide insights into the statistical properties and distribution characteristics of the frequency increments, we calculate the skewness and kurtosis of the increments for each region. We observe that the Asian areas show positive skewness values, which can be understood as
Figure 1: **Location overview**. Power-grid frequency recordings from Australia, Indonesia, Malaysia, Singapore, Iceland, Ireland, and the Balearic Islands. The map is created using the Folium package (OpenStreetMaps, Leaflet.js) in Python.
Figure 3: **Bimodality quantification**. **a**: The left plot shows synthetic data with a bimodal distribution, while the middle plot displays a normal distribution derived from synthetic data as well. The right plot, based on actual data, reveals a bimodal distribution. **b**: Using the dip statistics, we measure the level of bimodality, with Singapore showing the highest value.
Figure 2: **Frequency distribution in Asian areas**. All PDFs are shown on a vertical logarithmic scale. All power grids demonstrate non-Gaussian distributions, e.g. two peaks or non-Gaussian skewness \(s\) and kurtosis \(\kappa\) (\(s^{\text{Gauss}}=0\), \(\kappa^{\text{Gauss}}=3\)).
a ramp-up pattern, with a longer tail for positive values of the frequency increment distribution, see Fig. 4a. On the contrary, Iceland and Ireland exhibit negative skewness values, indicating a leftward skew with a longer tail for negative values of the frequency increment distribution. Still, our analysis reveals that all regions display a very small non-zero skewness, indicating deviations from symmetry are minor, see Fig. 4.
Moreover, by measuring the kurtosis, we conclude that all synchronous areas exhibit a leptokurtic distribution with kurtosis values greater than 3. This suggests the presence of heavier tails in the frequency increment distribution compared to a Gaussian distribution.
Our analysis indicates that the frequency increments in Asian and European areas do not follow a Gaussian distribution, as evidenced by the PDFs and statistical moments. These findings align with prior research [15, 13].
The observed heavy tails and deviations from Gaussianity indicate that the stochastic dynamics \(\xi\) in (1) might be better modeled as a Lévy-stable process or a superposition than as a simple Wiener process [9].
## V Linearity
In modeling power systems, there is often a preference for simplicity, and linear models are commonly employed. This simplification might contrast with the nature of power systems themselves, where, e.g., power flow equations are nonlinear by nature, but the inertial response from generating units is linear. However, it is unclear how this transpires to simplified models or empirical data of the power-grid frequency, and thus, we should test for the presence of linear contributions in the data.
We use the higher-order autocorrelation function to test the linearity of the power-grid frequency data by quantifying the correlation between two observations in a time series at a given time lag [26]. The higher-order autocorrelation function of the frequency data provides a quantitative measure of time asymmetry in the dataset. If a time series exhibits asymmetry in time, it is indicative of non-linearity in the underlying dynamics. Therefore, we determine the degree of time asymmetry and quantify the level of non-linearity present in the power-grid frequency data. We calculate the higher-order autocorrelation for a given data set as:
\[LT(\tau)=\frac{\overline{\left[f(t)-f(t+\tau)\right]^{3}}}{\overline{\left[f(t)-f(t+\tau)\right]^{2}}}, \tag{2}\]
where \(LT\) stands for "linear test" and the overline denotes an average over time \(t\).
To ensure the validity of our results for a realistic process, we compare the original data to a surrogate time series. This involves taking the Fourier transform (\(FT\)) of the original data and randomizing the phases before using an inverse \(FT\) to obtain the surrogate data. The Fourier transform is defined as:
\[F(\omega)=\int\limits_{-\infty}^{\infty}f(t)\cdot e^{-i\omega t}\,dt, \tag{3}\]
and its inverse is given by
\[f(t)=\frac{1}{2\pi}\int\limits_{-\infty}^{\infty}F(\omega)\cdot e^{i\omega t} \,d\omega, \tag{4}\]
where \(\omega\) is the Fourier-frequency variable and \(F(\omega)\) represents the Fourier transform of the function \(f(t)\) (the power-grid frequency in our case). The exponential term, \(e^{-i\omega t}\), is the complex exponential function with an imaginary unit, \(i\). By implementing the procedure described above, we effectively eliminate any non-linearity in the process, resulting in surrogate data that solely reflects the linear characteristics of the analyzed data [27].
With the surrogate data at hand, we compute \(LT\) for the empirical and the surrogate data and then quantify the distance between the two resulting curves using the root mean square error (RMSE). If the RMSE is close to zero, it indicates that the time series exhibits linear behavior.
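A sketch of the full procedure, reading the ratio in (2) as a time average at each lag and comparing the data against a phase-randomized surrogate; the maximal lag and file name are illustrative choices, not the settings used for the reported results.

```python
import numpy as np

def surrogate(x, rng):
    """Fourier surrogate: keep the amplitude spectrum, randomize the phases.
    This preserves the linear (second-order) statistics and removes nonlinearity."""
    X = np.fft.rfft(x - x.mean())
    phases = rng.uniform(0.0, 2.0 * np.pi, X.size)
    phases[0] = 0.0                                   # keep the mean component untouched
    return np.fft.irfft(np.abs(X) * np.exp(1j * phases), n=x.size) + x.mean()

def linear_test(x, max_lag=3600):
    """LT at lags 1..max_lag, interpreting the ratio in Eq. (2) as a time average."""
    lt = np.empty(max_lag)
    for tau in range(1, max_lag + 1):
        d = x[:-tau] - x[tau:]
        lt[tau - 1] = np.mean(d ** 3) / np.mean(d ** 2)
    return lt

rng = np.random.default_rng(0)
f = np.loadtxt("frequency_balearic.txt")              # hypothetical file name
lt_data = linear_test(f)
lt_surr = linear_test(surrogate(f, rng))
rmse = np.sqrt(np.mean((lt_data - lt_surr) ** 2))     # close to 0 => approximately linear dynamics
print(rmse)
```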
To validate our findings and put them into context, we employ three synthetic datasets of Ireland provided by stochastic differential models [14]. For the sake of brevity, we will not detail the models extensively and point the reader to Oberhofer _et al._[14]. We only include a short description of these stochastic models of the power-grid frequency, denoted as OU,1D-NL-KM, and 2D-NL-KM. OU is a basic Ornstein-Uhlenbeck process with a single damping constant and noise term. 1D-NL-KM represents a one-dimensional
Figure 4: **Distribution of increment frequency**. Probability distributions of increment frequency on a vertical log scale. The variable \(s\) represents skewness, and \(\kappa\) represents kurtosis.
non-linear Kramers-Moyal model that incorporates a non-linear response and multiplicative noise. 2D-NL-KM is a two-dimensional non-linear Kramers-Moyal model that separates the frequency into stochastic fluctuations and a deterministic trend. Hence, OU is fully linear, while 1D-NL-KM and 2D-NL-KM are designed to include non-linear effects. Our RMSE results attest to the nature of these models, as OU exhibits the lowest nonlinearity while 2D-NL-KM demonstrates the highest nonlinearity, as measured by their respective RMSE scores in Fig. 5. After analyzing the RMSE of the observed power-grid frequency data and three synthetic datasets, we find that the power-grid frequency data from Australia exhibits a linear property, characterized by small RMSE values of the LT test. Meanwhile, the Singapore and Balearic regions exhibit the greatest RMSE values, indicating a larger deviation from linearity, see Fig. 5. However, even the largest RMSE value for the power-grid frequency data is significantly smaller than the non-linearity observed in the synthetic models.
The observed linearity indicates that the deterministic dynamics \(g\) in (1) should approximately follow a linear relationship, which supports the idea of a superposition of simple statistics [9] over explicit nonlinear modeling [12, 14].
## VI Correlation analysis
When modeling stochastic processes it is key to assess whether a process is Markov or not, i.e. whether there are long-range correlations present. Hence, we investigate the autocorrelation and decay characteristics of the datasets for Asian, Australian, and European power grids. The autocorrelation function (ACF) at lag \(\tau\) of a time series \(f_{t}\) is calculated as:
\[\text{ACF}(\tau)=\frac{\text{Cov}(f_{t},f_{t-\tau})}{\sqrt{\text{Var}(f_{t}) \cdot\text{Var}(f_{t-\tau})}}, \tag{5}\]
where \(\text{Cov}(f_{t},f_{t-\tau})\) is the covariance between \(f_{t}\) and \(f_{t-\tau}\), \(\text{Var}(f_{t})\) is the variance of the original series at time \(t\), and \(\text{Var}(f_{t-\tau})\) is the variance of the lagged series at time \(t-\tau\).
As shown in Fig. 6, the autocorrelation of these regions' power-grid frequencies exhibits an approximate exponential decay pattern concerning the time lag \(\Delta\tau\). To quantify the decay trend, we fit a curve with \(e^{-\lambda\Delta\tau}\) (where \(\lambda\) represents the decay constant and \(\Delta\tau\) is the time lag) using an exponential model. We observe that Iceland exhibits the highest decay constant value, measuring 0.1509, indicating a relatively rapid decay in autocorrelation. On the other hand, Singapore shows the lowest decay value at 0.0006, suggesting long-lasting correlations, potentially arising from correlated noise. The results are robust regardless of whether we consider 1 or 6 hours of data for the exponential fit.
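A sketch of the autocorrelation decay fit; the lag grid is thinned to keep the direct estimator cheap (an FFT-based estimator would also work), and the file name is a placeholder.

```python
import numpy as np
from scipy.optimize import curve_fit

f = np.loadtxt("frequency_iceland.txt")          # hypothetical 1-s resolution recording

def acf(x, lag):
    """Lagged autocorrelation as in Eq. (5)."""
    x = x - x.mean()
    return np.sum(x[:-lag] * x[lag:]) / np.sqrt(np.sum(x[:-lag] ** 2) * np.sum(x[lag:] ** 2))

lags = np.arange(10, 6 * 3600, 10)               # up to 6 hours, every 10 s
acf_vals = np.array([acf(f, int(k)) for k in lags])

# Fit exp(-lambda * lag) to extract the decay constant lambda.
def decay(k, lam):
    return np.exp(-lam * k)

(lam_hat,), _ = curve_fit(decay, lags, acf_vals, p0=[1e-3])
print(f"decay constant lambda = {lam_hat:.4g} 1/s")
```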
To quantify these emerging correlations, we calculate the Hurst exponent of power-grid frequency for each region. Specifically, we estimate the Hurst exponent of these time series using the Detrended Fluctuation Analysis (DFA) method [28, 29, 30]. Several studies have successfully applied DFA to analyze power-grid frequency data and uncover underlying long-range correlations [17, 31]. DFA stands out among other methods for its ability to accurately quantify the strength of long-range correlations, even when dealing with non-stationary time series [32].
We generate a set of lag values that span from 5 to \(10^{6}\). Fig. 7 illustrates the DFA results of the power-grid frequency in various synchronous areas, utilizing the fluctuation function plotted against lag values. The slope of the fitted line in the log-log scale is equal to the Hurst index plus 1, in consideration of the integration performed in the DFA algorithm. From the slope of the Fluctuation Function, we extract the Hurst exponents, which exceed 0.5 for all regions except Iceland. This indicates the presence of positively correlated motions.
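A compact order-1 DFA sketch: the scaling exponent is the log-log slope of the fluctuation function, and a Hurst exponent is obtained from it following the convention stated above (subtracting 1 to account for the integration step). The file name and scale grid are illustrative.

```python
import numpy as np

def dfa_slope(x, scales):
    """Order-1 detrended fluctuation analysis: return the log-log slope of F(s) vs s."""
    y = np.cumsum(x - np.mean(x))            # profile (integrated series)
    F = []
    for s in scales:
        n_seg = len(y) // s
        segs = y[: n_seg * s].reshape(n_seg, s)
        t = np.arange(s)
        # Detrend each segment with a least-squares line and collect the mean squared residual.
        res = [np.mean((seg - np.polyval(np.polyfit(t, seg, 1), t)) ** 2) for seg in segs]
        F.append(np.sqrt(np.mean(res)))
    slope, _ = np.polyfit(np.log(scales), np.log(F), 1)
    return slope

f = np.loadtxt("frequency_singapore.txt")    # hypothetical file name
scales = np.unique(np.logspace(np.log10(5), np.log10(len(f) // 4), 25).astype(int))
alpha = dfa_slope(f, scales)
hurst = alpha - 1                            # convention used in the text above
print(alpha, hurst)
```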
These observed correlations indicate that the stochastic dynamics \(\xi\) in (1) might be better modeled as colored or fractional noise instead of a simple white noise process. Meanwhile, the almost exponential decay of the autocorrelation supports simple stochastic models, such as Ornstein-Uhlenbeck processes or extensions thereof.
## VII Discussion and conclusion
In the present study, we have collected and analyzed non-standard characteristics of the power-grid frequency data from
Figure 5: **Linearity quantification**. Visualizing the degree of linearity \(LT\) from (2), measured by the RMSE of data vs. surrogate data. Lower values of RMSE indicate a smaller deviation from linearity.
Figure 6: **Decay of the autocorrelation**. We calculate the autocorrelation for each region over a time lag of up to 6 hours. The solid lines represent the autocorrelation, while the dashed lines correspond to the exponential fit with a decay constant \(\lambda\).
Asia, Australia, and Europe. In particular, we demonstrated clear deviations from Gaussianity of both the frequency and its increment statistics, varying degrees of bimodality in different synchronous areas, and long-term correlations. All data are available openly on power-grid-frequency.org [33] and our code to quantitatively compare models and data is available on Github [34]. We advance previous data analysis [13, 15] by including more non-European grids in our analysis. We compare the observed statistical results to reference systems and provide insights into the similarities and differences in power-grid dynamics across regions.
Naively, we could expect that a large number of random perturbations on the power grid leads to Gaussian distributions. Instead, we observe very clear non-Gaussian distributions: Frequency statistics are highly bimodal and the frequency jumps (increments) are heavy-tailed. While the exact nature and origin of these properties might vary between different synchronous areas, we may speculate that a bimodal distribution could arise from deadbands in the control [12, 14] or transitions between two discrete states of the system. These transitions could be stochastic or deterministic, depending on the power system. Similarly, heavy tails in the increments are both explained plausibly as arising from sudden changes in power generation or load [10, 13] as well as by deterministic changes due to power dispatch at the start of an hour [35, 36].
As with Gaussianity, a common assumption about stochastic processes is to regard them as uncorrelated, i.e. as Markov processes. Again, we found that power grids both in Asia and Europe are more complex than this simple assumption. Furthermore, utilizing Detrended Fluctuation Analysis (DFA), we demonstrated that the Hurst values are generally greater than 0.5, i.e. that the time series displays a positive correlation. This finding is consistent with previous studies in this field [23]. Power grids still exhibit a large number of synchronous generators thereby rotating mass with inertia. This already makes more continuous and correlated dynamics plausible. In addition, fluctuations both from the consumer [37] and from the generation side will often be correlated [10]. These results do not support a Markov property for all regions. Meanwhile, the full data displayed mostly linear properties, potentially simplifying at least one modeling aspect.
Let us review the added benefit of considering measurements from diverse geographic regions, instead of limiting ourselves to synchronous areas from one continent. While Singapore displayed a pronounced bimodality, there is no clear trend that Asian synchronous areas are more bimodal than European ones. Synchronous areas in Asia, Australia, and Europe all displayed heavy tails and mostly linear dynamics, with the two most non-linear areas (Balearic and Singapore) from different geographic regions. The correlation results differed the most: The Asian data sets returned a slightly higher Hurst exponent, while Iceland is the only anti-correlated area in our data set. Overall, we conclude that we require many different synchronous areas to have access to interesting dynamics as in Singapore or Iceland. Including data from multiple geographic regions will likely increase the chance of observing non-standard effects that have to be incorporated into any general-purpose model.
Concluding, our findings advance our understanding of power grids and their simulations. Regardless of the origin of the added complexity (bimodal, non-Gaussian fluctuations, non-Markov), power grid models, such as the simplified (1) should take these deviations into account and benchmark their models against empirical data, whether they are using linear [38, 39, 40] or non-linear [12, 14] models to be applicable to as many different settings as possible.
In the future, there are several other prospective studies and analytical options to continue the work presented here. One intriguing area of research is the comparison of frequency dynamics between different seasons, particularly winter and summer. Seasonal variations in power demand and generation patterns can significantly influence frequency behavior, and exploring these variations could provide valuable insights into the system's response to changing operational conditions. Furthermore, investigating the interdependencies and interactions between power-grid frequency and other critical system variables, such as voltage amplitudes and power flows, could be beneficial to gain a more comprehensive understanding of the system's behavior. Finally, our examination can be extended to islanded and microgrids operated primarily using power electronics to assess if different statistical and stochastic properties are present.
Figure 7: **Computing Hurst exponents**. We perform a Detrended Fluctuation Analysis. With the exception of Iceland, all regions exhibit Fluctuation Function patterns typified by Hurst values greater than 0.5.
## VIII Acknowledgments
We gratefully acknowledge funding from the Helmholtz Association and the Networking Fund through Helmholtz AI and under grant no. VH-NG-1727, as well as the Scientific Research Projects Coordination Unit of Istanbul University, Project no. 39071. Map data copyrighted by OpenStreetMap contributors are available from [https://www.openstreetmap.org](https://www.openstreetmap.org).
|
2305.19555 | Large Language Models Are Not Strong Abstract Reasoners | Large Language Models have shown tremendous performance on a large variety of
natural language processing tasks, ranging from text comprehension to common
sense reasoning. However, the mechanisms responsible for this success remain
opaque, and it is unclear whether LLMs can achieve human-like cognitive
capabilities or whether these models are still fundamentally circumscribed.
Abstract reasoning is a fundamental task for cognition, consisting of finding
and applying a general pattern from few data. Evaluating deep neural
architectures on this task could give insight into their potential limitations
regarding reasoning and their broad generalisation abilities, yet this is
currently an under-explored area. In this paper, we introduce a new benchmark
for evaluating language models beyond memorization on abstract reasoning tasks.
We perform extensive evaluations of state-of-the-art LLMs, showing that they
currently achieve very limited performance in contrast with other natural
language tasks, even when applying techniques that have been shown to improve
performance on other NLP tasks. We argue that guiding LLM generation to follow
causal paths could help improve the generalisation and reasoning abilities of
LLMs. | Gaël Gendron, Qiming Bao, Michael Witbrock, Gillian Dobbie | 2023-05-31T04:50:29Z | http://arxiv.org/abs/2305.19555v3 | # Large Language Models Are Not Abstract Reasoners
# Large Language Models Are Not Abstract Reasoners
Gaël Gendron
University of Auckland
[email protected]
&Qiming Bao
University of Auckland
[email protected]
&Michael Witbrock
University of Auckland
[email protected]
&Gillian Dobbie
University of Auckland
[email protected]
###### Abstract
Large Language Models have shown tremendous performance on a large variety of natural language processing tasks, ranging from text comprehension to common sense reasoning. However, the mechanisms responsible for this success remain unknown, and it is unclear whether LLMs can achieve human-like cognitive capabilities or whether these models are still fundamentally limited. Abstract reasoning is a fundamental task for cognition, consisting of finding and applying a general pattern from few data. Evaluating deep neural architectures on this task could give insight into their potential limitations regarding reasoning and their broad generalisation abilities, yet this is currently an under-explored area. In this paper, we perform extensive evaluations of state-of-the-art LLMs on abstract reasoning tasks, showing that they achieve very limited performance in contrast with other natural language tasks, and we investigate the reasons for this difference. We apply techniques that have been shown to improve performance on other NLP tasks and show that in most cases their impact on abstract reasoning performance is limited. In the course of this work, we have generated a new benchmark for evaluating language models on abstract reasoning tasks.
## 1 Introduction
Large Language Models (LLMs) have recently achieved impressive performance on a large variety of Natural Language Processing (NLP) tasks, including text comprehension [15; 30], commonsense reasoning [38], translation [31], and code generation [10; 8], and have shown promising results for out-of-distribution generalisation [7; 8]. The most recent and larger language models also perform well on mathematical problems, which had been out of reach for transformers for a long time [11; 37]. However, it is still unknown whether the reasoning ability of LLMs has an upper bound. While empirical testing of LLMs trained on large corpora of data informally yields signs of high comprehension of presented problems, there is little theoretical evidence regarding why and how this performance has been achieved and whether these models are simply memorising the training data, extrapolating it, or some combination [40; 19]. A notable limitation of these models is a lack of control mechanisms, or possible misalignment [29], for which the absence of a world model or causal representation have been advanced as explanations [4; 46]. More recently, early experiments on GPT-4 showed signs of limitations on reasoning tasks requiring planning and backtracking [8]. Despite these early limitations, the question of whether or not LLMs can perform human-like reasoning remains open, as measuring the level of intelligence, or more broadly, the competence, of a system is a challenging task [12].
Abstract reasoning is a potential task for effective measurement of the cognitive abilities of neural models [34; 12]. Abstract reasoning problems consist of identifying generic structures over a small
set of examples and applying them to unseen cases. They aim to evaluate the ability of a system to integrate a new skill or process from limited data. The abstract nature of these problems helps avoid spurious correlations that could lie in the data and may create potential bias in the results. In particular, this task is well-suited for evaluating the broad generalisation capacity of a system, i.e. its ability to handle a large category of tasks and environments without human intervention, including situations that may not have been foreseen when the system was created [12]. This is a well-studied class of task in the field of program induction [16; 24]. However, the problem of abstract reasoning has long remained outside the scope of evaluation of language models, and there currently exist no extensive evaluations of the performance of LLMs in this domain.
In this paper, we seek to bridge this gap by investigating the abstract reasoning abilities of language models and by providing insight into the following question: Do LLMs contain sufficient building blocks for broad generalisation, or do they lack fundamental capabilities? We evaluate state-of-the-art LLMs on abstract reasoning tasks, applying recent training, fine-tuning, and prompt design techniques that have been shown to improve performance on other NLP tasks. To this end, we create a benchmark based on existing and novel datasets. We then perform extensive experiments on this benchmark. We also build and train a language model for abstract reasoning and compare its performance with the other models. Our results indicate that Large Language Models do not yet have the ability to perform sound abstract reasoning. All of the tested models exhibit poor performance, and the tuning techniques that improved LLM reasoning abilities do not provide significant help for abstract reasoning. We release our code and data at: [https://github.com/Strong-AI-Lab/Logical-and-abstract-reasoning](https://github.com/Strong-AI-Lab/Logical-and-abstract-reasoning). Our contributions can be summarised as follows:
* We evaluate Large Language Models on abstract reasoning tasks.
* We show that existing training and tuning techniques do not help increase the performance of LLMs for abstract reasoning.
* We create a benchmark for the evaluation of language models for abstract reasoning.
## 2 Related Work
The abilities of Language Models have been thoroughly studied on a wide range of problems. In particular, their reasoning capacities are the focus of a great deal of recent work. Some of this [45; 25; 11] has explored prompt techniques to improve mathematical reasoning in LLMs; [37] proposes a framework based on causality theory to evaluate language models on this kind of task. Recently, GPT-4 has been shown to perform well on mathematical problems, outperforming PaLM and LLaMA [13; 41], although it still produces calculation mistakes [8]. In the domain of logical reasoning, several methods and benchmarks exists for evaluating language models. Notable benchmarks include DEER [47], ParaRules [14], PARARULE-Plus [3], ReClor [49], LogiQA [26], and AbductionRules [48]. Models such as LReasoner [43], MERIt [21], and AMR-LE [2] attempt to induce logical reasoning abilities in language models, but the performance of the most recent LLMs is yet to be evaluated. Similarly, the CLRS dataset benchmark for evaluating algorithmic reasoning has not yet been applied to language models [42]. Causal structure discovery and causal inference are other domains where LLMs have shown mixed results [46; 22]. These tasks are distinct from commonsense causal reasoning, where LLMs perform well [18; 53; 22]. Early experiments with GPT-4 [8] showed that, despite presenting systematically better performance than its previous versions, it still has some innate limitations. The authors introduce several examples indicating that the autoregressive nature of LLMs may prevent them from planning and backtracking, two abilities necessary for complex reasoning. GPT-4 also showed limitations in text generation under constraints; the model can handle local constraints but fails to apply global constraints that require thinking ahead [8]. GPT-4 also does not always reason in a consistent manner. Although it produces consistent results more often than GPT-3, there are no guarantees that the process leading to the result is always correct. The scope of cognitive abilities of the system remain incompletely characterised, especially for precise reasoning [8].
The evaluations described above do not, of course, provide a measure of the intelligence or global cognitive abilities of those models; measuring the level of intelligence of LLMs and other AI systems is challenging as there is no clear widely accepted definition [6; 19]. Chollet [12] defines the intelligence of a system as "a measure of its skill-acquisition efficiency over a scope of tasks, with respect to priors, experience, and generalization difficulty". Following this definition, abstract
reasoning is a well-suited domain over which to measure aspects of the learning and generalisation abilities of a system. To this end, the Abstract Reasoning Challenge (ARC) has been proposed as a benchmark for artificial systems [12]. A handful of works have proposed to measure abstract reasoning abilities in neural networks, but they focus on visual tasks [34; 51; 50]. To the best of our knowledge, this paper is the first to present an extensive evaluation of abstract reasoning for Large Language Models. Other domains of study focus on problems similar to abstract reasoning. Notably, in program induction, DreamCoder is a system that learns to solve problems described by a small set of input-output pairs by writing programs [16]. Abstract reasoning can also be related to causal representation learning, as finding abstract relations amounts to recovering the causal structure of a task and the Independent Causal Mechanisms (ICMs) linking the variables [35; 17].
## 3 Evaluation Method
### Evaluation Data
We evaluate language models on a large variety of abstract reasoning datasets, selected based on their capacity to evaluate the ability of language models to find a general abstract rule from limited examples. Some of the datasets used are visual. In order to use these with language models, we generate text or symbolic versions. After formatting, the datasets can be divided into two categories: Open-Ended Question Answering (Open QA) and Multiple-Choice Question Answering (MCQA). Open QA datasets require the model to generate the correct answer, while MCQA requires it to choose the answer from a set of possible answers. We note that most of the evaluated models are built for general-purpose text generation. Therefore, even when choosing between several options, they must _generate_ the correct choice and may fail to do so (e.g. answering D when only options A, B, or C are available). For comparison, we also evaluate models built for question answering. We give more details in Section 3.2. As shown in Figure 1, QA engines can only answer MCQA datasets, while text completion models can answer any type of question. Some MCQA datasets can also be converted to Open QA datasets by removing the choices. The datasets are summarised in Table 1.
AcreWe conduct experiments on the Abstract Causal Reasoning (ACRE) dataset [50]. ACRE is a Visual Question-Answering (VQA) dataset. In our work, we use a transcription of the dataset into text. Each sample in the data comprises six context images and four test cases. Each context image comprises a set of objects with various shapes, colours and textures, and a light. In the context images, the light can be on or off. The goal of a system is to determine from the context examples if the light is on, off, or if its state cannot be determined in the test cases. To solve this task, the
Figure 1: Different types of models and datasets considered in our experiments and their interactions. Dataset types are represented as green circles and model types are represented as blue rectangles. Text completion models can answer both types of datasets while QA engines can only answer MCQA datasets. However, MCQA datasets can be altered to fit into the Open QA category.
Figure 2: Example task in the Evals-P dataset. For this task, the system must return “foo” if the first character of the input is in the list or “bar” otherwise. Pre-prompts are omitted from the input. In the test case, the target answer is indicated in italics.
model has to determine for each sample what objects are causally responsible for the activation of the light. We generate two versions of the dataset: in ACRE-Text, each image is replaced by a textual description, and in ACRE-Symbolic, each image is replaced with a numerical vector representation.
ArcThe second dataset we use is the Abstract Reasoning Challenge (ARC) dataset [12]. The dataset is composed of tasks, each comprising three input and output grids. The goal of the system is to determine the algorithm that converts the input to the output and apply it to a test case. The grids have a variable size comprised between \(8\times 8\) and \(30\times 30\), and contain visual patterns (e.g. recognisable shapes, symmetries). We provide the raw grid to the model as a two-dimensional array of integers. The high dimensionality of the input makes it a challenging task for LLMs. The tasks themselves are also challenging as their transcription in natural language is often complex and supposedly impossible for 12% of them [1].
BIG-BenchWe select a subset of the BIG-Bench dataset [33; 36] that we name BIG-Bench-F for _Functions_. The subset comprises various tasks represented by a function taking a list as input and returning a new transformed list as output. For each task, several input-output samples are given. In BIG-Bench-F, we give four samples per task by default. The functions include typical list processing like replacing one list element with another value, selecting a subset of the list, or counting elements. The difficulty in this task is to accurately recognise the function from a few samples.
EvalsWe select a subset of the Evals dataset [28] representing logic puzzles. Evals-P is composed of a set of tasks. For each task, a tuple containing a character and a list of characters is given as an input and a single word from the set {"foo", "bar"} is generated from the input according to a logic hidden from the evaluated system. The task consists of finding the logic from eight samples and applying it to a test case. An example is given in Figure 2. Evals-S is composed of another set of tasks. For each task, a list of integers is given as an input and an output list of words is generated from the input according to a logic hidden from the evaluated system. The task consists of finding the logic from three samples and applying it to a test case.
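For illustration, the hidden logic of the Figure 2 task can be written as a one-line rule; the few-shot examples below are hypothetical and only mimic the task format, not actual dataset entries, and other Evals-P tasks hide different rules.

```python
def evals_p_rule(char, chars):
    """Hidden logic of the Figure 2 task: return "foo" if the query character
    appears in the list, "bar" otherwise."""
    return "foo" if char in chars else "bar"

# Hypothetical few-shot examples in the task format, followed by a test case.
examples = [(("a", ["a", "c", "x"]), "foo"),
            (("k", ["b", "d", "e"]), "bar")]
test_case = ("m", ["m", "q", "z"])
assert all(evals_p_rule(*inp) == out for inp, out in examples)
print(evals_p_rule(*test_case))  # expected answer: "foo"
```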
PvrThe Pointer-Value Retrieval (PVR) dataset [52] is a dataset for retrieval tasks. Tasks involve selecting one or several values in a list and applying a function on this subset. For each task, the system must recognise the retrieval and application functions and apply them to a test case. Samples in the datasets are composed of a pointer-values pair and a label. The values are stored in an array, and the pointer is an integer pointing to an index in the array. The pointer indicates the subset of values to consider for the task.
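As a concrete illustration, a minimal sketch of the simplest PVR variant is given below, assuming the pointer selects a single value and the aggregation function is the identity; harder variants in the dataset apply a function over a window of values, and the generator shown is ours, not the official one.

```python
import random

def make_pvr_sample(length=10, max_value=9):
    """One sample of the simplest PVR variant: the pointer indexes the value
    array, and the label is simply the value it points to."""
    values = [random.randint(0, max_value) for _ in range(length)]
    pointer = random.randint(0, length - 1)
    return (pointer, values), values[pointer]

# A task bundles several such samples; the model must infer the retrieval rule
# (and, in harder variants, an aggregation function applied to a window of values).
(pointer, values), label = make_pvr_sample()
```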
RavenRAVEN [51] is a VQA dataset composed of sequences of images to complete. The images contain Raven matrices [32], i.e. geometric shapes (e.g. square, circle, pentagon) assembled in various ways (e.g. one shape inside another, four shapes in a \(4\times 4\) grid). RAVEN is a dataset similar to Procedurally Generated Matrices (PGM) [34] but has the advantage of providing a tree structure describing the semantics of each matrix. We focus on a subset where a single shape appears in the image. The task is, given a sequence of eight images and eight possible choices, to pick the
\begin{table}
\begin{tabular}{l l c} \hline \hline \multirow{2}{*}{Dataset} & \multirow{2}{*}{Type} & \multicolumn{2}{c}{Versions} \\ \cline{3-3} & & Text & Symb \\ \hline ARC\({}^{T}\) & Open QA & \(\checkmark\) \\ BIG-Bench-F & & \(\checkmark\) \\ Evals-S & & \(\checkmark\) \\ PVR & & \(\checkmark\) \\ ACRE\({}^{T}\) & MCQA & \(\checkmark\) \\ Evals-P & & \(\checkmark\) \\ RAVEN\({}^{T}\) & & \(\checkmark\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Datasets considered. When not written, type is similar to the one above. Datasets can exist in text or symbolic versions. Text datasets built from an image dataset are indicated with the symbol \({}^{T}\).
\begin{table}
\begin{tabular}{l l} \hline \hline Model & Type \\ \hline GPT-2 & Text completion \\ Text-Davinci-3 & \\ GPT-3.5-Turbo & \\ GPT-4 & \\ LLaMA-7B & \\ Alpaca & \\ Alpaca-LoRA & \\ RoBERTa-AR\({}^{*}\) & Question Answering \\ MERIt-AR\({}^{*}\) & \\ \hline \hline \end{tabular}
\end{table}
Table 2: Models considered. When not written, type is similar to the one above. Models with the symbol \({}^{*}\) are introduced in this paper. “-AR” indicates that the model has been fine-tuned for abstract reasoning.
correct image that follows in the sequence. As RAVEN is a visual dataset like ACRE, we generate a text description of each image from their semantic tree that we will feed into the evaluated models. We create two sets: RAVEN-Text contains descriptions in natural language, and RAVEN-Symbolic contains symbolic descriptions. We also build another version of the dataset where choices are hidden. We name the former RAVEN-mcqa and the latter RAVEN-opqa.
### Models evaluated
We perform evaluations on the most recent and popular architectures for NLP tasks. Table 2 provides the list of models used in the experiments.
Text Completion ModelsWe restrict our experiments to Large Language models, also named _Foundation Models_[5]. We conduct experiments on the popular family of GPT architectures. We include three generations of GPT models: GPT-2 [30], a 1.5B parameter model; aligned GPT-3 models with Text-Davinci-3, optimised for text completion, and GPT-3.5-Turbo, optimised for chat, two 175B models [7; 29]; and GPT-4, for which the training and architectural details are unknown [28]. We also perform experiments on LLaMA[41] and its variants. In particular, Alpaca is a fine-tuned version of LLaMA to respond to instructions [44; 39], and Alpaca-LoRA is a LLaMA model instruction-tuned using Low-Rank Adaptation [20]. For the three models, we evaluate the 7B parameters versions.
QA EnginesWe also compare these generic models on architecture fine-tuned for Multiple-Choice Question Answering. Unlike the text completion engines that produce text in the output, their task consists of discriminating the solution from a small set of options. This problem is more straightforward to solve than the problem of next token prediction tackled by the models described in the previous paragraph. We fine-tune two models for Multiple-Choice Question Answering: RoBERTa-large [27], a language model used for text comprehension, and MERIt [21], a model using contrastive pre-training on rules-based data to perform logical reasoning.
### Methodology
For Text-Davinci-3, GPT-3.5-Turbo, and GPT-4, we use the Open AI API to run all the evaluations. Text-Davinci is a text-completion model, so we convert our input context and question to a single string. GPT-3.5-Turbo and GPT-4 are chat completion models, so we provide the instructions in chat format. The pre-prompt and examples are given to the model by the system, and the supposed user gives the question. We use a temperature of 0.5 for the output generation and the default parameters of each model for the maximum number of generated tokens. For GPT-2, LLaMA-7B, Alpaca, Alpaca-LoRA, RoBERTa-large, and MERIt, we use the weights provided on the Huggingface hub. RoBERTa-large and MERIt are used as MCQA models, while the others are used as causal language modelling models. We set the maximum number of generated tokens to 128 in the default models and 256 in the code models (see Section 4.5). We evaluate each model with its default configuration. The fine-tuned models are trained for 10 epochs with a batch size of 10, using Adam optimizer [23] and a learning rate of \(5\times 10^{-4}\). As the language models generate free-text answers, we need to extract the answers using regular expression patterns. We consider a model to provide a valid answer even if the format is incorrect (e.g. if they accompany their answer with additional text although we ask only for the answer). Unless specified otherwise, we always ask the model to provide a single answer and return only the aforementioned answer with no explanation.
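As an illustration of this extraction step, the sketch below shows the kind of regular-expression patterns that can pull a list-style or option-letter answer out of free-text generations; the exact patterns used in our pipeline may differ.

```python
import re

def extract_list_answer(generation):
    """Pull the first bracketed list of integers out of a free-text generation,
    e.g. "The answer is [4, 1]." -> [4, 1]; returns None if nothing matches."""
    match = re.search(r"\[\s*-?\d+(?:\s*,\s*-?\d+)*\s*\]", generation)
    if match is None:
        return None
    return [int(token) for token in re.findall(r"-?\d+", match.group(0))]

def extract_choice_answer(generation, options=("A", "B", "C", "D")):
    """Pull the first standalone option letter for MCQA-style answers."""
    match = re.search(r"\b(" + "|".join(options) + r")\b", generation)
    return match.group(1) if match else None

print(extract_list_answer("Sure, the answer is [6, 3]."))   # [6, 3]
print(extract_choice_answer("I would pick option C here."))  # C
```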
## 4 Experiments
### Open-Ended Question Answering
In this section, we detail our experiments on open-ended abstract reasoning. The model is asked to generate an answer in text format for each dataset. Depending on the dataset, the answer can be in natural language or a symbolic format. The accuracy for each model on every dataset is summarised in Table 3.
Our results indicate poor performance of language models on all the presented datasets, although the performance varies between datasets and models. In particular, Text-Davinci-3 and GPT-4
consistently achieve the best performance across the datasets. On the other hand, LLaMA-7B has the worst performance of all models. Alpaca and Alpaca-LoRA present slight improvements on BIG-Bench-F, PVR and RAVEN\({}^{T}\). This improvement is explained by the instruction-tuning used to build Alpaca and Alpaca-LoRA. We illustrate this difference in Figure 3. In this example, the models must return an array containing only the element at the third index of the input array. They must discover the rule and apply it to the test case. LLaMA-7B does not attempt to solve the problem but completes the text by giving more examples. The generated examples do not match the abstract rule for the task. On the other hand, Alpaca-LoRA returns an incorrect answer. Instruction-tuning helps the model understand the format of the answer and what it is asked to do but provides little help on how to solve the tasks. Moreover, the performance difference between Text-Davinci-3 and GPT-3.5-Turbo indicates that the type of instruction-tuning matters as Text-Davinci-3 performs systematically better than GPT-3.5-Turbo despite being based on the same model.
Overall, GPT-4 performs noticeably better than all the other models. As the details of its architecture and training set are unavailable, we cannot provide satisfactory explanations for this difference. However, the increase in performance is highest on the RAVEN\({}^{T}\) dataset. Given that Raven matrices are a standard and long-existing test [32; 9], we can hypothesize that the training data of GPT-4 included some versions of the test. The same remark can be made for BIG-Bench-F as it includes traditional list processing algorithms.
\begin{table}
\begin{tabular}{l l l l l l l} \hline & ARC\({}^{T}\) & BIG-Bench-F & Evals-S & PVR & \multicolumn{2}{c}{RAVEN\({}^{T}\)-opqa} \\ \cline{1-1} \cline{2-7} Text-Davinci-3 & _0.105_ & _0.404_ & **0.314** & **0.228** & _0.343_ & _0.234_ \\ GPT-3.5-Turbo & 0.033 & 0.153 & 0.186 & 0.124 & 0.226 & 0.161 \\ GPT-4 & **0.119** & **0.514** & _0.304_ & 0.177 & **0.410** & **0.330** \\ LLaMA-7B & 0.010 & 0.012 & 0.014 & 0.060 & 0.000 & 0.000 \\ Alpaca & 0.010 & 0.188 & 0.014 & _0.184_ & 0.075 & 0.030 \\ Alpaca-LoRA & 0.012 & 0.144 & 0.000 & 0.152 & 0.000 & 0.067 \\ \hline \end{tabular}
\end{table}
Table 3: Accuracy of Large Language Models for Open-Ended QA on the ARC\({}^{T}\), BIG-Bench-F, Evals-S, PVR, and RAVEN\({}^{T}\)-opqa datasets; the last two columns report the text and symbolic versions of RAVEN\({}^{T}\)-opqa. The best result for each dataset is indicated in **bold**, and the second best is indicated in _italics_.
Figure 3: Example answers of LLaMA-7B and Alpaca-LoRA on a list-manipulation task with inputs and labels such as \([3,4,1,5,2,0,8,6,9]\rightarrow[1]\) and \([5,0,6,8,2,9,4,7,3]\rightarrow[6]\); the models must return an array containing only the element at the third index of the input array.
Text-Davinci-3 and GPT-4 also achieve good performance on the ARC\({}^{T}\) dataset relative to other existing architectures challenged on the task, making them 11\({}^{th}\) and 14\({}^{th}\) on the Kaggle leaderboard1. However, they still fail to answer a vast majority of the tasks correctly. All LLMs generally fail to answer most of the tasks in each dataset. Despite a performance increase compared to previous versions, the most recent language models do not perform open-ended abstract reasoning well.
Footnote 1: [https://www.kaggle.com/competitions/abstraction-and-reasoning-challenge/leaderboard](https://www.kaggle.com/competitions/abstraction-and-reasoning-challenge/leaderboard)
### Multiple-Choice Question Answering
As seen in Section 4.1, open-ended abstract reasoning is a challenging problem for language models. We also performed a series of experiments on Multiple-Choice Question Answering tasks. For these tasks, the models are given a set of possible answers and must pick a single one from the set. This task is more accessible than Open-Ended QA, as the valid response is given as part of the input. Results are given in Table 4.
We first compare the results of RAVEN\({}^{T}\)-mcqa and RAVEN\({}^{T}\)-opqa from Table 3. RAVEN\({}^{T}\)-opqa contains the same questions as RAVEN\({}^{T}\)-mcqa, but the answer choices have been removed. Following intuition, giving multiple choices to LLMs helps systematically improve their performance. Only the performance of LLaMA remains the same, and the performance of Alpaca is slightly reduced. Given the low accuracy in both cases, it can be interpreted as noise. We now look at the performance for all datasets. GPT-4 achieves the best performance of all completion models and is the only model to perform systematically better than random. Within the remaining models, only Text-Davinci-3 and GPT-3.5-Turbo achieve performance above random on several datasets. MCQA models achieve slightly above random performance (see details in appendix), performing better than most LLMs. However, they have an advantage compared to completion engines as they have to select one answer among a list of possible choices, whereas completion models must generate the correct answer. Therefore, the latter may not return any valuable output (e.g. a nonsensical or empty answer), explaining how they can achieve worse than random performance.
Surprisingly, GPT-2 performs better than its bigger counterparts on the ACRE\({}^{T}\) dataset. The nature of the ACRE\({}^{T}\) dataset can explain this phenomenon. The set of possible answers in ACRE\({}^{T}\) is: "on", "off", and "undetermined". Therefore, GPT-2 learns to output one of those words as the most _plausible_ answer and can reach results close to random performance. However, Text-Davinci-3 and GPT-3.5-Turbo attempt to reason about the task but fail to comprehend it. Figure 4 gives an example. As for the experiments on Open-Ended QA, the performance of language models is poor globally, except for GPT-4, which gets average to good performance. The multiple options can provide useful hints but not all models exploit them equally.
### Symbolic Representations
We generate text and symbolic versions of the ACRE\({}^{T}\) and RAVEN\({}^{T}\) datasets and study the performance evolution depending on the input format. We focus on the results with RAVEN\({}^{T}\)-opqa in Table 3, and ACRE\({}^{T}\) and RAVEN\({}^{T}\)-mcqa in Table 4.
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline & \multicolumn{2}{c}{ACRE\({}^{T}\)} & Evals-P & \multicolumn{2}{c}{RAVEN\({}^{T}\)-mcqa} \\ \cline{2-3} \cline{5-6} & Text & Symb & & Text & Symb \\ \hline GPT-2 & **0.371** & 0.00 & 0.496 & 0.00 & 0.126 \\ Text-Davinci-3 & 0.098 & 0.427 & _0.560_ & _0.461_ & _0.452_ \\ GPT-3.5-Turbo & 0.184 & _0.445_ & 0.481 & 0.276 & 0.315 \\ GPT-4 & _0.272_ & **0.512** & **0.625** & **0.697** & **0.535** \\ LLaMA-7B & 0.000 & 0.257 & 0.544 & 0.004 & 0.000 \\ Alpaca & 0.036 & 0.238 & 0.544 & 0.015 & 0.058 \\ Alpaca-LoRA & 0.015 & 0.123 & 0.552 & 0.082 & 0.124 \\ \hline random & 0.33 & 0.33 & 0.5 & 0.125 & 0.125 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Accuracy of Large Language Models for Multiple-Choice QA on the ACRE\({}^{T}\), Evals-P and RAVEN\({}^{T}\) datasets. The last line indicates random performance. Completion models can perform worse than random if they do not reply with a valid answer. The best result for each dataset is indicated in **bold**, and the second best is indicated in _italics_.
On the ACRE\({}^{T}\) dataset, results are better across all models when the input is symbolic, except for GPT-2. This observation is consistent with the idea that the latter does not try to solve the task but only predicts a _plausible_ answer, for which an input in natural language is more helpful. However, inputs using symbolic data are smaller and may convey only relevant information, while natural language could contain distracting information or biases harmful to task performance. The same observation can be made concerning RAVEN\({}^{T}\)-mcqa, except for GPT-4. In the open-ended version of RAVEN\({}^{T}\), models achieve better performance with the natural language representation. Without the answer set available, inductive biases caused by language seem to help performance.
### Varying the Example Set Size
We perform further experiments on the BIG-Bench-F and PVR datasets. For these two datasets, we alter the number of examples given to the system before the test case. By default, we give four examples to the model before asking it to answer. The results for BIG-Bench-F and PVR are shown in Figures 5(a) and 5(b), respectively. In this section, we focus on the results of the base models (without the "-code" suffix).
We first observe that, for both datasets, there is no linear relationship linking performance and number of examples. For all but the Text-Davinci-3 and GPT-4 models, adding more examples has little or no effect on the accuracy. GPT-3.5-Turbo and Alpaca-LoRA see their performance decrease for 8 and 14 examples, respectively. We hypothesize that a _distraction_ effect exists when too many examples are presented. In this situation, the model has trouble finding the relevant information. The datasets consist of abstract patterns, likely not common in the training sets of the models. Therefore, the model can use only a slight bias to determine the information relevant to the task. On the BIG-Bench-F dataset, GPT-3.5-Turbo and Alpaca-LoRA have similar performances across trials. On PVR, Alpaca-LoRA outperforms GPT-3.5-Turbo by a noticeable margin when 16 examples are given. Text-Davinci-3 and GPT-4 perform similarly across all cases, and their performances consistently increase with the number of examples, achieving up to an accuracy of 0.6 when given 16 examples on the BIG-Bench-F dataset. However, on PVR, Text-Davinci-3 achieves only 0.26 when given 12
Figure 4: Example of answers from GPT-2, Text-Davinci-3, and GPT-3.5-Turbo on the ACRE\({}^{T}\) dataset. Pre-prompts are omitted from the input. GPT-2 randomly returns an answer within the expected set of answers. Text-Davinci-3 returns “undetermined”. GPT-3.5-Turbo refuses to answer the question. The true answer (not visible for the models) is indicated in _italics_.
examples. GPT-4 follows a similar trend but performs slightly worse than its predecessor. Absent technical details of GPT-4, we can only speculate on the reasons. As this effect is observed only on BIG-Bench-F and not on PVR, we can reasonably assume that the models perform better because their training sets contain the list processing algorithms used by BIG-Bench-F.
### Enabling Structure Discovery with Code
In the next experiments, we follow an idea similar to _Program-of-Thought_ prompting [11] and ask the model to return the function responsible for generating the output from the input. Then, we execute the produced code on the test case and return the result as the model prediction. This method differs from a base prompt as we do not ask the model to produce the answer directly. This part is delegated to a code interpreter in Python. This method aims to verify the ability of LLMs to extract the correct structure behind each abstract reasoning task under code format. We test this method on the BIG-Bench-F and PVR datasets and refer to the models prompted in this way as _model-code_. The results of these models can be compared with their original counterparts in Figures 5(a) and 5(b).
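A minimal sketch of this evaluation loop is shown below: the model-generated Python is executed on the held-out test input and scored against the target. The expected function name `transform` is an illustrative assumption, prompt construction is omitted, and no sandboxing is shown.

```python
def evaluate_code_answer(generated_code, test_input, target, func_name="transform"):
    """Execute model-generated code on the held-out test case and score it.

    `generated_code` is expected to define `func_name`; any exception
    (syntax error, runtime error, missing function) counts as a failure.
    """
    namespace = {}
    try:
        exec(generated_code, namespace)      # no sandboxing shown in this sketch
        prediction = namespace[func_name](test_input)
    except Exception:
        return False
    return prediction == target

# Hand-written stand-in for a model response to a list-manipulation task:
code = "def transform(xs):\n    return [xs[2]]"
print(evaluate_code_answer(code, [3, 4, 1, 5], [1]))  # True
```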
In general, we observe that the models prompted to produce code perform worse than those tasked to produce the answer directly. The only exception is GPT-3.5-Turbo. On the BIG-Bench-F dataset, the performance of GPT-3.5-Turbo-code increases steadily while that of GPT-3.5-Turbo stagnates, and on PVR, GPT-3.5-Turbo-code outperforms GPT-3.5-Turbo by a significant margin. Producing code solving the abstract problem is a more complicated task for an LLM as it requires the model to produce a rigorous code explanation for its answer. It is consistent with the results for most models, but we also observe in the case of GPT-3.5-Turbo-code that it can help the model better understand the task. On BIG-Bench-F, the code versions of Text-Davinci-3 and GPT-4 perform better than both base and code versions of the other models. As this behaviour is not observed with PVR, we infer that this performance is due to the functions being part of the training sets of the models. The models can almost always generate code able to compile and produce an answer (details are in the appendix). We deduce that producing a program with a valid syntax is not a bottleneck for performance. The issue lies in the recovery of the correct reasoning process.
## 5 Conclusion
Large Language Models can perform very well on a large range of NLP tasks but still struggle at some reasoning tasks. Understanding the potential reasoning capabilities of LLMs is crucial as they are starting to be widely adopted. Measuring the level of intelligence of a system is hard, but abstract reasoning provides a valuable framework for this task. In this paper, we present what is, to the best of our knowledge, the first extensive evaluation of Large Language Models for abstract reasoning. We show that LLMs do not perform well, although not all models are equally poor. Moreover, techniques
Figure 5: Evolution of the model performance as a function of the number of examples seen from the dataset. The legend is shared by both figures. Models with straight lines are used with default prompting, while models with dashed lines are prompted to produce code.
known to improve performance on NLP tasks do not work for abstract reasoning, and increasing the number of examples seen does not generally lead to significant improvement.
We hypothesise that autoregressive LLMs currently lack fundamental properties needed for abstract reasoning tasks and human-like cognition. In our future work, we will investigate theoretical evidence and attempt to combine language models with other methods to improve their performance. In particular, we posit that methods based on causal reasoning and program induction could help improve the reasoning abilities of neural networks.
|
2309.09493 | HiFTNet: A Fast High-Quality Neural Vocoder with Harmonic-plus-Noise
Filter and Inverse Short Time Fourier Transform | Recent advancements in speech synthesis have leveraged GAN-based networks
like HiFi-GAN and BigVGAN to produce high-fidelity waveforms from
mel-spectrograms. However, these networks are computationally expensive and
parameter-heavy. iSTFTNet addresses these limitations by integrating inverse
short-time Fourier transform (iSTFT) into the network, achieving both speed and
parameter efficiency. In this paper, we introduce an extension to iSTFTNet,
termed HiFTNet, which incorporates a harmonic-plus-noise source filter in the
time-frequency domain that uses a sinusoidal source from the fundamental
frequency (F0) inferred via a pre-trained F0 estimation network for fast
inference speed. Subjective evaluations on LJSpeech show that our model
significantly outperforms both iSTFTNet and HiFi-GAN, achieving
ground-truth-level performance. HiFTNet also outperforms BigVGAN-base on
LibriTTS for unseen speakers and achieves comparable performance to BigVGAN
while being four times faster with only $1/6$ of the parameters. Our work sets
a new benchmark for efficient, high-quality neural vocoding, paving the way for
real-time applications that demand high quality speech synthesis. | Yinghao Aaron Li, Cong Han, Xilin Jiang, Nima Mesgarani | 2023-09-18T05:30:15Z | http://arxiv.org/abs/2309.09493v1 | HiFTNet: A Fast High-Quality Neural Vocoder with Harmonic-Plus-Noise Filter and Inverse Short Time Fourier Transform
###### Abstract
Recent advancements in speech synthesis have leveraged GAN-based networks like HiFi-GAN and BigVGAN to produce high-fidelity waveforms from mel-spectrograms. However, these networks are computationally expensive and parameter-heavy. iSTFTNet addresses these limitations by integrating inverse short-time Fourier transform (iSTFT) into the network, achieving both speed and parameter efficiency. In this paper, we introduce an extension to iSTFTNet, termed HiFTNet, which incorporates a harmonic-plus-noise source filter in the time-frequency domain that uses a sinusoidal source from the fundamental frequency (F0) inferred via a pre-trained F0 estimation network for fast inference speed. Subjective evaluations on LJSpeech show that our model significantly outperforms both iSTFTNet and HiFi-GAN, achieving ground-truth-level performance. HiFTNet also outperforms BigVGAN-base on LibriTTS for unseen speakers and achieves comparable performance to BigVGAN while being four times faster with only \(1/6\) of the parameters. Our work sets a new benchmark for efficient, high-quality neural vocoding, paving the way for real-time applications that demand high quality speech synthesis.
Yinghao Aaron Li, Cong Han, Xilin Jiang, Nima Mesgarani Department of Electrical Engineering, Columbia University, USA
Waveform synthesis, mel-spectrogram vocoder, harmonic-plus-noise neural source filter, inverse short-time Fourier transform, generative adversarial networks
## 1 Introduction
Waveform synthesis plays a crucial role in modern speech generation technologies such as text-to-speech (TTS) and voice conversion (VC). These systems often employ a two-stage strategy: the first stage generates an intermediate representation, and the second stage converts it into waveforms. Mel-spectrograms have long been the favored intermediate representations in TTS [1, 2, 3, 4, 5] and VC [6, 7, 8, 9, 10, 11] due to their closeness to human perception and reduced dimensionality. A vocoder that performs this second stage must infer missing phase information from the mel-spectrogram to reconstruct the waveform. The most effective and efficient methods so far have been generative adversarial networks (GANs) with convolutional neural network (CNN) architectures [12, 13, 14, 15, 16]. While models like BigVGAN [16] have obtained state-of-the-art performance in terms of synthesis quality, they are burdened by a large number of parameters required to generate waveforms directly from input mel-spectrograms, which hinders their application in real-time scenarios like TTS and VC. Therefore, the development of faster and more lightweight high-quality vocoders without sacrificing performance has become a pressing need.
In this paper, we introduce **H**armonics-plus-noise **I**nverse **F**ourier **T**ransform **N**etwork (HiFTNet), a neural vocoder designed to meet these criteria. HiFTNet builds upon iSTFTNet [17] but goes beyond it to achieve high-quality waveform synthesis. Unlike previous vocoder models that generate waveform directly, HiFTNet follows iSTFTNet by modeling the magnitude and phase of the spectrogram and uses inverse short-time Fourier transform (iSTFT) for the final waveform generation. A key innovation in HiFTNet is its integration of a neural harmonic-plus-noise source filter [18] in the time-frequency domain using a sine wave source computed from the fundamental frequency (F0) extracted by a pre-trained F0 estimation network as opposed to traditional acoustic algorithms [19, 20]. This modification substantially enhances the quality of the synthesized speech while minimally affecting the inference speed.
Our evaluations demonstrate that HiFTNet significantly outperforms iSTFTNet and HiFi-GAN while maintaining similarly fast inference speed, achieving ground-truth level performance on LJSpeech [21] with a comparative mean opinion score (CMOS) of \(-0.06\) (\(p\gg 0.05\)). Additionally, it is on par with BigVGAN on the LibriTTS [22] dataset (\(\text{CMOS}=0.01,p\gg 0.05\)) but is \(4\times\) faster and requires only \(1/6\) of the parameters, thereby setting a new benchmark for efficient, high-quality neural vocoding. The demo samples are available at [https://hifntet.github.io/](https://hifntet.github.io/).
## 2 Methods
HiFTNet builds upon the iSTFTNet _V1-C8C8I_ architecture [17] but introduces several key modifications. Firstly, we integrate a neural harmonic-plus-noise source filter [18] in the time-frequency domain, using the fundamental frequency extracted from the input mel-spectrogram via a pre-trained F0 network. We also substitute the MSD discriminator [14] with the MRD discriminator [15] and replace the leaky ReLU activation function in the generator with the Snake activation function [23]. Lastly, we adopt the truncated point-wise relativistic loss function [24] to further enhance sound quality. The following sections elaborate on each of these modifications.
### Time-Frequency Harmonic-plus-Noise Source Filter
Neural harmonic-plus-noise source filters (hn-NSF) [18] have found various applications in speech synthesis [25, 26] and singing synthesis [27, 28]. These filters enhance the quality of the synthesized waveform by mitigating phase distortion. Generally, a sinusoidal source aligned in-phase with the target waveform is generated from its fundamental frequency (F0) for the voiced portions, while Gaussian noise fills the unvoiced segments. This source is then processed through a series of neural network layers named NSF. Here, we introduce several adjustments to better suit the iSTFTNet architecture and optimize inference speed, as detailed below.
#### 2.1.1 Efficient Source Generation
We adopt the original hn-NSF source generation scheme presented in [18], but with a critical change to significantly boost inference
speed. In the original work [18], the input fundamental frequency (F0) \(p\) is initially upsampled to align with the sampling rate of the target waveform. It is then multiplied by a factor \(i\in\{1,\dots,K+2\}\) to produce harmonic overtones \(h_{i}\), where \(K\) is the total number of harmonic overtones. Each \(h_{i}\) is integrated to yield the instantaneous phase \(\varphi_{i}\) in radian for generating the sinusoidal source \(s_{i}\):
\[h_{i}(t)=i\cdot p\left[\lfloor t\cdot f_{s}/L\rfloor\right], \tag{1}\]
\[\varphi_{i}(t)=\left(\frac{1}{f_{s}}\text{ mod }1\right)\int_{0}^{t}h_{i}(t) \,dt, \tag{2}\]
\[s_{i}(t)=A\cdot\sin(2\pi\varphi_{i}(t)), \tag{3}\]
where \(f_{s}\) denotes the sampling rate of the target waveform, \(L\) is the hop size, and \(A\) is the source amplitude. It is worth noting that the integration operation is typically implemented via cumulative sum. Since \(p\) originates from the mel-spectrogram domain, its length \(N\) is considerably smaller than the target waveform length \(T\). In Equation 1, \(p\) is upsampled from size \(N\) to \(T\) where \(T=NL\). The integration in Equation 2 is of order \(O(T)\), which greatly hinders the inference speed when dealing with long target waveforms. However, given that both upsampling and integration are linear operations, we can swap their order to reduce the complexity to \(O(N)\):
\[h_{i}[n]=i\cdot p[n], \tag{4}\]
\[\phi[n]=\left(\frac{1}{f_{s}}\text{ mod }1\right)\sum_{k=0}^{n-1}h_{i}[k], \tag{5}\]
\[\tilde{\varphi}(t)=L\cdot\phi\left[\lfloor t\cdot f_{s}/L\rfloor\right], \tag{6}\]
\[s_{i}(t)=A\cdot\sin(2\pi\tilde{\varphi}_{i}(t)), \tag{7}\]
where \(\phi[n]\) is the instantaneous phase before upsampling and \(\tilde{\varphi}_{i}(t)\approx\varphi_{i}(t)\)1. We note that the \(L\) factor in Equation 6 scales the value by the hop size, as Equation 5 now integrates over \(1/L\) as many steps as Equation 2.
Footnote 1: Although the upsampling and the continuous version of integration are both linear and commute with each other, in the discrete version \(\tilde{\varphi}_{i}(t)\neq\varphi_{i}(t)\) even after scaling by the hop size \(L\). The difference between \(\tilde{\varphi}_{i}(t)\) and \(\varphi_{i}(t)\) is \([a_{1},a_{2},\dots,a_{N}]\), where \(a_{i}=[(L-1)\cdot\phi[i],(L-2)\cdot\phi[i],\dots,(L-L)\cdot\phi[i]]\) is the adjusting factor of length \(L\). However, since this additional adjusting factor does not add new information to the neural source filter, we noticed that there is no difference in sound quality regardless of whether the adjusting factor is subtracted from \(\tilde{\varphi}(t)\).
Gaussian noise serves as the source for the unvoiced segments. An unvoiced (UV) flag is set by applying a 10 Hz threshold to the input F0, marking frames with F0 values below this as unvoiced. The final excitation source for the \(i\)th harmonic overtone is expressed as:
\[x_{i}(t)=(1-UV(t))s_{i}(t)+UV(t)\xi, \tag{8}\]
where \(\xi\sim\mathcal{N}(0,A/3)\). Following [18], we set \(A=0.1\).
Finally, all harmonics are linearly combined and processed through a tanh function, as shown in the yellow block of Figure 1:
\[x(t)=\text{tanh}\left(\sum_{i=1}^{K+2}w_{i}x_{i}(t)\right), \tag{9}\]
where \(w_{i}\) are learnable parameters and \(K=8\) following [28].
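The complete source-generation path of Equations (4)-(9) can be sketched as follows. This NumPy version uses uniform mixing weights and independent noise per harmonic for brevity, whereas the actual module learns the weights \(w_i\); it should be read as an illustration of the order-swapped computation rather than the exact implementation, and the default constants simply follow the values stated above.

```python
import numpy as np

def harmonic_noise_source(f0, hop_size=256, fs=22050, n_harmonics=8,
                          amplitude=0.1, uv_threshold=10.0):
    """Order-swapped source generation: integrate the phase at frame rate
    (O(N)), then upsample to sample rate, instead of integrating the
    upsampled F0 (O(N * hop_size))."""
    f0 = np.asarray(f0, dtype=float)                        # frame-rate F0, length N
    voiced = (f0 >= uv_threshold).astype(float)             # UV flag per frame
    harmonics = np.arange(1, n_harmonics + 3)[:, None] * f0[None, :]   # (K+2, N)
    phase = np.cumsum(harmonics / fs, axis=1)               # Eq. (5), frame rate
    phase = np.repeat(phase * hop_size, hop_size, axis=1)   # Eq. (6), scale by L
    sines = amplitude * np.sin(2.0 * np.pi * phase)         # Eq. (7), (K+2, T)
    uv = np.repeat(voiced, hop_size)                        # UV flag per sample
    noise = np.random.normal(0.0, amplitude / 3.0, size=sines.shape)
    excitation = uv * sines + (1.0 - uv) * noise            # Eq. (8)
    return np.tanh(excitation.mean(axis=0))                 # Eq. (9), uniform weights

# source = harmonic_noise_source(f0_frames)  # f0_frames from the pitch network
```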
#### 2.1.2 F0 Estimation with Pre-Trained Neural Network
In both the original hn-NSF model [18] and subsequent vocoder works [25], the F0 for source generation is derived using the WORLD vocoder [29]. However, as shown in our prior research [5], traditional acoustic algorithms [19, 20] for pitch extraction tend to be both inaccurate and failure-prone, negatively affecting reconstruction quality. Furthermore, commonly applied algorithms for pitch extraction, such as distributed inline-filter operation (DIO) [19] and Harvest [20], have an \(O(N\log N)\) complexity and run on the CPU without GPU acceleration. Most critically, these algorithms operate in the time domain, requiring the very waveform input we aim to synthesize from mel-spectrograms for F0 extraction.
To address these limitations, we employ a neural network for F0 estimation. Specifically, we adopt the approach in [8] that pre-trains a JDC network [30] using pitch labels extracted with DIO and Harvest, supplemented with standard data augmentation techniques in speech recognition [31]. This pre-trained network is then used for more accurate and robust F0 estimation from the input mel-spectrograms. Performance with an alternative architecture that removes the LSTM from the JDC network is also explored in Section 4.2.
#### 2.1.3 Time-Frequency Neural Source Filter
In HiFTNet, the final output of the generator consists of the magnitude and phase of the spectrogram rather than waveforms. Consequently, the neural source filters must also process the excitation source within the time-frequency domain to align with this output.
Figure 1: Overview of the HiFTNet architecture. The figure shows an example architecture of HiFTNet for 22.5 kHz audio generation with a hop size of 256. Orange modules are basic neural network components with tunable parameters during training, while grey modules are either pre-trained and fixed or non-trainable. The MRF module is the same as in HiFi-GAN [14] that adds features from \(|k_{r}|\) blocks but consists of ResBlocks with Snake functions instead of leaky ReLU.
Instead of directly feeding the source waveforms to the neural source filter (NSF) module, we initially perform an STFT transformation using the same parameters (FFT size, hop size, and window length) as the terminating inverse STFT operation in the network output, thereby converting the source waveform to the time-frequency domain. Section 4.2 demonstrates that this time-frequency processing is crucial for high-quality waveform synthesis, as substituting the STFT module with a learnable CNN module of the same stride as the hop size and the same number of output channels as the FFT size significantly deteriorates performance.
In contrast to the complex NSF modules described in [18], our NSF module is only composed of a 1D convolutional layer for source downsampling to match the intermediate feature size, followed by a residual block for fast inference, as illustrated in Figure 1. We find that this architecture suffices for generating high-quality samples.
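A compact PyTorch-style sketch of this time-frequency source path is given below; the FFT size, hop size, channel counts, and residual block layout are illustrative assumptions rather than the exact HiFTNet configuration.

```python
import torch
import torch.nn as nn

class TimeFrequencySourceModule(nn.Module):
    """Move the waveform-domain excitation to the time-frequency domain with the
    same STFT parameters as the terminating iSTFT, then project it to the
    intermediate feature size with a 1-D convolution followed by a light
    residual block."""

    def __init__(self, n_fft=16, hop=4, channels=512):
        super().__init__()
        self.n_fft, self.hop = n_fft, hop
        self.register_buffer("window", torch.hann_window(n_fft))
        self.down = nn.Conv1d(n_fft + 2, channels, kernel_size=1)
        self.res = nn.Sequential(
            nn.Conv1d(channels, channels, kernel_size=3, padding=1),
            nn.LeakyReLU(0.1),
            nn.Conv1d(channels, channels, kernel_size=3, padding=1),
        )

    def forward(self, source):                         # source: (B, T) excitation
        spec = torch.stft(source, self.n_fft, hop_length=self.hop,
                          win_length=self.n_fft, window=self.window,
                          return_complex=True)         # (B, n_fft/2 + 1, frames)
        feats = torch.cat([spec.abs(), spec.angle()], dim=1)   # magnitude + phase
        h = self.down(feats)
        return h + self.res(h)                         # added to the generator features
```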
### MRD Discriminator and Snake Function
We substitute the original multi-scale discriminator (MSD) from iSTFTNet with the multi-resolution discriminator (MRD) as introduced in [15]. This change has been demonstrated to enhance sound quality in subsequent studies [16]. We retain the multi-period discriminator (MPD) initially proposed in [14], applying the same LSGAN [32] objective for both generator and discriminator training. Additionally, we employ the same feature matching loss during the generator training as in [14], a technique commonly adopted in contemporary neural vocoders [15, 16, 17].
Furthermore, we replace leaky ReLU activation functions across the generator with Snake functions [23], first proposed for speech synthesis in BigVGAN [16]. The Snake function is defined as:
\[f_{\alpha}(x)=x+\frac{1}{\alpha}\sin^{2}(\alpha x), \tag{10}\]
where \(\alpha\) is a learnable parameter. Although the generator's final output is not a waveform but rather the magnitude and phase of the spectrogram, these are still highly periodic, especially the phase. As such, employing the Snake activation function aids in the model's capacity to learn the periodic structure of the speech signal. This is also in line with what we have found in our previous work [33] where iSTFTNet is used as the speech decoder for human-level TTS. Unlike BigVGAN [16], we do not include the anti-aliasing filter for upsampling. This is primarily due to the instability introduced by the filter, and also because our generator consists of only two upsampling modules, resulting in less aliasing compared to previous vocoders that synthesize waveforms directly.
### Truncated Pointwise Relativistic Loss Function
To further enhance sound quality during adversarial training, we incorporate the Truncated Pointwise Relativistic (TPR) loss function [24]. This approach has proven successful in our previous work for achieving human-level TTS with iSTFTNet-based decoders [33]. This loss function aims to quantify the disparity between the discriminator's outputs for the real target waveform \(\mathbf{y}\) and the generated or reconstructed waveform \(\mathbf{\hat{y}}\). Specifically, the TPR loss encourages the discriminator to assign lower scores to the generated samples relative to their real counterparts for each frame point. Conversely, it motivates the generator to produce samples that the discriminator would rate higher compared to the real samples for each frame point.
The loss is formulated using the relativistic difference \(\mathcal{R}(\mathbf{y},\mathbf{\hat{y}})\):
\[\mathcal{R}(\mathbf{y},\mathbf{\hat{y}})=D(\mathbf{y})-D(\mathbf{\hat{y}})-m(\mathbf{y},\mathbf{\hat{ y}}), \tag{11}\]
\[m(\mathbf{y},\mathbf{\hat{y}})=\mathbb{M}_{\mathbf{y},\mathbf{\hat{y}}}\left[D(\mathbf{y})-D(\mathbf{ \hat{y}})\right]. \tag{12}\]
Here, \(D(\cdot)\) denotes both MPD and MRD outputs, and \(m(\mathbf{y},\mathbf{\hat{y}})\) is the median of the relativistic difference in a batch, calculated via \(\mathbb{M}\left[\cdot\right]\), the median operation. The TPR loss is thus defined as:
\[\mathcal{L}_{\text{rel}}(D;G)=\tau-~{}\mathbb{E}_{\{\mathcal{R}(\mathbf{y},\mathbf{ \hat{y}})\leq 0\}}\left[\text{ReLU}\left(\tau-\mathcal{R}(\mathbf{y},\mathbf{\hat{y}})^{2} \right)\right], \tag{13}\]
\[\mathcal{L}_{\text{rel}}(G;D)=\tau-~{}\mathbb{E}_{\{\mathcal{R}(\mathbf{\hat{y}}, \mathbf{y})\leq 0\}}\left[\text{ReLU}\left(\tau-\mathcal{R}(\mathbf{\hat{y}},\mathbf{y})^{2} \right)\right], \tag{14}\]
where \(\{\mathcal{R}(\mathbf{y},\mathbf{\hat{y}})\leq 0\}\) and \(\{\mathcal{R}(\mathbf{\hat{y}},\mathbf{y})\leq 0\}\) denote the sets of \(\mathbf{y}\) and \(\mathbf{\hat{y}}\) that satisfy the respective conditions in a batch, \(\text{ReLU}(\cdot)=\max(\cdot,0)\), and \(\tau\) is the truncation factor, set to 0.04 per [24].
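The following sketch implements the discriminator-side term literally as written in Equations (11)-(13), with the generator-side term of Equation (14) obtained by swapping the two score tensors; the tensor shapes and the handling of an empty truncation set are our own assumptions, not details from [24].

```python
import torch
import torch.nn.functional as F

def tpr_term(d_real, d_fake, tau=0.04):
    """Truncated pointwise relativistic term, following Eqs. (11)-(13) as written;
    swapping d_real and d_fake gives the generator-side term of Eq. (14).
    d_real and d_fake are pointwise discriminator outputs of matching shape."""
    diff = d_real - d_fake
    rel = diff - diff.median()               # relativistic difference R(y, y_hat)
    violating = rel[rel <= 0]                # truncation: keep only violating points
    if violating.numel() == 0:               # nothing to penalise in this batch
        return d_real.new_zeros(())
    return tau - torch.mean(F.relu(tau - violating ** 2))
```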
## 3 Experiments
### Datasets, Models and Training Details
We conducted evaluations using the LJSpeech [21] and LibriTTS [22] datasets. The LJSpeech dataset, which comprises 13,100 short audio clips totaling approximately 24 hours, was used for training our single-speaker model. We compared this model to HiFi-GAN and iSTFTNet, both also trained on the LJSpeech dataset. The dataset was partitioned into 12,950 training and 150 validation samples, following the same split used in [14]. For our multi-speaker model, we employed the combined LibriTTS _train-960_ subset [22], which is sourced from _train-clean-100_, _train-clean-360_, and _train-other-500_ subsets per [16]. This dataset contains around 555 hours of audio from 2,311 speakers. We compared our model to BigVGAN-base and BigVGAN on the _test-clean_ and _test-other_ subsets for unseen speakers. The former subset comprises clean speech, while the latter contains noisier samples.
We followed the pre-processing pipeline of 22.5 kHz audio as in [14] for generating the mel-spectrograms. Specifically, we used a hop size of 256, an FFT size of 1024, a window length of 1024, a lowest frequency of 0 Hz, and the highest frequency of 8000 Hz with 80 mel bins. Audio samples from the LibriTTS dataset were downsampled to 22.5 kHz to align with this pre-processing. Our model was trained for 500k steps on both the LJSpeech and LibriTTS datasets, with a batch size of 16 one-second-long audio segments on a single NVIDIA A40 GPU. We employed the AdamW optimizer [34] with \(\beta_{1}=0.8,\beta_{2}=0.99\), weight decay \(\lambda=0.01\), and an initial learning rate \(\gamma=0.0002\) with an exponential decay rate of \(0.999\).
For comparison, we used official pre-trained checkpoints for HiFi-GAN on LJSpeech 2 and BigVGAN on LibriTTS 3. As there was no official iSTFTNet implementation and checkpoint, we trained an iSTFTNet baseline model using the same hyperparameters with an unofficial implementation 4 for 500k steps.
Footnote 2: Available at [https://github.com/jik876/hifi-gan](https://github.com/jik876/hifi-gan)
Footnote 3: Available at [https://github.com/NVIDIA/BigVGAN](https://github.com/NVIDIA/BigVGAN)
Footnote 4: [https://github.com/rishikksh20/iSTFTNet-pytorch](https://github.com/rishikksh20/iSTFTNet-pytorch)
### Evaluations
To assess model performance, we employed both subjective and objective evaluation methods. For the subjective assessments, we used the Comparative Mean Opinion Score (CMOS) metric to establish statistical significance as the differences between these models are subtle and not readily noticeable. This allows raters to discern subtle differences often overlooked in traditional MOS experiments [33]. We recruited native English speakers located in the U.S. via Amazon Mechanical Turk for these evaluations. Participants were guided to
listen to paired samples from distinct models using headphones and then rate the second sample as better or worse than the first, using a scale from -6 to 6 in increments of 1. Each test comprised 30 randomly selected audio samples from the test dataset, which were converted into mel-spectrograms and then back into waveforms using both our model and the baseline models. We also included three attention-checker pairs containing identical audio clips. Raters who assigned these pairs an average score more than \(\pm 0.5\) were excluded from the results. Each evaluation set involved a minimum of ten raters, ensuring at least five had passed the attention checks.
For objective evaluations, we relied on mel-cepstral distortion (MCD) with dynamic time warping calculated using an open source implementation 5 as a metric to compare the synthesized waveform with the ground-truth audio. To assess inference speed, we computed the real-time factor (RTF) using an NVIDIA RTX 3090 Ti GPU.
Footnote 5: [https://github.com/chenqi008/pymcd/](https://github.com/chenqi008/pymcd/)
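For completeness, the RTF measurement can be sketched as follows, assuming a generic vocoder callable and CUDA timing; the hop size, sampling rate, number of runs, and function names are illustrative defaults rather than the exact benchmarking script.

```python
import time
import torch

@torch.no_grad()
def real_time_factor(vocoder, mel, sample_rate=22050, hop_size=256, n_runs=10):
    """Real-time factor = synthesis time / duration of the generated audio.
    `vocoder` is any callable mapping a mel-spectrogram (B, n_mels, frames)
    to a waveform."""
    audio_seconds = mel.shape[-1] * hop_size / sample_rate
    if torch.cuda.is_available():
        torch.cuda.synchronize()
    start = time.time()
    for _ in range(n_runs):
        vocoder(mel)
    if torch.cuda.is_available():
        torch.cuda.synchronize()
    return (time.time() - start) / n_runs / audio_seconds
```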
## 4 Results
### Model Performance
As illustrated in Table 2, HiFTNet exhibits a CMOS score of -0.06 with \(p\gg 0.05\) when tested on the LJSpeech dataset. This essentially places our model on par with the ground truth for this particular dataset. Moreover, HiFTNet has significantly outperformed both iSTFTNet and HiFi-GAN in terms of CMOS (\(p<0.05\)) and MCD, while incurring only a minor increase in inference time and RAM usage.
When evaluated on the LibriTTS _test-clean_ subset, HiFTNet significantly surpasses BigVGAN-base, with a CMOS of 0.21 (\(p<0.05\)) and also a slightly lower MCD. This is achieved while maintaining the same RAM usage yet being \(2.5\times\) faster. Furthermore, HiFTNet demonstrates performance comparable to BigVGAN with a CMOS of \(-0.05\) (\(p\gg 0.05\)), but operates \(4\times\) faster and consumes only half the GPU RAM during inference. Similar trends are observed on the _test-other_ subset, where HiFTNet notably outperforms BigVGAN-base and achieves performance akin to BigVGAN.
Overall, HiFTNet achieves a CMOS of 0.013 (\(p=0.873\)) compared to BigVGAN on the LibriTTS dataset for unseen speakers. Notably, HiFTNet accomplishes all this with only 17.7 M trainable parameters, approximately one-sixth the size of BigVGAN's 114 M parameters. This positions HiFTNet as a viable alternative to BigVGAN in end-to-end training scenarios, such as speech language model (SLM) adversarial training with SLM feature-matching loss in our recently proposed VC model [11], thanks to its more efficient RAM usage and faster inference speed.
### Ablation Study
In Table 2, we present the CMOS of the proposed model compared to models with components ablated, to demonstrate the effectiveness of our proposed components. Omitting the hn-NSF results in a dramatic performance decline, reflected by a CMOS of \(-1.116\), making the model inferior to iSTFTNet. Substituting the STFT modules with trainable 1D convolutional layers prior to the NSF also yields a reduced CMOS of \(-0.358\). Additionally, switching the Snake activation function back to leaky ReLU causes a minor performance dip, indicated by a CMOS of \(-0.108\). Finally, removing the LSTM layer from the pitch extraction network, while accelerating inference, significantly degrades performance with a CMOS of \(-0.475\).
These findings affirm the efficacy of each proposed component in enhancing performance, although some may slightly increase inference time. The Snake activation function, for example, decelerates the system by approximately 15% but only marginally bolsters performance, making it an optional component if inference speed is paramount. Intriguingly, removing the LSTM from the F0 extraction network has a negative impact on performance, implying that F0 estimation quality is a critical factor for vocoder performance. This suggests that, even though F0 is largely a local feature, some global information not captured by CNN still contributes to accurate F0 estimation needed for high-quality speech synthesis.
## 5 Conclusions
In this study, we introduced HiFTNet, a neural vocoder model that offers substantial improvements in sound quality and inference speed over existing models like iSTFTNet, HiFi-GAN and BigVGAN-base, with performance comparable to significantly larger models such as BigVGAN. Leveraging a suite of novel components, including the time-frequency harmonic-plus-noise neural source filter, the Snake activation function, and an MRD discriminator with TPR loss, our model achieved superior performance across multiple metrics and datasets. The ablation study further corroborated the importance of each component, highlighting their individual contributions to the model's efficacy. The study also suggests a future research direction in optimizing neural networks for faster and more precise F0 estimation to further enhance the performance and inference speed of hn-NSF-based vocoders.
## 6 Acknowledgements
This work was funded by the National Institutes of Health (NIH-NIDCD) and a grant from Marie-Josee and Henry R. Kravis.
| Model | Dataset | CMOS (p-value) \(\uparrow\) | MCD \(\downarrow\) | RTF \(\downarrow\) | RAM \(\downarrow\) |
| --- | --- | --- | --- | --- | --- |
| Ground Truth | LJSpeech | \(-0.06\) (\(p=0.396\)) | — | — | — |
| HiFTNet | LJSpeech | — | 2.567 | 0.0057 | 0.90 GB |
| iSTFTNet | LJSpeech | \(+0.64\) (\(p\sim 10^{-7}\)) | 2.820 | 0.0031 | 0.77 GB |
| HiFi-GAN | LJSpeech | \(+0.19\) (\(p=0.208\)) | 2.816 | 0.0043 | 0.75 GB |
| Ground Truth | _test-clean_ | \(-0.21\) (\(p=0.033\)) | — | — | — |
| HiFTNet | _test-clean_ | — | 2.892 | —\({}^{*}\) | —\({}^{*}\) |
| BigVGAN-base | _test-clean_ | \(+0.21\) (\(p=0.001\)) | 3.079 | 0.0159 | 0.90 GB |
| BigVGAN | _test-clean_ | \(-0.05\) (\(p=0.552\)) | 2.656 | 0.0243 | 1.52 GB |
| Ground Truth | _test-other_ | \(-0.10\) (\(p=0.189\)) | — | — | — |
| HiFTNet | _test-other_ | — | 3.690 | —\({}^{*}\) | —\({}^{*}\) |
| BigVGAN-base | _test-other_ | \(+0.17\) (\(p=0.022\)) | 3.892 | —\({}^{*}\) | —\({}^{*}\) |
| BigVGAN | _test-other_ | \(+0.12\) (\(p=0.354\)) | 3.189 | —\({}^{*}\) | —\({}^{*}\) |
Table 1: Comparative mean opinion scores (CMOS) for HiFTNet with p-values from Wilcoxon test relative to other models, mel-spectral distortion (MCD) relative to ground truth, inference speed (RTF), and GPU RAM usage when synthesizing a 10-second audio. For CMOS, positive scores indicate that HiFTNet is better. For the dataset column, _test-clean_ and _test-other_ represent the results on the corresponding subsets of the LibriTTS dataset.
| Model | CMOS \(\uparrow\) | MCD \(\downarrow\) | RTF \(\downarrow\) |
| --- | --- | --- | --- |
| Baseline | **0** | **2.567** | 0.0057 |
| w/o hn-NSF | \(-1.116\) | 2.929 | **0.0036** |
| w/o STFT | \(-0.358\) | 2.716 | 0.0055 |
| w/o Snake | \(-0.108\) | 2.689 | 0.0050 |
| w/o LSTM | \(-0.475\) | 2.639 | 0.0047 |
Table 2: CMOS of proposed model relative to component-ablated models, MCD relative to ground truth, and inference speed (RTF). |
2309.08955 | IntelliBeeHive: An Automated Honey Bee, Pollen, and Varroa Destructor
Monitoring System | Utilizing computer vision and the latest technological advancements, in this
study, we developed a honey bee monitoring system that aims to enhance our
understanding of Colony Collapse Disorder, honey bee behavior, population
decline, and overall hive health. The system is positioned at the hive entrance
providing real-time data, enabling beekeepers to closely monitor the hive's
activity and health through an account-based website. Using machine learning,
our monitoring system can accurately track honey bees, monitor pollen-gathering
activity, and detect Varroa mites, all without causing any disruption to the
honey bees. Moreover, we have ensured that the development of this monitoring
system utilizes cost-effective technology, making it accessible to apiaries of
various scales, including hobbyists, commercial beekeeping businesses, and
researchers. The inference models used to detect honey bees, pollen, and mites
are based on the YOLOv7-tiny architecture trained with our own data. The
F1-score for honey bee model recognition is 0.95 and the precision and recall
value is 0.981. For our pollen and mite object detection model F1-score is 0.95
and the precision and recall value is 0.821 for pollen and 0.996 for "mite".
The overall performance of our IntelliBeeHive system demonstrates its
effectiveness in monitoring the honey bee's activity, achieving an accuracy of
96.28 % in tracking and our pollen model achieved a F1-score of 0.831. | Christian I. Narcia-Macias, Joselito Guardado, Jocell Rodriguez, Joanne Rampersad-Ammons, Erik Enriquez, Dong-Chul Kim | 2023-09-16T11:13:47Z | http://arxiv.org/abs/2309.08955v1 | # IntelliBeeHive: An Automated Honey Bee, Pollen, and Varroa Destructor Monitoring System
###### Abstract
Utilizing computer vision and the latest technological advancements, in this study, we developed a honey bee monitoring system that aims to enhance our understanding of Colony Collapse Disorder, honey bee behavior, population decline, and overall hive health. The system is positioned at the hive entrance providing real-time data, enabling beekeepers to closely monitor the hive's activity and health through an account-based website. Using machine learning, our monitoring system can accurately track honey bees, monitor pollen-gathering activity, and detect Varroa mites, all without causing any disruption to the honey bees. Moreover, we have ensured that the development of this monitoring system utilizes cost-effective technology, making it accessible to apiaries of various scales, including hobbyists, commercial beekeeping businesses, and researchers. The inference models used to detect honey bees, pollen, and mites are based on the YOLOv7-tiny architecture trained with our own data. The F1-score for honey bee model recognition is 0.95 and the precision and recall value is 0.981. For our pollen and mite object detection model the F1-score is 0.95 and the precision and recall value is 0.821 for pollen and 0.996 for "mite". The overall performance of our IntelliBeeHive system demonstrates its effectiveness in monitoring the honey bee's activity, achieving an accuracy of 96.28% in tracking, and our pollen model achieved an F1-score of 0.831.
Computer vision, Object tracking, Honey bee, Embedded system
## I Introduction
Honey bees (Apis mellifera) are small insects that play a crucial role in maintaining the balance of ecosystems. They serve as important pollinators, contributing to the pollination of crops worth an estimated 15 billion dollars in the United States alone [1]. In today's rapidly advancing technological world, innovative solutions can potentially aid honey bees in overcoming challenges such as parasites and other factors that contribute to the decline of bee colonies. Honey bees are renowned for their role as pollinators, facilitating the reproduction of flowers and fruits through the collection of pollen, which eventually leads to the creation of delicious honey.
Varroa mites, which are not native to the United States and were introduced from Asia, contribute to the decline of honey bee populations [2]. Varroa mites survive by feeding on the body fat cells of honey bees and extracting essential nutrients from their bodies [3, 4] as well as transmitting viruses that cause deadly diseases to honey bees [5]. The presence of these ectoparasites can devastate a honey bee colony, and even a colony with minimal signs of infestation has a high likelihood (around 90-95 percent) of collapsing [6]. This poses significant challenges for beekeepers who invest their time and resources in maintaining honey bee colonies, as a single mite can jeopardize their hives.
Throughout the years of beekeeping, methods have been developed to control varroa mite infestations. Today, many beekeepers keep to traditional monthly checks such as sugar rolls, alcohol washes, or sticky boards to monitor the bees for mites [7, 8]. All of these methods have their pros and cons depending on preference, but they are all time-consuming and require manual labor, and some approaches are destructive, meaning that the sample used for detecting the infestation level is not reintroduced back into the hive [8]. Therefore, a faster and more effective alternative for monitoring infestation levels is essential for such a time-sensitive issue, allowing beekeepers to give the proper treatment only when needed to help maintain the hive population.
Foraging is another important indicator of a beehive's overall health and is important for beekeepers to monitor. Beekeepers use different methods to monitor the honey bees' foraging activity; one example is the pollen trap method, which utilizes a mesh screen with holes big enough for the honey bee to go through but small enough to scrape off pollen from the honey bees' legs [9]. This method removes the pollen from the bees' legs so the beekeeper can analyze the amount of pollen being brought into the hive when they forage. Removing pollen from the honey bees' legs is not very efficient, as the mesh screens trap only 3-43 percent of the incoming pollen, making it ineffective [9]. This measuring method is inaccurate and removes nourishment from the honey bees, as they feed on pollen and nectar, which can take a toll on their brood development [9, 10].
## II Related Works
There are numerous techniques that implement approaches to monitor honey bee health. The _A computer vision system to monitor the infestation level of varroa destructor in a honeybee colony_ paper deployed a _Monitoring Unit_ with a computer
system to record honey bees entering their bee hives using a multi-spectral camera and red, blue, and infrared LED lights to collect footage. They then use computer vision to detect varroa destructors and determine the infestation level of the beehive [11]. The objective of this study is to propose an alternative method for assessing the infestation level without harming honey bees, which is commonly done in traditional sampling methods as mentioned previously [7, 8].
The purpose of the _A real-time imaging system for multiple honey bee tracking and activity monitoring_ research is to monitor honey bee behavior, emphasizing the in-and-out activity of the beehive in order to assess honey bee colonies' behavior and the hive's overall health when exposed to different concentrations of imidacloprid pesticides [12]. Their system consists of 2 microcomputers: a Jetson TX2 using background subtraction for object segmentation and honey bee tracking, and a Raspberry Pi 3 for environment monitoring using sensors.
The _Automated monitoring and analyses of honey bee pollen foraging behavior using a deep learning-based imaging system_ study aims to provide a better and more efficient alternative for analyzing the foraging done by honey bees [13]. This monitoring system also consists of the same two microcomputers, but this time YOLOv3's real-time object detection was used for detection. Their method proved to be a more effective and reliable tool compared to the conventional pollen trap method previously mentioned.
_Pollen Bearing Honey Bee Detection in Hive Entrance Video Recorded by Remote Embedded System for Pollination Monitoring_ developed a non-invasive monitoring system to detect pollen-bearing honey bees. The main focus of this paper was to use their own method to classify pollen-bearing honey bees on an embedded system. Their proposed algorithm was not far behind state-of-the-art classification models yet was computationally efficient enough to be implemented in embedded systems [14].
The IntelliBeeHive project aims to develop a cost-effective monitoring system using machine learning to track honey bees in order to monitor their activity, foraging activity, and varroa mite presence without disturbing the honey bees. This monitoring system is placed at the entrance of the beehive and allows beekeepers to keep track of the beehive's overall activity through an account-based website. For our object detection software, we use YOLOv7. YOLOv7 is an object detection model introduced in July 2022 that surpasses all previously known object detection models in speed and accuracy [15]. YOLOv7 achieved the highest accuracy at 56.8 percent AP at 30 FPS or higher depending on the GPU [15].
## III Hardware
Our monitoring system is implemented on an NVIDIA Jetson Nano Developer Kit. We chose the NVIDIA Jetson Nano taking several factors into consideration including its affordability ($99 USD at the time of implementation before the global chip shortage) and performance in computer vision applications compared to other Jetson modules available [16][17] and the Raspberry Pi. Although the Raspberry Pi is more affordable, it does not have the capability to provide live tracking data.
The initial design was divided into segments, allowing us to 3D print each section individually. This modular approach facilitated the printing process and provided flexibility to replace specific components if necessary. The container was designed with computer-aided design (CAD) software using Blender, then 3D printed using PLA filament with three main sections: the Top Box, the Camera Room, and the Mesh Frame. The Top Box has a 3D-printed camera tray to secure a Raspberry Pi Camera and air vents to help cool down the Jetson Nano, and it was made rainproof to protect our electronics, such as the PoE adapter and the Jetson Nano. The camera room is just an empty box with a window made out of sanded acrylic to reduce glare and allow sunlight in to improve inference accuracy. Our camera distance from the honey bee passage for our PLA container was set at 155 mm high with a viewing area of 150 mm by 80 mm, giving us the view shown in Figure 1.
To ensure the effectiveness of our inference algorithm, we devised a method to prevent honey bees from approaching the camera and restricting their movement to prevent overlapping. Our approach involves creating a mesh using a fishing line, as illustrated in Figure 1. The use of a fishing line offers several advantages over alternatives such as acrylic. It provides a clearer view of the honey bees without the issue of glare that would occur had we used glass or acrylic. Additionally, using other clear solids would not be viable in the long run, as they would accumulate wax residue and trash over time compromising our tracking algorithm.
We had to change our 3D printing approach because of heat and pressure. Over time, we noticed warping of our container in 2 significant locations. The first location is where we secured our container to the hive using a bungee cord: the container started to bend inward, which in the long run would affect our footage. The second location is the mesh frame: due to the tension caused by the fishing line and the hot Texas temperatures reaching 100 \({}^{\circ}\)F (37.7 \({}^{\circ}\)C) during the summer, the mesh frame started to warp inwards, loosening the fishing line as shown in Figure 1; as a result, honey bees are able to break into the camera room, compromising our tracking.
Therefore, we changed our design to laser-cut our container out of wood. While the overall appearance of the container
Fig. 1: Camera view of fishing line mesh frame warping.
is similar, adjustments in the approach of our CAD design process were made to accommodate the laser-cutting process. In order to laser cut, our 3D model needs to be separated into 2D sections to convert our model into an SVG file format. Using wood gave us a stronger foundation and cut our time to make a container significantly. Previously, the creation of a container took between 4 to 5 days to 3D print, whereas the adoption of laser cutting reduced the time to manufacture to approximately 4 hours followed by an additional day for assembly. The figures below provide an overview of the enclosure and the mesh frame computer-aided design model before converting to SVG.
Our viewing area for the wooden container was also reduced to allow our camera to get closer to the honey bees improving our pollen and mite detection accuracy. Our new viewing area is reduced to 110 mm by 65 mm and our camera height is lowered to 120 mm giving us a significantly better view of the honey bees as shown in Figure 4.
Our container incorporates two cable exits. The upper cable exit is specifically designated for our Power over Ethernet (PoE) cable, which both powers the Jetson Nano and provides Internet connectivity. The lower cable exit is dedicated to the BME680 sensor, which runs from the top section through the camera room and out into the honey bee hive. In order to achieve a water-tight seal and protect our electronics we use the cable lids we designed shown in Figure 5.
For monitoring the honey bee hive's humidity and temperature, we employ a BME680 sensor. Considering this sensor is not specifically intended for outdoor environments, we designed and developed a case with air vents to ensure we don't compromise our readings as shown in Figure 6. To 3D print the container we used PLA filament due to its non-toxic nature.
Fig. 4: Wooden enclosure camera view.
Fig. 3: Wooden enclosure
Fig. 2: CAD enclosure design
To connect our sensor to the Jetson Nano, we soldered flexible silicone 30-gauge copper wires to the sensor and ran them through our container to the Jetson Nano's 40-pin expansion header. We placed the sensor halfway inside the bee hive through the hive entrance.
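As a small illustration of how the hive readings can be taken from this wiring, the sketch below uses the Adafruit CircuitPython BME680 driver over I2C; the bus choice, library, and polling interval are our assumptions rather than details given in the paper:

```python
import time
import board
import adafruit_bme680  # Adafruit CircuitPython BME680 driver (assumed available)

# The sensor hangs off the Jetson Nano's 40-pin header and is read over I2C.
i2c = board.I2C()
sensor = adafruit_bme680.Adafruit_BME680_I2C(i2c)

while True:
    temperature_f = sensor.temperature * 9 / 5 + 32  # hive temperature in Fahrenheit
    humidity = sensor.humidity                       # relative humidity in percent
    print(f"hive: {temperature_f:.1f} F, {humidity:.1f} %RH")
    time.sleep(300)  # one reading per 5-minute monitoring cycle
```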
To provide internet access and power to our Jetson Nano, we utilize a Power over Ethernet (PoE) switch. A PoE switch provides both power and internet access all through a Cat6 cable running from the PoE switch placed indoors to our PoE adapter inside our container. The PoE Adapter splits the ethernet and power into two channels in order to connect our Jetson Nano. We chose this approach instead of others, such as solar panels, battery packs, or wifi, because it allows us to reduce cable clutter while providing a long-lasting solution with a reliable source of internet and power to our Jetson Nano. Figure 9 is an image of the Top Box fully assembled with our Jetson Nano, BME680 sensor cables, Raspberry Pi camera, and PoE adapter all connected.
To capture footage of the honey bees in the enclosure, we used the Raspberry Pi Camera V2.1 connected to the Jetson Nano via a Raspberry Pi ribbon cable. To hold the camera in place we laser cut a frame from wood and secured it in the Top Box as shown in Figure 8.
Fig. 5: Side View of the container with cable lids attached.
Fig. 6: BME680 Sensor Case
Fig. 7: BME680 Sensor
Fig. 8: Raspberry Pi Camera V2.1 in monitoring system.
Fig. 9: Image of container Top Box section fully assembled.
Lastly, we add a plywood sheet to the bottom of the container. This addition provides a landing place for the honey bees, gives our object detection a neutral background, and helps the container stand. The plywood sheet can be seen in Figure (a)a.
## IV Software
### _Secure Shell Protocol_
In order to enable remote updates for our Jetson Nano device, we implemented Secure Shell Protocol (SSH) tunneling. To ensure accessibility from different networks, we utilized a virtual machine hosted on the Google Cloud platform. This configuration enables us to establish an SSH tunnel from our local computer to the Google Cloud VM, and perform reverse SSH from the Jetson Nano to the Google Cloud VM.
### _Honey bee Detection_
In this study, the YOLOv7-Tiny object detection model was used to identify honey bees in order to track their activity. YOLOv7 proved to be the fastest and most accurate real-time object detection model at the time of our study [15]. Due to the computational limitations of the Jetson Nano, we used the YOLOv7-Tiny version of YOLOv7 to achieve a higher frame rate [15].
To train our model, approximately 50 5-minute videos at 10 frames per second at 1280 x 720 every 10 minutes over the span of 4 days (to account for different lighting) were obtained from our own honey bee hive using the containers we developed. Images every 3 seconds (30 frames) were then extracted from the videos to allow the honey bees to move and give us variety in our training data.
The process of annotating honey bee images for our YOLOv7-Tiny model involved the use of the LabelImg [18] tool. For our labeling, we purposely annotated only honey bees with the majority of their body visible in order to improve our detection algorithm, since partial honey bee detections are irrelevant to our tracking and this also avoids flickering when honey bees are on the edge of the frame. Annotations were saved in the YOLO format with the only class being "Honey bee", resulting in a total of 1,235 annotated images. Approximately 9,700 honey bees were annotated in total. The detection model is trained with an NVIDIA GeForce RTX 3070 GPU. Training images are resized to 416 x 416 pixels as input for our YOLOv7-Tiny model, with a batch size of 8 for 100 epochs.
Our goal is to have a live status update from every hive with a 5-minute delay. In order to achieve such a goal we must optimize our model as much as possible. Given our resource constraints to make our approach cost-effective, our YOLOv7-Tiny model takes approximately 56 ms for every frame for inferring on the Jetson Nano. Since we have a 5-minute video at 10 frames per second totaling 3000 frames, this means that it would take about 2 minutes 48 seconds for inferring only. To achieve faster inferring, we convert our model into a TensorRT engine[19]. Before converting our model to TensorRT our model has to be converted into ONNX [20] by exporting our model with the script provided by YOLOv7 repository [21].
Open Neural Network Exchange (ONNX) is an open standard format that serves as a common representation for machine learning models. It offers a standardized set of operators and a shared file format, allowing AI developers to utilize models seamlessly across various frameworks, tools, runtimes, and compilers. The key benefit of ONNX is its ability to promote interoperability between different frameworks, enabling easier integration and facilitating access to hardware optimizations. By adopting ONNX, developers can leverage the advantages of different frameworks and streamline the deployment of machine learning models[20]. Once our model is in ONNX format, the Tensorrt engine is then created using _TensorRT-For-YOLO-Series_ repository[22] on the Jetson Nano. With our TensorRT engine, inferring time was cut by almost half, taking approximately 27 ms per frame. Our total inference time is cut down to about 1 minute and 21 seconds per video.
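The repository cited above wraps the engine-creation step; purely as an illustration of the general recipe, a generic TensorRT 8.x engine build from the exported ONNX file looks roughly like the sketch below (file names and workspace size are placeholders, not values from the paper):

```python
import tensorrt as trt

def build_engine(onnx_path, engine_path, fp16=True):
    # Parse the exported ONNX graph and serialize a TensorRT engine for the Jetson Nano.
    logger = trt.Logger(trt.Logger.WARNING)
    builder = trt.Builder(logger)
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
    parser = trt.OnnxParser(network, logger)
    with open(onnx_path, "rb") as f:
        if not parser.parse(f.read()):
            raise RuntimeError(parser.get_error(0))
    config = builder.create_builder_config()
    config.max_workspace_size = 1 << 28            # 256 MB build workspace (placeholder)
    if fp16 and builder.platform_has_fast_fp16:
        config.set_flag(trt.BuilderFlag.FP16)      # half precision for faster inference
    engine = builder.build_engine(network, config)
    with open(engine_path, "wb") as f:
        f.write(engine.serialize())

build_engine("yolov7-tiny-honeybee.onnx", "yolov7-tiny-honeybee.trt")
```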
For our pollen and mite detection, we train a second YOLOv7-Tiny using 2 classes, "Pollen" and "Mite". To collect pollen training data, we filtered through the videos collected with our container searching for honey bees with pollen. We then extracted the honey bee images for training data from the videos using our YOLOv7-Tiny honey bee detection model. Once we had a collection of approximately 1,000 honey bee images with pollen, we used the LabelImg [18] tool for annotation. For mite training data, due to limited time and availability of varroa mites, we used mite placeholders to train our mite detection. We acknowledge that our approach may not perfectly replicate realistic scenarios. However, to simulate the presence of varroa mites on the honey bees, we utilized opaque red beads with a diameter of 1.5 mm as temporary placeholders. While these beads may not accurately mimic the characteristics of actual varroa mites, they served as a substitute to analyze the capabilities of our monitoring system. To collect training data we glued beads onto dead honey bees and extracted approximately 700 images of honey bees with "mites". The detection model was also trained with an NVIDIA GeForce RTX 3070 GPU with the same training parameters except for the input size. For this model our training images were resized to 64 x 64 pixels. Once our YOLOv7-Tiny model was trained, we converted our model into ONNX and then into a TensorRT engine as we did with our previous model.
### _Tracking Algorithm_
Our tracking algorithm is based on the honey bees currently visible. Once a honey bee goes out of sight, it will be counted as a new honey bee if reintroduced. The honey bee's position is based on the midpoint derived from the detection box extracted from our YOLOv7-Tiny model. To track the honey bees we store the current position of each bee and compare the previous frame with the current frame to determine if the honey bee moved and in which direction.
Our primary objective is to give as close of a live feed as possible with minimal delay. To achieve this, our monitoring system captures a 5-minute video of the honey bees' activity and processes the video afterward with our tracking system. While the initial video is being processed, the system concurrently records the subsequent 5-minute video. By adopting
this approach, we ensure a near real-time observation of the honey bees' behavior without any significant interruptions.
To record our 5-minute video we use GStreamer recording at 1280 by 720p at 10 frames per second and save our video in 640 by 420p. Downscaling the images is essential to speed up our system's throughput, particularly due to the processing limitations of the Jetson Nano. By downsizing the image, we can significantly enhance the extraction and processing time, resulting in a more efficient workflow. For instance, our processing time for images with a resolution of 1280 by 720p typically takes around 7 minutes and 20 seconds. However, by downscaling, we can reduce this processing time to approximately 3 minutes, excluding the time required for pollen and mite inference. Deepstream can be used to speed up our throughput problem but at the time of implementation, Deepstream isn't available for Jetpack 4.6 which is the last available Jetpack for Jetson Nanos [23].
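The exact GStreamer pipeline is not given in the text; the sketch below launches a plausible 10 fps, 5-minute capture that is scaled to 640 by 420 before encoding, using the Jetson camera and encoder elements. The element names, caps, and output file are illustrative assumptions rather than the pipeline actually used:

```python
import subprocess

# 10 fps x 300 s = 3000 buffers, captured at 1280x720 and downscaled to 640x420
# before H.264 encoding (element names follow the JetPack 4.x GStreamer stack).
pipeline = (
    "nvarguscamerasrc num-buffers=3000 ! "
    "'video/x-raw(memory:NVMM),width=1280,height=720,framerate=10/1' ! "
    "nvvidconv ! 'video/x-raw(memory:NVMM),width=640,height=420' ! "
    "nvv4l2h264enc ! h264parse ! mp4mux ! filesink location=segment.mp4"
)
# -e sends EOS at the end so the MP4 file is finalized correctly.
subprocess.run("gst-launch-1.0 -e " + pipeline, shell=True, check=True)
```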
Our tracking algorithm uses the output of every frame processed through the honey bee inference TensorRT engine. The output given by our model is based on the upper left and lower right corners of a rectangle of each honey bee inference from the current frame. To determine the midpoint of each honey bee on the video feed we use the following equations:
\[X=(((maxX-minX)/2)+minX)\]
\[Y=(((maxY-minY)/2)+minY)\]
The maxX and minY are our coordinates of the lower right vertex of the rectangle and minX and maxY are our upper left vertex.
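In code, this midpoint computation is simply:

```python
def midpoint(min_x, min_y, max_x, max_y):
    # Center of the detection rectangle, used as the honey bee's position.
    x = ((max_x - min_x) / 2) + min_x
    y = ((max_y - min_y) / 2) + min_y
    return x, y
```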
To track each honey bee, on initial detection of each honey bee we create a new profile. Each honey bee profile includes Id, last seen location, status, and bee size. To determine whether a honey bee has been detected previously or not when tracking, we use the location of all honey bees detected on frame n-1 and compare them to the output of the current frame n. To consider a honey bee the same bee, we give the new midpoint a tolerance of 50 pixels offset in any direction from the previous location favoring proximity to other honey bees that might be close enough to fall within that range. Any honey bee that does not fall under any currently existing profile is then treated as a new honey bee. Honey bees that don't have a new midpoint in the current frame are then dropped from the list of active honey bees.
A honey bee can have any of 4 statuses, "Arriving", "Leaving", "New", and "Deck", depending on its movement. Initially, upon the first detection of the honey bee, it is assigned the status of "New", meaning that it is the first time the system sees the honey bee or that the honey bee has not crossed any triggers. To track honey bee movement, we have two triggers that change the status of the honey bee. The resolution of the video is set at 640 by 420 pixels, meaning the height **y** of the video runs from 0 to 420 pixels. We then divided the height into three even sections of 140 pixels each, setting our "Arriving" trigger at 140 pixels and our "Leaving" trigger at 280 pixels. If the midpoint of the honey bee at frame n-1 is greater than 140 and at frame n is less than or equal to 140, the status of the honey bee changes to "Arriving", meaning that the honey bee is headed to the inside of the beehive; but if the midpoint at n-1 is less than or equal to 140 and at n is greater than 140, the status changes to "Deck", meaning the bee is in the middle of the container.
The "Leaving" trigger is determined based on its crossing at the Y-coordinate value of 280. This trigger will result in the honey bee status being changed to either "Leaving" or "Deck," depending on whether the midpoint is less than 280 at frame n-1 and greater than or equal to 280 at frame n, or if the midpoint is greater than 280 at frame n-1 and less than or equal to 280 at frame n, respectively. Figure 10 is a diagram demonstrating how the status of the tracking algorithm works.
The honey bee size is extracted once per honey bee profile. The honey bee size is based on the longest side of the rectangle output given by our model. Our camera covers a work area of 110 mm by 65 mm. To get the size of the honey bee we use the following formulas:
\[1.(maxX-minX)/(framesizeX/containerSizeX)\]
\[2.(maxY-minY)/(framesizeY/containerSizeY)\]
Formula 1 is used if the longer side of the rectangle is along the X-axis or formula 2 for the Y-axis. We divide the frame size by the container size for the respective axis to get the ratio and determine the size of each bee. The objective of determining the size of each honey bee is to investigate the ratio between a drone and a worker honey bee. However, due to variations in the inference rectangle's size, which can change depending on whether a honey bee is fully visible or not fully present due to it being on the edge of the frame, we only extract the honey bee size when it crosses a "Leaving" or "Arriving" trigger. This approach ensures that we capture the complete size of the honey bee. It is important to note that this method may not be optimal since the size is solely determined by the longest side of the inference rectangle. Consequently, if the honey bee is at an angle when its size is captured, the accuracy and reliability of our data may be affected.
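Expressed in code, the pixel-to-millimetre conversion above (with the 640 by 420 frame and the 110 mm by 65 mm viewing area) becomes:

```python
FRAME_W, FRAME_H = 640, 420          # processed video resolution in pixels
CONTAINER_W, CONTAINER_H = 110, 65   # viewing area under the camera in mm

def bee_size_mm(min_x, min_y, max_x, max_y):
    # Size is taken from the longer side of the detection rectangle, converted
    # to millimetres by dividing out the pixels-per-millimetre ratio.
    width_px, height_px = max_x - min_x, max_y - min_y
    if width_px >= height_px:
        return width_px / (FRAME_W / CONTAINER_W)
    return height_px / (FRAME_H / CONTAINER_H)
```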
The purpose of considering the honey bee size is to determine if using the size alone is enough to show the difference between worker and drone bees. The graph below shows the
Fig. 10: Triggers diagram status breakdown for honey bee tracking.
size output of our model from a 5-minute video and then manually annotated drone and worker honey bees.
The images below are outputs extracted from two profiles of two different types of honey bees inferred from a 5-minute video.
To identify the presence of pollen or mites on a honey bee, we follow a specific procedure. For each honey bee profile, we save an image of the honey bee into a designated folder when it passes any of the triggers. This ensures that we capture a complete view of the honey bee for analysis. Once the honey bee TensorRT engine model has completed processing the video, we proceed to load the pollen and mite TensorRT engine model and process all the images extracted by the honey bee TensorRT engine.
## V Website
IntelliBeeHive has a web application designed to store and present data gathered from honey bee hive monitoring systems, catering to apiarists or beekeepers. Our web page can be found at [https://bee.utrgv.edu/](https://bee.utrgv.edu/). The monitoring system collects hive data, which is then transmitted to the IntelliBeeHive web server via an API. The web server, a remote computer accessible through the internet, receives and stores the data in its database [24]. An API serves as the interface that enables communication between programs on separate machines [25]. Once the hive data is stored, it is presented to the user in an organized and user-friendly manner through their web browser, whether on a personal computer or a mobile device. This section will discuss the functionality of the IntelliBeeHive web application, breaking it down into two main components: the frontend and the backend. The frontend is what the user experiences and interacts with on their personal device, while the backend is what happens on the web server, such as data collection and storage.
### _Frontend_
IntelliBeeHive is designed for apiarists, meaning the website is user-friendly and accessible from almost all devices with web access, including smartphones and computers.
#### V-A1 Layout
IntelliBeeHive's front-end consists of 8 separate web pages. These pages are accessed sequentially and have specific restrictions depending on the type of user accessing them. There are 3 user types: all users, registered users, and admin users.
"All users" refers to anyone who has access to the IntelliBeeHive website; no credentials are required. All users have access to the landing, log-in, and sign-up pages and to the hive demo page. The hive demo page displays a single hive's live video recording, data, and statistics.
Once a user signs up and has verified their credentials they become registered users. Registered users have access to the hive feed page, which showcases all hives currently utilizing a monitoring system. The hive feed page provides live and past data in graph and table formats. Registered users can navigate
Fig. 11: Honey bee drone versus worker bees size analysis.
Fig. 14: Shows IntelliBeeHive’s landing page welcoming new and current users.
Fig. 12: Drone and Worker Bee Comparison
Fig. 13: Shows an illustration of the IntelliBeeHive web application functionality.
to the comment page to leave feedback or questions regarding the web application. They can also access the settings page to update their credentials or delete their account.
Registered users can only become admin users if they are granted the privilege by the webmaster. Admin users have special privileges, including the ability to create, edit, and delete hives. They can also view comments submitted by registered users and delete registered user accounts. However, admin users cannot add a monitoring system or link one to an existing hive, as this privilege is exclusive to the webmaster.
#### V-A2 Adding Users
New users can be added as registered users by signing up through the sign-up page. To complete the sign-up process, users are required to provide their first and last name, email address, and an 8-character alphanumeric password.
The sign-up page will automatically show the user a prompt box where they can input the verification code. For security a user has 24 minutes to input the code before it expires, if the code expires the user will need to start over the sign-up process [26]. Once the user inputs the verification code within the specified time limit, their credentials are stored in the web server and they are recognized as a registered user. The web page then redirects the user to the hive feed page.
In case a registered user forgets their password, the web application offers a "Forgot Password" function where the user can re-verify their identity with a verification code and reset their password and regain access to their account.
#### V-A3 Adding Hives
Only admin users have the privilege to add, edit, and delete hives. To add a new hive an admin needs to navigate to the admin page and provide the following:
1. Hive name: A unique name to identify the hive.
2. City: The city where the hive is located.
3. State: The state where the hive is located.
4. Coordinates: The geographical coordinates (latitude and longitude) of the hive's location.
5. Picture: An image of the hive.
Once the admin has submitted this information, a success message will be displayed indicating that the hive has been added to the list of hives on the hive feed page. However, initially the hive will be empty: the live data will be shown as "-", indicating that no data is available, and its graphs and tables will be empty. This is because there is currently no monitoring system linked to the newly added hive. Only the webmaster has the privilege of linking a monitoring system to the hive. Once the monitoring system is linked, the hive data will start to populate, and the live data, graphs, and tables will reflect the actual data collected from the hive.
#### V-A4 Hive Feed
Upon logging in, registered users will be directed to the hive feed page. This page showcases live and past data of each hive collected by their monitoring system. The data collected by the monitoring system is shown in Table I. On the hive feed page, the live or most recent data is displayed in the yellow block beneath the hive's image, location, and video feed, as depicted in Figure 15. Each individual measurement is shown alongside its unit of measurement and above its title, providing a clear visualization of the data.
The measurements are updated every 5 minutes using IntelliBeeHive's API mentioned in Section V-B4; this API facilitates communication between the web server and the user's personal device. However, it is important to note that the live video feed is available only for demo purposes: regular users do not have access to a live video feed. The focus of IntelliBeeHive is to provide comprehensive data for analyzing the health of beehives, and the video feed is not considered a requirement for this analysis.
#### V-A5 Graphs and Tables
Below the yellow block containing the hive's live measurements are a series of graphs and tables containing the past data for each measurement in Table I. There are a total of 10 blocks, one for each measurement, and users can alternate between viewing the data in graph or table format as shown in Figure 16 using the 2 buttons at the top left corner of each block.
The past data presented in these graphs and tables encompasses all the data collected from the current year, starting from January. Since hive data is uploaded every 5 minutes to the web server, a single hive can accumulate 105,120 data points for each measurement in one year. To alleviate the strain on the web server caused by loading such a large amount of data for each hive, we retrieve data collected every hour instead of every 5 minutes, significantly reducing the data
| Measurement | Unit of Measurement |
| --- | --- |
| Temperature | Fahrenheit (F) |
| Humidity | Relative Humidity (%) |
| CPU Temperature | Celsius (C) |
| GPU Temperature | Celsius (C) |
| Bees on Deck | Single Unit |
| Bees Leaving | Single Unit |
| Bees Arriving | Single Unit |
| Bees Average Size | Millimeters (mm) |
| Pollen Count | Single Unit |
| Mite Count | Single Unit |

TABLE I: The table shows the list of measurements collected from each hive to monitor their daily activity.
Fig. 15: Honey bee hive feed users see upon logging into the website.
size from 105,120 units per measurement to 8,760 units per measurement. This approach makes the data more manageable.
Once the data is retrieved it is rendered into table format using HTML and CSS and into graph format using Dygraphs, an open-source JavaScript charting library designed to handle large data sets [27]. Open-source software refers to software that grants users the freedom to use, modify, and distribute the code without restrictions. How the data is retrieved will be discussed in Section V-B.
### _Backend_
IntelliBeeHive is hosted on a Linux virtual machine located at the University of Texas Rio Grande Valley (UTRGV). The virtual machine serves as the web server or cloud computer for IntelliBeeHive, providing a secure and flexible environment. The web server is responsible for hosting the web application, as well as collecting, storing, and sending beehive data.
IntelliBeeHive is written in PHP, an open-source scripting language tailored for web applications, and was developed using a Laravel framework. A web framework provides an application with many useful libraries specific for web development and provides a standard structure that most web applications use. The Laravel framework is a powerful open-source framework offering numerous libraries and components for APIs and database handling and follows a standard structure that is commonly used in web applications. This section will cover IntelliBeeHive's backend workflow, database structure, and how data is collected and sent by the API.
#### V-B1 SQL Database
The IntelliBeeHive website stores all of its data in an SQL or relational database managed by MySQL, an open-source SQL management system. SQL stands for Structured Query Language and is used to create, store, update, and retrieve data from structured tables. In an SQL table, each row represents a data entry and each column identifies a specific field of the entry. IntelliBeeHive's database is made up of 6 main tables: Users, Comments, Activity, Hives, DB_Info, and Network_Info. Figure 17 illustrates the logical structure of the tables. The Users, Comments, and Activity tables contain all the data pertaining to the users. The Users table contains information such as the user's name, credentials, and a primary key that uniquely identifies each user. The Comments and Activity tables store user comments and web activity respectively. These tables can be linked to a specific user through their primary key, as shown in Figure 17. The Hives, DB_Info, and Network tables store data pertaining to the beehives. The Hives table stores a hive's name, location, picture, and primary key, and the Network_Info table stores the hive's monitoring system's identification key. Whenever a new monitoring system is assigned or added to a hive by the webmaster, a new Hive Activity table is created with a unique title, serving as a key. Each Hive Activity table stores the measurements listed in Table I for a specific hive. Thus, there is a separate Hive Activity table for each hive in the system. The DB_Info table stores a hive's primary key, system identification key, and table key to link each hive to their Hive Data table and monitoring system.
Fig. 16: Shows 6 of 10 graphs created using DygraphsJS and Bootstrap libraries.
Fig. 17: Shows IntelliBeeHive’s SQL database schema.
#### V-B2 Backend Workflow
IntelliBeeHive's back-end workflow is similar to its front-end workflow covered in Section V-A1, however in this section we will discuss the underlying processes.
When a user visits the Landing Page, they have several options: they can view the Hive Feed Demo page, create a new account through the Sign Up page, or log into their existing account. If a user opens the Hive Demo page, the hive data is fetched from the SQL database using the API. Since hive activity data will be continuously sent to the user's browser from the web server every 5 minutes, the API is used to facilitate this process. On the other hand, when a user creates an account through the Sign Up page, their input information is submitted to the web server without the use of the API. The API is primarily reserved for scenarios where data needs to be frequently sent from or received by the web server. If the submitted information is correct, the user is assigned a token, which serves as a verification of their access and privileges. Subsequently, they are redirected to the Hive Feed page. If the information is incorrect the user is sent back to the Sign Up page.
Similarly, when a user logs into the application their credentials will be queried and verified against the stored information in the SQL database. If the credentials exist and match then the application will determine if the user should have admin privileges. If the user is an admin, they will be assigned a special token that identifies them as an admin and redirects them to the Admin Page, else they'll be assigned a regular token and redirected to the Hive Feed page. The Hive Feed page similar to the Hive Feed Demo page uses the API to fetch all hive past and current activity data.
#### V-B3 Adding Users, Activities, Comments and Hives
Once a user is logged in they can add comments, update their credentials, or manage their hives. Regular users can add comments and update or delete their credentials, meanwhile admin users can do the same plus add, update, and delete hives.
We can consider each user, comment, activity, and hive as a class with its own set of attributes mentioned in Section V-B1. An instance of a class can be considered an object. For example, when an action is performed, an instance of the corresponding class is created, which can be seen as an object. We can use a UML (Unified Modeling Language) diagram to represent the relationship and interaction between these classes. Figure 19 shows a UML diagram of our user, comment, activity, and hive classes. Each box in the UML diagram represents an object and is made up of 3 sections, going from top to bottom: class name, list of attributes, and list of privileges. Attributes input by the user are marked as public (+) and must be valid, else an object is not created and the user is sent a fail message. A regular and admin user are objects inherited from the user class since they both have the same attributes but differ in privileges. An admin user is an aggregation of a regular user since it has the privileges of a regular user in addition to its own. A regular user can create multiple comment and activity objects that will be associated with the user who created them by their primary key. However, unlike comments and activity objects, when a hive object is created there is no key associating the hive to who created it. The only association the hive object has with the admin user is that only admin users can create hives. When any object is created they are stored in the SQL database. Hive, comment, and activity objects will continue to exist without the user who created them, thus why they are only associated with the user.
#### V-B4 REST API
IntelliBeeHive's API follows a REST (Representational State Transfer) architecture, which adheres to several design principles. These principles include having a uniform interface, separating the client and server, being stateless, and employing a layered system architecture [28]. A uniform interface means every request made to the API should work the same. The client and server refer to two separate computers, one making the request and the other fulfilling the request. In our case, the computer making the request is either the monitoring system or the web browser, and the computer fulfilling the request is the web server. The requests must be stateless, meaning each request should have the necessary information for the web server to fulfill without the need for a second request. The life cycle of a request follows a layered system architecture. The client layer handles sending requests and receiving responses from the API, which include a status code indicating whether the request succeeded or failed. The authentication layer verifies if the client is authorized to access the API; for authorization, the client must provide an alpha-numeric authentication key. The endpoint layer verifies if the client's input data is valid and formats the request's output data in JSON, a lightweight data-interchange format. The data access layer is responsible for handling the client's input data by checking for and removing any malicious code, preparing the necessary database query to retrieve or store
Fig. 18: Shows a flowchart diagram of IntelliBeeHive’s back-end workflow.
data, and determining the success of the query execution. The database layer executes the query and returns the output to the data access layer; this layer runs in MySQL, which is covered in Section V-B1.
#### V-B5 Collecting and Retrieving Honey Bee Data
IntelliBeeHive's REST API has the following 4 main operations: getData, uploadData, uploadVideo, and uploadNetwork. The UML diagram in Figure 21 depicts each operation in blocks. Each block is made up of 3 parts, going from top to bottom: the purpose and URL of the operation, the variable data being sent/received, and the REST API request type. Three of the operations are of type POST and are used only by the monitoring system. POST requests in a REST API are used to upload data, thus they are used exclusively by the monitoring system to upload the hive's environment condition, video feed of the hive, and network information of the system. On the other hand, GET requests in an API are used to retrieve data and thus are used by the website's Hive Feed and Hive Feed Demo pages to display the hive's latest condition and video feed. Although the REST API and the website are hosted on the same machine, the GET request is made from the user's browser located on a different machine. The reason behind making GET requests to the API from the user's machine is to give the user live updates without them having to refresh their browser. When a user opens up a page to any website they receive a static page that won't change unless they requery the web server by refreshing their browser. Our page contains a JavaScript script that queries the web server using the REST API to provide the user with the newest updates every 5 minutes without them having to refresh their browser.
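As an illustration of how the monitoring system might call one of these POST operations, the sketch below sends a batch of Table I measurements with an authentication key; the endpoint path, field handling, and response format here are assumptions for exposition, not the actual API definition:

```python
import requests

API_BASE = "https://bee.utrgv.edu/api"   # illustrative base path, not the documented URL
AUTH_KEY = "REPLACE_WITH_SYSTEM_KEY"     # alpha-numeric key issued to the monitoring system

def upload_data(measurements):
    """POST one 5-minute batch of hive measurements (e.g. temperature, humidity, bee counts)."""
    payload = dict(measurements)
    payload["auth_key"] = AUTH_KEY
    response = requests.post(f"{API_BASE}/uploadData", json=payload, timeout=30)
    response.raise_for_status()          # non-2xx status codes indicate a failed request
    return response.json()               # the API formats its output as JSON
```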
Fig. 19: Shows IntelliBeeHive’s UML diagram.
Fig. 20: Shows a flowchart diagram of IntelliBeeHive’s REST API request workflow.
Fig. 21: Shows a UML diagram of IntelliBeeHive’s REST API.
## VI Results
### _YOLOv7 Training_
The graphs shown below are the results of our YOLOv7-Tiny model's training. The F1-score for honey bee model recognition is 0.95 and the precision and recall value is 0.981, as shown in Figure 22.
For our pollen and mite object detection model, the F1-score is 0.95 and the precision and recall value is 0.821 for pollen and 0.996 for mite, as shown in Figure 23.
The images shown below are frames extracted from the video output after it is processed by the honey bee YOLOv7-Tiny model and our tracking algorithm. The circle around each detection represents the area within which the honey bee can move and still be considered the same honey bee. The blue dot represents the honey bee's previous mid-point and the red dot represents the current mid-point.
The figures below are example outputs of our pollen and mites detection model using our TensorRT engine on each honey bee. The letter P indicates pollen was detected followed by the confidence of the model.
Due to our "mite" detection model being trained with placeholder data, we will not go in-depth into our model's accuracy in detecting mites.
### _Ground Truth Data vs Tracking Algorithm_
To evaluate the accuracy of our algorithm, we conducted an experiment using five 1-minute long videos. Each video
Fig. 23: Pollen and Mite model training results
Fig. 22: Honey bee model training results
Fig. 26: Honey bee Example 1 with Mite
Fig. 25: Pollen Detection Example Output Images
was manually labeled tracking each honey bee's identification, final status, initial frame detected, and last frame seen. We processed the videos through our algorithm to obtain the algorithm's output. The results for the five videos are presented in Table II.
We determine the accuracy of our algorithm by extracting the error rate from the "Arriving" and "Leaving" honey bee status counts given by the algorithm (\(C_{\text{Algorithm}}\)) compared to the manual count (\(C_{\text{Manual}}\)), using Equation 1 below.
\[\text{Error Rate}=\frac{|C_{\text{Algorithm}}-C_{\text{Manual}}|}{C_{\text{Manual}}} \tag{1}\]
Once we have the Error Rate of our Algorithm we can then extract the accuracy by using Equation 2.
\[\text{Accuracy}=1-\text{Error Rate} \tag{2}\]
We calculate the average accuracy for each video and then calculate the overall accuracy across all 5 videos to determine the accuracy of our tracking algorithm and honey bee object detection model.
Formula Key: Arr = Arriving, Acc = Accuracy. The accuracy is computed with Equations 1 and 2 separately for the Arriving and Leaving counts and then averaged per video.
\[\text{Acc}_{1,\text{Arr}}=1-\frac{|17-17|}{17}=1.0000\quad\text{Acc}_{1,\text{Leaving}}=1-\frac{|19-19|}{19}=1.0000\quad\text{Acc}_{1}=\frac{1.0000+1.0000}{2}=1.0000\]
\[\text{Acc}_{2,\text{Arr}}=1-\frac{|39-36|}{36}=0.9166\quad\text{Acc}_{2,\text{Leaving}}=1-\frac{|29-32|}{32}=0.9062\quad\text{Acc}_{2}=\frac{0.9166+0.9062}{2}=0.9114\]
\[\text{Acc}_{3,\text{Arr}}=1-\frac{|42-44|}{44}=0.9545\quad\text{Acc}_{3,\text{Leaving}}=1-\frac{|33-34|}{34}=0.9705\quad\text{Acc}_{3}=\frac{0.9545+0.9705}{2}=0.9625\]
\[\text{Acc}_{4,\text{Arr}}=1-\frac{|35-33|}{33}=0.9393\quad\text{Acc}_{4,\text{Leaving}}=1-\frac{|22-22|}{22}=1.0000\quad\text{Acc}_{4}=\frac{0.9393+1.0000}{2}=0.9696\]
| Vid | Arriving (M) | Arriving (A) | Leaving (M) | Leaving (A) | Deck (M) | Deck (A) | Total (M) | Total (A) | Pollen (M) | Pollen (A) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | 17 | 17 | 19 | 19 | 0 | 0 | 36 | 36 | 2 | 1 |
| 2 | 36 | 39 | 32 | 29 | 3 | 4 | 71 | 72 | 1 | 1 |
| 3 | 44 | 42 | 34 | 33 | 1 | 4 | 79 | 79 | 0 | 0 |
| 4 | 33 | 35 | 22 | 22 | 0 | 5 | 55 | 62 | 0 | 0 |
| 5 | 40 | 40 | 34 | 32 | 1 | 7 | 75 | 79 | 2 | 1 |

TABLE II: This table shows a performance comparison between our manual (M) and algorithm (A) output.
Fig. 24: Honey Bee Tracking Output Example
\[\text{Acc}_{5,\text{Arr}}=1-\frac{|40-40|}{40}=1.0000\quad\text{Acc}_{5,\text{Leaving}}=1-\frac{|32-34|}{34}=0.9411\quad\text{Acc}_{5}=\frac{1.0000+0.9411}{2}=0.9705\]
\[\text{Avg Acc}=\frac{\text{Acc}_{1}+\text{Acc}_{2}+\text{Acc}_{3}+\text{Acc}_{4}+\text{Acc}_{5}}{5}=\frac{1.0000+0.9114+0.9625+0.9696+0.9705}{5}\approx 0.9628\quad(\text{or }96.28\%)\]
We exclude honey bees with a "New" status from our analysis due to the potential unreliability of their count. This is because honey bees have the ability to stay near the entrance and exit of the container, which can create complications for the model in accurately determining whether an object is indeed a honey bee or not.
The "Deck" difference happens due to our approach in our algorithm. The issue arises when the algorithm relies on identifying the nearest honey bee in each frame to track their movement. However, if a honey bee happens to move significantly faster than usual, this approach can lead to problems. Specifically, when the algorithm considers the closest midpoint in the next frame as the same bee, it may result in losing track of the current honey bee and mistakenly pairing other honey bees with the wrong counterparts. This can lead to unpaired honey bees being marked as new and potentially disrupting the tracking process. Increasing the frame rate can significantly improve this problem.
Because the five 1-minute videos do not contain enough honey bees with pollen (as shown in Table II), to measure the accuracy of our pollen and mite detection we manually annotated honey bee profile images for five different 5-minute videos, shown in Table III. The pollen model results include the counts of false positives and false negatives, as well as the total number of honey bees detected for each video. Due to our limited mite data, we are not able to accurately represent the accuracy of our mite detection class.
To determine the accuracy of our pollen detection model we use the precision (Equation 3) and recall (Equation 4) formulas and then extract our F1 scores (Equation 5).
\[\text{Precision}=\frac{\text{True Positive}}{\text{True Positive}+\text{ False Positive}} \tag{3}\]
\[\text{Recall}=\frac{\text{True Positive}}{\text{True Positive}+\text{False Negative}} \tag{4}\]
\[\text{F1 Score}=\frac{2*(\text{Precision}*\text{Recall})}{(\text{Precision}+\text{Recall})} \tag{5}\]
\[\text{Precision}_{1}=\frac{19}{19+3}=0.8636\] \[\text{Recall}_{1}=\frac{19}{19+4}=0.8261\] \[\text{F1 Score}_{1}=\frac{2*(0.8636*0.8261)}{(0.8636+0.8261)}=0.8444\]
\[\text{Precision}_{2}=\frac{13}{13+1}=0.9286\] \[\text{Recall}_{2}=\frac{13}{13+8}=0.6190\] \[\text{F1 Score}_{2}=\frac{2*(0.9286*0.6190)}{(0.9286+0.6190)}=0.7428\]
\[\text{Precision}_{3}=\frac{6}{6+1}=0.8571\] \[\text{Recall}_{3}=\frac{6}{6+4}=0.6000\] \[\text{F1 Score}_{3}=\frac{2*(0.8571*0.6000)}{(0.8571+0.6000)}=0.7059\]
\[\text{Precision}_{4}=\frac{7}{7+0}=1.0000\] \[\text{Recall}_{4}=\frac{7}{7+0}=1.0000\] \[\text{F1 Score}_{4}=\frac{2*(1.0000*1.000)}{(1.0000+1.0000)}=1.0000\]
\[\text{Precision}_{5}=\frac{13}{13+2}=0.8667\] \[\text{Recall}_{5}=\frac{13}{13+2}=0.8667\] \[\text{F1 Score}_{5}=\frac{2*(0.8667*0.8667)}{(0.8667+0.8667)}=0.8667\]
\[\text{Avg Prec}=\frac{\text{Prec}_{1}+\text{Prec}_{2}+\text{ Prec}_{3}+\text{Prec}_{4}+\text{Prec}_{5}}{5}\] \[=\frac{0.863+0.928+0.857+1.000+0.866}{5}\] \[=0.9032\]
Avg Rec \[=\frac{\text{Rec}_{1}+\text{Rec}_{2}+\text{Rec}_{3}+\text{Rec}_{4} +\text{Rec}_{5}}{5}\] \[=\frac{0.826+0.619+0.600+1.000+0.866}{5}\] \[=0.7823\]
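The per-video scores follow directly from the counts in Table III; the snippet below is a minimal sketch of Equations 3-5 (the true-positive counts are derived as algorithm detections minus false positives).

```python
# Per-video pollen-detection counts derived from Table III.
counts = [
    {"tp": 19, "fp": 3, "fn": 4},
    {"tp": 13, "fp": 1, "fn": 8},
    {"tp": 6,  "fp": 1, "fn": 4},
    {"tp": 7,  "fp": 0, "fn": 0},
    {"tp": 13, "fp": 2, "fn": 2},
]

def precision(tp, fp):
    return tp / (tp + fp)          # Equation 3

def recall(tp, fn):
    return tp / (tp + fn)          # Equation 4

def f1_score(p, r):
    return 2 * p * r / (p + r)     # Equation 5

for i, c in enumerate(counts, start=1):
    p, r = precision(c["tp"], c["fp"]), recall(c["tp"], c["fn"])
    print(f"Video {i}: P={p:.4f} R={r:.4f} F1={f1_score(p, r):.4f}")
```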
| Vid | Pollen (M) | Pollen (A) | False Pos. | False Neg. | Total Bees |
| --- | --- | --- | --- | --- | --- |
| 1 | 23 | 22 | 3 | 4 | 325 |
| 2 | 21 | 14 | 1 | 8 | 296 |
| 3 | 10 | 6 | 1 | 4 | 267 |
| 4 | 7 | 7 | 0 | 0 | 209 |
| 5 | 15 | 15 | 2 | 2 | 253 |

TABLE III: Performance of our pollen model, where M is the manually counted total of honey bees with pollen and A is the algorithm's total count of honey bees with pollen.
### _Website Data Visualization_
Our monitoring system uses Cron, a time-based job scheduler, to schedule a script that records and processes videos every 5 minutes and 30 seconds. The additional 30 seconds give GStreamer (our recording application) time to free the camera before the next process starts. However, the scheduled hours for running the monitoring system are limited to the interval from sunrise (7 am) to sunset (8 pm). This constraint is imposed because the camera system utilizes a Raspberry Pi Camera V2.1, which lacks night-vision capabilities; the system is therefore scheduled to operate only during daylight hours when sufficient visibility is available.
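In our deployment this timing is handled by Cron; the loop below merely illustrates the same schedule in Python, and the function name and idle interval are placeholders rather than our actual script.

```python
import datetime
import time

START_HOUR, END_HOUR = 7, 20   # daylight window: the Pi camera has no night vision

def record_and_process():
    """Placeholder for the ~5-minute GStreamer recording plus detection/upload steps."""
    pass

while True:
    now = datetime.datetime.now()
    if START_HOUR <= now.hour < END_HOUR:
        record_and_process()   # takes roughly 5 minutes
        time.sleep(30)         # extra 30 s so GStreamer can release the camera
    else:
        time.sleep(60)         # outside daylight hours, idle and re-check
```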
The graphs below show 4 of the 10 available on the IntelliBeeHive web application, covering different time periods and demonstrating the changes in activity, humidity, CPU temperature, and hive temperature throughout the days/weeks/months. |
2309.15092 | Formation of the hydrogen line 21-cm in Dark Ages and Cosmic Dawn:
dependences on cosmology and first light | We analyze the formation of the redshifted hyperfine structure line 21-cm of
hydrogen atom in the Dark Ages, Cosmic Dawn, and Reionization epochs. The
evolution of the global differential brightness temperature in this line was
computed to study its dependence on the values of cosmological parameters and
physical conditions in the intergalactic medium. Variations of the depth of the
Dark Ages absorption line at $z\sim80$ with variations of the cosmological
parameters $\Omega_b$, $\Omega_{cdm}$, $\Omega_{\Lambda}$, $\Omega_K$ and $H_0$
are studied. The standard model with post-Planck parameters predicts a value of
the differential brightness temperature in the center of the absorption line
$\sim$30-50 mK. The profile of this line can be quite another in the
non-standard cosmological models, which include the annihilating or decaying
dark matter, a primordial stochastic magnetic field, etc. It can be shallower
or be an emission bump instead of an absorption trough. It is also shown that
the position and depth of the Cosmic Dawn absorption line formed at 10<z<30,
due to the Wouthuysen-Field effect, is mainly defined by the spectral energy
distribution of the first sources of light. If reionization occurs at
$z_{ri}=7\pm1$, then the differential brightness temperature in the center of
this line is $\sim$80 mK. During the reionization, the emission with an
amplitude of $\sim$20 mK is possible. It is also shown that the temperature,
density, and degree of ionization of the baryonic component are decisive in
calculating the intensity of the 21-cm absorption/emission line from these
epochs. | Bohdan Novosyadlyj, Yurij Kulinich, Gennadi Milinevsky, Valerii Shulga | 2023-09-26T17:38:15Z | http://arxiv.org/abs/2309.15092v1 | Formation of the hydrogen line 21-cm in Dark Ages and Cosmic Dawn: dependences on cosmology and first light
###### Abstract
We analyze the formation of the redshifted hyperfine structure line 21-cm of hydrogen atom in the Dark Ages, Cosmic Dawn, and Reionization epochs. The evolution of the global differential brightness temperature in this line was computed to study its dependence on the values of cosmological parameters and physical conditions in the intergalactic medium. Variations of the depth of the Dark Ages absorption line at \(z\sim 80\) with variations of the cosmological parameters \(\Omega_{b}\), \(\Omega_{cdm}\), \(\Omega_{\Lambda}\), \(\Omega_{K}\) and \(H_{0}\) are studied. The standard model with post-Planck parameters predicts a value of the differential brightness temperature in the center of the absorption line \(\sim\)30-50 mK. The profile of this line can be quite another in the non-standard cosmological models, which include the annihilating or decaying dark matter, a primordial stochastic magnetic field, etc. It can be shallower or be an emission bump instead of an absorption trough. It is also shown that the position and depth of the Cosmic Dawn absorption line formed at \(10<z<30\), due to the Wouthuysen-Field effect, is mainly defined by the spectral energy distribution of the first sources of light. If reionization occurs at \(z_{ri}=7\pm 1\), then the differential brightness temperature in the center of this line is \(\sim\)80 mK. During the reionization, the emission with an amplitude of \(\sim\)20 mK is possible. It is also shown that the temperature, density, and degree of ionization of the baryonic component are decisive in calculating the intensity of the 21-cm absorption/emission line from these epochs.
keywords: cosmology: theory - large-scale structure of Universe - dark energy
## 1 Introduction
Recent data on massive galaxies and quasars at high redshifts have heightened interest in the early epochs when the first luminous objects in our Universe began to form. An important information channel about the state of baryonic matter in this period is the redshifted line 21-cm of neutral hydrogen (see reviews Barkana & Loeb (2001); Fan et al. (2006); Furlanetto et al. (2006); Bromm & Yoshida (2011); Pritchard & Loeb (2012)). The earliest signal from forming halos in the Dark Ages can be received in this spectral line as well (Iliev, 2002, 2003; Furlanetto & Oh, 2006; Shapiro, 2006; Kuhlen et al., 2006; Novosyadlyj et al., 2020). The physical conditions of hydrogen gas, its excitation and ionization states during the Dark Ages and Cosmic Dawn epochs in the standard cosmological models and scenarios of the first light sources formation are well studied. The known spectral features include the wide absorption lines redshifted to \(\sim 20\) MHz at \(z\sim 80\) and \(\sim 70-130\) MHz at \(z\sim 20-10\), and the emission line before complete reionization. The second absorption line is caused by the Wouthuysen-Field effect (Wouthuysen, 1952; Field, 1958, 1959) and is determined by the spectral energy distribution (SED) of the first sources of light (the first light). Non-contradictory models of the first light sources in the standard cosmological model predict a line depth that does not exceed \(\sim 250\) mK of brightness temperature (Cohen et al., 2017). The first and to date only detection of this line in the Experiment to Detect the Global Epoch of Reionization Signature experiment (EDGES) (Bowman et al., 2018) indicates an unusually shaped profile and an unexpectedly large depth centred on 78 MHz, which is \(\sim\)3-4 times deeper than expected in the standard cosmology. The proposed explanations go beyond standard cosmology and include additional mechanisms for cooling the baryons, excess radio background at high redshifts, viscous dark energy, and so on (Barkana, 2018; Ewall-Wice et al., 2018; Halder et al., 2022). Another explanation appeals to the challenge of measuring the useful signal against the huge foreground and of reducing the systematic errors (Hills et al., 2018). The recent non-detection of a signal from the Cosmic Dawn epoch in the Shaped Antenna measurement of the background RAdio Spectrum 3 (SARAS3) experiment Singh et al. (2022) supports the
last assumption: its spectrum does not show the feature found by Bowman et al. (2018) and rejects their best-fitting profile with 95.3% confidence. But SARAS3 (Singh et al., 2022) says nothing about the spectral feature of the Cosmic Dawn signal in the range of 55-85 MHz; therefore, the predictions of the signal in this range by the standard model remain relevant, and their measurement is even more urgent.
The most prominent spectral feature of the redshifted 21-cm line is the absorption line formed in the Cosmic Dawn when the scattering of \(Ly\alpha\)-radiation of the first sources of light affects the populations of the hyperfine structure levels of ground-state hydrogen (Field, 1959; Hirata, 2006). Theoretical aspects of its formation are comprehensively studied since it can bring us information about the first sources of light, the first stars, galaxies and phenomena related to their origin. The traditional approaches to the modeling of the \(Ly\alpha\)-coupling are based on the phenomenological connections between star and galaxy formation rates and the intensity of \(Ly\alpha\)-radiation (see, for example, the review by Furlanetto et al. (2006)). In this paper we analyse the dependence of the spectral features of the 21-cm line of the Dark Ages, Cosmic Dawn, and Reionization epochs on cosmological parameters and models of the first light in a non-traditional approach: we deduce the intensity of \(Ly\alpha\)-radiation for a given SED of the first light using the observational constraints on the Reionization epoch. By varying the time of appearance of the first light, the rate of increase of its intensity and the ratio of \(Ly\alpha\)-photons to ionizing ones, we estimate the possible spectral position and intensity of the 21-cm line taking into account the observational constraints on \(x_{HI}(z)\) at \(6\leq z\leq 20\) (Planck Collaboration, 2020a,b; Bouwens et al., 2015; Banados et al., 2018; Davies et al., 2018; Mason et al., 2018).
The outline of the paper is as follows. In Section 2 we describe the models of the energy distribution of the light from the first sources that appeared in the Cosmic Dawn, and the state of atomic hydrogen from cosmological recombination to reionization. In Section 3 we analyse the dependences of the position and depth of the Dark Ages absorption line on cosmological parameters and on additional heating and cooling of the baryonic component. In Section 4 we estimate the spectral and redshift position and the intensity of the 21-cm line depending on the SED of the first light and its evolution. The results are summarised in Section 5.
## 2 State of atomic hydrogen from cosmological recombination to reionization
Neutral hydrogen atoms were the dominant component after the Cosmological Recombination epoch and before the Reionization one. During the Dark Ages epoch their fraction is about 99.98%; it decreases when the UV radiation of the first sources of light appears in the Cosmic Dawn epoch and sharply decreases during the Reionization one1. The cosmological recombination is well studied both theoretically (Seager et al. (1999, 2000) and citations therein) and instrumentally in ground-based, stratospheric and space observations of the cosmic microwave background radiation. We know about the Dark Ages and Cosmic Dawn mainly from theory and have a few scenarios of forming the first light sources without direct observational support. The study of reionization has a long history (see overviews Choudhury (2022); Gnedin & Madau (2022) and citations therein). According to the currently popular models, the UV radiation of the first stars of the early galaxies reionized hydrogen atoms progressively throughout the entire Universe at redshift \(6<z<12\), while helium atoms have been reionized by hard UV and X-radiation of quasars at \(2<z<6\) (Planck Collaboration, 2020a; Gnedin & Madau, 2022). Figure 1 illustrates the evolution of \(x_{HI}\equiv n_{HI}/n_{H}\) and \(x_{HII}\equiv n_{HII}/n_{H}\), where \(n_{H}\equiv n_{HI}+n_{HII}\), from the beginning of hydrogen recombination at \(z=2000\) up to complete reionization at \(z=6\). The inaccuracy of the calculation of cosmological recombination using the code RecFast Seager et al. (1999, 2000) in the framework of the given cosmological model is not larger than 1-3%, and the current uncertainties of the reionization epoch are shown by the shaded zone, which is the \(2\sigma\) confidence range following from the Planck low-l polarization measurements (Planck Collaboration, 2020a,b). Such measurements are very demanding since the amplitude of the E-mode polarization power spectrum at low multipoles is lower by more than two orders of magnitude than the amplitude of the temperature anisotropy power spectrum. Other probes of the reionization epoch based on the spectral features of the most distant quasars and galaxies give \(x_{HII}\) values in the shaded range. They are presented in Fig. 2, where the Planck \(2\sigma\)-range of constraints on \(x_{HII}\) (Planck Collaboration, 2020a) is shown by the thin green solid lines, and the median value \(\overline{x}_{HII}(z)\) (Glazer et al., 2018) by the thick red solid line. The astrophysical observational data are shown also. There are data on \(x_{HII}(z)=1-x_{HI}(z)\) derived from the dark pixel statistics (squares, McGreer et al. (2015)), the gap/peak statistics (diamonds, Gallerani et al. (2008)), the damping wing absorption profiles in the spectra of quasars (QD-WAP) (circles, Schroeder et al. (2013); Greig et al. (2017); Mortlock et al. (2011); Davies et al. (2018); Banados et al. (2018); Greig et al. (2022)), and the redshift-dependent prevalence of \(Ly\alpha\) emitters (LAEs) (4-fold stars, Schenker et al. (2014); Mason et al. (2018, 2019); Ouchi et al. (2010)).
Footnote 1: Molecular hydrogen fractions are a few orders lower (Novosyadlyj et al. (2022) and citations therein) and can be omitted here from consideration.
In this paper, we analyse a few models of the first light with different SEDs, which provide the evolution of \(x_{HI}(z)\) shown in Figs. 1-2, and their influence on the absorption/emission line 21-cm of neutral hydrogen before and during reionization. To analyze the allowable level of illumination in the inter-proto-galaxy medium of the Cosmic Dawn epoch we assume that the sources of the first light are thermal. We consider here the thermal models of the first light described by the Planck function with temperature \(T_{fl}\), which is a function
| Model | \(T_{*}\) (K) | \(\alpha_{fl}\) | \(z_{fl}\) | \(a_{fl}\) | \(b_{fl}\) |
| --- | --- | --- | --- | --- | --- |
| fl1a | 5000 | \(5.0\cdot 10^{-8}\) | 0.2 | 5.0 | 0.7 |
| fl1b | 10000 | \(6.0\cdot 10^{-15}\) | 2.4 | 4.0 | 1.4 |
| fl1c | 20000 | \(1.0\cdot 10^{-19}\) | 2.5 | 3.15 | 1.4 |

Table 1: Parameters of models of the first light 1.
of redshift, and dilution coefficient \(\alpha_{fl}\). The spectral energy density of total radiation at some equidistance from them is as follows
\[J_{fl}(\nu) = \frac{4\pi}{c}\left[B(\nu;T_{CMB})+\sum_{i}\alpha_{fl}^{(i)}B(\nu;T_{fl}^{(i)})\right], \tag{1}\] \[T_{fl}^{(i)} = T_{*}^{(i)}\tanh\left[a_{fl}^{(i)}\left(\frac{1+z_{fl}^{(i)}}{1+z}\right)^{b_{fl}^{(i)}}\right], \tag{2}\]
where \(B(\nu,T)\) is the Planck function. The coefficients \(\alpha_{fl}\), \(a_{fl}\), \(b_{fl}\) and \(z_{fl}\) are fitting parameters chosen to obtain an \(x_{HII}(z)\) matching the observational data for a given \(T_{*}\).
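To make equations (1)-(2) concrete, the following sketch evaluates them numerically; it is illustrative only, the constants are in CGS units, the parameter tuple corresponds to model fl1b from Tab. 1, and it assumes that \(B(\nu;T_{CMB})\) is evaluated at \(T_{CMB}=T^{0}_{CMB}(1+z)\).

```python
import numpy as np

H_P, K_B, C = 6.626e-27, 1.381e-16, 2.998e10   # CGS: erg s, erg/K, cm/s
T_CMB0 = 2.7255                                 # K

def planck_B_nu(nu, T):
    """Planck function B_nu(T), erg s^-1 cm^-2 Hz^-1 sr^-1."""
    x = np.minimum(H_P * nu / (K_B * T), 700.0)  # clamp to avoid overflow; B_nu ~ 0 there
    return 2.0 * H_P * nu**3 / C**2 / np.expm1(x)

def T_fl(z, T_star, a_fl, b_fl, z_fl):
    """Effective temperature of the first light, equation (2)."""
    return T_star * np.tanh(a_fl * ((1.0 + z_fl) / (1.0 + z)) ** b_fl)

def J_fl(nu, z, components):
    """Equation (1): CMB plus diluted thermal components of the first light."""
    total = planck_B_nu(nu, T_CMB0 * (1.0 + z))          # assumes T_CMB = T_CMB0*(1+z)
    for alpha_fl, T_star, a_fl, b_fl, z_fl in components:
        total += alpha_fl * planck_B_nu(nu, T_fl(z, T_star, a_fl, b_fl, z_fl))
    return 4.0 * np.pi / C * total

# Model fl1b from Tab. 1: (alpha_fl, T_*, a_fl, b_fl, z_fl)
fl1b = [(6.0e-15, 10000.0, 4.0, 1.4, 2.4)]
print(J_fl(2.47e15, 15.0, fl1b))    # energy density near the Ly-alpha frequency at z = 15
```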
We consider three specific reionization z-tracks: early and late ones, which correspond to the high-z and low-z contours of the reionization range established by the Planck team (Planck Collaboration, 2020), and an intermediate z-track which corresponds to the median dependence \(x_{HII}(z)\) by Glazer et al. (2018).
Accurate computations of the hydrogen and helium ionization and thermal history are crucial for such tasks. We start from the early pre-recombination epoch when the Saha equations of recombination were applicable to all chemical components. Using the relevant basic kinetic equations, we compute the cosmological recombination using the RecFast model of an effective three-level atom. We apply this model down to \(z=200\). At lower \(z\), down to the completion of reionization, the number density of Lyman-series quanta is not large. Therefore, hydrogen photoionization from the ground level (case A) dominates (see Novosyadlyj et al. (2022)), and the kinetic equation for hydrogen ionization is simplified:
\[-(1+z)H\frac{dx_{HII}}{dz}=R_{HI}x_{HI}+C_{i}n_{i}x_{HI}-\alpha_{HII}n_{e}x_{HII}, \tag{3}\]
where \(\alpha_{HII}\) is the photorecombination rate, \(R_{HI}\!=\!R_{HI}(T_{CMB})+\alpha_{fl}R_{HI}(T_{fl})\) is the photoionization rate of hydrogen, \(C_{i}\) is collisional ionization rate by electron or/and proton, and \(n_{i}\) is the number density of corresponding particles.
We solve equation (3) for hydrogen, and a similar one for helium, numerically together with the equations for the expansion of the Universe and the energy balance of the baryonic component:
\[H=H_{0}\sqrt{\Omega_{r}(1+z)^{4}+\Omega_{m}(1+z)^{3}+\Omega_{K}(1+z)^{2}+\Omega_{\Lambda}(1+z)^{3(1+w_{de})}}, \tag{4}\] \[-\frac{3}{2}n_{tot}k_{B}(1+z)H\frac{dT_{b}}{dz}=\Gamma_{Cen}+\Gamma_{C_{fl}}+\Gamma_{phi}+\Gamma_{phdH_{2}}+\Gamma_{H^{-}H}-\Lambda_{ad}-\Lambda_{ff}-\Lambda_{phr}-\Lambda_{21cm}-\Lambda_{H_{2}}-\Lambda_{ex}-\Lambda_{H^{+}H}-\Lambda_{H^{-}e}-\Lambda_{H^{-}H}-\Lambda_{ci}-\Lambda_{cdH_{2}}-\Lambda_{dr}, \tag{5}\]
where \(\Gamma_{Cen}\) is the Compton heating by CMB due to free electrons, \(\Gamma_{C_{fl}}\) is the same by the first light, \(\Gamma_{phi}\) is the heating by the photoionization, \(\Gamma_{phdH_{2}}\) is the heating by the photodissociation of H\({}_{2}\) and HD, \(\Gamma_{H^{-}H}\) is the heating due to reactions H\({}^{-}\) + H \(\rightarrow\) H\({}_{2}\) + e, \(\Lambda_{ad}\) is the adiabatic cooling, \(\Lambda_{ff}\) is the cooling via bremsstrahlung (free-free) emission, \(\Lambda_{phr}\) is the recombination cooling, \(\Lambda_{21cm}\) is the cooling via excitation of hydrogen 21-cm line, \(\Lambda_{H_{2}}\) is the cooling due to collisional excitations of lines of H\({}_{2}\), \(\Lambda_{ex}\) is the cooling due to collisional excitation of HI, HeI and HeII, \(\Lambda_{H^{+}H}\) is the cooling by the reaction H\({}^{+}\) + H \(-\)\(>\) H\({}_{2}\) + \(\gamma\), \(\Lambda_{H^{-}e}\) is the cooling due to collisional deionization H\({}^{-}\) + e\(\rightarrow\) H + 2e, \(\Lambda_{H^{-}H}\) is the cooling due to collisional deionization H\({}^{-}\) + H \(\rightarrow\) 2H + e, \(\Lambda_{ci}\) is the cooling due to collisional ionization of HI, HeI and HeII, \(\Lambda_{cdH_{2}}\) is the cooling due to collisional dissociation of H\({}_{2}\), \(\Lambda_{dr}\) is the cooling due to dielectron recombination. Expressions for all heating/cooling functions and their sources are presented in Appendix, equations (11)-(36).
The publicly available codes RecFast2 and DDRIV13 have been used in the general code H21cm.f, which was designed for integrating the system of equations (3)-(5) in the expanding Universe over Cosmological Recombination, Dark Ages, Cosmic Dawn and Reionization epochs when the first light becomes important for the ionization and dissociation of atoms and molecules. The last equation (5) is used at \(z\!\leq\!850\), at higher redshifts \(T_{b}=T_{r}=T_{CMB}^{0}(1\!+\!z)\).
Footnote 2: [http://www.astro.ubc.ca/people/scott/recfast.html](http://www.astro.ubc.ca/people/scott/recfast.html)
Footnote 3: [http://www.netlib.org/slatec/src/ddriv.f](http://www.netlib.org/slatec/src/ddriv.f)
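A stripped-down illustration of this integration, for the hydrogen equation (3) alone, is sketched below. It is a sketch only: the recombination coefficient, the toy photoionization rate \(R_{HI}(z)\) and the fixed gas temperature are placeholders, whereas the real code couples helium, the energy balance (5) and the full rate network.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Fiducial flat cosmology with the post-Planck values quoted in Section 3
H0 = 67.36 * 1.0e5 / 3.086e24          # Hubble constant in s^-1
OMEGA_M, OMEGA_L, OMEGA_R = 0.315, 0.685, 9.2e-5
N_H0 = 1.9e-7                          # present-day hydrogen number density, cm^-3 (approx.)

def hubble(z):
    """Equation (4) for a flat model (curvature term neglected)."""
    return H0 * np.sqrt(OMEGA_R * (1 + z)**4 + OMEGA_M * (1 + z)**3 + OMEGA_L)

def alpha_rec(T):
    """Schematic recombination coefficient, cm^3 s^-1 (placeholder fit)."""
    return 2.6e-13 * (T / 1.0e4) ** -0.76

def dxdz(z, x, R_HI, T_gas):
    """Equation (3): photoionization versus recombination (collisions neglected)."""
    x_HII = x[0]
    n_e = N_H0 * (1 + z) ** 3 * x_HII
    dxdt = R_HI(z) * (1.0 - x_HII) - alpha_rec(T_gas) * n_e * x_HII
    return [-dxdt / ((1 + z) * hubble(z))]

# Toy photoionization rate switching on near z ~ 10 (purely illustrative)
R_toy = lambda z: 1.0e-13 * np.exp(-(z / 10.0) ** 4)

sol = solve_ivp(dxdz, (30.0, 6.0), [2.0e-4], args=(R_toy, 1.0e4), rtol=1e-6)
print(sol.y[0][-1])   # ionized fraction at z = 6
```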
In this section we consider the models of the first light with
Figure 1: Fractions of neutral and ionized hydrogen from the cosmological recombination at z=2000 up to complete reionization at z=6. The shaded range is the 2\(\sigma\) confidence range following from the Planck low-l polarization data (Planck Collaboration, 2020a,b).
Figure 2: Fraction of ionized hydrogen at the Cosmic Dawn and Reionization epochs from cosmological and astrophysical observational data.
a single given temperature for which the ionization follows the red line in Fig. 2. The parameters \(\alpha_{fl}\), \(a_{fl}\), \(b_{fl}\) and \(z_{fl}\) for three values of \(T_{*}\) are presented in Tab. 1. The SEDs of the radiation (CMB+first light) for these models are shown in Fig. 3 for \(z=30\), \(20\), \(10\) and \(6\). One can see that the lower \(T_{*}\) is, the larger is the energy density of the first light at \(\nu<\nu_{LyC}\) and the slower is its evolution. It is caused by the increase of the spectrum steepness above the hydrogen ionization potential (right vertical dotted line in each panel) for lower \(T_{fl}\), and, accordingly, a lower number of ionizing quanta. Therefore, there is a degeneracy: first light models with different SEDs and time evolutions can result in the same evolution track of the fraction \(x_{HI}(z)\) during reionization.
But the thermal history of the plasma can be quite different for these models of the first light. To compute it we integrate eq. (5) taking into account the main heating/cooling processes listed in the Appendix. Their contributions to the variation of temperature, \((dT/dz)_{i}\equiv 2\Gamma_{i}/\left(3n_{tot}k_{B}(1+z)H(z)\right)\) for heating and \((dT/dz)_{i}\equiv 2\Lambda_{i}/\left(3n_{tot}k_{B}(1+z)H(z)\right)\) for cooling, are shown in Fig. 4. It illustrates the well-known fact: during the Dark Ages epoch the temperature of baryonic matter is determined by the competition between adiabatic cooling and heating via Compton scattering of the CMBR on free electrons, while during the Cosmic Dawn and Reionization epochs the heating/cooling due to photoionization by the first light and photorecombination, the Compton scattering of the first light on free electrons, as well as free-free transitions and the inverse Compton effect become essential. As we see, for the same reionization history, they depend strongly on the SED of the first light. The evolution of the kinetic temperature of baryonic matter from the Dark Ages up to complete hydrogen reionization is shown in Fig. 5. One can see that it depends on the energy density and SED of the first light as well as on their temporal variation. That will be manifested in the position and intensity of the 21-cm line from the corresponding epochs.
## 3 Absorption line 21-cm from Dark Ages
The signal in the redshifted line 21-cm of neutral hydrogen from the Dark Ages could be a source of information about the hydrogen state since the line's intensity depends on the number density of neutral hydrogen fraction and the kinetic temperature of baryonic matter. At \(z\!<\!850\), the adiabatic cooling begins to prevail over the Compton thermalization by CMBR and the kinetic temperature decreases faster than the CMB temperature. It is noticeable in the logarithmic scale of Fig. 5 at \(z\!<\!400\). The spin temperature4, which defines the populations of hydrogen hyperfine structure levels at this time and results from the kinetic equation, is as follows (Field, 1958; Novosyadlyj et al., 2020):
Footnote 4: The excitation temperature of the hyperfine transition
\[T_{\rm s}=T_{b}\frac{T_{\rm cmb}+T_{0}}{T_{b}+T_{0}}=\frac{(1+x_{\rm c})T_{\rm cmb}T_{b}}{T_{b}+x_{\rm c}T_{\rm cmb}},\quad T_{0}=\frac{h_{P}\nu_{21}C_{10}}{k_{B}A_{10}}, \tag{6}\]
where \(x_{\rm c}\equiv T_{0}/T_{\rm cmb}\) is called the collision coupling parameter, \(\nu_{21}\) is the frequency of the 21-cm line, \(A_{10}\) is the Einstein coefficient of spontaneous transition, \(C_{10}\) is the collisional deactivation rate by electrons, protons and neutral hydrogen atoms, and \(h_{P}\) and \(k_{B}\) are the Planck and Boltzmann constants respectively. Since the hyperfine structure line frequency \(\nu_{21}\) is in the Rayleigh-Jeans range of the CMBR, it is convenient to use the brightness temperature \(T_{br}\) instead of the intensity: \(I_{\nu}=2k_{B}T_{br}\nu^{2}/c^{2}\). Since the useful signal is the difference of the redshifted intensities \(\delta I_{\nu}=(I_{\nu}-I_{\nu}^{\rm cmb})/(1+z)\) at any point of the sky, the radiative transfer equation gives the expression for the differential brightness temperature in the line (Madau et al., 1997; Zaldarriaga, 2004; Furlanetto et al., 2006; Pritchard & Loeb, 2012)
\[\delta T_{br}=\frac{T_{s}-T_{\rm cmb}}{1+z}(1-e^{-\tau_{\nu_{21}}}), \tag{7}\]
where the optical thickness \(\tau_{\nu_{21}}\) of the line-forming bulk is as follows (Field, 1959; Barkana & Loeb, 2001; Zaldarriaga, 2004)
\[\tau_{\nu_{21}}=\frac{3c^{3}h_{P}A_{10}n_{HI}}{32\pi k_{B}\nu_{21}^{2}T_{s}H(z)}=8.6\cdot 10^{-3}[1+\delta_{b}(z,{\bf n})]x_{HI}\] \[\times\left[\left(\frac{0.15}{\Omega_{m}}\right)\left(\frac{1+z}{10}\right)\right]^{\frac{1}{2}}\left(\frac{\Omega_{b}h}{0.02}\right)\left[\frac{T_{\rm cmb}(z)}{T_{s}(z)}\right] \tag{8}\]
Here \(\delta_{b}(z,{\bf n})\) is the density fluctuation of baryonic matter at redshift \(z\) and direction in the sky \({\bf n}\). Taking into account that the line profile at any \(z\) of Dark Ages is caused by thermal processes in baryonic gas and expansion of the Universe, and small optical thickness (\(\tau_{\rm v_{21}}\ll 1\)), we can obtain the expression for sky averaged (or global) signal in the hyperfine line 21-cm of neutral hydrogen (Madau et al., 1997; Zaldarriaga, 2004; Furlanetto et al., 2006; Pritchard & Loeb, 2012)
\[\delta T_{br}(z)=23x_{HI}(z)\left[\left(\frac{0.15}{\Omega_{m}}\right)\left( \frac{1+z}{10}\right)\right]^{\frac{1}{2}}\left(\frac{\Omega_{b}h}{0.02} \right)\left[1-\frac{T_{\rm cmb}(z)}{T_{s}(z)}\right] \tag{9}\]
Figure 3: The evolution of SEDs in the models f1a, f1b and f1c (from left to right) for the Cosmic Dawn and Reionization epochs with parameters presented in Tab. 1.
in units of mK. The multiplier \([1+\delta_{b}(z,\mathbf{n})]\), which is in eq. (8), becomes 1 after sky averaging. The line depth at any \(z\) is proportional to the difference \(T_{cmb}-T_{s}\), which is largest at \(z\approx 50-100\), where the collisional processes are most effective (Fig. 6). Expression (9) shows an explicit dependence on some cosmological parameters, such as \(\Omega_{m}\), \(\Omega_{b}\), and \(h\). An implicit dependence on these and other cosmological parameters is also contained in the values of \(x_{HI}\), \(T_{s}\) and the thermal history of baryonic matter.
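For orientation, equations (6) and (9) can be evaluated directly as in the sketch below; the gas temperature and the collisional de-excitation rate \(C_{10}\) are order-of-magnitude placeholders rather than the output of our thermal-history code.

```python
import numpy as np

A10, T21, T_CMB0 = 2.85e-15, 0.0682, 2.7255   # s^-1, h*nu_21/k_B in K, K

def spin_temperature(T_b, T_cmb, C10):
    """Equation (6), collisional coupling only; T0 = h*nu_21*C10/(k_B*A10)."""
    T0 = T21 * C10 / A10
    return T_b * (T_cmb + T0) / (T_b + T0)

def delta_T_br(z, x_HI, T_s, omega_m=0.315, omega_b=0.0493, h=0.6736):
    """Equation (9): sky-averaged differential brightness temperature in mK."""
    T_cmb = T_CMB0 * (1.0 + z)
    return (23.0 * x_HI * np.sqrt((0.15 / omega_m) * (1.0 + z) / 10.0)
            * (omega_b * h / 0.02) * (1.0 - T_cmb / T_s))

# Dark Ages illustration at z = 87 with a placeholder gas temperature and rate
z, T_b, C10 = 87.0, 140.0, 4.0e-12
T_s = spin_temperature(T_b, T_CMB0 * (1.0 + z), C10)
print(T_s, delta_T_br(z, 1.0, T_s))    # negative delta_T_br -> absorption
```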
We compute the ionization and thermal history of baryonic matter in the Dark Ages for different cosmological parameters, and the differential brightness temperature in the hyperfine line 21-cm of neutral hydrogen. The cosmological parameters were varied around the values from the final data release of the Planck Space Observatory (Planck Collaboration, 2020): the Hubble constant \(H_{0}=67.36\) km/s/Mpc, the mean density of baryonic matter in units of the critical density \(\Omega_{b}=0.0493\), the mean density of dark matter \(\Omega_{dm}=0.266\), the mean density of dark energy \(\Omega_{\Lambda}=0.6847\), and the current temperature of the cosmic microwave radiation \(T_{cmb}^{0}=2.7255\) K. The space curvature of the fiducial model is \(\Omega_{K}=0\). We also use the primordial helium abundance \(Y_{p}=0.2446\) (Peimbert et al., 2016) and the deuterium fraction \(y_{Dp}=2.527\cdot 10^{-5}\) (Cooke et al., 2018), which agree well with the posterior means of Planck Collaboration (2020).
Fig. 6 shows the sensitivity of the profile of the absorption line 21-cm formed in the Dark Ages to the variation of the cosmological parameters \(\Omega_{b}\), \(\Omega_{dm}\), \(\Omega_{\Lambda}\) and \(\Omega_{K}\), which strictly satisfy the cosmological equation \(\Omega_{b}+\Omega_{dm}+\Omega_{\Lambda}+\Omega_{K}=1\), as well as \(H_{0}\). First of all, we can state that variations of \(\Omega_{\Lambda}\) and \(\Omega_{K}\) with unchanged \(\Omega_{b}\) and \(\Omega_{dm}\) do not affect the line \(z\)-profile at all (not shown). This result is expected in the framework of standard cosmology. In contrast, variations of \(\Omega_{b}\), \(\Omega_{dm}\) and \(H_{0}\) change the depth and width of the \(z\)-profile of the line. Increasing \(\Omega_{b}\) increases the depth of the line (decreases \(\delta T_{br}^{min}\)) via a larger number density of absorbers and a larger effectiveness of collisional processes in the deactivation of the excited hyperfine level, which pull the spin temperature \(T_{s}\) more strongly towards \(T_{b}\). On the contrary, an increase of \(\Omega_{dm}\) decreases the depth of the line via the Hubble expansion rate \(H(z)\) in the denominator of eq. (8). It should be noted that a 10% variation of \(\Omega_{b}\) results in a \(\sim 17\%\) variation of the depth of the redshifted 21-cm absorption line, but a 10% variation of \(\Omega_{dm}\) results in about a 4.3% variation of the line depth. The variation of the line \(z\)-profile with the variation of the Hubble constant \(H_{0}\) is shown in the right panel of
Figure 5: The evolution of the temperature of baryonic matter \(T_{b}\) from the Dark Ages epoch, through the Cosmic Dawn one up to complete hydrogen reionization for the first light models with parameters presented in Tab. 1.
Fig. 6: an increase of \(H_{0}\) increases the depth of the line. Here a 10% variation of \(H_{0}\) results in a 28% variation of the minimal \(\delta T_{br}\). In all cases, the minima are in the range \(87\leq z\leq 93\) (\(\sim 15-16\) MHz). We deliberately made the variations of \(\Omega_{b}\), \(\Omega_{dm}\) and \(H_{0}\) large for illustrative purposes. The \(2\sigma\) uncertainties of these parameters, as determined from the Planck measurements (Planck Collaboration 2020a), are 4.5% for \(\Omega_{b}\), 5.2% for \(\Omega_{dm}\) and 1.6% for \(H_{0}\). The corresponding variations of the absorption line are 8.2%, 2.5% and 4.3% respectively. Figure 6 also illustrates the degeneracy of the dependences of the \(z\)-profile of the redshifted 21-cm absorption line on the cosmological parameters \(\Omega_{b}\), \(\Omega_{dm}\) and \(H_{0}\). Nevertheless, the detection of the redshifted 21-cm absorption line of neutral hydrogen at frequencies 10-30 MHz from the Dark Ages could be useful to refine the values of these cosmological parameters.
The Dark Ages absorption line 21-cm is also sensitive to possible additional mechanisms of heating/cooling which appear in non-standard cosmological models. We set their key parameters keeping the reionization history in the \(2\sigma\)-range of the median Planck \(x_{HII}(z)\) (Fig. 2). The results of computations of \(T_{b}\), \(T_{s}\) and \(\delta T_{br}\) for non-standard cosmological models with annihilating and decaying dark matter and a decaying stochastic background primordial magnetic field are presented in Fig. 7 (top row). We use the models of annihilating and decaying dark matter following Chluba (2010); Liu & Slatyer (2018) respectively. It was supposed that the mass of dark matter particles is \(m_{dm}=100\) GeV and the thermally averaged product of the cross-section and relative velocity of the annihilating DM particles5 is \(\langle\sigma v\rangle=10^{-29}\) cm\({}^{3}\)s\({}^{-1}\). The heating functions of baryonic matter by annihilating and decaying DM particles are presented in the Appendix, formulae (33) and (34) respectively. The presented results for models with annihilating dark matter particles are obtained for three values of the fraction of the released energy deposited into the intergalactic medium, \(f_{dmdm}=\)0.5%, 5% and 50%. The fractions of the deposited energy that go into heating and ionization of atoms were computed according to the prescription of Chluba (2010), which gives \(\approx 1/3\) for each deposition channel: heating, ionization and excitation. Such models of dark matter keep all properties of cold dark matter, so they match most observational data as well as the fiducial model does. The profiles of the 21-cm line in the models with annihilating and decaying dark matter are noticeably different, which is due to the fact that in the first case the heating function is proportional to \((1+z)^{6}\), and in the second one \(\propto(1+z)^{3}\).
Footnote 5: The rate of annihilation of DM particles is proportional to the product of undetermined parameters \(f_{dmdm}\langle\sigma v\rangle/m_{dm}\) (see (33) in Appendix).
The additional heating mechanisms, such as annihilation or decay of dark matter particles into gamma quanta, electron-positron or other charged particle-antiparticle pairs, pull the kinetic temperature of the baryonic gas and the spin temperature towards the CMB one at \(z<400\), which decreases the depth of the absorption line and shifts its bottom to higher redshift. At the highest values of the fraction of annihilating dark matter and the lowest values of the lifetime of decaying dark matter the absorption line turns into an emission one.
In the top right panel of Fig. 7 we show how the evolutions of \(T_{b}\), \(T_{s}\) and \(\delta T_{br}\) depend on the r.m.s. amplitude \(B_{0}\) of the primordial magnetic fields. We suppose they heat the baryonic matter in the post-recombination Universe due to the decaying of magnetic turbulence and the ambipolar diffusion following Sethi & Subramanian (2005); Chluba et al. (2015); Minoda et al. (2019). The heating functions for them, (35) and (36), are in the Appendix. The Planck Collaboration (2015) data constrain the r.m.s. amplitude of the primordial magnetic fields at the nano-Gauss level, therefore we set \(B_{0}=\)0.2, 0.4 and 0.6 nG. When \(B_{0}\) increases, the depth of the absorption line decreases; the line disappears when \(B_{0}\approx\)0.3 nG and turns into an emission one for larger \(B_{0}\). The amplitude of the emission 21-cm line for \(B_{0}=\)0.6 nG reaches \(\approx 30\) mK at \(z=170\). We can also note the similarity of the profiles of the 21-cm line in models with a decaying magnetic field and annihilating/decaying dark matter with the corresponding values of the parameters of these models.
The real Universe can consist of a few sorts of dark matter particles, primordial stochastic background magnetic field
and everything else that is already in the standard model. We present the evolution of \(T_{b}\), \(T_{s}\) and \(\delta T_{br}\) in the models with annihilating (\(f_{dmdm}=\)0.5%, 5%, 50%) and decaying (\(\tau_{dm}/f_{dmd}=5\cdot 10^{26}\) s) dark matter particles in the bottom left panel of Fig. 7, and with annihilating dark matter particles (\(f_{dmdm}=\)0.5%, 5%, 50%) and decaying of the primordial magnetic field with \(B_{0}=\)0.4 nG (bottom middle).
The general trend of changing the profile of the 21-cm line of neutral hydrogen, which is formed during the Dark Ages in the models with additional heating and ionization, compared to the profile in the standard model, is a decrease in the depth of the absorption line, or even a transition to emission, determined by the specific parameters of the models. The opposite trend can be expected in models with additional cooling.
In the bottom right panel of Fig. 7 the baryon matter temperature \(T_{b}(z)\), spin temperature \(T_{s}(z)\) and \(z\)-profiles of the redshifted 21-cm line are presented for additional cooling modeled as \((1+\beta)\Lambda_{ad}\) with \(\beta\)=0.0, 0.1, 0.2 and 0.3, where \(\Lambda_{ad}\) is the adiabatic cooling function (see Appendix). It shows that a 10% additional cooling of baryonic matter at \(30\leq z\leq 300\) results in a \(\sim 30\%\) increase of the absorption line depth. For example, the possibility of such excess cooling of the cosmic gas induced by its weak interaction with the dark matter (Munoz et al., 2015) has been discussed by Barkana (2018) as an explanation of the unexpectedly deep 21-cm absorption line detected by EDGES from the Cosmic Dawn epoch (Bowman et al., 2018). Another possible reason could be the decoupling of the temperature of the baryon component from the temperature of the CMB due to the decrease in the concentration of free electrons during the Dark Ages.
## 4 Hyperfine Line 21-cm in Cosmic Dawn
The populations of the hyperfine structure levels of the hydrogen ground state at \(30\leq z\leq 300\) are determined by CMB radi
Figure 7: Dependence of profile of H 21-cm line from Dark Ages on the heating and cooling in non-standard cosmology: heating by self-annihilating dark matter particles (top left), heating due to the decaying of dark matter particles (top middle), heating due to the decaying of primordial helical magnetic field (top right), heating by self-annihilating and decaying dark matter particles (bottom left), heating by self-annihilating dark matter particles and decaying of primordial magnetic field (bottom middle), cooling by interaction with cold dark matter particles (bottom right).
ation and collisions of neutral hydrogen with atoms, protons and electrons. As the Universe expands, the efficiency of collisions decreases, the spin temperature approaches the CMB temperature (\(T_{s}\to T_{cmb}\)), and the 21-cm absorption line disappears at \(z\sim 30\), which Figs. 6-7 illustrate well.
The appearance of the first extra light from the forming stars and galaxies marks the beginning of the Cosmic Dawn epoch. The Universe starts to be filled with \(Ly\alpha\) quanta, which change the populations of the hyperfine structure levels of the ground state (\(n=1\)) of atomic hydrogen through transitions between the hyperfine structure levels of the first excited level (\(n=2\)) in such a way that \(T_{s}\) approaches \(T_{b}<T_{cmb}\) again, and the absorption line reappears. This effect was first predicted by the Dutch physicist Siegfried A. Wouthuysen in 1952 (Wouthuysen, 1952) and studied in detail theoretically by the American astrophysicist George B. Field in the late 50s (Field, 1958, 1959). This phenomenon is called the Wouthuysen-Field effect or coupling and is described with different levels of detail in many reviews and textbooks, including those mentioned in the introduction. The SED of the radiation of the first sources evolves, increasing the ratio of the number density of Lyman-continuum (\(LyC\)) quanta to \(Ly\alpha\) ones, and X-ray radiation appears. This results in ionization and heating of hydrogen as shown in Figs. 2 and 5. The mainstream approach to describe these very complicated processes is based on phenomenological concepts such as the star formation efficiency, \(Ly\alpha\) efficiency, ionizing efficiency, X-ray efficiency, the minimal virial temperature of star-forming halos, and a few more like these. The dependencies of the global 21-cm signal on them are analysed by Pritchard & Loeb (2010); Mirocha et al. (2013, 2015); Cohen et al. (2017); Monsalve et al. (2017, 2018, 2019).
In this section we study the dependencies of the global 21-cm signal from the Cosmic Dawn epoch based on physical approaches and parameters: the averaged energy distribution of the first light, the mean energy density of \(Ly\alpha\) and ionizing \(LyC\) radiation, and the mean number density of neutral hydrogen \(n_{HI}\) which match existing observations. Since the first sources of light were protostellar halos, then the first stars and the first globular clusters, it is natural to assume the SED of the first light to be a sum of thermal distributions with some effective temperatures \(T_{fl}^{(i)}\) which depend on redshift, as proposed in Section 2, eqs. (1)-(2).
We compute the differential brightness temperature for the fiducial cosmological model and different models of the first light using eq. (9) with spin temperature
\[T_{s}^{-1}=\frac{T_{cmb}^{-1}+x_{c}T_{b}^{-1}+x_{\alpha}T_{c}^{-1}}{1+x_{c}+x_{\alpha}},\quad x_{\alpha}\equiv\frac{8\pi c^{2}\Delta\nu_{\alpha}}{9A_{10}\nu_{\alpha}^{2}}S_{\alpha}J_{\alpha}, \tag{10}\]
where \(x_{\alpha}\) is the \(Ly\alpha\) coupling parameter, \(\nu_{\alpha}\) is the frequency of the \(Ly\alpha\) line, \(\Delta\nu_{\alpha}\) is its half-width, \(S_{\alpha}\) is the scattering function of \(Ly\alpha\) quanta, \(J_{\alpha}\) is their energy density, and \(T_{c}\) is the colour temperature in the line. \(J_{\alpha}\) is computed using eqs. (1)-(2), while \(S_{\alpha}\) and \(T_{c}\) are computed using the analytic approximation formulae (40)-(42) from Hirata (2006). They approximate the numerical results with an accuracy of \(\sim 1\%\) in the range of temperatures \(T_{b}\geq 2\) K, \(T_{s}\geq 2\) K and Gunn-Peterson optical depth \(10^{5}\leq\tau_{GP}\leq 10^{7}\), where \(\tau_{GP}=2.08\cdot 10^{4}x_{\rm HI}(1+z)^{3/2}\) for the fiducial cosmological model. The last formula shows that at the redshift of complete reionization, \(z\approx 6\), the accuracy of the approximation is worse, but there the line disappears since \(x_{HI}\to 0\).
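Schematically, the Wouthuysen-Field coupling of equation (10) acts as in the sketch below; the coupling values \(x_{c}\) and \(x_{\alpha}\) are placeholders (in the full calculation \(x_{\alpha}\) follows from \(J_{\alpha}\) and \(S_{\alpha}\)), and \(T_{c}\simeq T_{b}\) is assumed.

```python
T_CMB0 = 2.7255

def spin_temperature_WF(z, T_b, x_c, x_a, T_c=None):
    """Equation (10). x_c and x_a are the collisional and Ly-alpha coupling
    parameters; the colour temperature T_c is close to T_b in the relevant regime."""
    T_cmb = T_CMB0 * (1.0 + z)
    if T_c is None:
        T_c = T_b                      # assume T_c -> T_b after many scatterings
    inv_Ts = (1.0 / T_cmb + x_c / T_b + x_a / T_c) / (1.0 + x_c + x_a)
    return 1.0 / inv_Ts

# Cosmic Dawn illustration with placeholder couplings: T_s is pulled from
# T_cmb ~ 49 K towards T_b = 7 K as the Ly-alpha coupling x_a grows.
z, T_b = 17.0, 7.0
for x_a in (0.0, 1.0, 5.0, 20.0):
    print(x_a, spin_temperature_WF(z, T_b, x_c=0.05, x_a=x_a))
```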
Let us suppose that the SED of the first light is described by a single Planck function with effective temperature \(T_{fl}(z;T_{*},a_{fl},b_{fl},z_{fl})\) and dilution coefficient \(\alpha_{fl}\) in (1)-(2). The parameters \(a_{fl},\,b_{fl},\,z_{fl}\) and \(\alpha_{fl}\) are defined for a given \(T_{*}\) in such a way as to obtain the median value of \(x_{HII}(z)\) (Glazer et al., 2018), shown by the thick red solid line in Fig. 2. They are presented in Tab. 1 for \(T_{*}=\)5000 K, 10000 K, 20000 K. The evolution of the SED for such toy models of the first light is shown in Fig. 3. The slow evolution of the energy density of the first light in the model fl1a means that the first sources must appear in large numbers at very large redshifts, which looks unrealistic in the framework of standard cosmology. In Fig. 8 we present the evolution of \(T_{b}\), \(T_{s}\) (top panel), and the differential brightness temperature in the redshifted 21-cm line \(\delta T_{br}\) (bottom panel) for these three models of the first light. One can see how different SEDs of the first light result in quite different profiles of \(\delta T_{br}(z)\). To explain them, we present in Fig. 9 the evolution of the number density of \(Ly\alpha\) quanta, \(N_{Ly\alpha}\), and the number of ionizations per second by \(LyC\) quanta, \(N_{LyC}\). The absorption line starts to form (left wing) when the number density of \(Ly\alpha\) quanta reaches some critical value, and starts to disappear (right wing) when \(LyC\) quanta heat the baryonic matter and \(T_{b}\to T_{cmb}\). When \(T_{b}>T_{cmb}\), \(Ly\alpha\) quanta pull \(T_{s}\) to \(T_{b}\), which results in the appearance of an emission line.
Now we suppose that the SED of the first light is described by two Planck functions with effective temperatures \(T_{fl}^{(i)}(z;T_{*}^{(i)},a_{fl}^{(i)},b_{fl}^{(i)},z_{fl}^{(i)})\) and dilution coefficients \(\alpha_{fl}^{(i)}\) with \(i=1,2\) in (1)-(2). Here we put \(T_{*}^{(1)}=5000\) K and \(T_{*}^{(2)}=20000\)
Figure 8: Evolution of spin (top panel) and differential brightness (bottom panel) temperatures in the redshifted hyperfine line 21-cm of atomic hydrogen in the f1a, f1b and f1c models of the first light with parameters presented in Tab. 1.
K. We define the values of the remaining parameters to obtain the early, middle, and late reionization, as shown by the two turquoise lines and the red line in Fig. 2. They are presented in Tab. 2. The SEDs of radiation (1) for these models of the first light at the Cosmic Dawn and Reionization epochs are shown in Fig. 10. As in the previous case, the very slow evolution of the energy density of the first light in the model fl2a means that the first sources must appear in large numbers at very large redshifts, which looks unrealistic in the framework of standard cosmology. Evolutions of the spin and differential brightness temperatures in the redshifted hyperfine line 21-cm of atomic hydrogen in the fl2a, fl2b and fl2c models of the first light with parameters presented in Tab. 2 are shown in Fig. 11. We can see that the absorption lines in these models appear at \(z\)=17.7 (at the central frequency 76 MHz) with \(\delta T_{br}^{min}=\)-0.045 K, \(z=\)10.8 (120 MHz) with \(\delta T_{br}^{min}=\)-0.089 K and \(z=\)7.3 (170 MHz) with \(\delta T_{br}^{min}=\)-0.063 K, respectively. Much weaker emission lines are at \(z\)=9.7 (\(\delta T_{br}^{max}=\)0.013 K), \(z=\)7.8 (\(\delta T_{br}^{max}=\)0.0093 K) and at \(z\)=6.6 (\(\delta T_{br}^{max}=\)0.0083 K). The number densities \(N_{Ly\alpha}\) of \(Ly\alpha\)-photons and the numbers of ionizations of hydrogen per second \(N_{LyC}\) by \(LyC\)-photons are presented in Fig. 12 for these models of the first light. Together with the evolution of \(T_{b}\) in the top panel of Fig. 11 they explain the appearance of the absorption and emission lines caused by Wouthuysen-Field coupling.
It should be noted that line profiles similar to those obtained here can be found in Pritchard & Loeb (2010); Mirocha et al. (2013, 2015); Cohen et al. (2017); Monsalve et al. (2017, 2018, 2019) for other parameterizations of the first light models.
The model of the first light (2) used here has enough degrees of freedom to simulate different histories of the first light and different \(N_{Ly\alpha}/N_{LyC}\) ratios; however, we were unable to obtain a global signal in the 21 cm line larger than 100 mK in the framework of standard cosmology. This can be explained by the observational constraints on reionization and by the limited number of models with other values of the parameter \(T_{\ast}\). Only with additional cooling can we obtain a signal of \(\sim 200\) mK, which, however, is still far from the announced result of \(\sim 530\) mK obtained in the EDGES experiment (Bowman et al., 2018) at the centre frequency of 78 MHz, which corresponds to \(z\approx 17\). The results of the SARAS3 experiment, published last year (Singh et al., 2022), however, rejected the EDGES best-fit profile at the \(2\sigma\) confidence level. Independent arbitration experiments are desirable. Some of them are being implemented now, while others are planned.
## 5 Conclusion
We analysed the spectral features of radiation in the range 5-200 MHz formed by the hyperfine structure of the ground level of hydrogen atoms during the Dark Ages in the range of redshift \(30<z<300\), and by the ground and first excited levels during the Cosmic Dawn and Reionization epochs in the range of redshift \(6<z<30\). In the \(\Lambda\)CDM model with post-Planck parameters the first such feature is the broad absorption line at the frequency of 16 MHz (\(z=87\)) with a depth \(\delta T_{br}=38.6\) mK and a full width at half maximum (FWHM) of 23 MHz. The depth and FWHM show noticeable dependence on cosmological parameters, such as baryon and dark matter density parameters, \(\Omega_{b}\) and \(\Omega_{dm}\), and Hubble constant \(H_{0}\). So, its detection can be used to refine the values of these cosmological parameters. The profile of this line is also sensitive to additional mechanisms of cooling and heating of baryonic gas in non-standard cosmology and can be used for its testing.
The second crucial spectral feature is the absorption line in the frequency range \(68<\nu<180\) MHz (\(7<z<20\)) with a depth from 0 up to \(\sim 80\) mK and FWHM\(\sim\)15-40 MHz, which is caused by the Wouthuysen-Field coupling between the scattering of \(Ly\alpha\) radiation of the first sources on neutral hydrogen atoms and the populations of their hyperfine levels. The position, depth, and FWHM drastically depend on the SED of the first light and its time history. So, detecting this line is very important for studying the first light sources.
The third spectral feature, which appears due to the Wouthuysen-Field effect and heating of the baryonic matter by UV radiation during the Reionization epoch, is the emission line 21 cm redshifted to meter wavelengths. This emission frequency is in the range \(130<\nu<190\) MHz, and the amplitude is below 20 mK. Its disappearance is caused by the complete reionization of hydrogen atoms. The detection of an emission line 21-cm redshifted to 1.5 - 2.5 m can pro
| Model | \(T_{*}\) (K) | \(\alpha_{fl}\) | \(z_{fl}\) | \(a_{fl}\) | \(b_{fl}\) |
| --- | --- | --- | --- | --- | --- |
| fl2a | 5000 | \(1.0\cdot 10^{-11}\) | 4.6 | 6.0 | 2.5 |
|      | 20000 | \(8.0\cdot 10^{-19}\) | 4.8 | 5.8 | 2.5 |
| fl2b | 5000 | \(1.0\cdot 10^{-10}\) | 5.8 | 6.0 | 2.4 |
|      | 20000 | \(4.5\cdot 10^{-19}\) | 3.8 | 6.0 | 3.8 |
| fl2c | 5000 | \(1.0\cdot 10^{-11}\) | 4.6 | 6.0 | 2.5 |
|      | 20000 | \(8.0\cdot 10^{-19}\) | 4.8 | 2.5 | 5.0 |

Table 2: Parameters of models of the first light 2.
Figure 9: Number densities of \(N_{Ly\alpha}\)\(Ly\alpha\)-photons and number of ionizations of hydrogen per second \(N_{LyC}\) by \(LyC\)-photons for Cosmic Dawn and Reionization epochs models of the first light with parameters presented in Tab. 1.
vide important information about the progress of hydrogen reionization in the final stage.
The dependences of the amplitudes and FWHM of these lines on the parameters of the cosmological and first-light models are strongly degenerate. Detection of all three features by tomography of the Dark Ages, Cosmic Dawn, and Reionization epochs, in the 21 cm line, would significantly reduce it. This task is a challenge even for modern advanced telescopes, receivers, and technologies for extracting a useful signal from the prevailing noise background.
We have modeled the evolution of the SED of the first light by thermal sources with different effective temperatures and dilution coefficients. Since most cosmological scenarios of galaxy formation admit the succession Pop III - Pop II - globular clusters - dwarf galaxies, or the like, such an assumption makes sense. The results and conclusions obtained in such models of the first light are worthy of attention: i) the flatter the radiation energy distribution in the Lyman continuum, the lower are the radiation energy density, the temperature of the baryon gas, and the concentration of Lyman-alpha quanta, and therefore the smaller is the depth of the absorption line in the frequency range of 45-200 MHz, up to its complete absence (Fig. 8); ii) the amplitude of the absorption line in this frequency range does not exceed 100 mK (Fig. 11) if the observational limits on the reionization z-region obtained in the Planck experiment (Planck Collaboration 2020a) are taken into account; iii) an emission redshifted 21-cm line with an amplitude below 20 mK is possible in the Reionization epoch when \(0.2<x_{HII}<0.99\).
## Acknowledgements
This work was supported by the International Center of Future Science and College of Physics of Jilin University (P.R.China), and the project of Ministry of Education and Science of Ukraine "Modeling the luminosity of elements of the large-scale structure of the early universe and the rem
Figure 11: Evolution of spin (top panel) and differential brightness (bottom panel) temperatures in the redshifted hyperfine line 21-cm of atomic hydrogen in the f2fa, f2b and f2c models of the first light with parameters presented in Tab. 2.
Figure 12: Number densities of \(N_{Ly\alpha}\)\(Ly\alpha\) photons and number of ionizations of hydrogen per second \(N_{LyC}\) by \(LyC\)-photons for Cosmic Dawn and Reionization epochs with parameters presented in Tab. 2.
Figure 10: The models of SED of radiation f2fa, f2b and f2c (from left to right) for Cosmic Dawn and Reionization epochs with parameters presented in Tab. 2.
nants of galactic supernovae and the observation of variable stars" (state registration number 0122U001834).
## Data availability
The data underlying this article will be shared on reasonable request to the corresponding author Bohdan Novosyadlyj ([email protected]).
## Appendix A Heating/cooling functions
Heating due to Compton scattering of thermal radiation with temperature \(T_{r}\) on free electrons (Seager et al. (1999); Weyman (1965)):
\[\Gamma_{C} = \frac{4\sigma_{T}k_{B}a_{r}n_{e}}{m_{e}c}T_{r}^{4}\left(T_{r}-T_{b }\right)\quad\mathrm{erg/cm^{3}s}. \tag{11}\]
Adiabatic cooling (Seager et al. (1999); Peebles (1971, 1993)):
\[\Lambda_{ad}=-3n_{tot}k_{B}T_{b}H(z)\left(1+\frac{1}{3}\frac{d\ln{(1+\delta)}}{dz}\right)\quad\mathrm{erg/cm^{3}s}. \tag{12}\]
Bremsstrahlung (free-free) emission (Shapiro & Kang (1987); Spitzer (1978)):
\[\Lambda_{ff} = 1.426\cdot 10^{-27}\sqrt{T_{b}}\left[0.79464+0.1243\log\left(T_{b}/Z_{i}^{2}\right)\right]Z_{i}^{2}n_{e}n_{i}\quad\mathrm{erg/cm^{3}s}. \tag{13}\]
Heating by photoionization (Anninos et al. (1997); Osterbrock (1974)):
\[\Gamma_{pli}=4\pi n_{i}\int_{V_{\mathrm{t}}}^{10V_{\mathrm{t}}} \sigma_{i}(v)B_{V}\frac{V_{i}-V}{V}d\nu\quad\mathrm{erg/cm^{3}s},\quad\sigma_{ HeII}(\nu)=1.58\cdot 10^{-18}(\alpha_{HeII}^{2}+1)^{-4}e^{4-\frac{4}{4n_{HeII} (\alpha_{HeII})}}/(1-e^{-\frac{12\pi}{n_{HeII}}})\quad\mathrm{cm^{2}}, \tag{14}\] \[\sigma_{HI}(\nu)=6.3\cdot 10^{-18}(\alpha_{HI}^{2}+1)^{-4}e^{4- \frac{4}{4n_{HeII}(\alpha_{HeII})}}/(1-e^{-\frac{2\pi}{4\mu_{H}}}),\quad\sigma _{HeI}(\nu)=7.42\cdot 10^{-18}\left[1.66(\alpha_{HeI}^{2}+1)^{-2.05}-0.66(\alpha_{HeI}^ {2}+1)^{3.05}\right]\quad\mathrm{cm^{2}}.\]
Recombination cooling (Anninos et al. (1997); Black (1981); Spitzer (1978)):
\[\Lambda_{plr}^{(HII)} = 8.7\cdot 10^{-27}\sqrt{T_{b}}\left(T_{b}/10^{3}\right)^{-0.2} \left[1+\left(T_{b}/10^{6}\right)^{0.7}\right]^{-1}n_{e}n_{HII}\quad\mathrm{ erg/cm^{3}s}, \tag{15}\] \[\Lambda_{plr}^{(HeII)} = \left(1.55\cdot 10^{-26}T_{b}^{0.3647}+1.24\cdot 10^{-13}T_{b}^{-1.5}\left[1+0.3e^{-94000/T_{b}}\right]e^{-470000/T_{b}}\right)n_{e}n_{HeII}\quad \mathrm{erg/cm^{3}s},\] (16) \[\Lambda_{plr}^{(HeIII)} = 3.48\cdot 10^{-28}\sqrt{T_{b}}\left(T_{b}/10^{3}\right)^{-0.2} \left[1+\left(T_{b}/10^{6}\right)^{0.7}\right]^{-1}n_{e}n_{HeII}\quad\mathrm{ erg/cm^{3}s}. \tag{17}\]
Cooling via excitation of hydrogen 21-cm line (Seager et al. (1999)):
\[\Lambda_{21cm} = h_{p}\nu_{10}\ast(n_{He_{b}}C_{01}-n_{He_{b}}C_{10})\quad \mathrm{erg/cm^{3}s}. \tag{18}\]
Heating in reactions H\({}^{-}\)+H\(\rightarrow\)H\({}_{2}\)+e (Shapiro & Kang (1987); Hollenbach & McKee (1979)):
\[\Gamma_{H^{-}H} = 1.3\cdot 10^{-9}n_{H^{-}}n_{H^{-}}n_{H\overline{H}}\frac{3.53n_{HI} }{n_{H^{-}}+n_{\sigma}}\quad\mathrm{erg/cm^{3}s},\quad n_{cr}=\frac{10^{6}/ \sqrt{T}}{1.4\pi n\exp{[-(400/T)^{2}]}+1.4\pi n_{\mathrm{H}}\exp{[-12000/(T+120 0)]}}\ \mathrm{cm^{-3}}. \tag{19}\]
Cooling due to collisional excitation of lines of H\({}_{2}\) (Seager et al. (1999)):
\[\Lambda_{H_{2}}=10^{\alpha_{H}+\alpha_{H}+\alpha_{H}}e^{2}+\mathrm{erg}^{3}+ \mathrm{erg}^{3}+\mathrm{erg}^{3},\quad x\equiv\lg{(T/1000)}\quad\mathrm{erg/cm^ {3}s}. \tag{20}\]
Heating by photodissociation H\({}_{2}\) and HD (Shapiro & Kang (1987); Coppola et al. (2011)):
\[\Gamma_{pldH_{2}}=6.41\cdot 10^{-13}(n_{H2}+n_{HD})(k_{30}+k_{31})\quad \mathrm{erg/cm^{3}s},\quad k_{30}=6.46\cdot 10^{8}\left[\alpha_{f1}e^{-165530/T_{f1}}+ \alpha_{f12}e^{-165530/T_{f2}}\right], \tag{21}\] \[k_{31}=1.27\cdot 10^{8}\left[\alpha_{f11}T_{f11}^{0.084}e^{-159600/T_{f1 }}+\alpha_{f12}T_{f12}^{0.084}e^{-159600/T_{f2}}\right]. \tag{22}\]
Cooling by the reaction H\({}^{+}\) + H \(\rightarrow\) H\({}_{2}^{+}\) + \(\gamma\) (Shapiro & Kang (1987)):
\[\Lambda_{pH} = 3.83\cdot 10^{-39}T_{b}^{1.8}\,n_{HI}n_{HII}\quad\mathrm{erg/cm^{3}s}\quad\mathrm{for}\quad T_{b}\leq 6700\,\mathrm{K}, \tag{23}\] \[\Lambda_{pH} = 1.2\cdot 10^{-31}(T_{b}/56200)^{-0.6657\lg{(T_{b}/56200)}}\,n_{HI}n_{HII}\quad\mathrm{erg/cm^{3}s}\quad\mathrm{for}\quad T_{b}>6700\,\mathrm{K}. \tag{24}\]
Cooling due to collisional excitation of HI, HeI and HeII (Anninos et al. (1997), Black (1981); Cen (1992)):
\[\Lambda_{\mathrm{c-ex}} = 7.5\cdot 10^{-19}\left(1+\sqrt{T_{b}/10^{5}}\right)^{-1}e^{-118348/T_{b}}n_{e}n_{HI}+9.1\cdot 10^{-27}\left(1+\sqrt{T_{b}/10^{5}}\right)^{-1}T_{b}^{-0.1687}e^{-13179/T_{b}}n_{e}^{2}n_{HeI} \tag{25}\] \[+ 5.54\cdot 10^{-17}\left(1+\sqrt{T_{b}/10^{5}}\right)^{-1}T_{b}^{-0.397}e^{-473638/T_{b}}n_{e}n_{HeII}\quad\mathrm{erg/cm^{3}s}. \tag{26}\]
Cooling due to collisional deionization H\({}^{-}\) + e \(\rightarrow\) H + 2e (Shapiro & Kang (1987)):
\[\Lambda_{H^{-}e} = 4.801\cdot 10^{-24}T_{b}e^{-8750/T_{b}}n_{e}n_{H^{-}}\quad\mathrm{erg/cm^{3}s}. \tag{27}\]
Cooling due to collisional deionization H\({}^{-}\) + H \(\rightarrow\) 2H + e (Shapiro & Kang (1987)):
\[\Lambda_{H^{-}H} = 6.368\cdot 10^{-32}T_{b}^{2.17}e^{-8750/T_{b}}n_{HI}n_{H^{-}}\quad \mathrm{erg/cm^{3}s}. \tag{28}\]
Cooling due to collisional ionization of HI, HeI and HeII (Anninos et al. (1997); Shapiro & Kang (1987); Cen (1992)):
\[\Lambda_{cl} = 2.77\cdot 10^{-32}\sqrt{T_{b}}\left(1+\sqrt{T_{b}/10^{5}}\right)^{-1} e^{-157809.1/T_{b}}n_{e}n_{HI}+3.7\cdot 10^{-32}\sqrt{T_{b}}\left(1+\sqrt{T_{b}/10^{5}} \right)^{-1}e^{-285335.4/T_{b}}n_{e}n_{HeI} \tag{29}\] \[+ 4.32\cdot 10^{-32}\sqrt{T_{b}}\left(1+\sqrt{T_{b}/10^{5}}\right)^{- 1}e^{-631515/T_{b}}n_{e}n_{HeII}+5.01\cdot 10^{-27}T_{b}^{-0.1687}\left(1+ \sqrt{T_{b}/10^{5}}\right)^{-1}e^{-55338/T_{b}}n_{e}^{2}n_{HeII}\quad\mbox{erg/ cm${}^{3}$s}.\]
Cooling due to collisional dissociation of H\({}_{2}\) (Shapiro & Kang (1987); Cen (1992)):
\[\Lambda_{clH_{2}}=3.14\cdot 10^{-21}n_{H_{2}}\left[e^{-102000/T_{b}}n_{e}+2.74\left(10.7e^{17950/T_{b}}\right)^{1+n_{HI}/n_{cr}^{(HIH_{2})}}n_{HI}+3.0\left(11e^{16200/T_{b}}\right)^{1+n_{H_{2}}/n_{cr}^{(H_{2}H_{2})}}n_{H_{2}}\right]\quad\mathrm{erg/cm^{3}s}, \tag{30}\]
\[n_{cr}^{(HIH_{2})}=10^{4.00-0.416\lg{(T/10^{4})}-0.327(\lg{(T/10^{4})})^{2}},\quad n_{cr}^{(H_{2}H_{2})}=10^{4.845-1.3\lg{(T/10^{4})}+1.62(\lg{(T/10^{4})})^{2}}\quad\mathrm{cm^{-3}}. \tag{31}\]
Cooling due to dielectron recombination (Shapiro & Kang (1987)):
\[\Lambda_{dr} = 1.24\cdot 10^{-13}T_{b}^{-1.5}e^{-470000/T_{b}}\left[1+0.3e^{-94000/T_{b}}\right]n_{e}n_{HeII}\quad\mathrm{erg/cm^{3}s}. \tag{32}\]
Heating due to dark matter annihilation (Chluba (2010)):
\[\Gamma_{dmdm}=2.4\cdot 10^{-36}f_{dmdm}g_{h}n_{H}\left(1+z\right)^{3}\left[\frac{m_{dm}}{100\,\mathrm{GeV}}\right]^{-1}\left[\frac{\Omega_{dm}h^{2}}{0.13}\right]^{2}\left[\frac{\langle\sigma v\rangle}{3\cdot 10^{-26}\,\mathrm{cm^{3}\,s^{-1}}}\right]\quad\mathrm{erg/cm^{3}s}, \tag{33}\]
where \(x_{HI}=n_{HI}/n_{H}\), \(x_{HeII}=n_{HeII}/n_{He}\), \(f_{He}=n_{He}/n_{H}\), \(m_{dm}\) is the mass of the DM particle, \(\langle\sigma v\rangle\) is the thermally averaged product of the cross-section and relative velocity of the annihilating DM particles, \(f_{dmdm}\) is the fraction of the released energy which is deposited into the intergalactic medium, and \(g_{h}=(1+2x_{HI}+f_{He}(1+2x_{HeII}))/3(1+f_{He})\) is the fraction of the deposited energy that goes into the heating of the gas.
Heating due to decay of dark matter (Liu & Slatyer (2018)):
\[\Gamma_{dmdm}=2.2\cdot 10^{-9}f_{dmdm}g_{h}n_{H}\left(1+z\right)^{3}\left[\frac{\Omega_{dm}h^{2}}{0.13}\right]^{2}\quad\mathrm{erg/cm^{3}s}, \tag{34}\]
where \(f_{dmdm}\) is the fraction of the released energy which is deposited into the intergalactic medium, \(\tau_{dmd}\) is the decay lifetime, and \(g_{h}\) is the fraction of the deposited energy that goes into the heating of the gas, taken from Chluba (2010).
Heating due to decaying turbulence of the primordial magnetic field (Sethi & Subramanian (2005), Chluba et al. (2015)):
\[\Gamma_{mfdt} = 1.5\rho_{mf}H(z)[f_{D}(z)]^{n_{H}+3}\frac{ma^{m}}{(a+1.5\mbox{ln} ((1+z_{cr})/(1+z)))^{m+1}}\quad\mbox{for}\quad z<z_{cr},\] \[\Gamma_{mfdt} = 1.5\rho_{mf}H(z)\frac{m}{a}[f_{D}(z)]^{n_{H}+3}\exp\left\{-\frac{ (z-z_{cr})^{2}}{5000}\right\}\left(\frac{1+z_{cr}}{1+z}\right)^{4}\quad\mbox{ for}\quad z\geq z_{cr}, \tag{35}\]
where \(z_{cr}=1088\), \(\rho_{mf}=3.98\cdot 10^{-21}\left(B_{0}/\mathrm{nG}\right)^{2}(1+z)^{4}\) J\(\cdot\)m\({}^{-3}\), \(n_{B}=-2.9\), \(a=\ln(1+t_{d}/t_{\rm rec})\), \(m\equiv 2(n_{B}+3)/(n_{B}+5)\), \(t_{d}/t_{\rm rec}=14.8/(B_{0}k_{D})\), \(k_{D}=(2.89\cdot 10^{4}h)^{1/(n_{B}+5)}B_{\lambda}^{-2/(n_{B}+5)}k_{\lambda}^{(n_{B}+3)/(n_{B}+5)}\) Mpc\({}^{-1}\), \(\lambda=1\) Mpc, \(B_{\lambda}=B_{1\,\mathrm{Mpc}}=B_{0}\), \(k_{\lambda}=k_{1\,\mathrm{Mpc}}=2\pi\) Mpc\({}^{-1}\). The factor \([f_{D}(z)]^{n_{B}+3}\) describes the energy loss by the primordial magnetic field (see Minoda et al. (2019)), which we approximate as \([f_{D}(z)]^{n_{B}+3}\simeq 0.6897525+0.2944149\cdot 10^{-3}z-0.3805730\cdot 10^{-6}z^{2}+0.2259742\cdot 10^{-9}z^{3}+0.6354026\cdot 10^{-13}z^{4}\) for \(z<1178\) and the fixed values of \(n_{B}\) and \(k_{D}\) given above (for \(z\geq 1178\) we set \([f_{D}(z)]^{n_{B}+3}\equiv 1\)).
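The chain of definitions above can be evaluated directly; the following minimal sketch computes the auxiliary quantities \(k_{D}\), \(m\), \(t_{d}/t_{\rm rec}\) and \(a\) entering Eq. (35). The input values \(B_{0}=1\) nG and \(h=0.7\) are illustrative assumptions, not values used in the paper.

```python
# Minimal sketch of the auxiliary parameters of the decaying-turbulence heating term.
import math

B0    = 1.0               # present-day field strength in nG (assumed)
h     = 0.7               # dimensionless Hubble parameter (assumed)
n_B   = -2.9              # magnetic spectral index, as in the text
k_lam = 2.0 * math.pi     # k_lambda for lambda = 1 Mpc [Mpc^-1]
B_lam = B0                # B_lambda = B_{1 Mpc} = B_0

# damping wavenumber k_D [Mpc^-1], as defined after Eq. (35)
k_D = (2.89e4 * h) ** (1.0 / (n_B + 5.0)) \
      * B_lam ** (-2.0 / (n_B + 5.0)) \
      * k_lam ** ((n_B + 3.0) / (n_B + 5.0))

m = 2.0 * (n_B + 3.0) / (n_B + 5.0)
td_over_trec = 14.8 / (B0 * k_D)
a = math.log(1.0 + td_over_trec)

print(k_D, m, td_over_trec, a)
```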
Heating due to ambipolar diffusion caused by the primordial magnetic field (Chluba et al. (2015), Minoda et al. (2019)):
\[\Gamma_{mfad}=\frac{1-x_{HII}}{g(T_{b})x_{HII}}[f_{D}(z)]^{2n_{B}+8}\left[\frac{(1+z)k_{D}}{3.086\cdot 10^{22}}\frac{\rho_{mf}}{\rho_{b}}\right]^{2}f_{L}, \tag{36}\]
where \(x_{HII}=n_{HII}/n_{H}\), \(f_{L}=0.8313(n_{B}+3)^{1.105}(1.0-0.0102(n_{B}+3))\), \(g(T_{b})=1.95\cdot 10^{11}T_{b}^{0.375}\) m\({}^{3}\)/s/kg, \(\rho_{b}=\rho_{cr}^{(0)}\Omega_{b}(1+z)^{3}\), \(k_{D}=286.91(B_{0}/\mbox{nG})^{-1}\) Mpc\({}^{-1}\).
|
2309.03828 | Proposal for all-electrical skyrmion detection in van der Waals tunnel
junctions | A major challenge for magnetic skyrmions in atomically thin van der Waals
(vdW) materials is reliable skyrmion detection. Here, based on rigorous
first-principles calculations, we show that all-electrical skyrmion detection
is feasible in 2D vdW magnets via scanning tunneling microscopy (STM) and in
planar tunnel junctions. We use the nonequilibrium Green's function method for
quantum transport in planar junctions, including self-energy due to electrodes
and working conditions, going beyond the standard Tersoff-Hamann approximation.
We obtain a very large tunneling anisotropic magnetoresistance (TAMR) around
the Fermi energy for a vdW tunnel junction based on
graphite/Fe$_3$GeTe$_2$/germanene/graphite. For atomic-scale skyrmions the
noncollinear magnetoresistance (NCMR) reaches giant values. We trace the origin
of the NCMR to spin-mixing between spin-up and -down states of $p_z$ and
$d_{z^2}$ character at the surface atoms. Both TAMR and NCMR are drastically
enhanced in tunnel junctions with respect to STM geometry due to orbital
symmetry matching at the interface. | Dongzhe Li, Soumyajyoti Haldar, Stefan Heinze | 2023-09-07T16:41:07Z | http://arxiv.org/abs/2309.03828v3 | # Proposal for all-electrical skyrmion detection in van der Waals tunnel junctions
###### Abstract
Based on rigorous first-principles calculations, we show that all-electrical detection of skyrmions in 2D van der Waals (vdW) magnets is feasible in tunnel junctions with straightforward implementation into device architectures. We use the nonequilibrium Green's function method for quantum transport, including self-energy due to electrodes and working conditions, going beyond the standard Tersoff-Hamann approximation. An extremely large noncollinear magnetoresistance (NCMR) of above 10,000 % at the Fermi energy is predicted for a vdW tunnel junction based on graphite/Fe\({}_{3}\)GeTe\({}_{2}\)/germanene/graphite. We trace the origin of the NCMR to spin-mixing between states of \(p_{z}\) and \(d_{z^{2}}\) character at the surface atoms and the orbital matching effect at the interface.
Magnetic skyrmions [1] - topologically stabilized chiral spin structures with size down to the nanometer scale - have emerged as a promising avenue to realize next-generation spintronic devices [2; 3; 4]. Ten years ago, Fert and co-workers first proposed to use skyrmions in a racetrack memory in their seminal paper [5]. Today, many other potential applications of skyrmions are being explored ranging from logic devices to neuromorphic or quantum computing [6; 7; 8; 9]. An essential prerequisite for most applications is reliable electrical detection of individual skyrmions or other topological spin structures.
In ultrathin transition-metal films, skyrmions have been observed directly using spin-polarized scanning tunneling microscopy (STM) [10; 11], which is based on the tunneling magnetoresistance (TMR). TMR devices rely on magnetic electrodes, which may perturb the skyrmion state during detection. Skyrmion detection is also possible via the tunneling anisotropic magnetoresistance (TAMR) [12; 13] using non-magnetic STM tips [14; 10; 15]. However, because TAMR relies purely on spin-orbit coupling (SOC), it is typically too small for device applications. In 2015, the noncollinear magnetoresistance (NCMR) has been discovered and proposed for skyrmion detection [16]. NCMR is based on spin-mixing of majority and minority spin channels in a non-collinear spin structure [16; 17; 18; 19]. Since it is not caused by SOC, it can be much larger than TAMR. NCMR was used to observe skyrmions and domain walls by STM [16; 19].
An alternative way for all-electrical skyrmion detection is to use the topological Hall effect [20; 21; 22; 23]. However, such device setups are more difficult to fabricate in terms of device geometries. A simpler solution is to design perpendicular tunnel junctions that can easily integrate skyrmions into conventional semiconductor devices, e.g., magnetic tunnel junctions (MTJ) for skyrmions [24; 25].
More recently, with the discovery of 2D van der Waals (vdW) magnets, a comprehensive study has been performed on the spin transport on 2D magnets in planar junctions [26; 27; 28; 29; 30; 31]. However, these investigations are restricted to TMR-based devices, i.e., collinear magnetic configurations. There are currently neither experimental nor theoretical works on quantum transport through magnetic skyrmions in 2D vdW magnet planar junctions. Moreover, the Tersoff-Hamann (TH) model [32], which allows explaining NCMR in STM geometry [16; 19], is questionable if one aims to explore planar tunnel junction devices.
Here, we demonstrate that the NCMR effect can be very large in 2D vdW magnets, allowing the all-electrical detection of magnetic skyrmions. We study both the regime of tunneling across a vacuum gap as applicable to STM as well as tunnel junction devices with non-magnetic electrodes. We predict NCMR detectable by STM in Fe\({}_{3}\)GeTe\({}_{2}\)/germanene of up to about 400%. Our first-principles calculations predict an extremely large NCMR of more than 10,000 % at the Fermi energy for an experimentally feasible system of graphite/Fe\({}_{3}\)GeTe\({}_{2}\)/germanene/graphite tunnel junction. This NCMR is at least two orders of magnitude larger than that observed at conventional transition-metal interfaces [16; 19]. Furthermore, we demonstrate the significance of employing non-equilibrium Green's functions (NEGF) for quantum transport in tunnel junctions, where the TH approximation proves inadequate.
Fig. 1 shows our transport setup for all-electrical skyrmion detection, consisting of Fe\({}_{3}\)GeTe\({}_{2}\)/germanene (FGT/Ge), a representative 2D vdW heterostructure, sandwiched between two nonmagnetic electrodes, namely tunnel junctions with nonmagnetic electrodes (TJ-NM). A key difference between a TJ-NM and the widely used MTJ setup is that we do not rely on an external magnetic field for device operations. Fig. 2a shows the atomic structure of the FGT heterostructure, a system of great experimental interest for magnetic skyrmions [33; 34; 35; 36]. Recent theoretical work predicts that the magnetic interactions in FGT/Ge are highly tunable by strain [37], leading to stabilize skyrmions with diameters down to a few nanometers [38].
Electronic structure and quantum transport calculations were carried out using QuantumATK [39], which uses the non-equilibrium Green function (NEGF) [40] formalism combined with noncollinear density functional theory (DFT) [41]. Additionally, we used Fleur[42] to perform spin spiral calculations based on the generalized Bloch theorem [43]. Computational details can be found in Supplemental Material [44].
**NCMR in STM geometry.** We start our discussion with the NCMR effect in STM geometry (Fig. 2a) using the TH approach [32]. Due to the large computational cost, we used the smallest Neel-type skyrmion, which fits into a (3\(\times\)3) FGT/Ge supercell (Fig. 2b). Note, that such a skyrmion was obtained from atomistic spin dynamics using parameters from DFT calculations [38].
According to the TH model [32], the differential conductance, \(dI/dU\), in an STM experiment is given by
\[dI/dU(\mathbf{R}_{\mathrm{T}},U)\propto n(\mathbf{R}_{\mathrm{T}},E_{ \mathrm{F}}+eU) \tag{1}\]
where \(\mathbf{R}_{\mathrm{T}}\) is the tip position, \(U\) is the bias voltage, and \(n(\mathbf{r},E)\) is the local density of states (LDOS) of the sample evaluated in the vacuum a few Å above the surface. Even if the STM tip is non-magnetic, the LDOS and thereby the obtained \(dI/dU\) signal can be sensitive to the local spin texture due to NCMR [16], defined for a skyrmion (Sk) in the ferromagnetic (FM) background as NCMR = \(\frac{dI/dU_{\mathrm{Sk}}-dI/dU_{\mathrm{FM}}}{dI/dU_{\mathrm{FM}}}\).
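As an illustration of how this definition is used in practice, the following minimal sketch (not the authors' code; the LDOS values in the example are made up) converts Tersoff-Hamann vacuum LDOS curves of the FM and Sk states into an NCMR spectrum, using Eq. (1) to identify \(dI/dU\) with the vacuum LDOS.

```python
# Minimal sketch: NCMR spectrum from FM and Sk vacuum LDOS on a common energy grid.
import numpy as np

def ncmr_spectrum(ldos_sk, ldos_fm):
    """NCMR = (dI/dU_Sk - dI/dU_FM) / dI/dU_FM, with dI/dU taken proportional
    to the vacuum LDOS (Tersoff-Hamann)."""
    ldos_sk = np.asarray(ldos_sk, dtype=float)
    ldos_fm = np.asarray(ldos_fm, dtype=float)
    return (ldos_sk - ldos_fm) / ldos_fm

# Toy example (illustrative numbers only)
energies = np.linspace(-0.5, 1.0, 4)            # eV relative to E_F
ldos_fm  = np.array([0.8, 1.0, 2.0, 1.5])
ldos_sk  = np.array([1.2, 0.9, 1.6, 1.8])
print(100.0 * ncmr_spectrum(ldos_sk, ldos_fm))  # NCMR in percent
```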
The NCMR signal varies locally above the Neel-type skyrmion (Fig. 2b) and reaches a maximum value of about 392 % at the skyrmion core. In a ring around the core the value drops to about \(-21\) % and rises again to about 90 % as one moves further from the center. As expected, the NCMR contrast becomes smaller as one approaches the edge of the skyrmion since the effective non-collinearity is reduced close to the FM environment.
From the energy-resolved vacuum LDOS of the skyrmion core vs. the FM environment (Fig. 2c) evaluated at the skyrmion core above the Te atom, we observe the large NCMR effect of varying positive and negative sign at various energies around \(E_{\mathrm{F}}\) (Fig. 2d). A similar energy dependence of the NCMR is found at other STM tip positions, and we find that the SOC contribution to the NCMR is rather small. We have also performed calculations for a skyrmion in a (4\(\times\)4) FGT/Ge supercell, which exhibits an NCMR effect of a similar order of magnitude and lateral variation (see Fig. S2 and Fig. S3 in Supplemental Material [44]).
To go beyond the nanoscale skyrmion (Fig. 2b) and to vary the period of the noncollinear spin structure on a larger scale, we locally approximate the electronic structure in a skyrmion by that of a homogeneous spin spiral state [43]. This approach can explain the experimentally observed NCMR effect of skyrmions and domain walls in ultrathin films [16; 19]. Note, that spin-polarized STM was used to resolve domain walls in FGT [45; 46].
Fig. 2e shows the vacuum LDOS calculated about 3 A above FGT/Ge for spin spirals of different periods. As the spin spiral rotating angle \(\theta\) varies, one observes significant changes in the height and position of the peaks in the vacuum LDOS (Fig. 2e). We find a prominent peak at about 0.25 eV above \(E_{\mathrm{F}}\), which quickly decreases with rising non-collinearity, and one at about 0.75 eV above \(E_{\mathrm{F}}\), which shows a more complex change. In contrast, the peak at about 0.5 eV below \(E_{\mathrm{F}}\) displays only a small shift and broadening.
The corresponding NCMR (Fig. 2f) calculated for various spin spiral states [47] shows a large negative value around \(E_{\mathrm{F}}+0.25\) eV up to about 80 % due to the vanishing peak (cf. Fig. 2e). At larger energies, we find two peaks in the NCMR of up to about 100% and 300%. The NCMR spectrum of the spin spiral state with the shortest period, i.e. largest angle \(\theta=60^{\circ}\) (red curve in Fig. 2f), is similar in sign and order of magnitude to that obtained for the skyrmion (Fig. 2d) since the rotating angle between neighboring spins in our nanoscale skyrmion (Fig. 2b) is close to \(60^{\circ}\). An exception is the large NCMR peak at about 0.3 eV below \(E_{\mathrm{F}}\) for skyrmions, which is missing for spin spirals.
The origin of the NCMR for skyrmions can be understood by analyzing the electronic structure. We find the vacuum LDOS (Fig. 3) to be dominated by the \(p_{z}\) and \(d_{z^{2}}\) states of Te2 and Fe3 since these orbitals exhibit the slowest decay into the vacuum. The characteristic changes of the vacuum LDOS for the skyrmion vs. FM state (Fig. 2c), i.e. the shifted double peak structure above \(E_{\mathrm{F}}\) and the extra peak at about \(E_{\mathrm{F}}-0.3\) eV for the skyrmion, are clearly visible in the Fe3-\(p_{z}\) LDOS (Fig. 3a) and in the Te2-\(p_{z}\) and Te2-\(d_{z^{2}}\) LDOS (Fig. 3e,f).
For the spin spiral states, we obtain similar conclusions.
Figure 1: The proposed vertical tunnel junctions with nonmagnetic electrodes (TJ-NM) for electrical read-out of skyrmions in 2D vdW magnets, e.g., in racetrack memory. The skyrmions are stabilized at the Fe\({}_{3}\)GeTe\({}_{2}\)/germanene vdW heterostructure. Reading data from the skyrmion pattern is accomplished all electrically based on NCMR.
The variation of the angle \(\theta\) between adjacent spins leads to a gradual change of the peaks, e.g., the decreasing peak height in the vacuum LDOS at about \(E_{\rm F}+0.25\) eV (Fig. 2e). This effect is also visible in the Fe3-\(p_{z}\) LDOS (Fig. 3c) and in the Te2-\(p_{z}\) and Te2-\(d_{z^{2}}\) LDOS (Fig. 3g,h). In the LDOS of Fe3-\(d_{z^{2}}\) (Fig. 3d) we notice a shift and decrease of the peak at \(E_{\rm F}-0.5\) eV which explains the change of the vacuum LDOS at this energy (Fig. 2e). For more detailed \(p_{z}\)- and \(d_{z^{2}}\)-orbital resolved NCMR from the Fe3 and Te2 atoms, see Fig. S4 and Fig. S5 in Supplemental Material [44]. Note, that the variations of the orbital decomposed LDOS at the Fe3 and Te2 atoms are also similar for the skyrmion and the spin spiral state with the largest angle, i.e. \(\theta=60^{\circ}\) (red).
The physical mechanism of NCMR can be elucidated through the spin mixing effect, mainly driven by interlayer hopping between Fe3-\(d_{z^{2}}^{\uparrow}\) and Fe2-\(p_{z}^{\downarrow}\), as well as intralayer hopping between Te2-\(p_{z}^{\uparrow}\) and Te2-\(p_{z}^{\downarrow}\). The shift of the \(d_{z^{2}}^{\uparrow}\) peak at about 0.5 eV below \(E_{\rm F}\) at the Fe3 atom can be attributed to spin mixing with \(p_{z}^{\downarrow}\) state peak at \(E_{\rm F}-0.25\) eV of the Fe2 atom (Fig. 4a). The spin mixing leads to a shift and splitting of the peaks in both LDOS, and this effect is reproduced and understood by a two-level tight binding (TB) model proposed in Ref. [19], as shown in Fig. 4b. For the \(p_{z}\) states at the Te2 atom a similar effect is found for the states above \(E_{\rm F}\) (see Fig. S6 in Supplemental Material [44]). Note, that these changes are directly reflected in the vacuum LDOS (Fig. 2e) and can explain the NCMR spectrum above \(E_{\rm F}\) (Fig. 2f). See Figs. S7-S9 in Supplemental Material [44] for more detailed LDOS of the Te2, Fe3, and Fe2 atoms.
**NCMR in tunnel junctions.** To properly address NCMR in tunnel junctions, including self-energy due to electrodes and working conditions, one has to calculate the nonequilibrium charge/spin density using the NEGF formalism [40], going beyond the TH approximation. The transmission function is calculated by NEGF as
\[T(E)=\mathrm{Tr}\left[\mathbf{\Gamma}_{\rm L}\mathbf{G}\mathbf{\Gamma}_{\rm R }\mathbf{G}^{\dagger}\right] \tag{2}\]
where \(\mathbf{G}\) is the retarded Green's function of the central region, and \(\mathbf{\Gamma}_{\rm L}/\mathbf{\Gamma}_{\rm R}\) are matrices describing its coupling to left/right semi-infinite electrodes.
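A minimal numerical sketch of Eq. (2) is given below; it is not the QuantumATK workflow used in this work, and the toy two-site Hamiltonian and self-energies are assumptions chosen only to make the trace formula concrete.

```python
# Minimal sketch of the Caroli/NEGF trace formula T(E) = Tr[Gamma_L G Gamma_R G^dagger].
import numpy as np

def transmission(E, H, Sigma_L, Sigma_R, eta=1e-6):
    """Zero-bias transmission at energy E for a central-region Hamiltonian H (n x n)
    and retarded electrode self-energies Sigma_L, Sigma_R (n x n)."""
    n = H.shape[0]
    Gamma_L = 1j * (Sigma_L - Sigma_L.conj().T)   # broadening matrices
    Gamma_R = 1j * (Sigma_R - Sigma_R.conj().T)
    G = np.linalg.inv((E + 1j * eta) * np.eye(n) - H - Sigma_L - Sigma_R)
    return np.trace(Gamma_L @ G @ Gamma_R @ G.conj().T).real

# Toy two-site junction (all numbers are illustrative assumptions)
H = np.array([[0.0, -0.5], [-0.5, 0.0]], dtype=complex)
Sigma_L = np.diag([-0.05j, 0.0])
Sigma_R = np.diag([0.0, -0.05j])
print(transmission(0.0, H, Sigma_L, Sigma_R))
```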
We propose to consider a TJ-NM created by the graphite/FGT/Ge/graphite junction to detect a skyrmion by all-electrical means (Fig. 5a-b). Note, that we used a FGT/Ge (1\(\times\)1) lattice strained by \(-3\) % as a fixed layer, and a \(\sqrt{3}\)\(\times\)\(\sqrt{3}\) in-plane unit cell of graphite is matched to it. Fig. 5c shows zero-bias transmission functions through a nanoscale skyrmion (spin structure as in Fig. 2a) and the FM state for the tunnel junction.
Figure 2: Calculated NCMR in STM geometry using the TH approximation. (a) Schematic plot of an STM experiment on FGT/Ge with a non-magnetic tip. (b) Spin structure of the nanoscale Néel-type skyrmion stabilized in strained FGT/Ge [38] and the NCMR map calculated at a distance of 3 Å from the surface and an energy of \(E=E_{\rm F}-0.3\) eV. (c) Vacuum LDOS for the FM state (black) and the Sk state (red). The green arrow marks the energy at which the NCMR map in panel (b) has been plotted. (d) NCMR calculated from the LDOS of the FM and Sk states shown in panel (c). (e) LDOS in the vacuum calculated for spin spiral states at 3 Å above FGT/Ge for various nearest-neighbor angles \(\theta\). (f) NCMR calculated from the LDOS of the FM and spin spiral states shown in panel (e). The inset shows the hexagonal atomic lattice with the spin spiral vector \(\mathbf{q}\) along the \(\overline{\Gamma}-\overline{\rm K}-\overline{\rm M}\) direction of the 2D Brillouin zone and the angle \(\theta\) between spins on neighboring sites.
The junction exhibits an insulating feature for the FM state with a clear dip at \(E_{\rm F}\) due to the orbital matching effect between C-\(p_{z}\) and Te-\(p_{z}\) orbitals at the interface. Note that the hybridization effect between FGT/Ge and the graphite electrodes is small due to the weak vdW interaction (see Fig. S11 in Supplemental Material [44]).
The transmission for the skyrmion state (Fig. 5c) exhibits much larger values in a broad energy range, leading to an extremely large NCMR of more than \(10,000\) % near \(E_{\rm F}\) (Fig. 5d). This value is at least two orders of magnitude higher than that observed for transition-metal interfaces [16, 17, 19, 48]. The NCMR calculated by NEGF differs significantly from that obtained in the TH model (Fig. 5d), especially around \(E_{\rm F}\). The NCMR obtained via the TH approach changes sign several times in the considered energy range and reaches a maximum of about \(400\%\) at \(0.3\) eV below \(E_{\rm F}\). In contrast, the NCMR calculated by NEGF is positive in almost the entire energy range and between \(1,000\) and \(10,000\) % in a wide range around \(E_{\rm F}\), reaching a maximum value of about \(100,000\) %. This demonstrates that the proposed TJ-NM is an ideal platform for all-electrical detection of skyrmions.
The extremely large NCMR observed in TJ-NM stems from the interplay of the symmetry of electronic states and the spin mixing effect. Due to orbital matching at the interface of FGT/Ge and the graphite electrodes, C-\(p_{z}\) mainly couples to \(p_{z}\) orbitals of the surface atoms of FGT/Ge (Te2, Fe3, and Ge2) (see Fig. S11 and S12 in Supplemental Material [44]). The shape of the LDOS at the C atoms in combination with the low \(p_{z}\) LDOS at the Te2, Fe3, and Ge2 atoms near \(E_{\rm F}\) can explain the v-shaped transmission functions (see Fig. S13 in Supplemental Material [44]). For a large transmission obtained by NEGF, the contributing states need to extend through the entire FGT/Ge layer. The higher transmission in the skyrmion and positive NCMR above \(E_{\rm F}+0.2\) eV indicate that the \(p_{z}\) states, which leads to a large LDOS between \(E_{\rm F}+0.2\) eV and \(E_{\rm F}+0.5\) eV in the FM state, contribute little to the current due to their strong localization at the surface atoms. This interpretation is supported by the spatial localization visible in the LDOS map of FGT/Ge (see Fig. S14 in Supplemental Material [44]).
For comparison, we also calculated the TAMR of our tunnel junction (see Supplemental Material [44]). The obtained value of about \(200\%\) near \(E_{\rm F}\) is much smaller than NCMR since TAMR originates from SOC. However, the value is still more than one order of magnitude larger than the TAMR reported in ultrathin films [12, 14, 17, 48, 18] and similar to that reported for molecular junctions
Figure 4: NCMR mechanism by spin-mixing in FGT/Ge. LDOS of Fe3-\(d_{z^{2}}^{\uparrow}\) (a) and Fe2-\(p_{z}^{\downarrow}\) (b) states as obtained from DFT calculations for spin spiral states with an angle \(\theta\) between adjacent spins. (c-d) The same as in (a-b) but using a two-level TB model proposed in Ref. [19]. (See Supplemental Material [44] for details.)
Figure 3: LDOS of (a) \(p_{z}\) and (b) \(d_{z^{2}}\) character in the FM (black) and in the skyrmion state (red) for the Fe3 (surface) atom (cf. Fig. 3a) in the FGT/Ge heterostructure. Note, that the sum of spin-up and -down states is shown. (c-d) as (a-b) for the spin spiral state for various angles \(\theta\) between adjacent magnetic moments. (e-h) as (a-d) for the Te2 (surface) atom.
[49; 50; 51], showing again the high promise of the proposed TJ-NM for all-electrical skyrmion detection.
To conclude, we suggest the graphite/Fe\({}_{3}\)GeTe\({}_{2}\)/germanene/graphite tunnel junction as an ideal platform for reliable all-electrical skyrmion detection down to the atomically thin limit. An extremely large NCMR of beyond 10,000 % is observed at \(E_{\mathrm{F}}\), which is more than two orders of magnitude higher than the NCMR obtained for conventional transition-metal interfaces. The physical mechanism is explained by the interplay between the spin mixing and orbital matching effects at the interface. Our work highlights the crucial importance of using the NEGF approach for quantum transport on noncollinear spin structures in tunnel junctions, going beyond the TH model.
This study has been supported through the ANR Grant No. ANR-22-CE24-0019. This study has been (partially) supported through the grant NanoX no. ANR-17-EURE-0009 in the framework of the "Programme des Investissements d'Avenir". Financial support from the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) through SPP2137 "Skyrmionics" (project no. 462602351) is gratefully acknowledged. We acknowledge CALMIP (Grant 2023-[P21008]) and the North-German Supercomputing Alliance (HLRN) for providing HPC resources.
|
2309.06788 | Resolution of the Diagonal on the Root Stacks | In this paper, we give a new constructive proof of the semi-orthogonal
decomposition of the derived category of (quasi)-coherent sheaves of root
stacks, through an explicit resolution of the diagonal. | Yu Zhao | 2023-09-13T08:22:47Z | http://arxiv.org/abs/2309.06788v1 | # Resolution of the diagonal on the root stacks
###### Abstract.
In this paper, we give a new constructive proof of the semi-orthogonal decomposition of the derived category of (quasi)-coherent sheaves of root stacks, through an explicit resolution of the diagonal.
_To laugh is to live profoundly._
_Milan Kundera_
## 1. Introduction
### The main result of this paper
The main result of this paper is a constructive proof of the semi-orthogonal decomposition of the derived category of (quasi-)coherent sheaves on root stacks, which has been intensively studied by Ishii-Ueda [11], Bergh-Lunts-Schnürer [4], Kuznetsov-Perry [15], Bergh-Schnürer [5] and recently by Bodzenta-Donovan [6]:
**Theorem 1.1**.: _Let \(\mathcal{D}\) be an effective Cartier divisor of an algebraic stack \(\mathcal{X}\). Given an integer \(l>1\), let \(\mathcal{X}_{\mathcal{D},l}\) be the \(l\)-th root stack of \(\mathcal{X}\) along \(\mathcal{D}\) and \(\mathcal{D}_{l}:=\mathcal{D}\times_{B\mathbb{G}_{m}}B\mathbb{G}_{m}\), where the map \(\mathcal{D}\to B\mathbb{G}_{m}\) is determined by the line bundle \(\mathcal{L}_{\mathcal{D},l}:=\mathcal{O}_{\mathcal{X}}(-\mathcal{D})|_{\mathcal{D}}\) and the map \(B\mathbb{G}_{m}\to B\mathbb{G}_{m}\) is the \(l\)-th power map. Then under the following natural commutative diagram_
_the following functors_
\[\triangleright_{i,\mathcal{X}}:=(-\otimes\mathcal{L}_{D,l}^{i}) \circ Ri_{\mathcal{X}_{\mathcal{D},l}\ast}\circ LBt_{\mathcal{D}}^{\ast}:D_{ qcoh}^{+}(\mathcal{D})\to D_{qcoh}^{+}(\mathcal{X}_{\mathcal{D},l}),\] \[L\theta_{\mathcal{X}}^{\ast}:D_{qcoh}^{+}(\mathcal{X})\to D_{ qcoh}^{+}(\mathcal{X}_{\mathcal{D},l})\]
_are fully faithful. Moreover, let \(D_{l,\mathcal{D}}^{i}\) be the image of \(\triangleright_{i,\mathcal{X}}\), then for any \(0\leq i\leq l-1\), we have the semi-orthogonal decomposition_
\[D_{qcoh}^{+}(\mathcal{X}_{\mathcal{D},l}):=<D_{l,\mathcal{D}}^{i-l+1},\cdots,D _{l,\mathcal{D}}^{-1},L\theta_{l}^{\ast}D_{qcoh}^{+}(\mathcal{X}),D_{l, \mathcal{D}}^{0},\cdots,D_{l,\mathcal{D}}^{i-1}>. \tag{1.1}\]
_Similar arguments also hold for complexes with bounded or coherent cohomologies._
_Remark 1.2_.: Our setting is similar to Bergh-Lunts-Schnürer [4], which only assumes that \(\mathcal{X}\) is an algebraic stack over \(\mathbb{Z}\). Our strategy could also be generalized to the derived category of perfect complexes with very mild modifications. We leave it as an exercise for interested readers.
_Remark 1.3_.: Bodzenta-Donovan [6] pointed out that the above semi-orthogonal decomposition is \(2l\)-periodic, and thus induces higher spherical functors. We will try to sketch a potential relation of those higher spherical functors with the categorical representation theory in Section 1.4.
_Remark 1.4_.: The root stack construction by Cadman [7] or Abramovich-Graber-Vistoli [1] only requires a global section of a line bundle \(\mathcal{L}\) over \(\mathcal{X}\). In this paper, we only consider the case that the global section is injective i.e. induced by an effective Cartier divisor. We make this restriction so we can work in a more classical framework rather than the derived algebraic geometry, and this issue can be solved by introducing the definition "virtual effective Cartier divisor" in Khan-Rydh [12].
### Resolutions of the diagonal and the semi-orthogonal decomposition
Given an integer \(n>0\), Beilinson [3] gave a resolution of the diagonal of \(\mathbb{P}^{n}\) by the long-exact sequence:
\[0\to\wedge^{n}\Omega_{\mathbb{P}^{n}}(n)\boxtimes\mathcal{O}_{\mathbb{P}^{n}}(-n)\to\cdots\to\Omega_{\mathbb{P}^{n}}(1)\boxtimes\mathcal{O}_{\mathbb{P}^{n}}(-1)\to\mathcal{O}_{\mathbb{P}^{n}\times\mathbb{P}^{n}}\to\Delta_{*}\mathcal{O}_{\mathbb{P}^{n}}\to 0,\]
which induces the semi-orthogonal decomposition
\[D^{b}_{coh}(\mathbb{P}^{n})=<\mathcal{O}_{\mathbb{P}^{n}},\mathcal{O}_{\mathbb{P}^{n}}(1),\cdots,\mathcal{O}_{\mathbb{P}^{n}}(n)>=<\mathcal{O}_{\mathbb{P}^{n}},\Omega_{\mathbb{P}^{n}}(1),\cdots,\wedge^{n}\Omega_{\mathbb{P}^{n}}(n)>.\]
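For instance, in the simplest case \(n=1\) (a standard special case, spelled out here only for illustration), \(\Omega_{\mathbb{P}^{1}}(1)\cong\mathcal{O}_{\mathbb{P}^{1}}(-1)\) and Beilinson's resolution reduces to the Koszul resolution of the diagonal, which is a divisor of type \((1,1)\):

```latex
% Worked special case n = 1: the diagonal in P^1 x P^1 is cut out by x_0 y_1 - x_1 y_0,
% a section of O(1,1), so the resolution above becomes the two-term Koszul complex
\[
0\to \mathcal{O}_{\mathbb{P}^1}(-1)\boxtimes\mathcal{O}_{\mathbb{P}^1}(-1)
 \xrightarrow{\;x_0y_1-x_1y_0\;} \mathcal{O}_{\mathbb{P}^1\times\mathbb{P}^1}
 \to \Delta_*\mathcal{O}_{\mathbb{P}^1}\to 0,
\]
% recovering the semi-orthogonal decomposition
% D^b_{coh}(P^1) = < O_{P^1}, O_{P^1}(1) >.
```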
We refer to Kuznetsov [13][14] for a general introduction to the semi-orthogonal decomposition of the derived category of coherent sheaves on algebraic varieties. One of the main purposes of this paper is to give an explicit resolution of the diagonal which induces the semi-orthogonal decomposition, following Appendix A of our work [18] for Orlov's semi-orthogonal decomposition theorem for blow-ups of smooth varieties [16]. We prove that
**Theorem 1.5** (Theorem 4.6).: _Let_
\[\triangleleft_{i,\mathcal{X}}:D^{+}_{qcoh}(\mathcal{X}_{\mathcal{D},l})\to D^{+}_{qcoh}(\mathcal{D})\]
_be the cohomological degree \(1\) shift of the right adjoint functor of \(\triangleright_{i,\mathcal{X}}\). Given \(0\leq n\leq m\leq l-1\), there exists_
\[\tau_{n,m,\mathcal{X}}\in D^{b}_{coh}(\mathcal{X}_{\mathcal{D},l}\times_{\theta^{l}_{\mathcal{X}},\mathcal{X},\theta^{l}_{\mathcal{X}}}\mathcal{X}_{\mathcal{D},l})\]
_such that the Fourier-Mukai functors_
\[\overline{\tau}_{n,m,\mathcal{X}}:D^{+}_{qcoh}(\mathcal{X}_{\mathcal{D},l})\to D^{+}_{qcoh}(\mathcal{X}_{\mathcal{D},l})\]
_generated by the kernels \(\tau_{n,m,\mathcal{X}}\) satisfy:_
1. _we have_ \[\overline{\tau_{n,n,\mathcal{X}}}\cong\otimes\mathcal{O}_{\mathcal{X}_{ \mathcal{D},l}}(-\mathcal{D}_{l})^{n},\quad\overline{\tau_{0,l-1,\mathcal{X}} }\cong L\theta^{l*}_{\mathcal{X}}R\theta^{l}_{\mathcal{X}*}\]
2. _for any_ \(0\leq n<m\leq l-1\)_, we have canonical triangles:_ \[\overline{\tau_{n,m-1,\mathcal{X}}}\to\overline{\tau_{n,m,\mathcal{X}}}\to\bigoplus_{i=0}^{m}\triangleright_{m-i,\mathcal{X}}\triangleleft_{i,\mathcal{X}}\] \[\overline{\tau_{n,m,\mathcal{X}}}\to\overline{\tau_{n+1,m,\mathcal{X}}}\to\bigoplus_{i=n+1}^{l-1}\triangleright_{i-1-n,\mathcal{X}}\triangleleft_{i,\mathcal{X}}.\]
3. _All those functors map complexes with bounded cohomologies (resp. coherent cohomologies) to complexes with bounded cohomologies (resp. coherent cohomologies)._
### Motivation and derived birational geometry
In [18], we found a surprising relation between the resolution of the diagonal and the birational geometry of derived schemes. We now briefly introduce this insight without invoking the machinery of \(\infty\)-categories.
For simplicity, we assume that \(\mathcal{X}=[\mathbb{A}^{1}_{\mathbb{C}}/\mathbb{C}^{*}]\) and the Cartier divisor is \(B\mathbb{C}^{*}\). In this situation \(\mathcal{X}_{\mathcal{D},l}\) is also \([\mathbb{A}^{1}_{\mathbb{C}}/\mathbb{C}^{*}]\) and
\[\mathcal{X}_{\mathcal{D},l}\times_{\mathcal{X}}\mathcal{X}_{\mathcal{D},l} \cong[\{(x,y)\in\mathbb{A}^{2}_{\mathbb{C}}|\prod_{j\in\mathbb{Z}/l\mathbb{Z}} (x-e^{\frac{2\pi i}{l}\cdot j}y)=0\}/(\mathbb{C}^{*}\times\mathbb{Z}/l\mathbb{ Z})]\]
which is the union of \(l\) lines intersecting at the origin, quotiented by the action of \(\mathbb{C}^{*}\) acting by scalars on both \(x\) and \(y\) and of \(\mathbb{Z}/l\mathbb{Z}\) acting by multiplication by roots of unity on \(y\).
Now we blow up the origin point \((0,0)\) in
\[\alpha_{l}:=\{(x,y)\in\mathbb{A}^{2}_{\mathbb{C}}|\prod_{j\in\mathbb{Z}/l \mathbb{Z}}(x-e^{\frac{2\pi i}{l}\cdot j}y)=0\}.\]
In the classical setting, we would get \(l\) lines which do not intersect each other, which is exactly \(\mathcal{X}_{\mathcal{D},l}\) after taking the quotient by \(\mathbb{C}^{*}\times\mathbb{Z}/l\mathbb{Z}\), and the natural projection is the diagonal morphism. However, in the derived setting of Hekking [10], we should consider a family of varieties
\[\mathcal{M}_{k}:=\{((x_{1},y_{1}),[x_{2},y_{2}])\in\mathbb{A}^{2}_{\mathbb{C} }\times\mathbb{P}^{1}_{\mathbb{C}}|x_{1}y_{2}=y_{1}x_{2},x_{1}^{l-k}x_{2}^{l} =y_{1}^{l-k}y_{2}^{l}\},\quad 0\leq k\leq l.\]
Then \(\mathcal{M}_{l}\cong\mathbb{A}^{1}_{\mathbb{C}}\times\mathbb{Z}/l\mathbb{Z}\) and for other \(k<l\), \(\mathcal{M}_{k}\) contains \(\mathbb{P}^{1}_{\mathbb{C}}\) which contains points such that \(x_{1}=y_{1}=0\). In the sense of derived blow-up of Hekking [10], we will have
\[\mathbb{R}Bl_{\mathcal{M}_{k}}\mathbb{P}^{1}\cong\mathcal{M}_{k+1},\quad\mathbb{R}Bl_{\alpha_{l}}\{(0,0)\}\cong\mathcal{M}_{1}.\]
Moreover, in [18] we systematically studied the variation of certain quasi-coherent sheaves after the derived blow-up. Thus Theorem 1.1 actually is a direct corollary of the generalized vanishing theorem in [18].
For algebraic stacks over \(\mathbb{Z}\), we should be careful as the group of \(l\)-th roots of unity \(\mu_{l}\) and the constant group \(\mathbb{Z}/l\mathbb{Z}\) are not isomorphic in this situation. A detailed computation in this case is given in Section 2.5.
### Relations with the categorical representation theory
In this subsection, we describe a potential relation between the semi-orthogonal decomposition of root stacks and categorical representation theory, which we will study in detail in future work.
We denote \(\Theta:=[\mathbb{A}^{1}/\mathbb{G}_{m}]\) and the morphism of \(l\)-th power on \(\mathbb{A}^{1}\) induces a morphism \(\theta_{l}:\Theta\to\Theta\). Then the composition of relative Fourier-Mukai transforms induces a monoidal structure on
\[D^{b}_{coh}(\Theta\times_{\theta_{l},\Theta,\theta_{l}}\Theta).\]
Then for any \(l\)-th root stack \(\mathcal{X}_{\mathcal{D},l}\), the pull-back morphisms make \(D^{b}_{Coh}(\mathcal{X}_{\mathcal{D},l})\) a representation of the monoidal category \(D^{b}_{coh}(\Theta\times_{\theta_{l},\Theta,\theta_{l}}\Theta)\). Moreover, we notice that \(Coh(\Theta\times_{\theta_{l},\Theta,\theta_{l}}\Theta)\) consists of \(\mathbb{Z}[x]\otimes_{\mathbb{Z}[x^{l}]}\mathbb{Z}[x]\)-bimodules with a certain grading. We expect that the semi-orthogonal decomposition of the root stacks follows directly from the categorical representation theory of those bimodules.
### Organization of this paper
As we will work on stacks over \(\operatorname{Spec}(\mathbb{Z})\), we review the background of affine groups over commutative rings and the derived category of (quasi)-coherent sheaves on algebraic stacks in Section 2 and Section 3 respectively. Experts or readers who only care about schemes over \(\mathbb{C}\) should feel free to just read Section 2.5, Section 3.4 and Section 3.5. The proof of Theorem 1.1 and Theorem 1.5 is given in Section 4.
### Acknowledgments
The paper is directly inspired by Bodzenta-Donovan [6], which the author learned about and discussed with them at the conference "Current Trends in Categorically Approach to Algebraic and Symplectic Geometry II" in June 2023 at Kavli IPMU. The author would like to thank Agnieszka Bodzenta and Will Donovan for their interest and many helpful discussions on this topic.
Part of this paper was given in my lecture series in the Shanghai Tech University and Chinese Academy of Sciences. The author would like to thank the above institutes and Zhiyuan Ding, Mingliang Cai, Ziyu Zhang, Siqi He and Baohua Fu for their invitations and support.
The author is supported by World Premier International Research Center Initiative (WPI initiative), MEXT, Japan, and Grant-in-Aid for Scientific Research grant (No. 22K13889) from JSPS Kakenhi, Japan.
## 2. Affine group schemes and actions
In this section, we always assume that \(R\) is a commutative ring. We identify the category of affine schemes over \(\operatorname{Spec}(R)\) with the opposite category of commutative \(R\)-algebras. Given an affine \(R\)-scheme \(X:=R[X]\), we identify the abelian category of quasi-coherent (resp. coherent if \(R[X]\) is Noetherian) sheaves \(\operatorname{QCoh}(X)\) (resp. \(\operatorname{Coh}(X)\)) with the abelian category of (resp. finite generated) \(R[X]\)-modules \(R[X]-\operatorname{Mod}\) (resp. \(R[X]-\operatorname{Mod}^{fg}\)).
### Affine group schemes
**Definition 2.1**.: We define an \(R\)-affine group \(G\) as a flat, finite type and commutative \(R\)-algebra \(\pi_{G}:R\to R[G]\) with \(R\)-algebra morphisms
\[m_{G}:R[G]\to R[G]\otimes_{R}R[G],\quad i_{G}:R[G]\to R[G],\quad e_{G}:R[G]\to R\]
which we call multiplication, inversion, and identification respectively, such that they satisfy the group laws:
\[(m_{G}\otimes_{R}id_{R[G]})\circ m_{G}=(id_{R[G]}\otimes_{R}m_{G})\circ m_{G}, R[G]\to R[G]\otimes_{R}R[G]\otimes_{R}R[G],\]
\[id_{R[G]}=(id_{R[G]}\otimes_{R}e_{G})\circ m_{G}=(e_{G}\otimes_{R}id_{R[G]}) \circ m_{G}, R[G]\to R[G],\]
\[\pi_{G}\circ e_{G}=(i_{G}\otimes_{R}id_{R[G]})\circ m_{G}=(id_{R[G]}\otimes_{R }i_{G})\circ m_{G}, R[G]\to R[G].\]
We define an \(R\)-affine group morphism from an \(R\)-affine group \(G\) to another \(R\)-affine group \(H\) as a morphism of \(R\)-algebras \(f:R[H]\to R[G]\) such that
\[(f\otimes_{R}f)\circ m_{H}=m_{G}\circ f:R[H]\to R[G]\otimes_{R}R[G]\]
The \(R\)-affine group has the following properties by definition:
* Given a morphism of commutative rings \(R\to T\) and an \(R\)-affine group \(G\), \(G_{T}:=\operatorname{Spec}(R[G]\otimes_{R}T)\) is a \(T\)-affine group by the induced morphisms. Similarly, affine group morphisms are also stable under base change.
* Given two morphisms of \(R\)-affine groups \[f_{1}:G_{1}\to G,\quad f_{2}:G_{2}\to G,\] \(G_{1}\times_{f_{1},G,f_{2}}G_{2}\) is also an affine \(R\)-group with the multiplication induced by \(m_{G_{1}}\) and \(m_{G_{2}}\) (if it is fppf over \(\operatorname{Spec}(R)\)). Particularly, given a morphism of \(R\)-affine groups \(f:G\to H\), we denote \(ker(f):=G\times_{f,H,\pi_{H}}\operatorname{Spec}(R)\).
**Example 2.2**.: The scheme \(\mathbb{G}_{m}:=\operatorname{Spec}(\mathbb{Z}[t,t^{-1}])\) has a \(\mathbb{Z}\)-affine group structure
\[m_{\mathbb{G}_{m}}:\mathbb{Z}[t,t^{-1}]\to\mathbb{Z}[t_{1},t_{1}^ {-1},t_{2},t_{2}^{-1}], t\to t_{1}t_{2},\] \[i_{\mathbb{G}_{m}}:\mathbb{Z}[t,t^{-1}]\to\mathbb{Z}[t,t^{-1}], t\to t^{-1},\] \[e_{\mathbb{G}_{m}}:\mathbb{Z}[t,t^{-1}]\to\mathbb{Z}, t\to 1.\]
Given any integer \(l\), we have a \(\mathbb{Z}\)-affine group morphism \(t^{l}:\mathbb{G}_{m}\to\mathbb{G}_{m}\) given by
\[t^{l}:\mathbb{Z}[t,t^{-1}]\to\mathbb{Z}[t,t^{-1}],\quad t\to t^{l}.\]
When \(l>0\), we denote
\[\mu_{l}:=ker(t^{l})=\operatorname{Spec}(\mathbb{Z}[t]/(t^{l}-1)).\]
We denote \(\mathbb{G}_{m,R}:=\mathbb{G}_{m}\times_{Z}R\) and make similar notations for \(\mu_{l,R}\) and \(t_{R}^{l}\).
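As a sanity check (an elementary verification not spelled out in the text), \(t^{l}\) is indeed a morphism of affine groups in the sense of Definition 2.1:

```latex
% Verification on the generator t of Z[t,t^{-1}]: first apply m_{G_m}, then t^l on each factor,
\[
(t^{l}\otimes_{\mathbb{Z}} t^{l})\circ m_{\mathbb{G}_m}(t)
 =(t^{l}\otimes t^{l})(t_{1}t_{2})=t_{1}^{l}t_{2}^{l}
 =m_{\mathbb{G}_m}(t^{l})
 =m_{\mathbb{G}_m}\circ t^{l}(t),
\]
% so the identity (f \otimes_R f) \circ m_H = m_G \circ f required of an affine group
% morphism holds with f = t^l.
```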
### Representations of an affine group
**Definition 2.3**.: Let \(G\) be an \(R\)-affine group. We define a \(G\)-representation \(M\) as an \(R\)-module with an \(R\)-module morphism
\[\sigma_{G,M}:M\to R[G]\otimes_{R}M\]
such that
\[(m_{G}\otimes_{R}id_{M})\circ\sigma_{G,M}=(id_{R[G]}\otimes_{R} \sigma_{G,M})\circ\sigma_{G,M}, M\to R[G]\otimes_{R}R[G]\otimes_{R}M\] \[id_{M}=(e_{G}\otimes_{R}id_{M})\circ\sigma_{G,M}, M\to M.\]
We say that \(M\) is a finitely generated \(G\)-representation if \(M\) is finitely generated as an \(R\)-module. Given two \(G\)-representations \(M\) and \(N\), we define a \(G\)-equivariant morphism from \(M\) to \(N\) as an \(R\)-module morphism \(f:M\to N\) such that
\[\sigma_{G,N}\circ f=(id_{R[G]}\otimes_{R}f)\circ\sigma_{G,M},\quad M\to R[G] \otimes_{R}N.\]
The category of \(G\)-representations with \(G\)-equivariant morphisms forms an abelian category, which we denote as \(\operatorname{Rep}(G)\). Moreover, if \(R\) is Noetherian, the category of finitely generated \(G\)-representations form an abelian subcategory of \(\operatorname{Rep}(G)\), which we denote as \(\operatorname{Rep}(G)^{fg}\).
The category of \(G\)-representations has the following properties by definition:
1. The tensor product of \(R\)-modules induces a canonical monoidal structure for \(\operatorname{Rep}(G)\).
2. Given an \(R\)-affine group morphism \(\psi:G\to H\), let \(M\) be an \(H\)-representation. Then \[\sigma_{G,M}:=(\psi\otimes_{R}id_{M})\circ\sigma_{H,M}:M\to R[G]\otimes_{R}M\] induces a \(G\)-representation structure on \(M\). Hence it induces an exact and monoidal pull-back functor: \[\psi^{*}:\operatorname{Rep}(H)\to\operatorname{Rep}(G).\]
**Example 2.4**.: There exists a canonical monoidal equivalence between \(\operatorname{Rep}(\mathbb{G}_{m,R})\) and the category of \(\mathbb{Z}\)-graded \(R\)-modules \(R-\operatorname{Mod}_{\mathbb{Z}}\) in the following way: given a \(\mathbb{Z}\)-graded \(R\)-module \(M=\oplus_{d\in\mathbb{Z}}M_{d}\), we denote \(M|_{d}:M\to M_{d}\) as the projection morphism. Then
\[\sigma_{\mathbb{G}_{m,R},M}:=\bigoplus_{d\in\mathbb{Z}}t^{d}\otimes_{R}M|_{d}: M\to M[t,t^{-1}]\]
is a \(\mathbb{G}_{m,R}\)-representation structure on \(M\). Conversely, given \(\mathbb{G}_{m,R}\) representation \(M\), the \(R\)-module morphism
\[\sigma_{G,M}:M\to M[t,t^{-1}]\]
could be written as
\[\sum_{d\in\mathbb{Z}}t^{d}\otimes\sigma_{d}\]
where \(\sigma_{d}\) is an \(R\)-module endomorphism of \(M\) for each \(d\). The law of group actions requires that
\[\sigma_{d}\sigma_{e}=\delta_{de}\sigma_{e},\quad\sum_{d\in\mathbb{Z}}\sigma_{d }=1,\]
where \(\delta_{de}\) is the Kronecker symbol. Hence we have \(M\cong\oplus_{d\in\mathbb{Z}}\sigma_{d}M\).
Under the above equivalences, we compute the pull-back and push-forward functor of \(t_{R}^{l}:\mathbb{G}_{m,R}\to\mathbb{G}_{m,R}\), which is represented by
\[t_{R}^{l*}:R-\operatorname{Mod}_{\mathbb{Z}}\to R-\operatorname{Mod}_{\mathbb{Z}},\quad\bigoplus_{d\in\mathbb{Z}}M_{d}\to\bigoplus_{d\in l\mathbb{Z}}M_{d/l},\qquad t_{R*}^{l}:R-\operatorname{Mod}_{\mathbb{Z}}\to R-\operatorname{Mod}_{\mathbb{Z}},\quad\bigoplus_{d\in\mathbb{Z}}M_{d}\to\bigoplus_{d\in\mathbb{Z}}M_{ld}.\]
Both \(t_{R}^{l*}\) and \(t_{R*}^{l}\) are exact functors and map finitely generated \(R\)-modules to finitely generated \(R\)-modules.
Given a rational number \(i\), we denote
\[\mathcal{L}_{R}^{i}:=\begin{cases}\bigoplus_{d=i}R,&i\in\mathbb{Z},\\ 0,&i\not\in\mathbb{Z}.\end{cases}\]
Then we have
\[t_{R}^{l*}(\mathcal{L}_{R}^{i})\cong\mathcal{L}_{R}^{li},\quad t_{R*}^{l}( \mathcal{L}_{R}^{i})\cong\mathcal{L}_{R}^{i/l}.\]
Like the case of \(\mathbb{G}_{m,R}\), we have a canonical equivalence between the category of \(\mu_{l,R}\)-representations and the category of \(\mathbb{Z}/l\mathbb{Z}\)-graded \(R\)-modules:
\[\operatorname{Rep}(\mu_{l,R})\cong R-\operatorname{Mod}_{\mathbb{Z}/l\mathbb{Z}}.\]
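As a quick illustration of the two formulas above (a routine special case, spelled out here for \(l=2\)):

```latex
% Worked example for l = 2: for the line bundle L_R^1 (R placed in degree 1),
\[
t_{R}^{2*}\bigl(\mathcal{L}_{R}^{1}\bigr)\cong\mathcal{L}_{R}^{2},\qquad
t^{2}_{R*}\bigl(\mathcal{L}_{R}^{1}\bigr)\cong\mathcal{L}_{R}^{1/2}=0,\qquad
t^{2}_{R*}\bigl(\mathcal{L}_{R}^{2}\bigr)\cong\mathcal{L}_{R}^{1},
\]
% so pushing forward along t^2 keeps only the even-degree part of a graded module,
% relabelled by d -> d/2, while pulling back doubles all degrees.
```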
### Equivariant affine schemes and morphisms
**Definition 2.5**.: Given an \(R\)-affine group \(G\), we define a \(G\)-equivariant affine scheme as a \(G\)-representation \(R[X]\) with a \(G\)-equivariant morphism
\[*_{X}:R[X]\otimes_{R}R[X]\to R[X].\]
such that \(R[X]\) is a commutative \(R\)-algebra under the multiplication \(*_{X}\). We denote
\[\sigma_{G,X}:=\sigma_{G,R[X]},\quad\pi_{G,X}:=\pi_{G,R[X]}.\]
We define an \(R\)-affine equivariant scheme as a pair \((G,X)\) where \(G\) is an \(R\)-affine group and \(X\) is a \(G\)-equivariant affine scheme. Given two affine equivariant
schemes \((G,X)\) and \((H,Y)\), an equivariant morphism from \(X\) to \(Y\) is defined as a pair \((\psi,f)\), where \(\psi:G\to H\) is an \(R\)-affine group morphism and \(f:R[Y]\to R[X]\) is an \(R\)-algebra morphism such that
\[(\psi\otimes_{R}f)\circ\sigma_{H,Y}=\sigma_{G,X}\circ f,\quad R[Y]\to R[G]\otimes_{R}R[X].\]
The category of equivariant affine schemes and morphisms has the following properties by definitions:
1. Every \(R\)-affine group morphism \(f:G\to H\) induces a \(G\)-equivariant structure on \(H\). Particularly, there is a canonical \(G\)-equivariant structure on \(\operatorname{Spec}(R)\).
2. Let \(\psi:G\to H\) be an \(R\)-affine group morphism and \(Y\) be an \(H\)-equivariant scheme, then \[\sigma_{G,Y}:=(\psi\otimes_{R}id_{R[Y]})\circ\sigma_{H,Y}:R[Y]\to R[G]\otimes_{R}R[Y]\] induces a \(G\)-equivariant structure on \(Y\). Moreover, any equivariant morphism of equivariant schemes: \[(G,X)\xrightarrow{(\psi,f)}(H,Y)\] factors through \[(G,X)\xrightarrow{(id,f)}(G,Y)\xrightarrow{(\psi,id)}(H,Y).\]
3. Given two equivariant morphisms \[(\psi_{1},f_{1}):G_{1}\times_{\operatorname{Spec}(R)}X_{1}\to G \times_{\operatorname{Spec}(R)}X,\] \[(\psi_{2},f_{2}):G_{2}\times_{\operatorname{Spec}(R)}X_{2}\to G \times_{\operatorname{Spec}(R)}X,\] it induces a \(G_{1}\times_{G}G_{2}\)-equivariant structure on \(X_{1}\times_{X}X_{2}\) by the actions \(\sigma_{G_{1},X_{1}}\) and \(\sigma_{G_{2},X_{2}}\).
**Example 2.6** (GIT quotients).: By Example 2.4, there is a canonical equivalence between the opposite category of \(\mathbb{Z}\)-graded (resp. \(\mathbb{Z}/l\mathbb{Z}\)-graded) commutative \(R\)-algebras and the category of \(\mathbb{G}_{m,R}\) (resp. \(\mu_{l,R}\)) equivariant affine schemes. Moreover, given a \(\mathbb{Z}\)-graded (resp. \(\mathbb{Z}/l\mathbb{Z}\)-graded) algebra
\[R[X]=\bigoplus_{d\in\mathbb{Z}}R[X]_{d}\text{ (resp. }\bigoplus_{d\in\mathbb{Z}/l\mathbb{Z}}R[X]_{d}\text{)},\]
\(R[X]_{0}\) is also an \(R\)-algebra and each \(R[X]_{d}\) is an \(R[X]_{0}\)-module. The inclusion of \(R[X]_{0}\) into \(R[X]\) induces a canonical equivariant morphism
\[\phi_{0,X}:(\mathbb{G}_{m},X)\to(\operatorname{Spec}(R),\operatorname{Spec}(R[X]_{0}))\text{ or }\phi_{l,X}:(\mu_{l},X)\to(\operatorname{Spec}(R),\operatorname{Spec}(R[X]_{0})).\]
### Equivariant modules
**Definition 2.7**.: Given an \(R\)-affine equivariant scheme \((G,X)\), we define a \(G\)-equivariant \(R[X]\)-module as a \(G\)-representation \(M\) with a \(G\)-equivariant morphism
\[*_{M}:R[X]\otimes_{R}M\to M\]
such that \(*_{M}\) induces an \(R[X]\)-module structure on \(M\). Given two \(G\)-equivariant \(R[X]\)-modules \(M,N\), we define a \(G\)-equivariant \(R[X]\)-module morphism from \(M\) to \(N\) as a \(G\)-equivariant morphism \(f:M\to N\) such that \(f\circ*_{M}=*_{N}\circ(id_{R[X]}\otimes_{R}f)\). We say that \(M\) is finitely generated if it is finitely generated as an \(R[X]\)-module.
We denote \(\operatorname{QCoh}_{G}(X)\) as the abelian category of \(G\)-equivariant \(R[X]\)-modules. If \(R[X]\) is Noetherian, we denote \(\operatorname{Coh}_{G}(X)\) as the abelian category of finite generated \(G\)-equivariant \(R[X]\)-modules.
The category of equivariant modules has the following property by definition
1. We have \(\operatorname{QCoh}_{G}(\operatorname{Spec}(R))=\operatorname{Rep}(G)\) and \(\operatorname{Coh}_{G}(\operatorname{Spec}(R))=\operatorname{Rep}(G)^{fg}\).
2. The category of \(G\)-equivariant \(R[X]\)-modules has a canonical monoidal structure: let \(M\) and \(N\) be two \(G\)-equivariant \(R[X]\)-modules, the \(R[X]\)-module \(M\otimes_{R[X]}N\), regarded as an \(R\)-module, is the cokernel of \[R[X]\otimes_{R}M\otimes_{R}N\to M\otimes_{R}N,\quad x\otimes m\otimes n\to*_{M}(x\otimes m)\otimes n-m\otimes*_{N}(x\otimes n),\] which has a canonical \(G\)-equivariant structure.
3. Given an equivariant morphism \((\psi,f):(G,X)\to(H,Y)\), there is a canonical pull-back functor \[(\psi,f)^{*}:\operatorname{QCoh}_{H}(Y)\to\operatorname{QCoh}_{G}(X)\] defined in the following way: * when \(\psi=id\), we define \((id,f)^{*}M:=M\otimes_{R[Y]}R[X]\); * when \(f=id\), we define \((\psi,id)^{*}M:=\psi^{*}M\) as a \(G\)-representation, and the \(R[X]\)-module structure is still induced by \(*_{M}\). * in general, we define \((\psi,f)^{*}:=(id,f)^{*}(\psi,id)^{*}\). The pull-back functor \((\psi,f)^{*}\) maps finitely generated \(H\)-equivariant \(R[Y]\)-modules to finitely generated \(G\)-equivariant \(R[X]\)-modules. We denote \[(\psi,f)_{*}:\operatorname{QCoh}_{G}(X)\to\operatorname{QCoh}_{H}(Y)\] as the right adjoint functor of \((\psi,f)^{*}\) if it exists.
**Example 2.8**.: Let \(R[X]\) be a \(\mathbb{Z}\)-graded \(R\)-algebra, with the \(\mathbb{G}_{m,R}\) action induced by the grading. By Example 2.4 and Example 2.6, there is a canonical equivalence between \(\operatorname{QCoh}_{\mathbb{G}_{m,R}}(X)\) (resp. \(\operatorname{Coh}_{\mathbb{G}_{m,R}}(X)\) if \(R[X]\) is Noetherian) and the category of \(\mathbb{Z}\)-graded (resp. finitely generated \(\mathbb{Z}\)-graded) \(R[X]\)-modules. Similar arguments also work for \(\mu_{l,R}\)-equivariant schemes.
Particularly, given a \(\mathbb{Z}/l\mathbb{Z}\)-graded \(R\)-algebra \(R[X]\), under the GIT quotient map
\[\phi_{l,X}:(\mu_{l},X)\to(\operatorname{Spec}(R),\operatorname{Spec}(R[X]_{0}))\]
the pull-back and push-forward maps are represented by
\[\phi_{l,X}^{*}:\operatorname{QCoh}(\operatorname{Spec}(R[X]_{0}))\to\operatorname{QCoh}_{\mu_{l,R}}(X):\quad M\to\bigoplus_{d\in\mathbb{Z}/l\mathbb{Z}}M\otimes_{R[X]_{0}}R[X]_{d},\] \[\phi_{l,X*}:\operatorname{QCoh}_{\mu_{l,R}}(X)\to\operatorname{QCoh}(\operatorname{Spec}(R[X]_{0})):\quad\bigoplus_{d\in\mathbb{Z}/l\mathbb{Z}}M_{d}\to M_{0}.\]
Particularly, \(\phi_{l,X*}\) is always exact, and maps finitely generated graded \(R[X]\)-modules to finitely generated \(R[X]_{0}\)-modules if \(R[X]\) is finitely generated as an \(R[X]_{0}\)-module and \(R[X]_{0}\) is Noetherian.
### The \(\mathbb{G}_{m,R}\)-equivariant scheme \(\mathbb{A}^{1}_{R}\)
The scheme \(\mathbb{A}^{1}_{R}:=\operatorname{Spec}(R[x])\) has a universal \(\mathbb{G}_{m,R}\)-equivariant structure by assigning to \(x\) the degree \(1\). Given a positive integer \(l\), we denote by \(x^{l}:\mathbb{A}^{1}\to\mathbb{A}^{1}\) the morphism
\[\mathbb{Z}[x]\to\mathbb{Z}[x],\quad x\to x^{l}.\]
Then
\[(t^{l},x^{l}):(\mathbb{G}_{m},\mathbb{A}^{1})\to(\mathbb{G}_{m},\mathbb{A}^{1})\]
is an equivariant morphism. Moreover, we have
\[(\mathbb{G}_{m},\mathbb{A}^{1})\times_{(t^{l},x^{l}),(\mathbb{G}_{m},\mathbb{A}^{1}),(t^{l},x^{l})}(\mathbb{G}_{m},\mathbb{A}^{1})\cong(\mathbb{G}_{m}\times\mu_{l},\alpha_{l})\]
where \(\alpha_{l}:=\operatorname{Spec}(\mathbb{Z}[x,y]/(x^{l}-y^{l}))\) and the group action is induced by
\[(t,\mu)(x,y)\to(tx,t\mu y)\]
There are two equivariant morphisms:
\[\alpha_{l,1},\alpha_{l,2}:(\mathbb{G}_{m}\times\mu_{l},\alpha_{l})\to(\mathbb{ G}_{m},\mathbb{A}^{1})\]
such that
\[\alpha_{l,1}((t,\mu),(x,y))=(t,x),\quad\alpha_{l,2}((t,\mu),(x,y))=(t\mu,y).\]
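For instance (an elementary verification not spelled out in the text), \(\alpha_{l,2}\) is equivariant with respect to the group morphism \((t,\mu)\mapsto t\mu\):

```latex
% Equivariance check on points, using the action (t,mu)(x,y) = (tx, t*mu*y) defined above:
\[
\alpha_{l,2}\bigl((t,\mu)\cdot(x,y)\bigr)=\alpha_{l,2}(tx,\,t\mu y)=t\mu y
 =(t\mu)\cdot\alpha_{l,2}(x,y),
\]
% while alpha_{l,1}((t,mu)(x,y)) = tx = t . alpha_{l,1}(x,y) for the group morphism (t,mu) -> t.
```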
The affine group \(\mathbb{G}_{m}\times\mu_{l}\) also acts on \(\mathbb{A}^{1}\times\mu_{l}\) by \((t,\mu_{1})\circ(x,\mu_{2})\to(tx,\mu_{1}\mu_{2})\). We have an \(\mathbb{G}_{m}\times\mu_{l}\)-equivariant morphism of affine schemes from \(\mathbb{A}^{1}\times\mu_{l}\) to \(\alpha_{l}\) by
\[\Delta_{t^{l},x^{l}}:\mathbb{Z}[x,y]/(x^{l}-y^{l})\to\mathbb{Z}[x,t]/(t^{l}-1),\quad x\to x,\ y\to tx.\]
Here \(x,y,t\) have homogeneous degrees \((1,0)\), \((1,1)\) and \((0,1)\) with respect to the \(\mathbb{G}_{m}\times\mu_{l}\) action.
The morphism \(\Delta_{t^{l},x^{l}}\) induces morphisms of equivariant \(\mathbb{Z}[x,y]/(x^{l}-y^{l})\) modules
\[\Delta_{t^{l},x^{l}}^{n}:(x,y)^{n}\mathbb{Z}[x,y]/(x^{l}-y^{l})\to(x^{n}) \mathbb{Z}[x,t]/(t^{l}-1),\quad n\in\mathbb{Z}_{\geq 0}\]
which is always injective and is an isomorphism when \(n=l-1\). For integers \(0\leq m\leq n\leq l-1\), we define
\[\tau_{m,n}:=\frac{(x^{n})\mathbb{Z}[x,t]/(t^{l}-1)\oplus(x,y)^{m}\mathbb{Z}[x,y]/(x^{l}-y^{l})}{(x,y)^{n}\mathbb{Z}[x,y]/(x^{l}-y^{l})}\in\operatorname{Coh }_{\mathbb{G}_{m}\times\mu_{l}}(\alpha_{l})\]
The modules \(\tau_{m,n}\) have the following properties:
1. we have \[\tau_{n,n}\cong\Delta_{t^{l},x^{l*}}(x^{n}\mathbb{Z}[x,t]/(t^{l}-1)),\quad \tau_{n,l-1}\cong(x,y)^{n}\mathbb{Z}[x,y]/(x^{l}-y^{l})\]
2. When \(n<m\leq l-1\), there exists a canonical injection morphism \[\tau_{n,m-1}\to\tau_{n,m},\quad\tau_{n,m}\to\tau_{n+1,m}\] such that cokernels are \[\bigoplus_{i=0}^{m}\mathbb{Z}<m,i>,\text{ and }\bigoplus_{i=n+1}^{l-1}\mathbb{Z}<n+1,i>,\] respectively, where \(<a,b>\) is the homogeneous degree with respect to the \(\mathbb{G}_{m}\times\mu_{l}\) action.
## 3. Derived category of (quasi)-coherent sheaves on algebraic stacks
In this section, we review the background about algebraic stacks and (quasi)-coherent sheaves on algebraic stacks, following [17, Tag 0ELS]. Experts or readers who are only interested in the scheme cases should feel free to read from Section 3.4.
### Quotient stacks
Given an \(R\)-equivariant scheme \((G,X)\), the groupoid in schemes \((X,G\times_{\operatorname{Spec}(R)}X,\sigma_{G,X},\pi_{G}\times_{\operatorname{ Spec}(R)}id_{X},m_{G}\times_{\operatorname{Spec}(R)}id_{X})\) induces an algebraic stack which we denote as \([X/G]\), following [17, Tag 044O]. Let
\[\sigma_{X}^{G}:X\to[X/G]\]
be the canonical quotient map, we have the Cartesian diagram:
where all the morphisms are fppf coverings.
Given an \(R\)-equivariant morphism \((\psi,f):(G,X)\to(H,Y)\), the morphism of groupoid in \(R\)-schemes \((f,f\times_{\operatorname{Spec}(R)}f)\) induces a morphism of quotient stacks:
\[[f/\psi]:[X/G]\to[Y/H],\]
Particularly, given a morphism of \(R\)-affine groups \(\psi:G\to H\), we denote
\[BG:=[\operatorname{Spec}(R)/G],\quad BH:=[\operatorname{Spec}(R)/H],\quad B \psi:=[id/\psi]:BG\to BH.\]
Given two morphisms \((\psi_{1},f_{1}):(G_{1},X_{1})\to(H,Y)\) and \((\psi_{2},f_{2}):(G_{2},X_{2})\to(H,Y)\), the fiber product of equivariant morphisms induces the fiber product of algebraic stacks if \(G_{1}\times_{H}G_{2}\) is also fppf over \(R\).
**Example 3.1**.: We follow Halpern-Leistner [9] to denote \(\Theta:=[\mathbb{A}^{1}/\mathbb{G}_{m}]\) and denote
\[\theta^{l}:=[x^{l}/t^{l}]:\Theta\to\Theta.\]
Then the diagonal of \(\theta^{l}\), which we denote as \(\Delta_{\theta^{l}}\) is represented by
\[[\Delta_{t^{l},x^{l}}/id]:[\mathbb{A}^{1}\times\mu_{l}/\mathbb{G}_{m}\times\mu _{l}]\to[\alpha_{l}/\mathbb{G}_{m}\times\mu_{l}]\]
in Section 2.5.
### Morphisms of algebraic stacks
We refer to the Stacks Project [17, Tag 04XM], for the property of being flat, locally of finite type, quasi-compact, quasi-separated and having an affine diagonal for morphisms of algebraic stacks. Moreover, we mention the following facts
1. Given a morphism \(f:\mathcal{X}\to\mathcal{Y}\) such that both \(\mathcal{X}\) and \(\mathcal{Y}\) are represented by schemes, the definition of the above properties coincide with the definition for morphisms of schemes.
2. The above properties are stable under base change, i.e. given a Cartesian diagram of algebraic stacks (*), \(f^{\prime}\) is flat (resp. locally of finite type, quasi-compact, quasi-separated, with an affine diagonal) if \(f\) is flat (resp. locally of finite type, quasi-compact, quasi-separated, with an affine diagonal).
3. We consider a morphism of affine quotient stacks \[[f/\psi]:[X/G]\to[Y/H].\] The morphism \([f/\psi]\) is always quasi-compact. It is flat (resp. of locally finite type) if \(f\) is flat (resp. locally of finite type). It is quasi-separated and with an affine diagonal if \(\psi\) is faithfully flat.
4. Given a quasi-compact morphism of algebraic stacks \(f:\mathcal{X}\to\mathcal{Y}\) such that \(\mathcal{Y}\) is represented by an affine scheme, then \(\mathcal{X}\) has a smooth cover by an affine scheme.
**Example 3.2**.: The morphisms
\[\theta^{l}:\Theta\to\Theta,\quad Bt^{l}:B\mathbb{G}_{m}\to B\mathbb{G}_{m}\]
are both locally of finite type, flat, quasi-compact, quasi-separated, and have affine diagonals.
### Quasi-coherent sheaves on algebraic stacks
We refer to [17, Tag 06WU] for the abelian category of quasi-coherent sheaves \(\operatorname{QCoh}(\mathcal{Y})\) for an algebraic stack \(\mathcal{Y}\). It has the following properties:
1. for an affine quotient stack \([X/G]\), we have a canonical equivalence: \[\operatorname{QCoh}([X/G])\cong\operatorname{QCoh}_{G}(X).\]
2. for a morphism of algebraic stacks \(f:\mathcal{X}\to\mathcal{Y}\) the pull-back \(f^{*}\) preserves quasi-coherent sheaves and induces a functor \[f^{*}:\operatorname{QCoh}(\mathcal{Y})\to\operatorname{QCoh}(\mathcal{X}).\] Moreover, for a morphism \([f/\psi]:[X/G]\to[Y/H]\) of affine quotient stacks, the pull-back functor \([f/\psi]^{*}\) is represented by \((f,\psi)^{*}\).
3. when \(f\) is quasi-compact and quasi-separated, it induces a functor \[f_{*}:\operatorname{QCoh}(\mathcal{X})\to\operatorname{QCoh}(\mathcal{Y})\] which is a right adjoint to \(f^{*}\), following Proposition 103.11.1 of [17, Tag 070A]. A similar construction will produce higher direct image functors \(R^{i}f_{*}:\operatorname{QCoh}(\mathcal{X})\to\operatorname{QCoh}(\mathcal{Y})\) for all integers \(i\geq 0\).
4. if \(\mathcal{X}\) is locally Noetherian, i.e. there is a smooth atlas of \(\mathcal{X}\) which is a locally Noetherian scheme, we can define the abelian category of coherent sheaves \(\operatorname{Coh}(\mathcal{X})\), which is a full subcategory of \(\operatorname{QCoh}(\mathcal{X})\). If \(f\) is a morphism of locally Noetherian algebraic stacks, then \(f^{*}\) induces a pull-back functor on the categories of coherent sheaves, which we also denote as \(f^{*}\). Moreover, we have a canonical equivalence: \[\operatorname{Coh}([X/G])\cong\operatorname{Coh}_{G}(X),\] if \(R[X]\) is Noetherian.
5. The abelian category of quasi-coherent sheaves satisfies the fppf descent in the sense of flat-fppf sites (see [17, Tag 08MZ] for the introduction).
The quasi-coherent sheaves over algebraic stacks satisfy the following flat base change theorem:
**Theorem 3.3** (Flat base change theorem, Lemma 103.4.1 of [17, Tag 076W], Lemma 103.7.2 and Lemma 103.7.3 of [17, Tag 0760]).: _Given a flat morphism \(f:\mathcal{X}\to\mathcal{Y}\) of algebraic stacks, the pullback functor_
\[f^{*}:\operatorname{QCoh}(\mathcal{Y})\to\operatorname{QCoh}(\mathcal{X})\]
_is an exact functor. Moreover, a Cartesian diagram of algebraic stacks:_
_such that \(g\) is quasi-compact and quasi-separated and \(f\) is flat induces a canonical equivalence of natural transformations:_
\[f^{*}R^{i}g_{*}\cong R^{i}g^{\prime}_{*}f^{\prime*}:\operatorname{QCoh}( \mathcal{X}_{2})\to\operatorname{QCoh}(\mathcal{Y}_{1})\]
_for all non-negative integers \(i\)._
### Universally good morphisms
**Definition 3.4** (Universally good morphisms).: Let \(f:\mathcal{X}\to\mathcal{Y}\) be a morphism of algebraic stacks. We say that \(f\) is a universally good morphism if \(f\) is locally of finite type, quasi-compact, quasi-separated, with an affine diagonal, and for any Cartesian diagram of algebraic stacks
(3.1)
1. the functor \[f^{\prime}_{*}:QCoh(\mathcal{X}^{\prime})\to QCoh(\mathcal{Y}^{\prime})\] is exact;
2. if \(\mathcal{Y}^{\prime}\) is locally Noetherian, then \(f^{\prime}_{*}\) maps coherent sheaves to coherent sheaves;
3. the canonical morphism \(\mathcal{O}_{\mathcal{Y}^{\prime}}\to f^{\prime}_{*}\mathcal{O}_{\mathcal{X}^{\prime}}\) is an isomorphism.
_Remark 3.5_.: Obviously, the terminology "universally good" is from Alper's "good moduli space" in [2].
By the fppf descent of quasi-coherent sheaves (resp. coherent sheaves) over algebraic stacks, we have
**Lemma 3.6**.: _Universally good morphisms are stable under base change and compositions. Moreover, given a Cartesian diagram of algebraic stacks in (3.1) such that \(g\) is a fppf atlas of \(\mathcal{Y}\), \(f\) is universally good if and only if \(f^{\prime}\) is universally good._
_Particularly, assuming a morphism of algebraic stacks \(f:\mathcal{X}\to\mathcal{Y}\) is locally of finite type, quasi-compact, quasi-separated and with an affine diagonal, \(f\) is universally good if and only if for any Cartesian diagram of algebraic stacks in (3.1) such that \(\mathcal{Y}^{\prime}\cong\operatorname{Spec}(R)\), the functor_
\[f^{\prime}_{*}:\operatorname{QCoh}(\mathcal{X}^{\prime})\to R-\operatorname{ Mod},\quad F\to\Gamma(\mathcal{X}^{\prime},F)\]
_is exact, maps coherent sheaves to finitely generated \(R\)-modules if \(R\) is Noetherian, and the canonical morphism \(R\to\Gamma(\mathcal{X}^{\prime},\mathcal{O}_{\mathcal{X}^{\prime}})\) is an isomorphism._
**Example 3.7**.: Given an \(R\)-affine group \(G\), we say that \(BG\) is linearly reductive if the morphism \(BG\to\operatorname{Spec}(R)\) is universally good. Particularly, \(\mathbb{G}_{m,R}\) and \(\mu_{l,R}\) are both linearly reductive groups by Example 2.4.
**Example 3.8**.: Let \(R[X]\) be a \(\mathbb{Z}/l\mathbb{Z}\)-graded \(R\)-algebra which is finitely generated as an \(R[X]_{0}\)-module. Then the quotient stack morphism
\[\psi_{l,X}:[X/\mu_{l,R}]\to\operatorname{Spec}(R[X]_{0})\]
is universally good by Example 2.8.
Particularly, we consider the Cartesian diagram of algebraic stacks:
The morphism
\[[x^{l}/t^{l}]:[\mathbb{A}^{1}/\mu_{l}]\to\mathbb{A}^{1}\]
is universally good by the above argument, and thus the morphism
\[\theta^{l}:\Theta\to\Theta\]
is also universally good.
**Lemma 3.9**.: _Given a universally good morphism \(f:\mathcal{X}\to\mathcal{Y}\) of algebraic stacks, the higher direct image of any \(\mathcal{F}\in QCoh(\mathcal{X})\) vanishes:_
\[R^{i}f_{*}(\mathcal{F})=0,\quad i>0.\]
Proof.: As the higher direct image is fppf local, we can assume that \(\mathcal{Y}\) is an affine scheme. As \(f\) is quasi-compact, there is a smooth cover \(u:U\to\mathcal{X}\) where \(U\) is an affine scheme. Then for any integer \(n>0\), the \(n\)-fold fiber product \(U_{n}:=\underbrace{U\times_{\mathcal{X}}\cdots\times_{\mathcal{X}}U}_{n\text{ times}}\) is also an affine scheme, as the diagonal \(\Delta_{\mathcal{X}}:\mathcal{X}\to\mathcal{X}\times\mathcal{X}\) is affine. The projection morphisms \(u_{n}:U_{n}\to\mathcal{X}\) are affine morphisms as well, since all the \(U_{n}\) are affine. Thus we have the exact Čech complex in \(\operatorname{QCoh}(\mathcal{X})\)
\[0\to\mathcal{F}\to u_{1*}u_{1}^{*}\mathcal{F}\to u_{2*}u_{2}^{*}\mathcal{F}\to\cdots.\]
By the Grothendieck spectral sequence and the fact that all \(U_{n}\) and \(u_{n}\) are affine, \(R^{i}f_{*}\mathcal{F}\) is the cohomology of the following complex
\[f_{*}u_{1*}u_{1}^{*}\mathcal{F}\to f_{*}u_{2*}u_{2}^{*}\mathcal{F}\to\cdots\]
and the higher cohomology vanishes because \(f_{*}\) is an exact functor.
### Derived category of (quasi)-coherent sheaves on algebraic stacks
We refer to [17, Tag 07B5] for the derived category of (quasi)-coherent sheaves on algebraic stacks. Given an algebraic stack \(\mathcal{X}\), we denote \(D_{qcoh}(\mathcal{O}_{\mathcal{X}})\) (resp. \(D_{qcoh}^{+}(\mathcal{O}_{\mathcal{X}})\), \(D_{qcoh}^{-}(\mathcal{O}_{\mathcal{X}})\), \(D_{qcoh}^{b}(\mathcal{O}_{\mathcal{X}})\)) as the derived category of quasi-coherent sheaves (resp. with cohomology bounded from below, above and both sides) on \(\mathcal{X}\). We denote \(D_{coh}(\mathcal{O}_{\mathcal{X}})\) (resp. \(D_{coh}^{+}(\mathcal{O}_{\mathcal{X}})\), \(D_{coh}^{-}(\mathcal{O}_{\mathcal{X}})\), \(D_{coh}^{b}(\mathcal{O}_{\mathcal{X}})\)) as the full subcategory of \(D_{qcoh}(\mathcal{O}_{\mathcal{X}})\) (resp. \(D_{qcoh}^{+}(\mathcal{O}_{\mathcal{X}})\), \(D_{qcoh}^{-}(\mathcal{O}_{\mathcal{X}})\), \(D_{qcoh}^{b}(\mathcal{O}_{\mathcal{X}})\)) with coherent cohomologies.
Given a morphism \(f:\mathcal{X}\to\mathcal{Y}\), the pull-back functor of quasi-coherent sheaves induces the derived functor
\[Lf^{*}:D_{qcoh}(\mathcal{O}_{\mathcal{Y}})\to D_{qcoh}(\mathcal{O}_{\mathcal{X}}),\]
which maps complexes with cohomologies bounded from above (resp. coherent cohomologies) to complexes with cohomologies bounded from above (resp. coherent cohomologies). Moreover, it maps complexes with cohomologies bounded from below (resp. bounded cohomologies) to complexes of the same kind if \(f\) is flat.
Moreover, when \(f:\mathcal{X}\to\mathcal{Y}\) is quasi-compact and quasi-separated, then the push-forward functor induces the derived functor
\[Rf_{*}:D^{+}_{qcoh}(\mathcal{O}_{\mathcal{X}})\to D^{+}_{qcoh}(\mathcal{O}_{ \mathcal{Y}}).\]
If \(f\) is universally good, then \(Rf_{*}\) maps complexes with bounded cohomologies (resp. coherent cohomologies) to complexes with bounded cohomologies (resp. coherent cohomologies).
Like the flat base change theorem for quasi-coherent sheaves, we also have the following theorem for the derived category of quasi-coherent sheaves:
**Theorem 3.10** (Proposition 2.3.2 of [8]).: _A Cartesian diagram of algebraic stacks:_
_such that \(g\) is quasi-compact and quasi-separated and \(f\) is flat induces a canonical equivalence of natural transformations:_
\[Lf^{*}Rg_{*}\cong Rg^{\prime}_{*}Lf^{\prime*}:D^{+}_{qcoh}(\mathcal{X}_{2}) \to D^{+}_{qcoh}(\mathcal{Y}_{1})\]
## 4. Semi-orthogonal decomposition of root stacks
In this section, we give a constructive proof of the semi-orthogonal decomposition of the derived category of bounded-(from-below) (quasi)-coherent sheaves on the root stack.
### Semi-orthogonal decomposition of \(\Theta_{R}\)
We first give a semi-orthogonal decomposition for the derived category of (quasi)-coherent sheaves of
\[\Theta_{R}:=\Theta\times_{\operatorname{Spec}(\mathbb{Z})}\operatorname{Spec}( R).\]
We recall a lemma about effective Cartier divisor:
**Lemma 4.1**.: _[_17_, Tag 0B4A]_ _Let \(f:X\to Z\) be an effective Cartier divisor such that \(Z\) is an algebraic stack. Then the right adjoint functor of_
\[Rf_{*}:D^{+}_{qcoh}(X)\to D^{+}_{qcoh}(Z)\]
_which we denote as \(Lf^{\dagger}\), is \(Lf^{*}(-\otimes^{L}\mathcal{O}(X))[-1]\). Moreover, the adjunction formula induces a canonical triangle of derived functors_
\[Lf^{*}Rf_{*}\to id\to Lf^{\dagger}Rf_{*}.\]
We consider the \(\mathbb{G}_{m,R}\)-equivariant morphism:
\[i_{R}:B\mathbb{G}_{m,R}\to\Theta_{R},\quad R[X]\to R,\quad x\to 0.\]
and the following commutative diagram
(4.1)
and denote the following derived functors:
\[\triangleright_{m,R}:=RBi_{R*}(-\otimes_{R}\mathcal{L}_{R}^{m})LBt_{R}^{l*},\quad\triangleleft_{m,R}:=RBt_{R*}^{l}(-\otimes_{R}\mathcal{L}_{R}^{m})Li_{R}^{*}.\]
If \(l>1\), by Example 2.4 and Example 2.8, we have
\[R\theta_{R*}^{l}\mathcal{O}_{\Theta_{l,R}}\cong\mathcal{O}_{\Theta_{R}},\quad RBt_{R*}^{l}\mathcal{L}_{R}^{i}\cong\mathcal{L}_{R}^{i/l}. \tag{4.2}\]
**Lemma 4.2**.: _The derived functor \(\triangleleft_{i+1,R}[-1]\) is the right adjoint functor of \(\triangleright_{i,R}\) and \(R\theta_{R*}^{l}\) is the right-adjoint functor of \(L\theta_{R}^{l*}\). Moreover, we have_
\[R\theta_{R*}^{l}L\theta_{R}^{l*}\cong id. \tag{4.3}\]
\[\triangleleft_{-j,R}\circ\triangleright_{i,R}\cong-\otimes(\mathcal{L}_{R}^{(i- j)/l}\oplus\mathcal{L}_{R}^{(i-j+1)/l}[-1]) \tag{4.4}\]
\[R\theta_{R*}^{l}\circ\triangleright_{i,R}\cong Ri_{R*}\circ(-\otimes\mathcal{L}_{R}^{i/l}),\quad\triangleleft_{i,R}\circ L\theta_{R}^{l*}\cong(-\otimes\mathcal{L}_{R}^{i/l})Li_{R}^{*}. \tag{4.5}\]
\[\triangleright_{i+l,R}\cong\triangleright_{i,R}\circ(-\otimes_{R}\mathcal{L}_{R}),\quad\triangleright_{i+l,R}\cong(-\otimes_{R}\mathcal{L}_{R})\circ\triangleright _{i,R}. \tag{4.6}\]
Proof.: The equation (4.2) follows from Example 2.4 and Example 2.8. The equations (4.3), (4.4), (4.5) and (4.6) follow from the projection formula, Lemma 4.1 and (4.2).
**Theorem 4.3**.: _There are functors:_
\[\overline{\tau_{n,m,R}}:D^{+}_{qcoh}(\Theta_{R})\to D^{+}_{qcoh}(\Theta_{R}),\quad 0\leq n\leq m\leq l-1\]
_which map complexes with bounded cohomologies (resp. coherent cohomologies) to complexes with bounded cohomologies (resp. coherent cohomologies) and satisfy the following properties:_
1. _we have_ \[\overline{\tau_{n,n,R}}\cong\otimes\mathcal{L}_{\mathbb{A}_{R}^{1}}^{n},\quad\overline{\tau_{0,l-1,R}}\cong L\theta_{R}^{l*}R\theta_{R*}^{l}\]
2. _for any_ \(0\leq n<m\leq l-1\)_, we have canonical triangles:_ \[\overline{\tau_{n,m-1,R}}\to\overline{\tau_{n,m,R}}\to\bigoplus_{i=0}^{m} \triangleright_{m-i,R}\triangleleft_{i,R}\] \[\overline{\tau_{n,m,R}}\to\overline{\tau_{n+1,m,R}}\to\bigoplus_{i=n+1}^{l-1 }\triangleright_{i-1-n,R}\triangleleft_{i,R}.\] _._
3. _All those functors map complexes with bounded cohomologies (resp. coherent cohomologies) to complexes with bounded cohomologies (resp. coherent cohomologies)._
Proof.: We consider the equivariant morphisms in Section 2.5
\[\alpha_{1}:(\mathbb{G}_{m}\times\mu_{l},\alpha_{l})\to(\mathbb{G}_{m},\mathbb{A}^{1}), ((t,\mu),(x,y))\to(t,x),\] \[\alpha_{2}:(\mathbb{G}_{m}\times\mu_{l},\alpha_{l})\to(\mathbb{G}_{m},\mathbb{A}^{1}), ((t,\mu),(x,y))\to(t\mu,y).\]
which induces morphisms of quotient stacks from \([\alpha_{l}/\mathbb{G}_{m}\times\mu_{l}]\) to \(\Theta\) and we abuse the notation to still denote them as \(\alpha_{1},\alpha_{2}\) respectively. Then we have the Cartesian diagram of affine quotient stacks:
\[\begin{array}{ccc}[\alpha_{l}/\mathbb{G}_{m}\times\mu_{l}]&\xrightarrow{\alpha_{1}}&\Theta\\ \alpha_{2}\downarrow&&\downarrow\theta^{l}\\ \Theta&\xrightarrow{\theta^{l}}&\Theta\end{array}\]
such that the morphisms \(\alpha_{1}\) and \(\alpha_{2}\) are both flat, universally cohomologically affine and universally cohomologically proper. The diagonal \(\Delta_{\theta^{l}}:\Theta\to[\alpha_{l}/\mathbb{G}_{m}\times\mu_{l}]\) is represented by \(\Delta_{t^{l},x^{l}}\) in Section 2.5 and Example 3.1. We denote \(\alpha_{1,R}:=\alpha_{1}\otimes R\) and \(\alpha_{2,R}:=\alpha_{2}\otimes R\).
For
\[x\in D^{b}_{coh}([\alpha_{l,R}/\mathbb{G}_{m,R}\times\mu_{l,R}]),\]
we consider the Fourier-Mukai transform:
\[\bar{x}:=R\alpha_{2,R*}(x\otimes^{L}L\alpha_{1,R}^{*}(-)):D^{+}_{qcoh}(\Theta_{R})\to D^{+}_{qcoh}(\Theta_{R}),\]
which maps complexes with bounded or coherent cohomologies to complexes with bounded or coherent cohomologies.
Now we define \(\tau_{m,n,R}:=\tau_{m,n}\otimes R\), where \(\tau_{m,n}\) is defined in Section 2.5. By the flat base change theorem, we have \(\overline{\tau_{0,l-1,R}}\cong L\theta_{R}^{l*}R\theta_{R*}^{l}\) and
\[R\alpha_{2,R*}(R<m,n>\otimes^{L}L\alpha_{1,R}^{*}(-))\cong\triangleright_{n-m,R}\triangleleft_{m,R}.\]
Thus Theorem 4.3 follows from the properties of the modules \(\tau_{m,n}\) in Section 2.5 and Example 3.1.
**Corollary 4.4**.: _If \(l>1\), the following functors are fully faithful:_
\[L\theta_{l}^{*}:D^{+}_{qcoh}(\Theta_{R})\to D^{+}_{qcoh}(\Theta_{R}),\quad \triangleright_{i,R}:D^{+}_{qcoh}(B\mathbb{G}_{m,R})\to D^{+}_{qcoh}(\Theta_{R}).\]
_Moreover, we denote \(D^{i}_{l,R}:=\triangleright_{i,R}D^{+}_{qcoh}(B\mathbb{G}_{m,R})\). Then_
\[D^{i}_{l,R}\cong D^{i+l}_{l,R}\]
_and for any \(0\leq i\leq l-1\), we have the semi-orthogonal decomposition_
\[<D^{i-l+1}_{l,R},\cdots,D^{-1}_{l,R},L\theta_{l}^{*}D^{+}_{qcoh}(\Theta_{R}),D ^{0}_{l,R},\cdots,D^{i-1}_{l,R}> \tag{4.7}\]
_which is a full subcategory of \(D^{+}_{qcoh}(\Theta_{R})\). Moreover, similar arguments also hold for the derived categories of complexes with bounded or coherent cohomologies._
### The semi-orthogonal decomposition for root stacks
Now we give a proof of the semi-orthogonal decomposition of the derived category of bounded-from-below (quasi)-coherent sheaves on root stacks.
Let \(\mathcal{X}\) be an algebraic stack. Given a line bundle \(\mathcal{L}\) over \(\mathcal{X}\), it induces a canonical morphism \(\mathcal{X}\to B\mathbb{G}_{m}\) such that \(\mathcal{L}\) is the pull-back of \(\mathcal{L}_{\mathbb{Z}}\). Given an integer \(l>1\), we denote
\[\mathcal{X}_{\mathcal{L},l}:=\mathcal{X}\times_{B\mathbb{G}_{m},Bt^{l}}B \mathbb{G}_{m}.\]
and denote \(\mathcal{L}_{l}\in Pic(\mathcal{X}_{\mathcal{L},l})\) as the pull-back of \(\mathcal{L}_{\mathbb{Z}}\) along the second projection \(\mathcal{X}_{\mathcal{L},l}\to B\mathbb{G}_{m}\).
Let
\[r_{\mathbb{Z}}:\mathcal{L}_{\mathbb{A}^{1}_{\mathbb{Z}}}\to\mathcal{O}_{\Theta}\]
be the canonical co-section on \(\Theta\) which maps \(1\) to \(x\). Given a line bundle \(\mathcal{L}\) with a co-section
\[r:\mathcal{L}\to\mathcal{O}_{\mathcal{X}}\]
over \(\mathcal{X}\), it induces a canonical morphism from \(\mathcal{X}\) to \(\Theta\) such that \(r\) is the pull-back of \(r_{\mathbb{Z}}\). We define the \(l\)-th root stack of \(r\) as
\[\mathcal{X}_{r,l}:=\mathcal{X}\times_{\Theta,\theta^{l}}\Theta.\]
Now we assume that \(r\) is generated by an effective divisor \(\mathcal{D}\), i.e.
\[r=(\mathcal{O}_{\mathcal{X}}(-\mathcal{D})\subset\mathcal{O}_{\mathcal{X}}).\]
It induces a Cartesian diagram
such that the morphism from \(\mathcal{D}\) to \(B\mathbb{G}_{m}\) is induced by the line bundle \(\mathcal{L}_{\mathcal{D}}:=\mathcal{O}_{\mathcal{X}}(-\mathcal{D})|_{\mathcal{ D}}\). We denote
\[\mathcal{X}_{\mathcal{D},l}:=\mathcal{X}_{r,l},\quad\mathcal{D}_{l}:=\mathcal{D }_{\mathcal{L}_{\mathcal{D}},l}\]
and consider the pull-back of (4.1) for \(R=\mathbb{Z}\) along \(r\), and have the following Cartesian diagram
We notice that \(Bt^{l}_{\mathcal{D}}\) and \(\theta^{l}_{\mathcal{X}}\) are both universally good morphisms by the base change. For any integer \(m\), we define the following functors
\[\triangleleft_{m,\mathcal{X}}:=RBt^{l}_{\mathcal{D}*}\circ(-\otimes\mathcal{L}^{m}_{\mathcal{D},l})\circ Li^{*}_{\mathcal{X}_{\mathcal{D},l}}:D^{+}_{qcoh}(\mathcal{X}_{\mathcal{D},l})\to D^{+}_{qcoh}(\mathcal{D}),\] \[\triangleright_{m,\mathcal{X}}:=Ri_{\mathcal{X}_{\mathcal{D},l}*}\circ(-\otimes\mathcal{L}^{m}_{\mathcal{D},l})\circ LBt^{l*}_{\mathcal{D}}:D^{+}_{qcoh}(\mathcal{D})\to D^{+}_{qcoh}(\mathcal{X}_{\mathcal{D},l}).\]
By the fppf descent of quasi-coherent sheaves, Lemma 4.1 and Lemma 4.2, we have the following lemma.
**Lemma 4.5**.: _If \(l>1\), we have_
\[R\theta^{l}_{\mathcal{X}*}\mathcal{O}_{\mathcal{X}_{\mathcal{D},l}}\cong\mathcal{O}_{\mathcal{X}},\quad RBt^{l}_{\mathcal{D}*}(\mathcal{L}^{i}_{\mathcal{D},l})\cong\mathcal{L}^{i/l}_{\mathcal{D}} \tag{4.8}\]
_The derived functor \(\triangleleft_{i+1,\mathcal{X}}[-1]\) is the right adjoint functor of \(\triangleright_{i,\mathcal{X}}\) and \(R\theta^{l}_{\mathcal{X}*}\) is the right-adjoint functor of \(L\theta^{l*}_{\mathcal{X}}\). Moreover, we have_
\[R\theta^{l}_{\mathcal{X}*}L\theta^{l*}_{\mathcal{X}}\cong id. \tag{4.9}\]
\[\triangleleft_{-j,\mathcal{X}}\circ\triangleright_{i,\mathcal{X}}\cong-\otimes( \mathcal{L}^{(i-j)/l}_{\mathcal{D}}\oplus\mathcal{L}^{(i-j+1)/l}_{\mathcal{ D}}[-1]) \tag{4.10}\]
\[R\theta^{l}_{\mathcal{X}*}\circ\triangleright_{i,\mathcal{X}}\cong Ri_{ \mathcal{X}*}\circ(-\otimes\mathcal{L}^{i/l}_{\mathcal{D}}),\quad\triangleleft_ {i,\mathcal{X}}\circ L\theta^{l*}_{\mathcal{X}}\cong(-\otimes\mathcal{L}^{i/ l}_{\mathcal{D}})Li^{*}_{\mathcal{X}}. \tag{4.11}\]
\[\triangleright_{i+l,\mathcal{X}}\cong\triangleright_{i,\mathcal{X}}\circ(-\otimes \mathcal{L}_{\mathcal{D}}),\quad\triangleright_{i+l,\mathcal{X}}\cong(-\otimes \mathcal{L}_{\mathcal{D}})\circ\triangleright_{i,\mathcal{X}}. \tag{4.12}\]
**Theorem 4.6**.: _Given \(0\leq n\leq m\leq l-1\), we consider_
\[\tau_{n,m,\mathcal{X}}:=r^{*}\tau_{n,m}\in D^{b}_{coh}(\mathcal{X}_{\mathcal{D},l }\times_{\theta^{l}_{\mathcal{X}},\mathcal{X},\theta^{l}_{\mathcal{X}}}\mathcal{ X}_{\mathcal{D},l}).\]
_and define_
\[\overline{\tau}_{n,m,\mathcal{X}}:D^{+}_{qcoh}(\mathcal{X}_{\mathcal{D},l}) \to D^{+}_{qcoh}(\mathcal{X}_{\mathcal{D},l})\]
_as the Fourier-Mukai transform generated by the Fourier-Mukai kernel \(\tau_{n,m,\mathcal{X}}\). These functors map complexes with bounded cohomologies (resp. coherent cohomologies) to complexes with bounded cohomologies (resp. coherent cohomologies) and satisfy the following properties:_
1. _we have_ \[\overline{\tau_{n,n,\mathcal{X}}}\cong\otimes\mathcal{O}_{\mathcal{X}_{ \mathcal{D},l}}(-\mathcal{D}_{l})^{n},\quad\overline{\tau_{0,l-1,\mathcal{X}}} \cong L\theta^{l*}_{\mathcal{X}}R\theta^{l}_{\mathcal{X}*}\]
2. _for any_ \(0\leq n<m\leq l-1\)_, we have canonical triangles:_ \[\overline{\tau_{n,m-1,\mathcal{X}}}\to\overline{\tau_{n,m,\mathcal{X}}}\to \bigoplus_{i=0}^{m}\triangleright_{m-i,\mathcal{X}}\triangleleft_{i,\mathcal{X}}\] \[\overline{\tau_{n,m,\mathcal{X}}}\to\overline{\tau_{n+1,m,\mathcal{X}}}\to \bigoplus_{i=n+1}^{l-1}\triangleright_{i-1-n,\mathcal{X}}\triangleleft_{i, \mathcal{X}}.\]
3. _All those functors map complexes with bounded cohomologies (resp. coherent cohomologies) to complexes with bounded cohomologies (resp. coherent cohomologies)._
By Lemma 4.5 and Theorem 4.6, we have the following theorem
**Theorem 4.7**.: _If \(l>1\), the following functors are fully faithful:_
\[L\theta^{l*}_{\mathcal{X}}:D^{+}_{qcoh}(\mathcal{X})\to D^{+}_{ qcoh}(\mathcal{X}_{\mathcal{D},l}),\] \[\triangleright_{i,\mathcal{X}}:D^{+}_{qcoh}(\mathcal{D})\to D^{+}_{ qcoh}(\mathcal{X}_{\mathcal{D},l}).\]
_Moreover, we denote \(D^{i}_{l,\mathcal{D}}:=\triangleright_{i,\mathcal{X}}D^{+}_{qcoh}(\mathcal{D})\). Then_
\[D^{i}_{l,\mathcal{D}}\cong D^{i+l}_{l,\mathcal{D}}\]
_and for any \(0\leq i\leq l-1\), we have the semi-orthogonal decomposition_
\[D^{+}_{qcoh}(\mathcal{X}_{\mathcal{D},l})=<D^{i-l+1}_{l,\mathcal{D}},\cdots,D^{-1}_{l,\mathcal{D}},L\theta^{l*}_{\mathcal{X}}D^{+}_{qcoh}(\mathcal{X}),D^{0}_{l,\mathcal{D}},\cdots,D^{i-1}_{l,\mathcal{D}}>. \tag{4.13}\]
_Similar arguments also hold for complexes with bounded or coherent cohomologies._
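As an illustration only, and under the same hypotheses, specializing (4.13) to \(l=2\) (the square root stack) and \(i=1\) gives
\[D^{+}_{qcoh}(\mathcal{X}_{\mathcal{D},2})=<L\theta^{l*}_{\mathcal{X}}D^{+}_{qcoh}(\mathcal{X}),\,D^{0}_{2,\mathcal{D}}>,\quad D^{0}_{2,\mathcal{D}}=\triangleright_{0,\mathcal{X}}D^{+}_{qcoh}(\mathcal{D}),\]
while the choice \(i=0\) instead gives \(<D^{-1}_{2,\mathcal{D}},L\theta^{l*}_{\mathcal{X}}D^{+}_{qcoh}(\mathcal{X})>\); in either case the derived category of the square root stack is built from one copy of \(D^{+}_{qcoh}(\mathcal{X})\) and one copy of \(D^{+}_{qcoh}(\mathcal{D})\). This is nothing more than (4.13) written out in the smallest non-trivial case.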
|
2309.11592 | Parallel-mentoring for Offline Model-based Optimization | We study offline model-based optimization to maximize a black-box objective
function with a static dataset of designs and scores. These designs encompass a
variety of domains, including materials, robots and DNA sequences. A common
approach trains a proxy on the static dataset to approximate the black-box
objective function and performs gradient ascent to obtain new designs. However,
this often results in poor designs due to the proxy inaccuracies for
out-of-distribution designs. Recent studies indicate that: (a) gradient ascent
with a mean ensemble of proxies generally outperforms simple gradient ascent,
and (b) a trained proxy provides weak ranking supervision signals for design
selection. Motivated by (a) and (b), we propose \textit{parallel-mentoring} as
an effective and novel method that facilitates mentoring among parallel
proxies, creating a more robust ensemble to mitigate the out-of-distribution
issue. We focus on the three-proxy case and our method consists of two modules.
The first module, \textit{voting-based pairwise supervision}, operates on three
parallel proxies and captures their ranking supervision signals as pairwise
comparison labels. These labels are combined through majority voting to
generate consensus labels, which incorporate ranking supervision signals from
all proxies and enable mutual mentoring. However, label noise arises due to
possible incorrect consensus. To alleviate this, we introduce an
\textit{adaptive soft-labeling} module with soft-labels initialized as
consensus labels. Based on bi-level optimization, this module fine-tunes
proxies in the inner level and learns more accurate labels in the outer level
to adaptively mentor proxies, resulting in a more robust ensemble. Experiments
validate the effectiveness of our method. Our code is available here. | Can Chen, Christopher Beckham, Zixuan Liu, Xue Liu, Christopher Pal | 2023-09-20T19:07:01Z | http://arxiv.org/abs/2309.11592v2 | # Parallel-mentoring for Offline Model-based Optimization
###### Abstract
We study offline model-based optimization to maximize a black-box objective function with a static dataset of designs and scores. These designs encompass a variety of domains, including materials, robots and DNA sequences. A common approach trains a proxy on the static dataset to approximate the black-box objective function and performs gradient ascent to obtain new designs. However, this often results in poor designs due to the proxy inaccuracies for out-of-distribution designs. Recent studies indicate that: (a) gradient ascent with a mean ensemble of proxies generally outperforms simple gradient ascent, and (b) a trained proxy provides weak ranking supervision signals for design selection. Motivated by (a) and (b), we propose _parallel-mentoring_ as an effective and novel method that facilitates mentoring among parallel proxies, creating a more robust ensemble to mitigate the out-of-distribution issue. We focus on the three-proxy case and our method consists of two modules. The first module, _voting-based pairwise supervision_, operates on three parallel proxies and captures their ranking supervision signals as pairwise comparison labels. These labels are combined through majority voting to generate consensus labels, which incorporate ranking supervision signals from all proxies and enable mutual mentoring. However, label noise arises due to possible incorrect consensus. To alleviate this, we introduce an _adaptive soft-labeling_ module with soft-labels initialized as consensus labels. Based on bi-level optimization, this module fine-tunes proxies in the inner level and learns more accurate labels in the outer level to adaptively mentor proxies, resulting in a more robust ensemble. Experiments validate the effectiveness of our method. Our code is available here.
## 1 Introduction
Designing new objects or entities to optimize specific properties is a widespread challenge, encompassing various domains such as materials, robots, and DNA sequences [1]. Traditional approaches often involve interacting with a black-box function to propose new designs, but this can be expensive or even dangerous in some cases [2; 3; 4; 5; 6]. In response, recent work [1] has focused on a more realistic setting known as offline model-based optimization (MBO). In this setting, the objective is to maximize a black-box function using only a static (offline) dataset of designs and scores.
A prevalent approach to addressing the problem is to train a deep neural network (DNN) model parameterized as \(f_{\mathbf{\theta}}(\cdot)\), on the static dataset, with the trained DNN serving as a proxy. The proxy allows for gradient ascent on existing designs, generating improved designs by leveraging the gradient information provided by the DNN model. However, this method encounters an issue with the trained proxy being susceptible to out-of-distribution problems. Specifically, the proxy produces inaccurate predictions when applied to data points that deviate significantly from the training distribution.
Recent studies have observed that (a) employing a mean ensemble of trained proxies for gradient ascent in offline MBO generally leads to superior designs compared to using a single proxy [7]. This improvement stems from the ability of the ensemble to provide more robust predictions compared to a single proxy [8; 9; 10; 11]. Recent work has also found that (b) a trained proxy offers weak (valuable, albeit potentially unreliable) ranking supervision signals for design selection in various offline MBO contexts, such as evolutionary algorithms [12], reinforcement learning [13], and generative modeling [14]. These signals, focusing on the relative order of designs over absolute scores, are more resilient to noise and inaccuracies. By exchanging these signals among proxies in the ensemble, we can potentially enhance its robustness. As shown in Figure 1, we have three parallel proxies \(f^{A}_{\mathbf{\theta}}(\cdot)\), \(f^{B}_{\mathbf{\theta}}(\cdot)\) and \(f^{C}_{\mathbf{\theta}}(\cdot)\). For two designs \(\mathbf{x}_{1}^{n}\) and \(\mathbf{x}_{2}^{n}\) within the neighborhood of the current optimization point, proxies \(f^{A}_{\mathbf{\theta}}(\cdot)\) and \(f^{B}_{\mathbf{\theta}}(\cdot)\) agree that the score of \(\mathbf{x}_{1}^{n}\) is larger than that of \(\mathbf{x}_{2}^{n}\), while proxy \(f^{C}_{\mathbf{\theta}}(\cdot)\) disagrees. Based on the majority voting principle, proxies \(f^{A}_{\mathbf{\theta}}(\cdot)\) and \(f^{B}_{\mathbf{\theta}}(\cdot)\) provide a more reliable ranking, and their voted ranking signal \(f^{V}(\mathbf{x}_{1}^{n})>f^{V}(\mathbf{x}_{2}^{n})\) could mentor the proxy \(f^{C}_{\mathbf{\theta}}(\cdot)\), thus enhancing its performance.
To this end, we propose an effective and novel method called _parallel-mentoring_ that facilitates mentoring among parallel proxies to train a more robust ensemble against the out-of-distribution issue. This paper primarily focuses on the three-proxy case, referred to as _tri-mentoring_, but we also examine the situation with more proxies in Appendix A.1. As depicted in Figure 2, _tri-mentoring_ consists of two modules. **Module 1**, _voting-based pairwise supervision_ (shown in Figure 2(a)), operates on three parallel proxies \(f^{A}_{\mathbf{\theta}}(\cdot)\), \(f^{B}_{\mathbf{\theta}}(\cdot)\), and \(f^{C}_{\mathbf{\theta}}(\cdot)\) and utilizes their mean for the final prediction. To ensure consistency with the ranking information employed in design selection, this module adopts a pairwise approach to represent the ranking signals of each proxy. Specifically, as illustrated in Figure 2(a), this module generates samples (e.g. \(\mathbf{x}_{1}^{n}\), \(\mathbf{x}_{2}^{n}\) and \(\mathbf{x}_{3}^{n}\)) in the neighborhood of the current point \(\mathbf{x}\) and computes pairwise comparison labels \(\hat{\mathbf{y}}^{A}\) for all sample pairs, serving as ranking supervision signals for the proxy \(f^{A}_{\mathbf{\theta}}(\cdot)\). The label \(\hat{\mathbf{y}}^{A}_{ij}\) is defined as \(1\) if \(f^{A}_{\mathbf{\theta}}(\mathbf{x}_{i})>f^{A}_{\mathbf{\theta}}(\mathbf{x}_{j})\) and \(0\) otherwise, and similar signals are derived for proxies \(f^{B}_{\mathbf{\theta}}(\cdot)\) and \(f^{C}_{\mathbf{\theta}}(\cdot)\). These labels \(\hat{\mathbf{y}}^{A}\), \(\hat{\mathbf{y}}^{B}\) and \(\hat{\mathbf{y}}^{C}\) are combined via majority voting to create consensus labels \(\hat{\mathbf{y}}^{V}\) which are more reliable and thus can be used for mentoring proxies. The voted ranking signal \(f^{V}(\mathbf{x}_{1}^{n})>f^{V}(\mathbf{x}_{2}^{n})\) in Figure 1 corresponds to the pairwise consensus label \(\hat{y}_{12}^{V}=1\) in Figure 2(a), and both can mentor the proxy \(f^{C}_{\mathbf{\theta}}(\cdot)\).
**Module 2**, _adaptive soft-labeling_ (shown in Figure 2(b)), mitigates the issue of label noise that may arise, since the voting consensus may not always be correct. To this end, this module initializes the consensus labels \(\hat{\mathbf{y}}^{V}\) from the first module as soft-labels \(\hat{\mathbf{y}}^{S}\). It then aims to learn more accurate soft-labels to better represent the ranking supervision signals by leveraging the knowledge from the static dataset. Specifically, assuming accurate soft-labels, one of the proxies, either \(f^{A}_{\mathbf{\theta}}(\cdot)\), \(f^{B}_{\mathbf{\theta}}(\cdot)\) or \(f^{C}_{\mathbf{\theta}}(\cdot)\), fine-tuned using them, is expected to perform well on the static dataset, as both soft-labels
Figure 1: Motivation illustration.
Figure 2: Illustration of _tri-mentoring_.
(pairwise perspective) and the static dataset (pointwise perspective) describe the same ground-truth and share underlying similarities. This formulation leads to a bi-level optimization framework with an inner _fine-tuning_ level and an outer _soft-labeling_ level as shown in Figure 2(b). The inner level fine-tunes the proxy with soft-labels, which establishes the connection between them. The outer level optimizes soft-labels to be more accurate by minimizing the loss of the static dataset via the inner-level connection. The optimized labels are further fed back to the first module to adaptively mentor the proxy, ultimately yielding a more robust ensemble. Experiments on design-bench validate the effectiveness of our method.
To summarize, our contributions are three-fold:
* We propose _parallel-mentoring_ for offline MBO, effectively utilizing weak ranking supervision signals among proxies, with a particular focus on the three-proxy case as _tri-mentoring_.
* Our method consists of two modules: _voting-based pairwise supervision_ and _adaptive soft-labeling_. The first module generates pairwise consensus labels via majority voting to mentor the proxies.
* To mitigate label noise in consensus labels, the second module proposes a bi-level optimization framework to adaptively fine-tune proxies and soft-labels, resulting in a more robust ensemble.
## 2 Preliminaries: Gradient Ascent on Offline Model-based Optimization
Offline model-based optimization (MBO) aims to find the optimal design \(\mathbf{x}^{*}\) that maximizes the black-box objective function \(f(\cdot)\):
\[\mathbf{x}^{*}=\arg\max_{\mathbf{x}}f(\mathbf{x})\,, \tag{1}\]
To achieve this, a static dataset \(\mathcal{D}=\{(\mathbf{x}_{i},y_{i})\}_{i=1}^{N}\) with \(N\) points is available, where \(\mathbf{x}_{i}\) represents a design and \(y_{i}\) is its corresponding score.
A common approach for solving this optimization problem is to fit a deep neural network (DNN) model \(f_{\mathbf{\theta}}(\cdot)\) with parameters \(\mathbf{\theta}\) to the static dataset in a supervised manner. The optimal parameters \(\mathbf{\theta}^{*}\) can be obtained by minimizing the mean squared error between the predictions and the true scores:
\[\mathbf{\theta}^{*}=\arg\min_{\mathbf{\theta}}\frac{1}{N}\sum_{i=1}^{N}(f_{\mathbf{\theta }}(\mathbf{x}_{i})-y_{i})^{2}\,. \tag{2}\]
The trained DNN model \(f_{\mathbf{\theta}^{*}}(\cdot)\) acts as a proxy to optimize the design using gradient ascent steps:
\[\mathbf{x}_{t+1}=\mathbf{x}_{t}+\eta\nabla_{\mathbf{x}}f_{\mathbf{\theta}}(\mathbf{x})|_{\mathbf{x}= \mathbf{x}_{t}}\,,\quad\text{for }t\in[0,T-1]\,, \tag{3}\]
where \(T\) is the number of steps and \(\eta\) represents the learning rate. \(\mathbf{x}_{T}\) serves as the final design candidate. However, this method faces a challenge with the proxy being vulnerable to out-of-distribution designs. When handling designs that substantially differ from the training distribution, the proxy yields inaccurate predictions.
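For concreteness, the following is a minimal PyTorch sketch of this baseline pipeline: fit a proxy by Eq.(2), then run gradient ascent as in Eq.(3). The two-layer architecture, learning rates and step counts here are illustrative assumptions rather than the settings used later in the paper.

```
import torch
import torch.nn as nn

# Minimal sketch of the gradient-ascent baseline; hyperparameters are illustrative.

def train_proxy(x, y, hidden=64, epochs=200, lr=1e-3):
    """Fit a proxy f_theta on the static dataset by minimizing the MSE in Eq.(2)."""
    proxy = nn.Sequential(nn.Linear(x.shape[1], hidden), nn.ReLU(), nn.Linear(hidden, 1))
    opt = torch.optim.Adam(proxy.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = ((proxy(x).squeeze(-1) - y) ** 2).mean()
        loss.backward()
        opt.step()
    return proxy

def gradient_ascent(proxy, x0, steps=100, eta=1e-2):
    """Update a design against the trained proxy as in Eq.(3)."""
    x = x0.clone().detach().requires_grad_(True)
    for _ in range(steps):
        grad, = torch.autograd.grad(proxy(x).sum(), x)
        x = (x + eta * grad).detach().requires_grad_(True)
    return x.detach()

# Usage with a static dataset (x_data, y_data), starting from the best observed design:
# x_best = x_data[y_data.argmax()].unsqueeze(0)
# candidate = gradient_ascent(train_proxy(x_data, y_data), x_best)
```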
## 3 Method
In this section, we introduce _parallel-mentoring_, focusing on the three-proxy scenario, also known as _tri-mentoring_. The method can be easily extended to incorporate more proxies, as discussed in Appendix A.1. _Tri-mentoring_ consists of two modules. The first module, _voting-based pairwise supervision_ in Section 3.1, manages three proxies parallelly and generates consensus labels via majority voting to mentor proxies. To mitigate label noise, we introduce a second module, _adaptive soft-labeling_ in Section 3.2. This module adaptively fine-tunes proxies and soft-labels using bi-level optimization, improving ensemble robustness. The overall algorithm is shown in Algorithm 1.
### Voting-based Pairwise Supervision
We train three parallel proxies \(f_{\mathbf{\theta}}^{A}(\cdot)\), \(f_{\mathbf{\theta}}^{B}(\cdot)\) and \(f_{\mathbf{\theta}}^{C}(\cdot)\) on the static dataset with different initializations and utilize their mean as the final prediction as suggested in [1; 15]:
\[f_{\mathbf{\theta}}(\cdot)=\frac{1}{3}(f_{\mathbf{\theta}}^{A}(\cdot)+f_{\mathbf{\theta}} ^{B}(\cdot)+f_{\mathbf{\theta}}^{C}(\cdot)). \tag{4}\]
We then apply gradient ascent with \(f_{\mathbf{\theta}}(\cdot)\) on existing designs to generate improved designs as per Eq.(3). Although the mean ensemble generally results in superior designs compared to a single proxy
[7] due to the ensemble robustness [8; 9], this approach does not fully exploit the potential of weak (valuable, albeit potentially unreliable) ranking supervision signals within every proxy. Emphasizing the relative order of designs rather than their absolute scores, these signals are more resilient to noise and inaccuracies. These ranking signals are commonly used in evolutionary algorithms [12], reinforcement learning [13], and generative modeling [14] to select designs and could further improve the ensemble robustness. We extract the ranking supervision signals from individual proxies in the form of pairwise comparison labels, and then combine these labels via majority voting to generate consensus labels to mentor proxies. We provide a detailed explanation of this module below, with its implementation shown in Algorithm 1 from Line \(4\) to Line \(6\).
```
Input: The static dataset \(\mathcal{D}\), the number of iterations \(T\), the optimizer \(OPT(\cdot)\).
Output: The high-scoring design \(\mathbf{x}_{h}^{*}\).
1  Initialize \(\mathbf{x}_{0}\) as the design with the highest score in \(\mathcal{D}\).
2  Train proxies \(f^{A}_{\mathbf{\theta}}(\cdot)\), \(f^{B}_{\mathbf{\theta}}(\cdot)\) and \(f^{C}_{\mathbf{\theta}}(\cdot)\) on \(\mathcal{D}\) with different initializations.
3  for \(t\gets 0\) to \(T-1\) do
       \(\triangleright\) Voting-based pairwise supervision.
4      Sample \(K\) neighborhood points at \(\mathbf{x}_{t}\) as \(\mathcal{S}(\mathbf{x}_{t})\).
5      Compute pairwise comparison labels \(\hat{\mathbf{y}}^{A}\), \(\hat{\mathbf{y}}^{B}\) and \(\hat{\mathbf{y}}^{C}\) for three proxies on \(\mathcal{S}(\mathbf{x}_{t})\).
6      Derive consensus labels: \(\hat{\mathbf{y}}^{V}=\text{majority\_voting}(\hat{\mathbf{y}}^{A},\hat{\mathbf{y}}^{B},\hat{\mathbf{y}}^{C})\).
       \(\triangleright\) Adaptive soft-labeling.
7      for proxy in \([f^{A}_{\mathbf{\theta}}(\cdot)\), \(f^{B}_{\mathbf{\theta}}(\cdot)\), \(f^{C}_{\mathbf{\theta}}(\cdot)]\) do
8          Initialize soft-labels as consensus labels: \(\hat{\mathbf{y}}^{S}=\hat{\mathbf{y}}^{V}\).
9          Inner level: fine-tune the proxy with Eq.(8).
10         Outer level: learn more accurate soft-labels \(\hat{\mathbf{y}}^{S}\) with Eq.(9).
11         Mentor the proxy using the optimized soft-labels \(\hat{\mathbf{y}}^{S}\) with Eq.(8).
       \(\triangleright\) Gradient ascent with a mean ensemble.
12     Form a more robust ensemble as \(f_{\mathbf{\theta}}(\mathbf{x})=\frac{1}{3}(f^{A}_{\mathbf{\theta}}(\mathbf{x})+f^{B}_{\mathbf{\theta}}(\mathbf{x})+f^{C}_{\mathbf{\theta}}(\mathbf{x}))\).
13     Gradient ascent: \(\mathbf{x}_{t+1}=\mathbf{x}_{t}+\eta OPT(\nabla_{\mathbf{x}}f_{\mathbf{\theta}}(\mathbf{x}_{t}))\).
14 Return \(\mathbf{x}_{h}^{*}=\mathbf{x}_{T}\)
```
**Algorithm 1** Tri-mentoring for Offline Model-based Optimization
**Pairwise comparison label.** We adopt a pairwise approach to represent the ranking supervision signals for every proxy, focusing on relative order to align with the ranking information used in design selection. We sample \(K\) points in the neighborhood of the optimization point \(\mathbf{x}_{t}\) as \(\mathcal{S}(\mathbf{x}_{t})\) = \(\{\mathbf{x}_{1}^{n},\dots,\mathbf{x}_{K}^{n}\}\sim\mathcal{N}(\mathbf{x}_{t},\delta^{2})\) where \(\mathcal{N}(\mathbf{x}_{t},\delta^{2})\) represents a Gaussian distribution centered at \(\mathbf{x}_{t}\) with variance \(\delta^{2}\). For each sample pair \((\mathbf{x}_{i}^{n},\mathbf{x}_{j}^{n})\) and a proxy (e.g., \(f^{A}_{\mathbf{\theta}}(\cdot)\)), we define the pairwise comparison label \(\hat{\mathbf{y}}_{ij}^{A}=\mathbf{1}(f^{A}_{\mathbf{\theta}}(\mathbf{x}_{i}^{n})>f^{A}_{\mathbf{\theta}}(\mathbf{x}_{j}^{n}))\), where \(\mathbf{1}\) is the indicator function. The labels \(\hat{\mathbf{y}}^{A}\) from all sample pairs serve as the ranking supervision signals for the proxy \(f^{A}_{\mathbf{\theta}}(\cdot)\). We repeat this process for all proxies, generating signals \(\hat{\mathbf{y}}^{B}\) and \(\hat{\mathbf{y}}^{C}\) for proxies \(f^{B}_{\mathbf{\theta}}(\cdot)\) and \(f^{C}_{\mathbf{\theta}}(\cdot)\) respectively.
**Majority voting.** Given these pairwise comparison labels \(\hat{\mathbf{y}}^{A}\), \(\hat{\mathbf{y}}^{B}\) and \(\hat{\mathbf{y}}^{C}\), we derive the pairwise consensus labels \(\hat{\mathbf{y}}^{V}\) via an element-wise majority voting:
\[\hat{\mathbf{y}}^{V}_{ij}=\text{majority\_voting}(\hat{\mathbf{y}}^{A}_{ij},\hat{ \mathbf{y}}^{B}_{ij},\hat{\mathbf{y}}^{C}_{ij})\,, \tag{5}\]
where \(i\) and \(j\) are the indexes of the neighborhood samples. As consensus labels are generally more reliable, they can be employed for mentoring the proxies to promote the exchange of ranking supervision signals. Specifically, we can fine-tune the proxy \(f^{A}_{\mathbf{\theta}}(\cdot)\) using the binary cross-entropy loss, where \(\sigma(f^{A}_{\mathbf{\theta}}(\mathbf{x}_{i}^{n})-f^{A}_{\mathbf{\theta}}(\mathbf{x}_{j}^{n}))\) represents the predicted probability that \(f^{A}_{\mathbf{\theta}}(\mathbf{x}_{i}^{n})>f^{A}_{\mathbf{\theta}}(\mathbf{x}_{j}^{n})\), as also used in the ChatGPT reward model training [16; 17; 18]. The loss function can be computed as:
\[\mathcal{L}^{A}(\mathbf{\theta})=-\frac{1}{C_{K}^{2}}\sum_{1\leq i<j\leq K}\hat{ \mathbf{y}}^{V}_{ij}\log[\sigma(f^{A}_{\mathbf{\theta}}(\mathbf{x}_{i}^{n})-f^{A}_{\mathbf{ \theta}}(\mathbf{x}_{j}^{n}))]+(1-\hat{\mathbf{y}}^{V}_{ij})\log[\sigma(f^{A}_{\mathbf{ \theta}}(\mathbf{x}_{j}^{n})-f^{A}_{\mathbf{\theta}}(\mathbf{x}_{i}^{n}))], \tag{6}\]
where \(C_{K}^{2}=\frac{K(K-1)}{2}\) denotes the number of the sample pairs. This procedure is also applied to proxies \(f^{B}_{\mathbf{\theta}}(\cdot)\) and \(f^{C}_{\mathbf{\theta}}(\cdot)\). While our approach encourages alignment with the consensus, it does not aim to make proxies identical. Each proxy maintains its unique learning trajectory, thereby preserving the diversity among the proxies.
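To make this module concrete, a minimal PyTorch sketch is given below. The function names and tensor shapes are our own illustrative choices, and the loss uses `binary_cross_entropy_with_logits` on score differences, which is equivalent to Eq.(6); it is a sketch rather than the authors' implementation.

```
import itertools
import torch
import torch.nn.functional as F

# Sketch of voting-based pairwise supervision (Eqs.(5) and (6)); names are illustrative.

def sample_neighborhood(x_t, K=10, delta=0.1):
    """Draw K samples from N(x_t, delta^2) around the current optimization point."""
    return x_t + delta * torch.randn(K, x_t.shape[-1])

def pairwise_labels(proxy, samples):
    """Label y_ij = 1 if f(x_i) > f(x_j) for every pair i < j."""
    scores = proxy(samples).squeeze(-1)
    pairs = list(itertools.combinations(range(samples.shape[0]), 2))
    labels = torch.tensor([float(scores[i] > scores[j]) for i, j in pairs])
    return labels, pairs

def majority_voting(labels_a, labels_b, labels_c):
    """Element-wise majority vote over the three proxies' pairwise labels (Eq.(5))."""
    return ((labels_a + labels_b + labels_c) >= 2).float()

def mentoring_loss(proxy, samples, pairs, targets):
    """Binary cross-entropy on score differences, i.e. the loss in Eq.(6)."""
    scores = proxy(samples).squeeze(-1)
    diffs = torch.stack([scores[i] - scores[j] for i, j in pairs])
    return F.binary_cross_entropy_with_logits(diffs, targets)
```

In each optimization step, every proxy would be fine-tuned on the consensus labels via `mentoring_loss`, which nudges disagreements toward the majority while leaving each proxy its own learning trajectory.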
### Adaptive Soft-labeling
The consensus labels \(\mathbf{\hat{y}}^{V}\) may contain noise since the majority voting consensus can be incorrect. To mitigate this issue, we propose an _adaptive soft-labeling_ module. This module initializes soft-labels as consensus labels, and employs a bi-level optimization framework for adaptive mentoring, where the inner level fine-tunes the proxies and the outer level refines the soft-labels. We provide a detailed description of this module below; its implementation is outlined in Algorithm 1, Lines 7-11.
**Fine-tuning.** We initialize the soft-labels as the consensus labels: \(\mathbf{\hat{y}}^{S}=\mathbf{\hat{y}}^{V}\), serving as an effective starting point. Utilizing the soft-labels \(\mathbf{\hat{y}}^{S}\), we can perform fine-tuning on the proxy \(f^{A}_{\mathbf{\theta}}(\cdot)\) against the binary cross-entropy loss following Eq.(6),
\[\mathcal{L}^{A}(\mathbf{\theta},\hat{\mathbf{y}}^{S})=-\frac{1}{C_{K}^{2}}\sum_{1\leq i<j\leq K}\hat{\mathbf{y}}^{S}_{ij}\log[\sigma(f^{A}_{\mathbf{\theta}}(\mathbf{x}_{i}^{n})-f^{A}_{\mathbf{\theta}}(\mathbf{x}_{j}^{n}))]+(1-\hat{\mathbf{y}}^{S}_{ij})\log[\sigma(f^{A}_{\mathbf{\theta}}(\mathbf{x}_{j}^{n})-f^{A}_{\mathbf{\theta}}(\mathbf{x}_{i}^{n}))]\,. \tag{7}\]
In contrast to Eq.(6), we express the loss \(\mathcal{L}^{A}(\mathbf{\theta},\mathbf{\hat{y}}^{S})\) as a function of both the proxy parameters \(\mathbf{\theta}\) and the soft-labels \(\mathbf{\hat{y}}^{S}\), since the accurate soft-labels \(\mathbf{\hat{y}}^{S}\) have yet to be determined. One-step gradient descent enables fine-tuning, resulting in the following relationship:
\[\mathbf{\theta}(\mathbf{\hat{y}}^{S})=\mathbf{\theta}-\gamma\frac{\partial\mathcal{L}^{A }(\mathbf{\theta},\mathbf{\hat{y}}^{S})}{\partial\mathbf{\theta}^{\top}}, \tag{8}\]
where \(\gamma\) denotes the fine-tuning learning rate.
**Soft-labeling.** Assuming the soft-labels are accurate, the fine-tuned proxy \(f^{A}_{\mathbf{\theta}(\mathbf{\hat{y}}^{S})}(\cdot)\) is expected to perform well on the static dataset. This is due to the fact that, despite having different data distributions, the soft-labels and the static dataset share underlying similarities, as they both represent the same ground-truth from pairwise and pointwise perspectives, respectively. This leads to a bi-level optimization framework with the fine-tuning mentioned above as the inner level and the soft-labeling here as the outer level. In particular, we enhance the accuracy of soft-labels \(\mathbf{\hat{y}}^{S}\) by minimizing the mean squared error on the static dataset \(\mathcal{D}=\{(\mathbf{x}_{i},y_{i})\}_{i=1}^{N}\),
\[\mathbf{\hat{y}}^{S^{\prime}}=\mathbf{\hat{y}}^{S}-\frac{\lambda}{N}\frac{\partial \sum_{i=1}^{N}(f^{A}_{\mathbf{\theta}(\mathbf{\hat{y}}^{S})}(\mathbf{x}_{i})-y_{i})^{2}}{ \partial(\mathbf{\hat{y}}^{S})^{\top}}\,, \tag{9}\]
where \(\lambda\) represents the soft-labeling learning rate. The nested optimization problem can be easily solved by _higher_, a library for higher-order optimization [19]. Once optimized, the labels are fed back to the first module, adaptively mentoring the proxy \(f^{A}_{\mathbf{\theta}}(\cdot)\) according to Eq.(8). The same procedure can be employed for proxies \(f^{B}_{\mathbf{\theta}}(\cdot)\) and \(f^{C}_{\mathbf{\theta}}(\cdot)\), ultimately resulting in a more robust ensemble. We further clarify the novelty of this module in Appendix A.2.
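A minimal sketch of one adaptive soft-labeling step is shown below, using the _higher_ library mentioned above for the differentiable inner update. The variable names, the explicit pairwise cross-entropy (written out so that gradients can flow to the soft-labels), the single inner step per outer update, and the final clamp of the labels to \([0,1]\) are our own assumptions rather than the authors' exact implementation.

```
import higher
import torch
import torch.nn.functional as F

# Sketch of one adaptive soft-labeling step (Eqs.(7)-(9)); details are illustrative.

def pairwise_bce(scores, pairs, targets):
    """-[t log sigma(d) + (1 - t) log sigma(-d)] averaged over score differences d (Eq.(7))."""
    diffs = torch.stack([scores[i] - scores[j] for i, j in pairs])
    return -(targets * F.logsigmoid(diffs) + (1 - targets) * F.logsigmoid(-diffs)).mean()

def adaptive_soft_labeling_step(proxy, samples, pairs, consensus_labels,
                                x_static, y_static, gamma=1e-3, lam=1e-1):
    soft_labels = consensus_labels.clone().requires_grad_(True)
    inner_opt = torch.optim.SGD(proxy.parameters(), lr=gamma)

    # Inner level (Eq.(8)): one differentiable fine-tuning step on the soft-labels.
    with higher.innerloop_ctx(proxy, inner_opt) as (fproxy, diffopt):
        inner_loss = pairwise_bce(fproxy(samples).squeeze(-1), pairs, soft_labels)
        diffopt.step(inner_loss)

        # Outer level (Eq.(9)): improve the soft-labels via the static-dataset MSE.
        outer_loss = F.mse_loss(fproxy(x_static).squeeze(-1), y_static)
        grad_labels, = torch.autograd.grad(outer_loss, soft_labels)
    soft_labels = (soft_labels - lam * grad_labels).detach().clamp(0.0, 1.0)  # assumed projection

    # Mentoring: fine-tune the actual proxy once with the optimized soft-labels (Eq.(8)).
    inner_opt.zero_grad()
    pairwise_bce(proxy(samples).squeeze(-1), pairs, soft_labels).backward()
    inner_opt.step()
    return soft_labels
```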
## 4 Experiments
We conduct extensive experiments on design-bench to investigate the effectiveness and robustness of the proposed method. In Section 4.4, we benchmark our approach against several well-established baselines. In Section 4.5, we verify the effectiveness of two modules: _voting-based pairwise supervision_ and _adaptive soft-labeling_, as well as other contributing factors. In Section 4.6, we investigate the sensitivity of our method to hyperparameter changes. While our primary focus is the _tri-mentoring_ with \(3\) proxies, we additionally explore other _parallel-mentoring_ implementations utilizing \(5\), \(7\), \(9\), and \(11\) proxies in Appendix A.1.
### Task Overview
We adopt the design-bench which comprises both continuous and discrete tasks. Below, we outline the dataset details and evaluation protocols.
**Dataset.** We conduct experiments on four continuous tasks: **(1)** Superconductor (SuperC) [2]: discover an \(86\)-D superconductor to maximize critical temperature with \(17010\) designs. **(2)** Ant Morphology (Ant) [20]: identify a \(60\)-D ant morphology to crawl quickly with \(10004\) designs. **(3)** D'Kitty Morphology (D'Kitty) [21]: determine a \(56\)-D D'Kitty morphology to crawl quickly with \(10004\) designs. **(4)** Hopper Controller (Hopper) [1]: find a neural network policy with \(5126\) weights to maximize return with \(3200\) designs. Besides, we perform experiments on three discrete tasks: **(1)** TF Bind 8 (TFB8) [5]: design a length \(8\) DNA sequence to maximize binding activity score with 32896 designs. **(2)** TF Bind 10 (TFB10) [5]: find a length \(10\) DNA sequence to maximize binding
activity score with \(50000\) designs. **(3)** NAS [1]: find a \(64\)-D NN with \(5\) categories per dimension to maximize the performance on CIFAR10 with \(1771\) designs.
**Evaluation.** We use the oracle evaluation of design-bench to evaluate a certain design and the details of the oracle functions are reported in _Design-Bench Benchmark Tasks_ in [1]. Following [7], we select the top N = \(128\) candidates for each method and report the \(100^{th}\) percentile (maximum) normalized ground truth score. The score, denoted as \(y_{n}\), is computed as \(\frac{y-y_{min}}{y_{max}-y_{min}}\) where \(y\) is the design score, and \(y_{min}\) and \(y_{max}\) are the lowest and highest scores in the complete unobserved dataset, respectively. In addition, we provide the \(50^{th}\) percentile (median) normalized ground truth scores used in the prior work in Appendix A.3. We also provide the mean and median ranks of all comparison methods to better assess the overall performance.
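As a small illustration of this protocol (not the design-bench code itself), the reported score can be computed as follows, where `oracle`, `y_min` and `y_max` are placeholders for the task oracle and the score bounds of the full unobserved dataset.

```
import numpy as np

def evaluate_candidates(candidates, oracle, y_min, y_max, percentile=100):
    """Score N (e.g. 128) candidates with the oracle and report a normalized percentile."""
    scores = np.array([oracle(x) for x in candidates])
    normalized = (scores - y_min) / (y_max - y_min)   # y_n = (y - y_min) / (y_max - y_min)
    return np.percentile(normalized, percentile)       # 100th = maximum, 50th = median
```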
### Comparisons with Other Methods
In this paper, we benchmark our method against both gradient-based and non-gradient-based approaches. The gradient-based methods include: **(1)** Grad: optimizes the design against the learned proxy via simple gradient ascent; **(2)** DE (Deep Ensemble)[7]: optimizes the design against the mean ensemble of three proxies via gradient ascent; **(3)** GB (Gradient Boosting) [22]: sequentially trains new proxies to obtain a robust proxy, followed by gradient ascent using the proxy; **(4)** COMs [7]: lower bounds the proxy's prediction on out-of-distribution designs and subsequently carries out gradient ascent; **(5)** ROMA [23]: incorporates the smoothness prior into the proxy and optimizes the design against the proxy; **(6)** NEMO [24]: leverages normalized maximum likelihood to constrain the distance between the proxy and the ground-truth, and acquires new designs by gradient ascent; **(7)** BDI [25]: proposes to distill the information from the static dataset into the high-scoring design; **(8)** IOM [26]: enforces the invariance between the representations of the static dataset and generated designs to achieve a natural trade-off. Since our _tri-mentoring_ adopts three proxies, methods using one proxy including COMs, ROMA and IOM are equipped with three proxies for a fair comparison. We also explore combining our method with ROMA and COMs and please refer to Appendix A.4 for an in-depth discussion and corresponding empirical results.
The non-gradient-based methods include: **(1)** BO-qEI [27]: fits a Gaussian Process, proposes candidate designs utilizing the quasi-Expected Improvement acquisition function, and assigns labels to the candidates with the proxy; **(2)** CMA-ES [28]: labels the sampled designs and gradually adapts the covariance matrix distribution towards the high-scoring part among the sampled designs; **(3)** REINFORCE [29]: parameterizes a design distribution and optimizes the distribution towards the optimal design by policy gradient; **(4)** CbAS [30]: trains a VAE model and progressively adapts the model to focus on the high-scoring designs; **(5)** Auto.CbAS [31]: retrains the proxy adopted in CbAS by leveraging importance sampling; **(6)** MIN [32]: learns an inverse map from a score to a design and then samples from the map conditioned an optimal score value.
### Training Details
We follow the settings in [7; 25] if not specified. We adopt a three-layer MLP network with the ReLU function as the activation. We train the MLP model on the static dataset with a \(1\times 10^{-3}\) learning rate and an Adam optimizer. The fine-tuning learning rate \(\gamma\) is set as \(1\times 10^{-3}\) and the soft-labeling learning rate \(\lambda\) is set as \(1\times 10^{-1}\). The standard deviation \(\delta\) is set as \(1\times 10^{-1}\) and the number of the samples \(K\) is set as \(10\). All experiments are performed on a single V100 GPU. To ensure the robustness of our results, we perform \(16\) trials for each setting and report the mean and standard error. We've detailed the training time and computational overhead of our approach in Appendix A.5 to provide a comprehensive view of its practicality.
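For reference, the default settings above can be collected into a single configuration; the field names below are our own and only restate the values given in this subsection.

```
# Default hyperparameters from Section 4.3 (field names are illustrative).
TRI_MENTORING_DEFAULTS = dict(
    proxy="3-layer MLP with ReLU",  # proxy architecture
    proxy_lr=1e-3,                  # Adam learning rate for proxy training
    gamma=1e-3,                     # fine-tuning (inner-level) learning rate
    lam=1e-1,                       # soft-labeling (outer-level) learning rate
    delta=1e-1,                     # std of neighborhood sampling
    K=10,                           # number of neighborhood samples
    trials=16,                      # independent runs per setting
)
```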
### Results and Analysis
In Table 1 and Table 2, we present the results of our experiments for continuous and discrete tasks, respectively. A delineating line is drawn to separate the gradient-based methods from the non-gradient-based methods. Results for non-gradient-ascent methods are taken from [1]. The highest score of the static dataset for each task is denoted by \(\mathcal{D}(\mathbf{best})\). For each task, we highlight the algorithms that fall within one standard deviation of the highest performance by bolding their results.
**Continuous tasks.** (1) Table 1 demonstrates _tri-mentoring_ attains the best results across the board, highlighting its effectiveness. Its consistent gains over Grad confirm its ability to tackle the out-of-distribution issue. We further test our tri-mentoring for out-of-distribution issues as detailed in Appendix A.6. (2) Compared to DE and GB, which also use multiple proxies, _tri-mentoring_ achieves
better results in all four tasks, indicating its improved robustness by sharing ranking supervision signals among proxies. (3) DE typically outperforms simple gradient ascent due to ensemble prediction robustness, consistent with findings in [1]. (4) Other gradient-based methods, such as COMs, fail to achieve the performance as _tri-mentoring_, further highlighting its superior effectiveness. (5) In low-dimensional TF Bind 8 tasks, gradient-based methods (average rank \(8.8\)) underperform compared to non-gradient-based methods (average rank \(6.8\)); however, in high-dimensional TF Bind 10 tasks, gradient-based methods (average rank \(7.7\)) surpass non-gradient-based methods (average rank \(8.2\)). This suggests non-gradient-based methods, like REINFORCE and generative modeling, are more suited for low-dimensional design due to their global search ability, while gradient-based methods provide more direct guidance in high-dimensional designs.
**Discrete tasks.** (1) Table 2 shows that _tri-mentoring_ achieves the best results in two out of the three tasks, with a marginal difference in the third. This indicates that _tri-mentoring_ is a potent method for discrete tasks as well. (2) However, in complex tasks such as NAS, where each design is represented as a \(64\)-length sequence of \(5\)-category one-hot vectors, the performance of _tri-mentoring_ is slightly compromised. This could be attributed to the encoding in design-bench not fully accounting for the sequential and hierarchical nature of network architectures, leading to less effective gradient updates. Our proposed method also demonstrates its effectiveness on high-dimensional biological sequence design, achieving maximum normalized scores of \(0.865\) and \(0.699\) on GFP and UTR respectively.
**In summary**, _tri-mentoring_ achieves the highest ranking as shown in Table 2 and Figure 3, and delivers the best performance in six out of the seven tasks.
\begin{table}
\begin{tabular}{c c c c c} \hline Method & Superconductor & Ant Morphology & D’Kitty Morphology & Hopper Controller \\ \hline \(\mathcal{D}(\textbf{best})\) & \(0.399\) & \(0.565\) & \(0.884\) & \(1.000\) \\ BO-qEI & \(0.402\pm 0.034\) & \(0.819\pm 0.000\) & \(0.896\pm 0.000\) & \(0.550\pm 0.018\) \\ CMA-ES & \(0.465\pm 0.024\) & \(\textbf{1.214}\pm\textbf{0.732}\) & \(0.724\pm 0.001\) & \(0.604\pm 0.215\) \\ REINFORCE & \(0.481\pm 0.013\) & \(0.266\pm 0.032\) & \(0.562\pm 0.196\) & \(-0.020\pm 0.067\) \\ CbAS & \(\textbf{0.503}\pm\textbf{0.069}\) & \(0.876\pm 0.031\) & \(0.892\pm 0.008\) & \(0.141\pm 0.012\) \\ Auto.CbAS & \(0.421\pm 0.045\) & \(0.882\pm 0.045\) & \(0.906\pm 0.006\) & \(0.137\pm 0.005\) \\ MIN & \(0.499\pm 0.017\) & \(0.445\pm 0.080\) & \(0.892\pm 0.011\) & \(0.424\pm 0.166\) \\ \hline Grad & \(0.495\pm 0.011\) & \(0.934\pm 0.011\) & \(0.944\pm 0.017\) & \(1.797\pm 0.116\) \\ DE & \(\textbf{0.514}\pm\textbf{0.015}\) & \(0.937\pm 0.016\) & \(\textbf{0.956}\pm\textbf{0.014}\) & \(1.805\pm 0.105\) \\ GB & \(0.496\pm 0.012\) & \(0.926\pm 0.029\) & \(0.948\pm 0.012\) & \(\textbf{1.793}\pm\textbf{0.429}\) \\ COMs & \(0.491\pm 0.028\) & \(0.856\pm 0.040\) & \(0.938\pm 0.015\) & \(0.642\pm 0.167\) \\ ROMA & \(0.508\pm 0.014\) & \(0.914\pm 0.029\) & \(0.930\pm 0.012\) & \(\textbf{1.728}\pm\textbf{0.266}\) \\ NEMO & \(0.502\pm 0.002\) & \(\textbf{0.958}\pm\textbf{0.011}\) & \(0.954\pm 0.007\) & \(0.481\pm 0.003\) \\ IOM & \(\textbf{0.522}\pm\textbf{0.018}\) & \(0.926\pm 0.030\) & \(0.943\pm 0.012\) & \(1.015\pm 0.380\) \\ BDI & \(0.513\pm 0.000\) & \(0.906\pm 0.000\) & \(0.919\pm 0.000\) & \(\textbf{1.993}\pm\textbf{0.000}\) \\ \hline _Tri-mentoring_ & \(\textbf{0.514}\pm\textbf{0.018}\) & \(\textbf{0.948}\pm\textbf{0.014}\) & \(\textbf{0.966}\pm\textbf{0.010}\) & \(\textbf{1.983}\pm\textbf{0.110}\) \\ \hline \end{tabular}
\end{table}
Table 1: Results (maximum normalized score) on continuous tasks.
\begin{table}
\begin{tabular}{c c c c c c} \hline Method & TF Bind 8 & TF Bind 10 & NAS & Rank Mean & Rank Median \\ \hline \(\mathcal{D}(\textbf{best})\) & \(0.439\) & \(0.467\) & \(0.436\) & & \\ BO-qEI & \(0.798\pm 0.083\) & \(0.652\pm 0.038\) & \(\textbf{1.079}\pm\textbf{0.059}\) & \(10.1/15\) & \(11/15\) \\ CMA-ES & \(\textbf{0.953}\pm\textbf{0.022}\) & \(0.670\pm 0.023\) & \(0.985\pm 0.079\) & \(6.4/15\) & \(4/15\) \\ REINFORCE & \(\textbf{0.948}\pm\textbf{0.028}\) & \(0.663\pm 0.034\) & \(-1.895\pm 0.000\) & \(11.4/15\) & \(15/15\) \\ CbAS & \(\textbf{0.927}\pm\textbf{0.051}\) & \(0.651\pm 0.060\) & \(0.683\pm 0.079\) & \(9.1/15\) & \(9/15\) \\ Auto.CbAS & \(0.910\pm 0.044\) & \(0.630\pm 0.045\) & \(0.506\pm 0.074\) & \(11.6/15\) & \(11/15\) \\ MIN & \(0.905\pm 0.052\) & \(0.616\pm 0.021\) & \(0.717\pm 0.046\) & \(11.0/15\) & \(12/15\) \\ \hline Grad & \(0.886\pm 0.035\) & \(0.647\pm 0.021\) & \(0.624\pm 0.102\) & \(7.9/15\) & \(9/15\) \\ DE & \(0.900\pm 0.056\) & \(0.659\pm 0.033\) & \(0.655\pm 0.059\) & \(5.3/15\) & \(4/15\) \\ GB & \(\textbf{0.922}\pm\textbf{0.050}\) & \(0.630\pm 0.041\) & \(0.716\pm 0.088\) & \(7.6/15\) & \(6/15\) \\ COMs & \(0.496\pm 0.065\) & \(0.622\pm 0.003\) & \(0.783\pm 0.029\) & \(10.0/15\) & \(11/15\) \\ ROMA & \(0.917\pm 0.039\) & \(0.672\pm 0.035\) & \(0.927\pm 0.071\) & \(5.7/15\) & \(6/15\) \\ NEMO & \(0.943\pm 0.005\) & \(\textbf{0.711}\pm\textbf{0.021}\) & \(0.737\pm 0.010\) & \(5.0/15\) & \(4/15\) \\ IOM & \(0.861\pm 0.079\) & \(0.647\pm 0.027\) & \(0.559\pm 0.081\) & \(7.9/15\) & \(7/15\) \\ BDI & \(0.870\pm 0.000\) & \(0.605\pm 0.000\) & \(0.722\pm 0.000\) & \(8.1/15\) & \(9/15\) \\ \hline _Tri-mentoring_ & \(\textbf{0.970}\pm\textbf{0.001}\) & \(\textbf{0.722}\pm\textbf{0.017}\) & \(0.759\pm 0.102\) & \(\textbf{2.1}/\textbf{15}\) & \(\textbf{2/15}\) \\ \hline \end{tabular}
\end{table}
Table 2: Results (maximum normalized score) on discrete tasks & ranking on all tasks.
### Ablation Studies
In this subsection, the proposed method serves as the baseline, and we systematically remove each module including _voting-based pairwise supervision_ and _adaptive soft-labeling_ to verify its contribution. The results are presented in Table 3. Besides the performance results here, we also provide an evaluation of the accuracy of generated pairwise labels to further verify the effectiveness of the two modules in Appendix A.7.
**Voting-based pairwise supervision.** Instead of using the proposed module, we compute the mean prediction of the ensemble and use this prediction to create pairwise consensus labels. We denote this as w/o _voting-based ps_. Our results in Table 3 show a decline in performance when adopting this alternative. A plausible explanation for this performance decline is that the alternative module fails to effectively exploit weak ranking supervision signals from individual proxies, resulting in reduced information exchange and collaboration among the proxies compared to the proposed module.
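As an illustration, the sketch below shows one way the voted pairwise consensus labels could be computed from the mentor proxies' predictions; it is a minimal Python sketch with illustrative names, not the exact implementation used in our experiments.

```python
import torch

def consensus_pairwise_labels(mentor_preds, pairs):
    """Majority-voted pairwise labels from a list of mentor proxies.

    mentor_preds: list of 1-D score tensors (one per mentor) over the sampled points.
    pairs: iterable of (i, j) index pairs to be ranked.
    Returns one label per pair: the fraction of mentors ranking x_i above x_j
    (e.g., 1.0, 0.5 or 0.0 with two mentors).
    """
    labels = []
    for i, j in pairs:
        votes = torch.stack([(p[i] > p[j]).float() for p in mentor_preds])
        labels.append(votes.mean())
    return torch.stack(labels)
```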
**Adaptive soft-labeling.** We remove this module and resort to using one-hot consensus labels. The performance across all tasks generally deteriorates compared to the full _tri-mentoring_. A possible explanation for this is that this module ensures that fine-tuned proxies are optimized to perform well on the static dataset by introducing soft-labels, reducing the risk of overfitting to consensus labels derived from individual proxy predictions. This demonstrates the significance of the _adaptive soft-labeling_ module in mitigating the effects of label noise and enhancing the ensemble performance.
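The softened consensus labels can then enter a standard pairwise ranking loss when fine-tuning the mentee proxy, as sketched below; the bi-level update of the soft labels themselves is omitted here, and the function names are illustrative.

```python
import torch
import torch.nn.functional as F

def pairwise_ranking_loss(mentee_scores, pairs, soft_labels):
    """Binary cross-entropy on score differences.

    soft_labels[k] close to 1 means design i_k should outrank design j_k;
    values strictly between 0 and 1 correspond to softened consensus labels,
    whereas one-hot (0/1) labels recover the ablated variant.
    """
    logits = torch.stack([mentee_scores[i] - mentee_scores[j] for i, j in pairs])
    return F.binary_cross_entropy_with_logits(logits, soft_labels)
```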
Furthermore, we assess the impact of neighborhood sampling in our method and a post selection strategy related to our method, with results outlined in Table 3.
**Neighborhood sampling.** Typically, we sample \(K\) neighborhood points at the optimization point \(\mathbf{x}_{t}\) for pairwise labels. When neighborhood sampling is excluded (_w/o neighbor_), labels are directly generated near the static dataset. The performance generally deteriorates for _w/o neighbor_ in Table 3, due to the lack of local ranking information around the optimization point.
**Post selection.** We investigate a variant, _post selection_, where the mean of two proxies is used to select the third proxy's candidates. From Table 3, we find that this variant generally yields worse results compared to the full tri-mentoring. This finding suggests that using the ranking supervision signals directly for design selection is less effective than allowing proxies to exchange ranking supervision signals within the ensemble to produce a more robust ensemble.
### Hyperparameter Sensitivity
In this section, we study the sensitivity of our method to hyperparameters, specifically the number of neighborhood samples \(K\) and the number of optimization steps \(T\), on two tasks, the continuous Ant and the discrete TFB8. We also discuss the standard deviation hyperparameter \(\delta\) in Appendix A.8.
**Number of neighborhood samples (\(K\)).** We evaluate the performance of our method for different values of \(K\), i.e., the number of neighborhood samples around the optimization point. We test \(K\) values of \(5\), \(10\), \(15\), \(20\), and \(25\), with \(K=10\) being the default value used in this paper. The results are normalized by dividing them by the result obtained for \(K=10\). As shown in Figure 4, the performance of the tri-mentoring method is quite robust to changes in \(K\) for both tasks.
**Number of optimization steps (\(T\)).** We also investigate the impact of the number of optimization steps on the performance of our method. As indicated in Figure 5, the method is robust to changes in the number of optimization steps for both Ant and TFB8 tasks.
In summary, our sensitivity analysis demonstrates that the _tri-mentoring_ method is robust to variations in key hyperparameters, ensuring stable performance across a range of settings.
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline Task & _tri-mentoring_ & w/o _voting-based ps_ & w/o _ada soft-labeling_ & w/o _neighbor_ & _post selection_ \\ \hline SuperC & \(0.514\pm 0.018\) & \(0.505\pm 0.014\) & \(0.504\pm 0.014\) & \(\mathbf{0.516\pm 0.017}\) & \(0.512\pm 0.011\) \\ Ant & \(\mathbf{0.948\pm 0.014}\) & \(0.938\pm 0.021\) & \(0.945\pm 0.018\) & \(0.945\pm 0.012\) & \(0.945\pm 0.009\) \\ D’Kitty & \(0.966\pm 0.010\) & \(0.956\pm 0.010\) & \(0.947\pm 0.008\) & \(0.958\pm 0.008\) & \(\mathbf{0.970\pm 0.013}\) \\ Hopper & \(\mathbf{1.983\pm 0.110}\) & \(1.902\pm 0.138\) & \(1.916\pm 0.108\) & \(1.839\pm 0.112\) & \(1.901\pm 0.148\) \\ \hline TF Bind 8 & \(0.970\pm 0.001\) & \(\mathbf{0.971\pm 0.003}\) & \(0.944\pm 0.026\) & \(0.950\pm 0.018\) & \(0.949\pm 0.006\) \\ TF Bind 10 & \(\mathbf{0.722\pm 0.017}\) & \(0.694\pm 0.030\) & \(0.710\pm 0.025\) & \(0.643\pm 0.009\) & \(0.635\pm 0.027\) \\ NAS & \(\mathbf{0.759\pm 0.102}\) & \(0.509\pm 0.074\) & \(0.538\pm 0.082\) & \(0.666\pm 0.089\) & \(0.519\pm 0.076\) \\ \hline \hline \end{tabular}
\end{table}
Table 3: Ablation studies on _tri-mentoring_.
## 5 Related Work
**Offline model-based optimization.** Recently, two broad groups of methods have emerged for offline model-based optimization. One group is based on generative modeling, where methods aim to gradually adapt a generative model towards high-scoring designs [31, 32, 33]. Another group uses gradient ascent to optimize existing designs; these methods generally incorporate prior information into the proxy before using it for gradient ascent. Examples of this approach include COMs [7], ROMA [23], NEMO [24], BDI [25, 34] and IOM [26]. Our proposed method, _parallel-mentoring_ (instantiated as _tri-mentoring_), belongs to this category. We maintain an ensemble of proxies and aim to incorporate weak ranking signals from any pair of proxies into the third. This symmetric learning process, which cycles through all proxies, enhances the robustness and resilience of our ensemble. Our proposed ensemble training process, with its focus on parallel-mentoring, has the potential to improve proxy and reward training [35, 33], thereby contributing to advancements in biological sequence design. Notably, the contemporaneous ICT method [36] exchanges direct proxy scores, which may be less robust than our pairwise comparison approach.
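As a point of reference, the gradient-ascent family discussed above (e.g., the Grad baseline in Tables 1 and 2) can be sketched as follows; the optimizer, step count and learning rate are illustrative placeholders rather than settings of any particular method.

```python
import torch

def gradient_ascent_design(proxy, x0, steps=200, lr=1e-3):
    """Optimize a candidate design by ascending a learned proxy f_theta."""
    x = x0.clone().detach().requires_grad_(True)
    optimizer = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        (-proxy(x)).sum().backward()  # negate the score so that minimizing ascends
        optimizer.step()
    return x.detach()
```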
**Tri-training.** Our work is related to tri-training [37] which trains three classifiers and refines them using unlabeled samples. In each round, an unlabeled sample is labeled for one classifier if the other two agree on the label, under specific conditions. While _tri-mentoring_ is inspired by tri-training, there are fundamental differences between them: (1) Tri-training aims to enhance classification tasks by mitigating label noise, while _tri-mentoring_ focuses on producing a more robust regression ensemble; (2) Tri-training leverages unlabeled data, while _tri-mentoring_ operates on samples near the current optimization point; (3) _Tri-mentoring_ incorporates an _adaptive soft-labeling_ module to mitigate label noise, which is not present in tri-training. In addition to tri-training and _tri-mentoring_, there are other research works [38][39][40] involving multiple proxies cooperating to improve learning. Unlike tri-training and _tri-mentoring_ with three proxies, these methods focus on two proxies.
**Bi-level optimization for hyperparameter optimization.** Bi-level optimization has become increasingly popular for hyperparameter optimization [41, 42, 43, 44, 45, 46] due to the hierarchical problem structure [47]. In the inner level, the relationship between model parameters and hyperparameters is established by minimizing the training loss. Meanwhile, the outer level optimizes hyperparameters through the connection built at the inner level by minimizing the validation loss. A specific category of hyperparameters is soft-label [48, 49], which is updated under the guidance of noise-free data to reduce noise. In this paper, we propose _adaptive soft-labeling_ to reduce noise in the consensus labels.
**Ensemble learning.** Ensemble learning techniques train multiple base learners and aggregate their predictions to achieve better performance and generalization than individual base learners alone [8, 9, 50, 10, 11, 51, 52]. These methods can be broadly classified into boosting [53, 54], bagging [55], and stacking [56]. In contrast to our proposed _tri-mentoring_, where multiple proxies collaborate to enhance the learning process, ensemble learning mainly involves interaction during the aggregation phase, without influencing each other's training.
**Pairwise learning to rank.** Learning to rank [57, 16] has been extensively employed by commercial search engines for ranking search results [58, 59]. Unlike pointwise methods that score inputs, pairwise methods focus on relative order, aligning more with ranking concepts [57]. Recent research [60] applies pairwise binary cross-entropy loss for training reward models in reinforcement learning, a technique used in ChatGPT [18]. Our work expresses a proxy's ranking ability through pairwise comparison labels, generating consensus labels via majority voting to enable mutual mentoring among proxies.
## 6 Conclusion
In this work, we introduce _parallel-mentoring_ to enhance ensemble robustness against out-of-distribution issues through mutual mentoring among proxies. Focusing on a three-proxy case, we instantiate this as _tri-mentoring_, with two modules: _voting-based pairwise supervision_ for generating consensus labels, and _adaptive soft-labeling_ which mitigates label noise through bi-level optimization. Experimental results validate our approach's effectiveness. We discuss potential negative impacts in Appendix A.9 and limitations in Appendix A.10.
## 7 Acknowledgement
We thank CIFAR for support under the AI Chairs program. This research was empowered in part by the computational support provided by Compute Canada (www.computecanada.ca).
|
2306.17766 | Comparing Reinforcement Learning and Human Learning using the Game of
Hidden Rules | Reliable real-world deployment of reinforcement learning (RL) methods
requires a nuanced understanding of their strengths and weaknesses and how they
compare to those of humans. Human-machine systems are becoming more prevalent
and the design of these systems relies on a task-oriented understanding of both
human learning (HL) and RL. Thus, an important line of research is
characterizing how the structure of a learning task affects learning
performance. While increasingly complex benchmark environments have led to
improved RL capabilities, such environments are difficult to use for the
dedicated study of task structure. To address this challenge we present a
learning environment built to support rigorous study of the impact of task
structure on HL and RL. We demonstrate the environment's utility for such study
through example experiments in task structure that show performance differences
between humans and RL algorithms. | Eric Pulick, Vladimir Menkov, Yonatan Mintz, Paul Kantor, Vicki Bier | 2023-06-30T16:18:07Z | http://arxiv.org/abs/2306.17766v1 | # Comparing Reinforcement Learning and Human Learning using the Game of Hidden Rules
###### Abstract
Reliable real-world deployment of reinforcement learning (RL) methods requires a nuanced understanding of their strengths and weaknesses and how they compare to those of humans. Human-machine systems are becoming more prevalent and the design of these systems relies on a task-oriented understanding of both human learning (HL) and RL. Thus, an important line of research is characterizing how the structure of a learning task affects learning performance. While increasingly complex benchmark environments have led to improved RL capabilities, such environments are difficult to use for the dedicated study of task structure. To address this challenge we present a learning environment built to support rigorous study of the impact of task structure on HL and RL. We demonstrate the environment's utility for such study through example experiments in task structure that show performance differences between humans and RL algorithms.
## 1 Introduction
Reinforcement learning (RL) [38] benchmarks often come directly from human benchmarks (e.g., games) or are designed to mimic complex reasoning tasks a human might encounter. These increasingly complex benchmark environments have been used to improve the capability of RL algorithms, leading to impressive achievements in domains such as board games [31; 39; 34; 35; 33; 36] and video games [25; 18; 43; 27]. Unfortunately, while the difficulty of such environments motivates greater RL capabilities, their complexity often makes them ill-suited for the rigorous study of task structure; that is, how the logical structure of a learning task affects learning performance. A task-oriented understanding of RL methods' strengths and weaknesses remains an important gap in the informed deployment of RL tools. In particular, such an understanding would allow decision-makers to better relate findings from benchmark environments to the specific attributes of their own problem settings.
Increasingly, practitioners must also decide how to integrate RL tools alongside humans. A granular, task-oriented understanding of _both_ human learning (HL) and RL is necessary to design systems where humans and algorithms best complement one another. Changes in task attributes that make learning easier for either HL or RL but harder for the other may suggest cases where human-machine learning pairs can be effective [14; 4; 5]. Direct HL-RL comparisons can also identify helpful human priors or heuristics for future algorithm development. An important step in this line of research is the development of learning environments that support the study of task structure for both HL and RL.
In this paper we study learning tasks within the classical RL setting, where agents learn through sequential interaction with an uncertain environment. Specifically, we consider a new benchmark environment that can be used to systematically assess how the logical structure of learning tasks affects the performance of HL and RL (accounting for hyperparameter selections, feature representations, etc.). Our environment has several advantages over existing environments for the dedicated study of task structure. Existing environments vary in multiple ways, making direct performance comparisons
challenging to interpret. Likewise, due to their complexity, it is difficult to use variations of existing environments to create generalizable findings about the impact of task structure. For example, while the objective of chess is clear, the underlying learning task is a complex composition of the board structure, game mechanics, and adversarial dynamics. An experiment might explore variations to this learning task by modifying the movement patterns of different pieces [40]. However, without a clear understanding of how the existing elements of chess contribute to learning performance, it is unclear how to incorporate these results into a generalized understanding of the impact of task structure.
**Contributions** We present a novel RL environment, called the Game of Hidden Rules (GOHR), that allows researchers to rigorously investigate the impact of task structure on learning performance. This extends preliminary work on this environment [30], as discussed in Appendix A.1. The GOHR complements existing learning environments and distinguishes itself as a useful tool for the study of task structure in three substantive ways. First, each hidden rule encodes a clearly defined logical pattern as the learning objective, allowing researchers to draw systematic distinctions between learning tasks. Second, GOHR's rule syntax allows for fine variations in task definition, enabling experiments that study controlled differences in learning tasks. Third, GOHR's rule syntax introduces a vast space of hidden rules for study, ranging from trivial to complex, providing an appropriate starting point for the study of task structure. We demonstrate the use of the GOHR through two example experiments in task structure that compare human learners to sample RL algorithms.
## 2 Related work
**RL environments** As noted, recent improvements in RL algorithms can be credited to an explosion in simulation-based benchmarking environments. Tools such as the Arcade Learning Environment [6], openAI gym [7], modern video games [20; 42; 13; 45], and procedurally generated environments [11; 19; 22; 32] have spurred RL development through increasingly complex and realistic problem settings. For instance, procedurally generated environments motivate more robust learning algorithms that can better handle variations to the environment. Other environments highlight cooperative or adversarial challenges unique to multi-agent settings [37], further expanding the breadth of tasks RL algorithms face. Collectively, these environments raise expectations for state-of-the-art RL algorithm performance. However, their emphasis on challenging high-end capabilities of algorithms often makes them difficult starting points for fundamental studies into the impact of task structure. The GOHR is intended to address this unmet need in the space of RL environments by allowing researchers to design precise experiments investigating the impact of task structure on learning.
**Analysis of RL performance** Islam et al. [17] and Henderson et al. [15] initiated important efforts to assess the reproducibility of RL performance and look more deeply at the effects of different internal design choices (e.g., network architectures) on performance [44; 2; 3]. Such studies offer a great deal of practical insight related to algorithm design choices, but do not generally clarify task-oriented differences in tested benchmark environments. Among broad efforts to compare algorithm performance across benchmarks, the bsuite, introduced by Osband et al. [28], is most closely aligned with our line of research. The bsuite identifies high-level desired characteristics of effective learning agents (generalization, exploration, and handling of long-term consequences) and gathers a set of benchmark environments to assess the performance of different algorithms against these characteristics. While the bsuite concentrates on higher-level performance characteristics, the GOHR provides a complementary testbed focused specifically on the logical structure of the task to be learned and its impact on learning performance. Both approaches mark important steps toward a more nuanced understanding of RL algorithms.
**Comparisons of humans to machine learning** There is a growing number of studies comparing the performance of machine learning (ML) to humans on particular real-world tasks, such as medical imaging [24]. Similarly, the RL benchmarking literature often measures algorithm performance against human-level performance. These analyses provide valuable reference points for the performance of humans and state-of-the-art ML algorithms on particular tasks, but they do not clarify fundamental questions regarding the impact of task structure on learning performance. To our knowledge, there is little research that addresses rigorous ML/HL comparisons _with respect to task structure_. We believe a primary reason for the gap in literature investigating task structure, particularly within RL, is the present lack of environments capable of supporting small, precise, and interpretable changes to tasks. As noted by Hernandez-Orallo [16] and Burnell et al. [9], more granular evaluation metrics are needed to properly interpret ML capabilities; this need for granularity holds when ML capabilities
are compared rigorously to human performance [12]. Deeper investigations into HL/RL responses to task structure may give important insight into algorithm design for more ambitious benchmarks like the Abstract Reasoning Corpus (ARC) [10]. With respect to human-ML comparisons, most similar to our work is that of Kuhl et al. [21], which examines a range of pattern recognition tasks in a supervised learning setting. As in our work, they present a curated set of tasks and demonstrate differences between the performance of human players and various algorithms.
## 3 Game of Hidden Rules
This overview of the GOHR closely follows [30]. Additional details can be found in Appendix A.2. Comprehensive documentation and the complete toolset are available at our public site.
**Game board** The GOHR is played on a \(6\times 6\) grid-style board using game pieces of varying shapes and colors. At the beginning of an episode, the game engine populates the board with game pieces; the player's goal is to clear the board of game pieces by placing them, one at a time, into any of the four buckets located at the corners of the board. Figure 1 provides a diagram of the board where all individual board cells, rows, columns, and buckets are given numeric labels along with a sample board as a human player would see it. Note that the collection of shapes and colors used for a given experiment is entirely configurable by the researcher, with a default set of four shapes and four colors. If desired, researchers can construct exact board layouts in advance of play (to be seen randomly or in a specified order), otherwise pieces are generated randomly per a set of input parameters. This flexibility allows the experimenter to design experiments addressing the learning curricula itself (e.g., to determine if seeing particular game pieces affects the performance of the learner for a given rule). Additional details are provided in Appendix A.2.1.
**Hidden rules** A hidden rule, known to the researcher but not to the player, determines which pieces may be placed into which buckets. For example, a rule might allow pieces into certain buckets based on their shape, color, or position on the board. When the player makes a move (i.e., tries placing a particular game piece into a bucket), they receive immediate feedback on whether the move is allowed; if the move is allowed, the piece is removed from play, otherwise the piece returns to its original place on the board. Hidden rules are constructed from one or more **rule lines**, each of which is built from one or more **atoms**. For instance, a two-line rule with five atoms might look like:
\begin{tabular}{l} (atom 1) (atom 2) (atom 3) \\ (atom 4) (atom 5) \\ \end{tabular}
Only one rule line is **active** at a time; this active line determines the current **rule state** (how game pieces may be placed into buckets for the player's current move). In the example above, the rule state is formed by the contents of either atoms 1, 2, and 3 or atoms 4 and 5, depending on which line is active. Each atom maps a set of game pieces to a set of permitted buckets and is defined as follows:
\begin{tabular}{l} (count, shapes, colors, positions, buckets) \\ \end{tabular}
Any game pieces matching the _shapes_, _colors_, and _positions_ specified in the atom are accepted in any element of its set of _buckets_. The _count_ parameter defines the number of successful moves for which the atom remains valid and is used in rules where the rule state changes during play. Multiple values can be listed for each non-count field and are grouped in brackets. A simple example where stars and triangles always go in the top-left bucket (0) while circles and squares always go in the bottom-right bucket (2), regardless of their color or position, can be expressed with two atoms on one rule line:
(*, [star,triangle], *, *, 0) (*, [circle,square], *, *, 2)
Figure 1: Game board diagram (left) and a sample board with four shapes and colors (right).
The \(*\) character is a wildcard. For the count field, \(*\) means the atom is always valid; for shape, color, or position, it means any value for that feature is permissible. A set of helpful rule examples, illustrating the expressiveness of the rule syntax, can be found in Appendix A.2.2. Broadly, rules within the GOHR can be divided into two categories: **stationary** and **non-stationary**. Stationary rules are those in which the rule state does not change during game play. In such rules, whether a move is permitted does not depend on the state of the board or past actions made by the player. These rules can be used to evaluate a player's ability to learn strictly feature-based patterns. The example noted above is stationary; the board state and move history do not impact where game pieces are permitted. In contrast, non-stationary rules are those in which the rule state changes during play, meaning that permitted moves will depend on the state of the board or past successful moves the player has made. Non-stationary rules embed temporal components into the logical pattern the player must learn. The rules presented in Appendix A.2.2 further describe the mechanics available to researchers in creating non-stationary rules. Importantly, we note that non-stationary rules update the rule state only after successful moves. The rule state is unchanged after an incorrect move.
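As an illustration of the atom semantics described above, a hypothetical Python rendering is given below; the data structures and names are purely illustrative and do not correspond to the GOHR engine's actual implementation.

```python
from dataclasses import dataclass

WILDCARD = "*"

def _matches(field, value):
    # A field is either the wildcard or a collection of admissible values.
    return field == WILDCARD or value in field

@dataclass
class Atom:
    count: object      # "*" or the number of successful moves the atom stays valid
    shapes: object
    colors: object
    positions: object
    buckets: list

    def permits(self, shape, color, position, bucket):
        return (_matches(self.shapes, shape) and _matches(self.colors, color)
                and _matches(self.positions, position) and bucket in self.buckets)

# The example rule line from the text: stars/triangles go to bucket 0,
# circles/squares go to bucket 2, regardless of color or position.
rule_line = [Atom("*", ["star", "triangle"], "*", "*", [0]),
             Atom("*", ["circle", "square"], "*", "*", [2])]
move_allowed = any(a.permits("star", "red", 17, 0) for a in rule_line)  # True
```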
## 4 Example rules
In this section, we introduce the rules used in our experiments. Our first experiment explores a few **stationary** and **non-stationary** learning task structures expressible within the GOHR. We consider the following rules (see Appendix A.3 for related rule syntax and our public site to play these rules):
* **Shape Match (SM):** Each of the four shapes is mapped uniquely to a bucket (i.e., stars go in bucket 0, triangles in bucket 1, squares in bucket 2, circles in bucket 3).
* **Quadrant Nearby (QN):** Pieces in each board quadrant are mapped uniquely to the bucket nearest to that quadrant.
* **Bottom-left then Top-right (BLTR):** The rule state alternates between allowing a piece in the bottom-left bucket (3) and allowing a piece in the top-right bucket (1).
* **Clockwise (CW):** The first piece must be placed in the top-left bucket (0) and each subsequent piece must be placed in the next clockwise bucket (i.e., the pattern follows buckets 0-1-2-3).
Note that these rules are equivalently difficult at random; in each, a random policy will always have a \(\nicefrac{{1}}{{4}}\) chance of making a correct move, regardless of the board state or past moves. The rules, however, task the player with learning patterns with different logical structures. Shape Match and Quadrant Nearby are stationary and use a single game piece feature (shape or position). Bottom-left then Top-right and Clockwise are non-stationary, encoding sequences of length 2 and 4, respectively. Our aim with this experiment is to demonstrate how HL and RL may respond differently to specific variations in task structure. Identifying how specific logical structures within learning tasks affect performance for RL or HL players will better inform the analysis of more complex environments, where learning tasks may be compositions of fundamental logical structures expressible in the GOHR.
Our second experiment addresses a characteristic of learning tasks that we call **rule generality**. Broadly, rule generality reflects that multiple policies may be effective for a particular rule. More rigorously, let a player's **move policy** be the policy by which they generate their moves. Such a policy is **sufficient** if applying this policy to any possible board state and sequence of past actions yields error-free play. A given move policy may be sufficient to many rules and rules may permit many sufficient policies. For example, consider the stationary rule where red and blue game pieces are permitted in buckets 0 and 1 while green and yellow game pieces are permitted in buckets 2 and 3. A sufficient policy could be to select game pieces and associated actions in the following order:
\[\text{red}\rightarrow\text{bucket 0},\quad\text{ blue}\rightarrow\text{bucket 1},\quad\text{ green}\rightarrow\text{bucket 2},\quad\text{ yellow}\rightarrow\text{bucket 3}\]
Any policy relying on these same color-to-bucket mappings would also be sufficient, regardless of the order in which it selects pieces. Further consider arbitrary rules \(\mathcal{A}\) and \(\mathcal{B}\). With respect to generality, \(\mathcal{A}\)**properly dominates**\(\mathcal{B}\) if any sufficient policy for \(\mathcal{B}\) is sufficient for \(\mathcal{A}\) and there exists a sufficient policy for \(\mathcal{A}\) that is not sufficient for \(\mathcal{B}\). We refer to \(\mathcal{A}\) as more general than \(\mathcal{B}\) (denoted \(\mathcal{A}\succ\mathcal{B}\)). To
study the response of players to increasing rule generality, we consider the following variations of the rules given in our first experiment (see Appendix A.3 for expression in GOHR rule syntax):1
Footnote 1: We also studied a set of color-based rules (CM, CM1F, CM2O) mirroring the structure of shape rules SM, SM1F, and SM2O. Results were identical to shape rules for RL players and very similar for humans, see Appendix A.6.1.
* **SM1F:** Shape Match modified so one shape can be placed in any bucket.
* **SM2O:** Similar to Shape Match, except each of the four shapes is mapped to two buckets rather than one.
* **QN2F:** Identical to Quadrant Nearby, except that pieces in two of the quadrants can be placed in any bucket.
* **BT:** The rule state alternates between allowing a piece in either of the bottom two buckets (2,3) and allowing a piece in either of the top two buckets (0,1).
* **CWAF:** Same as Clockwise, but every other move is a free move (i.e., any piece will be accepted in any bucket), per the repeating pattern 0-*-2-*.
* **CW2F:** Same as Clockwise, except the last two moves in the 0-1-2-3 bucket pattern are free moves, i.e., following the repeating pattern 0-1-*-*.
Each rule is constructed to properly dominate a corresponding 'base rule' from the first experiment (e.g., CWAF \(\succ\) CW, CW2F \(\succ\) CW). When we provide two rule variations of a base rule (e.g., CWAF, CW2F), neither is more general than the other. Our aim with this experiment is to study the responses of human players and RL algorithms to increasing generality by comparing performance on more general rule variations to their respective base rules.
## 5 Experimental setup
We describe the human and RL participants in our experiments, experimental procedures, performance metrics, and statistical comparison of our results. Portions of this section closely follow [30].
**Human participants** Human participants in our GOHR experiments came from the Amazon Mechanical Turk platform [8], a popular tool for crowdsourcing tasks and research. Each player received a brief set of instructions about the mechanics of the GOHR and subsequently played 3-7 episodes of the same rule as part of their participation in the experiment. Players were selected such that they had no prior exposure to the GOHR. Approximately 25 participants were assigned to each rule listed in Section 4. In each episode, players received boards randomly populated with 8 or 9 pieces, depending on the rule. Additional information regarding experimental flow, subject counts, payments, and board generation parameters can be found in Appendix A.4.
**RL algorithms** To describe our two sample algorithms, we model the GOHR as a Markov Decision Process (MDP). The state observed by the player at time \(t\), \(S_{t}\in\mathcal{S}\), is described by the sequence of board arrangements (\(B_{i}\)) and associated actions (\(A_{i}\)) leading to the current board, \((B_{0},A_{0},\ldots,B_{t})\). The game engine generator, \(g\), randomly generates the initial board arrangement \(B_{0}\) per input parameters provided for the experiment, \(\beta\), i.e., \(S_{0}=(B_{0})\sim g(\beta)\). The action space \(\mathcal{A}\) is the set of 144 action tuples \((r,c,b)\) given by placing the piece in row \(r\in\{1,\ldots,6\}\) and column \(c\in\{1,\ldots,6\}\) into bucket \(b\in\{0,\ldots,3\}\). If the game engine evaluates that action \(A_{t}\) is permitted by the rule, the corresponding piece is removed in board arrangement \(B_{t+1}\), otherwise \(B_{t+1}=B_{t}\). Note that state transitions are deterministic given the player's action, according to the logic of the hidden rule. A player receives reward \(R_{t}(S_{t},A_{t})=0\) if action \(A_{t}\in\mathcal{A}\) from state \(S_{t}\) is permitted by the hidden rule and \(-1\) otherwise. A terminal state is reached when the board is cleared, denoted by \(t=T\). The player's objective is to find a policy \(\pi(s)\) that maximizes the value function \(v_{\pi}(s)=\mathbb{E}_{\pi}[\sum_{k=t}^{T}\gamma^{k-t}R_{k}(s,\pi(s))|S_{t}=s]\) over all \(s\in\mathcal{S}\) reachable from \(S_{0}\sim g(\beta)\), where \(\gamma\in(0,1)\) is a discount factor.
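Under this MDP, a single environment transition can be sketched as follows; representing the board as a dictionary from cells to pieces and passing the hidden rule's verdict as a boolean are simplifications made for illustration.

```python
def gohr_step(board, action, permitted):
    """One transition: reward 0 for an accepted move, -1 otherwise."""
    r, c, b = action
    if permitted and (r, c) in board:
        # An accepted move removes the piece from the board.
        next_board = {cell: piece for cell, piece in board.items() if cell != (r, c)}
        reward = 0.0
    else:
        # A rejected move leaves the board unchanged.
        next_board, reward = dict(board), -1.0
    done = len(next_board) == 0
    return next_board, reward, done
```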
We used two sample RL algorithms for comparison to human players. As an example of a policy-gradient based method, we employed a variant of the canonical REINFORCE algorithm [46]. For a sample value-based method, we used a variant of epsilon-greedy DQN with experience replay [25]. \(\mathcal{S}\) is too large to deal with directly and thus we constructed feature maps \(\phi_{i}(s)\) to make the problem tractable. Our goal in constructing \(\phi_{i}\) was to faithfully represent \(\mathcal{S}\) without artificially biasing algorithm performance for any particular subset of rules. As part of our experiments, we explored numerous feature maps and found that none universally outperformed the others across our
tested rules (see Section 6). For REINFORCE, we used a neural network, parameterized by \(\theta\), to represent the learner's policy. Similarly for DQN, we used such a neural network to approximate the state-action value function \(Q(S,A)\). Complete descriptions of these algorithms, corresponding hyperparameter selections, feature maps, and experimental flow can be found in Appendix A.5. For a fair comparison to humans encountering the GOHR for the first time, we measured the performance of these RL algorithms with no pre-training. Each **learning run** for a given algorithm and rule consisted of initialization of the model with random network weights followed by serial play of a set number of episodes of the same rule. Learning runs used separate random seeds for the algorithm and the game-engine board generator. For each algorithm, we performed 50 independent learning runs on each rule provided in Section 4.
**Performance metrics and statistical comparison** Rules may permit many sufficient policies; we measured performance based on a learner's ability to exhibit _any_ sufficient policy for the rule. Due to the limited participation time of human players, we measured a human player's understanding of a rule using streaks of consecutive correct moves that are sufficiently unlikely to occur at random. The base rules give players a \(\nicefrac{{1}}{{4}}\) chance of making a correct move at random; we chose a threshold of 10 correct moves in a row as it corresponds to a random probability of roughly \(1\times 10^{-6}\). We define the point metric \(m^{*}\) to be the index of the first move in the first streak of 10 or more correct moves that a player demonstrates. If a player never achieves such a streak, they are assigned an arbitrary placeholder value for \(m^{*}\) that is higher than all measured \(m^{*}\) values across the population of human players. Larger values of \(m^{*}\) are interpreted to mean greater difficulty as they correspond to the player needing more moves to obtain an understanding of the rule.
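A minimal sketch of how \(m^{*}\) can be computed from a boolean sequence of move outcomes is shown below (moves are 1-indexed; the streak length of 10 follows the definition above).

```python
def first_streak_start(correct_moves, streak_len=10):
    """Return m*: the index of the first move of the first run of
    `streak_len` consecutive correct moves, or None if no such run exists."""
    run = 0
    for move_idx, ok in enumerate(correct_moves, start=1):
        run = run + 1 if ok else 0
        if run == streak_len:
            return move_idx - streak_len + 1
    return None
```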
In contrast to humans, we can prompt RL algorithms to play a fixed number of episodes large enough to exhaustively evaluate policy sufficiency. We define the point metric of terminal cumulative error (TCE) to be the cumulative error count made across all episodes in a learning run. If the algorithm has reached an understanding of the hidden rule, this error count is expected to converge to a constant, with some allowance made for algorithm stochasticity (see Appendix A.5.6 for chosen convergence criteria). If a learning run does not meet our convergence criteria, we set the TCE for that learning run to be a placeholder value larger than all convergent TCE values. We chose fixed episode horizons of 4,000 and 60,000 for learning runs of DQN and REINFORCE, respectively; learning runs typically converged well before these horizons, but use of a common horizon allowed for fair comparison of learning runs that needed more episodes to converge. As with \(m^{*}\), larger values of TCE are interpreted as the result of more difficult learning tasks. The metrics \(m^{*}\) and TCE summarize the performance of each learning run, as shown in Figure 2. For each type of player (humans, DQN, and REINFORCE), we gather their associated \(m^{*}\) or TCE values for all learning runs on all rules and we compare their distributions using the non-parametric Mann-Whitney U-Test [26]. Statistical tests are performed at the \(\alpha=0.05\) significance level.
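The statistical comparison itself can be sketched with SciPy as below, where the inputs would be the per-player \(m^{*}\) values or per-run TCE values (including placeholder values) of the two populations being compared.

```python
from scipy.stats import mannwhitneyu

def compare_distributions(sample_a, sample_b, alternative="two-sided", alpha=0.05):
    """Non-parametric Mann-Whitney U-test between two samples of m* or TCE."""
    statistic, p_value = mannwhitneyu(sample_a, sample_b, alternative=alternative)
    return statistic, p_value, p_value < alpha
```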
## 6 Results
Figure 2: Sample learning runs for a human (left) and DQN (right), plotting cumulative error count against the move and episode indices, respectively. The human’s learning run is summarized by an \(m^{*}\) of 34, the first move of the first streak of 10+ correct moves. The DQN’s learning run is summarized by a TCE of 305, the error count after 4000 episodes.
**Comparison of feature representations** As noted, our RL experiments considered different feature representations of the observed state. To study the impact of memory on performance, we tested input feature maps that included 2, 4, 6, or 8 previous board states and actions. This testing further included feature maps with different representations of both the past boards and actions themselves. For example, a past action could be represented as a 144-long one-hot vector or by three one-hot
vectors (6-long for row, 6-long for column, and 4-long for bucket). We refer to the former as a sparse representation and the latter as a dense representation; we extend similar notions to representations of past boards. We found that no feature map universally outperformed the others across all tested rules; different choices of memory and board/action representation yielded different performance tradeoffs. In general, additional memory improved performance on non-stationary rules but worsened performance on stationary rules. With some exceptions, dense representations tended to outperform sparse representations. Additionally, while REINFORCE performed poorly in comparison to DQN, we noted that both algorithms showed similar trends in performance with respect to different feature representations. We provide a complete discussion of performance using different feature maps in Appendix A.7. In order to provide a single point of comparison for each method to humans in the following experiments, we select a feature map that showed a good balance of performance across our tested rules for both algorithms: dense board and action representations with 6 steps of memory.
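The sparse and dense action encodings referred to above can be illustrated as follows; the particular index ordering used in the sparse case is an assumption of this sketch.

```python
import numpy as np

def encode_action_sparse(r, c, b):
    """Single 144-way one-hot over (row, column, bucket); r, c in 1..6, b in 0..3."""
    one_hot = np.zeros(6 * 6 * 4, dtype=np.float32)
    one_hot[((r - 1) * 6 + (c - 1)) * 4 + b] = 1.0
    return one_hot

def encode_action_dense(r, c, b):
    """Concatenation of three small one-hots (6 + 6 + 4 = 16 entries)."""
    row = np.zeros(6, np.float32)
    col = np.zeros(6, np.float32)
    bucket = np.zeros(4, np.float32)
    row[r - 1] = col[c - 1] = bucket[b] = 1.0
    return np.concatenate([row, col, bucket])
```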
**Comparison of base rules** Our first experiment introduced a set of 'base rules' for study (SM, QN, CW, BLTR). Figure 3 summarizes performance on these rules using empirical cumulative distribution function (ECDF) plots of \(m^{*}\) for humans and strip plots of TCE for DQN and REINFORCE. Note that lower ECDF curves indicate greater difficulty as they imply that fewer players achieve an \(m^{*}\) streak at a given move index. Higher values of TCE indicate greater difficulty as more mistakes occur before reaching a sufficient policy. See Appendix A.6.3 for a complete tabulation of p-value results from two-sided Mann-Whitney U-tests for all base rule pairs and learners.
First, we note the similarity of human performance across the base rules, despite differences in learning task structure. Only one rule pairing, QN-BLTR, showed a statistically significant difference in human performance, with BLTR appearing more difficult. In contrast, both DQN and REINFORCE showed statistically significant performance differences for all rule pairs, even after applying a conservative Bonferroni correction. Although DQN outperformed REINFORCE (measured by TCE), both algorithms exhibited the same rule difficulty ordering: QN, BLTR, CW, SM (easiest to hardest, with SM noticeably more difficult). The non-stationary rules followed an expected ordering based on underlying sequence length: BLTR, a 2-long sequence, was easier than CW, a 4-long sequence. Regarding stationary rules, we did not expect SM to be so difficult, especially compared to QN. We believe the difficulty of SM reflects a subtle interaction between our feature representations and the details of the learning task. While our boolean board representations are an intuitive way to create a static size characterization of the board, they favor the learning of position-based patterns over shape- or color-based patterns. In particular, the one-hot encoding fails to provide a notion of similarity for identical shapes or colors in different board positions. This type of finding might be easily overlooked in settings outside the GOHR, where the relevant learning tasks are more opaque and systematic investigation of task structure is not the primary focus.
These results suggest meaningful differences between HL and RL in their response to varying task structures. In particular, human performance likely depends on both the structure of the task and its relation to human priors for patterns. The plausible closeness of these rules to common priors (e.g. clockwise) would explain the similar human performance we observed across rules despite their structural differences. RL players, on the other hand, responded strongly to differences in the logical structure of learning tasks and performed identically for logically equivalent rules, such as SM
and its color-based equivalent CM (see Appendix A.6.1). The GOHR serves as an ideal testbed for deeper investigations into such differences in task structure. For example, CW represents one possible instance of a 4-long repeating pattern; a dedicated experiment might explore other 4-long patterns to measure how human priors affect performance across rules of equivalent logical structure. Likewise, such an experiment could also include 2-, 3-, or 5-long patterns to precisely measure how human and RL player performance varies with respect to incremental changes in the logical structure itself. Similar approaches could be used to explore performance differences within families of stationary rules and, more generally, to identify the strengths and weaknesses of both learners with respect to fundamental elements of task structure. Further, even from this example experiment we see that the difficulty ordering of a common set of rules is not shared for human and RL players, suggesting that human-machine learning pairs might be constructed to exploit the differing strengths of each.
Figure 3: Base rule performance of humans (left) and RL players (right). ECDF curves denote the fraction of human players achieving an \(m^{*}\) streak by a given move index on each rule (‘Never’ indicates player does not achieve such a streak). Strip plots of TCE distributions of each rule are provided for DQN and REINFORCE, separated due to different TCE magnitudes (‘C.N.M.’ indicates convergence criteria were not met for that learning run). Each dot corresponds to a learning run.
**Impact of rule generality** Our second experiment introduced more general variants of our base rules. Families with three rules (e.g., base rule \(\mathcal{A}\) and more general rule variants \(\mathcal{B},\mathcal{C}\)) offer two generality comparisons (\(\mathcal{B}\succ\mathcal{A}\) and \(\mathcal{C}\succ\mathcal{A}\)), while families with two rules offer one comparison (\(\mathcal{B}\succ\mathcal{A}\)). Figure 4 summarizes human performance within each rule family with ECDFs of the \(m^{*}\) distributions. DQN and REINFORCE showed uniformly better performance on more general rule variants (see Appendix A.6.2 for related plots). Table 1 shows the results of the Mann-Whitney U-Tests associated with each generality comparison. We used one-sided tests, with the null hypothesis that the more general rule is no harder than the base rule, as more general rules offer a larger number of sufficient policies. Per the U-tests, we note that RL players uniformly found more general rule variants easier than their base counterparts (the p-values greater than 0.999 would be significant under the opposite direction null hypothesis). In contrast, the response of human players to increasing generality depended on the structure of the base rule. In particular, human players appeared to find the more general forms of our non-stationary rules more difficult than their base rule counterparts (i.e., CWAF and CW2F appear more difficult than CW, BT appears more difficult than BLTR). This is surprising as more general rules offer a higher probability of achieving a streak of 10 correct moves at random. We posit that this difference between humans and our RL players reflects important differences in their respective learning strategies. While greater rule generality might plausibly assist learning by offering a larger number of sufficient policies for the player to learn, it also could hinder learning by decreasing the amount of useful, negative feedback. For primarily inductive learners, such as our sample RL algorithms, it appears that the availability of a larger number of sufficient policies dominates, making these more general rules easier. Humans, however, likely employ some combination of induction and deduction; the additional positive feedback from more general rules may complicate deduction as feedback could agree with many candidate classes of hidden rules. Future studies, with a broader set of base rules, could explore such an effect in greater detail. As in the first experiment, HL and RL did not respond identically to changes in task structure, and our results show that the parallel study of task structure for HL and RL may provide important insight into the strengths and weaknesses of each learner. Although the GOHR deals with relatively abstract task structures, we believe a systematic understanding of performance within the GOHR can provide important perspective in complex environments, where tasks are compositions of many such fundamental elements.
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline & SM1F (\(\succ\) SM) & SM2O (\(\succ\) SM) & QN2F (\(\succ\) QN) & **BT** (\(\succ\) BLTR) & **CWAF** (\(\succ\) CW) & **CW2F** (\(\succ\) CW) \\ \hline Human & 0.496 & 0.620 & 0.322 & 0.014\({}^{\dagger}\) & 0.001\({}^{\dagger}\) & \textless{}0.001\({}^{\dagger}\) \\ DQN & 0.999 & 0.999 & 0.999 & 0.999 & 0.999 & 0.999 \\ REINFORCE & 0.999 & 0.999 & 0.999 & 0.999 & 0.999 & 0.999 \\ \hline \hline \end{tabular}
\end{table}
Table 1: P-value results of one-sided U-Tests comparing each more general rule to its base rule counterpart. Tests use \(m^{*}\) and TCE as point metrics for humans and algorithms, respectively. Null hypothesis is always that the more general rule is no harder with alternative hypothesis that the more general rule is harder. Significant results at the \(\alpha=0.05\) level are highlighted with a \(\dagger\) and columns with contrasting HL/RL behavior are shown in bold.
## 7 Conclusion
We have shown that the GOHR provides a capability for studying the performance of HL and RL in a novel and principled way. Using the GOHR's expressive rule syntax, researchers can make precise changes to learning tasks in order to study their effects on human and RL algorithm performance. The GOHR complements existing environments by empowering researchers to perform rigorous experiments into different learning task structures. Beyond the kind of experiments presented here, the GOHR could also be used for related studies such as teaching curricula, transfer learning, or human-machine learning pairs. Task-oriented experiments augment efforts to improve the overall capabilities of RL algorithms by furthering our understanding of these methods' strengths and weaknesses. Most importantly, this type of study provides a step toward task-oriented understandings of RL _and_ HL, both of which are needed to better inform the real-world use of RL. With this goal in mind, we are sharing the complete suite of tools with all interested researchers. We hope that researchers using the GOHR will share their findings to help this inquiry.
## Acknowledgments and Disclosure of Funding
Support for this research was provided by the University of Wisconsin-Madison Office of the Vice Chancellor for Research and Graduate Education with funding from the Wisconsin Alumni Research Foundation, and by the National Science Foundation under Grant No. 2041428. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the sponsors. We acknowledge numerous stimulating discussions with collaborators in the broader project: Professors Xiaojin (Jerry) Zhu and Gary Lupyan; Drs. Ellise Sufffill and Charles Davis; and doctoral student Yuguang (Aria) Duan. We thank Mr. Kevin Mui for building the first instance of the GOHR, to support research on human learning. We also thank doctoral students Shubham Bharti and Yiding Chen for their work on past machine learning experiments and associated code.
|
2309.13385 | Cine cardiac MRI reconstruction using a convolutional recurrent network
with refinement | Cine Magnetic Resonance Imaging (MRI) allows for understanding of the heart's
function and condition in a non-invasive manner. Undersampling of the $k$-space
is employed to reduce the scan duration, thus increasing patient comfort and
reducing the risk of motion artefacts, at the cost of reduced image quality. In
this challenge paper, we investigate the use of a convolutional recurrent
neural network (CRNN) architecture to exploit temporal correlations in
supervised cine cardiac MRI reconstruction. This is combined with a
single-image super-resolution refinement module to improve single coil
reconstruction by 4.4\% in structural similarity and 3.9\% in normalised mean
square error compared to a plain CRNN implementation. We deploy a high-pass
filter to our $\ell_1$ loss to allow greater emphasis on high-frequency details
which are missing in the original data. The proposed model demonstrates
considerable enhancements compared to the baseline case and holds promising
potential for further improving cardiac MRI reconstruction. | Yuyang Xue, Yuning Du, Gianluca Carloni, Eva Pachetti, Connor Jordan, Sotirios A. Tsaftaris | 2023-09-23T14:07:04Z | http://arxiv.org/abs/2309.13385v1 | # Cine cardiac MRI reconstruction using a convolutional recurrent network with refinement
###### Abstract
Cine Magnetic Resonance Imaging (MRI) allows for understanding of the heart's function and condition in a non-invasive manner. Undersampling of the \(k\)-space is employed to reduce the scan duration, thus increasing patient comfort and reducing the risk of motion artefacts, at the cost of reduced image quality. In this challenge paper, we investigate the use of a convolutional recurrent neural network (CRNN) architecture to exploit temporal correlations in supervised cine cardiac MRI reconstruction. This is combined with a single-image super-resolution refinement module to improve single coil reconstruction by 4.4% in structural similarity and 3.9% in normalised mean square error compared to a plain CRNN implementation. We deploy a high-pass filter to our \(\ell_{1}\) loss to allow greater emphasis on high-frequency details which are missing in the original data. The proposed model demonstrates considerable enhancements compared to the baseline case and holds promising potential for further improving cardiac MRI reconstruction.
Keywords:Cardiac MRI Reconstruction MRI Acceleration MRI Refinement CRNN
## 1 Introduction
Cardiac magnetic resonance imaging is a powerful, non-invasive tool to aid visualise the heart's chambers, valves, blood vessels and surrounding tissue. To gain a 3D depiction of the heart, a sequential acquisition process of 2D slices is used, with the scanning duration increasing with the number of slices and temporal resolution desired. Thus for detailed scanning, multiple cardiac cycles must be monitored and the duration of the MRI process can consequently exceed the patients ability to remain steady and hold their breath. By undersampling in the \(k\)-space data acquisition process, the scan time can be substantially reduced
at the cost of missing information that must be interpolated. Deep learning achieves \(k\)-space reconstruction with greater prior knowledge for the regularisation term that covers the missing \(k\)-space domain, without requiring an iterative optimisation process and hence greatly accelerating the reconstruction rate.
Various architectures have been explored for MRI reconstruction, including convolutional neural networks (CNNs) and U-Nets [7, 8, 12, 21], variational networks [5] and generative adversarial networks [13, 22]. Other deep learning methods that exploit prior knowledge and extend traditional iterative methods include the model-based deep-learning architecture [1] and deep density priors [18]. Enhancing cine MRI through deep learning involves not only capitalising on the spatial relationships acquired from a given dataset but also leveraging temporal correlations. This has been evidenced across various model architectures [10, 13, 19, 25] as well as through registration-based [23] and motion-guided alignment [6] approaches. In [16], data sharing layers were incorporated in a cascaded CNN, whereby adjacent time step \(k\)-space data was used to fill the unsampled lines. In [15], recurrent connections are employed across each iteration step as well as bidirectional convolutional recurrent units facilitating knowledge sharing between iterations and input time frames, respectively.
Working within the confines of the challenge, we explored various architectures and found the CRNN block of [15] to perform best within the given limitations in memory and reconstruction time set by the organisers. This was subsequently combined with a lightweight refinement module inspired by single-image super-resolution approaches [3] to perform further de-noising and resolve finer details. The rest of the paper is organised as follows: Sections 2 and 3 describes the problem, dataset, and methodology, Sect. 4 presents the results of experiments and Sect. 5 and 6 provide discussion and conclusions, respectively.
## 2 Problem formulation and dataset
The objective of MRI reconstruction is to address an ill-posed inverse problem, retrieving image information denoted as \(\mathbf{x}\in\mathbb{C}^{N}\) from acquired undersampled signals \(\mathbf{y}\in\mathbb{C}^{K}\), where \(K\ll N\). This procedure can be depicted using a linear forward operator \(\mathbf{E}\), which defines the characteristics of the forward problem:
\[\mathbf{y}=\mathbf{E}\mathbf{x}+\epsilon. \tag{1}\]
Eq. 1 represents the general form of MRI reconstruction. The goal of reconstruction is to minimise the difference between \(\mathbf{x}\) and the ground truth. Therefore, the reconstruction problem can be defined as follows:
\[\tilde{\mathbf{x}}=\underset{\mathbf{x}\in\mathbb{C}^{N}}{\arg\min}\frac{ \lambda}{2}\|\mathbf{E}\mathbf{x}-\mathbf{y}\|+f_{\theta}(\mathbf{x}). \tag{2}\]
Here, \(f_{\theta}\) denotes a neural network for image reconstruction with trainable parameters \(\theta\) and \(\lambda\) controls the balance between the network and data consistency.
**Data** Our model is evaluated on the CMRxRecon Challenge dataset from the \(26^{\mathrm{th}}\) International Conference on Medical Image Computing and Computer Assisted Intervention. The dataset includes both short-axis (SA) and long-axis (LA) (two-chamber, three-chamber and four-chamber) views under acceleration rates of \(4\times\), \(8\times\), \(10\times\). The dataset was obtained following recommended protocols and processing [20, 24], more details of which can be found on the challenge website [4]. The 300-patient dataset is split 120:60:120 between challenge training, validation, and testing respectively. Only the challenge training set contained ground truth reference data, hence this was further split 90:20:10 for training, evaluation, and testing respectively for all models.
#### 2.0.2 Data pre-processing
The unpadded image size varies between widths of 132, 162, 204 & 246 and heights of 448 & 512 pixels. To maintain a consistent input size, we apply zero-padding to an image size of 256 \(\times\) 512 after the inverse fast Fourier transform, with the outputs cropped back after inference.
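A sketch of this padding and cropping step is given below, assuming centred zero-padding of the last two (height, width) dimensions to a height of 512 and width of 256; the exact padding convention in the challenge pipeline may differ.

```python
import torch.nn.functional as F

def pad_to(x, target_h=512, target_w=256):
    """Zero-pad the last two dims and return the padding so it can be undone."""
    h, w = x.shape[-2:]
    pad_h, pad_w = target_h - h, target_w - w
    pads = (pad_w // 2, pad_w - pad_w // 2, pad_h // 2, pad_h - pad_h // 2)
    return F.pad(x, pads), pads

def crop_back(x, pads):
    """Invert pad_to after inference."""
    left, right, top, bottom = pads
    h, w = x.shape[-2:]
    return x[..., top:h - bottom, left:w - right]
```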
When using the approach from [10], the computationally intensive conjugate gradient step was a limiting factor due to the GPU's initial 24GB memory constraint set by the challenge organisers. We thus chose to use the coil combined data rather than the multi-coil format, which allowed use of a simpler data consistency step, at a potential loss of accuracy without using the extra information. Likewise, we used a single channel for the processed image instead of using independent channels for amplitude/phase or real/imaginary components, as adopted by [5, 7, 12, 16].
The SA data are 3-dimensional spatially with an additional time component. It is therefore conceivable that full 4D convolutional kernels could be used to fully utilise spatio-temporal redundancies, but this would result in extremely large memory requirements as discussed in [11]. Furthermore, studies such as [19] have demonstrated that it is preferable to have a larger \(2\mathrm{D}+t\) network than a smaller 3D-input network with equivalent memory consumption. Therefore, due to the large image size, we choose to use time-series batches of 2D depth slices as per [15] rather than \(3\mathrm{D}+t\) or 4D for the long-axis images.
## 3 Methodology
### Model exploration
The initial limits for inference imposed by the organisers were 24GB of GPU VRAM and 4 hours for the reconstruction of the test dataset; these were later relaxed after our initial investigations. Pre-trained models or loss functions were not permitted. Denoising diffusion probabilistic models (DDPMs) were found to take too long in inference, whilst transformer models were found to lead to heavily pixelated reconstructions. Hence, more conventional approaches were tested, building upon an existing repository.
We compared networks similar to the CineNet [10] and CRNN [15] networks. The number of parameters in each network was maximised such that the full 24GB of VRAM would be used in training. A 2D U-Net serves as an additional baseline against which all models are compared. The U-Net is trained on a slice-by-slice basis with 3 cascades and 48 feature-map channels. Weight sharing is used when training the model, and \(\lambda\) is set to be learnable with an initialisation of \(\log(10^{-1})\). The learning rate is \(3\times 10^{-4}\), and the Adam optimiser is used to guide the training process.
### Model architecture
A high-level depiction of the complete architecture of the final model is presented in Fig. 1. The backbone of the proposed architecture is based on the CRNN block detailed in [15]. The first step in the CRNN is a bidirectional convolutional recurrent unit (BCRNN) with three convolution layers: a standard convolution between layers, one convolution between temporal slices, and one between iterations. This is followed by three convolutional recurrent units (CRNN) that evolve only over iterations, before a plain CNN. Finally, residual connections are employed prior to a data consistency term, preserving the information from the sampled data.
Aiming to improve performance, we include an additional BCRNN unit to further exploit spatio-temporal dependencies, followed by a refinement module that denoises the output of the CRNN model and recovers finer details. We deploy a very lightweight single-image super-resolution network, Bicubic++ [3], which maintains short reconstruction times. The refinement module first learns lower-resolution features to decrease computational cost and then performs several convolutions to denoise the image before a final convolutional filter and upscaling back to the original image size. We test the performance of the network with end-to-end and separate learning for each module.
Figure 1: Final model architecture: BCRNN, CRNN, and CNN units with a data consistency (DC) step from [15] for primary reconstruction. "t" and "i" denote time and iterations, respectively. The low-cost refinement module, inspired by [3], includes downsampling (DS), CNN, and upsampling (US) units.
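The sketch below illustrates one possible form of such a lightweight refinement head (downsampling, a short convolutional body, and pixel-shuffle upsampling with a residual connection); the channel counts, depth, and scale factor are illustrative assumptions inspired by Bicubic++ [3], not the exact module used.

```python
import torch
import torch.nn as nn


class RefinementModule(nn.Module):
    """Lightweight refinement head (illustrative sketch, Bicubic++-style).

    Features are learned at a lower resolution to keep cost small, denoised by
    a short stack of convolutions, and upscaled back to the input resolution.
    """

    def __init__(self, in_ch: int = 1, feat: int = 32, n_convs: int = 4, scale: int = 2):
        super().__init__()
        self.down = nn.Conv2d(in_ch, feat, kernel_size=3, stride=scale, padding=1)
        body = []
        for _ in range(n_convs):
            body += [nn.Conv2d(feat, feat, kernel_size=3, padding=1),
                     nn.LeakyReLU(0.1, inplace=True)]
        self.body = nn.Sequential(*body)
        # Final filter followed by upscaling back to the original image size.
        self.up = nn.Sequential(
            nn.Conv2d(feat, in_ch * scale * scale, kernel_size=3, padding=1),
            nn.PixelShuffle(scale),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Residual connection: the module only learns a correction to the CRNN output.
        return x + self.up(self.body(self.down(x)))


# Example: refine a batch of 2D magnitude reconstructions from the CRNN.
crnn_output = torch.randn(4, 1, 512, 256)
refined = RefinementModule()(crnn_output)  # shape (4, 1, 512, 256)
```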
#### 3.2.3 Loss function
In the context of image reconstruction, \(\ell_{1}\) loss, \(\ell_{2}\) loss and SSIM loss are widely used to constrain models for high-quality reconstructed images, but often disregard the complex nature of MRI data. Thus, we investigate a range of losses using an additional loss term, denoted the \(\bot\)-loss [17]. The \(\bot\)-loss adds a phase term which can be combined with \(\ell_{1}\), \(\ell_{2}\) or SSIM losses to address the asymmetry in the magnitude/phase loss landscape. This operates on the polar representation of complex numbers, rather than on two real-valued channels for magnitude and phase, thus taking advantage of the fact that fully symmetric loss functions can improve task performance [14]. For the separate training of the CRNN and the refinement module, \(\bot\)-losses are only utilised for the CRNN output, with \(\ell_{1}\) and SSIM loss functions deployed for the refinement module. For the end-to-end training, \(\ell_{1}\) and SSIM loss are employed to constrain both the CRNN and the refinement module. We split the \(\ell_{1}\) loss by introducing a high-pass frequency filter, allowing us to emphasise the high-frequency content in our reconstructed images to resolve finer details. We denote this as \(\ell_{1\,\mathrm{split}}\).
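To make the \(\ell_{1\,\mathrm{split}}\) idea concrete, the following is a hedged sketch of an \(\ell_{1}\) loss augmented with a high-pass-filtered \(\ell_{1}\) term; the cutoff and weighting are illustrative placeholders rather than the tuned values used in the paper.

```python
import torch


def l1_split_loss(pred: torch.Tensor, target: torch.Tensor,
                  cutoff: float = 0.1, hf_weight: float = 1.0) -> torch.Tensor:
    """l1 loss plus an extra l1 term restricted to high spatial frequencies.

    Sketch only: `cutoff` (fraction of the band treated as "low" frequency)
    and `hf_weight` are illustrative. Inputs are real-valued (B, H, W) images.
    """
    h, w = pred.shape[-2:]
    fy = torch.fft.fftfreq(h, device=pred.device).abs()
    fx = torch.fft.fftfreq(w, device=pred.device).abs()
    # High-pass mask: keep frequencies above the cutoff in either direction.
    hp_mask = (fy[:, None] > cutoff) | (fx[None, :] > cutoff)

    def high_pass(img: torch.Tensor) -> torch.Tensor:
        k = torch.fft.fft2(img, norm="ortho")
        return torch.fft.ifft2(k * hp_mask.to(k.dtype), norm="ortho").real

    base = (pred - target).abs().mean()
    high = (high_pass(pred) - high_pass(target)).abs().mean()
    return base + hf_weight * high
```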
In training, losses were quantified across the entire image, whereas for the validation leaderboard, assessment was limited to the initial 3 time frames and the central sixth portion of the images. The competition metrics were structural similarity index measure (SSIM), normalised mean square error (NMSE) and peak signal-to-noise ratio (PSNR). Hence, whilst the complete reconstructed images often surpassed SSIM values of 0.98, validation scores only reached 0.85.
### Implementation details
Implementation details and our code are available at: [https://github.com/vios-s/CMRxRECON_Challenge_EDIPO](https://github.com/vios-s/CMRxRECON_Challenge_EDIPO)
## 4 Results
#### 4.1.1 Model choice and weight sharing
Fig. 2 shows the training losses of the compared models, demonstrating the stronger performance of the CRNN model. The point of convergence for the CRNN with weight-sharing and the CineNet models is similar, but the CRNN networks start at a much lower loss value. This is despite the non-weight-sharing model (1.1M) having over \(2\times\) more trainable parameters than the 6-cascade CineNet model (0.5M). Between CRNN models, we see more rapid convergence in the weight-sharing model as there are fewer parameters to optimise and a reduced likelihood of early overfitting. However, the lower number of parameters reduces expressive power, and the weight-sharing model is outperformed by the non-weight-sharing model, with insufficient memory gains to justify its use. Using the \(\bot\)-loss only, the weight-sharing model had SSIM of 0.683, NMSE of 0.123 and PSNR of 23.917, performing notably worse than the non-weight-sharing model, as presented in the next section. Fig. 3 shows the reconstruction through the CineNet and the CRNN (with and without weight-sharing) models.
#### 4.1.2 Loss function investigation
Table 1 presents the findings of the loss function investigation using a low-cost CRNN model. Use of the \(\bot\)-\(\ell_{1}\) loss led to an improvement in SSIM and PSNR compared to the \(\ell_{1}\) loss alone, but a slightly higher NMSE. Placing greater emphasis on high-frequency data using a high-pass filter led to improved SSIM but slightly worse NMSE and PSNR.
Figure 3: Reconstruction (top) and associated error maps (bottom) for the initial network investigation. (a) 8 \(\times\) undersampled LAX input (b) fully sampled ground truth (c,d) CineNet model (6 cascades) (e,f) CRNN model (weight-sharing between cascades) (g,h) CRNN model (no weight-sharing).
Figure 2: Log loss during exploratory training of modified CineNet and CRNN (with and without weight-sharing between kernels). Note that the implementation is not identical to the original works.
Further tuning of the ratio of high- to low-frequency content led to better results for the higher-cascade models. Notably, combining SSIM with the \(\bot\)-\(\ell_{1}\) loss was counter-productive for all metrics, suggesting that further tuning of the weighting of each loss component is required.
#### 4.2.2 Introduction of the refinement module
Table 2 highlights the improvements in reconstruction quality made by introducing the refinement module. Deploying the refinement as a separately trained post-processing module shows a notable benefit, improving performance more than adding an additional cascade to the plain CRNN. The end-to-end model yields further improvements over separate training, of 4.4% in structural similarity and 3.9% in normalised mean square error relative to the plain CRNN, in spite of no longer being able to take advantage of the \(\bot\) loss.
Fig. 4 shows qualitatively the improvements made by the introduction of the refinement module at full scale. The error is reduced substantially and some finer details are resolved, though there is still scope for improvement at smaller scales. We generally see that the model is incapable of generating details that are completely lost in the undersampling process.
#### 4.2.3 Validation results
Our final tests prior to submission are presented in Table 3. Across all models, the short-axis reconstruction performs better quantitatively as there is more short-axis data available in training. For both views, the performance reduces with increased undersampling, as more detail is lost.
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline \multirow{2}{*}{Metric} & \multicolumn{5}{c}{Loss function} \\ \cline{2-6} & \(\bot\) & \(\ell_{1}\) & \(\bot\)-\(\ell_{1}\) & \(\bot\)-\(\ell_{1\,\mathrm{split}}\) & \(\bot\)-SSIM-\(\ell_{1\,\mathrm{split}}\) \\ \hline SSIM & 0.712 & 0.741 & 0.752 & **0.753** & 0.739 \\ NMSE & 0.0925 & **0.0646** & 0.0655 & 0.0671 & 0.0719 \\ PSNR & 25.143 & 26.525 & **26.535** & 26.487 & 26.067 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Performance comparisons for different loss function combinations, evaluated on the validation data. A 48-channel 5 cascade CRNN network was used without the refinement module. \(\ell_{1\,\mathrm{split}}\) denotes the \(\ell_{1}\) loss whereby a high-pass filter is used to provide more focus on the high frequency content, in addition to the conventional \(\ell_{1}\) loss.
\begin{table}
\begin{tabular}{c|c c c c} \hline \hline Cascades & 6 & 6 & 6 & 7 \\ Refinement module & None & Sequential & End-to-end & None \\ \hline SSIM & 0.768 & 0.792 & **0.802** & 0.765 \\ NMSE & 0.0516 & 0.0496 & **0.0454** & 0.0535 \\ PSNR & 27.354 & 27.597 & **27.969** & 27.351 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Performance comparisons of various model set-ups. Sequential (separate) and end-to-end training (combined) of the CRNN and refinement module are presented.
\begin{table}
\begin{tabular}{c c c c c c c c} & & & Short-axis & & \multicolumn{3}{c}{Long-axis} \\ AR & Metric & U-Net & Plain & Proposed & U-Net & Plain & Proposed \\ & & Baseline & CRNN & & Baseline & CRNN & \\ \hline \multirow{3}{*}{\(4\times\)} & SSIM & 0.641 & 0.824 & **0.854** & 0.573 & 0.757 & **0.792** \\ & NMSE & 0.180 & 0.0311 & **0.0277** & 0.188 & 0.0485 & **0.0433** \\ & PSNR & 23.084 & 29.842 & **30.295** & 22.040 & 26.958 & **27.540** \\ \hline \multirow{3}{*}{\(8\times\)} & SSIM & 0.637 & 0.796 & **0.829** & 0.574 & 0.723 & **0.763** \\ & NMSE & 0.201 & 0.0428 & **0.0377** & 0.191 & 0.0687 & **0.0586** \\ & PSNR & 22.603 & 28.364 & **29.002** & 22.234 & 25.588 & **26.370** \\ \hline \multirow{3}{*}{\(10\times\)} & SSIM & 0.641 & 0.788 & **0.822** & 0.588 & 0.717 & **0.753** \\ & NMSE & 0.210 & 0.0464 & **0.0408** & 0.198 & 0.0724 & **0.064** \\ \cline{1-1} & PSNR & 22.429 & 28.030 & **28.644** & 22.065 & 25.343 & **25.965** \\ \end{tabular}
\end{table}
Table 3: Performance comparisons on evaluation of CMRxRecon cine cardiac MRI coil combined validation data for different acceleration rates (AR).
Figure 4: Reconstruction (top) and associated error maps (middle) for the U-Net baseline and CRNN models. Finer details (bottom) are not resolved by the U-Net but are partially captured by the plain CRNN model. The refinement module subsequently deblurs the image and provides better resolution at boundaries. (a) \(10\ \times\) undersampled SAX input (b, c) fully sampled ground truth (d, e, f) U-Net (g, h, i) \(6\) cascades with combined refinement (j, k, l) \(7\) cascades, no refinement.
## 5 Discussion
#### 5.0.1 On model choice
Without extensive hyperparameter tuning, the CRNN architecture demonstrated more promising performance than the CineNet, both qualitatively and quantitatively, as shown in Figs. 2 and 3. In the CineNet, the temporal average is subtracted from each slice and the residuals are transformed into \(x-t\) and \(y-t\) planes before being passed through a 2D U-Net structure. The 2D U-Net is computationally lightweight, as the CineNet was originally designed for multi-coil radial acquisitions, but even with an increased number of cascades it fails to resolve finer details, as clearly shown in Fig. 3. The recurrent connections of the CRNN exploit the temporal dependencies between slices more effectively than attempting to capture these relationships through transformation and sparsification.
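For readers unfamiliar with this transform, a minimal sketch of the temporal-average subtraction and re-batching into \(x-t\) and \(y-t\) planes is given below; the shapes and names are illustrative, not the CineNet implementation.

```python
import torch


def to_xt_yt(cine: torch.Tensor):
    """Split a cine volume (B, T, H, W) into temporal-residual x-t and y-t slices.

    Illustrative sketch of the CineNet-style pre-processing described above:
    the temporal mean is removed, and the residual is re-batched so that a
    2D network sees (H, T) and (W, T) planes.
    """
    t_mean = cine.mean(dim=1, keepdim=True)   # (B, 1, H, W) temporal average
    resid = cine - t_mean                     # temporal residuals
    b, t, h, w = resid.shape
    # y-t planes: one slice per x-column -> (B*W, 1, H, T)
    yt = resid.permute(0, 3, 2, 1).reshape(b * w, 1, h, t)
    # x-t planes: one slice per y-row    -> (B*H, 1, W, T)
    xt = resid.permute(0, 2, 3, 1).reshape(b * h, 1, w, t)
    return t_mean, xt, yt


# Example: a batch of 2 cine series with 12 frames of 64 x 48 images.
t_mean, xt, yt = to_xt_yt(torch.randn(2, 12, 64, 48))
```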
However, training of the plain CRNN still resulted in lower perceptual quality than presented in the original implementation, which may have been further improved with more hyperparameter tuning. The large image size of up to 246 \(\times\) 512 proved challenging and, under the original 24GB inference limit, restricted our CRNN implementation to 5 cascades of 48 channels. In the original work [15], the model consisted of 10 cascades with 64 feature channels, operating on smaller data with a smaller GPU.
#### 5.0.2 On the final model
The plain CRNN implementation substantially outperforms the baseline 2D U-Net, which has over double the number of trainable parameters, demonstrating the importance of exploiting temporal correlations. The reconstruction of the plain CRNN is a considerable improvement upon the 10\(\times\) undersampled input; however, the smaller-scale details that are resolved are blurry (e.g. Fig. 4l). The introduction of the low-cost refinement module led to better results with further denoising, as presented in Fig. 4i. This shows promise for lightweight single-image super-resolution models to assist in improving cardiac cine MRI reconstruction, either combined with the main reconstruction model or as a post-processing step. Relative to the ground truth, we still see that finer details obscured by the undersampling process are being missed. In [2], numerous MRI reconstruction experiments were conducted to test current deep learning reconstruction models, and it was proposed that "networks must be retrained on any subsampling patterns" for end-to-end CNN networks. In our approach, we trained all acceleration rates together to obtain a stable but averaged reconstruction in a single model, at the cost of resolving fewer fine details. The failure to generate details that have been lost could be better tackled by a generative model [9]. As such, performance was poorer on patient volumes where more aliasing artefacts were present.
Therefore, training a separate model for each view could potentially improve the final details. However, our model performs relatively well on the validation-stage leaderboard for high acceleration factors, where finer details are more difficult to resolve, whilst we generally perform worse at lower acceleration factors. This suggests that whilst our model
failed to generate some finer details, other architectures also struggled once these details were lost or heavily obscured. There are numerous further modules that could have been implemented, had more time been available.
#### 5.3.2 On the loss function
We found that introduction of the \(\bot\) loss to the \(\ell_{1}\) loss improved both SSIM and PSNR, though at the expense of a slightly reduced \(\ell_{1}\) value itself. Treating the weightings of the loss functions as learnable parameters could lead to improved results in all metrics, as anticipated due to the results presented in [17]. Likewise, the introduction of the high-pass filter loss to focus the \(\ell_{1}\) loss on higher-frequency information increases the complexity of optimisation but was beneficial after the weightings were improved, though not presented quantitatively here.
## 6 Conclusions
In this challenge, we deployed a CRNN network combined with a refinement module to perform MRI reconstruction of cardiac cine data. We train the model for a range of acceleration factors and views, using a high-pass filter to focus our loss on high-frequency details. From the quantitative analysis of the evaluation data and from direct viewing of the validation portion of the training data, the refinement module improves image quality by around 4% in all metrics relative to the plain CRNN implementation. As is typically found, some finer details at smaller scales remain unresolved; these may be improved upon with further hyperparameter tuning and new modules. Nonetheless, the improvement upon the baseline is substantial and our model shows promise for improving cardiac MRI reconstruction.
## Acknowledgements
This work was supported in part by National Institutes of Health (NIH) grant 7R01HL148788-03. C. Jordan, Y. Du and Y. Xue acknowledge additional financial support from the School of Engineering, the University of Edinburgh. S.A. Tsaftaris also acknowledges the support of Canon Medical and the Royal Academy of Engineering and the Research Chairs and Senior Research Fellowships scheme (grant RCSRF1819\(\backslash\)8\(\backslash\)25). The authors would like to thank Dr. Chen and K. Vilouras for inspirational discussions and assistance.
2309.15938 | Exploring Self-Supervised Contrastive Learning of Spatial Sound Event
Representation | In this study, we present a simple multi-channel framework for contrastive
learning (MC-SimCLR) to encode 'what' and 'where' of spatial audios. MC-SimCLR
learns joint spectral and spatial representations from unlabeled spatial
audios, thereby enhancing both event classification and sound localization in
downstream tasks. At its core, we propose a multi-level data augmentation
pipeline that augments different levels of audio features, including waveforms,
Mel spectrograms, and generalized cross-correlation (GCC) features. In
addition, we introduce simple yet effective channel-wise augmentation methods
to randomly swap the order of the microphones and mask Mel and GCC channels. By
using these augmentations, we find that linear layers on top of the learned
representation significantly outperform supervised models in terms of both
event classification accuracy and localization error. We also perform a
comprehensive analysis of the effect of each augmentation method and a
comparison of the fine-tuning performance using different amounts of labeled
data. | Xilin Jiang, Cong Han, Yinghao Aaron Li, Nima Mesgarani | 2023-09-27T18:23:03Z | http://arxiv.org/abs/2309.15938v1 | # Exploring Self-Supervised Contrastive Learning of Spatial Sound Event Representation
###### Abstract
In this study, we present a simple multi-channel framework for contrastive learning (MC-SimCLR) to encode 'what' and 'where' of spatial audios. MC-SimCLR learns joint spectral and spatial representations from unlabeled spatial audios, thereby enhancing both event classification and sound localization in downstream tasks. At its core, we propose a multi-level data augmentation pipeline that augments different levels of audio features, including waveforms, Mel spectrograms, and generalized cross-correlation (GCC) features. In addition, we introduce simple yet effective channel-wise augmentation methods to randomly swap the order of the microphones and mask Mel and GCC channels. By using these augmentations, we find that linear layers on top of the learned representation significantly outperform supervised models in terms of both event classification accuracy and localization error. We also perform a comprehensive analysis of the effect of each augmentation method and a comparison of the fine-tuning performance using different amounts of labeled data.
Xilin Jiang, Cong Han, Yinghao Aaron Li, Nima Mesgarani
Department of Electrical Engineering, Columbia University, USA
Index terms: Spatial audio, Sound event localization and detection, Contrastive learning, Self-supervised learning
## 1 Introduction
The majority of audio pre-training models are centered on learning robust auditory representations, facilitating the identification of 'what' the sound source is [1, 2, 3]. However, a complete representation of audio that can be used in a broader range of applications needs to include spatial attributes, as location is an intrinsic feature of all sound objects. In many applications, including acoustic surveillance, environmental monitoring, augmented reality, and autonomous vehicles, where ambient intelligence and acoustic awareness are desired, it is not sufficient to merely classify what and when sound events happen; we also need to locate them in space. Learning disjoint representations of spectral and spatial properties creates unnecessary problems, such as having to link each sound event with its location. Moreover, a common real-world challenge is the absence of annotations for either spectral or spatial attributes in audio data, rendering large-scale supervised training infeasible. To tackle both issues, we propose _a simple multi-channel framework for contrastive learning_ (MC-SimCLR), the first self-supervised representation learning framework for multi-channel audio, to jointly learn spectral and spatial attributes of audio without supervision.
MC-SimCLR is an adaptation of _a simple framework for contrastive learning_ (SimCLR) [4] for unlabeled multi-channel audio data. The core of our framework is Multi-level Data Augmentation, a chain of augmentations applied to the waveform, Mel spectrograms and generalized cross-correlation (GCC) features. We adopt existing augmentations that operate on two-dimensional features and also introduce new augmentations that operate on the channel dimension. Specifically, we randomly swap the order of the microphones to generate more training samples and drop entire channels of features to discourage overfitting on specific channels. We assess the efficacy of the framework with the task of sound event localization and detection (SELD). The experimental results show that using MC-SimCLR embeddings leads to improved event classification accuracy and reduced azimuth prediction error compared to training from scratch. This clearly underscores MC-SimCLR's proficiency in extracting both spectral- and spatial-discriminative features from unlabeled multi-channel audio data.
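As an illustration of the channel-wise augmentations described above, the following is a minimal sketch of microphone-order swapping and whole-channel masking; the array shapes and masking probability are illustrative assumptions, not the paper's settings.

```python
import numpy as np


def swap_mic_order(wavs: np.ndarray, rng: np.random.Generator):
    """Randomly permute the microphone channels of a multi-channel waveform.

    wavs: (n_mics, n_samples). Returns the permuted waveform and the
    permutation, so that spatial labels could be remapped consistently
    if they were available.
    """
    perm = rng.permutation(wavs.shape[0])
    return wavs[perm], perm


def mask_feature_channels(feats: np.ndarray, rng: np.random.Generator,
                          p_mask: float = 0.2) -> np.ndarray:
    """Zero out whole Mel/GCC channels, each with probability `p_mask`.

    feats: (n_channels, n_frames, n_bins) stacked Mel spectrogram and GCC
    features. `p_mask` is an illustrative value.
    """
    out = feats.copy()
    keep = rng.random(feats.shape[0]) >= p_mask
    if not keep.any():                      # avoid masking every channel
        keep[rng.integers(feats.shape[0])] = True
    out[~keep] = 0.0
    return out


# Example: a 4-mic array, 10 stacked Mel + GCC channels.
rng = np.random.default_rng(0)
wavs_aug, perm = swap_mic_order(rng.standard_normal((4, 24000)), rng)
feats_aug = mask_feature_channels(rng.standard_normal((10, 100, 64)), rng)
```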
## 2 Related Works
Supervised training for SELD has witnessed significant progress in recent years [5, 6, 7, 8, 9, 10]. However, a challenge in real-world scenarios is the lack of labels pertaining to either spectral or spatial attributes in audio data, which makes large-scale supervised training impracticable. Self-supervised learning is a promising representation learning approach that does not require explicit labels. The learned representations often serve as input features for downstream tasks, reducing the demand for extensive labeled training data while improving task performance. Past research has studied self-supervised learning approaches for sound event detection (SED) and sound source localization (SSL) separately.
In the field of SED, contrastive learning frameworks based on SimCLR [4] maximize the similarity of segments from the same recording and minimize the similarity of segments from different recordings [11, 12, 13]. This objective discriminates segments of different classes, as segments from the same recording are more likely to share the same label than those from different recordings. The self-distillation method [14] achieves the same goal without contrasting multiple segments, by having an online encoder predict the embedding of a target encoder [15]. Additionally, the transformer patch-modeling method also learns discriminative features for event classification through a proxy task of predicting and reconstructing masked spectrogram patches from unmasked ones [1].
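For reference, a generic NT-Xent objective of the kind used by these SimCLR-based frameworks can be written as below; this is a textbook sketch, not the implementation of any of the cited works.

```python
import torch
import torch.nn.functional as F


def nt_xent(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.1) -> torch.Tensor:
    """NT-Xent loss for N positive pairs (z1[i], z2[i]), each of shape (N, D)."""
    n = z1.shape[0]
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2N, D), unit-norm
    sim = z @ z.t() / temperature                         # cosine similarities
    sim.fill_diagonal_(float("-inf"))                     # exclude self-similarity
    # The positive for index i is i + N (and vice versa).
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)


# Example: embeddings of two augmented views of 8 audio segments.
loss = nt_xent(torch.randn(8, 128), torch.randn(8, 128))
```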
Self-supervised learning for SSL has been less studied. Although traditional signal processing techniques such as the delay-and-sum beamformer and SRP-PHAT [16] do not require supervised training, their performance degrades significantly in the presence of noise and reverberation. Recently, the contrastive random walk [17] has been utilized to estimate the interaural time difference (ITD) from the learned embedding of each channel. However, this set of embeddings is not class-discriminative for the SED task, and the variable number of embeddings (not fixed to one) also limits its applicability. In another study [18], direction-variant features are extracted from binaural recordings through contrastive learning. However, their framework thresholds on known head rotation to define positive or negative samples in
2309.10840 | Anomalous crystalline-electromagnetic responses in semimetals | We present a unifying framework that allows us to study the mixed
crystalline-electromagnetic responses of topological semimetals in spatial
dimensions up to $D = 3$ through dimensional augmentation and reduction
procedures. We show how this framework illuminates relations between the
previously known topological semimetals, and use it to identify a new class of
quadrupolar nodal line semimetals for which we construct a lattice
tight-binding Hamiltonian. We further utilize this framework to quantify a
variety of mixed crystalline-electromagnetic responses, including several that
have not previously been explored in existing literature, and show that the
corresponding coefficients are universally proportional to weighted
momentum-energy multipole moments of the nodal points (or lines) of the
semimetal. We introduce lattice gauge fields that couple to the crystal
momentum and describe how tools including the gradient expansion procedure,
dimensional reduction, compactification, and the Kubo formula can be used to
systematically derive these responses and their coefficients. We further
substantiate these findings through analytical physical arguments, microscopic
calculations, and explicit numerical simulations employing tight-binding
models. | Mark R. Hirsbrunner, Oleg Dubinkin, Fiona J. Burnell, Taylor L. Hughes | 2023-09-19T18:00:00Z | http://arxiv.org/abs/2309.10840v1 | # Anomalous crystalline-electromagnetic responses in semimetals
###### Abstract
We present a unifying framework that allows us to study the mixed crystalline-electromagnetic responses of topological semimetals in spatial dimensions up to \(D=3\) through dimensional augmentation and reduction procedures. We show how this framework illuminates relations between the previously known topological semimetals, and use it to identify a new class of quadrupolar nodal line semimetals for which we construct a lattice tight-binding Hamiltonian. We further utilize this framework to quantify a variety of mixed crystalline-electromagnetic responses, including several that have not previously been explored in existing literature, and show that the corresponding coefficients are universally proportional to weighted momentum-energy multipole moments of the nodal points (or lines) of the semimetal. We introduce lattice gauge fields that couple to the crystal momentum and describe how tools including the gradient expansion procedure, dimensional reduction, compactification, and the Kubo formula can be used to systematically derive these responses and their coefficients. We further substantiate these findings through analytical physical arguments, microscopic calculations, and explicit numerical simulations employing tight-binding models.
+
Footnote †: These authors contributed equally.
## I Introduction
Topological responses are a key manifestation of electronic topology in solids. Celebrated examples such as the integer quantum Hall effect [1; 2; 3] and axion electrodynamics [4; 5] have paved the way for a broader exploration of topological response phenomena in insulating systems. As of now, a wide variety of phenomena that are directly determined by the electronic topology have been considered, including thermal response [6; 7], geometric response [8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24], and electric multipole response [25; 26; 27]. These responses are robust features of topological insulators (TIs), and topological phases in general, and are often described by a quantized response coefficient, e.g., the integer Hall conductance [1; 2; 3], or the quantized magneto-electric polarizability [28; 29; 5].
Interestingly, certain distinctive features of response of topological Weyl or Dirac semimetals can be described by response theories that are closely analogous to those of topologically insulating phases, albeit with coefficients that are determined by the momentum-space and energy locations of the point or line nodes [30; 31; 32; 33; 34; 35]. For point-node semimetals, the relevant response coefficients are momentum-energy vectors determined as a sum of the momentum and energy locations of the point-nodes weighted by their chirality (or by their helicity, for Dirac semimetals), yielding a momentum-energy space dipole. For example, the low-energy, nodal contribution to the anomalous Hall effect tensor of a 3D Weyl semimetal is determined by the momentum components of this momentum-energy dipole vector.
The quasi-topological response coefficients of topological semimetals are not strictly quantized since they can be continuously tuned with the nodal momenta. However, the forms of the responses share many features with topological insulators in one lower dimension, or perhaps more precisely, with weak topological insulators in the same dimension [36; 37]. Indeed, topological semimetals and weak topological insulators both require discrete translation symmetry to be protected and both are sensitive to translation defects such as dislocations [38]. Interestingly, the connection to translation symmetry has motivated recent work which recasts many previously proposed topological responses of these systems as couplings between the electromagnetic gauge field \(A_{\mu}\) and gauge fields for translations \(\mathfrak{e}^{a}_{\mu}\), where \(\mu\) runs over spacetime indices, and \(a\) runs over the spatial directions in which translation symmetry exists. This insight has also led to the development of new response theories that are just beginning to be understood [17; 18; 19; 20; 24].
Motivated by these previous results and our recent related work on higher rank chiral fermions [17; 24; 39], here we study the topological responses of 1D, 2D, and 3D topological semimetals coupled to electromagnetic and strain (translation gauge) fields. In addition to the well-studied dipole case mentioned above, we also study cases where point-nodes have momentum-energy quadrupole or octupole patterns. Our approach allows us to make clear connections between a wide variety of response theories across dimensions, and clarifies relationships between many of the response theories we discuss. We find that the chirality-weighted momentum-energy _multipole_ moments of the semimetals determine new types of quasi-topological responses to electromagnetic fields and strain. We are able to explicitly derive many of these responses from Kubo formula calculations (sometimes combined with dimensional reduction procedures [5]). Using these results we explicitly study these families of response theories using lattice model realizations. We also extend our results to the responses of nodal line semimetals (NLSMs) and construct a new type of NLSM with an unusual crossed, cage-like nodal structure.
Our article is organized as follows. In Sec. II we provide an overview of and intuition about the response theories that will be discussed in more detail, and in model contexts, in later sections. In Sec. III we derive a family of effective actions that describe mixed crystalline-electromagnetic responses in various spatial dimensions. From here we proceed in Sec. IV by presenting concrete lattice models and explicit numerical calculations that realize and demonstrate the mixed responses in \(D=1,2,3\). We conclude in Sec. V by discussing possible extensions to future work, and potential pathways to experimental observation of some of the described phenomena.
## II Overview of response theories
The systems we consider in this article all exhibit \(U(1)\) charge conservation and discrete translation symmetry in at least one spatial direction. In the presence of these symmetries we can consider the responses to background field configurations of the electromagnetic gauge field \(A_{\mu}\) and a collection of translation gauge fields \(\mathfrak{e}^{a}_{\mu}.\) For example, if the system exhibits translation symmetry in the \(x\)-direction, then we can consider coupling the system to the field \(\mathfrak{e}^{x}_{\mu}.\) Our goal is to study low-energy response theories of electrons coupled to translation and electromagnetic gauge fields.
Since most readers are likely less familiar with the translation gauge fields \(\mathfrak{e}^{a}_{\mu}\) than the electromagnetic field \(A_{\mu}\), we will briefly review the nature of these fields as they appear in our work. In a weakly deformed lattice, \(\mathfrak{e}^{a}\) is given by
\[\mathfrak{e}^{a}_{j}=\delta^{a}_{j}-\frac{\partial u^{a}}{\partial x^{j}}, \tag{1}\]
where the Kronecker \(\delta^{a}_{j}\) encodes the fixed reference lattice vectors, \(u^{a}\) is the lattice displacement, and \(\frac{\partial u^{a}}{\partial x^{j}}\) is the distortion tensor [41]. The fields \(\mathfrak{e}^{a}_{j}\) in Eq. (1) are reminiscent of gauge fields (see, e.g., [42]): from Eq. (1) we immediately see that line integrals of \(\mathfrak{e}\) describe lattice dislocations since \(\oint\frac{\partial u^{a}}{\partial x^{i}}dx^{i}=b^{a},\) where \(b^{a}\) is the net Burgers vector of all the dislocations inside the loop [41]. This points to an analogy with the configurations of the usual electromagnetic field. The analog of magnetic fields derived from \(\mathfrak{e}^{a}_{\mu}\) essentially encodes configurations of dislocations, each with an amount of flux equal to the corresponding Burgers' vector. Additionally, electric fields are time-dependent strains. In earlier work, e.g., Ref. [11], these fields could have been called frame-fields, but crucially the translation gauge fields encode only the translation/torsional part of the geometric distortion, whereas the frame fields also carry rotational information. In keeping with previous literature, here we will call the set of (Abelian) fields \(\mathfrak{e}^{a}_{\mu}\) translation "gauge" fields, by analogy of their relationship to translation "fluxes" (i.e. lattice defects). This language is convenient because, as we will see, actions describing the response to such lattice fluxes are invariant under (vector-charge) gauge transformations of the \(\mathfrak{e}^{a}_{\mu}\) fields.
A second way in which we will use the close analogy between \(\mathfrak{e}^{a}_{\mu}\) and electromagnetic gauge fields is through the lattice analog of the usual Aharonov-Bohm effect (holonomy), in which a charged particle encircles a magnetic flux of the gauge field. In the electromagnetic case, a charged particle moving around a magnetic flux generates a \(U(1)\) phase factor. For the translation gauge field, taking a particle around a translation magnetic flux having Burgers vector \(b^{a}\) generates a translation operator by the displacement \(b^{a}\). For particles with a fixed translation charge, i.e., a fixed momentum, this generates a momentum-dependent \(U(1)\) phase factor. This will lead us to introduce momentum-dependent Peierls factors when performing some lattice calculations. To complement this discussion, in Appendix A we show more explicitly how translation symmetry can be "gauged" under a teleparallel constraint of the underlying system geometry. A very similar approach has been used to study the effects of strain on graphene [43; 44; 45] and other semimetallic systems [46; 47; 48; 49; 50], where strain can play the role of a valley-dependent magnetic field.
For our purposes there are many ways in which \(\mathfrak{e}^{a}_{\mu}\) can be treated on equal footing with the electromagnetic gauge field. However, there are some important distinctions. First, the fields \(\mathfrak{e}^{a}_{j}\) in Eq. (1) are not true gauge fields. This becomes important when considering the possible response actions: while the total charge of a system is strictly conserved, momentum conservation is not similarly inviolable (see e.g. [48; 49; 50] for some interesting physical consequences of this distinction). Second, responses involving \(\mathfrak{e}^{a}_{\mu}\) are predicated on the existence of translation symmetry. Thus, if the response is characterized by a boundary effect or a response to a flux/defect, we must be careful to ensure that (at least approximate) translation symmetry is maintained in order to connect the coefficient of the response action to explicit model calculations. Indeed, some responses are not well-defined unless configurations that maintain translation symmetry are used. This is unlike the electromagnetic response for which \(U(1)\) charge symmetry is maintained independently of the geometry and gauge field configuration. Other important distinctions have been discussed in recent literature that has begun putting the gauging of discrete spatial symmetries on firmer ground [51; 52; 53]. One important distinction is that the translation gauge fields correspond to a discrete gauge symmetry \(\mathbb{Z}_{N_{a}}\), where \(N_{a}\) is the number of unit cells in the \(a\)-th direction. This discreteness can play an important role in the topological response properties [18], but we will not focus on this aspect in our work.
Using this framework, our goal is to consider the low-energy responses of electrons to the background electromagnetic and translation gauge fields. Given a translationally invariant Bloch Hamiltonian \(H\), the response theories we consider can, in principle, be derived from
correlation functions of the electromagnetic current
\[j^{\mu}=e\frac{\partial H}{\partial k_{\mu}}, \tag{2}\]
and the crystal momentum current
\[\mathcal{J}_{a}^{\mu}=\hbar k_{a}\frac{\partial H}{\partial k_{\mu}}, \tag{3}\]
where the former couples to \(A_{\mu}\) and the latter to \(\mathfrak{e}_{\mu}^{a}\) (see App. A for more details on the latter). Indeed, we take exactly this approach in Section III to derive response actions for 2D and 3D systems. While our explicit derivations are important for precisely determining the coefficients of the response actions we study, it will be helpful to first motivate the overarching structure that connects a large subset of these response theories. We also note that alternative approaches to determining some of the response actions we discuss have been proposed in Refs. [18; 19; 20], and where the results overlap with ours, they agree.
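As a concrete illustration of the operators entering such Kubo calculations, the sketch below evaluates Eqs. (2) and (3) for a toy two-band Bloch Hamiltonian by central differences; the model and all names are illustrative and are not the lattice models of Sec. IV.

```python
import numpy as np


def bloch_h(kx: float, ky: float) -> np.ndarray:
    """A toy two-band Bloch Hamiltonian H(k) (illustrative, not from Sec. IV)."""
    dx, dy, dz = np.sin(kx), np.sin(ky), np.cos(kx) + np.cos(ky)
    return np.array([[dz, dx - 1j * dy],
                     [dx + 1j * dy, -dz]])


def current_operators(kx: float, ky: float, mu: str = "x",
                      dk: float = 1e-6, e: float = 1.0, hbar: float = 1.0):
    """Return the charge current j^mu and momentum currents J_a^mu at k.

    dH/dk_mu is evaluated by central differences; these operator matrices are
    the objects whose correlation functions enter the Kubo formula.
    """
    if mu == "x":
        dH = (bloch_h(kx + dk, ky) - bloch_h(kx - dk, ky)) / (2 * dk)
    else:
        dH = (bloch_h(kx, ky + dk) - bloch_h(kx, ky - dk)) / (2 * dk)
    j = e * dH                                          # Eq. (2)
    J = {"x": hbar * kx * dH, "y": hbar * ky * dH}      # Eq. (3), one per direction a
    return j, J


j_x, J_x = current_operators(0.3, -1.1, mu="x")
```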
To understand the connections between the response theories we study, it is useful to begin by reviewing the well-known dimensional hierarchy of response theories of strong topological insulators [5]. We show the general structure in Fig. 1(a), where the response terms are built solely from the electromagnetic gauge field. Furthermore, Chern-Simons and \(\theta\)-term response actions appear in even and odd spatial dimensions, respectively. There are a number of connections between the theories in different dimensions, and we will now review three of them. First, a Chern-Simons action in D spatial dimensions can be dimensionally reduced to a \(\theta\)-term action in (D-1)-dimensions by compactifying one spatial direction [54, 55]. The (D-1)-dimensional system can also represent a TI if the value of \(\theta\) is quantized to be \(0,\pi\) by a symmetry that protects the (D-1)-dimensional topological insulator [5]. Second, one can consider the reverse process where quantized adiabatic pumping [40] in (D-1)-dimensions will convert a \(\theta\)-term action to a D-dimensional Chern-Simons action. Finally, a \(\theta\)-term action for a (D-1)-dimensional topological insulator exhibits a half-quantized (D-2)-Chern-Simons response on boundaries where \(\theta\) jumps by \(\pi\). These general relationships are summarized in Fig. 1(a) where each type of relationship is color- and symbol-coordinated.
Next we can consider a less familiar set of relationships in Fig. 1(b) between _gapped_ theories with mixed crystalline-electromagnetic responses arising from effective actions having both \(A_{\mu}\) and \(\epsilon_{\mu}^{a}\) fields. We emphasize that the precise relationships we refer to in Fig. 1(b) are for gapped systems where the coefficients of the actions are quantized. In contrast, for the majority of this article we will focus on the quasi-topological responses of _gapless_ systems which take similar forms, but with non-quantized coefficients. Remarkably, many of the actions we discuss for insulators can be generalized to the non-quantized case. For semimetals, however, the dimensional relationships we point out are more akin to physical guides than a precise prescription for deriving matching coefficients in-between dimensions.
Figure 1: (a) A dimensional hierarchy of theories describing responses of strong topological insulators. The theories are related by dimensional reduction (\(\theta\) symbol, green arrow) [5], taking the boundary response (\((1/2)\partial\) symbol, purple arrow), or adiabatic pumping (pumping symbol, red arrow) [40]. (b) A dimensional hierarchy of insulating systems with mixed crystalline-electromagnetic responses. The theories are related by stacking (layer symbol, dark red arrow) and cutting (scissor symbol, blue arrow). (c) A family tree of dimensional hierarchies establishing connections between responses of strong TIs and insulators with mixed crystalline-electromagnetic responses. (d) Illustrations representing the nature of the phases constituting the hierarchy depicted in (c). (i) A single isolated charge. (ii) A line of charges forming a lattice. (iii) An insulating chain having a quantized charge polarization. (iv) A two-dimensional lattice of charges. (v) A two-dimensional weak topological insulator where polarized chains are stacked transverse to their polarization. (vi) A two-dimensional Chern insulator having chiral edge states indicated by red arrows. (vii) A three-dimensional lattice of charges. (viii) A three-dimensional lattice built from a two-dimensional array of polarized chains; alternatively, a stack of two-dimensional weak topological insulators. (ix) A three-dimensional stack of Chern insulators forming a time-reversal breaking weak topological insulator. (x) A three-dimensional strong topological insulator with surface Dirac cones.
With this caveat in mind, let us consider the family of theories in Fig. 1(b). In 0D we can consider the response action for a gapped system of electrons,\(S[A]=Q\int dtA_{0}\), which represents a system with charge \(Q=eN_{e}\) where \(N_{e}\) is the (integer) number of electrons. If we imagine stacking these 0D systems in a discrete, translationally invariant lattice in the \(x\)-direction, then we will generate a line of charges. Indeed, stacking produces the response for a translation invariant line-charge density which is captured by the next action in the sequence in Fig. 1(b), i.e., \(Q\int A\wedge e^{x}=Q\int dxdt(A_{0}\epsilon_{x}^{x}-A_{x}\epsilon_{0}^{x}).\) In this action the first term represents the charge density along the line, while the second term represents a current generated if the lattice of charges is moving. The latter consequence becomes manifest in the weakly distorted lattice limit since the current is proportional to the displacement rate: \(j\sim\epsilon_{0}^{x}\sim\frac{\partial u^{x}}{\partial t}\).
We can also imagine a reverse process where we are given a translationally invariant line of charge at integer filling and cut out a single unit cell. Since the system is gapped and translation invariant, this will result in a move in the opposite direction in Fig. 1(b), i.e., from \(A\wedge\epsilon^{x}\) in 1D to \(A\) in 0D with the same _integer_ coefficient \(Q\). We can use this example to highlight our caveat about gapped vs. gapless systems mentioned above. That is, while it is reasonable to have a 1D gapless system with non-quantized (i.e., non-integer) charge (per unit cell) described by the 1D action, the cutting procedure will not work properly at non-integer filling since the result will be a 0D point with a fractional charge.
In comparison to the response sequence for strong TIs, we see that stacking is the analog of pumping for the translation gauge field [56]. Indeed, while pumping adds an extra electromagnetic gauge field factor \(A\), stacking adds an extra translation gauge field \(\mathbf{\epsilon}^{D+1}\) where D+1 is the stacking direction. As a result, given any action in the strong TI sequence, we can stack copies to get the response action of a primary weak TI (stacks of co-dimension-1 strong TIs, e.g., lines stacked into 2D) by adding a wedge product with \(\mathbf{\epsilon}^{D+1}\). We can push the stacking idea further to generate secondary weak TIs (stacks of co-dimension-2 strong TIs, e.g., lines stacked into 3D) by a wedge product with \(\mathbf{\epsilon}^{D+1}\wedge\mathbf{\epsilon}^{D+2}\) and so on.
The stacking and cutting procedures are not the only relationships between the response theories in Fig. 1(b). Just as in the strong TI sequence, we can find connections between the boundary properties of some D-dimensional systems and the bulk response of a (D-1)-dimensional system. For example, the 2D response action in Fig. 1(b) represents the response of a stack of Su-Schrieffer-Heeger chains (SSH) [57], each with a quantized polarization of \(e/2.\) The boundary of such a 2D system is a line of charge on the edge, albeit with a density of \(e/2\) electrons per unit cell on the edge line instead of the integer density we would get by stacking integer-filled 0D points. As such, the boundary of the 2D \(A\wedge d\mathbf{\epsilon}^{x}\) action represents a line-charge described by the action \(A\wedge\mathbf{\epsilon}^{x}\), but with a half-integer coefficient.
Now we can combine the dimensional relationships in the sequences of both Fig. 1(a) and (b) to make a family tree of related theories. We show a tree in Fig. 1(c) that includes response actions in 0, 1, 2, and 3 spatial dimensions. In 0D we have only an integer electron charge response that couples to \(A_{0}.\) For 1D, we can either stack charges to form a line of charge (upper branch), or consider an electrically polarized TI (lower branch) where the charge is split in half and moved to opposing ends of the chain while the interior remains neutral (so to speak). In 2D we can stack line charges to get a plane of charge (top branch), stack 1D polarized TIs to get a weak TI (middle branch), or pump charge in a 1D TI to generate a 2D Chern insulator (bottom branch).
In 3D the set of responses is richer. We can stack plane charges to generate a 3D volume of charges (top branch), stack Chern insulators to get a 3D primary weak TI (second from bottom branch), or stack 2D weak TIs to get a 3D secondary weak TI built from 1D polarized wires (second branch from top). The other well-known possibility is the magneto-electric response for a 3D strong TI [28, 5] (bottom branch). Although it is not shown, this theory is related to a 4D quantum Hall system via pumping (3D to 4D) or dimensional reduction (4D to 3D) [5]. The final option we consider, which is the middle branch enclosed by a dotted rectangle, is \(\int dA\wedge d\mathbf{\epsilon}^{a}.\) This response theory has not been previously studied in detail. This theory is a total derivative, and yields a gapped boundary with an electric polarization (e.g., a stack of SSH chains on the boundary). This is reminiscent of an electric quadrupole (higher-order) response [27, 58], and we will explore this connection further in Sec. III.5.
While this discussion has centered on gapped systems, our primary focus is on gapless topological semimetals. Importantly, each of the actions that contains a translation gauge field in the family tree in Fig. 1(c) can also represent a contribution to the response of various types of metals or topological semimetals [17, 18, 19, 20, 30, 31, 32, 33, 34]. This is because many semimetals can be generated by translation-invariant stacking of lower dimensional topological phases. Since the momentum \(k^{a}\) in the stacking direction is conserved, one can consider adding up the set of topological response terms for each gapped \(k^{a}.\) A semimetal represents a scenario where the coefficients of
these topological terms at each \(k^{a}\) are quantized and have discrete jumps where \(k^{a}\) hits a nodal point. For example, the 2D electric polarization response of a stack of 1D TIs becomes the response of a 2D Dirac semimetal if the wires forming the stack are coupled strongly enough to close the insulating gap [34]. In the presence of reflection symmetry, each momentum in the stacking direction has a quantized charge polarization that jumps when the momentum hits a gapless 2D Dirac point. Additionally, the 3D response of a stack of Chern insulators becomes the non-quantized anomalous Hall effect response of a time-reversal breaking Weyl semimetal where each fixed-\(k\) plane that does not intersect a Weyl point carries a quantized Chern number that jumps at a Weyl point [30; 31; 32; 33], and so on. While many of these response theories have been discussed in detail before, only a few works have highlighted the contributions from the translation gauge fields [17; 18; 19; 20; 24; 59; 60]. As such, a large fraction of our paper will be devoted to both the explicit derivations of the response coefficients of the actions in Fig. 1(c) that have couplings to the translation gauge fields (Sec. III), and to the explicit calculations of the physical response phenomena in representative model systems (Sec. IV).
Before we move on to more explicit derivations, we want to motivate three additional response theories we will study that lie outside the family tree in Fig. 1(c). As mentioned above, a remarkable feature of the response actions of point-node semimetals is that their coefficients are determined from the energy-momentum locations of the nodal points. Indeed, for the relevant response actions in Fig. 1(c), the coefficients are obtained as a chirality-weighted momentum dipole moment of the point-nodes (note that Dirac points do not have a chirality, nevertheless there is a signed quantity that plays the same role). Interestingly, recent work on rank-2 chiral fermions and Weyl semimetals with a chirality-weighted momentum _quadrupole_ moment [17; 18; 19; 24] has unveiled a new set of response theories. This category of theories has actions that include factors of more than one translation gauge field of the same type (e.g., \(\mathfrak{e}^{a}\wedge d\mathfrak{e}^{b}\), where \(a=b\)), and as such, does not appear in the family tree in Fig. 1(c). This also implies that the translation gauge field factors in these response theories cannot be obtained by the conventional stacking of lower dimensional systems that we discussed above, since stacking produces wedge products with distinct translation gauge fields. We could also construct related higher dimensional theories (and lower dimensional theories if we considered both space and time translational gauge fields) to form an additional connected tree of theories, but we leave further discussion of those extensions to future work.
To give some explicit examples we show three response theories that follow this pattern in Fig. 2. Fig. 2(a) shows the Fermi surface structure of a 3D time-reversal invariant Weyl semimetal having a Weyl node quadrupole moment. The response action of this system is a mixed response between electromagnetic and translation gauge fields, and the inset in the Fermi-surface figure lists which coefficients \(\mathcal{Q}_{ab}\) are non-vanishing. Some details of this response were discussed in Refs. [17; 18], the former of which connected the response to rank-2 chiral fermions on the surface of the 3D Weyl semimetal. Fig. 2(b) shows the Fermi surface structure of a 2D Dirac semimetal having a Dirac node quadrupole structure. This response represents a momentum current responding to a translation gauge field (e.g., a strain configuration). Its form shares some similarities with the torsional Hall viscosity [11; 59; 61; 62; 63], though a precise connection will be left to future work. Finally, in Fig. 2(c) we show the Fermi-surface for an unusual nodal line semimetal formed from stacking the Dirac node quadrupole semimetal of Fig. 2(b). While one might have naively expected two independent Fermi rings, we instead find a new type of Fermi-surface structure where the Fermi lines join at two crossing cap regions to form a cage.
Figure 2: (a) Fermi-surfaces of a 3D time-reversal invariant Weyl semimetal with a quadrupole Weyl node configuration. Red and blue colors denote positive and negative Berry-curvature respectively. The associated action has a coefficient matrix \(\mathcal{Q}_{ab}\) which is symmetric and proportional to the Weyl-node quadrupole moment. For this configuration the coefficients \(\mathcal{Q}_{xx}\) and \(\mathcal{Q}_{yy}\) are non-vanishing. (b) Similar to subfigure (a) except it is the Fermi surfaces for a 2D Dirac semimetal having four Dirac nodes in a quadrupole pattern. The action is described by a symmetric matrix of coefficients \(\mathcal{Q}_{ab}\). (c) The Fermi surface of an unusual cage-like nodal line semimetal built from stacking the Dirac node quadrupole semimetal in subfigure (b). The action has a set of coefficients \(\mathcal{B}_{ab,c}\) which is anti-symmetric in \(a\) and \(b\). Heuristically the action in (b) can generate the action in (a) by adiabatic pumping, or can generate the action in (c) by stacking.
The symbols on the right-hand-side of Fig. 2 indicate the connections between these theories: (i) the response of the nodal line structure is just a stacked version of the 2D Dirac node quadrupole semimetal response from Fig. 2(b), and (ii) one can heuristically consider the four-node Weyl response in Fig. 2(a) to be a dimensional extension of the response in Fig. 2(b) via pumping.
## III Effective response actions
Now that we have described the forms of the various response actions of interest, we will spend this section determining their coefficients. All of the response actions in Fig. 1(c) that contain only electromagnetic gauge fields represent insulators, and their coefficients have been studied in detail (e.g., see Ref. [5]). The actions containing translation gauge fields can represent insulators or gapless systems, and the two can often be distinguished by the values of the coefficients. That is, for insulators we expect the coefficients to be quantized in some units (in even spatial dimensions they are quantized in the presence of some symmetry), while for topological semimetals we expect the coefficients to be a tunable function of the momentum and energy locations of the nodal points or lines. Interestingly, some of the response coefficients for metals/semimetals can take the same values allowed for an insulator, although this would typically require fine-tuning, or extra symmetry. For example, a 1D system can have compensating particle and hole Fermi surfaces such that the total filling is an integer, as one would find in an insulator, yet the system is still gapless. In such a case we will show that the system has additional response terms that have coefficients that are incompatible with a gapped insulator.
Our focus will be on 2D Dirac, 3D Weyl, and 3D nodal line semimetals, and before we begin our derivations it is important to acknowledge a key qualitative difference between these types of topological semimetals. Namely, we recall that 2D topological Dirac semimetals and 3D nodal line semimetals require symmetry (the composite \(\mathcal{TI}\) symmetry) to guarantee the local stability of the gapless points/lines in momentum space. This is inherently different from the case of Weyl semimetals in 3D where Weyl nodes require no extra symmetry to protect them against perturbations. Indeed, a Weyl node can be gapped out only by bringing another Weyl node of opposite chirality to the same point in the Brillouin zone. A similar story applies to (semi)metallic systems in 1D: each gapless point has a well-defined chirality defined as the sign of the Fermi velocity, and a gap can be opened only after overlapping Fermi points of opposite chiralities.
This distinction in symmetry protection is important for the response theories describing Dirac and Weyl semimetals as it reflects the well-known structure of anomalies in even and odd spatial dimensions. Furthermore, it will impact our strategy for deriving the response coefficients for these systems. As an example, the response properties of 2D Dirac semimetals can be determined straightforwardly from the Kubo formula if we first apply a symmetry-breaking perturbation that weakly gaps out the nodes. The resulting insulator response can then be taken to the semimetallic limit if we tune the perturbation to zero. Hence, the effective response action for such systems can be obtained by treating the system as an insulator and applying the Kubo formula, or more generally, a gradient expansion procedure. This method can be applied to 2D and 4D Dirac semimetals, and consequently 3D nodal line semimetals since they are just stacks of 2D Dirac semimetals. For such semimetals we actually have a choice of what symmetry to break, e.g., inversion or time-reversal. Which one we need to break depends on the nodal configuration and the action we are intending to generate. For example, in the case of a 2D Dirac semimetal with a pair of nodes, breaking time-reversal is well-studied and generates a quantum Hall response via a Chern-Simons term. However, breaking inversion symmetry is relatively less-studied and generates a mixed Chern-Simons response between an electromagnetic gauge field and a translational gauge field. This is corroborated by the fact that the electromagnetic Chern-Simons action breaks time-reversal, while the mixed Chern-Simons term with these fields breaks inversion. We will show that the mixed Chern-Simons term has a well-defined limit as the gap closes and inversion symmetry is restored, which leads to a non-trivial response action for the 2D Dirac semimetal.
Alternatively, the response of isolated chiral gapless points in 1D and 3D can be determined if they are viewed as theories that live on the boundary of a higher dimensional topological insulator or topological semimetal. In the presence of gauge fields, the higher dimensional bulk will generate a current inflow to the boundary to compensate the anomalous response of the gapless boundary modes. From this perspective, we expect that the effective response action of Weyl semimetals in odd spatial
dimensions can be obtained by taking the boundary contribution of a higher dimensional system. There are likely other methods that could be applied to derive these response actions in their intrinsic spatial dimension, e.g., via the subtle introduction of an auxiliary \(\theta\)-field, but we choose our procedure since it reinforces the dimensional relationships discussed in the previous section and requires fewer formal tools.
Thus, our strategy for deriving the general form of the coefficients of mixed crystalline-electromagnetic responses is to begin by deriving effective response actions in even spatial dimensions, i.e., 2D and 4D. We will do so by identifying gradient expansion contributions (see Appendix B for a brief review) that contain an appropriate effective action constructed out of translational (\(\mathfrak{e}^{\lambda}\)) and electromagnetic (\(A\)) gauge fields. Then the response of semimetals in odd spatial dimensions can be obtained by looking at the boundary of a response theory defined in one dimension higher.
### Effective responses of 2D semimetals
In this subsection we will derive the coefficients of two 2D response actions that contain translation gauge fields, namely response action (v) from Fig. 1(c), and the response action in Fig. 2(b). We will find that the coefficients of these actions are characterized by the dipole and quadrupole moments of the Berry curvature in the 2D Brillouin Zone, respectively. When we specialize to 2D Dirac semimetals, the distribution of Berry curvature is sharply localized as \(\pm\pi\)-fluxes at the Dirac nodes. Hence, the coefficients will become proportional to the dipole and quadrupole moment of the distribution of Dirac nodes.
#### III.1.1 Dirac node dipole semimetal
Let us start by considering a gapped \(\mathcal{T}\)-invariant system having broken \(\mathcal{I}\) symmetry. Under these conditions the electromagnetic Chern-Simons term, which represents the Hall conductivity, vanishes, and we can consider the mixed linear response of a momentum current responding to an electromagnetic field, or vice-versa. Using the Kubo formula, or applying the gradient expansion procedure described in App. B, we find the following contribution to the effective action when the chemical potential lies in the insulating gap (see also Ref. [50]):
\[S_{e,A}=-e\int d^{3}r\ \mathfrak{e}^{\alpha}_{\mu}\partial_{\nu}A_{\rho}\int \frac{d\omega d^{2}k}{(2\pi)^{3}}k_{\alpha}\Omega^{(3)}_{\mu\nu\rho}(\omega,k), \tag{4}\]
where
\[\Omega^{(3)}_{\mu\nu\rho}(\omega,k)=\mathrm{tr}\left(G_{0}\frac{\partial G_{0 }^{-1}}{\partial k_{\mu}}\frac{\partial G_{0}}{\partial k_{\nu}}\frac{ \partial G_{0}^{-1}}{\partial k_{\rho}}\right), \tag{5}\]
and \(G_{0}(k_{\mu})\) is the single-particle Green function. To extract the coefficient of the \(\mathfrak{e}^{\alpha}\wedge dA\) term, we contract \(\Omega^{(3)}_{\mu\nu\rho}\) with the totally antisymmetric tensor \(\frac{1}{3!}\varepsilon^{\mu\nu\rho}\). This gives the coefficient
\[c_{\alpha}=e\frac{\varepsilon^{\mu\nu\rho}}{3!}\int\frac{d\omega d^{2}k}{(2 \pi)^{3}}k_{\alpha}\ \Omega^{(3)}_{\mu\nu\rho}(\omega,k) \tag{6}\]
of the response action
\[S_{e,A}=c_{\alpha}\int\mathfrak{e}^{\alpha}\wedge dA. \tag{7}\]
We note that Eq. 6 is very similar to the response coefficient of the standard electromagnetic Chern-Simons term apart from the factor of \(k_{\alpha}\) in the integrand. As such, assuming \(\alpha=x,y\), we can use a well-established result to evaluate the frequency integral to obtain [64]:
\[\frac{\varepsilon^{\mu\nu\rho}}{24\pi^{2}}\int d\omega d^{2}k\ \Omega^{(3)}_{\mu\nu\rho}( \omega,k)=\frac{1}{2\pi}\int_{BZ}dk_{x}dk_{y}\ \mathcal{F}^{xy}(k_{x},k_{y}), \tag{8}\]
where \(\mathcal{F}^{xy}\) is the Berry curvature. Hence, we can rewrite \(c_{\alpha}\) as an integral over the BZ by substituting this relationship into Eq. 6 to find:
\[c_{\alpha}=\frac{e}{(2\pi)^{2}}\int_{BZ}dk_{x}dk_{y}\ k_{\alpha}\mathcal{F}^{ xy}(k_{x},k_{y}). \tag{9}\]
We have thus arrived at the result that \(c_{\alpha}\) is proportional to the \(\alpha\)-th component of the dipole moment of the distribution of Berry curvature. This coefficient can be non-zero since it is allowed by broken \(\mathcal{I}\) and preserved \(\mathcal{T}\), i.e., \(\mathcal{F}^{xy}(\mathbf{k})=-\mathcal{F}^{xy}(-\mathbf{k}).\) We also note that \(c_{\alpha}\) is independent of the choice of zone center, and shifts of \(k\) in the integrand in general, because the Chern number (Hall conductivity) vanishes in the presence of \(\mathcal{T}\).
In a gapped \(\mathcal{T}\)-invariant system, restoring \(\mathcal{I}\)-symmetry forces \(c_{\alpha}\) to vanish, since \(\mathcal{F}^{xy}(\mathbf{k})=0.\) However, in gapless systems this need not be the case. To see this, we apply our result from Eq. 9 to a 2D Dirac semimetal by first introducing a weak perturbation \(V_{\mathcal{I}}\) which breaks \(\mathcal{I}\) and opens up a small gap, and then taking the limit \(V_{\mathcal{I}}\to 0\), in which inversion symmetry is restored. In the gapped system the Berry curvature \(\mathcal{F}^{xy}\) is distributed smoothly across the entire 2D BZ. In the gapless limit, however, the Berry curvature distribution will develop sharp peaks of weight \(\pi\) localized at the positions of the Dirac points:
\[\mathcal{F}^{xy}=\sum_{a=1}^{N_{D}}\pi\chi_{a}\delta(\mathbf{k}-\mathbf{k}^{a}), \tag{10}\]
where \(a\) runs over all Dirac nodes at momenta \(\mathbf{k}^{a}\), and \(\chi_{a}=\pm 1\) is an integer indicating the sign of the \(\pi\)-Berry phase around the Fermi surface of the \(a\)-th Dirac point at a small chemical potential above the node [34]. Ultimately, we find the effective response action of a Dirac node dipole semimetal is given by:
\[S_{DD}=\frac{e\mathcal{P}_{\alpha}}{4\pi}\int\mathfrak{e}^{\alpha}\wedge dA, \tag{11}\]
where
\[\mathcal{P}_{\alpha}=\sum_{a=1}^{N_{D}}\chi_{a}k_{\alpha}^{a}, \tag{12}\]
is the dipole moment of the Dirac nodes.
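Although we derived Eq. 9 from the gradient expansion, it is also straightforward to check numerically. The following is a minimal sketch of such a check, assuming a hypothetical two-band model \(h(\mathbf{k})=\mathbf{d}(\mathbf{k})\cdot\boldsymbol{\sigma}\) with a weak inversion-breaking term (the Bloch vector and the parameter values below are illustrative choices, not taken from the text): it discretizes the Berry-curvature dipole of Eq. 9 and compares it with \(\mathcal{P}_{x}/4\pi\) for Dirac nodes at \(k_{x}=\pm\pi/2\), setting \(e=1\).

```python
import numpy as np

# Minimal sketch of Eq. (9): c_x as the Berry-curvature dipole of a two-band model
# h(k) = d(k).sigma.  The Bloch vector and parameters (m, V_I) are illustrative;
# the V_I sigma_x term breaks inversion and weakly gaps Dirac nodes near
# (kx, ky) = (+-pi/2, 0) when m = 1.
def d_vec(kx, ky, m=1.0, V_I=0.1):
    return np.array([V_I, np.sin(ky), m - np.cos(kx) - np.cos(ky)])

def berry_curvature(kx, ky, dk=1e-4):
    # F^{xy} of the lower band, -(1/2) dhat . (d_kx dhat x d_ky dhat)
    # (the overall sign depends on conventions)
    d = d_vec(kx, ky)
    ddx = (d_vec(kx + dk, ky) - d_vec(kx - dk, ky)) / (2 * dk)
    ddy = (d_vec(kx, ky + dk) - d_vec(kx, ky - dk)) / (2 * dk)
    n = np.linalg.norm(d)
    dhx = ddx / n - d * (d @ ddx) / n**3
    dhy = ddy / n - d * (d @ ddy) / n**3
    return -0.5 * np.dot(d / n, np.cross(dhx, dhy))

N = 400                                      # increase N as V_I is reduced
ks = np.linspace(-np.pi, np.pi, N, endpoint=False)
F = np.array([[berry_curvature(kx, ky) for ky in ks] for kx in ks])
dk2 = (2 * np.pi / N) ** 2

chern = F.sum() * dk2 / (2 * np.pi)          # ~ 0, so the dipole is well-defined
c_x = (ks[:, None] * F).sum() * dk2 / (2 * np.pi) ** 2   # Eq. (9) with e = 1
print(chern, c_x, np.pi / (4 * np.pi))       # |c_x| -> P_x/(4 pi) = 1/4 as V_I -> 0
```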
Note that if the Dirac nodes meet at the zone boundary, they can be generically gapped even in the presence of \(\mathcal{TI}\) symmetry. The resulting insulating phase represents a weak TI having \(\mathcal{P}_{\alpha}=G_{\alpha}\), where \(G_{\alpha}\) are the components of a reciprocal lattice vector. In this case, the action in Eq. 11 describes a stack (i.e., a family of lattice lines/planes corresponding to \(G_{\alpha}\)) of 1D polarized TI chains aligned perpendicular to \(G_{\alpha}\). To see this explicitly, take \(G_{x}=\frac{2\pi}{a_{x}}\), and set \(\mathfrak{e}_{\beta}^{\alpha}=\delta_{\beta}^{\alpha}\) in Eq. 11 to obtain the action
\[\frac{e}{2}\int\frac{dx}{a_{x}}\left(\int dydtE_{y}\right)=N_{x}\frac{e}{2} \int dydtE_{y}, \tag{13}\]
where \(N_{x}\) is the number of unit cells in the \(x\)-direction. This action is just \(N_{x}\) copies of the usual \(\theta\)-term action for 1D, electrically-polarized topological insulators (\(\theta=\pi\)) parallel to the \(y\)-direction, stacked along \(\hat{x}\).
We have now derived Eq. 11 as a quasi-topological contribution to the response of a 2D Dirac semimetal where the nodes have a dipolar configuration. However, there is another important subtlety that we will now point out. Earlier work has shown that the electromagnetic response of 2D Dirac semimetals with both \(\mathcal{T}\) and \(\mathcal{I}\) symmetry is an electric polarization proportional to the Dirac-node dipole moment [34]. Even more recently, connections have been made between mixed translation-electromagnetic responses and the electric polarization [52]. Since we have a clear derivation of the response term we can use our results to understand the precise connection between the electric polarization and the coefficient \(c_{\alpha}\) of the \(\mathfrak{e}^{\alpha}\wedge dA\) response action. Using the standard approach of Ref. [26], the polarization in 2D is
\[P_{e}^{\alpha}=\frac{e}{(2\pi)^{2}}i\int_{BZ}d^{2}\mathbf{k}\langle u_{ \mathbf{k}}|\partial_{k_{\alpha}}u_{\mathbf{k}}\rangle \tag{14}\]
where \(\mathcal{A}^{\alpha}(k)=i\langle u_{\mathbf{k}}|\partial_{k_{\alpha}}u_{ \mathbf{k}}\rangle\) is the Berry connection. Hence, we find that the electric polarization \(P_{e}^{\alpha}\) is related to \(c_{\alpha}\) by an integration by parts (See Appendix C):
\[\begin{split} P_{e}^{\alpha}&=\frac{e}{(2\pi)^{2}} \varepsilon^{\alpha\beta}\int d^{2}k\ k_{\beta}\mathcal{F}^{xy}+\frac{e}{2\pi} W^{\alpha}\\ &=\epsilon^{\alpha\beta}c_{\beta}+\frac{e}{2\pi}W^{\alpha},\end{split} \tag{15}\]
where we have set the lattice constants equal to unity, and the Wilson loop
\[W^{\alpha}=\oint dk_{\alpha}\mathcal{A}^{\alpha}(k_{\alpha},k_{\beta}=\pi), \tag{16}\]
is an integral of the Berry connection \(\mathcal{A}^{\alpha}\) along the \(\alpha\)-th momentum direction at a fixed, inversion-invariant transverse momentum \(k_{\beta}=\pi\) at the boundary of the BZ.
From this explicit relationship we can immediately draw some conclusions. First, in the Dirac semimetal limit, we reproduce the result of Ref. [34] where the polarization is proportional to the Dirac node dipole moment: \(P_{e}^{\alpha}=\frac{e}{2(2\pi)}\varepsilon^{\alpha\beta}\mathcal{P}_{\beta}.\) And second, if we have broken inversion symmetry (while \(\mathcal{T}\) is still preserved), we see that the polarization and the coefficient \(c_{\alpha}\) are not quantized, and _not equal_ to each other. This scenario can be found in inversion-breaking insulators with a Berry curvature dipole moment. These insulators will have a charge polarization, and they will also have a mixed translation-electromagnetic response. However, we find from this calculation, and explicit numerical checks, that they are generically inequivalent. Ultimately this boils down to the fact that the Wilson loop at the boundary of the BZ requires a symmetry to be quantized, e.g., mirror or inversion. Otherwise, the Wilson loop gives a contribution that distinguishes the polarization and the mixed crystalline-electromagnetic responses. We leave a detailed discussion of this subtle distinction to future work.
To summarize, Eq. 11 captures the generic mixed crystalline-electromagnetic response of the bulk of a 2D system with \(\mathcal{T}\)-symmetry. In the limit of a Dirac semimetal, the coefficient of the response coincides with the electric polarization of the system. We note that in this limit there will be other non-vanishing response terms since the system is gapless, but Eq. 11 represents a distinct contribution to the total response of the system to electromagnetic and translation gauge fields. We will study an explicit model with this response term in Sec. IV.1.
#### III.1.2 Dirac node quadrupole semimetal
Now we will move on to discuss the response of quadrupole arrangements of 2D Dirac nodes as in Fig. 2(b). If the Chern number and momentum dipole moment \(\mathcal{P}_{\alpha}\) vanish, then our semimetal has a well-defined momentum quadrupole moment, which is independent of the choice of zone center. We now show that such systems are described by the response action:
\[S_{DQ}=\frac{\hbar\mathcal{Q}_{\alpha\beta}}{8\pi}\int\mathfrak{e}^{\alpha}\wedge d\mathfrak{e}^{\beta}. \tag{17}\]
From the derivation in the previous section we anticipate that, in the limit of a Dirac semimetal band structure, the coefficient \(\mathcal{Q}_{\alpha\beta}\) of this response action is related to the momentum quadrupole moment of the Dirac nodes. To confirm this statement let us consider the linear response of a momentum current to a translation gauge field for a gapped system. From the Kubo formula, or gradient expansion, we find a coefficient of the \(\mathfrak{e}^{\alpha}\wedge d\mathfrak{e}^{\beta}\) term:
\[\frac{\hbar\mathcal{Q}_{\alpha\beta}}{8\pi}\equiv\frac{\hbar}{2}\frac{\varepsilon^{\mu\nu\rho}}{3!}\int\frac{d\omega d^{2}k}{(2\pi)^{3}}k_{\alpha}k_{\beta}\ \Omega_{\mu\nu\rho}^{(3)}(\omega,k). \tag{18}\]
We can use the relationship mentioned in Eq. 8 to carry out the frequency integral, which yields
\[\mathcal{Q}_{\alpha\beta}=\frac{1}{\pi}\int_{BZ}dk_{x}dk_{y}\ k_{\alpha}k_{\beta} \mathcal{F}^{xy}(k_{x},k_{y}). \tag{19}\]
To apply this to the Dirac node quadrupole semimetal shown in Fig. 2(b), we evaluate the response by first introducing a symmetry-breaking mass term and then studying the topological response of the resulting gapped system. In this case the mass term breaks \(\mathcal{T}\) but has a vanishing total Chern number. In the example at hand, this can be done by adding a \(k\)-independent term that opens a local mass of the same sign for each of the four Dirac points in Fig. 2b. Such a mass term preserves \(\mathcal{I}\), which in the gapped system automatically guarantees a vanishing dipole moment of the Berry curvature. This, together with the vanishing Chern number, is necessary so that the momentum quadrupole moment is well-defined, independent of the choice of zone center. For this scenario, in the limit that the perturbative mass goes to zero,
\[\mathcal{Q}_{\alpha\beta}=\sum_{a=1}^{N_{D}}\chi_{a}k_{\alpha}^{a}k_{\beta}^{a}, \tag{20}\]
which is the Dirac node quadrupole moment. In Sec. IV.2 we will explicitly study a model with this Berry curvature configuration and a resulting non-vanishing \(\mathcal{Q}_{\alpha\beta}\). We will see that while the Dirac node dipole moment captures the electric polarization (see Appendix C), the nodal quadrupole moment captures a kind of momentum polarization (see Appendix D) (this time, without the subtlety of the additional Wilson loop contribution discussed above). For comparison, the surface charge theorem relates the bulk electric polarization to a boundary charge, and for the momentum polarization there will be a boundary momentum.
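For orientation, Eq. 20 is trivial to evaluate once a nodal configuration is specified. As a hypothetical example (not necessarily the arrangement of Fig. 2(b)), four nodes at \((\pm K,0)\) with \(\chi=+1\) and at \((0,\pm K)\) with \(\chi=-1\) have a vanishing dipole moment but a non-zero, traceless quadrupole moment, as the short snippet below confirms.

```python
import numpy as np

# Hypothetical four-node configuration: chi = +1 at (+-K, 0), chi = -1 at (0, +-K).
K = np.pi / 2
nodes = [(+1, np.array([+K, 0.0])), (+1, np.array([-K, 0.0])),
         (-1, np.array([0.0, +K])), (-1, np.array([0.0, -K]))]

P = sum(chi * k for chi, k in nodes)               # Eq. (12): vanishes here
Q = sum(chi * np.outer(k, k) for chi, k in nodes)  # Eq. (20)
print(P)   # [0. 0.]
print(Q)   # diag(2K^2, -2K^2): Q_xx = -Q_yy = 2K^2, Q_xy = 0
```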
### Effective responses of 1D (semi)metals
Now that we have derived the responses of 2D systems coupled to electromagnetic and translation gauge fields, we will use Figs. 1(b) and 2 as guides to generate related responses in 1D and 3D. To get 1D responses we will consider the boundary response of the 2D systems (this subsection), and we will stack the 2D responses to get 3D responses of nodal line semimetals (next subsection). We note that in the following discussion we will treat translation as a continuous symmetry (as in Appendix A), since this perspective is useful for obtaining the correct response actions from our diagrammatic calculations. One can see Ref. [18], for example, for a discussion of the subtleties associated with having a discrete translation symmetry.
It is well-known that chiral modes in 1D are anomalous, i.e., charge is not conserved when we apply an electric field. In 1D lattice models this anomaly is resolved because of fermion doubling, i.e., for every right-moving chiral mode there is a corresponding left-moving mode that compensates the anomaly. While it is true that the electromagnetic charge anomaly is resolved with such a lattice dispersion, the doubled system can still be anomalous in a different but related sense if we have translation symmetry (see Ref. [18] for a similar discussion).
To be specific, in the presence of translation symmetry we can consider the momentum current in Eq. 3: \(\mathcal{J}_{x}^{\mu}=\hbar k_{x}j^{\mu}\) where \(j^{\mu}\) is the particle number current. At low energies, current-carrying excitations lie in the vicinity of Fermi points \(k_{x}^{F,\alpha}\) and carry corresponding particle currents \(j_{(\alpha)}^{\mu}.\) The total contribution to momentum current from these low-lying modes is:
\[\mathcal{J}_{x}^{\mu}=\sum_{\alpha}\hbar k_{x}^{F,\alpha}j_{(\alpha)}^{\mu}. \tag{21}\]
In the simplest case of a nearest-neighbor lattice model having a single, partially-filled band, we have two Fermi points: \(k^{F}\equiv k_{x}^{F,R}=-k_{x}^{F,L}\), with \(j_{R}^{\mu}=(\rho_{R},v_{F}\rho_{R})\) and \(j_{L}^{\mu}=(\rho_{L},-v_{F}\rho_{L})\), where \(\rho\) is the number density. Interestingly, the momentum current in this scenario is
\[\mathcal{J}_{x}^{\mu}=\hbar k^{F}(j_{R}^{\mu}-j_{L}^{\mu}), \tag{22}\]
which, up to a factor of \(\hbar k^{F}\), is just the axial current!
Importantly, even though this lattice model does not have an electromagnetic charge anomaly,
Figure 3: (a) One-dimensional band structure of an ordinary metal. The pair of gapless points is marked by the sign of their respective chiralities, highlighting the momentum-space dipole characterizing the response of the system. (b) Band structure of a 1D metal characterized by a momentum quadrupole moment. The system has an integer (vanishing in this case) charge filling, but a non-zero momentum. (c) Band structure of a 1D metal characterized by a momentum octupole moment. The system has an integer (vanishing) filling, a vanishing momentum, but a non-vanishing expectation value for the square of the momentum. See Appendix E.
it does have an axial anomaly:
\[\partial_{\mu}(j_{R}^{\mu}-j_{L}^{\mu})=\frac{eE^{x}}{\pi\hbar}. \tag{23}\]
Taking this point of view, we can reformulate the axial anomaly in this system as a mixed crystalline-electromagnetic anomaly where an electric field \(E_{x}\) violates conservation of the \(k_{x}\) momentum current,
\[\partial_{\mu}\mathcal{J}_{x}^{\mu}=\frac{e\hbar k^{F}}{\pi\hbar}E^{x}. \tag{24}\]
More generally the anomaly is proportional to the momentum dipole moment of the Fermi points, which replaces a factor of \(2k_{F}\) in Eq. 24 (see App. E).
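This form of the anomaly can be illustrated with an elementary lattice calculation: adiabatically threading one flux quantum through a periodic chain shifts every occupied momentum by \(2\pi/L\), so the momentum of the filled Fermi sea changes by approximately \(2\hbar k^{F}\), i.e., by \(\hbar\) times the Fermi-point dipole. Below is a minimal sketch assuming a cosine band and an illustrative chemical potential (neither is taken from the text), with \(\hbar=a=1\).

```python
import numpy as np

# Mixed crystalline-electromagnetic anomaly in 1D (cf. Eq. 24): threading one flux
# quantum through a ring shifts each occupied momentum by 2*pi/L, so the momentum of
# the Fermi sea changes by ~ 2*k_F (hbar = lattice constant = 1).
L, mu = 600, -1.0                          # chain length and chemical potential
k0 = 2 * np.pi * np.arange(L) / L
occ = -2 * np.cos(k0) < mu                 # fill the Fermi sea at zero flux ...

def sea_momentum(phi):
    # ... and follow the same states adiabatically as the flux phi is threaded
    k = (k0[occ] + phi / L + np.pi) % (2 * np.pi) - np.pi   # fold to (-pi, pi]
    return k.sum()

dP = sea_momentum(2 * np.pi) - sea_momentum(0.0)
print(dP, 2 * np.arccos(-mu / 2))          # expect dP ~ 2 k_F = 2*pi/3 here
```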
There is a conjugate effect that occurs in an applied strain field, which can be implemented as a translation electric field \(\mathcal{E}_{x}^{x}=\partial_{x}\mathfrak{e}_{0}^{x}-\partial_{t}\mathfrak{e }_{x}^{x}\). Naively such a non-vanishing field will generate violations to the conservation law for the usual electromagnetic current according to
\[\partial_{\mu}(ej^{\mu})=\frac{ek^{F}}{\pi}\mathcal{E}_{x}^{x}, \tag{25}\]
(again see App. E for a more general expression in terms of the momentum dipole). However, this equation is not quite correct if we have an isolated system with a fixed number of electrons, and hence, we must be careful when considering time-dependent changes to \(\mathfrak{e}_{x}^{x}\) as we will now describe.
To gain some intuition for Eq. 25, consider increasing the system size by one lattice constant \(a\) during a time \(T\) by adding an extra site to the system: \(\int dxdt\ \mathcal{E}_{x}^{x}=a\) (one can also think of threading a dislocation into the hole of a 1D periodic system). From the anomaly equation we would find that the amount of charge in the system changes by \(ek^{F}a/\pi\), as one would expect for adding a unit cell to a translationally invariant system having a uniform charge density \(\rho=ek_{F}/\pi.\) However, there is a subtlety that we can illustrate by considering a system having a fixed number of electrons \(N_{e}=k^{F}L_{x}/\pi\), which we strain by uniformly increasing the lattice constant. Assuming a uniform system, the anomalous conservation law in this case becomes
\[\partial_{t}\rho=\partial_{t}\left(\frac{ek^{F}}{\pi}\mathfrak{e}_{x}^{x} \right). \tag{26}\]
Crucially, we note that if we increase the system size with fixed particle number, then \(k^{F}\) will decrease. Indeed, in the small deformation limit the momenta are proportional to \((\mathfrak{e}_{x}^{x})^{-1}\) since their finite size quantization depends inversely on the system size. Using this result, the conservation law becomes:
\[\partial_{t}\rho=\frac{e}{\pi}\left(\mathfrak{e}_{x}^{x}\partial_{t}k^{F}+k^{F }\partial_{t}\mathfrak{e}_{x}^{x}\right)=\frac{ek^{F}}{\pi}(-\partial_{t} \mathfrak{e}_{x}^{x}+\partial_{t}\mathfrak{e}_{x}^{x})=0 \tag{27}\]
where we used \(\partial_{t}(\mathfrak{e}_{x}^{x})^{-1}=-(\mathfrak{e}_{x}^{x})^{-2}\partial_{ t}\mathfrak{e}_{x}^{x}\).
The outcome that \(\partial_{t}\rho=0\) is the result one would expect by stretching the system uniformly while keeping the number of particles fixed. To clarify, at fixed particle number we know the total charge cannot change; however, it may seem counter-intuitive that the _density_ does not decrease when we stretch the system. The reason is that the quantity \(\rho\) above, which is defined as \(\frac{\delta S}{\delta A_{0}}\), is not a scalar density. Indeed, for general geometries the scalar charge density would be defined as
\[\bar{\rho}=\frac{1}{\mathfrak{e}_{x}^{x}}\frac{\delta S}{\delta A_{0}}, \tag{28}\]
where the \(\mathfrak{e}_{x}^{x}\) is essentially playing the role of the determinant of a spatial metric. To calculate the total charge we would then use
\[Q=\int dx\,\mathfrak{e}_{x}^{x}\bar{\rho}=\int dx\,\rho. \tag{29}\]
Indeed, the scalar charge density \(\bar{\rho}\) will decrease as the system is stretched since \(\partial_{t}\bar{\rho}\propto\partial_{t}\mathcal{P}_{x}\) which decreases as the system size increases at fixed electron number.
The effective response action of the 1D system can be derived as a boundary effective action of an appropriate 2D theory. In fact, we have already seen such a 2D system when studying the 2D Dirac semimetal with Dirac nodes arranged in a dipolar fashion. The bulk response for this 2D system with a weak inversion-breaking gap is Eq. 11. As mentioned above, this bulk theory implies that the system has an electric polarization. From the surface-charge theorem for polarization we expect that the boundary will have a charge density equal to the polarization component normal to the boundary. The contribution to the boundary effective action from Eq. 11 is:
\[S_{\partial}=\frac{e}{4\pi}\mathcal{P}_{\alpha}\int\mathfrak{e}^{\alpha} \wedge A.\]
From this we can extract the boundary charge density: \(\rho_{\partial}=\frac{e}{2}\frac{\mathcal{P}_{\partial}}{2\pi}\mathfrak{e}_{\partial}^{\partial}\) where \(\mathcal{P}_{\partial}\) is the component of \(\mathcal{P}_{\alpha}\) along the boundary, and \(\mathfrak{e}_{\partial}^{\partial}\) is the diagonal translation gauge field component along the boundary that is simply equal to unity in non-deformed geometries.
While the form of this action is what we expect for a 1D metal, the coefficient is half the size it should be. The reason is that on the edge of the 2D Dirac semimetal, the momentum-space projections of the bulk Dirac nodes in the edge BZ represent points where the edge-filling changes by \(\pm e/2\), [34] not \(\pm e\) as would be the case for a 1D Fermi-point in a metal. Hence for a metal we expect a result twice as large (we will see a similar result in Sec. III.5 when comparing the boundary response of a 4D system to that of a 3D Weyl semimetal). Thus the action for the 1D system is
\[S_{1D,D}=\frac{e}{2\pi}\mathcal{P}_{\alpha}\int\mathfrak{e}^{\alpha}\wedge A. \tag{30}\]
From this form we can identify \(\mathcal{P}_{\alpha}=(-\Delta\mu/\hbar,\Delta k_{x})\), such that \(\frac{\mathcal{P}_{x}}{2\pi}=\frac{\Delta k_{x}}{2\pi}\) is simply the filling fraction of the 1D metal and \(\frac{\mathcal{P}_{t}}{2\pi}\) measures the imbalance of left- and right-moving excitations in the system (\(\Delta\mu=\mu_{R}-\mu_{L}\)).
Introducing a charge current vector
\[j^{\mu}=\frac{e}{2\pi}\varepsilon^{\mu\nu}\mathcal{P}_{\nu}=\frac{e}{2\pi}\left( \Delta k_{x},\Delta\mu/\hbar\right)^{T} \tag{31}\]
we can recast Eq. 30 in the more familiar form: \(S_{1D,D}=\int dtdx\ j^{\mu}A_{\mu}\). Thus, we have now generated the action (ii) from Fig. 1(c). Let us also note that the edge states of the Dirac semimetal can be flat, while the 1D context we mentioned above has a dispersion. However, the key feature of both cases is that as momentum is swept across the 1D BZ (1D surface BZ for the 2D case) the filling of the states changes in discrete jumps at either the Fermi points in 1D, or the (surface-projected) Dirac points in 2D. It is this change in the filling that is captured by the quantity \(\mathcal{P}_{x}\), and does not depend on the dispersion in a crucial way.
Now that we have this example in mind, we can ask what the analogous 1D boundary system is for the Berry curvature quadrupole action Eq. 17. We mentioned that this bulk response represents a momentum polarization, which implies that the boundary should have a momentum density parallel to the edge. Indeed, we expect that such a 1D system will have a vanishing Fermi-point dipole moment (i.e., the filling is integer), but a quadrupole moment that is non-vanishing (see Fig. 3(b)).
From the point of view of the translation gauge fields, such band structures are chiral since either the right movers or left movers carry larger momentum charge. To see this, consider a 1D Fermi surface with right-movers at momenta \(\pm K_{F}\), and left-movers at momenta \(\pm Q_{F}\). Let us further restrict our attention to currents for which the net number of right-movers (and of left-movers) is zero, e.g. \(\rho_{R}(K_{F})+\rho_{R}(-K_{F})=0\). Defining \(\delta\rho_{R}=\left(\rho_{R}(K_{F})-\rho_{R}(-K_{F})\right)\), and \(\delta\rho_{L}=\left(\rho_{L}(Q_{F})-\rho_{L}(-Q_{F})\right)\), we see that the momentum gauge field couples to
\[\mathcal{J}_{x}^{\mu}=K_{F}\delta\rho_{R}+Q_{F}\delta\rho_{L}. \tag{32}\]
Thus we see that for \(K_{F}\neq Q_{F}\) (as in Fig. 3(b)), the momentum gauge field couples differently to right- and left- moving density fluctuations. In the extreme limit that \(Q_{F}=0\), the momentum gauge theory is fully chiral.
More generally, in a 1D system with a Fermi-point quadrupole (cf. Eq. 20) \(\mathcal{Q}_{xx}=\sum_{a=1}^{N_{F}}\text{sgn}(v_{Fa})(k_{x}^{(a)})^{2}\), and fixed electric charge, this chiral coupling leads to an anomaly in the presence of a non-vanishing translation gauge field:
\[\partial_{\mu}\mathcal{J}_{x}^{\mu}=\frac{\hbar\mathcal{Q}_{xx}}{4\pi} \mathcal{E}_{x}^{x}. \tag{33}\]
This anomaly implies that if we turn on a translation gauge field (e.g., via strain) then we will generate momentum as shown in App. E [65].
The response theory describing such a 1D system is similar to that describing the chiral boundary of a Chern-Simons theory. Indeed, if we start from Eq. 17 and derive the boundary response (and compensate for a similar factor of two as mentioned above in the momentum-dipole case) we arrive at an effective action:
\[S=-\frac{\hbar}{4\pi}\int dtdx\ \left(\mathcal{Q}_{xx}\mathfrak{e}_{x}^{x} \mathfrak{e}_{t}^{x}+\mathcal{Q}_{xt}\mathfrak{e}_{x}^{x}\mathfrak{e}_{t}^{t} \right). \tag{34}\]
In this effective action the momentum quadrupole moment of the Fermi points \(\mathcal{Q}_{xx}\) encodes the ground state momentum density (see Appendix E). The quantity \(\mathcal{Q}_{xt}\) is the mixed Fermi-point quadrupole moment in momentum and energy, but we leave a detailed discussion of such mixed moments to future work.
The arguments of this section can be extended to look at higher moments of the chirality-weighted Fermi momenta, which are proportional to the ground state expectation values of higher and higher powers of momenta. To describe these properties, and related response phenomena, we can introduce gauge fields \(\mathfrak{e}^{abc\ldots}\) that couple to higher monomials of momentum, \(k_{a}k_{b}k_{c}\ldots\) For example, the fields that couple to zero powers or one power of momentum are the electromagnetic \(A\) and translation gauge fields \(k_{x}\mathfrak{e}^{x}\) respectively, and we could introduce a coupling \(k_{a}k_{b}\mathfrak{e}^{ab}\) to the set of 1-form gauge fields \(\mathfrak{e}^{ab}\), e.g., \(k_{x}^{2}\mathfrak{e}^{xx}\). We describe the hierarchical anomalies associated with these gauge fields in Appendix E.
### Effective responses of 3D nodal line semimetals
We can now use our 2D results from Sec. III.1 to generate the responses of two types of nodal line semimetals in 3D. To generate the two types we imagine stacking either the action in Eq. 11 or the action in Eq. 17. The action resulting from the former has been discussed in Refs. [19; 35]; the latter is, to the best of our knowledge, new. From our arguments for gapped systems in Sec. II, we expect that the form of the actions we obtain from stacking will contain an extra wedge product with the translation gauge field in the stacking direction. To be explicit, suppose we are stacking up 2D semimetals (that are parallel to the \(xy\)-plane) along the \(z\)-direction. By stacking decoupled planes of the responses in either Eq. 11 or Eq. 17, we expect to find
\[S=\frac{e\mathcal{P}_{\alpha}}{4\pi a_{z}}\int\mathfrak{e}^{z}\wedge\mathfrak{ e}^{\alpha}\wedge dA,\]
or
\[S=\frac{\hbar\mathcal{Q}_{\alpha\beta}}{8\pi a_{z}}\int\mathfrak{e}^{z} \wedge\mathfrak{e}^{\alpha}\wedge d\mathfrak{e}^{\beta},\]
respectively, where \(\alpha,\beta=x,y.\) The forms of these actions match action (viii) in Fig. 1(c) and the action in Fig. 2(c) respectively. We note that the stacked, decoupled systems simply inherit the response coefficient of the 2D system.
We want to consider more general configurations of systems with stacked and coupled planes, perhaps stacked
in several directions. As we have seen, if the layers we stack are decoupled, then each layer contributes the same amount. This contribution (for a stack in the \(z\)-direction) is captured by the integral \(\frac{1}{a_{z}}\int\mathbf{\epsilon}^{z}=N_{z}\) where \(N_{z}\) is the number of layers. However, if the layers are coupled, then each fixed-\(k_{z}\) plane can have a different amount of Dirac node dipole (\(\mathcal{P}_{\alpha}(k_{z})\)) or Dirac node quadrupole moment (\(\mathcal{Q}_{\alpha\beta}(k_{z})\)) respectively. The total coefficient is then determined by the sum over all values of \(k_{z}\). One can also have stacks in any direction, not just the \(z\)-direction. Hence, in this more generic scenario the actions become
\[S_{DD3}=e\mathcal{B}_{\alpha\beta}\int\mathfrak{e}^{\alpha}\wedge\mathfrak{e}^{\beta}\wedge dA, \tag{35}\]
and
\[S_{DQ3}=\hbar\mathcal{B}_{\alpha\beta,\gamma}\int\mathfrak{e}^{\alpha}\wedge\mathfrak{e}^{\beta}\wedge d\mathfrak{e}^{\gamma}, \tag{36}\]
with coefficients
\[\mathcal{B}_{\alpha\beta}=\frac{1}{4(2\pi)^{3}}\epsilon^{\alpha\beta\sigma}\int d^{3}k\ k_{\delta}\mathcal{F}^{\sigma\delta} \tag{37}\]
and
\[\mathcal{B}_{\alpha\beta,\gamma}=\frac{1}{6(2\pi)^{3}}\epsilon^{\alpha\beta \sigma}\int d^{3}k\ k_{\gamma}k_{\delta}\mathcal{F}^{\sigma\delta}. \tag{38}\]
where \(\mathcal{F}^{\mu\nu}\) is the Berry curvature of the \(k_{\mu}k_{\nu}\)-plane. These forms of the coefficients capture scenarios with more complicated nodal line geometries. Indeed, as previously shown in Ref. [35] the coefficient \(\mathcal{B}_{\alpha\beta}\) is determined by the line nodes that have non-vanishing area when projected into the \(\alpha\beta\)-plane. Additionally, for nodal line semimetals with \(\mathcal{TI}\) symmetry the coefficient is proportional to the charge polarization in the direction normal to the \(\alpha\beta\)-plane [35]. We can see this explicitly by integrating Eq. 37 by parts with the same caveats mentioned in Sec. III.1.1 surrounding Eq. 15.
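To make the "sum over \(k_{z}\)" statement above concrete, one can resolve a nodal-line band structure into its fixed-\(k_{z}\) planes and evaluate the 2D dipole of Eq. 9 plane by plane. The sketch below does this for an assumed two-band model whose in-plane node positions drift with \(k_{z}\) (the model, the weak gap parameter \(V\), and all numerical values are illustrative assumptions); the Brillouin-zone average of the planar dipoles is what enters the stacked coefficients of Eqs. 35 and 37, up to normalization conventions.

```python
import numpy as np

# kz-resolved Dirac-node dipole of an assumed nodal-line model: each fixed-kz plane is
# a 2D Dirac model whose node separation depends on kz, and the 3D coefficient is set
# by the Brillouin-zone average of the planar dipoles (cf. the discussion above).
def d_vec(kx, ky, kz, V=0.15):
    m = 1.0 + 0.5 * np.cos(kz)              # in-plane node positions drift with kz
    return np.array([V, np.sin(ky), m - np.cos(kx) - np.cos(ky)])

def F_xy(kx, ky, kz, dk=1e-4):
    d = d_vec(kx, ky, kz)
    ddx = (d_vec(kx + dk, ky, kz) - d_vec(kx - dk, ky, kz)) / (2 * dk)
    ddy = (d_vec(kx, ky + dk, kz) - d_vec(kx, ky - dk, kz)) / (2 * dk)
    n = np.linalg.norm(d)
    dhx = ddx / n - d * (d @ ddx) / n**3
    dhy = ddy / n - d * (d @ ddy) / n**3
    return -0.5 * np.dot(d / n, np.cross(dhx, dhy))

N, Nz = 150, 12
ks = np.linspace(-np.pi, np.pi, N, endpoint=False)
dipoles = []
for kz in np.linspace(-np.pi, np.pi, Nz, endpoint=False):
    F = np.array([[F_xy(kx, ky, kz) for ky in ks] for kx in ks])
    cx = (ks[:, None] * F).sum() * (2 * np.pi / N) ** 2 / (2 * np.pi) ** 2
    dipoles.append(cx)                      # Eq. (9) evaluated plane by plane

# |4*pi*c_x(kz)| should track the node separation 2*arccos(0.5*cos(kz))
print(np.round(dipoles, 3))
print(np.mean(dipoles))                     # BZ average entering the stacked response
```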
Analogously, the coefficient \(\mathcal{B}_{\alpha\beta,\gamma}\) can represent a kind of "momentum"-polarization where the polarization is again normal to the \(\alpha\beta\)-plane, and the charge that is polarized is the momentum along the \(\gamma\)-direction. We can see this heuristically by integrating by parts using the derivatives in the \(\mathcal{F}^{\sigma\delta}\) to find
\[\mathcal{B}_{\alpha\beta,\gamma}\sim-\frac{1}{2(2\pi)^{3}}\int d^{3}k\left( \epsilon^{\alpha\beta\sigma}k_{\gamma}\mathcal{A}^{\sigma}-\epsilon^{\alpha \beta\gamma}k_{i}\mathcal{A}^{i}\right) \tag{39}\]
where we have used the \(\sim\) symbol to indicate that there are boundary terms we have dropped that can be important if the line nodes span the Brillouin zone. We can see from this form that the coefficient for the case when \(\alpha,\beta,\gamma\) are not all different, e.g. \(\mathcal{B}_{xz,x}\), is proportional to the polarization in the \(y\)-direction (i.e. normal to the \(xz\)-plane) weighted by the momentum in the \(x\)-direction.
We note that for \(\mathcal{B}_{\alpha\beta}\) to be well-defined, the Chern number in each plane must vanish. In addition to this constraint, \(\mathcal{B}_{\alpha\beta}=0\) is a necessary constraint for \(\mathcal{B}_{\alpha\beta,\gamma}\) to be well defined. These hierarchical requirements are analogous to the usual requirements for the ordinary (magnetic) dipole and (magnetic) quadrupole moments of the electromagnetic field to be independent of the choice of origin. Here the role of the magnetic field distribution is being played by \(\mathcal{F}^{\sigma\rho}(k)\), and, for example, the constraint on the vanishing Chern number eliminates the possibility of magnetic monopoles (i.e., Weyl points).
### Effective responses of 4D semimetals
Our next goal is to determine the coefficients for the response actions of 3D Weyl point-node semimetals. However, because the Weyl nodes in 3D exhibit an anomaly, the responses are subtle to calculate intrinsically in 3D. Instead, to accomplish our goal we will first carry out more straightforward calculations of the responses of 4D semimetals and then return to 3D either by considering the boundary of a 4D system, or by compactifying and shrinking one dimension of the bulk. Hence, as a step toward 3D semimetals, in this subsection we provide the derivation for effective response actions of semimetals in 4D.
The first action we consider is of the form
\[S=c_{\alpha}\int\mathfrak{e}^{\alpha}\wedge dA\wedge dA, \tag{40}\]
where for our purposes \(\alpha=x,y,z,w.\) Collecting all terms in the gradient expansion that have this field content we obtain:
\[\begin{split} S&=\frac{e^{2}}{\hbar}\int d^{5}r\ \mathfrak{e}_{\mu}^{\alpha}\partial_{\nu}A_{\rho}\partial_{\sigma}A_{\tau}\\ &\times\int\frac{d\omega d^{4}k}{(2\pi)^{5}}k_{\alpha}\Omega^{(5)}_{\mu\nu\rho\sigma\tau}(\omega,k),\end{split} \tag{41}\]
where
\[\Omega^{(5)}_{\mu\nu\rho\sigma\tau}(\omega,k)=\text{tr}\left(G_{0}\frac{ \partial G_{0}^{-1}}{\partial k_{\mu}}\frac{\partial G_{0}}{\partial k_{\nu}} \frac{\partial G_{0}^{-1}}{\partial k_{\rho}}\frac{\partial G_{0}}{\partial k _{\sigma}}\frac{\partial G_{0}^{-1}}{\partial k_{\tau}}\right), \tag{42}\]
and \(G_{0}(\omega,k)\) is the single-particle Green function. To determine the coefficient \(c_{\alpha}\) we project \(\Omega^{(5)}_{\mu\nu\rho\sigma\tau}\) onto its totally antisymmetric part and then, just as in Eq. 8, we can carry out the frequency integral [64] to obtain the simpler expression
\[\begin{split}&\int\frac{d\omega d^{4}k}{2\pi}\frac{\varepsilon_{\mu \nu\rho\sigma\tau}}{5!}k_{\alpha}\Omega^{(5)}_{\mu\nu\rho\sigma\tau}(\omega,k)\\ &=\frac{1}{16}\int_{BZ}d^{4}\mathbf{k}\ k_{\alpha}\varepsilon_{ ijkl}\mathcal{F}^{ij}\mathcal{F}^{kl}.\end{split} \tag{43}\]
Hence, the response coefficient takes the form
\[c_{\alpha}=\frac{e^{2}}{\hbar}\frac{1}{16(2\pi)^{4}}\int_{BZ}d^{4}k\ k_{\alpha} \varepsilon_{ijkl}\mathcal{F}^{ij}\mathcal{F}^{kl}=\frac{e^{2}\mathcal{P}_{ \alpha}}{16\pi^{2}\hbar}, \tag{44}\]
where we introduced
\[\mathcal{P}_{\alpha}=\frac{1}{16\pi^{2}}\int_{BZ}d^{4}\mathbf{k}\ k_{\alpha} \varepsilon_{ijkl}\mathcal{F}^{ij}\mathcal{F}^{kl}. \tag{45}\]
As we see from this calculation, similar to 2D, the 4D response theories can be characterized by the distribution of the quantity \(\varepsilon_{ijkl}\mathcal{F}^{ij}\mathcal{F}^{kl}\) across the 4D Brillouin zone. For our focus, let us consider the case where the 4D system is a semimetal with a set of isolated Dirac points (linearly dispersing band touchings where four bands meet). Without symmetry, these Dirac points are locally unstable in momentum space to the opening of a gap. If we open up an infinitesimally small energy gap, the quantity \(\varepsilon_{ijkl}\mathcal{F}^{ij}\mathcal{F}^{kl}\) becomes well-defined across the entire BZ and its distribution takes the following form in the massless limit:
\[\varepsilon_{ijkl}\mathcal{F}^{ij}\mathcal{F}^{kl}=\sum_{a=1}^{N_{D}}16\pi^{2 }\chi_{a}\delta(\mathbf{k}-\mathbf{k}_{a}). \tag{46}\]
If we substitute this into Eq. 45 then we immediately see that \(\mathcal{P}_{\alpha}\) becomes the momentum space dipole of the set of 4D Dirac nodes. Let us also comment that if we integrate Eq. 45 by parts we see that \(\mathcal{P}_{\alpha}\) can also be interpreted as a set of magneto-electric polarizabilities [5; 28]. Just as in the case of the polarization of a 2D Dirac semimetal, the integration by parts will generate a boundary term that captures the magneto-electric polarizability coming from the 3D boundaries of the 4D BZ. Hence, the connection between the total magneto-electric polarizability and the mixed translation-electromagnetic response is only exact in the symmetric limit when the boundary term is quantized.
In summary, a 4D response of a system characterized by a dipolar distribution of the \(\varepsilon_{ijkl}\mathcal{F}^{ij}\mathcal{F}^{kl}\) quantity reads:
\[S=\frac{e^{2}\mathcal{P}_{\alpha}}{16\pi^{2}\hbar}\int\mathfrak{e}^{\alpha}\wedge dA\wedge dA. \tag{47}\]
Similar to 2D, if the dipolar response vanishes we can obtain a momentum quadrupole response coefficient for the action:
\[S=\frac{e\mathcal{Q}_{\alpha\beta}}{16\pi^{2}}\int\mathfrak{e}^{\alpha}\wedge d\mathfrak{e}^{\beta}\wedge dA, \tag{48}\]
where \(\mathcal{Q}_{\alpha\beta}\) is a symmetric matrix determined by the momentum space quadrupole moment of the 4D Dirac nodes. Finally, if both the dipolar and quadrupolar responses vanish we can consider an octupolar distribution that will give the response coefficient for the action:
\[S=\frac{\hbar\mathcal{O}_{\alpha\beta\gamma}}{48\pi^{2}}\int\mathfrak{e}^{\alpha}\wedge d\mathfrak{e}^{\beta}\wedge d\mathfrak{e}^{\gamma}, \tag{49}\]
where \(\mathcal{O}_{\alpha\beta\gamma}\) is determined by the momentum space octupole moment of the 4D Dirac nodes. We will leave the discussion of octupolar configurations of Dirac and Weyl nodes to future work. We also mention that, similar to 2D, for these responses to be independent of the choice of BZ origin we require that the second Chern number of the 4D system vanishes. Alternatively, if the second Chern number is non-vanishing, then the boundary of the system will contain a non-vanishing chirality of Weyl nodes. As such, the anomalous charge response of the chiral boundary will not allow us to uniquely determine the momentum response on the boundary.
Before moving on to 3D, let us briefly present some physical intuition about the response in Eq. 47. Consider a 4D time-reversal and inversion invariant system having two Dirac nodes separated in the \(k_{z}\)-direction. To simplify the discussion, let us also assume the system has mirror symmetry \(M_{z}.\) The assumed symmetries imply that each fixed-\(k_{z}\) volume can be treated as an independent 3D insulator having 3D inversion symmetry, and hence the magneto-electric polarizability of these 3D insulator subspaces is quantized [5; 67; 66]. Now, if we sweep through \(k_{z}\) then each bulk 4D Dirac point crossing changes the magneto-electric polarizability of the fixed-\(k_{z}\) volume by a half-integer (i.e., changes the related axion angle by \(\pi\)) [5]. Since the magneto-electric polarizability jumps between its quantized values as we pass through the two bulk Dirac nodes, the \(k_{z}\) Brillouin zone splits into two intervals: (i) an interval with a vanishing magneto-electric polarizability, and (ii) an interval with a non-vanishing quantized magneto-electric polarizability. Indeed, we could have anticipated this result from the form of the action Eq. 47 when \(\alpha=z,\) i.e., the action represents stacks of 3D topological insulators that each have a non-vanishing magneto-electric polarizability.
### Effective responses of 3D semimetals
From this discussion we see that, in the presence of symmetry, the 4D bulk Dirac node dipole moment determines the magneto-electric polarizability of these 4D topological semimetals via Eq. 47. We want to connect this result to 3D semimetals in two ways. First, we will consider the 3D boundary of the 4D system, and then we will consider the spatial compactification of one spatial dimension.
Let us begin by considering the boundary response action from Eq. 47. For the model system described at the end of the previous subsection we know the system has a \(k_{z}\)-dependent magneto-electric polarizability. Consider a boundary in the fourth spatial direction \(w.\) Since the magneto-electric polarizability is changing from inside to outside of the boundary, the boundary itself will have a non-vanishing Hall conductivity. For our example system, each fixed-\(k_{z}\) slice of this boundary will have a Hall conductivity \(\sigma_{xy},\) which is quantized, but possibly vanishing. Additionally, since the bulk 4D Dirac nodes are separated in the \(k_{z}\) direction, they will project to gapless points in the 3D surface BZ (on surfaces that have at least one direction perpendicular to the \(z\)-direction) where the
Hall conductivity discretely jumps by \(\Delta\sigma_{xy}=\pm\frac{e^{2}}{2h}\).
From this phenomenology, i.e., discrete Hall conductivity jumps as we sweep through \(k_{z}\) we expect that the boundary response of Eq. 47 captures the same response as a Weyl semimetal that has a non-vanishing momentum space dipole moment of the Weyl nodes in the \(z\)-direction. Indeed the generic boundary contribution from Eq. 47 has the form:
\[S_{WD}=\frac{e^{2}\mathcal{P}_{\alpha}}{8\pi^{2}\hbar}\int\mathfrak{e}^{\alpha}\wedge dA\wedge A \tag{50}\]
which was proposed by Ref. [33] to describe the response of Weyl semimetals, though in the more conventional form using an axion field and without the translation gauge field. Here \(\mathcal{P}_{\alpha}\), \(\alpha=x,y,z\), is the momentum dipole of the Weyl nodes in the \(\alpha\)-th direction. This action is represented as (ix) in Fig. 1(c). We note that the coefficient in Eq. 50 is twice as large as the actual boundary term derived from Eq. 47. This is because when \(k_{i}\) passes through a single Weyl point we have \(\epsilon_{ijk}\Delta\sigma_{jk}=\pm\frac{e^{2}}{h}\), whereas the surface response of the 4D system has jumps of half the size. This is analogous to the fact that a 1D metal has an integer jump in the filling as we pass through a Fermi point, whereas the surface of a 2D Dirac semimetal has a boundary "filling" that jumps by a half-integer as we pass through a gapless point in the surface BZ.
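As a sanity check of this phenomenology, the layer-resolved Hall response is simple to compute for a minimal two-band lattice Weyl model (the model and all parameters below are illustrative assumptions). Each fixed-\(k_{z}\) plane carries a Chern number that jumps by \(\pm 1\) as \(k_{z}\) crosses a Weyl node, so the \(k_{z}\)-averaged Hall response tracks the momentum-space separation of the nodes, which is the content of Eq. 50 for \(\mathcal{P}_{z}\neq 0\).

```python
import numpy as np

# Layer-resolved Chern number C(kz) for an assumed two-band Weyl model with nodes at
# (0, 0, +-pi/2).  C(kz) jumps by +-1 as kz crosses a node, so the kz-averaged Hall
# response tracks the Weyl-node separation (cf. Eq. 50).
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def h(kx, ky, kz, m=2.0):
    return np.sin(kx) * sx + np.sin(ky) * sy + (m - np.cos(kx) - np.cos(ky) - np.cos(kz)) * sz

def chern_fixed_kz(kz, N=40):
    ks = 2 * np.pi * np.arange(N) / N
    u = np.empty((N, N, 2), dtype=complex)            # lower-band eigenvectors
    for i, kx in enumerate(ks):
        for j, ky in enumerate(ks):
            u[i, j] = np.linalg.eigh(h(kx, ky, kz))[1][:, 0]
    C = 0.0                                           # Fukui-Hatsugai-Suzuki plaquette sum
    for i in range(N):
        for j in range(N):
            ua, ub, uc = u[(i + 1) % N, j], u[(i + 1) % N, (j + 1) % N], u[i, (j + 1) % N]
            C += np.angle(np.vdot(u[i, j], ua) * np.vdot(ua, ub)
                          * np.vdot(ub, uc) * np.vdot(uc, u[i, j]))
    return round(C / (2 * np.pi))

Nz = 20
kzs = -np.pi + 2 * np.pi * (np.arange(Nz) + 0.5) / Nz   # offset grid avoids the gapless planes
Cs = [chern_fixed_kz(kz) for kz in kzs]
print(Cs)               # +-1 between the Weyl nodes (|kz| < pi/2), 0 otherwise
print(sum(Cs) / Nz)     # ~ +-1/2: node separation pi divided by 2*pi
```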
We can repeat this analysis for Eq. 48. The coefficient of this term is proportional to the momentum space quadrupole moment of the nodal points. Unfortunately the phenomenology of this term is not as easy to analyze in 4D because it is not generated from a lower dimensional system in a clear way [68]. By analogy with the previous case, the bulk 4D Dirac nodes will project to a quadrupole of 3D Weyl nodes on the surface. We can extract the form of the 3D action we want by taking the boundary term generated from Eq. 48. Then accounting for the factor of two as in the previous case, we arrive at:
\[S_{WQ}=\frac{e\mathcal{Q}_{\alpha\beta}}{8\pi^{2}}\int\mathfrak{e}^{\alpha}\wedge d\mathfrak{e}^{\beta}\wedge A. \tag{51}\]
(Note that since \(\mathcal{Q}_{\alpha\beta}\) is symmetric, the related contribution of the form \(e\mathcal{Q}_{\alpha\beta}/8\pi^{2}\int\mathfrak{e}^{\alpha}\wedge\mathfrak{e}^{\beta}\wedge dA\) vanishes). This action is the same as that shown in Fig. 2(a). It produces a mixed crystalline-electromagnetic response and represents a rank-2 vector charge response when certain mirror symmetries are preserved [17]. Its response coefficient is determined by the momentum space quadrupole moment of the Weyl nodes.
Finally, we come to the action (x) in Fig. 1(c). Let us briefly sketch some salient features of this response, while we leave a detailed discussion to future work. We can arrive at this action using a formal compactification of the action in Eq. 47 [5]. First we can integrate that action by parts to arrive at
\[\frac{e^{2}\mathcal{P}_{\alpha}}{16\pi^{2}\hbar}\int A\wedge d\mathfrak{e}^{\alpha}\wedge dA,\]
where we have ignored the boundary term. We now want to dimensionally reduce the fourth spatial direction \(w\), which we accomplish by choosing periodic boundary conditions in \(w\) and letting the size of the system in this direction shrink toward zero. In this limit any derivatives with respect to \(w\) are (formally in our case) dropped [69]. The resulting non-vanishing contribution is
\[\frac{e^{2}\mathcal{P}_{\alpha}}{8\pi^{2}\hbar}\oint A_{w}dw\int d\mathfrak{e}^{\alpha}\wedge dA,\]
where the integral and exterior derivative in the second factor are over only the remaining four spacetime dimensions. We can now make the definition
\[\Theta\equiv 2\pi\frac{e}{h}\int A_{w}dw, \tag{52}\]
to arrive at action (x) from Fig. 1(c):
\[\frac{e\mathcal{P}_{\alpha}}{8\pi^{2}}\int\Theta\, d\mathfrak{e}^{\alpha}\wedge dA. \tag{53}\]
To illustrate some of the phenomenology of this action let us assume that \(\mathcal{P}_{z}\neq 0.\) Additionally let us assume that we maintain time-reversal and inversion symmetry. As such, \(\Theta=0,\pi\). To begin, we see that the action in Eq. 53 is a total derivative if \(\Theta\) and \(\mathcal{P}_{\alpha}\) are space-time independent. The resulting pure boundary term is just proportional to the response of a 2D weak TI (or 2D Dirac semimetal), i.e., Eq. 7. Depending on the symmetry of the surfaces, this implies that we expect the surface to be gapped except for possibly isolated Dirac points. Since the boundary terms appear as \(\mathbf{\epsilon}^{z}\wedge dA\) we expect that surfaces normal to \(\hat{x}\) (\(\hat{y}\)) will harbor a \(y\)-polarization (\(x\)-polarization), i.e., the polarization is tangent to the surface.
Importantly, the sign of the polarization depends on the interpolation of \(\Theta\) between its non-trivial bulk value of \(\Theta=\pi\) and the trivial vacuum value \(\Theta=0\) outside the system. For neighboring surfaces where the effective sign of the polarization changes we anticipate hinge charges where surfaces intersect since the polarizations are converging or diverging from the hinges. Thus, the response of this system is similar to a stack of 2D planes of quadrupole moment having component \(q_{xy}\neq 0.\) In this scenario, coupled quadrupole planes could lead to either a higher order weak topological insulator having a quadrupole moment, or a higher order topological semimetal with boundary (and possibly bulk) Dirac nodes [70; 71]. To make further progress it would be advantageous to have a microscopic derivation of the coefficient in Eq. 53 intrinsically in 3D. Hence, we will leave further discussion of this action to future work.
## IV Explicit response calculations for lattice models
Now that we have completed the derivations of the actions in Figs. 1(c) and 2, we will provide a series of
model examples that manifest these responses. Using these models we can numerically calculate the various charge and momentum responses to electromagnetic and translation gauge fields, providing an independent verification of the coefficients derived in the previous section. Some of the models and responses we discuss below have appeared elsewhere in the literature, while others have not. We will carry out this analysis in the same order as the previous section, i.e., point-node Dirac semimetals in 2D, nodal line semimetals in 3D, and then point-node Weyl semimetals in 3D. Calculations for 1D systems were carried out analytically in Sec. III.2, and additional discussion can be found in App. E.
### 2D Dirac node dipole semimetal and insulator
We begin with the time-reversal invariant 2D systems discussed in Sec. III.1 that exhibit a mixed crystalline-electromagnetic response. Since \(\mathcal{T}\) is preserved, the usual Chern-Simons, Hall-effect response of the electromagnetic field vanishes. Instead, the response action derived in Sec. III.1.1 takes the form of a mutual Chern-Simons term [52]:
\[S[\mathfrak{e}_{\nu}^{\lambda},A_{\mu}]=\frac{e}{4\pi}\mathcal{P}_{\lambda}\int\mathfrak{e}^{\lambda}\wedge dA. \tag{54}\]
Unlike the purely electromagnetic polarization response action considered in Ref. [34], this formulation of the low-energy response theory also includes bulk electromagnetic responses to the translation gauge fields. For example, by taking a functional derivative with respect to \(A_{\mu}\) we have
\[\begin{split}\rho&=-\frac{e}{4\pi}\mathcal{P}_{\lambda}\varepsilon^{ij}\partial_{i}\mathfrak{e}_{j}^{\lambda},\\ j^{x}&=\frac{e}{4\pi}\mathcal{P}_{\lambda}(\partial_{t}\mathfrak{e}_{y}^{\lambda}-\partial_{y}\mathfrak{e}_{t}^{\lambda}),\\ j^{y}&=-\frac{e}{4\pi}\mathcal{P}_{\lambda}(\partial_{t}\mathfrak{e}_{x}^{\lambda}-\partial_{x}\mathfrak{e}_{t}^{\lambda}).\end{split} \tag{55}\]
We see that the first equation predicts an electric charge density localized on a dislocation in the bulk of the lattice, which is exactly the phenomenology we expect for a weak topological insulator [38] or a 2D Dirac semimetal. The action (54) also predicts a bulk momentum response to the electromagnetic field when varied with respect to \(\mathfrak{e}_{\mu}^{\lambda}\),
\[\begin{split}\mathcal{J}_{\lambda}^{t}&=-\frac{e}{ 4\pi}\mathcal{P}_{\lambda}B_{z},\\ \mathcal{J}_{\lambda}^{i}&=-\frac{e}{4\pi}\mathcal{P} _{\lambda}\varepsilon^{ij}E_{j},\end{split} \tag{56}\]
where \(E_{i}\) and \(B_{i}\) are the components of electric and magnetic fields respectively. In the inversion-symmetric limit and in the absence of lattice defects and deformations, for which the crystalline gauge fields reduce to \(\mathfrak{e}_{\mu}^{\lambda}=\delta_{\mu}^{\lambda}\), Eq. (55) simply reproduces the boundary charge and current responses of an ordinary 2D Dirac semimetal or weak topological insulator, which harbors a non-vanishing electric polarization. However, as we mentioned in Sec. III.1.1, and comment further on below, we do not expect the coefficient of this action to match the electric polarization when inversion is strongly broken.
While the electric polarization/magnetization responses of Dirac semimetals were discussed in detail in Ref. [34], the momentum responses in Eq. (56) and the charge responses to translation fluxes (i.e., dislocations) in Eq. (55) are less familiar. Thus, we will explicitly calculate these responses using a minimal tight-binding model. For simplicity, we employ a two-band Bloch Hamiltonian that can model both 2D Dirac semimetals and weak topological insulators:
\[\begin{split} H(\mathbf{k})&=V_{\mathcal{I}}\sigma ^{x}+\sin(k_{y}a_{y})\sigma^{y}\\ &+(m-\cos(k_{x}a_{x})-\cos(k_{y}a_{y}))\sigma^{z}.\end{split} \tag{57}\]
When \(V_{\mathcal{I}}=0\), \(H\) has both inversion symmetry, \(\mathcal{I}=\sigma^{z}\), and (spinless) time-reversal symmetry, \(\mathcal{T}=K\). In this symmetric regime, \(m\) can be chosen to produce a semimetal with Dirac points located at, for example, \((k_{x},k_{y})=(\pm\pi/(2a_{x}),0)\), when \(m=1\). In the semimetal phase, turning on \(V_{\mathcal{I}}\sigma^{x}\), which breaks inversion while preserving \(\mathcal{T}\), generates a mass term that opens a gap at the Dirac points. The signs of the Berry curvature localized near the two now-gapped Dirac points are opposite, as shown in Fig. 4(a), with the sign at a particular point determined by the sign of the perturbation \(V_{\mathcal{I}}\). The total Berry curvature of the occupied band integrated over the entire BZ, equivalent to the Chern number, is therefore zero, so the Berry curvature dipole is well-defined.
To confirm our analytic calculations of the response coefficients we will first calculate the momentum density localized around an out-of-plane magnetic flux \(\Phi_{z}\) using the tight-binding model Eq. (57). In order to determine the \(k_{x}\) momentum density in the lattice model, we must introduce magnetic flux in a fashion that preserves translation symmetry in the \(\hat{x}\)-direction. We show the configuration that we employ in Fig. 4(b). This configuration keeps the crystal momentum \(k_{x}\) as a good quantum number and allows us to compute the value of \(\mathcal{J}_{x}^{t}\) as the probability density of the occupied single particle states weighted by their momentum \(\hbar k_{x}\). The results of the numerical calculations are presented in Fig. 4(c,d), where we study how the excess \(k_{x}\) momentum density bound to magnetic flux behaves as a function of both the magnetic flux \(\Phi_{z}\) at fixed Berry curvature dipole \(\mathcal{P}_{x}\), and as a function of \(\mathcal{P}_{x}\) at fixed \(\Phi_{z}\). Our numerical results match our analytic calculations precisely.
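A condensed sketch of how such a calculation can be set up is shown below: we diagonalize the \(k_{x}\)-resolved Hamiltonian of Eq. (57) on a torus, with magnetic flux spread uniformly through two rows of plaquettes (with opposite signs so the total flux vanishes), and accumulate the momentum-weighted density of the occupied states. All parameter and flux values are illustrative, overall signs depend on gauge and band conventions, and the momentum bound to one flux row should be compared with \((e/4\pi)\mathcal{P}_{x}\Phi\) (more precisely with \(c_{x}\Phi\) at finite \(V_{\mathcal{I}}\), cf. Eq. 9).

```python
import numpy as np

# Sketch of the Fig. 4(c)-type calculation: kx-resolved diagonalization of the model of
# Eq. (57) on an Nx x Ny torus, with magnetic flux through two rows of plaquettes (at
# y1 and y2, opposite signs) so translation symmetry along x is preserved.  Parameters
# are illustrative; overall signs depend on gauge and band conventions (e = hbar = 1).
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

Nx, Ny, m, V_I = 60, 60, 1.0, 0.3
y1, y2 = Ny // 4, 3 * Ny // 4
T = 0.5j * sy - 0.5 * sz                          # y -> y+1 hopping: sin(ky)sy - cos(ky)sz

def momentum_density(flux):
    """kx-momentum density rho_P(y); `flux` is the total flux through row y1."""
    ax = np.zeros(Ny)
    ax[y1 + 1:y2 + 1] = flux / Nx                 # x-link Peierls phase between the rows
    rho = np.zeros(Ny)
    for kx in 2 * np.pi * np.arange(Nx) / Nx - np.pi:
        H = np.zeros((2 * Ny, 2 * Ny), dtype=complex)
        for y in range(Ny):
            H[2*y:2*y+2, 2*y:2*y+2] = V_I * sx + (m - np.cos(kx - ax[y])) * sz
            yp = (y + 1) % Ny
            H[2*yp:2*yp+2, 2*y:2*y+2] += T
            H[2*y:2*y+2, 2*yp:2*yp+2] += T.conj().T
        E, U = np.linalg.eigh(H)
        occ = U[:, E < 0]                          # half filling (chemical potential in the gap)
        rho += kx * (np.abs(occ) ** 2).reshape(Ny, 2, -1).sum(axis=(1, 2))
    return rho

Phi = np.pi                                        # half a flux quantum through row y1
dJ = momentum_density(Phi) - momentum_density(0.0)
bound = dJ[:Ny // 2].sum()                         # momentum bound near the first flux row
print(bound, np.pi * Phi / (4 * np.pi))            # compare |bound| with P_x*Phi/(4*pi), P_x = pi
```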
We can interpret this result by noting that the momentum current in Eq. (56) can be obtained in the semi-classical limit by considering the momentum current carried by electron wavepackets subject to an anomalous velocity [72; 73]. The equation of motion of an electron wavepacket with momentum \(\mathbf{k}\) formed from a single band
is
\[v^{i}(\mathbf{k})=\frac{\partial\mathcal{E}}{\hbar\partial k_{i}}+\frac{e}{\hbar} \epsilon^{ij}E_{j}\mathcal{F}^{xy}(\mathbf{k}), \tag{58}\]
where \(v^{i}(\mathbf{k})\) is the wavepacket velocity, \(\mathcal{E}(\mathbf{k})\) is the energy spectrum of the band, \(E_{j}\) is the electric field, and \(\frac{e}{\hbar}\epsilon^{ij}E_{j}\mathcal{F}^{xy}(\mathbf{k})\) is the anomalous velocity. The momentum current of the occupied states is obtained by adding up the contributions \(\hbar k_{x}v^{i}(\mathbf{k})\) in the BZ and contains a term arising from the anomalous velocity given by
\[\begin{split}\mathcal{J}_{\lambda}^{i}&=-\frac{e }{(2\pi)^{2}}\epsilon^{ij}E_{j}\int dk_{x}dk_{y}\,k_{\lambda}\mathcal{F}^{xy}( k_{x},k_{y})\\ &=-\frac{e}{4\pi}\mathcal{P}_{\lambda}\epsilon^{ij}E_{j}.\end{split} \tag{59}\]
We can also numerically probe our response equations by studying the charge response to the deformation of the lattice. To do so, we introduce a translation flux to rows of plaquettes located near \(y=N_{y}/4\) and \(y=3N_{y}/4\), analogous to the magnetic flux configuration we just considered. This effectively inserts two rows of dislocations such that if one encircles a plaquette containing translation flux, the Burgers vector is in the \(x\)-direction. This effectively creates opposite translational magnetic fields \(\mathcal{B}_{z}^{x}=\partial_{x}\mathfrak{e}_{y}^{x}-\partial_{y}\mathfrak{e} _{x}^{x}\) penetrating the two rows of plaquettes. Again, we choose this geometry since it is compatible with translation symmetry in the \(x\)-direction. In our lattice model we insert the translation flux by explicitly adding generalized Peierls' factors that are \(k_{x}\)-dependent, i.e., \(\exp\left(ik_{x}\int\mathfrak{e}_{i}^{x}dx^{i}\right)\) such that the colored regions in Fig. 4(b) contain non-vanishing translation flux. The resulting electron charge density localized on the translation magnetic flux has a dependence on both the \(\mathcal{B}_{z}^{x}\) field strength and the Berry curvature dipole moment \(\mathcal{P}_{x}\) as shown in Fig. 4(e),(f). This again matches the expectation from our analytic response equations.
We emphasize that the effective action (55) describes the _mutual bulk_ response between the electromagnetic and the momentum currents in semimetallic and insulating systems with vanishing Chern number. We showed in Sec. III.1.1 that one must be careful when comparing this response to the charge polarization. In particular, our numerics show that, even in the presence of significant inversion-breaking, the bulk momentum density response to a magnetic flux tracks the value of the coefficient \(c_{\alpha}\) from Eq. 9 as demonstrated in Fig. 4 (d). In contrast, as shown in Sec. III.1.1, the expression for the electric polarization, Eq. 15, contains an additional term that is proportional to the value of a Wilson loop along the boundary of the BZ. This value is not quantized when inversion symmetry is broken, and, for large values of \(V_{\mathcal{I}}\), this contribution becomes significant enough that the polarization response clearly deviates from the result one would expect from a naive interpretation of Eq. 55. However, the mutual response between the electromagnetic and translation gauge fields described by this action remains valid. This subtlety is not the focus of our current article, so we leave further discussions to future work.
### 2D Dirac quadrupole semimetal
Next, we consider the class of 2D semimetallic phases characterized by the quadrupole moment of the Berry curvature introduced in Section III.1.2. We know from Section III.1.2 that the low-energy effective response action for this system takes the form:
\[S=\frac{\hbar}{8\pi}\mathcal{Q}_{\alpha\beta}\int\mathfrak{e}^{\alpha}\wedge d \mathfrak{e}^{\beta}. \tag{60}\]
This action generates a momentum current response
\[\mathcal{J}_{\alpha}^{\mu}=-\frac{\hbar}{4\pi}\mathcal{Q}_{\alpha\beta} \epsilon^{\mu\nu\sigma}\partial_{\nu}\mathfrak{e}_{\sigma}^{\beta} \tag{61}\]
Figure 4: (a) Plot of the Berry curvature across the 2D Brillouin zone for the Dirac node dipole semimetal model (57) for \(m=1.1\) with an added inversion-breaking perturbation with \(V_{\mathcal{I}}=-0.5\). We use this model to probe the \(k_{x}\) momentum density response. For that we consider a completely periodic system and insert the magnetic flux \(\Phi_{i}\) through two lines of plaquettes such that the translational symmetry along the \(\hat{x}\)-direction is preserved, as shown in panel (b). (c) shows the \(k_{x}\) momentum density localized around one line of plaquettes penetrated by the magnetic field \(B_{z}\) as a function of magnetic flux. (d) shows the \(k_{x}\) momentum density as a function of Berry curvature dipole moment \(\mathcal{P}_{x}\) defined in Eq. (9) which we tune in our model by varying the parameter \(m\) between \(m=1.0\) and \(m=1.5\). In (e) and (f) we show analogous calculations for the charge density response to a translation flux with Burgers vector in the \(x\)-direction as a function of (e) translation flux at fixed Berry curvature dipole, and (f) Berry curvature dipole at fixed translation flux. The open circles in (e) represent Burgers’ vector choices that are not integer multiples of a lattice constant. The red dashed lines in (c)-(f) are guides to the eye indicating a slope of 1.
These currents describe both a bulk momentum polarization (e.g., yielding momentum on the boundary where \(\mathcal{Q}_{\alpha\beta}\) changes), and a bulk energy-momentum response to translation gauge fields. We note that this response is exactly analogous to that of the Dirac node dipole semimetal discussed above if we replace the electromagnetic field with a translation gauge field.
To illustrate and explicitly confirm the responses numerically we use the following 2-band square lattice Bloch Hamiltonian with next-nearest-neighbor hopping terms:
\[\begin{split} H(\mathbf{k})&=V_{\mathcal{T}} \sigma^{x}+\sin(k_{x}a)\sin(k_{y}a)\sigma^{y}\\ &+(m-\cos(k_{x}a)-\cos(k_{y}a))\sigma^{z}.\end{split} \tag{62}\]
This model has an inversion symmetry (i.e., \(C_{2}^{z}\) symmetry) that is realized trivially on-site with \(\mathcal{I}=\mathbb{I}\), mirror symmetry along the \(k_{x}=k_{y}\) axis, and, when \(V_{\mathcal{T}}=0\), time-reversal symmetry \(\mathcal{T}=\sigma^{z}K\). This model can be tuned to a semimetal phase as well, for example, setting \(m=1\) we find four gapless Dirac points located at \((k_{x},k_{y})=(\pm\pi/2a,0)\) and \((k_{x},k_{y})=(0,\pm\pi/2a)\).
To confirm the response action is correct, we first need to calculate the Dirac-node quadrupole moment. To see that the Berry curvature quadrupole moment is well-defined, we first note that the choice of \(V_{\mathcal{T}}\) as a mass perturbation forces \(\mathcal{P}_{\alpha}\) to vanish. We also need the Chern number to vanish, which is guaranteed by the mirror symmetry. With these symmetries, the Berry curvature peaks at Dirac points that are related by inversion symmetry have the same sign, while the peaks related by mirror symmetry carry opposite signs, resulting in a quadrupolar distribution of the Berry curvature, as in Fig. 5(b). Since the Chern number and \(\mathcal{P}_{\alpha}\) both vanish, the quadrupolar distribution is well-defined and signals the presence of a well-defined elastic response in this model (see also Ref. [63]). The diagonal elements of the Dirac-node quadrupole moment of our model are equal and opposite, \(\mathcal{Q}_{xx}=-\mathcal{Q}_{yy}\), and the off-diagonal elements are zero. Since the sign of the Berry curvature flux for 2D Dirac points with \(\mathcal{T}\mathcal{I}\)-symmetry is ambiguous, we once again treat our system in the insulating regime with non-zero \(V_{\mathcal{T}}\) first and then recover the semimetallic case by taking the limit \(V_{\mathcal{T}}\to 0\).
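As a sanity check of this counting, the vectorized sketch below evaluates the plaquette Berry curvature of model (62) and its low-order momentum-space moments. It is a minimal illustration rather than the code behind Fig. 5, and the moments are reported as the bare integrals \(\int d^{2}k\,k_{\alpha}k_{\beta}\mathcal{F}^{xy}\), so their normalization relative to \(\mathcal{Q}_{\alpha\beta}\) as defined in Sec. III.1.2 is left open.

```python
import numpy as np

def occupied_states(m=1.0, V_T=-0.2, N=240):
    """Lower-band eigenvectors of model (62) on an N x N BZ grid."""
    ks = np.linspace(-np.pi, np.pi, N, endpoint=False)
    KX, KY = np.meshgrid(ks, ks, indexing="ij")
    dx = V_T * np.ones_like(KX)
    dy = np.sin(KX) * np.sin(KY)
    dz = m - np.cos(KX) - np.cos(KY)
    H = np.stack([np.stack([dz, dx - 1j*dy], -1),
                  np.stack([dx + 1j*dy, -dz], -1)], -2)
    w, v = np.linalg.eigh(H)
    return KX, KY, v[..., 0]

KX, KY, u = occupied_states()

def link(u, axis):
    """Overlap <u(k)|u(k + dk e_axis)> on the periodic grid."""
    return np.einsum("ijc,ijc->ij", u.conj(), np.roll(u, -1, axis=axis))

U1 = link(u, 0)                              # link along kx
U2 = np.roll(link(u, 1), -1, axis=0)         # link along ky, evaluated at kx + dk
U3 = np.roll(link(u, 0), -1, axis=1).conj()  # reversed kx link at ky + dk
U4 = link(u, 1).conj()                       # reversed ky link
F = np.angle(U1 * U2 * U3 * U4)              # F^{xy}(k) dk^2 per plaquette

print("Chern      ~", F.sum() / (2*np.pi))                        # ~ 0
print("dipole     ~", (KX*F).sum(), (KY*F).sum())                 # both ~ 0
print("quadrupole ~", (KX**2*F).sum(), (KY**2*F).sum(), (KX*KY*F).sum())
# expect the xx and yy moments equal and opposite, and the xy moment ~ 0
```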
Using this model, let us first focus on the momentum polarization response and highlight the difference with the 2D Dirac node dipole semimetal case from Section IV.1. If the bulk has a momentum polarization we expect translation-symmetric edges to have a bound momentum density. We will first make a general argument for the existence of the boundary momentum and then confirm the results numerically for our model. Let us suppose our system has a boundary normal to the \(y\)-direction. We expect such a boundary will carry \(k_{x}\) momentum if \(\mathcal{Q}_{xx}\neq 0\). To show this, let us make a gauge transformation on the fields in Eq. 60: \(\mathfrak{e}^{a}_{\mu}\to\mathfrak{e}^{a}_{\mu}+\partial_{\mu}\lambda^{a}\) for some vector function \(\lambda^{a}\). Since there is a boundary, the response action is not gauge invariant and we find the boundary variation \(\delta_{\lambda}S=-\frac{\hbar\mathcal{Q}_{xx}}{8\pi}\lambda^{x}(\partial_{0}\mathfrak{e}^{x}_{\nu}-\partial_{\nu}\mathfrak{e}^{x}_{0})\), with \(\nu\) running over the boundary directions. Our system has no translation-twisting of the boundaries, i.e., \(\mathfrak{e}^{y}_{x}=\mathfrak{e}^{x}_{y}=0\), so we find the variation reduces to \(\delta_{\lambda}S=-\frac{\hbar\mathcal{Q}_{xx}}{8\pi}\lambda^{x}(\partial_{0}\mathfrak{e}^{x}_{x}-\partial_{x}\mathfrak{e}^{x}_{0})\). This variation can be canceled by adding an action of the form Eq. 34. That is, we expect to have 1D degrees of freedom on the boundary that harbor a non-vanishing \(k_{x}\)-momentum density captured by an effective 1D quadrupole moment \(\mathcal{Q}_{xx}\) that matches the value of the 2D quadrupole moment. Interestingly, we note that the coefficient of Eq. 34 is twice that of the variation we need to cancel. Hence, the edge of our system has a fractional momentum density, i.e., a 1D system with the same \(\mathcal{Q}_{xx}\) would have twice as much momentum. This is analogous to the fractional boundary charge density one finds from the half-quantized electric charge polarization.
We confirm this response numerically by studying the model (62) on a lattice in a ribbon geometry that is open in the \(\hat{y}\)-direction and periodic in \(\hat{x}\). Figure 5 (a) shows the resulting band structure, for which a gap is opened by a non-vanishing \(V_{\mathcal{T}}\) and the occupied states have two symmetrically positioned sets of flat band states: one in an interval having \(k_{x}<0\) and the other in an interval having \(k_{x}>0\). The occupied boundary states with \(k_{x}<0\) (red) are localized near the top (\(y=N_{y}\)) boundary, while the occupied boundary states with \(k_{x}>0\) (blue) are localized near the bottom (\(y=1\)) boundary. At half filling we find that the excess/deficit charge near the boundary depends on \(k_{x}\) as shown in Fig. 5(c). We see that the states at positive and negative \(k_{x}\) are imbalanced, indicating a non-vanishing \(k_{x}\) momentum density on the edge. Indeed, each state between the Dirac nodes contributes an amount to the total edge momentum equal to \(k_{x}\) weighted by a factor of \(\pm 1/2\), since the particle density on the edge at each \(k_{x}\) in this range is \(\pm 1/2\). Because states at opposite \(k_{x}\) have opposite excess/deficit probability density, the total sum is non-vanishing and depends on \(\mathcal{Q}_{xx}\) as shown in Fig. 5(f). We find that the bulk momentum polarization \(P^{y}_{k_{x}}=\frac{\hbar\mathcal{Q}_{xx}}{8\pi}\) matches the numerically calculated boundary momentum density, as expected for a generalized surface charge theorem [74]. To further probe the response equations, we subject the Dirac node quadrupole semimetal to the same linear array of dislocations employed in the previous subsection (c.f. Fig. 4(b)). From Eq. 61, we expect to find momentum density localized on dislocations. Since our geometry preserves translation in the \(\hat{x}\)-direction, we can compute the amount of \(k_{x}\) momentum bound to dislocations, similar to how we computed the amount of charge bound to dislocations in the previous subsection. We show our results in Fig. 5(d)(e) where we first plot momentum density as a function of \(\mathcal{Q}_{xx}\) for fixed translation flux \(\mathcal{B}^{x}_{z}\), and then plot momentum density as a function of \(\mathcal{B}^{x}_{z}\) for fixed \(\mathcal{Q}_{xx}\). Both results match the analytic value from the response action.
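A minimal version of the ribbon calculation behind these statements is sketched below: it builds model (62) (optionally including the tilt of Eq. (64)) as a block-tridiagonal matrix at fixed \(k_{x}\), open in \(\hat{y}\) and periodic in \(\hat{x}\), and plots the spectrum. The Fourier convention for the hopping block (and hence which edge hosts which set of flat bands) and the parameter values are our assumptions, chosen to mimic the quoted Fig. 5 parameters.

```python
import numpy as np
import matplotlib.pyplot as plt

sx = np.array([[0, 1], [1, 0]], complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], complex)

def ribbon_h(kx, Ny=60, m=1.0, V_T=-0.2, eps=0.1):
    """Model (62) plus the tilt of Eq. (64) on a ribbon open in the y-direction."""
    onsite = V_T*sx + (m - np.cos(kx))*sz + eps*np.sin(kx)*np.eye(2)
    hop = -0.5j*np.sin(kx)*sy - 0.5*sz      # coefficient of e^{+i k_y} (convention choice)
    H = np.zeros((2*Ny, 2*Ny), complex)
    for y in range(Ny):
        H[2*y:2*y+2, 2*y:2*y+2] = onsite
        if y + 1 < Ny:
            H[2*y:2*y+2, 2*y+2:2*y+4] = hop
            H[2*y+2:2*y+4, 2*y:2*y+2] = hop.conj().T
    return H

kxs = np.linspace(-np.pi, np.pi, 201)
bands = np.array([np.linalg.eigvalsh(ribbon_h(kx)) for kx in kxs])

plt.plot(kxs, bands, "k-", lw=0.3)
plt.xlabel(r"$k_x$"); plt.ylabel("E")
plt.show()
```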
Finally, let us briefly consider a case when the mixed energy-momentum quadrupole moment \(\mathcal{Q}_{ta}\) is non-vanishing. In this scenario the effective action (60) implies the existence of a bulk orbital _momentum magnetization_ of
\[M^{z}_{\tilde{k}_{\mu}}=-\frac{\hbar}{8\pi}\mathcal{Q}_{t\mu}, \tag{63}\]
that will manifest as boundary momentum currents, even in equilibrium (note we assume \(\mathfrak{e}^{t}_{t}=1\)). To generate a non-vanishing \(\mathcal{Q}_{t\mu}\) in our model (62), we turn on an additional perturbation
\[\Delta H(\mathbf{k})=\epsilon\sin(k_{x})\mathbb{I}_{2\times 2}. \tag{64}\]
When \(m=1\) and \(V_{\mathcal{T}}\to 0_{-}\), this induces \(\mathcal{Q}_{tx}=-\pi\epsilon\) and \(\mathcal{Q}_{tt}=\epsilon^{2}\), leading to momentum \(k_{x}\) magnetization, \(M^{z}_{\tilde{k}_{x}}=-\frac{\hbar}{8\pi}\mathcal{Q}_{tx}\), and bulk energy magnetization, \(M^{z}_{\tilde{k}_{t}}=-\frac{\hbar}{8\pi}\mathcal{Q}_{tt}\), following from Eq. (63). In Fig. 5(g) we plot the boundary energy current response \(\Delta\mathcal{J}^{x}_{t}\) as a function of \(\mathcal{Q}_{tt}.\) We calculate this quantity by summing the particle current \(\frac{1}{\hbar}\frac{\partial H}{\partial k_{x}}\) weighted by the energy \(\epsilon(k)\) of each state. The slope of the plot confirms the coefficients predicted in Eq. (63).
### 3D nodal line dipole semimetal
Heuristically we can consider nodal 3D semimetals as arising from stacks of 2D Dirac node dipole semimetals. Furthermore, similar to the 2D case, with inversion symmetry the bulk response action
\[S[\mathfrak{e}^{\lambda},A]=e\mathcal{B}_{\mu\nu}\int\mathfrak{e}^{\mu}\wedge \mathfrak{e}^{\nu}\wedge dA \tag{65}\]
can be interpreted as a charge magnetization \(M_{i}\) and electric polarization \(P^{i}_{e}\):
\[e\mathcal{B}_{ta}=M^{i}\mathfrak{e}^{a}_{i},\quad e\mathcal{B}_{ab}=\varepsilon _{ijk}P^{k}_{e}\mathfrak{e}^{a}_{i}\mathfrak{e}^{b}_{j} \tag{66}\]
where we have taken functional derivatives of Eq. 65 with respect to the magnetic and electric fields respectively, and used \(\mathfrak{e}^{t}_{t}=1.\) For an unmodified geometry we recover the results of Ref. [35], i.e.,
\[e\mathcal{B}_{ta}=M^{a},\,\,\,e\mathcal{B}_{ab}=\varepsilon_{abk}P^{k}_{e}. \tag{67}\]
Microscopically, the coefficient \(\mathcal{B}_{ab}\), where \(a,b=1,2,3\), is proportional to the area of the line nodes that project onto surfaces normal to the \(ab\)-plane as illustrated in Fig. 6(a).
The bulk action also implies a non-vanishing momentum response to electromagnetic fields:
\[\mathcal{J}^{\mu}_{\lambda}=2e\mathcal{B}_{\lambda\eta}\varepsilon^{\mu\nu \rho\sigma}\mathfrak{e}^{\eta}_{\nu}\partial_{\rho}A_{\sigma}, \tag{68}\]
and a conjugate electromagnetic response to translation gauge fields:
\[j^{\mu}=2e\mathcal{B}_{\lambda\eta}\varepsilon^{\mu\nu\rho\sigma}\mathfrak{e} ^{\lambda}_{\nu}\partial_{\rho}\mathfrak{e}^{\eta}_{\sigma}. \tag{69}\]
To illustrate how these responses manifest in an explicit model, we can construct a Hamiltonian for a 3D nodal line dipole semimetal by stacking copies of the 2D Dirac node dipole semimetal in Eq. (57) in the \(\hat{z}\)-direction. When there is no hopping between the 2D layers, such a system will have two lines of gapless states spanning the BZ along the \(k_{z}\) direction, located at \((k_{x},k_{y})=(\pm K,0)\) (for our model). Adding hopping terms in the \(\hat{z}\)-direction leads to a Bloch Hamiltonian:
\[\begin{split} H(\mathbf{k})&=V_{\mathcal{I}}\sigma ^{x}+\sin(k_{y}a_{y})\sigma^{y}\\ &+(m-\cos(k_{x}a_{x})-\cos(k_{y}a_{y})-\cos(k_{z}a_{z}))\sigma^{z }.\end{split} \tag{70}\]
Taking \(V_{\mathcal{I}}\to 0\) and \(m=2\), we find a single loop of gapless states located in the \(k_{y}=0\) plane, described by the equation \(\cos(k_{x}a_{x})+\cos(k_{z}a_{z})=1\). The stack of 2D Dirac node dipole semimetals will naturally endow the 3D nodal line system with electric polarization (and/or magnetization). Correspondingly, this model has a single non-zero component of the antisymmetric tensor \(\mathcal{B}_{xz}\) defined in Eq. (37), which encodes a charge polarization in the \(\hat{y}\)-direction. From Eq. 68, a non-vanishing \(\mathcal{B}_{xz}\) also implies a \(k_{x}\) momentum line-density localized on a magnetic flux tube oriented in the \(\hat{z}\)-direction:
\[\mathcal{J}^{t}_{x}=2e\mathcal{B}_{xz}\varepsilon^{tzij}\mathfrak{e}^{z}_{z}\partial_{i}A_{j}=2e\mathcal{B}_{xz}B^{z}, \tag{71}\]
similar to a stack of decoupled 2D Dirac semimetallic layers (in the last equality we replaced \(\mathfrak{e}^{z}_{z}=1\)). This is the 3D analog of the response shown in Figs. 4(c) and (d) for the 2D Dirac semimetal.
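The geometric data entering \(\mathcal{B}_{xz}\) can be read off directly from the lattice model. The short sketch below locates the band-inverted region of model (70) in the \(k_{y}=0\) plane and measures the area of the nodal loop projected onto the \(k_{x}k_{z}\)-plane; the constant of proportionality between this area and \(\mathcal{B}_{xz}\) is set by the definition in Eq. (37), which we do not reproduce here.

```python
import numpy as np

m = 2.0
N = 1200
ks = np.linspace(-np.pi, np.pi, N, endpoint=False)
KX, KZ = np.meshgrid(ks, ks, indexing="ij")

# For V_I -> 0 the gap of model (70) closes in the k_y = 0 plane where
# cos(kx) + cos(kz) = m - 1; the band-inverted region |m - cos kx - cos kz| < 1
# is the interior of the nodal loop projected onto the kx-kz plane.
inverted = np.abs(m - np.cos(KX) - np.cos(KZ)) < 1.0

area = inverted.mean() * (2*np.pi)**2
print("projected nodal-loop area ~", round(area, 4), "(units of 1/(a_x a_z))")
print("fraction of the kx-kz BZ  ~", round(inverted.mean(), 4))
# B_xz is proportional to this area, with the prefactor fixed by Eq. (37).
```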
We can see an example of a charge response if we tilt the nodal line to introduce a non-zero value of \(\mathcal{B}_{tz}\) as illustrated in Fig. 6(a). In our model we can tilt the node by adding an extra dispersion
\[\Delta H(\mathbf{k})=\epsilon\sin(k_{x}a_{x})\mathbb{I}_{2\times 2}, \tag{72}\]
to the Hamiltonian. This term breaks \(\mathcal{T}\) and induces a net magnetization \(M_{z}=e\mathcal{B}_{tz}\), setting up the corresponding circulating boundary currents in the system [35].
Now, when \(\mathcal{B}_{tz}\) is non-vanishing, Eq. 69 implies that a screw dislocation with Burgers vector \(b^{z}\hat{z}\) hosts a bound electromagnetic current. Indeed, if we assume the screw dislocation is located at \((x,y)=(0,0)\) and runs along the \(z\)-axis we find
\[j^{z}=-2e\mathcal{B}_{tz}\varepsilon^{tzjk}\mathfrak{e}^{t}_{t}\partial_{j} \mathfrak{e}^{z}_{k}=-2e\mathcal{B}_{tz}b^{z}\delta(x)\delta(y), \tag{73}\]
where we used \(\mathfrak{e}^{t}_{t}=1\) and \(\nabla\times\mathfrak{e}^{z}=b^{z}\delta(x)\delta(y)\).
We can illustrate the origin of this current by considering the magnetization \(M_{z}\) (and associated boundary currents) induced by \(\mathcal{B}_{tz}.\) A screw dislocation with Burgers vector \(b^{z}\hat{z}\) can be constructed by cutting a seam into layers normal to \(\hat{z}\) and re-gluing them along the seams with neighboring layers above or below. When cut, the boundary current associated to \(M_{z}\) will appear, and after re-gluing this current will be routed vertically along the screw-dislocation line, i.e., along the \(z\)-direction as shown in Fig. 6 (b). The magnetization \(M_{z}\) gives rise to a surface bound current \(j_{\partial}=M_{z}\) circulating around
the \(\hat{z}\)-axis in each layer. The effective number of current loops winding around the dislocation line per unit length is equal to the Burgers vector \(b^{z}\). Thus the total current in the \(\hat{z}\)-direction is:
\[j^{z}=-b^{z}j_{\partial}=-2e\mathcal{B}_{tz}b^{z}, \tag{74}\]
which reproduces the result obtained from the response action. Furthermore, we can understand the sign of the current from Fig. 6(b) where we see that the current on the dislocation has an opposite orientation to the current generated by \(M_{z}.\) Another interesting consequence of Eq. (65) is the topological piezoelectric effect discussed in Ref. [50].
### 3D nodal line quadrupole semimetal
In Sec. III.3, we derived the effective response action:
\[S[\mathfrak{e}^{\lambda}]=\hbar\mathcal{B}_{\lambda\eta,\alpha}\int\mathfrak{e}^{\lambda}\wedge\mathfrak{e}^{\eta}\wedge d\mathfrak{e}^{\alpha}.\]
for the nodal line quadrupole semimetal. The response action implies the energy-momentum currents:
\[\mathcal{J}_{\lambda}^{\mu}=2\hbar\left(\mathcal{B}_{\lambda\eta,\alpha}-\mathcal{B}_{\eta\alpha,\lambda}\right)\varepsilon^{\mu\nu\rho\sigma}\mathfrak{e}_{\nu}^{\eta}\partial_{\rho}\mathfrak{e}_{\sigma}^{\alpha}, \tag{75}\]
where we have used that \(\mathcal{B}_{\lambda\eta,\alpha}\) is anti-symmetric under exchange of the first two indices.
In analogy with the 2D Dirac node dipole and Dirac node quadrupole semimetals, we expect that most of the responses from the Dirac nodal line dipole semimetal in Sec. IV.3 can be translated to describe some of the responses of this action if we replace charge currents and densities with momentum currents and densities etc. Indeed, we showed in Eq. 39 that when \(\lambda\) and \(\eta\) are both spatial indices, \(\mathcal{B}_{\lambda\eta,\lambda}\) implies a momentum polarization
Figure 6: (a) Fermi line of a 3D NLSM (70) with \(V_{\mathcal{I}}=0\), \(m=2\) that is tilted in the energy-momentum space \(\{k_{z},k_{x},E\}\) by the perturbation (72) where we set \(\epsilon=1\). The projections of this curve onto the \(\{k_{x},k_{z}\}\) and \(\{k_{z},E\}\) planes give the exact values of the \(\mathcal{B}_{xz}\) and \(\mathcal{B}_{tz}\) coefficients respectively. (b) A screw dislocation characterized by a Burgers’ vector \(b^{z}=a_{z}\) creates an internal boundary carrying a current circulating around the magnetization vector \(M_{z}\). Note that the currents’ direction is perpendicular to the Burgers’ vector and the magnetization vector \(M_{z}\), as predicted by Eq. 73.
Figure 5: (a) Spectrum of the 2D Dirac node quadrupole semimetal (62) in a ribbon geometry (\(y\)-direction open, \(x\)-direction periodic) for \(m=1\), the \(\mathcal{T}\)-breaking perturbation set to \(V_{\mathcal{T}}=-0.2\), and the energy tilt in Eq. 64 set to \(\epsilon=0.1\). At half filling, the ground state of the model is momentum-polarized: occupied states localized near \(y=1\), which are indicated by the blue color, carry a positive value of the \(k_{x}\) momentum, while the occupied states near \(y=N_{y}\) have a negative value of \(k_{x}\). (b) Berry curvature distribution across the Brillouin zone for a small gapping perturbation \(V_{\mathcal{T}}=-0.2\). (c) The boundary charge distribution as a function of momentum. (d), (e) \(k_{x}\) momentum bound to a row of dislocations (c.f. Fig. 4(b)) as a function of \(\mathcal{Q}_{xx}\) at fixed \(\mathcal{B}_{z}^{x}\) in (d) and as a function of \(\mathcal{B}_{z}^{x}\) at fixed \(\mathcal{Q}_{xx}\) in (e). (f) Plot of momentum polarization \(P_{k_{x}}^{y}\) obtained from computing \(k_{x}\)-momentum bound to an edge normal to \(\hat{y}.\) (g) As a consequence of non-zero \(\epsilon\) we see that the velocities of single-particle states in (a) localized on opposite edges have the same sign, while the energy and \(k_{x}\) momentum charges are exactly opposite. This leads to boundary energy currents as illustrated in panel (g) as a function of \(\mathcal{Q}_{tt}.\)
in a direction perpendicular to \(\lambda\) and \(\eta\), and carrying momentum parallel to \(\lambda\). By analogy, the mixed temporal-spatial components \(\mathcal{B}_{it,j}\) describe a momentum magnetization in the \(i\)-th direction carrying momentum in the \(j\)-th direction. The momentum magnetization is further responsible for generating bound-currents on screw-dislocations, i.e., the momentum magnetization will have circulating boundary momentum currents and a momentum current along screw dislocations similar to the charge bound currents on dislocations shown in Section IV.3.
To be more explicit, we can illustrate the momentum polarization in a model by showing the analog of the surface charge theorem, i.e., momentum polarization will yield surface momentum densities. To obtain a Hamiltonian for the nodal line quadrupole semimetal, we begin by stacking 2D Dirac node quadrupole semimetals (see Fig. 5 (b)) along the \(\hat{z}\)-direction. When the planes are completely decoupled, this construction produces a set of four straight Fermi lines stretching in the \(k_{z}\)-direction. If we couple the two-dimensional planes, then we arrive at the following Bloch Hamiltonian:
\[\begin{split} H(\mathbf{k})&=V_{\mathcal{T}} \sigma^{x}+\sin(k_{x}a)\sin(k_{y}a)\sigma^{y}\\ &+(m-\cos(k_{x}a)-\cos(k_{y}a)-\cos(k_{z}a_{z}))\sigma^{z}.\end{split} \tag{76}\]
For a wide range of parameters this model has a pair of nodal line loops that form a cage structure as shown in Figs. 2 and 7 with \(m=2\) and \(V_{\mathcal{T}}=0.\) In general, the local gaplessness of the nodal loops can be protected by the product \(\mathcal{TI}.\) The cage structure created by the joined, intersecting loops can be split apart by, for example, breaking mirror symmetry along the \(k_{x}=k_{y}\) axis while preserving \(\mathcal{TI}\). However, even in this case the nodal loops still produce a non-vanishing contribution to the response coefficient \(\mathcal{B}_{\alpha\beta,\gamma}.\) Hence, the response is more general than the specific cage-like nodal configuration. Calculating the response coefficient for the action in the limit \(V_{\mathcal{T}}\to 0_{-}\), we find that \(\mathcal{B}_{xz,x}=-\mathcal{B}_{zx,x}\) and \(\mathcal{B}_{yz,y}=-\mathcal{B}_{zy,y}\) are non-vanishing, as shown in Fig. 7.
Using this model we can illustrate the origin of the boundary momentum resulting from the bulk momentum polarization. The discussion is analogous to the calculation of the boundary momentum of the 2D Dirac node quadrupole semimetal in Sec. IV.2. Indeed, the analogy is clear since the cage nodal structure is just arising from a family of 2D Dirac node quadrupoles parameterized by \(k_{z}.\) To specify an unambiguous momentum polarization we turn on a small \(\mathcal{T}\)-breaking perturbation \(V_{\mathcal{T}}.\) After doing this, and as shown in Fig. 7, we see that the two nodal loop segments that lie in the \(k_{y}=0\) plane (one for \(k_{x}>0\) and one for \(k_{x}<0\)) carry the same Berry flux in the \(k_{z}\)-direction (red arrows in Fig. 7). Similarly, the two loop segments in the \(k_{x}=0\) plane carry the same Berry flux (blue arrows), which is opposite to that carried by the \(k_{y}=0\) segments. Consequently, the loop segment in the \(k_{y}=0\), \(k_{x}>0\) half-plane must connect with a loop segment in the \(k_{x}=0\) plane in order to form a closed nodal loop with a consistent helicity/flux sign.
To clarify the consequences of this nodal configuration let us consider the \(k_{x}k_{z}\) plane in Fig. 7. We can calculate a Berry-Zak phase [25] in the \(k_{y}\) direction parameterized by \((k_{x},k_{z})\), and for our model we find a Berry phase of magnitude \(\pi\) inside the projected nodal region in the \(k_{x}k_{z}\) plane. When \(V_{\mathcal{T}}\) is turned on, the sign of the \(\pi\) Berry-Zak phases are no longer ambiguous, and are opposite for the projected areas at \(k_{x}>0\) and \(k_{x}<0.\) If we calculate the total polarization in the \(y\)-direction when summed over all \(k_{x}\) and \(k_{z}\) it will vanish. However, the polarization weighted by the \(k_{x}\) momentum will be non-zero. The occupied drumhead surface states in the \(k_{x}k_{z}\) surface-BZ (see Fig. 7 and c.f. Fig. 5(a,b,c)) will have an imbalanced \(k_{x}\) momentum, but, when combined with the bulk charge density, a vanishing charge (c.f. Fig. 5(c)). This is a reflection of the surface charge theorem for a vanishing charge polarization, and non-vanishing momentum polarization. We numerically calculated the magnitude of the bound surface momentum, finding it to be in agreement with the value predicted by the response action, \(2\hbar\mathcal{B}_{xz,x}.\) We see from this picture that to have a non-zero response \(\mathcal{B}_{xz,x},\) we want two oppositely oriented nodal loops with identical, non-vanishing areas when projected in the \(k_{x}k_{z}\)-plane, but positioned so that the sums of all \(k_{x}\) inside each nodal loop are different from each other, e.g., in our model they are opposite values.
As an additional explicit example of a non-vanishing response allowed in our model we can consider the
Figure 7: Fermi lines of the model (76) with \(m=2\) and \(V_{\mathcal{T}}\to 0_{-}\). Resolving this structure as a pair of loops with _fixed orientation_ we can project them onto the \(k_{x}k_{z}\) or \(k_{y}k_{z}\) surfaces to determine the momentum polarization. The colored regions of the projected nodes indicate flat drumhead states that would appear in open boundary conditions on one boundary (red) or the opposing boundary (blue). By looking at the relative positions of the two areas bounded by the projected loops in the surface BZ, we see that one surface will have one sign of the \(k_{x}\) or \(k_{y}\) momentum, and the other surface will have the other. For example, for the \(k_{x}k_{z}\) surface BZ the projections indicate a dipole moment of \(k_{x}\) momentum polarized along the \(y\) direction captured by the response coefficient \(\mathcal{B}_{zx,x}\). Inset: Cage-like nodal Fermi surface in the model (76) with \(E_{F}=0.2\).
momentum density
\[\mathcal{J}_{x}^{0}=2\hbar\mathcal{B}_{xz,x}\epsilon^{ijk}(2\mathfrak{e}_{i}^{z}\partial_{j}\mathfrak{e}_{k}^{x}-\mathfrak{e}_{i}^{x}\partial_{j}\mathfrak{e}_{k}^{z}) \tag{77}\]
generated by a geometric deformation. To generate a non-vanishing response let us consider an \(xz\)-planar interface. Since we must preserve translation symmetry along \(x\) to calculate \(k_{x}\) momentum, and we want to preserve translation in \(z\) for convenience, we have the following terms:
\[\mathcal{J}_{x}^{0}=2\hbar\mathcal{B}_{xz,x}\left(2\mathfrak{e}_{x}^{z}\partial_{y}\mathfrak{e}_{z}^{x}-2\mathfrak{e}_{z}^{z}\partial_{y}\mathfrak{e}_{x}^{x}-\mathfrak{e}_{x}^{x}\partial_{y}\mathfrak{e}_{z}^{z}+\mathfrak{e}_{z}^{x}\partial_{y}\mathfrak{e}_{x}^{z}\right).\]
If we cut the system at \(y=0\), both sides of the interface will carry a surface \(k_{x}\)-momentum density \(\mathcal{J}_{x,surf}^{0}=\pm 2\hbar\mathcal{B}_{xz,x}\), since the system has a \(k_{x}\) momentum polarization along \(\hat{y}\) with this magnitude. Since each interface carries an opposite sign of the momentum density, if we glue them back together there will be no momentum at the interface. Now, for \(y>0\) let us perturb away from the background translation gauge field configuration to \(\mathfrak{e}_{i}^{a}=(1+\epsilon^{a})\delta_{i}^{a}\) where \(\epsilon^{a}=(\epsilon^{x},\,0,\,\epsilon^{z})\) is a small deformation. The momentum density response to leading order in \(\epsilon^{x}\) is
\[\mathcal{J}_{x}^{0}=2\hbar\mathcal{B}_{xz,x}\left[-2\epsilon^{x}\delta(y)- \epsilon^{z}\delta(y)\right], \tag{78}\]
which we see is localized at the interface \(y=0\).
We can interpret this response by noting that changing \(\mathfrak{e}_{x}^{x}\) or \(\mathfrak{e}_{z}^{z}\) effectively changes the area of one side of the interface (\(y>0\)) relative to the other (\(y<0\)). Since the total \(k_{x}\) momentum on both sides of the interface should be unchanged by this deformation (we maintain translation symmetry in \(x\) during the process), then increasing the area for \(y>0\) must lower the momentum _density_. Indeed, the surface \(k_{x}\) momentum density on \(\hat{y}\) surfaces must be inversely proportional to \(L_{x}\) and \(L_{z}\). Finally, since we are considering \(k_{x}\)-momentum density, the quantization of which depends on \(L_{x}^{-1}\), \(\mathcal{J}_{x}^{0}\) actually depends on \(L_{x}^{-2}\), hence the difference between the coefficients of \(\epsilon^{x}\) and \(\epsilon^{z}\) in Eq. 78.
### 3D Weyl node dipole semimetal
The electromagnetic and geometric response of time-reversal breaking 3D Weyl semimetals has been discussed extensively in the literature [15; 16; 17; 18; 19; 20; 21; 23; 33; 59; 60; 75; 76; 77; 78; 79; 80; 81; 82; 83; 84; 85]. Here we focus on a few particular consequences of the mixed crystalline-electromagnetic response and the matching between the response field theory and microscopic lattice model calculations. Recall that the response action for a 3D Weyl semimetal with a non-vanishing Weyl-node dipole moment \(\mathcal{P}_{\lambda}\) is
\[S[\mathfrak{e}_{\nu}^{\lambda},A_{\mu}]=\frac{e^{2}\mathcal{P}_{\lambda}}{8 \pi^{2}\hbar}\int\mathfrak{e}^{\lambda}\wedge A\wedge dA. \tag{79}\]
This response implies the following bulk electromagnetic and momentum currents:
\[j^{\mu}=-\frac{e^{2}\mathcal{P}_{\lambda}}{4\pi^{2}\hbar}\varepsilon^{\mu\nu \rho\sigma}\mathfrak{e}_{\nu}^{\lambda}\partial_{\rho}A_{\sigma}+\frac{e^{2} \mathcal{P}_{\lambda}}{8\pi^{2}\hbar}\varepsilon^{\mu\nu\rho\sigma}A_{\nu} \partial_{\rho}\mathfrak{e}_{\sigma}^{\lambda}\;, \tag{80}\]
\[\mathcal{J}_{\lambda}^{\mu}=\frac{e^{2}\mathcal{P}_{\lambda}}{8\pi^{2}\hbar} \epsilon^{\mu\nu\rho\sigma}A_{\nu}\partial_{\rho}A_{\sigma}. \tag{81}\]
In the presence of dislocations the translational flux is non-vanishing, and hence the bulk electromagnetic current is anomalous:
\[\partial_{\mu}j^{\mu}=-\frac{e^{2}\mathcal{P}_{\lambda}}{8\pi^{2}\hbar} \varepsilon^{\mu\nu\sigma\rho}\partial_{\mu}\mathfrak{e}_{\nu}^{\lambda} \partial_{\sigma}A_{\rho}. \tag{82}\]
This reflects the fact that the action Eq. 79 is not gauge-invariant in the presence of dislocations. Indeed, in our explicit tight-binding model calculations below we find the spectrum on a single screw dislocation line contains a pair of chiral modes of the same chirality (one near each bulk Weyl node momentum as shown in Fig. 9(b)). These modes are responsible for the anomalous current on dislocation lines, as was first described by Ref. [38].
To verify the electromagnetic response to the applied crystalline gauge field we consider a simple 2-band model of a 3D Weyl semimetal with a pair of gapless nodes:
\[\begin{split}H(\mathbf{k})&=\sin(k_{z}a_{z})\sigma^{x}+\sin(k_{y}a_{y})\sigma^{y}\\ &+(2-m-\cos(k_{x}a_{x})-\cos(k_{y}a_{y})-\cos(k_{z}a_{z}))\sigma^{z}.\end{split} \tag{83}\]
The Weyl node with the positive chirality \(\chi=+1\) is located at \(\mathbf{k}=(\arccos(-m),\,0,\,0)\) and the node with \(\chi=-1\) is at \(\mathbf{k}=(-\arccos(-m),\,0,\,0)\). The Weyl node dipole moment therefore has only one non-zero component \(\mathcal{P}_{x}=2\arccos(-m)\) and the resulting response action is
\[S[\mathfrak{e}_{\nu}^{\lambda},A_{\mu}]=\frac{e^{2}\mathcal{P}_{x}}{8\pi^{2}\hbar}\int d^{4}x\,\varepsilon^{\mu\nu\rho\sigma}\mathfrak{e}_{\mu}^{x}A_{\nu}\partial_{\rho}A_{\sigma}. \tag{84}\]
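Before turning to the responses themselves, the nodal data that fixes \(\mathcal{P}_{x}\) can be checked directly. The sketch below scans the BZ of model (83) for gap closings and compares them with \(k_{x}=\pm\arccos(-m)\); the value \(m=0.5\) is an illustrative choice of ours, not necessarily the one used in the figures.

```python
import numpy as np

def gap(kx, ky, kz, m):
    """Direct gap of the two-band Weyl node dipole model (83) (lattice constants set to 1)."""
    dx = np.sin(kz)
    dy = np.sin(ky)
    dz = 2 - m - np.cos(kx) - np.cos(ky) - np.cos(kz)
    return 2*np.sqrt(dx**2 + dy**2 + dz**2)

m = 0.5
ks = np.linspace(-np.pi, np.pi, 101)
KX, KY, KZ = np.meshgrid(ks, ks, ks, indexing="ij")
G = gap(KX, KY, KZ, m)

print("near-gapless points on the grid:")
for i, j, k in np.argwhere(G < 0.05):
    print("  k =", np.round([ks[i], ks[j], ks[k]], 3), " gap =", round(G[i, j, k], 3))

K_node = np.arccos(-m)
print("expected nodes at kx = +/-", round(K_node, 3), ", ky = kz = 0")
print("Weyl node dipole  P_x = 2*arccos(-m) =", round(2*K_node, 3))
```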
Let us first consider the response arising from the constant background translation fields \(\mathfrak{e}_{x}^{x}=1\) and \(\mathfrak{e}_{y}^{x}=b^{x}/L_{y}\), which describe a twist such that a particle traversing the lattice in the \(y\)-direction translates by \(b^{x}\) in the \(x\)-direction. We note that such a configuration is volume preserving since \(\det(\mathbf{e})=1\), where the matrix \(\mathbf{e}\) has matrix elements \(e_{ij}=\mathfrak{e}_{i}^{j}.\) When \(b^{x}=0\) the response action is
\[\frac{e^{2}\mathcal{P}_{x}}{8\pi^{2}\hbar}\int\mathfrak{e}_{x}^{x}dx\int dtdydz \epsilon^{x\nu\rho\sigma}A_{\nu}\partial_{\rho}A_{\sigma}.\]
Using the relation \(\int dx\mathfrak{e}_{x}^{x}=L_{x}\) we find an anomalous Hall effect in the \(yz\)-plane such that \(\sigma_{yz}=\frac{e^{2}}{h}\frac{\mathcal{P}_{x}L_{x}}{2\pi}\), which is the standard result [32; 33]. Now, if we turn on \(b^{x}\) we will still have the same \(\sigma_{yz}\), but we will also have the additional term
\[\frac{e^{2}\mathcal{P}_{x}}{8\pi^{2}\hbar}\int\mathfrak{e}_{y}^{x}dy\int dtdxdz \epsilon^{y\nu\rho\sigma}A_{\nu}\partial_{\rho}A_{\sigma}.\]
Because of the different index on the \(\epsilon\)-symbol, this term represents an anomalous Hall effect in the \(xz\)-plane with \(\sigma_{zx}=\frac{e^{2}}{h}\frac{\mathcal{P}_{x}b_{x}}{2\pi}\). We can find a simple interpretation for this effect: when we turn on \(\epsilon_{y}^{x}\), the minimal coupling \(k_{x}\to k_{x},k_{y}\to k_{y}+k_{x}\epsilon_{y}^{x}\) shifts the bulk Weyl nodes, \((\pm\mathcal{P}_{x}/2,\,0,\,0)\rightarrow(\pm\mathcal{P}_{x}/2,\,\pm\mathcal{P}_{x}b_{x}/(2L_{y}),\,0)\). Hence an effective \(\mathcal{P}_{y}=\frac{\mathcal{P}_{x}b_{x}}{L_{y}}\) is generated when the Weyl momenta are sheared. Indeed, we expect that, at least for uniform, traceless translation gauge field deformations, the response phenomena can be simply interpreted as transformations of the Weyl node dipole \(\mathcal{P}_{i}\rightarrow\epsilon_{i}^{j}\mathcal{P}_{j}\). We show an explicit example of this in the first and third surface-BZ panels of Fig. 8(a) where the bulk nodes and their connected Fermi arcs have been rotated in the deformed geometry relative to the undeformed geometry. We note that if the deformation is not volume-preserving, then we must be careful when considering what is held fixed while volume is changing in order to interpret the resulting phenomena.
In addition to these cases of fixed background translation fields, let us consider varying those fields in space. We are interested in the electromagnetic response to applied translational _magnetic_ fields \(\mathcal{B}_{i}^{a}=\epsilon^{ijk}\partial_{j}\epsilon_{k}^{a}\). Since the nodes in our model are separated in \(k_{x}\), we will consider geometries where the Burgers vector of the translation magnetic field also points along the \(x\)-direction, \(\mathcal{B}_{i}^{x}\neq 0\).
First let us consider a system containing a domain wall as a function of \(z\), such that at \(z=0\) the field \(\epsilon_{y}^{x}\) jumps from \(0\) to \(b_{x}/L_{y}\). For \(z<0\) we have bulk Weyl nodes that project onto the \(z\)-surface at \((\pm\mathcal{P}_{x}/2,0)\), while for \(z>0\) the bulk Weyl nodes have been transformed and sit at \((\pm\mathcal{P}_{x}/2,\pm\mathcal{P}_{x}b_{x}/(2L_{y}))\). We show the numerically calculated Fermi arcs for our un-deformed and deformed models in the left and right surface BZ panels of Fig. 8(a).
Now let us glue the \(z<0\) and \(z>0\) sides to each other to make a domain-wall interface. We schematically illustrate the interface geometry in Fig. 8(b). Since the normal vector on each side of the interface is opposite, we expect the Fermi arcs for \(z<0\) to have the opposite chirality to their corresponding arcs for \(z>0\). Indeed, as shown in the center surface BZ panel of Fig. 8(a), the Fermi arcs on both sides can hybridize because of their opposite chiralities and form new arcs in the 2D subsystem of the interface. These new Fermi arcs encode the fact that the Hall conductivity \(\sigma_{xz}\) is varying at this interface. These effects are all manifestations of the fact that the Weyl node dipole moment \(\mathcal{P}_{i}\) is changing at the interface, and hence we expect Fermi arcs to be trapped generically at the interfaces of this type. We note that a similar strain geometry, and the corresponding Weyl node configuration, was discussed in [49].
From Eq. 80 we see that applying a uniform, non-vanishing \(A_{0}\) to the system described above should generate a charge current in the \(x\)-direction. We can see the microscopic origin of this current as follows. If we increase \(A_{0}\), each linearly-dispersing point on the Fermi arc will have an excess charge density \(\delta n(\mathbf{k})=\frac{eA_{0}}{2\pi\hbar|v_{F}(\mathbf{k})|}\) where \(v_{F}(\mathbf{k})\) is the Fermi velocity at the Fermi arc located at \(\mathbf{k}\) in the surface-BZ. Hence, the contribution to the current of such a point on the Fermi arc is \(j^{x}(\mathbf{k})=ev_{F}(\mathbf{k})\delta n(\mathbf{k})\). For our model and geometry, the contributions to the \(j^{x}\) current that are linear in the deformations of \(\epsilon_{i}^{a}\) arise from the Fermi arcs stretching between \((K,0)\rightarrow(K,K\epsilon_{y}^{x})\) and \((-K,0)\rightarrow(-K,-K\epsilon_{y}^{x})\). Each of these arcs has a fixed value \(k_{x}=\pm K\) and each
Figure 8: (a) The three panels show numerically calculated Fermi arcs in (left) the surface BZ with un-deformed geometry, (right) the surface BZ with \(\epsilon_{y}^{x}\) non-vanishing, and (center) the arcs localized at the interface formed by gluing the two sides of the interface together. The colored circles in the first and third panels represent the surface BZ projections of the bulk Weyl nodes on either side of the interface. The color is a guide to show the connectivity/orientation of the Fermi arcs, not the chirality of the bulk nodes. On both sides of the interface the bulk nodes have the same chirality, but since they are effectively projected onto surfaces having opposite normal vectors they generate Fermi arcs having opposite chirality. (b) Illustrations of (left) the un-deformed geometry and (right) the deformed geometry with \(\epsilon_{y}^{x}\) non-vanishing. (c) The numerically calculated current localized at the interface between un-deformed and deformed geometries as a function of the chemical potential shift \(A_{0}\).
arc has an opposite Fermi velocity. Hence
\[\begin{split}j^{x}&=ev_{F}(K,k_{y})\delta n\frac{K\epsilon_{y}^{x}}{2\pi}+ev_{F}(-K,k_{y})\delta n\frac{K\epsilon_{y}^{x}}{2\pi}\\ &=\frac{e^{2}\mathcal{P}_{x}\epsilon_{y}^{x}A_{0}}{4\pi^{2}\hbar}\text{sgn}(v_{F}),\end{split}\]
where \(K\epsilon_{y}^{x}/2\pi\) counts the density of states on the Fermi arc in the \(k_{y}\) direction, \(\text{sgn}(v_{F})\) is the sign of the velocity on the \(k_{x}=+K\) arc, and \(\mathcal{P}_{x}=2K\) is the undeformed value. This result matches the prediction from the response theory and matches the numerical results in Fig. 8(c) [86].
We can also study a system with a pair of screw dislocation lines. We explicitly insert two screw dislocations at positions \((y,z)=(N_{y}/4,0)\) and \((y,z)=(3N_{y}/4,0)\), running parallel to the \(\hat{x}\)-axis with Burgers vectors \(b^{x}=+1\) and \(b^{x}=-1\), respectively. In Fig. 9(a) we show the energy spectrum of a Weyl semimetal with Weyl nodes on the \(k_{x}\)-axis with periodic boundary conditions and no dislocations. In Fig. 9(b) we show the spectrum of the same system after two screw dislocations have been inserted as described above. The blue/red coloration indicates on which dislocation the states are localized. We see that near each Weyl point the right-moving modes are on the red dislocation while the left-moving modes are on the blue dislocation, as described by Eq. (80). Hence, each dislocation has a net chirality.
To test the response equation we apply a non-vanishing \(A_{x}\) and numerically calculate the charge density localized on a single dislocation. We can carry out a microscopic calculation of the charge bound to a dislocation as a function of \(A_{x}\). Let us assume a nodal configuration with a positive node at \(\mathbf{k}=(\mathcal{P}_{x}/2,0,0)\) and a negative node at \((-\mathcal{P}_{x}/2,0,0)\). In the presence of a dislocation having Burgers vector \(b^{x}\), each \(k_{y}k_{z}\)-plane sees an effective magnetic flux \(\Phi(k_{x})=\frac{b^{x}k_{x}}{2\pi}\Phi_{0}\), where \(\Phi_{0}=h/e.\) Hence each \(k_{y}k_{z}\)-plane having a non-vanishing Chern number will contribute to the charge as
\[\Delta Q=\frac{eL_{x}}{2\pi}\int_{BZ}C(k_{x})\frac{k_{x}b^{x}}{2\pi}dk_{x}=0, \tag{85}\]
where \(C(k_{x})\) is the Chern number of each \(k_{y}k_{z}\)-plane parameterized by \(k_{x}.\) If we turn on a non-vanishing \(A_{x}\) (\(k_{x}\to k_{x}+\frac{e}{\hbar}A_{x}\)) and re-calculate the bound charge we find
\[\Delta Q|_{A_{x}} =-\frac{L_{x}}{2\pi}\int_{-\mathcal{P}_{x}}^{\mathcal{P}_{x}}{}^ {-\frac{e}{\hbar}A_{x}}\frac{k_{x}b^{x}}{2\pi}dk_{x} \tag{86}\] \[=\frac{e^{2}\mathcal{P}_{x}b^{x}L_{x}}{4\pi^{2}\hbar}A_{x}.\]
This result is exactly what is found in our numerics shown in Fig. 9(c). Both of these results match the analytic prediction in Eq. 80 after including an extra factor of two which takes into account the bulk and boundary inflow to the boundary [87, 88, 61, 89].
### 3D Weyl node quadrupole semimetal
Finally, we will discuss some aspects of the crystalline response of 3D Weyl semimetals with gapless Weyl nodes forming a quadrupole pattern. Some of these responses were recently discussed in Refs. [17, 18, 24], and here we consider some of the responses in more microscopic detail and compare directly with lattice model calculations.
Recall from Sec. III.5 the response action
\[S_{WQ}=\frac{e\mathcal{Q}_{\alpha\beta}}{8\pi^{2}}\int\mathfrak{e}^{\alpha} \wedge d\mathfrak{e}^{\beta}\wedge A.\]
The bulk linear response implied by Eq. (51) is
\[\begin{split}\mathcal{J}_{\alpha}^{\mu}&=\frac{e}{8 \pi^{2}}\varepsilon^{\mu\nu\rho\sigma}\mathcal{Q}_{\alpha\beta}\mathfrak{e}_{ \nu}^{\beta}\partial_{\rho}A_{\sigma}\\ &-\frac{e}{4\pi^{2}}\varepsilon^{\mu\nu\rho\sigma}\mathcal{Q}_{ \alpha\beta}A_{\nu}\partial_{\rho}\mathfrak{e}_{\sigma}^{\beta},\end{split} \tag{87}\]
\[j^{\mu}=-\frac{e}{8\pi^{2}}\epsilon^{\mu\nu\rho\sigma}Q_{\alpha\beta}\mathfrak{ e}_{\nu}^{\alpha}\partial_{\rho}\mathfrak{e}_{\sigma}^{\beta}. \tag{88}\]
Figure 9: (a) The bulk spectrum of a Weyl semimetal with two nodes on the \(k_{x}\)-axis. (b) The spectrum of the same Weyl semimetal with periodic boundary conditions and two screw dislocations with opposite Burgers vectors threaded along the \(x\)-direction. Red and blue coloration indicates on which dislocation the chiral modes are localized. Each dislocation has a net positive (red) or negative (blue) chirality. (c) Numerical calculation of the charge density bound to a screw dislocation as \(A_{x}\) is tuned.
We also note that both of these currents can be anomalous when subjected to certain gauge field configurations:
\[\partial_{\mu}\mathcal{J}_{\alpha}^{\mu} =-\frac{e}{8\pi^{2}}\varepsilon^{\mu\nu\rho\sigma}\mathcal{Q}_{\alpha\beta}\partial_{\mu}\mathfrak{e}_{\nu}^{\beta}\partial_{\rho}A_{\sigma}, \tag{89}\] \[\partial_{\mu}j^{\mu} =-\frac{e}{8\pi^{2}}\varepsilon^{\mu\nu\rho\sigma}\mathcal{Q}_{\alpha\beta}\partial_{\mu}\mathfrak{e}_{\nu}^{\alpha}\partial_{\rho}\mathfrak{e}_{\sigma}^{\beta}. \tag{90}\]
Now let us consider several different phenomena associated to these response equations in the context of a lattice model introduced in Ref. [17]:
\[\begin{split}H(\mathbf{k})&=\sin k_{x}\sin k_{y}\Gamma^{x}+\sin k_{z}\Gamma^{y}\\ &+\left(m+t(\cos k_{x}+\cos k_{y}+\cos k_{z})\right)\Gamma^{z}.\end{split} \tag{91}\]
Without any geometric deformations, the semimetal phase of our model with a Weyl node quadrupole has two nodes of one chirality at \(\mathbf{k}=(\pm K,0,0)\) and two of the opposite chirality at \((0,\pm K,0).\) Thus the gapped, 2D \(k_{y}k_{z}\) planes parameterized by \(k_{x}\) will have a non-vanishing Chern number \(C\) for \(-K<k_{x}<0\) and a non-vanishing Chern number \(-C\) for \(0<k_{x}<K\) where \(C=\pm 1.\) Similar statements can be made about the \(k_{x}k_{z}\) planes. Without loss of generality let us choose the nodes on the \(k_{x}\)-axis to have positive chirality such that \(\mathcal{Q}_{xx}>0\) and \(C=+1.\) For our model this also implies that \(\mathcal{Q}_{yy}<0\) and the non-vanishing \(k_{x}k_{z}\) Chern number planes have a negative Chern number for \(k_{y}<0\) and positive Chern number for \(k_{y}>0.\) For example, in our model we can generate a configuration with this structure using \(m=-2,\)\(t=1.\)
#### v.2.1 Response to flux and dislocation lines
We will begin by studying the momentum density bound to magnetic flux and charge density bound to dislocations. These two responses, some aspects of which are described in Ref. [17] (see also Refs. [18] and [24]), are the most straightforward because they are essentially bulk responses and do not generate anomalous currents,
Figure 10: (a) The bulk spectrum of a Weyl semimetal with two nodes of one chirality on the \(k_{x}\)-axis and two nodes of the opposite chirality on the \(k_{y}\)-axis. (b) The spectrum of the same Weyl semimetal with periodic boundary conditions and two screw dislocations with opposite Burgers vectors threaded along the \(x\)-direction. Red and blue coloration indicates on which dislocation the chiral modes are localized. Each dislocation has no net chirality, and the Weyl nodes on the \(k_{y}\)-axis do not form chiral modes. (c) The spatially-resolved \(k_{x}\) momentum density response of a Weyl node quadrupole semimetal to a pair of screw dislocations with opposite Burgers’ vectors \(b_{x}=\pm a_{x}\) located at \((y,z)=(20a_{y},(20\pm 10)a_{z})\) with the background gauge field \(A_{x}=2.5\times 10^{-4}\hbar/ea_{x}\) and \(\mathcal{Q}_{xx}=\pi^{2}/(2a_{x}^{2})\). (d) Numerically calculated dependence of the \(k_{x}\) momentum density localized on a screw dislocation with Burgers’ vector \(b_{x}=1\) as a function of the background gauge field \(A_{x}\), using the same model as in (c).
i.e., the RHS of the anomalous conservation laws above will vanish. Our model has \(\mathcal{Q}_{xx}=-\mathcal{Q}_{yy}\neq 0,\) and the responses generated by these two coefficients give two separate sets of terms in the response action. Hence, for simplicity we consider only the \(\mathcal{Q}_{xx}\) responses for now.
Let us first microscopically calculate the expected response to inserting a magnetic flux or a screw dislocation and compare with the response theory. First, consider inserting a thin magnetic flux line along the \(x\)-direction having flux \(\Phi\) localized at, say \((y,z)=(0,0).\) This flux will generate a Hall effect from each of the non-trivial \(k_{y}k_{z}\) Chern planes. The total charge bound to the flux line will vanish because there are equal and opposite contributions from \(k_{x}<0\) and \(k_{x}>0.\) However, threading the flux will build up a non-vanishing \(k_{x}\)-momentum since planes with opposite \(k_{x}\)-momentum have opposite Chern number. The total momentum (spatial integral of momentum density) driven to the flux line by the Hall effect at each \(k_{x}\) momentum is
\[\Delta P_{x}=-\frac{\Phi}{\Phi_{0}}\frac{L_{x}}{2\pi}\int_{-\pi}^{\pi}C(k_{x}) \hbar k_{x}dk_{x}=\frac{\Phi}{\Phi_{0}}\frac{\hbar K^{2}L_{x}}{2\pi}, \tag{92}\]
where the Chern number \(C(k_{x})\) is the piecewise-constant function across the \(k_{x}\) BZ described above, and \(\Phi_{0}=h/e\) is the quantum of magnetic flux. Using the fact that \(\mathcal{Q}_{xx}=2K^{2}\) and dividing by the volume we find the momentum density
\[\mathcal{J}_{x}^{0}=\frac{e\mathcal{Q}_{xx}}{8\pi^{2}}B_{x}. \tag{93}\]
This is the same result coming from the first term in Eq. 87 when \(\mathfrak{e}_{x}^{x}=1.\)
Next let us calculate the charge response to inserting dislocations. Consider a screw dislocation with Burgers vector component \(b^{x}\) associated to a translation gauge field configuration \(\mathcal{B}_{x}^{x}\equiv\partial_{y}\mathfrak{e}_{x}^{x}-\partial_{z} \mathfrak{e}_{y}^{x}=b^{x}\delta(y)\delta(z).\) From Eqs. 87 and 88 we see that both the momentum and charge currents have responses to dislocations, and we will first calculate the charge response. Heuristically the dislocation is like a \(U(1)\) gauge flux that couples to momentum instead of electric charge, so the dislocation couples to \(k_{x}\) momentum because it has a non-vanishing \(b^{x}\). Hence each \(k_{y}k_{z}\)-plane having non-vanishing Chern number (and non-vanishing \(k_{x}\)) will generate a Hall response, but with a magnitude proportional to its \(k_{x}\) charge. Indeed, each plane sees an effective flux \(\Phi(k_{x})=\frac{k_{x}b^{x}}{2\pi}\Phi_{0}.\) Hence, the total charge bound to the dislocation will be
\[\Delta Q=\frac{eL_{x}}{2\pi}\int_{-\pi}^{\pi}\frac{k_{x}b_{x}}{2\pi}C(k_{x})dk _{x}=-\frac{eb_{x}\mathcal{Q}_{xx}}{8\pi^{2}}L_{x}. \tag{94}\]
This matches Eq. 88, again after setting \(\mathfrak{e}_{x}^{x}=1\) (see also Refs. [17; 18; 24]).
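Both integrals above are elementary, but they are quick to verify symbolically. The sympy snippet below reproduces Eqs. (92) and (94) for the piecewise-constant Chern number profile described above (Chern number \(+1\) for \(-K<k_{x}<0\) and \(-1\) for \(0<k_{x}<K\), with \(K\) the node position so that \(\mathcal{Q}_{xx}=2K^{2}\)); it checks only this algebra, not the lattice numerics.

```python
import sympy as sp

k, K, Lx, bx, e, hbar, Phi, Phi0 = sp.symbols("k K L_x b_x e hbar Phi Phi_0",
                                              positive=True)

# Momentum pumped to the flux line, Eq. (92): sum of hbar*k_x weighted by C(k_x)
mom = sp.integrate(hbar*k, (k, -K, 0)) + sp.integrate(-hbar*k, (k, 0, K))
dPx = sp.simplify(-(Phi/Phi0) * Lx/(2*sp.pi) * mom)
print(dPx)          # Phi*K**2*L_x*hbar/(2*pi*Phi_0)

# Charge bound to the dislocation, Eq. (94): effective flux k_x*b_x/(2*pi) per plane
chg = (sp.integrate(k*bx/(2*sp.pi), (k, -K, 0))
       + sp.integrate(-k*bx/(2*sp.pi), (k, 0, K)))
dQ = sp.simplify(e*Lx/(2*sp.pi) * chg)
print(dQ)           # -K**2*L_x*b_x*e/(4*pi**2)  =  -e*b_x*Q_xx*L_x/(8*pi**2)
```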
Now we consider the momentum response to a dislocation, i.e., a momentum density bound to the dislocation when \(A_{x}\) is non-vanishing (this comes from the second term in Eq. 87). First we can compute the amount of momentum bound to a dislocation when \(A_{x}=0\) by adding the contributions of each Chern plane:
\[\begin{split}\Delta P_{x}&=\frac{L_{x}}{2\pi}\int_ {-\pi}^{\pi}\frac{k_{x}b_{x}}{2\pi}C(k_{x})\hbar k_{x}dk_{x}\\ &=\frac{L_{x}b_{x}\hbar}{4\pi^{2}}\left(\int_{0}^{K}k_{x}^{2}dk_ {x}-\int_{-K}^{0}k_{x}^{2}dk_{x}\right)\\ &=0.\end{split} \tag{95}\]
We note that this calculation is similar to Eq. 94 except with an additional factor of the "momentum-charge" \(\hbar k_{x}\) in the integrand. Now if we turn on an \(A_{x}\) such that \(k_{x}\to k_{x}+\frac{e}{\hbar}A_{x},\) we can repeat the calculation to find
\[\begin{split}\Delta P_{x}|_{A_{x}}&=\frac{L_{x}b_{ x}\hbar}{4\pi^{2}}\left(\int_{-\frac{eA_{x}}{\hbar}}^{K-\frac{eA_{x}}{\hbar}}k_{x}^{2}dk _{x}-\int_{-K-\frac{eA_{x}}{\hbar}}^{-\frac{eA_{x}}{\hbar}}k_{x}^{2}dk_{x} \right)\\ &=-\frac{eL_{x}b_{x}2K^{2}}{4\pi^{2}}A_{x}.\end{split} \tag{96}\]
The final result yields
\[\mathcal{J}_{x}^{0}=-\frac{e\mathcal{Q}_{xx}A_{x}}{4\pi^{2}}\mathcal{B}_{x}^{x}, \tag{97}\]
which matches Eq. 87 and our numerical calculations in Figs. 10(c) and (d). For the numerics we inserted a pair of screw dislocations with Burgers vectors \(b^{x}=\pm a_{x}\) in the presence of a constant background gauge potential \(A_{x}\). The resulting \(k_{x}\) momentum density of the ground state as a function of the \(y\) and \(z\) lattice coordinates is shown in Fig. 10 (c). Furthermore, the dependence of this momentum density on \(A_{x}\) reproduces the expected response coefficient, as shown in Fig. 10 (d).
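The shifted integrals in Eq. (96) can be verified in the same way; the snippet below confirms that the quadratic pieces cancel, so the bound momentum is exactly linear in \(A_{x}\). It checks only the algebra of Eq. (96) under the same piecewise-constant Chern number assumption.

```python
import sympy as sp

k, K, A, e, hbar, L, b = sp.symbols("k K A e hbar L b", positive=True)

shift = e*A/hbar
I_plus  = sp.integrate(k**2, (k, -shift,     K - shift))   # planes with C = +1
I_minus = sp.integrate(k**2, (k, -K - shift, -shift))      # planes with C = -1

dP = sp.simplify(L*b*hbar/(4*sp.pi**2) * (I_plus - I_minus))
print(dP)                                                   # -A*K**2*L*b*e/(2*pi**2)
print(sp.simplify(dP + e*L*b*2*K**2*A/(4*sp.pi**2)))        # 0, matching Eq. (96)
```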
#### v.2.2 Response of a deformed interface
Next let us consider an interface between an undeformed geometry and a geometry having a non-vanishing background \(\mathfrak{e}_{x}^{y}\) and \(\mathfrak{e}_{y}^{x}\) as shown in Fig. 11(b). To be explicit, let the interface between the two geometries occur as a function of \(z\) at \(z=0.\) On the surface of the un-deformed system we numerically calculated the characteristic (rank-2) Fermi arc structure as shown in the left surface-BZ panel in Fig. 11(a). For our deformed geometry we show the modified bulk Weyl node quadrupole and Fermi arcs when \(\mathfrak{e}_{x}^{y}=\mathfrak{e}_{y}^{x}\neq 0\) in the right surface-BZ panel in Fig. 11(a).
From these figures we see that the Weyl node quadrupole moment \(\mathcal{Q}_{ab}^{(R)}\) on the deformed side is modified from the quadrupole moment \(\mathcal{Q}_{ab}^{(L)}\) on the undeformed side. Explicitly, we can compute:
\[\begin{split}\mathcal{Q}_{xx}^{(R)}&=(\mathfrak{e}_{x}^{x})^{2}\mathcal{Q}_{xx}^{(L)}+2\mathfrak{e}_{x}^{x}\mathfrak{e}_{x}^{y}\mathcal{Q}_{xy}^{(L)}+(\mathfrak{e}_{x}^{y})^{2}\mathcal{Q}_{yy}^{(L)}\\ \mathcal{Q}_{xy}^{(R)}&=\mathfrak{e}_{x}^{x}\mathfrak{e}_{y}^{x}\mathcal{Q}_{xx}^{(L)}+\mathfrak{e}_{x}^{y}\mathfrak{e}_{y}^{y}\mathcal{Q}_{yy}^{(L)}+(\mathfrak{e}_{x}^{x}\mathfrak{e}_{y}^{y}+\mathfrak{e}_{x}^{y}\mathfrak{e}_{y}^{x})\mathcal{Q}_{xy}^{(L)}\\ \mathcal{Q}_{yy}^{(R)}&=(\mathfrak{e}_{y}^{x})^{2}\mathcal{Q}_{xx}^{(L)}+2\mathfrak{e}_{y}^{x}\mathfrak{e}_{y}^{y}\mathcal{Q}_{xy}^{(L)}+(\mathfrak{e}_{y}^{y})^{2}\mathcal{Q}_{yy}^{(L)},\end{split} \tag{98}\]
i.e., \(\mathcal{Q}_{ij}^{(R)}=\mathfrak{e}_{i}^{a}Q_{ab}^{(L)}\mathfrak{e}_{j}^{b}\). For our model and geometry we can make the simplifications \(\mathfrak{e}_{x}^{x}=1=\mathfrak{e}_{y}^{y},\mathfrak{e}_{x}^{y}=\mathfrak{e}_{ y}^{x},\mathcal{Q}_{xy}^{(L)}=0\), and \(\mathcal{Q}_{xx}^{(L)}=2K^{2}=-\mathcal{Q}_{yy}^{(L)}\). Substituting these relations into Eq. 98 yields
\[\mathcal{Q}_{xx}^{(R)}=-\mathcal{Q}_{yy}^{(R)}=2K^{2}(1-(\mathfrak{e}_{x}^{y}) ^{2}), \tag{99}\]
and \(Q_{xy}^{(R)}=0\). Alternatively, we can see this result from the locations of the deformed Weyl nodes which will sit at \((K,K\mathfrak{e}_{y}^{x},0)_{+}\), \((-K,-K\mathfrak{e}_{y}^{x},0)_{+}\), \((K\mathfrak{e}_{x}^{y},K,0)_{-}\), and \((-K\mathfrak{e}_{x}^{y},-K,0)_{-}\) (where the subscripts \(\pm\) encode the chirality for our choice of model parameters).
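The transformation \(\mathcal{Q}_{ij}^{(R)}=\mathfrak{e}_{i}^{a}\mathcal{Q}_{ab}^{(L)}\mathfrak{e}_{j}^{b}\) behind Eq. (99) is a two-line matrix identity, which the snippet below verifies symbolically for the frame and quadrupole tensor quoted above (treating \(\mathfrak{e}_{x}^{y}=\mathfrak{e}_{y}^{x}=\varepsilon\) as the only deformation).

```python
import sympy as sp

K, eps = sp.symbols("K varepsilon", positive=True)

E  = sp.Matrix([[1, eps], [eps, 1]])           # frame with e^y_x = e^x_y = eps
QL = sp.Matrix([[2*K**2, 0], [0, -2*K**2]])    # undeformed quadrupole, Q_xy = 0

QR = sp.simplify(E * QL * E.T)                 # Q^(R) = e Q^(L) e^T
print(QR)  # diag(2K^2(1 - eps^2), -2K^2(1 - eps^2)) with vanishing off-diagonal
```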
Since the Weyl node quadrupole moments on the two sides of the interface are different, we expect gluing the two sides together will leave behind a signature at the interface. Indeed, from the middle surface-BZ panel in Fig. 11(a) we see gapless Fermi arcs that remain at the interface and stretch between the unmodified and modified projected locations of the bulk Weyl nodes. From Eqs. 87, 88 we see there should be responses
\[\begin{split}\mathcal{J}_{x}^{x}&=-\frac{e}{4\pi^{2}}\mathcal{Q}_{xx}A_{0}\partial_{z}\mathfrak{e}_{y}^{x},\quad\mathcal{J}_{y}^{y}=-\frac{e}{4\pi^{2}}\mathcal{Q}_{yy}A_{0}\partial_{z}\mathfrak{e}_{x}^{y},\\ j^{0}&=\frac{e}{8\pi^{2}}\left(\mathcal{Q}_{xx}\mathfrak{e}_{x}^{x}\partial_{z}\mathfrak{e}_{y}^{x}-\mathcal{Q}_{yy}\mathfrak{e}_{y}^{y}\partial_{z}\mathfrak{e}_{x}^{y}\right)=\frac{e\mathcal{Q}_{xx}}{4\pi^{2}}\partial_{z}\mathfrak{e}_{x}^{y},\end{split}\]
where in the last equality we substituted in the relations that are specific to our model and interface geometry, which we stated above.
We confirmed the momentum and charge responses numerically, in particular the \(\mathcal{J}_{x}^{x}\) response shown in Fig. 11(c), and we also provide microscopic analytic arguments here. The momentum currents both follow the same logic, so let us consider only \(\mathcal{J}_{x}^{x}\) for now. From the center surface-BZ panel in Fig. 11(a) we see remnant Fermi arcs. If we increase \(A_{0}\), each linearly-dispersing point on the Fermi arc will have an excess charge density \(\delta n(\mathbf{k})=\frac{eA_{0}}{2\pi\hbar|v_{F}(\mathbf{k})|}\) where \(v_{F}(\mathbf{k})\) is the Fermi velocity at the Fermi arc located at \(\mathbf{k}\) in the surface-BZ. Hence, the contribution to the \(k_{x}\) momentum current of such a point on the Fermi arc is \(\mathcal{J}_{x}^{x}(\mathbf{k})=\hbar k_{x}v_{F}(\mathbf{k})\delta n(\mathbf{ k})\). For our model and geometry, the contributions to the \(\mathcal{J}_{x}^{x}\) current that are linear in the deformations of \(\mathfrak{e}_{i}^{a}\) arise from the Fermi arcs stretching between \((K,0)\rightarrow(K,K\mathfrak{e}_{y}^{x})\) and \((-K,0)\rightarrow(-K,-K\mathfrak{e}_{y}^{x})\). Each of these arcs has a fixed value \(k_{x}=\pm K\) and each arc has an opposite Fermi velocity. Hence
\[\begin{split}\mathcal{J}_{x}^{x}&=\hbar Kv_{F}(K,k_{y})\delta n\frac{K\mathfrak{e}_{y}^{x}}{2\pi}+\hbar(-K)v_{F}(-K,k_{y})\delta n\frac{K\mathfrak{e}_{y}^{x}}{2\pi}\\ &=\frac{e\mathcal{Q}_{xx}^{(L)}\mathfrak{e}_{y}^{x}A_{0}}{4\pi^{2}}\text{sgn}(v_{F}),\end{split}\]
where \(K\mathfrak{e}_{y}^{x}/2\pi\) counts the density of states on the Fermi arc in the \(k_{y}\) direction, \(\text{sgn}(v_{F})\) is the sign of the velocity on the \(k_{x}=+K\) arc, and the un-deformed \(\mathcal{Q}_{xx}^{(L)}=2K^{2}\). This result matches the prediction from the response theory and matches the numerical results in Fig. 11(c).
The calculation of the charge density \(j^{0}\) at the interface is simpler since it comes from the bulk response to a translation magnetic field. At the interface there is a non-vanishing \(\mathcal{B}_{x}^{x}=-\partial_{z}\mathfrak{e}_{y}^{x}\) and \(\mathcal{B}_{y}^{y}=\partial_{z}\mathfrak{e}_{x}^{y}\). Since the \(k_{y}k_{z}\)-planes and \(k_{x}k_{z}\)-planes have non-vanishing Chern numbers, they yield a density response similar to what we found on the dislocation line in Eq. 94. Each \(k_{x}\) state sees an effective magnetic flux \(\Phi(k_{x})=-\frac{k_{x}b^{x}}{2\pi}\Phi_{0}\), and similarly each \(k_{y}\) state sees an effective flux \(\Phi(k_{y})=\frac{k_{y}b^{y}}{2\pi}\Phi_{0}\), where \(b^{x}=\int dy\,\mathfrak{e}_{y}^{x}|_{z>0}\) and \(b^{y}=\int dx\,\mathfrak{e}_{x}^{y}|_{z>0}\) are the Burgers vectors obtained when integrating across the entire periodic \(y\)- and \(x\)-directions respectively. Hence the total charge at the interface is
\[\Delta Q =-\frac{2eL_{x}}{2\pi}\int_{0}^{K}\frac{k_{x}b^{x}}{2\pi}dk_{x}+ \frac{2eL_{y}}{2\pi}\int_{0}^{K}\frac{k_{y}b^{y}}{2\pi}dk_{y} \tag{100}\] \[=\frac{e}{8\pi^{2}}\left(-\mathcal{Q}_{xx}b^{x}L_{x}+\mathcal{Q} _{yy}b^{y}L_{y}\right)\] \[=-\frac{e\mathcal{Q}_{xx}b^{x}L_{x}}{4\pi^{2}},\]
where the leading factors of two in the first line account for identical contributions from the interval \(k_{x}\in[-K,0]\), and in the last equation we used \(\mathcal{Q}_{xx}=-\mathcal{Q}_{yy}\) and \(L_{x}b^{x}=L_{y}b^{y}\) since \(\mathfrak{e}_{x}^{y}=\mathfrak{e}_{y}^{x}.\) This final result matches Eq. 88.
## V Conclusion
In this article we have presented a framework of explicit connections between a wide-ranging family of topological response theories from 0D to 3D. Using this framework, we have shown how the coefficients for these response theories, most of which are well-known in insulators, can be obtained for topological semimetals. This has allowed us to provide careful derivations and characterizations of mixed crystalline-electromagnetic responses of semimetallic and insulating systems in various spatial dimensions. Finally, we have provided an extensive set of microscopic lattice calculations and numerical confirmations affirming that our predicted field theory responses do indeed arise in tight binding lattice models. With the advent of topological quantum chemistry [90; 91; 92; 93; 94; 95], thousands of crystalline topological insulators and semimetals have been identified, but many open questions persist about how to probe their topological features. This work provides insight into how the topology in some of these materials may be probed and characterized, i.e., by combining geometric/strain distortions and electromagnetic responses.
There is a growing body of work studying the mixed crystalline-electromagnetic responses of Weyl semimetals with dipole and quadrupole arrangements of nodes [12; 13; 15; 16; 17; 18; 19; 20; 21; 23; 24; 60; 76; 77; 78; 79; 80; 82; 83; 84; 85], which indicates a broad interest in these topics. Our work serves two major purposes in the context of this previous literature: (i) we identified several aspects of mixed crystalline-electromagnetic responses that have not yet been addressed in earlier work, and (ii) we synthesized aspects of the existing literature to present a unified description of these responses in terms of the momentum-space multipole moments of the nodal configurations, and to provide new intuition for previously studied responses. While prior work has examined the mixed crystalline-electromagnetic response of two-dimensional Dirac node dipole semimetals [34; 52], we have advanced this understanding by identifying a Wilson loop correction to the response coefficient that raises a subtle question about the connection between the charge polarization and the mixed crystalline-electromagnetic response. Additionally, the Dirac node quadrupole semimetal has not been previously discussed, making our work the first study of its properties and mixed crystalline-electromagnetic responses. Furthermore, our model of a nodal line quadrupole semimetal and its corresponding response theory are new to the literature as well.
The results of this work point in many possible directions for future work. First, finding experimental realizations of the proposed topological responses in solid state or metamaterial systems is an exciting prospect. Rank-2 chiral fermions, which have an anomaly compensated by the bulk response of a Weyl quadrupole semimetal [17], were realized in a recent experiment on non-Hermitian topo-electric circuit metamaterials [39]. In that platform, the mixed crystalline-electromagnetic response generates a momentum-resolved non-Hermitian skin effect that was observed in the experiment. Topo-electric circuits, along with other metamaterials and solid state platforms, are promising arenas in which the many mixed crystalline-electromagnetic responses we discuss in this paper could be realized. Other extensions of this work include the consideration of additional crystalline gauge fields as was done in, e.g., Refs. [18; 53; 96; 97; 98; 99; 100]. Some of us are also working on extending the nodal, higher-multipole responses to interacting systems and non-equilibrium systems where, in the latter, one can have mixed energy-momentum multipole moments. Studying the leading nodal dipole moments has already led to a rich set of phenomena, and the higher moments provide a large hierarchy of phenomena that can be explored in current experiments.
|
2307.16745 | Advancing Smart Malnutrition Monitoring: A Multi-Modal Learning Approach
for Vital Health Parameter Estimation | Malnutrition poses a significant threat to global health, resulting from an
inadequate intake of essential nutrients that adversely impacts vital organs
and overall bodily functioning. Periodic examinations and mass screenings,
incorporating both conventional and non-invasive techniques, have been employed
to combat this challenge. However, these approaches suffer from critical
limitations, such as the need for additional equipment, lack of comprehensive
feature representation, absence of suitable health indicators, and the
unavailability of smartphone implementations for precise estimations of Body
Fat Percentage (BFP), Basal Metabolic Rate (BMR), and Body Mass Index (BMI) to
enable efficient smart-malnutrition monitoring. To address these constraints,
this study presents a groundbreaking, scalable, and robust smart
malnutrition-monitoring system that leverages a single full-body image of an
individual to estimate height, weight, and other crucial health parameters
within a multi-modal learning framework. Our proposed methodology involves the
reconstruction of a highly precise 3D point cloud, from which 512-dimensional
feature embeddings are extracted using a headless-3D classification network.
Concurrently, facial and body embeddings are also extracted, and through the
application of learnable parameters, these features are then utilized to
estimate weight accurately. Furthermore, essential health metrics, including
BMR, BFP, and BMI, are computed to conduct a comprehensive analysis of the
subject's health, subsequently facilitating the provision of personalized
nutrition plans. While being robust to a wide range of lighting conditions
across multiple devices, our model achieves a low Mean Absolute Error (MAE) of
$\pm$ 4.7 cm and $\pm$ 5.3 kg in estimating height and weight. | Ashish Marisetty, Prathistith Raj M, Praneeth Nemani, Venkanna Udutalapally, Debanjan Das | 2023-07-31T15:08:02Z | http://arxiv.org/abs/2307.16745v1 | Advancing Smart Malnutrition Monitoring: A Multi-Modal Learning Approach for Vital Health Parameter Estimation
###### Abstract
Malnutrition poses a significant threat to global health, resulting from an inadequate intake of essential nutrients that adversely impacts vital organs and overall bodily functioning. Periodic examinations and mass screenings, incorporating both conventional and non-invasive techniques, have been employed to combat this challenge. However, these approaches suffer from critical limitations, such as the need for additional equipment, lack of comprehensive feature representation, absence of suitable health indicators, and the unavailability of smartphone implementations for precise estimations of Body Fat Percentage (BFP), Basal Metabolic Rate (BMR), and Body Mass Index (BMI) to enable efficient smart-malnutrition monitoring. To address these constraints, this study presents a groundbreaking, scalable, and robust smart malnutrition-monitoring system that leverages a single full-body image of an individual to estimate height, weight, and other crucial health parameters within a multimodal learning framework. Our proposed methodology involves the reconstruction of a highly precise 3D point cloud, from which 512-dimensional feature embeddings are extracted using a headless-3D classification network. Concurrently, facial and body embeddings are also extracted, and through the application of learnable parameters, these features are then utilized to estimate weight accurately. Furthermore, essential health metrics, including BMR, BFP, and BMI, are computed to conduct a comprehensive analysis of the subject's health, subsequently facilitating the provision of personalized nutrition plans. While being robust to a wide range of lighting conditions across multiple devices, our model achieves a low Mean Absolute Error (MAE) of \(\pm\) 4.7 cm and \(\pm\) 5.3 kg in estimating height and weight.
Multi-modal Learning, 3D Reconstruction, Feature Fusion, Height and Weight estimation, Smart Healthcare, Non-invasive.
## I Introduction
**Malnutrition** is an ailment caused by consuming food that lacks an adequate quantity of essential nutrients. It is most commonly used in reference to undernutrition [1], which occurs when a person does not receive sufficient calories, proteins, or micronutrients. A scarcity of a quality diet most commonly causes undernourishment or undernutrition. According to a WHO survey, there are 178 million malnourished children globally, with 20 million suffering from severe malnutrition, contributing to 3.5 to 5 million deaths in children under five each year. On a global scale, undernutrition is responsible for 45% of all deaths in children under five and is widespread in developing nations, especially among women and children. Malnutrition also poses a range of severe health problems that include anemia, diarrhea, disorientation, weight loss, night blindness, anxiety, attention deficits, and other neuropsychological disorders [2]. In the aftermath of the COVID-19 outbreak, which caused significant concerns and stress regarding public health [3], the traditional approach of measuring height and weight in public health centers has been impacted. During the pandemic, strict social distancing measures were put in place to minimize the spread of infection, making the conventional method of calculating essential health metrics through direct measurements undesirable.
In addition, pandemics like COVID-19, according to UNICEF, put malnourished children at an ever-increasing danger of mortality, as well as impaired growth, development, and learning for those who survive. Therefore, there is a dire need to identify important health indicators and to monitor chronic stress and uncontrolled or unmonitored food consumption using data-driven approaches [4]. A primary step in identifying or diagnosing malnutrition is to determine a person's nutritional status by computing their **Body Fat Percentage (BFP)**, **Basal Metabolic Rate (BMR)**, and **Body Mass Index (BMI)** and comparing them with standardized charts. It is more accurate to infer the risk of malnutrition and various medical conditions from these metrics since they represent the functioning of the human body in a well-oriented manner. In this work, we predict the height and weight, and subsequently calculate the important health metrics mentioned above, from a single-shot full-body image by incorporating a holistic representation of prominent features under the multi-modal learning paradigm. Fig. 1 illustrates a conceptual overview of the proposed method.
In this paper, we propose a solution based on multi-feature fusion that combines 3D, facial, body, and metadata features, integrated with a smartphone application prototype, to estimate a human's height, weight, and other health parameters. The smartphone's camera serves as a sensor to capture a full-body image of a person, and the height is estimated by calculating the centimetre-per-pixel ratio using image processing techniques. Following that, the captured image is pre-processed by detecting, cropping, and aligning the face and body, reconstructing and sampling a 3D person mesh object, and extracting features in
Fig. 1: Conceptual Overview
a multi-modal framework. To summarize, the key contributions of our work are:
### _Contributions_
* A holistic feature fusion of facial, body & 3D embeddings, including the correlation between them, optimal feature combination and individual importance in estimating the weight is insightfully discussed.
* This paper is the first to incorporate a fine-grained local 3D representation in combination with 3D classification network backbones used as feature extractors.
* To the best of our knowledge, this is the first time an IoMT framework has been used to develop an autonomous smart application for peripheral devices without any manual intervention.
* The trained model outperformed state-of-the-art methods for weight estimation on real-world data using a multi-modal architecture, achieving a 5.3 kg error.
## II Related Research Overview
With the COVID-19 pandemic behind us and a shift in the global landscape, including a rise in obesity and undernutrition in many countries, the need for a simple non-contact height and weight estimation technique remains as relevant as ever. Ongoing research is actively investigating and developing such techniques to address the current health challenges. The following sections discuss the related literature categorized based on the model output - height, weight, and medium of deployment.
### _Height Prediction_
**Alberink et al. [5]** pointed out that in the field of forensic practice, there is a recurring demand for height estimations of individuals observed in surveillance video footage captured by cameras. Multiple approaches exist for conducting such estimations and to gain insights into the disparities between actual and measured heights, validation measurements are taken from a group of test subjects. Based on this analysis, a method was proposed to determine confidence intervals for the height of individuals depicted in images, accounting for factors such as head and footwear. The aim was to provide a reliable framework for estimating the height of questioned individuals captured in surveillance images while considering both systematic and random sources of variation. Later, **Abdelsader et al. [6]** employed an equation that predicts height based on explicitly labeled keypoint coordinates in the image. **Dey et al. [7]** assessed the height differences of individuals in every picture and generated a height disparity graph from a photo compilation to estimate height. Several of the earliest works estimated height and weight using metrics such as physique and bone length alongside face and body images. Then with the rise of deep learning, **Dantcheva et al. [8]** first proposed a 50-layer ResNet architecture, achieving an 8.2 cm and 8.51 kg MAE for height and weight prediction, respectively, using only face images. **Gunel et al. [9]** later tried improving the architecture using face, body, and gender information for predicting height in unconstrained settings. In addition to these inputs, techniques involving depth information were developed, such as the work by **Fuken et al. [10]**, where a four-stage architecture performs segmentation of the human body into explicit segments, predicts the height of the segments using three CNNs with an error of 0.9%, and the research by **Lee et al. [11]**, which devised a height estimation method using both color and depth information with the help of Mask R- CNN's, achieving a 2.2% error rate.
### _Weight Prediction_
One of the initial works for weight estimation used anthropometric features as proposed by **Velardo et al. [12]**. By employing multiple regression analysis, the authors aimed to establish a model that can effectively estimate weight using various anthropometric features. They relied on a comprehensive medical database to train the model, ensuring that it captures a wide range of anthropometric variations and provides accurate weight predictions. The weight assessor proposed by **Nguyen et al. [13]** made use of the abundant information available in RGB-D images to improve estimation accuracy. The method takes into account visual color signals, depth information, and gender to estimate multiple weight-related dimensions.
\begin{table}
\begin{tabular}{l c c c c c c c} \hline \hline Existing Technologies & Height & Weight & Holistic Feature & Local 3D & Smartphone & Real-Time & Other Health \\ & Estimation & Estimation & Representation & Features & Application & Testing & Metrics \\ \hline Alberink _et al. [5]_ & ✓ & \(\times\) & \(\times\) & \(\times\) & \(\times\) & \(\times\) & \(\times\) \\ Abdelkader _et al. [6]_ & ✓ & \(\times\) & ✓ & \(\times\) & \(\times\) & ✓ & \(\times\) \\ Dey _et al. [7]_ & ✓ & \(\times\) & \(\times\) & \(\times\) & \(\times\) & \(\times\) & \(\times\) \\ Dantcheva _et al. [8]_ & ✓ & ✓ & \(\times\) & \(\times\) & \(\times\) & ✓ & ✓ \\ Gunel _et al. [9]_ & ✓ & \(\times\) & ✓ & \(\times\) & \(\times\) & \(\times\) & \(\times\) \\ Fukun _et al. [10]_ & ✓ & \(\times\) & ✓ & \(\times\) & \(\times\) & \(\times\) & \(\times\) \\ Lee _et al. [11]_ & ✓ & \(\times\) & ✓ & \(\times\) & \(\times\) & \(\times\) & \(\times\) \\ Velardo _et al. [12]_ & \(\times\) & ✓ & ✓ & \(\times\) & \(\times\) & \(\times\) & \(\times\) \\ Nguyen _et al. [13]_ & \(\times\) & ✓ & ✓ & \(\times\) & \(\times\) & \(\times\) & \(\times\) \\ Jiang _et al. [14]_ & \(\times\) & ✓ & ✓ & \(\times\) & \(\times\) & ✓ & \(\times\) \\ Jin _et al. [15]_ & ✓ & ✓ & ✓ & ✓ & \(\times\) & ✓ & ✓ \\ Altininger _et al. [16]_ & ✓ & ✓ & \(\times\) & \(\times\) & \(\times\) & \(\times\) & \(\times\) \\ Thapar _et al. [17]_ & \(\times\) & \(\times\) & \(\times\) & \(\times\) & \(\times\) & \(\times\) & \(\times\) \\ Child Growth Monitor [18] & ✓ & ✓ & \(\times\) & \(\times\) & ✓ & ✓ & \(\times\) \\ \hline
**autoNutri** & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ \\ \hline \hline \end{tabular}
\end{table} TABLE I: Comparison with existing literature works
This integrated strategy offered an extensive framework for predicting mass from a single RGB-D image. Influenced by recent developments in health science research, **Jiang et al. [14]** investigated the viability of analyzing body weight using 2D frontal view human body images with BMI as the metric for measuring body weight. The intention of the study was to examine this analysis at differing levels of difficulty by investigating three feasibility problems ranging from simple to complex. To facilitate the analysis of body weight from human body images, the researchers developed a system that involved computing five anthropometric features, which have been recommended as viable indices for determining body weight. A **visual-body-to-BMI dataset** has been acquired and systematically cleansed to support the research study.
As mentioned previously, **Dantcheva et al. [8]** investigated the viability of estimating measurements of height, weight, and BMI from single-shot photographs of the face. The authors proposed a regression method based on the 50-layer ResNet architecture to accomplish this goal. This method utilized the exclusive properties of facial images to precisely estimate the aforementioned characteristics. In addition, a new dataset containing 1026 subjects has been included in this study. In a recent study, **Jin et al. [15]** noted that BMI is frequently employed as a measurement of weight and health conditions and that previous research in this field has focused primarily on using numerous 2D images, 3D images, or images of the face. However, these indicators are not always accessible and the authors proposed a dual-branch regression approach to estimate weight and BMI from a single 2D body image to circumvent this limitation. The researchers intend to improve the accuracy of BMI estimation from a single 2D body image by integrating information from the anthropometric feature computation branch and the deep learning-based feature extraction branch. In addition, few methods attempted to estimate both height and weight simultaneously, such as **Altinigne et al. [16]**, who developed a deep learning method that employs the estimation of individual silhouette and skeleton joints as effective regularizers.
### _Malnutrition and IoT Solutions_
Many previous works have focused on developing a solution for malnutrition, such as the expert system by **Thapar et al. [17]**, which analyses malnutrition using a Mamdani inference method with 13 different categorical input variables, but it is only recently that work has begun to make them accessible and deployable. One such IoT-based solution is **Child Growth Monitor [18]**, an AI-based application that relies on the availability of infrared sensors in selected smartphones to capture 3D measurements of a child's height, body volume, and weight ratio. However, even these techniques fell short of providing a complete solution involving height, weight estimation, all wrapped up in an application that could be used by anyone with a smartphone. Our work overcomes all of the aforementioned drawbacks while also improving weight estimation performance through the use of local 3D features, multimodal embedding fusion, and an edge device prototype for computation. Table I depicts an overview of all the discussed existing solutions.
## III Methodology
This section describes the proposed three-phase height and weight estimation workflow, as shown in Fig. 2. Phase 1 deals with image pre-processing and height estimation while phase 2 emphasizes feature extraction, multi-modal fusion, and regression. Subsequently, the final phase depicts the integration of the above system with an edge device application prototype in an IoT framework.
### _Phase 1: Pre-processing and Height Prediction_
In this phase, we pre-process the input image of a person, reconstruct the 3D volumetric information and perform height prediction. The mentioned phase is divided into four sub-phases: Facial landmark detection and alignment, Body key points detection, 3D reconstruction and Height prediction.
#### Iii-A1 Facial Landmarks Detection and Alignment
To extract the face crop from the full-body image, we perform face verification, cropping, and subsequently alignment. The initial face detection step determines the position of a face by traversing the points around the facial region to locate 68 landmarks. Subsequently, the faces are aligned and transformed with an affine transformation such that the facial landmarks (inner eyes and bottom lip) appear in approximately the same regions, preserving collinearity, parallelism, and the ratio of distances between points. Fig. 3 (a) visualizes an example of face localization from the input image, Fig. 3 (b) depicts the facial landmarks, while Fig. 3 (c) illustrates the facial alignment and region cropping. After completing the facial alignment step, the subsequent stage in the preprocessing pipeline involves the detection of body key points.
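Before moving on, here is a minimal sketch of the face detection, landmarking, and affine alignment step just described. It uses dlib's frontal face detector with the standard pretrained 68-landmark predictor and an OpenCV affine warp; the predictor file path, the choice of anchor landmarks, and the template positions are illustrative assumptions rather than values taken from the paper.

```python
# Sketch: detect a face, locate its 68 landmarks, and warp the crop so that the
# inner eye corners and the bottom lip land at fixed template positions.
import cv2
import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()
# Standard pretrained 68-landmark model, assumed to be available on disk.
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def aligned_face_crop(image_bgr, out_size=224):
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    faces = detector(gray, 1)
    if not faces:
        return None
    shape = predictor(gray, faces[0])
    pts = np.array([[p.x, p.y] for p in shape.parts()], dtype=np.float32)

    # Anchor points: inner eye corners (landmarks 39 and 42) and bottom lip (57).
    src = pts[[39, 42, 57]]
    # Illustrative template positions inside the output crop.
    dst = np.float32([[0.35, 0.35], [0.65, 0.35], [0.50, 0.78]]) * out_size

    M = cv2.getAffineTransform(src, dst)   # affine map preserves collinearity/ratios
    return cv2.warpAffine(image_bgr, M, (out_size, out_size))

# crop = aligned_face_crop(cv2.imread("subject.jpg"))
```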
#### Iii-A2 Body Keypoints Detection
Considering the inherent unpredictability of real-world scenarios, it is imperative to eliminate unwanted noise. After the human body region is extracted from varying backgrounds with a U-Net trained for human segmentation, the subsequent stage involves the detection of human body landmarks within the input image. This process commences with the initial layers of the VGG-19 network extracting pertinent image features, which are then passed into two parallel branches of convolutional layers. The first branch predicts a group of 18 confidence maps, each representing a different portion of the human posture skeleton. The second
Fig. 2: Proposed System Overview
branch predicts a group of 38 Part Affinity Fields (PAFs) [19], which indicate the degree of affinity between parts. Let \(S=(S_{1},S_{2},\ldots,S_{J})\) denote the confidence maps for the \(J\) detected body parts. The ground-truth confidence map \(S_{j,k}^{*}\) for body part \(j\) of person \(k\) at a location \(p\) is given by Eq. 1, where \(x_{j,k}\) is the ground truth position of body part \(j\) for person \(k\) in the image and \(\sigma\) controls the spread of the peak.
\[S_{j,k}^{*}(p)=exp(-\frac{|p-x_{j,k}|_{2}^{2}}{\sigma^{2}}) \tag{1}\]
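As a concrete illustration of Eq. 1, the ground-truth confidence map for a single keypoint can be rasterized as below; the image size and \(\sigma\) are arbitrary illustrative choices, and the per-pixel maximum used to merge several people's maps follows the original PAF formulation rather than anything stated explicitly here.

```python
# Sketch of Eq. 1: an unnormalized Gaussian peak of spread sigma centered on
# the ground-truth keypoint x_{j,k}, evaluated on the full pixel grid p.
import numpy as np

def confidence_map(x_jk, height, width, sigma=7.0):
    ys, xs = np.mgrid[0:height, 0:width]             # pixel grid p = (x, y)
    d2 = (xs - x_jk[0]) ** 2 + (ys - x_jk[1]) ** 2   # |p - x_{j,k}|^2
    return np.exp(-d2 / sigma ** 2)

def merged_map(keypoints, height, width, sigma=7.0):
    # Aggregate several people's maps with a per-pixel max so that nearby
    # peaks are preserved rather than blurred together.
    maps = [confidence_map(kp, height, width, sigma) for kp in keypoints]
    return np.max(maps, axis=0)

S_star = merged_map([(120, 80), (300, 90)], height=368, width=368)
```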
#### Iii-A3 3D Reconstruction
The loss of 3D information during the process of capturing pictures poses a significant challenge in accurately inferring and extracting 3D characteristics from 2D visuals. To tackle the aforementioned challenge, we adopt a multi-level architecture PiFuHD [20] which is trained end-to-end on high-resolution images. This model is profound in reconstructing 3D mesh, preserving intricate 3D details solely from a single human image. The objective of the algorithm is to model a function, \(f(X)\), such that for any given 3D position in continuous space \(X=(X_{x},X_{y},X_{z})\in R^{3}\), it predicts the occupancy value as shown in Eq. 2.
\[f(X,I)=\begin{cases}1,\textit{if X is inside the mesh surface}\\ 0,\textit{otherwise}\end{cases} \tag{2}\]
For an orthogonal projected 2D point given by \(\pi(X)=x=(X_{x},X_{y})\), an image feature embedding is extracted by function \(f\). Then the occupancy of the query 3D point X is estimated by Eq. 3 where Z = \(X_{z}\) is the depth along the ray defined by the 2D projection \(x\).
\[f(X,I)=g(\phi(X,I),Z) \tag{3}\]
Finally, we employ mesh sampling to generate a point cloud representation of the mesh, which provides a straightforward yet efficient means of representing 3D data. The detected body key points are illustrated in Fig. 4 (a), Fig. 4 (b) depicts the result of masking the input image, Fig. 4 (c) shows the 3D Mesh Reconstruction and Fig. 4 (d) illustrates its conversion to 3D Point-cloud.
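A minimal sketch of this final sampling step is shown below, assuming the PiFuHD reconstruction has been saved as a mesh file (e.g. an OBJ) and using trimesh for uniform surface sampling; the 2048-point budget and the unit-sphere normalization are conventional choices for point-cloud classifiers, not details taken from the paper.

```python
# Sketch: load the reconstructed human mesh and sample it into a fixed-size,
# normalized point cloud suitable for a 3D classification network.
import numpy as np
import trimesh

def mesh_to_pointcloud(mesh_path, n_points=2048):
    mesh = trimesh.load(mesh_path, force='mesh')
    points, _ = trimesh.sample.sample_surface(mesh, n_points)   # (n_points, 3)
    points = points - points.mean(axis=0)                       # center at origin
    points = points / np.max(np.linalg.norm(points, axis=1))    # unit-sphere scale
    return points.astype(np.float32)

# pc = mesh_to_pointcloud("reconstruction.obj")   # fed to the 3D feature extractor
```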
#### Iii-A4 Height Estimation
The final step of this phase is height prediction, and taking previous work results into account, we decided to use a simple yet efficient computer vision technique that works best for input images that are parallel to the subject, similar to our dataset images. The simple pixel arithmetic method relies on the person's scale and camera orientation to calculate the person's height. To begin, we undistort the image to remove radial and tangential distortions and make the image independent of the device used to capture it. Then, we calculate the pixel per metric (ppm) attribute on the tight-crop masked image (\(I_{c}\)) from previous sub-phases using Eq. 4. This metric is then re-used throughout the process to predict the height of a new person (\(I_{pred}\)) given a static camera position by Eq. 5.
\[ppm=\frac{I_{c}.size[0]}{I_{c}~{}height} \tag{4}\]
\[height_{pred}=\frac{I_{pred}.size[0]}{ppm} \tag{5}\]
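Eqs. 4-5 amount to a single calibrated ratio; a sketch is given below, where the camera matrix, distortion coefficients, and the 175 cm reference height are placeholders and the crops are assumed to be the tight, masked body crops from the earlier sub-phases.

```python
# Sketch of Eqs. 4-5: calibrate a pixel-per-metric (ppm) ratio from a reference
# person of known height, then reuse it for new subjects captured with the same
# static camera placement.
import cv2

def undistorted(img, camera_matrix, dist_coeffs):
    # Remove radial/tangential distortion so ppm is device-independent.
    return cv2.undistort(img, camera_matrix, dist_coeffs)

def pixel_per_metric(reference_crop, reference_height_cm):
    # Eq. 4: vertical pixel extent of the reference person / their true height.
    return reference_crop.shape[0] / reference_height_cm

def estimate_height_cm(subject_crop, ppm):
    # Eq. 5: vertical pixel extent of the new subject / calibrated ppm.
    return subject_crop.shape[0] / ppm

# ppm = pixel_per_metric(ref_crop, 175.0)         # 175 cm reference (illustrative)
# height_pred = estimate_height_cm(new_crop, ppm)
```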
### _Phase 2: Unimodal representation and fusion_
The preprocessed data extracted from the previous phase is passed to this phase for feature extraction. This phase can be further divided into three sub-phases: 3D-feature extraction, 2D-feature extraction, multi-modal fusion and regression. The overview of the computational architecture is represented in Fig. 5
#### Iii-B1 3D feature extraction
The point cloud obtained after the previous phase's pre-processing is used as the input for extracting the 3D embedding representation. 3D point classifiers are well suited to classifying a point cloud based on both its local, granular shape and its overall global shape, making them ideal feature extractors for our problem. As a result, we use the PointNet [21] classifier to extract these representations because of its capacity to deal with unordered input points: a symmetric function (max pooling) learns a set of optimization criteria that select informative points in the point cloud and encode the reason for their selection. The final fully connected layers of the network consolidate these optimally learned values into a global descriptor for the entire shape, resulting in the 512-dimensional feature vector.
Fig. 4: Our body detection and prepossessing pipeline: (a) Body Keypoint estimation, (b) Masking, (c) 3D Human Mesh Reconstruction, (d) Conversion to 3D Point-Cloud
Fig. 3: Our face verification and pre-processing pipeline: (a) Face Detection, (b) Facial Landmark Detection, (c) Face Alignment & Cropping
Since each point is transformed independently, this input format also makes it simple to apply rigid or affine transformations.
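A simplified PyTorch sketch of such a truncated, headless extractor is given below: shared per-point MLPs followed by a symmetric max-pool, producing the 512-dimensional global descriptor. It is a stand-in for the headless PointNet used in the paper, not its exact architecture (in particular, the input and feature transform networks are omitted).

```python
# Sketch: a PointNet-like encoder mapping an (N, 3) point cloud to one 512-d
# global feature via shared point-wise MLPs and a symmetric max-pool.
import torch
import torch.nn as nn

class PointNetEncoder(nn.Module):
    def __init__(self, feat_dim=512):
        super().__init__()
        # Shared MLP applied independently to every point (1x1 Conv1d over points).
        self.mlp = nn.Sequential(
            nn.Conv1d(3, 64, 1), nn.BatchNorm1d(64), nn.ReLU(),
            nn.Conv1d(64, 128, 1), nn.BatchNorm1d(128), nn.ReLU(),
            nn.Conv1d(128, feat_dim, 1), nn.BatchNorm1d(feat_dim), nn.ReLU(),
        )

    def forward(self, xyz):                      # xyz: (batch, N, 3)
        x = self.mlp(xyz.transpose(1, 2))        # (batch, feat_dim, N)
        return torch.max(x, dim=2).values        # symmetric pooling -> (batch, feat_dim)

# z_R = PointNetEncoder()(torch.randn(4, 2048, 3))   # (4, 512) 3D embedding
```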
#### Iii-B2 2D feature extraction
The 3D embedding features have been computed in the previous step. We now take a similar approach to compute the 2D feature representations. First, the preprocessed face image is passed through a VGGFace architecture [22] without a head to extract a 512-dimensional vector. In parallel, we also pass the body image through an Xception architecture [23] without a head, using it as a feature extractor to obtain a 512-dimensional body representation. Here, VGG-16 consists of trainable convolutional layers interleaved with max-pooling operations, whereas Xception is a deep convolutional neural network architecture built on depthwise separable convolutions. Finally, we employ transfer learning with the pre-trained VGGFace and Xception weights to extract 2D facial and deep body features from the preprocessed face and full-body images, respectively. This forms the basis for the subsequent step of multi-modal feature fusion and regression.
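A hedged sketch of the headless 2D extractors is shown below. Since the original VGGFace and Xception weights live in other ecosystems, the face stream uses a plain torchvision VGG-16 trunk as a stand-in (its face-trained VGGFace weights would have to be loaded separately) and the body stream uses timm's Xception backbone with a linear projection down to 512 dimensions; treat both constructors as illustrative rather than the paper's code.

```python
# Sketch: headless face/body backbones used as fixed feature extractors.
import timm
import torch
import torch.nn as nn
from torchvision.models import vgg16

class FaceEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        trunk = vgg16()                       # VGG-16 trunk (VGGFace weights assumed)
        self.features = trunk.features
        self.pool = nn.AdaptiveAvgPool2d(1)   # global average pool over 512 channels

    def forward(self, x):                     # x: (B, 3, 224, 224)
        return self.pool(self.features(x)).flatten(1)            # (B, 512)

class BodyEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        # num_classes=0 returns the globally pooled backbone feature.
        self.backbone = timm.create_model('xception', pretrained=True, num_classes=0)
        self.proj = nn.Linear(self.backbone.num_features, 512)   # project to 512-d

    def forward(self, x):                     # x: (B, 3, 299, 299)
        return self.proj(self.backbone(x))                       # (B, 512)
```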
#### Iii-B3 Multi-modal fusion and regression
Now that all the unimodal features are extracted, we fuse the different sub-embedding streams of 512-dimensional feature representations. These representations comprise two different modalities - point cloud (\(z_{R}\)) and image data (\(z_{F}\), \(z_{B}\)) - and hence cannot be fused with a simple concatenation. Instead, we use learnable weights (\(w_{F}\), \(w_{B}\), \(w_{R}\)) to weigh these features and add them up, obtaining a fused feature vector that is concatenated with the gender and predicted height (line 2, Algorithm 1). This feature vector is then passed through two 512-unit multi-layer perceptron layers (\(g^{[0]}\), \(g^{[1]}\)), followed by a 256-unit layer (\(g^{[2]}\)), and finally a single-unit linear layer (\(g^{[3]}\)) to predict the weight of the person (lines 3-5, Algorithm 1). The final layer uses a ridge (L2) penalty so that it does not overfit the training distribution but generalizes to new, plausible test samples. Then we compute the person's Body Mass Index (BMI), followed by the Basal Metabolic Rate (BMR) using the Mifflin-St Jeor equation [24] and the Body Fat Percentage (BFP) using the BMI, for suggesting an appropriate nutrition plan and for malnutrition monitoring. In Algorithm 1 (lines 8-9), \(p\) and \(m\) are intercept constants that vary with gender, with values of 5 and 16.2 for men and -161 and 5.4 for women, respectively.
```
0: Input, \(z_{F}\), \(z_{B}\), \(z_{R}\), gender, \(height_{pred}\)
0:\(weight_{pred}\), BMI, BMR, BFP
1:\(E(r_{F},r_{B},r_{R})\) = \(\Sigma_{j\in(F,B,R)}\)\(w_{j}\times z_{j}(X_{j};\ \theta_{j})\)
2:\(h^{[-1]}\)(F, a, g) = concatenate(E, gender, \(height_{pred}\))
3:for i in [0, 1, 2, 3] do
4:\(h^{[i]}\) = \(g^{[i]}\)(\(W[i]\times h^{[i-1]}\) + \(b^{[i]}\))
5:\(weight_{pred}\) = \(h^{[i]}\)
6:endfor
7: BMI = \(\frac{weight_{pred}[kg]}{height_{pred}^{2}[m^{2}]}\) = \(\frac{weight_{pred}[lb]\times 703}{height_{pred}^{2}[in^{2}]}\)
8: BMR = \(10\times weight_{pred}\) + \(6.25\times height_{pred}\) - \(5\times age+p\)
9: BFP = \(1.2\times BMI+0.23\times age-m\)
```
**Algorithm 1**_Multimodal fusion and Regression_
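Algorithm 1 translates directly into a small PyTorch module, sketched below. The 512-512-256-1 head follows the text; the softmax used to keep the three stream weights in \([0,1]\) and summing to one, and the scalar gender encoding, are implementation assumptions, and the \(-161\) offset for women follows the standard Mifflin-St Jeor equation.

```python
# Sketch of Algorithm 1: weighted fusion of the three 512-d embeddings,
# concatenation with gender and predicted height, an MLP regression head,
# and the closed-form BMI / BMR / BFP computations.
import torch
import torch.nn as nn

class FusionRegressor(nn.Module):
    def __init__(self, dim=512):
        super().__init__()
        self.stream_logits = nn.Parameter(torch.zeros(3))   # learnable w_F, w_B, w_R
        self.head = nn.Sequential(
            nn.Linear(dim + 2, 512), nn.ReLU(),
            nn.Linear(512, 512), nn.ReLU(),
            nn.Linear(512, 256), nn.ReLU(),
            nn.Linear(256, 1),           # ridge behaviour comes from weight decay
        )

    def forward(self, z_F, z_B, z_R, gender, height_cm):
        w = torch.softmax(self.stream_logits, dim=0)          # weights in [0,1], sum 1
        fused = w[0] * z_F + w[1] * z_B + w[2] * z_R          # (B, 512)
        x = torch.cat([fused, gender[:, None], height_cm[:, None]], dim=1)
        return self.head(x).squeeze(1)                        # predicted weight (kg)

def health_metrics(weight_kg, height_cm, age, male=True):
    bmi = weight_kg / (height_cm / 100.0) ** 2
    p, m = (5.0, 16.2) if male else (-161.0, 5.4)             # Mifflin-St Jeor / BFP offsets
    bmr = 10.0 * weight_kg + 6.25 * height_cm - 5.0 * age + p
    bfp = 1.2 * bmi + 0.23 * age - m
    return bmi, bmr, bfp
```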
### _Phase 3: Android Application Prototype_
After training, the model's learned weights are saved using PyTorch's .save() function and converted to an
Fig. 5: Overview of our proposed multi-modal computational architecture. The feature fusion obtains the unimodal representations \(z_{F}\), \(z_{B}\), \(z_{R}\) by passing the inputs \(X_{F}\), \(X_{B}\), \(X_{R}\) into the sub-embedding networks parametrized by \(\theta_{F}\), \(\theta_{B}\), \(\theta_{R}\) respectively. The representations are then weighed by learned weights \(w_{F}\), \(w_{B}\), \(w_{R}\) and concatenated with gender and height information to predict the weight and subsequently calculate BMI, BMR, and BFP.
\(.pb\) file using ONNX as the intermediate format [25]. Then, we use TensorFlow Serving to deploy and serve the trained model as an \(.apk\) file integrated with the created Android interface. The Android interface is intended to be simple and efficient for people from all walks of life and social strata. The workflow of the proposed system's GUI is depicted in Fig. 6.
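A sketch of the export path described above is given below (PyTorch checkpoint, then an ONNX graph as the intermediate format); file names, shapes, and the dummy inputs are placeholders, and the subsequent ONNX-to-TensorFlow conversion, TensorFlow Serving deployment, and .apk packaging are not shown.

```python
# Sketch: save the trained PyTorch weights and export an ONNX graph for the
# serving stack. FusionRegressor is the module from the fusion sketch above.
import torch

model = FusionRegressor()
torch.save(model.state_dict(), "fusion_regressor.pt")

model.eval()
dummy = (torch.randn(1, 512), torch.randn(1, 512), torch.randn(1, 512),
         torch.zeros(1), torch.full((1,), 170.0))
torch.onnx.export(model, dummy, "fusion_regressor.onnx",
                  input_names=["z_F", "z_B", "z_R", "gender", "height_cm"],
                  output_names=["weight_kg"], opset_version=13)
```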
## IV Experimental Study
This section describes the dataset used for training and testing our model, ablation studies, and experiments on the proposed multi-modal system under various scenarios. Following that, we will go over the performance of cloud-based IoT application as well as the computational platform used.
### _Datasets Used_
In our work, we mainly used two datasets: the visual-body-to-BMI dataset [14] and a locally collected dataset. As mentioned earlier, the visual-body-to-BMI dataset consists of 47574 images of 16483 people scraped and downloaded from the progressive subreddit website. These images are then annotated and filtered, resulting in a total of 5900 images, with two images for each of the 2950 subjects. The 2950 subjects comprise 966 females and 1984 males, with the corresponding gender and weight labels. On the other hand, we locally collected a dataset of 30 people in 9 - 10 frontal poses, along with height and device information. Table II highlights the statistical information about these two datasets. These two datasets are then combined to jointly train the model, which is benchmarked only on the visual-body-to-BMI dataset to enable comparison with previous works. Meanwhile, we held out a sample of 30 images from the locally collected dataset, each showing one of the 30 subjects in a randomly sampled pose, for the experiments across devices and lighting conditions in Sections IV-D and IV-G, respectively.
### _Steps followed to capture input images_
The following steps are followed while capturing a full-body image of a person to estimate height and weight:
* An RGB image of a frontal pose of person standing at a distance of 1.5 meters from the camera lens placed 1 meter from the ground is captured under sufficient lighting conditions as depicted in Fig. 10 (a).
* The smartphone lens was parallel to the person, i.e., 90-degree angle w.r.t the person, and perpendicular to the ground, to accurately calculate the per-pixel metric for height estimation.
* The captured image is further masked & pre-processed to remove the redundant background thereby extracting the facial, body, and 3D representations under pre-processing & feature extraction pipelines.
### _Performance of multiple model architecture combinations_
To arrive at the current architecture, we systematically explored combinations of facial feature extractors such as VGGFace and FaceNet [26], body feature extractors such as Xception and ResNet-152, and 3D feature extractors such as PointNet [27], DG-CNN, and GB-Net, as summarized in Table III. The best architecture observed is a combination of Xception, VGG-Face, and PointNet for the body, face, and 3D feature extraction, achieving a weight MAE of 5.3 kg. We also noticed that VGG-Face outperforms FaceNet in general, while Xception outperforms ResNet-152. PointNet, in turn, outranks the other point-cloud classifiers with its ability to extract rich 3D representations.
### _Effect of Lighting Conditions on height and weight prediction and Device Comparison_
The collected dataset contains images in unconstrained lighting conditions and is a perfect representation of real-world lighting conditions. To illustrate this and test the model performance further, we have artificially simulated the image brightness using gamma correction. The model performs best in \(\gamma\) range of 1.0 to 1.25 as shown in Fig. 7 (b). We can also deduce that the MAE decreases when \(\gamma\) is in the range of 0 - 1.25, attains its minimum MAE at \(\gamma\) = 1.0, and
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|} \hline \multicolumn{1}{|c|}{**Participant Information**} & \multicolumn{3}{c|}{**Gender**} & \multicolumn{4}{c|}{**Height (in cm)**} & \multicolumn{4}{c|}{**Weight (in kg)**} \\ \hline Dataset & Total & Male & Female & Range & Mean & Std. Deviation & 95\% Confidence Interval & Range & Mean & Std. Deviation & 95\% Confidence Interval \\ \hline Visual-body-to-BMI & 5900 & 3968 & 1932 & 213.36 - 147.32 & 175.54 & 9.89 & 176.99 - 174.09 & 254.01 - 44.90 & 95.05 & 27.12 & 100.9 - 89.1 \\ \hline Locally Collected Data & 287 & 261 & 26 & 184 - 101 & 164.09 & 21.35 & 167.23 - 160.95 & 100 - 13 & 63.51 & 21.41 & 68.26 - 58.76 \\ \hline \end{tabular}
\end{table} TABLE II: Statistical information of the Datasets
Fig. 6: Workflow of the proposed IoT Application prototype
increases as \(\gamma\) increases. The above variation in extreme cases can be attributed primarily to the poor performance of 3D reconstruction in extreme lighting conditions, where reconstruction quality decreases considerably when image global lighting drastically increases or decreases, despite performing well for a wide range of natural illumination. For performance comparison on different devices we used a hold-out set which contains images collected from a variety of devices, including laptops and multiple smartphone brands. Figure 7 (a) shows the predicted weight versus the actual weight, demonstrating that our model's performance is robust and coherent across all types of devices.
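The brightness simulation referred to above is plain gamma correction; a minimal sketch follows, where the \(\gamma\) grid is illustrative since the exact values behind Fig. 7 are not stated.

```python
# Sketch: simulate lighting changes by gamma-correcting an 8-bit image through
# a lookup table, then re-running the estimation pipeline on each copy.
import cv2
import numpy as np

def adjust_gamma(image_u8, gamma):
    inv = 1.0 / gamma
    lut = (np.linspace(0.0, 1.0, 256) ** inv * 255).astype(np.uint8)
    return cv2.LUT(image_u8, lut)

image = cv2.imread("subject.jpg")
for gamma in (0.25, 0.5, 0.75, 1.0, 1.25, 1.5, 2.0):   # illustrative grid
    simulated = adjust_gamma(image, gamma)
    # ... feed `simulated` through the pre-processing and estimation pipeline ...
```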
### _Importance of Multiple Features_
Our best-performing model works on weighing and averaging the multiple input feature embeddings. The embeddings are weighed such that each embedding is assigned a weight between 0 and 1, and their sum equals 1. It enables us to interpret the relative importance of these different embeddings across multiple architectures in predicting the weight. We have observed that these weights vary significantly when the 3D feature extractor architecture is changed, while the best extractors for both facial and body features are kept constant. From Fig. 8 (b), we can also infer that PointNet allows the model to have a balanced weight distribution with lower error as compared to the others. Overall, though the 3D features have relatively low importance, they perform slightly better in extreme use-cases such as obese and under-nourished conditions than only-image-based techniques.
Furthermore, in order to find the best pair-wise feature combination, we systematically tested combinations of the various types of features, as shown in Fig. 8 (a). The abbreviations BF, DF & FF in Fig. 8 (a) represent the body features (BF), 3D features (DF) & facial features (FF) respectively. This experiment is carried out using the best architectures (PointNet+Xception+VGG-Face) for the respective features. The best pair-wise combination, DF+FF, with low MAE and high correlation, demonstrates the importance of 3D and facial features. Even without the BFs, the combined effect of DFs and FFs can still yield a reasonable result. The limited importance of the BFs can be attributed in large part to the presence of scale-free images in the dataset, which do not produce meaningful anthropometric representations of the human structure. To summarise the previous experiments' findings, we can conclude that facial features are the most important predictor across all possible architecture combinations, followed mostly by 3D features and then body features.
In addition, we illustrated the correlation plots of the individual features including BFs, DFs & FFs along with their combination on our best found architecture in Fig. 9 (a), Fig. 9 (b), Fig. 9 (c) and Fig. 9 (d).
### _Android Application and Deployment_
To interface with the proposed model, we designed a versatile and user-friendly android application. The first page
Fig. 8: (a) Performance comparison of pairwise feature combinations in weight estimation, (b) Feature importance across different 3D network architectures.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|} \hline \multicolumn{2}{|c|}{**3D Features**} & \multicolumn{3}{c|}{**PointNet**} & \multicolumn{3}{c|}{**DG-CNN**} & \multicolumn{3}{c|}{**GB-Net**} \\ \hline
**Face FE** & **Body FE** & MAE & RMSE & \(R^{2}\) & MAE & RMSE & \(R^{2}\) & MAE & RMSE & \(R^{2}\) \\ \hline
\multirow{2}{*}{**VGGFace**} & **Xception** & 5.309 & 7.438 & 0.720 & 7.763 & 9.352 & 0.572 & 6.421 & 8.396 & 0.639 \\
 & **ResNet152** & 5.612 & 7.635 & 0.697 & 7.894 & 9.650 & 0.560 & 6.989 & 8.903 & 0.596 \\ \hline
\multirow{2}{*}{**FaceNet**} & **Xception** & 5.978 & 7.998 & 0.661 & 8.363 & 10.016 & 0.559 & 6.640 & 8.511 & 0.615 \\
 & **ResNet152** & 6.112 & 8.131 & 0.651 & 8.606 & 10.400 & 0.548 & 7.200 & 9.155 & 0.587 \\ \hline \end{tabular}
\end{table} TABLE III: Performance comparison of different architecture combinations for weight prediction
Fig. 7: (a) Performance under simulated lighting conditions. Our system remains robust to wide range of illumination but it is preferable to have sufficient lighting to decrease the error. (b) Performance of our system across various devices
Fig. 9: Correlation between predicted and measured body weight of 42 randomly selected test-set samples using only: (a) Body Features (BFs), (b) 3D Features (DFs), (c) Facial Features (FFs) and, (d) Combination of all features (FFs+DFs+BFs).
of the Graphical User Interface (GUI) consists of three sets of inputs: Age, Gender, and Image of the subject as shown in Fig. 10 (b). Next, the inputs are provided and the model present in the cloud computes the different output metrics. These computed metrics include the height, weight, BMI, Ideal weight, active BMR and the BFP of the person. To achieve the ideal weight, the user is then asked to select the type of diet and the number of weeks they are willing to dedicate to the program to attain the desired weight as shown in Fig 10 (c). Once this computation is performed, the results are displayed and the customized nutrition plan is made ready to download.
### _Performance Observation on real-time data_
We further extensively tested our system for malnutrition classification on held-out locally collected real-time data from 30 people in various frontal poses. Based on their true BMI values, 20 of these 30 participants are healthy, while the remaining 10 are considered malnourished. The model's corresponding confusion matrix on this withheld dataset is as shown in Table IV. As depicted, the model achieves an accuracy of 86.67%, as well as precision, recall, and F1 score of 80 %, 80 %, and 80 %, respectively.
## V Key findings and Comparative Analysis
In the proposed solution, several architectures with a combination of different fusion techniques for weight estimation and a pixel per metric approach for height estimation have been extensively tested. These findings are then used to calculate and infer pertinent health indicators from a single image, ultimately determining if the person is malnourished. The following are the key findings and comparative analysis of the proposed solution:
### _Key Findings_
**Fine-scale 3D Representation**: Our research presents a pioneering application of PiFuHD for reconstruction, employing highly precise and detailed local 3D representation. This approach allows for a fine-grained level of detail in the reconstructed output. Furthermore, we utilize state-of-the-art 3D classification networks in our work by removing the last layers to extract a 512-dimensional vector as the 3D feature embedding. This technique enables us to capture and represent essential information from the input data.
**Multi-Modal Learning Paradigm**: Many existing solutions in the field often rely on a single modality or feature representation, such as facial or manually crafted anthropometric information, or statistical measures, as highlighted in previous studies [8][14][28][29][30]. However, in our research, we adopted a holistic feature representation approach and conducted a systematic exploration to ascertain the significance of various features. This was achieved through extensive experimentation and in-depth analysis. Our solution stands out by achieving state-of-the-art results in weight estimation. Notably, we achieved the lowest mean absolute error (MAE) of 5.3 kg, surpassing previous works. This achievement was made possible by employing learnable weighing parameters in fusion, which enhances the accuracy of our weight estimation model. Through our research, we provide a comprehensive and advanced approach to weight estimation, considering multiple features and their interplay.
**Edge Device Deployment:** A notable observation in the existing literature is the absence of deployed solutions or a reliance on sensor infrastructure for collecting user health data for monitoring, as highlighted in previous studies [17][31][32]. In contrast, our research introduces a novel solution through the development of a smart application prototype. This prototype enables the estimation of health parameters such as height and weight and predicts the risk of malnutrition using a single full-body image. Importantly, this solution proves particularly valuable in remote locations with limited or no access to health facilities. One significant advantage of our approach is the use of an edge device prototype that operates independently, eliminating the need for additional equipment. This self-sufficiency empowers the prototype to estimate nutritional status accurately, providing crucial health insights even in resource-constrained environments.
### _Comparative Analysis_
The proposed methodology showcased remarkable performance, achieving an impressive mean absolute error (MAE) score of 5.3 kg in weight estimation and 4.7 cm in height estimation. A comprehensive evaluation of error rates in height and weight prediction revealed that our approach outperformed previous works [8][14][16][18], as highlighted in Table V. Importantly, our designed multi-modal system operates autonomously, eliminating the need for human intervention during crucial stages such as detecting body and facial landmarks, masking, cropping, and alignment. This autonomy enhances the efficiency and reliability of the system, setting it apart from non-autonomous and non-invasive techniques.
Fig. 10: (a) Image acquiring technique, (b) Uploading the picture in addition to the associated metadata, (c) Illustrating the calculated outcomes and choosing a diet approach.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline
\multirow{2}{*}{**Predicted Condition**} & \multicolumn{2}{c|}{**Actual Condition**} & \multirow{2}{*}{**Accuracy**} & \multirow{2}{*}{**Precision**} & \multirow{2}{*}{**Recall**} \\ \cline{2-3}
 & Healthy & Malnourished & & & \\ \hline
Healthy & 18 & 2 & \multirow{2}{*}{86.67\%} & \multirow{2}{*}{80\%} & \multirow{2}{*}{80\%} \\
Malnourished & 2 & 8 & & & \\ \hline \end{tabular}
\end{table} TABLE IV: Confusion matrix of Malnutrition classification on Testset
## VI Conclusion and Future Works
This research presents a novel approach for predicting height and weight and inferring other health indicators, such as BMI, BMR, and BFP, from a single-shot full-body image. The methodology employs a holistic feature representation within a multi-modal learning paradigm. The proposed solution undergoes meticulous validation and testing using real-world images, including the simulation of various lighting conditions. The study also systematically examines the significance of 2D and 3D features. To further enhance the performance of weight and height prediction, future investigations can explore more rigorous methods for training and converging the multi-modal architecture. Additionally, efforts can be made to improve the extraction of the facial feature (FF), 3D feature (DF), and body feature (BF) embeddings. Exploring sub-embedding representation fusion methods and designing approaches to predict height without scale information or constraints could also contribute to improved prediction accuracy. Furthermore, future app development endeavors can focus on fostering communities and addressing security concerns related to the Machine Learning model and databases. These aspects will contribute to a more comprehensive and impactful implementation of the solution.
## VII Acknowledgements
The authors would like to thank Dr. Min Jiang for providing access to the Visual-body-to-BMI dataset for our research. The authors are also grateful to the 30 volunteers for the contribution of the required images in creating a local dataset.
|
2309.11586 | Rapid Changes in Synchronizability in Conductance-based Neuronal
Networks with Conductance-based Coupling | Real neurons connect to each other non-randomly. How the connectivity of
networks of conductance-based neuron models like the classical Hodgkin-Huxley
model, or the Morris-Lecar model, impacts synchronizability remains unknown.
One powerful tool to resolve the synchronizability of these networks is the
Master Stability Function (MSF). Here, we apply and extend the MSF approach to
networks of Morris-Lecar neurons with conductance-based coupling to determine
under which parameters and graphs synchronous solutions are stable. We consider
connectivity graphs with a constant row-sum, where the MSF approach can be
readily extended to conductance-based synapses rather than the more
well-studied diffusive connectivity case, which primarily applies to gap
junction connectivity. In this formulation, the synchronous solution is a
single, self-coupled or 'autaptic' neuron. We find that the primary determining
parameter for the stability of the synchronous solution is, unsurprisingly, the
reversal potential, as it largely dictates the excitatory/inhibitory potential
of a synaptic connection. However, the change between "excitatory" and
"inhibitory'' synapses is rapid, with only a few millivolts separating
stability and instability of the synchronous state for most graphs. We also
find that for specific coupling strengths (as measured by the global synaptic
conductance), islands of synchronizability in the MSF can emerge for inhibitory
connectivity. We verified the stability of these islands by direct simulation
of pairs of neurons coupled with eigenvalues in the matching spectrum. These
results were robust for different transitions to spiking (Hodgkin Class I vs
Class II), which displayed very similar synchronizability characteristics. | Wilten Nicola | 2023-09-20T18:49:14Z | http://arxiv.org/abs/2309.11586v1 | Rapid Changes in Synchronizability in Conductance-based Neuronal Networks with Conductance-based Coupling
###### Abstract
Real neurons connect to each other non-randomly. These connectivity graphs can potentially impact the ability of networks to synchronize, along with the dynamics of neurons and the dynamics of their connections. How the connectivity of networks of conductance-based neuron models like the classical Hodgkin-Huxley model, or the Morris-Lecar model, impacts synchronizability remains unknown. One powerful tool to resolve the synchronizability of these networks is the Master Stability Function (MSF). Here, we apply and extend the MSF approach to networks of Morris-Lecar neurons with conductance-based coupling to determine under which parameters and graphs synchronous solutions are stable. We consider connectivity graphs with a constant row-sum, where the MSF approach can be readily extended to conductance-based synapses rather than the more well studied diffusive connectivity case, which primarily applies to gap junction connectivity. In this formulation, the synchronous solution is a single, self-coupled or 'autaptic' neuron. We find that the primary determining parameter for the stability of the synchronous solution is, unsurprisingly, the reversal potential, as it largely dictates the excitatory/inhibitory potential of a synaptic connection. However, the change between an "excitatory" and "inhibitory" synapses is rapid, with only a few millivolts separating stability and instability of the synchronous state for most graphs. We also find that for specific coupling strengths (as measured by the global synaptic conductance), islands of synchronizability in the MSF can emerge for inhibitory connectivity. We verified the stability of these islands by direct simulation of pairs of neurons coupled with eigenvalues in the matching spectrum. These results were robust for different transitions to spiking (Hodgkin Class I vs Class II), which displayed very similar synchronizability characteristics.
## 1 Introduction
Brain cells, like other complex interacting systems, can readily synchronize and fire their action potentials, or spikes, simultaneously under the right conditions. Sometimes, this synchronizability is a normal part of brain function. For example, pyramidal neurons in the hippocampus collectively fire during the 100-150 millisecond hippocampal sharp-wave ripples, a so-called cognitive biomarker for memory consolidation and memory replay [1, 2, 3]. This synchronization is strong enough to be observed even in the hippocampal local field potential, a macroscopic observable of collective neuronal activity. However, neurons can also pathologically synchronize [1]. In fact, the same hippocampal neurons that synchronize during sharp-wave ripples often synchronize excessively, leading to epileptic seizures [1]. The myriad of conditions that can lead to synchronization or de-synchronization remains an active area of research.
One hypothesis is that the specific characteristics of the connectivity between neurons can promote or obstruct the ability of neurons to otherwise synchronize. Thus, with a network that is only constrained by a fixed number of connections, or with a fixed global connection strength, different connectivity profiles can lead to synchronization or any number of asynchronous states.
Fortunately, there is a tool to analyze how different networks of neuronal models can synchronize: the master stability function (MSF) [4, 5, 6, 7, 8]. The MSF allows one to determine the stability of a synchronous solution across any connectivity graph through a three-step process. First, the master stability function is computed in the complex plane. Next, the eigenvalues are computed for any particular connectivity graph. Finally, the sign of the MSF evaluated at the eigenvalues of the graph on the complex plane dictates the local asymptotic stability of the synchronous solution. If all eigenvalues fall in regions where the MSF is negative, the synchronous solution is locally asymptotically stable while a single eigenvalue falling into a region where the MSF is positive destabilizes the synchronous solution [4, 5].
However, the application of the MSF in networks of spiking neurons with conductance-based coupling or chemical synapses has remained understudied for a few reasons. The first reason is that many neuron models or synaptic models are non-smooth. For example, all integrate-and-fire neurons utilize a discontinuity to reset the membrane potential of a neuron after a "spike", and possibly change other state variables [9, 10, 11, 12, 13]. While there have been considerable advances in extending the MSF approach to non-smooth spiking networks and non-smooth differential equations in general [6, 7, 8, 14], real neurons do not have the types of membrane discontinuities exhibited by integrate-and-fire neurons. The second reason is that the connections that real neurons form are not entirely "excitatory" or "inhibitory" [15]. The net effect of a chemical synapse depends on the driving force, which is influenced by ionic reversal potentials and the voltage and spike-shape of a neuron itself. Thus, a connection can switch from excitatory to inhibitory during a single spike, depending on the voltage of the neuron at any moment. Finally, the master stability function is primarily limited to cases where the row-sum of the connectivity matrix is 0, which forces both positive and negative connection weights [4, 5]. This constraint, which is termed "diffusive connectivity", is mathematically convenient as it implies that the synchronous solution in the network is also simultaneously a solution to a single uncoupled neuron. Unfortunately, this constraint is incompatible with chemical synapses, where the unitary synaptic conductances, being physical quantities, are non-negative. It is, however, compatible with gap-junction-based connectivity or electrical synapses. Indeed, this latter case has been extensively studied [16, 17, 18, 19, 20, 21, 22].
Here, we apply the MSF approach to networks of smooth conductance-based neurons with smooth conductance-based or chemical synaptic coupling. By utilizing a constant row-sum rather than a 0 row-sum constraint, we successfully applied the MSF approach to analyze the synchronizability of networks of Morris-Lecar neurons with chemical synapses [23, 24]. This constant row-sum corresponds to a global conductance strength, but not necessarily a measure of inhibition or excitation strength. We found that independent of the uncoupled neurons' bifurcation to spiking (Hodgkin Class I or Hodgkin Class II), the reversal potential of the synapse would most strongly dictate the stability of synchronous solutions. We also found that the MSF would rapidly change sign as a function of the reversal potential over the complex plane. Only a few millivolts separated large-scale stability or instability of the synchronous solution. The global portrait of synchronizability looked similar for both Hodgkin Class I and Class II parameter regimes. However, the MSF deviated from this result in two ways. First, the synchronous solution could lose stability depending on the value of the global unitary synaptic conductance and the reversal potential for excitatory coupling parameter ranges. This loss of stability was readily observed with sufficiently large ring structures. Second, we found that for both classes of firing, islands of stability would emerge for the synchronous solution for inhibitory synapses. We tested this finding directly with simulations of networks with constrained connectivity matrices that forced the eigenvalues of the matrices into the synchronizability islands. The synchronizability of any particular graph exhibits regimes that are highly parameter dependent in conductance-based neurons with chemical synapses. Stable configurations can emerge with increasing inhibition and unstable configurations can emerge with increasing excitation.
## 2 Results
### The Master Stability Function for Conductance Models with Conductance-Based Coupling
To investigate the synchronization of spiking networks with conductance-based synapses, we used the general form for a conductance-based neuron with conductance-based coupling. The network equations are given by:
\[C\frac{dV_{i}}{dt} = F(V_{i},\mathbf{x}_{i})-\sum_{j=1}^{N}g_{ij}r_{j}(t)(V_{i}-E) \tag{1}\] \[\frac{d\mathbf{x}_{i}}{dt} = G(V_{i},\mathbf{x}_{i}) \tag{2}\] \[\frac{dr_{i}}{dt} = a_{r}T(V_{i})(1-r_{i})-a_{d}r_{i}, \tag{3}\]
where \(V\) corresponds to the voltage of the neuron, and \(\mathbf{x}\) consists of a vector of gating variables. Although our derivation was for a general conductance-based neuron, we restricted our numerical analysis primarily to the Morris-Lecar neuron model [23, 24], which we describe below. These neurons are coupled with a smooth synaptic gating variable, equation (3), which was first introduced in [15]. The synaptic connection from neuron \(j\) to neuron \(i\) is given by \(g_{ij}r_{j}(t)(V_{i}-E)\), where the inhibitory/excitatory nature of the synapse is determined by the reversal potential \(E\). The function \(T(V_{j})\) models the amount of neurotransmitter released in the synaptic cleft by neuron \(j\), and is given by:
\[T(V)=\frac{T_{max}}{1+\exp{(-(V-V_{T})/K_{p})}}\]
Finally, the conductance matrix \(\mathbf{g}\) is an \(N\times N\) matrix with the following constraints:
\[g_{ij}\geq 0,\quad\forall i,j=1,2,\ldots N,\qquad\sum_{j=1}^{N}g_{ij}=\bar{g},\quad\forall i=1,2,\ldots N \tag{4}\]
Thus, all the unitary conductances must be nonnegative. The constraints (4) are critical for the application of a modified Master Stability Function (MSF) analysis of the synchronized solutions.
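As an illustration of constraint (4), the following NumPy sketch (not taken from the paper's code; the sparsity level and random weights are arbitrary choices) builds a nonnegative coupling matrix whose rows all sum to the same global conductance \(\bar{g}\):

```python
import numpy as np

def constant_row_sum_matrix(n, g_bar, sparsity=0.0, seed=None):
    """Random nonnegative n x n conductance matrix whose rows each sum to g_bar."""
    rng = np.random.default_rng(seed)
    g = rng.random((n, n))                      # nonnegative unitary conductances
    g[rng.random((n, n)) < sparsity] = 0.0      # optional sparsification
    for i in range(n):                          # guard against an all-zero row
        if g[i].sum() == 0.0:
            g[i, (i + 1) % n] = 1.0
    g *= g_bar / g.sum(axis=1, keepdims=True)   # rescale each row to sum to g_bar
    return g

g = constant_row_sum_matrix(6, g_bar=2.1, sparsity=0.5, seed=1)
assert np.all(g >= 0) and np.allclose(g.sum(axis=1), 2.1)
print(np.round(g, 3))
```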
With the constraint (4) in hand, the synchronous solution corresponds to the dynamics of a self-coupled or autaptic neuron:
\[C\frac{dV_{S}}{dt} = F(V_{S},\mathbf{x}_{S})-\bar{g}r_{S}(t)(V_{S}-E)\] \[\frac{d\mathbf{x}_{S}}{dt} = G(V_{S},\mathbf{x}_{S})\] \[\frac{dr_{S}}{dt} = a_{r}T(V_{S})(1-r_{S})-a_{d}r_{S}\]
where \((V_{S}(t),\mathbf{x}_{S}(t),r_{S}(t))\) is the solution to the synchronous (autaptic) system above.
Next, we perturbed around the synchronous solution with:
\[V_{i}=\epsilon_{i}^{V}+V_{S}(t),\quad\mathbf{x}_{i}=\mathbf{\epsilon}_{i}^{\mathbf{x}}+\mathbf{x}_{S}(t),\quad r_{i}=\epsilon_{i}^{r}+r_{S}(t)\]
which yielded the following linearization
\[C\frac{d\mathbf{\epsilon}^{V}}{dt} = \left(\frac{\partial F}{\partial V}-\bar{g}r_{S}\right)\mathbf{\epsilon}^{V}+\sum_{j=1}^{m}\frac{\partial F}{\partial x_{j}}\mathbf{\epsilon}^{x_{j}}-(V_{S}-E)\mathbf{g}\mathbf{\epsilon}^{r} \tag{5}\] \[\frac{d\mathbf{\epsilon}^{x_{i}}}{dt} = \frac{\partial G_{i}}{\partial V}\mathbf{\epsilon}^{V}+\sum_{j=1}^{m}\frac{\partial G_{i}}{\partial x_{j}}\mathbf{\epsilon}^{x_{j}} \tag{6}\] \[\frac{d\mathbf{\epsilon}^{r}}{dt} = a_{r}T^{\prime}(V_{S})(1-r_{S})\mathbf{\epsilon}^{V}-(a_{r}T(V_{S})+a_{d})\mathbf{\epsilon}^{r} \tag{7}\]
Note that all of the partial derivative terms (e.g. \(\frac{\partial F}{\partial V},\frac{\partial G_{i}}{\partial x_{j}}\)) are evaluated along the synchronized solution (\(V_{S}(t),\mathbf{x}_{S}(t),r_{S}(t)\)).
Next, we will make the standard assumption in MSF-applications that the matrix \(\mathbf{g}\) is diagonalizable
\[\mathbf{g}=\mathbf{PDP}^{-1}\]
Then consider the substitution:
\[\mathbf{\eta}^{V}=\mathbf{P}^{-1}\mathbf{\epsilon}^{V},\quad\mathbf{\eta}^{x_{i}}=\mathbf{P}^{-1} \mathbf{\epsilon}^{x_{i}},\quad\mathbf{\eta}^{r}=\mathbf{P}^{-1}\mathbf{\epsilon}^{r}. \tag{8}\]
Substituting (8) into equations (5)-(7) yields:
\[C\frac{d\mathbf{\eta}^{V}}{dt} = \left(\frac{\partial F}{\partial V}-\bar{g}r_{S}\right)\mathbf{\eta}^{V}+\sum_{j=1}^{m}\frac{\partial F}{\partial x_{j}}\mathbf{\eta}^{x_{j}}-(V_{S}-E)\mathbf{D}\mathbf{\eta}^{r}\] \[\frac{d\mathbf{\eta}^{x_{i}}}{dt} = \frac{\partial G_{i}}{\partial V}\mathbf{\eta}^{V}+\sum_{j=1}^{m}\frac{\partial G_{i}}{\partial x_{j}}\mathbf{\eta}^{x_{j}}\] \[\frac{d\mathbf{\eta}^{r}}{dt} = a_{r}T^{\prime}(V_{S})(1-r_{S})\mathbf{\eta}^{V}-(a_{r}T(V_{S})+a_{d})\mathbf{\eta}^{r}.\]
The key insight drawn from the MSF approach is that the system above is now effectively uncoupled, as it has been block diagonalized (see Appendix A for further details). This implies that, to determine the stability of the synchronized solution, we can compute the Lyapunov exponents of the system
\[C\frac{d\eta^{V}}{dt} = \left(\frac{\partial F}{\partial V}-\bar{g}r_{S}\right)\eta^{V}+ \sum_{j=1}^{m}\frac{\partial F}{\partial x_{j}}\eta^{x_{j}}-\lambda_{i}(V_{S} -E)\eta^{r} \tag{9}\] \[\frac{d\eta^{x_{i}}}{dt} = \frac{\partial G}{\partial V}\eta^{V}+\sum_{j=1}^{m}\frac{ \partial G_{i}}{\partial x_{j}}\eta^{x_{j}}\] (10) \[\frac{d\eta^{r}}{dt} = a_{r}T^{\prime}(V_{S})(1-r_{S})\eta^{V}-(a_{r}T(V_{S})+a_{d}) \eta^{r}. \tag{11}\]
over a mesh in \(\lambda\). The system (9)-(11) is numerically integrated along with the synchronous (autaptic) solution, in conjunction with a numerical estimation of the Lyapunov exponents (see the Supplementary Material for details). Then, the stability of the synchronized solution can be determined readily by first computing the eigenvalues of the connectivity matrix, and then "looking up" the values on the mesh produced by the MSF.
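The final "look up" step can be sketched in a few lines of Python. The MSF sign grid below is a made-up placeholder (it is not the Morris-Lecar MSF); only the lookup logic is illustrated, and in practice the eigenvalue equal to the row-sum, which corresponds to perturbations along the synchronous manifold, is treated separately:

```python
import numpy as np

# Placeholder MSF sign grid over a mesh in the complex plane (NOT the Morris-Lecar
# MSF): here the MSF is taken to be negative inside a disk of radius 0.8.
a = np.linspace(-1.0, 1.0, 101)                 # real part of lambda
b = np.linspace(-1.0, 1.0, 101)                 # imaginary part of lambda
A, B = np.meshgrid(a, b, indexing="ij")
msf_sign = np.where(A**2 + B**2 < 0.8**2, -1, +1)

def synchronizable(g):
    """True if every eigenvalue of g lands on a mesh point where the MSF is negative."""
    for lam in np.linalg.eigvals(g):
        i = np.argmin(np.abs(a - lam.real))     # nearest mesh point (no interpolation)
        j = np.argmin(np.abs(b - lam.imag))
        if msf_sign[i, j] > 0:
            return False
    return True

ring5 = np.roll(np.eye(5), 1, axis=1)           # directed 5-ring with row-sum 1
print(synchronizable(ring5))                    # its eigenvalues lie on the unit circle
```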
### The Morris-Lecar Neuron Model
To test the predictions of the MSF function, we primarily considered the Morris-Lecar neuron model [23, 24] :
\[C\frac{dV}{dt} = I-g_{L}(V-E_{L})-g_{K}n(V-E_{K})-g_{Ca}m_{\infty}(V)(V-E_{Ca})\] \[\frac{dn}{dt} = \phi\left(\frac{n_{\infty}(V)-n}{\tau_{n}(V)}\right)\] \[m_{\infty}(V) = \frac{1}{2}\left(1+\tanh\left(\frac{V-V_{1}}{V_{2}}\right)\right)\] \[n_{\infty}(V) = \frac{1}{2}\left(1+\tanh\left(\frac{V-V_{3}}{V_{4}}\right)\right)\] \[\tau_{n}(V) = \frac{1}{\cosh\left(\frac{V-V_{3}}{2V_{4}}\right)}\]
The parameters for this model are given in Table 1 and correspond to two classical regimes, the Hodgkin Class I regime, which corresponds to a Saddle-Node on an Invariant Circle (SNIC) bifurcation
from quiescence to spiking, and the Hodgkin Class II regime which corresponds to a subcritical Hopf bifurcation followed by a saddle-node of limit cycles from quiescence to spiking [24]. The network equations are given by
\[C\frac{dV_{i}}{dt} = I-g_{L}(V_{i}-E_{L})-g_{K}n_{i}(V_{i}-E_{K})-g_{Ca}m_{\infty}(V_{i} )(V_{i}-E_{Ca})-\sum_{j=1}^{N}g_{ij}r_{j}(t)(V_{i}-E)\] \[\frac{dn_{i}}{dt} = \phi\left(\frac{n_{\infty}(V_{i})-n_{i}}{\tau_{n}(V_{i})}\right)\] \[\frac{dr_{i}}{dt} = a_{r}T(V_{i})(1-r_{i})-r_{i}a_{d}\] \[\sum_{j=1}^{N}g_{ij} = \bar{g},\forall i,\quad g_{ij}\geq 0\]
while the synchronized solution corresponds to the autaptic Morris-Lecar neuron:
\[C\frac{dV_{S}}{dt} = I-g_{L}(V_{S}-E_{L})-g_{K}n_{S}(V_{S}-E_{K})-g_{Ca}m_{\infty}(V_ {S})(V_{S}-E_{Ca})-\bar{g}r_{S}(t)(V_{S}-E) \tag{12}\] \[\frac{dn_{S}}{dt} = \phi\left(\frac{n_{\infty}(V_{S})-n_{S}}{\tau_{n}(V_{S})}\right)\] (13) \[\frac{dr_{S}}{dt} = a_{r}T(V_{S})(1-r_{S})-r_{S}a_{d} \tag{14}\]
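A minimal Python sketch of the autaptic system (12)-(14) is given below. The parameter values follow Table 1 (Hodgkin Class I column, with the leak reversal taken as \(-60\) mV); the solver, tolerances and simulation length are illustrative assumptions rather than the settings used in the paper:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Table 1, Hodgkin Class I column (leak reversal taken as -60 mV).
P = dict(C=20.0, gL=2.0, EL=-60.0, gK=8.0, EK=-84.0, gCa=4.0, ECa=120.0,
         V1=-1.2, V2=18.0, V3=12.0, V4=17.4, phi=0.067, I=50.0,
         ar=1.1, ad=0.19, Tmax=1.0, Kp=5.0, VT=2.0)

def m_inf(V): return 0.5 * (1.0 + np.tanh((V - P['V1']) / P['V2']))
def n_inf(V): return 0.5 * (1.0 + np.tanh((V - P['V3']) / P['V4']))
def tau_n(V): return 1.0 / np.cosh((V - P['V3']) / (2.0 * P['V4']))
def T(V):     return P['Tmax'] / (1.0 + np.exp(-(V - P['VT']) / P['Kp']))

def autaptic_rhs(t, y, g_bar, E):
    """Right-hand side of the self-coupled (autaptic) Morris-Lecar system (12)-(14)."""
    V, n, r = y
    I_mem = (P['I'] - P['gL'] * (V - P['EL']) - P['gK'] * n * (V - P['EK'])
             - P['gCa'] * m_inf(V) * (V - P['ECa']) - g_bar * r * (V - E))
    return [I_mem / P['C'],
            P['phi'] * (n_inf(V) - n) / tau_n(V),
            P['ar'] * T(V) * (1.0 - r) - P['ad'] * r]

# Excitatory self-coupling, g_bar = 2.1 nS, E = 0 mV; 500 ms of simulated time.
sol = solve_ivp(autaptic_rhs, (0.0, 500.0), [-60.0, 0.0, 0.0],
                args=(2.1, 0.0), rtol=1e-10, atol=1e-10)
print("final state (V, n, r):", sol.y[:, -1])
```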
Prior to computing the MSF, we investigated which regions of \(\bar{g}\) lead to stable spiking solutions as a function of the driving current \(I\) and the reversal potential of the synapses \(E\) for the synchronous solution in equations (12)-(14). First, we found that excitatory self-coupling (\(E=0\) mV) did not change the overall bifurcation types for Hodgkin Class I or Hodgkin Class II parameter regimes (Figure 1). However, we did find that inhibitory self-coupling changes the overall bifurcation structure: it would either shift the spiking regimes to non-physical negative \(\bar{g}\), or reduce these regimes to narrow parameter ranges in \(\bar{g}\), depending on how close the driving current was to the bifurcation point of the non-autaptic neuron. Thus, we primarily focused on parameter regimes with higher driving currents for both excitatory (\(E=0\) mV) and inhibitory (\(E=-70\) mV) reversal potentials. Furthermore, we found that for these regimes, we could systematically vary \(\bar{g}\) over the interval \((0,3]\) across both Class I and Class II parameter regimes, albeit with different applied currents (\(I=50\) pA for Class I, \(I=115\) pA for Class II). For the currents we considered, the local bifurcation structure appears largely identical for both classes of firing (Figure 1), with both parameter regimes stopping spiking via a supercritical Hopf bifurcation (\(E=-70\) mV) for \(\bar{g}>3\) or a subcritical Hopf bifurcation (\(E=0\) mV).
### Computing and Validating the Master Stability Function for the Morris-Lecar Model
The block linearization for the MSF function of the Morris-Lecar network is given by
\[C\frac{d\eta^{V}}{dt} = \left(-g_{L}-g_{Ca}m_{\infty}(V_{S})-g_{Ca}(V_{S}-E_{Ca})\frac{dm_{\infty}}{dV}-g_{K}n_{S}-\bar{g}r_{S}\right)\eta^{V} \tag{15}\] \[- g_{K}(V_{S}-E_{K})\eta^{n}-\bar{g}\lambda(V_{S}-E)\eta^{r}\] \[\frac{d\eta^{n}}{dt} = \phi\left(\frac{1}{\tau_{n}(V_{S})}\frac{dn_{\infty}(V_{S})}{dV}-\frac{d\tau_{n}}{dV}\frac{n_{\infty}(V_{S})-n_{S}}{\tau_{n}(V_{S})^{2}}\right)\eta^{V}-\frac{\phi}{\tau_{n}(V_{S})}\eta^{n} \tag{16}\] \[\frac{d\eta^{r}}{dt} = a_{r}\frac{dT(V_{S})}{dV}(1-r_{S})\eta^{V}-(a_{r}T(V_{S})+a_{d})\eta^{r} \tag{17}\]
where \((V_{S},n_{S},r_{S})\) corresponds to the synchronized solution of the self-coupled (autaptic) neuron (12)-(14).
To compute the master stability function, we simulated the system of equations (12)-(14) while computing the Lyapunov exponents in parallel for each of the blocks in (15)-(17). The eigenvalues \(\lambda\) were selected over a \(101\times 101\) mesh over the unit square \((a,b)\in[0,1]^{2}\) with \(\lambda=a+bi\).
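The following self-contained Python sketch illustrates how one value of the MSF could be estimated: the autaptic system and the variational block (15)-(17) are co-integrated with a simple fixed-step RK4 scheme, and the largest Lyapunov exponent is accumulated with Benettin-style renormalisation. Scanning \(\lambda=a+bi\) over the mesh then yields the MSF surface. The step size, transient and run length are coarse illustrative choices, and this is not the authors' MATLAB implementation:

```python
import numpy as np

# Table 1, Hodgkin Class I column (leak reversal taken as -60 mV).
P = dict(C=20.0, gL=2.0, EL=-60.0, gK=8.0, EK=-84.0, gCa=4.0, ECa=120.0,
         V1=-1.2, V2=18.0, V3=12.0, V4=17.4, phi=0.067, I=50.0,
         ar=1.1, ad=0.19, Tmax=1.0, Kp=5.0, VT=2.0)

def m_inf(V):  return 0.5 * (1 + np.tanh((V - P['V1']) / P['V2']))
def dm_inf(V): return (1 - np.tanh((V - P['V1']) / P['V2'])**2) / (2 * P['V2'])
def n_inf(V):  return 0.5 * (1 + np.tanh((V - P['V3']) / P['V4']))
def dn_inf(V): return (1 - np.tanh((V - P['V3']) / P['V4'])**2) / (2 * P['V4'])
def tau_n(V):  return 1.0 / np.cosh((V - P['V3']) / (2 * P['V4']))
def dtau_n(V): return -np.tanh((V - P['V3']) / (2 * P['V4'])) * tau_n(V) / (2 * P['V4'])
def T(V):      return P['Tmax'] / (1 + np.exp(-(V - P['VT']) / P['Kp']))
def dT(V):     return T(V) * (1 - T(V) / P['Tmax']) / P['Kp']

def rhs(y, lam, g_bar, E):
    """Autaptic state [V, n, r] plus the variational block [eV, en, er] of (15)-(17)."""
    V, n, r, eV, en, er = y
    dV = (P['I'] - P['gL']*(V - P['EL']) - P['gK']*n*(V - P['EK'])
          - P['gCa']*m_inf(V)*(V - P['ECa']) - g_bar*r*(V - E)) / P['C']
    dn = P['phi'] * (n_inf(V) - n) / tau_n(V)
    dr = P['ar'] * T(V) * (1 - r) - P['ad'] * r
    a11 = (-P['gL'] - P['gCa']*m_inf(V) - P['gCa']*(V - P['ECa'])*dm_inf(V)
           - P['gK']*n - g_bar*r) / P['C']
    a12 = -P['gK'] * (V - P['EK']) / P['C']
    a13 = -g_bar * lam * (V - E) / P['C']
    a21 = P['phi'] * (dn_inf(V)/tau_n(V) - dtau_n(V)*(n_inf(V) - n)/tau_n(V)**2)
    a22 = -P['phi'] / tau_n(V)
    a31 = P['ar'] * dT(V) * (1 - r)
    a33 = -(P['ar'] * T(V) + P['ad'])
    return np.array([dV, dn, dr,
                     a11*eV + a12*en + a13*er,
                     a21*eV + a22*en,
                     a31*eV + a33*er], dtype=complex)

def rk4_step(y, dt, *args):
    k1 = rhs(y, *args); k2 = rhs(y + 0.5*dt*k1, *args)
    k3 = rhs(y + 0.5*dt*k2, *args); k4 = rhs(y + dt*k3, *args)
    return y + dt * (k1 + 2*k2 + 2*k3 + k4) / 6

def msf(lam, g_bar=2.1, E=30.0, dt=0.02, t_transient=300.0, t_total=2000.0):
    """Largest Lyapunov exponent of the variational block at eigenvalue lam."""
    y = np.array([-60.0, 0.5, 0.5, 1.0, 0.0, 0.0], dtype=complex)
    log_sum, t = 0.0, 0.0
    while t < t_total:
        y = rk4_step(y, dt, lam, g_bar, E)
        t += dt
        norm = np.linalg.norm(y[3:])
        if t > t_transient:
            log_sum += np.log(norm)
        y[3:] /= norm                      # Benettin renormalisation every step
    return log_sum / (t_total - t_transient)

# One mesh point; looping lam = a + b*1j over a 101 x 101 grid gives the MSF surface.
print(msf(0.2 + 0.3j))
```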
Figure 1: The bifurcation diagrams for self-coupled Morris-Lecar neuron models. The left column corresponds to the Hodgkin Class I parameter regime, while the right column corresponds to the Hodgkin Class II parameter regime. The parameters correspond to excitatory self-coupling near the onset to spiking (low \(I\)), excitatory self-coupling far from the onset to spiking (high \(I\)), and inhibitory self-coupling far from the onset to spiking (high \(I\)). The inhibitory self-coupling regime near the onset to spiking was not considered as it typically leads to narrower parameter regions in \(\bar{g}\), or can sometimes lead to spiking in non-physical regimes (e.g. \(\bar{g}<0\)). The master stability functions were computed for Hodgkin Class I and Hodgkin Class II parameters in the large \(I\) regimes for similar ranges in \(\bar{g}\).
A single computation for the MSF for \(\bar{g}=2.1\) nS and \(E=30\) mV is shown in Figure 2. For these particular parameter regimes, the reversal potential indicates a predominantly excitatory synaptic coupling. This excitatory connectivity leads to large regions in the eigenvalue space where the MSF is negative (Figure 2A-B). For eigenvalues with larger magnitudes and positive real components, the MSF exhibits a sign change indicating a loss of stability in the synchronous solution. To test this loss of stability, we used ring networks of different sizes, as the eigenvalues of a ring of \(N\) neurons lie on the unit circle as the \(N\)th roots of unity. A ring with 5 neurons has all eigenvalues lying in the negative MSF area, indicating all negative Lyapunov exponents, while a ring with 7 neurons has a pair of complex conjugate eigenvalues in the positive MSF area. This indicates that the synchronous solution is stable for a ring of \(N=5\) neurons but unstable for \(N=7\) neurons. The ring of \(N=6\) neurons has eigenvalues that are extremely close to the sign change of the MSF, and thus was not considered. The \(N=5\) Morris-Lecar ring and the \(N=7\) Morris-Lecar ring were both simulated with initial conditions near the synchronous solution to test its local asymptotic stability (Figure 2C-D). After the initial transient, the \(N=5\) ring converges to the synchronous solution (Figure 2E) while the \(N=7\) ring diverges (Figure 2F).
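The placement of the ring networks on the MSF follows from the fact that the eigenvalues of a directed \(N\)-ring with constant row-sum \(\bar{g}\) are \(\bar{g}\) times the \(N\)th roots of unity (the directed structure is inferred from that statement); this can be confirmed with a quick NumPy check:

```python
import numpy as np

def ring_matrix(N, g_bar=1.0):
    """Directed N-ring (each neuron driven by its predecessor) with row-sum g_bar."""
    return g_bar * np.roll(np.eye(N), 1, axis=1)

for N in (5, 7):
    eig = np.sort_complex(np.linalg.eigvals(ring_matrix(N)))
    roots = np.sort_complex(np.exp(2j * np.pi * np.arange(N) / N))
    print(N, np.allclose(eig, roots))          # True: the Nth roots of unity
```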
### The Master Stability Function over the (\(E,\bar{g}\)) parameter space for Class I and Class II Neurons
Next, we determined how the inhibitory/excitatory valence of the conductance-based synapse would impact synchronizability. For a sufficiently large positive reversal potential, the current induced by presynaptic spikes primarily serves to depolarize the cell, and therefore initiates subsequent spikes. For a sufficiently large negative reversal potential, the cell becomes hyperpolarized by presynaptic spikes. The conventional wisdom would be that increasing the reversal potential would lead to more synchronization as all the synapses transitioned from inhibitory to excitatory. To test this hypothesis, we computed the MSF over a discrete mesh in \((\bar{g},E)\) space for the Morris-Lecar network under both Class I (Figure 3) and Class II parameters (Figure 4).
We found that the conventional wisdom largely prevails here, with increasing reversal potentials leading to synchronizability (Figure 3-4). However, there are some important caveats and unexpected deviations that occur. First, we noticed that predominantly inhibitory connectivity (\(E\leq-50\) mV) can lead to islands of synchronizability when the global conductance strength \(\bar{g}\) is sufficiently high. This is a common feature of MSFs where one can find localized and compact region(s) of stability [25]. These islands are "born" and die through a process we describe below. Second, the transition from a predominantly inhibitory synapse to an excitatory synapse appears quite suddenly, somewhere in between \(E\in[-10,10]\) mV. For the most part, there does not appear to be a gradual change in synchronizability, but an abrupt one. Finally, even for "excitatory" synapses, increasing the global conductance strength can decrease the area where the MSF is negative. This occurs, for example, at the \(E=10\) mV reversal potential, where a higher \(\bar{g}\) progressively erodes the synchronizability regime. Interestingly, the loss of stability with increasing \(\bar{g}\) can also be reversed for a sufficiently high reversal potential. Finally, we note that in the weak coupling regime (\(\bar{g}\ll O(1)\)), there is a vertical slice of synchronizability where neurons can synchronize provided that they connect primarily with strong autaptic connections and weak cross connections to each other.
### Abrupt Changes to the MSF in the (\(E,\bar{g}\)) parameter space.
For both Hodgkin Class I and Hodgkin Class II parameter sets, there was a near global change in the sign of the MSF. For Class I excitability, the MSF evolved from predominantly positive Lyapunov exponents for \(E=-10\) mV, to predominantly negative Lyapunov exponents for \(E=10\) mV. We investigated how abrupt this transition was by considering the Hodgkin Class I parameters on a finer mesh. We computed the MSF and plotted its sign over the interval \(E=-4\) mV to \(E=5\) mV with increments of 1 mV, with \(\bar{g}\) fixed as in Figures 3-4 (Figure 5). We found that the MSF was predominantly positive, aside from the small \(\bar{g}\) region. As \(E\) is increased, however, from \(E=0\) to \(E=5\) mV, the MSF rapidly changes sign over a large area of the admissible eigenvalue space \(|\lambda|\leq 1\). For every millivolt of increase in the reversal potential, the region of stability for the synchronous solution grows rapidly from the left side of the admissible eigenvalue domain. For non-weak coupling (\(\bar{g}\gg O(10^{-1})\) nS), most admissible
Figure 2: Testing the master stability function in simulated ring networks. **(A)** The MSF for the Morris-Lecar neuron under Hodgkin Class I excitability. The MSF was computed over a \(101\times 101\) evenly distributed mesh over the complex plane for \(\bar{g}=2.1\) nS and \(E=30\) mV (excitatory synapses). **(B)** The sign of the MSF function. Black denotes negative values, indicating a stable Lyapunov exponent associated with the eigenvalue, while white denotes positive Lyapunov exponents associated with the eigenvalue. The MSF was tested with two rings, a 5-neuron ring and a 7-neuron ring. The 6-neuron ring lay close to the sign change transition point. **(C)** The 5 neurons in the ring are simulated near the basin of attraction of the synchronous solution. The voltage of the neurons is initialized with a normally distributed random variable with mean -60 mV, and a standard deviation of 5 mV. The \(n\) and \(r\) variables are all set to 0.5. **(D)** Identical to (C), only with the 7-neuron ring. **(E)** The asymptotic behaviour of the 5-neuron ring is a synchronous solution. **(F)** The 7-neuron ring desynchronizes after a suitably long period of time.
Figure 3: The sign of the master stability function (MSF) as the network parameters \(\bar{g},E\) are varied. The Morris-Lecar parameters were taken to be in the Hodgkin Class I parameter regime. The sign of the MSF function was computed over a mesh in the \((\bar{g},E)\) parameter space. The mesh points correspond to \(E=-70+20j\) mV, for \(j=0,1,2,\ldots 5\) and \(\bar{g}=0.1+0.5k\) nS for \(k=0,1,2\ldots 7\). In between \(E=-30\) and \(E=-10\) mV, there is a large-scale transition from predominantly unstable synchronized solutions to predominantly stable synchronized solutions.
Figure 4: The sign of the master stability function (MSF) as the network parameters \(\bar{g},E\) are varied. The Morris-Lecar parameters were taken to be in the Hodgkin Class II parameter regime. The sign of the MSF function was computed over a mesh in the \((\bar{g},E)\) parameter space. The mesh points correspond to \(E=-70+20j\) mV, for \(j=0,1,2,\ldots 5\) and \(\bar{g}=0.1+0.5k\) nS for \(k=0,1,2\ldots 7\). In between \(E=-10\) and \(E=10\) mV, there is a large-scale transition from predominantly unstable synchronized solutions to predominantly stable synchronized solutions. Note the similarities to Figure 3.
Figure 5: The sign of the master stability function (MSF) as the network parameters \(\bar{g},E\) are varied. The Morris-Lecar parameters were taken to be in the Hodgkin Class II parameter regime. The sign of the MSF function was computed over a mesh in the \((\bar{g},E)\) parameter space. The mesh points correspond to \(E=-4+j\) mV, for \(j=0,1,2,\ldots 9\) and \(\bar{g}=0.1+0.5k\) nS for \(k=0,1,2\ldots 7\). In between \(E=-4\) and \(E=5\) mV, there is a large-scale transition from predominantly unstable synchronized solutions to predominantly stable synchronized solutions.
weight matrices go from unstable synchronous solutions to stable synchronous solutions with the change of just a few millivolts in the reversal potential of the synapse.
Next, we tested these abrupt changes in synchronizability by using randomly coupled networks of Morris-Lecar neurons (Figure 6). The network was simulated at two parameter values, with \(\bar{g}=2.1\) nS and \(E=0\) mV, and \(\bar{g}=2.1\) nS and \(E=4\) mV. The former was predicted to yield unstable synchronized solutions while the latter had a large region of stability. The network consisted of 50 neurons, with 85% sparse random coupling, and all gating variables initialized to 0.5, while the voltage variable was drawn from a normal random variable with a mean of -60 mV and a standard deviation of 3 mV (Figure 6A-B). The eigenvalues of the matrix were verified to lie in the positive sign MSF (\(E=0\) mV) and the negative sign MSF (\(E=4\) mV), respectively (Figure 6A-B). The network simulated with the lower reversal potential desynchronized after 1.5 seconds while the network at the slightly higher reversal potential synchronized (Figure 6C-D). These results demonstrate how a change of only a few millivolts in the reversal potential changes the stability of the synchronous solution for many types of network coupling.
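A matrix of the type used in this test can be sketched as follows (illustrative only; the exact matrix behind Figure 6 is not reproduced): an 85%-sparse Erdős–Rényi-style pattern on 50 nodes with nonnegative weights, rescaled so that each row sums to \(\bar{g}\), whose eigenvalues are then normalised by \(\bar{g}\) for the MSF lookup:

```python
import numpy as np

rng = np.random.default_rng(0)
N, g_bar, sparsity = 50, 2.1, 0.85
mask = rng.random((N, N)) > sparsity           # keep roughly 15% of possible connections
np.fill_diagonal(mask, False)                  # no explicit autapses in this sketch
g = mask * rng.random((N, N))                  # nonnegative weights on the kept edges
for i in range(N):                             # guard against an all-zero row
    if g[i].sum() == 0.0:
        g[i, (i + 1) % N] = 1.0
g *= g_bar / g.sum(axis=1, keepdims=True)      # enforce the constant row-sum

lam = np.linalg.eigvals(g) / g_bar             # normalised eigenvalues, |lam| <= 1
print(np.allclose(g.sum(axis=1), g_bar), np.max(np.abs(lam)))
```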
Figure 6: A change of a few mV in the reversal potential changes the stability of the synchronous solution. **(A)** The sign of the MSF function for \(\bar{g}=2.1\), \(E=0\) mV. The eigenvalues (red dots) of a randomly generated \(N=50\) node Erdős–Rényi model network that is 85% sparse are also plotted. **(B)** The sign of the MSF function for \(\bar{g}=2.1\), \(E=4\) mV. The eigenvalues (red dots) are identical to those in (A). **(C)** A simulation of a network of Morris-Lecar neurons with \(\bar{g}\) and \(E\) identical as in (A), coupled with the weight matrix from (A). Every neuron is generated with \(n_{j}=0.5\), \(r_{j}=0.5\), and a random initial voltage drawn from a normal distribution with a mean of -60 mV, and standard deviation of 3 mV. **(D)** An identical simulation as in (C), only with \(\bar{g}=2.1\) and \(E=4\) mV. The initial conditions are identical as in (C). The synchronous solution is now locally asymptotically stable. The network parameters were in the Hodgkin Class I regime.
### Islands of Synchronizability in the \((E,\bar{g})\) parameter space.
To witness the evolution of an island of synchronizability, we first computed the MSF function over a finer mesh in the \(\bar{g}\) space with \(E\) fixed to \(-70\) mV (Figure 7). In these islands, the MSF has a local minimum which can emerge from the left-hand side of the eigenvalue domain (Figure 7). The island transitions to the right-hand side of the eigenvalue domain while expanding in size. Eventually, the island is "absorbed" by the neutral stability eigenvalue at \(\lambda=1\) (Figure 7). Multiple islands (negative minima of the MSF function) can co-exist (Figure 7).
Next, we investigated if these islands of synchronizability could be directly tested with a network simulation (Figure 8). As the islands bound small areas in the complex plane, we tested if networks of pairs (\(N=2\)) of oscillators would synchronize (Figure 8A, D). We constrained the connection strengths of these matrices to have a constant row sum of 1, which forces the maximum eigenvalue to be \(\lambda=1\). It is then trivial to constrain the connections such that the second eigenvalue lies in an arbitrary position on the real line (Figure 8B, E). For each parameter regime considered, two networks were tested. One network was selected with an eigenvalue inside the island, and another with an eigenvalue outside of the island. As predicted by the MSF function, we found that the connectivity matrices with eigenvalues within the islands of synchronizability led to local asymptotic stability of the synchronous solution. Given the small size of these islands, our results suggest that for conductance-based synapses, the connectivity graphs in which inhibition can induce synchronization may be highly constrained and strongly parameter dependent.
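One explicit way to realise the constraint described above for a pair of neurons is the following (the specific eigenvalue targets are placeholders, not the values used in Figure 8): a nonnegative \(2\times 2\) matrix with unit row-sum has eigenvalues \(1\) and \(1-(a+b)\), so the second eigenvalue can be placed anywhere in \([-1,1]\):

```python
import numpy as np

def pair_matrix(lam2, split=0.5):
    """Nonnegative 2 x 2 matrix with row-sum 1 and second eigenvalue lam2 in [-1, 1]."""
    s = 1.0 - lam2                       # total off-diagonal weight a + b
    a, b = split * s, (1.0 - split) * s
    assert 0.0 <= a <= 1.0 and 0.0 <= b <= 1.0, "lam2 must lie in [-1, 1]"
    return np.array([[1.0 - a, a], [b, 1.0 - b]])

g_inside = pair_matrix(-0.35)            # placeholder target inside an island
g_outside = pair_matrix(-0.60)           # placeholder target outside it
for g in (g_inside, g_outside):
    print(np.round(np.linalg.eigvals(g), 3), g.sum(axis=1))
```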
## 3 Discussion
We have found that the synchronizability of conductance-based neurons is a complex affair. By applying a modified MSF approach, we analyzed how networks of chemically coupled Morris-Lecar neurons synchronize in Class I and Class II regimes. We found remarkable consistency in the stability regimes across these two parameter regimes. As a general rule of thumb, higher reversal potentials tend to lead to the (local) stability of synchronous autaptic solutions, and lower reversal potentials tend to lead to the (local) instability of synchronous autaptic solutions. However, this rule of thumb is often deviated from as islands of stability under inhibition, and wedge-shaped regions of instability under excitation were both observed. The actual change between predominantly stable or predominantly unstable synchronized solutions occurs very rapidly as a function of the reversal potential (a few mVs). Evidently, the stability of synchronous solutions is strongly parameter dependent with small changes to the global conductance or small changes to the reversal potential stabilizing or destabilizing the synchronizability of different connectivity graphs.
We note that we are not the only authors to consider applying the MSF function to neurons with chemical synapses [26, 27]. In [26] the authors consider networks of Hindmarsh-Rose neuron models with both electrical and chemical coupling, thereby making a direct comparison to the work here difficult. In [27], the authors consider networks of Izhikevich neuron models with chemical synapses and utilize non-smooth analysis to determine the MSF. This was performed for electrical only, chemical only, and simultaneous electrical/chemical synapses. For chemical synapses, the estimated MSF appears positive (Figure 5 in [27]). The modification to the MSF to allow for non-diffusive coupling appears in multiple sources in the literature [27, 28, 29, 30, 31].
The islands of stability observed here occurred for primarily inhibitory (low \(E\)) connection strengths. Interestingly, work with classical integrate-and-fire neurons with current-based synapses has also demonstrated that inhibition can induce synchronization [32]. In particular, the authors consider leaky integrate-and-fire neurons with alpha-function-like synaptic connectivity, where every spike at time \(t^{*}\) increases the current arriving at a neuron by \(E_{s}(t-t^{*})\), where \(E_{s}(t)=g\alpha^{2}t\exp(-\alpha t)\). The authors find that for inhibitory synapses (\(g<0\)), the synchronous solution is always stable, although at a critical value of \(\alpha\) (faster synapses), a pitchfork bifurcation occurs that stabilizes the asynchronous solution. As the synapses become faster, the basin of attraction for the synchronous solution shrinks. We primarily considered synapses with a rise time of approximately 1 ms (\(a_{r}=1.1\) ms\({}^{-1}\)) and a decay time of approximately 5 ms (\(a_{d}=0.19\) ms\({}^{-1}\)). For conductance-based synapses, it appears that the synchronous solution is stable under inhibition-dominated regimes only for very specific connectivity graphs, in contrast with the behaviour
Figure 7: The “birth” and “death” of an island of synchronizability as \(\bar{g}\) is increased. The MSF was computed for the Morris-Lecar neuron model under Hodgkin Class I excitability. The island of stability emerges from the negative real part of the eigenvalue mesh (1st row). Then, the island undergoes a period of expansion and lateral movement to larger real components (2nd to 4th rows). The island subsequently collides with the neutral stability eigenvalue \(\lambda=1\) and begins to be “absorbed” by it (5th row). A second, very small island of synchronizability has emerged in the last column. The reversal potential was -70 mV.
Figure 8: Testing the islands of stability with simulated pairs of coupled oscillators. **(A)** Two connectivity matrices for a network of two coupled Morris-Lecar neurons with Hodgkin Class I excitability. The two matrices yield eigenvalues that lie in the island of synchronizability (top), or are outside and adjacent to it (bottom). Note that these matrices satisfy both the unity row-sum constraint and the all-positive-elements constraint of a conductance matrix. **(B)** The computed master stability function for \(\bar{g}=2.25\) nS (left), \(E=-70\) mV, and the sign of the MSF (right). Both matrices have eigenvalues of 1 (neutral stability) and a second eigenvalue less than 1. The matrix on the top of (A) has a second eigenvalue within the island of stability (red), while the matrix on the bottom has an eigenvalue outside of the island of stability (blue). **(C)** Simulation of the Morris-Lecar neurons coupled as in (A), with only the voltage plotted. The matrix with an eigenvalue inside of the island has a locally asymptotically stable synchronous solution. The matrix on the bottom of (A) has an eigenvalue outside of the island, and has an unstable synchronous solution. **(D)** Identical as in (A), only with a larger conductance value \(\bar{g}=2.75\) nS. **(E)** Identical as in (B), with the connectivity matrices determined by (D). **(F)** Identical as in (C), with the connectivity matrices determined by (D).
of an inhibitory coupled integrate-and-fire network. We leave the general analysis of the impacts of synaptic timing for future work, however we do remark that the findings in [32] were supported and extended by subsequent modelling work [33] and even experimental findings [34, 35].
By thoroughly exploring parameter space, we were able to shed insight into the impacts of having conductance-based synaptic coupling on synchronizability. Chemical synapses are not strictly excitatory or inhibitory as the flow of current is dictated by the driving force, which leads to complex behaviours and synchronizability regimes, where excitation can lead to desynchronization or inhibition can lead to synchronization.
### Acknowledgement
WN is supported by an NSERC Discovery Grant (DGECR/00334-2020) and a Canada Research Chair (CRC-2019-00416).
## Appendix A
### The Master Stability Function
The Master Stability Function (MSF) approach to resolving the stability of synchronous solutions was originally proposed in [4], with subsequent advances to the analysis of clustered systems in [5]. Briefly, the approach considers coupled non-linear systems of the following form:
\[\frac{d\boldsymbol{x}_{i}}{dt}=Q(\boldsymbol{x}_{i})+\sum_{j=1}^{N}g_{ij}H( \boldsymbol{x}_{j}),\quad i=1,2,\ldots N,\quad\boldsymbol{x}_{i}\in\mathbb{R} ^{k},\]
with the constraint that
\[\sum_{j=1}^{N}g_{ij}=0,\forall i\]
and that the coupling matrix \(\boldsymbol{g}\) be diagonalizable. The zero row-sum constraint is sometimes referred to as "diffusive connectivity".
One can express this coupled system with the direct product \(\otimes\) (tensor) operator as
\[\frac{d\boldsymbol{x}}{dt}=\boldsymbol{Q}(\boldsymbol{x})+\boldsymbol{G} \otimes\boldsymbol{H}(\boldsymbol{x})\]
where
\[\boldsymbol{Q}(\boldsymbol{x})=[Q(\boldsymbol{x}_{1}),Q(\boldsymbol{x}_{2}), \ldots Q(\boldsymbol{x}_{N})],\quad\boldsymbol{H}(\boldsymbol{x})=[H( \boldsymbol{x}_{1}),H(\boldsymbol{x}_{2})\ldots H(\boldsymbol{x}_{N})].\]
Under these constraints, the synchronous solution \(\boldsymbol{x}_{s}(t)\) is a solution to the differential equation
\[\frac{d\boldsymbol{x}_{s}}{dt}=Q(\boldsymbol{x}_{s}) \tag{18}\]
Then, perturbations off of the synchronous solution are considered
\[\boldsymbol{x}_{i}(t)=\boldsymbol{x}_{s}(t)+\boldsymbol{\epsilon}_{i}(t).\]
which yields the following block diagonal system
\[\frac{d\boldsymbol{\epsilon}}{dt}=\left[\boldsymbol{I}_{N}\otimes\frac{ \partial Q}{\partial\boldsymbol{x}}+\boldsymbol{G}\otimes\frac{\partial H}{ \partial\boldsymbol{x}}\right]\boldsymbol{\epsilon} \tag{19}\]
The coupled non-autonomous system in (19) determines the linear stability of the synchronous solution (18) via computation of the Lyapunov exponents of the system. In a master-stability function approach, the problem of computing these Lyapunov exponents is simplified greatly by assuming that \(\boldsymbol{G}\) is diagonalizable:
\[\boldsymbol{G}=\boldsymbol{PD}\boldsymbol{P}^{-1}\]
By applying the substitution \(\boldsymbol{\eta}=\boldsymbol{P}^{-1}\boldsymbol{\epsilon}\), the system simplifies into a diagonalized block-system:
\[\frac{d\boldsymbol{\eta}_{i}}{dt}=\left[\frac{\partial Q}{\partial\boldsymbol{ x}}+\lambda_{i}\frac{\partial H}{\partial\boldsymbol{x}}\right]\boldsymbol{ \eta}_{i},\quad i=1,2,\ldots N \tag{20}\]
where \(\lambda_{i}\) is an eigenvalue of the matrix \(\boldsymbol{G}\). The next step in an MSF approach is to compute the Lyapunov exponents of equation (20) over a mesh in the complex eigenvalue space of \(\lambda=a+bi\). Thus, one computes the mesh of Lyapunov exponents, \(\mu\), as a function of the eigenvalues \(\lambda\):
\[\frac{d\boldsymbol{\eta}}{dt}=\left[\frac{\partial Q}{\partial\boldsymbol{x}}+ \lambda\frac{\partial H}{\partial\boldsymbol{x}}\right]\boldsymbol{\eta}, \rightarrow\mu(\lambda) \tag{21}\]
The Lyapunov exponents \(\mu(\lambda)\) can be computed with established methods for their approximation (e.g. [37]).
The maximum Lyapunov exponent of the block (20) as a function of \(\lambda\) is the Master Stability Function (MSF). With the mesh computed, one can use any diagonalizable connection matrix and simply "look up" the value of the maximum Lyapunov exponent as a function of \(\lambda\) with \(\mu(\lambda)\), the MSF. In this work, we refer to the final diagonalized block structure in (21) as the MSF equations, as they are necessary for the numerical approximation of \(\mu(\lambda)\).
### Supplementary Material
The parameters for the Morris-Lecar neuron model under Hodgkin Class I and Hodgkin Class II regimes are shown in Table 1, along with the synaptic parameters.
### Numerical Integration and Computation of the Lyapunov Exponents
All ODEs for the Morris-Lecar system(s) under consideration were integrated with the MATLAB 2023a function _ode45_. The 'RelTol' (relative error tolerance) and 'AbsTol' (absolute error tolerance) parameters were set to \(10^{-14}\) for all direct integration of the Morris-Lecar network equations. The Lyapunov exponents were computed with the algorithm in [37] with code modified from [30, 31].
\begin{table}
\begin{tabular}{|c|c|c|} \hline Parameter & Value & Units \\ \hline \(C\) & 20 & pF \\ \hline \(g_{L}\) & 2 & nS \\ \hline \(E_{L}\) & -60 & mV \\ \hline \(g_{K}\) & 8 & nS \\ \hline \(E_{K}\) & -84 & mV \\ \hline \(g_{Ca}\) & 4 & nS \\ \hline \(E_{Ca}\) & 120 & mV \\ \hline \(V_{1}\) & -1.2 & mV \\ \hline \(V_{2}\) & 18 & mV \\ \hline \(V_{3}\) & 12 (Class I), 2 (Class II) & mV \\ \hline \(V_{4}\) & 17.4 (Class I), 30 (Class II) & mV \\ \hline \(\phi\) & 0.067 (Class I), 0.04 (Class II) & ms\({}^{-1}\) \\ \hline \(I\) & 50 (Class I), 115 (Class II) & pA \\ \hline \(a_{r}\) & 1.1 & ms\({}^{-1}\) \\ \hline \(a_{d}\) & 0.19 & ms\({}^{-1}\) \\ \hline \(T_{max}\) & 1 & unitless \\ \hline \(K_{p}\) & 5 & mV \\ \hline \(V_{T}\) & 2 & mV \\ \hline \end{tabular}
\end{table}
Table 1: Parameter values used for the Morris-Lecar neuron model, unless otherwise specified in a figure |
2309.06944 | Three-cuts are a charm: acyclicity in 3-connected cubic graphs | Let $G$ be a bridgeless cubic graph. In 2023, the three authors solved a
conjecture (also known as the $S_4$-Conjecture) made by Mazzuoccolo in 2013:
there exist two perfect matchings of $G$ such that the complement of their
union is a bipartite subgraph of $G$. They actually show that given any
$1^+$-factor $F$ (a spanning subgraph of $G$ such that its vertices have degree
at least 1) and an arbitrary edge $e$ of $G$, there exists a perfect matching
$M$ of $G$ containing $e$ such that $G\setminus (F\cup M)$ is bipartite. This
is a step closer to comprehend better the Fan--Raspaud Conjecture and
eventually the Berge--Fulkerson Conjecture. The $S_4$-Conjecture, now a
theorem, is also the weakest assertion in a series of three conjectures made by
Mazzuoccolo in 2013, with the next stronger statement being: there exist two
perfect matchings of $G$ such that the complement of their union is an acyclic
subgraph of $G$. Unfortunately, this conjecture is not true: Jin, Steffen, and
Mazzuoccolo later showed that there exists a counterexample admitting 2-cuts.
Here we show that, despite this, every cyclically 3-edge-connected cubic
graph satisfies this second conjecture. | František Kardoš, Edita Máčajová, Jean Paul Zerafa | 2023-09-13T13:28:15Z | http://arxiv.org/abs/2309.06944v1 | # Three-cuts are a charm:
###### Abstract
Let \(G\) be a bridgeless cubic graph. In 2023, the three authors solved a conjecture (also known as the \(S_{4}\)-Conjecture) made by Mazzuoccolo in 2013: there exist two perfect matchings of \(G\) such that the complement of their union is a bipartite subgraph of \(G\). They actually show that given any \(1^{+}\)-factor \(F\) (a spanning subgraph of \(G\) such that its vertices have degree at least 1) and an arbitrary edge \(e\) of \(G\), there exists a perfect matching \(M\) of \(G\) containing \(e\) such that \(G\setminus(F\cup M)\) is bipartite. This is a step closer to a better understanding of the Fan-Raspaud Conjecture and, eventually, the Berge-Fulkerson Conjecture. The \(S_{4}\)-Conjecture, now a theorem, is also the weakest assertion in a series of three conjectures made by Mazzuoccolo in 2013, with the next stronger statement being: there exist two perfect matchings of \(G\) such that the complement of their union is an acyclic subgraph of \(G\). Unfortunately, this conjecture is not true: Jin, Steffen, and Mazzuoccolo later showed that there exists a counterexample admitting 2-cuts. Here we show that, despite this, every cyclically 3-edge-connected cubic graph satisfies this second conjecture.
_Keywords: acyclicity, circuit, factor, perfect matching, cubic graph, snark_
_Math. Subj. Class.: 05C15, 05C70_
## 1 Introduction
In 2013, Giuseppe Mazzuoccolo [8] proposed three beguiling conjectures about bridgeless cubic graphs. His first conjecture, implied by the Berge-Fulkerson Conjecture [4], is the following.
**Conjecture 1.1** (Mazzuoccolo, 2013 [8]).: _Let \(G\) be a bridgeless cubic graph. Then, there exist two perfect matchings of \(G\) such that the complement of their union is a bipartite graph._
This conjecture, which is no longer open, has been solved by the three authors. More precisely they prove the following stronger statement.
**Theorem 1.2** (Kardoš, Máčajová & Zerafa, 2023 [6]).: _Let \(G\) be a bridgeless cubic graph. Let \(F\) be a \(1^{+}\)-factor of \(G\) and let \(e\in E(G)\). Then, there exists a perfect matching \(M\) of \(G\) such that \(e\in M\), and \(G\setminus(F\cup M)\) is bipartite._
We note that a \(1^{+}\)_-factor_ of \(G\) is the edge set of a spanning subgraph of \(G\) such that its vertices have degree 1, 2 or 3. Theorem 1.2 not only shows the existence of two perfect matchings of \(G\) whose deletion leaves a bipartite subgraph of \(G\), but that for every perfect matching of \(G\) there exists a second one such that the deletion of the two leaves a bipartite subgraph of \(G\). In particular, Theorem 1.2 also implies that for every collection of disjoint odd circuits of \(G\), there exists a perfect matching which intersects at least one edge from each odd circuit (this was posed as an open problem by Mazzuoccolo and the last author in [9], see also [11]).
Mazzuoccolo moved on to propose two stronger conjectures, with Conjecture 1.4 being the strongest of all three.
**Conjecture 1.3** (Mazzuoccolo, 2013 [8]).: _Let \(G\) be a bridgeless cubic graph. Then, there exist two perfect matchings of \(G\) such that the complement of their union is an acyclic graph._
**Conjecture 1.4** (Mazzuoccolo, 2013 [8]).: _Let \(G\) be a bridgeless cubic graph. Then, there exist two perfect matchings of \(G\) such that the complement of their union is an acyclic graph, whose components are of order 2 or 3._
Clearly, these last two conjectures are true for 3-edge-colourable cubic graphs, and Janos Hagglund verified the strongest of these conjectures (Conjecture 1.4) by computer for all snarks (non 3-edge-colourable cubic graphs) of order at most 34 [8]. However, 5 years later, Jin, Steffen, and Mazzuoccolo [5] gave a counterexample to Conjecture 1.3. Their counterexample contains a lot of 2-edge-cuts and the authors state that the conjecture "could hold true for 3-connected or cyclically 4-edge-connected cubic graphs". In fact, as in real life, being more connected has its own benefits, and in this paper we show the following stronger statement.
**Theorem 1.5**.: _Let \(G\) be a cyclically 3-edge-connected cubic graph, which is not a Klee-graph. Then, for any \(e\in E(G)\) and any \(1^{+}\)-factor \(F\) of \(G\), there exists a perfect matching \(M\) of \(G\) containing \(e\) such that \(G\setminus(F\cup M)\) is acyclic._
We remark that Klee-graphs (see Definition 2.1), which are to be discussed further in Section 2, are \(3\)-edge-colourable cubic graphs and so are not a counterexample to Conjecture 1.3. However, the stronger statement given in Theorem 1.5 does not hold for this class of graphs, and this is the reason why we exclude them.
Although Theorem 1.5 is not a direct consequence of the Berge-Fulkerson Conjecture, we believe that the results presented here and in [6] are valuable steps towards trying to decipher long-standing conjectures such as the Fan-Raspaud Conjecture [3], and the Berge-Fulkerson Conjecture itself.
In fact, we will prove the following statement, which is equivalent to Theorem 1.5.
**Theorem 1.6**.: _Let \(G\) be a cyclically 3-edge-connected cubic graph, which is not a Klee-graph. Then, for any \(e\in E(G)\) and any collection of disjoint circuits \(\mathcal{C}\), there exists a perfect matching \(M\) of \(G\) containing \(e\) such that every circuit in \(\mathcal{C}\) contains an edge from \(M\)._
Indeed, given a collection of disjoint circuits \(\mathcal{C}\), its complement is a \(1^{+}\)-factor, say \(F_{\mathcal{C}}\). A perfect matching \(M\) containing \(e\) such that \(G\setminus(F_{\mathcal{C}}\cup M)\) is acyclic must contain an edge from every circuit in \(\mathcal{C}\). On the other hand, given a \(1^{+}\)-factor \(F\), its complement is a collection of disjoint paths and circuits, and so it suffices to consider the collection \(\mathcal{C}_{F}\) of circuits disjoint from \(F\). A perfect matching \(M\) containing \(e\) such that every circuit in \(\mathcal{C}_{F}\) contains an edge from \(M\), clearly makes \(G\setminus(F\cup M)\) acyclic.
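For intuition (and not as part of any proof), the statement of Theorem 1.6 can be checked by brute force on small examples. The sketch below uses the Petersen graph from networkx, a prescribed edge and the two disjoint 5-circuits as the collection \(\mathcal{C}\); exhaustive enumeration of perfect matchings is only feasible for very small graphs:

```python
from itertools import combinations
import networkx as nx

def perfect_matchings(G):
    """Yield all perfect matchings of G (brute force over edge subsets)."""
    n = G.number_of_nodes()
    for edges in combinations(G.edges(), n // 2):
        covered = {v for edge in edges for v in edge}
        if len(covered) == n:
            yield {frozenset(edge) for edge in edges}

G = nx.petersen_graph()
e = frozenset((0, 1))                                      # prescribed edge
outer = [frozenset((i, (i + 1) % 5)) for i in range(5)]    # outer 5-circuit
inner = [frozenset((i + 5, (i + 2) % 5 + 5)) for i in range(5)]  # inner 5-circuit
circuits = [outer, inner]                                  # two disjoint circuits

M = next(M for M in perfect_matchings(G)
         if e in M and all(any(c in M for c in C) for C in circuits))
print(sorted(tuple(sorted(edge)) for edge in M))
```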
### Important definitions and notation
Graphs considered in this paper are simple, that is, they contain neither parallel edges nor loops, unless otherwise stated.
Let \(G\) be a graph and \((V_{1},V_{2})\) be a partition of its vertex set, that is, \(V_{1}\cup V_{2}=V(G)\) and \(V_{1}\cap V_{2}=\emptyset\). Then, by \(E(V_{1},V_{2})\) we denote the set of edges having one endvertex in \(V_{1}\) and one in \(V_{2}\); we call such a set an _edge-cut_. An edge which itself is an edge-cut of size one is a _bridge_. A graph which does not contain any bridges is said to be _bridgeless_.
An edge-cut \(X=E(V_{1},V_{2})\) is called _cyclic_ if both graphs \(G[V_{1}]\) and \(G[V_{2}]\), obtained from \(G\) after deleting \(X\), contain a _circuit_ (a \(2\)-regular connected subgraph). The _cyclic edge-connectivity_ of a graph \(G\) is defined as the smallest size of a cyclic edge-cut in \(G\) if \(G\) admits one; it is defined as \(|E(G)|-|V(G)|+1\), otherwise. For cubic graphs, the latter only concerns \(K_{4}\), \(K_{3,3}\), and the graph consisting of two vertices joined by three parallel edges, whose cyclic edge-connectivity is thus 3, 4, and 2, respectively. An _acyclic_ graph is a graph which does not contain any circuits.
Let \(G\) be a bridgeless cubic graph. A \(1^{+}\)_-factor_ of \(G\) is the edge set of a spanning subgraph of \(G\) such that its vertices have degree 1, 2 or 3. In particular, a _perfect matching_ and a \(2\)_-factor_ of \(G\) are \(1^{+}\)-factors whose vertices have exactly degree 1 and 2, respectively.
## 2 Klee-graphs
**Definition 2.1** ([7]).: A graph \(G\) is a Klee-graph if \(G\) is the complete graph on 4 vertices \(K_{4}\) or there exists a Klee-graph \(G_{0}\) such that \(G\) can be obtained from \(G_{0}\) by replacing a vertex by a triangle (see Figure 1).
For simplicity, if a graph \(G\) is a Klee-graph, we shall sometimes say that \(G\) is Klee. We note that there is a unique Klee-graph on 6 vertices (the graph of a 3-sided prism), and a unique Klee-graph on 8 vertices. As we will see in Section 2.1, these two graphs are Klee ladders, and shall be respectively denoted as \(KL_{6}\) and \(KL_{8}\).
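The expansion operation of Definition 2.1 is easy to carry out programmatically. The networkx sketch below (illustrative; node labels are arbitrary) replaces a vertex of a cubic graph by a triangle and reattaches the three original neighbours, so iterating from \(K_{4}\) generates Klee-graphs:

```python
import networkx as nx

def expand_vertex_to_triangle(G, v):
    """Return a copy of the cubic graph G with vertex v expanded into a triangle."""
    assert G.degree(v) == 3, "the expansion is defined for cubic graphs"
    H = G.copy()
    neighbours = list(H.neighbors(v))
    H.remove_node(v)
    new = [f"{v}_{i}" for i in range(3)]                  # the three triangle vertices
    H.add_edges_from([(new[0], new[1]), (new[1], new[2]), (new[2], new[0])])
    H.add_edges_from(zip(new, neighbours))                # reattach the old neighbours
    return H

K4 = nx.complete_graph(4)
KL6 = expand_vertex_to_triangle(K4, 0)    # the 3-sided prism
KL8 = expand_vertex_to_triangle(KL6, 1)   # the unique Klee-graph on 8 vertices
print(KL8.number_of_nodes(), sorted(d for _, d in KL8.degree()))   # 8, all degrees 3
```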
**Lemma 2.2** ([7]).: _The edge set of any Klee-graph can be uniquely partitioned into three pairwise disjoint perfect matchings. In other words, any Klee-graph is 3-edge-colourable, and the colouring is unique up to a permutation of the colours._
Since Klee-graphs are 3-edge-colourable, they easily satisfy the statement of Conjecture 1.3.
**Proposition 2.3**.: _Let \(G\) be a Klee-graph. Then, \(G\) admits two perfect matchings \(M_{1}\) and \(M_{2}\) such that \(G\setminus(M_{1}\cup M_{2})\) is acyclic._
The new graph obtained after expanding a vertex of a Hamiltonian graph (not necessarily Klee) into a triangle is still Hamiltonian, and so, since \(K_{4}\) is Hamiltonian, all Klee-graphs are Hamiltonian. Hamiltonian cubic graphs have the following distinctive property.
**Proposition 2.4**.: _Let \(G\) be a Hamiltonian cubic graph. Then, for any collection of disjoint circuits \(\mathcal{C}\) of \(G\) there exists a perfect matching \(M\) of \(G\) which intersects at least one edge of every circuit in \(\mathcal{C}\)._
Proof.: Since \(G\) is Hamiltonian, it admits three disjoint perfect matchings \(M_{1},M_{2},M_{3}\) covering \(E(G)\) such that the union of two of them induces a Hamiltonian circuit. Without loss of generality, assume that \(M_{2}\cup M_{3}\) induces a Hamiltonian circuit. Let \(\mathcal{C}\) be a collection of disjoint circuits of \(G\) for which the statement of the proposition does not hold. In particular, this implies that \(M_{1}\) does not intersect all the circuits in \(\mathcal{C}\) -- since the complement of \(M_{1}\) is a Hamiltonian circuit, \(\mathcal{C}\) consists of exactly one circuit. However, this means that \(M_{2}\) (or \(M_{3}\)) intersects the only circuit in \(\mathcal{C}\), contradicting our initial assumption.
**Corollary 2.5**.: _For any collection of disjoint circuits \(\mathcal{C}\) of a Klee-graph \(G\) there exists a perfect matching \(M\) of \(G\) which intersects at least one edge of every circuit in \(\mathcal{C}\)._
On the other hand, we have to exclude Klee-graphs from Theorem 1.5 (and Theorem 1.6) since for some Klee-graphs there are edges contained in a unique perfect matching, as we will see in the following subsection.
Figure 1: Examples of Klee-graphs on 4 upto 12 vertices, left to right.
### Other results about Klee-graphs
**Lemma 2.6** ([7]).: _Let \(G\) be a Klee-graph on at least 6 vertices. Then, \(G\) has at least two triangles and all its triangles are vertex-disjoint._
Indeed, expanding a vertex into a triangle can only destroy triangles containing the vertex to be expanded.
We will now define a series of particular Klee-graphs, which we will call _Klee ladders_. Let \(KL_{4}\) be the complete graph on 4 vertices, and let \(u_{4}v_{4}\) be an edge of \(KL_{4}\). For any even \(n\geq 4\), let \(KL_{n+2}\) be the Klee-graph obtained from \(KL_{n}\) by expanding the vertex \(u_{n}\) into a triangle. In the resulting graph \(KL_{n+2}\), we denote the vertex corresponding to \(v_{n}\) by \(v_{n+2}\), and denote the vertex of the new triangle adjacent to \(v_{n+2}\) by \(u_{n+2}\).
In other words, the graph \(KL_{2k+2}\) consists of the Cartesian product \(P_{2}\square P_{k}\) (where \(P_{t}\) denotes a path on \(t\) vertices) with two additional vertices \(u_{2k+2}\) and \(v_{2k+2}\) adjacent to each other, such that \(u_{2k+2}\) (\(v_{2k+2}\)) is adjacent to the two vertices in the first (last, respectively) copy of \(P_{2}\) in \(P_{2}\square P_{k}\) (see Figure 2).
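The description above translates directly into a small construction routine (illustrative networkx sketch; the labels \(u\) and \(v\) stand for \(u_{2k+2}\) and \(v_{2k+2}\)):

```python
import networkx as nx

def klee_ladder(k):
    """Klee ladder on 2k + 2 vertices (k >= 1); k = 1 gives K4, k = 5 gives KL_12."""
    G = nx.grid_2d_graph(2, k)                        # the Cartesian product P2 x Pk
    G.add_edge("u", "v")
    G.add_edges_from([("u", (0, 0)), ("u", (1, 0)),           # u sees the first rung
                      ("v", (0, k - 1)), ("v", (1, k - 1))])  # v sees the last rung
    return G

KL12 = klee_ladder(5)
print(KL12.number_of_nodes(), sorted(d for _, d in KL12.degree()))  # 12, all degrees 3
```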
Klee ladders can be used to illustrate why we have to exclude Klee-graphs from our main result. For a given Klee ladder \(G\) there exists an edge \(e\) such that \(e\) is contained in a unique perfect matching of \(G\), and therefore there is no hope for a statement like Theorem 1.6 to be true.
We will frequently use the following structural property of certain Klee-graphs.
**Lemma 2.7**.: _Let \(G\) be a Klee-graph on at least 8 vertices having exactly two (disjoint) triangles. Then,_
1. _exactly one edge of each triangle lies on a 4-circuit; and_
2. _if_ \(G\) _admits an edge joining the two triangles, then_ \(G\) _is a Klee ladder._
Proof.: We prove this by induction. Claim (i) is obvious for \(KL_{8}\), the only Klee-graph on 8 vertices, so let \(G\) be a Klee-graph on \(n\geq 10\) vertices. By definition, it can be obtained from a smaller one, say \(G_{0}\), by expanding a vertex into a triangle. Since \(G\) only has two triangles, this operation must have destroyed a (single) triangle of \(G_{0}\), which in turn gives rise to a 4-circuit containing exactly one of the edges of the new triangle.
Moreover, if \(G\) admits an edge \(e\) joining the two triangles, then the corresponding edge \(e_{0}\) in \(G_{0}\) joins a triangle to a vertex contained in a (distinct) triangle, so it joins the two triangles of \(G_{0}\). By induction, \(G_{0}\) is a Klee ladder, say \(KL_{n-2}\), for some \(n\geq 8\), and the edge \(e_{0}\) is the edge \(u_{n-2}v_{n-2}\) (see the definition of Klee ladders above). Claim (ii) follows immediately.
Figure 2: An example of a Klee ladder \(KL_{12}\). There is a unique perfect matching (here depicted using dotted lines) containing the edge \(e\). The complement of this perfect matching is a Hamiltonian circuit.
## 3 Proof of Theorem 1.6
Proof.: Let \(G\) be a minimum counterexample to the statement of Theorem 1.5. Since \(K_{4}\) is Klee, \(G\) has at least six vertices. There are only two 3-connected cubic graphs on six vertices, namely \(KL_{6}\) and \(K_{3,3}\). The former is Klee. For the latter, \(K_{3,3}\), a collection of disjoint circuits can only contain one circuit on either four or six vertices and in both cases it is easy to check that every edge is contained in a perfect matching intersecting the prescribed circuit. Therefore, \(G\) has at least eight vertices.
Let \(e\in E(G)\) be an edge of \(G\) such that there exists a collection of disjoint circuits such that for every perfect matching \(M\) containing \(e\) there is a circuit in the collection containing no edge from \(M\). Amongst all such collections, we can choose an inclusion-wise minimal one, denoted by \(\mathcal{C}\). By the choice of \(\mathcal{C}\), we may assume that \(e\notin C\) for any \(C\in\mathcal{C}\).
In the sequel, we will prove progressively a series of structural properties of \(G\). Before that, we need to define three additional graph families. Let \(KL_{2k-2}\) be the Klee ladder on \(2k-2\) vertices with \(k\geq 3\); let \(u_{2k-2}\) and \(v_{2k-2}\) be the two vertices contained in the two triangles, say \(u_{2k-2}u_{1}u_{2}\) and \(v_{2k-2}v_{1}v_{2}\), which are adjacent to each other. Moreover, we may assume that \(KL_{2k-2}\setminus\{u_{2k-2},v_{2k-2}\}\) contains two disjoint paths of length \(k-3\), one from \(u_{1}\) to \(v_{1}\) and the other from \(u_{2}\) to \(v_{2}\).
We remove the vertices \(u_{2k-2}\) and \(v_{2k-2}\) and replace them by four vertices, say \(u^{\prime}_{1}\), \(u^{\prime}_{2}\), \(v^{\prime}_{1}\), and \(v^{\prime}_{2}\), adjacent to \(u_{1}\), \(u_{2}\), \(v_{1}\), and \(v_{2}\), respectively, and we add a 4-cycle passing through the four new vertices. In fact, we can see the last operation as adding a complete graph on 4 vertices and removing a perfect matching. Up to symmetry, only three outcomes are possible.
* A _ladder_\(L_{2k}\) is obtained if the edges \(u^{\prime}_{1}v^{\prime}_{2}\) and \(u^{\prime}_{2}v^{\prime}_{1}\) are missing.
* A _Mobius ladder_\(ML_{2k}\) is obtained if the edges \(u^{\prime}_{1}v^{\prime}_{1}\) and \(u^{\prime}_{2}v^{\prime}_{2}\) are missing.
* A _quasi-ladder_\(QL_{2k}\) is obtained if the edges \(u^{\prime}_{1}u^{\prime}_{2}\) and \(v^{\prime}_{1}v^{\prime}_{2}\) are missing.
Observe that the ladder \(L_{2k}\) is the graph of a \(k\)-sided prism, and that ladders and Möbius ladders are vertex-transitive.
**Claim 1**.: The graph \(G\) is neither a ladder, nor a Möbius ladder, nor a quasi-ladder.
_Proof of Claim 1._ Let \(G\in\{L_{2n},ML_{2n},QL_{2n}:n\geq 4\}\), and let \(\mathcal{C}\) be a collection of disjoint circuits in \(G\). We prove that for every edge \(e\) there exists a perfect matching \(M_{e}\) containing \(e\) such that its complement is a Hamiltonian circuit, say \(C_{e}\); moreover, there exists yet another perfect matching \(M^{\prime}_{e}\) containing \(e\). The first perfect matching can be used to prove Theorem 1.6 unless \(\mathcal{C}=\{C_{e}\}\). If this is the case, then we can use \(M^{\prime}_{e}\).
In most of the cases, the second perfect matching \(M^{\prime}_{e}\) can be obtained from \(M_{e}\) by the following operation: We find a 4-circuit consisting of the edges \(e_{1},e_{2},e_{3},e_{4}\) (in this cyclic order) avoiding \(e\) and containing exactly two edges from \(M_{e}\), say \(e_{1}\) and \(e_{3}\). We then set \(M^{\prime}_{e}=M_{e}\setminus\{e_{1},e_{3}\}\cup\{e_{2},e_{4}\}\). In other words, \(M^{\prime}_{e}\) is obtained as the symmetric difference of \(M_{e}\) and a suitable 4-circuit.
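This symmetric-difference step can be phrased as a tiny helper; the sketch below is ours and only illustrates the operation on a toy 4-circuit.

```python
# The symmetric-difference step as a small helper (ours); edges are frozensets.
def flip_along_four_circuit(matching, four_circuit):
    # the 4-circuit avoids e and contains exactly two (necessarily opposite)
    # edges of the matching; the symmetric difference is again a perfect matching
    assert len(matching & four_circuit) == 2
    return matching ^ four_circuit

edge = lambda x, y: frozenset((x, y))
M  = {edge("a", "b"), edge("c", "d")}                       # a toy matching
C4 = {edge("a", "b"), edge("b", "c"), edge("c", "d"), edge("d", "a")}
print(flip_along_four_circuit(M, C4))                       # the two edges bc and da
```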
If \(G\) is a ladder or a Möbius ladder, then \(G\) is vertex-transitive, and there are only two edge orbits. It suffices to distinguish between \(e\) being an edge contained in two 4-circuits (vertical according to Figure 3) and \(e\) being contained in a single one (horizontal or diagonal). An example of
a pair of perfect matchings \(M_{e}\) and \(M_{e}^{\prime}\) having the desired properties is depicted in Figure 4.
Let \(G=QL_{2k}\) for some \(k\geq 4\). If \(e\) is an edge of the subgraph \(P_{2}\square P_{k-2}\) or an edge of the 4-circuit \(u_{1}^{\prime}v_{1}^{\prime}u_{2}^{\prime}v_{2}^{\prime}\) (see the definition of a quasi-ladder for the notation), then a pair of perfect matchings \(M_{e}\) and \(M_{e}^{\prime}\) having the desired properties can be found in the same way as in the previous case; see Figure 5 for an illustration.
Otherwise, let \(e=u_{1}u_{1}^{\prime}\) (for the remaining three edges the situation is symmetric). There is a unique Hamiltonian circuit \(C_{e}\) avoiding \(e\) and containing \(u_{2}u_{2}^{\prime}\), see Figure 6 for an illustration. In this case, there is another perfect matching \(M_{e}^{\prime}\) containing \(\{u_{1}u_{1}^{\prime},u_{2}u_{2}^{\prime},v_{1}v_{1}^{\prime},v_{2}v_{2}^{ \prime}\}\) and all the vertical edges of the subgraph \(P_{2}\square P_{k-2}\) except for the first and the last one.
**Claim 2**.: The graph \(G\) does not have any cyclic 3-edge-cuts.
_Proof of Claim 2._ Suppose that \(G\) admits a cyclic 3-edge-cut \(E(V^{\prime},V^{\prime\prime})\) with \(E(V^{\prime},V^{\prime\prime})=\{f_{1},f_{2},f_{3}\}=:X\), where each \(f_{i}=v_{i}^{\prime}v_{i}^{\prime\prime}\), for some \(v_{1}^{\prime},v_{2}^{\prime},v_{3}^{\prime}\in V^{\prime}\) and \(v_{1}^{\prime\prime},v_{2}^{\prime\prime},v_{3}^{\prime\prime}\in V^{\prime\prime}\). Since \(G\) has no 2-edge-cuts, the vertices \(v_{1}^{\prime},v_{2}^{\prime},v_{3}^{\prime},v_{1}^{\prime\prime},v_{2}^{\prime\prime},v_{3}^{\prime\prime}\) are all distinct.
Figure 4: An example of a Hamiltonian circuit \(C_{e}\) (drawn using double lines) avoiding a given edge \(e\) whose complement is a perfect matching \(M_{e}\) containing \(e\), for both possible positions of the prescribed edge \(e\) in a ladder (top line) or a Möbius ladder (bottom line). A second perfect matching \(M_{e}^{\prime}\) can be obtained by the symmetric difference with the grey 4-circuit.
Figure 3: An illustration of a Klee ladder, a ladder, a Möbius ladder, and a quasi-ladder.
Figure 5: An example of a Hamiltonian circuit \(C_{e}\) avoiding a given edge \(e\) (drawn using double lines) whose complement is a perfect matching \(M_{e}\) containing \(e\), for edges contained in the grid \(P_{2}\square P_{k-2}\) (top line) and in the complementary 4-circuit (bottom line) of a quasi-ladder. A second perfect matching \(M_{e}^{\prime}\) can be obtained by the symmetric difference with the grey 4-circuit.
Figure 6: An example of a Hamiltonian circuit \(C_{e}\) avoiding a given edge \(e\) (drawn using double lines) whose complement is a perfect matching \(M_{e}\) containing \(e\), for an edge \(e\) joining a vertex in the grid \(P_{2}\square P_{k-2}\) to a vertex of the complementary 4-circuit in a quasi-ladder (top line, two cases depending on the parity of the length of the grid). A second perfect matching \(M_{e}^{\prime}\) (bottom).
Either there is no circuit in \(\mathcal{C}\) intersecting \(X\), or the cut \(X\) is intersected by a unique circuit \(C_{X}\) in \(\mathcal{C}\). Without loss of generality, we shall assume that when \(C_{X}\) exists, \(X\cap C_{X}=\{f_{2},f_{3}\}\).
Let \(G^{\prime}\) and \(G^{\prime\prime}\) be the two graphs obtained from \(G\) after deleting \(X\) and joining the vertices \(v^{\prime}_{i}\) to a new vertex \(v^{\prime}\), and the vertices \(v^{\prime\prime}_{i}\) to a new vertex \(v^{\prime\prime}\). For each \(i\in[3]\), let \(e^{\prime}_{i}=v^{\prime}_{i}v^{\prime}\) and \(e^{\prime\prime}_{i}=v^{\prime\prime}_{i}v^{\prime\prime}\).
Let
\[\mathcal{C}^{\prime}=\begin{cases}\{C\in\mathcal{C}\setminus\{C_{X}\}:C\cap E (G^{\prime})\neq\emptyset\}\cup\{(C_{X}\cap E(G^{\prime}))\cup\{e^{\prime}_{2},e^{\prime}_{3}\}\}&\text{if $C_{X}$ exists,}\\ \{C\in\mathcal{C}:C\cap E(G^{\prime})\neq\emptyset\}&\text{otherwise.}\end{cases}\]
Similarly, let
\[\mathcal{C}^{\prime\prime}=\begin{cases}\{C\in\mathcal{C}\setminus\{C_{X}\}:C \cap E(G^{\prime\prime})\neq\emptyset\}\cup\{(C_{X}\cap E(G^{\prime\prime})) \cup\{e^{\prime\prime}_{2},e^{\prime\prime}_{3}\}\}&\text{if $C_{X}$ exists,}\\ \{C\in\mathcal{C}:C\cap E(G^{\prime\prime})\neq\emptyset\}&\text{otherwise.} \end{cases}\]
It is not hard to see that \(\mathcal{C}^{\prime}\) (\(\mathcal{C}^{\prime\prime}\)) is a collection of disjoint circuits in \(G^{\prime}\) (in \(G^{\prime\prime}\), respectively). Every circuit \(C\neq C_{X}\) in \(\mathcal{C}\) corresponds to a circuit either in \(\mathcal{C}^{\prime}\) or in \(\mathcal{C}^{\prime\prime}\). The circuit \(C_{X}\) (if it exists) corresponds to two circuits \(C^{\prime}_{X}\) and \(C^{\prime\prime}_{X}\) in \(\mathcal{C}^{\prime}\) and \(\mathcal{C}^{\prime\prime}\), respectively.
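As an added illustration (ours, assuming networkx), the passage from \(G\) to \(G^{\prime}\) and \(G^{\prime\prime}\) can be written as a short routine; the demonstration uses the 3-prism, whose three rungs form a cyclic 3-edge-cut.

```python
# Splitting G along a cyclic 3-edge-cut into G' and G'' (Figure 7), sketched
# with networkx (ours).  On the 3-prism the two halves become copies of K_4.
import networkx as nx

def split_along_3_cut(G, V_prime, cut_edges):
    """V_prime is one side of the cut, cut_edges the three crossing edges.
    Each side keeps its induced subgraph and gains one new vertex joined to
    the three endvertices of the cut on that side."""
    V_prime = set(V_prime)
    G1 = G.subgraph(V_prime).copy()
    G2 = G.subgraph(set(G) - V_prime).copy()
    for a, b in cut_edges:
        u, w = (a, b) if a in V_prime else (b, a)
        G1.add_edge("v'", u)
        G2.add_edge("v''", w)
    return G1, G2

prism = nx.Graph([(0, 1), (1, 2), (2, 0), (3, 4), (4, 5), (5, 3),
                  (0, 3), (1, 4), (2, 5)])
G1, G2 = split_along_3_cut(prism, {0, 1, 2}, [(0, 3), (1, 4), (2, 5)])
assert all(d == 3 for _, d in G1.degree()) and all(d == 3 for _, d in G2.degree())
```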
**Case A.** We first consider the case when \(G\) does not admit any triangles, and claim that \(G^{\prime}\) (similarly \(G^{\prime\prime}\)) is not Klee. For, suppose that \(G^{\prime}\) is Klee. Since \(G\) has no triangles, \(|V(G^{\prime})|\geq 6\), and so, by Lemma 2.6, \(G^{\prime}\) must admit two disjoint triangles. This is impossible since any triangle in \(G^{\prime}\) must contain the vertex \(v^{\prime}\). Hence, when \(G\) does not admit any triangles, \(G^{\prime}\) and \(G^{\prime\prime}\) are both not Klee.
Without loss of generality, we can also assume that at least one of the endvertices of \(e\) corresponds to a vertex in \(V^{\prime}\). We consider two cases, depending on the existence of \(C_{X}\).
_Case A1._ First, consider the case when \(C_{X}\) does not exist. When \(e\in X\), say \(e=f_{1}\), then, by minimality of \(G\), there exists a perfect matching \(M^{\prime}\) of \(G^{\prime}\) (\(M^{\prime\prime}\) of \(G^{\prime\prime}\)) containing \(e^{\prime}_{1}\) (\(e^{\prime\prime}_{1}\)), intersecting every circuit in \(\mathcal{C}^{\prime}\) (in \(\mathcal{C}^{\prime\prime}\), respectively). Consequently, \(M=M^{\prime}\cup M^{\prime\prime}\cup\{f_{1}\}\setminus\{e^{\prime}_{1},e^{ \prime\prime}_{1}\}\) is a perfect matching of \(G\) containing \(e=f_{1}\), intersecting every circuit in \(\mathcal{C}\).
It remains to consider the case when \(e\notin X\), and so the endvertices of \(e\) both correspond to vertices in \(G^{\prime}\). Once again, for simplicity, we shall refer to this edge as \(e\). Let \(M^{\prime}\) be a perfect matching of \(G^{\prime}\) containing \(e\) intersecting every circuit in \(\mathcal{C}^{\prime}\). Without loss of generality, we may assume that \(e^{\prime}_{1}\in M^{\prime}\).
Figure 7: The graphs \(G^{\prime}\) and \(G^{\prime\prime}\) when \(G\) admits a cyclic 3-edge-cut \(\{f_{1},f_{2},f_{3}\}\).
Let \(M^{\prime\prime}\) be a perfect matching of \(G^{\prime\prime}\) containing \(e^{\prime\prime}_{1}\) intersecting every circuit in \(\mathcal{C}^{\prime\prime}\). Let \(M=M^{\prime}\cup M^{\prime\prime}\cup\{f_{1}\}\setminus\{e^{\prime}_{1},e^{\prime\prime}_{1}\}\). This is a perfect matching of \(G\) containing \(e\), intersecting every circuit in \(\mathcal{C}\), a contradiction.
_Case A2._ Suppose that \(C_{X}\) exists. When \(e\in X\), we have that \(e=f_{1}\) by the choice of \(C_{X}\), and so, by the minimality of \(G\), there exists a perfect matching \(M^{\prime}\) of \(G^{\prime}\) (\(M^{\prime\prime}\) of \(G^{\prime\prime}\)) containing \(e^{\prime}_{1}\) (\(e^{\prime\prime}_{1}\)), intersecting every circuit in \(\mathcal{C}^{\prime}\) (in \(\mathcal{C}^{\prime\prime}\), respectively). Consequently, \(M=M^{\prime}\cup M^{\prime\prime}\cup\{f_{1}\}\setminus\{e^{\prime}_{1},e^{ \prime\prime}_{1}\}\) is a perfect matching of \(G\) containing \(e=f_{1}\). Clearly, every circuit in \(\mathcal{C}\setminus\{C_{X}\}\) is intersected by \(M\). The circuit \(C_{X}\) must be intersected by \(M\) since \(C^{\prime}_{X}\) (\(C^{\prime\prime}_{X}\)) contains an edge of \(M^{\prime}\) (\(M^{\prime\prime}\)), not incident to \(v^{\prime}\) (\(v^{\prime\prime}\), respectively).
When \(e\notin X\), the endvertices of \(e\) both correspond to vertices in \(G^{\prime}\). Once again, for simplicity, we shall refer to this edge as \(e\). Let \(M^{\prime}\) be a perfect matching of \(G^{\prime}\) containing \(e\) intersecting every circuit in \(\mathcal{C}^{\prime}\). We have \(e^{\prime}_{i}\in M^{\prime}\) for some \(i\in[3]\). Let \(M^{\prime\prime}\) be a perfect matching of \(G^{\prime\prime}\) containing \(e^{\prime\prime}_{i}\) intersecting every circuit in \(\mathcal{C}^{\prime\prime}\). Let \(M=M^{\prime}\cup M^{\prime\prime}\cup\{f_{i}\}\setminus\{e^{\prime}_{i},e^{ \prime\prime}_{i}\}\). This is a perfect matching of \(G\) containing \(e\). As before, \(M\) intersects every circuit in \(\mathcal{C}\) unless \(i=1\) and no edge of \(G^{\prime}\) or \(G^{\prime\prime}\) corresponding to an edge of \(C_{X}\) is in \(M^{\prime}\) or \(M^{\prime\prime}\), which is impossible since \(C^{\prime}_{X}\) (\(C^{\prime\prime}_{X}\)) is a circuit in \(\mathcal{C}^{\prime}\) (\(\mathcal{C}^{\prime\prime}\)), so it contains an edge of \(M^{\prime}\) (\(M^{\prime\prime}\)), not incident to \(v^{\prime}\) (\(v^{\prime\prime}\), respectively).
**Case B.** What remains to be considered is the case when \(G\) admits a triangle. Consequently, without loss of generality, we can assume that \(G^{\prime\prime}\) is \(K_{4}\). We note that in this case, \(G^{\prime}\) cannot be Klee because otherwise \(G\) itself would be Klee. Thus, the inductive hypothesis can only be applied to \(G^{\prime}\) but not to \(G^{\prime\prime}\). As in Case A, we can assume that at least one of the endvertices of \(e\) corresponds to a vertex in \(V^{\prime}\), since if the endvertices of \(e\) both belong to \(V^{\prime\prime}\), say \(e=v^{\prime\prime}_{i}v^{\prime\prime}_{j}\), a perfect matching of \(G\) contains \(e\) if and only if it contains \(f_{k}\), where \(\{i,j,k\}=[3]\). We proceed as in Case A and note that the perfect matching \(M^{\prime}\) containing (the edge corresponding to) \(e\) intersecting every circuit in \(\mathcal{C}^{\prime}\) obtained after applying the inductive hypothesis to \(G^{\prime}\) can be easily extended to a perfect matching \(M\) of \(G\) containing \(e\). What remains to show is that \(M\) intersects every circuit in \(\mathcal{C}\). The only circuit possibly not intersected by \(M\) is \(C_{X}\), if it exists. However, this can only happen if \(i=1\), and, if this is the case, then, in particular, \(C^{\prime}_{X}\) is a circuit in \(G^{\prime}\) and so contains an edge of \(M^{\prime}\) not incident to \(v^{\prime}\). This implies that \(C_{X}\) contains the corresponding edge of \(M\) in \(G\), a contradiction.
**Claim 3**.: The graph \(G\) does not have any cyclic 4-edge-cuts.
_Proof of Claim 3._ Suppose first that, in particular, \(G\) has a 4-circuit \(C=(v^{\prime\prime}_{1},v^{\prime\prime}_{2},v^{\prime\prime}_{3},v^{\prime \prime}_{4})\). Let \(v^{\prime}_{1},v^{\prime}_{2},v^{\prime}_{3},v^{\prime}_{4}\) be the vertices in \(G-C\) respectively adjacent to \(v^{\prime\prime}_{1},v^{\prime\prime}_{2},v^{\prime\prime}_{3},v^{\prime \prime}_{4}\), let \(f_{i}=v^{\prime}_{i}v^{\prime\prime}_{i}\) for \(i\in\{1,2,3,4\}\) and let \(X=\{f_{1},f_{2},f_{3},f_{4}\}\). The vertices \(v^{\prime}_{i}\) are pairwise distinct since \(G\) does not have any cyclic 3-cuts.
Let \(\{i,j,k\}=\{2,3,4\}\). We denote by \(G_{1i}\) the graph obtained after adding two new vertices \(x\) and \(y\) to \(G-C\), such that:
* \(x\) and \(y\) are adjacent;
* \(v^{\prime}_{1}\) and \(v^{\prime}_{i}\) are adjacent to \(x\); and
* \(v^{\prime}_{j}\) and \(v^{\prime}_{k}\) are adjacent to \(y\).
It is known that the graph \(G_{1i}\) is 3-connected whenever \(G\) is cyclically 4-edge-connected [2]. We claim that \(G_{1i}\) is not Klee, for any \(i\in\{2,3,4\}\). For, suppose not. Since \(G\) does not admit any cyclic 3-cuts, by Lemma 2.6, the only two possible triangles in \(G_{1i}\) are \((v^{\prime}_{1},v^{\prime}_{i},x)\) and \((v^{\prime}_{j},v^{\prime}_{k},y)\). Moreover, since \(x\) is adjacent to \(y\), by Lemma 2.7, \(G_{1i}\) is a Klee ladder. For every \(i\in\{2,3,4\}\), this implies that \(G\) is a graph isomorphic to a ladder, a Möbius ladder, or a quasi-ladder, which is a contradiction.
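The construction of \(G_{1i}\) can likewise be sketched in code; the routine below is ours (assuming networkx), and the demonstration on the 3-dimensional cube is only meant to show that the operation returns a cubic graph.

```python
# The graph G_{1i} obtained from a 4-circuit, sketched with networkx (ours).
# The demonstration uses the cube, where the outside neighbours of a
# 4-circuit are pairwise distinct.
import networkx as nx

def contract_four_circuit(G, circuit, outside, i):
    """circuit lists v''_1..v''_4, outside lists their outside neighbours
    v'_1..v'_4 in the same order, and i in {2, 3, 4} selects the pairing."""
    H = G.copy()
    H.remove_nodes_from(circuit)
    j, k = [t for t in (2, 3, 4) if t != i]
    H.add_edge("x", "y")
    H.add_edge("x", outside[0])          # v'_1
    H.add_edge("x", outside[i - 1])      # v'_i
    H.add_edge("y", outside[j - 1])      # v'_j
    H.add_edge("y", outside[k - 1])      # v'_k
    return H

cube = nx.Graph()
face, opposite = ["a1", "a2", "a3", "a4"], ["b1", "b2", "b3", "b4"]
for i in range(4):
    cube.add_edge(face[i], face[(i + 1) % 4])             # the 4-circuit C
    cube.add_edge(opposite[i], opposite[(i + 1) % 4])     # the opposite face
    cube.add_edge(face[i], opposite[i])                   # the connecting edges
H = contract_four_circuit(cube, face, opposite, 3)
assert all(d == 3 for _, d in H.degree())                 # G_{13} is again cubic
```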
We proceed by considering whether \(e\) belongs to \(C\), \(X\), or \(G-C\).
**Case A.** If \(e\in E(C)\), then for every \(i\in\{2,3,4\}\), every perfect matching of \(G_{1i}\) containing \(e^{\prime}=xy\) extends to a perfect matching of \(G\) containing \(e\). The cut \(X\) contains an even number of edges belonging to some circuit in \(\mathcal{C}\). Moreover, since \(e\) lies on no circuit of \(\mathcal{C}\), the set \(E(C)\) can contain at most three edges belonging to circuits in \(\mathcal{C}\), and so \(C\not\in\mathcal{C}\).
If \(X\) contains no circuit edges, then we can set \(\mathcal{C}^{\prime}=\mathcal{C}\) and apply induction on any \(G^{\prime}=G_{1i}\) to find a perfect matching \(M^{\prime}\) containing \(e^{\prime}\) intersecting every circuit in \(\mathcal{C}^{\prime}\), which readily extends to a perfect matching \(M\) containing \(e\) intersecting every circuit in \(\mathcal{C}\).
If there is a single circuit intersecting \(X\) twice, say \(C_{X}\) passing through the edges \(f_{j}\) and \(f_{k}\), then we apply induction on the graph \(G^{\prime}=G_{1i}\) where \(\{1,i\}=\{j,k\}\) (if \(1\in\{j,k\}\)), or \(|\{1,i,j,k\}|=4\) (otherwise). The circuit \(C^{\prime}_{X}\) in \(G^{\prime}\) corresponding to \(C_{X}\) contains two edges both incident to either \(x\) or \(y\). Hence, if a perfect matching \(M^{\prime}\) containing \(e^{\prime}=xy\) intersects every circuit in \(\mathcal{C}^{\prime}=(\mathcal{C}\setminus\{C_{X}\})\cup\{C^{\prime}_{X}\}\), then it extends to a perfect matching containing \(e\) intersecting every circuit in \(\mathcal{C}\), since \(C^{\prime}_{X}\) contains an edge in \(M^{\prime}\) not incident to \(x\) (nor \(y\)).
If there are two distinct circuits intersecting \(X\) twice, say \(C_{X}\) passing through the edges \(f_{1}\) and \(f_{2}\), and \(D_{X}\) passing through the edges \(f_{3}\) and \(f_{4}\), or if there is a single circuit intersecting \(X\) four times, say \(C_{X}\) passing through the vertices \(v^{\prime}_{1},v^{\prime\prime}_{1},v^{\prime\prime}_{2},v^{\prime}_{2}\), and also \(v^{\prime}_{3},v^{\prime\prime}_{3},v^{\prime\prime}_{4},v^{\prime}_{4}\), then we can apply induction on the graph \(G_{12}\) with \(e^{\prime}=xy\) just like in the previous case.
**Case B.** When \(e\in X\), say \(e=f_{1}\), then every perfect matching of \(G^{\prime}=G_{13}\) containing \(e^{\prime}=xv^{\prime}_{1}\) extends to a perfect matching of \(G\) containing \(e\) in a unique way. If there is no circuit in \(\mathcal{C}\) intersecting \(X\), then we can set \(\mathcal{C}^{\prime}=\mathcal{C}\) and apply induction directly. If there is a circuit in \(\mathcal{C}\) intersecting \(X\), say \(C_{X}\), then \(|C_{X}\cap X|=2\). The corresponding circuit \(C^{\prime}_{X}\) in \(G^{\prime}\) is well-defined: it always contains \(y\) and possibly also \(x\) (when \(C_{X}\cap X\neq\{f_{2},f_{4}\}\)). A perfect matching \(M^{\prime}\) in \(G^{\prime}\) containing \(e^{\prime}\) intersecting every circuit in \(\mathcal{C}^{\prime}=(\mathcal{C}\setminus\{C_{X}\})\cup\{C^{\prime}_{X}\}\) intersects \(C^{\prime}_{X}\) at a cut-edge incident to \(y\) or an edge of \(G-C\). In both cases, the corresponding perfect matching \(M\) in \(G\) containing \(e\) intersects every circuit in \(\mathcal{C}\), since \(M\) intersects \(C_{X}\) at an edge in \(X\) or an edge of \(G-C\).
**Case C.** It remains to consider the case when \(e\in G-C\). Let \(G^{\prime}=G_{13}\) and let \(e^{\prime}\) be the edge of \(G^{\prime}\) corresponding to \(e\) in \(G\). Every perfect matching \(M^{\prime}\) of \(G^{\prime}\) containing \(e^{\prime}\) and not containing \(xy\) extends to a perfect matching \(M\) of \(G\) containing \(e\) in a unique way; every perfect matching \(M^{\prime}\) of \(G^{\prime}\) containing \(e^{\prime}\) and \(xy\) extends to a perfect matching \(M\) of \(G\) in two distinct ways, whose symmetric difference is the 4-circuit \(C\). In all the cases, we obtain a perfect matching \(M\) of \(G\) containing at least one edge of \(C\).
If \(X\) contains no edges belonging to any circuit in \(\mathcal{C}\), then we can set \(\mathcal{C}^{\prime}=\mathcal{C}\setminus\{C\}\) and apply induction directly. The circuit \(C\) in particular (if it is in \(\mathcal{C}\)) is always intersected by at least one edge of \(M\).
If there is a single circuit intersecting \(X\) twice, say \(C_{X}\), passing through the edges \(f_{1}\)
and \(f_{i}\) for some \(i\in\{2,3,4\}\), then the corresponding circuit \(C^{\prime}_{X}\) in \(G^{\prime}\) is well-defined: it always contains \(x\) and possibly also \(y\) (when \(C_{X}\cap X\neq\{f_{1},f_{3}\}\)). We can set \(\mathcal{C}^{\prime}=(\mathcal{C}\setminus\{C_{X}\})\cup\{C^{\prime}_{X}\}\) and apply induction. If \(M^{\prime}\) contains an edge of \(C^{\prime}_{X}\) incident to neither \(x\) nor \(y\), then \(M\) contains an edge of \(C_{X}\) not incident to any vertex of \(C\). If \(M^{\prime}\) contains the edge \(xy\), then amongst the two possible extensions of \(M^{\prime}\) into \(M\) we can always choose one that contains at least one edge of \(C_{X}\). If \(M^{\prime}\) contains an edge incident to \(x\) or to \(y\) distinct from \(xy\), then \(M\) contains the corresponding edge in \(X\). In all the cases, it is possible to extend a perfect matching \(M^{\prime}\) of \(G^{\prime}\) containing \(e^{\prime}\) and intersecting every circuit in \(\mathcal{C}^{\prime}\) into a perfect matching \(M\) of \(G\) containing \(e\) and intersecting every circuit in \(\mathcal{C}\).
If there are two distinct circuits intersecting \(X\) twice, say \(C_{X}\) passing through the edges \(f_{1}\) and \(f_{2}\) and \(D_{X}\) passing through the edges \(f_{3}\) and \(f_{4}\), then we apply induction on \(G^{\prime}\) with \(\mathcal{C}^{\prime}=\mathcal{C}\setminus\{C_{X},D_{X}\}\). If the perfect matching \(M^{\prime}\) containing \(e^{\prime}\) and intersecting every circuit in \(\mathcal{C}^{\prime}\) obtained by induction also contains \(xy\), then we can choose \(M\) to contain both \(v^{\prime\prime}_{1}v^{\prime\prime}_{2}\) and \(v^{\prime\prime}_{3}v^{\prime\prime}_{4}\), and so it intersects both \(C_{X}\) and \(D_{X}\) as well. If \(M^{\prime}\) does not contain \(xy\), then \(|M\cap\{f_{1},f_{2},f_{3},f_{4}\}|=2\). If \(M\) contains exactly one of \(f_{1}\) and \(f_{2}\) then it also contains one of \(f_{3}\) and \(f_{4}\), and so \(M\) intersects both \(C_{X}\) and \(D_{X}\). If \(\{f_{1},f_{2}\}\subset M\), then \(v^{\prime\prime}_{3}v^{\prime\prime}_{4}\in M\); similarly, if \(\{f_{3},f_{4}\}\subset M\), then \(v^{\prime\prime}_{1}v^{\prime\prime}_{2}\in M\). In all the cases \(M\) intersects both \(C_{X}\) and \(D_{X}\), as desired.
If there is a single circuit intersecting \(X\) four times, say \(C_{X}\) passing through \(v^{\prime}_{1}v^{\prime\prime}_{1}v^{\prime\prime}_{2}v^{\prime}_{2}\) and also \(v^{\prime}_{3}v^{\prime\prime}_{3}v^{\prime\prime}_{4}v^{\prime}_{4}\), then we can apply induction on the graph \(G^{\prime}\) with \(\mathcal{C}^{\prime}=\mathcal{C}\setminus\{C_{X}\}\) just like in the previous case.
From this point on we may assume that \(G\) does not contain any \(4\)-circuits. In particular, for every cyclic \(4\)-edge-cut \(E(V^{\prime},V^{\prime\prime})\) both sides have at least six vertices. Suppose that \(G\) admits a cyclic 4-edge-cut \(E(V^{\prime},V^{\prime\prime})\) with \(E(V^{\prime},V^{\prime\prime})=\{f_{1},f_{2},f_{3},f_{4}\}=:X\), where each \(f_{i}=v^{\prime}_{i}v^{\prime\prime}_{i}\), for some \(v^{\prime}_{1},v^{\prime}_{2},v^{\prime}_{3},v^{\prime}_{4}\in V^{\prime}\) and \(v^{\prime\prime}_{1},v^{\prime\prime}_{2},v^{\prime\prime}_{3},v^{\prime \prime}_{4}\in V^{\prime\prime}\). Since \(G\) has no 3-edge-cuts, the vertices \(v^{\prime}_{1},v^{\prime}_{2},v^{\prime}_{3},v^{\prime}_{4},v^{\prime\prime}_{ 1},v^{\prime\prime}_{2},v^{\prime\prime}_{3},v^{\prime\prime}_{4}\) are all distinct.
We define graphs \(G^{\prime}_{1i}\) and \(G^{\prime\prime}_{1i}\) for \(i\in\{2,3,4\}\) analogously as in the previous part. We denote by \(x^{\prime}\) and \(y^{\prime}\) (\(x^{\prime\prime}\) and \(y^{\prime\prime}\)) the two new vertices in \(G^{\prime}_{1i}\) (in \(G^{\prime\prime}_{1i}\)), and by \(e^{\prime}_{1}\), \(e^{\prime}_{2}\), \(e^{\prime}_{3}\), \(e^{\prime}_{4}\) (\(e^{\prime\prime}_{1}\), \(e^{\prime\prime}_{2}\), \(e^{\prime\prime}_{3}\), \(e^{\prime\prime}_{4}\)) the edges of \(G^{\prime}_{1i}\) (of \(G^{\prime\prime}_{1i}\), respectively) corresponding to \(f_{1}\), \(f_{2}\), \(f_{3}\), \(f_{4}\), respectively, for \(i\in\{2,3,4\}\). These graphs are all 3-connected [2]. None of these graphs can be a Klee-graph: if this was the case, it would have to be a Klee ladder on at least eight vertices, but there are no 4-circuits at all in \(G\), so this is impossible.
Consider first the case when \(e\in X\), say \(e=f_{1}\). If there is a circuit \(C_{X}\) in \(\mathcal{C}\) intersecting \(X\), then \(e\notin C_{X}\) and \(|C_{X}\cap X|=2\). We may assume that \(C_{X}\cap X=\{f_{2},f_{3}\}\). We consider all the three graphs \(G^{\prime}_{12}\), \(G^{\prime}_{13}\), and \(G^{\prime}_{14}\) (and all the three graphs \(G^{\prime\prime}_{12}\), \(G^{\prime\prime}_{13}\), and \(G^{\prime\prime}_{14}\)) at the same time. The circuit \(C_{X}\) (if it exists) corresponds to a circuit \(C^{\prime}_{X}\) (\(C^{\prime\prime}_{X}\)) in each of them in a natural way, covering either one or two vertices amongst \(x^{\prime}\) and \(y^{\prime}\) (\(x^{\prime\prime}\) and \(y^{\prime\prime}\), respectively). If \(C_{X}\) does not exist, we shall proceed in the same manner, but letting \(C_{X}\), \(C^{\prime}_{X}\), and \(C^{\prime\prime}_{X}\) be equal to \(\emptyset\). We apply induction with \(e^{\prime}=e^{\prime}_{1}\) (\(e^{\prime\prime}=e^{\prime\prime}_{1}\)) and \(\mathcal{C}^{\prime}=((\mathcal{C}\setminus\{C_{X}\})\cap E(G^{\prime}_{1i})) \cup\{C^{\prime}_{X}\}\) (\(\mathcal{C}^{\prime\prime}=((\mathcal{C}\setminus\{C_{X}\})\cap E(G^{\prime \prime}_{1i}))\cup\{C^{\prime\prime}_{X}\}\), respectively). Let \(M^{\prime}_{i}\) (\(M^{\prime\prime}_{i}\)) be a perfect matching in \(G^{\prime}_{1i}\) (\(G^{\prime\prime}_{1i}\)) containing \(e^{\prime}\) (\(e^{\prime\prime}\)) intersecting every circuit in \(\mathcal{C}^{\prime}\) (in \(\mathcal{C}^{\prime\prime}\), respectively). Every perfect matching amongst \(M^{\prime}_{2}\), \(M^{\prime}_{3}\), and \(M^{\prime}_{4}\) contains exactly one edge \(e^{\prime}_{k}\) corresponding to a cut edge \(f_{k}\) for some \(k\in\{2,3,4\}\) (besides the edge \(e^{\prime}\) corresponding to \(f_{1}\)) and the three values of \(k\) cannot all be the same for the
three perfect matchings. The same thing holds for the other three perfect matchings \(M_{2}^{\prime\prime}\), \(M_{3}^{\prime\prime}\), and \(M_{4}^{\prime\prime}\). Therefore, for some \(k\in\{2,3,4\}\) there exist two perfect matchings \(M_{i}^{\prime}\) and \(M_{j}^{\prime\prime}\) containing the edge \(e_{k}^{\prime}\) and \(e_{k}^{\prime\prime}\), respectively. We can combine them together into a perfect matching \(M\) containing \(e\) and \(f_{k}\), intersecting every circuit in \(\mathcal{C}\). In particular, if \(C_{X}\) exists, then it can only be avoided by \(M\) if \(k=4\), but then \(M_{i}^{\prime}\) (\(M_{j}^{\prime\prime}\)) cannot contain any edge of \(C_{X}^{\prime}\) (\(C_{X}^{\prime\prime}\)) incident to \(x^{\prime}\) or to \(y^{\prime}\) (to \(x^{\prime\prime}\) or to \(y^{\prime\prime}\)), so it intersects \(C_{X}^{\prime}\) inside \(G[V^{\prime}]\) (\(C_{X}^{\prime\prime}\) inside \(G[V^{\prime\prime}]\), respectively). Consequently, \(M\) intersects \(C_{X}\) inside \(G[V^{\prime}]\) and \(G[V^{\prime\prime}]\).
Consider next the case where \(e\notin X\). We may assume that \(e\in G[V^{\prime}]\). Let \(\mathcal{C}_{X}\) be the set of circuits in \(\mathcal{C}\) intersecting \(X\). We have \(|\mathcal{C}_{X}|\leq 2\), and even if there is a single circuit in \(\mathcal{C}_{X}\), it may contain all four edges of \(X\). Let \(\mathcal{C}_{0}^{\prime}\) (\(\mathcal{C}_{0}^{\prime\prime}\)) be the set of circuits from \(\mathcal{C}\) within \(G[V^{\prime}]\) (\(G[V^{\prime\prime}]\), respectively). Given \(G^{\prime}=G_{1i}^{\prime}\) (\(G^{\prime\prime}=G_{1i}^{\prime\prime}\)) for some arbitrary \(i\in\{2,3,4\}\), let \(\mathcal{C}_{X}^{\prime}\) (\(\mathcal{C}_{X}^{\prime\prime}\)) be the set of circuits obtained from the subpaths of circuits in \(\mathcal{C}_{X}\) contained in \(G[V^{\prime}]\) (in \(G[V^{\prime\prime}]\)) by adding the necessary edges from \(\{e_{1}^{\prime},e_{2}^{\prime},e_{3}^{\prime},e_{4}^{\prime}\}\) (from \(\{e_{1}^{\prime\prime},e_{2}^{\prime\prime},e_{3}^{\prime\prime},e_{4}^{\prime \prime}\}\)) and eventually also the edge \(x^{\prime}y^{\prime}\) (\(x^{\prime\prime}y^{\prime\prime}\), respectively), if needed. Observe that \(|\mathcal{C}_{X}^{\prime}|=2\) (\(|\mathcal{C}_{X}^{\prime\prime}|=2\)) is possible when \(|\mathcal{C}_{X}|=1\), and vice-versa. Finally, let \(\mathcal{C}^{\prime}=\mathcal{C}_{0}^{\prime}\cup\mathcal{C}_{X}^{\prime}\) and \(\mathcal{C}^{\prime\prime}=\mathcal{C}_{0}^{\prime\prime}\cup\mathcal{C}_{X}^ {\prime\prime}\).
Let \(e^{\prime}\) be the edge in \(G^{\prime}\) corresponding to \(e\) in \(G\). By induction, we obtain a perfect matching \(M^{\prime}\) containing \(e^{\prime}\) intersecting every circuit in \(\mathcal{C}^{\prime}\).
Consider first the case when \(x^{\prime}y^{\prime}\in M^{\prime}\). We apply induction to obtain a perfect matching \(M^{\prime\prime}\) of \(G^{\prime\prime}=G_{1i}\), for any \(i\in\{2,3,4\}\), containing \(x^{\prime\prime}y^{\prime\prime}\) intersecting every circuit in \(\mathcal{C}^{\prime\prime}\). Then, \(M=(M^{\prime}\setminus\{x^{\prime}y^{\prime}\})\cup(M^{\prime\prime}\setminus \{x^{\prime\prime}y^{\prime\prime}\})\) is a perfect matching of \(G\) containing \(e\). It is easy to check that \(M\) intersects every circuit in \(\mathcal{C}_{0}^{\prime}\) and in \(\mathcal{C}_{0}^{\prime\prime}\); it remains to certify that \(M\) intersects all the circuits in \(\mathcal{C}_{X}\). If \(|\mathcal{C}_{X}|\leq 1\), then we choose \(G_{1i}^{\prime\prime}\) in such a way that \(x^{\prime\prime}y^{\prime\prime}\) does not belong to any circuit in \(\mathcal{C}_{X}^{\prime\prime}\), and so \(M^{\prime\prime}\) contains at least one edge (not incident to \(x^{\prime\prime}\) or \(y^{\prime\prime}\)) of every circuit in \(\mathcal{C}_{X}^{\prime\prime}\), and so the circuit in \(\mathcal{C}_{X}\) will contain at least one edge from \(M\). If \(|\mathcal{C}_{X}|=2\), then it suffices to choose \(G_{1i}^{\prime\prime}\) in such a way that \(\mathcal{C}_{X}^{\prime\prime}\) contains two distinct circuits (avoiding \(x^{\prime\prime}y^{\prime\prime}\)), and then each of them will contain at least one edge (not incident to \(x^{\prime\prime}\) or \(y^{\prime\prime}\)) from \(M^{\prime\prime}\), and thus each circuit in \(\mathcal{C}_{X}\) will contain at least one edge from \(M\), as desired.
It remains to consider the case when for every choice of \(G^{\prime}=G_{1i}^{\prime}\), a perfect matching \(M_{i}^{\prime}\) of \(G^{\prime}\) containing \(e^{\prime}\) and intersecting every circuit in \(\mathcal{C}^{\prime}\) never contains the edge \(x^{\prime}y^{\prime}\). Without loss of generality, we may assume that for \(G_{12}^{\prime}\) the perfect matching \(M_{2}^{\prime}\) contains the edges \(e_{1}^{\prime}\) and \(e_{3}^{\prime}\). We then consider \(G_{13}^{\prime}\). Again, without loss of generality, the perfect matching \(M_{3}^{\prime}\) contains the edges \(e_{1}^{\prime}\) and \(e_{2}^{\prime}\). Finally, we apply induction on \(G^{\prime\prime}=G_{14}^{\prime\prime}\) with \(e^{\prime\prime}=e_{1}^{\prime\prime}\). Every perfect matching \(M^{\prime\prime}\) of \(G^{\prime\prime}\) containing \(e^{\prime\prime}\) contains either \(e_{2}^{\prime\prime}\) or \(e_{3}^{\prime\prime}\), so it can be combined with either \(M_{2}^{\prime}\) or \(M_{3}^{\prime}\) to give a perfect matching \(M\) of \(G\) containing \(e\). We may assume that \(e_{2}^{\prime\prime}\in M^{\prime\prime}\). It is easy to check that such a perfect matching \(M\) intersects all the circuits in \(\mathcal{C}_{0}^{\prime}\) and in \(\mathcal{C}_{0}^{\prime\prime}\); it remains to certify that \(M\) intersects all the circuits in \(\mathcal{C}_{X}\). The only circuit from \(\mathcal{C}_{X}\) potentially not intersected by \(M\) is the one containing the edges \(f_{3}\) and \(f_{4}\), say \(C_{X}\). However, the corresponding circuits \(C_{X}^{\prime}\) and \(C_{X}^{\prime\prime}\) in \(\mathcal{C}_{X}^{\prime}\) and \(\mathcal{C}_{X}^{\prime}\) (there is exactly one on each side) respectively contain an edge of \(M_{2}^{\prime}\) (not incident to \(x^{\prime}\) or \(y^{\prime}\)) and an edge of \(M^{\prime\prime}\) (not incident to \(x^{\prime\prime}\) or \(y^{\prime\prime}\)). Therefore, \(M\) intersects \(C_{X}\) at least twice, which is more than what is desired.
From this point on we may assume that \(G\) is cyclically 5-edge-connected. We now consider the edges at distance 2 from \(e\) (distance measured as the length of a shortest path joining corresponding vertices in the line graph of \(G\)).
**Claim 4**.: Let \(f\) be an edge at distance 2 from \(e\). Then, \(f\notin C\) for any \(C\in\mathcal{C}\).
_Proof of Claim 4._ We will use a procedure that transforms a cubic graph \(G\) into a cubic graph \(G^{\prime}\) smaller than \(G\), such that every perfect matching of \(G^{\prime}\) containing a certain edge can be extended into a perfect matching of \(G\) containing the corresponding edge.
This operation was already used by Voorhoeve [10] to study perfect matchings in bipartite cubic graphs and it is one of the main tools used for counting perfect matchings in general in [1]. This technique is also used by the three authors in [6] to prove Theorem 1.2.
Let \(f=uv\), let the neighbours of \(u\) distinct from \(v\) be \(\alpha\) and \(\gamma\), and let the neighbours of \(v\) distinct from \(u\) be \(\beta\) and \(\delta\). In particular, since \(G\) is cyclically 5-edge-connected, these four vertices are all distinct and non-adjacent. Without loss of generality, we may assume that \(\alpha\) is an endvertex of \(e\).
As shown in Figure 8, we obtain a smaller graph by deleting the endvertices of \(f\) (together with all edges incident to them) and adding the edges \(\alpha\beta\) and \(\gamma\delta\). Let this resulting graph be \(G^{\prime}\). We shall say that \(G^{\prime}\) is obtained after an \((\alpha\beta:\gamma\delta)_{uv}\)-reduction. It is well-known that when applying this operation, the cyclic edge-connectivity of a cubic graph can drop by at most 2. Since \(G\) is cyclically 5-edge-connected, \(G^{\prime}\) is cyclically 3-edge-connected.
Let the edge in \(G^{\prime}\) corresponding to \(e\), and the vertices in \(G^{\prime}\) corresponding to \(\alpha,\beta,\gamma,\delta\) be denoted by the same name. We recall that any perfect matching of \(G^{\prime}\) which contains \(e\) can be extended to a perfect matching of \(G\) containing the edge \(e\) (see also Figure 9). In fact, let \(M^{\prime}\) be a perfect matching of \(G^{\prime}\) containing \(e\). This is extended to a perfect matching \(M\) of \(G\) containing \(e\) as follows:
\[M=\begin{cases}M^{\prime}\cup\{u\gamma,v\delta\}\setminus\{\gamma\delta\}& \text{if $\gamma\delta\in M^{\prime}$,}\\ M^{\prime}\cup\{f\}&\text{otherwise.}\end{cases}\]
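To make the reduction and the extension rule concrete, here is a small sketch of ours (edges modelled as frozensets, demonstration on networkx's Petersen graph); it is not part of the original argument.

```python
# The (alpha beta : gamma delta)_{uv}-reduction of Figure 8 and the extension
# rule displayed above, sketched in Python (ours).
import networkx as nx

def reduction(G_edges, u, v, alpha, beta, gamma, delta):
    """Delete u and v together with their five incident edges and add the
    edges alpha-beta and gamma-delta."""
    e = frozenset
    removed = {e((u, v)), e((u, alpha)), e((u, gamma)), e((v, beta)), e((v, delta))}
    return (set(G_edges) - removed) | {e((alpha, beta)), e((gamma, delta))}

def extend_matching(M_prime, u, v, gamma, delta):
    """Extension of a perfect matching M' of G' (with e in M', so alpha-beta
    is not in M') to a perfect matching of G, mirroring the displayed formula."""
    e = frozenset
    if e((gamma, delta)) in M_prime:
        return (set(M_prime) - {e((gamma, delta))}) | {e((u, gamma)), e((v, delta))}
    return set(M_prime) | {e((u, v))}

P = nx.petersen_graph()
E = {frozenset(edge) for edge in P.edges()}
u, v = 0, 1                                     # an arbitrary edge f = uv of P
alpha, gamma = [w for w in P.neighbors(u) if w != v]
beta, delta = [w for w in P.neighbors(v) if w != u]
E_reduced = reduction(E, u, v, alpha, beta, gamma, delta)
assert len(E_reduced) == len(E) - 3             # five edges removed, two added
```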
Suppose that for some edge \(f\) at distance 2 from \(e\), \(f\) is in some circuit \(C_{f}\) in \(\mathcal{C}\). This means that exactly one of \(u\alpha\) and \(u\gamma\), and exactly one of \(v\beta\) and \(v\delta\) belong to \(C_{f}\). Without loss of generality, we may assume that \(u\alpha\in C_{f}\) if and only if \(v\beta\in C_{f}\) (otherwise, we rename \(\beta\) and \(\delta\)). Let \(G^{\prime}\) be the graph obtained from \(G\) after an \((\alpha\beta:\gamma\delta)_{uv}\)-reduction. Let \(C^{\prime}_{f}\) be the circuit in \(G^{\prime}\) corresponding to \(C_{f}\) in \(G\) obtained by replacing the 3-edge path passing through \(u\) and \(v\) by a single edge. Since \(G\) is of girth 5, \(C^{\prime}_{f}\) is a circuit of length at
least 3. Let \(\mathcal{C}^{\prime}=(\mathcal{C}\setminus\{C_{f}\})\cup\{C^{\prime}_{f}\}\) be the collection of disjoint circuits of \(G^{\prime}\) obtained by this reduction. This is portrayed in Figure 10.
Let us first assume that \(G^{\prime}\) is not a Klee-graph. Since \(G^{\prime}\) is cyclically 3-edge-connected and its order is strictly less than \(G\), it is not a counterexample. Let \(M^{\prime}\) be a perfect matching of \(G^{\prime}\) containing \(e\) intersecting all the circuits in \(\mathcal{C}^{\prime}\). We extend this perfect matching to a perfect matching \(M\) of \(G\) containing \(e\) as described above (see Figure 9), and claim that it intersects all the circuits in \(\mathcal{C}\). Every circuit \(C^{\prime}\neq C^{\prime}_{f}\) in \(\mathcal{C}^{\prime}\) is hit by an edge of \(M^{\prime}\) in \(G^{\prime}\), and so the corresponding circuit \(C\) is hit by the corresponding edge of \(M\) in \(G\). The circuit \(C^{\prime}_{f}\) is hit by an edge \(M^{\prime}\) in \(G^{\prime}\), and so the corresponding circuit \(C_{f}\) is hit by the corresponding edge in \(G\), unless \(\gamma\delta\in E(C_{f})\) and the hitting edge is \(\gamma\delta\), but then \(C_{f}\) is hit by both edges \(\gamma u\) and \(v\delta\). Observe that \(M^{\prime}\) cannot contain \(\alpha\beta\) because \(e\in M^{\prime}\).
Therefore, \(G^{\prime}\) must be Klee. Since \(G^{\prime}\) is obtained after an \((\alpha\beta:\gamma\delta)_{uv}\)-reduction, and \(G\) is cyclically 5-edge-connected, by Lemma 2.6, the graph \(G^{\prime}\) admits exactly two (disjoint) triangles \(T_{\ell}\) and \(T_{r}\) such that \(V(T_{\ell})=\{v_{\ell},\alpha,\beta\}\) and \(V(T_{r})=\{v_{r},\gamma,\delta\}\), for some \(v_{\ell}\) and \(v_{r}\) in \(G^{\prime}\). Let \(a,b,c,d\) be the vertices in \(G^{\prime}-\{v_{\ell},v_{r},\alpha,\beta,\gamma,\delta\}\) which are adjacent to \(\alpha,\beta,\gamma,\delta\), respectively (see Figure 11). Furthermore, since \(G\) is cyclically 5-edge-connected, by Lemma 2.7 the edge \(\alpha\beta\) (\(\gamma\delta\)) is the only edge in \(T_{\ell}\) (in \(T_{r}\)) which lies on a 4-circuit. Therefore, \((\alpha,\beta,b,a)\) and \((\gamma,\delta,d,c)\) are 4-circuits in \(G^{\prime}\). Next we show that \(a,b,c,d\) are pairwise distinct. Clearly, \(a\neq b\), and \(c\neq d\). Moreover, \(a\neq c\), and \(b\neq d\), otherwise \(G\) would admit a 4-circuit. What remains to show is that \(a\neq d\), and \(b\neq c\). We first note that since \(G\) is cubic, and \(a=d\) if and only if \(b=c\). Indeed, if \(a=d\) and \(b\neq c\), then, \(a\) is adjacent to \(\alpha,b,c,\delta\), a contradiction. Moreover, since \(G\) is cyclically 4-edge-connected, if \(a=d\), \(G\) would be the Petersen graph. However, it is an easy exercise to check that the Petersen graph is not a counterexample.
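This last remark about the Petersen graph can be confirmed by exhaustive search; the sketch below is ours (assuming networkx) and uses the fact that, having girth 5 and ten vertices, the Petersen graph admits at most two pairwise disjoint circuits.

```python
# Exhaustive check (ours) that the Petersen graph satisfies the statement:
# for every edge e and every collection of vertex-disjoint circuits there is a
# perfect matching containing e and meeting each circuit.
from itertools import combinations
import networkx as nx

P = nx.petersen_graph()
edges = [frozenset(e) for e in P.edges()]

def spans_circuit(edge_subset):
    # an edge subset is a single circuit iff it induces a connected 2-regular graph
    H = nx.Graph([tuple(e) for e in edge_subset])
    return nx.is_connected(H) and all(d == 2 for _, d in H.degree())

def vertex_set(edge_subset):
    return {v for e in edge_subset for v in e}

circuits = [frozenset(c) for k in range(3, 11)            # a circuit has <= 10 edges
            for c in combinations(edges, k) if spans_circuit(c)]
matchings = [frozenset(m) for m in combinations(edges, 5)
             if len(vertex_set(m)) == 10]

# collections of pairwise vertex-disjoint circuits (at most two, by girth 5)
collections = [{c} for c in circuits]
collections += [{c1, c2} for c1, c2 in combinations(circuits, 2)
                if not vertex_set(c1) & vertex_set(c2)]

for e in edges:
    for col in collections:
        assert any(e in M and all(M & c for c in col) for M in matchings)
print(len(circuits), "circuits,", len(matchings), "perfect matchings: check passed")
```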
Hence, \(a,b,c,d\) are four distinct vertices. Consequently, if we apply an \((\alpha\delta:\beta\gamma)_{uv}\)-reduction to \(G\) we can be sure that \(\alpha\delta\) and \(\beta\gamma\) do not lie on a triangle in the resulting graph
Figure 10: If \(f\in C_{f}\), then we apply induction on \(G^{\prime}\) — the graph obtained from \(G\) after an \((\alpha\beta:\gamma\delta)_{uv}\)-reduction. Dashed lines represent edges outside \(\mathcal{C}\) or \(\mathcal{C}^{\prime}\), respectively.
Figure 9: Extending a perfect matching of \(G^{\prime}\) containing \(e\) to a perfect matching of \(G\) containing \(e\). Dotted lines represent edges in \(M\) or \(M^{\prime}\).
\(G^{\prime\prime}\). In particular, \(G^{\prime\prime}\) is not Klee. Let \(\mathcal{C}^{\prime\prime}=\mathcal{C}\setminus\{C_{f}\}\). By the inductive hypothesis, \(G^{\prime\prime}\) admits a perfect matching \(M^{\prime\prime}\) containing \(e\) intersecting every circuit in \(\mathcal{C}^{\prime\prime}\). By extending the perfect matching \(M^{\prime\prime}\) to a perfect matching \(M\) of \(G\) containing \(e\) as described above (see Figure 9), we can deduce that \(M\) intersects every circuit in \(\mathcal{C}\), because, in particular, \(M\) contains exactly one edge from \(E(C_{f})\cap\{u\gamma,uv,v\beta\}\).
From this point on we may assume that no edge \(f\) at distance 2 from \(e\) is contained in a circuit in \(\mathcal{C}\). As a consequence, we have that no edge at distance at most 2 from \(e\) is contained in a circuit in \(\mathcal{C}\).
**Claim 5**.: Every vertex at distance 2 from \(e\) is traversed by a circuit in \(\mathcal{C}\).
_Proof of Claim 5._ Once again, let us consider an edge \(f=uv\) at distance 2 from \(e\), with vertices denoted \(\alpha\), \(\beta\), \(\gamma\), \(\delta\) as above. In particular, there can be a circuit in \(\mathcal{C}\) passing through an endvertex of \(f\) only if it passes through the edges \(\beta v\) and \(v\delta\).
Suppose that there is no such circuit. As we have seen in the above claim, at least one of the resulting graphs obtained by an \((\alpha\beta:\gamma\delta)_{uv}\)-reduction or an \((\alpha\delta:\gamma\beta)_{uv}\)-reduction is not Klee, and so, without loss of generality, we can assume that the graph \(G^{\prime}\) obtained after an \((\alpha\beta:\gamma\delta)_{uv}\)-reduction is not Klee. In \(G^{\prime}\), there is a perfect matching \(M^{\prime}\) containing \(e\) and intersecting every circuit in \(\mathcal{C}^{\prime}=\mathcal{C}\). It is easy to see that the perfect matching \(M\) (of \(G\)) containing \(e\) obtained as an extension of \(M^{\prime}\) still intersects all the circuits in \(\mathcal{C}\), a contradiction.
As a consequence of Claim 5, we have the following.
**Claim 6**.: The edge \(e\) does not belong to a 5-circuit.
_Proof of Claim 6._ Suppose that \(e\) belongs to a 5-circuit \(C=(t_{1},t_{2},t_{3},t_{4},t_{5})\). Let the vertices in \(G-V(C)\) which are adjacent to some vertex in \(C\) be \(v_{1},v_{2},v_{3},v_{4},v_{5}\), such that \(v_{i}\) is adjacent to \(t_{i}\) and \(e=t_{1}t_{2}\). Since \(G\) is cyclically 5-edge-connected, the \(v_{i}\)s are pairwise distinct. Moreover, by Claim 4, no edge in \(C\) can be contained in a circuit of \(\mathcal{C}\), but by Claim 5, the vertex \(t_{4}\) must be traversed by a circuit of \(\mathcal{C}\), which is clearly impossible.
We also show that \(e\) cannot be at distance 2 from a 5-circuit.
**Claim 7**.: Edges at distance 2 from \(e\) do not belong to a 5-circuit.
_Proof of Claim 7._ Suppose the above assertion is false and let \(C=(t_{1},t_{2},t_{3},t_{4},t_{5})\) be such a 5-circuit, with \(t_{1}\) being an endvertex of an edge adjacent to \(e\). We obtain a smaller graph \(G^{\prime}\) by deleting the edge \(t_{3}t_{4}\) and smoothing the vertices \(t_{3}\) and \(t_{4}\). It can be easily seen that \(G^{\prime}\) is cyclically 3-edge-connected and that it does not admit any triangles, because otherwise \(G\) would contain 4-circuits. For each \(i\in[5]\), let the vertex in \(V(G)-V(C)\) adjacent to \(t_{i}\) be denoted by \(t_{i}^{\prime}\). We proceed by first showing that a perfect matching \(M^{\prime}\) of \(G^{\prime}\) containing \(e\) can be extended to a perfect matching \(M\) of \(G\) containing \(e\). Without loss of generality, assume that \(t_{1}t_{2}\in M^{\prime}\). We extend this to a perfect matching of \(G\) as follows:
\[M=\begin{cases}M^{\prime}\cup\{t_{3}t_{4}\}&\text{if $t_{5}t_{5}^{\prime}\in M ^{\prime}$,}\\ M^{\prime}\cup\{t_{1}t_{5},t_{2}t_{3},t_{4}t_{4}^{\prime}\}\setminus\{t_{1}t_ {2},t_{4}^{\prime}t_{5}\}&\text{otherwise.}\end{cases}\]
Next, since in \(G\) no edge at distance at most 2 from \(e\) belongs to a circuit in \(\mathcal{C}\), and every vertex at distance 2 from an endvertex of \(e\) is traversed by one, we have that \(t_{2}t_{3}\) and \(t_{4}t_{5}\) belong to some circuit in \(\mathcal{C}\) (possibly the same). We consider two cases depending on whether or not the edge \(t_{3}t_{4}\) is a circuit edge.
1. When \(t_{3}t_{4}\) is a circuit edge, then the vertices \(t_{2},t_{3},t_{4},t_{5}\) are consecutive vertices on some circuit \(C_{X}\) in \(\mathcal{C}\). In this case, we let \(C_{X}^{\prime}=C_{X}\cup\{t_{1}t_{2},t_{1}t_{5}\}\setminus\{t_{2}t_{3},t_{3}t_ {4},t_{4}t_{5}\}\) and \(\mathcal{C}^{\prime}=(\mathcal{C}\setminus\{C_{X}\})\cup\{C_{X}^{\prime}\}\) to be a collection of disjoint circuits in \(G^{\prime}\). By the inductive hypothesis there exists a perfect matching \(M^{\prime}\) containing \(e\) intersecting every circuit in \(\mathcal{C}^{\prime}\). The perfect matching \(M\) of \(G\) containing \(e\) obtained from \(M^{\prime}\) as explained above clearly intersects every circuit in \(\mathcal{C}\) (it contains either \(t_{2}t_{3}\) or \(t_{3}t_{4}\)). This contradicts our initial assumption and so we must have the following case.
2. When \(t_{3}t_{4}\) is not a circuit edge, we let \(C_{X}\) and \(C_{Y}\) (with \(X\) not necessarily distinct from \(Y\)) to be the circuits containing the edges \(t_{2}t_{3}\) and \(t_{4}t_{5}\), respectively. The corresponding circuits \(C_{X}^{\prime}\) and \(C_{Y}^{\prime}\) in \(G^{\prime}\) are obtained by smoothing out the vertices \(t_{3}\) and \(t_{4}\). We then set \(\mathcal{C}^{\prime}=(\mathcal{C}\setminus\{C_{X},C_{Y}\})\cup\{C_{X}^{\prime},C_{Y}^{\prime}\}\). Note that the edges \(t_{2}t_{2}^{\prime}\) and \(t_{5}t_{5}^{\prime}\) belong to distinct circuits in \(\mathcal{C}^{\prime}\) if and only if they belong to distinct circuits in \(\mathcal{C}\). By the inductive hypothesis, there exists a perfect matching \(M^{\prime}\) of \(G^{\prime}\) containing \(e\) intersecting every circuit in \(\mathcal{C}^{\prime}\). Without loss of generality, assume that \(t_{1}t_{2}\in M^{\prime}\). The perfect matching \(M^{\prime}\) contains either \(t_{5}t_{5}^{\prime}\) or \(t_{5}t_{4}^{\prime}\), and consequently, so does the perfect matching \(M\) obtained from \(M^{\prime}\) as explained above. This implies that \(C_{Y}\) is intersected by \(M\). If \(C_{X}^{\prime}\neq C_{Y}^{\prime}\), then \(M^{\prime}\) contains an edge of \(C_{X}^{\prime}\) not incident to \(t_{2}\) in \(G^{\prime}\), and so \(M\) contains the corresponding edge of \(C_{X}\) in \(G\). Altogether, the perfect matching \(M\) obtained from \(M^{\prime}\) as shown above intersects every circuit in \(\mathcal{C}\). This is again a contradiction to our initial assumption that \(G\) is a counterexample -- thus proving our claim.
Let's get back to analysing an edge \(f=uv\) at distance 2 from \(e\). We cannot use the reduction portrayed in Figure 8 as we do not have a guarantee that we can obtain a perfect matching \(M\) intersecting the circuit in \(\mathcal{C}\) containing the edges \(v\beta\) and \(v\delta\), which we
shall denote by \(C_{v}\). Since \(G\) is cyclically 5-edge-connected, this latter circuit is of length at least 5. Let \(\delta,v,\beta,y,z\) be consecutive and distinct vertices on this circuit (see Figure 12). Moreover, let \(w\) and \(x\) be the vertices in \(G\) respectively adjacent to \(\beta\) and \(y\), such that \(w\beta,xy\notin C_{v}\). We proceed by applying an \((\alpha\beta:\gamma\delta)_{uv}\)-reduction followed by an \((\alpha x:wz)_{\beta y}\)-reduction as portrayed in Figure 12.
Let the resulting graph after these two reductions be denoted by \(G^{\prime}\), and let \(\mathcal{C}^{\prime}=\mathcal{C}\setminus\{C_{v}\}\). Since \(G^{\prime}\) is obtained by applying twice the reduction at an edge at distance 2 from \(e\), a perfect matching of \(G^{\prime}\) containing \(e\) can always be extended to a perfect matching of \(G\) containing \(e\) (recall that \(\beta y\) cannot be adjacent to \(e\), since \(\beta y\in C_{v}\)). Moreover, any such matching contains either the edge \(\beta y\) or the edge \(yz\), and so it also contains at least one edge of the circuit \(C_{v}\). Therefore, as long as \(G^{\prime}\) is cyclically 3-edge-connected and not Klee, by minimality of \(G\), there exists a perfect matching \(M^{\prime}\) of \(G^{\prime}\) containing \(e\) intersecting every circuit in \(\mathcal{C}^{\prime}\), which extends to a perfect matching \(M\) of \(G\) containing \(e\) intersecting every circuit in \(\mathcal{C}\). This contradicts our initial assumption that \(G\) is a counterexample.
Therefore, \(G^{\prime}\) is either Klee or admits a (cyclic) edge-cut of size at most 2.
**Claim 8**.: The graph \(G^{\prime}\) is not Klee.
_Proof of Claim 8._ Suppose that \(G^{\prime}\) is Klee. The edge \(\gamma\delta\) cannot be on a triangle otherwise the edges \(uv\) and \(u\gamma\) at distance 2 from \(e\) belong to a 5-circuit, contradicting Claim 7. Since \(G\) is cyclically 5-edge-connected, if \(G^{\prime}\) is Klee then we must have that \(\alpha x\) and \(wz\) each lie on a triangle (see Lemma 2.6). Therefore, in particular, if \(G^{\prime}\) is Klee, \(\alpha\) and \(x\) must have a common neighbour (in both \(G^{\prime}\) and \(G\), so it is not \(u\)). This common neighbour cannot be \(u^{\prime}\), the neighbour of \(\alpha\) not incident to \(e\) and distinct from \(u\), either, since \(x\) would then be a vertex at distance 2 from the endvertex \(\alpha\) of \(e\) (via \(u^{\prime}\)) and so, by Claim 5, it would be traversed by a circuit in \(\mathcal{C}\). However, the edges \(xu^{\prime}\) and \(xy\) are not in any circuit in \(\mathcal{C}\), a contradiction. Therefore, the common neighbour of \(x\) and \(\alpha\) is \(\alpha^{\prime}\), the other endvertex of \(e\). By Lemma 2.7, one edge of the triangle \((x,\alpha,\alpha^{\prime})\) lies on a 4-circuit in \(G^{\prime}\), which is not present in \(G\). First, consider the case when exactly one of \(\alpha^{\prime}x\) and \(\alpha^{\prime}\alpha\) lie on a 4-circuit, say \((\alpha^{\prime},s,t,x)\) or \((\alpha^{\prime},s,t,\alpha)\) accordingly. Since the edges \(\alpha^{\prime}s,xt,\alpha^{\prime}x,\alpha^{\prime}\alpha,\alpha t\) all belong to \(G\), \(s\) and \(t\) cannot be adjacent in \(G\), and so \(\{s,t\}\) is equal to \(\{w,z\}\) or \(\{\gamma,\delta\}\). If \(\{s,t\}=\{\gamma,\delta\}\), then we either have that \(\alpha^{\prime}\gamma\in E(G)\), implying that \((\alpha^{\prime},\gamma,u,\alpha)\) is a 4-circuit in \(G\), or that \(\alpha^{\prime}\delta\in E(G)\), implying that \((\alpha^{\prime},\delta,u,v,\delta)\) is a 5-circuit in \(G\) containing \(e\), both a contradiction. Hence, \(\{s,t\}=\{w,z\}\). Since, \(G\) is cyclically 5-edge-connected,
\(x\) cannot be adjacent to \(z\) or \(w\), and so we must have that the edge lying on the 4-circuit with \(w\) and \(z\) is \(\alpha^{\prime}\alpha\). However, this is impossible since \(z\) cannot be adjacent to an endvertex of \(e\). Consequently, we must have that the edge of the triangle \((x,\alpha,\alpha^{\prime})\) lying on a 4-circuit in \(G^{\prime}\) is \(\alpha x\). In this case, \(st\) cannot be an edge in \(G\), otherwise \((\alpha^{\prime},\alpha,s,t,x)\) would be a 5-circuit in \(G\) containing \(e\), contradicting Claim 6. Thus, \(\{s,t\}\) is equal to \(\{w,z\}\) or \(\{\gamma,\delta\}\) once again. As before, \(x\) cannot be adjacent to \(z\) or \(w\), implying that \(\alpha\) is adjacent to \(\gamma\) or \(\delta\), respectively giving rise to the triangle \((\alpha,u,\gamma)\) or the 4-circuit \((\alpha,u,v,\delta)\) in \(G\), a contradiction. Therefore, \(G^{\prime}\) is not Klee.
Consequently, \(G^{\prime}\) must admit some (cyclic) edge-cut of size at most 2. Whenever \(G\) is cyclically 4-edge-connected, by the analysis done at the end of the main result in [6], we know the graph \(G^{\prime}\) is bridgeless, so if \(G^{\prime}\) is not (cyclically) 3-edge-connected, it admits a 2-edge-cut. We next show that this cannot be the case, that is, if \(G^{\prime}\) admits a 2-edge-cut, then \(G\) is not a counterexample to our statement.
**Claim 9**.: \(G^{\prime}\) is cyclically 3-edge-connected, unless \(\alpha\) is adjacent to \(x\) in \(G\).
_Proof of Claim 9._ Suppose that \(G^{\prime}\) admits a 2-edge-cut \(X_{2}=\{g_{1},g_{2}\}\). Let \(\Omega_{1}=\{\alpha,\gamma,\delta,w,x,z\}\) and let \(\Omega_{2}=\{u,v,\beta,y\}\).
We label the vertices of \(G^{\prime}\setminus X_{2}\) with labels \(A\) and \(B\) depending on which connected component of \(G^{\prime}\setminus X_{2}\) they belong to. Consequently, \(G^{\prime}\) has exactly two edges which are not monochromatic: \(g_{1}\) and \(g_{2}\). We consider different cases depending on the number of vertices in \(\Omega_{1}\) labelled \(A\) in \(G^{\prime}\), and show that, in each case, a 2-edge-cut in \(G^{\prime}\) would imply that \(G\) is not cyclically 5-edge-connected or not a counterexample to our statement. Without loss of generality, we shall assume that the number of vertices in \(\Omega_{1}\) labelled \(A\) is at least the number of vertices in \(\Omega_{1}\) labelled \(B\). We consider four cases.
1. All the vertices in \(\Omega_{1}\) are labelled \(A\) in \(G^{\prime}\). First, we extend this labelling of \(V(G^{\prime})\) to a partial labelling of \(V(G)\) by giving to the vertices in \(V(G)-\Omega_{2}\) the same label they had in \(G^{\prime}\). We then give label \(A\) to all the vertices in \(\Omega_{2}\). However, this means that \(G\) has exactly two edges, corresponding to the edges in \(X_{2}\), which are not monochromatic, a contradiction, since \(G\) does not admit any 2-edge-cuts.
2. Exactly 5 vertices in \(\Omega_{1}\) are labelled \(A\) in \(G^{\prime}\). This means that exactly one of the edges in \(\{\alpha x,wz,\gamma\delta\}\) belongs to \(X_{2}\), say \(g_{1}\), without loss of generality. Once again, we extend this labelling to a partial labelling of \(G\), and then give label \(A\) to all the vertices in \(\Omega_{2}\). However, this means that \(G\) has an edge which has exactly one endvertex in \(\Omega_{1}\) labelled \(B\) and exactly one endvertex in \(\Omega_{2}\) labelled \(A\), which together with the edge \(g_{2}\) gives a 2-edge-cut in \(G\), a contradiction once again.
3. Exactly 4 vertices in \(\Omega_{1}\) are labelled \(A\) in \(G^{\prime}\). We consider two cases depending on whether there is one or three monochromatic edges in \(\{\alpha x,wz,\gamma\delta\}\). First, consider the case when \(\{\alpha x,wz,\gamma\delta\}\) has exactly one monochromatic edge, meaning that \(X_{2}\subset\{\alpha x,wz,\gamma\delta\}\). As in the previous cases,
we extend this labelling to a partial labelling of \(V(G)\), and then give label \(A\) to all the vertices in \(\Omega_{2}\). However, this means that \(G\) has exactly two edges each having exactly one endvertex in \(\Omega_{1}\) labelled \(B\) and exactly one endvertex in \(\Omega_{2}\) labelled \(A\), meaning that \(G\) admits a 2-edge-cut, a contradiction. Therefore the edges \(\{\alpha x,wz,\gamma\delta\}\) are all monochromatic: two edges with all their endvertices coloured \(A\), and one edge with its endvertices coloured \(B\). We extend this labelling to a partial labelling of \(V(G)\), and then give label \(A\) to all the vertices in \(\Omega_{2}\). This gives rise to exactly two edges each having exactly one endvertex in \(\Omega_{1}\) labelled \(B\) and exactly one endvertex in \(\Omega_{2}\) labelled \(A\). These two edges together with the two edges in \(X_{2}\) form a 4-edge-cut \(X_{4}\) of \(G\). Since the latter is cyclically 5-edge-connected this 4-edge-cut is not cyclic -- it separates two adjacent vertices from the rest of the graph. Since \(w\neq z\) and \(\gamma\neq\delta\) (otherwise there would be a 3-circuit in \(G\)) and also \(\alpha\neq x\) (otherwise \(C_{v}\) would contain edges at distance 1 from \(e\)), these two adjacent vertices in \(G\) are endvertices of exactly one of \(\alpha x\), \(wz\), or \(\gamma\delta\) in \(G^{\prime}\), and the 2-edge-cut in \(G^{\prime}\) separates a 2-circuit from the rest of the graph. However, \(G\) can contain neither the edge \(wz\) nor \(\gamma\delta\), since \(G\) has no 4-circuits. Thus, we must have that \(\alpha\) is adjacent to \(x\) in \(G\).
4. Exactly 3 vertices in \(\Omega_{1}\) are labelled \(A\) in \(G^{\prime}\). Since \(G^{\prime}\) has exactly two edges which are not monochromatic, there is exactly one edge in \(\{\alpha x,wz,\gamma\delta\}\) which is not monochromatic. The latter corresponds to one of the edges in \(X_{2}\), say \(g_{1}\), without loss of generality. As before, we extend this labelling to a partial labelling of \(V(G)\), and then give label \(A\) to all the vertices in \(\Omega_{2}\). This gives rise to exactly three edges each having exactly one endvertex in \(\Omega_{1}\) labelled \(B\) and exactly one endvertex in \(\Omega_{2}\) labelled \(A\), which together with the edge \(g_{2}\) from \(X_{2}\) form a 4-edge-cut \(X_{4}\) of \(G\). As in the previous case, \(X_{4}\) separates two adjacent vertices from the rest of the graph -- \(G\) has exactly two vertices labelled \(B\). As in the previous case, the endvertices of the monochromatic edge belonging to \(\{\alpha x,wz,\gamma\delta\}\) (in \(G^{\prime}\)) which are labelled \(B\) must be either equal or adjacent in \(G\), which is only possible if \(\alpha\) is adjacent to \(x\) in \(G\).
Therefore, given any edge \(f=uv\) at distance 2 from \(e\), applying an \((\alpha\beta:\gamma\delta)_{uv}\)-reduction followed by an \((\alpha x:wz)_{\beta y}\)-reduction would lead to \(\alpha\) being adjacent to \(x\) in \(G\). Let \(y^{\prime}\) and \(z^{\prime}\) be the two consecutive vertices on \(C_{v}\) such that \(y^{\prime}\) is adjacent to \(\delta\) (note that \(y^{\prime}\) or \(z^{\prime}\) are possibly equal to \(z\)). Let \(w^{\prime}\) and \(x^{\prime}\) respectively be the vertices in \(G-C_{v}\) adjacent to \(\delta\) and \(y^{\prime}\), and let \(\alpha^{\prime}\) be the other endvertex of \(e\). Applying an \((\alpha\delta:\gamma\beta)_{uv}\)-reduction followed by an \((\alpha x^{\prime}:w^{\prime}z^{\prime})_{\delta y^{\prime}}\)-reduction leads to \(\alpha\) being adjacent to \(x^{\prime}\). Therefore, \(x^{\prime}\) can be equal to \(\alpha^{\prime},u\) or \(x\). If \(x^{\prime}=\alpha^{\prime}\), then the edge \(\delta y^{\prime}\) would be an edge belonging to \(C_{v}\) at distance 2 from the edge \(e\), and if \(x^{\prime}=u\), then \((u,v,\delta,y^{\prime})\) would be a 4-circuit in \(G\), with both cases leading to a contradiction. Therefore, \(x=x^{\prime}\).
Let \(G^{\prime}\) be the graph obtained after an \((\alpha\beta:\gamma\delta)_{uv}\)-reduction and let \(\mathcal{C}^{\prime}=\mathcal{C}\setminus\{C_{v}\}\). By induction, there exists a perfect matching \(M^{\prime}\) containing \(e\) intersecting all the circuits in \(\mathcal{C}^{\prime}\), and it can be extended into a perfect matching \(M\) of \(G\) containing \(e\). Suppose that \(M\) does not intersect \(C_{v}\) (it is the only circuit in \(\mathcal{C}\) that \(M\) could possibly avoid). To cover vertices \(y\) and \(y^{\prime}\), we must have \(\{xy,xy^{\prime}\}\subset M\) -- which is impossible.
Here are some consequences of Theorem 1.6. Corollary 3.1 follows by the above result and Corollary 2.5.
**Corollary 3.1**.: _Let \(G\) be a cyclically 3-edge-connected cubic graph and let \(\mathcal{C}\) be a collection of disjoint circuits of \(G\). Then, there exists a perfect matching \(M\) such that \(M\cap E(C)\neq\emptyset\), for every \(C\in\mathcal{C}\)._
**Corollary 3.2**.: _Let \(G\) be a cyclically 3-edge-connected cubic graph. For every perfect matching \(M_{1}\) of \(G\), there exists a perfect matching \(M_{2}\) of \(G\) such that \(G\setminus(M_{1}\cup M_{2})\) is acyclic._
|
2309.07482 | MuLaN: a MultiLayer Networks Alignment Algorithm | A Multilayer Network (MN) is a system consisting of several topological
levels (i.e., layers) representing the interactions between the system's
objects and the related interdependency. Therefore, it may be represented as a
set of layers that can be assimilated to a set of networks of its own objects,
by means of inter-layer edges (or inter-edges) linking the nodes of different
layers; for instance, a biological MN may allow modeling of inter and intra
interactions among diseases, genes, and drugs, only using its own structure.
The analysis of MNs may reveal hidden knowledge, as demonstrated by several
algorithms for the analysis. Recently, there is a growing interest in comparing
two MNs by revealing local regions of similarity, as a counterpart of Network
Alignment algorithms (NA) for simple networks. However, classical algorithms
for NA such as Local NA (LNA) cannot be applied on multilayer networks, since
they are not able to deal with inter-layer edges. Therefore, there is the need
for the introduction of novel algorithms. In this paper, we present MuLaN, an
algorithm for the local alignment of multilayer networks. We first show as
proof of concept the performances of MuLaN on a set of synthetic multilayer
networks. Then, we used as a case study a real multilayer network in the
biomedical domain. Our results show that MuLaN is able to build high-quality
alignments and can extract knowledge about the aligned multilayer networks.
MuLaN is available at https://github.com/pietrocinaglia/mulan. | Marianna Milano, Pietro Cinaglia, Pietro Hiram Guzzi, Mario Cannataro | 2023-09-14T07:43:40Z | http://arxiv.org/abs/2309.07482v1 | # MuLaN: a MultiLayer Networks Alignment Algorithm
###### Abstract
A Multilayer Network (MN) is a system consisting of several topological levels (i.e., layers) representing the interactions between the system's objects and the related interdependency. Therefore, it may be represented as a set of layers that can be assimilated to a set of networks of its own objects, by means of _inter_-layer edges (or _inter_-edges) linking the nodes of different layers; for instance, a biological MN may allow modeling of _inter_ and _intra_ interactions among diseases, genes, and drugs, only using its own structure. The analysis of MNs may reveal hidden knowledge, as demonstrated by several algorithms for the analysis. Recently, there has been growing interest in comparing two MNs by revealing local regions of similarity, as a counterpart of Network Alignment (NA) algorithms for simple networks. However, classical algorithms for NA such as Local NA (LNA) cannot be applied to multilayer networks, since they are not able to deal with _inter_-layer edges. Therefore, there is the need for the introduction of novel algorithms. In this paper, we present _MuLaN_, an algorithm for the local alignment of multilayer networks. We first show, as a proof of concept, the performance of _MuLaN_ on a set of synthetic multilayer networks. Then, we use as a case study a real multilayer network in the biomedical domain. Our results show that _MuLaN_ is able to build high-quality alignments and can extract knowledge about the aligned multilayer networks.
_MuLaN_ is available at [https://github.com/pietrocinaglia/mulan](https://github.com/pietrocinaglia/mulan).
keywords: Multilayer Network, Network Alignment, Local Network Alignment +
Footnote †: journal: Journal of LaTeX Templates
## 1 Introduction
Networks are largely used to represent entities and their associations in many fields [1; 2; 3]. For instance, in computational biology and bioinformatics, networks are used to model the set of associations among genes, proteins and other macromolecules [4; 5; 6]. More recently, it has been shown that a single level of representation, i.e., considering only genes, or only proteins, may result in a loss of information, so the need for the introduction of other models arises. Such models are based on a multilevel (or multilayered) view of a system. As an example, in the biological scenario, each level may correspond to a different view, e.g. genes, proteins, and diseases. In such a view, each layer is composed of a set of homogeneous nodes linked by edges (i.e., _intra_-layer edges). Associations between two different layers are modelled by cross-layer edges (i.e., _inter_-layer edges) [7; 8]. Figure 1 represents a non-exhaustive example of a simple multilayer network having three layers. Each layer is a different network. Figure 2 depicts an example of a biological multilayer network representing the interplay between diseases and drugs.
Formally, a multilayer graph may be described as a tuple \(G_{M}=(V_{M},E_{M},V,L)\), where \(V_{M}\) and \(E_{M}\) are the sets of nodes and edges, respectively, and \(L\) is a set of layers. Thus, given \(L=\{L_{1},L_{2},...,L_{k}\}\), with \(k\) the number of layers in the multilayer graph, the nodes are pairs \(V_{M}\subseteq V\times L_{1}\times...\times L_{k}\), and the edges \(E_{M}\subseteq V_{M}\times V_{M}\) connect pairs \((v,l)\), \((v^{\prime},l^{\prime})\). In a multilayer graph, an edge is defined as _intra_-layer if \(l\) is equal to \(l^{\prime}\), or _inter_-layer if \(l\) and \(l^{\prime}\) are different [9].
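To make this definition concrete, the following minimal Python sketch (assuming the `networkx` library; entity names and layers are purely illustrative) encodes each node as a pair \((v,l)\) and tags every edge as _intra_- or _inter_-layer:

```python
import networkx as nx

# Minimal sketch of a multilayer graph: each node is a pair (entity, layer);
# an edge is intra-layer when both endpoints share the layer, inter-layer otherwise.
G_M = nx.Graph()
G_M.add_nodes_from([("BRCA1", "gene"), ("TP53", "gene"), ("breast cancer", "disease")])

def add_multilayer_edge(g, u, v):
    """u and v are (entity, layer) pairs; the edge kind is derived from the layers."""
    kind = "intra" if u[1] == v[1] else "inter"
    g.add_edge(u, v, kind=kind)

add_multilayer_edge(G_M, ("BRCA1", "gene"), ("TP53", "gene"))              # intra-layer edge
add_multilayer_edge(G_M, ("BRCA1", "gene"), ("breast cancer", "disease"))  # inter-layer edge

intra = [(u, v) for u, v, d in G_M.edges(data=True) if d["kind"] == "intra"]
inter = [(u, v) for u, v, d in G_M.edges(data=True) if d["kind"] == "inter"]
print(len(intra), len(inter))  # 1 1
```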
In general, the analysis of networks is rarely simple. Common challenges that can be generalized to all types of networks concern i) the modelling of the network, i.e., what it represents, and ii) the analysis of the network, i.e., the metrics capturing the biological phenomenon of interest. These aspects are at least as important for multilayer networks, given their added complexity. Furthermore, there are additional considerations that are specific to multilayer networks with respect to classical networks. For instance, an
Figure 1: A non-exhaustive example of a multilayer network. The figure represents a network in a multilayer perspective. Entities of the same type are modeled as nodes of the same layer, and their interactions consist of _intra_-edges. Otherwise, the nodes of different layers are connected by _inter_-edges.
Figure 2: The figure shows a toy example of a biological multilayer network, representing disease-drug associations. The nodes are the diseases and the drugs, both discriminated by belonging to the respective layer. The _intra_-edges represent the drug-drug and the disease-disease associations, while the _inter_-edges represent the disease-drug associations. For instance, these may be analyzed to investigate the mechanisms underlying the interaction between diseases and drugs, or for computational drug repositioning.
analysis should take into account the differences among layers. In particular, since the strength of multilayer network analysis consists of the capability to include information on different types of relationships, it is necessary to consider which layers should be included in the analysis, as well as the interpretation of intra-layer and inter-layer edge values, since they discriminate between the different relationships.
The analysis of networks enables finding interesting regions of similarity between them [10; 11]. For simple networks, this task is accomplished by means of Network Alignment (NA) algorithms. Unfortunately, as also evidenced in [12; 13; 14], existing alignment algorithms are unable to process multilayer networks. Therefore, the need for defining a novel multilayer network algorithm arises. We focused in particular on evidencing relatively small regions of similarity among the multilayer networks, so we extended the Local Network Alignment approaches. In a previous work [14], we presented _MultiLoAl_ for the local alignment of multilayer networks. Despite the effectiveness of the approach, it presented some limitations related to the scalability on large datasets and to the topological structure of the regions of similarity.
Therefore, we extended our methodology to admit further topological structures of the found regions, as well as to obtain better performance in terms of quality and running time. Such a methodology has been implemented in a novel tool, namely _MuLaN_ (from _MultiLAyer Network Alignment Algorithm_). It consists of a novel approach for building the alignment graph 1, which achieves better performance, and it is integrated with a set of community detection algorithms enabling the discovery of regions of similarity with different topological structures.
Footnote 1: the alignment graph is the key data structure for local alignment algorithm
Summarizing, _MuLaN_ provides the following main advantages compared to _MultiLoAl_: (i) a reduction of the running time for building the alignment graph, and (ii) a deeper experimentation on a larger dataset.
_MuLaN_ is based on the workflow depicted in Figure 3.
It receives as input two multilayer networks (\(M_{1}\) and \(M_{2}\)) having the same number of layers \(n\). Without loss of generality, we suppose the existence of a bijective correspondence among layers of the networks, so the layer \(k\) in 1-\(n\) of \(M_{1}\) corresponds to the layer \(k\) of \(M_{2}\). _MuLaN_ also receives a
multiset of seed nodes representing the similarity between the nodes of the same layer \(k\) of the two networks. Starting from \(M_{1}\) and \(M_{2}\) and the multiset of seed nodes, at first _MuLaN_ analyzes each pair of layers \(k\) in \(M_{1}\) and \(M_{2}\) independently. Then, it builds \(n\) alignment graphs. After building each alignment graph for each layer \(k\) separately, _MuLaN_ analyzes \(M_{1}\) and \(M_{2}\) to add _inter_-layer edges among the \(n\) alignment graphs. Thus, the algorithm builds \(n-1\) alignment graphs, which we call multilayer alignment graphs. Finally, _MuLaN_ applies a community detection algorithm suitable for multilayer networks on the final multilayer alignment graph to detect communities representing local regions of similarity, i.e. a single region of the local alignment. The current version of _MuLaN_ offers three different community detection algorithms: _Louvain_[15], _Infomap_[16] and _Greedy_[17].
We implemented three versions of _MuLaN_ by varying the community detection method. The default version applies _Louvain_[15] to mine community from the multilayer alignment graph, whereas, the other two versions use _Infomap_[16] and _Greedy_[17] as community detection algorithms, respectively [18].
The rest of this paper is organized as follows. Section 2 discusses the background on multilayer networks and multilayer community detection, Section 3 presents the _MuLaN_ Algorithm, and Section 4 presents and discusses the results. Finally, Section 5 concludes the paper.
## 2 Related Work
We report here some existing state-of-the-art approaches for the analysis of MNs. We separate the algorithms that analyze a single network from those comparing two networks [19].
### Network Alignment Algorithms
The problem of graph alignment consists of finding a mapping between the nodes of two or more graphs that maximizes an associated cost function. Formally, let \(G_{1}=\{V_{1},E_{1}\}\) and \(G_{2}=\{V_{2},E_{2}\}\) be two graphs, where \(V_{1,2}\) are sets of nodes and \(E_{1,2}\) are sets of edges; the **graph alignment problem** consists of finding a function (or a mapping) \(f:V_{1}\to V_{2}\) such that the similarity between mapped entities is maximized. Among others, algorithms may be categorised into global network alignment (GNA) algorithms (i.e. algorithms that aim to find a single mapping among all the nodes of the networks), or
local network alignment (LNA) algorithms (i.e. algorithms aiming at finding multiple small regions of similarity among networks).
We here report some LNA algorithms, since MuLaN searches for multiple regions of similarity in multilayer networks.
An example of LNA algorithm is AlignNemo [20] that enables the discovery of subnetworks of proteins related to biological function and topology. AlignMCL [21] extends AlignNemo [20], by providing a formal definition of the alignment graph, which is clustered by using the Markov cluster algorithm MCL [22].
Also, GLAlign (Global Local Aligner) [23] combines the topology information from global alignment with biological information (e.g. homology relationships) to build local network alignment. NetworkBLAST [24] aims to find small dense regions in protein-protein interaction networks. Such subgraphs represent protein complexes, i.e. groups of proteins performing a similar function or involved in the same biological process. NetAligner [25] presents a strategy to identify evolutionarily conserved interactions on the basis of the consideration that interacting proteins evolve at rates significantly closer than expected by chance. Furthermore, L-HetNetAligner [12] extends the local alignment to the heterogeneous networks. Finally, some recent papers focused on the alignment of dynamic or dual networks [26; 27; 5]. However, these network alignment algorithms do not perform very well for multilayer networks [7; 28].
### Community Detection in Multilayer Networks
Community detection algorithms aim to identify groups of nodes closely connected with respect to the average of the network, starting from the assumption that nodes in the same community have a similar role [29; 30; 31]. In detail, a community is defined as groups of nodes that are more densely connected than the rest of the network; they represent significant characteristics for understanding the functionalities and organizations of complex systems modeled as a network.
In multilayer networks, communities represent groups of well-connected nodes across whole layers; therefore, community detection methods should consider the diversity between the layers [18].
To handle these issues, many community detection algorithms for multilayer networks have been recently developed.
The _Louvain_ algorithm, initially developed for simple networks, has been extended to handle multilayer networks [15]. The algorithm is based on an ad hoc notion of modularity for multilayer networks, which relies on the definition of a community as a set of nodes and edges belonging to multiple layers. The algorithm then applies an iterative greedy approach to build communities.
_Infomap_[16, 32] discovers communities in multilayer networks by running random walks, admitting also interlayer edges. _Infomap_ can be used to find both overlapping and non-overlapping communities.
The _Greedy_ algorithm [17] is a simple and fast community detection method that works by greedily optimizing a quality function known as modularity. The algorithm starts with each node in its own community and iteratively merges the pair of communities that increases modularity the most.
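As an illustration of how such methods can be invoked in practice, the sketch below runs _Louvain_ and greedy modularity maximisation on a weighted placeholder graph with `networkx` (assuming a recent version where `louvain_communities` is available); _Infomap_ would require the separate `infomap` package and is not shown.

```python
import networkx as nx
from networkx.algorithms import community

# Placeholder weighted graph standing in for a (flattened) multilayer alignment graph.
G = nx.les_miserables_graph()

# Louvain: iterative greedy modularity optimisation with node moves and graph aggregation.
louvain_parts = community.louvain_communities(G, weight="weight", seed=42)

# Greedy (Clauset-Newman-Moore): agglomerative merging of the community pair
# that yields the largest modularity increase.
greedy_parts = community.greedy_modularity_communities(G, weight="weight")

print(len(louvain_parts), len(greedy_parts))
```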
The ABACUS algorithm [33] extracts multidimensional communities by mining frequent closed item sets on one-dimensional community memberships. Initially, it extracts one-dimensional communities, looking at each dimension. Then, ABACUS assigns to each node a label that consists of a list of tag pairs (e.g., dimension and community). After that, ABACUS applies a frequent closed item set mining algorithm, treating each pair of tags as an item. At the end, the multidimensional communities correspond to the frequent closed item sets.
The Multi-Layer Clique Percolation method [34] extends the classical clique percolation method used on traditional networks. In the classical version, dense regions are cliques, and two cliques are adjacent if they have common nodes. Multi-Layer Clique Percolation extends the search for cliques by considering multiple layers, and it redefines the adjacency metric by requiring both common nodes and common layers. Thus, the communities correspond to the combinations of adjacent cliques.
Multi-Dimensional Label Propagation [35] is an extension of the Label Propagation algorithm, which is a semi-supervised learning algorithm used for graph-based classification problems. The MDLP algorithm is designed to work with multilayer networks.
In MDLP, each layer of the multilayer network is treated as a separate graph, and the algorithm propagates labels across all layers simultaneously. It allows incorporating information from multiple sources and capturing complex relationships between nodes across layers. However, it can be computationally expensive, especially for large multilayer networks, and may require careful parameter tuning to achieve optimal results.
## 3 _MuLaN_ Algorithm
The _MuLaN_ algorithm builds the alignment through two steps:
* **Building of the Multilayer Alignment Graph**: First, it integrates the input networks into a multilayer alignment graph (MAG) using a set of initial pairs of correspondences;
* **Analysis of the Multilayer Alignment Graph**: Second, it reveals the regions of similarity by analysing the topology of the multilayer alignment graph.
The core of the algorithm is the multilayer alignment graph, which is used to integrate the initial input networks and which contains the regions of similarity. The multilayer alignment graph is a multilayer network which has the same layers as the input networks. Each layer \(K\) of the MAG is an alignment graph between the corresponding pair of layers of the input networks. The edges between two layers of the MAG are derived by analysing the inter-layer edges of the input networks, as shown in the following.
Figure 3 shows the workflow of the algorithm, while Algorithm 1 shows the pseudocode of _MuLaN_.
We explain our contribution using a toy example of a multilayer network with two layers without loss of generality; then we will discuss its formalization. Let us consider two multilayer input networks \(G_{1}\), and \(G_{2}\) presenting two layers: disease and drug, as reported in Figure 4. Node colors are used to distinguish different types of nodes belonging to two different types of layers. For simplicity, the two multilayer input networks have the same number of nodes.
### Step 1: Building of the multilayer alignment graph
#### Step (1.a). Building of intra-layer edges.
In the first step, MuLaN builds the MAG in two sub-steps: initially, it builds all the alignment graphs for each layer \(k\) (Step 1.a, see Appendix for complete details); then, it adds all the _inter_-layer edges (Step 1.b).
**Input:** \(G_{1}=(V_{1},E_{1},C_{1})\) and \(G_{2}=(V_{2},E_{2},C_{2})\), \(\Delta\), a set of highly-similar seed nodes

**Result:** A set of aligned regions

_Initialization_;

**Step 1 - Building of the Multilayer Alignment Graph**: \(G^{1}_{al}=(V_{al},E_{al})\), \(G^{2}_{al}=(V_{al},E_{al})\)

**forall** pairs in the input list of paired nodes **do** add a node to the alignment graph **end**

**forall** nodes in \(V_{al}\) **do** add edges verifying the match/mismatch conditions **end**

Building of the _inter_-layer alignment graph: \(G_{inter}=(G^{1}_{al},G^{2}_{al},E_{inter})\)

**Step 2 - Community Detection Algorithm on** \(G_{inter}=(G^{1}_{al},G^{2}_{al},E_{inter})\)

**return** A set of subgraphs of \(G_{inter}=(G^{1}_{al},G^{2}_{al},E_{inter})\)
### Step (1.b). Building of inter-layer edges.
In Step 1.b, the algorithm adds the _inter_-layer edges. This step considers all the pairs of layers in the MAG. For each pair of layers \(k,t\), it analyzes all the nodes. Each node of the MAG in a single layer represents a pair of aligned nodes of the corresponding layer of the input networks. For each pair of such nodes belonging to the two layers, _MuLaN_ analyzes the corresponding layers of the input networks, and it inserts and weights the edges considering two conditions: **match** or **mismatch**.
Let us consider the nodes of alignment graph 1 and alignment graph 2; in particular, let us analyze the pair of nodes \((D1-Dr4)\) and \((D5-Dr2)\) in Figure 4. To determine the presence of an edge, we consider the edges \((D1,Dr4)\in G_{1}\) and \((D1,Dr4)\in G_{2}\). If \(G_{1}\) and \(G_{2}\) contain these nodes, and the nodes are adjacent in both networks, there is a **match**, which we call a **heterogeneous match** because the node types are different, see Figure 5 (a).
Otherwise, if \(G_{1}\), and \(G_{2}\) contain these nodes, and the nodes are adjacent only in a single network, there is a **mismatch** which we call **heterogeneous mismatch** Figure 5 (b).
Finally, the algorithm assigns the weight to each edge as follows: heterogeneous match equal to 0.9, heterogeneous mismatch equal to 0.4. At the end of this step, the multilayer alignment graph is built.
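A minimal sketch of this weighting rule is given below; the function and variable names are ours and do not come from the MuLaN codebase. Each MAG node is taken to be a pair of aligned nodes, one from \(G_{1}\) and one from \(G_{2}\), on the same layer, and for two MAG nodes on different layers the corresponding _inter_-layer edges are checked in both input networks:

```python
HETERO_MATCH, HETERO_MISMATCH = 0.9, 0.4  # weights for heterogeneous match/mismatch

def inter_layer_weight(node_k, node_t, G1, G2):
    """node_k = (a1, a2): aligned nodes on layer k; node_t = (b1, b2): aligned nodes on layer t.
    Returns the weight of the inter-layer MAG edge, or None if no edge is added."""
    in_g1 = G1.has_edge(node_k[0], node_t[0])  # corresponding inter-layer edge in G1
    in_g2 = G2.has_edge(node_k[1], node_t[1])  # corresponding inter-layer edge in G2
    if in_g1 and in_g2:
        return HETERO_MATCH      # heterogeneous match: edge present in both input networks
    if in_g1 or in_g2:
        return HETERO_MISMATCH   # heterogeneous mismatch: edge present in only one network
    return None                  # no supporting edge in either network: nothing is added
```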
### Step 2. Community Detection on the multilayer alignment graph
At this point, the algorithm analyzes the MAG to discover communities, employing one of the previously introduced community detection methods [36; 37; 27; 38]. Since our methodology presents a general design, it is possible to mine the MAG by applying different community detection methods.
In the current version of _MuLaN_, we applied _Louvain_ algorithm to mine the communities on the alignment graph. However, the user can choose the community detection algorithm by selecting among _Louvain_, _Infomap_ and _Greedy_ algorithms.
_MuLaN_ generates three outputs: 1) a file containing the mined communities; 2) a file with the multilayer alignment graph represented as an edge list, where the weight of each edge is assigned according to the homogeneous/heterogeneous match, mismatch and gap cases, and to the type of layer; 3) a file containing the number of mined communities, the modularity value (where modularity is the measure related to the network structure that detects the density of connections within a module) and the run time (see an example of output at [https://github.com/pietrocinaglia/mulan](https://github.com/pietrocinaglia/mulan)).
### Complexity of the algorithm
The asymptotic time complexity of the alignment process was estimated as \(N\times M\), with \(N\) the number of interactions defined in the source network and \(M\) that of the target one, calculated on the basis of the respective edge lists.
The latter was designed as a multi-line adjacency list with data values, in order to store both node and edge attributes: node type, and the layer upon which (or between) the interaction is modeled. This approach allows us to have a flat view of the graph, which is therefore aligned using the same basic principles of alignment between static networks; its complexity will be proportionate to the resulting size.
It turns out to be quadratic for multilayer networks consisting of edge lists with the same size (\(N=M\)).
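The quadratic behaviour can be seen directly in a naive pairwise scan of the two flat edge lists, sketched below with illustrative (hypothetical) records; the actual MuLaN implementation may organise this comparison differently.

```python
def compare_edge_lists(edges_src, edges_tgt):
    """Naive O(N*M) scan: every interaction of the source edge list is compared
    against every interaction of the target edge list (same-layer pairs only)."""
    candidate_pairs = []
    for e1 in edges_src:              # N iterations
        for e2 in edges_tgt:          # M iterations
            if e1["layer"] == e2["layer"]:
                candidate_pairs.append((e1, e2))
    return candidate_pairs

edges_src = [{"u": "D1", "v": "D2", "layer": "disease"},
             {"u": "D1", "v": "Dr4", "layer": "inter"}]
edges_tgt = [{"u": "D1", "v": "D3", "layer": "disease"}]
print(len(compare_edge_lists(edges_src, edges_tgt)))  # 1
```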
## 4 Results and Discussion
As a proof of principle, we experimented _MuLaN_ on two datasets, consisting of (i) ten synthetic multilayer networks, and (ii) one real multilayer network.
### Dataset 1: Synthetic Networks
We built ten multilayer networks with two layers. Each layer is a Barabasi-Albert network Barabasi and Albert (1998) having 1000 nodes (\(n=1000\)) and 1000 edges (\(m=1\), with \(m\) the number of edges to attach from a new node to existing ones).
The number of _inter_-edges was defined as 30% (i.e., 300) of the _intra_-edges, and these were randomly generated. The resulting edge list consisted of 2300 interactions among 2000 nodes, for the whole network.
According to the described approach, we modeled a set of 10 initial multilayer networks with 2 layers, and five noisy versions for each one. The noisy versions were built by removing 5%, 10%, 15%, 20%, and 25% of the interactions, randomly selected from the set of _inter_- and _intra_-edges. We generated the pairs to be aligned by using the initial networks and the respective noisy versions.
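The generation of the synthetic dataset can be approximated with the sketch below (assuming `networkx`; random seeds, the exact inter-edge sampling, and other details of the original experiments are not specified in the text and are therefore assumptions):

```python
import random
import networkx as nx

def synthetic_multilayer(n=1000, m=1, inter_ratio=0.3, seed=0):
    """Two Barabasi-Albert layers plus randomly generated inter-layer edges."""
    rng = random.Random(seed)
    layer1 = nx.barabasi_albert_graph(n, m, seed=seed)
    layer2 = nx.barabasi_albert_graph(n, m, seed=seed + 1)
    G = nx.Graph()
    G.add_edges_from((((u, 1), (v, 1), {"kind": "intra"}) for u, v in layer1.edges()))
    G.add_edges_from((((u, 2), (v, 2), {"kind": "intra"}) for u, v in layer2.edges()))
    n_inter = int(inter_ratio * layer1.number_of_edges())   # ~30% of one layer's intra-edges
    for _ in range(n_inter):                                 # duplicates may slightly reduce the count
        G.add_edge((rng.randrange(n), 1), (rng.randrange(n), 2), kind="inter")
    return G

def noisy_version(G, fraction, seed=0):
    """Noisy counterpart: remove a random fraction of inter- and intra-edges."""
    rng = random.Random(seed)
    H = G.copy()
    H.remove_edges_from(rng.sample(list(H.edges()), int(fraction * H.number_of_edges())))
    return H

pairs = [(synthetic_multilayer(seed=i), noisy_version(synthetic_multilayer(seed=i), 0.05))
         for i in range(2)]
```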
### Dataset 2: Real Network
We considered the following datasets of the Stanford Biomedical Network Dataset Collection (BioSNAP) [40]:
* _Drug-Drug Interaction (DDI)_ network of interactions between drugs, approved by the U.S. Food and Drug Administration (FDA): 1514 nodes and 48514 edges.
* _Disease-Disease (DD)_ network of interaction between inherited diseases: 6878 nodes and 6877 edges.
* _Disease-Drug Association (DDA)_ network, a set of curated relationships between diseases and drugs: 5535 disease nodes, 1662 chemical nodes, and 466656 edges.
We built a multilayer network with two layers obtained from the DDI and DD databases. Then, we added inter-layer edges by considering the DDA database. Subsequently, a preprocessing was also performed to remove (i) zero-degree nodes, (ii) duplicate edges, and (iii) objects outside the intersection set from the DDI, DD, and DDA networks. Finally, the resulting multilayer network consisted of \(8,392\) nodes and \(128,200\) interactions, of which \(72,809\) are _inter_-edges.
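Under the assumption that the merged network is held in a `networkx` graph, this preprocessing could be sketched as follows (the actual MuLaN pipeline may differ, and restricting nodes to the intersection of the DDI, DD, and DDA sets is omitted for brevity):

```python
import networkx as nx

def preprocess(G):
    """Collapse duplicate edges and drop zero-degree nodes."""
    H = nx.Graph(G)                             # converting to a simple graph removes duplicate edges
    H.remove_nodes_from(list(nx.isolates(H)))   # remove zero-degree (isolated) nodes
    return H
```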
Similarly to our synthetic networks, we generated five noisy versions of the real network by removing \(5\%\), \(10\%\), \(15\%\), \(20\%\), and \(25\%\) of the interactions, randomly selected from the set of _inter_- and _intra_-edges.
### Alignment Parameters
We set the \(\Delta\) distance equal to \(2\), and the following weights:
* Homogeneous Match : 1
* Homogeneous Mismatch: 0.5
* Homogeneous Gap: 0.2
* Heterogeneous Match: 0.9
* Heterogeneous Mismatch: 0.4.
### Performance Evaluation
We evaluated _MuLaN_ alignments by considering:
* the alignment of a network with respect to itself to show the ability to find known regions of similarity;
* the alignment of the network with respect to an altered version of the network obtained by adding different levels of noise (5%, 10%, 15%, 20%, and 25%) by randomly removing edges from the network.
We aimed to demonstrate the ability of our algorithm to build high-quality alignments with edge conservation of about 90%.
The proposed solution aligned high-confidence synthetic networks to themselves and to their noisy counterparts. Overall, we computed a set of 60 local alignments.
In addition to the experimentation on synthetic data, we aligned the high-confidence real network with itself and its noisy counterparts, by producing a total of 6 local alignments.
All experiments have been conducted on a 64-bit workstation with the following specifications: AMD Ryzen 3 (2.6 GHz Dual-Core) and 8 GB of RAM.
_MuLaN_ aligned two multilayer networks, each one consisting of 2000 nodes and 2298 edges, in \(\sim 1.25\) seconds. To the latter, it is then necessary to add the time required by the individual methods for community detection; in detail, _Louvain_, _Greedy_, and _Infomap_ take \(\sim 0.17\), \(\sim 1.89\), and \(\sim 14.09\) seconds, respectively.
According to our results, the most performing method in terms of modularity is _Louvain_ (see Table 1 and Table 2); therefore, we defined the latter by default. However, _Greedy_ and _Infomap_ can be chosen as other options.
In order to evaluate the effectiveness of _MuLaN_, we generated the same local alignments built with _Louvain_ by using _Infomap_ and _Greedy_, and we analyzed the results. We selected those algorithms for their good and fast performance in extracting communities in multilayer networks, according to the literature [18].
Since, to the best of our knowledge, _MultiLoAl_ and its extension _MuLaN_ are the only local alignment algorithms for multilayer networks in the literature, we evaluated the effectiveness of _MuLaN_ by comparing the results obtained with the default _MuLaN_ version, in which _Louvain_ is used as the community detection algorithm, and the other _MuLaN_ versions, in which the _Infomap_ and _Greedy_ algorithms are applied.
Then, we measured the performance of the alignments built with different versions of _MuLaN_ by evaluating the quality of the results. In particular, we measure the topological quality of alignments and the quality of communities found.
#### 4.4.1 Topological quality
At first, the results are evaluated by analyzing the topological quality. We recall that intuitively an alignment is of high topological quality if it reconstructs the underlying true node mapping well (when such mapping is known) and if it conserves many edges. For example, for simple networks the \(F-NC\) (F-score node correctness) is applied to measure the node correctness, and it is defined as \(\frac{M\cap N}{N}\) where \(M\) is the set of node pairs that are mapped under the true node mapping and \(N\) the set of node pairs that are aligned under an alignment \(f\).
Otherwise, NCV-G\(S^{3}\) (high node coverage (NCV) and Generalized \(S^{3}\) (G\(S^{3}\))) is applied to measure the edge correctness, and it is defined as the geometric mean of high node coverage (NCV) and generalized \(S^{3}\) (G\(S^{3}\)) measures. NCV is the percentage of nodes from \(G_{1}\) and \(G_{2}\) that are also in \(G_{1}^{\prime}\) and \(G_{2}^{\prime}\) and G\(S^{3}\)measures how well edges are conserved between \(G_{1}^{\prime}\) and \(G_{2}^{\prime}\) where \(G_{1}\) and \(G_{2}\) are two graphs and \(G_{1}^{\prime}\) and \(G_{2}^{\prime}\) are subgraphs of \(G_{1}\) and \(G_{2}\) that are induced by the mapping.
In a previous work ([14]), we extended such measures to the multilayer case since, to the best of our knowledge, there are no other available measures.
In particular, we compute the multilayer \(F-NC_{m}\) as the average of the \(F-NC_{i}\) estimated for each layer. Likewise, we compute the multilayer \(NCV-GS_{m}^{3}\) as the average of the \(NCV-GS_{i}^{3}\) estimated for each layer.
Finally, we consider the edge correctness for the _inter_-layer edges. Without loss of information, we consider all the _inter_-layer edges as a whole, and we calculate the correctness of all the _inter_-layer edges as \(NCV-GS_{inter}^{3}\).
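Assuming the per-layer scores have already been computed, the multilayer variants reduce to simple averages, as in the sketch below. The `f_nc` helper follows the common F-score formulation (harmonic mean of the precision and recall of the aligned node pairs with respect to the true mapping), of which the ratio given above corresponds to the precision component; the full per-layer NCV-GS\(^{3}\) computation is not reproduced here.

```python
def multilayer_score(per_layer_scores):
    """Multilayer F-NC_m / NCV-GS3_m: average of the per-layer scores."""
    return sum(per_layer_scores) / len(per_layer_scores)

def f_nc(true_pairs, aligned_pairs):
    """Per-layer node-correctness F-score (sketch); inputs are sets of node pairs."""
    tp = len(true_pairs & aligned_pairs)
    if tp == 0:
        return 0.0
    precision = tp / len(aligned_pairs)
    recall = tp / len(true_pairs)
    return 2 * precision * recall / (precision + recall)

# e.g. multilayer F-NC over two layers:
# f_nc_m = multilayer_score([f_nc(T1, A1), f_nc(T2, A2)])
```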
We computed multilayer \(NCV-GS_{m}^{3}\) and multilayer \(F-NC_{m}\) measures for all alignments built for each synthetic network and for real network by considering the _intra_-layer and _inter_-layer. Figure 7, Figure 8, Figure 9, Figure 10 report the results for synthetic multilayer networks, whereas Figure 11, Figure 12, Figure 13, Figure 14 report the results for real multilayer
network. The Tables related to \(NCV-GS_{m}^{3}\) and multilayer \(F-NC_{m}\) measures for all alignments built for each synthetic network and for real network by considering the _intra_-layer and _inter_-layer are reported in appendix.
By analyzing the results, it is possible to notice that the multilayer \(NCV-GS_{m}^{3}\) and multilayer \(F-NC_{m}\) values decrease as the noise level increases from 5% to 25%. The reduction of the alignment quality is observed for all _MuLaN_ versions applied on both the synthetic networks and the real network. Moreover, the results show that the quality of the alignment is greater when _Louvain_ is applied to extract communities.
#### 4.4.2 Community Quality
We also evaluated the quality of mined communities by considering i) the number of extracted communities and ii) the strength of the community structure by using the modularity [41] as a community quality metric. In detail, modularity is a measure related to the network structure that detects the density of connections within a module. The modularity can be either positive or negative, and positive values are indicative of the presence of community structure. Thus, a network with a high modularity score will have many connections within a community [41].
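For reference, the modularity of a given set of communities can be computed directly with `networkx`, as sketched below on a placeholder graph (not the alignment graphs used in our experiments):

```python
import networkx as nx
from networkx.algorithms import community

G = nx.karate_club_graph()                         # placeholder graph
parts = community.louvain_communities(G, seed=42)  # mined communities
Q = community.modularity(G, parts)                 # higher Q: denser intra-community connections
print(f"{len(parts)} communities, modularity = {Q:.3f}")
```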
Thus, we first estimated the number of mined communities with _Louvain_, _Infomap_ and _Greedy_, for each local alignment built on synthetic and real networks, then, we computed the modularity of the extracted communities.
Table 1 and Table 2 report the results obtained with the different versions of the _MuLaN_ algorithm for synthetic and real networks. In particular, for each network, the Tables report the number of extracted communities and the modularity value. The results show that all the community detection algorithms are able to mine more communities when aligning the original network with itself. Conversely, the number of communities decreases on the alignments built with the noisy counterparts with 5%, 10%, 15%, 20% and 25% of added noise. Furthermore, it is possible to notice that _Louvain_ is able to identify a higher number of communities in the multilayer network with respect to the other community detection algorithms.
By analyzing the results, it is possible to see that the higher modularity values are obtained by applying _Louvain_ as a community extraction algorithm. This means that _Louvain_ is able to extract communities that reveal the relationships among nodes in various layers, with respect to other community detection algorithms.
#### 4.4.3 Functional Quality Evaluation
Finally, we evaluate the functional quality of the results by considering the biological relevance of the extracted communities. For classical networks,
the biological relevance evaluation consists in evaluating whether groups of related entities have a similar biological role or share functions. Since _MuLaN_ constructs a local alignment that includes associations between different entities, i.e. disease-drug, we conducted the evaluation differently. In particular, we evaluated the functional quality of the local alignment, considering whether our algorithm is able to extract drug-disease associations that are still unobserved.
We recall that drug-disease associations consist of the cases in which drugs affect a disease. Thus, in the research field, drug-disease associations may bring relevant information to drug discovery, for example, the detection of convenient relations between drugs and diseases. The literature contains a large set of drug-disease associations already identified; however, many associations are unobserved or not detected. In the rest of the paper, we refer to unobserved or not detected drug-disease associations as candidate drug-disease associations.
Starting from this consideration, our goal is to demonstrate that _MuLaN_ is able to build a local alignment containing candidate drug-disease associations, and it is able to extract knowledge about the aligned multilayer networks.
For this aim, we consider the local alignment constructed by aligning the real network with its noisy counterpart at 25% (because it is the most different from the original network) and we analyze the drug-disease associations forming the extracted communities.
To evaluate whether the associations contained in the communities are known in the literature or still unobserved drug-disease associations, we used SCMFDD [42], a similarity constrained matrix factorization method for the drug-disease association prediction. SCMFDD reveals unobserved drug-disease associations by using drug features, disease semantic information and known associations incorporated into the matrix factorization frame (see [42] for complete details).
SCMFDD computes: i) the drug-drug similarities by using Jaccard index [43] taking into account diverse drug features (such as, targets, enzymes, pathways and drug-drug interactions), ii) the disease-disease semantic similarity by using MeSH information [44], iii) two low-rank feature matrices by applying a factorization on observed drug-disease associations matrix. Then, it uses an objective function based on Newton's method [45] to compute a score representing the probability that a drug and a disease have an association (see [42] for complete details on SCMFDD). Thus, we applied SCMFDD
to analyze each drug-disease pair forming the communities extracted with _MuLaN_. The results show that the extracted communities contain 46,498 candidate drug-disease associations. Among these, 711 drug-disease associations present a score greater than 0.5. We report the candidate drug-disease associations with a score equal to 1 in Table 3. The complete set of detected associations is available at [https://github.com/pietrocinaglia/mulan](https://github.com/pietrocinaglia/mulan).
Furthermore, we manually evaluated the candidate drug-disease associations by searching for literature evidence. For example, by considering the drug-disease association Fenofibrate-Cholestasis, the works in [46; 47] report the effect of fenofibrate against cholestasis. In [47], Ghonem et al. discuss the effectiveness and good tolerability of Orlistat treatment for hyperglycemia. Also, [48] presents the successful use of rifampin in a patient with Stevens-Johnson syndrome. We list the literature evidence of the top 10 detected drug-disease associations in Table 4.
In conclusion, the results demonstrate that our algorithm is capable of discovering candidate drug-disease associations and, consequently, of extracting new knowledge from multilayer networks.
#### 4.4.4 MuLaN vs MultiLoAl
To clarify the need for developing an extended version of _MultiLoAl_, in this section we present the comparison of _MuLaN_ with _MultiLoAl_. The aim is to demonstrate that _MuLaN_ achieves better performance in analyzing multilayer networks with respect to the previous version, _MultiLoAl_. The dataset that we used for the comparison consists of the ten synthetic multilayer networks and the one real multilayer network used for the _MuLaN_ evaluation. Thus, we built the alignment of a network with respect to itself, and the alignment of a network with respect to an altered version, by applying _MultiLoAl_. Our experimentation shows that _MultiLoAl_ builds the alignment between two synthetic networks, having 2000 nodes and 2298 edges each, in \(\sim 120\) minutes, against the \(\sim 1.42\) seconds required by _MuLaN_ for the same operation.
Results show clearly that _MuLaN_ outperforms _MultiLoAl_, in terms of memory efficiency and runtime.
## 5 Conclusions
Multilayer networks are a powerful tool for the modelling of complex data in biology and medicine. In this work, we focused on the problem of analysis
of multilayer networks through the comparison of their internal structure, by applying Network Alignment algorithms. Since classical network alignment algorithms for simple networks do not perform well on multilayer networks, we presented a deep extension of a previously developed network alignment algorithm for multilayer networks, named _MuLaN_ (Local Alignment Algorithm for Multilayer Networks). We tested it on ten synthetic multilayer networks and on a real multilayer network, and we evaluated the quality of the results. The results confirm that _MuLaN_ is able to build alignments with high topological quality. According to these analyses, we demonstrated that the _MuLaN_ methodology is suitable for the comparison and, hence, the analysis of multilayer networks.
## 6 Competing interests
No competing interest is declared.
## 7 Author contributions statement
Conceptualization, PHG. and MM.; methodology, PHG, PC, and MM; software, PC, MM; data curation, MM and PC; writing-original draft preparation, MM, PC, PHG, and MC.; writing-review and editing, MM, PC, PHG and MC.; funding acquisition, MC. All authors have read and agreed to the published version of the manuscript.
## 8 Funding
This work was funded by the Next Generation EU - Italian NRRP, Mission 4, Component 2, Investment 1.5, call for the creation and strengthening of 'Innovation Ecosystems', building 'Territorial R&D Leaders' (Directorial Decree n. 2021/3277) - project Tech4You - Technologies for climate change adaptation and quality of life improvement, n. ECS0000009. This work reflects only the authors' views and opinions; neither the Ministry for University and Research nor the European Commission can be considered responsible for them.
|
2309.08969 | Rethinking STS and NLI in Large Language Models | Recent years have seen the rise of large language models (LLMs), where
practitioners use task-specific prompts; this was shown to be effective for a
variety of tasks. However, when applied to semantic textual similarity (STS)
and natural language inference (NLI), the effectiveness of LLMs turns out to be
limited by low-resource domain accuracy, model overconfidence, and difficulty
to capture the disagreements between human judgements. With this in mind, here
we try to rethink STS and NLI in the era of LLMs. We first evaluate the
performance of STS and NLI in the clinical/biomedical domain, and then we
assess LLMs' predictive confidence and their capability of capturing collective
human opinions. We find that these old problems are still to be properly
addressed in the era of LLMs. | Yuxia Wang, Minghan Wang, Preslav Nakov | 2023-09-16T11:58:39Z | http://arxiv.org/abs/2309.08969v2 | # Rethinking STS and NLI in Large Language Models
###### Abstract
In this study, we aim to rethink STS and NLI in the era of large language models (LLMs). We first evaluate the accuracy of clinical/biomedical STS and NLI over five datasets, and then we assess LLM predictive confidence and their capability of capturing collective human opinions. We find that LLMs may be able to provide personalised descriptions for a specific topic, or to generate semantically similar content in different tones, but that it is hard for current LLMs to make personalised judgements or decisions. We further find that zero-shot ChatGPT achieves competitive accuracy over clinical and biomedical STS/NLI, compared to the fine-tuned BERT-base. However, there is large variation across samples, and ensembled results perform the best.
## 1 Introduction
Semantic textual similarity (STS) is a fundamental natural language understanding (NLU) task involving the prediction of the degree of semantic equivalence between two pieces of text Cer et al. (2017). Under the regime of first pre-training a language model and then fine-tuning with labelled examples, there are three major challenges in STS modelling (see examples in Table 1): (_i_) low accuracy in low-resource and knowledge-rich domains due to the exposure bias Wang et al. (2020, 2020), (_ii_) models make incorrect predictions over-confidently (and unreliable estimations are dangerous in safety-critical applications such as autonomous driving or clinical decision support, and may lead to catastrophic errors Wang et al. (2022)), (_iii_) difficulty in capturing collective human opinions on individual examples Wang et al. (2022). Akin to STS, natural language inference (NLI) faces similar issues, where the goal is to determine whether a _hypothesis_ sentence can be entailed from a _premise_, is contradicted, or is neutral with respect to the _premise_.
Large language models (LLMs), such as ChatGPT, Claude and LLaMA-2, have demonstrated impressive performance on natural language understanding and reasoning tasks, by simply inputting appropriate prompts or instructions, without any parameter modifications. On general STS-B Cer et al. (2017), zero-shot ChatGPT achieves competitive Pearson correlation (\(r\)) of 80.9 vs. 83.0 by fine-tuning BERT-base using thousands of training examples Devlin et al. (2019).1 On MNLI-m Williams et al. (2018), zero-shot ChatGPT even outperforms fine-tuned RoBERTa-large: accuracy of 89.3 vs. 88.0. LLMs' remarkable capabilities in zero-shot setting motivate us to rethink the task of STS/NLI and the three challenges under LLM prompt-based generation.2
Footnote 1: Note that Zhong et al. (2023) have reported much higher results of 92.9 using RoBERTa-large on STS-B, but they are calculated on a subset that they sampled from a uniform distribution based on similarity bins, i.e., sampling an equal number of examples binning to 0.0-1.0, 1.0-2.0, 2.0-3.0, 3.0-4.0, and 4.0-5.0, instead of the whole development or test set of STS-B.
Footnote 2: However, there might also be data contamination, i.e., the LLM might have seen (part of) the data during training.
We ask the following questions: (_i_) How well do LLMs perform over knowledge-rich and low-resource domains, such as biomedical and clinical STS/NLI? (_ii_) Does the paradigm of prompting LLMs lead to over-confident predictions? and (_iii_) How can we capture collective human opinion (the distribution of human judgements) based on LLMs?
Chen et al. (2023) evaluated GPT-3.5 (_text-davinci-003_) on NLI (e.g., SNLI, MNLI, QQP) and on the semantic matching dataset MRPC (it is a binary classification task that predicts whether two sentences are semantically equivalent). Zhong et al. (2023) evaluated ChatGPT over STS/NLI datasets including STS-B, MNLI, QNLI, and RTE. We found that they focused on the performance of general-purpose STS and NLI. However, it is unclear how well ChatGPT performs on clinical and
biomedical domains over these two tasks.
Jiang et al. (2021) examined the calibration of three LMs including T5, BART, and GPT-2 on QA tasks: they studied whether the model's probability estimates were well-aligned with the actual probability of the answer being correct. That is, if a model makes well-calibrated predictions, the probability it assigns to the outcomes coincides with the frequency with which these outcomes actually occur. The predictive probability (confidence) will be a reliable signal to assist in deciding how much we can trust a prediction and the corresponding risks we may take. Unfortunately, the answer is a relatively emphatic _no_. Kadavath et al. (2022) found that larger language models are calibrated on diverse multiple choice questions. However, the majority of studies focused on white-box calibration for question answering, and there have been no studies on the calibration of STS/NLI, either in a white-box or in a black-box scenario.
Moreover, there are studies exploring LLMs' robustness across NLI tasks, i.e., the accuracy variation against adversarial attacks (Chen et al., 2023), while less attention has been paid to human disagreement in labelling and how to capture the distribution of multiple individual opinions instead of an aggregated label by averaging or majority voting. In this work, we aim to bridge these gaps by first evaluating the accuracy of clinical/biomedical STS and NLI over five datasets, and then assessing LLM predictive confidence and their capability of capturing collective human opinions.
We have three major findings:
* LLMs may be able to provide personalised descriptions for a specific topic, or generate semantically-similar content in different tones, but it is hard for current LLMs to make personalised judgements or decisions.
* Zero-shot ChatGPT achieves competitive accuracy over clinical and biomedical STS/NLI, compared to the fine-tuned BERT-base.
* There exists large variation across samples, and ensembled results perform the best.
## 2 Background
### Task and Datasets
Task: STS and NLI are both sentence-pair relationship prediction tasks. STS assesses the degree of semantic equivalence between two (short) texts. The aim is to predict a continuous similarity score for a sentence pair \((S1,S2)\), generally in the range \([0;5]\), where 0 indicates complete dissimilarity and 5 indicates equivalence in meaning. NLI highlights semantic reasoning, determining whether a given _hypothesis_ can be logically inferred from a given _premise_: if it can be, the example falls into entailment; otherwise, contradiction; if undetermined, neutral.
Datasets:For STS, we use: two large-scale general datasets -- STS-B (Cer et al., 2017) and uncertainty-aware USTS (Chinese) with a collection of annotations for each example (Wang et al., 2023); two small clinical datasets -- MedSTS (Wang et al., 2018) and N2C2-STS (Wang et al., 2020); and two small biomedical datasets -- BIOSSES (Sogancoglu et al., 2017) and EBM-SASS (Hassanzadeh et al., 2019).
For NLI, we use: MedNLI, which was annotated by physicians and is grounded in the medical history of patients (Romanov and Shivade, 2018), and ChaosNLI (Nie et al., 2020), which was created by collecting 100 annotations per example for 3,113 examples in SNLI (1,514) (Bowman et al., 2015) and MNLI (1,599) (Williams et al., 2018), denoted as Chaos-SNLI and Chaos-MNLI, respectively. Table 2 shows some statistics about the datasets.
### STS/NLI Challenges under PLM
There are three major challenges in STS and NLI modelling based on the paradigm of fine-tuning a pre-trained language model (PLM) such as BERT (Wang et al., 2020, 2022, 2023).
**Low accuracy in low-resource domains:** In domains such as biomedical and clinical, domain experts (e.g., a physician or a clinician) are required in the annotation process for high quality, which leads to an extremely limited amount of labelled data (less than 2,000 examples in clinical/biomedical STS datasets). In addition, domain text is rich in specific terms and concepts that rarely appear in general text. It is hard for language models that were pre-trained on a general corpus to understand the meaning of domain terms and the relationships between them due to exposure bias, when the lexical expressions are different.
Example 1 in Table 1 shows that a clinical STS model tuned using N2C2-STS training data still struggles to model the sentence pair, and assigns a semantic similarity score of 2.0, while the gold
label is 4.5. This is due to the lack of clinical knowledge that _Tapentadol_ and _Oxycodone_ are both pain-relief medicine.
Current language models have a much larger capacity and are trained using more data in pre-training, compared with BERT. Can they overcome this problem? How well do LLMs perform on low-resource and knowledge-rich domains? We investigate this in Section 3.
**Over-confidence on wrong predictions:** Neural models have been empirically demonstrated to have poor calibration -- the predictive probability does not reflect the true correctness likelihood, and they are generally over-confident when they make wrong predictions (Guo et al., 2017; Wang et al., 2022). Put differently, the models do not know what they don't know. For example No. 2 in Table 1, the STS model incorrectly predicts the similarity score as 1.95 when the gold label is 0.0. In such a case, a reliable model should display its highly-uncertain estimation with a large standard deviation instead of 0.004.
Faithfully assessing the uncertainty of model predictions is as important as obtaining high accuracy in many safety-critical applications, such as autonomous driving or clinical decision support (Chen et al., 2021; Kendall and Gal, 2017). If models were able to more faithfully capture their lack of certainty when they make erroneous predictions, they could be used more reliably in critical decision-making contexts, and avoid catastrophic errors.
Thus, we expect models to be confident when they make correct predictions and less confident when they make wrong predictions. How can we estimate predictive confidence/uncertainty from a generation output with prompt-based LLMs for STS and NLI? Are the predictions well-calibrated? We will answer these questions in Section 4.
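One standard way to quantify miscalibration, which we use here purely as an illustration, is the expected calibration error (ECE): predictions are binned by confidence, and the gap between the average confidence and the empirical accuracy is averaged over the bins, weighted by bin size.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE sketch: sum over bins of |avg confidence - accuracy|, weighted by bin frequency."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(confidences[mask].mean() - correct[mask].mean())
            ece += mask.mean() * gap
    return ece

print(expected_calibration_error([0.9, 0.8, 0.6, 0.95], [1, 0, 1, 1]))
```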
**Capturing collective human opinions:** Due to the task subjectivity and language ambiguity, there exists high disagreement for some cases in STS and NLI labelling, as shown by the examples under category No. 3 in Table 1. Based on a collection of individual ratings, the average score \(\mu\) of 1.7 does not convey the fact that the ratings vary substantially (\(\sigma>1.0\)), and the label (0, 1, 0) also does not reflect the inherent disagreements among raters for the NLI example, where 57 annotators among 100 assign entailment and 42 assign neutral.
The gold label aggregated by averaging or majority voting may reflect the average opinion or the majority viewpoint, but fails to capture the latent distribution of human opinions or interpretations, and masks the uncertain nature of subjective assessments. Simply estimating aggregated labels over examples with high disagreement is close to a random guess of an average opinion. How to capture the distribution of human opinions under LLMs? Can it be achieved by leveraging LLMs' capability of generating personalised responses under different roles (see Section 5)?
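A natural way to evaluate this, sketched below using standard definitions rather than a method proposed here, is to compare the model's predicted label distribution with the empirical distribution of human annotations, e.g., via the Jensen-Shannon divergence; the model distribution in the example is hypothetical.

```python
import numpy as np

def js_divergence(p, q, eps=1e-12):
    """Jensen-Shannon divergence (base 2) between two label distributions, e.g., over (C, E, N)."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    m = 0.5 * (p + q)
    kl = lambda a, b: float(np.sum(a * np.log2(a / b)))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

human = np.array([1, 57, 42]) / 100        # (C, E, N) annotation counts from the ChaosNLI example
model = np.array([0.05, 0.80, 0.15])       # hypothetical model distribution
print(js_divergence(human, model))
```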
### STS and NLI under LLMs
**Are STS and NLI still worth investigating?** STS and NLI tasks were used to evaluate language models' semantic understanding ability. LLMs
\begin{table}
\begin{tabular}{l l} \hline \hline
**No. 1** & Low-resource \& knowledge-rich \\ S1 & _Tapentadol 50 MG Oral tablet 1 tablets by mouth every 4 hours as needed._ \\ S2 & _Oxycodone [ROXICODONE] 5 mg tablet 1 tablets by mouth every 4 hours as needed._ \\ Gold label & 4.5 \\ Prediction & 2.0 \\ Reason & Lack of knowledge: _Tapentadol_ and _Oxycodone [ROXICODONE]_ are both pain-relief medicine. \\ \hline
**No. 2** & Over-confidence wrong prediction \\ S1 & _You will want to clean the area first._ \\ S2 & _You will also want to remove the seeds._ \\ Gold label & 0.0 \\ Prediction & \(1.95\pm\mathbf{0.004}\) \\ \hline
**No. 3** & Capture Human Disagreement \\ S1 & _A man is carrying a canoe with a dog._ \\ S2 & _A dog is carrying a man in a canoe._ \\ Old label & 1.8 \\ New label & \(\mathcal{N}(\mu=1.7,\sigma=1.0)\) \\ Annotations & [0.0, 0.3, 0.5, 0.5, 1.2, 1.5, 1.5, 1.8, 2.0, 2.0, 2.0, 2.0, 2.0, 2.0, 2.0, 2.0, 2.0, 2.0, 2.0, 2.5, 3.5, 3.5] \\ Prediction & 4.3 \\ Reason & Uncertainty about the impact of key differences in event participants on instances of high lexical overlap \\ \hline Premise & Look, there’s a legend here. \\ Hypothesis & See, there is a well known hero here. \\ Old label & (0, 1, 0) \\ New label & (0.01, 0.57, 0.42) \\ Annotations & C: 1, **E: 57, N: 42** \\ Source & Chaos-MultiNLI \\ \hline \hline \end{tabular}
\end{table}
Table 1: Challenging STS/NLI examples for the PLM-fine-tuned model. “Old label” = gold label by averaging or majority voting; “New label” = full distribution aggregated over 15 or 100 new ratings; and “Prediction” = similarity score predicted by fine-tuned the STS model based on BERT-base.
such as GPT-4 and Claude have shown remarkable capabilities in following user instructions and helpfully responding to a variety of open-domain questions. This implicitly indicates their great semantic understanding ability. Moreover, the labels of both tasks are sometimes ambiguous and subjective due to the high disagreement between annotators in labelling. As such, _is it still worthwhile to study STS and NLI under LLMs?_
**Yes**, they are still important. On the one hand, we wonder whether LLMs have the same challenges as PLMs. On the other hand, STS and NLI focus on analysing semantic relationship between two pieces of text, which allows us to automatically compare, analyse and evaluate LLMs' responses in terms of helpfulness, factuality, bias, toxicity and harmfulness. For example, in fact-checking to identify the veracity, STS is the core technique in dense information retrieval to collect the most relevant evidence given a claim, and NLI is always used to identify the stance of the evidence, supporting, refuting or being irrelevant to the claim. They reduce the human intervention and improve the efficiency.
### Prompting Strategies
GPT-3 Brown et al. (2020) demonstrated that LLMs are strong few-shot learners, where fast in-context learning can be achieved through prompting strategies. Through a handful of demonstration examples encoded as prompt text in the input context, LLMs are able to generalise to new examples and new tasks without any gradient updates or fine-tuning. The remarkable success of in-context few-shot learning has spurred the development of many prompting strategies including scratchpad, chain-of-thought, and least-to-most prompting, especially for multi-step computation and reasoning problems such as mathematical problems. In this study for STS and NLI, we focus on standard zero-shot, few-shot, chain-of-thought, and self-consistency prompting as discussed below.
**Few-shot:** The standard few-shot prompting strategy was introduced with GPT-3. The prompt to the model is designed to include few-shot examples describing the task through text-based demonstrations. These demonstrations are typically encoded as input-output pairs. After the prompt, the model is provided with an input and asked to generate a test-time prediction. In this study, we identify five demonstration input-output examples for each dataset and craft the few-shot prompts.
**Zero-shot:** The zero-shot prompting counterpart typically only involves an instruction describing the task without including any additional examples (see Table 3).
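For concreteness, a zero-shot and a few-shot STS prompt might be assembled as in the sketch below; the wording is illustrative and does not reproduce the exact prompts used in our experiments, and the LLM call itself is left as a placeholder.

```python
INSTRUCTION = ("Rate the semantic similarity of the two sentences on a scale from 0 "
               "(completely dissimilar) to 5 (equivalent in meaning). "
               "Answer with a single number.")

def sts_prompt(s1, s2, demonstrations=()):
    """Zero-shot when `demonstrations` is empty; few-shot otherwise.
    Each demonstration is a (sentence1, sentence2, gold_score) input-output pair."""
    shots = "".join(f"Sentence 1: {a}\nSentence 2: {b}\nScore: {y}\n\n"
                    for a, b, y in demonstrations)
    return f"{INSTRUCTION}\n\n{shots}Sentence 1: {s1}\nSentence 2: {s2}\nScore:"

prompt = sts_prompt("A man plays the guitar.", "A person is playing an instrument.")
# response = call_llm(prompt)   # call_llm is a placeholder for whichever LLM API is used
```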
**Chain of thought (CoT) and Explanation:** CoT Wei et al. (2022) involves augmenting each few-shot example in the prompt with a step-by-step breakdown and a coherent set of intermediate reasoning steps towards the final answer. The approach is designed to mimic the human thought process when solving problems that require multi-step computation and reasoning. CoT prompting can elicit reasoning abilities in sufficiently large LLMs and dramatically improve performance on tasks such as mathematical problems.
A variant of CoT is to prompt LLMs to generate an explanation, instead of a label-only prediction. This has been shown to
\begin{table}
\begin{tabular}{l l l l l l l} \hline \hline Dataset & \#Train & \#Dev & \#Test & Range & \#Annotation & Domain \\ \hline STS-B (2017) & 5,749 & 1,500 & 1,379 & \([0,5]\) & 5 & general \\ MedSTS (2018) & 750 & — & 318 & \([0,5]\) & 2 & clinical \\ N2C2-STS (2019) & 1,642 & — & 412 & \([0,5]\) & 2 & clinical \\ BIOSSES (2017) & — & — & 100 & \([0,4]\) & 5 & biomedical \\ EBMSASS (2019) & — & — & 1,000 & \([1,5]\) & 5 & biomedical \\ \hline USTS-U (2023) & 4,900 & 2,000 & 2,000 & \([0,5]\) & 4 & general \\ USTS-C (2023) & 2,051 & 2,000 & 2,000 & \([0,5]\) & 19 & general \\ \hline MedNLI & 11,232 & 1,395 & 1,422 & 3-class & – & clinical \\ \hline Chaos-SNLI (2020) & — & — & 1,514 & 3-class & 100 & general \\ Chaos-MNLI (2020) & — & — & 1,599 & 3-class & 100 & general \\ \hline \hline \end{tabular}
\end{table}
Table 2: STS/NLI datasets. #Train/Dev/Test = number of text pairs; Range = label range; #Annotation = number of raw annotations for each example.
be more robust on hard and adversarial NLI examples, since it forces models to rationalise and then predict (Kavumba et al., 2023), i.e., to learn what the NLI task is intended to teach rather than superficial cues, such as the association between the label _contradiction_ and the token _not_ in the hypothesis (models being "right for the wrong reason").
This is consistent with the finding presented by Zhang et al. (2023) that LLMs indeed have the knowledge and capability to answer questions correctly if we prompt them to rationalise step by step, instead of asking them to give a _Yes/No_ answer in the first token, where they tend to predict wrongly. Multi-step or explanation prompting may allow models to "think over" the problem and then infer answers, decreasing the error rate that results from a _quick quiz_ (less time to think).
Overall, these findings indicate that prompting large language models with multi-step reasoning, or having them give explanations before predicting labels, can lead to robust performance on hard and adversarial examples. Building on these findings, when designing prompts we allow models to generate an explanation by "thinking" through multiple steps before predicting the final label, to fully unlock LLMs' capabilities.
**Self-consistency** A straightforward strategy to improve performance on multiple-choice benchmarks is to prompt the model and sample multiple decoding outputs. The final answer is the one that receives the majority vote. This idea was introduced as self-consistency. The rationale behind this approach is that for a domain such as medicine, with complex reasoning paths, there might be multiple potential routes to the correct answer; marginalising out the reasoning paths can lead to the most consistent answer. The self-consistency prompting strategy led to particularly strong improvements in reasoning tasks, and we adopt the same approach for our datasets.
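A minimal sketch of self-consistency decoding is given below. It assumes a hypothetical `query_llm` helper that sends the prompt to an LLM with sampling enabled (temperature > 0) and returns one parsed label or score per call; the aggregation is majority voting for discrete NLI labels and averaging for continuous STS scores.

```python
from collections import Counter
from statistics import mean

def self_consistency(prompt, query_llm, k=10, task="nli"):
    """Sample k decoded answers for the same prompt and aggregate them.

    `query_llm(prompt)` is an assumed wrapper that returns one parsed
    prediction per call (a label string for NLI, a numeric score for STS).
    """
    samples = [query_llm(prompt) for _ in range(k)]
    if task == "nli":
        # Discrete labels: return the majority-voted label.
        return Counter(samples).most_common(1)[0][0]
    # Continuous STS scores: return the average over the sampled scores.
    return mean(float(s) for s in samples)
```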
## 3 Clinical and Biomedical Evaluation
How well do LLMs encode clinical and biomedical knowledge, compared with small PLMs?
Singhal et al. (2023) assess LLMs' potential in medicine through the answering of medical questions based on PaLM (8B to 540B). They observed strong performance as a result of scaling and instruction fine-tuning: the performance of PaLM 8B on MedQA was only slightly better than random, while accuracy improved by more than 30% for PaLM 540B.
Wu et al. (2023) evaluate the commercial LLMs ChatGPT and GPT-4, and open-source models including LLaMA, Alpaca and BLOOMz on a radiology corpus, determining whether the given context sentence from a radiology report contains the answer to the provided question (Yes or No). The results show that GPT-4 outperforms ChatGPT, followed by LLaMA-7B, Alpaca and BLOOMz-7B. The comparison with models fine-tuned over task-
\begin{table}
\begin{tabular}{l l} \hline \hline
**Task** & **Prompt Template** \\ \hline STS & Zero-shot \\ & Determine the similarity between the following two sentences (S1, S2). The score should be ranging from 0.0 to 5.0, and can be a decimal. S1: [S1] S2: [S2] Score: \\ \hline STS & Zero-shot (AG) \\ & Annotation instructions + Task description. \\ & S1: [S1] S2: [S2] Score: \\ \hline STS & Zero-shot (CoT) \\ & Determine the similarity between the following two sentences (S1, S2). _Explain the assessment step by step._ The score should be ranging from 0.0 to 5.0, and can be a decimal. \\ & S1: [S1] S2: [S2] Score: \\ \hline STS & Few-shot \\ & Five demonstration examples \(\cdots\) \\ & Task description. S1: [S1] S2: [S2] Score: \\ \hline STS & Few-shot (AG) \\ & Annotation instructions + Five demonstrations + Task description. S1: [S1] S2: [S2] Score: \\ \hline STS & Few-shot (CoT) \\ & Task description + Five demonstrations with an explanation for each, e.g., \\ & S1: A woman is washing her hands. \\ & S2: A woman is straightening her hair. \\ & Explain: S1 and S2 are on the same topic, but the important information is totally different. \\ & Score: 0.8 \\ & S1: [S1] S2: [S2] \\ \hline NLI & Zero-shot \\ & Given the sentence [premise], determine if the following statement is entailed or contradicted or neutral: [hypothesis]. \\ \hline \hline \end{tabular}
\end{table}
Table 3: Prompt templates used for STS and NLI in the zero-shot and few-shot settings. CoT = chain of thought, AG = annotation guidelines. The task description is the same as in the zero-shot prompt. [S1], [S2], [premise] and [hypothesis] denote input placeholders.
specific examples based on BERT shows that PLMs fine-tuned using >1k examples and >8k examples can achieve competitive accuracy against 10-shot ChatGPT and 10-shot GPT-4 respectively. These findings demonstrate a promising ability that is non-existent in small models but rapidly improves above random performance beyond a certain model size.
How do LLMs perform on clinical and biomedical STS and NLI? What factors influence their accuracy? We first assess the impact of different prompting strategies based on ChatGPT.
### Impact of Prompting Strategy
Based on general STS-B and clinical N2C2-STS test sets, we evaluate the impact of six different prompting strategies on the STS accuracy, for both ChatGPT and LLaMA-2 (7B), including (see Table 3):
* Zero-shot
* Zero-shot with annotation guidelines (AG)
* Zero-shot with chain of thought (CoT)
* Few-shot
* Few-shot with annotation guidelines (AG)
* Few-shot with chain of thought (CoT)
How to craft prompts? A naive few-shot prompt only shows exemplars to the model, such as five training examples whose similarity scores span from zero to five in our setting. However, the model is often confused about what task it should perform and fails to predict a score. Thus, we append a task description (the same as the zero-shot prompt) at the end of the demonstrations. Compared to placing the description at the beginning of the prompt, first showing examples and then elaborating the instructions before inputting test cases makes it easier for the model to follow the instruction, resulting in more valid predictions and better accuracy.
For a few-shot prompt with annotation guidelines (see Section B), three components are included: demonstrations, annotation instructions and the task description. When prompting in the order of task description, instructions and demonstrations, the majority of responses are invalid (441 among the first 500 examples in STS-B), returning "the score for the given sentence pair is not provided". When prompting with the instructions first, then the demonstrations and finally the task description, the model returns similarity scores.
The few-shot prompt with chain of thought is crafted with the task description followed by five demonstration examples, each with an explanation.
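A minimal sketch of how such a few-shot STS prompt could be assembled is shown below. The demonstration pairs and the exact wording are illustrative placeholders rather than the prompts used in our experiments; it follows the ordering described above, with the task description appended after the exemplars.

```python
# Sketch of few-shot STS prompt assembly (illustrative wording and examples).
TASK_DESCRIPTION = (
    "Determine the similarity between the following two sentences (S1, S2). "
    "The score should be ranging from 0.0 to 5.0, and can be a decimal."
)

# Hypothetical demonstration pairs spanning the 0-5 similarity range.
DEMONSTRATIONS = [
    ("A man is playing a guitar.", "A man plays the guitar.", 4.8),
    ("A woman is cooking pasta.", "A dog runs across the field.", 0.2),
]

def build_few_shot_prompt(s1: str, s2: str) -> str:
    """Demonstrations first, then the task description, then the test pair."""
    lines = []
    for d1, d2, score in DEMONSTRATIONS:
        lines.append(f"S1: {d1}\nS2: {d2}\nScore: {score}")
    lines.append(TASK_DESCRIPTION)                 # description after exemplars
    lines.append(f"S1: {s1}\nS2: {s2}\nScore:")    # test case to be completed
    return "\n\n".join(lines)

print(build_few_shot_prompt("A plane is taking off.", "An air plane is taking off."))
```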
How to parse labels from responses? One challenge is how to accurately parse the model prediction from a long free-form generation. Many predicted labels do not appear at the beginning, the end, or the position requested by the instruction, since the model does not always follow the instruction, particularly LLaMA-2.
For ChatGPT responses, we use rules and regular expressions to match and parse labels. It is hard to parse LLaMA-2 responses by rules because the answers are too diverse to induce generalised rules, especially with CoT. To solve this problem, we resort to LLaMA-2 itself to parse out the label, and then apply simple rules to normalise the results. This method alleviates the manual workload of summarising complicated rules, but at the risk of hallucinations: we observed that LLaMA-2 would omit decimal places, e.g., parsing the similarity score 4.5 as 4, or even generate a new scalar such as 1.0 with no basis in the response, in a minority of cases.
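A rule-based parser along these lines might look as follows; the regular expressions are illustrative and would need to be extended to cover the response styles actually observed.

```python
import re

def parse_similarity_score(response: str):
    """Extract a 0-5 similarity score from a free-form LLM response.

    Returns None when no score can be found, which we count as an invalid
    prediction. Patterns are illustrative, not exhaustive.
    """
    # Prefer an explicitly labelled score, e.g. "Score: 3.5" or "score is 3.5".
    m = re.search(r"[Ss]core\s*(?:is|:)?\s*([0-5](?:\.\d+)?)", response)
    if m:
        return float(m.group(1))
    # Fall back to the first number within the valid range anywhere in the text.
    for token in re.findall(r"\d+(?:\.\d+)?", response):
        value = float(token)
        if 0.0 <= value <= 5.0:
            return value
    return None
```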
#### 3.1.1 ChatGPT
Zero-shot prompt gives the best correlation based on ChatGPT. Results on both general-purpose and clinical STS in Table 4 show that providing annotation guidelines, using chain of thought, and demonstrating labelled examples to the model hurt STS performance; zero-shot with chain of thought is particularly harmful (estimations collapse). This is counter-intuitive and inconsistent with previous findings that chain of thought and few-shot examples improve accuracy on reasoning tasks, although Reynolds and McDonell (2021) also showed that cleverly-constructed zero-shot prompts can outperform few-shot prompts, implying that, for some tasks, models achieve better performance by leveraging their existing knowledge than by attempting to learn the task from in-context exemplars.
Brief annotation guidelines and limited exemplars may mislead models. With annotation guidelines, it becomes easier to identify sentence pairs that are completely dissimilar and topically unrelated, but the guidelines may also force the model to analyse what counts as important information versus unimportant details, and how to reflect this in the assessment. Such interpretation is ambiguous and subjective.
For Nos. 1 and 2 in Table 5, the model explains that the two sentences express the same action (dancing in the rain and singing with a guitar) and have highly similar semantic meaning; however, because there is a slight difference in the details mentioned, it determines the similarity scores as 2.5 and 3.0. This suggests that the model fully understands the meaning of the two sentences but fails to assign a correct similarity score.
For No. 3, the model analyses that _pipe_ and _carpet_, and _scissors_ and _knife_, are not similar, and then concludes that, because there are differences in important details described in the sentences, the similarity score between S1 and S2 could be considered 3.0. We find the reasoning process entirely correct, but the assigned score is around 3.0 whether the two sentences differ significantly in key points or only slightly in details.
Why does zero-shot CoT collapse? The rationale behind CoT is to improve performance on reasoning tasks by allowing the generative model to infer step by step, instead of outputting results directly. In the context of STS, reasoning could mean either calculating a similarity score step by step quantitatively, or explaining why a score is assigned.
When prompted with zero-shot CoT, ChatGPT gives detailed steps for how to calculate a similarity score using different metrics and features (e.g., tokenising and stemming, then TF-IDF or cosine similarity). Some responses analyse the similarity along axes such as sentence structure, bag of words, topic and other aspects, and when these scores conflict with each other (some low, some high), the model returns that it is difficult to determine the final score. Generally, these scores are added together and re-scaled to 0-1 or 0-5, sometimes even beyond the maximum of 5.0, without considering the meaning behind the score. Such casual and inconsistent re-scaling means the predictions are effectively evaluated on different scales.
Coarse measurements that highlight specific aspects such as lexical overlap and sentence structure overlook a comparison of the overall semantics, and careless re-scaling neglects the meaning behind the score. The combination hurts STS accuracy significantly, so we provide explanations in few-shot CoT instead.
#### 3.1.2 LLaMA-2
LLaMA-2 (7B) shows extremely poor performance on both STS-B and N2C2-STS, particularly without demonstration examples: \(r\)<0.15 for zero-shot prompts without annotation guidelines. The few-shot (CoT) prompt gives the best correlation for STS-B (r=0.67), and the few-shot prompt is best for N2C2-STS (r=0.33). Results on another five STS datasets also show very
\begin{table}
\begin{tabular}{l|cccc|cccc|cccc|cccc} \hline \hline
**Model**\(\rightarrow\) & \multicolumn{8}{c|}{**ChatGPT**} & \multicolumn{8}{c}{**LLaMA-2 (7B)**} \\ Dataset \(\rightarrow\) & \multicolumn{4}{c|}{**STS-B**} & \multicolumn{4}{c|}{**N2C2-STS**} & \multicolumn{4}{c|}{**STS-B**} & \multicolumn{4}{c}{**N2C2-STS**} \\ Prompt Strategy \(\downarrow\) & \#valid & \(r\uparrow\) & \(\rho\uparrow\) & MSE \(\downarrow\) & \#valid & \(r\uparrow\) & \(\rho\uparrow\) & MSE \(\downarrow\) & \#valid & \(r\uparrow\) & \(\rho\uparrow\) & MSE \(\downarrow\) & \#valid & \(r\uparrow\) & \(\rho\uparrow\) & MSE \(\downarrow\) \\ \hline zero-shot & 1379 & 0.758 & 0.766 & 1.87 & 412 & **0.817** & **0.754** & **0.90** & 1292 & 0.044 & 0.106 & 4.56 & 378 & -0.065 & -0.013 & 5.93 \\ zero-shot (AG) & 1379 & 0.640 & 0.638 & 1.59 & 412 & 0.532 & 0.531 & 2.53 & 1356 & 0.375 & 0.314 & **2.24** & 402 & 0.228 & 0.196 & **3.73** \\ zero-shot (CoT) & 1379 & 0.109 & 0.054 & 4.89 & 368 & 0.173 & 0.185 & 3.75 & 1147 & 0.147 & 0.158 & 4.27 & 388 & 0.018 & 0.012 & 4.99 \\ \hline few-shot & 1324 & 0.688 & 0.75 & 2.14 & 393 & 0.533 & 0.514 & 3.49 & 1373 & 0.506 & 0.423 & 3.26 & 407 & **0.327** & **0.317** & 6.97 \\ few-shot (AG) & 1377 & 0.07 & 0.756 & 1.79 & 389 & 0.505 & 0.469 & 3.03 & 1375 & 0.436 & 0.383 & 4.06 & 405 & 0.266 & 0.244 & 6.87 \\ few-shot (CoT) & 1316 & **0.796** & **0.796** & **1.56** & 412 & 0.637 & 0.680 & 3.18 & 1351 & **0.668** & **0.658** & 2.60 & 397 & -0.029 & -0.183 & 11.02 \\ \hline \hline \end{tabular}
\end{table}
Table 4: **Impact of prompt strategy:** Pearson (\(r\)), Spearman (\(\rho\)) correlation and MSE on the general STS-B (1379) and clinical N2C2-STS (412) test sets using six different prompt strategies: AG = with annotation guidelines, CoT = with chain of thought. #valid = the number of valid predictions; invalid cases are those where the LLM refused to respond or where the similarity score could not be parsed from the free-form text by simple rules or by auto-parsing with the LLM itself.
\begin{table}
\begin{tabular}{l l} \hline \hline
**No.** & **Example** \\ \hline
1 & S1: A woman is dancing in the rain. \\ & S2: A woman dances in the rain outside. \\ & Label: 5.0 \\ & Pred: 2.5 \\ \hline
2 & S1: A man is playing the guitar and singing. \\ & S2: A man sings with a guitar. \\ & Label: 4.75 \\ & Pred: 3.0 \\ \hline
3 & S1: A man is cutting a pipe with scissors. \\ & S2: A man is cutting carpet with a knife. \\ & Label: 1.2 \\ & Pred: 3.0 \\ \hline \hline \end{tabular}
\end{table}
Table 5: Incorrectly predicted examples of STS-B using zero-shot prompting with annotation guidelines.
low correlations, and few-shot (with or without CoT) provides the best accuracy (see Section D). LLaMA-2 (7B) tends to predict scores close to 5; as the similarity score distributions in Figure 1 show, the predicted score distribution deviates significantly from the gold label distribution.
Additionally, the low accuracy results on the one hand from the failure of LLaMA-2's STS modelling, and on the other hand is partially attributable to imprecise parsing. That is, not all predicted labels can be accurately parsed from the generated responses by automatic strategies. We skip the hard-to-parse cases, so the number of valid labels is less than the size of the full test set. Considering both the number of valid cases and performance, we use the few-shot prompt without guidelines and CoT for STS in the following LLaMA-2 experiments.
Impact of Parsing Strategies: We find that responses to few-shot prompts are easier to parse by rules. Table 6 compares the Pearson correlation of predictions parsed by rules and by LLaMA-2. Overall, rule-based parsing empirically performs better than parsing by LLaMA-2 itself on responses to few-shot prompts. The accuracy of LLaMA-2 (13B) is only slightly affected by the parsing strategy, while LLaMA-2 (7B) is influenced significantly. We speculate that larger LLMs not only can parse labels more accurately, they are also more capable of following instructions and generating easily-parsed responses.
#### 3.1.3 Zero-shot vs. Few-shot for NLI
Table 7 shows that for both LLaMA-2 7B and 13B, the few-shot prompt achieves either higher or comparable F1-scores compared with the zero-shot prompt across the three NLI datasets. This is consistent with the STS results using LLaMA-2.
### Impact of Metadata in Prompt
Does setting the system role as a domain expert result in better performance on domain datasets? Do Chinese prompts perform better than English prompts on Chinese datasets? We investigate the impact of the system role and the prompt language in this section.
System role and context: On the biomedical STS dataset BIOSSES and two clinical datasets, MedSTS and N2C2-STS, we compare the correlation with the system role (pre-context) set as "helpful assistant" vs. "biomedical/clinical expert". Figure 2 shows that the accuracy either declines or stays the same when the system role is changed from general assistant to domain expert. Similarly, changing the zero-shot prompt to "Determine the similarity between the following two sentences (S1, S2) _in the biomedical context with domain knowledge_" does not help either. Combining both causes the BIOSSES correlation to decline from 0.868 to 0.848.
Language of prompt: To evaluate the performance of LLMs on non-English benchmarks, we have two choices for the prompt language: an English prompt, the language the LLM is most familiar with, or an instruction in the corresponding language, consistent with the input content.
Based on a Chinese STS corpus USTS with two subsets: USTS-C with high human disagreement in labelling and USTS-U with low human disagree
\begin{table}
\begin{tabular}{l l l l l l l l} \hline \hline Dataset & STS-B & BIOSSES & EBMSASS & MedSTS & N2C2-STS & USTS-C & USTS-U \\ \hline \hline
**LLaMA-2 (7B)** & & & & & & & \\ \hline Rules & 0.528 & 0.181 & 0.078 & 0.278 & 0.328 & 0.038 & 0.076 \\ LLaMA-2 & 0.506 & 0.151 & 0.081 & 0.255 & 0.327 & 0.033 & 0.076 \\ \hline
**LLaMA-2 (13B)** & & & & & & & \\ \hline Rules & 0.584 & 0.254 & 0.189 & 0.186 & 0.254 & 0.004 & 0.107 \\ LLaMA-2 & 0.583 & 0.255 & 0.195 & 0.186 & 0.252 & 0.003 & 0.11 \\ \hline \hline \end{tabular}
\end{table}
Table 6: **Impact of parsing strategy:** Pearson correlation (\(r\)) of seven STS datasets based on few-shot prompt under LLaMA-2 7B (top) and 13B (bottom). Rule-based parsing overall performs better than parsing by LLaMA-2 itself on responses by few-shot prompt. Accuracy of LLaMA-2 (13B) is slightly impacted by parsing strategies.
\begin{table}
\begin{tabular}{l l l l l} \hline \hline Dataset & lan\_instruction & \(r\uparrow\) & \(\rho\uparrow\) & MSE \(\downarrow\) \\ \hline USTS-C (high) & English & **0.556** & **0.551** & **2.97** \\ USTS-C (high) & Chinese & 0.461 & 0.503 & 5.00 \\ USTS-U (low) & English & **0.552** & **0.465** & **3.09** \\ USTS-U (low) & Chinese & 0.472 & 0.435 & 5.42 \\ \hline \hline \end{tabular}
\end{table}
Table 8: **Impact of prompt language:** Pearson (\(r\)), Spearman (\(\rho\)) correlation and MSE on USTS-C and USTS-U using English vs. Chinese zero-shot instructions with ChatGPT.
\begin{table}
\begin{tabular}{l c c c|c c c} \hline \hline Model\(\rightarrow\) & \multicolumn{3}{c|}{**LLaMA-2 (7B)**} & \multicolumn{3}{c}{**LLaMA-2 (13B)**} \\ Dataset\(\rightarrow\) & S & M & MED & S & M & MED \\ \hline Few-shot & **0.375** & **0.306** & **0.312** & **0.319** & 0.321 & **0.414** \\ Zero-shot & 0.204 & 0.288 & 0.253 & 0.205 & **0.323** & 0.293 \\ \hline \hline \end{tabular}
\end{table}
Table 7: **F1-score by Zero vs. Few-shot for NLI over Chaos-SNLI (S), Chaos-MNLI (M) and MedNLI (MED) under LLaMA-2 7B and 13B.**
ment, we compare the results using English vs. Chinese zero-shot prompts.
In Table 8, using the English instruction shows higher correlation and smaller MSE than using the Chinese instruction. On both USTS-C and USTS-U, the correlations between the predicted score and the gold label (obtained by averaging the annotations of all raters) are extremely low (around 0.5), and the MSE is large. This implies that it is challenging for ChatGPT to correctly estimate semantic similarity scores for the Chinese sentence pairs in USTS, whether they have high or low human disagreement.
Moreover, for fine-tuned STS models based on BERT, or for cosine similarity between the semantic representations of the two sentences, it is easier to predict the average score for USTS-U than for USTS-C. ChatGPT does not seem to perceive the degree of human disagreement in labelling, showing higher accuracy on the more uncertain subset USTS-C.
### Evaluation
Using the zero-shot prompt, we evaluate ChatGPT on ten general, clinical and biomedical STS/NLI benchmarks. USTS-C and USTS-U are Chinese STS datasets, and Chaos-SNLI and Chaos-MNLI are composed of ambiguous cases with high human disagreement among annotators. Estimations by ChatGPT are inferior to baseline predictions by fine-tuned _BERT-base_, except for comparable results on BIOSSES. This suggests that clinical and biomedical domains remain challenging for LLMs, and Chinese sentence pairs and cases with controversial labels are particularly hard for ChatGPT to predict correctly.
Baselines: STS-B, MedSTS, N2C2-STS, USTS-C and USTS-U are predicted by _BERT-base_ fine-tuned on the training data of the corresponding dataset, some coupled with data augmentation strategies. Without training data, BIOSSES uses the model fine-tuned on N2C2-STS and EBMSASS uses the model fine-tuned on STS-B. Chaos-SNLI/MNLI are predicted by _BERT-base_ fine-tuned on the combination of SNLI and MNLI training data, and the MedNLI baseline is fine-tuned on MedNLI training data. These baseline results are extracted from Wang et al. (2020, 2020, 2022, 2023).
## 4 Calibration under LLM
Calibration measures how well predictive confidence aligns with actual accuracy. With a well-calibrated model in real-world applications, we can trust how certain the model is about a prediction being correct, and defer to human experts when the model is highly uncertain.
Figure 1: Similarity Score distribution of STS-B (top) and N2C2-STS (bottom) by LLaMA-2 (7B). Ref=Gold labels
Figure 2: The impact of system role on the performance of domain datasets using ChatGPT.
### Challenges
Differences between textual discriminative and generative models pose challenges in LLM calibration in terms of accuracy calculation and confidence estimation.
Accuracy Calculation: Accuracy can be easily calculated in a classification task where the decision space is clearly defined over the given classes. However, the distribution of free-form generations from large language models is complicated: it is ambiguous how to scope the label space, given that the gold semantics can be expressed in various ways (Kuhn et al., 2023). For the STS and NLI tasks discussed in this work, we simplify this issue by prompting LLMs with task-specific instructions that constrain the label space, so that the generated text contains the predicted labels.
Confidence Estimation: For a classifier, the probabilities produced by the _softmax_ over the logits often serve as the predictive confidence. For continuous labels, predictive uncertainty is commonly represented by the standard deviation (Wang et al., 2022). However, how to estimate predictive confidence for STS and NLI under generative models is an open question. Moreover, for black-box LLMs such as ChatGPT, we can only access the generated text via the API, without the probability of predicting the next token.
### Predictive Confidence Estimation
A good confidence estimation is expected to truly reflect a model's uncertainty in predicting or making decisions. How to estimate predictive confidence for LLMs, in both black-box and white-box settings?
Black-box LLMs: We generate K samples for each example, and then calculate the mean and standard deviation for STS and the empirical class probability for NLI, similar to Lin et al. (2023) and Kuhn et al. (2023), who further incorporate the similarity between any two samples.
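A sketch of this sampling-based estimation is given below, assuming a hypothetical `query_llm` helper that returns one parsed prediction per call.

```python
import statistics
from collections import Counter

def blackbox_confidence(prompt, query_llm, k=10, task="sts"):
    """Estimate predictive confidence for a black-box LLM by sampling K outputs.

    For STS the K scores are summarised by mean and standard deviation;
    for NLI the empirical probability of the majority label is returned.
    `query_llm(prompt)` is an assumed wrapper around the LLM API.
    """
    samples = [query_llm(prompt) for _ in range(k)]
    if task == "sts":
        scores = [float(s) for s in samples]
        return statistics.mean(scores), statistics.stdev(scores)
    counts = Counter(samples)
    label, freq = counts.most_common(1)[0]
    return label, freq / k  # predicted label and its empirical probability
```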
White-box LLMs: Different from the black-box approach, here we estimate the confidence score by directly obtaining the probability distribution of the next token after the prompt. In detail, for both the STS and NLI tasks, in order to obtain a reasonable estimate we constrain the output format of the model using few-shot prompts, so that the sampling of the first token is more likely to align with the label space. Then, we obtain the output logits from the last prompt token and normalise them into a probability distribution with the _softmax_ function. For STS, as the label space consists of real numbers from 0.0 to 5.0, to simplify the experiments we only study the probability of the first digit, which corresponds to the tokens [0, 1, 2, 3, 4, 5]. For NLI, we guide the model to output lowercase labels with few-shot samples, so we only collect probability scores for the three sub-words: [_ent, _neutral, _contradiction].
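A minimal sketch of this white-box estimation with Hugging Face Transformers follows; the checkpoint name and label strings are placeholders, and in practice each label must be mapped to the tokenizer's actual first sub-word id (the assumption being that labels are distinguishable by their first token, as encouraged by the few-shot prompts).

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder checkpoint; any causal LM with accessible logits works the same way.
MODEL_NAME = "meta-llama/Llama-2-7b-hf"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)

def whitebox_confidence(prompt: str, label_strings):
    """Return a probability for each candidate label from the next-token logits."""
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits[0, -1]   # logits for the token after the prompt
    # Map each candidate label to its first sub-word id (assumption, see lead-in).
    label_ids = [tokenizer(s, add_special_tokens=False).input_ids[0] for s in label_strings]
    probs = torch.softmax(logits[label_ids], dim=-1)  # renormalise over the label space
    return dict(zip(label_strings, probs.tolist()))

# e.g. whitebox_confidence(prompt, ["entailment", "neutral", "contradiction"])
# or   whitebox_confidence(prompt, ["0", "1", "2", "3", "4", "5"]) for the STS first digit
```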
### Experiments
Metrics: Expected calibration error (ECE) is used to measure whether the predictive confidence estimates are aligned with the empirical correctness likelihoods; the lower the ECE, the better calibrated the model. For STS in the black-box setting, we calculate ECE
\begin{table}
\begin{tabular}{l c|c c c|c c c|c c c} \hline \hline & \multicolumn{3}{c|}{**BERT**} & \multicolumn{3}{c|}{**ChatGPT** Zero-shot} & \multicolumn{3}{c|}{**LLaMA-2 (7B)** Few-shot} & \multicolumn{3}{c}{**LLaMA-2 (13B)** Few-shot} \\
**STS\(\downarrow\)** & Base (r) & \(r\uparrow\) & \(\rho\uparrow\) & MSE \(\downarrow\) & \(r\uparrow\) & \(\rho\uparrow\) & MSE \(\downarrow\) & \(r\uparrow\) & \(\rho\uparrow\) & MSE \(\downarrow\) \\ \hline STS-B & 0.868 & 0.827 & 0.825 & 1.16 & 0.528 & 0.551 & 3.49 & 0.584 & 0.597 & 2.87 \\ BIOSSES & 0.854 & 0.865 & 0.888 & 0.56 & 0.181 & 0.129 & 6.73 & 0.254 & 0.223 & 8.50 \\ EBMSASS & 0.867 & 0.805 & 0.650 & 0.50 & 0.078 & 0.071 & 8.62 & 0.189 & 0.202 & 9.51 \\ MedSTS & 0.859 & 0.790 & 0.701 & 0.72 & 0.278 & 0.250 & 2.49 & 0.186 & 0.176 & 3.69 \\ N2C2-STS & 0.902 & 0.817 & 0.754 & 0.90 & 0.328 & 0.316 & 6.99 & 0.254 & 0.270 & 9.88 \\ USTS-C (high) & 0.861 & 0.556 & 0.551 & 2.97 & 0.038 & 0.052 & 11.3 & 0.004 & 0.042 & 10.4 \\ USTS-U (low) & 0.838 & 0.552 & 0.465 & 3.09 & 0.076 & 0.096 & 14.6 & 0.107 & 0.129 & 13.1 \\ \hline
**NLI\(\downarrow\)** & Base (Acc) & Acc \(\uparrow\) & F1-macro\(\uparrow\) & Prec/Recall\(\uparrow\) & Acc \(\uparrow\) & F1-macro\(\uparrow\) & Prec/Recall\(\uparrow\) & Acc \(\uparrow\) & F1-macro\(\uparrow\) & Prec/Recall\(\uparrow\) \\ \hline Chaos-SNLI & 0.747 & 0.491 & 0.485 & 0.480/0.521 & 0.368 & 0.375 & 0.407/0.452 & 0.350 & 0.319 & 0.314/0.480 \\ Chaos-MNLI & 0.558 & 0.479 & 0.472 & 0.498/0.509 & 0.348 & 0.306 & 0.361/0.434 & 0.396 & 0.321 & 0.358/0.471 \\ MedNLI & 0.777 & 0.739 & 0.743 & 0.763/0.739 & 0.412 & 0.312 & 0.431/0.412 & 0.516 & 0.414 & 0.509/0.516 \\ \hline \hline \end{tabular}
\end{table}
Table 9: Evaluation by ChatGPT Zero-shot with system role=helpful assistant and LLaMA-2 (7B, 13B) Few-shot: Pearson (\(r\)), Spearman (\(\rho\)) correlation and MSE of test set over seven STS datasets across domains; and precision (Prec), recall and F1 score on three NLI datasets. Baselines (Base) are estimated by fine-tuned STS/NLI model based on _BERT-base_.
using the formula for continuous values with the mean and standard deviation as Wang et al. (2022), while for NLI and white-box STS, we use Eq (1):
\[\mathrm{ECE}=\sum_{m=1}^{M}\frac{|B_{m}|}{n}\left|\mathrm{acc}(B_{m})-\mathrm{conf}(B_{m})\right| \tag{1}\]
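A sketch of the binned computation in Eq. (1) is shown below, assuming per-example confidence scores in [0, 1] and binary correctness indicators.

```python
import numpy as np

def expected_calibration_error(confidences, correct, num_bins=10):
    """ECE as in Eq. (1): bin predictions by confidence and compare
    the empirical accuracy with the mean confidence in each bin."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    bins = np.linspace(0.0, 1.0, num_bins + 1)
    n = len(confidences)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            acc = correct[mask].mean()        # empirical accuracy in bin B_m
            conf = confidences[mask].mean()   # average confidence in bin B_m
            ece += mask.sum() / n * abs(acc - conf)
    return ece
```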
Experimental Setup: Based on MedSTS, BIOSSES and USTS-C for STS, and MedNLI and ChaosNLI for NLI,3 we experiment with ChatGPT as the black-box model and Vicuna (7B, 13B) as the white-box proxy. In the black-box setting, we sample K times (K=10 with the zero-shot prompt), and use the standard deviation for continuous labels and the per-class probability for classification outputs as the confidence score. In the white-box setting, the length-normalised joint probability is used for both STS and NLI.
Footnote 3: We use 200 samples for USTS-C and ChaosNLI, same subset as Section 5
Results and Analysis: ChatGPT achieves much higher correlation and F1, and lower ECE, across all datasets than LLaMA-2, and 13B is better than 7B, as shown in Table 10.
## 5 Collective Human Opinion
Capturing the distribution of human opinions with large neural models is non-trivial, especially for continuous values. Applying Bayesian estimation to all model parameters of a large language model is theoretically possible, but in practice it is prohibitively expensive in both training and evaluation. Deriving uncertainty estimates by integrating over millions of model parameters, and initialising a prior distribution for each, are both non-trivial (Wang et al., 2022).
Instead of estimating key parameters of a standard distribution (e.g. \(\mu\) and \(\sigma\) in a Gaussian distribution) to fit the collective human opinions, in this work we propose estimating personalised ratings that simulate individual annotations, and then computing the collective distribution. Specifically, we prompt LLMs by setting the system role to different personas characterised by age, gender, educational background, profession and other skills. The assumption is that LLMs can make persona-specific judgements within the capability and background of the given role.
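A sketch of collecting such persona-specific ratings is shown below; the persona descriptions are illustrative placeholders, and `chat(system, user)` is an assumed wrapper around a chat-completion API that returns one parsed rating per call.

```python
# Hypothetical persona list; the actual study uses ten roles characterised
# by age, gender, education, profession and other skills.
PERSONAS = [
    "a helpful assistant",
    "a linguistic expert",
    "an NLP PhD student",
    "a five-year-old child",
]

def collect_persona_ratings(user_prompt, chat):
    """Query the same example once per persona and return the simulated ratings.

    `chat(system, user)` is an assumed API wrapper returning one parsed
    similarity score (STS) or label (NLI) per call.
    """
    ratings = {}
    for persona in PERSONAS:
        system_msg = f"You are {persona}."
        ratings[persona] = chat(system_msg, user_prompt)
    return ratings  # aggregate these ratings to form the collective distribution
```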
Hypothesis: If language models are capable of making personalised judgements that match the abilities of different roles, then a helpful assistant should give more accurate estimations than a five-year-old child on complex semantic reasoning tasks, a linguistic expert should be better than an assistant, and an NLP PhD student should have judgement comparable to an NLP expert. Judgements collected from different roles should then be close to the distribution of collective human opinions gathered by crowdsourcing.
### Experiment Setup
Based on ChaosNLI for NLI and USTS-C for STS, where multiple annotations are available for each example to represent the collective human opinions, we randomly sample 200 examples from USTS-C (with similarity scores distributed uniformly across 0-5 at intervals of 1.0) and 200 from the ChaosNLI test sets (100 from Chaos-SNLI and 100 from Chaos-MNLI) to investigate whether ChatGPT can imitate individual ratings under different roles.
### Results and Analysis
Performance differs under different roles. However, the model's uncertainty may contribute more to the divergence in judgements than the personalised estimation does.
\begin{table}
\begin{tabular}{l|c c c|c c c|c c c} \hline Model \(\rightarrow\) & \multicolumn{3}{c|}{**ChatGPT**} & \multicolumn{3}{c|}{**LLaMA-2 (7B)**} & \multicolumn{3}{c}{**LLaMA-2 (13B)**} \\ Dataset \(\downarrow\) & \(r\) & F1 & ECE & \(r\) & F1 & ECE & \(r\) & F1 & ECE \\ \hline MedSTS & 0.801 & - & 0.622 & 0.269 & 0.076 & 0.818 & 0.252 & 0.087 & 0.754 \\ BIOSSES & 0.849 & - & _1.096_ & 0.107 & 0.017 & 0.840 & 0.272 & 0.010 & 0.723 \\ USTS-C & 0.809 & - & _4.424_ & 0.114 & 0.114 & 0.000 & -0.119 & 0.122 & 0.000 \\ \hline MedNLI & - & 0.668 & 0.238 & - & 0.312 & 0.457 & - & 0.407 & 0.277 \\ ChaosNLI & - & 0.541 & 0.215 & - & 0.356 & 0.418 & - & 0.309 & 0.348 \\ \hline \end{tabular}
\end{table}
Table 10: Pearson correlation (\(r\)), F1 and ECE for STS/NLI by ChatGPT and LLaMA-2 (7B, 13B). Note that the ECE for STS under ChatGPT is calculated with a different formula from the others (_italic numbers_), so they cannot be compared directly.
\begin{table}
\end{table}
Table 11: ChaosNLI and USTS-C performance under ten different system roles against the aggregated labels of collective human opinions. Aggregation: majority voting for NLI and averaging for STS. Ensemble refers to aggregating predictions of ten roles.
On samples of ChaosNLI and USTS-C, accuracy differs significantly under different system roles: the NLP PhD student performs best on ChaosNLI and the linguistic expert performs best on USTS-C. However, how much of this distinction is due to the role set in context versus the model's predictive uncertainty? If the deviation across multiple runs under the same role were notably smaller than the variance stemming from the role setting, and a relatively high performance consistently appeared for the well-performing role, we would conclude that different roles in in-context learning unlock different reasoning paths, with an optimal role leading the reasoning to more correct answers.
Therefore, we re-run the experiments ten times on ChaosNLI and USTS-C with the roles of an NLP PhD student and a linguistic expert, respectively. As shown in Table 12, on both ChaosNLI and USTS-C the results deviate significantly across the ten runs, and a higher performance is not maintained: the accuracy on ChaosNLI ranges from 0.48 to 0.55, and the Pearson correlation for USTS-C ranges from 0.67 to 0.76. This suggests that model uncertainty contributes more to the performance variance than the setting of system roles does.
The collective predictions essentially do not match the human opinions. The label distributions, represented by (\(\mu\), \(\sigma\)), of USTS-C annotators and of the predictions of ten different roles differ substantially (see the top of Figure 3). The distributions from ten roles and from ten runs with the same role of _linguistic expert_ are much more similar: their KL-divergence is less than 1.0 for 171 (86%) of the examples, indicating a small distributional distance between the same role and different roles for the majority of cases. In contrast, the KL-divergence between the annotators and the ten roles/ten runs is mostly large (KL>1.0 for 177/185 examples). This suggests that neither estimations under different roles nor multiple runs with the same role can imitate the distribution of collective human opinions.
Similarly, in Figure 4 for ChaosNLI, the distributional divergence between annotators and the raters simulated by model system roles spans from 0 to 400, while the KL-divergence between the ten roles and ten runs with the same role is much smaller, with the majority concentrated within 50.4 Moreover, the distributions of both KL and JSD for (annotators, ten roles) and (annotators, ten runs under the role of PhD student) are similar. This indicates that the effect of setting different roles is similar to running the model multiple times under the same role. That is, prompting models with different roles either fails to unlock the LLM's capability, or, despite the power of current LLMs, personalised prediction remains challenging.
Footnote 4: Bootstrap is applied to sample 100 judgements, imitating 100 annotations in ChaosNLI.
What does JSD=0.2 mean when reflected in NLI labels? JSD is symmetric and ranges from 0.0 to 1.0. In terms of a specific label, how large a difference between two distributions results in JSD=0.2? We randomly selected examples whose JSD between annotators and ten roles equals 0.2, 0.4, 0.6, 0.7 and 0.9, shown in Figure 5. We can see that when JSD\(\leq\)0.2 the majority label always remains the same, while it changes to another label when JSD is greater than 0.2.
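A sketch of the divergence computations used above, for discrete NLI label distributions represented as probability vectors, might look as follows.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) for discrete distributions, with smoothing to avoid log(0)."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

def jensen_shannon_divergence(p, q):
    """JSD(p, q): symmetric, and bounded in [0, 1] when measured in bits."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    m = 0.5 * (p / p.sum() + q / q.sum())
    jsd_nats = 0.5 * kl_divergence(p, m) + 0.5 * kl_divergence(q, m)
    return jsd_nats / np.log(2)  # convert nats to bits so the range is [0, 1]

# e.g. jensen_shannon_divergence([0.01, 0.57, 0.42], [0.0, 0.8, 0.2])
```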
\begin{table}
\begin{tabular}{c|c c c c|c c c} \hline \hline Dataset \(\rightarrow\) & \multicolumn{4}{c|}{ChaosNLI} & \multicolumn{3}{c}{USTS-C} \\
**Run No.** & Acc \(\uparrow\) & Prec \(\uparrow\) & Recall \(\uparrow\) & F1-macro \(\uparrow\) & \(r\uparrow\) & \(\rho\uparrow\) & MSE \(\downarrow\) \\ \hline
1 & 0.555 & 0.532 & 0.526 & 0.522 & 0.758 & 0.778 & 2.77 \\
2 & 0.500 & 0.476 & 0.470 & 0.467 & 0.675 & 0.746 & 3.27 \\
3 & 0.530 & 0.502 & 0.500 & 0.497 & 0.699 & 0.741 & 3.02 \\
4 & 0.530 & 0.509 & 0.519 & 0.510 & 0.666 & 0.669 & 3.13 \\
5 & 0.510 & 0.496 & 0.466 & 0.467 & 0.707 & 0.715 & 2.96 \\
6 & 0.540 & 0.528 & 0.526 & 0.518 & 0.702 & 0.749 & 3.15 \\
7 & 0.520 & 0.494 & 0.492 & 0.488 & 0.718 & 0.765 & 3.00 \\
8 & 0.560 & 0.547 & 0.553 & 0.538 & 0.675 & 0.719 & 3.19 \\
9 & 0.555 & 0.527 & 0.527 & 0.523 & 0.721 & 0.749 & 2.91 \\
10 & 0.565 & 0.540 & 0.533 & 0.530 & 0.707 & 0.736 & 2.90 \\ \hline Ensemble & 0.570 & 0.547 & 0.544 & 0.541 & 0.809 & 0.840 & 2.79 \\ \hline \hline \end{tabular}
\end{table}
Table 12: Ten runs for ChaosNLI under the role of NLP PhD student and USTS-C under a linguistic expert. Ensemble refers to majority voting for NLI and averaging for STS over ten runs.
Figure 3: USTS-C (\(\mu\), \(\sigma\)) distribution of annotators versus ChatGPT roles and ten runs by the role of _linguistic expert_, and KL-Divergence (bottom) between the collective human opinions and the distribution of predictions by ten different roles using ChatGPT.
## 6 Conclusion and Future Work
We performed a study aiming to rethink STS and NLI in the era of large language models (LLMs). We evaluated the accuracy of clinical/biomedical STS and NLI over five datasets, and then assessed LLMs' predictive confidence and their capability to capture collective human opinions. We found that LLMs may be able to provide personalised descriptions for a specific topic, or generate semantically similar content in different tones, but it is hard for current LLMs to make personalised judgements or decisions. We further found that zero-shot ChatGPT achieves competitive accuracy on clinical and biomedical STS/NLI compared with the fine-tuned BERT-base. However, there is large variation across samples, and ensembled results perform best.
In future work, we plan to experiment with more STS/NLI datasets and with other LLMs. We also want to experiment with other semantic tasks, as well as with other languages.
|
2309.12931 | On Separate Normalization in Self-supervised Transformers | Self-supervised training methods for transformers have demonstrated
remarkable performance across various domains. Previous transformer-based
models, such as masked autoencoders (MAE), typically utilize a single
normalization layer for both the [CLS] symbol and the tokens. We propose in
this paper a simple modification that employs separate normalization layers for
the tokens and the [CLS] symbol to better capture their distinct
characteristics and enhance downstream task performance. Our method aims to
alleviate the potential negative effects of using the same normalization
statistics for both token types, which may not be optimally aligned with their
individual roles. We empirically show that by utilizing a separate
normalization layer, the [CLS] embeddings can better encode the global
contextual information and are distributed more uniformly in its anisotropic
space. When replacing the conventional normalization layer with the two
separate layers, we observe an average 2.7% performance improvement over the
image, natural language, and graph domains. | Xiaohui Chen, Yinkai Wang, Yuanqi Du, Soha Hassoun, Li-Ping Liu | 2023-09-22T15:30:53Z | http://arxiv.org/abs/2309.12931v2 | # On Separate Normalization in Self-supervised Transformers
###### Abstract
Self-supervised training methods for transformers have demonstrated remarkable performance across various domains. Previous transformer-based models, such as masked autoencoders (MAE), typically utilize a single normalization layer for both the class token \([\mathrm{CLS}]\) and the tokens. We propose in this paper a new yet simple normalization method that separately normalizes embedding vectors respectively corresponding to normal tokens and the \([\mathrm{CLS}]\) token, in order to better capture their distinct characteristics and enhance downstream task performance. Our empirical study shows that the \([\mathrm{CLS}]\) embeddings learned with our separate normalization layer better encode the global contextual information and are distributed more uniformly in its anisotropic space. When the conventional normalization layer is replaced with a separate normalization layer, we observe an average 2.7% performance improvement in learning tasks from the image, natural language, and graph domains.
## 1 Introduction
Transformer models [Vaswani et al., 2017] have revolutionized natural language processing (NLP) [Devlin et al., 2018, Liu et al., 2019] and demonstrated remarkable performance across a wide range of NLP tasks. The significance of transformer models lies in their ability to model context and capture complex linguistic patterns without being constrained by the sequential nature of data. Beyond NLP, transformers have found further success in areas such as computer vision (CV) [Han et al., 2022], speech recognition [Karmakar et al., 2021], and recommendation systems [Sun et al., 2019, Gu et al., 2020, Wu et al., 2020]. Their flexible architecture and ability to capture dependencies have made them adaptable to diverse data modalities in these domains.
Transformer architectures have been studied extensively from various perspectives such as attention mechanisms, positional encoding (Devlin et al., 2018), and normalization techniques. Specifically, layer normalization (Ba et al., 2016) and batch normalization (Ioffe and Szegedy, 2015) are employed to enhance stability and speed up convergence during training. The literature on transformers also explores parameter initialization (Xu et al., 2019), optimization algorithms (Huang et al., 2020), regularization techniques (Steiner et al., 2021; Zhou et al., 2020), and improved architectures (Han et al., 2021). This collective research has advanced transformer architectures and their applications in NLP, CV, and other learning domains.
The study of normalization in transformer architectures is motivated by several factors (Xiong et al., 2020; Shen et al., 2020; Nguyen and Salazar, 2019). For example, Xiong et al. (2020) emphasize the importance of the warm-up of the learning rate and the position of layer normalization layers for the purpose of stable training and faster convergence. Shen et al. (2020) investigates the disadvantage of using batch normalization in transformers and proposes power normalization. While most previous works focus on how the normalization layer can be modified to stabilize the training process, it is less understood how the normalization affects the encoding abilities of these embeddings.
In self-supervised transformers, the \([\mathrm{CLS}]\) symbol is frequently used as a global representation for various downstream tasks (Devlin et al., 2018; He et al., 2022). Often, the normalization applied to the \([\mathrm{CLS}]\) symbol is shared with the rest of the tokens in the sequence, which we term Shared Normalization (ShareNorm). Given that the \([\mathrm{CLS}]\) symbol plays a special role in representation learning, a natural question is whether it should be treated separately in the normalization operation. Driven by this question, our research first scrutinizes the behavior of the current shared normalization in transformers, particularly the properties of the \([\mathrm{CLS}]\) embedding and its influence on downstream task performance. Subsequently, we propose replacing ShareNorm with Separate Normalization (SepNorm), which employs distinct normalization operations for the \([\mathrm{CLS}]\) symbol and the token features, as depicted in Figure 1. Through extensive analysis, we demonstrate that \([\mathrm{CLS}]\) embeddings learned using ShareNorm suffer from dimensional collapse, which cannot be rectified even by enforcing uniformity (Wang and Isola, 2020). However, the straightforward substitution of SepNorm for ShareNorm substantially mitigates this issue. We empirically validate the effectiveness of SepNorm in tasks from the image, text, and graph domains, demonstrating the universal advantage of the proposed SepNorm.
## 2 Background
### Pretraining Transformers with the \([\mathrm{CLS}]\) symbol
Unsupervised pretraining of a transformer-based model (Vaswani et al., 2017) is widely investigated in many domains, including NLP, computer vision (CV), and graphs.
Pretraining BERT for NLP. In NLP, Devlin et al. (2018) first develop the BERT model by pretraining a transformer-based network on two tasks: masked language modeling and next sentence prediction. During pretraining, BERT takes a pair of sentences \((\mathbf{x},\mathbf{y})\)
Figure 1: Comparison of the shared normalization (ShareNorm, left) and the proposed separate normalization (SepNorm, right) configurations for token normalization. In the ShareNorm setup, both the \([\mathrm{CLS}]\) symbol and other tokens are normalized using a single-layer normalization. In the SepNorm setup, normalization is done separately: the \([\mathrm{CLS}]\) symbol is normalized through batch normalization, while other tokens are normalized via layer normalization.
which are represented as a special sequence
\[\mathbf{s}=\big{(}[\mathrm{CLS}],\mathbf{x},[\mathrm{SEP}],\mathbf{y}\big{)}. \tag{1}\]
Here \([\mathrm{SEP}]\) is a special token that separates the two sentences. A fraction (e.g., 15%) of the tokens in \(\mathbf{x}\) and \(\mathbf{y}\) are randomly replaced by a special symbol \([\mathrm{MASK}]\). The first task in BERT is to predict the original tokens replaced by \([\mathrm{MASK}]\) with cross-entropy loss. The second task is to predict whether \(\mathbf{y}\) is the next sentence following \(\mathbf{x}\), and the decision is made by classifying the final embedding of the \([\mathrm{CLS}]\) symbol. After pretraining, the representation in the \([\mathrm{CLS}]\) is usually used for sentence-level downstream tasks such as sentiment analysis (Medhat et al., 2014).
Pretraining MAE for CV. The Vision Transformer (ViT) (Dosovitskiy et al., 2020) applies transformers to computer vision tasks. In ViT, an image is usually divided into a \(16\times 16\) grid of patches, which are then flattened into a sequence of 256 tokens and fed into the ViT. He et al. (2022) propose a self-supervised training scheme, the Masked Autoencoder (MAE), for the ViT architecture. A training image has 75% of its patches masked. The MAE feeds the tokens of unmasked patches, as well as a \([\mathrm{CLS}]\) token, into the encoder and obtains representations for these tokens. The decoder then tries to reconstruct the original image by minimizing the mean square error (MSE). Only the encoder is used for downstream tasks after pretraining, and the \([\mathrm{CLS}]\) symbol is treated as the class token for linear probing and fine-tuning in the downstream tasks.
Pretraining Graphormer for molecule discovery.Graphormer (Ying et al., 2021) is a transformer-based model designed for graph representation learning tasks. It is used to predict the property of a graph rather than a node or edge. Specifically, Graphormer introduces a new symbol \([\mathrm{VNode}]\) as a node connecting to all original graph nodes. Then the vector learned for \([\mathrm{VNode}]\) represents the global information of the entire graph. The mechanism of \([\mathrm{VNode}]\) is similar to the \([\mathrm{CLS}]\) symbol in BERT and MAE.
In typical applications of transformers, the \([\mathrm{CLS}]\) symbol is not a natural data token. It summarizes other tokens to capture global information, which is especially useful in downstream tasks. For these reasons, we argue that it should be treated differently in normalization operations.
### Normalization Layers in Transformers
Given that transformers were initially proposed for NLP tasks, layer normalization (LN) (Ba et al., 2016) is typically the normalization method of choice (Xiong et al., 2020). LN normalizes across feature dimensions and is independent of the sequence length and the batch size. For any feature vector \(\mathbf{h}\in\mathbb{R}^{d}\), LN performs the following computation:
\[\mathrm{LN}(\mathbf{h})=\boldsymbol{\gamma}\odot\frac{\mathbf{h}-\mu}{ \sigma}+\boldsymbol{\beta},\quad\mu=\frac{1}{d}\sum_{i=1}^{d}h_{i},\quad\sigma =\sqrt{\frac{1}{d}\sum_{i=1}^{d}\big{(}h_{i}-\mu\big{)}^{2}}. \tag{2}\]
Here \(h_{i}\) is the \(i\)-th dimension of \(\mathbf{h}\), \(\odot\) represents element-wise multiplication, and \(\boldsymbol{\gamma},\boldsymbol{\beta}\in\mathbb{R}^{d}\) are scale and bias parameters, respectively. In a transformer, all tokens, including special tokens, such as \([\mathrm{CLS}]\) and \([\mathrm{SEP}]\), are all treated equally and share the same LNs.
Batch Normalization (BN) (Ioffe and Szegedy, 2015) works by normalizing the input data to have zero mean and unit variance along the batch dimension, followed by an affine transformation to scale the result using gamma and beta parameters. BN normalizes a given vector \(\mathbf{h}\) as:
\[\mathrm{BN}(\mathbf{h})=\boldsymbol{\gamma}\odot\frac{\mathbf{h}-\boldsymbol{ \mu}_{B}}{\boldsymbol{\sigma}_{B}}+\boldsymbol{\beta}. \tag{3}\]
Here \(\boldsymbol{\mu}_{B},\boldsymbol{\sigma}_{B}^{2}\in\mathbb{R}^{d}\) are the running statistics (mean and variance) maintained by the BN. The running mean and variance are updated during training after each batch. They are usually calculated as an exponential moving average of the batch mean and variance. BN is widely adopted in CV but leads to significant performance degradation when naively used in NLP.
### Uniformity of the Learned Representations
The dimensional collapse in self-supervised representation learning is a common phenomenon where the embedding vectors only span a lower-dimensional subspace (Jing et al., 2021) of the entire vector
space. This means that the model fails to capture data patterns with full power and instead collapses to a simpler representation. Contrastive methods (Oord et al., 2018; Chen et al., 2020b) have been one of the standard approaches to address this problem. Specifically, Wang and Isola (2020) propose the _uniformity_ metric (loss) to quantify the degree of dimensional collapse. Given a set of representation vectors \(\{\mathbf{h}_{1},\ldots,\mathbf{h}_{N}\}\) from a dataset of size \(N\), the uniformity metric \(\mathcal{L}_{\mathcal{U}}\) is computed as follows:
\[\mathcal{L}_{\mathcal{U}}=\log\frac{1}{N(N-1)/2}\sum_{n=1}^{N}\sum_{m=n+1}^{N}\exp\left(-2\left\|\frac{\mathbf{h}_{n}}{\|\mathbf{h}_{n}\|}-\frac{\mathbf{h}_{m}}{\|\mathbf{h}_{m}\|}\right\|^{2}\right). \tag{4}\]
If the distribution of the representation is perfectly uniform, then the numerical value of \(\mathcal{L}_{\mathcal{U}}\) will converge to -4 as the dimension of \(\mathbf{h}\) increases to infinity (Wang and Isola, 2020).
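A minimal sketch of computing the uniformity metric in Eq. (4) for a batch of embeddings is shown below.

```python
import torch
import torch.nn.functional as F

def uniformity_loss(h: torch.Tensor) -> torch.Tensor:
    """Uniformity metric of Eq. (4) for embeddings h of shape (N, d).

    Lower values (approaching -4 as d grows) indicate embeddings spread
    more uniformly on the unit hypersphere.
    """
    z = F.normalize(h, dim=-1)            # project embeddings onto the unit sphere
    sq_dist = torch.pdist(z, p=2).pow(2)  # all N(N-1)/2 pairwise squared distances
    return torch.log(torch.exp(-2.0 * sq_dist).mean())
```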
In self-supervised transformers, the uniformity of the representation is also taken into consideration by some works. For example, Gao et al. (2021) finetune the pretrained BERT model using the InfoNCE loss (Oord et al., 2018), and Zhang et al. (2022) jointly train the MAE loss along with uniformity loss.
## 3 Approach
### Separate Normalization
We present SepNorm, a normalization scheme that separately normalizes embeddings of the \([\mathrm{CLS}]\) symbol and embeddings of other tokens. In this work, we focus on the exploration of combinations of BN and LN for the two separate normalization channels.
For instance, if we apply BN to the \([\mathrm{CLS}]\) symbol and LN to other tokens, the learnable parameters are structured as \(g_{1}=(\mathbf{\gamma}_{1},\mathbf{\beta}_{1})\) and \(g_{2}=(\mathbf{\gamma}_{2},\mathbf{\beta}_{2})\). Let \(\mathbf{H}\in\mathbb{R}^{L\times d}\) represent the feature sequence, where \(L\) denotes the sequence length, and \(d\) is the feature dimension. Assume embedding \(\mathbf{H}_{0}\) in the first position corresponds to the \([\mathrm{CLS}]\) symbol. The normalization process is as follows:
\[\mathbf{H}^{\prime}=\big{(}\mathrm{BN}(\mathbf{H}_{0};g_{1}),\mathrm{LN}( \mathbf{H}_{1};g_{2}),\ldots,\mathrm{LN}(\mathbf{H}_{L};g_{2})\big{)}, \tag{5}\]
where \(\mathbf{H}^{\prime}\) denotes the normalized features. We can also run separate normalization with one of the three other combinations:
\[\mathbf{H}^{\prime} =\big{(}\mathrm{BN}(\mathbf{H}_{0};g_{1}),\mathrm{BN}(\mathbf{H }_{1};g_{2}),\ldots,\mathrm{BN}(\mathbf{H}_{L};g_{2})\big{)},\] \[\mathbf{H}^{\prime} =\big{(}\mathrm{LN}(\mathbf{H}_{0};g_{1}),\mathrm{BN}(\mathbf{H }_{1};g_{2}),\ldots,\mathrm{BN}(\mathbf{H}_{L};g_{2})\big{)},\] \[\mathbf{H}^{\prime} =\big{(}\mathrm{LN}(\mathbf{H}_{0};g_{1}),\mathrm{LN}(\mathbf{H }_{1};g_{2}),\ldots,\mathrm{LN}(\mathbf{H}_{L};g_{2})\big{)}.\]
Separate normalization allows the \([\mathrm{CLS}]\) features to be encoded distinctly from other tokens.
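A sketch of a SepNorm layer in PyTorch for the BN-on-\([\mathrm{CLS}]\) / LN-on-tokens combination of Eq. (5) is shown below; this is a minimal illustration of the idea under the assumption that position 0 of the feature sequence holds the \([\mathrm{CLS}]\) embedding, not the exact implementation used in the experiments.

```python
import torch
import torch.nn as nn

class SepNorm(nn.Module):
    """Separate normalization: BN for the [CLS] embedding, LN for other tokens."""

    def __init__(self, dim: int):
        super().__init__()
        self.cls_norm = nn.BatchNorm1d(dim)  # g1 = (gamma_1, beta_1) for [CLS]
        self.tok_norm = nn.LayerNorm(dim)    # g2 = (gamma_2, beta_2) for tokens

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # h: (batch, L + 1, dim); position 0 is assumed to hold [CLS].
        cls_out = self.cls_norm(h[:, 0])     # normalized across the batch dimension
        tok_out = self.tok_norm(h[:, 1:])    # normalized across the feature dimension
        return torch.cat([cls_out.unsqueeze(1), tok_out], dim=1)
```

The other three combinations in the equations above would follow by swapping `nn.BatchNorm1d` and `nn.LayerNorm` in the two branches.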
As a comparison, the \([\mathrm{CLS}]\) token's embedding and other tokens' embeddings interfere with each other in a shared normalization structure. With ShareNorm, the update directions of the LN parameters \(\{\mathbf{\gamma},\mathbf{\beta}\}\) are primarily driven by the embeddings of normal tokens. Below is the gradient calculation for these parameters,
\[\frac{\delta\mathcal{L}}{\delta\gamma_{i}}=\sum_{l=1}^{L}\frac{ \delta\mathcal{L}}{\delta\tilde{\mathbf{H}}_{l,i}}\tilde{\mathbf{H}}_{l,i}, \quad\frac{\delta\mathcal{L}}{\delta\beta_{i}}=\sum_{l=1}^{L}\frac{\delta \mathcal{L}}{\delta\tilde{\mathbf{H}}_{l,i}}, \tag{6}\] \[\text{where }\tilde{\mathbf{H}}_{l,i}=\frac{\mathbf{H}_{l,i}- \mu_{l}}{\sigma_{l}},\mu_{l}=\frac{1}{d}\sum_{i=1}^{d}\mathbf{H}_{l,i},\sigma _{l}=\sqrt{\frac{1}{d}\sum_{i=1}^{d}\big{(}\mathbf{H}_{l,i}-\mu_{l}\big{)}^{2}}. \tag{7}\]
We see the summation in the gradient calculation is dominated by normal tokens given that the number of normal tokens is typically a large number. Given the potentially diverse characteristics (i.e., mean and scale) of feature distributions, it might be challenging for normalization parameters to accommodate both token types simultaneously. Moreover, mapping two types of token features into the same sphere may also mix the signal of \([\mathrm{CLS}]\) tokens with other tokens. Figure 2(a, b) demonstrates this phenomenon in the scenario where both token types utilize a ShareNorm and how using SepNorm mitigates this effect.
### Encourage the Uniformity of the \([\mathrm{CLS}]\) Embeddings via a Contrastive Term
We further relate SepNorm to the uniformity of embeddings. Higher uniformity indicates that the embeddings better exploit the space to store information. Contrastive methods often employ negative instances to encourage uniformity. In particular, we incorporate SepNorm into transformers trained with U-MAE (Zhang et al., 2022), which uses a contrastive term to promote uniformity of features.
The U-MAE explicitly adds a uniformity loss term \(\mathcal{L}_{\mathrm{unif}}\) to the training objective to encourage uniformity of \([\mathrm{CLS}]\) embeddings.
\[\mathcal{L}_{\mathrm{U-MAE}}=\mathcal{L}_{\mathrm{MAE}}+\lambda\mathcal{L}_{ \mathrm{unif}},\;\;\text{with}\;\mathcal{L}_{\mathrm{unif}}=\mathbb{E}_{i} \left[\mathbb{E}_{j}\left[\mathbf{h}_{\mathrm{CLS},i}^{\top}\,\mathbf{h}_{ \mathrm{CLS},j}\right]\right] \tag{8}\]
Here \(\mathcal{L}_{\mathrm{MAE}}\) is the MAE training objective. The two indices \(i\) and \(j\) represent two sequences within the same batch. \([\mathrm{CLS}]\) embeddings \(\mathbf{h}_{\mathrm{CLS},i}\) and \(\mathbf{h}_{\mathrm{CLS},j}\), which are respectively for the two sequences, are obtained from our SepNorm during the transformer calculation. By minimizing \(\mathcal{L}_{\mathrm{unif}}\), \([\mathrm{CLS}]\) features tend to be different from each other.
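A sketch of the uniformity term in Eq. (8) over a batch of \([\mathrm{CLS}]\) embeddings is shown below; excluding the trivial \(i=j\) terms is an assumption of this sketch, and in U-MAE the term is added to the reconstruction loss with weight \(\lambda\).

```python
import torch

def umae_uniformity_term(h_cls: torch.Tensor) -> torch.Tensor:
    """L_unif of Eq. (8): mean pairwise inner product of [CLS] embeddings.

    h_cls has shape (B, d), one [CLS] embedding per sequence in the batch.
    Minimising this term pushes [CLS] embeddings away from each other.
    """
    gram = h_cls @ h_cls.t()                        # (B, B) matrix of inner products
    off_diag = gram - torch.diag(torch.diag(gram))  # drop the i == j terms
    b = h_cls.size(0)
    return off_diag.sum() / (b * (b - 1))

# Usage sketch: loss = mae_loss + lam * umae_uniformity_term(h_cls)
```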
## 4 Experiments
We examine the effectiveness of the proposed SepNorm component in three domains: CV, NLP, and graphs. We then further investigate how the ShareNorm and SepNorm affect the uniformity of the \([\mathrm{CLS}]\) embeddings.
### Computer Vision
Datasets.We investigate the model performance on the four image datasets: STL10 (Coates et al., 2011), FGVC Aircraft (Maji et al., 2013), Street View House Numbers (SVHN) (Netzer et al., 2011), and Oxford 102 Flowers (Nilsback and Zisserman, 2008). All four datasets are for classification tasks. We follow the train/test split provided in the papers introducing the datasets. We report top-1 and top-5 accuracy for all datasets.
Vision transformers (ViT) and MAE. We choose the Vision Transformer (ViT) (Dosovitskiy et al., 2020) as our feature extractor for all datasets. To pretrain the ViT, we adopt the MAE training scheme (He et al., 2022). Following MAE, we use a 75% masking ratio on the input images. For the downstream tasks, we use the embedding of the \([\mathrm{CLS}]\) token to predict the class labels.
Experiment setup.We follow the setup in He et al. (2022) to pretrain and evaluate the ViT. For pretraining, we train the ViT for 4000 epochs. For linear probing, we freeze the encoder's weights and train the last layer on the specific datasets for 2000 epochs. We use a batch size of 512 for pretraining and a batch size of 128 for linear probing.
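As a concrete illustration of the linear-probing protocol, the sketch below freezes the pretrained encoder and trains only a linear head on the \([\mathrm{CLS}]\) embedding; the optimizer and learning rate shown are placeholders, not the values used in our experiments.

```python
import torch
import torch.nn as nn

def linear_probe(encoder: nn.Module, feat_dim: int, num_classes: int):
    """Freeze the pretrained encoder and train only a linear classification head."""
    for p in encoder.parameters():
        p.requires_grad = False
    head = nn.Linear(feat_dim, num_classes)
    optimizer = torch.optim.AdamW(head.parameters(), lr=1e-3)  # placeholder settings
    criterion = nn.CrossEntropyLoss()

    def step(images: torch.Tensor, labels: torch.Tensor) -> float:
        with torch.no_grad():
            cls = encoder(images)            # assumed to return (batch, feat_dim)
        loss = criterion(head(cls), labels)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()

    return head, step
```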
Figure 2: The effect of SepNorm on feature distributions. Each subplot shows the distributions of the first 50 feature dimensions: \([\mathrm{CLS}]\) features are in blue, and other tokens’ features are in red. Under ShareNorm, the \([\mathrm{CLS}]\) features are more concentrated around their mean and the mean deviates more from the zero center, while under SepNorm the \([\mathrm{CLS}]\) distribution is more centered and flattened.
Experiment results.The results presented in Table 1 demonstrate the performances of our model and the baseline model. Our model consistently outperforms the baseline across multiple datasets, indicating its effectiveness in image classification tasks. In the STL-10 dataset, our approach achieves the top-1 accuracy of 93.84% and the top-5 accuracy of 99.7%, higher than the baseline's respective accuracies of 92.01% and 99.5%. Similar improvements are observed in the Aircraft, SVHN, and Flower datasets, where our model consistently outperforms the baseline in both top-1 and top-5 accuracies. These results demonstrate the effectiveness of SepNorm in enhancing image classification performance. We also visualize the embeddings of ShareNorm and SepNorm using t-SNE in Figure 3. Compared with ShareNorm, SepNorm provides embeddings that have better separation among different classes.
### Natural Language Processing
Datasets.We evaluate our approach on seven semantic textual similarity (STS) tasks: STS 2012–2016 (Agirre et al., 2012, 2013, 2014, 2015, 2016), the STS Benchmark (Cer et al., 2017), and SICK-Relatedness (Marelli et al., 2014). We also evaluate our method on multiple transfer tasks, including MR (Pang and Lee, 2005), CR (Hu and Liu, 2004), SUBJ (Pang and Lee, 2004), MPQA (Wiebe et al., 2005), SST-2 (Socher et al., 2013), TREC (Voorhees and Tice, 2000), and MRPC (Dolan and Brockett, 2005). Following the evaluation settings of SimCSE (Gao et al., 2021), we use Spearman's correlation coefficient as the evaluation metric.
BERT and RoBERTa.We conduct our study with pretrained checkpoints of BERT (uncased) (Devlin et al., 2018) and RoBERTa (cased) (Liu et al., 2019), instead of training them from scratch. Using pretrained models is common in this research field (Gao et al., 2021) because the findings are compatible with the common practice of finetuning pretrained models in actual learning tasks. This strategy also saves significant training time and computational resources, allowing us to extend the study to more learning tasks.
Experiment setup.We follow the experiment setup in Gao et al. (2021) and further finetune the BERT and RoBERTa models on English Wikipedia. We evaluate the models using established STS tasks and employ standard evaluation metrics such as Spearman's correlation.
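For reference, the sketch below illustrates the standard STS evaluation: each sentence pair is scored by the cosine similarity of its embeddings and compared against the gold scores with Spearman's correlation; the function and variable names are illustrative.

```python
import numpy as np
from scipy.stats import spearmanr

def sts_spearman(emb_a: np.ndarray, emb_b: np.ndarray, gold: np.ndarray) -> float:
    """Spearman correlation between cosine similarities and gold STS scores."""
    a = emb_a / np.linalg.norm(emb_a, axis=1, keepdims=True)
    b = emb_b / np.linalg.norm(emb_b, axis=1, keepdims=True)
    cosine = (a * b).sum(axis=1)         # one similarity per sentence pair
    return spearmanr(cosine, gold).correlation
```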
\begin{table}
\begin{tabular}{l c c c c c c c c} \hline \hline & \multicolumn{2}{c}{STL10} & \multicolumn{2}{c}{Aircraft} & \multicolumn{2}{c}{SVHN} & \multicolumn{2}{c}{Flower} \\ & ACC@1 & ACC@5 & ACC@1 & ACC@5 & ACC@1 & ACC@5 & ACC@1 & ACC@5 \\ \hline MAE & 92.01 & 99.5 & 52.54 & 84.16 & 88.97 & 99.13 & 27.63 & 53.73 \\ + SepNorm & **93.84** & **99.7** & **59.02** & **86.65** & **89.18** & **99.21** & **32.51** & **60.92** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Comparison of linear probing performance of ShareNorm and SepNorm across 4 image classification datasets when the ViT\({}_{\text{base}}\) is pretrained with MAE.
Figure 3: t-SNE visualization of representations learned from the STL-10 dataset.
Experiment results.The experiment results presented in Table 2 compare our model with the SimCSE baseline on NLP tasks. With the SepNorm layer, BERT\({}_{\text{base}}\) and RoBERTa\({}_{\text{base}}\) achieve higher average scores than with ShareNorm. Only on the transfer learning tasks does SepNorm perform slightly worse than ShareNorm with BERT\({}_{\text{base}}\), and the difference is marginal.
### Prediction of Molecule Properties
Datasets.We conduct experiments on the ZINC dataset [11], which contains approximately 250,000 molecular graphs; the task is to predict the properties of molecules from their graphs. We use a subset of 12,000 molecular graphs, as recommended by the benchmarking methodology outlined in [13], so that our results are comparable with other studies. Despite being smaller, the subset retains sufficient diversity and complexity for effective evaluation. We also use the MolHIV dataset from the OGB collection [10], which is widely used for training and evaluating graph-based models in molecular property prediction tasks.
Graphormer.We use Graphormer [23] as the transformer backbone to construct the predicting model. To obtain graph-level information, Graphormer adds a special node [VNode] to the graph and connects it to all normal graph nodes. The embedding of [VNode] is a summary of the entire graph and will be used in downstream classification tasks. The special node [VNode] serves the same purpose as the \([\mathrm{CLS}]\) token in traditional Transformer models. Graphormer has used three encodings to enhance the transformer's learning ability: centrality encoding captures node importance, spatial encoding considers spatial relations, and edge encoding incorporates edge features.
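The [VNode] mechanism can be illustrated with a small sketch that appends a summary node connected to every graph node before the transformer processes the graph; the feature and adjacency handling below is a simplified assumption rather than Graphormer's actual implementation.

```python
import torch

def add_virtual_node(node_feat: torch.Tensor, adj: torch.Tensor, vnode_feat: torch.Tensor):
    """Append a [VNode]-style summary node connected to every existing node.

    node_feat: (n, dim), adj: (n, n) adjacency, vnode_feat: (dim,). The virtual
    node is stored at index 0 so its final embedding can be read off for
    graph-level prediction, much like a [CLS] token.
    """
    n = node_feat.size(0)
    feat = torch.cat([vnode_feat.unsqueeze(0), node_feat], dim=0)
    new_adj = torch.zeros(n + 1, n + 1, dtype=adj.dtype, device=adj.device)
    new_adj[1:, 1:] = adj
    new_adj[0, 1:] = 1   # the virtual node connects to all normal nodes
    new_adj[1:, 0] = 1
    return feat, new_adj
```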
Experiment setup.We strictly follow Graphormer [23] in terms of the model architecture, hyperparameters, and training strategies. We replaced the ShareNorm in Graphormer with SepNorm to investigate the effectiveness of the proposed component. We evaluate the pretrained model on a broad class of graph-level prediction tasks. We report the mean absolute error for the ZINC and ZINC (subset) datasets and the area under the curve (AUC) for the MolHIV dataset.
\begin{table}
\begin{tabular}{l l c c c c c c c} \hline \hline & & STS12 & STS13 & STS14 & STS15 & STS16 & STS-B & SICK-R & Avg. \\ \hline \multicolumn{8}{c}{Unsupervised Training} \\ \hline BERT\({}_{\text{base}}\) & ShareNorm & 65.28 & 78.82 & 69.65 & 79.02 & 77.21 & 76.4 & **71.74** & 74.04 \\ & SepNorm & **67.01** & **82.16** & **72.48** & **81.38** & **79.11** & **77.56** & 71.36 & **75.87** \\ \hline RoBERT\({}_{\text{base}}\) & ShareNorm & **68.25** & 81.24 & 72.78 & 81.38 & **80.31** & 79.83 & 68.16 & 76.00 \\ & SepNorm & 66.63 & **82.40** & **74.47** & **82.39** & **80.44** & **81.14** & **69.44** & **76.70** \\ \hline \multicolumn{8}{c}{Supervised Training} \\ \hline BERT\({}_{\text{base}}\) & ShareNorm & **77.72** & 81.07 & **78.97** & **85.15** & **82.00** & 82.36 & **79.74** & 81.00 \\ & SepNorm & 75.32 & **84.41** & **79.94** & 84.91 & 80.87 & **83.63** & **79.61** & **81.23** \\ \hline RoBERT\({}_{\text{base}}\) & ShareNorm & **77.38** & 80.87 & 78.72 & 84.02 & **82.56** & 83.08 & 78.25 & 80.70 \\ & SepNorm & 75.80 & **84.94** & **80.33** & **85.51** & 82.11 & **84.88** & **79.72** & **81.90** \\ \hline \multicolumn{8}{c}{Transfer Learning} \\ \hline BERT\({}_{\text{base}}\) & ShareNorm & **82.78** & 88.79 & **94.69** & **89.86** & **87.94** & **84.44** & **75.99** & **86.36** \\ & SepNorm & **82.82** & **89.08** & 94.30 & 89.70 & **87.97** & 83.88 & 75.21 & 86.14 \\ \hline RoBERT\({}_{\text{base}}\) & ShareNorm & 84.45 & **91.50** & 93.94 & **89.45** & 90.96 & 86.80 & **76.13** & 87.61 \\ & SepNorm & **85.11** & **91.56** & **94.30** & **89.43** & **91.66** & **90.96** & 75.58 & **88.37** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Sentence embedding performance on STS tasks and transfer tasks.
\begin{table}
\begin{tabular}{l c c c} \hline \hline Dataset & ZINC & ZINC (subset) & MolHIV \\ \hline Metrics & Mean absolute error\(\downarrow\) & AUC\(\uparrow\) \\ \hline Graphormer & 0.069 & 0.164 & 73.36\% \\ + SepNorm & **0.052** & **0.144** & **75.64\%** \\ \hline \hline \end{tabular}
\end{table}
Table 3: A comparison of ShareNorm and SepNorm in three tasks of graph property prediction.
Experiment results.Table 3 shows the performances of our model and the Graphormer baseline. For the ZINC datasets, Graphormer with SepNorm achieves a significantly lower mean absolute error compared to that with ShareNorm. On the MolHIV dataset, SepNorm also improves the AUC to 75.64%, compared with ShareNorm's AUC of 73.36%. These results are strong evidence that the embeddings of the [VNode] can better summarize the properties of the entire graph and thus give superior performance on downstream tasks.
### Uniformity Analysis
In this section, we investigate how ShareNorm and SepNorm affect the uniformity of the learned embeddings, and in turn the downstream classification performance, under both non-contrastive and contrastive training methods.
Experiment setup.We pretrain MAE on the STL10 dataset via four different losses:
* MAE loss \(\mathcal{L}_{\mathrm{MAE}}\) without any \(\mathcal{L}_{\mathrm{unif}}\) on \([\mathrm{CLS}]\) and token embeddings. This setting is a study with MAE training only.
* MAE loss \(\mathcal{L}_{\mathrm{MAE}}\) with \(\mathcal{L}_{\mathrm{unif}}\) on the \([\mathrm{CLS}]\) embeddings. We treat all \([\mathrm{CLS}]\) embeddings within the same batch (except itself) as negative instances.
* MAE loss \(\mathcal{L}_{\mathrm{MAE}}\) with \(\mathcal{L}_{\mathrm{unif}}\) on the token embeddings. We treat all token embeddings within the same batch or same images (except itself) as negative instances.
* MAE loss \(\mathcal{L}_{\mathrm{MAE}}\) with \(\mathcal{L}_{\mathrm{unif}}\) on both \([\mathrm{CLS}]\) and token embeddings.
We choose \(\lambda=\{0,0.01,0.1,1\}\). Note that the second loss with \(\lambda=0.1\) corresponds to the U-MAE [Zhang et al., 2022]. We also replace the normalization layer of the ViT in MAE with one of the following: [LN, BN, BN+LN, BN+BN]. The combination of different losses, different \(\lambda\)'s, and different normalization layers yields 40 specifications of the experiments.
We first report our results with MAE training only. The uniformity of the learned embeddings is first assessed through the singular values of an embedding matrix: we randomly choose 10k embeddings to form the matrix and decompose it, separately for \([\mathrm{CLS}]\) embeddings and normal token embeddings. Figure 4 shows the results, which indicate that \([\mathrm{CLS}]\) features learned with SepNorm exhibit better representational power and can thus better encode global information.
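A sketch of this singular-value analysis is shown below, assuming the embeddings are available as a single tensor; centering the matrix before the decomposition is an assumption on our part.

```python
import torch

def singular_spectrum(embeddings: torch.Tensor, sample: int = 10_000) -> torch.Tensor:
    """Singular values of a randomly sampled embedding matrix, in descending order."""
    idx = torch.randperm(embeddings.size(0))[:sample]   # up to 10k embeddings
    mat = embeddings[idx]
    mat = mat - mat.mean(dim=0, keepdim=True)           # assumption: center first
    return torch.linalg.svdvals(mat)

# e.g. compare singular_spectrum(cls_embeddings) with singular_spectrum(token_embeddings)
```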
Then the uniformity is measured by the score in Eqn. 4. Table 6(a) shows the numerical value of the uniformity on the STL10 and Aircraft datasets [Coates et al., 2011, Maji et al., 2013]. Compared to ShareNorm, **SepNorm significantly enhances the uniformity of \([\mathrm{CLS}]\) embeddings**. Interestingly, the uniformity of normal tokens' embeddings remains comparable. We also empirically verify that better uniformity on the \([\mathrm{CLS}]\) embeddings results in better performance on the downstream task (Figure 6(b)). Another observation is that the uniformity of \([\mathrm{CLS}]\) embeddings is clearly improved when they are normalized by BN instead of LN. Our hypothesis is that BN tries to make each feature dimension useful by controlling its variance while LN may still neglect some feature dimensions.
Figure 4: **(a)** Reconstruction loss of the MAE pretraining – MAE with SepNorm achieves a lower MSE loss than ShareNorm, demonstrating a better ability to encode global contextual information. **(b) & (c)** Comparison of the singular values of learned (\([\mathrm{CLS}]\) and normal token) features with ShareNorm and different configurations of SepNorm. \([\mathrm{CLS}]\) embeddings learned with SepNorm have larger singular values, which suggests that the vectors are better used to encode information.
We then report results from studies with U-MAE training. Figure 5 shows the uniformity metrics obtained using different \(\lambda\)'s. When using ShareNorm, the uniformity of the \([\mathrm{CLS}]\) embeddings is no better than -3.088, and even the explicit uniformity loss does not help much. On the contrary, embeddings learned from the proposed SepNorm can easily achieve better uniformity scores. The study with the contrastive approach further verifies the advantage of SepNorm in terms of encouraging uniformity of \([\mathrm{CLS}]\) embeddings.
The results provide strong evidence that **the uniformity of the \([\mathrm{CLS}]\) embeddings is held down by ShareNorm, and even minimizing an explicit contrastive loss cannot increase it**. We hypothesize that all features after LN lie on the same sphere, and the \([\mathrm{CLS}]\) embeddings are squeezed into a small area of its surface because they need to remain separable from the embeddings of normal tokens.
Table 4 reports the downstream performance (accuracy) on STL10 across 40 different settings. We summarize our observations: (1) In the non-contrastive method MAE, with a proper configuration, the performance of SepNorm is superior to that of ShareNorm. (2) In contrastive methods (\(\lambda\neq 0\)), SepNorm's advantages are further highlighted. For example, when \(\lambda=1\), the performance of SepNorm (BN+LN) is improved by 1.6% compared to the non-contrastive method. The performance gain of SepNorm (BN+BN) is less obvious, as the double BNs already impose an implicit uniformity loss on both \([\mathrm{CLS}]\) and token embeddings.
In contrast to SepNorm, the performance of ShareNorm is less satisfactory when using contrastive methods. We believe it is very challenging to encourage the two types of embeddings to be uniformly distributed in the same sphere while keeping them separable at the same time. (3) The uniformity of the token embeddings is also vital for learning a good \([\mathrm{CLS}]\) representation, as evidenced by SepNorm (BN+LN) gaining accuracy with increasing \(\lambda\) on the token embeddings. We hypothesize that by enforcing uniformity, the token embeddings are forced to contain less information about other tokens, which encourages the \([\mathrm{CLS}]\) embedding to encode as much information as possible. Our empirical study also shows that, when a contrastive loss [1] is used to encourage the uniformity of \([\mathrm{CLS}]\) features in self-supervised transformers, the difference between BN and LN on \([\mathrm{CLS}]\) features is no longer significant.
## 5 Related Works
The training of transformer architectures with self-supervised learning has seen significant advancements in both contrastive and non-contrastive training. Among self-supervised learning methods,
non-contrastive ones do not rely on negative samples for learning. They have emerged as a powerful approach for training transformer models and demonstrated remarkable successes in various tasks. BERT (Devlin et al., 2018) and RoBERTa (Liu et al., 2019) were proposed in the NLP domain. Additionally, some works focus on specific tasks, such as speech recognition (Wang et al., 2020), image generation (Chen et al., 2020), and heterogeneous graph generation (Hu et al., 2020).
Contrastive methods, by contrast, train networks using positive and negative samples that are constructed without manual labeling. They have also been used to train transformer-based architectures. Gao et al. (2021) and Zhang et al. (2022) make significant strides in natural language processing tasks, while Chen et al. (2021) provide valuable insights into the pre-training of transformers. Meanwhile, the potential of contrastive methods in vision transformers has been demonstrated by Caron et al. (2021) and Radford et al. (2021). These collective efforts underscore the versatility and efficacy of contrastive methods in self-supervised learning of transformers.
Normalization layers, including layer normalization and batch normalization, are essential to transformer architectures because they help stabilize the training procedure and accelerate convergence. Xiong et al. (2020) delve into the role of layer normalization in the transformer architecture and provide insights about how the layer improves training stability and the performance of transformers. Similarly, Xu et al. (2019) explore the intricacies of layer normalization and offer potential enhancements to its effectiveness. To address the limitations of traditional batch normalization in a transformer architecture, Shen et al. (2020) introduce a new normalization layer, PowerNorm, which is a variant of batch normalization. Nguyen and Salazar (2019) focus on the normalization process in the self-attention mechanism of transformers and propose methods to optimize the normalization of self-attention. All the efforts above underscore the critical role of normalization layers in transformer models.
## 6 Conclusion
In this work, we have introduced SepNorm to separate the normalization of \([\mathrm{CLS}]\) embeddings from that of other tokens. Across three application domains (images, text, and graphs), SepNorm shows consistent performance improvement when it is incorporated into transformer models. Our analysis shows that SepNorm promotes uniformity of \([\mathrm{CLS}]\) embeddings and thus enhances the transformers' ability to encode information. As a valuable technique for improving the foundational transformer architecture, SepNorm has the potential to benefit a wide range of applications.
## Acknowledgement
We thank all reviewers for their constructive feedback. This research is supported by the NIGMS of the National Institutes of Health, Awards R35GM148219, the Army Research Office, MURI program, contract # W911NF2210239, and NSF Award 1909536. Chen and Liu are also supported by the NSF CAREER Award # 2239869.
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline Normalization layer & \(\lambda=0\) & Negative pairs & \(\lambda=0.01\) & \(\lambda=0.1\) & \(\lambda=1\) & Best \\ \hline \multirow{3}{*}{SepNorm (BN+BN)} & \multirow{3}{*}{93.84} & token & 93.65 & 94.15 & 93.94 & 94.15 \\ & & \([\mathrm{CLS}]\) & 93.73 & 93.85 & 93.93 & 93.93 \\ & & \([\mathrm{CLS}]\) + token & 93.40 & 94.25 & 94.28 & 94.28 \\ \hline \multirow{3}{*}{SepNorm (BN+LN)} & \multirow{3}{*}{92.80} & token & 92.98 & 93.60 & 94.30 & 94.30 \\ & & \([\mathrm{CLS}]\) & 92.98 & 93.48 & 93.36 & 93.48 \\ & & \([\mathrm{CLS}]\) + token & 92.74 & 93.18 & 94.40 & **94.40** \\ \hline \multirow{3}{*}{ShareNorm (BN)} & \multirow{3}{*}{92.84} & token & 92.48 & 93.38 & 92.78 & 93.38 \\ & & \([\mathrm{CLS}]\) & 93.10 & 93.33 & 92.93 & 93.33 \\ & & \([\mathrm{CLS}]\) + token & 93.41 & 93.46 & 92.99 & 93.46 \\ \hline \multirow{3}{*}{ShareNorm (LN)} & \multirow{3}{*}{92.01} & token & 92.61 & 92.74 & 92.14 & 92.74 \\ & & \([\mathrm{CLS}]\) & 92.28 & 92.75 & 92.36 & 92.75 \\ \cline{1-1} & & \([\mathrm{CLS}]\) + token & 92.74 & 92.38 & 92.74 & 92.74 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Ablation study of the effect of \([\mathrm{CLS}]\) and token uniformity on the downstream tasks with \(\lambda\) varied. We report downstream task accuracy for the STL10 dataset. |
2309.14398 | Seeing and hearing what has not been said; A multimodal client behavior
classifier in Motivational Interviewing with interpretable fusion | Motivational Interviewing (MI) is an approach to therapy that emphasizes
collaboration and encourages behavioral change. To evaluate the quality of an
MI conversation, client utterances can be classified using the MISC code as
either change talk, sustain talk, or follow/neutral talk. The proportion of
change talk in a MI conversation is positively correlated with therapy
outcomes, making accurate classification of client utterances essential. In
this paper, we present a classifier that accurately distinguishes between the
three MISC classes (change talk, sustain talk, and follow/neutral talk)
leveraging multimodal features such as text, prosody, facial expressivity, and
body expressivity. To train our model, we perform annotations on the publicly
available AnnoMI dataset to collect multimodal information, including text,
audio, facial expressivity, and body expressivity. Furthermore, we identify the
most important modalities in the decision-making process, providing valuable
insights into the interplay of different modalities during a MI conversation. | Lucie Galland, Catherine Pelachaud, Florian Pecune | 2023-09-25T16:00:06Z | http://arxiv.org/abs/2309.14398v2 | # Seeing and hearing what has not been said
###### Abstract.
Motivational Interviewing (MI) is an approach to therapy that emphasizes collaboration and encourages behavioral change. To evaluate the quality of an MI conversation, client utterances can be classified using the Misc code as either change talk, sustain talk, or follow/neutral talk. The proportion of change talk in a MI conversation is positively correlated with therapy outcomes, making accurate classification of client utterances essential.
In this paper, we present a classifier that accurately distinguishes between the three Misc classes (change talk, sustain talk, and follow/neutral talk) leveraging multimodal features such as text, prosody, facial expressivity, and body expressivity. To train our model, we perform annotations on the publicly available AnnoMI dataset to collect multimodal information, including text, audio, facial expressivity, and body expressivity. Furthermore, we identify the most important modalities in the decision-making process, providing valuable insights into the interplay of different modalities during a MI conversation.
change talk, multimodality, interpretable
Client and therapist utterances can be annotated with the Motivational Interviewing Skill Code (MISC) (Kalal et al., 2017), which classifies both therapist and client behaviors into three relevant categories:
* **Change talk (CT)**: reflecting actions toward behavior change
* **Sustain talk (ST)**: reflecting actions away from behavior change
* **Follow/Neutral (F/N)**: unrelated to the target behavior
This classification of client language is of interest as it is a predictor of the therapy outcome. Indeed, (Kal et al., 2017) revealed that sustain-talk was associated with poorer treatment results. Furthermore, (Kal et al., 2017) showed that change talk was linked to reductions in risk behavior during follow-up assessments. This correlation makes MISC a promising tool for studying the efficacy of Motivational Interviewing (MI).
The labeling of client utterances is usually done by training coders to manually encode utterances into these three categories. However, this annotation process can be resource-intensive, as it requires trained annotators to carefully review videos. Furthermore, it cannot be done in real time and therefore cannot be used, for instance, in the context of a human-agent dialogue. As a result, there has been growing interest in developing automatic annotation methods for MISC using various modalities and approaches. These efforts aim to streamline the annotation process and reduce the time and resources required for the analysis.
In this paper, we continue these efforts by presenting a classifier that can distinguish automatically between the three MISC classes. This classifier is based on multimodal features of face-to-face conversations, including (spoken) text, prosody, facial expressivity, and body expressivity. Our classifier is designed to be interpretable, meaning that it is possible to identify the modality that was most important in its decision-making process.
In the remainder of the paper, we first present the data we used to train our MISC classifier, then we present our modality-attentive fusion architecture. We explore the performance of different models and compare our results with existing work. Finally, we present a way to interpret the classification results and shed light on the contribution of each modality in the classification process.
## 2. Related Work
The correlation between MISC codes and therapy outcomes has motivated several studies to develop their own classification systems for client language, categorizing it as change talk, sustain talk, or follow/neutral. These studies use various modalities as inputs.
Text-based modalities have been widely investigated for MISC annotation at different temporal levels. For example, (Kal et al., 2017) used topic modeling to predict therapy outcomes at the session level, while (Kal et al., 2017) incorporated topic angles and session timing (beginning or end) to predict MISC codes at the utterance level. In their work, an utterance represents a turn by either the client or the therapist. More recent advances have been made using deep learning-based approaches, such as those presented in (Kal et al., 2017), which leveraged word-level features, and in (Kal et al., 2017), which incorporated additional utterance-level features like Linguistic Inquiry and Word Count (LIWC) for improved annotation accuracy. In the latter work, utterances were segmented after a pause of at least two seconds. While these advancements highlight the ongoing exploration of various feature sets and modalities for the automatic annotation of MISC codes, they also reveal the variety of choices regarding the coding level and the definition of an utterance.
Text is not the only modality that can convey the nuances of change talk. Several studies have incorporated prosody or acoustic features to improve MISC classification. For instance, (Beng et al., 2016) combined acoustic features with linguistic features to slightly improve the accuracy of change talk detection. Deep learning methods such as Long Short-Term Memory (LSTM) networks (Lynch et al., 2017) have also been employed to predict change talk using both text and audio modalities; in this work, the addition of the audio modality improves the prediction score. More recently, such classification was performed using Transformers (Kal et al., 2017), where the use of audio causes a loss in performance that can be explained by the low quality of the recordings.
In addition to acoustic cues, other social signals such as laughter have been explored. (Kal et al., 2017) demonstrated that adding laughter as input improved the accuracy of change talk prediction compared to text alone. Furthermore, non-verbal cues such as facial Action Units have been used as predictors of change talk, as shown in (Kal et al., 2017), where they improved the prediction.
While the text remains a commonly studied modality, incorporating prosody, non-verbal, and other multimodal information alongside text has shown promising potential for improving the accuracy and robustness of MISC annotation and prediction tasks.
Although using different modalities can improve classifier performance, one limitation of the above works is that they rely on at most two modalities at a time. Furthermore, understanding the contribution of each modality to the decision process remains a challenge. Only (Lynch et al., 2017) addressed this by examining the attention weights of the fusion layer, revealing that prosodic information has more influence at the end of utterances.
To overcome these limitations, the main contributions of our work include:
* Developing a MISC classifier using 3 different modalities: text, prosody, and nonverbal behavior
* Developing a classifier that identifies the specific modalities that played a key role in the decision-making process. This feature enables practitioners to determine why the classifier made a particular decision.
## 3. Data
Motivational interviewing data that could be used to train a MISC classifier are difficult to find due to the sensitive nature of the discussed topics. Most of the existing corpora are either private for medical reasons (Beng et al., 2016; Bong et al., 2016) or privately owned and available only for a fee. Because of this, most studies need to collect a new dataset first, and models cannot be compared. For instance, (Kal et al., 2017) collected their own non-public corpus over Zoom and developed a classifier on the resulting corpus. However, two corpora of MI conversations have recently been published and are publicly available. The High Low-quality MI dataset (Kal et al., 2017) is composed of 249 MI videos available on YouTube. Some errors remain in the automatic transcription of the videos, and even though MISC annotations have been performed, they are not currently available. The second public corpus is AnnoMI (Kal et al., 2017), a corpus of MI conversations transcribed and annotated with MISC, with publicly available annotations. These datasets do not provide multimodal annotations.
### AnnoMI corpus
In our work, we rely on the AnnoMI dataset (Steiner et al., 2017) to train our MISC classifier. AnnoMI is a publicly available dataset of MI videos, 7 minutes long on average, that have been annotated by 133 experts. The videos are designed as demonstrations of either high- or low-quality therapy. Each video is transcribed, and each utterance is annotated in terms of primary therapist behavior (question, reflection, therapist input, and others) and client talk type (neutral, change, sustain) using MISC. In this work, we are interested in the client side of MISC. A client utterance can be annotated into three categories: Change Talk (CT), Sustain Talk (ST), or Follow/Neutral (F/N). An utterance classified as CT conveys movement towards the behavior change, while ST conveys movement away from it. An F/N utterance does not indicate a preference towards or against change. The data are annotated by MI practitioners into these three classes with an inter-annotator agreement of 0.9.
From this corpus we use 121 videos: 3 videos were removed because of outdated URLs and 9 were removed for the poor quality of the video stream. The original transcriptions of the AnnoMI dataset are separated into utterances where a new utterance starts every time a new interlocutor is speaking, only the timestamp of the start of each of these utterances is provided.
### Dataset preprocessing
In this paper, we take advantage of the publicly available videos of AnnoMI to train a classifier that predicts client's MISC category relying on multimodal behavior. Multimodality gives valuable insights for various tasks such as sentiment analysis (Steiner et al., 2017). Moreover (Steiner et al., 2017) shows that visual cues such as facial Action Unit occurrences, head pose, eye gaze, and body gestures can be a sign of depression. Therefore in this paper, we study multiple modalities such as (spoken) text, audio (prosody), and facial and body expressivity.
_Text._ In the original AnnoMI transcriptions, sentences were cut into two utterances whenever a listener's backchannel occurred during their production. However, backchannels are not intended to take the speaking turn. In our model, backchannels are removed from the original transcript, and utterances are reorganized to recreate sentences corresponding to speaking turns. We updated the MISC coding whenever utterances of the same sentence received different labels in the original AnnoMI annotation. The only conflicts involved utterances annotated as neutral and change, or as neutral and sustain; the resulting sentence is coded as change, respectively sustain. There were no change/sustain conflicts. We illustrate these changes in Fig. 2, and a simplified sketch of the procedure is given below.
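A simplified sketch of this reorganization, assuming each utterance is represented as a dictionary with illustrative speaker, text, label, and backchannel fields:

```python
def merge_turns(utterances):
    """Rebuild speaking turns from interrupted utterances (simplified sketch).

    `utterances` is a list of dicts with keys "speaker", "text", "label" and a
    boolean "backchannel" flag (illustrative names). Backchannels are dropped,
    consecutive utterances of the same speaker are merged, and a neutral label
    is overridden by change/sustain when the merged utterances disagree.
    """
    turns = []
    for utt in utterances:
        if utt.get("backchannel"):
            continue                              # backchannels do not take the turn
        if turns and turns[-1]["speaker"] == utt["speaker"]:
            prev = turns[-1]
            prev["text"] += " " + utt["text"]
            if prev["label"] == "neutral" and utt["label"] in ("change", "sustain"):
                prev["label"] = utt["label"]      # neutral yields to change/sustain
        else:
            turns.append(dict(utt))
    return turns
```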
_Facial expressivity._ Facial expressivity is extracted using OpenFace (Bordes et al., 2016). As the performance of the OpenFace model is significantly better on videos containing only one face, we produce two new videos from each original one: one with the therapist only and one with the client only. In most cases, the camera focuses mainly on the person talking, leaving the other interlocutor out of focus. Yet, speaking makes the detection of mouth-related action units by OpenFace noisy. Therefore, we extract only the action units of the upper face (AUs 1, 2, 4, 5, 6, 7, 9, and 45). OpenFace is also applied to extract gaze angles and head positions and rotations. The action units are smoothed using a median filter with a kernel of size 5, and missing data are interpolated; a sketch of this post-processing is given below.
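A sketch of this post-processing, assuming the OpenFace outputs are loaded into a pandas DataFrame with one column per signal; the column name used in the usage line is illustrative.

```python
import pandas as pd
from scipy.signal import medfilt

def clean_openface_signal(series: pd.Series) -> pd.Series:
    """Interpolate missing frames, then smooth with a size-5 median filter."""
    filled = series.interpolate(limit_direction="both")   # fill missing detections
    smoothed = medfilt(filled.to_numpy(), kernel_size=5)  # remove spurious spikes
    return pd.Series(smoothed, index=series.index)

# e.g. for one upper-face action unit extracted by OpenFace:
# df["AU06_r"] = clean_openface_signal(df["AU06_r"])
```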
_Body expressivity._ Body expressivity can convey information about one's affective state (Bordes et al., 2016). Two informative measures of body expressivity are the Amplitude of movement (Bordes et al., 2016) and the Quantity of motion (Bordes et al., 2016). Amplitude is defined as the width of a movement, and Quantity of motion approximates the amount of detected movement.
Raw body joint positions are extracted using OpenPose (Bordes et al., 2016). From these raw skeleton data of the client and the therapist, we compute the Amplitude and the Quantity of motion for each frame.
The Amplitude is derived from the bounding box around the speaker in a given frame: it is computed by dividing the distance between the two wrists by the height \(H\) of the bust in the current framing. Dividing by \(H\) accounts for the different framing sizes.
The quantity of motion QoM is computed following a simplified version of the method described in (Bordes et al., 2016). Given a silhouette \(t\) that moves over \(n\) frames, QoM is defined as:
\[QoM=Area(Silhouette(t+n))-Area(Silhouette(t)) \tag{1}\]
We define \(Area(Silhouette(t))\) as the bounding box used for the Amplitude and we set n=10 frames. This simplification is chosen as the interlocutors are seated and the motion is mainly focused on the arms. As the bounding box only takes into account the upper body, the simplification is acceptable.
For both the Amplitude and the Quantity of motion, missing data are interpolated and a median filter of size 5 is applied to reduce detection errors from OpenPose. A sketch of both measures is given below.
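A sketch of the two measures, assuming OpenPose keypoints are available as (x, y) coordinates; the joints used to approximate the bust height are an assumption.

```python
import numpy as np

def amplitude(left_wrist, right_wrist, neck, mid_hip):
    """Wrist-to-wrist distance normalized by the bust height H (Section 3.2)."""
    width = np.linalg.norm(np.asarray(left_wrist) - np.asarray(right_wrist))
    H = np.linalg.norm(np.asarray(neck) - np.asarray(mid_hip))  # assumed bust height
    return width / H if H > 0 else 0.0

def quantity_of_motion(bbox_areas, n=10):
    """QoM(t) = Area(Silhouette(t + n)) - Area(Silhouette(t)), per Eqn. 1."""
    areas = np.asarray(bbox_areas, dtype=float)
    return areas[n:] - areas[:-n]
```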
### Data distribution
Similar to other MI datasets (Steiner et al., 2017; Steiner et al., 2017), our corpus is unbalanced: the Follow/Neutral class is significantly more prevalent than the Change Talk or Sustain Talk classes (see Table 1). However, our data are more balanced than some previous studies, since we considered speakers' sentences and removed listeners' backchannels.
The proportion of each class in the corpus is similar for all modalities, which means that the available modalities are independent of the classes and therefore will not affect the model.
Figure 2. Example of transcript reorganization
## 4. Architecture
Our MISC classifier relies on the following architecture: each modality of the client input is first preprocessed individually by an adapted network. These encoding networks represent each modality as an embedding vector. The modality representations are then merged using a modified version of Embracenet (Embracenet, 2017), a fusion architecture that handles missing modalities. We modify Embracenet by adding attention over modalities and call this new architecture MALEFIC (see Section 4.2). The optimal sizes of the models are determined using a grid search.
### Modalities pre processing
_Text preprocessing._ The text is preprocessed using a frozen Bert pre-trained model from the HuggingFace library (bert-base-uncased) followed by two linear layers of size 30 interposed with dropout layers, Leaky-Relu activations and one skip connection. We choose to use a frozen Bert model to avoid overfitting.
_Text and context preprocessing._ According to the findings of previous works (Levy et al., 2017; Li et al., 2018), we take into account both the therapist's and the client's behaviors. We take as input the previous turn of the therapist, the previous sentences that make up the turn of the client, and the actual client sentence to classify. Each of these sentences is processed sequentially through an un-frozen Bert, and the embeddings obtained from average pooling are concatenated.
_Audio preprocessing._ The Audio modality is preprocessed using the pre-trained Beats model (Embracenet, 2017). It takes as input the Mel filter bank of the audio and outputs an embedding of size 758.
_Facial expressivity preprocessing._ Action Units and head pose values are preprocessed using an encoder composed of two 2-dimensional convolutional layers with 16 filters and a 1-layer Transformer encoder. The encoding of the transformer is then combined to compute an embedding for the entire sequence of size 256.
_Body expressivity preprocessing._ Amplitude and Quantity of motion are preprocessed using an encoder composed of 2 convolutional layers and a 1 layer transformer encoder. The encoding of the transformer is then combined to compute an embedding for the entire sequence of size 8.
### Fusion
The fusion of modalities is achieved using a modified version of Embracenet. This method is useful for handling missing modalities. First, each preprocessing network's output is reduced to the size of the final embedding by a linear layer. Then, Embracenet combines the embeddings by randomly selecting one modality per embedding dimension. In addition, modality dropout is used during training to prevent overfitting on specific modalities: available modalities are randomly removed during training.
This approach enables each preprocessing network to efficiently learn the data structure while also taking advantage of multimodality. Furthermore, it enables us to address missing data in our corpus (namely, the face and body information that are not available for every sentence). In fact, as a result of this training, any missing modality can be easily ignored.
We improve the Embracenet architecture by incorporating self-attention. Self-attention is used to determine the significance of a given modality: if a modality is deemed important by the self-attention module, then this modality is more likely to be selected (see Fig. 1).
The output of the self-attention layer gives the weight of each modality for each embedding dimension. During training, the output of the self-attention layer for a given embedding dimension is used as the probability of selecting each modality. During evaluation, the selected modality for a given embedding dimension is the one with the highest probability. We use probabilistic selection during training to avoid overfitting.
We enhance the Embracenet framework with self-attention because some modalities contribute more to the classification than others (for instance, the Text modality has substantially more classification power than the nonverbal modalities; see Tab. 2).
The resulting architecture also estimates the usefulness of each modality, which allows for interpretation (see Section 6). A minimal sketch of this fusion step is given below.
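The sketch below illustrates the modality-attentive selection step; the attention is simplified to a linear scoring head and the layer sizes are illustrative, so this is not the exact MALEFIC implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ModalityAttentiveFusion(nn.Module):
    """Fuse per-modality embeddings by selecting one modality per dimension."""

    def __init__(self, in_dims, emb_dim):
        super().__init__()
        self.proj = nn.ModuleList([nn.Linear(d, emb_dim) for d in in_dims])
        self.score = nn.Linear(len(in_dims) * emb_dim, len(in_dims) * emb_dim)
        self.num_mod, self.emb_dim = len(in_dims), emb_dim

    def forward(self, feats, available=None):
        # feats: list of (batch, in_dims[m]) tensors; available: (batch, M) bool mask
        z = torch.stack([p(f) for p, f in zip(self.proj, feats)], dim=1)   # (B, M, D)
        logits = self.score(z.flatten(1)).view(-1, self.num_mod, self.emb_dim)
        if available is not None:                       # ignore missing modalities
            logits = logits.masked_fill(~available[:, :, None], float("-inf"))
        attn = F.softmax(logits, dim=1)                 # weight of each modality per dim
        if self.training:                               # probabilistic selection
            pick = torch.distributions.Categorical(attn.permute(0, 2, 1)).sample()
        else:                                           # keep the most important modality
            pick = attn.argmax(dim=1)
        one_hot = F.one_hot(pick, self.num_mod).permute(0, 2, 1).float()
        fused = (z * one_hot).sum(dim=1)                # (B, D) fused embedding
        return fused, attn                              # attn is used for interpretation
```

Returning the attention weights alongside the fused embedding is what makes the contribution of each modality inspectable, as discussed in Section 6.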
In the following, we use this architecture, which we call Modality Attentive Late Embracenet Fusion with Interpretable Modality Contribution (MALEFIC), with different combinations of modalities: Facial and body expressivity; Text and context; Text, context and audio; Text, context and facial expressivity; and Text, context, audio and facial expressivity. For Text and context, we previously took the context into account by concatenating the Bert embeddings of the surrounding sentences. Here, we take advantage of our fusion architecture and treat the context as another modality: a self-attention layer decides whether the client-therapist context is relevant in each case.
## 5. Classification Results
To explore the ability of our architecture to predict the MISC classes, we train and evaluate different models using the data described in Section 3. The unbalanced dataset is handled using a weighted random sampler (a sketch is given below). First, we evaluate the classification performance of each modality by training different unimodal classifiers. Then, we investigate whether multimodality improves the performance of our best unimodal model. Finally, we compare our results to existing multimodal MISC classification models.
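A sketch of the weighted sampling, assuming `labels` holds the MISC class of each training sentence; the helper name and batch size are illustrative.

```python
from collections import Counter
import torch
from torch.utils.data import DataLoader, WeightedRandomSampler

def make_balanced_loader(dataset, labels, batch_size=32):
    """Sample training items with probability inversely proportional to class frequency."""
    counts = Counter(labels)
    weights = torch.tensor([1.0 / counts[y] for y in labels], dtype=torch.double)
    sampler = WeightedRandomSampler(weights, num_samples=len(labels), replacement=True)
    return DataLoader(dataset, batch_size=batch_size, sampler=sampler)
```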
### Single modality models
Our first objective is to evaluate which modality allows for the best MISC classification score. To that end, we train different models that each take a single modality as input. These models are composed of the preprocessing networks described above, followed by a linear classifier. The results summarized in Table 2 show that the text + context modality appears to be the most efficient. On the other hand,
\begin{table}
\begin{tabular}{|l|l l l|} \cline{2-4} \multicolumn{1}{c|}{} & text and audio & visible face & visible body \\ \hline CT & 1279 : 0.24\% & 1059 : 0.26\% & 483 : 0.23\% \\ F/N & 3167 : 0.60\% & 2340 : 0.57\% & 1200 : 0.60\% \\ ST & 817 : 0.16\% & 718: 0.17\% & 353 : 0.17\% \\ \hline Total & 5263 : 100\% & 4117 : 0.78\% & 2036:0.39\% \\ \hline \end{tabular}
\end{table}
Table 1. AnnoMI distribution
body expressivity has low predictive power. Confidence intervals are calculated using the bootstrap method (Krizhevsky et al., 2014). Training details are provided below.
_Text based model._ The text preprocessing model is trained for 150 epochs with an AdamW optimizer (Kingmare et al., 2014) and a cosine annealing scheduler (Kingmare et al., 2014) with a maximum learning rate of \(2*10^{-4}\).
_Text and context based model._ The text and context preprocessing model is trained for 25 epochs with an AdamW optimizer (Kingmare et al., 2014) and a learning rate of \(2*10^{-5}\).
_Audio based model._ The audio preprocessing model is trained for 25 epochs with an AdamW optimizer (Kingmare et al., 2014) and a learning rate of \(10^{-5}\).
_Facial expressivity based model._ The facial expressivity preprocessing model is trained for 150 epochs with an AdamW optimizer (Kingmare et al., 2014) and a One Cycle LR scheduler (Kingmare et al., 2014) with a maximum learning rate of \(10^{-4}\).
_Body expressivity based model._ The body expressivity preprocessing model is trained for 1500 epochs with an AdamW optimizer (Kingmare et al., 2014) and a learning rate of \(5*10^{-5}\).
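The confidence intervals reported in Table 2 are obtained by bootstrapping; a minimal sketch of how such intervals can be computed is shown below (the predictions and the number of resamples are illustrative assumptions):

```python
import numpy as np
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
# Toy labels and predictions standing in for a model's validation output.
y_true = rng.integers(0, 3, size=500)             # 0 = CT, 1 = F/N, 2 = ST
y_pred = np.where(rng.random(500) < 0.7, y_true, rng.integers(0, 3, size=500))

scores = []
for _ in range(1000):
    idx = rng.integers(0, len(y_true), size=len(y_true))  # resample with replacement
    scores.append(f1_score(y_true[idx], y_pred[idx], average="macro"))
low, high = np.percentile(scores, [2.5, 97.5])
print(f"F1-macro 95% CI: [{low:.2f}, {high:.2f}]")
```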
### Multimodal models
Now that we have learned more about the performance of our unimodal models, we investigate whether multimodality can improve our MISC classification model. Using the fusion architecture described above, we train several multimodal models. We use frozen Bert and Beats models to improve training time and avoid overfitting. As a means of comparison, we also train the text + context (linear) model from the previous section with a frozen Bert transformer. These multimodal models are trained for 150 epochs with the AdamW optimizer (Kingmare et al., 2014) and a cosine annealing scheduler (Kingmare et al., 2014) with a maximum learning rate of \(2*10^{-4}\). The results are displayed in Table 4. Because of the low diversity of body expressivity (clients are seated in the videos and do not move much) and the large amount of missing data (a quarter of sentences are provided with body expressivity information), adding body expressivity decreases the accuracy of change talk detection, which is the most important class. Therefore, in the following, we decide not to use body expressivity in the model.
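A minimal sketch of the frozen-encoder setup, assuming a Hugging Face checkpoint and the cosine annealing schedule mentioned above (the checkpoint name and the classification head are placeholders):

```python
import torch
from transformers import AutoModel

# Freeze the pretrained text encoder so it acts as a fixed feature extractor;
# only the fusion and classification layers receive gradient updates.
text_encoder = AutoModel.from_pretrained("bert-base-uncased")  # illustrative checkpoint
for param in text_encoder.parameters():
    param.requires_grad = False

classifier = torch.nn.Linear(text_encoder.config.hidden_size, 3)
trainable = [p for p in classifier.parameters() if p.requires_grad]
optimizer = torch.optim.AdamW(trainable, lr=2e-4)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=150)
```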
In all cases, using the MALEFIC architecture improves classification results over the most performant preprocessing network (text + context, linear). In particular, combining text, context, audio, and facial expressivity outperforms all models with frozen Bert and Beats embeddings, meaning that the combination of visual, vocal, and verbal modalities improves classification performance. MALEFIC is able to take advantage of the new modalities and to select the relevant multimodal information. For a MISC classifier, we especially want to classify change talk correctly and to avoid confusing change talk with sustain talk and vice versa. The confusion matrix in Tab. 3 shows that our model makes few change talk/sustain talk mistakes.
### Comparison with existing studies
We compare our results with three existing studies (Zhou et al., 2017; Zhang et al., 2018; Zhang et al., 2018). However, the data sets used in these studies are not available, so the conclusions of the comparison should be drawn with care. Table 5 summarizes our comparisons.
#### 5.3.1. Text based model.
In (Zhou et al., 2017), a Bert model is trained on AnnoMI to predict MISC classes only on the current utterance (text without context). This model is similar to the one we described in section 5.1 and is trained on the same dataset. The only difference with our work is the reorganization of the transcripts performed in Section 3.2. The model in (Zhou et al., 2017) reaches a 0.55 F1 macro score, which is significantly lower than the score achieved by our approach (0.68), which uses a similar architecture.
One factor that may explain the performance gap is the preprocessing of the text performed in our approach, as discussed in Section 3.2. By providing full sentences with semantic meaning, our approach is able to capture more nuanced linguistic features, enabling a more accurate classification of MISC classes. These results provide a validation of the effectiveness of our text preprocessing.
#### 5.3.2. Text and audio-based model.
In (Zhang et al., 2018), audio and text are used to classify utterances into the 3 MISC classes, change talk, sustain talk, and follow / neutral. Our approach achieves a significantly higher F1 micro score of 0.62 compared to their score of 0.53, based solely on audio input (see Table 5). However, this accuracy gap may be attributed to the poor quality of audio recordings in their corpus, which is not the case in ours.
Moreover, in their approach, adding the audio modality results in a small drop in precision, whereas, with our fusion method, we are able to slightly improve accuracy by adding the audio modality.
#### 5.3.3. Text and Facial expressivity based model.
In (Zhou et al., 2017), text and facial expressivity (action units, head positions, and eye direction) are used to predict whether an utterance displays change talk or not. They address a two-label classification problem, whereas we classify utterances into 3 categories. Their corpus was collected using Zoom, meaning that participants are always facing the camera, whereas our corpus shows a greater variety of body orientations and, therefore, noisier OpenFace outputs. Nevertheless, we are able to classify change talk significantly better.
In their approach, adding facial expressivity improves the F1 scores on the not change talk class, but does not change the change talk F1 score. Our approach allows us to slightly improve the F1 score on change talk and to produce a higher overall F1 score despite the variety of positions of the clients in the videos and the missing data (when the camera does not show the client's face).
## 6. Interpretation
The ability to quantify the contribution of each modality in the classification process is a key advantage of our approach. By utilizing multiple modalities, such as text, prosody and facial expressivity, we can gain a more comprehensive understanding of the client's communication and behavior during an MI conversation.
Identifying which modality is relevant to the classification of a given sentence can offer valuable insights into the client's state of mind. For example, if facial expressivity or prosody are found to be more influential in the classification process, it may suggest that the client is trying to conceal their true thoughts. Several elements of our model provide the basis for explaining its outputs. First, the use of dropout and random selection of embeddings during training allows the final embeddings of each modality to be computed in the same embedding space as the fusion embedding, which ensures that all modalities are represented consistently.
Furthermore, the self-attention layers included in our approach allow the model to dynamically weigh the importance of each modality for each sentence. These layers give a sense of the relevance of each modality not only for each embedding dimension but also for each sentence to be classified.
In this section, we take advantage of these properties to visualize and quantify the contribution of each modality. All the following statistics are computed on the part of the validation set where all modalities are available.
### Overall modality contributions
To quantify the contribution of each modality within the corpus, we examine the average number of times a modality is selected by the self-attention module over all embedding dimensions. Our analysis reveals the following overall contributions: text (26%), audio (16%), face (26%), previous client sentence in the turn (16%), and previous therapist turn (16%). These results show that the model considers all modalities, with greater weight placed on text and facial expressivity. This aligns with our finding that text is the strongest predictor when taken as a single input (see Table 2). The fact that facial expressivity has a strong weight despite its low predictive power is explained below (see Section 6.3).
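A small sketch of how such contribution shares can be computed from the per-dimension modality selections (the selection indices here are synthetic and merely stand in for the fusion module's output):

```python
import numpy as np

modalities = ["text", "audio", "face", "client context", "therapist context"]
# selections[i, d] = index of the modality chosen for sentence i, dimension d.
rng = np.random.default_rng(0)
selections = rng.integers(0, len(modalities), size=(200, 128))

shares = np.array([(selections == m).mean() for m in range(len(modalities))])
for name, share in zip(modalities, shares):
    print(f"{name}: {share:.0%}")
```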
### Embedding specialization
To understand the role of each embedding dimension, we examine the average number of times a modality was selected for a given embedding dimension. Figure 3 shows the distributions of the modality contribution averaged over each embedding dimension.
This figure shows that some embedding dimensions have a modality contribution of 1 for the text and facial expressivity modalities, meaning that these dimensions have specialized in a certain modality, which is systematically selected whenever it is available. The two modalities with the greatest weight in the overall corpus (text and facial expressivity) are also the two modalities with specialized embedding dimensions. The specialization of some dimensions in the text modality aligns with our finding that text is the strongest predictor when taken as a single input (see Table 2).
On the other hand, every modality has some dimensions with a contribution of 0, meaning that the modality is never selected for those dimensions.
### Quantification of modality contribution for each sentence
To quantify the contribution of each modality to the classification of a given sentence, we examine the number of dimensions of the fusion embedding that have been selected from this modality for that sentence. This provides insight, for a given instance of the client's speech (a sentence), into the amount of information from a modality that is used to make a decision. Figure 4 shows the distribution of the modality contributions averaged over each sentence. Our analysis indicates that the contribution of each modality is highly sentence-dependent. Specifically, we observed that the distributions of the contributions of text, audio, and context from both the client and the therapist can each be characterized by two Gaussian distributions, indicating that these modalities are more informative for some sentences than for others.
In contrast, only one Gaussian distribution is visible for facial expressivity, suggesting that this modality is used more consistently
\begin{table}
\begin{tabular}{|l|l|c c|c c|} \hline modality : & Text without context & Text + context (linear) & Audio & Facial expressivity & Body expressivity \\ \hline F1 - CT & 0.62[0.56,0.68] & **0.72[**0.66,0.77**] & 0.32[0.26,0.39] & 0.30 [0.23,0.36] & 0.14[0.05,0.22] \\ F1 - ST & 0.63[0.58,0.67] & **0.71[**0.67,0.75**] & 0.44[0.39,0.5] & 0.36 [0.31,0.42] & 0.25[0.17,0.35] \\ F1 - F/N & 0.79[0.77,0.82] & **0.85[**0.83,0.87**] & 0.74[0.71,0.76] & 0.58 [0.54,0.61] & 0.67[0.63,0.72] \\ \hline F1 - micro & 0.73[0.70,0.75] & **0.80[**0.76,0.82**] & 0.62[0.59,0.65] & 0.46 [0.43,0.49] & 0.51[0.46,0.55] \\ F1 - macro & 0.68[0.65,0.71] & **0.76[**0.74,0.79**] & 0.51[0.47,0.54] & 0.41[0.38,0.45] & 0.36[0.31,0.40] \\ \hline \end{tabular}
\end{table}
Table 2. F1 score of single-modality models
Figure 3. Distribution of modalities contribution for each embedding dimension
\begin{table}
\begin{tabular}{|l|c c c|} \hline \multicolumn{3}{|c|}{Predicted} \\ \cline{2-4} \multicolumn{1}{c|}{} & \multicolumn{1}{c}{ST} & F/N & CT \\ \hline \multirow{3}{*}{**Category**} & ST & 0.65 & 0.29 & 0.06 \\ & F/N & 0.07 & 0.79 & 0.14 \\ & CT & 0.04 & 0.27 & 0.69 \\ \hline \end{tabular}
\end{table}
Table 3. Confusion matrix of the model Text+Audio+Face
across the dataset. This may be because facial expressivity is not a strong predictor of the MISC classes. Indeed, because of modality dropout, the model cannot completely ignore a modality. Therefore, in the case of a weak predictor, the model has a harder time determining when the modality is useful and takes it into account consistently across the corpus. This can also explain why facial expressivity has a weight as large as the text modality in the overall contribution and why some embedding dimensions specialize in this modality: the face modality is always selected because the model cannot detect the sentences where it is really useful, whereas the other modalities are selected only when they are relevant.
To better understand the differences between sentences that lead to the above results, we cluster the per-sentence contributions of the considered modalities using K-means with the elbow method, and find five clusters with a silhouette score of 0.96.
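A brief sketch of this clustering step, assuming per-sentence contribution vectors as input (the data here is synthetic and the number of clusters is fixed to five for illustration):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
contributions = rng.dirichlet(np.ones(5), size=300)   # one row per sentence

# Elbow method: inspect the inertia curve to choose the number of clusters.
inertias = [KMeans(n_clusters=k, n_init=10, random_state=0)
            .fit(contributions).inertia_ for k in range(2, 10)]

kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(contributions)
print("silhouette:", silhouette_score(contributions, kmeans.labels_))
```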
Sentences can be clustered into groups where the contributions of the modalities are different (see Fig. 5). The five clusters can be interpreted as five types of sentences:
* Cluster 1: The text and the context of both, the client and the therapist are relevant: 57%
* Cluster 2: The previous speaking turn of the therapist is relevant: 16%
* Cluster 3: The previous sentences of the client in the speaking turn are relevant: 12%
\begin{table}
\begin{tabular}{|p{42.7pt}|p{34.1pt}|p{34.1pt}|p{34.1pt}|p{34.1pt}|p{34.1pt}|p{34.1pt}|p{34.1pt}|p{34.1pt}|p{34.1pt}|p{34.1pt}|} \hline \multicolumn{1}{|c|}{\multirow{2}{*}{Modalities:}} & \multicolumn{2}{c|}{Text + context (linear)} & \multicolumn{2}{c|}{Text + context (MALEFIC)} & \multicolumn{2}{c|}{Face + Body} & \multicolumn{2}{c|}{Text + Face} & \multicolumn{2}{c|}{Text + Audio} & \multicolumn{2}{c|}{Text + Audio + Face} \\ \hline F1 - CT & 0.61[0.54,0.66] & 0.63[0.57,0.68] & 0.24[0.18,0.31] & 0.64[0.58,0.69] & **0.65[0.59,0.70]** & **0.65[0.59,0.71]** \\ F1 - ST & 0.58[0.53,0.63] & 0.63[0.58,0.68] & 0.41[0.35,0.47] & 0.60[0.55,0.66] & **0.66[0.62,0.70]** & **0.66[0.61,0.71]** \\ F1 - F/N & 0.78[0.75,0.80] & 0.80[0.77,0.82] & 0.63[0.60,0.67] & 0.80[0.78,0.83] & **0.81[0.78,0.83]** & **0.81[0.77,0.82]** \\ \hline F1 - micro & 0.71[0.68,0.73] & 0.73[0.70,0.76] & 0.51[0.47,0.54] & 0.74[0.71,0.76] & 0.74[0.72,0.77] & **0.76[0.72,0.77]** \\ F1 - macro & 0.65[0.62,0.69] & 0.69[0.65,0.72] & 0.43[0.40,0.46] & 0.68[0.65,0.71] & **0.71[0.67,0.73]** & 0.70[0.67,0.73] \\ \hline \end{tabular}
\end{table}
Table 4. F1 score of models trained with frozen Bert and Beats models
* Cluster 4: The current sentence is relevant: 9%
* Cluster 5: The audio is relevant: 6%
Table 6 shows an example of sentences for each group.
These clusters confirm that facial expressivity contributes consistently across the dataset. They also demonstrate the importance of considering multiple modalities. By revealing which modality is most relevant for a given sentence, this analysis provides a valuable tool for validating decisions and could be used by the therapist to provide feedback to the client in real time. It could also be used by a virtual agent acting as the therapist to detect change talk and use this information for its next dialog move. For example, the agent could explain its decisions by saying something like "From your tone of voice, it sounds like you are not ready to change." As expected, the cluster distributions show that text and context are the most important features in most cases.
### Embedding visualization
The embedding space is visualized using UMAP (Krizhevsky et al., 2017), a reversible dimensionality reduction framework. Thanks to this reversibility, we can create a map of the embedding space showing how each embedding point would be classified. This visualization, shown in Figure 6, allows us to determine how confident the classification is for every modality. It confirms that text is the most expressive modality (see Fig. 6(b)) and that most of the other modalities are pertinent for accurate classification only in some cases, as observed in the previous sections. The visualization also illustrates which modalities contributed to the classification of each sentence, and in which direction. Figure 7 shows examples of sentences where the text embedding alone does not classify accurately but is corrected by the other modalities (Figure 7(a)): on the left, text alone classifies the sentence as change talk and, on the right, as sustain talk, whereas the true class is neutral. It also shows an example where text alone classifies the sentence correctly as change talk and the model is not misled by the other modalities (see Figure 7(b)).
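A rough sketch of building such a decision map with umap-learn's approximate `inverse_transform`, assuming placeholder embeddings and an external classifier:

```python
import numpy as np
import umap

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(500, 64))            # stand-in fusion embeddings
reducer = umap.UMAP(n_components=2, random_state=0)
points_2d = reducer.fit_transform(embeddings)

# Lay a grid over the 2-D map, lift it back to the embedding space, and
# classify each lifted point to colour the map by predicted class.
xs = np.linspace(points_2d[:, 0].min(), points_2d[:, 0].max(), 50)
ys = np.linspace(points_2d[:, 1].min(), points_2d[:, 1].max(), 50)
grid = np.stack(np.meshgrid(xs, ys), axis=-1).reshape(-1, 2)
grid_embeddings = reducer.inverse_transform(grid)
# grid_labels = classifier.predict(grid_embeddings)  # classifier: a trained model
```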
## 7. Conclusion and Future Work
In this paper, we present a multimodal classifier for the three MISC classes of client behavior: change talk, sustain talk, and follow/neutral. Our classifier is based on AnnoMI, an open-access Motivational Interviewing database that is transcribed and annotated with MISC classes. We reorganized the transcripts into sentences with lexical meaning and performed multimodal annotations of facial and body expressivity. Taking advantage of these multimodal inputs, we train a classifier that achieves greater accuracy than a unimodal approach and outperforms existing approaches. We also use self-attention layers to determine the contribution of each modality, allowing us to interpret the results of our classifier and identify the most informative modality for a given sentence.
Figure 5. Proportion of modalities contribution within each cluster
Figure 6. Visualization of modalities embeddings with UMAP projection
Figure 7. Examples of sentences representation
In future work, we plan to improve the model's performance by fine-tuning the Bert and Beats transformers. In addition, we envision endowing a virtual therapist agent with this model to enable it to detect whether the client is responding to therapy and is producing change talk. The agent could also provide feedback to the user regarding why it detected that the client may not be ready to change (e.g., tone of voice). Finally, we aim to make the model publicly available to facilitate the annotation of new MI videos and serve as a baseline for future work. Overall, our approach demonstrates the value of multimodal input in improving the accuracy of MISC classification while providing interpretable features.
## Acknowledgement
This work was partially funded by the ANR-DFG-JST Panorama and ANR-JST-CREST TAPAS (19-JSTS-0001-01) projects.
|
2309.12676 | JCoLA: Japanese Corpus of Linguistic Acceptability | Neural language models have exhibited outstanding performance in a range of
downstream tasks. However, there is limited understanding regarding the extent
to which these models internalize syntactic knowledge, so that various datasets
have recently been constructed to facilitate syntactic evaluation of language
models across languages. In this paper, we introduce JCoLA (Japanese Corpus of
Linguistic Acceptability), which consists of 10,020 sentences annotated with
binary acceptability judgments. Specifically, those sentences are manually
extracted from linguistics textbooks, handbooks and journal articles, and split
into in-domain data (86 %; relatively simple acceptability judgments extracted
from textbooks and handbooks) and out-of-domain data (14 %; theoretically
significant acceptability judgments extracted from journal articles), the
latter of which is categorized by 12 linguistic phenomena. We then evaluate the
syntactic knowledge of 9 different types of Japanese language models on JCoLA.
The results demonstrated that several models could surpass human performance
for the in-domain data, while no models were able to exceed human performance
for the out-of-domain data. Error analyses by linguistic phenomena further
revealed that although neural language models are adept at handling local
syntactic dependencies like argument structure, their performance wanes when
confronted with long-distance syntactic dependencies like verbal agreement and
NPI licensing. | Taiga Someya, Yushi Sugimoto, Yohei Oseki | 2023-09-22T07:35:45Z | http://arxiv.org/abs/2309.12676v1 | # JCoLA: Japanese Corpus of Linguistic Acceptability
###### Abstract
Neural language models have exhibited outstanding performance in a range of downstream tasks. However, there is limited understanding regarding the extent to which these models internalize syntactic knowledge, so that various datasets have recently been constructed to facilitate syntactic evaluation of language models across languages. In this paper, we introduce **JCoLA** (Japanese Corpus of Linguistic Acceptability), which consists of 10,020 sentences annotated with binary acceptability judgments. Specifically, those sentences are manually extracted from linguistics textbooks, handbooks and journal articles, and split into in-domain data (86 %; relatively simple acceptability judgments extracted from textbooks and handbooks) and out-of-domain data (14 %; theoretically significant acceptability judgments extracted from journal articles), the latter of which is categorized by 12 linguistic phenomena. We then evaluate the syntactic knowledge of 9 different types of Japanese language models on JCoLA. The results demonstrated that several models could surpass human performance for the in-domain data, while no models were able to exceed human performance for the out-of-domain data. Error analyses by linguistic phenomena further revealed that although neural language models are adept at handling local syntactic dependencies like argument structure, their performance wanes when confronted with long-distance syntactic dependencies like verbal agreement and NPI licensing.
## 1 Introduction
Neural language models, especially Transformer-based language models (Vaswani et al., 2017), have exhibited outstanding performance in a range of downstream tasks (Wang et al., 2018, 2019), yet there is limited understanding regarding the extent of linguistic knowledge these models have internalized. Several studies have explored the syntactic competence of language models through acceptability judgment tasks (e.g., Linzen et al., 2016; Marvin and Linzen, 2018). These and other related studies are critical as they mark the beginning of syntactic evaluations of language models, but they were limited in the scope of linguistic phenomena. In more recent times, researchers have constructed extensive datasets to facilitate more comprehensive syntactic evaluations (Warstadt et al., 2019, 2020; Xiang et al., 2021; Trotta et al., 2021; Mikhailov et al., 2022). Nonetheless, the majority of these investigations have centered around English and other European languages (Gulordava et al., 2018; Warstadt et al., 2019, 2020; Wilcox et al., 2018), with only a handful expanding their scope to encompass non-European languages (Gulordava et al., 2018; Ravfogel et al., 2018). Notably, an even smaller number of studies have addressed a broad spectrum of linguistic phenomena in languages other than English (Trotta et al., 2021; Xiang et al., 2021; Mikhailov et al., 2022).
In this paper, we introduce JCoLA (Japanese Corpus of Linguistic Acceptability) 1, which consists of 10,020 sentences with acceptability judgments by linguists. Specifically, those sentences are manually extracted from linguistics textbooks, handbooks and journal articles, and split into in-domain data (86 %; relatively simple acceptability judgments extracted from textbooks and handbooks) and out-of-domain data (14 %; theoretically significant acceptability judgments extracted from journal articles), the latter of which is categorized by 12 linguistic phenomena. We then evaluate the syntactic knowledge of 9 different types of Japanese language models on JCoLA. The results demonstrated that several models could surpass human performance for the in-domain data, while no models were able to exceed human performance for the out-of-domain data. Error analyses by lin
guistic phenomena further revealed that although neural language models are adept at handling local syntactic dependencies like argument structure, their performance wanes when confronted with long-distance syntactic dependencies like verbal agreement and NPI licensing.
## 2 Related Work
Acceptability judgment is a crucial aspect of human linguistic competence. It refers to the innate ability of individuals to differentiate between sentences that are grammatically correct and those that are not, even without any explicit training in grammar. For instance, when presented with two sentences, individuals can intuitively recognize which one is more acceptable or natural-sounding. Such judgments are considered the primary behavioral measure used by generative linguists to study the underlying structure of language in humans (Chomsky, 1957). By examining acceptability judgments, linguists can gain insights into the rules that govern language and how these rules are applied by speakers of a particular language.
Historically, the evaluation of language models has been conducted using metrics such as perplexity, or based on how well the models perform on specific downstream tasks, as seen in benchmarks like GLUE (Wang et al., 2018). However, in recent years, there have been efforts to assess the syntactic knowledge of language models through acceptability judgment tasks.
Linzen et al. (2016) first employed minimal pairs to examine how well LSTM language models could capture subject-verb agreement in English.
1. The key is on the table.
2. * The key are on the table.
This and other related studies are critical as they mark the beginning of syntactic evaluations of language models. However, they were limited in the scope of linguistic phenomena considered (e.g., Marvin and Linzen, 2018; Futrell et al., 2019; Gulordava et al., 2018).
In light of this, more recent approaches introduced large-scale acceptability judgment corpora for targeted syntactic evaluations of language models (Warstadt et al., 2019, 2020). Similar to Linzen et al. (2016), Warstadt et al. (2020) constructed BLiMP (Benchmark of Linguistic Minimal Pairs) as a dataset employing minimal pairs. BLiMP consists of 67,000 minimal pairs automatically generated across 12 types of linguistic phenomena. This enables the evaluation of language models on a wide range of linguistic phenomena, not limited to subject-verb agreement. Furthermore, similar datasets have been developed for languages other than English, allowing for comparable evaluations across various languages (Xiang et al., 2021; Someya and Oseki, 2023).
Concurrently, there is also an approach to targeted syntactic evaluations of language models that does not rely on minimal pairs but instead evaluates language models with binary classification tasks based on acceptability. CoLA (Corpus of Linguistic Acceptability; Warstadt et al. (2019)) is the first corpus that achieves this, a dataset built by collecting sentences from syntax textbooks, handbooks, and linguistics journals. Similar datasets to CoLA have also been emerging for languages other than English (Trotta et al., 2021; Mikhailov et al., 2022), though none exist for Japanese as of yet (cf. Table 1).
## 3 JCoLA
In this study, we introduce JCoLA (Japanese Corpus of Linguistic Acceptability), which will be the first large-scale acceptability judgment task dataset focusing on Japanese. JCoLA consists of sentences from textbooks and handbooks on Japanese syntax, as well as from journal articles on Japanese syntax that are published in JEAL (Journal of East Asian Linguistics), one of the prestigious journals in theoretical linguistics.
### Data Collection
Sentences in JCoLA were collected from prominent textbooks and handbooks focusing on Japanese syntax. In addition to the main text, example sentences included in the footnotes were also considered for collection. We also collected acceptability judgments from journal articles on Japanese syntax published in JEAL (Journal of East Asian Linguistics): one of the prestigious journals in theoretical linguistics. Specifically, we examined all the articles published in JEAL between 2006 and 2015 (133 papers in total), and extracted 2,252 acceptability judgments from 26 papers on Japanese syntax (Table 2). Acceptability judgments include sentences in appendices and footnotes, but not sentences presented for analyses of syntactic structures (e.g. sentences with brackets to show their syntactic structures). As a result, a total of 11,984 example
sentences were collected. Using this as a basis, JCoLA was constructed through the methodology explained in the following sections.
### Data Preparation
#### 3.2.1 Data Preprocessing
Among the sentences extracted through the above method, there were sentences that were not appropriate for JCoLA, a binary classification dataset based on single-sentence acceptability judgments. We either remove or modify these sentences in preprocessing. First, sentences labeled with '?', '#', '%', or '(?)' were removed. Additionally, sentences that did not have such labels but were noted to have variable acceptability depending on the speaker were also removed. Furthermore, duplicates, examples that were not single-sentence acceptability judgments, those containing inappropriate vocabulary, and examples whose unacceptability depends on the context were eliminated. Lastly, some sentences were found to be incomplete. In these cases, they were supplemented to form complete sentences, ensuring that the acceptability did not change (e.g., John's book → John's book is red.)
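A minimal sketch of this filtering step (the label inventory and the toy examples are assumptions made for illustration):

```python
# Marks indicating intermediate or speaker-dependent judgments are removed.
EXCLUDED_MARKS = {"?", "#", "%", "(?)"}

def keep_example(mark: str) -> bool:
    """Keep only clear binary judgments: acceptable ('') or unacceptable ('*')."""
    return mark not in EXCLUDED_MARKS and mark in {"", "*"}

raw_examples = [("", "sentence judged acceptable"),
                ("*", "sentence judged unacceptable"),
                ("?", "marginal sentence, removed")]
jcola_examples = [(m, s) for m, s in raw_examples if keep_example(m)]
```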
#### 3.2.2 Categorization
A part of the data is annotated based on linguistic phenomena in order to analyze each phenomenon in detail. We categorize the 12 phenomena in JCoLA as follows (Table 3):
**Argument Structure:** acceptability judgements based on the order of arguments (3a) and case marking (3b).
* After Taroo was fired for that reason, Hanako was fired too.'
**Filler-gap:** acceptability judgements based on the dependency between the moved element and the gap. For instance, this includes comparatives (7a) and cleft sentences (7b).
* Mary-wa John-ga kaita yori nagai Mary-top John-nom wrote than long ronbun-o kaita. paper-acc wrote
* Mary wrote a longer paper than John wrote'
* Taroo-ga atta no-wa Hanako-ni Taroo-nom saw that-top Hanako-dat da. is
* It was Hanako that Taroo saw.'
**Island Effects:** acceptability judgements based on the restrictions on filler-gap dependencies such as wh-movements.
* Taroo-wa Hanako-ga naze kare-no Taroo-top Hanako-nom why he-gen tegami-o suteta kara okotetir letter-acc discarded because be.angry no? C
* Why is Taro angry because Hanako discarded his letter?'
**Morphology:** acceptability judgements based on the morphology. For instance, it includes idioms.
* Taroo-no kotoba-wa hi-ni abura-o Taroo-gen words-top fire-dat oil-acc cossoida. pour
* Taroo's words made the situation worse'
**Nominal Structure:** acceptability judgements based on the internal structure of noun phrases.
* amen-no hi-wa kiraida rainy day-top hate.be 'I hate rainy days.'
**NPI/NCI:** acceptability judgements based on the restrictions on where negative polarity/concord items (NPIs/NCIs) can appear. For instance, NCIs include _daremo_.
* Daremo monku-o iw-anakat-ta. who-mo complaint-acc say-neg-past 'Nobody complained.'
\begin{table}
\begin{tabular}{l r r} \hline \hline Source & N & \(\%\) \\ \hline Gunji (1987) & 301 & 88.0 \\ Inoue (1976a,b) & 1805 & 86.2 \\ Kuno (1973) & 1553 & 78.0 \\ Kuroda (1965) & 332 & 91.6 \\ Kuroda (1992) & 681 & 85.5 \\ Miyagawa (2008) & 591 & 82.7 \\ Shibatani (1976) & 2209 & 83.3 \\ Shibatani (1990) & 387 & 90.2 \\ Tsujimura (1999) & 531 & 75.9 \\ Tsujimura (2013) & 259 & 81.1 \\
**In-Domain** & 8649 & 83.4 \\ \hline Abe (2011) & 15 & 53.3 \\ Asano and Ura (2010) & 92 & 63.0 \\ Bobaljik and Wurmbrand (2007) & 11 & 72.7 \\ Grosu (2010) & 11 & 18.2 \\ Grosu and Landman (2012) & 8 & 62.5 \\ Hayashishita (2009) & 34 & 76.5 \\ Ivana and Sakai (2007) & 38 & 73.7 \\ Kishida and Sato (2012) & 81 & 77.8 \\ Kishimoto (2008) & 204 & 71.1 \\ Kishimoto (2012) & 90 & 61.1 \\ Miyamoto (2009) & 17 & 94.1 \\ Nishigauchi (2014) & 68 & 94.1 \\ Oshima (2006) & 25 & 96.0 \\ Saito et al. (2008) & 32 & 78.1 \\ Sawada (2013) & 40 & 95.0 \\ Shibata (2015) & 72 & 80.6 \\ Shimoyama (2014) & 51 & 92.2 \\ Sudo (2015) & 133 & 65.4 \\ Takahashi (2006) & 26 & 57.7 \\ Takahashi (2010) & 29 & 79.3 \\ Takano (2011) & 41 & 90.2 \\ Takita (2009) & 6 & 16.7 \\ Tenny (2006) & 45 & 93.3 \\ Tomioka (2009) & 15 & 60.0 \\ Tsujioka (2011) & 67 & 56.7 \\ Watanabe (2010) & 27 & 81.5 \\ Watanabe (2013) & 93 & 64.5 \\
**Out-of-Domain** & 1371 & 73.2 \\ \hline
**Total** & 10,020 & 82.0 \\ \hline \hline \end{tabular}
\end{table}
Table 2: The number of sentences in JCoLA by source. \(N\) is the number of sentences in a source. \(\%\) is the percent of the acceptable sentences in a source. While _In-Domain_ sources are textbooks and handbooks on Japanese syntax, all the sources listed above as _Out-of-Domain_ are journal articles published in JEAL.
\begin{table}
\begin{tabular}{c c} \hline \hline Phenomenon & \# Sentences \\ \hline Argument structure & 545 \\ Filler-gap & 257 \\ Morphology & 159 \\ Nominal structure & 150 \\ Quantifier & 127 \\verbal agreement & 105 \\ Binding & 101 \\ Ellipsis & 44 \\ Island effects & 19 \\ NPI/NCI & 12 \\ Control/raising & 11 \\ Simple & 71 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Number of sentences by phenomenon in out-of-domain data. Note that the examples in JCoLA could be categorized into multiple phenomena.
**Quantifier:** acceptability judgements based on the distribution of quantifiers such as floating quantifiers.
1. [label=(12)]
2. John-wa hon-o san-satsu katta. John-top book-acc three-cl bought
3. 'John bought three books.'
**Verbal Agreement:** acceptability judgements based on the dependency between subjects and verbs. Japanese does not have the same kind of subject-verb agreement as English. Instead, this category includes linguistic phenomena such as subject honorification, where the social status of the subject is reflected in the morphology of the verb.
The evaluated models differ in the method of morphological analysis and tokenization, and in their training corpus.
**BERT.** We evaluate three different types of BERT language models provided by Tohoku University NLP group3: Tohoku BERTBASE4, Tohoku BERT-charBASE5 and Tohoku BERTLARGE6. These models are trained on the Japanese version of Wikipedia. The texts are first tokenized by MeCab (Kudo et al., 2004) and then split into subwords by BPE (Sennrich et al., 2016).7 Tohoku BERTBASE and Tohoku BERT-charBASE have 12 layers, 12 attention heads, and 768-dimensional hidden states, while Tohoku BERTLARGE has 24 layers, 16 attention heads, and 1024-dimensional hidden states.
Footnote 3: [https://github.com/cl-tohoku](https://github.com/cl-tohoku)
Footnote 4: [https://huggingface.co/cl-tohoku/bert-base-language-v2](https://huggingface.co/cl-tohoku/bert-base-language-v2)
Footnote 5: [https://huggingface.co/cl-tohoku/bert-base-japanese-char-v2](https://huggingface.co/cl-tohoku/bert-base-japanese-char-v2)
Footnote 6: [https://huggingface.co/cl-tohoku/bert-large-japanese](https://huggingface.co/cl-tohoku/bert-large-japanese)
Footnote 7: For Tohoku BERT-charBASE, the texts are segmented into characters.
In addition, we evaluate a BERT language model provided by NICT (NICT BERTBASE).8 The model configuration is the same as Tohoku BERTBASE and Tohoku BERT-charBASE.
Footnote 8: [https://direct.nict.go.jp/](https://direct.nict.go.jp/)
**Japanese RoBERTa.** We also evaluate three variants of RoBERTa language models provided by Kawahara Lab. at Waseda University9: Waseda RoBERTaBASE10, Waseda RoBERTa-seq128LARGE11 and Waseda RoBERTa-seq512LARGE12. These models are trained on the Japanese version of Wikipedia and the Japanese portion of CC-100. The texts are first tokenized by Juman++ (Morita et al., 2015) and then split into subwords using Sentence Piece (Kudo and Richardson, 2018) with a unigram language model (Kudo, 2018). Waseda RoBERTaBASE has 12 layers, 12 attention heads, and 768-dimensional hidden states. Waseda RoBERTa-seq128LARGE and Waseda RoBERTa-seq512LARGE both have 24 layers, 16 attention heads, and 1024-dimensional hidden states, but are trained with the maximum sequence length of 128 and 512, respectively.
Footnote 9: [https://nlp-waseda.jp/en/](https://nlp-waseda.jp/en/)
Footnote 10: [https://huggingface.co/nlp-waseda/roberta-base-japanese](https://huggingface.co/nlp-waseda/roberta-base-japanese)
Footnote 11: [https://huggingface.co/nlp-waseda/roberta-large-japanese](https://huggingface.co/nlp-waseda/roberta-large-japanese)
**XLM-RoBERTa.** To compare the performance of monolingual and multilingual language models on JCoLA, we also evaluate two multilingual language models with different parameter sizes: XLM-RoBERTaBASE13 and XLM-RoBERTaLARGE14. These models are trained on multilingual Common Crawl (Wenzek et al., 2020) and the train texts are directly tokenized using Sentence Piece (Kudo and Richardson, 2018) with a unigram language model (Kudo, 2018). XLM-RoBERTaBASE has 12 layers, 12 attention heads and 768-dimensional hidden states. XLM-RoBERTaLARGE has 24 layers, 16 attention heads and 1024-dimensional hidden states.
Footnote 13: [https://huggingface.co/xlm-roberta-base](https://huggingface.co/xlm-roberta-base)
Footnote 14: [https://huggingface.co/xlm-roberta-large](https://huggingface.co/xlm-roberta-large)
### Training Settings
Each language model is trained for five epochs with the AdamW optimizer (Loshchilov and Hutter, 2019) and linear warmup with a warmup ratio of 0.1. In addition, the language models are trained using three different learning rates (5e-5, 3e-5, and 2e-5), and we evaluate the models that achieved the highest Matthews Correlation Coefficient (MCC; Matthews (1975)) on the development data. MCC is an evaluation metric suitable for unbalanced binary classification and is also used in Warstadt et al. (2019). For each configuration, we trained 20 models with different random seeds to mitigate the effect of randomness. The score for each language model is calculated as the average across 20 different random seeds, but we ignore those results where the models achieved less than zero MCC score on the development set, as in Warstadt and Bowman (2020).
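A brief sketch of this training and selection protocol, assuming the `transformers` linear-warmup scheduler and scikit-learn's MCC implementation (the training loop itself is omitted):

```python
import numpy as np
import torch
from sklearn.metrics import matthews_corrcoef
from transformers import get_linear_schedule_with_warmup

def make_optimizer(model, lr, num_training_steps, warmup_ratio=0.1):
    optimizer = torch.optim.AdamW(model.parameters(), lr=lr)
    scheduler = get_linear_schedule_with_warmup(
        optimizer,
        num_warmup_steps=int(warmup_ratio * num_training_steps),
        num_training_steps=num_training_steps)
    return optimizer, scheduler

def average_over_seeds(dev_mccs, test_mccs):
    """Average test MCC over seeds, dropping runs with dev MCC below zero."""
    kept = [t for d, t in zip(dev_mccs, test_mccs) if d >= 0]
    return float(np.mean(kept)) if kept else float("nan")

# Example of the metric itself on toy predictions.
print(matthews_corrcoef([1, 1, 0, 1, 0], [1, 0, 0, 1, 0]))
```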
## 5 Results and Discussion
### Overall performance
Table 4 presents the Matthews Correlation Coefficient (MCC) and accuracy of various models on the in-domain and out-of-domain data, along with human performance. In the in-domain data, several models demonstrate performance surpassing that of human individuals. However, in the case of out-of-domain data, none of the models were able to exceed human performance. This suggests that the language models may not necessarily capture
the complex linguistic phenomena addressed in theoretical linguistics (Class III judgement). However, while the majority of models have lower performance on out-of-domain data compared to in-domain data, some models perform better on out-of-domain data. These models appear to be generalizing the linguistic phenomena observed in in-domain data correctly and are somewhat able to judge acceptability even for more complex linguistic phenomena.15
Footnote 15: Interestingly, the models that exhibited higher performance on out-of-domain data all utilized Sentence Piece with a unigram language model for tokenization, indicating the possibility that this choice of tokenization method may have contributed in some way to their performance.
### Performance by phenomenon
Figure 1 shows the Matthews Correlation Coefficient (MCC) values for each linguistic phenomenon in the out-of-domain test set across different models. Notably, almost all models demonstrate high accuracy in the Simple category, which suggests that they are capable of accurately capturing this linguistic phenomenon, even with sentences from sources not seen during training. However, for other phenomena, the performance is generally lower than that for Simple. In fact, the average MCC across linguistic phenomena, excluding Simple, is 0.248, which is significantly lower than the 0.599 observed for Simple. This suggests that while language models can effectively learn relatively simple linguistic phenomena (Class II judgement) as presented in textbooks and handbooks of syntactic theory, they may not necessarily be able to generalize to more complex linguistic phenomena (Class III judgement).
Furthermore, upon examining the performance of language models on different phenomena, it becomes apparent that language models perform relatively well on certain linguistic phenomena, such as binding, argument structure, and filler-gap, but struggle with others. Relatively high performance in Binding could be attributed to the fact that the proportion of positive examples for Binding is \(93.1\%\), significantly higher than the overall \(73.2\%\) for the out-of-domain data. For Argument Structure, many sentences only require capturing relatively local dependencies related to the order of arguments and/or case marking, such as in the example below.
(16) John-ga hon-o/*-ni yonda John-nom book-acc/*DAT read
'John read a book.'
Regarding filler-gap, even though it generally involves complex linguistic phenomena such as wh-movement, the presence of a relatively large number of sentences involving simpler comparison phenomena could be contributing to the higher accuracy.
(17) Mary-wa John-ga kaita yori nagai Mary-top John-nom wrote than long ronbun-o kaita. paper-acc wrote
\begin{table}
\begin{tabular}{c c c c c} \hline \hline \multirow{2}{*}{Model} & \multicolumn{2}{c}{In-domain} & \multicolumn{2}{c}{Out-of-domain} \\ & Acc. & MCC & Acc. & MCC \\ \hline Tohoku BERT base & \(0.838\pm 0.007\) & \(0.350\pm 0.027\) & \(0.753\pm 0.007\) & \(0.247\pm 0.028\) \\ Tohoku BERT base (char) & \(0.815\pm 0.007\) & \(0.236\pm 0.032\) & \(0.740\pm 0.008\) & \(0.164\pm 0.057\) \\ Tohoku BERT large & \(0.835\pm 0.004\) & \(0.346\pm 0.022\) & \(0.769\pm 0.008\) & \(0.309\pm 0.033\) \\ NICT BERT base & \(0.841\pm 0.007\) & \(0.360\pm 0.036\) & \(0.773\pm 0.006\) & \(0.329\pm 0.023\) \\ Waseda RoBERTa base & \(0.855\pm 0.008\) & \(0.404\pm 0.037\) & \(0.781\pm 0.017\) & \(0.355\pm 0.069\) \\ Waseda RoBERTa large (s128) & \(\mathbf{0.864\pm 0.007}\) & \(\mathbf{0.461\pm 0.032}\) & \(\mathbf{0.822\pm 0.012}\) & \(\mathbf{0.507\pm 0.038}\) \\ Waseda RoBERTa large (s512) & \(0.860\pm 0.009\) & \(0.419\pm 0.054\) & \(0.810\pm 0.010\) & \(0.465\pm 0.032\) \\ XLM RoBERTa base & \(0.827\pm 0.004\) & \(0.172\pm 0.055\) & \(0.745\pm 0.009\) & \(0.176\pm 0.063\) \\ XLM RoBERTa large & \(0.831\pm 0.007\) & \(0.214\pm 0.128\) & \(0.772\pm 0.008\) & \(0.320\pm 0.033\) \\ \hline Human (Individual) & 0.760 & 0.384 & 0.854 & 0.653 \\ Human (Majority vote) & 0.795 & 0.437 & 1.000 & 1.000 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Performance of each language model on the JCoLA in-domain and out-of-domain test sets. The score for each language model is calculated as the average across 20 different random seeds, but we ignore those results where the models achieved less than zero MCC score on the development set, as in Warstadt and Bowman (2020). The best performance across models is indicated in bold.
'Mary wrote a longer paper than John wrote'
On the other hand, language models show lower accuracy on linguistic phenomena such as NPI/NCI and verbal agreement. This could be because NPI/NCI and verbal agreement often require capturing relatively long-distance dependencies, as seen in the examples below.16
Footnote 16: The results for control/raising were not considered to be reliable due to the small sample size, and they were excluded from the analysis.
1. **Ito-sensei-ga** Mary-o Ito-teacher-nom Mary-acc
o-home-**ni-nat**-ta.
hon-praise-lv-past
'Prof. Ito praised Mary.'
2. **Mary-ga** Ito-sensei-o Mary-nom Ito-teacher-acc
o-home-**ni-nat**-ta.
hon-praise-lv-past
3. **Daremo** monku-o iw-**anakat**-ta.
who-mo complaint-acc say-neg-past
'Nobody complained.'
Overall, the analysis by linguistic phenomenon highlights the strengths and limitations of language models in capturing various linguistic phenomena. While they are adept at handling simpler structures, their performance wanes when confronted with more complex linguistic phenomena, especially those requiring long-distance dependencies.
## 6 Conclusion
In this paper, we introduced JCoLA (Japanese Corpus of Linguistic Acceptability), which consists of 10,020 sentences annotated with binary acceptability judgments. Specifically, those sentences were manually extracted from linguistics textbooks, handbooks and journal articles, and split into indomain data (86 %; relatively simple acceptability judgments extracted from textbooks and handbooks) and out-of-domain data (14 %; theoretically significant acceptability judgments extracted from linguistics journals), the latter of which was categorized by 12 linguistic phenomena. We then evaluated the syntactic knowledge of 9 different types of Japanese language models on JCoLA. The results demonstrated that several models could surpass human performance for the in-domain data, while no models were able to exceed human performance for the out-of-domain data. Error analyses by linguistic phenomena further revealed that although neural language models are adept at handling local syntactic dependencies like argument structure, their performance wanes when confronted with long-distance syntactic dependencies like verbal agreement and NPI licensing.
### Limitations
All the sentences included in JCoLA have been extracted from textbooks, handbooks and journal articles on theoretical syntax. Therefore, those sentences are guaranteed to be theoretically meaningful, making JCoLA a challenging dataset. However,
Figure 1: Performance of each language model on JCoLA out-of-domain test set by phenomenon. The MCC score for each language model is calculated as the average across 20 different random seeds, but we ignore those results where the models achieved less than zero MCC score on the development set, as in Warstadt and Bowman (2020). Error bars mark the mean \(\pm\)1 SD.
the distribution of linguistic phenomena directly reflects that of the source literature and thus turns out to be extremely skewed. Indeed, as can be seen in Table 3, while the number of sentences exceeds 100 for most linguistic phenomena, there are several linguistic phenomena for which there are only about 10 sentences. In addition, since it is difficult to force language models to interpret sentences given specific contexts, those sentences whose unacceptability depends on contexts were inevitably removed from JCoLA. This removal process resulted in the deletion of unacceptable sentences from some linguistic phenomena (such as ellipsis), consequently skewing the balance between acceptable and unacceptable sentences (with a higher proportion of acceptable sentences).
## Acknowledgements
This work was supported by JST PRESTO Grant Number JPMJPR21C2, Japan.
|
2309.04014 | Channel Estimation for Quantized Systems based on Conditionally Gaussian
Latent Models | This work introduces a novel class of channel estimators tailored for coarse
quantization systems. The proposed estimators are founded on conditionally
Gaussian latent generative models, specifically Gaussian mixture models (GMMs),
mixture of factor analyzers (MFAs), and variational autoencoders (VAEs). These
models effectively learn the unknown channel distribution inherent in radio
propagation scenarios, providing valuable prior information. Conditioning on
the latent variable of these generative models yields a locally Gaussian
channel distribution, thus enabling the application of the well-known Bussgang
decomposition. By exploiting the resulting conditional Bussgang decomposition,
we derive parameterized linear minimum mean square error (MMSE) estimators for
the considered generative latent variable models. In this context, we explore
leveraging model-based structural features to reduce memory and complexity
overhead associated with the proposed estimators. Furthermore, we devise
necessary training adaptations, enabling direct learning of the generative
models from quantized pilot observations without requiring ground-truth channel
samples during the training phase. Through extensive simulations, we
demonstrate the superiority of our introduced estimators over existing
state-of-the-art methods for coarsely quantized systems, as evidenced by
significant improvements in mean square error (MSE) and achievable rate
metrics. | Benedikt Fesl, Nurettin Turan, Benedikt Böck, Wolfgang Utschick | 2023-09-07T20:50:46Z | http://arxiv.org/abs/2309.04014v2 | # Channel Estimation for Quantized Systems based on Conditionally Gaussian Latent Models
###### Abstract
This work introduces a novel class of channel estimators tailored for coarse quantization systems. The proposed estimators are founded on conditionally Gaussian latent generative models, specifically Gaussian mixture models (GMMs), mixture of factor analyzers (MFAs), and variational autoencoders (VAEs). These models effectively learn the unknown channel distribution inherent in radio propagation scenarios, providing valuable prior information. Conditioning on the latent variable of these generative models yields a locally Gaussian channel distribution, thus enabling the application of the well-known Bussgang decomposition. By exploiting the resulting conditional Bussgang decomposition, we derive parameterized linear minimum mean square error (MMSE) estimators for the considered generative latent variable models. In this context, we explore leveraging model-based structural features to reduce memory and complexity overhead associated with the proposed estimators. Furthermore, we devise necessary training adaptations, enabling direct learning of the generative models from quantized pilot observations without requiring ground-truth channel samples during the training phase. Through extensive simulations, we demonstrate the superiority of our introduced estimators over existing state-of-the-art methods for coarsely quantized systems, as evidenced by significant improvements in mean square error (MSE) and achievable rate metrics.
Channel estimation, generative latent model, coarse quantization, Bussgang theorem, covariance recovery.
## I Introduction
Massive multiple-input multiple-output (MIMO) and millimeter wave (mmWave) systems enable the ever-increasing requirements of bandwidth and throughput in wireless communications. However, deploying a large number of high-precision analog-to-digital converters (ADCs) for each antenna's radio frequency (RF) chain with bandwidths sufficient for mmWave systems is unaffordable in terms of cost and power consumption [1, 2]. One of the most direct and promising ways in order to solve the power consumption bottleneck and achieve high energy efficiency is to use low-resolution ADCs at the base station (BS). In recent years, considerable research efforts have been devoted to analyzing the performance of low-resolution quantization systems [3, 4, 5, 6]. Remarkably, although the low-resolution quantization causes nonlinear distortions at the receiver, the capacity is not severely reduced, especially at low signal-to-noise ratios (SNRs) [7].
In order to realize the mentioned favorable characteristics in practical low-resolution systems of the next generation of cellular systems (6G), accurate channel estimation is a crucial task. However, the severe nonlinearity of the ADCs degrades the performance of conventional channel estimation algorithms [1, 2]; thus, it is necessary to design novel channel estimators that provide good performance together with reasonable complexity and robustness in quantized systems.
Various channel estimation algorithms for coarsely quantized systems have been proposed in recent years. In [8], least squares (LS) estimation is considered, which is computationally simple but results in rather poor estimation quality. An iterative channel estimation technique based on the expectation-maximization (EM) algorithm is proposed in [9], which exhibits limitations due to high complexity and convergence to local optima. Iterative maximum likelihood (ML) methods are investigated in [10, 11]; however, they typically require a large number of pilot signals, resulting in an unaffordable signaling overhead [2]. MMSE channel estimation approaches in the case of a Gaussian channel together with uncorrelated channel entries are studied in [12, 13]; unfortunately, the MMSE estimator has no closed-form in the general case and is intractable to compute [14]. In [15, 16, 17], joint channel estimation and decoding is investigated, where payload data is used to assist in channel estimation. Due to the iterative nature of the approaches, these methods are considered to have too high complexity for commercial massive MIMO systems [2].
The works in [18, 19, 20] take into account the sparsity of wireless channels and utilize compressive sensing (CS) approaches such as iterative hard thresholding and generalized approximate message passing (GAMP); the main disadvantages thereby are the sensitivity concerning the (estimated) sparsity level and the high complexity of the iterative procedure. A well-known linear MMSE channel estimator is based on Bussgang's theorem [21]. In [22, 23], the estimator is derived for the one-bit as well as multi-bit quantization case. It has the advantage that only a few pilot observations are necessary to achieve a good channel estimation quality. The critical prerequisite of the Bussgang estimator is a Gaussian distributed channel with known second-order statistics, which seriously limits the estimator's application.
Recently, deep learning-based approaches were investigated for channel estimation in quantized systems [24, 25, 26, 27]. Although providing good performance for low numbers of pilots, the approaches generally lack generalization ability with respect to different numbers of quantization bits, pilot signals, antennas, and SNRs. Additionally, accumulating a large representative training dataset consisting of perfect channel state information (CSI) samples is necessary, which may be costly to acquire in practical systems, especially with coarse quantization. In recent works, conditionally Gaussian latent generative models were utilized in order to learn the underlying unknown channel distribution of a radio propagation environment and leverage this prior information to design wireless communication functionalities [28], especially channel estimators for high-resolution systems [29, 30, 31, 32, 33, 34]. As mentioned before, the highly nonlinear distortion of low-resolution ADCs makes it impractical to directly utilize the mentioned channel estimation techniques in coarsely quantized systems.
_Contributions:_ This article presents novel contributions for channel estimation in coarsely quantized systems utilizing conditionally Gaussian latent generative models. Specifically, we investigate the applicability of three different models: GMM, MFA, and VAE for learning the unknown underlying channel distribution within a BS cell. Through this, we achieve a conditionally Gaussian representation of the learned channel distribution. By inferring the latent variable of the presented generative approaches based on a quantized pilot observation, we can efficiently determine the parameters of the conditional Gaussian distribution. By applying Bussgang's theorem, we derive a parameterized conditional Bussgang estimator, which serves as the linear MMSE estimator for the given latent encoding. Importantly, our work showcases the diverse strengths of the presented models concerning estimation performance, computational complexity, and memory overhead.
To facilitate practical feasibility, we propose training adaptations that allow us to learn the channel distribution solely from quantized pilot observations as training data, which eliminates the need for perfect CSI in the training stage. This feature enables us to use pilot observations collected during the regular BS operation for training. In the case of the GMM, we introduce a novel covariance recovery method that serves as an unbiased and consistent estimator of the unquantized input covariance matrix, using only quantized samples. Additionally, for the VAE, we adapt the evidence lower bound (ELBO) loss function through model-based insights, specifically accounting for the nonlinear quantization process.
Extensive simulations based on various realistic channel models demonstrate the outstanding performance of the proposed conditionally Gaussian latent generative modeling-aided estimators, especially in setups with low numbers of pilot observations, surpassing state-of-the-art methods.
_Notation:_ The standard Gaussian cumulative distribution function (CDF) is denoted by \(\Phi(x)=\int_{-\infty}^{x}\frac{1}{\sqrt{2\pi}}\exp(-\frac{t^{2}}{2})\mathrm{d}t\) and the error function is given as \(\mathrm{erf}(x)=\frac{2}{\sqrt{\pi}}\int_{0}^{x}\exp(-t^{2})\mathrm{d}t\). We denote the indicator function as \(\chi(x\in\mathcal{A})\), which returns one if \(x\in\mathcal{A}\) and zero otherwise. The \(n\)th entry of a vector is denoted by \([\mathbf{x}]_{n}\). The diagonal and off-diagonal parts of a matrix are denoted by \(\mathrm{diag}(\mathbf{A})\) and \(\mathrm{nondiag}(\mathbf{A})=\mathbf{A}-\mathrm{diag}(\mathbf{A})\), respectively.
## II Preliminaries
#### II-1 System Model
We consider the uplink transmission of \(P\) pilot signals from a single-antenna mobile terminal (MT) to an \(N\)-antenna BS which operates ADCs with \(B\) quantization bits. The quantized receive signal is therefore written as \(\mathbf{R}=Q_{B}(\mathbf{Y})=Q_{B}(\mathbf{h}\mathbf{a}^{\mathrm{T}}+\mathbf{N})\), where \(\mathbf{R}=[\mathbf{r}_{1},\dots,\mathbf{r}_{P}]\in\mathbb{C}^{N\times P}\) contains the \(P\) quantized receive signals as columns, \(\mathbf{Y}\in\mathbb{C}^{N\times P}\) describes the unquantized receive signal, \(\mathbf{h}\in\mathbb{C}^{N}\) denotes the wireless channel, \(\mathbf{a}\in\mathbb{C}^{P}\) is the pilot vector which fulfills the power constraint \(\|\mathbf{a}\|_{2}^{2}=P\), \(\mathbf{N}=[\mathbf{n}_{1},\dots,\mathbf{n}_{P}]\in\mathbb{C}^{N\times P}\) is additive white Gaussian noise (AWGN) with \(\mathbf{n}_{i}\sim\mathcal{N}_{\mathbb{C}}(\mathbf{0},\sigma^{2}\,\mathbf{I})\), and \(Q_{B}\) denotes the \(B\)-bit quantization function, which is discussed below. By column-wise vectorization, the system model can be written as
\[\mathbf{r}=Q_{B}(\mathbf{y})=Q_{B}(\mathbf{A}\mathbf{h}+\mathbf{n})\in\mathbb{C}^{NP} \tag{1}\]
with \(\mathbf{r}=\mathrm{vec}(\mathbf{R})\), \(\mathbf{y}=\mathrm{vec}(\mathbf{Y})\), \(\mathbf{n}=\mathrm{vec}(\mathbf{N})\), and \(\mathbf{A}=\mathbf{a}\otimes\mathbf{I}\). Note that an extension to a multi-user setup can, in principle, be straightforwardly achieved by stacking the channels of all users, cf., e.g., [22]; however, the analysis of multi-user systems is out of the scope of this work. By normalizing the channels as \(\mathrm{E}[\|\mathbf{h}\|_{2}^{2}]=N\), the SNR of the quantizer input is defined as \(\text{SNR}=1/\sigma^{2}\).
Typically, several pilot observations are required to achieve reasonable channel estimation performance in coarsely quantized systems. For the case of one-bit quantization where the amplitude information is eliminated after the quantization, it is shown in [14] that a pilot sequence with equidistant phase shifts in the range \([0,\frac{\pi}{2})\) is MSE-optimal with respect to the conditional mean estimator (CME) for jointly Gaussian inputs. For the general case of \(B\)-bit quantization, the optimization of the pilot sequence is intractable due to the exponentially increasing number of quantization levels. Therefore, in this work, we consider pilot sequences that have an equidistant spacing in both the amplitude and the angle, i.e.,
\[[\tilde{\mathbf{a}}]_{i}=\beta_{i}\exp\left(\mathrm{j}\frac{\pi}{2P}(i-1)\right),\ i\in\{1,\dots,P\}, \tag{2}\]
where \(\beta_{i}=\frac{1}{2}+\frac{i-1}{2(P-1)}\) is the amplitude spacing. In order to fulfill the power constraint \(\|\mathbf{a}\|_{2}^{2}=P\), the pilot vector is normalized as \(\mathbf{a}=\frac{\sqrt{P}}{\|\tilde{\mathbf{a}}\|_{2}}\tilde{\mathbf{a}}\).
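For illustration, the following minimal Python sketch (not part of the original paper; all function names are ours) constructs the pilot vector in (2) and the stacked matrix \(\mathbf{A}=\mathbf{a}\otimes\mathbf{I}\) used in (1):

```python
import numpy as np

def pilot_and_observation_matrix(P, N):
    """Pilot vector with equidistant amplitude and phase spacing, cf. (2), and A = a kron I."""
    i = np.arange(1, P + 1)
    beta = 0.5 + (i - 1) / (2 * (P - 1)) if P > 1 else np.ones(1)   # amplitude spacing
    a_tilde = beta * np.exp(1j * np.pi / (2 * P) * (i - 1))
    a = np.sqrt(P) / np.linalg.norm(a_tilde) * a_tilde              # enforce ||a||_2^2 = P
    A = np.kron(a.reshape(-1, 1), np.eye(N))                        # A in C^{NP x N}, cf. (1)
    return a, A

# example: vectorized unquantized observation y = A h + n for a random placeholder channel
N, P, sigma2 = 64, 4, 0.1
a, A = pilot_and_observation_matrix(P, N)
h = (np.random.randn(N) + 1j * np.random.randn(N)) / np.sqrt(2)
n = np.sqrt(sigma2 / 2) * (np.random.randn(N * P) + 1j * np.random.randn(N * P))
y = A @ h + n   # quantization Q_B(y) follows in Section II-2
```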
#### II-2 Quantizer Design
In this work, we consider scalar quantizers, i.e., the quantization is performed elementwise on the input, where the real and imaginary parts are quantized independently. The quantizer can be described by means of the \(2^{B}\) quantization labels \(\ell_{i}\), \(i\in\{1,\dots,2^{B}\}\), and the quantization thresholds \(\tau_{i}\), \(i\in\{0,\dots,2^{B}\}\), where \(\tau_{0}=-\infty\) and \(\tau_{2^{B}}=\infty\) by definition. The quantization function of the real/imaginary part of the signal can be denoted as
\[Q_{B}(x)=\sum_{i=1}^{2^{B}}\ell_{i}\chi(\tau_{i-1}\leq x<\tau_{i}). \tag{3}\]
For the case of one-bit quantization \(B=1\), the quantization function of the complex-valued signal \(\mathbf{y}\) can be expressed as
\[Q_{1}(\mathbf{y})=\frac{1}{\sqrt{2}}\left(\mathrm{sign}(\Re(\mathbf{y}))+\mathrm{j} \,\mathrm{sign}(\Im(\mathbf{y}))\right). \tag{4}\]
Practicable ADCs usually have uniformly spaced quantization thresholds with a constant step size \(\Delta\), which depends on the input distribution, and the quantization labels are placed in the middle of two quantization thresholds. For the case
of zero-mean Gaussian input with variance one, the optimal values for \(\Delta\) are computed numerically in [35].
Under the assumption that the elementwise quantizer input is zero-mean Gaussian distributed with variance \(1+\sigma^{2}\) following the considered SNR definition, we choose the SNR-dependent step size as suggested in [36] as
\[\Delta=\sqrt{\tfrac{1}{2}(1+\sigma^{2})}\Delta_{\star} \tag{5}\]
where \(\Delta_{\star}\) is the step size for the standard Gaussian input [35]. Although the quantizer input is generally not Gaussian distributed, this choice gives a reasonable performance with regard to practical feasibility. The necessary scaling in (5) is resolved by automatic gain control in practice. We note that there exist more sophisticated quantizer designs, e.g., non-uniform scalar quantization [35, 37]; however, the uniform quantizer is considered the most practicable choice in wireless communications [5]. Furthermore, the channel estimation techniques proposed in this work are not limited to uniform quantization but can be straightforwardly extended to non-uniform quantization.
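The uniform quantizer in (3)-(5) can be sketched as follows; this is an illustrative implementation rather than the exact design of the paper, and the per-bit step sizes are commonly cited approximate values for a unit-variance Gaussian input instead of the exact table from [35]:

```python
import numpy as np

# commonly cited (approximate) optimal uniform step sizes for a unit-variance Gaussian input
DELTA_STAR = {1: 1.596, 2: 0.996, 3: 0.586, 4: 0.335}

def quantize(y, B, sigma2):
    """Elementwise uniform B-bit quantization of real and imaginary parts, cf. (3)-(5)."""
    if B == 1:                                               # one-bit case, cf. (4)
        return (np.sign(y.real) + 1j * np.sign(y.imag)) / np.sqrt(2)
    delta = np.sqrt(0.5 * (1 + sigma2)) * DELTA_STAR[B]      # SNR-dependent step size (5)

    def quantize_real(x):
        # mid-rise quantizer: thresholds at integer multiples of delta,
        # labels centered between two adjacent thresholds
        idx = np.clip(np.floor(x / delta), -2 ** (B - 1), 2 ** (B - 1) - 1)
        return (idx + 0.5) * delta

    return quantize_real(y.real) + 1j * quantize_real(y.imag)
```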
#### II-3 Channel Models
We work with the 3rd Generation Partnership Project (3GPP) spatial channel model [38, 39] where channels are modeled conditionally Gaussian: \(\mathbf{h}|\mathbf{\delta}\sim\mathcal{N}_{\mathbb{C}}(\mathbf{0},\mathbf{C}_{\mathbf{h}|\mathbf{\delta}})\). The random vector \(\mathbf{\delta}\) collects the angles of arrival/departure and path gains of the main propagation clusters between an MT and the BS. The main angles are drawn independently and uniformly from the interval \([0,2\pi]\); the path gains are also drawn uniformly and are subsequently normalized such that they sum up to one. The BS employs a uniform linear array (ULA) such that the spatial channel covariance matrix is given by
\[\mathbf{C}_{\mathbf{h}|\mathbf{\delta}}=\int_{-\pi}^{\pi}\omega(\phi;\mathbf{\delta})\mathbf{t}( \phi)\mathbf{t}(\phi)^{\mathrm{H}}\mathrm{d}\phi. \tag{6}\]
Here, \(\mathbf{t}(\phi)=[1,\mathrm{e}^{\mathrm{j}\pi\sin(\phi)},\ldots,\mathrm{e}^{ \mathrm{j}\pi(N-1)\sin(\phi)}]^{\mathrm{T}}\) is the array steering vector for an angle of arrival \(\phi\), and \(\omega\) is a power density consisting of a sum of weighted Laplace densities whose standard deviations describe the angle spread of the propagation clusters [38]. For every channel sample, we generate random angles and path gains, combined in \(\mathbf{\delta}\), and then draw the sample as \(\mathbf{h}\sim\mathcal{N}_{\mathbb{C}}(\mathbf{0},\mathbf{C}_{\mathbf{h}|\mathbf{\delta}})\), which results in an overall non-Gaussian channel distribution [29]. Note that the conditional Gaussianity of the channel model is not connected to the conditional Gaussianity of the proposed latent models since the inference of (6) from a single snapshot is intractable.
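As a rough numerical illustration (an assumption-laden sketch with an arbitrarily chosen angle spread, not the exact 3GPP implementation), the covariance in (6) can be approximated on an angular grid and used to draw conditionally Gaussian channel samples:

```python
import numpy as np

def spatial_covariance(N, angles, gains, angle_spread=np.deg2rad(2.0), n_grid=4000):
    """Riemann-sum approximation of (6) for a ULA with Laplacian cluster densities."""
    phi = np.linspace(-np.pi, np.pi, n_grid)
    omega = np.zeros_like(phi)
    for mu, p in zip(angles, gains):
        omega += p / (2 * angle_spread) * np.exp(-np.abs(phi - mu) / angle_spread)
    omega /= omega.sum() * (phi[1] - phi[0])                       # normalize total power to one
    T = np.exp(1j * np.pi * np.outer(np.arange(N), np.sin(phi)))   # steering vectors t(phi)
    return (T * omega) @ T.conj().T * (phi[1] - phi[0])

# draw one conditionally Gaussian channel sample h | delta ~ N_C(0, C_{h|delta})
N, n_clusters = 64, 3
angles = np.random.uniform(-np.pi, np.pi, n_clusters)   # main angles (restricted to the grid here)
gains = np.random.uniform(size=n_clusters)
gains /= gains.sum()
C = spatial_covariance(N, angles, gains)
L = np.linalg.cholesky(C + 1e-10 * np.eye(N))
h = L @ ((np.random.randn(N) + 1j * np.random.randn(N)) / np.sqrt(2))
```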
To ensure a broader evaluation of different channel models, version 2.4 of the QuaDRiGa channel simulator [40, 41] is used to generate channel samples. We simulate an urban macrocell scenario at a center frequency of 6 GHz. The BS' height is 25 meters, and it covers a \(120^{\circ}\) sector. The distances between the MTs and the BS are in the range of 35-500 meters. We either consider a pure line-of-sight (LOS) scenario or a mixed LOS/non-line-of-sight (NLOS) scenario, where in 80% of the cases, the MTs are located indoors at different floor levels, whereas the MTs' height is 1.5 meters in the case of outdoor locations. The BS is equipped with a ULA with \(N\) "3GPP-3D" antennas, and the MTs employ an omnidirectional antenna. The generated channels are post-processed to remove the effective path gain [41, Sec. 2.7].
#### II-4 Training Datasets
In this work, we consider the channel distribution to be unknown and arbitrarily complex by means of the sophisticated channel models, cf. Section II-3. However, we assume the availability of a representative dataset which is comprised of samples stemming from the respective channel model. In practice, this means that data samples from the respective BS cell are available. In this work, we discuss two different setups. First, we assume the availability of a training dataset consisting of \(T\) ground-truth channel samples \(\mathcal{H}=\{\mathbf{h}_{t}\}_{t=1}^{T}\). This can be achieved in practice via measurement campaigns or digital twins, e.g., ray tracing. Afterward, we consider a training dataset consisting solely of noisy and quantized pilot observations \(\mathcal{R}=\{\mathbf{r}_{t}\}_{t=1}^{T}\). Therefore, pilot observations from the regular BS operation can be utilized, and no ground-truth channel samples are needed.
## III Bussgang Estimator
In this section, we briefly revise the linear MMSE estimator based on the Bussgang decomposition, which is a direct consequence of Bussgang's theorem [21]. The Bussgang theorem, and hence also the decomposition, is valid only if the input to the quantizer is zero-mean Gaussian distributed. However, even if this is generally not true in the considered setting, the linear MMSE estimator based thereon is a reasonable baseline for channel estimation in quantized systems.
In particular, under the assumption of a jointly Gaussian quantizer input, the Bussgang decomposition implies that the system in (1) can be written as a linear combination of the desired signal part and an uncorrelated distortion \(\mathbf{q}\) as
\[\mathbf{r}=Q_{B}(\mathbf{y})=\mathbf{B}\mathbf{y}+\mathbf{\eta}=\mathbf{B}\mathbf{A}\mathbf{h}+\mathbf{q}, \tag{7}\]
where \(\mathbf{B}\) is the Bussgang gain that can be obtained from the linear MMSE estimation of \(\mathbf{r}\) from \(\mathbf{y}\) as \(\mathbf{B}=\mathbf{C}_{\mathbf{r}\mathbf{y}}\mathbf{C}_{\mathbf{y}}^{-1}\), cf. [42, Sec. 9.2], and where the distortion term \(\mathbf{q}=\mathbf{B}\mathbf{n}+\mathbf{\eta}\) contains both the AWGN \(\mathbf{n}\) and the quantization noise \(\mathbf{\eta}\). The Bussgang gain matrix for a uniform quantizer is derived in [43] and is computed as
\[\mathbf{B}=\frac{\Delta}{\sqrt{\pi}}\mathbf{D}_{\mathbf{y}}^{-\frac{1}{2}}\sum_{i=1}^{2^{B}-1}\exp\left(-\Delta^{2}\left(i-2^{B-1}\right)^{2}\mathbf{D}_{\mathbf{y}}^{-1}\right) \tag{8}\]
where \(\mathbf{D}_{\mathbf{y}}=\mathrm{diag}(\mathbf{C}_{\mathbf{y}})\). In the case of one-bit quantization \(B=1\), by choosing \(\Delta=\sqrt{2}\), we get the well-known solution
\[\mathbf{B}=\sqrt{\frac{2}{\pi}}\,\mathrm{diag}(\mathbf{C}_{\mathbf{y}})^{-\frac{1}{2}}. \tag{9}\]
As the statistically equivalent model (7) is linear, one can formulate the linear MMSE estimator
\[\hat{\mathbf{h}}_{\text{Buss}}=\mathbf{C}_{\mathbf{h}\mathbf{r}}\mathbf{C}_{\mathbf{r}}^{-1}\mathbf{r}. \tag{10}\]
The cross-correlation matrix between the channel and the received signal is calculated as \(\mathbf{C}_{\mathbf{h}\mathbf{r}}=\mathrm{E}[\mathbf{h}(\mathbf{B}\mathbf{A}\mathbf{h}+\mathbf{q})^{\mathrm{H} }]=\mathbf{C}_{\mathbf{h}}\mathbf{A}^{\mathrm{H}}\mathbf{B}^{\mathrm{H}}\) which follows from the fact that the noise term \(\mathbf{q}\) is uncorrelated with the channel \(\mathbf{h}\), see [22, Appendix A]. For the one-bit quantization case, the auto-correlation matrix is equal to the covariance matrix \(\mathbf{C}_{\mathbf{r}}\) due to the elimination
of the amplitude information and can be calculated in closed-form via the so-called arcsine law [44] as
\[\begin{split}\mathbf{C}_{\mathbf{r}}&=\frac{2}{\pi}\left( \arcsin\left(\mathbf{D}_{\mathbf{y}}^{-\frac{1}{2}}\Re(\mathbf{C}_{\mathbf{y}})\mathbf{D}_{\mathbf{y}}^ {-\frac{1}{2}}\right)\right.\\ &\left.+\mathrm{j}\arcsin\left(\mathbf{D}_{\mathbf{y}}^{-\frac{1}{2}} \Im(\mathbf{C}_{\mathbf{y}})\mathbf{D}_{\mathbf{y}}^{-\frac{1}{2}}\right)\right).\end{split} \tag{11}\]
Unfortunately, for the multi-bit quantization case, no closed-form expression for \(\mathbf{C}_{\mathbf{r}}\) exists. Besides that, the computation of the variances after the quantization is no longer trivial. As shown in [5, eq. (2.14)] (adapted for the complex-valued case), for Gaussian input and the uniform quantizer, the variances are computed as
\[[\mathbf{C}_{\mathbf{r}}]_{n,n}=\sum_{i=1}^{2^{B}}2\ell_{i}^{2}\left(\Phi\left(\sqrt{2}\tau_{i}c_{n}\right)-\Phi\left(\sqrt{2}\tau_{i-1}c_{n}\right)\right), \tag{12}\]
where \(c_{n}=[\mathbf{C}_{\mathbf{y}}]_{n,n}^{-\frac{1}{2}}\). Although there exist various practically feasible approximations for the evaluation of the involved Gaussian CDF, cf. [45, 46], the evaluation of (12) may still be problematic in time-critical systems; thus, a reasonable approximation is used in this work. By assuming that the signal's variance does not change for different antennas, the Bussgang gain becomes a scaled identity of the form \(\mathbf{B}=\rho\,\mathbf{I}\), cf. (8). By further neglecting cross-correlations of the quantization distortion, the quantized covariance matrix \(\mathbf{C}_{\mathbf{r}}\) is well approximated, especially in the low SNR regime, by, cf. [3],
\[\mathbf{C}_{\mathbf{r}}\approx\rho^{2}\mathbf{C}_{\mathbf{y}}+(1-\rho^{2})\operatorname{diag}( \mathbf{C}_{\mathbf{y}}). \tag{13}\]
Importantly, the resulting covariance matrix \(\mathbf{C}_{\mathbf{r}}\) remains positive semi-definite (PSD) if \(0\leq\rho^{2}\leq 1\). We note that the expression for \(\mathbf{C}_{\mathbf{r}}\) with respect to \(\mathbf{C}_{\mathbf{y}}\) generally depends on the quantizer choice and the input distribution, and useful approximations can be found differently, cf., e.g., [4]. However, the design of the channel estimation algorithms in this work is not founded upon the choice in (13), and different approximations can be utilized.
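For illustration, a compact Python sketch of the estimator (10), using the gain (8)/(9) and, for simplicity, the approximation (13) for all bit widths (helper names are ours, not from the paper):

```python
import numpy as np

def bussgang_gain(Cy, B, delta):
    """Diagonal Bussgang gain matrix, cf. (8) and (9)."""
    d = np.real(np.diag(Cy))
    if B == 1:
        return np.diag(np.sqrt(2 / np.pi) / np.sqrt(d))
    g = np.zeros_like(d)
    for i in range(1, 2 ** B):
        g += np.exp(-delta ** 2 * (i - 2 ** (B - 1)) ** 2 / d)
    return np.diag(delta / np.sqrt(np.pi) * g / np.sqrt(d))

def bussgang_lmmse(r, Ch, A, sigma2, B, delta):
    """Linear MMSE channel estimate (10) based on the Bussgang decomposition (7)."""
    Cy = A @ Ch @ A.conj().T + sigma2 * np.eye(A.shape[0])
    Bg = bussgang_gain(Cy, B, delta)
    rho2 = min(float(np.mean(np.diag(Bg).real)) ** 2, 1.0)
    Cr = rho2 * Cy + (1 - rho2) * np.diag(np.diag(Cy))    # approximation (13)
    Chr = Ch @ A.conj().T @ Bg.conj().T                   # cross-covariance C_{hr}
    return Chr @ np.linalg.solve(Cr, r)
```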
## IV Parameterized Bussgang Channel Estimators
In this section, we introduce several novel channel estimators which can be applied in coarse quantization systems for variable numbers of quantization bits. Therefore, we utilize generative models to approximate the underlying unknown channel distribution by means of a given dataset. The common characteristic of the generative models that are used in this context is their modeling of the data distribution as conditionally Gaussian through a latent variable model. These models include the GMM, the MFA, and the VAE. Recently, all three models have been successfully deployed for channel estimation in high-resolution scenarios, i.e., under the assumption of perfect ADCs [29, 30, 31, 32, 33, 34]. In this work, we exploit the conditional Gaussianity of the mentioned generative models to parameterize a Bussgang estimator. This is motivated by the insight that, conditioned on the latent variable, the underlying channel is Gaussian, which establishes the prerequisite for the application of Bussgang's theorem. In this section, it is assumed that a training dataset \(\mathcal{H}\) of ground-truth channel samples is available, cf. Section II-4.
### _GMM-based Bussgang Estimator_
We start by deriving the GMM-based estimator, which parameterizes a _componentwise_ Bussgang estimator. Generally, a GMM is a probability density function (PDF) of the form
\[f_{\mathbf{h}}^{(K)}(\mathbf{h})=\sum_{k=1}^{K}\pi_{k}\mathcal{N}_{\mathbb{C}}(\mathbf{h}; \mathbf{\mu}_{\mathbf{h}|k},\mathbf{C}_{\mathbf{h}|k}) \tag{14}\]
where \(K\) is the number of mixture components, and \(\{\pi_{k},\mathbf{\mu}_{\mathbf{h}|k},\mathbf{C}_{\mathbf{h}|k}\}_{k=1}^{K}\) is the set of parameters of the GMM, namely the mixing coefficients, the means, and the covariances of the Gaussian components. The parameters of the GMM are fitted via the EM algorithm for a given training dataset \(\mathcal{H}\) of channel samples, cf. [47, Ch. 9]. An essential property of GMMs is that for a given data sample, the _responsibility_ of each component can be computed as, cf. [47, Ch. 9], \(p(k|\mathbf{h})\propto\pi_{k}\mathcal{N}_{\mathbb{C}}(\mathbf{h};\mathbf{\mu}_{\mathbf{h}|k}, \mathbf{C}_{\mathbf{h}|k})\).
To ensure the validity of the Bussgang theorem for each component, we enforce the component means to be zero, i.e., \(\mathbf{\mu}_{\mathbf{h}|k}=\mathbf{0}\), \(\forall k\in\{1,\dots,K\}\). To reflect this constraint in the fitting process, the component means are set to zero in every M-step of the EM algorithm. Naturally, the zero-mean constraint diminishes the capability of the GMM to a certain extent concerning its ability to approximate the true underlying distribution. Nevertheless, since a feasible wireless channel distribution is considered to be zero-mean with a decreasing probability density towards higher amplitudes, cf., e.g., [38], the loss of accuracy of the model can be considered to be small. Moreover, restricting the component means prevents overfitting and allows the modeling of high-dimensional data [48].
After the GMM is trained, it is used for channel estimation similar to [29] but with multiple adaptations in order to take the quantization effect into account. We first note that if the channel distribution is modeled as a zero-mean GMM, also the distribution of the unquantized receive signal \(\mathbf{y}\) follows a zero-mean GMM with covariances \(\mathbf{C}_{\mathbf{y}|k}=\mathbf{A}\mathbf{C}_{\mathbf{h}|k}\mathbf{A}^{\mathrm{H}}+\sigma^{2}\,\mathbf{I}\), \(\forall k\in\{1,\dots,K\}\), cf. (1). Consequently, for each component, we can apply the Bussgang decomposition to find a statistically equivalent model:
\[\mathbf{r}|k=(\mathbf{B}_{k}\mathbf{A}\mathbf{h}+\mathbf{q}_{k})|k, \tag{15}\]
where \(\mathbf{B}_{k}\) is the Bussgang gain of component \(k\), and \(\mathbf{q}_{k}=\mathbf{B}_{k}\mathbf{n}+\mathbf{\eta}\), cf. (7). The componentwise Bussgang gain is computed via (8) or (9) by plugging in the \(k\)th covariance \(\mathbf{C}_{\mathbf{y}|k}\) for the multi- or one-bit case, respectively. The evaluation of the discrete distribution \(p(\mathbf{r}|k)\) is intractable in general since the cardinality of its discrete support increases exponentially in the number of dimensions [14]. Thus, in order to evaluate the responsibility for a given pilot signal, we assume that the quantized receive signal follows a zero-mean GMM distribution with the same second-order moments; this assumption effectively resembles approximate inference [47, Ch. 10]. The covariance matrix of component \(k\), named \(\mathbf{C}_{\mathbf{r}|k}\), is thereby computed via (11) or (13) for the one- or multi-bit quantization case, respectively, by plugging in the component's unquantized covariance \(\mathbf{C}_{\mathbf{y}|k}\). Thereby, in the case of a covariance matrix \(\mathbf{C}_{\mathbf{h}|k}\) with a non-constant diagonal, the scaling parameter \(\rho_{k}\) in (13) for the \(k\)th component
is approximated via \(\rho_{k}=\min(\frac{1}{N}\sum_{i=1}^{N}[\mathbf{B}_{k}]_{i,i},1)\), which ensures that the resulting matrix is PSD. This results in the following responsibility evaluation of the quantized receive signal:
\[p(k|\mathbf{r})\approx\frac{\pi_{k}\mathcal{N}_{\mathbb{C}}(\mathbf{r};\mathbf{0},\mathbf{C}_{ \mathbf{r}|k})}{\sum_{i=1}^{K}\pi_{i}\mathcal{N}_{\mathbb{C}}(\mathbf{r};\mathbf{0},\mathbf{C}_ {\mathbf{r}|i})}. \tag{16}\]
The final channel estimate is computed via the convex combination of the componentwise Bussgang estimators, similar to the high-resolution case [29], parameterized by the GMM covariances, which yields
\[\hat{\mathbf{h}}_{\text{BGMM}}^{(K)}(\mathbf{r})=\sum_{k=1}^{K}p(k|\mathbf{r})\mathbf{C}_{\mathbf{h }|k}\mathbf{A}^{\text{H}}\mathbf{B}_{k}^{\text{H}}\mathbf{C}_{\mathbf{r}|k}^{-1}\mathbf{r}. \tag{17}\]
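A sketch of the componentwise estimation (assuming a fitted zero-mean GMM and reusing the hypothetical `bussgang_gain` helper from the Section III sketch):

```python
import numpy as np

def gmm_bussgang_estimate(r, weights, Ch_list, A, sigma2, B, delta, bussgang_gain):
    """Componentwise Bussgang estimator (17) with approximate responsibilities (16)."""
    M = A.shape[0]
    log_resp, filters = [], []
    for pi_k, Ch_k in zip(weights, Ch_list):
        Cy_k = A @ Ch_k @ A.conj().T + sigma2 * np.eye(M)
        Bg_k = bussgang_gain(Cy_k, B, delta)
        rho2 = min(float(np.mean(np.diag(Bg_k).real)) ** 2, 1.0)
        Cr_k = rho2 * Cy_k + (1 - rho2) * np.diag(np.diag(Cy_k))        # cf. (13)
        # complex Gaussian log-density N_C(r; 0, Cr_k) up to an additive constant
        _, logdet = np.linalg.slogdet(Cr_k)
        quad = np.real(r.conj() @ np.linalg.solve(Cr_k, r))
        log_resp.append(np.log(pi_k) - logdet - quad)
        filters.append(Ch_k @ A.conj().T @ Bg_k.conj().T @ np.linalg.inv(Cr_k))
    log_resp = np.array(log_resp)
    resp = np.exp(log_resp - log_resp.max())
    resp /= resp.sum()                                                   # responsibilities (16)
    return sum(p * (W @ r) for p, W in zip(resp, filters))               # combination (17)
```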
As shown in [31], it is possible to enforce different structural constraints for the GMM's covariances, such as a circulant ("GMM circ") or Toeplitz ("GMM toep") structure. The covariance matrix of the \(k\)th GMM component is thereby constrained to be of the form \(\mathbf{C}_{\mathbf{h}|k}=\mathbf{Q}^{\text{H}}\operatorname{diag}(\mathbf{c}_{\mathbf{h}|k})\mathbf{Q}\) where \(\mathbf{Q}\) is an (oversampled) discrete Fourier transform (DFT) matrix and \([\mathbf{c}_{\mathbf{h}|k}]_{i}\in\mathbb{R}_{+}\). These structural constraints reflect the typical array geometries of a BS, e.g., a ULA, and result in a reduced number of parameters and a lower online complexity of the estimator due to the usage of fast Fourier transforms (FFTs) [31]. The structural constraints are, without limitation, also applicable in coarsely quantized systems. The necessary memory overhead and computational complexity of the resulting estimators are discussed in more detail in Section VII.
### _MFA-based Bussgang Estimator_
A related concept to the GMM is the MFA model, which, in addition to a discrete latent variable \(k\) which describes the mixture component, also contains a continuous latent variable \(\mathbf{z}\in\mathbb{C}^{L}\) of lower dimension, i.e., \(L<N\) holds [49, Ch. 12], [32]. This effectively models the data on a piecewise linear subspace. After integrating out the continuous latent variable \(\mathbf{z}\sim\mathcal{N}(\mathbf{0},\mathbf{I})\), the PDF of the MFA model is a special form of a GMM with low-rank plus diagonal-constrained covariances of the form
\[f^{(K,L)}(\mathbf{h})=\sum_{k=1}^{K}\pi_{k}\mathcal{N}_{\mathbb{C}}(\mathbf{h};\mathbf{ \mu}_{\mathbf{h}|k},\mathbf{W}_{\mathbf{h}|k}\mathbf{W}_{\mathbf{h}|k}^{\text{H}}+\mathbf{\Psi}_{\mathbf{ h}|k}), \tag{18}\]
where \(\mathbf{W}_{\mathbf{h}|k}\in\mathbb{C}^{N\times L}\) is the _factor loading_ matrix and \(\mathbf{\Psi}_{\mathbf{h}|k}\in\mathbb{C}^{N\times N}\) is a diagonal matrix. In order to fit the parameters \(\{\pi_{k},\mathbf{\mu}_{\mathbf{h}|k},\mathbf{W}_{\mathbf{h}|k},\mathbf{\Psi}_{\mathbf{h}|k}\}_{k=1}^ {K}\) of the MFA model for a given dataset \(\mathcal{H}\) of channel realizations, an EM algorithm can be used [49, Ch. 12]. After training, by defining \(\mathbf{C}_{\mathbf{h}|k}=\mathbf{W}_{\mathbf{h}|k}\mathbf{W}_{\mathbf{h}|k}^{\text{H}}+\mathbf{\Psi}_{ \mathbf{h}|k}\), the model can be effectively treated as a GMM. A zero-mean MFA model with \(\mathbf{\mu}_{\mathbf{h}|k}=\mathbf{0}\)\(\forall k\in\{1,\dots,K\}\) can be similarly enforced as in the GMM case. Similar to [32], we set \(\mathbf{\Psi}_{\mathbf{h}|k}=\psi_{\mathbf{h}|k}\,\mathbf{I}\), \(\forall k\in\{1,\dots,K\}\).
The main advantage of the MFA model in the context of channel estimation, in contrast to the GMM, lies in the reduced number of parameters due to the low-dimensional latent space, which mitigates overfitting effects during training. Thus, it is a more robust model for lower numbers of training data, as demonstrated in [32]. Since the MFA also parameterizes a conditionally Gaussian distribution, the componentwise Bussgang estimator is equivalently applicable as for the GMM case, see Section IV-A, which yields the MFA-parameterized Bussgang channel estimator \(\hat{\mathbf{h}}_{\text{BMFA}}\) by substituting the respective covariances in (17).
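Since only the covariance parameterization changes, a minimal sketch (with illustrative names) suffices; the resulting covariances can be passed directly to the GMM-based estimator sketched above:

```python
import numpy as np

def mfa_covariances(W_list, psi_list):
    """Low-rank plus scaled-identity covariances W_k W_k^H + psi_k I, cf. (18)."""
    return [W @ W.conj().T + psi * np.eye(W.shape[0]) for W, psi in zip(W_list, psi_list)]
```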
### _VAE-based Bussgang Estimator_
The VAE was introduced in [50] and has attracted a lot of interest in the area of generative modeling due to its strong performance, which builds on the basis of neural networks (NNs) that are used for the encoder and decoder of the VAE, cf. Fig. 1. In [33, 34], the VAE was successfully utilized for channel estimation in high-resolution systems. In contrast to the GMM and MFA, the VAE comprises a continuous and _nonlinear_ latent space, encoded by the low-dimensional latent vector \(\mathbf{z}\in\mathbb{R}^{L}\). The most common design choice is a Gaussian model for the latent vector, i.e., \(\mathbf{z}\sim\mathcal{N}(\mathbf{0},\mathbf{I})\). Since the variational inference task is no longer tractable by the classical EM algorithm, NNs in combination with the reparameterization trick are used to train the VAE [50, 51]. In this work, we choose the parameterized encoder and decoder distributions as
\[q_{\phi}(\mathbf{z}|\mathbf{r}) =\mathcal{N}(\mathbf{z};\mathbf{\mu}_{\phi}(\mathbf{r}),\operatorname{diag} (\mathbf{\sigma}_{\phi}^{2}(\mathbf{r}))), \tag{19}\] \[p_{\theta}(\mathbf{h}|\mathbf{z}) =\mathcal{N}_{\mathbb{C}}(\mathbf{h};\mathbf{0},\mathbf{F}^{\text{H}} \operatorname{diag}(\mathbf{c}_{\theta}(\mathbf{z}))\,\mathbf{F}),\ \mathbf{c}_{\theta}\in\mathbb{R}_{+}^{N}, \tag{20}\]
respectively. The matrix \(\mathbf{F}\) is a DFT matrix such that the parameterized channel covariance matrix of the VAE is a circulant matrix, cf. [34]. This choice results in a reduced number of parameters and is justified by the array geometry of the BS, similar to the GMM case, cf. Section IV-A.
For every training data point \(\mathbf{h}_{t}\in\mathcal{H}\), the VAE computes the ELBO on the log-likelihood [50]
\[\log p_{\theta}(\mathbf{h}_{t})\geq\mathrm{E}_{\phi}[\log p_{\theta}(\mathbf{h}_{t}| \mathbf{z})]-\mathrm{D}_{\text{KL}}(q_{\phi}(\mathbf{z}|\mathbf{h}_{t})||p(\mathbf{z})).\]
By plugging in (19) and (20) and ignoring the constant terms (they do not influence the optimization), the ELBO is utilized as the loss function \(L_{\phi,\theta}\) of the VAE, which reads as, cf. [34],
\[L_{\phi,\theta}(\mathbf{h}_{t})=\sum_{n=1}^{N}-\log[\mathbf{c}_{\theta}]_{n}-\mathbf{h}_{t}^{\mathrm{H}}\,\mathbf{F}^{\mathrm{H}}\operatorname{diag}(\mathbf{c}_{\theta})^{-1}\,\mathbf{F}\,\mathbf{h}_{t}+\sum_{\ell=1}^{L}\log[\mathbf{\sigma}_{\phi}]_{\ell}-\frac{1}{2}[\mathbf{\mu}_{\phi}]_{\ell}^{2}-\frac{1}{2}[\mathbf{\sigma}_{\phi}]_{\ell}^{2}. \tag{21}\]
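For illustration, the (negated) loss (21) for a single training sample could be implemented as follows; the tensor names are assumptions, and the encoder/decoder forward passes are omitted:

```python
import torch

def vae_negative_elbo(h, c_theta, mu_phi, sigma_phi, F):
    """Negative of the ELBO-based loss (21) for one channel sample (constants dropped).
    h: complex channel (N,), c_theta: positive decoder output (N,),
    mu_phi / sigma_phi: real encoder outputs (L,), F: unitary DFT matrix (N, N)."""
    Fh = F @ h
    reconstruction = torch.sum(torch.log(c_theta)) + torch.sum(Fh.abs() ** 2 / c_theta)
    kl = torch.sum(-torch.log(sigma_phi) + 0.5 * mu_phi ** 2 + 0.5 * sigma_phi ** 2)
    return reconstruction + kl   # minimize this quantity during training
```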
Fig. 1: Proposed adapted VAE architecture with the encoder, latent space, and decoder together with the parameterized distributions and the reparameterization trick.

As demonstrated in [33, 34], the VAE can be utilized to parameterize a channel estimator by means of the conditional Gaussianity at the output of the decoder of the VAE in combination with linear MMSE filters. After the training, we utilize the VAE to parameterize a channel covariance matrix for each quantized receive pilot \(\mathbf{r}\) by forwarding the pilot through the encoder and then using the latent mean \(\mathbf{\mu}_{\phi}(\mathbf{r})\) as input to the decoder, which yields the channel covariance matrix \(\mathbf{C}_{\theta}=\mathbf{F}^{\mathrm{H}}\operatorname{diag}(\mathbf{c}_{\theta})\,\mathbf{F}\). The VAE-parameterized Bussgang estimator then reads as
\[\hat{\mathbf{h}}_{\text{BVAE}}(\mathbf{r})=\mathbf{F}^{\mathrm{H}}\operatorname{diag}(\mathbf{c}_{\theta})\,\mathbf{F}\,\mathbf{A}^{\mathrm{H}}\mathbf{B}_{\mathbf{z}}^{\mathrm{H}}\mathbf{C}_{\mathbf{r}|\mathbf{z}}^{-1}\mathbf{r}, \tag{22}\]
where \(\mathbf{B}_{\mathbf{z}}\) is computed by plugging \(\mathbf{C}_{\mathbf{y}|\mathbf{z}}=\mathbf{A}\mathbf{C}_{\theta}\mathbf{A}^{\mathrm{H}}+\sigma^{2} \,\mathbf{I}\) into (8) or (9). Similarly, the quantized receive covariance \(\mathbf{C}_{\mathbf{r}|\mathbf{z}}\) is computed by plugging \(\mathbf{C}_{\mathbf{y}|\mathbf{z}}\) into (11) or (13).
For the encoder and decoder, we use a four-layer feed-forward NN with rectified linear unit (ReLU) activation functions, respectively, for which we stack the real and imaginary parts of the pilot observation at the input to the encoder. In the case of multiple pilot observations, we add a convolutional NN (CNN) with \(P/2\) layers and ReLU activation functions before the encoder, which performs \(1\times 1\) convolutions in order to always have a \(2N\)-dimensional input at the encoder. This modification drastically reduces the number of parameters and simplifies a forward pass through the VAE. The complete VAE architecture is detailed in Fig. 1.
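A sketch of the resulting estimation step (22); the trained `encoder`/`decoder` modules and the `bussgang_gain` helper are assumptions carried over from the earlier sketches:

```python
import numpy as np
import torch

def vae_bussgang_estimate(r, encoder, decoder, A, sigma2, B, delta, bussgang_gain):
    """VAE-parameterized Bussgang estimate, cf. (22)."""
    N = A.shape[1]
    F = np.fft.fft(np.eye(N), norm="ortho")                       # unitary DFT matrix
    r_in = torch.tensor(np.concatenate([r.real, r.imag]), dtype=torch.float32)
    with torch.no_grad():
        mu_phi, _ = encoder(r_in)                                 # latent mean, cf. (19)
        c_theta = decoder(mu_phi).numpy()                         # positive vector, cf. (20)
    C_theta = F.conj().T @ np.diag(c_theta) @ F                   # circulant channel covariance
    Cy = A @ C_theta @ A.conj().T + sigma2 * np.eye(A.shape[0])
    Bg = bussgang_gain(Cy, B, delta)
    rho2 = min(float(np.mean(np.diag(Bg).real)) ** 2, 1.0)
    Cr = rho2 * Cy + (1 - rho2) * np.diag(np.diag(Cy))            # cf. (13)
    return C_theta @ A.conj().T @ Bg.conj().T @ np.linalg.solve(Cr, r)
```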
## V Learning from Quantized Data
As discussed above, the availability of a large training dataset \(\mathcal{H}\) consisting of representative ground-truth channel samples for a whole BS cell (radio propagation scenario) is questionable in practical communication systems. Although different approaches exist, such as ray tracing, to generate a training dataset that mimics the underlying channel distribution, it is unclear whether they can sufficiently capture the characteristics of a real communication scenario. A different idea is to train the respective models directly on pilot observations and mitigate imperfections, e.g., additive noise or sparsely allocated pilot positions, through model-based training adaptations. This has already been shown to work well for high-resolution scenarios [34, 52]. However, learning a generative model from quantized data poses a significant challenge due to the pronounced nonlinear distortion resulting from low-resolution quantization.
### _Covariance Recovery for GMM Approximation_
Recovering the unquantized covariance matrix of the input to a one-bit quantizer solely from quantized data has gained a lot of interest very recently [53, 54, 55]. Since the amplitude information is lost in the case of a zero-threshold one-bit quantization, only the normalized correlation matrix can be obtained via the inverse arcsine law [44]. To resolve this issue, non-zero or time-varying quantizer thresholds were considered in order to be able to estimate the variances of the input signal and, thus, the whole covariance matrix [53, 54, 55]. The work in [56] validates that the covariance recovery technique from [53] can be used in combination with the Bussgang estimator in order to perform channel estimation. However, the considered quantizer designs with non-zero thresholds are challenging to implement in communication systems and may require more sophisticated analog and digital signal processing, ultimately resulting in performance losses. In contrast, considering multi-bit quantization of the input signal, coarse amplitude information is preserved because of the multi-level quantization, even with a fixed zero-threshold. However, up to now, there is no covariance recovery algorithm proposed for this case.
We derive a novel low-complexity covariance recovery algorithm for the multi-bit case by splitting the task into estimating the correlation matrix and the variances independently. Let us define the following problem statement where we reuse the notation from above for simplicity. Consider a dataset of samples \(\mathbf{y}_{t}\sim\mathcal{N}_{\mathbb{C}}(\mathbf{0},\mathbf{C}_{\mathbf{y}})\), \(t\in\{1,\dots,T\}\), which are quantized with \(B>1\) bits as \(\mathbf{r}_{t}=Q_{B}(\mathbf{y}_{t})\). The task is to recover \(\mathbf{C}_{\mathbf{y}}\) from the \(T\) quantized samples. Since no closed-form solution for the correlation matrix \(\mathbf{R}_{\mathbf{y}}=\operatorname{diag}(\mathbf{C}_{\mathbf{y}})^{-\frac{1}{2}}\mathbf{C}_{\mathbf{y}}\operatorname{diag}(\mathbf{C}_{\mathbf{y}})^{-\frac{1}{2}}\) in the case of multi-bit quantization exists, we simply discard the samples' amplitude information, effectively treating them as one-bit quantization data. Because of that, the closed-form expression for estimating the unquantized correlation matrix \(\mathbf{R}_{\mathbf{y}}\) by means of the one-bit sample covariance matrix \(\hat{\mathbf{C}}_{\text{1bit}}=\frac{1}{T}\sum_{t=1}^{T}Q_{1}(\mathbf{r}_{t})Q_{1}(\mathbf{r}_{t})^{\mathrm{H}}\) can be obtained via the inverse arcsine law:
\[\hat{\mathbf{R}}_{\mathbf{y}}=\sin\left(\frac{\pi}{2}\Re(\hat{\mathbf{C}}_{\text{1bit}}) \right)+\operatorname{j}\sin\left(\frac{\pi}{2}\Im(\hat{\mathbf{C}}_{\text{1bit}}) \right). \tag{23}\]
Unfortunately, although having a closed-form solution, the resulting correlation estimate is not necessarily PSD [53, 57]. However, this can be resolved by a projection onto the set of PSD matrices, as discussed later.
Since the quantization acts elementwise on the real and imaginary part independently, it is sufficient to derive the variance estimation for a real-valued scalar \(y\sim\mathcal{N}(0,\xi^{2})\) and \(r=Q_{B}(y)\in\mathcal{R}\) for ease of notation. We note that the amplitude of \(y\) follows the half-normal distribution. The corresponding CDF of the half-normal distribution is given by \(\mathrm{P}(|y|\leq\tau)=\operatorname{erf}(\tau/\sqrt{2\xi^{2}})\). Because the CDF is fully parameterized by the input signal's variance \(\xi^{2}\), one can utilize the coarse amplitude information after the quantizer for its estimation. By defining the positive quantization thresholds as \(\tilde{\tau}_{i}<\infty\), \(i\in\{1,\dots,2^{B-1}-1\}\), i.e., \(\tilde{\tau}_{i}=\tau_{i+2^{B-1}}\), one can estimate the probability of observing a sample with an amplitude of at most \(\tilde{\tau}_{i}\) from \(T\) quantized samples \(r_{t}\) by \(\hat{\mathrm{P}}(|y|\leq\tilde{\tau}_{i})=\frac{1}{T}\sum_{t=1}^{T}\chi(|r_{t}|\leq\tilde{\tau}_{i})\).
For circularly symmetric Gaussian distributed complex-valued input and multiple quantization thresholds, i.e., \(B>2\), an overdetermined system of equations can be constructed using the different quantizer thresholds for both the real- and imaginary parts, yielding \(2^{B}-2\) equations. The subtraction of two comes from the fact that the last quantization regions up to infinity are uninformative since, in this case, the (sample) probability is always one. In summary, the equation system accounting for the real part of the input is of the form
\[\operatorname{erf}\left(\frac{\tilde{\tau}_{i}}{\sqrt{2\xi^{2}}}\right)=\frac{1 }{T}\sum_{t=1}^{T}\chi\left(|\Re(r_{t})|\leq\tilde{\tau}_{i}\right) \tag{24}\]
with \(i\in\{1,\dots,2^{B-1}-1\}\). The remaining half of the equation system is built similarly by replacing the real with the imaginary part. Note that the equation system only depends on the unknown variance parameter \(\xi^{2}\). Geometrically, we aim to interpolate the sample probabilities belonging to the different
thresholds by a Gaussian CDF curve in a LS sense with the adjustable variance parameter \(\xi^{2}\). Since the derivative of the CDF is trivially given by the Gaussian PDF, a simple Gauss-Newton approach can be utilized for solving the nonlinear LS problem. For a robust initial starting point \(\xi_{0}^{2}\), a solution to the equation with the quantizer's largest \(\tilde{\tau}_{i}\) is used.
In the multi-dimensional case, if the input signal's variance is assumed to be different for each antenna, the nonlinear LS problem can be solved for each dimension independently, yielding an estimate of \(\operatorname{diag}(\mathbf{C_{y}})\). Otherwise, the equation systems for each dimension can be combined to yield a more accurate estimate of the single variance parameter. Note that in the complex-valued case, the estimated variance has to be scaled by a factor of two to account for the sum of the real and imaginary parts. Finally, the full covariance matrix estimate is computed as
\[\hat{\mathbf{C_{y}}}=\operatorname{diag}(\hat{\mathbf{C_{y}}})^{\frac{1}{2}}\hat{\mathbf{R_ {y}}}\operatorname{diag}(\hat{\mathbf{C_{y}}})^{\frac{1}{2}}. \tag{25}\]
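A sketch of the complete recovery (23)-(25) from multi-bit samples; it is illustrative only, uses a cruder initialization than the one described above, and assumes the uniform zero-threshold quantizer of Section II-2:

```python
import numpy as np
from scipy.special import erf

def recover_covariance(R, delta, B, n_iter=20):
    """Estimate the unquantized covariance from B-bit samples (B > 1), cf. (23)-(25).
    R: (N, T) matrix of quantized samples, delta: quantizer step size."""
    N, T = R.shape
    # correlation matrix via one-bit re-quantization and the inverse arcsine law (23)
    S = (np.sign(R.real) + 1j * np.sign(R.imag)) / np.sqrt(2)
    C1 = S @ S.conj().T / T
    R_hat = np.sin(np.pi / 2 * C1.real) + 1j * np.sin(np.pi / 2 * C1.imag)
    # per-antenna variance via a Gauss-Newton fit of the half-normal CDF, cf. (24)
    taus = delta * np.arange(1, 2 ** (B - 1))            # finite positive thresholds
    var = np.zeros(N)
    for n in range(N):
        p_hat = np.concatenate([
            (np.abs(R[n].real)[:, None] <= taus).mean(axis=0),
            (np.abs(R[n].imag)[:, None] <= taus).mean(axis=0)])
        tt = np.concatenate([taus, taus])
        v = float(taus[-1]) ** 2                          # crude initialization of xi^2
        for _ in range(n_iter):
            res = erf(tt / np.sqrt(2 * v)) - p_hat
            jac = -tt / np.sqrt(2 * np.pi) * v ** (-1.5) * np.exp(-tt ** 2 / (2 * v))
            v = max(v - (jac @ res) / (jac @ jac), 1e-8)  # Gauss-Newton step on xi^2
        var[n] = 2 * v                                    # real plus imaginary part
    D = np.diag(np.sqrt(var))
    return D @ R_hat @ D                                  # cf. (25)
```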
The derived covariance recovery scheme is now used in order to fit the GMM's covariances by only using quantized data. For simplicity, we assume that the training data stems from single snapshot observations, i.e., \(\mathbf{r}=Q_{B}(\mathbf{h}+\mathbf{n})\) with \(P=1\), which can always be enforced by pre-processing. In each iteration of the EM algorithm, the M-step is adapted by using the proposed covariance recovery algorithm for estimating the unquantized covariance matrix due to the Gaussianity of each GMM component. The necessary change to the purely Gaussian setting from before is that the responsibilities, computed in each E-step, are used in order to weight the sample probability for component \(k\) accordingly as
\[\hat{\mathrm{P}}(|\Re([\mathbf{y}]_{n})|\leq\tilde{\tau}_{i}\,|\,k)=\frac{1}{N_{k}}\sum_{t=1}^{T}p(k|\mathbf{r}_{t})\chi(|\Re([\mathbf{r}_{t}]_{n})|\leq\tilde{\tau}_{i}), \tag{26}\]
\(\forall i\in\{1,\dots,2^{B-1}-1\}\), where \(N_{k}=\sum_{t=1}^{T}p(k|\mathbf{r}_{t})\). Note that \(\Re([\mathbf{r}_{t}]_{n})\) is replaced by \(\Im([\mathbf{r}_{t}]_{n})\) for the second half of the equation system. Since the quantizer's input signal is also distorted with AWGN, the M-step adaptation from [52, Th. 1] for noisy data is used in addition by means of subtracting the noise covariance and afterward projecting to the set of PSD matrices by performing an eigenvalue decomposition (EVD) and truncating the negative eigenvalues. This also accounts for the possibly non-PSD correlation estimate from (23). After estimating the channel covariance \(\hat{\mathbf{C}}_{\mathbf{h}|k}\) of component \(k\) in this way, one first determines \(\hat{\mathbf{C}}_{\mathbf{y}|k}\) to eventually construct the covariance of the quantized observation \(\hat{\mathbf{C}}_{\mathbf{r}|k}\) by using one of the approximations given in (12) or (13). Since the complexity is not crucial in the offline learning, we utilize the accurate formula for the variance (12). The necessary adaptations in the M-step are concisely summarized in Algorithm 1. The so-found covariance matrix \(\hat{\mathbf{C}}_{\mathbf{r}|k}\) is afterward used to compute the responsibilities in the E-step, similar to (16).
```
0:\(\mathcal{R}=\{\mathbf{r}_{t}\}_{t=1}^{T}\), \(\sigma^{2}\), \(\{p(k|\mathbf{r}_{t})\}_{t=1}^{T}\)
1:for\(k=1\) to \(K\)do
2:\(N_{k}=\sum_{t=1}^{T}p(k|\mathbf{r}_{t})\)
3:\(\hat{\mathbf{C}}_{\mathbf{r}|k}^{\text{Ibl}}=\frac{1}{N_{k}}\sum_{t=1}^{T}p(k|\mathbf{r}_{t })Q_{1}(\mathbf{r}_{t})Q_{1}(\mathbf{r}_{t})^{\text{H}}\)
4:\(\hat{\mathbf{R}}_{\mathbf{y}|k}=\sin\left(\frac{\pi}{2}\Re(\hat{\mathbf{C}}_{\mathbf{r}|k}^{ \text{Ibl}})\right)+\text{j}\sin\left(\frac{\pi}{2}\Im(\hat{\mathbf{C}}_{\mathbf{r}|k}^ {\text{Ibl}})\right)\)
5:for\(n=1\) to \(N\)do
6: Construct equation system for \(\operatorname{diag}(\hat{\mathbf{C}}_{\mathbf{y}|k})\) via (26).
7: Solve equation system via Gauss-Newton.
8:end
9:\(\hat{\mathbf{C}}_{\mathbf{y}|k}=\operatorname{diag}(\hat{\mathbf{C}}_{\mathbf{y}|k})^{\frac{1 }{2}}\hat{\mathbf{R}}_{\mathbf{y}|k}\operatorname{diag}(\hat{\mathbf{C}}_{\mathbf{y}|k})^{ \frac{1}{2}}\)
10:\(\mathbf{V}\operatorname{diag}(\hat{\mathbf{c}}_{\mathbf{h}|k})\mathbf{V}^{\mathrm{H}}=\operatorname{EVD}(\hat{\mathbf{C}}_{\mathbf{y}|k}-\sigma^{2}\,\mathbf{I})\)
11:\(\hat{\mathbf{C}}_{\mathbf{h}|k}=\mathbf{V}\operatorname{diag}(\max(\hat{\mathbf{c}}_{\mathbf{h}|k},\mathbf{0}))\mathbf{V}^{\mathrm{H}}\) (projection onto the set of PSD matrices)
12:\(\hat{\mathbf{C}}_{\mathbf{y}|k}=\hat{\mathbf{C}}_{\mathbf{h}|k}+\sigma^{2}\,\mathbf{I}\)
13: Construct \(\hat{\mathbf{C}}_{\mathbf{r}|k}\) from \(\hat{\mathbf{C}}_{\mathbf{y}|k}\), cf. (12) and (13).
14:end
```
## VI Baseline Channel Estimators
We compare the proposed parameterized generative modeling-aided channel estimators with state-of-the-art baseline channel estimators for coarse quantization systems. First, for the 3GPP channel model from Section II-3, we have genie access to the true underlying channel covariance matrix \(\mathbf{C}_{\mathbf{h}|\mathbf{\delta}}\) from (6) in the simulation. This allows us to evaluate the genie-aided Bussgang estimator \(\hat{\mathbf{h}}_{\text{Buss-genie}}=\mathbf{C}_{\mathbf{h}|\mathbf{\delta}}\mathbf{A}^{\mathrm{H}}\mathbf{B}_{\mathbf{\delta}}^{\mathrm{H}}\mathbf{C}_{\mathbf{r}|\mathbf{\delta}}^{-1}\mathbf{r}\), where \(\mathbf{B}_{\mathbf{\delta}}\) and \(\mathbf{C}_{\mathbf{r}|\mathbf{\delta}}\) are found by plugging \(\mathbf{C}_{\mathbf{h}|\mathbf{\delta}}\) into (8) or (9) and (13) or (11) for the multi-bit or one-bit case, respectively. Note that this estimator is not feasible in practice but only serves as a lower bound on the performance of the Bussgang estimator, which is the best linear estimator. The corresponding curves are labeled as "Buss-genie".
A practically feasible approach that is primarily used in the literature is to use the sample covariance matrix \(\hat{\mathbf{C}}_{\mathbf{h}}=\frac{1}{T}\sum_{t=1}^{T}\mathbf{h}_{t}\mathbf{h}_{t}^{\text{H}}\) in combination with the Bussgang estimator (10), labeled as "Buss-Scov". Note that for this case, a training dataset of ground-truth channels \(\mathcal{H}\) is necessary.
A simple baseline is the LS estimate based on the Bussgang decomposition (7), i.e., \(\hat{\mathbf{h}}_{\text{BLS}}=\mathbf{A}^{\dagger}\mathbf{B}^{\dagger}\mathbf{r}\), labeled as "BLS", where \(\mathbf{A}^{\dagger}\) is the pseudo-inverse of \(\mathbf{A}\). For computing the Bussgang gain (8) or (9), we use the sample covariance matrix \(\hat{\mathbf{C}}_{\mathbf{h}}\) from above.
In [20], a CS-based channel estimator is proposed, which is a combination of the EM algorithm for approximating the channel PDF in the sparse angular domain and the GAMP algorithm to solve the sparse recovery problem. Note that in this case, an EM algorithm is deployed online for each transmission link and pilot observation, which is fundamentally different from the GMM approach, which exploits the EM solely in the offline phase. We apply the EM-GM-GAMP algorithm to estimate the channel parameters \(\hat{\mathbf{x}}\) in the angular domain, such that the final channel estimate is computed as \(\hat{\mathbf{h}}_{\text{EM-GM-GAMP}}=\mathbf{F}\,\hat{\mathbf{x}}\), labeled as "EM-GM-GAMP".
We also evaluate a deep learning-based estimator, similar to [26], where a three-layered feed-forward NN is trained to directly map the pilot observation to a channel estimate. The ReLU function is used as the activation function in all layers except the output layer. To achieve a fair comparison to the proposed approaches, a single network is trained for the whole SNR range. Similar to [26], the best performance was achieved by a drastic increase of the neurons in the hidden layers. We, therefore, set the number of neurons in both hidden layers to \(2N^{2}\). The corresponding curves are labeled as "DNN".
## VII Memory and Complexity Analysis
The offline memory requirements of data-based techniques and the algorithmic online complexity are key features for channel estimation in real-time systems. The number of parameters of the (structured) zero-mean GMM and the MFA model is determined by the \(K\) covariances and the number of mixing coefficients. The corresponding linear MMSE filters for each component and SNR value are fixed after the offline training, which means that they can be pre-computed. Since this can be similarly done for the evaluation of the responsibilities in (16), the overall online complexity is determined by matrix-vector products for each component [29]. Notably, the computation of the \(K\) filters/responsibilities is trivially parallelizable, which is of great importance in practical systems. For the case of circulant-structured GMM covariances, cf. Section IV-A, the complexity reduces due to the usage of FFTs [29].
For the VAE approach, the number of parameters and the complexity for a forward pass through the network depend on the network architecture, cf. Section IV-C. The resulting filter is computable by means of FFTs since a circulant covariance matrix is parameterized, similar to the circulant GMM case. We further note that the approaches that learn from quantized data, cf. Section V, are only adapted in the training procedure and thus have the same memory overhead and online complexity as the models learned with perfect CSI.
Table I summarizes the memory overhead and computational online complexity of all proposed approaches as well as the baseline methods. It can be seen that the proposed approaches vary in the number of parameters and online complexity to allow for a smooth trade-off with respect to the desirable performance and practical system requirements. Of particular importance is the comparison to the deep NN approach, which is adapted from [26] and directly provides a channel estimate at the output, i.e., it does not parameterize an analytical estimator. It becomes apparent that the proposed approaches exhibit a much lower number of parameters as well as a lower online complexity compared to the deep NN approach. The main reason for this is the drastic increase of neurons in the hidden layers in the NN approach, cf. [26]; in contrast, the proposed models are comprised of a latent space which enforces a compression, and thus, a reduced memory and complexity overhead. As shown in the following numerical results, the estimation performance of the proposed approaches is, in most cases, even better, although having fewer parameters and reduced online complexity.
## VIII Achievable Rate Lower Bound
The achievable rate is of great interest in quantized systems [3, 22]. We evaluate a lower bound on the corresponding achievable rate of a respective data transmission system that is taking the CSI mismatch into account. To this end, after estimating the channel with the pilot transmission in (1), the data symbol \(s\) is transmitted over the same channel, i.e., \(\mathbf{r}=Q_{B}(\mathbf{h}s+\mathbf{n})=\mathbf{B}\mathbf{h}s+\mathbf{q}\); in the second equation, the linearized model with Bussgang's decomposition is used where \(\mathbf{q}=\mathbf{B}\mathbf{n}+\mathbf{\eta}\). We make the worst-case assumption that the aggregated noise is Gaussian, i.e., \(\mathbf{q}\sim\mathcal{N}_{\text{C}}(\mathbf{0},\mathbf{C}_{\mathbf{q}}=\mathbf{C}_{\mathbf{r}}-\mathbf{B} \mathbf{C}_{\mathbf{h}}\mathbf{B}^{\text{H}})\), cf. [58]. Furthermore, the BS is assumed to perform maximum-ratio combining (MRC) with the normalized filter \(\mathbf{g}_{\text{MRC}}^{\text{H}}=\hat{\mathbf{h}}^{\text{H}}/\|\hat{\mathbf{h}}\|_{2}^{2}\). Note that the variance of the data symbol \(s\) is assumed to be one without loss of generality. We further assume that the SNR is the same during pilot and data transmission. Thus, we can evaluate the use-and-then-forget (UatF) bound as a lower bound on the achievable rate, cf. [22, Lemma 1], as
\[R_{\text{UatF}}=\log_{2}\left(1+\frac{\left|\text{E}[\mathbf{g}_{\text{MRC}}^{\text {H}}\mathbf{B}\mathbf{h}]\right|^{2}}{\text{var}[\mathbf{g}_{\text{MRC}}^{\text{H}}\mathbf{B} \mathbf{h}]+\mathbf{g}_{\text{MRC}}^{\text{H}}\mathbf{C}_{\mathbf{q}}\mathbf{g}_{\text{MRC}}}\right) \tag{28}\]
where the expectation and the variance are evaluated via Monte Carlo simulations.
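For illustration, the bound can be evaluated as in the following sketch; the arrays of channels and estimates as well as the matrices \(\mathbf{B}\) and \(\mathbf{C}_{\mathbf{q}}\) are assumed to be precomputed, and the function name is ours:

```python
import numpy as np

def uatf_rate(channels, estimates, Bg, Cq):
    """Monte Carlo evaluation of the UatF lower bound (28) with MRC filtering.
    channels / estimates: (S, N) arrays of true channels and their estimates."""
    g = estimates / np.sum(np.abs(estimates) ** 2, axis=1, keepdims=True)   # g = h_hat / ||h_hat||^2
    eff = np.einsum('sn,nm,sm->s', g.conj(), Bg, channels)                  # g^H B h per sample
    signal = np.abs(eff.mean()) ** 2
    interference = eff.var()                                                # var[g^H B h]
    noise = np.real(np.einsum('sn,nm,sm->s', g.conj(), Cq, g)).mean()
    return np.log2(1 + signal / (interference + noise))
```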
## IX Simulation Results
### _Covariance Recovery_
Before investigating the channel estimation performance, we evaluate the proposed covariance recovery algorithm from Section V-A in a purely Gaussian setting without AWGN, comparing it to reasonable baselines. By assuming genie-knowledge of the unquantized samples, we can evaluate the unquantized sample covariance matrix, i.e., \(\hat{\mathbf{C}}_{\text{unquant}}=\frac{1}{T}\sum_{t=1}^{T}\mathbf{h}_{t}\mathbf{h}_{t}^{\mathrm{H}}\). Note that this approach requires perfect CSI and thus only serves as a reference. A feasible approach is to neglect the quantization effect and evaluate the quantized sample covariance matrix \(\hat{\mathbf{C}}_{\text{quant}}=\frac{1}{T}\sum_{t=1}^{T}\mathbf{r}_{t}\mathbf{r}_{t}^{\mathrm{H}}\), where \(\mathbf{r}_{t}=Q_{B}(\mathbf{h}_{t})\). This baseline becomes more accurate with more quantization bits \(B\) but introduces a systematic error due to the coarse quantization.
We construct \(100\) random covariance matrices \(\mathbf{C}_{\mathbf{h}|\mathbf{\delta}}\) from (6) and draw a fixed number of samples from each covariance matrix as \(\{\mathbf{h}_{t}\sim\mathcal{N}_{\mathbb{C}}(\mathbf{0},\mathbf{C}_{\mathbf{h}|\mathbf{\delta}}) \}_{t=1}^{T}\). Afterward, the normalized MSE is computed by using those \(100\) covariance realizations.
The left plot in Fig. 2 shows the normalized MSE versus different numbers \(T\) of samples for different numbers \(B\) of quantization bits. It can be seen that the proposed covariance recovery scheme performs equally well for all quantization levels since it yields a consistent estimator, i.e., the estimation error steadily decreases for a larger number of samples. This is similar to the unquantized sample covariance matrix ("Scov \(\infty\)-bit") but with a more or less constant offset, which is mainly caused by the correlation estimate that is unchanged for varying bits \(B\), cf. (23). In contrast, the quantized sample covariance matrix ("Scov \(B\)-bit") is a biased estimator and shows a relatively high error floor, which decreases for a higher number of quantization bits, as expected. We note that the consistency and unbiasedness of the proposed estimator is a key characteristic since, for the training with quantized pilot observations, it can be expected that a large dataset \(\mathcal{R}\) can be acquired cheaply during regular operation of the BS; this is in contrast to a dataset \(\mathcal{H}\) consisting of ground-truth channels, which either requires costly measurement campaigns or intricate modeling of the underlying propagation environment.
In the right plot in Fig. 2, we evaluate the necessary number of iterations of the Gauss-Newton algorithm for solving the nonlinear LS problem until convergence, i.e., until the absolute change of the estimated variances is smaller than \(10^{-5}\). It can be seen that in all cases, only a few iterations are necessary for the convergence; the number of iterations also decreases for a higher number of data samples \(T\) and for fewer quantization bits (the number of equations increases with \(B\)), which makes the variance estimation very fast. In combination with the closed-form solution for the correlation matrix (23), the covariance estimator exhibits low complexity and is applicable for any given number of quantization bits.
### _Channel Estimation_
This section provides numerical results to evaluate the proposed channel estimators, cf. Section IV and Section V, against the discussed state-of-the-art baselines from Section VI. In all simulations, we have fixed the number of training samples for both \(\mathcal{H}\) and \(\mathcal{R}\), cf. Section II-4, to \(T=100{,}000\). The normalized MSE and the achievable rate lower bound (28) are computed by means of \(10{,}000\) channel samples, which are not part of the training dataset. If not otherwise stated, the number of components for the GMM/MFA is \(K=64\), the latent dimension for the MFA/VAE is \(L=N/4\), and the data-aided approaches are trained with the channel dataset \(\mathcal{H}\). For both the VAE and the DNN approach, a single NN architecture is trained for the whole SNR range of \([-10,20]\) dB. Since a low pilot overhead is considered a key aspect in practical systems [2], we especially focus on the single snapshot scenario in this paper.
In Fig. 3, we evaluate the MSE performance of the proposed channel estimators in comparison to the baseline methods over the SNR for the 3GPP channel model from Section II-3 with one (top row) and three (bottom row) propagation clusters for \(B\in\{1,2,3\}\) quantization bits, \(N=64\) BS antennas, and \(P=1\) pilot. In all cases, the approaches "BLS", "Buss-Scov", and "EM-GM-GAMP" are outperformed with a considerable performance gap over the whole SNR range by the proposed
| **Name** | **Model Parameters (real-valued)** | **Example (rounded)** | **Online Complexity** |
| --- | --- | --- | --- |
| GMM full | \(K\left(N^{2}+1\right)-1\) | \(2.62\cdot 10^{5}\) | \(\mathcal{O}(KPN^{2})\) (parallel in \(K\)) |
| GMM toep | \(K\left(4N+1\right)-1\) | \(1.64\cdot 10^{5}\) | \(\mathcal{O}(KPN^{2})\) (parallel in \(K\)) |
| GMM circ | \(K\left(N+1\right)-1\) | \(4.16\cdot 10^{3}\) | \(\mathcal{O}(KPN\log(PN))\) (parallel in \(K\)) |
| MFA | \(K\left(2LN+2\right)-1\) | \(1.31\cdot 10^{5}\) | \(\mathcal{O}(KPN^{2})\) (parallel in \(K\)) |
| VAE | \(6N^{2}+10NL+\frac{1}{2}P^{2}+\frac{1}{2}P+\mathcal{O}(N)\) | \(3.53\cdot 10^{4}\) | \(\mathcal{O}(N^{2}+PN\log(PN))\) |
| DNN [26] | \(4N^{4}+(4P+4)N^{3}+4N^{2}+2N\) | \(7.24\cdot 10^{7}\) | \(\mathcal{O}(N^{3})\) |
| BLMMSE | \(N^{2}\) | \(5.0\cdot 10^{3}\) | \(\mathcal{O}(PN^{2})\) |
| EM-GM-GAMP [20] | – | – | \(\mathcal{O}(PN\log(PN))\) |
| BLS | – | – | \(\mathcal{O}(PN)\) |

TABLE I: Computational complexity and number of parameters of the discussed estimators with example numbers for the case of \(K=N=64\), \(L=16\), and \(P=4\).
Fig. 2: Performance evaluation for covariance estimation with \(N=64\) dimensions and \(100\) Monte Carlo iterations using covariances obtained from the 3GPP model (6).
approaches. This is because the Bussgang theorem does not hold for the non-Gaussian distributed channels, whereas "Buss-Scov" implicitly assumes Gaussianity; moreover, the channels are generally not perfectly sparse in the angular domain (leakage effect), which substantially impacts the CS approach "EM-GM-GAMP". Interestingly, for the case of one propagation cluster, the GMM-based approach is close to the "Bussgenie" approach, i.e., the Bussgang estimator with utopian knowledge of the true channel covariance matrix for a single snapshot; this underlines the powerful estimation abilities of the GMM. For the considered case of a ULA, the Toeplitz-structured GMM version is almost on par with the full GMM approach, whereas the circulant-structured approach exhibits a small performance gap. The increase of the MSE beyond a certain SNR level that is observed in some cases is due to stochastic resonance, a well-known effect in quantized systems [59]; the strength of this effect can vary between estimators, depending on the parameterization. The "DNN" approach also shows good estimation results for the different scenarios but is consistently outperformed by at least one of the proposed estimators, despite having a much larger number of parameters and a higher online complexity, cf. Table I. Overall, the simulation results in Fig. 3 demonstrate the great potential of the proposed class of parameterized estimators based on Gaussian latent models in combination with the Bussgang estimator.
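To make the parameter-count comparison concrete, the short computation below evaluates two of the formulas from Table I for \(K=N=64\) and \(P=4\) and reproduces the listed orders of magnitude.

```python
# Parameter counts from Table I for K = N = 64, P = 4 (real-valued parameters).
K = N = 64
P = 4
gmm_full = K * (N**2 + 1) - 1                            # ~2.62e5
dnn = 4 * N**4 + (4 * P + 4) * N**3 + 4 * N**2 + 2 * N   # ~7.24e7
print(f"GMM full: {gmm_full:.2e}, DNN: {dnn:.2e}")       # DNN needs roughly 275x more
```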
Fig. 4 assesses the achievable rate lower bound from (28) for the 3GPP channel model with one (left) and three (right) propagation clusters for \(N=64\) antennas, \(P=1\) pilot observation, and different numbers of quantization bits. For the case of one propagation cluster, the achievable rate lower bound of the GMM and VAE approach is almost on par with the "Bussgenie" approach. Moreover, a substantial gap to the achievable rate lower bound of the "Buss-Scov" approach is apparent for all considered numbers of quantization bits. This behavior similarly translates to the case of three propagation clusters
Fig. 4: Evaluation of the achievable rate lower bound (28) for the 3GPP channel model (cf. Section II-3) with one (left) and three (right) propagation clusters for \(B=\{1,2,3,\infty\}\) quantization bits, \(N=64\) antennas, and \(P=1\) pilot observation.
Fig. 5: MSE (left) and achievable rate lower bound (right) performance for the QuaDBiGa LOS channel model (cf. Section II-3) for \(N=64\) antennas and \(P=1\) pilot.
Fig. 3: MSE performance for the 3GPP channel model (cf. Section II-3) with one (top) and three (bottom) propagation clusters for \(B\in\{1,2,3\}\) quantization bits, \(N=64\) antennas, and \(P=1\) pilot observation (single snapshot).
but with an overall reduced gap to the baseline approach. This result indicates that the better estimation performance of the proposed estimators can be effectively converted to a higher data rate or to a lower resolution while preserving the same throughput as the baseline approach "Buss-Scov".
In Fig. 5 (left), the MSE performance is compared for different numbers \(B\) of quantization bits, now for the QuaDRiGa LOS channel model, cf. Section II-3. Once again, the approaches "BLS", "Buss-Scov", and "EM-GM-GAMP" are outperformed over the whole range of quantization bits. Interestingly, the "DNN" approach is comparably good for \(B=2\) and \(B=3\) but, in turn, suffers in performance in the extreme cases of \(B=1\) and infinite resolution, which indicates the better robustness of the proposed approaches. Moreover, the MFA estimator is ranked among the best estimators in this case, which highlights that the estimators' performances may vary slightly for different channel models; however, it can also be seen that the overall performance of the proposed class of estimators is stable and robust with respect to a different channel model. In Fig. 5 (right), the corresponding achievable rate lower bound from (28) is evaluated. It can be seen that the better estimation qualities of the proposed approaches in terms of the MSE directly translate to a higher achievable rate guarantee, which is approximately only 1-2 bits/s/Hz below the achievable rate lower bound with perfect CSI knowledge at the receiver.
The left plot in Fig. 6 examines the estimation quality over different numbers \(N\) of antennas at the BS for \(B=1\) bit, \(P=1\) pilot, and an SNR of 10dB for the QuaDRiGa LOS channel model, cf. Section II-3. In contrast to the baseline approaches "BLS", "Buss-Scov", and "EM-GM-GAMP", the estimation performance of the proposed estimators significantly increases for a higher number of antennas, which is particularly important in massive MIMO systems. The GMM approach performs best for all considered antenna numbers, whereas the VAE approach is especially strong when the number of antennas is large; this is explained by the circulant parameterization of the covariances in the VAE, which is only justified asymptotically for large numbers of antennas. A similar behavior is observed for the "GMM-circ" estimator. The "DNN" approach, which has a quartic scaling of the number of parameters in the number of antennas, cf. Table I, is outperformed over the whole range.
The right plot in Fig. 6 shows the MSE performance for an increasing number of pilot observations by utilizing the pilot design from Section II-1 for \(N=64\) antennas and a fixed SNR of \(5\)dB. The proposed estimators outperform all baseline approaches over the whole range of pilot observations, including the "DNN" estimator. The MFA and VAE models in particular, which are based on a nonlinear latent space, perform well in the regime of many pilot observations. Although we focus our analysis primarily on the single-snapshot case, we see that the proposed estimators also perform very well with an increasing number of pilots.
Next, the number \(K\) of GMM components that are necessary to achieve a certain performance is discussed. In the left plot of Fig. 7, the MSE over the number of GMM components is evaluated for \(B=1\) bit, \(N=64\) antennas, \(P=1\) pilot, and for varying SNRs for the QuaDRiGa LOS as well as mixed LOS/NLOS channel model, cf. Section II-3. It can be expected that the superposition of many sub-paths, as it is the case in the mixed LOS/NLOS scenario, results in a less structured wireless channel, and thus, less structural information can be inferred as prior knowledge by the data-aided models. Therefore, it can be observed that the overall performance is worse for the mixed scenario. However, in both scenarios, the increase of GMM components continuously enhances the estimation performance, with a greater improvement in the pure LOS case. This points towards applications in mmWave communications, where the high frequency in combination with smaller BS cells results in high LOS probabilities.
The right plot of Fig. 7 analyzes the same setup but now for \(B=3\) quantization bits. In this case, we evaluate the approximation quality of computing \(\mathbf{C_{r}}\) for a given \(\mathbf{C_{y}}\) via (13) by comparing it with the estimator that computes the exact variances via (12) and otherwise uses the same approximation for the off-diagonals, labeled "GMM-ex.". As expected, the approximation is highly accurate in the low to medium SNR region, which is the considered operating range of low-resolution systems. In high SNR, the approximation is less accurate and shows saturation effects when increasing the number of GMM components. However, the overall approximation loss is small, and it still results in a high estimation accuracy as compared to the baseline approaches. Besides that, the estimation performance generally steadily increases for a higher number of GMM components with an overall saturation
Fig. 6: MSE performance for the QuaDRiGa LOS channel model (cf. Section II-3) for \(B=1\). Left: \(P=1\) and SNR = 10dB; right: \(N=64\) and SNR = 5dB.
Fig. 7: MSE performance for the QuaDRiGa LOS and mixed LOS/NLOS channel model (cf. Section II-3) for \(B=1\) (left) and \(B=3\) (right), \(N=64\), and \(P=1\).
for high numbers \(K\) of components.
Fig. 8 evaluates the proposed training adaptations for the GMM and the VAE, as detailed in Section V, in order to learn from noisy and quantized pilot observations \(\mathcal{R}\), cf. Section II-4, without having ground-truth channel samples during training. We refer to the adapted GMM, cf. Algorithm 1, as "GMM \(\mathcal{R}\)", and the GMM learned with ground-truth channels as "GMM \(\mathcal{H}\)". The VAEs are denoted likewise. A main difference between "GMM \(\mathcal{R}\)" and "VAE \(\mathcal{R}\)" is that the training of the adapted GMM is performed for a fixed SNR, but afterward, the model can be utilized for the whole SNR range, whereas the VAE is trained directly for the whole SNR range in order to generalize properly. In order to have a meaningful comparison, the "GMM \(\mathcal{R}\)" is trained for each SNR point, whereas the "VAE \(\mathcal{R}\)" is trained for the whole SNR range (no performance gain was seen in the simulation results for an SNR-dependent training).
In Fig. 8 (a) and (b), the case of \(N=64\), \(P=1\), \(B=2\), and the 3GPP channel model with one and three propagation clusters are considered, respectively, whereas in Fig. 8 (c) and (d) the same setup with \(B=3\) bits is investigated. Astonishingly, through the model-based adaptations, the models trained with the dataset \(\mathcal{R}\) consisting of coarsely quantized and noisy data samples are almost on par with their counterparts that are trained with ground-truth noise-free channel samples from \(\mathcal{H}\). Overall, the estimation quality seems to be most accurate in the low to medium SNR range, where the approximation in (13) and thus the training adaptations are highly accurate. This implies that the generally costly dataset \(\mathcal{H}\) can be replaced by \(\mathcal{R}\) with almost no performance loss in this regime. In the high SNR regime, the performance loss of "GMM \(\mathcal{R}\)" tends to increase, which is a consequence of the generally indefinite closed-form correlation estimate (23). After the projection onto the set of PSD matrices by truncating the negative eigenvalues, cf. Algorithm 1, the resulting covariance estimate is missing the corresponding eigenvectors, which has a higher impact on the performance in the high SNR regime. This correlates with the observation that the adaptations are working best in cases with fewer multi-path components.
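The PSD projection mentioned above, i.e., truncating the negative eigenvalues of the indefinite closed-form correlation estimate, can be sketched in a few lines; this is a generic implementation of only that projection step, not of the full Algorithm 1.

```python
import numpy as np

def project_psd(C):
    """Project a Hermitian matrix onto the PSD cone by truncating
    negative eigenvalues (the projection step described for Algorithm 1)."""
    C = 0.5 * (C + C.conj().T)        # symmetrize to suppress numerical asymmetry
    w, V = np.linalg.eigh(C)          # real eigenvalues, unitary eigenvectors
    w = np.maximum(w, 0.0)            # truncate the negative eigenvalues
    return (V * w) @ V.conj().T

# Example: an indefinite Hermitian matrix becomes PSD after the projection.
A = np.array([[2.0, 0.0], [0.0, -1.0]])
print(np.linalg.eigvalsh(project_psd(A)))   # [0., 2.]
```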
## X Conclusion
In this work, we presented a novel and promising approach for channel estimation in coarse quantization systems by utilizing Gaussian latent models such as GMMs, MFAs, and VAEs. These models successfully learn the unknown and complex channel distributions present in radio propagation scenarios and, afterward, utilize this valuable prior information to enable the development of tractable parameterized linear MMSE estimators based on a conditional Bussgang decomposition. We have shown that all of the presented estimators perform well for various channel and system parameters with only minor differences. This allows for selecting the preferred model using the discussed memory and complexity overhead.
In addition, we derived model-based training adaptations, i.e., a covariance recovery algorithm for the GMM and a loss function adaptation for the VAE, in order to learn these models directly from quantized training data, with only marginal performance losses. Extensive simulations verified a superior performance over classical and deep learning-based approaches in terms of MSE and achievable rate metrics.
The analysis of multi-user systems, especially in the presence of coarsely quantized training data, is left as future work.
## XI Acknowledgment
The authors gratefully acknowledge valuable discussions with and input from Dr.-Ing. Michael Koller in the early stages of this work.
|
2309.13568 | The $\circ$ operation and $*$ operation of Cohen-Macaulay bipartite
graphs | Let $G$ be a finite simple graph with the vertex set $V$ and let $I_G$ be its
edge ideal in the polynomial ring $S=\mathbb{K}[x_V]$. In this paper, we
compute the depth and the Castelnuovo--Mumford regularity of $S/I_G$ when
$G=G_1\circ G_2$ or $G=G_1* G_2$ is a graph obtained from Cohen-Macaulay
bipartite graphs $G_1$, $G_2$ by $\circ$ operation or $*$ operation,
respectively. | Yulong Yang, Guangjun Zhu, Yijun Cui, Shiya Duan | 2023-09-24T06:50:26Z | http://arxiv.org/abs/2309.13568v2 | # The \(\circ\) operation and \(*\) operation of Cohen-Macaulay bipartite graphs
###### Abstract.
Let \(G\) be a finite simple graph with the vertex set \(V\) and let \(I_{G}\) be its edge ideal in the polynomial ring \(S=\mathbb{K}[x_{V}]\). In this paper, we compute the depth and the Castelnuovo-Mumford regularity of \(S/I_{G}\) when \(G=G_{1}\circ G_{2}\) or \(G=G_{1}*G_{2}\) is a graph obtained from Cohen-Macaulay bipartite graphs \(G_{1}\), \(G_{2}\) by \(\circ\) operation or \(*\) operation, respectively.
2020 _Mathematics Subject Classification_. Primary 13C15, 13A15, 13D02; Secondary 05E40. Keywords: Regularity, depth, \(\circ\) operation, \(*\) operation, Cohen-Macaulay bipartite graphs.
quantity \(\max\{|\mathcal{X}|\ \big{|}\ \mathcal{X}\) is a star packing of \(G\}\) is called the _star packing number_ of \(G\), denoted by \(\gamma(G)\). Fouli et al. in [8] showed that
\[\operatorname{depth}(S/I_{G})\geq\gamma(G).\]
Let \(u,v\in V(G)\). The distance of \(u\) and \(v\), denoted by \(d(u,v)\), is the length of the shortest path between \(u\) and \(v\). If \(G\) is connected, then the diameter of \(G\) is \(d(G)=\max\{d(u,v)|u,v\in V\}\). Fouli and Morey in [9] showed that if a graph \(G\) has \(p\) connected components, then
\[\operatorname{depth}(S/I_{G})\geq\sum_{i=1}^{p}\lceil\frac{d_{i}+1}{3}\rceil\]
where \(\lceil\frac{d_{i}+1}{3}\rceil\) is the smallest integer \(\geq\frac{d_{i}+1}{3}\) and \(d_{i}\) is the diameter of the \(i\)-th connected component of \(G\). The second author of this paper in [23] proved that if \(G\) is a path, then \(\operatorname{depth}(S/I_{G})\) can reach this lower bound. Morey et al. in [18] showed that for a connected bipartite graph \(G\) with \(n\) vertices, then
\[\operatorname{depth}(S/I_{G})\leq\lfloor\frac{n}{2}\rfloor\]
where \(\lfloor\frac{n}{2}\rfloor\) is the largest integer \(\leq\frac{n}{2}\). The second author of this paper in [23] and [24] provided some exact formulas for the depth and regularity of the edge ideals of path graphs and cycle graphs respectively.
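As a quick illustration of the diameter bound of Fouli and Morey, the sketch below computes \(\sum_{i}\lceil(d_{i}+1)/3\rceil\) with networkx; for a path the bound equals \(\lceil n/3\rceil\) and is attained, as noted above.

```python
import math
import networkx as nx

def depth_lower_bound(G):
    """Fouli--Morey lower bound: sum of ceil((d_i + 1)/3) over the
    connected components, where d_i is the component's diameter."""
    return sum(math.ceil((nx.diameter(G.subgraph(c)) + 1) / 3)
               for c in nx.connected_components(G))

# For the path P_7 the diameter is 6, so the bound is ceil(7/3) = 3,
# and it is attained by depth(S/I_{P_7}).
print(depth_lower_bound(nx.path_graph(7)))   # 3
```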
The first three authors of this article in [25] studied two families of simple graphs obtained from some fan graphs by the \(*\) operation and the \(\circ\) operation, respectively. For these two families of graphs, we gave some formulas for the depth and regularity of \(S/I_{G}\).
In this article, we are interested in algebraic properties of depth and regularity of \(S/I_{G}\) if \(G\) is a graph obtained from two Cohen-Macaulay bipartite graphs by the \(\circ\) operation or the \(*\) operation.
The article is organized as follows. In Section 2, we will recall some basic definitions and terminology that we will need later. In Section 3, we will study the depth and regularity of a bipartite graph obtained from a Cohen-Macaulay bipartite graph by deleting one of its leaves. We give some exact formulas for the depth and regularity of the edge ideal of such a graph. In Section 4, we will study some graphs obtained from Cohen-Macaulay bipartite graphs by the \(\circ\) operation or the \(*\) operation. For such graphs, we give some exact formulas for the depth and regularity of their edge ideals.
## 2. Preliminary
In this section, we gather together the needed definitions and basic facts, which will be used throughout this paper. However, for more details, we refer the reader to [5, 11, 21].
Let \(G=(V(G),E(G))\) be a finite simple (no loops, no multiple edges) graph, where \(V(G)\) and \(E(G)\) are the vertex set and edge set of \(G\), respectively. Sometimes for short we denote \(V(G)\) and \(E(G)\) by \(V\) and \(E\) respectively. The _neighborhood_ of a vertex \(v\) in \(G\) is defined as \(N_{G}(v)=\{u\,|\,\{u,v\}\in E(G)\}\) and its degree, denoted by \(\deg_{G}(v)\), is \(|N_{G}(v)|\). If \(|N_{G}(v)|=1\), then \(v\) is called a leaf. Set \(N_{G}[v]=N_{G}(v)\cup\{v\}\).
For \(A\subset V(G)\), \(G|_{A}\) denotes the _induced subgraph_ of \(G\) on the set \(A\), i.e., for \(i,j\in A\), \(\{i,j\}\in E(G|_{A})\) if and only if \(\{i,j\}\in E(G)\). For \(W\subseteq V(G)\), we denote by \(G\backslash W\) the induced subgraph of \(G\) on \(V(G)\setminus W\). For a vertex \(v\in V(G)\), we denote by \(G\backslash v\) the induced subgraph of \(G\) on the set \(V(G)\backslash\{v\}\) for simplicity.
A _walk_ of length \((n-1)\) in a graph \(G\) is an alternating sequence of vertices and edges \(w=\{v_{1},z_{1},v_{2},\ldots,v_{n-1},z_{n-1},v_{n}\}\), where \(z_{i}=\{v_{i},v_{i+1}\}\) is the edge joining \(v_{i}\) and \(v_{i+1}\). A walk is closed if \(v_{1}=v_{n}\). A walk may also be denoted \(\{v_{1},\ldots,v_{n}\}\), the edges being evident by context. A _cycle_ of length \(n\) is a closed walk, in which the points \(v_{1},\ldots,v_{n}\) are distinct. We denote the graph consisting of a cycle with \(n\) vertices by \(C_{n}\). A _path_ is a walk with all the points distinct. For simplicity, a _path_ with \(n\) vertices, denoted \(P_{n}\), is a walk with the vertex set \([n]\) and edge set \(\{\{1,2\},\{2,3\},\ldots,\{n-1,n\}\}\), and the length of \(P_{n}\) is defined to be \(n-1\). Any graph isomorphic to \(P_{n}\) is also called a path.
In the sequel, let \(S_{+}\) be the unique graded maximal ideal of the standard graded algebra \(S\). The local cohomology modules of a finitely generated graded \(S\)-module \(M\) with respect to \(S_{+}\) are denoted by \(H^{i}_{S_{+}}(M)\) for \(i\in\mathbb{Z}\).
**Definition 2.1**.: _Let \(M\) be a finitely generated graded \(S\)-module._
1. _The_ depth _of_ \(M\) _is defined as_ \[\operatorname{depth}(M):=\min\{i:H^{i}_{S_{+}}(M)\neq 0\}.\]
2. _For_ \(i=0,\ldots,\dim(M)\)_, the_ \(i^{\text{th}}\) _a_-invariant _of_ \(M\) _is defined as_ \[a_{i}(M):=\max\{t:(H^{i}_{S_{+}}(M))_{t}\neq 0\}\] _with the convention that_ \(\max\emptyset=-\infty\)_._
3. _The_ Castelnuovo-Mumford regularity _of_ \(M\) _is defined as_ \[\operatorname{reg}(M):=\max\{a_{i}(M)+i:0\leq i\leq\dim(M)\}.\]
A graph \(G\) is called Cohen-Macaulay (abbreviated as C-M) if the quotient ring \(S/I_{G}\) is Cohen-Macaulay, i.e., \(\operatorname{depth}(S/I_{G})=\dim(S/I_{G})\). For a proper non-zero homogeneous ideal \(I\) in \(S\), it is known that \(\operatorname{reg}(S/I)=\operatorname{reg}(I)-1\).
The following lemmas are often used to compute the depth and regularity of a module. In particular, since the facts in Lemma 2.2 are well-known, they will be used implicitly in this paper.
**Lemma 2.2**.: _Let \(M,N\) be two finitely generated graded \(S\)-modules. Then,_
1. \(\operatorname{depth}(M\oplus N)=\min\{\operatorname{depth}(M),\operatorname{ depth}(N)\}\)_, and_
2. \(\operatorname{reg}(M\oplus N)=\max\{\operatorname{reg}(M),\operatorname{reg}(N)\}\)_._
**Lemma 2.3**.: ([13, Lemmas 2.1 and 3.1]) _Let \(0\longrightarrow M\longrightarrow N\longrightarrow P\longrightarrow 0\) be an exact sequence of finitely generated graded \(S\)-modules. Then we have_
1. \(\operatorname{depth}\left(M\right)\geq\min\{\operatorname{depth}\left(N \right),\operatorname{depth}\left(P\right)+1\}\)_, the equality holds if_ \(\operatorname{depth}\left(N\right)\)__\(\neq\operatorname{depth}\left(P\right)\)_._
2. \(\operatorname{reg}\left(M\right)\leq\max\{\operatorname{reg}\left(N\right), \operatorname{reg}\left(P\right)+1\}\)_, the equality holds if_ \(\operatorname{reg}\left(N\right)\neq\operatorname{reg}\left(P\right)\)_._
**Lemma 2.4**.: ([13, Lemma 2.2, Lemma 3.2]) _Let \(S_{1}=\mathbb{K}[x_{1},\ldots,x_{m}]\) and \(S_{2}=\mathbb{K}[x_{m+1},\ldots,x_{n}]\) be two polynomial rings over \(\mathbb{K}\), let \(I\subset S_{1}\) and \(J\subset S_{2}\) be two non-zero homogeneous ideals. Let \(S=S_{1}\otimes_{\mathbb{K}}S_{2}\). Then we have_
1. \(\operatorname{reg}\left(S/(I+J)\right)=\operatorname{reg}\left(S_{1}/I\right)+ \operatorname{reg}\left(S_{2}/J\right)\)_;_
2. \(\operatorname{depth}\left(S/(I+J)\right)=\operatorname{depth}\left(S_{1}/I \right)+\operatorname{depth}\left(S_{2}/J\right)\)_;_
For a subset \(A\subset V(G)\), let \((A)=(v\ |\ v\in A)\) be an ideal of \(S=\mathbb{K}[V]\) generated by the element in \(A\). The following lemma is very important for the whole paper.
**Lemma 2.5**.: ([25, Lemma 1.5]) _Let \(G=(V,E)\) be a connected simple graph. Let \(J=(N_{G}(v))+I_{G\setminus N_{G}[v]}\) and \(K=(v)+I_{G\setminus v}\), where \(v\in V\). Then_
1. \(J+K=(N_{G}[v])+I_{G\setminus N_{G}[v]}\)_;_
2. \(I_{G}=J\cap K\)_;_
3. \(\operatorname{depth}(S/J)=\operatorname{depth}(S/(J+K))+1\)_;_
4. \(\operatorname{reg}(S/J)=\operatorname{reg}(S/(J+K))\)_._
A graph \(G\) is called _bipartite_ if there exists a _bipartition_ \(V(G)=V_{1}\sqcup V_{2}\) with \(V_{1}\cap V_{2}=\emptyset\) such that each edge of \(G\) is of the form \(\{i,j\}\) with \(i\in V_{1}\) and \(j\in V_{2}\). For a positive integer \(n\), let \([n]=\{1,2,\ldots,n\}\) by convention. In [12], Herzog and Hibi classified all C-M bipartite graphs. We state their result below.
**Theorem 2.6**.: ([12, Theorem 3.4]) _Let \(G=(V(G),E(G))\) be a bipartite graph with bipartition \(V(G)=\{x_{1},x_{2},\ldots,x_{n}\}\sqcup\{y_{1},y_{2},\ldots,y_{m}\}\). Then \(G\) is C-M if and only if \(n=m\), and there exists a labeling such that_
1. \(\{x_{i},y_{i}\}\in E(G)\) _for_ \(i\in[n]\)_,_
2. _if_ \(\{x_{i},y_{j}\}\in E(G)\)_, then_ \(i\leq j\)_, and_
3. _if_ \(\{x_{i},y_{j}\}\in E(G)\) _and_ \(\{x_{j},y_{k}\}\in E(G)\) _with_ \(i<j<k\)_, then_ \(\{x_{i},y_{k}\}\in E(G)\)_._
By Theorem 2.6, the vertices \(y_{1}\) and \(x_{n}\) must be of degree one, and their neighbors are \(x_{1}\) and \(y_{n}\), respectively. Let \(N_{G}(y_{n})=\{x_{i_{1}},\ldots,x_{i_{s}},x_{n}\}\) for some \(x_{i_{j}}\in\{x_{1},\ldots,x_{n}\}\). Francisco et al. in [7] showed:
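The labeling conditions of Theorem 2.6 can be checked mechanically. The sketch below encodes edges as pairs \((i,j)\) standing for \(\{x_{i},y_{j}\}\) and tests one given labeling; since a graph is C-M precisely when some relabeling satisfies the conditions, a negative answer for a particular labeling is not conclusive.

```python
def satisfies_cm_labeling(n, edges):
    """Check conditions (1)-(3) of Theorem 2.6 for the given labeling.
    `edges` is a set of pairs (i, j) standing for the edge {x_i, y_j}."""
    E = set(edges)
    if any((i, i) not in E for i in range(1, n + 1)):   # (1) the matching edges x_i y_i
        return False
    if any(i > j for (i, j) in E):                      # (2) only "upward" edges x_i y_j, i <= j
        return False
    return all((i, k) in E                              # (3) the transitivity-type condition
               for (i, j) in E for (jj, k) in E
               if jj == j and i < j < k)

# Example: edges x1y1, x2y2, x1y2 (a 4-vertex path) satisfy the conditions.
print(satisfies_cm_labeling(2, {(1, 1), (2, 2), (1, 2)}))   # True
```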
**Lemma 2.7**.: ([7, Lemma 3.4]) _Let \(G\) be a C-M bipartite graph with bipartition \(V(G)=\{x_{1},\ldots,x_{n}\}\sqcup\{y_{1},\ldots,y_{n}\}\). Then_
1. \(G\backslash\{x_{n},y_{n}\}\) _is a C-M bipartite graph._
2. \(G\backslash\{x_{i_{1}},y_{i_{1}},\ldots,x_{i_{s}},y_{i_{s}},x_{n},y_{n}\}\) _is a C-M bipartite graph._
For a proper ideal \(I\subset S\), its _arithmetic rank_, denoted by \(\operatorname{ara}(I)\), is the minimum number of elements of \(S\) that generate an ideal whose radical is \(I\). An ideal is said to be a _set-theoretic complete intersection_ if its arithmetic rank is equal to its height. In general, if \(I\) is a square-free monomial ideal, we have the well-known inequalities
\[\operatorname{height}(I)\leq\operatorname{pd}(S/I)\leq\operatorname{ara}(I)\]
where \(\operatorname{height}(I)\) is the height of \(I\) and \(\operatorname{pd}(S/I)\) is the projective dimension of the quotient ring \(S/I\).
**Lemma 2.8**.: _Let \(G=(V(G),E(G))\) be a C-M bipartite graph without isolated vertices. Then_
1. \(\operatorname{depth}(S/I_{G})=\frac{|V(G)|}{2}\)_;_
2. \(\operatorname{reg}(S/I_{G})=\vartheta(G)\)_, where_ \(\vartheta(G)\) _is the induced matching number of_ \(G\)_._
Proof.: (1) Since \(G\) is a C-M bipartite graph, \(I_{G}\) is unmixed and a set-theoretic complete intersection by [2, Corollary 3.5]. This forces \(\operatorname{pd}(S/I_{G})=\operatorname{height}(I_{G})=\frac{|V(G)|}{2}\). It follows from the graded Auslander-Buchsbaum formula that \(\operatorname{depth}(S/I_{G})=|V(G)|-\operatorname{pd}(S/I_{G})=\frac{|V(G)|}{2}\).
(2) is a direct consequence of [20, Corollary 3.4].
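Since the regularity in Lemma 2.8(2) is the induced matching number \(\vartheta(G)\), a brute-force computation suffices for the small examples appearing in this paper. The following exponential-time sketch is meant only for illustration.

```python
from itertools import combinations
import networkx as nx

def induced_matching_number(G):
    """Brute-force theta(G): the largest set of pairwise disjoint edges whose
    endpoint set induces exactly those edges (exponential time, small graphs only)."""
    edges = list(G.edges())
    for k in range(len(edges), 0, -1):
        for M in combinations(edges, k):
            verts = {v for e in M for v in e}
            if len(verts) == 2 * k and G.subgraph(verts).number_of_edges() == k:
                return k
    return 0

# The 4-vertex path is a C-M bipartite graph with theta = 1,
# so reg(S/I_G) = 1, in agreement with Lemma 2.9 for n = 4.
print(induced_matching_number(nx.path_graph(4)))   # 1
```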
**Lemma 2.9**.: ([23, Theorem 3.3, Corollary 3.3]) _Let \(n\geq 2\) be an integer and \(P_{n}\) be a path with \(n\) vertices, then_
\[\operatorname{depth}(S/I_{P_{n}})=\lceil\frac{n}{3}\rceil\text{ and }\operatorname{reg}(S/I_{P_{n}})=\lfloor\frac{n+1}{3}\rfloor.\]
## 3. Study of bipartite graphs
In this section, we will study the depth and regularity of a bipartite graph obtained from a Cohen-Macaulay bipartite graph by deleting its one leaf. We give some exact formulas for the depth and regularity of the edge ideal of such a graph.
**Lemma 3.1**.: _Let \(G\) be a C-M bipartite graph with a leaf \(u\). Let \(N_{G}(u)=\{v\}\) with \(\deg_{G}(v)\geq 2\). Then_
\[\operatorname{depth}(S_{G\backslash u}/I_{G\backslash u})=\operatorname{depth }(S_{G}/I_{G})-1.\]
Proof.: Let \(V(G)=\{x_{1},x_{2},\ldots,x_{n}\}\sqcup\{y_{1},y_{2},\ldots,y_{n}\}\) and \(u=x_{i}\) for some \(i\in[n]\), by symmetry. Then \(v=y_{i}\) with \(\deg_{G}(y_{i})\geq 2\). Thus \(n\geq 2\). We prove the claimed formula by induction on \(n\). If \(n=2\), then \(G\) and \(G\backslash u\) are paths with \(4\) and \(3\) vertices respectively. This case is covered by Lemma 2.9.
In the following, we assume that \(n\geq 3\). Let \(H=G\backslash u\) and \(N_{H}(v)=\{x_{i_{1}},\ldots,x_{i_{s}}\}\) with \(1\leq i_{1}<i_{2}<\cdots<i_{s}<i\), then \(\deg_{H}(y_{i_{1}})=1\) by Theorem 2.6(2). Let \(N_{H}(x_{i_{1}})=\{y_{j_{1}},y_{j_{2}},\ldots,y_{j_{t}}\}\) with \(j_{1}=i_{1}<j_{2}<\cdots<j_{t}\leq n\). Set \(J=(N_{H}(x_{i_{1}}))+I_{H\backslash N_{H}[x_{i_{1}}]}\), \(K=(x_{i_{1}})+I_{H\backslash x_{i_{1}}}\). We distinguish between the following two cases:
(1) If \(|N_{H}(v)|=1\), then \(s=1\) and \(H\backslash x_{i_{1}}\) is the disjoint union of \(G\backslash\{u,v,x_{i_{1}},y_{i_{1}}\}\) and the isolated set \(\{y_{i_{1}},v\}\). Thus, by Lemma 2.8(1), we have
\[\operatorname{depth}(S_{H}/K)=2+(n-2)=n.\]
Meanwhile, \(H\backslash N_{H}[x_{i_{1}}]\) has one of the following forms. Figure 1 will be helpful in understanding the arguments.
1. \(H\backslash N_{H}[x_{i_{1}}]=G\backslash\{u,v,x_{i_{1}},y_{i_{1}}\}\);
2. \(H\backslash N_{H}[x_{i_{1}}]\) is the disjoint union of \(G\backslash\{x_{j_{1}},y_{j_{1}},\ldots,x_{j_{t}},y_{j_{t}}\}\) and the isolated set \(\{x_{j_{2}},\ldots,x_{j_{t}}\}\backslash\{u\}\).
By Lemma 2.7(2), we get that both \(G\backslash\{u,v,x_{i_{1}},y_{i_{1}}\}\) and \(G\backslash\{x_{j_{1}},y_{j_{1}},\ldots,x_{j_{t}},y_{j_{t}}\}\) are C-M bipartite graphs. We consider the following two subcases:
(i) If \(H\backslash N_{H}[x_{i_{1}}]\) is of form (a), then we get by Lemma 2.8(1) that
\[\operatorname{depth}(S_{H}/J)=1+\operatorname{depth}(S_{H\backslash N_{H}[x_ {i_{1}}]}/I_{H\backslash N_{H}[x_{i_{1}}]})=1+(n-2)=n-1.\]
(ii) If \(H\backslash N_{H}[x_{i_{1}}]\) is of form (b), then we also have
\[\operatorname{depth}(S_{H}/J)=1+\operatorname{depth}(S_{H\backslash N_{H}[x_{i_ {1}}]}/I_{H\backslash N_{H}[x_{i_{1}}]})=1+(t-2)+(n-t)=n-1.\]
(2) If \(|N_{H}(v)|\geq 2\), then \(H\backslash x_{i_{1}}\) is the disjoint union of \(G\backslash\{x_{i_{1}},y_{i_{1}},u\}\) and an isolated vertex \(y_{i_{1}}\). Note that \(G\backslash\{x_{i_{1}},y_{i_{1}},u\}\) can be viewed as \(G_{1}\backslash u\), where \(G_{1}=G\backslash\{x_{i_{1}},y_{i_{1}}\}\). In this case, let \(H^{\prime}=G_{1}\backslash u\), then \(|N_{H^{\prime}}(v)|=|N_{H}(v)|-1\). Thus, by induction and Lemma 2.8(1), we have
\[\operatorname{depth}(S_{H}/K) =1+\operatorname{depth}(S_{G\backslash\{x_{i_{1}},y_{i_{1}},u\}}/I_{G\backslash\{x_{i_{1}},y_{i_{1}},u\}})\] \[=1+\operatorname{depth}(S_{G_{1}}/I_{G_{1}})-1=\operatorname{depth}(S_{G_{1}}/I_{G_{1}})=n-1.\]
At the same time, \(H\backslash N_{H}[x_{i_{1}}]\) has one of the following forms. Figure 2 will be helpful in understanding the arguments.
1. \(H\backslash N_{H}[x_{i_{1}}]\) is the disjoint union of \(G\backslash\{x_{j_{1}},\ldots,x_{j_{t}},y_{j_{1}},\ldots,y_{j_{t}}\}\) and the isolated set \(\{x_{j_{2}},\ldots,x_{j_{t}}\}\backslash\{u\}\);
2. \(H\backslash N_{H}[x_{i_{1}}]=G\backslash\{u,v,x_{i_{1}},y_{i_{1}}\}\).
As shown in \((a)\) and \((b)\) of case (1) above, we can get
\[\operatorname{depth}(S_{H}/J)=1+\operatorname{depth}(S_{H\backslash N_{H}[x _{i_{1}}]}/I_{H\backslash N_{H}[x_{i_{1}}]})=1+(n-2)=n-1.\]
Furthermore, using Lemmas 2.2(1), 2.5(2), 2.3(1) and the following exact sequence
\[0\longrightarrow\frac{S_{H}}{J\cap K}\longrightarrow\frac{S_{H}}{J}\oplus \frac{S_{H}}{K}\longrightarrow\frac{S_{H}}{J+K}\longrightarrow 0, \tag{1}\]
we get the expected result. \(\square\)
Figure 1:
**Remark 3.2**.: _Let \(G\) be a C-M bipartite graph with a leaf \(u\). Let \(N_{G}(u)=\{v\}\) with \(\deg_{G}(v)=1\). Then \(G\backslash u\) is the disjoint union of \(G\backslash\{u,v\}\) and an isolated vertex \(v\). So \(\operatorname{depth}(S_{G\backslash u}/I_{G\backslash u})=1+\operatorname{depth }(S_{G\backslash\{u,v\}}/I_{G\backslash\{u,v\}})=\operatorname{depth}(S_{G} /I_{G})\)._
**Lemma 3.3**.: ([1, Lemma 3.5]) _Let \(G\) be a simple graph and \(H\) be its induced subgraph. Then \(\operatorname{reg}(I_{H})\leq\operatorname{reg}(I_{G})\)._
**Lemma 3.4**.: _Let \(G=(V,E)\) be a C-M bipartite graph with a leaf \(u\). Let \(N_{G}(u)=\{v\}\), \(J=(N_{G}(v))+I_{G\backslash N_{G}[v]}\) and \(K=(v)+I_{G\backslash v}\). Then_
1. \(\operatorname{reg}(S_{G}/J)\leq\operatorname{reg}(S_{G}/I_{G})-1\)_;_
2. \(\operatorname{reg}(S_{G}/K)\leq\operatorname{reg}(S_{G}/I_{G})\)_._
Proof.: It is clear that \(\operatorname{reg}(S_{G}/K)\leq\operatorname{reg}(S_{G}/I_{G})\) and \(\operatorname{reg}(S_{G}/J)\leq\operatorname{reg}(S_{G}/I_{G})\) by Lemma 3.3, since \(\operatorname{reg}(S_{G}/K)=\operatorname{reg}(S_{G\backslash v}/I_{G \backslash v})\), \(\operatorname{reg}(S_{G}/J)=\operatorname{reg}(S_{G\backslash N_{G}[v]}/I_{G \backslash N_{G}[v]})\) and both \(G\backslash v\) and \(G\backslash N_{G}[v]\) are induced subgraphs of \(G\).
Let \(V=X\sqcup Y\) with \(X=\{x_{1},x_{2},\ldots,x_{n}\}\), \(Y=\{y_{1},y_{2},\ldots,y_{n}\}\) and \(u=x_{\ell}\). Suppose \(N_{G}(u)=\{v\}\) and \(N_{G}(v)=\{x_{i_{1}},\ldots,x_{i_{t}}\}\) with \(1\leq i_{1}<i_{2}<\cdots<i_{t}=\ell\). Two cases are discussed below:
(i) If \(N_{G}(v)=X\), then \(G\backslash N_{G}[v]=Y\backslash\{v\}\) consists of isolated points. Hence, \(I_{G\backslash N_{G}[v]}=0\), which implies that \(\operatorname{reg}(S_{G}/J)=\operatorname{reg}(S_{G\backslash N_{G}[v]}/I_{G \backslash N_{G}[v]})=0\).
(ii) If \(N_{G}(v)\subsetneq X\), then \(G\backslash N_{G}[v]\) is the disjoint union of a graph \(H\) and the isolated set \(\{y_{i_{1}},y_{i_{2}},\ldots,y_{i_{t-1}}\}\), where \(H=G\backslash\{x_{i_{1}},\ldots,x_{i_{t}},y_{i_{1}},\ldots,y_{i_{t}}\}\). So by Lemma 2.8(2) we have
\[\operatorname{reg}(S_{G}/J)=\operatorname{reg}(S_{G\backslash N_{G}[v]}/I_{G \backslash N_{G}[v]})=\operatorname{reg}(S_{H}/I_{H})=\vartheta(H),\]
since \(H\) is a C-M bipartite graph by Lemma 2.7(2). Let \(M=\{e_{1},e_{2},\ldots,e_{\vartheta(H)}\}\) be an induced matching of \(H\) and \(e=\{u,v\}\). Since \(V(H)\cap N_{G}[v]=\emptyset\) and \(u\) is a leaf with \(N_{G}(u)=\{v\}\), we have \(M\cap e=\emptyset\), which implies \(M\sqcup\{e\}\) is an induced matching of \(G\). Hence, in this case, \(\operatorname{reg}(S_{G}/I_{G})\geq\vartheta(H)+1\), establishing the claim.
**Lemma 3.5**.: _Let \(G=(V,E)\) be a C-M bipartite graph with a leaf \(u\). Let \(N_{G}(u)=\{v\}\) with \(\deg_{G}(v)\geq 2\). Suppose \(V=X\sqcup Y\) where \(X=\{x_{1},x_{2},\ldots,x_{n}\}\) and \(Y=\{y_{1},y_{2},\ldots,y_{n}\}\). Let \(u=y_{i_{1}}\), \(N_{G}(v)=\{y_{i_{1}},y_{i_{2}},\ldots,y_{i_{t}}\}\) with \(1\leq i_{1}<i_{2}<\cdots<i_{t}\leq n\) and \(w=y_{i_{t}}\). Suppose also that \(J=(N_{G}(w))+I_{G\backslash N_{G}[w]}\), \(K=(w)+I_{G\backslash w}\). If \(\vartheta(G\backslash v)=\vartheta(G)-1\), then_
1. \(\operatorname{reg}(S_{G}/J)\leq\operatorname{reg}(S_{G}/I_{G})-2\)_;_
2. \(\operatorname{reg}(S_{G}/K)=\operatorname{reg}(S_{G}/I_{G})\)_._
Proof.: Since \(N_{G}(v)=\{y_{i_{1}},y_{i_{2}},\ldots,y_{i_{t}}\}\) with \(1\leq i_{1}<i_{2}<\cdots<i_{t}\leq n\), we get that \(x_{i_{t}}\) is a leaf and \(N_{G}(x_{i_{t}})=\{w\}\). Let \(N_{G}(w)=\{x_{j_{1}},x_{j_{2}},\ldots,x_{j_{m}}\}\) with \(1\leq j_{1}<j_{2}<\cdots<j_{m}=i_{t}\). It follows that \(\operatorname{reg}(S_{G}/K)=\operatorname{reg}(S_{G\setminus w}/I_{G\setminus w})\) and \(\operatorname{reg}(S_{G}/J)\leq\operatorname{reg}(S_{G}/I_{G})-1\) from Lemma 3.4.
(1) Suppose, to the contrary, that \(\operatorname{reg}(S_{G}/J)=\operatorname{reg}(S_{G}/I_{G})-1\). Then, by Lemma 2.8, we have
\[\operatorname{reg}(S_{G}/J)=\vartheta(G)-1. (\dagger)\]
We distinguish between the following two cases:
(i) If \(N_{G}(w)=X\), then \(G\backslash N_{G}[w]=Y\backslash\{w\}\) consists of isolated points. It follows that \(\operatorname{reg}(S_{G}/J)=\operatorname{reg}(S_{G\backslash N_{G}[w]}/I_{G\backslash N_{G}[w]})=0\), which implies \(\vartheta(G)=1\) by formula \((\dagger).\) Since \(\deg_{G}(v)\geq 2\), the graph \(G\backslash\{u,v\}\) contains at least one edge, so \(\operatorname{reg}(S_{G\backslash\{u,v\}}/I_{G\backslash\{u,v\}})\geq 1\). On the other hand, since \(G\backslash\{u,v\}\) is a C-M bipartite graph by Lemma 2.7(1), it follows from Lemma 2.8(2) that \(\operatorname{reg}(S_{G\backslash\{u,v\}}/I_{G\backslash\{u,v\}})=\vartheta(G\backslash\{u,v\})=\vartheta(G\backslash v)=\vartheta(G)-1\). Thus \(\vartheta(G)\geq 2\), a contradiction to \(\vartheta(G)=1\).
(ii) If \(N_{G}(w)\subsetneq X\), then \(G\backslash N_{G}[w]\) is the disjoint union of \(H\) and the isolated set \(\{y_{j_{1}},y_{j_{2}},\ldots,y_{j_{m-1}}\}\), where \(H=G\backslash\{x_{j_{1}},\ldots,x_{j_{m}},y_{j_{1}},\ldots,y_{j_{m}}\}\) is a C-M bipartite graph by Lemma 2.7(2). So by Lemma 2.8(2) we have
\[\operatorname{reg}(S_{G}/J)=\operatorname{reg}(S_{G\backslash N_{G}[w]}/I_{G\backslash N_{G}[w]})=\operatorname{reg}(S_{H}/I_{H})=\vartheta(H).\]
It follows that \(\vartheta(H)=\vartheta(G)-1\) by formula \((\dagger).\) Let \(M=\{e_{1},e_{2},\ldots,e_{\vartheta(G)-1}\}\) be an induced matching of \(H\) and \(e=\{x_{i_{t}},w\}\). Since \(V(H)\cap N_{G}[w]=\emptyset\) and \(x_{i_{t}}\) is a leaf with \(N_{G}(x_{i_{t}})=\{w\}\), we have \(M\cap e=\emptyset\), which implies that \(M\sqcup\{e\}\) is an induced matching of \(G\backslash v\). Note that the size of \(M\sqcup\{e\}\) is \(\vartheta(G)\), which contradicts \(\vartheta(G\backslash v)=\vartheta(G)-1\).
(2) Let \(M=\{e_{1},e_{2},\ldots,e_{\vartheta(G)}\}\) be any induced matching of \(G\). Claim: \(\{u,v\}\in M\).
Indeed, if \(\{u,v\}\notin M\), then \(v\) is a vertex of \(e_{i}\) for some \(i\in[\vartheta(G)]\), since \(\vartheta(G\backslash v)=\vartheta(G)-1\). Let \(e_{i}=\{v,y_{i_{t}}\}\); then \(x_{i_{t}}\cap e_{j}=\emptyset\) for any \(e_{j}\in M\) with \(j\neq i\). Otherwise, if \(x_{i_{t}}\cap e_{j}\neq\emptyset\) for some \(e_{j}\in M\) with \(j\neq i\), then we choose the edge \(e=\{x_{i_{t}},y_{i_{t}}\}\), so that \(e\cap e_{i}=\{y_{i_{t}}\}\) and \(e\cap e_{j}=\{x_{i_{t}}\}\), which contradicts \(M\) being an induced matching of \(G\). This implies that \(x_{i_{t}}\) cannot be a vertex of any edge in the set \(M\backslash\{e_{i}\}\). Substituting the edge \(\{x_{i_{t}},y_{i_{t}}\}\) for \(e_{i}\) yields an induced matching of \(G\backslash v\). Consequently, \(\vartheta(G\backslash v)=\vartheta(G)\), which contradicts \(\vartheta(G\backslash v)=\vartheta(G)-1\).
Note that \(w=y_{i_{t}}\) and \(N_{G}(v)=\{y_{i_{1}},y_{i_{2}},\ldots,y_{i_{t}}\}\), thus \(w\in N_{G}(v)\). Since \(\{u,v\}\in M\), \(w\) cannot be a vertex of any edge in \(M\) by the definition of induced matching, which implies that \(\vartheta(G\backslash w)=\vartheta(G)\). It follows that \(\operatorname{reg}(S_{G}/K)=\operatorname{reg}(S_{G\backslash w}/I_{G\backslash w})=\operatorname{reg}(S_{G\backslash\{w,x_{i_{t}}\}}/I_{G\backslash\{w,x_{i_{t}}\}})=\vartheta(G\backslash\{w,x_{i_{t}}\})=\vartheta(G\backslash w)=\vartheta(G)=\operatorname{reg}(S_{G}/I_{G})\) by Lemmas 2.7(1) and 2.8(2).
**Remark 3.6**.: _Let \(G=(V,E)\) be a C-M bipartite graph with a leaf \(u\). Suppose \(N_{G}(u)=\{v\}\) with \(\deg_{G}(v)\geq 2\). If \(\vartheta(G\backslash v)<\vartheta(G)\), then we can obtain \(\operatorname{reg}(S_{G}/I_{G})\geq 2\) by similar arguments as the subcase \((ii)\) in the proof of Lemma 3.4._
**Theorem 3.7**.: _Let \(G=(V,E)\) be a C-M bipartite graph with a leaf \(u\). Let \(N_{G}(u)=\{v\}\), then_
\[\operatorname{reg}(S_{G\backslash u}/I_{G\backslash u})=\operatorname{reg}(S_{G}/I _{G})-s,\]
_where \(s=\begin{cases}0,&\text{if }\vartheta(G\backslash v)=\vartheta(G),\\ 1,&\text{otherwise}.\end{cases}\)_
_Proof._ Let \(V=X\sqcup Y\) with \(X=\{x_{1},\ldots,x_{n}\}\), \(Y=\{y_{1},\ldots,y_{n}\}\) and \(u=y_{\ell}\). Assume that \(N_{G}(u)=\{v\}\) and \(N_{G}(v)=\{y_{i_{1}},y_{i_{2}},\ldots,y_{i_{t}}\}\) with \(\ell=i_{1}<i_{2}<\cdots<i_{t}\leq n\). Let \(H=G\backslash u\).
(1) If \(\vartheta(G\backslash v)=\vartheta(G)\). In this case, we suppose \(J=(N_{H}(v))+I_{H\backslash N_{H}[v]}\), \(K=(v)+I_{H\backslash v}\), then \(I_{H}=J\cap K\) and \(H\backslash\{x_{i_{1}},x_{i_{2}},y_{i_{2}},\ldots,x_{i_{t}},y_{i_{t}}\}=G \backslash\{x_{i_{1}},y_{i_{1}},\ldots,x_{i_{t}},y_{i_{t}}\}\) is a C-M bipartite graph by Lemma 2.7(2). Thus by Lemma 3.4(1), we have \(\text{reg}(S_{H}/J)=\text{reg}(S_{H\backslash N_{H}[v]}/I_{H\backslash N_{H}[ v]})=\text{reg}(S_{G\backslash N_{G}[v]}/I_{G\backslash N_{G}[v]})\leq\text{ reg}(S_{G}/I_{G})-1\). Meanwhile, \(H\backslash v=G\backslash\{u,v\}\) is a C-M bipartite graph by Lemma 2.7 (2). Thus \(\text{reg}(S_{H}/K)=\text{reg}(S_{H\backslash v}/I_{H\backslash v})=\text{reg} (S_{G\backslash\{u,v\}}/I_{G\backslash\{u,v\}})=\vartheta(G\backslash\{u,v\})= \vartheta(G\backslash v)=\text{reg}(S_{G}/I_{G})\). By Lemmas 2.2(2), 2.3(2), 2.5(4) and the exact sequence (1), we obtain \(\text{reg}(S_{G\backslash u}/I_{G\backslash u})=\text{reg}(S_{G}/I_{G})\).
(2) If \(\vartheta(G\backslash v)\neq\vartheta(G)\). We prove the statement by induction on \(\text{deg}_{G}(v)\).
(i) If \(\text{deg}_{G}(v)=1\), then \(H\) is the disjoint union of \(G\backslash\{u,v\}\) and an isolated vertex \(v\), and \(G\backslash\{u,v\}\) is a C-M bipartite graph by Lemma 2.7(1). Thus, \(\text{reg}(S_{H}/I_{H})=\text{reg}(S_{G\backslash\{u,v\}}/I_{G\backslash\{u,v\}})=\vartheta(G\backslash\{u,v\})=\vartheta(G\backslash v)=\vartheta(G)-1=\text{reg}(S_{G}/I_{G})-1\).
(ii) If \(\text{deg}_{G}(v)\geq 2\). Let \(w=y_{i_{t}}\) and \(N_{G}(w)=\{x_{j_{1}},x_{j_{2}},\ldots,x_{j_{m}}\}\), where \(1\leq j_{1}<j_{2}<\cdots<j_{m}=i_{t}\). In this case, let \(J=(N_{H}(w))+I_{H\backslash N_{H}[w]}\), \(K=(w)+I_{H\backslash w}\), then \(I_{H}=J\cap K\). We divide into the following two cases for \(H\backslash N_{H}[w]\):
(a) If \(N_{H}(w)=X\), then \(H\backslash N_{H}[w]=Y\backslash\{u,w\}\) consists of isolated points. Hence, \(I_{H\backslash N_{H}[w]}=0\), which implies \(\text{reg}(S_{H}/J)=\text{reg}(S_{H\backslash N_{H}[w]}/I_{H\backslash N_{H}[w]})=0\leq\text{reg}(S_{G}/I_{G})-2\) by Remark 3.6.
(b) If \(N_{H}(w)\subsetneq X\), then \(H\backslash N_{H}[w]\) is the disjoint union of \(H^{\prime}\) and the isolated set \(\{y_{j_{1}},y_{j_{2}},\ldots,y_{j_{m-1}}\}\backslash\{u\}\), where \(H^{\prime}=H\backslash\{x_{j_{1}},y_{j_{1}},\ldots,x_{j_{m}},y_{j_{m}}\}\). By Lemma 2.7(2), \(H^{\prime}=H\backslash\{x_{j_{1}},y_{j_{1}},\ldots,x_{j_{m}},y_{j_{m}}\}=G\backslash\{x_{j_{1}},y_{j_{1}},\ldots,x_{j_{m}},y_{j_{m}}\}\) is a C-M bipartite graph. Thus \(\text{reg}(S_{H}/J)=\text{reg}(S_{H\backslash N_{H}[w]}/I_{H\backslash N_{H}[w]})=\text{reg}(S_{G\backslash N_{G}[w]}/I_{G\backslash N_{G}[w]})\)
\(\leq\text{reg}(S_{G}/I_{G})-2\) by Lemma 3.5(1).
In order to compute \(\text{reg}(S_{H}/K)\), we apply induction on \(\text{deg}_{G}(v)\).
If \(\text{deg}_{G}(v)=2\), \(H\backslash w\) is the disjoint union of \(G\backslash\{x_{i_{1}},y_{i_{1}},x_{i_{t}},y_{i_{t}}\}\) and isolated set \(\{x_{i_{1}},x_{i_{t}}\}\), then \(\text{reg}(S_{H}/K)=\text{reg}(S_{H\backslash w}/I_{H\backslash w})=\text{reg}(S_ {G\backslash\{x_{i_{1}},y_{i_{1}},x_{i_{t}},y_{i_{t}}\}}/I_{G\backslash\{x_{i_{1} },y_{i_{1}},x_{i_{t}},y_{i_{t}}\}})\)\(=\vartheta(G)-1=\text{reg}(S_{G}/I_{G})-1\).
Now assume that \(\text{deg}_{G}(v)\geq 3\). Let \(G^{\prime}=G\backslash\{x_{i_{t}},y_{i_{t}}\}\), then \(G^{\prime}\) is a C-M bipartite graph with a leaf \(x_{i_{t-1}}\) and \(\text{deg}_{G^{\prime}}(v)=\text{deg}_{G}(v)-1\). Meanwhile, \(H\backslash w\) is the disjoint union of \(G^{\prime}\backslash u\) and isolated point \(x_{i_{t}}\). Thus
\[\text{reg}(S_{H}/K) =\text{reg}(S_{H\backslash w}/I_{H\backslash w})=\text{reg}(S_{G^{ \prime}\backslash u}/I_{G^{\prime}\backslash u})=\text{reg}(S_{G^{\prime}}/I_{G^{ \prime}})-1\] \[=\vartheta(G\backslash\{x_{i_{t}},y_{i_{t}}\})-1=\text{reg}(S_{G} /I_{G})-1.\]
Since \(I_{H}=J\cap K\), we obtain \(\text{reg}(S_{G\backslash u}/I_{G\backslash u})=\text{reg}(S_{G}/I_{G})-s\) by applying Lemmas 2.2 and 2.3(2) to the exact sequence (1).
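As a sanity check of Theorem 3.7, consider the whiskered path obtained from the path \(1-2-3\) by attaching a pendant vertex to each of its vertices; attaching a whisker to every vertex is well known to produce a Cohen-Macaulay graph, and this one is bipartite. Taking the leaf \(u=1'\) with \(N_{G}(u)=\{1\}=\{v\}\), the brute-force computation below (same idea as the sketch after Lemma 2.8) gives \(\vartheta(G)=2\) and \(\vartheta(G\backslash v)=1\), so Theorem 3.7 predicts \(\operatorname{reg}(S_{G\backslash u}/I_{G\backslash u})=2-1=1\).

```python
from itertools import combinations
import networkx as nx

def theta(G):
    """Brute-force induced matching number (as in the sketch after Lemma 2.8)."""
    edges = list(G.edges())
    for k in range(len(edges), 0, -1):
        for M in combinations(edges, k):
            verts = {v for e in M for v in e}
            if len(verts) == 2 * k and G.subgraph(verts).number_of_edges() == k:
                return k
    return 0

# Whiskered path: path 1-2-3 with pendants 1', 2', 3'; leaf u = 1', v = 1.
G = nx.Graph([(1, 2), (2, 3), (1, "1'"), (2, "2'"), (3, "3'")])
G_minus_v = G.subgraph(set(G) - {1})
G_minus_u = G.subgraph(set(G) - {"1'"})
print(theta(G), theta(G_minus_v))   # 2 1  ->  s = 1 in Theorem 3.7
print(theta(G_minus_u))             # 1, consistent with reg(S/I_{G\u}) = 2 - 1 = 1
```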
## 4. The \(\circ\) operation and the \(*\) operation
In this section, we will study some graphs obtained from Cohen-Macaulay bipartite graphs by the \(\circ\) operation or the \(*\) operation. The main task of this section is to give some exact formulas for the depth and regularity of the edge ideals of such graphs. We start by recalling from [4, 19] the two aforementioned special gluing operations.
**Definition 4.1**.: _For \(i=1,2\), let \(G_{i}\) be a graph with a leaf \(u_{i}\). Furthermore, let \(N_{G}(u_{i})=\{v_{i}\}\) with \(\deg_{G_{i}}(v_{i})\geq 2\)._
1. _Let_ \(G\) _be a graph obtained from_ \(G_{1}\) _and_ \(G_{2}\) _by first removing the leaves_ \(u_{1},u_{2}\)_, and then identifying the vertices_ \(v_{1}\) _and_ \(v_{2}\)_. In this case, we say that_ \(G\) _is obtained from_ \(G_{1}\) _and_ \(G_{2}\) _by the_ \(\circ\) _operation and write_ \(G=(G_{1},u_{1})\circ(G_{2},u_{2})\) _or simply_ \(G=G_{1}\circ G_{2}\)_. If_ \(v_{1}\) _and_ \(v_{2}\) _are identified as the vertex_ \(v\) _in_ \(G\)_, then we also write_ \(G=G_{1}\circ_{v}G_{2}\)_. Unless otherwise specified, when we perform the_ \(\circ\) _operation in this way, we always implicitly assume that neither_ \(G_{1}\) _nor_ \(G_{2}\) _is the path graph_ \(P_{2}\) _of two vertices._
2. _Let_ \(H\) _be the graph obtained from_ \(G_{1}\) _and_ \(G_{2}\) _by identifying the vertices_ \(u_{1}\) _and_ \(u_{2}\)_. In this case, we say that_ \(H\) _is obtained from_ \(G_{1}\) _and_ \(G_{2}\) _by the_ \(*\) _operation and write_ \(H=(G_{1},u_{1})*(G_{2},u_{2})\) _or simply_ \(H=G_{1}*G_{2}\)_. If we denote the identified vertex in_ \(H\) _by_ \(u\)_, then we also write_ \(H=G_{1}*_{u}G_{2}\)_._
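A minimal sketch of the two operations with networkx is given below; it assumes that the input graphs have disjoint vertex labels (apart from the identification performed by the operation) and that each \(u_{i}\) is indeed a leaf.

```python
import networkx as nx

def circ_operation(G1, u1, G2, u2):
    """G1 o G2: delete the leaves u1, u2 and identify their neighbors v1, v2."""
    v1 = next(iter(G1[u1]))                       # unique neighbor of the leaf u1
    v2 = next(iter(G2[u2]))
    H1 = G1.subgraph(set(G1) - {u1}).copy()
    H2 = nx.relabel_nodes(G2.subgraph(set(G2) - {u2}), {v2: v1}, copy=True)
    return nx.compose(H1, H2)                     # glue at the identified vertex

def star_operation(G1, u1, G2, u2):
    """G1 * G2: identify the leaves u1 and u2."""
    return nx.compose(G1, nx.relabel_nodes(G2, {u2: u1}, copy=True))

# Example: two 4-vertex paths glued at their leaves a4 and b1.
P1 = nx.path_graph(["a1", "a2", "a3", "a4"])
P2 = nx.path_graph(["b1", "b2", "b3", "b4"])
print(sorted(circ_operation(P1, "a4", P2, "b1").edges()))  # a3 and b2 are identified
print(sorted(star_operation(P1, "a4", P2, "b1").edges()))  # a 7-vertex path
```

For these inputs, \(G_{1}\circ G_{2}\) is the \(5\)-vertex path, and Theorem 4.2 below gives \(\operatorname{depth}=2+2-2=2\), matching \(\lceil 5/3\rceil\) from Lemma 2.9.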
**Theorem 4.2**.: _Let \(G=(G_{1},u_{1})\circ(G_{2},u_{2})\), where each \(G_{i}\) is a C-M bipartite graph with a leaf \(u_{i}\). Let \(N_{G}(u_{i})=\{v_{i}\}\), then_
\[\operatorname{depth}(S_{G}/I_{G})=\operatorname{depth}(S_{G_{1}}/I_{G_{1}})+ \operatorname{depth}(S_{G_{2}}/I_{G_{2}})-s,\]
_where \(s=\begin{cases}1,&\text{if $\deg_{G_{i}}(v_{i})=1$ for all $i\in[2]$},\\ 2,&\text{otherwise.}\end{cases}\)_
Proof.: Let \(V(G_{i})=\{x_{i,1},x_{i,2},\ldots,x_{i,n_{i}}\}\sqcup\{y_{i,1},y_{i,2},\ldots,y_{i,n_{i}}\}\) for \(i\in[2]\). By symmetry, we can assume that every \(u_{i}=x_{i,j_{i}}\) for some \(j_{i}\in[n_{i}]\), and \(v_{i}=y_{i,j_{i}}\) is the only neighbor point of \(x_{i,j_{i}}\) in \(G_{i}\). Suppose \(y_{1,j_{1}}\) and \(y_{2,j_{2}}\) are identified as \(v\) in \(G\) by the \(\circ\) operation. We distinguish into three cases:
(I) If \(\deg_{G_{i}}(v_{i})=1\) for all \(i\in[2]\), then \(G\) is the disjoint union of \(G_{1}\backslash\{u_{1},v_{1}\}\), \(G_{2}\backslash\{u_{2},v_{2}\}\) and an isolated vertex \(v\). Thus, by Lemmas 2.7(1) and 2.8(1), we have
\[\operatorname{depth}(S_{G}/I_{G}) =1+[\operatorname{depth}(S_{G_{1}}/I_{G_{1}})-1]+[ \operatorname{depth}(S_{G_{2}}/I_{G_{2}})-1]\] \[=\operatorname{depth}(S_{G_{1}}/I_{G_{1}})+\operatorname{depth}(S_ {G_{2}}/I_{G_{2}})-1.\]
(II) If \(\deg_{G_{1}}(v_{1})\geq 2\) and \(\deg_{G_{2}}(v_{2})=1\), then \(G=(G_{1}\backslash u_{1})\sqcup(G_{2}\backslash\{u_{2},v_{2}\})\). Thus, by Lemmas 2.7(1), 2.8(1) and 3.1, we have
\[\operatorname{depth}(S_{G}/I_{G}) =\operatorname{depth}(S_{G_{1}\backslash u_{1}}/I_{G_{1}\backslash u _{1}})+\operatorname{depth}(S_{G_{2}\backslash\{u_{2},v_{2}\}}/I_{G_{2} \backslash\{u_{2},v_{2}\}})\] \[=\operatorname{depth}(S_{G_{1}}/I_{G_{1}})+\operatorname{depth}(S_ {G_{2}}/I_{G_{2}})-2.\]
(III) If \(\deg_{G_{i}}(v_{i})\geq 2\) for all \(i\in[2]\), we prove the statement by induction on \(n_{2}\). When \(n_{2}=2\), let \(N_{G_{2}}(v_{2})=\{x_{2,1},x_{2,2}\}\), where \(x_{2,2}=u_{2}\). In this case, \(G=G_{1}\cup_{x_{2,1}}P_{2}\) is the clique sum of \(G_{1}\) and a path \(P_{2}\) with vertex set \(\{x_{2,1},y_{2,1}\}\).
Set \(J=(N_{G}(y_{2,1}))+I_{G\backslash N_{G}[y_{2,1}]}\), \(K=(y_{2,1})+I_{G\backslash y_{2,1}}\). Since \(G\backslash N_{G}[y_{2,1}]=G_{1}\backslash u_{1}\) and \(G\backslash y_{2,1}=G_{1}\), we obtain \(\operatorname{depth}(S_{G}/J)=1+\operatorname{depth}(S_{G\backslash N_{G}[y_{2,1}]}/I_{G\backslash N_{G}[y_{2,1}]})=1+\operatorname{depth}(S_{G_{1}\backslash u_{1}}/I_{G_{1}\backslash u_{1}})=\operatorname{depth}(S_{G_{1}}/I_{G_{1}})\) by Lemma 3.1, and \(\operatorname{depth}(S_{G}/K)=\operatorname{depth}(S_{G\backslash y_{2,1}}/I_{G\backslash y_{2,1}})=\operatorname{depth}(S_{G_{1}}/I_{G_{1}})\). Applying Lemmas 2.2(1), 2.3(1) and 2.5(3) to the following exact sequence
\[0\longrightarrow\frac{S_{G}}{J\cap K}\longrightarrow\frac{S_{G}}{J}\oplus \frac{S_{G}}{K}\longrightarrow\frac{S_{G}}{J+K}\longrightarrow 0, \tag{2}\]
we get the desired depth result.
In the following, we assume that \(n_{2}\geq 3\). Let \(N_{G_{2}}(v)=\{x_{2,\ell_{1}},x_{2,\ell_{2}},\ldots,x_{2,\ell_{s}}\}\) with \(1\leq\ell_{1}<\ell_{2}<\cdots<\ell_{s}=j_{2}\), then \(\deg_{G_{2}}(y_{2,\ell_{1}})=1\) by Theorem 2.6(2). In this case, let \(N_{G}(x_{2,\ell_{1}})=\{y_{2,k_{1}},y_{2,k_{2}},\ldots,y_{2,k_{t}}\}\) with \(\ell_{1}=k_{1}<k_{2}<\cdots<k_{t}\leq n_{2}\) and \(k_{s}=j_{2}\) for some \(s\in[t]\). Set \(J=(N_{G}(x_{2,\ell_{1}}))+I_{G\backslash N_{G}[x_{2,\ell_{1}}]}\), \(K=(x_{2,\ell_{1}})+I_{G\backslash x_{2,\ell_{1}}}\). We distinguish between the following two cases:
(A1) If \(|N_{G_{2}}(v)|=2\), then \(s=2\) and \(G\backslash x_{2,\ell_{1}}\) is the disjoint union of \(G_{1}\backslash u_{1}\), \(G_{2}\backslash\{x_{2,\ell_{1}},y_{2,\ell_{1}},x_{2,j_{2}},y_{2,j_{2}}\}\) and an isolated vertex \(y_{2,\ell_{1}}\). So by Lemmas 2.8(1) and 3.1, we have
\[\operatorname{depth}(S_{G}/K) =1+\operatorname{depth}(S_{G_{1}\backslash u_{1}}/I_{G_{1} \backslash u_{1}})+\operatorname{depth}(S_{H_{1}}/I_{H_{1}})\] \[=1+[\operatorname{depth}(S_{G_{1}}/I_{G_{1}})-1]+[\operatorname{ depth}(S_{G_{2}}/I_{G_{2}})-2]\] \[=\operatorname{depth}(S_{G_{1}}/I_{G_{1}})+\operatorname{depth}( S_{G_{2}}/I_{G_{2}})-2\]
where \(H_{1}=G_{2}\backslash\{x_{2,\ell_{1}},y_{2,\ell_{1}},x_{2,j_{2}},y_{2,j_{2}}\}\).
Meanwhile, \(G\backslash N_{G}[x_{2,\ell_{1}}]\) takes one of the following forms:
1. \(G\backslash N_{G}[x_{2,\ell_{1}}]=H\sqcup H_{1}\), where \(H=G_{1}\backslash\{u_{1},v_{1}\}\);
2. \(G\backslash N_{G}[x_{2,\ell_{1}}]\) is the disjoint union of \(H\), \(H_{2}\) and the isolated set \(\{x_{2,k_{2}},\ldots,x_{2,k_{t}}\}\backslash\{u_{2}\}\), where \(H_{2}=G_{2}\backslash\{x_{2,k_{1}},\ldots,x_{2,k_{t}},y_{2,k_{1}},\ldots,y_{2,k _{t}}\}\).
Note that \(H\), \(H_{1}\) and \(H_{2}\) are C-M bipartite graphs. We consider two subcases:
(i) If \(G\backslash N_{G}[x_{2,\ell_{1}}]\) is of form (a), then we get by Lemma 2.8(1) that
\[\operatorname{depth}(S_{G}/J) =1+\operatorname{depth}(S_{H}/I_{H})+\operatorname{depth}(S_{H_{ 1}}/I_{H_{1}})\] \[=1+[\operatorname{depth}(S_{G_{1}}/I_{G_{1}})-1]+[\operatorname{ depth}(S_{G_{2}}/I_{G_{2}})-2]\] \[=\operatorname{depth}(S_{G_{1}}/I_{G_{1}})+\operatorname{depth}( S_{G_{2}}/I_{G_{2}})-2.\]
(ii) If \(G\backslash N_{G}[x_{2,\ell_{1}}]\) is of form (b), then we also have
\[\operatorname{depth}(S_{G}/J) =1+\operatorname{depth}(S_{H}/I_{H})+\operatorname{depth}(S_{H_{ 2}}/I_{H_{2}})+(t-2)\] \[=1+[\operatorname{depth}(S_{G_{1}}/I_{G_{1}})-1]+(\operatorname{ depth}(S_{G_{2}}/I_{G_{2}})-t)+(t-2)\] \[=\operatorname{depth}(S_{G_{1}}/I_{G_{1}})+\operatorname{depth}(S_{G_ {2}}/I_{G_{2}})-2.\]
(A2) If \(|N_{G_{2}}(v)|\geq 3\), then \(G\backslash x_{2,\ell_{1}}\) is the disjoint union of \(G^{\prime}\) and an isolated vertex \(y_{2,\ell_{1}}\), where \(G^{\prime}=(G_{1},u_{1})\circ(G^{\prime}_{2},u_{2})\) and \(G^{\prime}_{2}=G_{2}\backslash\{x_{2,\ell_{1}},y_{2,\ell_{1}}\}\). In this case,
we have \(|N_{G_{2}^{\prime}}(v)|=|N_{G_{2}}(v)|-1\). Thus, by induction and Lemma 2.8(1), we have
\[\operatorname{depth}(S_{G}/K) =1+\operatorname{depth}(S_{G^{\prime}}/I_{G^{\prime}})\] \[=1+[\operatorname{depth}(S_{G_{1}}/I_{G_{1}})+\operatorname{depth}(S_{G_{2}^{\prime}}/I_{G_{2}^{\prime}})-2]\] \[=\operatorname{depth}(S_{G_{1}}/I_{G_{1}})+[\operatorname{depth}(S_{G_{2}}/I_{G_{2}})-1]-1\] \[=\operatorname{depth}(S_{G_{1}}/I_{G_{1}})+\operatorname{depth}(S_{G_{2}}/I_{G_{2}})-2.\]
At the same time, \(G\backslash N_{G}[x_{2,\ell_{1}}]\) has one of the following forms:
1. \(G\backslash N_{G}[x_{2,\ell_{1}}]=H\sqcup H_{1}\), where \(H=G_{1}\backslash\{u_{1},v_{1}\}\);
2. \(G\backslash N_{G}[x_{2,\ell_{1}}]\) is the disjoint union of \(H\), \(H_{2}\) and isolated set \(\{x_{2,k_{2}},\ldots,x_{2,k_{t}}\}\backslash\{u_{2}\}\), where \(H_{2}=G_{2}\backslash\{x_{2,k_{1}},y_{2,k_{1}},x_{2,k_{2}},y_{2,k_{2}},\ldots, x_{2,k_{t}},y_{2,k_{t}}\}\).
Applying the same analysis as in forms (a) and (b) of case (A1) above, we obtain
\[\operatorname{depth}(S_{G}/J)=\operatorname{depth}(S_{G_{1}}/I_{G_{1}})+ \operatorname{depth}(S_{G_{2}}/I_{G_{2}})-2.\]
In summary, the desired result follows from Lemmas 2.2(1), 2.3(1), 2.5 and the exact sequence (2).
**Example 4.3**.: _The following are two examples that satisfy the two conditions in Theorem 4.2, respectively._
_Let \(G=G_{1}\circ G_{2}\). In Figure 3, \(G\) is the disjoint union of an isolated vertex \(v\) and a path of length \(3\) with vertex set \(\{x_{1,1},y_{1,1},y_{1,3},x_{1,3}\}\), thus \(\operatorname{depth}(S_{G}/I_{G})=3\) by Lemma 2.9. In Figure 4, let \(J=(N_{G}(x_{2,1}))+I_{G\backslash N_{G}[x_{2,1}]}\), \(K=(x_{2,1})+I_{G\backslash x_{2,1}}\). In this case, \(\operatorname{depth}(S_{G}/J)=1+\operatorname{depth}(S_{G\backslash N_{G}[x_{2,1}]}/I_{G\backslash N_{G}[x_{2,1}]})=3\) and \(\operatorname{depth}(S_{G}/K)=3\) by Lemma 2.9. It follows from Lemmas 2.2(1), 2.3, 2.5 and the exact sequence \((2)\) that \(\operatorname{depth}(S_{G}/I_{G})=3\)._
Figure 4.
Figure 3.
**Theorem 4.4**.: _Let \(G=(G_{1},u_{1})\circ(G_{2},u_{2})\), where each \(G_{i}\) is a C-M bipartite graph with a leaf \(u_{i}\). Let \(N_{G_{i}}(u_{i})=\{v_{i}\}\). If \(t=|\{i:\vartheta(G_{i}\backslash v_{i})\neq\vartheta(G_{i})\}|\), then_
\[\operatorname{reg}(S_{G}/I_{G})=\operatorname{reg}(S_{G_{1}}/I_{G_{1}})+ \operatorname{reg}(S_{G_{2}}/I_{G_{2}})-t.\]
Proof.: First, \(t\in\{0,1,2\}\) by the definition of \(t\). Let \(V_{i}\) be the vertex set of \(G_{i}\) and \(V_{i}=X_{i}\sqcup Y_{i}\) be a bipartition of \(V_{i}\) with \(X_{i}=\{x_{i,1},x_{i,2},\ldots,x_{i,n_{i}}\}\), \(Y_{i}=\{y_{i,1},y_{i,2},\ldots,y_{i,n_{i}}\}\) for \(i\in[2]\). By symmetry, let \(N_{G_{1}}(v_{1})=\{x_{1,i_{1}},x_{1,i_{2}},\ldots,x_{1,i_{m}}\}\) and \(N_{G_{2}}(v_{2})=\{y_{2,j_{1}},y_{2,j_{2}},\ldots,y_{2,j_{t}}\}\), where \(u_{1}=x_{1,i_{m}}\), \(u_{2}=y_{2,j_{1}}\), \(1\leq i_{1}<i_{2}<\cdots<i_{m}\leq n_{1}\) and \(1\leq j_{1}<j_{2}<\cdots<j_{t}\leq n_{2}\). Suppose that \(v_{1}\) and \(v_{2}\) are identified as \(v\) in \(G\) by the \(\circ\) operation and \(N_{G_{2}}(y_{2,j_{t}})=\{x_{2,h_{1}},x_{2,h_{2}},\ldots,x_{2,h_{s}}\}\) with \(1\leq h_{1}<h_{2}<\cdots<h_{s}=j_{t}\). We divide into the following two cases:
(I) If \(t=2\), then \(\vartheta(G_{i}\backslash v_{i})\neq\vartheta(G_{i})\) for all \(i\in[2]\). Now we prove the formulas for the regularity of \(S_{G}/I_{G}\) by induction on \(\deg_{G_{2}}(v_{2})\). If \(\deg_{G_{2}}(v_{2})=1\), then \(G=(G_{1},u_{1})\circ(G_{2},u_{2})\) is the disjoint union of \(G_{1}\backslash u_{1}\) and \(G_{2}\backslash\{u_{2},v_{2}\}\). If \(G_{2}\backslash\{u_{2},v_{2}\}=\emptyset\), then the desired result follows from Theorem 3.7. Now, we assume that \(G_{2}\backslash\{u_{2},v_{2}\}\neq\emptyset\). In this case, \(G_{2}\backslash\{u_{2},v_{2}\}\) is a C-M bipartite graph by Lemma 2.7(1). It follows from Lemmas 2.4(1), 2.8(2) and 3.7 that
\[\operatorname{reg}(S_{G}/I_{G}) =\operatorname{reg}(S_{G_{1}\backslash u_{1}}/I_{G_{1}\backslash u _{1}})+\operatorname{reg}(S_{G_{2}\backslash\{u_{2},v_{2}\}}/I_{G_{2} \backslash\{u_{2},v_{2}\}})\] \[=(\operatorname{reg}(S_{G_{1}}/I_{G_{1}})-1)+\vartheta(G_{2} \backslash u_{2},v_{2})\] \[=(\operatorname{reg}(S_{G_{1}}/I_{G_{1}})-1)+\vartheta(G_{2} \backslash v_{2})\] \[=(\operatorname{reg}(S_{G_{1}}/I_{G_{1}})-1)+(\vartheta(G_{2})-1)\] \[=\operatorname{reg}(S_{G_{1}}/I_{G_{1}})+\operatorname{reg}(S_{G_ {2}}/I_{G_{2}})-2.\]
Assume that \(\deg_{G_{2}}(v_{2})\geq 2\) and the regularity statement holds for \(\deg_{G_{2}}(v_{2})-1\). In this case, let \(w=y_{2,j_{t}}\), then \(x_{2,j_{t}}\) is a leaf of \(G_{2}\). Choose \(J=(N_{G}(w))+I_{G\backslash N_{G}[w]}\), \(K=(w)+I_{G\backslash w}\). thus \(G\backslash w\) is the disjoint union of \(G_{1}\circ(G_{2}\backslash\{x_{2,j_{t}},w\})\) and isolated vertices \(x_{2,j_{t}}\). Let \(G_{2}^{\prime}=G_{2}\backslash\{x_{2,j_{t}},w\}\), then \(\deg_{G_{2}^{\prime}}(v_{2})=\deg_{G_{2}}(v_{2})-1\). By the induction hypothesis, we have
\[\operatorname{reg}(S_{G}/K) =\operatorname{reg}(S_{G\backslash w}/I_{G\backslash w})= \operatorname{reg}(S_{G_{1}\circ G_{2}^{\prime}}/I_{G_{1}\circ G_{2}^{\prime}})\] \[=\operatorname{reg}(S_{G_{1}}/I_{G_{1}})+\operatorname{reg}(S_{G_ {2}^{\prime}}/I_{G_{2}^{\prime}})-2\] \[=\operatorname{reg}(S_{G_{1}}/I_{G_{1}})+\operatorname{reg}(S_{G_ {2}}/I_{G_{2}})-2.\]
where the last equality holds because of \(t=2\) and Lemma 3.5(2).
In order to compute \(\operatorname{reg}(S_{G}/J)\), we consider the induced subgraph \(G\backslash N_{G}[w]\) of \(G\). We distinguish into the following two cases:
(a) If \(N_{G_{2}}(w)=X_{2}\), then \(G\backslash N_{G}[w]\) is the disjoint union of \(G_{1}\backslash\{u_{1},v_{1}\}\) and isolated set \(Y_{2}\backslash\{u_{2},w\}\), and \(G_{1}\backslash\{u_{1},v_{1}\}\) is a C-M bipartite graph by Lemma 2.7(1). It follows that
\[\operatorname{reg}(S_{G}/J) =\operatorname{reg}(S_{G\backslash N_{G}[w]}/I_{G\backslash N_{G}[ w]})=\operatorname{reg}(S_{G_{1}\backslash\{u_{1},v_{1}\}}/I_{G_{1}\backslash\{u_{1},v_{1}\}})\] \[=\vartheta(G_{1}\backslash\{u_{1},v_{1}\})=\vartheta(G_{1} \backslash v_{1})=\operatorname{reg}(S_{G_{1}}/I_{G_{1}})-1.\]
(b) If \(N_{G_{2}}(w)\subsetneq X_{2}\), then \(G\backslash N_{G}[w]\) is the disjoint union of \(G_{1}\backslash\{u_{1},v_{1}\}\), \(H\) and the isolated set \(\{y_{2,h_{1}},y_{2,h_{2}},\ldots,y_{2,h_{s-1}}\}\backslash\{u_{2}\}\), where \(H=G_{2}\backslash\{x_{2,h_{1}},y_{2,h_{1}},\ldots,x_{2,h_{s}},y_{2,h_{s}}\}\).
By Lemmas 2.8(2) and 3.5(1), we obtain
\[\operatorname{reg}(S_{G}/J) =\operatorname{reg}(S_{G\backslash N_{G}[w]}/I_{G\backslash N_{G}[w] })=\operatorname{reg}(S_{G_{1}\backslash\{u_{1},v_{1}\}}/I_{G_{1}\backslash\{u_ {1},v_{1}\}})+\operatorname{reg}(S_{H}/I_{H})\] \[=\vartheta(G_{1}\backslash v_{1})+\operatorname{reg}(S_{H}/I_{H})\] \[=(\operatorname{reg}(S_{G_{1}}/I_{G_{1}})-1)+\operatorname{reg}( S_{H}/I_{H})\] \[\leq(\operatorname{reg}(S_{G_{1}}/I_{G_{1}})-1)+(\operatorname{ reg}(S_{G_{2}}/I_{G_{2}})-2)\] \[=\operatorname{reg}(S_{G_{1}}/I_{G_{1}})+\operatorname{reg}(S_{G _{2}}/I_{G_{2}})-3.\]
where the penultimate inequality holds by Lemma 3.5(1). Applying Lemmas 2.2(2), 2.3(2) and 2.5 to the exact sequence (2), we get the desired regularity result.
(II) If \(t\leq 1\). In this case, we choose \(J=(N_{G}(v))+I_{G\backslash N_{G}[v]}\), \(K=(v)+I_{G\backslash v}\). Thus \(G\backslash N_{G}[v]\) is the disjoint union of \(G_{1}^{\prime}\), \(G_{2}^{\prime}\) and the isolated set \(\{y_{1,i_{1}},\ldots,y_{1,i_{m-1}},x_{2,j_{2}},\ldots,x_{2,j_{t}}\}\), where \(G_{1}^{\prime}=G_{1}\backslash\{x_{1,i_{1}},y_{1,i_{1}},\ldots,x_{1,i_{m}},y_{1,i_{m}}\}\) and \(G_{2}^{\prime}=G_{2}\backslash\{x_{2,j_{1}},y_{2,j_{1}},\ldots,x_{2,j_{t}},y_{2,j_{t}}\}\). By Lemmas 2.7(2) and 2.8(2), we get
\[\operatorname{reg}(S_{G}/J) =\operatorname{reg}(S_{G\backslash N_{G}[v]}/I_{G\backslash N_{G} [v]})=\operatorname{reg}(S_{G_{1}^{\prime}}/I_{G_{1}^{\prime}})+\operatorname{ reg}(S_{G_{2}^{\prime}}/I_{G_{2}^{\prime}})\] \[\leq(\operatorname{reg}(S_{G_{1}}/I_{G_{1}})-1)+(\operatorname{ reg}(S_{G_{2}}/I_{G_{2}})-1)\] \[=\operatorname{reg}(S_{G_{1}}/I_{G_{1}})+\operatorname{reg}(S_{G _{2}}/I_{G_{2}})-2.\]
where the penultimate inequality holds by Lemma 3.4. On the other hand, \(G\backslash v\) is the disjoint union of \(G_{1}\backslash\{u_{1},v_{1}\}\) and \(G_{2}\backslash\{u_{2},v_{2}\}\). By Lemmas 2.7(1) and 2.8(2), we get
\[\operatorname{reg}(S_{G}/K) =\operatorname{reg}(S_{G\backslash v}/I_{G\backslash v})\] \[=\operatorname{reg}(S_{G_{1}\backslash\{u_{1},v_{1}\}}/I_{G_{1}\backslash\{u_{1},v_{1}\}})+\operatorname{reg}(S_{G_{2}\backslash\{u_{2},v_{2}\}}/I_{G_{2}\backslash\{u_{2},v_{2}\}})\] \[=\vartheta(G_{1}\backslash v_{1})+\vartheta(G_{2}\backslash v_{2}).\]
Thus, if \(t=0\), then \(\vartheta(G_{i}\backslash v_{i})=\vartheta(G_{i})\) for all \(i\in[2]\), and hence \(\operatorname{reg}(S_{G}/K)=\operatorname{reg}(S_{G_{1}}/I_{G_{1}})+\operatorname{reg}(S_{G_{2}}/I_{G_{2}})\). If \(t=1\), then \(\vartheta(G_{1}\backslash v_{1})=\vartheta(G_{1})\) and \(\vartheta(G_{2}\backslash v_{2})\neq\vartheta(G_{2})\), or vice versa. Thus \(\operatorname{reg}(S_{G}/K)=\operatorname{reg}(S_{G_{1}}/I_{G_{1}})+\operatorname{reg}(S_{G_{2}}/I_{G_{2}})-1\).
Applying Lemmas 2.2, 2.3(2) and 2.5 to the exact sequence (2), we obtain that \(\operatorname{reg}(S_{G}/I_{G})=\operatorname{reg}(S_{G_{1}}/I_{G_{1}})+\operatorname{reg}(S_{G_{2}}/I_{G_{2}})-t\).
**Theorem 4.5**.: _Let \(G=(G_{1},u_{1})*(G_{2},u_{2})\), where each \(G_{i}\) is a C-M bipartite graph with a leaf \(u_{i}\). Let \(N_{G_{i}}(u_{i})=\{v_{i}\}\). Let \(t=|\{i:\vartheta(G_{i}\backslash v_{i})\neq\vartheta(G_{i})\}|\). Then_
\[\operatorname{reg}(S_{G}/I_{G})=\operatorname{reg}(S_{G_{1}}/I_{G_{1}})+ \operatorname{reg}(S_{G_{2}}/I_{G_{2}})-s\]
_where \(s=\begin{cases}0,&\text{if }t\leq 1,\\ 1,&\text{if }t=2\text{.}\end{cases}\)_
Proof.: First, \(t\in\{0,1,2\}\) by the definition of \(t\). Suppose \(u_{1}\) and \(u_{2}\) are identified as \(u\) in \(G\) by the \(*\) operation. Let \(N_{G_{2}}(v_{2})=\{y_{j_{1}},y_{j_{2}},\ldots,y_{j_{m}}\}\) with \(u_{2}=y_{j_{1}}\), where \(1\leq j_{1}<\cdots<j_{m}\leq n_{2}\). Then \(G\backslash N_{G}[v_{2}]\) is the disjoint union of \(G_{1}\backslash u_{1}\), \(G_{2}\backslash\{x_{j_{1}},y_{j_{1}},\ldots,x_{j_{m}},y_{j_{m}}\}\) and the isolated set \(\{x_{j_{2}},\ldots,x_{j_{m}}\}\), and \(G\backslash v_{2}\) is the disjoint union of \(G_{1}\) and \(G_{2}\backslash\{u_{2},v_{2}\}\). We consider the following two cases:
(1) If \(t\leq 1\), then \(\vartheta(G_{i}\backslash v_{i})=\vartheta(G_{i})\) for some \(i\in[2]\). By symmetry, we assume \(\vartheta(G_{2}\backslash v_{2})=\vartheta(G_{2})\). In this case, we choose \(J=(N_{G}(v_{2}))+I_{G\backslash N_{G}[v_{2}]}\), \(K=(v_{2})+I_{G\backslash v_{2}}\). Let \(H=G_{2}\backslash\{x_{j_{1}},y_{j_{1}},\ldots,x_{j_{m}},y_{j_{m}}\}\), then by Lemmas 3.3, 3.4(1) and 2.8(2), we have
\[\operatorname{reg}(S_{G}/J) =\operatorname{reg}(S_{G\backslash N_{G}[v_{2}]}/I_{G\backslash N_ {G}[v_{2}]})=\operatorname{reg}(S_{G_{1}\backslash u_{1}}/I_{G_{1}\backslash u _{1}})+\operatorname{reg}(S_{H}/I_{H})\] \[\leq\operatorname{reg}(S_{G_{1}}/I_{G_{1}})+(\operatorname{reg}(S _{G_{2}}/I_{G_{2}})-1)\] \[=\operatorname{reg}(S_{G_{1}}/I_{G_{1}})+\operatorname{reg}(S_{G _{2}}/I_{G_{2}})-1,\] \[\operatorname{reg}(S_{G}/K) =\operatorname{reg}(S_{G\backslash v_{2}}/I_{G\backslash v_{2}}) =\operatorname{reg}(S_{G_{1}}/I_{G_{1}})+\operatorname{reg}(S_{G_{2} \backslash\{u_{2},v_{2}\}}/I_{G_{2}\backslash\{u_{2},v_{2}\}})\] \[=\operatorname{reg}(S_{G_{1}}/I_{G_{1}})+\vartheta(G_{2} \backslash\{v_{2}\})\] \[=\operatorname{reg}(S_{G_{1}}/I_{G_{1}})+\operatorname{reg}(S_{G _{2}}/I_{G_{2}}).\]
Applying Lemmas 2.2, 2.3(2) and 2.5(2) to the exact sequence (2), we obtain the desired result.
(2) If \(t=2\), then \(\vartheta(G_{i}\backslash v_{i})\neq\vartheta(G_{i})\) for all \(i\in[2]\). In this case, we choose \(J=(N_{G}(v_{2}))+I_{G\backslash N_{G}[v_{2}]}\), \(K=(v_{2})+I_{G\backslash v_{2}}\). Thus, by Theorem 3.7, we have
\[\operatorname{reg}(S_{G}/J) =\operatorname{reg}(S_{G\backslash N_{G}[v_{2}]}/I_{G\backslash N _{G}[v_{2}]})=\operatorname{reg}(S_{G_{1}\backslash u_{1}}/I_{G_{1}\backslash u _{1}})+\operatorname{reg}(S_{H}/I_{H})\] \[=(\operatorname{reg}(S_{G_{1}}/I_{G_{1}})-1)+(\operatorname{reg} (S_{G_{2}}/I_{G_{2}})-1)\] \[=\operatorname{reg}(S_{G_{1}}/I_{G_{1}})+\operatorname{reg}(S_{G _{2}}/I_{G_{2}})-2.\]
where the penultimate equality holds by the proof of Lemma 3.5(2). Meanwhile, by Lemmas 2.7(1) and 2.8(2), we have
\[\operatorname{reg}(S_{G}/K) =\operatorname{reg}(S_{G\backslash v_{2}}/I_{G\backslash v_{2}}) =\operatorname{reg}(S_{G_{1}}/I_{G_{1}})+\operatorname{reg}(S_{G_{2} \backslash\{u_{2},v_{2}\}}/I_{G_{2}\backslash\{u_{2},v_{2}\}})\] \[=\operatorname{reg}(S_{G_{1}}/I_{G_{1}})+\vartheta(G_{2} \backslash v_{2})=\operatorname{reg}(S_{G_{1}}/I_{G_{1}})+\vartheta(G_{2})-1\] \[=\operatorname{reg}(S_{G_{1}}/I_{G_{1}})+\operatorname{reg}(S_{G _{2}}/I_{G_{2}})-1.\]
Again applying Lemmas 2.2, 2.3(2) and 2.5 to the exact sequence (2), we obtain the desired result.
For two graphs \(G_{1}\) and \(G_{2}\), let their clique sum \(G_{1}\cup_{v}G_{2}\) be the union of \(G_{1}\) and \(G_{2}\) such that \(V(G_{1})\cap V(G_{2})=\{v\}\).
**Lemma 4.6**.: _Let \(G=G_{1}\cup_{u}P_{2}\) be the clique sum of a C-M bipartite graph \(G_{1}\) and a path \(P_{2}\) with vertex set \(\{u,v_{2}\}\), where \(u\) is a leaf of \(G_{1}\) and \(N_{G_{1}}(u)=\{v_{1}\}\). Then_
\[\operatorname{depth}(S_{G}/I_{G})=\operatorname{depth}(S_{G_{1}}/I_{G_{1}})\]
Proof.: If \(\operatorname{deg}_{G_{1}}(v_{1})=1\), then \(G\) is the disjoint union of \(G_{1}\backslash\{u,v_{1}\}\) and a path of length \(3\) with vertex set \(\{v_{1},v_{2},u\}\). Thus, by Lemma 2.8(1) and Lemma 2.9, we get \(\operatorname{depth}(S_{G}/I_{G})=1+\frac{|V(G_{1})|-2}{2}=\operatorname{depth}(S_{G_{1}}/I_{G_{1}})\), since \(G_{1}\backslash\{u,v_{1}\}\) is a C-M bipartite graph by Lemma 2.7(1).
If \(\deg_{G_{1}}(v_{1})\geq 2\), then we choose \(J=(N_{G}(v_{2}))+I_{G\backslash N_{G}[v_{2}]}\), \(K=(v_{2})+I_{G\backslash v_{2}}\). In this case, \(G\backslash v_{2}=G_{1}\) and \(G\backslash N_{G}[v_{2}]=G_{1}\backslash u\). Then by Lemma 3.1, we obtain that
\[\operatorname{depth}(S_{G}/J) =1+\operatorname{depth}(S_{G\backslash N_{G}[v_{2}]}/I_{G \backslash N_{G}[v_{2}]})\] \[=1+\operatorname{depth}(S_{G_{1}\backslash u}/I_{G_{1}\backslash u})\] \[=1+(\operatorname{depth}(S_{G_{1}}/I_{G_{1}})-1)\] \[=\operatorname{depth}(S_{G_{1}}/I_{G_{1}}),\] \[\operatorname{depth}(S_{G}/K) =\operatorname{depth}(S_{G\backslash v_{2}}/I_{G\backslash v_{2}} )=\operatorname{depth}(S_{G_{1}}/I_{G_{1}}).\]
Applying Lemmas 2.2, 2.3(1) and 2.5 to the exact sequence (2), we get \(\operatorname{depth}(S_{G}/I_{G})=\operatorname{depth}(S_{G_{1}}/I_{G_{1}})\).
**Theorem 4.7**.: _Let \(G=(G_{1},u_{1})*(G_{2},u_{2})\), where each \(G_{i}\) is a C-M bipartite graph with a leaf \(u_{i}\). Let \(N_{G_{i}}(u_{i})=\{v_{i}\}\). Then_
\[\operatorname{depth}(S_{G}/I_{G})=\operatorname{depth}(S_{G_{1}}/I_{G_{1}})+ \operatorname{depth}(S_{G_{2}}/I_{G_{2}})-1,\]
Proof.: For any \(i\in[2]\), let \(V(G_{i})=X_{i}\sqcup Y_{i}\) with \(X_{i}=\{x_{i,1},\ldots,x_{i,n_{i}}\}\), \(Y_{i}=\{y_{i,1},\ldots,y_{i,n_{i}}\}\) and \(u_{i}=x_{i,j_{i}}\) for some \(j_{i}\in[n_{i}]\). Then \(N_{G_{i}}(u_{i})=\{y_{i,j_{i}}\}\). Let \(v_{i}=y_{i,j_{i}}\) and \(N_{G_{1}}(v_{1})=\{x_{1,k_{1}},\ldots,x_{1,k_{t}}\}\) with \(1\leq k_{1}<\cdots<k_{t}=j_{1}\). We consider the following two cases:
(1) If \(\deg_{G_{i}}(v_{i})=1\) for some \(i\in[2]\), then we assume \(\deg_{G_{2}}(v_{2})=1\) by symmetry. Then \(G_{2}=P_{2}\sqcup(G_{2}\backslash\{u_{2},v_{2}\})\), which implies \(G=(G_{1}\cup_{u_{1}}P_{2})\sqcup(G_{2}\backslash\{u_{2},v_{2}\})\). By Lemmas 2.4(2), 2.8(1), and 4.6, we have
\[\operatorname{depth}(S_{G}/I_{G}) =\operatorname{depth}(S_{G_{1}\cup_{u_{1}}P_{2}}/I_{G_{1}\cup_{u_ {1}}P_{2}})+\operatorname{depth}(S_{G_{2}\backslash\{u_{2},v_{2}\}}/I_{G_{2} \backslash\{u_{2},v_{2}\}})\] \[=\operatorname{depth}(S_{G_{1}}/I_{G_{1}})+\operatorname{depth}(S _{G_{2}}/I_{G_{2}})-1.\]
(2) If \(\deg_{G_{i}}(v_{i})\geq 2\) for all \(i\in[2]\), then we choose \(J=(N_{G}(v_{1}))+I_{G\backslash N_{G}[v_{1}]}\), \(K=(v_{1})+I_{G\backslash v_{1}}\). In this case, \(G\backslash v_{1}\) is the disjoint union of \(G_{2}\) and \(G_{1}\backslash\{u_{1},v_{1}\}\). Thus by Lemma 2.8(1), we get
\[\operatorname{depth}(S_{G}/K) =\operatorname{depth}(S_{G_{2}}/I_{G_{2}})+\operatorname{depth}(S _{G_{1}\backslash\{u_{1},v_{1}\}}/I_{G_{1}\backslash\{u_{1},v_{1}\}})\] \[=\operatorname{depth}(S_{G_{2}}/I_{G_{2}})+\operatorname{depth}(S _{G_{1}}/I_{G_{1}})-1.\]
In order to compute the depth of \(S_{G}/J\), we distinguish the following two cases:
(i) If \(N_{G}(v_{1})=X_{1}\), then \(G\backslash N_{G}[v_{1}]\) is the disjoint union of \(G_{2}\backslash u_{2}\) and isolated set \(Y_{1}\backslash v_{1}\). By Lemmas 2.4 (2), 2.8(1) and 3.1, we have
\[\operatorname{depth}(S_{G}/J) =1+\operatorname{depth}(S_{G\backslash N_{G}[v_{1}]}/I_{G \backslash N_{G}[v_{1}]})\] \[=1+\operatorname{depth}(S_{G_{2}\backslash u_{2}}/I_{G_{2} \backslash u_{2}})+(n_{1}-1)\] \[=\operatorname{depth}(S_{G_{1}}/I_{G_{1}})+\operatorname{depth}(S _{G_{2}}/I_{G_{2}})-1.\]
(ii) If \(N_{G}(v_{1})\subsetneq X_{1}\), then \(G\backslash N_{G}[v_{1}]\) is the disjoint union of \(G_{1}\backslash\{x_{1,k_{1}},y_{1,k_{1}},\ldots,\)\(x_{1,k_{t}},y_{1,k_{t}}\}\), \(G_{2}\backslash u_{2}\), and isolated set \(\{y_{1,k_{1}},y_{1,k_{2}},\ldots,y_{1,k_{t-1}}\}\). Note that \(H=G_{1}\backslash\{x_{1,k_{1}},\)\(y_{1,k_{1}},\ldots,x_{1,k_{t}},y_{1,k_{t}}\}\) is a C-M bipartite graph by Lemma 2.7(2). Thus, by Lemmas
2.8(1) and 3.1, we have
\[\begin{split}\operatorname{depth}(S_{G}/J)&=1+ \operatorname{depth}(S_{G\setminus N_{G}[v_{1}]}/I_{G\setminus N_{G}[v_{1}]}) \\ &=1+\operatorname{depth}(S_{H}/I_{H})+\operatorname{depth}(S_{G_{2} \setminus u_{2}}/I_{G_{2}\setminus u_{2}})+(t-1)\\ &=1+\frac{|V(G_{1})|-2t}{2}+[\operatorname{depth}(S_{G_{2}}/I_{ G_{2}})-1]+(t-1)\\ &=\frac{|V(G_{1})|}{2}+\operatorname{depth}(S_{G_{2}}/I_{G_{2}})-1 \\ &=\operatorname{depth}(S_{G_{1}}/I_{G_{1}})+\operatorname{depth}( S_{G_{2}}/I_{G_{2}})-1.\end{split}\]
Applying Lemmas 2.2, 2.3(1) and 2.5 to the exact sequence (2), we obtain that \(\operatorname{depth}(S_{G}/I_{G})=\operatorname{depth}(S_{G_{1}}/I_{G_{1}})+\operatorname{depth}(S_{G_{2}}/I_{G_{2}})-1\). This completes the proof.
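As a quick illustration of Theorem 4.7 in the smallest case, take \(G_{1}=G_{2}=P_{2}\), so that each \(G_{i}\) is a C-M bipartite graph with \(\operatorname{depth}(S_{G_{i}}/I_{G_{i}})=1\). Identifying the two leaves \(u_{1}\) and \(u_{2}\), the graph \(G=(G_{1},u_{1})*(G_{2},u_{2})\) is the path on the three vertices \(v_{1},u,v_{2}\), and a direct computation (for instance with CoCoA) gives \(\operatorname{depth}(S_{G}/I_{G})=1=1+1-1\), in agreement with the theorem.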
### Acknowledgments
This research is supported by the Natural Science Foundation of Jiangsu Province (No. BK20221353) and the foundation of the Priority Academic Program Development of Jiangsu Higher Education Institutions. The authors are grateful to the computer algebra system CoCoA [6] for providing us with a large number of examples.
### Data availability statement
The data used to support the findings of this study are included within the article.
|
2309.10061 | Transformed-Linear Innovations Algorithm for Modeling and Forecasting of
Time Series Extremes | The innovations algorithm is a classical recursive forecasting algorithm used
in time series analysis. We develop the innovations algorithm for a class of
nonnegative regularly varying time series models constructed via
transformed-linear arithmetic. In addition to providing the best linear
predictor, the algorithm also enables us to estimate parameters of
transformed-linear regularly-varying moving average (MA) models, thus providing
a tool for modeling.
We first construct an inner product space of transformed-linear combinations
of nonnegative regularly-varying random variables and prove its link to a
Hilbert space which allows us to employ the projection theorem, from which we
develop the transformed-linear innovations algorithm. Turning our attention to
the class of transformed linear MA($\infty$) models, we give results on
parameter estimation and also show that this class of models is dense in the
class of possible tail pairwise dependence functions (TPDFs). We also develop
an extremes analogue of the classical Wold decomposition. Simulation study
shows that our class of models captures tail dependence for the GARCH(1,1)
model and a Markov time series model, both of which are outside our class of
models. | Nehali Mhatre, Daniel Cooley | 2023-09-18T18:17:07Z | http://arxiv.org/abs/2309.10061v1 | # Transformed-Linear Innovations Algorithm for Modeling and Forecasting of Time Series Extremes
###### Abstract
The innovations algorithm is a classical recursive forecasting algorithm used in time series analysis. We develop the innovations algorithm for a class of nonnegative regularly varying time series models constructed via transformed-linear arithmetic. In addition to providing the best linear predictor, the algorithm also enables us to estimate parameters of transformed-linear regularly-varying moving average (MA) models, thus providing a tool for modeling.
We first construct an inner product space of transformed-linear combinations of nonnegative regularly-varying random variables and prove its link to a Hilbert space which allows us to employ the projection theorem, from which we develop the transformed-linear innovations algorithm. Turning our attention to the class of transformed linear MA(\(\infty\)) models, we give results on parameter estimation and also show that this class of models is dense in the class of possible tail pairwise dependence functions (TPDFs). We also develop an extremes analogue of the classical Wold decomposition. Simulation study shows that our class of models captures tail dependence for the GARCH(1,1) model and a Markov time series model, both of which are outside our class of models.
We also develop prediction intervals based on the geometry of regular variation. Simulation study shows that we obtain good coverage rates for prediction errors. We perform modeling and prediction for hourly windspeed data by applying the innovations algorithm to the estimated TPDF.
_Keywords:_ Transformed-linear regularly-varying time series models; Innovations algorithm; Stationary; Tail pairwise dependence function; ARMA models.
## 1 Introduction
A primary aim of time series analysis is forecasting. Mhatre and Cooley (2020+) developed transformed-linear time series models, a class of time series which are nonnegative and regularly-varying and which are similar to familiar ARMA models in the non-extreme setting. Mhatre and Cooley (2020+) showed that these relatively simple models can capture dependence in a time series' upper tail. We now address the problem of forecasting specifically when values are large.
Our approach for forecasting is to develop the innovations algorithm for transformed-linear time series. The innovations algorithm, a well known forecasting method in classical time series, relies on the autocovariance function. In traditional time series analysis and elsewhere, the best linear predictor \(\hat{X}_{n+1}\) minimizes mean squared prediction error (MSPE), \(E[(X_{n+1}-\hat{X}_{n+1})^{2}]\) and Gaussian assumptions are usually used to create prediction intervals. However, the autocovariance function is not well-suited for describing tail dependence and expected-squared error is not a natural or intuitive measure of loss for extremes. The innovations algorithm we develop uses the tail pairwise dependence function (TPDF) to characterize extremal dependence, and minimizes a quantity describing tail behavior. Despite these differences, we show that the form of the transformed-linear predictor is of the same form as in the non-extreme setting.
To develop the innovations algorithm, we construct a vector space \(\mathbb{V}\) of a series of absolutely summable transformed-linear combinations of nonnegative regularly-varying random variables. We show that \(\mathbb{V}\) is an inner product space and is isomorphic to \(\ell^{1}\), the space of absolutely summable sequences. Although \(\mathbb{V}\) itself is not a Hilbert space, we show that the set of predictors based on previous \(n\) observations is isomorphic to a closed linear subspace of \(\ell^{2}\), the space of square summable sequences, and we can employ the projection theorem. Using the properties of the projection theorem we develop a transformed-linear analogue of the classical innovations algorithm that allows us to do modeling and prediction iteratively.
The innovations algorithm gives us more than just a method for prediction, as we also use it as a tool to demonstrate properties of this modeling framework. Using the innovations algorithm we show that if the true model is in our transformed-linear space, then applying the innovations algorithm iteratively yields parameter estimates that converge to the true parameters. Furthermore, we show that even if the underlying model is not a transformed-linear model, applying the innovations algorithm will yield a transformed-linear model whose TPDF matches closely the estimated TPDF of the underlying model. We go on to show that the class of transformed-linear regularly-varying MA(\(\infty\)) time series is dense in the class of possible TPDFs. We also develop a transformed-linear analogue of the Wold decomposition. To demonstrate the richness of the class of transformed-linear regularly-varying MA(\(\infty\)) models we run the innovations algorithm on data simulated from two different models from outside our class of transformed-linear models: the GARCH(1,1) process and a first-order Markov chain. Neither of these models is in the family of transformed-linear time series. We show for both these models that by running the innovations algorithm on the estimated TPDF we can get estimates for coefficients of a transformed-linear regularly-varying MA time series whose TPDF closely matches the estimated TPDF of the simulated data and well represents summary measures of tail dependence.
We also develop a method based on the polar geometry of regular variation for producing prediction intervals for the case when predictands are large. Because the regular variation geometry differs from the elliptical geometry typically assumed in standard linear prediction settings, uncertainty quantification is significantly different from the non-extreme setting. We perform modeling and prediction for the windspeed anomalies data discussed in Mhatre and Cooley (2020+) by applying the innovations algorithm to the estimated TPDF.
## 2 Background: Transformed-Linear Time Series Models for Extremes
Mhatre and Cooley (2020+) use transformed-linear arithmetic to produce nonnegative regularly varying time series models for capturing extremal dependence. Here we review the basics of these models. We consider time series \(\{X_{t}\}\), \(t\in\mathbb{Z}\), whose finite-dimensional distributions are multivariate regularly varying. Let \(\mathbf{X}_{t,p}=(X_{t},X_{t+1},\ldots,X_{t+p-1})^{T}\) for any \(t\) and \(p>0\). Then, there exists a function \(b(s)\rightarrow\infty\) as \(s\rightarrow\infty\) and a non-trivial limit measure \(\nu_{\mathbf{X}_{t,p}}\) such that
\[s\text{ pr}\left\{\frac{\mathbf{X}_{t,p}}{b(s)}\in.\right\}\xrightarrow{v}\nu_{ \mathbf{X}_{t,p}}(\cdot)\text{ as }s\rightarrow\infty, \tag{1}\]
where \(\xrightarrow{v}\) denotes vague convergence in \(M_{+}(\mathbb{R}^{p}\setminus\{0\})\), the space of nonnegative Radon measures on \(\mathbb{R}^{p}\setminus\{0\}\)(Resnick, 2007, Section 6). The normalizing function \(b(s)=s^{1/\alpha}U(s)\) for \(\alpha>0\) and some slowly varying function \(U\). The scaling property \(\nu_{\mathbf{X}_{t,p}}(aC)=a^{-\alpha}\nu_{\mathbf{X}_{t,p}}(C)\) for any \(a>0\) and any set \(C\subset\mathbb{R}^{p}\setminus\{0\}\) implies that the measure is more easily understood via polar coordinates. In particular, for a radially-defined set \(C(r,B)=\{\mathbf{X}_{t,p}\in\mathbb{R}^{p}:\|\mathbf{X}_{t,p}\|>r,\|\mathbf{X}_{t,p}\|^{-1} \mathbf{X}_{t,p}\in B\}\) where Borel set \(B\subset\mathbb{S}^{p-1}=\{\mathbf{x}\in\mathbb{R}^{p}:\|\mathbf{x}\|=1\}\), \(\nu_{\mathbf{X}_{t,p}}(C(r,B))=r^{-\alpha}H_{\mathbf{X}_{t,p}}(B)\) where \(H_{\mathbf{X}_{t,p}}\) is the angular measure taking values on \(\mathbb{S}^{p-1}\).
Kulik and Soulier (2020) provide a comprehensive treatment of regularly-varying time series, beginning with the finite-dimensional distribution definition and developing the limit measure of the process. Fully characterizing the limit measure is challenging, particularly because only a subset of extreme data are used for estimation. Mhatre and Cooley (2020+) instead develop the notion of tail stationarity, an extremal analogue to second-order stationarity. Characterization of the tail of a time series is simplified to characterizing the pairwise dependence via the TPDF, which takes the place of the autocovariance function in standard time series analysis. The TPDF is the functional extension of the tail pairwise dependence matrix (TPDM), introduced in Cooley and Thibaud (2019). Kiriliouk and Zhou (2022) define the TPDM for regularly-varying random vectors with tail index \(\alpha\), but we will follow both Cooley and Thibaud (2019) and Mhatre and Cooley (2020+) and further assume \(\{X_{t}\}\) has tail index \(\alpha=2\), so that in Section 5 we can connect the TPDF to the inner product we introduce in Section 3.1. In practice, this assumption generally requires a marginal transformation. The TPDF is given by
\[\sigma(X_{t},X_{t+h})=\int_{\Theta_{1}}w_{t}w_{t+h}\text{d}H_{X_{t},X_{t+h}}( \mathbf{w}), \tag{2}\]
where \(\Theta_{1}=\{\mathbf{x}\in\mathbb{R}^{2}\mid\|\mathbf{x}\|_{2}=1\}.\) If the time series is stationary, the TPDF can be viewed as a function of the lag \(h\). Like the autocovariance function in standard time series analysis, the TPDF evaluated at lag 0 provides a measure of the marginal 'scale' of the time series, as \(\lim\limits_{s\rightarrow\infty}s\text{ pr}\left\{\frac{|X_{t}|}{b(s)}>c\right\}=c^{-2}\sigma(0).\) As \(\sigma(h)/\sigma(0)\in[0,1]\), this ratio provides an interpretable number for dependence strength at lag \(h\).
Linear time series models of the form \(X_{t}=\sum_{j=-\infty}^{\infty}\psi_{j}Z_{t-j},\) where \(\{Z_{t}\}\) is a white noise sequence, make up a large portion of classical time series analysis, and include the familiar ARMA models. For time series with finite second moments, \(n\)-step predictors of \(X_{t+1}\) based on \(X_{t},\ldots,X_{t-n+1}\) minimizing mean square prediction error (MSPE) can be constructed via the projection theorem. It is straightforward to construct linear regularly varying time series models by assuming the noise sequence is regularly varying. As in Mhatre and Cooley (2020+), we are motivated to construct _nonnegative_ time series models, which enables one to focus attention on the time series' upper tail. If one wants to construct a nonnegative linear time series using traditional arithmetic, one needs to restrict \(\psi_{j}>0\) for all \(j\) and employ a nonnegative noise sequence. Mhatre and Cooley (2020+) show that transformed-linear time series can more flexibly fit data than traditional linear time series models restricted to be nonnegative. Importantly for this work, allowing the \(\psi_{j}\)'s to take negative values will allow us to show that the elements of transformed-linear \(\{X_{t}\}\) can be thought of as members of a vector space.
Mhatre and Cooley (2020+) employ the transformed-linear arithmetic of Cooley and Thibaud (2019) to construct nonnegative time series models. Given bijective transform \(\tau:\mathbb{R}\mapsto\mathbb{R}_{+}\), define \(X_{1}\oplus X_{2}=\tau(\tau^{-1}(X_{1})+\tau^{-1}(X_{2}))\) and \(a\circ X_{1}=\tau(a\tau^{-1}(X_{1}))\) for \(a\in\mathbb{R}\). Cooley and Thibaud (2019) show that if \(X_{1},X_{2}\) are independent regularly varying with index \(\alpha\) with respective measures \(\nu_{X_{1}},\nu_{X_{2}}\), if \(\lim_{y\rightarrow\infty}\tau(y)/y=1\), and if \(X_{1},X_{2}\) meet a lower tail condition associated with the particular \(\tau\), then \(\nu_{X_{1}\oplus X_{2}}(\cdot)=\nu_{X_{1}}(\cdot)+\nu_{X_{2}}(\cdot)\) and \(\nu_{a\circ X_{1}}(\cdot)=(a^{(0)})^{\alpha}\nu_{X_{1}}(\cdot)\), where \(a^{(0)}=\max(a,0)\). If \(\tau(y)=\log\left\{1+\exp\left(y\right)\right\}\), the lower tail condition \(s\) pr \([X_{i}\leq\exp\left\{-kb(s)\right\}]\to 0\), \(k>0\), \(i=1,2\), as \(s\rightarrow\infty\), is sufficient.
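To fix ideas, the following minimal Python sketch (ours, not code from Mhatre and Cooley (2020+); the function names are illustrative only) implements the softplus transform \(\tau(y)=\log\{1+\exp(y)\}\) and the resulting operations \(\oplus\) and \(\circ\).

```python
import numpy as np

def tau(y):
    # softplus transform tau(y) = log(1 + exp(y)), computed stably
    return np.logaddexp(0.0, y)

def tau_inv(x):
    # inverse transform on (0, infinity): log(exp(x) - 1)
    return x + np.log1p(-np.exp(-x))

def t_add(x1, x2):
    # transformed addition: x1 (+) x2 = tau(tau^{-1}(x1) + tau^{-1}(x2))
    return tau(tau_inv(x1) + tau_inv(x2))

def t_scale(a, x):
    # transformed scalar multiplication: a o x = tau(a * tau^{-1}(x))
    return tau(a * tau_inv(x))

# Large values pass through essentially unchanged, reflecting lim tau(y)/y = 1:
x = np.array([0.5, 3.0, 50.0])
print(t_add(x, x))       # close to 2 * x for the large entry
print(t_scale(2.0, x))   # likewise close to 2 * x for the large entry
```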
Mhatre and Cooley (2020+) largely focus on causal transformed linear time series models
\[X_{t}=\bigoplus_{j=0}^{\infty}\psi_{j}\circ Z_{t-j}, \tag{3}\]
where \(\sum_{j=-\infty}^{\infty}|\psi_{j}|<\infty\), and \(\{Z_{t}\}\) is a noise sequence of independent and tail stationary \(RV_{+}(2)\) random variables. The focus of Mhatre and Cooley (2020+) is to develop transformed linear ARMA models and to develop their properties, particularly the TPDF, as this is the critical parameter from the tail stationarity standpoint. The model in (3) is known as the transformed-linear MA(\(\infty\)) and has TPDF \(\sigma(h)=\sum_{j=0}^{\infty}\psi_{j}^{(0)}\psi_{j+h}^{(0)}\).
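As a small illustration of this TPDF formula (again an assumed sketch rather than the authors' code, with illustrative coefficient values), the TPDF of a transformed-linear MA model with a finite, or truncated, coefficient vector can be computed directly.

```python
import numpy as np

def ma_tpdf(psi, max_lag):
    """sigma(h) = sum_j psi_j^(0) psi_{j+h}^(0) for a transformed-linear MA model
    whose coefficient sequence is given (or truncated) as the vector psi."""
    psi0 = np.maximum(np.asarray(psi, dtype=float), 0.0)   # the zero operator psi^(0)
    q = len(psi0)
    return np.array([np.sum(psi0[: q - h] * psi0[h:]) if h < q else 0.0
                     for h in range(max_lag + 1)])

# Example: an MA(2) with psi = (1, 0.7, 0.4)
print(ma_tpdf([1.0, 0.7, 0.4], max_lag=4))   # [1.65, 0.98, 0.40, 0.00, 0.00]
```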
## 3 Space \(\mathbb{V}\) and Innovations Algorithm
### Inner Product Space \(\mathbb{V}\)
We begin by considering the space \(\mathbb{V}=\{X_{t}:X_{t}=\bigoplus_{j=0}^{\infty}\psi_{t,j}\circ Z_{j},\ \sum_{j=0}^{\infty}|\psi_{t,j}|<\infty\}\), where the \(Z_{j}\)'s are independent and tail stationary \(RV_{+}(2)\) random variables with \(\lim_{x\rightarrow\infty}\text{Pr}(Z_{j}>x)/\{x^{-2}L(x)\}=1\) for some slowly-varying function \(L(x)\), \(\psi_{t,j}\in\mathbb{R}\), and \(t\in\mathbb{Z}\). Mhatre and Cooley (2020+) show that \(X_{t}\) converges with probability one. If \(X_{t}=\bigoplus_{j=0}^{\infty}\psi_{t,j}\circ Z_{j},X_{s}=\bigoplus_{j=0}^{\infty}\psi_{s,j}\circ Z_{j}\in\mathbb{V}\), then \(X_{t}\oplus X_{s}=\bigoplus_{j=0}^{\infty}(\psi_{t,j}+\psi_{s,j})\circ Z_{j}\) and \(a\circ X_{t}=\bigoplus_{j=0}^{\infty}(a^{(0)}\psi_{t,j})\circ Z_{j}\). The space \(\mathbb{V}\) equipped with transformed-linear operations is a vector space. The details of this and other properties of \(\mathbb{V}\) described in this section are provided in the supplementary materials.
Define the inner product between \(X_{t}\) and \(X_{s}\) as,
\[\langle X_{t},X_{s}\rangle:=\sum_{j=0}^{\infty}\psi_{t,j}\psi_{s,j}. \tag{4}\]
We define the norm \(\|X_{t}\|=\sqrt{\langle X_{t},X_{t}\rangle}=\sqrt{\sum_{j=0}^{\infty}\psi_{t,j} ^{2}}\) and say \(X_{t}\) and \(X_{s}\) are orthogonal if \(\langle X_{t},X_{s}\rangle=0\). Note the space \(\mathbb{V}\) is more general than the stationary time series construction given in (3). We will return to the stationary time series setting in Section 4, and we will relate the inner product to the TPDF in Section 5.
It will be useful to simultaneously consider the infinite-dimensional space of absolutely summable sequences,
\[\ell^{1}=\left\{\{a_{j}\}_{j=0}^{\infty},\ a_{j}\in\mathbb{R}:\sum_{j=0}^{\infty}|a_{j}|<\infty\right\},\]
equipped with the standard vector addition and scalar multiplication. For any \(X_{t}\in\mathbb{V}\) we can define a mapping \(T:\mathbb{V}\rightarrow\ell^{1}\) such that \(T(X_{t})=\{\psi_{t,j}\}_{j=0}^{\infty}\in\ell^{1}\). The mapping \(T\) is a linear map and an isomorphism. We know that the vector space \(\ell^{1}\subset\ell^{2}\), where \(\ell^{2}=\{\{d_{j}\}:\sum_{j=0}^{\infty}d_{j}^{2}<\infty\}\), the space of square-summable sequences. The inner product defined in (4) corresponds, under \(T\), to the usual inner product on \(\ell^{2}\).
### Best Transformed-Linear Prediction in \(\mathbb{V}\)
We want to use the projection theorem to perform prediction. Unfortunately, \(\mathbb{V}\) is not itself a Hilbert space as \(\mathbb{V}\) is isomorphic to \(\ell^{1}\), and \(\ell^{1}\) is not complete in the metric induced by the \(\ell^{2}\) inner product in (4). We consider the sequence \(\{X_{t}\},t\in\mathbb{Z}\) and consider transformed-linear prediction in terms of the previous \(n\) steps; that is we consider predictors
\[\hat{X}_{n+1}(\mathbf{b}_{n})=\bigoplus_{j=1}^{n}b_{nj}\circ X_{n+1-j}, \tag{5}\]
where \(\mathbf{b}_{n}=(b_{n1},\cdots,b_{nn})^{T}\in\mathbb{R}^{n}\). Let \(\mathbb{V}^{n}\) be the set of all such predictors. Let us consider the analogous problem in \(\ell^{1}\). Consider sequences \(\{a_{j}\}=T(X_{n+1-j}),j=1,\ldots,n\) in \(\ell^{1}\). Let \(\mathbb{C}^{n}\) be the set of sequences \(c(\mathbf{b}_{n})=b_{n1}\{a_{1}\}+\cdots+b_{nn}\{a_{n}\}\). \(\mathbb{C}^{n}\) is the space spanned by \(\{a_{1}\},\cdots,\{a_{n}\}\), and \(\text{dim}(\mathbb{C}^{n})\leq n\). Since any \(n\)-dimensional subspace of a complex topological vector space is closed (Rudin, 1991, Theorem 1.21), \(\mathbb{C}^{n}\) is a closed subspace of \(\ell^{1}\subset\ell^{2}\). By the projection theorem, there is a unique \(\hat{c}\in\mathbb{C}^{n}\) such that \(\|x-\hat{c}\|=\inf_{c\in\mathbb{C}^{n}}\|x-c\|\), for every \(x\) in \(\ell^{2}\). Thus, the set of predictors \(\mathbb{V}^{n}\) based on previous \(n\) observations is isomorphic to a closed linear subspace of \(\ell^{2}\) and we can employ the projection theorem since \(\ell^{2}\) is known to be a Hilbert space.
Armed now with the projection theorem, the best linear one-step predictor \(\hat{X}_{n+1}\) is given by
\[\hat{X}_{n+1}=\left\{\begin{array}{ll}0,&\text{ if }n=0,\\ P_{\mathbb{V}^{n}}X_{n+1}&\text{ if }n\geq 1,\end{array}\right. \tag{6}\]
where \(P_{\mathbb{V}^{n}}\) denotes the projection mapping onto \(\mathbb{V}^{n}\). Thus, \(\hat{X}_{n+1}\) is a transformed-linear combination of \(\{X_{1},...,X_{n}\}\) as given in (5). Define \(\hat{\mathbf{b}}_{n}=(\hat{b}_{n1},\ldots,\hat{b}_{nn})^{T}\) to be the solutions to the prediction equations given by the projection theorem
\[\left\langle X_{n+1}\ominus\bigoplus_{j=1}^{n}\hat{b}_{nj}\circ X _{n+1-j},\ \ X_{n+1-k}\right\rangle=0,\ \ k=1,\cdots,n. \tag{7}\]
Equivalently,
\[\left\langle\bigoplus_{j=1}^{n}\hat{b}_{nj}\circ X_{n+1-j},\ \ X_{n+1-k}\right\rangle= \left\langle X_{n+1},\ X_{n+1-k}\right\rangle,\ \ k=1,\cdots,n. \tag{8}\]
By linearity of the inner product, the prediction equations can be rewritten in matrix form as
\[\Gamma_{n}\hat{\mathbf{b}}_{n}=\mathbf{\gamma}_{n} \tag{9}\]
where \(\Gamma_{n}=\left[\left\langle X_{n+1-j},X_{n+1-k}\right\rangle\right]_{j,k=1}^ {n}\), and \(\mathbf{\gamma}_{n}=\left[\left\langle X_{n+1},X_{n+1-k}\right\rangle\right]_{k=1}^ {n}\). If \(\Gamma_{n}\) is non-singular, then the solution is given as
\[\hat{\mathbf{b}}_{n}=\Gamma_{n}^{-1}\mathbf{\gamma}_{n}. \tag{10}\]
It can be shown that the above is equivalent to minimizing the squared norm \(\|X_{n+1}\ominus\hat{X}_{n+1}\|^{2}\) by setting the appropriate derivative to zero. We see that (10) is of the familiar form for linear prediction in the non-extreme setting where the inner product terms are autocovariances.
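For illustration, here is a brief sketch (ours; the function name and the example numbers are not from the paper) of solving the prediction equations (9)-(10) given a function returning the inner products.

```python
import numpy as np

def prediction_coefficients(kappa, n):
    """Solve Gamma_n b_n = gamma_n, equation (10), where kappa(i, j) = <X_i, X_j>."""
    Gamma = np.array([[kappa(n + 1 - j, n + 1 - k) for k in range(1, n + 1)]
                      for j in range(1, n + 1)])
    gamma = np.array([kappa(n + 1, n + 1 - k) for k in range(1, n + 1)])
    return np.linalg.solve(Gamma, gamma)         # b_n = Gamma_n^{-1} gamma_n

# For a stationary series the inner product depends only on the lag (Section 4),
# e.g. an MA(1)-type inner product function with gamma(0) = 1.36, gamma(1) = 0.6:
g = {0: 1.36, 1: 0.6}
print(prediction_coefficients(lambda i, j: g.get(abs(i - j), 0.0), n=3))
```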
### Transformed-Linear Innovations
Following Brockwell and Davis (1991), we develop a transformed-linear analogue of the recursive innovations algorithm to obtain the one-step predictors \(\hat{X}_{n+1}\), \(n\geq 1\), defined in (7), without having to perform matrix inversion of \(\Gamma_{n}\).
Consider the transformed-linear innovation, \((X_{n+1}\ominus\hat{X}_{n+1})\), \(n\geq 1\). Since \(\mathbb{V}^{n}=\bar{\wp}\{X_{1},\cdots,X_{n}\}\), letting \(\hat{X}_{1}:=\tau^{-1}(0)\), \(\mathbb{V}^{n}=\bar{\wp}\{X_{1}\ominus\hat{X}_{1},\cdots,X_{n}\ominus\hat{X}_ {n}\}\). We can rewrite the predictor in (5) in terms of the innovations as,
\[\hat{X}_{n+1}=\bigoplus_{j=1}^{n}\theta_{nj}\circ\left(X_{n+1-j} \ominus\hat{X}_{n+1-j}\right).\]
By properties of projection mappings, \(\hat{X}_{n+1}\in\mathbb{V}^{n}\) and by (8),
\[\langle X_{n+1}\ominus\hat{X}_{n+1},\hat{X}_{n+1}\rangle=0.\]
That is, the transformed-linear innovation \((X_{n+1}\ominus\hat{X}_{n+1})\) is orthogonal to a transformed-linear combination of \(X_{1},...,X_{n}\). Thus, the innovation is orthogonal to each of \(X_{1},...,X_{n}\).
Consider the set of transformed-linear innovations, \(\{X_{n+1-j}\ominus\hat{X}_{n+1-j}\}_{j=1,...,n}\). The innovation \((X_{i}\ominus\hat{X}_{i})\in\mathbb{V}^{j-1}\) for \(i<j\), as \((X_{i}\ominus\hat{X}_{i})\) is a transformed-linear combination of \(X_{1},...,X_{i}\). Also, by (8), \((X_{j}\ominus\hat{X}_{j})\perp\mathbb{V}^{j-1}\). Thus, elements of the set \(\{X_{1}\ominus\hat{X}_{1},X_{2}\ominus\hat{X}_{2},...,X_{n}\ominus\hat{X}_{n}\}\) are mutually orthogonal. In fact, \(\{X_{n+1-j}\ominus\hat{X}_{n+1-j}\}_{j=1,...,n}\) is an orthogonal basis of \(\mathbb{V}^{n}\). Let the squared distance of prediction be denoted by \(\nu_{n}\), that is, \(\nu_{n}=\|X_{n+1}\ominus\hat{X}_{n+1}\|^{2}\). The innovations algorithm for a transformed-linear time series in \(\mathbb{V}\) is given as follows:
**Proposition 1** (The Transformed-Linear Innovations Algorithm).: _If \(\{X_{t}\}\) is a transformed-linear time series in \(\mathbb{V}\), where the matrix \(\Gamma_{n}=[\langle X_{i},X_{j}\rangle]_{i,j=1}^{n}\) is non-singular for each \(n\geq 1\), then the one-step predictors \(\hat{X}_{n+1}\), \(n\geq 0\), and their squared distances of prediction \(\nu_{n}\), \(n\geq 1\), are given by_
\[\hat{X}_{n+1}=\begin{cases}0&\text{if }n=0\\ \bigoplus_{j=1}^{n}\theta_{nj}\circ(X_{n+1-j}\ominus\hat{X}_{n+1-j})&\text{if }n \geq 1,\end{cases} \tag{11}\]
_and_
\[\begin{cases}\nu_{0}&=\langle X_{1},X_{1}\rangle\\ \theta_{n,n-k}&=\nu_{k}^{-1}\Big{(}\langle X_{n+1},X_{k+1}\rangle-\sum_{j=0}^{ k-1}\theta_{k,k-j}\theta_{n,n-j}\nu_{j}\Big{)},\quad k=0,1,...,n-1,\\ \nu_{n}&=\langle X_{n+1},X_{n+1}\rangle-\sum_{j=0}^{n-1}\theta_{n,n-j}^{2}\nu _{j},\end{cases} \tag{12}\]
Proof: Taking the inner product on both sides of (11) with \((X_{k+1}\ominus\hat{X}_{k+1})\), \(0\leq k<n\), we get
\[\Big{\langle}\hat{X}_{n+1},(X_{k+1}\ominus\hat{X}_{k+1})\Big{\rangle} =\left\langle\left\{\bigoplus_{j=1}^{n}\theta_{nj}\circ(X_{n+1-j} \ominus\hat{X}_{n+1-j})\right\},(X_{k+1}\ominus\hat{X}_{k+1})\right\rangle\] \[=\sum_{j=1}^{n}\theta_{nj}\left\langle(X_{n+1-j}\ominus\hat{X}_{n +1-j}),(X_{k+1}\ominus\hat{X}_{k+1})\right\rangle\] \[=\theta_{n,n-k}\nu_{k},\]
since \((X_{n+1-j}\ominus\hat{X}_{n+1-j})\perp(X_{k+1}\ominus\hat{X}_{k+1})\) for all \(j\neq n-k\).
Also, since \((X_{n+1}\ominus\hat{X}_{n+1})\perp(X_{k+1}\ominus\hat{X}_{k+1})\) for \(k=0,\cdots,n-1\), we get,
\[\langle\hat{X}_{n+1},(X_{k+1}\ominus\hat{X}_{k+1})\rangle=\langle X_{n+1},(X_{ k+1}\ominus\hat{X}_{k+1})\rangle.\]
Hence, the coefficients \(\theta_{n,n-k}\), \(k=0,...,n-1\) are given by
\[\theta_{n,n-k}=\nu_{k}^{-1}\langle X_{n+1},(X_{k+1}\ominus\hat{X}_{k+1})\rangle. \tag{13}\]
Using the representation in (11) with \(n\) replaced by \(k\), we get
\[\theta_{n,n-k}=\nu_{k}^{-1}\Big{(}\langle X_{n+1},X_{k+1}\rangle-\sum_{j=0}^{ k-1}\theta_{k,k-j}\langle X_{n+1},(X_{j+1}\ominus\hat{X}_{j+1})\rangle\Big{)}. \tag{14}\]
Since by (13), \(\langle X_{n+1},(X_{j+1}\ominus\hat{X}_{j+1})\rangle=\nu_{j}\theta_{n,n-j}\), \(0\leq j<n\), we can rewrite (14) as
\[\theta_{n,n-k}=\nu_{k}^{-1}\Big{(}\langle X_{n+1},X_{k+1}\rangle-\sum_{j=0}^{ k-1}\theta_{k,k-j}\theta_{n,n-j}\nu_{j}\Big{)}.\]
By properties of projection mapping, we have
\[\nu_{n}=\|X_{n+1}\ominus\hat{X}_{n+1}\|^{2}=\|X_{n+1}\|^{2}-\|\hat{X}_{n+1}\|^ {2}=\langle X_{n+1},X_{n+1}\rangle-\sum_{k=0}^{n-1}\theta_{n,n-k}^{2}\nu_{k}.\]
We will return to forecasting in Section 7 where we will propose a method for quantifying prediction uncertainty. For now, we will turn our attention to using the innovations algorithm as a tool to better understand the richness of our class of models and also as a tool for model fitting.
## 4 Implications for Modeling of Stationary Time Series
Rather than the general setting described by the class \(\mathbb{V}\), we now focus on stationary time series. If \(\{X_{t}\}\) is an MA(\(\infty\)) time series (3), \(X_{t}\in\mathbb{V}\) for all \(t\). As \(\{X_{t}\}\) is stationary, it is natural to think of the inner product as a function of lag:
\[\gamma(h)=\langle X_{t},X_{t+h}\rangle=\sum_{j=0}^{\infty}\psi_{j}\psi_{j+h}.\]
Being an inner product, \(\gamma(.)\) is nonnegative definite and by the Cauchy-Schwarz inequality, \(|\gamma(h)|\leq\gamma(0)\).
The following corollary shows that given an invertible transformed-linear regularly-varying MA time series, iterating the transformed-linear innovations algorithm yields coefficient estimates that converge to the true MA coefficients.
**Corollary 1**.: _If \(\{X_{t}\}\) is an invertible MA process in \(\mathbb{V}\), that is,_
\[Z_{t}=X_{t}\oplus\bigoplus_{j=1}^{\infty}\pi_{j}\circ X_{t-j},\]
_with \(\lim_{x\to\infty}\mbox{Pr}(Z_{j}>x)/\{x^{-2}L(x)\}=1\), then as \(n\to\infty\),_
_(i) \(\nu_{n}\to 1\),_
_(ii) \(\|(X_{n}\ominus\hat{X}_{n})\ominus Z_{n}\|^{2}\to 0\), and_
_(iii) \(\theta_{nj}\to\psi_{j},j=1,2,\cdots\)._
Proof:
Let \(\mathbb{M}_{n}=\bar{\wp}\{X_{s},-\infty<s\leq n\}\) and \(\mathbb{V}^{n}=\bar{\wp}\{X_{1},\cdots,X_{n}\}\). Because \(\{X_{t}\}\) is invertible,
\[Z_{n+1}\ominus X_{n+1}=\bigoplus_{j=1}^{\infty}\pi_{j}\circ X_{n+1-j}=P_{ \mathbb{M}_{n}}(Z_{n+1}\ominus X_{n+1})=\ominus P_{\mathbb{M}_{n}}X_{n+1},\]
since \(Z_{n+1}\perp\mathbb{M}_{n}\). Also, we can think of \(Z_{k}\) as \(Z_{k}=\bigoplus_{j=0}^{\infty}\psi_{j}\circ Z_{j}\), where \(\psi_{j}=1\) for \(j=k\) and \(\psi_{j}=0\) for all \(j\neq k\). Thus, \(Z_{k}\in\mathbb{V}\) and consequently \(\|Z_{k}\|^{2}=1\) for all \(k\). Then,
\[1=\|Z_{n+1}\|^{2} = \|X_{n+1}\oplus\bigoplus_{j=1}^{\infty}\pi_{j}\circ X_{n+1-j}\|^{ 2}=\|X_{n+1}\ominus P_{\mathbb{M}_{n}}X_{n+1}\|^{2}\] \[\leq \|X_{n+1}\ominus P_{\mathbb{V}^{n}}X_{n+1}\|^{2}=\nu_{n}\] \[\leq \|X_{n+1}\oplus\bigoplus_{j=1}^{n}\pi_{j}\circ X_{n+1-j}\|^{2}=\| Z_{n+1}\ominus\bigoplus_{j=n+1}^{\infty}\pi_{j}\circ X_{n+1-j}\|^{2}\] \[= \|Z_{n+1}\|^{2}+\|\bigoplus_{j=n+1}^{\infty}\pi_{j}\circ X_{n+1-j }\|^{2}=1+\sum_{i,j=n}^{\infty}\pi_{i}\pi_{j}\langle X_{n+1-i},X_{n+1-j}\rangle\] \[\leq 1+\left(\sum_{j=n+1}^{\infty}\pi_{j}\right)^{2}\gamma(0).\]
Thus (i) is established since,
\[1\leq\nu_{n}\leq 1+\left(\sum_{j=n+1}^{\infty}\pi_{j}\right)^{2}\gamma(0)\implies \nu_{n}\to 1\mbox{ as }n\to\infty.\]
Consider,
\[\|X_{n}\ominus\hat{X}_{n}\ominus Z_{n}\|^{2} = \|X_{n}\ominus\hat{X}_{n}\|^{2}-2\langle Z_{n},X_{n}\ominus\hat{X}_{n}\rangle+\|Z_{n}\|^{2} \tag{15}\] \[= v_{n-1}-2\left[\langle Z_{n},X_{n}\rangle-\langle Z_{n},\hat{X}_{n}\rangle\right]+1\] \[= v_{n-1}-2\left[\langle Z_{n},\bigoplus_{j=0}^{\infty}\psi_{j}\circ Z_{n-j}\rangle-\langle Z_{n},\bigoplus_{j=1}^{n-1}b_{nj}\circ X_{n-j}\rangle\right]+1\] \[= v_{n-1}-2[\|Z_{n}\|^{2}-0]+1\] \[= v_{n-1}-1,\]
where (15) converges to \(0\) as \(n\to\infty\) by (i), thus proving (ii).
Since \(X_{n+1}=\bigoplus_{j=0}^{\infty}\psi_{j}\circ Z_{n+1-j}\), we have that
\[\psi_{j}=\langle X_{n+1},Z_{n+1-j}\rangle.\]
Also, by (13),
\[\theta_{nj}=\nu_{n-j}^{-1}\langle X_{n+1},(X_{n+1-j}\ominus\hat{X}_{n+1-j})\rangle.\]
Then,
\[|\theta_{nj}-\psi_{j}| = \left|\theta_{nj}-\langle X_{n+1},(X_{n+1-j}\ominus\hat{X}_{n+1-j} )\rangle+\langle X_{n+1},(X_{n+1-j}\ominus\hat{X}_{n+1-j})\rangle-\psi_{j}\right| \tag{16}\] \[\leq \left|\theta_{nj}-\langle X_{n+1},(X_{n+1-j}\ominus\hat{X}_{n+1-j })\rangle\right|+\left|\langle X_{n+1},(X_{n+1-j}\ominus\hat{X}_{n+1-j}) \rangle-\langle X_{n+1},Z_{n+1-j}\rangle\right|\] \[= |\theta_{nj}-\theta_{nj}v_{n-j}|+\left|\langle X_{n+1},(X_{n+1-j }\ominus\hat{X}_{n+1-j}\ominus Z_{n+1-j})\rangle\right|\] \[\leq |\theta_{nj}-\theta_{nj}v_{n-j}|+\sqrt{\gamma(0)}\left\|(X_{n+1- j}\ominus\hat{X}_{n+1-j}\ominus Z_{n+1-j})\right\|, \tag{17}\]
where the inequalities in (16) and (17) hold by the triangle inequality and the Cauchy-Schwarz inequality, respectively. Since \(\theta_{nj}\) and \(\gamma(0)\) are bounded, as \(n\to\infty\), the first term on the right-hand side of (17) converges to \(0\) by (i) and the second term on the right-hand side of (17) converges to \(0\) by (ii). Thus, \(\theta_{nj}\to\psi_{j}\) as \(n\to\infty\), proving (iii). \(\Box\)
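A quick numerical check of Corollary 1, reusing the `innovations` sketch from Section 3.3 (the numbers below are illustrative and not from the paper): for an invertible MA(1) with \(\psi_{1}=0.6\) and unit-scale noise, so that \(\gamma(0)=1.36\) and \(\gamma(1)=0.6\), the recursions give \(\theta_{n,1}\to 0.6\) and \(\nu_{n}\to 1\).

```python
gamma = {0: 1.36, 1: 0.6}                       # gamma(h) for an MA(1) with psi_1 = 0.6
theta, nu = innovations(lambda i, j: gamma.get(abs(i - j), 0.0), N=50)
print(theta[50][1], nu[50])                     # approximately 0.6 and 1.0
```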
Analogous to Proposition 3.2.1 in Brockwell and Davis (1991), Proposition 2 shows that a \(q\)-tail-dependent tail stationary regularly-varying time series can be represented as a transformed-linear regularly-varying MA(\(q\)) process. This proposition is a beginning step in showing the richness of the class of transformed-linear time series, which we build on in Section 6.
**Proposition 2**.: _If \(\{X_{t}\}\) is a regularly-varying tail stationary process in \(\mathbb{V}\) with inner product function \(\gamma(.)\) such that \(\gamma(h)=0\) for \(|h|>q\) and \(\gamma(q)\neq 0\), then \(\{X_{t}\}\) is a transformed-linear regularly-varying MA(\(q\)) process, i.e. there exists a regularly-varying noise sequence \(\{Z_{t}\}\) of independent and tail stationary \(Z_{t}\)'s such that_
\[X_{t}=Z_{t}\oplus\theta_{1}\circ Z_{t-1}\oplus\cdots\oplus\theta_{q}\circ Z_{t -q}.\]
Before we prove Proposition 2, we prove the following Lemma which requires the notion of convergence in tail ratio. Let \(X_{t}\) be the MA(\(\infty\)) in (3) and for some \(q>0\) define \(X_{t}^{(q)}=\bigoplus_{j=0}^{q}\psi_{j}\circ Z_{t-j}\) and \(X_{t}^{(q)^{\prime}}=\bigoplus_{j=q+1}^{\infty}\psi_{j}\circ Z_{t-j}\).
Mhatre and Cooley (2020+, Section 3.4) say that \(X_{t}^{(q)}\) converges to \(X_{t}\) in tail ratio if \(\lim\limits_{x\to\infty}\frac{\text{\rm{pr}}(X_{t}^{(q)^{\prime}}>x)}{x^{-2}L( x)}=0\) as \(q\to\infty\). The proofs below are extreme analogues to the steps in Brockwell and Davis (1991, Proposition 3.2.1), with tail ratio convergence replacing mean-square convergence.
**Lemma 1**.: _If \(X_{t}\) is a tail stationary process in \(\mathbb{V}\), then_
\[P_{\bar{\wp}\{X_{j},t-n\leq j\leq t-1\}}X_{t}\stackrel{\text{tail ratio}}{\to}P_{\bar{\wp}\{X_{j},-\infty<j\leq t-1\}}X_{t},\ \ \text{as $n\to\infty$}.\]
Proof: Consider the transformed-linear combination
\[\bigoplus_{j=n+1}^{\infty}b_{j}\circ X_{t-j}.\]
As \(X_{t}\) is a tail stationary process,
\[\text{\rm{pr}}(X_{t}>x)\sim x^{-2}L(x)\sigma(0),\]
where \(\sigma(0)=\sigma(X_{t},X_{t})\). As shown in Mhatre and Cooley (2020+, Section 3.4),
\[\text{\rm{pr}}\left(\bigoplus_{j=n+1}^{\infty}b_{j}\circ X_{t-j}>x\right)\sim x ^{-2}L(x)\sigma(0)\sum_{j=n+1}^{\infty}|b_{j}|^{2},\ \ \text{as $x\to\infty$}. \tag{18}\]
Taking limit on both sides of (18) we get, as \(x\to\infty\),
\[\lim\limits_{n\to\infty}\text{\rm{pr}}\left(\bigoplus_{j=n+1}^{ \infty}b_{j}\circ X_{t-j}>x\right) \sim \lim\limits_{n\to\infty}x^{-2}L(x)\sigma(0)\sum_{j=n+1}^{\infty}|b_{ j}|^{2}=0,\] \[\implies\lim\limits_{x\to\infty}\frac{\text{\rm{pr}}\left( \bigoplus_{j=n+1}^{\infty}b_{j}\circ X_{t-j}>x\right)}{x^{-2}L(x)} \to 0,\ \ \text{as $n\to\infty$}.\]
Thus
\[\bigoplus_{j=1}^{n}b_{j}\circ X_{t-j} \stackrel{{\text{tail\,ratio}}}{{\rightarrow}} \bigoplus_{j=1}^{\infty}b_{j}\circ X_{t-j},\text{ as }n\rightarrow\infty,\text{ and}\] \[P_{\bar{\wp}\{X_{j},t-n\leq j\leq t-1\}}X_{t} \stackrel{{\text{tail\,ratio}}}{{\rightarrow}} P_{\bar{\wp}\{X_{j},-\infty<j\leq t-1\}}X_{t},\text{ as }n\rightarrow\infty.\]
Proof of Proposition 2: For each \(t\), let \(\mathbb{M}_{t}\) be the closed transformed-linear subspace \(\bar{\wp}\{X_{s},s\leq t\}\) of \(\mathbb{V}\) and set
\[Z_{t}=X_{t}\ominus P_{\mathbb{M}_{t-1}}X_{t}. \tag{19}\]
That is,
\[Z_{t}=X_{t}\ominus\bigoplus_{j=1}^{\infty}b_{j}\circ X_{t-j},\text{ \ }a_{j}\in \mathbb{R}.\]
Thus, \(Z_{t}\in\mathbb{M}_{t}\). By definition of \(P_{\mathbb{M}_{t-1}}\), \(P_{\mathbb{M}_{t-1}}X_{t}\in\mathbb{M}_{t-1}\) and \(Z_{t}=X_{t}\ominus P_{\mathbb{M}_{t-1}}X_{t}\in\mathbb{M}_{t-1}^{\perp}\). Thus \(Z_{s}\in\mathbb{M}_{s}\subset\mathbb{M}_{t-1}\) and hence \(\langle Z_{s},Z_{t}\rangle=0\) for \(s<t\). Also, by Lemma 1
\[P_{\bar{\wp}\{X_{s},t-n\leq s\leq t-1\}}X_{t}\stackrel{{\text{ tail\,ratio}}}{{\rightarrow}}P_{\mathbb{M}_{t-1}}X_{t},\text{ as }n\rightarrow\infty.\]
By stationarity and continuity of norm,
\[\|Z_{t+1}\| = \|X_{t+1}\ominus P_{\mathbb{M}_{t}}X_{t+1}\|\] \[= \lim_{n\rightarrow\infty}\|X_{t+1}\ominus P_{\bar{\wp}\{X_{s},s =t+1-n,\cdots,t\}}X_{t+1}\|\] \[= \lim_{n\rightarrow\infty}\|X_{t}\ominus P_{\bar{\wp}\{X_{s},s=t -n,\cdots,t-1\}}X_{t}\|\] \[= \|X_{t}\ominus P_{\mathbb{M}_{t-1}}X_{t}\|=\|Z_{t}\|.\]
Letting \(c^{2}=\|Z_{t}\|^{2}\), \(\{Z_{t}\}\) is a sequence of independent and tail stationary regularly-varying random variables with scale \(c\), that is, \(\text{Pr}(Z_{t}>x)/\{x^{-2}L(x)\}=c^{2}\).
By (19),
\[X_{t-1}=Z_{t-1}\oplus P_{\mathbb{M}_{t-2}}X_{t-1}.\]
Consequently,
\[\mathbb{M}_{t-1} = \bar{\wp}\{X_{s},s\leq t-1\}\] \[= \bar{\wp}\{X_{s},s<t-1,Z_{t-1}\}\] \[= \bar{\wp}\{X_{s},s<t-q,Z_{t-q},\cdots,Z_{t-1}\}.\]
Therefore, \(\mathbb{M}_{t-1}\) can be decomposed into two orthogonal subspaces, \(\mathbb{M}_{t-q-1}\) and \(\bar{\wp}\{Z_{t-q},\cdots,Z_{t-1}\}\). Since \(\gamma(h)=0\) for \(|h|>q\), it follows that \(X_{t}\perp\mathbb{M}_{t-q-1}\) and since \(\{c^{-1}\circ Z_{t-q},\cdots,c^{-1}\circ Z_{t-1}\}\) is an orthonormal set, by properties of projection mappings,
\[P_{\mathbb{M}_{t-1}}X_{t} = P_{\mathbb{M}_{t-q-1}}X_{t}\oplus P_{\bar{\wp}\{Z_{t-q},\cdots Z _{t-1}\}}X_{t} \tag{20}\] \[= 0\ \oplus\ \big{(}c^{-2}\langle X_{t},Z_{t-1}\rangle\big{)}\circ Z _{t-1}\ \oplus\cdots\oplus\ \big{(}c^{-2}\langle X_{t},Z_{t-q}\rangle\big{)}\circ Z_{t-q}\] \[= \theta_{1}\circ Z_{t-1}\oplus\cdots\oplus\theta_{q}\circ Z_{t-q},\]
where \(\theta_{j}:=c^{-2}\langle X_{t},Z_{t-j}\rangle\), which by stationarity is independent of \(t\) for \(j=1,\cdots,q\). By (19) and (20),
\[X_{t}=Z_{t}\oplus\theta_{1}\circ Z_{t-1}\oplus\cdots\oplus\theta_{q}\circ Z_{t -q}.\]
## 5 Modeling in Subset \(\mathbb{V}_{+}\)
We have defined \(\mathbb{V}\) allowing for negative \(\psi_{j}\)'s as they are needed to have a vector space and to create orthogonal elements. In this section we will show that the negative coefficients defining \(\{X_{t}\}\) are academic in the sense that a time series that has negative coefficients is indistinguishable in terms of tail behavior from a time series that has zeroes in place of those coefficients. Rather than being a problem, this will allow us to restrict our attention to a subset \(\mathbb{V}_{+}\) for which the TPDF is equivalent to the inner product function.
Consider a subset of \(\mathbb{V}\) defined as
\[\mathbb{V}_{+}=\{X_{t}:X_{t}=\bigoplus_{j=0}^{\infty}\psi_{j}\circ Z_{t-j},\;\psi _{j}\geq 0,\sum_{j=0}^{\infty}\psi_{j}<\infty\}.\]
Proposition 3 below follows from the definition of TPDF.
**Proposition 3**: _If a transformed-linear MA(\(\infty\)) time series in \(\mathbb{V}\) has TPDF \(\sigma(h)\), then there exists a transformed-linear MA(\(\infty\)) time series in subset \(\mathbb{V}_{+}\) which has the same TPDF \(\sigma(h)\), for all lag \(h\)._
Proof: Let \(X_{t}=\bigoplus_{j=0}^{\infty}\psi_{j}\circ Z_{t-j}\in\mathbb{V}\). The TPDF of \(X_{t}\) is given by \(\sigma(h)=\sum_{j=0}^{\infty}\psi_{j}^{(0)}\psi_{j+h}^{(0)}\), which is equal to the TPDF of \(X_{t}^{*}=\bigoplus_{j=0}^{\infty}\psi_{j}^{(0)}\circ Z_{t-j}\in\mathbb{V}_{+}\). \({}_{\blacksquare}\)
In other words, \(X_{t}\) and \(X_{t}^{*}\) are indistinguishable in terms of tail dependence. \(X_{t}\) and \(X_{t}^{*}\) are also indistinguishable in terms of the tail ratio, since the tail ratio is equal to \(\sigma(0)\). Furthermore, the TPDF gives full information for a stationary time series in \(\mathbb{V}_{+}\), and, unlike the inner product function, the TPDF is estimable. Also, it can be clearly seen that \(\gamma(h)=\sigma(h)\), for \(X_{t}^{*},X_{t+h}^{*}\in\mathbb{V}_{+}\), for all lags \(h\). Hence it is reasonable to restrict our attention to \(\mathbb{V}_{+}\).
As the inner product is equivalent to the TPDF in \(\mathbb{V}_{+}\), equation (10) can be rewritten as
\[\hat{\mathbf{b}}_{n}=\Sigma_{n}^{-1}\mathbf{\sigma}_{n}. \tag{21}\]
where \(\Sigma_{n}=\left[\sigma(i-j)\right]_{i,j=1}^{n}\) and \(\mathbf{\sigma}_{n}=\left[\sigma(i)\right]_{i=1}^{n}\).
Also, if our time series is in \(\mathbb{V}_{+}\), we can rewrite the equations (12) of the innovations algorithm in terms of the TPDF \(\sigma(\cdot)\) instead of the inner product as
\[\begin{cases}\nu_{0}&=\sigma(0)\\ \theta_{n,n-k}&=\nu_{k}^{-1}\left(\sigma(n-k)-\sum_{j=0}^{k-1}\theta_{k,k-j} \theta_{n,n-j}\nu_{j}\right),\quad k=0,1,...,n-1,\\ \nu_{n}&=\sigma(0)-\sum_{j=0}^{n-1}\theta_{n,n-j}^{2}\nu_{j},\end{cases}. \tag{22}\]
Rewriting Corollary 1 for \(\mathbb{V}_{+}\), we get the following corollary.
**Corollary 2**: _If \(\{X_{t}\}\) is an invertible MA process in \(\mathbb{V}_{+}\) with \(\lim_{x\to\infty}\text{Pr}(Z_{j}>x)/\{x^{-2}L(x)\}=1\), then as \(n\to\infty\), (i) \(\nu_{n}\to 1\), (ii) \(\|X_{n}\ominus\hat{X}_{n}\ominus Z_{n}\|^{2}\to 0\), and (iii) \(\theta_{nj}\to\psi_{j},j=1,2,\cdots;\;\;\psi_{j}\geq 0\). \({}_{\blacksquare}\)_
Also, rewriting Proposition 2 for \(\mathbb{V}_{+}\),
**Corollary 3**: _If \(\{X_{t}\}\) is a regularly-varying tail stationary process in \(\mathbb{V}_{+}\) with tail pairwise dependence function \(\sigma(.)\) such that \(\sigma(h)=0\) for \(|h|>q\) and \(\sigma(q)\neq 0\), then \(\{X_{t}\}\) is a transformed-linear regularly-varying MA(q) process, i.e. there exists a regularly-varying noise sequence \(\{Z_{t}\}\) of independent and tail stationary \(Z_{t}\)'s such that_
\[X_{t}=Z_{t}\oplus\theta_{1}\circ Z_{t-1}\oplus\cdots\oplus\theta_{q}\circ Z_{ t-q}.\]
The relation between the TPDF and the inner product gives an important result described in the following remark:
**Remark 1**: _If \(X_{t}\) is an MA(\(\infty\)) time series in \(\mathbb{V}\) and \(X_{t}\notin\mathbb{V}_{+}\), then by Proposition 3 there exists \(X_{t}^{*}\in\mathbb{V}_{+}\), obtained by applying the zero-operator on the coefficients of \(X_{t}\), which has the same TPDF as \(X_{t}\). Thus, the innovations algorithm applied to the TPDF of \(X_{t}\in\mathbb{V}\) will give us the one-step predictors for the corresponding \(X_{t}^{*}\in\mathbb{V}_{+}\). \({}_{\blacksquare}\)_
## 6 Flexibility of the MA(\(\infty\)) Class for Modeling
In this section, we show that the class of MA(\(\infty\)) models is a rich class for modeling.
### Richness of the MA(\(\infty\)) Class in terms of the TPDF
Given a valid TPDF (that is, a completely positive function) that converges to \(0\) as lag increases, we can run the innovations algorithm to get the \(\theta_{nj}\)'s and \(\nu_{n}\) as defined in (12). If we apply the TPDF formula to these \(\theta_{nj}\)'s we will get a TPDF that gets arbitrarily close to the given TPDF. In other words, if we consider random noise terms \(\{Z_{j}\}\)
that are Frechet with \(\alpha=2\) and scale \(\sqrt{\nu_{n}}\), and generate a process applying the coefficients \(\theta_{nj}\) to the \(Z\)'s, the TPDF of this generated process will be arbitrarily close to the given TPDF. Thus, given any valid TPDF, we can run the transformed-linear innovations algorithm long enough to find a transformed-linear regularly-varying MA(\(\infty\)) time series whose TPDF will get arbitrarily close to the given TPDF. As such, the class of MA(\(\infty\)) time series is rich in the class of possible TPDFs that converge to \(0\).
To show this, first, we prove the result for a \(q\)-tail-dependent TPDF in the following corollary.
**Corollary 4**: _If \(\{X_{t}\}\) is any regularly-varying tail stationary process with TPDF \(\sigma(.)\) such that \(\sigma(h)=0\) for \(|h|>q\) and \(\sigma(q)\neq 0\), then as \(n\rightarrow\infty,\) the \(\theta_{nj}\)s generated from the transformed-linear innovations algorithm approach \(\theta_{1},\cdots,\theta_{q}\) of an MA(\(q\)) whose TPDF matches the given TPDF._
Proof: Let us consider the form for the \(\theta_{nj}\)'s given by the transformed-linear innovations algorithm in (22):
\[\theta_{n,n-k}=\nu_{k}^{-1}\left(\sigma(n-k)-\sum_{l=0}^{k-1} \theta_{k,k-l}\theta_{n,n-l}\nu_{l}\right),\quad k=0,1,...,n-1. \tag{23}\]
Rewriting (23) by letting \(h=n-k\),
\[\theta_{n,h} = \nu_{n-h}^{-1}\left(\sigma(h)-\sum_{l=0}^{n-h-1}\theta_{n-h,n-l} \theta_{n,n-l}\nu_{l}\right), \tag{24}\] \[= \nu_{n-h}^{-1}\left(\sigma(h)-\sum_{l=n-h-q}^{n-h-1}\theta_{n-h,n -h-l}\theta_{n,n-l}\nu_{l}\right),\quad h=0,1,...,q,\]
since \(\theta_{n,n-l}=0\) for all \(l=0,1,\cdots,n-h-q-1\).
Rewriting (24) by letting \(j=n-h-l\),
\[\theta_{n,h}=\nu_{n-h}^{-1}\left(\sigma(h)-\sum_{j=1}^{q}\theta_{ n-h,j}\theta_{n,j+h}\nu_{n-h-j}\right). \tag{25}\]
As \(n\rightarrow\infty\), let \(\theta_{n,h}\rightarrow\theta_{h}\) and \(\nu_{n}\to c^{2}\). Then (25) becomes,
\[\theta_{h}=c^{-2}\left(\sigma(h)-\sum_{j=1}^{q}\theta_{j}\theta_{ j+h}c^{2}\right). \tag{26}\]
Rearranging (26),
\[\sigma(h) = \theta_{h}c^{2}+\sum_{j=1}^{q}\theta_{j}\theta_{j+h}c^{2} \tag{27}\] \[= c^{2}\sum_{j=0}^{q}\theta_{j}\theta_{j+h},\quad h=0,1,...,q,\]
which is the form for the TPDF at lag \(h\) for a regularly-varying tail stationary MA(\(q\)) with \(\theta_{j}\geq 0\) for \(j=0,1,\cdots,q\) and \(\lim_{x\rightarrow\infty}\text{pr}(Z_{j}>x)/\{x^{-2}L(x)\}=c^{2}\). Thus, the TPDF of this MA(\(q\)) matches the given TPDF \(\sigma(h)\). \(\Box\)
We are extending the above result to the MA(\(\infty\)) case.
### Transformed-Linear Wold Decomposition
If the TPDF of a time series does not converge to \(0\), analogous to the Wold decomposition discussed in Brockwell and Davis (1991) we can decompose the time series into an MA(\(\infty\)) process and a deterministic process. Following Brockwell and Davis (1991) and Sargent (1979) we prove our Transformed-Linear Wold Decomposition as follows:
**Theorem 1** (The Transformed-Linear Wold Decomposition): _If \(c^{2}=\|X_{n+1}\ominus\hat{X}_{n+1}\|^{2}>0\), then \(X_{t}\) can be expressed as_
\[X_{t}=\bigoplus_{j=0}^{\infty}\psi_{j}\circ Z_{t-j}\ \oplus\ U_{t}, \tag{28}\]
_where_
_(i)_ \(\psi_{0}=1\) _and_ \(\sum_{j=0}^{\infty}\psi_{j}^{2}<\infty,\)__
_(ii)_ \(\{Z_{t}\}\) _is a sequence of independent and tail stationary regularly-varying random variables with scale_ \(c,\)__
_(iii)_ \(Z_{t}\in\mathbb{V}^{t}\) _for each_ \(t\in\mathbb{Z},\)__
_(iv)_ \(\langle Z_{t},U_{s}\rangle=0\) _for all_ \(s,t\in\mathbb{Z},\)__
_(v)_ \(\{U_{t}\}\) _is deterministic._
_The sequences \(\{\psi_{j}\}\), \(\{Z_{j}\}\), and \(\{U_{j}\}\) are uniquely determined by (28) and the conditions (i) - (v)._
Proof: Consider the sequences
\[Z_{t}=X_{t}\ominus P_{\mathbb{V}^{t-1}}X_{t}, \tag{29}\]
\[\psi_{j}=c^{-2}\langle X_{t},Z_{t-j}\rangle, \tag{30}\]
\[U_{t}=X_{t}\ominus\bigoplus_{j=0}^{\infty}\psi_{j}\circ Z_{t-j}. \tag{31}\]
That is,
\[Z_{t}=X_{t}\ominus\bigoplus_{j=1}^{\infty}a_{j}\circ X_{t-j},\ \ a_{j}\in \mathbb{R},\ j=1,\cdots,t-1.\]
Thus, \(Z_{t}\in\mathbb{V}^{t}\), establishing (iii). By definition of \(P_{\mathbb{V}^{t-1}}\), \(P_{\mathbb{V}^{t-1}}X_{t}\in\mathbb{V}^{t-1}\) and \(Z_{t}=X_{t}\ominus P_{\mathbb{V}^{t-1}}X_{t}\in\mathbb{V}^{t-1\perp}.\) Thus,
\[Z_{t}\in\mathbb{V}^{t-1\perp}\subset\mathbb{V}^{t-2\perp}\subset\cdots\]
Hence for \(s<t\), \(\langle Z_{s},Z_{t}\rangle=0\). By Lemma 1
\[P_{\bar{\wp}\{X_{s},t-n\leq s\leq t-1\}}X_{t}\stackrel{\text{tail ratio}}{\rightarrow}P_{\mathbb{V}^{t-1}}X_{t},\ \ \text{as}\ n\rightarrow\infty.\]
By stationarity and continuity of norm,
\[\|Z_{t+1}\| = \|X_{t+1}\ominus P_{\mathbb{V}^{t}}X_{t+1}\|\] \[= \lim_{n\rightarrow\infty}\|X_{t+1}\ominus P_{\bar{\wp}\{X_{s},s=t+1-n,\cdots,t\}}X_{t+1}\|\] \[= \lim_{n\rightarrow\infty}\|X_{t}\ominus P_{\bar{\wp}\{X_{s},s=t-n,\cdots,t-1\}}X_{t}\|\] \[= \|X_{t}\ominus P_{\mathbb{V}^{t-1}}X_{t}\|=\|Z_{t}\|.\]
Letting \(c^{2}=\|Z_{t}\|^{2}\), \(\{Z_{t}\}\) is a sequence of independent and tail stationary regularly-varying random variables with scale \(c\), thus establishing (ii).
By equation (20) in the proof of Proposition 2,
\[P_{\bar{\wp}\{Z_{j},j\leq t\}}X_{t}=\bigoplus_{j=0}^{\infty}\psi_{j}\circ Z_{t-j},\]
where \(\psi_{j}=c^{-2}\langle X_{t},Z_{t-j}\rangle\) and \(\sum_{j=0}^{\infty}\psi_{j}^{2}<\infty\). By stationarity, the coefficients \(\psi_{j}\) are independent of \(t\). Also,
\[\psi_{0}=c^{-2}\langle X_{t},X_{t}\ominus P_{\mathbb{V}^{t-1}}X_{t}\rangle=c^ {-2}\|X_{t}\ominus P_{\mathbb{V}^{t-1}}X_{t}\|^{2}=c^{-2}\|Z_{t}\|^{2}=1,\]
thus proving (i). From equation (30) and (31), for \(s\leq t\),
\[\langle U_{t},Z_{s}\rangle = \left\langle X_{t}\ominus\bigoplus_{j=0}^{\infty}\psi_{j}\circ Z _{t-j},\ Z_{s}\right\rangle\] \[= \langle X_{t},Z_{s}\rangle-\left\langle\bigoplus_{j=0}^{\infty} \psi_{j}\circ Z_{t-j},\ Z_{s}\right\rangle\] \[= \langle X_{t},Z_{s}\rangle-\psi_{t-s}\left\langle Z_{s},Z_{s}\right\rangle\] \[= \langle X_{t},Z_{s}\rangle-\|Z_{s}\|^{-2}\left\langle X_{t},Z_{s} \right\rangle\|Z_{s}\|^{2}\] \[= 0.\]
In addition, if \(s>t\), \(Z_{s}\in\mathbb{V}^{s-1\perp}\subset\mathbb{V}^{t\perp}\). But \(U_{t}\in\mathbb{V}^{t}\). Hence \(\langle U_{t},Z_{s}\rangle=0\) for \(s>t\). Thus (iv) is proved.
Since \(U_{t}\) is orthogonal to \(Z_{t}\), \(U_{t}\in\mathbb{V}^{t-1}\), that is \(U_{t}\) can be predicted perfectly from previous \(X\)'s. To see this clearly, consider the projection of \(U_{t}\) on \(\mathbb{V}^{t-1}\) to get
\[P_{\mathbb{V}^{t-1}}U_{t}=P_{\mathbb{V}^{t-1}}X_{t}\ominus P_{\mathbb{V}^{t-1 }}\bigoplus_{j=0}^{\infty}\psi_{j}\circ Z_{t-j}=P_{\mathbb{V}^{t-1}}X_{t}\ominus \bigoplus_{j=1}^{\infty}\psi_{j}\circ Z_{t-j},\]
since \(P_{\mathbb{V}^{t-1}}Z_{t}=0\) and \(P_{\mathbb{V}^{t-1}}Z_{t-k}=Z_{t-k}\) for \(k\geq 1\). Transformed-linearly subtracting above equation from (31) gives
\[U_{t}\ominus P_{\mathbb{V}^{t-1}}U_{t}=\left(X_{t}\ominus P_{\mathbb{V}^{t-1}}X_{t}\right)\ \ominus\ \psi_{0}\circ Z_{t}=0_{\mathbb{V}},\]
since the one-step ahead prediction error for \(X_{t}\) is \(\psi_{0}\circ Z_{t}\). Hence, \(U_{t}=P_{\mathbb{V}^{t-1}}U_{t}\). In general,
\[P_{\mathbb{V}^{t-k}}U_{t}=P_{\mathbb{V}^{t-k}}X_{t}\ominus\bigoplus_{j=k}^{ \infty}\psi_{j}\circ Z_{t-j}.\]
Transformed-linearly subtracting above equation from (31) gives
\[U_{t}\ominus P_{\mathbb{V}^{t-k}}U_{t}=\left(X_{t}\ominus P_{\mathbb{V}^{t-k }}X_{t}\right)\ \ominus\bigoplus_{j=0}^{k-1}\psi_{j}\circ Z_{t-j}=0_{\mathbb{V}},\]
since the k-step ahead prediction error for \(X_{t}\) is \(\bigoplus_{j=0}^{k-1}\psi_{j}\circ Z_{t-j}\). Thus \(\{U_{t}\}\) is deterministic as it can be predicted from past \(X\)'s.
### Simulation Study
We conduct a simulation study that corroborates the richness of the class of transformed-linear regularly-varying MA(\(\infty\)) models. We simulate data from two models. The first model is a GARCH(1,1) process (Bollerslev (1986)) with Gaussian noise terms and parameters \(\alpha_{0}=0.2\), \(\alpha_{1}=0.5\), and \(\beta_{1}=0.3\). We consider the time series of absolute values of this GARCH process, which we denote \(x_{t}^{(orig)}\). A chi-plot (not shown) for the upper tail shows asymptotic dependence with \(\hat{\chi}(1)\approx 0.34\). The Hill estimator (Hill (1975)) at the empirical \(0.99\) quantile of this transformed data gives an estimate \(\hat{\alpha}=3.27\) of the tail index. The scale is estimated to be \(\hat{c}=0.47\). We further transform the data into \(x_{t}=\hat{c}^{-1/2}(x_{t}^{(orig)})^{\hat{\alpha}/2}\) so that our marginal now can be assumed to have \(\alpha=2\) and \(\sigma(0)=1\). As discussed in Mhatre and Cooley (2020+), preprocessing the data to have \(\sigma(0)=1\) allows us to reduce bias in TPDF estimation. Note that by doing this the noise terms \(Z_{j}\) are no longer such that \(\sigma_{Z_{j}}(0)=1\). As done in Mhatre and Cooley (2020+), to reduce bias in TPDF estimation, we subtract off the mean of the transformed data and replace the negative observations with \(0\). We estimate the TPDF up to 500 lags using data whose radial components exceed the \(0.99\) quantile. The squared distance of prediction \(\nu_{n}\) converges to 0.65. Running the innovations algorithm on the estimated TPDF gives us converged \(\theta\) estimates of an MA model. We consider the first 25 \(\hat{\theta}\)'s as we deem the \(\theta\) estimates to be negligible beyond that. We then generate Frechet noise terms with \(\alpha=2\) and scale \(\sqrt{\nu_{n}}=\sqrt{0.65}\), and simulate a transformed-linear regularly-varying MA(\(25\)) time series using the estimated \(\theta\)'s from the innovations algorithm. We then back-transform the simulated time series to the original marginals. The average difference for the first 25 lags between the estimated TPDF from the original GARCH data and the estimated TPDF from our fitted model is \(-0.01\) (se = \(0.01\)). Figure 1 shows the estimated TPDF's for both the GARCH model data and the data simulated from the fitted MA(\(25\)).
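For concreteness, the following is a minimal sketch of the simulation step only, assuming the softplus transform \(t(y)=\log(1+e^{y})\) used to define the transformed-linear operations in this literature; the \(\theta\) coefficients are placeholders standing in for the converged innovations-algorithm estimates, and the TPDF estimation, innovations recursion, and back-transformation to the original marginals are not shown.

```python
import numpy as np

# Sketch: simulate a transformed-linear regularly-varying MA(q) series
#   X_t = (+)_{j=0}^{q} theta_j o Z_{t-j}
# assuming the softplus transform t(y) = log(1 + e^y); theta values are
# placeholders for the converged innovations-algorithm estimates.

def t(y):
    return np.logaddexp(0.0, y)          # softplus log(1 + e^y), numerically stable

def t_inv(x):
    return x + np.log1p(-np.exp(-x))     # inverse softplus, stable for large x

def simulate_tl_ma(theta, n, alpha=2.0, scale=1.0, seed=0):
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta)            # theta[0] = 1 by convention
    q = len(theta) - 1
    # Frechet(alpha) noise with the given scale: scale * (-log U)^(-1/alpha)
    z = scale * (-np.log(rng.uniform(size=n + q))) ** (-1.0 / alpha)
    x = np.empty(n)
    for i in range(n):
        window = z[i:i + q + 1][::-1]    # Z_t, Z_{t-1}, ..., Z_{t-q}
        # transformed-linear combination: t( sum_j theta_j * t^{-1}(Z_{t-j}) )
        x[i] = t(np.dot(theta, t_inv(window)))
    return x

theta_hat = [1.0, 0.6, 0.4, 0.25, 0.15]                  # placeholder coefficients
x_sim = simulate_tl_ma(theta_hat, n=100_000, scale=np.sqrt(0.65))
print(x_sim[:5])
```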
The second model is a first-order Markov chain such that each pair of consecutive observations has a bivariate logistic distribution with a dependence parameter of 0.4 and common unit-Frechet marginals (refer to Smith et al. (1997)). A chi-plot (not shown) for the upper tail shows asymptotic dependence with \(\hat{\chi}(1)\approx 0.7\). We perform the square-root transformation on the data so that \(\alpha=2\). Following the same process as for the first model, we simulate a transformed-linear regularly-varying MA(\(30\)) time series using the estimated \(\theta\)'s from the innovations algorithm and back-transform the simulated time series to the original marginals. The average difference for the first 30 lags between the estimated TPDF from the original logistic data and the estimated TPDF from our fitted model is \(0.05\) (se = \(0.03\)).
Table 1 gives the average length of run above higher quantiles for the original time series data and the fitted time series data (using coefficient estimates from the innovations algorithm) for the GARCH model and the logistic model. Table 2 gives the higher quantiles for sum of three consecutive time series terms. The fitted models seem to produce reasonable estimates of these tail quantities. Interestingly, for the fitted MA model in the logistic case (last column in Table 1), there is an increasing trend in the estimates and we do not see the "threshold stability" as exhibited by
the fitted GARCH model. Tables 1 and 2 show that a model which captures the level of dependence in the TPDF can adequately estimate quantities of interest, like length of run or sums of consecutive terms, despite not fully characterizing the angular measure of the true model. In fact, lag-1 scatterplots of the target and fitted models (available in Mhatre (2022)) show distinct differences in the nature of the bivariate dependence structure.
## 7 Prediction Error
We now return our attention to prediction and investigate the problem of assessing prediction uncertainty. Because the geometry of regular variation is very different from the elliptical geometry assumed in many non-extreme settings, we need to deal with uncertainty in prediction differently.
### Completely Positive Decomposition of the Prediction TPDM
The squared distance of prediction, \(\nu_{n}\), is the analogue of mean square prediction error. In the finite-dimensional multivariate case, Lee (2021) showed that the squared-norm prediction error \(\nu_{n}\) is not useful for constructing a prediction interval in the polar geometry of regular variation because the magnitude of the error depends on the magnitude of the predicted value. We follow Lee (2021) and apply their method to construct prediction intervals when \(\hat{X}_{n+1}\) is large.
\begin{table}
\begin{tabular}{l c c c c} \hline Threshold & \multicolumn{2}{c}{GARCH} & \multicolumn{2}{c}{Logistic} \\ quantile & Original & Fitted & Original & Fitted \\ \hline
0.95 & 1.57 (0.02) & 1.71 (0.03) & 3.02 (0.09) & 3.88 (0.11) \\
0.98 & 1.57 (0.03) & 1.60 (0.04) & 3.21 (0.15) & 3.87 (0.16) \\
0.99 & 1.56 (0.05) & 1.62 (0.06) & 3.44 (0.22) & 4.46 (0.26) \\
0.995 & 1.56 (0.06) & 1.66 (0.09) & 3.45 (0.29) & 4.63 (0.35) \\
0.999 & 1.47 (0.11) & 1.52 (0.15) & 2.86 (0.48) & 4.76 (0.76) \\ \hline \end{tabular}
\end{table}
Table 1: Average length (standard error) of run above a threshold for the simulation study
\begin{table}
\begin{tabular}{l c c c c} \hline Quantile & \multicolumn{2}{c}{GARCH} & \multicolumn{2}{c}{Logistic} \\ & Original & Fitted & Original & Fitted \\ \hline
0.95 & 16.23 (0.20) & 17.49 (0.16) & 223.19 (8.29) & 233.21 (8.26) \\
0.98 & 21.36 (0.31) & 22.05 (0.34) & 565.32 (37.79) & 589.23 (37.77) \\
0.99 & 25.72 (0.68) & 26.72 (0.58) & 1252.96 (184.83) & 1250.13 (118.99) \\
0.995 & 32.08 (1.24) & 32.09 (1.03) & 2789.53 (420.92) & 2672.24 (338.49) \\
0.999 & 50.67 (3.67) & 48.90 (2.98) & 12833.18 (3736.45) & 12732.06 (3705.63) \\ \hline \end{tabular}
\end{table}
Table 2: Quantiles for sum (standard error) of twelve consecutive terms for the simulation study
Figure 1: Left panel: estimated TPDF’s for the original GARCH data and for the data simulated from the fitted transformed-linear MA(25) model. Right panel: same but for the original Markov logistic model and the fitted transformed-linear MA(30).
The tail dependence between \(\hat{X}_{n+1}\) and \(X_{n+1}\) can be characterized by the bivariate angular measure \(H_{\hat{X}_{n+1},X_{n+1}}\). As shown in Lee (2021), the dependence of \(H_{\hat{X}_{n+1},X_{n+1}}\) is summarized by the prediction TPDM
\[\Sigma_{\hat{X}_{n+1},X_{n+1}}=\begin{bmatrix}\mathbf{\sigma}_{n}^{T} \Sigma_{n}^{-1}\mathbf{\sigma}_{n}&\mathbf{\sigma}_{n}^{T}\Sigma_{n}^{-1}\mathbf{\sigma}_{ n}\\ \mathbf{\sigma}_{n}^{T}\Sigma_{n}^{-1}\mathbf{\sigma}_{n}&\sigma(0)\end{bmatrix}, \tag{32}\]
where \(\Sigma_{n}=\left[\sigma(i-j)\right]_{i,j=1}^{n}\) and \(\mathbf{\sigma}_{n}=\left[\sigma(i)\right]_{i=1}^{n}\). Since \(\Sigma_{\hat{X}_{n+1},X_{n+1}}\) is a \(2\times 2\) completely positive matrix, given any \(q_{*}\geq 2\), there exist nonnegative matrices \(B\in\mathbb{R}^{2\times q_{*}}\) such that \(BB^{T}=\Sigma_{\hat{X}_{n+1},X_{n+1}}\). For feasible computation, Lee (2021) choose a moderate \(q_{*}\) and apply the algorithm in Groetzner (2020) repeatedly to get \(n_{decomp}\) nonnegative \(B^{(k)}\) matrices, \(k=1,\cdots,n_{decomp}\), such that \(B^{(k)}B^{(k)^{T}}=\Sigma_{\hat{X}_{n+1},X_{n+1}}\) for all \(k\). Then,
\[\hat{H}_{\hat{X}_{n+1},X_{n+1}}=n_{decomp}^{-1}\sum_{k=1}^{n_{decomp}}\sum_{j=1}^{q_{*}}\|b_{kj}\|_{2}^{2}\;\delta_{b_{kj}/\|b_{kj}\|_{2}}(\cdot),\]
where \(b_{kj}\) is the \(j^{\text{th}}\) column of the \(k^{\text{th}}\) matrix \(B^{(k)}\) and \(\delta\) is the Dirac mass function, and
\[\Sigma_{\hat{H}}=n_{decomp}^{-1}\sum_{k=1}^{n_{decomp}}B^{(k)}B^{(k)^{T}}= \Sigma_{\hat{X}_{n+1},X_{n+1}}.\]
Defined this way, \(\hat{H}_{\hat{X}_{n+1},X_{n+1}}\) consists of \(n_{decomp}\times q_{*}\) point masses.
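As a minimal numerical sketch, the prediction TPDM in equation (32) can be assembled from an estimated TPDF as follows; the \(\sigma(h)\) values below are placeholders, and the completely positive decomposition of the resulting matrix (the repeated application of the algorithm in Groetzner (2020)) is not shown.

```python
import numpy as np

# Sketch: build the 2x2 prediction TPDM of equation (32) from an estimated
# TPDF sigma(h); the sigma values are placeholders.

def prediction_tpdm(sigma, n):
    """sigma: sequence with sigma[h] = estimated TPDF at lag h, h = 0, ..., n."""
    Sigma_n = np.array([[sigma[abs(i - j)] for j in range(1, n + 1)]
                        for i in range(1, n + 1)])            # [sigma(i - j)]_{i,j=1}^n
    sigma_n = np.array([sigma[i] for i in range(1, n + 1)])   # [sigma(i)]_{i=1}^n
    s = sigma_n @ np.linalg.solve(Sigma_n, sigma_n)           # sigma_n' Sigma_n^{-1} sigma_n
    return np.array([[s, s],
                     [s, sigma[0]]])

sigma_hat = [1.0, 0.55, 0.40, 0.28, 0.20, 0.14]   # placeholder sigma(0), ..., sigma(5)
print(prediction_tpdm(sigma_hat, n=5))
```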
As in Section 6.3, we simulate 100,000 random observations of a first-order Markov chain model such that each pair of consecutive observations has a bivariate logistic distribution with dependence parameter of 0.4 and common unit-Frechet marginals. We perform the square-root transformation on the data so that \(\alpha=2\). We consider the first 70,000 observations as training data and the remaining as test data. In Section 6.3 we fitted a transformed-linear regularly-varying MA(\(30\)) time series to data simulated from the logistic model because the converged innovations algorithm gave negligible \(\theta\) estimates beyond \(\theta_{30}\). Hence we consider the problem of predicting any observation \(X_{n+1}\), \(n\geq 30\), based on the previous \(30\) observations. Let us denote this predicted observation as \(\hat{X}_{n+1|n:n-29}\). Using equation (21) we obtain \(\hat{\mathbf{b}}\) and the prediction TPDM \(\Sigma_{\hat{X}_{n+1|n:n-29},X_{n+1}}\) is obtained from equation (32). We apply the algorithm given in Groetzner (2020) repeatedly to compute \(2\times 5\) matrices \(B^{(k)}\), \(k=1,\cdots,100\), each of which is a completely positive decomposition of \(\Sigma_{\hat{X}_{n+1|n:n-29},X_{n+1}}\). Thus our estimated angular measure \(\hat{H}_{\hat{X}_{n+1|n:n-29},X_{n+1}}\) has \(500\) point masses. The \(0.025\) and \(0.975\) quantiles of \(\hat{H}_{\hat{X}_{n+1|n:n-29},X_{n+1}}\) give us a \(95\%\) joint region. The left panel of Figure 2 gives the \(95\%\) joint region on the 30,000 test data. Thresholding at the \(0.95\) quantile of \(\|(\hat{X}_{n+1|n:n-29},X_{n+1})\|\), \(99.6\%\) of the large data points fall within this joint region.
### Conditional Prediction Intervals
The conditional density of \(X_{2}|X_{1}=x_{1}\) if \(x_{1}\) is large is given in Lee (2021) as approximately
\[f_{X_{2}|X_{1}}(x_{2}|x_{1})=2c^{-1}\|(x_{1},x_{2})\|_{2}^{-5}x_{2}h\left( \frac{(x_{1},x_{2})}{\|(x_{1},x_{2})\|_{2}}\right), \tag{33}\]
where \(c=\int_{0}^{\infty}2\|(x_{1},x_{2})\|_{2}^{-5}x_{2}h\left(\frac{(x_{1},x_{2})} {\|(x_{1},x_{2})\|_{2}}\right)\text{d}x_{2}\). We obtain an estimate of the conditional density of \(X_{n+1}\) given a large value of \(\hat{X}_{n+1}\) using equation (33). The angular density \(h\) is estimated through a kernel density estimate of \(\hat{H}_{\hat{X}_{n+1},X_{n+1}}\). The \(0.025\) and \(0.975\) quantiles of the estimated conditional density in equation (33) give us a \(95\%\) conditional prediction interval. The right panel of Figure 2 gives the scatterplot after thresholding the test data at the \(0.95\) quantile of \(\hat{X}_{n+1|n:n-29}\), along with the \(95\%\) conditional prediction intervals. These prediction intervals have a coverage rate of \(0.975\).
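The following sketch illustrates such a conditional interval computed numerically from equation (33), assuming the angular point masses of \(\hat{H}\) are already available; the angles and weights are placeholders, and the Gaussian kernel density estimate of \(h\) is one reasonable choice of estimator rather than a prescribed one.

```python
import numpy as np
from scipy.stats import gaussian_kde

# Sketch: 95% conditional prediction interval from equation (33).  The angular
# point masses (angles in [0, pi/2] and weights) are placeholders for those
# obtained from the completely positive decompositions.

angles = np.array([0.30, 0.45, 0.60, 0.70, 0.80, 0.95, 1.10])   # placeholder angles
weights = np.array([0.50, 0.80, 1.20, 1.50, 1.10, 0.70, 0.40])  # placeholder masses
h_kde = gaussian_kde(angles, weights=weights)                    # estimate of angular density h

def conditional_interval(x1, x2_grid, level=0.95):
    r = np.hypot(x1, x2_grid)                                    # ||(x1, x2)||_2
    dens = 2.0 * r ** (-5.0) * x2_grid * h_kde(np.arctan2(x2_grid, x1))
    dens /= np.trapz(dens, x2_grid)                              # normalizing constant c
    cdf = np.cumsum(dens) * (x2_grid[1] - x2_grid[0])
    lo = np.interp((1.0 - level) / 2.0, cdf, x2_grid)
    hi = np.interp(1.0 - (1.0 - level) / 2.0, cdf, x2_grid)
    return lo, hi

x2_grid = np.linspace(0.01, 60.0, 4000)
print(conditional_interval(x1=10.0, x2_grid=x2_grid))
```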
## 8 Application to Santa Ana Winds
We return to the March AFB hourly windspeed data that we fitted a transformed-linear regularly-varying ARMA(1,1) time series to in Mhatre and Cooley (2020+). We marginally transformed the data to have regularly-varying tails with \(\alpha=2\) and \(\sigma(0)=1\). After subtracting off the mean of the transformed data and replacing the negative observations
with \(0\), we estimate the TPDF up to 500 lags using data whose radial components exceed the \(0.99\) quantile. Running the innovations algorithm on the estimated TPDF gives us converged \(\theta\) estimates of an MA model. The squared distance of prediction, \(\nu_{n}\), converges to 0.65. We consider the first 40 \(\hat{\theta}\)'s since the \(\theta\) estimates are negligible beyond that. We then generate Frechet noise terms, with \(\alpha=2\) and scale \(\sqrt{\nu_{n}}=\sqrt{0.65}\), and simulate a transformed-linear regularly-varying MA(\(40\)) time series using the estimated \(\theta\)'s from the innovations algorithm and back-transform the simulated time series to the original marginals. The average difference between the estimated TPDF from the original windspeed anomalies data and the estimated TPDF from our fitted model is \(-0.02\) (se = \(0.01\)). Table 3 gives the average length of run above higher quantiles for the original windspeed anomalies time series data and the fitted time series data (using coefficient estimates from the innovations algorithm). Table 4 gives the higher quantiles for sum of three consecutive time series terms. The fitted models seem to produce reasonable estimates of these tail quantities.
We now perform prediction of the windspeed anomalies time series. Out of the 103,630 observations, we consider the first 70,000 observations as training data and the remaining as test data. We consider the problem of predicting an observation \(X_{n+1}\), \(n\geq 40\), based on the previous 40 observations. Let us denote this predicted observation as
\begin{table}
\begin{tabular}{l l l} \hline Quantile & Original & Fitted \\ \hline
0.95 & 27.70 (0.72) & 28.44 (0.52) \\
0.98 & 43.74 (1.19) & 42.93 (1.07) \\
0.99 & 56.65 (1.66) & 54.82 (1.62) \\
0.995 & 69.67 (2.11) & 68.91 (2.50) \\
0.999 & 91.89 (3.60) & 97.93 (4.49) \\ \hline \end{tabular}
\end{table}
Table 4: Quantiles for sum (standard error) of twelve consecutive terms for the windspeed data
\begin{table}
\begin{tabular}{l l l} \hline Threshold & Original & Fitted \\ quantile & & \\ \hline
0.95 & 2.43 (0.06) & 2.32 (0.07) \\
0.98 & 2.35 (0.09) & 2.27 (0.11) \\
0.99 & 2.10 (0.10) & 2.46 (0.18) \\
0.995 & 1.77 (0.11) & 2.35 (0.23) \\
0.999 & 1.40 (0.10) & 2.04 (0.33) \\ \hline \end{tabular}
\end{table}
Table 3: Average length (standard error) of run above a threshold for the windspeed data
Figure 2: Scatterplot of the Logistic model test data, on the transformed Fréchet scale with \(\alpha=2\), with the estimated 95% joint prediction region (left panel). 95% conditional prediction intervals given each large value of \(\hat{X}_{n+1|n:n-29}\) of the Logistic model test data, on the transformed Fréchet scale with \(\alpha=2\) (right panel).
\(\hat{X}_{n+1|n:n-39}\). Using equation (21) we obtain \(\hat{\mathbf{b}}\) and the prediction TPDM \(\Sigma_{\hat{X}_{n+1|n:n-39},X_{n+1}}\) is obtained from equation (32). We apply the algorithm given in Groetzner (2020) repeatedly to compute \(2\times 5\) matrices \(B^{(k)}\), \(k=1,\cdots,100\), each of which is a completely positive decomposition of \(\Sigma_{\hat{X}_{n+1|n:n-39},X_{n+1}}\). Thus our estimated angular measure \(\hat{H}_{\hat{X}_{n+1|n:n-39},X_{n+1}}\) has \(500\) point masses. The \(0.025\) and \(0.975\) quantiles of \(\hat{H}_{\hat{X}_{n+1|n:n-39},X_{n+1}}\) give us a \(95\%\) joint region. The left panel of Figure 3 gives the \(95\%\) joint region on the test data. Thresholding at the \(0.95\) quantile of \(\|(\hat{X}_{n+1|n:n-39},X_{n+1})\|\), \(98.99\%\) of the large data points fall within this joint region.
We obtain an estimate of the conditional density of \(X_{n+1}\) given a large value of \(\hat{X}_{n+1}\) using equation (33). The \(0.025\) and \(0.975\) quantiles of the estimated conditional density in equation (33) give us a \(95\%\) conditional prediction interval. The center panel of Figure 3 gives the scatterplot after thresholding at the \(0.95\) quantile of \(\hat{X}_{n+1|n:n-39}\) of the test data, along with the \(95\%\) conditional prediction bounds. These prediction intervals have a coverage rate of \(0.96\). The right panel of Figure 3 gives the prediction intervals on the original scale of the anomalies obtained by taking the inverse of the marginal transformation. We compare our prediction intervals to the standard Gaussian method. We transform the marginal of the original windspeed anomalies data to be standard normal and estimate the ACVF. We use the estimated covariance matrix to find the best linear unbiased predictor and to estimate the MSPE. We then create 95% Gaussian prediction intervals from the estimated MSPE and get a coverage rate of 0.94. For the windspeed anomalies data, our prediction intervals do not show a significant advantage over the standard Gaussian-based prediction intervals because our data are not too heavy-tailed, resulting in a negligible difference between the corresponding predicted weight vectors \(\hat{\mathbf{b}}\). We are investigating a heavy-tailed precipitation data set where we suspect the difference between the corresponding predicted weight vectors will be more significant.
## 9 Summary
This paper extends the work of Mhatre and Cooley (2020+), which introduced the transformed linear time series models. That paper laid out the foundation of these models, introducing the notion of tail stationarity which enables characterization of tail dependence in time series by the TPDF. Mhatre and Cooley (2020+) introduced the transformed linear backshift operator which allows the AR and ARMA models to be defined as transformed linear solutions to the model equations. They showed via example that the transformed linear models were more flexible than nonnegative traditional linear models in capturing tail dependence.
This paper introduces the transformed linear innovations algorithm with the aim of performing prediction when the previous time series terms are large. Not only does the innovations algorithm enable iterative prediction, but it also provides us a tool demonstrating the richness of the class of models. Perhaps the most important result is that one can fit a transformed-linear regularly-varying MA(\(\infty\)) arbitrarily close to any valid TPDF. Using the polar geometry of regular variation, we develop prediction intervals when predicted values are large.
Figure 3: Scatterplot of the windspeed anomalies test data on the Fréchet scale with the estimated 95% joint prediction region (left panel). 95% conditional prediction intervals given each large value of \(\hat{X}_{n+1|n:n-39}\) of the windspeed anomalies test data on the Fréchet scale (center panel). 95% conditional prediction intervals given each large value of \(\hat{X}_{n+1|n:n-39}\) of the windspeed anomalies test data on the original scale (right panel).

There are many avenues for future work. A prevalent method for identifying the orders of AR and MA in classical time series is through looking at autocorrelation function (ACF) and partial autocorrelation function (PACF) plots. This necessitates the development of a PACF analogue for our models. Some initial results on partial tail correlation in finite dimensions have been developed by Lee & Cooley in their upcoming paper.
There might also be motivation in extremes to think about a non-causal time series. The innovations algorithm looks only backward in time, by definition. In the simulated time series in our causal setting, we see that extreme behavior is characterized by an asymmetric spike, where large values have subsequent large values, but no large values precede the spike. Although these transformed-linear models have shown good ability to capture dependence summary measures, the behavior of the simulated series differs from that of the environmental time series we have explored so far. A relaxation of causality could likely address this.
## Acknowledgement
Nehali Mhatre and Daniel Cooley were both partially supported by National Science Foundation award DMS-1811657.
## Supplementary Material
Supplementary materials demonstrate that \(\mathbb{V}\) is a vector space, that the defined inner product meets the conditions of an inner product, and that \(T\) is a linear map creating the isomorphism between \(\mathbb{V}\) and \(\ell_{1}\). The windspeed anomalies data and the R code for the simulation study results (Section 6.3) and the application to the windspeed anomalies data (Section 8) are available at [https://www.stat.colostate.edu/~cooleyd/TransLinTS/](https://www.stat.colostate.edu/~cooleyd/TransLinTS/).
|
2309.11419 | KOSMOS-2.5: A Multimodal Literate Model | The automatic reading of text-intensive images represents a significant
advancement toward achieving Artificial General Intelligence (AGI). In this
paper we present KOSMOS-2.5, a multimodal literate model for machine reading of
text-intensive images. Pre-trained on a large-scale corpus of text-intensive
images, KOSMOS-2.5 excels in two distinct yet complementary transcription
tasks: (1) generating spatially-aware text blocks, where each block of text is
assigned spatial coordinates within the image, and (2) producing structured
text output that captures both style and structure in markdown format. This
unified multimodal literate capability is achieved through a shared
decoder-only autoregressive Transformer architecture and task-specific prompts.
Building on this foundation, we fine-tune KOSMOS-2.5 for document understanding
tasks, resulting in a document understanding generalist named KOSMOS-2.5-CHAT.
Additionally, a large corpus of 357.4 million document pages spanning diverse
domains was curated for pre-training. We evaluate KOSMOS-2.5 on two newly
proposed benchmarks, OCREval and MarkdownEval, for document-level text
recognition and image-to-markdown generation, demonstrating impressive literate
capabilities comparable to GPT-4o. KOSMOS-2.5-CHAT achieves performance
comparable to other state-of-the-art generalists that are five times larger
(1.3B vs. 7B) across nine text-rich visual question answering benchmarks.
Models and code have been available at \url{https://aka.ms/kosmos25}. | Tengchao Lv, Yupan Huang, Jingye Chen, Yuzhong Zhao, Yilin Jia, Lei Cui, Shuming Ma, Yaoyao Chang, Shaohan Huang, Wenhui Wang, Li Dong, Weiyao Luo, Shaoxiang Wu, Guoxin Wang, Cha Zhang, Furu Wei | 2023-09-20T15:50:08Z | http://arxiv.org/abs/2309.11419v2 | # Kosmos-2.5: A Multimodal Literate Model
###### Abstract
We present Kosmos-2.5, a multimodal literate model for machine reading of text-intensive images. Pre-trained on large-scale text-intensive images, Kosmos-2.5 excels in two distinct yet cooperative transcription tasks: (1) generating spatially-aware text blocks, where each block of text is assigned its spatial coordinates within the image, and (2) producing structured text output that captures styles and structures into the markdown format. This unified multimodal literate capability is achieved through a shared Transformer architecture, task-specific prompts, and flexible text representations. We evaluate Kosmos-2.5 on end-to-end document-level text recognition and image-to-markdown text generation. Furthermore, the model can be readily adapted for any text-intensive image understanding task with different prompts through supervised fine-tuning, making it a general-purpose tool for real-world applications involving text-rich images. This work also paves the way for the future scaling of multimodal large language models.
Figure 1: Kosmos-2.5 is a multimodal large language model that takes text images as input and generates spatially-aware texts (i.e., texts with bounding boxes) or markdown-formatted texts (i.e., texts with markdown elements), following different task prompts, respectively.
## 1 Introduction
Over the past several years, large language models (LLMs) have emerged as a critical area of research in artificial intelligence. These models are designed to learn from massive amounts of natural language data, allowing them to perform a wide range of language-related tasks with impressive accuracy. This development has been fueled by advancements in model scaling that enabled researchers to create models with unprecedented complexity. As a result, LLMs have become increasingly prevalent across various industries and applications, from customer service chatbots to virtual assistants and automated content creation. One notable trend in recent years has been the focus on building larger and more complex models, such as GPT-3 [1] and GPT-4 [2], which have hundreds or thousands of billions of parameters and can generate compelling language outputs. While these models require significant computing resources to train and operate, they hold enormous potential for revolutionizing how we interact with and understand natural language.
Current LLMs primarily focus on textual information and cannot understand visual information. However, advancements in the field of multimodal large language models (MLLMs) aim to address this limitation. MLLMs combine visual and textual information within a single Transformer-based model, enabling the model to learn and generate content based on both modalities. MLLMs have shown promise in a variety of real-world applications, including natural image understanding and text image understanding. These models leverage the power of language modeling as a general interface for multimodal problems, allowing them to process and generate responses based on textual and visual inputs. While existing MLLMs have mainly focused on natural images with lower resolutions, the exploration of text images is an area that requires further investigation. Taking advantage of large-scale multimodal pre-training for text images is an important direction for MLLM research. By incorporating text images into the training process and developing models based on textual and visual information, we can unlock new possibilities for multimodal applications involving high-resolution text-intensive images.
In this study, we present **Kosmos-2.5**, a multimodal literate model that takes advantage of Kosmos-2 [1] and is designed to tackle machine reading of text-intensive images, as shown in Figure 1. Kosmos-2.5 performs two closely related transcription tasks in a unified multimodal model. The first task generates spatially-aware text blocks, assigning text lines their corresponding spatial coordinates within the original text-rich image. The second task produces structured text output, capturing styles and structures in the markdown format. Both tasks are conducted under a unified framework, leveraging a shared Transformer architecture, task-specific prompts, and flexible text representations. Specifically, our model architecture combines a ViT-based vision encoder and a Transformer-based language decoder linked by a resampler module. Our model is pre-trained on a large corpus of text-intensive images, whose text representations include text lines with bounding boxes and plain markdown texts. By employing this dual-task training strategy, Kosmos-2.5 enhances its general-purpose multimodal literate capabilities. We assess the performance of Kosmos-2.5 on two tasks: end-to-end document-level text recognition and markdown-formatted image-to-text generation. Experimental results demonstrate strong literate performance on several text-intensive image understanding tasks. In addition, Kosmos-2.5 also demonstrates promising capabilities in few-shot and zero-shot learning scenarios, offering a universal interface for real-world applications that involve text-rich images.
The contributions of this work are summarized as follows:
* Kosmos-2.5 represents a significant paradigm shift in text image understanding, transitioning from encoder-only/encoder-decoder models to a decoder-only model. It is pre-trained by incorporating dual transcription tasks (spatially-aware text block generation and structured markdown text generation) into a single, unified model architecture.
* This innovative method streamlines the application interface by integrating generative multimodal language modeling, simplifying the traditionally complex cascaded pipelines used for various downstream tasks.
* Furthermore, Kosmos-2.5 demonstrates impressive multimodal literate capabilities, thus setting the stage for future scaling of multimodal large language models.
## 2 Kosmos-2.5
### Model Architecture
The model architecture of Kosmos-2.5 consists of a pre-trained vision encoder and a language decoder connected with a resampler module, shown in Figure 2. We adopt the pre-trained vision encoder based on the Vision Transformer (ViT) [1]. We further adapt a Perceiver Resampler module with an attentive pooling mechanism to reduce the size of image embeddings [1]. The language decoder is built upon the Transformer-based decoder to condition on image and text context for the next token prediction.
### Image and Text Representations
Kosmos-2.5 takes a composite input consisting of an image and a text representation. **The image representation** is uniform across various configurations and leverages a variable-resolution input strategy following Pix2Struct [11]. Precisely, we extract the maximum number of fixed-size patches (\(16\times 16\)) that can fit within a predefined sequence length \(L\). In addition, Resampler [1] is used as an attentive pooling mechanism to reduce the number of image embeddings. **The text representation**, however, is more versatile and can be one of two types: text lines with bounding boxes or plain markdown texts.
**Text lines with bounding boxes:** For the layout-based document representation, text lines and their associated bounding boxes are extracted. Inspired by Kosmos-2 [23], we ground the text lines to their spatial positions in images by aligning their representations. The coordinates of these bounding boxes are then converted into discrete location tokens. Given that \(L\) also represents the maximum length for each image dimension, we introduce a set of \(2L+2\) specialized tokens. These tokens, <x\({}_{0}\)>, <x\({}_{1}\)>,..., <x\({}_{L-1}\)>, <y\({}_{0}\)>,..., <y\({}_{L-1}\)>, <bbox>, and </bbox>, correspond to the coordinates and the start and end of a bounding box. The coordinates are obtained by rounding down the actual position after resizing images. Consider a document \(T\) that comprises \(N\) text lines. Each line is represented as \(\mathbf{T}_{n}=\{w_{1}^{(n)},w_{2}^{(n)},\ldots,w_{M_{n}}^{(n)}\}\), where \(M_{n}\) is the number of words in the \(n\)-th text line. The bounding box for \(\mathbf{T}_{n}\) is then denoted by \(\mathbf{B}_{n}=\) <bbox><\(x_{\text{tl}}^{(n)}\)><\(y_{\text{tl}}^{(n)}\)><\(x_{\text{br}}^{(n)}\)><\(y_{\text{br}}^{(n)}\)></bbox>, which includes coordinates for its top-left and bottom-right corners.
**Markdown texts:** For the markup-based document representation where the output text is in the markdown format, the text component captures both content and formatting markup. Unlike layout-based documents, markdown text does not require bounding boxes. Instead, the text is directly tokenized, retaining all special characters and formatting indicators.
To facilitate these diverse input types, we employ different composite representations. For image-text pairs with text lines and bounding boxes, the input is denoted as <s><image>Image Embedding</image> \(\bigcup_{n=1}^{N}\) (\(\mathbf{B}_{n}\oplus\mathbf{T}_{n}\)) </s>. The operator \(\oplus\) represents the concatenation of the
Figure 2: Model architecture of Kosmos-2.5. A shared decoder-only Transformer model generates the output text sequence based on the input image from a vision encoder and different task prompts.
text line \(\mathbf{T}_{n}\) and its bounding box \(\mathbf{B}_{n}\). Conversely, when the text is in the markdown format, the input simplifies to <s><image>Image Embedding</image>Markdown Text</s>. In both cases, <s> and </s> signify the sequence boundaries, while <image> and </image> indicate the beginning and end of image embeddings. This flexibility in text representation allows Kosmos-2.5 to apply to various document analysis tasks.
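As an illustration of the layout-based representation, a minimal sketch of serializing text lines and pixel-level boxes into this location-token format is given below; the image size, the value of \(L\), and the example lines are placeholder assumptions.

```python
# Sketch: serialize text lines and pixel boxes into the layout-task target
# sequence described above; image size, L, and example lines are placeholders.

L = 1024  # maximum length per image dimension (assumed value)

def bbox_tokens(box, img_w, img_h):
    """Map a pixel box (x0, y0, x1, y1) to discrete location tokens."""
    x0, y0, x1, y1 = box
    xtl, xbr = (min(int(v / img_w * L), L - 1) for v in (x0, x1))
    ytl, ybr = (min(int(v / img_h * L), L - 1) for v in (y0, y1))
    return f"<bbox><x_{xtl}><y_{ytl}><x_{xbr}><y_{ybr}></bbox>"

def layout_sequence(lines, img_w, img_h):
    """lines: list of (text, box) pairs -> composite input/target string."""
    body = "".join(bbox_tokens(box, img_w, img_h) + text for text, box in lines)
    return "<s><image>Image Embedding</image>" + body + "</s>"

example = [("Kosmos-2.5: A Multimodal Literate Model", (80, 40, 940, 90)),
           ("Abstract", (80, 130, 240, 160))]
print(layout_sequence(example, img_w=1000, img_h=1400))
```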
### Pre-training Data
The pre-training process enables Kosmos-2.5 to learn versatile representations suitable for various text-intensive image understanding tasks. The model is pre-trained on a rich array of datasets from diverse sources. Traditional Optical Character Recognition (OCR) tasks are primarily geared towards generating text content and its 2D positions within an image. However, they often neglect the need to maintain the order and structural integrity of the original document, which is essential for text-intensive image understanding tasks involving structured information.
To address this, we steer Kosmos-2.5 to excel in two distinct yet cooperative transcription tasks: (1) generating spatially-aware text blocks, where each block of text is assigned its spatial coordinates within the image, and (2) producing structured text output that captures styles and structures into the markdown format. Markdown provides an advantage over plain text by explicitly distinguishing different structural elements, such as tables and lists, with specific tokens. For example, table cells can be denoted with vertical bars (|) and list items with bullets (*, -, or +). It also standardizes the representation of typographic emphases like bold (**bold**) and italics (*italics*), integrating the learning of document structure with natural language understanding in a unified model.
For spatially-aware text blocks, we use:
* **IIT-CDIP:** The IIT-CDIP dataset is a large-scale public collection comprising scanned document images. We used approximately 27.6 million pages to train our model.
* **arXiv papers:** arXiv, an open-access research-sharing platform, provides another significant data source, accounting for roughly 20.9 million pages. We downloaded a bulk of data, consisting of PDF and LaTeX source files, from the official arXiv repository2. Footnote 2: [https://info.arxiv.org/help/bulk_data/index.html](https://info.arxiv.org/help/bulk_data/index.html)
* **PowerPoint slides:** A corpus of 6.2 million pages is collected from various web pages containing PowerPoint documents, significantly enhancing the diversity of our training data.
* **General PDF:** Additionally, we crawled the web for diverse open-domain digital PDF files, leading to the collection of a large corpus comprising approximately 155.2 million pages.
* **Web screenshots:** A subset of the mC4 webpages is scraped and rendered as screenshots containing almost 100 million pages.
For structured text output in markdown format, we use:
* **README:** We collect 2.9 million "README.md" files from open-source GitHub projects, primarily written in markdown format.
* **DOCX:** We also extract 1.1 million DOCX pages from millions of WORD files crawled from the web. The DOCX pages are converted to markdown format, and each page corresponds to its markdown information.
* **LaTeX:** A subset of the entire arXiv papers is used to extract the mapping of PDF pages and its corresponding markdown information converted from the LaTeX code, which contains a total of 3.7 million pages.
* **HTML:** We obtain 6.3 million HTML files from the aforementioned mC4 subset and convert them into markdown format.
### Data Processing
The pre-training data has a wide coverage, and each type of data requires a different processing workflow, which is introduced as follows:
**IIT-CDIP:** The IIT-CDIP dataset mainly consists of scanned document images. We use the Microsoft Read API 3 to extract text and layout information.
Footnote 3: [https://learn.microsoft.com/en-us/azure/ai-services/computer-vision/overview-ocr#read-api](https://learn.microsoft.com/en-us/azure/ai-services/computer-vision/overview-ocr#read-api)
Footnote 4: [https://github.com/pymupdf/PyMuPDF](https://github.com/pymupdf/PyMuPDF)
Footnote 5: [https://github.com/microsoft/playwright-python](https://github.com/microsoft/playwright-python)
Footnote 6: [https://lxml.de/](https://lxml.de/)
**arXiv papers, PowerPoint slides, General PDF:** We first compile and convert arXiv papers and PowerPoint slides into PDF files. Together with other general PDFs, we employed the PyMuPDF parser 4 to extract text and layout information efficiently.
**Web screenshots:** We also include webpage screenshots in the model pre-training to diversify the layout distribution further. We collect the webpage URLs from the English portion of the mC4 dataset. Playwright 5 is used to access a specified URL and open the webpage. The HTML content of the page is extracted and parsed using the lxml library 6 to obtain a Document Object Model (DOM) tree representation. This DOM tree is traversed, examining the XPath of each element within it. This traversal aims to determine whether each element is visible and retrieve information about its bounding boxes.
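A minimal sketch of this collection step is shown below; it reads text and bounding boxes directly from Playwright's element handles rather than through the lxml DOM/XPath traversal described above, and the URL and CSS selector are placeholders.

```python
from playwright.sync_api import sync_playwright

# Sketch: collect visible text and bounding boxes from a rendered webpage.
# The URL and the selector are placeholders.

def page_text_with_boxes(url, selector="p, h1, h2, h3, li"):
    records = []
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        page.goto(url)
        for el in page.query_selector_all(selector):
            box = el.bounding_box()        # None when the element is not visible
            if box is not None:
                records.append((el.inner_text(), box))
        browser.close()
    return records

print(page_text_with_boxes("https://example.com")[:3])
```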
**README (markdown):** In addition to layout-based data, we collect markup-based data for the pre-training. We collect "README.md" files from many GitHub projects and convert these files into HTML using Pandoc 7. Then, wkhtmltopdf 8 is used to obtain the images from the generated HTML content.
Footnote 9: [https://github.com/matthewwithamm/python-markdownify](https://github.com/matthewwithamm/python-markdownify)
**DOCX (markdown):** The Microsoft Office WORD files have been extensively used in existing research like TableBank [14] and ReadingBank [21]. We collect WORD DOCX files and convert them into texts with markdown. First, we use Pandoc to convert the XML content within the DOCX files into markdown files. As Pandoc keeps the "<table>" tags to represent the tabular cells in the generated markdown, we further identify all the tables and use markdownify 9 to convert them into the markdown format. Finally, the original DOCX files are converted into PDF files, and each page is aligned to the corresponding span of the markdown content based on a heuristic method.
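For the table handling, a minimal sketch of converting a leftover HTML table into markdown with markdownify might look as follows; the table content is a placeholder.

```python
from markdownify import markdownify as md

# Sketch: convert a leftover HTML table (as kept by Pandoc) into markdown;
# the table content is a placeholder.
html_table = ("<table><tr><th>Task</th><th>Pages</th></tr>"
              "<tr><td>Layout-based</td><td>310.4M</td></tr>"
              "<tr><td>Markup-based</td><td>14.0M</td></tr></table>")
print(md(html_table))
```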
Footnote 7: [https://pandoc.org/](https://pandoc.org/)
Footnote 8: [https://wkhtmltopdf.org/](https://wkhtmltopdf.org/)
Footnote 10: [https://math.nist.gov/~BMIiller/LaTeXML/](https://math.nist.gov/~BMIiller/LaTeXML/)
**LaTeX (markdown):** LaTeX documents from arXiv have been used to generate PDF files to obtain texts with bounding boxes. Meanwhile, we also convert the LaTeX content into markdown texts. Similar to Nougat [2], LaTeXML 10 is used to convert the LaTeX code into the HTML sequence, which is further transformed into the markdown format. Different from Nougat, we keep all the tables at the beginning of the page as most LaTeX users prefer to position tables with "[t]" or "[h]" instead of "[b]". Meanwhile, we also convert the table content from the LaTeX format into the markdown format.
**HTML (markdown):** The most straightforward way to obtain markdown resources from HTML webpages is through web scraping. However, webpages are often cluttered with various layouts and styles, resulting from the misuse of HTML tags. Moreover, HTML pages may include extraneous elements, such as advertisements, navigation menus, or formatting elements, making extracting clean and meaningful content challenging. To overcome these obstacles, we employ Playwright, a fast and reliable end-to-end testing framework for the web. The library allows us to navigate the HTML structure, filter out non-essential elements, and extract the relevant text content. We also apply custom rules and regular expressions to further refine the extracted text and format it as markdown, ensuring that the resulting markdown files are coherent and readable.
### Filtering and Quality Control
We employ fastText for language identification (with a threshold of 0.5) to filter out non-English documents from the entire pre-training dataset. To ensure content diversity within each source, we
utilize MinHash [1] to identify and remove redundant pages. We use the same parameters as [11], and a document pair with similarity above 0.8 is marked as a duplicate. A comprehensive breakdown of the pre-training data, along with their respective sampling ratios, is provided in Table 1. When dealing with image-to-markdown data from README, DOCX, LaTeX, and HTML sources, we observe discrepancies between the content in text images and their corresponding markdown sequences due to conversion issues. Consequently, we refine the data by evaluating token overlap between images and markdown files, requiring a token intersection-to-union ratio greater than 0.95 for inclusion. Section A.2 shows some of the training samples.
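A minimal sketch of this filtering pipeline is given below; the fastText language-identification model, the datasketch MinHash implementation, the number of permutations, and the whitespace tokenization are assumptions, and only the 0.5, 0.8, and 0.95 thresholds come from the description above.

```python
import fasttext                              # language identification
from datasketch import MinHash, MinHashLSH   # near-duplicate detection

lid = fasttext.load_model("lid.176.bin")     # pre-trained fastText LID model (assumed available)
lsh = MinHashLSH(threshold=0.8, num_perm=128)

def keep_english(text, threshold=0.5):
    labels, probs = lid.predict(text.replace("\n", " "))
    return labels[0] == "__label__en" and probs[0] >= threshold

def minhash(text):
    m = MinHash(num_perm=128)
    for tok in set(text.split()):
        m.update(tok.encode("utf8"))
    return m

def is_duplicate(page_id, text):
    m = minhash(text)
    if lsh.query(m):                         # a previously indexed page exceeds the threshold
        return True
    lsh.insert(page_id, m)
    return False

def well_aligned(image_tokens, markdown_tokens, ratio=0.95):
    """Token intersection-over-union filter for image-to-markdown pairs."""
    a, b = set(image_tokens), set(markdown_tokens)
    return len(a & b) / len(a | b) > ratio
```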
## 3 Experiments
### Evaluation
**Text Recognition:** We utilize word-level _precision_ (# of correct matches over the number of detected words), _recall_ (# of correct matches over the number of ground truth words), and _f1_ as the metrics to evaluate the text recognition performance. If there are repeated words in the ground truth, they are expected to be repeated in the prediction. Text recognition is evaluated on three benchmark datasets, including FUNSD [15], SROIE [16] and CORD [20]. We compare Kosmos-2.5 to the text recognition results from Document OCR in Google Document AI 11.
Footnote 11: [https://cloud.google.com/document-ai](https://cloud.google.com/document-ai)
**Image-to-markdown Generation:** In light of the unique nature of the image-to-markdown conversion task, assessing the quality of the generated markdown necessitates specialized metrics. We adopt a two-fold evaluation scheme: Normalized Edit Distance (NED) and Normalized Tree Edit Distance (NTED), considering both the lexical accuracy and the preservation of the original structural elements.
The NED is formulated as
\[\textit{NED}=1-\frac{1}{N}\sum_{i=1}^{N}D\left(s_{i},\hat{s}_{i}\right)/\max \left(\mathrm{len}(s_{i}),\mathrm{len}(\hat{s}_{i})\right)\]
where \(N\), \(s\), and \(\hat{s}\) denote the number of samples, prediction, and ground truth, respectively. \(D(\cdot,\cdot)\) and \(\mathrm{len}(\cdot)\) represent the edit distance function and the length of a string. The _NED_ value ranges from 0 to 1, with a higher _NED_ value indicating the prediction is closer to the ground truth.
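A minimal sketch of the NED computation, using the editdistance package for \(D(\cdot,\cdot)\) (the package choice is an assumption):

```python
import editdistance   # Levenshtein distance D(s, s_hat)

def ned(predictions, references):
    total = 0.0
    for s, s_hat in zip(predictions, references):     # s: prediction, s_hat: ground truth
        total += editdistance.eval(s, s_hat) / max(len(s), len(s_hat), 1)
    return 1.0 - total / len(references)

print(ned(["# Title\n\nsome text"], ["# Title\n\nsome text body"]))
```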
However, given the hierarchical structure inherent to markdown, relying solely on a string-based comparison metric like NED can be insufficient. Thus, we adopt NTED as an additional evaluation metric for structural differences. NTED is a tree edit distance normalized by the number of nodes in the tree, considering the structural discrepancies between parse trees. Specifically, the predicted markdown sequence is first transformed into an HTML tree. Then, the tree edit distance between
\begin{table}
\begin{tabular}{l l l l} \hline \hline
**Task** & **Data Source** & **Number of Pages** & **Sampling Ratio** \\ \hline \multirow{5}{*}{Layout-based (texts+bboxes)} & IIT-CDIP & 27.6M & 10\% \\ & arXiv papers & 20.9M & 5\% \\ & PowerPoint slides & 6.2M & 5\% \\ & General PDF & 155.2M & 20\% \\ & Web screenshots & 100.5M & 10\% \\ \hline \multirow{4}{*}{Markup-based (texts+markdown)} & README & 2.9M & 15\% \\ & DOCX & 1.1M & 10\% \\ & LaTeX & 3.7M & 15\% \\ & HTML & 6.3M & 10\% \\ \hline
**Total** & & 324.4M & 100\% \\ \hline \hline \end{tabular}
\end{table}
Table 1: Summary of Pre-training Data in Kosmos-2.5
the prediction and the ground truth is calculated using the ZSS algorithm [28]. The NTED is formulated as
\[\textit{NTED}=1-\frac{1}{N}\sum_{i=1}^{N}\mathrm{TD}\left(t_{i},\hat{t}_{i} \right)/\max\left(\mathrm{node}(t_{i}),\mathrm{node}(\hat{t}_{i})\right)\]
where \(N\), \(t\), and \(\hat{t}\) signify the number of samples, the HTML tree of prediction, and the HTML tree of ground truth, respectively. Besides, \(\mathrm{TD}(\cdot,\cdot)\) and \(\mathrm{node}(\cdot)\) stand for the tree edit distance function and the number of nodes in a tree.
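A minimal sketch of the NTED computation might look as follows, converting markdown to an HTML tree with the markdown and lxml packages and computing the Zhang-Shasha distance with the zss package; these library choices are assumptions, while the normalization by node count follows the formula above.

```python
import markdown                          # markdown -> HTML (package choice is an assumption)
from lxml import html                    # HTML -> element tree
from zss import Node, simple_distance    # Zhang-Shasha tree edit distance

def to_tree(md_text):
    root = html.fromstring(markdown.markdown(md_text))
    def build(el):
        node = Node(el.tag)
        for child in el:
            node.addkid(build(child))
        return node
    return build(root)

def count_nodes(node):
    return 1 + sum(count_nodes(c) for c in node.children)

def nted(predictions, references):
    total = 0.0
    for p, r in zip(predictions, references):
        tp, tr = to_tree(p), to_tree(r)
        total += simple_distance(tp, tr) / max(count_nodes(tp), count_nodes(tr))
    return 1.0 - total / len(references)

print(nted(["# A\n\n- x\n- y"], ["# A\n\n- x\n- y\n- z"]))
```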
We create three datasets to evaluate the image-to-markdown task from different data sources, including document-level markdown generation, README markdown generation and table markdown generation. Each dataset includes 1,000 \(\langle\)image, markdown\(\rangle\) pairs, which are held out from the pre-training data. We compare Kosmos-2.5 to the markdown generated by the Nougat [2] base and small models.
### Implementation Details
We employ the AdamW optimizer [17] with \(\beta=(0.9,0.98)\) for optimization, setting the weight decay to 0.01 and the dropout rate to 0.1. The learning rate is warmed up to \(2\times 10^{-4}\) during the initial 375 steps, followed by a linear decay to zero throughout the remaining training steps. The batch size is adjustable to align with the available computational resources and specific training requirements. Kosmos-2.5 contains a total of 1.3 billion parameters. The vision encoder is initialized from the encoder of the Pix2Struct-Large model. The language decoder includes 24 Transformer layers with a hidden size of 1,536, an FFN intermediate size of 6,144, and 16 attention heads. Section A.1 shows more details of the training hyperparameters.
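A minimal sketch of this optimization setup is shown below; the model is a placeholder and the total number of steps is assumed from the training schedule described in the next paragraph.

```python
import torch

# Sketch: AdamW with betas (0.9, 0.98), weight decay 0.01, linear warmup to
# 2e-4 over 375 steps, then linear decay to zero.  The model is a placeholder
# and the total step count is assumed as 100k + 140k + 10k.

model = torch.nn.Linear(16, 16)
total_steps, warmup_steps, peak_lr = 250_000, 375, 2e-4

optimizer = torch.optim.AdamW(model.parameters(), lr=peak_lr,
                              betas=(0.9, 0.98), weight_decay=0.01)

def lr_lambda(step):
    if step < warmup_steps:
        return step / warmup_steps
    return max(0.0, (total_steps - step) / (total_steps - warmup_steps))

scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda)
# scheduler.step() is called once after each optimizer step during training
```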
Due to the substantially larger quantity of available layout-based data than markup-based data, we initially trained the model for 100k steps exclusively using the layout-based dataset. Subsequently, the two datasets were combined for further training of 140k steps. Additionally, we incorporate the training split of the evaluation dataset into the entire pre-training data, extending the process by an additional 10k steps. For text tokenization, we utilize SentencePiece [18] and adopt the "full-sentence" format [19]. This approach packs each input sequence with full sentences, continuously sampled from one or multiple documents. Newly added word embeddings of location tokens are randomly initialized, with all parameters updated during training. We also leverage the data augmentation approaches from TrOCR [12] in the training to make models more robust.
Throughout the evaluation process, model inference is conducted using a single model checkpoint across various evaluation datasets with the corresponding task prompt respectively, demonstrating that our approach does not necessitate individualized model fine-tuning for each dataset.
### Results
Kosmos-2.5 is a flexible framework that facilitates multitasking, with tasks determined by the provided task prompts. Experimental results are demonstrated in Table 2 and Table 3. Specifically, for the text recognition task, our Kosmos-2.5 outperforms Google Document OCR by 0.33%, 2.45%, and 1.35% in terms of the F1 score, showcasing its effectiveness. For the image-to-markdown task, it is worth noting that our method significantly outperforms Nougat [2]. For example, Kosmos-2.5 achieves a notable improvement of 33.68% (95.09% vs 61.41%) over \(\textsc{Nougat}_{\textsc{BASE}}\) in terms of NED on the README dataset. Besides, regarding NTED, Kosmos-2.5 also boosts the performance by 33.38% (82.08% vs 48.70%) compared with \(\textsc{Nougat}_{\textsc{BASE}}\) on the Documents dataset. We attribute the performance boost to the increased diversity of our training data compared to Nougat, which primarily focuses on the academic paper domain. Notably, the greater diversity in our training data significantly enhances our model's comprehension of different document types and strengthens its generalization capabilities. In summary, the experimental results validate the remarkable capabilities of Kosmos-2.5 in various tasks.
The model generates distinct outputs depending on the task prompts it receives. When given the layout task prompt, the model produces the following text sequence, which includes textual content and corresponding bounding boxes:
<bbox><x_...><y_...><x_...><y_...></bbox> NYC Department of Education School Year Calendar 2023-2024
<bbox><x_...><y_...><x_...><y_...></bbox> This is the 2023-24 school year calendar for all 3K-12 NYCDOE public schools. If your child attends a private,
<bbox><x_...><y_...><x_...><y_...></bbox> parochial, charter school, NYC Early Education Center (NYCEEC) or Family Childcare Program,
<bbox><x_...><y_...><x_...><y_...></bbox> your child's school for information about their calendar. Please note the following:
<bbox><x_...><y_...><x_...><y_...></bbox> On days when school buildings are closed due to inclement weather or other emergencies, all students
...
With the markup task prompt, the model generates another text sequence that follows the markdown format:
\# NYC Department of Education School Year Calendar 2023-2024
This is the 2023-24 school year calendar for all 3K-12 NYCDOE public schools. If your child attends a private, parochial,
...
LLMs, enhancing their capabilities through further prompt engineering. This approach empowers LLMs with robust text image understanding capabilities. Thirdly, we have the potential to augment the pre-training with textual data, transforming it into a general-purpose MLLM. This expanded model not only processes visual signals but also possesses strong language understanding capabilities.
## 4 Related Work
### Multimodal Large Language Models
The flourishing of large language models (LLMs), represented by ChatGPT [14], has revolutionized artificial intelligence and significantly impacted numerous downstream tasks such as text translation, code generation, question answering, etc. Despite the rapid development, it is important to recognize that human perception of the world is not limited to language alone but encompasses a wide range of modalities, with particular emphasis on the visual modality. Many research works attempt to "bring eyes" to LLMs and develop multimodal large language models (MLLMs), which can be categorized into LLM-centric scheduling systems and end-to-end trainable multimodal systems.
The LLM-centric scheduling system [23], [22], [23], [24], [25] takes advantage of many vision foundation models (e.g., Stable Diffusion [19], ControlNet [15], BLIP [16], etc.), and schedules these models in a language-centric manner. For example, Visual ChatGPT [23] develops a set of prompts to incorporate visual information into ChatGPT, enabling users to draw or edit images through chatting. MM-REACT [23] leverages vision experts to augment its multimodal capabilities by incorporating a textual prompt design that can effectively represent various visual signals, including text descriptions, coordinates, and aligned file names, for images and videos. HuggingGPT [23] connects LLMs with extensive AI models in machine learning communities, tackling user requests through ChatGPT's task planning, model selection, and response summarization capabilities. Further, TaskMatrix.AI [23] largely extends the scale and connects foundation models with millions of APIs for solving tasks in both digital and physical domains. Differently, InternGPT [21] incorporates pointing instructions (e.g., clicking and dragging) for better communication between chatbots and users, while also improving the accuracy of chatbots in performing vision-centric tasks. Nevertheless, this approach has several limitations, such as the expenses associated with API calls or the storage space required for the pre-trained weights of foundation models.
In contrast, end-to-end trainable multimodal systems integrate vision and language within a single model that is trained directly on multimodal data. For example, Flamingo uses
gated cross-attention to fuse pre-trained vision and language models, showing impressive ability in downstream multimodal tasks. Besides, BLIP-2 [11] utilizes a Q-Former to align the visual features with a large language model. Furthermore, InstructBLIP improves the training of the Q-Former by introducing a novel instruction-aware visual feature extraction method. Based on this design, MiniGPT-4 [23] uses Vicuna [12] as the text encoder and fine-tunes on detailed image descriptions to better match user intent. Sparkles unlocks multimodal instruction-following models' capabilities in open-ended dialogues involving multiple images [13]. LLaVA [11] injects visual features into the language model by treating image tokens as a foreign language, and uses conversations generated by GPT-4 [14] for fine-tuning. Kosmos-1 [13] is trained from scratch using web-scale corpora while showing impressive performance in zero-shot, few-shot, and multimodal chain-of-thought prompting settings. Analogously, Kosmos-2 [15] incorporates grounding and referring abilities and can accept image regions users select using bounding boxes as input. mPLUG-Owl [23] efficiently fine-tunes the language model using low-rank adaptation with multimodal instruction datasets. Otter [11] is built using Flamingo and aims to explore multimodal in-context learning capabilities.
### Text Image Understanding
Text image understanding is a cutting-edge technology that harnesses the power of artificial intelligence, including natural language processing and computer vision, to automatically comprehend, categorize, and extract information from documents [16]. Any file containing written or printed characters can be considered a document, including web pages, slides, posters, and even scene text images. Documents are ubiquitous in our daily lives, so the research on documents is significant.
Before the deep learning era, researchers used rule-based heuristic approaches for document analysis [17, 18]. They manually observed layout information and summarized heuristic rules, but these methods are not scalable and require enormous labour costs. Subsequently, the rise of deep learning has led to significant advancements in the field of Document AI.
The current model does not yet support fine-grained control of document elements' positions using natural language instructions, despite being pre-trained on inputs and outputs involving the spatial coordinates of text. Instruction tuning could offer a promising route to enhance this aspect of the model, leading to broader application capabilities. Furthermore, documents spanning multiple pages pose a challenge as they typically demand holistic processing and comprehension. While it is feasible for Kosmos-2.5 to accept multiple image pages interleaved with text as input, managing long context windows remains a vital issue we aim to address in future work.
In the broader research landscape, a significant direction lies in further developing the model's scaling capabilities. With an expanding spectrum of tasks and rising complexities, scaling up the model to handle larger volumes of data is crucial for the progression of multimodal literate models. Ultimately, our goal is to develop a model that effectively interprets both visual and textual data, and generalizes smoothly across an expanded array of text-intensive multimodal tasks.
## Acknowledgement
We would like to acknowledge Zhiliang Peng for the helpful discussions.
|
2309.10707 | Corpus Synthesis for Zero-shot ASR domain Adaptation using Large
Language Models | While Automatic Speech Recognition (ASR) systems are widely used in many
real-world applications, they often do not generalize well to new domains and
need to be finetuned on data from these domains. However, target-domain data
usually are not readily available in many scenarios. In this paper, we propose
a new strategy for adapting ASR models to new target domains without any text
or speech from those domains. To accomplish this, we propose a novel data
synthesis pipeline that uses a Large Language Model (LLM) to generate a target
domain text corpus, and a state-of-the-art controllable speech synthesis model
to generate the corresponding speech. We propose a simple yet effective
in-context instruction finetuning strategy to increase the effectiveness of LLM
in generating text corpora for new domains. Experiments on the SLURP dataset
show that the proposed method achieves an average relative word error rate
improvement of $28\%$ on unseen target domains without any performance drop in
source domains. | Hsuan Su, Ting-Yao Hu, Hema Swetha Koppula, Raviteja Vemulapalli, Jen-Hao Rick Chang, Karren Yang, Gautam Varma Mantena, Oncel Tuzel | 2023-09-18T15:43:08Z | http://arxiv.org/abs/2309.10707v1 | # Corpus Synthesis for Zero-Shot ASR Domain Adaptation Using Large Language Models
###### Abstract
While Automatic Speech Recognition (ASR) systems are widely used in many real-world applications, they often do not generalize well to new domains and need to be finetuned on data from these domains. However, target-domain data usually are not readily available in many scenarios. In this paper, we propose a new strategy for adapting ASR models to new target domains without any text or speech from those domains. To accomplish this, we propose a novel data synthesis pipeline that uses a Large Language Model (LLM) to generate a target domain text corpus, and a state-of-the-art controllable speech synthesis model to generate the corresponding speech. We propose a simple yet effective in-context instruction finetuning strategy to increase the effectiveness of LLM in generating text corpora for new domains. Experiments on the SLURP dataset show that the proposed method achieves an average relative word error rate improvement of \(28\%\) on unseen target domains without any performance drop in source domains.
Hsuan Su (National Taiwan University), Ting-Yao Hu, Hema Swetha Koppula, Raviteja Vemulapalli, Jen-Hao Rick Chang, Karren Yang, Gautam Varma Mantena, Oncel Tuzel (Apple)
Footnote †: Work done when interning at Apple.
automatic speech recognition, large language models, controllable speech synthesis, zero-shot ASR adaptation
## 1 Introduction
Adapting an End-to-End (E2E) Automatic Speech Recognition (ASR) system to new target domains is a challenging task due to the limited availability of paired speech-text data. Recently, text-only adaptation methods [1, 2, 3, 4, 5, 6] have been developed to address the data scarcity problem. Some of these works [5, 6] use a Controllable Speech Synthesis (CSS) model to generate speech for the target domain text corpus, and create a paired dataset with real text and synthetic speech for ASR model adaptation. However, in many scenarios, collecting a target domain text corpus may be costly, time consuming, or even infeasible due to privacy concerns.
Large Language Models (LLMs) have recently been shown to work extremely well on numerous natural language processing tasks, especially in few/zero-shot settings. This motivates us to leverage LLMs for adapting ASR models to new domains without any speech or text data from those domains. Previous works exploit LLMs in ASR systems during inference using re-scoring [7, 8] or fusion [9] techniques, and they suffer from the costly overhead of LLM inference. In contrast, we use LLMs to generate target domain synthetic data for adapting a pretrained ASR model (see Fig. 1).
First, we generate a synthetic text corpus by prompting an LLM with the target domain name (a word or short phrase). To improve the quality of the synthetic text corpus, we propose a simple yet effective in-context instruction finetuning (ICIF) strategy. Assuming that the pretrained ASR model has been trained on a source dataset with multiple domains (e.g. a personal assistant with a set of existing features), ICIF learns to relate domain names to the knowledge of LLMs from source text sentences. Then, we use a state-of-the-art CSS model [10] to generate speech corresponding to the synthetic texts. Finally, the fully synthetic paired speech-text corpus is used to finetune a pretrained ASR model, improving performance on the target domain of interest while retaining the performance on the source domain.
**Major contributions**: (1) We demonstrate that text corpus synthesis using LLMs enables zero-shot ASR domain adaptation. (2) Our proposed in-context instruction finetuning strategy improves the quality of the synthetic text corpus resulting in significant gains in the final ASR performance. (3) We show that the proposed data synthesis pipeline achieves an average of \(28\%\) relative Word Error Rate (WER) reduction on unseen target domains in the SLURP dataset [11].
Figure 1: **Data Synthesis Pipeline.** The pipeline consists of an LLM and a CSS model. We use the LLM to generate a text corpus, and then synthesize the corresponding speech data using the CSS model. **ASR Adaptation** - The synthetic target domain data is used to finetune a pretrained ASR model along with real speech from source domains. See Section 3 for details.
## 2 Related Works
Many works have shown that LLMs can synthesize data useful for downstream tasks. Ye et al. [12], Yoo et al. [13], and Meng et al. [14] prompt LLMs with handcrafted instructions to generate data for finetuning downstream models. In this work, we finetune the LLMs with instruction data to improve the format consistency of the generated text corpus.
Some previous works also use LLMs to improve the performance of ASR. Dingliwa et al. [7] and Ma et al. [8] conduct second-pass re-scoring using the perplexity score from LLMs. Li et al. [9] propose deep LLM-fusion, which integrates an LLM into the decoder of an encoder-decoder based E2E ASR model. While these methods improve performance, they require LLM inference during ASR decoding, which increases the computational cost. In contrast, our method transfers knowledge from the LLM to an ASR model through a synthetic text corpus.
## 3 Methodology
Fig. 1 shows an overview of the proposed approach. Our pipeline consists of an LLM, a CSS model, and a pretrained ASR model. Given a target domain of interest \(d_{t}\), we generate a fully synthetic text-speech paired corpus \(C_{t}=\{(x_{i}^{t},y_{i}^{t})\}_{i=1}^{N}\), where \(x_{i}^{t}\), \(y_{i}^{t}\) are the text content and speech signal of the \(i\)-th sample respectively. To do this, we first generate a text sentence \(x_{i}^{t}\sim p_{LLM}(x|d_{t})\) from the LLM conditioned on \(d_{t}\). Then, we synthesize the corresponding speech \(y_{i}^{t}\sim p_{CSS}(y|x_{i}^{t})\) using the CSS model. Finally, we use the synthetic text-speech data to finetune the ASR model for target domain \(d_{t}\).
### Text Synthesis with LLMs
Our goal is to synthesize a text corpus that matches the text distribution of a given target domain \(d_{t}\). To this end, we ask LLMs which are pretrained on trillions of text tokens to generate sentences relevant to the target domain using the prompt: "_Please generate a sentence related to \(d_{t}\):_". Our initial experiments show that naively prompting off-the-shelf LLMs in our pipeline leads to some ASR improvement on the target domain. However, the quality of the generated text and its relevance to the target domain are both insufficient. Since off-the-shelf LLMs are trained on large-scale general text corpora, it is difficult for them to produce high quality in-domain text using only the target domain name \(d_{t}\). To address this issue, we propose a simple yet effective in-context instruction finetuning (ICIF) strategy that improves the ability of LLMs to generate in-domain text when prompted with the target domain name.
#### 3.1.1 In-Context Instruction Finetuning (ICIF)
Our proposed in-context instruction finetuning (ICIF) strategy combines instruction finetuning (IF) [15] with demonstration or in-context learning (ICL) [16] using a unified instruction format. Specifically, we first finetune the LLM with instruction data. Then, during inference, we prompt the LLM with additional demonstrations in the same instruction format.
To construct the instruction data and demonstrations, we use a source text corpus \(C_{s}=\{(x_{j}^{s},d_{j}^{s})\}_{j=1}^{M}\), which contains text \(x_{j}^{s}\) from source domains \(d_{j}^{s}\) distinct from \(d_{t}\). As shown in Fig. 2, we reformulate each \((x_{j}^{s},d_{j}^{s})\in C_{s}\) as a natural language instruction- "_Please generate a sentence related to \(d_{j}^{s}:x_{j}^{s}\)_"- and finetune the LLM on these instructions. In the inference stage, we prepend a subset of these instructions from the source domain to the original prompt for the unseen target domain ("_Please generate a sentence related to \(d_{t}\):_") as additional demonstrations. The LLM uses the extended prompt to generate a text sentence in target domain.
Our ICIF strategy learns the structure and format of the source corpus \(C_{s}\), and relates the target domain name \(d_{t}\) to the knowledge from pretrained LLMs. As shown in Section 5.2, the resulting synthetic text corpus comprises high quality, diverse sentences which are semantically related to the unseen target domain.
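To make the prompt construction concrete, the sketch below assembles both the instruction-formatted finetuning examples and the demonstration-augmented inference prompt described above. The template string mirrors the prompt quoted in Section 3.1, while the helper names and the example source-domain pairs are illustrative placeholders rather than the exact implementation.

```python
# Minimal sketch: building ICIF finetuning instructions and the
# demonstration-augmented inference prompt (helper names are illustrative).

def make_instruction(domain: str, sentence: str = "") -> str:
    # Same template as the prompt in Section 3.1; the sentence is appended
    # for finetuning examples and left empty for the inference query.
    return f"Please generate a sentence related to {domain}: {sentence}".rstrip()

# Source-domain corpus C_s = {(x_j^s, d_j^s)}; the pairs below are placeholders.
source_corpus = [
    ("wake me up at seven tomorrow", "alarm"),
    ("play some jazz in the kitchen", "music"),
]

# (1) Instruction finetuning data: one instruction string per source pair.
finetuning_instructions = [make_instruction(d, x) for x, d in source_corpus]

# (2) Inference prompt: prepend k source-domain demonstrations to the
# target-domain query (the paper uses 10 demonstrations).
def build_inference_prompt(target_domain: str, demos, k: int = 10) -> str:
    demo_block = "\n".join(make_instruction(d, x) for x, d in demos[:k])
    query = make_instruction(target_domain)
    return f"{demo_block}\n{query}"

print(build_inference_prompt("email", source_corpus, k=2))
```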
### Controllable Speech Synthesis
We use a state-of-the-art Controllable Speech Synthesis (CSS) model [10] to synthesize speech \(y_{i}^{t}\sim p_{CSS}(y|x_{i}^{t})\), given target domain text \(x_{i}^{t}\) generated by the instruction finetuned LLM model. The CSS model learns a prior distribution to model the acoustic style of speech. By sampling from this prior distribution, the model can produce a synthetic speech corpus in various acoustic conditions.
### ASR Model Adaptation
Finally, we finetune the ASR model on the synthetic speech prepared by the LLM and CSS model. In our initial experiments, we observed that the ASR model usually overfits to synthetic speech artifacts during finetuning, which limits its performance. To address this problem, we add real speech data (_i.e.,_ from source domains) to the synthetic speech from the target domain to regularize the ASR model finetuning.
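A minimal sketch of this regularized finetuning set is given below, assuming PyTorch-style datasets that yield (speech features, token ids) pairs; the tensor shapes and dataset contents are placeholders, not the actual SLURP or CSS data.

```python
# Illustrative sketch: combine synthetic target-domain pairs with real
# source-domain pairs so the ASR model does not overfit to synthesis artifacts.
import torch
from torch.utils.data import ConcatDataset, DataLoader, TensorDataset

# Placeholder tensors standing in for (speech features, token ids).
synthetic_target_set = TensorDataset(torch.randn(100, 80), torch.randint(0, 500, (100, 20)))
real_source_set = TensorDataset(torch.randn(400, 80), torch.randint(0, 500, (400, 20)))

finetune_set = ConcatDataset([synthetic_target_set, real_source_set])
loader = DataLoader(finetune_set, batch_size=16, shuffle=True)

for speech, text in loader:
    pass  # one finetuning step of the pretrained ASR model would go here
```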
Figure 2: **Illustration of In-Context Instruction Finetuning.** We explicitly relate domain name \(d\) to text \(x\) by reformulating an instruction to finetune the LLMs. During inference, we use source domain demonstrations and an unseen target domain (_email_ in the figure) instruction to prompt the LLMs.
## 4 Experimental Setup
### Dataset
SLURP [11] is a spoken language understanding dataset containing 16521 utterances of human commands towards a virtual agent, based on 200 pre-defined prompts such as "How would you ask for the time." The utterances are recorded in two types of acoustic environments (headset and far-field), and categorized into 18 domains (email, alarm, and takeaway, etc.). We use the headset subset to conduct experiments of zero-shot ASR domain adaptation. In each of our experiments, we select one of these domains as the target domain and combine the remaining 17 domains to form the source domain. Our goal is to improve the performance of a pretrained source domain ASR model on the target domain, without using any real speech or real text data from the target domain.
### Large Language Models
We use LLaMA-7B [17] to synthesize the text corpus. LLaMA is a state-of-the-art LLM with a decoder-based transformer architecture that is pretrained on trillions of tokens. LLaMA has shown excellent performance on downstream tasks with instruction finetuning [17, 18]. We apply low-rank adaptation (LoRA) [19] to freeze most of the model parameters and improve efficiency of instruction finetuning. During inference/synthesis, we follow [20] to use typical decoding [18] with \(\tau=0.9\) and set the repetition penalty [21] to \(1.1\). We include \(10\) demonstrations in the inference prompt.
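The following sketch illustrates this setup with the Hugging Face transformers and peft libraries; the decoding hyperparameters (typical_p = 0.9, repetition penalty 1.1) follow the text, while the checkpoint name, LoRA rank, and target modules are assumptions made for illustration.

```python
# Sketch of LoRA-based instruction finetuning and typical decoding for the
# text synthesis LLM; hyperparameters marked below are assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_name = "huggyllama/llama-7b"  # placeholder checkpoint name
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

lora_cfg = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05,          # assumed values
                      target_modules=["q_proj", "v_proj"],            # assumed modules
                      task_type="CAUSAL_LM")
model = get_peft_model(model, lora_cfg)  # only the LoRA adapters are trainable
# ... instruction finetuning on the reformulated source-domain corpus ...

prompt = "Please generate a sentence related to email:"
inputs = tokenizer(prompt, return_tensors="pt")
out = model.generate(**inputs, do_sample=True, typical_p=0.9,
                     repetition_penalty=1.1, max_new_tokens=40)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```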
### Controllable Speech Synthesis (CSS)
Our CSS model is adopted from Style Equalization [10], which is based on a Variational Recurrent Neural Net (VRNN). We make the following four modifications to enhance its acoustic style modeling: (1) increasing the number of Gaussian mixtures of VRNN output distribution (from 3 to 10); (2) increasing the size of acoustic style feature (from 512 to 768); (3) initializing the hidden states of VRNN using the average of the style vector sequence; and (4) using the acoustic style feature to modulate the output linear layers, similar to what is done in [22]. We train the modified CSS model on the training set of LibriTTS [23].
### ASR Model Adaptation
We use ESPNet [24] to build the E2E ASR model, which is composed of a conformer-based encoder [25] and a transformer-based decoder [26]. In each of our experiments, we first obtain a pretrained source domain ASR model by training on LibriSpeech [27] followed by the source domain data (_i.e.,_ 17 pre-defined SLURP domains excluding the target domain). We then adapt this pretrained ASR model to the target domain using the synthetic data from LLM and CSS. For fair comparison between models, we select all final checkpoints using the target domain development set as a validation set.
## 5 Results and Discussion
### ASR Adaptation with Synthetic Text Corpus
In Table 1, we report the performance of ASR models finetuned on target domain data from our corpus synthesis pipeline. For each target domain, we prepare the synthetic text corpus using LLMs with ICIF and the corresponding synthetic speech using CSS. Remarkably, we achieve large reductions in WER across the board (average relative improvement of \(28.73\%\)), without using any real text or speech from the target domain for finetuning. For some target domains (_i.e., Audio_, _Cooking_, and _Transport_), we achieve more than \(40\%\) relative improvement compared to the pretrained source domain models. In addition, the finetuned ASR models also yield a small improvement (average relative WER reduction of \(5.98\%\)) in source domains. Overall, these results demonstrate the efficacy of our corpus synthesis pipeline for adapting ASR models to unseen text domains.
We also finetune the source domain ASR models with (1) real target domain text and synthetic speech, and (2) real text and real speech, obtaining average WERs of 10.77% and 10.74%, respectively. Note that the purpose of this experiment is to establish an upper bound for adaptation; real target domain data is not available in the zero-shot setting.
### Analysis of ICIF
**Contribution of IF and ICL**
As detailed in Section 3.1.1, ICIF involves two steps: (1) _instruction_ (IF), which finetunes the LLM using instructions formulated from a source text corpus, and (2) _demonstration_
\begin{table}
\begin{tabular}{l c c} \hline \hline Zero-shot setting & Average WER \(\downarrow\) & Average relative WER improvement (\%) \(\uparrow\) \\ \hline Source domain ASR (baseline) & 16.77 & - \\ Finetuned with ICIF & 11.95 & 28.73 \\ \hline \hline \end{tabular}
\end{table}
Table 1: ASR Adaptation with Synthetic Text Corpus. Results of ASR models finetuned on target domain synthetic data from our pipeline with ICIF. For each target domain, the source domain ASR (baseline) is trained on LibriSpeech followed by the data from 17 domains (excluding the target domain) in SLURP dataset. The metric shown is WER (lower is better).
(ICL), which prompts the LLM with some example instructions. Table 2 analyzes the individual contributions of these components. We observe that both are useful for improving the WER of the finetuned ASR model: using either instruction (IF) or demonstration (ICL) improves the WER over naive prompting (_i.e.,_ from \(14.02\) to \(12.13\) and \(12.59\) respectively). Combining IF and ICL (ICIF) further improves the WER to \(11.95\). These results indicate that both instruction and demonstration are useful to our synthetic corpus pipeline. Next, we ask whether instruction and demonstration have overlapping effects on the synthetic text quality, or whether they play distinct roles. To address this question, we profile the synthetic text along two additional axes: (1) diversity, measured by Self-BLEU 4-gram [28] and (2) similarity to the real target corpus, measured by the JS divergence between token distributions [29]. As shown in Table 2, instruction (IF) is highly effective at generating text similar to the target domain, but at the cost of diversity. On the other hand, demonstration (ICL) achieves high diversity with a modest improvement in similarity. Combining the two techniques strikes a balance between improving diversity and similarity of the synthetic text to the target domain. We conclude that ICIF enables the LLM to map from domain names to more relevant and diverse texts, which in turn improves the generalization of ASR models to unseen target domains.
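For reference, a minimal sketch of the two measures is shown below: Self-BLEU (4-gram) averaged over sentences for diversity, and the Jensen-Shannon divergence between unigram token distributions for similarity to the real target corpus. Whitespace tokenization and the BLEU smoothing method are illustrative choices, not necessarily the configuration used in the experiments.

```python
# Sketch of the diversity (Self-BLEU 4-gram) and similarity (JS divergence)
# measures; lower Self-BLEU = more diverse, lower JS = closer to the target corpus.
from collections import Counter
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction
from scipy.spatial.distance import jensenshannon

def self_bleu4(sentences):
    # Average BLEU-4 of each sentence against all the others.
    smooth = SmoothingFunction().method1
    scores = []
    for i, hyp in enumerate(sentences):
        refs = [s.split() for j, s in enumerate(sentences) if j != i]
        scores.append(sentence_bleu(refs, hyp.split(), smoothing_function=smooth))
    return sum(scores) / len(scores)

def js_divergence(corpus_a, corpus_b):
    # JS divergence between unigram token distributions of the two corpora.
    ca = Counter(tok for s in corpus_a for tok in s.split())
    cb = Counter(tok for s in corpus_b for tok in s.split())
    vocab = sorted(set(ca) | set(cb))
    pa = [ca[t] / sum(ca.values()) for t in vocab]
    pb = [cb[t] / sum(cb.values()) for t in vocab]
    return jensenshannon(pa, pb) ** 2  # jensenshannon returns the distance (sqrt of divergence)

synthetic = ["send an email to my boss", "check my inbox please"]
real = ["email mum about dinner", "read my latest email"]
print(self_bleu4(synthetic), js_divergence(synthetic, real))
```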
**Impact of synthetic text corpus size on WER**
Fig. 3 shows the performance of ASR models finetuned on varying amounts of synthetic text for two randomly-selected target domains ('_Transport_' and '_Cooking_'). In general, we find that using more synthetic text data to finetune the ASR models improves the WER, which suggests that the models benefit from exposure to greater text diversity. On the other hand, we also observe that ASR performance saturates at some point (_e.g.,_ at around 55K samples for the _"Cooking"_ domain). This may be due to synthetic artifacts or noise. We leave the problem of synthetic data selection to future work.
**Impact of number of demonstrations on WER**
Since demonstrations increase synthetic text diversity, we also investigate the impact of the number of demonstrations on the performance of finetuned ASR models. Fig. 4 shows the WER on two randomly-selected target domains when varying the number of demonstrations from \(0\) to \(10\). We observe that WER is improved significantly even with two demonstrations and continues to improve with more demonstrations. Interestingly, we also observe that the standard deviation of the WER increases with more demonstrations. We hypothesize this is due to increased text diversity, which leads to variable outcomes during finetuning. The selection and ordering of demonstrations may also impact the synthetic text quality. We leave these investigations to future work.
## 6 Conclusions
In this paper, we propose a pipeline consisting of an LLM and a CSS model to adapt ASR models with a fully synthetic speech corpus. We apply the data synthesis pipeline to ASR domain adaptation with no target domain data, and obtain \(16\%\) relative improvement using naive prompting of pretrained LLMs. To further improve the quality of the synthesized text, we employ an innovative in-context instruction finetuning (ICIF) method on LLMs. The results show that our proposed method yields \(28\%\) average relative WER improvement on unseen target domains without dropping the performance on source domains.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline & WER \(\downarrow\) & Relative WER Improvement (\%) \(\uparrow\) & Diversity \(\downarrow\) (SB-4) & Similarity \(\downarrow\) (JS-Div) \\ \hline Source Domain ASR & 16.77 & - & - & - \\ \hline ICIF (IF+ICL) & **11.95** & **28.73** & 0.596 & 0.466 \\ Demo (ICL) & 12.13 & 27.67 & **0.424** & 0.482 \\ Instruct (IF) & 12.59 & 24.92 & 0.74 & **0.451** \\ Naive Prompting & 14.02 & 16.40 & 0.471 & 0.521 \\ \hline \hline \end{tabular}
\end{table}
Table 2: **Analysis of ICIF. We investigate the individual contributions of instruction (IF) and demonstration (ICL) to ICIF. In addition to the WER, we report the diversity of the synthetic text (SB-4), and its similarity to the target domain text corpus (JS-Div). See Section 5.2 for details.**
Figure 4: **Number of demonstrations vs. WER. We vary the number of demonstrations used for prompting the LLM and report the WER of finetuned ASR models for two randomly-selected target domains. Number of demonstrations and WER are shown on the x and y axes respectively.**
Figure 3: **Number of synthetic text samples vs. WER. We vary the number of synthetic text samples used to finetune the ASR models and report the WER for two randomly-selected target domains. Number of samples and WER are shown on the x and y axes respectively.**
2309.14561 | POEMMA (Probe of Extreme Multi-Messenger Astrophysics) Roadmap Update | The Probe Of Extreme Multi-Messenger Astrophysics (POEMMA) was designed as a
NASA Astrophysics probe-class mission to identify the sources of ultrahigh
energy cosmic rays (UHECRs) and observe cosmic neutrinos from extremely
energetic transient sources. POEMMA consists of two identical spacecraft flying
in a loose formation at 525 km altitude oriented to view a common atmospheric
volume and to provide full-sky coverage for both types of messengers. Each
spacecraft hosts a wide field of view Schmidt telescope with a hybrid focal
plane optimized to observe both the UV fluorescence signal from extensive air
showers (EASs) and the optical Cherenkov signals from EASs. When in stereo
close to nadir mode, POEMMA can measure the spectrum, composition, and full-sky
distribution of the UHECRs above 20 EeV and be sensitive to UHE neutrinos. When
pointing just below the Earth's limb, POEMMA will be sensitive to cosmic tau
neutrinos above 20 PeV by observing the Cherenkov radiation of EASs produced by
upward-moving tau decays, induced from tau neutrino interactions in the Earth.
POEMMA is designed to quickly re-orient to follow a Target-of-Opportunity (ToO)
neutrino transient from astrophysical sources with exceptional sensitivity to
neutrinos from both short-duration transients, such as short-gamma-ray bursts
(sGRBs), and long-duration sources, such as binary neutron star (BNS) mergers.
Here we review the POEMMA mission and discuss the recent progress towards its
technical readiness provided by the Mini-EUSO and EUSO-SPB2 missions and the
forthcoming Terzina and POEMMA-Balloon-Radio missions | Angela V. Olinto | 2023-09-25T22:14:53Z | http://arxiv.org/abs/2309.14561v1 | # POEMMA (Probe of Extreme Multi-Messenger Astrophysics) Roadmap Update
###### Abstract:
The Probe Of Extreme Multi-Messenger Astrophysics (POEMMA) was designed as a NASA Astrophysics probe-class mission to identify the sources of ultrahigh energy cosmic rays (UHECRs) and observe cosmic neutrinos from extremely energetic transient sources. POEMMA consists of two identical spacecraft flying in a loose formation at 525 km altitude oriented to view a common atmospheric volume and to provide full-sky coverage for both types of messengers. Each spacecraft hosts a wide field of view Schmidt telescope with a hybrid focal plane optimized to observe both the UV fluorescence signal from extensive air showers (EASs) and the optical Cherenkov signals from EASs. When in stereo close to nadir mode, POEMMA can measure the spectrum, composition, and full-sky distribution of the UHECRs above 20 EeV and be sensitive to UHE neutrinos. When pointing just below the Earth's limb, POEMMA will be sensitive to cosmic tau neutrinos above 20 PeV by observing the Cherenkov radiation of EASs produced by upward-moving tau decays, induced from tau neutrino interactions in the Earth. POEMMA is designed to quickly re-orient to follow a Target-of-Opportunity (ToO) neutrino transient from astrophysical sources with exceptional sensitivity to neutrinos from both short-duration transients, such as short-gamma-ray bursts (sGRBs), and long-duration sources, such as binary neutron star (BNS) mergers. Here we review the POEMMA mission and discuss the recent progress towards its technical readiness provided by the Mini-EUSO and EUSO-SPB2 missions and the forthcoming Terzina and POEMMA-Balloon-Radio missions.
## 1 POEMMA Science Goals
As described in [1], the main scientific goals of POEMMA are to discover the elusive sources of cosmic rays with energies above \(10^{18}\) eV (\(\equiv\) 1 EeV) and to observe cosmic neutrinos with energies above 20 PeV from multi-messenger transients. POEMMA exploits the tremendous gains in both ultrahigh energy cosmic ray (UHECR) and cosmic neutrino exposures offered by space-based measurements, including the _full-sky coverage_ of the celestial sphere. For cosmic rays with energies \(E\gtrsim 20\) EeV, POEMMA can measure the UHECR spectrum, composition, and source location with high statistics at the highest energies. For multi-messenger transients, POEMMA can follow up targets of opportunity (ToO) to detect cosmic neutrinos with energies \(E_{\nu}\gtrsim 20\) PeV. POEMMA also has sensitivity to neutrinos with energies above 20 EeV through fluorescence observations of neutrino-induced EASs. Supplementary science capabilities of POEMMA include probes of physics beyond the Standard Model of particle physics, the measurement of the \(pp\) cross-section at \(\sim\) 0.3 PeV center-of-mass energy, the study of atmospheric transient luminous events (TLEs), and the search for meteors and nuclearites.
POEMMA can achieve this significant increase in sensitivity by operating two observatories (described in Fig.2 and Table I) with very wide field of view (FoV) in different orientation modes: a stereo fluorescence configuration pointing close to the nadir for more precise UHECR observations and a tilted, Earth-limb viewing configuration to follow ToO neutrino searches (see Fig.1). In limb observing mode, POEMMA can simultaneously search for neutrinos with Cherenkov observations, while observing UHECRs with fluorescence, thanks to the POEMMA hybrid focal surface design.
POEMMA's fluorescence observations can yield one order of magnitude increase in yearly UHECR exposure compared to ground observatory arrays and two orders of magnitude compared to ground fluorescence telescopes. In the limb-viewing mode, POEMMA searches for optical Cherenkov signals of upward-moving EASs generated by \(\tau\)-lepton decays produced by \(\nu_{\tau}\) interactions in the Earth with a terrestrial neutrino target of \(\sim 10^{10}\) gigatons. In the limb-viewing mode,
Figure 1: POEMMA observing modes. _Left:_ POEMMA-Stereo mode to observe fluorescence from UHE cosmic rays and neutrinos in stereo. (Telescope separation \(\sim\)300 km and pointing close to nadir for the most precise measurements at 10s of EeV.) _Right:_ POEMMA-Limb mode to observe Cherenkov from cosmic neutrinos just below the limb of the Earth and fluorescence from UHECRs throughout the atmospheric volume. (Telescope separation \(\sim\)25 km and pointing towards rising or setting ToO sources.)
an even more extensive volume can be monitored for UHECR fluorescence observations.
**UHECR Science:**
As summarized in [2], the nature of UHECR sources and their acceleration mechanism(s) remain a mystery. Proposed sources span a large range of astrophysical objects including extremely fast-spinning young pulsars, active galactic nuclei (AGN), starburst galaxies (SBGs), gamma-ray bursts (GRBs), and galaxy clusters [3]. The powerful exposure provided by POEMMA is designed
\begin{table}
\begin{tabular}{l l l l l} \hline \hline \multicolumn{1}{c}{ Photometer} & Components & \multicolumn{2}{c}{Spacecraft} \\ \hline Optics & Schmidt & 45\({}^{\circ}\) full FoV & Slew rate & 90\({}^{\circ}\) in 8 min \\ & Primary Mirror & 4 m diam. & Pointing Res. & 0.1\({}^{\circ}\) \\ & Corrector Lens & 3.3 m diam. & Pointing Know. & 0.01\({}^{\circ}\) \\ & Focal Surface & 1.6 m diam. & Clock synch. & 10 ns \\ & Pixel Size & 3 \(\times\) 3 mm\({}^{2}\) & Data Storage & 7 days \\ & Pixel FoV & 0.084\({}^{\circ}\) & Communication & S-band \\ PFC & MAPMT (1\(\mu\)s) & 126,720 pixels & Wet Mass & 3,450 kg \\ PCC & SiPM (20 ns) & 15,360 pixels & Power (w/cont) & 550 W \\ \hline Photometer & (One) & & Mission & (2 Observatories) \\ \hline & Mass & 1,550 kg & Lifetime & 3 year (5 year goal) \\ & Power (w/cont) & 700 W & Orbit & 525 km, 28.5\({}^{\circ}\) Inc \\ & Data & \(<\) 1 GB/day & Orbit Period & 95 min \\ & & & Observatory Sep. & \(\sim\)25 - 1000 km \\ \hline \hline \multicolumn{5}{c}{ Each Observatory = Photometer + Spacecraft; POEMMA Mission = 2 Observatories} \\ \end{tabular}
\end{table}
Table 1: POEMMA Specifications:
Figure 2: _Left:_ Concept of the POEMMA photometer with major components identified (PFC stands for POEMMA Fluorescence Camera and PCC stands for POEMMA Cherenkov Camera). _Right:_ Both POEMMA photometers accommodated on Atlas V for launch. From [1].
to help determine the sources of UHECRs through the combined detailed observations of the sky distribution, the spectrum, and the composition at the highest energies (above 100 EeV). Fig. 3 shows the POEMMA exposure compared to leading ground observatories on the left and a sky map in equatorial coordinates for starburst galaxies on the right. POEMMA can explore the differences in source models for the UHECRs by measuring the spectrum above the current reach in energy and the UHECR composition at 100s of EeV where models differ in predictions further illuminating the origin of UHECRs [1].
**Cosmic Neutrino Science:**
Very high-energy cosmic neutrinos are emitted in a number of models of astrophysical transient events (gamma-ray bursts, blazars, binary neutron star coalescence, etc.). Astrophysical sources generally produce electron and muon neutrinos, which, after astronomical propagation distances, arrive on Earth with approximately equal numbers of the three flavors: electron, muon, and tau neutrinos. POEMMA detects primarily tau neutrinos through the tau-decay-generated EASs, corresponding to about a third of the generated neutrino flux. POEMMA has the unique capability of detecting neutrinos above 20 PeV (1 PeV \(\equiv 10^{15}\) eV) from ToO transient events with a follow-up time scale of about one orbit (95 min) over the entire dark sky [1]. POEMMA can follow up these events and detect neutrino fluences of \(E_{\nu}^{2}J_{\nu}\geq 0.1\) GeV cm\({}^{-2}\), depending on the location of the sources. POEMMA can observe the full sky after several months, given the orbit of the Earth around the Sun.
## 2 POEMMA Instrument and Mission
As shown in Fig.2 and Table I, the POEMMA observatory [1] is comprised of two identical space-based platforms that detect extreme energy particles by recording the signals generated by EASs in the dark side of the Earth's atmosphere. The central element of each POEMMA observatory
Figure 3: _Left:_ Differential exposure vs declination for POEMMA 5-yr in nadir (purple) at \(10^{19.7}\) eV (dotted) and \(10^{20}\) eV (solid); and for limb (red) at \(10^{20}\) eV (dotted), \(10^{20.3}\) eV (dashed), and \(10^{21}\) eV (solid). Exposures for Auger (green) and TAx4 (black) surface detectors (SD) until 2030. _Right:_ Sky map of the normalized UHECR flux in equatorial coordinates for starburst galaxies with 11% anisotropy fraction and angular spread of \(15^{\circ}\). Adapted from [1].
is a high sensitivity low resolution photometer that measures two types of emission from these EASs: the faint isotropic emission due to the fluorescence of atmospheric nitrogen excited by air shower particles, and the brighter collimated Cherenkov emission from EASs directed at the POEMMA observatory. The photometers are designed for deployment after launch. A stowed configuration enables two identical satellites to be launched together on a single Atlas V rocket. Space qualified mechanisms extend each instrument after launch to their deployed position to begin observations. The instrument architecture incorporates a large number of identical parallel sensor chains that meet the high standards of a Class B mission.
The POEMMA **photometer** is based on a Schmidt optical design with a large spherical primary mirror (4 m diameter), the aperture and a thin refractive aspheric aberration corrector lens (3.3 m diameter) at its center of curvature, and a convex spherical focal surface (1.61 m diameter). This particular system provides a large collection aperture (6.39 m\({}^{2}\)) and a massive field-of-view (45\({}^{\circ}\) full FoV). The diameter of the POEMMA primary mirror is set to fit the launch vehicle (Atlas V). The point-spread-function of the POEMMA optics is much less than a pixel size. POEMMA's imaging requirement is \(10^{4}\) away from the diffraction limit, implying optical tolerances closer to a microwave dish than an astronomical telescope.
The **focal surface** (FS) of the POEMMA photometer consists of the POEMMA Fluorescence Camera (PFC), optimized for the fluorescence signals, and the POEMMA Cherenkov Camera (PCC), optimized for Cherenkov signals (see Fig. 5). The PFC records the EAS videos in \(1\mu\)s frames in the \(300\lesssim\lambda/\mathrm{nm}\lesssim 500\) wavelength band using multi-anode photomultiplier tubes (MAPMTs). Each MAPMT has 64 (3x3 mm\({}^{2}\)) pixels in an 8 x 8 array. The PFC is composed of 1,980 MAPMTs containing a total of 126,720 pixels. The stereo videos of the EAS determine the energy, direction,
Figure 4: _Left:_ POEMMA ToO sensitivities to a long burst. The magenta bands (dark and light) show the range of possible source locations. Also shown are the IceCube, Antares, and Auger upper limits (solid histogram) from a neutrino search within a 14-day time window around the binary neutron star merger GW170817 [4]. The blue bands show the variation of IceCube sensitivity due to celestial source location and the red dashed curves represent the projected sensitivity of GRAND200k at zenith angles \(90^{\circ}\) and \(94^{\circ}\)[5], and models from Fang & Metzger [6] of the all-flavor neutrino fluence produced \(10^{5.5}-10^{6.5}\) s and \(10^{4.5}-10^{5.5}\) s after a binary neutron star merger event occurring at a distance of 10 Mpc. _Right:_ Sky plot of the expected number of neutrino events with POEMMA as a function of galactic coordinates for the Fang & Metzger [6] binary neutron star merger model, placing the source at 10 Mpc. Figure adapted from [1].
and composition of the UHECR. The PCC uses solid-state silicon photomultipliers (SiPMs) and is optimized to observe in the \(300\lesssim\lambda/\mathrm{nm}\lesssim 900\) wavelength band for Cherenkov emission of showers developing towards the observatory. The PCC covers 9\({}^{\circ}\) of the FoV from the edge (see Fig. 5). The PCC SiPMs are assembled in arrays of 8 x 8 pixels with a total 15,360 pixels covering a 31 x 31 mm\({}^{2}\) area. With a time sampling of 20 ns, the PCC records EASs produced by cosmic rays above the limb of the Earth and showers from \(\tau\)-lepton decays below the Earth's limb induced by \(\nu_{\tau}\) interactions in the Earth.
The **POEMMA mission** is designed for launch into a circular orbit at an inclination of 28.5\({}^{\circ}\) and an altitude of 525 km, implying a period of 95 mins. The satellites are launched in a stowed configuration. Once on orbit, the corrector plate and focal surface are deployed into their final position. After calibration, the instruments will be pointed close to the nadir to make stereo observations of UHECRs via fluorescent light. Once sufficient statistics have been acquired, the satellite separation will be reduced to \(\sim\) 30 km and the instruments will be pointed for limb observations via both fluorescence and Cherenkov. Throughout the mission, the instruments will be re-oriented towards neutrino ToO directions following a transient event alert. During a ToO follow-up, measurements of fluorescence from EASs will continue utilizing the larger volumes of the atmosphere observed with the satellites pointed at the limb.
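As a quick sanity check on the quoted orbit, the short calculation below reproduces the roughly 95 min period of a 525 km circular orbit from Kepler's third law; the Earth parameters used are standard textbook values, not taken from the mission documents.

```python
# Orbital period of a 525 km circular orbit (expected: ~95 minutes).
import math

mu = 3.986004418e14      # Earth's gravitational parameter, m^3/s^2 (standard value)
r_earth = 6371e3         # mean Earth radius, m (standard value)
altitude = 525e3         # POEMMA orbit altitude, m

a = r_earth + altitude
period_min = 2 * math.pi * math.sqrt(a**3 / mu) / 60
print(f"Orbital period: {period_min:.1f} min")  # ~95 min
```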
## 3 The Roadmap to POEMMA
The design of the POEMMA observatory and mission evolved from previous work on the JEM-EUSO [7] missions, the OWL [8] design, and the CHANT [9] concept. Among the JEM-EUSO missions, Mini-EUSO and EUSO-SPB2 are paving the way to POEMMA. Mini-EUSO is measuring the low Earth orbit background for the fluorescence technique from the International Space Station [10, 11]. The suborbital payload EUSO-SPB2 [12] was designed to test the two main observation techniques of POEMMA with two separate telescopes: the EUSO-SPB2 Fluorescence Telescope (FT) [13] and the Cherenkov Telescope (CT) [14]. Both the FT and CT were designed as scaled down versions of the POEMMA optics for the 1 meter Schmidt optics of EUSO-SPB2.
Figure 5 shows the correspondence between the two EUSO-SPB2 telescopes and POEMMA's PFC and PCC designs. The EUSO-SPB2 FT is a 1m diameter modified Schmidt telescope, with a 6,912 pixel camera of Multi-Anode Photomultiplier Tubes (MAPMTs) and integration time of 1\(\mu\)s. The MAPMTs are arranged in three photo-detector modules (PDMs) each composed of 36 MAPMTs each with 8 x 8 pixels, resulting in a total of 2304 channels per PDM. (Mini-EUSO's camera consists of one PDM.) The FT field of view is 36\({}^{\circ}\) by 12\({}^{\circ}\) with a fixed nadir pointing direction.
The EUSO-SPB2 CT is also a 1m diameter modified Schmidt telescope with a bifocal alignment of the 4 mirror segments so signals are focused in two distinct spots separated by a couple of pixels on the camera to reduce the background of direct cosmic ray hits. The CT camera has a 512 pixel SiPM camera with an integration time of 10 ns. The FoV of the instrument is 6.4\({}^{\circ}\) in zenith and 12.8\({}^{\circ}\) in azimuth and can be pointed during the flight from horizontal to 10\({}^{\circ}\) below the Earth's limb to follow ToOs.
EUSO-SPB2 was launched from Wanaka, New Zealand on May 13, 2023 [12]. Although the flight was terminated early due to a leak in the super-pressure balloon, the data collected during
Figure 5: _Top:_ POEMMA hybrid focal surface: POEMMA Fluorescence Camera (PFC) on the left and POEMMA Cherenkov Camera (PCC) on the right. The PFC records fluorescence from EAS in 1\(\mu\)s frames in the UV band using 1,980 MAPMTs with 3x3 mm\({}^{2}\) pixels to a total of 126,720 pixels. The PCC uses SiPMs assembled in arrays of 8x8 pixels with a total of 15,360 pixels optimized to observe in the optical band for Cherenkov emission with 20 nanosecond time integration. _Bottom:_ EUSO-SPB2 Telescopes [12]: The EUSO-SPB2 FT is a 1m diameter modified Schmidt telescope, with a 6,912 pixel camera of MAPMTs and integration time of 1\(\mu\)s. The FT field of view is 36\({}^{\circ}\) by 12\({}^{\circ}\) with a fixed nadir pointing direction. The EUSO-SPB2 CT is also a 1m diameter modified Schmidt telescope with a bifocal alignment of the 4 mirror segments to reduce the background. The CT camera has a 512 pixel SiPM camera with an integration time of 10 ns. The FoV of the instrument is 6.4\({}^{\circ}\) in zenith and 12.8\({}^{\circ}\) in azimuth and can be pointed during the flight from horizontal to 10\({}^{\circ}\) below the Earth’s limb to follow ToOs.
the 36 hour and 53 minute flight show that the techniques of the POEMMA design can achieve the technical goals. The flight was too short for significant observations of EASs via fluorescence [13], but the CT was able to detect a number of events consistent with predicted high-energy cosmic rays [14].
Two upcoming projects will further qualify POEMMA subsystems in orbital and sub-orbital platforms raising the technical readiness level of POEMMA: the Terzina [15] Cherenkov detector on the NUSES small-satellite mission and the proposed POEMMA-Balloon-Radio (PBR) mission on a super-pressure balloon. Both are expected to launch in 2026.
**Acknowledgements** The POEMMA conceptual study was supported by NASA Grant NNX17AJ82G. EUSO-SPB2 was supported by NASA awards 11-APRA-0058, 16-APROBES16-0023, 17-APRA17-0066, NNX13AH54G, 80NSSC18K0246, 80NSSC18K0473,80NSSC19K0626, 80NSSC18K0464, 80NSSC22K1488, 80NSSC19K0627 and 80NSSC22K0426, the French space agency CNES, the National Science Centre in Poland grant n. 2017/27/B/ST9/02162, and by ASI-INFN agreement n. 2021-8-HH.0 and its amendments. This research used resources of the US National Energy Research Scientific Computing Center (NERSC), the DOE Science User Facility operated under Contract No. DE-AC02-05CH11231.
|
2310.10664 | Nebula: Self-Attention for Dynamic Malware Analysis | Dynamic analysis enables detecting Windows malware by executing programs in a
controlled environment, and storing their actions in log reports. Previous work
has started training machine learning models on such reports to perform either
malware detection or malware classification. However, most of the approaches
(i) have only considered convolutional and long-short term memory networks,
(ii) they have been built focusing only on APIs called at runtime, without
considering other relevant though heterogeneous sources of information like
network and file operations, and (iii) the code and pretrained models are
hardly available, hindering reproducibility of results in this research area.
In this work, we overcome these limitations by presenting Nebula, a versatile,
self-attention transformer-based neural architecture that can generalize across
different behavior representations and formats, combining heterogeneous
information from dynamic log reports. We show the efficacy of Nebula on three
distinct data collections from different dynamic analysis platforms, comparing
its performance with previous state-of-the-art models developed for malware
detection and classification tasks. We produce an extensive ablation study that
showcases how the components of Nebula influence its predictive performance,
while enabling it to outperform some competing approaches at very low false
positive rates. We conclude our work by inspecting the behavior of Nebula
through the application of explainability methods, which highlight that Nebula
correctly focuses more on portions of reports that contain malicious
activities. We release our code and models at github.com/dtrizna/nebula. | Dmitrijs Trizna, Luca Demetrio, Battista Biggio, Fabio Roli | 2023-09-19T09:24:36Z | http://arxiv.org/abs/2310.10664v1 | # Nebula: Self-Attention for Dynamic Malware Analysis
###### Abstract
Dynamic analysis enables detecting Windows malware by executing programs in a controlled environment, and storing their actions in log reports. Previous work has started training machine learning models on such reports to perform either malware detection or malware classification. However, most of the approaches (i) have only considered convolutional and long-short term memory networks, (ii) they have been built focusing only on APIs called at runtime, without considering other relevant though heterogeneous sources of information like network and file operations, and (iii) the code and pretrained models are hardly available, hindering reproducibility of results in this research area. In this work, we overcome these limitations by presenting Nebula, a versatile, self-attention transformer-based neural architecture that can generalize across different behavior representations and formats, combining heterogeneous information from dynamic log reports. We show the efficacy of Nebula on three distinct data collections from different dynamic analysis platforms, comparing its performance with previous state-of-the-art models developed for malware detection and classification tasks. We produce an extensive ablation study that showcases how the components of Nebula influence its predictive performance, while enabling it to outperform some competing approaches at very low false positive rates. We conclude our work by inspecting the behavior of Nebula through the application of explainability methods, which highlight that Nebula correctly focuses more on portions of reports that contain malicious activities. We release our code and models at [https://anonymous.4open.science/r/nebula-3185](https://anonymous.4open.science/r/nebula-3185).
## 1 Introduction
Dynamic malware analysis is a crucial task not only for detecting, but also for understanding the threats that are wide-spread over the entire Internet. Once samples are collected, analysts execute malware inside isolated environments (sandboxes, virtual machines, or emulation), where they list all the actions performed by the program like network and filesystem access, registry modifications, API calls, and kernel modifications [1, 2]. These are then gathered in the form of textual summaries and reports that are read by humans, who distill the rationale behind the maliciousness of the analyzed sample. As complicated as it reads, this task is both time and resource consuming, since it involves domain experts in the process, and manual labeling. To speed-up this process, analysts often rely on machine learning techniques, by training models on millions of textual reports and then using them to classify future inputs, thus reducing the amount of human work that is needed to complete this daunting task. Currently, machine learning models for dynamic analysis focus on applying Convolutional Neural Networks and Long Short-term Memory (LSTM) models on reports [3, 4, 5, 6, 7, 8, 9], given their impressive results in other Natural Language Understanding (NLU) tasks [10, 11, 12, 13, 14]. Local patterns learned by convolutional layers provide qualitative features for further analysis with neural architectures, thus reaching state-of-the-art performance in many information security tasks [4, 5, 15, 16], while LSTM models bypass locality limitation and learn global token relationships [10, 11]. However, these proposed schemes are hindered by three main downsides: (i) convolutions only capture local information, discarding the global correlations contained in reports between actions, while LSTM models struggle in modeling sample behavior based on prolonged token sequences, like a chain of API calls with arguments; (ii) most of the proposed techniques solely rely on homogeneous input data, like API calls [3, 5, 9], rather than leveraging more complete and heterogeneous information representing the behavior of malware samples; and (iii) source code, data, and pre-trained models are typically not available for most of the proposed techniques, hindering reproducibility of results.
To overcome these issues, we present Nebula, a machine learning model based on the transformer architecture [12] trained on reports of different nature. Nebula achieves state-of-the-art performance, overtaking most of the proposed solutions, based on convolutional and LSTM networks, thanks to the introduction of the self-attention mechanism. In particular, Nebula both better detects and classifies input malware within
its correct family, providing higher-quality responses to human analysts. This result is achieved under a strict regime of very low false positive rates, which is a crucial aspect for deployed systems [6, 7, 17]. To the best of our knowledge, we are the first to leverage the transformer architecture to tackle both malware detection and classification from dynamic log reports. To better understand the rationale behind its performance, we conduct an extensive ablation study on the components of Nebula, quantifying the individual contribution of each component to the final score. This analysis is intended to provide a foundational reference for future research in this area, equipping researchers with insights into the anticipated outcomes when using analogous components. On top of these results, we empirically highlight how Nebula focuses on the portions of textual reports that contain malicious activity, by studying the activation of the attention mechanism and applying explainability techniques [18, 19]. To foster reproducible results, we not only share the code and pre-trained models of Nebula, but also re-implement, re-train, and release methods that were not previously shared with the community [4, 9].1
Footnote 1: [https://anonymous.4open.science/r/nebula-3185](https://anonymous.4open.science/r/nebula-3185)
We present our work as follows: we first describe the background concepts needed to understand the technical contributions of Nebula (Section 2), where we also categorize all the previous contributions in the field. We continue by detailing the components of Nebula (Section 3), followed by our extensive experimental analysis (Section 4). We conclude the manuscript with an overview of related techniques (Section 5), and with the main limitations and lines of research that can inspire future work (Section 6).
## 2 Dynamic Windows Malware Analysis
This section provides the background information necessary to understand the technical advancements made by Nebula. We initiate our discussion in Section 2.1, where we briefly delve into the landscape of malware and discuss techniques for detecting compromises through the behavioral analysis of system telemetry. Then, in Section 2.2, we outline the main steps required to implement dynamic malware analysis modeling via machine learning. We conclude this section with a comprehensive survey of published models in Section 2.3.
### Logs and Behavioral Reports
The issue of identifying unauthorized system access is paramount within the field of cybersecurity. System compromise can have different manifestations depending on impact, like sensitive data exposure or the misuse of computational resources. Threat actors employ a diverse range of methods, from utilizing built-in tools and protocols aligning with a "living-off-the-land" approach to leveraging stolen credentials or employing social engineering tactics to achieve their goals through legitimate user accounts. Additionally, adversaries often deploy their own software agents, referred to as malware. According to the 2022 Verizon Data Breach Investigation Report [23], malware was responsible for nearly 40% of breaches, highlighting the importance of its detection in ensuring the digital domain's security. Moreover, efficient analysis of malware capabilities is crucial for comprehensive threat intelligence. Malware analysis can be segregated into static and dynamic methodologies. The former entails the evaluation of software samples without executing them. For instance, malware samples scripted using technologies such as VBScript are analyzed through deobfuscation and string analysis. Binary samples, like those compiled into EXE or DLL formats, are assessed through disassembly or parsing known structures, such as the import address table of portable executable files.
Dynamic software analysis is a sophisticated process that commences with the "detonation" of a sample in a debugging, sandbox, or emulated environment. This process isolates the application, preventing it from impacting other system parts, although it may result in inconsistent behavior depending on the detonation environment. The execution traces collected during detonation constitute _dynamic_ features suitable for further examination. Machine learning has become a significant element in malware analysis, with efficient modeling schemes proposed for both static and dynamic data structures derived from malware. Within the scope of this paper, we focus solely on modeling schemes applied to dynamic reports. We argue that modeling behavioral reports, which convey machine data whose structure varies with the execution environment yet carries generally common information, is crucial for capturing a broader, more precise understanding of malware behaviors. This, in turn, can lead to enhanced detection rates, automated reverse engineering capabilities, and improved cybersecurity measures.
### Machine Learning Pipeline for Dynamic Analysis
We now describe how machine learning can be applied on top of textual reports by introducing three main steps: (i) _data cleaning_ to prepare raw data; (ii) _feature extraction_ to create a mathematical representation of reports; and (iii) _modeling_ the problem and training the final classifier.
**Data cleaning.** Once the behavioral representation of software is acquired, it is important to clean and normalize the report to make the data manageable for further processing. Filters are used to remove unnecessary data and preserve only a specific set of fields, while normalization techniques are applied to systematize values that are stochastic in nature and do not correlate with application behavior, like hash-sums or IP addresses. This introduces domain knowledge [24] and, as shown in Section 4.3, improves the model's generalization abilities by reducing variability in values irrelevant to the prediction. We denote this step as \(z^{\prime}=\psi(z)\), where \(z\) is the raw data collected from the dynamic analysis environment, and \(z^{\prime}\) is the cleaned and normalized textual data.
**Feature extraction.** Next, \(z^{\prime}\) undergoes feature extraction, denoted \(x=\phi(z^{\prime})\), which produces a numerical array \(x\) suitable for analysis by a machine learning model. Feature extraction \(\phi\) involves a dichotomy between (a) feature engineering and (b) token encoding. Feature engineering involves the manual or automated selection and transformation of relevant features from the cleaned data \(z^{\prime}\), for instance, feature hashing applied to API call names [25, 9] or regular expression-based feature extractors [9]. Token encoding involves a tokenization step, which transforms the textual data \(z^{\prime}\) into a sequence of tokens and a vocabulary \(V\) of all possible tokens. Tokenization can be based on regular expressions like the _WhiteSpace_ or _WordPunct_[26] tokenizers, be influenced by domain knowledge [27, 28], or involve statistical methods like Byte-Pair Encoding (BPE) [29, 30]. The sequence of tokens is then encoded into a numerical array \(x\) using an encoding function, which might be as simple as one-hot encoding, be calculated with term frequency-inverse document frequency (TF-IDF) [31], or use an embedding function \(f:V\rightarrow\mathbb{R}^{d}\), where \(d\) is the embedding dimension [32].
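To make the two variants of \(\phi\) concrete, the snippet below sketches both options on toy report strings using scikit-learn; the example reports and parameter values are illustrative only and do not reproduce the exact extractors of the cited works.

```
from sklearn.feature_extraction.text import TfidfVectorizer, HashingVectorizer

# Cleaned reports z' (one string per sample), e.g. produced by the data-cleaning step.
cleaned_reports = [
    "kernel32.getprocaddress kernel32.tlsgetvalue <temp>\\golfinfo.ini",
    "ws2_32.connect <domain> 443 kernel32.createfilea <user>\\run.exe",
]

# (a) Feature engineering, e.g. hashing token features into a fixed-size vector.
hashed = HashingVectorizer(n_features=2**12, norm=None).fit_transform(cleaned_reports)

# (b) Token encoding with a bag-of-words weighting such as TF-IDF.
tfidf = TfidfVectorizer(token_pattern=r"\S+").fit_transform(cleaned_reports)

print(hashed.shape, tfidf.shape)  # numerical arrays x usable by a downstream model f
```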
**Modeling.** The final step is to use the numerical array \(x\) as input to a machine learning model \(f(x)\), which produces a prediction \(y\) of the malware label. The modeling function can be as simple as a linear model like logistic regression. However, for behavioral reports, the best schemes incorporate representations of sequential information. This can be achieved with convolutions, recurrent neural networks, or self-attention with positional encoding.
### Dynamic Model Review
The landscape of dynamic malware analysis showcases a competitive interplay between commercial solutions and academic research. Commercial anti-virus (AV) and Endpoint Detection and Response (EDR) products have integrated behavioral analytics into their detection methodologies, forming part of a multi-objective heuristic that leverages both static and dynamic analysis. Unfortunately, a direct comparison between Nebula and commercial AV or EDR products is not feasible due to the proprietary nature of commercial solutions. The behavioral components of their multi-objective heuristics are closed, which prohibits their disentanglement on the user side for comparison purposes. This lack of transparency means that we cannot gauge how much of the overall performance of these commercial solutions is attributed to their behavioral modeling component specifically.
In academic research, we encounter several groundbreaking methodologies in dynamic malware analysis that pose a formidable challenge to the current state-of-the-art. To offer a consolidated view of these promising approaches, we have curated a selection of these solutions in Table 1, systematizing their respective pipelines according to the steps introduced in Section 2.2. A common theme among contemporary academic contributions is the employment of traditional techniques, such as one-dimensional convolutions, optionally complemented with recurrent layers through Long Short-Term Memory (LSTM) [10], as part of their core modeling approach \(f\). However, each of these methodologies introduces a unique approach in either data cleaning (\(\psi\)) or feature extraction (\(\phi\)) processes, thereby diversifying the analytical landscape of dynamic malware analysis.
**Neurlux (Jindal et al. [4]).** A distinctive feature of this approach is the absence of operations during the data cleaning phase (\(\psi\)): raw behavioral reports are passed directly to the feature extraction process (\(\phi\)). This phase involves a simple whitespace tokenization procedure and sequence encoding with a vocabulary size of \(V=10,000\). The resulting sequences are then modeled (\(f\)) using a combination of one-dimensional convolutions, Long Short-Term Memory (LSTM), and a conventional attention mechanism [11] applied to the output of the LSTM layer. Their code is publicly accessible; therefore, we are able to compare our
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline & **Data Cleaning \(\psi(z)\)** & **Feature Extraction \(\phi(z^{\prime})\)** & **Model \(f(x)\)** & **Size** & **Code Released** \\ \hline Neurlux [4] & ✗ & Tokenization & CNN, LSTM, Attention & 2.8M & ✓ \\ Gated CNN [9] & API filter & Feature Hashing & CNN, LSTM & 0.4M & ✗ \\ Quo.Vadis [3] & API filter & Tokenization & CNN & 1.4M & ✓ \\ JSONGrinder [20] & ✗ & HMIL [21] & MLP & 2.4M & ✓ \\ CruParamer [5, 22] & API filter & API “labeling” & CNN, LSTM & – & ✗ \\ \hline **Nebula (ours)** & API, network, file, registry filters and normalization & Tokenization & Transformer & 5.6M & ✓ \\ \hline \hline \end{tabular}
\end{table}
Table 1: Dynamic malware analysis modeling techniques.
results with this model. However, the data utilized in their research remains undisclosed.
**Gated CNN (Zhang et al. [9]).** This model introduces an analysis where \(\psi\) preserves only API call data, with each call undergoing a custom feature engineering process during the \(\phi\) phase. The sequence of featurized vectors is then modeled through a gated convolutional network as \(f\). While the code for their model is not released publicly, it was provided by the researchers upon request, enabling us to draw a direct comparison between our results and their model. However, similar to the case of Neurlux, the data utilized in their study has not been released.
**Quo.Vadis (Trizna [3]).** This hybrid model simultaneously assesses contextual, static, and dynamic features. Their model code is released publicly, which significantly contributes to the transparency of their work. For our analysis, we concentrate on the dynamic component of their pipeline, whose data cleaning \(\psi\) preserves only API call names. Feature extraction \(\phi\) label-encodes each API call name with a vocabulary of \(V=600\), and modeling \(f\) relies on a 1d convolutional neural network. This work is especially notable for its public release of a comprehensive dataset consisting of Speakeasy [33] emulation reports, which allows pursuing both malware detection and type classification objectives.
**JSONGrinder (Bosansky et al. [20]).** This model provides a unique method for parsing hierarchical JSON reports, originally proposed in [21]. It employs a combination of Julia libraries, specifically JsonGrinder.jl for feature extraction \(\phi\) and Mill.jl for modeling \(f\); data cleaning \(\psi\) is omitted. The \(\phi\) phase infers a Hierarchical Multiple Instance Learning (HMIL) schema from the data, constructing a fixed-size vector, while the modeling is based on a multilayer perceptron (MLP) for sample classification. However, it is worth noting that their implementation was not compatible with the latest version of Julia (v1.8.5) at the time of our experiments, causing the original model implementation to fail without modifications. Additionally, Bosansky et al.'s work is notable for its release of a comprehensive dataset useful for malware family classification, which adds considerable value to the existing body of resources in this field.
**CruParamer (Chen et al. [5, 22]).** This method preserves only API calls from the original report during the data cleaning phase (\(\psi\)). The feature extraction step (\(\phi\)) involves a unique approach to API labeling and embedding, which includes parameter-assisted API labeling and sensitivity-inspired API embedding. These techniques utilize domain knowledge to generate more efficient numerical representations of API calls. To model these representations (\(f\)), they employ two separate networks based on 2D convolutions and LSTM. Although their feature extraction methodology is intriguing, it is presented with few implementation details, which reduces its replicability. Despite efforts to access the modeling code, the authors made no public version available, even upon private request.
## 3 Nebula: Transformer Architecture for Dynamic Malware Detection
The design of our dynamic malware analysis pipeline draws from the proven success of the attention mechanism in Natural Language Understanding (NLU). Particularly, the self-attention-based Transformer architecture [12] has demonstrated superior performance over conventional RNN- or CNN-based modeling methods [13, 34, 14]. These successes guided the selection of techniques used during our feature extraction (\(\phi\)) and modeling (\(f\)) stages. The most significant deviation from standard NLU pipelines is evident during the data cleaning phase (\(\psi\)). Here, we employ a domain-specific parser that (i) retains only those fields from the original structured report relevant for behavior generalization; and (ii) normalizes unconstrained and arbitrary values within such selected fields. In the feature extraction phase (\(\phi\)), we tokenize each report into a sequence of tokens of length \(N\) and encode each token based on a vocabulary of size \(V\). When modeling this sequence, we first embed the input vector to a higher dimension, apply position encoding, and then process it through a Transformer encoder layer to apply the self-attention operation. The resulting attended tensors are then forwarded to a classifier that produces the final prediction. A high-level overview of our Nebula modeling scheme is depicted in Figure 1.
### Data Cleaning
We detail the data cleaning \(z^{\prime}=\psi(z)\) applied by Nebula.
Figure 1: A schematic overview of Nebula.
**Vocabulary and field filters.** Machine data is more voluminous and heterogeneous than natural language. Therefore, it can have a significantly larger vocabulary, as no distinct lexical boundaries or grammatical rules define the language being used. In system logs, it is common to see arbitrary character combinations like /tmp/83afba/setup.bin or jre1.8.0_311, which explode the vocabulary if handled improperly. For instance, even after path normalization, we observe more than 6000 unique filepaths, where only roughly 400 paths repeat and the rest appear only once. Figure 2 visualizes the frequency distribution of tokens for different JSON fields in the Speakeasy emulated behavioral report training set [3]. Every additional field included in the analysis increases the vocabulary size. For instance, given a filter that retains API calls, file, network, and registry records, the total vocabulary size is about 2.5M tokens. With no filters applied, this number jumps close to 8M tokens, exploding the vocabulary more than three times and significantly reducing the epistemic density (valuable information per token) of the data.
Concerning field filtering, existing dynamic malware modeling techniques fall into two categories: they either implement no filters [4] or use only a single type of information, usually API calls [3, 5]. Focusing solely on API calls discards valuable behavior representations necessary for establishing effective decision boundaries in malware detection. Research has demonstrated that domain experts rely on a broader range of information when performing actual malware analysis [24, 35]. Based on the ablation studies discussed in Section 4.3, we preserve the following fields from the dynamic analysis report: (i) API call names, arguments, and return codes; (ii) file operation type and path; (iii) network connection port and server name; and (iv) registry access type and key value. We found that this combination of fields produces the best generalization and the least overfitting. The application of field filters on a Speakeasy behavioral report is exemplified in Figure 3. In this redacted illustration, only fields that contain API call and network event information are retained, while irrelevant parts such as start_addr or apihash are discarded.
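As a rough illustration of this cleaning step, the following sketch filters a report down to a whitelist of fields; the JSON field names used here are hypothetical and only approximate the actual Speakeasy schema.

```
import json

# Illustrative whitelist of report sections; real Speakeasy field names differ in detail.
KEEP_FIELDS = {"apis", "file_access", "network_events", "registry_access"}

def filter_report(raw_json: str) -> dict:
    """Data-cleaning step psi: drop everything outside the whitelisted fields."""
    report = json.loads(raw_json)
    filtered = {}
    for entry in report.get("entry_points", []):
        for field, value in entry.items():
            if field in KEEP_FIELDS:
                filtered.setdefault(field, []).append(value)
    return filtered
```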
**Normalization.** Retained fields still have unbounded or unpredictable values, which may not inherently contribute to the effectiveness of machine learning models. For instance, the exact values of IP addresses are not representative _per se_ and primarily provide broader context, such as indicating whether the IP is from a private or public network or what autonomous system it belongs to. Similarly, file paths may contain elements like usernames, drive letters, or randomized file and directory names, which have only relative contextual significance for behavioral analysis. Hence, the raw values of such fields may not be directly beneficial for modeling, emphasizing the need for suitable normalization before analysis. We incorporate domain knowledge via placeholders by normalizing filepath, network connection, and registry access information in the following manner: (i) hash-sums in any field, including SHA1, SHA256, and MD5, are substituted with <sha1>, <sha256>, and <md5> placeholders; (ii) IP addresses are mapped to placeholders symbolizing loopback, private, public, or IPv6 addresses; (iii) recognizable domain names associated with a list of common top-level domains such as com or net (but not exclusive to these) are assigned the <domain> placeholder; (iv) Windows path variables, for instance, %windir% or %userprofile%, are expanded to a full path; and (v) frequent Windows path patterns are replaced with specific placeholders such as <drive> or <user>.
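A minimal sketch of such placeholder normalization is shown below; the regular expressions and the spellings of the IP placeholders are illustrative assumptions rather than the exact rules used by Nebula.

```
import re

# Order matters: longer hash patterns and specific IP classes are matched first.
# Placeholder spellings for the IP classes are illustrative.
RULES = [
    (r"\b[a-f0-9]{64}\b", "<sha256>"),
    (r"\b[a-f0-9]{40}\b", "<sha1>"),
    (r"\b[a-f0-9]{32}\b", "<md5>"),
    (r"\b127\.\d{1,3}\.\d{1,3}\.\d{1,3}\b", "<loopback_ip>"),
    (r"\b(10\.|192\.168\.)\d{1,3}\.\d{1,3}(\.\d{1,3})?\b", "<private_ip>"),
    (r"\b\d{1,3}(\.\d{1,3}){3}\b", "<public_ip>"),
    (r"\b[\w.-]+\.(com|net|org)\b", "<domain>"),
    (r"[a-z]:\\users\\[^\\\s\"]+", "<drive>\\\\users\\\\<user>"),
]
COMPILED = [(re.compile(p, re.IGNORECASE), repl) for p, repl in RULES]

def normalize(text: str) -> str:
    for pattern, placeholder in COMPILED:
        text = pattern.sub(placeholder, text)
    return text

print(normalize(r"c:\users\bob\golfinfo.ini connects to 8.8.8.8 and evil.com"))
# -> <drive>\users\<user>\golfinfo.ini connects to <public_ip> and <domain>
```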
Figure 3: Example of unfiltered (left) and filtered (right) Speakeasy [33] behavior reports, illustrating the application of field filters. API call names with arguments and network connection details are retained. Both reports have been redacted for brevity.
Figure 2: Visualization of whitespace token frequency in the Speakeasy [33] emulated behavioral report training set [3].
### Feature Extraction
We now detail the feature extraction \(x=\phi(z^{\prime})\) applied after the data cleaning phase \(\psi\) by Nebula.
**Tokenization.** The next phase in dynamic report processing involves breaking down the normalized text sequence into individual tokens suitable for encoding. We test several prevalent tokenization modes: Byte Pair Encoding (BPE) [29], Whitespace, and Wordpunct. Whitespace tokenization is a straightforward scheme that uses regular expressions to split the text. It creates tokens by separating words based on spaces, tabs, and newline characters; in essence, this method views each word or punctuation mark as a standalone token. Wordpunct uses similar regular expression-based logic that, in addition to whitespace symbols, uses punctuation to split tokens. We implement both tokenizers using the NLTK library [26]. A fragment of a whitespace-tokenized dynamic analysis report:
```
"0x0","0xl","kernel32.getprocedress","0xl000","0xfa","kernel32.tlsgetvalue"
```
BPE is a more sophisticated approach. It tokenizes text based on frequently occurring byte pairs within the data. Initially, each character in the text is considered a token. The algorithm then iteratively merges the most common pair of consecutive tokens to form new tokens, repeating this process until a specified number of merges has been completed or no further merges can be made. We utilize the SentencePiece [30] library to implement BPE. The redacted set of BPE tokens covering the same dynamic analysis report fragment is as follows:
```
"0x","0xl","ne","32.","kernel32.","et","ad","getproc","10","0xf","tls","val"
```
Furthermore, for both tokenization schemes, we limit our vocabulary to the \(V=50000\) most common tokens and introduce two special tokens to denote all other tokens (<unk>) and the padding of shorter sequences (<pad>).
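The following sketch illustrates this token-encoding path with the NLTK tokenizers, a vocabulary capped at \(V\) tokens, and the two special tokens; it also truncates or pads each report to a fixed length, anticipating the next paragraph. The toy report strings are invented for illustration.

```
from collections import Counter
from nltk.tokenize import WhitespaceTokenizer, WordPunctTokenizer

reports = [
    "kernel32.getprocaddress 0x1000 <temp>\\golfinfo.ini",
    "ws2_32.connect <domain> 443 kernel32.tlsgetvalue",
]
tokenizer = WhitespaceTokenizer()          # WordPunctTokenizer() additionally splits on punctuation
tokenized = [tokenizer.tokenize(r) for r in reports]

V = 50_000                                 # vocabulary cap
counts = Counter(tok for toks in tokenized for tok in toks)
vocab = {"<pad>": 0, "<unk>": 1}
for tok, _ in counts.most_common(V - len(vocab)):
    vocab[tok] = len(vocab)

def encode(tokens, seq_len=512):
    # Map tokens to ids, truncate to the first seq_len tokens, pad shorter reports.
    ids = [vocab.get(t, vocab["<unk>"]) for t in tokens[:seq_len]]
    return ids + [vocab["<pad>"]] * (seq_len - len(ids))

x = [encode(toks) for toks in tokenized]   # integer arrays ready for the model
```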
**Sequence length.** In the case of machine data, the tokenized sequences from system log events are typically lengthy. To manage this, we truncate behavioral reports to the first \(N\) tokens. Keeping the computational budget constant, we evaluate the performance of models with varying sequence lengths. The results of these ablation studies are detailed in Section 4.3, with the final choice of \(N=512\).
### Model Architecture
We now detail the last component of Nebula, which is the model function \(f\).
**Embedding and positional encoding.** The embedding operation maps the input sequence of integers to a higher-dimensional space: \(e=E(x)\cdot\sqrt{d_{e}}\), where \(E(x)\) is the embedding of the input \(x\) and \(d_{e}\) is the dimension of the embedding, with the square root used for scaling. This results in a vector \(e=[e_{1},e_{2},...,e_{pos},...,e_{N}]\), where \(e_{pos}\in\mathbb{R}^{d_{e}}\).
Since our method relies on the Transformer architecture [12], which lacks the inherent sense of order provided by recurrent models, we need to incorporate positional information in our sequence. There are multiple alternative ways to encode position [36]. We replicate the approach introduced by Vaswani et al. [12], creating a set of sinusoidal functions with different frequencies for each position in the sequence:
\[PE_{(pos,\ 2i)}=\sin\left(\frac{pos}{10000^{2i/d}}\right), \tag{1}\]
\[PE_{(pos,\ 2i+1)}=\cos\left(\frac{pos}{10000^{2i/d}}\right), \tag{2}\]
where \(PE_{(pos,\ i)}\) is the \(i\)-th dimension of the positional encoding of the token at position \(pos\) in the sequence, and \(d\) is the dimensionality of the model. The \(PE_{(pos,\ 2i)}\) and \(PE_{(pos,\ 2i+1)}\) terms are used for even and odd dimension \(i\) respectively. These values are then added to the embedded vectors \(e_{pos}\) to incorporate the positional information into the sequence \(e^{\prime}_{pos}=e_{pos}+PE_{pos}\) where \(PE_{pos}=[PE_{(pos,\ 1)},PE_{(pos,\ 2)},...,PE_{(pos,\ d)}]\) is the positional encoding vector for position \(pos\). The result is a sequence of vectors \(e^{\prime}=[e^{\prime}_{1},e^{\prime}_{2},...,e^{\prime}_{N}]\), where each vector represents both the token semantics and its position in the sequence, which can now be fed into the Transformer network.
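A direct PyTorch transcription of Equations 1 and 2 could look as follows (a sketch, assuming an even embedding dimension):

```
import torch

def sinusoidal_positional_encoding(seq_len: int, d_model: int) -> torch.Tensor:
    """Build the (seq_len, d_model) positional-encoding matrix of Eq. (1)-(2)."""
    position = torch.arange(seq_len, dtype=torch.float32).unsqueeze(1)   # pos
    i = torch.arange(0, d_model, 2, dtype=torch.float32)                 # 2i
    div = torch.pow(10000.0, i / d_model)
    pe = torch.zeros(seq_len, d_model)
    pe[:, 0::2] = torch.sin(position / div)
    pe[:, 1::2] = torch.cos(position / div)
    return pe

# e' = e + PE, added to the scaled embeddings before the Transformer encoder.
```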
**Neural layers.** Our architecture leverages the Transformer, which originally employs both encoder and decoder layers [12]. Our setup utilizes only encoder layers, similar to Devlin et al. [14], a design choice that aligns our model with inference tasks rather than generative objectives, as in applications that include a decoder [12, 13]. We employ two Transformer encoder layers, which aligns our model size with those of comparable models in Table 1. This choice is not restrictive: the model can be scaled up to incorporate more Transformer layers to improve performance, consistent with the principle of model scaling laws [37]. After the self-attention operation, data is forwarded to a classifier for the final prediction. In our implementation, the classifier consists of a fully connected neural network with a single hidden layer composed of 64 neurons and a final layer for binary or multi-class classification.
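Putting the pieces together, a minimal PyTorch sketch of the resulting model \(f\) is given below. The hyperparameters (embedding size, feedforward width) and the flattening applied before the classifier are assumptions for illustration, not the exact configuration of Nebula; the positional-encoding buffer repeats the formula of Equations 1 and 2, and standard full self-attention is used here, whereas the reduced-span variant described next would replace it.

```
import torch
import torch.nn as nn

class NebulaLikeEncoder(nn.Module):
    """Sketch of the modeling step f: embedding + positional encoding +
    Transformer encoder layers + a small MLP head. Hyperparameters are guesses."""
    def __init__(self, vocab_size=50_000, d_model=64, n_heads=8, n_layers=2,
                 seq_len=512, n_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model, padding_idx=0)
        self.scale = d_model ** 0.5
        # Fixed sinusoidal positional encoding (same formula as Eq. 1-2).
        pos = torch.arange(seq_len).unsqueeze(1).float()
        i = torch.arange(0, d_model, 2).float()
        pe = torch.zeros(seq_len, d_model)
        pe[:, 0::2] = torch.sin(pos / torch.pow(10000.0, i / d_model))
        pe[:, 1::2] = torch.cos(pos / torch.pow(10000.0, i / d_model))
        self.register_buffer("pe", pe)
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=n_heads,
                                           dim_feedforward=4 * d_model,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(seq_len * d_model, 64),
                                  nn.ReLU(), nn.Linear(64, n_classes))

    def forward(self, x):                          # x: (batch, seq_len) token ids
        e = self.embed(x) * self.scale + self.pe   # (batch, seq_len, d_model)
        h = self.encoder(e)
        return self.head(h)
```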
**Reduced self-attention span.** Input composed of structured machine data like malware behavior reports contains information in lengthy sequences, which poses a challenge for self-attention architectures like the Transformer [12], since such models exhibit quadratic complexity with respect to the sequence length. The self-attention operation can be represented as:
\[\text{Attention}(Q,K,V)=\text{softmax}\left(\frac{QK^{T}}{\sqrt{d_{k}}} \right)V, \tag{3}\]
where \(Q\), \(K\), and \(V\) are the queries, keys, and values, respectively, used as inputs to a self-attention layer, and \(d_{k}\) is the dimension of the keys. The product \(QK^{T}\) results in a matrix of size \(N\times N\), where \(N\) is the sequence length. Calculating this product has a complexity of \(O(N^{2})\), leading to quadratic computational complexity with respect to the sequence length.
We propose an alternative approach to reduce computational complexity by partitioning the self-attention operation described in Equation 3 into several independent attention spans instead of applying it to the entire sequence. Assume that the original sequence length \(N\) is divisible by the span \(S\), so there are \(M=N/S\) spans. Let \(Q_{i}\), \(K_{i}\), and \(V_{i}\) denote the queries, keys, and values for the \(i^{th}\) span. Then, \(\text{Attention}(Q_{i},K_{i},V_{i})\) is computed for every \(i\in\{1,2,...,M\}\), and the independent attention results are concatenated back into a sequence of the original length \(N\).
In this way, the complexity is reduced to \(O(MS^{2})\), improving the model's computational efficiency, especially when \(S\ll N\). Our experiments use \(S=64\) with \(N=512\), resulting in \(M=8\) independent self-attention spans. We observe that reducing attention spans enhances the model's inferential capacity on behavioral reports while adhering to the same computational constraints.
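A compact sketch of this windowed attention, here for a single head and without the usual linear projections, is shown below:

```
import torch
import torch.nn.functional as F

def chunked_self_attention(q, k, v, span: int = 64):
    """Apply attention independently inside windows of length `span`
    instead of over the full sequence (complexity O(M * S^2), M = N / span)."""
    batch, n, d_k = q.shape
    assert n % span == 0, "sequence length must be divisible by the span"
    outputs = []
    for start in range(0, n, span):
        qi = q[:, start:start + span]
        ki = k[:, start:start + span]
        vi = v[:, start:start + span]
        scores = qi @ ki.transpose(-2, -1) / d_k ** 0.5   # (batch, span, span)
        outputs.append(F.softmax(scores, dim=-1) @ vi)
    return torch.cat(outputs, dim=1)                      # back to length N

q = k = v = torch.randn(2, 512, 64)
out = chunked_self_attention(q, k, v, span=64)            # (2, 512, 64)
```

With the span equal to the sequence length this reduces to Equation 3; smaller spans trade global context for the \(O(MS^{2})\) cost discussed above.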
## 4 Experimental Evaluation
The following section presents an in-depth experimental evaluation designed to assess the effectiveness and robustness of Nebula. In Section 4.1, we discuss the datasets used for our experiments. We outline our experimental setup in Section 4.2, including details about the model training and evaluation metrics, and providing a comprehensive view of our methodology. Section 4.3 dives into ablation studies, which assess the impact of each component of Nebula on the overall model's performance. In Section 4.4, we compare Nebula with other dynamic analysis models expressing state-of-the-art potential. This comparative analysis gives us a clearer understanding of where Nebula stands in relation to other techniques in the field, highlighting its strengths and areas for improvement. Finally, Section 4.5 focuses on the explainability of Nebula's heuristic. As machine learning models become increasingly complex, interpretability is paramount for trust and practical applicability.
### Datasets
In our experiments, we evaluate our models on three publicly available datasets that contain dynamic malware analysis reports, addressing two different types of analysis.
**Malware detection.** This binary classification task discerns between benign and malicious software. It is a fundamental task performed by AV and EDR solutions, aiming to detect malevolent logic running on a system. In real-world applications, it is paramount to maintain severely low false-positive rates to ensure usability and efficiency.
**Malware classification.** This is a multi-label classification objective, targeting the attribution of malware samples to a specific type or family. Threat intelligence teams often execute it to study the evolution of malware strains, uncover shared characteristics, and identify potential countermeasures.
We now characterize each dataset by its sample size, the environment used for data collection, its applicability for either malware detection and classification tasks, and the availability of separate training and test sets.
**Speakeasy Dataset [3].** This dataset2 was generated using Speakeasy v1.5.9 [33], a Windows kernel emulator, and comprises behavioral reports from approximately 93,500 samples in total, with both legitimate and malicious reports formatted in JSON. The malicious samples belong to seven distinct malware types, with sample prevalence across labels detailed in Table 3. Therefore, the dataset is suitable for both malware detection and classification tasks. The dataset provides a test set explicitly, collected in a different timeframe (April 2022) from the training set (January 2022). This temporal separation facilitates the examination of concept drift in malware behavior.
Footnote 2: [https://www.kaggle.com/ds/3231810](https://www.kaggle.com/ds/3231810)
**Avast-CTU Dataset [20].** This dataset3 houses sandbox
\begin{table}
\begin{tabular}{c|c c c c c c c c c|c} \hline \hline Family & **Adload** & **Emotet** & **HarHar** & **Lokibot** & **njRAT** & **Qakbot** & **Swisyn** & **Trickbot** & **Ursnif** & **Zeus** & Total \\ \hline Samples & 704 & 14429 & 655 & 4191 & 3372 & 4895 & 12591 & 4202 & 1343 & 2594 & 54000 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Number of samples per malware family in Avast-CTU Dataset [20].
\begin{table}
\begin{tabular}{c c c c c} \hline \hline & \multicolumn{2}{c}{Training set} & \multicolumn{2}{c}{Test set} \\ \cline{2-5}
**Sample label** & **Size (Gb)** & **Count** & **Size (Gb)** & **Count** \\ \hline _Benignware_ & 127.0 & 26061 & 47.0 & 10000 \\ Backdoor & 30.0 & 11089 & 7.4 & 2500 \\ Coinminer & 46.0 & 10044 & 11.0 & 2500 \\ Dropper & 36.0 & 11275 & 9.0 & 2500 \\ Keylogger & 34.0 & 7817 & 9.8 & 2500 \\ Ransomw. & 14.0 & 10014 & 4.6 & 2500 \\ RAT & 5.5 & 9537 & 2.5 & 2500 \\ Trojan & 40.0 & 13128 & 7.1 & 2500 \\ \hline
**Total** & 329 & 98966 & 98 & 27500 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Speakeasy Dataset [3] structure and size.
reports in JSON format derived from CAPEv2 [38] (a Cuckoo sandbox [39] derivative), with approximately 400,000 samples collected between January 2017 and January 2020. The reports represent ten different malware families (Table 2). Due to the absence of legitimate samples, this dataset is used solely for malware classification tasks. The dataset formation aligns with the splitting approach recommended by Bosansky et al. [20], in which all samples preceding August 2019 are designated as the training set, while the remainder forms the test set.
**Malicious Code Dataset (MCD) [40].** This dataset has approximately 30,000 labeled samples containing API call sequences in XML format without any additional behavioral data (such as filesystem, registry, or network access). The dataset's collection methodology and environment are not explicitly detailed. The training set contains 10,000 malware and 20,000 goodware samples. As no malware family or type labels are available, this dataset is applicable solely to the malware detection task. The test set with 15,000 unlabeled samples cannot be used for evaluation due to the lack of labels; hence, we report mean metrics only on validation sets through cross-validation folds.
### Experimental Setup
Our experiments were conducted on an NVIDIA Quadro T2000, a standard consumer GPU. To align with the limitations of the hardware capacity, the batch size was fixed at \(b=96\) for all experiments. For optimization, we employed the AdamW optimizer [41] with a static learning rate of \(\alpha=2.5\times10^{-4}\). The hyperparameters were set as \(\beta_{1}=0.9\), \(\beta_{2}=0.999\), and \(\epsilon=10^{-8}\). An \(L_{2}\) regularization with a weight decay of \(\lambda=10^{-2}\) was also implemented. The evaluation metrics were derived from three cross-validation (CV) folds on the training set. The reported metrics are the mean values of the three models evaluated on the validation subsets and a single test set.
To ensure fair evaluation given the variations in model size indicated in Table 1, we maintained a constant time budget for training instead of a fixed number of epochs. Each fold was allocated a training duration of five minutes, resulting in a total training budget of 15 minutes per cross-validation run for three folds, excluding pre-processing time. Initial experiments with longer training runs, such as an hour per cross-validation, yielded similar relative outcomes with tolerable deviations. As such, the 15-minute training budget was deemed optimal for subsequent experiments.
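For reference, a minimal sketch of this time-budgeted training loop with the stated AdamW configuration is given below; the random tensors and the stand-in model only emulate the shapes of encoded reports and are not the actual training code.

```
import time
import torch
from torch.utils.data import DataLoader, TensorDataset

# Dummy encoded reports standing in for one real training fold.
X = torch.randint(0, 50_000, (960, 512))
y = torch.randint(0, 2, (960,))
train_loader = DataLoader(TensorDataset(X, y), batch_size=96, shuffle=True)

# Stand-in model; in practice this would be the Transformer encoder of Section 3.3.
model = torch.nn.Sequential(torch.nn.Embedding(50_000, 64), torch.nn.Flatten(),
                            torch.nn.Linear(512 * 64, 2))
optimizer = torch.optim.AdamW(model.parameters(), lr=2.5e-4, betas=(0.9, 0.999),
                              eps=1e-8, weight_decay=1e-2)
loss_fn = torch.nn.CrossEntropyLoss()

budget, start = 5 * 60, time.time()            # five minutes of training per fold
while time.time() - start < budget:
    for xb, yb in train_loader:
        optimizer.zero_grad()
        loss_fn(model(xb), yb).backward()
        optimizer.step()
        if time.time() - start >= budget:
            break
```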
### Ablation Studies
In this section, we explore the impact of variations in model components and their configurations on the final performance of the Transformer model. This serves to highlight the effectiveness of individual components in the context of the model's overall performance. For our ablation experiments, we utilize Speakeasy Dataset as it offers a comprehensive range of behavioral representations. Furthermore, this data enables us to evaluate malware detection performance using a binary classification objective, thereby yielding more interpretable results.
**Vocabulary size.** The impact of varying vocabulary size on the performance of the model using the Speakeasy emulation data is presented in Table 4. The results demonstrate marginal differences in performance within the range of vocabulary size \(V\in\{30\ 000,...,70\ 000\}\), suggesting that performance in this interval is largely subject to the randomness introduced during model initialization and training. This trend suggests that the model's performance is relatively stable with respect to variations in vocabulary size within this range, indicating a degree of robustness to this parameter. Considering these observations, we chose \(V=50\ 000\) as a good compromise that balances performance and complexity.
**Field filters.** Initially, we examine the utility of individual fields for malware detection. Figure 4(a) presents the outcomes of experiments in which only a specific single field from the behavioral report is retained. Notably, the most influential component of the behavioral representation is the sequence of API calls, especially when arguments are provided alongside the API names. All other fields exhibit inferior performance when considered in isolation. This observation can be rationalized by recognizing that not every type of malware or emulation generates traces in the filesystem, registry, or network: only a limited subset of emulation reports contain this data. However, all samples invariably exhibit a sequence of API calls, which underscores the critical role of API call information in malware detection. Nevertheless, the inclusion of filesystem, registry, or network information in conjunction with API calls enhances detection capabilities. This synergy enables the model to capture a more comprehensive representation of the software's behavior, improving the accuracy and reliability of its predictions.
Additionally, we investigated two preprocessing modalities: (i) a version that abstains from the application of filters, and (ii) one that incorporates optimal field filters during preprocessing. Table 5 presents the F1 scores on the validation and test sets of the Speakeasy emulation reports for both the BPE
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline Metric & 5k & 10k & 30k & 50K & 70k \\ \hline TPR & 0.8078 & 0.7834 & **0.8576** & 0.8383 & 0.8407 \\ AUC & 0.9965 & 0.9969 & **0.9977** & 0.9976 & **0.9977** \\ F1 & 0.9817 & 0.9839 & 0.9861 & 0.9856 & **0.9862** \\ Acc. & 0.9753 & 0.9782 & 0.9811 & 0.9806 & **0.9814** \\ \hline \hline \end{tabular}
\end{table}
Table 4: Mean validation set metrics over three CV folds with different vocabulary sizes on Speakeasy data. Reported TPR is at FPR\(=10^{-3}\).
and whitespace tokenization schemes. Remarkably, a significant overfitting issue is present when filters are not employed, evidenced by a difference (\(\Delta\)) in performance between the validation and test sets. While models that employ filters lose about \(7\%-8\%\) of F1 on the test set, the performance of models without filters degrades by \(23\%-25\%\). A visual examination of this trend is depicted in Figure 4(b), where the Receiver Operating Characteristic (ROC) curve for the test set demonstrates significant degradation when filters are not employed. Additionally, the high standard deviation between cross-validation runs suggests a level of model instability or variance in prediction. The observed outcome can be attributed to the presence of unconstrained variables representative of one specific execution, like a hash sum or the start address of a memory segment. These fields cause the model to overfit the training data, hindering its generalization and predictive capabilities on unseen data in the test set. Hence, the application of field filters appears instrumental in enhancing model stability and performance, contributing to more reliable and generalizable predictions.
**Tokenization.** We conducted ablation studies on tokenization to investigate the impact of different tokenization strategies on model performance. Three different tokenizers were tested: BPE [29], Whitespace, and Wordpunct [26]. The test set F1 scores for the different tokenization methods are reported in Table 6. The results reveal that all three tokenization methods deliver comparable mean F1 scores. The BPE tokenizer demonstrates slightly better generalization capabilities, achieving an F1 score on the test set that is almost 1% higher than the others. This observation is further supported by the field filter experiments discussed in the previous paragraph, with results in Table 5, where BPE exhibited the smallest performance decrease (\(\Delta\)) between the validation and test sets. Furthermore, it is noteworthy that the Whitespace tokenizer achieves impressive results on the test set, surpassing the other tokenization methods when evaluated by area under the curve (AUC) or true positive rate (TPR) at a false positive rate (FPR) of \(10^{-3}\), as shown in Table 6. Given the competitive performance of BPE and Whitespace, we report metrics of both tokenizers for the subsequent malware detection, classification, and explainability experiments.
**Sequence length.** Figure 4(c) depicts the F1 scores on the validation and test sets with varying sequence lengths. It is evident that the performance on both validation and test sets peaks at a sequence length of \(N=512\). This suggests that sequences of length \(N\in\{64,...,256\}\) may not encapsulate all the necessary information for effective model inference, leading to a significant drop in test set performance. On the other hand, longer sequences are more computationally demanding, especially when utilizing self-attention-based modeling. Hence, under the same computational time constraints, sequences with length \(N\in[1024,2048]\) yield less robust results.
\begin{table}
\begin{tabular}{c c c c c} \hline \hline Tokenizer & TPR & AUC & F1 & Acc. \\ \hline \multicolumn{5}{c}{Validation set} \\ \hline Wordpunct & 0.8919 & **0.9979** & **0.9872** & **0.9828** \\ Whitespace & **0.8965** & **0.9979** & 0.9870 & 0.9824 \\ BPE [29] & 0.8208 & 0.9973 & 0.9847 & 0.9793 \\ \hline \multicolumn{5}{c}{Test set} \\ \hline Wordpunct & 0.5540 & 0.9630 & 0.9049 & 0.9041 \\ Whitespace & **0.5703** & **0.9664** & 0.9068 & 0.9053 \\ BPE & 0.5213 & 0.9657 & **0.9136** & **0.9104** \\ \hline \hline \end{tabular}
\end{table}
Table 6: Mean validation and test set metrics over three CV folds of tokenizer ablation studies on Speakeasy data. TPR is at FPR\(=10^{-3}\).
Figure 4: Results of ablation studies under different configurations on Speakeasy dataset.
\begin{table}
\begin{tabular}{l c c c} \hline \hline Fields & Val. set F1 & Test set F1 & \(\Delta\) \\ \hline Raw JSON (BPE) & 0.9884 & 0.7495 & **0.2389** \\ Raw JSON (whtsp.) & 0.9899 & 0.7275 & **0.2624** \\ Filtered JSON (BPE) & 0.9847 & 0.9136 & _0.0711_ \\ Filt. JSON (whtsp.) & 0.9870 & 0.9068 & _0.0802_ \\ \hline \hline \end{tabular}
\end{table}
Table 5: Mean F1 values of field filter ablation studies over three CV folds on Speakeasy dataset.
### Comparison with State of the Art
**Malware detection.** In this section, we compare the performance of our proposed method, Nebula, with several state-of-the-art models in the domain of malware detection. For all conducted experiments, we report AUC, F1, Accuracy, and True Positive Rate (TPR) at a fixed False Positive Rate (FPR) of \(10^{-3}\).
Malware detection metrics on the Speakeasy dataset [3] are reported in Table 7, with the ROC curve on the test set shown in Figure 5. The four modeling techniques with available source code listed in Table 1 were able to model this data, namely Neurlux by Jindal et al. [4], the Gated CNN model by Zhang et al. [9], Quo.Vadis released by Trizna [3], and Nebula.
Our model surpasses all competitive architectures on Speakeasy emulation data, outperforming them on all metrics on the validation and test sets in both the whitespace and BPE tokenization modes. This is particularly evident under low false-positive conditions. For instance, at an FPR of \(10^{-3}\), Nebula with whitespace tokenization demonstrates a detection rate of 0.897 on the validation set and 0.570 on the test set. In comparison, the next best performing model, Neurlux, scores 0.836 and 0.425 on the validation and test sets, respectively. This observation becomes critically significant considering that strict low false positive rates are enforced on production-grade malware detectors [6, 7].
The efficiency of Nebula is also reflected in the number of training batches required. As seen in Table 7, Nebula achieves these results with less than a third of the training batches required by the second-best model, Neurlux. Turning to the Malicious Code Dataset (MCD) [40], the mean validation set metrics are presented in Table 8. MCD preprocessing is computationally demanding due to the high information density per sample caused by lengthy API call traces. This has a detrimental effect on models that employ custom feature engineering schemes, such as Zhang et al. [9], which require several seconds of feature engineering per MCD sample. Processing a training set of 30,000 samples in this manner would take approximately 100 hours, which is impractical in both experimental and real-world scenarios. Consequently, we excluded this model from our experiments on MCD.
Our observations reveal that Quo.Vadis, a simplistic modeling scheme focused solely on API call names, outperforms both Neurlux and Nebula in terms of AUC and detection rate under low false-positive conditions. Given the lengthier API call sequences in the MCD data, narrowly focused models like Quo.Vadis might capture more of the behavior relevant to this data. This evidences that narrow modeling schemes can still be better tuned to specific data sources and can outcompete more general mechanisms on them.
**Malware classification.** Predicting the malware family is a multi-label objective; Table 9 and Table 10 report the F1 scores of the considered models
\begin{table}
\begin{tabular}{l c c c c c c c c c} \hline \hline & & \multicolumn{4}{c}{Validation set} & \multicolumn{4}{c}{Test set} \\ \cline{2-10} Model & Training batches & TPR & AUC & F1 & Acc. & TPR & AUC & F1 & Acc. \\ \hline Gated CNN [9] & 1058 & 0.476 & 0.9827 & 0.9411 & 0.9233 & 0.2152 & 0.8879 & 0.6465 & 0.7014 \\ Neurlux [4] & 7406 & 0.836 & 0.9976 & 0.9847 & 0.9794 & 0.4250 & 0.9528 & 0.8792 & 0.8786 \\ Quo.Vadis [3] & 4761 & 0.769 & 0.9954 & 0.971 & 0.9614 & 0.3081 & 0.9224 & 0.8065 & 0.8173 \\ \hline Nebula (BPE) & **2116** & 0.8208 & 0.9973 & 0.9847 & 0.9793 & 0.5213 & 0.9657 & **0.9136** & **0.9104** \\ Nebula (whitesp.) & 2159 & **0.8965** & **0.9979** & **0.9870** & **0.9824** & **0.5703** & **0.9664** & 0.9058 & 0.9053 \\ \hline \hline \end{tabular}
\end{table}
Table 7: Malware detection mean metrics over three CV folds on Speakeasy dataset. Reported TPR is at FPR\(=10^{-3}\).
Figure 5: Mean ROC curves over three cross-validations for malware detection of Neurlux [4], Gated CNN [9], Quo.Vadis [3] and Nebula (with BPE tokenizer) on Speakeasy data test set.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline Model & TPR & AUC & F1 & Acc. \\ \hline Neurlux [4] & 0.8508 & 0.9942 & **0.9687** & **0.9794** \\ Quo.Vadis [3] & **0.9035** & **0.9950** & 0.9613 & 0.9736 \\ \hline Nebula (BPE) & 0.8332 & 0.9937 & 0.9653 & 0.9770 \\ Nebula (whitesp.) & 0.8243 & 0.9932 & 0.9590 & 0.9731 \\ \hline \hline \end{tabular}
\end{table}
Table 8: Malware detection mean metrics over five CV folds on MCD data. Reported TPR is at FPR\(=10^{-3}\).
on the Speakeasy and Avast-CTU datasets. We also report a comprehensive evaluation with other relevant metrics for all malware families in Appendix A. Specifically, results on the Speakeasy data can be found in Table 11, while metrics on the Avast-CTU data are detailed in Table 12.
Notably, the Avast-CTU dataset, introduced by Bosansky et al. [20], is characterized by the absence of sequential information. Instead, it provides a summary of the behavioral operations conducted within the sandbox by each sample. Consequently, this dataset is unsuitable for models that examine sequential patterns in API calls, such as Quo.Vadis and the Gated CNN. More generalized architectures like Neurlux and Nebula are capable of effectively modeling this data; thus, the Avast-CTU analysis includes only these models. Nebula exhibits superior test set F1 scores for 4 out of 7 malware types on the Speakeasy data (Table 9) and for 6 out of 10 malware families on the Avast-CTU data (Table 10). Nebula's superiority is particularly noticeable for malware families experiencing significant concept drift, such as the polymorphic Emotet [42], for families with many sub-variants, like Zeus [43], and for malware types that exhibit rich and diverse behaviors, such as benignware, backdoors, ransomware, or trojans. The _modus operandi_ of such agents requires frequent manipulation of the network, filesystem, and registry. An examination of metrics on the Speakeasy test set shows that Neurlux still surpasses Nebula in detecting Droppers and RATs, achieving 18% and 30% higher F1 scores, respectively. This might suggest a weakness in Nebula's data-cleaning approach for these particular malware families, indicating a potential avenue for future improvements. Simultaneously, we note that models focusing solely on API calls, such as Quo.Vadis, exhibit slightly superior performance over the general models for malware families with less diverse behavior on the Speakeasy data. For instance, Quo.Vadis performs better in detecting Keyloggers, a malware type that only occasionally interacts with the network or filesystem to store logged keys.
### Explaining the Behavior of Nebula
We now analyze the behavior of Nebula by leveraging well-known techniques from the _explainable AI_ (XAI) research domain. The first technique, named _Integrated Gradients_ [18], computes the importance of input features by integrating gradients along a path from a baseline to the input. In our case, we use an empty JSON file as the baseline, which stands for the absence of any behavior. We leverage the GradientSHAP implementation from the SHapley Additive exPlanations (SHAP) framework [44]. Since this technique requires an end-to-end differentiable model, it is not directly applicable in our case due to the presence of the initial embedding layer. To overcome this issue, we extract sample embeddings and obtain explanations from this point onwards, recovering SHAP values by taking the mean over the embedding dimension. The second technique leverages attention activations from the Transformer encoder layers to indicate the learned importance of relative token weights within the model. Transformer self-attention layers are multi-headed; in our case, each layer has eight independent heads. We examine all layers and heads, focusing on the strongest attention weight deviations and investigating their implications.
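As an illustration of the first technique, the sketch below computes plain integrated gradients directly on the extracted embeddings and averages the attributions over the embedding dimension. Note that this is a simplified stand-in for the GradientSHAP implementation used in our analysis, and `model_tail` (the network after the embedding layer, returning class logits) is assumed to be available.

```
import torch

def integrated_gradients(model_tail, emb, baseline=None, steps=50):
    """Token attributions for one embedded report `emb` of shape (seq_len, d_model).
    The baseline is the embedding of an 'empty' report (zeros here)."""
    if baseline is None:
        baseline = torch.zeros_like(emb)
    total_grad = torch.zeros_like(emb)
    for alpha in torch.linspace(0.0, 1.0, steps):
        # Interpolate between baseline and input, track gradients at this point.
        point = (baseline + alpha * (emb - baseline)).clone().requires_grad_(True)
        score = model_tail(point.unsqueeze(0))[0, 1]   # logit of the malicious class
        score.backward()
        total_grad += point.grad
    attributions = (emb - baseline) * total_grad / steps   # (seq_len, d_model)
    return attributions.mean(dim=-1)                       # one score per token
```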
We showcase the results of the described techniques on a particular sample infected with the "Urelas" trojan4, to which Nebula assigns a maliciousness score of \(f(x)=0.71\). We find that integrated gradients and attention activations pinpoint the highest maliciousness indicators within a particular segment shown in Figure 6, which includes a token sequence representing filesystem manipulations. In Figure 6(a), integrated gradients suggest that the model leans towards a malicious
\begin{table}
\begin{tabular}{l c c c c c c c c c} \hline \hline & Adload & Emotet & HarHar & Lokibot & Qakbot & Swisyn & Trickbot & Ursnif & Zeus & njRAT \\ \hline Neurlux & **0.7150** & 0.9294 & **0.9031** & 0.8320 & 0.9320 & **0.9991** & **0.9536** & 0.8910 & 0.6503 & 0.8479 \\ \hline Nebula (BPE) & 0.4390 & **0.9392** & 0.7763 & 0.8957 & **0.9876** & 0.9973 & 0.9227 & 0.9362 & 0.6419 & 0.8656 \\ Nebula (whitsp.) & 0.6975 & 0.9319 & 0.8363 & **0.9048** & 0.9768 & 0.9984 & 0.9056 & **0.9585** & **0.6690** & **0.8896** \\ \hline \hline \end{tabular}
\end{table}
Table 10: Mean F1 scores over three CV folds for malware classification objective on Avast-CTU dataset.
\begin{table}
\begin{tabular}{l c c c c c c c c} \hline \hline & Clean & Backdoor & Coinminer & Dropper & Keylogger & Ransomware & RAT & Trojan \\ \hline Neurlux & 0.8453 & 0.8329 & **0.6910** & 0.4488 & 0.2032 & 0.5527 & **0.6625** & 0.6153 \\ Gated CNN & 0.7588 & 0.6870 & 0.5586 & 0.2015 & 0.0794 & 0.3584 & 0.0000 & 0.5282 \\ Quo.Vadis & 0.8338 & 0.8520 & 0.4884 & 0.3580 & **0.2119** & 0.6861 & 0.1195 & 0.5359 \\ \hline Nebula (BPE) & **0.8526** & **0.8548** & 0.6303 & 0.2850 & 0.1295 & **0.7421** & 0.3683 & **0.6827** \\ Nebula (whitsp.) & 0.8240 & 0.8324 & 0.6214 & **0.4615** & 0.1179 & 0.6523 & 0.1854 & 0.6486 \\ \hline \hline \end{tabular}
\end{table}
Table 9: Mean F1 scores over three CV folds for the malware classification objective on the Speakeasy dataset.
prediction with positive (red) values and a benign prediction with negative (blue) values. Concurrently, the self-attention activations in Figure 6(b) illustrate a high attention value for the ini token, indicated by a more pronounced color gradient.
The behavior encapsulated in this segment involves the trojan first writing the golfinfo.ini file into the C:\windows\temp directory, followed by opening and reading another file in the C:\windows\system32 directory, the name of which is a random SHA256 hash. Notable background information is that INI files, though primarily configuration files, can serve as pointers to files located elsewhere. Furthermore, the C:\windows\temp directory, being world-writable, should not be utilized by user-level applications; instead, they typically use %userprofile%\appdata\temp. The C:\windows\system32 directory, a Windows OS system directory, should not contain files with hash-sum names and should not be accessed by user-level applications under normal conditions. From Figure 6(a), it is evident that SHAP identifies the reading operation from the windows folder with a hash-sum pattern as highly influential, along with the ini file in the temp directory. These patterns are mirrored in the attention activations in Figure 6(b), where the strongest activation in the second Transformer layer is on the ini token's connection with the preceding temp and the subsequent windows and read tokens. Consequently, the model has learned to associate the maliciousness of this trojan with the general pattern of an ini config file creation in the C:\windows\temp directory, followed by the opening and reading of a file in the C:\windows\system32 directory. Beyond these local activations, the model also demonstrates strong activations on a more global level. For example, Figure 7 shows a significant attention pattern spanning a broader range of tokens. This indicates an evasion technique used by the sample, exhibiting strong attention between the exedows and gettickcount tokens. The former token represents a filename pattern that is not native to a default Windows system but is used by the trojan in a DLL name, which could evade detection due to its composition of exe and dows, both prevalent in Windows system telemetry. Simultaneously, GetTickCount is commonly employed in timing-based anti-debugging [45]. The model considers the combination of both these elements as highly malicious.
## 5 Related Work
From an academic perspective, we are not the first to explore Transformer applicability for malware detection. For instance, others explored the applicability of self-attention but only for _static_ malware analysis. Li et al. [46] were the first to propose a Transformer-based architecture for static
Figure 6: Explainability of the “Urelas” trojan based on learned representations.
Figure 7: Long span pattern from self-attention layer – combination of unusual Windows OS filepath and anti-debugging technique in the “Urelas” trojan.
malware analysis, applied to assembly instructions. They used a custom architecture called "Galaxy Transformer" to avoid length limitations and construct hierarchical representations. Rudd et al. [47] explored Transformer applicability to static malware detection applied to raw malware bytes. Influenced by the success of the GPT modeling scheme [13], Rudd et al. [47] analyzed a Transformer decoder with an autoregressive pre-training objective. Lu et al. [48] use a Visual Transformer for pixel-based analysis of images formed from static properties of malware. Pei et al. [49] apply a hierarchical transformer for code similarity analysis and vulnerability detection. They generate a dataset from benign Linux ELF binaries, obtaining behavioral micro-traces with the QEMU-based Unicorn emulator.
While some similarities exist between our work and the studies previously mentioned, the application of Transformers in a dynamic malware context, particularly using structured telemetry, distinguishes our approach. Moreover, the comparison with other state-of-the-art dynamic malware detectors and our exploration of the model's explainability represent additional, distinct contributions with respect to previous work.
## 6 Conclusions, Limitations, and Future Work
In this paper, we present Nebula, a novel self-supervised learning transformer model for dynamic malware detection and classification. We conduct an in-depth ablation study to highlight the effect of each of the components of Nebula on its performance, and how they may influence its decision-making process. This analysis highlights how much the inclusion of different behavioral aspects manifested by malware improves the performance, and how much the data cleaning procedure boosts the accuracy at test time. We compare our approach against previously-proposed machine-learning methods for dynamic malware analysis, in a purely supervised learning setting, and we show that Nebula often achieves better results than CNNs and LSTMs. In particular, Nebula clearly outperforms its competitors on both the Speakeasy and Avast-CTU datasets when considering the problem of malware classification. To better understand the underlying reasons, we consider two explainability methods, i.e., integrated gradients and attention activations, which use respectively (i) gradient information to compute the importance attributed to each feature, and (ii) the activations of the attention mechanism to highlight which tokens receive the most focus from the model. Our analysis reveals that Nebula gives attention to relevant tokens associated with malicious activity, also exhibiting long spans of attention.
**Limitations.** To maintain a fair comparison with competing techniques, we did not leverage the self-supervised learning [13, 14] of Nebula, i.e., the possibility of using also unlabeled data to further improve the pretraining step of the transformer architecture. Nevertheless, this is a relevant feature for both malware detection and classification, given the large availability of unlabeled samples. Another limitation of our work is that, while we have analyzed the predictive performance of Nebula, we have not considered its adversarial robustness against well-crafted, gradient-based attacks aimed to mislead its predictions. While Nebula, as any other approach based on machine learning, may be vulnerable to such powerful attacks, especially in the white-box scenario, it is also worth remarking that, in this case, the attacker may be required to modify the dynamic behavior to obfuscate the malicious activities. In fact, as shown by our explainability analysis, Nebula tends to correctly focus more on portions of reports that contain malicious activities, and hiding them may be more complicated for the attacker than modifying some other spurious features of the input sample. In addition, crafting adversarial malware by manipulating the dynamic behavior without corrupting the original, malicious intent of the source malware remains a complex task. To date, indeed, only some initial attempts have been reported, and no implementation has been released to replicate these attacks [50].
**Future work.** To overcome the aforementioned limitations, we plan to investigate the effect of self-supervised learning on Nebula, pretraining it on a much larger data collection with unlabeled samples. We firmly believe that this approach can further boost its performance and enable us to use less supervision for the downstream tasks. Furthermore, given the tremendous performance of large language models in NLU, we will train Nebula on different report formats together, to study the impact of such heterogeneity on its performance. Finally, we also plan to investigate in more depth the adversarial robustness properties of Nebula, and how to improve them, by developing novel attack algorithms tailored to bypass dynamic malware detectors and classifiers.
## Availability
All the source code used to run the experiments of this work, as well as the pre-trained models, are available to download from GitHub.5
Footnote 5: [https://anonymous.4open.science/r/nebula-3185](https://anonymous.4open.science/r/nebula-3185)
|
2309.12110 | Exploiting CLIP-based Multi-modal Approach for Artwork Classification
and Retrieval | Given the recent advances in multimodal image pretraining where visual models
trained with semantically dense textual supervision tend to have better
generalization capabilities than those trained using categorical attributes or
through unsupervised techniques, in this work we investigate how recent CLIP
model can be applied in several tasks in artwork domain. We perform exhaustive
experiments on the NoisyArt dataset which is a dataset of artwork images
crawled from public resources on the web. On such dataset CLIP achieves
impressive results on (zero-shot) classification and promising results in both
artwork-to-artwork and description-to-artwork domain. | Alberto Baldrati, Marco Bertini, Tiberio Uricchio, Alberto Del Bimbo | 2023-09-21T14:29:44Z | http://arxiv.org/abs/2309.12110v1 | # Exploiting CLIP-based Multi-modal Approach
###### Abstract
Given the recent advances in multimodal image pretraining, where visual models trained with semantically dense textual supervision tend to have better generalization capabilities than those trained using categorical attributes or through unsupervised techniques, in this work we investigate how the recent CLIP model can be applied to several tasks in the artwork domain. We perform exhaustive experiments on the NoisyArt dataset, which is a collection of artwork images collected from public resources on the web. On this dataset CLIP achieves impressive results on (zero-shot) classification and promising results in both artwork-to-artwork and description-to-artwork retrieval.
Keywords:image retrieval, zero-shot classification, artwork, CLIP
## 1 Introduction
Image Classification and Content-Based Image Retrieval (CBIR) are fundamental tasks for many domains, and have been thoroughly studied by the multimedia and computer vision communities. In the cultural heritage domain, these tasks simplify the management of large collections of images, making it possible to annotate, search and explore them more easily and at lower cost.
In recent years neural networks have proved to outperform engineered features in both tasks. These networks are typically used in a unimodal fashion, i.e. only one modality is used to train and use a network. This may limit the types of application that can be developed and may also reduce the performance of the networks. Several recent works are showing how using multi-modal approaches may improve the performance in several tasks related to visual information. In [17] it has been shown that CLIP, a model trained using an image-caption alignment objective on a giant dataset made of 400 million (image, text) pairs, obtains impressive results on several downstream tasks. The authors pointed out that, using only textual supervision, the CLIP model learns to perform a wide set of tasks during pre-training, including OCR, geo-localization, action recognition and
many others. This task learning can be leveraged via natural language prompting to enable zero-shot transfer to many existing datasets.
In this work we exploit the zero-shot capabilities of CLIP in the artwork domain; in particular, we focus on the NoisyArt [2] dataset, which was originally designed to support research on webly-supervised recognition of artworks and Zero-Shot Learning (ZSL). Webly-supervised learning is interesting since it greatly reduces the annotation costs required to train deep neural networks, thus allowing cultural institutions to train and develop deep learning methods while keeping their budgets for the curation of their collections rather than the curation of training datasets. In Zero-Shot Learning approaches, visual categories are acquired without any training samples, exploiting the alignment of semantic and visual information learned on some training dataset. ZSL in artwork recognition is a problem of instance recognition, unlike the other common ZSL problems that address class recognition. Zero-shot recognition is particularly appealing for cultural heritage and artwork recognition, although it is an extremely challenging problem, since it can be reasonably expected that museums have a set of curated descriptions paired with the artworks in their collections.
To get a better idea of how CLIP behaves in the artwork domain, we started with a classification task using a shallow classifier and CLIP as the backbone. Subsequently, thanks to the descriptions of the artworks in the dataset, we performed experiments in the field of zero-shot classification, where CLIP was able to demonstrate its abilities in this task. Finally, we performed experiments on the tasks of artwork-to-artwork and description-to-artwork retrieval, obtaining very promising results and superior performance to a ResNet-50 pre-trained on ImageNet [19].
## 2 Related Works
Regarding CBIR, after the introduction of the successful Bag-of-Visual-Words model in [23], which uses engineered visual features such as SIFT points, many works have improved the performance by addressing different aspects such as approximating local descriptors [8], learning improved codebooks [14], and improving local feature aggregation [15, 9, 4]. In recent years, following the success obtained using Convolutional Neural Networks (CNN) to address the problem of image classification [12], CNN-based features have started to be used also for image retrieval tasks. A complete survey that compares SIFT-based and CNN-based methods for instance-based image retrieval is presented in [28]. Commonly used backbone networks are VGG [22] and ResNet [7], typically pretrained on ImageNet and then fine-tuned for a specific domain. CNN features have been pooled using techniques like Regional Maximum Activation of Convolutions (R-MAC) [25]. R-MAC considers a set of fixed squared regions at different scales, collecting the maximum response in each channel and then sum-pooling them to create the final R-MAC descriptor. More recent works follow an end-to-end approach: in [1] a layer called NetVLAD was proposed, pluggable into any CNN architecture and trainable through back-propagation, which allows training a network end-to-end using an aggregation of VGG16 convolutional activations. Multi-scale pooling of CNN features followed by NetVLAD has been proposed in [26], obtaining state-of-the-art results using VGG16. In [16] a trainable pooling layer called Generalized-Mean (GeM) has been proposed, along with a learned whitening, for short representations. In this work a two-stream Siamese network is trained using a contrastive loss. The authors use up to 5 image scales to extract features.
## 3 Dataset
NoisyArt [2] is a collection of artwork images collected using articulated queries to metadata repositories and image search engines on the web. According to the creators of the dataset, the goal of NoisyArt is to support research on webly-supervised artwork recognition for cultural heritage applications.
The characteristics of the NoisyArt dataset are summarized in Table 1.
NoisyArt is a complex dataset which can be used on a wide variety of automated recognition problems. The dataset is particularly well suited to webly supervised instance recognition as a weakly-supervised extension of fully-supervised learning. In the dataset, for testing purposes, a subset of classes with manually verified test images is provided (_i.e._ with no label noise).
The NoisyArt dataset is collected from numerous public resources available on the web. These resources are DBpedia (from which the metadata are also retrieved), Google Images and Flickr. Figure 1 shows some examples of artworks with their respective sources.
From these sources the authors collected 89,095 images divided into 3,120 classes. Each class contains a minimum of 20 images and a maximum of 33. To ensure a non-noisy and more reliable test set, the authors created a supervised test set using a small subset of the original classes: 200 classes containing more than 1,300 images taken from the web or from personal photos. This test set is not balanced: some classes have only a few images, while others have up to 12. The different method of collecting the training and test sets also raises the issue of a strong domain shift between these images and those in the training set. Finally, each artwork has a description and metadata retrieved from DBpedia, from which a single textual document was created for each class. These descriptions are included in the dataset to support research on zero-shot
\begin{table}
\begin{tabular}{r|r|r r|r}
 & & \multicolumn{2}{c|}{**(webly images)**} & **(verified images)** \\
 & **classes** & **training** & **validation** & **test** \\ \hline
 & 2,920 & 65,759 & 17,368 & 0 \\
 & 200 & 4,715 & 1,253 & 1,355 \\ \hline
**totals** & 3,120 & 70,474 & 18,621 & 1,355 \\ \end{tabular}
\end{table}
Table 1: Characteristics of the NoisyArt dataset
learning and other multi-modal approaches to learning over weakly supervised data.
### GradCAM visualization
In order to get a better idea of the portions of the image that CLIP considers most important when it associates a text with an image, before moving on to the quantitative experiments, we carried out some qualitative tests using the well-known visualization technique gradCAM [21]. The technique we used is a generalization of gradCAM where, instead of computing gradients with respect to an output class, gradients are computed with respect to the textual features produced by CLIP's text encoder from the description. This approach makes each heat-map calculated by gradCAM dependent on the individual description, showing us the portions of the image that CLIP most closely associates with it. As is common practice, the _saliency layer_ used is the last convolutional layer of CLIP's visual encoder.
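A minimal sketch of this description-conditioned gradCAM is shown below, using the public `clip` package; the choice of `model.visual.layer4` as saliency layer, the image path and the example description are illustrative assumptions, not the exact configuration used for Figure 2.

```python
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("RN50", device=device)

# Capture activations and gradients of the assumed saliency layer.
store = {}
layer = model.visual.layer4
layer.register_forward_hook(lambda m, i, o: store.update(act=o))
layer.register_full_backward_hook(lambda m, gi, go: store.update(grad=go[0]))

image = preprocess(Image.open("artwork.jpg")).unsqueeze(0).to(device)
text = clip.tokenize(["Description of the artwork retrieved from DBpedia."], truncate=True).to(device)

image_features = model.encode_image(image)   # triggers the forward hook
text_features = model.encode_text(text)
score = torch.cosine_similarity(image_features, text_features).sum()
model.zero_grad()
score.backward()                              # gradients w.r.t. the description-conditioned score

# Grad-CAM: channel weights from averaged gradients, ReLU of the weighted sum.
weights = store["grad"].mean(dim=(2, 3), keepdim=True)
cam = torch.relu((weights * store["act"]).sum(dim=1)).squeeze().detach().cpu()
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalized heat-map to overlay on the image
```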
Figure 1: Sample classes and training images from the NoisyArt dataset. For each artwork/artist pair we show the seed image obtained from DBpedia, the first two Google Image search results, and the first two Flickr search results. Image taken from [2].
Figure 2 shows four examples of gradCAM visualization. We can see how, using the descriptions in the dataset, CLIP places attention on the most significant portions of the image. This made us confident that CLIP would work very well in the artwork domain.
Figure 2: Examples of gradCAM visualization on NoisyArt, computing the gradients with respect to the CLIP text features of the description.
## 4 Experiments
### Webly-supervised Classification
To test the performance of CLIP in the art domain, following the experimental setup of the authors of the dataset, we performed webly-supervised classification on the 200 classes that are also available in the test set.
#### 4.1.1 Experimental Setup
Given an input image \(\mathbf{x}\), we extract a feature vector using only the CLIP image encoder and then pass it through a shallow classifier, consisting of a single hidden layer and an output layer that estimates class probabilities \(p(c\mid\mathbf{x})\). The hidden layer is followed by an \(L^{2}\)-normalization layer which, as noted in [3], helps to create similar representations for images with different visual characteristics because the magnitude of the features is ignored by the final classification layer. Such normalization is therefore useful to alleviate the effects of the domain shift between the training and test sets.
The structure of the shallow classifier is essentially the same as in [2, 10]. This choice was made intentionally to analyze the effects of using the CLIP image encoder instead of a convolutional backbone trained on ImageNet. To identify and mitigate label noise during training, several techniques such as labelflip noise, entropy scaling for outlier mitigation and gradual bootstrapping are used in [2]. In our experiments, however, following [3], we only use the \(L^{2}\)-normalization layer after the hidden layer.
We trained this shallow classifier for 300 epochs with a batch size of 64 and a learning rate of \(1e-4\). We used the CLIP model whose convolutional backbone is a slightly modified version of ResNet-50. The hidden layer has an input dimension of 1024 (the CLIP output dimension) and an output dimension of 4096.
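A minimal sketch of this classifier head is given below, assuming precomputed CLIP image features; the ReLU activation and the training-loop details are illustrative assumptions, since only the layer sizes, batch size and learning rate are stated above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ShallowClassifier(nn.Module):
    """Single hidden layer + L2-normalization on top of frozen CLIP image features."""
    def __init__(self, in_dim=1024, hidden_dim=4096, num_classes=200):
        super().__init__()
        self.hidden = nn.Linear(in_dim, hidden_dim)
        self.out = nn.Linear(hidden_dim, num_classes)

    def forward(self, clip_features):
        h = F.relu(self.hidden(clip_features))
        h = F.normalize(h, p=2, dim=-1)  # L2-normalization to alleviate domain shift
        return self.out(h)               # logits for p(c | x)

# Illustrative training step on precomputed features of shape [batch, 1024].
classifier = ShallowClassifier()
optimizer = torch.optim.Adam(classifier.parameters(), lr=1e-4)
features, labels = torch.randn(64, 1024), torch.randint(0, 200, (64,))
loss = F.cross_entropy(classifier(features), labels)
optimizer.zero_grad(); loss.backward(); optimizer.step()
```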
#### 4.1.2 Experimental Results
Table 2 summarizes the experimental results we obtained in this classification setting. In the table, _BL_ refers to the baseline network [2] without any label-noise mitigation, _BS_ refers to the noise-mitigation approach of [2], and _RN50_ \(\alpha=0.4\) refers to the normalization approach of [3] where the \(L^{2}\)-normalization is scaled by \(\alpha\).
\begin{table}
\begin{tabular}{c||c|c|c|c} & \multicolumn{2}{c|}{**test**} & \multicolumn{2}{c}{**validation**} \\
**Model** & **acc** & **mAP** & **acc** & **mAP** \\ \hline \hline
**RN50 BL [2]** & 64.80 & 51.69 & 76.14 & 63.08 \\ \hline
**RN50 BS [2]** & 68.27 & 57.44 & 75.98 & 62.83 \\ \hline
**RN50 \(\alpha=\mathbf{0.4}\)[3]** & 74.89 & 62.86 & 77.14 & 63.71 \\ \hline \hline
**CLIP RN50** & **86.63** & **77.88** & **83.56** & **72.23** \\ \end{tabular}
\end{table}
Table 2: Recognition accuracy (acc) and mean Average Precision (mAP) on NoisyArt dataset
From the table it is immediately evident that with the use of CLIP as a backbone it is possible to obtain very significant improvements on both the test and the validation set. It is very interesting to see that [2, 3] obtain better results on the validation set than on the test set. In our case, however, the situation is reversed, with comparable and slightly better results on the test set. This demonstrates that CLIP is quite robust to domain shift, as it is able to extract the semantics of an image regardless of its raw content.
### Zero-shot Classification
The availability of the descriptions associated with the artworks made it possible to perform experiments in the area of zero-shot classification by exploiting CLIP's ability to assign a similarity score between text and images.
#### 4.2.1 Experimental Results
Table 3 shows the immense potential of CLIP in the zero-shot classification domain. As a matter of fact, comparing the results with those found in the literature, we notice that by using CLIP, improvements of over 20% can be achieved. It is also worth noting that the results we compare against were obtained through a training process that uses three-fold cross-validation, where the 200 verified classes are divided into 150 training/validation classes and 50 zero-shot test classes. On our side, we used CLIP out-of-the-box without any training on the NoisyArt dataset.
To make a complete argument, it is also necessary to mention that, since the data on which CLIP was actually trained is not public, we do not know whether any images from this dataset were used in its training process. If so, there would be an information leak that would make the comparison less fair.
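A minimal sketch of the zero-shot classification procedure is given below, assuming the DBpedia-derived class descriptions are available as a list of strings; truncating descriptions to CLIP's 77-token context is an implementation detail added here for robustness.

```python
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("RN50", device=device)

# Assumption: one DBpedia-derived textual document per verified class.
class_descriptions = ["Description of artwork class 1 ...", "Description of artwork class 2 ..."]

with torch.no_grad():
    text_features = model.encode_text(clip.tokenize(class_descriptions, truncate=True).to(device))
    text_features /= text_features.norm(dim=-1, keepdim=True)

    image = preprocess(Image.open("query.jpg")).unsqueeze(0).to(device)
    image_features = model.encode_image(image)
    image_features /= image_features.norm(dim=-1, keepdim=True)

    # Cosine similarity between the query image and every class description.
    similarity = (image_features @ text_features.T).squeeze(0)
    predicted_class = similarity.argmax().item()
```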
### Image Retrieval
Seeing the excellent behavior of CLIP in the (zero-shot) classification of artwork, we decided to perform some experiments in image retrieval.
In all the experiments that we are going to present, the images contained in the validation set (1253 images belonging to the 200 verified classes) were used as queries, while those of the test set (1379 images of the same 200 classes) were used as index images.
\begin{table}
\begin{tabular}{c||c|c}
**Model** & **acc** & **mAP** \\ \hline \hline
**DEVISE RN50 [6]** & 24.79 & 31.90 \\ \hline
**EsZSL RN50 [18]** & 25.63 & 29.89 \\ \hline
**COS+NLL+L2 RN50 [3]** & 34.93 & 45.53 \\ \hline \hline
**CLIP RN50** & **60.27** & **69.23** \\ \end{tabular}
\end{table}
Table 3: Zero-shot recognition accuracy (acc) and mean Average Precision (mAP) on NoisyArt dataset
#### 4.3.1 Experimental Setup
We conducted numerous experiments to get a complete picture of how CLIP performs in this task on the NoisyArt dataset. As in the classification experiments, the CLIP model with a modified ResNet-50 visual backbone is used.
The most natural way to use CLIP in retrieval is obviously to use the output of the visual encoder as a global descriptor and compare only the visual features; this is exactly what we did in our first experiment.
To take advantage of the CLIP text encoder and of its strong zero-shot classification performance, we then reinterpreted the image-to-image retrieval task as zero-shot classification followed by text-to-image retrieval. This reinterpretation was made possible by the description and the metadata associated with each class. Thus, given a query, zero-shot classification of that image was performed as a first phase, exploiting CLIP's ability to link images and texts. We used CLIP to assign a similarity score to each possible (query image, artwork description) pair, and the description with the highest score was used in the second phase. The second phase consists in comparing the description chosen at the end of the first phase with all the images in the dataset, assigning a similarity score to each possible (query description, index image) pair. For a complete comparison, we also report an experiment where the classification phase is bypassed and the correct artwork description is always used in the text-to-image retrieval phase.
Another setup we experimented with adds a visual re-ranking phase to the zero-shot classification followed by text-to-image retrieval: the first 100 retrieved images are re-ordered using the similarity of the visual features.
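The following sketch outlines this two-phase retrieval with visual re-ranking, assuming precomputed, L2-normalized CLIP features for the index images and the class descriptions; the re-ranking depth of 100 follows the text, everything else is an illustrative simplification.

```python
import torch

def retrieve(query_img_feat, index_img_feats, class_text_feats, rerank_k=100):
    """query_img_feat: [D]; index_img_feats: [N, D]; class_text_feats: [C, D].
    All inputs are assumed to be L2-normalized CLIP embeddings."""
    # Phase 1: zero-shot classification of the query against the class descriptions.
    best_class = (class_text_feats @ query_img_feat).argmax()

    # Phase 2: text-to-image retrieval with the selected description.
    text_scores = index_img_feats @ class_text_feats[best_class]
    ranking = text_scores.argsort(descending=True)

    # Visual re-ranking of the top-k results by similarity to the query image.
    top_k = ranking[:rerank_k]
    visual_scores = index_img_feats[top_k] @ query_img_feat
    reranked = top_k[visual_scores.argsort(descending=True)]
    return torch.cat([reranked, ranking[rerank_k:]])
```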
Finally, the CLIP network was fine-tuned to adapt it to this task. The fine-tuning was done by inserting a shallow classifier composed of two linear layers at the output of the visual encoder. The learning rate was set to \(1e-7\) for the CLIP encoder (keeping the normalization layers frozen) and \(1e-4\) for the shallow classifier. For ease of use, a classification loss (categorical cross-entropy) was used during this fine-tuning process. We fine-tuned the model for 30 epochs using the 2,920 classes not included in the test set.
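A sketch of this fine-tuning setup is given below; the hidden size of the two-layer head, the ReLU activation and the fp32 conversion are illustrative assumptions not stated in the text.

```python
import torch
import torch.nn as nn
import clip

device = "cuda" if torch.cuda.is_available() else "cpu"
model, _ = clip.load("RN50", device=device)
model = model.float()  # train in fp32 for simplicity

# Two-layer head on top of the visual encoder output (2,920 training classes).
head = nn.Sequential(nn.Linear(1024, 4096), nn.ReLU(), nn.Linear(4096, 2920)).to(device)

# Keep normalization layers frozen; collect the remaining encoder parameters.
encoder_params = []
for module in model.visual.modules():
    frozen = isinstance(module, (nn.BatchNorm2d, nn.LayerNorm))
    for p in module.parameters(recurse=False):
        p.requires_grad_(not frozen)
        if not frozen:
            encoder_params.append(p)

optimizer = torch.optim.Adam([
    {"params": encoder_params, "lr": 1e-7},    # CLIP visual encoder
    {"params": head.parameters(), "lr": 1e-4}, # shallow classifier
])
criterion = nn.CrossEntropyLoss()

def training_step(images, labels):
    logits = head(model.encode_image(images))
    loss = criterion(logits, labels)
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return loss.item()
```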
#### 4.3.2 Experimental Results
Before commenting on the results obtained we summarize the experimental setups:
* **RN50 image features**: We compare the image features extracted with a ResNet-50 pretrained on ImageNet
* **CLIP image features**: We compare the image features extracted with the CLIP image encoder
* **CLIP class + text-to-image**: We perform a zero-shot classification of the query followed by a text-to-image retrieval using CLIP text and visual encoder
* **CLIP class + text-to-image + visual re-ranking**: We perform a visual re-ranking of the first 100 retrieved results after CLIP zero-shot classification and text-to-image retrieval
* **Oracle + CLIP text-to-image**: We perform only the text-to-image retrieval using the ground-truth class for the description
* **CLIP fine-tuned image features**: We compare the image features extracted with the CLIP image encoder after fine-tuning
Table 4 summarizes the results of the experiments performed in the image retrieval setting previously described. It can be seen that CLIP visual features perform better than features extracted with a ResNet-50 pre-trained on ImageNet. It is interesting to note that the re-ranking step makes the zero-shot classification followed by text-to-image retrieval pipeline outperform the approach that uses only the visual features before fine-tuning. This is made possible by CLIP's good results in zero-shot classification illustrated in the previous section. It is also worth mentioning that using the ground-truth class and performing the text-to-image retrieval operation yields surprisingly good results, confirming the strength of CLIP in the text-to-image retrieval task. These results are even better, by a significant margin, than those obtained using only visual features before fine-tuning. This is probably due to the domain shift between the validation and test sets, to which the visual features are more subject. Finally, we can see that CLIP fine-tuning was very successful, bringing a very significant performance boost and achieving better results than all other approaches.
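For reference, a minimal implementation of the mean Average Precision used to score the retrieval experiments is sketched below; it follows the standard definition and is not taken from the NoisyArt evaluation code.

```python
import numpy as np

def average_precision(is_relevant):
    """is_relevant: boolean array over the ranked index images for one query."""
    is_relevant = np.asarray(is_relevant, dtype=bool)
    if not is_relevant.any():
        return 0.0
    hits = np.cumsum(is_relevant)
    precision_at_hits = hits[is_relevant] / (np.flatnonzero(is_relevant) + 1)
    return float(precision_at_hits.mean())

def mean_average_precision(per_query_relevance):
    """per_query_relevance: list of boolean arrays, one per query image."""
    return float(np.mean([average_precision(r) for r in per_query_relevance]))
```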
## 5 Conclusions
In this paper we propose to use the zero-shot capabilities of CLIP in the artwork domain, showing how this approach can greatly improve over competing state-of-the-art approaches on the challenging NoisyArt dataset. Experiments show that, in addition to zero-shot classification, the proposed approach can be used for content-based image retrieval, again outperforming other competing approaches by a large margin. A benefit of the proposed method is that it can be trained using very small datasets, thanks to the extensive pretraining of CLIP, and can thus also be deployed on relatively small collections like those of small and medium-sized museums.
\begin{table}
\begin{tabular}{c||c}
**Experimental Setup** & **mAP** \\ \hline \hline
**RN50 image features** & 36.32 \\ \hline
**CLIP image features** & 46.40 \\ \hline
**CLIP class + text-to-image** & 40.54 \\ \hline
**CLIP class + text-to-image + visual re-ranking** & 47.41 \\ \hline
**Oracle + CLIP text-to-image** & 54.21 \\ \hline
**CLIP fine-tuned image features** & **69.60** \\ \end{tabular}
\end{table}
Table 4: Retrieval results on NoisyArt dataset using as queries the validation set and as index images the test set.
Acknowledgments. This work was partially supported by the European Commission under European Horizon 2020 Programme, grant number 101004545 - ReInHerit. |
2309.14575 | Monolithic Integration of Single Quantum Emitters in hBN Bullseye
Cavities | The ability of hexagonal boron nitride to host quantum emitters in the form
of deep-level color centers makes it an important material for quantum photonic
applications. This work utilizes a monolithic circular Bragg grating device to
enhance the collection of single photons with 436 nm wavelength emitted from
quantum emitters in hexagonal boron nitride. We observe a 6-fold increase in
collected intensity for a single photon emitter coupled to a device compared to
an uncoupled emitter, and show exceptional spectral stability at cryogenic
temperature. The devices were fabricated using a number of etching methods,
beyond standard fluorine-based reactive ion etching, and the quantum emitters
were created using a site-specific electron beam irradiation technique. Our
work demonstrates the potential of monolithically-integrated systems for
deterministically-placed quantum emitters using a variety of fabrication
options. | Lesley Spencer, Jake Horder, Sejeong Kim, Milos Toth, Igor Aharonovich | 2023-09-25T23:19:50Z | http://arxiv.org/abs/2309.14575v1 | # Monolithic integration of single quantum emitters in hBN bullseye cavities
###### Abstract
The ability of hexagonal boron nitride to host quantum emitters in the form of deep-level color centers makes it an important material for quantum photonic applications. This work utilizes a monolithic circular Bragg grating device to enhance the collection of single photons with 436 nm wavelength emitted from quantum emitters in hexagonal boron nitride. We observe a 6-fold increase in collected intensity for a single photon emitter coupled to a device compared to an uncoupled emitter, and show exceptional spectral stability at cryogenic temperature. The devices were fabricated using a number of etching methods, beyond standard fluorine-based reactive ion etching, and the quantum emitters were created using a site-specific electron beam irradiation technique. Our work demonstrates the potential of monolithically-integrated systems for deterministically-placed quantum emitters using a variety of fabrication options.
**Key words:** hexagonal boron nitride, quantum emitter, B-center, fabrication, circular Bragg grating, bullseye cavity
Quantum states of light are highly sought after for emerging technologies offering accelerated computation [1], secure communication [2], and enhanced sensing [3]. Applications which make use of single photon number states in particular place stringent performance requirements on the single photon sources (SPSs) [4]. Namely, the source should output a train of single, indistinguishable photons at a rate on the order of 1 GHz. While single photon purity and photon indistinguishability characterize the distinction between quantum and classical light, the photon count rate, or SPS brightness, is a key performance metric for technologies like photonic quantum computation and quantum key distribution [5]. Solid state SPSs have been established as an attractive photonics platform for these technologies [6], particularly lattice defects in wide bandgap semiconductors such as diamond and hexagonal boron nitride (hBN) which are known to host bright, stable emitters at room temperature [7, 8, 9]. The layered nature of hBN allows for simple exfoliation and transfer methods which enable versatile sample preparation and device integration techniques [10, 11]. Several approaches to defect-based quantum emitter creation in hBN have been explored, including annealing [12, 13], electron beam irradiation [14, 15, 16], and ion beam irradiation [17, 18]. Quantum emitters in bulk hBN are typically characterized by an in-plane dipole emission pattern and have an excited state lifetime
on the order of 2 ns [19, 20], which corresponds to a maximum photon emission rate of 0.5 GHz.
The photon detection rate can be increased toward the emission rate by enhancing the photon collection efficiency through radiation mode shaping, and/or by reducing the lifetime through Purcell enhancement. The circular Bragg grating (CBG) has been a popular approach to address this need. Enhanced collection efficiency from CBG structures has been reported for various SPS platforms, including self-assembled quantum dots (QDs) [21, 22, 23, 24, 25], colloidal QDs [26, 27], bulk diamond [28] and nanodiamond [29], and CBGs have also been used to enhance excitonic bandgap emission [30, 31]. In hBN, ensemble emission from the boron vacancy spin defect was enhanced using monolithic integration with a CBG device [32], and the recently discovered 436 nm quantum emitter, termed B-center, was coupled to a waveguide and the emission efficiently collected using a semicircular Bragg-style output coupler [33]. Early works on QDs and diamond etched the CBG structure into an existing host material containing a nominal density of single photon emitters, resulting in a low probability of cavity coupling. Deterministic positioning has since been achieved using pick-and-place methods that require secondary dielectric deposition to fully embed the emitter within the cavity, and can still suffer from random out-of-plane dipole orientations. In contrast, single B-centers have in-plane dipoles and they can be positioned deterministically relatively easily in pre-fabricated hBN photonic structures [34], making them uniquely suited to monolithic integration with CBG devices. Here we enhance the collection efficiency of photons generated by a single B-center using a monolithic CBG in hBN. The emitter brightness is found to be sensitive to the CBG dimensions, and for an optimal design we observe a high spectral stability, suggesting that the employed electron beam lithography (EBL) and reactive ion etching (RIE) fabrication methods, impart negligible damage to the hBN lattice in the vicinity of the emitters.
In our previous work [32], we designed a CBG structure optimized for boron vacancy ensembles, which are characterized by a broad emission spectrum centered at 780 nm. The geometry of that successful design has been rescaled to accommodate the higher energy 436 nm photons produced by the B-center, shown schematically in Figure 1a. The SEM image in Figure 1b shows the resulting CBG, with a central disk diameter of 512 nm, ring width of 256 nm, and ring spacing of 97 nm. Single B-centers can be created within the central disk easily using electron beam irradiation, although their in-plane orientation is random. The CBG cavity mode is simulated in Figure 1c, illustrating normalized electric field intensity. The cavity is designed to induce resonance at 436 nm so that it matches the wavelength of the B-center. The cavity is resilient to variations in the in-plane orientation of the dipole due to the structural rotational symmetry of the CBG. Deterministic positioning of B-centers allowed dense arrays of CBGs to be fabricated, as shown in Figure 1d, in order to systematically investigate the influence of fabrication parameters.
Figure 1: **Circular Bragg grating cavity.** a) Schematic of a B-center in an hBN CBG on a SiO2/Si substrate, excited by a 405 nm laser. The inset shows the energy level structure of the emitter: CB, conduction band; VB, valence band; \(|\)G\(\rangle\), defect ground state; \(|\)E\(\rangle\), defect excited state. b) SEM image of an hBN CBG cavity. The scale bar is 1 \(\mu\)m. c) Simulated in-plane normalized electric field intensity from a dipole source embedded in the CBG cavity center. The scale bar is 1 \(\mu\)m. d) Optical microscope image of an array of CBG devices patterned in resist using EBL. The scale bar is 50 \(\mu\)m.
Figure 2: **SEM images of CBG structures fabricated using several etching methods.** The images are at 0\({}^{\circ}\) tilt (left column) and 52\({}^{\circ}\) tilt (right column): (a,b) argon IBE with PMMA mask; (c,d) argon and sulfur hexafluoride CAIBE with PMMA mask; (e,f) argon and chlorine RIE with CSAR mask removed; (g,h) argon and sulfur hexafluoride RIE with CSAR mask removed. Each scale bar is 2 \(\mu\)m.
One such parameter that we explored is the dry etching method used to transfer the EBL pattern into the hBN. Photonic structures in hBN are commonly etched by fluorine-based RIE [33, 35, 36]. Here, we compare this approach to initial tests of chlorine RIE, ion beam etching (IBE), and chemically-assisted ion beam etching (CAIBE). In all cases, we used the EBL resist as a hard mask for the etching process. This is done to eliminate fabrication imperfections caused by the liftoff process required for a metal mask, which we avoid at the expense of mask robustness.
Figure 2a,b show a CBG fabricated using a five minute argon IBE etch process, using a PMMA mask. The PMMA was ineffective as a mask for the physical argon etch process and appears to have been significantly sputtered whilst the exposed hBN is only slightly etched. The etched hBN has rough surfaces and slanted sidewalls from the mask being sputtered away throughout the etch. In comparison, Figure 2c,d show a CBG fabricated using a ten minute argon CAIBE etch. Similar to the IBE process, this is a physical argon etch, but it is chemically assisted by sulfur hexafluoride (SF\({}_{6}\)) gas injected at the hBN surface. We note that the surface roughness of both the mask and hBN is substantially lower than that generated by the IBE etch. However, the CAIBE process caused cracking of the polymer mask and lateral etching near regions that were developed leading to the slanted sidewalls, similar to those observed in the IBE etch. The PMMA mask appears to have withheld longer under the CAIBE conditions, leading to a longer etch time and resulting in a deeper etch. Both the IBE and CAIBE methods can therefore be improved by the use of a more robust mask and a chemical etch component.
Next, in Figure 2e and Figure 2f we present SEM images of a CBG fabricated using a chlorine-based RIE etch. This sample was etched for two minutes with a CSAR mask that was chemically removed with CSAR remover before SEM imaging. This etch is shallow but appears to have the most precise features with the smoothest sidewalls. Compared to the fluorine etch, shown in Figures 2g and 2h after chemical CSAR mask removal, the chlorine-based RIE etch as well as the IBE and CAIBE etches are very slow. The fluorine recipe etches through the 300 nm flake in a third of the time that the chlorine etch was run. However, it shows a roughness of the sidewalls known as microtrenching. This is an undesired result of RIE that degrades the photonic properties of a structure, especially since a given physical defect becomes more significant as the features of the structure become smaller and approach a scale similar to the defect. Despite this, our fluorine-based etch produced the devices most appropriate for coupling to emitters and characterisation due to the etch depth. Whilst the emitter coupling and characterisation for the rest of this work will be conducted with the devices fabricated with fluorine based RIE, we acknowledge the potential for development of each etching method and refinement of fluorine-based RIE.
Continuing now with the fluorine RIE fabrication, the sensitivity of the design dimensions presented in Figure 1b was evaluated by comparing the intensity of B-centers placed in CBG devices fabricated with varying spatial and lithographic parameters, as well as the comparison to B-centers in bulk, unstructured hBN. This latter case is shown in Figure 3a, where the bright photoluminescence (PL) spectrum from an emitter coupled to a CBG device is clearly distinguished from the spectrum of the same type of emitter situated in bulk hBN (uncoupled). Under the same excitation conditions of 500 \(\upmu\)W of 405 nm laser, the CBG-coupled emitter has a six times greater total intensity than the uncoupled emitter. The increased spectral intensity is attributed primarily to out-of-plane emission redirection, rather than lifetime reduction, since we expect the CBG to have negligible quality factor and hence minimal Purcell enhancement.
The coupling to the CBG is also evident in the power saturation behavior, as shown in Figure 3b. Here, the observed saturation power for the coupled and uncoupled emitters are of similar magnitude, being 1.23 mW and 0.82 mW respectively, but we find a six-fold increase in the saturation intensity from 0.4 Mcps with the uncoupled emitter to 2.3 Mcps for the coupled
Figure 3: **Coupling single emitters to CBG devices.** a) PL spectra from a B-center emitter within a CBG and within bulk hBN, using 500 \(\upmu\)W of 405 nm excitation over 10 s. b) Power saturation behavior for each emitter in (a). c) Second order correlation measurement for each emitter in (a), using 500 \(\upmu\)W of 405 nm excitation. d) Integrated spectral emission intensity from emitters placed in a CBG with varying lattice constants.
emitter. Hence, for the same excitation power the CBG allows for a much greater proportion of emitted photons to be collected, and also reduces the relative proportion of background fluorescence. The second order correlation measurements in Figure 3c indicate the single photon nature of the emission from the two emitters under study, further evidence that increased intensity is due to the action of the CBG structure rather than the presence of multiple single photon emitters.
To compare the performance of each CBG, the total emission intensity was calculated by summing the PL spectra data. The influence of device geometry is shown in Figure 3d, where total intensity data are grouped by variation of scaling factor, relative to the dimensions of the device in Figure 1b. The initial design is seen to be optimal, since the average intensity is rapidly reduced for small increments of the scaling factor in either direction.
Figure 4: **Cryogenic spectroscopy and photon statistics.** a) PL spectrum from an emitter monolithically embedded in a CBG device, using a 405 nm excitation laser at 800 \(\mu\)W power and integrating for 1 s with an 1800 l/mm grating. The fit in orange is composed of two Lorentzian functions. b) Kinetic PL spectrum over 2 minutes, showing the stability of the ZPL. c) Power saturation behavior, using the total intensity of spectra integrated for 5 s with an 1800 l/mm grating. The fit in orange yields a saturated emission intensity of \(I_{\infty}=\) 18.47 thousand counts/s, and \(P_{sat}=\) 0.40 mW. d) Second order correlation measurement, using 800 \(\mu\)W of 405 nm laser over 60 minutes. The fit in orange yields a minimum of g\({}^{(2)}\)(0) = 0.18.
Next, we cooled the CBG array sample to 5 K and performed cryogenic spectroscopy with a well-coupled emitter. Off-resonant excitation reveals a bright, spectrometer-limited zero phonon line (ZPL) accompanied by a low energy acoustic phonon sideband, as shown in Figure 4a. Fitting both features with a Lorentzian curve yields a ZPL full width at half maximum (FWHM) of 0.1 nm and PSB FWHM of 5 nm. The proximity of this sideband introduces significant dephasing that ultimately limits the visibility in two photon interference experiments [37], although moderate Hong-Ou-Mandel visibility was recently achieved with a B-center under off-resonant excitation [38]. Photon indistinguishability would be improved with narrow filtering of the ZPL at the expense of collection count rates and hence measurement duration. The collection enhancement offered by a CBG structure is therefore desirable to offset the reduction in brightness due to spectral filtering.
The kinetic spectrum plot in Figure 4b shows that the spectral stability is very high, with no ZPL wavelength deviation over two minutes observed within the limit of the spectrometer. Minimizing spectral diffusion is also crucial for advanced quantum optics experiments involving photon interference [38-40], and this result reflects the minimal impact that the fabrication procedure has on the quality of the hBN lattice surrounding the B-center. Additional spectral stability may be due to the reduced crystal volume of the central CBG disk, leading to a lower population of neighboring charge traps whose fluctuations cause inhomogeneous linewidth broadening [41]. We note that the ZPL occurs at 432.5 nm, which is moderately far from the expected value of 436.0 nm \(\pm\) 0.2 nm [42], likely due to anomalous local strain conditions. From the power saturation measurement in Figure 4c we estimate a maximum photon count rate of 18 kcps, and the saturation power occurs at 0.4 mW. The second order correlation measurement in Figure 4d indicates that the emission is predominantly from a single emitter, with the small degree of bunching evident in the shoulders being due to excitation above the saturation power.
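The power-saturation parameters quoted above can be extracted with a least-squares fit; the sketch below assumes the standard two-level saturation model \(I(P)=I_{\infty}P/(P+P_{sat})\), which is not stated explicitly in the text, and uses illustrative data points rather than the measured ones.

```python
import numpy as np
from scipy.optimize import curve_fit

def saturation(power, i_inf, p_sat):
    """Two-level emitter saturation model: I(P) = I_inf * P / (P + P_sat)."""
    return i_inf * power / (power + p_sat)

# Illustrative data: excitation power in mW, detected count rate in kcps.
powers = np.array([0.05, 0.1, 0.2, 0.4, 0.8, 1.6])
counts = np.array([2.1, 3.9, 6.3, 9.2, 12.4, 14.6])

popt, pcov = curve_fit(saturation, powers, counts, p0=[15.0, 0.5])
i_inf, p_sat = popt
print(f"I_inf = {i_inf:.1f} kcps, P_sat = {p_sat:.2f} mW")
```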
In summary, we explored etching options for hBN fabrication in order to produce monolithic circular Bragg grating cavities. We positioned single quantum emitters within the fabricated cavities and demonstrated a 6-fold collection enhancement of the 436 nm emission compared to uncoupled emitters in bulk hBN. We discuss the effects of design scaling and EBL conditions on the fabrication and photonic function of the devices and observe spectrometer-limited temporal spectral stability for a coupled emitter at 5 K.
We thank Angus Gale for assistance with the electron irradiation. The authors acknowledge financial support from the Australian Research Council (CE200100010, FT220100053) and the Office of Naval Research Global (N62909-22-1-2028). The UTS node of the ANFF is greatly acknowledged for access to nanofabrication tools.
## Methods
Flakes of pristine and carbon-doped hBN were mechanically exfoliated with scotch tape onto 285 nm thick SiO\({}_{2}\) on Si. Flakes were identified as appropriate for fabrication using optical contrast and were 50 \(\upmu\)m by 50 \(\upmu\)m in area and approximately 275 nm thick.
### Patterning
Samples were prepared for EBL by spin coating a positive resist: either CSAR (10 s at 900 rpm, then 50 s at 4500 rpm) or PMMA (10 s at 800 rpm, then 50 s at 3000 rpm). They were then baked for three minutes at 180 \({}^{\circ}\)C. Patterning was done with a RAITH EBL system in an FEG-SEM (Zeiss Supra 55 VP). The CSAR samples were patterned with an electron beam energy of 30 kV, a beam current of 20 pA and a fluence of 110 \(\upmu\)C/cm\({}^{2}\). The PMMA samples were patterned with an electron beam energy of 30 kV, a beam current of 40 pA and a fluence of 200 \(\upmu\)C/cm\({}^{2}\). For a robust fabrication, a pattern was designed for an array of devices scaled in steps of 2%, so that the Bragg coefficients for each column ranged from 240.64 nm in the first (left) column to 256 nm in the fourth and 266.24 nm in the sixth (right). In addition, each row was exposed with a different electron fluence during EBL. This was centered around 110 \(\upmu\)C/cm\({}^{2}\) and ranged from 104.5 \(\upmu\)C/cm\({}^{2}\) in the bottom row to 115.5 \(\upmu\)C/cm\({}^{2}\) in the top row for the CSAR samples.
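As an illustration of this design-of-experiments sweep, the short sketch below generates the per-column scale factors and per-row fluences implied by the numbers above; the row count is an assumption, since it is not stated explicitly.

```python
import numpy as np

BASE_BRAGG_NM = 256.0   # nominal Bragg coefficient (fourth column)
BASE_FLUENCE = 110.0    # nominal EBL fluence in uC/cm^2
N_COLS, N_ROWS = 6, 6   # N_ROWS is an assumption

# Columns: device geometry scaled in 2% steps (0.94 ... 1.04 of the nominal design).
scale_factors = 1.0 + 0.02 * (np.arange(N_COLS) - 3)
bragg_coeffs = BASE_BRAGG_NM * scale_factors   # 240.64 ... 256.0 ... 266.24 nm

# Rows: exposure fluence varied by +/-5% around the nominal value (104.5 ... 115.5).
fluences = np.linspace(0.95 * BASE_FLUENCE, 1.05 * BASE_FLUENCE, N_ROWS)

for f in fluences:
    for b in bragg_coeffs:
        print(f"fluence {f:6.1f} uC/cm^2, Bragg coefficient {b:7.2f} nm")
```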
### Etching
After development of the EBL resist, the patterns were transferred into the hBN using ion beam etching, chemically assisted ion beam etching and reactive ion etching. IBE and CAIBE were conducted in an Intlvac Nanoquest I with a PMMA mask. IBE was conducted for 5 min, with an argon beam energy of 400 V. CAIBE was conducted for 10 min with an argon beam energy of 300 V and a flow of 5 sccm of SF\({}_{6}\) at the sample surface. Chlorine- and fluorine-based RIE were conducted in separate TRION ICP Plasma Chambers. Both RIE samples used CSAR EBL resist as a hard mask and were oxygen plasma cleaned for 5 s before etching. The chlorine sample was etched for 120 s at 5 mT with 5 W ICP and 100 W RIE power, 10 sccm of argon and 2 sccm of chlorine. The fluorine sample was etched for 37 s at 5 mT with 1 W ICP and 300 W RIE power, 60 sccm of argon and 1 sccm of SF\({}_{6}\). The CSAR resist was then chemically removed.
### Defect Creation
Samples were spot irradiated in a FEI DB235 Dual Beam FIB/SEM with a 5 kV electron beam at 1.6 nA to create B-center emitters at multiple locations on the fabricated flake.
### Characterisation
Samples were first optically characterized at room temperature on a lab-built scanning confocal microscope with a 405 nm continuous-wave (CW) laser (PiL040X, A.L.S. GmbH) and an XYZ piezostage (NanoCube P-611.3). The sample was excited and emission was collected through a 0.9 NA Nikon objective. The collected emission was filtered through a 405 nm long-pass dichroic mirror and a 460 nm band-pass filter. It was then coupled into a 50:50 fiber splitter and recorded by avalanche photodiode single-photon detectors (Excelitas Technologies) and a spectrometer (Princeton Instruments, Inc.). Cryogenic characterization was performed in a closed-loop cryostat (Attocube) using a 0.82 NA objective (Attocube LT-APO/VIS/0.82).
### Photonic Simulation
The 3D finite-difference time-domain (FDTD) method is used. The cavity is made of an hBN layer with a thickness of 200 nm on a SiO\({}_{2}\) substrate. In this setup, starting from the central hBN disk with a diameter of 512 nm, each successive ring has a width and spacing of 256 nm and 97 nm, respectively. The simulation uses a refractive index of 2.2 for the in-plane and 1.9 for the out-of-plane direction.
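As an illustration of the simulated geometry, the sketch below computes the inner and outer radius of each ring from the design parameters quoted above, assuming the spacing is the etched gap between consecutive rings; the number of rings is an assumption.

```python
def cbg_ring_radii(disk_diameter_nm=512.0, ring_width_nm=256.0,
                   ring_spacing_nm=97.0, num_rings=5):
    """Return (inner, outer) radii in nm for each ring of the circular Bragg grating."""
    rings = []
    inner = disk_diameter_nm / 2 + ring_spacing_nm  # first etched gap after the central disk
    for _ in range(num_rings):
        outer = inner + ring_width_nm
        rings.append((inner, outer))
        inner = outer + ring_spacing_nm             # gap between successive rings
    return rings

for i, (r_in, r_out) in enumerate(cbg_ring_radii(), start=1):
    print(f"ring {i}: inner radius {r_in:.0f} nm, outer radius {r_out:.0f} nm")
```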
|
2310.10670 | Smart OMVI: Obfuscated Malware Variant Identification using a novel
dataset | Cybersecurity has become a significant issue in the digital era as a result
of the growth in everyday computer use. Cybercriminals now engage in more than
virus distribution and computer hacking. Cyberwarfare has developed as a result
because it has become a threat to a nation's survival. Malware analysis serves
as the first line of defence against an attack and is a significant component
of cybercrime. Every day, malware attacks target a large number of computer
users, businesses, and governmental agencies, causing billions of dollars in
losses. Malware may evade multiple AV software with a very minor, cunning tweak
made by its designers, despite the fact that security experts have a variety of
tools at their disposal to identify it. To address this challenge, a new
dataset called the Obfuscated Malware Dataset (OMD) has been developed. This
dataset comprises 40 distinct malware families having 21924 samples, and it
incorporates obfuscation techniques that mimic the strategies employed by
malware creators to make their malware variations different from the original
samples. The purpose of this dataset is to provide a more realistic and
representative environment for evaluating the effectiveness of malware analysis
techniques. Different conventional machine learning algorithms including but
not limited to Support Vector Machine (SVM), Random Forest (RF), Extreme
Gradient Boosting (XGBOOST) etc are applied and contrasted. The results
demonstrated that XGBoost outperformed the other algorithms, achieving an
accuracy of 82%, precision of 88%, recall of 80%, and an F1-Score of 83%. | Suleman Qamar | 2023-09-24T16:28:35Z | http://arxiv.org/abs/2310.10670v1 | # Smart OMVI: Obfuscated Malware Variant Identification using a novel dataset
###### Abstract
Cybersecurity has become a significant issue in the digital era as a result of the growth in everyday computer use. Cybercriminals now engage in more than virus distribution and computer hacking. Cyberwarfare has developed as a result, becoming a threat to a nation's survival. Malware analysis serves as the first line of defence against an attack and is a significant component of cybercrime. Every day, malware attacks target a large number of computer users, businesses, and governmental agencies, causing billions of dollars in losses. Malware may evade multiple AV software with a very minor, cunning tweak made by its designers, despite the fact that security experts have a variety of tools at their disposal to identify it. To address this challenge, a new dataset called the Obfuscated Malware Dataset (OMD) has been developed. This dataset comprises 40 distinct malware families having 21924 samples, and it incorporates obfuscation techniques that mimic the strategies employed by malware creators to make their malware variations different from the original samples. The purpose of this dataset is to provide a more realistic and representative environment for evaluating the effectiveness of malware analysis techniques. Different conventional machine learning algorithms, including but not limited to Support Vector Machine (SVM), Random Forest (RF) and Extreme Gradient Boosting (XGBoost), are applied and contrasted. The results demonstrated that XGBoost outperformed the other algorithms, achieving an accuracy of 82%, precision of 88%, recall of 80%, and an F1-Score of 83%.
Keywords: Antivirus, OMD, Malware Obfuscation, Identification, Variants, Malware classification
## 1 Introduction
Malicious software, shortened to malware, is software made with the intention of breaking into and causing harm to computers without the user's knowledge. The word "malware" refers to a broad category of destructive software; some of the most popular varieties are listed in Table 1. Malware may come in a variety of shapes and sizes. Desktops, servers, mobile phones, printers, and programmable electrical circuits are just a few possible deployment platforms. Sophisticated attacks have proven that data may be stolen using well-written malware that exists only in system memory and leaves no trace in the form of permanent data. Information security safeguards like desktop firewalls and anti-virus software have been reported to be disabled by malware. Some malware is even capable of compromising audit, authentication, and authorisation processes.
Startup files may be configured to preserve persistence even when a compromised machine is rebooted. When run, advanced malware may duplicate itself or remain dormant until called upon by its command features to extract data or delete files. Four operational characteristics often serve to characterise a particular piece of malware:
1. Propagation: The method through which malware spreads across several systems.
2. Infection: The malware's method of installation and its capacity to withstand cleanup efforts after being set up.
3. Self-Defense: The technique utilised to obfuscate its existence and thwart examination.
4. Capabilities: Functions accessible to the malware operator.
These might also be referred to as anti-reversing capabilities.
### Different Families of Malware
Malware is categorized into different families on the basis of its behavior and the type of damage done to the host machine.
### Criminal Steps of Malware
Malware is diverse in kind and activity, targeting different types of information in order to cause problems for the user. A typical attack proceeds through the following steps:
1. Intelligence gathering: A criminal searches the target for weak spots in order to prepare an assault.
2. Preparation: A criminal develops, tweaks, or somehow acquires malware to meet the demands of an attack.
3. Distribution: The spread of malware takes place.
4. Compromise: Malware infects the system.
5. Demand: The power of malware is released.
6. Execution: Malware transfers data to the malware operator, a process known as exfiltration, i.e., moving data out of an information system without any form of consent, thereby achieving the attack's goal.
Modern malware remediation is getting harder for a variety of reasons. Malware that exploits zero-day vulnerabilities Kumar and Subbiah (2022); Burros et al. (2022) comes in a much wider range of forms. A vulnerability is described as "zero-day" if potential victims are unaware of it, in which case they have no time to prepare. Malware is also capable of taking on several forms thanks to polymorphic design: polymorphic malware alters some aspect of itself with each infection, for instance by inserting code that is never executed. This evades signature-based detection Botacin et al. (2022), since such methods frequently derive a distinct signature from a malicious file using a hash algorithm, meaning that any modification to the file alters its signature. Moreover, because polymorphic Selamat and Ali (2022) malware can alter its own filename upon infection, standard signature-based detection is further hampered. To address this problem, a new dataset called the OMD is presented. The following are the overall contributions of this work:
1. A large malware dataset named the Obfuscated Malware Dataset (OMD) is generated by collecting malware data from different sources, combining it with two other datasets, namely Malimg and Kaggle's Microsoft Malware Classification Challenge (BIG 2015) dataset, and applying various obfuscation techniques, resulting in 40 different malware families. The final dataset contains 40 classes and 21924 samples.
2. In contrast to other datasets, all samples in OMD are obfuscated using different techniques, resulting in a dataset that can mimic new or polymorphic malware. Traditional machine learning techniques are applied to the dataset and the results are compared and contrasted.
The rest of the paper is organized as follows. The next section reviews related work in the field of malware classification. Section 3 explains the dataset generation and classification methodology, Section 4 defines the performance metrics, and Section 5 describes the machine learning classifiers. Section 6 presents the experimental environment, Section 7 presents the result analysis and discussion, and Section 8 concludes the paper.
\begin{table}
\begin{tabular}{|c|c|} \hline
**Abbreviations** & **Full Form** \\ \hline OMD & Obfuscated Malware Dataset \\ \hline RNN & Recurrent Neural Network \\ \hline PUA & Potentially Unwanted Applications \\ \hline CCV & Card Code Verification \\ \hline ASM & Assembly language \\ \hline NOP & No Operation \\ \hline \end{tabular}
\end{table}
Table 2: List of Abbreviations
## 2 Related Work
Any software that is intentionally designed to disrupt a computer, server, client, or computer network, leak sensitive information, gain unauthorised access to data or systems, deny access to information, or otherwise compromise user privacy and security is known as malware, a portmanteau of the words malicious and software Brewer (2016), Tahir (2018).
The underground market for stolen data is the chief driver behind malware. In several forums, data hackers may resell their loot Menn (2010). Table 3 Geer and Conway (2009) contains examples of prices paid for various categories of stolen data. The money demanded for the stolen goods indicated in Table 3 drove the development of secondary malware marketplaces, which result in software tools that make malware more and more successful at facilitating information theft. In general, people utilise software to automate laborious and resource-intensive jobs, and malware authors are no different. Automating the transmission of malware and the data collection process lowers operating expenses while enabling criminals to conceal their operations. Systems for the distribution and operation of malware have become more and more modular. Crimeware Salloum et al. (2022), Wang et al. (2021) is a term used to describe software that provides such malware support systems. The Zeus toolkit Grammatikakis et al. (2021) is a good illustration of crimeware. The Zeus virus first appeared in 2006, and the associated crimeware in 2007. Zeus' crimeware takes advantage of its modular nature, so attackers may modify and deploy new capabilities relatively rapidly. Using an intuitive graphical interface, an attacker may choose the features to be included in a "release" and a unique encryption key for the data that has been captured. Zeus crimeware has been used to produce more than 5,000 different versions of the Zeus software. Although numerous Zeus users have been identified and prosecuted for cybercrimes, the Zeus crimeware writers remain at large Kazi et al. (2022), Rose et al. (2022).
### Detection and Classification
For the purpose of classifying and detecting malware, both static and dynamic analysis approaches have been widely used. The development of machine learning has created a wide range of possibilities for analysis and prediction with both approaches. Visual, image-based malware categorization is a relatively new development in the field of malware analysis. Textural features of malware images were first exploited in 2011 by Nataraj et al. (2011). These images are generated by translating the byte code of a portable executable (PE) binary file into grey-level pixel values. They used wavelet decomposition to obtain textural characteristics from the malware image, and then applied the K-nearest neighbour algorithm to these features.
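The byte-to-image conversion underlying this line of work is straightforward. Below is a minimal sketch (not the authors' exact pipeline) of how a binary can be rendered as a grayscale image with NumPy and Pillow; the width-selection heuristic and the file names are illustrative assumptions.

```python
import numpy as np
from PIL import Image

def binary_to_grayscale(binary_path, image_path):
    """Render the raw bytes of a file as an 8-bit grayscale image."""
    data = np.fromfile(binary_path, dtype=np.uint8)  # one byte -> one pixel in [0, 255]
    # Width chosen as a function of file size (assumed heuristic, following common practice).
    if data.size < 10 * 1024:
        width = 32
    elif data.size < 100 * 1024:
        width = 64
    elif data.size < 1024 * 1024:
        width = 256
    else:
        width = 512
    height = data.size // width                      # drop the incomplete last row
    img = Image.fromarray(data[: width * height].reshape(height, width), mode="L")
    img.save(image_path)

# binary_to_grayscale("sample.exe", "sample.png")   # hypothetical file names
```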
\begin{table}
\begin{tabular}{|l|l|} \hline
**Theft Enabling Commodity** & **Price** \\ \hline Keystroke logger & \$25 on average \\ \hline Botnets & \$100 to \$200 per 1,000 infections, depending on location \\ \hline Spamming email service & \$0.01 per 1,000 emails, reliability of more than 85\% delivered \\ \hline Shop admins (Credit Card databases) & \$100 to \$300 \\ \hline Credit Card numbers without CCV2 & \$1 to \$3 \\ \hline Credit Card numbers with CCV2 & \$1.50 to \$10.00, depending on the country \\ \hline Socks accounts & \$5 to \$40/month \\ \hline Sniffer dumps & \$50 to \$100/month \\ \hline Western Union exploits & \$300 to \$1,000 \\ \hline Remote desktops & \$5 to \$8 \\ \hline Scam letters & \$3 to \$5 \\ \hline \end{tabular}
\end{table}
Table 4: Cost of Malware and Crimeware
\begin{table}
\begin{tabular}{|l|l|} \hline
**Data type** & **Price** \\ \hline CCV & \$3.25 \\ \hline OS administrative login & \$2.50 \\ \hline FTP exploit & \$6.00 \\ \hline Full identity information & \$5.00 \\ \hline Rich bank account credentials & \$750.00 \\ \hline US passport information & \$800.00 \\ \hline Router credentials & \$12.50 \\ \hline \end{tabular}
\end{table}
Table 3: Stolen data prices
Gibert et al. (2019) suggested a simple design for a convolutional neural network made up of three convolutional blocks, one fully-connected block, and one output layer. ReLU activation, max-pooling, normalization, and a convolution operation made up each convolution block. The convolutional layers served as detection filters for certain features or patterns in the input, while the following fully-connected layers combined the learnt information to produce a particular target output. The effectiveness of their method was tested on the Microsoft Malware Classification Challenge Ronen et al. (2018) versus manually created feature extractors Kancherla and Mukkamala (2013); Ahmadi et al. (2016), and the findings show that deep learning architectures perform better at identifying malware represented as grayscale images. Similar to this, Rezende et al. (2017) performed classification on the MalImg Nataraj et al. (2011) dataset using the ResNet-50 architecture with pretrained weights.
### Notable Datasets
Malware datasets with coarse family labels are shown in Table 5. The MalImg, VX Heaven Qiao et al. (2016); Kaggle, and MalDozer Karbab et al. (2018) datasets' collecting periods are unrecorded; publishing dates are taken as an upper limit for the period's conclusion. Drebin's Arp et al. (2014) labels appear to have been combined from those of 10 other antivirus programs, while the precise labelling process is unknown. The Microsoft Security Essentials program was used to label the MalImg dataset. The VX Heaven website was active from 1999 to 2012, and the malware in the collection is thought to be extremely old. The Kaspersky antivirus software was used to label the VX Heaven dataset. MalDozer's labelling strategy was not made public; however, family names imply that one antivirus was used. Family labels are not present in the initial EMBER Anderson and Roth (2018) dataset, but an extra 1,000,000 files--both harmful and benign--were made available in 2018. AVClass Sebastian et al. (2016) labels indicate that 485,000 of these files are malware samples. The Malpedia Blohmann et al. (2017) collection includes labels that were received from open-source reporting, and some malware samples were dumped and unpacked using human analysis. Other family designations, however, were generated automatically using tools like YARA rules and comparisons of unpacked files to known malware samples.
The bulk of files in the Malsign Kotzias et al. (2015) collection are not malicious programs but rather PUAs. Malsign reference labels were created by clustering characteristics that had been statically extracted. MaLabel has 115,157 samples, of which 46,157 are part of 11 major families and the rest 69,000 are a part of families with fewer than 1,000 samples. The dataset contains an unknown number of families in total. Microsoft provided a collection of 1.3 million malware samples, labelled using a combination of antivirus labelling and manual labelling, to the developers of the MtNet Huang and Stokes (2016) malware classifier.
Although there are some datasets that have a few obfuscated malware samples, no dataset is purely focused on obfuscated malware classification. Using malware reference datasets with these properties may yield evaluation results that are biased or incorrect for newer malware. There are not many prominent datasets that contain malware targeting other operating systems (including Linux, macOS, and iOS), but such research is outside the purview of this article.
## 3 Methodology
### Dataset Generation
A dataset is one of the most important ingredients in machine and deep learning. An appropriate dataset that mimics the polymorphism of modern malware families was required; hence, the dataset named OMD shown in Fig. 4 is created from the three smaller datasets given in Table 6. The first dataset is the Malimg dataset, presented in Table 7, which comprises 9339 malware samples classified into 25 malware families. This dataset contains grey-scale images formed from malware binaries: a 2D matrix is generated from each malware binary and then represented as a grey-scale image. The second dataset is the Kaggle Malware Classification Challenge 2015 dataset, presented in Table 8, containing 10868 malware samples from 9 different malware families. This dataset does not contain malware samples in the form of grey-scale images; instead, it provides ASM and byte files for each sample. The ASM files were taken, different obfuscations were applied, and the files were then converted to grey-scale images in order to add them to the dataset. The third dataset was generated from malware samples collected from different sources and manually labelled with the help of VirusTotal. Similar obfuscation techniques were also applied to this dataset. It is named the Tiny Obfuscated Malware Dataset (TinyOMD), shown in Figure 1, and has 489 samples representing the 6 classes given in Table 9. To conclude, the three datasets, namely Malimg, the Kaggle Malware Classification Challenge 2015 dataset, and TinyOMD, amount to a total of 20696 samples across 40 classes (Table 6). The next step is obfuscation; multiple techniques are applied for data obfuscation, as explained in the obfuscation section.
### Obfuscation
Obfuscation is applied with the help of two obfuscation blocks, each containing six different obfuscation techniques. Furthermore, encryption is applied to 20% of the malware samples chosen at random, and these are combined with the remaining 80% to form the Obfuscated Malware Dataset (OMD).
#### 3.2.1 Obfuscation Block-I
The six obfuscation techniques applied in the first block are dead-code insertion, subroutine reordering, register reassignment, XOR operation, instruction substitution, and code transposition. This block is applied to the ASM sample files in Kaggle's Microsoft Classification 2015 Dataset and the newly created Tiny Obfuscated Malware Dataset (TinyOMD). All of these techniques change the malware source code, thereby changing its signature without having any impact on its functionality. Dead-code insertion, also known as NOP insertion, adds instructions that do not change the functionality of the malware but are used to change its signature; it is commonly known as dead obfuscation. Code transposition uses jump instructions, which transfer the program sequence to the memory address given in the operand depending on the specified flag; this results in complex and difficult-to-understand code. In register reassignment, extra lines of code are added, again resulting in a change of the malware signature.
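As an illustration of why these source-level transformations defeat hash-based signatures, the following sketch (an assumption about tooling, not the authors' implementation) inserts NOP instructions at random positions of an ASM listing and shows that the file's MD5 signature changes while the instruction semantics do not.

```python
import hashlib
import random

def insert_nops(asm_text, rate=0.05, seed=0):
    """Insert 'nop' lines (dead code) into an assembly listing at a given rate."""
    rng = random.Random(seed)
    lines = []
    for line in asm_text.splitlines():
        lines.append(line)
        if rng.random() < rate:
            lines.append("\tnop")      # no functional effect, but the byte content changes
    return "\n".join(lines)

asm = open("sample.asm").read()         # hypothetical ASM file from the Kaggle dataset
obfuscated = insert_nops(asm)
print(hashlib.md5(asm.encode()).hexdigest())         # original signature
print(hashlib.md5(obfuscated.encode()).hexdigest())  # different signature after obfuscation
```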
#### 3.2.2 Obfuscation Block-II
Six different obfuscation techniques were also applied in the second obfuscation block, namely masking, in-painting, blurring, warping, scrambling, and tokenization. This block is applied to the image sample files in all three sub-datasets, i.e., the Malimg dataset, Kaggle's Microsoft Classification 2015 Dataset, and the newly created Tiny Obfuscated Malware Dataset (TinyOMD). These techniques make malware detection even more difficult. Furthermore, around 10% of the malware samples were taken at random from the overall dataset, encrypted, and added back to the dataset.
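A rough sketch of three of the Block-II transformations (masking, blurring, and block scrambling) applied to a grayscale malware image is given below; the parameter values and the use of SciPy for the blur are illustrative assumptions rather than the exact settings used for OMD.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def mask_pixels(img, frac=0.1, seed=0):
    """Zero out a random fraction of pixels (masking)."""
    rng = np.random.default_rng(seed)
    out = img.copy()
    out[rng.random(img.shape) < frac] = 0
    return out

def blur(img, sigma=1.5):
    """Gaussian blurring of the image."""
    return gaussian_filter(img, sigma=sigma)

def scramble(img, block=16, seed=0):
    """Split the image into square blocks and permute them (scrambling)."""
    rng = np.random.default_rng(seed)
    h, w = (img.shape[0] // block) * block, (img.shape[1] // block) * block
    blocks = (img[:h, :w]
              .reshape(h // block, block, w // block, block)
              .swapaxes(1, 2)
              .reshape(-1, block, block))
    rng.shuffle(blocks)                 # permute blocks along the first axis
    return (blocks.reshape(h // block, w // block, block, block)
                  .swapaxes(1, 2)
                  .reshape(h, w))
```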
\begin{table}
\begin{tabular}{|c|c|c|} \hline
**Dataset Name** & \begin{tabular}{c} **Number of** \\ **Families** \\ \end{tabular} &
\begin{tabular}{c} **Number of** \\ **Samples** \\ \end{tabular} \\ \hline Malimg & 25 & 9339 \\ \hline Kaggle’s Microsoft Malware & 9 & 10868 \\ Classification Challenge (BIG 2015) & & & \\ \hline TinyOMD & 6 & 489 \\ \hline Total & 40 & 21924 \\ \hline \end{tabular}
\end{table}
Table 6: Three datasets used to create OMD
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|} \hline Name & Year & Samples & Family & \begin{tabular}{c} Operating \\ System \\ \end{tabular} &
\begin{tabular}{c} Labelling \\ Methodology \\ \end{tabular} & Period of collection \\ \hline MOTIF & 2022 & 3,095 & 454 & Windows & Threat Reports & Jan. 2016 - Jan. 2021 \\ \hline MalImg & 2011 & 9,458 & 25 & Windows & Single AV & July 2011 or earlier \\ \hline Kaggle & 2018 & 10,868 & 9 & Windows & Susp. Single AV & Feb. 2015 or earlier \\ \hline AMD & 2017 & 24,553 & 71 & Android & Cluster Labeling & 2010 - 2016 \\ \hline MalDozer & 2018 & 20,089 & 32 & Android & Susp. Single AV & Mar. 2018 or earlier \\ \hline EMBER & 2018 & 485,000 & 3,226 & Windows & AVClass & 2018 \\ \hline MalGenome & 2015 & 1,260 & 49 & Android & Threat Reports & Aug. 2010 - Oct. 2011 \\ \hline Variant & 2015 & 85 & 8 & Windows & Threat Reports & Jan. 2014 \\ \hline Malheur Rieck & 2006 & 3,133 & 24 & Windows & AV Majority Vote & 2006 - 2009 \\ \hline Drebin & 2010 & 5,560 & 179 & Android & AV-based & Aug. 2010 - Oct. 2012 \\ \hline VX Heaven & 2016 & 271,092 & 137 & Windows & Single AV & 2012 \\ \hline Malicia & 2012 & 11,363 & 55 & Windows & Cluster Labeling & Mar. 2012 - Mar. 2013 \\ \hline Malpedia & 2017 & 5,862 & 2,165 & Both & Hybrid & 2017- ongoing \\ \hline Malsign & 2015 & 142,513 & Unknown & Windows & Cluster labeling & 2012- 2014 \\ \hline MalLabel & 2015 & 115,157 & \textgreater{}80 & Windows & AV Majority Vote & Apr. 2015 \\ \hline MtNet & 2016 & 1,300,000 & 98 & Windows & Hybrid & Jun. 2016 \\ \hline \end{tabular}
\end{table}
Table 5: Notable Datasets
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline S.No. & Family & Family Name &
\begin{tabular}{c} Number of \\ samples \\ \end{tabular} \\ \hline \(1\) & Dailer & Adiar.C & 122 \\ \hline \(2\) & Backdoor & Agent.FYI & 116 \\ \hline \(3\) & Worm & Allaple.A & 2949 \\ \hline \(4\) & Worm & Allaple.L & 1591 \\ \hline \(5\) & Trojan & Alueron.gen!J & 198 \\ \hline \(6\) & Worm:AutoIT & Autourn.K & 106 \\ \hline \(7\) & Trojan & C2lop.gen!G & 200 \\ \hline \(8\) & Trojan & C2lop.p & 146 \\ \hline \(9\) & Dailer & Diaplaform.B & 177 \\ \hline \(10\) & TrojanDownloader & Dontovo.A & 162 \\ \hline \(11\) & Rogue & Fakerean & 381 \\ \hline \(12\) & Dailer & Instantaccess & 431 \\ \hline \(13\) & PWS & Lolyda.AA1 & 213 \\ \hline \(14\) & PWS & Lolyda.AA2 & 184 \\ \hline \(15\) & PWS & Lolyda.AA3 & 123 \\ \hline \(16\) & PWS & Lolyda.AT & 159 \\ \hline \(17\) & Trojan & Malex.gen!J & 136 \\ \hline \(18\) & TrojanDownloader & Obfuscated.AD & 142 \\ \hline \(19\) & Backdoor & Rbot!gen & 158 \\ \hline \(20\) & Trojan & Skintrim.N & 80 \\ \hline \(21\) & TrojanDownloader & Swizzor.gen!E & 128 \\ \hline \(22\) & TrojanDownloader & Swizzor.gen!I & 132 \\ \hline \(23\) & Worm & VB.AT & 408 \\ \hline \(24\) & TrojanDownloader & Wintrim.BX & 97 \\ \hline \(25\) & Worm & Yuner.A & 800 \\ \hline & **Total** & & **9339** \\ \hline \end{tabular}
\end{table}
Table 7: Malimg Dataset
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline S.No. & Family & Family Name & \begin{tabular}{c} Number of \\ samples \\ \end{tabular} \\ \hline \(1\) & Backdoor & Gatak & 1013 \\ \hline \(2\) & Backdoor & Kelihos\_ver1 & 398 \\ \hline \(3\) & Backdoor & Kelihos\_ver3 & 2493 \\ \hline \(4\) & Adware & Lollipop & 2476 \\ \hline \(5\) &
\begin{tabular}{c} Any Obfuscated \\ Malware \\ \end{tabular} & Obfuscator.ACY & 1228 \\ \hline \(6\) & Worm & Ramnit & 1541 \\ \hline \(7\) & Backdoor & Simda & 42 \\ \hline \(8\) & TrojanDownloader & Tracur & 751 \\ \hline \(9\) & Trojan & Vundo & 475 \\ \hline & **Total** & & **10868** \\ \hline \end{tabular}
\end{table}
Table 8: Kaggle's Microsoft Malware Classification Challenge (BIG 2015) Dataset
\begin{table}
\begin{tabular}{|c|c|c|} \hline
**Class** & **Number of Samples** \\ \hline Adware & 16 \\ \hline Backdoor & 8 \\ \hline Obfuscated & 76 \\ \hline Other & 177 \\ \hline Trojan & 180 \\ \hline Virus & 32 \\ \hline Total & 489 \\ \hline \end{tabular}
\end{table}
Table 9: Tiny Obfuscated Malware Dataset (TinyOMD)
Figure 1: TinyOMD Classes with percentage of samples.
Figure 3: Major malware families sample count in OMD
Figure 2: Percentage of each dataset samples in OMD
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline S.No. & Family & Family Name & \begin{tabular}{c} Number of \\ samples \\ \end{tabular} \\ \hline \(1\) & Dailer & Adialer.C & 122 \\ \hline \(2\) & Adware & Various & 16 \\ \hline \(3\) & Backdoor & Agent.FYI & 116 \\ \hline \(4\) & Worm & Allaple.A & 2949 \\ \hline \(5\) & Worm & Allaple.L & 1591 \\ \hline \(6\) & Trojan & Alueron.gen!J & 198 \\ \hline \(7\) & Worm:AutoIT & Autorn.K & 106 \\ \hline \(8\) & Backdoor & Various & 8 \\ \hline \(9\) & Trojan & C2lop.gen!G & 200 \\ \hline \(10\) & Trojan & C2lop.p & 146 \\ \hline \(11\) & Dailer & Diaplaform.B & 177 \\ \hline \(12\) & TrojanDownloader & Dontovo.A & 162 \\ \hline \(13\) & Rogue & Fakerean & 381 \\ \hline \(14\) & Backdoor & Gatak & 1013 \\ \hline \(15\) & Dailer & Instamaccess & 431 \\ \hline \(16\) & Backdoor & Kelihos\_ver1 & 398 \\ \hline \(17\) & Backdoor & Kelihos\_ver3 & 2493 \\ \hline \(18\) & Adware & Lollipop & 2476 \\ \hline \(19\) & PWS & Lolyda.AA1 & 213 \\ \hline \(20\) & PWS & Lolyda.AA2 & 184 \\ \hline \(21\) & PWS & Lolyda.AA3 & 123 \\ \hline \(22\) & PWS & Lolyda.AT & 159 \\ \hline \(23\) & Trojan & Male.gen!J & 136 \\ \hline \(24\) & Obfuscated & Various & 76 \\ \hline \(25\) & TrojanDownloader & Obfuscated.AD & 1370 \\ \hline \(26\) &
\begin{tabular}{c} Any Obfuscated \\ Malware \\ \end{tabular} & Obfuscator.ACY & 1679 \\ \hline \(27\) & Various & Various & 177 \\ \hline \(28\) & Worm & Ramnit & 1541 \\ \hline \(29\) & Backdoor & Rbot!gen & 158 \\ \hline \(30\) & Backdoor & Simda & 42 \\ \hline \(31\) & Trojan & Skitrin.N & 80 \\ \hline \(32\) & TrojanDownloader & Swizzor.gen!E & 128 \\ \hline \(33\) & TrojanDownloader & Swizzor.gen!I & 132 \\ \hline \(34\) & TrojanDownloader & Tracur & 751 \\ \hline \(35\) & Trojan & Various & 180 \\ \hline \(36\) & Worm & VB.AT & 408 \\ \hline \(37\) & Virus & Various & 32 \\ \hline \(38\) & Trojan & Vundo & 475 \\ \hline \(39\) & TrojanDownloader & Wintrim.BX & 97 \\ \hline \(40\) & Worm & Yuner.A & 800 \\ \hline & **Total** & & **21924** \\ \hline \end{tabular}
\end{table}
Table 10: Obfuscated Malware Dataset (OMD)
#### 3.2.3 Data augmentation
A lack of data causes deep learning models to overfit. To achieve effective generalisation and practical training, a large amount of data is therefore needed. Data augmentation is the process of increasing the number of data samples by transforming the underlying data Shorten and Khoshgoftaar (2019); Naveed et al. (2021). To enhance generality and make the suggested classification framework resistant to different types of malware data, the training dataset is expanded in this study, which makes the suggested categorisation system more useful for classifying malware families. As listed in Table 11, the adopted augmentation incorporates a number of transformations, including reflections, scaling, rotation, and shear. To increase the generality of the models, all training is performed after data augmentation.
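A minimal sketch of the augmentation in Table 11, expressed with Keras' ImageDataGenerator, is shown below; the choice of this particular API (rather than a custom pipeline) is an assumption, as are the variable names.

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

augmenter = ImageDataGenerator(
    rotation_range=360,       # rotation in [0, 360] degrees
    shear_range=0.1,          # shear in [-0.1, 0.1]
    zoom_range=(0.2, 1.0),    # scale in [0.2, 1]
    width_shift_range=0.2,    # width shift in [0, 0.2]
    height_shift_range=0.2,   # height shift in [0, 0.2]
    horizontal_flip=True,     # reflections
    vertical_flip=True,
)
# train_flow = augmenter.flow(x_train, y_train, batch_size=64)  # x_train, y_train assumed
```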
#### 3.2.4 Dataset Partitioning
In this study, the dataset was partitioned into training and testing sets using a 70-30 split. The literature commonly uses 80-20, 75-25, or 70-30 partitions; 70-30 was chosen here to make the evaluation more robust, given the dimensions of the dataset.
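The split itself can be reproduced with scikit-learn as sketched below; stratifying by family label is an assumption made to preserve class proportions in both partitions.

```python
from sklearn.model_selection import train_test_split

# X: image feature matrix, y: family labels (assumed to be prepared beforehand)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.30, stratify=y, random_state=42
)
```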
\begin{table}
\begin{tabular}{|c|c|c|} \hline S.No. & Augmentation type & Parameter \\ \hline \(1\) & Rotate & [0, 360] degrees \\ \hline \(2\) & Shear & [-0.1,0.1] \\ \hline \(3\) & Reflection & X: [-1, 1],Y: [-1, 1] \\ \hline \(4\) & Scale & [0.2, 1] \\ \hline \(5\) & Horizontal Flip & - \\ \hline \(6\) & Vertical Flip & - \\ \hline \(7\) & Width Shift & [0, 0.2] \\ \hline \(8\) & Height Shift & [0, 0.2] \\ \hline \end{tabular}
\end{table}
Table 11: Augmentation parameters
Figure 4: Dataset Generation Workflow
## 4 Performance Metrics
Prior to delving deeper into the performance measures, it is necessary to establish several fundamental classification units, namely TruePositive, TrueNegative, FalsePositive, and FalseNegative. These units are defined as follows.
1. TruePositive (TP): The sample belongs to the class under consideration and is also predicted as that class, i.e., the model predicts the correct class name out of the 40 class names.
2. TrueNegative (TN): In multi-class classification, the true negatives of a class are all samples that neither belong to nor are predicted as that class, i.e., everything outside that class's row and column of the confusion matrix.
3. FalsePositive (FP): The false positives of a class are the sum of the entries in the corresponding column of the confusion matrix excluding the true positives, i.e., samples of other classes predicted as this class.
4. FalseNegative (FN): Similarly, the false negatives of a class are the sum of the entries in the corresponding row excluding the true positives, i.e., samples of this class predicted as some other class (see the sketch below).
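These per-class units can be read directly off the multiclass confusion matrix; a small sketch (assuming the scikit-learn convention of rows = actual classes, columns = predicted classes) is given below.

```python
import numpy as np
from sklearn.metrics import confusion_matrix

cm = confusion_matrix(y_test, y_pred)   # 40 x 40 matrix for OMD; y_test, y_pred assumed
tp = np.diag(cm)                        # correctly classified samples per class
fp = cm.sum(axis=0) - tp                # column sums minus the diagonal
fn = cm.sum(axis=1) - tp                # row sums minus the diagonal
tn = cm.sum() - (tp + fp + fn)          # everything outside the class's row and column

recall = tp / (tp + fn)
precision = tp / (tp + fp)
specificity = tn / (tn + fp)
```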
### Recall
In multiclass classification, recall (also known as sensitivity or true positive rate), given in Eq. 1, is a metric that measures the proportion of true positive predictions for a given class out of all the actual positive instances in that class. It is defined as:
\[Recall=\frac{TruePositive}{TruePositive+FalseNegative} \tag{1}\]
where True positives are the number of correctly classified instances of a specific class, and False negatives are the number of instances that belong to that class but are incorrectly classified as belonging to a different class. In other words, recall in multiclass classification tells us how well the model is able to correctly identify all instances of a specific class, regardless of whether it misclassifies some instances from other classes as belonging to that class. A high recall value for a specific class indicates that the model is good at correctly identifying all instances of that class, while a low recall value indicates that the model is missing many instances of that class.
### Specificity
Specificity is the counterpart of recall for negative instances: it indicates how well the model correctly identifies negative labels. In simple words, specificity is the ratio of true negatives to total negatives; in our multi-class setting, the negatives of a class are the samples belonging to all other classes. Specificity is calculated using the formula given in Eq. 2.
\[Specificity=\frac{TrueNegative}{TrueNegative+FalsePositive} \tag{2}\]
### Precision
In multiclass classification, precision, given in Eq. 3, is a metric that measures the proportion of true positive predictions for a given class out of all the positive predictions made by the model for that class. It is defined as:
\[Precision=\frac{TruePositive}{TruePositive+FalsePositive} \tag{3}\]
where True positives are the number of correctly classified instances of a specific class, and False positives are the number of instances that are incorrectly classified as belonging to that class, when in fact they belong to a different class.
In other words, precision in multiclass classification tells us how well the model is able to correctly classify instances of a specific class, without misclassifying instances from other classes as belonging to that class. A high precision value for a specific class indicates that the model is good at correctly identifying instances of that class, while a low precision value indicates that the model is misclassifying many instances from other classes as belonging to that class.
### Accuracy
Accuracy measures how well the model performs in correctly predicting the labels overall. It is the ratio of correctly predicted samples to the total number of samples; Eq. 4 is used to calculate the accuracy of a model.
\[Accuracy=\frac{TruePositive+TrueNegative}{Totalnumberofsamples} \tag{4}\]
### F - Score
The F-score is the harmonic mean of precision and recall. This performance metric is particularly suitable for imbalanced datasets. The F-score can be calculated from the model's precision and recall as given in Eq. 5.
\[F-Score=2*\frac{precision*recall}{precision+recall} \tag{5}\]
Performance metrics are summarized in table 12.
## 5 Traditional Machine Learning Classifiers
### Decision Tree
Decision trees (DT) Charbuty and Abdulazeez (2021) are a non-parametric supervised learning approach. DTs are widely used for classification and regression problems. Multiple decision rules are constructed, inferred from the features of the data. They are limited by the fact that they can be very non-robust: a small change in the training data can result in a large change in the tree and consequently in the final predictions James et al. (2013). The problem of learning an optimal decision tree is known to be NP-complete under several aspects of optimality, even for simple concepts Laurent and Rivest (1976).
### Bagging
Bagging also known as Bootstrap Aggregating Lee et al. (2020) is a technique in machine learning that involves combining multiple models trained on different subsets of the training data to improve predictive performance. The bagging technique works by creating multiple bootstrap samples of the training data and training a different model on each sample. By averaging the predictions of all the individual models, bagging can reduce overfitting and improve the stability and accuracy of the final prediction. Bagging can be applied to various machine learning algorithms, such as decision trees, neural networks, and random forests. The main benefits of bagging are improved accuracy, stability, and robustness, making it a popular technique in ensemble learning. Bagging is not always effective with data that has a high degree of correlation or has a very small number of informative features and may not always improve the performance of certain machine learning algorithms, such as k-nearest neighbors, that are inherently stable.
\begin{table}
\begin{tabular}{|c|c|} \hline Name & Formula \\ \hline Accuracy & \(\frac{TruePositive+TrueNegative}{Totalnumborfsamples}\) \\ \hline Recall & \(\frac{TruePositive}{TruePositive+FalseNegative}\) \\ \hline Precesion & \(\frac{TruePositive}{TruePositive+FalsePositive}\) \\ \hline F1-Score & \(2*\frac{precision*recall}{precision+recall}\) \\ \hline \end{tabular}
\end{table}
Table 12: Summarized Performance Metrics
### Gradient Boosting
Gradient boosting Zhang et al. (2019); Bentejac et al. (2021) is a machine learning technique used in regression and classification tasks, among others. It gives a prediction model in the form of an ensemble of weak prediction models, which are typically decision trees. While boosting can increase the accuracy of a base learner, such as a decision tree or linear regression, it sacrifices intelligibility and interpretability. For example, following the path that a decision tree takes to make its decision is trivial and self-explanatory, but following the paths of hundreds or thousands of trees is much harder.
### AdaBoost
AdaBoost, short for Adaptive Boosting Shahraki et al. (2020); Wang and Sun (2021), is a statistical classification meta-algorithm. Every learning algorithm tends to suit some problem types better than others, and typically has many different parameters and configurations to adjust before it achieves optimal performance on a dataset. AdaBoost (with decision trees as the weak learners) is often referred to as the best out-of-the-box classifier Kegl (2013). When used with decision tree learning, information gathered at each stage of the AdaBoost algorithm about the relative 'hardness' of each training sample is fed into the tree growing algorithm such that later trees tend to focus on harder-to-classify examples. AdaBoost is particularly prone to overfitting on noisy datasets.
### Support Vector Machine (SVM)
Support Vector Machine Kurani et al. (2023); Vos et al. (2022); Koklu et al. (2022) is a supervised learning method used for problems such as assigning samples to different classes (classification), predicting continuous values (regression), and detecting outliers. SVM is employed here because it is very effective in high-dimensional spaces, and our dataset has a high-dimensional feature space. Another reason for using SVM is that it is memory efficient, as it uses only a subset of the training samples in the decision function. When devising the SVM approach, there were three primary parameters of concern: the kernel function, gamma, and C. Gamma dictates how much influence a single training sample has. A lower value of C produces a smoother decision surface, which means that a certain percentage of misclassification is allowed; if C is set to a higher value, the SVM aims to classify all training samples correctly and the percentage of misclassification is reduced. The different values evaluated are given in Table 13.
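A minimal sketch of evaluating the SVM variations of Table 13 is shown below; the use of GridSearchCV with 5-fold cross-validation and macro-averaged F1 as the selection score is an assumption, since the paper only lists the parameter grid.

```python
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

param_grid = {
    "kernel": ["linear", "rbf"],   # linear and radial basis function kernels
    "C": [1, 5, 10],
    "gamma": [0.1],
}
svm_search = GridSearchCV(SVC(), param_grid, cv=5, scoring="f1_macro")
svm_search.fit(X_train, y_train)
print(svm_search.best_params_, svm_search.best_score_)
```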
### Random Forest (RF)
When multiple decision trees are combined and used as an ensemble to improve accuracy, the resulting architecture is known as a Random Forest Balyan et al. (2022); Wang et al. (2023). A key parameter of a Random Forest is \(n\_estimators\), which indicates the number of trees in the forest. Another vital parameter is the \(max\_depth\) of the trees, which limits the number of splits that can be performed per tree. Table 14 gives the parameter values applied when training the RF.
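The Random Forest variations of Table 14 can be explored in the same way; the grid below is a sketch, with the cross-validation setup again assumed.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

rf_grid = {"n_estimators": [100, 200], "max_depth": [10, 20, 30, 40]}
rf_search = GridSearchCV(RandomForestClassifier(random_state=0), rf_grid,
                         cv=5, scoring="f1_macro")
rf_search.fit(X_train, y_train)
```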
### XGBoost
XGBoost Velarde et al. (2023) (eXtreme Gradient Boosting) is an open-source software library which provides a regularized gradient boosting framework. Salient features of XGBoost which distinguish it from other gradient boosting algorithms are the clever penalization of trees, proportional shrinking of leaf nodes, Newton boosting, an extra randomization parameter, implementations for single and distributed systems with out-of-core computation, and automatic feature selection. It is known for its superior performance in various machine learning tasks Zhang et al. (2018); Chen et al. (2019); Jiang et al. (2019), including malware analysis, for the following reasons: it effectively handles complex relationships
\begin{table}
\begin{tabular}{|c|c|c|} \hline
**Kernel** & **C-Parameter** & **Gamma** \\ \hline & 1 & 0.1 \\
**Linear** & 5 & 0.1 \\ & 10 & 0.1 \\ \hline
**Radial** & 1 & 0.1 \\
**Basis** & 5 & 0.1 \\
**Function** & 10 & 0.1 \\ \hline \end{tabular}
\end{table}
Table 13: SVM model variations
and captures intricate patterns within the data, which is essential for accurately detecting and classifying obfuscated malware. XGBoost employs regularization techniques to prevent overfitting and build robust models that generalize well to unseen malware samples. Built on the gradient boosting framework, it learns from previous models' mistakes and leverages the strengths of multiple decision trees to achieve high accuracy. XGBoost includes strategies to handle class imbalance, ensuring effective learning from imbalanced data. Additionally, it offers computational efficiency, scalability, and the ability to handle large-scale analysis efficiently, crucial for processing extensive malware datasets.
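A minimal sketch of training an XGBoost classifier on the 40-class problem follows; the hyperparameter values are illustrative assumptions rather than settings reported in this paper.

```python
from xgboost import XGBClassifier

xgb = XGBClassifier(n_estimators=300, max_depth=6, learning_rate=0.1)
xgb.fit(X_train, y_train)
y_pred = xgb.predict(X_test)
```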
### Voting
A voting ensemble Hussain et al. (2023), Zhang et al. (2023), Sevim et al. (2023), Mohammadifar et al. (2023) is a machine learning technique that combines multiple models trained on the same dataset to improve the overall predictive power of the system. In a voting ensemble, each model is given an equal vote, and the final prediction is based on the majority vote of all the models. By combining multiple models, a voting ensemble can often achieve higher accuracy than any individual model, but this adds complexity and makes it more computationally expensive than individual models, since several models must be trained and combined. It can reduce the risk of overfitting, as it combines models with different biases and strengths, which helps to reduce the variance of the final predictions, and it is often more robust than individual models because it can handle missing or noisy data more effectively. However, it may not be effective if the individual models are too similar, as this can lead to over-reliance on certain features or biases, and, most importantly, it requires training multiple models, which can be time-consuming and resource-intensive.
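The voting ensemble reported in Table 17 combines an SVM with logistic regression; a sketch is given below. Whether hard or soft voting was used is not stated, so soft voting (which requires probability estimates from the SVM) is an assumption, as are the SVM hyperparameters.

```python
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

vote = VotingClassifier(
    estimators=[("svm", SVC(kernel="rbf", C=10, gamma=0.1, probability=True)),
                ("lr", LogisticRegression(max_iter=1000))],
    voting="soft",
)
vote.fit(X_train, y_train)
```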
## 6 Experimental Environment
Hardware and software resources employed during the experiments are given in Tables 15 and 16, respectively.
\begin{table}
\begin{tabular}{|c|c|c|} \hline Name of software & Source & Description \\ \hline Python3.9.15 & www.python.org & Platform independent programming language (open source) \\ \hline TensorFlow2.10.0 & www.tensorflow.org/ & End-to-end learning framework for deploying machine learning models (open source) \\ \hline Pytorch1.13.1 & www.pytorch.org & Large-scale deep machine learning library (open source) \\ \hline Scikit-learn & www.scikit-learn.org/stable/ & Simple open-source efficient predictive data analysis tool \\ \hline Microsoft Windows 11 & www.microsoft.com/e- & The most recent major version of Microsoft’s Windows NT operating system \\ \hline \end{tabular}
\end{table}
Table 16: Software Resources
\begin{table}
\begin{tabular}{|c|c|} \hline
**n-estimators** & **Depth of trees** \\ \hline
**100** & 10 \\
**100** & 20 \\ & 30 \\ & 40 \\ \hline
**200** & 10 \\ \hline \end{tabular}
\end{table}
Table 14: RF model variations
\begin{table}
\begin{tabular}{|c|c|} \hline Name of hardware & Specification \\ \hline Intel(R) Core(TM) i7-8700 & CPU @ 3.20GHz \\ \hline RAM & 32GB \\ \hline \end{tabular}
\end{table}
Table 15: Hardware Resources
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline Classifier(s) & Precision & Recall & F-1 Score & Accuracy \\ \hline Decision Tree & 74 & 76 & 72 & 75 \\ \hline Logistic Regression & 82 & 72 & 69 & 76 \\ \hline Random Forest & 91 & 77 & 79 & 82 \\ \hline XGBoost & 88 & 80 & 82 & 83 \\ \hline SVM & 83 & 71 & 76 & 73 \\ \hline AdaBoostClassifier & 88 & 76 & 76 & 80 \\ \hline Ensemble & \multirow{2}{*}{79} & \multirow{2}{*}{77} & \multirow{2}{*}{77} & \multirow{2}{*}{77} \\ (Voting:SVM+LR) & & & & \\ \hline \end{tabular}
\end{table}
Table 17: Results
Figure 5: Comparison between Decision Trees, AdaBoost, SVM, RF, Logistic Regression, XGBoost, and Ensemble of SVM and LR using Recall, Accuracy, Precision and F-Score, XGBoost and RF are best performing ones
Figure 6: XGBoost and RF perform best with RF edging it on precision while in other three metrics XGBoost performs better
## 7 Results and Discussion
Results demonstrate that XGBoost outperforms all other methods, achieving a precision, recall, F1-score, and accuracy of 88%, 80%, 82%, and 83%, respectively, as shown in Fig. 5. RF has the highest precision, at 91% (Fig. 6), but XGBoost performs better in the other three metrics and gives the better overall result.
## 8 Conclusion
With the rise in computer usage, cybersecurity has emerged as a crucial concern in the digital era. Cybercriminals have expanded their activities beyond traditional hacking and virus distribution. Daily malware attacks inflict significant financial losses by targeting computer users, businesses, and government agencies. Despite the availability of diverse security tools, malware can evade detection by making clever adjustments, causing significant challenges for security experts. To address this issue, the Obfuscated Malware Dataset (OMD) is introduced, consisting of 21,924 samples from 40 distinct malware families. The dataset incorporates various obfuscation techniques, simulating the tactics employed by malware authors to create new strains. Robust models utilizing traditional machine learning algorithms, such as SVM, RF, XGBoost, and others, are trained to effectively identify these evasive malware instances that are difficult to detect through conventional means.
|
2309.12104 | Cohomological Lagrangian field theory | This paper introduces a geometric framework for classical cohomological field
theories based on $G^{\star}$-algebras and gauge natural field theories. A
BV-BFV extension of the framework is provided, which incorporates the cotangent
lift of the Donaldson-Witten theory as an illustrative example. | Shuhan Jiang | 2023-09-21T14:21:29Z | http://arxiv.org/abs/2309.12104v1 | # Cohomological Lagrangian field theory
###### Abstract
This paper introduces a geometric framework for classical cohomological field theories based on \(G^{*}\)-algebras and gauge natural field theories. A BV-BFV extension of the framework is provided, which incorporates the cotangent lift of the Donaldson-Witten theory as an illustrative example.
###### Contents
* 1 Introduction
* 2 \(G^{*}\)-algebras and equivariant cohomology
* 2.1 \(G^{*}\)-algebras
* 2.2 Algebraic equivariant cohomology
* 2.2.1 Weil model
* 2.2.2 BRST model and Cartan model
* 2.3 BV extensions of \(G^{*}\)-algebras and gauge fixings
* 2.3.1 BRST and BV systems
* 2.3.2 Cotangent lift of a BRST system
* 3 Gauge natural field theories
* 3.1 Gauge natural bundle
* 3.2 Variational bicomplex
* 3.2.1 Variational bicomplex of a gauge natural bundle
* 3.2.2 Variational bicomplex of a graded gauge natural bundle
* 3.3 Lagrangian field theory
* 3.3.1 Noether theorem
* 4 Cohomological Lagrangian field theory
* 4.1 \(QKG^{*}\)-structures
* 4.2 Cohomological Lagrangian field theories and supersymmetries
* 4.2.1 Vector supersymmetries
* 4.2.2 Descendant sequences of preobservables
* 4.3 Cohomological Lagrangian gauge field theory
* 4.4 Examples
4.4.1 N=2 supersymmetric quantum mechanics * 4.4.2 Donaldson-Witten theory
* 5 CohLFTs in the extended BV-BFV formalism
* 5.1 Extended BV-BFV formalism in the variational bicomplex setting
* 5.1.1 BV Lagrangian field theory
* 5.1.2 Extended BV-BFV Lagrangian field theory
* 5.1.3 K-sequences
* 5.2 Cotangent lift of CohLGFTs
* 5.2.1 Cotangent lift of BRST theories
* 5.2.2 Cotangent lift of CohLGFTs
* 5.2.3 Cotangent lift of Donaldson-Witten theory
## 1 Introduction
Cohomological field theories over a Riemannian manifold \((M,g)\) are typically characterized by the following property [14, 15]:
\[\frac{\delta}{\delta g_{\mu\nu}}\left(\mathcal{O}\exp\left(-S\right)\right) \text{ is $Q$-exact}, \tag{1.1}\]
where \(Q\) is the scalar supersymmetry, \(S\) is the action, and \(\mathcal{O}\) is an observable of the theory. Since \(S\) and \(\mathcal{O}\) are \(Q\)-closed, (1.1) holds when both \(\frac{\delta}{\delta g_{\mu\nu}}\mathcal{O}\) and the Einstein-Hilbert energy-momentum tensor \(T^{\mu\nu}:=\frac{\delta}{\delta g_{\mu\nu}}S\) of the theory are \(Q\)-exact.
Let's restate everything in a more mathematically rigorous manner. A field theory over \(M\) should be specified by the space of sections \(\Gamma(Y)\) of a bundle \(Y\) over \(M\) and a local form \(\mathcal{L}\) of degree \((n,0)\) in the variational bicomplex \(\Omega_{loc}(M\times\Gamma(Y))\) of \(Y\), known as the Lagrangian of the theory [13, 12, 11]. The action \(S\) of the theory should then be defined as the integration \(\int_{M}\mathcal{L}\). For our purposes, it is essential to assume that \(Y\) is graded, and \(\Gamma(Y)\) is equipped with an evolutionary cohomological vector field \(Q\). The space of local forms then becomes a tricomplex \(\Omega_{loc}=\bigoplus_{r,s,t}\Omega_{loc}^{r,s,t}\), with the extra differential given by the Lie derivative \(\text{Lie}_{Q}\) along \(Q\). The condition \(S\) being \(Q\)-closed is equivalent to \(\text{Lie}_{Q}\mathcal{L}\) being \(d_{h}\)-exact, where \(d_{h}\) is the horizontal differential of \(\Omega_{loc}\). In fact, it is often the case that
\[H_{\text{Lie}_{Q}}^{p,0,\bullet}(\Omega_{loc}(M\times\Gamma(Y))/d_{h}\Omega_{ loc}(M\times\Gamma(Y)))\cong\Omega^{p}(M)/d\Omega^{p-1}(M) \tag{1.2}\]
for \(p=1,\cdots,n\), where \(d\) is the de Rham differential of the de Rham complex \(\Omega(M)\) of \(M\). In such cases, \(\mathcal{L}\) can be decomposed into two parts: \(\mathcal{L}=\mathcal{L}_{0}+\text{Lie}_{Q}\mathcal{V}\), where \(\mathcal{L}_{0}\) corresponds to the pullback of an \(n\)-form over \(M\). From a physical point of view, the decomposition tells us that \(Q\) and \(\mathcal{V}\) should be interpreted as a BRST operator and a gauge fixing fermion. In the context of cohomological field theories, the condition \(T^{\mu\nu}\) being \(Q\)-exact is equivalent to the condition \(\mathcal{L}_{0}\) being topological, meaning it does not depend on the Riemannian metric \(g\). Such \(\mathcal{L}_{0}\) usually possesses a very large symmetry group. It becomes necessary for \(\mathcal{V}\) to be dependent on \(g\) to gauge fix these
symmetries. This formulation represents the essence of the BRST approach to cohomological field theories [11, 12].
There exists a systematic way to construct cohomological vector fields \(Q\) such that (1.2) holds true [13]. The idea is to consider the shifted vertical bundle \(Y=V[1]Y^{\prime}\) of a graded bundle \(Y^{\prime}\) over \(M\). The vertical differential \(d_{v}\) of \(\Omega_{loc}(M\times\Gamma(Y^{\prime}))\) can then be viewed as a cohomological vector field \(Q\) over \(\Gamma(Y)\). When \(Y^{\prime}\) is affine, one can show that \(H^{p,0,\bullet}_{d_{v}}(\Omega_{loc}(M\times\Gamma(Y^{\prime}))/d_{h}\Omega_{loc}(M\times\Gamma(Y^{\prime})))\cong\Omega^{p}(M)/d\Omega^{p-1}(M)\) [14, Proposition A.1], which then implies (1.2). Such \(Q\) alone is not the right BRST operator because it does not encode any information about the symmetries of the theory. However, this problem can be overcome by "deforming" \(Q\) using the Mathai-Quillen map introduced in [15] to bridge the Weil and Cartan models of equivariant cohomology.
Let \(\mathfrak{X}_{Q}(\Gamma(Y))\) denote the ideal of the graded Lie superalgebra \(\mathfrak{X}(\Gamma(Y))\) of evolutionary vector fields over \(\Gamma(Y)\) generated by \(Q\). For \([Q,\Xi]\in\mathfrak{X}_{Q}\), we have
\[[Q,\Xi]S=Q(\Xi S)\pm\Xi(Q(S))=Q(\Xi S). \tag{1.3}\]
In other words, the \(Q\)-cohomology class of \(S\) is preserved under the action of \(\mathfrak{X}_{Q}(\Gamma(Y))\). The contractions and Lie derivatives on \(\Omega_{loc}(M\times\Gamma(Y^{\prime}))\) can also be viewed as evolutionary vector fields over \(\Gamma(Y)\) of degrees \(-1\) and \(0\), respectively. Together with \(Q\), they span a graded Lie superalgebra \(\widehat{\mathfrak{X}(\Gamma(Y^{\prime}))}\subset\mathfrak{X}_{Q}(\Gamma(Y))\). If the bundle \(Y^{\prime}\) is natural, the diffeomorphism group \(\mathrm{Diff}(M)\) of \(M\) acts canonically on \(\Gamma(Y^{\prime})\) and every vector field \(X\in\mathfrak{X}(M)\) determines canonically two vector fields \(K_{X}\) in \(\widehat{\mathfrak{X}(\Gamma(Y^{\prime}))}_{-1}\) and \(\xi_{X}\in\widehat{\mathfrak{X}(\Gamma(Y^{\prime}))}_{0}\) such that \(\xi_{X}=[Q,K_{X}]\). Combining with (1.3), this in particular implies that the \(Q\)-cohomology class of \(S\) is preserved under the \(\mathrm{Diff}(M)\)-action. The evolutionary vector field \(K_{X}\) is, in general, not a Noether symmetry of the Lagrangian. In the case of \(M=\mathbb{R}^{n}\), one can choose \(X\) to be \(\frac{\partial}{\partial x^{\mu}}\) and "deform" \(K_{\mu}:=K_{\frac{\partial}{\partial x^{\mu}}}\) properly such that it becomes a Noether symmetry of the theory. Such \(K_{\mu}\) are known as the vector supersymmetries in the physics literature [1, 12, 13]. Their existence guarantees the \(Q\)-exactness of the canonical energy-momentum tensor of the theory.
An observable \(\mathcal{O}\) in a cohomological field theory (or any BRST theory) is usually obtained by integrating a local form \(\mathcal{O}^{(p)}\in\Omega^{n-p,0,p}_{loc}(M\times\Gamma(Y))\) over a \(p\)-cycle in \(M\). \(\mathcal{O}^{(p)}\) are solutions to the so-called descent equations
\[\mathrm{Lie}_{Q}\mathcal{O}^{(p)}=d_{h}\mathcal{O}^{(p-1)} \tag{1.4}\]
for \(p=1,\cdots,n\) with \(\mathrm{Lie}_{Q}\mathcal{O}^{(0)}=0\). The existence of solutions to (1.4) is guaranteed by the isomorphism (1.2). It is shown in [13] that every solution \(\sum_{p=0}^{n}\mathcal{O}^{p}\) to (1.4) is locally equivalent to \(\exp(-K)\mathcal{W}\), where \(\mathcal{W}\) is any \(Q\)-closed local form of total degree \(n\), and \(K:=dx^{\mu}\wedge\mathrm{Lie}_{K_{\mu}}\) is a locally defined "homotopy operator".
Our main achievement here is the construction of a rigorous geometric framework for cohomological Lagrangian field theories, which solidifies the mathematical basis of the above BRST picture. The framework is built upon the variational bicomplex of gauge natural bundles [10] and the concept of a \(G^{\star}\)-algebra, which was introduced in [11] to provide an axiomatic treatment of algebraic equivariant cohomology. Our framework also admits a direct extension to the extended BV-BFV formalism [12, 13] via the standard cotangent lift procedure of a BRST theory [10]. In both the BRST and BV world, we use the Donaldson-Witten theory [14] as a primary example. Interestingly, our approach aligns with the AKSZ construction of the Donaldson-Witten theory [1] when the auxiliary fields are integrated out.
## 2 \(G^{\star}\)-algebras and equivariant cohomology
### \(G^{\star}\)-algebras
A graded Lie supergroup is a triple \((G,\widetilde{\mathfrak{g}},\tau)\) where
1. \(G\) is a Lie group with Lie algebra \(\mathfrak{g}\);
2. \(\widetilde{\mathfrak{g}}=\bigoplus_{i\in\mathbb{Z}}\widetilde{\mathfrak{g}}_{i}\) is a graded Lie superalgebra with \(\widetilde{\mathfrak{g}}_{0}=\mathfrak{g}\);
3. \(\tau:G\to\operatorname{Aut}(\widetilde{\mathfrak{g}})\) is an action of \(G\) on \(\widetilde{\mathfrak{g}}\) by graded algebra automorphisms such that its restriction to \(\widetilde{\mathfrak{g}}_{0}\) is the adjoint action of \(G\) on \(\mathfrak{g}\).
**Example 2.1**.: _Let \(A\) be a graded commutative algebra. The automorphism group \(\operatorname{Aut}(A)\), the graded Lie superalgebra \(\operatorname{Der}(A)\) of derivations of \(A\), and the adjoint action of \(\operatorname{Aut}(A)\) on \(\operatorname{Der}(A)\) determines a graded Lie supergroup which we denote by \(G(A)\)._
Given two graded Lie supergroups \((G,\widetilde{\mathfrak{g}},\tau)\) and \((G^{\prime},\widetilde{\mathfrak{g}}^{\prime},\tau^{\prime})\). A homomorphism between them is a pair \((\phi,\varphi)\) where
1. \(\phi:G\to G^{\prime}\) is a Lie group homomorphism;
2. \(\varphi:\widetilde{\mathfrak{g}}\to\widetilde{\mathfrak{g}}^{\prime}\) is a graded Lie superalgebra homomorphism;
3. \(\phi\) and \(\varphi\) are compatible in the sense that \[\varphi|_{\widetilde{\mathfrak{g}}_{0}}=d\phi|_{\operatorname{Id}},\quad \tau^{\prime}(\phi(g))\circ\varphi=\varphi\circ\tau(g),\ \forall g\in G.\] (2.1)
Let \(X\) be a \(G\)-manifold. To each \(\xi\in\mathfrak{g}\) we can associate a vector field \(v_{\xi}\) on \(X\), which again induces a contraction \(\iota_{\xi}\) and a Lie derivative \(\operatorname{Lie}_{\xi}\) on the de Rham complex \(\Omega(X)\) of \(X\). Let \(d\) denote the de Rham differential. Fix a basis \(\{\xi_{a}\}\) of \(\mathfrak{g}\). Let \(f^{c}_{ab}\) denote the structure constants of \(\mathfrak{g}\) with respect to \(\{\xi_{a}\}\). Let \(\iota_{a}\) and \(\operatorname{Lie}_{a}\) denote the contraction and Lie derivative associated to \(\xi_{a}\). \(d\), \(\iota_{a}\), \(\operatorname{Lie}_{a}\) satisfy the following relations
\[\operatorname{Lie}_{a}\operatorname{Lie}_{b}-\operatorname{Lie }_{b}\operatorname{Lie}_{a}=f^{c}_{ab}\operatorname{Lie}_{c},\quad \operatorname{Lie}_{a}\iota_{b}-\iota_{b}\operatorname{Lie}_{a}=f^{c}_{ab} \iota_{c},\quad\operatorname{Lie}_{a}d-d\operatorname{Lie}_{a}=0, \tag{2.2}\] \[d^{2}=0,\quad\iota_{a}\iota_{b}+\iota_{b}\iota_{a}=0,\quad d \iota_{a}+\iota_{a}d=\operatorname{Lie}_{a}, \tag{2.3}\]
which are known as the Cartan calculus. (2.2) and (2.3) define a differential graded Lie superalgebra \(\widetilde{\mathfrak{g}}=\widetilde{\mathfrak{g}}_{-1}\oplus\widetilde{\mathfrak{g}}_{0}\oplus\widetilde{\mathfrak{g}}_{1}\), where \(\widetilde{\mathfrak{g}}_{-1}\) is spanned by \(\iota_{a}\), \(\widetilde{\mathfrak{g}}_{0}\) is spanned by \(\mathrm{Lie}_{a}\), and \(\widetilde{\mathfrak{g}}_{1}\) is spanned by \(d\). \(G\), \(\widetilde{\mathfrak{g}}\), and the adjoint action of \(G\) on \(\widetilde{\mathfrak{g}}\) determine a graded Lie supergroup which we follow [11] to denote by \(G^{\star}\).
**Definition 2.1**.: _We call \(G^{\star}\) the Cartan graded Lie supergroup associated to \(G\)._
Let \(G^{\star}\) and \(H^{\star}\) be the Cartan graded Lie supergroups associated to the Lie groups \(G\) and \(H\), respectively. A homomorphism \(\phi:G\to H\) naturally defines a homomorphism \(\phi^{\star}=(\phi,\varphi)\) from \(G^{\star}\) to \(H^{\star}\) where \(\varphi\) is defined by setting
\[\varphi(d)=d,\quad\varphi(\operatorname{Lie}_{\xi})=\operatorname{Lie}_{d\phi |_{\operatorname{Id}}(\xi)},\quad\varphi(\iota_{\xi})=\iota_{d\phi|_{ \operatorname{Id}}(\xi)}\]
for all \(\xi\in\mathfrak{g}\). One can easily verify that \(\phi\) and \(\varphi\) satisfy (2.1). It follows that a subgroup \(L\) of \(G\) defines naturally a sub-supergroup \(L^{\star}\) of \(G^{\star}\).
**Definition 2.2**.: _A \(G^{\star}\)-algebra is a graded commutative algebra \(A\) together with a \(G^{\star}\)-action, i.e., a morphism \(\rho:G^{\star}\to G(A)\) of graded Lie supergroups. A morphism between two \(G^{\star}\)-algebras is just a graded algebra homomorphism which is compatible with the \(G^{\star}\)-action._
An element \(\alpha\in A\) is called horizontal if \(\iota_{\xi}\alpha=0\) for all \(\xi\in\mathfrak{g}\). A horizontal element \(\alpha\) is called basic if in addition \(\mathrm{Lie}_{\xi}\alpha=0\) for all \(\xi\in\mathfrak{g}\). Let \(A_{hor}\) and \(A_{bas}\) denote the subalgebras of horizontal and basic elements in \(A\), respectively. (They are subalgebras because \(\iota_{a}\) and \(\mathrm{Lie}_{a}\) are derivations.) It is easy to see that for \(\alpha\in A_{bas}\), \(d\alpha\) is also in \(A_{bas}\). We use \(H(A)\) to denote the cohomology of \((A,d)\) and \(H_{bas}(A)\) to denote the cohomology of \((A_{bas},d)\).
**Definition 2.3**.: _Let \(A\) and \(B\) be two \(G^{\star}\)-algebras. A semi-homotopy is a linear map \(K:A\to B\) of degree \(-1\) which satisfies_
\[\iota_{\xi}K+K\iota_{\xi}=0,\ \forall\xi\in\mathfrak{g} \tag{2.4}\]
_and_
\[B_{hor}\subset\mathrm{ker}(\mathrm{Lie}_{\xi}K-K\mathrm{Lie}_{\xi}),\ \forall\xi\in\mathfrak{g}. \tag{2.5}\]
_A semi-homotopy \(K\) is said to be a homotopy if \((\mathrm{Lie}_{\xi}K-K\mathrm{Lie}_{\xi})=0\) for all \(\xi\in\mathfrak{g}\). Two morphisms \(\alpha_{0}\) and \(\alpha_{1}:A\to B\) are (semi-)homotopic if they are equal up to a (semi-)homotopy, i.e., if_
\[\alpha_{1}-\alpha_{0}=dK+Kd.\]
**Remark 2.1**.: _Let \(A\) and \(B\) be two \(G^{\star}\)-algebras with a morphism \(\alpha:A\to B\). A (semi-)homotopy \(K:A\to B\) is said to be a (semi-)homotopy relative to \(\alpha\) if_
\[K(xy)=K(x)\alpha(y)+(-1)^{d(x)}\alpha(x)K(y)\]
_for all \(x,y\in A\). For \(B=A\), one can take \(\alpha\) to be the identity morphism \(\mathrm{Id}:A\to A\) and such \(K\) becomes a derivation._
**Proposition 2.1**.: _Let \(\alpha_{0}\) and \(\alpha_{1}:A\to B\) be two morphisms between \(G^{\star}\)-algebras. They induce the same morphism \(H(A)\to H(B)\) if they are homotopic. They induce the same morphism \(H_{bas}(A)\to H_{bas}(B)\) if they are semi-homotopic._
Proof.: Let \(L=dK+Kd\) and \(P_{\xi}=\mathrm{Lie}_{\xi}K-K\mathrm{Lie}_{\xi}\). It is not hard to show that \(\iota_{\xi}L-L\iota_{\xi}=P_{\xi}\) and \(\mathrm{Lie}_{\xi}L-L\mathrm{Lie}_{\xi}=dP_{\xi}-P_{\xi}d\). If \(K\) is a homotopy, then \(P_{\xi}=0\) and \(L\) is a morphism of \(G^{\star}\)-algebras, hence induces a graded commutative algebra morphism \(H(A)\to H(B)\). If \(K\) is a semi-homotopy, then \(L\) still commutes with \(\mathrm{Lie}_{\xi}\) when restricted to the basic part of \(B\), hence induces a graded commutative algebra morphism \(H_{bas}(A)\to H_{bas}(B)\). The rest of the proof follows directly from the standard arguments of homological algebras.
### Algebraic equivariant cohomology
**Definition 2.4**.: _A \(G^{\star}\)-algebra \(E\) is said to be of type (C) if there exists a \(G\)-invariant free submodule \(C\) of the \(A_{0}\)-module \(A_{1}\) such that the contractions_
\[\iota_{a}:A_{1}\to A_{0}\]
_form a basis of \(C^{\star}\), the dual module of \(C\) over \(A_{0}\)._
**Remark 2.2**.: \(C\) _be can be seen as an algebraic analogue of the dual bundle of the vertical bundle \(VP\) of a principal \(G\)-bundle \(P\)._
**Example 2.2**.: _The de Rham complex of a principal \(G\)-bundle is of type (C)._
A \(G^{\star}\)-algebra \(E\) is of type (C) if and only if there are elements \(\theta^{a}\in E_{1}\) such that [10]
\[\iota_{a}\theta^{b}=\delta^{b}_{a}, \tag{2.6}\] \[\mathrm{Lie}_{a}\theta^{b}=-f^{b}_{ac}\theta^{c}. \tag{2.7}\]
Using Cartan's magic formula, one can show that there exists elements \(\phi^{a}\in E_{2}\) satisfying
\[d\theta^{a}=\phi^{a}-\frac{1}{2}f^{a}_{bc}\theta^{b}\theta^{c}. \tag{2.8}\]
The actions of \(d\), \(\iota_{a}\) and \(\mathrm{Lie}_{a}\) on \(\phi^{b}\) are uniquely determined by (2.6) to (2.8).
There is a universal object in the category of \(G^{\star}\)-algebras of type (C).
**Definition 2.5**.: _The Weil Algebra of a Lie algebra \(\mathfrak{g}\) is a \(G^{\star}\)-algebra of type (C) with underlying graded commutative algebra_
\[W(\mathfrak{g})=\Lambda(\mathfrak{g}^{\ast})\otimes\mathrm{S}( \mathfrak{g}^{\ast}).\]
\(W(\mathfrak{g})\) _is graded by assigning degree \(1\) to elements of \(\mathfrak{g}^{\ast}\subset\Lambda(\mathfrak{g}^{\ast})\) and degree \(2\) to elements of \(\mathfrak{g}^{\ast}\subset\mathrm{S}(\mathfrak{g}^{\ast})\). The action of \(G\) on \(W(\mathfrak{g})\) is induced by its coadjoint action on \(\mathfrak{g}^{\ast}\). The action of \(\widetilde{\mathfrak{g}}\) on \(W(\mathfrak{g})\) is specified by (2.6) to (2.8) and_
\[\iota_{a}\phi^{b}=0, \tag{2.9}\] \[d\phi^{a}=f^{a}_{bc}\phi^{b}\theta^{c},\] (2.10) \[\mathrm{Lie}_{a}\phi^{b}=-f^{b}_{ac}\phi^{c}, \tag{2.11}\]
_where \(\theta^{a}=\xi^{a}\otimes 1\), \(\phi^{a}=1\otimes\xi^{a}\), \(\{\xi^{a}\}\) is a basis of \(\mathfrak{g}^{\ast}\)._
**Remark 2.3**.: \((W(\mathfrak{g}),d)\) _is acyclic [10, Theorem 3.2.1]._
It is easy to show that the tensor product \(A\otimes B\) of two \(G^{\star}\)-algebras is again a \(G^{\star}\)-algebra, and that \(A\otimes B\) is of type (C) if \(B\) is of type (C).
**Definition 2.6**.: _The algebraic equivariant cohomology of a \(G^{\star}\)-algebra \(A\), denoted by \(H_{G}(A)\), is defined as \(H_{bas}(A\otimes E)\), where \(E\) is an acyclic \(G^{\star}\)-algebra of type (C)._
**Remark 2.4**.: \(E\) _should be thought of as an algebraic analogue of the universal \(G\)-space \(EG\) of a Lie group \(G\). Just like in the topological case, Definition 2.6 does not depend on the choice of \(E\)[10, Section 4.4]._
#### 2.2.1 Weil model
Let's consider the tensor product \(W(\mathfrak{g})\otimes\Omega(X)\). It has a canonical \(G^{\star}\)-algebra structure where the contractions, the Lie derivatives, and the differential are
\[\iota_{a}\otimes 1+1\otimes\iota_{a},\quad\mathrm{Lie}_{a}\otimes 1+1 \otimes\mathrm{Lie}_{a},\quad d\otimes 1+1\otimes d=:d_{W}.\]
The equivariant cohomology of \(X\) is defined as \(H_{G}(X):=H_{G}(\Omega(X))=H_{bas}(W(\mathfrak{g})\otimes\Omega(X))\).
**Definition 2.7**.: \((W(\mathfrak{g})\otimes\Omega(X),d_{W})\) _is called the Weil Model for the equivariant cohomology of \(X\). \(d_{W}\) is called the Weil differential._
#### 2.2.2 BRST model and Cartan model
Let's consider the automorphism map \(j\) of \(W(\mathfrak{g})\otimes\Omega(X)\) defined by \(j=\exp{(-\theta^{a}\otimes\iota_{a})}\). \(j\) is known as the Mathai-Quillen map. Let \(d_{K}:=j\circ d_{W}\circ j^{-1}\). One can show that [10]
\[d_{K}=d_{W}+\theta^{a}\otimes\mathrm{Lie}_{a}-\phi^{a}\otimes \iota_{a},\] \[\iota_{a}\otimes 1=j\circ(\iota_{a}\otimes 1+1\otimes\iota_{a}) \circ j^{-1},\] \[\mathrm{Lie}_{a}\otimes 1+1\otimes\mathrm{Lie}_{a}=j\circ(\mathrm{ Lie}_{a}\otimes 1+1\otimes\mathrm{Lie}_{a})\circ j^{-1}.\]
**Definition 2.8**.: \((W(\mathfrak{g})\otimes\Omega(X),d_{K})\) _is called the BRST (or Kalkman) model of the equivariant cohomology of \(X\). \(d_{K}\) is called the BRST (or Kalkman) differential._
The basic part of \(W(\mathfrak{g})\otimes\Omega(X)\) in the Kalkman model of \(X\) can be identified as \((S(\mathfrak{g}^{*})\otimes\Omega(X))^{G}\), and the restriction of \(d_{K}\) to \((S(\mathfrak{g}^{*})\otimes\Omega(X))^{G}\), denoted by \(d_{C}\), takes the form \(d_{C}=1\otimes d-\phi^{a}\otimes\iota_{a}\). \(((S(\mathfrak{g}^{*})\otimes\Omega(X))^{G},d_{C})\) is called the Cartan Model of the equivariant cohomology of \(X\). \(d_{C}\) is called the Cartan differential.
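As a quick consistency check (a sketch using the Cartan calculus relations above), one computes

\[d_{C}^{2}=(1\otimes d-\phi^{a}\otimes\iota_{a})^{2}=-\phi^{a}\otimes(d\iota_{a}+\iota_{a}d)=-\phi^{a}\otimes\mathrm{Lie}_{a},\]

since the \(\phi^{a}\) commute with each other while the \(\iota_{a}\) anticommute. On a \(G\)-invariant element the total Lie derivative vanishes and the coadjoint part contributes \(\phi^{a}\,\mathrm{Lie}_{a}\phi^{b}=-f^{b}_{ac}\phi^{a}\phi^{c}=0\), so \(d_{C}^{2}=0\) on \((S(\mathfrak{g}^{*})\otimes\Omega(X))^{G}\), as it should be.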
### BV extensions of \(G^{\star}\)-algebras and gauge fixings
#### 2.3.1 BRST and BV systems
Let \(\mathcal{M}\) be a graded manifold equipped with a \(G^{\star}\)-action. The graded commutative algebra \(C^{\infty}(\mathcal{M})\) of functions over \(\mathcal{M}\) is a \(G^{\star}\)-algebra. The differential of \(\widetilde{\mathfrak{g}}\) induces a cohomological vector field \(Q\) over \(\mathcal{M}\). With a slight abuse of notation, we denote the vector fields generated by \(\iota_{\xi}\in\widetilde{\mathfrak{g}}_{-1}\) and \(\mathrm{Lie}_{\xi}\in\widetilde{\mathfrak{g}}_{0}\) again as \(\iota_{\xi}\) and \(\mathrm{Lie}_{\xi}\), respectively. Let \(S\) be a \(Q\)-closed basic function over \(\mathcal{M}\) of degree \(0\).
**Definition 2.9**.: _We call the triple \((\mathcal{M},Q,S)\) a (\(G^{\star}\)-equivariant) BRST system._
**Definition 2.10**.: _A gauge fixing procedure of a BRST system \((\mathcal{M},Q,S)\) is the replacement of \(S\) by \(S+Q(\Psi)\), where \(\Psi\) is a function of degree \(-1\). \(\Psi\) is called a gauge fixing fermion. Such a procedure is called \(G^{\star}\)-invariant if \(\Psi\) is a basic function._
**Remark 2.5**.: _In particular, one can choose \(\mathcal{M}\) to be the graded manifold associated to the graded vector bundle \(\underline{\mathfrak{g}}[1]\oplus\underline{\mathfrak{g}}[2]\oplus T[1]M\). We have \(C^{\infty}(\mathcal{M})\cong W(\mathfrak{g})\otimes\Omega(M)\) as graded commutative algebras. Let the \(G^{\star}\)-action on \(\mathcal{M}\) be induced from the \(G^{\star}\)-algebraic structure on \(W(\mathfrak{g})\otimes\Omega(M)\) through the Weil (or BRST) model. Then the cohomology \(H_{bas}(\mathcal{M})\) of basic functions over \(\mathcal{M}\) is nothing but the equivariant cohomology \(H_{G}(M)\) of its underlying \(G\)-manifold \(M\). Moreover, one can see that a \(G^{\star}\)-invariant gauge fixing of \(S\) does not change the cohomology class of \(S\) in \(H_{bas}(\mathcal{M})\)._
Let \((\mathcal{M},\omega)\) be an odd symplectic graded manifold equipped with a Hamiltonian \(G^{\star}\)-action, where \(\omega\) is an odd symplectic form over \(\mathcal{M}\) of degree \(-1\).1 Let \(S\) be a Hamiltonian function associated to the cohomological vector field \(Q\). Let \(I_{\xi}\) and \(L_{\xi}\) be Hamiltonian functions associated to \(\iota_{\xi}\) and \(\mathrm{Lie}_{\xi}\). It follows that
Footnote 1: We choose the sign convention such that \(\iota_{X_{f}}\omega-df=0\), where \(X_{f}\) is the Hamiltonian vector field of \(f\).
\[Q(S)=\{S,S\}=0,\quad\iota_{\xi}(S)=\{I_{\xi},S\}=L_{\xi},\quad\mathrm{Lie}_{ \xi}(S)=\{L_{\xi},S\}=0,\]
where \(\{\cdot,\cdot\}\) is the graded Poisson bracket associated to \(\omega\) defined by setting \(\{f,g\}=\iota_{X_{f}}\iota_{X_{g}}\omega\).
**Definition 2.11**.: _We call the quadruple \((\mathcal{M},\omega,Q,S)\) a (\(G^{\star}\)-equivariant) BV system._
Obviously, \(S\) is not basic with respect to a \(G^{\star}\)-action with nontrivial underlying \(G\)-action. However, one can introduce a "\(G^{\star}\)-invariant" gauge fixing procedure to fix this problem. Recall that a gauge fixing procedure of a BV system is the restriction of \(S\) to a Lagrangian submanifold \(\mathcal{L}\) of \(\mathcal{M}\). There is also an implicit assumption that the cohomological vector field \(Q\) is tangential to \(\mathcal{L}\) so that the restriction of \(Q\) to \(\mathcal{L}\) is well-defined and we still have \(Q(S|_{\mathcal{L}})=0\). Likewise, we can require \(\mathcal{L}\) to be \(G^{\star}\)-invariant so that the restriction of the \(G^{\star}\)-action to \(\mathcal{L}\) is well-defined.
**Definition 2.12**.: _A gauge fixing procedure of a \(BV\) system is called \(G^{\star}\)-invariant if the corresponding Lagrangian submanifold \(\mathcal{L}\) is \(G^{\star}\)-invariant and is contained in the zero locus of \(L_{\xi}\) for all \(\xi\in\mathfrak{g}\)._
For a \(G^{\star}\)-invariant gauge fixing, we have
\[Q(S|_{\mathcal{L}})=Q(S)|_{\mathcal{L}}=0,\quad\iota_{\xi}(S|_ {\mathcal{L}})=\iota_{\xi}(S)|_{\mathcal{L}}=L_{\xi}|_{\mathcal{L}}=0,\quad \mathrm{Lie}_{\xi}(S|_{\mathcal{L}})=\mathrm{Lie}_{\xi}(S)|_{\mathcal{L}}=0.\]
Therefore, \(S|_{\mathcal{L}}\) is a \(Q\)-closed basic function over the \(G^{\star}\)-manifold \(\mathcal{L}\). In other words, we obtain a (gauge fixed) BRST system \((\mathcal{L},Q,S|_{\mathcal{L}})\) from the BV system \((\mathcal{M},\omega,Q,S)\).
#### 2.3.2 Cotangent lift of a BRST system
One can also obtain a BV system out of a BRST system using a trick called cotangent lift. Let \((\mathcal{M},Q,S)\) be a BRST system. Let \(T^{*}[-1]\mathcal{M}\) be the cotangent bundle of \(\mathcal{M}\) shifted by degree \(-1\) equipped with the canonical odd symplectic form \(\omega\). Let \((x^{\mu},\theta^{a})\) be a coordinate system of \(\mathcal{M}\) and \((x^{\mu},\theta^{a},x^{+}_{\mu},\theta^{+}_{a})\) be the induced coordinate system on \(T^{*}[-1]\mathcal{M}\). \(\omega\) can be locally expressed as
\[\omega=dx^{+}_{\mu}\wedge dx^{\mu}+d\theta^{+}_{a}\wedge d\theta^ {a}.\]
Every vector field \(X=X^{\mu}\frac{\partial}{\partial x^{\mu}}+X^{a}\frac{\partial}{\partial\theta ^{a}}\) over \(\mathcal{M}\) can be lifted to a function \(\widetilde{X}\) over \(T^{*}[-1]\mathcal{M}\) defined by the formula
\[\widetilde{X}=x^{+}_{\mu}X^{\mu}+\theta^{+}_{a}X^{a}.\]
Let \(X_{cl}\) denote the Hamiltonian vector field associated to \(\widetilde{X}\) over \(T^{*}[-1]\mathcal{M}\). One can easily check that
\[CL:\mathfrak{X}(\mathcal{M}) \rightarrow\mathfrak{X}(T^{*}[-1]\mathcal{M})\] \[X \mapsto X_{cl}\]
is a graded Lie superalgebra homomorphism.2 In local coordinates, we have
Footnote 2: One can check that both the map \((\mathfrak{X}(M),[\cdot,\cdot])\rightarrow(C^{\infty}(T^{*}[-1]\mathcal{M}), \{\cdot,\cdot\}),\quad X\mapsto\widetilde{X}\) and the map \((C^{\infty}(T^{*}[-1]\mathcal{M}),\{\cdot,\cdot\})\rightarrow(\mathfrak{X}(T ^{*}[-1]\mathcal{M}),[\cdot,\cdot]),\quad f\mapsto X_{f}\) are anti-homomorphisms. Therefore, their composition \(CL\) is a homomorphism.
\[X_{cl}=X^{i}\frac{\partial}{\partial u^{i}}+(-1)^{(|u^{i}|+1)|u^ {j}|}u^{+}_{i}\frac{\partial X^{i}}{\partial u^{j}}\frac{\partial}{\partial u ^{+}_{j}},\]
for \(X\) odd and
\[X_{cl}=(-1)^{|u^{i}|}X^{i}\frac{\partial}{\partial u^{i}}+(-1)^{|u^{i}||u^{j}|+1}u _{i}^{+}\frac{\partial X^{i}}{\partial u^{j}}\frac{\partial}{\partial u_{j}^{+}},\]
for \(X\) even, where \((u^{j})=(x^{\mu},\theta^{a})\). On the other hand, any function over \(\mathcal{M}\) can be canonically viewed as a function over \(T^{*}[-1]\mathcal{M}\). Note that for an odd vector field \(X\) over \(\mathcal{M}\), \(X_{cl}(f)=X(f)\) for \(f\in C^{\infty}(\mathcal{M})\). Let \(X_{S}\) be the Hamiltonian vector field of \(S\). \(X_{S}\) locally takes the form
\[X_{S}=\frac{\partial S}{\partial u^{j}}\frac{\partial}{\partial u_{j}^{+}}.\]
Since \(S\) is \(Q\)-closed and basic, we have
\[[Q_{cl},X_{S}]=[(\iota_{\xi})_{cl},X_{S}]=0.\]
It follows from \([Q,\iota_{\xi}]=\mathrm{Lie}_{\xi}\) that \([(\mathrm{Lie}_{\xi})_{cl},X_{S}]=0\). One can then deform the \(G^{\star}\)-action on \(T^{*}[-1]\mathcal{M}\) induced by the cotangent lift by introducing the new cohomological vector field
\[Q_{BV}:=Q_{cl}+X_{S}.\]
The Hamiltonian function of \(Q_{BV}\) is \(S_{BV}:=S+\widetilde{Q}\). We have proved that
**Proposition 2.2**.: \((T^{*}[-1]\mathcal{M},\omega,Q_{BV},S_{BV})\) _is a \(BV\)-system._
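As a brief sanity check (a sketch; signs follow the conventions fixed above and in footnote 1), \(S_{BV}\) satisfies the classical master equation:

\[\{S_{BV},S_{BV}\}=\{S,S\}+2\{\widetilde{Q},S\}+\{\widetilde{Q},\widetilde{Q}\}=0,\]

since \(\{S,S\}=0\) (\(S\) does not depend on the antifields \(u^{+}_{i}\)), \(\{\widetilde{Q},S\}\) agrees with \(Q(S)=0\) up to sign, and \(\{\widetilde{Q},\widetilde{Q}\}=-\widetilde{[Q,Q]}=0\) by the anti-homomorphism property of \(X\mapsto\widetilde{X}\) noted in footnote 2.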
It was shown by Schwarz that any Lagrangian submanifold of \(T^{*}[-1]\mathcal{M}\) can be obtained by combining the following two types of examples [10].
"Graph Lagrangian": Let \(\Psi\) be a function over \(\mathcal{M}\) of degree \(-1\). The graph \(\mathrm{Graph}(d\Psi)=:\mathcal{L}_{\Psi}\) of the \(1\)-form \(d\Psi\) is a Lagrangian submanifold of \(T^{*}[-1]\mathcal{M}\).
"Conormal Lagrangian": Let \(\mathcal{N}\) be a submanifold of \(\mathcal{M}\). The conormal bundle \(N^{*}[-1]\mathcal{N}\) of \(\mathcal{N}\) shifted by degree \(-1\) is a Lagrangian submanifold of \(T^{*}[-1]\mathcal{M}\).
For the first type of Lagrangian submanifolds, one has a natural isomorphism \(\mathcal{L}_{0}\cong\mathcal{L}_{\Psi}\) where \(0\) is the zero function over \(\mathcal{M}\). Under this isomorphism, we have
\[S_{BV}|_{\mathcal{L}_{0}}=S+Q(\Psi).\]
Obviously, this gauge fixing procedure is \(G^{\star}\)-invariant if \(\Psi\) is a basic function. We have
\[\iota_{\xi}(S_{BV}|_{\mathcal{L}_{0}})=\iota_{\xi}(S)+\iota_{\xi} Q(\Psi)=0+\mathrm{Lie}_{\xi}(\Psi)=0,\] \[\mathrm{Lie}_{\xi}(S_{BV}|_{\mathcal{L}_{0}})=\mathrm{Lie}_{\xi} (S)+Q(\mathrm{Lie}_{\xi}(\Psi))=0.\]
For the second type of Lagrangian submanifolds, the gauge fixing is \(G^{\star}\)-invariant if \(\mathcal{N}\) is a \(G^{\star}\)-invariant submanifold in \(\mathcal{M}\). In fact, one can choose a local coordinate system \((x^{\mu},y^{\nu},\theta^{a},\eta^{b})\) such that \(\mathcal{N}\) is determined locally by the equations \(y^{\nu}=0\) and \(\eta^{b}=0\). Let \((x^{\mu},y^{\nu},\theta^{a},\eta^{b},x^{+}_{\mu},y^{+}_{\nu},\theta^{+}_{a}, \eta^{+}_{b})\) be the induced local coordinate system on \(T^{*}[-1]\mathcal{M}\). The conormal Lagrangian \(N^{*}[-1]\mathcal{N}\) is then determined locally by \(y^{\nu}=0,\eta^{b}=0,x^{+}_{\mu}=0,\theta^{+}_{a}=0\). We have
\[L_{\xi}|_{N^{*}[-1]\mathcal{N}}=\widetilde{\mathrm{Lie}_{\xi}}\big|_{N^{*}[-1]\mathcal{N}}=y^{+}_{\nu}\mathrm{Lie}^{\nu}_{\xi}+\eta^{+}_{b}\mathrm{Lie}^{b}_{\xi}=0\]
since \(\mathcal{N}\) is invariant under the \(G^{\star}\)-action, i.e., \(\mathrm{Lie}^{\nu}_{\xi}=0,\mathrm{Lie}^{b}_{\xi}=0\).
**Remark 2.6**.: _The preceding discussion makes it evident that one of the key advantages of BV systems over BRST systems is the increased flexibility in selecting a gauge-fixing procedure._
## 3 Gauge natural field theories
### Gauge natural bundle
Let \(M\) be a \(n\)-dimensional manifold and \(s\) a positive integer. Consider the set
\[L^{s}(M)=\{j^{s}(\epsilon)(0)|\epsilon:\mathbb{R}^{n}\to M,\text{ locally invertible around }0\in\mathbb{R}^{n}\},\]
where \(j^{s}(\epsilon)\) is the \(s\)-th order jet prolongation of \(\epsilon\). \(L^{s}(M)\) is a fiber bundle over \(M\) via the canonical projection
\[\pi:L^{s}(M) \to M\] \[j^{s}(\epsilon)(0) \mapsto\epsilon(0).\]
Moreover, it is a principal bundle with the standard fiber
\[\text{GL}^{s}(n)=\{j^{s}(\alpha)(0)|\alpha:\mathbb{R}^{n}\to \mathbb{R}^{n},\text{locally invertible around }0\in\mathbb{R}^{n},\alpha(0)=0\}.\]
The group structure of \(\text{GL}^{s}(n)\) is specified by \(j^{s}(\alpha)(0)j^{s}(\beta)(0):=j^{s}(\alpha\circ\beta)(0)\). The right action of \(\text{GL}^{s}(n)\) on \(L^{s}(M)\) is specified by \(j^{s}(\epsilon)(0)j^{s}(\alpha)(0):=j^{s}(\epsilon\circ\alpha)(0)\). For \(s=1\), one has the identification \(\text{GL}^{1}(n)\cong\text{GL}(n)\) and \(L^{1}(M)\cong\text{Fr}(M)\).
Let \(G\) be a Lie group and \(P\) be a principal \(G\)-bundle. The set
\[J^{r}_{n}G:=\{j^{r}(a)(0)|a:\mathbb{R}^{n}\to G\}\]
is also a Lie group with group multiplication defined by \(j^{r}(a)(0)j^{r}(b)(0):=j^{r}(ab)(0)\). For \(r\leq s\), one has a canonical right action of \(\text{GL}^{s}(n)\) on \(J^{r}_{n}G\) defined by setting \(j^{r}(a)(0)\,j^{s}(\alpha)(0):=j^{r}(a\circ\alpha)(0)\). Consider the right semi-direct product
\[W^{r,s}_{n}(G):=J^{r}_{n}G\rtimes\text{GL}^{s}(n)\]
and the fiber product
\[W^{r,s}_{n}P:=J^{r}P\times_{M}L^{s}(M),\]
where \(J^{r}P\) denotes the \(r\)-th order jet prolongation of \(P\). \(W^{r,s}_{n}P\) is again a principal bundle with structure group \(W^{r,s}_{n}(G)\). The \(W^{r,s}_{n}(G)\)-action on \(W^{r,s}_{n}P\) is defined by setting
\[(j^{r}(\sigma)(x),j^{s}(\epsilon)(0))(j^{r}(a)(0),j^{s}(\alpha)(0)):=(j^{r}(\sigma\cdot(a\circ\alpha^{-1}\circ\epsilon^{-1}))(x),j^{s}(\epsilon\circ\alpha)(0)),\]
where '\(\cdot\)' denotes the \(G\)-action on \(P\).
**Definition 3.1**.: _A gauge natural bundle of finite order \((r,s)\) is a fiber bundle associated to \(W^{r,s}_{n}P\) so that \((r,s)\) is minimal._
**Example 3.1**.: _Recall that a connection \(1\)-form on a principal \(G\)-bundle \(P\) is a \(G\)-equivariant \(1\)-form \(A\) with values in the Lie algebra \(\mathfrak{g}\) such that \(A(K_{\xi})=\xi,\ \xi\in\mathfrak{g},\) where \(K_{\xi}\) is the fundamental vector field generated by \(\xi\) on \(P\). The space of all connections \(\mathcal{A}\) is an affine space modeled on \(\Omega^{1}(\text{ad}P)\), where \(\text{ad}P\) is the adjoint bundle of \(P\)._
_On the other hand, let \(J^{1}P\) be the first jet bundle of \(P\). \(J^{1}P\) is an affine bundle modeled on the vector bundle \(T^{*}M\otimes_{M}VP\), where \(VP\) is the vertical bundle over \(P\) and the tensor product is
taken over \(M\). Let \(j^{1}\Phi:J^{1}P\to J^{1}P\) denote the jet prolongation of a bundle automorphism \(\Phi\) of \(P\), i.e., a \(G\)-equivariant diffeomorphism \(\Phi:P\to P\). Such operations satisfy the chain rules_
\[j^{1}(\Phi_{1}\circ\Phi_{2})=j^{1}(\Phi_{1})\circ j^{1}(\Phi_{2}),\] \[j^{1}(\mathrm{id}_{P})=\mathrm{Id}_{J^{1}P}.\]
_Thus, \(J^{1}P\) also has a principal \(G\)-action. The quotient space \(C=J^{1}P/G\) is then an affine bundle modeled on the vector bundle \((T^{*}M\otimes_{M}VP)/G\cong T^{*}M\otimes\mathrm{ad}P\) over \(M\). One can show that there exists an identification \(\mathcal{A}\cong\Gamma(C)\). Moreover, \(C\) is a gauge natural bundle of order \((1,1)\)._
Let \(\mathrm{Aut}(P)\) denote the group of bundle automorphisms of \(P\). Each \(\Phi\in\mathrm{Aut}(P)\) determines uniquely a diffeomorphism \(\phi\) of \(M\). In other words, we have a group homomorphism \(\mathrm{Aut}(P)\to\mathrm{Diff}(M)\). Let \(\mathrm{Diff}_{P}(M)\) denote the image of \(\mathrm{Aut}(P)\) under this homomorphism. Recall that there exists a bijection between the set of isomorphism classes of principal \(G\)-bundles over \(M\) and the set of homotopy classes of maps from \(M\) to the classifying space \(BG\) of \(G\). Let \(f:M\to BG\) represent the isomorphism class of \(P\). \(\mathrm{Diff}_{P}(M)\) can be identified as the group of diffeomorphisms \(\phi\) such that \(f\circ\phi\) is homotopic to \(f\).
We have a short exact sequence of groups
\[\mathrm{Id}\to\mathrm{Gau}(P)\to\mathrm{Aut}(P)\to\mathrm{Diff}_{P}(M)\to \mathrm{Id}, \tag{3.1}\]
where \(\mathrm{Gau}(P)\) is the kernel of \(\mathrm{Aut}(P)\to\mathrm{Diff}(M)\), known as the gauge group of \(P\). The Lie algebra \(\mathfrak{gau}(P)\) of \(\mathrm{Gau}(P)\) can be identified with the space of sections of \(\mathrm{ad}P\). (3.1) induces infinitesimally the short exact sequence of Lie algebras
\[0\to\mathfrak{gau}(P)\to\mathfrak{X}_{inv}(P)\to\mathfrak{X}_{P}(M)\to 0, \tag{3.2}\]
where \(\mathfrak{X}_{inv}(P)\) denotes the set of \(G\)-invariant vector fields over \(P\) and \(\mathfrak{X}_{P}(M)\) denotes the set of vector fields over \(M\) which can be lifted to \(P\).
There exists a canonical action of \(\mathrm{Aut}(P)\) on \(W_{n}^{r,s}P\) defined by
\[W_{n}^{r,s}\Phi:W_{n}^{r,s}P \to W_{n}^{r,s}P\] \[(j^{r}(\sigma)(x),j^{s}(\epsilon)(0)) \mapsto(j^{r}(\Phi\circ\sigma\circ\phi^{-1})(\phi(x)),j^{s}(\phi\circ\epsilon)(0)).\]
Therefore, \(\mathrm{Aut}(P)\) also acts naturally on any gauge natural bundle over \(M\). It follows that any \(G\)-invariant vector field \(\Xi\) over \(P\) which projects on a vector field \(\xi\) over \(M\) determines naturally a vector field \(\Xi_{Y}\) over a gauge natural bundle \(Y\) which projects on the same vector field \(\xi\).
**Definition 3.2**.: _Let \(\sigma\) be a section of \(Y\). The Lie derivative of \(\sigma\) with respect to \(\Xi\) is defined to be the map_
\[\mathrm{Lie}_{\Xi}\sigma:M\to VY,\quad\mathrm{Lie}_{\Xi}\sigma:=T\sigma\circ \xi-\Xi_{Y}\circ\sigma,\]
_where \(VY\) is the vertical bundle of \(Y\)._
Let \(\{(\Phi_{t},\phi_{t})\}\) denote the one-parameter group generated by \((\Xi_{Y},\xi)\). Let \(\sigma_{t}:=\Phi_{t}\circ\sigma\circ\phi_{t}^{-1}\).
**Proposition 3.1**.: [10, Proposition 2.6.5]_\(\mathrm{Lie}_{\Xi}\sigma=-\frac{d}{dt}\sigma_{t}|_{t=0}\)._
**Example 3.2**.: _Let \(\Xi\) be a \(G\)-invariant vector field over \(P\). Let \(\{U,P|_{U}\cong U\times G\}\) be a local trivialization of \(P\) equipped with a coordinate system \((x^{\mu})\). Let \(\{\xi_{a}\}\) be a basis of \(\mathfrak{g}\). Every \(G\)-invariant vector field \(\Xi\) over \(P\) can be written as \(\Xi=\Xi^{\mu}(x)\frac{\partial}{\partial x^{\mu}}+\Xi^{a}(x)\rho_{a}\), where \(\rho_{a}\) is the right invariant vector field generated by \(\xi_{a}\) over \(G\). The vertical part \(\Xi_{v}=\Xi^{a}(x)\rho_{a}\) of \(\Xi\) can be identified with a section \(\lambda\) of the adjoint bundle \(\mathrm{ad}P\). Locally, \(\lambda\) is given by \(\lambda=\Xi^{a}(x)\xi_{a}\). Recall that the infinitesimal (left) action of \(\mathfrak{gau}(P)\) on \(\mathcal{A}\) is given by_
\[\mathfrak{gau}(P)\times\mathcal{A} \to T\mathcal{A}\] \[(\lambda,A) \mapsto(A,d_{A}\lambda).\]
_where we use the identification \(T_{A}(\mathcal{A})\cong\Omega^{1}(\mathrm{ad}P)\). This agrees with the Lie derivative of \(A=A_{\mu}^{a}dx^{\mu}\otimes\xi_{a}\in\Gamma(C)\) with respect to \(\Xi_{v}=\Xi^{a}(x)\rho_{a}\), which is locally given by_
\[\mathrm{Lie}_{\Xi_{v}}A_{\mu}^{a}=(A_{\mu}^{a},\partial_{\mu}\Xi^{a}+f_{bc}^{ a}A_{\mu}^{b}\Xi^{c}),\]
_where \(f_{bc}^{a}\) are the structure constants of \(\mathfrak{g}\) with respect to \(\xi_{a}\), and we identify \(VC\) as \(C\times_{M}(T^{*}M\otimes\mathrm{ad}P)\)._
Let's consider the special case \(G=\mathrm{Id}\) and \(r=0\).
**Definition 3.3**.: _A natural bundle of finite order \(s\) is a fiber bundle associated to \(L^{s}(M)\) so that \(s\) is minimal._
Unlike the case of gauge natural bundles, the Lie derivative of a section of a natural bundle can be defined with respect to any vector field over the base manifold.
**Example 3.3**.: _Let \(\mathrm{Met}(M)\) denote the bundle of (Riemannian) metrics over \(M\). It is naturally associated to \(\mathrm{Fr}(M)\), therefore a natural bundle of order \(1\). The vertical bundle of \(\mathrm{Met}(M)\) can be identified with \(\mathrm{Met}(M)\times_{M}S^{2}T^{*}M\), where \(S^{2}T^{*}M\) is the bundle of symmetric tensors of type \((0,2)\) over \(M\). The Lie derivative of a metric \(g=g_{\mu\nu}dx^{\mu}\otimes dx^{\nu}\) along a vector field \(X=X^{\mu}\partial_{\mu}\) is locally given by_
\[\mathrm{Lie}_{X}g_{\mu\nu}=(g_{\mu\nu},\nabla_{\mu}X_{\nu}+\nabla_{\nu}X_{\mu }),\]
_where \(\nabla_{\mu}X_{\nu}=\partial_{\mu}X_{\nu}-\Gamma_{\mu\nu}^{\rho}X_{\rho}\), \(\Gamma_{\mu\nu}^{\rho}\) are the Christoffel symbols of \(g\), and \(X_{\mu}=g_{\mu\nu}X^{\nu}\)._
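The vertical component appearing here is the familiar coordinate expression; as a check (a standard computation, independent of the bundle-theoretic formulation),

\[X^{\lambda}\partial_{\lambda}g_{\mu\nu}+g_{\lambda\nu}\partial_{\mu}X^{\lambda}+g_{\mu\lambda}\partial_{\nu}X^{\lambda}=\nabla_{\mu}X_{\nu}+\nabla_{\nu}X_{\mu},\]

which follows by expanding the right-hand side with \(X_{\mu}=g_{\mu\nu}X^{\nu}\) and \(2\Gamma^{\rho}_{\mu\nu}g_{\rho\lambda}=\partial_{\mu}g_{\nu\lambda}+\partial_{\nu}g_{\mu\lambda}-\partial_{\lambda}g_{\mu\nu}\).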
**Example 3.4**.: _Let \(A(M)\) denote the bundle of affine connections over \(M\). One can show that it is a natural bundle of order \(2\). The vertical bundle of \(A(M)\) can be identified with \(A(M)\times_{M}(\mathrm{End}(M)\otimes T^{*}M)\), where \((\mathrm{End}(M)\otimes T^{*}M)\) is the bundle of tensors of type \((1,2)\) over \(M\). The Lie derivative of an affine connection \(\Gamma=\Gamma_{\mu\nu}^{\rho}\partial_{\rho}\otimes dx^{\mu}\otimes dx^{\nu}\) along a vector field \(X=X^{\mu}\partial_{\mu}\) is locally given by [1]
\[\mathrm{Lie}_{X}\Gamma_{\mu\nu}^{\rho}=(\Gamma_{\mu\nu}^{\rho},\nabla_{\mu}\nabla_{\nu}X^{\rho}+X^{\lambda}R_{\mu\lambda\nu}^{\rho}-2\nabla_{\mu}(\Gamma_{[\nu\lambda]}^{\rho}X^{\lambda})),\]
_where \(\nabla_{\mu}X^{\rho}=\partial_{\mu}X^{\rho}+\Gamma_{\mu\nu}^{\rho}X^{\nu}\) and \(\Gamma_{[\mu\nu]}^{\rho}=\frac{1}{2}(\Gamma_{\mu\nu}^{\rho}-\Gamma_{\nu\mu}^{\rho})\), i.e., the torsion of \(\Gamma\)._
### Variational bicomplex
#### 3.2.1 Variational bicomplex of a gauge natural bundle
Let \(M\) be an \(n\)-dimensional (compact) manifold. Let \(P\) be a principal \(G\)-bundle over \(M\). Let \(\pi:Y\to M\) be a gauge natural bundle associated to \(P\) with fiber \(Z\) being an \(m\)-dimensional
manifold. Let \(\Gamma(Y)\) be the space of sections of \(Y\). Let's consider the de Rham complex \(\Omega(M\times\Gamma(Y))\) of differential forms on \(M\times\Gamma(Y)\). It is a bicomplex bigraded according to the product structure of \(M\times\Gamma(Y)\). We can write
\[\Omega(M\times\Gamma(Y))=\bigoplus_{p,q}\Omega^{p,q}(M\times\Gamma(Y)).\]
Correspondingly, the de Rham differential \(d_{tot}\) on \(M\times\Gamma(Y)\) breaks into two parts \(d_{tot}=d+\delta\), where \(d\) is the de Rham differential on \(M\) and \(\delta\) is the de Rham differential on \(\Gamma(Y)\).
Let \(J^{\infty}Y\) be the infinite jet bundle of \(Y\) over \(M\). Let ev be the evaluation map from \(M\times\Gamma(Y)\) to \(J^{\infty}Y\), i.e.,
\[\operatorname{ev}:M\times\Gamma(Y) \to J^{\infty}Y\] \[(x,\Phi) \mapsto j^{\infty}(\Phi)(x),\]
where \(j^{\infty}(\Phi)\) is the infinite jet prolongation of \(\Phi\). The pull-back \(\operatorname{ev}^{*}\Omega(J^{\infty}Y)\) is stable under both \(d\) and \(\delta\), hence a sub-bicomplex, which is denoted by \(\Omega_{loc}(M\times\Gamma(Y))\)[22].
**Definition 3.4**.: \(\Omega_{loc}(M\times\Gamma(Y))\) _is called the variational bicomplex of \(Y\). Elements in \(\Omega_{loc}(M\times\Gamma(Y))\) are called local forms. \(d\) and \(\delta\) restricted to \(\Omega_{loc}(M\times\Gamma(Y))\) are called the horizontal differential, denoted by \(d_{h}\), and the vertical differential, denoted by \(d_{v}\), respectively._
A vector field \(\Xi\) over \(M\times\Gamma(Y)\) is a map \(\Xi:M\times\Gamma(Y)\to TM\times\Gamma(VY)\). \(\Xi\) is called local if it projects on a vector field \(\xi\) over \(J^{\infty}Y\), i.e., if \(\Xi\) is \(\operatorname{ev}\)-related to \(\xi\) in the sense that \(T\operatorname{ev}\circ\Xi=\xi\circ\operatorname{ev}\).
A local vector field \(\Xi\) is called (strictly) vertical if it is of the form \((0,\Xi^{\prime})\) where \(\Xi^{\prime}\) is a vector field over \(\Gamma(Y)\), and (strictly) horizontal if it is of the form \((X,0)\) where \(X\) is a vector field over \(M\).
**Remark 3.1**.: _Let \(\iota_{\Xi}\) denote the contraction of the local vector field \(\Xi\). Obviously, we have_
\[[\iota_{\Xi},d_{h}]=0\]
_if \(\Xi\) is vertical and_
\[[\iota_{\Xi},d_{v}]=0\]
_if \(\Xi\) is horizontal. It follows that the Lie derivative \(\operatorname{Lie}_{\Xi}\) is of the form \(\operatorname{Lie}_{\Xi}=[d_{v},\iota_{\Xi}]\) if \(\Xi\) is vertical and \(\operatorname{Lie}_{\Xi}=[d_{h},\iota_{\Xi}]\) if \(\Xi\) is horizontal._
**Example 3.5**.: _Since \(Y\) is a gauge natural bundle, every \(G\)-invariant vector field over \(P\) induces naturally a vertical local vector field over \(\Gamma(Y)\). In particular, if \(Y\) is a natural bundle, a vector field \(X\) over \(M\) induces a vertical local vector field \(\xi_{X}\) over \(\Gamma(Y)\)._
On the other hand, every vector field \(X\) over \(M\) can be lifted to a vector field \(\widehat{X}\) over \(J^{\infty}(Y)\) via the Cartan connection; under the canonical projection \(J^{\infty}Y\to M\), \(\widehat{X}\) projects onto \(X\). One can show that the lift of \(\widehat{X}\) to \(M\times\Gamma(Y)\) is exactly of the form \((X,0)\). With a slight abuse of notation, we denote \((X,0)\) as \(\widehat{X}\).
Let \(U\) be an open subset of \(M\) such that \(\pi^{-1}U\cong U\times Z\). Let \(V\) be a coordinate chart of \(Z\). \(Y\) can then be covered by coordinate charts of the form \(U\times V\) with coordinate functions \(x^{1},\ldots,x^{n},u^{1},\ldots,u^{m}\). Let \(\mathcal{W}(U,V)\) be the set of pairs \((x,\Phi)\) such that \(x\in U\) and \(\Phi(x^{\prime})\in V\) for all \(x^{\prime}\in U\); it is an open subset of \(M\times\Gamma(Y)\). We can explicitly define functions \(x^{\mu}\) and \(u_{I}^{j}\) on \(\mathcal{W}(U,V)\) by setting
\[x^{\mu}(x,\Phi)=x^{\mu}(x),\quad u_{I}^{j}(x,\Phi)=\partial_{I}(u^{j}(\Phi(x) )),\]
where \(\partial_{I}\) is the partial derivative in \(x^{\mu}\) with respect to the multi-index \(I=(\mu_{1},\ldots,\mu_{n})\). By definition, a local function on \(\mathcal{W}(U,V)\) depends only on finitely many of \(x^{\mu}\) and \(u_{I}^{j}\). In particular, \(x^{\mu}\) and \(u_{I}^{j}\) themselves are local functions. Their derivatives \(dx^{\mu}\) and \(\delta u_{I}^{j}\) can be viewed as local forms of degree \((1,0)\) and \((0,1)\), respectively. One can write any local \((k,l)\)-form \(\omega\) as a finite sum
\[\omega=f_{\mu_{1},\ldots,\mu_{k},j_{1},\ldots,j_{l}}^{I_{1},\ldots,I_{l}}dx^{ \mu_{1}}\wedge\cdots\wedge dx^{\mu_{k}}\wedge\delta u_{I_{1}}^{j_{1}}\wedge \cdots\wedge\delta u_{I_{l}}^{j_{l}},\]
where each \(f_{\mu_{1},\ldots,\mu_{k},j_{1},\ldots,j_{l}}^{I_{1},\ldots,I_{l}}\) is a local function. On the other hand, one can show that every horizontal local vector field is of the form
\[\widehat{X}=X^{\mu}\partial_{\mu}+X^{\mu}u_{I\cup\{\mu\}}^{j}\frac{\partial}{ \partial u_{I}^{j}} \tag{3.3}\]
and every vertical local vector field is of the form
\[\Xi=\Xi^{j}\frac{\partial}{\partial u^{j}}+\widehat{\partial_{I}}(\Xi^{j}) \frac{\partial}{\partial u_{I}^{j}}. \tag{3.4}\]
The differentials \(d_{h}\) and \(d_{v}\) can then be expressed as
\[d_{h}=dx^{\mu}\wedge\operatorname{Lie}_{\widehat{\partial_{\mu}}},\quad d_{v }=\delta u_{I}^{j}\wedge\operatorname{Lie}_{\frac{\partial}{\partial u_{I}^{j }}}. \tag{3.5}\]
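Concretely (a sketch in the local coordinates above), for a local function \(f=f(x^{\mu},u^{j},u^{j}_{\nu})\) depending only on the fields and their first derivatives, (3.3)-(3.5) give

\[d_{h}f=\Big(\partial_{\mu}f+u^{j}_{\mu}\frac{\partial f}{\partial u^{j}}+u^{j}_{\nu\mu}\frac{\partial f}{\partial u^{j}_{\nu}}\Big)dx^{\mu},\qquad d_{v}f=\frac{\partial f}{\partial u^{j}}\,\delta u^{j}+\frac{\partial f}{\partial u^{j}_{\nu}}\,\delta u^{j}_{\nu},\]

so \(d_{h}\) differentiates along \(M\) through the jet coordinates (the total derivative), while \(d_{v}\) varies the fields at a fixed point of \(M\).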
**Definition 3.5**.: _A local form \(\omega\) is called invariant with respect to a \(G\)-invariant vector field \(\Xi\in\mathfrak{X}_{inv}(P)\) which covers a vector field \(X\in\mathfrak{X}_{P}(M)\) if_
\[\operatorname{Lie}_{\widehat{X}-\Xi}\omega=0.\]
\(\omega\) _is said to be invariant if it is invariant with respect to all \(\Xi\in\mathfrak{X}_{inv}(P)\). In particular, if \(Y\) is a natural bundle, \(\omega\) is called invariant with respect to a vector field \(X\in\mathfrak{X}(M)\) if_
\[\operatorname{Lie}_{\widehat{X}-\xi_{X}}\omega=0.\]
_where \(\xi_{X}\) is defined as in Example 3.5. \(\omega\) is said to be invariant if it is invariant with respect to all \(X\in\mathfrak{X}(M)\)._
By definition, the set \(\Omega_{loc,inv}(M\times\Gamma(Y))\) of invariant local forms over \(M\times\Gamma(Y)\) is a subbicomplex of the variational bicomplex of \(Y\).
**Remark 3.2**.: _Let \((U,P|_{U}\cong U\times G)\) be a local trivialization of \(P\) over an open subset \(U\) of \(M\) with coordinates \(x^{\mu}\). Obviously, every local vector field over \(U\) can be lifted to a \(G\)-invariant vector field over \(P|_{U}\). Locally, the one-parameter group \((\Phi_{t},\phi_{t})\) generated by the lift of \(\partial_{\mu}\) is nothing but the translation \(x\mapsto x+te_{\mu}\) in the \(\mu\)-th coordinate direction, lifted trivially to \(P|_{U}\). It follows from Proposition 3.1 that_
\[\mathrm{Lie}_{\xi_{\partial_{\mu}}}=\mathrm{Lie}_{u^{j}_{I\cup\{\mu\}}\frac{ \partial}{\partial u^{j}_{I}}}.\]
_(Use \(u^{j}_{I\cup\{\mu\}}(x,\Phi)=\lim_{t\to 0}\frac{u^{j}_{I}(x+te_{\mu},\Phi)-u^{j}_{I}(x,\Phi)}{t}\).) Therefore, we have [10, Example 2.17]_
\[\mathrm{Lie}_{\widehat{\partial_{\mu}}-\xi_{\partial_{\mu}}}=\mathrm{Lie}_{ \partial_{\mu}}.\]
_In other words, an invariant local form must be locally independent of the base coordinates \(x^{\mu}\). The converse is not true. Let \(\mathring{\Omega}_{loc}(U\times\Gamma(Y|_{U}))\) denote the set of local forms that are independent of \(x^{\mu}\). We then have_
\[\Omega_{loc,inv}(U\times\Gamma(Y|_{U}))\subsetneq\mathring{\Omega}_{loc}(U\times\Gamma(Y|_{U}))\subsetneq\Omega_{loc}(U\times\Gamma(Y|_{U})).\]
Let \(d_{h,inv}:=dx^{\mu}\wedge\mathrm{Lie}_{\xi_{\partial_{\mu}}}\) and \(K_{0}:=dx^{\mu}\wedge\iota_{\xi_{\partial_{\mu}}}\). They preserve \(\mathring{\Omega}_{loc}(U\times\Gamma(Y|_{U}))\) because \([\xi_{\partial_{\mu}},\xi_{\partial_{\nu}}]=\xi_{[\partial_{\mu},\partial_{\nu}]}=0\). Note that the expressions for \(d_{h,inv}\) and \(K_{0}\) only make sense locally since the assignment \(\mathfrak{X}(U)\ni X\mapsto\xi_{X}\) is not \(C^{\infty}(U)\)-linear.
By definition, \(d_{v}\), \(d_{h,inv}\), and \(K_{0}\) satisfy the following relations
\[[d_{v},d_{v}]=[d_{v},d_{h,inv}]=[d_{h,inv},d_{h,inv}]=[K_{0},K_{0}]=[K_{0},d_{ h,inv}]=0,\quad[d_{v},K_{0}]=d_{h,inv}. \tag{3.6}\]
The last relation tells us that \(K_{0}\) can be interpreted as a homotopy operator and \(d_{h,inv}\) is locally homotopy equivalent to the zero differential.
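Indeed, the last relation is just Cartan's formula applied to the vertical local vector fields \(\xi_{\partial_{\mu}}\): by Remark 3.1, \(\operatorname{Lie}_{\xi_{\partial_{\mu}}}=[d_{v},\iota_{\xi_{\partial_{\mu}}}]\), so (a sketch with the sign conventions above)

\[[d_{v},K_{0}]=dx^{\mu}\wedge[d_{v},\iota_{\xi_{\partial_{\mu}}}]=dx^{\mu}\wedge\operatorname{Lie}_{\xi_{\partial_{\mu}}}=d_{h,inv}.\]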
**Proposition 3.2**.: \(d_{h,inv}|_{\mathring{\Omega}_{loc}(U\times\Gamma(Y|_{U}))}=d_{h}|_{\mathring{\Omega}_{loc}(U\times\Gamma(Y|_{U}))}\)_._
Proof.: This follows directly from (3.5) and the definition of \(\mathring{\Omega}_{loc}\).
In particular, we have \(d_{h,inv}|_{\Omega_{loc,inv}(U\times\Gamma(Y|_{U}))}=d_{h}|_{\Omega_{loc,inv}(U\times\Gamma(Y|_{U}))}\). It follows that \(d_{h,inv}\) is globally well-defined when restricted to the subbicomplex of invariant local forms.
#### 3.2.2 Variational bicomplex of a graded gauge natural bundle
The previous discussion can be easily generalized to the graded case.
**Definition 3.6**.: _A graded gauge natural bundle over \(M\) is a composite fiber bundle \(Y\to Y_{0}\to M\) over \(M\) where \(Y_{0}\to M\) is an ordinary gauge natural bundle over \(M\) and \(Y\to Y_{0}\) is a gauge natural graded vector bundle over \(Y_{0}\)._
**Example 3.6**.: \(Y=V[1]Y_{0}\to Y_{0}\to M\) _where \(V[1]Y_{0}\) is the vertical bundle of the gauge natural bundle \(Y_{0}\to M\) shifted by degree \(1\)._
The infinite jet bundle \(J^{\infty}Y\) of \(Y\) (as a bundle over \(M\)) is again a graded gauge natural bundle \(J^{\infty}Y\to J^{\infty}(Y_{0})\to M\) where \(J^{\infty}Y\to J^{\infty}Y_{0}\) is induced from the bundle morphism \(Y\to Y_{0}\) of fiber bundles over \(M\), and the grading of the fibers of \(J^{\infty}Y\) is induced from the grading of the fibers of \(Y\). On the other hand, \(\Gamma(Y)\) can be viewed as a graded vector bundle over \(\Gamma(Y_{0})\) with fiber \(\Gamma(\varphi^{*}Y)\) at the point \(\varphi\in\Gamma(Y_{0})\). It follows that \(M\times\Gamma(Y)\) can be viewed as a graded vector bundle over \(M\times\Gamma(Y_{0})\). The evaluation map
\[\operatorname{ev}:M\times\Gamma(Y) \to J^{\infty}Y\] \[(x,\Phi) \mapsto j^{\infty}(\Phi)(x)\]
together with the evaluation map
\[\operatorname{ev}_{0}:M\times\Gamma(Y_{0}) \to J^{\infty}Y_{0}\] \[(x,\Phi_{0}) \mapsto j^{\infty}(\Phi_{0})(x)\]
gives us a morphism of graded vector bundles, which induces a morphism of the corresponding graded manifolds.
The evaluation map \(\operatorname{ev}\) can be again used to define local forms and local vector fields over the graded manifold \(M\times\Gamma(Y)\). The only subtleties of this generalization are
1. \(\Omega_{loc}(M\times\Gamma(Y))\) is trigraded. We write \(\Omega_{loc}(M\times\Gamma(Y))=\bigoplus_{p,q,r}\Omega_{loc}^{p,q,r}(M\times \Gamma(Y))\), where \(p\) and \(q\) are the horizontal and vertical form degree, respectively, and \(r\) is the ghost number degree. \(d_{h}\) and \(d_{v}\) are of degrees \((1,0,0)\) and \((0,1,0)\), respectively. In local coordinates, we should assign degree \((0,0,d(u^{j}))\) to the local function \(u^{j}_{I}\) and degree \((0,1,d(u^{j}))\) to the local form \(\delta u^{j}_{I}\) where \(d(u^{j})\) is the degree of \(u^{j}\) induced from the grading of \(Y\).
2. A local function should be a graded polynomial in \(u^{j}_{I}\) when \(d(u^{j})\neq 0\). We also need to fix a notion of commutativity for local forms. Our convention will be \[dx^{\mu}\wedge\delta u^{j}_{I}=\delta u^{j}_{I}\wedge dx^{\mu},\quad\delta u^{j}_{I}\wedge\delta u^{j^{\prime}}_{I^{\prime}}=(-1)^{(d(u^{j})+1)(d(u^{j^{\prime}})+1)}\delta u^{j^{\prime}}_{I^{\prime}}\wedge\delta u^{j}_{I}.\]
3. While horizontal local vector fields over \(M\times\Gamma(Y)\) remain ungraded, vertical local vector fields are graded. In local coordinates, we should assign degree \((0,0,-d(u^{j}))\) to \(\frac{\partial}{\partial u^{j}_{I}}\) and degree \((0,-1,-d(u^{j}))\) to the contraction \(\iota_{\frac{\partial}{\partial u^{j}_{I}}}\).
4. The Cartan calculus of the vertical part of \(\Omega_{loc}(M\times\Gamma(Y))\) should also be modified properly. More precisely, we choose the bracket \([\cdot,\cdot]\) between \(d_{v}\), \(\iota_{\Xi}\), and \(\operatorname{Lie}_{\Xi}\) such that \[[\iota_{\Xi_{1}},\iota_{\Xi_{2}}]=\iota_{\Xi_{1}}\iota_{\Xi_{2}}-(-1)^{(-1+d(\Xi_{1}))(-1+d(\Xi_{2}))}\iota_{\Xi_{2}}\iota_{\Xi_{1}}=0,\] \[\operatorname{Lie}_{\Xi}=[\iota_{\Xi},d_{v}]=\iota_{\Xi}d_{v}-(-1)^{(-1+d(\Xi))}d_{v}\iota_{\Xi}=\iota_{\Xi}d_{v}+(-1)^{d(\Xi)}d_{v}\iota_{\Xi}.\] (\(\operatorname{Lie}_{\Xi}\) is given by \([\iota_{\Xi},d_{v}]\) instead of \([d_{v},\iota_{\Xi}]\) because one needs \(\operatorname{Lie}_{\Xi}f=\Xi(f)\) for a function \(f\).) Moreover, our sign convention also implies that \(d_{h}\) should commute with \(d_{v}\), \(\iota_{\Xi}\), and \(\operatorname{Lie}_{\Xi}\) without producing any sign factors.
**Remark 3.3**.: _It is not hard to see that \(\Omega_{loc}^{p,q}(M\times\Gamma(Y_{0}))\cong\Omega_{loc}^{p,0,q}(M\times\Gamma(V[1]Y_{0}))\) for an ordinary gauge natural bundle \(Y_{0}\) over \(M\). More generally, \(\Omega_{loc}^{p,q,r}(M\times\Gamma(Y))\cong\Omega_{loc}^{p,0,q+r}(M\times\Gamma(V[1]Y))\) for a graded gauge natural bundle \(Y\) over \(M\)._
We also have a well-defined notion of invariant local forms in the graded case. Let again \(\Omega_{loc,inv}(M\times\Gamma(Y))\) denote the bicomplex of invariant local forms. Every result we established for \(\Omega_{loc,inv}(M\times\Gamma(Y))\) in the ungraded case remains valid in the graded case. In particular, \(d_{h}\) is homotopic to \(0\) on \(\Omega_{loc,inv}(M\times\Gamma(Y))\), i.e., there exists a homotopy operator \(K_{0}\) such that \(K_{0}d_{v}+d_{v}K_{0}=d_{h,inv}\), where \(d_{h,inv}\) and \(K_{0}\) are given by the same definitions.
**Remark 3.4**.: _From now on, we will frequently use the shorthand notation \(\Omega_{loc}^{p,q,r}\) to refer to \(\Omega_{loc}^{p,q,r}(M\times\Gamma(Y))\) and \(\Omega_{loc}^{p,q}\) to denote \(\bigoplus_{r}\Omega_{loc}^{p,q,r}\), provided there is no potential for confusion._
### Lagrangian field theory
Let \(Y\to Y_{0}\to M\) be a graded gauge natural bundle over \(M\).
**Definition 3.7**.: _A (local) Lagrangian field theory (LFT) is a triple \((M,Y,\mathcal{L})\) where \(\mathcal{L}\in\Omega_{loc}^{n,0,0}(M\times\Gamma(Y))\).4\(M\) is called the spacetime manifold, \(Y\) is called the configuration bundle, and \(\mathcal{L}\) is called the Lagrangian. The theory is called generally covariant if \(\mathcal{L}\) is an invariant local form._
Footnote 4: If \(M\) is not compact, we require that \(\mathcal{L}\) has compact support along \(M\).
Recall that the interior Euler operator \(\mathcal{I}:\Omega_{loc}^{n,s}\to\Omega_{loc}^{n,s}\), \(s\geq 1\), is defined by setting \(\mathcal{I}\omega=\frac{1}{s}\,\delta u^{j}\wedge\big(\iota_{\frac{\partial}{\partial u^{j}}}\omega+(-1)^{|I|}\widehat{\partial}_{I}(\iota_{\frac{\partial}{\partial u^{j}_{I}}}\omega)\big),\) where the sum runs over all multi-indices \(I\) with \(|I|\geq 1\) and \(\widehat{\partial}_{I}\) denotes the iterated total derivative. \(\mathcal{I}\) has the properties
\[\mathcal{I}^{2}=\mathcal{I},\quad\mathcal{I}\circ d_{h}=0.\]
The subspace of \(\Omega_{loc}^{n,s}\) that is fixed by \(\mathcal{I}\) is denoted by \(\mathcal{F}^{s}\). Elements in \(\mathcal{F}^{s}\) for \(s>1\) are called functional forms. Elements in \(\mathcal{F}^{1}\) are called source forms. Note that every source form \(\alpha\) can be locally written as \(\alpha=\alpha_{j}\delta u^{j}\wedge\nu\), where \(\alpha_{j}\) is a local function and \(\nu\) is a volume form over \(M\).
**Lemma 3.1**.: [10, Theorem 6] _The interior rows of the augmented variational bicomplex_
\[0\longrightarrow\Omega_{loc}^{0,s}\xrightarrow{d_{h}}\Omega_{loc}^{1,s} \xrightarrow{d_{h}}\cdots\xrightarrow{d_{h}}\Omega_{loc}^{n,s}\xrightarrow{ \mathcal{I}}\mathcal{F}^{s}\longrightarrow 0,\]
_with \(s\geq 1\), are globally exact. Moreover, the cohomology groups of the following cochain complex_
\[0\longrightarrow\Omega_{loc}^{0,0}\xrightarrow{d_{h}}\Omega_{loc}^{1,0} \xrightarrow{d_{h}}\cdots\xrightarrow{d_{h}}\Omega_{loc}^{n,0} \xrightarrow{\mathcal{I}\circ d_{v}}\mathcal{F}^{1}\xrightarrow{\mathcal{I} \circ d_{v}}\mathcal{F}^{2}\xrightarrow{\mathcal{I}\circ d_{v}}\cdots\]
_is isomorphic to the de Rham cohomology of the fiber bundle \(Y_{0}\)._
We also need an infinitesimal version of the first part of Lemma 3.1. A local form \(\omega\in\Omega_{loc}^{r,s}\) is said to be \(d_{h}\)-closed at \(\Phi\in\Gamma(Y)\) if \(d_{h}\omega\) vanishes at \(\Phi\). It is said to be \(d_{h}\)-exact at \(\Phi\) if there exists another local form \(\omega^{\prime}\in\Omega_{loc}^{r-1,s}\) such that \(\omega-d_{h}\omega^{\prime}\) vanishes at \(\Phi\).
**Lemma 3.2**.: [11, Proposition 6.3.23] _For \(r<n\) and \(s\geq 1\), \(\omega\) is \(d_{h}\)-closed at \(\Phi\) if and only if it is \(d_{h}\)-exact at \(\Phi\)._
Let \(\mathcal{E}:=\mathcal{I}\circ d_{v}:\Omega_{loc}^{n,0}\to\Omega_{loc}^{n,1}\). \(\mathcal{E}\) is known as the Euler-Lagrange operator. Let \(EL=\mathcal{E}(\mathcal{L})\). \(EL\) is known as the Euler-Lagrange form of the LFT.
**Corollary 3.1**.: \(d_{v}\mathcal{L}=EL+d_{h}\gamma\) _for some \(\gamma\in\Omega_{loc}^{n-1,1,0}\)._
\(\gamma\) is known as the boundary form of the LFT. By Lemma 3.1, \(\gamma\) is unique up to a \(d_{h}\)-exact term for a fixed \(\mathcal{L}\).
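To make \(EL\) and \(\gamma\) concrete, consider a first-order Lagrangian in the local coordinates of Section 3.2 (a standard illustration; the specific form of \(L\) below is assumed only for this sketch): \(\mathcal{L}=L(x^{\mu},u^{j},u^{j}_{\mu})\,\nu\) with \(\nu=dx^{1}\wedge\cdots\wedge dx^{n}\). Then

\[EL=\Big(\frac{\partial L}{\partial u^{j}}-\widehat{\partial}_{\mu}\frac{\partial L}{\partial u^{j}_{\mu}}\Big)\,\delta u^{j}\wedge\nu,\qquad \gamma=\frac{\partial L}{\partial u^{j}_{\mu}}\,\delta u^{j}\wedge\iota_{\partial_{\mu}}\nu,\]

where \(\widehat{\partial}_{\mu}\) denotes the total derivative of (3.3); the identity \(d_{v}\mathcal{L}=EL+d_{h}\gamma\) is then the usual integration by parts, and \(EL=0\) reproduces the Euler-Lagrange equations.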
**Definition 3.8**.: _The action functional \(S\in C^{\infty}(\Gamma(Y))\) of a \(LFT\)\((M,Y,\mathcal{L})\) is defined as_
\[S(\Phi):=\int_{M}\mathcal{L}(x,\Phi).\]
_A field \(\Phi\in\Gamma(Y)\) is called on-shell if it is a critical point of \(S\), i.e., if \(\delta S(\Phi)=0\)._
Let \(C_{\mathcal{L}}\subset\Gamma(Y)\) denote the space of on-shell fields. We have
\[\delta S=\int_{M}d_{v}\mathcal{L}=\int_{M}EL.\]
It follows that \(\Phi\in C_{\mathcal{L}}\) if \(\iota_{\delta_{\Phi}}EL(x,\Phi)\) vanishes identically over \(M\), where \(\delta_{\Phi}\) is a tangent vector in \(T_{\Phi}\Gamma(Y)\). In fact, one can show that \(\Phi\) is on-shell if and only if \((\iota_{\Xi}EL)(x,\Phi)\) vanishes identically over \(M\) for all vertical local vector field \(\Xi\) over \(M\times\Gamma(Y)\)[10].
Note that every vertical vector field over \(M\times\Gamma(Y)\) can be viewed canonically as a vector field over \(\Gamma(Y)\) by definition. If \(Y\) is a natural bundle and the LFT is generally covariant, we have
\[\mathrm{Lie}_{\xi_{X}}S=\int_{M}\mathrm{Lie}_{\xi_{X}}\mathcal{L}=\int_{M} \mathrm{Lie}_{\widehat{X}}\mathcal{L}=\int_{M}d(\iota_{\widehat{X}}\mathcal{L })=0\]
for all \(X\in\mathfrak{X}(M)\), i.e., \(S\) is \(\mathrm{Diff}(M)\)-invariant.
#### 3.3.1 Noether theorem
Let \((M,Y,\mathcal{L})\) be a LFT. A Noether current \(j\) is an element in \(\Omega_{loc}^{n-1,0}\) such that \(d_{h}j=\iota_{\Xi}EL\) for some vertical local vector field \(\Xi\). \((j,\Xi)\) is called a Noether pair [10]. Given two Noether pairs \((j_{i},\Xi_{i})\), \(i=1,2\), one can define their bracket to be
\[\{(j_{1},\Xi_{1}),(j_{2},\Xi_{2})\}=(\mathrm{Lie}_{\Xi_{1}}j_{2}-(-1)^{d(\Xi _{1})d(\Xi_{2})}\mathrm{Lie}_{\Xi_{2}}j_{1},[\Xi_{1},\Xi_{2}]).\]
Note that
\[\iota_{[\Xi_{1},\Xi_{2}]}EL=[\mathrm{Lie}_{\Xi_{1}},\iota_{\Xi_{2}}]EL=d_{h} (\mathrm{Lie}_{\Xi_{1}}j_{2}-(-1)^{d(\Xi_{1})d(\Xi_{2})}\mathrm{Lie}_{\Xi_{2 }}j_{1}).\]
\(\{(j_{1},\Xi_{1}),(j_{2},\Xi_{2})\}\) is again a Noether pair. In other words, Noether pairs together with the bracket \(\{\cdot,\cdot\}\) form a (graded) Lie (super)algebra and the map which sends \(\Xi\) to \((j,\Xi)\) is a (graded) Lie (super)algebra homomorphism.
A vertical local vector field \(\Xi\) is said to be a (Noether) symmetry of the LFT if there exists an element \(\alpha\in\Omega_{loc}^{n-1,0}\) such that \(\mathrm{Lie}_{\Xi}\mathcal{L}=d_{h}\alpha\). \(\alpha\) is unique up to a \(d_{h}\)-closed term for a fixed \(\mathcal{L}\). Moreover, if the ghost number degree of \(\Xi\) is non-zero, \(\alpha\) is unique up to a \(d_{h}\)-exact term by Lemma 3.1.
**Theorem 3.1**.: _Let \(j:=\alpha-\iota_{\Xi}\gamma\). \(j\) is a Noether current._
Proof.: \(d_{h}j=d_{h}\alpha-\iota_{\Xi}d_{h}\gamma=\mathrm{Lie}_{\Xi}\mathcal{L}- \iota_{\Xi}(d_{v}\mathcal{L}-EL)=\iota_{\Xi}EL\)
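In the first-order example sketched after Corollary 3.1 (same notation; the splitting \(\alpha=\alpha^{\mu}\,\iota_{\partial_{\mu}}\nu\) is an assumption of the sketch), the Noether current takes its textbook form:

\[j=\alpha-\iota_{\Xi}\gamma=\Big(\alpha^{\mu}-\Xi^{j}\frac{\partial L}{\partial u^{j}_{\mu}}\Big)\iota_{\partial_{\mu}}\nu,\]

using \(\iota_{\Xi}\delta u^{j}=\Xi^{j}\).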
Let \(\Xi\) be a Noether symmetry of the LFT with \(\text{Lie}_{\Xi}\mathcal{L}=d_{h}\alpha\). We have
\[\Xi(S)=\int_{M}\text{Lie}_{\Xi}\mathcal{L}=\int_{M}d_{h}\alpha=0,\]
i.e., \(S\) is invariant under \(\Xi\). Let \(\Sigma\) be a hypersurface in \(M\). The integral
\[\mathcal{J}_{\Sigma}(\Phi):=\int_{\Sigma}j(x,\Phi)\]
is known as the Noether charge associated to \((j,\Xi)\). Note that for an on-shell \(\Phi\), \(d_{h}j(x,\Phi)=(\iota_{\Xi}EL)(x,\Phi)=0\). It follows that \(\mathcal{J}_{\Sigma}(\Phi)=\mathcal{J}_{\Sigma^{\prime}}(\Phi)\) when \(\Sigma\) and \(\Sigma^{\prime}\) are two homologous hypersurfaces. Let \(\omega_{\Sigma}(\Phi):=\int_{\Sigma}(d_{v}\gamma)(x,\Phi)\). Since \(\delta\omega_{\Sigma}=0\), \(\omega_{\Sigma}\) defines a presymplectic form over \(\Gamma(Y)\). We have
\[\delta\mathcal{J}_{\Sigma}=\int_{\Sigma}d_{v}j=\int_{\Sigma}(d_{v}\alpha- \text{Lie}_{\Xi}\gamma+\iota_{\Xi}d_{v}\gamma)=\iota_{\Xi}\omega_{\Sigma}+ \int_{\Sigma}(d_{v}\alpha-\text{Lie}_{\Xi}\gamma).\]
By Corollary 3.1 and \(\mathrm{Lie}_{\Xi}\mathcal{L}=d_{h}\alpha\), we have
\[d_{h}(d_{v}\alpha-\text{Lie}_{\Xi}\gamma)=d_{v}\text{Lie}_{\Xi}\mathcal{L}-d _{h}\text{Lie}_{\Xi}\gamma=\text{Lie}_{\Xi}EL.\]
One can show that \((\text{Lie}_{\Xi}EL)(x,\Phi)\) vanishes over \(M\) for an on-shell \(\Phi\). By Lemma 3.2, \(d_{v}\alpha-\text{Lie}_{\Xi}\gamma\) is \(d_{h}\)-exact on-shell. Therefore, we have
\[(\delta\mathcal{J}_{\Sigma}-\iota_{\Xi}\omega_{\Sigma})\left|{}_{C_{\mathcal{ L}}}=0.\right.\]
In other words, \(\Xi|_{C_{\mathcal{L}}}\)5 is the Hamiltonian vector field associated to the presymplectic form \(\omega_{\Sigma}|_{C_{\mathcal{L}}}\).
Footnote 5: This restriction makes sense because \(\Xi(S)=0\).
**Remark 3.5**.: _Let \(\{\cdot,\cdot\}_{C_{\mathcal{L}}}\) denote the Poisson bracket associated to \(\omega_{\Sigma}|_{C_{\mathcal{L}}}\). Let \(\Xi_{1}\) and \(\Xi_{2}\) be two Noether symmetries of the LFT. Let \(\mathcal{J}_{\Sigma,i}\) be the Noether charges associated to the Noether pairs \((j_{i},\Xi_{i})\), \(i=1,2\). We have_
\[\{\mathcal{J}_{\Sigma,1},\mathcal{J}_{\Sigma,2}\}_{C_{\mathcal{L}}}=\Xi_{1}( \mathcal{J}_{\Sigma,2})=-(-1)^{d(\Xi_{1})d(\Xi_{2})}\Xi_{2}(\mathcal{J}_{ \Sigma,1})=\frac{1}{2}\int_{\Sigma}(\text{Lie}_{\Xi_{1}}j_{2}-(-1)^{d(\Xi_{ 1})d(\Xi_{2})}\text{Lie}_{\Xi_{2}}j_{1}).\]
_In other words, up to a factor \(2\), the bracket \(\{\cdot,\cdot\}\) between Noether pairs can be viewed as an off-shell extension of the Poisson bracket between Noether charges._
## 4 Cohomological Lagrangian field theory
### \(QKG^{\star}\)-structures
Let \(Y\) be a gauge natural bundle associated to the principal \(G\)-bundle \(P\) over \(M\). Let \(\text{Aut}(P)^{\star}\) denote the Cartan graded Lie supergroup associated to the automorphism group \(\text{Aut}(P)\) of \(P\). There is a canonical sub-supergroup \(\text{Gau}(P)^{\star}\) of \(\text{Aut}(P)^{\star}\), which is the Cartan graded Lie supergroup associated to the gauge group \(\text{Gau}(P)\) of \(P\). Note that in the case of a trivial group \(G=\text{Id}\), \(\text{Gau}(P)^{\star}\) is trivial and \(\text{Aut}(P)^{\star}=\text{Diff}(M)^{\star}\), the Cartan graded Lie supergroup associated to the diffeomorphism group \(\text{Diff}(M)\) of \(M\).
**Definition 4.1**.: _A \(QKG^{\star}\)-structure on \(M\times\Gamma(Y)\) is a vertical local \(\operatorname{Aut}(P)^{\star}\)-action on \(M\times\Gamma(Y)\) whose underlying \(\operatorname{Aut}(P)\)-action is the canonical \(\operatorname{Aut}(P)\)-action on \(M\times\Gamma(Y)\). A \(QK^{\star}\)-structure is a \(QKG^{\star}\)-structure with \(G=\operatorname{Id}\)._
**Remark 4.1**.: \(\bigoplus_{r}\Omega^{p,q,r}_{loc}(M\times\Gamma(Y))\) _is an \(\operatorname{Aut}(P)^{\star}\)-algebra, hence also a \(\operatorname{Gau}(P)^{\star}\)-algebra._
Equivalently, a \(QKG^{\star}\)-structure on \(M\times\Gamma(Y)\) is specified by the following data:
1. A vertical local vector field \(Q\) of degree \(1\) over \(M\times\Gamma(Y)\) satisfying \(Q^{2}=0\);
2. A family of vertical local vector fields \(K_{\Xi}\) of degree \(-1\) over \(M\times\Gamma(Y)\), parameterized by \(G\)-invariant vector fields \(\Xi\) over \(P\), satisfying \[[K_{\Xi_{1}},K_{\Xi_{2}}]=0,\quad[\Xi_{1},K_{\Xi_{2}}]=K_{[\Xi_{1},\Xi_{2}]}, \quad[Q,K_{\Xi}]=\Xi,\] where we identify \(\Xi\) as a vertical local vector field of degree \(0\) over \(M\times\Gamma(Y)\) via taking Lie derivatives.
Let \(\Xi\) be a \(G\)-invariant vector field over \(P\). Let \(\{U,P|_{U}\cong U\times G\}\) be a local trivialization of \(P\) equipped with a coordinate system \((x^{\mu})\). Let \(\{\xi_{a}\}\) be a basis of \(\mathfrak{g}\). Recall that \(\Xi\) locally takes the form \(\Xi=\Xi^{\mu}(x)\frac{\partial}{\partial x^{\mu}}+\Xi^{a}(x)\rho_{a}\), where \(\rho_{a}\) is the right invariant vector field generated by \(\xi_{a}\) over \(G\). The second component of \(\Xi\) can be identified with a local section \(\lambda=\Xi^{a}(x)\xi_{a}\) of the adjoint bundle \(\operatorname{ad}P\). Let \(K_{\mu}:=K_{\frac{\partial}{\partial x^{\mu}}}\) and \(I_{\lambda}:=K_{\Xi^{a}\rho_{a}}\). We have
\[[Q,K_{\mu}]=\xi_{\partial_{\mu}},\quad[K_{\mu},K_{\nu}]=0,\quad[ \xi_{\partial_{\mu}},K_{\nu}]=0,\] \[[Q,I_{\lambda}]=\delta_{\lambda},\quad[I_{\lambda},I_{\lambda^{ \prime}}]=0,\quad[\delta_{\lambda},I_{\lambda^{\prime}}]=I_{[\lambda,\lambda^ {\prime}]},\] \[[K_{\mu},I_{\lambda}]=0,\quad[\xi_{\partial_{\mu}},I_{\lambda}]=I _{\partial_{\mu}\lambda},\]
where \(\xi_{\partial_{\mu}}\) is the vertical lift of \(\frac{\partial}{\partial x^{\mu}}\) on \(U\times\Gamma(Y|_{U})\) via taking Lie derivatives, \(\delta_{\lambda}\) is the infinitesimal gauge transformation induced by \(\lambda\), and \(\partial_{\mu}\lambda:=\partial_{\mu}\Xi^{a}\xi_{a}\). Note also that
\[[K_{\mu},\delta_{\lambda}]=[[K_{\mu},Q],I_{\lambda}]-[Q,[K_{\mu},I_{\lambda}]] =[\xi_{\partial_{\mu}},I_{\lambda}]=I_{\partial_{\mu}\lambda}.\]
Let \(K:=dx^{\mu}\wedge\operatorname{Lie}_{K_{\mu}}:\Omega^{p,q,r}_{loc}(U\times \Gamma(Y|_{U}))\to\Omega^{p+1,q,r-1}_{loc}(U\times\Gamma(Y|_{U}))\). Just like \(K_{0}\), \(K\) is only locally well-defined.
**Proposition 4.1**.: \(Q\)_, \(K\), and \(d_{h,inv}\) satisfy the following relations. (With a slight abuse of notation, we often use \(Q\) instead of \(\operatorname{Lie}_{Q}\) to denote the Lie derivative along \(Q\).)_
\[Q^{2}=0,\quad QK+KQ=d_{h,inv},\quad Kd_{h,inv}+d_{h,inv}K=0. \tag{4.1}\]
Proof.: \([Q,K]=dx^{\mu}\wedge\operatorname{Lie}_{[Q,K_{\mu}]}=d_{h,inv},\,[d_{h,inv},K] =dx^{\mu}\wedge dx^{\nu}\wedge\operatorname{Lie}_{[\xi_{\partial_{\mu}},K_{ \nu}]}=0.\)
**Remark 4.2**.: \(Q\)_, \(K\), and \(d_{h,inv}\) together with the relations (4.1) define a \(QK\)-algebra which is studied in details in [11]._
**Lemma 4.1**.: \(K:\bigoplus_{r}\Omega^{p,q,r}_{loc}(U\times\Gamma(Y|_{U}))\to\bigoplus_{r} \Omega^{p+1,q,r}_{loc}(U\times\Gamma(Y|_{U}))\) _is a semi-homotopy of \(\operatorname{Gau}(P)^{\star}\)-algebras._
Proof.: We need to show that \([K,I_{\lambda}]=0\) and that \([K,\delta_{\lambda}]\) vanishes on local forms that are horizontal with respect to the \(\operatorname{Gau}(P)^{\star}\)-action. The first one follows directly from \([K_{\mu},I_{\lambda}]=0\) and the second one follows from \([K_{\mu},\delta_{\lambda}]=I_{\partial_{\mu}\lambda}\).
For the reader's convenience, we summarize the global and local derivations we have defined on \(\Omega_{loc}\) in Table 4.1.
For later use, we compute that
\[[K,\iota_{Q}]=dx^{\mu}\wedge[\mathrm{Lie}_{K_{\mu}},\iota_{Q}]=dx^{ \mu}\wedge\iota_{[K_{\mu},Q]}=K_{0}, \tag{4.2}\] \[[\mathrm{Lie}_{Q},K_{0}]=[\mathrm{Lie}_{Q},[K,\iota_{Q}]]=[d_{h, inv},\iota_{Q}]-[K,[\mathrm{Lie}_{Q},\iota_{Q}]]=0, \tag{4.3}\]
where we use \([d_{h},\iota_{Q}]=0\) and \([\mathrm{Lie}_{Q},\iota_{Q}]=\iota_{[Q,Q]}=0\).
**Definition 4.2**.: _A deformation of a \(QKG^{\star}\)-structure is a family of deformations_
\[K_{\Xi}\mapsto K_{\Xi}+tK^{\prime}_{\Xi}\]
_for \(t\in\mathbb{R}\), where \(K^{\prime}_{\Xi}\) is a vertical local vector field of degree \(-1\) satisfying_
\[[Q,K^{\prime}_{\Xi}]=0,\quad[K^{\prime}_{\Xi_{1}},K^{\prime}_{\Xi_{2}}]=0.\]
_A deformation is said to be vertically compatible with the original \(QKG^{\star}\)-structure if \([K^{\prime}_{\Xi},I_{\lambda}]=0\) for all \(\lambda\in\Gamma(\mathrm{ad}P)\). It is said to be (fully) compatible with the original \(QKG^{\star}\)-structure if \([K^{\prime}_{\Xi},K_{\Xi^{\prime}}]=0\) for all \(\Xi^{\prime}\in\mathfrak{X}_{inv}(P)\)._
Obviously, \(Q\) and \(K_{\Xi}+tK^{\prime}_{\Xi}\) define a new \(QKG^{\star}\)-structure if the deformation is compatible with the original \(QKG^{\star}\)-structure.
### Cohomological Lagrangian field theories and supersymmetries
Let \((M,g)\) be a Riemannian manifold. Let \(Y\) be a graded natural bundle over \(M\). Let \(\mathrm{Iso}(M)\) denote the isometry group of the Riemannian manifold \(M\). Let \(\mathfrak{iso}(M)\) denote the Lie algebra of \(\mathrm{Iso}(M)\), whose elements are Killing vector fields over \(M\).
**Definition 4.3**.: _A cohomological Lagrangian field theory (CohLFT) is a LFT \((M,Y,\mathcal{L})\) such that_
1. \(M\times\Gamma(Y)\) _is equipped with a (deformed)_ \(QK^{\star}\)_-structure;_
2. \(Q\) _is a Noether symmetry of the LFT, i.e., there exists a local form_ \(\alpha_{Q}\) _of degree_ \((n-1,0,1)\) _such that_ \(Q\mathcal{L}=d_{h}\alpha_{Q}\)_._
_A CohLFT is called supersymmetric if \(K_{X}\) is a Noether symmetry of the LFT for all \(X\) in (a nontrivial subalgebra of) \(\mathfrak{iso}(M)\)._
Supersymmetric CohLFTs are usually obtained by applying a trick called topological twisting to supersymmetric LFTs with \(R\)-symmetries [10]. The idea is to "twist" the structure of the super Poincaré algebra via a nontrivial group homomorphism from the spin group to the \(R\)-symmetry group of the LFT. For more details, we refer the reader to Section 5 of [10].
\begin{table}
\begin{tabular}{c c c c c c c c} \hline Operation & \(d_{h}\) & \(d_{h,inv}\) & \(d_{v}\) & \(\mathrm{Lie}_{Q}\) & \(\iota_{Q}\) & \(K_{0}\) & \(K\) \\ \hline Degree & \((1,0,0)\) & \((1,0,0)\) & \((0,1,0)\) & \((0,0,1)\) & \((0,-1,1)\) & \((1,-1,0)\) & \((1,0,-1)\) \\ Global/local & Global & Global (on invariant forms) & Global & Global & Global & Local & Local \\ \hline \end{tabular}
\end{table}
Table 4.1: Global/local derivations on \(\Omega_{loc}\).
**Definition 4.4**.: _A preobservable \(\mathcal{O}\) in the CohLFT \((M,Y,\mathcal{L})\) is an (invariant) local form over \(M\times\Gamma(Y)\) of vertical form degree \(0\) which is \(Q\)-closed up to a \(d_{h}\)-exact term._
Let \(\mathcal{O}\) be a preobservable of horizontal form degree \(p\). Let \(\gamma\) be a closed \(p\)-dimensional submanifold of \(M\). One can define
\[O[\gamma](\Phi):=\int_{\gamma}\mathcal{O}(x,\Phi).\]
Obviously, \(O[\gamma]\) is a \(Q\)-closed function over \(\Gamma(Y)\). Moreover, \(O[\gamma]=O[\gamma^{\prime}]\) if \(\gamma\) and \(\gamma^{\prime}\) are two closed \(p\)-dimensional submanifolds in the same homology class.
**Proposition 4.2**.: \(O[\gamma]\) _is \(\mathrm{Diff}(M)\)-invariant up to a \(Q\)-exact term._
Proof.: By definition, one can find a local form \(\mathcal{O}^{\prime}\) such that \(Q\mathcal{O}=d_{h}\mathcal{O}^{\prime}\). We have
\[\xi_{X}(O[\gamma])=\int_{\gamma}\mathrm{Lie}_{\xi_{X}}\mathcal{O}=\int_{ \gamma}(QK_{X}+K_{X}Q)\mathcal{O}=Q(\int_{\gamma}K_{X}\mathcal{O})+\int_{ \gamma}d_{h}(K_{X}\mathcal{O}^{\prime})=Q(\int_{\gamma}K_{X}\mathcal{O}),\]
where we use \(\mathrm{Lie}_{\xi_{X}}=[Q,K_{X}]\) and \([K_{X},d_{h}]=0\).
Using similar arguments as in the proof of Proposition 4.2, one can prove
**Proposition 4.3**.: _The action \(S\) of a CohLFT is \(\mathrm{Diff}(M)\)-invariant up to a \(Q\)-exact term._
Let \(\mathrm{Loc}(\Gamma(Y))\) denote the subspace of \(C^{\infty}(\Gamma(Y))\) spanned by functions \(F_{\alpha,\gamma}\) of the form
\[F_{\alpha,\gamma}:=\int_{\gamma}\alpha\]
where \(\alpha\) is a local form of horizontal degree \(p\) and vertical degree \(0\), and \(\gamma\) is a \(p\)-dimensional submanifold of \(M\).
**Definition 4.5**.: _An observable of the CohLFT \((M,Y,\mathcal{L})\) is an element in \(\mathrm{Loc}(\Gamma(Y))\) which is \(Q\)-closed and \(\mathrm{Diff}(M)\)-invariant up to a \(Q\)-exact term, i.e., a \(\mathrm{Diff}(M)\)-invariant element in the cohomology group \(H_{Q}(\mathrm{Loc}(\Gamma(Y)))\)._
**Remark 4.3**.: _By definition, the function \(O[\gamma]\) over \(\Gamma(Y)\) obtained by integrating the preobservable \(\mathcal{O}\) over \(\gamma\) is an observable of \((M,Y,\mathcal{L})\)._
The expectation value of an observable in quantum field theory is given by the formula
\[\langle O\rangle=\int_{\Gamma(Y)}\mathcal{D}\Phi O(\Phi)\exp(-S(\Phi)),\]
where \(\mathcal{D}\Phi\) is the path integral measure on \(\Gamma(Y)\). If this measure is invariant under the action of the Lie supergroup generated by \(Q\), one can show that \(\langle Q(O)\rangle=0\). Therefore, \(\langle\cdot\rangle\) in a CohLFT can be viewed as a map
\[\langle\cdot\rangle:H_{Q}(\mathrm{Loc}(\Gamma(Y)))\to\mathbb{R} \tag{4.4}\] \[O\mapsto\langle O\rangle.\]
Let's assume that the path integral measure \(\mathcal{D}\Phi\) is also \(\mathrm{Diff}(M)\)-invariant. Let \(\{\phi_{t}\}\) be the one-parameter group generated by \(X\in\mathfrak{X}(M)\). Let \(O\) be an observable of the CohLFT, i.e., a \(\mathrm{Diff}(M)\)-invariant element in \(H_{Q}(\mathrm{Loc}(\Gamma(Y)))\). By Proposition 4.3, we have
\[\frac{d}{dt}\Big|_{t=0}\langle\phi_{t}^{*}O\rangle=-\int_{\Gamma(Y)}\mathcal{D}\Phi\,\mathrm{Lie}_{\xi_{X}}\left(O(\Phi)\exp(-S(\Phi))\right)=\int_{\Gamma(Y)}\mathcal{D}\Phi\,O(\Phi)\exp(-S(\Phi))\,\mathrm{Lie}_{\xi_{X}}(S)=0,\] where the \(\mathrm{Lie}_{\xi_{X}}O\) term has been dropped because it is \(Q\)-exact and thus has vanishing expectation value, and the last equality holds because \(\mathrm{Lie}_{\xi_{X}}(S)\) is \(Q\)-exact while \(O\) is \(Q\)-closed.
Therefore, \(\langle\cdot\rangle\) is constant when restricted to the \(\mathrm{Diff}(M)\)-invariant subspace of \(H_{Q}(\mathrm{Loc}(\Gamma(Y)))\).
#### 4.2.1 Vector supersymmetries
Let \((M,g)\) be the \(n\)-dimensional Euclidean space \(\mathbb{R}^{n}\) equipped with the canonical Euclidean metric. The Killing vector fields over \(M\) are \(\partial_{\mu}\), which generate the translations, and \(x^{\mu}\partial_{\nu}-x^{\nu}\partial_{\mu}\), which generate the rotations. In this case, the vertical local vector field \(K_{\mu}\) is globally well-defined and we have
\[QK_{\mu}+K_{\mu}Q=\xi_{\partial_{\mu}}. \tag{4.5}\]
If a CohLFT \((M,Y,\mathcal{L})\) is supersymmetric, \(K_{\mu}\) is a Noether symmetry of \((M,Y,\mathcal{L})\). In the physics literature, \(K_{\mu}\) is referred to as a vector supersymmetry [1, 13, 14]. It follows from (4.5) that the infinitesimal translation \(\xi_{\partial_{\mu}}\) is also a Noether symmetry of the theory. Let \(\mathcal{Q}\), \(\mathcal{G}_{\mu}\), and \(\mathcal{T}_{\mu}\) denote the Noether currents associated to \(Q\), \(K_{\mu}\), and \(\xi_{\partial_{\mu}}\), respectively. We then have
\[(\mathcal{T}_{\mu},\xi_{\partial_{\mu}})=\{(\mathcal{Q},Q),( \mathcal{G}_{\mu},K_{\mu})\},\]
where \(\{\cdot,\cdot\}\) is the bracket between Noether pairs. Note that \(\mathcal{G}_{\mu}\) and \(\mathcal{T}_{\mu}\) can be written as
\[\mathcal{G}_{\mu}=\mathcal{G}_{\mu\nu}\star dx^{\nu},\quad\mathcal{T}_{\mu}=\mathcal{T}_{\mu\nu}\star dx^{\nu},\]
where \(\star\) is the Hodge star operator. \(\mathcal{T}_{\mu\nu}\) is known as the canonical energy-momentum tensor. In the physics literature, one often writes
\[\mathcal{T}_{\mu\nu}=\{\mathcal{Q},\mathcal{G}_{\mu\nu}\}\]
to emphasize the \(Q\)-exact nature of \(\mathcal{T}_{\mu\nu}\).
#### 4.2.2 Descendant sequences of preobservables
Let \(\mathcal{O}^{(0)}\) be a \(Q\)-closed preobservable of degree \((0,0,n)\) of the CohLFT \((M,Y,\mathcal{L})\).
**Definition 4.6**.: _A descendant sequence of \(\mathcal{O}^{(0)}\) is a sequence \(\{\mathcal{O}^{(p)}\}_{p=0}^{n}\) of preobservables satisfying_
\[Q\mathcal{O}^{(p)}=d_{h}\mathcal{O}^{(p-1)} \tag{4.6}\]
_for \(p=1,\ldots,n\). (4.6) is called the (topological) descent equations of preobservables._
**Remark 4.4**.: _Descendant sequences of preobservables can be introduced not only for a CohLFT, but for any LFT with a scalar supersymmetry \(Q\). In such an LFT, let's consider a preobservable \(\mathcal{O}\) which is an invariant local form of horizontal form degree \(p\). We then have_
\[\xi_{X}(O[\gamma])=\int_{\gamma}\mathrm{Lie}_{\xi_{X}}\mathcal{O}=\int_{ \gamma}\mathrm{Lie}_{\widehat{X}}\mathcal{O},\]
_where \(\xi_{X}\) is the vertical local vector field induced by the Lie derivatives along \(X\). Now apply Cartan's formula \([d_{h},\iota_{\widehat{X}}]=\mathrm{Lie}_{\widehat{X}}\). We obtain_
\[\xi_{X}(O[\gamma])=\int_{\gamma}d_{h}(\iota_{\widehat{X}}\mathcal{O})+\iota_ {\widehat{X}}(d_{h}\mathcal{O})=Q(\int_{\gamma}\iota_{\widehat{X}}\mathcal{O} ^{\prime}),\]
_where \(\mathcal{O}^{\prime}\) is a preobservable of horizontal degree \(p+1\) satisfying \(Q\mathcal{O}^{\prime}=d_{h}\mathcal{O}\). In other words, the observable \(O[\gamma]\) associated to \(\mathcal{O}\) is again \(\mathrm{Diff}(M)\)-invariant up to a \(Q\)-exact term despite the absence of a \(QK^{\star}\)-structure._
Recall that we have (locally) an operator \(K\) on \(\Omega_{\mathit{loc}}\), the bicomplex consisting of those local forms that are locally independent of the base coordinates \(x^{\mu}\), satisfying
\[QK+KQ=d_{h,inv},\quad Kd_{h,inv}+d_{h,inv}K=0.\]
**Definition 4.7**.: _The standard \(K\)-sequence of an invariant preobservable \(\mathcal{O}^{(0)}\) is locally defined by_
\[\mathcal{O}^{(p)}:=\frac{K^{p}}{p!}\mathcal{O}^{(0)}.\]
_for \(p=1,\cdots,n\)._
By definition, \(\mathcal{O}^{(p)}\) is again independent of \(x^{\mu}\), i.e., an element in \(\Omega_{\mathit{loc}}\).
**Proposition 4.4**.: _Every standard \(K\)-sequence \(\{\mathcal{O}^{(p)}\}_{p=0}^{n}\) of an invariant preobservable \(\mathcal{O}^{(0)}\) is a descendant sequence._
Proof.: We have \(Q\mathcal{O}^{(p)}=\frac{1}{p!}QK^{p}\mathcal{O}^{(0)}=\frac{1}{p!}[Q,K^{p}] \mathcal{O}^{(0)}=\frac{p}{p!}d_{h,inv}K^{p-1}\mathcal{O}^{(0)}=d_{h}\mathcal{ O}^{(p-1)}\).
**Definition 4.8**.: _Let \(\mathcal{W}^{(q)}\) be a \(Q\)-closed invariant local form of degree \((q,0,n-q)\), \(1\leq q\leq n\). A (general) \(K\)-sequence of an invariant preobservable \(\mathcal{O}^{(0)}\) is a sequence \(\{\mathcal{O}^{(p)}\}_{p=0}^{n}\), where_
\[\mathcal{O}^{(p)}:=\frac{1}{p!}K^{p}\mathcal{O}^{(0)}+\sum_{q=1}^{p}\frac{1}{ (p-q)!}K^{p-q}\mathcal{W}^{(q)}\]
_for \(p=1,\ldots,n\)._
Likewise, one can show that
**Proposition 4.5**.: _Every (general) \(K\)-sequence \(\{\mathcal{O}^{(p)}\}_{p=0}^{n}\) of an invariant preobservable \(\mathcal{O}^{(0)}\) is a descendant sequence._
Let \(\{\mathcal{O}^{(p)}\}_{p=0}^{n}\) be such that \(\mathcal{O}^{(p)}=Q\rho^{(p)}+d_{h}\rho^{(p-1)}\) for \(p>0\) and \(\mathcal{O}^{(0)}=Q\rho^{(0)}\), where \(\rho^{(p)}\) is an arbitrary invariant local form of degree \((p,0,n-p-1)\). Then, \(\{\mathcal{O}^{(p)}\}_{p=0}^{n}\) is a solution to (4.6). Such a sequence is called an exact sequence. Obviously, \(\{\mathcal{O}^{(p)}\}_{p=0}^{n}\) is an exact sequence if and only if \(\mathcal{O}=\sum_{p=0}^{n}\mathcal{O}^{(p)}\) is \((Q-d_{h})\)-exact.
**Theorem 4.1**.: _Every descendant sequence of an invariant preobservable \(\mathcal{O}^{(0)}\) is locally a \(K\)-sequence up to an exact sequence._
Proof.: Consider the "Mathai-Quillen automorphism" \(j:=\exp(\widetilde{K})\) of \(\Omega_{\mathit{loc}}\), where \(\widetilde{K}\) is defined by setting \(\widetilde{K}\omega=(-1)^{d_{\mathit{tot}}(\omega)-n}K\omega\) for \(\omega\in\Omega_{\mathit{loc}}\), \(d_{\mathit{tot}}(\omega)\) is the total degree of \(\omega\). Note that the expression \(\exp(\widetilde{K})\) is well-defined because \(K\) is nilpotent. It is not hard to show that \([Q,\widetilde{K}]:=Q\widetilde{K}-\widetilde{K}Q=\widetilde{d_{h,inv}}\), where \(\widetilde{d_{h,inv}}\) is defined in a similar manner as \(\widetilde{K}\). It follows that
\[j\circ Q\circ j^{-1}=\exp(\widetilde{K})\left([Q,\exp(-\widetilde{K})]+\exp(-\widetilde{K})Q\right)=-\exp(\widetilde{K})\widetilde{d_{h,inv}}\exp(-\widetilde{K})+Q=Q-\widetilde{d_{h,inv}},\]
where we use \(\widetilde{d_{h,inv}}\widetilde{K}=\widetilde{K}\widetilde{d_{h,inv}}\). The proof is complete by observing that \(d_{h,inv}=d_{h}\) when restricted to \(\Omega_{\mathit{loc}}\) and the total degree of \(\mathcal{O}^{(p)}\) is \(n\) for \(p=0,\cdots,n\)
### Cohomological Lagrangian gauge field theory
Let \((M,g)\) be a Riemannian manifold. Let \(P\) be a principal \(G\)-bundle over \(M\). Let \(Y\) be a gauge natural bundle over \(M\) associated to \(P\).
**Definition 4.9**.: _A cohomological Lagrangian gauge field theory (CohLGFT) is a LFT \((M,Y,\mathcal{L})\) such that_
1. \(M\times\Gamma(Y)\) _is equipped with a (deformed)_ \(QKG^{\star}\)_-structure;_
2. \(Q\) _is a Noether symmetry of the LFT;_
3. \(I_{\lambda}\) _and_ \(\delta_{\lambda}\) _are Noether symmetries of the LFT for all_ \(\lambda\in\Gamma(\mathrm{ad}P)\)_._
_A CohLGFT is called supersymmetric if \(K_{X}\) is a Noether symmetry of the LFT for all \(X\) in (a nontrivial subalgebra of) \(\mathfrak{iso}(M)\cap\mathfrak{X}_{P}(M)\)._
**Remark 4.5**.: _Let's consider \(\mathcal{L}\) of the form_
\[\mathcal{L}=\mathcal{L}_{top}+Q(\mathcal{V}_{g}), \tag{4.7}\]
_where \(\mathcal{L}_{top}\) is an invariant local form7 of degree \((n,0,0)\), and \(\mathcal{V}_{g}\) is a local form of degree \((n,0,-1)\) that is dependent on \(g\) and is basic with respect to the \(\mathrm{Gau}(P)^{\star}\)-action. The action functional of the CohLGFT takes the form_
Footnote 7: In this paper, we do not consider gravity theories. That is to say, \(g\) is a background field and we do not include the bundle \(\mathrm{Met}(M)\) of Riemannian metrics as part of the configuration bundle \(Y\). Therefore, any invariant local form is automatically independent of \(g\).
\[S=S_{top}+Q(\Psi_{g})\]
_where \(S_{top}=\int_{M}\mathcal{L}_{top}\) and \(\Psi_{g}=\int_{M}\mathcal{V}_{g}\). In other words, the triple \((\Gamma(Y),Q,S_{top})\) is a BRST system which can be gauge fixed to the BRST system \((\Gamma(Y),Q,S)\) via the gauge fixing fermion \(\Psi_{g}\)._
**Definition 4.10**.: _A preobservable \(\mathcal{O}\) in the CohLGFT \((M,Y,\mathcal{L})\) is an (invariant) local form over \(M\times\Gamma(Y)\) of vertical form degree \(0\) such that_
1. \(\mathcal{O}\) _is_ \(Q\)_-closed up to a_ \(d_{h}\)_-exact term;_
2. \(\mathcal{O}\) _is gauge invariant, i.e., it is basic with respect to the_ \(\mathrm{Gau}(P)^{\star}\)_-action._
Obviously, the integral \(O[\gamma]:=\int_{\gamma}\mathcal{O}\) is a \(Q\)-closed function over \(\Gamma(Y)\) which is basic with respect to the \(\mathrm{Gau}(P)^{\star}\)-action. Likewise, one can prove
**Proposition 4.6**.: _The action \(S\) and \(O[\gamma]\) are \(\mathrm{Aut}(P)\)-invariant up to \(Q\)-exact terms._
**Definition 4.11**.: _An observable of the CohLGFT \((M,Y,\mathcal{L})\) is an element in \(\mathrm{Loc}(\Gamma(Y))\) which is \(Q\)-closed and \(\mathrm{Aut}(P)\)-invariant up to a \(Q\)-exact term._
The descendant sequence and the standard \(K\)-sequence of a preobservable in a CohLGFT can be defined in a similar manner as in the case of a CohLFT.
**Lemma 4.2**.: _Let \(\omega\) be an element in \(\Omega_{\circ\text{loc}}\) which is basic with respect to the \(\mathrm{Gau}(P)^{\star}\)-action. \(K\omega\) is also basic._
Proof.: This follows directly from Lemma 4.1.
By Lemma 4.2, the standard \(K\)-sequence of a gauge invariant preobservable is again gauge invariant. We have
**Proposition 4.7**.: _In a CohLGFT, every standard \(K\)-sequence \(\{\mathcal{O}^{(p)}\}_{p=0}^{n}\) of an invariant preobservable \(\mathcal{O}^{(0)}\) is a descendant sequence._
Likewise, one can show that
**Theorem 4.2**.: _In a CohLGFT, every gauge invariant descendant sequence is locally a \(K\)-sequence up to an exact sequence._
Proof.: The proof is essentially the same as the proof of Theorem 4.1.
### Examples
In this subsection, we will take \(Y\) to be of the form
\[Y=V[1]Y^{\prime}\to Y_{0}^{\prime}\to M, \tag{4.8}\]
where \(Y^{\prime}\to Y_{0}^{\prime}\to M\) is a graded (gauge) natural bundle over \(M\). In the case of gauge theories, \(Y_{0}^{\prime}\) needs to include the affine bundle \(C\) whose sections can be identified with connection \(1\)-forms \(A\). In simple terms, the fields can be divided into two groups \(\Phi\) and \(\Psi\) where \(d(\Psi)=d(\Phi)+1\). The canonical \(QKG^{\star}\)-structure is then given by
\[Q\Phi=\Psi,\quad Q\Psi=0,\quad K_{\mu}\Phi=0,\quad K_{\mu}\Psi=\mathrm{Lie}_{ \xi_{\partial_{\mu}}}\Phi,\quad I_{\lambda}\Phi=0,\quad I_{\lambda}\Psi=\delta _{\lambda}\Phi.\]
#### 4.4.1 N=2 supersymmetric quantum mechanics
Let \(M=\mathbb{R}\). Let \((N,h)\) be a Riemannian manifold. We take \(Y^{\prime}\) to be the trivial bundle
\[Y^{\prime}=T^{*}[-1]N\times M\to Y_{0}=N\times M\to M.\]
A coordinate chart of \(N\) induces a local coordinate system
\[(x^{\mu},\;\chi^{\mu},\;\psi^{\mu},\;b^{\mu})\]
for \(Y\) with degrees \(0,-1,1,0\), respectively. Let \(t\) be the standard coordinate function over \(M\). We adopt the conventional notation \(\overline{Q}\) to denote the vertical local vector field \(K_{\frac{d}{dt}}\). Let \(\Phi\) be a local section of \(Y\); we also use \(\dot{\Phi}\) to denote the jet coordinates associated to \(\frac{d\Phi}{dt}\). The \(QK^{\star}\)-structure of the theory is the canonical one, defined by8
Footnote 8: Note that any vertical local vector field is determined by its action on the zero jets.
\[Qx^{\mu}=\psi^{\mu},\quad Q\psi^{\mu}=0\quad Q\chi^{\mu}=b^{\mu },\quad Qb^{\mu}=0,\] \[\overline{Q}x^{\mu}=0,\quad\overline{Q}\psi^{\mu}=\dot{x}^{\mu}, \quad\overline{Q}\chi^{\mu}=0,\quad\overline{Q}b^{\mu}=\dot{\chi}^{\mu}.\]
It is straightforward to verify that
\[Q^{2}=0,\quad Q\overline{Q}+\overline{Q}Q=\xi_{\frac{d}{dt}},\quad\overline{Q} ^{2}=0.\]
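Indeed, since any vertical local vector field is determined by its action on the zero jets (cf. the footnote above) and both \(Q\) and \(\overline{Q}\) commute with the total time derivative, it suffices to check the relations on the generating coordinates:

\[(Q\overline{Q}+\overline{Q}Q)x^{\mu}=\overline{Q}\psi^{\mu}=\dot{x}^{\mu},\qquad(Q\overline{Q}+\overline{Q}Q)\psi^{\mu}=Q\dot{x}^{\mu}=\dot{\psi}^{\mu},\]
\[(Q\overline{Q}+\overline{Q}Q)\chi^{\mu}=\overline{Q}b^{\mu}=\dot{\chi}^{\mu},\qquad(Q\overline{Q}+\overline{Q}Q)b^{\mu}=Q\dot{\chi}^{\mu}=\dot{b}^{\mu},\]

which is precisely the action of \(\xi_{\frac{d}{dt}}\), while \(Q^{2}\) and \(\overline{Q}^{2}\) annihilate every generator.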
Let \(\Gamma=\Gamma^{\rho}_{\mu\nu}\partial_{\rho}\otimes dx^{\mu}\otimes dx^{\nu}\) be the Levi-Civita connection of \(N\). Apply the following change of coordinates
\[b^{\mu}\to b^{\mu}-\Gamma^{\mu}_{\rho\nu}\psi^{\rho}\chi^{\nu}.\]
We obtain
\[Qx^{\mu}=\psi^{\mu},\quad Q\psi^{\mu}=0,\quad Q\chi^{\mu}=b^{\mu} -\Gamma^{\mu}_{\rho\nu}\psi^{\rho}\chi^{\nu},\quad Qb^{\mu}=-\Gamma^{\mu}_{ \rho\nu}\psi^{\rho}b^{\nu}+\frac{1}{2}R^{\mu}_{\nu\rho\sigma}\psi^{\rho}\psi^{ \sigma}\chi^{\nu}\] \[\overline{Q}x^{\mu}=0,\quad\overline{Q}\psi^{\mu}=\dot{x}^{\mu}, \quad\overline{Q}\chi^{\mu}=0,\quad\overline{Q}b^{\mu}=\nabla_{\frac{d}{dt}} \chi^{\mu},\]
where \(\nabla_{\frac{d}{dt}}\chi^{\mu}=\dot{\chi}^{\mu}+\Gamma^{\mu}_{\rho\nu}\dot{x} ^{\rho}\chi^{\nu}\), and \(R=R^{\nu}_{\mu\rho\sigma}\partial_{\nu}\otimes dx^{\mu}\otimes dx^{\rho} \otimes dx^{\sigma}\) is the Riemannian curvature tensor of \(N\).
The Lagrangian of the theory is then given by
\[\mathcal{L}=Q(\chi_{\mu}(\dot{x}^{\mu}-b^{\mu}))dt. \tag{4.9}\]
where \(\chi_{\mu}=h_{\mu\nu}\chi^{\nu}\). A straightforward computation shows that
\[\mathcal{L}=\left(b_{\mu}(\dot{x}^{\mu}-b^{\mu})-\chi_{\mu}\nabla_{\frac{d}{dt }}\psi^{\mu}+\frac{1}{2}R_{\mu\beta\rho\sigma}\chi^{\mu}\chi^{\beta}\psi^{\rho }\psi^{\sigma}\right)dt,\]
where \(R_{\mu\beta\rho\sigma}=h_{\mu\nu}R^{\nu}_{\beta\rho\sigma}\).
By definition, \(Q\) is a symmetry of \(\mathcal{L}\), but \(\overline{Q}\) is not. This problem can be solved by considering the following deformation of the canonical \(QK^{\star}\)-structure
\[Qx^{\mu}=\psi^{\mu},\quad Q\psi^{\mu}=0\quad Q\chi^{\mu}=b^{\mu},\quad Qb^{\mu}=0, \tag{4.10}\] \[\overline{Q}x^{\mu}=r\chi^{\mu},\quad\overline{Q}\psi^{\mu}=\dot{ x}^{\mu}-rb^{\mu},\quad\overline{Q}\chi^{\mu}=0,\quad\overline{Q}b^{\mu}=\dot{ \chi}^{\mu}. \tag{4.11}\]
For our purpose, we set \(r=1\) and apply again the change of coordinates \(b^{\mu}\to b^{\mu}-\Gamma^{\mu}_{\rho\nu}\psi^{\rho}\chi^{\nu}\). We then obtain
\[Qx^{\mu}=\psi^{\mu},\quad Q\psi^{\mu}=0,\quad Q\chi^{\mu}=b^{ \mu}-\Gamma^{\mu}_{\rho\nu}\psi^{\rho}\chi^{\nu},\quad Qb^{\mu}=-\Gamma^{\mu}_{ \rho\nu}\psi^{\rho}b^{\nu}+\frac{1}{2}R^{\mu}_{\nu\rho\sigma}\psi^{\rho}\psi^{ \sigma}\chi^{\nu},\] \[\overline{Q}x^{\mu}=\chi^{\mu},\quad\overline{Q}\psi^{\mu}=\dot{ x}^{\mu}-b^{\mu}+\Gamma^{\mu}_{\rho\nu}\psi^{\rho}\chi^{\nu},\quad\overline{Q} \chi^{\mu}=0,\quad\overline{Q}b^{\mu}=\nabla_{\frac{d}{dt}}\chi^{\mu}-\Gamma^{ \mu}_{\rho\nu}\chi^{\rho}b^{\nu}+\frac{1}{2}R^{\mu}_{\nu\rho\sigma}\chi^{\rho }\psi^{\sigma}\chi^{\nu}.\]
It is not hard to verify that \(\overline{Q}\) is a Noether symmetry of the theory. In fact, we have
\[\overline{Q}(\chi_{\mu}(\dot{x}^{\mu}-b^{\mu}))=(\partial_{\sigma}h_{\mu\nu}+ \Gamma_{\sigma\mu\nu})\chi^{\sigma}\chi^{\mu}(\dot{x}^{\nu}-b^{\nu})+\frac{1}{2 }R_{\mu\nu\rho\sigma}\chi^{\mu}\chi^{\nu}\chi^{\rho}\psi^{\sigma}=0,\]
where \(\Gamma_{\sigma\mu\nu}=h_{\sigma\rho}\Gamma^{\rho}_{\mu\nu}\). We use the fact that \(\partial_{\sigma}h_{\mu\nu}+\Gamma_{\sigma\mu\nu}\) is anti-symmetric in \(\sigma\) and \(\mu\), and the first Bianchi identity \(R_{\mu\nu\rho\sigma}+R_{\nu\rho\mu\sigma}+R_{\rho\mu\nu\sigma}=0\). It follows that
\[\overline{Q}\mathcal{L}=\text{Lie}_{\xi_{\frac{d}{dt}}}(\chi_{\mu}(\dot{x}^{ \mu}-b^{\mu}))dt=d_{h}(\chi_{\mu}(\dot{x}^{\mu}-b^{\mu}))=:d_{h}\alpha_{ \overline{Q}}.\]
Let's compute the Noether currents of \(Q\) and \(\overline{Q}\). Observe that the boundary form \(\gamma\) is of the form
\[\gamma=b_{\mu}\delta x^{\mu}+\chi_{\mu}(\nabla_{ver}\psi^{\mu}),\]
where \(\nabla_{ver}\psi^{\mu}=\delta\psi^{\mu}+\Gamma^{\mu}_{\rho\nu}\delta x^{\rho}\psi^ {\nu}\). The Noether current \({\cal Q}\) of \(Q\) can be computed as
\[{\cal Q}=-\iota_{Q}\gamma=-b_{\mu}\psi^{\mu}-\chi_{\mu}\Gamma^{\mu}_{\rho\nu} \psi^{\rho}\psi^{\nu}=-b_{\mu}\psi^{\mu},\]
where we use the torsion free condition \(\Gamma^{\mu}_{\rho\nu}=\Gamma^{\mu}_{\nu\rho}\). Likewise, the Noether current \(\overline{\cal Q}\) of \(\overline{Q}\) can be computed as
\[\overline{\cal Q}=\alpha_{\overline{Q}}-\iota_{\overline{Q}} \gamma=\chi_{\mu}(\dot{x}^{\mu}-b^{\mu})-b_{\mu}\chi^{\mu}-\chi_{\mu}(\dot{x}^ {\mu}-b^{\mu}+\Gamma^{\mu}_{\rho\nu}\psi^{\rho}\chi^{\nu})-\chi_{\mu}\Gamma^{ \mu}_{\rho\nu}\chi^{\rho}\psi^{\nu}=-b_{\mu}\chi^{\mu}.\]
Let \(f\) be a Morse function over \(N\). Let's consider the following change of coordinates
\[b^{\mu}\to b^{\mu}-{\rm grad}f^{\mu}-\Gamma^{\mu}_{\rho\nu}\psi^{\rho}\chi^{\nu}, \tag{4.12}\]
where \({\rm grad}f^{\mu}=h^{\mu\nu}\partial_{\nu}f\) is the gradient of \(f\). (4.10) and (4.11) can be then generalized as follows.
\[Qx^{\mu}=\psi^{\mu},\quad Q\psi^{\mu}=0,\] \[Q\chi^{\mu}=b^{\mu}-{\rm grad}f^{\mu}-\Gamma^{\mu}_{\rho\nu}\psi ^{\rho}\chi^{\nu},\quad Qb^{\mu}=-\Gamma^{\mu}_{\rho\nu}\psi^{\rho}b^{\nu}+ \nabla_{\rho}{\rm grad}f^{\mu}\psi^{\rho}+\frac{1}{2}R^{\mu}_{\nu\rho\sigma} \psi^{\rho}\psi^{\sigma}\chi^{\nu},\] \[\overline{Q}x^{\mu}=\chi^{\mu},\quad\overline{Q}\psi^{\mu}=\dot{ x}^{\mu}-b^{\mu}+{\rm grad}f^{\mu}+\Gamma^{\mu}_{\rho\nu}\psi^{\rho}\chi^{\nu},\] \[\overline{Q}\chi^{\mu}=0,\quad\overline{Q}b^{\mu}=\nabla_{\frac{ d}{dt}}\chi^{\mu}-\Gamma^{\mu}_{\rho\nu}\chi^{\rho}b^{\nu}+\nabla_{\rho}{\rm grad }f^{\mu}\chi^{\rho}+\frac{1}{2}R^{\mu}_{\nu\rho\sigma}\chi^{\rho}\psi^{\sigma} \chi^{\nu}.\]
We consider the following Lagrangian as a generalization of (4.9) [30].
\[{\cal L}={\cal L}_{top}+Q({\cal V}),\]
where
\[{\cal L}_{top}=d_{h}f=\partial_{\mu}f\dot{x}^{\mu}dt,\quad{\cal V }=\chi_{\mu}(\dot{x}^{\mu}-b^{\mu})dt.\]
It is straightforward to show that
\[{\cal L}=\left(b_{\mu}(\dot{x}^{\mu}+{\rm grad}f^{\mu}-b^{\mu})- \chi_{\mu}\nabla_{\frac{d}{dt}}\psi^{\mu}+{\rm Hess}f_{\mu\nu}\chi^{\mu}\psi^{ \nu}+\frac{1}{2}R_{\mu\beta\rho\sigma}\chi^{\mu}\chi^{\beta}\psi^{\rho}\psi^{ \sigma}\right)dt,\]
where \({\rm Hess}f=\nabla df\) is the Hessian of \(f\). We have
\[Q{\cal L}=d_{h}(\partial_{\mu}f\psi^{\mu})=:d_{h}\alpha_{Q}.\]
On the other hand, note that
\[\overline{Q}{\cal V}=(\chi_{\mu}\nabla_{\rho}{\rm grad}f^{\mu} \chi^{\rho})\,dt=(\chi^{\nu}{\rm Hess}f_{\mu\nu}\chi^{\mu})\,dt=0,\]
where we use the identity \(h_{\rho\nu}\nabla_{\mu}{\rm grad}f^{\rho}={\rm Hess}f_{\mu\nu}\) and the symmetric property of \({\rm Hess}f\). We then have
\[\overline{Q}{\cal L}=d_{h}(\chi_{\mu}(\dot{x}^{\mu}+{\rm grad}f^{ \mu}-b^{\mu}))=:d_{h}\alpha_{\overline{Q}}.\]
Note that the expression of the boundary form \(\gamma\) remains unchanged. The Noether currents of \(Q\) and \(\overline{Q}\) are then
\[{\cal Q}=(\partial_{\mu}f-b_{\mu})\psi^{\mu},\quad\overline{\cal Q }=-b_{\mu}\chi^{\mu}.\]
**Remark 4.6**.: _The Morse function \(f\) does not show up in the expression of \(\overline{\mathcal{Q}}\) because we only deform \(\overline{Q}\) in (4.10) and (4.11). More generally, one can consider the deformation_
\[Qx^{\mu}=s\psi^{\mu},\quad Q\psi^{\mu}=0\quad Q\chi^{\mu}=sb^{\mu} -\dot{x}^{\mu},\quad Qb^{\mu}=\dot{\psi}^{\mu},\] \[\overline{Q}x^{\mu}=r\chi^{\mu},\quad\overline{Q}\psi^{\mu}= \dot{x}^{\mu}-rb^{\mu},\quad\overline{Q}\chi^{\mu}=0,\quad\overline{Q}b^{\mu}= \dot{\chi}^{\mu}.\]
_with \(r\) and \(s\) satisfying \(s-r=1\). By properly choosing \(s\) and \(r\), one can get symmetric expressions for \(\mathcal{Q}\) and \(\overline{\mathcal{Q}}\). We leave the details to the reader._
#### 4.4.2 Donaldson-Witten theory
Let \(G=\mathrm{SU}(2)\). Let \(\mathrm{Tr}\) be the Killing form on \(\mathfrak{su}(2)\). Let \(P\) be a principal \(G\)-bundle over a \(4\)-dimensional Riemannian manifold \((M,g)\). Let \(\mathrm{ad}P\) denote the adjoint bundle of \(P\). Let \(\mathcal{A}\) denote the affine space of connection \(1\)-forms on \(P\). Recall that \(\mathcal{A}\) can be identified with \(\Gamma(C)\) where \(C\) is an affine bundle over \(M\). Let \(W\) be an associated vector bundle to \(P\). For our purpose, we choose \(W=\mathrm{ad}P\otimes\Lambda_{-}^{2}T^{*}M\), where \(\Lambda_{-}^{2}T^{*}M\) denotes the anti-self-dual part of \(\Lambda^{2}T^{*}M\) with respect to the Hodge star operator \(\star\) on \(M\). We take \(Y^{\prime}\) to be
\[Y^{\prime}=\mathrm{ad}P[1]\times_{M}C\times_{M}W[-1]\to Y_{0}=C\to M\]
A bundle chart of \(P\) induces a local coordinate system
\[(x^{\mu},\ \theta^{a},\ A_{\mu}^{a},\ \chi_{\mu\nu}^{a},\ \phi^{a},\ v_{\mu}^{a}, \ b_{\mu\nu}^{a})\]
for \(Y\) with degrees \(0,1,0,-1,2,1,0\), respectively. We use the Greek indices to denote the components of differential forms and the Roman indices to denote the components of elements in the Lie algebra \(\mathfrak{g}\). The \(QKG^{\star}\)-structure is given by the canonical one.
\[Q\theta^{a}=\phi^{a},\quad Q\phi^{a}=0,\quad QA_{\mu}^{a}=v_{\mu}^{a},\quad Qv_{\mu}^{a}=0,\quad Q\chi_{\mu\nu}^{a}=b_{\mu\nu}^{a},\quad Qb_{\mu\nu}^{a}=0,\] \[K_{\mu}\theta^{a}=0,\quad K_{\mu}\phi^{a}=\theta_{\mu}^{a},\quad K_{\mu}A_{\nu}^{a}=0,\quad K_{\mu}v_{\nu}^{a}=A_{\nu,\mu}^{a},\quad K_{\mu}\chi_{\nu\rho}^{a}=0,\quad K_{\mu}b_{\nu\rho}^{a}=\chi_{\nu\rho,\mu}^{a},\] \[I_{\lambda}\theta^{a}=0,\ I_{\lambda}\phi^{a}=-f_{bc}^{a}\lambda^{b}\theta^{c},\ I_{\lambda}A_{\mu}^{a}=0,\ I_{\lambda}v_{\mu}^{a}=\partial_{\mu}\lambda^{a}+f_{bc}^{a}A_{\mu}^{b}\lambda^{c},\ I_{\lambda}\chi_{\mu\nu}^{a}=0,\ I_{\lambda}b_{\mu\nu}^{a}=-f_{bc}^{a}\lambda^{b}\chi_{\mu\nu}^{c},\]
where \(f_{bc}^{a}\) are the structure constants of \(\mathfrak{g}\), \(\lambda=\lambda^{a}\xi_{a}\) is a local section of \(\mathrm{ad}P\), and we use \(\theta_{\mu}^{a}\) to denote the jet coordinates associated to \(\partial_{\mu}\theta^{a}\). The above expressions can be rewritten in physicists' notation as follows.
\[Q\theta=\phi,\quad Q\phi=0,\quad QA=\upsilon,\quad Q\upsilon=0, \quad Q\chi=b,\quad Qb=0,\] \[K_{\mu}\theta=0,\quad K_{\mu}\phi=\partial_{\mu}\theta,\quad K_{ \mu}A=0,\quad K_{\mu}\upsilon=\partial_{\mu}A,\quad K_{\mu}\chi=0,\quad K_{ \mu}b=\partial_{\mu}\chi,\] \[I_{\lambda}\theta=0,\quad I_{\lambda}\phi=-[\lambda,\theta],\quad I _{\lambda}A=0,\quad I_{\lambda}\upsilon=d_{A}\lambda,\quad I_{\lambda}\chi=0, \quad I_{\lambda}b=-[\lambda,\chi].\]
One can deform the canonical \(QKG^{\star}\)-structure by
\[I_{\lambda}\theta=r\lambda.\]
For our purpose, we set \(r=1\). Note that this deformation is only vertically compatible with the canonical \(QKG^{\star}\)-structure.
Let's apply the following change of coordinates
\[\phi\to\phi-\frac{1}{2}[\theta,\theta],\quad\upsilon\to\upsilon+d_{A}\theta, \quad b\to b-[\theta,\chi]. \tag{4.13}\]
We obtain
\[Q\theta=\phi-\frac{1}{2}[\theta,\theta],\quad Q\phi=-[\theta,\phi], \tag{4.14}\] \[QA=\upsilon+d_{A}\theta,\quad Q\upsilon=-[\theta,\upsilon]-d_{A}\phi\] (4.15) \[Q\chi=b-[\theta,\chi],\quad Qb=-[\theta,b]+[\phi,\chi], \tag{4.16}\]
and
\[I_{\lambda}\theta=\lambda,\quad I_{\lambda}\phi=I_{\lambda}A=I_{\lambda} \upsilon=I_{\lambda}\chi=I_{\lambda}b=0.\]
The expressions for the vertical local vector fields \(K_{\mu}\) remain unchanged. It becomes evident that \(Q\), \(I_{\lambda}\), and \(\delta_{\lambda}=[Q,I_{\lambda}]\) define an infinite-dimensional BRST model for the equivariant cohomology of the \(\mathrm{Gau}(P)\)-manifold \(\Gamma(Y^{\prime})\).
For the Lagrangian, we consider [30]
\[\mathcal{L}=\mathcal{L}_{top}+Q(\mathcal{V}),\]
where
\[\mathcal{L}_{top}=\mathrm{Tr}(F\wedge F)/2,\quad\mathcal{V}= \mathrm{Tr}\left(\chi\wedge(F+b)\right),\]
and \(F=d_{A}+\frac{1}{2}[A,A]\) is the curvature 2-form of \(A\). It is straightforward to show that
\[\mathcal{L}=\mathrm{Tr}\left(F\wedge F/2+b\wedge(F+b)-\chi\wedge d_{A}\upsilon-\chi\wedge[\phi,\chi]\right).\]
By definition, \(Q\) and \(\delta_{\lambda}\) are Noether symmetries of \(\mathcal{L}\). Moreover, we have \(I_{\lambda}\mathcal{L}=0\) because \(\mathcal{L}\) does not depend on \(\theta\).
**Remark 4.7**.: _The equations of motion of \(b\) are \(b=-F_{-}/2\). After integrating out \(b\), the Lagrangian becomes_
\[\mathcal{L}=\mathrm{Tr}\left(F_{+}\wedge F_{+}/2-\chi\wedge d_{A}\upsilon-\chi\wedge[\phi,\chi]\right).\]
For the preobservables, we consider
\[\mathcal{O}^{(0)}=\mathrm{Tr}(\phi^{2}),\ \mathcal{O}^{(1)}=-2 \mathrm{Tr}(\phi\upsilon),\ \mathcal{O}^{(2)}=\mathrm{Tr}(\upsilon\wedge\upsilon-2 \phi F),\ \mathcal{O}^{(3)}=2\mathrm{Tr}(\upsilon\wedge F),\ \mathcal{O}^{(4)}=\mathrm{Tr}(F\wedge F). \tag{4.17}\]
It is straightforward to verify that \(\{\mathcal{O}^{(p)}\}_{p=0}^{n}\) satisfy the descent equations (4.6). Moreover, by Theorem 4.1, it can be locally expressed as the sum of a \(K\)-sequence and an exact sequence. For example, we have
\[\mathcal{O}^{(1)}=Q(\mathrm{Tr}(\phi A))-K\mathcal{O}^{(0)}.\]
It is not manifest that \(\mathcal{O}^{(p)}\) for \(p>1\) still can be expressed in the above form. Let's take \(M\) to be \(\mathbb{R}^{n}\). The canonical \(K_{\mu}\) and \(I_{\lambda}\) can be deformed by setting
\[K_{\mu}\theta=rA_{\mu},\quad K_{\mu}\phi=\partial_{\mu}\theta-r \upsilon_{\mu},\quad I_{\lambda}\theta=r\lambda.\]
This deformation is indeed compatible with the canonical \(QKG^{\star}\)-structure. For our purposes, we set \(r=1\). Applying again the change of coordinates (4.13), we obtain
\[K_{\mu}\theta=A_{\mu},\quad K_{\mu}\phi=-\upsilon_{\mu},\quad K_{ \mu}A=0,\quad K_{\mu}\upsilon=F_{\mu\nu}dx^{\nu},\quad K_{\mu}\chi=0,\quad K_{ \mu}b=d_{A\mu}\chi,\] \[I_{\lambda}\theta=\lambda,\quad I_{\lambda}\phi=I_{\lambda}A=I_{ \lambda}\upsilon=I_{\lambda}\chi=I_{\lambda}b=0,\]
where \(d_{A\mu}\chi=\partial_{\mu}\chi+[A_{\mu},\chi]\).
The operator \(K=dx^{\mu}\wedge\text{Lie}_{K_{\mu}}\) is now globally defined. We have
\[K\theta=A\quad K\phi=-\upsilon,\quad KA=0,\quad K\upsilon=2F,\quad K\chi=0, \quad Kb=d_{A}\chi.\]
The descendant sequence \(\{\mathcal{O}^{(p)}\}_{p=0}^{n}\) (4.17) is now just the standard \(K\)-sequence of \(\mathcal{O}^{(0)}\) with respect to the deformed \(QKG^{\star}\)-structure.
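As a quick consistency check of the first two steps, using \(K\phi=-\upsilon\), \(K\upsilon=2F\), and \(KA=0\) (hence \(KF=0\)):

\[K\mathcal{O}^{(0)}=2\mathrm{Tr}(\phi\,K\phi)=-2\mathrm{Tr}(\phi\upsilon)=\mathcal{O}^{(1)},\qquad\frac{K^{2}}{2!}\mathcal{O}^{(0)}=\frac{1}{2}K\mathcal{O}^{(1)}=\mathrm{Tr}(\upsilon\wedge\upsilon-2\phi F)=\mathcal{O}^{(2)},\]

and \(\mathcal{O}^{(3)}\), \(\mathcal{O}^{(4)}\) are obtained in the same way.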
The Donaldson-Witten theory also admits a supersymmetric quantum mechanical interpretation. Let's take \(M=\mathbb{R}\times\Sigma\), where \(\Sigma\) is a 3-dimensional manifold. Let \(g\) be the product metric. Since \(G=\text{SU}(2)\) is simply connected, the principal \(G\)-bundle \(P\) is trivial over \(M\). Let \(\star_{3}\) denote the Hodge star operator on \(\Sigma\). Let \(a\) be a one-form over \(\Sigma\); it can be identified with an anti-self-dual 2-form over \(M\) via the map [14]
\[a\mapsto a\wedge dt+\star_{3}a.\]
It follows that the bundle \(W\) can be identified as \(T^{*}\Sigma\otimes\mathfrak{g}\). The affine bundle \(C\) can be identified as the vector bundle \((\underline{\mathbb{R}}\oplus T^{*}\Sigma)\otimes\mathfrak{g}\), where \(\underline{\mathbb{R}}\) is the trivial line bundle over \(M\). Every connection \(A\) can be written as
\[A=A_{0}dt+\sum_{\mu=1}^{3}A_{\mu}(x,t)dx^{\mu}\]
For our purpose, we need to choose the temporal gauge \(A_{0}=0\). Or equivalently, we take \(C\) to be the bundle \(T^{*}\Sigma\otimes\mathfrak{g}\) instead of \(T^{*}M\otimes\mathfrak{g}\). Since \(P\) is a trivial bundle, any gauge transformation \(g\) is just a \(G\)-valued function \(g(t,x)\) over \(M=\mathbb{R}\times\Sigma\). The gauge transformation of \(A\) under \(g\) is given by \(A\mapsto gAg^{-1}+gdg^{-1}\). Since we require \(A_{0}\) to be \(0\), \(g\) must not depend on \(t\). In other words, \(g\in\text{Gau}(P_{\Sigma})\), where \(P_{\Sigma}\) is the trivial principal \(G\)-bundle over \(\Sigma\). On the other hand, let \(f\in\text{Diff}(M)\). The pullback of \(A\) under \(f\) is locally given by \(A\mapsto A_{\mu}(f(t,x))\frac{\partial f^{\mu}(x,t)}{\partial x^{\nu}}dx^{\nu}+A_{\mu}(f(t,x))\frac{\partial f^{\mu}(x,t)}{\partial t}dt\). Therefore, we also need to require \(f^{\mu}\) to be independent of \(t\). In other words, \(f\) can be viewed as an automorphism of the trivial line bundle \(\mathbb{R}_{\Sigma}\) over \(\Sigma\). To conclude, we take \(Y^{\prime}\) to be the graded gauge natural bundle
\[Y^{\prime}=\underline{\mathfrak{g}}[1]\times_{M}(T^{*}\Sigma \otimes\mathfrak{g})\times_{M}(T^{*}[-1]\Sigma\otimes\mathfrak{g})\to Y_{0}=T^{ *}\Sigma\otimes\mathfrak{g}\to M=\mathbb{R}\times\Sigma.\]
And \(M\times\Gamma(Y)\) is equipped with the \((\text{Aut}(\mathbb{R}_{\Sigma})\times\text{Gau}(P_{\Sigma}))^{*}\)-action instead of the full \(\text{Aut}(P)^{\star}\)-action.
A coordinate chart of \(\Sigma\) induces a local coordinate system
\[(x^{\mu},\ \theta^{a},\ A_{\mu}^{a},\ \chi_{\mu}^{a},\ \phi^{a},\ v_{\mu}^{a}, \ b_{\mu}^{a})\]
for \(Y\) with degrees \(0,1,0,-1,2,1,0\), respectively. The canonical \(QKG^{\star}\)-structure is
\[Q\theta^{a}=\phi^{a},\quad Q\phi^{a}=0,\quad QA_{\mu}^{a}=v_{\mu}^{a},\quad Qv_{\mu}^{a}=0,\quad Q\chi_{\mu}^{a}=b_{\mu}^{a},\quad Qb_{\mu}^{a}=0,\] \[K_{\frac{\partial}{\partial t}}\theta^{a}=0,\quad K_{\frac{\partial}{\partial t}}\phi^{a}=\dot{\theta}^{a},\quad K_{\frac{\partial}{\partial t}}A_{\mu}^{a}=0,\quad K_{\frac{\partial}{\partial t}}v_{\mu}^{a}=\dot{A}_{\mu}^{a},\quad K_{\frac{\partial}{\partial t}}\chi_{\mu}^{a}=0,\quad K_{\frac{\partial}{\partial t}}b_{\mu}^{a}=\dot{\chi}_{\mu}^{a},\] \[K_{\mu}\theta^{a}=0,\quad K_{\mu}\phi^{a}=\theta_{\mu}^{a},\quad K_{\mu}A_{\nu}^{a}=0,\quad K_{\mu}v_{\nu}^{a}=A_{\nu,\mu}^{a},\quad K_{\mu}\chi_{\nu}^{a}=0,\quad K_{\mu}b_{\nu}^{a}=\chi_{\nu,\mu}^{a},\] \[I_{\lambda}\theta^{a}=0,\ I_{\lambda}\phi^{a}=-f_{bc}^{a}\lambda^{b}\theta^{c},\ I_{\lambda}A_{\mu}^{a}=0,\ I_{\lambda}v_{\mu}^{a}=\partial_{\mu}\lambda^{a}+f_{bc}^{a}A_{\mu}^{b}\lambda^{c},\ I_{\lambda}\chi_{\mu}^{a}=0,\ I_{\lambda}b_{\mu}^{a}=-f_{bc}^{a}\lambda^{b}\chi_{\mu}^{c}.\]
Just like the case of supersymmetric quantum mechanics, we use \(\overline{Q}\) to denote \(K_{\frac{\partial}{\partial t}}\). Using physicists' notation, we rewrite the \(QKG^{\star}\)-structure as
\[Q\theta=\phi,\quad Q\phi=0,\quad QA=\upsilon,\quad Q\upsilon=0,\quad Q\chi=b,\quad Qb=0,\] \[\overline{Q}\theta=0,\quad\overline{Q}\phi=\dot{\theta},\quad\overline{Q}A=0,\quad\overline{Q}\upsilon=\dot{A},\quad\overline{Q}\chi=0,\quad\overline{Q}b=\dot{\chi},\] \[K_{\mu}\theta=0,\quad K_{\mu}\phi=\partial_{\mu}\theta,\quad K_{\mu}A=0,\quad K_{\mu}\upsilon=\partial_{\mu}A,\quad K_{\mu}\chi=0,\quad K_{\mu}b=\partial_{\mu}\chi,\] \[I_{\lambda}\theta=0,\quad I_{\lambda}\phi=-[\lambda,\theta],\quad I_{\lambda}A=0,\quad I_{\lambda}\upsilon=d_{A}\lambda,\quad I_{\lambda}\chi=0,\quad I_{\lambda}b=-[\lambda,\chi].\]
In this case, one can consider the following deformation of \(\overline{Q}\).
\[\overline{Q}\theta=0,\quad\overline{Q}\phi=\dot{\theta},\quad \overline{Q}A=s\chi,\quad\overline{Q}\upsilon=\dot{A}-sb,\quad\overline{Q} \chi=0,\quad\overline{Q}b=\dot{\chi}.\]
For simplicity, we set \(s=1\). Applying the following change of coordinates as an analogue of (4.12)
\[\phi\to\phi-\frac{1}{2}[\theta,\theta],\quad\upsilon\to\upsilon+d_{A}\theta, \quad b\to b-\star_{3}F-[\theta,\chi],\]
we obtain
\[Q\theta=\phi-\frac{1}{2}[\theta,\theta],\quad Q\phi=-[\theta, \phi],\] \[QA=\upsilon+d_{A}\theta,\quad Q\upsilon=-[\theta,\upsilon]-d_{A}\phi\] \[Q\chi=b-\star_{3}F-[\theta,\chi],\quad Qb=\star_{3}d_{A}\upsilon- [\theta,b]+[\phi,\chi],\]
and
\[\overline{Q}\theta=0,\quad\overline{Q}\phi=\dot{\theta},\quad \overline{Q}A=\chi,\quad\overline{Q}\upsilon=\dot{A}-b+\star_{3}F+[\theta, \chi],\quad\overline{Q}\chi=0,\quad\overline{Q}b=\dot{\chi}+\star_{3}d_{A}\chi.\]
The expressions for \(K_{\mu}\) and \(I_{\lambda}\) are irrelevant. For the Lagrangian, we consider
\[\mathcal{L}=\frac{d}{dt}\left(\int_{\Sigma}\mathrm{CS}(A)\right)dt+Q\left( \int_{\Sigma}d\mathrm{vol}_{\Sigma}\mathrm{Tr}(\chi^{\mu}(\dot{A}_{\mu}-b_{\mu}) )\right)dt\]
where \(\mathrm{vol}_{\Sigma}\) is the volume form on \(\Sigma\), and \(\mathrm{CS}(A)\) is the three dimensional Chern-Simons Lagrangian
\[\mathrm{CS}(A)=\mathrm{Tr}(\frac{1}{2}A\wedge dA+\frac{1}{6}A\wedge[A,A]).\]
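The computations of \(Q\mathcal{L}\) and \(\overline{Q}\mathcal{L}\) below rest on the standard first variation of the Chern-Simons form,

\[\delta\,\mathrm{CS}(A)=\mathrm{Tr}(\delta A\wedge F)+d(\cdots),\]

together with the Bianchi identity \(d_{A}F=0\); the exact term integrates to zero since \(\Sigma\) is assumed to have no boundary.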
It is straightforward to show that
\[\mathcal{L}=\int_{\Sigma}d\mathrm{vol}_{\Sigma}\,\mathrm{Tr}\left(b^{\mu}(\dot{A}_{\mu}+(\star_{3}F)_{\mu}-b_{\mu})-\chi^{\mu}(\dot{\upsilon}_{\mu}+(d_{A}\dot{\theta})_{\mu}-(\star_{3}d_{A}\upsilon)_{\mu}-[\phi,\chi_{\mu}])\right)dt.\]
By definition, \(Q\) is a Noether symmetry of \(\mathcal{L}\). We have
\[Q\mathcal{L}=\frac{d}{dt}\left(\int_{\Sigma}\mathrm{Tr}(\upsilon\wedge F) \right)dt.\]
On the other hand, we have
\[\overline{Q}\left(\int_{\Sigma}d\mathrm{vol}_{\Sigma}\mathrm{Tr}(\chi^{\mu}( \dot{A}_{\mu}-b_{\mu}))\right)=\int_{\Sigma}d\mathrm{vol}_{\Sigma}\mathrm{Tr} (\chi^{\mu}(\star_{3}d_{A}\chi)_{\mu})=2\int_{\Sigma}d\mathrm{Tr}(\chi\wedge \chi)=0.\]
It follows that
\[\overline{Q}\mathcal{L} =\frac{d}{dt}\left(\int_{\Sigma}\mathrm{Tr}(\chi\wedge F)+\int_{ \Sigma}d\mathrm{vol}_{\Sigma}\mathrm{Tr}(\chi^{\mu}(\dot{A}_{\mu}-b_{\mu})) \right)dt\] \[=\frac{d}{dt}\left(\int_{\Sigma}d\mathrm{vol}_{\Sigma}\mathrm{Tr}( \chi^{\mu}(\dot{A}_{\mu}+(\star_{3}F)_{\mu}-b_{\mu}))\right)dt.\]
Hence, \(\overline{Q}\) is also a symmetry of the theory. The boundary term of the Lagrangian is
\[\gamma=\int_{\Sigma}d\mathrm{vol}_{\Sigma}\mathrm{Tr}(b^{\mu} \delta A_{\mu}+\chi^{\mu}(\delta v_{\mu}+(d_{A}\delta\theta)_{\mu})).\]
It is not hard to show that the Noether currents of \(Q\) and \(\overline{Q}\) are
\[\mathcal{Q}=-\int_{\Sigma}d\mathrm{vol}_{\Sigma}\mathrm{Tr}((b^{ \mu}-(\star_{3}F)^{\mu}-[\theta,\chi^{\mu}])(v_{\mu}+(d_{A}\theta)_{\mu}))\]
and
\[\overline{\mathcal{Q}}=-\int_{\Sigma}d\mathrm{vol}_{\Sigma} \mathrm{Tr}((b^{\mu}-[\theta,\chi^{\mu}])\chi_{\mu}).\]
## 5 CohLFTs in the extended BV-BFV formalism
### Extended BV-BFV formalism in the variational bicomplex setting
Let \(Y\) be a graded gauge natural bundle over an \(n\)-manifold \(M\). For our purposes, we assume that the degree \(0\) component \(Y_{0}\) of \(Y\) is an affine bundle. Let \(\omega_{i}\), \(i=1,2\), be two local forms over \(M\times\Gamma(Y)\). We say that \(\omega_{1}\) is equivalent to \(\omega_{2}\) if they are equal up to a \(d_{h}\)-exact term. We follow [15] and use "\(\simeq\)" to denote this equivalence relation.
#### 5.1.1 BV Lagrangian field theory
**Definition 5.1**.: _A presymplectic local structure/form of degree \(q\) on \(M\times\Gamma(Y)\) is a local form \(\omega\in\Omega^{p,2,q}_{loc}(M\times\Gamma(Y))\) such that \(d_{v}\omega\simeq 0\). (In the case of \(p=n\), we also require \(\omega\) to be a functional form.) \(\omega\) is called a symplectic local structure/form if it is nondegenerate with respect to vertical local vector fields \(\Xi\) over \(M\times\Gamma(Y)\), i.e., if \(\iota_{\Xi}\omega\simeq 0\) implies that \(\Xi=0\). An odd symplectic local form of degree \(-1\) in \(\Omega^{n,2,-1}_{loc}\) is called a BV symplectic local form._
We will need the following lemma.
**Lemma 5.1**.: [15, Proposition A.1] _The cohomology groups \(H^{p,q}(\Omega_{loc};d_{v}/d_{h})\) of the following cochain complex_
\[\Omega^{p,0}_{loc}(M\times\Gamma(Y))/d_{h}\Omega^{p-1,0}_{loc}(M \times\Gamma(Y))\xrightarrow{d_{v}}\Omega^{p,1}_{loc}(M\times\Gamma(Y))/d_{h }\Omega^{p-1,1}_{loc}(M\times\Gamma(Y))\xrightarrow{d_{v}}\cdots\]
_is trivial for \(q\geq 1\). While for \(q=0\),_
\[H^{p,0}(\Omega_{loc};d_{v}/d_{h})\cong\Omega^{p}(M)/d\Omega^{p-1 }(M). \tag{5.1}\]
Let \(\omega\) be a (pre)symplectic local form. It follows from Lemma 5.1 that one can always find a local form \(\gamma\) such that \(\omega\simeq d_{v}\gamma\). \(\gamma\) is called the (pre)symplectic local potential of \(\omega\).
**Definition 5.2**.: _Let \(\omega\) be a (pre)symplectic local form. A vertical local vector field \(\Xi\) is called symplectic with respect to \(\omega\) if \(\mathrm{Lie}_{\Xi}\omega\simeq 0\). \(\Xi\) is called Hamiltonian with respect to \(\omega\) if there exists a local form \(\mathcal{F}\) of form degree \((n,0)\) such that \(\iota_{\Xi}\omega-d_{v}\mathcal{F}\simeq 0\)._
By Lemma 5.1, every symplectic \(\Xi\) is Hamiltonian.
**Definition 5.3**.: _A BV Lagrangian field theory is a LFT \((M,Y,\mathcal{L})\) together with a Noether symmetry \(Q\) of degree \(1\) and a BV symplectic local form \(\omega\) such that_
1. \(Q\) _is a cohomological vector field, i.e.,_ \(Q^{2}=0\)_;_
2. \(Q\) _is Hamiltonian with respect to_ \(\omega\) _and_ \(\iota_{Q}\omega\simeq d_{v}\mathcal{L}\)_._
Let \(\omega\) be a BV symplectic local form. Let \(\omega_{M}:=\int_{M}\omega\). By definition, \(\omega_{M}\) is a presymplectic form over \(\Gamma(Y)\) in the usual sense, i.e.,
\[\delta\omega_{M}=\int_{M}d_{v}\omega=0.\]
Moreover, \(\omega_{M}\) is nondegenerate with respect to local vector fields over \(\Gamma(Y)\) since \(\omega\) is symplectic. Therefore, if \(F\) is a local functional over \(\Gamma(Y)\), one can always find a local vector field \(\Xi_{F}\) such that \(\delta F=\iota_{\Xi_{F}}\omega_{M}\). It follows that there exists a well-defined Poisson bracket \(\{\cdot,\cdot\}\) between local functionals, given by
\[\{F_{1},F_{2}\}:=\iota_{\Xi_{F_{1}}}\iota_{\Xi_{F_{2}}}\omega_{M}.\]
In particular, we can take \(F\) to be the action \(S=\int_{M}\mathcal{L}\) and \(\Xi_{F}\) to be the cohomological vector field \(Q\). By definition, we have \(\{S,S\}=Q(S)=0\). In this way, one can derive a classical BV theory from a BV Lagrangian field theory.
**Remark 5.1**.: _If \(\Phi\in C_{\mathcal{L}}\) is a critical point of \(S\), then it is also in the zero locus of the cohomological vector field \(Q\). In fact, from the LFT point of view, we have_
\[\iota_{Q}\omega\simeq d_{v}\mathcal{L}\simeq EL,\]
_where \(EL\) is the Euler-Lagrange form of the BV LFT. It follows that_
\[(\iota_{\Xi}\iota_{Q}\omega)(x,\Phi)\simeq\iota_{\Xi}EL(x,\Phi)=0\]
_vanishes over \(M\) for all vertical local vector field \(\Xi\). Since \(\omega\) is symplectic, we must have \(Q(\cdot,\Phi)\equiv 0\)._
#### 5.1.2 Extended BV-BFV Lagrangian field theory
Let \(\omega^{(0)}\) be a BV symplectic local form. Let \(Q\) be a cohomological vector field which is symplectic with respect to \(\omega^{(0)}\), i.e., \(\mathrm{Lie}_{Q}\omega^{(0)}+d_{h}\omega^{(1)}=0\) for some \(\omega^{(1)}\) of degree \((n-1,2,0)\). Note that
\[d_{v}(d_{h}\omega^{(1)})=-\mathrm{Lie}_{Q}d_{v}\omega^{(0)}\simeq 0.\]
By Lemma 3.1, we have \(d_{v}\omega^{(1)}\simeq 0\), i.e., it is also a presymplectic local form. Moreover, note that
\[d_{h}(\mathrm{Lie}_{Q}\omega^{(1)})=-\mathrm{Lie}_{Q^{2}}\omega^{(0)}=0.\]
By Lemma 3.1, one can also find a local form \(\omega^{(2)}\) of degree \((n-2,2,1)\) satisfying \(\mathrm{Lie}_{Q}\omega^{(1)}+d_{h}\omega^{(2)}=0\) and \(d_{v}\omega^{(2)}\simeq 0\). Repeating this process, one can find a sequence of presymplectic local forms \(\{\omega^{(p)}\}_{p=0}^{n}\) satisfying
\[(\mathrm{Lie}_{Q}+d_{h})\sum_{p=0}^{n}\omega^{(p)}=0. \tag{5.2}\]
**Definition 5.4**.: _Let \(\alpha^{(0)}\) be a local form of degree \((n,q,r)\). An ascendant sequence of \(\alpha^{(0)}\) is a sequence \(\{\alpha^{(p)}\}_{p=0}^{n}\) of local forms of degrees \((n-p,q,p+r)\) satisfying_
\[(\mathrm{Lie}_{Q}-(-1)^{q+r}d_{h})\sum_{p=0}^{n}\alpha^{(p)}=0. \tag{5.3}\]
_(5.3) is called the ascent equations._
Let \(\omega^{(0)}=d_{v}\gamma^{(0)}\) be a BV symplectic local form. Let \(\{\gamma^{(p)}\}_{p=0}^{n}\) be an ascendant sequence of \(\gamma^{(0)}\). Let \(\omega^{(p)}:=d_{v}\gamma^{(p)}\). We have
\[(\mathrm{Lie}_{Q}+d_{h})\sum_{p=0}^{n}\omega^{(p)}=(\mathrm{Lie}_{Q}+d_{h})d_ {v}\sum_{p=0}^{n}\gamma^{(p)}=-d_{v}(\mathrm{Lie}_{Q}-d_{h})\sum_{p=0}^{n} \gamma^{(p)}=0.\]
In other words, \(\{\omega^{(p)}\}_{p=0}^{n}\) is an ascendant sequence of \(\omega^{(0)}\).
**Definition 5.5**.: _Let \(m\) be an integer, \(0\leq m\leq n\). An \(m\)-extended BV-BFV Lagrangian field theory consists of the following data:_
1. _an LFT_ \((M,Y,\mathcal{L})\) _together with a Noether symmetry_ \(Q\)_, where_ \(Q\) _is a cohomological vector field;_
2. _a BV symplectic local form_ \(\omega^{(0)}=d_{v}\gamma^{(0)}\) _over_ \(M\times\Gamma(Y)\)_;_
3. _an ascendant sequence_ \(\{\omega^{(p)}\}_{p=0}^{n}\) _of_ \(\omega^{(0)}\) _such that_ \(\omega^{(p)}=d_{v}\gamma^{(p)}\) _for some presymplectic potential_ \(\gamma^{(p)}\) _when_ \(p\leq m\) _and_ \(\omega^{(p)}=0\) _when_ \(p>m\)_;_
4. _a sequence of local forms_ \(\{\mathcal{L}^{(p)}\}_{p=0}^{n}\) _with_ \(\mathcal{L}^{(0)}=\mathcal{L}\) _such that_ \[\iota_{Q}\omega^{(p)}=d_{v}\mathcal{L}^{(p)}+d_{h}\gamma^{(p+1)}\] (5.4) _for_ \(p=0,\cdots,n-1\)_, and_ \(\iota_{Q}\omega^{(n)}=d_{v}\mathcal{L}^{(n)}\)_._
_The theory is said to be fully extended if \(m=n\). A \(1\)-extended BV-BFV LFT is simply called a BV-BFV LFT._
**Remark 5.2**.: _The requirement that \(\{\omega^{(p)}\}_{p=0}^{n}\) is an ascendant sequence is actually redundant and can be derived from (5.4)._
For \(m=0\), Definition 5.5 reduces to the definition of a BV Lagrangian field theory.
**Proposition 5.1**.: \(Q\) _is a symmetry of \(\mathcal{L}^{(p)}\), i.e., \(\mathrm{Lie}_{Q}\mathcal{L}^{(p)}\simeq 0\)._
Proof.: On one hand,
\[\mathrm{Lie}_{Q}(\iota_{Q}\omega^{(p)})=\iota_{Q}(\mathrm{Lie}_{Q}\omega^{(p) })=-d_{h}(\iota_{Q}\omega^{(p+1)})=-d_{h}(d_{v}\mathcal{L}^{(p+1)}),\]
where we use the ascent equations of \(\omega^{(p)}\) and (5.4). On the other hand,
\[\mathrm{Lie}_{Q}(\iota_{Q}\omega^{(p)})=\mathrm{Lie}_{Q}(d_{v} \mathcal{L}^{(p)}+d_{h}\gamma^{(p+1)})=-d_{v}\mathrm{Lie}_{Q}\mathcal{L}^{(p) }+d_{h}((\iota_{Q}d_{v}-d_{v}\iota_{Q})\gamma^{(p+1)})\] \[=-d_{v}\mathrm{Lie}_{Q}\mathcal{L}^{(p)}+d_{h}(d_{v}\mathcal{L}^{ (p+1)})-d_{h}(d_{v}(\iota_{Q}\gamma^{(p+1)})),\]
where we use \(\mathrm{Lie}_{Q}=[\iota_{Q},d_{v}]\) and (5.4). It follows that
\[d_{v}\left(\mathrm{Lie}_{Q}\mathcal{L}^{(p)}-d_{h}(2\mathcal{L}^{(p+1)}- \iota_{Q}\gamma^{(p+1)})\right)=0.\]
Note that \(\mathrm{Lie}_{Q}\mathcal{L}^{(p)}\) is of vertical form degree \(0\) and ghost number degree \(p+1>0\). By (5.1), we must have
\[\mathrm{Lie}_{Q}\mathcal{L}^{(p)}=d_{h}(\mathcal{L}^{(p+1)}_{CMR}), \tag{5.5}\]
where \(\mathcal{L}^{(p)}_{CMR}:=2\mathcal{L}^{(p)}-\iota_{Q}\gamma^{(p)}\) is known as the modified Lagrangian of the extended BV-BFV LFT [10].
**Remark 5.3**.: _Noting that \(\mathrm{Lie}_{Q}\mathcal{L}^{(p)}=\iota_{Q}d_{v}\mathcal{L}^{(p)}=\iota_{Q}^ {2}\omega^{(p)}-d_{h}(\iota_{Q}\gamma^{(p+1)})\), (5.5) is equivalent to_
\[\iota_{Q}^{2}\omega^{(p)}=2d_{h}\mathcal{L}^{(p+1)}.\]
Let \(\mathbb{A}^{(p)}:=\mathcal{L}^{(p)}-\iota_{Q}\gamma^{(p)}\) be the difference between the modified Lagrangian \(\mathcal{L}^{(p)}_{CMR}\) and \(\mathcal{L}^{(p)}\). \(\mathbb{A}^{(p)}\) is known as the BV-BFV difference [10]. By definition,
\[(\mathrm{Lie}_{Q}-d_{h})\sum_{p=0}^{n}\mathcal{L}^{(p)}=d_{h}\sum_{p=0}^{n} \mathbb{A}^{(p)},\quad(\mathrm{Lie}_{Q}-d_{h})\sum_{p=0}^{n}\gamma^{(p)}=d_{v }\sum_{p=0}^{n}\mathbb{A}^{(p)}.\]
It follows that
\[0=(\mathrm{Lie}_{Q}+d_{h})(\mathrm{Lie}_{Q}-d_{h})\sum_{p=0}^{n}\gamma^{(p)} =(\mathrm{Lie}_{Q}+d_{h})d_{v}\sum_{p=0}^{n}\mathbb{A}^{(p)}=-d_{v}\left(( \mathrm{Lie}_{Q}-d_{h})\sum_{p=0}^{n}\mathbb{A}^{(p)}\right).\]
By Lemma 5.1, we have
\[(\mathrm{Lie}_{Q}-d_{h})\sum_{p=0}^{n}\mathbb{A}^{(p)}=0. \tag{5.6}\]
In other words, \(\{\mathbb{A}^{(p)}\}_{p=0}^{n}\) satisfies the ascent equations and measures the failure of \(\{\mathcal{L}^{(p)}\}_{p=0}^{n}\) and \(\{\gamma^{(p)}\}_{p=0}^{n}\) to be ascendant sequences. In the case of a vanishing \(\{\mathbb{A}^{(p)}\}_{p=0}^{n}\), the extended BV-BFV LFT is fully determined by the presymplectic local potentials \(\gamma^{(p)}\) and the cohomological vector field \(Q\).
**Definition 5.6**.: _An \(f\)-transformation of an extended BV-BFV LFT is the map_
\[P_{f}:(\sum_{p=0}^{n}\mathcal{L}^{(p)},\sum_{p=0}^{n}\gamma^{(p)})\mapsto(\sum_{p =0}^{n}\mathcal{L}^{(p)}+d_{h}\sum_{p=0}^{n}f^{(p)},\sum_{p=0}^{n}\gamma^{(p)}-d _{v}\sum_{p=0}^{n}f^{(p)}),\]
_where \(f^{(p)}\) is a local form of degree \((n-p,0,p-1)\)._
It is easy to observe that (5.4) is preserved by \(P_{f}\). Therefore, \(f\)-transformations are well-defined over the space of extended BV-BFV LFTs. Moreover, one can check that
\[P_{f}(\mathbb{A}^{(p)})-\mathbb{A}^{(p)}=\mathrm{Lie}_{Q}f^{(p)}+d_{h}f^{(p+1)}.\]
Or equivalently,
\[P_{f}(\sum_{p=0}^{n}\mathbb{A}^{(p)})=\sum_{p=0}^{n}\mathbb{A}^{(p)}+(\mathrm{ Lie}_{Q}+d_{h})\sum_{p=0}^{n}f^{(p)}. \tag{5.7}\]
In other words, the \((\mathrm{Lie}_{Q}-d_{h})\)-cohomology class of the BV-BFV difference \(\sum_{p=0}^{n}\mathbb{A}^{(p)}\) is preserved under \(f\)-transformations.
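The verification of (5.7) is short: since \(f^{(p)}\) has vertical form degree \(0\), \(\iota_{Q}f^{(p)}=0\) and hence \(\mathrm{Lie}_{Q}f^{(p)}=\iota_{Q}d_{v}f^{(p)}\); therefore

\[P_{f}(\mathbb{A}^{(p)})=\mathcal{L}^{(p)}+d_{h}f^{(p+1)}-\iota_{Q}\left(\gamma^{(p)}-d_{v}f^{(p)}\right)=\mathbb{A}^{(p)}+\mathrm{Lie}_{Q}f^{(p)}+d_{h}f^{(p+1)},\]

which is the componentwise form of (5.7).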
**Lemma 5.2**.: _Let \(\mathcal{L}^{(0)}\) be the Lagrangian of an extended BV-BFV LFT. Let \(\gamma\) be the (canonical) boundary form of \(\mathcal{L}^{(0)}\). We have \(\gamma\simeq-\gamma^{(1)}\) if \(\omega^{(0)}=d_{v}\gamma^{(0)}\) takes the form_
\[\omega^{(0)}=\omega_{ab}\delta\Phi^{a}\wedge\delta\Phi^{b}\wedge\nu \tag{5.8}\]
_where \(\nu\) is a volume form on \(M\)._
Proof.: By Corollary 3.1, we have
\[\iota_{Q}\omega^{(0)}=d_{v}\mathcal{L}^{(0)}+d_{h}\gamma^{(1)}=EL+d_{h}( \gamma+\gamma^{(1)}),\]
where \(EL\) is the Euler-Lagrange form of the LFT. On the other hand, note that
\[\iota_{Q}\omega^{(0)}=\omega_{ab}(Q(\Phi^{a})\delta\Phi^{b}+\delta\Phi^{a}Q( \Phi^{b}))\wedge\nu\]
is a source form. By Lemma 3.1, we must have \(\iota_{Q}\omega^{(0)}=EL\) and \(d_{h}(\gamma+\gamma^{(1)})=0\). It follows again from Lemma 3.1 that \(\gamma\simeq-\gamma^{(1)}\).
#### 5.1.3 K-sequences
Supposing that there exists a \(QKG^{\star}\)-structure on \(M\times\Gamma(Y)\), one can then locally define two homotopy operators \(K_{0}:=dx^{\mu}\wedge\iota_{\xi_{\partial_{\mu}}}\) and \(K:=dx^{\mu}\wedge\mathrm{Lie}_{K_{\mu}}\) on \(\Omega_{\circ loc}\) as before. Let \(\gamma^{(n)}\), \(\mathbb{A}^{(n)}\), and \(\mathcal{L}^{(n)}\) be local forms over \(M\times\Gamma(Y)\) of degrees \((0,1,n-1)\), \((0,0,n)\), and \((0,0,n)\), respectively, such that
\[\mathrm{Lie}_{Q}\gamma^{(n)}=d_{v}\mathbb{A}^{(n)},\quad\mathcal{L}^{(n)}= \mathbb{A}^{(n)}+\iota_{Q}\gamma^{(n)}. \tag{5.9}\]
By Lemma 5.1, \(\mathbb{A}^{(n)}\) is \(Q\)-closed. Moreover, \(\mathrm{Lie}_{Q}\mathcal{L}^{(n)}=\mathrm{Lie}_{Q}\iota_{Q}\gamma^{(n)}=\iota_{Q}d_{v}\mathbb{A}^{(n)}=0\), i.e., \(\mathcal{L}^{(n)}\) is also \(Q\)-closed.
Using the homotopy operator \(K\), one can construct a fully extended BV-BFV LFT from (5.9). Let
\[\gamma_{K}^{(n-p)}:=\frac{K^{p}}{p!}\gamma^{(n)},\quad\mathbb{A}_{K}^{(n-p)}:= \frac{K^{p}}{p!}\mathbb{A}^{(n)},\quad\mathcal{L}_{K}^{(n-p)}:=\frac{K^{p}}{p!} \mathcal{L}^{(n)}.\]
Or equivalently,
\[\sum_{p=0}^{n}\gamma_{K}^{(p)}:=\exp(K)\gamma^{(n)},\quad\sum_{p=0}^{n} \mathbb{A}_{K}^{(p)}:=\exp(K)\mathbb{A}^{(n)},\quad\sum_{p=0}^{n}\mathcal{L}_{K }^{(p)}:=\exp(K)\mathcal{L}^{(n)}.\]
Let \(\widetilde{D}\omega:=(-1)^{i_{D}(d_{tot}(\omega)-n)}D\omega\) for a derivation \(D\) of \(\Omega_{\circ loc}\) of horizontal form degree \(i_{D}\). We have
\[(\mathrm{Lie}_{Q}-d_{h})\sum_{p=0}^{n}\gamma_{K}^{(p)}=\left(\mathrm{Lie}_{Q} -\widetilde{d_{h,inv}}\right)\exp(\widetilde{K})\gamma^{(n)}=\exp(\widetilde{ K})\mathrm{Lie}_{Q}\gamma^{(n)}=d_{v}\exp(\widetilde{K})\mathbb{A}^{(n)}=d_{v} \sum_{p=0}^{n}\mathbb{A}_{K}^{(p)},\]
where we use \(\exp(\widetilde{K})\mathrm{Lie}_{Q}=(\mathrm{Lie}_{Q}-\widetilde{d_{h,inv}} )\exp(\widetilde{K})\). Let's assume that both \(\{\gamma_{K}^{(p)}\}_{p=0}^{n}\) and \(\{\mathbb{A}_{K}^{(p)}\}_{p=0}^{n}\) are globally well-defined. They then determine a fully extended BV-BFV LFT, whose Lagrangians are given by
\[\sum_{p=0}^{n}\mathcal{L}^{(p)} =\sum_{p=0}^{n}\mathbb{A}_{K}^{(p)}+\iota_{Q}\sum_{p=0}^{n} \gamma_{K}^{(p)}=\sum_{p=0}^{n}\mathbb{A}_{K}^{(p)}+\left([\iota_{Q},\exp( \widetilde{K})]+\exp(\widetilde{K})\iota_{Q}\right)\gamma^{(n)}\] \[=\sum_{p=0}^{n}\mathbb{A}_{K}^{(p)}-\widetilde{K_{0}}\exp( \widetilde{K})\gamma^{(n)}+\exp(\widetilde{K})\left(\mathcal{L}^{(n)}- \mathbb{A}^{(n)}\right)=\sum_{p=0}^{n}\mathcal{L}_{K}^{(p)}-K_{0}\sum_{p=0}^{ n}\gamma_{K}^{(p)}, \tag{5.10}\]
where we use (4.2) to pass to the second line.
**Remark 5.4**.: _The above discussion can be easily generalized to the case of \(m\)-extended BV-BFV LFTs, \(m=1,\cdots,n\)._
### Cotangent lift of CohLGFTs
#### 5.2.1 Cotangent lift of BRST theories
Let \(Y\) be a graded gauge natural bundle over an \(n\)-dimensional Riemannian manifold \((M,g)\). Let \(Y_{ct}:=V^{*}[-1]Y\). There is a canonical BV symplectic local form \(\omega_{ct}\) over \(M\times\Gamma(Y_{ct})\). Let \((x^{\mu},\Phi^{a})\) be a local coordinate system of \(Y\), which induces a local coordinate system \((x^{\mu},\Phi^{a},\Phi^{+}_{a})\) of \(Y_{ct}\), which again induces a local coordinate system of \(M\times\Gamma(Y_{ct})\). \(\omega_{ct}\) is then given by
\[\omega_{ct}=\delta\Phi^{+}_{a}\wedge\delta\Phi^{a}\wedge d\mathrm{vol}_{g}=d_{ v}(\Phi^{+}_{a}\delta\Phi^{a}\wedge d\mathrm{vol}_{g})=:d_{v}\gamma_{ct}.\]
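In these coordinates, the Poisson bracket between local functionals induced by \(\omega_{ct}\) (cf. Section 5.1.1) takes the familiar form of the BV antibracket; schematically, and with signs depending on the chosen conventions,

\[\{F_{1},F_{2}\}=\int_{M}\left(\frac{\delta F_{1}}{\delta\Phi^{a}}\frac{\delta F_{2}}{\delta\Phi^{+}_{a}}-\frac{\delta F_{1}}{\delta\Phi^{+}_{a}}\frac{\delta F_{2}}{\delta\Phi^{a}}\right)d\mathrm{vol}_{g},\]

where \(\delta/\delta\Phi^{a}\) and \(\delta/\delta\Phi^{+}_{a}\) denote variational derivatives.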
Every vertical local vector field \(\Xi=\Xi^{a}\frac{\partial}{\partial\Phi^{a}}+\widehat{\partial_{I}}(\Xi^{a})\frac{\partial}{\partial\Phi^{a}_{I}}\) over \(M\times\Gamma(Y)\) can be lifted to a local form over \(M\times\Gamma(Y_{ct})\) of horizontal degree \(n\) defined by the formula
\[\widetilde{\Xi}=\Phi^{+}_{a}\Xi^{a}d\mathrm{vol}_{g}.\]
Recall that \(d_{v}\widetilde{\Xi}\) can be composed as
\[d_{v}\widetilde{\Xi}=\mathcal{I}(d_{v}\widetilde{\Xi})+d_{h}\text{-exact term},\]
where \(\mathcal{I}\) is the interior Euler operator and \(\mathcal{I}(d_{v}\widetilde{\Xi})\) is a source form. There exists a unique vertical local vector field \(\Xi_{cl}\) over \(M\times\Gamma(Y_{ct})\) such that \(\iota_{\Xi_{cl}}\omega_{ct}=\mathcal{I}(d_{v}\widetilde{\Xi})\). One can verify that the map \(\Xi\mapsto\Xi_{cl}\) defines a homomorphism of graded Lie superalgebras, and that \(\Xi_{cl}\) can be written as
\[\Xi_{cl}=\Xi^{a}\frac{\partial}{\partial\Phi^{a}}+\text{terms involving }\frac{\partial}{\partial\Phi^{a}_{I}},\frac{\partial}{\partial\Phi^{+}_{a}}, \text{ and }\frac{\partial}{\partial\Phi^{+}_{a,I}}, \tag{5.11}\]
when \(\Xi\) is odd, and
\[\Xi_{cl}=(-1)^{|\Phi^{a}|}\Xi^{a}\frac{\partial}{\partial\Phi^{a}}+\text{ terms involving }\frac{\partial}{\partial\Phi^{a}_{I}},\frac{\partial}{\partial\Phi^{+}_{a}}, \text{ and }\frac{\partial}{\partial\Phi^{+}_{a,I}}, \tag{5.12}\]
when \(\Xi\) is even. From (5.11) and (5.12), one can easily see that \(\widetilde{\Xi}=\iota_{\Xi_{cl}}\gamma_{ct}\).
Let \((M,Y,\mathcal{L})\) be any LFT such that \(\Xi\) is a Noether symmetry of \(\mathcal{L}\). \(\mathcal{L}\) can be canonically viewed as a local form over \(M\times\Gamma(Y_{ct})\). By (5.11) and (5.12), we have
\[\text{Lie}_{\Xi_{cl}}\mathcal{L}=\text{Lie}_{\Xi}\mathcal{L}\simeq 0.\]
Let \(Q_{\mathcal{L}}\) denote the vector field associated to \(\mathcal{L}\) via \(\omega_{ct}\). By definition, \(Q^{2}_{\mathcal{L}}=0\) and \(Q_{\mathcal{L}}\) satisfies the equation
\[\iota_{Q_{\mathcal{L}}}\omega_{ct}=EL,\]
where \(EL\) is the Euler-Lagrange form of the LFT. Note that
\[\iota_{[\Xi_{cl},Q_{\mathcal{L}}]}\omega_{ct}=(\text{Lie}_{\Xi_{cl}}\iota_{Q_ {\mathcal{L}}}-(-1)^{|\Xi|}\iota_{Q_{\mathcal{L}}}\text{Lie}_{\Xi_{cl}}) \omega_{ct}\simeq\text{Lie}_{\Xi_{cl}}(d_{v}\mathcal{L})\simeq 0,\]
where we use \([d_{h},\iota_{Q_{\mathcal{L}}}]=0\), \(\text{Lie}_{\Xi_{cl}}\mathcal{L}\simeq 0\), and \(\text{Lie}_{\Xi_{cl}}\omega_{ct}\simeq 0\). Since \(\omega_{ct}\) is symplectic, we conclude that
\[[\Xi_{cl},Q_{\mathcal{L}}]=0.\]
In particular, one can choose \(\Xi\) to be a cohomological vector field \(Q\) and define
\[Q_{BV}:=Q_{cl}+Q_{\mathcal{L}},\]
which is again a cohomological vector field. By definition, it is also the Hamiltonian vector field associated to
\[\mathcal{L}_{BV}:=\mathcal{L}+\widetilde{Q}=\mathcal{L}+\iota_{Q_{BV}}\gamma_{ ct}. \tag{5.13}\]
\((M,Y_{ct},\mathcal{L}_{BV})\) together with \(Q_{BV}\) and \(\omega_{ct}\) defines a BV Lagrangian field theory, which is called the cotangent lift of \((M,Y,\mathcal{L})\).
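That \(Q_{BV}\) squares to zero is immediate from the facts just established: the map \(\Xi\mapsto\Xi_{cl}\) is a homomorphism of graded Lie superalgebras, so \([Q_{cl},Q_{cl}]=([Q,Q])_{cl}=0\), while \(Q_{\mathcal{L}}^{2}=0\) and \([Q_{cl},Q_{\mathcal{L}}]=0\); hence

\[Q_{BV}^{2}=\tfrac{1}{2}[Q_{BV},Q_{BV}]=\tfrac{1}{2}[Q_{cl},Q_{cl}]+[Q_{cl},Q_{\mathcal{L}}]+\tfrac{1}{2}[Q_{\mathcal{L}},Q_{\mathcal{L}}]=0.\]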
**Definition 5.7**.: _An extended BV-BFV LFT \((M,Y_{ct},\mathcal{L}_{BV})\) is said to be of BRST type if_
1. \(\gamma^{(0)}=\gamma_{ct}\)_;_
2. \(\mathcal{L}^{(0)}=\mathcal{L}_{BV}=\mathcal{L}_{BRST}+\iota_{Q_{BV}}\gamma_{ct}\)_,_
_where \(Q_{BV}=(Q_{BRST})_{cl}+Q_{\mathcal{L}_{BRST}}\) is the cohomological vector field of the extended BV-BFV LFT, \((M,Y,\mathcal{L}_{BRST})\) is a LFT with \(Q_{BRST}\) as a Noether symmetry._
By definition, we have \(\mathbb{A}^{(0)}=\mathcal{L}^{(0)}-\iota_{Q_{BV}}\gamma^{(0)}=\mathcal{L}_{BRST}\). By (5.6), we have
\[\text{Lie}_{Q_{BRST}}\mathbb{A}^{(0)}=\text{Lie}_{Q_{BV}}\mathbb{A}^{(0)}=d_{h} \mathbb{A}^{(1)}\]
It follows that
\[(\text{Lie}_{Q_{BRST}}-d_{h})\sum_{p=0}^{n}\mathbb{A}^{(p)}=0.\]
Therefore, \(f\)-transformations of extended BV-BFV LFTs of BRST type should be only defined for local forms \(f^{(p)}\) over \(M\times\Gamma(Y)\).
#### 5.2.2 Cotangent lift of CohLGFTs
Let \((M,Y,\mathcal{L})\) be a CohLGFT. Recall that there is a cohomological vector field \(Q\) over \(M\times\Gamma(Y)\) and a family of vertical local vector fields \(K_{\Xi}\) parameterized by \(G\)-invariant vector fields \(\Xi\) over the principal \(G\)-bundle \(P\) defining \(Y\), satisfying
\[[K_{\Xi_{1}},K_{\Xi_{2}}]=0,\quad[\Xi_{1},K_{\Xi_{2}}]=K_{[\Xi_{1},\Xi_{2}]}, \quad[Q,K_{\Xi}]=\Xi,\]
where we identify \(\Xi\) as a vertical local vector field of degree \(0\) over \(M\times\Gamma(Y)\) via taking Lie derivatives. A \(G\)-invariant vertical vector field \(\Xi\) over \(P\) can be identified with a section \(\lambda\) of \(\text{ad}P\). We write \(I_{\lambda}\) to denote such \(K_{\Xi}\) and \(\delta_{\lambda}\) to denote the corresponding gauge transformations. By definition, \(Q\), \(I_{\lambda}\), and \(\delta_{\lambda}\) are Noether symmetries of the Lagrangian. Therefore, \(Q_{BV}\), \((I_{\lambda})_{cl}\), and \((\delta_{\lambda})_{cl}\) satisfy
\[[Q_{BV},(I_{\lambda})_{cl}]=(\delta_{\lambda})_{cl}.\]
In other words, they define a vertical local \(\text{Gau}(P)^{*}\)-action on \(M\times\Gamma(Y_{ct})\). Note that \(\mathcal{L}_{BV}=\mathcal{L}+\iota_{Q_{BV}}\gamma_{ct}\) is not basic with respect to this \(\text{Gau}(P)^{*}\)-action. We have
\[(I_{\lambda})_{cl}\mathcal{L}_{BV}=\widetilde{\delta_{\lambda}},\]
which is not \(d_{h}\)-exact.
**Remark 5.5**.: _Just like the finite dimensional case, one can always choose a \(\text{Gau}(P)^{*}\)-invariant Lagrangian submanifold \(\mathcal{L}(\Gamma(Y_{ct}))\) of \(\Gamma(Y_{ct})\) such that \(\widetilde{\delta_{\lambda}}\) vanishes over \(\mathcal{L}(\Gamma(Y_{ct}))\). (Such \(\mathcal{L}(\Gamma(Y_{ct}))\) are usually chosen to be the space of sections of some subbundle of \(Y_{ct}\), e.g., \(\Gamma(Y)\).) The restriction of \(\mathcal{L}_{BV}|_{M\times\mathcal{L}(\Gamma(Y_{ct}))}\) is then basic with respect to this \(\text{Gau}(P)^{\star}\)-action._
**Remark 5.6**.: _One can also consider the cotangent lift \((K_{\xi_{X}})_{cl}\) of \(K_{\xi_{X}}\), \(X\in\mathfrak{X}_{P}(M)\). However, \([Q_{BV},(K_{\xi_{X}})_{cl}]=(\xi_{X})_{cl}\) does not hold unless \(X\) is a Noether symmetry of \(\mathcal{L}\). By definition, such \(X\) exists only if the CohLGFT is supersymmetric._
**Definition 5.8**.: _An extended BV-BFV LFT \((M,Y_{ct},\mathcal{L}_{BV})\) is said to be of CohLGFT type if it is of BRST type, \((M,Y,\mathcal{L}_{BRST})\) is a CohLGFT, and the BV-BFV difference \(\sum_{p=0}^{n}\mathbb{A}^{(p)}\) is basic with respect to the \(\text{Gau}(P)^{\star}\)-action on \(M\times\Gamma(Y_{ct})\), i.e.,_
\[(I_{\lambda})_{cl}\mathbb{A}^{(p)}\simeq 0,\quad(\delta_{\lambda})_{cl}\mathbb{A}^{(p)}\simeq 0.\]
\(f\)-transformations of extended BV-BFV LFTs of CohLGFT type should be only defined for local forms \(f^{(p)}\) over \(M\times\Gamma(Y)\) that are basic with respect to the \(\operatorname{Gau}(P)^{\star}\)-action.
**Theorem 5.1**.: _The BV-BFV difference of an \(m\)-extended BV-BFV LFT of CohLGFT type is uniquely determined by the (general) \(K\)-sequence of \(\mathbb{A}^{(m)}\) up to an \(f\)-transformation._
Proof.: This follows directly from (5.7) and Theorem 4.2.
#### 5.2.3 Cotangent lift of Donaldson-Witten theory
Recall that the configuration bundle \(Y\) of Donaldson-Witten theory is given by \(Y=V[1]Y^{\prime}\), where
\[Y^{\prime}=\operatorname{ad}P[1]\times_{M}C\times_{M}W[-1].\]
For our purpose, we will choose \(W\) to be \(\operatorname{ad}P\otimes\Lambda^{2}T^{*}M\) instead of \(\operatorname{ad}P\otimes\Lambda^{2}_{-}T^{*}M\). This is justifiable because within the BV formalism, one can apply a gauge fixing to eliminate the self-dual component of \(W\). Let
\[(x,\ \theta,\ A,\ \chi,\ \phi,\ \upsilon,\ b)\]
be a coordinate system of \(Y\). (We are adopting physicists' notation again for convenience.) It induces a coordinate system
\[(x,\ \theta,\ A,\ \chi,\ \phi,\ \upsilon,\ b,\ \theta^{+},\ A^{+},\ \chi^{+},\ \phi^{+},\ \upsilon^{+},\ b^{+}),\]
of
\[Y_{ct}=V^{*}[-1]Y \cong Y\times_{M}\operatorname{ad}P[-2]\times_{M}(\operatorname{ ad}P\otimes\Lambda^{3}T^{*}M)[-1]\times_{M}(\operatorname{ad}P\otimes\Lambda^{2}T^{*}M)\] \[\times_{M}\operatorname{ad}P[-3]\times_{M}(\operatorname{ad}P \otimes\Lambda^{3}T^{*}M)[-2]\times_{M}(\operatorname{ad}P\otimes\Lambda^{2}T ^{*}M)[-1],\]
where we use the Killing form \(\operatorname{Tr}\) of \(\mathfrak{g}\), the Riemannian metric \(g\), and the Hodge star operator \(\star\) to make the identifications \(\operatorname{ad}P^{*}\cong\operatorname{ad}P\) and \(\Lambda^{p}TM\cong\Lambda^{4-p}T^{*}M\). For brevity, we use \(\Phi\) to denote the fields \((\theta,\ A,\ \chi,\ \phi,\ \upsilon,\ b)\) and \(\Phi^{+}\) to denote the anti-fields \((\theta^{+},\ A^{+},\ \chi^{+},\ \phi^{+},\ \upsilon^{+},\ b^{+})\). The canonical symplectic local form \(\omega_{ct}\) is of the form
\[\omega_{ct}=\operatorname{Tr}(\delta\Phi^{+}\wedge\delta\Phi)=d_{v} \operatorname{Tr}(\Phi^{+}\wedge\delta\Phi)=:d_{v}\gamma_{ct}.\]
The local form \(\widetilde{Q}\) associated to the cohomological vector field \(Q\) defined by (4.14) to (4.16) is given by
\[\widetilde{Q}= \operatorname{Tr}(\theta^{+}(\phi-\frac{1}{2}[\theta,\theta])+ \phi^{+}(-[\theta,\phi])+A^{+}\wedge(\upsilon+d_{A}\theta)+\upsilon^{+}\wedge( -[\theta,\upsilon]-d_{A}\phi)\] \[+\chi^{+}\wedge(b-[\theta,\chi])+b^{+}\wedge(-[\theta,b]+[\phi, \chi])).\]
We then have
\[d_{v}\widetilde{Q}=\mathrm{Tr}(\delta\Phi^{+}\wedge Q(\Phi))+ \mathrm{Tr}(\theta^{+}(\delta\phi+[\theta,\delta\theta])-\phi^{+}(-[\delta\theta, \phi]+[\theta,\delta\phi])\] \[-A^{+}\wedge(\delta v+[\delta A,\theta]+d_{A}\delta\theta)+\upsilon ^{+}\wedge(-[\delta\theta,v]+[\theta,\delta v]-[\delta A,\phi]-d_{A}\delta\phi)\] \[+\chi^{+}\wedge(\delta b-[\delta\theta,\chi]+[\theta,\delta\chi] )-b^{+}\wedge(-[\delta\theta,b]+[\theta,\delta b]+[\delta\phi,\chi]+[\phi, \delta\chi]))d\mathrm{vol}_{g}.\] \[=\mathrm{Tr}(\delta\Phi^{+}\wedge Q(\Phi))+\mathrm{Tr}(\theta^ {+}\wedge\delta\phi+[\theta^{+},\theta]\wedge\delta\theta-[\phi^{+},\phi] \wedge\delta\theta-[\phi^{+},\theta]\wedge\delta\phi\] \[-A^{+}\wedge\delta v-[A^{+},\theta]\wedge\delta A-d_{A}A^{+} \wedge\delta\theta+[\upsilon^{+},v]\wedge\delta\theta+[\upsilon^{+},\theta] \wedge\delta v+[\upsilon^{+},\phi]\wedge\delta A-d_{A}\upsilon^{+}\wedge\delta\phi)\] \[+\chi^{+}\wedge\delta b+[\chi^{+},\chi]\wedge\delta\theta+[ \chi^{+},\theta]\wedge\delta\chi-[b^{+},b]\wedge\delta\theta-[b^{+},\theta] \wedge\delta b-[b^{+},\chi]\wedge\delta\phi-[b^{+},\phi]\wedge\delta\chi)\] \[+d_{h}\mathrm{Tr}(A^{+}\wedge\delta\theta+\upsilon^{+}\wedge \delta\phi) \tag{5.14}\]
The cotangent lift \(Q_{cl}\) of the cohomological vector field \(Q\) defined by (4.14) to (4.16) is given by
\[Q_{cl}\theta=\phi-\frac{1}{2}[\theta,\theta],\quad Q_{cl}\phi=-[ \theta,\phi],\] \[Q_{cl}A=\upsilon+d_{A}\theta,\quad Q_{cl}\upsilon=-[\theta, \upsilon]-d_{A}\phi\] \[Q_{cl}\chi=b-[\theta,\chi],\quad Q_{cl}b=-[\theta,b]+[\phi,\chi],\] \[Q_{cl}\theta^{+}=-[\theta,\theta^{+}]+[\phi,\phi^{+}]-d_{A}A^{+} +[\upsilon,\upsilon^{+}]-[\chi,\chi^{+}]+[b,b^{+}],\] \[Q_{cl}\phi^{+}=\theta^{+}-[\theta,\phi^{+}]-d_{A}\upsilon^{+}-[ \chi,b^{+}],\] \[Q_{cl}A^{+}=-[\theta,A^{+}]-[\phi,\upsilon^{+}],\quad Q_{cl} \upsilon^{+}=-A^{+}-[\theta,\upsilon^{+}],\] \[Q_{cl}\chi^{+}=-[\theta,\chi^{+}]+[\phi,b^{+}],\quad Q_{cl}b^{+}= \chi^{+}-[\theta,b^{+}],\]
**Remark 5.7**.: _Applying the change of coordinates \(A^{+}\mapsto-A^{+}\), the cohomological vector field \(Q_{cl}\) restricted to \((A^{+},\upsilon^{+},\chi^{+},b^{+})\) becomes the Kalkman differential in the BRST model of equivariant cohomology._
One can also check that the cotangent lifts \((I_{\lambda})_{cl}\), \((\delta_{\lambda})_{cl}\) of \(I_{\lambda}\) and \(\delta_{\lambda}\) are given by
\[(I_{\lambda})_{cl}\theta=\lambda,\quad I_{\lambda}\phi=I_{\lambda}A=I_{ \lambda}\upsilon=I_{\lambda}\chi=I_{\lambda}b=0,\quad I_{\lambda}\Phi^{+}=0,\]
and
\[(\delta_{\lambda})_{cl}\Phi=\delta_{\lambda}\Phi,\quad(\delta_{\lambda})_{ cl}\Phi^{+}=\delta_{\lambda}\Phi^{+}.\]
Therefore, we will omit the subscript "\(cl\)" and simply use \(I_{\lambda}\) and \(\delta_{\lambda}\) to denote the corresponding vector fields.
The BRST Lagrangian of Donaldson-Witten theory is given by
\[\mathcal{L}_{BRST} =\mathrm{Tr}(F\wedge F)/2+Q\mathrm{Tr}(\chi\wedge(F+b/2))\] \[=\mathrm{Tr}\left((F+b)\wedge(F+b)/2-\chi\wedge d_{A}\upsilon- \chi\wedge[\phi,\chi]/2\right). \tag{5.15}\]
One can easily show that
\[d_{v}\mathcal{L}_{BRST} =\mathrm{Tr}((F+b)\wedge\delta b-(d_{A}\upsilon+[\phi,\chi]) \wedge\delta\chi-(d_{A}b+[\chi,\upsilon])\wedge\delta A-d_{A}\chi\wedge\delta \upsilon+[\chi,\chi]\wedge\delta\phi/2)\] \[+d_{h}\mathrm{Tr}((b+F)\wedge\delta A+\chi\wedge\delta\upsilon). \tag{5.16}\]
It follows that \(Q_{{\cal L}_{BRST}}\Phi=0\) and
\[Q_{{\cal L}_{BRST}}\theta^{+}=0,\quad Q_{{\cal L}_{BRST}}\phi^{+}=[ \chi,\chi]/2,\] \[Q_{{\cal L}_{BRST}}A^{+}=-d_{A}b-[\chi,\upsilon],\quad Q_{{\cal L }_{BRST}}\upsilon^{+}=-d_{A}\chi,\] \[Q_{{\cal L}_{BRST}}\chi^{+}=-d_{A}\upsilon-[\phi,\chi],\quad Q_{ {\cal L}_{BRST}}b^{+}=F+b,\]
Summing up, we have
\[Q_{BV}\theta=\phi-\frac{1}{2}[\theta,\theta],\quad Q_{BV}\phi=-[ \theta,\phi],\] \[Q_{BV}A=\upsilon+d_{A}\theta,\quad Q_{BV}\upsilon=-[\theta, \upsilon]-d_{A}\phi\] \[Q_{BV}\chi=b-[\theta,\chi],\quad Q_{BV}b=-[\theta,b]+[\phi,\chi],\] \[Q_{BV}\theta^{+}=-[\theta,\theta^{+}]+[\phi,\phi^{+}]-d_{A}A^{+ }+[\upsilon,\upsilon^{+}]-[\chi,\chi^{+}]+[b,b^{+}],\] \[Q_{BV}\phi^{+}=\theta^{+}-[\theta,\phi^{+}]-d_{A}\upsilon^{+}-[ \chi,b^{+}-\chi/2],\] \[Q_{BV}A^{+}=-d_{A}b-[\chi,\upsilon]-[\theta,A^{+}]-[\phi,\upsilon ^{+}],\quad Q_{BV}\upsilon^{+}=-d_{A}\chi-A^{+}-[\theta,\upsilon^{+}],\] \[Q_{BV}\chi^{+}=-d_{A}\upsilon-[\phi,\chi]-[\theta,\chi^{+}]+[ \phi,b^{+}],\quad Q_{BV}b^{+}=F+b+\chi^{+}-[\theta,b^{+}],\]
The cotangent lift of the BRST Lagrangian (5.15) is given by
\[{\cal L}_{BV}={\rm Tr}((F+b)\wedge(F+b)/2-\chi\wedge d_{A}\upsilon -\chi\wedge[\phi,\chi]/2+\theta^{+}(\phi-\frac{1}{2}[\theta,\theta])-\phi^{+} [\theta,\phi]\] \[+A^{+}\wedge(\upsilon+d_{A}\theta)-\upsilon^{+}\wedge([\theta, \upsilon]+d_{A}\phi)+\chi^{+}\wedge(b-[\theta,\chi])-b^{+}\wedge([\theta,b]-[ \phi,\chi])).\]
Let \(\gamma\) denote the boundary term of \({\cal L}_{BV}\). \(\gamma\) is the sum of the boundary terms in (5.14) and (5.16).
\[\gamma={\rm Tr}(A^{+}\wedge\delta\theta+\upsilon^{+}\wedge\delta\phi+(b+F) \wedge\delta A+\chi\wedge\delta\upsilon).\]
(\({\cal L}_{BV},\gamma\)) admits a BV-BFV extension. Consider the following change of coordinates.
\[\widetilde{\phi}=\phi-\frac{1}{2}[\theta,\theta],\quad\widetilde{\upsilon}= \upsilon+d_{A}\theta,\quad\widetilde{b}=b-[\theta,\chi],\]
\[\widetilde{\theta^{+}}=\theta^{+}-[\theta,\phi^{+}]-d_{A}\upsilon^{+}-[\chi,b^ {+}-\chi/2],\quad\widetilde{A^{+}}=-d_{A}\chi-A^{+}-[\theta,\upsilon^{+}], \quad\widetilde{\chi^{+}}=F+b+\chi^{+}-[\theta,b^{+}].\]
The cohomological vector field \(Q_{BV}\) takes a simplified form in the new coordinates.
\[Q_{BV}\theta=\widetilde{\phi},\quad Q_{BV}A=\widetilde{\upsilon},\quad Q_{ BV}\chi=\widetilde{b},\qquad\qquad\qquad Q_{BV}\widetilde{\phi}=0,\quad Q _{BV}\widetilde{\upsilon}=0,\quad Q_{BV}\widetilde{b}=0,\] \[Q_{BV}\phi^{+}=\widetilde{\theta^{+}},\quad Q_{BV}\upsilon^{+}= \widetilde{A^{+}},\quad Q_{BV}b^{+}=\widetilde{\chi^{+}},\quad\quad Q_{BV} \widetilde{\theta^{+}}=0,\quad Q_{BV}\widetilde{A^{+}}=0,\quad Q_{BV} \widetilde{\chi^{+}}=0.\]
It is then straightforward to write down an expression for the homotopy operator \(K\).
\[K\theta=A,\quad KA=s\chi+s^{\prime}b^{+},\quad K\chi=t\upsilon^{+}, \quad K\widetilde{\phi}=d\theta-\widetilde{\upsilon},\quad K\widetilde{ \upsilon}=dA-s\widetilde{b}-s^{\prime}\widetilde{\chi^{+}},\quad K\widetilde{ b}=d\chi-t\widetilde{A^{+}},\] \[K\phi^{+}=0,\quad K\upsilon^{+}=u\phi^{+},\quad Kb^{+}=w \upsilon^{+},\quad\quad K\widetilde{\theta^{+}}=0,\quad K\widetilde{A^{+}}= dv^{+}-u\widetilde{\theta^{+}},\quad K\widetilde{\chi^{+}}=db^{+}-w\widetilde{A^{+}},\]
where \(s,s^{\prime},t,u,w\) are real numbers. Reverting to the original coordinates, we have
\[K\theta=A,\quad KA=s\chi+s^{\prime}b^{+},\quad K\chi=t\upsilon^{+},\] \[K\phi=-\upsilon,\quad K\upsilon=(2-s^{\prime})F-(s+s^{\prime})b -s^{\prime}\chi^{+},\quad Kb=(1+t)d_{A}\chi+tA^{+},\] \[K\phi^{+}=0,\quad K\upsilon^{+}=u\phi^{+},\quad Kb^{+}=w\upsilon ^{+},\] \[K\theta^{+}=0,\quad KA^{+}=(u/2-s)[\chi,\chi]-(s^{\prime}+u)[\chi,b ^{+}]+(t-1-u)d_{A}\upsilon^{+}+u\theta^{+},\] \[K\chi^{+}=(s^{\prime}+1)d_{A}b^{+}+(s+w-1-t)d_{A}\chi+(w-t)A^{+}.\]
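As a quick consistency check, one can verify directly from the formulas in the new coordinates that \(Q_{BV}\) and \(K\) combine to produce the differential on the generators; for instance,
\[(Q_{BV}K+KQ_{BV})\theta=Q_{BV}A+K\widetilde{\phi}=\widetilde{\upsilon}+(d\theta-\widetilde{\upsilon})=d\theta,\]
\[(Q_{BV}K+KQ_{BV})A=Q_{BV}(s\chi+s^{\prime}b^{+})+K\widetilde{\upsilon}=s\widetilde{b}+s^{\prime}\widetilde{\chi^{+}}+(dA-s\widetilde{b}-s^{\prime}\widetilde{\chi^{+}})=dA,\]
and likewise \((Q_{BV}K+KQ_{BV})\chi=d\chi\).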
Let
\[\gamma^{(4)}:=\mathrm{Tr}(\phi\wedge\delta\theta),\quad\mathbb{A}^{(4)}:=-\mathrm{ Tr}(\phi^{2})/2,\quad\mathcal{L}^{(4)}:=\mathbb{A}^{(4)}+\iota_{Q}\gamma^{(4)}= \mathrm{Tr}(\phi^{2}-\phi[\theta,\theta])/2.\]
One can easily verify that \(\mathrm{Lie}_{Q}\gamma^{(4)}=d_{v}\mathbb{A}^{(4)}\). Therefore, \(\gamma_{K}:=\exp(K)\gamma^{(4)}\) together with \(\mathbb{A}_{K}:=\exp(K)\mathbb{A}^{(4)}\) define a fully extended BV-BFV LFT. Let
\[\theta_{K}:=\exp(\widetilde{K})\theta=\exp(-K)\theta,\quad\phi_{K}:=\exp(\widetilde{K})\phi=\exp(K)\phi.\]
We have
\[\gamma_{K}=-\mathrm{Tr}(\phi_{K}\wedge\delta\theta_{K}),\quad \mathbb{A}_{K}=\mathrm{Tr}(\phi_{K}^{2})/2,\quad\mathcal{L}_{K}=\mathrm{Tr}( \phi_{K}\wedge\phi_{K}-\phi_{K}\wedge[\theta_{K},\theta_{K}])/2.\]
By (5.10), the Lagrangian \(\mathcal{L}=\sum_{p=0}^{n}\mathcal{L}^{p}\) of the theory takes the following form
\[\mathcal{L}=\mathcal{L}_{K}-K_{0}\gamma_{K}=\mathrm{Tr}(\phi_{K}\wedge\phi_{ K}/2-\phi_{K}\wedge F_{K}), \tag{5.17}\]
where \(F_{K}:=d\theta_{K}+\frac{1}{2}[\theta_{K},\theta_{K}]\). \(\theta_{K}\), \(\phi_{K}\), and (5.17) can be interpreted as the superfields and the AKSZ Lagrangian of the "\(BF+B^{2}\)" theory [14, Section 7.4].
More concretely, let's set \(w=s^{\prime}=0,s=-2,t=-3,u=-4\). We have
\[K\theta=A,\quad KA=-2\chi,\quad Kb^{+}=0,\quad K\phi=-\upsilon,\quad Kv=2F+2b,\quad K\chi^{+}=d_{A}b^{+}+3A^{+},\] \[Kb=-2d_{A}\chi-3A^{+},\quad KA^{+}=4[\chi,b^{+}]-4\theta^{+}, \quad K\theta^{+}=0,\quad K\chi=-3\upsilon^{+},\quad Kv^{+}=-4\phi^{+},\quad K \phi^{+}=0.\]
and
\[\theta_{K}=\theta-A-\chi-\upsilon^{+}-\phi^{+},\quad\phi_{K}=\phi-\upsilon-F- b+A^{+}-\theta^{+}+[\chi,b^{+}].\]
It follows that
\[\gamma^{(4)}_{K}=-\mathrm{Tr}(\phi\wedge\delta\theta),\quad \gamma^{(3)}_{K}=\mathrm{Tr}(\upsilon\wedge\delta\theta+\phi\wedge\delta A), \quad\gamma^{(2)}_{K}=\mathrm{Tr}((F+b)\wedge\delta\theta-\upsilon\wedge \delta A+\phi\wedge\delta\chi),\] \[\gamma^{(1)}_{K}=-\mathrm{Tr}(A^{+}\wedge\delta\theta+(F+b) \wedge\delta A+\upsilon\wedge\delta\chi-\phi\wedge\delta\upsilon^{+}),\] \[\gamma^{(0)}_{K}=\mathrm{Tr}((\theta^{+}-[\chi,b^{+}])\wedge \delta\theta+A^{+}\wedge\delta A-(F+b)\wedge\delta\chi-\upsilon\wedge \delta\upsilon^{+}+\phi\wedge\delta\phi^{+}),\]
and
\[\mathbb{A}^{(4)}_{K}=\mathrm{Tr}(\phi^{2})/2,\quad\mathbb{A}^{ (3)}_{K}=-\mathrm{Tr}(\phi\wedge\upsilon),\quad\mathbb{A}^{(2)}_{K}=\mathrm{ Tr}(\upsilon\wedge\upsilon/2-\phi\wedge(F+b)),\] \[\mathbb{A}^{(1)}_{K}=\mathrm{Tr}(\phi\wedge A^{+}+\upsilon \wedge(F+b)),\quad\mathbb{A}^{(0)}_{K}=\mathrm{Tr}(\phi\wedge[\chi,b^{+}]- \phi\wedge\theta^{+}-\upsilon\wedge A^{+}+(F+b)\wedge(F+b)/2).\]
The Lagrangian \(\mathcal{L}^{(0)}\) takes the form
\[\mathcal{L}^{(0)}=\iota_{Q}\gamma^{(0)}_{K}+\mathbb{A}^{(0)}_{K}\] \[=\mathrm{Tr}\left((F+b)\wedge(F+b)/2+\upsilon\wedge d_{A}\chi+ \phi\wedge[\chi,\chi]/2+\theta^{+}\wedge(\phi-[\theta,\theta]/2)-\phi^{+} \wedge[\theta,\phi]\right.\] \[\left.+A^{+}\wedge(\upsilon+d_{A}\theta)-\upsilon^{+}\wedge[ \theta,\upsilon]-\phi\wedge d_{A}\upsilon^{+}+b^{+}\wedge([\phi,\chi]-[ \theta,[\theta,\chi]])-(F+b)\wedge(b-[\theta,\chi])\right).\]
**Remark 5.8**.: _The subspace \(\Gamma(Y)_{red}\) of \(\Gamma(Y)\) defined by the equations_
\[F+\chi^{+}+b=0,\quad b^{+}=0.\]
_is preserved by the action of \(Q_{BV}\) and \(K\). It is not hard to see that \((\mathcal{L}^{(0)},\gamma^{(1)}_{K})|_{M\times\Gamma_{red}}\) is equal to \((\mathcal{L}_{BV},-\gamma)|_{M\times\Gamma_{red}}\) up to an \(f\)-transformation. Moreover, \(Q_{BV}|_{\Gamma_{red}}\), \(\mathcal{L}_{BV}|_{M\times\Gamma_{red}}\), and \(\gamma|_{M\times\Gamma_{red}}\) are just the cohomological vector field, the Lagrangian, and the boundary term in the AKSZ construction of the Donaldson-Witten theory [1]._
### Sign conventions
Let \(A=\bigoplus_{i,j,k}A^{i,j,k}\) be a trigraded associative algebra. \(A\) is said to be commutative if
\[ab=(-1)^{i_{a}i_{b}+(j_{a}+k_{a})(j_{b}+k_{b})}ba,\]
where \(a\in A^{i_{a},j_{a},k_{a}}\) and \(b\in A^{i_{b},j_{b},k_{b}}\). In our specific case, consider the algebra \(\Omega_{loc}=\bigoplus_{i,j,k}\Omega_{loc}^{i,j,k}\) of local forms over \(M\times\Gamma(Y)\) with the wedge product \(\wedge\) as the algebraic product. Here, \(Y\) is a graded fiber bundle, and \(i\), \(j\), and \(k\) denote the horizontal form degree, vertical form degree, and ghost number degree, respectively. A derivation \(D\) (of degree \((i_{D},j_{D},k_{D})\)) of \(A\) is an element in \(\mathrm{End}(A)^{i_{D},j_{D},k_{D}}\) such that
\[D(ab)=D(a)b+(-1)^{i_{a}i_{D}+(j_{a}+k_{a})(j_{D}+k_{D})}a(Db).\]
For \(A=\Omega_{loc}\), consider, for example, \(D=d_{h}\), \(d_{v}\), \(\iota_{Q}\), and \(\mathrm{Lie}_{Q}\), where \(Q\) is a cohomological vector field. We have
\[d_{h}(a\wedge b) =d_{h}a\wedge b+(-1)^{i_{a}}a\wedge d_{h}b,\] \[d_{v}(a\wedge b) =d_{v}a\wedge b+(-1)^{j_{a}+k_{a}}a\wedge d_{v}b,\] \[\mathrm{Lie}_{Q}(a\wedge b) =\mathrm{Lie}_{Q}a\wedge b+(-1)^{j_{a}+k_{a}}a\wedge\mathrm{Lie}_ {Q}b,\] \[\iota_{Q}(a\wedge b) =\iota_{Q}a\wedge b+a\wedge\iota_{Q}b.\]
\(\mathrm{Lie}_{Q}^{2}=d_{h}^{2}=d_{v}^{2}=0\). One can combine any two of them to make \(\Omega_{loc}\) into a double cochain complex. For example, one can consider \(B=\bigoplus_{i,j}B^{i,j}\) with \(B^{i,j}:=\bigoplus_{i^{\prime}+k^{\prime}=i}\Omega_{loc}^{i^{\prime},j,k^{ \prime}}\). The two differentials on \(B\) are given by
\[d_{v}:B^{i,j}\to B^{i,j+1},\quad(\mathrm{Lie}_{Q}-\widetilde{d_{h}}):B^{i,j} \to B^{i+1,j},\]
where \(\widetilde{D}:=(-1)^{i_{D}(i+j-\dim(M))}D\) for a derivation \(D\) of horizontal form degree \(i_{D}\). It is easy to see that \((\mathrm{Lie}_{Q}-\widetilde{d_{h}})\) squares to zero and anti-commutes with \(d_{v}\).
|
2309.06171 | Privacy-Preserving Linkage of Distributed Datasets using the Personal
Health Train | With the generation of personal and medical data at several locations,
medical data science faces unique challenges when working on distributed
datasets. Growing data protection requirements in recent years drastically
limit the use of personally identifiable information. Distributed data analysis
aims to provide solutions for securely working on highly sensitive data while
minimizing the risk of information leaks, which would not be possible to the
same degree in a centralized approach. A novel concept in this field is the
Personal Health Train (PHT), which encapsulates the idea of bringing the
analysis to the data, not vice versa. Data sources are represented as train
stations. Trains containing analysis tasks move between stations and aggregate
results. Train executions are coordinated by a central station which data
analysts can interact with. Data remains at their respective stations and
analysis results are only stored inside the train, providing a safe and secure
environment for distributed data analysis.
Duplicate records across multiple locations can skew results in a distributed
data analysis. On the other hand, merging information from several datasets
referring to the same real-world entities may improve data completeness and
therefore data quality. In this paper, we present an approach for record
linkage on distributed datasets using the Personal Health Train. We verify this
approach and evaluate its effectiveness by applying it to two datasets based on
real-world data and outline its possible applications in the context of
distributed data analysis tasks. | Maximilian Jugl, Sascha Welten, Yongli Mou, Yeliz Ucer Yediel, Oya Deniz Beyan, Ulrich Sax, Toralf Kirsten | 2023-09-12T12:32:14Z | http://arxiv.org/abs/2309.06171v1 | # Privacy-Preserving Linkage of Distributed Datasets using the Personal Health Train
###### Abstract
With the generation of personal and medical data at several locations, medical data science faces unique challenges when working on distributed datasets. Growing data protection requirements in recent years drastically limit the use of personally identifiable information. Distributed data analysis aims to provide solutions for securely working on highly sensitive data while minimizing the risk of information leaks, which would not be possible to the same degree in a centralized approach. A novel concept in this field is the Personal Health Train (PHT), which encapsulates the idea of bringing the analysis to the data, not vice versa. Data sources are represented as train stations. Trains containing analysis tasks move between stations and aggregate results. Train executions are coordinated by a central station which data analysts can interact with. Data remains at their respective stations and analysis results are only stored inside the train, providing a safe and secure environment for distributed data analysis.
Duplicate records across multiple locations can skew results in a distributed data analysis. On the other hand, merging information from several datasets referring to the same real-world entities may improve data completeness and therefore data quality. In this paper, we present an approach for record linkage on distributed datasets using the Personal Health Train. We verify this approach
and evaluate its effectiveness by applying it to two datasets based on real-world data and outline its possible applications in the context of distributed data analysis tasks.
The source code for the services and analysis scripts mentioned in this paper is open source and available from the authors1.
Footnote 1: The source code is located at [https://gitlab.com/ul-mds/record-linkage/infrastructure](https://gitlab.com/ul-mds/record-linkage/infrastructure).
After the train has been executed at all stations, the final train is sent back to the CS. Results are extracted from the train and provided to the data scientist. At no point does the data scientist obtain access to confidential data. Access to sensitive data sources remains at the participating institutions. The only PHT component that the data scientist has to interact with to perform a distributed data analysis task is the CS.
Multiple recent use cases prove the PHT's potential as a functional means of enabling distributed data analysis [18, 33]. In cases where a centralized study has been performed in the past, running the same study in a decentralized way using the PHT shows no to minimal deviation in the quality of the obtained results [32].
### Problem statement
In distributed datasets, duplicate records that refer to the same real-world entity can occur. Within the medical domain, these entities are usually patients who seek treatment at multiple locations. Over a patient's lifetime, it is common to seek treatment at multiple healthcare providers, for example when being referred by one's general practitioner to obtain a verified diagnosis that the practitioner cannot make, or when seeking special treatment for a rare disease.
The process of identifying similar records within one or multiple datasets is called data matching or record linkage [5]. Duplicates in distributed datasets can skew the outcome of an analysis task, yet they may also present the opportunity of fusing multiple information sources together. Though record linkage algorithms have been extensively studied and applied to centralized datasets, very few have applied them in a distributed environment. This is rooted in the inherent challenges of record linkage.
Linkage using a permanent identifier, like a person's health insurance number, passport ID, identity card number or social security number would facilitate this process tremendously, yet they are subject to strict data privacy and data sharing protections. Additionally, healthcare providers may only store a subset of the aforementioned identifiers, shrinking the potential overlap in distributed datasets.
As such, record linkage relies on combining a multitude of so-called quasi-identifiers (QIDs) to identify similar records [35]. These are stable identifiers that are unlikely to change over a long period of time, such as a patient's first and last name, sex and birth date. Yet since combining these identifiers allows one to uniquely identify a person with high precision, they are subject to the same data protection guidelines as the mentioned identifiers. Releasing identifying information past institutional borders for the purpose of record linkage therefore poses a great risk for data providers and the privacy of the entities that are supposed to be linked.
Using the distributed data analysis execution platform that the PHT provides, we propose an approach to record linkage with distributed datasets which does not require data providers to share identifying data outside their own borders.
### Related work
Though all of the following works present ways to integrate record linkage into real-life environments, just a select few propose workflows on distributed datasets. None so far have used the PHT as their platform of choice.
Randall et al. [24] performed an evaluation of a Bloom filter based record linkage protocol on datasets with a total of 26 million hospital admission records. They showed that Bloom filters can scale well to large datasets and provided a workflow to improve scalability without severely affecting linkage quality. However, the setup was performed locally and not in a distributed environment.
Figure 1: Overview of the PHT using the PADME implementation as an example. A data scientist submits an analysis script to the Central Service, which packages it into a train using container-based technologies and orchestrates the execution between institutions, called stations. Results are aggregated and reported back to the data scientist.
Yigzaw et al. [36] designed a record linkage protocol based on Bloom filters for disease surveillance across three laboratories that specialize in the testing for strains of influenza. Their approach manages to stay performant while ensuring security in a setting with semi-honest adversaries.
Nguyen et al. [19] performed an evaluation of the GRHANITE software in a public health surveillance system. They used two separate gold standard datasets sourced from several clinical sites in Australia, including uniquely identifiable attributes such as electronic medical record numbers. The authors opted for a deterministic linkage approach which, while reliable in most cases, provides limited error tolerance due to the nature of this approach.
Stammler et al. [27] extended the medical record database software Mainzelliste with a modified version of the EpiLink software by Contiero et al. [8] to enable secure multi-party privacy-preserving record linkage. Their linkage algorithm is based on Bloom filters, but the integrity and privacy of shared data between participating clients is preserved using a tunneled connection. However, the integration into Mainzelliste imposes a strong software dependency that may not be implementable in certain environments.
Nobrega et al. [21] recently published a novel protocol using Bloom filters using Blockchain technology and applied it to real-world datasets. Their main motivation for this approach was to provide security beyond the commonly considered honest-but-curious adversary model. Since then, Christen et al. [6] published an attack on the protocol which has been considered and mitigated by the original authors [22].
## 2 Materials and methods
### Overall approach
We developed an approach for performing record linkage on distributed datasets using the PHT. Fig 2 demonstrates the two-phase record linkage execution within the PHT.
In the "submission" phase, the data analyst sends their record linkage script along with a configuration file to the central service. They select the participating stations and dispatch the record linkage train. At each station, the train starts the process of sourcing relevant records, pre-processing and masking them into a non-reversible bit vector form while preserving similarities between near-identical records. These bit vectors are then sent to a Central Linkage Unit (CLU), which asynchronously performs matching on the provided bit vectors.
In the "result" phase, the data analyst dispatches the same train for a second time. Once the train arrives at the station, it prompts the retrieval of results from the CLU for that particular station. The train itself receives identified matches in a pseudonymized form, so that the train never comes into contact with any personal data. These results can be used to perform informed decisions on how to treat duplicates in a subsequent data analysis task. The following sections describe the process in more detail.
### Train execution
We developed several standalone web services that enable record linkage with distributed datasets: a Resolver, Encoder, Broker and Matcher service. An architectural overview of their integration into the PADME PHT implementation is presented in Fig 3. Although our research focuses on the interoperability with the PHT, our services do not rely on any PHT component and can be freely made to fit with any architecture where client-server interactions are possible.
Figure 2: Overview of the record linkage execution within the PHT. In the first phase, stations are prompted to encode records into a non-reversible bit vector form and to submit these to a Central Linkage Unit. After matching has concluded, results are fetched from the Central Linkage Unit at each station.

**Components.** Our services are split up into station-side and central components. The Resolver service is hosted on-premise at every participating station. Its purpose is to look up study-specific pseudonyms in the station's Master Patient Index (MPI). For our purposes, we opted for E-PIX as the patient database and gPAS as the pseudonymization service from the MOSAIC suite of tools to act as an MPI in combination [2, 3]. These services offer an intuitive user interface for data providers, as well as a SOAP interface for integration into other software. However, the Resolver offers flexibility to integrate with other MPI software solutions. The Resolver leverages the Encoder service, which performs data pre-processing and masking based on a hash-based approach using Bloom filters [25] on the records obtained from the MPI. The result is a bit vector for every record which obfuscates the original data while preserving their similarity. The CLU is composed of the Broker service which takes in bit vectors from all stations. Similarities between bit vectors are computed by the Matcher service and reported back to the Broker, which provides results to the participating stations.
**Workflow.** As mentioned before, a record linkage execution within the PHT consists of two phases. These two phases make up a matching session. In the first "submission" phase, the data analyst who wants to perform record linkage selects the corresponding train image at the Central Service, the participating stations and provides the necessary configuration which is packaged into the train. The record linkage configuration consists of a random session identifier and a matching threshold between 0 % and 100 %. Upon arrival of the train, station administrators provide a link to the Resolver service hosted at the station, as well as a list of study-specific pseudonyms.
Once train execution commences at the station, an executable script inside the train submits the supplied pseudonyms as well as the record linkage configuration to the Resolver service at the station. The Resolver looks up the provided pseudonyms in the MPI and converts the personal data records into bit vectors using the Encoder service. Finally, the Resolver generates a random client identifier, ties it to the session identifier locally, and passes both with the record linkage configuration and the list of bit vectors to the CLU. The first phase has concluded once all participating stations have performed this process.
Between phases, the CLU performs asynchronous matching using the bit vectors submitted by the stations. The submitted station and client identifiers are used by the Broker service to tell bit vectors from different stations apart, as well as consolidating bit vectors that are supposed to be matched against one another. Given bit vectors from two separate stations, the Matcher performs a cross product on them and computes their similarity using the Jaccard index. Similarities above the threshold specified in the PPRL configuration are reported back to the Broker, which aggregates these results.
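As an illustration of this matching step (a simplified sketch, not the actual Matcher implementation), bit vectors can be represented by the positions of their set bits and compared as follows; the record identifiers and bit positions are hypothetical example data:

```python
from itertools import product

def jaccard(u: set, v: set) -> float:
    """Jaccard index of two bit vectors, each given as the set of its set-bit positions."""
    if not u and not v:
        return 0.0
    return len(u & v) / len(u | v)

def match(station_a: dict, station_b: dict, threshold: float) -> list:
    """Cross product of the bit vectors of two stations; keep pairs at or above the threshold."""
    results = []
    for (id_a, bits_a), (id_b, bits_b) in product(station_a.items(), station_b.items()):
        score = jaccard(bits_a, bits_b)
        if score >= threshold:
            results.append((id_a, id_b, score))
    return results

# Hypothetical example data: client-side record identifiers mapped to set-bit positions.
station_a = {"c1-0001": {3, 17, 42, 99}, "c1-0002": {5, 8, 13}}
station_b = {"c2-0001": {3, 17, 42, 98}, "c2-0002": {70, 71}}
print(match(station_a, station_b, threshold=0.5))  # [('c1-0001', 'c2-0001', 0.6)]
```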
In the second "result" phase, the data analyst sends the same PPRL train for a second iteration to all participating stations. The script inside the train contacts the Resolver hosted at the station. The Resolver looks up the client identifier generated in the previous phase using the session identifier and requests results from the Broker. Finally, the Resolver passes the results along to the train, converting them back into their pseudonymized form. This means that by the end of the second phase, every station will have a list of pseudonyms that refer to records that have a match with at least one other record at a different station. The train itself has had no access to sensitive data during the entire session.
Figure 3: Integration of record linkage services into the PHT infrastructure. Custom components are highlighted in bold. Trains carry the scripts necessary to communicate with the Resolver service at each station and report identified matches back to the Central Service.
### Data preprocessing
The QIDs within a record are first preprocessed at their respective stations. The aim is to convert all records into a unified representation across all stations, so that record linkage is successful even in the presence of different character encodings and naming conventions. This is done by performing the following series of steps for every field of a record (a minimal Python sketch is given after the list):
1. Ligatures (e.g. ß, æ, œ) are replaced with their non-ligature forms (e.g. ss, ae, oe).
2. Diacritics are separated from the characters they apply to by performing Unicode normalization in the NFKD form.
3. Non-ASCII characters are removed.
4. All characters are converted to lowercase.
5. Multiple consecutive whitespaces are reduced into one.
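A minimal Python sketch of these steps (illustrative only; the production Encoder may implement them differently) could look like this:

```python
import re
import unicodedata

# Step 1: ligature replacements (illustrative subset).
LIGATURES = {"ß": "ss", "æ": "ae", "œ": "oe", "Æ": "ae", "Œ": "oe"}

def normalize_field(value: str) -> str:
    # 1. Replace ligatures with their non-ligature forms.
    for ligature, replacement in LIGATURES.items():
        value = value.replace(ligature, replacement)
    # 2. Separate diacritics from their base characters (Unicode normalization, form KD).
    value = unicodedata.normalize("NFKD", value)
    # 3. Remove non-ASCII characters (this also drops the decomposed diacritics).
    value = value.encode("ascii", "ignore").decode("ascii")
    # 4. Convert all characters to lowercase.
    value = value.lower()
    # 5. Reduce multiple consecutive whitespaces to one.
    return re.sub(r"\s+", " ", value)

print(normalize_field("René  François"))  # "rene francois"
print(normalize_field("Straße"))          # "strasse"
```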
### Masking technique
Masking is the key step in ensuring that the privacy of the matched records is preserved. If all records are sent to the CLU as they are, then the CLU administrator has access to all records across all stations, posing a significant risk of a data breach. Even if it is assumed that the CLU is hosted by a trustworthy third party, malicious actors could obtain access to the original records if the CLU is ever compromised. As such, it is necessary to make sure that the records are transmitted in a masked form such that re-identification attacks on masked records are computationally infeasible.
Bloom filters have seen wide adoption in privacy-preserving record linkage (PPRL) protocols as a data structure that can obfuscate the data that is inserted into it while preserving similarities. Since the inception of Bloom filters in the field of PPRL, many security recommendations have been proposed to make them more resilient to re-identification attacks [7, 15, 20]. According to current security recommendations for Bloom filters in PPRL, we chose to implement the following measures.
We used CLKRBF as our masking technique [29]. The tokenized values of a record are hashed into the same Bloom filter, but the number of hash functions applied is determined by weights assigned to a record's attributes. Consequently, attributes with a higher weight likely occupy more bits in the resulting Bloom filter. The reason for this approach is that some attributes, like first and last name, have a higher discriminatory power than others, like gender [10]. We computed the weights by generating a random list of values for each attribute, splitting them into unique text tokens and then computing the entropy across all values for an attribute.
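A minimal sketch of this entropy-based weighting idea is shown below; the sample value lists are hypothetical and much shorter than the generated lists used in practice:

```python
import math
from collections import Counter

def attribute_entropy(values):
    """Shannon entropy (in bits) over the text tokens of an attribute's values."""
    tokens = [token for value in values for token in value.split()]
    counts = Counter(tokens)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Hypothetical attribute samples: names discriminate records better than gender.
weights = {
    "first_name": attribute_entropy(["anna", "jonas", "maria", "finn", "lea"]),
    "gender": attribute_entropy(["f", "m", "f", "m", "f"]),
}
print(weights)  # first_name has a much higher entropy and hence a higher weight
```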
Further security measures we implemented were to use _HMAC-SHA-256_ as a keyed hash function, apply random hashing, attribute salting, as well as balancing and randomly permuting Bloom filters after the masking step [20, 26]. All these techniques mitigate basic dictionary and frequency attacks, as well as known cryptanalytic attacks on Bloom filters based on their Hamming weights at the time of writing.
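For illustration, a heavily simplified sketch of such a weighted, keyed Bloom-filter encoding is given below. It is not the actual Encoder implementation: the bigram tokenization, parameter values and key handling are placeholder assumptions, and the balancing and permutation steps are omitted for brevity.

```python
import hmac
import hashlib

def bigrams(value: str):
    """Split a preprocessed field value into character 2-grams with padding."""
    padded = f"_{value}_"
    return [padded[i:i + 2] for i in range(len(padded) - 1)]

def encode_record(record: dict, weights: dict, secret: bytes, m: int = 1024, base_k: int = 10) -> set:
    """Hash all attribute tokens of a record into one Bloom filter (CLKRBF-style sketch).

    Higher-weighted attributes use more hash functions and therefore tend to
    occupy more bits. The filter is returned as the set of its set-bit positions.
    """
    bits = set()
    for attribute, value in record.items():
        k = max(1, round(base_k * weights[attribute]))
        for token in bigrams(value):
            for i in range(k):
                msg = f"{attribute}#{i}#{token}".encode()       # attribute salt + hash index
                digest = hmac.new(secret, msg, hashlib.sha256).digest()
                bits.add(int.from_bytes(digest, "big") % m)     # bit position in [0, m)
    return bits

record = {"first_name": "rene", "last_name": "francois"}
weights = {"first_name": 1.0, "last_name": 1.2}                 # placeholder weights
print(len(encode_record(record, weights, secret=b"per-session key")))
```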
## 3 Results
We used the readily available PADME PHT stations at the RWTH Aachen University, the University of Applied Sciences Mittweida and the University Medical Center Leipzig to validate our suggested approach to record linkage on distributed datasets. We performed two experiments with two different datasets. The source code for the generation of these datasets, as well as the software components we developed, is available from the authors.
For our first experiment, we used the North Carolina Voter Registration (NCVR) dataset2. This dataset has been used to test several record linkage applications in the past [10, 11, 16, 23]. Its popularity stems from the fact that it contains real-world data sourced from a large population. It is a public record of registered voters in the state of North Carolina where a large number of personally identifiable attributes are tracked for every person. These attributes include first, middle and last name, gender, year of birth, residential and mail address, party affiliation, phone number and supplemental information on the record itself. Every person in the NCVR database has a unique identifier called NCID. We performed data cleaning by only keeping records that abide by the following rules.
Footnote 2: The data is available at [https://www.ncsbe.gov/results-data/voter-registration-data](https://www.ncsbe.gov/results-data/voter-registration-data). We used a snapshot of statewide voter registrations from February \(16^{\text{th}}\), 2023
* person must have an active and verified record
* person must have been at least 16 years old on their registration date
* person must be no older than 120 years
* record must not be marked as confidential
* residential and mail zip codes must be either five or nine characters in length and must not be all zeros
* phone numbers must not be all zeros
After this process, about 6.1 million records remain from the original 8.2 million records. From the cleaned dataset, we selected first and last name, gender, year of birth and city of residence to be used for our record linkage experiment.
We created three files with approximately 5000 records each, where all files have 100 records in common. To select records, we grouped all records by first and last name and counted the amount of records in each group, which we refer to as group size. For each group size up to five, we then randomly selected records drawn from each group such that every group size contributes about a fifth of all records to the final dataset, which is composed of 14 800 unique records across all files. The distribution of records with respect to the group sizes they were drawn from is shown in Tab 1.
Finally, we selected 100 records at random and inserted them into each of the three files. We inserted each of the remaining 14 700 records into one of the three files at random. The amount of records for each file is shown in Tab 2.
We assigned the files to the participating stations at random, so that each station holds one of the three files. The station administrators were provided a mapping to import the given files into the data schema of their on-premise E-PIX instance. Knowing that only exact matches were supposed to be found, we set the match threshold for the train execution accordingly to 100 %. We used the NCIDs of predicted matching pairs to classify them as true or false positives.
We were able to identify all true matches while only accumulating three false positives, yielding an F1-score of 99.5 %. The false positives are a result of the limited subset of attributes we chose. Upon review, we found that the affected records were sufficiently different from one another when observing all attributes of the NCVR dataset. However, by chance alone, the false positives we identified were indeed identical in the attributes we chose.
For our second experiment, we generated a synthetic dataset with built-in typographic errors to validate the error-resistance of our approach. We used German first names, surnames and city names with respective frequency information from publicly available data sources. We used the GeCo framework [28] to generate a file containing 1000 personal records. The distribution of the aforementioned attribute values was modeled after their real-life frequencies. We also added a randomly generated gender and birth date to each record.
Next, we generated two corrupted versions of the original file by randomly introducing typographic errors commonly found in the real world. These include optical character recognition errors, phonetic errors, input errors, e.g. by pressing a neighboring key on a keyboard, and edit errors, such as random insertion, deletion or substitution of a character. Finally, we shuffled the records between the three files in a way that ensured that matches could only be found between files, not in one and the same file. As with the first dataset, we randomly assigned each file to one of the three participating stations and provided the mappings for the E-PIX import to all station administrators.
\begin{table}
\begin{tabular}{r r r} \hline \hline
**Group size** & **\# Total records** & **\# Selected records** \\ \hline
1 & 2 975 253 & 2971 \\
2 & 668 560 & 2992 \\
3 & 358 608 & 2901 \\
4 & 240 640 & 2784 \\
5 & 179 465 & 2910 \\ \hline \(\Sigma\) & 4 422 526 & 14 558 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Amount of selected records in the final dataset with respect to group size based on distinct first and last names in the NCVR dataset
\begin{table}
\begin{tabular}{r r} \hline \hline
**File** & **\# Records** \\ \hline
1 & 4981 \\
2 & 4945 \\
3 & 4832 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Amount of records in each file for our record linkage experiment with the NCVR dataset
For our synthetic dataset, we opted for a more conservative matching threshold of 70 %. We found this threshold to be a reasonable choice in prior testing with similar datasets using our infrastructure. In our train execution, we achieved an F1-score of 99.3 %. Out of 1157 possible true matches, we accumulated only four false positives and 12 false negatives, as represented in Tab 3. This result is notable, considering that the number of true non-matches exceeds the number of true matches by about three orders of magnitude.
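These figures can be reproduced directly from the confusion matrix in Tab 3 as a quick sanity check:

```python
# Confusion matrix values from Tab 3.
tp, fp, fn = 1145, 4, 12

precision = tp / (tp + fp)
recall = tp / (tp + fn)
f1 = 2 * precision * recall / (precision + recall)

print(f"precision={precision:.3f}, recall={recall:.3f}, F1={f1:.3f}")
# precision=0.997, recall=0.990, F1=0.993  -> the reported 99.3 %
```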
Finally, we conducted a statistical analysis on both datasets to evaluate our threshold choices. We ran the same linkage procedures using a local setup with varying thresholds, ranging from 0 % to 100 % in 1 % increments. At every threshold, we computed precision, recall and F1-score. The resulting plots are shown in Fig 4.
For the NCVR dataset, we observed a sharp decline in match quality around the 95 % threshold. The F1-score declined slowly between 90 % and 75 % before falling off drastically again and leveling off below 60 %. Since we chose records in groups based on distinct first and last names, a base level of similarity is present in a significant number of record pairs. This explains the first drop-off as records with identical first and last names are being recognized as matches. The second, significantly more prolonged drop-off is caused by record pairs that are decisively unlike one another.
For our synthetic dataset, we found that our threshold of choice at 70 % was not far off the threshold which maximized the F1-score on our synthetic dataset. At a threshold of 68 %, the amount of false negatives dropped down to four, raising the F1-score to 99.7 %. Furthermore, we found that all true matches were classified as such at a threshold of 66 %. However, false positives increased drastically with lower thresholds. On the opposite end, the first false positive was reported at a threshold of 84 %, while about 40 % of true matches were already being identified as such.
## 4 Discussion
### Security implications of the PHT
The PHT concept allows the execution of arbitrary analysis tasks on medical data provided by healthcare institutions. This is desirable for data scientists who act in good faith and wish to use the statistical tools and programming languages that they are familiar with, but this still poses a major threat to data protection and privacy. Therefore PHT implementations must provide security guarantees to mitigate unauthorized access to and distribution of medical data.
Figure 4: Precision, recall and F1-score depending on the selected match threshold. Measures were taken using our experimental datasets. Samples were taken in 1 % threshold increments.
\begin{table}
\begin{tabular}{r r r r} \hline \hline & True matches & True non-matches & \(\Sigma\) \\ \hline Predicted matches & 1145 & 4 & **1149** \\ Predicted non-matches & 12 & 1 142 091 & **1 142 103** \\ \hline \hline \(\Sigma\) & 1157 & 1 142 095 & \\ \hline \hline \end{tabular}
\end{table}
Table 3: Confusion matrix of predicted matches and non-matches at a 70 % threshold
We consider possible attack vectors for a malicious actor aiming to exfiltrate medical data using the PHT and show mitigations using PADME and PHT-meDIC as examples.
Both PADME and PHT-meDIC employ a manual approval process for incoming trains. Station administrators must first inspect and confirm that they wish to execute a train at their station. The code packaged inside a train is always public and should be used by station administrators to conclude what the train's purpose is and which data sources it aims to connect to. This allows them to reject trains that have been sent to a station due to human error or malicious intent.
Furthermore, both PADME and PHT-meDIC require data sources to be defined manually. For PADME, connection details must be provided before the train is executed. This allows the data holders to configure their data sources in a way that mitigates illegal operations and to limit the train's view on the required data. PHT-meDIC completely denies network access to the train. Data sources are mounted into the train as a read-only volume. This further mitigates the train's abilities to retrieve and share information outside of institutional borders.
Results of a train execution at a station must be securely stored, and PADME and PHT-meDIC choose different ways to ensure that no unauthorized party may be able to inspect results. PADME employs the use of public-key cryptography. The Central Service and all stations maintain their own keypairs which are used to encrypt trains and the results that are packaged into them after each train execution. PHT-meDIC goes one step further by cryptographically signing trains to ensure that trains are only executed by trusted parties. Results from a train execution at a station are not stored within the train but rather encrypted using homomorphic encryption. Only the data scientist running the analysis task using PHT-meDIC may decrypt the results which ensures that analysis artifacts submitted to the central service cannot be read by its administrators.
At the time of writing, manual review and approval of train, manual configuration of data sources, restricting network access, result encryption and train signatures summarize the scope of security practices of both PADME and PHT-meDIC. This serves to ensure that medical data is not leaked to untrusted parties, but puts a major responsibility on station administrators. They must diligently inspect trains before executing them and ensure that the data sources are properly configured. The public nature of both PHT implementations gives them the necessary tools to validate trains and approve or deny their execution on site.
### Attacks on the record linkage protocol
Cryptanalysis of Bloom filters in privacy-preserving record linkage protocols has been conducted in the past with the consensus being that basic field-level Bloom filters provide no meaningful security for data that is meant to be obfuscated [7, 15, 20]. We chose to adapt our use of Bloom filters for PPRL in line with current security recommendations, which make re-identification attacks infeasible. Instead of providing a formal cryptanalysis of Bloom filters in our protocol, we present the scope of possible attack vectors that malicious actors could abuse in our approach.
There are three main actors who take part in a PPRL execution within the PHT. The first actor is the data scientist. Since they only interact with the Central Service, there is no way for them to get access to the underlying container infrastructure. Their only entrypoint for attack is the PPRL container image, which can be freely adjusted. This means that a malicious data scientist could include malicious code in the PPRL train image that is then executed at the participating stations. However, within the context of the PPRL workflow as described in this paper, there is no way to obtain unauthorized access to confidential patient data. The only information that could be leaked are the pseudonyms provided at every station, which inherently do not represent personally identifiable information. Since the Resolver service is the sole component responsible for handling depseudonymized data, there is no feasible way of leaking confidential data from it using a malicious PPRL train image either. Furthermore, the PADME PHT implementation allows station administrators to deny the execution of a train image that they do not trust.
The second actor is the station administrator. Since they are restricted to executing or rejecting trains, there are no means for the station administrator to achieve unauthorized data access. They have no access to the underlying container infrastructure that is managed by the station software and can therefore not leak results from executions at previous stations. Since the PPRL train is not supposed to store sensitive data in the first place, it is not possible to infer patient data from other stations.
The third actor is the administrator of the CLU. They are the weakest link, since we assume a trusted third party setting and therefore do not consider their entire attack scope in a practical setting. However, a bad actor in this position has a few options. The central linkage unit receives and processes bit vectors submitted by all stations. We assume that a re-identification attack is too compute-intensive and therefore infeasible. However, suppose there are matching bit vectors across all participating stations of a PPRL execution. Given the metadata attached to these bit vectors, it would be an easy task for a malicious CLU administrator to make assumptions about the medical or treatment history of a patient that produced these bit vectors. This greatly limits the search scope, though this knowledge is still useless
without prior knowledge about personal patient details. Furthermore, if the secret for the authenticated hash digest in the masking step is changed in every train execution, then inferring information during multiple train executions is not possible for the CLU administrator.
Another attack vector for a CLU administrator is to disrupt the PPRL execution. Since the assignment of clients to a match session is handled by the central linkage unit, a malicious CLU administrator could assign submitted bit vectors to arbitrary match sessions, therefore impacting the usefulness of the reported results. It is also possible to simply withhold match results or to inject false match results.
### Federated train execution
Our proposed approach performs a train execution in two round trips. This incremental workflow comes down to the fact that all participating stations are processed sequentially. In theory, a federated workflow where all stations are processed simultaneously is also possible and would require a single round trip instead.
A train class for this federated approach is prepared. All participating stations are selected and the train is dispatched. Once at a station, the train performs the usual workflow of submitting pseudonyms to the Resolver service. However, instead of terminating, the PPRL train queries the progress of the match session and waits for it to reach completion. As soon as all stations have submitted their bit vectors and matching has been carried out, all trains submit a request to the Resolver to fetch the results from the Broker. The main challenge lies in coordinating all stations to perform their submission step in a similar time frame. Suppose that, in a setup with three stations, two execute the train immediately and one does so only an hour after receiving it. In that case, there is currently no way for the Broker service to tell when matching is to be considered finished. In the worst case, the first two stations request their results too early, omitting potential matches with the third station. A federated version of the train used in our approach is actively being worked on.
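A rough station-side sketch of this hypothetical federated workflow is shown below. All function names are placeholders rather than an existing client API, and the polling strategy is only one possible way to address the coordination problem.

```python
import time

def federated_station_run(resolver, session_id, pseudonyms, poll_interval=30, timeout=3600):
    """Single-round-trip sketch: submit, wait for the session to complete, fetch results."""
    client_id = resolver.submit(session_id, pseudonyms)    # encode records and send bit vectors to the CLU
    waited = 0
    while not resolver.session_complete(session_id):       # have all stations submitted and has matching finished?
        if waited >= timeout:
            raise TimeoutError("match session did not complete in time")
        time.sleep(poll_interval)
        waited += poll_interval
    return resolver.fetch_results(session_id, client_id)   # pseudonymized matches for this station
```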
### Future improvements
We used a simple measure for estimating attribute weights based on their discriminatory power. Though weight estimation algorithms for record linkage exist [4, 34], none of them can be applied to blind record linkage: known algorithms are mostly based on the Fellegi-Sunter model and therefore require access to the datasets to be matched, which is not available in PPRL. In the future, we hope to develop and evaluate more advanced algorithms for blind attribute weight estimation.
The same applies to the threshold estimation. We chose a threshold based on prior knowledge about the data to be matched and previous experiments. A future area of research concerns the study of thresholds based on the chosen masking technique and input data.
While the GeCo framework is decent at generating authentic-looking datasets with a variety of options for common errors to induce, the data it generates is still synthetic. We are actively working on cooperating with personal data providers to create use cases around our approach to PPRL to prove its effectiveness in real-world settings.
## 5 Conclusion
We presented an approach to perform record linkage on distributed datasets using the PHT. The PHT enables data holding organizations to allow access to their personal records without leaking sensitive information beyond institutional borders. Inferring information from masked records is infeasible with the security measures we chose to implement, keeping the chance of a successful re-identification attack at a minimum. We validated this approach in two experiments with two different synthetic datasets with respect to the quality of the identified matches and the typographic error-resistance. In the future, we aim to demonstrate our approach in real-life scenarios.
|
2306.17821 | Binary Supermassive Black Holes Orbiting Dark Matter Solitons: From the
Dual AGN in UGC4211 to NanoHertz Gravitational Waves | We explore orbital implications of the Supermassive Black Hole (SMBH) binary
in UGC4211 for the energy spectrum of stochastic gravitational wave background
(SGWB), measured with pulsar timing. The SMBH binary in UGC4211 has a projected
separation of $\sim 230\,$pc and relative velocity of $\sim 150\,$km/s along
the line of sight. It orbits with a disk of gas and stars, with a total mass
$\sim 1.7 \times 10^9 M_\odot$ that is several times larger than the combined
SMBHs plus the observed gas and stars. The unseen mass can be naturally
explained by a soliton of wave dark matter present within the SMBH orbit. Such
a scenario is encouraging as during galaxy merger, the two precursor galactic
solitons are expected to combine to generate a new soliton and hence bind the
two initial SMBHs efficiently. Generalizing this scenario to the cosmological
population of SMBH binaries, we show the SGWB spectrum produced by late-stage
inspiraling is modified preferentially at low frequency by the presence of
soliton. Finally, we demonstrate that the NANOGrav and EPTA data can be
well-fit in this scenario, favoring $\{m_a, f_a\} \sim \{10^{-21.7} {\rm eV},
10^{15.5} {\rm GeV}\}$ and $\{10^{-20.5} {\rm eV}, 10^{16.8} {\rm GeV}\}$
respectively when the UGC4211 data and the constraints from dwarf galaxies are
also combined. | Tom Broadhurst, Chao Chen, Tao Liu, Kai-Feng Zheng | 2023-06-30T17:40:07Z | http://arxiv.org/abs/2306.17821v3 | # Binary Supermassive Black Holes Orbiting Dark Matter Solitons:
###### Abstract
We explore the orbital implications of the Supermassive Black Hole (SMBH) binary in UGC4211, for the frequency spectrum of stochastic gravitational wave background (SGWB) being measured with pulsar timing arrays. The SMBH binary in UGC4211 has a projected separation of \(\sim 230\) pc and relative velocity of \(\sim 150\) km/s along the line of sight. It orbits a common disk of gas and stars, with a total dynamical mass of \(\sim 10^{9}M_{\odot}\) which is several times larger than the combined SMBHs plus the observed gas and stars. This can be explained by a massive soliton of wave dark matter present within the orbit of two SMBHs. Such a scenario is encouraging as during galaxy merger, the two precursor galactic solitons are expected to combine to generate a new soliton and hence the two initial SMBHs become efficiently bound. Generalizing this scenario to the cosmological population of SMBH binaries, we show that the SGWB spectrum produced by their late-stage inspiraling is modified preferentially at low frequency by the presence of the soliton. Finally we discuss future prospects for this proof-of-concept study, by fitting this scenario to the 15-year NANOGrav data.
## I Introduction
With the advent of NanoHertz (nHz) gravitational wave (GW) detection, using pulsar timing arrays (PTA) (NANOGrav [1], PPTA [2], CPTA [3], EPTA [4], etc.), we can examine the evolution of supermassive black hole (SMBH) binaries anticipated to dominate the stochastic GW background (SGWB) at nHz frequencies [5; 6; 7; 8; 9; 10] and new physics as well [11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35]. The SMBH binaries are understood to be relics of galaxy mergers which have resulted in the hierarchical growth of structure in the Universe. Supporting this interpretation, examples of binary SMBHs have been uncovered within luminous galaxies in the local Universe, including the recently reported dual active galactic nuclei (AGN) at \(z=0.03474\), in UGC4211, a late-stage major galaxy merger where two SMBHs are found to be separated by \(\sim 230\) pc with a mass of \(\sim 10^{8}M_{\odot}\)[36]. Associated with this dual AGN is a rotating disk of stars and gas sharing a common position angle, where their velocities are measured to be \(\simeq 200\)km/s with a maximum radial extent of 180 pc. This implies a dynamical mass \(\sim 1.7\times 10^{9}M_{\odot}\), several times larger than the AGN's, gas and stars combined. Given that UGC4211 was found within a relatively small, volume-limited sample of nearby hard X-ray detected AGN [37], the inferred cosmological SMBH merger rate may be surprisingly high. This system provides a unique opportunity to study binary SMBHs during the inspiral phase where gravitational radiation is significant ahead of predictable SMBH merger.
This newly resolved SMBH binary in UGC4211 raises possible new implications for the ingredients required in calculating the nHz SGWB. As mentioned above, there is the possibility of substantial unseen mass responsible for the orbiting gas and stars in this binary. This observation motivates us to examine the potential role of "Wave" Dark Matter (WDM) [38; 39] (for a review, see [40; 41; 42]), one of the most important classes of DM candidates, in the formation and evolution of SMBH binaries. In this context, the unseen mass in UGC4211 could be naturally explained as a WDM soliton, a standing wave of ultralight bosons such as axions [43], formed from galaxy merger and hence enclosed by this binary.
UGC4211 could be representative of the cosmological population of GW-driven SMBH binaries. The predicted SGWB spectrum from the inspiraling of binary SMBHs can then be modified due to the extra gravitational contributions from solitons. The PTA experiments are thus able to provide insights into the physical properties of WDM solitons, including the boson mass and the interaction coupling which determine the soliton profile.
In this Letter, we first model the dual AGN in UGC4211 as binary SMBHs orbiting a WDM soliton, where the soliton, as an extra source of gravitational potential, shifts their angular velocity. We then calculate the modified SGWB spectrum from the inspiraling SMBH binaries, and finally discuss the possibility of testing this model and constraining the WDM theory by using the UGC4211 observational dataset and the recently released NANOGrav dataset [1].
## II WDM Soliton Profile
In the non-relativistic limit, the WDM profile can be described by a classical complex wavefunction \(\psi(t,\mathbf{x})\). Its dynamics is governed by the Schrodinger-Poisson (SP) equations:
\[i\frac{\partial\psi}{\partial t} =\left(-\frac{\nabla^{2}}{2m_{a}}+m_{a}\Phi_{a}+g|\psi|^{2}\right) \psi\, \tag{1}\] \[\nabla^{2}\Phi_{a} =4\pi Gm_{a}\left(|\psi|^{2}-n_{0}\right)\, \tag{2}\]
where \(\nabla^{2}\equiv\delta^{ij}\partial_{i}\partial_{j}\) is the spatial Laplacian operator, \(\Phi_{a}\) is the gravitational potential generated by the WDM itself, \(n_{0}\) is the background WDM number density, which has been subtracted for consistency [43], and \(g\equiv-\frac{1}{8f_{a}^{2}}\) arises from the small-field expansion of the axion-like potential \(V(\phi)=\frac{1}{2}m_{a}^{2}\phi^{2}-\frac{1}{4!}\frac{m_{a}^{2}}{f_{a}^{2}}\phi^{4}+\cdots\). Here \(\phi(t,\mathbf{x})=\frac{1}{\sqrt{2m_{a}}}\left[\psi(t,\mathbf{x})e^{-im_{a}t}+\psi^{*}(t,\mathbf{x})e^{im_{a}t}\right]\) is real and subject to an attractive self-interaction.
The self-gravitation leads to the formation of a galactic WDM halo with a central solitonic core [43], which is a spherically symmetric ground state1 of the SP equations. With the WDM self-interaction, we have [47]
Footnote 1: It has been reported in [44] that the central part of the soliton could oscillate in time with an order-unity amplitude. The oscillation period is approximately \(\sim 1.1\left(\frac{\rho_{\rm peak}}{M_{\odot}{\rm pc}^{-3}}\right)^{-1/2}{\rm Gyr}\)[42], where \(\rho_{\rm peak}\) is the soliton central density. Also, the soliton may random-walk around the central region of the WDM halo [45]. Both effects arise from wave interference [46]. In our case, the oscillation period is roughly \(10^{3}\) yr, which is much longer than the typical timescale of binary motion in UGC4211 and PTA observations. Hence, we ignore these effects below for simplicity.
\[\psi(t,\mathbf{x})=\chi(r)e^{-i\omega t}\, \tag{3}\]
with \(r=|\mathbf{x}|\) and \(\omega\) being the chemical potential. \(\chi(r)\) describes the localized spatial distribution of the soliton and can be approximated by
\[\chi(r)\simeq A\ {\rm sech}\left(\frac{r}{R_{s}}\right)\,. \tag{4}\]
Here \(A\) and \(R_{s}\) are the soliton amplitude and core radius, respectively. The density drops to one half of its peak value at \(r=R_{s}\). Note that both \(A\) and \(R_{s}\) are determined by the axion mass \(m_{a}\) and its decay constant \(f_{a}\). For a stable soliton, where the gravitation dominates over the attractive WDM self-interaction, we have [47]
\[M_{s} \sim 10^{2}M_{\odot}\left(\frac{m_{a}}{10^{-20}\ {\rm eV}} \right)^{-1}\left(\frac{f_{a}}{10^{10}{\rm GeV}}\right)\, \tag{5}\] \[R_{s} \sim 10^{5}{\rm pc}\left(\frac{m_{a}}{10^{-20}\ {\rm eV}} \right)^{-1}\left(\frac{f_{a}}{10^{10}{\rm GeV}}\right)^{-1}. \tag{6}\]
Here \(M_{s}=\int_{0}^{\infty}m_{a}|\psi(t,r)|^{2}{\rm d}^{3}r=\frac{1}{3}\pi^{3}R_{s }^{3}m_{a}A^{2}\) is the total mass of soliton.
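As a quick numerical orientation, the scaling relations (5) and (6) can be evaluated for sample parameter values. A minimal sketch, where the prefactors are the order-of-magnitude values quoted above and the chosen \((m_{a},f_{a})\) are purely illustrative (not fitted values from this analysis):

```python
# Order-of-magnitude evaluation of the soliton mass and core radius from the
# scaling relations (5)-(6).  The example (m_a, f_a) below is illustrative only,
# chosen so that M_s and R_s come out comparable to the UGC4211 scales discussed later.
def soliton_mass_Msun(m_a_eV, f_a_GeV):
    return 1e2 * (m_a_eV / 1e-20) ** -1 * (f_a_GeV / 1e10)

def soliton_radius_pc(m_a_eV, f_a_GeV):
    return 1e5 * (m_a_eV / 1e-20) ** -1 * (f_a_GeV / 1e10) ** -1

m_a, f_a = 4e-23, 8e14   # eV, GeV (illustrative)
print(f"M_s ~ {soliton_mass_Msun(m_a, f_a):.1e} Msun,  R_s ~ {soliton_radius_pc(m_a, f_a):.1e} pc")
```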
## UGC4211 and binary SMBHs
Surprisingly, if the binary SMBHs in UGC4211 form a circular orbit with the projected separation as its diameter, the gravity between them can support a stable velocity of at most \(\sim 40\,{\rm km/s}\), only about one quarter of their relative velocity along the line of sight. This implies that, up to observational uncertainty, a "hidden" mass might be present within the binary orbit, in excess of the two SMBHs plus the associated gas and stellar disc combined. Moreover, the two SMBHs share the same position angle, which indicates that they rotate together with the gas and stellar disk. Their spatial separation and orbital velocity are thus larger than the projected separation and the line-of-sight velocity, respectively. A more accurate estimate should therefore be based on the full extent of the observed stars and gas. With the maximal rotation speed of \(\sim 200\) km/s at a radius of \(\simeq 180\) pc for the gas, one finds a total mass of \(\simeq 1.7\times 10^{9}M_{\odot}\) within this radius.
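These estimates follow from elementary Newtonian relations and can be checked directly. A minimal sketch, taking each SMBH mass to be the quoted \(\sim 10^{8}M_{\odot}\) (the exact SMBH-only velocity depends on the adopted masses):

```python
import numpy as np

G, Msun, pc = 6.674e-11, 1.989e30, 3.086e16   # SI units

# Circular velocity supported by the two SMBHs alone, taking the projected
# separation d = 230 pc as the orbital diameter: v^2/(d/2) = G*M_BH/d^2.
M_bh, d = 1e8 * Msun, 230 * pc
v_bh = np.sqrt(G * M_bh / (2 * d))
print(f"SMBH-only circular velocity ~ {v_bh/1e3:.0f} km/s")   # a few tens of km/s

# Dynamical mass within r = 180 pc from the observed rotation speed of the
# gas/stellar disk, v ~ 200 km/s:  M_dyn = v^2 * r / G.
v, r = 200e3, 180 * pc
print(f"M_dyn ~ {v**2 * r / G / Msun:.1e} Msun")               # ~ 1.7e9 Msun
```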
One possible explanation of this observation is that the binary SMBHs in UGC4211 are orbiting a giant WDM soliton, where the required mass is "hidden". Such a system could be common in the WDM scenario as a relic of galaxy mergers. The SMBHs are usually hosted by the dark matter halo at the galactic center. As two galaxies merge, a SMBH binary forms while the solitons tend to merge faster into a new core, as the ejected matter may take away a large portion of the energy and angular momentum of the original solitons in this process. A few simulations of isolated two-soliton mergers have been performed in the literature (see, e.g., [48; 49; 50]). As indicated in [49], as much as \(\sim 30\%\) of the initial total mass could be ejected by such gravitational cooling.
Figure 1: Gravitational force per unit SMBH mass provided by a WDM soliton with the total mass \(M_{s}\) and core radius \(R_{s}\).

We simply model this system as a binary where the two SMBHs have equal mass \(m_{1}=m_{2}=M_{\rm BH}\), orbital velocity \(v_{\rm orb}\), and a spatial separation \(r_{1}+r_{2}=2r\), where \(r\) refers to the distance from the soliton center to the SMBHs. The dynamics of this binary is then described by
\[\frac{v_{\rm orb}^{2}}{r}=\frac{GM_{\rm BH}}{4r^{2}}+\nabla\Phi_{a}\left(r\right) \cdot\mathbf{\hat{r}}. \tag{7}\]
Here \(\nabla\Phi_{a}\left(r\right)\cdot\mathbf{\hat{r}}\) denotes the gravitational force provided by the soliton. It can be calculated with the Poisson equation (2) and the soliton profile (4), which gives
\[\begin{split}\nabla\Phi_{a}(r)\cdot\mathbf{\hat{r}}&=\frac{12GM_{s}}{\pi^{2}r^{2}}\int_{0}^{\frac{r}{R_{s}}}x^{2}\mathrm{sech}^{2}(x)\mathrm{d}x\\ &=\left\{\begin{array}{ll}\frac{4GM_{s}}{\pi^{2}R_{s}^{3}}r\,&r\ll R_{s}\,\\ \frac{GM_{s}}{r^{2}}\,&r\gg R_{s}\.\end{array}\right.\end{split} \tag{8}\]
We show this force as a function of \(r/R_{s}\) in Fig. 1. For the inner region where \(r\ll R_{s}\), the density of the soliton core is approximately flat and \(\nabla\Phi_{a}(r)\cdot\mathbf{\hat{r}}\) thus scales linearly with \(r\). For the outer region where \(r\gg R_{s}\), however, the soliton core behaves like a point mass whose gravitational force follows the inverse-square scaling. The UGC4211 data analysis below will be based on this model.
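The two limiting behaviours in (8), i.e. the profile plotted in Fig. 1, can be verified by direct numerical integration. A short sketch in dimensionless units with \(G=M_{s}=R_{s}=1\):

```python
import numpy as np
from scipy.integrate import quad

# Radial force per unit mass from the sech^2 soliton profile, Eq. (8),
# in units with G = M_s = R_s = 1.
def soliton_force(r):
    integral, _ = quad(lambda x: x**2 / np.cosh(x) ** 2, 0.0, r)
    return 12.0 / (np.pi**2 * r**2) * integral

for r in [0.01, 0.1, 1.0, 10.0, 100.0]:
    inner = 4.0 * r / np.pi**2   # r << R_s: linear-in-r (harmonic) regime
    outer = 1.0 / r**2           # r >> R_s: point-mass (inverse-square) regime
    print(f"r={r:7.2f}  force={soliton_force(r):.3e}  "
          f"linear={inner:.3e}  1/r^2={outer:.3e}")
```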
## III SGWB from binary SMBHs orbiting a soliton
UGC4211 could be just one of an enormous number of binary SMBHs orbiting WDM solitons formed over cosmological history. As these binaries evolve to smaller separations, nano-hertz GWs are produced, which are expected to be detected with the PTAs as a stochastic background. We will assume first, and also show later, that for the parameter region of interest to us this stage corresponds to a binary separation smaller than the soliton size. So we simply assume here that, for all relevant binaries, the two SMBHs are inside the soliton. The equations of motion for the two SMBHs are then given by
\[\begin{split}\omega_{\rm orb}^{2}r_{1}m_{1}&= \frac{Gm_{1}m_{2}}{R^{2}}+\frac{4GM_{s}}{\pi^{2}R_{s}^{3}}m_{1}r_{1}\,\\ \omega_{\rm orb}^{2}r_{2}m_{2}&=\frac{Gm_{1}m_{2}}{R ^{2}}+\frac{4GM_{s}}{\pi^{2}R_{s}^{3}}m_{2}r_{2}\,\end{split} \tag{9}\]
where \(\omega_{\rm orb}\) represents the orbital angular velocity of the system, and \(R=r_{1}+r_{2}\) denotes the separation between the two SMBHs. \(\omega_{\rm orb}\) is solved to be
\[\omega_{\rm orb}^{2}=\frac{G(m_{1}+m_{2})}{R^{3}}+\frac{4GM_{s}}{\pi^{2}R_{s} ^{3}}\, \tag{10}\]
where the impacts of the soliton are encoded as a frequency shift of the binary's orbit, namely \(\frac{4GM_{s}}{\pi^{2}R_{s}^{3}}\). The modified orbital binding energy is given by
\[E_{\rm orb} = \frac{1}{2}m_{1}\omega_{\rm orb}^{2}r_{1}^{2}+\frac{1}{2}m_{2} \omega_{\rm orb}^{2}r_{2}^{2}-\frac{Gm_{1}m_{2}}{R} \tag{11}\] \[+m_{1}\Phi_{a}(r_{1})+m_{2}\Phi_{a}(r_{2})\.\]
Here we do not require \(m_{1}=m_{2}\) or \(r_{1}=r_{2}\), but assume the soliton to be spherically symmetric with respect to the mass center of binary SMBHs.
The energy spectrum of SGWB in a homogeneous and isotropic Universe can be calculated using the formalism in [51], which gives
\[\Omega_{\rm gw}(f)\equiv\frac{1}{\rho_{c}}\frac{\mathrm{d}\mathcal{E}_{\rm gw }}{\mathrm{d}\ln f}=-\frac{1}{\rho_{c}}\int_{0}^{\infty}\mathrm{d}z\frac{N(z)} {1+z}\frac{\mathrm{d}E_{\rm orb}}{\mathrm{d}\ln f_{r}}\Big{|}_{f_{r}=f(1+z)}. \tag{12}\]
Here \(\mathcal{E}_{\rm gw}\) is the total energy density of GWs at present, \(\rho_{c}=3H_{0}^{2}/(8\pi G)\) is the current critical density of the Universe, \(1/(1+z)\) denotes the GW redshift, and \(N(z)\mathrm{d}z\) is the number of GW events in the unit comoving volume occurring between redshift \((z,z+\mathrm{d}z)\). Note that \(f\) and \(f_{r}=f(1+z)\) are frequencies observed at present and in the source's cosmic rest frame, respectively. The quantity \(-\mathrm{d}E_{\rm orb}/\mathrm{d}\ln f_{r}\), determined by the underlying physics, describes the GW energy per unit logarithmic frequency interval radiated in individual events. As the frequency of GWs emitted by a binary is twice the frequency of its orbital motion [52], namely \(f_{r}=\omega_{\rm orb}/\pi\), we obtain
\[-\frac{\mathrm{d}E_{\rm orb}}{\mathrm{d}\ln f_{r}}=\frac{\pi^{2/3}}{3G}(G \mathcal{M})^{5/3}f_{r}^{2/3}\frac{1+3(f_{s}/f_{r})^{2}}{\left[1-(f_{s}/f_{r} )^{2}\right]^{5/3}}. \tag{13}\]
Here \(\mathcal{M}=(m_{1}m_{2})^{3/5}(m_{1}+m_{2})^{-1/5}\) is the chirp mass, and the "soliton-induced frequency"
\[f_{\rm s}\equiv\sqrt{\frac{4GM_{s}}{\pi^{4}R_{s}^{3}}}\sim 1.6\times 10^{-22} \ \mathrm{Hz}\left(\frac{m_{a}}{10^{-20}\ \mathrm{eV}}\right)\left(\frac{f_{a}}{10^{10}\mathrm{GeV}}\right)^{2} \tag{14}\]
is a parameter characterizing the effect of angular frequency shift in Eq. (10). Eq. (13) is consistent with the prediction of \(-\mathrm{d}E_{\rm orb}/\mathrm{d}\ln f=\frac{\pi^{2/3}}{3G}(G\mathcal{M})^{5/3 }f^{2/3}\) made in a standard scenario where no soliton is positioned between the binary SMBHs [51; 52]. By plugging Eq. (13) into Eq. (12), we finally have the energy spectrum of SGWB (the power of \(f/f_{\rm ref}\) is often referred to as \(2+2\alpha\))
\[\Omega_{\rm gw}(f)=A_{\rm gw}^{2}\frac{2\pi^{2}}{3H_{0}^{2}}\left(\frac{f}{f_{ \rm ref}}\right)^{\frac{2}{3}}\frac{1+3(f_{s}/f)^{2}}{\left[1-(f_{s}/f)^{2} \right]^{5/3}}\, \tag{15}\]
where \(f_{\rm ref}\equiv 1\ \mathrm{yr}^{-1}\) is a reference frequency, \(H_{0}\equiv h\times 100\ \mathrm{km\ s^{-1}\ Mpc^{-1}}\) is the present Hubble parameter, and
\[A_{\rm gw}=f_{\rm ref}^{1/3}\sqrt{4(G\mathcal{M})^{5/3}N_{0}\left<(1+z)^{-1/3} \right>/(3\pi^{1/3})} \tag{16}\]
is the amplitude of the spectrum. Here \(N_{0}=\int_{0}^{\infty}N(z)\mathrm{d}z\) is the comoving number density of merged events today, and \(\left<(1+z)^{-1/3}\right>=\frac{1}{N_{0}}\int_{z_{\rm min}}^{z_{\rm max}}\frac{N(z)}{(1+z)^{1/3}}\mathrm{d}z\) is a redshift factor with \(z_{\rm min}=\mathrm{max}(0,f_{\rm min}/f-1)\) and \(z_{\rm max}=f_{\rm max}/f-1\)[51]. The minimum and maximum frequencies, namely \(f_{\rm min}\) and \(f_{\rm max}\), are set by the binary separation at its birth and the frequency at which it comes into Roche
lobe contact, respectively [51]. As the galaxy merger is expected to occur late (typically at a redshift \(z\lesssim 1\)[53]), while deriving Eq. (15) we have taken the approximation \(f_{s}/f_{r}\approx f_{s}/f\) to simplify the analysis. Notably, the correction factor in Eq. (15) is bigger than unity for \(f_{s}\neq 0\) but reduces to unity in the trivial case. As \(f_{s}<f_{r}\sim f\), this implies that \(\Omega_{\rm gw}(f)\) receives a larger positive correction at the low-frequency end than at the high-frequency end. This can be well explained. Due to the central soliton in the binary, the two SMBHs can stay farther from each other while radiating GWs of the same frequency, compared to the standard scenario. In this case, the quadrupole moment of the binary SMBHs is enhanced. But as the separation between the two SMBHs becomes smaller, where the radiated GWs have a higher frequency, the hidden mass enclosed by the two SMBHs is reduced. We anticipate these effects to be qualitatively unchanged even if the circular orbit is deformed into an elliptical one.
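The size of this low-frequency enhancement can be read off directly from the correction factor in Eq. (15). A small sketch over the PTA band, using for illustration the two \(f_{s}\) values shown in Fig. 2:

```python
import numpy as np

# Correction factor (1 + 3(f_s/f)^2) / [1 - (f_s/f)^2]^(5/3) from Eq. (15),
# valid for f > f_s; it reduces to 1 when f_s = 0.
def soliton_correction(f, f_s):
    x2 = (f_s / f) ** 2
    return (1.0 + 3.0 * x2) / (1.0 - x2) ** (5.0 / 3.0)

freqs = np.array([2e-9, 5e-9, 1e-8, 3e-8])          # Hz, roughly the PTA band
for f_s in [10 ** -9.2, 10 ** -9.1]:                 # f_s values used in Fig. 2
    line = "  ".join(f"{f:.0e}->{soliton_correction(f, f_s):.2f}" for f in freqs)
    print(f"f_s = {f_s:.1e} Hz: ", line)
```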
## IV Data fitting
In the recent round of efforts, NANOGrav [1], PPTA [2], CPTA [3] and EPTA [4] have impressively demonstrated the consistency between the data and the Hellings-Downs curve. This achievement supports the interpretation of the PTA signals as the SGWB. However, a deviation of \(\alpha\) from the prediction of \(-2/3\) of the standard SMBH binary scenario has been observed. Instead, the \(\Omega_{\rm gw}(f)h^{2}\) spectrum from the data is steeper, favoring a less negative \(\alpha\) value in the power-law fitting [1]. This discrepancy might be caused by the effects of binary population, environmental friction or orbital eccentricity [6], which are known to be able to correct \(\alpha\) and even the power law. Its existence makes a direct comparison between the model prediction in Eq. (15) and the new spectrum data uninformative, as these new effects might dominate over the soliton physics in the data fitting. To make this picture clear, we present the model fitting for the NANOGrav \(\Omega_{\rm gw}(f)h^{2}\) spectrum in Fig. 2. The black curve, as the best-fit, essentially represents the prediction from the standard scenario. A strong soliton effect on the spectrum is not favored by the data, given the unresolved discrepancy. Separately, non-zero eccentricity can indeed yield a steeper spectrum, which however requires an unrealistically large value to fit the data and hence is not expected to be important [1].
Given this background, we consider a generalized spectrum pattern for the data fitting, to demonstrate the effects of the central soliton. Concretely, we keep the analytical formula in Eq. (15), but allow \(\alpha\) to vary to approximately mimic effects such as binary population [56] and environmental friction [57]. The best-fit is demonstrated as a magenta curve in the same figure. Indeed, with the spectrum steepness resolved, one can see that an enhancement at the low-frequency end is favored by the data. Notably, here we have implicitly assumed that the underlying physics resolving the steepness discrepancy will not qualitatively change the properties of the correction factor in Eq. (15). This is expected to be the case if the orbital eccentricity of the binaries is as small as anticipated. Finally, we show the favored parameter regions for \((m_{a},f_{a})\) in Fig. 3, for the UGC4211 dataset and the NANOGrav 15-year dataset [1].
## V Summary and outlook
In this Letter, we explore the orbital effects of a central WDM soliton on SMBH binaries, which may leave an imprint in the SGWB frequency spectrum being measured with pulsar timing arrays. Such a scenario is motivated by the newly resolved dynamics of the dual AGN in UGC4211. This insight is timely for the interpretation of the new PTA detections of the nHz SGWB, long anticipated to be dominated by SMBH binaries. By modeling the dynamics of binary SMBHs circularly orbiting inside a soliton and integrating the contributions over cosmological history, we show that the produced SGWB energy spectrum can be enhanced at the low-frequency end. This is a new type of environmental effect, different from, e.g., the more familiar dynamical friction.
Figure 2: Data fitting for the \(\Omega_{\rm gw}(f)h^{2}\) spectrum. The black solid curve represents the best-fit of Eq. (15) to the NANOGrav 15-year dataset [1], whereas the black dotted and dashed curves represent a shift from the best-fit curve with \(\log_{10}A_{\rm gw}=-14.0,\,-15.0\), respectively. The blue and red solid curves have the same value of \(\log_{10}A_{\rm gw}\) as that of the best-fit curve, but with different \(f_{s}\) values: \(\log_{10}(f_{s}/{\rm Hz})=-9.1,\,-9.2\). As a reference, we present the spectra from the SMBH binaries with a uniform initial eccentricity [54] of \(e=0.4,\,\,0.8,\,0.9\), respectively, as the orange dotted, dashed and solid curves. We finally present the data fitting for the generalized spectrum pattern (see its definition in the main text) as dashed magenta curves, with the best-fit values: \((\log_{10}A_{\rm gw},\log_{10}(f_{s}/{\rm Hz}),\alpha)=(-14.0,-9.0,0.2)\).
Following this proof-of-concept work, there are several directions worth investigating further. Firstly, the orbital evolution of binary SMBHs with a WDM soliton, together with the merger of two galactic solitons with self-interactions, needs to be studied in detail. Such a study will benefit our understanding of the impact of dark matter on galaxy mergers. Secondly, as some physical effects other than the central WDM soliton may contribute significantly to explaining the data, a more comprehensive study including all of these factors is important. The outcomes may open or tighten the GW observational window on galactic WDM solitons with future PTA experiments. We leave these explorations to an ongoing project [58].
## Acknowledgement
C. Chen would like to thank Jie-Wen Chen, Chengjie Fu, Leo WH Fung, Yudong Luo, Hoang Nhan Luu, Xin Ren, Chengfeng Tang, Zi-Qing Xia and Guan-Wen Yuan for useful discussions. This project is supported by the Collaborative Research Fund under Grant No. C6017-20G which is issued by Research Grants Council of Hong Kong S. A. R.
|
2310.02274 | Quantum Scalar Field Theory Based On Principle of Least Observability | Recently it is shown that the non-relativistic quantum formulations can be
derived from a least observability principle [36]. In this paper, we apply the
principle to massive scalar fields, and derive the Schr\"{o}dinger equation of
the wave functional for the scalar fields. The principle extends the least
action principle in classical field theory by factoring in two assumptions.
First, the Planck constant defines the minimal amount of action a field needs
to exhibit in order to be observable. Second, there are constant random field
fluctuations. A novel method is introduced to define the information metrics to
measure additional observable information due to the field fluctuations,
which is then converted to the additional action through the first assumption. Applying the variation principle to minimize the total actions
allows us to elegantly derive the transition probability of field fluctuations,
the uncertainty relation, and the Schr\"{o}dinger equation of the wave
functional. Furthermore, by defining the information metrics for field
fluctuations using general definitions of relative entropy, we obtain a
generalized Schr\"{o}dinger equation of the wave functional that depends on the
order of relative entropy. Our results demonstrate that the extended least
action principle can be applied to derive both non-relativistic quantum
mechanics and relativistic quantum scalar field theory. We expect it can be
further used to obtain quantum theory for non-scalar fields. | Jianhao M. Yang | 2023-09-29T06:58:36Z | http://arxiv.org/abs/2310.02274v2 | # Quantum Scalar Field Theory Based On Principle of Least Observability
###### Abstract
Recently it has been shown that the non-relativistic quantum formulations can be derived from a principle of least observability [36]. In this paper, we apply the principle to massive scalar fields, and derive the Schrodinger equation of the wave functional for the scalar fields. The principle can be considered as an extension of the least action principle in classical field theory by factoring in two assumptions. First, the Planck constant defines the minimal amount of action a field needs to exhibit in order to be observable. This enables us to calculate the degree of observability from the dynamics of a classical field. Second, there are constant random field fluctuations. A novel method is introduced to define the information metrics to measure the additional observability due to the field fluctuations. Applying the variation principle to minimize the total degree of observability allows us to elegantly derive the transition probability of field fluctuations, the uncertainty relation, and the Schrodinger equation of the wave functional. Furthermore, by defining the information metrics for field fluctuations using general definitions of relative entropy, we obtain a generalized Schrodinger equation of the wave functional that depends on the order of the relative entropy. Our results demonstrate that the least observability principle can be applied to derive both non-relativistic quantum mechanics and relativistic quantum scalar field theory. We expect it can be further used to obtain quantum theory for non-scalar fields.
## I Introduction
Advancements in quantum information and quantum computing [1; 2] in recent decades have inspired active research on new foundational principles for quantum mechanics from the information perspective [3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35]. Reformulating quantum mechanics based on information principles can bring new conceptual insights to the unresolved challenges in the current quantum theory. For instance, is the probability amplitude, or wavefunction, just a mathematical tool, or is it associated with an ontic physical property? Does quantum entanglement imply non-local causal connections among entangled objects? With this motivation, a least observability principle was recently proposed to derive the formalism of non-relativistic quantum mechanics [36]. Here observability refers to the degree to which a physical object is observable through its dynamics. It measures the information available for potential observation. The principle can be understood as recasting and extending the least action principle in classical mechanics into a new principle of minimizing proper information measures. This is achieved by factoring in two assumptions. First, there is a lower limit to the amount of action a physical system needs to exhibit in order to be observable. Such a discrete action unit is defined by the Planck constant. It serves as a basic unit to measure the observability from the action a physical system exhibits during its motion. Second, there are vacuum fluctuations that are completely random. New information metrics are introduced to measure the additional distinguishability, or observability, due to these random fluctuations. Applying the variational principle to minimize the total degree of observability allows us to elegantly recover the basic formulations of non-relativistic quantum mechanics. In addition, a family of generalized Schrodinger equations for the wave functional is obtained by defining the information metrics for vacuum fluctuations using generic relative entropy definitions.
The goal of this paper is to apply the same framework to relativistic quantum theory. Specifically, we will apply the least observability principle to derive the quantum field theory of massive scalar fields. Impressively, we find that the only adjustment needed to the least observability principle is to replace the assumption of random vacuum fluctuations in the non-relativistic setting with random field fluctuations in the relativistic setting. By recursively applying the least observability principle, we are able to derive the transition probability density of the field fluctuations, the uncertainty relation, and most importantly, the Schrodinger equation of the wave functional for the scalar fields. The Schrodinger equation of the wave functional is the fundamental equation for the quantum scalar field theory in the Schrodinger picture, and it is typically introduced as a postulate. Here we derive it from a first principle. Similarly to the non-relativistic quantum formalism, by relaxing the definition of the information metrics using generic relative entropy, we obtain a family of generalized Schrodinger equations.
The Schrodinger picture offers several advantages compared to the standard Fock space description of scalar fields [37]. In particular, the Schrodinger wave functional gives an intrinsic description of the vacuum without reference to the spectrum of excited states, which is an inherent problem for the Fock space of states in curved spacetime [37]. It is also argued that the Schrodinger picture in field theory is the most natural representation
from the viewpoint of canonical quantum gravity, where the spacetime is usually decomposed into a spatial manifold evolving in time [39]. The Schrodinger formulations in both non-relativistic quantum theory and relativistic quantum field theory allow us to understand the difference and similarity between the two theories. They may provide hints on applying certain concepts from one theory to the other. For instance, calculating information metrics such as the entanglement entropy of a quantum field is challenging [40]. In non-relativistic quantum mechanics, such a quantity for entangled systems is typically calculated with the help of the wave function. With the availability of the Schrodinger wave functional, one may find a similar method to calculate the entanglement entropy for a scalar field.
Extending the least action principle in classical mechanics to the least observability principle in quantum mechanics not only shows clearly how classical mechanics becomes quantum mechanics, but also offers a powerful mathematical framework. As shown in this paper, the principle and the mathematical framework allow us to derive the Schrodinger equation for the wave functional of the scalar field in a way very similar to that in the non-relativistic setting. Although the derivation is currently carried out in Minkowski spacetime, it should not be difficult to extend it to a curved spacetime. The least observability principle also provides interesting implications for the interpretation of quantum theory, which will be discussed in a separate report.
The rest of the article is organized as follows. First, we briefly overview a least action principle for the classical scalar field, since it is the starting point of the quantum formulation. Second, we review the underlying assumptions for the least observability principle and what should be adjusted to apply the principle in the case of scalar fields. In Section IV we apply the principle recursively to analyze the dynamics of field fluctuations, then derive the uncertainty relation and the Schrodinger equation for the wave functional. The Schrodinger equation is generalized in Section V. We then conclude the article after comprehensive discussions and comparisons to previous relevant research works.
## II Classical theory for massive scalar fields
This section briefly reviews the classical theory of scalar fields, the canonical transformation, and the Hamilton-Jacobi equation. Consider a massive scalar field configuration \(\phi\). Here we denote the coordinates for a four dimensional spacetime point \(x\) either by \(x=(x^{(0)},x^{(i)})\) where \(i=\{1,2,3\}\), or by \(x=(t,\mathbf{x})\) where \(\mathbf{x}\) is a spatial point. The field component at a spacetime point \(x\) is denoted as \(\phi_{x}=\phi(x)\). The Lagrangian density for a massive scalar field is given by
\[\begin{split}\mathcal{L}&=\frac{1}{2}[\partial_{\mu }\phi(x)]^{2}-\frac{1}{2}m^{2}[\phi(x)]^{2}\\ &=\frac{1}{2}[\dot{\phi}(x)]^{2}-\frac{1}{2}([\nabla\phi(x)]^{2} +m^{2}[\phi(x)]^{2}).\end{split} \tag{1}\]
where \(\mu=\{0,1,2,3\}\) and the convention of Einstein summation is assumed. The first term \(\frac{1}{2}[\dot{\phi}(x)]^{2}\) resembles the kinetic energy density in Newtonian mechanics, while the second term is the potential energy density, denoted as \(V(\phi(x))\). The corresponding action functional is
\[A=\int d^{4}x\mathcal{L}. \tag{2}\]
The momentum conjugate to the field is defined by
\[\pi(x)=\frac{\partial\mathcal{L}}{\partial(\partial_{0}\phi)}=\partial_{0} \phi(x)=\dot{\phi}(x). \tag{3}\]
Applying the least action principle to minimize the action functional \(A\), one obtains the Euler-Lagrange equation
\[\partial_{\mu}\partial^{\mu}\phi+m^{2}\phi=0, \tag{4}\]
which is the Klein-Gordon equation.
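For concreteness, the classical dynamics encoded in (4) can be illustrated with a simple lattice integration. A minimal sketch of a 1+1-dimensional Klein-Gordon field with a leapfrog scheme and periodic boundaries (all numerical parameters are illustrative; units are chosen so that the lattice spacing is one):

```python
import numpy as np

# Leapfrog integration of the 1+1D Klein-Gordon equation  phi_tt = phi_xx - m^2 phi
# on a periodic lattice (illustrative parameters; lattice spacing dx = 1).
N, m, dt, steps = 256, 0.1, 0.1, 2000
x = np.arange(N)
phi = np.exp(-0.5 * ((x - N / 2) / 10.0) ** 2)   # initial Gaussian bump
phi_prev = phi.copy()                            # zero initial velocity

def laplacian(f):
    return np.roll(f, 1) + np.roll(f, -1) - 2.0 * f

for _ in range(steps):
    phi_next = 2.0 * phi - phi_prev + dt**2 * (laplacian(phi) - m**2 * phi)
    phi_prev, phi = phi, phi_next

print("max |phi| after evolution:", float(np.max(np.abs(phi))))
```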
Variables \((\phi,\pi)\) form a pair of canonical variables, and the corresponding Hamiltonian is constructed by a Legendre transform of the Lagrangian [37]
\[\begin{split} H[\phi,\pi]&=\int d^{3}\mathbf{x}\{ \pi(x)\dot{\phi}(x)-\mathcal{L}\}\\ &=\int d^{3}\mathbf{x}\{\frac{1}{2}[\dot{\phi}(x)]^{2}+V\}.\end{split} \tag{5}\]
Next we want to apply the canonical transformation technique in field theory. To do this, we will need to choose a foliation of the spacetime into a succession of spacetime hypersurfaces. Here we only consider the Minkowski spacetime and it is natural to choose these to be the hypersurfaces \(\Sigma_{t}\) of fixed \(t\). The field configuration \(\phi\) for \(\Sigma_{t}\) can be understood as a vector with infinitely many components, one for each spatial point on the Cauchy hypersurface \(\Sigma_{t}\) at time instance \(t\), denoted as \(\phi_{t,\mathbf{x}}=\phi(t,\mathbf{x})\). For simplicity of notation, we will still denote \(\phi(t,\mathbf{x})=\phi(x)\) for the rest of this paper, but the meaning of \(\phi(x)\) should be understood as the field component \(\phi_{\mathbf{x}}\) at each spatial point of the hypersurface \(\Sigma_{t}\) at time instance \(t\). In Appendix A, we show that by an extended canonical transformation, the action functional of the field can be written as
\[A_{c}=\int dt\{\frac{\partial S}{\partial t}+H[\phi,\pi]\}, \tag{6}\]
where \(S[\phi,t]\) is a generating functional that satisfies the identity \(\pi(x)=\delta S/\delta\phi(x)\). A special solution to the least action principle for the above action functional is \(\partial S/\partial t+H=0\). Substituting \(H\) from (5), we have
\[\frac{\partial S}{\partial t}+\int d^{3}\mathbf{x}\{\frac{1}{2}[\dot{\phi}(x )]^{2}+V(\phi(x))\}=0. \tag{7}\]
Since \(\dot{\phi}(x)=\pi(x)=\delta S/\delta\phi(x)\), the above equation can be rewritten as
\[\frac{\partial S}{\partial t}+\int d^{3}\mathbf{x}\{\frac{1}{2}(\frac{\delta S}{\delta\phi(x)})^{2}+V(\phi(x))\}=0. \tag{8}\]
This is the Hamilton-Jacobi equation for the scalar field that governs the evolution of the functional \(S\) between the spacelike hypersurfaces. It is equivalent to the Klein-Gordon equation (4).
As also shown in Appendix A, suppose the scalar field configuration \(\phi\) follows a probability distribution, with probability density \(\rho[\phi,t]\) for the hypersurface \(\Sigma_{t}\), the average value of the action functional is,
\[S_{c}=\int\mathcal{D}\phi dt\,\rho\{\frac{\partial S}{\partial t}+\int d^{3}\mathbf{x}[\frac{1}{2}(\frac{\delta S}{\delta\phi(x)})^{2}+V(\phi(x))]\}. \tag{9}\]
Note that \(S_{c}\) and \(S\) are different functionals, where \(S_{c}\) can be considered as the ensemble average of the classical action functional and \(S\) is a generating functional introduced in an extended canonical transformation that satisfies \(\pi(x)=\delta S/\delta\phi(x)\). Now we consider the generalized canonical pair as \((\rho,S)\), and apply the least action principle to the action functional defined in (9). Variation of \(S_{c}\) over \(\rho\) leads to (8), and variation of \(S_{c}\) over \(S\) gives
\[\frac{\partial\rho}{\partial t}+\int\frac{\delta}{\delta\phi(x)}(\rho\frac{ \delta S}{\delta\phi(x)})d^{3}\mathbf{x}=0, \tag{10}\]
which is the continuity equation for the probability density. Both equations (8) and (10) determine the dynamics of the classical scalar field ensemble, and they are obtained by applying the least action principle based on the action functional \(S_{c}\) defined in (9).
## III Principle of Least Observability
Ref. [36] shows that the least action principle in classical mechanics can be extended to a least observability principle to derive the quantum formulation by factoring in the following two assumptions.
_Assumption 1 - A quantum system experiences vacuum fluctuations constantly. The fluctuations are local and completely random._
_Assumption 2 - There is a lower limit to the amount of action that a physical system needs to exhibit in order to be observable. This basic discrete unit of action effort is given by the Planck constant \(\hbar/2\)._
The first assumption is generally accepted in mainstream quantum mechanics, which is responsible for the intrinsic randomness of the dynamics of a quantum object. Locality of vacuum fluctuation is assumed, and it implies that for a composite system, the fluctuation of each subsystem is independent of each other.
The justification of the second assumption is explained in detail in Section II of Ref. [36]. Historically the Planck constant was first introduced to show that the energy of radiation from a black body is discrete. One can consider the discrete energy unit as the smallest unit to be distinguished, or detected, in the black body radiation phenomenon. In general, it is understood that the Planck constant is associated with the discreteness of certain observables in quantum mechanics. Here, we just interpret the Planck constant from an information measure point of view. Essentially, what we assume is that there is a lower limit to the amount of action that the physical system needs to exhibit in order to be observable or distinguishable in potential observation, and such a unit of action is defined by the Planck constant.
The existence of the Planck constant as a unit of action for the physical system to exhibit in order to be observable allows us to ask the question: given a certain amount of action \(S_{c}\) from its motion along a classical trajectory, how much observability does the system exhibit from its dynamics? According to Assumption 2, this is calculated as \(I_{p}=2S_{c}/\hbar\). \(I_{p}\) is not a conventional information metric but it has a clear meaning as a piece of physical information. That is, it can be considered as the amount of observable information measured in units of \(\hbar/2\). This step of converting \(S_{c}\) into \(I_{p}\) appears trivial mathematically, but conceptually it is not. It recasts the least action principle into a least observability principle, and shifts the working language to be information related. Thus, \(I_{p}\) can be paired with additional information metrics due to vacuum fluctuations. To measure the degree of observability due to vacuum fluctuations, a new information metric \(I_{f}\) is introduced. \(I_{f}\) is defined as a metric to measure the additional distinguishable, hence observable, information exhibited due to vacuum fluctuations. \(I_{f}\) will be defined as a functional of the Kullback-Leibler divergence \(D_{KL}\), \(I_{f}:=f(D_{KL})\), where \(D_{KL}\) measures the information distances of different probability distributions caused by vacuum fluctuations. Thus, the total degree of observability due to both the classical trajectory and vacuum fluctuations is
\[I=\frac{2}{\hbar}S_{c}+f(D_{KL}). \tag{11}\]
Non-relativistic quantum theory can be derived through a variation method that demands that \(I\) be stationary, that is, \(\delta I=0\). When \(\hbar\to 0\), the observability due to the classical path \(I_{p}\rightarrow\infty\). Thus, the system can be observed with infinite accuracy, and any finite amount of \(I_{f}\) can be ignored. Minimizing \(I\) is then equivalent to minimizing \(S_{c}\), resulting in the dynamical laws of classical mechanics. However, in quantum mechanics, according to Assumption 2, the action to exhibit observability is discrete so that \(\hbar\neq 0\), and \(I_{p}\) is finite. This means there is only a finite amount of observable information available. The contribution from \(I_{f}\) can be comparable to \(I_{p}\) and therefore must be included when minimizing the total amount of observable information. These ideas can be
condensed as1
Footnote 1: The principle can be called the principle of least distinguishability as well, since the term distinguishability and observability is interchangeable in this paper. Also, the term observability should not be confused with the same terminology in control theory.
_Principle of Least Observability - The law of physical dynamics tends to exhibit as little as possible the observability defined in (11)._
Now we want to apply this principle to the scalar field and derive the quantum scalar field theory. Assumption 1 needs to be slightly modified, since in the field theory, one does not deal with a physical object. Instead, we are dealing with the field configuration. Assumption 1 is restated as
_Assumption 1' - There are constant fluctuations in the field configurations. The fluctuations are completely random, and local._
It is not our intention here to investigate the origin, or establish a physical model, of such field fluctuations. Instead, we make a minimal number of assumptions on the underlying physical model, only enough so that we can apply the variation principle based on the degree of observability.
Assumption 2 is unchanged for the quantum field theory. The observability metric for the dynamics of a classical scalar field is defined as \(2S_{c}/\hbar\), where \(S_{c}\) is given by (2), or (9). Similarly, the metric to measure the additional distinguishable information exhibited due to field fluctuations is defined as a functional of the Kullback-Leibler divergence \(D_{KL}\), \(I_{f}:=f(D_{KL})\), where \(D_{KL}\) measures the information distances of different probability distributions caused by field fluctuations. Thus, the total degree of observability due to both the classical field and field fluctuations is given by the same equation as (11). Quantum field theory can be derived through a variation method to minimize such a functional quantity, \(\delta I=0\). When \(\hbar\to 0\), the observability due to the classical field \(2S_{c}/\hbar\to\infty\), so any finite amount of \(I_{f}\) can be ignored. Minimizing \(I\) is then equivalent to minimizing \(S_{c}\), resulting in the dynamical laws of classical fields. However, in quantum field theory, the action to exhibit observability is discrete so that \(\hbar\neq 0\), and \(2S_{c}/\hbar\) is finite. Therefore, the contribution from \(I_{f}\) must be included when minimizing the total degree of observability.
Next we will show that by applying the variational principle to minimize the total degree of observability, we can obtain the uncertainty relation and the Schrodinger equation of the wave functional for the scalar field, which are the basic formulation of the quantum scalar field.
## IV Quantum theory for massive scalar fields
### Field Fluctuations and Uncertainty Relation
First we consider the field fluctuations in an equal-time hypersurface for an infinitesimal time interval \(\Delta t\). At a given time \(t\to t+\Delta t\) in the hypersurface \(\Sigma_{t}\), the field configuration fluctuates randomly, \(\phi\rightarrow\phi+\omega\), where \(\omega=\Delta\phi\) is the change of the field configuration due to random fluctuations. Define the probability for the field configuration to transition from \(\phi\) to \(\phi+\omega\) as \(p[\phi+\omega|\phi]\mathcal{D}\omega\). The expectation value of the classical action over all possible field fluctuations is \(S_{c}=\int p[\phi+\omega|\phi]\mathcal{L}d^{3}\mathbf{x}\mathcal{D}\omega dt\), where \(\mathcal{L}\) is given by (1) for a scalar field. For an infinitesimal time interval \(\Delta t\), one can approximate \(\dot{\phi}=\Delta\phi/\Delta t=\omega/\Delta t\). The classical action for the infinitesimal time interval \(\Delta t\) is then approximately given by
\[S_{c}=\int p[\phi+\omega|\phi]\mathcal{D}\omega\int_{\Sigma_{t}}\{\frac{[ \omega(x)]^{2}}{2\Delta t}+V(\phi(x))\Delta t\}d^{3}\mathbf{x}. \tag{12}\]
The information metrics \(I_{f}\) is supposed to capture the additional revelation of information due to vacuum fluctuations. Thus, it is naturally defined as a relative entropy, or more specifically, the Kullback-Leibler divergence, to measure the information distance between \(p[\phi+\omega|\phi]\) and some prior probability distribution. Since the field fluctuations are completely random, it is intuitive to assume the prior distribution with maximal ignorance [33; 44]. That is, the prior probability distribution is a uniform distribution \(\sigma\).
\[I_{f} :=D_{KL}(p[\phi+\omega|\phi]||\sigma)\] \[=\int p[\phi+\omega|\phi]ln[p[\phi+\omega|\phi]/\sigma]\mathcal{ D}\omega.\]
Combined with (12), the total amount of information defined in (11) is
\[I= \frac{1}{\hbar\Delta t}\int p[\phi+\omega|\phi]\mathcal{D} \omega\int([\omega(x)]^{2}+2V(\phi(x))\Delta t)d^{3}\mathbf{x}\] \[+\int p[\phi+\omega|\phi]ln[p[\phi+\omega|\phi]/\sigma]\mathcal{ D}\omega.\]
Taking the variation \(\delta I=0\) with respect to \(p\) gives
\[\delta I=\int\{\int(\frac{[\omega(x)]^{2}}{\hbar\Delta t}+\frac{2V\Delta t}{ \hbar})d^{3}\mathbf{x}+ln\frac{p}{\sigma}+1)\}\delta p\mathcal{D}\omega=0. \tag{13}\]
Since \(\delta p\) is arbitrary, one must have
\[\int\{[\omega(x)]^{2}+2V(\Delta t)^{2}\}d^{3}\mathbf{x}+\hbar\Delta t(ln\frac{p}{\sigma}+1)=0.\]
When \(\Delta t\) is infinitesimally small, we can ignore the higher order term with \((\Delta t)^{2}\), and obtain the solution for \(p\) as
\[\begin{split} p[\phi+\omega|\phi]&=\sigma e^{-\frac {1}{\hbar\Delta t}\int[\omega(x)]^{2}d^{3}\mathbf{x}-1}\\ &=\frac{1}{Z}e^{-\frac{1}{\hbar\Delta t}\int[\omega(x)]^{2}d^{3} \mathbf{x}},\end{split} \tag{14}\]
where \(Z\) is a normalization factor that absorbs factor \(\sigma e^{-1}\). Equation (14) shows that the transition probability density is a Gaussian-like distribution. It is independent of \(\phi\) and can be simply denoted as \(p[\omega]\). Clearly, the expectation value of \(\omega(x)\) is
\[\langle\omega(x)\rangle=\int p[\omega]\omega(x)\mathcal{D}\omega=0. \tag{15}\]
We also want to evaluate the expectation value of the field fluctuations at two spatial points in the hypersurface \(\Sigma_{t}\), \(x=(t,\mathbf{x})\) and \(x^{\prime}=(t,\mathbf{x}^{\prime})\),
\[\langle\omega(x)\omega(x^{\prime})\rangle=\int p[\omega]\omega(x)\omega(x^{ \prime})\mathcal{D}\omega. \tag{16}\]
In Appendix B, we verify that
\[\langle\omega(x)\omega(x^{\prime})\rangle=\frac{\hbar\Delta t}{2}\delta( \mathbf{x}-\mathbf{x}^{\prime}), \tag{17}\]
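The content of (14) and (17) is easy to check numerically on a spatial lattice: with cell volume \(\Delta V\), the Gaussian weight (14) factorizes into independent per-cell fluctuations of variance \(\hbar\Delta t/(2\Delta V)\), and the \(\delta\)-function in (17) becomes \(\delta_{ij}/\Delta V\). A Monte Carlo sketch of this discretized statement (units with \(\hbar=1\); all parameters illustrative):

```python
import numpy as np

# Lattice check of <omega_i> = 0 and <omega_i omega_j> = (hbar*dt/2) * delta_ij / dV,
# the discretized version of Eqs. (15) and (17).
rng = np.random.default_rng(0)
hbar, dt, dV = 1.0, 1e-3, 0.5
n_cells, n_samples = 8, 200_000

sigma2 = hbar * dt / (2.0 * dV)                   # per-cell variance implied by Eq. (14)
omega = rng.normal(0.0, np.sqrt(sigma2), size=(n_samples, n_cells))

cov = omega.T @ omega / n_samples                 # estimate of <omega_i omega_j>
print("max |<omega_i>|    :", float(np.abs(omega.mean(axis=0)).max()))
print("mean diagonal      :", float(cov.diagonal().mean()), " expected:", sigma2)
print("max |off-diagonal| :", float(np.abs(cov - np.diag(cov.diagonal())).max()))
```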
Recall that \(\omega=\Delta\phi\), and \(\pi=\dot{\phi}=\Delta\phi/\Delta t=\omega/\Delta t\). Since \(\langle\omega\rangle=0\), one has \(\langle\pi\rangle=\langle\omega\rangle/\Delta t=0\) as well. Thus, \(\Delta\pi=\pi-\langle\pi\rangle=\pi=\omega/\Delta t\), we re-arrange (17) as
\[\langle\Delta\phi(x)\Delta\pi(x^{\prime})\rangle=\frac{\hbar}{2}\delta( \mathbf{x}-\mathbf{x}^{\prime}). \tag{18}\]
Applying the Cauchy-Schwarz inequality we get
\[\langle\Delta\phi(x)\rangle\langle\Delta\pi(x^{\prime})\rangle\geq\langle \Delta\phi(x)\Delta\pi(x^{\prime})\rangle=\frac{\hbar}{2}\delta(\mathbf{x}- \mathbf{x}^{\prime}). \tag{19}\]
However, the \(\delta\)-function on the right hand side of (19) makes such a direct comparison ill-defined. Instead, we introduce a pair of positive spatial test functions \(f(\mathbf{x}),g(\mathbf{x}):\mathbb{R}^{3}\rightarrow\mathbb{R}^{+}\), and define
\[\langle\omega(f)\omega(g)\rangle=\int p[\omega]\{\int_{\Sigma_{t}}\omega(x)f( \mathbf{x})\omega(x^{\prime})g(\mathbf{x}^{\prime})d\mathbf{x}d\mathbf{x}^{ \prime}\}\mathcal{D}\omega. \tag{20}\]
Repeating the similar calculations from (17) to (19), we can obtain
\[\langle\Delta\phi(f)\rangle\langle\Delta\pi(g)\rangle\geq\frac{\hbar}{2} \langle f|g\rangle, \tag{21}\]
where \(\langle f|g\rangle=\int_{\Sigma_{t}}f(\mathbf{x})g(\mathbf{x})d\mathbf{x}\). This is the uncertainty relation between the field variable \(\phi\) and its conjugate momentum variable \(\pi\) for the scalar fields.
### Derivation of The Schrodinger Equation for the Wave Functional
We now turn to the field dynamics for a period of time from \(t_{A}\to t_{B}\). As described earlier, the spacetime during the time duration \(t_{A}\to t_{B}\) is sliced into a succession of \(N\) Cauchy hypersurfaces \(\Sigma_{t_{i}}\), where \(t_{i}\in\{t_{0}=t_{A},\ldots,t_{i},\ldots,t_{N-1}=t_{B}\}\), and each time step is an infinitesimal period \(\Delta t\). The field configuration for each \(\Sigma_{t_{i}}\) is denoted as \(\phi(t_{i})\), which has an infinite number of components, labeled as \(\phi_{\mathbf{x}}(t_{i})=\phi(\mathbf{x},t_{i})\), for each spatial point in \(\Sigma_{t_{i}}\). Without considering the random field fluctuations, the dynamics of the field configuration is governed by the Hamilton-Jacobi equation (8). Furthermore, we consider an ensemble of field configurations on the hypersurface \(\Sigma_{t_{i}}\) that follows a probability density2\(\rho_{t_{i}}[\phi]=\rho[\phi,t_{i}]\), which obeys the continuity equation (10). As shown in Section II, both the Hamilton-Jacobi equation and the continuity equation can be derived through variation of the classical action functional \(S_{c}\), as defined in (9), with respect to \(\rho\) and \(S\), respectively.
Footnote 2: The notation \(\rho[\phi,t_{i}]\) is legitimate since in this case \(\phi\) describes the field configuration for the equal time hypersurface \(\Sigma_{t_{i}}\).
To apply the least observability principle, first we compute the observability from the dynamics of the classical field ensemble, according to Assumption 2, as \(I_{c}=2S_{c}/\hbar\). Next we need to define the information metrics for the field fluctuations, \(I_{f}\). For each new field configuration \(\phi+\omega\) due to the field fluctuations, there is a new probability density \(\rho[\phi+\omega,t_{i}]\). We need a proper metric to measure the additional revelation of observability due to the field fluctuations on top of the observability from the classical field dynamics. The proper measure of this distinction is the information distance between \(\rho[\phi,t_{i}]\) and \(\rho[\phi+\omega,t_{i}]\). A natural choice of such an information measure is the relative entropy \(D_{KL}(\rho[\phi,t_{i}]||\rho[\phi+\omega,t_{i}])\). Moreover, we need to consider the contributions for all possible \(\omega\). Thus, we take the expectation value of \(D_{KL}\) over \(\omega\), denoted as \(\langle\cdot\rangle_{\omega}\). The contribution of distinguishable information due to field fluctuations for the hypersurface \(\Sigma_{t_{i}}\) is \(\langle D_{KL}(\rho[\phi,t_{i}]||\rho[\phi+\omega,t_{i}])\rangle_{\omega}\). Finally, we sum up the contributions from all hypersurfaces, leading to the definition of the information metrics
\[I_{f} :=\sum_{i=0}^{N-1}\langle D_{KL}(\rho[\phi,t_{i}]||\rho[\phi+ \omega,t_{i}])\rangle_{\omega} \tag{22}\] \[=\sum_{i=0}^{N-1}\int\mathcal{D}\omega p[\omega]\int\mathcal{D} \phi\rho[\phi,t_{i}]ln\frac{\rho[\phi,t_{i}]}{\rho[\phi+\omega,t_{i}]}. \tag{23}\]
Notice that \(p[\omega]\) is a Gaussian-like distribution given in (14). When \(\Delta t\) is small, only small fluctuations \(\omega\) will contribute to \(I_{f}\). As shown in Appendix C, when \(\Delta t\to 0\), \(I_{f}\) turns out to be
\[I_{f}=\frac{\hbar}{4}\int\frac{1}{\rho[\phi,t]}(\frac{\delta\rho[\phi,t]}{ \delta\phi(x)})^{2}d^{3}\mathbf{x}\mathcal{D}\phi dt. \tag{24}\]
Eq. (24) is analogous to the Fisher information for the probability density [36; 43] in non-relativistic quantum mechanics. Some literature directly adds such a Fisher information term to the variation method as a postulate to derive the Schrodinger equation [41; 42]. But (24) bears much more physical significance than Fisher information. First, it shows that \(I_{f}\) is proportional to \(\hbar\).
This is not trivial because it avoids introducing additional arbitrary constants for the subsequent derivation of the Schrodinger equation. More importantly, defining \(I_{f}\) using the relative entropy opens up new results that cannot be obtained if \(I_{f}\) is defined using Fisher information, because there are other generic forms of relative entropy such as Renyi divergence or Tsallis divergence. As will be seen later, by replacing the Kullback-Leibler divergence with Renyi divergence, one will obtain a family of generalized Schrodinger equations.
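The key step behind (24) can be illustrated in a single-variable toy model: for an ordinary probability density \(\rho(\phi)\) of one field mode and Gaussian fluctuations \(\omega\sim\mathcal{N}(0,\hbar\Delta t/2)\), the averaged Kullback-Leibler divergence approaches \((\hbar\Delta t/4)\int(\partial_{\phi}\rho)^{2}/\rho\,d\phi\) as \(\Delta t\to 0\). A numerical sketch of this one-dimensional analogue (all parameters illustrative):

```python
import numpy as np

# Toy (single-mode) check that <D_KL(rho(phi)||rho(phi+omega))>_omega approaches
# (hbar*dt/4) * integral of (rho')^2 / rho  --  a one-variable analogue of Eq. (24).
rng = np.random.default_rng(1)
hbar, dt = 1.0, 1e-4
phi = np.linspace(-12.0, 12.0, 8001)
dphi = phi[1] - phi[0]

def unnorm(y):                       # a generic (unnormalized) smooth density
    return np.exp(-0.5 * (y - 0.3) ** 2) + 0.5 * np.exp(-0.5 * (y + 2.0) ** 2)

Z = unnorm(phi).sum() * dphi
rho = unnorm(phi) / Z

def kl_shift(w):                     # D_KL(rho(phi) || rho(phi + w)); Z cancels in the log
    return np.sum(rho * np.log(unnorm(phi) / unnorm(phi + w))) * dphi

omegas = rng.normal(0.0, np.sqrt(hbar * dt / 2.0), size=2000)
kl_avg = np.mean([kl_shift(w) for w in omegas])

drho = np.gradient(rho, dphi)
fisher_term = hbar * dt / 4.0 * np.sum(drho**2 / rho) * dphi
print(f"<D_KL>_omega ~ {kl_avg:.3e}   (hbar*dt/4)*Fisher ~ {fisher_term:.3e}")
```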
Together with (9), (24), and (11), the total observability is
\[\begin{split} I=&\frac{2}{\hbar}\int\rho\{\frac{\partial S}{\partial t}+\int[\frac{1}{2}(\frac{\delta S}{\delta\phi(x)})^{2}+V(\phi(x))\\ &+\frac{\hbar^{2}}{8}(\frac{1}{\rho}\frac{\delta\rho}{\delta\phi(x)})^{2}]d^{3}\mathbf{x}\}\mathcal{D}\phi dt.\end{split} \tag{25}\]
Variation of \(I\) with respect to \(S\) gives the same continuity equation (10), while variation with respect to \(\rho\) leads to (see Appendix C)
\[\frac{\partial S}{\partial t}=-\int\{\frac{1}{2}(\frac{\delta S}{\delta\phi( x)})^{2}+V(\phi(x))-\frac{\hbar^{2}}{2R}\frac{\delta^{2}R}{\delta\phi^{2}(x)}\}d^{3} \mathbf{x}, \tag{26}\]
where \(R[\phi,t]=\sqrt{\rho[\phi,t]}\). The last term on the R.H.S. of (26) is the scalar field analogue of Bohm's quantum potential [48]. In non-relativistic quantum mechanics, Bohm's potential is considered responsible for the non-locality phenomenon in quantum mechanics [49]. Its origin is mysterious. Here we show that it originates from the information metrics related to relative entropy, \(I_{f}\).
Defining a complex functional \(\Psi[\phi,t]=R[\phi,t]e^{iS[\phi,t]/\hbar}\), the continuity equation and the extended Hamilton-Jacobi equation (26) can be combined into a single functional derivative equation (see Appendix C),
\[i\hbar\frac{\partial\Psi[\phi,t]}{\partial t}=\{\int[-\frac{\hbar^{2}}{2}\frac {\delta^{2}}{\delta\phi^{2}(x)}+V(\phi(x))]d^{3}\mathbf{x}\}\Psi[\phi,t]. \tag{27}\]
This is the Schrodinger equation for the wave functional \(\Psi[\phi,t]\). It governs the evolution of wave functional \(\Psi[\phi,t]\) between hypersurfaces \(\Sigma_{t}\). The potential density in (27), for the massive scalar field, is given in (1) as \(V(\phi(x))=\frac{1}{2}([\nabla\phi(x)]^{2}+m^{2}[\phi(x)]^{2})\). But it can be generalized to be
\[\begin{split} V(\phi(x))=&\frac{1}{2}[\nabla\phi(x )]^{2}+\frac{m^{2}}{2}[\phi(x)]^{2}\\ &+\lambda[\phi(x)]^{3}+\lambda^{\prime}[\phi(x)]^{4}+\ldots\end{split} \tag{28}\]
where the coefficients \(\lambda\), \(\lambda^{\prime}\), represent mass and other coupling constants. Once the Schrodinger equation for the wave functional \(\Psi[\phi,t]\) is obtained, other standard results follow, such as the solutions for the wave functional and the energy of the ground state and excited state [37].
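As one such standard result, the ground state of the free field can be written down explicitly once the theory is regularized on a spatial lattice: (27) then reduces to coupled harmonic oscillators, and the ground-state wave functional is the Gaussian \(\Psi_{0}[\phi]\propto e^{-\frac{1}{2}\phi^{T}\Omega\phi}\) with \(\Omega=\sqrt{-\nabla^{2}+m^{2}}\). A minimal sketch on a 1D periodic lattice (unit spacing, \(\hbar=1\); parameters illustrative, not taken from the paper itself):

```python
import numpy as np

# Free massive scalar field on a 1D periodic lattice: the Schrodinger functional
# equation reduces to coupled oscillators with kernel K = -Laplacian + m^2.
# Ground state: Psi_0[phi] ~ exp(-phi^T Omega phi / 2),  Omega = sqrt(K),
# with energy E_0 = (1/2) Tr Omega = (1/2) sum_k sqrt(m^2 + 4 sin^2(pi k / N)).
N, m = 16, 0.5
eye = np.eye(N)
lap = np.roll(eye, 1, axis=1) + np.roll(eye, -1, axis=1) - 2.0 * eye
K = -lap + m**2 * eye

evals, evecs = np.linalg.eigh(K)
Omega = evecs @ np.diag(np.sqrt(evals)) @ evecs.T      # Gaussian kernel of Psi_0

E0_matrix = 0.5 * np.trace(Omega)
E0_modes = 0.5 * sum(np.sqrt(m**2 + 4.0 * np.sin(np.pi * k / N) ** 2) for k in range(N))
print(f"ground-state energy: {E0_matrix:.6f} (matrix)   {E0_modes:.6f} (mode sum)")
```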
In summary, by recursively applying the same least observability principle in two steps, we recover the uncertainty relation and the Schrodinger representations of the standard relativistic quantum theory of scalar field [37; 38]. In the first step, we analyze the dynamics of field fluctuations in a hypersurface \(\Sigma_{t}\) for a short period of time interval \(\Delta t\), and obtain the transitional probability density due to field fluctuations; In the second step, we apply the principle for a cumulative time period to obtain the dynamics laws that govern the evolutions of \(\rho\) and \(S\) between the hypersurfaces. The applicability of the same principle in both steps shows the consistency and simplicity of the theory, although the forms of Lagrangian density are different in each step. In the first step, the Lagrangian density \(\mathcal{L}\) is given by (1), while in the second step, we use a different form of Lagrangian density \(\mathcal{L}^{\prime}=\rho(\partial S/\partial t+H)\). As shown in Appendix A, \(\mathcal{L}\) and \(\mathcal{L}^{\prime}\) are related through an extended canonical transformation. The choice of Lagrangian \(\mathcal{L}\) or \(\mathcal{L}^{\prime}\) does not affect the form of Legendre's equations. We choose \(\mathcal{L}^{\prime}\) as the Lagrangian density in the second step in order to use the pair of functional \((\rho,S)\) in the subsequent variation procedure.
## V The generalized Schrodinger equation for the wave functional
As mentioned earlier, by relaxing the definition of the information metrics \(I_{f}\), one can generalize the Schrodinger equation for the wave functional. The term \(I_{f}\) is supposed to capture the additional distinguishability exhibited by the field fluctuations, and is defined in (22) as the summation of the expectation values of Kullback-Leibler divergence between \(\rho[\phi,t]\) and \(\rho[\phi+\omega,t]\). However, there are more generic definitions of relative entropy, such as the Renyi divergence [50; 52]. From an information theoretic point of view, it is legitimate to consider alternative definitions of relative entropy. Suppose we define \(I_{f}\) based on Renyi divergence,
\[\begin{split} I_{f}^{\alpha}&:=\sum_{i=0}^{N-1} \langle D_{R}^{\alpha}(\rho[\phi,t_{i}]||\rho[\phi+\omega,t_{i}])\rangle_{ \omega}\\ &=\sum_{i=0}^{N-1}\int\mathcal{D}\omega p[\omega]\frac{1}{\alpha-1} ln(\int\mathcal{D}\phi\frac{\rho^{\alpha}[\phi,t_{i}]}{\rho^{\alpha-1}[\phi+\omega,t_{i}]}). \end{split} \tag{29}\]
Parameter \(\alpha\in(0,1)\cup(1,\infty)\) is called the order of Renyi divergence. When \(\alpha\to 1\), \(I_{f}^{\alpha}\) converges to \(I_{f}\) as defined in (22). In Appendix D, we show that using \(I_{f}^{\alpha}\) and following the same variation principle, we arrive at a similar extended Hamilton-Jacobi equation as (26),
\[\begin{split}\frac{\partial S}{\partial t}=-\int\{\frac{1}{2}( \frac{\delta S}{\delta\phi(x)})^{2}+V(\phi(x))-\frac{\alpha\hbar^{2}}{2R}\frac{ \delta^{2}R}{\delta\phi^{2}(x)}\}d^{3}\mathbf{x},\end{split} \tag{31}\]
with an additional coefficient \(\alpha\) appearing in the Bohm's quantum potential term. Defining a complex functional \(\Psi_{\alpha}[\phi,t]=R[\phi,t]e^{iS[\phi,t]/(\sqrt{\alpha}\hbar)}\), the continuity equation and
the extended Hamilton-Jacobi equation (31) can be combined into an equation similar to the Schrodinger equation (see Appendix D),
\[i\sqrt{\alpha}\hbar\frac{\partial\Psi_{\alpha}[\phi,t]}{\partial t}=\{\int[- \frac{\alpha\hbar^{2}}{2}\frac{\delta^{2}}{\delta\phi^{2}(x)}+V(\phi(x))]d^{3} \mathbf{x}\}\Psi_{\alpha}[\phi,t]. \tag{32}\]
When \(\alpha=1\), the regular Schrodinger equation of wave functional (27) is recovered, as expected. Equation (32) gives a family of linear equations for each order of Renyi divergence.
As observed in Appendix D, if we define \(\hbar_{\alpha}=\sqrt{\alpha}\hbar\), then \(\Psi_{\alpha}[\phi,t]=R[\phi,t]e^{iS[\phi,t]/\hbar_{\alpha}}\), and (32) takes the same form as the regular Schrodinger equation (27) but with \(\hbar\) replaced by \(\hbar_{\alpha}\). It is as if there is an intrinsic relation between the order of Renyi divergence \(\alpha\) and the Planck constant \(\hbar\). This remains to be investigated further. On the other hand, if the wavefunction is defined as usual without the factor \(\sqrt{\alpha}\), \(\Psi[\phi,t]=R[\phi,t]e^{iS[\phi,t]/\hbar}\), it will result in a nonlinear Schrodinger equation for the wave functional. This implies that the linearity of the Schrodinger equation depends on how the wave functional is defined from the pair of real functionals \((\rho,S)\).
We also want to point out that \(I_{f}^{\alpha}\) can be defined using Tsallis divergence [51; 53] as well, instead of using the Renyi divergence,
\[\begin{split} I_{f}^{\alpha}&:=\sum_{i=0}^{N-1} \langle D_{T}^{\alpha}(\rho[\phi,t_{i}]||\rho[\phi+\omega,t_{i}])\rangle_{ \omega}\\ &=\sum_{i=0}^{N-1}\int\mathcal{D}\omega p[\omega]\frac{1}{\alpha -1}\int\mathcal{D}\phi\{\frac{\rho^{\alpha}[\phi,t_{i}]}{\rho^{\alpha-1}[\phi +\omega,t_{i}]}-1\}.\end{split} \tag{33}\]
When \(\Delta t\to 0\), it can be shown that the \(I_{f}^{\alpha}\) defined above converges to the same form as (42). Hence it results in the same generalized Schrodinger equation (32).
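The \(\alpha\to 1\) limits quoted above are straightforward to check numerically for ordinary (finite-dimensional) probability distributions. A small sketch with toy discrete distributions (illustrative only):

```python
import numpy as np

# Check that the Renyi divergence (29) and the Tsallis divergence (33) both reduce
# to the Kullback-Leibler divergence as alpha -> 1, for toy discrete distributions.
rng = np.random.default_rng(2)
p = rng.random(10); p /= p.sum()
q = rng.random(10); q /= q.sum()

kl = float(np.sum(p * np.log(p / q)))
for alpha in [1.5, 1.1, 1.01, 1.001]:
    s = np.sum(p**alpha / q**(alpha - 1.0))
    renyi = np.log(s) / (alpha - 1.0)
    tsallis = (s - 1.0) / (alpha - 1.0)
    print(f"alpha={alpha:6.3f}  Renyi={renyi:.6f}  Tsallis={tsallis:.6f}  KL={kl:.6f}")
```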
## VI Discussion and Conclusions
### Alternative Formulation of the Least Observability Principle
Rewriting Eq.(11) as \(J=S_{c}+(\hbar/2)I_{f}\), and performing the same variation procedure, will give the same Schrodinger equation of the wave functional for the scalar field. However, the physical interpretation is different. One would need to consider \((\hbar/2)I_{f}\) as additional action due to field fluctuations. In other words, one would compute the amount of distinguishability information \(I_{f}\) first, then apply Assumption 2 to convert the information quantity into an action quantity. It is mathematically equivalent to Eq. (11). But the question is how such action effort is physically realized. To answer this question, one requires a physical model for the field fluctuations at the sub-quantum level. This is challenging and beyond the scope of this paper.
Alternatively, we can interpret the least observability principle based on Eq. (11) as minimizing \(I_{f}\) with the constraint of \(S_{c}\) being a constant, and \(\hbar/2\) simply being a Lagrangian multiplier for such a constraint. Again, mathematically, it is an equivalent formulation. In that case, Assumption 2 is not needed. Instead it will be replaced by the assumption that the classical action functional \(S_{c}\) is a constant with respect to variations on \(\rho\) and \(S\). But such an assumption needs sound justification. Which assumption to use depends on which choice is more physically intuitive. We believe that the least observability principle based on Assumption 2, where the Planck constant defines the discrete unit of action effort to exhibit observable information, gives more intuitive physical meaning of the mathematical formulation and without the need of a physical model for the field fluctuations.
### Comparisons with Relevant Research Works
The Schrodinger equation for the wave functional of scalar fields is typically introduced as a postulate [37; 38] instead of derived from a first principle. Recent attempts to derive it from the entropic dynamics approach can be found in Refs. [41; 42]. The entropic dynamics approach bears some similarity with the theory presented in this work. For instance, the formulations are carried out in two steps, an infinitesimal time step and a cumulative time period. It also aims to derive the physical dynamics by extremizing an information quantity such as the relative entropy. However, the entropic dynamics approach relies on another postulate on energy conservation to complete the derivation of the Schrodinger equation. The theory presented in this paper, on the other hand, has the advantage of simplicity, since it recursively applies the same least observability principle in both the infinitesimal time step and the cumulative time period. The entropic dynamics approach also requires several seemingly arbitrary constants in its formulations, while we only need the Planck constant \(\hbar\), whose meaning is clearly given in Assumption 2. We clearly show that the Bohm potential term in (26) originates from the information metrics of field fluctuations \(I_{f}\), while Refs. [41; 42] justify it from the information geometry perspective. The advantages of our approach are twofold. First, it is conceptually much clearer to define \(I_{f}\) as the expectation value of the relative entropy between the probability distributions that differ due to field fluctuations; there is a clear physical meaning associated with \(I_{f}\). Second, we show that by using the general definition of relative entropy for \(I_{f}\) we obtain the generalized Schrodinger equation, which is unclear using the information geometry justification.
The derivation of the Schrodinger equation in Section IV.2 starts from (9), which is inspired by its non-relativistic version initially proposed by Hall and Reginatto [45; 46]. Ref. [36] gives a rigorous justification of the non-relativistic version of (9) using the canonical transformation method. In Appendix A, we extend the canonical transformation method to scalar fields and prove (9). Hall and Reginatto [45; 46] only show the formulations in the non-relativistic setting. Even in the non-relativistic formulations, Hall and Reginatto assume a so-called exact uncertainty relation, while in our theory the exact uncertainty relation is derived from the same least observability principle in an infinitesimal time step.
### Limitations and Future Research
Assumption 1' makes minimal assumptions on the field fluctuations, but does not provide a more concrete physical model for the field fluctuations. The underlying physics for the field fluctuations is expected to be complex but crucial for a deeper understanding of quantum field theory. It is beyond the scope of this paper. The intention here is to minimize the assumptions that are needed to derive the Schrodinger equation for the wave functional, so that future research can just focus on justifying these assumptions.
As shown in the appendix, the infinite-dimensional integration over the field variable \(\phi(\mathbf{x})\) is approximated as an \(N\)-dimensional integral, and then the limit \(N\to\infty\) is taken. This essentially assumes a uniform Lebesgue measure. There are arguments that a probabilistic integration measure is needed to ensure consistency between the Fock representation and the Schrodinger representation [39]. A more rigorous mathematical treatment of infinite-dimensional integration is desirable. We also assume that the probability density \(\rho[\phi]\) and its first-order functional derivative approach zero when \(|\phi|\to\infty\). These assumptions are intuitive and give the correct results, but it is valuable to seek stronger justifications.
The formulations presented in this paper are based on flat Minkowski spacetime. We expect it is possible to extend the formulations to curved spacetimes and derive the Schrodinger equation in curved spacetime. Furthermore, it would be interesting to investigate whether the least observability principle can be applied to non-scalar fields, such as fermion matter fields, whose equation of motion is the Dirac equation.
### Conclusions
The least observability principle, which was initially proposed to derive non-relativistic quantum theory [36], is applied here to the scalar field theory. We successfully obtain the Schrodinger equation for the wave functional of the scalar field using the mathematical framework based on the principle. The Schrodinger equation of the wave functional is the fundamental equation for quantum scalar field theory in the Schrodinger picture, and it is typically introduced as a postulate. Here we derive it from a first principle. The Schrodinger equation enables one to calculate other standard results for scalar fields, such as the solutions for the wave functional and the energies of the ground state and excited states [37; 38].
The least observability principle illustrates how classical field theory becomes quantum field theory from the information perspective. These are captured in the two assumptions stated in Section III. Assumption 2 points out that the Planck constant defines the discrete unit of action that a field configuration needs to exhibit in its dynamics in order to be observable. Classical field theory corresponds to the limit in which such a lower bound on the discrete action effort is approximated as zero. Assumption 1' demands new information metrics for the additional degree of observability exhibited by vacuum fluctuations. These metrics are defined in terms of relative entropy to measure the information distances between the different probability distributions caused by field fluctuations. To derive quantum theory, the least observability principle seeks to minimize the observability from both the classical field dynamics and the additional field fluctuations. Nature appears to behave in a most economic fashion and exhibits as little observable information as possible. Furthermore, defining the information metrics \(I_{f}\) using the Renyi divergence in the least observability principle leads to a generalized Schrodinger equation (32) that depends on the order of the Renyi divergence. At this point we cannot yet conceive of physical scenarios in which the generalized Schrodinger equation for the wave functional with \(\alpha\neq 1\) is applicable. However, the generalized Schrodinger equation is legitimate from an information perspective. It confirms that the mathematical framework based on the least observability principle can produce new results.
The work in Ref. [36] and this paper show that the least observability principle can be applied to derive both non-relativistic quantum mechanics and relativistic quantum scalar field theory, demonstrating the versatility of the framework based on the principle. Extending the present work to scalar fields in curved spacetime is highly feasible. It is also reasonable to speculate that the principle can be applied to obtain the quantum theory for non-scalar fields such as fermion matter fields, though this can be much more challenging since the structure of the Lagrangian density for non-scalar fields is more complicated.
Lastly, the least observability principle also has interesting implications for the interpretational aspects of quantum mechanics, including new insights on quantum entanglement, which will be reported separately.
## Data Availability Statement
The data that support the findings of this study are available within the article. |
2301.13438 | Hopf-Rinow Theorem of sub-Finslerian geometry | The sub-Finslerian geometry means that the metric $F$ is defined only on a
given subbundle of the tangent bundle, called a horizontal bundle. In the
paper, a version of the Hopf-Rinow theorem is proved in the case of
sub-Finslerian manifolds, which relates the properties of completeness,
geodesically completeness, and compactness. The sub-Finsler bundle, the
exponential map and the Legendre transformation are deeply involved in this
investigation. | Layth M. Alabdulsada, Laszlo Kozma | 2023-01-31T06:25:47Z | http://arxiv.org/abs/2301.13438v1 | # Hopf-Rinow Theorem of sub-Finslerian geometry
###### Abstract.
The sub-Finslerian geometry means that the metric \(F\) is defined only on a given subbundle of the tangent bundle, called a horizontal bundle. In the paper, a version of the Hopf-Rinow theorem is proved in the case of sub-Finslerian manifolds, which relates the properties of completeness, geodesically completeness, and compactness. The sub-Finsler bundle, the exponential map and the Legendre transformation are deeply involved in this investigation.
Key words and phrases: sub-Finslerian geometry; sub-Hamiltonian geometry; Legendre transformation; sub-Finsler bundle; normal geodesics; exponential map; Hopf-Rinow theorem
## 1. Introduction

Let \(M\) be an \(n\)-dimensional connected manifold.

From now on we suppose that \(\mathcal{D}\subset TM\), with the natural projection \(\sigma:\mathcal{D}\to M\), is a given distribution, and we consider a field of cotangent \(s\)-planes \(\mathcal{D}^{*}\subset\)
\(T^{*}M\). Such a field of cotangent \(s\)-planes is spanned locally by \(s\) pointwise linearly independent smooth differential \(1\)-forms, namely,
\[\mathcal{D}_{x}^{*}=\operatorname{span}\{\alpha_{1}(x),\dots,\alpha_{s}(x)\}, \qquad\alpha_{i}(x)\in\mathfrak{X}^{*}(M).\]
In addition, we refer to \(\mathcal{D}_{x}^{0}\) as the annihilator of the distribution \(\mathcal{D}\) (isomorphic to \(\mathcal{D}\)), of rank \(n-k\), which is the set of all covectors that annihilate the vectors in \(\mathcal{D}_{x}\), i.e.
\[\mathcal{D}_{x}^{0}=\{\alpha\in T_{x}^{*}M:\alpha(v)=0\ \forall\ v\in\mathcal{D}_{x}\}. \tag{1}\]
In [2], we introduced the Legendre transformation of sub-Finsler geometry. Let us briefly recall it:
The _sub-Lagrange function_ determined by \(F\) is given by \(L=\frac{1}{2}F^{2}\). The fiber derivative of \(L\) defines the map
\[\mathcal{L}_{L}:\mathcal{D}\to\mathcal{D}^{*},\qquad\mathcal{L}_{L}(v)(w)=\frac{d}{dt}\Big|_{t=0}L_{x}(v+tw),\ \text{where}\ v,w\in\mathcal{D}_{x},\]
called the _Legendre transformation_ of \((M,\mathcal{D},F)\).
We denote by \((x^{i})\) the coordinate in a neighborhood \(U\subset M\) with \((x^{i},v^{a})\) in \(\mathcal{D}|_{U}\subset TM\), and \((x^{i},p_{a})\) in \(\mathcal{D}^{*}|_{U}\subset T^{*}M\), respectively, where \(i=1,\dots,n,\ a=1,\dots,k\). Then the relation of the distribution \(\mathcal{D}\) of the tangent bundle and the distribution \(\mathcal{D}^{*}\) of the cotangent bundle is given by the Legendre transformation in local coordinates as follows
\[\mathcal{L}_{L}(x^{i},v^{a})=(x^{i},\frac{\partial L}{\partial v^{a}}).\]
Then the _sub-Hamiltonian_ is given by
\[H:\mathcal{D}^{*}\to\mathbb{R},\qquad H=\iota_{\mathcal{L}_{L}^{-1}}-L\circ\mathcal{L}_{L}^{-1},\]
where \(\iota_{v}(p)=\langle v,p\rangle=p(v)\) for any \(v=\mathcal{L}_{L}^{-1}(p)\in\mathcal{D}\) and \(p\in\mathcal{D}^{*}\). Moreover, it is locally given by
\[H(x^{i},p_{a})=v^{a}p_{a}-L(x^{i},v^{a}),\ \text{where}\ p_{a}=\frac{ \partial L}{\partial v^{a}}.\]
Secondly, using the fiber derivative of \(H\), we define the Legendre transformation of the sub-Hamiltonian \(H\) in the following way:
For any \(p,q\in\mathcal{D}_{x}^{*}\), it holds
\[q(\mathcal{L}_{H}(p))=\frac{d}{dt}\Big|_{t=0}H(x,p+tq).\]
This locally relates the distribution \(\mathcal{D}^{*}\) of the cotangent bundle and the distribution \(\mathcal{D}\) of the tangent bundle according to the next expression:
\[\mathcal{L}_{H}(x^{i},p_{a})=(x^{i},\frac{\partial H}{\partial p_{a}}).\]
Naturally, \(\mathcal{L}_{L}\) and \(\mathcal{L}_{H}\) are inverses of each other:
\[\mathcal{L}_{H}\circ\mathcal{L}_{L}=1_{\mathcal{D}}, \mathcal{L}_{L}\circ\mathcal{L}_{H}=1_{\mathcal{D}^{*}}.\]
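As a concrete illustration of these transformations (not taken from the paper), the following minimal sympy sketch computes \(\mathcal{L}_{L}\) and the resulting sub-Hamiltonian for a quadratic, Riemannian-type sub-Finsler metric on a rank-two distribution; the metric matrix `g` is an arbitrary illustrative choice, and the sketch simply verifies that \(H\) takes the expected quadratic form \(\frac{1}{2}p^{T}g^{-1}p\).

```python
import sympy as sp

# Illustrative check of the Legendre transformation for a quadratic
# (Riemannian-type) sub-Lagrangian on a rank-2 distribution D_x.
v1, v2, p1, p2 = sp.symbols('v1 v2 p1 p2', real=True)
g = sp.Matrix([[2, 1], [1, 3]])           # a sample positive-definite metric on D_x
v = sp.Matrix([v1, v2])

L = sp.Rational(1, 2) * (v.T * g * v)[0]  # sub-Lagrangian L = F^2 / 2

# Legendre transformation: p_a = dL/dv^a
p_of_v = sp.Matrix([sp.diff(L, v1), sp.diff(L, v2)])

# Invert to express v in terms of p, then H = <v, p> - L
sol = sp.solve([sp.Eq(p1, p_of_v[0]), sp.Eq(p2, p_of_v[1])], [v1, v2], dict=True)[0]
H = sp.simplify(p1 * sol[v1] + p2 * sol[v2] - L.subs(sol))

# H should equal (1/2) p^T g^{-1} p, i.e. the quadratic sub-Hamiltonian
H_expected = sp.Rational(1, 2) * (sp.Matrix([p1, p2]).T * g.inv() * sp.Matrix([p1, p2]))[0]
print(sp.simplify(H - H_expected))        # -> 0
```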
On the other hand, for every \(p\in\mathcal{D}^{*}_{x}\), one can define the sub-Finsler metric \(F^{*}\) on \(\widetilde{\mathcal{D}}^{*}\sim T^{*}M\setminus\mathcal{D}^{0}\) with the help of the indicatrix \(I_{x}\) as follows:
\[F^{*}_{x}(p):=\sup_{w\in I_{x}}\ p(w)=\sup_{0\neq v\in\mathcal{D}_{x}}\ p[ \frac{v}{F_{x}(v)}].\]
Observe that \(\widetilde{\mathcal{D}}^{*}\) is the subbundle of the cotangent bundle obtained by removing the zero cotangent vector from each fibre. In fact, \(F^{*}\) turns out to satisfy the same properties as those mentioned in Definition 1, but on \(\mathcal{D}^{*}\) instead of \(\mathcal{D}\). Then
\[F^{*}(p)=F(v),\ \text{where}\ p=\mathcal{L}_{L}(v),\quad\text{and}\quad H:= \frac{1}{2}(F^{*})^{2},\]
see details in [5].
## 4. Sub-Finsler bundle
In this section we define the sub-Finsler vector bundle, which will play a major role in the formalization of the sub-Hamiltonian in sub-Finsler geometry. Let us first consider the covector subbundle \((\mathcal{D}^{*},\tau,M)\) with the projection \(\tau:\mathcal{D}^{*}\to M\), which is a subbundle of rank \(k\) (\(=\dim\mathcal{D}^{*}\)) of the cotangent bundle \(T^{*}M\). A central role in our considerations is played by the pullback bundle \(\tau^{*}(\tau)=(\mathcal{D}^{*}\times\mathcal{D}^{*},\text{pr}_{1},\mathcal{D}^{*})\) of \(\tau\) by \(\tau\), defined as follows:
\[\mathcal{D}^{*}\times_{M}\mathcal{D}^{*}:=\{(p,q)\in\mathcal{D}^{*}\times \mathcal{D}^{*}|\ \ \tau(p)=\tau(q)\},\]
Throughout, we call the above pullback bundle the _sub-Finsler bundle_ over \(\mathcal{D}^{*}\). Now, if \(p\) is fixed, then
\[(\text{pr}_{1})^{-1}(p) =\{(p,q)\in\mathcal{D}^{*}\times\mathcal{D}^{*}|\ \ q\in\mathcal{D}^{*}_{\tau(q)}\}\] \[=\{p\}\times\mathcal{D}^{*}_{\tau(p)},\]
is a fiber of the sub-Finsler bundle over \(p\in\mathcal{D}^{*}\).
We can introduce a Riemannian metric \(g^{*}\) on the sub-Finsler vector bundle induced by the sub-Hamiltonian \(H\) as follows:
\[\langle q,r\rangle_{p}=g^{*}_{p}(q,r):=\frac{\partial^{2}H(p+tq+sr)}{\partial t \partial s}|_{t,s=0}\qquad\text{for all}\ q,r\in\mathcal{D}^{*}_{\tau(p)},\]
which locally means
\[g^{*ij}=\frac{\partial^{2}H}{\partial p_{i}\partial p_{j}}.\]
Now the sub-Finsler bundle \(\tau^{*}(\tau)\) admits \(k\) covector fields \(X_{1},X_{2},\dots,X_{k}\) which form an orthonormal frame with respect to the induced Riemannian metric \(g^{*}\).
Notice that \(X_{i}(p)\) is a covector field that depends on the position \(x\in M\) and the direction \(p\in\mathcal{D}^{*}\). Moreover, one can choose it in such a way that \(X_{i}(p)\) is homogeneous of degree zero in \(p\), i.e. \(X_{i}(tp)=t^{0}X_{i}(p)=X_{i}(p)\). Using the above metric \(g^{*ij}\) on \(M\), which is homogeneous of degree zero, we can give a new formulation of the sub-Hamiltonian function in the components \(p_{i}\) (induced naturally by the inner product, see [6]):
\[H(x,p)=\frac{1}{2}\sum_{i,j=1}^{n}g^{*ij}p_{i}p_{j}, \tag{2}\]
where this metric is defined via the extended Finsler metric introduced in [2]. We can write the sub-Hamiltonian function (2) in a more useful way using the orthonormality of the \(X_{i}\) as follows
\[H(x,p)=\frac{1}{2}\sum_{i=1}^{k}\langle p,X_{i}(p)\rangle^{2},\qquad p\in \mathcal{D}_{x}^{*}. \tag{3}\]
One can easily check the homogeneity of degree 2 in \(p\) of the sub-Hamiltonian function \(H(x,p)\):
\[H(x,tp)=\frac{1}{2}\sum_{i=1}^{k}\langle tp,X_{i}(tp)\rangle^{2}=\frac{t^{2}}{ 2}\sum_{i=1}^{k}\langle p,X_{i}(p)\rangle^{2}=t^{2}H(x,p). \tag{4}\]
The importance of \(H(x,p)\) lies in defining sub-Finslerian geodesics. Since it is a smooth function on \(\mathcal{D}^{*}\), our function \(H(x,p)\) produces a system of sub-Hamiltonian differential equations. Such differential equations are written in terms of the canonical coordinates \((x^{i},p_{i})\).
**Definition 4**.: The generated sub-Hamiltonian differential equations
\[\dot{x}^{i} =\frac{\partial H}{\partial p_{i}}(x,p),\] \[\dot{p}_{i} =-\frac{\partial H}{\partial x^{i}}(x,p),\quad i=1,\ldots,n,\]
are called _normal geodesic equations_.
**Lemma 5**.: _If \(\xi(t):=(x(t),p(t))\) is a solution of the sub-Hamiltonian system for all \(t\in\mathbb{R}\), then there exists a constant \(c\in\mathbb{R}\) such that \(H(x(t),p(t))=c\)._
Proof.: Taking the derivative of \(H(x(t),p(t))\) w.r.t. \(t\), we get
\[\frac{d}{dt}H(x(t),p(t))=\frac{\partial H}{\partial x^{i}}(x(t),p(t))\dot{x}^{i}(t)+\frac{\partial H}{\partial p_{i}}(x(t),p(t))\dot{p}_{i}(t).\]
Replacing \(\dot{x}(t)\) and \(\dot{p}(t)\) by the above sub-Hamiltonian differential equations in the Definition 4, we obtain
\[\frac{d}{dt}H(x(t),p(t))=\frac{\partial H}{\partial x^{i}}(x(t),p(t))\frac{\partial H}{\partial p_{i}}(x(t),p(t))-\frac{\partial H}{\partial p_{i}}(x(t),p(t))\frac{\partial H}{\partial x^{i}}(x(t),p(t))=0.\]
Therefore \(H(x(t),p(t))\) is constant.
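As an illustration of Definition 4 and Lemma 5 (not taken from the paper), the following minimal Python sketch integrates the normal geodesic equations for the Heisenberg group, a standard bracket-generating example equipped with a quadratic, Riemannian-type sub-Finsler metric, and checks numerically that \(H\) is conserved along the solution; the frame, the initial covector, and the integration tolerances are illustrative choices.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Heisenberg group example: X1 = d/dx - (y/2) d/dz, X2 = d/dy + (x/2) d/dz,
# H(x, p) = 1/2 [(p_x - (y/2) p_z)^2 + (p_y + (x/2) p_z)^2].

def hamiltonian(state):
    x, y, z, px, py, pz = state
    h1 = px - 0.5 * y * pz
    h2 = py + 0.5 * x * pz
    return 0.5 * (h1**2 + h2**2)

def normal_geodesic_rhs(t, state):
    x, y, z, px, py, pz = state
    h1 = px - 0.5 * y * pz
    h2 = py + 0.5 * x * pz
    return [h1,                             # dx/dt  =  dH/dpx
            h2,                             # dy/dt  =  dH/dpy
            -0.5 * y * h1 + 0.5 * x * h2,   # dz/dt  =  dH/dpz
            -0.5 * pz * h2,                 # dpx/dt = -dH/dx
            0.5 * pz * h1,                  # dpy/dt = -dH/dy
            0.0]                            # dpz/dt = -dH/dz = 0

state0 = [0.0, 0.0, 0.0, 1.0, 0.0, 2.0]     # initial point and initial covector
sol = solve_ivp(normal_geodesic_rhs, (0.0, 5.0), state0,
                dense_output=True, rtol=1e-9, atol=1e-12)

# Lemma 5: H is constant along the solution of the sub-Hamiltonian system.
H_values = [hamiltonian(sol.sol(t)) for t in np.linspace(0.0, 5.0, 11)]
print(max(H_values) - min(H_values))        # numerically ~ 0
```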
**Remark 2**.: From Lemma 5, it follows that any solution \(\xi(t):=(x(t),p(t))\) of the sub-Hamiltonian differential equations on \(\mathcal{D}^{*}\) for a sub-Hamiltonian function \(H(p)\) satisfies \(H(x(t),p(t))=c\). Consider the projection \(x(t)=\tau(\xi(t))\in M\); then each sufficiently short subarc of \(x(t)\) is a minimizing sub-Finslerian geodesic (see [11, Corollary 2.2]). In addition, this subarc is the unique minimizer joining its end points.
The projection curve \(x(t)\) mentioned above is said to be the _normal sub-Finslerian geodesics_ or simply the _normal geodesics_.
**Remark 3**.: In sub-Finslerian geometry, not all sub-Finslerian geodesics are normal (contrary to Finsler geometry). This is due to the fact that a sub-Finslerian geodesic, even though it is a minimizing geodesic, might not solve the sub-Hamiltonian system. Those minimizers that are not normal geodesics are called _singular_ or _abnormal_ geodesics (see [9] for more details).
Moreover, we call the extremal pair \(\xi(t)=(x(t),p(t))\) a _normal extremal_ if it is a solution of the sub-Hamiltonian system; otherwise it is called an _abnormal extremal_.
Turning to the relationship between normal geodesics and locally length-minimizing horizontal curves, Calin et al. proved in [6] that any normal geodesic is a horizontal curve and a locally length-minimizing horizontal curve. Finally, by using (3) one can write the system of differential equations in terms of the canonical coordinates \((x,p)\) as follows:
\[\dot{x}^{i}=\frac{\partial H}{\partial p_{i}}=\sum_{j=1}^{k}\langle p,X_{j}(p )\rangle\ (\delta_{i}(X_{j}(p))+\langle p,D_{p_{i}}X_{j}(p)\rangle), \tag{5}\]
\[\dot{p}_{i}=-\frac{\partial H}{\partial x^{i}}=-\sum_{j=1}^{k}\langle p,X_{j} (p)\rangle\langle p,D_{x^{i}}X_{j}(p)\rangle, \tag{6}\]
where \(\delta_{i}\) is the \(i\)-th coordinate function.
## 5. Exponential map in sub-Finsler geometry
Let \((M,d)\) be a general metric space, where \(M\) is an \(n\)-dimensional manifold and the function \(d:M\times M\to\mathbb{R}^{+}\cup\{\infty\}\) is called a metric if it has the following properties: for all \(x,y,z\in M\),

1. \(d(x,y)\geq 0\), with equality if and only if \(x=y\);

2. \(d(x,z)\leq d(x,y)+d(y,z)\).
Since the function \(d\) may be asymmetric, we can define the forward metric balls and forward metric spheres, with center \(x\in M\) and radius \(r>0\), as follows:
\[B_{x}(r)=\{\ y\in M:\ d(x,y)<r\},\]
\[S_{x}(r)=\{\ y\in M:\ d(x,y)=r\}.\]
The cotangent balls and the cotangent spheres in \(\mathcal{D}^{*}\) are defined as follows:
\[\mathcal{B}^{*}_{x}(r)=\{\ p\in\mathcal{D}^{*}:\ F^{*}_{x}(p)<r\},\]
\[\mathcal{S}^{*}_{x}(r)=\{\ p\in\mathcal{D}^{*}:\ F^{*}_{x}(p)=r\},\]
for any fix \(x\in M\) and radius \(r\).
A subset \(U\subset M\) is said to be open if, for each point \(x\in U\), there is a forward metric ball about \(x\) contained in \(U\). In this way we obtain a topology on \(M\), and all such metric spaces are first countable \(T_{1}\)-spaces. In general, we assume that the metric \(d\) of any metric space \((M,d)\) is continuous with respect to the product topology on \(M\times M\). Thus, every backward metric ball, i.e. \(B^{-}_{x}(r)=\{\ y\in M:\ d(y,x)<r\}\), is open and the metric space is a Hausdorff (\(T_{2}\)) space. Hence the compact sets in such a space are closed.
As a result of the above, we immediately have the following
**Proposition 6**.: _In a metric space \((M,d)\) the following are equivalent:_
1. _A sequence_ \(\{x_{k}\}\) _in_ \((M,d)\) _converges to_ \(x\in M\) _in the sense of topology._
2. \(\lim_{k\to\infty}d(x,x_{k})=0\)
**Proposition 7**.: _Let \(x\) be any point in a (reversible) sub-Finslerian manifold \(M\), and suppose that \(\bar{B}_{x}(r)\) is a compact ball for some \(r>0\). Then for any \(y\in B_{x}(r)\) there is a minimizing geodesic from \(x\) to \(y\), that is, a horizontal curve \(\gamma\) joining \(x\) and \(y\) with \(\ell(\gamma)=d(x,y)\)._
Proof.: Fix \(y\in B_{x}(r)\) and suppose that \(\{\gamma_{k}\}\) is a minimizing sequence of horizontal paths with unit speed from \(x\) to \(y\) such that
\[\lim_{k\to\infty}\gamma_{k}(0)=x,\quad\lim_{k\to\infty}\gamma_{k}(T)=y,\quad \lim_{k\to\infty}\ell(\gamma_{k})=d(x,y).\]
Since \(d(x,y)<r\), we get \(\ell(\gamma_{k})\leq r\) for all \(k\geq k_{0}\) large enough. Proposition 6 asserts that the metric \(d\) is continuous under the topology of the manifold, and the reversibility of \(F\) holds on a compact set. Consequently, any sequence \(\gamma_{k}\) of curves with uniformly bounded lengths has a uniformly convergent subsequence (Ascoli--Arzela theorem), converging to a Lipschitz curve; we denote this subsequence by the same symbol.
From the above one can assume that \(\{\gamma_{k}\}\) is a convergent subsequence of length minimizers parametrized by arc length (i.e. \(F(\dot{\gamma}_{k}(t))=1\)) on \(M\) such that \(\gamma_{k}\to\gamma\) uniformly on \([0,T]\); this is due to the claim that each \(\gamma_{k}\) is a minimizing geodesic. The sequence \(\gamma_{k}\) converges uniformly if for every \(\epsilon>0\) there is a natural number \(N\) such that for all \(k\geq N\) and all \(t\in[0,T]\) one has \(d(\gamma_{k}(t),\gamma(t))<\epsilon.\) Further, the semicontinuity of the length implies that if \(\lim_{k\to\infty}\gamma_{k}=\gamma\), then
\[\ell(\gamma)\leq\liminf_{k\to\infty}\ell(\gamma_{k}).\]
Now, by continuity of the distance, we obtain
\[\ell(\gamma)\leq\liminf_{k\to\infty}\ell(\gamma_{k})=\liminf_{k\to\infty}d(\gamma_{k}(0),\gamma_{k}(T))=d(\gamma(0),\gamma(T)).\]
This yields that \(\gamma\) is a minimizing geodesic, i.e. \(\ell(\gamma)=d(x,y)\). The horizontal property of \(\gamma\) follows in the same way as in [1], Theorem 3.41.
Next, we define the exponential map. In the general case, roughly speaking, if \(M\) is a smooth Finsler manifold, \(x\) a point in \(M\) and \(u\in T_{x}M\), then the exponential map is given by

\[\exp_{x}:T_{x}M\to M,\qquad\exp_{x}(u)=\gamma(1),\]

for the unique geodesic \(\gamma\) that starts at \(x\) and has initial speed vector \(u\). Furthermore, in the dual space the exponential map for every \(x\in M\) and \(p\in T_{x}^{*}M\) is defined by

\[\exp_{x}^{*}:T_{x}^{*}M\to M,\qquad\exp_{x}^{*}(p)=\gamma(1),\]

for the unique geodesic \(\gamma\) that starts at \(x\) and has initial speed vector \(u=\mathcal{L}_{L}^{-1}(p)\), where \(L\) here is the Lagrangian of the Finsler manifold.
The exponential map is an essential object in sub-Finslerian geometry, parametrizing normal extremals through their initial covectors. We are going to define the exponential map on both distributions \(\mathcal{D}\) and \(\mathcal{D}^{*}\) of the tangent and cotangent bundles, respectively.
**Definition 8**.: Let \(\Omega_{x}\subset\mathcal{D}_{x}\) be the domain of the exponential map over \(x\in M\), where \(\Omega_{x}\) is given by
\[\Omega_{x}=\left\{v\in\mathcal{D}_{x}|\ \text{$\xi$ is defined on the interval $[0,1]$}\right\},\]
where \(v=\mathcal{L}_{H}(p)\) via the Legendre transformation of the sub-Hamiltonian \(H\), and \(\xi(t)\) is the normal extremal. Then the _sub-Finsler exponential map_ is defined as follows:

\[\exp_{x}:\Omega_{x}\subset\mathcal{D}_{x}\subset T_{x}M\to M,\ v\mapsto\tau(\xi(1)).\]
We can do the same in the distribution \(\mathcal{D}_{x}^{*}\). Let \(\Omega_{x}^{*}\subset\mathcal{D}_{x}^{*}\) be the domain of the exponential map over \(x\in M\), where \(\Omega_{x}^{*}\) is given by
\[\Omega_{x}^{*}=\left\{p\in\mathcal{D}_{x}^{*}|\ \text{$\xi$ is defined on the interval $[0,1]$}\right\}.\]
Consequently, the _sub-Hamiltonian exponential map_ is given by
\[\exp_{x}^{*}:\Omega_{x}^{*}\subset\mathcal{D}_{x}^{*}\subset T_{x}^{*}M\to M,\ p\mapsto\tau(\xi(1)),\]
where \(\xi(t)\) is the same normal extremal as above. The set \(\Omega_{x}^{*}\) contains the origin and is star-shaped with respect to \(0\). Moreover, with the help of the Legendre transformation it is fairly easy to see that
\[\exp_{x}(v)=\exp_{x}^{*}(p),\quad\text{where}\quad p=\mathcal{L}_{L}(v). \tag{7}\]
It follows that the normal sub-Finslerian geodesics \(x(t)=\tau(\xi(t))\) satisfies
\[x(t)=\exp_{x}^{*}(tp),\quad\text{for all $t\in[0,T]$}.\]
**Theorem 9**.: _The exponential mapping \(\exp_{x}^{*}\) is a local diffeomorphism on \(\mathcal{D}_{x}^{*}\subset T_{x}^{*}M\backslash\{0\}\)._
Proof.: In (4), we showed the homogeneity of the sub-Hamiltonian function \(H(x,p)\) with respect to \(p\). So, for any constant \(a>0\), the curve \(\xi(at):(-\epsilon/a,\epsilon/a)\to M\) is the same geodesic satisfying the initial conditions \(\tau(\xi_{p}(0))=x\) and \(\xi_{p}(0)=ap\), i.e.,
\[\tau(\xi_{p}(at))=\tau(\xi_{ap}(t)).\]
Since the sub-Hamiltonian vector field
\[\vec{H}(x,p)=g^{ab}(x,p)p_{b}\frac{\partial}{\partial x^{a}}-\frac{1}{2}\frac {\partial g^{ab}}{\partial x^{k}}(x,p)p_{a}p_{b}\frac{\partial}{\partial p_{k }},\]
which was introduced in [2], is smooth except at \(p=0\), where it is only \(C^{1}\), the map \(\exp_{x}^{*}\) is \(C^{\infty}\) on \(\mathcal{D}_{x}^{*}\subset T_{x}^{*}M\backslash\{0\}\), while it is \(C^{1}\) at \(p=0\) and \(d(\exp_{x}^{*})|_{0}=\text{id}\). Thus, \(\exp_{x}^{*}\) is a local diffeomorphism.
By equation (7), one can get the following
**Corollary 10**.: _The sub-Finsler exponential map \(\exp_{x}\) is \(C^{\infty}\) away from the zero section of \(\mathcal{D}\) and only \(C^{1}\) at the zero section, such that for each \(x\in M\) the differential \(d(\exp_{x})|_{0}\) is the identity map at the origin \(0\in\mathcal{D}_{x}\)._
**Remark 4**.: It is clear that in the case of sub-Finsler exponential map the following expressions holds:
\[\exp_{x}^{*}[\mathcal{B}_{x}^{*}(r)] =B_{x}(r),\] \[\exp_{x}^{*}[\mathcal{S}_{x}^{*}(r)] =S_{x}(r),\]
which are analogous to the Finslerian context, see Bao et al. [5] for more details.
**Remark 5**.: Turning to the sub-Riemannian case, Strichartz in [13] stated that for bracket-generating distributions the exponential map is a local diffeomorphism. This is due to the fact that the solutions of the sub-Hamiltonian system depend differentiably on the initial data. But in contrast with the Riemannian context, the exponential map is not a diffeomorphism at the origin, just like in the Finslerian case.
## 6. Hopf-Rinow Theorem in sub-Finslerian geometry
In the following, we explain the terms that will be used in the Hopf-Rinow theorem. A sub-Finsler manifold is said to be _forward complete_ if every forward Cauchy sequence converges, and it is _forward geodesically complete_ if every normal geodesic \(\gamma(t),t\in[0,T)\), parametrized to have unit speed, can be extended to a geodesic for all \(t\in[0,\infty).\) A subset is said to be _forward bounded_ if it is contained in some forward metric ball \(B_{x}(r).\)
**Theorem 11**.: _Let \((M,\mathcal{D},F)\) be any connected sub-Finsler manifold, where \(\mathcal{D}\) is bracket generating distribution. The following conditions are equivalent:_
1. _The metric space_ \((M,d)\) _is forward complete._
2. _The sub-Finsler manifold_ \((M,\mathcal{D},F)\) _is forward geodesically complete._
3. \(\Omega^{*}_{x}=\mathcal{D}^{*}_{x}\)_; additionally, the exponential map is onto if there are no strictly abnormal minimizers._
4. _Every closed and forward bounded subset of_ \((M,d)\) _is compact._
_Furthermore, for any \(x,y\in M\) there exists a minimizing geodesic \(\gamma\) joining \(x\) to \(y\), i.e. the length of this geodesic is equal to the distance between these points._
Proof.: (i) \(\Longrightarrow\) (ii) Let \(\gamma(t):[0,T)\longrightarrow M\) be a unit speed and maximally forward extended geodesic. If we assume that \(T\neq\infty\) and choose a sequence \(\{t_{i}\}\longrightarrow T\) in \([0,T)\), then \(\gamma(t_{i})\) is forward Cauchy, since
\[d(\gamma(t_{i}),\gamma(t_{j}))\leq|t_{j}-t_{i}|,\text{ for all }i\leq j.\]
Now, (i) makes it obvious that \(\gamma(t_{i})\) converges to some \(y\in M\). On one hand, let us define \(\gamma(T)\) to be \(y\). On the other hand, Lemma 4.1 in [13] tells us that \(\gamma(t)\) can then be extended beyond \(t=T\), contradicting the maximality of \(\gamma\). Thus \(T=\infty\), so we have forward geodesic completeness.
(ii) \(\Longrightarrow\) (iii) It is sufficient (for the first part, \(\Omega^{*}_{x}=\mathcal{D}^{*}_{x}\)) to prove that any normal extremal pair \(\xi(t)\), starting from the given initial conditions, is defined for all \(t\in\mathbb{R}\). Suppose that the normal extremal is defined on \([0,T)\) but is not extendable to any interval \([0,T+\delta)\), \(\delta>0\). Let \(\{t_{i}\}\) be any increasing sequence whose limit is \(T\). The projection \(x(t)=\tau(\xi(t))\) is a curve with unit speed defined on \([0,T)\); therefore, the sequence \(\{x(t_{i})\}\) is a forward Cauchy sequence in \(M\), since
\[d(x(t_{i}),x(t_{j}))\leq|t_{i}-t_{j}|.\]
By completeness, it follows that the sequence \(x(t_{i})\) converges to some point \(y\in M\). We suppose there are coordinates around the point \(y\) and an orthonormal frame \(X_{1},X_{2},...,X_{k}\) in a small ball \(\mathcal{B}^{*}_{y}(r)\) in the sub-Finsler bundle. Let us show that, in the coordinates \(\xi(t)=(x(t),p(t))\), the curve \(p(t)\) is uniformly bounded. This contradicts the assumption that the normal extremal is not extendable. In fact, for every
\(p\in\mathcal{D}^{*}\), we consider the following non-negative form (3) of the sub-Hamiltonian function \(H\):
\[H(x,p)=\frac{1}{2}\sum_{i=1}^{k}\langle p,X_{i}(p)\rangle^{2}.\]
Then, the sub-Hamiltonian system has the form:
\[\dot{x}^{i}(t)=\frac{\partial H}{\partial p_{i}}(x(t),p(t))=\sum_{j=1}^{k} \langle p(t),X_{j}(p(t))\rangle(\delta_{i}(X_{j}(p))+\langle p,D_{p_{i}}X_{j}(p )\rangle),\]
\[\dot{p}_{i}(t)=-\frac{\partial H}{\partial x^{i}}(x(t),p(t))=-\sum_{j=1}^{k} \langle p(t),X_{j}(p(t))\rangle\langle p(t),D_{x^{i}}X_{j}(p(t))\rangle,\]
for \(t\in[T-\delta,T)\) with \(\delta>0\) small enough. Since \(D_{\gamma(t)}X_{i}\) are given in a compact small ball \(\bar{\mathcal{B}}_{y}^{*}(r)\), they are bounded, so there is a constant \(\mathcal{C}>0\) such that
\[|\dot{p}(t)|\leq\mathcal{C}|p(t)|\quad\forall t\in[T-\delta,T).\]
Applying Gronwall's Lemma (see [12], p. 122) shows that \(|p(t)|\) is uniformly bounded on a bounded interval. This contradicts our assumption that the normal extremal cannot be extended beyond \(T\).
(iii) \(\Longrightarrow\) (iv) Assume that \(\bar{A}\) is a closed and forward bounded subset of \((M,d)\). Applying the bracket-generating assumption, for every \(y\in\bar{A}\), Proposition 7 asserts that there is a minimizing geodesic \(\exp_{x}^{*}(tp_{y})\), \(0\leq t\leq T\), from \(x\) to \(y\). The set of all such \(p_{y}\) is a subset \(A\) of \(\mathcal{D}_{x}^{*}\). Since \(F_{x}^{*}(p_{y})=d(x,y)\), and \(d(x,y)\leq r\) for some \(r\) due to the forward boundedness of \(\bar{A}\), the subset \(A\) is bounded and contained in the compact set \(\mathcal{B}_{x}^{*}(r)\cup\mathcal{S}_{x}^{*}(r)\). By Remark 4, \(\exp_{x}^{*}[\mathcal{B}_{x}^{*}(r)\cup\mathcal{S}_{x}^{*}(r)]\) is compact and contains the closed set \(\bar{A}\); hence \(\bar{A}\) must be compact.
(iv) \(\Longrightarrow\) (i) Let \(\{x_{i}\}\) be a forward Cauchy sequence in \(M\); by subadditivity it must be forward bounded. Set \(A:=\{x_{i}\,|\,i\in\mathbb{N}\}\); then its closure \(\bar{A}\) is still forward bounded in the manifold topology of \(M\). Taking into account assumption (iv), \(\bar{A}\) is compact; therefore, the sequence \(\{x_{i}\}\) contains a convergent subsequence.
Let \(\{x_{k}\}\) be a convergent subsequence, say it converges to some \(y\in\bar{A}\subset M\). On the other hand, we need to check that the whole sequence \(\{x_{i}\}\) converges to \(y\in\bar{A}\subset M\). To do this, fix \(\epsilon>0\). Since \(\{x_{i}\}\) is forward Cauchy, there exists a positive number \(n_{0}\) such that if \(j>i\geq n_{0}\), then
\[d(x_{i},x_{j})<\frac{\epsilon}{2}.\]
At the same time \(\{x_{k}\}\) converge to \(y\). So there is a positive number \(n_{1}\) such that if \(k\geq n_{1}\), then
\[d(x_{k},y)<\frac{\epsilon}{2}.\]
One can assume that \(n\) is greater than both \(n_{0}\) and \(n_{1}\). If needed, by enlarging \(n\) further, there is no loss of generality in assuming that \(n\) indeed equals some index of the convergent subsequence. Then \(d(x_{n},y)\leq\frac{\epsilon}{2}\), so, for \(i>n\), we get
\[d(x_{i},y)\ \leq\ d(x_{i},x_{n})+d(x_{n},y){<\frac{\epsilon}{2}+\frac{ \epsilon}{2}=\epsilon}.\]
So we have shown that every forward Cauchy sequence is convergent. Hence \((M,d)\) is forward complete.
Finally, we can use the same proof as in Proposition 7 to verify that for every \(x,y\in M\) there exists a length-minimizing geodesic joining \(x\) and \(y\), and it has to be a normal geodesic by Remark 2. Moreover, compactness and completeness, with the help of Proposition 7, prove the second part of (iii).
|
2302.10307 | ViewCo: Discovering Text-Supervised Segmentation Masks via Multi-View
Semantic Consistency | Recently, great success has been made in learning visual representations from
text supervision, facilitating the emergence of text-supervised semantic
segmentation. However, existing works focus on pixel grouping and cross-modal
semantic alignment, while ignoring the correspondence among multiple augmented
views of the same image. To overcome such limitation, we propose
multi-\textbf{View} \textbf{Co}nsistent learning (ViewCo) for text-supervised
semantic segmentation. Specifically, we first propose text-to-views consistency
modeling to learn correspondence for multiple views of the same input image.
Additionally, we propose cross-view segmentation consistency modeling to
address the ambiguity issue of text supervision by contrasting the segment
features of Siamese visual encoders. The text-to-views consistency benefits the
dense assignment of the visual features by encouraging different crops to align
with the same text, while the cross-view segmentation consistency modeling
provides additional self-supervision, overcoming the limitation of ambiguous
text supervision for segmentation masks. Trained with large-scale image-text
data, our model can directly segment objects of arbitrary categories in a
zero-shot manner. Extensive experiments show that ViewCo outperforms
state-of-the-art methods on average by up to 2.9\%, 1.6\%, and 2.4\% mIoU on
PASCAL VOC2012, PASCAL Context, and COCO, respectively. | Pengzhen Ren, Changlin Li, Hang Xu, Yi Zhu, Guangrun Wang, Jianzhuang Liu, Xiaojun Chang, Xiaodan Liang | 2023-01-31T01:57:52Z | http://arxiv.org/abs/2302.10307v1 | # ViewCo: Discovering Text-Supervised Segmentation Masks via Multi-View Semantic Consistency
###### Abstract
Recently, great success has been made in learning visual representations from text supervision, facilitating the emergence of text-supervised semantic segmentation. However, existing works focus on pixel grouping and cross-modal semantic alignment, while ignoring the correspondence among multiple augmented views of the same image. To overcome such limitation, we propose multi-**View** Consistent learning (ViewCo) for text-supervised semantic segmentation. Specifically, we first propose text-to-views consistency modeling to learn correspondence for multiple views of the same input image. Additionally, we propose cross-view segmentation consistency modeling to address the ambiguity issue of text supervision by contrasting the segment features of Siamese visual encoders. The text-to-views consistency benefits the dense assignment of the visual features by encouraging different crops to align with the same text, while the cross-view segmentation consistency modeling provides additional self-supervision, overcoming the limitation of ambiguous text supervision for segmentation masks. Trained with large-scale image-text data, our model can directly segment objects of arbitrary categories in a zero-shot manner. Extensive experiments show that ViewCo outperforms state-of-the-art methods on average by up to 2.9%, 1.6%, and 2.4% mIoU on PASCAL VOC2012, PASCAL Context, and COCO, respectively. 1
Footnote 1: Code release: [https://github.com/pzhren/ViewCo](https://github.com/pzhren/ViewCo)
## 1 Introduction
Recently, vision-language contrastive learning (Radford et al. (2021); Li et al. (2021)) has attracted a lot of attention because it can obtain more generalized feature representations while making use of abundant image-text pairs to avoid labor-intensive annotation costs. Vision-language pre-training (VLP) models have exhibited impressive potential in various visual (Xu et al. (2022); Mu et al. (2021); Radford et al. (2021)) and multimodal (Wang et al. (2021); Kim et al. (2021)) tasks, including text-supervised semantic segmentation (Xu et al. (2022); Ghiasi et al. (2021); Xu et al. (2021); Zabari and Hoshen (2021); Zhou et al. (2021)), which uses text instead of traditional dense labels for supervision to achieve zero-shot semantic segmentation. It provides a feasible solution for learning segmentation masks without mask annotation.
However, existing works with CLIP-based (Radford et al. (2021)) segmentation (Xu et al. (2022; 2021); Zhou et al. (2021)) mainly focus on pixel grouping or cross-modal semantic alignment. They have the following two obvious limitations: _(i)_ the excessive strictness of image-text correspondence; and _(ii)_ the ambiguity of text description. _First_, in vanilla vision-language contrastive learning, each image-text
Figure 1: Illustration of text description ambiguity. Text descriptions are highly abstract and difficult to be semantically aligned with images. Cross-view semantic consistency modeling can effectively alleviate the effect of the text description ambiguity issue.
pair is regarded as a unique positive pair, while all the other combinations are regarded as negative ones. This image-text correspondence is actually too rigorous. In fact, one textual description may correspond to different images. The excessive strictness is not conducive to the model learning high-level cross-modal semantic correspondences. Therefore, more relaxed vision-language contrastive learning needs to be considered. _Second_, the ambiguity of textual descriptions is also a key challenge. Compared with the traditional semantic segmentation pipeline that uses dense annotations as supervision information (Touvron et al. (2021); Ren et al. (2022)), the CLIP-based segmentation methods (Xu et al. (2022; 2021); Zhou et al. (2021a)) use text as supervision, which is easier to access but more noisy and ambiguous.
This is mainly because, compared with traditional segmentation annotations, text descriptions are often more abstract and do not contain location information. Moreover, the background in the image is usually ignored in the description. In some cases, the objects in the image do not even appear in the text description (see Figure 1). Such ambiguity is common in the textual supervision used in vision-language pre-training. In the semantic segmentation task, the ambiguity of textual supervision makes the segmented object-label correspondence very fragile. Therefore, fully mining the information carried by the dataset itself needs to be considered.
On the other hand, visual self-supervision (Caron et al. (2021); He et al. (2022); Chen et al. (2020a); Zhou et al. (2021b)) has been widely used for visual pre-training. It includes two categories: reconstructing masked images (He et al. (2022); Zhou et al. (2021b)) and multicrop image contrast (Caron et al. (2021); Chen et al. (2020a)). For example, SLIP (Mu et al. (2021)) introduces contrastive learning of multicrop visual consistency for VLP. MaskCLIP (Dong et al. (2022)) introduces a visual self-supervised task of reconstructing masked images. They utilize visual self-supervision to provide more useful information for VLP models. However, the semantic consistency of multiple views of an image in segmentation and cross-modal contrast has not received enough attention and research.
Based on the above observations, in this paper, we explore the impact of multi-view semantic consistency on the task of text-supervised semantic segmentation through visual self-supervision. To this end, we propose multi-**View** Consistency learning (ViewCo), which aims at discovering text-supervised segmentation masks via multi-view semantic consistency. Specifically, we propose _text-to-views consistency modeling_ to alleviate the excessive strictness of image-text correspondence in vanilla vision-language contrastive learning. It enables the model to benefit from the dense assignment of visual features by encouraging different crops to align with the same text. This relaxed one-to-many contrast mechanism also facilitates the learning of multi-view consistent semantics, enabling the model to acquire high-level cross-modal alignment capabilities. Moreover, as shown in Figure 1, to alleviate the ambiguity issue of textual supervision, we propose _cross-view segmentation consistency modeling_. It overcomes the limitation imposed by textual ambiguity by providing additional self-supervision to vision-language contrastive learning via cross-view segmentation consistency. ViewCo uses the proposed text-to-views consistency modeling for vision-language cross-modal contrastive learning and additionally enables cross-view segmentation consistency modeling by contrasting the segment features of Siamese visual encoders. As shown in Figure 2, with the help of the two consistency modeling schemes, ViewCo establishes a solid semantic correspondence in different views, and the semantics in different views maintain a good consistency. The semantic consistency of GroupViT in different views is difficult to guarantee.
Overall, ViewCo's design is simple and effective. We train it on large-scale image-text pair datasets CC12M (Changpinyo et al. (2021)) and YFCC (Thomee et al. (2016)). In the inference stage, we
Figure 2: The consistent comparison of semantic segmentation results in multiple views of a “horse”. (a) GroupViT: the semantic segmentations of different views are inconsistent. (b) ViewCo: the semantic segmentations of different views are much more consistent. Here, \(x\), \(u\), and \(v\) represent the segmentation results on the original image and views \(u\), \(v\), respectively.
use the similarity scores between the segmentation embeddings generated by the teacher network and the label prompts to assign labels to the image masks for zero-shot semantic segmentation. Compared with the state-of-the-art methods, ViewCo achieves an average improvement of 2.9%, 1.6%, and 2.4% mIoU on PASCAL VOC2012, PASCAL Context, and COCO, respectively. Our contributions can be summarized as follows:
* We propose a novel one-to-many text-to-views consistency modeling that improves the model's ability of high-level cross-modal semantic alignment by encouraging different crops of an image to align with the same text.
* To alleviate the problem of supervision failure that may arise from text ambiguity, we propose cross-view segmentation consistency modeling to provide additional self-supervision for the vision branch and encourage the model to generate consistent segmentation masks for different views.
* ViewCo consistently outperforms the state-of-the-art methods on PASCAL VOC2012, PASCAL Context, and MS-COCO when pre-trained on CC12M or CC12M+YFCC.
## 2 Related Work
**Vision-Language Pretraining.** In recent years, vision-language pre-training models (Chen et al. (2020b); Desai and Johnson (2021); Li et al. (2020a, 2021a, 2020b)) have developed rapidly with the help of large-scale image-text pair data available on the Internet. Recently, VLP models such as CLIP (Radford et al. (2021)), ALIGN (Li et al. (2021a)), and SLIP (Mu et al. (2021)) have made great progress in visual representation learning by using contrastive learning. They have been successfully transferred to various downstream tasks, such as visual question answering (Antol et al. (2015); Zhou et al. (2020)) and visual reasoning (Zellers et al. (2019)). In particular, CLIP (Radford et al. (2021)) uses the image-text matching relationship for contrastive learning, and the learned model can be directly transferred to ImageNet classification (Deng et al. (2009)) in a zero-shot manner without any fine-tuning. This success is also found in zero-shot semantic segmentation (Xu et al. (2022)). However, the one-to-one contrastive learning mechanism between image and text in the vanilla VLP pipeline is too strict, which is not conducive to the model learning high-level cross-modal semantic alignment. Based on the above observations, this paper proposes one-to-many text-to-views consistency modeling. It relaxes the original one-to-one correspondence by encouraging different crops of an image to match the same text, allowing the model to benefit from the dense assignment of the visual features.
**Visual Self-Supervision.** This framework relies on the information carried by the image itself for self-supervision without any additional annotation information. Visual self-supervision is mainly divided into generative (He et al. (2022); Bao et al. (2021)) and contrastive (He et al. (2020a); Caron et al. (2021); Chen et al. (2020a)). A generative model allows the model to learn the feature representation of the image by reconstructing the masked image. Contrastive models focus more on learning-centric global representations. Since semantic segmentation requires dense prediction of images, generative models may not help much because they destroy the original structure and information of images. On the other hand, the contrastive visual self-supervised model can provide the required multi-view features for ViewCo's text-to-views consistency modeling. Moreover, this visual contrastive learning can provide additional visual self-supervision information for the VLP model to alleviate the risk of supervision failure caused by text ambiguity. Therefore, this paper focuses on the help of visual contrastive learning for semantic segmentation consistency.
**Consistent Semantics.** Capturing consistent semantics is one of the main challenges shared by many tasks such as cross-modal and visual understanding. Vision-language contrastive learning (Radford et al. (2021)) essentially encodes data of different modalities into the same feature space and enforces features sharing the same semantics to get closer, while features with different semantics are pushed away. Similarly, multicrop semantic consistency is
Figure 3: Comparison of single-view text-to-image contrastive learning (left) and multi-view text-to-views contrastive learning (right).
also the core idea of visual self-supervised contrastive learning (Caron et al. (2021); Chen et al. (2020a); He et al. (2020a)). For example, DenseCL (Wang et al. (2021a)) performs pixel-level dense contrastive learning on dense output vectors from multiple views, which does not help the learning of high-level global semantic information. Further, GroupViT (Xu et al. (2022)) uses text as supervision and achieves pixel grouping by capturing the contextually consistent semantics of images. However, in the text-supervised semantic segmentation task, the ambiguous properties of text relative to dense annotations mean that the semantic consistency of images sharing the same semantics cannot be sufficiently guaranteed in the embedding space. Furthermore, the strict one-to-one correspondence between image and text in the vanilla VLP model is also not conducive to the true alignment of high-level cross-modal semantics. Figure 3 (left) illustrates the above observation: although one of the views of an image (_e.g._, the solid circle) is already close to the corresponding text embedding, other views (_e.g._, the dashed circles) may still be far away. Previous VLP methods generally only focus on the alignment of a single view with text. In contrast, as shown in Figure 3 (right), ViewCo focuses on text-to-views consistency modeling, doing one-to-many matching in cross-modal contrastive learning.
## 3 Multi-View Consistent Learning
As shown in Figure 4, our ViewCo is mainly composed of a cross-view segmentation consistency module and a text-to-views consistency module. We describe these two modules in Sections 3.1 and 3.2, respectively, and summarize the final loss function in Section 3.3.
### Cross-View Segmentation Consistency Module
As shown in Figure 4 (left), given a batch of image-text pairs \(\{(x_{i}^{I},x_{i}^{T})\}_{i=1}^{B}\), two random augmentations are performed on the input image \(x_{i}^{I}\), generating two warped views \(u\) and \(v\). We use GroupViT (Xu et al. (2022)) as the bottom-up segmentation backbone of ViewCo, where each view is segmented into \(K\) segment tokens. For each of the views (e.g., \(u\)), this process is expressed as: \(Z_{\text{seg}}^{u}=\{Z_{\text{seg}_{k}}^{u},k=1,...,K\}=f_{s}(u)\in\mathbb{R}^{K\times d}\), where \(Z_{\text{seg}_{k}}^{u}\in\mathbb{R}^{d}\) is the \(k\)-th segment feature from \(f_{s}\), and \(d\) is the dimensionality of the segment feature. Similarly, we have \(Z_{\text{seg}_{k}}^{v_{s}}\) and the segment features \(Z_{\text{seg}_{k}}^{u_{t}}\) and \(Z_{\text{seg}_{k}}^{v_{t}}\) from the teacher network \(f_{t}\). We update the parameters of \(f_{t}\) using the exponential moving average (EMA) (He et al. (2020b)) of the parameters of \(f_{s}\). For example, let \(\theta_{i}\) and \(\overline{\theta}_{i}\) be the parameters of \(f_{s}\) and \(f_{t}\) at training step \(i\), respectively; then \(\overline{\theta}_{i}\) is updated as \(\overline{\theta}_{i}=\alpha\overline{\theta}_{i-1}+(1-\alpha)\theta_{i}\), where \(\alpha\) is a hyper-parameter for smoothing the update. In addition, the standard contrastive loss function, called InfoNCE (Oord et al. (2018)), is considered in this paper. For an encoded query \(q\) and a set of encoded samples \(k=\{k_{0},k_{1},\dots,k_{N}\}\) that are the keys of a dictionary, we have:
\[\mathcal{L}_{\text{{NCE}}}(q,k)=-\log\frac{\text{exp}(q\cdot k_{+}/\tau)}{\sum _{i=0}^{N}\text{exp}(q\cdot k_{i}/\tau)}, \tag{1}\]
where \(\tau\) is a learnable temperature parameter. Here \(q\) and \(k_{+}\) form the positive pair, and the other \((N-1)\) pairs are negative.
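For concreteness, the following minimal PyTorch sketch implements the InfoNCE loss of Eq. (1); it is illustrative rather than the paper's implementation, and the \(l_{2}\) normalization and the default temperature value are assumptions.

```python
import torch
import torch.nn.functional as F

def info_nce(q, keys, pos_idx, tau=0.07):
    """InfoNCE loss of Eq. (1): q is a single query of shape (d,), keys is a
    set of N keys of shape (N, d), and keys[pos_idx] is the positive key k_+."""
    q = F.normalize(q, dim=-1)
    keys = F.normalize(keys, dim=-1)
    logits = keys @ q / tau                       # similarities q . k_i / tau, shape (N,)
    target = torch.tensor([pos_idx], device=q.device)
    return F.cross_entropy(logits.unsqueeze(0), target)  # -log softmax at the positive key

# Example: 8 keys of dimension 16, with key 3 as the positive.
q, keys = torch.randn(16), torch.randn(8, 16)
print(info_nce(q, keys, pos_idx=3))
```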
Figure 4: Framework of ViewCo. It is mainly composed of a cross-view segmentation consistency module and a text-to-views consistency module. The visual branch adopts a visual self-supervised model, which consists of teacher \(f_{t}\) and student \(f_{s}\) networks with the same structure. \(f_{t}\) and \(f_{s}\) are the bottom-up segmentation backbone that outputs segment features of the image.
Intuitively, the segment features obtained from different crops of the same image should be roughly the same, _i.e_., cross-view segmentation consistency. To this end, for the semantic segmentation task, we replace the image-level contrastive learning in previous methods (Caron et al. (2021); Zhou et al. (2021b)) with cross-view segmentation consistency learning within images. Therefore, we define the minimization training objective of the cross-view segmentation consistency module in ViewCo as:
\[\mathcal{L}_{\text{[Seg]}}^{t\leftrightarrow s}=\mathcal{L}_{\text{[Seg]}}^{t \to s}+\mathcal{L}_{\text{[Seg]}}^{s\to t}. \tag{2}\]
It is a bi-directional contrast loss between the segment features from the teacher \(f_{t}\) and the student \(f_{s}\). \(\mathcal{L}_{\text{[Seg]}}^{t\to s}\) considers two pairs of views (_i.e_., \((u_{t},v_{s})\) and \((v_{t},u_{s})\)) outputted by \(f_{t}\) and \(f_{s}\). The segment features of \((u_{t},v_{s})\) from the same image are multiplied (\(Z_{\text{seg}}^{u}\cdot Z_{\text{seg}}^{v}\in\mathbb{R}^{K\times K}\)) after \(l_{2}\) normalization. In the image branch of ViewCo, we use the EMA policy for parameter updates, so the learnable grouping tokens on the corresponding position IDs of different views of the same image are highly correlated, and they have the same semantics. Therefore, the semantic pairs \(\{(Z_{\text{seg}}^{u_{t}},Z_{\text{seg}}^{v_{s}}),i=1,...,K\}\) on the diagonal are regarded as positive, and the other \(K(K-1)\) pairs \(\{(Z_{\text{seg}}^{u_{t}},Z_{\text{seg}}^{v_{s}}),i,j=1,...,K,i\neq j\}\) are regarded as negative. Therefore, the contrastive loss \(\mathcal{L}_{\text{[Seg]}}^{t\to s}\) of the teacher-to-student segment features is defined as \(\mathcal{L}_{\text{[Seg]}}^{t\to s}=\mathcal{L}_{u_{t}\to v_{s}}+ \mathcal{L}_{v_{t}\to u_{s}}\), more specifically:
\[\mathcal{L}_{\text{[Seg]}}^{t\to s}=-\frac{1}{KB}\sum_{i=1}^{B}\sum_{k=1}^{K}( \mathcal{L}_{\text{NCE}}(Z_{\text{seg}_{k}}^{u_{t}},\{Z_{\text{seg}_{k}}^{v_ {s}}\}_{k=1}^{K})+\mathcal{L}_{\text{NCE}}(Z_{\text{seg}_{k}}^{v_{t}},\{Z_{ \text{seg}_{k}}^{u_{s}}\}_{k=1}^{K})). \tag{3}\]
Similarly, the contrastive loss \(\mathcal{L}_{\text{[Seg]}}^{s\to t}\) of the student-to-teacher segment features is defined as \(\mathcal{L}_{\text{[Seg]}}^{s\to t}=\mathcal{L}_{u_{s}\to v_{t}}+ \mathcal{L}_{v_{s}\to u_{t}}\), more specifically:
\[\mathcal{L}_{\text{[Seg]}}^{s\to t}=-\frac{1}{KB}\sum_{i=1}^{B}\sum_{k=1}^{K} (\mathcal{L}_{\text{NCE}}(Z_{\text{seg}_{k}}^{u_{s}},\{Z_{\text{seg}_{k}}^{v _{s}}\}_{k=1}^{K})+\mathcal{L}_{\text{NCE}}(Z_{\text{seg}_{k}}^{v_{s}},\{Z_{ \text{seg}_{k}}^{u_{s}}\}_{k=1}^{K})). \tag{4}\]
Figure 5(a) shows the positive and negative pairs for cross-view segmentation consistency learning in the vision branch.
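The following is a simplified PyTorch sketch of one direction of the cross-view segmentation consistency loss (cf. Eq. (3)) together with the EMA update of the teacher; tensor shapes, the temperature, and the EMA coefficient are illustrative assumptions rather than the paper's exact settings, and the full loss is bidirectional over both view pairings.

```python
import torch
import torch.nn.functional as F

def segment_consistency_loss(seg_t, seg_s, tau=0.07):
    """One direction (teacher view -> student view) of the cross-view segmentation
    consistency loss. seg_t, seg_s: (B, K, d) segment features of two views of the
    same images; the K segment tokens at matching positions are the positive pairs."""
    seg_t = F.normalize(seg_t, dim=-1)
    seg_s = F.normalize(seg_s, dim=-1)
    logits = torch.einsum('bkd,bld->bkl', seg_t, seg_s) / tau   # (B, K, K) similarities
    targets = torch.arange(logits.size(1), device=logits.device)
    targets = targets.unsqueeze(0).expand(logits.size(0), -1)   # diagonal entries are positive
    return F.cross_entropy(logits.reshape(-1, logits.size(-1)), targets.reshape(-1))

@torch.no_grad()
def ema_update(teacher, student, alpha=0.996):
    """EMA update of the teacher parameters from the student parameters."""
    for p_t, p_s in zip(teacher.parameters(), student.parameters()):
        p_t.mul_(alpha).add_(p_s, alpha=1.0 - alpha)
```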
### Text-to-Views Consistency Module
Previous methods (Radford et al. (2021); Xu et al. (2022)) build visual-linguistic semantic correspondences by performing a contrastive loss on image-text pairs. In this paper, we consider the contrastive learning between multiple views and text, using one-to-many text-to-views consistency modeling instead of one-to-one text-to-image contrastive learning. The model learns to capture intra-modal and inter-modal semantic consistency through the alignment of multi-view images and text.
Specifically, for a given image-text pair \((x_{i}^{I},x_{i}^{T})\), by applying two different augmentations to the input image, we obtain a triplet \((u_{i},v_{i},x_{i}^{T})\). As shown in Figure 4 (right), in the training phase, we take the output \((Z_{i}^{u},Z_{i}^{v})\) of the view pair \((u_{i},v_{i})\) through the student network \(f_{s}\) and the output \(Z_{i}^{T}\) of the text encoder \(E_{T}\) to calculate the contrastive loss. The visual embeddings \((Z_{i}^{u},Z_{i}^{v})\) and the text embedding \(Z_{i}^{T}\) are mapped to the same feature space through two MLPs, respectively, before the final \(l_{2}\) normalization. This procedure is represented as:
Figure 5: Illustration of the contrastive loss of (a) cross-view segmentation consistency modeling and (b) text-to-views consistency modeling. \(Z_{\text{seg}_{k}}^{l_{i}}\) is the \(k\)-th semantic feature of the \(i\)-th image (_i.e_., view \(u\) or \(v\)). \(Z_{i}^{l_{u}}\) and \(Z_{i}^{l_{u}}\) are the embeddings of the views \(v\) and \(u\) of the \(i\)-th image, respectively.
\(Z_{i}^{I_{u}}=\text{MLP}(\text{AvgPool}(Z_{\text{Seg}}^{u_{i}})),Z_{\text{Seg}}^{u_ {i}}=f_{s}(u_{i});Z_{i}^{I_{v}}=\text{MLP}(\text{AvgPool}(Z_{\text{Seg}}^{v_{i} })),Z_{\text{Seg}}^{v_{i}}=f_{s}(v_{i})\). The multi-view feature \(Z_{i}^{I}=\{Z_{i}^{I_{u}},Z_{i}^{I_{v}}\}\) and text embedding \(Z_{i}^{T}\) constitute positive pairs, and the other \(2B(B-1)\) pairs are negative pairs. The contrastive loss of text-to-views consistency modeling is defined as follows:
\[\mathcal{L}_{I_{\{u,v\}}\leftrightarrow T}=\mathcal{L}_{I_{\{u,v\}} \to T}+\mathcal{L}_{T\to I_{\{u,v\}}}, \tag{5}\]
where the contrastive loss of views \(I_{\{u,v\}}\)-to-text is defined as:
\[\mathcal{L}_{I_{\{u,v\}}\to T}=-\frac{1}{KB}\sum_{i=1}^{B}\sum_{k=1}^{K}( \mathcal{L}_{\text{{NCE}}}(Z_{i}^{I_{u}},\{Z_{i}^{T}\}_{i=1}^{B})+\mathcal{L} _{\text{{NCE}}}(Z_{i}^{I_{v}},\{Z_{i}^{T}\}_{i=1}^{B})). \tag{6}\]
and the contrastive loss of text-to-views \(I_{\{u,v\}}\) is defined as:
\[\mathcal{L}_{T\to I_{\{u,v\}}}=-\frac{1}{KB}\sum_{i=1}^{B}\sum_{k=1}^{K}( \mathcal{L}_{\text{{NCE}}}(Z_{i}^{T},\{Z_{i}^{I_{u}}\}_{i=1}^{B})+\mathcal{L} _{\text{{NCE}}}(Z_{i}^{T},\{Z_{i}^{I_{v}}\}_{i=1}^{B})). \tag{7}\]
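A minimal PyTorch-style sketch of Eqs. (5)-(7) follows; it is not the authors' code, and the temperature `tau` and the symmetric averaging over the two views are assumptions. Both views of sample \(i\) are positives for its caption embedding, and the captions and views of other samples in the batch are negatives.

```python
# Minimal sketch (not the authors' code) of the text-to-views loss, Eqs. (5)-(7).
import torch
import torch.nn.functional as F

def text_to_views_loss(z_u, z_v, z_t, tau=0.07):
    """z_u, z_v: (B, D) embeddings of the two views; z_t: (B, D) caption embeddings."""
    z_u, z_v, z_t = (F.normalize(z, dim=-1) for z in (z_u, z_v, z_t))
    targets = torch.arange(z_t.size(0), device=z_t.device)
    loss = 0.0
    for z_img in (z_u, z_v):
        logits = z_img @ z_t.t() / tau                 # (B, B) image-text similarities
        loss += F.cross_entropy(logits, targets)       # views -> text, Eq. (6)
        loss += F.cross_entropy(logits.t(), targets)   # text -> views, Eq. (7)
    return loss / 2
```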
Additionally, in order to further enhance the association between multi-view semantics and text semantics, we also compute the multi-label image-text contrastive loss (Xu et al. (2022)) of multi-view and "prompted text" pairs \(\{(Z_{i}^{I_{u}},\{Z_{i}^{T_{m}}\}_{m=1}^{M})_{i=1}^{B},(Z_{i}^{I_{v}},\{Z_{i}^{T_{m}}\}_{m=1}^{M})_{i=1}^{B}\}\), where \(\{Z_{i}^{T_{m}}\}_{m=1}^{M}\) are the embeddings of the additional \(M\) text prompts \(\{T_{i}^{m}\}_{m=1}^{M}\) generated by the \(i\)-th text \(x_{i}^{T}\) according to the "prompt engineering" mechanism (Radford et al. (2021)). \((Z_{i}^{I_{u}},\{Z_{i}^{T_{m}}\}_{m=1}^{M})\), _i.e._, the embedding of the \(i\)-th image view \(u\) and the generated \(M\) text embeddings \(\{Z_{i}^{T_{m}}\}_{m=1}^{M}\), are positive pairs, and the other combinations are negative pairs. Therefore, similar to Eq. (5), the multi-label contrastive loss of multi-view \(I_{\{u,v\}}\) and multi-prompt \(\{T^{m}\}_{m=1}^{M}\) is defined as:
\[\mathcal{L}_{I_{\{u,v\}}\leftrightarrow\{T^{m}\}_{m=1}^{M}}=\mathcal{L}_{I_{\{ u,v\}}\rightarrow\{T^{m}\}_{m=1}^{M}}+\mathcal{L}_{\{T^{m}\}_{m=1}^{M} \to I_{\{u,v\}}}. \tag{8}\]
First, the views-to-prompts loss is the average of the losses of the two views. Considering a single view, e.g. \(u\), the contrastive loss of \(u\) to all the prompts is defined as:
\[\mathcal{L}_{I_{u}\rightarrow\{T^{m}\}_{m=1}^{M}}=-\frac{1}{B}\sum_{i=1}^{B}\log\frac{\sum_{m=1}^{M}\exp(Z_{i}^{I_{u}}\cdot Z_{i}^{T_{m}}/\tau)}{\sum_{m=1}^{M}\sum_{j=1}^{B}\exp(Z_{i}^{I_{u}}\cdot Z_{j}^{T_{m}}/\tau)}. \tag{9}\]
Second, the contrastive loss of multi-prompt-to-views is defined as:
\[\mathcal{L}_{\{T^{m}\}_{m=1}^{M}\to I_{\{u,v\}}}=-\frac{1}{2MB}\sum_{m=1}^{M}\sum_{i=1}^{B}(\mathcal{L}_{\text{NCE}}(Z_{i}^{T_{m}},\{Z_{i}^{I_{u}}\}_{i=1}^{B})+\mathcal{L}_{\text{NCE}}(Z_{i}^{T_{m}},\{Z_{i}^{I_{v}}\}_{i=1}^{B})). \tag{10}\]
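The multi-positive form of Eq. (9) can be sketched as below; this is an illustration rather than the authors' implementation, and the temperature and shapes are assumptions.

```python
# Minimal sketch of the multi-label view-to-prompts loss of Eq. (9);
# not the authors' code. `tau` and the shapes are assumptions.
import torch
import torch.nn.functional as F

def view_to_prompts_loss(z_img, z_prompts, tau=0.07):
    """z_img: (B, D) embeddings of one view; z_prompts: (B, M, D) prompt-text embeddings."""
    z_img = F.normalize(z_img, dim=-1)
    z_prompts = F.normalize(z_prompts, dim=-1)
    B = z_img.size(0)
    logits = torch.einsum('bd,jmd->bjm', z_img, z_prompts) / tau             # (B, B, M)
    pos = torch.logsumexp(logits[torch.arange(B), torch.arange(B)], dim=-1)  # own M prompts
    all_ = torch.logsumexp(logits.reshape(B, -1), dim=-1)                    # all B*M prompts
    return -(pos - all_).mean()
```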
A work closely related to our text-to-views consistency module is DeCLIP (Li et al. (2021)). DeCLIP argues that the text description may describe only a small part of the image, so in addition to the global view in CLIP (Radford et al. (2021)), it also adds a local view for image self-supervision, which may cause information leakage. In addition, DeCLIP uses EDA (Wei and Zou (2019)) as a text augmentation strategy; the augmented text still contains multiple semantics, which does not help the alignment of local semantics in segmentation tasks. In contrast, ViewCo uses self-supervision of two local views to keep the task challenging, while using a "prompt engineering" mechanism to obtain augmented texts with a single semantic. Combining this with one-to-many alignment helps ViewCo better mine consistent segmentation semantics in images.
### Overall Loss Function
Finally, the total loss of ViewCo is the sum of the cross-view segmentation consistency contrastive loss \(\mathcal{L}_{\text{[Seg]}}^{t\leftrightarrow s}=\mathcal{L}_{\text{[Seg]}}^{t\to s}+\mathcal{L}_{\text{[Seg]}}^{s\to t}\) and the two cross-modal contrastive losses:
\[\mathcal{L}=\mathcal{L}_{\text{[Seg]}}^{t\leftrightarrow s}+\mathcal{L}_{I_{\{u,v\}}\leftrightarrow T}+\mathcal{L}_{I_{\{u,v\}}\leftrightarrow\{T^{m}\}_{m=1}^{M}}. \tag{11}\]
## 4 Experiments
### Implementation Details
**Architecture.** In the cross-view segmentation consistency module, \(f_{t}\) and \(f_{s}\) have the same network structure. The parameters of \(f_{t}\) are updated using the exponential moving average of the parameters
of \(f_{s}\). We use GroupViT (Xu et al. (2022)) with two stages as the backbone for semantic feature extraction of ViewCo's visual branch. It is built on ViT-S (Dosovitskiy et al. (2020); Touvron et al. (2021)) with 12 Transformer layers. The input image size is \(224\times 224\), the patch size is \(16\times 16\), and the hidden dimensionality is 384. The 2-stage GroupViT finally outputs 8 segment tokens (_i.e._, \(K=8\)). Following Radford et al. (2021), ViewCo's text encoder \(E_{T}\) consists of 12 Transformer layers with a hidden feature dimensionality of 256.
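The teacher update mentioned above can be sketched in a few lines; the momentum value below is an assumption and is not taken from the paper.

```python
# Minimal sketch of the EMA update producing the teacher f_t from the student f_s.
import torch

@torch.no_grad()
def ema_update(teacher, student, momentum=0.99):
    for p_t, p_s in zip(teacher.parameters(), student.parameters()):
        p_t.mul_(momentum).add_(p_s, alpha=1.0 - momentum)
```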
**Training and Inference.** In the training phase, we use CC12M (Changpinyo et al. (2021)) and the filtered YFCC (Thomee et al. (2016)) as training datasets, which contain 12M and 14M image-text pairs, respectively. See A.1 of the supplementary material for more training details. In the inference phase, following (Xu et al. (2022); Radford et al. (2021)), the image is segmented by associating the image patches with the \(K\) segment tokens outputted by the teacher network \(f_{t}\). The semantics in the images are further classified by computing the similarity of the \(K\) visual-semantic embeddings to the text embeddings generated from the class labels of the test dataset.
**Zero-Shot Transfer to Semantic Segmentation.** We evaluate ViewCo on the task of zero-shot transfer to semantic segmentation on the validation sets of PASCAL VOC 2012 (Everingham et al. (2010)), PASCAL Context (Mottaghi et al. (2014)) and COCO Stuff (Lin et al. (2014)) datasets. The three datasets contain 20, 59, and 80 foreground classes and an additional background class, respectively. During inference, following GroupViT (Xu et al. (2022)), ViewCo predicts only the foreground classes by thresholding the softmax-normalized-similarity between the embedding of the outputted image segments and the text segmentation labels. The thresholds on PASCAL VOC 2012, PASCAL Context, and COCO are set to 0.95, 0.35, and 0.95, respectively. We resize each input image to have a shorter side of 448.
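The inference procedure described above can be summarized by the following sketch, which is not the authors' code: each segment embedding is matched against the class-name text embeddings, the softmax-normalized similarity is thresholded for foreground prediction, and the winning class is propagated to the patches grouped into that segment. The names `patch_to_segment` (the GroupViT-style patch-to-token assignment) and the default threshold are assumptions about the interface.

```python
# Minimal sketch of zero-shot semantic segmentation inference; not the authors' code.
import torch
import torch.nn.functional as F

def zero_shot_segment(segment_emb, text_emb, patch_to_segment, threshold=0.95):
    """segment_emb: (K, D); text_emb: (C, D) class-name embeddings;
    patch_to_segment: (H, W) long tensor with the segment index of each patch."""
    sim = F.normalize(segment_emb, dim=-1) @ F.normalize(text_emb, dim=-1).t()  # (K, C)
    prob = sim.softmax(dim=-1)
    cls = prob.argmax(dim=-1)                          # best foreground class per segment
    cls[prob.max(dim=-1).values < threshold] = -1      # below threshold -> background
    return cls[patch_to_segment]                       # (H, W) label map
```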
### Comparisons with Recent Methods
We first compare the performance of ViewCo with some ViT-S-based zero-shot baselines. Then, to further evaluate the performance of ViewCo on the zero-shot semantic segmentation task, we compare ViewCo with some fully supervised transfer and CLIP-based models.
**Comparison with Zero-Shot Baselines.** Table 1 shows the performance comparison of ViewCo and zero-shot baselines on PASCAL VOC 2012. Among them, the four ViT-based baselines train vision and text encoders through the image-text contrastive loss defined in CLIP (Radford et al. (2021)). They adopt four different pixel grouping methods: pixel-wise, K-means, Mean-shift (Comaniciu and Meer (2002)), and Spectral clustering (Shi and Malik (1997)) respectively. And GroupViT (Xu et al. (2022)) uses the bottom-up patch grouping mechanism. As shown in Table 1, ViewCo significantly outperforms the CLIP-trained ViT and GroupViT (52.4% vs. 51.2%) baselines. It is worth noting that ViewCo and GroupViT adopt the same segmentation backbone, indicating that ViewCo can effectively improve the model's ability of segmentation and cross-modal semantic alignment with the help of the two consistent semantic modelings.
\begin{table}
\begin{tabular}{l|c|c} \hline \hline Arch. & Method & Mask \\ & & mIoU (\%) \\ \hline ViT & pixel-wise & 20.1 \\ ViT & K-means & 25.0 \\ ViT & Mean-shift & 20.7 \\ ViT & Spectral clustering & 19.7 \\ \hline GroupViT & - & 51.2 \\ ViewCo (ours) & - & **52.4** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Comparison with zero-shot baselines on PASCAL VOC 2012.
\begin{table}
\begin{tabular}{l|c c c|c c c} \hline \hline & \multicolumn{3}{c|}{Pre-training} & \multicolumn{3}{c}{Transfer (mIoU (\%))} \\ \hline \multirow{2}{*}{Arch} & \multirow{2}{*}{Model} & \multirow{2}{*}{Dataset} & \multirow{2}{*}{Supervision} & \multirow{2}{*}{Zero-Shot} & PASCAL & PASCAL \\ & & & & VOC & Contexts & COCO \\ \hline \multirow{4}{*}{ViT} & DiT & ImageNet & class & \(\bigstar\) & \(53.0\) & \(35.9\) & - \\ \cline{2-6} & DINO & ImageNet & self & \(\bigstar\) & \(39.1\) & \(20.4\) & - \\ & DINO & CC12M+YFCC & self & \(\bigstar\) & \(37.6\) & \(22.8\) & - \\ & MoCo & ImageNet & self & \(\bigstar\) & \(34.3\) & \(21.3\) & - \\ & MoCo & CC12M+YFCC & self & \(\bigstar\) & \(36.1\) & \(23.0\) & - \\ \hline \multirow{4}{*}{CLIP} & CLIP & LAION-20M & text & \(\bigstar\) & - & \(13.5\) & \(8.2\) \\ & GroupViT & CC12M & text & \(\bigstar\) & \(41.1\) & \(18.2\) & \(18.4\) \\ & GroupViT & CC12M+YFCC & text & \(\bigstar\) & \(51.2\) & \(22.3\) & \(20.9\) \\ \cline{1-1} \cline{2-6} & SLIP & LAION-20M & text \& self & \(\bigstar\) & - & \(12.3\) & \(8.8\) \\ \cline{1-1} & CLIP-MAE & LAION-20M & text \& self & \(\bigstar\) & - & \(16.8\) & \(11.8\) \\ \cline{1-1} & MaxCLIP & LAION-20M & text \& self & \(\bigstar\) & \(7.7\) & \(17.1\) & \(11.8\) \\ \cline{1-1} & ViewCo (ours) & CC12M & text \& self & \(\bigstar\) & \(\bigstar\) & \(\mathbf{45.7}\) & \(\mathbf{20.8}\) & \(\mathbf{20.6}\) \\ \cline{1-1} & ViewCo (ours) & CC12M+YFCC & text \& \(\bigstar\) & \(\mathbf{52.4}\) & \(\mathbf{23.0}\) & \(\mathbf{23.5}\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Comparisons with recent methods. Zero-shot means that the model is directly transferred to the semantic segmentation task without any fine-tuning on the target dataset.
**Comparison with Other SoTA Methods.** These methods include one fully supervised baseline DeiT (Touvron et al. (2021)), two visual self-supervised baselines DINO (Caron et al. (2021)) and MoCo (He et al. (2020)), two vision-language contrastive learning baselines CLIP (Radford et al. (2021)) and GroupViT (Xu et al. (2022)), two vision-language contrast and visual self-supervised learning combined baselines SLIP (Mu et al. (2021)) and CLIP-MAE (Dong et al. (2022)), and one vision-language contrast and self-distillation combined baseline MaskCLIP (Dong et al. (2022)).
Table 2 shows the mIoU performance comparison between ViewCo and the SoTA methods on PASCAL VOC 2012, PASCAL Context, and COCO validation sets. ViewCo significantly outperforms them on all three datasets. Compared to GroupViT, when pre-trained on CC12M, ViewCo achieves a 4.6% mIoU improvement on PASCAL VOC. Similarly, when pre-trained on CC12M+YFCC, ViewCo achieves a 2.6% mIoU improvement on COCO compared to GroupViT. Similar to ViewCo, SLIP, MaskCLIP, and CLIP-MAE all use additional supervision information in the vision branch of the VLP models. Compared with them, ViewCo still has clear advantages in PASCAL Context and COCO. In addition, ViewCo obtains segmentation performance close to the fully supervised DeiT on PASCAL VOC, which again demonstrates the effectiveness of ViewCo for zero-shot semantic segmentation.
### Analysis
In this section, for the convenience of comparison, we use CC12M as the pre-training dataset by default for the ablation of ViewCo components, qualitative analysis, and image classification performance comparison. GroupViT (Xu et al. (2022)) is used as the baseline for ViewCo.
**Image-Level Contrast _vs._ Semantic-Level Contrast.** To ablate the role of the cross-view segmentation consistency module in the vision branch, we add an image-level contrastive module to GroupViT in the visual branch, where we first calculate the average of the \(K\) segment tokens outputted by the teacher and student networks, and then perform contrastive learning. For ViewCo, we remove the text-to-views consistency module and directly average pool the multi-view features outputted by the student network. To be consistent with GroupViT, we use the pooled visual features for contrastive learning with text embeddings. As shown in Table 3, adding a visual self-supervised module for vision-language contrastive learning can improve the performance of the model on semantic segmentation by improving the quality of visual feature learning. Furthermore, the improved performance (_i.e._, 19.1 _vs._ 18.6) of semantic-level learning relative to image-level contrastive learning suggests that the cross-view segmentation consistency module can further improve the performance by capturing the consistency of cross-view semantic segmentation.
**Vision-Language Contrast: Text-to-Image _vs._ Text-to-Views._ We further ablate the text-to-views consistency module in ViewCo. In single-view vision-language contrastive learning, we use the average embedding of multi-view features outputted by the student network and the text embedding for contrastive learning during training. As shown in Table 4, text-to-views consistency modeling significantly improves the performances of the models compared to single-view text-to-image (_i.e._, 1.1% and 1.5%). This indicates that text-to-views consistency modeling has better high-level semantic alignment capabilities than text-to-image single-view modeling. This is exactly what previous methods of single-view vision-language contrastive learning do not have.
**Qualitative Analysis.** Figure 2 shows some visualization results of multi-view semantic segmentation consistency for ViewCo and GroupViT. As shown in Figure 2(a), in GroupViT, the semantic segmentations of different views from the same image are inconsistent.
\begin{table}
\begin{tabular}{l|c|c|c} \hline \hline & Vision & Views & COCO mIoU (\%) \\ \hline GroupViT+ & image-level & single & 18.6 \\ GroupViT+ & image-level & multiple & \(\mathbf{19.7}\) (1.1\(\uparrow\)) \\ \hline ViewCo & semantic-level & single & 19.1 \\ ViewCo & semantic-level & multiple & \(\mathbf{20.6}\) (1.5\(\uparrow\)) \\ \hline \hline \end{tabular}
\end{table}
Table 4: Vision-language contrast: single-view _vs._ multi-view. "single" and "multiple" denote the number of image views used in vision-language contrastive learning.
\begin{table}
\begin{tabular}{l|c|c} \hline \hline & Visual branch & COCO mIoU (\%) \\ \hline GroupViT & - & 18.4 \\ GroupViT+ & image-level & 18.6 \\ ViewCo & semantic-level & \(\mathbf{19.1}\) (0.7\(\uparrow\)) \\ \hline \hline \end{tabular}
\end{table}
Table 3: Image-level contrast _vs._ semantic-level contrast. “-” indicates that no visual self-supervision module is used. GroupViT+ represents modifying the corresponding component in GroupViT.
For example, in image \(x\), "umbrella" is misclassified as "cow", and in view \(u\), "umbrella" is misclassified as "horse". There is also the problem of inconsistent semantic segmentations between views \(u\) and \(v\). As shown in Figure 2(b), the semantic segmentations of different views in ViewCo are completely consistent. This shows that our cross-view segmentation consistency modeling and text-to-views consistency modeling in ViewCo are effective. To evaluate ViewCo's ability to perform semantic segmentation through semantic understanding in rare scenes, we show more visual comparisons in Figure 6. The images of rare scenes are selected from the Internet. In Figure 6(a), we use the class labels of the PASCAL VOC 2012 dataset as the label set for the images. ViewCo's segmentation and prediction results in rare scenes are significantly better than GroupViT's. This indicates that ViewCo can better understand high-level semantics in images through consistent semantic learning. In Figure 6(b), we only focus on the model's ability to segment images in rare scenes. Compared to GroupViT, ViewCo handles the details of image segmentation much better.
More visual comparison results are shown in Figure 7 of A.2 of the supplementary material. In addition, we also visually compare the segmentation consistency of ViewCo and GroupViT on different views in A.3. Finally, we present an analysis of ViewCo's cross-view segmentation consistency in A.4.
**Image Classification.** We also evaluate the classification performance of ViewCo. As shown in Table 5, ViewCo significantly outperforms ViT (_i.e._, 46.3% _vs._ 42.4%) and GroupViT (_i.e._, 46.3% _vs._ 42.9%), showing that ViewCo achieves better cross-modal semantic alignment through text-to-views consistency modeling.
## 5 Conclusion
We propose a novel and simple multi-view consistency learning (ViewCo) for text-supervised semantic segmentation. To deal with the problems of excessively strict image-text correspondence and ambiguous text supervision in the VLP model, ViewCo models the text-to-views consistency and cross-view segmentation consistency. ViewCo can generate consistent segmentations and better capture high-level cross-modal semantic alignment. We expect that this exploration of multi-view consistent learning is also applicable to other VLP tasks.
## 6 Acknowledgment
This work was supported in part by National Key R&D Program of China under Grant No.2020AAA0109700, National Natural Science Foundation of China (NSFC) under Grant No.61976233, Guangdong Outstanding Youth Fund (Grant No.2021B1515020061), Shenzhen Fundamental Research Program (Project No.RCYX20200714114642083, No.JCYJ20190807154211365). We also thank MindSpore, a new deep learning computing framework, and the CAAI-Huawei MindSpore Open Fund for their partial support of this work.
Figure 6: Comparison of semantic segmentation of images in rare scenes. (a) Image segmentation and semantic prediction. (b) Image segmentation. ViewCo can better learn high-level cross-modal semantic alignment with the help of two consistency modeling schemes.
\begin{table}
\begin{tabular}{l|c|c c} \hline \hline & Pre-training & \multicolumn{2}{c}{Zero-shot} \\ & dataset & Acc@1 (\%) & Acc@5 (\%) \\ \hline GroupViT & CC12M & 37.5 & 65.5 \\ ViewCo & CC12M & \(\mathbf{39.5}\) (2.0\(\uparrow\)) & \(\mathbf{68.4}\) (2.9\(\uparrow\)) \\ \hline ViT & CC12M+YFCC & 42.4 & - \\ GroupViT & CC12M+YFCC & 42.9 & 71.7 \\ ViewCo & CC12M+YFCC & \(\mathbf{46.3}\) (3.4\(\uparrow\)) & \(\mathbf{74.0}\) (2.3\(\uparrow\)) \\ \hline \hline \end{tabular}
\end{table}
Table 5: Zero-shot performance on ImageNet. |
2309.12662 | Hausdorff dimension of exceptional sets arising in $θ$-expansions | For a fixed $\theta^2=1/m$, $m \in \mathbb{N}_+$, let $x \in [0, \theta)$ and
$[a_1(x) \theta, a_2(x) \theta, \ldots]$ be the $\theta$-expansion of $x$. Our
first goal is to extend for $\theta$-expansions the results of Jarnik
\cite{J-1928} concerning the set of badly aproximable numbers and the set of
irrationals whose partial quotients do not exceed a positive integer. Define $
L_n (x)= \displaystyle \max_{1 \leq i \leq n} a_i(x), x \in \Omega:=[0,
\theta)\setminus \mathbb{Q} $. The second goal is to complete our result
inspired by Philipp \cite{Ph-1976} % \[ \liminf_{n \to \infty} \frac{L_n(x)
\log\log n}{n} = \frac{1}{\log \left( 1+ \theta^2\right)} \mbox{ for a.e. } x
\in [0, \theta]. \] % In this regard we prove that for any $\eta > 0$ the set
\[ E(\eta) = \left\lbrace x \in \Omega: \lim_{n \to \infty} \frac{L_n(x)
\log\log n}{n} = \eta \right\rbrace \] is of full Hausdorff dimension. | Gabriela Ileana Sebe, Dan Lascu | 2023-09-22T07:02:28Z | http://arxiv.org/abs/2309.12662v1 | # Hausdorff dimension of exceptional sets arising in \(\theta\)-expansions
###### Abstract
For a fixed \(\theta^{2}=1/m\), \(m\in\mathbb{N}_{+}\), let \(x\in[0,\theta)\) and \([a_{1}(x)\theta,a_{2}(x)\theta,\ldots]\) be the \(\theta\)-expansion of \(x\). Our first goal is to extend for \(\theta\)-expansions the results of Jarnik [4] concerning the set of badly aproximable numbers and the set of irrationals whose partial quotients do not exceed a positive integer. Define \(L_{n}(x)=\max\limits_{1\leq i\leq n}a_{i}(x),x\in\Omega:=[0,\theta)\setminus \mathbb{Q}\). The second goal is to complete our result inspired by Philipp [6]
\[\liminf_{n\to\infty}\frac{L_{n}(x)\log\log n}{n}=\frac{1}{\log\left(1+\theta^ {2}\right)}\text{ for a.e. }x\in[0,\theta].\]
In this regard we prove that for any \(\eta>0\) the set
\[E(\eta)=\left\{x\in\Omega:\lim_{n\to\infty}\frac{L_{n}(x)\log\log n}{n}=\eta\right\}\]
is of full Hausdorff dimension.
keywords: \(\theta\)-expansions, partial quotients, Hausdorff dimension.
## 1 Introduction
The present paper continues our series of papers dedicated to \(\theta\)-expansions [8; 7; 9]. Our aim here is to complete some results on extreme value theory obtained in [10]. In order to do this, we introduce a powerful tool for discriminating between the sets of Lebesgue measure zero, namely the notion of Hausdorff dimension, developed by Hausdorff [3] in 1919.
Fractional dimensional theory provides an indication of the size and complexity of a set and has applications in studying the exceptional sets arising in the metrical theory of continued fractions.
Shortly thereafter Jarnik [4] applied it to number theoretical problems and published the first paper in which the investigation is inspired by a problem of Diophantine approximation. In fact,
Jarnik determined the Hausdorff dimension of sets of real numbers very close to infinitely many rational numbers. He investigated the set of irrationals whose partial quotients are bounded, i.e., the set of badly approximable numbers from the point of view of Diophantine approximation, and the set of irrationals whose partial quotients do not exceed a positive integer.
These results have been subsequently generalized in many directions.
Our first goal is to extend for \(\theta\)-expansions the work of Jarnik which remains a reference in the theory of regular continued fractions (RCFs). Since the case \(\theta=1\) refers to RCF-expansions, we generalize and even improve some results of Jarnik.
For a fixed \(\theta\in(0,1)\), every \(x\in(0,\theta)\) can be expanded into a finite or infinite \(\theta\)-_expansion_
\[x=\frac{1}{a_{1}\theta+\cfrac{1}{a_{2}\theta+\cfrac{1}{a_{3}\theta+\ddots}}}=: [a_{1}\theta,a_{2}\theta,a_{3}\theta,\ldots]. \tag{1.1}\]
The positive integers \(a_{n}\), \(n\in\mathbb{N}_{+}:=\{1,2,\ldots\}\), which are called _partial quotients_ or _digits_ are determined as follows. Consider a generalization of the Gauss map \(T_{\theta}:[0,\theta]\to[0,\theta]\),
\[T_{\theta}(x):=\left\{\begin{array}{ll}\frac{1}{x}-\theta\left\lfloor\cfrac{1}{x\theta}\right\rfloor&\mbox{if $x\in(0,\theta]$,}\\ \\ 0&\mbox{if $x=0$.}\end{array}\right. \tag{1.2}\]
and \(a_{n+1}(x)=a_{n}\left(T_{\theta}(x)\right)=a_{1}\left(T_{\theta}^{n}(x)\right)\), \(n\in\mathbb{N}_{+}\), where
\[a_{1}(x):=\left\{\begin{array}{ll}\left\lfloor\cfrac{1}{x\theta}\right\rfloor&\mbox{if $x\neq 0$,}\\ \\ \infty&\mbox{if $x=0$}\end{array}\right. \tag{1.3}\]
Here \(\lfloor\cdot\rfloor\) stands for integer part.
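As a small illustration (not from the paper) of how (1.2)-(1.3) produce the digits, note that with \(t=x/\theta\) and \(m\theta^{2}=1\) one has \(1/(x\theta)=m/t\) and \(T_{\theta}(x)/\theta=m/t-a_{1}(x)\), so the iteration stays in the rationals and can be carried out exactly:

```python
# A small numerical sketch (not from the paper) of the digit algorithm (1.2)-(1.3),
# using the substitution t = x/theta so that exact rational arithmetic applies.
import math
from fractions import Fraction

def theta_digits(t, m, n_digits=10):
    """t = x/theta as a Fraction in (0, 1]; returns a_1(x), a_2(x), ..."""
    digits = []
    for _ in range(n_digits):
        if t == 0:                     # finite expansion reached
            break
        y = Fraction(m) / t            # equals 1/(x*theta) for the current iterate
        a = math.floor(y)
        digits.append(a)
        t = y - a                      # equals T_theta(x)/theta
    return digits

# Example: for m = 2 the point x = (2/3)*theta has the finite expansion [3*theta].
print(theta_digits(Fraction(2, 3), m=2))   # -> [3]
```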
This expansion introduced by Chakraborty and Rao [1] has many of the usual properties of RCFs. Moreover, Chakraborty and Rao proved that the dynamical system given by the transformation \(T_{\theta}\) admits an absolutely continuous invariant probability for certain values of \(\theta\). They have identified for \(\theta^{2}=\cfrac{1}{m}\), \(m\in\mathbb{N}_{+}\), the invariant measure for the transformation \(T_{\theta}\) as
\[\mathrm{d}\gamma_{\theta}:=\cfrac{1}{\log\left(1+\theta^{2}\right)}\cfrac{ \theta\,\mathrm{d}x}{1+\theta x}. \tag{1.4}\]
It was proved in [1] that the dynamical system \(([0,\theta],T_{\theta})\) is ergodic and the measure \(\gamma_{\theta}\) is invariant under \(T_{\theta}\), that is, \(\gamma_{\theta}(A)=\gamma_{\theta}(T_{\theta}^{-1}(A))\) for any \(A\in\mathcal{B}_{[0,\theta]}=\) the \(\sigma\)-algebra of all Borel subsets of \([0,\theta]\).
Moreover, if \(\theta^{2}=\cfrac{1}{m}\), \(m\in\mathbb{N}_{+}\), \([a_{1}\theta,a_{2}\theta,a_{3}\theta,\ldots]\) is the \(\theta\)-expansion of any \(x\in(0,\theta)\) if and only if the following conditions hold:
1. \(a_{n}\geq m\) for any \(n\in\mathbb{N}_{+}\)
2. in the case when \(x\) has a finite expansion, i.e., \(x=[a_{1}\theta,a_{2}\theta,\ldots,a_{n}\theta]\), then \(a_{n}\geq m+1\).
Every irrational \(x\in(0,\theta)\setminus\mathbb{Q}=:\Omega\) has an infinite \(\theta\)-expansion. Note that for all \(n\in\mathbb{N}_{+}\), \(a_{n}(x)\geq m\) and \(T_{\theta}^{n}([a_{1}\theta,a_{2}\theta,\ldots])=[a_{n+1}\theta,a_{n+2}\theta,\ldots]\).
For all \(n\in\mathbb{N}_{+}\), the finite truncation of (1.1)
\[\frac{p_{n}(x)}{q_{n}(x)}=[a_{1}(x)\theta,a_{2}(x)\theta,\ldots,a_{n}(x)\theta]\]
is called the \(n\)_-th convergent_ of the \(\theta\)-expansion of \(x\). For every infinite \(\theta\)-expansion \([a_{1}\theta,a_{2}\theta,\ldots]\) the sequences \(\{p_{n}\}_{n\geq-1}\) and \(\{q_{n}\}_{n\geq-1}\) can be obtained by the following recursive relations
\[p_{n}(x) = a_{n}(x)\theta p_{n-1}(x)+p_{n-2}(x), \tag{1.5}\] \[q_{n}(x) = a_{n}(x)\theta q_{n-1}(x)+q_{n-2}(x), \tag{1.6}\]
with \(p_{-1}(x):=1\), \(p_{0}(x):=0\), \(q_{-1}(x):=0\) and \(q_{0}(x):=1\). By induction, we obtain
\[p_{n-1}(x)q_{n}(x)-p_{n}(x)q_{n-1}(x)=(-1)^{n},\quad n\in\mathbb{N}. \tag{1.7}\]
From (1.6), we have that \(q_{n}(x)\geq\theta\), \(n\in\mathbb{N}_{+}\). Further, also from (1.6) and by induction we get
\[q_{n}(x)\geq\left\lfloor\frac{n}{2}\right\rfloor\theta^{2}. \tag{1.8}\]
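The recursions (1.5)-(1.6) and the identity (1.7) are easy to check numerically; the following sketch is only an illustration (digit ranges and the tolerance are arbitrary choices, not from the paper).

```python
# Minimal sketch of the recursions (1.5)-(1.6) with a floating-point check of (1.7).
import math
import random

def convergents(digits, m):
    theta = 1 / math.sqrt(m)
    p_prev, p = 1.0, 0.0               # p_{-1}, p_0
    q_prev, q = 0.0, 1.0               # q_{-1}, q_0
    out = []
    for a in digits:
        p_prev, p = p, a * theta * p + p_prev
        q_prev, q = q, a * theta * q + q_prev
        out.append((p, q))
    return out

m = 2
for _ in range(5):
    digits = [random.randint(m, m + 3) for _ in range(6)]
    p_prev, q_prev = 0.0, 1.0          # (p_0, q_0)
    for n, (p, q) in enumerate(convergents(digits, m), start=1):
        assert abs(p_prev * q - p * q_prev - (-1) ** n) < 1e-6   # identity (1.7)
        p_prev, q_prev = p, q
```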
Define \(L_{n}(x):=\max\limits_{1\leq i\leq n}a_{i}(x),\ x\in\Omega\). Inspired by a result of Philipp [6] which answered a conjecture of Erdos, we have proved in [10] that for a.e. \(x\in[0,\theta]\)
\[\liminf_{n\to\infty}\frac{L_{n}(x)\log\log n}{n}=\frac{1}{\log\left(1+\theta^ {2}\right)}.\]
In 2002, Okano [5] constructed some specific numbers \(x\in[0,1)\) and showed that for any \(k\geq 2\)
\[\liminf_{n\to\infty}\frac{L_{n}(x)\log\log n}{n}=\frac{1}{\log k}.\]
The results established by Philipp and Okano for RCF expansions were complemented by Wu and Xu [11] by showing that its exceptional set is of full Hausdorff dimension.
The second goal of this paper is to prove that for any \(\eta\geq 0\), the set
\[E(\eta)=\left\{x\in\Omega:\lim_{n\to\infty}\frac{L_{n}(x)\log\log n}{n}=\eta\right\} \tag{1.9}\]
is of full Hausdorff dimension.
The paper is organized as follows. In Section 2, we make a brief survey of the Hausdorff dimension. In Section 3 we establish some basic metric properties of \(\theta\)-expansions. In Section 4 we generalize the results obtained by Jarnik [4]; it is worth mentioning that Proposition 4.5 is a key tool for our further results. In Section 5, using the lower and upper bounds of the Hausdorff dimension obtained in Section 4 for some sets, we achieve the second goal of the paper mentioned above; in particular, this final section is devoted to the proof of Theorem 5.5. We also add some concluding remarks.
## 2 Hausdorff dimension
In this section we recall some definitions and we establish notations for later use.
Hausdorff measure is an extension of Lebesgue measure that allows for the measurement of subsets within \(\mathbb{R}^{n}\) possessing dimensions smaller than \(n\). This includes subsets like submanifolds and the intriguing category of fractal sets. Through the application of Hausdorff measure, it becomes possible to define the dimension of any set within \(\mathbb{R}^{n}\) even in cases involving complex or intricate geometries.
Hausdorff's idea consists in measuring a set by covering it by an infinite countable family of sets of bounded diameter, and then in looking at what happens when the maximal diameter of these covering sets tends to \(0\).
For a non-empty set \(E\subset\mathbb{R}\), its _diameter_, denoted by \(|E|\), is by definition
\[|E|:=\sup\left\{|x-y|:\ x,y\in E\right\}.\]
Let \(J\) be a finite or infinite set of indices. If for some positive real number \(\delta\) the set \(E\) and the collection \(\{C_{j}\}_{j\in J}\) of subsets of \(\mathbb{R}\) satisfy \(E\subset\bigcup_{j\in J}C_{j}\) and \(0<|C_{j}|\leq\delta\) for any \(j\in J\), then \(\{C_{j}\}_{j\in J}\) is called a \(\delta\)_-covering of_\(E\).
Note that the sets in a countable cover can be any sets whatever. For example, they do not need to be open or closed.
If \(\mathcal{C}=\{C_{j}\}_{j\in J}\) is an infinite countable collection of sets in \(\mathbb{R}\) and \(s>0\) is a real number we say the \(s\)_-total length of_\(\mathcal{C}\) is
\[\Delta_{s}(\mathcal{C})=\sum_{j\in J}|C_{j}|^{s}.\]
For any positive real number \(\delta\), we define the \(s\)_-covered length_ of \(E\) as
\[H_{\delta,s}(E):=\inf_{J}\sum_{j\in J}|C_{j}|^{s},\]
where the infimum is taken over all the countable \(\delta\)-coverings \(\{C_{j}\}_{j\in J}\) of \(E\). Clearly, the function \(\delta\mapsto H_{\delta,s}(E)\) is non-increasing. Consequently,
\[H_{s}(E)=\lim_{\delta\to 0}H_{\delta,s}(E)=\sup_{\delta>0}H_{\delta,s}(E)\]
is well-defined and lies in \([0,\infty]\).
The _Hausdorff dimension_ of a set \(E\subset\mathbb{R}\) denoted by \(\dim_{H}(E)\), is the unique non-negative real number \(s_{0}\) such that \(H_{s}(E)=0\) if \(s>s_{0}\) and \(H_{s}(E)=+\infty\) if \(0<s<s_{0}\). In other words, we have
\[\dim_{H}(E) = \inf\{s:\ H_{s}(E)=0\}\] \[= \sup\{s:\ H_{s}(E)=+\infty\}.\]
Recall some properties of Hausdorff dimension for subsets \(E,E_{1},E_{2},\ldots\) of \(\mathbb{R}\):
1. If \(E_{1}\subset E_{2}\), then \(\dim_{H}(E_{1})\leq\dim_{H}(E_{2})\);
2. \(\dim_{H}\left(\bigcup_{j=1}^{\infty}E_{j}\right)=\sup\{\dim_{H}(E_{j}):j\geq 1\}\);
3. The Hausdorff dimension of a finite or countable set of points is \(0\);
4. Two sets differing by a countable set of points have the same Hausdorff dimension.
We also present the following result (see [2, Proposition 2.3]).
**Lemma 2.1**.: _Let \(h:E\subset[0,1)\to[0,1)\) be a function and suppose that it satisfies the following \(\mu\)-Holder condition (\(\mu>0\)) for some constant \(C>0\):_
\[|h(x)-h(y)|\leq C\left|x-y\right|^{\mu},\text{ for all }x,y\in E.\]
_Then,_
\[\dim_{H}(h(E))\leq\frac{1}{\mu}\dim_{H}(E).\]
For more details and for an extensive exposure of the Hausdorff measure and the properties of the Hausdorff dimension we recommend Falconer's book [2].
## 3 Some basic metric properties of \(\theta\)-expansions
Let us fix \(\theta^{2}=\dfrac{1}{m}\), \(m\in\mathbb{N}_{+}\). Putting \(\mathbb{N}_{m}=\{m,m+1,\ldots\}\), \(m\in\mathbb{N}_{+}\), the partial quotients \(a_{n}\), \(n\in\mathbb{N}_{+}\), take positive integer values in \(\mathbb{N}_{m}\).
For any \(n\in\mathbb{N}_{+}\) and \((a_{1},\ldots,a_{n})\in\mathbb{N}_{m}^{n}\), let
\[I_{n}\left(a_{1},\ldots,a_{n}\right)=\left\{x\in\Omega:a_{1}(x)=a_{1},\ldots, a_{n}(x)=a_{n}\right\}\]
be the \(n\)-th order fundamental interval. For \(\theta\)-expansions, such intervals generate the most natural partition of the interval \([0,\theta]\).
From the definition of \(T_{\theta}\), (1.5) and (1.6) we have for any \(n\in\mathbb{N}_{+}\) and \((a_{1},\ldots,a_{n})\in\mathbb{N}_{m}^{n}\),
\[I_{n}(a_{1},\ldots,a_{n})=\left\{\begin{array}{ll}\left[\dfrac{p_{n}}{q_{n} },\dfrac{p_{n}+\theta p_{n-1}}{q_{n}+\theta q_{n-1}}\right]&\text{ if }n\text{ is even,}\\ \\ \left(\dfrac{p_{n}+\theta p_{n-1}}{q_{n}+\theta q_{n-1}},\dfrac{p_{n}}{q_{n}} \right]&\text{ if }n\text{ is odd.}\end{array}\right. \tag{3.1}\]
Using (1.7) we get
\[\left|I_{n}\left(a_{1},\ldots,a_{n}\right)\right|=\frac{\theta}{q_{n}(q_{n}+ \theta q_{n-1})} \tag{3.2}\]
and
\[\frac{\theta}{\left(1+\theta^{2}\right)q_{n}^{2}}\leq\left|I_{n}\left(a_{1}, \ldots,a_{n}\right)\right|\leq\frac{\theta}{q_{n}^{2}}. \tag{3.3}\]
By (3.1), the endpoints of the interval \(I_{n+1}(a_{1},\ldots,a_{n},k)\), \(k\geq m\), are \(\dfrac{p_{n+1}}{q_{n+1}}\) and \(\dfrac{p_{n+1}+\theta p_{n}}{q_{n+1}+\theta q_{n}}\) with \(p_{n+1}=k\theta p_{n}+p_{n-1}\) and \(q_{n+1}=k\theta q_{n}+q_{n-1}\). So we obtain
\[\frac{p_{n+1}}{q_{n+1}}=\frac{k\theta p_{n}+p_{n-1}}{k\theta q_{n}+q_{n-1}}, \quad\frac{p_{n+1}+\theta p_{n}}{q_{n+1}+\theta q_{n}}=\frac{(k+1)\theta p_{n} +p_{n-1}}{(k+1)\theta q_{n}+q_{n-1}}\]
and
\[\left|I_{n+1}\left(a_{1},\ldots,a_{n},k\right)\right|=\frac{\theta}{(k\theta q _{n}+q_{n-1})((k+1)\theta q_{n}+q_{n-1})}. \tag{3.4}\]
**Lemma 3.1**.: _For any \(n\in\mathbb{N}_{+}\) and \((a_{1},\ldots,a_{n})\in\mathbb{N}_{m}^{n}\), we have_
1. \(q_{n}(a_{1},\ldots,a_{n})\geq(m+1)^{\frac{n-1}{2}}\)_;_
2. \(\frac{(a_{k}+m)\theta}{2}\leq\frac{q_{n}(a_{1},\ldots,a_{n})}{q_{n-1}(a_{1}, \ldots,a_{k-1},a_{k+1},\ldots,a_{n})}\leq(a_{k}+m)\theta\)_,_ \(1\leq k\leq n\)_._
Proof.: \((i)\) Since by (1.6) for any \(n\geq 2\) we have \(q_{n-1}\geq\frac{q_{n-2}}{\theta}\) it follows that
\[q_{n}=a_{n}\theta q_{n-1}+q_{n-2}\geq a_{n}\theta\frac{q_{n-2}}{\theta}+q_{n- 2}=(a_{n}+1)q_{n-2}\geq(m+1)q_{n-2}.\]
The successive application of this inequality gives us
\[q_{2n}\geq(m+1)^{n}q_{0}=(m+1)^{n},\] \[q_{2n+1}\geq(m+1)^{n}q_{1}=(m+1)^{n}a_{1}\theta\geq(m+1)^{n}m \theta\geq(m+1)^{n}.\]
\((ii)\) For any fixed \(n\in\mathbb{N}_{+}\), we prove this result by induction on \(n-k\). When \(n-k=0\), by (1.6),
\[\frac{q_{k}(a_{1},\ldots,a_{k})}{q_{k-1}(a_{1},\ldots,a_{k-1})}=\frac{a_{k} \theta q_{k-1}(a_{1},\ldots,a_{k-1})+q_{k-2}(a_{1},\ldots,a_{k-2})}{q_{k-1}(a _{1},\ldots,a_{k-1})}\]
Since \(q_{k-2}\leq\theta q_{k-1}\),
\[\frac{(a_{k}+m)\theta}{2}\leq\frac{q_{k}(a_{1},\ldots,a_{k})}{q_{k-1}(a_{1}, \ldots,a_{k-1})}\leq(a_{k}+m)\theta.\]
Now, since \(a_{k+1}\geq m\) and \(q_{k-2}\leq\theta q_{k-1}\),
\[\frac{q_{k+1}(a_{1},\ldots,a_{k+1})}{q_{k}(a_{1},\ldots,a_{k-1}, a_{k+1})} = \frac{a_{k+1}\theta q_{k}(a_{1},\ldots,a_{k})+q_{k-1}(a_{1}, \ldots,a_{k-1})}{a_{k+1}\theta q_{k-1}(a_{1},\ldots,a_{k-1})+q_{k-2}(a_{1}, \ldots,a_{k-2})}\] \[= \frac{(a_{k+1}a_{k}\theta^{2}+1)q_{k-1}(a_{1},\ldots,a_{k-1})+a_ {k+1}\theta q_{k-2}(a_{1},\ldots,a_{k-2})}{a_{k+1}\theta q_{k-1}(a_{1},\ldots, a_{k-1})+q_{k-2}(a_{1},\ldots,a_{k-2})}\] \[\leq a_{k}\theta+\frac{q_{k-1}(a_{1},\ldots,a_{k-1})+a_{k+1}\theta q _{k-2}(a_{1},\ldots,a_{k-2})}{a_{k+1}\theta q_{k-1}(a_{1},\ldots,a_{k-1})}\] \[= a_{k}\theta+\frac{1}{a_{k+1}\theta}+\frac{q_{k-2}}{q_{k-1}}\leq (a_{k}+m)\theta,\]
\[\frac{q_{k+1}(a_{1},\ldots,a_{k+1})}{q_{k}(a_{1},\ldots,a_{k-1}, a_{k+1})} = \frac{(a_{k+1}a_{k}\theta^{2}+1)q_{k-1}(a_{1},\ldots,a_{k-1})+a_ {k+1}\theta q_{k-2}(a_{1},\ldots,a_{k-2})}{a_{k+1}\theta q_{k-1}(a_{1},\ldots, a_{k-1})+q_{k-2}(a_{1},\ldots,a_{k-2})}\] \[\geq \frac{(a_{k+1}a_{k}\theta^{2}+1)q_{k-1}(a_{1},\ldots,a_{k-1})}{a _{k+1}\theta q_{k-1}(a_{1},\ldots,a_{k-1})+\theta q_{k-1}(a_{1},\ldots,a_{k-1})}\] \[= \frac{a_{k+1}a_{k}\theta^{2}+1}{(a_{k+1}+1)\theta}\geq\frac{(a_{ k}+m)\theta}{2}.\]
Using (1.6) and applying an induction argument, we get the desired result.
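The two-sided bound of Lemma 3.1(ii) can also be checked numerically; the sketch below (not from the paper; digit range and number of trials are arbitrary) compares \(q_{n}\) with the denominator obtained after deleting one digit.

```python
# Numerical sanity check (not from the paper) of Lemma 3.1(ii).
import math
import random

def q_of(digits, m):
    theta = 1 / math.sqrt(m)
    q_prev, q = 0.0, 1.0               # q_{-1}, q_0
    for a in digits:
        q_prev, q = q, a * theta * q + q_prev
    return q

m, theta = 3, 1 / math.sqrt(3)
for _ in range(1000):
    digits = [random.randint(m, m + 10) for _ in range(random.randint(2, 8))]
    k = random.randrange(len(digits))  # delete the (k+1)-th digit
    ratio = q_of(digits, m) / q_of(digits[:k] + digits[k + 1:], m)
    a_k = digits[k]
    assert (a_k + m) * theta / 2 - 1e-9 <= ratio <= (a_k + m) * theta + 1e-9
```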
## 4 Improvement and generalization of a Jarnik result
For any \(M\in\mathbb{N}_{m}\) and \(n\in\mathbb{N}_{+}\), let
\[E_{M}:=\{x\in\Omega:\;m\leq a_{n}(x)\leq M\text{ for any }n\geq 1\}\]
and
\[E_{M}^{n}:=\{x\in\Omega:\;m\leq a_{i}(x)\leq M\text{ for }i=1,\ldots,n\}\,.\]
Clearly we have for any \(n\in\mathbb{N}_{+}\)
\[E_{M}^{n}\supset E_{M}^{n+1},\quad E_{M}=\bigcap_{n\geq 1}E_{M}^{n} \tag{4.1}\]
\[E_{M}^{n}=\sum_{\begin{array}{c}m\leq a_{i}\leq M\\ i=1,\ldots,n\end{array}}I_{n}(a_{1},\ldots,a_{n}). \tag{4.2}\]
The exact calculation of the Hausdorff dimension of a set is in most cases a difficult problem. It is however often possible to obtain lower and upper bounds for the Hausdorff dimension. In the sequel we shall obtain a result that improves Proposition 4 obtained by Jarnik [4]. In the case of RCF-expansion (\(\theta=1\)), Jarnik established that for any \(M>8\)
\[1-\frac{4}{M\log 2}\leq\dim_{H}(E_{M})\leq 1-\frac{1}{8M\log M}. \tag{4.3}\]
### The lower bound
The following lemma allows us to bound from below the Hausdorff dimension of \(E_{M}\).
**Lemma 4.1**.: _Let us fix a real number \(s\in(0,1)\) and a positive integer \(M\geq m\). If for any \(n\geq 1\) and for all digits \(a_{1},\ldots,a_{n-1}\) (\(m\leq a_{i}\leq M\), \(i=1,\ldots,n-1\)) the following inequality holds_
\[|I_{n-1}(a_{1},\ldots,a_{n-1})|^{s}\leq\sum_{k=m}^{M}|I_{n}(a_{1},\ldots,a_{n- 1},k)|^{s}\,, \tag{4.4}\]
_then \(\dim_{H}(E_{M})\geq s\)._
Proof.: By (4.2) and (4.1) for any \(n\geq 1\) all the \(n\)-th order intervals \(I_{n}\) form a \(\delta\)-covering \(\mathcal{C}_{n}\) for \(E_{M}^{n}\) and implicitly for \(E_{M}\). Any interval \(I_{n}\) belonging to \(\mathcal{C}_{n}\) is contained in exactly one of the intervals of \(\mathcal{C}_{n-1}\) and contains at least two intervals belonging to \(\mathcal{C}_{n+1}\). The maximum of the lengths of the intervals in \(\mathcal{C}_{n}\) tends to \(0\) when \(n\) tends to infinity.
From (4.4) we get
\[\theta^{s}=\Delta_{s}(\mathcal{C}_{0})\leq\ldots\leq\Delta_{s}(\mathcal{C}_{n -1})\leq\Delta_{s}(\mathcal{C}_{n})\leq\ldots\]
where \(\mathcal{C}_{0}=\{I_{0}\}\) with \(I_{0}=[0,\theta]\).
We deduce that the \(s\)-covered length of \(E_{M}\) satisfies \(H_{\delta^{\prime},s}(E_{M})\geq\theta^{s}\) for any \(\delta^{\prime}\leq\delta\). By letting \(\delta^{\prime}\) tend to \(0\) we have
\[H_{s}(E_{M})=\lim_{\delta^{\prime}\to 0}H_{\delta^{\prime},s}(E_{M})\geq\theta^{s}>0,\]
which implies that the Hausdorff dimension of \(E_{M}\) is at least equal to \(s\)
**Proposition 4.2**.: _For \(s=1-\dfrac{2(m+1)}{M+1}\dfrac{1}{\log(m+1)}\), where \(M>2m+1\), the Hausdorff dimension \(\dim_{H}(E_{M})\geq s\)._
Proof.: We check if the assumption from Lemma 4.1 is fulfilled. The inequality to be proved is (4.4). Using (3.2) and (3.4) we obtain
\[\dfrac{\theta^{s}}{q_{n-1}^{s}(q_{n-1}+\theta q_{n-2})^{s}}\leq \sum_{k=m}^{M}\dfrac{\theta^{s}}{(k\theta q_{n-1}+q_{n-2})^{s}\left((k+1) \theta q_{n-1}+q_{n-2}\right)^{s}} \tag{4.5}\]
Using (1.7) we get
\[\sum_{k=m}^{M}\dfrac{1}{(k\theta q_{n-1}+q_{n-2})\left((k+1) \theta q_{n-1}+q_{n-2}\right)}=\dfrac{(-1)^{n-1}}{\theta}\sum_{k=m}^{M}\left( \dfrac{k\theta p_{n-1}+p_{n-2}}{k\theta q_{n-1}+q_{n-2}}-\dfrac{(k+1)\theta p _{n-1}+p_{n-2}}{(k+1)\theta q_{n-1}+q_{n-2}}\right)\] \[=\dfrac{(-1)^{n-1}}{\theta}\left(\dfrac{m\theta p_{n-1}+p_{n-2}} {m\theta q_{n-1}+q_{n-2}}-\dfrac{(M+1)\theta p_{n-1}+p_{n-2}}{(M+1)\theta q_{ n-1}+q_{n-2}}\right)\] \[=\dfrac{(-1)^{n-1}}{\theta}\left[\left(\dfrac{m\theta p_{n-1}+p_ {n-2}}{m\theta q_{n-1}+q_{n-2}}-\dfrac{p_{n-1}}{q_{n-1}}\right)+\left(\dfrac{p _{n-1}}{q_{n-1}}-\dfrac{(M+1)\theta p_{n-1}+p_{n-2}}{(M+1)\theta q_{n-1}+q_{n- 2}}\right)\right]\] \[=\dfrac{1}{\theta}\left(\dfrac{1}{q_{n-1}(m\theta q_{n-1}+q_{n-2} )}-\dfrac{1}{q_{n-1}((M+1)\theta q_{n-1}+q_{n-2})}\right). \tag{4.6}\]
Using (4.6), a direct computation yields that
\[\sum_{k=m}^{M}\dfrac{1}{(k\theta q_{n-1}+q_{n-2})^{s}\left((k+1) \theta q_{n-1}+q_{n-2}\right)^{s}}=\sum_{k=m}^{M}\dfrac{(k\theta q_{n-1}+q_{n-2 })^{1-s}\left((k+1)\theta q_{n-1}+q_{n-2}\right)^{1-s}}{(k\theta q_{n-1}+q_{n- 2})\left((k+1)\theta q_{n-1}+q_{n-2}\right)}\] \[\geq\sum_{k=m}^{M}\dfrac{(m\theta q_{n-1}+q_{n-2})^{1-s}\left((m+1 )\theta q_{n-1}+q_{n-2}\right)^{1-s}}{(k\theta q_{n-1}+q_{n-2})\left((k+1) \theta q_{n-1}+q_{n-2}\right)}\] \[\geq\sum_{k=m}^{M}\dfrac{(m\theta q_{n-1}+q_{n-2})^{1-s}\left((m+1 )\theta q_{n-1}\right)^{1-s}}{(k\theta q_{n-1}+q_{n-2})\left((k+1)\theta q_{n- 1}+q_{n-2}\right)}\] \[=(m\theta q_{n-1}+q_{n-2})^{1-s}\left((m+1)\theta q_{n-1}\right) ^{1-s}\dfrac{1}{\theta}\dfrac{1}{q_{n-1}(m\theta q_{n-1}+q_{n-2})}\left[1- \dfrac{m\theta q_{n-1}+q_{n-2}}{(M+1)\theta q_{n-1}+q_{n-2}}\right]\] \[=\dfrac{(m+1)^{1-s}}{(\theta q_{n-1})^{s}\left(m\theta q_{n-1}+q _{n-2}\right)^{s}}\left(1-\dfrac{m\theta q_{n-1}+q_{n-2}}{(M+1)\theta q_{n-1}+ q_{n-2}}\right)\] \[=\dfrac{(m+1)^{1-s}}{(q_{n-1})^{s}\left(q_{n-1}+\theta q_{n-2} \right)^{s}}\left(1-\dfrac{q_{n-1}+\theta q_{n-2}}{(M+1)\theta^{2}q_{n-1}+ \theta q_{n-2}}\right).\]
To obtain (4.5) we have to show that
\[(m+1)^{1-s}\left(1-\dfrac{q_{n-1}+\theta q_{n-2}}{(M+1)\theta^{2}q_{n-1}+ \theta q_{n-2}}\right)\geq 1. \tag{4.7}\]
Since \(q_{n-2}\leq\theta q_{n-1}\) we get
\[\dfrac{q_{n-1}+\theta q_{n-2}}{(M+1)\theta^{2}q_{n-1}+\theta q_{n-2}}\leq \dfrac{q_{n-1}+\theta^{2}q_{n-1}}{(M+1)\theta^{2}q_{n-1}}=\dfrac{1+\theta^{2}} {(M+1)\theta^{2}}=\dfrac{m+1}{M+1},\]
\[(m+1)^{1-s}\left(1-\frac{q_{n-1}+\theta q_{n-2}}{(M+1)\theta^{2}q_{n-1}+\theta q_{n -2}}\right)\geq(m+1)^{1-s}\left(1-\frac{m+1}{M+1}\right).\]
Clearly
\[(m+1)^{1-s}\left(1-\frac{m+1}{M+1}\right)\geq 1\]
if and only if
\[(1-s)\log(m+1)\geq-\log\left(1-\frac{m+1}{M+1}\right).\]
Since \(2x\geq-\log(1-x)\), for all \(x\in(0,1/2)\), by choosing \((1-s)\log(m+1)=2\frac{m+1}{M+1}\), and assuming that \(\frac{m+1}{M+1}\in\left(0,\frac{1}{2}\right)\), we obtain (4.7) for \(s=1-\frac{2(m+1)}{M+1}\frac{1}{\log(m+1)}\), with \(M>2m+1\).
### The upper bound
Now we bound from above the Hausdorff dimension of \(E_{M}\) by applying the following lemma.
**Lemma 4.3**.: _Let us fix a real number \(s\in(0,1)\) and a positive integer \(M\geq m\). If for any \(n\geq 1\) and for all digits \(a_{1},\ldots,a_{n-1}\)\((m\leq a_{i}\leq M,\ i=1,\ldots,n-1)\) the following inequality holds_
\[|I_{n-1}(a_{1},\ldots,a_{n-1})|^{s}\geq\sum_{k=m}^{M}|I_{n}(a_{1},\ldots,a_{n- 1},k)|^{s}\,, \tag{4.8}\]
_then \(\dim_{H}(E_{M})\leq s\)._
Proof.: Since \(I_{0}=[0,\theta]\), by (4.8) we obtain
\[\theta^{s}=|I_{0}|^{s}\geq\sum_{k_{1}=m}^{M}|I_{1}(k_{1})|^{s}\geq\sum_{k_{1}, k_{2}=m}^{M}|I_{2}(k_{1},k_{2})|^{s}\geq\ldots\geq\sum_{k_{1},\ldots,k_{n}=m}^{M }|I_{n}(k_{1},\ldots,k_{n})|^{s}.\]
Now let \(\delta>0\) be given. We choose \(n\geq 1\) so large such that the lengths of all \(n\)-th order intervals are shorter than \(\delta\). This is possible since \(q_{n}\geq\left\lfloor\frac{n}{2}\right\rfloor\theta^{2}\) and
\[|I_{n}|=\frac{\theta}{q_{n}(q_{n}+\theta q_{n-1})}<\frac{\theta}{q_{n}^{2}} \leq\frac{1}{\left\lfloor\frac{n}{2}\right\rfloor^{2}\theta^{3}}.\]
Since all the \(n\)-th order intervals \(I_{n}\) form a \(\delta\)-covering for \(E_{M}\) it follows that for the \(s\)-covered length of \(E_{M}\) we have \(H_{\delta,s}(E_{M})\leq\sum|I_{n}|^{s}\leq|I_{0}|^{s}=\theta^{s}\). Consequently
\[H_{s}(E_{M})=\lim_{\delta\to 0}H_{\delta,s}(E_{M})\leq\theta^{s},\]
which implies \(\dim_{H}(E_{M})\leq s\).
**Proposition 4.4**.: _For \(s=1-\dfrac{m}{M+2}\dfrac{1}{\log\dfrac{2M(M+1)}{m}}\), where \(M\geq m\), the Hausdorff dimension \(\dim_{H}(E_{M})\leq s\)._
Proof.: We have to show that the assumption from Lemma 4.3 is fulfilled. The inequality to be proved is (4.8). Using (3.2) and (3.4) we obtain
\[\dfrac{\theta^{s}}{q_{n-1}^{s}(q_{n-1}+\theta q_{n-2})^{s}}\geq\sum_{k=m}^{M} \dfrac{\theta^{s}}{(k\theta q_{n-1}+q_{n-2})^{s}\left((k+1)\theta q_{n-1}+q_{n -2}\right)^{s}}. \tag{4.9}\]
We have already shown in (4.6) that
\[\sum_{k=m}^{M}\dfrac{1}{(k\theta q_{n-1}+q_{n-2})\left((k+1)\theta q _{n-1}+q_{n-2}\right)} = \dfrac{1}{\theta}\dfrac{1}{q_{n-1}(m\theta q_{n-1}+q_{n-2})} \left(1-\dfrac{m\theta q_{n-1}+q_{n-2}}{(M+1)\theta q_{n-1}+q_{n-2}}\right) \tag{4.10}\] \[= \dfrac{1}{\theta}\dfrac{\theta}{q_{n-1}(q_{n-1}+\theta q_{n-2})} \left(1-\dfrac{q_{n-1}+\theta q_{n-2}}{(M+1)\theta^{2}q_{n-1}+\theta q_{n-2}}\right)\] \[< \dfrac{1}{q_{n-1}(q_{n-1}+\theta q_{n-2})}\left(1-\dfrac{1}{ \theta^{2}(M+2)}\right)\] \[= \dfrac{1}{q_{n-1}(q_{n-1}+\theta q_{n-2})}\left(1-\dfrac{m}{M+2} \right).\]
Multiplying (4.10) with \((k\theta q_{n-1}+q_{n-2})^{1-s}\left((k+1)\theta q_{n-1}+q_{n-2}\right)^{1-s}\) we get
\[\sum_{k=m}^{M}\dfrac{(k\theta q_{n-1}+q_{n-2})^{1-s}\left((k+1) \theta q_{n-1}+q_{n-2}\right)^{1-s}}{(k\theta q_{n-1}+q_{n-2})\left((k+1) \theta q_{n-1}+q_{n-2}\right)}\leq\sum_{k=m}^{M}\dfrac{(M\theta q_{n-1}+q_{n-2 })^{1-s}\left((M+1)\theta q_{n-1}+q_{n-2}\right)^{1-s}}{(k\theta q_{n-1}+q_{n- 2})\left((k+1)\theta q_{n-1}+q_{n-2}\right)}.\]
Since \(q_{n-2}\leq\theta q_{n-1}\) and \(q_{n-2}<Mq_{n-2}\) we get
\[(M\theta q_{n-1}+q_{n-2})^{1-s}\left((M+1)\theta q_{n-1}+q_{n-2} \right)^{1-s} < (M\theta q_{n-1}+M\theta q_{n-1})^{1-s}\left((M+1)\theta q_{n-1}+ (M+1)\theta^{2}q_{n-2}\right)^{1-s}\] \[= (2M\theta q_{n-1})^{1-s}\left((M+1)\theta\right)^{1-s}(q_{n-1}+ \theta q_{n-2})^{1-s}.\]
Now by (4.10) we get
\[\sum_{k=m}^{M}\dfrac{1}{(k\theta q_{n-1}+q_{n-2})^{s}\left((k+1) \theta q_{n-1}+q_{n-2}\right)^{s}} < \dfrac{(2M\theta q_{n-1})^{1-s}\left((M+1)\theta)^{1-s}(q_{n-1}+ \theta q_{n-2})^{1-s}}{q_{n-1}(q_{n-1}+\theta q_{n-2})}\left(1-\dfrac{m}{M+2}\right)\] \[= \dfrac{2^{1-s}(M(M+1))^{1-s}\left(\theta^{2}\right)^{1-s}}{q_{n-1 }^{s}(q_{n-1}+\theta q_{n-2})^{s}}\left(1-\dfrac{m}{M+2}\right).\]
Now to obtain (4.9) we only have to show that
\[\left(\dfrac{2M(M+1)}{m}\right)^{1-s}\left(1-\dfrac{m}{M+2}\right)\leq 1 \tag{4.11}\]
which is equivalent to
\[(1-s)\log\dfrac{2M(M+1)}{m}\leq-\log\left(1-\dfrac{m}{M+2}\right).\]
Since \(x\leq-\log(1-x)\), for all \(x\in(0,1)\), by choosing \((1-s)\log\frac{2M(M+1)}{m}=\frac{m}{M+2}\), and assuming that \(\frac{m}{M+2}\in(0,1)\), we get (4.11) for \(s=1-\frac{m}{M+2}\frac{1}{\log\frac{2M(M+1)}{m}}\), with \(M\geq m\).
From Proposition 4.2 and Proposition 4.4 we obtain the following result which significantly strengthens Jarnik's result (see (4.3)).
**Proposition 4.5**.: _For any \(M>2m+1\)_
\[1-\frac{2(m+1)}{M+1}\frac{1}{\log(m+1)}\leq\dim_{H}(E_{M})\leq 1-\frac{m}{M+2} \frac{1}{\log\frac{2M(M+1)}{m}}. \tag{4.12}\]
_In particular, the set_
\[E=\left\{x\in\Omega:\sup_{n\geq 1}a_{n}(x)<+\infty\right\}\]
_is of Hausdorff dimension \(1\)._
**Remark 4.6**.: _We focus on the case of RCFs when \(\theta=1\), thus for \(m=1\)\((\theta^{2}=1/m)\)._
1. _Firstly, we obtained less restrictive conditions for_ \(M\) _compared to Jarnik._
2. _Secondly, our estimates for the lower and upper bounds are much better than those obtained by Jarnik for all values of_ \(M>8\)_; Table 4.1 is very suggestive._
3. _In Table_ 4.2 _we also collect some values of the lower and upper bounds for different values of_ \(m\geq 2\) _and_ \(M>2m+1\)_._
\begin{table}
\begin{tabular}{|l|l|l|l|l|} \hline \(M\) & Lower bound (Jarnik) & Lower bound & Upper bound & Upper bound (Jarnik) \\ \hline & \(1-\frac{4}{M\log 2}\) & \(1-\frac{4}{M+1}\frac{1}{\log 2}\) & \(1-\frac{1}{M+2}\frac{1}{\log 2M(M+1)}\) & \(1-\frac{1}{8M\log M}\) \\ \hline
9 & 0.358802204 & 0.422921983 & 0.982493771 & 0.993678894 \\ \hline
100 & 0.942292198 & 0.942863562 & 0.999011047 & 0.999728565 \\ \hline
10000 & 0.999422922 & 0.999422979 & 0.999994769 & 0.999998642 \\ \hline \end{tabular}
\end{table}
Table 4.1:
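The bounds in (4.12) are straightforward to evaluate; the following sketch (not from the paper) computes them for given \(m\) and \(M>2m+1\) and, for \(m=1\), reproduces the values listed in Table 4.1.

```python
# Sketch (not from the paper) evaluating the bounds of Proposition 4.5.
import math

def dimension_bounds(m, M):
    lower = 1 - 2 * (m + 1) / ((M + 1) * math.log(m + 1))
    upper = 1 - m / ((M + 2) * math.log(2 * M * (M + 1) / m))
    return lower, upper

for M in (9, 100, 10000):
    lo, up = dimension_bounds(1, M)
    print(f"m=1, M={M}: {lo:.9f} <= dim_H(E_M) <= {up:.9f}")
```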
## 5 The main result
In this section, we prove our second goal of this paper. For this, we need the following lemmas.
**Lemma 5.1**.: \(\dim_{H}(E(0))=1\)_._
Proof.: It is clear that \(E\subset E(0)\), where \(E\) is as in Proposition 4.5. Since \(\dim_{H}(E)=1\), it follows that \(\dim_{H}(E(0))=1\).
In the sequel let us study the case: \(\eta>0\). Define a sequence \(\{n_{k}\}_{k\geq 1}\subset\mathbb{N}_{+}\) satisfying \(n_{k}=(k+1)^{2}\) for any \(k\geq 1\). For any \(M\in\mathbb{N}_{m}\) and \(\eta>0\) let
\[E_{M}(\eta)=\left\{x\in\Omega:a_{k^{2}}(x)=\left\lfloor\frac{\eta k^{2}}{\log \log k^{2}}\right\rfloor\text{ for all }k\geq 2\text{ and }m\leq a_{i}(x)\leq M\text{ for }i\neq n_{k}\text{ for any }k\geq 1 \right\}.\]
**Lemma 5.2**.: _For any \(M\in\mathbb{N}_{m}\) and \(\eta>0\), \(E_{M}(\eta)\subset E(\eta)\)._
Proof.: Choose \(k_{0}\) large enough such that \(\left\lfloor\frac{\eta k^{2}}{\log\log k^{2}}\right\rfloor\geq M\) and \(\frac{k^{2}}{\log\log k^{2}}\) is monotone increasing for \(k\geq k_{0}\). Fix \(x\in E_{M}(\eta)\), for any \(n\geq n_{k_{0}}\), there exists a positive integer \(k\geq k_{0}\) such that \((k+1)^{2}=n_{k}\leq n<n_{k+1}=(k+2)^{2}\). Thus
\[L_{n}(x)=\max\{a_{1}(x),\ldots,a_{n}(x)\}=\max\left\{a_{(k+1)^{2}}(x),M\right\} =\left\lfloor\frac{\eta\left(k+1\right)^{2}}{\log\log(k+1)^{2}}\right\rfloor.\]
Since
\[\frac{L_{n}(x)\log\log n}{n}\geq\left\lfloor\frac{\eta\left(k+1\right)^{2}}{ \log\log(k+1)^{2}}\right\rfloor\frac{\log\log\left((k+2)^{2}-1\right)}{(k+2)^{ 2}-1}\]
and
\[\frac{L_{n}(x)\log\log n}{n}\leq\left\lfloor\frac{\eta\left(k+1\right)^{2}}{ \log\log(k+1)^{2}}\right\rfloor\frac{\log\log\left((k+1)^{2}\right)}{(k+1)^{ 2}},\]
we get
\[\lim_{n\to\infty}\frac{L_{n}(x)\log\log n}{n}=\eta.\]
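A quick numerical illustration (not from the paper) of the computation above: for the digit pattern defining \(E_{M}(\eta)\), the largest digit among the first \(n\) positions sits at the largest square position not exceeding \(n\), and the ratio \(L_{n}(x)\log\log n/n\) indeed approaches \(\eta\). The values of \(\eta\) and \(M\) below are arbitrary.

```python
# Numerical illustration (not from the paper) of Lemma 5.2; eta and M are arbitrary.
import math

eta, M = 2.0, 10
for n in (10**3, 10**5, 10**7):
    s = math.isqrt(n)                                  # largest square position <= n is s^2
    L_n = max(M, math.floor(eta * s**2 / math.log(math.log(s**2))))
    print(n, L_n * math.log(math.log(n)) / n)          # tends to eta = 2.0
```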
In order to estimate the Hausdorff dimension of \(E_{M}(\eta)\), we shall introduce for any \(n\geq 1\), the sets \(A_{n}\), described as follows:
\[A_{n} =\left\{(\alpha_{1},\ldots,\alpha_{n})\in\mathbb{N}_{m}^{n},\, \alpha_{(k+1)^{2}}=\left\lfloor\frac{\eta\left(k+1\right)^{2}}{\log\log(k+1)^ {2}}\right\rfloor\text{ for any }k\right.\] \[\left.\text{ satisfying }(k+1)^{2}\leq n\text{ and }m\leq\alpha_{i}\leq M \text{ for }1\leq i\neq n_{k}\leq n\right\}.\]
We call \(I_{0}=[0,\theta]\) the basic interval of order \(0\) and for any \(n\geq 1\) and \((\alpha_{1},\ldots,\alpha_{n})\in A_{n}\), we call \(I_{n}(\alpha_{1},\ldots,\alpha_{n})\) the basic interval of order \(n\). Define
\[J_{n}(\alpha_{1},\ldots,\alpha_{n})=\bigcup_{\alpha_{n+1}}I_{n+1}(\alpha_{1}, \ldots,\alpha_{n},\alpha_{n+1}) \tag{5.1}\]
a fundamental interval of order \(n\), where the union is taken over all \(\alpha_{n+1}\) such that \((\alpha_{1},\ldots,\alpha_{n},\alpha_{n+1})\in A_{n+1}\). Consequently
\[E_{M}(\eta)=\bigcap_{n\geq 1}\ \bigcup_{(\alpha_{1},\ldots,\alpha_{n})\in A_{n}}I_{ n}(\alpha_{1},\ldots,\alpha_{n})=\bigcap_{n\geq 1}\ \bigcup_{(\alpha_{1},\ldots,\alpha_{n})\in A_{n}}J_{n}(\alpha_{1},\ldots, \alpha_{n}). \tag{5.2}\]
For any \(n\geq 1\), let \(c(n)=\#\left\{k\in\mathbb{N}:(k+1)^{2}\leq n\right\}\). For any \((\alpha_{1},\ldots,\alpha_{n})\in A_{n}\), let \(\overline{(\alpha_{1},\ldots,\alpha_{n})}\) be the block obtained by eliminating the terms \(\left\{\alpha_{n_{i}}:1\leq i\leq c(n)\right\}\) in \((\alpha_{1},\ldots,\alpha_{n})\); let \(\overline{[\alpha_{1}\theta,\ldots,\alpha_{n}\theta]}\) be the finite \(\theta\)-expansion corresponding to \(\overline{(\alpha_{1},\ldots,\alpha_{n})}\). Since the length of the block \(\overline{(\alpha_{1},\ldots,\alpha_{n})}\) is \(n-c(n)\), let us consider
\[\overline{q}_{n}(\alpha_{1},\ldots,\alpha_{n})=q_{n-c(n)}\overline {(\alpha_{1},\ldots,\alpha_{n})},\] \[\overline{I}_{n}(\alpha_{1},\ldots,\alpha_{n})=I_{n-c(n)} \overline{(\alpha_{1},\ldots,\alpha_{n})}.\]
It is easy to see that
\[c(n)<\sqrt{n}\ \text{and}\ \overline{(\alpha_{1},\ldots,\alpha_{n})}\in \mathbb{N}_{m,M}^{n-c(n)}, \tag{5.3}\]
where \(\mathbb{N}_{m,M}:=\{m,m+1,\ldots,M\}\).
**Lemma 5.3**.: _For any \(0<\varepsilon<1\), there exists \(N_{0}=N_{0}(\varepsilon)\) such that for any \(n\geq N_{0}\) and \((\alpha_{1},\ldots,\alpha_{n})\in A_{n}\), we have_
\[|I_{n}(\alpha_{1},\ldots,\alpha_{n})|\geq\frac{1}{\left(1+\theta^{2}\right) \theta^{1+\varepsilon}}\left|\overline{I}_{n}(\alpha_{1},\ldots,\alpha_{n}) \right|^{1+\varepsilon}.\]
Proof.: For \((\alpha_{1},\ldots,\alpha_{n})\in A_{n}\), if \(n_{k}\leq n<n_{k+1}\) we have
\[\alpha_{n_{k}}=\left\lfloor\frac{\eta\left(k+1\right)^{2}}{\log\log(k+1)^{2}}\right\rfloor:=\beta_{k+1}.\]
Since \(\{\beta_{k}\}_{k\geq 1}\) is an increasing positive integer sequence satisfying \(\beta_{k}\to\infty\) as \(k\to\infty\) and \(\lim_{k\to\infty}(\beta_{k+1})^{\frac{k+1}{n_{k}}}=1\), it follows that
\[\lim_{k\to\infty}(\alpha_{n_{k}}+m)^{\frac{k+1}{n_{k}}}=1.\]
Thus, there exists an integer \(k_{0}\) such that
\[(\alpha_{n_{k}}+m)^{\frac{k+1}{n_{k}}}<(m+1)^{\frac{\varepsilon}{4}} \tag{5.4}\]
for any \(k\geq k_{0}\). Assume that \(n_{k_{c(n)}}\leq n<n_{k_{c(n)}+1}\) for some \(k_{c(n)}\geq k_{0}\) and \(k_{0}\) is also chosen to be sufficiently large such that
\[(n_{k_{c(n)}}-k_{c(n)}-3)\varepsilon>n_{k_{c(n)}}\cdot\frac{\varepsilon}{2} \tag{5.5}\]
which is ensured by \(\lim_{k\to\infty}\frac{k+1}{n_{k}}=0\).
Then by Lemma 3.1(i), (5.3), (5.4) and (5.5), it yields that
\[\overline{q}_{n}^{2\varepsilon}=q_{n-c(n)}^{2\varepsilon}\geq(m+1)^{(n-c(n)-1)\varepsilon}\geq(m+1)^{(n_{k_{c(n)}}-k_{c(n)}-3)\varepsilon}\geq(m+1)^{n_{k_{c(n)}}\cdot\frac{\varepsilon}{2}}>(\alpha_{n_{k_{c(n)}}}+m)^{2(k_{c(n)}+1)}>(\alpha_{n_{k_{c(n)}}}+m)^{2c(n)}. \tag{5.6}\]
Take \(N_{0}=n_{k_{0}}\). For any \(n\geq N_{0}\), since the sequence \(\{\alpha_{n_{k}}\}_{k\geq 1}\) is increasing, we have by Lemma 3.1(ii), (3.3) and (5.6)
\[|I_{n}(\alpha_{1},\ldots,\alpha_{n})|\geq\frac{\theta}{\left(1+\theta^{2}\right)q_{n}^{2}(\alpha_{1},\ldots,\alpha_{n})}\geq\frac{\theta}{\left(1+\theta^{2}\right)}\frac{1}{q_{n-c(n)}^{2}\overline{(\alpha_{1},\ldots,\alpha_{n})}\left[\left(\alpha_{n_{k_{1}}}+m\right)\left(\alpha_{n_{k_{2}}}+m\right)\cdots\left(\alpha_{n_{k_{c(n)}}}+m\right)\right]^{2}\cdot\theta^{2c(n)}}\]
\[\geq\frac{\theta}{\left(1+\theta^{2}\right)\theta^{2c(n)}}\frac{1}{q_{n-c(n)}^{2}\overline{(\alpha_{1},\ldots,\alpha_{n})}\left(\alpha_{n_{k_{c(n)}}}+m\right)^{2c(n)}}\geq\frac{1}{\left(1+\theta^{2}\right)\theta^{2c(n)}}\frac{\theta}{\left(q_{n-c(n)}^{2}\overline{(\alpha_{1},\ldots,\alpha_{n})}\right)^{1+\varepsilon}}\]
\[=\frac{1}{\left(1+\theta^{2}\right)\theta^{2c(n)}}\frac{1}{\theta^{\varepsilon}}\left(\frac{\theta}{q_{n-c(n)}^{2}\overline{(\alpha_{1},\ldots,\alpha_{n})}}\right)^{1+\varepsilon}\geq\frac{1}{\left(1+\theta^{2}\right)\theta^{2c(n)}\theta^{\varepsilon}}\left|\overline{I}_{n}(\alpha_{1},\ldots,\alpha_{n})\right|^{1+\varepsilon}\]
\[\geq\frac{1}{\left(1+\theta^{2}\right)\theta\cdot\theta^{\varepsilon}}\left|\overline{I}_{n}(\alpha_{1},\ldots,\alpha_{n})\right|^{1+\varepsilon}=\frac{1}{\left(1+\theta^{2}\right)\theta^{1+\varepsilon}}\left|\overline{I}_{n}(\alpha_{1},\ldots,\alpha_{n})\right|^{1+\varepsilon}.\]
Without loss of generality for any two different \(x,y\in E_{M}(\eta)\), \(x<y\), there exists a greatest integer, say \(n\), such that \(x,y\) are contained in the same basic interval of order \(n\). Thus there exist \(\alpha_{1},\ldots,\alpha_{n}\in\mathbb{N}_{m}\) and \(u_{n+1}\neq v_{n+1}\) such that \((\alpha_{1},\ldots,\alpha_{n},u_{n+1})\in A_{n+1}\), \((\alpha_{1},\ldots,\alpha_{n},v_{n+1})\in A_{n+1}\) and \(x\in I_{n+1}(\alpha_{1},\ldots,\alpha_{n},u_{n+1})\), \(y\in I_{n+1}(\alpha_{1},\ldots,\alpha_{n},v_{n+1})\) respectively. Since
\[I_{n+1}(\alpha_{1},\ldots,\alpha_{n},u_{n+1})\cap E_{M}(\eta) = J_{n+1}(\alpha_{1},\ldots,\alpha_{n},u_{n+1})\cap E_{M}(\eta),\] \[I_{n+1}(\alpha_{1},\ldots,\alpha_{n},v_{n+1})\cap E_{M}(\eta) = J_{n+1}(\alpha_{1},\ldots,\alpha_{n},v_{n+1})\cap E_{M}(\eta),\]
we have
\[x\in J_{n+1}(\alpha_{1},\ldots,\alpha_{n},u_{n+1}),\quad y\in J_{n+1}(\alpha_{1},\ldots,\alpha_{n},v_{n+1}).\]
Consequently, \(y-x\) is greater than or equal to the gap between \(J_{n+1}(\alpha_{1},\ldots,\alpha_{n},u_{n+1})\) and \(J_{n+1}(\alpha_{1},\ldots,\alpha_{n},v_{n+1})\). Notice that \(n+1\neq n_{k}\) for any \(k\in\mathbb{N}\), i.e., \(m\leq u_{n+1}\neq v_{n+1}\leq M\), otherwise \(u_{n+1}=\left\lfloor\frac{\eta\left(k+1\right)^{2}}{\log\log(k+1)^{2}}\right\rfloor\) for some \(k\geq 1\), which contradicts \(u_{n+1}\neq v_{n+1}\).
**Lemma 5.4**.: \(y-x\geq\mathcal{K}(\theta,M)\cdot|I_{n}(\alpha_{1},\ldots,\alpha_{n})|\)_, where_
\[\mathcal{K}(\theta,M):=\frac{m}{\theta\left(M(M+1)+(M+1)\theta+m\right)\left( M+\theta+1\right)}.\]
Proof.: We assume that \(n\) is even. We can proceed in the same way when \(n\) is odd. The proof is divided into two parts.
I) \(n+2=n_{k}\) for some \(k\geq 1\).
By (3.1), \(y-x\) is greater than the distance between \(J_{n+1}(\alpha_{1},\ldots,\alpha_{n},u_{n+1})\)'s right end point \(\left[\alpha_{1}\theta,\ldots,\alpha_{n}\theta,u_{n+1}\theta,\left(\left\lfloor \frac{\eta\left(k+1\right)^{2}}{\log\log(k+1)^{2}}\right\rfloor+1\right)\theta\right]\) and \(J_{n+1}(\alpha_{1},\ldots,\alpha_{n},v_{n+1})\)'s left end point
\(\left[\alpha_{1}\theta,\ldots,\alpha_{n}\theta,v_{n+1}\theta,\left\lfloor\frac{\eta(k +1)^{2}}{\log\log(k+1)^{2}}\right\rfloor\theta\right]\). Thus we obtain
\[y-x\geq\left|\frac{\left(u_{n+1}\theta+\frac{1}{\left(\left\lfloor\frac{\eta(k+1)^{2}}{\log\log(k+1)^{2}}\right\rfloor+1\right)\theta}\right)p_{n}+\theta p_{n-1}}{\left(u_{n+1}\theta+\frac{1}{\left(\left\lfloor\frac{\eta(k+1)^{2}}{\log\log(k+1)^{2}}\right\rfloor+1\right)\theta}\right)q_{n}+\theta q_{n-1}}-\frac{\left(v_{n+1}\theta+\frac{1}{\left\lfloor\frac{\eta(k+1)^{2}}{\log\log(k+1)^{2}}\right\rfloor\theta}\right)p_{n}+\theta p_{n-1}}{\left(v_{n+1}\theta+\frac{1}{\left\lfloor\frac{\eta(k+1)^{2}}{\log\log(k+1)^{2}}\right\rfloor\theta}\right)q_{n}+\theta q_{n-1}}\right| \tag{5.7}\]
\[\geq\frac{\theta^{2}\frac{M}{M+1}}{q_{n}^{2}\left(M\theta+\frac{1}{(M+1)\theta}+\theta^{2}\right)\left(M\theta+\frac{1}{M\theta}+\theta^{2}\right)}\geq\frac{\theta}{q_{n}^{2}}\cdot\frac{M^{2}\theta^{3}}{\left(M(M+1)\theta^{2}+(M+1)\theta^{3}+1\right)\left(M^{2}\theta^{2}+M\theta^{3}+1\right)}\]
\[=\frac{\theta}{q_{n}^{2}}\cdot\frac{M^{2}}{\theta\left(M(M+1)+(M+1)\theta+\frac{1}{\theta^{2}}\right)\left(M^{2}+M\theta+\frac{1}{\theta^{2}}\right)}\geq\frac{M^{2}}{\theta\left(M(M+1)+(M+1)\theta+m\right)\left(M^{2}+M\theta+m\right)}\left|I_{n}(\alpha_{1},\ldots,\alpha_{n})\right|.\]
II) \(n+2\neq n_{k}\) for any \(k\geq 1\).
Similarly to part I, we have
\[y-x \geq \left|\frac{\left(u_{n+1}\theta+\frac{1}{(M+1)\theta}\right)p_{n }+\theta p_{n-1}}{\left(u_{n+1}\theta+\frac{1}{(M+1)\theta}\right)q_{n}+\theta q _{n-1}}-\frac{\left(v_{n+1}\theta+\frac{1}{m\theta}\right)p_{n}+\theta p_{n-1 }}{\left(v_{n+1}\theta+\frac{1}{m\theta}\right)q_{n}+\theta q_{n-1}}\right| \tag{5.8}\] \[\geq \frac{\theta\left|(u_{n+1}-v_{n+1})\theta+\frac{1}{(M+1)\theta}- \frac{1}{m\theta}\right|}{\left(\left(u_{n+1}\theta+\frac{1}{(M+1)\theta} \right)q_{n}+\theta q_{n-1}\right)\left(\left(v_{n+1}\theta+\frac{1}{m\theta} \right)q_{n}+\theta q_{n-1}\right)}\] \[\geq \frac{\theta\frac{\theta m}{M+1}}{\left(\left(M\theta+\frac{1}{(M +1)\theta}\right)q_{n}+\theta^{2}q_{n}\right)\left(\left(M\theta+\frac{1}{m \theta}\right)q_{n}+\theta^{2}q_{n}\right)}\] \[= \frac{1}{q_{n}^{2}(M+1)\left(M\theta+\frac{1}{(M+1)\theta}+ \theta^{2}\right)\left(M\theta+\frac{1}{m\theta}+\theta^{2}\right)}\] \[= \frac{\theta}{q_{n}^{2}}\cdot\frac{m\theta}{\left(M(M+1)\theta^{ 2}+(M+1)\theta^{3}+1\right)\left(Mm\theta^{2}+m\theta^{3}+1\right)}\] \[= \frac{\theta}{q_{n}^{2}}\cdot\frac{m\theta}{\theta^{2}\left(M(M+ 1)+(M+1)\theta+m\right)\left(M+\theta+1\right)}\] \[\geq \frac{m}{\theta\left(M(M+1)+(M+1)\theta+m\right)\left(M+\theta+1 \right)}\left|I_{n}(\alpha_{1},\ldots,\alpha_{n})\right|.\]
Thus, from (5.7) and (5.8), the proof is complete.
For a fixed \(\eta>0\) and \(M>2m+1\), consider the map \(f:E_{M}(\eta)\to E_{M}\) defined by
\[f(x)=\lim_{n\to\infty}\overline{[\alpha_{1}\theta,\ldots,\alpha_{n}\theta]}\]
for any \(x=[\alpha_{1}\theta,\alpha_{2}\theta,\ldots,\alpha_{n}\theta,\ldots]\in E_{M}(\eta)\).
For any \(0<\varepsilon<1\), by Lemma 5.3 and based on the estimation given by Lemma 5.4, let us define another constant
\[\mathcal{K}_{1}(\theta,M)=\frac{\mathcal{K}(\theta,M)}{1+\theta^{2}}\cdot\min_ {(\alpha_{1},\ldots,\alpha_{N_{0}})\in A_{N_{0}}}\left\{\left|I_{N_{0}}(\alpha _{1},\ldots,\alpha_{N_{0}})\right|\right\},\]
where \(N_{0}\) is as in Lemma 5.3. Now, by Lemma 5.3 and Lemma 5.4, for any two different numbers \(x,y\in E_{M}(\eta)\) we have that
\[|f(x)-f(y)| \leq \left|\overline{I}_{n}(\alpha_{1},\ldots,\alpha_{n})\right|\leq \theta\left(1+\theta^{2}\right)^{\frac{1}{1+\varepsilon}}|I_{n}(\alpha_{1}, \ldots,\alpha_{n})|^{\frac{1}{1+\varepsilon}}\] \[\leq \theta\left(1+\theta^{2}\right)^{\frac{1}{1+\varepsilon}}\cdot \frac{|x-y|^{\frac{1}{1+\varepsilon}}}{(\mathcal{K}(\theta,M))^{\frac{1}{1+ \varepsilon}}}\leq\left(\frac{1+\theta^{2}}{\mathcal{K}(\theta,M)}\right)^{ \frac{1}{1+\varepsilon}}|x-y|^{\frac{1}{1+\varepsilon}}\] \[\leq \frac{1+\theta^{2}}{\mathcal{K}(\theta,M)}|x-y|^{\frac{1}{1+ \varepsilon}},\]
if \(|x-y|<\mathcal{K}_{1}(\theta,M)\). This means that the function \(f\) is \(\frac{1}{1+\varepsilon}\)-Hölder on all intervals whose lengths are bounded by \(\mathcal{K}_{1}(\theta,M)\). By Lemma 2.1, for \(E_{M}(\eta)\cap I\), where \(I\) is an arbitrary interval such that \(|I|\leq\mathcal{K}_{1}(\theta,M)\), we have
\[\dim_{H}\left(f(E_{M}(\eta))\cap I\right)\leq(1+\varepsilon)\dim_{H}\left(E_ {M}(\eta)\cap I\right).\]
It follows immediately that
\[\dim_{H}\left(f(E_{M}(\eta))\right)\leq\dim_{H}\left(E_{M}(\eta)\right) \tag{5.9}\]
since \(\varepsilon\) and \(I\) are arbitrary.
Next we will show that \(f\left(E_{M}(\eta)\right)=E_{M}\). Clearly, \(f\left(E_{M}(\eta)\right)\subset E_{M}\). For each \(y=[\alpha_{1}\theta,\alpha_{2}\theta,\ldots]\in E_{M}\), we can construct the inverse image of \(y\) by inserting in \([\alpha_{1}\theta,\alpha_{2}\theta,\ldots]\) a sequence of big partial quotients \(\{\beta_{k}\}_{k\geq 2}\). Actually, we put
\[f^{-1}(y)=[\alpha_{1}\theta,\alpha_{2}\theta,\alpha_{3}\theta,\beta_{2}\theta,\alpha_{4}\theta,\ldots,\alpha_{8}\theta,\beta_{3}\theta,\alpha_{9}\theta, \ldots,\alpha_{k^{2}-1}\theta,\beta_{k}\theta,\alpha_{k^{2}}\theta,\ldots],\]
where \(\beta_{k}=\left\lfloor\frac{\eta k^{2}}{\log\log k^{2}}\right\rfloor\). Then \(f^{-1}(y)\in E_{M}(\eta)\). By (5.9) and Proposition 4.5, we have
\[\dim_{H}(E_{M}(\eta))\geq\dim_{H}(E_{M})\geq 1-\frac{2(m+1)}{M+1}\frac{1}{\log( m+1)},\]
for any \(M>2m+1\). From Lemma 5.2, we have
\[\dim_{H}(E(\eta))\geq 1-\frac{2(m+1)}{M+1}\frac{1}{\log(m+1)}.\]
Since \(M>2m+1\) is arbitrary, we have \(\dim_{H}(E(\eta))=1\).
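As a purely illustrative aside (not part of the original argument), the insertion pattern that defines the preimage above, with \(\beta_{k}=\left\lfloor\frac{\eta k^{2}}{\log\log k^{2}}\right\rfloor\) placed immediately before the \(k^{2}\)-th partial quotient, can be written out programmatically. The Python sketch below operates on the indices of the partial quotients only (the factor \(\theta\) is omitted, since it does not affect the positions).

```python
import math

def beta(k, eta):
    # Big partial quotient inserted at stage k: floor(eta * k^2 / log log k^2).
    return math.floor(eta * k**2 / math.log(math.log(k**2)))

def insert_big_quotients(alphas, eta):
    """Given the partial quotients (alpha_1, alpha_2, ...) of y, return those of
    the preimage: beta_k is inserted immediately before alpha_{k^2} for
    k = 2, 3, ... (1-based indexing, as in the text)."""
    out, k = [], 2
    for i, a in enumerate(alphas, start=1):
        if i == k * k:          # position k^2 reached: insert beta_k first
            out.append(beta(k, eta))
            k += 1
        out.append(a)
    return out

# First 20 partial quotients of the preimage when alpha_i = 1 and eta = 1
print(insert_big_quotients([1] * 20, eta=1.0))
```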
Therefore, we may state the main result of this section.
**Theorem 5.5**.: _Let \(\eta\geq 0\) and \(E(\eta)\) be as in (1.9). Then_
\[\dim_{H}(E(\eta))=1.\] |
2310.00305 | Towards LLM-based Fact Verification on News Claims with a Hierarchical
Step-by-Step Prompting Method | While large pre-trained language models (LLMs) have shown their impressive
capabilities in various NLP tasks, they are still under-explored in the
misinformation domain. In this paper, we examine LLMs with in-context learning
(ICL) for news claim verification, and find that only with 4-shot demonstration
examples, the performance of several prompting methods can be comparable with
previous supervised models. To further boost performance, we introduce a
Hierarchical Step-by-Step (HiSS) prompting method which directs LLMs to
separate a claim into several subclaims and then verify each of them via
multiple questions-answering steps progressively. Experiment results on two
public misinformation datasets show that HiSS prompting outperforms
state-of-the-art fully-supervised approach and strong few-shot ICL-enabled
baselines. | Xuan Zhang, Wei Gao | 2023-09-30T08:33:04Z | http://arxiv.org/abs/2310.00305v1 | Towards LLM-based Fact Verification on News Claims with a Hierarchical Step-by-Step Prompting Method
###### Abstract
While large pre-trained language models (LLMs) have shown their impressive capabilities in various NLP tasks, they are still under-explored in the misinformation domain. In this paper, we examine LLMs with in-context learning (ICL) for news claim verification, and find that only with 4-shot demonstration examples, the performance of several prompting methods can be comparable with previous supervised models. To further boost performance, we introduce a Hierarchical Step-by-Step (HiSS) prompting method which directs LLMs to separate a claim into several subclaims and then verify each of them via multiple questions-answering steps progressively. Experiment results on two public misinformation datasets show that HiSS prompting outperforms state-of-the-art fully-supervised approach and strong few-shot ICL-enabled baselines.
## 1 Introduction
Misinformation such as fake news often causes confusion or wrong beliefs because it contains claims that are factually false or inaccurate Lazer et al. (2018). To combat misinformation in news claims, stakeholders rely on fact-checking practices for claim verification. Online fact-checking services, such as PolitiFact1 and Snopes2, require laborious manual effort, making it challenging to match the rapid pace at which misinformation is produced.
Footnote 1: [https://www.politifact.com/](https://www.politifact.com/).
Footnote 2: [https://www.snopes.com/](https://www.snopes.com/).
In recent years, deep neural network-based misinformation detection and fact-checking methods have been studied extensively Wang (2017); Rashkin et al. (2017); Popat et al. (2018); Ma et al. (2019); Kotonya and Toni (2020); Atanasova et al. (2020); Yang et al. (2022). In particular, pre-trained language models (PLMs) like BERT Kenton and Toutanova (2019) have demonstrated superior results and surpassed traditional methods on fake news benchmarks Soleimani et al. (2020); Atanasova et al. (2020); Kruengkrai et al. (2021), thanks to their strong ability to understand nuanced context for more accurate decisions. Recently, large pre-trained language models (LLMs) with a massive number of parameters, such as GPT-3.5, have shown impressive performance in various downstream tasks Brown et al. (2020); Wei et al. (2022); Zhou et al. (2022); Press et al. (2022). However, it remains largely unclear how well LLMs can perform on the fact verification task, as this is not at the core of LLM pre-training Brown et al. (2020); Anil et al. (2023).
While it is not practical to directly fine-tune most LLMs, _in-context learning_ (ICL) Brown et al. (2020) offers an alternative way to instruct LLMs to learn new tasks via inference only, conditioning on demonstration examples without any gradient updates. Properly prompted LLMs can carry out logical reasoning traces similar to those in the demonstration examples, which is known as Chain-of-Thought (CoT) reasoning Wei et al. (2022). This generative reasoning process not only enhances the model's performance on tasks such as arithmetic, commonsense, and symbolic reasoning, but also facilitates understanding of the underlying rationale behind the results from LLMs.
Previous research has suggested the importance of reasoning in improving the accuracy and explainability of fake news detection Jin et al. (2022). However, leveraging LLM reasoning in the context of fake news related tasks remains under-explored. In this work, we first evaluate three classical ICL methods, including standard prompting and CoT-based methods for news claim verification. The standard prompting takes in a news claim for LLM to return its factuality judgment on the claim, while CoT additionally generates a series of intermediate verbal reasoning steps in the result. On two fake news benchmark datasets RAWFC Yang et al. (2022) and LIAR Wang (2017), we find that the standard prompting performs comparably well as
strong supervised baselines, but vanilla CoT is worse than standard prompting, which is counter-intuitive. We find two main issues causing the failure of vanilla CoT, as illustrated in Figure 1: (1) Omission of necessary thoughts - vanilla CoT tends to ignore some noteworthy parts of the claim, resulting in inaccurate decisions; (2) Fact hallucination3 - when necessary information is not available, the model tends to generate relevant but unreliable "facts" on its own, which misleads the final prediction.
Footnote 3: This type of hallucination is also referred to as the extrinsic hallucination [1] that cannot be verified with the given source, and the fact-conflicting hallucination [1] that, more broadly, are not faithful to established world knowledge.
To address these issues, we instruct LLMs to decompose a complex claim into smaller subclaims, so that the reasoning follows the fine-grained decomposition. This enables a much more thorough examination of the claim, reducing the risk of overlooking necessary details and strengthening the reasoning along different reasoning chains. This is analogous to breaking down complex questions into subquestions for QA [14] and planning the solution of complex tasks as multiple steps [15]. Additionally, we instruct the LLM to employ a search engine to provide up-to-date external information, aiding the model in reasoning and mitigating the hallucination problem. In light of this, we propose a **H**ierarchical **S**tep-by-**S**tep (**HiSS**) prompting method, which is composed of two main processes: (1) _Claim Decomposition_, which prompts the LLM to split a complex claim into smaller subclaims; (2) _Subclaim Verification_, which prompts the LLM to verify each subclaim step-by-step, employing a search engine to obtain relevant evidence. Our contributions are three-fold:
* We investigate the ability of LLMs with ICL for news claim verification, and find that with only four demonstration examples, LLMs can outperform most of the supervised methods, which indicates that LLMs are a promising tool to combat misinformation.
* We propose a HiSS prompting method to prompt LLM to do fine-grained checking of news claims. Experiments on two public datasets show that HiSS-prompted LLMs outperform traditionally strong fully-supervised models with an improvement of 4.95% on average in macro-average F1 and set a new state-of-the-art for few-shot news claim verification4. Footnote 4: Code and prompts data is available at [https://github.com/jadeCurl/HiSS](https://github.com/jadeCurl/HiSS).
* Compared with previous methods, our HiSS-prompted LLMs provide superior explanations, which are more fine-grained and easier to follow based on automatic and human evaluation.
## 2 Related Work
### Explainable Fake News Detection
Existing research on explainable fake news detection mainly focuses on generating explanations from input evidence. These approaches include generating human-comprehensible explanations for candidate facts based on background knowledge encoded in the form of Horn clauses [1], as well as using attention-based models to highlight relevant factual words [16], news attributes [17] and suspicious users [18]. Such approaches are based on general deep neural networks and knowledge bases rather than language models.
Later, Atanasova et al. (2020) and Kotonya and Toni (2020) propose directly producing veracity explanations based on extractive and abstractive summarization. However, these methods predominantly generate explanations by summarizing fact-checking articles. While such an approach can
Figure 1: An example of claim verification based on vanilla CoT prompting. The claim (underlined) and CoT (in green) are given as a demonstration. The generated CoT (in italics) leads to an incorrect judgment due to (1) omission of necessary thoughts regarding “nukes”, and (2) fact hallucination about the war-loving speeches without specific evidence in the generated CoT (in blue).
somewhat explain fact-checking decisions following human thoughts written in the articles, it does not reason based on raw evidence to form the thoughts for drawing conclusions, which should be the core of fact verification.
### Fact Verification with Language Models
Previous research has utilized PLMs (e.g., BERT and BART) in fake news related tasks. For example, Lee et al. (2020) directly uses the internal knowledge implicitly stored as PLMs' parameters for fact verification. Lewis et al. (2020) proposes a retrieval-augmented approach to endow language models with document retrieval capability, which was applied for selecting relevant evidence in fact extraction and verification. Instead of using language models to provide evidence only, Lee et al. (2021) utilizes LLMs such as GPT-2 (Radford et al., 2019) and their few-shot capability to assess the claim's factuality based on the perplexity of evidence-conditioned claim generation.
Research on utilizing the reasoning capabilities of LLMs, such as CoT-based reasoning, in the misinformation domain is still limited. Recent works (Press et al., 2022; Yao et al., 2023; Jiang et al., 2023) find that combining LLM's reasoning capability with accessibility to external knowledge is helpful to many reasoning-intensive NLP tasks including HotpotQA (Yang et al., 2018) and FEVER (Thorne et al., 2018). In contrast to existing works, our research is motivated by the counter-intuitive observation that CoT under-performs the standard prompting in news claim verification, and explores how to better elicit LLMs to mitigate two salient issues of LLMs in this task. We focus on the verification of real-world news claims, which could be more temporally dynamic and sensitive than FEVER type of claims, necessitating the model to access up-to-date knowledge.
## 3 Our HiSS Prompting Method
In this section, we address the two main issues of LLMs observed in the news claim verification task, i.e., 1) Omission of necessary thoughts and 2) Fact hallucination. We will first raise our specific research questions, and then present our HiSS prompting method.
### Research Questions
For the omission of necessary thoughts, the basic research question we need to address would be:
* _How to instruct LLMs not to overlook any crucial points of a claim in its CoT?_
The context of real-world claims can be complex and deep. For example, the seemingly easy claim _Donald Trump has said he loves war, "including with nukes"_ is actually quite intricate: it not only explicitly states Trump's declared love for both regular and nuclear wars, but also implies that, to verify whether the statement is factual, one has to examine whether and in what circumstances he expressed such passion for both types of wars. Therefore, we propose to prompt LLMs to thoroughly generate all explicit and implicit points that are check-worthy given a claim.
Hallucination is an intrinsic and fundamental problem of LLMs (Ji et al., 2023; Bang et al., 2023). We address it by providing relevant and up-to-date contextual information to LLM as external knowledge, assuming that hallucinations most likely result from the lack of knowledge on the necessary context (Bang et al., 2023). Our specific research question would be:
* _How can we determine when external knowledge is needed during the verification and assist LLM in acquiring the necessary knowledge to mitigate fact hallucination?_
While the decomposition can prompt LLM to raise fine-grained questions, the model may make up responses when background information is lacking. For instance, if the model is unaware of the specific contexts of Trump's wording on "war"5 and "nukes"6, it can lead to factually inaccurate answer, such as "During his term as the 45th President of the US, Donald Trump gave speeches proclaiming his love for war".
Footnote 5: The comment regarding Trump’s “love” of war comes from his speech in Iowa on Nov. 12, 2015. In the speech, Trump theorized that former Iraqi leader Saddam Hussein feigned having weapons of mass destruction to scare Iran, before briefly sidetracking into his feelings on war generally: “This is the Trump theory on war,” he said. “But I’m good at war. I’ve had a lot of wars of my own. I’m really good at war. I love war in a certain way. But only when we win.”
Footnote 6: Trump made his comments about “nukes” in an April 3 interview with Fox News Sunday’s Chris Wallace. Wallace was asking Trump about his suggestion that Japan might be better off with nuclear weapons. Trump suggested that Japan might need to acquire nuclear weapons to defend against neighboring North Korea. It’s worth noting that the comment wasn’t about the United States using nuclear weapons, but about his belief that Japan might be better off if it had nuclear weapons.
In the following subsections, we will describe our Hierarchical Step-by-Step (HiSS) prompting
method. As shown in Figure 2, HiSS involves three processes: (1) _Claim decomposition_, (2) _Subclaim step-by-step verification_, and (3) _Final prediction_.
### Claim Decomposition
At the first level of HiSS, we focus on instructing the model to capture all the explicit points in the original claim and decompose them into subclaims. This level aligns with previous studies (Ousidhoum et al., 2022; Fan et al., 2020) in fact-checking, which found that segmenting the original claim by identifying entities or focal points can facilitate human fact-checkers in making informed judgments. However, these models require the manual collection of datasets for training, whereas our method prompts LLM to do the decomposition guided by only a few demonstration examples.
Specifically, the LLM is prompted with \(K\)-shot (\(K\) is a hyperparameter) demonstration examples (see Table 8 (a) and Table 8 (b) in Appendix B for details) that illustrate the entire verification process, followed by the test claim to be checked, as shown in Level 1 of Figure 2. The demonstration examples show the LLM how to break down a claim \(c_{i}\) into a series of subclaims \([s_{i1},s_{i2},\cdots,s_{iN_{i}}]\) that cover all explicitly expressed check-worthy points. The demonstration examples vary in their complexity, with simple claims not undergoing deep decomposition and more complex claims being decomposed into a few more subclaims. The LLM is expected to follow the demonstrated decomposition approach in accordance with the complexity of the input claim \(c_{i}\); therefore, \(N_{i}\) is determined automatically by the LLM. Figure 2 illustrates that the LLM decomposes the test claim into two subclaims.
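To make the decomposition step concrete, the snippet below sketches how such a call could be issued with the legacy OpenAI completions API. It is a minimal illustration under stated assumptions: the prompt wording, model name, and output parsing are placeholders and not the authors' released implementation.

```python
import openai  # assumes the pre-1.0 openai client and an API key in the environment

DEMONSTRATIONS = "..."  # K-shot examples: claim -> subclaims -> step-by-step Q/A -> verdict

def decompose_claim(claim):
    """Level 1 of HiSS: ask the LLM to split a claim into check-worthy subclaims."""
    prompt = (
        f"{DEMONSTRATIONS}\n\n"
        f"Claim: {claim}\n"
        "Please break down the claim into its check-worthy subclaims:\n"
    )
    resp = openai.Completion.create(
        model="text-davinci-003",  # illustrative; the paper uses GPT-3.5
        prompt=prompt,
        max_tokens=256,
        temperature=0.0,
    )
    text = resp["choices"][0]["text"]
    # Assume one numbered subclaim per line, e.g. "1. ..."; strip the numbering.
    return [line.split(".", 1)[-1].strip()
            for line in text.splitlines() if line.strip()]
```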
Figure 2: Overview of the proposed HiSS model: Original human inputs are in red background, LLM directly generated text is in white, and answers generated based on search results are in green. We start by providing a few-shot demonstration, followed by appending the claim to be checked (underlined). HiSS prompts the LLM to (1) decompose the claim into subclaims; (2) verify each subclaim step-by-step via raising and answering a series of questions. For each question, we prompt LLM to assess if it is confident to answer it or not, and if not, we input the question to a web search engine. The search results are then inserted back into the ongoing prompt to continue the verification process; (3) generate the final prediction. The detailed demonstrations are omitted in this illustration for space which can be found in Table 8 (a) and Table 8 (b) in Appendix B.
### Subclaim-level Step-by-Step Verification
In the second level, the LLM verifies each subclaim obtained from Level 1 individually. Underlying the explicit points conveyed in each subclaim there may be implicit points that are not expressed but require further scrutiny, for example, "Did Trump really say he loves war?", "What is his exact wording?", and "In what context did he express it?" for the first subclaim "Donald Trump has expressed a love for war".
Specifically, we leverage the reasoning capability of LLM to delve deeper into the underlying information needed to validate each subclaim \(s_{ij}\) by generating a series of probing questions \(\{q^{m}_{ij}\}\), each \(q^{m}_{ij}\) corresponding to an implicit point. Similarly, the number of probing questions of each subclaim is determined by LLM automatically with reference to the demonstration example. We adopt a progressive approach to generate the questions. This allows us to adjust the subsequent question generation based on the answers to previous questions and the acquired context information on the chain. As a result, the generated questions become more targeted and in-depth, facilitating a comprehensive analysis of the subclaims.
Once a question \(q^{m}_{ij}\) is generated, the next step is to elicit the corresponding answer \(a^{m}_{ij}\) from LLM. Recent works have found that providing LLMs with access to external knowledge can lead to notable improvements Yao et al. (2023); Jiang et al. (2023). An important consideration is how to prompt LLM to automatically decide when it needs to consult with an external knowledge source (e.g., web search engine), to mitigate fact hallucination. It is hypothesized that LLM can be prompted to assess its own confidence in answering a question, so that we can acquire relevant external information to aid it when it lacks confidence. We resort to Google Search as an external source.
Specifically, LLM follows the specific format of demonstration examples to generate questions: it starts with the prefix "Question:" and presents the generated question \(q^{m}_{ij}\), followed by "Tell me if you are confident...". We control the model to pause at the end of \(q^{m}_{ij}\) by setting the phrase "Tell me if you are confident" as the stop sequence7. This aims to facilitate 1) extracting the text of \(q^{m}_{ij}\), and 2) probing the LLM to assess its confidence in answering the question without additional information. During its pause, we append the following instruction: Tell me if you are confident to answer the question or not. Answer with "yes" or "no":, and set the stop sequence to 'no'. This means that if the LLM responds with 'no', the model will cease to further generate an answer for \(q^{m}_{ij}\), but wait for us to input \(q^{m}_{ij}\) into Google Search API8 to obtain top search results9, so that we can feed them into the LLM for it to generate the answer \(a^{m}_{ij}\). However, if the LLM responds with "yes", the LLM does not halt and proceeds to generate the answer \(a^{m}_{ij}\) to the question. Following the specific format of the demonstration example, after a prior question is addressed, the LLM continues to generate the subsequent question until it ceases to produce any more questions, transitioning then to the final prediction phase.
Footnote 7: The “stop sequence” mechanism is a setting provided by the OpenAI API ([https://help.openai.com/en/articles/5072263-how-do-i-use-stop-sequences](https://help.openai.com/en/articles/5072263-how-do-i-use-stop-sequences)). When a specific word or phrase is set as a “stop sequence”, the model will halt its generation upon encountering that word or phrase, allowing users to control the length or content of the generated output.
Footnote 8: [https://sergapi.com](https://sergapi.com).
Footnote 9: Search results from fact-checking websites are filtered to avoid ground-truth leakage. Specifically, we remove the search results with URLs containing keywords such as “fact check”, and “fact-checking” since the URL of fact-checking websites and fact-check articles on mainstream media, e.g., NY Times ([https://www.nytimes.com/spotlight/fact-checks](https://www.nytimes.com/spotlight/fact-checks).), typically contain such keywords. After filtering, we choose the top-one snippet from the search result to feed into the LLM.
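The pause-and-resume control flow described above is easier to see in code. The sketch below outlines one possible implementation of the Level-2 loop for a single subclaim, using a completions-style call as in the previous snippet; `web_search` is a hypothetical stand-in for the Google Search query (with fact-checking URLs filtered), and the prompt strings paraphrase, rather than reproduce, the authors' prompts.

```python
import openai

CONFIDENCE_PROBE = ('Tell me if you are confident to answer the question or not. '
                    'Answer with "yes" or "no":')

def complete(prompt, stop):
    # Single LLM call; generation halts at the first stop sequence encountered.
    resp = openai.Completion.create(model="text-davinci-003", prompt=prompt,
                                    max_tokens=256, temperature=0.0, stop=stop)
    return resp["choices"][0]["text"]

def web_search(question):
    """Hypothetical wrapper around the Google Search API: drop results whose URLs
    contain 'fact check' / 'fact-checking' and return the top remaining snippet."""
    raise NotImplementedError

def verify_subclaim(context, max_questions=10):
    """Run the step-by-step Q/A loop for one subclaim; `context` already holds
    the demonstrations, the claim, and the subclaim."""
    for _ in range(max_questions):
        # 1) Let the model pose the next probing question, pausing before the probe.
        question = complete(context + "\nQuestion:",
                            stop=["Tell me if you are confident"]).strip()
        if not question:
            break  # no further questions: move on to the final prediction
        context += f"\nQuestion: {question}\n{CONFIDENCE_PROBE}"
        # 2) One call with stop='no': an unconfident model halts immediately,
        #    while a confident model answers 'yes' and keeps generating its answer.
        reply = complete(context, stop=["no", "Question:"]).strip()
        if reply.lower().startswith("yes"):
            context += f" {reply}"
        else:
            # 3) Not confident: fetch external evidence and answer from the snippet.
            snippet = web_search(question)
            context += f" no\nSearch result: {snippet}\nAnswer:"
            context += " " + complete(context, stop=["Question:"]).strip()
    return context
```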
### Final Prediction
Once all the subclaims have been verified, the LLM can make a final prediction. At this point, it outputs "Among [label set], the claim is classified as" before providing the final answer, where [label set] is substituted with the actual label set for a specific dataset. This facilitates the parsing of the final prediction, as the predicted class label will appear after the word "as" in the last output line.
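Since the verdict always follows this fixed phrase, extracting the predicted label reduces to a small string-matching step. The snippet below is one illustrative way to do it, using the LIAR label set as an example.

```python
import re

LIAR_LABELS = {"true", "mostly-true", "half-true", "barely-true", "false", "pants-fire"}

def parse_verdict(last_line):
    """Pull the predicted class out of '... the claim is classified as <label>.'"""
    m = re.search(r"classified as\s+([\w-]+)", last_line, flags=re.IGNORECASE)
    if not m:
        return None
    label = m.group(1).lower()
    return label if label in LIAR_LABELS else None

print(parse_verdict("Among the six labels, the claim is classified as half-true."))
# -> half-true
```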
## 4 Experiments and Results
### Experimental Setup
We conducted experiments on two standard English fake news datasets: 1) **RAWFC**Yang et al. (2022) contains gold labels based on Snopes fact-check articles and follows a three-class classification scheme (True/False/Half); 2) **LIAR**Wang (2017) contains gold labels based on PolitiFact articles with six classes (True/Mostly-true/Half-true/Barely-true/False/Pants-fire). Different from
FEVER Thorne et al. (2018) which uses manually synthesized claims from Wikipedia articles, the claims in these two datasets are based on real-world news. Table 1 displays the statistics of datasets. We use the provided valid-test split of both datasets. The few-shot demonstration examples are randomly selected from the training set.
Following Yang et al. (2022), we use macro-average precision (\(P\)), recall (\(R\)), and \(F_{1}\) (\(F_{1}=\frac{2RP}{R+P}\)) scores as the metrics for evaluation.
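For reference, the macro-averaged scores can be computed with scikit-learn as sketched below; this is a generic illustration rather than the authors' evaluation script (label names here follow the three-class RAWFC scheme).

```python
from sklearn.metrics import precision_recall_fscore_support

def macro_scores(y_true, y_pred):
    # Macro-averaging gives each class equal weight, regardless of its frequency.
    p, r, f1, _ = precision_recall_fscore_support(
        y_true, y_pred, average="macro", zero_division=0)
    return p, r, f1

y_true = ["true", "false", "half", "half", "false"]
y_pred = ["true", "half", "half", "false", "false"]
print(macro_scores(y_true, y_pred))
```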
Supervised baselines.We compare with seven strong supervised models in claim verification: 1) CNN Wang (2017) uses a convolutional neural model to integrate claim information and available metadata features (e.g. subject, speaker, and party) to get the prediction; 2) RNN Rashkin et al. (2017) uses recurrent neural networks to learn representation from word sequences of the claim. 3) DeClarE Popat et al. (2018) considers word embedding from both the claim and searched external information as evidence. 4) SentHAN Ma et al. (2019) proposes a hierarchical attention network to represent external evidence as well as their semantic relatedness with the claim. 5) SBERT-FC Kotonya and Toni (2020) uses Sentence-BERT to encode both claim and evidence for classification. 6) GenFE Atanasova et al. (2020) predicts fact-check results and generates explanations in the multi-task setup. 7) CofCED Yang et al. (2022) uses a hierarchical encoder for text representation and two coarse-to-fine cascaded selectors to extract key evidence for news claim verification.
enables evidence acquisition via web search when necessary, mitigating the risk of hallucination.
* **The performance of few-shot ICL methods varies.** Despite utilizing the same backbone, HiSS surpasses standard prompting, vanilla CoT, and ReAct by 7.95%, 11.65%, and 5.3% in F1 on average, respectively. This observation highlights the importance of specific methods prompting LLM for news claim verification. After conducting an in-depth error analysis on 40 randomly selected samples for vanilla CoT, ReAct, and HiSS11, as shown in Table 3, we classify the errors observed in the verification traces into two categories: (1) _fact hallucination_ and (2) _omission of necessary thoughts_. We find that vanilla CoT exhibits substantial issues of both hallucination and thought omission. Although the Search-Augmented CoT improves its performance, it still falls short of meeting the standard prompting method. This suggests that using the original claim as a search query may end up with insufficiently detailed and informative search results, which explains its subpar performance. In contrast, ReAct, with its ability to autonomously generate search queries and access external knowledge, effectively mitigates failures caused by hallucinations. However, it encounters challenges of thought omission as it may ignore noteworthy points of a claim due to the lack of claim decomposition and a fine-grained step-by-step process. Our HiSS prompting method instead effectively addresses both issues, thanks to its ability to cover both explicit and implicit points of the claim to get checked and the ability to seek necessary external knowledge supported by the search engine.
Footnote 11: We omit standard prompting as it directly outputs the final prediction without providing intermediate or reasoning steps.
### Ablation Study
To analyze the impact of different configurations of HiSS, we conducted an ablation analysis on RAWFC as shown in Figure 3.
Effect of claim decomposition: Firstly, we consider HiSS without claim decomposition, where we directly pose probing questions based on the original claim, bypassing the claim decomposition step
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline \multirow{2}{*}{**Model**} & \multicolumn{3}{c}{**RAWFC**} & \multicolumn{3}{c}{**LIAR**} \\ \cline{2-7} & \(P(\%)\) & \(R(\%)\) & \(F_{1}(\%)\) & \(P(\%)\) & \(R(\%)\) & \(F_{1}(\%)\) \\ \hline _Fully Supervised Models_ & & & & & & \\ CNN (Wang, 2017) & 38.8 & 38.5 & 38.6 & 22.6 & 22.4 & 22.5 \\ RNN (Rashkin et al., 2017) & 41.4 & 42.1 & 41.7 & 24.4 & 21.2 & 22.7 \\ DeClarE\({}^{\dagger}\)(Popat et al., 2018) & 43.4 & 43.5 & 43.4 & 22.9 & 20.6 & 21.7 \\ SentHAN\({}^{\dagger}\)(Ma et al., 2019) & 45.7 & 45.5 & 45.6 & 22.6 & 20.0 & 21.2 \\ SBERT\({}^{\diamondsuit}\)(Kotonya and Toni, 2020) & 51.1 & 46.0 & 48.4 & 24.1 & 22.1 & 23.1 \\ GenFE\({}^{\diamondsuit}\)(Atanasova et al., 2020) & 44.3 & 44.8 & 44.5 & 28.0 & 26.2 & 27.1 \\ CofCED\({}^{\dagger}\)(Yang et al., 2022) & 53.0 & 51.0 & 52.0 & 29.5 & 29.6 & 29.5 \\ \hline _Few-shot Models w/ GPT3.5_ & & & & & & \\ Standard Prompt (Brown et al., 2020) & 48.5 & 48.5 & 48.5 & 29.1 & 25.1 & 27.0 \\ Vanilla CoT (Wei et al., 2022) & 42.4 & 46.6 & 44.4 & 22.6 & 24.2 & 23.7 \\ Search-Augmented CoT\({}^{\dagger}\) & 47.2 & 51.4 & 49.2 & 27.5 & 23.6 & 25.4 \\ ReAct\({}^{\dagger}\)(Yao et al., 2023) & 51.2 & 48.5 & 49.8 & 33.2 & 29.0 & 31.0 \\ HiSS\({}^{\dagger}\) (ours) & **53.4** & **54.4\({}^{*}\)** & **53.9\({}^{*}\)** & **46.8\({}^{*}\)** & **31.3\({}^{*}\)** & **37.5\({}^{*}\)** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Experimental results of claim verification. Bold denotes the best performance. \({}^{*}\) means significantly better than the previous SoTA (CofCED) with \(p<0.01\). \({}^{\dagger}\) uses external information obtained via search engines. \({}^{\diamondsuit}\) uses gold evidence from fact-check reports. Results of fully supervised models are quoted from (Yang et al., 2022).
\begin{table}
\begin{tabular}{l c c c} \hline \hline Error Types & CoT & ReAct & HiSS \\ \hline Fact Hallucination & 43\% & 28\% & 5\% \\ Thoughts Omission & 60\% & 53\% & 13\% \\ \hline \hline \end{tabular}
\end{table}
Table 3: Distribution of errors based on 40 examples from RAWFC, where Vanilla CoT, ReAct, and HiSS give incorrect verification results.
while keeping the step-by-step process. In this setting, the performance of HiSS decreases by 1.5%. This result demonstrates that claim decomposition, which separates the claim based on explicit points, is helpful in improving the final predictions.
Effect of subclaim step-by-step verification: Next, we conduct an ablation study on the step-by-step verification process for each subclaim. Instead of generating probing questions, we let the LLM directly verify subclaims by searching for relevant background information. The LLM then makes predictions based on the subclaims and the retrieved information. Notably, this modification results in a 2.9% performance drop, underscoring the importance of the subclaim step-by-step verification approach for addressing the implicit points associated with each subclaim.
Effect of search strategy: We compare three approaches to explore the effect of whether and how the search function is used: 1) _HiSS w/o search_, which relies solely on the internal knowledge of the LLM, 2) _HiSS always search_, which always queries the search engine to access external knowledge, and 3) _HiSS_, which lets the LLM decide at each step whether to use the search results based on its own confidence (see Section 3.3).
As expected, the performance of _HiSS w/o search_ is poor, achieving only an F1 of 49.8%, indicating that relying solely on the LLM's internal knowledge is insufficient. An interesting finding is that _HiSS_, prompted to decide whether to leverage search results based on the LLM's self-confidence, achieves an F1 of 54.4%, only 1.0% worse than _HiSS always search_. Further inspection reveals that, across the 200 test claims in RAWFC, a total of 934 questions are generated, and the LLM flags 690 of them as ones it is confident to answer. This indicates that in cases where the model is confident, external knowledge from the search engine can only marginally improve its performance, as the model is already capable of providing accurate answers. In contrast, in cases where the model lacks confidence, leveraging search results enhances its performance much more substantially. Assuming that the factuality of web search results can largely be trusted, this suggests that the model demonstrates a reasonably good estimation of its own confidence.
### Human Evaluation
We conduct a human evaluation to study the explanation quality of three different types of explanations: Gold justification given by human journalists, explanations generated by the strongest supervised explainable model CofCED, and the reasoning trajectory generated from the HiSS method. We ask three English-speaking judges to rate these explanations with scores of 1, 2, and 3 (higher is better) according to the following criteria:
* **Coverage**. The explanation and reasoning do not miss any important points that contribute to the check.
* **Non-redundancy**. The explanation and reasoning include only information that is necessary for understanding the claim and fact-checking it, without any redundant or repeated information.
* **Readability**. The explanation and reasoning are straightforward and simple to read.
* **Overall**. The overall quality of the generated explanation and reasoning.
We randomly sample 34 claims from the LIAR test set. Three annotators rate them independently. We compute Krippendorff's \(\alpha\) inter-annotator agreement (IAA) [11] and get \(0.36\) for coverage, \(0.42\) for non-redundancy, \(0.30\) for readability and \(0.38\) for overall.
Table 4 shows the averaged scores of the human evaluation. We find that the gold explanations are slightly better than HiSS-based explanations, while the state-of-the-art automatic explainable claim verification model CofCED is the worst. In particular, for the coverage criterion, HiSS elicits explanations that are on par with the human-written ones, reflecting that HiSS prompts GPT-3.5 to generate more fine-grained checking points and steps. In addition, the non-redundancy score is relatively
Figure 3: Ablation results on RAWFC dataset.
lower, since GPT-3.5 may generate repeated subclaims. We conjecture that this may be due to the intrinsic problem of greedy sampling of language models Holtzman et al. (2019).
## 5 Conclusion and Future Work
In this paper, we study different prompting methods for using LLMs in news claim verification. We introduce a hierarchical step-by-step (HiSS) method that prompts LLM to perform the verification in fine-grained steps, aiming to mitigate the omission of thoughts and fact hallucination. Validated on two public datasets, HiSS prompting improves the performance of LLMs on the task over fully-supervised SoTA models and its strong few-shot ICL-based counterparts. HiSS prompted explanations show superior explainability in their coverage and readability.
In the future, we will build a conversational fact-checking model based on LLMs which can be user-friendly and incorporate human fact-checkers in the loop.
## 6 Limitations
Despite the promising performance of LLMs based on few-shot ICL, fact verification is a challenging research problem given the fact that performance scores are still quite low in general. There are a few limitations. Firstly, in this work, we highlight that all the baselines and our proposed method solely rely on textual information. We focus on an unimodal approach utilizing language models and do not consider the potential assistance from other modalities, such as images and videos, for this task. Although the exploration of multimodal approaches has gradually drawn some research attention Wang et al. (2018); Silva et al. (2021); Bu et al. (2023), it falls outside the scope of our current work.
Meanwhile, the scope of this study is limited to the verification of news claims, which represents only a subset of the broader issue of misinformation. Misinformation encompasses a wide range of false or misleading information, including rumors, fake news articles, and spams Wu et al. (2019). While our focus was specifically on news claims, future research could explore the detection and mitigation of misinformation in other formats.
Further, our proposed prompting method heavily relies on the capabilities of backbone LLMs, which can come with substantial computational costs. Our method leverages the advancements in multi-step reasoning exhibited by these LLMs, necessitating high-performance expectations. However, it is worth noting that most state-of-the-art LLMs are currently not open-source and only available as services. For instance, GPT-3.5 can only be accessed via API. The reliance on such LLMs makes deep model control infeasible, and the need for API access poses challenges in terms of cost.
Finally, while our approach leverages search engines to mitigate the fact hallucination issue in LLMs, it operates under the assumption that pertinent information is readily accessible through web search. However, not all information is indexed or available in search engines. For instance, if someone claims to have witnessed a rare meteorological phenomenon in a small town, such an event might not be reported on major news websites or databases. Such firsthand, non-digitized accounts might not be retrieved or fact-checked. This underscores the limitation of relying solely on search engines as a primary source of external knowledge for fact-checking with LLMs. Another limitation of our method lies in claims that go beyond established world knowledge, where the necessary relevant knowledge is incomplete or even unavailable. This requires the model to infer novel knowledge by formulating and subsequently validating appropriate hypotheses, a task that remains beyond the capabilities of existing technologies.
## Acknowledgement
We thank the anonymous reviewers for their helpful comments during the review of this paper.
\begin{table}
\begin{tabular}{l c c c} \hline \hline & \multicolumn{3}{c}{**RAWFC**} \\ \cline{2-4} & Gold & CofCED & HiSS \\ \hline Readability & **2.75** & 1.63 & 2.44 \\ Coverage & **2.65** & 1.99 & 2.63 \\ Non-redundancy & **2.72** & 1.28 & 2.25 \\ Overall & **2.69** & 1.74 & 2.54 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Average human ratings on explanations of verification for the claims in the RAWFC dataset. Gold, CofCED, and HiSS correspond to the explanations produced by human journalists, CofCED and HiSS, respectively. A higher score means a better explanation. The highest score is in bold, and the second is underlined. |
2302.00114 | Real-time quantitative imaging of RTV silicone pyrolysis | Quantitative microstructural analysis of Room Temperature Vulcanized (RTV)
silicone pyrolysis at high temperatures is presented. RTV is used as a bonding
agent in multiple industries, particularly filling gaps in ablative tiles for
hypersonic (re-)entry vehicles and fire prevention. Decomposition of RTV is
resolved in real time using in situ high-temperature X-ray computed
micro-tomography. Full tomographies are acquired every 90~seconds for four
different linear heating rates ranging from 7 to 54 C/min. The microstructure
is resolved below 5 micro-meters/pixel, allowing for a full quantitative
analysis of the micro-structural evolution and porous network development.
Results are highly heating rate dependent, and are evaluated for bulk sample
volume change, porosity, pore network size, and observed densification from
X-ray attenuation. The outcome of this work is critical to develop
multi-physics models for thermal response. | Collin Foster, Sreevishnu Oruganti, Francesco Panerai | 2023-01-31T21:40:51Z | http://arxiv.org/abs/2302.00114v1 | # Real-time quantitative imaging of RTV silicone pyrolysis
###### Abstract
Quantitative microstructural analysis of Room Temperature Vulcanized (RTV) silicone pyrolysis at high temperatures is presented. RTV is used as a bonding agent in multiple industries, particularly filling gaps in ablative tiles for hypersonic (re-)entry vehicles and fire prevention. Decomposition of RTV is resolved in real time using _in situ_ high-temperature X-ray computed micro-tomography. Full tomographies are acquired every 90 seconds for four different linear heating rates ranging from 7 to 54\({}^{\circ}\)C/min. The microstructure is resolved below 5 \(\mu\)m/pixel, allowing for a full quantitative analysis of the micro-structural evolution and porous network development. Results are highly heating rate dependent, and are evaluated for bulk sample volume change, porosity, pore network size, and observed densification from X-ray attenuation. The outcome of this work is critical to develop multi-physics models for thermal response.
Keywords: RTV, Pyrolysis, Microstructure, X-ray tomography, Synchrotron radiation
## 1 Introduction
High temperature materials are advancing in the fields of nuclear reactors, fire-prevention, and thermal protection systems (TPS) for hypersonic flight. With the increasing applications of such materials, their characterization as a function of temperature is critical to application and modeling efforts. Room Temperature Vulcanizing Silicone (RTV) is a material system noted for its excellent thermal and mechanical properties and resistance to chemical degradation at high temperatures [1, 2, 3]. RTV is also adaptable as a gap-filler for ablative heat shields, seeing widespread applications
in exploration missions such as Mars2020 [4], the Mars Science Laboratory (MSL) heatshield [5], and other sample-return missions [6]. RTV's chemical composition contains iron compounds, which are well known to be flame retardant and smoke suppressant and can delay thermal decomposition, all of which benefit the fire-safety industries [7; 8]. Data recovered from the Space Shuttle missions and MSL describe the extreme aerothermal (re-)entry environments with temperatures over 2500\({}^{\circ}\)C where RTV was used as a bonding agent [9; 10; 11]. RTV and silicone-based coatings are unique in that they are also intumescent, meaning that they swell when heated to create an insulating barrier from the source of the heat [12]. This property of intumescence is helpful in coating applications (such as on wood, plastic, or metal) [13] but must be investigated further when swelling could cause unforeseen mechanical stress on a highly compact TPS [14; 15; 16]. RTV as a gap-filler on tiled TPS is observed to create protuberances in hypersonic flow as it ablates differently from the heat shield tiles, transitioning the flow from laminar to turbulent and resulting in areas of higher heating on the heatshield [17]. This behavior can be non-destructively examined with X-ray computed micro-tomography (\(\mu\)-CT), a method to examine the inner morphology of the material system as it pyrolyzes at high temperatures. \(\mu\)-CT is a proven technique for evaluating complicated porous networks that are chemically decomposing, and the results can then be used to model multi-scale material behavior [18; 19; 20; 21; 22]. Recent work has thoroughly characterized the microstructure of RTV, emphasizing the utility of \(\mu\)-CT as a quantitative tool for examining the formation of voids at high temperatures [23; 24]. However, further investigation must be made into its heating rate dependence on micro-scale (1-100 \(\mu\)m) porosity evolution. Experiments resolving the microstructural evolution of RTV at various heating ramp rates will be used to inform multi-physics codes, to then be incorporated into larger TPS and fire modeling efforts [25; 26; 27; 28; 29; 30]. Thus, this study utilizes synchrotron radiation to resolve the microstructural effects of pyrolysis at a range of heating rates _in situ_. This is quantified by measurements of porosity, total volume change, and X-ray attenuation, followed by further discussion with scanning electron microscopy (SEM) images.
## 2 Materials and Methods
The experiments are conducted at the Advanced Light Source (ALS) synchrotron facility at Lawrence Berkeley National Lab. Data is collected at beamline 8.3.2, where _in situ_ \(\mu\)-CT can be performed using the high flux of X-rays directly from the synchrotron source, combined with environmental chambers to test materials under thermal, mechanical and chemical loads [31; 32]. The inner morphology of the material is evaluated during heating through continuous tomographies generated every 90 seconds. This temporal resolution enables microstructural analysis beyond that of a typical lab-based computed tomography device. The RTV samples are contained in a sealed quartz chamber as shown in Fig. 1a, flushed with inert argon gas continuously cycled at 0.091 g/s to prevent oxidation from occurring at elevated temperatures. The sample enclosure is aligned in the beam path while being surrounded by six confocally arranged infrared lamps aimed at the RTV cylinder (Fig. 1b) to raise the temperature to 1000\({}^{\circ}\)C, similar to previous _in situ_ work [33; 34; 35; 36; 37]. The power supplied to the lamps is controlled such that a range of heating rates can be evaluated: 54\({}^{\circ}\)C/min, 28\({}^{\circ}\)C/min, 14\({}^{\circ}\)C/min, and 7\({}^{\circ}\)C/min. Experiments are performed at
temperatures ranging from 23\({}^{\circ}\)C (virgin material) to 1000\({}^{\circ}\)C (charred). A lens with a resolution of 3.25 \(\mu\)m/pixel is used for the high heating rates (54 and 28\({}^{\circ}\)C/min), and a higher resolution 1.303 \(\mu\)m/pixel for the low heating rates (14 and 7\({}^{\circ}\)C/min) to further evaluate micro-pore evolution at smaller length scales. The samples are shaped to fit completely within the field of view (FOV) of the lens, \(3.3\,\mathrm{mm}\) diameter for the high heating rates and \(2\,\mathrm{mm}\) diameter for the low heating rates. Lastly, the smaller pores (\(<\)\(1\,\mathrm{\SIUnitSymbolMicro m}\)) below the resolution of the \(\mu\)-CT are qualitatively imaged with Scanning Electron Microscopy (SEM) using a FEI ESEM Quanta 450 FEG (FELMI ZFE, Gras, Austria).
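To make the downstream analysis concrete, the snippet below sketches how porosity, mean pore diameter, and a mean-attenuation proxy could be extracted from one reconstructed grayscale volume with NumPy and scikit-image. The thresholding choice, masking, and voxel size are illustrative assumptions and not the authors' processing pipeline.

```python
import numpy as np
from skimage.filters import threshold_otsu
from skimage.measure import label, regionprops

def analyze_volume(vol, voxel_um=3.25):
    """vol: 3D grayscale array cropped to the sample (background removed).
    Returns porosity, mean equivalent pore diameter (um), and mean solid grayscale."""
    t = threshold_otsu(vol)            # separate solid (bright) from pores (dark)
    solid = vol > t
    pores = ~solid
    porosity = pores.sum() / vol.size
    # Label connected pore regions and estimate an equivalent spherical diameter.
    regions = regionprops(label(pores))
    mean_diam_um = voxel_um * np.mean([r.equivalent_diameter for r in regions]) if regions else 0.0
    mean_attenuation = vol[solid].mean()  # grayscale proxy for relative solid density
    return porosity, mean_diam_um, mean_attenuation

# Example on a synthetic volume (random bright solid with dark voids)
rng = np.random.default_rng(0)
demo = (rng.random((64, 64, 64)) > 0.2).astype(np.uint8) * 200
print(analyze_volume(demo))
```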
## 3 Results
Results are presented in Fig. 2 for the four heating rates investigated. Sample cross sections are shown from reconstructed \(\mu\)-CT datasets at increasing temperatures. Beginning with 54\({}^{\circ}\)C/min, Fig. 2a shows the virgin sample with small voids that grow and interconnect into large porosities, shown in its mid-pyrolysis region (500\({}^{\circ}\)C). Along with the clearly visible pore network, small shear porosities (thin, with high eccentricity) develop in the surrounding virgin region, further connecting in the late charring regime (\(>\)800\({}^{\circ}\)C). Next, in Fig. 2b at 28\({}^{\circ}\)C/min, the sample similarly begins with pores from curing that contribute to joining adjacent voids that form during peak pyrolysis. As seen in both Fig. 2a and Fig. 2b, while the sample generally experiences a reduction in height, swelling is observed from the outer circumference where the incident thermal radiation from the lamp heating occurs. This swelling behavior is clearly observed at both high heating rates (28 and 54\({}^{\circ}\)C/min) due to the rapid build-up of internal pressure that causes an anisotropic distribution of new pores to form until the pyrolysis gases are liberated through open pores.
For the low heating rates shown in Fig. 2c and Fig. 2d, smaller sample sizes are used to achieve higher spatial resolution while keeping the sample entirely in the FOV. Less initial defects (voids) are seen, but the same swelling and shrinking behavior is observed as the virgin material pyrolyzes to its charred state. Qualitatively, much less micro-pore development is observed with the lower heating rates (7 and 14\({}^{\circ}\)C/min), indicating a large dependence on heating rate.
Figure 1: _In situ_\(\mu\)-CT at the ALS. a) Controlled environment enclosure housing the sample. b) Sample enclosure installed in the focal spot of 6 IR heating lamps in the ALS \(\mu\)-CT hutch.
As further detailed in the subsequent quantitative analysis of the \(\mu\)-CT data, the tomographies are seen to get brighter in the solid phase, due to an increase in the X-ray attenuation of the sample. The increased X-ray absorption relates to the chemical structure transition from virgin RTV to char, and can be used as an estimate of the shift in material density. Additional video files of the RTV pyrolysis for all heating rates are provided in the supplementary material section.
The quantitative behavior of RTV decomposition for the four heating rates is summarized in Fig. 3. First we examine the overall change in sample height, and we see that there is a percentage loss of 20-30%. The higher heating rates tend to create a more anisotropically distributed pore network, contributing to a less-predictable change in the bulk volume as the pyrolysis progresses.
The incident heat flux is applied radially to the sample, encapsulating a \(5\,\mathrm{mm}\) diameter sphere. As a result, the change in mean sample diameter is shown here to be similar for all samples. The initial solid swelling caused by internal gas pressure build-up is seen in the region leading to peak pyrolysis at 200-450\({}^{\circ}\)C. This swelling is followed by an outgassing of pyrolysis byproducts and a shrinkage of the solid phase giving the reduction in diameter seen for the remainder of the pyrolysis process.
Figure 2: Grayscale output from \(\mu\)-CT.
The results of the structural shifts in height and diameter are then fully realized by computing the change in solid volume during pyrolysis. Across all heating rates the sample is seen to swell at low temperatures (\(\lesssim\)400\({}^{\circ}\)C) during the initial phase of pyrolysis, as a result of gas build-up inside the material. Intumescence is found to be heating rate dependent, with a volume increase up to nearly 15% at the highest heating rate. An increase in heating rate is also observed to delay peak intumescence towards higher temperatures. At higher pyrolysis temperatures (\(\gtrsim\)400\({}^{\circ}\)C) there is a prominent material shrinkage that leads to up to 60% volume reduction for the fully charred material, compared to the virgin sample. The shrinkage rate is higher in the 400-600\({}^{\circ}\)C range, where the bulk of the pyrolysis occurs [24], and decreases in the 600-1000\({}^{\circ}\)C range.
While the higher heating rates create larger, more interconnected porosities shown in Fig. 2, the lower heating rates exhibit dissimilar behavior shown in the porosity measurements. We observed an increase in porosity up to nearly 20% in the 400-600\({}^{\circ}\)C pyrolysis region, with both 28\({}^{\circ}\)C/min and 54\({}^{\circ}\)C/min producing numerous pores that join neighboring voids until eventually reaching the surface. As the reaction progresses beyond peak pyrolysis gas release (\(>\)600\({}^{\circ}\)C), volumetric shrinkage begins to dominate in an overall reduction and collapse of pores within the samples. Similar behavior can be observed for the 14\({}^{\circ}\)C/min sample, as it experiences an increase in porosity through pyrolysis followed by a subsequent decrease thereafter. To be noted is the behavior of the slowest heating rate, 7\({}^{\circ}\)C/min, as porosity
Figure 3: Quantitative analysis of RTV morphological evolution during pyrolysis.
appears to monotonically increase during pyrolysis. This likely is the case as porosity is increasing on the nano-scale but is not resolved by the 1.303 \(\mu\)m/pixel resolution of the lens.
Porosity variation is further examined by the mean pore diameter. Shown to be largely unchanging for the two lower heating rates, they fluctuate slightly with the opening and closing of nano-porosities nearly undetectable by the resolution of the \(\mu\)-CT. Because the two higher heating rates begin with larger samples, they also have larger pre-existing pores that account for larger initial mean pore diameters. Where the 28\({}^{\circ}\)C/min generates more numerous disconnected porosities, the pores of the 54\({}^{\circ}\)C/min case coalescence with neighboring pores that do not shift dramatically in size nor shape.
Lastly, the change from initial material attenuation is plotted for all heating rates to show the change in relative density of the RTV. The change in attenuation was also qualitatively observed in Fig. 2. All the RTV samples experience a small decrease in material attenuation leading up to peak pyrolysis (200-400\({}^{\circ}\)C) during volumetric swelling due to pressure build-up of the air trapped in the initial voids, followed by pyrolysis gas release, and subsequent cross-linking reactions that continue to increase the attenuation monotonically to char. This observed trend in attenuation is subject to further analysis by incorporating mass measurements to get predictions on density transition.
SEM images for the tested materials are shown in Fig. 4. The RTV examined is a two-phase material with a matrix engulfing smaller solid particles deemed whiskers. Previous energy dispersive electron spectroscopy described the whiskers and remaining matrix phase being a slurry of C, O, Si, and Fe [24; 23]. The iron in the RTV matrix transitions from a red Hematite (Fe\({}_{2}\)O\({}_{3}\)) to the black Magnetite (Fe\({}_{3}\)O\({}_{4}\)), with charred Si, C, and SiO\({}_{2}\) matrix when fully-charred; decomposition reactions and the release of volatile compounds of O-Si-C chains account for the mass loss in the RTV [24]. This is first visualized in Fig. 4a-a' for the 7\({}^{\circ}\)C/min case. At low heating rates the resulting microstructure resembles the observations of past TGA investigations at 10\({}^{\circ}\)C/min [24; 23]. More of the matrix material is pyrolyzed
Figure 4: SEM of the 1000\({}^{\circ}\)C charred RTV samples.
and coalesces with the whiskers, exposing porosities at the sub-micron scale not resolved by the \(\mu\)-CT. The same is observed to a lesser degree for the 14\({}^{\circ}\)C/min sample, as there is evidence of the whiskers material still exposed. After the final tomography is taken at 1000\({}^{\circ}\)C, the heating is immediately shut-off and the sample is cooled to room-temperature, quenching any further pyrolysis reaction. Therefore, at the highest heating rates pyrolysis remains incomplete. Moving to 28\({}^{\circ}\)C/min, it is seen that the matrix phase has not fully pyrolyzed, with exposed whisker material visible in 4c'. This trend is further emphasized by the 54\({}^{\circ}\)C/min case, with uniform outgassing leaving behind a more opaque surface with few nano-porosities that would have likely formed given more time at high temperature. Evidence of incomplete pyrolysis of the material is also seen in the plot of X-ray attenuation in Fig. 2, with the lower attenuation values for the high heating rates, indicating pyrolysis not being complete.
A generalized graphic summarizing the key observations of the pyrolysis of RTV at high (28 and 54\({}^{\circ}\)C/min) and low (7 and 14\({}^{\circ}\)C/min) heating rates is shown in Fig. 5. Both non-pristine samples begin similarly with slight volumetric expansion, although the higher heating rates experience a larger degree of new porosity development, whereas the low heating rates generally just expand the pre-existing pores. As pyrolysis continues, the high heating rates experience large internal pressure gradients causing the development of anisotropically distributed pores that interconnect with the pre-existing pores. The low heating rates develop nano-pores as the pyrolysis process nears completion. Lastly, both samples experience large volumetric shrinkage as mass is lost through escaping pyrolysis gas, and the overall porosity decreases as the solid volume collapses.
Figure 5: Decomposition model of RTV.
## Conclusion
In conclusion, this study is a critical contribution to informing the micro- and meso-scale behavior of material systems that utilize RTV in high-temperature environments. A quantitative analysis of the micro-structural development of RTV under a range of relevant pyrolysis heating rates is presented. The results show stark differences between the higher heating rates (28-54\({}^{\circ}\)C/min) and the lower rates (7-14\({}^{\circ}\)C/min) in porosity formation and morphology. Bulk material swelling and shrinking is also observed and is relatively similar across all heating rates, which is critical when evaluating the mechanical performance of the material system. The continuation of this work will investigate the densification of this material utilizing the attenuation measurements, and conduct a similar set of experiments on RTV samples that are free of manufacturing defects. This information will be fed directly to multi-physics reentry and fire codes to evaluate relevant effective transport properties.
## Acknowledgements
This work is supported by a NASA Space Technology Graduate Research Opportunities Award Grant No: 80NSSCC22K1192 and 80NSSC21K1117. This research used resources of the Advanced Light Source, which is a DOE Office of Science User Facility under contract no. DE-AC02-05CH11231. This material is based upon work supported by the U.S. Department of Energy, Office of Science, Office of Workforce Development for Teachers and Scientists, Office of Science Graduate Student Research (SCGSR) program. The SCGSR program is administered by the Oak Ridge Institute for Science and Education for the DOE under contract number DE-SC0014664. Work was also done in part at the Beckman Institute Microscopy Suite at the University of Illinois at Urbana-Champaign. We would also like to thank the support of Dula Parkinson and Harold Barnard at the ALS for their guidance in experiments.
|
2303.17999 | The modelling error in multi-dimensional time-dependent solute transport
models | Starting from full-dimensional models of solute transport, we derive and
analyze multi-dimensional models of time-dependent convection, diffusion, and
exchange in and around pulsating vascular and perivascular networks. These
models are widely applicable for modelling transport in vascularized tissue,
brain perivascular spaces, vascular plants and similar environments. We show
the existence and uniqueness of solutions to both the full- and the
multi-dimensional equations under suitable assumptions on the domain velocity.
Moreover, we quantify the associated modelling errors by establishing a-priori
estimates in evolving Bochner spaces. In particular, we show that the modelling
error decreases with the characteristic vessel diameter and thus vanishes for
infinitely slender vessels. Numerical tests in idealized geometries corroborate
and extend upon our theoretical findings. | Rami Masri, Marius Zeinhofer, Miroslav Kuchta, Marie E. Rognes | 2023-03-31T12:19:15Z | http://arxiv.org/abs/2303.17999v1 | # The modelling error in multi-dimensional time-dependent solute transport models
###### Abstract.
Starting from full-dimensional models of solute transport, we derive and analyze multi-dimensional models of time-dependent convection, diffusion, and exchange in and around pulsating vascular and perivascular networks. These models are widely applicable for modelling transport in vascularized tissue, brain perivascular spaces, vascular plants and similar environments. We show the existence and uniqueness of solutions to both the full- and the multi-dimensional equations under suitable assumptions on the domain velocity. Moreover, we quantify the associated modelling errors by establishing a-priori estimates in evolving Bochner spaces. In particular, we show that the modelling error decreases with the characteristic vessel diameter and thus vanishes for infinitely slender vessels. Numerical tests in idealized geometries corroborate and extend upon our theoretical findings.
_Key words_. Multi-dimensional modeling; time dependent convection-diffusion, solute transport models; modeling error in evolving Bochner spaces.
_AMS Subject Classification_. 35K45, 65G99, 65J08, 65M15, 92-10.
## 1. Introduction
We consider transport of solutes by diffusion, convection, and exchange in a coupled system consisting of networks of slender vessels and their surroundings. This setting is ubiquitous in the human body [5] as exemplified by the transport and exchange of nutrients such as oxygen or glucose, or medical drugs in the vasculature and surrounding tissue, e.g. in skeletal muscle, the liver [52], or the placenta [61]; or conversely, the transport of metabolic by-products from tissue into and through lymphatic vessels [50]. Similar structures and processes are also fundamental in biology, think of e.g. the roots of vascularized plants [32], and in geoscience, e.g. in connection with flow and transport in reservoir wells [21], in the context of CO\({}_{2}\) sequestration [48], or groundwater contamination [43].
Of particular interest, both from a physiological and mathematical point-of-view, is the transport of solutes in, around and out of the human _brain_. Despite decades - even centuries - of research, solute transport and clearance within the human brain remain poorly understood [62, 27, 29]. In contrast to the rest of the body, the brain vasculature is equipped with a blood-brain-barrier, which carefully regulates the exchange of substances between the blood and the surrounding tissue, while the brain parenchyma itself lacks typical lymph vessels. Better understanding of these physiological processes is vital for targeting brain drug
delivery [45, 42] or for unraveling the role of metabolic waste clearance in neurodegenerative disease [56, 29]. Concurrently, in tissue engineering, efforts are currently underway to develop human brain cortical organoids, but crucially rely on vascularization via e.g. microfluidic devices for improved oxygen and nutrient transport as well as cellular signalling [39].
The human brain is composed of soft tissue, is lined and penetrated by networks of blood vessels, and is surrounded by the narrow subarachnoid space filled with cerebrospinal fluid (CSF). The cerebral arteries pulsate in sync with the cardiac cycle and undergo other forms of vasomotion with variations in radii of \(\sim\)1-10% [44], while the entire brain parenchyma deforms by around 1% as the result of a complex interplay between the cardiac and respiratory cycles as well as autoregulation [54, 10]. Perivascular (or paravascular) spaces (PVSs) are spaces surrounding the vasculature on the brain surface or within the brain parenchyma. On the brain surface, these spaces are clearly visible [58], and PVSs persist as the blood vessels branch and penetrate into the brain parenchyma - then known as Virchow-Robin spaces. The extent to which perivascular spaces exist along the length of the vasculature within the brain, even to the capillary level, is debated however [25]. Within the parenchyma, perivascular spaces are often represented as generalized (elliptic) annular cylinders, filled with cerebrospinal or interstitial fluid and bounded by a nearly tight layer of astrocyte endfeet, see e.g. [8, 60, 15] and references therein.
Solutes move by diffusion within the brain tissue [46], and by diffusion and convection within the vasculature [5]. However, to what extent also convection in _perivascular_, _intracellular_ or _extracellular spaces_ play a role in brain solute transport and clearance stand as important open questions. Convective velocity magnitudes are expected to differ by many orders of magnitude between and within the respective compartments: blood may flow at the order of 1 m/s in major cerebral arteries [5], CSF flows in surface perivascular spaces at up to 60 \(\mu\)m/s with Peclet numbers of up to 1000 [44], while flow of interstitial fluid within the tissue is unlikely to exceed 10 \(\mu\)m/min on average [59, 1]. Depending on their ability to cross the blood-brain barrier, solutes may also exchange between the vascular and perivascular spaces, as well as into the surrounding tissue or subarachnoid space. To mathematically and computationally study such transport at the scale of larger vascular networks, our target here is to derive and analyze time-dependent convection-diffusion models with a geometrically-explicit but dimensionally-reduced representation of the (peri)vascular spaces coupled with the full-dimensional surroundings.
As a starting point (more precise details are presented later), consider second-order elliptic equations describing diffusion of the solute concentrations \(c_{v}:\Omega_{v}\to\mathbb{R}\), and \(c_{s}:\Omega_{s}\to\mathbb{R}\):
\[-\nabla\cdot D\nabla c_{s} =f\quad\text{ in }\Omega_{s}, \tag{1.1a}\]
\[-\nabla\cdot D\nabla c_{v} =f\quad\text{ in }\Omega_{v}, \tag{1.1b}\]
where \(D\) is a given effective diffusion coefficient and \(f\) given sources. Assuming that the compartments are separated by a semi-permeable membrane \(\Gamma\) gives the interface condition
\[D\nabla c\cdot\mathbf{n}=\xi[[c]]\quad\text{ on }\Gamma, \tag{1.2}\]
where \(\mathbf{n}\) is the interface normal, \([[\cdot]]\) denotes the jump across the interface(s), and \(\xi\) is a membrane permeability parameter. Now assuming that \(\Omega_{v}\) can be well-represented by its centerline \(\Lambda\) with coordinate \(s\) (to be made more precise later), the coupled 3D-3D problem of (1.1)-(1.2) may be reduced to a coupled 3D-1D problem of the form: find the solute
concentrations \(\bar{c}:\Lambda\to\mathbb{R}\), and \(c:\Omega\to\mathbb{R}\)
\[-\nabla\cdot D\nabla c+\mathcal{C}=f\quad\text{ in }\Omega, \tag{1.3a}\]
\[-\partial_{s}\bar{D}\partial_{s}\bar{c}+\bar{\mathcal{C}}=\bar{f}\quad\text{ on }\Lambda, \tag{1.3b}\]
where \(\mathcal{C}\), \(\bar{\mathcal{C}}\) denote coupling terms depending on the concentrations \(c,\bar{c}\) and on the choice of coupling. Note that flow in a porous medium (Darcy flow) can be described with the same equation structure, with \(c\) instead representing the pore pressure and \(D\) the hydraulic conductance. Modelling, discretization, and applications of 3D-1D problems such as (1.3) have been the subject of active research, especially over the last two decades, with key contributions from e.g. [19, 13, 12, 49, 34, 22, 37, 31, 36] and references therein, to mention but a few. Notably, Laurino and Zunino [40] rigorously analyze the modelling error associated with replacing (1.1)-(1.2) by (1.3), and demonstrate that the modelling error indeed vanishes for infinitely thin vessels.
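To fix ideas, a minimal one-dimensional analogue of the coupled problem (1.1) with the membrane condition (1.2) can be discretized in a few lines; the finite-difference sketch below (with purely illustrative parameter values and domains) shows how the semi-permeable interface produces a concentration jump across the membrane, with interfacial flux \(\xi(c_{v}-c_{s})\).

```python
import numpy as np

# Minimal 1D sketch of (1.1)-(1.2): -D c'' = f in two compartments
# Omega_v = (0, 0.5) and Omega_s = (0.5, 1) meeting at a semi-permeable
# interface x = 0.5.  All parameter values are illustrative only.
N = 50                       # cells per compartment
h = 0.5 / N
Dv, Ds, xi = 1.0, 0.2, 5.0   # diffusivities and membrane permeability
fv, fs = 1.0, 0.0            # source terms

n = N + 1                    # nodes per compartment (interface node included)
A = np.zeros((2 * n, 2 * n))
b = np.zeros(2 * n)
V = lambda i: i              # global index of node i in Omega_v
S = lambda j: n + j          # global index of node j in Omega_s

for i in range(1, N):        # interior nodes: -D c'' = f
    A[V(i), [V(i - 1), V(i), V(i + 1)]] = (-Dv / h**2) * np.array([1.0, -2.0, 1.0])
    b[V(i)] = fv
    A[S(i), [S(i - 1), S(i), S(i + 1)]] = (-Ds / h**2) * np.array([1.0, -2.0, 1.0])
    b[S(i)] = fs

A[V(0), [V(0), V(1)]] = [-1.0, 1.0]   # no flux at x = 0
A[S(N), S(N)] = 1.0                   # c_s = 0 at x = 1

# Membrane at x = 0.5: outgoing flux -Dv c_v' = xi (c_v - c_s), and the same
# flux enters Omega_s, i.e. Ds c_s' = xi (c_s - c_v)  (one-sided differences).
A[V(N), [V(N - 1), V(N), S(0)]] = [Dv / h, -Dv / h - xi, xi]
A[S(0), [S(0), S(1), V(N)]] = [-Ds / h - xi, Ds / h, xi]

c = np.linalg.solve(A, b)
print("jump across membrane c_v - c_s:", c[V(N)] - c[S(0)])
print("membrane flux xi*(c_v - c_s):  ", xi * (c[V(N)] - c[S(0)]))
```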
Here, we consider a parabolic extension of the classical elliptic 3D-1D equations (1.3) accounting also for (i) time-evolving distributions, (ii) convective transport, (iii) moving interfaces, and (iv) both cylindrical and non-convex (annular) vessel networks representing e.g. vascular and perivascular spaces, respectively. We also derive and study a 3D-1D-1D model representing solute transport in coupled tissue, perivascular and vascular spaces. Previously, Possenti, Zunino and coauthors [51] and Koppl, Vidotto and Wohlmuth [33] have studied applications of 3D-1D models for (oxygen) transport including convection but at steady state. Furthermore, Formaggia et al [20] consider coupled Navier-Stokes equations for flow problems in compliant vessels but with a different type of mixed-dimensional coupling. More specifically, we are interested in solute concentrations \(c_{v}(t):\Omega_{v}(t)\to\mathbb{R}\), and \(c_{s}(t):\Omega_{s}(t)\to\mathbb{R}\) satisfying the time-dependent diffusion equations for a.e. \(t>0\):
\[\partial_{t}c_{s}+\boldsymbol{u}\cdot\nabla c_{s}-\nabla\cdot D\nabla c_{s}=f\quad\text{ in }\Omega_{s}(t), \tag{1.4a}\]
\[\partial_{t}c_{v}+\boldsymbol{u}\cdot\nabla c_{v}-\nabla\cdot D\nabla c_{v}=f\quad\text{ in }\Omega_{v}(t), \tag{1.4b}\]
where now additionally \(\boldsymbol{u}\) represents a convective velocity field and the interface \(\Gamma\) between \(\Omega_{s}\) and \(\Omega_{v}\) is allowed to move and deform in time.
Our main findings are as follows.
* We introduce a system of time-dependent convection-diffusion equations in and around embedded networks of moving vessels. Under suitable assumptions on the domain velocity, we prove well-posedness i.e. that suitably regular weak solutions to these equations exist and are unique (Section 3).
* We derive reduced 1D equations, and we formally derive weak formulations of 3D-1D and 3D-1D-1D coupled models of time-dependent solute transport governed by convection, diffusion, and exchange in deforming vascular and/or perivascular networks, and the surrounding domain (Section 4, Section 5). We prove well-posedness of the coupled 3D-1D formulation and show a regularity estimate for the 3D solution. These formulations are widely applicable for modelling transport in vascularized tissue in general and the brain in particular, as well as in vascular plants and similar environments.
* We rigorously estimate the modelling error in evolving Bochner spaces associated with replacing the time-dependent 3D-3D convection-diffusion problem by the 3D-1D problem via a duality argument. We show that a relevant dual problem is well-posed, and that the modelling error decreases with the characteristic vessel diameter \(\epsilon\), and thus vanishes as \(\epsilon\to 0\) (Section 7).
* The presence of deforming networks with annular cross-sections poses key technical challenges relating to classical numerical analysis tools, such as e.g. Poincare and trace inequalities, and extension operators over moving, non-convex domains, which we address separately (Section 6).
These points are prefaced by introducing notation and preliminary results in Section 2, while concluding remarks and outlook relating to e.g. the discretization errors form Section 9.
## 2. Notation and preliminaries
### Function spaces, inner products and norms
Given an open domain \(O\subset\mathbb{R}^{d}\), \(d\in\{1,2,3\}\), and measurable real-valued functions \(f,g\), we let \((f,g)_{O}\) denote the usual \(L^{2}\) inner product. If \(O\) is the whole domain \(\Omega\), then we write \((f,g)=(f,g)_{\Omega}\). The Hilbert space generated by this inner product is denoted by \(L^{2}(O)\) with the usual induced norm \(\|\cdot\|_{L^{2}(O)}\). We also use standard notation for the Sobolev spaces \(W^{m,p}(O)\) and \(H^{m}(O)=W^{m,2}(O)\) for \(m\in\mathbb{N}\) and \(1\leq p\leq\infty\). For a given weight \(w\in L^{\infty}(O)\) with \(w>0\) a.e. in \(O\), we define the weighted \(L^{2}\) inner product \((f,g)_{O,w}=(f,wg)_{O}\) and the respective weighted \(L^{2}\) space:
\[\|f\|_{L^{2}_{w}(O)}=\|w^{1/2}f\|_{L^{2}(O)},\ L^{2}_{w}(O)=\{f:O\to\mathbb{R} \ |\ \|f\|_{L^{2}_{w}(O)}<\infty\}. \tag{2.1}\]
The weighted Sobolev space \(H^{1}_{w}(O)\) is then defined as
\[H^{1}_{w}(O)=\{f\in L^{2}_{w}(O)\ |\ \|\nabla f\|_{L^{2}_{w}(O)}<\infty\}, \tag{2.2}\]
and the weighted inner product and norm are
\[(f,g)_{H^{1}_{w}(O)}=(f,g)_{L^{2}_{w}(O)}+(\nabla f,\nabla g)_{L^{2}_{w}(O)}, \ \|f\|^{2}_{H^{1}_{w}(O)}=\|f\|^{2}_{L^{2}_{w}(O)}+\|\nabla f\|^{2}_{L^{2}_{w}(O )}. \tag{2.3}\]
We omit the subscript/weight \(w\) when \(w=1\).
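As a small computational illustration, the weighted norms (2.1)-(2.3) can be approximated by quadrature; in the sketch below the weight and the function on a 1D domain are arbitrary illustrative choices (in Section 4 the natural weights are the cross-section area \(A\) and perimeter \(P\)).

```python
import numpy as np

# Sketch: the weighted norms of (2.1) and (2.3) on O = (0, L), approximated by
# the composite trapezoidal rule; f and w below are illustrative placeholders.
L = 3.0
s = np.linspace(0.0, L, 1001)
f = np.sin(np.pi * s / L)               # example function on the centerline
w = 1.0 + 0.5 * np.cos(2.0 * s)         # example positive weight, e.g. an area A(s)

trapz = lambda g: float(np.sum(0.5 * (g[1:] + g[:-1]) * np.diff(s)))
norm_L2w = np.sqrt(trapz(w * f**2))                       # ||f||_{L^2_w}
dfds = np.gradient(f, s)
norm_H1w = np.sqrt(norm_L2w**2 + trapz(w * dfds**2))      # ||f||_{H^1_w}
print(norm_L2w, norm_H1w)
```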
Given a Hilbert space \(X\), we denote the dual space of \(X\) by \(X^{\prime}\). The duality pairing between \(X\) and \(X^{\prime}\) is denoted by
\[\langle v^{\prime},v\rangle_{X^{\prime}\times X}.\]
For brevity in notation, we let
\[H^{-1}_{w}(O)=H^{1}_{w}(O)^{\prime},\qquad\langle v^{\prime},v\rangle_{H^{-1} _{w}(O)}=\langle v^{\prime},v\rangle_{H^{-1}_{w}(O)\times H^{1}_{w}(O)}.\]
We also recall the definition of standard Bochner type spaces. For \(t,T>0\), \(f:(t,T)\to X\), we say that \(f\in L^{2}(t,T;X)\) if
\[\|f\|^{2}_{L^{2}(t,T;X)}=\int_{t}^{T}\|f\|^{2}_{X}<\infty. \tag{2.4}\]
If \(f\) is weakly differentiable in time and \(\partial_{t}f\in L^{2}(t,T;X)\), then we say \(f\in H^{1}(t,T;X)\) with the norm:
\[\|f\|^{2}_{H^{1}(t,T;X)}=\|f\|^{2}_{L^{2}(t,T;X)}+\|\partial_{t}f\|^{2}_{L^{2} (t,T;X)}. \tag{2.5}\]
Given two Hilbert spaces \(V\) and \(H\) with \(V\subset H\), we define
\[\mathcal{W}(V,H)=\{v:(0,T)\to V;v\in L^{2}(0,T;V),\ \partial_{t}v\in L^{2}(0,T;H)\}. \tag{2.6}\]
Finally, we will use the space \(C^{0}(0,T;V)\) of continuous \(V\)-valued functions and the space \(\mathcal{D}(0,T;V)\) of infinitely differentiable \(V\)-valued functions.
### The geometrical setting
We consider a generalized annular domain \(\Omega_{v}\) (Figure 1), described in cylindrical coordinates and moving in time:
\[\Omega_{v}(t)=\{\boldsymbol{\lambda}(s)+r\cos(\theta)\boldsymbol{N} (s)+r\sin(\theta)\boldsymbol{B}(s),\\ 0<s<L,\,0\leq\theta\leq 2\pi,\,R_{1}(s,t,\theta)<r<R_{2}(s,t, \theta)\}\subset\mathbb{R}^{3}\]
of length \(L>0\), inner radius \(R_{1}\geqslant 0\), and outer radius \(R_{2}>0\). We refer to the \(s\)-direction as the axial direction. For \(R_{1}=0\), we consider a cylindrical domain where in the above definition, we let \(0\leq r<R_{2}(s,t,\theta)\). In general, \(\Omega_{v}\) represents a vessel segment such as a perivascular space (\(R_{1}>0\)), or blood vessel segment, plant root or borehole (\(R_{1}=0\)). We assume that \(\boldsymbol{\lambda}(s)=[\lambda^{1}(s),\lambda^{2}(s),\lambda^{3}(s)]\) above is a parametrized \(C^{2}\)-regular curve with non-moving centerline \(\Lambda\) defined as \(\Lambda=\{\boldsymbol{\lambda}(s)\}\) for \(s\in(0,L)\), and that \(\|\boldsymbol{\lambda}^{\prime}(s)\|=1\), thus implying that \(s\) is the arc length. The vectors \(\boldsymbol{N}\) and \(\boldsymbol{B}\) are from the Frenet-Serret frame of \(\Lambda\). Throughout the paper, \(\Theta(s,t)\) denotes the cross-section of \(\Omega_{v}(t)\) at \(s\in\Lambda\).
This domain \(\Omega_{v}(t)\) is embedded into a fixed domain \(\Omega\subset\mathbb{R}^{3}\) with (outer) surroundings \(\Omega_{s}=\Omega\backslash B_{R_{2}}\) where \(B_{R_{2}}\) is the outer cylinder given by:
\[B_{R_{2}}(t)=\{\boldsymbol{\lambda}(s)+r\cos(\theta)\boldsymbol{N}(s)+r\sin( \theta)\boldsymbol{B}(s),0<s<L,\,0\leq\theta\leq 2\pi,\,0\leq r<R_{2}(s,t, \theta)\}.\]
We emphasize that, by construction, the surrounding domain \(\Omega_{s}\) does not include the vessel \(\Omega_{v}\) itself nor the inner-most generalized cylinder in the case \(R_{1}>0\). We assume that for all \(t\in[0,T]\), \(\Omega_{v}(t)\) is completely embedded in \(\Omega\); that is,
\[\mathrm{dist}(\partial\Omega_{v}(t),\partial\Omega)>0,\quad\forall t\in[0,T].\]
We denote by \(\Gamma\) the lateral boundary of \(\Omega_{v}\) intersecting the boundary of \(\Omega_{s}(t)\), \(\Gamma=\partial\Omega_{v}\cap\partial\Omega_{s}\), and by \(\Gamma_{0}\) and \(\Gamma_{L}\) the vertical boundary of \(\Gamma\) at \(s=0\) and at \(s=L\) respectively. The unit normal to \(\partial\Omega_{v}\) is denoted by \(\boldsymbol{n}_{v}\), and on \(\Gamma\), \(\boldsymbol{n}_{s}=-\boldsymbol{n}_{v}\). For each cross-section \(\Theta(s,t)\), we label its area \(A(s,t)=|\Theta(s,t)|\) for \(s\in\Lambda\) and \(t\in[0,T]\). We also label the boundary of the lateral cross-section of \(\Theta\) by \(\partial\Theta\), the boundary of the outer circle (at \(r=R_{2}\)) by \(\partial\Theta_{2}\), and (if \(R_{1}>0\)) the boundary of the inner circle (at \(r=R_{1}\)) by \(\partial\Theta_{1}\). We denote by \(P(s,t)=|\partial\Theta_{2}(s,t)|\) the perimeter of the outer circle (representing the interface between the vessel and its outer surroundings).
Figure 1. Geometrical setting. Left: An (annular) cylinder \(\Omega_{v}\) (shown in green) representing a vessel parametrized in terms of the centerline curve \(\Lambda\) (shown in red). The vessel is surrounded by the domain \(\Omega_{s}\) while \(\Gamma\) forms the lateral boundary between the domains (shown in blue). Right: Lateral cross-section of the domain \(\Omega\) showing \(\Omega_{v}\) in green, the inner boundary \(\partial\Theta_{1}\) in red, and the outer boundary \(\partial\Theta_{2}\) or \(\Gamma\) in blue. Note that in the annular cylinder case, the inner-most cylinder, i.e. the extrusion along \(\Lambda\) of the domain bounded by \(\partial\Theta_{1}\), is not part of \(\Omega_{s}\) or \(\Omega_{v}\).
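The parametrization above translates directly into a sampling routine; the following Python sketch constructs points of a generalized annular vessel \(\Omega_{v}(t)\) from a centerline via a numerically computed Frenet-Serret frame, with an illustrative (non-arc-length) centerline and a hypothetical \(\sim\)5% radial pulsation.

```python
import numpy as np

def frenet_frame(lam, s):
    """Tangent, normal, binormal of a centerline sampled at parameter values s."""
    T = np.gradient(lam, s, axis=0)
    T /= np.linalg.norm(T, axis=1, keepdims=True)
    N = np.gradient(T, s, axis=0)
    N /= np.linalg.norm(N, axis=1, keepdims=True)
    B = np.cross(T, N)
    return T, N, B

# Illustrative centerline with non-vanishing curvature (a helical arc); it is
# not re-parametrized by arc length, but the frame only needs directions.
s = np.linspace(0.0, 2.0, 200)
lam = np.stack([np.cos(s), np.sin(s), 0.3 * s], axis=1)
T, N, B = frenet_frame(lam, s)

def vessel_points(t, n_theta=32, n_r=4):
    """Sample Omega_v(t) = {lam(s) + r cos(th) N(s) + r sin(th) B(s), R1 < r < R2}."""
    R1 = 0.05 * np.ones_like(s)                                          # inner radius
    R2 = 0.10 * (1.0 + 0.05 * np.sin(2 * np.pi * t)) * np.ones_like(s)   # ~5% pulsation
    theta = np.linspace(0.0, 2 * np.pi, n_theta, endpoint=False)
    pts = []
    for i in range(len(s)):
        for r in np.linspace(R1[i], R2[i], n_r):
            ring = lam[i] + r * (np.cos(theta)[:, None] * N[i] + np.sin(theta)[:, None] * B[i])
            pts.append(ring)
    return np.concatenate(pts)

print(vessel_points(t=0.25).shape)   # (len(s) * n_r * n_theta, 3)
```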
Note that we consider vessels \(\Omega_{v}\) both of cylinder-type or annular cylinder-type and their (outer) surroundings. The former case is well-suited to represent e.g. transport in the vasculature, roots or geothermal wells. The latter case targets e.g. perivascular transport in the brain, intracranial space, or spinal compartments. In the latter (annular cylinder) case, we only include the outer surroundings in the 3D-3D and reduced 3D-1D model formulations in the subsequent Sections 3-4. The distinction between inner and outer surroundings are motivated by the potentially large jumps in material parameters such as the convective velocity or diffusion coefficient between the inner and outer compartments in applications. Such jumps would be challenging to represent in an extended domain in the 3D-1D setting. These models are thus particularly relevant for perivascular transport with a vascular-perivascular barrier, such as e.g. the blood-brain barrier (BBB) in the human brain. However, the case of vascular-perivascular exchange may also be highly relevant e.g. in connection with a leaky BBB or transport of substances across the BBB. Therefore, we address the extended 3D-3D-3D and 3D-1D-1D problem setting representing coupled tissue, perivascular and vascular transport separately in Section 5.
In Section 4.6, we will also consider an extension of this setting to networks of vessels. We will then consider a network of \(N\) domains \(\Omega_{v,i}\) with center-curves \(\Lambda_{i}=\{\boldsymbol{\lambda}_{i}(s),\ s\in(0,L_{i})\}\) for \(i=1,\ldots,N\). Extending upon the notation introduced above, we then denote \(A_{i}=|\Theta_{i}|\) and \(P_{i}=|\partial\Theta_{2,i}|\) where \(\Theta_{i}\) is the cross-section of \(\Omega_{v,i}\) and \(\partial\Theta_{2,i}\) is the outer boundary of \(\Omega_{v,i}\).
## 3. Transport by convection and diffusion in a moving domain
We are interested in analyzing the coupled transport of a solute in a moving domain governed by diffusion and convection in general, and in a moving vessel and its surroundings in particular. To this end, we introduce a system of coupled convection-diffusion equations (Section 3.1). We may directly consider a more general geometrical setting (Section 3.2) for the weak formulation (Section 3.3) to show that such solutions exist (Section 3.4, Proposition 3.1).
### System of convection-diffusion equations in and around a moving vessel
We consider a moving vessel and assume that the vessel motion \(\Omega_{v}(t)\), and convective velocity fields \(\boldsymbol{u}_{v}(t):\Omega_{v}\to\mathbb{R}^{3}\) and \(\boldsymbol{u}_{s}(t):\Omega_{s}\to\mathbb{R}^{3}\) are prescribed for \(t\in[0,T]\). Our coupled three-dimensional transport boundary-value problem in an Eulerian frame reads as: for a.e. \(t\), find the solute concentrations \(c_{v}(t):\Omega_{v}(t)\to\mathbb{R}\) and \(c_{s}(t):\Omega_{s}(t)\to\mathbb{R}\) such that the following governing equations, interface conditions, boundary conditions and initial conditions hold:
\[\partial_{t}c_{v}-\nabla\cdot(D_{v}\nabla c_{v})+\nabla\cdot(\boldsymbol{u}_{v}c_{v}) =f_{v},\quad\text{in}\ \Omega_{v}(t)\times(0,T], \tag{3.1a}\]
\[\partial_{t}c_{s}-\nabla\cdot(D_{s}\nabla c_{s})+\nabla\cdot(\boldsymbol{u}_{s}c_{s}) =f_{s},\quad\text{in}\ \Omega_{s}(t)\times(0,T], \tag{3.1b}\]
\[(c_{v}\tilde{\boldsymbol{u}}_{v}-D_{v}\nabla c_{v})\cdot\boldsymbol{n}_{v}-\xi(c_{v}-c_{s}) =0,\quad\text{on}\ \Gamma(t)\times(0,T], \tag{3.1c}\]
\[(c_{v}\tilde{\boldsymbol{u}}_{v}-D_{v}\nabla c_{v})\cdot\boldsymbol{n}_{v}+(c_{s}\tilde{\boldsymbol{u}}_{s}-D_{s}\nabla c_{s})\cdot\boldsymbol{n}_{s} =0,\quad\text{on}\ \Gamma(t)\times(0,T], \tag{3.1d}\]
\[(c_{v}\tilde{\boldsymbol{u}}_{v}-D_{v}\nabla c_{v})\cdot\boldsymbol{n}_{v}=(c_{s}\tilde{\boldsymbol{u}}_{s}-D_{s}\nabla c_{s})\cdot\boldsymbol{n}_{s} =0,\quad\text{on}\ (\Gamma_{0}(t)\cup\Gamma_{L}(t))\times(0,T], \tag{3.1e}\]
\[c_{s} =0,\quad\text{on}\ \partial\Omega\times(0,T], \tag{3.1f}\]
\[c_{v}(0)=c_{v}^{0}\quad\text{in}\ \Omega_{v}(0),\qquad c_{s}(0)=c_{s}^{0}\quad\text{in}\ \Omega_{s}(0). \tag{3.1g}\]
If \(\partial\Omega_{v}\backslash\Gamma\neq\emptyset\) (\(R_{1}>0\), an annular domain \(\Omega_{v}\)), we also impose the boundary condition:
\[(c_{v}\tilde{\boldsymbol{u}}_{v}-D_{v}\nabla c_{v})\cdot \boldsymbol{n}_{v}=0\quad\text{on}\,\partial\Omega_{v}(t)\backslash\Gamma(t) \times(0,T]. \tag{3.2}\]
The time derivatives in the above formulation are the Eulerian time derivatives. The parameters \(D_{v}\), \(D_{s}\) are given diffusion tensors in \(\Omega_{v}\) and \(\Omega_{s}\) respectively, while \(f_{v}\) and \(f_{s}\) are given source functions. For \(i\in\{v,s\}\), the relative (net) velocity \(\tilde{\mathbf{u}}_{i}=\mathbf{u}_{i}-\mathbf{w}\) accounts for the velocity of the domain \(\mathbf{w}\), defined below cf. (3.3). The interface condition (3.1c) models the lateral interface between the vessel and its surroundings \(\Gamma\) as a semi-permeable membrane with permeability \(\xi\), while the auxiliary condition (3.1d) enforces conservation of mass. At the vertical boundaries, the condition (3.1e) stipulates no flux, while we keep the concentration fixed and zero (for simplicity) at the outermost boundary \(\partial\Omega\) via (3.1f). The last relations (3.1g) define the initial conditions with given initial states \(c_{v}^{0}:\Omega_{v}\to\mathbb{R}\), \(c_{v}^{0}\in L^{2}(\Omega_{v})\) and \(c_{s}^{0}:\Omega_{s}\to\mathbb{R}\), \(c_{s}^{0}\in L^{2}(\Omega_{s})\).
### Observations on the domain velocity
For the existence result we can weaken our geometrical assumptions on the domains. Precisely, we let \(\Omega(0)\subset\mathbb{R}^{d}\) be a Lipschitz domain, i.e., open, connected and with a Lipschitz boundary and we assume that \(\Omega_{v}(0)\subset\Omega(0)\) is itself a Lipschitz domain and compactly contained in \(\Omega(0)\). In particular, it holds \(\mathrm{dist}(\partial\Omega_{v}(0),\partial\Omega(0))>0\). We measurably partition \(\partial\Omega_{v}(0)\) into two sets that play the role of \(\Gamma(0)\) in (3.1c) and (3.1d) and \(\Gamma_{0}(0)\cup\Gamma_{L}(0)\) in (3.1e), respectively (and will be denoted by the same symbols).
We define moving domains according to the velocity method, see [16]. More precisely, assume that the domain velocity \(\mathbf{w}:\mathbb{R}^{d}\times\mathbb{R}\to\mathbb{R}^{d}\) is smooth and compactly supported. We denote by \(\mathbf{\psi}:\mathbb{R}^{d}\times\mathbb{R}\to\mathbb{R}^{d}\) the flow map of the ODE
\[\begin{split}\partial_{t}\mathbf{\psi}(\mathbf{x},t)&=\bm {w}(\mathbf{\psi}(\mathbf{x},t),t),\\ \mathbf{\psi}(\mathbf{x},0)&=\mathbf{x}.\end{split} \tag{3.3}\]
Standard ODE theory implies that \(\mathbf{\psi}\in C^{\infty}(\mathbb{R}^{d+1})\) and for all fixed \(t\in[0,T]\) the map
\[\mathbf{\psi}_{t}:\mathbb{R}^{d}\to\mathbb{R}^{d},\quad\mathbf{x}\mapsto\mathbf{\psi}(\mathbf{ x},t)\]
is a diffeomorphism. In this notation, the connection between our reference domain and the domains to a later time is given by
\[\Omega_{s}(t)=\mathbf{\psi}_{t}(\Omega_{s}(0)),\quad\Omega_{v}(t)=\mathbf{\psi}_{t}( \Omega_{v}(0)).\]
As \(\Omega_{s}(0)\) and \(\Omega_{v}(0)\) are open, connected and Lipschitz it holds \(\partial\Omega_{v}(t)=\mathbf{\psi}_{t}(\partial\Omega_{v}(0))\) and \(\partial\Omega_{v}(t)\) has Lipschitz boundary, see [28]. An important observation which links the above definitions to the specific setting in subsection 2.2 is now in order. Denoting by \(\det(D\mathbf{\psi}_{t})\) the determinant of the Jacobian matrix of \(\mathbf{\psi}_{t}\), it holds that [47, Section 1.1.1]:
\[\partial_{t}A(s,t)=\partial_{t}\int_{\Theta(s,t)}1=\partial_{t} \int_{\Theta(s,0)}|\det(D\mathbf{\psi}_{t})|\\ =\int_{\Theta(s,0)}\mathbf{\psi}_{-t}(\nabla\cdot\mathbf{w})|\det(D\mathbf{ \psi}_{t})|=\int_{\Theta(s,t)}\nabla\cdot\mathbf{w}=\int_{\partial\Theta(s,t)}\mathbf{ w}\cdot\mathbf{n}. \tag{3.4}\]
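The velocity method and the identity (3.4) are easy to verify numerically for a single cross-section: the sketch below integrates the flow map (3.3) for boundary points of a disc under an illustrative radial velocity \(\mathbf{w}\) and compares \(\partial_{t}A\) with the boundary flux \(\int_{\partial\Theta}\mathbf{w}\cdot\mathbf{n}\); both the velocity field and the initial cross-section are assumptions made for the example only.

```python
import numpy as np
from scipy.integrate import solve_ivp

def w(x, t):
    """Prescribed smooth domain velocity (illustrative radial pulsation)."""
    return 0.1 * np.cos(2 * np.pi * t) * x   # expands/contracts about the origin

theta = np.linspace(0.0, 2 * np.pi, 400, endpoint=False)
X0 = 0.1 * np.stack([np.cos(theta), np.sin(theta)])   # boundary of Theta(s, 0)

def rhs(t, y):                                        # flow map ODE (3.3)
    return w(y.reshape(2, -1), t).ravel()

def area(X):                                          # shoelace formula for |Theta|
    x, y = X
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

t_eval = np.linspace(0.0, 1.0, 51)
sol = solve_ivp(rhs, (0.0, 1.0), X0.ravel(), t_eval=t_eval, rtol=1e-8, atol=1e-10)
A_t = np.array([area(sol.y[:, k].reshape(2, -1)) for k in range(len(t_eval))])

# Compare dA/dt with the flux of w through the boundary at t = 0.5, cf. (3.4).
k = 25
X = sol.y[:, k].reshape(2, -1)
tangent = np.roll(X, -1, axis=1) - X
normal = np.stack([tangent[1], -tangent[0]])          # outward for a ccw curve
flux = np.sum(np.sum(w(X, t_eval[k]) * normal, axis=0))
dAdt = np.gradient(A_t, t_eval)[k]
print(dAdt, flux)                                     # should agree closely
```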
### A weak formulation of the coupled 3D-3D transport model
Let \(i\in\{s,v\}\), for fixed \(t\in I:=[0,T]\) we set \(X_{s}(t)=H^{1}_{\partial\Omega}(\Omega_{s}(t)):=\{c\in H^{1}(\Omega_{s}(t)),\, c|_{\partial\Omega}=0\}\), and \(X_{v}(t)=H^{1}(\Omega_{v}(t))\) and \(H_{i}(t)=L^{2}(\Omega_{i}(t))\). Further, we abbreviate \(X_{i}=(X_{i}(t))_{t\in I}\) and \(H_{i}=(H_{i}(t))_{t\in I}\). To relate the function spaces at time \(t\) to the reference time (and vice versa) we use the pushforward induced by \(\mathbf{\psi}_{t}\), and we define:
\[\phi_{t}:X_{i}(0)\to X_{i}(t),\quad\phi_{t}c_{i}=c_{i}\circ\psi_{t}^{-1}\]
with inverse \(\phi_{-t}=\phi_{t}^{-1}\) given by \(\phi_{-t}c_{i}=c_{i}\circ\psi_{t}\). By the chain rule for Sobolev spaces it can be seen that for all \(t\in[0,T]\) the maps \(\phi_{t}\) are linear homeomorphisms. Now, to define a function space framework we follow [3] and set
\[L^{2}_{X_{i}} =\{c_{i}:[0,T]\to\bigcup_{t\in[0,T]}X_{i}(t)\times\{t\},t\mapsto( \bar{c}_{i}(t),t)\mid\phi_{-(\cdot)}\bar{c}_{i}(\cdot)\in L^{2}(0,T;X_{i}(0))\},\] \[L^{2}_{X^{\prime}_{i}} =\{f_{i}:[0,T]\to\bigcup_{t\in[0,T]}X^{\prime}_{i}(t)\times\{t\},t \mapsto(\bar{f}_{i}(t),t)\mid\phi^{*}_{(\cdot)}\bar{f}_{i}(\cdot)\in L^{2}(0,T; X^{\prime}_{i}(0))\},\]
where \(\phi^{*}_{t}:X_{i}(t)^{\prime}\to X_{i}(0)^{\prime}\) denotes the adjoint map to \(\phi_{t}\). The above spaces are equipped with the norms:
\[\forall c_{i}\in L^{2}_{X_{i}},\ \|c_{i}\|^{2}_{L^{2}_{X_{i}}}=\int_{0}^{T}\|c _{i}\|^{2}_{X_{i}(t)},\quad\forall f_{i}\in L^{2}_{X^{\prime}_{i}},\ \|f_{i}\|^{2}_{L^{2}_{X^{\prime}_{i}}}=\int_{0}^{T}\|f_{i}\|^{2}_{X^{\prime}_{ i}(t)}.\]
Next, we define a weak material derivative, where we specialize the abstract definition of [3] to our case. We say that a function \(c_{i}\in L^{2}_{X_{i}}\) has a weak material derivative \(\dot{c}_{i}\in L^{2}_{X^{\prime}_{i}}\) if it holds
\[\int_{0}^{T}\langle\dot{c}_{i},\eta\rangle_{X^{\prime}_{i}(t)}\, \mathrm{d}t=-\int_{0}^{T}\int_{\Omega_{i}(t)}c_{i}\dot{\eta}\,\mathrm{d}x \mathrm{d}t-\int_{0}^{T}\int_{\Omega_{i}(t)}c_{i}\eta\nabla\cdot\mathbf{w}\, \mathrm{d}x\mathrm{d}t,\quad\forall\eta\in\mathcal{D}_{X_{i}}(0,T), \tag{3.5}\]
where \(\mathcal{D}_{X_{i}}\) is the subset of \(L^{2}_{X_{i}}\) such that \(t\mapsto\phi_{-t}\eta\) is a member of \(\mathcal{D}(0,T;X_{i}(0))\). We are now in a position to define the Sobolev space \(W(X_{i},X^{\prime}_{i})\) used for existence theory
\[W(X_{i},X^{\prime}_{i})=\{c_{i}\in L^{2}_{X_{i}}\mid\dot{c}_{i}\in L^{2}_{X^{ \prime}_{i}}\}.\]
As in the classical case, this space embeds into \(C^{0}_{H_{i}}(0,T)\), which is defined analogously to \(\mathcal{D}_{X_{i}}(0,T)\) above, and thus initial value problems can be formulated meaningfully.
**Remark 3.1** (Connection to strong material derivative).: _For smooth functions \(c\) the above definition agrees with the Arbitrary Lagrangian Eulerian (ALE) framework [47, Section 1.1], and it holds that_
\[\dot{c}=\phi_{t}(\,\partial_{t}\,(\phi_{-t}c)).\]
_By the chain rule, it then follows for smooth functions, that_
\[\dot{c}(\mathbf{x},t)=\partial_{t}c(\mathbf{x},t)+\nabla c(\mathbf{x},t)\cdot\mathbf{w}(\mathbf{x },t),\quad(\mathbf{x},t)\in\Omega(t)\times(0,T). \tag{3.6}\]
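The identity (3.6) can also be checked symbolically; the short sympy sketch below does so in one space dimension for an illustrative flow map \(\psi_{t}(x)=xe^{at}\), whose generating velocity is \(w(x,t)=ax\), and an arbitrary smooth concentration field.

```python
import sympy as sp

# Symbolic check of the material derivative identity (3.6),
#   c_dot = d/dt (c o psi) o psi^{-1} = dc/dt + grad(c) . w,
# for an illustrative 1D flow map psi(X, t) = X * exp(a t), i.e. w(x, t) = a x.
x, X, t, a = sp.symbols("x X t a", real=True)

psi = X * sp.exp(a * t)              # flow map, psi(X, 0) = X
w = a * x                            # velocity generating psi: d/dt psi = w(psi)
c = sp.sin(x) * sp.exp(-t)           # an arbitrary smooth concentration field c(x, t)

# Lagrangian derivative: differentiate c along trajectories, then map back.
c_along = c.subs(x, psi)
cdot_lagrangian = sp.diff(c_along, t).subs(X, x * sp.exp(-a * t))

# Eulerian form of (3.6): dc/dt + dc/dx * w.
cdot_eulerian = sp.diff(c, t) + sp.diff(c, x) * w

print(sp.simplify(cdot_lagrangian - cdot_eulerian))   # 0
```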
Replacing the Eulerian time derivative via the definition of the material time derivative, and using the standard identity
\[\nabla c\cdot\mathbf{w}=\nabla\cdot(\mathbf{w}c)-(\nabla\cdot\mathbf{w})c,\]
we can rephrase (3.1a)-(3.1b) as
\[\dot{c}_{v}+\nabla\cdot\mathbf{w}c_{v}-\nabla\cdot(D_{v}\nabla c_{v})+\nabla\cdot((\mathbf{u}_{v}-\mathbf{w})c_{v})=f_{v},\quad\text{in}\quad\Omega_{v}(t)\times(0,T], \tag{3.7a}\]
\[\dot{c}_{s}+\nabla\cdot\mathbf{w}c_{s}-\nabla\cdot(D_{s}\nabla c_{s})+\nabla\cdot((\mathbf{u}_{s}-\mathbf{w})c_{s})=f_{s},\quad\text{in}\quad\Omega_{s}(t)\times(0,T]. \tag{3.7b}\]
To formulate a coherent weak formulation for the system of coupled equations, we introduce the following product spaces and their respective norms (written for \(\mathbf{\phi}=(\phi_{v},\phi_{s})\))
\[\mathbf{V}(t) =H^{1}(\Omega_{v}(t))\times H^{1}_{\partial\Omega}(\Omega_{s}(t)), \ \ \|\mathbf{\phi}\|^{2}_{\mathbf{V}(t)}=\|\phi_{v}\|^{2}_{H^{1}(\Omega_{v}(t))}+\|\phi_{s} \|^{2}_{H^{1}(\Omega_{s}(t))},\] \[\mathbf{H}(t) =L^{2}(\Omega_{v}(t))\times L^{2}(\Omega_{s}(t)),\ \ \ \ \|\mathbf{\phi}\|^{2}_{\mathbf{H}(t)}=\|\phi_{v}\|^{2}_{L^{2}(\Omega_{v}(t))}+\|\phi_{s} \|^{2}_{L^{2}(\Omega_{s}(t))}.\]
Similarly, we define the product space:
\[\boldsymbol{W}=\{\boldsymbol{w}=(w_{v},w_{s}),\dot{\boldsymbol{w}}=(\dot{w}_{v}, \dot{w}_{s}):\ w_{v}\in W(X_{v},X_{v}^{\prime}),\ w_{s}\in W(X_{s},X_{s}^{ \prime})\}, \tag{3.9}\]
equipped with the norm
\[\|\boldsymbol{w}\|_{\boldsymbol{W}}^{2}=\sum_{i\in\{v,s\}}(\|w_{i}\|_{L^{2}_{X _{i}}}^{2}+\|\dot{w}_{i}\|_{L^{2}_{X_{i}^{\prime}}}^{2}). \tag{3.10}\]
The weak formulation for (3.1) then reads: find \(\boldsymbol{c}=(c_{v},c_{s})\in\boldsymbol{W}\) such that for all \(\boldsymbol{\varphi}=(\varphi_{v},\varphi_{s})\in\boldsymbol{V}(t)\),
\[\langle\dot{\boldsymbol{c}}(t),\boldsymbol{\varphi}\rangle_{ \boldsymbol{V}(t)}+\lambda(t;\boldsymbol{c}(t),\boldsymbol{\varphi})+\mathcal{ A}(t;\boldsymbol{c}(t),\boldsymbol{\varphi})+\mathcal{B}(t;\boldsymbol{c}(t), \boldsymbol{\varphi})\\ =(\boldsymbol{f}_{v}(t),\varphi_{v})_{\Omega_{v}}+(\boldsymbol{f }_{s}(t),\varphi_{s})_{\Omega_{s}}, \tag{3.11}\]
complemented by the initial condition
\[\boldsymbol{c}(0)=(c_{v}^{0},c_{s}^{0})\in\boldsymbol{H}(0),\]
where for any \(\boldsymbol{c}=(c_{v},c_{s})\in\boldsymbol{V}(t)\) and \(\boldsymbol{\varphi}=(\varphi_{v},\varphi_{s})\in\boldsymbol{V}(t)\) we have the bilinear forms:
\[\lambda(t;\boldsymbol{c},\boldsymbol{\varphi}) =(\nabla\cdot\boldsymbol{w}c_{v},\varphi_{v})_{\Omega_{v}(t)}+( \nabla\cdot\boldsymbol{w}c_{s},\varphi_{s})_{\Omega_{s}(t)},\] \[\mathcal{A}(t;\boldsymbol{c},\boldsymbol{\varphi}) =(D_{v}\nabla c_{v}-(\boldsymbol{u}_{v}-\boldsymbol{w})c_{v}, \nabla\varphi_{v})_{\Omega_{v}(t)}+(D_{s}\nabla c_{s}-(\boldsymbol{u}_{s}- \boldsymbol{w})c_{s},\nabla\varphi_{s})_{\Omega_{s}(t)},\] \[\mathcal{B}(t;\boldsymbol{c},\boldsymbol{\varphi}) =(\xi(c_{v}-c_{s}),\varphi_{v})_{\Gamma(t)}+(\xi(c_{s}-c_{v}), \varphi_{s})_{\Gamma(t)}.\]
### Well-posedness of the convection-diffusion problem over a moving domain
We then obtain the following result for the existence and well-posedness of weak solutions.
**Proposition 3.1**.: _Assume the geometrical setting of Section 3.2 and let \(\xi\in L^{\infty}(0,T;L^{\infty}(\Gamma(t)))\) with \(\xi\geq 0\). Further assume that \(D_{i}\in L^{\infty}(0,T;L^{\infty}(\Omega_{i}(t),\mathbb{R}^{d\times d}))\) with a uniform ellipticity constant \(\nu>0\) and \(\boldsymbol{u}_{i}\in L^{\infty}(0,T;L^{\infty}(\Omega_{i}(t)))\). Then, for every \(\boldsymbol{c}_{0}=(c_{v}^{0},c_{s}^{0})\in\boldsymbol{H}(0)\) and \(\boldsymbol{f}=(f_{v},f_{s})\in L^{2}_{X_{v}^{\prime}}\times L^{2}_{X_{s}^{ \prime}}\), there exists a unique solution \(\boldsymbol{c}=(c_{v},c_{s})\in\boldsymbol{W}\) to (3.11). Further, there exists a constant \(C\) such that_
\[\|\boldsymbol{c}\|_{\boldsymbol{W}}\leq C(\|f_{v}\|_{L^{2}_{X_{v}^{\prime}}}+\|f_{s}\|_{L^{2}_{X_{s}^{\prime}}}+\|\boldsymbol{c}_{0}\|_{\boldsymbol{H}(0)}).\]
Proof.: We verify the assumptions of the abstract framework given in [3]. These can be grouped in two sets of requirements, one set of assumptions concerns the level of smoothness that must be imposed on the moving domains - in the notation of [3] these are Assumption 2.17, 2.24 and Assumption 2.31. On the other hand we need standard assumptions on the involved operators which are summarized in Assumption 3.3 of [3].
_Verifying the smoothness assumptions of the moving domains_. Let \(\boldsymbol{c}\in\boldsymbol{V}(0)\), then by the transformation formula it holds
\[t\mapsto\|\phi_{t}\boldsymbol{c}\|_{\boldsymbol{V}(t)}^{2}=\sum_{i\in\{v,s\} }\int_{\Omega_{i}(0)}(c_{i}^{2}+|\nabla c_{i}|^{2})|\det(D\psi_{t})|\,\mathrm{ d}x. \tag{3.12}\]
As \((x,t)\mapsto\psi(x,t)\) is smooth and \(\psi_{t}\) is a diffeomorphism we know that \(D\psi_{t}\) is invertible everywhere in \(\overline{\Omega}\) and thus \(|\det(D\psi_{t})|\) is bounded away from zero. Using the smoothness of \(\psi\) with respect to the temporal variable implies that this bound is independent of time. Hence, \(t\mapsto\|\phi_{t}\boldsymbol{c}\|_{\boldsymbol{V}(t)}\) is continuous as required in Assumption 2.17.
To show Assumption 2.24, we need to prove that
\[t\mapsto\theta(t,\boldsymbol{c}):=\sum_{i\in\{v,s\}}\int_{\Omega_{i}(0)}c_{i}^{2} |\det(D\psi_{t})|\,\mathrm{d}x\]
is classically differentiable. As mentioned above, \((x,t)\mapsto\psi(x,t)\) is smooth and so is \((x,t)\mapsto|\det(D\psi_{t})(x)|\) which allows us, resorting to Lebesgue's dominated convergence theorem, to differentiate under the integral sign. Further, for \(\boldsymbol{c}^{1},\boldsymbol{c}^{2}\in\boldsymbol{V}(0)\) we estimate using the boundedness of \((x,t)\mapsto|\det(D\psi_{t}(x))|\)
\[|\theta(t,\boldsymbol{c}^{1}+\boldsymbol{c}^{2})-\theta(t,\boldsymbol{c}^{1}-\boldsymbol{c}^{2})|=4\Big|\sum_{i\in\{v,s\}}\int_{\Omega_{i}(0)}c_{i}^{1}c_{i}^{2}\,|\det(D\psi_{t})|\,\mathrm{d}x\Big|\leq C\|\boldsymbol{c}^{1}\|_{\boldsymbol{H}(0)}\|\boldsymbol{c}^{2}\|_{\boldsymbol{H}(0)}\]
for some constant \(C\). This completes the requirements of Assumption 2.24.
Concerning Assumption 2.31 of [3], note that the map \(T_{t}\) defined in equation (2.7) of this paper is in our case given by
\[T_{t}:\boldsymbol{H}(0)\to\boldsymbol{H}(0),\quad\boldsymbol{c}=(c_{v},c_{s}) \mapsto(c_{v}|\det(D\psi_{t})|,c_{s}|\det(D\psi_{t})|)\]
and as \(|\det(D\psi_{t})|\) is smooth and bounded away from zero it holds that
\[\boldsymbol{c}\in\boldsymbol{V}(0)\quad\Leftrightarrow\quad T_{t}\boldsymbol{ c}\in\boldsymbol{V}(0).\]
By Remark 2.34 in [3] this guarantees that Assumption 2.31 therein holds.
_Properties of the PDE Operators._ We now verify the coercivity and continuity properties of the bilinear forms. We must show that for a.e. \(t\), there exist constants \(K_{1},K_{2}\) and \(K_{3}\) independent of \(t\) such that
\[\mathcal{A}(t;\boldsymbol{c},\boldsymbol{c})+\mathcal{B}(t;\boldsymbol{c},\boldsymbol{c}) \geq K_{1}\|\boldsymbol{c}\|_{\boldsymbol{V}(t)}^{2}-K_{2}\|\boldsymbol{c}\|_{\boldsymbol{H}(t)}^{2}\qquad\forall\boldsymbol{c}\in\boldsymbol{V}(t), \tag{3.13}\]
\[|\mathcal{A}(t;\boldsymbol{c},\boldsymbol{\varphi})+\mathcal{B}(t;\boldsymbol{c},\boldsymbol{\varphi})| \leq K_{3}\|\boldsymbol{c}\|_{\boldsymbol{V}(t)}\|\boldsymbol{\varphi}\|_{\boldsymbol{V}(t)}\qquad\forall\boldsymbol{c},\boldsymbol{\varphi}\in\boldsymbol{V}(t). \tag{3.14}\]
Using that \(\boldsymbol{u}_{i}\) and \(\boldsymbol{w}\) belong to \(L^{\infty}(\Omega_{i}(t))\) for \(i\in\{v,s\}\), with norm bounds independent of \(t\in[0,T]\), and that \(D_{v},D_{s}\) are uniformly elliptic with ellipticity constant \(\nu\) independent of time, we may estimate using Young's and Hölder's inequalities, for \(\boldsymbol{c}=(c_{v},c_{s})\),
\[\mathcal{A}(t;\boldsymbol{c},\boldsymbol{c}) \geq\nu\sum_{i\in\{s,v\}}\|\nabla c_{i}\|_{L^{2}(\Omega_{i}(t))}^{2}-\sum_{i\in\{s,v\}}\|\boldsymbol{u}_{i}-\boldsymbol{w}\|_{L^{\infty}(\Omega_{i}(t))}\|c_{i}\|_{L^{2}(\Omega_{i}(t))}\|\nabla c_{i}\|_{L^{2}(\Omega_{i}(t))}\]
\[\geq\frac{\nu}{2}\sum_{i\in\{s,v\}}\|\nabla c_{i}\|_{L^{2}(\Omega_{i}(t))}^{2}-\frac{1}{2\nu}\sum_{i\in\{s,v\}}\|\boldsymbol{u}_{i}-\boldsymbol{w}\|_{L^{\infty}(\Omega_{i}(t))}^{2}\|c_{i}\|_{L^{2}(\Omega_{i}(t))}^{2}\]
\[\geq\frac{\nu}{2}\|\boldsymbol{c}\|_{\boldsymbol{V}(t)}^{2}-\max_{i\in\{s,v\}}\left(\frac{\|\boldsymbol{u}_{i}-\boldsymbol{w}\|_{L^{\infty}(\Omega_{i}(t))}^{2}}{2\nu}+\frac{\nu}{2}\right)\|\boldsymbol{c}\|_{\boldsymbol{H}(t)}^{2}.\]
Using that \(\xi\geq 0\) it is readily seen that \(\mathcal{B}(t;\boldsymbol{c},\boldsymbol{c})\geq 0\), in fact it holds that
\[\mathcal{B}(t;\boldsymbol{c},\boldsymbol{c})=\int_{\Gamma(t)}(c_{v}-c_{s})^{2 }\xi\,\mathrm{d}s\geq 0.\]
For the continuity property, we note that the trace constant used to handle \(\mathcal{B}\) is independent of \(t\) since for any \(c_{i}\in H^{1}(\Omega_{i}(t))\) it holds that
\[\|c_{i}\|_{L^{2}(\Gamma(t))}=\||\det(D(\psi_{t}^{-1}))|^{1/2}\phi_{-t}c_{i}\|_{ L^{2}(\Gamma(0))}\leq C_{0}\|\phi_{-t}c_{i}\|_{H^{1}(\Omega_{i}(0))}\leq C_{1}\|c_{i} \|_{H^{1}(\Omega_{i}(t))} \tag{3.15}\]
for some constants \(C_{0}\), \(C_{1}\). The above holds from the trace inequality on \(\Gamma(0)\) and from the continuity bound of the map \(\phi_{-t}\) which is independent of \(t\). The continuity bound (3.14) then immediately follows. Therefore, as all the assumptions of [3, Theorem 3.6] hold, the stated result follows.
## 4. Coupled 3D-1D formulations for solute transport models
Our next objective is to derive geometrically-explicit but dimensionally-reduced representations of the coupled solute transport models introduced and analyzed in the previous section (Section 3). We first derive transport equations describing the cross-section average concentration in each vessel network segment (Section 4.2) and their variational formulation (Section 4.3). In turn, the solute transport equations in the surroundings are extended to the complete domain (Section 4.4). The full coupled variational problem is well-posed (Section 4.5), and can be extended to vascular networks (Section 4.6). We begin by making assumptions on the material parameters, mainly to simplify the presentation. We will adopt these assumptions in the remainder of this paper.
### Assumptions on material parameters
The parameter \(D_{v}\) is assumed to be a scalar-valued function rather than a tensor, with \(D_{v}\in L^{\infty}(0,T;L^{\infty}(\Omega_{v}(t)))\). The parameter \(D_{s}\in L^{\infty}(0,T;L^{\infty}(\Omega_{s}(t),\mathbb{R}^{d\times d}))\) has a uniform ellipticity constant \(\nu>0\). In addition, \(\xi\) and \(D_{v}\) are assumed to be constant in each cross-section \(\Theta(s,t)\), \((s,t)\in\Lambda\times(0,T)\). Finally, we assume that the velocity fields \(\mathbf{u}_{i}\in L^{\infty}(0,T;H^{2}(\Omega_{i})^{3})\) for \(i\in\{v,s\}\).
### Derivation of a vessel-averaged (1D) transport equation
The aim of this section is to derive a one-dimensional model for the cross-section average of the concentration \(c_{v}\). Recalling the cross-sections \(\Theta(s)\) with area \(A=A(s)\), we define the cross-section average for \(s\in\Lambda\) by
\[\langle f\rangle(s)=\frac{1}{A(s)}\int_{\Theta(s)}f,\quad\forall f\in L^{1}( \Theta(s)).\]
Analogously, recalling the cross-section boundary \(\partial\Theta_{2}(s)\) with (lateral cross-section) perimeter \(P=P(s)\), we set:
\[\bar{f}(s)=\frac{1}{P(s)}\int_{\partial\Theta_{2}(s)}f,\quad\forall f\in L^{1} (\partial\Theta_{2}(s)). \tag{4.1}\]
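Both reduction operators are straightforward to evaluate numerically; the sketch below approximates \(\langle f\rangle\) and \(\bar{f}\) for an annular cross-section by midpoint quadrature in polar coordinates, using an illustrative integrand and radii with known exact averages.

```python
import numpy as np

# Sketch of the two reduction operators: the cross-section average <f>(s) over
# Theta(s) and the perimeter average f_bar(s) over the outer circle dTheta_2(s),
# for an annular cross-section R1 < r < R2 (all inputs below are illustrative).
def cross_section_average(f, R1, R2, n_r=64, n_th=64):
    """<f> = (1/A) * int_{Theta} f,  with A = pi (R2^2 - R1^2)."""
    r = np.linspace(R1, R2, n_r + 1)
    th = np.linspace(0.0, 2 * np.pi, n_th + 1)
    rm = 0.5 * (r[:-1] + r[1:])[:, None]
    tm = 0.5 * (th[:-1] + th[1:])[None, :]
    dA = (r[1] - r[0]) * (th[1] - th[0]) * rm        # polar area elements r dr dtheta
    A = np.pi * (R2**2 - R1**2)
    return np.sum(f(rm, tm) * dA) / A

def perimeter_average(f, R2, n_th=256):
    """f_bar = (1/P) * int_{dTheta_2} f,  with P = 2 pi R2."""
    tm = np.linspace(0.0, 2 * np.pi, n_th, endpoint=False) + np.pi / n_th
    return np.mean(f(R2 * np.ones_like(tm), tm))

f = lambda r, th: r**2 * (1.0 + 0.3 * np.cos(th))    # example integrand
R1, R2 = 0.05, 0.1
print(cross_section_average(f, R1, R2))              # exact: (R1^2 + R2^2) / 2
print(perimeter_average(f, R2))                      # exact: R2^2
```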
For the derivation, we rely on the following assumptions on the vessel geometry and vessel deformations (adapted from [11, Chapter 2] and [40]). Assumption 4.1 is needed in the derivation of the reduced 1D model, see Proposition 4.1, and Assumption 4.2 is used in the derivation of its variational formulation, see Section 4.3.
**Assumption 4.1** (Averages and shape profile).: _Assume the following._
* _For_ \(c_{v}:\Omega_{v}\times(0,T)\to\mathbb{R}\) _and_ \(c_{s}:\Omega_{s}\times(0,T)\to\mathbb{R}\) _solving (_3.1_), the (lateral) cross-section averages are well-defined i.e._ \(c_{v}(t)\in L^{1}(\Theta(s))\cap L^{1}(\partial\Theta(s))\) _and_ \(c_{s}(t)\in L^{1}(\partial\Theta(s))\) _for all_ \(s\in\Lambda\) _and_ \(t\in(0,T)\)_._
* _Further, there exists a shape function_ \(w_{c}=w_{c}(r)\) _in the radial variable_ \(r\) _only, with_ \(\langle w_{c}\rangle=1\) _and such that the following splitting holds: for all_ \((s,r,\theta,t)\in\Omega_{v}(t)\times(0,T]\)_,_ \[c_{v}(s,r,\theta,t)=\langle c_{v}\rangle(s,t)\,w_{c}(r).\]
**Assumption 4.2** (Conditions on the vessel geometry and deformation).: _Assume the following:_
\[\partial_{s}R_{2}^{2}=\partial_{s}R_{1}^{2}=0,\quad\text{on}\ \Gamma_{0}\cup \Gamma_{L}. \tag{4.2}\]
_The above is adapted from [40]. In fact, if \(R_{1}\) and \(R_{2}\) are independent of \(\theta\) or if \(w_{c}=1\), then we can relax the above assumption by only requiring that_
\[\partial_{s}A=0,\quad\text{on}\ s=0,L,\]
_since it will be sufficient for the derivation of our weak formulation, see subsection 4.3._
The next proposition states a one-dimensional transport equation for the average concentration \(\hat{c}\) along the vessel centerline \(\Lambda\) and over time \(t\in(0,T)\).
**Proposition 4.1** (1D transport equation).: _Under Assumption 4.1, the cross-section average concentration \(\hat{c}=\langle c_{v}\rangle\) satisfies the following equation in \(\Lambda\times(0,T)\):_
\[\partial_{t}(A\hat{c})-\partial_{s}\left(D_{v}A\partial_{s}\hat{c}\right)+ \partial_{s}\left(A\langle u_{v,s}w_{c}\rangle\hat{c}\right)+\xi P\left( \overline{w_{c}}\hat{c}-\overline{c_{s}}\right)+G(\hat{c})=A\langle f_{v}\rangle, \tag{4.3}\]
_where \(u_{v,s}\) is the axial component of the velocity \(\mathbf{u}_{v}\), and where we have introduced the auxiliary expressions_
\[G(\hat{c}) =G(R_{1},R_{2},w_{c})(\hat{c})=-\partial_{s}\left(D_{v}g_{s}(R_{1},R_{2},w_{c})\hat{c}\right), \tag{4.4}\]
\[g_{s}(R_{1},R_{2},w_{c}) =\sum_{i=1}^{2}-\frac{\gamma_{i}}{2}\int_{0}^{2\pi}\partial_{s}R_{i}^{2}(1-w_{c}(R_{i})),\quad\gamma_{1}=1,\ \gamma_{2}=-1. \tag{4.5}\]
Before proceeding with the proof of Proposition 4.1, we make two remarks.
**Remark 4.1**.: _Recall that the functions \(A=A(s,t)\) and \(\langle u_{v,s}w_{c}\rangle=\langle u_{v,s}w_{c}\rangle(s,t)\) denote the cross-sectional area and a weighted average axial velocity, respectively. These functions can be either a-priori determined or solved for via reduced flow models, such as e.g. reduced blood flow models [9], perivascular fluid flow models [14], root water uptake models [30], or geothermal wells [21] as appropriate for the problem setting._
**Remark 4.2**.: _If \(\Omega_{v}\) is a cylinder (representing for instance a blood vessel, reservoir well or plant root but not a perivascular space), \(R_{1}(s,\theta,t)=0\) and \(R_{2}(s,\theta,t)=R(s,t)\). In this case, if we also assume that \(w_{c}(r)=w_{c}(R)=1\), then \(G=0\) and (4.3) simplifies to:_
\[\partial_{t}(A\hat{c})-\partial_{s}(D_{v}A\partial_{s}\hat{c})+\partial_{s}(A \langle u_{v,s}\rangle\hat{c})+\xi P(\hat{c}-\bar{c}_{s})=A\langle f_{v}\rangle.\]
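As an illustration of how the reduced model can be used in practice, the following sketch discretizes the simplified equation of Remark 4.2 with a finite-volume scheme in space (upwinded advection, zero total flux at \(s=0,L\) as in (4.13) below) and backward Euler in time; all data (\(A\), \(P\), \(u\), \(\bar{c}_{s}\), source, parameter values) are illustrative placeholders and are kept constant in time for simplicity.

```python
import numpy as np

# Finite-volume / backward-Euler sketch of the reduced 1D transport equation
#   d/dt (A c) - d/ds (Dv A d/ds c) + d/ds (A u c) + xi P (c - cs_bar) = A <f>
# on Lambda = (0, L), with zero total-flux conditions at s = 0, L.
L, nc, T, nt = 1.0, 100, 1.0, 200
h, dt = L / nc, T / nt
s = (np.arange(nc) + 0.5) * h                 # cell centers
A_, P_ = np.pi * 0.1**2, 2 * np.pi * 0.1      # area and perimeter of the vessel
Dv, u, xi = 1e-2, 0.5, 1.0                    # diffusivity, axial velocity, permeability
cs_bar = np.zeros(nc)                         # perimeter average of the outer solution
f_avg = np.exp(-100 * (s - 0.2)**2)           # cross-section averaged source <f_v>

# Spatial operator c -> (F_{i+1/2} - F_{i-1/2})/h + xi P c with upwinded advective
# and centred diffusive interface fluxes; boundary faces carry zero total flux.
M = np.zeros((nc, nc))
for i in range(nc - 1):                       # interior face between cells i, i+1
    adv = A_ * u                              # u > 0: upwind value taken from cell i
    dif = Dv * A_ / h
    M[i, i] += (adv + dif) / h
    M[i, i + 1] -= dif / h
    M[i + 1, i] -= (adv + dif) / h
    M[i + 1, i + 1] += dif / h
M += np.diag(np.full(nc, xi * P_))

lhs = A_ * np.eye(nc) / dt + M                # backward Euler system matrix
c = np.zeros(nc)
for _ in range(nt):
    rhs = A_ * c / dt + xi * P_ * cs_bar + A_ * f_avg
    c = np.linalg.solve(lhs, rhs)
print("mass in vessel:", A_ * h * c.sum())
```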
Proof of Proposition 4.1.: We proceed via a similar approach as in [40]. Namely, consider two arbitrary points \(s_{1}\) and \(s_{2}\) with \(0\leq s_{1}<s_{2}\leq L\). Let \(\mathcal{P}=\mathcal{P}(t)\) denote the portion of \(\Omega_{v}(t)\) bounded by two cross-sections \(\Theta(s_{1})\) and \(\Theta(s_{2})\) perpendicular to \(\Lambda\), and let \(\Gamma_{\mathcal{P}}\) denote the lateral boundary of \(\mathcal{P}\). To simplify notation, we drop the subscript \(v\) in (3.1a). We now integrate (3.1a) over \(\mathcal{P}\) omitting the integration measures when self-evident.
\[\int_{\mathcal{P}}\partial_{t}c-\int_{\mathcal{P}}\nabla\cdot(D\nabla c)+\int _{\mathcal{P}}\nabla\cdot(\mathbf{u}c)-\int_{\mathcal{P}}f:=\mathcal{T}_{1}+ \mathcal{T}_{2}+\mathcal{T}_{3}+\mathcal{T}_{4}=0.\]
For \(\mathcal{T}_{1}\), we have by Reynolds transport theorem accounting for the domain velocity \(\mathbf{w}\) and by definition of the cross-section average, see e.g [47, 3], that
\[\mathcal{T}_{1}=\int_{\mathcal{P}}\partial_{t}c=\partial_{t}\int_{\mathcal{P}} c-\int_{\partial\mathcal{P}}c\mathbf{w}\cdot\mathbf{n}=\int_{s_{1}}^{s_{2}}\partial_{t}(A \langle c\rangle)-\int_{\partial\mathcal{P}}c\mathbf{w}\cdot\mathbf{n}. \tag{4.6}\]
For \(\mathcal{T}_{2}\), using the divergence theorem, we have that
\[\mathcal{T}_{2}=-\int_{\mathcal{P}}\nabla\cdot(D\nabla c)=-\int_{ \partial\mathcal{P}}D\nabla c\cdot\boldsymbol{n}\\ =\int_{\Theta(s_{1})}D\partial_{s}c-\int_{\Theta(s_{2})}D\partial _{s}c-\int_{s_{1}}^{s_{2}}\int_{\partial\Theta(s)}D\nabla c\cdot\boldsymbol{n}. \tag{4.7}\]
Following [40], we write the first and second terms above as follows.
\[\int_{\Theta(s_{1})}D\partial_{s}c-\int_{\Theta(s_{2})}D\partial_{s}c=-\int_{s _{1}}^{s_{2}}\frac{\partial}{\partial s}\int_{\Theta(s)}D\partial_{s}c\, \mathrm{d}s=-\int_{s_{1}}^{s_{2}}\frac{\partial}{\partial s}\int_{0}^{2\pi} \int_{R_{1}}^{R_{2}}D\partial_{s}cr\,\mathrm{d}r\,\mathrm{d}\theta\,\mathrm{d }s.\]
Using the assumption that \(D\) is constant on each cross-section, recalling that \(\gamma_{1}=1\) and \(\gamma_{2}=-1\), and applying Leibniz's rule yield
\[\int_{0}^{2\pi}\int_{R_{1}}^{R_{2}}D\partial_{s}c\,r\,\mathrm{d}r\,\mathrm{d}\theta=D\partial_{s}\int_{0}^{2\pi}\int_{R_{1}}^{R_{2}}c\,r\,\mathrm{d}r\,\mathrm{d}\theta+\sum_{i=1}^{2}\frac{\gamma_{i}}{2}\int_{0}^{2\pi}Dc(R_{i})\partial_{s}R_{i}^{2}\,\mathrm{d}\theta\\ =D\partial_{s}(A\langle c\rangle)+\sum_{i=1}^{2}\int_{0}^{2\pi}\frac{\gamma_{i}}{2}Dc(R_{i})\partial_{s}R_{i}^{2}\,\mathrm{d}\theta.\]
Thus, we write \(\mathcal{T}_{2}\) as
\[\mathcal{T}_{2}=\int_{s_{1}}^{s_{2}}\left(-\partial_{s}(D\partial_{s}(A \langle c\rangle))-\sum_{i=1}^{2}\int_{0}^{2\pi}\frac{\gamma_{i}}{2}\partial_ {s}(Dc(R_{i})\partial_{s}R_{i}^{2})\,\mathrm{d}\theta-\int_{\partial\Theta(s)} D\nabla c\cdot\boldsymbol{n}\right)\,\mathrm{d}s. \tag{4.8}\]
For \(\mathcal{T}_{3}\), we proceed similarly, letting \(u_{v,s}\) denote the axial component of the velocity field \(\boldsymbol{u}_{v}\) (denoted \(\boldsymbol{u}\) here). We then have
\[\mathcal{T}_{3} =\int_{\mathcal{P}}\nabla\cdot(\boldsymbol{u}c)=\int_{\partial \mathcal{P}}(\boldsymbol{u}c)\cdot\boldsymbol{n}=\int_{\Theta(s_{2})}u_{v,s}c -\int_{\Theta(s_{1})}u_{v,s}c+\int_{s_{1}}^{s_{2}}\int_{\partial\Theta(s)}( \boldsymbol{u}c)\cdot\boldsymbol{n}\,\mathrm{d}s\] \[=\int_{s_{1}}^{s_{2}}\left(\partial_{s}(A\langle u_{v,s}c\rangle) +\int_{\partial\Theta(s)}(\boldsymbol{u}c)\cdot\boldsymbol{n}\right)\,\mathrm{ d}s. \tag{4.9}\]
From (3.1c), (3.2) if \(R_{1}>0\), and the assumption that \(\xi\) is constant in each cross-section, we obtain
\[\int_{s_{1}}^{s_{2}}\int_{\partial\Theta(s)}((\boldsymbol{u}-\boldsymbol{w})c -D\nabla c)\cdot\boldsymbol{n}=\int_{s_{1}}^{s_{2}}\int_{\partial\Theta_{2}(s )}\xi(c-c_{s})=\int_{s_{1}}^{s_{2}}\xi P(\overline{c}-\overline{c_{s}}). \tag{4.10}\]
The term \(\mathcal{T}_{4}\) simply reads
\[\mathcal{T}_{4}=-\int_{s_{1}}^{s_{2}}A\langle f\rangle\,\mathrm{d}s. \tag{4.11}\]
Collecting the derivations for \(\mathcal{T}_{i}\), \(i=1,\ldots,4\): (4.6),(4.8),(4.9), and (4.11) and using (4.10) for the resulting boundary terms yield:
\[\int_{s_{1}}^{s_{2}}\left(\partial_{t}(A\langle c\rangle)-\partial _{s}(D\partial_{s}(A\langle c\rangle))+\xi P(\overline{c}-\overline{c_{s}})+ \partial_{s}(A\langle u_{v,s}c\rangle)\right)\,\mathrm{d}s\\ -\int_{s_{1}}^{s_{2}}\sum_{i=1}^{2}\frac{\gamma_{i}}{2}\int_{0}^ {2\pi}\left(\partial_{s}(Dc(R_{i})\partial_{s}R_{i}^{2})\right)\,\mathrm{d} \theta\,\mathrm{d}s=\int_{s_{1}}^{s_{2}}A\langle f\rangle\,\mathrm{d}s. \tag{4.12}\]
To make the above equation solvable, we use assumption 4.1 and write:
\[\partial_{s}(A\langle c\rangle)=A\partial_{s}\langle c\rangle+\partial_{s}A \langle c\rangle=A\partial_{s}\langle c\rangle+\sum_{i=1}^{2}-\frac{\gamma_{i} }{2}\int_{0}^{2\pi}\langle c\rangle\partial_{s}R_{i}^{2}\,\mathrm{d}\theta.\]
We use the above and substitute \(c=c_{v}=w_{c}(r)\langle c_{v}\rangle\) and \(\hat{c}=\langle c_{v}\rangle\) in (4.12). Thus, since (4.12) holds for any \(s_{1}\) and \(s_{2}\), we conclude the result.
#### 4.2.1. Boundary conditions for the reduced transport model
We finalize the derivation of the reduced transport model by stating boundary conditions corresponding to the cross-section average of (3.1e), modified from [40]:
\[D_{v}A\partial_{s}\hat{c}-A\langle u_{v,s}w_{c}\rangle\hat{c}=0\quad\text{for $s= 0,L$.} \tag{4.13}\]
One can see that, by integrating (3.1e) over the perpendicular cross-sections \(\Theta(0)\) and \(\Theta(L)\) and using assumption (4.2), the above condition is recovered provided \(\mathbf{w}\cdot\mathbf{n}\) is negligible on \(\Gamma_{0}(t)\cup\Gamma_{L}(t)\).
### Variational formulation of the reduced transport model
To formally derive a variational formulation of (4.3) combined with (4.13), we multiply (4.3) by \(\phi\in H^{1}(\Lambda)\) and integrate by parts. We first observe that
\[\int_{\Lambda}G(\hat{c})\phi\,\mathrm{d}s\equiv-\int_{\Lambda}\partial_{s} \left(D_{v}g_{s}\hat{c}\right)\phi\,\mathrm{d}s=\int_{\Lambda}\left(D_{v}g_{ s}\hat{c}\right)\partial_{s}\phi\,\mathrm{d}s-D_{v}g_{s}\hat{c}\,\phi|_{0}^{L}.\]
Therefore, after applying the boundary conditions (4.2) and (4.13) and collecting terms, we obtain the variational formulation: for \(t>0\), given coefficients \(D_{v}\), \(\xi\) and functions \(A\in L^{\infty}(0,T;L^{\infty}(\Lambda))\), \(\langle f_{v}\rangle\in L^{2}(0,T;(H^{1}_{A}(\Lambda))^{\prime})\), and \(u_{v,s}\), \(w_{c}\) such that \(\langle u_{v,s}w_{c}\rangle\in L^{\infty}(0,T;L^{\infty}(\Lambda))\) and \(\overline{w_{c}}\in\mathbb{R}\), find \(\hat{c}\in L^{2}(0,T;H^{1}_{A}(\Lambda))\) with \(\partial_{t}\hat{c}\in L^{2}(0,T;(H^{1}_{A}(\Lambda))^{\prime})\) such that
\[\langle\partial_{t}\hat{c},\phi\rangle_{H^{-1}_{A}(\Lambda)}+\int_{\Lambda}D _{v}\left(A\partial_{s}\hat{c}+g_{s}\hat{c}\right)\cdot\partial_{s}\phi-\int _{\Lambda}A\langle u_{v,s}w_{c}\rangle\hat{c}\cdot\partial_{s}\phi\\ +\int_{\Lambda}\left(\xi P\left(\overline{w_{c}}\hat{c}-\overline {c_{s}}\right)+\partial_{t}A\hat{c}\right)\phi=\left\langle\langle f_{v} \rangle,\phi\right\rangle_{H^{-1}_{A}(\Lambda)},\ \forall\phi\in H^{1}_{A}(\Lambda). \tag{4.14}\]
As mentioned, note that \(g_{s}=0\) if \(w_{c}=1\). In the case of a cylindrical (vessel) domain with \(R_{2}=R(s,t)\) and \(R_{1}=0\), then \(g_{s}=(1-w_{c}(R))\partial_{s}A\).
### Variational formulation for the extended transport model
We next formally extend the variational formulation of (3.11) to the whole domain \(\Omega\). Here, a model reduction approach is used, similar to the one by Laurino and Zunino [40], to reduce the interface condition (3.1c). This approach uses the average operator (4.1) as the restriction operator to the centerline for both the trial and test functions. This is different than the approach used in D'Angelo and Quarteroni [13] where the restriction operator for the test functions is taken as the trace operator onto \(\Lambda\) which is well-defined on special weighted spaces that enjoy
better regularity properties than \(H^{1}(\Omega)\). As we will show, the approach used in [40] and here is well-defined on functions in \(H^{1}(\Omega)\) and yields solutions with better regularity properties than the ones in [13].
From (3.11), we have that for \(\phi\in H^{1}_{0}(\Omega)\)
\[\int_{\Omega_{s}}\dot{c}_{s}\phi+\int_{\Omega_{s}}\nabla\cdot\mathbf{w}c_{s}\phi+ \int_{\Omega_{s}}D_{s}\nabla c_{s}\cdot\nabla\phi+\int_{\Gamma}\xi(c_{s}-c_{v}) \phi-\int_{\Omega_{s}}(\tilde{\mathbf{u}}_{s}c_{s})\cdot\nabla\phi=\int_{\Omega_{s} }f_{s}\phi \tag{4.15}\]
recalling that \(\tilde{\mathbf{u}}_{s}=\mathbf{u}_{s}-\mathbf{w}\). For the first two terms, we have that
\[\int_{\Omega_{s}}\dot{c}_{s}\phi+\int_{\Omega_{s}}\nabla\cdot\mathbf{w}c_{s}\phi= \int_{\Omega_{s}}\partial_{t}c_{s}\phi+\int_{\Omega_{s}}\nabla\cdot(\mathbf{w}c_{ s})\phi.\]
Consider now the fourth term in (4.15). Define an operator subtracting the perimeter-average i.e. \(\tilde{\phi}=\phi-\overline{\phi}\). Clearly, \((\tilde{\phi}_{1},\overline{\phi_{2}})_{\partial\Theta}=(\tilde{\phi}_{2}, \overline{\phi_{1}})_{\partial\Theta}=0\) since \(\overline{\tilde{\phi}_{1}}=\overline{\tilde{\phi}_{2}}=0\) for \(\phi_{1},\phi_{2}\in L^{1}(\partial\Theta)\). We thus have that
\[\int_{\Gamma}\xi(c_{s}-c_{v})\phi=\int_{\Lambda}\int_{\partial \Theta}\xi(c_{s}-c_{v})\phi=\int_{\Lambda}\int_{\partial\Theta}\xi(\tilde{c}_{s }+\overline{c_{s}}-\tilde{c}_{v}-\overline{c_{v}})(\tilde{\phi}+\overline{ \phi})\\ =\int_{\Lambda}\int_{\partial\Theta}\xi(\overline{c_{s}}-\overline {c_{v}})\overline{\phi}+\int_{\Lambda}\int_{\partial\Theta}\xi(\tilde{c}_{s}- \tilde{c}_{v})\tilde{\phi}. \tag{4.16}\]
Following [40, 34], we assume that the second term on the right hand side above is negligible:
\[\int_{\partial\Theta}\xi\tilde{c}_{s}\tilde{\phi}\approx 0,\quad\int_{\partial \Theta}\xi\tilde{c}_{v}\tilde{\phi}\approx 0.\]
Hence, combining (4.16) with the assumption that \(c_{v}=\langle c_{v}\rangle w_{c}=\hat{c}w_{c}\) (Assumption 4.1), we obtain
\[\int_{\Gamma}\xi(c_{s}-c_{v})\phi=\int_{\Lambda}\xi P(\bar{c}_{s}-\overline{w_ {c}}\hat{c})\overline{\phi}.\]
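In more detail, since the averaged quantities and \(\xi\) are constant on each perimeter \(\partial\Theta(s)\), the first term on the right hand side of (4.16) can be written as
\[\int_{\Lambda}\int_{\partial\Theta}\xi(\overline{c_{s}}-\overline{c_{v}})\overline{\phi}=\int_{\Lambda}\xi P\,(\overline{c_{s}}-\overline{c_{v}})\overline{\phi}=\int_{\Lambda}\xi P\,(\overline{c_{s}}-\overline{w_{c}}\hat{c})\overline{\phi},\]
where we used that \(\overline{c_{v}}=\overline{\hat{c}w_{c}}=\hat{c}\,\overline{w_{c}}\), the cross-section average \(\hat{c}\) being constant on each \(\partial\Theta(s)\).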
Finally, we identify the domain \(\Omega_{s}\) with \(\Omega\) where we introduce the extended solution \(c\). That is, we have:
\[\int_{\Omega}\partial_{t}c\phi+\int_{\Omega}\nabla\cdot(\mathbf{w}c)\phi+\int_{ \Omega}\mathcal{E}(D_{s})\nabla c\cdot\nabla\phi+\int_{\Lambda}\xi P(\bar{c}- \overline{w_{c}}\hat{c})\overline{\phi}-\int_{\Omega}(\mathcal{E}(\mathbf{u}_{s})- \mathbf{w})c\cdot\nabla\phi=\int_{\Omega}\mathcal{E}(f_{s})\phi.\]
In the above, \(\mathcal{E}\) is a suitable extension operator: \(\mathcal{E}:H^{1}(\Omega_{s})\to H^{1}(\Omega)\). This operator will be further specified in Section 7.3. Integrating the second term above by parts, we arrive at the following weak formulation: Find \(c\in L^{2}(0,T;H^{1}_{0}(\Omega))\) with \(\partial_{t}c\in L^{2}(0,T;H^{-1}(\Omega))\) such that for all \(\phi\in H^{1}_{0}(\Omega)\),
\[\int_{\Omega}\partial_{t}c\phi+\int_{\Omega}\mathcal{E}(D_{s})\nabla c\cdot \nabla\phi+\int_{\Lambda}\xi P(\bar{c}-\overline{w_{c}}\hat{c})\bar{\phi}-\int _{\Omega}(\mathcal{E}(\mathbf{u}_{s})c)\cdot\nabla\phi=\int_{\Omega}\mathcal{E}(f_ {s})\phi. \tag{4.17}\]
### Coupled multi-dimensional variational formulation of transport model
We now combine the variational formulations derived in Sections 4.3-4.4, to summarize the time-dependent coupled 3D-1D solute transport model in variational form. To this end, we introduce the following bilinear forms. First, given \(\mathbf{u}_{s}\) and for all \(c,v\in H^{1}(\Omega)\),
\[a(c,v)=\int_{\Omega}\mathcal{E}(D_{s})\nabla c\cdot\nabla v-\int_{\Omega}( \mathcal{E}(\mathbf{u}_{s})c)\cdot\nabla v,\]
where \(\mathcal{E}\) is an extension operator (to be defined in Section 7.3). Second, inspecting (4.14), we also define for all \(\hat{c},\phi\in H^{1}(\Lambda)\),
\[a_{\Lambda}(\hat{c},\phi)=\int_{\Lambda}D_{v}\left(A\partial_{s}\hat{c}+g_{s} \hat{c}\right)\cdot\partial_{s}\phi\,\mathrm{d}s-\int_{\Lambda}A\langle u_{v,s }w_{c}\rangle\hat{c}\cdot\partial_{s}\phi\,\mathrm{d}s+\int_{\Lambda}\partial_ {t}A\,\hat{c}\phi\,\mathrm{d}s. \tag{4.18}\]
In the above, we recall that \(g_{s}\) is given in (4.5) and accounts for the deviation of \(c_{v}\) from a uniform distribution in \(\Theta(s)\) for \(s\in\Lambda\).
For the coupling terms, we recall the weighted product (2.1) and define for all \(v,w\in L^{2}_{P}(\Lambda)\):
\[b_{\Lambda}(v,w)=(\xi v,w)_{L^{2}_{P}(\Lambda)}. \tag{4.19}\]
The coupled weak formulation reads as follows. Given \(\mathcal{E}(f)\in L^{2}(0,T;H^{-1}(\Omega))\) and \(\langle f_{v}\rangle\in L^{2}(0,T;H^{-1}_{A}(\Lambda))\), find \(c\in L^{2}(0,T;H^{1}_{0}(\Omega)),\hat{c}\in L^{2}(0,T;H^{1}_{A}(\Lambda))\) with \(\partial_{t}c\in L^{2}(0,T;H^{-1}(\Omega))\), \(\partial_{t}\hat{c}\in L^{2}(0,T;H^{-1}_{A}(\Lambda))\) such that
\[\langle\partial_{t}c,v\rangle_{H^{-1}(\Omega)}+a(c,v)+b_{\Lambda}(\bar{c}-\overline{w_{c}}\hat{c},\bar{v}) =\langle\mathcal{E}(f),v\rangle_{H^{-1}(\Omega)},\quad \forall v\in H^{1}_{0}(\Omega), \tag{4.20a}\] \[\langle\partial_{t}\hat{c},\hat{v}\rangle_{H^{-1}_{A}(\Lambda)}+a_{\Lambda}(\hat{c},\hat{v})+b_{\Lambda}(\overline{w_{c}}\hat{c}-\bar{c},\hat{v}) =\langle\langle f_{v}\rangle,\hat{v}\rangle_{H^{-1}_{A}(\Lambda)},\quad \forall\hat{v}\in H^{1}_{A}(\Lambda), \tag{4.20b}\] \[c^{0}=\mathcal{E}(c^{0}_{s})\in L^{2}(\Omega),\quad\hat{c}^{0}=\langle c^{0}_{v}\rangle\in L^{2}_{A}(\Lambda). \tag{4.20c}\]
Observe that the term \(b_{\Lambda}(\bar{c},\bar{v})\) is well-defined since for \(v\in H^{1}(\Omega)\), \(\bar{v}\in L^{2}_{P}(\Lambda)\). Indeed, by Jensen's and trace inequality (3.15), we have that
\[\|\bar{v}\|^{2}_{L^{2}_{P}(\Lambda)}=\int_{\Lambda}\frac{1}{P}\left(\int_{ \partial\Theta}v\right)^{2}\leq\int_{\Lambda}\int_{\partial\Theta}v^{2}=\|v \|^{2}_{L^{2}(\Gamma)}\leq C_{1}^{2}\|v\|^{2}_{H^{1}(\Omega)}. \tag{4.21}\]
**Proposition 4.2** (Well-posedness and regularity of the 3D-1D problem).: _Assume that \(A,\partial_{t}A,\)\(\langle u_{v,s}w_{c}\rangle\in L^{\infty}(0,T;L^{\infty}(\Lambda))\), \(A\geq A_{0}>0\) a.e. in \(\Lambda\), \(\mathcal{E}\mathbf{u}_{s}\in L^{\infty}(0,T;L^{\infty}(\Omega,\mathbb{R}^{d}))\), and that \(\mathcal{E}(D_{s})\in L^{\infty}(0,T;L^{\infty}(\Omega,\mathbb{R}^{d\times d}))\) with uniform ellipticity constant \(\tilde{\nu}>0\). Then, the coupled weak formulation (4.20) is well-posed._
_In addition, if the material parameters are Holder continuous of index \(\beta>1/2\):_
\[\|\mathcal{E}(D_{s})(t_{1})-\mathcal{E}(D_{s})(t_{2})\|_{L^{\infty}(\Omega, \mathbb{R}^{d\times d})}+\|D_{v}(t_{1})-D_{v}(t_{2})\|_{L^{\infty}(\Lambda)}+ \|\xi(t_{1})-\xi(t_{2})\|_{L^{\infty}(\Lambda)}\leq C|t_{2}-t_{1}|^{\beta},\]
_for some constant \(C\) independent of \(t\), and if \(\partial\Omega\in C^{2}\), \(c^{0}_{v}\in H^{1}(\Omega_{v})\), \(c^{0}_{s}\in H^{1}(\Omega_{s})\), \(\mathcal{E}(f)\in L^{2}(\Omega)\) and \(\langle f\rangle\in L^{2}_{A}(\Lambda)\), then_
\[c\in L^{2}(0,T;H^{3/2-\eta}(\Omega)),\quad\eta>0.\]
Proof.: _Well-posedness_. We use J.-L. Lions' theorem, see e.g. [7, Theorem 10.9]. Let \(\mathbf{V}=H^{1}_{0}(\Omega)\times H^{1}_{A}(\Lambda)\) with dual \(\mathbf{V}^{\prime}=H^{-1}(\Omega)\times H^{-1}_{A}(\Lambda)\). The space \(\mathbf{V}\) defines a Hilbert space with inner product \((\mathbf{u},\mathbf{v})_{\mathbf{V}}=(u,v)_{H^{1}_{0}(\Omega)}+(\hat{u},\hat{v})_{H^{1}_{A}(\Lambda)}\), for all \(\mathbf{u}=(u,\hat{u})\) and \(\mathbf{v}=(v,\hat{v})\in\mathbf{V}\). Further, it holds that \(\mathbf{V}\subset L^{2}(\Omega)\times L^{2}_{A}(\Lambda)\subset\mathbf{V}^{\prime}\). We then write (4.20) as: Find \(\mathbf{c}=(c,\hat{c})\in\mathcal{W}(\mathbf{V},\mathbf{V}^{\prime})\) such that
\[\langle\partial_{t}\mathbf{c},\mathbf{v}\rangle_{V^{\prime}\times V}+\mathcal{A}_{ \Lambda}(t,\mathbf{c},\mathbf{v})=\ell(\mathbf{v}),\quad\forall\mathbf{v}\in\mathbf{V},\]
where for all \(\mathbf{c}=(c,\hat{c}),\mathbf{v}=(v,\hat{v})\in\mathbf{V}\)
\[\mathcal{A}_{\Lambda}(t,\mathbf{c},\mathbf{v}) =a(c,v)+b_{\Lambda}(\bar{c}-\overline{w_{c}}\hat{c},\bar{v})+a_{ \Lambda}(\hat{c},\hat{v})+b_{\Lambda}(\overline{w_{c}}\hat{c}-\bar{c},\hat{v}),\] \[\ell(\mathbf{v}) =(\mathcal{E}f,v)_{\Omega}+(A\langle f_{v}\rangle,\hat{v})_{ \Lambda}.\]
We proceed to show that the continuity and coercivity conditions of Lions' Theorem hold: There exist constants \(M\), \(\kappa\) and \(\mu\) independent of \(t\) such that
\[\mathcal{A}_{\Lambda}(t,\boldsymbol{c},\boldsymbol{v}) \leq M\|\boldsymbol{c}\|_{\boldsymbol{V}}\|\boldsymbol{v}\|_{\boldsymbol{V}}, \qquad\forall\boldsymbol{c},\boldsymbol{v}\in\boldsymbol{V}, \tag{4.22}\] \[\mathcal{A}_{\Lambda}(t,\boldsymbol{c},\boldsymbol{c}) \geq\kappa\|\boldsymbol{c}\|_{\boldsymbol{V}}^{2}-\mu(\|c\|_{L^{2}(\Omega)}^{2}+\|\hat{c}\|_{L^{2}_{A}(\Lambda)}^{2}), \qquad\forall\boldsymbol{c}\in\boldsymbol{V}. \tag{4.23}\]
We begin by showing (4.22). By Holder's inequality, we immediately have that
\[a(c,v)\leq(\|\mathcal{E}(D_{s})\|_{L^{\infty}(\Omega)}+\|\mathcal{E}( \boldsymbol{u}_{s})\|_{L^{\infty}(\Omega)})\|c\|_{H^{1}(\Omega)}\|v\|_{H^{1}( \Omega)}.\]
Further, with Holder's and triangle inequalities and (4.21), we have that
\[b_{\Lambda}(\bar{c}-\overline{w_{c}}\hat{c},\bar{v})+b_{\Lambda }(\overline{w_{c}}\hat{c}-\bar{c},\hat{v})\leq\|\xi\|_{L^{\infty}(\Lambda)}( \|\hat{c}\|_{L^{2}_{P}(\Lambda)}+\|\overline{w_{c}}\hat{c}\|_{L^{2}_{P}( \Lambda)})(\|\bar{v}\|_{L^{2}_{P}(\Lambda)}+\|\hat{v}\|_{L^{2}_{P}(\Lambda)})\] \[\leq\|\xi\|_{L^{\infty}(\Lambda)}\left(C_{1}+(\|\overline{w_{c}} \|_{L^{\infty}(\Lambda)}+1)\|PA^{-1}\|_{L^{\infty}(0,T;L^{\infty}(\Lambda))}^{ 2}\right)^{2}\|\boldsymbol{c}\|_{\boldsymbol{V}}\|\boldsymbol{v}\|_{ \boldsymbol{V}}.\]
In the above, we note that \(C_{1}\) is independent of \(t\), see (3.15), and we use the definition of the weighted norms, which results in the following bound:
\[\|P^{-1}A\|_{L^{\infty}(\Lambda)}^{-1}\|\hat{c}\|_{L^{2}_{A}(\Lambda)}^{2}\leq \|\hat{c}\|_{L^{2}_{P}(\Lambda)}^{2}\leq\|PA^{-1}\|_{L^{\infty}(0,T;L^{\infty} (\Lambda))}\|\hat{c}\|_{L^{2}_{A}(\Lambda)}^{2}. \tag{4.24}\]
With (4.24) and Holder's inequality, the following easily follows.
\[a_{\Lambda}(\hat{c},\hat{v})\leq(\|D_{v}\|_{L^{\infty}(\Lambda)}+\|g_{s}A^{-1 }\|_{L^{\infty}(\Lambda)}+\|A^{-1}\partial_{t}A\|_{L^{\infty}(\Lambda)}+\| \langle u_{v,s}w_{c}\rangle\|_{L^{\infty}(\Lambda)})\|\hat{c}\|_{H^{1}_{A}( \Lambda)}\|\hat{v}\|_{H^{1}_{A}(\Lambda)}.\]
By combining the above bounds, we obtain (4.22) for a constant \(M\) independent of \(t\). We now show (4.23), but we do not track constants for simplicity. It easily follows that
\[a(v,v)+b_{\Lambda}(\overline{c},\overline{c})\geq\frac{\tilde{\nu}}{2}\| \nabla c\|_{L^{2}(\Omega)}^{2}-\frac{1}{2\tilde{\nu}}\|\mathcal{E}\boldsymbol{ u}_{s}\|_{L^{\infty}(\Omega)}^{2}\|c\|_{L^{2}(\Omega)}^{2}+\|\xi^{1/2} \overline{c}\|_{L^{2}_{P}(\Lambda)}^{2}.\]
With similar arguments and with (4.24), we also have positive constants \(\kappa_{1}\) and \(\mu_{1}\) such that
\[a_{\Lambda}(\hat{c},\hat{c})+b_{\Lambda}(\overline{w_{c}}\hat{c},\hat{c}) \geq\kappa_{1}\|\hat{c}\|_{H^{1}_{A}(\Lambda)}^{2}-\mu_{1}\|\hat{c}\|_{L^{2}_ {A}(\Lambda)}^{2}+\|\xi^{1/2}\overline{w_{c}}^{1/2}\hat{c}\|_{L^{2}_{P}( \Lambda)}^{2}.\]
To handle the coupling terms, we use Young's inequality and (4.24) as follows.
\[|b_{\Lambda}(\overline{w_{c}}\hat{c},\overline{c})|+|b_{\Lambda}( \overline{c},\hat{c})|\leq(\|\xi^{1/2}\overline{w_{c}}\hat{c}\|_{L^{2}_{P}( \Lambda)}+\|\xi^{1/2}\hat{c}\|_{L^{2}_{P}(\Lambda)})\|\xi^{1/2}\overline{c}\| _{L^{2}_{P}(\Lambda)}\\ \leq\frac{1}{2}\|\xi^{1/2}\overline{c}\|_{L^{2}_{P}(\Lambda)}^{2} +\frac{1}{2}(\|\xi^{1/2}\overline{w_{c}}\|_{L^{\infty}(\Lambda)}^{2}+\|\xi^{1/ 2}\|_{L^{\infty}(\Lambda)}^{2})\|PA^{-1}\|_{L^{\infty}(\Lambda)}\|\hat{c}\|_{L^{ 2}_{A}(\Lambda)}^{2}.\]
Then, upon writing \(\mathcal{A}_{\Lambda}(t,\boldsymbol{c},\boldsymbol{c})+b_{\Lambda}(\overline{w_{c}}\hat{c},\overline{c})+b_{\Lambda}(\overline{c},\hat{c})=a(c,c)+b_{\Lambda}(\overline{c},\overline{c})+a_{\Lambda}(\hat{c},\hat{c})+b_{\Lambda}(\overline{w_{c}}\hat{c},\hat{c})\) and using the above bounds, we conclude that (4.23) holds. In addition, one easily sees that \(\ell\) defines a bounded functional on \(\boldsymbol{V}\). Therefore, all the requirements for Lions' theorem hold, and existence and uniqueness of weak solutions follow.
_Additional regularity._ We proceed to show the stated \(H^{3/2-\eta}\) regularity. The first step is to show that \(\partial_{t}c\in L^{2}(0,T;L^{2}(\Omega))\). This is achieved by invoking maximal regularity [4, Theorem 7.1]. We verify that \(\mathcal{A}_{\Lambda}(t,\boldsymbol{c},\boldsymbol{v})\) is Holder continuous of index \(\beta>1/2\): there exists a constant \(C\) independent of \(t\) such that
\[|\mathcal{A}_{\Lambda}(t_{2},\boldsymbol{c},\boldsymbol{v})-\mathcal{A}_{ \Lambda}(t_{1},\boldsymbol{c},\boldsymbol{v})|\leq K|t_{2}-t_{1}|^{\beta}\| \boldsymbol{c}\|_{\boldsymbol{V}}\|\boldsymbol{v}\|_{\boldsymbol{V}},\quad \forall\boldsymbol{c},\boldsymbol{v}\in\boldsymbol{V}. \tag{4.25}\]
The delicate terms in \(\mathcal{A}_{\Lambda}\) are the ones involving \(b_{\Lambda}(\cdot,\cdot)\), as the bounds for all the other terms follow directly from the assumptions on the material parameters. We provide some details for showing (4.25) in Appendix A.2. Under the additional assumption that \(c^{0}_{s}\in H^{1}(\Omega_{s})\) and \(c^{0}_{v}\in H^{1}(\Omega_{v})\), we have that \(c^{0}=\mathcal{E}c^{0}_{s}\in H^{1}(\Omega)\) and \(\hat{c}^{0}=\langle c^{0}_{v}\rangle\in H^{1}_{A}(\Lambda)\). Thus, since \(\mathcal{E}f\in L^{2}(\Omega)\) and \(\langle f_{v}\rangle\in L^{2}_{A}(\Lambda)\), we have verified the assumptions of [4, Theorem 7.1] and \(\partial_{t}c\in L^{2}(0,T;L^{2}(\Omega))\).
We now use the fractional space \(H^{1/2+\eta}(\Omega)\) normed by
\[\|v\|^{2}_{H^{1/2+\eta}(\Omega)}=\|v\|^{2}_{L^{2}(\Omega)}+\int_{\Omega}\int_ {\Omega}\frac{|v(x)-v(y)|^{2}}{|x-y|^{4+2\eta}},\]
and we define the linear functional \(\mathcal{F}(v)\):
\[\mathcal{F}(v)=-\int_{\Omega}\partial_{t}cv+\int_{\Gamma}\xi(\overline{w_{c} }\hat{c}-\bar{c})v+\int_{\Omega}\nabla\cdot(\mathcal{E}\boldsymbol{u}_{s}c)v +\int_{\Omega}\mathcal{E}fv. \tag{4.26}\]
The trace theorem yields for a positive constant \(K_{\Gamma}\)[17]:
\[\|v\|_{L^{2}(\Gamma)}\leq K_{\Gamma}\|v\|_{H^{1/2+\eta}(\Omega)},\quad\forall v \in H^{1/2+\eta}(\Omega). \tag{4.27}\]
With the above, \(\mathcal{F}(v)\) is a bounded linear functional on \(H^{1/2+\eta}(\Omega)\). Indeed, with Cauchy-Schwarz inequality and (4.27), we have
\[\sup_{v\in H^{1/2+\eta}(\Omega),v\neq 0}\frac{|\mathcal{F}(v)|}{ \|v\|_{H^{1/2+\eta}(\Omega)}}\leq\|\partial_{t}c\|_{L^{2}(\Omega)}+K_{\Gamma} \|\xi(\overline{w_{c}}\hat{c}-\bar{c})\|_{L^{2}(\Gamma)}\\ +\|\mathcal{E}\boldsymbol{u}_{s}\|_{L^{\infty}(\Omega)}\|\nabla c \|_{L^{2}(\Omega)}+\|\nabla(\mathcal{E}\boldsymbol{u}_{s})\|_{L^{6}(\Omega)} \|c\|_{L^{3}(\Omega)}+\|\mathcal{E}f\|_{L^{2}(\Omega)}. \tag{4.28}\]
The second term above is further bounded as follows:
\[\|\xi(\overline{w}_{c}\hat{c}-\overline{c})\|_{L^{2}(\Gamma)}\leq\|\xi \overline{w}_{c}\hat{c}\|_{L^{2}_{P}(\Lambda)}+\|\xi\overline{c}\|_{L^{2}_{P} (\Lambda)}\leq K(\|\hat{c}\|_{L^{2}_{P}(\Lambda)}+C_{1}\|c\|_{H^{1}(\Omega)}),\]
where we used (4.21) and (3.15). For the third and fourth terms in (4.28), Sobolev embedding results yield:
\[\|\mathcal{E}\boldsymbol{u}_{s}\|_{L^{\infty}(\Omega)}\|\nabla c\|_{L^{2}( \Omega)}+\|\nabla(\mathcal{E}\boldsymbol{u}_{s})\|_{L^{6}(\Omega)}\|c\|_{L^{3 }(\Omega)}\leq K\|\mathcal{E}\boldsymbol{u}_{s}\|_{H^{2}(\Omega)}\|c\|_{H^{1} (\Omega)}.\]
Thus, (4.28) becomes:
\[\|\mathcal{F}\|_{H^{-1/2-\eta}(\Omega)}\leq\|\partial_{t}c\|_{L^{ 2}(\Omega)}+K\|\hat{c}\|_{L^{2}_{P}(\Lambda)}\\ +K(C_{1}+\|\mathcal{E}\boldsymbol{u}_{s}\|_{H^{2}(\Omega)})\|c\|_ {H^{1}(\Omega)}+\|\mathcal{E}f\|_{L^{2}(\Omega)}. \tag{4.29}\]
For a.e. \(t\in(0,T)\), \(c(t)\in H^{1}_{0}(\Omega)\) solves
\[\int_{\Omega}D_{s}\nabla c\cdot\nabla v=\mathcal{F}(v),\quad\forall v\in H^{1 }_{0}(\Omega). \tag{4.30}\]
It then follows from Lemma 3.10 in [40], see also [23], and the principle of superposition that a.e. in time
\[\|c\|_{H^{3/2-\eta}(\Omega)}\leq K\|\mathcal{F}\|_{H^{-1/2-\eta}(\Omega)}. \tag{4.31}\]
The above can also be deduced from interpolation theory, see Chapter 14 in [6]. Integrating the above bound over \(t\), using (4.29) and the regularity properties of \(c\) and \(\hat{c}\) as discussed above, we have that \(c\in L^{2}(0,T;H^{3/2-\eta}(\Omega))\).
### Extension to vascular networks
Until now, we have considered a representation of a single vessel and its surroundings. However, in applications such as transport in human (peri)vascular or root networks, each vessel is but a segment of a larger network. To extend our setting, consider now a network of \(N\) domains \(\Omega_{v,i}\) with center-curves \(\Lambda_{i}=\{\mathbf{\lambda}_{i}(s),\ s\in(0,L_{i})\}\) for \(i=1,\ldots,N\). Denote \(\Lambda_{\text{graph}}=\cup_{i}\Lambda_{i}\). We use a similar notation and approach as Laurino and Zunino [40]. By direct extension from one to several vessels, letting \(w_{c}=1\) in \(\Omega_{v,i}\) for all \(i\) for clarity, we have that in each \(\Lambda_{i}\), the 1D concentration \(\hat{c}_{i}\) solves:
\[\partial_{t}(A_{i}\hat{c}_{i})-\partial_{s}(D_{v,i}A_{i}\partial_{s}\hat{c}_{ i})+\partial_{s}(A_{i}\langle u_{v,s,i}\rangle\hat{c}_{i})+\xi P_{i}(\hat{c}_{i}- \overline{c_{s}})=A_{i}\langle f_{v,i}\rangle. \tag{4.32}\]
The key next step is to specify interface and inlet/outlet conditions. Let \(Y\) denote the collection of bifurcation points, i.e. vertices that are shared between two or more curves: \(y\in Y\) if there exists at least one pair \((i,j)\) such that \(y=\lambda_{i}(0)=\lambda_{j}(L_{j})\). The set of curves with inlet nodes is denoted by \(I\) and the set of curves with outlet nodes is denoted by \(O\). At the level of one node \(y_{j}\in Y\), we separate the connecting curves as follows.
\[I_{j} =\{i\in\{1,\ldots,N\}:\ \mathbf{\lambda}_{i}(0)=y_{j}\}\quad\text{( curves having $y_{j}$ as an inlet node)},\] \[O_{j} =\{i\in\{1,\ldots,N\}:\ \mathbf{\lambda}_{i}(L_{i})=y_{j}\}\quad\text{( curves having $y_{j}$ as an outlet node)}.\]
Now, at every bifurcation point, we enforce conservation of fluxes and continuity or instantaneous mixing of the solute:
\[\sum_{k\in I_{j}}(A_{k}\langle u_{v,s,k}\rangle\hat{c}_{k}-D_{v,k}\partial_{s }\hat{c}_{k})(0) =\sum_{k\in O_{j}}(A_{k}\langle u_{v,s,k}\rangle\hat{c}_{k}-D_{v,k} \partial_{s}\hat{c}_{k})(L_{k}),\ \text{and}\]
\[\hat{c}_{k}(0) =\hat{c}_{i}(L_{i}),\quad\forall k\in I_{j},\ \forall i\in O_{j}.\]
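For concreteness, consider a single bifurcation point \(y_{j}\) where a parent vessel \(\Lambda_{1}\) ends (\(\boldsymbol{\lambda}_{1}(L_{1})=y_{j}\), so \(1\in O_{j}\)) and two daughter vessels \(\Lambda_{2},\Lambda_{3}\) begin (\(\boldsymbol{\lambda}_{2}(0)=\boldsymbol{\lambda}_{3}(0)=y_{j}\), so \(2,3\in I_{j}\)). The conditions above then reduce to
\[(A_{2}\langle u_{v,s,2}\rangle\hat{c}_{2}-D_{v,2}\partial_{s}\hat{c}_{2})(0)+(A_{3}\langle u_{v,s,3}\rangle\hat{c}_{3}-D_{v,3}\partial_{s}\hat{c}_{3})(0)=(A_{1}\langle u_{v,s,1}\rangle\hat{c}_{1}-D_{v,1}\partial_{s}\hat{c}_{1})(L_{1}),\]
\[\hat{c}_{2}(0)=\hat{c}_{3}(0)=\hat{c}_{1}(L_{1}).\]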
For inlet and outlet curves, we set
\[(A_{k}\langle u_{v,s,k}\rangle\hat{c}_{k}-D_{v,k}\partial_{s} \hat{c}_{k})(0) =0,\quad\forall k\in I\] \[(A_{k}\langle u_{v,s,k}\rangle\hat{c}_{k}-D_{v,k}\partial_{s} \hat{c}_{k})(L_{k}) =0,\quad\forall k\in O.\]
Let
\[H^{1}(\Lambda_{\text{graph}})=\bigoplus H^{1}_{A_{i}}(\Lambda_{i})\cap C^{0}( \Lambda_{\text{graph}}), \tag{4.33}\]
consisting of functions that are locally in \(H^{1}_{A_{i}}(\Lambda_{i})\) for each \(i\) (which implies continuity in each \(\Lambda_{i}\) since \(\Lambda_{i}\) is 1D) and that are continuous across bifurcation points. A natural weak formulation for the coupled network with the 3D surroundings now follows: Given \(\mathcal{E}f\in L^{2}(0,T;H^{-1}(\Omega))\) and \((\langle f_{v,1}\rangle,\ldots,\langle f_{v,N}\rangle)\in L^{2}(0,T;H^{1}(\Lambda_{\text{graph}})^{\prime})\), find \(c\in L^{2}(0,T;H^{1}_{0}(\Omega)),\hat{c}=(\hat{c}_{1},\ldots,\hat{c}_{N})\in L^{2}(0,T;H^{1}(\Lambda_{\text{graph}}))\) with \(\partial_{t}c\in L^{2}(0,T;H^{-1}(\Omega)),\partial_{t}\hat{c}\in L^{2}(0,T;H^{1}(\Lambda_{\text{graph}})^{\prime})\) such that for all \(v\in H^{1}_{0}(\Omega)\) and \(\hat{v}\in H^{1}(\Lambda_{\text{graph}})\):
\[\langle\partial_{t}c,v\rangle_{H^{-1}(\Omega)}+a(c,v)+\sum_{i=1}^{N}b_{\Lambda_{i}}(\bar{c}-\hat{c}_{i},\bar{v}) =\langle\mathcal{E}f,v\rangle_{H^{-1}(\Omega)}, \tag{4.34a}\] \[\sum_{i=1}^{N}\left(\langle\partial_{t}\hat{c}_{i},\hat{v}\rangle_{H^{-1}_{A_{i}}(\Lambda_{i})}+a_{\Lambda_{i}}(\hat{c}_{i},\hat{v})+b_{\Lambda_{i}}(\hat{c}_{i}-\bar{c},\hat{v})\right)=\sum_{i=1}^{N}\langle\langle f_{v,i}\rangle,\hat{v}\rangle_{H^{-1}_{A_{i}}(\Lambda_{i})}, \tag{4.34b}\]
with the initial conditions
\[c^{0}=\mathcal{E}c^{0}_{s},\quad\hat{c}^{0}_{i}=\langle c^{0}_{v,i}\rangle\quad i \in\{1,\ldots,N\}. \tag{4.35}\]
In the above, the forms \(b_{\Lambda_{i}}(\cdot,\cdot)\) and \(a_{\Lambda_{i}}(\cdot,\cdot)\) are obtained by naturally modifying the form \(b_{\Lambda}(\cdot,\cdot)\) in (4.19) and the form \(a_{\Lambda}(\cdot,\cdot)\) in (4.18), respectively. At the cost of only additional notation and conditions similar to (4.2), the above can be easily extended to the case of non-uniform concentration profiles \(w_{c}\neq 1\) in each vessel.
## 5. Coupled 3D-1D-1D models of solute transport
In this section, we focus on a case of particular neurological relevance, namely the case of a vascular network surrounded by a perivascular network and embedded in brain tissue with semi-permeable and moving membranes. From the 3D-3D-3D equations, we derive a coupled 3D-1D-1D model formulation allowing for strong jumps between the vascular and tissue domains in terms of material parameters (e.g. diffusion coefficient, velocity). We do not analyze this model further here, beyond stating a weak formulation. However, noting the similarity between the 3D-1D and 3D-1D-1D models, we expect that their well-posedness and model error analysis would follow from applications of the same techniques.
### A coupled 3D-3D-3D model of vascular-perivascular-tissue transport
We now consider the case of \(\Omega_{v}(t)\) representing a cylindrical blood vessel
\[\Omega_{v}(t)=\{\boldsymbol{\lambda}(s)+r\cos(\theta)\boldsymbol{N}(s)+r\sin( \theta)\boldsymbol{B}(s),0<s<L,\,0\leq\theta\leq 2\pi,\,0\leq r<R_{1}(s,t, \theta)\},\]
and introduce an intermediate annular domain \(\Omega_{p}(t)\) representing a perivascular space along the centerline \(\boldsymbol{\lambda}\) surrounding the blood vessel \(\Omega_{v}(t)\):
\[\Omega_{p}(t)=\{\boldsymbol{\lambda}(s)+r\cos(\theta)\boldsymbol{ N}(s)+r\sin(\theta)\boldsymbol{B}(s),\\ 0<s<L,\,0\leq\theta\leq 2\pi,\,R_{1}(s,t,\theta)<r<R_{2}(s,t, \theta)\}.\]
The domain \(\Omega_{p}(t)\) is further surrounded by a domain \(\Omega_{s}(t)\subset\mathbb{R}^{d}\), and the fixed domain \(\Omega\) is defined such that \(\Omega=(\Omega_{p}\cup\Omega_{v}\cup\Omega_{s})\). In each domain \(\Omega_{i}\), for \(i\in\{v,p,s\}\) and \(t\in(0,T]\), we assume that we are given a velocity field \(\boldsymbol{u}_{i}\) and diffusion coefficient \(D_{i}\), and we are interested in finding the concentration \(c_{i}:\Omega_{i}\times(0,T]\to\mathbb{R}\) such that
\[\partial_{t}c_{i}-\nabla\cdot(D_{i}\nabla c_{i})+\nabla\cdot(\tilde{ \boldsymbol{u}}_{i}c_{i})=f_{i}.\]
As before, \(\tilde{\boldsymbol{u}}_{i}=\boldsymbol{u}_{i}-\boldsymbol{w}\) with \(\boldsymbol{w}\) representing the domain velocity.
We assume that the interfaces \(\Gamma_{v}\) (separating the vasculature \(\Omega_{v}\) and perivascular \(\Omega_{p}\)) and \(\Gamma_{s}\) (separating the perivascular \(\Omega_{p}\) and tissue \(\Omega_{s}\)) are semi-permeable:
\[(c_{v}\tilde{\boldsymbol{u}}_{v}-D_{v}\nabla c_{v})\cdot \boldsymbol{n}-\xi_{v}(c_{v}-c_{p})=0 \text{on}\,\Gamma_{v}\times(0,T],\] \[(c_{p}\tilde{\boldsymbol{u}}_{p}-D_{p}\nabla c_{p})\cdot \boldsymbol{n}-\xi_{s}(c_{p}-c_{s})=0 \text{on}\,\Gamma_{s}\times(0,T],\]
where \(\boldsymbol{n}\) denotes a consistently-oriented normal at the interfaces, and \(\xi_{v}\) and \(\xi_{s}\) are the membrane permeabilities of the concentrations. In addition, conservation of mass is enforced on \(\Gamma_{v}\) and \(\Gamma_{s}\) with conditions similar to (3.1d). At the sides \(\Gamma_{v}^{0},\Gamma_{s}^{0}\) and \(\Gamma_{v}^{L},\Gamma_{s}^{L}\), where by the superscripts \(0\) and \(L\) we denote the cross-sections of any interface \(\Gamma\) at \(s=0\) and \(s=L\), respectively, we apply no flux boundary conditions:
\[(c_{i}\tilde{\boldsymbol{u}}_{i}-D_{i}\nabla c_{i})\cdot\boldsymbol{n}=0,\quad \text{on}\,\,\Gamma_{i}^{0}\cup\Gamma_{i}^{L}\times(0,T],\,\,\,i\in\{v,s\}.\]
On \(\partial\Omega\times(0,T]\), we set \(c_{s}=0\).
### Derivation of 1D averaged equations
We now aim to derive coupled cross-section averaged equations for the vascular and perivascular concentrations. Let \(\Theta_{v}(s)\) and \(\Theta_{p}(s)\) be the cross-sections of \(\Omega_{v}\) and \(\Omega_{p}\) respectively at \(s\in\Lambda\). We also denote by \(\partial\Theta_{v}(s)\) and \(\partial\Theta_{p}(s)\) the inner and outer boundaries of the cross-section \(\Theta_{p}(s)\), respectively. Note that \(\partial\Theta_{v}(s)\) is also the boundary of \(\Theta_{v}(s)\). We let \(A_{i}(s,t)=|\Theta_{i}(s,t)|\) and \(P_{i}=|\partial\Theta_{i}(s,t)|\) for \(i\in\{v,p\}\). We introduce the following cross-sectionally averaged quantities
\[\hat{c}_{v}(s,t)=\frac{1}{A_{v}(s,t)}\int_{\Theta_{v}(s,t)}c_{v},\quad\hat{c}_ {p}(s,t)=\frac{1}{A_{p}(s,t)}\int_{\Theta_{p}(s,t)}c_{p},\quad\forall(s,t)\in \Lambda\times(0,T).\]
The reduced 1D equations for \(\hat{c}_{v}\) and \(\hat{c}_{p}\) are presented in the next proposition. To clarify the presentation, we consider concentrations that are constant on each cross-section (rather than allowing radially varying profiles \(w_{c}\)).
**Proposition 5.1** (1D-1D vascular-perivascular transport equations).: _Assume that the vascular and perivascular concentrations \(c_{v},c_{p}\) solve the equations of Section 5.1, are constant on each cross-section:_
\[c_{v}(s,r,\theta,t)=\hat{c}_{v},\quad c_{p}(s,r,\theta,t)=\hat{c}_{p},\]
_and are sufficiently regular in the sense that \(c_{v}\in L^{1}(\Theta_{v}(s))\cap L^{1}(\partial\Theta_{v}(s))\), \(c_{p}\in L^{1}(\Theta_{p}(s))\cap L^{1}(\partial\Theta_{p}(s))\) for all \(s\in\Lambda\). Also assume that \(c_{s}\in L^{1}(\partial\Theta_{p}(s))\) for all \(s\in\Lambda\). Then, the vascular cross-section averaged concentration \(\hat{c}_{v}\) satisfies the following in \(\Lambda\):_
\[\partial_{t}(A_{v}\hat{c}_{v})-\partial_{s}(D_{v}A_{v}\partial_{s}\hat{c}_{v} )+\partial_{s}(A_{v}\langle u_{v,s}\rangle\hat{c}_{v})+\xi_{v}P_{v}(\hat{c}_{ v}-\hat{c}_{p})=A_{v}\langle f_{v}\rangle. \tag{5.1}\]
_In addition, the perivascular cross-section averaged concentration \(\hat{c}_{p}\) satisfies the following also in \(\Lambda\):_
\[\partial_{t}(A_{p}\hat{c}_{p})-\partial_{s}(D_{p}A_{p}\partial_{s}\hat{c}_{p} )+\partial_{s}(A_{p}\langle u_{p,s}\rangle\hat{c}_{p})+\xi_{v}P_{v}(\hat{c}_{ p}-\hat{c}_{v})+\xi_{s}P_{p}(\hat{c}_{p}-\overline{c_{s}})=A_{p}\langle f_{p}\rangle, \tag{5.2}\]
_where \(\overline{c_{s}}\) is the lateral average of \(c_{s}\) over \(\partial\Theta_{p}\)._
Proof.: We provide a brief proof sketch. For deriving (5.1), we follow the same arguments as in the proof of Proposition 4.1. In particular, in the notation of Proposition 4.1, the same equations hold with the inner radius set to \(0\) and the outer radius equal to the present \(R_{1}\), and with \(\overline{c_{s}}=\overline{c_{p}}=\hat{c}_{p}\). For (5.2), the same arguments also hold. The main difference is in the step (4.10). We now have by the stated interface and boundary conditions that
\[\int_{\partial\Theta_{v}(s)}(\tilde{\mathbf{u}}_{p}c_{p}-D_{p}\nabla c_{p})\cdot\mathbf{n}_{p}+\int_{\partial\Theta_{p}(s)}(\tilde{\mathbf{u}}_{p}c_{p}-D_{p}\nabla c_{p})\cdot\mathbf{n}_{p}\\ =\int_{\partial\Theta_{v}(s)}\xi_{v}(c_{p}-c_{v})+\int_{\partial\Theta_{p}(s)}\xi_{s}(c_{p}-c_{s})=\xi_{v}P_{v}(\overline{c_{p}}-\overline{c_{v}})+\xi_{s}P_{p}(\overline{c_{p}}-\overline{c_{s}}),\]
where the overlines denote context-dependent lateral averages (defined relative to the respective interfaces). Now, invoking the cross-section average assumptions, we adopt all the remaining arguments in the proof of Proposition 4.1 to arrive at the stated equations.
### Coupled 3D-1D-1D formulation
A similar approach as in Section 4.4 is adopted to extend the solution \(c_{s}\) to the whole domain \(\Omega\). The coupled 3D-1D-1D perivascular-vascular-tissue weak formulation then reads: find \(c\in L^{2}(0,T;H^{1}_{0}(\Omega))\) with \(\partial_{t}c\in L^{2}(0,T;H^{-1}(\Omega))\) such that
\[\langle\partial_{t}c,v\rangle_{H^{-1}(\Omega)}+a(c,v)+b^{p}_{\Lambda}(\xi_{s}( \overline{c}-\hat{c}_{p}),\overline{v})=(\mathcal{E}f,v)_{\Omega},\quad\forall v \in H^{1}_{0}(\Omega). \tag{5.3}\]
In addition, find \(\hat{c}_{i}\in L^{2}(0,T;H^{1}_{A_{i}}(\Lambda))\) with \(\partial_{t}\hat{c}_{i}\in L^{2}(0,T;H^{-1}_{A_{i}}(\Lambda))\) for \(i\in\{v,p\}\) such that \(\forall\hat{v}\in H^{1}_{A_{p}}(\Lambda)\),
\[\langle\partial_{t}\hat{c}_{p},\hat{v}\rangle_{H^{-1}_{A_{p}}(\Lambda)}+a^{p} _{\Lambda}(\hat{c}_{p},\hat{v})+b^{p}_{\Lambda}(\xi_{s}(\hat{c}_{p}-\overline{c _{s}}),\hat{v})+b^{v}_{\Lambda}(\xi_{v}(\hat{c}_{p}-\hat{c}_{v}),\hat{v})=(A_{ p}\langle f_{p}\rangle,\hat{v})_{\Lambda}, \tag{5.4}\]
and \(\,\forall\hat{v}\in H^{1}_{A_{v}}(\Lambda)\):
\[\langle\partial_{t}\hat{c}_{v},\hat{v}\rangle_{H^{-1}_{A_{v}}(\Lambda)}+a^{v}_{\Lambda}(\hat{c}_{v},\hat{v})+b^{v}_{\Lambda}(\xi_{v}(\hat{c}_{v}-\hat{c}_{p}),\hat{v})=(A_{v}\langle f_{v}\rangle,\hat{v})_{\Lambda}. \tag{5.5}\]
In the above, the forms \(a^{p}_{\Lambda},a^{v}_{\Lambda}\) are given by (4.18) where \(A\) is taken to be either \(A_{v}\) or \(A_{p}\), and we have defined
\[b^{i}_{\Lambda}(\hat{v},\hat{w})=(\hat{v},\hat{w})_{\Lambda,P_{i}},\quad \forall\hat{v},\hat{w}\in L^{2}_{P_{i}}(\Lambda),\ i\in\{p,v\}.\]
## 6. Inequalities for Sobolev spaces over annular and moving domains
To estimate the modelling error induced by the model reduction introduced in Section 4, we rely on standard inequalities such as the Poincare and trace inequalities on \(H^{1}(\Omega_{v})\). However, since \(\Omega_{v}\) is generally non-convex and allowed to move in time, these inequalities require some attention. Moreover, a key question is how the inequality constants depend on the (inner and) outer radii. In this section, we address these theoretical questions separately. Here and in what follows, we assume that \(R_{1}\) and \(R_{2}\) are independent of \(\theta\).
We define the maximal cross-section diameter \(\epsilon_{\max}\) and axial radius variation \(\epsilon_{s}\):
\[\epsilon_{\max} =\max_{t\in[0,T]}\max_{s\in\Lambda}\epsilon(s,t),\quad\text{where}\quad\epsilon(s,t)=\operatorname{diam}(\Theta(s,t))=\max_{\boldsymbol{x},\boldsymbol{y}\in\Theta(s,t)}|\boldsymbol{x}-\boldsymbol{y}|, \tag{6.1}\] \[\epsilon_{s} =\|\partial_{s}R_{1}\|_{L^{\infty}(0,T;L^{\infty}(\Lambda))}+\|\partial_{s}R_{2}\|_{L^{\infty}(0,T;L^{\infty}(\Lambda))}. \tag{6.2}\]
We assume that as \(\epsilon_{\max}\to 0\),
\[\epsilon(s,t)\lesssim R_{1}(s,t)\lesssim\epsilon(s,t)\,\text{ and }\,\epsilon(s,t) \lesssim R_{2}(s,t)\lesssim\epsilon(s,t). \tag{6.3}\]
This implies that:
\[\epsilon(s,t)\lesssim P(s,t)\lesssim\epsilon(s,t),\,\text{ and }\,\epsilon^{2}(s,t) \lesssim A(s,t)\lesssim\epsilon^{2}(s,t), \tag{6.4}\]
with (implicit) inequality constants independent of \(\Omega_{v},\Omega_{s}\). In what follows, \(K\) will denote a generic constant independent of \(\epsilon_{\max}\) and of the norms of \(c,\hat{c},c_{v}\), and \(c_{s}\). This generic constant \(K\) may take different values when used in different places and may depend on the final time and on the material parameters. Hereinafter, we will use \(A\lesssim B\) if there exists a generic constant \(K\) as defined above such that \(A\leq KB\).
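As a concrete illustration, for a straight annular vessel with constant radii \(R_{1}=\epsilon/4\) and \(R_{2}=\epsilon/2\), we have \(\epsilon(s,t)=\operatorname{diam}(\Theta(s,t))=2R_{2}=\epsilon\), \(\epsilon_{s}=0\), \(P=2\pi(R_{1}+R_{2})=\tfrac{3\pi}{2}\epsilon\) and \(A=\pi(R_{2}^{2}-R_{1}^{2})=\tfrac{3\pi}{16}\epsilon^{2}\), so that (6.3) and (6.4) hold with explicit constants as \(\epsilon\to 0\).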
**Lemma 6.1** (Poincare inequality over \(\Theta\)).: _For a.e. \(s\in\Lambda\) and \(t\in(0,T]\), the following Poincare inequality holds with inequality constant \(K_{p}\) independent of \(\epsilon(s,t)\):_
\[\|v-\langle v\rangle\|_{L^{2}(\Theta(s,t))}\leq K_{p}\epsilon(s,t)\|\nabla v \|_{L^{2}(\Theta(s,t))},\quad v\in H^{1}(\Theta(s,t)). \tag{6.5}\]
Proof.: See for example [24, Section 3.3]. The dependence on the diameter is recovered from standard scaling arguments where the constant \(K_{p}\) depends on a scaled annulus which is independent of \(t\), \(R_{1}\), and \(R_{2}\).
**Example 6.1** (Poincare inequality over \(\Theta\)).: _We can also numerically study the behaviour of the constant \(K_{p}\) in (6.5) via the following eigenvalue problem: find \(u\in H^{1}(\Theta)\) and \(\lambda>0\) such that_
\[-\Delta u =\lambda(u-\langle u\rangle) \text{in }\Theta,\] \[-\nabla u\cdot\boldsymbol{n} =0 \text{on }\partial\Theta. \tag{6.6}\]
_Denoting by \(\lambda_{1}\) the smallest eigenvalue of (6.6), it follows that \(\lambda_{1}^{-1/2}=K_{p}\epsilon\)._
_To investigate how \(K_{p}\) varies with the size of annular domains, we let \(\Theta\) be an annulus with inner and outer radii \(R_{1}\) and \(R_{2}\), respectively. We are interested in the cases where (i) \(R_{2}\) is fixed while \(R_{1}\to 0\), and (ii) \(R_{2}\) is fixed and \(R_{1}\to R_{2}\) (corresponding to \(\epsilon\to 0\), and covered by the theoretical result). We solve the eigenvalue problem (6.6) numerically via continuous linear finite elements defined relative to uniform meshes of the annuli, using the FEniCS finite element software [41] and the SLEPc eigensolvers [26], refining until the relative difference between the smallest eigenvalue approximations on consecutive meshes is below 0.1%. The smallest approximate eigenvalue is denoted \(\tilde{\lambda}_{1}\). For both cases, we observe that \(\tilde{\lambda}_{1}^{-1/2}\) scales linearly with the diameter \(\epsilon=2R_{2}\) of \(\Theta\) (Figure 2, left). Denoting the estimated slope by \(\tilde{K}_{p}\approx K_{p}\), we further observe that \(\tilde{K}_{p}\) remains bounded, both as \(R_{1}\to 0\) and \(R_{1}\to R_{2}\) (Figure 2, right)._
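The computation described in Example 6.1 can be reproduced along the following lines; this is a minimal sketch rather than the exact script used for Figure 2, assuming legacy FEniCS (dolfin) with mshr and the SLEPc bindings are available, and with illustrative radii, mesh resolution, and solver settings. It uses the fact that the non-constant eigenpairs of (6.6) coincide (up to an additive constant in the eigenfunction) with those of the standard Neumann eigenvalue problem \(-\Delta u=\lambda u\), so only a standard generalized eigenvalue problem needs to be assembled.

```python
from dolfin import (Point, FunctionSpace, TrialFunction, TestFunction,
                    inner, grad, dx, assemble, PETScMatrix, SLEPcEigenSolver)
from mshr import Circle, generate_mesh

# Annular cross-section Theta with inner/outer radii R1 < R2 (illustrative values)
R1, R2 = 0.25, 0.5
mesh = generate_mesh(Circle(Point(0.0, 0.0), R2) - Circle(Point(0.0, 0.0), R1), 64)

V = FunctionSpace(mesh, "CG", 1)
u, v = TrialFunction(V), TestFunction(V)

# Generalized eigenvalue problem (grad u, grad v) = lambda (u, v);
# the Neumann condition of (6.6) is natural and needs no enforcement.
A, M = PETScMatrix(), PETScMatrix()
assemble(inner(grad(u), grad(v)) * dx, tensor=A)
assemble(u * v * dx, tensor=M)

solver = SLEPcEigenSolver(A, M)
solver.parameters["spectrum"] = "target magnitude"
solver.parameters["spectral_transform"] = "shift-and-invert"
solver.parameters["spectral_shift"] = 1e-3  # eigenvalues near zero converge first
solver.solve(5)

# Discard the zero (constant) mode; the first positive eigenvalue approximates
# lambda_1 in (6.6), and K_p ~ lambda_1^{-1/2} / diam(Theta).
eigs = sorted(solver.get_eigenpair(i)[0] for i in range(solver.get_number_converged()))
lam1 = next(l for l in eigs if l > 1e-8)
eps = 2.0 * R2
print("lambda_1 =", lam1, "  K_p estimate =", lam1 ** -0.5 / eps)
```

Sweeping \(R_{1}\) and \(R_{2}\) over a family of annuli and regressing \(\tilde{\lambda}_{1}^{-1/2}\) against \(\epsilon\) then yields the slope estimate \(\tilde{K}_{p}\) discussed above.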
For a convex and regular domain such as a circle or ellipse, it is well-known that a Sobolev trace inequality holds [6]. However, is this also the case for (nearly) annular domains? The subsequent Lemma 6.2 addresses this question affirmatively.
**Lemma 6.2** (Trace inequality over \(\Theta\)).: _For a.e. \(s\in\Lambda\) and \(t\in(0,T]\), the following trace inequality holds with \(K_{\rm tr}\) independent of \(\epsilon(s,t)\):_
\[\|v\|_{L^{2}(\partial\Theta(s,t))}^{2}\leq K_{\rm tr}\left(\epsilon(s,t)^{-1} \|v\|_{L^{2}(\Theta(s,t))}^{2}+\epsilon(s,t)\|\nabla v\|_{L^{2}(\Theta(s,t))} ^{2}\right)\quad v\in H^{1}(\Theta(s,t)), \tag{6.7}\]
_where \(\epsilon(s,t)=\operatorname{diam}(\Theta(s,t))\)._
Proof.: For the circular case, if \(\Theta(s)=\{(r\cos(\theta),r\sin(\theta)),0\leq r<R_{2}(s,\theta),0\leq\theta\leq 2\pi\}\), this inequality is well known; see Section 1.6 in [6]. We use similar arguments to extend the proof to an annulus.
Suppose now that \(R_{1}>0\) and let \(\Theta(s)=\{(r\cos(\theta),r\sin(\theta)),R_{1}(s,\theta)<r<R_{2}(s,\theta),0 \leq\theta\leq 2\pi\}\). We omit \(s,t\) in the notation for the sake of brevity. Let \((r,\theta)\in(R_{1},R_{2})\times[0,2\pi]\). We write
\[R_{1}^{2}u^{2}(R_{1},\theta)-r^{2}u^{2}(r,\theta)=-\int_{R_{1}}^{r}\partial_{ z}(z^{2}u^{2}(z,\theta))\,\mathrm{d}z.\]
Figure 2. Numerical investigation of the Poincaré inequality on annular domains (Example 6.1). Left: Linear scaling of the (approximate) smallest eigenvalue of (6.6) related to the constant \(K_{p}\epsilon\) in (6.5). Right: Dependence of \(K_{p}\) on the ratio of radii reveals that both limits \(R_{1}\to 0\) and \(R_{1}\to R_{2}\) lead to bounded \(K_{p}\).
Here, for simplicity, we write \(u(r,\theta)=u(r\cos(\theta),r\sin(\theta))\). Thus, we have that
\[R_{1}^{2}u^{2}(R_{1},\theta)\leq r^{2}u^{2}(r,\theta)+\int_{R_{1}}^{r}|\partial_{ z}(z^{2}u^{2}(z,\theta))|\,\mathrm{d}z.\]
Integrating over \((R_{1},R_{2})\) and over \([0,2\pi]\), we find that
\[R_{1}(R_{2}-R_{1})\|u\|_{L^{2}(\partial\Theta_{1})}^{2}\leq R_{2 }\|u\|_{L^{2}(\Theta)}^{2}\\ +\int_{0}^{2\pi}\int_{R_{1}}^{R_{2}}\int_{R_{1}}^{r}2|u(z,\theta )|\,|\partial_{z}u(z,\theta)|z^{2}\,\mathrm{d}z\,\mathrm{d}r\,\mathrm{d}\theta +\int_{0}^{2\pi}\int_{R_{1}}^{R_{2}}\int_{R_{1}}^{r}2zu^{2}(z,\theta)\,\mathrm{ d}z\,\mathrm{d}r\,\mathrm{d}\theta,\]
where \(\partial\Theta_{1}\) denotes the inner circle of \(\Theta\). Simplifying the last term and using Cauchy-Schwarz inequality for the penultimate, we obtain:
\[R_{1}(R_{2}-R_{1})\|u\|_{L^{2}(\partial\Theta_{1})}^{2}\leq(R_{2}+2(R_{2}-R_{ 1}))\|u\|_{L^{2}(\Theta)}^{2}+2(R_{2}-R_{1})R_{2}\|u\|_{L^{2}(\Theta)}\|\nabla u \|_{L^{2}(\Theta)}.\]
With assumption (6.3) and Young's inequality, we obtain:
\[\|u\|_{L^{2}(\partial\Theta_{1})}^{2}\lesssim\epsilon^{-1}\|u\|_{L^{2}(\Theta) }^{2}+\epsilon\|\nabla u\|_{L^{2}(\Theta)}^{2}.\]
Similar arguments yield the same bound over the outer circle \(\partial\Theta_{2}\) for \(\|u\|_{L^{2}(\partial\Theta_{2})}^{2}\). Adding the two bounds gives the result. The above computations are for smooth functions. The result for functions in \(H^{1}(\Theta)\) follows by density.
We now turn to consider a trace inequality for the surrounding domain \(\Omega_{s}\) (Lemma 6.4) by way of an extension operator (Lemma 6.3) first introduced and studied in [53].
**Lemma 6.3** (Extension operator).: _For any \(t\in[0,T]\) and \(k\in\{1,2\}\), there exists an extension operator \(\mathcal{E}(t):H^{k}(\Omega_{s}(t))\to H^{k}(\Omega)\) satisfying \(\mathcal{E}(t)v|_{\Omega_{s}(t)}=v|_{\Omega_{s}(t)},\;\mathcal{E}(t)v|_{ \Gamma(t)}=v|_{\Gamma(t)}\) and such that_
\[\|\mathcal{E}(t)v\|_{H^{k}(\Omega)}\leq K_{\mathcal{E}}\|v\|_{H^{k}(\Omega_{s }(t))},\quad\forall v\in H^{k}(\Omega_{s}(t)), \tag{6.8}\]
_with a constant \(K_{\mathcal{E}}\) independent of \(\epsilon_{\max}\) and \(t\)._
Proof.: The construction of the extension operator and the proof of the continuity bound are very similar to [53, Theorem 2.1]. For completeness, we provide some details adapted to our geometrical setting. First, we define the extension from a fixed domain \(\tilde{B}=B_{1}\backslash B\) to \(B_{1}\) where \(B\) and \(B_{1}\) are cylindrical domains of radii \(1\) and \(2\) respectively. Let \(\mathcal{E}_{0}:H^{1}(\tilde{B})\to H^{1}(\mathbb{R}^{3})\) be the extension operator as defined in [18, Section 5.4]. We have the following two bounds:
\[\|\mathcal{E}_{0}u\|_{H^{1}(\mathbb{R}^{d})}\leq K_{1}\|u\|_{H^{1}(\tilde{B})},\;\;\|\mathcal{E}_{0}u\|_{H^{2}(\mathbb{R}^{d})}\leq K_{2}\|u\|_{H^{2}(\tilde {B})}.\]
Let \(H^{k}_{0}(B)=\{v\in H^{k}(B),\partial^{\alpha}v=0,\;|\alpha|<k\text{ on }\partial B\}\) and define \(z\in H^{k}_{0}(B)\) such that
\[\sum_{|\alpha|=k}(\partial^{\alpha}z,\partial^{\alpha}q)_{B}=\sum_{|\alpha|=k} (\partial^{\alpha}(\mathcal{E}_{0}u),\partial^{\alpha}q)_{B},\;\;\forall q \in H^{k}_{0}(B). \tag{6.9}\]
One can show that \(z\) is well-defined by the Lax-Milgram theorem since a Poincare inequality holds in \(H^{k}_{0}(B)\), and we have that
\[\|z\|_{H^{k}(B)}\leq\tilde{K}_{k}\|\mathcal{E}_{0}u\|_{H^{k}(B)}. \tag{6.10}\]
The extension operator \(\mathcal{E}_{\tilde{B}}:H^{1}(\tilde{B})\to H^{1}(B_{1})\) is then defined as follows:
\[\mathcal{E}_{\tilde{B}}u\left(\boldsymbol{x}\right)=\begin{cases}u(\boldsymbol{x }),&\boldsymbol{x}\in\tilde{B}\\ \mathcal{E}_{0}u(\boldsymbol{x})-z(\boldsymbol{x}),&\boldsymbol{x}\in B\end{cases}. \tag{6.11}\]
To show continuity of \(\mathcal{E}_{\tilde{B}}\), we have that for \(k\in\{1,2\}\):
\[\|\mathcal{E}_{\tilde{B}}u\|^{2}_{H^{k}(B_{1})}=\|u\|^{2}_{H^{k} (\tilde{B})}+\|\mathcal{E}_{0}u-z\|^{2}_{H^{k}(B)}\leq\|u\|^{2}_{H^{k}(\tilde{ B})}+(\|\mathcal{E}_{0}u\|_{H^{k}(B)}+\|z\|_{H^{k}(B)})^{2}\\ \leq\|u\|^{2}_{H^{k}(\tilde{B})}+(1+\tilde{K}_{k})^{2}K_{k}^{2}\|u \|^{2}_{H^{k}(\tilde{B})}\leq K_{k}^{f}\|u\|^{2}_{H^{k}(\tilde{B})}. \tag{6.12}\]
In the above, we let \(K_{k}^{f}=1+(1+\tilde{K}_{k})^{2}K_{k}^{2}\) which clearly depends on \(B,\tilde{B}\) and \(B_{1}\). A key property of this extension is that \(\mathcal{E}_{\tilde{B}}p=p\) for all polynomials \(p\) of degree less than \(k\), see [53, Lemma 2.1]. By choosing \(p\) as the average of \(u\) for \(k=1\) or the Lagrange interpolant of degree \(1\) for \(k=2\), this observation yields the following bounds on the semi-norms:
\[|\mathcal{E}_{\tilde{B}}u|^{2}_{H^{k}(B_{1})}=|\mathcal{E}_{\tilde{B}}(u-p)|^ {2}_{H^{k}(B_{1})}\leq K_{k}^{f}\|u-p\|^{2}_{H^{k}(\tilde{B})}\leq K_{k}^{f}K_ {2}|u|^{2}_{H^{k}(\tilde{B})}, \tag{6.13}\]
for some constant \(K_{2}\). Now, we define the extension operator from \(H^{k}(\Omega_{s}(t))\to H^{k}(\Omega)\) as:
\[\mathcal{E}(t)v=\begin{cases}\mathcal{E}_{\tilde{B}}(v\circ\chi_{R_{2}(t)}^{-1 })\circ\chi_{R_{2}(t)},&\text{in }B_{2\,R_{2}(t)}\\ v,&\text{in }\Omega_{s}(t)\backslash B_{2\,R_{2}(t)}\end{cases}, \tag{6.14}\]
where \(B_{2R_{2}(t)}\) is the cylinder surrounding \(B_{R_{2}}\) of radius \(2R_{2}\) and
\[\chi_{R_{2}(t)}((s,R_{2}\cos(\theta),R_{2}\sin(\theta)))=(s,\cos(\theta),\sin (\theta)),\ \ \forall(s,\theta)\in\Lambda\times[0,2\pi].\]
The continuity of \(\mathcal{E}\) then follows from a scaling argument and (6.3) which yield that
\[|v\circ\chi_{R_{2}(t)}^{-1}|^{2}_{H^{i}(\tilde{B})}\lesssim\epsilon_{\max}^{- 3+2i}|v|^{2}_{H^{i}(B_{2R_{2}(t)}\backslash B_{R_{2}(t)})},\ \ |\hat{v}\circ\chi_{R_{2}(t)}|^{2}_{H^{i}(B_{2R_{2}(t)})}\lesssim\epsilon_{\max}^ {3-2i}|\hat{v}|^{2}_{H^{i}(B_{1})},\]
for \(i\in\{0,1,2\}.\) Thus, we obtain the following:
\[\|\mathcal{E}(t)v\|^{2}_{H^{k}(\Omega)} =\|v\|^{2}_{H^{k}(\Omega_{s}(t)\backslash B_{2\,R_{2}(t)})}+\| \mathcal{E}_{\tilde{B}}(v\circ\chi_{R_{2}(t)}^{-1})\circ\chi_{R_{2}(t)}\|^{2} _{H^{k}(B_{2\,R_{2}(t)})}\] \[\lesssim\|v\|^{2}_{H^{k}(\Omega_{s}(t))}+\sum_{i=0}^{k}\epsilon_{ \max}^{3-2i}|\mathcal{E}_{\tilde{B}}(v\circ\chi_{R_{2}(t)}^{-1})|^{2}_{H^{i}(B _{1})}\] \[\leq\|v\|^{2}_{H^{k}(\Omega_{s}(t))}+\sum_{i=0}^{k}\epsilon_{\max} ^{3-2i}K_{k}^{f}K_{2}|v\circ\chi_{R_{2}(t)}^{-1}|^{2}_{H^{i}(\tilde{B})}\] \[\lesssim\|v\|^{2}_{H^{k}(\Omega_{s}(t))}.\qed\]
**Lemma 6.4** (Trace inequality over \(\Omega_{s}\)).: _There exists a constant \(K_{\Gamma}\) independent of \(t\) and of \(\epsilon_{\max}\) such that_
\[\|v\|_{L^{2}(\Gamma(t))}\leq K_{\Gamma}(\epsilon_{\max}\,|\ln\epsilon_{\max}|)^ {1/2}\,\|v\|_{H^{1}(\Omega_{s}(t))},\ \ \ \forall v\in H^{1}(\Omega_{s}(t)). \tag{6.15}\]
Proof.: Without loss of generality, we consider the case of \(\Omega_{v}\) being an annular cylinder domain and \(\Omega_{s}\) its outer surroundings. We have for \(v\in H^{1}(\Omega_{s}(t))\):
\[\|v\|^{2}_{L^{2}(\Gamma(t))}=\|\mathcal{E}v\|^{2}_{L^{2}(\Gamma(t))}=\int_{ \Lambda}\|\mathcal{E}v\|^{2}_{L^{2}(\partial\Theta_{2}(s,t))}\,\mathrm{d}s. \tag{6.16}\]
We use ideas from the proofs of [34, Lemma 2.1 and Lemma 2.2] where we adapt the arguments to 3D. We write for a.e. \(s\in\Lambda,t\geq 0\),
\[\|\mathcal{E}v\|_{L^{2}(\partial\Theta_{2}(s,t))}\leq\|\mathcal{E}v-\overline{ \mathcal{E}v}\|_{L^{2}(\partial\Theta_{2}(s,t))}+\|\overline{\mathcal{E}v}\|_ {L^{2}(\partial\Theta_{2}(s,t))}. \tag{6.17}\]
The first term is bounded by a Stekloff type inequality [38]:
\[\|\mathcal{E}v-\overline{\mathcal{E}v}\|_{L^{2}(\partial\Theta_{2}(s,t))} \leq K_{\mathrm{st}}\epsilon(s,t)^{1/2}\|\nabla(\mathcal{E}v)\|_{L^{2}(\Theta _{2}(s,t))}. \tag{6.18}\]
For the second term in (6.17), observe that by definition of the perimeter average
\[\|\overline{\mathcal{E}v}\|_{L^{2}(\partial\Theta_{2}(s,t))}=|\partial\Theta _{2}(s,t)|^{1/2}|\overline{\mathcal{E}v}|. \tag{6.19}\]
From the proof of [34, Lemma 2.1], we further have for \(p>2\)
\[|\overline{\mathcal{E}v}|\leq\left(\pi R_{2}(s,t)^{2}\right)^{-1/p}\|\mathcal{ E}v\|_{L^{p}(\Theta_{2}(s,t))}+\frac{1}{2\sqrt{\pi}}\|\nabla\mathcal{E}v\|_{L^{2}( \Theta_{2}(s,t))}. \tag{6.20}\]
Hence, we obtain:
\[\|\overline{\mathcal{E}v}\|_{L^{2}(\partial\Theta_{2}(s,t))}\leq K\left( \epsilon(s,t)^{1/2-2/p}\|\mathcal{E}v\|_{L^{p}(\Theta_{2}(s,t))}+\epsilon(s,t )^{1/2}\|\nabla\mathcal{E}v\|_{L^{2}(\Theta_{2}(s,t))}\right). \tag{6.21}\]
Upon substituting in (6.16), we have that
\[\int_{\Lambda}\|\mathcal{E}v\|_{L^{2}(\partial\Theta_{2}(s,t))}^{2}\leq K\int _{\Lambda}(\epsilon_{\max}^{1-4/p}\|\mathcal{E}v\|_{L^{p}(\Theta_{2}(s,t))}^{ 2}+\epsilon_{\max}\|\nabla\mathcal{E}v\|_{L^{2}(\Theta_{2}(s,t))}^{2}). \tag{6.22}\]
Consider now a fixed cylindrical domain \(B_{\Lambda}\) around the centerline \(\Lambda\) with cross-sections \(\Theta_{\Lambda}(s,t)\). We emphasize that \(B_{\Lambda}\) does not depend on \(\epsilon_{\max}\). Observe that for \(\epsilon_{\max}\) small, \(\Omega_{v}(t)\subset B_{\Lambda}\). Let \(\tilde{B}_{\Lambda}\) be another cylinder such that \(B_{\Lambda}\subset\tilde{B}_{\Lambda}\subset\Omega\) with cross-sections \(\tilde{\Theta}_{\Lambda}(s,t)\supset\Theta_{\Lambda}(s,t)\). Define \(\chi\) to be a smooth cut-off function with \(\chi=1\) in \(B_{\Lambda}\) and compact support in \(\tilde{B}_{\Lambda}\). By construction, we have
\[\|\mathcal{E}v\|_{L^{p}(\Theta_{2}(s,t))}\leq\|\mathcal{E}v\|_{L^{p}(\Theta_ {\Lambda}(s,t))}=\|\chi(\mathcal{E}v)\|_{L^{p}(\Theta_{\Lambda}(s,t))}\leq\| \chi(\mathcal{E}v)\|_{L^{p}(\tilde{\Theta}_{\Lambda}(s,t))}.\]
Since \(\chi(\mathcal{E}v)\in H^{1}_{0}(\tilde{\Theta}_{\Lambda}(s,t))\), we apply the Sobolev embedding result in 2D which gives a constant with an explicit dependence on \(p\)[57, eq (6.20)]:
\[\|\chi(\mathcal{E}v)\|_{L^{p}(\tilde{\Theta}_{\Lambda}(s,t))}\leq Kp^{1/2}\| \nabla(\chi(\mathcal{E}v))\|_{L^{2}(\tilde{\Theta}_{\Lambda}(s,t))}\leq Kp^{1/ 2}\|\mathcal{E}v\|_{H^{1}(\tilde{\Theta}_{\Lambda}(s,t))}.\]
The above constant depends on \(\tilde{\Theta}_{\Lambda}(s,t)\) and on \(\chi\) but not \(\epsilon_{\max}\). Substituting in (6.22), and choosing \(p=|\ln\epsilon_{\max}|\) yields:
\[\|\mathcal{E}v\|_{L^{2}(\Gamma(t))}^{2} \leq K\int_{\Lambda}\left(\epsilon_{\max}|\ln\epsilon_{\max}|\| \mathcal{E}v\|_{H^{1}(\Theta_{\Lambda}(s,t))}^{2}+\epsilon_{\max}\|\nabla \mathcal{E}v\|_{L^{2}(\Theta_{2}(s,t))}^{2}\right)\] \[\leq K\epsilon_{\max}\left(|\ln\epsilon_{\max}|\|\mathcal{E}v\|_ {H^{1}(\tilde{B}_{\Lambda})}^{2}+\|\mathcal{E}v\|_{H^{1}(\Omega)}^{2}\right) \leq K\epsilon_{\max}|\ln\epsilon_{\max}|\|\mathcal{E}v\|_{H^{1}(\Omega)}^{2}.\]
Using (6.8) in the above concludes the proof.
**Example 6.2** (Trace inequality over \(\Omega_{s}\)).: _The scaling law of Lemma 6.4 can be demonstrated numerically by considering the following Stekloff eigenvalue problem [55]: find \(u\in H^{1}(\Omega_{s})\) and \(\lambda>0\) such that_
\[\Delta u =u \text{in }\Omega_{s},\] \[\nabla u\cdot\boldsymbol{n} =\lambda u \text{on }\Gamma,\] \[\nabla u\cdot\boldsymbol{n} =0 \text{on }\partial\Omega_{s}\setminus\Gamma. \tag{6.23}\]
_More precisely, for \(\lambda_{1}\) being the smallest non-zero eigenvalue of (6.23), there holds that_
\[\|u\|_{L^{2}(\Gamma)}\leq\lambda_{1}^{-1/2}\|u\|_{H^{1}(\Omega_{s})},\quad\forall u\in H^{1}(\Omega_{s}). \tag{6.24}\]
_Thus, approximations to \(\lambda_{1}\) in (6.23) can be used to estimate the bound in (6.15). Now, consider an embedding domain \(\Omega_{s}\) (also) in the shape of a cylinder with unit height and unit outer radius. Consider an inner cylinder \(\Omega_{v}\) with diameter \(\epsilon=2R_{1}\) (and unit height) and a decreasing sequence of radii \(R_{1}\). As in Example 6.1, we approximate the smallest non-zero eigenvalue using the discretization of (6.23) by continuous linear elements defined relative to a series of uniformly refined meshes, and deem the eigenvalues converged when the relative difference between refinements is less than 5%. Clearly, the trace constant decreases with decreasing \(\epsilon_{\max}\) (Figure 3). We note that the data are well-fitted by the theoretically established \(\epsilon_{\max}^{1/2}|\mathrm{ln}\,\epsilon_{\max}|^{1/2}\) expression, especially for small radii._
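A sketch of the corresponding computation is given below. For brevity it is written for a 2D analogue (a unit disk with a concentric hole of radius \(R_{1}\), with \(\Gamma\) the inner circle); the cylindrical 3D case used in Example 6.2 is assembled in the same way with a marked inner lateral surface. The weak form follows from (6.23): \((\nabla u,\nabla v)_{\Omega_{s}}+(u,v)_{\Omega_{s}}=\lambda(u,v)_{\Gamma}\). Library names, mesh resolution, and the shift-and-invert settings are assumptions of this sketch, not prescriptions.

```python
from dolfin import (Point, FunctionSpace, TrialFunction, TestFunction, SubDomain,
                    MeshFunction, Measure, inner, grad, dx, assemble,
                    PETScMatrix, SLEPcEigenSolver)
from mshr import Circle, generate_mesh

# 2D analogue: Omega_s = unit disk minus a disk of radius R1, Gamma = inner circle
R1 = 0.1
mesh = generate_mesh(Circle(Point(0.0, 0.0), 1.0) - Circle(Point(0.0, 0.0), R1), 96)

class InnerBoundary(SubDomain):
    def inside(self, x, on_boundary):
        # facets on the boundary that lie close to the inner circle
        return on_boundary and x[0]**2 + x[1]**2 < (1.5 * R1)**2

markers = MeshFunction("size_t", mesh, mesh.topology().dim() - 1, 0)
InnerBoundary().mark(markers, 1)
dGamma = Measure("ds", domain=mesh, subdomain_data=markers)(1)

V = FunctionSpace(mesh, "CG", 1)
u, v = TrialFunction(V), TestFunction(V)

# Steklov problem in weak form: (grad u, grad v) + (u, v) = lambda (u, v)_Gamma
A, B = PETScMatrix(), PETScMatrix()
assemble(inner(grad(u), grad(v)) * dx + u * v * dx, tensor=A)
assemble(u * v * dGamma, tensor=B, keep_diagonal=True)  # B vanishes away from Gamma

solver = SLEPcEigenSolver(A, B)
solver.parameters["spectrum"] = "target magnitude"
solver.parameters["spectral_transform"] = "shift-and-invert"
solver.parameters["spectral_shift"] = 1.0  # illustrative target for the smallest eigenvalue
solver.solve(3)
lam1 = solver.get_eigenpair(0)[0]
print("lambda_1 =", lam1, "  trace constant estimate =", lam1 ** -0.5)
```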
## 7. Analysis of the modelling error
We next turn to address the following question: how large of a modelling error has been introduced by the derivation and associated assumptions of the coupled 3D-1D model in Section 4.3? We begin by considering the modelling error associated with the cross-section average concentration in the vessel, before turning to the modelling error in the surroundings. We will make use of a duality argument, and therefore introduce and analyze the stability of an associated dual problem before turning to the modelling error estimates.
### Well-posedness and stability of a dual transport problem
We consider the properties of a backward-in-time dual transport problem, defined in association with the forward vessel transport problem of Proposition 3.1, in Lemma 7.1. A key aspect is to appropriately account for the moving domain and time-derivatives with respect to moving frames. We therefore explicitly track the domain dependence on time \(t\).
**Lemma 7.1** (Backward-in-time dual problem).: _The following problem is well-posed: given \(g\in L^{2}(0,T;L^{2}(\Omega_{v}(t)))\), find \(h\in W=\{h\in L^{2}(0,T;H^{1}(\Omega_{v}(t)))\mid\dot{h}\in L^{2}(0,T;H^{-1}(\Omega_{v}(t)))\}\) with \(h(T)=0\) in \(\Omega_{v}(T)\) such that for \(t\in(0,T)\) and for all \(\varphi\in H^{1}(\Omega_{v}(t))\):_
\[-\langle\dot{h}(t),\varphi\rangle_{H^{-1}(\Omega_{v}(t))}+(D_{v} \nabla h(t),\nabla\varphi)_{\Omega_{v}(t)}+(\xi h(t),\varphi)_{\Gamma(t)}\\ -((\mathbf{u}_{v}-\mathbf{w})\cdot\nabla h(t),\varphi)_{\Omega_{v}(t)}= (g(t),\varphi)_{\Omega_{v}(t)}. \tag{7.1}\]
Figure 3. Numerical investigation of the trace inequality (Example 6.2) for vessels of decreasing diameters \(\epsilon=2R_{1}\).
_In addition, the following stability bound holds_
\[\|h\|_{L^{\infty}(0,T;L^{2}(\Omega_{v}(t)))}+\|D_{v}^{1/2}\nabla h\|_ {L^{2}(0,T;L^{2}(\Omega_{v}(t)))}\\ +\|\xi^{1/2}h\|_{L^{2}(0,T;L^{2}(\Gamma(t)))}\leq K_{b}\|g\|_{L^{2} (0,T;L^{2}(\Omega_{v}(t)))}, \tag{7.2}\]
_where \(K_{b}\) is independent of \(\epsilon_{\max}\) but depends on the final time \(T\), on \(\|\nabla\cdot\mathbf{w}\|_{L^{\infty}(0,T;L^{\infty}(\Omega))}\), and on \(\|D_{v}^{-1/2}\tilde{\mathbf{u}}_{v}\|_{L^{\infty}(0,T;L^{\infty}(\Omega_{v}(t)))}\)._
Proof.: We consider the forward-in-time solution \(z\in W\) and \(z(0)=0\) in \(\tilde{\Omega}_{v}(0)\) solving for all \(\varphi\in H^{1}(\tilde{\Omega}_{v}(t))\)
\[\langle\dot{z},\varphi\rangle_{H^{-1}(\tilde{\Omega}_{v}(t))}+(\overset{ \leftarrow}{D_{v}}\nabla z,\nabla\varphi)_{\tilde{\Omega}_{v}(t)}+(\overset{ \leftarrow}{\xi}z,\varphi)_{\tilde{\Gamma}(t)}-((\overset{\leftarrow}{\mathbf{u }}_{v}-\overset{\leftarrow}{\mathbf{w}})\cdot\nabla z,\varphi)_{\tilde{\Omega}_{v }(t)}=(\overset{\leftarrow}{g},\varphi)_{\tilde{\Omega}_{v}(t)},\]
where a "\(\leftarrow\) " over a function indicates that we reverse the time, e.g., \(\overset{\leftarrow}{g}(t)=g(T-t)\). The domain \(\tilde{\Omega}_{v}(t)=\Omega_{v}(T-t)\) (similarly \(\tilde{\Gamma}(t)=\Gamma(T-t)\)) is given by the flow map \(\overset{\leftarrow}{\mathbf{\psi}}(\mathbf{x},t)=\mathbf{\psi}(\mathbf{x},T-t)\). Setting \(h(t)=z(T-t)\) for \(t\in[0,T]\), we recover the solution to (7.1) since \(\dot{h}=-\dot{z}\) and \(h(T)=z(0)=0\). Verifying the existence and uniqueness of \(z\) then follows from the abstract framework in [3] and from very similar arguments to the proof of Proposition 3.1.
Moreover, choose \(\varphi=h\in L^{2}(0,T;H^{1}(\Omega_{v}(t)))\) in (7.1), integrate over time \(\tau\in[t,T]\) and use the following formula [3, Theorem 2.40 and Corollary 2.41]:
\[-2\int_{t}^{T}\langle\dot{h},h\rangle_{H^{-1}(\Omega_{v}(\tau))}\mathrm{d} \tau=\|h(t)\|_{L^{2}(\Omega_{v}(t))}^{2}+\int_{t}^{T}(h,h\nabla\cdot\mathbf{w})_{ \Omega_{v}(\tau)}\mathrm{d}\tau. \tag{7.3}\]
Along with Holder's inequality, this yields:
\[\frac{1}{2}\|h(t)\|_{L^{2}(\Omega_{v}(t))}^{2}+\|D_{v}^{1/2} \nabla h\|_{L^{2}(t,T;L^{2}(\Omega_{v}))}^{2}+\|\xi^{1/2}h\|_{L^{2}(t,T;L^{2} (\Gamma))}^{2}\\ \leq\|g\|_{L^{2}(t,T;L^{2}(\Omega_{v}(t)))}\|h\|_{L^{2}(t,T;L^{2} (\Omega_{v}(t)))}+\frac{1}{2}\|\nabla\cdot\mathbf{w}\|_{L^{\infty}(0,T;L^{\infty} (\Omega_{v}(t)))}\|h\|_{L^{2}(t,T;L^{2}(\Omega_{v}(t)))}^{2}\\ +\|D_{v}^{-1/2}\tilde{\mathbf{u}}_{v}\|_{L^{\infty}(0,T;L^{\infty}( \Omega_{v}(t)))}\|h\|_{L^{2}(t,T;L^{2}(\Omega_{v}(t)))}\|D_{v}^{1/2}\nabla h\| _{L^{2}(t,T;L^{2}(\Omega_{v}(t)))}. \tag{7.4}\]
Applying Young's inequality for the first and last term on the right hand side above results in:
\[\frac{1}{2}\|h(t)\|_{L^{2}(\Omega_{v}(t))}^{2}+\frac{1}{2}\|D_{v} ^{1/2}\nabla h\|_{L^{2}(t,T;L^{2}(\Omega_{v}))}^{2}+\|\xi^{1/2}h\|_{L^{2}(t,T; L^{2}(\Gamma))}^{2}\leq\frac{1}{2}\|g\|_{L^{2}(0,T;L^{2}(\Omega_{v}(t)))}^{2}\\ +\frac{1}{2}\left(1+\frac{1}{2}\|\nabla\cdot\mathbf{w}\|_{L^{\infty}(0, T;L^{\infty}(\Omega_{v}(t)))}+\|D_{v}^{-1/2}\tilde{\mathbf{u}}_{v}\|_{L^{\infty}(0,T;L^{ \infty}(\Omega_{v}(t)))}^{2}\right)\|h\|_{L^{2}(t,T;L^{2}(\Omega_{v}(t)))}^{2}.\]
The result can then be concluded by Gronwall's inequality, see e.g [18, Appendix B.k].
### Model error introduced in the derivation of the 1D model
With this dual stability result at hand, we now turn to our first main modelling error estimate, namely comparing (the extension of) the cross-section averaged vessel solution \(\hat{c}:\Lambda\times(0,T)\to\mathbb{R}\) with its reference solution \(c_{v}:\Omega_{v}\times(0,T)\to\mathbb{R}\), the weak solution of (3.1a). More specifically, we aim to quantify the modelling error
\[\|c_{v}-E\hat{c}\|_{L^{2}(0,T;L^{2}(\Omega_{v}))}.\]
The (constant cross-section) extension \(E\) from \(H^{1}(\Lambda)\) to \(H^{1}(\Omega_{v})\) is given by
\[E\hat{c}(s,r,\theta,t)=\hat{c}(s,t),\quad\forall(r,\theta)\in\Theta(s,t),\quad \forall(s,t)\in\Lambda\times(0,T). \tag{7.5}\]
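Note that this extension is norm-preserving in the following sense: identifying, as in the averaging arguments of Section 4, the volume integral over \(\Omega_{v}(t)\) with the iterated integral over \(\Lambda\) and the cross-sections \(\Theta(s,t)\), one has
\[\|E\hat{c}\|_{L^{2}(\Omega_{v}(t))}^{2}=\int_{\Lambda}\int_{\Theta(s,t)}\hat{c}(s,t)^{2}=\int_{\Lambda}A(s,t)\,\hat{c}(s,t)^{2}=\|\hat{c}\|_{L^{2}_{A}(\Lambda)}^{2},\]
so the modelling error below is measured in a norm consistent with the weighted 1D norms used for \(\hat{c}\).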
We will frequently write \(\hat{c}=E\hat{c}\) when context allows this simplification. Proposition 7.1 gives the main modelling error estimate for the solutions in the vessel.
**Proposition 7.1** (Model error in the vessel).: _Let \(c_{v},c_{s}\) be weak solutions to the coupled 3D-3D transport problem (3.11) and assume that \(c_{v}(0)\in H^{1}(\Omega_{v})\). Let \(c,\hat{c}\) be the weak solutions to the reduced coupled 3D-1D problem (4.20) with \(w_{c}=1\). Then,_
\[\|c_{v}-\hat{c}\|_{L^{2}(0,T;L^{2}(\Omega_{v}))}\\ \lesssim K_{b}\,K_{p}\,\left(\|f_{v}\|_{L^{2}(0,T;H^{1}(\Omega_{v} ))}+\|\nabla c_{v}(0)\|_{L^{2}(\Omega_{v})}\right)\epsilon_{\max}+(K_{b}\,K_{ \Gamma}\,C_{2})\ \epsilon_{\max}^{1/2}|\ln\epsilon_{\max}|^{1/2}\\ +K_{b}(K_{p}+1)(K_{\rm tr}+1)C_{1}\left(\epsilon_{\max}^{1/2}+\|u _{v,r}\|_{L^{\infty}(0,T;L^{\infty}(\Omega_{v}(t)))}+\|u_{v,\theta}\|_{L^{ \infty}(0,T;L^{\infty}(\Omega_{v}(t)))}+\epsilon_{s}\right). \tag{7.6}\]
_Here, \(C_{1}\) and \(C_{2}\) depend on the material parameters and the solutions \(c,\hat{c},\) and \(c_{s}\) as_
\[C_{1}=\left(\max_{s\in\Lambda,t\in[0,T]}\|u_{v,s}\|_{H^{1}( \Theta)}+\|\mathbf{w}\|_{L^{\infty}(\Omega)}\right)\|\hat{c}\|_{L^{2}(0,T;L^{2}( \partial\Omega_{v}(t)))}\\ +\|\hat{c}\|_{L^{2}(0,T;H^{1}(\Omega_{v}(t)))}+\|D_{v}\partial_{s }\hat{c}+\hat{c}\hat{u}\|_{L^{2}(0,T;L^{2}(\Omega_{v}(t)))}+\|\xi(\hat{c}- \overline{c})\|_{L^{2}(0,T;L^{2}(\Gamma(t)))},\]
_and_
\[C_{2}=\|\xi^{1/2}\|_{L^{\infty}(0,T;L^{\infty}(\Gamma(t)))}\left(\|c_{s}\|_{L ^{2}(0,T;H^{1}(\Omega_{s}(t)))}+\|c\|_{L^{2}(0,T;H^{1}(\Omega))}\right).\]
_In addition, there exists a constant \(K\) depending only on the material parameters and the final time \(T\) but not on \(\epsilon_{\max}\) such that \(C_{1}+C_{2}\lesssim K\). Under the additional assumption that_
\[\|\overline{c}-\overline{c_{s}}\|_{L^{2}(0,T;L^{2}(\Gamma(t)))}\lesssim \epsilon_{\max}^{1/2}, \tag{7.7}\]
_bound (7.6) can be improved by replacing its last term by \(K_{b}(K_{\rm st}K_{\mathcal{E}}+1)C_{2}\ \epsilon_{\max}^{1/2}\)._
Before presenting the proof, we remark that this proposition, and in particular (7.6), provides a rigorous bound on the error in the vessel introduced by the derivation of the 3D-1D model. For the error to converge to \(0\) as \(\epsilon_{\max}\to 0\), one needs to assume that \(u_{v,r}\) and \(u_{v,\theta}\) are negligible, at least for small \(\epsilon_{\max}\). In addition, if \(\epsilon_{s}\lesssim\epsilon_{\max}\) as \(\epsilon_{\max}\to 0\), then one recovers a convergence rate of \(1/2\) (up to a log factor) with respect to \(\epsilon_{\max}\). The additional assumption (7.7) essentially leads to an estimate for the error induced by Assumptions 4.1 and 4.2 of Section 4.2 alone, without those of Section 4.4; i.e., an estimate for the error between \(E\hat{c}\), as given in (4.14), and \(c_{v}\), the solution of (3.11). In this case, we can remove the log factor.
Proof.: (Proposition 7.1) We proceed in three main steps to (I) derive a first identity for the modelling error by using a duality argument, (II) manipulate this identity by deriving the weak form satisfied by the extended solution \(E\hat{c}\), and (III) bound its terms via Poincare, trace, Stekloff inequalities, and the regularity bound derived in Lemma 7.1.
_Step I._ We first recall from (3.11) that the reference solution \(c_{v}\) satisfies
\[\langle\dot{c}_{v},\phi\rangle_{H^{-1}(\Omega_{v}(t))}+(\nabla\cdot\mathbf{w}c_{v},\phi)_{\Omega_{v}(t)}+a_{\rm ref}(c_{v},\phi)=\ell_{\rm ref}(\phi),\quad\forall\phi\in H^{1}(\Omega_{v}(t)), \tag{7.8}\]
where we have introduced the two forms
\[a_{\rm ref}(c,\phi) =(D_{v}\nabla c,\nabla\phi)_{\Omega_{v}(t)}+(\xi c,\phi)_{\Gamma( t)}-((\mathbf{u}_{v}-\mathbf{w})c,\nabla\phi)_{\Omega_{v}(t)} \forall c,\phi\in H^{1}(\Omega_{v}(t)),\] \[\ell_{\rm ref}(\phi) =(\xi c_{s},\phi)_{\Gamma(t)}+(f_{v},\phi)_{\Omega_{v}(t)} \forall\phi\in H^{1}(\Omega_{v}(t)).\]
To estimate the error \(e\equiv c_{v}-E\hat{c}\), we proceed by duality. Namely, let \(h\) be the solution of (7.1) with \(g=e\in L^{2}(0,T;L^{2}(\Omega_{v}(t)))\). From [3, Corollary 2.41] and the fact that \(h(T)=0\)
the following integration by parts formula holds:
\[\int_{0}^{T}-\langle\dot{h},e\rangle_{H^{-1}(\Omega_{v}(t))}=\int_{0}^{T}\langle \dot{e},h\rangle_{H^{-1}(\Omega_{v}(t))}+\int_{0}^{T}(e,h\nabla\cdot\mathbf{w})_{ \Omega_{v}(t)}+(e(0),h(0))_{\Omega_{v}(0)}.\]
With this identity, (7.1) tested with \(e\in L^{2}(0,T;H^{1}(\Omega_{v}(t)))\) and integrated over \((0,T)\) reads:
\[\int_{0}^{T}\langle\dot{e},h\rangle_{H^{-1}(\Omega_{v}(t))}+\int_{0}^{T}(e,h \nabla\cdot\mathbf{w})_{\Omega_{v}(t)}+(e(0),h(0))_{\Omega_{v}(0)}+\int_{0}^{T}a_{ \text{ref}}(e,h)=\int_{0}^{T}\|e\|_{L^{2}(\Omega_{v}(t))}^{2}.\]
Subtracting the time-integrated (7.8), combined with the observation that indeed \(\dot{\hat{c}}\in L^{2}(0,T;L^{2}(\Omega_{v}(t)))\) (where we write \(\hat{c}\) in place of \(E\hat{c}\) here and in the following), we obtain the following identity for the modelling error \(e\):
\[\int_{0}^{T}\|e\|_{L^{2}(\Omega_{v}(t))}^{2}=-\int_{0}^{T}(\dot{ \hat{c}},h)_{\Omega_{v}(t)}-\int_{0}^{T}(\hat{c},h\nabla\cdot\mathbf{w})_{\Omega_{ v}(t)}-\int_{0}^{T}a_{\text{ref}}(\hat{c},h)\\ +\int_{0}^{T}\ell_{\text{ref}}(h)+(e(0),h(0))_{\Omega_{v}(0)}. \tag{7.9}\]
_Step II._ Next, we aim to derive an alternative expression for this error identity. By definition of the strong material derivative cf. (3.6):
\[\int_{0}^{T}(\dot{\hat{c}},h)_{\Omega_{v}(t)}+\int_{0}^{T}(\hat{c},h\nabla \cdot\mathbf{w})_{\Omega_{v}(t)}=\int_{0}^{T}(\partial_{t}\hat{c},h)_{\Omega_{v}( t)}+\int_{0}^{T}(\nabla\cdot(\hat{c}\mathbf{w}),h)_{\Omega_{v}(t)}. \tag{7.10}\]
Note that this definition holds for \(\hat{c}\in\{v\in L^{2}(0,T;H^{1}(\Omega_{v}(t)))\,|\,\dot{v}\in L^{2}(0,T;L^{2 }(\Omega_{v}(t)))\}\) by density of \(\mathcal{D}(0,T;H^{1}(\Omega_{v}(t)))\) in such spaces, [3, Lemma 2.38]. Further, integrating by parts gives
\[(\nabla\cdot(\hat{c}\mathbf{w}),h)_{\Omega_{v}(t)}=(\hat{c}\mathbf{w}\cdot\mathbf{n},h)_{ \partial\Omega_{v}(t)}-(\hat{c}\mathbf{w},\nabla h)_{\Omega_{v}(t)},\]
while the cross-section average definitions combined with the chain rule yield
\[(\partial_{t}\hat{c},h)_{\Omega_{v}(t)}=(\partial_{t}\hat{c},A\langle h \rangle)_{\Lambda}=(\partial_{t}(A\hat{c}),\langle h\rangle)_{\Lambda}-(\hat {c}\partial_{t}A,\langle h\rangle)_{\Lambda}.\]
We will derive equivalent expressions for the two terms on the right hand side. First for the last term, by definition of the area \(A\) and (3.4), we have that:
\[(\hat{c}\partial_{t}A,\langle h\rangle)_{\Lambda}=\int_{\Lambda}\hat{c} \langle h\rangle\partial_{t}\left(\int_{\Theta(t)}1\right)=\int_{\Lambda}\int _{\partial\Theta(t)}\hat{c}\langle h\rangle\mathbf{w}\cdot\mathbf{n}=(\hat{c}\mathbf{w} \cdot\mathbf{n},\langle h\rangle)_{\partial\Omega_{v}(t)}.\]
Second, we will address the former term in combination with other terms from (7.9). To this end, denote by \(\hat{\mathbf{u}}_{v}=(\langle u_{v,s}\rangle,0,0)\). Note by the definition of \(a_{\text{ref}}\), the cross-section, and perimeter averages, and by adding and subtracting, that,
\[\begin{split} a_{\text{ref}}(\hat{c},& h)-(\hat{c}\mathbf{w},\nabla h)_{\Omega_{v}(t)}=(D_{v} \partial_{s}\hat{c},\partial_{s}h)_{\Omega_{v}(t)}+(\xi\hat{c},h)_{\Gamma(t)}- (\mathbf{u}_{v}\hat{c},\nabla h)_{\Omega_{v}(t)}\\ &=(D_{v}A\partial_{s}\hat{c},\langle\partial_{s}h\rangle)_{ \Lambda}+(\xi P\hat{c},\overline{h})_{\Lambda}-(\mathbf{u}_{v}\hat{c},\nabla h)_{ \Omega_{v}(t)}\\ &=(D_{v}A\partial_{s}\hat{c},\langle\partial_{s}h\rangle)_{ \Lambda}+(\xi P\hat{c},\overline{h})_{\Lambda}-((\mathbf{u}_{v}-\hat{\mathbf{u}}_{v}) \hat{c},\nabla h)_{\Omega_{v}(t)}-(A\langle u_{v,s}\rangle\hat{c},\langle \partial_{s}h\rangle)_{\Lambda}.\end{split} \tag{7.11}\]
We proceed by returning to the weak formulation of the coupled 3D-1D problem (4.20b) with \(\hat{v}=\langle h\rangle\in H^{1}_{A}(\Lambda)\). We now invoke the assumption that \(w_{c}=1\); then \(g_{s}=0\) and \(\overline{w_{c}}=1\). Let \(\hat{u}=\langle u_{v,s}\rangle\). Then, (4.20b), after combining time-integration terms, gives that
\[(\partial_{t}(A\hat{c}),\langle h\rangle)_{\Lambda}+(D_{v}A\partial_{s}\hat{c },\partial_{s}\langle h\rangle)_{\Lambda}-(A\hat{u}\hat{c},\partial_{s}\langle h \rangle)_{\Lambda}+(\xi P(\hat{c}-\bar{c}),\langle h\rangle)_{\Lambda}=(A \langle f_{v}\rangle,\langle h\rangle)_{\Lambda}. \tag{7.12}\]
Observe that, by the Leibniz integration rule,
\[\partial_{s}(A\langle h\rangle)=\partial_{s}\left(\int_{R_{1}}^{R_{2}}\int_{0}^{2\pi}hr\,\mathrm{d}r\,\mathrm{d}\theta\right)=\int_{R_{1}}^{R_{2}}\int_{0}^{2\pi}\partial_{s}h\,r+\int_{0}^{2\pi}(h(R_{2})R_{2}\partial_{s}R_{2}-h(R_{1})R_{1}\partial_{s}R_{1})=A\langle\partial_{s}h\rangle+\int_{\partial\Theta_{2}}h\partial_{s}R_{2}-\int_{\partial\Theta_{1}}h\partial_{s}R_{1},\]
so that
\[A\partial_{s}\langle h\rangle=\partial_{s}(A\langle h\rangle)-\langle h\rangle\partial_{s}A=A\langle\partial_{s}h\rangle+\int_{\partial\Theta_{2}}(h-\langle h\rangle)\partial_{s}R_{2}-\int_{\partial\Theta_{1}}(h-\langle h\rangle)\partial_{s}R_{1}. \tag{7.13}\]
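As an elementary sanity check (not part of the proof), the Leibniz computation underlying (7.13) can be verified symbolically for test choices of \(R_{1}(s)\), \(R_{2}(s)\) and \(h(s,r,\theta)\); the functions below are hypothetical examples chosen only for the check.

```python
import sympy as sp

s, r, th = sp.symbols('s r theta', positive=True)
R1, R2 = 1 + s/2, 2 + s**2                      # hypothetical inner/outer radii
h = s*r**2 + r*sp.cos(th) + sp.sin(s)           # hypothetical scalar field

def cross_section(expr):
    """Integral of expr over the annular cross-section Theta(s) (measure r dr dtheta)."""
    return sp.integrate(sp.integrate(expr*r, (r, R1, R2)), (th, 0, 2*sp.pi))

Ah   = cross_section(h)                         # A <h>
Adsh = cross_section(sp.diff(h, s))             # A <d_s h>
# Boundary terms: integrals over the circles of radius R2 and R1 (line element R dtheta).
b2 = sp.integrate(h.subs(r, R2)*sp.diff(R2, s)*R2, (th, 0, 2*sp.pi))
b1 = sp.integrate(h.subs(r, R1)*sp.diff(R1, s)*R1, (th, 0, 2*sp.pi))

print(sp.simplify(sp.diff(Ah, s) - (Adsh + b2 - b1)))   # prints 0
```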
Using (7.13) and (7.11) in (7.12), we obtain:
\[(\partial_{t}(A\hat{c}),\langle h\rangle)_{\Lambda}+a_{\mathrm{ref}}(\hat{c},h)-(\hat{c}\boldsymbol{w},\nabla h)_{\Omega_{v}(t)}=-((\boldsymbol{u}_{v}- \hat{\boldsymbol{u}}_{v})\hat{c},\nabla h)_{\Omega_{v}(t)}+\ell(h)-\ell_{1}(h), \tag{7.14}\]
where we have introduced the short-hand
\[\ell(h) =\int_{\Lambda}\xi P\overline{c}\,\overline{h}+\int_{\Lambda}A \langle f_{v}\rangle\langle h\rangle=(\xi\overline{c},h)_{\Gamma}+(\langle f _{v}\rangle,h)_{\Omega_{v}},\] \[\ell_{1}(h) =\int_{\Lambda}(-D_{v}\partial_{s}\hat{c}+\hat{c}\hat{u})\left( \int_{\partial\Theta_{1}}(h-\langle h\rangle)\partial_{s}R_{1}-\int_{\partial \Theta_{2}}(h-\langle h\rangle)\partial_{s}R_{2}\right)-(\xi(\langle h \rangle-\bar{h}),\overline{c}-\hat{c})_{\Gamma(t)}.\]
Collecting all the above expressions in (7.9) yields:
\[\int_{0}^{T}\|e\|_{L^{2}(\Omega_{v}(t))}^{2}=\int_{0}^{T}(( \boldsymbol{u}_{v}-\hat{\boldsymbol{u}}_{v})\hat{c},\nabla h)_{\Omega_{v}(t)} -\int_{0}^{T}(\hat{c}\boldsymbol{w}\cdot\boldsymbol{n},h-\langle h\rangle)_{ \partial\Omega_{v}(t)}\\ +\int_{0}^{T}(\ell_{\mathrm{ref}}(h)-\ell(h))+\int_{0}^{T}\ell_{1 }(h)+(e(0),h(0))_{\Omega_{v}(0)}:=\sum_{i=1}^{5}W_{i}. \tag{7.15}\]
_Step III._ We now bound each term \(W_{i}\) (\(i=1,\ldots,5\)) on the right-hand side of (7.15). For brevity, we omit the time-dependence of the domains in the notation below. For \(W_{1}\), write
\[W_{1}=\int_{0}^{T}((0,u_{v,r},u_{v,\theta})\hat{c},\nabla h)_{\Omega_{v}}+\int _{0}^{T}(u_{v,s}-\langle u_{v,s}\rangle)\hat{c}\partial_{s}h=W_{1,1}+W_{1,2}.\]
An application of Holder's inequality yields
\[W_{1,1}\equiv\int_{0}^{T}((0,u_{v,r},u_{v,\theta})\hat{c},\nabla h)_{\Omega_{ v}}\leq\int_{0}^{T}(\|u_{v,r}\|_{L^{\infty}(\Omega_{v})}+\|u_{v,\theta}\|_{L^{ \infty}(\Omega_{v})})\|\hat{c}\|_{L^{2}(\Omega_{v})}\|h\|_{H^{1}(\Omega_{v})}.\]
For \(W_{1,2}\), with Holder's and Poincare's inequality (6.5), we have that
\[W_{1,2} =\int_{0}^{T}\int_{\Lambda}\hat{c}\int_{\Theta}(u_{v,s}-\langle u _{v,s}\rangle)\partial_{s}h\leq\int_{0}^{T}\int_{\Lambda}|\hat{c}|\|u_{v,s}- \langle u_{v,s}\rangle\|_{L^{2}(\Theta)}\|\partial_{s}h\|_{L^{2}(\Theta)}\] \[\leq K_{p}\int_{0}^{T}\int_{\Lambda}|\hat{c}|\epsilon(s,t)\| \nabla u_{v,s}\|_{L^{2}(\Theta)}\|\partial_{s}h\|_{L^{2}(\Theta)}\] \[=K_{p}\int_{0}^{T}\int_{\Lambda}\epsilon(s,t)P^{-1/2}\|\hat{c}\|_ {L^{2}(\partial\Theta_{2})}\|\nabla u_{v,s}\|_{L^{2}(\Theta)}\|\partial_{s}h \|_{L^{2}(\Theta)}.\]
Thus, with Holder's inequality and the assumption that the vessel area and outer perimeter are both bounded in terms of \(\epsilon\) but with (implicit) inequality constants independent of \(\Omega_{v},\Omega_{s}\) (6.4),
\[W_{1,2}\,\lesssim\,K_{p}\int_{0}^{T}\epsilon_{\max}^{1/2}\left(\max_{s\in\Lambda }\|\nabla u_{v,s}\|_{L^{2}(\Theta)}\right)\|\hat{c}\|_{L^{2}(\Gamma)}\|h\|_{H^ {1}(\Omega_{v})}. \tag{7.16}\]
Hence, with Holder's inequality again, we obtain the following for \(W_{1}\):
\[W_{1}=W_{1,1}+W_{1,2}\,\lesssim\,K_{p}\,\,\epsilon_{\max}^{1/2} \max_{s\in\Lambda,t\in[0,T]}\|u_{v,s}\|_{H^{1}(\Theta)}\|\hat{c}\|_{L^{2}(0,T; L^{2}(\Gamma))}\|h\|_{L^{2}(0,T;H^{1}(\Omega_{v}))}\\ +(\|u_{v,r}\|_{L^{\infty}(0,T;L^{\infty}(\Omega_{v}))}+\|u_{v, \theta}\|_{L^{\infty}(0,T;L^{\infty}(\Omega_{v}))})\|\hat{c}\|_{L^{2}(0,T;L^{ 2}(\Omega_{v}))}\|h\|_{L^{2}(0,T;H^{1}(\Omega_{v}))}. \tag{7.17}\]
Continuing, we bound \(W_{2}\) and \(W_{4}\) by first obtaining a bound on \(\|\overline{v}-\langle v\rangle\|_{L^{2}(\Gamma)}\) for any \(v\in H^{1}(\Omega_{v})\). First note that
\[\|\langle v\rangle-\overline{v}\|_{L^{2}(\partial\Theta)}^{2}= \int_{\partial\Theta}(\langle v\rangle-\overline{v})(\langle v\rangle- \overline{v})=\int_{\partial\Theta}(\langle v\rangle-v)(\langle v\rangle- \overline{v})+\int_{\partial\Theta}(v-\overline{v})(\langle v\rangle- \overline{v})\\ =\int_{\partial\Theta}(\langle v\rangle-v)(\langle v\rangle- \overline{v})\leq\|\langle v\rangle-v\|_{L^{2}(\partial\Theta)}\|\langle v \rangle-\overline{v}\|_{L^{2}(\partial\Theta)}.\]
Using this observation, the trace inequality (6.7), and Poincare's inequality (6.5), we have that for any \(v\in H^{1}(\Theta)\)
\[\|\langle v\rangle-\bar{v}\|_{L^{2}(\partial\Theta)}\leq\|\langle v \rangle-v\|_{L^{2}(\partial\Theta)}\\ \leq K_{\rm tr}\left(\epsilon(s,t)^{-1/2}\|\langle v\rangle-v\|_ {L^{2}(\Theta)}+\epsilon(s,t)^{1/2}\|\nabla v\|_{L^{2}(\Theta)}\right)\leq K _{\rm tr}(K_{p}+1)\epsilon_{\max}^{1/2}\|\nabla v\|_{L^{2}(\Theta)}. \tag{7.18}\]
With Cauchy-Schwarz inequality, we have that:
\[W_{2}+W_{4}=\int_{0}^{T}(\hat{c}\mathbf{w}\cdot\mathbf{n},h-\langle h \rangle)_{\partial\Omega_{v}}+\int_{0}^{T}\ell_{1}(h)\] \[\leq\int_{0}^{T}(\|\mathbf{w}\|_{L^{\infty}(\Omega)}\|\hat{c}\|_{L^{ 2}(\partial\Omega_{v})}+\epsilon_{s}\|D_{v}\partial_{s}\hat{c}+\hat{c}\hat{u} \|_{L^{2}(\partial\Omega_{v})}+\|\xi(\hat{c}-\overline{c})\|_{L^{2}(\Gamma)} )\|\langle h\rangle-h\|_{L^{2}(\partial\Omega_{v})}\] \[\leq K_{\rm tr}(K_{p}+1)\epsilon_{\max}^{1/2}\int_{0}^{T}(\|\mathbf{w }\|_{L^{\infty}(\Omega)}\|\hat{c}\|_{L^{2}(\partial\Omega_{v})}+\epsilon_{s} \|D_{v}\partial_{s}\hat{c}+\hat{c}\hat{u}\|_{L^{2}(\partial\Omega_{v})}+\|\xi( \hat{c}-\overline{c})\|_{L^{2}(\Gamma)})\|h\|_{H^{1}(\Omega_{v})}\] \[\lesssim K_{\rm tr}(K_{p}+1)(\epsilon_{\max}^{1/2}\|\mathbf{w}\|_{L^{ \infty}(0,T;L^{\infty}(\Omega))}\|\hat{c}\|_{L^{2}(0,T;L^{2}(\partial\Omega_{v }))}+\epsilon_{s}\|D_{v}\partial_{s}\hat{c}+\hat{c}\hat{u}\|_{L^{2}(0,T;L^{2}( \Omega_{v}))}\] \[\quad+\epsilon_{\max}^{1/2}\|\xi(\hat{c}-\overline{c})\|_{L^{2}(0, T;L^{2}(\Gamma))})\|h\|_{L^{2}(0,T;H^{1}(\Omega_{v}))}.\]
In the above, we used that \(\epsilon_{\max}^{1/2}\|D_{v}\partial_{s}\hat{c}+\hat{c}\hat{u}\|_{L^{2}( \partial\Omega_{v})}\lesssim\|D_{v}\partial_{s}\hat{c}+\hat{c}\hat{u}\|_{L^{2} (\Omega_{v})}.\) This follows from the observation that \(D_{v},\hat{c},\) and \(\hat{u}\) are uniform on each cross-section and from (6.3).
Consider now the definition of \(W_{3}\) in combination with Cauchy-Schwarz:
\[W_{3} =\int_{0}^{T}(\xi(c_{s}-\bar{c}),h)_{\Gamma}+(f_{v}-\langle f_{v} \rangle,h)_{\Omega_{v}}\] \[\leq\int_{0}^{T}\|\xi^{1/2}(c_{s}-\overline{c})\|_{L^{2}(\Gamma)} \|\xi^{1/2}h\|_{L^{2}(\Gamma)}+\|f_{v}-\langle f_{v}\rangle\|_{L^{2}(\Omega_{v} )}\|h\|_{L^{2}(\Omega_{v})}. \tag{7.19}\]
For the first integrand term of the previous line, we may use the observation that \(\|\overline{c}\|_{L^{2}(\Gamma)}=\|\overline{c}\|_{L^{2}_{P}(\Lambda)}\leq\|c\|_{L ^{2}(\Gamma)}\) and the trace Lemma 6.4 over \(\Omega_{s}\).
\[\|\xi^{1/2}(c_{s}-\overline{c})\|_{L^{2}(\Gamma)}\leq K_{\Gamma}(\epsilon_{\max }|\ln\epsilon_{\max}|)^{1/2}\|\xi\|_{L^{\infty}(\Gamma)}^{1/2}(\|c_{s}\|_{H^{1} (\Omega_{s})}+\|c\|_{H^{1}(\Omega_{s})}).\]
For the last integrand in (7.19), we use the Poincare inequality (6.5). Combining with Holder's inequality, we obtain
\[W_{3}\leq K_{\Gamma}(\epsilon_{\max}|\ln\epsilon_{\max}|)^{1/2} \|\xi\|_{L^{\infty}(\Gamma)}^{1/2}\left(\|c_{s}\|_{L^{2}(0,T;H^{1}(\Omega_{s}) )}+\|c\|_{L^{2}(0,T;H^{1}(\Omega))}\right)\|\xi^{1/2}h\|_{L^{2}(0,T;L^{2}( \Gamma))}\\ +K_{p}\epsilon_{\max}\|f_{v}\|_{L^{2}(0,T;H^{1}(\Omega_{v}))}\|h \|_{L^{2}(0,T;L^{2}(\Omega_{v}))}.\]
Alternatively, if the sharper bound (7.7) holds, we then first use the triangle inequality for bounding the first term in (7.19):
\[\|\xi^{1/2}(c_{s}-\overline{c})\|_{L^{2}(\Gamma)}\leq\|\xi^{1/2}(c_{s}- \overline{c_{s}})\|_{L^{2}(\Gamma)}+\|\xi^{1/2}(\overline{c_{s}}-\overline{c} )\|_{L^{2}(\Gamma)},\]
and then a Stekloff-type inequality along with the boundedness of the extension operator \(\mathcal{E}\) (6.8), giving:
\[\|c_{s}-\overline{c_{s}}\|_{L^{2}(\Gamma)}^{2}=\int_{\Lambda}\|\mathcal{E}c_{ s}-\overline{\mathcal{E}c_{s}}\|_{L^{2}(\partial\Theta_{2})}^{2}\leq K_{ \mathrm{st}}\epsilon_{\max}\int_{\Lambda}\|\nabla\mathcal{E}c_{s}\|_{L^{2}( \Theta_{2})}^{2}\leq K_{\mathrm{st}}K_{\mathcal{E}}\epsilon_{\max}\|c_{s}\|_{ H^{1}(\Omega_{s})}^{2}.\]
Then \(W_{3}\) can instead be bounded by:
\[W_{3}\lesssim(K_{\mathrm{st}}K_{\mathcal{E}}+1)\epsilon_{\max}^{ 1/2}\|c_{s}\|_{L^{2}(0,T;H^{1}(\Omega_{s}))}\|\xi h\|_{L^{2}(0,T;L^{2}(\Gamma))} \\ +K_{p}\epsilon_{\max}\|f_{v}\|_{L^{2}(0,T;H^{1}(\Omega_{v}(t)))} \|h\|_{L^{2}(0,T;L^{2}(\Omega_{v}(t)))}.\]
The term \(W_{5}\) involving the modelling error associated with the initial condition is handled by the Poincare inequality (6.5) and the fact that \(\hat{c}(0)=\langle c_{v}(0)\rangle\):
\[\|(c_{v}-\hat{c})(0)\|_{L^{2}(\Omega_{v}(0))}^{2}=\int_{\Lambda} \int_{\Theta}(c_{v}(0)-\hat{c}(0))^{2}\leq K_{p}\epsilon_{\max}^{2}\int_{ \Lambda}\|\nabla c_{v}(0)\|_{L^{2}(\Theta)}^{2}\\ =K_{p}\epsilon_{\max}^{2}\|\nabla c_{v}(0)\|_{L^{2}(\Omega_{v}(0) )}^{2}.\]
This implies that
\[W_{5}\leq K_{p}\epsilon_{\max}\|\nabla c_{v}(0)\|_{L^{2}(\Omega_{v}(0))}^{2}\| h(0)\|_{L^{2}(\Omega_{v}(0))}. \tag{7.20}\]
Collecting all the above bounds in (7.15) and using (7.2) yields the estimate. The proof of the boundedness of \(C_{1}\) and \(C_{2}\) by a constant \(K\) independent of \(\epsilon_{\max}\) is given in the Appendix, section A.1.
### Model error introduced in the surrounding 3D domain
In this subsection, we study the error introduced in the model derivation of the extended transport model (Section 4.4). In particular, we aim to study the difference \((c_{s}-c)\) between the reference solution \(c_{s}\in L^{2}(0,T;H^{1}_{\partial\Omega}(\Omega_{s}(t)))\) satisfying the weak solute transport equations defined over \(\Omega_{s}(t)\) (3.11) and the reduced (or perhaps more aptly, extended) solution \(c\in L^{2}(0,T;H^{1}_{0}(\Omega))\) satisfying the weak solute transport equations defined over \(\Omega\) (4.17). Here, we will assume that \(\mathcal{E}D_{s}\in L^{\infty}(0,T;L^{\infty}(\Omega,\mathbb{R}^{3\times 3}))\) with a uniform ellipticity constant \(\tilde{\nu}>0\).
We start by recalling the relevant equations. Writing \(\tilde{\mathbf{u}}_{s}=\mathbf{u}_{s}-\mathbf{w}\), we have that \(c_{s}\) and \(c\) satisfy
\[\langle\dot{c}_{s},\phi\rangle_{H^{-1}(\Omega_{s}(t))}+\int_{\Omega _{s}(t)}(\nabla\cdot\mathbf{w}c_{s}\phi+D_{s}\nabla c_{s}\cdot\nabla\phi-(\tilde{ \mathbf{u}}_{s}c_{s})\cdot\nabla\phi)\\ +\int_{\Gamma(t)}\xi(c_{s}-c_{v})\phi=\int_{\Omega_{s}(t)}f_{s} \phi\quad\forall\phi\in H^{1}_{\partial\Omega}(\Omega_{s}(t)), \tag{7.21}\]
and
\[\int_{\Omega}\partial_{t}c\phi+\int_{\Omega}(\mathcal{E}D_{s}\nabla c\cdot \nabla\phi-(\mathcal{E}\mathbf{u}_{s}c)\cdot\nabla\phi)+\int_{\Gamma(t)}\xi(\overline {c}-\hat{c})\phi=\int_{\Omega}\mathcal{E}f_{s}\phi\quad\forall\phi\in H^{1}_{ 0}(\Omega). \tag{7.22}\]
In (7.22), the Eulerian time derivative is used since \(\partial\Omega\) is now independent of \(t\), and we have used that
\[\int_{\Lambda}\xi P(\overline{c}-\hat{c})\overline{\phi}=\int_{\Gamma(t)}\xi( \overline{c}-\hat{c})\phi. \tag{7.23}\]
As a step on the way towards quantifying \(c_{s}-c\) over the whole domain, we introduce an intermediate solution \(c_{r}\) solving (7.22) but without the coupling terms and aim to bound \((c_{s}-c_{r})\) and \((c_{r}-c)\). More precisely, let \(c_{r}\in L^{2}(0,T;H^{1}_{0}(\Omega))\) with \(c_{r}(0)=c(0)=\mathcal{E}c_{s}(0)\) solve
\[\int_{\Omega}\partial_{t}c_{r}\phi+\int_{\Omega}\mathcal{E}D_{s}\nabla c_{r} \cdot\nabla\phi-(\mathcal{E}\mathbf{u}_{s}c_{r})\cdot\nabla\phi=\int_{\Omega} \mathcal{E}f_{s}\phi \tag{7.24}\]
for all \(t>0\) and for all \(\phi\in H^{1}_{0}(\Omega)\). From standard parabolic regularity results, see e.g. [18, Chapter 7], and from the continuity of the extension operator (6.8), we have for a convex domain \(\Omega\) that:
\[\|\partial_{t}c_{r}\|_{L^{2}(0,T;L^{2}(\Omega))}+\|c_{r}\|_{L^{2} (0,T;H^{2}(\Omega))} \leq K(\|\mathcal{E}f_{s}\|_{L^{2}(0,T;L^{2}(\Omega))}+\|c_{r}(0) \|_{H^{1}(\Omega)})\] \[\leq K_{r}(\|f_{s}\|_{L^{2}(0,T;H^{1}(\Omega_{s}(t)))}+\|c_{s}(0) \|_{H^{1}(\Omega_{s}(0))}). \tag{7.25}\]
Here \(K_{r}\) depends on \(\mathcal{E}D_{s}\), \(\mathcal{E}\mathbf{u}_{s}\), and the final time \(T\).
We proceed by first bounding \(c-c_{r}\) in Lemma 7.2, and then consider \(c_{r}-c_{s}\) and \(c-c_{s}\) in Proposition 7.2.
**Lemma 7.2** (Estimating \(c-c_{r}\)).: _For \(c\) and \(c_{r}\) defined by (7.22) and (7.24) respectively, there holds that_
\[\|c-c_{r}\|_{L^{\infty}(0,T;L^{2}(\Omega))}\leq K_{\epsilon_{1}}\epsilon_{\max }^{1/2}|\ln\epsilon_{\max}|^{1/2}\left(\|c\|_{L^{2}(0,T;H^{1}(\Omega))}+\|\xi^{ 1/2}\hat{c}\|_{L^{2}(0,T;L^{2}(\Gamma(t)))}\right), \tag{7.26}\]
_where \(\epsilon_{\max}\) is the maximal vessel cross-section diameter as defined by (6.1) and \(K_{\epsilon_{1}}\) depends on \(T\), \(\tilde{\nu}^{-1/2}\), and \(\mathbf{u}_{s}\), but not on \(\epsilon_{\max}\)._
Proof.: Define \(e_{1}\equiv c-c_{r}\). Subtracting (7.24) from (7.22), choosing \(\phi=e_{1}\), integrating over time, and using standard arguments, we obtain:
\[\|e_{1}(t)\|_{L^{2}(\Omega)}^{2}+\frac{\tilde{\nu}}{2}\|\nabla e _{1}\|_{L^{2}(0,t;L^{2}(\Omega))}^{2}\leq\frac{1}{2\tilde{\nu}}\|\mathcal{E} \mathbf{u}_{s}\|_{L^{\infty}(0,T;L^{\infty}(\Omega))}^{2}\|e_{1}\|_{L^{2}(0,t;L^{2}( \Omega))}^{2}\\ +\|\xi^{1/2}(\overline{c}-\hat{c})\|_{L^{2}(0,t;L^{2}(\Gamma(t)) )}\|\xi^{1/2}e_{1}\|_{L^{2}(0,t;L^{2}(\Gamma(t)))}\equiv L_{1}+L_{2}.\]
For the last term \(L_{2}\), we use the trace inequality over \(\Omega_{s}\) (Lemma 6.4) since \(e_{1}=c-c_{r}\in L^{2}(0,T;H^{1}_{0}(\Omega))\) and thus \(e_{1}\in L^{2}(0,T;H^{1}(\Omega_{s}(t)))\). Along with Young's inequality, we derive
\[L_{2} \leq K_{\Gamma}\epsilon_{\max}^{1/6}\|\xi^{1/2}\|_{L^{\infty}(0,T; L^{\infty}(\Gamma(t)))}\|\xi^{1/2}(\overline{c}-\hat{c})\|_{L^{2}(0,t;L^{2}( \Gamma(t)))}\|e^{1}\|_{L^{2}(0,t;H^{1}(\Omega_{s}(t)))}\] \[\leq K_{\Gamma}^{2}\left(\frac{1}{\tilde{\nu}}+1\right)\|\xi\|_{L ^{\infty}(0,T;L^{\infty}(\Gamma(t)))}\|\xi^{1/2}(\overline{c}-\hat{c})\|_{L^{ 2}(0,T;L^{2}(\Gamma(t)))}^{2}\epsilon_{\max}|\ln\epsilon_{\max}|\] \[\quad+\frac{\tilde{\nu}}{4}\|\nabla e_{1}\|_{L^{2}(0,t;L^{2}( \Omega))}^{2}+\frac{1}{4}\|e_{1}\|_{L^{2}(0,t;L^{2}(\Omega))}^{2}.\]
The first term in the last line above can be further bounded as follows:
\[\|\xi^{1/2}(\overline{c}-\hat{c})\|_{L^{2}(0,T;L^{2}(\Gamma(t)))} \leq\|\xi^{1/2}\overline{c}\|_{L^{2}(0,T;L^{2}(\Gamma(t)))}+\|\xi^{1/2}\hat{c} \|_{L^{2}(0,T;L^{2}(\Gamma(t)))}\\ \leq K_{\Gamma}\epsilon_{\max}^{1/2}|\ln\epsilon_{\max}|^{1/2}\| \xi\|_{L^{\infty}(0,T;L^{\infty}(\Gamma))}^{1/2}\|c\|_{L^{2}(0,T;H^{1}(\Omega) )}+\|\xi^{1/2}\hat{c}\|_{L^{2}(0,T;L^{2}(\Gamma(t)))}. \tag{7.27}\]
The above holds by first noting that \(\|\overline{c}\|_{L^{2}(\Gamma(t))}=\|\overline{c}\|_{L^{2}_{P}(\Lambda)}\) and then using Jensen's inequality as in (4.21) followed by Lemma 6.4 and \(\Omega_{s}\subset\Omega\). With the above and using \(\epsilon_{\max}\lesssim 1\), we obtain that
\[\|e_{1}(t)\|_{L^{2}(\Omega)}^{2}+\frac{1}{4}\|D_{s}^{1/2}\nabla e _{1}\|_{L^{2}(0,t;L^{2}(\Omega))}^{2}\lesssim\frac{1}{2}\left(\frac{1}{2}+ \frac{1}{\tilde{\nu}}\|\mathcal{E}\mathbf{u}_{s}\|_{L^{\infty}(0,T;L^{\infty}( \Omega))}^{2}\right)\|e_{1}\|_{L^{2}(0,t;L^{2}(\Omega))}^{2}\\ \left.+\epsilon_{\max}|\ln\epsilon_{\max}|\big{(}\|c\|_{L^{2}(0,T; H^{1}(\Omega))}+\|\xi^{1/2}\hat{c}\|_{L^{2}(0,T;L^{2}(\Gamma(t)))}\big{)}^{2}.\right.\]
With Gronwall's inequality, we can conclude the result.
**Proposition 7.2** (Model error in the surroundings).: _Assume that \(\Omega\) is convex. Let \(c_{v},c_{s}\) be the weak solutions of the coupled 3D-3D transport problem (3.11), and \(\hat{c},c\) be the weak solutions of the reduced 3D-1D problem (4.20) with \(w_{c}=1\). Then, there holds that_
\[\|c_{s}-c\|_{L^{2}(0,T;L^{2}(\Omega_{s}(t)))}\lesssim N_{1}\left((1 +\|\mathbf{u}_{s}\|_{L^{\infty}(0,T;H^{2}(\Omega_{s}(t)))})\epsilon_{\max}^{2/3}+ \epsilon_{\max}|\ln\epsilon_{\max}|\right)\\ +N_{2}(\epsilon_{\max}|\ln\epsilon_{\max}|)^{1/2}. \tag{7.28}\]
_Here, \(N_{1}\) and \(N_{2}\) are given by:_
\[N_{1} =\|f_{s}\|_{L^{2}(0,T;H^{1}(\Omega_{s}(t)))}+\|c_{s}(0)\|_{H^{1}( \Omega_{s}(0))},\] \[N_{2} =\|\xi^{1/2}c_{v}\|_{L^{2}(0,T;L^{2}(\Gamma(t)))}+\|c\|_{L^{2}(0, T;H^{1}(\Omega))}+\|\xi^{1/2}\hat{c}\|_{L^{2}(0,T;L^{2}(\Gamma(t)))}.\]
_In addition, \(N_{2}\) is bounded independently of \(\epsilon_{\max}\)._
Proof.: Considering Lemma 7.2, it suffices to estimate \(\|c_{s}-c_{r}\|_{L^{2}(0,T;L^{2}(\Omega_{s}(t)))}\) as the final result follows by the triangle inequality. The derivation also follows by duality arguments. Define \(\psi\) as the solution of the following backward-in-time problem: find \(\psi\in L^{2}(0,T;H^{1}_{\partial\Omega}(\Omega_{s}(t)))\) with \(\dot{\psi}\in L^{2}(0,T;H^{-1}(\Omega_{s}(t)))\) and \(\psi(T)=0\) in \(\Omega_{s}(T)\) such that for a.e. \(t\) in \((0,T)\) and for all \(v\in H^{1}_{\partial\Omega}(\Omega_{s}(t))\):
\[-\langle\dot{\psi},v\rangle_{H^{-1}(\Omega_{s}(t))}+(D_{s}\nabla\psi,\nabla v)_{\Omega_{s}(t)}+(\xi\psi,v)_{\Gamma(t)}-(\tilde{\mathbf{u}}_{s}\cdot\nabla\psi,v)_{\Omega_{s}(t)}=(c_{s}-c_{r},v)_{\Omega_{s}(t)}. \tag{7.29}\]
Then, using similar arguments as in Lemma 7.1, we have
\[\|\psi\|_{L^{\infty}(0,T;L^{2}(\Omega_{s}(t)))}+\nu\|\nabla\psi\|_{L ^{2}(0,T;L^{2}(\Omega_{s}(t)))}+\|\xi^{1/2}\psi\|_{L^{2}(0,T;L^{2}(\Gamma(t)))}\\ \leq K\left(1+\|\nabla\cdot\mathbf{w}\|_{L^{\infty}(0,T;L^{\infty}( \Omega))}+\|\tilde{\mathbf{u}}_{s}\|_{L^{\infty}(0,T;L^{\infty}(\Omega_{s}(t)))} \right)\|c_{s}-c_{r}\|_{L^{2}(0,T;L^{2}(\Omega_{s}(t)))}. \tag{7.30}\]
Testing (7.29) with \(v=e\equiv c_{r}-c_{s}\in H^{1}_{\partial\Omega}(\Omega_{s}(t))\) for a.e. \(t\), integrating from \(0\) to \(T\), using the integration by parts rule [3, Corollary 2.41], and using that \((c_{r}-c_{s})(0)=0\) in \(\Omega_{s}(0)\) and \(\psi(T)=0\) in \(\Omega_{s}(T)\) yield
\[L=\int_{0}^{T}\|c_{r}-c_{s}\|_{L^{2}(\Omega_{s}(t))}^{2}=\int_{0} ^{T}\langle\dot{e},\psi\rangle_{H^{-1}(\Omega_{s}(t))}+(e\psi,\nabla\cdot \boldsymbol{w})_{\Omega_{s}(t)}+(D_{s}\nabla\psi,\nabla e)_{\Omega_{s}(t)}\\ +(\xi\psi,e)_{\Gamma(t)}-(e(\boldsymbol{u}_{s}-\boldsymbol{w}), \nabla\psi)_{\Omega_{s}(t)}\,\mathrm{d}t.\]
Next, we expand \(e=c_{r}-c_{s}\), replace \(\psi\) by \(\mathcal{E}\psi\) its extension from \(\Omega_{s}(t)\) to \(\Omega\), use the equations for the weak solution \(c_{s}\) recalled in (7.21) (with \(\boldsymbol{u}_{s}-\boldsymbol{w}=\tilde{\boldsymbol{u}}_{s}\)), the relation between the material and partial time derivative (3.6) in combination with the product rule to find:
\[L\equiv\int_{0}^{T}(\partial_{t}c_{r},\mathcal{E}\psi)_{\Omega_ {s}(t)}+(\mathcal{E}\psi,\nabla\cdot(c_{r}\boldsymbol{w}))_{\Omega_{s}(t)}+( D_{s}\nabla(\mathcal{E}\psi),\nabla c_{r})_{\Omega_{s}(t)}\\ -((\boldsymbol{u}_{s}-\boldsymbol{w})c_{r},\nabla\mathcal{E}\psi )_{\Omega_{s}(t)}-(f_{s},\psi)_{\Omega_{s}(t)}+(\xi(c_{r}-c_{v}),\psi)_{ \Gamma(t)}\,\mathrm{d}t.\]
Now, we use the definition of \(c_{r}\) (7.24), expand terms involving \(\boldsymbol{w}\) and use integration by parts to be left with terms over \(\Omega_{v}\) and \(\Gamma\):
\[L=\int_{0}^{T}-(\partial_{t}c_{r},\mathcal{E}\psi)_{\Omega_{v}( t)}-(\mathcal{E}D_{s}\nabla(\mathcal{E}\psi),\nabla c_{r})_{\Omega_{v}(t)}+( \mathcal{E}\boldsymbol{u}_{s}c_{r},\nabla\mathcal{E}\psi)_{\Omega_{v}(t)}+( \mathcal{E}f,\mathcal{E}\psi)_{\Omega_{v}(t)}\,\mathrm{d}t\\ +\int_{0}^{T}(\psi,c_{r}\boldsymbol{w}\cdot\boldsymbol{n})_{\Gamma (t)}+(\xi(c_{r}-c_{v}),\psi)_{\Gamma(t)}\,\mathrm{d}t\equiv T_{1}+\ldots+T_{6}.\]
Our next task is to bound each term \(T_{i}\) for \(i=1,\ldots,6\). Hereinafter, we omit writing \(t\) for the sake of brevity. To bound \(T_{1}\), we first apply Cauchy-Schwarz inequality to have that
\[T_{1}\leq\|\partial_{t}c_{r}\|_{L^{2}(0,T;L^{2}(\Omega_{v}))}\|\mathcal{E} \psi\|_{L^{2}(0,T;L^{2}(\Omega_{v}))}.\]
With Holder's inequality, a Sobolev embedding, and the continuity of the extension operator (6.8), we obtain
\[\|\mathcal{E}\psi\|_{L^{2}(\Omega_{v})}\leq|\Omega_{v}|^{1/3}\| \mathcal{E}\psi\|_{L^{6}(\Omega_{v})}\leq|\Omega_{v}|^{1/3}\|\mathcal{E}\psi \|_{L^{6}(\Omega)}\\ \leq K|\Omega_{v}|^{1/3}\|\mathcal{E}\psi\|_{H^{1}(\Omega)}\leq K |\Omega_{v}|^{1/3}\|\psi\|_{H^{1}(\Omega_{s})}. \tag{7.31}\]
In the above bound, \(K\) depends on \(\Omega\) but not on \(\Omega_{v}\). Hence,
\[T_{1}\leq K\max_{t\in[0,T]}|\Omega_{v}|^{1/3}\|\partial_{t}c_{r}\|_{L^{2}(0,T; L^{2}(\Omega_{v}))}\|\psi\|_{L^{2}(0,T;H^{1}(\Omega_{s}))}. \tag{7.32}\]
To handle \(T_{2}\), we use a similar approach. Since \(c_{r}\in L^{2}(0,T;H^{2}(\Omega))\), \(\nabla c_{r}\in L^{2}(0,T;H^{1}(\Omega)^{3})\), a continuous Sobolev embedding yields:
\[\|\nabla c_{r}\|_{L^{q}(\Omega)}\leq K\|\nabla c_{r}\|_{H^{1}(\Omega)},\quad q \in[1,6]. \tag{7.33}\]
Hence, with Holder's inequality and the above bound (7.33), we have
\[\|\nabla c_{r}\|_{L^{2}(\Omega_{v})}\leq|\Omega_{v}|^{1/3}\|\nabla c_{r}\|_{L^ {6}(\Omega_{v})}\leq|\Omega_{v}|^{1/3}\|\nabla c_{r}\|_{L^{6}(\Omega)}\leq K |\Omega_{v}|^{1/3}\|c_{r}\|_{H^{2}(\Omega)}.\]
Then, with the continuity of \(\mathcal{E}\) (6.8), it follows that
\[T_{2}\leq\|\mathcal{E}D_{s}\nabla\mathcal{E}\psi\|_{L^{2}(0,T;L^ {2}(\Omega_{v}))}\|\nabla c_{r}\|_{L^{2}(0,T;L^{2}(\Omega_{v}))}\\ \leq K\max_{t\in[0,T]}|\Omega_{v}|^{1/3}\|\psi\|_{L^{2}(0,T;H^{1} (\Omega_{s}))}\|c_{r}\|_{L^{2}(0,T;H^{2}(\Omega))},\]
where \(K\) depends on \(\mathcal{E}D_{s}\) and again \(\Omega\), but not on \(\Omega_{v}\).
For \(T_{3}\), we again use similar arguments as for \(T_{1}\) cf. (7.31) to obtain that
\[\|c_{r}\|_{L^{2}(\Omega_{v})}\leq|\Omega_{v}|^{1/3}\|c_{r}\|_{L^{6}(\Omega_{v})} \leq|\Omega_{v}|^{1/3}\|c_{r}\|_{L^{6}(\Omega)}\leq K|\Omega_{v}|^{1/3}\|c_{r} \|_{H^{1}(\Omega)}.\]
Further, by the Sobolev embedding \(H^{2}(\Omega)\subset L^{\infty}(\Omega)\), the following bound holds
\[T_{3} \leq K\max_{t\in[0,T]}|\Omega_{v}|^{1/3}\|\mathcal{E}\mathbf{u}_{s}\|_ {L^{\infty}(0,T;L^{\infty}(\Omega))}\|c_{r}\|_{L^{2}(0,T;H^{1}(\Omega))}\| \mathcal{E}\psi\|_{L^{2}(0,T;H^{1}(\Omega_{v}))}\] \[\leq K\max_{t\in[0,T]}|\Omega_{v}|^{1/3}\|\mathcal{E}\mathbf{u}_{s} \|_{L^{\infty}(0,T;H^{2}(\Omega))}\|c_{r}\|_{L^{2}(0,T;H^{1}(\Omega))}\|\psi\| _{L^{2}(0,T;H^{1}(\Omega_{s}))}\] \[\leq K\max_{t\in[0,T]}|\Omega_{v}|^{1/3}\|\mathbf{u}_{s}\|_{L^{\infty }(0,T;H^{2}(\Omega_{s}))}\|c_{r}\|_{L^{2}(0,T;H^{1}(\Omega))}\|\psi\|_{L^{2}(0,T;H^{1}(\Omega_{s}))}.\]
With (7.31), the term \(T_{4}\) is bounded as follows.
\[T_{4}\leq\|\mathcal{E}f_{s}\|_{L^{2}(0,T;L^{2}(\Omega_{v}))}\| \mathcal{E}\psi\|_{L^{2}(0,T;L^{2}(\Omega_{v}))}\\ \leq K\max_{t\in[0,T]}|\Omega_{v}|^{2/3}\|f_{s}\|_{L^{2}(0,T;H^{1 }(\Omega_{s}))}\|\psi\|_{L^{2}(0,T;H^{1}(\Omega_{s}))}.\]
For the remaining \(T_{5}\) and \(T_{6}\), we use Cauchy-Schwarz and the trace inequality over \(\Omega_{s}\) (Lemma 6.4, (6.15)) to arrive at
\[T_{5} \leq\|\mathbf{w}\|_{L^{\infty}(0,T;L^{\infty}(\Gamma))}\|c_{r}\|_{L^ {2}(0,T;L^{2}(\Gamma(t)))}\|\psi\|_{L^{2}(0,T;L^{2}(\Gamma(t)))}\\ \leq K\epsilon_{\max}|\ln\epsilon_{\max}|\|c_{r}\|_{L^{2}(0,T;H^{ 1}(\Omega))}\|\psi\|_{L^{2}(0,T;H^{1}(\Omega_{s}(t)))},\]
and
\[T_{6} \leq K\epsilon_{\max}^{1/2}|\ln\epsilon_{\max}|^{1/2}\|\xi^{1/2}( c_{r}-c_{v})\|_{L^{2}(0,T;L^{2}(\Gamma(t)))}\|\psi\|_{L^{2}(0,T;H^{1}(\Omega_{s}(t)))}\] \[\leq K\epsilon_{\max}^{1/2}|\ln\epsilon_{\max}|^{1/2}(\|\xi^{1/2 }c_{v}\|_{L^{2}(0,T;L^{2}(\Gamma(t)))}+\epsilon_{\max}^{1/2}|\ln\epsilon_{\max }|^{1/2}\|c_{r}\|_{L^{2}(0,T;H^{1}(\Omega_{s}(t)))})\|\psi\|_{L^{2}(0,T;H^{1}( \Omega_{s}(t)))}.\]
Now, having bounded \(T_{1},\dots,T_{6}\), we use the regularity bound of the backward-in-time problem (7.30) and the fact that \(|\Omega_{v}(t)|\leq K\epsilon_{\max}^{2}\) to obtain, writing \(e_{2}\equiv c_{r}-c_{s}\),
\[\|e_{2}\|_{L^{2}(0,T;L^{2}(\Omega))}\leq K\epsilon_{\max}|\ln \epsilon_{\max}|\|c_{r}\|_{L^{2}(0,T;H^{1}(\Omega))}+K(\epsilon_{\max}|\ln \epsilon_{\max}|)^{1/2}\|\xi^{1/2}c_{v}\|_{L^{2}(0,T;L^{2}(\Gamma(t)))}\\ +K\epsilon_{\max}^{2/3}\left(\|\partial_{t}c_{r}\|_{L^{2}(0,T;L^{ 2}(\Omega_{v}))}+(\|\mathbf{u}_{s}\|_{L^{\infty}(0,T;H^{2}(\Omega_{s}))}+1)\|c_{r} \|_{L^{2}(0,T;H^{2}(\Omega))}+\|f_{s}\|_{L^{2}(0,T;H^{1}(\Omega_{s}))}\right).\]
The proof is concluded by the triangle inequality, (7.25), and Lemma 7.2. The boundedness of \(N_{2}\) is shown in Appendix A.1.
## 8. Numerical results
In this section, we consider two numerical examples to demonstrate the analysis presented in the previous sections. The two examples correspond to the 3D-1D model of Section 4.5 and to the 3D-1D-1D model of Section 5. Our implementation uses the FEniCS finite element framework [2] and the FEniCS\({}_{ii}\) module [35].
### A coupled 3D-1D solute transport finite element example
We let the surrounding domain \(\Omega\) (also) take the form of a cylinder with radius \(0.5\) and length \(L\) containing an inner cylinder \(\Omega_{v}\) of radius \(R=R_{2}<0.5\) with centerline \(\Lambda\). Using a Galerkin finite element method in space with continuous piecewise linear polynomials defined relative to conforming meshes of \(\Omega_{s}=\Omega\backslash\Omega_{v}\) and \(\Omega_{v}\), and an implicit Euler discretization in time with time step \(\tau\), we compute approximate 3D-3D solutions \(c_{v,\tau h},c_{s,\tau h}\) of (3.1). On the same meshes of \(\Omega\) with centerline meshes \(\Lambda_{h}\), we compute approximate solutions to (4.20), again using continuous piecewise
linear finite elements defined relative to \(\Omega\) for \(c_{\tau h}\) and relative to \(\Lambda_{h}\) for \(\hat{c}_{\tau h}\) (Figure 4). We set \(D_{v}=D_{s}=\xi=1\), \(f_{s}=f_{v}=0.5\), \(\hat{c}_{h}^{0}=1.0\), \(c^{0}=0.0\), \(\mathbf{u}_{v}=(0.5,0,0)\) and \(\mathbf{u}_{s}=(0.1,0,0)\), and \(T=0.2\).
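The 3D and 3D-1D systems themselves are solved with FEniCS. Purely as an illustration of the reduced 1D vessel equation and the implicit Euler time stepping, a minimal finite-difference sketch could look as follows; the constant cross-section area, axial velocity, exchange coefficient and frozen exterior concentration are simplifying assumptions made for this sketch only, not the setup used in the experiments.

```python
import numpy as np

# Hypothetical, simplified data for this sketch.
L, N, dt, T = 1.0, 200, 0.01, 0.2
D, u, f = 1.0, 0.5, 0.5                    # diffusivity, axial velocity, source
R = 0.05                                   # fixed vessel radius
A, xiP = np.pi * R**2, 2 * np.pi * R       # cross-section area and xi * perimeter (xi = 1)
cbar = 0.0                                 # frozen exterior (perimeter-averaged) concentration
h = L / N

c = np.ones(N + 1)                         # initial vessel concentration c_hat(0) = 1

# Implicit Euler for  A dc/dt + A u dc/ds - D A d2c/ds2 + xiP (c - cbar) = A f
M = np.zeros((N + 1, N + 1))
for i in range(1, N):
    M[i, i - 1] = -D * A / h**2 - A * u / (2 * h)
    M[i, i]     =  A / dt + 2 * D * A / h**2 + xiP
    M[i, i + 1] = -D * A / h**2 + A * u / (2 * h)
# Zero-flux ends (first-order Neumann closure).
M[0, 0], M[0, 1] = 1.0, -1.0
M[N, N], M[N, N - 1] = 1.0, -1.0

nsteps = int(round(T / dt))
for _ in range(nsteps):
    rhs = A * c / dt + xiP * cbar + A * f
    rhs[0] = rhs[N] = 0.0
    c = np.linalg.solve(M, rhs)

print("1D vessel solution at final time: min =", c.min(), " max =", c.max())
```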
To numerically explore the modelling error for decreasing radii (\(R\), \(\epsilon_{\max}\to 0\)), we consider a series of experiments with different radii \(R\in\{0.2,0.1,0.05,0.025,0.0125\}\), a relatively small, fixed mesh size \((h_{\min},h_{\max})|_{\Omega_{v}}=(0.009,0.014)\) and \((h_{\min},h_{\max})|_{\Omega_{s}}=(0.01,0.024)\), and small, fixed time step \(\tau=0.01\). In practice, we compute the discrepancy between the approximate solutions in the 3D and 1D vessels:
\[\|c_{v,\tau h}(T)-E\hat{c}_{\tau h}(T)\|_{L^{2}(\Omega_{v})}\approx\|c_{v}(T)-E \hat{c}(T)\|_{L^{2}(\Omega_{v})}\]
as a proxy for the modelling error while noting that the computed error includes both the spatio-temporal approximation errors as well as modelling errors:
\[\|c_{v}(T)-\hat{c}(T)\|_{L^{2}(\Omega_{v})}\\ \leq\|c_{v}(T)-c_{v,\tau h}(T)\|_{L^{2}(\Omega_{v})}+\|c_{v,\tau h }(T)-\hat{c}_{\tau h}(T)\|_{L^{2}(\Omega_{v})}+\|\hat{c}(T)-\hat{c}_{\tau h}(T )\|_{L^{2}(\Omega_{v})}.\]
We thus presume that, with the choice of small mesh size and time step, the approximation errors are negligible compared to the modelling error.
Table 1 shows the computed \(L^{2}\) norms in \(\Omega_{v}\) and \(\Omega_{s}\) along with normalized norms and the corresponding rates. We observe that the errors decrease with decreasing \(R\) until the radius and mesh size become of comparable size, and that the modelling error in the surroundings continues to decrease even when the modelling error in the vessel stagnates.
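The convergence rates reported in Table 1 are obtained from consecutive refinements in \(R\); a minimal sketch of this post-processing step (with placeholder error values, not the actual table entries) is:

```python
import numpy as np

# Radii used in the refinement study and hypothetical error values (placeholders).
R   = np.array([0.2, 0.1, 0.05, 0.025, 0.0125])
err = np.array([3.0e-2, 1.8e-2, 1.0e-2, 5.5e-3, 3.0e-3])   # e.g. ||c_v - E c_hat||_{L2}

rates = np.log(err[1:] / err[:-1]) / np.log(R[1:] / R[:-1])
for Ri, ei, ri in zip(R[1:], err[1:], np.round(rates, 2)):
    print(f"R = {Ri:7.4f}   error = {ei:.2e}   observed rate = {ri}")
```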
Figure 4. Plot of the numerical solutions for the first example with \(R=0.0125\). The 3D-3D solution \((c_{s,\tau h},c_{v,\tau h})\) is plotted next to the 3D-1D model \((c_{\tau,h},E\hat{c}_{\tau h})\) with the 1D solution extended to the inner cylinder. The outer cylinder is clipped at the plane intersecting the center line \(\Lambda\). (Left) Solutions shown at \(t=0.1\). (Right) Solutions shown at \(t=T=0.2\).
### A coupled 3D-1D-1D solute transport example
As a second example, we consider solutions to the coupled 3D-1D-1D models of solute transport and the corresponding 3D-3D-3D model set up in the blood vessel, \(\Omega_{v}\), the perivascular domain \(\Omega_{p}\), and the tissue \(\Omega_{s}\). We also use backward Euler and continuous linear finite element methods to solve (5.3)-(5.5) with solutions denoted by \((c_{\tau h},\hat{c}_{p,\tau h},\hat{c}_{v,\tau h})\), and the corresponding 3D-3D-3D model with solutions denoted by \((c_{s,\tau h},c_{p,\tau h},c_{v,\tau h})\). We set \(\Omega_{s}=(-1,1)\times(-1,1)\times(-0.5,0.5)\), let \(\Omega_{v}\) be a cylinder of radius \(R_{1}\) with centerline \(x=0\), \(y=0\), and let \(\Omega_{p}\) be the annular cylinder around \(\Omega_{v}\) with outer radius \(R_{2}=2R_{1}\). We vary \(R_{1}\) and compute the \(L^{2}\) error between the 3D solutions \(c_{i,\tau h}\) and the reduced 1D solutions \(\hat{c}_{i,\tau h}\) for \(i\in\{p,v\}\). We keep \(\tau=0.01\) and \(T=0.1\), \(D_{v}=D_{p}=D_{s}=\xi_{v}=\xi_{p}=1\), \(f_{s}=f_{v}=f_{p}=0.5\), \(c_{v,\tau h}(0)=\hat{c}_{v,\tau h}(0)=1.0\), \(c_{p,\tau h}(0)=\hat{c}_{p,\tau h}(0)=c_{\tau h}(0)=c_{s,\tau h}(0)=0\), \(\boldsymbol{u}_{v}=(0.5,0,0)\), \(\boldsymbol{u}_{p}=(0.1,0,0)\), and \(\boldsymbol{u}_{s}=(0.05,0,0)\). For the mesh-size in the various domains, we have \((h_{\min},h_{\max})|_{\Omega_{v}}=(0.011,0.019)\), \((h_{\min},h_{\max})|_{\Omega_{p}}=(0.011,0.024)\), and \((h_{\min},h_{\max})|_{\Omega_{s}}=(0.017,0.043)\).
Tables 2 and 3 show the computed \(L^{2}\) norm in \(\Omega_{v}\), \(\Omega_{p}\) and \(\Omega_{s}\) along with a normalized norm and the corresponding rates. We observe that the modelling errors all decrease for decreasing radii, though in a non-uniform manner and with uneven rates. The modeling error in the surroundings decreases robustly at rates between 1 and 2. The (non-normalized) modelling error in the vascular domain decreases with similar rates. The modelling error in the perivascular space increases in the first \(R\)-refinement before decreasing at rates close to 2. Clearly, further theoretical and numerical studies of the interplay between the modelling and approximation errors are warranted (though outside the scope of the current study).
Figure 5. Plot of the numerical solutions for the second example with \(R=0.0125\). In each of the four quadrants, the 3D-3D-3D solution \((c_{s,\tau h},c_{p,\tau h},c_{v,\tau h})\) is plotted next to the 3D-1D-1D model \((c_{\tau h},\hat{c}_{p,\tau h},\hat{c}_{v,\tau h})\) with the 1D solutions extended to their respective cylinder or annulus. Top row shows a slice of \(\Omega_{s}\) and \(\Omega\) respectively with the vessel solutions without the PVS domain. The second row shows the PVS solution. (Left) Solutions shown at \(t=0.1\). (Right) Solutions shown at \(t=T=0.2\).
## 9. Conclusions and outlook
Understanding solute transport and exchange in the brain vasculature, perivascular spaces, and surrounding tissue is critical for unraveling the brain's delivery and clearance mechanisms. Here, we have presented a mathematical framework for modelling diffusive and convective transport and exchange in deformable domains, and rigorously analyzed its modelling characteristics. Future research directions include the error analysis of conforming and non-conforming finite element approximations of such models. We envision that this framework can be combined with medical imaging to study brain perivascular transport and exchange at scale.
## Acknowledgments
We gratefully acknowledge valuable discussions with Prof. Barbara Wohlmuth and Dr. Johannes Haubner.
|
2309.07521 | Conformational isomerization dynamics in solvent violates both the
Stokes-Einstein relation and Kramers' theory | Molecular isomerization kinetics in liquid solvents are determined by a
complex interplay between the friction acting on a rotating dihedral due to
interactions with the solvent, internal dissipation effects (also known as
internal friction), the viscosity of the solvent, and the free energy profile
over which a dihedral rotates. Currently, it is not understood how these
quantities are related at the molecular scale. Here, we combine molecular
dynamics simulations of isomerizing n-alkane chains and dipeptide molecules in
mixed water-glycerol solvents with memory-kernel extraction techniques to
directly evaluate the frequency-dependent friction acting on a rotating
dihedral. We extract the friction and isomerization times over a range of
glycerol concentrations and accurately evaluate the relationships between
solvent viscosity, isomerization kinetics, and dihedral friction. We show that
the total friction acting on a rotating dihedral does not scale linearly with
solvent viscosity, thus violating the Stokes-Einstein relation. Additionally,
we demonstrate that the kinetics of isomerization are significantly faster
compared to the Kramers prediction in the overdamped limit. We suggest that
isomerization kinetics are determined by the multi-time-scale friction coupling
between a rotating dihedral and its solvent environment, which results in
non-Markovian kinetic speed-up effects. | Benjamin A. Dalton, Henrik Kiefer, Roland R. Netz | 2023-09-14T08:47:34Z | http://arxiv.org/abs/2309.07521v1 | Conformational isomerization dynamics in solvent violates both the Stokes-Einstein relation and Kramers' theory
###### Abstract
Molecular isomerization kinetics in liquid solvents are determined by a complex interplay between the friction acting on a rotating dihedral due to interactions with the solvent, internal dissipation effects (also known as internal friction), the viscosity of the solvent, and the free energy profile over which a dihedral rotates. Currently, it is not understood how these quantities are related at the molecular scale. Here, we combine molecular dynamics simulations of isomerizing n-alkane chains and dipeptide molecules in mixed water-glycerol solvents with memory-kernel extraction techniques to directly evaluate the frequency-dependent friction acting on a rotating dihedral. We extract the friction and isomerization times over a range of glycerol concentrations and accurately evaluate the relationships between solvent viscosity, isomerization kinetics, and dihedral friction. We show that the total friction acting on a rotating dihedral does not scale linearly with solvent viscosity, thus violating the Stokes-Einstein relation. Additionally, we demonstrate that the kinetics of isomerization are significantly faster compared to the Kramers prediction in the overdamped limit. We suggest that isomerization kinetics are determined by the multi-time-scale friction coupling between a rotating dihedral and its solvent environment, which results in non-Markovian kinetic speed-up effects.
## I Introduction
Molecular conformation transition rates, such as the folding rates of proteins, are influenced by interactions between the molecule and its solvent environment, as well as by intra-molecular interactions within the molecule itself. In experimental settings, the molecular conformation dynamics of a solute molecule can be modulated by altering the viscosity of the solvating medium. One way to achieve this is to incorporate viscogenic agents like glucose, sucrose, or glycerol into the medium. By doing so, one can generate a solvent viscosity \(\eta\) that is far greater than that of pure water. This method has been widely used in the field of protein folding and has played a critical role in uncovering the importance of internal friction effects [1; 2; 3; 4; 5; 6; 7; 8], whereby the viscosity scaling of the folding time \(\tau\) was typically written as \(\tau=\alpha\eta^{\beta}+\varepsilon\) such that it was argued that \(\beta=1\) and \(\varepsilon=0\) in the absence of internal friction and that either \(\beta<1\) or \(\varepsilon>0\) when internal friction effects are present. Applying similar methodologies, all-atom simulations have also been used to elucidate the molecular mechanisms of internal friction [9; 10; 11; 12; 13; 14]. The supposed linear relation between \(\eta\) and \(\tau\) (i.e. \(\beta=1\)) with \(\varepsilon=0\) is actually founded on the combination of two more fundamental relations, which have been impossible to check separately. According to the Stokes-Einstein relation, the friction \(\gamma\) acting on a molecule that moves through a solvent of viscosity \(\eta\) satisfies \(\gamma\sim\eta\), with a pre-factor that incorporates information about the molecule's geometry. For sufficiently over-damped systems, Kramers' theory [15] tells us that the average time \(\tau\) to undergo a state transition by overcoming an energy barrier satisfies \(\tau\sim\gamma\), with a pre-factor that incorporates information about the energy barrier. Therefore, the linear relation between \(\tau\) and \(\eta\) is indirectly mediated by the friction acting on the reconfiguring molecule, with deviations from linearity actually indicating violations of either the Stokes-Einstein relation, the overdamped Kramers relation, or both. While it is relatively commonplace to measure transition times and solvent viscosities, both in experiments and in simulations, a direct evaluation of the friction acting on some collective reaction coordinate is far more complicated. Therefore, a direct verification of the Stokes-Einstein relation and the overdamped Kramers relation for molecular isomerization in complex viscoelastic solvents has so far not been possible.
To evaluate friction, in the past one typically relied on indirect methods using memoryless reaction-rate theory [17; 18; 19; 20]. In this paper, we utilize recent non-Markovian memory kernel extraction methods [21; 22; 23; 24; 25; 26] to directly evaluate the frequency-dependent friction acting on a rotating dihedral. Memory extraction methods enable a direct evaluation of the friction acting on any arbitrary reaction coordinate by mapping the time series evolution of that reaction coordinate onto a generalized Langevin equation (GLE) [27; 28]. These methods are remarkably general and have been applied recently to butane isomerization [14], cell migration [22], the vibrational spectra of water molecules [24], pair reactions in water [25], the dynamics of small polypeptide chains [23], and, most recently, to the folding dynamics of a diverse set of fast-folding proteins [26]. To investigate the relationship between dihedral friction, solvent viscosity, and isomerization kinetics, we simulate four n-alkane chains: n-butane, n-hexane, n-octane, and n-decane (hereafter, we omit the \(n\) prefix), and two amino acid residues: alanine and phenylalanine, both with NMA C-terminal capping and ACE at N-terminal capping, using molecular dynamics (MD) simulations with explicit solvents. As a viscogenic agent, we mix water and glycerol, and we vary the concentration of glycerol to change the solvent viscosity. We
compare our results to an idealized system where the viscosity of a pure water solvent is modified by scaling the mass of the water molecules. We observe that the dependence of the friction governing the dihedral rotation on the solvent viscosity is the same, regardless of whether we vary the glycerol concentration or the mass of the water molecules. However, the scaling is significantly sublinear in both cases, showing that dihedral isomerization strongly violates the Stokes-Einstein relation. Interestingly, the mean first-passage times \(\tau_{\rm MFP}\) for dihedral isomerization exhibit dramatically different viscosity scaling depending on whether we vary the glycerol concentration or the water mass. This disagreement is most extreme for the smallest solute, butane, and decreases with increasing solute size. When we evaluate the dependence of \(\tau_{\rm MFP}\) on \(\gamma\), we find that the linear scaling \(\tau_{\rm MFP}\sim\gamma\) does not hold and that \(\tau_{\rm MFP}\) is significantly reduced compared to the Kramers prediction in the overdamped limit. This dramatic acceleration of reconfiguration kinetics was recently reported for a set of extensive fast-folding protein simulations [26], where it was shown that many proteins fold and unfold in a memory-induced barrier-crossing speed-up regime [29]. The same non-Markovian mechanism also applies to dihedral isomerization kinetics. Overall, our investigation reveals the full complexity of the relationships between friction, viscosity, and reaction kinetics at the molecular scale.
## Results and discussion
**Viscosity dependence of butane isomerization kinetics.** To begin, we study the viscosity dependence of the dihedral dynamics of butane, which is the smallest molecule to exhibit distinct isomeric states and has been used as a model system for many classic studies in the statistical mechanics of dihedral barrier-crossing processes [30; 31; 32; 33; 34]. We modify the viscosity by either varying the concentration of glycerol in a mixed water-glycerol solvent (denoted as w/gly throughout) or by scaling the mass of the water molecules in a pure water solvent (referred to as super-heavy water and denoted as \(\Delta m_{\rm w}\) throughout). In the case of the super-heavy water, we uniformly change the mass of the water molecules such that the viscosity \(\eta\) scales as \(\eta/\eta_{0}=\sqrt{m/m_{0}}\), where \(m_{0}\) and \(\eta_{0}\) are the mass and viscosity of neat water, and \(m\) is the scaled water mass [35; 36; 37; 38]. This is an idealised approach with no experimental counterpart that is frequently used to simulate both high and low-viscosity solvents and has been essential for simulation studies investigating internal friction effects [10; 12; 14].
Figure 1: Simulation of butane in explicit solvents. (A) Simulation snapshot of a butane molecule dissolved in a water-glycerol mixture at 20% mass-fraction glycerol. (B) Schematic indicating the butane dihedral. The dihedral angle \(\theta\) is subtended by the intercept of the two planes formed by the four carbon-hydrogen groups. (C) A 0.5 ns trajectory segment of the butane dihedral angle in a standard pure water solvent. The dashed lines show \(\theta=\pm 120\) deg and \(\theta=0\) deg. (D) Free energy profiles for the butane dihedral extracted from MD simulations for three different solvent conditions. The schematics indicate the configurations of the cis and trans states, which occupy the various energy minima. (E) Viscosity for water-glycerol mixtures, calculated using the Green-Kubo method, plotted as a function of glycerol mass fraction. Results are compared to an experimental empirical curve by Cheng et al. [16].
In Fig. 1A, we show a snapshot from equilibrium MD simulations of a single united-atom butane molecule suspended in a water-glycerol solvent environment. See Supplementary Information Section S1 for simulation details. The butane dihedral angle \(\theta\) (Fig. 1B) stochastically transitions between the trans-state, located at \(\theta=0\) deg, and the cis-states, located at \(\theta=\pm 120\) deg. In Fig. 1C, we show a typical 0.5 ns trajectory segment for the dihedral of butane in a pure water solvent. The dihedral transitions between states by overcoming barriers in the free-energy landscape. We extract free energy profiles from the trajectory of \(\theta(t)\) such that \(U(\theta)=-k_{\text{B}}T\text{log}[\rho(\theta)]\), where \(\rho(\theta)\) is the probability density, \(T\) is the temperature, and \(k_{\text{B}}\) is Boltzmann's constant. In Fig. 1D, we show three example free energy profiles extracted from simulations: butane in standard pure water, butane in a super-heavy water solvent with \(m/m_{0}=100\), and butane in a water-glycerol solvent with 60% mass-fraction glycerol. The three free energy profiles are in excellent agreement, indicating that neither viscogenic method affects the equilibrium properties of the dihedral. For the mass-scaled systems, we consider \(m/m_{0}=1\), 9, 25, and 100, corresponding to \(\eta/\eta_{0}=1\), 3, 5, and 10. Standard pure water for the TIP4P/2005 water model has viscosity \(\eta_{0}=0.86\) mPas [39]. For the water-glycerol mixtures, we evaluate viscosity using the Green-Kubo relationship, which relates the shear viscosity to auto-correlations of the shear stress tensor (Supplementary Information Section S2). In Fig. 1E, we plot the water-glycerol viscosities for the range of glycerol concentrations used throughout this paper and find excellent agreement with an experimental empirical curve for water-glycerol mixtures at \(T=300\) K [16].
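A minimal sketch of this Boltzmann inversion, with a synthetic dihedral time series standing in for the MD trajectory \(\theta(t)\), is:

```python
import numpy as np

kBT = 1.0  # energies in units of k_B T

# Synthetic dihedral time series in degrees (placeholder for theta(t) from MD).
rng = np.random.default_rng(0)
theta = np.concatenate([rng.normal(0, 12, 200_000),       # trans state
                        rng.normal(120, 12, 100_000),     # cis+
                        rng.normal(-120, 12, 100_000)])   # cis-

# Boltzmann inversion: U(theta) = -kBT * log(rho(theta))
hist, edges = np.histogram(theta, bins=180, range=(-180, 180), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
mask = hist > 0
U = -kBT * np.log(hist[mask])
U -= U.min()   # set the global minimum to zero

for ang, u in list(zip(centers[mask], U))[::30]:
    print(f"theta = {ang:7.1f} deg   U = {u:5.2f} kBT")
```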
To evaluate the friction acting on the butane dihedral, we map \(\theta(t)\) onto a generalized Langevin equation (GLE):
\[m\ddot{\theta}(t)=-\int\limits_{0}^{t}\Gamma(t-t^{\prime})\dot{\theta}(t^{ \prime})dt^{\prime}-\nabla U\big{(}\theta(t)\big{)}+F_{R}(t), \tag{1}\]
where \(\Gamma(t)\) is the friction memory kernel, and \(F_{R}(t)\) is the stochastic force term, which has a zero mean \(\langle F_{R}(t)\rangle=0\) and satisfies the fluctuation-dissipation theorem \(\langle F_{R}(t)F_{R}(t^{\prime})\rangle=k_{\text{B}}T\Gamma(t-t^{\prime})\). \(U(\theta)\) is the dihedral free energy profile, as given in Fig. 1D, and \(\nabla\equiv\partial/\partial\theta\). \(m\) is the effective mass of the dihedral, which we assume to be independent of \(\theta\) since the slight \(\theta\)-dependence of \(m\) has little effect on dihedral dynamics (see Supplementary Information section S3 and [40]).
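To build intuition for Eq. (1), the following sketch integrates a GLE of this form with a single-exponential memory kernel \(\Gamma(t)=(\gamma/\tau_{m})e^{-t/\tau_{m}}\) via a Markovian auxiliary-variable embedding; the cosine potential and all parameter values are schematic stand-ins, not the extracted butane quantities.

```python
import numpy as np

rng = np.random.default_rng(1)
kBT, m = 1.0, 1.0
gamma, tau_m = 20.0, 0.5                 # total friction and memory time (hypothetical)
U0 = 3.9                                 # barrier height in units of kBT
dU = lambda x: 1.5 * U0 * np.sin(3 * x)  # U(x) = U0 (1 - cos 3x)/2: minima at 0, +-2*pi/3

dt, nsteps = 1e-3, 500_000
x, v = 0.0, 0.0
z = rng.normal(0.0, np.sqrt(kBT * gamma / tau_m))   # auxiliary force, drawn from equilibrium

traj = np.empty(nsteps)
for i in range(nsteps):
    # Euler-Maruyama step of the embedded GLE: the auxiliary variable z generates
    # the exponential memory friction and colored noise consistent with the FDT.
    x += v * dt
    v += dt * (-dU(x) + z) / m
    z += dt * (-z - gamma * v) / tau_m + np.sqrt(2 * kBT * gamma * dt) / tau_m * rng.normal()
    traj[i] = x

print("visited dihedral range (rad):", traj.min(), traj.max())
```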
We use recent friction memory-kernel extraction techniques and extract the running integral function \(G(t)=\int_{0}^{t}\Gamma(t^{\prime})dt^{\prime}\) directly from the time series of \(\theta(t)\)[21; 23]. Details of the memory kernel extraction method are given in Supplementary Information Section S4. In Fig. 2A, we show \(G(t)\) for the butane dihedral under three solvent conditions. The fitting results for each curve, underlaid in grey, are discussed below. In Fig. 2B, we show the corresponding memory kernels \(\Gamma(t)\), evaluated using numerical differentiation of \(G(t)\). The large oscillations in \(\Gamma(t)\) are due to couplings between the dihedral and bond angle vibrations, the latter being flexible in our model (Supplementary Information Section S5). The inset shows magnifications of the long-time tails, which eventually decay to zero, resulting in the plateauing behaviour of the \(G(t)\) functions. To evaluate the total friction \(\gamma\) acting on the dihedral, we evaluate \(\gamma=G(t\rightarrow\infty)\), given by the plateau values in Fig. 2A. In Fig. 2C, we show \(\gamma\) for all solvent conditions as a function of the normalized viscosity \(\eta/\eta_{0}\), where \(\eta_{0}\) is the viscosity of neat water. We see that the viscosity dependence of the friction is almost identical, whether measured in pure, super-heavy water or in the water-glycerol mixtures, and scales approximately as \(\gamma(\eta)\sim\eta^{0.4}\), indicating a strong violation of the Stokes-Einstein relation. Interestingly, the translation diffusion coefficients for the butane centre of mass (Fig. 2C inset), which we calculate by fitting the long-time diffusive regime of the mean square displacements (see Supplementary Information Section S6), scale linearly with viscosity in both the water-glycerol mixtures and in the super heavy water, indicating that the translation diffusion does satisfy the Stokes-Einstein relation.
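The full extraction scheme of [21; 23] includes the contribution of the free-energy gradient; the simplified sketch below only illustrates the underlying Volterra inversion for the potential-free case, where \(G(t)\) follows from the velocity autocorrelation function alone (all parameters are hypothetical, and long trajectories are needed for a stable long-time plateau).

```python
import numpy as np

rng = np.random.default_rng(2)
kBT, m, gamma, tau_m = 1.0, 1.0, 20.0, 0.5
dt, nsteps, nlag = 1e-3, 400_000, 3000

# 1) Synthetic free-particle GLE velocity trajectory (exponential kernel, embedded form).
v, z = 0.0, rng.normal(0.0, np.sqrt(kBT * gamma / tau_m))
vel = np.empty(nsteps)
for i in range(nsteps):
    v += dt * z / m
    z += dt * (-z - gamma * v) / tau_m + np.sqrt(2 * kBT * gamma * dt) / tau_m * rng.normal()
    vel[i] = v

# 2) Velocity autocorrelation function C(t).
C = np.array([np.mean(vel[:nsteps - k] * vel[k:]) for k in range(nlag)])

# 3) Volterra inversion of  m (C(t) - C(0)) = -int_0^t G(t-u) C(u) du  (trapezoidal rule, G(0)=0).
G = np.zeros(nlag)
for n in range(1, nlag):
    conv = np.dot(G[n - 1:0:-1], C[1:n])
    G[n] = (-m * (C[n] - C[0]) / dt - conv) / (0.5 * C[0])

t = np.arange(nlag) * dt
print("extracted plateau G(t_max)           :", G[-1])
print("reference gamma*(1 - exp(-t/tau_m))  :", gamma * (1 - np.exp(-t[-1] / tau_m)))
```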
In Fig. 2D, we show the mean first-passage times \(\tau_{\text{MFP}}\) plotted as a function of solvent viscosity for both trans\(\rightarrow\)cis and cis\(\rightarrow\)trans transitions. The calculation of \(\tau_{\text{MFP}}\) is detailed in Supplementary Information Section S7, where we discuss a method for eliminating recrossing effects. In super-heavy water, \(\tau_{\text{MFP}}\) increases significantly with \(\eta\). However, in the water-glycerol mixtures, \(\tau_{\text{MFP}}\) is completely independent of solvent viscosity. One possibility could be that this is a nano-viscosity effect, where for solutes that are small compared to the size of the viscogenic co-solvent, or similar in size, the viscosity experienced by the solute can deviate from the measured macroscopic viscosity [41; 42]. We dismiss this suggestion since both the friction experienced by the rotating dihedral and the translation diffusion for the butane center of mass are the same whether measured in the water-glycerol mixtures or in the super-heavy water (Fig. 2C). The results for \(\tau_{\text{MFP}}\) indicate that, in the small-molecule regime, molecular conformation reaction kinetics can completely decouple from the macroscopic viscosity of the solvent environment. In fact, the disparities between \(\tau_{\text{MFP}}(\eta)\) measured in super-heavy water and the water-glycerol mixtures result from complex non-Markovian effects, which we return to below.
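A bare-bones version of such a first-passage analysis (ignoring the recrossing corrections described in Supplementary Information Section S7, and using a synthetic two-state signal in place of the MD dihedral trajectory) is sketched below:

```python
import numpy as np

def mean_first_passage_time(x, dt, a, b):
    """Mean first-passage time from position a to position b for a time series x
    sampled with spacing dt: for every crossing of a, measure the time until the
    trajectory next crosses b, and average over all such events."""
    cross_a = np.where(np.sign(x[:-1] - a) * np.sign(x[1:] - a) < 0)[0]
    cross_b = np.where(np.sign(x[:-1] - b) * np.sign(x[1:] - b) < 0)[0]
    times, j = [], 0
    for i in cross_a:
        while j < len(cross_b) and cross_b[j] <= i:
            j += 1
        if j == len(cross_b):
            break
        times.append((cross_b[j] - i) * dt)
    return np.mean(times) if times else np.nan

# Synthetic two-state 'dihedral' signal in degrees (placeholder for theta(t) from MD).
rng = np.random.default_rng(3)
switches = np.cumsum(rng.random(500) < 0.02) % 2          # occasional state switches
theta = np.repeat(np.where(switches == 0, 120.0, 0.0), 1000) + rng.normal(0, 10.0, 500_000)

print("cis->trans MFPT of the toy signal:", mean_first_passage_time(theta, 1e-3, 120.0, 0.0))
```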
**Molecular-size dependence of isomerization kinetics.** To show that the decoupling between \(\tau_{\text{MFP}}\) and \(\eta\) in the water-glycerol solvent is a small-molecule effect, we systematically increase the length of the alkane chain to include hexane, octane, and decane. We then evaluate the mean first-passage times for the inner-most dihedral of each chain (Fig. 3A, see Supplementary Information Section S8 for details). Additionally, we simulate
capped alanine and phenylalanine amino acids and evaluate \(\tau_{\rm MFP}\) for the \(\phi\)-dihedral. The mean first-passage times for the alkanes exhibit a clear solute-size dependence (Fig. 3B). Specifically, when rescaled by \(\tau_{\rm MFP}^{0}\) (the result for neat water), the isomerization times for both octane and decane show convergent scaling between the super-heavy water and water-glycerol mixtures in the low-viscosity regime. The results for alanine and phenylalanine (Fig. 3C) are consistent with the alkane results since the backbone lengths of the two amino acids are both seven heavy atoms long, between that of hexane and octane. De Sancho et al. show that the viscosity scaling of isomerization kinetics in the super-heavy water solvent is essentially the same for a range of dipeptides and that there are only slight deviations for the alanine dipeptide in the super-heavy water solvent when compared to a mixed glucose-water solvent [12]. However, they only measure in the range of \(1<\eta/\eta_{0}<3.5\), where they interpret the difference as negligible, suggesting that the scaling is therefore identical in the two viscogens (water + glucose and super-heavy water). By extending the viscosity range, our investigation reveals that slight deviations are present and that the deviations increase with increasing viscosity.
In Fig. 3D, we show the dihedral isomerization times in neat water \(\tau_{\rm MFP}^{0}\). For the \(-140^{\circ}\rightarrow-70^{\circ}\) transition, \(\tau_{\rm MFP}^{0}\) for phenylalanine is much greater than that for alanine. This increase is only partially due to the 20% increase in the phenylalanine barrier height (Fig. 3C inset and Supplementary Information Section S8), with the remaining contribution coming from the presence of the large benzyl side group. In Fig. 3C, we see that the viscosity scaling for the two dipeptides is similar, suggesting that the addition of the large benzyl side group does not significantly affect the viscosity scaling of the isomerization times, but rather just the absolute values. De Sancho et al. also addressed the issue of size dependence by expanding the radius of the united-atom groups in their n-butane model, which led to longer mean isomerization times [12]. Our results demonstrate that varying the width of a molecule has a different effect compared to increasing a dihedral chain length: it is the length of the dihedral backbone that predominantly influences isomerization rate scaling.
Figure 2: (A) Running integrals \(G(t)\) extracted from MD trajectories for butane dihedral dynamics under various solvent conditions. The corresponding memory kernels \(\Gamma(t)\) are shown in (B) and are evaluated by numerically differentiating \(G(t)\). Fits of the running integrals (Eq. 3) are underlaid with the corresponding MD curves in (A). The inset in (B) shows the long-time memory kernel decay. The vertical dashed lines indicate the corresponding \(\tau_{\rm MFP}\). Figure legend in (A) also corresponds to (B). (C) Total friction \(\gamma\) acting on the butane dihedral rotation for all super-heavy water and water-glycerol conditions. The black dashed line indicates \(\sim(\eta/\eta_{0})^{0.4}\) scaling. The inset shows the linear viscosity scaling of the butane centre of mass transition diffusion coefficient \(D^{\rm tr}\), where \(D_{0}^{\rm tr}\) is the result in standard neat water. (D) Mean first-passage times \(\tau_{\rm MFP}\) for the cis-to-trans (cis\(\rightarrow\)trans) and trans-to-cis (trans\(\rightarrow\)cis) transitions of the butane dihedral isomerization.
We investigate the scaling relations shown in Figs. 3B and C in more detail by fitting \(\tau_{\rm MFP}\) for each molecule with the scaling function
\[\tau_{\rm MFP}(\eta)=\alpha\bigg{(}\frac{\eta}{\eta_{0}}\bigg{)}^{\beta}+\varepsilon \tag{2}\]
(see Supplementary Information Section S9 for details). There are two ways that Eq. 2 can reveal strong internal friction effects. The first is when \(\varepsilon/(\alpha+\varepsilon)\to 1\), indicating that contributions from the non-zero intercept (\(\varepsilon>0\)) dominate the pre-factor \(\alpha\). The second is when \(\beta<1\), such that the scaling of Eq. 2 is sublinear. In Fig. 3E, we show the exponent \(\beta\) and the ratio \(\varepsilon/(\alpha+\varepsilon)\) for all systems. For alkanes in super-heavy water (blue data), the scaling is effectively independent of chain length, with \(\beta\approx 1\) indicating linear scaling for all systems. However, for all alkanes, \(\varepsilon/(\alpha+\varepsilon)\) is between 0.8 and 0.9, indicating that despite the linear scaling, strong internal friction effects are present. For butane, it was previously shown that \(\tau_{\rm MFP}\) scales linearly with viscosity for \(\eta>\eta_{0}\) but that the scaling transitions to an inertia-dominated regime for \(\eta<\eta_{0}\)[14]. We do not consider this regime here. However, it is interesting to note that the approximate linear scaling for \(\eta/\eta_{0}>1\) is a characteristic of all alkanes in super-heavy water. For alkanes in water-glycerol solvents (red data), \(\varepsilon/(\alpha+\varepsilon)\) transitions from 1 to 0 for longer alkane chains, accompanied by a significant decrease in \(\beta\). Here, butane is insensitive to changes in viscosity such that \(\beta\to 1\) and \(\alpha\to 0\) represent a
Figure 3: Molecular-size dependence of isomerization kinetics. (A) Molecular schematic for alkanes and two capped, dipeptide-bond amino acids. For the alkanes, the inner dihedrals are indicated. For alanine (_ala_) and phenylalanine (_phe_), the \(\phi\)-dihedrals are indicated. (B) Cis-to-trans mean first-passage times \(\tau_{\rm MFP}\) for alkanes. \(\tau_{\rm MFP}\) values are scaled by the reaction times measured in neat water (pure water at standard-viscosity) for each system \(\tau_{\rm MFP}^{0}\). The left legend indicates the colour scheme for the different alkanes. The right legend indicates the solvent type. (C) Scaled reaction times for amino acids. Reaction times are shown for the \(-140^{\circ}\) to \(-70^{\circ}\) transition, as indicated by the arrow on the free-energy profiles (inset). (D) Reaction times in neat water \(\tau_{\rm MFP}^{0}\). Reaction times for alkanes are plotted as a function of backbone length (number of carbon atoms). For the amino acids, the backbone length is \(n=7\). (E) Fitting parameters \(\beta\) and \(\varepsilon/(\alpha+\varepsilon)\) for scaling relations: \(\tau_{\rm MFP}(\eta)=\alpha(\eta/\eta_{0})^{\beta}+\varepsilon\) (See Supplementary Information Section S9).
constant function. However, chains longer than butane are sensitive to \(\eta\) such that \(\tau_{\rm MFP}(\eta)=\alpha(\eta/\eta_{0})^{\beta}\), with \(\varepsilon=0\) and sub-linear scaling such that \(\beta<0.25\). The dipeptides are interesting because they also scale linearly in the super-heavy water but with relatively reduced internal friction contributions (\(\varepsilon/(\alpha+\varepsilon)\approx 0.4\)). In contrast, in the water-glycerol solvents, the peptides are described by \(\varepsilon=0\) but with \(\beta\approx 0.45\). Altogether, these results confirm two distinct behaviours for the influence of internal friction effects on the viscosity dependence of dihedral isomerization kinetics and that it is not just the construction of the molecule, or the viscosity of the solvent that determines the nature of the viscosity scaling, but also the composition of the solvent.
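As a concrete illustration of this fitting procedure, the sketch below fits Eq. 2 to a set of mean first-passage times with SciPy. The viscosity ratios and reaction times are placeholder values rather than data from this work, and the non-negativity bounds are one reasonable choice, not the exact protocol of Supplementary Information Section S9.

```python
# Minimal sketch: fit tau_MFP(eta) = alpha * (eta/eta_0)^beta + epsilon (Eq. 2)
# to placeholder data, assuming viscosities are already scaled by eta_0.
import numpy as np
from scipy.optimize import curve_fit

def scaling_law(eta_ratio, alpha, beta, epsilon):
    return alpha * eta_ratio**beta + epsilon

# Placeholder data (arbitrary units), not results from this work.
eta_ratio = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
tau_mfp   = np.array([1.0, 1.1, 1.3, 1.6, 2.1])

# Non-negative parameters; the initial guess roughly matches the data scale.
popt, pcov = curve_fit(scaling_law, eta_ratio, tau_mfp,
                       p0=[0.5, 0.5, 0.5], bounds=(0, np.inf))
alpha, beta, epsilon = popt
print(f"alpha={alpha:.3f}, beta={beta:.3f}, epsilon={epsilon:.3f}")
# Internal-friction indicators discussed in the text:
print("epsilon/(alpha+epsilon) =", epsilon / (alpha + epsilon))
```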
**Memory-induced speed-up of isomerization kinetics.** In Fig. 4A, we show the dependence of the cis-to-trans mean first-passage times \(\tau_{\rm MFP}\) for the butane dihedral and the inner decane dihedral on the extracted dihedral friction. For decane, we also show the friction-viscosity scaling (inset), which, like butane (Fig. 2C), is strongly sublinear. In the high-friction, overdamped limit, Kramers' theory predicts that \(\tau_{\rm Kr}(\gamma)=2\pi\gamma{\rm e}^{U_{0}/k_{\rm B}T}/\sqrt{|U_{\rm min}^{\prime\prime}U_{\rm max}^{\prime\prime}|}\). From Fig. S6, the free energy profiles for all alkanes are approximately the same. \(U_{\rm max}^{\prime\prime}=-1.35\times 10^{-3}\)\(k_{\rm B}T/{\rm deg}^{2}\) and \(U_{\rm min}^{\prime\prime}=6.25\times 10^{-3}\)\(k_{\rm B}T/{\rm deg}^{2}\) are the curvatures of the free energy at the maximum and minimum, and \(U_{0}=3.9\)\(k_{\rm B}T\) is the free energy barrier height. For butane, \(\tau_{\rm Kr}(\gamma)\) clearly does not represent the water-glycerol system well (Fig. 4A), and it overestimates the super-heavy water results by as much as a factor of 3. For decane, the range of \(\gamma\) is an order of magnitude higher than for butane. However, the Kramers pre-factor is the same such that the Kramers prediction \(\tau_{\rm Kr}(\gamma)\) for decane dramatically overestimates the measured values by a factor of between 13 and 25. In the Supplementary Information Section S10, we also compare predictions for the Grote-Hynes theory [43], which explicitly accounts for frequency-dependent friction effects, and the more general intermediate-friction Kramers' theory [15], to the measured MD reaction times. Neither theory consistently predicts the reaction times for butane and decane isomerization in both the water-glycerol solvent and the super-heavy water. The comparison between the MD data and the Kramers predictions in Fig. 4A suggests that memory-induced barrier crossing speed-up effects are present.
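For reference, the overdamped Kramers estimate quoted above can be evaluated directly from the stated curvatures and barrier height; the sketch below does so for an illustrative friction value. The friction value and the assumed unit convention (energies in \(k_{\rm B}T\), angles in degrees) are my assumptions for the example, not measured quantities from this work.

```python
# Minimal sketch: overdamped Kramers estimate
# tau_Kr(gamma) = 2*pi*gamma*exp(U0/kBT) / sqrt(|U''_min * U''_max|),
# using the curvatures and barrier height quoted in the text (kBT and deg units).
import numpy as np

U0 = 3.9            # barrier height in kBT
U_min_pp = 6.25e-3  # curvature at the minimum, kBT/deg^2
U_max_pp = -1.35e-3 # curvature at the maximum, kBT/deg^2

def tau_kramers(gamma):
    # gamma in kBT * time / deg^2; the result carries the same time unit.
    return 2.0 * np.pi * gamma * np.exp(U0) / np.sqrt(abs(U_min_pp * U_max_pp))

gamma_example = 1.0e-3  # placeholder friction value, not a measured one
print("tau_Kr =", tau_kramers(gamma_example))
```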
To quantify memory effects, we fit the following function to the memory kernels for the butane dihedral and the inner decane dihedral [24]:
\[\begin{split}\Gamma(t)&\approx\sum_{i=1}^{3}\frac{\gamma_{i}^{\rm exp}}{\tau_{i}^{\rm exp}}e^{-t/\tau_{i}^{\rm exp}}\\ &+\frac{(1+\omega_{1}\tau_{1}^{\rm osc})\gamma_{1}^{\rm osc}}{2\tau_{1}^{\rm osc}}e^{-t/\tau_{1}^{\rm osc}}\bigg{[}\cos(\omega_{1}t)+\frac{\sin(\omega_{1}t)}{\omega_{1}\tau_{1}^{\rm osc}}\bigg{]}.\end{split} \tag{3}\]
Here, \(\gamma_{i}^{\rm exp}\) and \(\tau_{i}^{\rm exp}\) are the amplitudes and time scales for exponentially decaying modes. \(\gamma_{1}^{\rm osc}\) is the amplitude, \(\omega_{1}\) is the angular frequency, and \(\tau_{1}^{\rm osc}\) is the decay time for the decaying-oscillating term. Eq. 3 is written such that \(\gamma_{1}^{\rm exp}+\gamma_{2}^{\rm exp}+\gamma_{3}^{\rm exp}+\gamma_{1}^{\rm osc}=\gamma\). Fits of Eq. 3 are shown in Fig. 2A and Figs. S13 and S14 of Supplementary Information Section S10. It has been shown that for systems with multi-exponential memory kernels, memory components
with time scales much longer than the diffusion time \(\tau_{\text{D}}\) do not affect mean first-passage times [44, 45]. We evaluate the diffusion times using \(\tau_{\text{D}}=\gamma L^{2}/k_{\text{B}}T\), where \(L\) represents a characteristic length on the reaction coordinate (here taken to be 60 deg - the distance from the cis minimum to the barrier top). For butane, the longest exponential time scale \(\tau_{1}^{\text{exp}}\) is much greater than \(\tau_{\text{D}}\) (Fig. 4B). Therefore, the intermediate exponential mode, for which \(\tau_{2}^{\text{exp}}<\tau_{\text{D}}\), has the dominant influence on \(\tau_{\text{MFP}}\). This is the only mode for which the amplitudes (\(\gamma_{2}^{\text{exp}}\)) are greater in the super-heavy water than in the water-glycerol mixtures, which helps to explain the divergent behaviour for \(\tau_{\text{MFP}}\) in Fig. 2D. Although the decaying time scale for the oscillating mode \(\tau_{1}^{\text{osc}}\) is approximately equal to \(\tau_{\text{D}}\), the amplitudes \(\gamma_{1}^{\text{osc}}\) are relatively small. (In Supplementary Information Section S11, we investigate the influence of the oscillating contributions to the Grote-Hynes prediction and show no effect on the viscosity scaling of the barrier crossing times.) For decane, the longest exponential mode is likely the dominant contribution since \(\tau_{1}^{\text{exp}}\approx\tau_{\text{D}}\). This time scale analysis is consistent with a recent analysis of extensive protein folding simulations [26], where it was shown that proteins fold in a memory-induced speed-up regime. Decane exhibits strongly accelerated kinetics (Fig. 4A), which is expected for systems with \(\tau_{1}^{\text{exp}}\approx\tau_{\text{D}}\) [44, 45] since such systems are most sensitive to kinetic speed-up effects. As memory time scales exceed \(\tau_{\text{D}}\), systems approach a memory-induced slow-down regime, which has been shown to be the kinetic mode for some proteins [26]. For butane, speed-up effects are present but weaker, suggesting that butane is closer to the memory-induced slow-down transition, which is consistent with the measurement that \(\tau_{1}^{\text{exp}}>\tau_{\text{D}}\). Overall, these results reveal that dihedral isomerization in viscous solvents exhibits multi-time-scale non-Markovian dynamics with memory-accelerated isomerization kinetics.
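The sketch below shows how the memory-kernel model of Eq. 3 and the diffusion-time criterion \(\tau_{\rm D}=\gamma L^{2}/k_{\rm B}T\) can be evaluated numerically. All amplitudes, time scales, and frequencies are placeholders chosen only to make the example self-contained; they are not fitted values from this work.

```python
# Minimal sketch: evaluate the memory-kernel model of Eq. 3 and the diffusion
# time tau_D used to classify memory time scales (placeholder parameters).
import numpy as np

def memory_kernel(t, g_exp, t_exp, g_osc, t_osc, omega):
    """Eq. 3: three exponential modes plus one decaying-oscillating mode."""
    t = np.asarray(t, dtype=float)
    kernel = sum(g / tau * np.exp(-t / tau) for g, tau in zip(g_exp, t_exp))
    kernel += ((1.0 + omega * t_osc) * g_osc / (2.0 * t_osc)
               * np.exp(-t / t_osc)
               * (np.cos(omega * t) + np.sin(omega * t) / (omega * t_osc)))
    return kernel

# Placeholder amplitudes (fractions of the total friction) and time scales (ps).
g_exp, t_exp = [0.3, 0.5, 0.1], [0.05, 1.0, 50.0]
g_osc, t_osc, omega = 0.1, 0.02, 30.0
gamma_total = sum(g_exp) + g_osc   # Eq. 3 is normalised so the amplitudes sum to gamma

t_grid = np.linspace(0.0, 5.0, 500)
Gamma = memory_kernel(t_grid, g_exp, t_exp, g_osc, t_osc, omega)

# Diffusion time with L = 60 deg (cis minimum to barrier top), in units where kBT = 1;
# gamma_total here is a placeholder in kBT*ps/deg^2.
L = 60.0
tau_D = gamma_total * L**2
print("longest exponential time scale:", max(t_exp), "ps;  tau_D:", tau_D, "ps")
```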
## Discussion and Conclusions
We utilize recent memory kernel extraction techniques to directly evaluate the frequency-dependent friction acting on an isomerizing dihedral. In doing so, we explore the relationship between the friction acting on a dihedral, the viscosity of a solvent, and molecular reconfiguration kinetics, and we do so for a variety of molecular solutes in different solvent conditions. Our study reveals two significant findings. Firstly, the total butane isomerization friction \(\gamma\) scales equivalently with viscosity, whether measured in a water-glycerol mixture or the super heavy water (Fig. 2C). In both scenarios, this scaling strongly deviates from the linear scaling expected according to the Stokes-Einstein relation. Secondly, the isomerization kinetics differ markedly when measured in a water-glycerol mixture compared to super heavy water. For butane, the mean first-passage times become completely decoupled from viscosity in the water-glycerol mixture but scale linearly in the super-heavy water (Fig. 2D). We can exclude nano-viscosity effects as the cause of this difference since both the translational diffusion coefficient for butane and the friction-viscosity scaling of the dihedral rotation are equivalent in both the water-glycerol mixture and super-heavy water (Fig. 2C). We suggest that this difference in kinetic behaviour between the two solvation methods arises from the multi-time-scale nature of the frequency-dependent friction. Different time-scale contributions of friction interact in distinct ways with the mixed water-glycerol solvent and the super-heavy water. We confirm the significance of non-Markovian contributions by demonstrating that dihedral isomerization times are much shorter than the predictions of Kramers' theory in the high-friction, overdamped limit, which is due to memory-induced acceleration effects.
The viscosity-dependent isomerization times of larger molecules, such as decane or capped amino acids, appear to converge between the two solvation methods, at least in the lower viscosity regime (Figs. 3B and C). These results could be validated experimentally. Evidence suggests that the viscosity scaling of relaxation rates remains largely consistent for hairpin-forming polypeptide chains, regardless of whether they are dissolved in a glucose or sucrose co-solvent. However, deviations have been observed for helix-forming polypeptide chains [2]. Similarly, Sekhar et al. showed that the interconversion rates of a four-helix bundle domain are different when measured in either a mixed water-glycerol solvent or a mixed water-bovine serum albumin (BSA) solvent [42], which they attributed to micro-viscosity effects. However, as we have shown here, these differences may instead be due to complex non-Markovian effects that result from the interactions between the protein domain and mixed solvent environments. Another example where the current investigation is directly applicable is in the study of molecular rotor dyes, where the reconfiguration kinetics of a dye molecule are used to estimate the viscosity of some complex viscous environment [46, 47, 48, 49, 50]. These fluorescent molecules undergo stochastic isomeric switching at rates determined by the viscosity of their environment. Currently, it remains uncertain to what degree multi-time-scale friction effects are important for these dyes, especially in complex viscogenic environments. Overall, there are many interesting areas where accurate measurements of friction, viscosity, and reaction kinetics are essential for understanding molecular processes and complex solvent-solute coupling.
## Methods
For further information on simulation details, see Supplementary Information Section S1. Additional details regarding the simulations and analysis, including the evaluation of solvent viscosities, extraction of friction memory kernels, and various fitting procedures, are also available in the Supplementary Information document.
## Acknowledgements
The project was supported by the European Research Council (ERC) Advanced Grant 835117 NoMaMemo and the Deutsche Forschungsgemeinschaft (DFG) Grant No. SFB 1449 "Dynamic Hydrogels at Biointerfaces". The authors would like to acknowledge the HPC Service of ZEDAT, Freie Universitat Berlin, for providing computing time. We are also thankful to the physics-department HPC services at Freie University of Berlin for their generous support.
|
2308.16705 | Exploring Cross-Cultural Differences in English Hate Speech Annotations:
From Dataset Construction to Analysis | Warning: this paper contains content that may be offensive or upsetting.
Most hate speech datasets neglect the cultural diversity within a single
language, resulting in a critical shortcoming in hate speech detection. To
address this, we introduce CREHate, a CRoss-cultural English Hate speech
dataset. To construct CREHate, we follow a two-step procedure: 1) cultural post
collection and 2) cross-cultural annotation. We sample posts from the SBIC
dataset, which predominantly represents North America, and collect posts from
four geographically diverse English-speaking countries (Australia, United
Kingdom, Singapore, and South Africa) using culturally hateful keywords we
retrieve from our survey. Annotations are collected from the four countries
plus the United States to establish representative labels for each country. Our
analysis highlights statistically significant disparities across countries in
hate speech annotations. Only 56.2% of the posts in CREHate achieve consensus
among all countries, with the highest pairwise label difference rate of 26%.
Qualitative analysis shows that label disagreement occurs mostly due to
different interpretations of sarcasm and the personal bias of annotators on
divisive topics. Lastly, we evaluate large language models (LLMs) under a
zero-shot setting and show that current LLMs tend to show higher accuracies on
Anglosphere country labels in CREHate. Our dataset and codes are available at:
https://github.com/nlee0212/CREHate | Nayeon Lee, Chani Jung, Junho Myung, Jiho Jin, Jose Camacho-Collados, Juho Kim, Alice Oh | 2023-08-31T13:14:47Z | http://arxiv.org/abs/2308.16705v3 | # CREHate: A CRoss-cultural English Hate Speech Dataset
###### Abstract
_Warning: this paper contains content that may be offensive or upsetting._
Most NLP datasets neglect the cultural diversity among language speakers, resulting in a critical shortcoming in hate speech detection and other culturally sensitive tasks. To address this, we introduce **CREHate**, a **CR**oss-cultural **E**nglish **Hate** speech dataset. To construct CREHate, we follow a two-step procedure: 1) culture-specific post collection and 2) cross-cultural annotation. We sample posts from the SBIC dataset, which predominantly represents North America, and collect posts from four geographically diverse English-speaking countries using culture-specific hate speech keywords that we retrieve from our survey. Annotations are then collected from those four English-speaking countries plus the US to establish representative labels for each country. Our analysis highlights statistically significant disparities in cross-cultural hate speech annotations. Only 56.2% of the posts in CREHate achieve consensus among all five countries, with a peak pairwise disagreement rate of 26%. The annotations show that label disagreements tend to come from the inherent cultural context, subjectivity, and ambiguity of the posts. Lastly, we develop cross-cultural hate speech classifiers that are more accurate at predicting each country's labels than the monocultural classifiers. This confirms the utility of CREHate for constructing culturally sensitive hate speech classifiers.
## 1 Introduction
Identifying hate speech is highly subjective and relies heavily on an annotator's understanding and knowledge of the cultural context [1, 16]. Unfortunately, existing English hate speech datasets often overlook the cultural diversity within the posts and the annotators. They are mostly composed of posts from Twitter (Table 1), in which the user demographics in terms of their country of residence are heavily skewed1. Furthermore, annotators' geographic location is either neglected or limited to only one or two countries, despite English being spoken in over 50 countries2. This limitation hinders the datasets' ability to capture diverse viewpoints. Figure 1 illustrates how people from different countries show varying hate speech annotations on identical posts. Looking at five countries does not necessarily capture the full extent of the cultural diversity of English speakers, and cultural differences also exist within each country; from now on, however, we assume that annotators from these countries understand and represent their main cultural norms, so that their geographic diversity translates directly into cultural diversity. We also deliberately choose countries from vastly different parts of the world. More details about the annotators are in Section §3.
Footnote 1: The United States has the highest number of Twitter users by country ([https://datareportal.com/essential-twitter-stats](https://datareportal.com/essential-twitter-stats)).
Footnote 2: The World Factbook, Languages ([https://www.cia.gov/the-world-factbook/field/languages/](https://www.cia.gov/the-world-factbook/field/languages/))
We construct **CREHate**3, a **CR**oss-cultural **E**nglish **Hate** speech dataset, comprising 1,580 online posts annotated by individuals from five distinct English-speaking countries: Australia (AU), the United Kingdom (GB), Singapore (SG), the United States (US), and South Africa (ZA)4. Construction of CREHate is done in a 2-step procedure: 1) culture-specific post collection and 2) cross-cultural annotation. For culture-specific post collection, we collect a total of 600 posts from YouTube and Reddit using search keywords retrieved from a survey from four countries: AU, GB, SG, and
ZA. We also sample 980 posts from SBIC (Sap et al., 2020), a dataset of social media posts annotated with social bias implications about diverse target groups, primarily reflecting a North American perspective (Table 1). During the cross-cultural annotation stage, five annotators from each country annotate each post to establish representative labels for each country, which is used to analyze cross-cultural differences in hate speech annotation. The overview of the dataset construction process is shown in Figure 1.
We conduct a chi-squared test on the cross-cultural annotations of CREHate and demonstrate significant differences across them. Only 56.2% of the entire posts received unanimous label agreement across all five countries, and the average pairwise agreement between countries was 78.8%, with a maximum disagreement of 26.0%. The pairwise label agreement distribution among countries exhibits a notable deviation from that of randomly selected annotator groups, with its average being 2.58\(\sigma\) lower than that of the random groups. Furthermore, by conducting a qualitative analysis of potential reasons for label disagreements, we show that the primary contributing factors are likely the inherent cultural context, subjectivity, and ambiguity within the posts.
Additionally, we present hate speech classifiers that can be tailored to different cultural contexts with the help of CREHate. These classifiers employ various training techniques, including multi-labeling, multi-task learning, and culture tagging, to discern distinct labels for each country within a unified model. Our approach achieves an accuracy improvement of up to 8.2% when compared to the separate models trained on each country's labels, presenting a step towards creating more equitable and culturally sensitive automated content moderation systems.
Our main contributions are as follows:
* We build CREHate, a cross-cultural English hate speech dataset including posts and annotations from diverse cultural backgrounds.
* Through quantitative and qualitative analysis, we identify significant variations in hate speech annotations attributed to the cultural backgrounds of the posts and the annotators.
* We adopt various techniques to construct culturally adaptive hate speech classifiers using CREHate.
| Datasets | Source | Platform | Annotator Country |
| --- | --- | --- | --- |
| MLMA (Ousidhoum et al., 2019) | Twitter | MTurk | N/A |
| ImplicitHateCorpus (ElSherief et al., 2021) | Twitter | MTurk | N/A |
| SBIC (Sap et al., 2020) | Twitter, Gab, Stormfront | MTurk | US, CA |
| HateXplain (Mathew et al., 2021) | Twitter, Gab | CrowdFlower | N/A |
| OLID (Zampieri et al., 2019) | Twitter | CrowdFlower | N/A |
| Davidson et al. (2017) | Twitter | CrowdFlower | N/A |
| Founta et al. (2018) | Twitter | CrowdFlower | N/A |
| **CREHate (Ours)** | SBIC, Reddit, YouTube | MTurk, Prolific, Tictag | AU, US, GB, ZA, SG |

Table 1: Toxic language corpora annotated using crowdsourcing platforms. MTurk refers to Amazon Mechanical Turk, and CA refers to Canada. The authors of current datasets neglect or limit the cultural backgrounds of the annotators and posts.
Figure 1: Illustration of the two-step procedure of CREHate construction; 1) culture-specific post collection and 2) cross-cultural annotation. We show examples of how annotations on identical posts differ across countries.
## 2 Related Work
**Impact of Annotator Demographics.** Annotator demographics, such as gender, affect their annotations in NLP datasets (Biester et al., 2022). Hate speech detection is a particularly subjective task in which demographics can affect the annotations, inter-annotator agreement (IAA), and classifier performance (Waseem, 2016; Sap et al., 2022; Goyal et al., 2022; Larimore et al., 2021; Binns et al., 2017).
Cultural Considerations in Hate Speech DetectionRecent research in offensive language has looked into cross-cultural differences as well as building datasets in a variety of languages (Lee et al., 2023; Jeong et al., 2022; Jin et al., 2023; Arango Monnar et al., 2022; Deng et al., 2022; Demus et al., 2022; Mubarak et al., 2022; Alvarez-Carmona et al., 2018), but these papers assume incorrectly that a single language reflects a single culture. For instance, English is spoken by a culturally diverse population, necessitating the consideration of cultural differences among English speakers. Arango Monnar et al. (2022) built the first hate speech dataset for Chilean Spanish to enrich the cultural diversity of Spanish datasets. They evaluated knowledge transfer performance on another Spanish dataset with different cultural backgrounds, but the impact of cultural background on annotations was unexplored. In this paper, we conduct a comprehensive study of how hate speech and its annotations vary across English speakers from different cultures.
**Multiple Cultures in English NLP.** Frenda et al. (2023) developed a corpus specifically for irony detection, emphasizing the impact of annotator diversity on the resulting annotations. They collected posts for irony detection and gathered annotators from five English-speaking countries: Ireland, India, GB, US, and AU. Our study, focusing on hate speech detection, extends the scope by collecting posts as well as annotations from different cultures and investigates the annotation disparities stemming from cultural variations.
## 3 Dataset Construction
To construct CREHate, we follow a 2-step procedure: 1) culture-specific post collection and 2) cross-cultural annotation. CREHate consists of 1,580 posts, each with five labels representing five countries, resulting in a total of 7,900 labels.
**English-Speaking Countries.** We choose one country from each continent to ensure geographical diversity for our annotator pool, while also considering cultural differences within and outside the Anglo-American sphere of influence (Cox and O'Connor, 2020; Gamble, 2021). Specifically, we select three core Anglosphere countries--AU, GB, and US (Davies et al., 2013)--and two additional countries where English is the official language but not necessarily the primary language--SG and ZA (Khokhlova, 2015; Tan, 1997).
### Culture-specific Post Collection
#### 3.1.1 Sampling from SBIC
To incorporate hate speech targeting diverse groups, we choose to sample posts from the SBIC dataset (Sap et al., 2020). Posts in SBIC originate from subReddits, microaggressions corpus (Breifeller et al., 2019), Twitter (Founta et al., 2018; Davidson et al., 2017; Waseem and Hovy, 2016), and hate sites (Gab 5, Stormfront (de Gibert et al., 2018)). The dataset contains annotations of which target groups and minorities are targeted within each post. From SBIC, we sample 980 posts while balancing the target group categories, including _race/ethnicity_, _gender/sexuality_, _religion/culture_, _victims_, _disability_, _social/political_, and _body/age_. To prioritize our analysis on hate speech rather than non-hate speech, we maintain a 2:1 ratio between targeted and non-targeted posts in our sampled SBIC data.
Footnote 5: [https://files.pushshift.io/gab/GABPOSTS_CORPUS.x](https://files.pushshift.io/gab/GABPOSTS_CORPUS.x)
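The sampling step described above can be sketched as follows. The column names ("category", "is_targeted") are hypothetical stand-ins for the SBIC annotation fields, not the official schema, and the balancing heuristic is a simplified version of the procedure.

```python
# Minimal sketch: draw SBIC posts balanced over target-group categories while
# keeping a 2:1 ratio of targeted to non-targeted posts.
import pandas as pd

def sample_sbic(sbic: pd.DataFrame, n_total: int = 980, seed: int = 0) -> pd.DataFrame:
    n_targeted = round(n_total * 2 / 3)          # 2:1 targeted : non-targeted
    n_plain = n_total - n_targeted

    targeted = sbic[sbic["is_targeted"]]
    categories = targeted["category"].unique()
    per_cat = n_targeted // len(categories)      # balance across target categories

    targeted_sample = (targeted.groupby("category", group_keys=False)
                               .apply(lambda g: g.sample(min(per_cat, len(g)),
                                                         random_state=seed)))
    plain_sample = sbic[~sbic["is_targeted"]].sample(n_plain, random_state=seed)
    return pd.concat([targeted_sample, plain_sample]).reset_index(drop=True)
```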
#### 3.1.2 Collecting Culture-Specific Samples
The sources of SBIC's posts are culturally skewed towards the US 6, resulting in a bias towards the prevalent target groups and cultural context of the US. To address this potential cultural bias, we have collected 600 culture-specific online posts from four English-speaking countries: AU, GB, SG, and ZA. We select 150 culture-specific posts from each country, collectively referred to as CP and individually labeled as CP\({}_{AU}\), CP\({}_{GB}\), CP\({}_{SG}\), and CP\({}_{ZA}\).
Footnote 6: Reddit and Gab’s users are mostly from the US ([https://www.semrush.com/website/reddit.com/overview/](https://www.semrush.com/website/reddit.com/overview/), [https://www.semrush.com/website/gab.com/overview/](https://www.semrush.com/website/gab.com/overview/)).
**Search Keywords Collection.** To efficiently gather hate speech posts, we employ a keyword-based search using words that refer to specific demographic groups that are often subjected to
hate. To obtain the most appropriate and culturally relevant keywords, we recruit workers whose nationality and current residency match our target country. We ask them to provide commonly targeted groups and possible hateful keywords that may refer to them within their culture. We collect target groups in _Race/Ethnicity_, _Gender/Gender Identity/Sexuality_, and _Culture/Origiw/Religion_, as these are the three main categories within the original SBIC dataset. Limiting the number of categories allows us to collect sufficient posts from each category and culture. We continue collecting until we gather at least 20 keywords per country.
**Post Candidate Crawling.** To find appropriate websites for post crawling, we ask the workers to identify popular social media and news platforms in their respective countries. As a result, we select Reddit as our primary social media platform for collecting comments, as it is widely used across all countries. We crawl comments from the YouTube channels of commonly reported news organizations in each country due to their higher comment volume than news websites. Specific subreddits and news sites are shown in Table 2. On Reddit, we extract all comments on the posts including the target groups or the keywords. On YouTube, we search using the query '<media name> + <target group>' to locate comments related to the target groups (e.g., 'BBC news pakistani'). We only include comments written in 2020 or later for an up-to-date dataset.
Footnote 7: Note that there is only one news platform for Australia, as no other YouTube channels of news sites provided by the workers allow comments.
**Post Filtering.** To ensure that we have enough potentially hateful posts in our dataset, we go through a pre-annotation stage. The process begins by randomly selecting 300 comments from each country, balancing those from Reddit and YouTube. We then obtain two annotations per comment from the source country of the comments. Subsequently, we curate a collection of 150 comments by selecting 50 comments from each of the three hate annotation counts, ranging from 0 to 2. With this procedure, we get a total of 600 culture-specific posts from four countries.
### Cross-cultural Annotation
**Annotator Recruitment.** We recruit annotators from five countries, choosing only those whose nationality and current residence match and who have spent most of their lives in their respective countries. We recruit workers from Prolific8 (AU, GB, ZA), Amazon Mechanical Turk9 (US), and Tictag10 (SG) depending on annotator recruitment availability in the desired country. In total, we have 1,061 annotators, balancing their gender but not restricting other attributes for a broader representation of demographics. Table 3 shows a detailed demographic distribution of annotators from each country.
Footnote 8: [https://www.prolific.co/](https://www.prolific.co/)
Footnote 9: [https://www.mturk.com/](https://www.mturk.com/)
Footnote 10: [https://www.titcagkr.com/](https://www.titcagkr.com/)
Annotation ProcessBefore the annotation process, annotators are required to review the definitions11 and examples of hate and non-hate speech. Examples were selected among posts with identical labels across five countries from a pilot study. The task is to annotate posts as either _Hate_, _Non-hate_, or _I don't know_. We obtain five _Hate_ or _Non-hate_
\begin{table}
\begin{tabular}{c|c|c} \hline
**Source** & **Reddit** & **YouTube** \\ \hline \multirow{2}{*}{AU} & _t_/nustralia, r/nustralia, r/melbourne, _t_/sydney, _t_/perth, _r_/brisburg, r/d Adelaide & Sky News Australia \\ \hline \multirow{2}{*}{GB} & _t_/nustralia6homi, r/casualk, r/england, _t_/sydenstorm, r/Wakes, r/nortermeland & Sky News, GBNews \\ \hline \multirow{2}{*}{SG} & _t_/nissapore, r/singsaporeRaw, _t_/singsaporeharpenings, r/singsuprama & CNA, The Strais Times \\ \hline \multirow{2}{*}{ZA} & _t_/suthorfrica, r/RSA, r/capetown, _t_/Johannesburg, r/Durth, r/Pretoria & SABC News, eNCA \\ \hline \hline \end{tabular}
\end{table}
Table 2: Data sources for each country. We crawled comments from country-specific subreddits and news platforms’ YouTube channels.
| | AU | GB | US | SG | ZA |
| --- | --- | --- | --- | --- | --- |
| **No. of Annotators** | 216 | 405 | 166 | 103 | 173 |
| **Gender** (%) | | | | | |
| male | 51.18 | 45.23 | 53.61 | 54.46 | 50.60 |
| female | 46.45 | 52.76 | 46.39 | 44.55 | 48.81 |
| non-binary | 2.37 | 2.01 | - | 0.99 | 0.60 |
| **Ethnicity** (%) | | | | | |
| Asian | 23.70 | 4.27 | 4.22 | 100.00 | 3.57 |
| Black | 0.47 | 2.76 | 6.63 | - | 78.57 |
| Hispanic | - | 0.25 | 0.60 | - | - |
| Middle Eastern | 1.90 | 0.25 | 0.60 | - | 0.60 |
| White | 67.30 | 89.20 | 86.75 | - | 10.71 |
| Other | 6.64 | 3.27 | 1.20 | - | 6.55 |

Table 3: Annotator demographic statistics from each country. We only include demographic categories that were shown to significantly affect the label disagreements, as mentioned in Section §4.1.
labels for each post from each country. Attention-check questions are incorporated throughout the annotation tasks to ensure high-quality data collection. To prevent a single annotator from having an excessive effect on the dataset, we limit the number of annotations from each annotator to less than 5% of the total annotations.
**Label Finalization.** After gathering all five annotations, we use majority voting to finalize the representative labels for each country. Our analysis shows that there exists a moderate agreement between each annotator and the aggregated labels with an average Cohen's \(\kappa\) agreement of 0.640 [10]. In addition, we include soft labels in the final dataset for future research purposes, although we did not incorporate them into our current work. Examples of posts with labels from each country are presented in Table 4.
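A minimal sketch of this aggregation step is shown below: one country's five binary annotations per post are majority-voted into a representative label, and each annotator's agreement with the aggregate is measured with Cohen's kappa. The 5xN annotation array is synthetic placeholder data, not the CREHate annotations.

```python
# Minimal sketch: majority-vote a country's five annotations into one label per post
# and compute each annotator's Cohen's kappa against the aggregated labels.
import numpy as np
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(0)
annotations = rng.integers(0, 2, size=(5, 200))        # 5 annotators x 200 posts (placeholder)

majority = (annotations.sum(axis=0) >= 3).astype(int)  # representative country label

kappas = [cohen_kappa_score(annotations[i], majority) for i in range(annotations.shape[0])]
print("mean Cohen's kappa vs. aggregated labels:", np.mean(kappas))
```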
## 4 Analysis on the Annotations
In this section, we show that varying cultural background of annotators and posts leads to a significant disparity in hate speech annotation.
### Statistical Analysis
To show that an annotator's cultural background is a significant factor in hate speech detection, we obtain labels representative of the different demographic categories--countries, ethnicity, age groups, religions, political orientations, and education levels--using a majority vote scheme. We only collect the representative labels from the demographic groups within each category with at least three annotators for all posts on average.
Labels from each group are then subjected to chi-squared tests to examine the impact of annotator demographics on the annotations. The results indicate significant disparities in annotations based on country (\(p=0.000\)), race (\(p=0.004\)), gender (\(p=0.007\)), and education level (\(p=0.000\)). Among these, several studies have shown the importance of race and gender of annotators on annotations [23, 2], whereas the impact of the annotator's cultural background has been underexplored.
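The per-category test can be sketched as below with SciPy; the contingency counts are placeholders rather than the CREHate statistics, and the same call would be repeated for each demographic category (ethnicity, gender, education level, and so on).

```python
# Minimal sketch: chi-squared test of whether hate/non-hate label counts differ
# across countries (placeholder counts, not the real CREHate numbers).
import numpy as np
from scipy.stats import chi2_contingency

countries = ["AU", "GB", "SG", "US", "ZA"]
# rows = countries, columns = (hate, non-hate) counts over the same set of posts
counts = np.array([
    [700, 880],
    [720, 860],
    [650, 930],
    [740, 840],
    [690, 890],
])
chi2, p, dof, expected = chi2_contingency(counts)
print(f"chi2={chi2:.2f}, dof={dof}, p={p:.4f}")
```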
### Label Agreement among Countries
**Posts with Varying Agreements.** We assess the level of agreement among different countries by examining the proportion of posts that received each level of agreement. Only 56.2% of the posts achieved unanimous agreement among all countries, with 25.5% of the posts receiving agreement among four countries.
Pairwise Label Agreement across CountriesWe also examine the label agreement between all pairs of countries. As shown in Figure 1(a), our findings suggest pairwise label agreements among core Anglosphere countries are greater than those observed in other country pairs. Among all countries, AU and GB exhibit the highest label agreement at 83.7%, while SG and ZA show the lowest agreement at 74.0%. We compare these results to the cultural distance index [13] between countries, which measures the degree to which cultural norms in two countries differ. A value of 0 indicates identical cultural norms, while a value close to 1 indicates average distance among all countries. The cultural distance and the hate speech label agreements among the countries show a high negative Pearson correlation with \(r=-0.658\) (\(p=0.039\)). This implies that country pairs with larger cultural distances have lower label agreement, as shown in ZA and SG, where they
| Source | Post | AU | GB | SG | US | ZA |
| --- | --- | --- | --- | --- | --- | --- |
| SBIC | Why did they plant trees in harlem? For free public transit. | ○ | ○ | ○ | ○ | × |
| CP\({}_{AU}\) | The special treatment is for real abbos only...... equality. | ○ | ○ | × | × | ○ |
| CP\({}_{GB}\) | @USER A pikey calling someone uneducated. The irony is painful. | × | ○ | ○ | ○ | ○ |
| CP\({}_{SG}\) | The attitude of Malays being "holier than thou" is kinda sickening. Have experienced it many times, sadly it's not gonna stop. | ○ | ○ | × | × | ○ |
| CP\({}_{ZA}\) | I wonder when people will recognize the EFF for what it really is. It's not a political party, it's a terrorist organization. | × | × | × | × | ○ |

Table 4: Examples of annotated posts in CREHate. ○ refers to 'hate', and × refers to 'non-hate' label. The example posts require culture-specific knowledge on target minorities (abbos, Malays), locations (Harlem), public organizations (EFF), and slang (pikey). People from different countries provide different hate speech annotations for posts including culture-specific context.
have a higher cultural distance (2.178) compared to AU and GB (0.144).
Our study employs a McNemar Test [14] to investigate the pairwise label differences across different countries. The results indicate significant pairwise label disparity between AU and SG, with the highest cultural distance. ZA and GB also display significant disparity, with GB being the second farthest from ZA. Furthermore, our examination reveals more pronounced differences within CP than in SBIC posts. Specifically, CP\({}_{AU}\), CP\({}_{GB}\), and CP\({}_{ZA}\) exhibit significant differences in eight or more country pairs out of ten. CP\({}_{SG}\) shows significant differences in six out of ten country pairs. However, in SBIC posts, significant differences are observed in only two pairs: US and ZA, and AU and SG. These outcomes demonstrate significant variations in annotations among individuals from diverse countries, particularly within culturally specific posts.
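A sketch of these pairwise analyses is given below: raw label agreement between two countries, a McNemar test (with continuity correction) on their paired labels, and the Pearson correlation between agreement and a cultural-distance index. All input values are placeholders, not the CREHate labels or the published distance indices.

```python
# Minimal sketch of the pairwise analyses on country-level labels (placeholder data).
import numpy as np
from scipy.stats import pearsonr, chi2

def pairwise_agreement(a, b):
    a, b = np.asarray(a), np.asarray(b)
    return float((a == b).mean())

def mcnemar_test(a, b):
    """McNemar test on paired binary labels a, b (1 = hate, 0 = non-hate)."""
    a, b = np.asarray(a), np.asarray(b)
    n01 = int(((a == 0) & (b == 1)).sum())   # discordant pairs
    n10 = int(((a == 1) & (b == 0)).sum())
    stat = (abs(n01 - n10) - 1) ** 2 / (n01 + n10)
    return stat, chi2.sf(stat, df=1)

rng = np.random.default_rng(0)
labels_au = rng.integers(0, 2, 1580)         # placeholder country labels
labels_sg = rng.integers(0, 2, 1580)
print("agreement:", pairwise_agreement(labels_au, labels_sg))
print("McNemar (stat, p):", mcnemar_test(labels_au, labels_sg))

# Correlation between per-pair agreement and cultural distance (placeholder values).
agreements = [0.837, 0.82, 0.81, 0.80, 0.79, 0.78, 0.77, 0.76, 0.75, 0.74]
distances  = [0.144, 0.30, 0.55, 0.80, 1.00, 1.20, 1.40, 1.60, 1.90, 2.178]
r, p = pearsonr(distances, agreements)
print(f"Pearson r={r:.3f}, p={p:.3f}")
```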
**Comparison with Random Annotator Groups** This section shows that label disparities stem from the annotators' cultural backgrounds rather than random variations among individuals. We compare the distribution of pairwise label agreements among different nationalities with the distribution among randomly organized annotator groups. For random annotator groups, we create two groups of five randomly selected annotations out of 25 for each sample in our dataset and draw representative labels from each group using majority voting. We calculate the label agreement of the two groups for the whole dataset and repeat this process \(10^{5}\) times for robustness. The outcomes are graphically depicted in Figure 1(b), which includes a histogram and the corresponding estimated normal distribution.
Based on the D'Agostino-Pearson normality test [13], the label agreements among random annotators follow a normal distribution with \(\mu=0.81\) and \(\sigma=0.008\). The average label agreement between pairs of countries is 0.79, as indicated by the dashed line in Figure 1(b), falling 2.58\(\sigma\) below the average of the random annotator groups. This clearly illustrates a notable distinction between the two distributions. The two highest and two lowest label agreements between country pairs are marked in solid vertical lines. The two highest label agreements, shown between the Anglosphere countries, are larger than the average label agreement among random annotator groups by 1.82\(\sigma\) and 2.36\(\sigma\). The two lowest agreements
| | Agreement | H-F1 | N-F1 |
| --- | --- | --- | --- |
| **CREHate** | 0.7882 | 0.7636 | 0.8077 |
| **SBIC** | 0.8045 | 0.8034 | 0.8050 |
| **CP** | 0.7617 | 0.6762 | 0.8108 |
| **CP\({}_{AU}\)** | 0.7293 | 0.6937 | 0.7565 |
| **CP\({}_{GB}\)** | 0.7493 | 0.6851 | 0.7913 |
| **CP\({}_{SG}\)** | 0.7827 | 0.6583 | 0.8390 |
| **CP\({}_{ZA}\)** | 0.7853 | 0.6565 | 0.8433 |

Table 5: The average pairwise label agreements and F1 scores for hate (H-F1) and non-hate (N-F1) labels among countries on subsets of CREHate. Our cultural posts (CP) show lower average pairwise label agreement and lower F1 scores for hate labels compared to SBIC posts.
Figure 2: (a) Pairwise label agreements among countries ordered by the average agreement with others. Labels from Singapore tend to be the most different. (b) Distribution of pairwise label agreements among random annotator groups. Label agreements among countries significantly differ from that of random annotator groups, exhibiting a lower average agreement.
fall 4.97\(\sigma\) and 8.41\(\sigma\) below this average. Through this, we demonstrate that label agreements among countries significantly differ from those from random annotator groups.
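The random-group baseline can be sketched as follows: 25 synthetic annotations per post are repeatedly split into two random groups of five, each group is majority-voted, and the agreement between the two aggregated label sets is recorded; the resulting distribution is then checked with the D'Agostino-Pearson normality test. The annotation matrix and the reduced number of repetitions are placeholders for illustration.

```python
# Minimal sketch of the random-annotator-group baseline (placeholder data).
import numpy as np
from scipy.stats import normaltest

rng = np.random.default_rng(0)
n_posts, n_annotators, n_rep = 1580, 25, 1000   # the paper uses 10**5 repetitions
annotations = rng.integers(0, 2, size=(n_posts, n_annotators))  # placeholder labels

agreements = np.empty(n_rep)
for r in range(n_rep):
    idx = rng.permutation(n_annotators)
    g1 = annotations[:, idx[:5]].sum(axis=1) >= 3     # majority vote, group 1
    g2 = annotations[:, idx[5:10]].sum(axis=1) >= 3   # majority vote, group 2
    agreements[r] = (g1 == g2).mean()

stat, p = normaltest(agreements)                      # D'Agostino-Pearson test
print(f"mean={agreements.mean():.3f}, std={agreements.std():.4f}, normality p={p:.3f}")
```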
**Label Agreements on Subsets of CREHate.** We also analyze label agreements among countries on different subsets of CREHate (Table 5). First, we compare the label agreements on two disjoint subsets of CREHate: sampled SBIC and culture-specific posts (CP) from AU, GB, SG, and ZA. CP exhibits a lower average pairwise label agreement than the sampled SBIC. Although SBIC and CP show comparable average pairwise F1 scores for non-hate labels, the F1 score for hate labels on CP significantly lags behind that of SBIC. This suggests that CP derives larger label disparities for identifying hate speech compared to SBIC. This trend is consistent across the sets of posts collected from different countries.
**Annotator Agreement.** Inter-annotator agreement (IAA) within a country also differs across countries. Using Krippendorff's \(\alpha\), IAA is the highest in the US (\(\alpha=0.462\)), followed by GB (\(\alpha=0.425\)), AU (\(\alpha=0.408\)), ZA (\(\alpha=0.351\)), and SG (\(\alpha=0.344\)).
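The sketch below shows one way to compute this statistic, using the third-party `krippendorff` package (installable with `pip install krippendorff`); the 5x200 reliability matrix is synthetic placeholder data rather than one country's actual annotations.

```python
# Minimal sketch: nominal Krippendorff's alpha for one country's annotations.
import numpy as np
import krippendorff

rng = np.random.default_rng(0)
# rows = annotators, columns = posts; np.nan would mark posts an annotator skipped
reliability = rng.integers(0, 2, size=(5, 200)).astype(float)

alpha = krippendorff.alpha(reliability_data=reliability,
                           level_of_measurement="nominal")
print("Krippendorff's alpha:", round(alpha, 3))
```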
### Annotators' Disagreement Analysis
This section analyzes the main factors behind label disagreements among countries by using the taxonomy of reasons for annotators' disagreement proposed by Sandri et al. (2023).
**Possible Disagreement Reason Annotation.** Among the 1,580 posts in CREHate, 692 posts exhibited label disagreements among countries. 400 posts are sampled for in-depth analysis, including 200 posts from SBIC and 50 posts from each of the four countries. We annotate the posts based on two strategies: 1) whether the posts contain cultural context, and 2) the specific reasons contributing to label disagreements (Table 7), allowing for multi-labeling where applicable. Three authors collaboratively carry out the annotation process after a norming session to establish a consensus on label definitions. Through discussions, all annotators achieved unanimous agreement on the final labels.
**Impact of Cultural Context within Posts.** Posts containing cultural context refer to culture-specific stereotypes about minorities, social phenomena, historical events, public figures, locations, or slang. Among the sampled CREHate posts displaying label disagreements among countries, 47.0% contained cultural context. Notably, 32.5% of SBIC dataset posts and 61.5% of additional posts included cultural context. Specifically, 72% of the posts from ZA and 68% from SG required cultural knowledge, compared to 48% of posts from GB and 58% from AU.
| Source | Post | Cultural | Reason Type |
| --- | --- | --- | --- |
| SBIC | RT @USER: I'm really fucking sick of watching the Republican Party be rewarded for flagrantly degrading our democracy. | ○ | personal bias |
| CP\({}_{AU}\) | You're Irish and you support your pals the Provisional IRA murderers of the innocent Australian civilians Stephen Melrose and Nick Spanos. You support terrorists. | ○ | personal bias |
| CP\({}_{GB}\) | Home Office will do nothing. Illegal immigrants are more valuable than us. BICYCLES are expensive, I had to save to buy one. WHERE DID THEY GET & HUNDREDS???? | ○ | sarcasm |
| CP\({}_{GB}\) | Gays have cards? are they sparkly? | × | sarcasm |
| CP\({}_{SG}\) | So with the repeal of 377A, married men are going to church to have gay sex? | ○ | not complete |
| CP\({}_{ZA}\) | Amazing. Kudos to the Poles. Fuck these ANC count nuggets. | ○ | not complete |

Table 6: Examples of disagreement reason annotation. For a sampled set of posts on which countries disagreed in hate speech labeling, we annotated 1) whether the posts require cultural background knowledge to comprehend and annotate and 2) possible reasons behind the disagreements.
| Categories | Subtypes |
| --- | --- |
| Sloppy Annotation | noise |
| Ambiguity | analogy, false assertion, rhetorical question, sarcasm, word play, reported speech |
| Missing Information | ungrammatical, no context, not complete |
| Subjectivity | personal bias, swearing, threatening |

Table 7: Taxonomy of annotators' disagreement in subjective tasks from Sandri et al. (2023). We annotate the possible reasons behind label disagreements between countries based on this categorization, on top of culture-relevance labeling.
**Possible Factors behind Disagreement.** In this section, we focus on the possible reasons behind the label disagreements for the posts with and without cultural context. Overall, _ambiguity_ and _subjectivity_ contributed the most to the disagreements. Disagreements within the posts without cultural context were more likely to occur due to the _ambiguity_ of the posts (54.7%) compared to _subjectivity_ (35.4%). In contrast, in the posts with cultural context, _subjectivity_ of the posts (41.5%) was more likely to provoke disagreements than _ambiguity_ (39.4%). Specifically, _personal bias_ was the most frequently observed subtype reason (30.9%), followed by _sarcasm_ (29.8%) and _not complete_ (18.1%) within the posts containing cultural context. In the posts without cultural context, the most common subtype reasons were _sarcasm_ (36.8%), _personal bias_ (18.9%), and _swearing_ (16.0%).
_Personal bias_ carries significant weight in the label disagreements, mainly when cultural context is included in the post. It refers to posts including discussions on divisive topics like politics or social movements. Label disagreements can arise from annotators having differing opinions about certain topics. Annotators can have different personal biases depending on their cultural background, especially when the topic is related to a cultural context.
_Sarcasm_ indicates expressing the opposite of the literal meaning of the words to make fun of a topic, often relying on contextual and extralinguistic cues. It amplifies challenges in achieving intercultural agreement in both posts with and without cultural context. First of all, annotators' sensitivity to sarcasm may vary depending on the topic and the annotators' cultural backgrounds. Furthermore, sarcasm referring to a specific cultural context may be difficult for annotators from different backgrounds to identify accurately.
_Not complete_ indicates a lack of sufficient information for full comprehension and is a common cause of disagreement in posts containing cultural context. Annotators from diverse cultures may struggle to label posts involving cultural references or nuances when crucial information is missing, requiring extra cultural background knowledge.
_Swearing_ is an important factor in label disagreement within posts without cultural context. Annotators' perceived offensiveness of a swear word in a text can impact their annotations. Different cultures may have varying perceptions of swear words based on usage and social context, resulting in label disagreements on the text containing them.
## 5 Culturally-adapted Models
Interpretation of hate speech varies depending on one's cultural background, highlighting the need to incorporate cultural diversity into datasets used to train culturally sensitive hate speech classifiers. This section shows that current large language models (LLMs), under a zero-shot setting, perform poorly on CREHate. Moreover, models trained solely on labels from one country yield different predictions for identical posts, underscoring the importance of including diverse cultural perspectives to ensure their efficacy across various communities. Lastly, we use several methodologies to train models capable of making culturally tailored predictions in a unified model. We leverage multi-labeling and multi-task learning, which are known to be effective for learning disagreements (Mostafazadeh Davani et al., 2022). We also introduce culture tagging, which shows comparable results in our experiments.
**Experimental Settings.** For zero-shot experiments of LLMs, we use a multiple-choice question format. We use five different variations of the question prompt (e.g., "_Answer if this post is hate or not with a single alphabet letter among given answer choices a and b._"), each presented with two answer options, 'Hate' and 'Non-hate', whose order is randomly shuffled for each variation12. The final value we present is an average among all variations.
Footnote 12: We plan to release all our prompts when the paper is accepted along with the final dataset.
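Since the exact prompts are not yet released, the sketch below only illustrates the described setup: a multiple-choice prompt with shuffled answer options and a single-letter reply that is parsed back to a label. The prompt wording follows the example quoted above, and `call_llm` is a hypothetical stand-in for whichever chat API is used (the paper queries gpt-3.5-turbo-0613 with temperature 0).

```python
# Minimal sketch of the zero-shot multiple-choice setup (hypothetical call_llm).
import random

def build_prompt(post: str, seed: int):
    options = ["Hate", "Non-hate"]
    random.Random(seed).shuffle(options)
    letters = dict(zip("ab", options))
    prompt = (
        "Answer if this post is hate or not with a single alphabet letter "
        "among given answer choices a and b.\n"
        f"Post: {post}\n"
        + "\n".join(f"{k}. {v}" for k, v in letters.items())
    )
    return prompt, letters

def classify(post: str, call_llm, n_variations: int = 5) -> float:
    """Return the fraction of prompt variations answered as 'Hate'."""
    hits = 0
    for seed in range(n_variations):
        prompt, letters = build_prompt(post, seed)
        reply = call_llm(prompt).strip().lower()[:1]   # expect 'a' or 'b'
        hits += letters.get(reply) == "Hate"
    return hits / n_variations
```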
To develop culturally aware classifiers, we use a ratio of 7:1.5:1.5 for train, validation, and test. We experiment with all possible country permutations when training with multi-labeling and multi-task learning. We randomly shuffle the entire culture-tagged dataset to prevent the models from learning from the order of the country tags. The final value we present is an average of all these iterations.
Models used are as follows: GPT-3.5 (gpt-3.5-turbo-0613)13, FLAN-T5-XXL (Chung et al., 2022), OPT-IML (Iyer et al., 2022), BERTweet-base (Nguyen et al., 2020), HateBERT (Caselli et al., 2021), TwHIN-BERT (Zhang et al., 2023), Twitter-RoBERTa (Barbieri et al., 2020), ToxDect-RoBERTa (Zhou et al., 2021), BERT-base-cased (Devlin et al., 2019), and RoBERTa-base (Liu et al., 2019).
Footnote 13: [https://platform.openai.com/docs/models/gpt-3-5](https://platform.openai.com/docs/models/gpt-3-5)
4 Quadro RTX A6000 48GB were used with
CUDA version 11.4 for all experiments. For GPT-3.5, we set the temperature as 0 to use greedy decoding. For training BERT-variants, we use AdamW (Loshchilov and Hutter, 2019) as the optimizer with a learning rate of 2e-5 and use linear scheduling for training with 6 epochs. We set the maximum sequence length of texts to 128 and batch size to 32 for both training and evaluation steps.
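A minimal sketch of this fine-tuning configuration with Hugging Face transformers is given below. Dataset wiring is omitted, `train_ds`/`val_ds` are assumed to be tokenized datasets with label columns, and the checkpoint name is a plausible stand-in for the Twitter-RoBERTa model rather than a confirmed choice from the paper.

```python
# Minimal sketch of the BERT-variant fine-tuning setup described above:
# AdamW (Trainer default), lr 2e-5, linear schedule, 6 epochs, max length 128, batch 32.
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

model_name = "cardiffnlp/twitter-roberta-base"   # assumed checkpoint for illustration
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

def tokenize(batch):
    # Matches the stated maximum sequence length of 128.
    return tokenizer(batch["text"], truncation=True, max_length=128)

args = TrainingArguments(
    output_dir="crehate-finetune",
    learning_rate=2e-5,
    lr_scheduler_type="linear",
    num_train_epochs=6,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
)

# trainer = Trainer(model=model, args=args, train_dataset=train_ds, eval_dataset=val_ds)
# trainer.train()
```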
### Zero-shot Predictions in LLMs
We use zero-shot classification to evaluate the performance of LLMs in identifying hate speech on CREHate. Table 8 shows the results evaluated on the labels of each country. Our results show that the models do not provide sufficient performance for cross-cultural hate speech detection. CREHate could be leveraged to improve these LLMs to be culturally aware, which we leave for future work.
### Monoculturally trained Models
This section analyzes to what extent monoculturally trained models exhibit different label predictions. In Table 8, the first row for each BERT-variant model showcases its performance when trained on a particular country label. The models trained on respective country labels show an average of 82.1% of average pairwise label agreements within the test set, with a range of 78.6% to 84.4%. Notably, these models showed higher average label agreements within the SBIC posts (85.7%), compared to CP posts (76.4%), showing a similar trend with the entire CREHate dataset, as mentioned in Table 5. Then, we utilize Twitter-RoBERTa, achieving the best average performance for monocultural training, to present specific examples of how each model shows distinct predictions on identical posts, as displayed in Table 9. Despite sharing the same baseline model, the models show different predictions on the same posts.
### Cross-cultural Training
**Culture Tagging.** Similarly to BERT's [CLS] token, a token representing each culture is added to the beginning of every post and utilized as a single data sample. Posts with labels corresponding to those from each country are prepended with a [{country_code}] token (e.g., [AU]). This approach enables the model to predict the label for each culture using the culture token. Its efficiency lies in the fact that not all labels from each country need to be collected for the model to be trained. Unlike multi-labeling or multi-task learning, culture tagging's strength is in the separate learning of all data points by the model, thereby not requiring all five labels to exist.
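The data-preparation side of culture tagging can be sketched as follows: one special token is registered per country and prepended to each post, giving one tagged training example per available country label. The base checkpoint, example post, and label dictionary are placeholders.

```python
# Minimal sketch of culture tagging: register [AU], [GB], ... as special tokens
# and expand each post into one tagged example per country label it has.
from transformers import AutoTokenizer

COUNTRIES = ["AU", "GB", "SG", "US", "ZA"]
culture_tokens = [f"[{c}]" for c in COUNTRIES]

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
tokenizer.add_special_tokens({"additional_special_tokens": culture_tokens})
# The classifier would then call model.resize_token_embeddings(len(tokenizer)).

def make_examples(post: str, labels: dict) -> list:
    """Turn one post with per-country labels into tagged examples;
    countries without a label are simply skipped."""
    return [{"text": f"[{c}] {post}", "label": labels[c]} for c in COUNTRIES if c in labels]

examples = make_examples("example post text", {"AU": 1, "GB": 0, "US": 1})
print(examples[0])
print(tokenizer(examples[0]["text"])["input_ids"][:8])
```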
**Cross-cultural Model Results.** As shown in Table 8, our study parallels the finding of Mostafazadeh Davani et al. (2022) that multi-labeling and multi-task learning benefit from sharing layers to learn each country's perspectives. Multi-task learning slightly outperforms multi-labeling for most of the models in our experiment, as it trains separate classifier layers for each country. Model performance increased by up to 8.2% when utilizing culture tokens to learn each country's perceptions, compared to monocultural models. The results also suggest that culture tagging performs comparably to multi-labeling and multi-task learning.
\begin{table}
\begin{tabular}{l|c c c c c} \hline \hline & **AU** & **GB** & **SG** & **US** & **ZA** \\ \hline GPT-3.5 & 70.55 & 70.50 & 70.55 & 69.08 & 68.97 \\ FLAN-T5 & 66.29 & 66.93 & 66.61 & 66.02 & 67.11 \\ OPT-IML & 56.31 & 57.79 & 56.30 & 60.83 & 57.98 \\ \hline BERTweet & 67.59 & 67.32 & 69.60 & 64.89 & 71.64 \\ + ML & 72.48 & 71.91 & 71.72 & 72.04 & **73.04** \\ + MTL & 73.09 & 72.60 & **72.06** & 72.63 & 72.52 \\ + TAG & **73.97** & **72.64** & 70.37 & **73.12** & 70.65 \\ \hline HateBERT & **74.14** & 71.11 & 63.72 & 69.71 & 70.47 \\ + ML & 73.46 & 75.54 & 70.64 & 74.05 & 72.87 \\ + MTL & 73.43 & 74.91 & 69.98 & **74.66** & **73.06** \\ + TAG & 73.54 & **77.88** & **71.93** & 72.83 & 71.92 \\ \hline TwHN-BERT & 65.79 & 66.67 & 66.67 & 67.38 & **71.70** \\ + ML & **70.51** & **71.27** & **69.75** & **72.44** & **71.70** \\ + MTL & 70.23 & 70.69 & 68.95 & 72.24 & 71.30 \\ + TAG & 69.72 & 71.09 & 67.91 & 71.20 & 69.27 \\ \hline Twitter-RoBERTa & 75.63 & 74.34 & 67.53 & 71.66 & 68.52 \\ + ML & 75.19 & 76.51 & 71.84 & 76.52 & 72.48 \\ + MTL & 75.59 & 76.95 & 72.31 & **76.80** & **72.57** \\ + TAG & **78.45** & **79.45** & **73.45** & 76.14 & 70.65 \\ \hline ToxDect-RoBERTa & 69.96 & 71.02 & 67.73 & 65.64 & 66.39 \\ + ML & 72.68 & 73.27 & 70.54 & 72.44 & **70.01** \\ + MTL & **73.03** & **73.47** & 70.91 & **72.89** & 69.86 \\ + TAG & 72.97 & 71.03 & **71.56** & 70.41 & 68.27 \\ \hline BERT & 69.53 & 70.48 & 62.56 & 67.78 & 67.31 \\ + ML & 69.48 & **71.21** & 67.02 & 71.20 & 71.22 \\ + MTL & 69.74 & **72.21** & 67.85 & **72.40** & **71.97** \\ + TAG & **70.39** & 68.97 & **69.64** & 63.23 & 68.97 \\ \hline RoBERTa & 72.50 & 69.52 & 66.37 & 75.71 & 72.73 \\ + ML & 73.22 & 74.36 & 70.84 & **75.57** & **73.62** \\ + MTL & **73.38** & **74.56** & **71.23** & 75.13 & 73.37 \\ + TAG & 73.06 & 73.68 & 69.16 & 73.68 & 72.28 \\ \hline \hline \end{tabular}
\end{table}
Table 8: Macro-F1 scores of the models’ predictions on each country’s labels. LLM results are calculated based on the comparison of the predictions with each country label. For BERT-variants, we show monocultural and cross-cultural model results. Multi-labeling (ML), multi-task learning (MTL), and culture tagging (TAG) outperform monoculturally trained models.
## 6 Conclusion
We develop CREHate, a cross-cultural English hate speech dataset consisting of 1,580 posts collected and annotated from five English-speaking countries--AU, GB, SG, US, and ZA. Through statistical analysis, we discover significant variations in the annotations across countries. Only 56.2% of the posts achieved consensus in labels across all countries, with an average of 21.2% pairwise label disagreement between countries. Qualitative analysis shows that the potential factors behind the label disagreements are inherent cultural context as well as subjectivity and ambiguity of the posts.
Moreover, we develop culturally sensitive unified hate speech classifiers that are capable of predicting country-specific labels more accurately than monocultural models trained exclusively on each country's labels. This highlights CREHate's utility for building cross-cultural hate speech classifiers.
Developing hate speech datasets and classifiers that account for cultural nuances is crucial. Therefore, researchers and practitioners must take into account the cultural backgrounds of both posts and annotators. We urge the construction of hate speech datasets that encompass a broad spectrum of cultural references and contextual nuances, with annotators who are familiar with the content to be annotated. This comprehensive approach ensures that the dataset reflects the real-world diversity present in online communication.
## 7 Limitations
CREHate consists of 1,580 posts, making it relatively small compared to other existing English hate speech datasets. Moreover, the collection of culture-specific posts was limited to Reddit and YouTube based on fixed hate-related keywords, which may introduce bias into the collected posts. Also, employing a single crowdsourcing platform for collecting each country's annotation may lead to annotator bias, as different platforms possess varying user demographics. To enhance the representativeness and generalizability of our findings, we anticipate future efforts to expand our dataset by using diverse platforms and post collection methods.
Considering that many countries are multicultural, it is also essential to examine within-country annotation differences. For instance, Singapore has a diverse population, including Chinese, Malaysians, and Indians. Exploring hate speech annotation differences across different ethnicities within a country presents another avenue for investigation. Moreover, although we recruit annotators from countries where English is one of their official language(s), this may not be enough to cover all English-speaking cultures. Further study is needed to include English as a Foreign Language (EFL) learners in cross-cultural hate speech detection. Moreover, the same approach could be extended to languages other than English (e.g., Spanish) spoken in various countries.
There are other subjective tasks that are affected by cultural context, such as common sense reasoning. Future research could extend the scope of our study to other tasks by constructing datasets tailored towards specific cultures, both within and across countries with diverse languages.
## 8 Ethics Statement
This research was conducted with full approval from the Institutional Review Board (IRB). We conducted our crowd worker recruitment without any discrimination based on age, ethnicity, disability, or gender. Our workers are compensated at a rate higher than Prolific's ethical standards. We emphasize our unequivocal disapproval of any form of malicious exploitation of our dataset, including
\begin{table}
\begin{tabular}{l|l|c|c|c|c|c} \hline Source & Post & M\({}_{AU}\) & M\({}_{GB}\) & M\({}_{SG}\) & M\({}_{US}\) & M\({}_{ZA}\) \\ \hline SBIC & MeToo Movement Femoid Likens Being Hit on in Public to Being Mugged. & \(\Circle\) & \(\Circle\)* & \(\times\) & \(\times\) & \(\Circle\)* \\ \hline CP\({}_{GB}\) & Out of all the immigrants how many are gonna go back home? & \(\Circle\)* & \(\Circle\)* & \(\Circle\) & \(\times\)* & \(\Circle\) \\ \hline CP\({}_{ZA}\) & Reading this thread has turned me into a communist, the morality and values & \(\times\)* & \(\Circle\)* & \(\Circle\) & \(\Circle\)* & \(\Circle\)* \\ & of rich Boers is some real upside-down twisted shit lmao & & & & & \\ \hline CP\({}_{ZA}\) & Wow. Rainbow turned completely black & \(\times\)* & \(\times\)* & \(\times\)* & \(\times\)* & \(\Circle\)* \\ \hline \end{tabular}
\end{table}
Table 9: Examples of predictions from models that are monoculturally trained. M\({}_{AU}\) refers to model predictions trained on Australian labels and the same for all other countries. \(\Circle\) refers to ‘hate’, and \(\times\) refers to ‘non-hate’ label. * means the prediction and the actual label are the same. This table shows that models trained on different perspectives show different labeling tendencies even for an identical post.
any misuse of our dataset for generating hateful language. We demand that researchers and practitioners use this dataset solely for constructive purposes.
|
2309.11343 | Using Property Elicitation to Understand the Impacts of Fairness
Regularizers | Predictive algorithms are often trained by optimizing some loss function, to
which regularization functions are added to impose a penalty for violating
constraints. As expected, the addition of such regularization functions can
change the minimizer of the objective. It is not well-understood which
regularizers change the minimizer of the loss, and, when the minimizer does
change, how it changes. We use property elicitation to take first steps towards
understanding the joint relationship between the loss and regularization
functions and the optimal decision for a given problem instance. In particular,
we give a necessary and sufficient condition on loss and regularizer pairs for
when a property changes with the addition of the regularizer, and examine some
regularizers satisfying this condition standard in the fair machine learning
literature. We empirically demonstrate how algorithmic decision-making changes
as a function of both data distribution changes and hardness of the
constraints. | Jessie Finocchiaro | 2023-09-20T14:20:56Z | http://arxiv.org/abs/2309.11343v2 | # Using Property Elicitation to Understand the Impacts of Fairness Constraints
###### Abstract
Predictive algorithms are often trained by optimizing some loss function, to which regularization functions are added to impose a penalty for violating constraints. As expected, the addition of such regularization functions can change the minimizer of the objective. It is not well-understood which regularizers change the minimizer of the loss, and, when the minimizer does change, _how_ it changes. We use _property elicitation_ to take first steps towards understanding the joint relationship between the loss and regularization functions and the optimal decision for a given problem instance. In particular, we give a necessary and sufficient condition on loss and regularizer pairs for when a property changes with the addition of the regularizer, and examine some regularizers satisfying this condition standard in the fair machine learning literature. We empirically demonstrate how algorithmic decision-making changes as a function of both data distribution changes and hardness of the constraints.
## 1 Introduction
Machine learning is increasingly being used for prediction and resource allocation tasks pertaining to human livelihood; algorithms often make predictions based on patterns in historical data to make or supplement decisions about future events. For example, algorithms are commonly used to determine whether or not a loan applicant should receive a loan [2, 30, 31], estimate a patient's risk of heart disease [14, 25, 28], and estimate need for public assistance [21], among other settings. Typically, an algorithm tries to predict something like the probability of an applicant repaying the loan if granted one, and then uses this prediction to assign a treatment to the applicant, such as granting or not granting a loan. Implicit in this model is the use of an underlying distribution to assign a treatment by computing some underlying summary statistic, or _property_, of the distribution over outcomes. Property elicitation studies the relationship between the choice of objective function, treatment assignments, and various statistics. For example, minimizing squared loss corresponds to predicting the _expected value_ of the outcome (the probability of repayment) and deciding whether or not to give a loan based on the expected value being above a given threshold.

This contrasts with minimizing the 0-1 loss, which corresponds to learning the _mode_, i.e., whether the person is more likely than not to repay a loan; the assigned treatment is then simply the decision to grant a loan.
In most practical optimization tasks, however, one faces constraints on the treatment space, especially when the treatments impact human livelihood and when resources are scarce. In particular, fairness constraints are often employed to enforce the (approximately) equal algorithmic treatment of different predefined groups. Instead of minimizing the original loss function, these algorithms often instead minimize loss + weight * regularizer, where the regularization term adds a penalty for violating certain desiderata about community-level outcomes.
However, to date, there is little understanding of how adding regularization functions into the optimization problem changes the property of the data distribution learned. We give a necessary and sufficient condition for regularizers to preserve an elicited property: the property elicited by the fairness regularizer must be equivalent to the property elicited by the original loss. Moreover, with additional knowledge about the possible outcome distributions, we can ask when generally non-equivalent optimization problems become equivalent. We demonstrate our results on group fairness regularizers, though other regularization functions can be used as well (e.g., [26]).
To this end, we introduce the notion of _regularized property elicitation_, and what it means for two properties to be equivalent. In Theorem 1 we show that, under mild conditions on the regularizer, a regularized property is equivalent to the original property if and only if the property elicited by the regularizer is equivalent to the original property. We apply Theorem 1 to a handful of popular fairness regularizers- the absolute difference of demographic parity, expected equality of opportunity, and equalized false positive rates- and demonstrate they are not equivalent to cost-sensitive classifications. However, it is not necessarily the case that a regularizer changes the elicited property: many additive regularizers yield regularized properties equivalent to the original, namely (multi)calibration and bounded group loss.1 In these cases, while the property does not change, using the regularizer is still effective because of practical limitations on the expressivity of the hypothesis class \(\mathcal{H}\), among other optimization challenges. It does suggest in some sense that these equivalent regularizers value "accuracy as fairness," in line with sentiment from the original works.
Footnote 1: We are not assigning a value judgment to whether or not a regularizer changes a property.
In SS 3, we present Theorem 1, which gives the necessary and sufficient condition for the equivalence of properties, and in SS 4 demonstrate these conditions on common fairness regularizers for binary classification. For those regularizers that do change an elicited property, we additionally provide examples and geometric intuition about _for which data distributions_ the regularizers change (or do not change) the optimal decision, enforcing the imposed constraints. Finally, in SS 5, we demonstrate our results with empirical evaluation on synthetic data, a heart attack risk analysis dataset [28], and the German lending dataset [18].
### Literature review
In machine learning, a variety of pre-, in-, and post-processing techniques have emerged in recent years to make algorithmic decision-making more fair or equitable. We focus on
one algorithmic aspect of in-processing wherein one modifies the learning algorithm itself by adding a soft constraint to the objective function, which is some weighted metric of the fairness violation. The addition of fairness regularizers is one common approach to try to improve algorithmic decision-making in practice, though their effects are generally not well-understood (cf. [3, 4, 7, 8, 13, 16, 17, 19, 33]). While many proposed fairness metrics are situated in binary classification settings, extensions beyond the binary setting have been proposed more recently [7, 9, 20, 33, 34]. Our framework is general enough to handle a variety of prediction tasks and regularizers beyond the fair machine learning literature.
We study the impact of regularization functions on the "right" decision an algorithm should make as a function of the underlying data distribution through the lens of property elicitation. Property elicitation is well understood on an individual level for a variety of discrete prediction tasks [10, 22, 23, 24] and continuous estimation problems [6, 11, 12, 29, 32]. However, to the best of our knowledge, no work has been done on property elicitation at the community level, which is necessary when regularizers are not additive in treatments \(\mathbf{t}\). Regularizers considering community-level outcomes and group membership require that we extend traditional notions of property elicitation.
## 2 Background
We are primarily concerned with evaluating the optimal treatment for various prediction tasks. Consider an agent \(i\in\{1,2,\ldots,m\}=[m]\) who will achieve some outcome \(y^{(i)}\in\mathcal{Y}\) with probability \(p^{(i)}\in\Delta_{\mathcal{Y}}\), where \(\Delta_{\mathcal{Y}}\) is the simplex over a finite set of outcomes \(\mathcal{Y}\). A central decision-maker (often a principal or algorithm) assigns a treatment \(t^{(i)}\in\mathcal{T}\) to the agent, and their error is scored according to a loss function \(L:\mathcal{T}\times\mathcal{Y}\rightarrow\mathbb{R}_{+}\). As shorthand, denote \(L(t^{(i)};p^{(i)}):=\mathbb{E}_{Y\sim p^{(i)}}L(t^{(i)},Y)\) as the expected loss over \(p^{(i)}\). Moreover, we assume each agent \(i\) is a member of a group \(s^{(i)}\in\mathcal{S}\), and want to ensure agents of different groups are treated fairly by the centralized decision-maker. Let \(n_{g}:=|\{i\in[m]:s^{(i)}=g\}|\) be the number of agents belonging to group \(g\), which we assume is positive. Often, we are concerned with possibly set-valued functions, \(\Gamma:\Delta_{\mathcal{Y}}\to 2^{\mathcal{T}}\setminus\{\emptyset\}\); for shorthand, we denote this \(\Gamma:\Delta_{\mathcal{Y}}\rightrightarrows\mathcal{T}\).
In supervised machine learning, predictions are made by learning a hypothesis function \(h:\mathcal{X}\rightarrow\mathcal{T}\) mapping features \(x\in\mathcal{X}\) to treatments \(t\in\mathcal{T}\). We assume \(\mathcal{T}\) is a finite set unless otherwise stated. If the class of hypotheses \(\mathcal{H}\) is sufficiently expressive, then \(t\) encapsulates how the optimal hypothesis _should assign treatment_, given an input \(x\). Equivalently, we are concerned with optimal decisions under \(p^{(i)}=\Pr[Y\mid X=x^{(i)}]\). For simplicity, we abstract away \(\mathcal{X}\) and proceed with \(p^{(i)}\in\Delta_{\mathcal{Y}}\) and \(t^{(i)}\in\mathcal{T}\) in the sequel.
### Regularization functions
Often, "fair" algorithms constrain optimization to ensure certain desiderata are satisfied. However, some standard optimization algorithms such as stochastic gradient descent often softens these constraints, adding an additional penalty to the loss function for violating the constraints. We study how the addition of regularization functions
(henceforth: regularizers) change the optimal treatment assigned by minimizing the expected loss.
For example, imposing group fairness constraints, one might aim to ensure treatments are independent of the sensitive statistic (as in demographic parity) or treatments are calibrated to line up with the true probabilities of positive classification (as in calibration). In this setting, given a collection of individuals \(\{(s^{(i)},p^{(i)})\}\), we aim to optimize
\[\min_{\mathbf{t}\in\mathcal{T}^{m}}L^{\mathcal{R},\lambda}(\mathbf{t}; \mathbf{s};\mathbf{p}):=(1-\lambda)\underbrace{\left[\frac{1}{m}\sum_{i=1}^{m} L(t^{(i)};p^{(i)})\right]}_{\text{expected loss over $m$ agents}}+\lambda\mathcal{R}(\mathbf{t};\mathbf{s};\mathbf{p}). \tag{1}\]
Because the regularizer might not be additive in \(\mathbf{t}\), the treatment of an individual is not necessarily independent of the treatment of others. This necessitates the optimization of \(\mathbf{t}\in\mathcal{T}^{m}\) rather than considering each data point individually, as is standard in unregularized property elicitation.
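To make the population-level objective in (1) concrete, the following is a minimal sketch (ours, not code from the paper; the function names and the demographic-parity example regularizer below are illustrative choices) that evaluates \(L^{\mathcal{R},\lambda}\) for every joint treatment \(\mathbf{t}\in\mathcal{T}^{m}\) by brute force and returns the minimizers, for a small population and a finite treatment space.

```python
from itertools import product
import numpy as np

def regularized_objective(t, s, p, loss, reg, lam):
    """(1 - lam) * average individual expected loss + lam * population-level regularizer."""
    avg_loss = np.mean([loss(t_i, p_i) for t_i, p_i in zip(t, p)])
    return (1 - lam) * avg_loss + lam * reg(t, s, p)

def optimal_treatments(s, p, loss, reg, lam, treatments=(0, 1)):
    """Brute-force argmin over T^m; only feasible for small m and finite T."""
    best, argmins = np.inf, []
    for t in product(treatments, repeat=len(p)):
        val = regularized_objective(t, s, p, loss, reg, lam)
        if val < best - 1e-12:
            best, argmins = val, [t]
        elif abs(val - best) <= 1e-12:
            argmins.append(t)
    return best, argmins

# Expected 0-1 loss for a binary outcome, where p is Pr[Y = 1].
zero_one = lambda t, p: p if t == 0 else 1 - p

# Demographic parity violation for two groups 'a' and 'b'.
def dp_violation(t, s, p):
    rate = lambda g: np.mean([t_i for t_i, s_i in zip(t, s) if s_i == g])
    return abs(rate('a') - rate('b'))

# Two agents, one per group; the penalty makes the uniform treatments optimal here.
print(optimal_treatments(('a', 'b'), (0.6, 0.4), zero_one, dp_violation, lam=0.3))
```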
### Property elicitation
When making predictions, a decision-maker often aims to learn a _property_\(\Gamma:\Delta_{\mathcal{Y}}\rightrightarrows\mathcal{T}\), which is simply a function mapping probability distributions to treatments. Examples of commonly sought properties include the expected value \(EV(p)=\{\mathbb{E}_{Y\sim p}[Y]\}\), the mode \(\mathrm{mode}(p)=\arg\max_{y}p_{y}\), \(\alpha\)-quantiles, and rankings.
**Definition 1** (Property, elicits).: _A property is a function \(\Gamma:\Delta_{\mathcal{Y}}\rightrightarrows\mathcal{T}\) mapping probability distributions to reports. If \(|\mathcal{T}|\) is finite, we call \(\Gamma\) a finite property. Moreover, a minimizable2 loss \(L:\mathcal{T}\times\mathcal{Y}\rightarrow\mathbb{R}_{+}\) elicits a property \(\Gamma\) if, for all \(p\in\Delta_{\mathcal{Y}}\),_
Footnote 2: One that attains the infimum in its first argument for all \(y\in\mathcal{Y}\)
\[\Gamma(p)=\arg\min_{t\in\mathcal{T}}L(t;p)\.\]
_Conversely, we denote the level set of a property \(\Gamma_{t}=\{p\in\Delta_{\mathcal{Y}}\mid t\in\Gamma(p)\}\) as the set of distributions yielding the same optimal treatment._
Throughout, we assume that properties are _nonredundant_, meaning that the level set \(\Gamma_{t}\) is full-dimensional3 for all \(\mathbf{t}\in\mathcal{T}\) and for each \(p\in\operatorname{relint}(\hat{\Gamma}_{\mathbf{t}})\), we have \(|\hat{\Gamma}(\mathbf{p})|=1\). This precludes the consideration of treatments that are rarely optimal, or only optimal if and only if another treatment is optimal as well.
Footnote 3: The affine dimension of the set equals the affine dimension of the simplex
Every minimizable loss elicits some property; we denote \(\operatorname{prop}[L]\) as the (unique) property elicited by the loss \(L\). For example, the squared loss elicits the expected value [6, 29], and the level set \(\Gamma_{0}=\{p\in\Delta_{\mathcal{Y}}:\mathbb{E}_{p}[Y]=\{0\}\}\) of the expected value is the set of distributions with zero mean. We will later study the geometry of the level sets of various properties to characterize how the minimizers of unregularized losses differ from those of their regularized counterparts. In order to do so, we consider the property \(\Gamma\) evaluated on a population. Given \(\mathbf{p}\in\Delta_{\mathcal{Y}}^{m}\), we consider the extension \(\hat{\Gamma}(\mathbf{p}):=[\Gamma(p^{(i)})]_{i}\) with level sets \(\hat{\Gamma}_{\mathbf{t}}:=\bigcap_{i}\{\mathbf{p}\in\Delta_{\mathcal{Y}}^{m}\mid t^{(i)}\in\Gamma(p^{(i)})\}\).
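As a small illustration of Definition 1 (our own sketch, not from the paper), \(\operatorname{prop}[L](p)\) can be recovered by directly minimizing the expected loss over a finite report set: the 0-1 loss recovers the mode, while the squared loss minimized over a fine grid of reports approximates the expected value.

```python
import numpy as np

def prop(loss, reports, p, outcomes):
    """Approximate prop[L](p) = argmin_t E_{Y~p} L(t, Y) over a finite report set."""
    exp_loss = [sum(p_y * loss(t, y) for p_y, y in zip(p, outcomes)) for t in reports]
    best = min(exp_loss)
    return [t for t, v in zip(reports, exp_loss) if np.isclose(v, best)]

outcomes = [0, 1, 2]
p = [0.2, 0.5, 0.3]

zero_one = lambda t, y: float(t != y)
squared = lambda t, y: (t - y) ** 2

print(prop(zero_one, outcomes, p, outcomes))               # mode: [1]
print(prop(squared, np.linspace(0, 2, 201), p, outcomes))  # ~ E[Y] = 1.1
```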
We now extend Definition 1 to include population-level reports for loss functions to encapsulate the case where the regularizer is not additive in \(\mathbf{t}\) or dependent on \(\mathbf{s}\).
**Definition 2** (Regularized property elicitation).: _A regularized property is a function \(\Theta^{\mathcal{R},\lambda}:\mathcal{S}^{m}\times\Delta^{m}_{\mathcal{Y}} \rightrightarrows\mathcal{T}^{m}\) mapping beliefs over outcomes to population-level treatments. Similarly, an objective function \(L\) regularized by \(\mathcal{R}\) (weighted by \(\lambda\)), denoted \(L^{\mathcal{R},\lambda}\), elicits a regularized property if, for all \(\mathbf{s}\in\mathcal{S}^{m}\) and \(\mathbf{p}\in\Delta^{m}_{\mathcal{Y}}\),_
\[\Theta^{\mathcal{R},\lambda}(\mathbf{s};\mathbf{p})=\arg\,\min_{\mathbf{t}\in \mathcal{T}^{m}}L^{\mathcal{R},\lambda}(\mathbf{t};\mathbf{s};\mathbf{p}).\]
_We let \(\operatorname{prop}[L^{\mathcal{R},\lambda}]\) denote the regularized property elicited by \(L^{\mathcal{R},\lambda}\)._
Denoting the level set of a regularized property requires some nuance because we are concerned with the change in optimal treatments as a function of the outcome distributions \(\mathbf{p}\), but the regularized property is a function of \(\mathbf{s}\) as well as \(\mathbf{p}\). Therefore, we let \(\Theta^{\mathcal{R},\lambda}_{\mathbf{t};\mathbf{s}}=\{\mathbf{p}\in\Delta^{m}_{\mathcal{Y}}\mid\mathbf{t}\in\Theta^{\mathcal{R},\lambda}(\mathbf{s};\mathbf{p})\}\) denote the level set of the regularized property \(\Theta^{\mathcal{R},\lambda}\). If \(\mathbf{s}\) is clear from context, we sometimes omit it and write \(\Theta^{\mathcal{R},\lambda}_{\mathbf{t}}\).
## 3 Equivalence of (regularized) properties
With an understanding of regularized property elicitation, we are now equipped to ask when a property "changes" with the addition of a regularizer to a loss; this requires us to consider what it means for properties to be unchanged, or equivalent.
**Definition 3** (Equivalence of properties).: _A property \(\Gamma:\Delta_{\mathcal{Y}}\rightrightarrows\mathcal{T}\) is equivalent to a regularized property \(\Theta:\mathcal{S}^{m}\times\Delta^{m}_{\mathcal{Y}}\rightrightarrows\mathcal{ T}^{m}\) on \(\mathbf{s}\) (denoted \(\Gamma\equiv_{\mathbf{s}}\Theta\) or \(\hat{\Gamma}\equiv_{\mathbf{s}}\Theta\)) if, for all \(p\in\Delta^{m}_{\mathcal{Y}}\), we have \(\mathbf{t}\in\Gamma(\mathbf{p})\iff\mathbf{t}\in\Theta(\mathbf{s};\mathbf{p})\)._
In general, but particularly for large sets of agents, equivalence of a regularized property to its unregularized counterpart is a rather strong condition: when there is a "universally fair" report, equivalence holds if (and only if) the regularizer elicits essentially the same property as the original loss.
**Theorem 1**.: _Fix \(\lambda\in(0,1)\) and \(\mathbf{s}\in\mathcal{S}^{m}\). Let a loss \(L\) elicit the nonredundant finite property \(\Gamma\). Moreover, let \(L^{\mathcal{R},\lambda}\) elicit the finite property \(\Theta\), and \(\mathcal{R}\) elicit finite property \(H\). Then (1) \(\hat{\Gamma}\equiv_{\mathbf{s}}H\implies\hat{\Gamma}\equiv_{\mathbf{s}}\Theta\). If, additionally, there exists a \(\mathbf{t}^{\prime}\) such that \(\{\mathbf{t}^{\prime}\}\subseteq H(\mathbf{p})\) for all \(\mathbf{p}\in\Delta^{m}_{\mathcal{Y}}\), then (2) \(\hat{\Gamma}\equiv_{\mathbf{s}}\Theta\implies\hat{\Gamma}\equiv_{\mathbf{s}}H\)._
Proof.: (1) The first statement is immediate as \(H\equiv_{\mathbf{s}}\hat{\Gamma}\) implies
\[\mathbf{t}\in\arg\,\min_{\mathbf{t}^{\prime}}\mathcal{R}(\mathbf{t}^{\prime} ;\mathbf{p}) \iff\mathbf{t}\in\arg\,\min_{\mathbf{t}^{\prime}}L(\mathbf{t}^{\prime}; \mathbf{p})\]
\[\iff\mathbf{t}\in\arg\,\min_{\mathbf{t}^{\prime}}\lambda\mathcal{R}(\mathbf{t} ^{\prime};\mathbf{p}) \iff\mathbf{t}\in\arg\,\min_{\mathbf{t}^{\prime}}(1-\lambda)L( \mathbf{t}^{\prime};\mathbf{p})\]
\[\implies\mathbf{t}\in\arg\,\min_{\mathbf{t}^{\prime}}\lambda\mathcal{R}( \mathbf{t}^{\prime};\mathbf{p})+(1-\lambda)L(\mathbf{t}^{\prime};\mathbf{p})\]
Now \(\mathbf{t}\in\hat{\Gamma}(\mathbf{p})\implies\mathbf{t}\in\Theta(\mathbf{p})\). If \(\mathbf{t}\in\Theta(\mathbf{p})\), then consider two cases: if \(\mathbf{t}\in H(\mathbf{p})\), we are done by assumption. If \(\mathbf{t}\not\in H(\mathbf{p})\), then there exists a \(\mathbf{t}^{\prime}\in H(\mathbf{p})\cap\hat{\Gamma}(\mathbf{p})\), which implies \(\mathbf{t}^{\prime}\in\Theta(\mathbf{p})\). If \(\mathbf{t}\in\Theta(\mathbf{p})\) as well, it must be the case that \(\mathbf{t}\in H(\mathbf{p})\cap\hat{\Gamma}(\mathbf{p})\).
(2) We show the contrapositive. Suppose \(\Gamma\not\equiv_{\mathbf{s}}H\). Then there exists a \(\mathbf{t}\) such that \(H_{\mathbf{t}}\setminus\hat{\Gamma}_{\mathbf{t}}\neq\emptyset\). In particular, \(H_{\mathbf{t}^{\prime}}\setminus\hat{\Gamma}_{\mathbf{t}^{\prime}}\neq\emptyset\) by nonredundancy (and nontriviality) of \(\Gamma\) and \(H_{\mathbf{t}^{\prime}}\cap\hat{\Gamma}_{\mathbf{t}^{\prime}}=\hat{\Gamma}_ {\mathbf{t}^{\prime}}\neq\emptyset\) since \(H_{\mathbf{t}^{\prime}}=\Delta_{\mathcal{Y}}^{m}\). Moreover, \(H_{\mathbf{t}^{\prime}}\cap\hat{\Gamma}_{\mathbf{t}^{\prime}}\neq\emptyset\) implies \(\Theta_{\mathbf{t}^{\prime}}\cap H_{\mathbf{t}^{\prime}}\neq\emptyset\). We claim \((\Theta_{\mathbf{t}^{\prime}}\cap H_{\mathbf{t}^{\prime}})\setminus\hat{ \Gamma}_{\mathbf{t}^{\prime}}\neq\emptyset\), which implies \(\Theta_{\mathbf{t}^{\prime}}\setminus\hat{\Gamma}_{\mathbf{t}^{\prime}}\neq\emptyset\), yielding \(\Theta\not\equiv_{\mathbf{s}}\Gamma\).
For contradiction, suppose \((\Theta_{\mathbf{t}^{\prime}}\cap H_{\mathbf{t}^{\prime}})\setminus\hat{ \Gamma}_{\mathbf{t}^{\prime}}=\emptyset\). Observe \(\mathbf{t}^{\prime}\in H(\mathbf{p})\cap\Gamma(\mathbf{p})\implies\mathbf{t}^{ \prime}\not\in\Theta(\mathbf{p})\), which cannot be true by the construction of \(\Theta\). Moreover, if \(\mathbf{t}^{\prime}\in H(\mathbf{p})\cap(\Delta_{\mathcal{Y}}^{m}\setminus \hat{\Gamma}(\mathbf{p}))\), then we must have \(\mathbf{t}^{\prime}\not\in\Theta(\mathbf{p})\) by assumption. Either way, \(\mathbf{t}^{\prime}\in H(\mathbf{p})\implies\mathbf{t}^{\prime}\not\in\Theta( \mathbf{p})\). Therefore, we must have \(H_{\mathbf{t}^{\prime}}\cap\Theta_{\mathbf{t}^{\prime}}=\emptyset\), which yields a contradiction.
Intuitively, Theorem 1 says that the property elicited by a regularized loss function is the same as that of the unregularized loss if and only if the regularizer elicits the same property as the loss itself. Since loss functions are measurements of accuracy, equivalence of properties implies that an algorithm values accuracy as fairness. The assumption that one report is always optimal implies the existence of an "unconditionally fair" treatment, which is satisfied by group fairness regularizers such as demographic parity (assigning every agent the same treatment), equalized false positives (assigning everyone the negative treatment), equalized false negatives (assigning everyone the positive treatment), and expected equality of opportunity (assigning the negative treatment).
## 4 (Non)equivalence of common fairness metrics for binary classification
We now evaluate five common fairness regularizers, and apply Theorem 1 to show nonequivalence between binary classification tasks and their regularized counterparts. For each regularizer, we give restrictions on \(\Delta_{\mathcal{Y}}^{m}\) such that the regularized property is equivalent to the original under these restrictions.
To build intuition, we will examine simple cases of how regularizers change elicited properties with populations of \(m=2\) agents belonging to different groups \(\mathbf{s}=(a,b)\).
Figure 1 provides some additional intuition for the proof of Theorem 1. Each subfigure gives the level sets of the property elicited by the mode regularized by the demographic parity violation (DP), where each point in \([0,1]^{2}\) represents \(\mathbf{p}\in\Delta_{\mathcal{Y}}^{2}\) by \((\Pr_{p^{(1)}}[Y=1],\Pr_{p^{(2)}}[Y=1])\). Each colored cell depicts a different level set of a regularized property \(\Theta^{DP,\lambda}\). This regularized property is overlaid on the (unregularized) mode, so that, upon visual inspection, one observes the regions where the two properties differ. As \(\lambda\to 0\), the regularized property becomes increasingly similar to the unregularized, and as \(\lambda\to 1\), the regularized property increasingly resembles the property elicited by \(\mathcal{R}\).
### Demographic parity
In the context of binary classification, one might be interested in regularizing their loss with the demographic parity violation, measured by the absolute difference of the rates at which agents are assigned the positive treatment from each of two groups. Any treatment that assigns the positive treatment at the same rate optimizes the demographic parity regularizer, which is not equivalent to the mode. That is, \(H(\mathbf{s};\mathbf{p})\supseteq\{\mathbf{0},\mathbb{1}\}\) for all \(\mathbf{p}\in\Delta_{\mathcal{Y}}^{m}\) and \(\mathbf{s}\in\mathcal{S}^{m}\). Thus, if \(\mathcal{S}=\{a,b\}\)4, we can apply Theorem 1 to conclude the DP-regularized mode is not equivalent to the unregularized mode.
Footnote 4: This is simply for ease of exposition, and can be relaxed.
\[L^{DP,\lambda}(\mathbf{t};\mathbf{s};\mathbf{p})=\frac{1-\lambda}{m}\sum_{i=1} ^{m}L(t^{(i)};p^{(i)})+\lambda\left|\frac{1}{n_{a}}\sum_{i:s^{(i)}=a}t^{(i)}- \frac{1}{n_{b}}\sum_{i:s^{(i)}=b}t^{(i)}\right|\] (DP)
Now, with \(\mathcal{T}=\{0,1\}\), if \(L\) is the 0-1 loss5, we can evaluate \(L^{DP,\lambda}\) for each treatment in \(\mathcal{T}^{2}=\{(1,1),(0,1),(1,0),(0,0)\}\).
Footnote 5: These derivations also hold if \(L\) is squared loss, hinge loss, and many other losses for binary classification.
\[L^{DP,\lambda}((1,1);(p^{(1)},p^{(2)})) =\frac{1-\lambda}{2}\left[(1-p^{(1)})+(1-p^{(2)})\right]\] \[L^{DP,\lambda}((0,1);(p^{(1)},p^{(2)})) =\frac{1-\lambda}{2}\left[p^{(1)}+(1-p^{(2)})\right]+\lambda\] \[L^{DP,\lambda}((1,0);(p^{(1)},p^{(2)})) =\frac{1-\lambda}{2}\left[(1-p^{(1)})+p^{(2)}\right]+\lambda\] \[L^{DP,\lambda}((0,0);(p^{(1)},p^{(2)})) =\frac{1-\lambda}{2}\left[p^{(1)}+p^{(2)}\right].\]
These expected losses now enable us to study the level sets \(\Theta_{\mathbf{t};\mathbf{s}}^{DP,\lambda}=\{\mathbf{p}\in\Delta_{ \mathcal{Y}}^{m}\mid\mathbf{t}\in\Theta^{DP,\lambda}(\mathbf{s};\mathbf{p})\}\). Abusing notation, we use \(\Theta_{\mathbf{t}}^{DP,\lambda}\) understanding \(\mathbf{s}=(a,b)\).
We have \((0,0)\in\arg\min_{\mathbf{t}\in\mathcal{T}^{2}}L^{DP,\lambda}(\mathbf{t};\mathbf{p})\) if
\[\frac{1-\lambda}{2}\left[(1-p^{(1)})+(1-p^{(2)})\right] \leq\frac{1-\lambda}{2}\left[p^{(1)}+(1-p^{(2)})\right]+\lambda\] \[\iff\frac{1-3\lambda}{2(1-\lambda)} \leq p^{(1)}\] \[\frac{1-\lambda}{2}\left[(1-p^{(1)})+(1-p^{(2)})\right] \leq\frac{1-\lambda}{2}\left[p^{(2)}+(1-p^{(1)})\right]+\lambda\] \[\iff\frac{1-3\lambda}{2(1-\lambda)} \leq p^{(2)}\] \[\frac{1-\lambda}{2}\left[(1-p^{(1)})+(1-p^{(2)})\right] \leq\frac{1-\lambda}{2}\left[p^{(1)}+p^{(2)}\right]\] \[\iff p^{(1)}+p^{(2)} \leq 1\.\]
Therefore, the level set \(\Theta^{DP,\lambda}_{(0,0)}\) can be described by the polyhedron
\[\Theta^{DP,\lambda}_{(0,0)}=\left\{p\in[0,1]^{2}\mid\begin{bmatrix}0&-1&\frac{1-3 \lambda}{2(1-\lambda)}\\ -1&0&\frac{1-3\lambda}{2(1-\lambda)}\\ 1&-1&1\end{bmatrix}\begin{bmatrix}p^{(1)}\\ p^{(2)}\\ 1\end{bmatrix}\geq\mathbf{0}\right\}\.\]
Observe that the final constraint is actually one on the marginal \(P[Y]\): the expected outcome over the whole population should be less likely to be \(1\) than \(0\). We can evaluate the rest of the level sets in a similar manner.
Now let us gain some geometric intuition for how these level sets change by referencing Figure 1. For two agents belonging to different groups, each point in the figure represents a pair \(\mathbf{p}:=(p^{(1)},p^{(2)})\) of true probabilities for the two agents. The pair \(\mathbf{p}\in[0,1]^{2}\), and the region \([0,1]^{2}\) can be divided into up to \(|\mathcal{T}^{m}|\) regions for which each \(\mathbf{t}\in\mathcal{T}^{m}\) is contained in \(\Theta^{DP,\lambda}(\mathbf{p})\). The sequence of figures in Figure 1 denotes the level sets of \(\Theta^{DP,\lambda}\) as one varies \(\lambda\in[0,1]\). For intuition, one can observe that the regions where the players receive the same treatment (blue and red) grow as \(\lambda\) increases, starting with \(1/2\) of the \([0,1]^{2}\) space, and increasing to all of \([0,1]^{2}\) as \(\lambda\to 1\).
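The level-set picture in Figure 1 can be reproduced numerically; the sketch below (our own illustration, not the authors' plotting code) sweeps \((p^{(1)},p^{(2)})\) over a grid and records which joint treatment minimizes \(L^{DP,\lambda}\) for \(m=2\) agents in groups \((a,b)\).

```python
import numpy as np

def dp_objective(t, p, lam):
    # m = 2 with one agent per group, so the DP violation is simply |t1 - t2|.
    exp_loss = 0.5 * ((p[0] if t[0] == 0 else 1 - p[0]) +
                      (p[1] if t[1] == 0 else 1 - p[1]))
    return (1 - lam) * exp_loss + lam * abs(t[0] - t[1])

def level_sets(lam, n=101):
    grid = np.linspace(0, 1, n)
    treatments = [(0, 0), (0, 1), (1, 0), (1, 1)]
    cell = np.empty((n, n), dtype=int)
    for i, p1 in enumerate(grid):
        for j, p2 in enumerate(grid):
            vals = [dp_objective(t, (p1, p2), lam) for t in treatments]
            cell[i, j] = int(np.argmin(vals))  # index of an optimal treatment
    return cell

# As lam grows, the cells where both agents receive the same treatment expand.
for lam in (0.0, 0.15, 0.5):
    c = level_sets(lam)
    uniform = np.isin(c, [0, 3]).mean()  # indices of (0, 0) and (1, 1)
    print(f"lam={lam:.2f}: fraction of the grid with uniform treatment ~ {uniform:.2f}")
```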
We now turn our attention towards the regions of \(\Delta^{m}_{\mathcal{Y}}\) where the regularized and unregularized properties are equivalent with a demographic parity regularizer. First, we observe that if uniform treatment of a population is optimal on the unregularized property, it is also optimal on the regularized property, which is actually a corollary of Theorem 1.
**Proposition 1**.: _Fix \(\lambda\in(0,1)\). Let \(L\) elicit \(\Gamma\), \(L^{\mathcal{R},\lambda}\) elicit \(\Theta\), and \(\mathcal{R}\) elicit \(H\). For all \(t\in\mathcal{T}^{m}\) and \(\mathbf{s}\in\mathcal{S}^{m}\), \(\hat{\Gamma}_{\mathbf{t}}\cap H_{\mathbf{t};\mathbf{s}}\subseteq\Theta_{t}\)._
Proof.: \(\mathbf{p}\in\hat{\Gamma}_{\mathbf{t}}\cap H_{\mathbf{t};\mathbf{s}}\implies L( \mathbf{t};\mathbf{p})\leq L(\mathbf{t}^{\prime};\mathbf{p})\) and \(\mathcal{R}(\mathbf{t};\mathbf{s};\mathbf{p})\leq\mathcal{R}(\mathbf{t}^{ \prime};\mathbf{s};\mathbf{p})\) for all \(t^{\prime}\in\mathcal{T}^{m}\), which in turn implies \(L(\mathbf{t};\mathbf{p})+\mathcal{R}(\mathbf{t};\mathbf{s};\mathbf{p})\leq L (\mathbf{t}^{\prime};\mathbf{p})+\mathcal{R}(\mathbf{t}^{\prime};\mathbf{s}; \mathbf{p})\implies(1-\lambda)L(\mathbf{t};\mathbf{p})+\lambda\mathcal{R}( \mathbf{t};\mathbf{s};\mathbf{p})\leq(1-\lambda)L(\mathbf{t}^{\prime};\mathbf{ p})+\lambda\mathcal{R}(\mathbf{t}^{\prime};\mathbf{s};\mathbf{p})\) for all \(t^{\prime}\in\mathcal{T}^{m}\).
Figure 1: Visualizing the level sets of the \(DP\)-regularized property \(\Theta^{DP,\lambda}\) for different values of \(\lambda\in[0,1]\), where \(m=2\) and \(\mathbf{s}=(a,b)\). Each point \((p^{(1)},p^{(2)})\) in a square represents \((\Pr_{p^{(1)}}[Y=1],\Pr_{p^{(2)}}[Y=1])\), and each colored cell represents sets of \((p^{(1)},p^{(2)})\) pairs such that the optimal treatment is the same for all points in the cell. For example, the magenta cell (lower right) is the set of distributions where the decision-maker prefers to attribute the positive treatment (\(t^{(1)}=1\)) to the first agent, and the negative treatment (\(t^{(2)}=0\)) to the second agent.
We apply this result to the "universally fair" reports via demographic parity \(\mathbf{0}\) and \(\mathbb{1}\).
**Corollary 1**.: _Fix \(\mathbf{s}\in\mathcal{S}^{m}\) and \(\lambda\in[0,1]\). Let \(L\) elicit \(\Gamma\) and \(L^{DP,\lambda}\) elicit \(\Theta\). \(\hat{\Gamma}_{\mathbf{0}}\subseteq\Theta_{\mathbf{0};\mathbf{s}}\). Moreover, \(\hat{\Gamma}_{\mathbb{1}}\subseteq\Theta_{\mathbb{1};\mathbf{s}}\)._
Proof.: Let \(H\) denote the property elicited by the DP regularizer. For all \(\mathbf{p}\in\Delta_{\mathcal{Y}}^{m}\), we have \(\{\mathbf{0},\mathbf{1}\}\subseteq H(\mathbf{p})\). Therefore, \(\hat{\Gamma}_{\mathbf{0}}\cap H_{\mathbf{0}}=\hat{\Gamma}_{\mathbf{0}}\) (and similarly with \(\hat{\Gamma}_{\mathbb{1}}\cap H_{\mathbb{1}}\)). Therefore, \(\hat{\Gamma}_{\mathbf{0}}=\hat{\Gamma}_{\mathbf{0}}\cap H_{\mathbf{0}}\subseteq\Theta_{\mathbf{0}}\) and \(\hat{\Gamma}_{\mathbb{1}}=\hat{\Gamma}_{\mathbb{1}}\cap H_{\mathbb{1}}\subseteq\Theta_{\mathbb{1}}\) by Proposition 1.
We now turn our attention to the opposite case: if, while regularized, treating different groups differently (and uniformly within the groups) is optimal, then it is also optimal in the unregularized setting. In particular, this holds for treatments maximizing \(\mathcal{R}\).
**Proposition 2**.: _Fix \(\mathbf{s}\in\{a,b\}^{m}\) and \(\lambda\in[0,1]\). Fix \(\mathbf{t}=\mathbb{1}_{a}\) (or \(\mathbb{1}_{b}\) without loss of generality). Let \(L\) elicit \(\Gamma\) over outcomes \(\mathcal{Y}=\{0,1\}\). \(\Theta_{\mathbf{t};\mathbf{s}}^{DP,\lambda}\subseteq\hat{\Gamma}_{\mathbf{t}}\)._
Proof.: With \(\mathbf{s}\) fixed, \(t\in\arg\max_{\mathbf{t}^{\prime}}DP(\mathbf{t}^{\prime};\mathbf{p})\) for all \(\mathbf{p}\in\Delta_{\mathcal{Y}}^{m}\). Therefore,
\[(1-\lambda)L(\mathbf{t};\mathbf{p})+\lambda DP(\mathbf{t};\mathbf{ p}) \leq(1-\lambda)L(\mathbf{t}^{\prime};\mathbf{p})+\lambda DP(\mathbf{t}^{\prime}; \mathbf{p})\forall\mathbf{t}^{\prime}\] \[\implies(1-\lambda)L(\mathbf{t};\mathbf{p}) \leq(1-\lambda)L(\mathbf{t}^{\prime};\mathbf{p})\qquad\forall \mathbf{t}^{\prime}\,\]
which implies the result.
With that, we partially characterize the relationship between the unregularized and DP-regularized level sets for standard binary classification. In the simple case with \(m=2\) agents, this characterization is complete: if the optimal treatment is uniform, it stays uniform. Moreover, if the most "unfair" treatment, wherein all members of one group receive the positive treatment and no member of the other group does, is optimal in the regularized setting, it is also optimal in the unregularized setting. In any other setting, the optimal treatment changes with the addition of a DP regularizer.
### Equalized FPR
Following a similar process to SS 4.1, we now consider the regularizer that measures the absolute difference of false positive rates across groups, where the false positive rate is given by \(FPR_{g}(\mathbf{t};\mathbf{s};\mathbf{p})=\Pr[Y^{(i)}=0\mid t^{(i)}=1,s^{(i)} =g]=\frac{1}{|(i:t^{(i)}=1,s^{(i)}=g)|}\sum_{i:s^{(i)}=g,t^{(i)}=1}(1-p^{(i)})\). The optimization problem then becomes
\[L^{FPR,\lambda}(\mathbf{t};\mathbf{s};\mathbf{p})=\frac{1-\lambda}{m}\sum_{i} L(t^{(i)};p^{(i)})+\lambda\left|FPR_{a}(\mathbf{t};\mathbf{s};\mathbf{p})-FPR_{b}( \mathbf{t};\mathbf{s};\mathbf{p})\right|\] (FPR)
The FPR regularizer computes the difference of false positive rates between groups, so one can observe that the false positive rate of a group is reduced by assigning more negative treatments \(t^{(i)}=0\). We can see in Figure 2 that the FPR regularizer then makes it worse for an algorithm to assign the positive treatment to an agent \(i\) even if \(p^{(i)}\) is slightly greater than \(1/2\), as marked by the \(\star\) in Figure 2(R).
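For concreteness, a small sketch (ours, not the paper's code) of the group false positive rates and the (FPR) objective as written above; treating a group with no positively-treated members as contributing an FPR of zero is our assumption, since the text leaves this case implicit.

```python
import numpy as np

def group_fpr(t, s, p, g):
    """FPR_g: average of (1 - p_i) over agents in group g assigned t_i = 1."""
    idx = [i for i in range(len(t)) if s[i] == g and t[i] == 1]
    if not idx:                      # assumption: empty set contributes 0
        return 0.0
    return float(np.mean([1 - p[i] for i in idx]))

def fpr_objective(t, s, p, lam):
    exp_loss = np.mean([p_i if t_i == 0 else 1 - p_i for t_i, p_i in zip(t, p)])
    penalty = abs(group_fpr(t, s, p, 'a') - group_fpr(t, s, p, 'b'))
    return (1 - lam) * exp_loss + lam * penalty

# The second agent has p slightly above 1/2; the penalty flips its optimal
# treatment from positive (unregularized) to negative.
s, p, lam = ('a', 'b'), (0.2, 0.55), 0.15
for t in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(t, round(fpr_objective(t, s, p, lam), 4))
```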
As in SS 4.1, we can apply Proposition 1 to show that if assigning everyone the negative treatment is optimal in the unregularized setting, it is also the optimal treatment with the FPR regularizer.
**Corollary 2**.: _Fix \(\mathbf{s}\in\mathcal{S}^{m}\). Let \(L\) elicit \(\Gamma\) and \(L^{FPR,\lambda}\) elicit \(\Theta^{FPR,\lambda}\). \(\hat{\Gamma}_{\mathbf{0}}\subseteq\Theta^{FPR,\lambda}_{\mathbf{0};\mathbf{s}}\)._
Proof.: For all \(\mathbf{p}\in\Delta^{m}_{\mathcal{Y}}\), we have \(\mathbf{0}\in H(p)\). Therefore \(\hat{\Gamma}_{0}=\hat{\Gamma}_{0}\cap H_{\mathbf{0};\mathbf{s}}\subseteq \Theta_{\mathbf{0}}\) by Proposition 1.
### Expected equality of opportunity
While standard equality of opportunity (cf. [15]) requires access to observed labels, we are interested in equality of opportunity in expectation, and consider a variant that does not require access to labels proposed by Blandin and Kash [5]. Consider the treatment space \(\mathcal{T}=\{0,1\}^{m}\) and regularizer \(\mathcal{R}(\mathbf{t};\mathbf{s};\mathbf{p})=|EEO_{a}(\mathbf{t};\mathbf{s}; \mathbf{p})-EEO_{b}(\mathbf{t};\mathbf{s};\mathbf{p})|\), where
\[EEO_{g}(\mathbf{t};\mathbf{s};\mathbf{p};g) =\Pr_{i\sim[m]}[t^{(i)}=1\mid y^{(i)}=1,s^{(i)}=g]\] \[=\frac{\Pr[Y^{(i)}=1\mid t^{(i)}=1,s^{(i)}=g]\Pr[t^{(i)}=1]}{\Pr[ Y^{(i)}=1]}\] \[=\frac{\left(\frac{1}{|\{i:s^{(i)}=g,t^{(i)}=1\}|}\sum_{i:t^{(i)} =1,s^{(i)}=g}(p^{(i)})\right)(\sum_{i}t^{(i)})}{\sum_{i}p^{(i)}}\.\]
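The displayed expression can be transcribed directly into code (our own sketch; setting the contribution of a group with no positively-treated members to zero is an assumption the text does not spell out):

```python
import numpy as np

def eeo_g(t, s, p, g):
    """EEO_g as displayed above: the mean of p over {i : s_i = g, t_i = 1},
    scaled by sum_i t_i and divided by sum_i p_i."""
    pos_g = [p[i] for i in range(len(t)) if s[i] == g and t[i] == 1]
    if not pos_g:                       # assumption: treat the empty set as 0
        return 0.0
    return float(np.mean(pos_g) * sum(t) / sum(p))

def eeo_penalty(t, s, p):
    return abs(eeo_g(t, s, p, 'a') - eeo_g(t, s, p, 'b'))

# The all-zero treatment is "universally fair": both terms vanish.
print(eeo_penalty((0, 0), ('a', 'b'), (0.3, 0.7)))   # 0.0
print(eeo_penalty((1, 1), ('a', 'b'), (0.3, 0.7)))   # |0.6 - 1.4| = 0.8
```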
In order to apply Theorem 1, we can observe that \(\mathbf{0}\) is a "universally fair" treatment, as \(EEO_{a}=EEO_{b}=0\), regardless of \(\mathbf{p}\) and \(\mathbf{s}\), and we observe nonequivalence. Moreover, we can apply Proposition 1 to show that uniform treatment being optimal in the unregularized case implies it is also optimal with the EEO regularizer as well.
**Corollary 3**.: _Fix \(\mathbf{s}\in\mathcal{S}^{m}\), and let \(L\) elicit \(\Gamma\) over outcomes \(\mathcal{Y}=\{0,1\}\) and \(L^{EEO,\lambda}\) elicit \(\Theta^{EEO,\lambda}\). \(\hat{\Gamma}_{\mathbf{0}}\subseteq\Theta^{EEO,\lambda}_{\mathbf{0}}\)._
Figure 2: Visualizing the level sets of the \(FPR\)-regularized property \(\Theta^{FPR,\lambda}\) for different values of \(\lambda\in[0,1]\), where \(m=2\) and \(s=(a,b)\). Each point \((p^{(1)},p^{(2)})\) in a square represents \((\Pr_{p^{(1)}}[Y=1],\Pr_{p^{(2)}}[Y=1])\), and each colored cell represents sets of \((p^{(1)},p^{(2)})\) pairs such that the optimal treatment is the same for all points in the cell. For example, the magenta cell (lower right) is the set of distributions where the decision-maker prefers to attribute the positive treatment (\(t^{(1)}=1\)) to the first, and the negative treatment (\(t^{(2)}=0\)) to the second agent.
### Equivalent regularizers
In the previous section, we use Theorem 1 to show the nonequivalence of regularized properties, and examine a few common regularizers to show some restrictions that recover equivalence under certain distributional assumptions on the outcomes. Turning our attention to the converse of Theorem 1, we examine two regularizers that elicit the mode, and thus the regularized property is equivalent to the unregularized on all of \(\Delta_{\mathcal{Y}}^{m}\): calibration [27] and bounded group loss [1]. In some sense, this suggests that these regularizers value accuracy as fairness. If models are as accurate as they could possibly be, the most "fair" treatments to assign are also the most accurate. In practice, the regularizers mitigate unfairness arising from limited expressivity of the model: if the model was perfectly expressive and could predict the mode perfectly, it would assign the same treatments even with heavy penalties for "unfairness."
**Calibration.** Calibration constraints ensure that the predicted value \(t^{(i)}\) most closely lines up with the true probability \(p^{(i)}\), regularizing the loss by the sums of the absolute differences \(|t^{(i)}-p^{(i)}|\). The absolute difference elicits the \(1/2\)-quantile, which is also the mode on \(\mathcal{Y}=\{0,1\}\), so the regularizer \(\mathcal{R}(\mathbf{t};\mathbf{s};\mathbf{p})=\sum_{g}\frac{1}{n_{g}}\sum_{i:s^{(i)}=g}|t^{(i)}-p^{(i)}|\) elicits the mode in binary classification problems.
Formally, consider the objective
\[L^{Cal,\lambda}(\mathbf{t};\mathbf{s};\mathbf{p})=\frac{1-\lambda}{m}\sum_{i }L(t^{(i)};p^{(i)})+\lambda\sum_{g}\frac{1}{n_{g}}\sum_{i:s^{(i)}=g}|t^{(i)}- p^{(i)}|\] (Cal)
This constraint does not include any comparisons across group averages, so the optimal report is obtained by giving individual predictions. In binary classification, the \(1/2\)-quantile is the same as the mode, so the property is given by \(\Theta^{Cal,\lambda}(\mathbf{s};\mathbf{p})=\text{mode}(\mathbf{p})\).
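The claim can be checked directly for binary treatments; the short sketch below (ours) confirms that adding the calibration penalty \(|t-p|\) never changes the value of the minimizer of the expected 0-1 loss, for any \(\lambda\in[0,1]\) and any \(p\).

```python
import numpy as np

def cal_objective(t, p, lam):
    exp_loss = p if t == 0 else 1 - p          # expected 0-1 loss for Pr[Y=1] = p
    return (1 - lam) * exp_loss + lam * abs(t - p)

agree = True
for p in np.linspace(0, 1, 501):
    for lam in np.linspace(0, 1, 101):
        reg = min((0, 1), key=lambda t: cal_objective(t, p, lam))
        unreg = 0 if p < 0.5 else 1            # mode, breaking the tie towards 1
        if not np.isclose(cal_objective(reg, p, lam), cal_objective(unreg, p, lam)):
            agree = False
print("calibration-regularized minimizer matches the mode:", agree)
```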
**Bounded group loss.** We now consider the constraint on bounded group loss: \(\mathbb{E}_{Y|S=s}L(r,Y)<\epsilon\) for all \(s\in\mathcal{S}\), introduced by Agarwal et al. [1]. To model bounded group loss as a soft constraint, we simply weigh the expected loss conditioned on the group size as a regularizer, so accuracy is more incentivized on small groups.
\[L^{BGL,\lambda}(\mathbf{t};\mathbf{s};\mathbf{p})=\frac{1-\lambda}{m}\sum_{i }L(t^{(i)};p^{(i)})+\sum_{g}\frac{\lambda}{n_{g}}\sum_{i:s^{(i)}=g}L(t^{(i)};p ^{(i)})\]
Figure 3: Level sets of the EEO-regularized mode on \(\mathcal{Y}=\{0,1\}\).
Adding this constraint as a fairness regularizer does not change the property elicited (e.g., \(\Theta^{\lambda}(\mathbf{s};\mathbf{p})=\hat{\Gamma}(\mathbf{p})\) for all \(p\in\Delta^{m}_{\mathcal{Y}}\)). In part this is because it still encourages the hypothesis to learn what is best for each individual in the population, whereas other constraints add a regularizer that compares the deviation between two groups.
**Corollary 4**.: _Let \(L\) elicit \(\Gamma\). \(\hat{\Gamma}\equiv_{\mathbf{s}}\Theta^{\mathit{BGL},\lambda}\) for all \(\mathbf{s}\in\mathcal{S}^{m}\) and \(\lambda\in[0,1]\)._
Proof.: The regularizer \(\mathcal{R}(\mathbf{t};\mathbf{s};\mathbf{p}):=\sum_{g}\frac{1}{n_{g}}\sum_{ i:s^{(i)}=g}L(t^{(i)};p^{(i)})\) is additive in \(\mathbf{t}\), and elicits the same property as \(L\) since it is simply a reweighing of \(L\).
## 5 Experiments
While property elicitation allows us to reason about what treatment an algorithm _should_ assign, we examine whether or not these decisions are consistent with the treatments assigned by algorithms in practice with simple models. We first generate a set of synthetic datasets to understand how a classifier's decisions change as one navigates the space of data distributions. Moving through this space demonstrates the relationship between loss and regularizer in the synthetic setting as the data distribution changes in \(\Delta^{m}_{\mathcal{Y}}\). We then evaluate the effect of the regularizer weight \(\lambda\) on treatment assignment in cardiovascular disease risk prediction [28] and lending [18] datasets, where the data distribution is fixed. In both settings, we train a linear classifier over 30 trials with binary cross entropy loss with (a) no regularizer, (b) demographic parity difference, (c) false positive rate difference, (d) false negative rate difference, or (e) equality of opportunity difference, and compute the fairness violations of the classifier trained on each of these losses, where elicited property values are shown in Figure 9.
### Effect of the data distribution
Recall that we applied Theorem 1 and its intuition in Figures 1, 2, and 3 to conclude the mode is not equivalent to \(\Theta^{\mathcal{R},\lambda}\) for various regularizers including (DP), (FPR), False Negative Rates (in SS A), and Expected Equality of Opportunity. However, the equivalence of regularized properties and their unregularized counterparts is a rather strong condition, as pointwise equivalence must hold for _every_ set of data distributions. In practice, the true data distribution may be somewhere in the space of distributions where the property value does not change for the chosen value of \(\lambda\). With the knowledge that equivalent distributions have no endogenous differences in hand, we generate a set of synthetic distributions to understand tradeoffs to regularizers as we move through the space of data distributions.
We generate synthetic datasets for binary classification as follows: there are two groups, \(\mathcal{S}=\{a,b\}\) with \(\Pr[a]=\Pr[b]=1/2\), and a member of group \(g\) has \(\Pr[Y=1\mid S=g]=p_{g}\in[0,1]\). Each set of agents is represented by \(x=\{p_{a},p_{b},r_{1},\ldots,r_{k}\}\), where \(r_{1},\ldots,r_{k}\) are uniformly random values in \([-1,1]\). We then train a linear classifier via stochastic gradient descent (30 trials with learning rate = 0.001, 1500 epochs, 10000 \((p_{a},p_{b})\) pairs, \(k=3\)) that minimizes the binary cross entropy loss regularized by either demographic parity, false positive rate, false negative rate, or difference in equality of opportunity with \(\lambda=0.15\). The simplicity of features is intentional: the "perfect" decision should be fully
realizable in the unregularized setting, so the benchmark accuracy should be relatively high. Fixing the probability for a positive outcome \(p_{a}=0.3\) for a member of group \(a\), we vary the probability of a positive outcome \(p_{b}\) for a member of group \(b\) to observe how fairness violations change as the underlying data distribution changes. For intuition, by design of the datasets, we reason about the "average member" of the population and reference the level sets drawn in Figure 6. Fixing \(p_{a}\) and varying \(p_{b}\) can be thought of as understanding what happens in decision making as one moves vertically up the line \(\{(0.3,p_{b})\mid p_{b}\in[0,1]\}\), denoted by the black dashed lines in Figure 6. For the level sets of \(\Theta^{DP,0.15}\), \(\Theta^{FPR,0.15}\), and \(\Theta^{EEO,0.15}\), this approximately suggests that the algorithm should predict negatively for members of both groups until \(p_{b}\geq 2/3\), at which point it should predict positively for the second agent in group \(b\). However, for \(\Theta^{FNR,0.15}\), one should always give a member of group \(a\) the negative treatment in expectation, while giving a member of group \(b\) the positive treatment as long as \(p_{b}\geq 1/2+\epsilon\), marking its similarity to the unregularized property on this line.
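The training setup can be sketched as follows (our reconstruction, not the authors' code: the exact feature construction and batching may differ from the paper's, and PyTorch is an assumed choice of framework): a linear classifier trained with binary cross entropy plus a differentiable demographic-parity penalty.

```python
import torch

def make_dataset(pa, pb, n=10_000, k=3, seed=0):
    g = torch.Generator().manual_seed(seed)
    s = (torch.rand(n, generator=g) < 0.5).float()        # 0 -> group a, 1 -> group b
    p_true = pa + (pb - pa) * s
    y = (torch.rand(n, generator=g) < p_true).float()
    noise = 2 * torch.rand(n, k, generator=g) - 1          # r_1, ..., r_k uniform in [-1, 1]
    x = torch.cat([torch.full((n, 1), pa), torch.full((n, 1), pb), noise], dim=1)
    return x, s, y

def train(pa, pb, lam=0.15, epochs=1500, lr=1e-3):
    x, s, y = make_dataset(pa, pb)
    model = torch.nn.Linear(x.shape[1], 1)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    bce = torch.nn.BCEWithLogitsLoss()
    for _ in range(epochs):
        opt.zero_grad()
        logits = model(x).squeeze(1)
        prob = torch.sigmoid(logits)
        dp = (prob[s == 0].mean() - prob[s == 1].mean()).abs()   # soft DP violation
        loss = (1 - lam) * bce(logits, y) + lam * dp
        loss.backward()
        opt.step()
    return model

model = train(pa=0.3, pb=0.7)
```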
In Figure 4(TL), we observe approximately equal accuracy of all models regardless of the choice of regularizer. Moreover, most fairness-regularized models tended to yield lower DP violations than the unregularized models, and in Figure 5(L) one observes this gap is larger as \(p_{b}\) increases over \(1/2\), in line with the intuition provided by Figure 6. Similar trends follow with FPR and EEO regularized models, but the difference is least salient with FNR regularized models. This is unsurprising given the similarity between the mode and \(\Theta^{FNR,0.15}\) along the line \(p_{a}=0.3\) mentioned above and demonstrated in Figure 6.
Figure 4: Regularizer values with synthetic data generated via \(\Pr[Y=1\mid g=a]=0.3\) and \(\Pr[Y=1\mid g=b]\) on the horizontal axis.
Figure 5: Regularizer values with synthetic data generated via \(\Pr[Y=1\mid g=a]=0.3\) and \(\Pr[Y=1\mid g=b]\) on the horizontal axis.
Figure 6: Fixing \(p_{a}=0.3\), examining how the property value changes as a function of \(p_{b}\) for different regularizers. Demographic parity results in different decisions only if \(p_{b}\in[1/2,3/4]\), FPR if \(p_{b}\in[1/2,2/3]\), FNR has essentially the same property values on the line \(p_{a}=0.3\), and EEO leads to a small region where optimal decisions change for \(p_{b}\in[1/2,2/3]\).
### The effect of choice of \(\lambda\)
In contrast to the interpretation of the experiments in SS 5.1, to gain intuition for why decisions might change as a function of \(\lambda\), we now consider each dataset as representing a \((p_{a},p_{b})\) point in one of Figures 1-3, and consider how the level set it belongs to changes as \(\lambda\) changes. We examine two datasets, German lending [18] and heart disease risk prediction [28]. For both datasets, we train 30 linear models with 1500 epochs and a learning rate of 0.001.
**German lending.** In the German lending dataset, we treat age as the sensitive attribute, using an indicator thresholded at 25 years old. On the entire dataset, we have \(\Pr[Y=1\mid S\geq 25]=0.728\) and \(\Pr[Y=1\mid S<25]=0.578\), and an unbalanced group representation with \(\Pr[S<25]=0.191\).
Perhaps surprisingly, we observe little impact of the choice of \(\lambda\): in fact, for large values of \(\lambda\), the regularized model seems to yield _less fair_ treatments on average, demonstrated in Figure 7. Upon closer inspection, this can be explained partly by the observation that \((p_{a},p_{b})=(0.728,0.578)\): a distribution that warrants treating the "average member" of each subpopulation the same, which aligns with most fairness regularizers. This is demonstrated in Figure 9, where the \((p_{a},p_{b})\) coordinate is denoted by a \(g\), for German.
**Heart disease risk.** In the heart disease risk prediction dataset, we treat sex as the sensitive attribute, and observe that \(\Pr[Y=1\mid S=0]=0.75\) and \(\Pr[Y=1\mid S=1]=0.449\) yield a \((p_{a},p_{b})\) pair warranting different treatments for the "average" member of each group, with \(\Pr[S=1]=0.63\) giving a more attribute-balanced dataset. How the optimal treatment of the "average" member of both groups changes as \(\lambda\) changes can be seen in Figure 9, denoted by \(h\).
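To connect these dataset-level summaries to Figure 9, the sketch below (ours) asks, for the "average members" \((p_{a},p_{b})\) quoted above, at which regularization weight the unregularized optimum first loses optimality under the DP-regularized 0-1 objective for two representative agents, one per group.

```python
import numpy as np

def dp_obj(t, pa, pb, lam):
    exp_loss = 0.5 * ((pa if t[0] == 0 else 1 - pa) + (pb if t[1] == 0 else 1 - pb))
    return (1 - lam) * exp_loss + lam * abs(t[0] - t[1])

treatments = [(0, 0), (0, 1), (1, 0), (1, 1)]

for name, (pa, pb) in {"German (g)": (0.728, 0.578), "Heart (h)": (0.75, 0.449)}.items():
    t0 = min(treatments, key=lambda t: dp_obj(t, pa, pb, 0.0))   # unregularized optimum
    changed = [lam for lam in np.linspace(0.0, 1.0, 201)
               if dp_obj(t0, pa, pb, lam) >
               min(dp_obj(t, pa, pb, lam) for t in treatments) + 1e-12]
    first = round(changed[0], 3) if changed else None
    print(name, "unregularized optimum:", t0, "| loses optimality at lambda ~", first)
```

For the German pair the unregularized optimum is already uniform, so it remains optimal for every \(\lambda\), whereas the heart-disease pair changes for a fairly small \(\lambda\).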
Figure 8: Distributions of accuracy and fairness violations in lending data. In general, it seems the models are tending to make similar predictions, with often nearly equal medians.
Figure 7: Effect of \(\lambda\) on regularizer values on the German lending dataset [18]. Because the \((p_{a},p_{b})\) point summarizing group differences in the dataset lies at a point where regularized decisions are the same as unregularized decisions, it is unsurprising that regularizers do not significantly reduce unfairness, regardless of \(\lambda\).
Figure 9: The level sets of different regularized properties as \(\lambda\) changes. (Top to bottom: DP, FPR, FNR, EEO). The \(g\) represents the “average” members of each group in the German lending dataset, and \(h\) the heart disease risk dataset.
Figure 10 demonstrates little significant difference between the unregularized and regularized models. On average, the models perform about equally in terms of fairness violations (Figure 11), but the regularized models have a higher variance in their fairness violations, probably because of limited expressivity of the hypothesis class.
## 6 Discussion and Conclusion
In this work, we extend the notion of property elicitation to consider regularized loss functions, and give a necessary and sufficient condition on a regularizer to be equivalent to the original property. We apply this condition to demonstrate the (non-)equivalence of properties with a handful of regularizers common in the fair machine learning literature. Finally, we show how the choice and weight of regularization function can change decision-making on synthetic data as well as the German lending and heart disease risk datasets.
**Limitations and considerations.** The main intent of this work is to provide conceptual insight about how fairness regularizers change algorithmic decision-making and predictions. The insights provided rely on the hypothesis class being sufficiently expressive, and should not be solely used to justify the use of a regularizer. The addition of a regularizer and insights given are agnostic to the data itself and therefore agnostic to pre-processing and post-processing of data. Additional pre- or post-processing of the data may change the elicited property, though we leave this to future work.
**Future work.** There are many directions for future work. This work serves as a proof of concept for the extension of property elicitation to accommodate regularization functions, demonstrated on a handful of regularizers, but applying the necessary and sufficient condition on the equivalence of properties under different regularizers and more general prediction
Figure 11: Distributions of accuracy and fairness violations in heart disease data. In general, it seems the models are tending to make similar predictions, with often nearly equal medians.
Figure 10: Effect of \(\lambda\) on regularizer values on the heart disease risk dataset [28].
tasks remains an open direction of work. Moreover, it is important to understand how model complexity as well as pre- and post-processing of data can affect results. Finally, the addition of a regularizer seems to increase the complexity of the optimization problem linearly in \(m\). Understanding if there are more efficient ways to frame the optimization problem for certain regularization functions or data distributions also remains an open line of work.
### Acknowledgements
This material is based upon work supported by the National Science Foundation under Award No. 2202898. Thanks to Yiling Chen, Francisco Marmolejo Cossio, Esther Rolf, and Arpita Biswas for feedback and comments, as well as participants in the EC Gender Inclusion Workshop.
|
2309.05336 | Vacuum Static Spherically Symmetric Spacetimes in Harada's Theory | Very recently Harada proposed a gravitational theory which is of third order
in the derivatives of the metric tensor with the property that any solution of
Einstein's field equations (EFEs) possibly with a cosmological constant is
necessarily a solution of the new theory. He then applied his theory to derive
a second-order ODE for the evolution of the scale factor of the FLRW metric.
Remarkably he showed that, even in a matter-dominated universe with zero
cosmological constant, there is a late-time transition from decelerating to
accelerating expansion. Harada also derived a generalisation of the
Schwarzschild solution. However, as his starting point he assumed an
unnecessarily restricted form for a static spherically symmetric metric. In
this note the most general spherically symmetric static vacuum solution of the
theory is derived.
Mantica and Molinari have shown that Harada's theory may be recast into the
form of the EFEs with an additional source term in the form of a second-order
conformal Killing tensor(CKT). Accordingly they have dubbed the theory
conformal Killing gravity. Then, using a result in a previous paper of theirs
on CKTs in generalised Robertson-Walker spacetimes, they rederived Harada's
generalised evolution equation for the scale factor of the FLRW metric.
However, Mantica and Molinari appear to have overlooked the fact that all
solutions of the new theory (except those satisfying the EFEs) admit a
non-trivial second-order Killing tensor. Such Killing tensors are invaluable
when considering the geodesics of a metric as they lead to a second quadratic
invariant of the motion in addition to that derived from the metric. | Alan Barnes | 2023-09-11T09:34:13Z | http://arxiv.org/abs/2309.05336v2 | # Vacuum Static Spherically Symmetric Spacetimes in Harada's Theory
###### Abstract
Very recently Harada proposed a gravitational theory which is of third order in the derivatives of the metric tensor with the property that any solution of Einstein's field equations (EFEs) possibly with a cosmological constant is necessarily a solution of the new theory. He then applied his theory to derive a second-order ODE for the evolution of the scale factor of the FLRW metric. Remarkably he showed that, even in a matter-dominated universe with zero cosmological constant, there is a late-time transition from decelerating to accelerating expansion. Harada also derived a generalisation of the Schwarzschild solution. However, as his starting point he assumed an unnecessarily restricted form for a static spherically symmetric metric. In this note the most general spherically symmetric static vacuum solution of the theory is derived.
Mantica and Molinari have shown that Harada's theory may be recast into the form of the EFEs with an additional source term in the form of a second-order conformal Killing tensor(CKT). Accordingly they have dubbed the theory _conformal Killing gravity_. Then, using a result in a previous paper of theirs on CKTs in generalised Robertson-Walker spacetimes, they rederived Harada's generalised evolution equation for the scale factor of the FLRW metric.
However, Mantica and Molinari appear to have overlooked the fact that all solutions of the new theory (except those satisfying the EFEs) admit a non-trivial second-order _Killing_ tensor. Such Killing tensors are invaluable when considering the geodesics of a metric as they lead to a second quadratic invariant of the motion in addition to that derived from the metric.
Introduction
Recently Harada[1] has proposed a new gravitational theory satisfying three theoretical criteria for generalised theories of gravity namely:
1. The cosmological constant \(\Lambda\) is obtained as a constant of integration.
2. The conservation law \(T^{a}_{b;a}=0\) is a consequence of the field equations.
3. Conformally flat metrics are not necessarily vacuum.
Applying the above criteria he was led to consider the totally symmetric derivatives of a trace-modified Einstein tensor \(\tilde{G}_{ab}\)
\[H_{abc}=\tilde{G}_{(ab;c)}\quad\mbox{where}\quad\tilde{G}_{ab}=R_{ab}-\frac{1} {3}Rg_{ab}=G_{ab}-\frac{1}{6}Gg_{ab} \tag{1}\]
where round brackets indicate symmetrisation; and the similarly modified energy-momentum tensor
\[T_{abc}=\tilde{T}_{(ab;c)}\quad\mbox{where}\quad\tilde{T}_{ab}=T_{ab}-\frac{1} {6}Tg_{ab} \tag{2}\]
and to adopt as the field equations of his theory:
\[H_{abc}=T_{abc}. \tag{3}\]
The vacuum case in this theory is characterised by the condition \(T_{abc}=0\).
The energy-momentum conservation equation follows from (3) by contraction:
\[g^{ac}H_{abc}=G^{a}_{b;a}=0=g^{ac}T_{abc}=T^{a}_{b;a}.\]
It also follows immediately that any solution of the EFEs: \(G_{ab}=T_{ab}\) automatically satisfies Harada's field equations (3). A similar conclusion holds for solutions of the EFEs with a cosmological constant \(G_{ab}+\Lambda g_{ab}=T_{ab}\).
Harada[1] went on to consider the evolution of the scale factor \(a(t)\) of the Friedmann-Lemaitre-Robertson-Walker metric:
\[ds^{2}=\mbox{d}t^{2}-a^{2}(t)\left(\frac{\mbox{d}r^{2}}{1-kr^{2}}+r^{2}\mbox{d} \theta^{2}+r^{2}\sin^{2}\theta\mbox{d}\phi^{2}\right) \tag{4}\]
in his theory and obtained a third order ODE which has the first integral:
\[2\left(\frac{\dot{a}}{a}\right)^{2}-\frac{\ddot{a}}{a}+\frac{2k}{a^{2}}=\frac{ 4\pi G}{3}(5\rho+3p)+\frac{\Lambda}{3}. \tag{5}\]
He went on to show that, even in the case of a matter-dominated universe (\(p=0\)) with \(\Lambda=0\), there was a transition to accelerating expansion. In a second paper
Harada[2] considered this problem in greater depth and suggested that his theory also had the potential to address the Hubble tension problem.
Mantica and Molinari[3] have examined Harada's field equations and shown that they can be recast in the form of Einstein's field equations with an additional source term which is a second-order conformal Killing tensor \(C_{ab}\) defined by \(C_{ab}=G_{ab}-T_{ab}\). The trace \(C\) of \(C_{ab}\) is given by \(C=G-T\) and a straightforward calculation using (3) shows that
\[C_{(ab;c)}=\frac{1}{6}g_{(ab}C_{,c)}, \tag{6}\]
and hence \(C_{ab}\) is a gradient conformal Killing tensor. Thus (3) is equivalent to the 'Einstein' equation
\[G_{ab}=T_{ab}+C_{ab}.\]
Earlier they[4] had investigated generalised Robertson-Walker spacetimes and shown that they must admit a gradient conformal Killing tensor of the form \(C_{ab}=Bu_{a}u_{b}+Ag_{ab}\) where \(A\) and \(B\) are scalar fields and \(u_{a}\) is the velocity vector. They were then able to use this result to rederive Harada's first integral (5) of the evolution equations for the scale factor \(a(t)\) and to independently obtain the results of [2].
## 2 Static Spherically Symmetric Vacuum Solutions
Harada[1] derived the following exact solution of his field equations (3) for the static spherically symmetric vacuum case:
\[{\rm d}s^{2}=e^{2a(r)}{\rm d}t^{2}-e^{-2a(r)}{\rm d}r^{2}-r^{2}({\rm d}\theta^ {2}+\sin^{2}\theta{\rm d}\phi^{2}), \tag{7}\]
where
\[e^{2a(r)}=1-2m/r-\Lambda r^{2}/3-\lambda r^{4}. \tag{8}\]
\(m\), \(\Lambda\) and \(\lambda\) are arbitrary constants of integration. When \(\lambda=0\), this is the well-known Schwarzschild-de Sitter metric, but the \(\lambda r^{4}\) term is a new feature of the theory.
However, as his starting point Harada assumed the metric had the form (7) which is not the most general form. In terms of curvature coordinates the most general spherically symmetric static metric is
\[{\rm d}s^{2}=e^{2a}{\rm d}t^{2}-e^{2b}{\rm d}r^{2}-r^{2}({\rm d}\theta^{2}+\sin ^{2}\theta{\rm d}\phi^{2}), \tag{9}\]
where \(a\) and \(b\) are functions of \(r\) only.
The computer algebra system Classi (see [5] and [6]) was used to calculate the components of the tensor \(H_{abc}\). In terms of the obvious Lorentz orthonormal tetrad of one forms:
\[e^{a}{\rm d}t,\qquad e^{b}{\rm d}r,\qquad r{\rm d}\theta,\qquad r\sin\theta{ \rm d}\phi, \tag{10}\]
the only non-zero _frame_ components of \(H_{abc}\) are \(H_{rtt}\), \(H_{rrr}\) and \(H_{r\theta\theta}=H_{r\phi\phi}\). In fact only two of these are linearly independent as \(2H_{r\theta\theta}-H_{rtt}+H_{rrr}\equiv 0\). It is convenient to work with the two equations:
\[3H_{rtt}+H_{rrr}=r(a^{\prime\prime}-2a^{\prime 2}-4a^{\prime}b^{\prime}+b^{ \prime\prime}-2b^{\prime 2})-a^{\prime}-b^{\prime}=0 \tag{11}\]
and
\[H_{rtt}=-r^{3}(a^{\prime\prime\prime}+2a^{\prime\prime}a^{\prime}-3a^{\prime\prime}b^{\prime}-2a^{\prime 2}b^{\prime}-a^{\prime}b^{\prime\prime}+2a^{\prime}b^{\prime 2})+r^{2}(4a^{\prime\prime}-8a^{\prime}b^{\prime}+2b^{\prime\prime}-4b^{\prime 2})-r(4a^{\prime}+6b^{\prime})+4e^{2b}-4=0, \tag{12}\]
where a prime denotes differentiation with respect to \(r\).
Using the substitution \(b=f-a\) and cancelling common factors (11) simplifies to
\[rf^{\prime\prime}-2rf^{\prime 2}-f^{\prime}=0. \tag{13}\]
This equation may be integrated to yield
\[f=-\log(c+dr^{2})/2 \tag{14}\]
where \(c\) and \(d\) are arbitrary constants of integration. Hence the metric takes the form (9) with \(e^{2b}=e^{-2a}/(c+dr^{2})\). Eliminating \(b\) from (12) and removing common factors we obtain
\[r^{3}(c+dr^{2})(a^{\prime\prime\prime}+6a^{\prime}a^{\prime\prime}+4a^{\prime 3})-r^{2}(2c-dr^{2})(a^{\prime\prime}+2a^{\prime 2})-r(2c+dr^{2})a^{\prime}+4(c-1)=0. \tag{15}\]
By applying the substitution \(y=e^{2a}\), this equation may be simplified to produce the linear equation
\[(c+dr^{2})r^{3}y^{\prime\prime\prime}-(2c-dr^{2})r^{2}y^{\prime\prime}-(2c+dr^ {2})ry^{\prime}+8cy=8 \tag{16}\]
which may be solved by using standard textbook methods.
If \(d=0\), then \(e^{2b}=e^{-2a}/c\) and, after a suitable constant rescaling of the \(t\) coordinate, \(c\) may be set to 1. Equation (16) reduces to
\[r^{3}y^{\prime\prime\prime}-2r^{2}y^{\prime\prime}-2ry^{\prime}+8y=8. \tag{17}\]
This is easily integrated to yield \(y=e^{2a}=1-2m/r-\Lambda r^{2}/3-\lambda r^{4}/5\) where \(m\), \(\Lambda\) and \(\lambda\) are arbitrary constants of integration. Following Harada the numerical factors of the \(1/r\) and \(r^{2}\) terms have been chosen to correspond with those of the Schwarzschild-de Sitter solution. Thus Harada's solution in (7) and (8) is obtained.
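As a quick sanity check (not part of the original derivation), the quoted solution can be verified symbolically, for example with the sympy Python package:

```python
# Symbolic check that y = 1 - 2m/r - Lambda r^2/3 - lambda r^4/5 solves (17).
import sympy as sp

r, m, Lam, lam = sp.symbols("r m Lambda lambda_", positive=True)
y = 1 - 2*m/r - Lam*r**2/3 - lam*r**4/5

lhs = r**3*sp.diff(y, r, 3) - 2*r**2*sp.diff(y, r, 2) - 2*r*sp.diff(y, r) + 8*y
print(sp.simplify(lhs))   # prints 8, the right-hand side of (17)
```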
If \(c=0\), a similar rescaling of the \(t\) coordinate may be used to set \(d=1\) and \(e^{2b}=e^{-2a}/r^{2}\). Equation (16) now reduces to
\[r^{5}y^{\prime\prime\prime}+r^{4}y^{\prime\prime}-r^{3}y^{\prime}=8. \tag{18}\]
This may be integrated to yield
\[y=e^{2a}=\lambda-\Lambda r^{2}/3+m\log r-1/(2r^{2})\qquad e^{2b}=e^{-2a}/r^{2} \tag{19}\]
where \(m\), \(\Lambda\) and \(\lambda\) are again arbitrary constants of integration.
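The \(c=0\) solution (19) can be checked against (18) in the same way:

```python
# Symbolic check that y = lambda - Lambda r^2/3 + m log r - 1/(2 r^2) solves (18).
import sympy as sp

r, m, Lam, lam = sp.symbols("r m Lambda lambda_", positive=True)
y = lam - Lam*r**2/3 + m*sp.log(r) - 1/(2*r**2)

lhs = r**5*sp.diff(y, r, 3) + r**4*sp.diff(y, r, 2) - r**3*sp.diff(y, r)
print(sp.simplify(lhs))   # prints 8, the right-hand side of (18)
```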
In the general case where \(c\) and \(d\) are both non-zero, \(c\) may again be set to 1 by a rescaling of the \(t\) coordinate. In this case the solution of (16) involves infinite power series and is obtained by use of the well-known Frobenius method. Firstly there is an obvious _particular integral_ \(y=1\). The _complementary function_ is obtained by searching for solutions of the form
\[y=\sum_{n=0}^{\infty}a_{n}r^{c+n}. \tag{20}\]
and equating the coefficient of each power of \(r\) in (16) to zero. For \(n=0\) this results in the _indicial equation_ \(c^{3}-5c^{2}+2c+8=(c-2)(c-4)(c+1)=0\). The case \(c=2\) leads to the monomial solution \(y=r^{2}\) (the cosmological constant term). The cases \(c=4\) and \(c=-1\) each result in an infinite power series; for \(n=1\) we obtain \((c^{3}-2c^{2}-5c+6)a_{1}=0\) and thus \(a_{1}=0\) in both cases. For \(n\geq 2\) and \(c=-1\) the recurrence relation \(a_{n}=-d\frac{n-3}{n}a_{n-2}\) is obtained whilst for \(c=4\) we obtain \(a_{n}=-d\frac{n+2}{n+5}a_{n-2}\). Clearly in both cases the coefficient \(a_{n}\) vanishes when \(n\) is odd. Also the radius of convergence of both series is easily seen to be \(1/\sqrt{|d|}\). The general solution of (16) is therefore
\[y=e^{2a} = 1-2mp(r)/r+\lambda q(r)r^{4}/5-\Lambda r^{2}/3 \tag{21}\] \[\mbox{where}\quad p(r) = 1+dr^{2}/2-d^{2}r^{4}/8+d^{3}r^{6}/16\] (22) \[\mbox{and}\quad q(r) = 1-4dr^{2}/7+8d^{2}r^{4}/21-64d^{3}r^{6}/231\] (23) \[+640d^{4}r^{8}/3003-512d^{5}r^{10}/3003\ldots\] \[\mbox{with}\quad e^{2b} = e^{-2a}/(1+dr^{2}). \tag{24}\]
where \(m\), \(\Lambda\) and \(\lambda\) are again arbitrary constants of integration.
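The recurrences quoted above are easy to iterate; the following sketch (using sympy, purely for illustration) reproduces the coefficients of \(p(r)\) and \(q(r)\) in (22) and (23):

```python
# Generate the first few Frobenius coefficients (a_0 = 1, odd coefficients vanish).
import sympy as sp

d = sp.Symbol("d")

def series_coeffs(recurrence, n_terms=6):
    a = [sp.Integer(1)]                      # a_0
    for j in range(1, n_terms):
        n = 2 * j                            # only even n survive
        a.append(sp.simplify(recurrence(n) * a[-1]))
    return a

p_coeffs = series_coeffs(lambda n: -d * sp.Rational(n - 3, n))       # c = -1 branch
q_coeffs = series_coeffs(lambda n: -d * sp.Rational(n + 2, n + 5))   # c = +4 branch
print(p_coeffs)   # [1, d/2, -d**2/8, d**3/16, ...]
print(q_coeffs)   # [1, -4*d/7, 8*d**2/21, -64*d**3/231, 640*d**4/3003, -512*d**5/3003]
```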
If we set \(m=0\) and \(\lambda=0\) in the general solution above, the following metric is obtained
\[{\rm d}s^{2}=(1-\Lambda r^{2}/3){\rm d}t^{2}-\frac{1}{(1-\Lambda r^{2}/3)(1+ dr^{2})}{\rm d}r^{2}-r^{2}({\rm d}\theta^{2}+\sin^{2}\theta{\rm d}\phi^{2}). \tag{25}\]
As all the solutions obtained in this section are spherically symmetric, they must be of Petrov type D or conformally flat; in fact, apart from the Minkowski metric and the de Sitter metric, which is a special case of the Harada metric (8) with \(m=\lambda=0\),
the only conformally flat solution is a special case of (25) with \(\Lambda=0\). This is the metric of Einstein's static universe which is a vacuum solution in Harada's theory!
The only non-zero _frame_ components of the Weyl tensor are
\[C_{0101}=-C_{2323}=A,\qquad C_{1212}=C_{1313}=-C_{0202}=-C_{0303}=A/2, \tag{26}\]
where
\[A=-\frac{r^{2}(c+dr^{2})y^{\prime\prime}-r(2c+dr^{2})y^{\prime}+2cy-2}{6r^{2}}. \tag{27}\]
## 3 Killing Tensors
As pointed out by Mantica and Molinari[3], \(C_{ab}=G_{ab}-T_{ab}\) is a gradient conformal Killing tensor (see (6) in the Introduction). However, a gradient CKT is associated with a Killing tensor. For if \(C_{(ab;c)}=g_{(ab}\eta_{,c)}\) for some scalar field \(\eta\), then \(C_{ab}-\eta g_{ab}\) is a _Killing tensor_ (see, for example, [7]). In fact from (1), (2) and (3), it is obvious that \(K_{ab}=\tilde{G}_{ab}-\tilde{T}_{ab}\) is a Killing tensor. If a solution of (3) is also a solution of the EFEs, this Killing tensor is identically zero, whilst for solutions of the EFEs with a cosmological constant, the Killing tensor is a constant multiple of the metric tensor. However, for other solutions of Harada's theory the Killing tensor is non-trivial.
As is well-known (for example [8]), a Killing tensor corresponds to a constant of motion on geodesics; specifically, for a geodesic with tangent vector \(u^{a}\), \(K_{ab}u^{a}u^{b}\) is constant along the geodesic. Since the trace-modified Einstein tensor \(\tilde{G}_{ab}=G_{ab}-\frac{G}{6}g_{ab}\) is a Killing tensor, the analysis of geodesics in vacuum Harada fields is considerably simplified by the existence of this second quadratic constant of motion in addition to that associated with the metric \(g_{ab}u^{a}u^{b}\).
For the general static spherically symmetric metric (9) derived in the previous section, the Killing tensor has the following non-zero _coordinate_ components:
\[K_{tt} = y\frac{r^{2}(c+dr^{2})y^{\prime\prime}-r(2c+dr^{2})y^{\prime}-4( c+3dr^{2})y+4}{6r^{2}} \tag{28}\] \[K_{rr} = -\frac{r^{2}(c+dr^{2})y^{\prime\prime}-r(2c+dr^{2})y^{\prime}-4cy +4}{6r^{2}y(c+dr^{2})}\] (29) \[K_{\theta\theta} = \frac{r^{2}(c+dr^{2})y^{\prime\prime}+r(c+2dr^{2})y^{\prime}-cy+1} {3}\] (30) \[K_{\phi\phi} = \frac{r^{2}(c+dr^{2})y^{\prime\prime}+r(c+2dr^{2})y^{\prime}-cy+ 1}{3}\sin^{2}\theta. \tag{31}\]
where \(y=e^{2a}\) and \(e^{2b}=e^{-2a}/(c+dr^{2})\). Thus for the geodesics of the solutions discussed in section 2, there are four constants of motion:
\[g_{ab}\dot{x}^{a}\dot{x}^{b},\qquad K_{ab}\dot{x}^{a}\dot{x}^{b},\qquad\xi_{a} \dot{x}^{a},\qquad\eta_{a}\dot{x}^{a}, \tag{32}\]
where \(\dot{x}^{a}\) is the tangent vector of the affinely-parameterised geodesic and \(\xi=\partial_{t}\) and \(\eta=\partial_{\phi}\) are the timelike and rotational Killing vectors. Further simplification is possible since, by a suitable rotation of the \(\theta\) and \(\phi\) coordinates, the geodesic motion can be chosen to lie in the 'equatorial' plane so that \(\theta=\pi/2\) and \(\dot{\theta}=0\).
## 4 Conclusions
All static spherically symmetric vacuum solutions in Harada's 'conformal Killing' gravity theory are derived. The most general solution involves five parameters; however only the ratio of the fourth and fifth parameters \(c/d\) is essential; either \(c\) or \(d\) (unless zero) may be set to unity by a constant rescaling of the \(t\) coordinate.
The general solution when \(c\) and \(d\) are both non-zero involves two infinite power series in the curvature coordinate \(r\) plus a 'cosmological constant' term which is a multiple of \(r^{2}\). Harada's original three-parameter 'Schwarzschild-like' solution is obtained when the parameter \(d=0\); the two power series degenerating to monomials in this case. Two of the parameters may be identified as the mass \(m\) and the cosmological constant \(\Lambda\), but the third \(\lambda\) and the ratio \(c/d\) are new and have no analogue in General Relativity. A second three-parameter solution not involving power series is obtained when \(c=0\); its physical interpretation is currently unclear.
All solutions of Harada's theory other than those satisfying the Einstein field equations are shown to admit a non-trivial second-order Killing tensor. The quadratic constant of motion associated with this Killing tensor simplifies the analysis of geodesic motion in Harada gravitational fields especially the static spherically symmetric vacuum fields derived in section 2.
A number of important questions arise regarding the theory. For example, are the spherically symmetric vacuum solutions of the theory necessarily static, as they are in General Relativity? If not, then it would appear that a spherically symmetric collapsing or pulsating star ought to generate gravitational waves leading to possible experimental evidence regarding the theory. In this context a study of the propagation of gravitational waves in the weak field approximation and/or a study of exact plane wave solutions would be useful.
There are a number of interesting theoretical questions; for example, what are the asymptotic and event horizon structures of the solutions derived in section 2? There is clearly a curvature singularity at \(r=0\) except when \(m=0\), and if either \(\Lambda\) or \(\lambda\) is non-zero, the solutions are not asymptotically flat.
Senovilla[9] considered junction conditions in F(\(R\)) gravity, which is fourth order in the metric derivatives. He showed that interesting new features can arise, such as gravitational double layers. It would be interesting to see whether such features appear in Harada's theory, which is third order.
## Acknowledgements
The extensive calculations in sections 2 and 3 were performed using the Sheep/Classi package for General Relativity which was kindly supplied to me by Jan Aman of the University of Stockholm. I would also like to thank him for useful discussions on some undocumented features of the system. Some calculations were also performed using the Reduce computer algebra system, which is freely available for download from SourceForge ([https://sourceforge.net/projects/reduce-algebra/files/](https://sourceforge.net/projects/reduce-algebra/files/)). Two source files harada.shp and harada.lor not in the standard Classi distribution are available from the author on request.
|
2309.03747 | The Daunting Dilemma with Sentence Encoders: Success on Standard
Benchmarks, Failure in Capturing Basic Semantic Properties | In this paper, we adopted a retrospective approach to examine and compare
five existing popular sentence encoders, i.e., Sentence-BERT, Universal
Sentence Encoder (USE), LASER, InferSent, and Doc2vec, in terms of their
performance on downstream tasks versus their capability to capture basic
semantic properties. Initially, we evaluated all five sentence encoders on the
popular SentEval benchmark and found that multiple sentence encoders perform
quite well on a variety of popular downstream tasks. However, being unable to
find a single winner in all cases, we designed further experiments to gain a
deeper understanding of their behavior. Specifically, we proposed four semantic
evaluation criteria, i.e., Paraphrasing, Synonym Replacement, Antonym
Replacement, and Sentence Jumbling, and evaluated the same five sentence
encoders using these criteria. We found that the Sentence-Bert and USE models
pass the paraphrasing criterion, with SBERT being the superior between the two.
LASER dominates in the case of the synonym replacement criterion.
Interestingly, all the sentence encoders failed the antonym replacement and
jumbling criteria. These results suggest that although these popular sentence
encoders perform quite well on the SentEval benchmark, they still struggle to
capture some basic semantic properties, thus, posing a daunting dilemma in NLP
research. | Yash Mahajan, Naman Bansal, Shubhra Kanti Karmaker | 2023-09-07T14:42:35Z | http://arxiv.org/abs/2309.03747v1 | The Daunting Dilemma with Sentence Encoders: Success on Standard Benchmarks, Failure in Capturing Basic Semantic Properties
###### Abstract
In this paper, we adopted a retrospective approach to examine and compare five existing popular sentence encoders, i.e., Sentence-BERT, Universal Sentence Encoder (USE), LASER, InferSent, and Doc2vec, in terms of their performance on downstream tasks versus their capability to capture basic semantic properties. Initially, we evaluated all five sentence encoders on the popular SentEval benchmark and found that multiple sentence encoders perform quite well on a variety of popular downstream tasks. However, being unable to find a single winner in all cases, we designed further experiments to gain a deeper understanding of their behavior. Specifically, we proposed four semantic evaluation criteria, i.e., Paraphrasing, Synonym Replacement, Antonym Replacement, and Sentence Jumbling, and evaluated the same five sentence encoders using these criteria. We found that the Sentence-Bert and USE models pass the paraphrasing criterion, with SBERT being the superior between the two. LASER dominates in the case of the synonym replacement criterion. Interestingly, all the sentence encoders failed the antonym replacement and jumbling criteria. These results suggest that although these popular sentence encoders perform quite well on the SentEval benchmark, they still struggle to capture some basic semantic properties, thus, posing a daunting dilemma in NLP research.
## 1 Introduction
One of the fundamental tasks in NLP is to map sentences computationally into dense vector representations for subsequent analysis. These dense vectors of fixed size, which are known as "sentence embeddings", represent the meaning of sentences in some latent semantic space. Till today, many supervised Conneau et al. (2017) and unsupervised Le and Mikolov (2014) methods have been proposed to learn embeddings for a given sentence. For instance, Doc2vec Le and Mikolov (2014), proposed in 2014, is one of the earliest sentence encoding techniques that uses a deep neural network. In 2017, InferSent Conneau et al. (2017), developed by Facebook, used Bi-LSTM networks to learn sentence embeddings. Later, in 2017, Transformers Vaswani et al. (2017) were introduced and subsequently, many transformer-based sentence encoders have been proposed since then including BERT Devlin et al. (2019), USE Universal-Sentence-Encoder Cer et al. (2018), Sentence-BERT Reimers and Gurevych (2019), LASER Artetxe and Schwenk (2019) etc..
While some of these powerful sentence encoders have demonstrated superior performance on standard benchmarks and downstream NLP tasks Choi et al. (2021); Conneau and Kiela (2018), we still lack a good understanding of the pros and cons of using different sentence encoders for any task Pham et al. (2021). In other words, despite achieving high accuracy numbers on benchmark datasets, it is still unclear whether they indeed capture basic linguistic properties (which is desired) while doing so. To investigate this in detail, we adopt a retrospective approach in this paper to analyze and compare five existing popular sentence encoders in terms of their capability to capture the basic semantics. Specifically, we designed four semantic criteria to evaluate sentence encoders: 1) Paraphrasing, 2) Synonym Replacement, 3) Antonym Replacement, and 4) Sentence Jumbling, as shown in Table 1, to quantify how well a sentence encoder can capture the semantic relations between two related sentences.
Computationally, given a sentence \(S\) and its corresponding embedding \(S_{x}\), the basic idea here is to perturb \(S\) according to a particular criterion to create \(S^{\prime}\) (with embedding \(S^{\prime}_{x}\)), then look at how similar/different two embedding vectors \(S_{x}\) and \(S^{\prime}_{x}\) are and match those observations against the expected behavior. For example (see Table 1), given original sentence: "_Levin's attorney, Bo Hitchcock, declined to comment last Friday_", an example of synonym
replacement perturbation is: "Levin's attorney, Bo Hitchcock, _refused_ to comment last Friday". Obviously, these two sentences are very similar, and intuitively, a good sentence encoder should produce very similar sentence embeddings for them. On the contrary, an Antonym Replacement or Sentence Jumbling perturbation usually shifts the meaning of the sentence significantly, and therefore, a good sentence encoder should yield a somewhat diverse embedding for Antonym Replacement/Jumbling.
Based on the intuitions mentioned above, we designed experiments to test five popular sentence encoders with respect to our four semantic evaluation criteria. Note that these four criteria only constitute a subset of linguistic properties that we argue a good sentence encoder should hold, _but it is far from an exhaustive list, which is beyond the scope of this paper_. A benefit of these four evaluation criteria is that they can be experimentally evaluated in an unsupervised fashion without requiring a specific downstream task.
For experiments, we initially evaluated and analyzed the performance of five popular sentence encoders, i.e., Sentence-BERT, Universal Sentence Encoder (USE), LASER, InferSent, and Doc2vec, on the SentEval benchmark and found that there is no single winner for all benchmark tasks. Next, we conducted extensive experiments to test our four semantic evaluation criteria on these five encoder models, and the Sentence-BERT model demonstrated the closest to expected behavior in the case of the Paraphrasing criterion, while LASER performed the best in the case of the Synonym Replacement criterion. On the contrary, all the sentence encoders failed to satisfy the Antonym Replacement and Sentence Jumbling criteria. These results suggest that a sentence encoder model can perform quite well on the SentEval benchmark even though they fail to satisfy some of the basic semantic evaluation criteria. This result raises several daunting philosophical dilemmas in NLP research in general, e.g., when can we claim a sentence encoder as "good" vs. "bad"? Do we only care about the performance of downstream tasks even when basic linguistic properties are violated? Is the SentEval benchmark challenging enough, or do we need to add harder tasks into the SentEval benchmark that demands a deeper semantic understanding for a more accurate evaluation of sentence encoders? We urge the community to conduct further research on these questions based on our study.
## 2 Related Works
To date, many techniques have been proposed to generate the embedding of a given sentence. Doc2Vec [10] is an unsupervised technique that generates embeddings from variable-length pieces of text and creates a unique embedding for each paragraph in a document. Later, others attempted to learn sentence embeddings using auto-encoders [21, 15, 16]. On the other hand, InferSent [12] used SNLI [13] and Multi-genre NLI labeled data [20] and learned sentence embeddings using a Bi-LSTM with max-pooling architecture and a Siamese network.
More recently, Cer et al. (2018) proposed the "Universal Sentence Encoder" (USE), which is trained on a combination of supervised and unsupervised NLI (Natural Language Inference) data and has effectively produced sophisticated sentence embeddings. Sentence BERT (SBert) [17] is trained on the Wikipedia corpus and news-wire articles and later fine-tuned on the SNLI and Multi-Genre NLI datasets. These models have been trained rigorously on large corpora of data, and many of them used data parallelisms [22, 23, 24, 25], natural language inference (NLI) [14, 15, 16], or a combination of both [20].
However, recently Reimers and Gurevych (2019), Li et al. (2020), and Pham et al. (2021) reported that these pre-trained language models produce poor embeddings for semantic similarity tasks. Many pre-trained language models are designed for
\begin{table}
\begin{tabular}{c|c|l} \hline \multicolumn{3}{c}{Original Sentence: “_Levin’s attorney, Bo Hitchcock, declined to comment last Friday_”} \\ \hline \multicolumn{1}{c|}{**Perturbation Task**} & \multicolumn{1}{c|}{**Example Sentence**} & **Expected Behavior** \\ \hline
**Paraphrasing** & Hitchcock has declined to comment on the case, as has Levin. & Similar to Original \\ \hline
**Synonym Replacement** & Levin’s attorney, Bo Hitchcock, _refused_ to comment last Friday. & Similar to Original \\ \hline
**Antonym Replacement** & Levin’s attorney, Bo Hitchcock, _accepted_ to comment last Friday. & Diverse from Original \\ \hline
**Sentence Jumbling** & Levin’s attorney _to_ Bo Hitchcock, declined, comment last Friday. & Diverse from Original \\ \hline \end{tabular}
\end{table}
Table 1: Examples of the four unsupervised semantic understanding tasks.
task-specific purposes; as a result, the embeddings generated by the models could be biased. To further investigate this issue in this paper, we conduct a systematic study of popular sentence encoders by proposing four basic semantic evaluation criteria and report our findings to inform the research community.
## 3 Evaluation on SentEval Benchmarks
SentEval Conneau and Kiela (2018) is a widely used framework for evaluating the efficacy of sentence embeddings. Here, sentence embeddings are used to perform various classification tasks. Specifically, the SentEval toolkit uses a logistic regression classifier or multi-layered perceptron (MLP), which deploys a 10-fold cross-validation methodology across a range of classification tasks. The testing fold is then utilized to compute the prediction accuracy of the classifiers.
In this work, we assess the effectiveness of five distinct sentence encoders on seven datasets from the SentEval benchmark to identify the best one.
1. **MR**: Movie review dataset for sentiment binary classification task Pang and Lee (2005).
2. **CR**: Sentiment prediction on Product review dataset with binary labels Hu and Liu (2004).
3. **MPQA**: An opinion polarity dataset with binary classification task Wiebe et al. (2005).
4. **SSTb**: Stanford Sentiment Treebank dataset with binary labels Socher et al. (2013).
5. **SUBJ**: Subjective prediction from movie reviews/plot summaries Pang and Lee (2004).
6. **TREC**: Fine-grained question-type classification from TREC Li and Roth (2002a).
7. **MRPC**: Microsoft Paraphrase Corpus from parallel news sources Li and Roth (2002b).
The accuracy scores of each sentence encoder can be found in Table 2. The SBERT model exhibits superior performance, generating more useful embeddings than the other models on five out of seven benchmark datasets, with the highest average score of 86.9. However, the InferSent and USE models also demonstrate performance very similar to SBERT, with nearly one- and three-point differences, respectively, in terms of average scores. This raises concerns regarding the best sentence encoder to use for an unseen task, as we do not want an encoder that generates quality embeddings for a particular task but fails to capture the context in other tasks. For instance, the SBERT model performed poorly on the MRPC and TREC datasets but performed well on other benchmarks, raising doubts about the encoder's ability to provide quality embeddings for any task. Therefore, we need to investigate in depth whether a sentence encoder can differentiate between orthogonal and non-orthogonal sentences. Thus, we designed four intuitive semantic criteria, _Paraphrase, Synonym Replacement, Paraphrase Vs. Antonym Replacement, and Paraphrase Vs. Jumbled Sentence_, that are simple yet important for evaluating sentence encoders (see section 4). The evaluation of these criteria provides detailed insight into how sentence encoders understand natural language and how efficiently they capture context in their embeddings.
## 4 Four Semantic Evaluation Criteria
1. **Criterion-1 (Paraphrasing)**: As our first criterion, we argue that: "A _good_ sentence encoder should generate similar embeddings for two sentences which are paraphrases of each other". Similarly, a _good_ sentence encoder should generate significantly different embeddings for two unrelated sentences. Therefore, the difference between the average similarity score (in terms of sentence embeddings) of a collection of paraphrase pairs and that of non-paraphrase pairs should be high for a _"good"_ sentence encoder.
2. **Criterion-2 (Synonym Replacement)**: For the second criterion, we argue that: "If we replace \(n\) words (where, \(n\) is small) from sentence \(S\) with their respective synonyms to create another sentence \(S^{\prime}_{P}\), a good sentence encoder will yield similar embeddings for \(S\) and \(S^{\prime}_{P}\) in the latent semantic space". The intuition here is that synonym replacement does not alter the meaning of a sentence significantly, hence, the embeddings are also expected to remain similar.
3. **Criterion-3 (Paraphrase Vs. Antonym Replacement)**: For the third criterion, we argue that: "Given a sentence \(S\), its paraphrase \(S^{\prime}_{P}\) and an antonym-replaced sentence \(S^{\prime}_{A}\) (created by replacing exactly one word (verb/adjective) in \(S\) with its antonym), \(S^{\prime}_{P}\) should be semantically more similar to \(S\) than \(S^{\prime}_{A}\) to \(S\) by some clear margin, i.e., \(Sim(S,S^{\prime}_{P})-Sim(S,S^{\prime}_{A})>\epsilon_{1}\), where \(\epsilon_{1}\) denotes the expected minimum margin. The intuition here is that a good sentence encoder should generate embeddings in a manner such that any paraphrase is closer to the
original sentence than an antonym-replaced sentence in the latent semantic space. This will ensure that the encoding can indeed differentiate between paraphrased and antonym-replaced sentences and capture their semantic variance.
4. **Criterion-4 (Paraphrase Vs. Sentence Jumbling)**: Our fourth criterion is similar to the third one, except that instead of antonym replacement, we consider sentence jumbling. Formally, given a sentence \(S\), its paraphrase \(S^{\prime}_{P}\) and a jumbled sentence \(S^{\prime}_{J}\) (created by randomly swapping \(n\) pairs of words among each other in the original sentence \(S\)), \(S^{\prime}_{P}\) should be semantically more similar to \(S\) than \(S^{\prime}_{J}\) is, by some clear margin, i.e., \(Sim(S,S^{\prime}_{P})-Sim(S,S^{\prime}_{J})>\epsilon_{2}\), where \(\epsilon_{2}\) denotes the expected minimum margin. The intuition here is that a good sentence encoder should generate embeddings in a manner such that any paraphrase of a given sentence should be closer to the original sentence than a jumbled one in the latent semantic space.
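As a concrete illustration of how these margins can be computed (this is not the evaluation code used in the paper; it assumes the open-source `sentence-transformers` package and the `all-MiniLM-L6-v2` checkpoint purely as a stand-in encoder; the encoders actually evaluated are listed in Section 5.2):

```python
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

def cos_sim(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

S   = "Levin's attorney, Bo Hitchcock, declined to comment last Friday."
S_P = "Hitchcock has declined to comment on the case, as has Levin."      # paraphrase
S_A = "Levin's attorney, Bo Hitchcock, accepted to comment last Friday."  # antonym replacement
S_J = "Levin's attorney to Bo Hitchcock, declined, comment last Friday."  # jumbled

emb = {s: e for s, e in zip([S, S_P, S_A, S_J], model.encode([S, S_P, S_A, S_J]))}

margin_antonym = cos_sim(emb[S], emb[S_P]) - cos_sim(emb[S], emb[S_A])   # Criterion 3
margin_jumble  = cos_sim(emb[S], emb[S_P]) - cos_sim(emb[S], emb[S_J])   # Criterion 4
print(f"Sim(S,S_P)-Sim(S,S_A) = {margin_antonym:.3f}")
print(f"Sim(S,S_P)-Sim(S,S_J) = {margin_jumble:.3f}")
```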
## 5 Experiments
### Data-set
In this work, we used three publicly available paraphrasing data-sets with human-annotated labels. All three datasets come with binary labels assigned to each pair of sentences. Label \(1\) (Pos) indicates that the pair of sentences have a similar meaning and \(0\) (Neg) indicates otherwise. The data sets are: **1) QQP** (Quora Question Pairs) dataset [2], which is a collection of paraphrased and non-paraphrased pairs of questions; **2) PAWS-WIKI** (Paraphrase Adversaries from Word Scrambling - Wikipedia) dataset [15], which is a collection of pairs of sentences from Wikipedia with high lexical overlap, labeled with 1's and 0's for paraphrase and non-paraphrase pairs, respectively; and **3) MRPC** (Microsoft Research Paraphrase Corpus) dataset [1], which is a collection of sentence pairs extracted from news articles. More details can be found in appendix A.2.
### Sentence Encoder Models
In this work, we compared five popular sentence embedding methods: 1) Universal Sentence Encoder (USE) [11], 2) Sentence-BERT (SBert) [12], 3) InferSent [13], 4) Language-Agnostic-SEntence Representation (LASER) [15], and 5) Document To Vector (Doc2Vec or D2V) [1]. Detailed descriptions of the models can be found in appendix A.3.
## 6 Results
All models were evaluated on the four proposed criteria. Here, we argue that _a sentence encoder shall pass all the criteria in order to be considered a "good" sentence encoder._ Our results on the three datasets (refer to section 5.1) are described below. All the evaluations were performed on a Google Colab GPU server and a local system having an Intel i5 processor and 8GB of RAM.
1. **Criterion-1 (Paraphrasing)**: To evaluate criterion-1, we took sentence pairs (both paraphrases and non-paraphrases) from each data set with no modification and encoded each sentence using the five popular sentence encoders mentioned in Section 5.2. Next, we computed _Cosine Similarity_ on the embedding vector pairs and calculated the similarity between the sentence pairs, and further averaged the similarity scores in case of paraphrases (positive instances) and non-paraphrases (negative instances), separately. Finally, we computed the difference between the average similarity of paraphrase and non-paraphrase pairs to evaluate our criterion. As mentioned in section 4, this difference
\begin{table}
\begin{tabular}{c|c|c|c|c|c|c|c|c} \hline
**Model** & **MR** & **CR** & **SUBJ** & **MPQA** & **SSTb** & **TREC** & **MRPC** & **Avg** \\ \hline \hline
**SBERT** & **83.95** & **88.98** & **93.77** & **89.51** & **90.01** & 84.8 & 76.28 & **86.9** \\
**USE** & 75.58 & 81.83 & 91.87 & 87.17 & 85.68 & **92.2** & 69.62 & 83.42 \\
**Infersent** & 81.1 & 86.3 & 92.4 & 90.2 & 84.6 & 88.2 & 76.2 & 85.57 \\
**LASER** & 56.14 & 63.89 & 67.65 & 72.36 & 72.85 & 79.85 & **89.19** & 72.04 \\
**Doc2Vec** & 49.76 & 63.76 & 49.16 & 68.77 & 49.92 & 19.2 & 66.49 & 52.43 \\ \hline \end{tabular}
\end{table}
Table 2: Evaluation of existing sentence encoders on SentEval Benchmark. The accuracy scores are generated using the SentEval toolkit on different classification tasks. The scores are generated using 10-fold cross-validation.
is expected to be high in the case of a "good" sentence encoder.
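A minimal sketch of this criterion-1 computation is shown below, with `encode` standing in for any of the encoders under study (a hypothetical callable, not the exact evaluation script used here):

```python
# Criterion-1 score: gap between the mean cosine similarity of paraphrase
# (label 1) pairs and of non-paraphrase (label 0) pairs.
import numpy as np

def criterion1_gap(pairs, labels, encode):
    a = np.asarray(encode([s for s, _ in pairs]))   # embeddings of first sentences
    b = np.asarray(encode([t for _, t in pairs]))   # embeddings of second sentences
    sims = np.sum(a * b, axis=1) / (np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1))
    labels = np.asarray(labels)
    return float(sims[labels == 1].mean() - sims[labels == 0].mean())
```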
Table 3 summarizes the results for criterion-1. Overall, the SBert model was best able to distinguish between paraphrase (positive) and non-paraphrase (negative) pairs for the QQP and MRPC data sets, followed by the Universal Sentence Encoder (USE) as the second best. In contrast, all encoders failed to differentiate in the case of the PAWS-WIKI data set (note that sentence pairs in the PAWS-WIKI data set share high lexical overlap; hence, it is easy for encoders to get confused). In fact, Doc2Vec (D2V) failed to differentiate in the case of all three data sets, while InferSent and LASER showed sub-optimal performance. The performance of all models aligned with the SentEval benchmark (refer to Table 2). Hence, we conclude that the SBERT and USE models pass this criterion, while the rest of the models struggle. Also, the SBERT model was the best-performing among all models.
**Criterion-1 (Alternative Setup)**: In this criterion, we aimed to test an alternative setup to evaluate criterion-1. Instead of selecting negative pairs that are non-paraphrases yet somewhat related, we created negative pairs by randomly sampling two sentences, each belonging to a different topic. The idea behind this is to create negative pairs from orthogonal topics with fewer overlapping words, leading to a lower expected similarity between them. In contrast, positive samples remained the same as in the original setup of criterion-1 evaluation.
Table 4 summarizes the results for criterion-1 (Alternative Setup). Again, the SBert model was best able to distinguish between paraphrase (positive) and non-paraphrase (negative) pairs for the QQP and MRPC data sets, followed by the Universal Sentence Encoder (USE) as the second best. This time, the differences are much larger. Consistent with the SentEval results (Table 2) and with Table 3, InferSent, LASER, and Doc2Vec were again found to be sub-optimal.
ever, LASER and Infersent were sub-optimal in the case of the criterion-1 (Paraphrasing) task. On the other hand, similarity scores yielded by SBert and USE for Synonym Replacement are pretty close to the same for LASER and Infersent, and hence, SBert and USE still remain the better choice considering both criteria 1 and 2. Also, the similarity score drops gradually with an increase in the order of \(n\), i.e., an increase in word replacement for each sentence. Hence, we can say that except for the D2V model, all other models satisfy criterion-2.
3. **Criterion-3 (Paraphrase Vs. Antonym Replacement)**: In the third criterion, we expect that the paraphrased sentence \(S^{\prime}_{P}\) should be semantically closer to the original sentence \(S\) compared to an antonym sentence \(S^{\prime}_{A}\). To test this criterion, we computed the cosine similarities between the sentence pairs (\(S\), \(S^{\prime}_{P}\)(paraphrase)) and (\(S\), \(S^{\prime}_{A}\) (antonym)) separately and then, computed the difference between these two similarity scores. We used WordNet toolkit Miller (1995) to create an antonym sentence and repeated this process for each sentence \(S\) in a data set and, finally, plotted a cumulative histogram where the bins in the x-axis represent the expected minimum difference margins (\(\epsilon_{1}\) = [-0.3 to 0.3]), and the y-axis represents the number of sentences \(S\) in the dataset with that minimum margin, i.e., \(Sim(S,S^{\prime}_{P})-Sim(S,S^{\prime}_{A})>\epsilon_{1}\), for each sentence embedding technique (Figure 1). To satisfy the criterion, we expect that a good sentence encoder should yield higher similarity for (\(S\),\(S^{\prime}_{P}\)) pair than the (\(S\),\(S^{\prime}_{A}\)) pair, and hence, their difference in similarity score should be somewhat significant (\(>>0\)). Upon closer examination of Figure 1, it becomes apparent that all five sentence encoders produce left-skewed cumulative histograms, indicating that they are unable to differentiate between \(S^{\prime}_{A}\) and \(S^{\prime}_{P}\) in terms of their differences from the original sentence \(S\). This observation essentially means that all five sentence encoders fail to satisfy criterion 3, as most of the samples fall within the \(\epsilon_{1}\) range of -0.3 to 0, with very few samples having positive differences. Thus, the sentence encoders tend to produce embeddings that place \(S^{\prime}_{A}\) closer to \(S\) than \(S^{\prime}_{P}\) in the latent semantic space, which is the opposite of what we expected. Surprisingly, the Doc2Vec model in Figure 1 demonstrates some
Figure 1: The figures demonstrate the cosine similarity difference for the Paraphrase Vs. Antonym Replacement criterion. The scores are calculated based on \(Sim(S,S^{\prime}_{P})-Sim(S,S^{\prime}_{A})>\epsilon_{1}\). All five sentence encoders were tested on all three datasets. Here the epsilon is equal to \(\epsilon_{1}\).
positive difference between the sentence pairs (\(S\), \(S^{\prime}_{P}\) (paraphrase)) and (\(S\), \(S^{\prime}_{A}\) (antonym)). Moreover, for the PAWS-WIKI dataset (Figure 1(a)), where all other models failed miserably, the Doc2Vec model exhibited a significantly higher positive difference between pairs. The cause of this difference is unclear, and we plan to investigate it in more detail in future work. Further, comparing these results to those of SentEval (see Table 2) reveals an interesting finding. Four out of five models achieved relatively high accuracy scores on downstream tasks, yet all models failed to capture a desired basic linguistic property. These findings also raise questions about whether SentEval is a hard-enough benchmark for testing sentence encoders, or whether we may be overly reliant on metrics such as the cosine similarity score, whose underlying workings are unclear yet which is widely used as the similarity metric for evaluating sentence encoders.
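One possible way to construct the antonym-replaced sentence \(S^{\prime}_{A}\) with WordNet is sketched below; the exact replacement policy (which word is replaced, and how part-of-speech is handled) is our assumption and is not specified above:

```python
# Assumed implementation of the antonym perturbation; requires `nltk`
# and a prior call to nltk.download("wordnet").
from nltk.corpus import wordnet as wn

def antonym_replace(tokens):
    """Replace the first token that has a WordNet antonym; return None if no token does."""
    for i, word in enumerate(tokens):
        for synset in wn.synsets(word):
            for lemma in synset.lemmas():
                if lemma.antonyms():
                    out = list(tokens)
                    out[i] = lemma.antonyms()[0].name().replace("_", " ")
                    return out
    return None

print(antonym_replace("declined to comment last Friday".split()))
```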
4. **Criterion-4 (Paraphrase Vs. Sentence Jumbling)**: In this criterion, we expect that when some words are swapped by each other in a sentence, the meaning of the perturbed sentence should be completely destroyed. The perturbed jumbled sentence \(S^{\prime}_{J}\) should no more convey the same meaning as the original sentence \(S\), and thus, it should not be placed close to the original sentence \(S\) in latent semantic space. On the other hand, a paraphrased sentence \(S^{\prime}_{P}\) conveys the same meaning and hence, should be closer to the original sentence \(S\) in the same latent space. To investigate this criterion, a similar equation is used as in criterion 3, \(Sim(S,S^{\prime}_{P})-Sim(S,S^{\prime}_{J})>\epsilon_{2})\) where the similarity score of antonym \(Sim(S,S^{\prime}_{A})\) is replaced with the similarity of a jumbled sentence, i.e., \(Sim(S,S^{\prime}_{J})\) and the epsilon is changed to \(\epsilon_{2}\). The value of \(\epsilon_{2}\) represents the expected minimum margin (\(>>0\)) for this criterion. This criterion is evaluated by applying the equation to each sample and plotting a cumulative histogram where the x-axis represents the expected minimum difference margins ranges between [-0.3 to 0.3], and the y-axis represents the number of sentence \(S\) that is within the minimum margin \(\epsilon_{2}\). Further, Figure 2 is a result of swapping \(n=3\) words, indicating that the sentence encoders failed to capture the impact of jumbled words on the sentence similarity task. The encoder
Figure 2: The figures demonstrate the cosine similarity difference for the Paraphrase Vs. Sentence Jumbling criterion. The scores are calculated based on \(Sim(S,S^{\prime}_{P})-Sim(S,S^{\prime}_{J})>\epsilon_{2}\). All five sentence encoders were tested on all three datasets. Here the epsilon is equal to \(\epsilon_{2}\).
models generated almost identical semantic representations for the original sentence \(S\) and its jumbled version \(S^{\prime}_{J}\) across all three datasets. This implies that the majority of the samples in Figure 2 had a difference between -0.3 and 0, and only a few samples showed positive differences, suggesting that current sentence encoders pay little attention to the word order of the sentence and rely almost entirely on the contextual words. Therefore, all five encoders performed in a manner opposite to what was expected, resulting in failure with respect to criterion-4. We have reported the results of all sentence encoders for \(n=1\) and \(n=2\) in the appendix (omitted here due to lack of space).
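A minimal sketch of the jumbling perturbation described above (randomly swapping \(n\) pairs of word positions; the choice of random seed and swap policy is purely illustrative) is:

```python
import random

def jumble(tokens, n_swaps=3, seed=0):
    """Randomly swap n_swaps pairs of word positions in the sentence."""
    rng = random.Random(seed)
    out = list(tokens)
    for _ in range(n_swaps):
        i, j = rng.sample(range(len(out)), 2)
        out[i], out[j] = out[j], out[i]
    return out

print(jumble("Levin's attorney Bo Hitchcock declined to comment last Friday".split()))
```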
## 7 Discussions and Conclusion
This paper examines the performance of five different sentence encoder models using the SentEval benchmark to evaluate their ability to produce high-quality embeddings for various downstream tasks. The results showed that four out of five models performed reasonably on the benchmark with no single winner for all cases. Although this is a promising result overall, it is still unclear whether the sentence encoders indeed capture basic linguistic properties (which is desired) while performing these downstream tasks or if they are counting on some latent features which are hard to interpret for humans. To further investigate this issue, the paper proposed four criteria to quantify the models' basic semantic understanding abilities in an unsupervised setting and evaluated all five sentence encoding techniques with respect to each criterion.
Our experimental results reveal that the Sentence-BERT model performed the best on the paraphrasing task, while LASER and Infersent were optimal for synonym replacement. However, the experiments conducted on antonym and sentence jumbling tasks revealed limitations in current sentence encoder models' ability to capture desired basic semantic properties. All models failed to differentiate between a sentence and its antonym, as evidenced by the left-skewed cumulative histogram on the respective datasets. The high overlap of words between sentences may be a possible reason for the failure, making it challenging for the encoder to detect subtle differences and leading to similar embeddings. The same results were observed on two additional datasets (QQP and MRPC), confirming the models' inability to capture the semantics of sentences in these tasks.
On the other hand, the evaluation of criterion 4 (Paraphrase Vs. Jumbling) demonstrates that the current sentence encoder models fail to capture the significance of the word order in a sentence; hence, the criterion is not satisfied by any sentence encoder model we tested. Since the current encoders are mainly trained on masked language modeling and next-sentence prediction tasks followed by fine-tuning on many arbitrary downstream tasks, it does not necessarily enforce that the encoders will understand word ordering properly. This is likely to be the reason for the failure of this criterion. Additionally, trends are similar to all three datasets, which confirms the conclusion that current sentence encoders struggle to capture the importance of word order in a sentence.
Based on the above observations, it can be inferred that current sentence encoders have limitations in capturing the semantic meaning of antonym sentences when there is a high degree of overlapping words. Additionally, the models tend to overprioritize the contextual words and, as a result, overlook the actual ordering of words within the sentence. On the contrary, the same sentence encoders demonstrated high performance on the SentEval benchmark, which consists of several downstream task datasets, despite the limitations discussed above. This raises a daunting dilemma about how a sentence encoder can be considered _"good"_ and _"reliable"_ when it fails to capture basic linguistic properties like jumbling or antonym interpretation yet performs well on downstream tasks. This could be attributed to two possible reasons: first, the SentEval benchmark may not be hard enough to properly evaluate sentence encoders; or second, similarity metrics such as cosine similarity may be inadequate for assessing sentence relations. Alternatively, there is room for improvement in the current sentence encoder models to better capture the semantic meaning of a sentence. Therefore, we believe it is necessary to develop a sentence encoder that can capture the subtle nuances of sentences and generate high-quality embeddings that can reflect all aspects of a sentence. Additionally, a diverse and robust benchmark is needed to evaluate sentence encoders accurately. Hence, further research and development in this area are needed to improve the capabilities of these models and achieve a more human-like understanding of sentences.
## 8 Limitation
Our findings are limited to the English language and five popular embedding models (SBert, USE, InferSent, LASER, and Doc2Vec). The experiments are primarily focused on unsupervised semantic understanding tasks where no training data / previous observation about the goal task is available. Thus, evaluation of the constructed perturbed sentences is required. Therefore, our findings may not hold for all possible downstream NLP tasks. However, in the absence of available training data for a particular domain, our findings can still be very useful to choose a suitable sentence encoder and designing initial experiments.
In future work, we intend to improve upon the limitations discussed above by incorporating antonym and word-order information to produce more generalized sentence embeddings. Additionally, one can study more recent large language models like ChatGPT and LLaMA to test their limitations on similar criteria.
|
2309.13554 | A Novel Stochastic Interacting Particle-Field Algorithm for 3D
Parabolic-Parabolic Keller-Segel Chemotaxis System | We introduce an efficient stochastic interacting particle-field (SIPF)
algorithm with no history dependence for computing aggregation patterns and
near singular solutions of parabolic-parabolic Keller-Segel (KS) chemotaxis
system in three space dimensions (3D). The KS solutions are approximated as
empirical measures of particles coupled with a smoother field (concentration of
chemo-attractant) variable computed by the spectral method. Instead of using
heat kernels causing history dependence and high memory cost, we leverage the
implicit Euler discretization to derive a one-step recursion in time for
stochastic particle positions and the field variable based on the explicit
Green's function of an elliptic operator of the form Laplacian minus a positive
constant. In numerical experiments, we observe that the resulting SIPF
algorithm is convergent and self-adaptive to the high gradient part of
solutions. Despite the lack of analytical knowledge (e.g. a self-similar
ansatz) of the blowup, the SIPF algorithm provides a low-cost approach to study
the emergence of finite time blowup in 3D by only dozens of Fourier modes and
through varying the amount of initial mass and tracking the evolution of the
field variable. Notably, the algorithm can handle at ease multi-modal initial
data and the subsequent complex evolution involving the merging of particle
clusters and formation of a finite time singularity. | Zhongjian Wang, Jack Xin, Zhiwen Zhang | 2023-09-24T05:33:19Z | http://arxiv.org/abs/2309.13554v1 | A Novel Stochastic Interacting Particle-Field Algorithm for 3D Parabolic-Parabolic Keller-Segel Chemotaxis System
###### Abstract
We introduce an efficient stochastic interacting particle-field (SIPF) algorithm with no history dependence for computing aggregation patterns and near singular solutions of parabolic-parabolic Keller-Segel (KS) chemotaxis system in three space dimensions (3D). The KS solutions are approximated as empirical measures of particles coupled with a smoother field (concentration of chemo-attractant) variable computed by the spectral method. Instead of using heat kernels causing history dependence and high memory cost, we leverage the implicit Euler discretization to derive a one-step recursion in time for stochastic particle positions and the field variable based on the explicit Green's function of an elliptic operator of the form Laplacian minus a positive constant. In numerical experiments, we observe that the resulting SIPF algorithm is convergent and self-adaptive to the high gradient part of solutions. Despite the lack of analytical knowledge (e.g. a self-similar ansatz) of the blowup, the SIPF algorithm provides a low-cost approach to study the emergence of finite time blowup in 3D by only dozens of Fourier modes and through varying the amount of initial mass and tracking the evolution of the field variable. Notably, the algorithm can handle at ease multi-modal initial data and the subsequent complex evolution involving the merging of particle clusters and formation of a finite time singularity.
_AMS subject classification:_ 35K57, 92C17, 65C35, 65M70, 65M75.
_Keywords:_ fully parabolic Keller-Segel system, interacting particle-field approximation, singularity detection, critical mass.
## 1 Introduction
Chemotaxis partial differential equations (PDEs) were introduced by Keller and Segel (KS [15]) to describe the aggregation of the slime mold amoeba Dictyostelium discoideum due to an attractive chemical substance. Related random walk model by Patlak was known earlier [24], see [29] for an analysis of basic taxis behaviors (aggregation, blowup, and collapse) based on reinforced random walks. We consider the parabolic-parabolic
(fully parabolic) KS system of the form:
\[\rho_{t} =\nabla\cdot(\mu\,\nabla\rho-\chi\,\rho\,\nabla c),\] \[\epsilon\,c_{t} =\Delta\,c-k^{2}\,c+\rho, \tag{1}\]
where \(\chi,\mu\) (\(\epsilon,k\)) are positive (non-negative) constants. The model is called elliptic if \(\epsilon=0\) (when \(c\) evolves rapidly to a local equilibrium), and parabolic if \(\epsilon>0\). Here \(\rho\) is the density of active particles (bacteria), and \(c\) is the concentration of chemo-attractant (e.g. food). See the detailed discussion in Section 2.
The KS systems (1) have been studied for several decades, with various cases and dimensions explored. For the parabolic-elliptic case with \(k=0\) and \(\epsilon=0\), Herrero et al [11] investigated the 3D case and found the existence of self-similar radial blowup, while such a blowup does not occur in 2D. An overview of blow-up phenomena, particularly in 2D, can be found in the book by Perthame [25]. In [8], Giga et al. further explored the parabolic-elliptic case with \(k=0\) and introduced the concept of type I blowup, denoted by \(y_{t}=y^{2}\). They demonstrated that when the spatial dimension \(d>3\), all type-I radial blowup is self-similar. More recently, Souplet and Winkler [27] provided a detailed profile of the 3D parabolic-elliptic self-similar blowup satisfying the inequality \(u(x,t)\leq C(T-t+|x|^{2})^{-1}\), where \(C\) is a constant.
The fully parabolic case, i.e. system (1) with \(\epsilon\neq 0\), has also been extensively studied. In the 2D fully parabolic case, Herrero and Velazquez [12] demonstrated the existence of self-similar Dirac-delta type blow-up in the 2D fully parabolic case for \(k\neq 0\); while Calvez and Corrias [1] and Mizoguchi [23] showed that under mild assumptions on the initial conditions, a global weak solution exists for mass \(M_{0}<8\pi\). In contrast, for super-critical mass, the system blows up in finite time under the smallness assumption of the second moment. Further work by Lemarie-Rieusset [17] proved global existence and stability in \(\mathbb{R}^{n}\) with small initial data in the critical Morrey space. When \(k=0\), Takeuchi [30] demonstrated the existence of a global strong solution on \(\mathbb{R}^{n}\) provided that the initial data is small in the homogeneous Besov space, which is scaling invariant.
Several notable numerical methods have been developed for KS systems to date. Chertock et al. [5] developed a finite-volume method for a class of chemotaxis models and a closely related haptotaxis model. This approach allows for accurate and efficient simulations of chemotaxis phenomena. Shen et al. [26] proposed an energy dissipation and bound preserving scheme that is not restricted to specific spatial discretization. The bound preserving property is achieved through modification of the system. In a related work, Hillen and Othmer [13] assumed a saturation concentration \(M\) for the bacteria, such that if \(\rho>M\), there is no chemo-attractant contribution. Under this assumption, the system does not blow up and still exhibits spiky solutions. Chen et al. [3] developed a fully-discrete finite element method (FEM) scheme for the 2D classical parabolic-elliptic Keller-Segel system, following the approach of Shen et al. [26]. They showed that the proposed scheme will blow up in a finite time, under assumptions similar to those in the continuous blow-up scenarios. In the classic setting, Liu and Wang [18] reformulated the equation using the Le Chatelier Principle to attain a positivity-preserving scheme. It is worth noting that all the aforementioned numerical methods are tailored for 2D cases.
Besides the Eulerian discretization methods above, there have been theoretical developments in the Lagrangian framework for the KS system (1) and related equations.
Stevens [28] derived an \(N\)-particle system with convergence in the fully parabolic case. Additionally, Havskovec and Sevcovic [9] developed a convergent regularized particle system for the 2D parabolic elliptic case. Havskovec and Markowich [10] demonstrated convergence in the BBGKY hierarchy modulo a gap due to the lack of uniqueness of the Boltzmann hierarchy. This gap was addressed by Mischler and Mouhot [22] who studied the propagation of chaos and mean-field limits for systems of indistinguishable particles undergoing collisions. Craig and Bertozzi [6] proved the convergence of a blob method for the related aggregation equation. In the study of the KS system, Liu et al. [20] and [19] developed a random particle blob method with a mollified kernel for the parabolic-elliptic case. They demonstrated convergence when the limiting (macroscopic mean field) equation admits a global weak solution. As noted by Mischler and Mouhot [22], the success of this analysis strongly relies on detailed knowledge of the nonlinear mean field equation, rather than the details of the underlying many-particle Markov process. A particle computation based on [9] for the 2D advective parabolic-elliptic KS system, i.e. (1) with \(\epsilon=0\) and an additional passive flow, was conducted in [16]. A deep learning study for chemotaxis aggregation in 3D laminar and chaotic flows based on a kernel regularization technique of a particle method by the present authors is in [31].
Most existing particle-field algorithms for KS equations are deterministic, assuming that the underlying particle systems are well-mixed. In this paper, we propose a novel stochastic interacting particle-field (SIPF) algorithm for the fully parabolic KS system (1). Our method takes into account the coupled stochastic particle evolution (density \(\rho\)) and the accompanying field (concentration \(c\)) in the system and allows for a self-adaptive simulation of focusing and potentially singular behavior. In the SIPF algorithm, we represent the active particle density \(\rho\) by empirical particles and the concentration field \(c\) is discretized by a spectral method instead of a finite difference method [7]. This is possible since the field \(c\) is smoother than density \(\rho\). We demonstrate the effectiveness of our method through numerical experiments in three space dimensions (3D), which have not been systematically computed and benchmarked to the best of our knowledge.
It is worth noting that the pseudo-spectral methods were employed to compute the nearly singular solutions of the 3D Euler equations [14]. Subsequently, the finite-time blowup of the 3D axisymmetric Euler equations was computed using the adaptive moving mesh method [21]. These methods represent the cutting edge in the computation of nearly singular solutions of the 3D Euler equations. Nevertheless, we also point out that the implementation of pseudo-spectral methods for 3D problems demands substantial computational resources, while the adaptive moving mesh method requires sophisticated design and advanced programming skills.
It is also worth noting that the Lagrangian algorithms in the computation of the parabolic-elliptic KS system, for instance [9], cannot be directly generalized to the fully parabolic case. Those algorithms rely on the fact that the field \(c\) at time \(t\) can be accessed through the particle density \(\rho\) at the same instant. Hence one only needs to update the particle density locally in time. A direct generalization to the fully parabolic case would require the historical particle density \(\rho\) from the starting time of the algorithm. An example and related convergence analyses can be found in [2]. However, from a computational perspective, the volume of such historical data increases in time and becomes a costly burden on memory and flops. In contrast, our SIPF algorithm computes particle and field once per time step without involving a long past history, so the computational cost does not grow in time.
The goal of this paper is to introduce a novel stochastic interacting particle-field algorithm (SIPF) for the fully parabolic KS system. Though we verify the convergence of the SIPF algorithm numerically, the theoretical study is left for future work.
The rest of the paper is organized as follows. In Section 2, we briefly review the blow-up behavior in the fully parabolic KS models under critical mass conditions and the Lagrangian formulations in the computation of KS models. In Section 3, we present our SIPF algorithms for solving the fully parabolic KS system by simplifying a theoretically equivalent yet computationally undesirable method with history dependent parabolic kernel functions (a naive extension of particle method in the parabolic-elliptic case) into efficient recursions. In Section 4, we show numerical results to demonstrate the performance of our method for 3D KS chemotaxis systems. Concluding remarks are given in section 5.
## 2 Parabolic-Parabolic KS System
In this section, we list some theoretical analyses of singular behaviors and related computational methods for Keller-Segel (KS) models in both the parabolic-elliptic case and the parabolic-parabolic (fully parabolic) case. To begin, we recall the KS model:
\[\rho_{t} =\nabla\cdot(\mu\,\nabla\rho-\chi\,\rho\,\nabla c), \tag{2}\] \[\epsilon c_{t} =\Delta\,c-k^{2}\,c+\rho,\] (3) \[x\in \Omega\subseteq\mathbb{R}^{d},\quad t\in[0,T]. \tag{4}\]
The first equation (2) of \(\rho\) models the evolution of the density of active particles (bacteria). The bacteria diffuse with mobility \(\mu\) and drift in the direction of \(\nabla c\) with velocity \(\chi\nabla c\), where \(\chi\) is called the chemo-sensitivity. The second equation (3) of \(c\) models the evolution of the concentration of the chemo-attractant (e.g. food). The increment of \(c\) is proportional to \(\rho\), which indicates the aggregation (attraction) between active particles. Another important physical parameter is \(\epsilon\) in Eq.(3), which models the time scale of the chemotaxis. When \(\epsilon\neq 0\), the system is referred to as the parabolic-parabolic Keller-Segel system. For \(\epsilon=0\) the system is reduced to the parabolic-elliptic case, which models the situation in which the chemical attractant released by the active particles instantly reaches equilibrium.
### From Critical Collapse to Coexistence of Blow-up and Global Smooth Solutions
The well-known KS dichotomy (critical collapse) states that \(8\pi\) is the critical mass for the simplest two-dimensional parabolic-elliptic KS system in \(\Omega=\mathbb{R}^{2}\), namely (1) with \(\epsilon=k=0\),
\[\rho_{t} =\nabla\cdot(\nabla\rho-\rho\,\nabla c),\] \[\Delta\,c =-\rho, \tag{5}\]
so that
1. If \(M_{0}<8\pi\), the system has a global smooth solution.
2. If \(M_{0}>8\pi\), the system blows up in finite time in the sense of \(|\cdot|_{\infty}\) norm.
It can be seen from the classical variance identity for system (5), [25], that,
\[\frac{d}{dt}\int_{x\in\mathbb{R}^{2}}|x|^{2}\,\rho(x)\,dx=\frac{M}{2\pi}(8\pi-M). \tag{6}\]
Then the solution of (5) exhibits a quantized concentration of mass at origin, a \(\delta\) type blow-up.
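For the reader's convenience, the dichotomy can be read off from (6) by integrating in time; the short computation below is included only as a reasoning aid.

```latex
% Integrating the variance identity (6) from 0 to t:
V(t) \;:=\; \int_{\mathbb{R}^{2}} |x|^{2}\,\rho(x,t)\,dx
      \;=\; V(0) + \frac{M}{2\pi}\,(8\pi - M)\,t .
% If M > 8\pi, the right-hand side becomes negative at the finite time
%   t_{*} = \frac{2\pi\,V(0)}{M\,(M-8\pi)},
% contradicting V(t) \ge 0, so the smooth solution cannot persist past t_{*};
% if M < 8\pi, the second moment grows linearly, consistent with global spreading.
```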
For system (5) on \(\mathbb{R}^{d}\) (\(d\geq 3\)), the identity (6) does not apply and the KS evolution is not as clear cut. Nonetheless, the coexistence of blow-up and global smooth solutions remains, depending on the size of the initial data. In addition, there exist blow-up profiles different from the \(\delta\)-type blowup. For example, it is shown in [11] that in 3D fully parabolic systems, there exist radial, positive, backward self-similar solutions of the form,
\[\rho(x,t)=\frac{V(x/\sqrt{T-t})}{T-t},\qquad 0<t<T, \tag{7}\]
where the radially decreasing profile function \(V\) satisfies \(\lim_{y\to\infty}y^{2}V(y)=L\in\mathbb{R}^{+}\).
Later in a more refined result by [27], the blowup is said to be type I if
\[0<\limsup_{t\to T}\left(T-t\right)\|\rho\|_{\infty}\,<\infty. \tag{8}\]
Then for radial initial data in \(L^{1}(\mathbb{R}^{3})\), if a blowup is type I, \(\exists\,C>0\) such that
\[\rho(x,t)\leq C(T-t+|x|^{2})^{-1},\quad 0<|x|\leq R,\quad 0<t<T. \tag{9}\]
On the other side of the dichotomy, it is shown in [30] that a global strong solution exists in the fully parabolic system (1) for small initial data.
To the best of our knowledge, no analysis is available for the blowup behavior of the KS system on \(\mathbb{R}^{3}\) starting from non-radial initial data. One must resort to numerical computation to investigate the possible singular behavior, which will be discussed in Section 4.3.
### Lagrangian formulations
As a fundamental step in deriving the algorithms, we introduce the Lagrangian formulation of the active particle density \(\rho\) in the KS system (1) and start with the elliptic system with \(\epsilon=k=0\), namely (5) in general dimension \(d\). From \(\Delta c=-\rho\) and the Green's function of the Laplacian operator in \(\mathbb{R}^{d}\), we know,
\[c(x,t)=\left\{\begin{array}{ll}-\frac{1}{2\pi}\int\ln|x-y|\,\rho(y,t)\,dy,\quad d =2\\ C_{d}\int\frac{1}{|x-y|^{d-2}}\,\rho(y,t)\,dy,\quad d\geq 3\end{array}\right., \tag{10}\]
where \(C_{d}=\frac{\Gamma(d/2+1)}{d(d-2)\pi^{d/2}}\). So the convection term in (2) turns to,
\[\nabla c(x)=-\frac{\Gamma(d/2)}{2\pi^{d/2}}\int\frac{x-y}{|x-y|^{d}}\,\rho(y, t)\,dy. \tag{11}\]
Now we arrive at the interactive stochastic differential equation system of \(P\) particles, \(\{X_{t}^{p}\}_{p=1:P}\),
\[dX_{t}^{p}=-\chi\frac{M}{P}\sum_{q\neq p}\frac{\Gamma(d/2)}{2\pi^{d/2}}\frac{X_{t}^{p}-X_{t}^{q}}{|X_{t}^{p}-X_{t}^{q}|^{d}}\,dt+\sqrt{2\mu}\,dW_{t}^{p},\quad p=1,\cdots,P, \tag{12}\]
where \(W^{p}_{t}\) denotes independent identically distributed standard Brownian motions. In [22], it is shown that, under mild regularity conditions, as \(P\to\infty\), the distribution of the empirical particles \(\{X^{p}_{t}\}_{p=1:P}\) converges to \(\rho\) in the continuous PDE system (2). Several novel numerical methods have been developed or implemented to study the singularity behavior in parabolic-elliptic Keller-Segel systems, see [19; 9; 31].
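As an illustration of (12), a minimal Python sketch of one Euler-Maruyama step for the interacting particle system is given below; the variable names are ours, and the direct \(O(P^{2})\) pairwise sum is kept only for clarity.

```python
import numpy as np
from math import gamma, pi

def ks_elliptic_step(X, M, chi, mu, dt, rng):
    """One Euler-Maruyama step of the interacting SDE (12) in dimension d."""
    P, d = X.shape
    Cd = gamma(d / 2) / (2 * pi ** (d / 2))            # prefactor Gamma(d/2)/(2 pi^{d/2})
    diff = X[:, None, :] - X[None, :, :]               # X^p - X^q, shape (P, P, d)
    dist = np.linalg.norm(diff, axis=-1)
    np.fill_diagonal(dist, np.inf)                     # exclude the q = p term
    kernel = diff / dist[..., None] ** d               # (X^p - X^q) / |X^p - X^q|^d
    drift = -chi * (M / P) * Cd * kernel.sum(axis=1)   # aggregation drift
    noise = np.sqrt(2.0 * mu * dt) * rng.standard_normal((P, d))
    return X + drift * dt + noise
```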
In the fully parabolic case (\(\epsilon\neq 0\)), the solution of the chemical concentration \(c\) comes from solving a parabolic equation, which is no longer Markovian as in (12). At time \(t>0\), the solution \(\rho\) on \([0,t]\) has to be involved in the representation of \(c\), namely,
\[c(\cdot,t)=e^{-k^{2}t}e^{t\Delta}c(\cdot,0)+\int_{0}^{t}e^{k^{2}(s-t)}e^{(t-s) \Delta}\,\rho(\cdot,s)\,ds, \tag{13}\]
where the heat semigroup operator \(e^{t\Delta}\) is defined by
\[(e^{t\Delta}f)(x,t):=\int\frac{e^{-|x-y|^{2}/(4t)}}{(4\pi t)^{d/2}}\,f(y)\,dy. \tag{14}\]
Similar to (12), the empirical particle system converging to density \(\rho\) reads:
\[dX^{p}=\,\chi\nabla_{X}\,c(X^{p}_{t},t)\,dt+\sqrt{2\,\mu}\,dW^{p},\,\,\,p=1, \cdots,P, \tag{15}\]
and \(W^{p}\)'s are independent Brownian motions in \(\mathbb{R}^{d}\). Due to the historical path dependence of the solution \(c\) in (13), direct computation of the drift \(\nabla_{X}\,c(X^{p}_{t},t)\) in (15) will lead to significant memory cost, which increases with the computational time \(T\). To the best of our knowledge, a memory-less algorithm to compute the fully parabolic KS system has not been developed. We will present one in the following section.
## 3 SIPF Algorithms for Parabolic-Parabolic KS
In this section, we present the SIPF algorithm for solving the fully parabolic KS models. Since we are interested in the spatially localized aggregation behavior discussed in Sec 2.1, it is reasonable to restrict the system (2) and (3) to a large domain \(\Omega=[-L/2,L/2]^{d}\) and assume a Dirichlet boundary condition for the particle density \(\rho\) and a Neumann boundary condition for the chemical concentration \(c\).
As a discrete algorithm, we assume the temporal domain \([0,T]\) is partitioned by \(\{t_{n}\}_{n=0:nT}\) with \(t_{0}=0\) and \(t_{nT}=T\). We approximate the density \(\rho\) by particles, i.e.
\[\rho_{t}\approx\frac{M_{0}}{P}\,\sum_{p=1}^{P}\delta(x-X^{p}_{t}),\,\,\,P\gg 1, \tag{16}\]
where \(M_{0}\) is the conserved total mass (integral of \(\rho\)). For the chemical concentration \(c\), we approximate by a Fourier basis, namely, \(c(\mathbf{x},t)\) has a series representation
\[\sum_{j,m,l\in\mathcal{H}}\,\,\alpha_{t;j,m,l}\,\exp(i2\pi j\,x_{1}/L)\exp(i2 \pi m\,x_{2}/L)\exp(i2\pi l\,x_{3}/L), \tag{17}\]
where \(\mathcal{H}\) denotes index set
\[\{(j,m,l)\in\mathbb{N}^{3}:|j|,|m|,|l|\leq\frac{H}{2}\}, \tag{18}\]
and \(i=\sqrt{-1}\).
Then at \(t_{0}=0\), we generate \(P\) empirical samples \(\{X_{0}^{p}\}_{p=1:P}\) according to the initial condition of \(\rho_{0}\) and set up \(\alpha_{0;j,m,l}\) by the Fourier series of \(c_{0}\).
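For concreteness, a short Python sketch of this initialization is given below for the case, used repeatedly in Section 4, where \(\rho_{0}\) is uniform on a ball; when \(c_{0}\equiv 0\), as in several experiments, all \(\alpha_{0;j,m,l}\) are simply zero. The function name is ours.

```python
import numpy as np

def sample_uniform_ball(P, radius=1.0, center=(0.0, 0.0, 0.0), rng=None):
    """Draw P i.i.d. samples uniformly from a ball in R^3 (empirical rho_0)."""
    rng = np.random.default_rng() if rng is None else rng
    directions = rng.standard_normal((P, 3))
    directions /= np.linalg.norm(directions, axis=1, keepdims=True)
    radii = radius * rng.random(P) ** (1.0 / 3.0)      # uniform in volume
    return np.asarray(center) + radii[:, None] * directions

X0 = sample_uniform_ball(P=10000)                      # e.g. P = 10000 particles
```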
For ease of presenting our algorithm, with a slight abuse of notation, we use \(\rho_{n}=\frac{M_{0}}{P}\,\sum_{p=1}^{P}\delta(x-X_{n}^{p})\), and
\[c_{n}=\sum_{j,m,l\in\mathcal{H}}\alpha_{n;j,m,l}\,\exp(i2\pi j\,x_{1}/L)\exp(i2 \pi m\,x_{2}/L)\exp(i2\pi l\,x_{3}/L)\]
to represent density \(\rho\) and chemical concentration \(c\) at time \(t_{n}\).
Considering time stepping system (1) from \(t_{n}\) to \(t_{n+1}\), with \(\rho_{n}\) and \(c_{n-1}\) known, our algorithm, inspired by the operator splitting technique, consists of two sub-steps: updating chemical concentration \(c\) and updating organism density \(\rho\).
_Updating chemical concentration \(c\)._ Let \(\delta t=t_{n+1}-t_{n}>0\) be the time step. We discretize the \(c\) equation of (1) in time by an implicit Euler scheme:
\[\epsilon\,(c_{n}-c_{n-1})/\delta t=(\Delta-k^{2})\,c_{n}+\rho_{n}. \tag{19}\]
Rearranging (19), we obtain the equation for \(c_{n}\):
\[(\Delta-k^{2}-\epsilon/\delta t)\,c_{n}=-\epsilon\,c_{n-1}/\delta t-\rho_{n}. \tag{20}\]
It follows that:
\[c_{n}=c(\mathbf{x},t_{n})=-\mathcal{K}_{\epsilon,\delta t}*(\epsilon\,c_{n-1} /\delta t+\rho_{n})=-\mathcal{K}_{\epsilon,\delta t}*(\epsilon\,c(\mathbf{x}, t_{n-1})/\delta t+\rho(x,t_{n})) \tag{21}\]
where \(*\) is spatial convolution operator, and \(\mathcal{K}_{\epsilon,\delta t}\) is the Green's function of the operator \(\Delta-k^{2}-\epsilon/\delta t\). In case of \(\mathbb{R}^{3}\), the Green's function \(\mathcal{K}_{\epsilon,\delta t}\) reads as follows
\[\mathcal{K}_{\epsilon,\delta t}=\mathcal{K}_{\epsilon,\delta t}(\mathbf{x})=-\frac{\exp\{-\beta|\mathbf{x}|\}}{4\pi|\mathbf{x}|},\quad\beta^{2}=k^{2}+\epsilon/\delta t. \tag{22}\]
The Green's function admits a closed form Fourier transform,
\[\mathcal{F}\mathcal{K}_{\epsilon,\delta t}(\omega)=-\frac{1}{|\omega|^{2}+ \beta^{2}}. \tag{23}\]
For the term \(-\mathcal{K}_{\epsilon,\delta t}*c_{n-1}\) in (21), by Eq.(23) it is equivalent to modify Fourier coefficients \(\alpha_{j,m,l}\) to \(\alpha_{j,m,l}/(4\pi^{2}j^{2}/L^{2}+4\pi^{2}m^{2}/L^{2}+4\pi^{2}l^{2}/L^{2}+ \beta^{2})\).
For the second term \(\mathcal{K}_{\epsilon,\delta t}*\rho\), we first approximate \(\mathcal{K}_{\epsilon,\delta t}\) with cos series expansion, then according to the particle representation of \(\rho\) in (16),
\[(\mathcal{K}_{\epsilon,\delta t}*\rho)_{j,m,l}\approx\frac{M_{0}}{P}\sum_{p=1}^{P}\frac{\exp(-i2\pi jX_{n,1}^{p}/L-i2\pi mX_{n,2}^{p}/L-i2\pi lX_{n,3}^{p}/L)(-1)^{j+m+l}}{4\pi^{2}j^{2}/L^{2}+4\pi^{2}m^{2}/L^{2}+4\pi^{2}l^{2}/L^{2}+\beta^{2}}. \tag{24}\]
Finally, we summarize the one-step update of Fourier coefficients of chemical concentration \(c\) in Alg.1.
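Since Alg.1 amounts to a recursion on the Fourier coefficients, a minimal Python sketch of the update (21)-(24) is shown below. The function and variable names are ours, the domain-shift factor \((-1)^{j+m+l}\) is taken from Eq. (24), and the direct particle sum is used instead of a fast transform purely for readability.

```python
import numpy as np

def update_concentration(alpha, X, M0, L, H, eps, k, dt):
    """One c-update, Eqs. (21)-(24): Fourier coefficients alpha_{n-1} -> alpha_n."""
    P = X.shape[0]
    beta2 = k ** 2 + eps / dt
    modes = np.arange(-H // 2, H // 2)
    J, Mm, Ll = np.meshgrid(modes, modes, modes, indexing="ij")
    denom = (2 * np.pi / L) ** 2 * (J ** 2 + Mm ** 2 + Ll ** 2) + beta2
    # particle sum of Eq. (24): Fourier data of rho_n with the shift factor (-1)^{j+m+l}
    phase = np.exp(-1j * 2 * np.pi / L *
                   (J[..., None] * X[:, 0] + Mm[..., None] * X[:, 1]
                    + Ll[..., None] * X[:, 2]))
    rho_hat = (M0 / P) * (-1.0) ** (J + Mm + Ll) * phase.sum(axis=-1)
    # recursion (21) in Fourier space, using F[K] = -1/(|omega|^2 + beta^2) from (23)
    return (eps * alpha / dt + rho_hat) / denom
```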
_Updating density of active particles \(\rho\)._ In the one-step update of density \(\rho_{n}\) represented by particles \(\{X_{n}^{p}\}_{p=1:P}\), we apply Euler-Maruyama scheme to solve the SDE (15):
\[X_{n+1}^{p}=X_{n}^{p}+\chi\nabla_{\mathbf{x}}c(X_{n}^{p},t_{n})\delta t+\sqrt{2 \,\mu\,\delta t}\,N_{n}^{p}, \tag{25}\]
where \(N_{n}^{p}\)'s are i.i.d. standard normal distributions with respect to the Brownian paths in the SDE formulation (15). For \(n>1\), substituting (21) in (25) gives:
\[X_{n+1}^{p}=X_{n}^{p}-\chi\nabla_{\mathbf{x}}\mathcal{K}_{\epsilon,\delta t}*( \epsilon\,c_{n-1}(\mathbf{x})/\delta t+\rho_{n}(\mathbf{x}))|_{\mathbf{x}=X_{ n}^{p}}\delta t+\sqrt{2\,\mu\,\delta t}\,N_{n}^{p}, \tag{26}\]
from which \(\rho_{n+1}(\mathbf{x})\) is constructed via (16).
In such a particle formulation, the computation of the spatial convolution is slightly different from the one in the update of \(c\), namely (21).
For \(\nabla_{\mathbf{x}}\mathcal{K}_{\epsilon,\delta t}*c_{n-1}(X_{n}^{p})\), to avoid the singular points of \(\nabla_{\mathbf{x}}\mathcal{K}_{\epsilon,\delta t}\), we evaluate the integral with the quadrature points that are away from \(0\). Precisely, denote the standard quadrature point in \(\Omega\) with
\[x_{j,m,l}=(j\,L/H,m\,L/H,l\,L/H), \tag{27}\]
where \(j\), \(m\), \(l\) are integers ranging from \(-H/2\) to \(H/2-1\). When computing the integral \(\nabla_{\mathbf{x}}\mathcal{K}_{\epsilon,\delta t}*c_{n-1}(X_{n}^{p})\), we evaluate \(\nabla_{\mathbf{x}}\mathcal{K}_{\epsilon,\delta t}\) at \(\{X_{n}^{p}+\bar{X}_{n}^{p}-x_{j,m,l}\}_{j,m,l}\) where a small spatial shift \(\bar{X}_{n}^{p}=\frac{H}{2L}+\lfloor\frac{X_{n}^{p}}{H/L}\rfloor\frac{H}{L}-X ^{p}\) and \(c\) at \(\{x_{j,m,l}-\bar{X}_{n}^{p}\}_{j,m,l}\) correspondingly. The latter one is computed by inverse Fourier transform of shifted coefficients, with \(\alpha_{j,m,l}\) modified to \(\alpha_{j,m,l}\exp(-i2\pi j\bar{X}_{n;1}^{p}/L-i2\pi m\bar{X}_{n;2}^{p}/L-i2\pi l \bar{X}_{n;3}^{p}/L)\) where \((\bar{X}_{n;i}^{p})\) denotes the \(i\)-th component of \(\bar{X}_{n}^{p}\).
The term \(\nabla_{\mathbf{x}}\mathcal{K}_{\epsilon,\delta t}*\rho(X_{n}^{p},t_{n})\) is straightforward thanks to the particle representation of \(\rho(X_{n}^{p},t_{n})\) in (16):
\[\nabla_{\mathbf{x}}\mathcal{K}_{\epsilon,\delta t}*\,\rho_{n}(X_{n}^{p})=\int\nabla_{\mathbf{x}}\mathcal{K}_{\epsilon,\delta t}(X_{n}^{p}-y)\,\rho_{n}(y)\,dy\approx\sum_{q=1,q\neq p}^{P}\frac{M}{P}\nabla_{\mathbf{x}}\mathcal{K}_{\epsilon,\delta t}(X_{n}^{p}-X_{n}^{q}). \tag{28}\]
We summarize the one-step update (for \(n>1\)) of density in SIPF as in Alg.2.
Combining (21) and (26), we conclude that the recursion from \((\{X_{n}^{p}\}_{p=1:P},\rho_{n}(\mathbf{x}),c_{n-1}(\mathbf{x}))\) to \((\{X_{n+1}^{p}\}_{p=1:P},\rho_{n+1}(\mathbf{x}),c_{n}(\mathbf{x}))\) is complete. We summarize the SIPF method in the following Algorithm 3.
```
Data: Distribution \(\rho_{n}\) represented by empirical samples \(X_{n}\); concentration \(c_{n-1}\) represented by Fourier coefficients \(\alpha_{n-1}\);
for \(p=1\) to \(P\) do
  \(X_{n+1}^{p}\gets X_{n}^{p}+\sqrt{2\mu\delta t}\,N\), where \(N\) is a randomly generated standard normal vector.
  for \(q=1\) to \(P\), \(q\neq p\) do
    \(X_{n+1}^{p}\gets X_{n+1}^{p}-\frac{\chi M\delta t}{P}\nabla_{\mathbf{x}}\mathcal{K}_{\epsilon,\delta t}(X_{n}^{p}-X_{n}^{q})\)
  end
  \(\bar{X}_{n}^{p}\leftarrow\frac{H}{2L}+\lceil\frac{X_{n}^{p}}{H/L}\rceil\frac{H}{L}-X_{n}^{p}\)
  for \((j,m,l)\in\mathcal{H}\) do
    \(F_{j,m,l}\leftarrow\nabla_{\mathbf{x}}\mathcal{K}_{\epsilon,\delta t}(X_{n}^{p}+\bar{X}_{n}^{p}-x_{j,m,l})\), with \(x_{j,m,l}\) from Eq. (27)
    \(G_{j,m,l}\leftarrow\alpha_{j,m,l}\exp(-i2\pi j\bar{X}_{n;1}^{p}/L-i2\pi m\bar{X}_{n;2}^{p}/L-i2\pi l\bar{X}_{n;3}^{p}/L)\)
  end
  \(\tilde{G}\gets iFFT(G)\)
  \(X_{n+1}^{p}\gets X_{n+1}^{p}-\epsilon\chi(F,\tilde{G})\frac{L^{3}}{H^{3}}\), where \((\cdot,\cdot)\frac{L^{3}}{H^{3}}\) denotes an inner product corresponding to \(L^{2}(\Omega)\) quadrature.
end
Result: Output \(\rho_{n+1}\) represented by updated \(X_{n+1}\).
```
**Algorithm 2** One step update of density in SIPF
```
Data: Initial distribution \(\rho_{0}\), initial concentration \(c_{0}\);
Generate \(P\) i.i.d. samples \(X^{1},X^{2},\cdots,X^{P}\) following distribution \(\rho_{0}\).
for \(p\gets 1\) to \(P\) do
  Compute \(X_{1}^{p}\) by (25), with \(c_{-1}=c_{0}\)
end
Compute \(c_{1}\) by Alg.1 with \(c_{0}\) and \(\rho_{1}=\sum_{p=1}^{P}\frac{M}{P}\delta_{X_{1}^{p}}\).
for step \(n\gets 2\) to \(N=T/\delta t\) do
  Compute \(X_{n}\) by Alg.2 with \(\rho_{n-1}\) and \(c_{n-2}\)
  Compute \(c_{n}\) by Alg.1 with \(c_{n-1}\) and \(\rho_{n}=\sum_{p=1}^{P}\frac{M}{P}\delta_{X_{n}^{p}}\).
end
```
**Algorithm 3** Stochastic Interacting Particle-Field Method
## 4 Numerical Experiments
### Aggregation Behaviors
To illustrate the functionality of the algorithm, we start with two examples. In both cases, the initial distribution \(\rho_{0}\) is assumed to be a uniform distribution over a ball centered at the origin with radius \(1\), see Fig.1(a). Also in both cases, we assume the following model parameters,
\[\mu=\chi=1,\quad\epsilon=10^{-4}\;\text{and}\;k=10^{-1}. \tag{29}\]
for the fully parabolic Keller Segel model (1). The choice (29) is made so that the model exhibits comparable behavior as the corresponding parabolic-elliptic KS system whose blow-up behavior is known. For the first example, the total mass is chosen to be \(M_{0}=20\), while for the second, the mass is \(M_{0}=80\).
In the numerical computation of both examples, we use \(H=24\) Fourier basis in each spatial dimension to discretize chemical concentration \(c\) and use \(P=10000\) particles to represent approximated distribution \(\rho\). The computational domain is in the domain \(\Omega=[-L/2,L/2]^{3}\) where \(L=8\). We then compute the evolution of \(c\) and \(\rho\) via Alg.3 with \(\delta t=10^{-4}\) up to \(T=0.1\).
In Fig.1, we plot the distribution \(\rho\) by empirical samples, at the starting time \(T=0\) and the final computation time \(T=0.1\). In Fig.1(b), we can see the diffusive behavior compared with the initial distribution shown in Fig.1(a). In Fig.1(c), where the total mass is increased from \(M_{0}=20\) to \(M_{0}=80\), we can see the particles become concentrated at the origin, which indicates a possible blow-up of the continuous system.
In Fig.2, we present the chemical concentration \(c\) at final time \(T=0.1\) and third component \(z=0\) for various \(M_{0}\). By comparing the subfigures, we can see that in the large total mass case, \(c\) exhibits a sharp profile at the origin due to the near singular behavior of \(\rho\) towards a possible blow-up.
Figure 1: Density \(\rho\) approximated by empirical distribution at \(T=0.1\): the mass effect on focusing.

Figure 2: Chemical concentration \(c\) at final time \(T=0.1\), sliced at \(z=0\).

Figure 3: Maximum of chemical concentration \(c\) vs computation time \(T\) with different total mass \(M_{0}\).

Furthermore, if we assume there exists a self-similar profile of \(\rho\) at the origin as discussed in [27] and Section 2.1, namely \(\rho(x,t)\sim\frac{1}{|x|^{2}}\), then by (1) the Fourier coefficients of the chemical concentration \(c\) have the following asymptotics,
\[\mathcal{F}c(\omega)\sim\frac{1}{|\omega|^{2}+k^{2}}\hat{\rho}\sim\frac{1}{(| \omega|^{2}+k^{2})|\omega|}. \tag{30}\]
Then the maximum of \(c\) in the computation shall vary vs the discretization parameter \(H\). More precisely, we note at the origin,
\[c(0)\sim\int\frac{1}{(|\omega|^{2}+k^{2})|\omega|}e^{i\omega x}d\omega|_{x=0}= \int\frac{1}{(|\omega|^{2}+k^{2})|\omega|}d\omega. \tag{31}\]
In practical discretization, the range of integral (31) is related to the maximum frequency, namely \([-\frac{\pi}{L}(\frac{H}{2}-1),\frac{\pi}{L}\cdot\frac{H}{2}]^{3}\). Then, for the type of \(\frac{1}{|x|^{2}}\) profile blow up,
\[\|c\|_{\infty}=\mathcal{O}(\ln(H)). \tag{32}\]
Similar analysis shows for the type of \(\delta(x)\) profile blow up,
\[\|c\|_{\infty}=\mathcal{O}(H). \tag{33}\]
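The scalings (32) and (33) follow from a radial estimate of the truncated integral (31); the short computation below is included only as a reasoning aid.

```latex
% Truncating (31) at the largest resolved frequency |\omega| \lesssim \pi H / L:
\int_{|\omega|\le \pi H/L} \frac{d\omega}{(|\omega|^{2}+k^{2})\,|\omega|}
  = 4\pi \int_{0}^{\pi H/L} \frac{r\,dr}{r^{2}+k^{2}}
  = 2\pi \ln\!\Big(1+\frac{\pi^{2}H^{2}}{k^{2}L^{2}}\Big)
  = \mathcal{O}(\ln H),
% which gives (32); for a \delta-type profile \hat{\rho}=\mathcal{O}(1), so
\int_{|\omega|\le \pi H/L} \frac{d\omega}{|\omega|^{2}+k^{2}}
  = 4\pi \int_{0}^{\pi H/L} \frac{r^{2}\,dr}{r^{2}+k^{2}}
  = \mathcal{O}(H),
% which gives (33).
```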
In Fig.3, we present the maximum value of \(c\) vs the computational time \(T\) for different numbers of Fourier modes \(H\) and total mass \(M_{0}\). We can see, in the case of a possible blow-up (Fig.3(b)), that the maximum of \(c\) varies dramatically for different \(H\). In the following investigation, we will use this as an indicator of a possible blow-up.
Furthermore, under the same configuration in the case of \(M_{0}=80\), we take \(T=1\) to achieve a numerically stable \(\|c\|_{\infty}\), and test for \(H\) ranging from \(8\) to \(24\). In Fig.4, we plot \(\|c\|_{\infty}\) vs \(H\) and observe that the maximum of \(c\) grows near-linearly in \(H\).
**Remark 4.1**.: _Similar ideas that detect blow-ups by comparing maximum values computed under different discretizations can be found in the literature on the finite volume approach to 2D Keller-Segel systems. For example, in [4] the \(\delta\)-type singularities in the 2D system are identified when \(\|\rho\|_{\infty}=\mathcal{O}(\frac{1}{\Delta x\Delta y})\)._
Figure 4: Maximum of \(c\) vs. the number of Fourier modes \(H\) (in each dimension), total mass \(M_{0}=80\).
### Convergence over \(\delta t\)
Now we turn to validating the convergence of the algorithm with respect to the time step \(\delta t\). In this regard, we consider the same initial condition (\(\rho\) and \(c\) at \(t=0\)) and physical parameters, see (29), as in the first example. Also, we keep the number of Fourier modes in each dimension as \(H=24\), the number of particles \(P=10000\), and the computational domain \(\Omega=[-L/2,L/2]^{3}\) with \(L=8\). Lastly, we set \(M_{0}=80\) and \(T=0.01\), when the system has not yet formed any singularities (see Fig.3(b)). To investigate the convergence, we consider \(\delta t\) in the range between \(2^{-8}T\) and \(2^{-4}T\) and take the solution with \(\delta t_{ref}=2^{-11}T\) as the reference solution. In Fig.5, we compute the \(L_{2}\) relative error of the chemical concentration \(c\) at the final time \(T\). In addition, we fit the slope of the error vs \(\delta t\) in the logarithmic scale and find \(e(\delta t)=\mathcal{O}(\delta t^{1.011})\), indicating that the algorithm is approximately first order in time.
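The fitted order quoted above can be reproduced with a least-squares fit in log-log scale once the errors for each time step are available; in the sketch below the error values are placeholders, not the data of Fig. 5.

```python
import numpy as np

def observed_order(dts, errors):
    """Least-squares slope of log(error) vs log(dt), i.e. the observed order in time."""
    slope, _ = np.polyfit(np.log(dts), np.log(errors), 1)
    return slope

T = 0.01
dts = T * 2.0 ** np.arange(-8, -3)                             # dt = 2^{-8} T, ..., 2^{-4} T
errors = np.array([1.1e-4, 2.2e-4, 4.5e-4, 9.1e-4, 1.8e-3])    # placeholder values only
print(f"observed order ~ {observed_order(dts, errors):.3f}")
```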
### Blow up behaviors
As mentioned in Sec.2.1, it is a well-known dichotomy that \(8\pi\) is the critical mass for the simplest two-dimensional parabolic-elliptic KS system (5).
1. If \(M_{0}<8\pi\), the system has a global smooth solution.
2. If \(M_{0}>8\pi\), the system has no global smooth solutions.
For the fully parabolic system, or for (5) with passive advection, however, no variance identity like (6) is known. One must resort to numerical computation to investigate the physical factors that lead to the possible blow-up behaviors. As suggested by the asymptotics (32) and (33), in the following examples we will test the two cases \(H=24\) and \(H=12\) and compare \(\|c\|_{\infty}\) to detect a possible blowup.
_Mass dependence._ We start by investigating the critical mass \(M_{0}\), which plays the dominant role in the dichotomy of the simple 2D parabolic-elliptic system (5). To this end, we initialize the algorithm with a uniform distribution over the unit ball centered at the origin and \(c(0,x)=0\). We then apply the algorithm with two different \(H\) to compute the density and chemical concentration until \(T=1\). To identify a possible blow-up, we compute the ratio of \(|c|_{\infty}\) between the two cases. In Fig.6(a) we present the ratio, namely \(\frac{|c|_{\infty,H=24}}{|c|_{\infty,H=12}}\), over time for various \(M_{0}\). We can see the ratio increases dramatically when a potential blow-up forms for \(M_{0}\geq 47.6\). In Fig.6(b) we present the ratio at the final time \(T=1\), indicating that the critical mass of the aforementioned initial condition lies between \(47.6\) and \(47.8\).
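The ratio-based detection used above can also drive a simple bisection on the total mass. In the sketch below, run_sipf is a hypothetical driver (not one of this paper's listings) assumed to run Algorithm 3 for a given \(M_{0}\) and \(H\) and return \(\|c\|_{\infty}\) at the final time.

```python
def blowup_indicator(M0, threshold=2.0):
    """Flag a possible blowup by comparing ||c||_inf computed at two resolutions."""
    ratio = run_sipf(M0=M0, H=24) / run_sipf(M0=M0, H=12)   # run_sipf: hypothetical driver
    return ratio > threshold

def bracket_critical_mass(lo=20.0, hi=80.0, tol=0.2):
    """Bisection bracketing of the critical mass between sub- and super-critical M0."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if blowup_indicator(mid):
            hi = mid                                         # mid is super-critical
        else:
            lo = mid                                         # mid is sub-critical
    return lo, hi
```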
_Geometry dependence._ Unlike the simplest parabolic-elliptic KS system (5), where the total mass is the only factor that determines the aggregation behaviors, we find experimentally that the critical mass varies for different initial distributions of \(\rho\). For example, we follow the same configuration as in the experiment of finding the critical mass (as shown in Fig.6) except that the initial distribution is replaced by the uniform distribution on a ball centered at the origin with radius \(0.8\). Given a more concentrated initial distribution, we find the critical mass for the system decreases. More precisely, in Fig.7(a), we present the ratio of \(|c|_{\infty}\) for various total mass \(M_{0}\) vs. computational time \(T\). We can see a sharp change of the ratio when the total mass \(M_{0}\) is large enough (\(M_{0}\geq 39\)) and the possible singularities have formed, while for \(M_{0}\) that is relatively small (\(M_{0}\leq 38.8\)) the ratio is stable near \(1\) over the computational time. In Fig.7(b), we present the ratio at the final time \(T=0.1\) vs. total mass \(M_{0}\), which indicates that the critical mass for such an initial condition is between \(38.8\) and \(39\).
Figure 6: Ratio of \(|c|_{\infty}\)’s from \(2\) runs with \(H=24\) and \(H=12\), revealing critical mass for blowup.
Figure 7: Ratio of \(|c|_{\infty}\)’s from \(2\) runs with \(H=24,12\); particles stay within initial radius \(0.8\).
### Aggregation behaviors from non-radial initial data
In this subsection, we investigate the aggregation behaviors in more general distributions. To this end, we consider a more practical scenario where the initial distribution \(\rho\) models several separated clusters of organisms and the mass in each individual cluster is below the critical mass while the total mass is super-critical. To be more concrete, we assume the initial distribution is a uniform distribution on four balls with a radius \(0.5\) and centered at four vertices of a regular tetrahedron, namely,
\[\begin{pmatrix}1\\ 0\\ 0\end{pmatrix},\quad\begin{pmatrix}-\frac{1}{2}\\ \frac{\sqrt{3}}{2}\\ 0\end{pmatrix},\quad\begin{pmatrix}-\frac{1}{2}\\ -\frac{\sqrt{3}}{2}\\ 0\end{pmatrix},\quad\begin{pmatrix}0\\ 0\\ \sqrt{2}\end{pmatrix}. \tag{34}\]
See also Fig.8(a) for the scatter plot of particles representing the initial distribution. We assume the total mass to be \(M_{0}=80\) and so each cluster has a mass of \(20\) which is below the critical mass for a ball with radius \(r=0.5\). Then we apply the algorithm to compute the KS system up to \(T=0.5\) with \(H=24\) and \(H=12\) while keeping the rest of the configurations. In Fig.8(b), we compute the ratio between the maxima of \(c\) vs time with two different spatial discretizations. We can see the singularities formed in the system at around \(T=0.3\).
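A minimal Python sketch of this four-cluster initialization, reusing the ball sampler sketched in Section 3, is given below; the names are ours.

```python
import numpy as np

# Vertices of the regular tetrahedron in Eq. (34); each cluster has radius 0.5.
VERTICES = np.array([
    [1.0, 0.0, 0.0],
    [-0.5, np.sqrt(3.0) / 2.0, 0.0],
    [-0.5, -np.sqrt(3.0) / 2.0, 0.0],
    [0.0, 0.0, np.sqrt(2.0)],
])

def sample_four_clusters(P, rng):
    """P particles split evenly among the four balls of Eq. (34)."""
    per_cluster = P // 4
    clusters = [sample_uniform_ball(per_cluster, radius=0.5, center=v, rng=rng)
                for v in VERTICES]
    return np.concatenate(clusters, axis=0)
```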
In Fig.9, we present the scatter plots of particles between \(T=0.1\) and \(T=0.4\). Comparing Fig.8(a) with Fig.9(a), we can see diffusive behavior. This is due to the mass in each individual cluster being below the critical mass. Such diffusive behavior lasts until around \(T=0.2\), see Fig.9(b), where the active particles form a single larger cluster. The mass of the new cluster centered at the origin is \(M_{0}=80\). Then in Fig.9(c), the aggregation starts to form a singularity, which can also be seen from the sharp increase in the ratio of the maximum of \(c\) in Fig.8(b). Lastly, in Fig.9(d), we can directly identify the possible blow-up at the origin through the scatter plot.
### Critical mass and blowup in parabolic-parabolic KS
As the last example, we assess the singular solutions in the fully parabolic system. For expository purposes, we set \(\epsilon=0.1\) in (1) and keep the rest of the physical parameters.
Figure 8: Identifying the formation of a finite time singularity at \(t\approx 0.3\) in non-radial solutions.
Figure 9: Particle scatter plot at \(T=0.1:0.1:0.4\): three cluster merging and a singularity formation.
The initial condition is assumed to be a uniform distribution on a ball with radius \(0.8\) and \(c(x,0)=0\).
From Fig.7, we know the critical mass is around \(M_{0}=39\). We apply the same computational configuration as in Fig.7, except that the domain is enlarged to \(L=12\) to accommodate the possible diffusive behavior. We test our algorithm in two cases, \(M_{0}=40\) and \(M_{0}=160\).
The behaviors of the system are reported in Fig.10. In Fig.10(a) and (b), we present the scatter plots of the particles representing the density \(\rho\) with \(M_{0}=40\) and \(M_{0}=160\), respectively. We find that despite the initial mass \(M_{0}=40\) being larger than the critical mass in the case of \(\epsilon=10^{-4}\), the system does not blow up. We report that the variance of the particles grows linearly in the computational time \(T\), with a fitted diffusion coefficient of \(1.696\). In the absence of the chemical attractant, namely \(\chi=0\), the diffusion coefficient is expected to be \(4\mu=4\). For \(M_{0}=160\), by contrast, the system exhibits a possible singularity at the origin. In Fig.10(c), we present the ratio of \(|c|_{\infty}\) under \(H=24\) and \(H=12\) for both initial masses. Similar to the observation of Fig.10(a) and (b), the blow-up behavior crucially depends on a critical level of the initial mass.
## 5 Concluding Remarks
We introduced a stochastic interacting particle and field algorithm, observed its convergence, and demonstrated its efficacy in computing blowup dynamics of fully parabolic KS systems in 3D from general non-radial initial data. The algorithm is recursive with no history dependence, and the field variable is computed by FFT. Since the field variable (concentration) is smoother than the density, the FFT approach works with only dozens of Fourier modes. The aggregation or focusing behavior in the density variable is resolved by 10k particles. The algorithm successfully detected blowup through the field variable in terms of the critical amount of initial mass. The algorithm is self-adaptive and does not rely on any ansatz of the blowup, which is unknown except in the parabolic-elliptic KS system. A weakness is the potentially high cost of the FFT in 3D when a large number of Fourier modes is required for a high-resolution computation near the blowup time. We plan to study this issue in future work.
Figure 10: Effects of initial mass \(M_{0}\) on focusing behavior (finite time blowup).
## Acknowledgements
ZW was partially supported by NTU SUG-023162-00001, and JX by NSF grant DMS-2309520. ZZ was supported by the Hong Kong RGC grant (Projects 17300318 and 17307921), the National Natural Science Foundation of China (Project 12171406), Seed Funding Programme for Basic Research (HKU), the Outstanding Young Researcher Award of HKU (2020-21), and Seed Funding for Strategic Interdisciplinary Research Scheme 2021/22 (HKU).
|
2309.04996 | Quantum non-Markovianity, quantum coherence and extractable work in a
general quantum process | A key concept in quantum thermodynamics is extractable work, which specifies
the maximum amount of work that can be extracted from a quantum system.
Different quantities are used to measure extractable work, the most prevalent
of which are ergotropy and the difference between the non-equilibrium and
equilibrium quantum free energy. Using the former, we investigate the evolution
of extractable work when an open quantum system goes through a general quantum
process described by a completely-positive and trace-preserving dynamical map.
We derive a fundamental equation of thermodynamics for such processes as a
relation between the distinct sorts of energy change in such a way the first
and second laws of thermodynamics are combined. We then identify the
contributions made by the reversible and irreversible processes in this
equation and demonstrate that they are respectively responsible for the heat
flow and change in the extractable work during the process. Furthermore, we
discuss the potential benefit of this assignment in favor of a clear
explanation of the impact of quantum effects on the evolution of extractable
work. Specifically, we establish this by directly connecting the extractable
work with standard quantifiers of quantum non-Markovianity and quantum
coherence during the process. We illustrate these results with two examples. | Amin Mohammadi, Afshin Shafiee | 2023-09-10T11:05:35Z | http://arxiv.org/abs/2309.04996v2 | # Quantum non-Markovianity, quantum coherence and extractable work in a general quantum process
###### Abstract
A key concept in quantum thermodynamics is extractable work, which specifies the maximum amount of work that can be extracted from a quantum system. Different quantities are used to measure extractable work, the most prevalent of which are ergotropy and the difference between the non-equilibrium and equilibrium quantum free energy. Using the former, we investigate the evolution of extractable work when an open quantum system goes through a general quantum process described by a completely-positive and trace-preserving dynamical map. We derive a fundamental equation of thermodynamics for such processes as a relation between the distinct sorts of energy change in such a way the first and second laws of thermodynamics are combined. We then identify the contributions made by the reversible and irreversible processes in this equation and demonstrate that they are respectively responsible for the heat flow and change in the extractable work during the process. Furthermore, we discuss the potential benefit of this assignment in favor of a clear explanation of the impact of quantum effects on the evolution of extractable work. Specifically, we establish this by directly connecting the extractable work with standard quantifiers of quantum non-Markovianity and quantum coherence during the process. We illustrate these results with two examples.
## I Introduction
Thermodynamics can be thought of as an extension of classical mechanics to contain concepts such as temperature and heat in the study of properties of macroscopic systems [7]. Depending on the underlying mechanics, one may wonder how the transition from classical to quantum mechanics will affect the laws of thermodynamics. It becomes even more intriguing if the system under study exhibits quantum effects that have no classical counterpart. Following advances in laboratory and quantum technological implementations, much attention in recent years has been devoted to understanding the thermodynamics of small quantum systems [7]. One of the most fundamental challenges in this new field of research is finding general definitions of work and heat for arbitrary quantum systems. Although standard formulations of work and heat exist, with limitations to weak coupling conditions and Markovian dynamics [7], the situation in general quantum evolutions is rather unclear. The difficulty arises from the fact that both work and heat are path-dependent quantities that cannot be described by Hermitian operators [7], the standard way to define physical observables in quantum mechanics. In the absence of a single, universal concept, several definitions based on various criteria have emerged. For example, a variety of approaches have been proposed, ranging from the use of quantifiers such as ergotropy [7] and the free energy difference [7] ; [8] for work extraction from the non-passivity of quantum states, to statistical treatments attempting to measure quantum work during a process by constructing appropriate probability distributions [9] ; [10].
In this regard, in a remarkable study Binder _et al._[7] established, in an operational sense, a relation resembling the first law of thermodynamics between the work done, extractable work, heat and change of internal energy for a quantum system evolving in a general quantum process represented by a completely-positive and trace-preserving (CPTP) map. The constituents of this relation are the difference in ergotropy between the initial and final states of the map, the adiabatic work and an introduced operational heat, whose sum is equal to the change in internal energy of the open system due to the general quantum process.
In the present study, we establish similarly an energy balance relation for general quantum processes in which the free energy work is used to measure the extractable work from the system during the process. This relation replaces the above-mentioned operational heat with the change in entropy of the system due to the fact that in contrast to ergotropy, the free energy work exploits the resource from open quantum systems. We can think of this relation as the fundamental thermodynamic equation combining the first and second laws of thermodynamics for general quantum processes. We then show the evolution of heat and the extractable work are determined respectively by the reversible and irreversible change in the entropy of the system. The fact that irreversibility governs the evolution of the extractable work underlies the main results of this paper. We demonstrate how this fact can be used to link directly non-Markovianity with extractable work and its rate of change during evolution. In addition, We discuss how such a utility is also provided for exploring the connection between the amount of extractable work and the time evolution of quantum coherence, a well-known resource for many quantum processes. In the end, as an illustration of our results, we investigate the effects of quantum non-Markovianity and quantum coherence on the evolution of the charging power for two types of quantum batteries: a two-qubit battery-charger model and a two-qubit battery charged by quantum coherence due to one photon mode mediation.
Preliminaries
The first law of thermodynamics describes the contribution of work and heat to the internal energy change in a thermodynamic process. In quantum thermodynamics, depending on whether one considers a system out of passive or thermal equilibrium (completely passive) states as a resource from which work can be extracted, ergotropy and the free energy difference are the two most widely used extractable work quantifiers. More precisely, ergotropy [7] quantifies the maximum work that can be extracted from a system by transforming it from a non-passive to a passive state in a cyclic unitary manner
\[W_{e}=tr[\rho H-\pi H]. \tag{1}\]
The non-passive state \(\pi\) is represented as \(\pi=\Sigma\ r_{n}|\varepsilon_{n}><\varepsilon_{n}|\) provided that the state \(\rho\) and Hamiltonian \(H\) are expressed in their spectral decomposition respectively in a decreasing and increasing order i.e., \(\rho=\Sigma r_{n}|r_{n}><r_{n}|\) and \(H=\Sigma\varepsilon_{n}|\varepsilon_{n}><\varepsilon_{n}|\) with \(r_{n+1}\leq r_{n}\) and \(\varepsilon_{n+1}\geq\varepsilon_{n}\ \forall n\). Cyclicity, here, refers to the fact that Hamiltonian is identical in the initial and final points of evolution. The ergotropy has been measured experimentally in the quantum heat engines with spin [2] and single atom [7] as working fluids and recently in quantum batteries modeled by low dimensional metal complexes [7]. An extension of ergotropy for open quantum systems and non-unitary evolutions has been made by Binder _et al._[7]. They recovered the first law of thermodynamics for finite quantum systems undergoing general quantum processes:
\[\Delta E=\Delta W_{e}+<W>_{ad}+<Q>_{op}. \tag{2}\]
Here, \(\Delta W_{e}=W_{e}(H_{\tau},\rho_{\tau})\) - \(W_{e}(H_{0},\rho_{0})\) is the change in ergotropy of the system during the process (\(H_{0}\),\(\rho_{0}\))\(\rightarrow\)(\(H_{\tau},\rho_{\tau}\)), in which \(H_{0}\) and \(H_{\tau}\) are the initial and final Hamiltonians and \(\rho_{0}\) and \(\rho_{\tau}\) are the initial and final quantum states that denote the input and output of the CPTP map \(\Lambda_{\tau}\), i.e. \(\rho_{\tau}=\Lambda_{\tau}\rho_{0}\). Although \(\Delta W_{e}\) bears much resemblance to the change of a thermodynamic state function, as it depends only on the states and Hamiltonians at the initial and final points of a process, it enters here as a genuine out-of-equilibrium contribution in the first-law relation. This is important because any generalization of thermodynamic laws to quantum regimes must have the ability to describe both equilibrium and non-equilibrium situations.
Also, \(<W>_{ad}\) measures the change in internal energy due to a non-cyclic unitary transformation in which the change in the Hamiltonian is assumed to be adiabatic in the quantum sense, i.e. at each instant of the evolution eigenstates of the Hamiltonian remain eigenstates
\[<W>_{ad}=tr[\pi_{t}H_{t}]-tr[\pi_{m}H_{0}] \tag{3}\]
where \(\pi_{t}\) and \(\pi_{m}\) are passive states with respect to final and initial Hamiltonians.
Finally, to account for the heat contribution in the first law formulation, the last term in Eq. (2) was defined as the operational heat \(<Q>_{op}\):
\[<Q>_{op}=tr[\pi_{m}H_{0}]-tr[\pi_{0}H_{0}] \tag{4}\]
which is the energy change in the transformation between two different passive states \(\pi_{0}\) and \(\pi_{m}\) of the initial Hamiltonian.
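A small Python sketch of the ergotropy (1), assuming \(\rho\) and \(H\) are given as NumPy matrices (names and conventions are ours), is given below; the passive state is built by placing the decreasingly ordered populations of \(\rho\) on the increasingly ordered energy eigenstates of \(H\).

```python
import numpy as np

def ergotropy(rho, H):
    """Ergotropy W_e = tr[rho H] - tr[pi H] of Eq. (1), pi being the passive state."""
    energies, eigvecs = np.linalg.eigh(H)        # eigenvalues in increasing order
    populations = np.linalg.eigvalsh(rho)[::-1]  # eigenvalues of rho in decreasing order
    # passive state pi = sum_n r_n |e_n><e_n| with the largest r_n on the lowest energies
    pi = (eigvecs * populations) @ eigvecs.conj().T
    return float(np.real(np.trace(rho @ H) - np.trace(pi @ H)))
```

By construction the returned value is non-negative and vanishes exactly when \(\rho\) is already passive with respect to \(H\).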
## III Fundamental equation of quantum thermodynamics
Free energy is one of the most important quantities in thermodynamics because, according to Clausius's statement of the second law, it informs us of the possible state transitions of a system and of the optimal work that can be extracted from the system as a result. As a consequence of generally extending the validity of the second law (in Clausius form) to the quantum regime [7], one can consider another extractable work quantifier, given by the free energy difference between non-equilibrium and equilibrium states of the system in contact with a thermal bath.
and
\[\Delta S_{R}=tr[(\rho_{t}-\pi_{t}^{B})log(\pi_{t}^{B})]-tr[(\rho_{0}-\pi_{0}^{B}) log(\pi_{0}^{B})]. \tag{11}\]
We argue that Eq. (10), expressed in terms of the change in relative entropy between the state \(\rho\) and its corresponding equilibrium state at the initial and final times of the transformation, can identify the irreversible change in the entropy, based on the intuition that the state \(\rho\) suffers relaxation to \(\pi^{B}\) through the irreversible loss of its coherence. The relative entropy, measuring the closeness of two states [3], can then also be considered as a measure of irreversibility. In the case that the Hamiltonian of the system remains unchanged during the evolution, Eq. (10) turns into \(\Delta S_{Ir}=S(\rho_{0}||\pi_{0}^{B})-S(\rho_{t}||\pi_{t}^{B})\), which is a well-known expression for the entropy production indicating the irreversible term in the entropy balance relation. Because of the contractivity property of the relative entropy under the action of CPTP maps [3], the irreversible entropy change defined by Eq. (10) is a positive amount. It may, however, be negative for time-independent Hamiltonians, as was the case for CPTP non-Markovian dynamics with a non-stationary thermal equilibrium state [3]. On the other hand, we introduce Eq. (11), which includes the change in the state of the system between non-equilibrium and equilibrium situations, as the reversible change in entropy. It may also be viewed as an extension of the quantum Hatano-Sasa inequality [3], a well-known formulation of the second law for CPTP maps expressed by \(\Delta S\geq tr[(\rho_{t}-\rho_{0})log(\pi^{B})]\), to CPTP maps with time-dependent Hamiltonians.
Putting these together in Eq. (7), we define heat flow from the system corresponding to the reversible change in entropy
\[<Q>=-\beta^{-1}\Delta S_{R}=\Delta E-<W>_{ad} \tag{12}\]
and moreover, find the time evolution of extractable work for the system is governed by irreversible entropy change, that is:
\[\Delta W_{f}=-\beta^{-1}\Delta S_{Ir} \tag{13}\]
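Numerically, \(\Delta S_{Ir}\) and hence \(\Delta W_{f}\) can be evaluated directly from density matrices; the sketch below assumes a fixed Hamiltonian, so that the form \(\Delta S_{Ir}=S(\rho_{0}||\pi^{B})-S(\rho_{t}||\pi^{B})\) quoted above applies, and assumes full-rank states so that the matrix logarithm is well defined. All names are ours.

```python
import numpy as np
from scipy.linalg import expm, logm

def gibbs_state(H, beta):
    """Thermal state pi^B = exp(-beta H) / Z."""
    G = expm(-beta * H)
    return G / np.trace(G)

def rel_entropy(rho, sigma):
    """Quantum relative entropy S(rho||sigma) = tr[rho (log rho - log sigma)]."""
    return float(np.real(np.trace(rho @ (logm(rho) - logm(sigma)))))

def extractable_work_change(rho0, rho_t, H, beta):
    """Delta W_f = -(1/beta) [S(rho_0||pi^B) - S(rho_t||pi^B)] for a fixed Hamiltonian."""
    pi_B = gibbs_state(H, beta)
    dS_ir = rel_entropy(rho0, pi_B) - rel_entropy(rho_t, pi_B)
    return -dS_ir / beta
```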
## IV Effective quantum parameters on extractable work of open quantum systems
We begin this section by noting the rate at which the extractable work changes can be defined concerning Eq. (13) as:
\[P(t)=\lim_{\Delta t\to 0}\frac{\Delta W_{f}}{\Delta t}=-\beta^{-1}\frac{dS_{Ir} }{dt}. \tag{14}\]
\(P(t)\) is commonly referred to as charging power in the study of quantum batteries, which employ \(P(t)\) to investigate how quickly a battery may be charged or discharged [3]. We then go over how Eqs. (13) and (14) can be used to highlight the thermodynamic importance of particular quantum properties such as non-Markovianity and coherence as valuable resources on account of the enhancement of extracted work.
### Quantum non-Markovianity
Irreversibility is a fundamental concept in the theory of open quantum systems [3]. The interaction between the system and the bath causes the system to continuously lose its quantum properties, which is typically understood as an irreversible flow of information from the system to the environment. However, in some situations, for instance, when there is a strong system-bath interaction, the information that the system loses to the environment can be recovered. In contrast to Markovian systems, which lose information irreversibly, these systems that revive information are referred to as non-Markovian open quantum systems [3]. Since quantum properties are well known to be valuable work resources, it is anticipated that reviving these properties and resulting in non-Markovianity will have an advantage in thermodynamic studies. For example, some studies report positive effects of non-Markovianity on quantum Landauer erasure work cost revivals [3], the performance of various kinds of quantum heat engines [3] ; [4] ; and the charging of quantum batteries [3] ; [5].
According to thermodynamics, irreversibility is considered as a contribution to the entropy balance relation of the system [3]. As we can see from Eq. (13), this contribution determines the change in the extractable work from the system. The extractable work calculated from this relation is monotonically decreasing throughout a CPTP evolution as a result of an increase in irreversibility, which is indicated by a positive value of \(\Delta S_{Ir}\). However, the rate of change, i.e. the charging power, can in general become negative in some time intervals.
In this respect, we note that a precise connection with non-Markovianity is deducible when we take the charging power into account and apply the result of Chen _et al._[3] to measure the amount of non-Markovianity during the evolution. In accordance with the foregoing intuition of a close relation between non-Markovianity and the lack of irreversibility, Chen _et al._ show that, under the hypothesis of a static environment, the non-Markovianity of open quantum systems can be defined as:
\[I=-\frac{\partial S_{Ir}}{\partial t}. \tag{15}\]
The static hypothesis is satisfied if the environment state doesn't change during evolution. Now the charging power can be simply written as:
\[P(t)=\beta^{-1}I. \tag{16}\]
This evidently shows that the more quantum properties a system revives, as indicated by a larger positive value of the non-Markovianity measure \(I\), the more charging power it recovers. The non-Markovianity advantage is especially apparent when the system is thermalized using baths at a higher temperature.
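Given a trajectory of states under a fixed Hamiltonian, both \(I\) in Eq. (15) and the charging power (16) can be estimated by finite differences of the irreversible entropy; a minimal sketch, reusing the relative-entropy helpers above, is:

```python
import numpy as np

def non_markovianity_and_power(states, times, H, beta):
    """Finite-difference estimate of I(t) = -dS_Ir/dt and P(t) = I(t) / beta."""
    pi_B = gibbs_state(H, beta)
    S0 = rel_entropy(states[0], pi_B)
    S_ir = np.array([S0 - rel_entropy(rho, pi_B) for rho in states])  # S_Ir(t)
    I = -np.gradient(S_ir, times)                                     # Eq. (15)
    return I, I / beta                                                # Eq. (16)
```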
### Quantum coherence
Quantum coherence, both fundamentally and practically, is one of the most important quantum features as it distinguishes the quantum world from the classical one and is an essential resource in many quantum processes [3]. When defined with relative to energy basis, quantum coherence is a prominent resource for carrying out certain thermodynamic tasks. Many studies have been conducted in recent
years to look into the role that coherence plays in work extraction protocols.
We here suppose the initial state of the system is \(|\psi(0)>=|0>_{1}|1>_{2}\), implying that the battery is empty and the charger has its maximum amount of energy before the charging process. The details of the solution and the analytical expression for the amplitude \(|c_{1}(t)|^{2}\) can be found in [2], whereby the state of the battery is easily determined by Eq. (25). The Gibbs state is reached for \(|c_{1}(t)|^{2}=\frac{e^{-\beta\alpha_{0}}}{Z}\) and \(1-|c_{1}(t)|^{2}=\frac{1}{Z}\).
We then apply Eq. (16) to calculate the charging power of the battery during the time evolution. The results have been plotted in Fig. 1 for two different values of \(R\) taken as 0.3 and 30 to enable us to better compare the effects of Markovian and non-Markovian dynamics on the calculated results. With \(R=0.3\), as we would expect the dynamics to become mostly Markovian, Fig. 1(a) shows that the charging power decreases monotonically over the given time interval as a result of the monotonically decreasing non-Markovian characteristic of the dynamics (measured by \(I\) in Eq. (15)). On the other hand, for \(R=30\), Fig. 1(b) shows revivals in the charging power of the battery in consequence of the increase in non-Markovianity (\(I\)) demonstrating that the system interpolates between Markovian and non-Markovian regimes. These results exhibit obviously that the charging power is enhanced by virtue of non-Markovian effects on the dynamics of the quantum battery a fact that is well expressed in Eq. (16) by establishing a direct connection between two desired quantities \(P\) and \(I\).
### Example 2: Two-qubit battery-one photon charger model
To explain the role of quantum coherence, we consider in Example 2 a two-qubit battery model in which the battery is charged by coherence mediated by a single photon mode. We investigate this model in two cases: in the first, the model is treated as a closed system in which the battery and the photon evolve unitarily. In the second, we take into account the detrimental effects of spontaneous emission brought on by the interaction of the battery with a Markovian environment. The Hamiltonian of the system is given by \(H=H_{0}+f(t)H_{I}\), with
\[H_{0}=\sum_{i=1}^{2}\alpha_{0}\sigma_{i}^{+}\sigma_{i}^{-}+\omega_{p}a_{p}^{ \dagger}a_{p} \tag{27}\]
and
\[H_{I}=g(\sigma_{1}^{+}+\sigma_{2}^{+})(a_{p})+h.c \tag{28}\]
where p denotes the photon mode and all operators and parameters are defined as before in Eqs. (21) and (22). The dynamics of the battery is described, after tracing out the photon degree of freedom, by the Schrödinger equation
\[\frac{\partial\rho_{b}}{\partial t}=i\,tr_{p}([\rho,H]) \tag{29}\]
in the first case, and by a master equation [7] :
\[\frac{\partial\rho_{b}}{\partial t}=i\,tr([\rho,H^{{}^{\prime}}])+\sum_{i=1}^ {2}\frac{\gamma_{ij}}{2}(2\sigma_{i}^{-}\rho_{b}\ \sigma_{j}^{+}-\sigma_{i}^{+}\sigma_{j}^{-}\rho_{b}-\rho_{b}\sigma_{i}^{+} \sigma_{j}^{-}) \tag{30}\]
in the second case. Here, \(H^{\prime}\) is the Lamb shift Hamiltonian describing the effective interaction between the two qubits:
\[H^{{}^{\prime}}=\sum_{i=1}^{2}\alpha_{0}\sigma_{i}^{+}\sigma_{i}^{-}+g_{12}( \sigma_{1}^{+}\sigma_{2}^{-}+\sigma_{2}^{+}\sigma_{1}^{-}) \tag{31}\]
with \(g_{12}\) being the coupling constant. Also, \(\gamma_{ii}\) is the spontaneous emission rate of the \(i\)-th qubit, while \(\gamma_{ij}\) represents the contribution from two-qubit interactions. Here, we assume \(\gamma_{11}=\gamma_{22}=\gamma\) and use \(g_{12}=g^{2}/\Delta\) and \(\gamma_{12}\approx 0\), in accordance with realistic situations in cavity quantum electrodynamics (CQED) experiments [7], where \(\Delta\) denotes the detuning between the frequencies of the qubits and the photonic cavity mode. The results for the two cases are plotted in Fig. 2. In Fig. 2(a), one can see that in the first case the battery exhibits a cycle of gain and loss of quantum coherence that continues indefinitely due to the unitarity of the evolution. In the second case, however, even though the battery reaches its maximum amount of quantum coherence more quickly than in the first case (thanks to the coherent part of Eq. (30), i.e. its first term), the quantum coherence of the battery vanishes as a result of the decoherence process induced by the interaction with the Markovian bath.
The same behavior can be observed in Fig. 2(b) for the coherent part of the charging power of the quantum battery: it oscillates with constant amplitude in case 1, while in case 2 the decoherence induced by the Markovian environment drives the coherent charging power to zero. Finally, Fig. 2(c) shows that, despite starting from markedly different values, the coherent and total charging powers nearly reach zero at the same time. This supports the main finding of Shi _et al._[7] that coherence must be generated during charging in order to extract work from a battery prepared in an incoherent state. The values of the total charging power reveal that, notwithstanding the early revivals in quantum coherence, the system loses its available work monotonically through the interaction with the environment.
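For readers who wish to reproduce the qualitative behavior of the dissipative case, the following QuTiP sketch integrates the master equation (30) with \(\gamma_{11}=\gamma_{22}=\gamma\) and \(\gamma_{12}\approx 0\). The parameter values and the initial state are assumptions chosen only for illustration and are not intended to reproduce the exact setting behind Fig. 2.

```python
import numpy as np
import qutip as qt

# Hypothetical parameter values, chosen only for illustration.
alpha0, g, Delta, gamma = 1.0, 0.1, 1.0, 0.05
g12 = g**2 / Delta                                   # effective qubit-qubit coupling

sm1 = qt.tensor(qt.destroy(2), qt.qeye(2))           # sigma^- of qubit 1
sm2 = qt.tensor(qt.qeye(2), qt.destroy(2))           # sigma^- of qubit 2

# Lamb shift Hamiltonian H' of Eq. (31) and local dissipators of Eq. (30) with gamma_12 ~ 0.
H_eff = alpha0 * (sm1.dag() * sm1 + sm2.dag() * sm2) + g12 * (sm1.dag() * sm2 + sm2.dag() * sm1)
c_ops = [np.sqrt(gamma) * sm1, np.sqrt(gamma) * sm2]

# Assumed initial state with some single-qubit coherence (illustration only).
psi0 = qt.tensor((qt.basis(2, 0) + qt.basis(2, 1)).unit(), qt.basis(2, 0))
tlist = np.linspace(0.0, 200.0, 2000)
result = qt.mesolve(H_eff, qt.ket2dm(psi0), tlist, c_ops=c_ops)

def l1_coherence(rho):
    """l1-norm of coherence: sum of the moduli of the off-diagonal elements."""
    r = np.abs(rho.full())
    return r.sum() - np.trace(r)

coherence = [l1_coherence(rho) for rho in result.states]
energy = [qt.expect(alpha0 * (sm1.dag() * sm1 + sm2.dag() * sm2), rho) for rho in result.states]
```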
## VI Conclusions
Examining how a system's quantum characteristics affect its thermodynamic performance is crucial to understanding quantum thermodynamics. This is particularly important in the context of work extraction, where it guides the design of more efficient quantum thermodynamic devices such as quantum heat engines and quantum batteries. The main objective of this study is to contribute to this problem by providing a formalism that highlights the quantum parameters that govern the dynamics of the extractable work of an open quantum system. The dynamics has been treated as a general quantum process, meaning that the only restriction is that it must be physically legitimate; completely positive and trace-preserving maps ensure this condition. We have quantified the extractable work of the evolved system by the difference between its instantaneous non-equilibrium and equilibrium free energies, which measures the maximum work that can be extracted from a quantum system in contact with a thermal bath. We have established a fundamental thermodynamic equation that relates the change in the internal energy
of the quantum system to the changes in its extractable work and entropy. The salient feature of this equation appears when the entropy change of the system is written as the sum of two contributions stemming from reversible and irreversible processes, which respectively specify the heat flow and the change in the extractable work of the system during the process. We have then shown how this correspondence between irreversibility and extractable work can be used to explore genuine quantum effects on the dynamics of the latter. In particular, this is accomplished by establishing a direct correspondence between the extractable work and standard measures of quantum non-Markovianity and quantum coherence. Our results therefore show that quantum non-Markovianity and quantum coherence are resources for the thermodynamic task of extracting work from an open quantum system evolving under a general quantum process, and they open up the possibility of systematically studying the efficiency of these quantum phenomena for technological purposes. We have illustrated our results by showing that dynamics with non-Markovian effects can improve the thermodynamic performance of an open quantum battery by increasing its charging power during the evolution; both the battery and the charger in this example are qubits. We have also demonstrated that the same benefit arises when we examine the role of the quantum coherence induced by an optical charger in the charging power of a two-qubit battery.
###### Acknowledgements.
We thank Dr. Ali Soltanmanesh for his constructive comments on the presentation of the paper.
## Author declarations
### Conflict of Interest
The authors have no conflicts to disclose.
## Author contributions
Amin Mohammadi proposed the idea and performed the calculations; Afshin Shafiee contributed to the development and completion of the idea and to the analysis and discussion of the results. Amin Mohammadi and Afshin Shafiee participated in writing the manuscript.
## Data Availability Statement
The data that support the findings of this study are available from the corresponding author upon reasonable request. |
2309.15643 | Why do Angular Margin Losses work well for Semi-Supervised Anomalous
Sound Detection? | State-of-the-art anomalous sound detection systems often utilize angular
margin losses to learn suitable representations of acoustic data using an
auxiliary task, which usually is a supervised or self-supervised classification
task. The underlying idea is that, in order to solve this auxiliary task,
specific information about normal data needs to be captured in the learned
representations and that this information is also sufficient to differentiate
between normal and anomalous samples. Especially in noisy conditions,
discriminative models based on angular margin losses tend to significantly
outperform systems based on generative or one-class models. The goal of this
work is to investigate why using angular margin losses with auxiliary tasks
works well for detecting anomalous sounds. To this end, it is shown, both
theoretically and experimentally, that minimizing angular margin losses also
minimizes compactness loss while inherently preventing learning trivial
solutions. Furthermore, multiple experiments are conducted to show that using a
related classification task as an auxiliary task teaches the model to learn
representations suitable for detecting anomalous sounds in noisy conditions.
Among these experiments are performance evaluations, visualizing the embedding
space with t-SNE and visualizing the input representations with respect to the
anomaly score using randomized input sampling for explanation. | Kevin Wilkinghoff, Frank Kurth | 2023-09-27T13:29:38Z | http://arxiv.org/abs/2309.15643v2 | # Why do Angular Margin Losses work well for Semi-Supervised Anomalous Sound Detection?
###### Abstract
State-of-the-art anomalous sound detection systems often utilize angular margin losses to learn suitable representations of acoustic data using an auxiliary task, which usually is a supervised or self-supervised classification task. The underlying idea is that, in order to solve this auxiliary task, specific information about normal data needs to be captured in the learned representations and that this information is also sufficient to differentiate between normal and anomalous samples. Especially in noisy conditions, discriminative models based on angular margin losses tend to significantly outperform systems based on generative or one-class models. The goal of this work is to investigate why using angular margin losses with auxiliary tasks works well for detecting anomalous sounds. To this end, it is shown, both theoretically and experimentally, that minimizing angular margin losses also minimizes compactness loss while inherently preventing learning trivial solutions. Furthermore, multiple experiments are conducted to show that using a related classification task as an auxiliary task teaches the model to learn representations suitable for detecting anomalous sounds in noisy conditions. Among these experiments are performance evaluations, visualizing the embedding space with t-SNE and visualizing the input representations with respect to the anomaly score using randomized input sampling for explanation.
representation learning, anomaly detection, angular margin loss, compactness loss, machine listening, domain generalization, explainable artificial intelligence
## I Introduction
Semi-supervised anomalous sound detection (ASD) is the task of reliably detecting anomalous sounds while only having access to normal sounds for training a model [1]. Since anomalies occur only rarely by definition and usually are very diverse, collecting realistic anomalous samples for training a system is much more difficult and thus more costly than collecting normal data. Hence, a semi-supervised ASD setting is more realistic than a supervised ASD setting, for which anomalous sounds are available for training, because it substantially simplifies the data collection process. There are also unsupervised ASD settings, for which the training dataset may also contain anomalous samples and it is unknown whether a training sample is normal or anomalous. But for many applications, it can be ensured that only normal samples are collected for training and thus a semi-supervised setting can be assumed.
ASD has many applications. Examples are machine condition monitoring [2, 3, 4], medical diagnosis [5, 6], bioacoustic monitoring [7, 8], intrusion detection in smart home environments [9] and detecting crimes [10, 11] or accidents [12, 13]. Furthermore, detecting anomalous samples can also be understood as a subtask in acoustic open-set classification [14, 15, 16]. Throughout this work, we will use machine condition monitoring in domain-shifted conditions as an application example [4]. Here, the audio signals may contain one or several of the following three components: 1) normal machine sounds, 2) anomalous machine sounds and 3) background noise consisting of a mixture of many other sound events. The major difficulty of this ASD application is that anomalous components of machine sounds can be very subtle when being compared to the background noise making it difficult to reliably detect anomalous signal components. Furthermore, machine sounds and background noise can change substantially for different domain shifts, which we define as alterations in the (acoustic) environment or changes in parameter settings of the machines. The ASD system still needs to only detect anomalous signal components without frequently raising false alarms caused by any domain shift.
There are several strategies to train an ASD system for machine condition monitoring using only normal data. Among these strategies are generative models such as autoencoders [17, 18, 19, 20, 21, 22] or normalizing flows [23, 24] that directly try to model the probability distribution of normal data, which is also called inlier modeling (IM) [3]. Another strategy is to use an auxiliary task, usually a classification task, for training a model to learn meaningful representations of the data (embeddings) that can be used to identify anomalies. Possible auxiliary tasks for machine condition monitoring are classifying between machine types [25, 26, 27, 28, 29] or, additionally, between different machine states and noise settings [30, 31, 32, 33], recognizing augmented and not augmented versions of normal data (self-supervised learning) [25] or predicting the activity of machines [32]. Using an auxiliary task to learn embeddings is also called outlier exposure (OE) [34] because normal samples belonging to other classes than a target class can be considered as proxy outliers [35]. Often an angular margin loss such as SphereFace [36], CosFace [37] or ArcFace [38] is utilized for training an OE model. Systems based on embeddings pre-trained on very large datasets [39, 40, 41] can be used, too. However, it has been shown that directly training a system on the data yields better ASD results, even when only very limited training data is available [42]. In addition, different strategies can be combined by using an ensemble of multiple models [43, 44, 45].
Different strategies to train an ASD system have different strengths and weaknesses. Using an auxiliary task for training relies on additional meta-information to generate labels for a
classification task whereas IM-based models do not need any labels. Furthermore, autoencoders can localize anomalies in the input space by visualizing an element-wise reconstruction error as done in [19, 21]. However, training ASD models by using an auxiliary task usually enhances their performance [46]. Even for IM-based models, performance can be significantly improved when utilizing meta information such as machine types. In [21] a class-conditioned autoencoder is used, in [44] not only spectral features but also the machine ID is encoded and decoded, and in [23] a normalizing flow is trained to assign lower likelihood to sounds of other machines and a higher likelihood to sounds of the target machine. As suspected in [32, 33], the most likely reason for the difference in performance is that, as stated before, recordings for machine condition monitoring are very noisy because of factory background noise. This is a problem for IM-based models because they cannot tell the difference between arbitrary sound events not emitted by a monitored machine and normal or anomalous sounds emitted by the machine. Both are considered equally important by the model. Moreover, anomalies present in these noisy audio recordings are usually very subtle when being compared to the noise or other sound events present in a recording making it even more difficult to detect potential anomalies. When being trained with an auxiliary task, a model learns to ignore noise, which can be assumed to be similar for all considered classes, and therefore to isolate the target machine sound by ignoring the uninformative background sound events. As a result, these models are more sensitive to changes of the machine sounds and have better anomaly detection capabilities.
Localizing and visualizing frequencies or temporal regions of recordings that are being considered anomalous is important for practical applications because users can better understand the decisions of the ASD system (explainable artificial intelligence (xAI) [47]). Furthermore, this may help to find the cause of mechanical failure and thus can simplify the maintenance process. As stated before, autoencoders can easily localize anomalies by using an element-wise reconstruction error. Additional investigations on visualizing and explaining ASD decisions include showing that decisions of ASD systems for machine condition monitoring largely rely on high-frequency information [48]. This has been visualized using local interpretable model-agnostic explanations (LIME) [49] applied to sounds (SLIME) [50]. Furthermore, uniform manifold approximation and projection (UMAP) [51] has been used to visualize representations of the data such as stacked consecutive frames of log magnitude spectrograms, log-mel magnitude spectrograms, or openL3 embeddings [46].
The goal of this work is to explain why angular margin losses work well for anomalous sound detection. To achieve this goal, the following contributions are made: First and foremost, it is theoretically proven that, after normalizing the embedding space, training an ASD model by minimizing an angular margin loss using an auxiliary task can be considered as minimizing a regularized one-class loss while being less affected by noise or non-target sound events present in the data. Moreover, it is experimentally verified that using an angular margin loss for training a model to discriminate between classes of an auxiliary task also leads to better ASD performance and thus is a better choice for an ASD task than minimizing a one-class loss such as an intra-class (IC) compactness loss with a single or multiple classes. Last but not least, a procedure for visualizing normal and anomalous regions of the input representations based on randomized input sampling for explanation (RISE) is presented. Using these visualizations, it is shown that normal and anomalous sounds cannot be distinguished from the highly complex background noise when training with a one-class loss. In contrast, when using an auxiliary task with multiple classes the model learns to ignore noise and isolate the targeted machine sound for monitoring their condition.
The paper is structured as follows: In Section II, various one-class losses and angular margin losses are reviewed. Section III presents our main theoretical results about the relation between these loss functions. Section IV contains a description of the experimental setup and all experimental evaluations consisting of performance evaluations, a comparison between losses during training, visualizing normal and anomalous regions of input representations as perceived by the system and visualizing the resulting embedding spaces. Section V consists of the conclusions of this work.
## II Loss Functions
In this section, a unified presentation and discussion of several loss functions that are needed for presenting one of the main results of this work in Sec. III will be given. The following notation will be used throughout the paper: \(X\) denotes the space of input data samples, \(N\in\mathbb{N}\) the number of classes defined for an auxiliary task and \(D\in\mathbb{N}\) the dimension of the embedding space.
### _One-Class Classification Losses_
When training a model for ASD while only having access to normal data i.e. a single class, this is referred to as _one-class classification_ and is some form of IM. The compactness loss [52], whose goal it is to project the data into a hypersphere of minimum volume, will serve as a representative of losses for one-class classification and is defined as follows.
**Definition 1** (Compactness loss).: Let \(Y\subset X\) be finite. Let \(\mathcal{P}\) denote the power set, \(\Phi\) denote the space of network architectures for extracting embeddings and \(W(\phi)\) denote the parameter space of \(\phi\in\Phi\), i.e. \(\phi:X\times W(\phi)\rightarrow\mathbb{R}^{D}\). Then, the _compactness loss_ is defined as
\[\begin{split}&\mathcal{L}_{\text{comp}}:\mathcal{P}(X)\times \mathbb{R}^{D}\times\Phi\times W\rightarrow\mathbb{R}_{+}\\ &\mathcal{L}_{\text{comp}}(Y,c,\phi,w):=\frac{1}{|Y|}\sum_{x\in Y} \lVert\phi(x,w)-c\rVert_{2}^{2}.\end{split} \tag{1}\]
The vector \(c\in\mathbb{R}^{D}\) is called _center_.
After training, the (squared) Euclidean distance between the embedding of a given sample and the center can be utilized as an anomaly score: A greater distance indicates a higher likelihood for the sample to be anomalous. A _trivial solution_ for minimizing the compactness loss with center \(c\in\mathbb{R}^{D}\) is
a parameter setting \(w_{c}\in W(\phi)\) such that \(\phi\) is the constant function \(\phi(x,w_{c})=c\) for all \(x\in X\). It is of utmost importance to prevent the model to be trained from learning such a trivial solution. Otherwise it is impossible to differentiate between normal and anomalous samples.
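For concreteness, a minimal numpy sketch of the compactness loss in Eq. (1) and of the corresponding distance-based anomaly score is given below; the embedding network \(\phi\) itself is omitted and all names are illustrative.

```python
import numpy as np

def compactness_loss(embeddings, center):
    """Compactness loss of Eq. (1): mean squared Euclidean distance to a fixed center.
    embeddings: array of shape (num_samples, D), center: array of shape (D,)."""
    return np.mean(np.sum((embeddings - center) ** 2, axis=1))

def anomaly_score(embedding, center):
    """After training, the squared Euclidean distance to the center serves as anomaly score."""
    return np.sum((embedding - center) ** 2)
```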
There are several strategies to prevent a model from learning a trivial solution. First of all, it needs to be ensured that \(c\neq c_{0}\in\mathbb{R}^{D}\) where \(c_{0}=\phi(x,w_{0})\) is defined as the output of the network obtained by setting the weight parameters of model \(\phi\) to zero. This is because we have \(\phi(x,w_{0})=c_{0}\) for all \(x\in X\) as long as the model uses only linear operators, e.g. dense or convolutional layers, and all activation functions have zero as a fixed point, which is the case for most commonly used activation functions. In [52], it has been shown that using bias terms, bounded activation functions or a trainable center all enable the model to learn a constant function when using an additive weight decay regularization term and thus must also be avoided.
Another possibility to avoid trivial solutions is to impose additional tasks, so-called _auxiliary tasks_, not directly related to the ASD problem while training. Autoencoders [53], which are trained to first encode and then decode the input again and have many interesting applications by themselves such as denoising data [54], can also be viewed as a way to regularize one-class models. Here, the encoder is the one-class model mapping the input to an embedding space. Learning a constant function is not a (trivial) solution for the task because all necessary information for being able to completely reconstruct the input needs to be encoded. However, noise including other sound sources present in the input audio data needs to be encoded as well because otherwise the input cannot be reconstructed. Therefore, the noise heavily influences the embeddings and thus the embeddings can also be considered noisy. Depending on the complexity of the noise, most information contained in the embeddings is only related to the noise and not to the target sound to be analyzed and thus detecting anomalies using an autoencoder may be difficult. Moreover, in [52] it has been shown that using compactness loss, even for clean datasets, outperforms commonly used autoencoder architectures when detecting anomalies.
A second choice of an auxiliary task to prevent the model from learning a constant function as a trivial solution is a classification task. Defining multiple classes through an auxiliary task inherently prevents learning a constant function as this would not be a (trivial) solution to the imposed classification problem. In [55], an additional _descriptiveness loss_ is used whose goal is to reduce inter-class similarity between classes of an arbitrary, external multi-class dataset, which is only used to regularize the one-class classification task. This is done by minimizing the standard categorical cross-entropy (CCE) loss for classification on this additional dataset as an auxiliary task. For each of the two tasks, another version of the same network with identical structure and tied weights is used. During training, both losses are jointly minimized using a weighted sum ensuring that the so-called reference network associated with the compactness loss does not learn a constant function because this would prevent the secondary network to be able to classify correctly.
_Remark_.: The original definition of the compactness loss [52] also includes an additional weight decay term. Such a weight decay term can be used to complement any loss function and does not prevent the model from learning trivial solutions as it is still possible that the model learns to map everything to the center. Furthermore, all theoretical results presented in this work are valid regardless of whether this specific weight decay term is included or not. The proof of the main theorem can easily be modified to including the same weight decay term because it is just an additional additive term. Therefore, we omitted this term in the theoretical investigations of this work for the sake of simplicity while still using it in our experiments. However, we did not notice any significant effect on the performance.
For the remainder of this work, we propose to normalize all representations in the embedding space \(\mathbb{R}^{D}\), meaning that \(\|c\|_{2}=1=\|\phi(x,w)\|_{2}\) for all \(x\in X,w\in W(\phi)\) and centers \(c\in\mathbb{R}^{D}\). This can easily be achieved by dividing the embeddings by their corresponding Euclidean norms. A normalization of the embedding space essentially reduces the dimension by one as evident by using stereographic projection. But doing so does not degrade the ASD performance because the dimension of the embedding space usually is larger than it needs to be.
Normalizing the embedding space has several advantages. Most importantly, the initialization of the centers is substantially simplified. In high-dimensional vector spaces i.i.d. random elements are almost surely approximately orthogonal [56]. Hence, all class centers can be randomly initialized by sampling from a uniform random distribution as also done in [33] and a careful strategy for initializing the class centers is not needed. This does not cause any problems e.g. by accidentally using class centers that are very similar to each other in terms of cosine similarity whereas the corresponding acoustic classes are very dissimilar or vice versa. Moreover, normalizing the centers ensures that all centers are distributed equidistantly and sufficiently far away from zero to avoid learning a trivial solution. Last but not least, normalizing the embeddings may even prevent numerical issues while training similar to when using batch normalization [57].
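The near-orthogonality of randomly initialized, normalized class centers in a high-dimensional embedding space can be checked numerically; the values \(D=256\) and \(N=21\) below are chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
D, N = 256, 21                                               # illustrative embedding dimension and number of classes
centers = rng.normal(size=(N, D))
centers /= np.linalg.norm(centers, axis=1, keepdims=True)    # project the centers onto the unit sphere
cos = centers @ centers.T                                    # pairwise cosine similarities
off_diag = cos[~np.eye(N, dtype=bool)]
print(np.abs(off_diag).max())                                # typically well below 0.25 for D = 256
```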
### _Angular Margin Losses_
We will review the definition of ArcFace [38] as a representative of angular margin losses.
**Definition 2** (ArcFace).: Let \(Y\subset X\) be finite and \(l_{j}(x)\in\{0,1\}\) denote the \(j\)th component of the categorical class label function \(l\in L\) where \(L\) denotes the space of all functions \(l:X\rightarrow\{0,1\}^{N}\) with \(\sum_{j=1}^{N}l_{j}(x)=1\) for all \(x\in X\). Let \(\mathcal{P}\) denote the power set, \(\Phi\) denote the space of network architectures for extracting embeddings and \(W(\phi)\) denote the parameter space of \(\phi\in\Phi\), thus \(\phi:X\times W(\phi)\rightarrow\mathbb{R}^{D}\). Let \(\operatorname{smax}:\mathbb{R}^{N}\rightarrow[0,1]^{N}\) denote the softmax function, i.e.
\[\operatorname{smax}(x)_{i}=\frac{\exp(x_{i})}{\sum_{j=1}^{N}\exp(x_{j})}. \tag{2}\]
Then, the _ArcFace_ loss is defined as
\[\mathcal{L}_{\text{ang}}:\mathcal{P}(X)\times\mathcal{P}(\mathbb{R}^ {D})\times\Phi\times W\times L\times\mathbb{R}_{+}\times[0,\frac{\pi}{2}]\to \mathbb{R}_{+}\] \[\mathcal{L}_{\text{ang}}(Y,C,\phi,w,l,s,m)\] \[:= -\frac{1}{|Y|}\sum_{x\in Y}\sum_{j=1}^{N}l_{j}(x)\log(\text{smax}( s\cdot\text{cos}_{\text{smax}}(\phi(x,w),c_{j},m))) \tag{3}\]
where \(|C|=N\) and, in this case,
\[\text{smax}(s\cdot\text{cos}_{\text{smax}}(\phi(x,w),c_{i},m))\] \[:= \frac{\exp(s\cdot\text{cos}_{\text{smax}}(\phi(x,w),c_{i},m))}{ \sum_{j=1}^{N}\exp(s\cdot\text{cos}_{\text{smax}}(\phi(x,w),c_{j},m\cdot l_{j} (x))} \tag{4}\]
with
\[\text{cos}_{\text{smax}}(x,y,m):=\text{cos}(\text{arccos}(\text{cos}(x,y))+m) \tag{5}\]
for cosine similarity
\[\text{cos}(x,y):=\frac{\langle x,y\rangle}{\|x\|_{2}\|y\|_{2}}\in[-1,1]. \tag{6}\]
The vectors \(c_{j}\in\mathbb{R}^{D}\) are called _class centers_, \(m\in[0,\frac{\pi}{2}]\) is called _margin_ and \(s\in\mathbb{R}_{+}\) is called _scale parameter_.
_Remark_.: When using mixup [58] for data augmentation, the definition of the class label function needs to be generalized to \(l:X\to[0,1]^{N}\) with \(\sum_{j=1}^{N}l_{j}(x)=1\) for all \(x\in X\). In the experimental evaluations of this work, mixup will be used when training a model as this improves the ASD performance [29]. Furthermore, the theoretical results presented in this work still hold when using mixup but in the proofs only binary labels will be used for the sake of simplicity.
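As a concrete illustration, the following numpy sketch computes the ArcFace loss of Definition 2 for a batch of embeddings with one center per class; the values of \(s\) and \(m\) are placeholders and not a recommendation.

```python
import numpy as np

def arcface_loss(emb, centers, labels, s=30.0, m=0.5):
    """ArcFace loss of Eqs. (3)-(5); emb: (B, D), centers: (N, D), labels: (B,) class indices."""
    emb = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    centers = centers / np.linalg.norm(centers, axis=1, keepdims=True)
    cos = np.clip(emb @ centers.T, -1.0, 1.0)
    theta = np.arccos(cos)
    onehot = np.eye(centers.shape[0])[labels]
    logits = s * np.cos(theta + m * onehot)          # the margin is added only to the target-class angle
    logits -= logits.max(axis=1, keepdims=True)      # numerical stability of the softmax
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -(onehot * log_probs).sum(axis=1).mean()
```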
In [59], it has been shown that the choice of both hyperparameters, the scale parameter \(s\) and the margin \(m\), can have a significant impact on the resulting performance. Strongly varying the magnitude of one of the individual parameters has a similar effect on the sensitivity of the posterior probabilities with respect to the angles as varying the other parameter. Both a scale parameter that is too large and a margin that is too small lead to very high posterior probabilities for the target class, approximately equal to one, even for relatively large angles. Therefore, the loss function is insensitive to changing the angle. A scale parameter that is too small limits the maximum posterior probability of the target class that can be achieved. Similarly, a margin that is too large also leads to relatively small posterior probabilities. Thus, in both cases the model still tries to adapt its parameters even when the angles are already small, which hinders convergence. Due to the similar behavior of both parameters, a single appropriately chosen parameter is sufficient for controlling the posterior probabilities and it has even been shown that an adaptive scale parameter outperforms using two tuned but fixed parameters. Therefore, we will assume that \(s\) is adaptive as specified for the AdaCos loss in [59] and set \(m=0\), i.e. \(\text{cos}_{\text{smax}}(x,y,0)=\text{cos}(x,y)\) for the remainder of this work. Formally, the definition of the AdaCos loss is the following.
**Definition 3** (AdaCos).: Using the same notation as in Definition 2, let \(Y^{(t)}\subset Y\) denote all samples belonging to a mini-batch of size \(B\in\mathbb{N}\), i.e. \(|Y^{(t)}|=B\). Let \(\theta_{x,i}:=\arccos(\cos(\phi(x,w),c_{i}))\in[0,\pi]\) and the _dynamically adaptive scale parameter_\(\tilde{s}^{(t)}\in\mathbb{R}_{+}\) at training step \(t\in\mathbb{N}_{0}\) be set to
\[\tilde{s}^{(t)}:=\begin{cases}\sqrt{2}\cdot\log(N-1)&\text{if }t=0\\ \frac{\log B^{(t)}_{\text{smax}}}{\cos\big{(}\min(\frac{\pi}{2},\theta^{(t)}_{ \text{smax}})\big{)}}&\text{else}\end{cases} \tag{7}\]
where \(\theta^{(t)}_{\text{med}}\in[0,\pi]\) denotes the median of all angles \(\theta_{x,i(x)}\) with \(x\in X^{(t)}\) and \(i(x)\in\{1,...,N\}\) such that \(l_{i}(x)=1\) and
\[B^{(t)}_{\text{avg}}:=\frac{1}{B}\sum_{x\in Y^{(t)}}\sum_{\begin{subarray}{c}j= 1\\ l_{j}(x)\neq 1\end{subarray}}^{N}\exp\big{(}\tilde{s}^{(t-1)}\cdot\cos(\phi(x,w),c _{j})\big{)} \tag{8}\]
is the sample-wise average over all summed logits belonging to the non-corresponding classes. Then, the _AdaCos_ loss is defined as
\[\mathcal{L}_{\text{ada}}:\mathcal{P}(X)\times\mathcal{P}(\mathbb{R }^{D})\times\Phi\times W\times L\to\mathbb{R}_{+} \tag{9}\] \[\mathcal{L}_{\text{ada}}(Y,C,\phi,w,l):=\mathcal{L}_{\text{ang}}(Y, C,\phi,w,l,\tilde{s},0).\]
_Remark_.: When using mixup [58] for data augmentation, \(\theta^{(t)}_{\text{med}}\in[0,\pi]\) needs to be replaced with the median of the mixed-up angles as specified in [29].
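A numpy sketch of a single update of the dynamically adaptive scale parameter of Eqs. (7) and (8) might look as follows; the integration into an actual training loop and the handling of mixup are omitted.

```python
import numpy as np

def adacos_scale_update(prev_s, emb, centers, labels):
    """One update of the adaptive AdaCos scale; emb: (B, D) and centers: (N, D) are
    L2-normalized, labels: (B,) integer class indices, prev_s: previous scale value."""
    cos = np.clip(emb @ centers.T, -1.0, 1.0)
    theta = np.arccos(cos)
    B, N = cos.shape
    theta_med = np.median(theta[np.arange(B), labels])       # median target-class angle in the batch
    mask = np.ones_like(cos, dtype=bool)
    mask[np.arange(B), labels] = False                       # select the non-target classes
    B_avg = np.exp(prev_s * cos[mask]).reshape(B, N - 1).sum(axis=1).mean()
    return np.log(B_avg) / np.cos(min(np.pi / 2, theta_med))
```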
The AdaCos loss can also be extended to using multiple centers for each class, called sub-clusters, instead of a single one. The idea of using these sub-clusters is to allow the network to learn more complex distributions than a normal distribution for each class enabling the model to have a more differentiated view on the embeddings when using the cosine similarity as an anomaly score. This has been shown to improve the ASD performance [29] and thus helps to differentiate between normal and anomalous samples.
**Definition 4** (Sub-cluster AdaCos).: Using the same notation as in Definitions 2 and 3, let \(C_{j}\in\mathcal{P}(\mathbb{R}^{D})\) with \(|C_{j}|=M\) denote all centers belonging to class \(j\in\{1,...,N\}\). Let the _dynamically adaptive scale parameter_\(\hat{s}^{(t)}\in\mathbb{R}_{+}\) at training step \(t\in\mathbb{N}_{0}\) be set to
\[\hat{s}^{(t)}:=\begin{cases}\sqrt{2}\cdot\log(N\cdot M-1)&\text{if }t=0\\ \frac{f^{(t)}_{\text{smax}}+\log\hat{B}^{(t)}_{\text{smax}}}{\cos\big{(}\min( \frac{\pi}{2},\theta^{(t)}_{\text{smax}})\big{)}}&\text{else}\end{cases} \tag{10}\]
with
\[\hat{B}^{(t)}_{\text{avg}}:=\frac{1}{B}\sum_{x\in Y^{(t)}}\sum_{j= 1}^{N}\sum_{c\in C_{j}}\exp\big{(}\hat{s}^{(t-1)}\cos(\phi(x,w),c)-f^{(t)}_{ \text{max}}\big{)} \tag{11}\]
and
\[f^{(t)}_{\text{max}}:=\max_{x\in Y^{(t)}}\max_{j=1}^{N}\max_{c\in C _{j}}\hat{s}^{(t-1)}\cdot\cos(\phi(x,w),c). \tag{12}\]
Then, the _sub-cluster AdaCos_ loss is defined as
\[\mathcal{L}_{\text{sc-ada}}:\mathcal{P}(X)\times\mathcal{P}(\mathcal{ P}(\mathbb{R}^{D}))\times\Phi\times W\times L\to\mathbb{R}_{+}\] \[\mathcal{L}_{\text{sc-ada}}(Y,C,\phi,w,l)\] \[:= -\frac{1}{|Y|}\sum_{x\in Y}\sum_{j=1}^{N}l_{j}(x)\log(\text{smax}( \hat{s}\cdot\cos(\phi(x,w),C_{j}))) \tag{13}\]
where \(|C|=N\) and, in this case,
\[\begin{split}&\operatorname{smax}(\hat{s}\cdot\cos(\phi(x,w),C_{j})) \\ :=&\sum_{c_{j}\in C_{j}}\frac{\exp(\hat{s}\cdot\cos( \phi(x,w),c_{j}))}{\sum_{k=1}^{N}\sum_{c_{k}\in C_{k}}\exp(\hat{s}\cdot\cos( \phi(x,w),c_{k}))}\end{split} \tag{14}\]
_Remark_.: As shown in [29], for the sub-cluster AdaCos loss as defined above mixup [58] needs to be used. Otherwise, the dynamically adaptive scale parameter \(\hat{s}^{(t)}\) grows exponentially.
For the compactness loss, there is no benefit of using sub-clusters. The reason is that an optimal solution of this sub-cluster compactness loss would correspond to the mean of the sub-clusters or, in case all embeddings are normalized, to its projection onto the unit sphere. Hence, there would be a single global optimum and this sub-cluster compactness loss would behave as if only a single sub-cluster is used. For the sub-cluster AdaCos loss, the situation is completely different because the softmax function is applied to all individual sub-clusters and the sum over the resulting scores is taken. This makes the resulting softmax probability, and thus also the loss function, symmetric with respect to the corresponding sub-clusters of an individual class. Therefore, the loss is invariant to changing the position of an embedding on the hypersphere as long as the sum of the distances to the sub-clusters is the same. Hence, also the space of optimal solutions grows with respect to the number of sub-clusters. However, due to the dependence on the sub-clusters of the other classes caused by the softmax function, this invariance is a simplification and the real situation is more complex.
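The sub-cluster AdaCos posterior of Eq. (14) and the resulting loss of Eq. (13) can be sketched as follows; for simplicity the scale parameter is assumed to be fixed rather than dynamically adapted.

```python
import numpy as np

def sub_cluster_adacos_loss(emb, centers, labels, s):
    """emb: (B, D) L2-normalized embeddings, centers: (N, M, D) L2-normalized sub-cluster
    centers (M per class), labels: (B,) integer class indices, s: fixed scale parameter."""
    B = emb.shape[0]
    N, M, D = centers.shape
    cos = emb @ centers.reshape(N * M, D).T                 # (B, N*M) cosine similarities
    probs = np.exp(s * cos)
    probs /= probs.sum(axis=1, keepdims=True)               # softmax over all N*M sub-clusters
    class_probs = probs.reshape(B, N, M).sum(axis=2)        # sum over the sub-clusters of each class
    return -np.log(class_probs[np.arange(B), labels]).mean()
```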
## III Relation between One-Class Losses and Angular Margin Losses
For the proof of the main theoretical result of this work, the following basic identity is needed.
**Lemma 5**.: _For \(x,y\in\mathbb{R}^{D}\) with \(\|x\|_{2}=\|y\|_{2}=1\), it holds that_
\[\cos(x,y)=1-\frac{\|x-y\|_{2}^{2}}{2}. \tag{15}\]
Proof.: See Appendix.
_Remark_.: This lemma also shows that for normalized embeddings using Euclidean distance and using cosine distance, which in this case is equal to the standard scalar product, are equivalent for computing an anomaly score.
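The identity of Lemma 5, and thus the equivalence of cosine similarity and squared Euclidean distance for normalized embeddings, can be verified numerically with a few lines of code:

```python
import numpy as np

rng = np.random.default_rng(1)
x, y = rng.normal(size=(2, 256))
x, y = x / np.linalg.norm(x), y / np.linalg.norm(y)   # project both vectors onto the unit sphere
lhs = np.dot(x, y)                                    # cosine similarity of unit vectors
rhs = 1.0 - np.sum((x - y) ** 2) / 2.0                # 1 - ||x - y||^2 / 2
assert np.isclose(lhs, rhs)
```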
Now, the theorem itself follows.
**Theorem 6**.: _Let \(Y_{j}:=\{x\in Y:l_{j}(x)=1\}\). Then minimizing \(\mathcal{L}_{\text{x-data}}(Y,C,\phi,w,l)\) with gradient descent minimizes all IC compactness losses with weighted gradients given by_
\[\begin{split}&\frac{\hat{s}}{2}\sum_{i=1}^{N}\frac{1}{|Y_{i}|}\sum_{x \in Y_{i}}\sum_{c_{i}\in C_{i}}P(\tau(\phi(x,w))=c_{i}|\tau(\phi(x,w))\in C_{ i})\\ &\cdot\frac{\partial}{\partial w}\|\phi(x,w)-c_{i}\|_{2}^{2} \end{split} \tag{16}\]
_while maximizing all inter-class compactness losses with weighted gradients given by_
\[\begin{split}&-\frac{\hat{s}}{2}\sum_{i=1}^{N}\frac{1}{|Y_{i}|} \sum_{x\in Y_{i}}\sum_{k=1}^{N}\sum_{c_{k}\in C_{k}}P(\tau(\phi(x,w))=c_{k}) \\ &\cdot\frac{\partial}{\partial w}\|\phi(x,w)-c_{k}\|_{2}^{2} \end{split} \tag{17}\]
_where_
\[\begin{split}& P(\tau(\phi(x,w))=c_{i}|\tau(\phi(x,w))\in C_{i}) \\ :=&\frac{\exp(\hat{s}\cdot\cos(\phi(x,w),c_{i}))}{ \sum_{c_{i}^{\prime}\in C_{i}}\exp(\hat{s}\cdot\cos(\phi(x,w),c_{i}^{\prime})) }\end{split} \tag{18}\]
_and_
\[\begin{split}& P(\tau(\phi(x,w))=c_{k})\\ :=&\frac{\exp(\hat{s}\cdot\cos(\phi(x,w),c_{k}))}{ \sum_{k=1}^{N}\sum_{c_{k}^{\prime}\in C_{k}}\exp(\hat{s}\cdot\cos(\phi(x,w),c _{k}^{\prime}))}\end{split} \tag{19}\]
_with a cluster assignment function \(\tau:\mathbb{R}^{D}\rightarrow\mathbb{R}^{D}\) given by_
\[\tau(z,C)=\operatorname*{arg\,max}_{c\in C}\cos(z,c). \tag{20}\]
Proof.: Let \(x\in Y\), \(\phi\in\Phi\) and \(\hat{s}\in\mathbb{R}_{+}\) be fixed and \(i\in\{1,...,N\}\) such that \(l_{i}(x)=1\) and \(l_{j}(x)=0\) for \(j\neq i\). To simplify notation, define \(e(w,c):=\exp(\hat{s}\cdot\cos(\phi(x,w),c))\). Using Lemma 5, we see that
\[\begin{split}&\frac{\partial}{\partial w}\log\bigg{(}\sum_{c_{i}\in C _{i}}e(w,c_{i})\bigg{)}\\ =&\frac{\sum_{c_{i}\in C_{i}}e(w,c_{i})\cdot\hat{s} \cdot\frac{\partial}{\partial w}\cos(\phi(x,w),c_{i}))}{\sum_{c_{i}^{\prime} \in C_{i}}e(w,c_{i}^{\prime})}\\ =&-\frac{\hat{s}}{2}\sum_{c_{i}\in C_{i}}\frac{e(w,c_{i })\cdot\frac{\partial}{\partial w}\|\phi(x,w)-c_{i}\|_{2}^{2}}{\sum_{c_{i}^{ \prime}\in C_{i}}e(w,c_{i}^{\prime})}\end{split}\]
and similarly
\[\begin{split}&\frac{\partial}{\partial w}\log\bigg{(}\sum_{k=1}^{N} \sum_{c_{k}\in C_{k}}e(w,c_{k})\bigg{)}\\ =&\frac{\sum_{k=1}^{N}\sum_{c_{k}\in C_{k}}e(w,c_{k}) \cdot\hat{s}\cdot\frac{\partial}{\partial w}\cos(\phi(x,w),c_{k}))}{\sum_{k=1}^ {N}\sum_{c_{k}^{\prime}\in C_{k}}e(w,c_{k}^{\prime})}\\ =&-\frac{\hat{s}}{2}\sum_{c_{i}\in C_{i}}\frac{e(w,c_{i })\cdot\frac{\partial}{\partial w}\|\phi(x,w)-c_{k}\|_{2}^{2}}{\sum_{k=1}^{N} \sum_{c_{k}^{\prime}\in C_{k}}e(w,c_{k}^{\prime})}\\ =&-\frac{\hat{s}}{2}\sum_{k=1}^{N}\sum_{c_{k}\in C_{k}} \frac{e(w,c_{k})\cdot\frac{\partial}{\partial w}\|\phi(x,w)-c_{k}\|_{2}^{2}}{ \sum_{k=1}^{N}\sum_{c_{k}^{\prime}\in C_{k}}e(w,c_{k}^{\prime})}.\end{split}\]
Using both identities, we obtain
\[\frac{\partial}{\partial w}\sum_{j=1}^{N}l_{j}(x)\log(\max(\hat{s} \cdot\cos(\phi(x,w),C_{j})))\] \[= \frac{\partial}{\partial w}\log\bigg{(}\sum_{c_{i}\in C_{i}}\frac{e (w,c_{i})}{\sum_{k=1}^{N}\sum_{c_{k}\in C_{k}}e(w,c_{k})}\bigg{)}\] \[= \frac{\partial}{\partial w}\log\bigg{(}\sum_{c_{i}\in C_{i}}e(w,c _{i})\bigg{)}-\frac{\partial}{\partial w}\log\bigg{(}\sum_{k=1}^{N}\sum_{c_{k} \in C_{k}}e(w,c_{k})\bigg{)}\] \[= -\frac{\hat{s}}{2}\sum_{c_{i}\in C_{i}}\frac{e(w,c_{i})\cdot\frac{ \partial}{\partial w}\|\phi(x,w)-c_{i}\|_{2}^{2}}{\sum_{c^{\prime}_{i}\in C_{i }}^{2}e(w,c^{\prime}_{i})}\] \[+\frac{\hat{s}}{2}\sum_{c_{i}\in C_{i}}\frac{e(w,c_{i})\cdot\frac {\partial}{\partial w}\|\phi(x,w)-c_{i}\|_{2}^{2}}{\sum_{k=1}^{N}\sum_{c^{ \prime}_{k}\in C_{k}}^{2}e(w,c^{\prime}_{k})}\] \[+\frac{\hat{s}}{2}\sum_{\begin{subarray}{c}k=1\\ k\neq i\end{subarray}}^{N}\sum_{c_{k}\in C_{k}}\frac{e(w,c_{k})\cdot\frac{ \partial}{\partial w}\|\phi(x,w)-c_{k}\|_{2}^{2}}{\sum_{k=1}^{N}\sum_{c^{ \prime}_{k}\in C_{k}}^{2}e(w,c^{\prime}_{k})}\] \[= -\frac{\hat{s}}{2}\bigg{(}\sum_{c_{i}\in C_{i}}e(w,c_{i})\cdot \frac{\partial}{\partial w}\|\phi(x,w)-c_{i}\|_{2}^{2}\] \[\cdot\bigg{(}\frac{1}{\sum_{c^{\prime}_{i}\in C_{i}}e(w,c^{ \prime}_{i})}-\frac{1}{\sum_{k=1}^{N}\sum_{c^{\prime}_{k}\in C_{k}}e(w,c^{ \prime}_{k})}\bigg{)}\] \[-\sum_{\begin{subarray}{c}k=1\\ k\neq i\end{subarray}}^{N}\sum_{c_{k}\in C_{k}}\frac{e(w,c_{k})\cdot\frac{ \partial}{\partial w}\|\phi(x,w)-c_{k}\|_{2}^{2}}{\sum_{k=1}^{N}\sum_{c^{ \prime}_{k}\in C_{k}}e(w,c^{\prime}_{k})}\bigg{)}\] \[= -\frac{\hat{s}}{2}\bigg{(}\sum_{c_{i}\in C_{i}}e(w,c_{i})\cdot \frac{\partial}{\partial w}\|\phi(x,w)-c_{i}\|_{2}^{2}\] \[\cdot\bigg{(}\sum_{\begin{subarray}{c}k=1\\ k\neq i\end{subarray}}^{N}\sum_{c_{k}\in C_{k}}\frac{e(w,c_{k})}{(\sum_{c^{ \prime}_{i}\in C_{i}}e(w,c^{\prime}_{i}))(\sum_{k=1}^{N}\sum_{c^{\prime}_{k} \in C_{k}}e(w,c^{\prime}_{k}))}\bigg{)}\] \[-\sum_{\begin{subarray}{c}k=1\\ k\neq i\end{subarray}}^{N}\sum_{c_{k}\in C_{k}}\frac{e(w,c_{k})\cdot\frac{ \partial}{\partial w}\|\phi(x,w)-c_{k}\|_{2}^{2}}{\sum_{k=1}^{N}\sum_{c^{ \prime}_{k}\in C_{k}}^{2}e(w,c^{\prime}_{k})}\] \[= -\frac{\hat{s}}{2}\sum_{\begin{subarray}{c}k=1\\ k\neq i\end{subarray}}^{N}\sum_{c_{k}\in C_{k}}\frac{e(w,c_{k})}{\sum_{k=1}^{N} \sum_{c^{\prime}_{k}\in C_{k}}e(w,c^{\prime}_{k})}\] \[\cdot\bigg{(}\sum_{c_{i}\in C_{i}}\frac{e(w,c_{i})}{\sum_{c^{ \prime}_{i}\in C_{i}}e(w,c^{\prime}_{i})}\cdot\frac{\partial}{\partial w}\| \phi(x,w)-c_{i}\|_{2}^{2}\] \[-\frac{\partial}{\partial w}\|\phi(x,w)-c_{k}\|_{2}^{2}\bigg{)}\] \[= -\frac{\hat{s}}{2}\sum_{k=1}^{N}\sum_{c_{k}\in C_{k}}\underbrace{ \frac{e(w,c_{k})}{\sum_{k=1}^{N}\sum_{c^{\prime}_{k}\in C_{k}}e(w,c^{\prime}_{k })}}_{=P(\tau(\phi(x,w))=c_{k})}\] \[\cdot\sum_{\begin{subarray}{c}c_{i}\in C_{i}\\ =P(\tau(\phi(x,w))=c_{k}|\tau(\phi(x,w^{\prime}))\in C_{i})\end{subarray}}\] \[\cdot\bigg{(}\frac{\partial}{\partial w}\|\phi(x,w)-c_{i}\|_{2}^{2}- \frac{\partial}{\partial w}\|\phi(x,w)-c_{k}\|_{2}^{2}\bigg{)}\]
where we used that
\[\frac{1}{\sum_{c^{\prime}_{i}\in C_{i}}e(w,c^{\prime}_{i})}-\frac{1 }{\sum_{k=1}^{N}\sum_{c^{\prime}_{k}\in C_{k}}e(w,c^{\prime}_{k})}\] \[= \frac{\sum_{k=1}^{N}\sum_{c_{k}\in C_{k}}e(w,c_{k})-\sum_{c_{i}\in C _{i}}e(w,c_{i})}{(\sum_{c^{\prime}_{i}\in C_{i}}e(w,c^{\prime}_{i}))(\sum_{k=1}^{N }\sum_{c^{\prime}_{k}\in C_{k}}e(w,c^{\prime}_{k}))}\] \[= \sum_{\begin{subarray}{c}k=1\\ k\neq i\end{subarray}}^{N}\sum_{c_{k}\in C_{k}}\frac{e(w,c_{k})}{(\sum_{c^{ \prime}_{i}\in C_{i}}e(w,c^{\prime}_{i}))(\sum_{k=1}^{N}\sum_{c^{\prime}_{k}\in C _{k}}e(w,c^{\prime}_{k}))}.\]
Now, summing over all samples \(x\in Y\), normalizing with \(|Y|\) and taking the additive inverse yields the desired result.
When using mixup, the right hand side of the last equation needs to be replaced with a weighted sum of two terms, each corresponding to one of the two classes that are mixed-up, because there are \(i_{1},i_{2}\in\{1,...,N\}\) such that \(l_{i_{1}}(x)\neq 0\neq l_{i_{2}}(x)\). Otherwise, the proof is exactly the same. In conclusion, the proven result still holds for mixed-up samples but includes two similar terms instead of one term.
**Corollary 7**.: _Minimizing \(\mathcal{L}_{\text{ads}}(Y,C,\phi,w,l)\) with gradient descent is equivalent to minimizing_
\[-\frac{\tilde{s}}{2}\sum_{k=1}^{N}\text{smax}(\tilde{s}\cdot\cos(\phi(x,w),c_{k})) \tag{21}\] \[\cdot\bigg{(}\frac{\partial}{\partial w}\|\phi(x,w)-c_{i}\|_{2}^{2}-\frac{\partial}{\partial w}\|\phi(x,w)-c_{k}\|_{2}^{2}\bigg{)}.\]
Proof.: The proof of Theorem 6 does not depend on the exact structure of the dynamically adaptive scale parameter and thus also holds for the standard AdaCos loss by replacing \(\hat{s}\) with \(\tilde{s}\) and using only a single sub-cluster for each class.
This theorem shows that using an angular margin loss such as the AdaCos loss is essentially the same strategy as proposed in [55] and applied to ASD in [27], i.e. using a compactness loss for increasing IC similarity, as defined in Definition 1, and a so-called descriptiveness loss to decrease inter-class similarity. However, there are differences between both approaches. When minimizing an angular margin loss, inter-class compactness losses are used to decrease inter-class similarity instead of a standard CCE loss. Second, when using two loss functions one usually has to tune a weight parameter to create a weighted sum of both loss terms, which is not needed for an angular margin loss and impossible without access to anomalous samples. Furthermore, the gradients belonging to individual samples are weighted with specific softmax probabilities giving more emphasis the closer the sub-clusters are. As these weights are non-uniform in general, this explicitly shows why using multiple sub-clusters is not equivalent to using a single sub-cluster given by the projection of the mean of the sub-clusters onto the hypersphere as it is the case for an IC compactness loss with multiple sub-clusters. Last but not least, an angular margin loss explicitly ensures a margin between classes, as illustrated in Fig. 1, whereas a combination of compactness losses and a CCE loss only implicitly does this by increasing intra-class similarity. Note that, in [55], inter-class similarity is decreased on another dataset using less relevant classes because only a single class
is available on the target dataset. Because of these differences, directly minimizing an angular margin loss leads to a different solution than minimizing a combination of IC losses and a descriptiveness loss.
Note that the IC compactness loss with multiple classes can also be considered a prototypical loss [61] or angular prototypical loss [62] as used for few-shot classification [63], which defines settings where only very few training samples, called shots, are available for each class. The only difference between these prototypical losses and an angular margin loss is that, for prototypical losses, the center vectors are re-calculated as the means of embeddings belonging to corresponding classes by using a so-called support set during training while, for an angular margin loss, the class centers are fixed or adaptable parameters of the network. Hence, this theorem also shows that angular margin losses are a suitable choice for few-shot classification as shown for open-set sound event classification [42] and few-shot keyword spotting [64].
Choosing a classification task as an auxiliary task prevents learning a constant function as a trivial solution. The reason is that, for such a classification task, an optimal solution is a classifier that maps each sample to its corresponding class center and thus corresponds to jointly learning multiple trivial solutions, one for each class, instead of only learning a constant function. As long as each anomalous sample belongs to a well-defined normal class used during training, this optimal solution would yield representations not suitable for detecting anomalies as they would not be distinguishable from representations obtained with normal samples. However, obtaining such a perfect classifier is much more difficult than learning a constant mapping for a single class and thus training a single model to classify between multiple classes already prevents trivial solutions as long as the classification problem itself is not trivial e.g. by consisting of only a single class. Still, in [33] it has been shown that the ASD performance can be improved by applying the same three strategies as used for the compactness loss [52], namely 1) not using bias terms, 2) not using bounded activation functions and 3) not using trainable class centers. The most likely reason is that these strategies prevent the model from learning trivial solutions for individual classes that are easily recognized, which would lead to less informative embeddings.
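To make these three strategies concrete, the following Keras sketch defines an embedding head without bias terms, without bounded activations and with L2-normalized outputs; the layer sizes and the input dimension are illustrative and do not correspond to the authors' exact architecture.

```python
import tensorflow as tf

# Illustrative embedding head: no bias terms, no bounded activations, L2-normalized output.
inputs = tf.keras.Input(shape=(1025,))                                   # e.g. a magnitude spectrum (assumed size)
x = tf.keras.layers.Dense(512, use_bias=False, activation="relu")(inputs)
x = tf.keras.layers.Dense(256, use_bias=False, activation=None)(x)
embeddings = tf.keras.layers.Lambda(lambda z: tf.math.l2_normalize(z, axis=1))(x)
model = tf.keras.Model(inputs, embeddings)
```

The fixed, randomly initialized class centers are then kept outside the trainable model and only enter through the loss function.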
## IV Experimental Results
Using one-class losses and angular margin losses for ASD will now be compared experimentally.
### _Dataset_
For most experiments conducted in this work, the DCASE2022 ASD dataset [4] of the task titled "Unsupervised Anomalous Sound Detection for Machine Condition Monitoring Applying Domain Generalization Techniques" has been used. The dataset consists of recordings of machine sounds with background factory noise. Each recording has a single channel, a length of ten seconds and a sampling rate of \(16\) kHz and belongs to one of the seven machine types "fan", "gearbox", "bearing", "slide rail", "valve" from MIMI DG [65] and "toy car", "toy train" from ToyADMOS2 [66]. For each machine type, there are six different so-called sections each of which is dedicated to a specific type of domain shift. A domain shift means that the characteristics of a machine sound differ in some way between a source domain with many training samples and a target domain with only few training samples. These shifts can be caused by physical changes of the machines e.g. caused by replacing parts for maintenance, or changes in the acoustical environment e.g. a different background noise or using different recording devices. Ideally, the ASD system is able to reliably detect anomalies despite these domain shifts without the need for adapting the system (domain generalization [67]).
The dataset is divided into a development and an evaluation split each containing recordings of \(21\) sections, three for each machine type. For each recording, information about the machine type and section are given. For the training datasets, domain information ("source" or "target") and additional attribute information such as states of machine types or noise conditions are given for each recording. For the test datasets, no domain information and no additional attribute information
Fig. 1: Illustration of IC compactness losses and the angular margin to be ensured between the classes for \(D=2,N=2\), \(M=1\). Intra-class losses are computed by summing all distances of samples to their corresponding class centers (blue and red areas). Inter-class losses are computed by summing all distances of samples to their corresponding decision boundaries. An unaltered decision boundary is exactly the midpoint between the class centers. When using an angular margin loss, the decision boundaries to the other classes are essentially shifted closer to the class center for which the inter-class loss is computed (see Fig. 1 in [60]). This explicitly ensures a margin between the classes, which is depicted by the green area.
are given. The exact structure of the dataset can be found in Tab. I. The task of an ASD system is to reliably detect anomalous samples regardless of whether a sample belongs to a source or target domain, i.e. using a single decision threshold for both domains of a section.
Some of the experiments have also been conducted on the DCASE2023 ASD dataset [68, 69] belonging to the task "First-Shot Unsupervised Anomalous Sound Detection for Machine Condition Monitoring". Similar to the DCASE2022 ASD dataset, this dataset is also aimed at domain generalization for ASD with the following differences. First and foremost, the development and evaluation split of the dataset contain different machine types. The development set contains the same machine types as the DCASE2022 dataset, namely "fan", "gearbox", "bearing", "slide rail", "valve" from MIMI DG [65] and "toy car", "toy train" from ToyADMOS2 [66]. The evaluation set contains seven completely different machine types, namely "toy drone", "n-scale toy train", "vacuum", and "toy tank" from [70] and "bandsaw", "grinder", "shaker" from [65]. Furthermore, for each machine type there is only a single section. This lowers the difficulty of the auxiliary classification task and thus makes it more difficult to extract embeddings, which are sensitive to anomalous changes of the target sounds.
For the DCASE ASD datasets, two performance measures are used to evaluate the performance of individual ASD systems. One metric is the area under the receiver operating characteristic (ROC) curve (AUC), the other metric is the partial area under the ROC curve (pAUC) [71], which is the AUC calculated over a low false positive rate ranging from \(0\) to \(p\) with \(p=0.1\) in this case. The pAUC is used as an additional metric because decision thresholds for machine condition monitoring are usually set to a value that gives a low number of false alarms and thus this area of the ROC curve is of particular interest. Both are threshold-independent metrics allowing a more objective comparison between different ASD systems than threshold-dependent metrics [72, 1].
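For reference, the pAUC can be estimated by integrating the ROC curve only up to the false positive rate \(p\) and normalizing by \(p\); the sketch below is a simple illustration and may differ in detail from the official DCASE evaluation script.

```python
import numpy as np
from sklearn.metrics import roc_curve

def partial_auc(y_true, scores, p=0.1):
    """Partial AUC over the false-positive-rate range [0, p], normalized so that a
    perfect detector obtains 1. y_true: binary labels (1 = anomalous), scores: anomaly scores."""
    fpr, tpr, _ = roc_curve(y_true, scores)
    tpr_at_p = np.interp(p, fpr, tpr)                 # interpolate the ROC curve at fpr = p
    fpr_part = np.concatenate([fpr[fpr <= p], [p]])
    tpr_part = np.concatenate([tpr[fpr <= p], [tpr_at_p]])
    return np.trapz(tpr_part, fpr_part) / p
```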
### _System Description_
The focus of this work is to explain why angular margin losses work well for ASD. This requires using different loss functions for training an ASD system. To this end, the conceptually simple state-of-the-art system presented in [33], which only consists of a single model and uses the same settings for all machine types, is utilized. For all experiments conducted in this work, only the loss function used for training the system is altered. The system utilizes a magnitude spectrogram as well as the whole magnitude spectrum as input representations and uses two different convolutional sub-models for handling these, resulting in two different embeddings. Then, both embeddings are concatenated to obtain a single embedding and the sub-cluster AdaCos loss [29] is applied with \(16\) sub-clusters, which are initialized uniformly at random, for training the model. For the magnitude spectrogram, temporal mean normalization is applied to reduce the effect of different acoustic domains and make both input feature representations a bit more different by removing constant frequency information from the spectrograms. Furthermore, the model does not use bias terms or trainable clusters as this improves the ASD performance by avoiding trivial solutions as discussed before. The model is trained for \(10\) epochs with a batch size of \(64\) using mixup [58] with a uniform distribution for sampling the mixing coefficient and is implemented in Tensorflow [73].
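A rough sketch of how the two input representations could be computed is given below; the STFT parameters are assumed values and not necessarily those used in [33].

```python
import numpy as np
import librosa

def input_representations(path, n_fft=1024, hop_length=512):
    """Magnitude spectrogram with temporal mean normalization and the magnitude spectrum
    of the whole clip (illustrative parameter values)."""
    audio, _ = librosa.load(path, sr=16000)
    spec = np.abs(librosa.stft(audio, n_fft=n_fft, hop_length=hop_length))   # (freq, time)
    spec_tmn = spec - spec.mean(axis=1, keepdims=True)   # remove constant frequency information
    spectrum = np.abs(np.fft.rfft(audio))                # magnitude spectrum of the entire recording
    return spec_tmn, spectrum
```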
After training the model using an auxiliary classification task, embeddings are extracted for the recordings. For each section of the dataset, k-means with \(k=16\) is applied to all normal training samples belonging to the source domain of this section. The goal is to represent the distribution of the normal embeddings and be able to compute an anomaly score by taking the minimum cosine distance to the mean embeddings belonging to the same section as a given test sample. Note that these means do not correspond to the sub-clusters as some sub-clusters may not have been used by the network during training. It is possible that the embeddings are clustered between the sub-clusters due to the complex dependence between the sub-clusters of the other classes. Still, it has been shown taking the same number of clusters usually performs best [29]. Since there are only \(10\) normal samples available for the target domain, the minimum over the direct cosine distances to the corresponding embeddings is used. As a last step, the minimum of the minimum cosine distances belonging to both domains is used to have an ASD system that generalizes to both domains. Hence, a higher anomaly score indicates anomalous sounds whereas a smaller value indicates normal sounds. More details about the system including a hyperlink to an open-source implementation can be found in [33].
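The scoring procedure described above can be summarized by the following numpy/scikit-learn sketch, which is an illustration rather than the original implementation.

```python
import numpy as np
from sklearn.cluster import KMeans

def anomaly_scores(test_emb, source_emb, target_emb, k=16):
    """Minimum cosine distance of each test embedding to the k-means centers of the
    source-domain embeddings and to the few target-domain embeddings of the same section."""
    def normalize(x):
        return x / np.linalg.norm(x, axis=1, keepdims=True)

    means = KMeans(n_clusters=k, n_init=10).fit(source_emb).cluster_centers_
    references = normalize(np.concatenate([means, target_emb], axis=0))
    cos_dist = 1.0 - normalize(test_emb) @ references.T   # cosine distances to all references
    return cos_dist.min(axis=1)                           # minimum over both domains
```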
### _Performance Evaluations_
Regardless of the loss function, training the ASD model without using anomalous samples targets the ASD performance only indirectly, since the auxiliary task is aimed at obtaining embeddings suitable for ASD. Although there is a strong relation between the auxiliary and the ASD task, as otherwise training an ASD model by using an auxiliary task would not lead to usable representations, the actual ASD performance needs to be evaluated experimentally and cannot be investigated theoretically because there are no anomalous samples available during training. Therefore, the resulting ASD performances obtained by minimizing both types of loss functions, angular margin losses and one-class losses, using individual auxiliary classification tasks will be evaluated first. Furthermore, a combined loss consisting of the sum of the mean of the IC compactness losses and an additional softmax layer with a CCE loss for classification, as proposed in [55], is evaluated. The results can be found in Tab. II. Note that it is also possible to divide the classification task into several different classification tasks, for example one task for the machine type and other ones for all or specific attributes [30, 31]. However, in our experience this does not improve performance unless weights for the losses belonging to different machine types are manually tuned to improve the ASD performance. Since this requires access to anomalous samples, tuning these weights is impossible in a truly semi-supervised setting.
It can be seen that for both datasets the ASD performance improves with the number of classes being used for the auxiliary task. When using only a single class for all data or for individual machine types and sections, the AUC is close to \(50\%\), which corresponds to randomly guessing whether a sample is anomalous or not. The most likely reason for this is the factory background noise contained in the recordings, which is highly diverse and contains many sound sources other than the target machine. A model trained with a one-class loss does not know the difference between the sound events emitted by the machines to be monitored and any other sounds contained in the recordings. The more complex (in terms of numbers of classes) the chosen auxiliary task is, the more information needs to be captured inside the embeddings for solving this task. Additionally, the background noise does not contain any helpful information for learning to discriminate between the classes defined by the auxiliary task assuming the noise is not class-specific. As a result, the model learns to monitor specific frequencies or temporal patterns important for specific machine types with specific settings and thus also learns to ignore the background noise and to isolate sounds emitted by the targeted machines. Furthermore, it can be observed that using an explicit classification task improves performance on all dataset splits. Ensuring an angular margin between the classes slightly improves the overall performance, but not significantly, often leading to very similar results. The most likely reason is that by increasing intra-class similarity implicitly introduces a margin between different classes. Still, using an angular margin loss does not have any drawbacks over using a compactness and a descriptiveness loss. As a last observation, the sub-cluster AdaCos loss performs slightly better than the AdaCos loss on the development split of the DCASE2022 dataset while yielding a similar performance on the other dataset splits. A possible explanation that there are no significant improvements on the DCASE2023 datasets when using an angular margin loss is that the auxiliary classification task is not as difficult as for the DCASE2022 dataset because there is only one section for each machine type. Slight improvements in performance when using multiple sub-clusters for the AdaCos loss have been observed on the DCASE2020 dataset [2] in [29]. Note that the DCASE2020 dataset only contains machine recordings with a single parameter setting for each section and no domain shifts, i.e. consists of a single source domain, and thus the task is very different from the
much more difficult task considered here. In conclusion, an angular margin loss for ASD in combination with an auxiliary classification task that uses as many meaningful classes as possible is an excellent choice when training an ASD system based on audio embeddings.
In the previous paragraph, we made the assumption that the noise is not class-specific. However, if there is a single class with very specific noise that is only present for this particular class or, even worse, if this is the case for all classes, then an auxiliary classification task will very likely not improve the results. The reason is that the model does not learn to closely monitor the machine sound because the background noise also contains useful information for discriminating between the classes. Therefore, assuming that the noise is not class-specific is essential, and it intuitively makes sense for machine condition monitoring as one would expect that at least some machines share the same noise distribution when running in the same factory or acoustic environment. Moreover, as shown in Theorem 6, minimizing an angular margin loss using an auxiliary classification task also explicitly increases intra-class similarity. Hence, even if the noise is class-specific and thus the auxiliary classification task does not aid the ASD task, the performance is still at least as good as when not using a classification task at all but only minimizing the intra-class compactness losses, and there should not be a disadvantage.
### _Minimizing Compactness Loss by Minimizing an Angular Margin Loss_
In Theorem 6, it has been shown that minimizing an angular margin loss also minimizes all IC compactness losses and maximizes all inter-class compactness losses. This fact is now verified experimentally by training a model using the sub-cluster AdaCos loss while also monitoring all compactness losses. The results are depicted in Fig. 2 and Fig. 3. Regardless of the dataset splits and regardless of using or not using mixup, the angular margin loss and the mean of the IC compactness losses are decreasing during training. The mean of the inter-class compactness loss is constantly equal to \(2\), even without training. The reason is that all sub-cluster centers in this work are constant, randomly initialized and projected to the unit sphere. By Lemma 5, a squared Euclidean distance of \(2\) corresponds to an angle of \(\frac{\pi}{2}\), i.e. orthogonality, and the randomly initialized center vectors are approximately orthogonal with very high probability because of the high dimension \(D=256\) of the embedding space. Thus, samples that are similar to the center of one class will be approximately orthogonal to the centers of the other classes. Overall, this is exactly the expected behavior as predicted by Theorem 6 and therefore verifies the theoretical results. Note that smaller loss values do not correspond to a better ASD performance because minimizing these losses only optimizes the performance for the auxiliary task, which is not the same as the ASD task.
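The orthogonality argument can be checked numerically with a few lines of code; the sketch below draws random center vectors, projects them to the unit sphere, and confirms that their pairwise squared Euclidean distances concentrate around \(2\) (the number of centers is chosen arbitrarily for illustration).

```python
# Numerical check of the orthogonality argument: randomly initialized center
# vectors in D = 256 dimensions, projected to the unit sphere, are approximately
# orthogonal, so their pairwise squared distances 2*(1 - cos) concentrate
# around 2 (cf. Lemma 5).
import numpy as np

rng = np.random.default_rng(0)
D, C = 256, 100
centers = rng.standard_normal((C, D))
centers /= np.linalg.norm(centers, axis=1, keepdims=True)  # project to unit sphere

sq_dist = 2.0 * (1.0 - centers @ centers.T)                # pairwise squared distances
off_diag = sq_dist[~np.eye(C, dtype=bool)]
print(off_diag.mean(), off_diag.std())                     # close to 2 with small spread
```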
### _Visualizing Normal and Anomalous Regions in Input Representations as Perceived by the System_
To further investigate the effect of using an auxiliary task with multiple classes, another experiment using RISE [74] is carried out. RISE highlights regions of the input representations that are considered normal or anomalous by the ASD system. Our goal is to show that utilizing an auxiliary classification task for training the system, as done when minimizing an angular margin loss, enables the system to closely monitor specific machine sounds by focusing on regions belonging to specific patterns of the input data. Although the ASD performance is worse when only using spectrograms as input representations [33], for these experiments a model using only spectrograms as input has been trained. The reason is that these representations are visually more appealing for the human eye than waveforms or spectra and thus more suitable to visually highlight normal and anomalous regions.
To visualize areas of the input representation responsible for a decision, RISE masks random entries of the spectrograms using binary masks and evaluates the ASD score using the masked spectrogram. This step is repeated for many iterations. Then, the sum of the masks weighted with the corresponding ASD scores is taken and normalized with the expected value of a random binary mask, which depends on the chosen sampling distribution. The result is called an _importance map_ and visualizes the impact of specific regions of a spectrogram on the resulting anomaly score.
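A minimal sketch of this procedure is given below; `asd_score` stands for the trained system's anomaly score and `sample_mask` for the mask sampling described in the following paragraphs, both of which are placeholders rather than the actual implementation.

```python
# Sketch of the RISE-style importance map: mask the spectrogram with random
# binary masks, score each masked input with the ASD system, and average the
# score-weighted masks, normalized by the expected mask value.
import numpy as np

def rise_importance_map(spec, asd_score, sample_mask, n_iter=1000):
    acc = np.zeros_like(spec, dtype=np.float64)
    keep_prob = 0.0
    for _ in range(n_iter):
        mask = sample_mask(spec.shape)         # binary mask, 1 = keep, 0 = masked out
        acc += asd_score(spec * mask) * mask   # weight the mask with the anomaly score
        keep_prob += mask.mean()
    return acc / n_iter / (keep_prob / n_iter)  # normalize by the expected mask value
```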
The problem is that the dimension of the spectrograms is very high because a time dimension of \(T=311\) and a frequency dimension of \(F=513\) are used. Thus, there are
Fig. 3: Different losses after each epoch when training by minimizing AdaCos and not using mixup.
Fig. 2: Different losses after each epoch when training by minimizing sub-cluster AdaCos with a single sub-cluster per class and using mixup.
\(2^{T\cdot F}=2^{159543}\) possible binary masks and thus RISE would clearly require far too many iterations. To significantly reduce the search space from \(2^{F\cdot T}\) to \(2^{F+T}\), individual time and frequency masks are randomly generated with a probability of \(0.25\) for a time step or frequency bin to be masked, and both masks are combined by element-wise multiplication. This restriction is not too severe because most sounds emitted by machines are relatively stable over time with specific frequencies (e.g. fans), consist of multiple stable sound events with on- and offsets (e.g. slide rails) or only consist of short sound events over a wide frequency range with a specific temporal structure (e.g. valves). For further reduction of the search space, small binary masks are generated and then up-sampled and randomly cropped to match the dimension of the spectrogram to be masked, as proposed in [74]. More concretely, we used time masks of size \(20\) and frequency masks of size \(34\), resulting in a search space of \(2^{54}\), which is still very large but much smaller than before. For generating a single importance map, \(640,000\) iterations have been used.
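A possible implementation of this factorized mask sampling is sketched below; the mask sizes and masking probability follow the text, while the nearest-neighbour up-sampling is a simplification of the interpolation used in [74].

```python
# Sketch of the factorized time/frequency mask sampling: small time and
# frequency masks are drawn independently (masking probability 0.25),
# up-sampled, randomly cropped to the spectrogram size (F=513, T=311), and
# combined by an outer product (element-wise multiplication of the two masks).
import numpy as np

rng = np.random.default_rng(0)

def sample_tf_mask(F=513, T=311, small_f=34, small_t=20, p_mask=0.25):
    f_small = (rng.random(small_f) >= p_mask).astype(np.float64)  # 1 = keep
    t_small = (rng.random(small_t) >= p_mask).astype(np.float64)
    # up-sample with some margin and randomly crop to the target size
    f_up = np.repeat(f_small, int(np.ceil((F + small_f) / small_f)))
    t_up = np.repeat(t_small, int(np.ceil((T + small_t) / small_t)))
    f0 = rng.integers(0, len(f_up) - F + 1)
    t0 = rng.integers(0, len(t_up) - T + 1)
    return np.outer(f_up[f0:f0 + F], t_up[t0:t0 + T])             # (F, T) binary mask

mask = sample_tf_mask()
print(mask.shape, mask.mean())  # roughly (1 - 0.25) ** 2 of the bins are kept
```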
Magnitude spectrograms (visualized in log scale) and corresponding importance maps belonging to two different samples are depicted in Fig. 4, using i) a model trained with an IC compactness loss without an auxiliary task, and ii) a model trained with the sub-cluster AdaCos loss and an auxiliary task for classifying between different machine types, sections and attribute information. For the depicted importance maps, blue colors indicate normal regions and yellow colors indicate anomalous regions as perceived by the system. Note that, since the system does not yield perfect results, these regions do not necessarily correspond to truly normal or anomalous regions. As only binary labels indicating normal or anomalous samples are available for each entire audio recording and we are not subject matter experts for machine condition monitoring, we do not know which regions are actually normal or anomalous. Still, for the purpose of showing that utilizing meta information when training a model, as done by angular margin losses, helps the system to gain a better understanding of the structure of the data, these plots are sufficient. There are several observations to be made. Comparing the importance maps obtained with sub-cluster AdaCos (Fig. 4b and 4e) with those obtained with the IC compactness loss (Fig. 4c and 4f), we observe that the former more clearly show time and frequency structures at a resolution correlating with the structures and acoustic events visible in the spectrograms depicted in Fig. 4a and 4d.
For the anomalous gearbox example (Fig. 4a), the importance map depicted in Fig. 4b shows that specific frequencies are monitored and considered to be normal or anomalous. Interestingly, the normal frequency regions (in blue) in Fig. 4b exactly correspond to the frequencies containing high energy (Fig. 4a), showing that the model expects a gearbox sound from this section to have high energy in these regions. The frequencies that are considered most anomalous, which mostly correspond to the frequency range between the bottom two normal frequency bands, only contain some energy. This indicates that a normal machine sound should either contain no energy or much more energy for these frequencies. In contrast to this, the importance map depicted in Fig. 4c does not monitor specific frequencies and the only clearly visible
Fig. 4: Log scaled spectrograms (left column), importance maps obtained with RISE when training with the sub-cluster AdaCos loss and classifying between different machine types, sections and attribute information (middle column), and importance maps obtained with RISE when training with an IC compactness loss and no auxiliary classification task (right column) for two different recordings belonging to the test split of the development set (rows). For the importance maps, blue colors indicate normal regions and yellow colors indicate regions that are found to be anomalous by the model. All subfigures use individual color scales to improve visual appearance for differently scaled importance maps and thus colors of different subfigures cannot be compared to each other.
structures are two vertical lines indicating anomalous regions (in yellow). Although we cannot guarantee that the regions in the spectrogram corresponding to these vertical lines are not anomalous, at least visually there is no energy present in these locations. Since the recordings of the machine sounds do not start and end at the same fixed time steps, it is implausible that the model should expect temporal patterns at exactly these time steps and consider their absence anomalous. Therefore, these structures seem to be errors of the model.
The importance maps belonging to the normal valve example (Fig. 4d) show a similar behavior, but for temporal patterns in addition to specific frequencies. Here, the four main normal vertical patterns in the importance map shown in Fig. 4e correspond to the four high-energy patterns of the spectrogram, showing that the system views these temporal patterns as normal for a valve sound. In contrast, the importance map depicted in Fig. 4f does not show that the system has learned to detect these patterns and looks almost random.
Overall, the depicted results add further confidence to the claim that training a model with an auxiliary classification task with many classes enables the model to learn much more meaningful embeddings, also leading to much better capabilities for detecting anomalous sound events than a model trained with only a single class.
### _Visualizing the Resulting Embedding Spaces Using t-SNE_
As a last experiment, the embedding spaces resulting from using different loss functions and auxiliary tasks are visualized in Fig. 5 using t-SNE [75]. Note that, by Lemma 5, it does not matter whether t-SNE is evaluated with the cosine distance or the Euclidean distance because both are equivalent when determining the degree of similarity between samples on the unit sphere. It can be seen that using more classes for the auxiliary task helps to separate normal and anomalous samples (Fig. 5b, c, e, and f). When only using a single class (Fig. 5a) or individually trained models (Fig. 5d), there is no visual difference
Fig. 5: Visualizations of the test split of the development set in the learned embedding space for different loss functions and auxiliary tasks using t-SNE. Numbers in brackets denote the number of different classes used for the auxiliary task.
between normal and anomalous samples. However, it can also be seen that the model has not learned a trivial solution, as the embedding spaces did not collapse to a single fixed point, which would correspond to a uniformly distributed t-SNE embedding space. In that case, the ASD performance would also be very close to \(50\%\) as normal and anomalous samples would be indistinguishable in the embedding space. Therefore, the applied regularization strategies, namely not using trainable centers and not using bias terms, work, and a completely failed regularization is not the main underlying problem. These visual impressions are verified by computing the average Euclidean distance between each anomalous sample and the closest normal sample in the t-SNE embedding space. The results can be found in Tab. III and agree with the performance results shown in Tab. II. Note that the distance in the original embedding space is implicitly captured by the ASD performance given in Tab. II because the anomaly score is computed by taking the distance to the closest normal sample in the target domain and the closest mean in the source domain. Again, the most likely explanation for the strong differences between the embedding spaces in terms of ASD capabilities is that using multiple classes enables the model to focus less on or even ignore the background noise and to isolate the targeted machine sounds. This helps the model to more robustly detect deviations from normal machine sounds despite the acoustically noisy recording conditions and thus results in better ASD performance.
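The verification metric described above can be computed with a few lines of code; the sketch below uses scikit-learn's t-SNE with the cosine metric and illustrative variable names (`emb` for the learned embeddings, `is_anomalous` for the binary labels).

```python
# Sketch: embed test clips with t-SNE (cosine distance, cf. Lemma 5) and
# compute the average Euclidean distance from each anomalous sample to its
# closest normal sample in the 2-D t-SNE space.
import numpy as np
from sklearn.manifold import TSNE

def mean_dist_to_closest_normal(emb, is_anomalous):
    z = TSNE(n_components=2, metric="cosine", init="random").fit_transform(emb)
    normal, anomalous = z[~is_anomalous], z[is_anomalous]
    d = np.linalg.norm(anomalous[:, None, :] - normal[None, :, :], axis=-1)
    return d.min(axis=1).mean()
```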
## V Conclusions
In this work, it has been investigated why using angular margin losses works well for semi-supervised ASD. To this end, it has been shown, both theoretically and experimentally, that minimizing an angular margin loss also minimizes the IC compactness loss while simultaneously maximizing the inter-class compactness loss. Therefore, angular margin losses in combination with an auxiliary classification task can be viewed as regularized one-class losses preventing the model from learning trivial solutions. In experiments conducted on the DCASE2022 and DCASE2023 ASD datasets for machine condition monitoring, it has been shown that using an auxiliary task with as many meaningful classes as possible together with an angular margin loss leads to significantly better ASD performance than using a one-class loss such as the IC compactness loss. Furthermore, RISE has been applied to create importance maps for different losses, and t-SNE has been used to visualize the resulting embedding spaces. All the conducted experiments show that by using an angular margin loss the model used for extracting the embeddings learns to monitor relevant frequency bins and learns machine-specific temporal patterns. This enables the model to isolate machine sounds and effectively ignore background noise present in the recordings, explaining why angular margin losses with an auxiliary task are a good choice for training an ASD system.
For future work, it is planned to investigate whether using auxiliary tasks based on self-supervised learning to obtain suitable representations of the data improves the resulting ASD performance. In addition, sophisticated methods for visualizing anomalous regions of input representations should be developed, as being able to localize these regions is very useful for practical applications and the theoretical analysis of ASD systems.
## Acknowledgments
We would like to thank Paul M. Baggenstoss and Lukas Henneke as well as the anonymous reviewers for their valuable comments that improved the quality of this work.
## Proof of Lemma 5
Using only basic definitions and the fact that \(x\) and \(y\) lie on the unit sphere, i.e. \(\|x\|_{2}=\|y\|_{2}=1\), we obtain
\[\|x-y\|_{2}^{2} =\sum_{i=1}^{D}(x_{i}-y_{i})^{2}\] \[=\sum_{i=1}^{D}x_{i}^{2}+\sum_{i=1}^{D}y_{i}^{2}-2\sum_{i=1}^{D} x_{i}y_{i}\] \[=\|x\|_{2}^{2}+\|y\|_{2}^{2}-2\langle x,y\rangle\] \[=2\bigg{(}1-\frac{\langle x,y\rangle}{\|x\|_{2}\|y\|_{2}}\bigg{)}\] \[=2(1-\cos(x,y)),\]
which finishes the proof.
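As a quick sanity check, the identity of Lemma 5 can also be verified numerically for random unit-norm vectors:

```python
# Numerical check of Lemma 5: for unit-norm x and y, ||x - y||^2 = 2*(1 - cos(x, y)).
import numpy as np

rng = np.random.default_rng(0)
x, y = rng.standard_normal(256), rng.standard_normal(256)
x, y = x / np.linalg.norm(x), y / np.linalg.norm(y)

print(np.isclose(np.sum((x - y) ** 2), 2.0 * (1.0 - np.dot(x, y))))  # True
```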
|
2301.13410 | Multi-Channel Auction Design in the Autobidding World | Over the past few years, more and more Internet advertisers have started
using automated bidding for optimizing their advertising campaigns. Such
advertisers have an optimization goal (e.g. to maximize conversions), and some
constraints (e.g. a budget or an upper bound on average cost per conversion),
and the automated bidding system optimizes their auction bids on their behalf.
Often, these advertisers participate on multiple advertising channels and try
to optimize across these channels. A central question that remains unexplored
is how automated bidding affects optimal auction design in the multi-channel
setting.
In this paper, we study the problem of setting auction reserve prices in the
multi-channel setting. In particular, we shed light on the revenue implications
of whether each channel optimizes its reserve price locally, or whether the
channels optimize them globally to maximize total revenue. Motivated by
practice, we consider two models: one in which the channels have full freedom
to set reserve prices, and another in which the channels have to respect floor
prices set by the publisher. We show that in the first model, welfare and
revenue loss from local optimization is bounded by a function of the
advertisers' inputs, but is independent of the number of channels and bidders.
In stark contrast, we show that the revenue from local optimization could be
arbitrarily smaller than those from global optimization in the second model. | Gagan Aggarwal, Andres Perlroth, Junyao Zhao | 2023-01-31T04:57:59Z | http://arxiv.org/abs/2301.13410v1 | # Multi-Channel Auction Design in the Autobidding World
###### Abstract
Over the past few years, more and more Internet advertisers have started using automated bidding for optimizing their advertising campaigns. Such advertisers have an optimization goal (e.g. to maximize conversions), and some constraints (e.g. a budget or an upper bound on average cost per conversion), and the automated bidding system optimizes their auction bids on their behalf. Often, these advertisers participate on multiple advertising channels and try to optimize across these channels. A central question that remains unexplored is how automated bidding affects optimal auction design in the _multi-channel setting_.
In this paper, we study the problem of setting auction reserve prices in the multi-channel setting. In particular, we shed light on the revenue implications of whether each channel optimizes its reserve price _locally_, or whether the channels optimize them _globally_ to maximize total revenue. Motivated by practice, we consider two models: one in which the channels have full freedom to set reserve prices, and another in which the channels have to respect floor prices set by the publisher. We show that in the first model, welfare and revenue loss from _local_ optimization is bounded by a function of the advertisers' inputs, but is independent of the number of channels and bidders. In stark contrast, we show that the revenue from _local_ optimization could be arbitrarily smaller than those from _global_ optimization in the second model.
## 1 Introduction
Advertisers are increasingly using automated bidding in order to set bids for ad auctions in online advertising. Automated bidding simplifies the bidding process for advertisers - it allows an advertiser to specify a high-level goal and one or more constraints, and optimizes their auction bids on their behalf [16, 24, 22, 8]. A common goal is to maximize conversions or conversion value. Some common constraints include Budgets and TargetCPA (i.e. an upper bound on average cost per conversion). This trend has led to interesting new questions on auction design in the presence of automated bidders [4, 13, 7, 6].
One central question that remains unexplored is how automated bidding affects optimal auction design in the _multi-channel setting_. It is common for advertisers to show ads on multiple channels and optimize across channels. For example, an advertiser can optimize across Google Ads inventory (YouTube, Display, Search, Discover, Gmail, and Maps) with Performance Ads [2], or can optimize across Facebook, Instagram and Messenger with Automated Ad Placement [3], or an app developer can advertise across Google's properties including Search, Google Play and YouTube with App campaigns [1]. With traditional quasi-linear bidders, the problem of auction design on each channel is independent of other channels' designs. However, when advertisers use automated bidders and optimize across channels, the auction design of one channel creates externalities for the other channels through the constraints of automated bidders.
Motivated by this, we introduce the problem of auction design in the multi-channel setting with automated bidding across channels. In particular, we study the problem of setting reserve prices across channels. We consider two behavior models: _Local_ and _Global_. In the _Local_ model, each channel optimizes its reserve price to maximize its own revenue, while in the _Global_ model, the channels optimize their reserve prices _globally_ in order to maximize the total revenue across channels. The main question is: _what is the revenue loss from optimizing locally, rather than globally?_
We consider this question in two settings: one in which each channel has full control over its reserve prices, and one in which the channels have to respect an externally-imposed lower bound on the reserve prices. The first setting which we call _Without Publisher Reserves_ is very common in practice and arises when the impressions are owned by the selling channel, or when the publisher leaves the pricing decisions to the selling channel. The second setting which we call _With Publisher Reserves_ arises when the impressions are owned by a third-party publisher that sets a floor price for its impressions - this could come from an outside option for selling the impression. This is common in Display advertising where the selling channel is often different from the publisher who owns the impressions.
**Model:** Our model consists of \(k\) channels, each selling a set of impressions. Each channel can set a uniform reserve price. The uniform reserve price is in the cost-per-unit-value space1 (see Section 3.1 for details). This is motivated by the observation that, in practice, values are commonly known by the channels; values are usually click-through-rate or conversion-rate of an ad, as in [4], and the channels have good estimates for those. Besides the reserve prices set by channels, in the _With Publisher Reserves_ setting, each impression could have a price floor set by the publisher who owns the impression. Each impression is sold in a Second-Price-Auction with a floor price that depends on the reserve price set by the selling channel and the price constraint set by the publisher. Bidders want to maximize their conversions (or some other form of value) subject to one of two
types of constraints: (1) Budget, an upper bound on spend and (2) TargetCPA, an upper bound on the average cost per conversion. The model also allows standard quasi-linear bidders with no constraints. The game consists of two main stages: First, each channel simultaneously announces its reserve price; then, bidders bid optimally for the different impressions.
### Our results
The paper's main focus is to compare the revenue2 at equilibrium when channels optimize locally, i.e. each sets its reserve price(s) to maximize its own revenue, to the revenue when channels act globally and set their reserve prices to maximize the total revenue across channels. We define the Price of Anarchy (PoA) as the worst-case ratio of the total revenue when the channels optimize locally to the total revenue when the channels optimize globally. Our main goal is to bound the Price of Anarchy in the two settings: _Without Publisher Reserves_ and _With Publisher Reserves_.
Footnote 2: See Section 7 for a brief discussion of _welfare_
### Setting without Publisher Reserves
In order to bound the Price of Anarchy, we first bound the local and global revenue in terms of the optimal Liquid Welfare (see Section 3 for the definition). These revenue bounds are interesting in their own right and the proof methodology gives (non-polytime) algorithms for determining good reserve prices.
We first consider the worst-case revenue in the local model, where each channel is optimizing for its own revenue, compared to the optimal Liquid Welfare. We show in Theorem 2 that the worst-case revenue is at least an \(\Omega(\frac{1}{\log\eta})\) fraction of the optimal Liquid Welfare, where \(\eta\) depends on the bidders' inputs3 and quantifies the heterogeneity of the pool of bidders. This lower bound on revenue trivially carries over to the setting where the channels are optimizing globally and to the single-channel setting. Next, we show that this bound is tight up to constant factors (Proposition 3). In particular, we give an example in the single-channel setting where the optimal revenue with a uniform reserve price is \(O(\frac{1}{\log\eta})\) of the optimal Liquid Welfare. This upper bound also applies to the global and local models in the multi-channel setting. In other words, the upper and lower bounds on the gap between Liquid Welfare and revenue in each of these settings are \(\Theta(\log\eta)\). That naturally makes one wonder: is optimizing locally as good for revenue as optimizing globally? If we look into the gap bounds, we find that they arise from trying to capture values of different scales with a uniform reserve price. One might thus conjecture that, since the source of the gap applies to both the global and local models, even if there is a revenue gap between the two models, it should depend on different factors. Surprisingly, we show that the gap between optimizing locally and globally is exactly the same \(\log\eta\) factor (Theorem 4). Note that in all the above settings, the revenue guarantee is independent of the number of channels and bidders and depends only on the heterogeneity of the bidders.
Footnote 3: \(\eta\) is the maximum of the ratio of the highest to lowest TargetCPA among TargetCPA bidders and a ratio defined (in Definition 5) for Budgeted bidders.
### Setting with Publisher Reserves
In stark contrast to the setting without publisher reserves, we show that the PoA in this setting can be arbitrarily small even with one tCPA bidder (Theorem 6). The gap example depends heavily
on the asymmetry between the different channels. Motivated by this, we consider the restricted setting where each channel sells a random sample of the impressions (see Section 6 for the exact details). For this case, with one tCPA bidder in the game, we show that under some mild constraint on the channels' strategies, the PoA = \(1/k\), where \(k\) is the number of channels. When the channels optimize globally, the equilibrium is efficient and all channels set low reserve prices. On the other hand, for the equilibrium in the local optimization model, the larger channels (in terms of the volume of impressions they own) set low reserve prices while small channels are extractive and set high reserve prices.
### Hardness of Equilibrium Computation
To complement our Price of Anarchy results, we also study the computational complexity of computing the equilibrium of the game. We show an impossibility result: it is PPAD-hard to compute the subgame equilibrium of the bidders (Theorem 1). To prove this result, we use a gadget reduction from the problem of finding an approximate Nash equilibrium for the 0-1 bimatrix game, and we need to handle many difficulties unique to our subgame, which we explain in more detail in Section 4 and Appendix B.
### Key implications of our results
Our results have several implications for setting reserve prices in the multi-channel setting:
* The revenue gap between local and global optimization depends heavily on whether there are publisher-imposed reserve prices.
* Without publisher reserves, the worst-case gap between the revenue in the local model and the global model is \(\Theta(\log\eta)\), where \(\eta\) captures the heterogeneity of the bidders' inputs and is independent of the number of channels and bidders. Thus it is better to optimize globally when possible.
* Without publisher reserves, it is possible to obtain a revenue of \(\Theta(\frac{1}{\log\eta})\) fraction of the optimal Liquid Welfare by setting uniform reserve prices. This observation is not surprising in the single-channel setting and for global optimization in the multi-channel setting, but it is remarkable that it holds even with local optimization, where the selfishness of a channel could have made it difficult for other channels to make revenue. We also note that the approximation can be improved by setting reserve prices at a more granular level, rather than a uniform reserve price. In that case, the approximation ratio will depend on the heterogeneity of bidders per slice.
* With publisher reserves, the gap between the revenue in the local and global model can be arbitrarily large. This can happen even when only one of the channels has external pricing constraint.
### Organization of the paper
We present a formal model of the problem in Section 3. Then, in Section 4, we show that it is PPAD-hard to compute the equilibrium of the sub-game. In Section 5, we study the setting without publisher reserves and present a tight bound on the Price of Anarchy, as well as on the gap between
the revenue and optimal Liquid Welfare in the local and global models. In Section 6, we study the setting with publisher reserves and show the Price of Anarchy is 0. We also study a restricted version of this setting, and show a Price of Anarchy of \(1/k\) for that version. Finally, in Section 7, we discuss extensions for welfare and for setting reserve prices in the cost-per-impression space.
## 2 Related Work
**Autobidding.** There has been a lot of recent interest in exploring questions related to automated bidding, including bidding algorithms and their equilibria [4], and auction design in the presence of automated bidding [13, 7, 6]. Aggarwal et al. [4] initiate the study of autobidding and find optimal bidding strategies for a general class of autobidding constraints. They also prove the existence of an equilibrium and prove a lower bound on liquid welfare at equilibrium compared to the optimal liquid welfare. Deng et al. [13] show how boosts can be used to improve welfare guarantees when bidders can have both TargetCPA and Budget constraints, potentially at the cost of revenue. Balseiro et al. [7] characterize the revenue-optimal single-stage auctions with either value-maximizers or utility-maximizers with TargetCPA constraints, when either the values and/or the targets are private. Similar to our paper, Balseiro et al. [6] also study reserve prices in the presence of autobidders, and show that with TargetCPA and Quasi-linear bidders, revenue and welfare can be increased by using (bidder-specific) reserve prices. They do not study budget-constrained bidders. All of the above papers are in the single channel setting.
**Auction design with multiple channels.** Most of this stream of literature has focused on models where multiple channels (auctioneers) compete to capture profit-maximizing buyers [9, 15, 19]. The competition across channels leads to lower reserve prices, yielding lower revenues and more efficient outcomes [23]. Our model differs from them in that our bidders are not captive but instead optimize under their autobidding constraints. Interestingly, we show that in some cases the competition among channels leads to higher reserve prices and, at the same time, improves welfare (see Theorem 7).
## 3 Model
Our baseline model considers a set of bidders (advertisers) \(J\) interested in purchasing a set of impressions \(I\) that are sold by \(K\) different channels. The impressions that channel \(k\) sells, \(i\in I_{k}\), are sold using a second-price auction with a floor price. This floor price depends on the reserve price \(r_{k}\) chosen by the channel and by the minimum price \(p_{i}\) set by the publisher that owns the impression4.
Footnote 4: The publisher might have an outside option to sell some of the impressions and sets a reserve price to account for that. These reserve prices are prechosen by the publishers and hence are fixed constants known to both channels and bidders.
### Bidders
Motivated by the most common bidding formats that are used in practice, we assume that each bidder can be one of the following types: a _tCPA_ bidder, a _Budgeted_ bidder, or a _Quasi-linear (QL)_
bidder. We denote by \(J_{\text{\tiny tCPA}}\), \(J_{\text{\tiny Budgeted}}\) and \(J_{\text{\tiny QL}}\) the set of bidders that are tCPA, Budgeted and QL bidders, respectively.
Each Bidder \(j\) has a value (e.g. conversion rate) \(v_{j,i}\) for impression \(i\) and submits a bid \(b_{j,i}\) for the impression. A bidder's cost for buying impression \(i\in I_{k}\) is
\[c_{j,i}(\boldsymbol{b}_{i},\boldsymbol{r})=\max\bigl{\{}\max_{\ell\neq j\text { s.t. }b_{\ell,i}\geq\max\{r_{k}v_{\ell,i},p_{i}v_{\ell,i}\}}\{b_{\ell,i}\},\,r_{k}v _{j,i},\,p_{i}v_{j,i}\}, \tag{1}\]
where \(\boldsymbol{b}_{i}=(b_{j,i})_{j\in J}\) (note we use the notation \(c_{j,i}(\boldsymbol{b}_{i},\boldsymbol{r})\) for simplicity, even though \(c_{j,i}(\boldsymbol{b}_{i},\boldsymbol{r})\) does not depend on \(b_{j,i}\)) and \(\boldsymbol{r}=(r_{k})_{k\in K}\) (because \(p_{i}\)'s are fixed constants prechosen by the publishers, for simplicity we do not include them as variables). That is, a bidder's cost for an impression is the maximum among (i) the bids of the bidders who bid above their own reserve prices, (ii) reserve price set by the channel which owns the impression, and (iii) reserve price set by the publisher. Also, note that the reserve prices \(r_{k}\) and \(p_{i}\) are multiplied by \(v_{j,i}\) to get the final floor price of impression \(i\) for bidder \(j\). In other words, the reserve prices are in the cost-per-unit-value space. We will refer to the final reserve price of impression \(i\) for bidder \(j\) by \(r_{j,i}:=\max\{r_{k}v_{j,i},\,p_{i}v_{j,i}\}\). Now we explain the bidder types.
**QL bidder:** This is a traditional profit-maximizing bidder with no constraint. The dominant strategy for such a Bidder \(j\) is to bid her value \(v_{j,i}\) for impression \(i\), regardless of how everyone else bids for that impression.
**tCPA bidder:** Such Bidder \(j\) maximizes the number of conversions (i.e., the total value of the impressions which the bidder gets) subject to the constraint that the average cost per conversion is no greater than their tCPA \(T_{j}\geq 0\). Namely, bidder \(j\) solves the following maximization problem:
\[\max_{\forall i\in I,\,b_{j,i}\geq 0,\,x_{j,i}\in[0,1]} \sum_{i\in I\text{ s.t. }b_{j,i}\geq c_{j,i}(\boldsymbol{b}_{i}, \boldsymbol{r})}v_{j,i}x_{j,i}\] s.t. \[\sum_{i\in I}c_{j,i}(\boldsymbol{b}_{i},\boldsymbol{r})x_{j,i} \leq T_{j}\cdot\sum_{i\in I}v_{j,i}x_{j,i}\quad\forall j\in J\] \[x_{j,i}=1\text{ if }b_{j,i}>c_{j,i}(\boldsymbol{b}_{i}, \boldsymbol{r})\quad\forall j\in J,i\in I. \tag{2}\]
**Budgeted bidder:** Such Bidder \(j\) maximizes the number of conversions subject to a budget constraint \(B_{j}\). Namely, Bidder \(j\) solves the following maximization problem:
\[\max_{\forall i\in I,\,b_{j,i}\geq 0,\,x_{j,i}\in[0,1]} \sum_{i\in I\text{ s.t. }b_{j,i}\geq c_{j,i}(\boldsymbol{b}_{i}, \boldsymbol{r})}v_{j,i}x_{j,i}\] s.t. \[\sum_{i\in I}c_{j,i}(\boldsymbol{b}_{i},\boldsymbol{r})x_{j,i} \leq B_{j}\quad\forall j\in J\] \[x_{j,i}=1\text{ if }b_{j,i}>c_{j,i}(\boldsymbol{b}_{i}, \boldsymbol{r})\quad\forall j\in J,i\in I. \tag{3}\]
Notice that both tCPA bidder and Budgeted bidder are allowed to decide the fraction of an impression they get in case they are tied for that impression (we say that bidder \(j\) is **tied** for an impression \(i\) if \(b_{j,i}=c_{j,i}(\boldsymbol{b}_{i},\boldsymbol{r})\)). This is in line with the standard approach in the literature (e.g., budget pacing equilibrium [12]) that endogenizes the tie-breaking rule as part of the equilibrium
concept which we will define shortly. Moreover, in the following proposition, we show that given other bidders' bids, it is optimal for a tCPA bidder (or a Budgeted bidder) \(j\) to bid uniformly5, i.e., the bids are characterized by a single **bidding parameter**\(\alpha_{j}\geq 0\) as follows: \(\forall i\in I\), \(b_{j,i}=\alpha_{j}v_{j,i}\).
Footnote 5: Qualitatively, this is same as the well-known result of Aggarwal et al. [4]. They prove this by introducing small perturbations to bidders’ values. Instead, we take the approach that endogenizes the tie-breaking rule as part of the equilibrium concept, which is the standard approach in the literature for proving existence and computational complexity of equilibrium.
**Proposition 1**.: _For a tCPA bidder (or a Budgeted bidder resp.) \(j\), the optimal bids for Problem (2) (or (3) resp.) have the following form: there exists \(\alpha_{j}\geq 0\) such that \(\forall i\in I\), \(b_{j,i}=\alpha_{j}v_{j,i}\)._
The proof of Proposition 1 is provided in the appendix.
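To illustrate Eq. (1) and Proposition 1 together, the following is a simplified sketch (our own illustration, not the paper's algorithm): it computes the cost of an impression under the channel and publisher reserves and then, holding the other bidders' bids (and hence all costs) fixed and ignoring ties, finds a tCPA bidder's best uniform bidding parameter by greedily adding impressions in increasing order of cost per unit value while the target constraint remains satisfied. All values are assumed to be strictly positive.

```python
# Cost of impression i (Eq. (1)) for bidder j: the largest competing bid that
# clears its own reserve, or the channel/publisher reserve, whichever is higher.
def cost(j, bids, values, r_k, p_i):
    competing = [b for l, b in enumerate(bids)
                 if l != j and b >= max(r_k * values[l], p_i * values[l])]
    return max(max(competing, default=0.0), r_k * values[j], p_i * values[j])

# Best uniform bidding parameter for a tCPA bidder (ties and fractional
# allocations ignored): sort impressions by cost per unit value and take the
# longest prefix whose total cost stays below T times its total value; the
# bidding parameter never drops below T (Assumption 1).
def tcpa_best_response(values, costs, T):
    order = sorted(range(len(values)), key=lambda i: costs[i] / values[i])
    spend, value, alpha = 0.0, 0.0, T
    for i in order:
        if spend + costs[i] <= T * (value + values[i]):
            spend += costs[i]
            value += values[i]
            alpha = max(alpha, costs[i] / values[i])  # bid just enough to win i
        else:
            break
    return alpha

# Toy usage: three impressions, a single competitor, channel reserve r_k = 0.2.
vals_j = [1.0, 1.0, 2.0]                      # bidder j's values v_{j,i}
comp_bids = [0.5, 1.2, 5.0]                   # the competitor's bids per impression
costs = [cost(0, [0.0, comp_bids[i]], [vals_j[i], 1.0], r_k=0.2, p_i=0.0)
         for i in range(3)]
# The cheap first impression leaves slack, so the bidder can bid above T = 1.0.
print(costs, tcpa_best_response(vals_j, costs, T=1.0))
```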
### Bidders' Subgame
Bidders observe the reserve prices \(\mathbf{r}=(r_{k})_{k\in K}\) posted by the channels and decide their bids \(\mathbf{b}_{i}(\mathbf{r})\) for each impression \(i\), and if a Bidder \(j\) is tied for impression \(i\), they can also decide the fraction \(x_{j,i}(\mathbf{r})\) of impression \(i\) they get. In the previous subsection, we have shown that for any bidder of any type, the best response given other bidders' bids is bidding uniformly, and hence, we assume that each bidder \(j\) uses uniform bidding with a bidding parameter \(\alpha_{j}(\mathbf{r})\).
Moreover, we assume that in the bidders' subgame, bidders use the _undominated uniform_ bidding strategies. Specifically, for a QL bidder, bidding less than their value is dominated by bidding their value, and for a tCPA bidder \(j\), using a bidding parameter less than \(T_{j}\) is dominated by using a bidding parameter \(\alpha_{j}(\mathbf{r})=T_{j}\). To see the latter, notice that a tCPA bidder \(j\) using a bidding parameter strictly less than \(T_{j}\) cannot be tCPA-constrained since their cost for any impression they are winning cannot be more than their bid \(b_{j,i}=\alpha_{j}(\mathbf{r})v_{j,i}<T_{j}v_{j,i}\). Thus, their tCPA constraint, i.e., the first constraint in Problem (2), is not tight, and hence, by increasing their bidding parameter to \(T_{j}\), the bidder can only increase the total value without violating its tCPA constraint. In summary, we make the following assumption:
**Assumption 1** (Uniform Undominated Bidding).: _Each bidder \(j\in J\) uses uniform bidding, i.e., \(\forall i\in I\), \(b_{j,i}(\mathbf{r})=\alpha_{j}(\mathbf{r})v_{j,i}\) for some bidding parameter \(\alpha_{j}(\mathbf{r})\geq 0\). Moreover, each QL bidder \(j\) uses a bidding parameter \(\alpha_{j}(\mathbf{r})=1\), and each tCPA bidder \(j\) uses a bidding parameter \(\alpha_{j}(\mathbf{r})\geq T_{j}\)._
The equilibrium solution we adopt for the bidders' subgame is a version of subgame perfection that takes into account endogenous tie-breaking rules, which is in line with the literature, e.g., the pacing equilibrium for Budgeted bidders [12] and the autobidding equilibrium for tCPA bidders [18].
**Definition 1** (Subgame Bidding Equilibrium).: _Consider the bidders' subgame given reserve prices \(\mathbf{r}\) posted by the channels. An equilibrium for the subgame consists of bidders' bidding parameters \(\mathbf{\alpha}(\mathbf{r})=(\alpha_{j}(\mathbf{r}))_{j\in J}\) and probabilities of allocations of the impressions \(\mathbf{x}(\mathbf{r})=(x_{j,i}(\mathbf{r}))_{j\in J,i\in I}\) such that_
1. _Only a bidder whose bid is no less than the cost gets the impression: for_ \(i\in I_{k}\)_,_ \(x_{j,i}(\mathbf{r})>0\) _holds only if_ \(b_{j,i}(\mathbf{r})\geq c_{j,i}(\mathbf{b}_{i}(\mathbf{r}),\mathbf{r})\)_._
2. _Full allocation of any item with a bid above the reserve price: for_ \(i\in I_{k}\)_,_ \(\sum_{j\in J}x_{j,i}(\mathbf{r})=1\) _must hold if there exists some_ \(\ell\in J\) _such that_ \(b_{\ell,i}(\mathbf{r})>r_{\ell,i}\)
3. _Constraints are satisfied: for each_ \(j\in J_{\mbox{\tiny budget}}\)_,_ \(\sum_{i\in I}c_{j,i}(\mathbf{b}_{i}(\mathbf{r}),\mathbf{r})\cdot x_{j,i}(\mathbf{r})\leq B_{j}\)_, and for each_ \(j\in J_{\mbox{\tiny tCPA}}\)_,_ \(\sum_{i\in I}c_{j,i}(\mathbf{b}_{i}(\mathbf{r}),\mathbf{r})x_{j,i}(\mathbf{r})\leq T_{j}\cdot\sum_{i\in I}v_{j,i}x _{j,i}(\mathbf{r})\)_._
4. _For every Budgeted or tCPA bidder_ \(j\)_, even if they can decide the fraction_ \(x_{j,i}(\mathbf{r})\) _of an impression_ \(i\) _they get in case they are tied for impression_ \(i\)_, increasing their bidding parameter would not increase their value without violating their budget/tCPA constraint._
The existence of subgame bidding equilibrium is a straightforward consequence by adapting the existence proofs of the pacing equilibrium for Budgeted bidders [12] and the autobidding equilibrium for tCPA bidders [18].
**Proposition 2**.: _In the bidders' subgame, the subgame bidding equilibrium always exists._
### Channels
We focus on two models that depend on the objective functions the channels may have: the _Local_ channels model and the _Global_ channels model.
**Local Channels Model**: In this case, each channel sets its reserve price \(r_{k}\) to maximize its own revenue given the other channels' reserve prices \(\mathbf{r}_{-k}\). Thus, channel \(k\) solves
\[\max_{r_{k}}\sum_{j\in J}\sum_{i\in I_{k}}c_{j,i}(\mathbf{b}_{i}(r_{ k},\mathbf{r}_{-k}),r_{k},\mathbf{r}_{-k})x_{j,i}(r_{k}, \mathbf{r}_{-k}).\]
**Global Channels Model**: In this case, the channels determine the reserve prices \(r\) to maximize the sum of the revenue across all channels. Thus, they set reserve prices solving
\[\max_{\mathbf{r}}\sum_{k\in K}\sum_{j\in J}\sum_{i\in I_{k}}c_{j,i}( \mathbf{b}_{i}(\mathbf{r}),\mathbf{r})x_{j,i}( \mathbf{r}).\]
### The Full Game
We summarize the full game for the channels and the bidders as the following two-stage game:
1. **(S0)** Each Channel \(k\in K\) chooses a uniform reserve price (in the cost-per-unit-value space) \(r_{k}\) with finite precision6 for their impressions \(I_{k}\). Footnote 6: Note that assuming the reserve prices have finite precision is very natural in practice.
2. **(S1)** Each Bidder \(j\in J\) observes the reserve prices \(r\) posted by the channels (and the reserve prices \((p_{i})_{i\in I}\) prechosen by the publishers), and then they choose a bidding parameter \(\alpha_{j}(\mathbf{r})\) and submit their bids according to \(\alpha_{j}(\mathbf{r})\) (see Assumption 1). If Bidder \(j\) is tied for impression \(i\), they can also decide the fraction \(x_{j,i}(\mathbf{r})\) of impression \(i\) they get.
By Proposition 2, given any fixed \(r\) in the support of the channels' mixed strategies, stage (S1) has a subgame equilibrium between the bidders. We assume that stage (S1) always results in one such equilibrium deterministically, i.e., henceforth, we assume that \(((\mathbf{b}_{i}(\mathbf{r}))_{i\in I},\mathbf{x}(\mathbf{r}))\) in stage (S1) is always a fixed subgame equilibrium given \(r\) (as defined in Definition 1).
Channels are allowed to use mixed strategies in stage (S0), i.e., sampling their reserve price \(r_{k}\) from a distribution \(\mathcal{R}_{k}\). Notice that the game for the channels is a finite game between finite players, and hence, there always exists a mixed-strategy equilibrium by the celebrated Nash's theorem [20].
Additionally, we assume the game is complete-information. That is, \((v_{j,i},B_{j},T_{j},p_{i})_{j\in J,i\in I}\) are known to the channels and the bidders.
### Important Concepts
We now present the main concepts which we will use to compare the outcomes of the local channels model to the global channels model.
**Definition 2** (Liquid Welfare).: _The liquid welfare of a fractional allocation \(\mathbf{x}=(x_{j,i})_{j\in J,i\in I}\) is_
\[Wel(\mathbf{x})=\sum_{j\in J_{\text{Hudgeted}}}\min\left\{B_{j},\sum_{i\in I}v_{j,i }x_{j,i}\right\}+\sum_{j\in J_{\text{ICPA}}}\sum_{i\in I}T_{j}v_{j,i}x_{j,i}+ \sum_{j\in J_{\text{QL}}}\sum_{i\in I}v_{j,i}x_{j,i},\]
_and the optimal liquid welfare is \(Wel^{*}:=\max_{\mathbf{x}\text{ that satisfies bidders' constraints}}Wel(\mathbf{x})\)._
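As a small illustration of Definition 2, the liquid welfare of a given fractional allocation can be computed as in the sketch below (illustrative data structures; feasibility of the allocation is not checked):

```python
# Liquid welfare of a fractional allocation x (Definition 2): budgets cap the
# value of Budgeted bidders, tCPA bidders contribute T_j times their value,
# and QL bidders contribute their value directly.
def liquid_welfare(x, v, budgeted, tcpa, ql, B, T):
    # x[j][i] = fraction of impression i given to bidder j, v[j][i] = value v_{j,i}.
    def value(j):
        return sum(v[j][i] * x[j][i] for i in v[j])
    total = sum(min(B[j], value(j)) for j in budgeted)
    total += sum(T[j] * value(j) for j in tcpa)
    total += sum(value(j) for j in ql)
    return total

# Toy usage with one bidder of each type and two impressions.
x = {"b": {0: 1.0, 1: 0.0}, "t": {0: 0.0, 1: 1.0}, "q": {0: 0.0, 1: 0.0}}
v = {"b": {0: 3.0, 1: 1.0}, "t": {0: 1.0, 1: 2.0}, "q": {0: 1.0, 1: 1.0}}
print(liquid_welfare(x, v, budgeted=["b"], tcpa=["t"], ql=["q"],
                     B={"b": 2.0}, T={"t": 0.5}))  # min(2, 3) + 0.5 * 2 + 0 = 3.0
```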
This concept of liquid welfare has been previously studied in e.g., Aggarwal et al. [4] and Azar et al. [5], and it was first introduced by Dobzinski and Paes Leme [14]. It is well-known that optimal liquid welfare is an upper bound on the sum of the revenues of all channels. More precisely, optimal liquid welfare \(Wel^{*}\) is greater or equal than the sum of the channels' revenues, which we denote by \(Rev(\mathbf{r}):=\sum_{k\in K}\sum_{j\in J}\sum_{i\in I_{k}}c_{j,i}(\mathbf{b}_{i}( \mathbf{r}),\mathbf{r})x_{j,i}(\mathbf{r})\), regardless of the reserve prices they choose:
**Fact 1**.: _For any \(\mathbf{r}\in\mathbb{R}_{\geq 0}^{K}\), \(Rev(\mathbf{r})\leq Wel^{*}\)._
Thus, we use the optimal liquid welfare as the benchmark to measure performance of the revenue in the local and global models. We let LocalEQ denote the set that contains every mixed-strategy equilibrium \(\mathcal{R}:=(\mathcal{R}_{k})_{k\in K}\) for the channels in the local channel model, and we define the revenue guarantees in the local and global models as follows:
**Definition 3** (Revenue Guarantee).: _The revenue guarantees for the local and global models are defined as_
\[RevG(Local) =\frac{\inf_{\mathcal{R}\in\text{LocalEQ}}\mathbb{E}_{\mathbf{r} \sim\mathcal{R}}[\text{Rev}(\mathbf{r})]}{Wel^{*}},\] \[RevG(Global) =\frac{\sup_{\mathbf{r}\in\mathbb{R}_{\geq 0}^{K}}\text{ Rev}(\mathbf{r})}{Wel^{*}}.\]
Note that any reserve prices \(\mathbf{r}\) in the support of any \(\mathcal{R}\in\text{LocalEQ}\) in the local model is also feasible in the global model, thereby giving us the following fact.
**Fact 2**.: \(RevG(Local)\leq RevG(Global)\leq 1\)__
Furthermore, to compare the outcomes of the two models, we use the standard notion of the Price of Anarchy [17].
**Definition 4** (Price of Anarchy).: _The Price of Anarchy (PoA) of the local model compared to the global model is_
\[PoA\ =\ \frac{\inf_{\mathcal{R}\in\text{LocalEQ}}\mathbb{E}_{\mathbf{r}\sim \mathcal{R}}[\text{Rev}(\mathbf{r})]}{\sup_{\mathbf{r}\in\mathbb{R}_{\geq 0}^{K}} \text{ Rev}(\mathbf{r})}.\]
## 4 Hardness of Equilibrium Computation
In this section, we study the computational complexity of computing equilibrium of our game. Our main result in this section is that we show that even just finding the subgame equilibrium (Definition 1) for the bidders' subgame is already computationally hard:
**Theorem 1**.: _Finding the subgame equilibrium (Definition 1) is PPAD-hard._
We prove this by reduction from the problem of finding an approximate Nash equilibrium for the 0-1 bimatrix game, which was shown to be PPAD-hard in [11]. The basic idea of the proof is similar to that of the hardness result for finding a pacing equilibrium for budget-constrained quasilinear bidders [10]. However, we have to handle many difficulties that are unique to tCPA bidders. Most notably, in contrast to budget-constrained quasi-linear bidders, whose bidding parameters are at most 1, tCPA bidders do not have a natural upper bound for their bids, and their bidding parameters can be arbitrarily high when their tCPA constraints are not tight. We construct new gadgets that force tCPA bidders' bidding parameters to stay bounded but still leave a controlled amount of "slack" for them, so that they can bid on impressions that are more expensive than their tCPA but not win all of them. We provide the full reduction in the appendix.
Despite the computational hardness, we are able to prove tight revenue guarantees that channels can achieve in the equilibrium, which we will present in the subsequent sections.
## 5 Revenue and Price of Anarchy with no Publisher Reserves
In this section, we focus on the setting where impressions do not have publisher-chosen reserve prices, i.e. \(p_{i}=0\) for every \(i\in I\). We study the revenue guarantees that the channels can achieve in the local model where each channel chooses their reserve price out of their own self-interest vs. the global model where the channels cooperatively choose the reserve prices to maximize their total revenue.
Our main results in this section are the following:
* We establish a revenue guarantee (defined in Definition 3) in the local model (Theorem 2).
* Moreover, we prove that our revenue guarantee in the local model is _tight even for the global model_ (Proposition 3).
* Furthermore, as a corollary of the revenue guarantee in the local model, we immediately get a lower bound for the price of anarchy (Theorem 3). We give a matching upper bound for the price of anarchy (Theorem 4) and thus establish a _tight separation between the local and global models_.
### Revenue Guarantees
We begin by proving the main technical result of this section, which establishes a revenue guarantee for the local model. It is PPAD-hard to actually compute the equilibrium, as shown in Theorem 1. Nevertheless, we will show that each channel can set a certain reserve price in order to guarantee itself a decent amount of revenue, irrespective of the reserve prices set by other channels.
To do this, we will show that each channel can set a reserve price which ensures that its revenue is at least a certain fraction of the total budget of unconstrained Budgeted bidders (Lemma 1 and
Corollary 1). Then, we will show that each channel can set a reserve price which ensures that its revenue is at least a certain fraction of its contribution to the optimal liquid welfare (Definition 2) from tCPA and QL bidders (Lemma 2 and Corollary 2). Finally, we will put these together to get the final revenue guarantee (Theorem 2).
The main difficulty in this proof comes from Budgeted bidders who are unconstrained, i.e. not spending their budget, at the equilibrium of the local model. The contribution to optimal Liquid Welfare from tCPA and QL bidders can be easily attributed to different channels (see the definition of \(W^{*}_{\textsc{tCPA}}(k)\) and \(W^{*}_{\textsc{QL}}(k)\) in Lemma 2) and there is a natural way for a channel to obtain a good fraction of its contribution as revenue (see Lemma 2). However, there is no obvious attribution for the contribution of Budgeted bidders to different channels, and no obvious lower bound on the bid of Budgeted bidders. In order to get a handle on unconstrained Budgeted bidders, we define the notion of _Budget-fraction_.
**Definition 5** (Budget-fraction \(\beta_{j}\) and \(\beta_{max}\), \(\beta_{min}\)).: _For a Budgeted bidder \(j\), define their budget-fraction as \(\beta_{j}=\frac{B_{j}}{\sum_{i\in I}v_{j,i}}\), i.e., the ratio of their budget to the sum of their values of all impressions. Also, define \(\beta_{max}=\max_{j\in J_{\textit{Budgeted}}}\beta_{j}\) and \(\beta_{min}=\min_{j\in J_{\textit{Budgeted}}}\beta_{j}\)._
Intuitively, the budget-fraction for a Budgeted bidder plays a role similar to the tCPA of a tCPA bidder. With this, we are ready to prove some key technical claims that will help establish a lower bound on the bids of unconstrained Budgeted bidders, which in turn will help us find a good reserve price for these bidders.
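For concreteness, the budget-fractions of Definition 5 and the power-of-two bucketization used later in the proof of Lemma 1 (item 3) can be computed as in the following sketch; the data structures and names are purely illustrative.

```python
# Budget-fraction beta_j = B_j / sum_i v_{j,i} (Definition 5) and the
# power-of-two buckets J^s = {j : 2^s * beta_min <= beta_j <= 2^{s+1} * beta_min}
# used when picking a single uniform reserve price (proof of Lemma 1, item 3).
import math

def budget_fractions(B, v):
    # B[j] = budget of bidder j, v[j][i] = value of impression i for bidder j.
    return {j: B[j] / sum(v[j].values()) for j in B}

def bucketize(beta):
    beta_min, beta_max = min(beta.values()), max(beta.values())
    n_buckets = max(1, math.ceil(math.log2(beta_max / beta_min)))
    buckets = {s: [] for s in range(n_buckets)}
    for j, b in beta.items():
        s = min(n_buckets - 1, int(math.log2(b / beta_min)))
        buckets[s].append(j)
    return buckets

beta = budget_fractions(B={"a": 2.0, "b": 9.0},
                        v={"a": {0: 1.0, 1: 3.0}, "b": {0: 2.0, 1: 1.0}})
print(beta, bucketize(beta))  # bidders grouped by powers of two of beta / beta_min
```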
#### Key Claims
**Claim 1**.: _In a subgame equilibrium (Definition 1), if a Budgeted bidder \(j\) is unconstrained, i.e. not spending all its budget, then they must be winning all impressions \(i\) with \(v_{j,i}>0\) and cannot be tied with another unconstrained Budgeted bidder on those impressions._
Proof.: Suppose for contradiction that a Budgeted bidder \(j\) is unconstrained but does not fully win certain impression \(i\) (i.e., \(x_{j,i}(\mathbf{r})<1\)) such that \(v_{j,i}>0\). Notice that Bidder \(j\) can increase the bidding parameter \(\alpha_{j}\) without increasing the total spend until Bidder \(j\) is tied for (but does not fully win) some impression \(i^{\prime}\) with \(v_{j,i^{\prime}}>0\). Such tie must occur, because otherwise, as Bidder \(j\) increases \(\alpha_{j}\), at some point Bidder \(j\) will be tied for the impression \(i\). However, this contradicts item (4) of Definition 1, because Bidder \(j\) can strictly increase the utility by increasing \(x_{j,i^{\prime}}(\mathbf{r})\) by a sufficiently small amount such that their budget constraint is not violated.
The next claim follows directly from Claim 1.
**Claim 2**.: _In a subgame equilibrium, for any impression \(i\in I\), there can be at most one unconstrained Budgeted bidder \(j\) with \(v_{j,i}>0\)._
Next we prove the following claim, which will be helpful in bounding revenue against the optimal Liquid Welfare from unconstrained Budgeted bidders.
**Claim 3**.: _If the final reserve price of an impression \(i\) for a Budgeted bidder \(j\) satisfies that \(r_{j,i}<\beta_{j}v_{j,i}\), then impression \(i\) will be sold for a cost at least \(r_{j,i}\) in the subgame equilibrium._
Proof.: We first show that unless impression \(i\) is fully sold to bidder \(j\) (in which case the statement holds trivially because the cost of impression \(i\) is at least its reserve price \(r_{j,i}\)), Bidder \(j\) will bid \(b_{j,i}\geq\beta_{j}v_{j,i}\) for impression \(i\).
Suppose \(b_{j,i}<\beta_{j}v_{j,i}\) for contradiction. Recall that \(\alpha_{j}\) denotes the bidding parameter of Bidder \(j\). Since \(b_{j,i}=\alpha_{j}v_{j,i}\) is assumed to be less than \(\beta_{j}v_{j,i}\), we get \(\alpha_{j}<\beta_{j}\). Moreover, because the total amount spent by Bidder \(j\) is at most the sum of their bids, we have that
\[\text{Total amount spent by bidder }j\leq\sum_{i\in I}b_{j,i}=\sum_{i\in I} \alpha_{j}v_{j,i}<\sum_{i\in I}\beta_{j}v_{j,i}=B_{j}.\]
Thus, Bidder \(j\) is unconstrained, and by Claim 1, this bidder must be fully winning all impressions with positive value; in particular, impression \(i\) is fully sold to Bidder \(j\), contradicting our assumption that it is not.
Now we have shown that \(b_{j,i}\geq\beta_{j}v_{j,i}\), which is by our assumption strictly greater than \(r_{j,i}\). Thus, it follows from item (2) of Definition 1 that impression \(i\) will be fully sold in the subgame equilibrium (for a cost that is at least \(b_{j,i}>r_{j,i}\) because of item (1) of Definition 1).
The following claim, analogous to the claim above, will be used to bound revenue against the optimal Liquid Welfare from tCPA and QL bidders.
**Claim 4**.: _If the final reserve price (i.e., in the cost space) of impression \(i\) for a tCPA bidder \(j\) (recall this is denoted by \(r_{j,i}\) in the model section) satisfies that \(r_{j,i}<T_{j}v_{j,i}\), then impression \(i\) will be sold for a cost of at least \(r_{j,i}\) in the subgame equilibrium. Similarly, if the final reserve price of impression \(i\) for a QL bidder \(j\) satisfies that \(r_{j,i}<v_{j,i}\), then impression \(i\) will be sold for a cost of at least \(r_{j,i}\) in the subgame equilibrium._
Proof.: By Assumption 1, a tCPA bidder \(j\) bids \(b_{j,i}(\boldsymbol{r})\geq T_{j}v_{j,i}>r_{j,i}\) on impression \(i\). Thus, by item (2) of Definition 1, impression \(i\) will be fully sold in the subgame equilibrium (for a cost that is at least \(r_{j,i}\) because of item (1) of Definition 1). An analogous argument holds for the case of QL bidder.
Next, we first lower bound each channel's revenue against the optimal Liquid Welfare contribution from Budgeted bidders (Lemma 1 and Corollary 1), and then we lower bound each channel's revenue against the welfare contribution from tCPA and QL bidders (Lemma 2 and Corollary 2). Finally, we will put these together to get a lower bound on the revenue guarantee (Theorem 2).
#### Welfare from Budgeted Bidders
**Lemma 1**.: _Let \(E\) be any subgame equilibrium given any reserve prices. Define the following:_
* _Let_ \(J_{C}^{E}\) _be the subset of Budgeted bidders who are_ **constrained**_, i.e. are spending their entire budget in the equilibrium_ \(E\)_._
* _Let_ \(J_{U}^{E}\) _be the subset of Budgeted bidders who are_ **unconstrained**_, i.e. are spending strictly less than their budget in the equilibrium_ \(E\)_._
* _For Channel_ \(k\) _and Budgeted bidder_ \(j\)_, let_ \(\rho(k,j)=\frac{\sum_{i\in I_{k}}v_{j,i}}{\sum_{i\in I}v_{j,i}}\) _be the ratio of the total value of impressions in_ \(I_{k}\) _for Bidder_ \(j\) _to the total value of all impressions in_ \(I\) _for Bidder_ \(j\)_._
_Then, for any \(\varepsilon>0\),_
1. _in the equilibrium_ \(E\)_, the total revenue of all the channels from a Bidder_ \(j\in J_{C}^{E}\) _is no less than their budget_ \(B_{j}\)_,_
2. _and moreover, if Channel_ \(k\) _could set bidder-specific reserve prices_ \(r_{k}(j)=(1-\varepsilon)\beta_{j}\) _for each Budgeted bidder_ \(j\) _(recall_ \(\beta_{j}\) _is the budget-fraction in Definition_ 5_), then regardless of other channels' reserve prices, in the resulting subgame equilibrium (this is not necessarily_ \(E\)_), Channel_ \(k\) _can obtain a revenue of at least_ \[\sum_{j\in J_{U}^{E}}(1-\varepsilon)\rho(k,j)B_{j},\]
3. _and furthermore, Channel_ \(k\) _can set a uniform reserve price_ \(r_{k}\) _which is independent of_ \(E\) _such that regardless of other channels' reserve prices, in the resulting subgame equilibrium (not necessarily_ \(E\)_), Channel_ \(k\) _will obtain a revenue of at least_ \[\frac{\sum_{j\in J_{U}^{E}}(1-\varepsilon)\rho(k,j)B_{j}}{2\max\left\{1,\left\lceil \log\frac{\beta_{max}}{\beta_{min}}\right\rceil\right\}}.\]
Proof.:
1. Since bidders \(j\in J_{C}^{E}\) are spending their entire budget in \(E\) (by definition of \(J_{C}^{E}\)), the total revenue of all channels from them is equal to their budget.
2. Consider any equilibrium \(E_{r}\) resulting from Channel \(k\)'s bidder-specific reserve prices given in the statement and arbitrary reserve prices of other channels (note \(E_{r}\) is unrelated to \(E\)). Consider any impression \(i\in I_{k}\). Since Channel \(k\) has set a bidder-specific reserve price of \((1-\varepsilon)\beta_{j}\) for each Budgeted bidder \(j\), the reserve price of impression \(i\) for Budgeted bidder \(j\) is \(r_{j,i}=(1-\varepsilon)\beta_{j}v_{j,i}<\beta_{j}v_{j,i}\). Then, by Claim 3, each impression \(i\) is sold for a price of at least \(r_{j,i}=(1-\varepsilon)\beta_{j}v_{j,i}\) in the equilibrium \(E_{r}\) for any \(j\in J_{\text{\rm budgeted}}\). That is, \[\text{the revenue of Channel $k$ in the equilibrium }E_{r}\geq\sum_{i\in I_{k}}\max_{j\in J_{\text{\rm budgeted}}}(1- \varepsilon)\beta_{j}v_{j,i}.\] (4) Now let \(I_{k}(j)\subseteq I_{k}\) denote the set of the impressions \(i\in I_{k}\) such that \(v_{j,i}>0\). By Claim 2, if \(j\) and \(j^{\prime}\) are two bidders unconstrained in the equilibrium \(E\), then \(I_{k}(j)\) and \(I_{k}(j^{\prime})\) are disjoint. Hence, we have that \[\sum_{i\in I_{k}}\max_{j\in J_{\text{\rm budgeted}}}(1-\varepsilon) \beta_{j}v_{j,i} \geq\sum_{j\in J_{U}^{E}}\sum_{i\in I_{k}(j)}(1-\varepsilon) \beta_{j}v_{j,i}\] (5) \[=\sum_{j\in J_{U}^{E}}(1-\varepsilon)\beta_{j}\sum_{i\in I_{k}(j )}v_{j,i}\] (6) \[=\sum_{j\in J_{U}^{E}}(1-\varepsilon)\beta_{j}\rho(k,j)\sum_{i\in I }v_{j,i}\] (7) \[=\sum_{j\in J_{U}^{E}}(1-\varepsilon)\rho(k,j)B_{j},\] (8) which finishes the proof by Inequality (4).
3. The high-level idea for setting a good uniform reserve price is to bucketize the reserve prices and pick the one with the highest revenue potential. Specifically, we divide Budgeted bidders into the following buckets: \[J^{s}_{\text{Budgeted}}=\{j:j\in J_{\text{Budgeted}}\text{ and }2^{s}\beta_{min}\leq\beta_{j}\leq 2^{s+1}\beta_{min}\}\] for \(s\in\{0\}\cup\left[\lceil\log\frac{\beta_{max}}{\beta_{min}}\rceil-1\right]\) (recall \(\beta_{max},\beta_{min}\) are the largest and smallest budget-fractions respectively defined in Definition 5). We observe that if Channel \(k\) sets its uniform reserve price to \((1-\varepsilon)2^{s}\beta_{min}\), then for bidders \(j\in J^{s}_{\text{Budgeted}}\), it holds that \(r_{j,i}=(1-\varepsilon)2^{s}\beta_{min}v_{j,i}<\beta_{j}v_{j,i}\) for all impressions \(i\in I_{k}\). Thus, by Claim 3, each impression \(i\in I_{k}\) will get sold for a price of at least \(\max_{j\in J^{s}_{\text{Budgeted}}}r_{j,i}=\max_{j\in J^{s}_{\text{Budgeted}} }(1-\varepsilon)2^{s}\beta_{min}v_{j,i}\geq\max_{j\in J^{s}_{\text{Budgeted}} }\frac{1-\varepsilon}{2}\beta_{j}v_{j,i}\) (the inequality is by bucketization) in the subgame equilibrium \(E_{r}\) that results from Channel \(k\) setting a uniform reserve price of \((1-\varepsilon)2^{s}\beta_{min}\) and arbitrary reserve prices set by other channels. Thus, the revenue of Channel \(k\) from setting a uniform reserve price to \((1-\varepsilon)2^{s}\beta_{min}\)
\[\text{Rev}_{k}((1-\varepsilon)2^{s}\beta_{min})\geq\sum_{i\in I_{k}}\max_{j \in J^{s}_{\text{Budgeted}}}\frac{1-\varepsilon}{2}\beta_{j}v_{j,i}. \tag{9}\]
Now let \(I_{k}(j)\subseteq I_{k}\) denote the set of impressions \(i\in I_{k}\) such that \(v_{j,i}>0\). Then, by Claim 2, \(I_{k}(j)\) and \(I_{k}(j^{\prime})\) are disjoint for two unconstrained Budgeted bidders \(j\neq j^{\prime}\) in the equilibrium \(E\). Hence, we have that
\[\sum_{s}\sum_{i\in I_{k}}\max_{j\in J^{s}_{\text{Budgeted}}}\frac{1-\varepsilon}{2}\beta_{j}v_{j,i} \geq\sum_{s}\sum_{j\in J^{E}_{U}\cap J^{s}_{\text{Budgeted}}}\sum_{i\in I_{k}(j)}\frac{1-\varepsilon}{2}\beta_{j}v_{j,i}\] \[=\sum_{j\in J^{E}_{U}}\sum_{i\in I_{k}(j)}\frac{1-\varepsilon}{2}\beta_{j}v_{j,i}\] \[\geq\frac{1-\varepsilon}{2}\sum_{j\in J^{E}_{U}}\rho(k,j)B_{j}, \tag{10}\]
where the last inequality follows from the same derivation as in Inequalities (5-8).
Finally, let \(s^{*}=\arg\max_{s\in\{0\}\cup\left[\lceil\log\frac{\beta_{max}}{\beta_{min}}\rceil-1\right]}\sum_{i\in I_{k}}\max_{j\in J^{s}_{\text{Budgeted}}}\frac{1-\varepsilon}{2}\beta_{j}v_{j,i}\). Then the revenue of Channel \(k\) from setting the uniform reserve price \(r^{*}_{k}=(1-\varepsilon)2^{s^{*}}\beta_{min}\) (notice \(r^{*}_{k}\) is indeed independent of \(E\)) is
\[\text{Rev}_{k}((1-\varepsilon)2^{s^{*}}\beta_{min}) \geq\sum_{i\in I_{k}}\max_{j\in J^{s}_{\text{Budgeted}}}\frac{1- \varepsilon}{2}\beta_{j}v_{j,i}\] (By Inequality (9)) \[\geq\frac{\sum_{s}\sum_{i\in I_{k}}\max_{j\in J^{s}_{\text{Budgeted }}}\frac{1-\varepsilon}{2}\beta_{j}v_{j,i}}{\max\left\{1,\left\lceil\log\frac{ \beta_{max}}{\beta_{min}}\right\rceil\right\}}\] (By definition of \[s^{*}\] ) \[\geq\frac{\frac{1-\varepsilon}{2}\sum_{j\in J^{E}_{U}}\rho(k,j)B_{j} }{\max\left\{1,\left\lceil\log\frac{\beta_{max}}{\beta_{min}}\right\rceil\right\}}\] (By Inequality (10)).
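To make the bucketization in item (3) concrete, the following small sketch (ours, not part of the proof; the function name, data layout and the choice of base-2 logarithms are illustrative assumptions) enumerates the candidate uniform reserve prices \((1-\varepsilon)2^{s}\beta_{min}\) and returns the one whose guaranteed revenue, computed as in the derivation of Inequality (9), is largest.

```python
import math

def best_uniform_reserve_for_budgeted(values, betas, eps=0.01):
    """Illustrative sketch of the bucketization in item (3) of Lemma 1.

    values[j][i] -- value v_{j,i} of impression i of this channel for Budgeted bidder j
    betas[j]     -- budget-fraction beta_j of Budgeted bidder j
    Returns (reserve, bound): the candidate reserve (1 - eps) * 2**s * beta_min with the
    largest guaranteed revenue, bucket by bucket.
    """
    beta_min, beta_max = min(betas.values()), max(betas.values())
    num_buckets = max(1, math.ceil(math.log2(beta_max / beta_min)))
    impressions = {i for vals in values.values() for i in vals}

    best_reserve, best_bound = None, -1.0
    for s in range(num_buckets):
        # Bidders whose budget-fraction lies in bucket s.
        bucket = [j for j, b in betas.items() if 2**s * beta_min <= b <= 2**(s + 1) * beta_min]
        reserve = (1 - eps) * 2**s * beta_min
        # By Claim 3, every impression sells for at least the largest reserve (in the
        # cost space) among bucket-s bidders, i.e. reserve * v_{j,i}.
        bound = sum(max((reserve * values[j].get(i, 0.0) for j in bucket), default=0.0)
                    for i in impressions)
        if bound > best_bound:
            best_reserve, best_bound = reserve, bound
    return best_reserve, best_bound
```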
Item (3) in Lemma 1 implies the following corollary:
**Corollary 1**.: _Define \(\rho(k,j)\) as in Lemma 1. Let \(\mathcal{R}=(\mathcal{R}_{k})_{k\in K}\) be any mixed-strategy equilibrium of the channels' game (i.e., S0), and let \(E(\mathbf{r})\) be the subgame equilibrium given any reserve prices \(\mathbf{r}\) in the support of the channels' mixed strategies. Then, the expected revenue of Channel \(k\) in the mixed-strategy equilibrium \(\mathcal{R}\) is at least_
\[\mathbf{E}_{\mathbf{r}\sim\mathcal{R}}\left[\frac{\sum_{j\in J_{U}^{E(\mathbf{r})}}(1- \varepsilon)\rho(k,j)B_{j}}{2\max\left\{1,\left\lceil\log\frac{\beta_{max}}{ \beta_{min}}\right\rceil\right\}}\right].\]
#### Welfare from tCPA and QL Bidders
**Lemma 2**.: _Let \(\mathbf{x}^{*}\) be a welfare maximizing allocation (i.e., \(\mathbf{x}^{*}\) is s.t. \(Wel^{*}=Wel(\mathbf{x}^{*})\) in Definition 2) and_
* _let_ \(W^{*}_{\text{tCPA}}(k)\) _be the liquid welfare generated by the impressions in_ \(I_{k}\) _allocated to tCPA bidders in_ \(x^{*}\)_, i.e.,_ \[W^{*}_{\text{tCPA}}(k):=\sum_{j\in J_{\text{tCPA}}}\sum_{i\in I_{k}}T_{j}v_{ j,i}x^{*}_{j,i},\]
* _and let_ \(W^{*}_{\text{QL}}(k)\) _be the liquid welfare generated by the impressions in_ \(I_{k}\) _allocated to quasi-linear bidders in_ \(x^{*}\)_, i.e.,_ \[W^{*}_{\text{QL}}(k):=\sum_{j\in J_{\text{QL}}}\sum_{i\in I_{k}}v_{j,i}x^{*}_{ j,i}.\]
_Then, for any \(\varepsilon>0\),_
1. _if Channel_ \(k\) _could set the bidder-specific reserve prices (also in the cost-per-unit-value space)_ \(r_{k}(j)\) _for each tCPA or QL bidder_ \(j\) _as follows:_ \[r_{k}(j)=\begin{cases}(1-\varepsilon)T_{j}&\text{if $j$ is a tCPA bidder}\\ 1-\varepsilon&\text{if $j$ is a QL bidder},\end{cases}\] _then Channel_ \(k\) _obtains a revenue at least_ \((1-\varepsilon)(W^{*}_{\text{tCPA}}(k)+W^{*}_{\text{QL}}(k))\) _regardless of what other channels do,_
2. _and moreover, we let_ \(T_{max}=\max_{j\in J_{\text{tCPA}}}T_{j}\) _and let_ \(T_{min}=\min_{j\in J_{\text{tCPA}}}T_{j}\)_, and then Channel_ \(k\) _can set a uniform reserve price_ \(r_{k}\) _s.t. Channel_ \(k\) _obtains a revenue at least_ \[\frac{(1-\varepsilon)(W^{*}_{\text{tCPA}}(k)+W^{*}_{\text{QL}}(k))}{2+2\max\left\{1,\left\lceil\log\frac{T_{max}}{T_{min}}\right\rceil\right\}}\] _regardless of what other channels do._
Proof.:
1. Fix Channel \(k\)'s bidder-specific reserve prices as in the assumption and consider any subgame equilibrium. For any impression \(i\in I_{k}\), let Bidder \(j=\arg\max_{\ell\in J_{\text{tCPA}}\text{ s.t. }x^{*}_{\ell,i}>0}T_{\ell}v_{\ell,i}\), and let Bidder \(q=\arg\max_{\ell\in J_{\text{QL}}\text{ s.t. }x^{*}_{\ell,i}>0}v_{\ell,i}\). Since \(r_{j,i}=r_{k}(j)v_{j,i}=(1-\varepsilon)T_{j}v_{j,i}<T_{j}v_{j,i}\), it follows from Claim 4 that impression \(i\) will be sold for a cost of at least \(r_{j,i}=(1-\varepsilon)T_{j}v_{j,i}\).
Similarly, since \(r_{q,i}=r_{k}(q)v_{q,i}=(1-\varepsilon)v_{q,i}<v_{q,i}\), it follows from Claim 4 that impression \(i\) will be sold for a cost of at least \(r_{q,i}=(1-\varepsilon)v_{q,i}\). Moreover, the contribution of impression \(i\) to \(W^{*}_{\mbox{\tiny{tCPA}}}(k)+W^{*}_{\mbox{\tiny{QL}}}(k)\) is at most \(\max\{v_{q,i},T_{j}v_{j,i}\}\), and we have shown that impression \(i\) will be sold for at least a \((1-\varepsilon)\)-fraction of this amount. Summing over the impressions in \(I_{k}\), it follows that Channel \(k\)'s revenue is at least \((1-\varepsilon)(W^{*}_{\mbox{\tiny{tCPA}}}(k)+W^{*}_{\mbox{\tiny{QL}}}(k))\).
2. The high-level idea for setting a good uniform reserve price is again to bucketize the bidder-specific reserve prices used above and set the uniform reserve price \(r_{k}\) to the lower end of the bucket that has the highest revenue potential. Specifically, we divide all the tCPA bidders into the following buckets: \[J^{s}_{\mbox{\tiny{tCPA}}}=\{j:j\in J_{\mbox{\tiny{tCPA}}},2^{s}T_{min}\leq T_{j}\leq 2^{s+1}T_{min}\}\] for \(s\in\{0\}\cup\left[\lceil\log\frac{T_{max}}{T_{min}}\rceil-1\right]\). As before, for any impression \(i\in I_{k}\), let bidder \(j=\arg\max_{\ell\in J_{\mbox{\tiny{tCPA}}}\mbox{ s.t. }x^{*}_{\ell,i}>0}T_{\ell}v_{\ell,i}\) and bidder \(q=\arg\max_{\ell\in J_{\mbox{\tiny{QL}}}\mbox{ s.t. }x^{*}_{\ell,i}>0}v_{\ell,i}\), and notice that the contribution of impression \(i\) to \(W^{*}_{\mbox{\tiny{tCPA}}}(k)+W^{*}_{\mbox{\tiny{QL}}}(k)\) is at most \(\max\{v_{q,i},T_{j}v_{j,i}\}\). If \(T_{j}v_{j,i}>v_{q,i}\), let \(s\) be such that \(j\in J^{s}_{\mbox{\tiny{tCPA}}}\). Suppose Channel \(k\) sets a reserve price of \(r_{k}=(1-\varepsilon)2^{s}T_{min}\), which is strictly less than \(T_{j}\) because of the bucketization. Then, by Claim 4, impression \(i\) will be sold at a cost of at least \((1-\varepsilon)2^{s}T_{min}v_{j,i}\geq\frac{1-\varepsilon}{2}T_{j}v_{j,i}\) in the subgame equilibrium, where the inequality again uses the bucketization. If \(v_{q,i}\geq T_{j}v_{j,i}\), suppose Channel \(k\) sets a reserve price \(r_{k}=1-\varepsilon\). Then, by Claim 4, impression \(i\) will be sold at a cost of at least \((1-\varepsilon)v_{q,i}\) in the subgame equilibrium. Now we put these two cases together. Let \(Rev_{k}(r_{k})\) be the revenue of Channel \(k\) at the subgame equilibrium if Channel \(k\) sets a uniform reserve price \(r_{k}\) (regardless of the reserve prices of other channels), and let \[\mathcal{C}_{k}:=\{1-\varepsilon\}\cup\left\{(1-\varepsilon)2^{s}T_{min}:s\in\{0\}\cup\left[\lceil\log\frac{T_{max}}{T_{min}}\rceil-1\right]\right\}\] be the set of candidate reserve prices considered above. Then, summing over all the candidates, we have \[\sum_{r_{k}\in\mathcal{C}_{k}}Rev_{k}(r_{k})\geq\frac{1-\varepsilon}{2}(W^{*}_{\mbox{\tiny{tCPA}}}(k)+W^{*}_{\mbox{\tiny{QL}}}(k)),\] because, as shown in the above case analysis, the candidates together cover at least a \(\frac{1-\varepsilon}{2}\)-fraction of each impression's contribution to \(W^{*}_{\mbox{\tiny{tCPA}}}(k)+W^{*}_{\mbox{\tiny{QL}}}(k)\). Let \(r_{k}^{*}=\arg\max_{r_{k}\in\mathcal{C}_{k}}Rev_{k}(r_{k})\). Then, by setting a reserve price of \(r_{k}^{*}\), Channel \(k\) can get a revenue of at least \[\frac{(1-\varepsilon)(W^{*}_{\mbox{\tiny{tCPA}}}(k)+W^{*}_{\mbox{\tiny{QL}}}(k))}{2+2\max\left\{1,\left\lceil\log\frac{T_{max}}{T_{min}}\right\rceil\right\}}.\]
If Channel \(k\) can always secure a certain amount of revenue by setting a particular uniform reserve price \(r_{k}\), regardless of what the other channels do, then Channel \(k\)'s revenue at any mixed-strategy equilibrium of the channels' game (i.e., stage (S0) of the full game) is at least the same amount (because otherwise Channel \(k\) would deviate to the uniform reserve price \(r_{k}\)). Thus, item (2) in Lemma 2 implies the following corollary:
**Corollary 2**.: _Let \(W^{*}_{\mbox{\tiny{tCPA}}}(k)\) and \(W^{*}_{\mbox{\tiny{QL}}}(k)\) be defined as in Lemma 2 above. Then, for any \(\varepsilon>0\), at any mixed-strategy equilibrium of the channels' game (S0), the expected revenue of channel \(k\) is at least_
\[\frac{(1-\varepsilon)(W^{*}_{\mbox{\tiny{tCPA}}}(k)+W^{*}_{\mbox{\tiny{QL}}}( k))}{2+2\max\left\{1,\left\lceil\log\frac{T_{max}}{T_{min}}\right\rceil \right\}}.\]
### The Final Revenue Guarantee
**Theorem 2**.: _For any \(\varepsilon>0\),_
\[RevG(Local)\geq\frac{1-\varepsilon}{3+2\max\left\{1,\left\lceil\log\frac{T_{ max}}{T_{min}}\right\rceil\right\}+2\max\left\{1,\left\lceil\log\frac{\beta_{max}}{ \beta_{min}}\right\rceil\right\}}.\]
Proof.: Let \(\mathcal{R}=(\mathcal{R}_{k})_{k\in K}\) be any mixed-strategy equilibrium of the channels' game, and let \(E(\mathbf{r})\) denote the subgame equilibrium given any reserve prices \(\mathbf{r}\) in the support of the channels' mixed strategies. Let \(\mathbf{x}^{*}\) be the liquid welfare maximizing allocation (i.e., \(\mathbf{x}^{*}\) is s.t. \(Wel^{*}=Wel(\mathbf{x}^{*})\)).
By Corollary 2, the expected revenue of Channel \(k\) in the equilibrium \(\mathcal{R}\), denoted by \(\mbox{Rev}_{k}[\mathcal{R}]\), is
\[\mbox{Rev}_{k}[\mathcal{R}]\geq\frac{(1-\varepsilon)(W^{*}_{\mbox{\tiny{tCPA }}}(k)+W^{*}_{\mbox{\tiny{QL}}}(k))}{2+2\max\left\{1,\left\lceil\log\frac{T_{ max}}{T_{min}}\right\rceil\right\}},\]
where \(W^{*}_{\mbox{\tiny{tCPA}}}(k)\) and \(W^{*}_{\mbox{\tiny{QL}}}(k)\) are defined as in Lemma 2.
Thus, the expected total revenue of all channels, denoted by \(\mbox{Rev}[\mathcal{R}]\), is
\[\mbox{Rev}[\mathcal{R}]\geq\frac{(1-\varepsilon)(W^{*}_{\mbox{\tiny{tCPA}}}+ W^{*}_{\mbox{\tiny{QL}}})}{2+2\max\left\{1,\left\lceil\log\frac{T_{max}}{T_{min}} \right\rceil\right\}}, \tag{11}\]
where \(W^{*}_{\mbox{\tiny{tCPA}}}:=\sum_{k\in K}W^{*}_{\mbox{\tiny{tCPA}}}(k)\) and \(W^{*}_{\mbox{\tiny{QL}}}:=\sum_{k\in K}W^{*}_{\mbox{\tiny{QL}}}(k)\) denote the total contributions of tCPA and QL bidders to the liquid welfare of \(x^{*}\) respectively.
Let \(\rho(k,j)\) be as defined in Lemma 1. Then, by Corollary 1,
\[\mbox{Rev}_{k}[\mathcal{R}]\geq\mathbf{E}_{\mathbf{r}\sim\mathcal{R}}\left[\frac{ \sum_{j\in J^{E(\mathbf{r})}_{U}}(1-\varepsilon)\rho(k,j)B_{j}}{2\max\left\{1, \left\lceil\log\frac{\beta_{max}}{\beta_{min}}\right\rceil\right\}}\right].\]
Summing over all channels, we get
\[\mbox{Rev}[\mathcal{R}]\geq\sum_{k}\mathbf{E}_{\mathbf{r}\sim\mathcal{R}}\left[ \frac{\sum_{j\in J^{E(\mathbf{r})}_{U}}(1-\varepsilon)\rho(k,j)B_{j}}{2\max\left\{ 1,\left\lceil\log\frac{\beta_{max}}{\beta_{min}}\right\rceil\right\}}\right] =\mathbf{E}_{\mathbf{r}\sim\mathcal{R}}\left[\frac{\sum_{j\in J^{E(\mathbf{r})}_{U}}(1 -\varepsilon)B_{j}}{2\max\left\{1,\left\lceil\log\frac{\beta_{max}}{\beta_{min }}\right\rceil\right\}}\right]. \tag{12}\]
Also, by item (1) of Lemma 1, we have that
\[\mbox{Rev}[\mathcal{R}]\geq\mathbf{E}_{\mathbf{r}\sim\mathcal{R}}\left[\sum_{j\in J ^{E(\mathbf{r})}_{C}}B_{j}\right]. \tag{13}\]
Notice that
\[Wel^{*}\leq W^{*}_{\mbox{\tiny{tCPA}}}+W^{*}_{\mbox{\tiny{QL}}}+\sum_{j\in J_{\mbox{\tiny{Budgeted}}}}B_{j},\]
and then the theorem follows from Inequalities (11), (12) and (13).
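For completeness, the final combination can be spelled out as follows (this computation is ours). Since in every subgame equilibrium \(E(\mathbf{r})\) the Budgeted bidders are partitioned into \(J_{C}^{E(\mathbf{r})}\) and \(J_{U}^{E(\mathbf{r})}\), Inequalities (12) and (13) give

\[\sum_{j\in J_{\mbox{\tiny{Budgeted}}}}B_{j}=\mathbf{E}_{\mathbf{r}\sim\mathcal{R}}\Bigg[\sum_{j\in J_{C}^{E(\mathbf{r})}}B_{j}+\sum_{j\in J_{U}^{E(\mathbf{r})}}B_{j}\Bigg]\leq\mbox{Rev}[\mathcal{R}]\left(1+\frac{2\max\left\{1,\left\lceil\log\frac{\beta_{max}}{\beta_{min}}\right\rceil\right\}}{1-\varepsilon}\right),\]

while Inequality (11) gives \(W^{*}_{\mbox{\tiny{tCPA}}}+W^{*}_{\mbox{\tiny{QL}}}\leq\mbox{Rev}[\mathcal{R}]\cdot\frac{2+2\max\{1,\lceil\log(T_{max}/T_{min})\rceil\}}{1-\varepsilon}\). Adding the two bounds, using \(1\leq\frac{1}{1-\varepsilon}\) and the displayed upper bound on \(Wel^{*}\) yields \(Wel^{*}\leq\mbox{Rev}[\mathcal{R}]\cdot\frac{3+2\max\{1,\lceil\log(T_{max}/T_{min})\rceil\}+2\max\{1,\lceil\log(\beta_{max}/\beta_{min})\rceil\}}{1-\varepsilon}\), which is exactly the claimed revenue guarantee.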
Combining Theorem 2 with Fact 2, we get the following corollary:
**Corollary 3**.: _For any \(\varepsilon>0\),_
\[RevG(Global)\geq\frac{1-\varepsilon}{3+2\max\left\{1,\left\lceil\log\frac{T_{max} }{T_{min}}\right\rceil\right\}+2\max\left\{1,\left\lceil\log\frac{\beta_{max}} {\beta_{min}}\right\rceil\right\}}.\]
Finally, we show that the above revenue guarantees in the local and global models are both tight up to a constant factor by constructing an example using the well-known "equal-revenue" trick.
**Proposition 3**.: _For the single-channel setting, there is an instance where \(RevG(Global)=RevG(Local)=O(1/(\log(T_{max}/T_{min})+\log(\beta_{max}/\beta_{min})))\)._
Proof.: Since there is only one channel, \(RevG(Global)=RevG(Local)\).
Consider \(2^{\ell}\) tCPA bidders with tCPAs \(1/2^{\ell}\) for \(\ell=0,\ldots,w_{1}-1\), each interested in a unique impression with a value of \(1\) (i.e. their value for every other impression is \(0\), and everyone else's value for their impression is \(0\)). Similarly, there are \(2^{\ell}\) Budgeted bidders with budgets \(1/2^{\ell}\) for \(\ell=0,\ldots,w_{2}-1\), each interested in a unique impression with a value of \(1\) (i.e. their value for every other impression is \(0\), and everyone else's value for their impression is \(0\)). Optimal liquid welfare is \(w_{1}+w_{2}\) obtained by giving everyone their unique impression. The best uniform reserve price cannot get a revenue more than \(4\). This shows that \(RevG(Global)\leq 4/(w_{1}+w_{2})=4/(2+\log\frac{T_{max}}{T_{min}}+\log\frac{ \beta_{max}}{\beta_{min}})\).
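The "equal-revenue" structure can be checked numerically; the following small sketch (ours, with illustrative parameter choices and function names) compares the best uniform-reserve revenue with the optimal liquid welfare for this instance.

```python
def equal_revenue_check(w1, w2):
    """Numerical sanity check for the instance in the proof of Proposition 3 (our sketch).

    For l = 0, ..., w-1 there are 2**l single-impression bidders whose target (tCPA side)
    or budget (Budgeted side) equals 2**-l; every impression has value 1, so a reserve in
    the cost-per-unit-value space coincides with the price actually paid.
    """
    optimal_welfare = w1 + w2  # each level contributes 2**l * 2**-l = 1 to the liquid welfare

    def revenue(r):
        rev = 0.0
        for w in (w1, w2):  # tCPA levels and Budgeted levels behave identically here
            for l in range(w):
                if r <= 2.0 ** -l:  # the level-l bidders can afford the reserve
                    rev += (2 ** l) * r  # 2**l such bidders, each paying exactly r
        return rev

    # Revenue is piecewise linear in r, so it suffices to check the thresholds 2**-l.
    best = max(revenue(2.0 ** -l) for l in range(max(w1, w2)))
    return best, optimal_welfare

# With w1 = w2 = 10: the best uniform reserve earns just under 4, while the optimal
# liquid welfare is 20, in line with the O(1 / (w1 + w2)) bound of Proposition 3.
print(equal_revenue_check(10, 10))
```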
### Price of Anarchy
In this subsection, we study how much total revenue the channels lose in the local model where they set their uniform reserve prices out of their own self-interest compared to the global model where they choose the reserve prices cooperatively. Specifically, we consider the standard notion - price of anarchy \(PoA\) (Definition 4). First, we observe that the revenue guarantee from Theorem 2 immediately implies a lower bound for the \(PoA\):
**Theorem 3**.: _For any \(\varepsilon>0\),_
\[PoA\geq\frac{1-\varepsilon}{3+2\max\left\{1,\left\lceil\log\frac{T_{max}}{T_{ min}}\right\rceil\right\}+2\max\left\{1,\left\lceil\log\frac{\beta_{max}}{ \beta_{min}}\right\rceil\right\}}.\]
Proof.: By definition of \(PoA\) (Definition 4), \(PoA=\frac{RevG(Local)}{RevG(Global)}\). By Fact 2, \(RevG(Global)\leq 1\). It follows that \(PoA\geq RevG(Local)\), and then the proof finishes by applying Theorem 2.
Next, we show that the \(PoA\) lower bound in Theorem 3 is tight (up to a constant factor).
**Theorem 4**.: _There is an instance with two channels such that \(PoA=O(1/(\log(T_{max}/T_{min})+\log(\beta_{max}/\beta_{min})))\)._
**The high-level idea:** We first construct an "equal-revenue" instance (which consists of many tCPA bidders \(J_{1}\) with geometrically decreasing tCPAs, each interested in a unique impression owned by Channel \(k_{1}\)) as in the proof of Proposition 3. For this "equal-revenue" instance, Channel \(k_{1}\) cannot simultaneously get good revenues from all the bidders in \(J_{1}\) by setting a uniform reserve price.
Now the key idea is to introduce another Channel \(k_{2}\) and another tCPA bidder \(j_{2}\notin J_{1}\), such that Channel \(k_{2}\) only owns one impression, for which only Bidder \(j_{2}\) has strictly positive value. Moreover, Bidder \(j_{2}\) has a value for each impression \(i\) in Channel \(k_{1}\), and Bidder \(j_{2}\)'s value for impression \(i\) is carefully chosen to be proportional to the tCPA of the bidder in \(J_{1}\) who is interested in impression \(i\). Thus, if Bidder \(j_{2}\) makes a uniform bid (in the cost-per-unit-value space), it results in non-uniform bids (in the cost space) for the impressions in Channel \(k_{1}\), which are proportional to the tCPAs of bidders in \(J_{1}\). We can think of these non-uniform bids as non-uniform bidder-specific reserve prices for bidders in \(J_{1}\), which are proportional to their tCPAs. Thus, we are able to extract the full revenue from all the bidders in \(J_{1}\) (similar to item (1) of Lemma 2).
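To illustrate the mechanism with a formula (the proportionality constant \(c\) below is ours; the exact construction is in Appendix C): suppose Bidder \(j_{2}\)'s value for the impression \(i\) owned by Channel \(k_{1}\) and demanded by bidder \(\ell(i)\in J_{1}\) is \(v_{j_{2},i}=c\,T_{\ell(i)}\) for some constant \(c>0\). If Bidder \(j_{2}\) places a uniform bid of \(\alpha\) per unit of value, its bid on impression \(i\) in the cost space is

\[\alpha\,v_{j_{2},i}=\alpha c\,T_{\ell(i)},\]

so every bidder \(\ell\in J_{1}\) effectively faces a price floor proportional to its own tCPA, which plays the role of the bidder-specific reserve prices of item (1) of Lemma 2.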
Finally, we just need to argue that the above idea can be applied successfully in the global model but not in the local model. This is because in the local model, Channel \(k_{2}\) sets a high reserve price for its sole impression in order to profit more from Bidder \(j_{2}\), and as a result, Bidder \(j_{2}\) does not have enough "slack" to make a sufficiently high uniform bid to induce sufficiently high bidder-specific reserve prices for bidders in \(J_{1}\).
The construction of the instance with Budgeted bidders uses essentially the same idea as above. The full proof is provided in Appendix C.
As a corollary of Theorem 3 and Theorem 4, we have the following tight price of anarchy:
**Theorem 5** (Price of Anarchy).: \(PoA=\Theta(1/(\log(T_{max}/T_{min})+\log(\beta_{max}/\beta_{min})))\)_._
## 6 Price of Anarchy with Publisher Reserves
This section studies the general version of the model where the publisher owning impression \(i\) sets a minimum price \(p_{i}\) for the impression to be sold.7 The main finding is that Theorem 5 depends dramatically on the absence of publisher prices: with publisher prices and general channels, \(PoA=0\) in the worst case (Theorem 6).
Footnote 7: Recall that the price \(p_{i}\) is also in the cost-per-unit-value space.
We then restrict our attention to an important subclass of instances where channels are _scaled_ copies of each other. That is, the channels share a homogeneous set of impressions and differ only in the fraction of each impression they own. In this context, we show that \(PoA\) admits a non-trivial lower bound only when a single bidder participates in the auctions. In this case, \(PoA=1/|K|\), which, in contrast to our results in Section 5, depends on the number of channels in the game.
### General Channels
We now present the main result of this section for the general case, in which channels can own arbitrarily asymmetric sets of impressions with arbitrary publisher reserve prices.
**Theorem 6**.: _If publishers can set arbitrary minimum prices on their impressions, then there is an instance for which \(PoA=0\)._
Proof.: Consider the following instance with two channels and one bidder who is a tCPA bidder with a target constraint \(T=1\). Channel 1 has only one impression to sell. This impression does not have any publisher pricing constraint (\(p_{i}=0\)). Channel 2 has \(q\) impressions to sell, each of these impressions has the same publisher pricing constraint \(p_{i}=1+1/q\). The bidder's valuation for all impressions is the same, i.e., \(v_{i}=1\) for all \(i\in I\).
We assert that in the global model, it is optimal to set reserve prices equal to zero for both channels. Indeed, with no reserve prices, the bidder can purchase all impressions since she gets a value of \(1+q\) for a total cost of \(q\cdot(1+1/q)=1+q\). This is the optimal solution for the global model as the total revenue is exactly the optimal liquid welfare.
On the other hand, in the local model, it is a strictly dominant strategy for Channel 1 to set a uniform reserve \(r_{1}=1\): if \(r_{1}>1\), Channel 1 gets zero revenue. If \(r_{1}\leq 1\), the bidder purchases its impression, which leads to a revenue of \(r_{1}\). Thus, Channel 1 strictly prefers to set a reserve price of \(r_{1}=1\). Because of \(r_{1}=1\), the bidder cannot afford to buy any impression of Channel 2 since the cost of each impression is at least \(1+1/q\). Thus, in this equilibrium, the bidder submits a uniform bid of 1, gets only the impressions sold by Channel 1, and the global revenue is 1.
Therefore from this instance we have that \(RevG(Local)/RevG(Global)\leq 1/(1+q)\). We conclude the proof by taking \(q\to\infty\).
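As a quick numerical illustration of this instance (our sketch; the function name is ours), the local-to-global revenue ratio is \(1/(q+1)\):

```python
def poa_with_publisher_reserves(q):
    """Revenue comparison for the two-channel instance in the proof of Theorem 6 (our sketch)."""
    # Global model: both channels set reserve 0; the tCPA bidder (target 1, value 1 per
    # impression) buys all q + 1 impressions, paying 0 on Channel 1 and 1 + 1/q on each of
    # Channel 2's impressions, so the total revenue q + 1 equals the optimal liquid welfare.
    global_revenue = 1 * 0 + q * (1 + 1 / q)
    # Local model: Channel 1 best-responds with reserve 1, which exhausts the bidder's
    # slack, so none of Channel 2's impressions (priced at 1 + 1/q each) can be bought.
    local_revenue = 1.0
    return local_revenue / global_revenue  # = 1 / (q + 1), vanishing as q grows

for q in (1, 10, 100, 1000):
    print(q, poa_with_publisher_reserves(q))
```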
The intuition behind the previous result comes from instances where some of the channels have high publisher prices relative to the bidder's tCPA targets while some other channels do not have publisher prices. In these instances, in the global model, channels benefit by keeping low reserve prices in the _cheap_ channels (those without publisher reserves), as these provide a subsidy that lets the tCPA bidders buy impressions from the _expensive_ channels. However, when the cheap channels are myopic, they would like to raise their reserve prices to increase their local revenue. This local behavior reduces the revenue of the expensive channels, which, in turn, is detrimental to all channels.
Given that the reason for the previous negative \(PoA\) result is the asymmetry of the publisher prices on the different channels, in what follows we restrict our \(PoA\) analysis to a special subclass where channels are _scaled_ versions of each other.
### Scaled Channels
The scaled channels model consists of weights \(\boldsymbol{\gamma}=(\gamma_{1},\ldots,\gamma_{K})\in\Delta([0,1]^{K})\)8 so that Channel \(k\) owns a fraction \(\gamma_{k}\) of each impression \(i\in I\).9
Footnote 8: \(\Delta([0,1]^{K})\) is the unit simplex in \(\mathbb{R}^{K}\).
Footnote 9: For simplicity of the exposition we assume that impressions are divisible. A similar model with non-divisible impressions would assume that each impression \(i\) is duplicated so that Channel \(k\) owns a fraction \(\gamma_{k}\) of those duplicates.
The first result shows that, surprisingly, as soon as more than one bidder participates in the auctions, \(PoA=0\) in the worst case.
**Theorem 7**.: _For the scaled channels model, if there are two or more bidders participating in the auctions, then there is an instance for which \(PoA=0\)._
The instance we construct (deferred to Appendix D) consists of two channels and two tCPA bidders. The idea is that the main source of revenue for the channels comes from Bidder 1 buying the expensive impressions, those with a high publisher reserve price. Bidder 1 needs enough slack to be able to purchase those expensive impressions, so Bidder 1 needs to buy enough cheap impressions. However, the cheap impressions may have a high price if Bidder 2 sets a high bid, and Bidder 2 can only set a high bid if, instead, it has enough slack from (other) cheap impressions. The crux of the argument is that, in the global model, by setting a sufficiently high reserve price, the channels can prevent Bidder 2 from having enough slack. This, in turn, allows Bidder 1 to have enough slack to buy the expensive impressions. On the contrary, in the local model, there is an equilibrium where both channels set a low reserve. This prevents Bidder 1 from buying expensive impressions, because Bidder 2 sets a high bid and removes Bidder 1's slack.
As a corollary of this instance, we show that in the autobidding framework, setting a high reserve price, as in the global model, not only increases revenue but also increases welfare. This contrasts with the classic profit-maximizing framework, where high reserve prices are negatively correlated with welfare.
We finish this section by showing that, in the case of a single bidder participating across all channels, the \(PoA\) is always strictly positive (for pure-strategy equilibria).
**Theorem 8**.: _If there is only a single bidder, then for pure-strategy equilibria we have that \(PoA=\frac{1}{|K|}\), where \(|K|\) is the number of channels in the game._
We defer the proof to Appendix D. We note that, in contrast to the results of Section 5, where the \(PoA\) is independent of the number of channels, in the setting with publisher reserves the \(PoA\) depends directly on the number of channels.
## 7 Further Discussion
In this paper, we have established tight bounds on _revenue_ guarantees and Price of Anarchy when the reserve prices are set in the _cost-per-unit-value_ space. Two natural follow-up questions are:
* Can we obtain similar bounds for _welfare_ of the bidders?
* What are the revenue guarantees if the channels set reserve prices in the _cost-per-impression_ space?
We briefly discuss how to extend some of our results to answer these questions. We defer the details to the full paper.
### Bounds for Welfare
Most of our revenue and Price of Anarchy results carry over to _welfare_. In particular, for the setting without publisher reserves, we can obtain welfare bounds similar to the revenue bounds in Theorem 2 and Proposition 3 and to the Price of Anarchy bound in Theorem 5 (see Appendix E for a proof sketch). Many of the results in the setting with publisher reserves also carry over to welfare. We defer the details to the full paper.
We also observe an interesting phenomenon: in contrast to the quasi-linear setting, using a higher reserve price can sometimes increase the welfare (see the discussion after Theorem 7).
### Uniform cost-per-impression reserve prices
We can obtain a revenue guarantee analogous to Theorem 2 when channels set uniform cost-per-impression reserve prices (i.e., value-independent and the same for all bidders and impressions).
We can do this by adapting the bucketization arguments in Section 5 to bucketize \(T_{j}v_{j,i}\) instead of \(T_{j}\) for tCPA bidders, bucketize \(v_{j,i}\) for Quasi-linear bidders, and bucketize \(\beta_{j}v_{j,i}\) instead of \(\beta_{j}\) for Budgeted bidders.
# The contact process on dynamic regular graphs: monotonicity and subcritical phase

Bruno Schapira, Daniel Valesin (2023-09-29, http://arxiv.org/abs/2309.17040v1)
###### Abstract
We study the contact process on a dynamic random \(d\)-regular graph with an edge-switching mechanism, as well as an interacting particle system that arises from the local description of this process, called the herds process. Both these processes were introduced in [1]; there it was shown that the herds process has a phase transition with respect to the infectivity parameter \(\lambda\), depending on the parameter \(\mathsf{v}\) that governs the edge dynamics. Improving on a result of [1], we prove that the critical value of \(\lambda\) is strictly decreasing with \(\mathsf{v}\). We also prove that in the subcritical regime, the extinction time of the herds process started from a single individual has an exponential tail. Finally, we apply these results to study the subcritical regime of the contact process on the dynamic \(d\)-regular graph. We show that, starting from all vertices infected, the infection goes extinct in a time that is logarithmic in the number of vertices of the graph, with high probability.
Keywords: contact process, dynamic graphs
## 1 Introduction
This paper is a follow-up to [1], which studied the contact process on a dynamic random \(d\)-regular graph with an edge-flip mechanism introduced in [1]. The work [1] mainly focused on proving the existence of a supercritical regime, where the extinction time of the process grows exponentially with the number of vertices of the graph. Here, we show that there is a phase transition between two regimes, where the order of magnitude of the extinction time switches abruptly from logarithmic to exponential, as the infection parameter crosses a critical value. The highlight of our analysis is that it allows us to establish that this critical value of the infection parameter is a strictly monotone function of the rate of the edge-flip mechanism.
### Contact process on static finite graphs
The **contact process** on a graph \(G\) is an interacting particle system in which the vertices of the graph can be either healthy or infected. Healthy vertices get infected at rate \(\lambda\) times the number of infected neighbors, where \(\lambda>0\) is a fixed parameter of the model, while infected vertices become healthy at rate \(1\), independently of each other. When the graph \(G\) is infinite, a quantity of interest is the **critical rate**\(\lambda_{c}(G)\), defined as the supremum of the values of \(\lambda\) for which the process started from any finite infected set dies out (reaches the all-healthy configuration) almost surely.
In the case when \(G\) is finite, the all-healthy configuration is always reached almost surely (regardless of \(\lambda\)). The **extinction time**\(\tau_{G}\) is the hitting time of the all-healthy configuration, for the process started from all infected. It has been observed in several cases that when \((G_{n})_{n\geqslant 1}\) is a sequence of finite graphs which converges locally to some (rooted) infinite graph \(G_{\infty}\), typically the extinction time \(\tau_{G_{n}}\) grows logarithmically with \(n\) when \(\lambda\) is smaller than \(\lambda_{c}(G_{\infty})\) and grows exponentially with \(n\) when \(\lambda\) is larger than \(\lambda_{c}(G_{\infty})\). For instance, this has been shown when \(G_{n}\) is a \(d\)-dimensional cube \(\{0,\ldots,n\}^{d}\)[1, 2, 3, 4, 5], a \(d\)-regular tree up to height \(n\)[1, 2], or in the case which interests us more here, when \(G_{n}\) is a random \(d\)-regular graph with \(n\) vertices [1, 16], in which
cases \(G_{\infty}\) is respectively \(\mathbb{Z}^{d}\), the canopy tree, and the \(d\)-regular tree \(\mathbb{T}^{d}\).
### Contact process on a dynamical random \(d\)-regular graph
We present now the dynamical version of the random \(d\)-regular graph first introduced and studied in [1]. Throughout the paper, we fix the degree \(d\geqslant 3\), and whenever we talk about a \(d\)-regular graph with \(n\) vertices, we assume that \(nd\) is even. We allow our graphs to contain loops (edges involving the same vertex twice) and parallel edges (multiple edges between the same two vertices), but we will keep writing 'graph' instead of some other terminology such as'multi-graph'.
Let \(G\) be a \(d\)-regular graph with \(n\) vertices, and \(e,e^{\prime}\) be two of its edges; let \(u,v\) be the vertices of \(e\) and \(u^{\prime},v^{\prime}\) the vertices of \(e^{\prime}\). We can define two possible **switches** of these edges by replacing them with either (1) edges with vertices \(\{u,u^{\prime}\}\) and \(\{v,v^{\prime}\}\), or (2) edges with vertices \(\{u,v^{\prime}\}\) and \(\{v,u^{\prime}\}\). We then define a continuous-time Markov chain \((G_{t})_{t\geqslant 0}\) on the space of \(d\)-regular graphs on a fixed set of \(n\) vertices as follows. The initial graph \(G_{0}\) is distributed according to the uniform distribution on the set of \(d\)-regular graphs. Then, given the state \(G_{t}\) at time \(t\), we prescribe that any of the \(2\cdot\binom{|E|}{2}\) possible edge switches occurs on this graph with rate \(\frac{\mathsf{v}}{nd}\), where \(\mathsf{v}>0\) is a positive parameter. It is readily seen that the uniform distribution on random \(d\)-regular graphs is stationary with respect to this dynamics, and moreover, that any fixed edge is involved in a switch at a rate which converges to \(\mathsf{v}\), as \(n\to\infty\).
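As a concrete illustration (ours, purely illustrative; the data layout and function name are assumptions), a single switching move can be implemented as follows, with a graph stored simply as a list of unordered vertex pairs. In continuous time, each of the \(2\cdot\binom{|E|}{2}\) switches fires at rate \(\frac{\mathsf{v}}{nd}\), so a fixed edge, being involved in \(2(|E|-1)\) switches, is switched at rate \(\mathsf{v}(1-\frac{2}{nd})\to\mathsf{v}\), as stated above.

```python
import random

def switch_step(edges):
    """One edge-switching move on a d-regular multigraph (illustrative sketch).

    `edges` is a list of vertex pairs; loops and parallel edges are allowed.  Two distinct
    edges {u, v} and {u2, v2} are chosen uniformly at random and replaced, with equal
    probability, by {u, u2}, {v, v2} or by {u, v2}, {v, u2}.  Every vertex keeps its
    degree, so d-regularity is preserved.
    """
    a, b = random.sample(range(len(edges)), 2)
    (u, v), (u2, v2) = edges[a], edges[b]
    if random.random() < 0.5:
        edges[a], edges[b] = (u, u2), (v, v2)
    else:
        edges[a], edges[b] = (u, v2), (v, u2)
    return edges
```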
We next consider the process \((G_{t},\xi_{t})_{t\geqslant 0}\) where \((G_{t})_{t\geqslant 0}\) is as above, and \((\xi_{t})_{t\geqslant 0}\) is a contact process evolving on the dynamic graph. As previously mentioned, the process starts from the configuration where all vertices are infected, and our main interest is in the time \(\tau_{(G_{t})}\) when the process reaches the all-healthy configuration. The following result was proved in [1]. In both this theorem and in Theorem 1.2 below, the probability measure \(\mathbb{P}\) includes the randomness of both the random dynamic graph and the contact process.
**Theorem 1.1** ([1]).: _For each \(\mathsf{v}>0\), there exists \(\bar{\lambda}(\mathsf{v})\in(0,\lambda_{c}(\mathbb{T}_{d}))\)
_such that the following holds. For any \(\lambda>\bar{\lambda}(\mathsf{v})\), there exists \(c>0\) such that_
\[\mathbb{P}\big{(}\tau_{(G_{t})}>\exp\{cn\}\big{)}\xrightarrow{n\to\infty}1.\]
Note in particular the interesting feature that \(\bar{\lambda}(\mathsf{v})\) is strictly smaller than \(\lambda_{c}(\mathbb{T}_{d})\), which means that the dynamics of the graph helps the contact process to survive for a longer time than in the static model.
### Main results
In this paper, we complete the picture by proving the following result.
**Theorem 1.2**.: _For each \(\mathsf{v}>0\), there exists \(\bar{\lambda}(\mathsf{v})\in(0,\lambda_{c}(\mathbb{T}_{d}))\) such that the following holds._
* _For any_ \(\lambda>\bar{\lambda}(\mathsf{v})\)_, there exists_ \(c>0\) _such that_ \[\mathbb{P}\big{(}\tau_{(G_{t})}>\exp\{cn\}\big{)}\xrightarrow{n\to\infty}1.\]
* _For any_ \(\lambda<\bar{\lambda}(\mathsf{v})\)_, there exists_ \(C>0\) _such that_ \[\mathbb{P}\big{(}\tau_{(G_{t})}>C\log n\big{)}\xrightarrow{n\to\infty}0.\]
As in the static case, the value \(\bar{\lambda}(\mathsf{v})\) corresponds to the critical value for the contact process on a limiting model, which in our case is called the **herds process**. This was introduced and analyzed in [1], where in particular a phase transition delimited by a positive and finite parameter \(\bar{\lambda}(\mathsf{v})\) was established. Here we improve upon this result by showing that the herds process exhibits a form of sharp threshold phenomenon, namely that the tail distribution of the extinction time decays exponentially fast in the whole subcritical regime (see Lemma 3.1 below). An informal description of the herds process is given in the next subsection, and a precise definition is given in Section 2.
Our second result answers a question of [1] concerning the monotonicity of \(\bar{\lambda}(\mathsf{v})\).
**Theorem 1.3**.: _The mapping \(\mathsf{v}\mapsto\bar{\lambda}(\mathsf{v})\) is strictly decreasing._
### Methods of proof and organization of the paper
As in [1], the proof of Theorem 1.2 relies on a detailed analysis of the herds process. Informally, this process evolves as a contact process on a family of \(d\)-regular trees, where the number of trees also evolves with time. On each tree, the process obeys the same rules as the usual contact process regarding infection and recovery (though we adopt a slight change of terminology: the vertex states 'healthy' and 'infected' here are called 'empty' and 'occupied by a particle', respectively). In addition, each edge in any of the existing trees splits the tree into two pieces at a constant rate \(\mathsf{v}\). When this happens, the two disjoint pieces of the tree are completed to form two new copies of a \(d\)-regular tree.
The value \(\bar{\lambda}(\mathsf{v})\) is defined as the threshold for the infection parameter \(\lambda\) above which the process has a positive probability of surviving forever, when starting from a single tree with a single particle. The heart of the proof is to show that when \(\lambda\) is smaller than this threshold, the probability of surviving for a time larger than \(t\) decays exponentially fast with \(t\). This is obtained using a coupling argument with a two-type herds process, which allows us to show that the expected number of infected particles at time \(t\), denoted (in this section only) \(F(\lambda,\mathsf{v},t)\), is a sub-multiplicative function of the time parameter, see Section 2.2. Using this, we can define the rate of exponential decay of this function, \(\varphi(\lambda,\mathsf{v})\), and then the proof boils down to showing that it is strictly increasing with respect to both parameters. This is obtained via a kind of Russo's formula, see Proposition 3.5, which in our setting is quite involved compared to the original formula from percolation theory. Moreover, the strict monotonicity of \(\varphi(\lambda,\mathsf{v})\) also proves Theorem 1.3.
Another important ingredient is to control the higher moments of the number of infected particles, which requires some serious additional technical work, due to the non-linearity of these functionals, see Section 3.3. Bounding higher moments is needed to control the total number of particle births (or in the usual contact process terminology, infections) up to the extinction time, which in turn allows one to couple the contact process on a large dynamic \(d\)-regular graph with the herds process, up to this extinction time. This coupling argument is explained in Section 4, where we complete the proof of Theorem 1.2.
### Related works
The phase transition between long and short extinction times for the contact process on finite graphs, and the closely related question of metastability in the supercritical regime, have been studied intensively over the past years. In particular, as we already mentioned, this has been done on finite boxes of \(\mathbb{Z}^{d}\)[11, 12, 13, 14, 15], finite \(d\)-regular trees [16, 17], and random \(d\)-regular graphs [18, 19], but also in a number of other examples, such as the configuration model with power-law degree distribution [13, 14, 15] or with general degree distribution [10, 12], preferential attachment graphs [1, 18], Erdos-Renyi random graphs [10], inhomogeneous random graphs [19], and hyperbolic random graphs [15]. There are also results concerning some general classes of finite graph sequences [14, 17].
On the other hand, the study of the contact process on dynamical graphs started more recently, see e.g. [1, 2, 1, 19, 2, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15].
## 2 Preliminaries on the herds process
In this section, we give a formal definition of the herds process and introduce notation. We also present some tools that will be employed in the analysis of this process in later sections, namely, stochastic domination by a pure-birth process and a submultiplicativity inequality for the expectation of the number of particles.
### Definition and construction
Throughout this paper, we fix \(d\geqslant 3\) and let \(\mathbb{T}^{d}\) denote the infinite \(d\)-regular rooted tree. The root is denoted by \(o\), and we write \(u\sim v\) when two vertices \(u\) and \(v\) are neighbors.
**Definition 2.1**.: _Let_
\[P_{\mathrm{f}}(\mathbb{T}^{d}):=\{A\subseteq\mathbb{T}^{d}:\;A\text{ is finite and non-empty}\}.\]
_We call each \(A\in P_{\mathsf{f}}(\mathbb{T}^{d})\) a **herd shape**, and each \(x\in A\) a **particle** of \(A\). Given a herd shape \(A\) and an edge \(e=\{u,v\}\) of \(\mathbb{T}^{d}\) with \(u\) closer to the root than \(v\), in the graph distance of \(\mathbb{T}^{d}\), define_
\[A_{e,1}:=\{w\in A:\;w\mbox{ is closer to }u\mbox{ than to }v\},\quad A_{e,2}:=A\backslash A_{e,1}.\]
_We say that \(e\) is an **active edge of \(A\)** if \(A_{e,1}\neq\varnothing\) and \(A_{e,2}\neq\varnothing\)._
**Definition 2.2**.: _Define the **set of herd configurations**_
\[\mathcal{S}:=\{\xi:P_{\mathsf{f}}(\mathbb{T}^{d})\to\mathbb{N}_{0}\mbox{ with }\sum_{A}\xi(A)<\infty\}.\]
_In a herd configuration \(\xi\in\mathcal{S}\), \(\xi(A)\) is interpreted as the number of herds with shape \(A\). Given \(A\in P_{\mathsf{f}}(\mathbb{T}^{d})\), we let \(\delta_{A}\) denote the herd configuration such that \(\delta_{A}(B)=1\) if \(B=A\), and \(\delta_{A}(B)=0\) otherwise. An **enumeration** of \(\xi\in\mathcal{S}\) is a sequence \(A_{1},\ldots,A_{m}\in P_{\mathsf{f}}(\mathbb{T}^{d})\) such that \(\xi=\sum_{i=1}^{m}\delta_{A_{i}}\)._
We will generally denote deterministic elements of \(\mathcal{S}\) by the letter \(\xi\), and random elements of \(\mathcal{S}\) by \(\Xi\).
**Definition 2.3**.: _The **herds process**\((\Xi_{t})_{t\geqslant 0}\) with birth rate \(\lambda>0\) and splitting rate \(\mathsf{v}>0\) is a continuous-time Markov chain on \(\mathcal{S}\) whose dynamics is given by the following description of possible jumps and corresponding rates:_
1. _death in herds with more than one particle: for each_ \(A\) _with_ \(|A|>1\) _such that_ \(\xi(A)>0\)_, and for each_ \(x\in A\)_, with rate_ \(\xi(A)\)_, the process jumps from_ \(\xi\) _to_ \(\xi-\delta_{A}+\delta_{A\backslash\{x\}}\)_;_
2. death in herds with one particle: for each \(A\) with \(|A|=1\) such that \(\xi(A)>0\), with rate \(\xi(A)\), the process jumps from \(\xi\) to \(\xi-\delta_{A}\);
3. birth: for each \(A\) such that \(\xi(A)>0\), and for each \(y\in\mathbb{T}^{d}\) with \(y\notin A\), with rate \(\lambda\cdot|\{x\in A:x\sim y\}|\cdot\xi(A)\), the process jumps from \(\xi\) to \(\xi-\delta_{A}+\delta_{A\cup\{y\}}\);
4. split: for each \(A\) such that \(\xi(A)>0\), and for each active edge \(e\) of \(A\), with rate \(\mathsf{v}\cdot\xi(A)\), the process jumps from \(\xi\) to \(\xi-\delta_{A}+\delta_{A_{e,1}}+\delta_{A_{e,2}}\).
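To make the dynamics of Definition 2.3 concrete, here is a minimal Gillespie-style simulation sketch (our own illustration; the parameter values and all function names are ours, not from the paper). Vertices of \(\mathbb{T}^{d}\) are encoded as tuples of child indices, with the root encoded as the empty tuple, so that every vertex has exactly \(d\) neighbors.

```python
import random

D, LAM, V = 3, 0.3, 1.0  # degree d, birth rate lambda, splitting rate v (illustrative)

def neighbors(x):
    """The d neighbors of vertex x in T^d; x is a tuple of child indices, root = ()."""
    if x == ():
        return [(i,) for i in range(D)]
    return [x[:-1]] + [x + (i,) for i in range(D - 1)]

def split_options(A):
    """Active edges of the herd shape A, each represented by its endpoint w farther from
    the root, together with the particles lying in the subtree below (and including) w."""
    options = []
    for w in {x[:k] for x in A for k in range(1, len(x) + 1)}:
        below = {x for x in A if x[:len(w)] == w}
        if below and len(below) < len(A):
            options.append((w, below))
    return options

def step(herds):
    """One jump of the herds process; `herds` is a list of sets of occupied vertices.
    Returns the elapsed time, or None if no particle is left."""
    events = []  # (rate, kind, data)
    for idx, A in enumerate(herds):
        events.append((len(A), "death", idx))                        # each particle dies at rate 1
        slots = [(x, y) for x in A for y in neighbors(x) if y not in A]
        if slots:
            events.append((LAM * len(slots), "birth", (idx, slots)))  # rate lambda per (occupied, empty) pair
        splits = split_options(A)
        if splits:
            events.append((V * len(splits), "split", (idx, splits)))  # each active edge splits at rate v
    total = sum(rate for rate, _, _ in events)
    if total == 0:
        return None
    r = random.uniform(0, total)
    kind, data = events[-1][1], events[-1][2]  # fallback against floating-point rounding
    for rate, k, d in events:
        if r < rate:
            kind, data = k, d
            break
        r -= rate
    if kind == "death":
        A = herds[data]
        A.remove(random.choice(sorted(A)))
        if not A:
            herds.pop(data)
    elif kind == "birth":
        idx, slots = data
        _, y = random.choice(slots)
        herds[idx].add(y)
    else:  # split: replace the herd by the two pieces on either side of the active edge
        idx, (w, below) = data[0], random.choice(data[1])
        A = herds.pop(idx)
        herds += [set(below), A - below]
    return random.expovariate(total)

herds, t = [{()}], 0.0  # one herd with a single particle at the root
while herds and t < 20.0:
    dt = step(herds)
    if dt is None:
        break
    t += dt
print("extinct" if not herds else f"{sum(len(A) for A in herds)} particles at time 20")
```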
**Remark 2.1**.: _A priori, it could be the case that the above description gave rise to an explosive chain, that is, a chain that jumps infinitely many times in a bounded time interval. So, strictly speaking, the process is only defined up to the explosion time (the infimum of times \(t>0\) such that infinitely many jumps happen in \((0,t)\)). However, we will show shortly (see Corollary 2.2 below) that the chain is in fact not explosive._
**Remark 2.2**.: _Our choice for the state space \(\mathcal{S}\) of the herds process makes it so that, in case there are multiple herds with the same shape \(A\) (meaning that \(\xi(A)\geqslant 2\)), then these herds are indistinguishable. An alternative choice was made in [1]: there, a state of the process was an index set \(\mathcal{J}\) and a mapping from \(\mathcal{J}\) to \(P_{\mathsf{f}}(\mathbb{T}^{d})\), so that each \(i\in\mathcal{J}\) represented a different herd. This alternative choice requires heavier notation, but has some advantages; for instance, when a particle is born in a herd, it makes sense to consider the herd before and after the birth (since it keeps the same index). Although here we will adopt the leaner description of Definition 2.3, we will sometimes pretend that a richer description is available. For instance, in one of our arguments (see Lemma 3.2) we fix a particle in a herd at time \(0\), and consider the evolution of the cardinality of the herd containing that particle for times \(t\geqslant 0\)._
We let \(\mathbb{P}\) be a probability measure under which the herds process is defined, and \(\mathbb{E}\) the associated expectation. When we want to be explicit about the parameters, we will write \(\mathbb{P}_{\lambda,\mathsf{v}}\) and \(\mathbb{E}_{\lambda,\mathsf{v}}\). _When no explicit mention regarding the initial configuration is made, we assume it to consist of a single herd with a single particle placed at the root vertex_. We may write \(\mathbb{P}_{\lambda,\mathsf{v}}(\cdot\mid\Xi_{0}=\xi)\) (and similarly \(\mathbb{E}_{\lambda,\mathsf{v}}[\cdot\mid\Xi_{0}=\xi]\)) to specify some other initial configuration \(\xi\in\mathcal{S}\).
**Definition 2.4**.: _Given \(\xi\in\mathcal{S}\), we let_
\[X(\xi):=\sum_{A\in P_{\mathsf{f}}(\mathbb{T}^{d})}|A|\cdot\xi(A), \tag{1}\]
_where \(|\cdot|\) denotes cardinality; that is, \(X(\xi)\) is the total number of particles among all herds in \(\xi\). We also let_
\[\mathscr{E}(\xi):=\sum_{A\in P_{\mathsf{f}}(\mathbb{T}^{d})}|\{\text{active edges of }A\}|\cdot\xi(A), \tag{2}\]
the total number of active edges among all herds of \(\xi\). For the herds process \((\Xi_{t})_{t\geqslant 0}\) (started from an arbitrary configuration), we write_
\[X_{t}:=X(\Xi_{t}),\qquad\mathscr{E}_{t}:=\mathscr{E}(\Xi_{t}), \tag{3}\]
_with \(X\) and \(\mathscr{E}\) as in (1) and (2), respectively._
We now state a useful stochastic domination result. The proof involves a quick inspection of transition rates, and we omit it.
**Lemma 2.1** (Domination by pure-birth chain).: _For the herds process \((\Xi_{t})_{t\geqslant 0}\) with parameters \(\lambda\), \(\mathsf{v}\) and some initial configuration \(\xi\in\mathcal{S}\), let \(N_{t}\) denote the number of birth events until time \(t\). Let \((Z_{t})_{t\geqslant 0}\) be the continuous-time Markov chain on \(\mathbb{N}\) with \(Z_{0}=X(\xi)\) and jump rates_
\[q(i,i+1)=d\lambda i,\qquad q(i,j)=0\text{ for }j\neq i+1. \tag{4}\]
_Then, \((N_{t})_{t\geqslant 0}\) is stochastically dominated by \((Z_{t}-Z_{0})_{t\geqslant 0}\). In particular, \(\mathbb{P}(N_{t}<\infty)\geqslant\mathbb{P}(Z_{t}<\infty)=1\) for any \(t\)._
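As an aside (the computation below is ours, and standard): since the chain \((Z_{t})\) jumps from \(i\) to \(i+1\) at rate \(d\lambda i\),

\[\frac{\mathrm{d}}{\mathrm{d}t}\,\mathbb{E}[Z_{t}]=d\lambda\,\mathbb{E}[Z_{t}],\qquad\text{so that}\qquad\mathbb{E}[Z_{t}]=Z_{0}\,e^{d\lambda t},\]

and therefore \(\mathbb{E}[N_{t}]\leqslant\mathbb{E}[Z_{t}-Z_{0}]=X(\xi)\left(e^{d\lambda t}-1\right)\) for the herds process started from \(\xi\).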
**Corollary 2.2**.: _The herds process is non-explosive._
Proof.: When there are finitely many birth events in \([0,t]\), there are also finitely many death events and split events in \([0,t]\), so the herds process performs finitely many jumps of any kind in \([0,t]\).
We also have the following important consequence concerning the processes \((X_{t})\) and \((\mathscr{E}_{t})\) defined in (3).
**Corollary 2.3**.: _For any \(\lambda>0\), \(\mathsf{v}>0\), \(T>0\) and \(k\geqslant 1\), there exists \(c>0\) such that the herds process \((\Xi_{t})_{t\geqslant 0}\) with parameters \(\lambda,\ \mathsf{v}\) and arbitrary (deterministic) initial configuration \(\xi\) satisfies_
\[\mathbb{E}\left[\,\max_{0\leqslant t\leqslant T}(X_{t})^{k}\;\middle|\;\Xi_{0} =\xi\right]\leqslant cX(\xi)^{k}\]
_and_
\[\mathbb{E}\left[\,\max_{0\leqslant t\leqslant T}(\mathscr{E}_{t})^{k}\; \middle|\;\Xi_{0}=\xi\right]\leqslant c(X(\xi)+\mathscr{E}(\xi))^{k}.\]
Proof.: Again let \(N_{t}\) denote the number of births in the herds process until time \(t\). Note that
\[\max_{0\leqslant t\leqslant T}X_{t}\leqslant X_{0}+N_{T}\qquad\text{and}\quad \max_{0\leqslant t\leqslant T}\mathscr{E}_{t}\leqslant\mathscr{E}_{0}+N_{T}; \tag{5}\]
to justify the latter, we observe that deaths and splits can only decrease \(\mathscr{E}_{t}\), while a birth can increase \(\mathscr{E}_{t}\) by at most one.
Let \((Z_{t})_{t\geqslant 0}\) be the pure-birth chain of Lemma 2.1, started with \(Z_{0}=X_{0}\). By that lemma, \((Z_{t}-Z_{0})_{t\geqslant 0}\) stochastically dominates \((N_{t})_{t\geqslant 0}\). Then,
\[\mathbb{E}\left[\max_{0\leqslant t\leqslant T}(X_{t})^{k}\ \bigg{|}\ \Xi_{0}=\xi\right] \leqslant\mathbb{E}\left[\left(X_{0}+N_{T}\right)^{k}\ \big{|}\ \Xi_{0}=\xi\right]\leqslant\mathbb{E}[(X_{0}+Z_{T}-Z_{0})^{k}],\]
where the last expectation is with respect to the probability measure under which \((Z_{t})\) is defined (note that \(X_{0}\) is fixed and deterministic). Next, \(Z_{T}-Z_{0}\) is stochastically dominated by \(\sum_{i=1}^{Z_{0}}\zeta_{T}^{(i)}\), where \((\zeta_{t}^{(1)}),\ldots,(\zeta_{t}^{(Z_{0})})\) are independent pure-birth processes with rates as in (4), each started with a population of one. Using Minkowski's inequality,
\[\mathbb{E}[(X_{0}+Z_{T}-Z_{0})^{k}]^{1/k}\leqslant X_{0}+\sum_{i=1}^{Z_{0}} \mathbb{E}[(\zeta_{T}^{(i)})^{k}]^{1/k}=cX_{0},\]
where \(c:=\mathbb{E}[(\zeta_{T}^{(i)})^{k}]^{1/k}+1\), which by [1, Corollary 1 p.111] is finite and only depends on \(\lambda\), \(k\) and \(T\). This completes the proof of the first bound. For the second one, we start using the second bound in (5):
\[\mathbb{E}\left[\left.\max_{0\leqslant t\leqslant T}(\mathscr{E} _{t})^{k}\ \right|\ \Xi_{0}=\xi\right] \leqslant\mathbb{E}\left[\left.(\mathscr{E}_{0}+N_{T})^{k}\ \right|\ \Xi_{0}=\xi\right]\] \[=\mathbb{E}\left[\left.(X_{0}+\mathscr{E}_{0}+N_{T}-X_{0})^{k} \ \right|\ \Xi_{0}=\xi\right],\]
and then complete the proof as in the previous case.
We now present three properties of the herds process (in Lemmas 2.4, 2.5 and 2.6 below). In all three cases, the proof is elementary and omitted.
**Lemma 2.4** (Invariance under tree automorphisms).: _Let \(\psi:\mathbb{T}^{d}\to\mathbb{T}^{d}\) be a graph automorphism. Fix \(A\in P_{\mathfrak{f}}(\mathbb{T}^{d})\) and let \((\Xi_{t})_{t\geqslant 0}\) and \((\Xi_{t}^{\prime})_{t\geqslant 0}\) be herds processes started from \(\delta_{A}\) and \(\delta_{\psi(A)}\), respectively. Then, the process_
\[\sum_{A\in P_{\mathfrak{f}}(\mathbb{T}^{d})}\Xi_{t}(A)\cdot\delta_{\psi(A)}, \quad t\geqslant 0\]
_has the same distribution as \((\Xi_{t}^{\prime})_{t\geqslant 0}\). In particular, the processes \((X(\Xi_{t}))_{t\geqslant 0}\) and \((X(\Xi_{t}^{\prime}))_{t\geqslant 0}\) have the same distribution._
**Lemma 2.5** (Decomposition into independent processes).: _Let \(\xi\in\mathcal{S}\) with enumeration \(\xi=\sum_{i=1}^{n}\delta_{A_{i}}\), where \(A_{1},\ldots,A_{n}\in P_{\mathfrak{f}}(\mathbb{T}^{d})\). Then, the herds process started from \(\xi\) has the same distribution as \((\Xi_{t}^{(1)}+\cdots+\Xi_{t}^{(n)})_{t\geqslant 0}\), where \((\Xi_{t}^{(1)})_{t\geqslant 0}\), \(\ldots\), \((\Xi_{t}^{(n)})_{t\geqslant 0}\) are independent herds processes, started from \(\delta_{A_{1}}\), \(\ldots\), \(\delta_{A_{n}}\), respectively._
Before stating the third property, we define a partial order on \(\mathcal{S}\).
**Definition 2.5**.: _Given two herd configurations \(\xi\) and \(\xi^{\prime}\), we write \(\xi\preceq\xi^{\prime}\) if there exist enumerations_
\[\xi=\sum_{i=1}^{m}\delta_{A_{i}},\qquad\xi^{\prime}=\sum_{j=1}^{n}\delta_{A_{ j}^{\prime}}\]
_such that \(m\leqslant n\) and \(A_{i}\subseteq A_{i}^{\prime}\) for \(i=1,\ldots,m\)._
**Lemma 2.6** (Attractiveness).: _If \(\xi\preceq\xi^{\prime}\), then \((\Xi_{t})_{t\geqslant 0}\) started from \(\xi\) is stochastically dominated (with respect to \(\preceq\)) by \((\Xi_{t}^{\prime})_{t\geqslant 0}\) started from \(\xi^{\prime}\)._
We now give a definition pertaining to extinction vs. survival of the herds process, and introduce the value \(\bar{\lambda}(\mathsf{v})\) that appears in the statements of our main theorems.
**Definition 2.6**.: _We say that the herds process **survives** if the event \(\{\sum_{A}\Xi_{t}(A)>0\ \forall t\}\) occurs, that is, the process always has particles; otherwise we say that the process **dies out**. For any \(\mathsf{v}>0\), we define \(\bar{\lambda}(\mathsf{v})\) as the supremum of the values of \(\lambda\) for which the process with parameters \(\lambda\) and \(\mathsf{v}\) dies out with probability 1._
It is easy to see that \(\bar{\lambda}(\mathsf{v})\geqslant 1/d\). Indeed, when \(\lambda<1/d\), the rate at which existing particles die always exceeds the rate at which new particles are born, so the process eventually reaches the empty configuration. A moment's thought, using for instance a comparison with a branching process, shows that \(\bar{\lambda}(\mathsf{v})<\infty\).
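To see the lower bound quantitatively (this short computation is ours): splits do not change the number of particles, each particle dies at rate \(1\), and each particle creates new particles at rate at most \(d\lambda\) (it has exactly \(d\) neighbors, not all of which need be empty), so

\[\frac{\mathrm{d}}{\mathrm{d}t}\,\mathbb{E}[X_{t}]\leqslant(d\lambda-1)\,\mathbb{E}[X_{t}],\qquad\text{hence}\qquad\mathbb{E}[X_{t}]\leqslant X_{0}\,e^{(d\lambda-1)t}.\]

When \(\lambda<1/d\) this tends to \(0\), and since the survival event is contained in \(\{X_{t}\geqslant 1\}\) for every \(t\), the process dies out almost surely.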
### Sub-multiplicativity of number of particles
The goal of this section is to prove the following inequality. Recall that \(X_{t}\) denotes the number of particles in the herds process at time \(t\). Also recall that, whenever the initial condition of the herds process is omitted (say, as in the expectation in the right-hand side of (6) below), it is equal to \(\delta_{\{o\}}\).
**Proposition 2.7**.: _For any \(t\geqslant 0\), \(p\geqslant 1\) and \(\xi\in\mathcal{S}\) we have_
\[\mathbb{E}[X_{t}^{p}\mid\Xi_{0}=\xi]\leqslant X(\xi)^{p}\cdot\mathbb{E}[X_{t}^ {p}]. \tag{6}\]
_Consequently, for any \(t,s\geqslant 0\), \(p\geqslant 1\) and \(\xi\in\mathcal{S}\), we have_
\[\mathbb{E}[X_{t+s}^{p}\mid\Xi_{0}=\xi]\leqslant\mathbb{E}[X_{s}^{p}\mid\Xi_{0} =\xi]\cdot\mathbb{E}[X_{t}^{p}]. \tag{7}\]
This proposition will be a consequence of the following lemma.
**Lemma 2.8**.: _Let \(A,B\in P_{\text{f}}(\mathbb{T}^{d})\) be disjoint and \(p\geqslant 1\). Then,_
\[\mathbb{E}[X_{t}^{p}\mid\Xi_{0}=\delta_{A\cup B}]^{1/p}\leqslant\mathbb{E}[X _{t}^{p}\mid\Xi_{0}=\delta_{A}]^{1/p}+\mathbb{E}[X_{t}^{p}\mid\Xi_{0}=\delta_{ B}]^{1/p}.\]
We postpone the proof of this lemma, and for now show how it implies the proposition:
Proof of Proposition 2.7.: We claim that for any \(p\geqslant 1\), \(A\in P_{\text{f}}(\mathbb{T}^{d})\), and \(t\geqslant 0\),
\[\mathbb{E}[X_{t}^{p}\mid\Xi_{0}=\delta_{A}]^{1/p}\leqslant|A|\cdot\mathbb{E}[ X_{t}^{p}]^{1/p}. \tag{8}\]
We prove this by induction on \(|A|\). For \(|A|=1\) the above holds with an equality, by Lemma 2.4. For the induction step, we assume that \(|A|\geqslant 2\), take \(u\in A\) and bound, using Lemma 2.8, Lemma 2.4 and the induction hypothesis:
\[\mathbb{E}[X_{t}^{p}\mid\Xi_{0}=\delta_{A}]^{1/p} \leqslant\mathbb{E}[X_{t}^{p}\mid\Xi_{0}=\delta_{A\setminus\{u \}}]^{1/p}+\mathbb{E}[X_{t}^{p}\mid\Xi_{0}=\delta_{\{u\}}]^{1/p}\] \[\leqslant(|A\backslash\{u\}|)\cdot\mathbb{E}[X_{t}^{p}]^{1/p}+ \mathbb{E}[X_{t}^{p}]^{1/p}\] \[=|A|\cdot\mathbb{E}[X_{t}^{p}]^{1/p}.\]
Now take \(\xi\in\mathcal{S}\) with enumeration \(\xi=\sum_{i=1}^{m}\delta_{A_{i}}\). Let \((\Xi_{t}^{(1)}),\ldots,(\Xi_{t}^{(m)})\) be independent herds processes, started from \(\delta_{A_{1}},\ldots,\delta_{A_{m}}\), respectively. By Lemma 2.5, we have
\[\mathbb{E}[X_{t}^{p}\mid\Xi_{0}=\xi]^{1/p}=\mathbb{E}\left[\left(\sum_{i=1}^{m} X(\Xi_{t}^{(i)})\right)^{p}\right]^{1/p}.\]
Minkowski's inequality gives
\[\mathbb{E}\left[\left(\sum_{i=1}^{m}X(\Xi_{t}^{(i)})\right)^{p}\right]^{1/p} \leqslant\sum_{i=1}^{m}\mathbb{E}\left[X(\Xi_{t}^{(i)})^{p}\right]^{1/p}. \tag{9}\]
By (8), the right-hand side is smaller than
\[\sum_{i=1}^{m}\left|A_{i}\right|\cdot\mathbb{E}\left[X_{t}^{p}\right]^{1/p}=X( \xi)\cdot\mathbb{E}\left[X_{t}^{p}\right]^{1/p}.\]
We have thus proved (6). To prove (7), we use the Markov property. Let \(s,t\geqslant 0\) and \(\xi\in\mathcal{S}\); we have:
\[\mathbb{E}[X_{t+s}^{p}\mid\Xi_{0}=\xi] =\sum_{\xi^{\prime}\in\mathcal{S}}\mathbb{E}[X_{t+s}^{p}\mid\Xi_{ s}=\xi^{\prime}]\cdot\mathbb{P}(\Xi_{s}=\xi^{\prime}\mid\Xi_{0}=\xi)\] \[\leqslant\sum_{\xi^{\prime}\in\mathcal{S}}X(\xi^{\prime})^{p} \cdot\mathbb{E}[X_{t}^{p}]\cdot\mathbb{P}(\Xi_{s}=\xi^{\prime}\mid\Xi_{0}=\xi)\] \[=\mathbb{E}[X_{s}^{p}\mid\Xi_{0}=\xi]\cdot\mathbb{E}[X_{t}^{p}].\]
To prove Lemma 2.8, we will define an auxiliary process, which informally describes two herds processes that evolve almost independently, except that they share the same splitting events. We start by defining the state space of this process.
**Definition 2.7**.: _Let_
\[P_{\mathfrak{f},2}(\mathbb{T}^{d}):=\{(A,B):\ A,B\subseteq\mathbb{T}^{d},\ A \cup B\text{ finite and non-empty}\}\]
_and_
\[\mathcal{S}_{2}:=\{\widetilde{\xi}:P_{\mathfrak{f},2}(\mathbb{T}^{d})\to \mathbb{N}_{0}\text{ with }\sum_{(A,B)}\widetilde{\xi}(A,B)<\infty\}.\]
We interpret an element \((A,B)\in P_{\mathfrak{f},2}(\mathbb{T}^{d})\) as a **two-type herd**, that is, there are two species of particles, one of which occupies \(A\) and the other \(B\). We emphasize that \(A\) and \(B\) need not be disjoint, and one of them, but not both, can be empty. We will also need some projection functions from \(\mathcal{S}_{2}\) to \(\mathcal{S}\).
**Definition 2.8**.: _We define \(\pi,\pi_{1},\pi_{2}:\mathcal{S}_{2}\to\mathcal{S}\) by setting, for \(\widetilde{\xi}=\sum_{i=1}^{m}\delta_{(A_{i},B_{i})}\):_
\[\pi(\widetilde{\xi})=\sum_{i=1}^{m}\delta_{A_{i}\cup B_{i}},\quad\pi_{1}( \widetilde{\xi})=\sum_{i:A_{i}\neq\varnothing}\delta_{A_{i}},\quad\pi_{2}( \widetilde{\xi})=\sum_{i:B_{i}\neq\varnothing}\delta_{B_{i}}.\]
This definition is illustrated in Figure 1.
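For a small concrete example complementing Figure 1: if \(v\sim o\) and \(\widetilde{\xi}=\delta_{(\{o\},\varnothing)}+\delta_{(\{o\},\{v\})}\), then
\[\pi(\widetilde{\xi})=\delta_{\{o\}}+\delta_{\{o,v\}},\qquad\pi_{1}(\widetilde{\xi})=2\delta_{\{o\}},\qquad\pi_{2}(\widetilde{\xi})=\delta_{\{v\}}.\]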
**Definition 2.9**.: _The **two-type herds process**\((\widetilde{\Xi}_{t})_{t\geqslant 0}\) (with rates \(\lambda\) and \(\mathsf{v}\)) is a continuous-time Markov chain on \(\mathcal{S}_{2}\) with transitions described as follows._
Figure 1: A two-type herds configuration \(\widetilde{\xi}\) is depicted on top, with the two types represented in blue and red (it is assumed that in the four herds of this configuration, no particle is present other than the ones depicted). The projections \(\pi(\widetilde{\xi})\), \(\pi_{1}(\widetilde{\xi})\) and \(\pi_{2}(\widetilde{\xi})\) are shown in the second, third and fourth rows, respectively.
_At any time \(t\geqslant 0\), and for any \((A,B)\) with \(\widetilde{\Xi}_{t}(A,B)\geqslant 1\), both \(A\) and \(B\) are subject, independently of each other, to death and birth mechanisms as in the original herds process (in particular, if either of them is empty, it stays empty). However, they are subject together to the same splitting events: an edge \(e\) is said to be active if \((A\cup B)_{e,1}\neq\varnothing\) and \((A\cup B)_{e,2}\neq\varnothing\). Then, a split occurs at any active edge \(e\) with rate \(\mathsf{v}\), and when this happens, \((A,B)\) is split into the two pairs \((A_{e,1},B_{e,1})\) and \((A_{e,2},B_{e,2})\). (In case \(A=\varnothing\), we let \(A_{e,1}=A_{e,2}=\varnothing\), and similarly for \(B\))._
We record two observations about the two-type herds process in the following lemma. The proof is done by comparing jump rates, and we omit it.
**Lemma 2.9**.: _Let \((A,B)\in P_{\mathsf{f}}(\mathbb{T}^{d})\times P_{\mathsf{f}}(\mathbb{T}^{d})\), and let \((\widetilde{\Xi}_{t})_{t\geqslant 0}\) be the two-type herds process started from \(\delta_{(A,B)}\)._
1. _The processes_ \((\pi_{1}(\widetilde{\Xi}_{t}))_{t\geqslant 0}\) _and_ \((\pi_{2}(\widetilde{\Xi}_{t}))_{t\geqslant 0}\) _are herds processes started from_ \(\delta_{A}\) _and_ \(\delta_{B}\)_, respectively._
2. _The herds process started from_ \(\delta_{A\cup B}\) _is stochastically dominated (in the sense of the partial order_ \(\preceq\)_) by_ \((\pi(\widetilde{\Xi}_{t}))_{t\geqslant 0}\)_._
Proof of Lemma 2.8.: Fix \(p\geqslant 1\) and disjoint sets \(A,B\in P_{\mathsf{f}}(\mathbb{T}^{d})\). Let \((\widetilde{\Xi}_{t})_{t\geqslant 0}\) denote a two-type herds process started from \(\delta_{(A,B)}\), defined under a probability measure \(\widetilde{\mathbb{P}}\) (with expectation operator \(\widetilde{\mathbb{E}}\)). By Lemma 2.9(b), and the fact that \(X(\cdot)\) is monotone with respect to the partial order \(\preceq\), we have
\[\mathbb{E}[X_{t}^{p}\mid\Xi_{0}=\delta_{A\cup B}]\leqslant\widetilde{\mathbb{ E}}[X(\pi(\widetilde{\Xi}_{t}))^{p}].\]
Next, noting that \(X(\pi(\widetilde{\Xi}_{t}))\leqslant X(\pi_{1}(\widetilde{\Xi}_{t}))+X(\pi_{ 2}(\widetilde{\Xi}_{t}))\), we have
\[\widetilde{\mathbb{E}}[X(\pi(\widetilde{\Xi}_{t}))^{p}]\leqslant\widetilde{\mathbb{E}}[(X(\pi_{1}(\widetilde{\Xi}_{t}))+X(\pi_{2}(\widetilde{\Xi}_{t})))^{p}].\]
Putting these inequalities together (raised to the power \(1/p\)) and using Minkowski's inequality, we obtain
\[\mathbb{E}[X_{t}^{p}\mid\Xi_{0}=\delta_{A\cup B}]^{1/p}\leqslant\widetilde{ \mathbb{E}}[X(\pi_{1}(\widetilde{\Xi}_{t}))^{p}]^{1/p}+\widetilde{\mathbb{E}}[ X(\pi_{2}(\widetilde{\Xi}_{t}))^{p}]^{1/p}.\]
By Lemma 2.9(a), the right-hand side equals
\[\mathbb{E}[X_{t}^{p}\mid\Xi_{0}=\delta_{A}]^{1/p}+\mathbb{E}[X_{t}^{p}\mid \Xi_{0}=\delta_{B}]^{1/p}.\]
This completes the proof.
**Remark 2.3**.: _It is worth mentioning a curious fact here, which is the main reason for introducing the two-type herds process. Let \(A,B\in P_{\mathfrak{f}}(\mathbb{T}^{d})\) be disjoint. Then, a natural guess would be that the number of particles in the herds process starting from \(\delta_{A}+\delta_{B}\) should stochastically dominate the number of particles in the process starting from \(\delta_{A\cup B}\), since in the former case there is a priori more space for the particles to spread. However, there seems to be no simple proof of this fact, and it is not even clear whether it should be true or not. In a natural choice of coupling, one would try to map particles of the process started from \(\delta_{A\cup B}\) injectively into particles of the process started from \(\delta_{A}+\delta_{B}\), and make it so that when a particle of the former process dies or gives birth, its image under the injective mapping does the same. However, this does not work. For instance, when starting from \(\delta_{A\cup B}\), it could be that at a later time, a single split would separate more particles than in the process starting from \(\delta_{A}+\delta_{B}\), which in turn could after another birth event give rise to a larger number of particles in the process starting from \(\delta_{A\cup B}\)._
**Remark 2.4**.: _The notion of two-type herds process can of course be generalized to a multi-type herds process. Given any integer \(k\geqslant 1\), the \(k\)-type herds process is the continuous-time Markov chain on_
\[\mathcal{S}_{k}:=\{\xi:P_{\mathfrak{f},k}(\mathbb{T}^{d})\to\mathbb{N}_{0},\ \text{with}\ \sum_{(A_{1},\ldots,A_{k})}\xi(A_{1},\ldots,A_{k})<\infty\},\]
_where_
\[P_{\mathfrak{f},k}(\mathbb{T}^{d}):=\{(A_{1},\ldots,A_{k}):A_{1},\ldots,A_{k}\subseteq\mathbb{T}^{d},\ \cup_{j=1}^{k}A_{j}\text{ finite and non-empty}\}\]
_and where, like in the two-type herds process, every type obeys birth and death mechanisms as in the original herds process, ignoring the other types, but they all share the same splitting events. Naturally, an analogue of Lemma 2.9 holds as well in this general setting. This remark will be used in the proof of Lemma 3.15 below._
## Analysis of the herds process through a growth index
### Definition and properties of growth index
For any \(p\geqslant 1\), define the **growth index of order \(p\)** as
\[\varphi_{p}=\varphi_{p}(\lambda,\mathsf{v}):=\inf_{t>0}\ \mathbb{E}_{\lambda,\mathsf{v}}[X_{t}^{p}]^{1/t}. \tag{10}\]
In case \(p=1\), we omit the subscript, so that
\[\varphi:=\varphi_{1}.\]
Note that we have
\[\varphi_{p}^{t}\leqslant\mathbb{E}[X_{t}^{p}],\qquad t\geqslant 0,\ p\geqslant 1. \tag{11}\]
Moreover, (7) implies that
\[\mathbb{E}[X_{t+s}^{p}]\leqslant\mathbb{E}[X_{t}^{p}]\cdot\mathbb{E}[X_{s}^{p} ],\qquad t,s\geqslant 0,\ p\geqslant 1, \tag{12}\]
and Fekete's lemma then ensures that
\[\lim_{t\to\infty}\mathbb{E}[X_{t}^{p}]^{1/t}=\varphi_{p},\qquad p\geqslant 1. \tag{13}\]
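In more detail, (12) says that \(t\mapsto\log\mathbb{E}[X_{t}^{p}]\) is subadditive, and Fekete's lemma for subadditive functions of a real variable then gives
\[\lim_{t\to\infty}\frac{\log\mathbb{E}[X_{t}^{p}]}{t}=\inf_{t>0}\frac{\log\mathbb{E}[X_{t}^{p}]}{t}=\log\varphi_{p},\]
which is precisely (13).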
It will also be useful to observe that
\[[1,\infty)\ni p\mapsto\varphi_{p}^{1/p}\ \text{is non-decreasing}. \tag{14}\]
To see this, let \(p\geqslant q\geqslant 1\). We bound, for any \(t\geqslant 0\):
\[\mathbb{E}[X_{t}^{p}]=\mathbb{E}[(X_{t}^{q})^{p/q}]\geqslant\mathbb{E}[X_{t}^ {q}]^{p/q},\]
by Jensen's inequality. Hence,
\[(\mathbb{E}[X_{t}^{p}]^{1/t})^{1/p}\geqslant(\mathbb{E}[X_{t}^{q}]^{1/t})^{1/q}.\]
By taking \(t\to\infty\) and using (13), we obtain \(\varphi_{p}^{1/p}\geqslant\varphi_{q}^{1/q}\), as desired.
For the rest of this section, we focus on the growth index with \(p=1\). We will analyse higher values of \(p\) in Section 3.4.
We state a lemma with an upper bound that, apart from a constant prefactor, matches (11) in the case \(p=1\):
**Lemma 3.1**.: _There is a constant \(C=C(\lambda,\mathsf{v})\), such that for any \(t\geqslant 0\),_
\[\mathbb{E}[X_{t}]\leqslant C\cdot\varphi^{t}. \tag{15}\]
Before proving this, we state and prove an auxiliary result, concerning the expected number of herds containing a single particle, after one time unit of the dynamics has elapsed.
**Lemma 3.2**.: _There is a constant \(\rho=\rho(\lambda,\mathsf{v})\) such that for any \(\xi\in\mathcal{S}\), we have_
\[\mathbb{E}\left[\sum_{A:|A|=1}\Xi_{1}(A)\;\middle|\;\Xi_{0}=\xi\right]\geqslant\rho\cdot X(\xi). \tag{16}\]
Proof.: By Lemma 2.5 and the linearity of expectation, it suffices to show that there exists \(\rho>0\) such that for any \(A\in P_{\mathsf{f}}(\mathbb{T}^{d})\),
\[\mathbb{E}\left[\sum_{A^{\prime}:|A^{\prime}|=1}\Xi_{1}(A^{\prime})\;\middle|\;\Xi_{0}=\delta_{A}\right]\geqslant\rho\cdot|A|.\]
To prove this, fix \(A\in P_{\mathsf{f}}(\mathbb{T}^{d})\), and note that the left-hand side above is larger than
\[\sum_{u\in A}\mathbb{E}[\Xi_{1}(\{u\})\mid\Xi_{0}=\delta_{A}]\geqslant\sum_{u\in A}\mathbb{P}(\Xi_{1}(\{u\})>0\mid\Xi_{0}=\delta_{A}).\]
It is easy to see that there is a constant \(\rho>0\), depending only on \(\lambda\) and \(\mathsf{v}\), such that the probability in the sum in the right-hand side is larger than \(\rho\) (for any \(A\) and \(u\)). This is achieved by prescribing that the particle present at \(u\) at time \(0\) does not die or give birth until time \(1\), and moreover this particle becomes separated, through successive splits in the edges that are incident to \(u\), from any other particle in its herd. This shows that the right-hand side above is larger than \(\rho|A|\), completing the proof.
Proof of Lemma 3.1.: For any \(s,t\geqslant 0\) we have
\[\mathbb{E}[X_{t+s}\mid\mathcal{F}_{s}]=\sum_{A}\Xi_{s}(A)\cdot\mathbb{E}[X_{t}\mid\Xi_{0}=\delta_{A}]\geqslant\sum_{A:|A|=1}\Xi_{s}(A)\cdot\mathbb{E}[X_{t}],\]
where the equality follows from Lemma 2.5 and the Markov property, and the inequality from Lemma 2.4. Then, by taking expectations and using Lemma 3.2, we obtain that
\[\mathbb{E}[X_{t+s}]\geqslant\mathbb{E}\left[\sum_{A:|A|=1}\Xi_{s}(A)\right] \cdot\mathbb{E}[X_{t}].\]
In case \(s\geqslant 1\), by Lemma 3.2 and the Markov property, the right-hand side is larger than
\[\rho\cdot\mathbb{E}[X_{s-1}]\cdot\mathbb{E}[X_{t}].\]
Further, by Proposition 2.7, letting \(\kappa:=1/\mathbb{E}[X_{1}]\), the above is larger than
\[\kappa\rho\cdot\mathbb{E}[X_{s}]\cdot\mathbb{E}[X_{t}].\]
Using this recursively, we obtain, for any \(t\geqslant 1\) and \(n\in\mathbb{N}\),
\[\mathbb{E}[X_{nt}]\geqslant(\kappa\rho)^{n-1}\cdot\mathbb{E}[X_{t}]^{n},\]
so that
\[\mathbb{E}[X_{t}]\leqslant(\kappa\rho)^{-\frac{n-1}{n}}\cdot\mathbb{E}[X_{nt} ]^{\frac{1}{n}}=(\kappa\rho)^{-\frac{n-1}{n}}\cdot(\mathbb{E}[X_{nt}]^{\frac{ 1}{nt}})^{t}.\]
Using (13), we obtain
\[\mathbb{E}[X_{t}]\leqslant\liminf_{n\to\infty}\left((\kappa\rho)^{-\frac{n-1} {n}}\cdot(\mathbb{E}[X_{nt}]^{\frac{1}{nt}})^{t}\right)=C\cdot\varphi^{t},\]
with \(C=\frac{1}{\kappa\rho}\). This proves the lemma for \(t\geqslant 1\); for \(t\in[0,1)\) the bound also holds after enlarging \(C\) if necessary, since \(\mathbb{E}[X_{t}]\) is bounded on \([0,1]\), while \(\varphi^{t}\) is bounded away from zero there (note that \(\mathbb{E}[X_{s}]\geqslant e^{-s}\), so \(\varphi\geqslant e^{-1}\)).
Together with the fact that, for each fixed \(t\geqslant 0\), the map \((\lambda,\mathsf{v})\mapsto\mathbb{E}_{\lambda,\mathsf{v}}[X_{t}]\) is continuous, Lemma 3.1 allows us to deduce that \((\lambda,\mathsf{v})\mapsto\varphi(\lambda,\mathsf{v})\) is both upper- and lower-semicontinuous, hence it is continuous. Moreover, we have the following simple characterization of the supercritical regime. Recall the definition of \(\bar{\lambda}(\mathsf{v})\) from Definition 2.6.
**Lemma 3.3**.: _For any \(\lambda>0\) and \(\mathsf{v}>0\), the following are equivalent:_
1. _the herds process survives with positive probability;_
2. \(\varphi>1\)_;_
3. \(\mathbb{E}[X_{t}]\xrightarrow{t\to\infty}\infty\)_._
_Consequently, for any \(\mathsf{v}>0\),_
\[\varphi(\bar{\lambda}(\mathsf{v}),\mathsf{v})=1. \tag{17}\]
Proof.: The fact that (b) and (c) are equivalent follows from the fact that for any \(t\geqslant 0\), \(\varphi^{t}\leqslant\mathbb{E}[X_{t}]\leqslant C\varphi^{t}\).
Now, if (a) holds, then we must also have \(X_{t}\xrightarrow{t\to\infty}\infty\) with positive probability, as otherwise using the conditional Borel-Cantelli Lemma and a standard argument, we would get a contradiction. Then (c) follows from Fatou's Lemma.
Conversely, assume that (c) holds. Let \(Z_{t}:=\sum_{A:|A|=1}\Xi_{t}(A)\) denote the number of herds in \(\Xi_{t}\) with a single particle in them. By Lemma 3.2, we have \(\mathbb{E}[Z_{t+1}]\geqslant\rho\cdot\mathbb{E}[X_{t}]\). It follows that there exists some \(T>0\) such that \(\mathbb{E}[Z_{T}]\geqslant 2\). Since different herds evolve independently of each other, we deduce that \((Z_{nT})_{n\geqslant 0}\) dominates a supercritical branching process, and therefore survives forever with positive probability. Since it is dominated by \((X_{nT})_{n\geqslant 0}\), we get that (a) is satisfied.
Having established the equivalence between (a) and (b), the equality (17) follows from the continuity of \(\varphi\).
### Strict monotonicity of growth index
Our goal in this section is to prove the following result.
**Proposition 3.4**.: _The map \((\lambda,\mathsf{v})\mapsto\varphi(\lambda,\mathsf{v})\) is strictly increasing in both arguments._
Before discussing the proof of this, let us see how it allows us to prove Theorem 1.3:
Proof of Theorem 1.3.: Let \(\mathsf{v}^{\prime}>\mathsf{v}>0\). We have
\[\varphi(\bar{\lambda}(\mathsf{v}^{\prime}),\mathsf{v}^{\prime})=1=\varphi( \bar{\lambda}(\mathsf{v}),\mathsf{v})<\varphi(\bar{\lambda}(\mathsf{v}), \mathsf{v}^{\prime}),\]
where the two equalities are given by (17) and the inequality by the strict monotonicity of \(\varphi\). Again using the strict monotonicity of \(\varphi\), we conclude from \(\varphi(\bar{\lambda}(\mathsf{v}^{\prime}),\mathsf{v}^{\prime})<\varphi(\bar{ \lambda}(\mathsf{v}),\mathsf{v}^{\prime})\) that \(\bar{\lambda}(\mathsf{v}^{\prime})<\bar{\lambda}(\mathsf{v})\).
The proof of Proposition 3.4 will require several steps. We start with a definition.
**Definition 3.1**.: _Fix the parameters \(\mathsf{v}\) and \(\lambda\) of the herds process. Given \(\xi\in\mathcal{S}\) and \(t>0\), we let_
\[g_{\mathsf{v}}(\xi,t):=\sum_{A\in P_{\mathrm{f}}(\mathbb{T}^{d})}\xi(A)\sum_{\begin{subarray}{c}e\text{ active}\\ \text{edge of }A\end{subarray}}(\mathbb{E}_{\lambda,\mathsf{v}}[X_{t}\mid\Xi_{0}=\delta_{A_{e,1}}+\delta_{A_{e,2}}]-\mathbb{E}_{\lambda,\mathsf{v}}[X_{t}\mid\Xi_{0}=\delta_{A}]), \tag{18}\]
_and_
\[h_{\lambda}(\xi,t):=\sum_{A\in P_{\mathrm{f}}(\mathbb{T}^{d})}\xi(A)\sum_{ \begin{subarray}{c}x\in A,\ y\not\in A,\\ x\sim y\end{subarray}}(\mathbb{E}_{\lambda,\mathsf{v}}[X_{t}\mid\Xi_{0}= \delta_{A\cup\{y\}}]-\mathbb{E}_{\lambda,\mathsf{v}}[X_{t}\mid\Xi_{0}=\delta_ {A}]) \tag{19}\]
_(we omit \(\lambda\) from the notation for \(g_{\mathsf{v}}\) and we omit \(\mathsf{v}\) from the notation for \(h_{\lambda}\))._
The functions \(g_{\mathsf{v}}\) and \(h_{\lambda}\) give a measure of the total impact on the number of particles at time \(t\) of splitting the herds of \(\xi\) at time zero and having a birth at time zero, respectively.
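For instance, if \(\xi=\delta_{\{o,v\}}\) with \(v\sim o\), then the only active edge is \(\{o,v\}\), and
\[g_{\mathsf{v}}(\xi,t)=\mathbb{E}_{\lambda,\mathsf{v}}[X_{t}\mid\Xi_{0}=\delta_{\{o\}}+\delta_{\{v\}}]-\mathbb{E}_{\lambda,\mathsf{v}}[X_{t}\mid\Xi_{0}=\delta_{\{o,v\}}],\]
which is exactly the difference that Lemma 3.6 below bounds from below. Note also that every summand in (18) is nonnegative, by Lemmas 2.5 and 2.8 (applied with \(p=1\)).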
**Proposition 3.5**.: _For any \(T>0\) we have_
\[\frac{\partial}{\partial\mathsf{v}}\mathbb{E}_{\lambda,\mathsf{v}}[X_{T}]= \int_{0}^{T}\mathbb{E}_{\lambda,\mathsf{v}}[g_{\mathsf{v}}(\Xi_{t},T-t)]\; \mathrm{d}t, \tag{20}\]
_and_
\[\frac{\partial}{\partial\lambda}\mathbb{E}_{\lambda,\mathsf{v}}[X_{T}]=\int_{0}^{T}\mathbb{E}_{\lambda,\mathsf{v}}[h_{\lambda}(\Xi_{t},T-t)]\;\mathrm{d}t. \tag{21}\]
The proof is postponed to Section 3.3, and we now prove another intermediate result.
**Lemma 3.6**.: _Let \(\xi^{\prime}\) consist of a single herd with exactly two particles, which are neighbors, and \(\xi^{\prime\prime}\) consist of two herds, each containing a single particle. Then, there exists \(\gamma>0\) (depending continuously on \(\lambda\) and \(\mathsf{v}\)) such that for any \(t\geqslant 1\),_
\[\mathbb{E}[X_{t}\mid\Xi_{0}=\xi^{\prime\prime}]\geqslant\mathbb{E}[X_{t}\mid\Xi_{0}=\xi^{\prime}]+\gamma\cdot\mathbb{E}[X_{t}\mid\Xi_{0}=\delta_{\{o\}}].\]
Proof.: Fix \(v\sim o\). Without loss of generality, we assume that \(\xi^{\prime}\) consists of the herd \(\{o,v\}\) and \(\xi^{\prime\prime}\) consists of the herds \(\{o\}\) and \(\{v\}\). We define a coupling of the two herds processes \((\Xi^{\prime}_{s})\) and \((\Xi^{\prime\prime}_{s})\) starting respectively from \(\xi^{\prime}\) and \(\xi^{\prime\prime}\): first we take independent random variables, as follows:
* \(\tau_{o}\) and \(\tau_{v}\), both \(\sim\mathrm{Exp}(1)\);
* for each \(u\sim o\), \(\tau_{o,u}\sim\mathrm{Exp}(\lambda)\);
* for each \(u\sim v\), \(\tau_{v,u}\sim\mathrm{Exp}(\lambda)\);
* \(\tau_{\mathrm{split}}\sim\mathrm{Exp}(\mathsf{v})\).
Additionally, let \(\tau^{\prime}\) denote the minimum of all these random variables, and \(\tau:=\min(1,\tau^{\prime})\). Now, the coupling is defined as follows. In all cases, we let \((\Xi^{\prime}_{s},\Xi^{\prime\prime}_{s})=(\xi^{\prime},\xi^{\prime\prime})\) for \(s\in[0,\tau)\); the definition of \((\Xi^{\prime}_{\tau},\Xi^{\prime\prime}_{\tau})\) will be split into several cases, but after time \(\tau\), we let \(\Xi^{\prime}_{\tau}\) and \(\Xi^{\prime\prime}_{\tau}\) continue evolving independently as two herds processes with split rate \(\mathsf{v}\). The definition of \((\Xi^{\prime}_{\tau},\Xi^{\prime\prime}_{\tau})\) is as follows: if \(\tau=1<\tau^{\prime}\), then \((\Xi^{\prime}_{\tau},\Xi^{\prime\prime}_{\tau})=(\xi^{\prime},\xi^{\prime\prime})\). Otherwise:
* if \(\tau=\tau_{o}\), then \(\Xi^{\prime}_{\tau}=\Xi^{\prime\prime}_{\tau}=\delta_{\{v\}}\); similarly if \(\tau=\tau_{v}\), then \(\Xi^{\prime}_{\tau}=\Xi^{\prime\prime}_{\tau}=\delta_{\{o\}}\);
* if for some \(u\sim o\), \(\tau=\tau_{o,u}\), then \(\Xi^{\prime}_{\tau}=\delta_{\{o,u,v\}}\) and \(\Xi^{\prime\prime}_{\tau}=\delta_{\{o,u\}}+\delta_{\{v\}}\);
* if for some \(u\sim v\), \(\tau=\tau_{v,u}\), then \(\Xi^{\prime}_{\tau}=\delta_{\{o,u,v\}}\) and \(\Xi^{\prime\prime}_{\tau}=\delta_{\{o\}}+\delta_{\{u,v\}}\);
* if \(\tau=\tau_{\mathrm{split}}\), then \(\Xi^{\prime}_{\tau}=\Xi^{\prime\prime}_{\tau}=\delta_{\{o\}}+\delta_{\{v\}}\).
We let \(\widehat{\mathbb{P}}\) denote a probability measure under which this coupling is defined, and \(\widehat{\mathbb{E}}\) be the associated expectation operator. Also let \((\mathcal{F}_{t})_{t\geqslant 0}\) denote the natural filtration of \((\Xi^{\prime}_{t},\Xi^{\prime\prime}_{t})_{t\geqslant 0}\).
Fix \(t\geqslant 1\). Using Lemma 2.8 and inspecting all cases concerning \(\tau\), it is easy to check that
\[\widehat{\mathbb{E}}[X(\Xi^{\prime\prime}_{t})\mid\mathcal{F}_{\tau}]\geqslant \widehat{\mathbb{E}}[X(\Xi^{\prime}_{t})\mid\mathcal{F}_{\tau}]. \tag{22}\]
Define the good event \(E:=\{\tau=\tau_{o,v}\}\cup\{\tau=\tau_{v,o}\}\), and note that on this event, \(\Xi^{\prime\prime}_{\tau}\) contains \(\Xi^{\prime}_{\tau}\) plus an additional herd with a single particle in it. Hence,
\[\widehat{\mathbb{E}}[X(\Xi^{\prime\prime}_{t})\mid\mathcal{F}_{\tau}]\cdot \mathds{1}_{E}=\left(\widehat{\mathbb{E}}[X(\Xi^{\prime}_{t})\mid\mathcal{F}_{ \tau}]+F(t-\tau)\right)\cdot\mathds{1}_{E}, \tag{23}\]
where \(F(s):=\mathbb{E}[X_{s}\mid\Xi_{0}=\delta_{\{o\}}]\).
We now write
\[\mathbb{E}[X_{t}\mid\Xi_{0}=\xi^{\prime\prime}]=\widehat{\mathbb{E}}[X(\Xi_{t}^{ \prime\prime})]=\widehat{\mathbb{E}}[\widehat{\mathbb{E}}[X(\Xi_{t}^{\prime \prime})\mid\mathcal{F}_{\tau}]\cdot\mathds{1}_{E}+\widehat{\mathbb{E}}[X(\Xi_ {t}^{\prime\prime})\mid\mathcal{F}_{\tau}]\cdot\mathds{1}_{E^{c}}]\]
and using (22) and (23), we bound the right-hand side from below by
\[\widehat{\mathbb{E}}[(\widehat{\mathbb{E}}[X(\Xi_{t}^{\prime}) \mid\mathcal{F}_{\tau}]+F(t-\tau))\cdot\mathds{1}_{E}+\widehat{\mathbb{E}}[X( \Xi_{t}^{\prime})\mid\mathcal{F}_{\tau}]\cdot\mathds{1}_{E^{c}}]\] \[\geqslant\widehat{\mathbb{E}}[X(\Xi_{t}^{\prime})]+\widehat{ \mathbb{E}}[F(t-\tau)\cdot\mathds{1}_{E}]\] \[\geqslant\mathbb{E}[X_{t}\mid\Xi_{0}=\xi^{\prime}]+\widehat{ \mathbb{P}}(E)\cdot\min_{t-1\leqslant s\leqslant t}F(s).\]
Using (7) we have that \(F(s)\cdot F(t-s)\geqslant F(t)\) for any \(s\in[t-1,t]\), which gives
\[\min_{t-1\leqslant s\leqslant t}F(s)\geqslant F(t)\cdot\left(\max_{0\leqslant r\leqslant 1}F(r)\right)^{-1}.\]
The lemma is thus proved with \(\gamma:=\widehat{\mathbb{P}}(E)\cdot\left(\max_{0\leqslant r\leqslant 1}F(r)\right)^{-1}\).
Proof of Proposition 3.4.: We only prove the strict monotonicity in \(\mathsf{v}\), the argument for the strict monotonicity in \(\lambda\) is entirely similar. We start with some basic observations. Let \(X_{t}^{\prime}\) denote the number of herds in the herds process at time \(t\) which contain exactly two particles, these particles being neighbors. We claim that
\[\mathbb{E}[X_{t}^{\prime}]\geqslant c\cdot\mathbb{E}[X_{t-1}]\quad\text{for any $t\geqslant 1$}, \tag{24}\]
with \(c\) some positive constant depending continuously on \(\mathsf{v}\). To see this, recall that if we denote by \(Z_{t}\) the number of herds of \(\Xi_{t}\) that contain a single particle, then (as in the proof of Lemma 3.1) one has \(\mathbb{E}[Z_{t-1/2}]\geqslant c_{1}\cdot\mathbb{E}[X_{t-1}]\), for some constant \(c_{1}>0\), depending continuously on \(\mathsf{v}\). Since, in any half unit of time, a herd with a single particle can be transformed into a herd with two neighboring particles at a constant price (depending continuously on \(\mathsf{v}\)), this gives the claim (24).
Recalling the definition of \(g_{\mathsf{v}}\) in Definition 3.1, and noting that every summand in (18) is nonnegative (by Lemmas 2.5 and 2.8 with \(p=1\)), Lemma 3.6 gives
\[g_{\mathsf{v}}(\Xi_{s},t-s)\geqslant\gamma\cdot X_{s}^{\prime}\cdot\mathbb{E }[X_{t-s}].\]
Together with Proposition 3.5 and (24), this gives, for any \(\mathsf{v}>0\) and any \(t\geqslant 2\), with \(F(\mathsf{v},t)=\mathbb{E}_{\mathsf{v}}[X_{t}]\),
\[\frac{\partial F}{\partial\mathsf{v}}(\mathsf{v},t)\geqslant c\gamma\cdot\int_{1}^{t}\mathbb{E}[X_{s-1}]\cdot\mathbb{E}[X_{t-s}]\,ds.\]
Applying Proposition 2.7 then gives
\[\frac{\partial F}{\partial\mathsf{v}}(\mathsf{v},t)\geqslant c^{\prime}\cdot t \cdot F(\mathsf{v},t),\]
where \(c^{\prime}>0\) depends continuously on \(\mathsf{v}\). This implies that, for any fixed \(0<\mathsf{v}_{1}<\mathsf{v}_{2}\) and all \(t\geqslant 2\),
\[\log\frac{F(\mathsf{v}_{2},t)}{F(\mathsf{v}_{1},t)}\geqslant\rho\cdot t\cdot (\mathsf{v}_{2}-\mathsf{v}_{1}),\]
for some \(\rho>0\). Thus, by taking the limit as \(t\to\infty\) and using (13), we get
\[\varphi(\mathsf{v}_{1},\lambda)\leqslant e^{-\rho(\mathsf{v}_{2}-\mathsf{v}_ {1})}\cdot\varphi(\mathsf{v}_{2},\lambda).\]
### Proof of derivative formulas
We now turn to establishing (20) and (21). The proofs are entirely analogous, so we only do the former. A few of the more technical points of the proof are done in the appendix.
Recall the definition of the function \(g_{\mathsf{v}}(\xi,t)\) in (18). We now define the closely related function:
\[g_{\mathsf{v},\varepsilon}(\xi,t):=\sum_{A\in P_{\mathrm{f}}(\mathbb{T}^{d})}\xi(A)\sum_{\begin{subarray}{c}e\text{ active}\\ \text{edge of }A\end{subarray}}(\mathbb{E}_{\lambda,\mathsf{v}+\varepsilon}[X_{t}\mid\Xi_{0}=\delta_{A_{e,1}}+\delta_{A_{e,2}}]-\mathbb{E}_{\lambda,\mathsf{v}}[X_{t}\mid\Xi_{0}=\delta_{A}]), \tag{25}\]
where \(\mathsf{v}>0\) and \(\varepsilon>0\). Note that the expression for \(g_{\mathsf{v},\varepsilon}\) only differs from that for \(g_{\mathsf{v}}\) in that in the former, the first expectation that appears in the right-hand side is under parameters \(\lambda\), \(\mathsf{v}+\varepsilon\) rather than \(\lambda\), \(\mathsf{v}\).
The following integrability result will be useful. The proof is done in the appendix.
**Lemma 3.7**.: _For any \(T>0\) and \(k\geqslant 1\), we have_
\[\mathbb{E}_{\lambda,\mathsf{v}}\left[\max_{0\leqslant t\leqslant T}|g_{\mathsf{ v}}(\Xi_{t},T-t)|^{k}\right]<\infty\quad\text{and}\quad\mathbb{E}_{\lambda,\mathsf{v}} \left[\max_{0\leqslant t\leqslant T}|g_{\mathsf{v},\varepsilon}(\Xi_{t},T-t) |^{k}\right]<\infty.\]
The following definition will give an alternative expression for \(g_{\mathsf{v}}\) and \(g_{\mathsf{v},\varepsilon}\), in (26) and (27) below.
**Definition 3.2**.: _Given \(\xi\in\mathcal{S}\), let \(\mathcal{G}(\xi)\) denote the set of all herd configurations \(\xi^{\prime}\) that can be obtained by performing a single split on \(\xi\). Given \(\xi^{\prime}\in\mathcal{G}(\xi)\), let \(A\) be the (unique) herd shape that is split into two to obtain \(\xi^{\prime}\) from \(\xi\); let \(\mathsf{m}(\xi,\xi^{\prime}):=\xi(A)\)._
To further clarify this definition, fix an enumeration \(\xi=\sum_{i=1}^{m}\delta_{A_{i}}\) of \(\xi\). Then, \(\mathcal{G}(\xi)\) is the set of \(\xi^{\prime}\in\mathcal{S}\) with
\[\xi^{\prime}=\sum_{i\neq j}\delta_{A_{i}}+\delta_{(A_{j})_{e,1}}+\delta_{(A_{ j})_{e,2}},\]
where \(j\in\{1,\ldots,m\}\) and \(e\) is an active edge of \(A_{j}\). For \(\xi^{\prime}\) as in the above display, we have \(\mathsf{m}(\xi,\xi^{\prime})=\xi(A_{j})\).
We now observe that, using Lemma 2.5, we can rewrite
\[g_{\mathsf{v}}(\xi,t)=\sum_{\xi^{\prime}\in\mathcal{G}(\xi)}\mathsf{m}(\xi,\xi ^{\prime})\cdot(\mathbb{E}_{\lambda,\mathsf{v}}[X_{t}\mid\Xi_{0}=\xi^{\prime}] -\mathbb{E}_{\lambda,\mathsf{v}}[X_{t}\mid\Xi_{0}=\xi]) \tag{26}\]
and
\[g_{\mathsf{v},\varepsilon}(\xi,t)=\sum_{\xi^{\prime}\in\mathcal{G}(\xi)} \mathsf{m}(\xi,\xi^{\prime})\cdot(\mathbb{E}_{\lambda,\mathsf{v}+\varepsilon }[X_{t}\mid\Xi_{0}=\xi^{\prime}]-\mathbb{E}_{\lambda,\mathsf{v}}[X_{t}\mid\Xi_ {0}=\xi]). \tag{27}\]
Fix \(\mathsf{v}>0\), \(\lambda>0\) and \(\varepsilon>0\). We will now construct a coupling \((\mathcal{V}_{t},\mathcal{W}_{t})_{t\geqslant 0}\) under a probability measure \(\widehat{\mathbb{P}}\) (with dependence on \(\mathsf{v},\lambda,\varepsilon\) omitted) so that \((\mathcal{V}_{t})\) is a herds process with parameters \(\lambda\), \(\mathsf{v}\), and \((\mathcal{W}_{t})\) is a herds process with parameters \(\lambda\), \(\mathsf{v}+\varepsilon\); both these processes are started from a single herd with a single particle (at the root of \(\mathbb{T}^{d}\)).
**Definition 3.3** (Coupling \((\mathcal{V}_{t},\mathcal{W}_{t})\)).: _Take a probability space with probability measure \(\widehat{\mathbb{P}}\) under which a herds process \((\mathcal{V}_{t})_{t\geqslant 0}\) with parameters \(\lambda\), \(\mathsf{v}\) is defined,
started from \(\delta_{\{o\}}\). Assume that the split jumps of \((\mathcal{V}_{t})\) are given as follows: splitting instructions arise with rate \(\mathsf{v}+\varepsilon\) (rather than \(\mathsf{v}\)), but they are rejected with probability \(\frac{\varepsilon}{\mathsf{v}+\varepsilon}\). Let_
\[\tau_{\mathrm{sep}}:=\inf\{t\geqslant 0:\text{ a splitting instruction is rejected at time }t\}.\]
_We define \((\mathcal{W}_{t})_{t\geqslant 0}\) as follows. For \(0\leqslant t<\tau_{\mathrm{sep}}\), we set \(\mathcal{W}_{t}=\mathcal{V}_{t}\). At time \(\tau_{\mathrm{sep}}\), this process obeys the splitting instruction that was rejected by \((\mathcal{V}_{t})\). We then let \((\mathcal{W}_{t})_{t\geqslant\tau_{\mathrm{sep}}}\) continue evolving from \(\mathcal{W}_{\tau_{\mathrm{sep}}}\) as a herds process with parameters \(\lambda\), \(\mathsf{v}+\varepsilon\), independent of \((\mathcal{V}_{t})_{t\geqslant\tau_{\mathrm{sep}}}\). Finally, we let_
\[\mathcal{X}_{t}=X(\mathcal{V}_{t}),\qquad\mathcal{Y}_{t}=X(\mathcal{W}_{t}), \qquad t\geqslant 0.\]
For the rest of this section, we fix \(T>0\).
**Definition 3.4**.: _We define the process \((\mathcal{A}_{t})_{0\leqslant t\leqslant T}\) as_
\[\mathcal{A}_{t}:=\mathds{1}\{\tau_{\mathrm{sep}}\leqslant t\}\cdot\widehat{ \mathbb{E}}[\mathcal{Y}_{T}-\mathcal{X}_{T}\mid\mathcal{F}_{\tau_{\mathrm{sep} }}],\quad 0\leqslant t\leqslant T.\]
That is, in the event \(\{\tau_{\mathrm{sep}}>T\}\) we have \(\mathcal{A}_{t}\equiv 0\), whereas in \(\{\tau_{\mathrm{sep}}\leqslant T\}\), this process takes just two values: \(\mathcal{A}_{t}=0\) for \(t\in[0,\tau_{\mathrm{sep}})\) and \(\mathcal{A}_{t}=\widehat{\mathbb{E}}[\mathcal{Y}_{T}-\mathcal{X}_{T}\mid \mathcal{F}_{\tau_{\mathrm{sep}}}]\) for \(t\in[\tau_{\mathrm{sep}},T]\). Our interest in this process stems from the fact that
\[\mathbb{E}_{\lambda,\mathsf{v}+\varepsilon}[X_{T}]-\mathbb{E}_{ \lambda,\mathsf{v}}[X_{T}] =\widehat{\mathbb{E}}[\mathcal{Y}_{T}-\mathcal{X}_{T}]\] \[=\widehat{\mathbb{E}}[(\mathcal{Y}_{T}-\mathcal{X}_{T})\cdot \mathds{1}\{\tau_{\mathrm{sep}}\leqslant T\}]\] \[=\widehat{\mathbb{E}}[\widehat{\mathbb{E}}[(\mathcal{Y}_{T}- \mathcal{X}_{T})\mid\mathcal{F}_{\tau_{\mathrm{sep}}}]\cdot\mathds{1}\{\tau_{ \mathrm{sep}}\leqslant T\}]=\widehat{\mathbb{E}}[\mathcal{A}_{T}]. \tag{28}\]
We now compute the right derivative with respect to time of the conditional expectation of \(\mathcal{A}_{t}\). This lemma is where the function \(g_{\mathsf{v},\varepsilon}\) enters the picture.
**Lemma 3.8**.: _For any \(t\in[0,T)\), on the event \(\{\tau_{\mathrm{sep}}>t\}\) we have_
\[\frac{\mathrm{d}}{\mathrm{d}s}\widehat{\mathbb{E}}[\mathcal{A}_{t+s}\mid \mathcal{F}_{t}]\bigg{|}_{s=0+}=\varepsilon\cdot g_{\mathsf{v},\varepsilon}( \mathcal{V}_{t},T-t). \tag{29}\]
Proof.: We abbreviate (27) by writing
\[g_{\mathsf{v},\varepsilon}(\xi,s)=\sum_{\xi^{\prime}\in\mathcal{G}(\xi)} \mathsf{m}(\xi,\xi^{\prime})\cdot\beta(\xi,\xi^{\prime},s),\]
where
\[\beta(\xi,\xi^{\prime},s):=\mathbb{E}_{\lambda,\mathsf{v}+\varepsilon}[X_{s}\mid \Xi_{0}=\xi^{\prime}]-\mathbb{E}_{\lambda,\mathsf{v}}[X_{s}\mid\Xi_{0}=\xi], \quad\xi\in\mathcal{S},\ \xi^{\prime}\in\mathcal{G}(\xi),\ s>0.\]
Fix \(s>0\) so that \(t+s\leqslant T\), and fix \(\xi\in\mathcal{S}\). For each \(\xi^{\prime}\in\mathcal{G}(\xi)\), let \(E(\xi^{\prime})\) be the event that:
* \(\tau_{\mathrm{sep}}\in(t,t+s]\),
* the first jump of \((\mathcal{V}_{r},\mathcal{W}_{r})_{t\leqslant r\leqslant t+s}\) is the one that occurs at time \(\tau_{\mathrm{sep}}\), and
* \(\mathcal{W}_{\tau_{\mathrm{sep}}}=\xi^{\prime}\).
On the event \(\{\tau_{\mathrm{sep}}>t,\ \mathcal{V}_{t}=\xi\}\), we have
\[\widehat{\mathbb{E}}[\mathcal{A}_{t+s}\mid\mathcal{F}_{t}]- \widehat{\mathbb{E}}[\mathcal{A}_{t}\mid\mathcal{F}_{t}] =\widehat{\mathbb{E}}[\mathcal{A}_{t+s}\mid\mathcal{F}_{t}]\] \[=\sum_{\xi^{\prime}\in\mathcal{G}(\xi)}\widehat{\mathbb{E}}[ \mathds{1}_{E(\xi^{\prime})}\cdot\beta(\xi,\xi^{\prime},T-\tau_{\mathrm{sep}}) \mid\mathcal{F}_{t}]+o(s),\]
where the \(o(s)\) term (which of course refers to when \(s\to 0\)) comes from events where there are multiple jumps in \((t,t+s]\). The above sum equals
\[\begin{split}&\sum_{\xi^{\prime}\in\mathcal{G}(\xi)}\beta(\xi, \xi^{\prime},T-t)\cdot\widehat{\mathbb{P}}(E(\xi^{\prime})\mid\mathcal{F}_{t} )\\ &+\sum_{\xi^{\prime}\in\mathcal{G}(\xi)}\widehat{\mathbb{E}}[ \mathds{1}_{E(\xi^{\prime})}\cdot(\beta(\xi,\xi^{\prime},T-\tau_{\mathrm{sep} })-\beta(\xi,\xi^{\prime},T-t))\mid\mathcal{F}_{t}].\end{split} \tag{30}\]
We will treat the two sums separately. The absolute value of the second sum is bounded by
\[\left(\sup_{\xi^{\prime}\in\mathcal{G}(\xi),\ u\in(t,t+s]}\left|\beta(\xi,\xi ^{\prime},T-u)-\beta(\xi,\xi^{\prime},T-t)\right|\right)\cdot\sum_{\xi^{ \prime}\in\mathcal{G}(\xi)}\widehat{\mathbb{P}}(E(\xi^{\prime})\mid\mathcal{F }_{t}).\]
As \(s\to 0\), the supremum tends to zero and the sum is bounded by \(1\), so the whole expression is \(o(s)\).
We now turn to the first sum in (30). As \(s\to 0\) we have
\[\widehat{\mathbb{P}}(E(\xi^{\prime})\mid\mathcal{F}_{t})=(\mathsf{v}+ \varepsilon)\cdot\frac{\varepsilon}{\mathsf{v}+\varepsilon}\cdot\mathsf{m}( \xi,\xi^{\prime})\cdot s+o(s)=\varepsilon s\cdot\mathsf{m}(\xi,\xi^{\prime})+ o(s),\]
so the sum equals
\[\varepsilon s\sum_{\xi^{\prime}\in\mathcal{G}(\xi)}\mathsf{m}(\xi,\xi^{\prime}) \cdot\beta(\xi,\xi^{\prime},T-t)+o(s)=\varepsilon s\cdot g_{\mathsf{v},\varepsilon }(\xi,T-t)+o(s).\]
We have thus proved that
\[\mathds{1}\{\tau_{\mathrm{sep}}>t\}\cdot(\widehat{\mathbb{E}}[\mathcal{A}_{t+ s}\mid\mathcal{F}_{t}]-\widehat{\mathbb{E}}[\mathcal{A}_{t}\mid\mathcal{F}_{t}])= \mathds{1}\{\tau_{\mathrm{sep}}>t\}\cdot(\varepsilon s\cdot g_{\mathsf{v}, \varepsilon}(\mathcal{V}_{t},T-t)+o(s)),\]
so (29) follows.
Next, we obtain the expression for the derivative with respect to time of the (non-conditional) expectation of \(\mathcal{A}_{t}\).
**Lemma 3.9**.: _For \(t\in[0,T)\), we have_
\[\frac{\mathrm{d}}{\mathrm{d}t}\widehat{\mathbb{E}}[\mathcal{A}_{t}]= \varepsilon\cdot\widehat{\mathbb{E}}[\mathds{1}\{\tau_{\mathrm{sep}}>t\}\cdot g _{\mathsf{v},\varepsilon}(\mathcal{V}_{t},T-t)]. \tag{31}\]
The first step in establishing this lemma is noting that, for \(0\leqslant t<t+s\leqslant T\),
\[\frac{\widehat{\mathbb{E}}[\mathcal{A}_{t+s}]-\widehat{\mathbb{E}}[\mathcal{A }_{t}]}{s}=\widehat{\mathbb{E}}\left[\mathds{1}\{\tau_{\mathrm{sep}}>t\} \cdot\frac{\widehat{\mathbb{E}}[\mathcal{A}_{t+s}\mid\mathcal{F}_{t}]}{s} \right],\]
since \(\mathcal{A}_{t+s}=\mathcal{A}_{t}\) on \(\{\tau_{\mathrm{sep}}\leqslant t\}\), and \(\mathcal{A}_{t}=0\) on \(\{\tau_{\mathrm{sep}}>t\}\). We would now like to take \(s\) to zero (from the right only, at least at first) and use Lemma 3.8, but we need to exchange the limit and the expectation; formally:
\[\lim_{s\to 0+}\widehat{\mathbb{E}}\left[\mathds{1}\{\tau_{\mathrm{sep}}>t\} \cdot\frac{\widehat{\mathbb{E}}[\mathcal{A}_{t+s}\mid\mathcal{F}_{t}]}{s} \right]=\widehat{\mathbb{E}}\left[\mathds{1}\{\tau_{\mathrm{sep}}>t\}\cdot \lim_{s\to 0+}\frac{\widehat{\mathbb{E}}[\mathcal{A}_{t+s}\mid\mathcal{F}_{t}]}{ s}\right]. \tag{32}\]
The justification of this exchange is done with a standard dominated convergence argument, but an additional bound is required, so we postpone the full proof of Lemma 3.9 to the Appendix.
Proof of (20).: Fix \(\varepsilon>0\). Using (28) and Lemma 3.9, we have
\[\frac{\mathbb{E}_{\lambda,\mathsf{v}+\varepsilon}[X_{T}]-\mathbb{E}_{\lambda,\mathsf{v}}[X_{T}]}{\varepsilon}=\int_{0}^{T}\widehat{\mathbb{E}}[ \mathds{1}\{\tau_{\mathrm{sep}}>t\}\cdot g_{\mathsf{v},\varepsilon}(\mathcal{ V}_{t},T-t)]\;\mathrm{d}t.\]
By Fubini's theorem (which we can use since we have the integrability condition given in Lemma 3.7), the right-hand side above equals
\[\widehat{\mathbb{E}}\left[\int_{0}^{\tau_{\mathrm{sep}}\wedge T}g_{\mathsf{v}, \varepsilon}(\mathcal{V}_{t},T-t)\;\mathrm{d}t\right].\]
We write this as
\[\widehat{\mathbb{E}}\left[\int_{0}^{T}g_{\mathsf{v}}(\mathcal{V}_ {t},T-t)\;\mathrm{d}t\right]\] \[+\widehat{\mathbb{E}}\left[\int_{0}^{T}(g_{\mathsf{v},\varepsilon }(\mathcal{V}_{t},T-t)-g_{\mathsf{v}}(\mathcal{V}_{t},T-t))\;\mathrm{d}t \right]-\widehat{\mathbb{E}}\left[\int_{\tau_{\mathrm{sep}}}^{T}g_{\mathsf{v}, \varepsilon}(\mathcal{V}_{t},T-t)\;\mathrm{d}t\right]. \tag{33}\]
Note that, since the law of \((\mathcal{V}_{t})\) under \(\widehat{\mathbb{P}}\) equals the law of \((\Xi_{t})\) under \(\mathbb{P}_{\lambda,\mathsf{v}}\), we have
\[\widehat{\mathbb{E}}\left[\int_{0}^{T}g_{\mathsf{v}}(\mathcal{V}_{t},T-t)\; \mathrm{d}t\right]=\mathbb{E}_{\lambda,\mathsf{v}}\left[\int_{0}^{T}g_{ \mathsf{v}}(\Xi_{t},T-t)\;\mathrm{d}t\right].\]
Hence, the proof will be completed once we prove that the second and third expectations in (33) tend to zero as \(\varepsilon\to 0\).
It is straightforward to show that, for any \(\xi\in\mathcal{S}\) and any \(t\in[0,T]\),
\[g_{\mathsf{v},\varepsilon}(\xi,t)\xrightarrow{\varepsilon\to 0}g_{\mathsf{v}}( \xi,t).\]
Combining this with the dominated convergence theorem (using Lemma 3.7), we obtain
\[\widehat{\mathbb{E}}\left[\int_{0}^{T}(g_{\mathsf{v},\varepsilon}(\mathcal{V }_{t},T-t)-g_{\mathsf{v}}(\mathcal{V}_{t},T-t))\;\mathrm{d}t\right] \xrightarrow{\varepsilon\to 0}0.\]
Next, using the Cauchy-Schwarz inequality, we bound
\[\left|\widehat{\mathbb{E}}\left[\int_{\tau_{\mathrm{sep}}}^{T}g_ {\mathsf{v},\varepsilon}(\mathcal{V}_{t},T-t)\;\mathrm{d}t\right]\right|\] \[\leqslant T\cdot\widehat{\mathbb{E}}\left[\left(\max_{0\leqslant t \leqslant T}|g_{\mathsf{v},\varepsilon}(\mathcal{V}_{t},T-t)|^{2}\right) \right]^{1/2}\cdot\widehat{\mathbb{P}}(\tau_{\mathrm{sep}}\leqslant T)^{1/2}.\]
The expectation on the right-hand side is finite by Lemma 3.7, and it is straightforward to check that \(\widehat{\mathbb{P}}(\tau_{\mathrm{sep}}\leqslant T)\xrightarrow{\varepsilon \to 0}0\). This completes the proof.
### Analysis of higher moments
Throughout this section, we fix \(\lambda\) and \(\mathsf{v}\) with \(\lambda<\bar{\lambda}(\mathsf{v})\) (recalling the definition of \(\bar{\lambda}(\mathsf{v})\) in Definition 2.6).
We now analyse the growth index \(\varphi_{p}\) for \(p\) possibly larger than \(1\). Our main goal is to prove the following.
**Proposition 3.10**.: _If \(\lambda<\bar{\lambda}(\mathsf{v})\), then for every \(p\geqslant 1\) we have_
\[\varphi_{p}<1 \tag{34}\]
_and_
\[\sup_{t\geqslant 0}\mathbb{E}[X_{t}^{p}]<\infty. \tag{35}\]
Note that the case \(p=1\) has already been proved: the fact that \(\varphi<1\) when \(\lambda<\bar{\lambda}(\mathsf{v})\) is given by (17) and Proposition 3.4, and then (35) follows from \(\varphi<1\) and (15).
In order to prove Proposition 3.10, we shall need an intermediate result, which we now state.
**Lemma 3.11**.: _Let \(p\geqslant 2\). There exists \(\mathfrak{C}_{p}>0\) (depending on \(\lambda\), \(\mathsf{v}\) and \(p\)) such that for any \(s\geqslant 12p\log(2d)/\mathsf{v}\),_
\[\mathbb{E}\left[\sum_{A\in P_{\mathrm{f}}(\mathbb{T}^{d})}\Xi_{s}(A)\cdot|A|^{p}\right]\leqslant\mathfrak{C}_{p}\cdot\big(\mathbb{E}[X_{s}^{3/2}]+\varphi^{s}\big).\]
The proof of this lemma is quite involved, and we postpone it to the next section. We now show how to obtain Proposition 3.10 from it.
Proof of Proposition 3.10.: Fix \(\lambda<\bar{\lambda}(\mathsf{v})\). Note that (35) follows readily from (34) and (13), so we only need to prove (34).
By (14), it suffices to prove that \(\varphi_{p}<1\) for all \(p\in\mathbb{N}\), and the case \(p=1\) is already done. We proceed by induction, fixing \(p\in\{2,3,\ldots\}\) and assuming that \(\varphi_{q}<1\) for all \(q\in\{1,\ldots,p-1\}\).
We first claim that there exist \(C,c>0\) such that, for all \(t\geqslant 0\) and \(\xi\in\mathcal{S}\),
\[\mathbb{E}[X_{t}^{p}\mid\Xi_{0}=\xi]\leqslant\left(\sum_{A\in P_{\mathrm{f}}(\mathbb{T}^{d})}\xi(A)\cdot|A|^{p}\right)\cdot\mathbb{E}[X_{t}^{p}]+X(\xi)^{p}\cdot C\mathrm{e}^{-ct}. \tag{36}\]
This bound is a refinement of Proposition 2.7. While in the proof of that proposition we used Minkowski's inequality, here we expand the \(p\)-th power of a sum in full and bound the various terms that appear.
To prove (36), let \(t\geqslant 0\) and \(\xi\in\mathcal{S}\) with enumeration \(\xi=\sum_{i=1}^{m}\delta_{A_{i}}\). By Lemma 2.5, \((\Xi_{t})_{t\geqslant 0}\) started from \(\Xi_{0}=\xi\) has the same distribution as \((\Xi_{t}^{(1)}+\cdots+\Xi_{t}^{(m)})_{t\geqslant 0}\), where \((\Xi_{t}^{(1)})_{t\geqslant 0}\), \(\ldots\), \((\Xi_{t}^{(m)})_{t\geqslant 0}\) are independent herds processes, with \(\Xi_{0}^{(i)}=\delta_{A_{i}}\) for each \(i\). In particular,
\[\mathbb{E}[X_{t}^{p}\mid\Xi_{0}=\xi] =\mathbb{E}\left[\left(\sum_{i=1}^{m}X(\Xi_{t}^{(i)})\right)^{p}\right]\] \[=\sum_{\begin{subarray}{c}(a_{1},\ldots,a_{m}):\\ a_{1}+\cdots+a_{m}=p\end{subarray}}\frac{p!}{a_{1}!\cdots a_{m}!}\cdot\prod_{ \begin{subarray}{c}i\in\{1,\ldots,m\}:\\ a_{i}>0\end{subarray}}\mathbb{E}[X(\Xi_{t}^{(i)})^{a_{i}}].\]
By the fact that \(\Xi_{0}^{(i)}=\delta_{A_{i}}\) and Proposition 2.7, the right-hand side is smaller than
\[\sum_{\begin{subarray}{c}(a_{1},\ldots,a_{m}):\\ a_{1}+\cdots+a_{m}=p\end{subarray}}\frac{p!}{a_{1}!\cdots a_{m}!}\cdot\prod_{ \begin{subarray}{c}i\in\{1,\ldots,m\}:\\ a_{i}>0\end{subarray}}|A_{i}|^{a_{i}}\cdot\mathbb{E}[X_{t}^{a_{i}}]. \tag{37}\]
We break this sum as
\[\mathbb{E}[X_{t}^{p}]\cdot\sum_{i=1}^{m}|A_{i}|^{p}+\sum_{ \begin{subarray}{c}(a_{1},\ldots,a_{m}):\\ a_{1}+\cdots+a_{m}=p,\\ a_{1},\ldots,a_{m}<p\end{subarray}}\frac{p!}{a_{1}!\cdots a_{m}!}\cdot\prod_{ \begin{subarray}{c}i\in\{1,\ldots,m\}:\\ a_{i}>0\end{subarray}}|A_{i}|^{a_{i}}\cdot\mathbb{E}[X_{t}^{a_{i}}]. \tag{38}\]
Note that the first sum can be written as
\[\mathbb{E}[X_{t}^{p}]\cdot\sum_{A\in P_{\mathrm{f}}(\mathbb{T}^{d})}\xi(A)\cdot|A|^{p}.\]
Next, the induction hypothesis that \(\varphi_{a}<1\) for all \(a\in\{1,\ldots,p-1\}\) together with (13) imply that there exist \(C,c>0\) (not depending on \(t\)) such
that \(\mathbb{E}[X_{t}^{a}]\leqslant Ce^{-ct}\) for any \(a\in\{1,\ldots,p-1\}\). Hence, the second sum in (38) is smaller than
\[\sum_{\begin{subarray}{c}a_{1},\ldots,a_{m}\in\mathbb{N}_{0}:\\ a_{1}+\cdots+a_{m}=p,\\ a_{1},\ldots,a_{m}<p\end{subarray}}\frac{p!}{a_{1}!\cdots a_{m}!}\cdot\prod_{ \begin{subarray}{c}i\in\{1,\ldots,m\}:\\ a_{i}>0\end{subarray}}|A_{i}|^{a_{i}}\cdot C\mathrm{e}^{-ct}.\]
By forgetting the last condition in the summation and increasing the value of \(C\) if necessary, this is smaller than
\[C\mathrm{e}^{-ct}\cdot\sum_{\begin{subarray}{c}a_{1},\ldots,a_{m}\in\mathbb{ N}_{0}:\\ a_{1}+\cdots+a_{m}=p\end{subarray}}\frac{p!}{a_{1}!\cdots a_{m}!}\cdot\prod_{ \begin{subarray}{c}i\in\{1,\ldots,m\}:\\ a_{i}>0\end{subarray}}|A_{i}|^{a_{i}}=C\mathrm{e}^{-ct}\cdot X(\xi)^{p}.\]
We have thus proved (36).
Now, (36) and the Markov property imply that, for any \(s,t\geqslant 0\),
\[\mathbb{E}[X_{t+s}^{p}]\leqslant\mathbb{E}\Big{[}\sum_{A}\Xi_{s}(A)\cdot|A|^{ p}\Big{]}\cdot\mathbb{E}[X_{t}^{p}]+\mathbb{E}[X_{s}^{p}]\cdot C\mathrm{e}^{-ct}. \tag{39}\]
Using the bound of Lemma 3.11, increasing the constant \(C\) if necessary, for any \(s,t\) large enough we then have
\[\mathbb{E}[X_{t+s}^{p}]\leqslant C\left\{\mathbb{E}[X_{s}^{3/2}]\cdot\mathbb{ E}[X_{t}^{p}]+\varphi^{s}\cdot\mathbb{E}[X_{t}^{p}]+\mathbb{E}[X_{s}^{p}]\cdot \mathrm{e}^{-ct}\right\}. \tag{40}\]
We also bound
\[\mathbb{E}[X_{s}^{3/2}] =\mathbb{E}[X_{s}^{3/2}\cdot\mathds{1}\{X_{s}>0\}]\] \[\leqslant\mathbb{E}[(X_{s}^{3/2})^{4/3}]^{3/4}\cdot\mathbb{P}(X_ {s}>0)^{1/4}\] \[\leqslant\mathbb{E}[X_{s}^{2}]^{3/4}\cdot\mathbb{P}(X_{s}\neq 0)^{1/ 4}\leqslant\mathbb{E}[X_{s}^{2}]^{3/4}\cdot\mathbb{E}[X_{s}]^{1/4}\leqslant C \mathbb{E}[X_{s}^{p}]^{3/4}\cdot\varphi^{s/4},\]
where the first inequality is Hölder's, the second step holds with equality, the third inequality follows from \(\mathds{1}\{X_{s}\neq 0\}\leqslant X_{s}\), and the last one from \(X_{s}^{2}\leqslant X_{s}^{p}\) (recall that \(p\geqslant 2\)) and (15). We use this bound in (40), together with the fact that \(\varphi<1\), to obtain
\[\mathbb{E}[X_{t+s}^{p}]\leqslant C\left\{\mathbb{E}[X_{s}^{p}]^{3/4}\cdot \mathbb{E}[X_{t}^{p}]\cdot\mathrm{e}^{-cs}+\mathbb{E}[X_{t}^{p}]\cdot\mathrm{ e}^{-cs}+\mathbb{E}[X_{s}^{p}]\cdot\mathrm{e}^{-ct}\right\} \tag{41}\]
for suitable choices of \(c,C\).
Now, assume for a contradiction that \(\varphi_{p}\geqslant 1\). In that case, by (11) we have \(\mathbb{E}[X_{r}^{p}]\geqslant 1\) for all \(r\geqslant 0\), and from (41) we obtain, for large enough \(s,t\) with \(s\leqslant t\):
\[\mathbb{E}[X_{t+s}^{p}]\leqslant C\mathbb{E}[X_{s}^{p}]\cdot\mathbb{E}[X_{t}^ {p}]\cdot\mathrm{e}^{-cs}.\]
Using this recursively, we have that for all sufficiently large \(s\) and all \(n\in\mathbb{N}\),
\[\mathbb{E}[X_{ns}^{p}]\leqslant C^{n}\cdot\mathbb{E}[X_{s}^{p}]^{n}\cdot\mathrm{e }^{-cns}.\]
Taking both sides to the power \(\frac{1}{ns}\) and letting \(n\to\infty\) (using (13)) gives
\[\varphi_{p}\leqslant C^{1/s}\cdot\mathbb{E}[X_{s}^{p}]^{1/s}\cdot\mathrm{e}^{- c}.\]
Now letting \(s\to\infty\) and again using (13) yields \(\varphi_{p}\leqslant\varphi_{p}\cdot\mathrm{e}^{-c}\), a contradiction.
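Let us also record a convenient reformulation: if \(\lambda<\bar{\lambda}(\mathsf{v})\), then combining (34), (35) and (13), for every \(p\geqslant 1\) there exist constants \(C_{p},c_{p}>0\) (depending on \(\lambda\), \(\mathsf{v}\) and \(p\)) such that
\[\mathbb{E}[X_{t}^{p}]\leqslant C_{p}\,\mathrm{e}^{-c_{p}t}\qquad\text{for all }t\geqslant 0,\]
which is the form in which the exponential decay of moments was used in the induction above.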
Before we turn to the proof of Lemma 3.11, we want to give a consequence of Proposition 2.7, namely Corollary 3.13 below, which will be useful in dealing with the contact process on dynamic graphs. For \(t\geqslant 0\), we denote by \(N_{t}\) the number of birth events in the herds process up to time \(t\), as in Lemma 2.1. Also let \(N_{\infty}:=\lim_{t\to\infty}N_{t}\).
**Lemma 3.12**.: _Let \(p\geqslant 1\). For any \(t\in[0,\infty]\) and \(\xi\in\mathcal{S}\), we have_
\[\mathbb{E}[N_{t}^{p}\mid\Xi_{0}=\xi]\leqslant X(\xi)^{p}\cdot\mathbb{E}[N_{t} ^{p}].\]
_Consequently, for any \(s\in[0,\infty)\) and \(t\in[0,\infty]\),_
\[\mathbb{E}[(N_{s+t}-N_{s})^{p}]\leqslant\mathbb{E}[X_{s}^{p}]\cdot\mathbb{E}[ N_{t}^{p}]. \tag{42}\]
Proof.: First, recall Remark 2.4, which in particular shows that one can dominate the number of particles in a herds process starting from \(\xi\) by the total number of particles in a multi-type herds process with \(\ell:=X(\xi)\) types, starting from the configuration \(\xi^{\prime}\) where each particle in \(\xi\) represents a distinct type. For this auxiliary process, let \(N_{t}^{(i)}\) denote the number of births of particles of type \(i\) by time \(t\), for \(i\in\{1,\ldots,\ell\}\). Then,
\[\mathbb{E}[N_{t}^{p}\mid\Xi_{0}=\xi]\leqslant\mathbb{E}\left[\left(\sum_{i=1} ^{\ell}N_{t}^{(i)}\right)^{p}\right]\leqslant\left(\sum_{i=1}^{\ell}\mathbb{E }[(N_{t}^{(i)})^{p}]^{1/p}\right)^{p},\]
where we have used Minkowski's inequality. Now, since each type evolves as a usual herds process, we have \(\mathbb{E}[(N_{t}^{(i)})^{p}]=\mathbb{E}[N_{t}^{p}]\) for every \(i\), so the right-hand side above equals \(\ell^{p}\cdot\mathbb{E}[N_{t}^{p}]\), as required.
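The "consequently" part (42) follows, for instance, from the Markov property: conditionally on \(\mathcal{F}_{s}\), the increment \(N_{s+t}-N_{s}\) has the law of \(N_{t}\) for a herds process started from the configuration \(\Xi_{s}\), so the bound just proved gives
\[\mathbb{E}[(N_{s+t}-N_{s})^{p}]=\mathbb{E}\big[\mathbb{E}[N_{t}^{p}\mid\Xi_{0}=\Xi_{s}]\big]\leqslant\mathbb{E}[X(\Xi_{s})^{p}]\cdot\mathbb{E}[N_{t}^{p}]=\mathbb{E}[X_{s}^{p}]\cdot\mathbb{E}[N_{t}^{p}].\]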
**Corollary 3.13**.: _If \(\lambda<\bar{\lambda}(\mathsf{v})\), then \(\mathbb{E}[N_{\infty}^{p}]<\infty\) for all \(p\geqslant 1\)._
Proof.: Fix \(p\geqslant 1\). For any \(x>1\), let \(t_{x}:=\frac{p+1}{|\log\varphi|}\cdot\log x\) (note that \(\varphi<1\) since \(\lambda<\bar{\lambda}(\mathsf{v})\), by (17) and Proposition 3.4). Letting \(\tau\) denote the extinction time of the herds process, we first bound
\[\mathbb{P}(N_{\infty}>x)\leqslant\mathbb{P}(\tau>t_{x})+\mathbb{P}(N_{t_{x}}>x).\]
The first term on the right-hand side is easy to bound:
\[\mathbb{P}(\tau>t_{x})\leqslant\mathbb{E}[X_{t_{x}}]\stackrel{(15)}{\leqslant}C\cdot\varphi^{t_{x}}=C\cdot x^{-(p+1)}.\]
that is,
\[Y_{t}:=\sum_{\begin{subarray}{c}(A,B):\\ A\neq\varnothing,\\ B\neq\varnothing\end{subarray}}\widetilde{\Xi}_{t}(A,B).\]
It will be useful to note that
\[s\leqslant t\quad\Longrightarrow\quad\{Y_{s}=0\}\subseteq\{Y_{t}=0\}. \tag{44}\]
Noting that
\[\left(\sum_{(A,B):A\neq\varnothing}\widetilde{\Xi}_{t_{\alpha}}(A,B)\cdot|B| \right)^{p}=\left(\sum_{(A,B):A\neq\varnothing}\widetilde{\Xi}_{t_{\alpha}}(A,B )\cdot|B|\right)^{p}\cdot\mathds{1}\{Y_{t_{\alpha}}>0\}\]
and using the Cauchy-Schwarz inequality, the left-hand side of (43) is smaller than
\[\mathbb{E}\left[\left(\sum_{(A,B):A\neq\varnothing}\widetilde{\Xi}_{t_{\alpha}} (A,B)\cdot|B|\right)^{2p}\right]^{1/2}\cdot\mathbb{P}(Y_{t_{\alpha}}\geqslant 1 )^{1/2}.\]
Using domination by a pure-birth process, the first term in the product above can be bounded by a finite constant that only depends on \(\lambda\), \(p\) and \(\alpha\). We now show that \(\mathbb{P}(Y_{t_{\alpha}}\geqslant 1)\) is smaller than \(C\alpha^{-2\mathrm{dist}(\mathrm{o},\mathrm{u})}\) for some \(C>0\).
For \(i\in\{1,2\}\) and \(t\geqslant 0\), let \(K_{t}^{(i)}\) denote the set of vertices of \(\mathbb{T}^{d}\) that have been occupied by a type-\(i\) particle in some herd for some time \(s\leqslant t\), that is,
\[K_{t}^{(1)} :=\{v\in\mathbb{T}^{d}:\;\widetilde{\Xi}_{s}(A,B)>0\text{ for some }s \leqslant t\text{ and }(A,B)\text{ with }v\in A\},\] \[K_{t}^{(2)} :=\{v\in\mathbb{T}^{d}:\;\widetilde{\Xi}_{s}(A,B)>0\text{ for some }s \leqslant t\text{ and }(A,B)\text{ with }v\in B\}.\]
We have that \(K_{0}^{(1)}=\{o\}\), \(K_{0}^{(2)}=\{u\}\) and \(t\mapsto K_{t}^{(1)}\) and \(t\mapsto K_{t}^{(2)}\) are both non-decreasing (with respect to set inclusion). Moreover, \(K_{t}^{(1)}\) and \(K_{t}^{(2)}\) are connected subsets of \(\mathbb{T}^{d}\), since they only grow by the inclusion of vertices neighboring vertices that are already present. Also note that as long as these sets stay disjoint, there can be at most one herd containing both types. In other words, letting
\[\sigma:=\inf\{t:K_{t}^{(1)}\cap K_{t}^{(2)}\neq\varnothing\},\]
we have, for any \(t\geqslant 0\),
\[\{\sigma>t\}\subseteq\{Y_{s}\leqslant 1\text{ for all }s\leqslant t\}. \tag{45}\]
Next, let
\[\ell=\operatorname{dist}(o,u)\]
and let \(o=u_{0}\sim u_{1}\sim\ldots\sim u_{\ell}=u\) be the vertices of \(\mathbb{T}^{d}\) in the geodesic from \(o\) to \(u\). Let
\[u^{\prime}:=u_{\lfloor\ell/3\rfloor},\quad\sigma^{(1)}:=\inf\{t\geqslant 0:\ u ^{\prime}\in K_{t}^{(1)}\}\]
and
\[u^{\prime\prime}:=u_{\lfloor 2\ell/3\rfloor},\quad\sigma^{(2)}:=\inf\{t \geqslant 0:\ u^{\prime\prime}\in K_{t}^{(2)}\}.\]
It is easy to see that
\[\min(\sigma^{(1)},\sigma^{(2)})\leqslant\sigma.\]
Putting this together with (45), we obtain
\[\{Y_{s}\geqslant 2\text{ for some }s\leqslant t_{\alpha}\}\subseteq\{\sigma \leqslant t_{\alpha}\}\subseteq\{\min(\sigma^{(1)},\sigma^{(2)})\leqslant t _{\alpha}\}, \tag{46}\]
so we can bound
\[\mathbb{P}(Y_{t_{\alpha}}\geqslant 1)\stackrel{(44)}{=}\mathbb{P}(Y_{s}\geqslant 1\text{ for all }0\leqslant s\leqslant t_{\alpha})\]
\[\stackrel{(46)}{\leqslant}\mathbb{P}(\sigma^{(1)}\leqslant t_{\alpha})+\mathbb{P}(\sigma^{(2)}\leqslant t_{\alpha}) \tag{47}\]
\[\quad+\mathbb{P}(\min(\sigma^{(1)},\sigma^{(2)})>t_{\alpha},\;Y_{s}=1\text{ for }0\leqslant s\leqslant t_{\alpha}). \tag{48}\]
We now bound the three terms on the right-hand side separately.
Let us first consider the probability in (48). For any \(t\), if \(\min(\sigma^{(1)},\sigma^{(2)})>t\) and \(Y_{t}=1\), then a split in any of the edges
\[\{u_{\lfloor\ell/3\rfloor},u_{\lfloor\ell/3\rfloor+1}\},\;\{u_{\lfloor\ell/3 \rfloor+1},u_{\lfloor\ell/3\rfloor+2}\},\;\ldots,\;\{u_{\lfloor 2\ell/3 \rfloor-1},\;u_{\lfloor 2\ell/3\rfloor}\}\]
separates the two types permanently, causing \(Y\) to drop to zero. This observation gives
\[\mathbb{P}(\min(\sigma^{(1)},\sigma^{(2)})>t_{\alpha},\;Y_{s}=1 \text{ for }0\leqslant s\leqslant t_{\alpha}) \leqslant\exp\{-\mathsf{v}\cdot t_{\alpha}\cdot(\lfloor 2\ell/3 \rfloor-\lfloor\ell/3\rfloor)\}\] \[\leqslant\alpha^{-2\ell},\]
where the second inequality follows from the definition of \(t_{\alpha}\).
We now turn to the two probabilities in (47). We only bound the first one; by a symmetry argument, the same bound will then apply to the second. Let \((W_{t})_{t\geqslant 0}\) be a growth process on \((\mathbb{N}_{0})^{\mathbb{T}^{d}}\) defined as follows. We let \(W_{0}(o)=1\) and \(W_{0}(v)=0\) for \(v\neq o\). We interpret \(W_{t}(v)=m\) as saying that there are \(m\) particles at \(v\) at time \(t\). Then, we define the dynamics by prescribing that independently, for any \(v\sim w\), a particle at \(v\) gives birth to a particle at \(w\) with rate \(\lambda\) (and particles never die). In particular, \((\sum_{v}W_{t}(v))_{t\geqslant 0}\) is a pure-birth process in which each particle gives birth at rate \(d\lambda\). It is easy to see that the set-valued process \((\{v\in\mathbb{T}^{d}:\ W_{t}(v)>0\})_{t\geqslant 0}\) stochastically dominates \((K_{t}^{(1)})_{t\geqslant 0}\), and in particular,
\[\mathbb{P}(\sigma^{(1)}\leqslant t_{\alpha})\leqslant\mathbb{P}(W_{t_{\alpha} }(u^{\prime})>0)\leqslant\mathbb{E}[W_{t_{\alpha}}(u^{\prime})]. \tag{49}\]
We now claim that, for any \(t\geqslant 0\) and \(v\in\mathbb{T}^{d}\),
\[\mathbb{E}[W_{t}(v)]=e^{d\lambda t}\cdot p_{t}(v), \tag{50}\]
where \(p_{t}(v):=\mathbb{P}(\mathcal{Z}_{t}=v)\), with \((\mathcal{Z}_{t})_{t\geqslant 0}\) the continuous-time random walk on \(\mathbb{T}^{d}\) which starts at the root at time zero, and jumps from any vertex to any neighboring vertex with rate \(\lambda\). It is simple to verify (50) using the observations that \((t,v)\mapsto\mathbb{E}[W_{t}(v)]\) is the solution to
\[\left\{\begin{array}{l}\frac{\mathrm{d}}{\mathrm{d}t}g(t,v)=\lambda\sum_{w \sim v}g(t,w),\;t\geqslant 0,\;v\in\mathbb{T}^{d},\\ g(0,\cdot)=\delta_{o}(\cdot),\end{array}\right.\]
and that the right-hand side of (50) solves this equation, by direct computation.
Putting together (49) and (50), we have
\[\mathbb{P}(\sigma^{(1)}\leqslant t_{\alpha})\leqslant e^{d\lambda t_{\alpha}}\cdot p_{t_{\alpha}}(u^{\prime})\leqslant e^{d\lambda t_{\alpha}}\cdot\mathbb{P}(\mathrm{Poi}(d\lambda t_{\alpha})\geqslant\lfloor\ell/3\rfloor),\]
where \(\mathrm{Poi}(d\lambda t_{\alpha})\) represents a random variable with the Poisson distribution with parameter \(d\lambda t_{\alpha}\), which is the law of the number of jumps of \((\mathcal{Z}_{t})\) until time \(t_{\alpha}\). Since the tail of the Poisson distribution is lighter than exponential, the right-hand side above is smaller than \(C\alpha^{-2\ell}\) for some \(C>0\), uniformly in \(\ell\). This concludes the proof.
**Lemma 3.15**.: _For any \(p\geqslant 1\), there exists \(C^{\prime}_{p}>0\) (depending on \(\lambda\), \(\mathsf{v}\) and \(p\)) such that the following holds. Fix a finite set \(B_{0}\subseteq\mathbb{T}^{d}\backslash\{o\}\) and let \((\widetilde{\Xi}_{t})_{t\geqslant 0}\) be a two-type herds process with rates \(\lambda\), \(\mathsf{v}\) started from \(\widetilde{\Xi}_{0}=\delta_{\{o\},B_{0}}\). Also let \(T_{p}:=12p\log(2d)/\mathsf{v}\). Then, (uniformly over the choice of \(B_{0}\)),_
\[\mathbb{E}\left[\left(\sum_{(A,B):A\neq\varnothing}\widetilde{\Xi}_{T_{p}}(A,B )\cdot(|A|+|B|)\right)^{p}\,\right]\leqslant C^{\prime}_{p}.\]
Proof.: Let
\[R:=\sum_{(A,B):A\neq\varnothing}\widetilde{\Xi}_{T_{p}}(A,B)\cdot(|A|+|B|);\]
note that \(R\) is the total number of particles in \(\widetilde{\Xi}_{T_{p}}\) that belong to herds that contain type-1 particles.
We enumerate \(\{o\}\cup B_{0}=\{u_{1},\ldots,u_{m}\}\), with \(u_{1}=o\) and \(m=|B_{0}|+1\). We now define a multi-type herds process, as described in Remark 2.4. This new process, denoted \((\widehat{\Xi}_{t})_{t\geqslant 0}\) (to distinguish it from the two-type process \((\widetilde{\Xi}_{t})_{t\geqslant 0}\) above), is taken in the same probability space that we have been considering, has rates \(\lambda\), \(\mathsf{v}\), and \(m\) types. It starts with a single herd, with a type-1 particle at \(u_{1}=o\), a type-2 particle at \(u_{2}\), \(\ldots\), and a type-\(m\) particle at \(u_{m}\). In analogy with \(R\), we let \(R^{\prime}\) denote the total number of particles in \(\widehat{\Xi}_{T_{p}}\) that belong to herds that contain type-1 particles. That is, if we use an \(m\)-tuple \((A_{1},\ldots,A_{m})\) to represent a multi-type herd shape, then
\[R^{\prime}:=\sum_{(A_{1},\ldots,A_{m}):A_{1}\neq\varnothing}\widehat{\Xi}_{T_{p}}(A_{1},\ldots,A_{m})\cdot(|A_{1}|+\cdots+|A_{m}|).\]
With similar reasoning as in the proof of Lemma 2.9, we see that \(R\) is stochastically dominated by \(R^{\prime}\); in particular,
\[\mathbb{E}[R^{p}]^{1/p}\leqslant\mathbb{E}[(R^{\prime})^{p}]^{1/p}\leqslant \sum_{j=1}^{m}\mathbb{E}[(R^{\prime}_{j})^{p}]^{1/p},\]
where
\[R^{\prime}_{j}:=\sum_{(A_{1},\ldots,A_{m}):A_{1}\neq\varnothing}\widehat{\Xi}_{T_{p}}(A_{1},\ldots,A_{m})\cdot|A_{j}|.\]
Note that \(R^{\prime}_{1}\) is just the total number of type-1 particles in \(\widehat{\Xi}_{T_{p}}\). If we ignore all types except for type 1 in \((\widehat{\Xi}_{t})\), we obtain a (one-type) herds process started from \(\delta_{\{o\}}\); hence, \(\mathbb{E}[(R_{1}^{\prime})^{p}]=\mathbb{E}[X_{T_{p}}^{p}]<\infty\) by Corollary 2.3. Next, for \(j\neq 1\), if we ignore all types except for types \(1\) and \(j\) in \((\widehat{\Xi}_{t})\), we see a two-type herds process started from \(\delta_{\{o\},\{u_{j}\}}\), and Lemma 3.14 (with \(\alpha=(2d)^{p}\)) and the definition of \(T_{p}\) imply that \(\mathbb{E}[(R_{j}^{\prime})^{p}]\leqslant C_{p}(2d)^{-p\operatorname{dist}(o,u_{j})}\). We then have
\[\sum_{j=1}^{m}\mathbb{E}[(R_{j}^{\prime})^{p}]^{1/p}\leqslant\mathbb{E}[X_{T_{p}}^{p}]^{1/p}+C_{p}^{1/p}\sum_{j=2}^{m}(2d)^{-\operatorname{dist}(o,u_{j})}.\]
The second sum on the right-hand side is smaller than
\[\sum_{u\in\mathbb{T}^{d}\backslash\{o\}}(2d)^{-\operatorname{dist}(o,u)}\leqslant\sum_{i=1} ^{\infty}d^{i}\cdot(2d)^{-i}=1.\]
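(Here we use that, for each \(i\geqslant 1\), the number of vertices of \(\mathbb{T}^{d}\) at distance \(i\) from \(o\) is \(d(d-1)^{i-1}\leqslant d^{i}\), so that the remaining series is geometric: \(\sum_{i\geqslant 1}d^{i}(2d)^{-i}=\sum_{i\geqslant 1}2^{-i}=1\).)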
We have thus proved that \(\mathbb{E}[R^{p}]\leqslant(\mathbb{E}[X_{T_{p}}^{p}]^{1/p}+C_{p}^{1/p})^{p}\), so the proof is complete.
**Lemma 3.16**.: _Let \(p\geqslant 1\) and let \(C_{p}^{\prime\prime}:=\max(C_{p}^{\prime},\ \mathbb{E}[X_{T_{p}}^{p}])\), where \(C_{p}^{\prime}\) and \(T_{p}\) are as in Lemma 3.15. Then, for any \(\xi\in\mathcal{S}\), the herds process \((\Xi_{t})_{t\geqslant 0}\) with rates \(\lambda\), \(\mathsf{v}\) satisfies_
\[\mathbb{P}\left(\exists A:\ |A|\geqslant x,\ \Xi_{T_{p}}(A)>0\mid\Xi_{0}=\xi \right)\leqslant C_{p}^{\prime\prime}\cdot\frac{X(\xi)}{x^{p}},\quad x>0.\]
Proof.: By using Lemma 2.5 and a union bound, it suffices to prove that for any \(A_{0}\in P_{\mathfrak{f}}(\mathbb{T}^{d})\),
\[\mathbb{P}\left(\exists A:\ |A|\geqslant x,\ \Xi_{T_{p}}(A)>0\mid\Xi_{0}= \delta_{A_{0}}\right)\leqslant C_{p}^{\prime\prime}\cdot\frac{|A_{0}|}{x^{p}}, \quad x>0. \tag{51}\]
We will prove this by induction on \(|A_{0}|\). For the case where \(|A_{0}|=1\), we bound the left-hand side above by
\[\mathbb{P}(X_{T_{p}}\geqslant x\mid\Xi_{0}=\delta_{\{o\}})\leqslant\frac{\mathbb{ E}[X_{T_{p}}^{p}]}{x^{p}},\]
by Markov's inequality.
Now assume that \(|A_{0}|\geqslant 2\) and let \(u\in A_{0}\). Let \(B_{0}:=A_{0}\backslash\{u\}\). We consider a two-type herds process \((\widetilde{\Xi}_{t})_{t\geqslant 0}\) with rates \(\lambda\), \(\mathsf{v}\) started from \(\delta_{\{u\},B_{0}}\). Using Lemma 2.9(a), the left-hand side of (51) is smaller than
\[\mathbb{P}\left(\exists(A,B):\ |A|+|B|\geqslant x,\ \widetilde{\Xi}_{T_{p}}(A,B )>0\right).\]
In turn, this is smaller than
\[\mathbb{P}\left(\exists(A,B):\ A\neq\varnothing,\ |A|+|B|\geqslant x,\ \widetilde{\Xi}_{T_{p}}(A,B)>0\right) \tag{52}\] \[+\mathbb{P}\left(\exists(A,B):\ |B|\geqslant x,\ \widetilde{\Xi}_{T_{p}}(A,B)>0\right). \tag{53}\]
The probability in (52) is smaller than
\[\mathbb{P}\left(\sum_{(A,B):A\neq\varnothing}\widetilde{\Xi}_{T_{p}}(A,B)\cdot(|A|+|B|) \geqslant x\right)\leqslant\frac{C^{\prime}_{p}}{x^{p}},\]
by Markov's inequality and Lemma 3.15. By Lemma 2.9(b), the probability in (53) can be expressed using a one-type herds process; it equals
\[\mathbb{P}\left(\exists B:\ |B|\geqslant x,\ \Xi_{T_{p}}(B)>0\ |\ \Xi_{0}=\delta_{B_{0}} \right),\]
which is smaller than \(C^{\prime\prime}_{p}|B_{0}|/x^{p}\) by the induction hypothesis (since \(|B_{0}|=|A_{0}|-1\)). We have then proved that
\[\mathbb{P}\left(\exists A:\ |A|\geqslant x,\ \Xi_{T_{p}}(A)>0\ |\ \Xi_{0}=\delta_{A_{0}}\right) \leqslant\frac{C^{\prime}_{p}}{x^{p}}+\frac{C^{\prime\prime}_{p}(|A_{0}|-1)}{ x^{p}}\leqslant\frac{C^{\prime\prime}_{p}|A_{0}|}{x^{p}}.\]
Proof of Lemma 3.11.: Fix \(s\geqslant T_{p}=12p\log(2d)/\mathsf{v}\). Let \(E_{1}\) be the event that there is some herd at time \(s\) whose number of particles is larger than \(X^{\frac{1}{2p}}_{s-T_{p}}\), that is,
\[E_{1}:=\left\{\mbox{ there exists }A\in P_{\mathsf{f}}(\mathbb{T}^{d})\mbox{ with }|A|\geqslant X^{\frac{1}{2p}}_{s-T_{p}}\mbox{ and }\Xi_{s}(A)>0\right\}.\]
On \(E_{1}^{c}\), we bound
\[\sum_{A\in P_{\mathsf{f}}(\mathbb{T}^{d})}\Xi_{s}(A)\cdot|A|^{p}\leqslant X^{ 1/2}_{s-T_{p}}\cdot\sum_{A\in P_{\mathsf{f}}(\mathbb{T}^{d})}\Xi_{s}(A) \leqslant X^{1/2}_{s-T_{p}}\cdot X_{s}.\]
On \(E_{1}\), we bound
\[\sum_{A\in P_{\mathsf{f}}(\mathbb{T}^{d})}\Xi_{s}(A)\cdot|A|^{p}\leqslant X^{ p}_{s}.\]
These bounds give
\[\mathbb{E}\left[\sum_{A\in P_{\mathsf{f}}(\mathbb{T}^{d})}\Xi_{s}(A)\cdot|A|^{ p}\right]\leqslant\mathbb{E}[X_{s}\cdot X^{1/2}_{s-T_{p}}]+\mathbb{E}[X^{p}_{s} \cdot\mathds{1}_{E_{1}}]. \tag{54}\]
We treat the two expectations on the right-hand side separately. For the first one, we start by bounding
\[\mathbb{E}[X_{s}\cdot X_{s-T_{p}}^{1/2}]=\mathbb{E}[\mathbb{E}[X_{s}\mid\mathcal{ F}_{s-T_{p}}]\cdot X_{s-T_{p}}^{1/2}]\leqslant\mathbb{E}[X_{s-T_{p}}^{3/2}] \cdot\mathbb{E}[X_{T_{p}}], \tag{55}\]
where the inequality follows from (6) and the Markov property. We now claim that there exists \(C>0\) (not depending on \(s\)) such that
\[\mathbb{E}[X_{s-T_{p}}^{3/2}]\leqslant C\mathbb{E}[X_{s}^{3/2}]. \tag{56}\]
To see this, first note that each particle that is alive at time \(s-T_{p}\) will stay alive until time \(s\) with probability \(q:=\mathrm{e}^{-T_{p}}\). Then, defining the event
\[E_{2}:=\{X_{s}\geqslant\tfrac{q}{2}\cdot X_{s-T_{p}}\},\]
we have, for any \(m\in\mathbb{N}\),
\[\mathbb{P}(E_{2}\mid\mathcal{F}_{s-T_{p}})\cdot\mathds{1}\{X_{s-T_{p}}=m\} \geqslant\mathbb{P}(\mathrm{Bin}(m,q)\geqslant\tfrac{q}{2}m)\]
(and in case \(m=0\), the left-hand side is trivially equal to \(1\)). Hence,
\[\mathbb{P}(E_{2}\mid\mathcal{F}_{s-T_{p}})\geqslant\beta:=\inf_{m\geqslant 1 }\mathbb{P}(\mathrm{Bin}(m,q)\geqslant\tfrac{q}{2}m)>0.\]
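To see that \(\beta>0\), one may argue as follows (a standard estimate, included here for completeness): for each fixed \(m\geqslant 1\) we have \(\mathbb{P}(\mathrm{Bin}(m,q)\geqslant\tfrac{q}{2}m)\geqslant\mathbb{P}(\mathrm{Bin}(m,q)=m)=q^{m}>0\), while by Chebyshev's inequality,

\[\mathbb{P}\left(\mathrm{Bin}(m,q)<\tfrac{q}{2}m\right)\leqslant\frac{mq(1-q)}{(qm/2)^{2}}=\frac{4(1-q)}{qm}\xrightarrow{m\to\infty}0,\]

so the probabilities \(\mathbb{P}(\mathrm{Bin}(m,q)\geqslant\tfrac{q}{2}m)\) are bounded away from \(0\) uniformly in \(m\).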
Then,
\[\mathbb{E}[X_{s}^{3/2}]\geqslant\mathbb{E}[X_{s}^{3/2}\cdot \mathds{1}_{E_{2}}] \geqslant\left(\tfrac{q}{2}\right)^{3/2}\cdot\mathbb{E}[X_{s-T_{p}}^{3/2}\cdot \mathds{1}_{E_{2}}]\] \[=\left(\tfrac{q}{2}\right)^{3/2}\cdot\mathbb{E}[X_{s-T_{p}}^{3/2}\cdot\mathbb{P}(E_ {2}\mid\mathcal{F}_{s-T_{p}})]\geqslant\left(\tfrac{q}{2}\right)^{3/2}\cdot\beta\cdot\mathbb{E}[ X_{s-T_{p}}^{3/2}],\]
proving (56) with \(C:=\beta^{-1}\left(\tfrac{2}{q}\right)^{3/2}\). With (55) and (56), and putting together all the terms that do not depend on \(s\) in a sufficiently large constant \(C\), we have proved that
\[\mathbb{E}[X_{s}\cdot X_{s-T_{p}}^{1/2}]\leqslant C\mathbb{E}[X_{s}^{3/2}].\]
We now turn to the second term on the right-hand side of (54). We first bound the conditional expectation given \(\mathcal{F}_{s-T_{p}}\) with Cauchy-Schwarz:
\[\mathbb{E}[X_{s}^{p}\cdot\mathds{1}_{E_{1}}\mid\mathcal{F}_{s-T_{p}}] \leqslant\mathbb{E}[X_{s}^{2p}\mid\mathcal{F}_{s-T_{p}}]^{1/2}\cdot\mathbb{P} (E_{1}\mid\mathcal{F}_{s-T_{p}})^{1/2}.\]
Using (6) and the Markov property,
\[\mathbb{E}[X_{s}^{2p}\mid\mathcal{F}_{s-T_{p}}]\leqslant X_{s-T_{p}}^{2p} \cdot\mathbb{E}[X_{T_{p}}^{2p}].\]
Using Lemma 3.16 (and again the Markov property), we have
\[\mathbb{P}(E_{1}\mid\mathcal{F}_{s-T_{p}})\leqslant C^{\prime\prime}_{4p^{2}} \cdot\frac{X_{s-T_{p}}}{(X_{s-T_{p}}^{1/(2p)})^{4p^{2}}}=C^{\prime\prime}_{4p^{2 }}\cdot X_{s-T_{p}}^{1-2p}.\]
Then,
\[\mathbb{E}[X_{s}^{p}\cdot\mathds{1}_{E_{1}}\mid\mathcal{F}_{s-T_{p}}] \leqslant C^{\prime\prime}_{4p^{2}}\cdot\mathbb{E}[X_{T_{p}}^{2p}]\cdot X_{s- T_{p}},\]
so
\[\mathbb{E}[X_{s}^{p}\cdot\mathds{1}_{E_{1}}]\leqslant C^{\prime\prime}_{4p^{2 }}\cdot\mathbb{E}[X_{T_{p}}^{2p}]\cdot\mathbb{E}[X_{s-T_{p}}]\leqslant C^{ \prime\prime}_{4p^{2}}\cdot\mathbb{E}[X_{T_{p}}^{2p}]\cdot C\varphi^{s-T_{p}},\]
where the second inequality follows from (15). Putting together all the constants that do not depend on \(s\), this gives
\[\mathbb{E}[X_{s}^{p}\cdot\mathds{1}_{E_{1}}]\leqslant C\varphi^{s}.\]
## 4 Extinction time on a random \(d\)-regular graph with switching
The goal of this section is to prove Theorem 1.2. As mentioned in the Introduction, the first part of the theorem was already proved in [1], so we only prove the second part here.
This section is organized as follows. In Section 4.1, we give a detailed construction of the dynamic random graph and the contact process which co-evolves on this graph. After doing so, we argue that a version of the usual self-duality relation of the contact process is satisfied here. Due to this relation, in proving quick extinction, it suffices to study the process started from a single infection. We start this study in Section 4.2, where we introduce an exploration process, which reveals the contact process but only reveals partial information about the graph. In Section 4.3, we prove a general Markov chain lemma which allows us to couple this exploration process with (a projection of) the herds process. Finally, in Section 4.4, we take advantage of this coupling to give the proof of Theorem 1.2.
### Preliminaries: dynamic graph, joint evolution, duality
Let us define the class of graphs in which our dynamic graph process takes values. Fix \(n\in\mathbb{N}\) and let \(V_{n}:=[n]=\{1,\ldots,n\}\). The set \(\{(u,a):\ u\in V_{n},\ a\in\{1,\ldots,d\}\}\) is called the _set of half-edges_. Given a perfect matching of the set of half-edges (that is, a bijection \(\sigma\) from this set to itself with no fixed points and equal to its own inverse), we can obtain a (multi-)graph by setting \(V_{n}\) as the set of vertices and prescribing that each pair \(\{(u,a),(u^{\prime},a^{\prime})\}\) with \((u^{\prime},a^{\prime})=\sigma(u,a)\) corresponds to an edge between \(u\) and \(u^{\prime}\). Let \(\mathcal{G}_{n}\) denote the set of all multi-graphs that can be obtained in this way. Deterministic elements of \(\mathcal{G}_{n}\) will typically be denoted by \(\mathsf{g}\), whereas random elements of \(\mathcal{G}_{n}\) will be denoted by \(G\) or \(G_{t}\) (in the case of a process).
Fix \(\mathsf{g}\in\mathcal{G}_{n}\). A _switch code_ for \(\mathsf{g}\) is a triple \(\mathsf{m}=(e_{1},e_{2},\eta)\), where \(e_{1},e_{2}\) are distinct edges of \(\mathsf{g}\) and \(\eta\in\{-,+\}\). Fix such a switch code, with \(e_{1}=\{(u,a),(u^{\prime},a^{\prime})\}\) and \(e_{2}=\{(v,b),(v^{\prime},b^{\prime})\}\) so that \((u,a)<(u^{\prime},a^{\prime})\) and \((v,b)<(v^{\prime},b^{\prime})\) in lexicographic order. In case \(\eta=+\), we let \(\Gamma^{\mathsf{m}}(\mathsf{g})\) be the graph obtained from \(\mathsf{g}\) by replacing \(e_{1}\) and \(e_{2}\) by the edges \(\{(u,a),(v,b)\}\) and \(\{(u^{\prime},a^{\prime}),(v^{\prime},b^{\prime})\}\) (and keeping all other edges intact). In case \(\eta=-\), we instead replace \(e_{1},e_{2}\) by \(\{(u,a),(v^{\prime},b^{\prime})\}\) and \(\{(u^{\prime},a^{\prime}),(v,b)\}\).
The random graph process \((G_{t})_{t\geqslant 0}\) is the continuous-time Markov chain on \(\mathcal{G}_{n}\) which jumps from \(\mathsf{g}\) to \(\mathsf{g}^{\prime}\) with rate \(\mathsf{v}_{n}:=\frac{\mathsf{v}}{nd}\) if \(\mathsf{g}^{\prime}=\Gamma^{\mathsf{m}}(\mathsf{g})\) for some switch code \(\mathsf{m}\) of \(\mathsf{g}\) (and rate \(0\) otherwise). This chain is reversible with respect to the uniform measure on \(\mathcal{G}_{n}\). We will always start the graph dynamics from this distribution.
We now fix \(\lambda>0\) and define the contact process \((\xi_{t})_{t\geqslant 0}\) with infection rate \(\lambda\) on the dynamic graph \((G_{t})\). Although we could do so by describing the jump rates of the joint Markov chain \((G_{t},\xi_{t})_{t\geqslant 0}\), we will instead use a Poisson graphical construction. We take a probability space with probability measure \(\mathbb{P}\) in which the process \((G_{t})\) is defined, and also (independently of \((G_{t})\)), the following Poisson point processes (all independent) are defined:
* for each vertex \(u\), a Poisson point process \(R^{u}\) on \([0,\infty)\) with intensity \(1\) (recovery times);
* for each half-edge \((u,a)\), a Poisson point process \(R^{(u,a)}\) with intensity \(\lambda\) (transmission times).
Naturally, when \(t\in R^{u}\), vertex \(u\) goes to state \(0\) (if not already there) at time \(t\). Moreover, when \(t\in R^{(u,a)}\), there is a transmission from \(u\) to the vertex \(v\) that owns the half-edge to which \((u,a)\) is matched in \(G_{t}\) (so that, if \(u\) was in state \(1\) just before \(t\), then \(v\) goes to state \(1\), if not already there, at \(t\)). For each \(A\subseteq V_{n}\), we let \((\xi^{A}_{t})_{t\geqslant 0}\) be the contact process on \((G_{t})\) with \(\xi^{A}_{0}=\mathds{1}_{A}\) and obtained from this graphical construction (as usual, the graphical construction gives us contact processes started from all possible initial configurations, all coupled in a single probability space and respecting the monotonicity of set inclusion).
The usual duality relation
\[\mathbb{P}(\xi^{A}_{t}\cap B\neq\varnothing)=\mathbb{P}(\xi^{B}_{t}\cap A\neq \varnothing)\quad\text{for all }t\geqslant 0,\ A,B\subseteq V_{n} \tag{57}\]
holds in this context, but it is important to note that the above probabilities are annealed in the graph environment. Let us briefly prove (57). Fix \(t\geqslant 0\) and \(A,B\subseteq V_{n}\). Letting \((\mathbf{g}_{s})_{0\leqslant s\leqslant t}\) be a possible realization of the trajectory of \((G_{s})_{0\leqslant s\leqslant t}\), we have
\[\mathbb{P}(\xi^{A}_{t}\cap B\neq\varnothing\ |\ (G_{s})_{0 \leqslant s\leqslant t}=(\mathbf{g}_{s})_{0\leqslant s\leqslant t})\] \[\quad=\mathbb{P}(\xi^{B}_{t}\cap A\neq\varnothing\ |\ (G_{s})_{0 \leqslant s\leqslant t}=(\mathbf{g}_{t-s})_{0\leqslant s\leqslant t}).\]
This is verified using a standard argument involving infection paths and time-reversibility of Poisson processes. Integrating this equality over the choice of \((\mathbf{g}_{s})_{0\leqslant s\leqslant t}\) and using the fact that \((G_{s})_{0\leqslant s\leqslant t}\) has the same law as \((G_{t-s})_{0\leqslant s\leqslant t}\) gives (57).
Letting \(\bar{u}\in V_{n}\) be arbitrary (and deterministic) and writing \(\xi^{\bar{u}}_{t}\) instead of \(\xi^{\{\bar{u}\}}_{t}\), we have
\[\mathbb{P}(\xi^{V_{n}}_{t}\neq\varnothing)\leqslant\sum_{u\in V_{n}}\mathbb{ P}(\xi^{V_{n}}_{t}(u)=1)=n\cdot\mathbb{P}(\xi^{V_{n}}_{t}(\bar{u})=1)=n\cdot \mathbb{P}(\xi^{\bar{u}}_{t}\neq\varnothing), \tag{58}\]
where the equalities follow from symmetry and duality, respectively. Due to (58), the analysis of the extinction time of the contact process started from all vertices infected can be reduced to the analysis of the extinction
time of the contact process from a single infection at the arbitrary vertex \(\bar{u}\). For the rest of this section, \(\bar{u}\) remains fixed.
Next, we describe an _exploration process_, which only reveals partial information about \((G_{t})_{t\geqslant 0}\), namely, only the matching of half-edges at certain points in time on a need-to-know basis imposed by the transmission times of the contact process.
### Exploration process
Let
\[\mathcal{P}:=\{\{(u,a),(u^{\prime},a^{\prime})\}:\;u,u^{\prime}\in V_{n},\;a,a ^{\prime}\in\{1,\ldots,d\},\;(u,a)\neq(u^{\prime},a^{\prime})\}\]
be the set of all potential edges of our random graph. A set \(\mathcal{E}\subseteq\mathcal{P}\) is called _independent_ if any two elements of \(\mathcal{E}\) have no half-edge in common. Let
\[\mathscr{P}:=\{\mathcal{E}\subseteq\mathcal{P}:\;\mathcal{E}\text{ is independent}\}. \tag{59}\]
In the same probability space where \((G_{t})\) and the graphical construction of the contact process are defined, we now define a process \((\mathscr{E}_{t})_{t\geqslant 0}\) taking values in \(\mathscr{P}\). Intuitively, \(\mathscr{E}_{t}\) represents a set of edges that are known to be part of \(G_{t}\), having been revealed by an exploration induced by the contact process activity. This process will have the following features:
1. it starts from \(\mathscr{E}_{0}=\varnothing\);
2. for any \(t\), every element of \(\mathscr{E}_{t}\) is an edge of \(G_{t}\);
3. the pair \((\xi_{t}^{\bar{u}},\mathscr{E}_{t})_{t\geqslant 0}\) is a Markov chain;
4. for any \(t\), conditionally on \((\xi_{t}^{\bar{u}},\mathscr{E}_{t})\), the distribution of the edges of \(G_{t}\) apart from those in \(\mathscr{E}_{t}\) is uniform. More precisely, the pairing in \(G_{t}\) of the half-edges in the set \(\{(u,a):(u,a)\text{ not in any edge of }\mathscr{E}_{t}\}\) is uniformly distributed among all possibilities.
In order to define the exploration, we will need auxiliary times. Let \(T_{1}\) be the first time in which there is a transmission mark at a half-edge emanating
from \(\bar{u}\); let \(\mathscr{E}_{t}=\varnothing\) for all \(t\in[0,T_{1})\). In case \(T_{1}<\infty\), say, due to a transmission mark at the half-edge \((\bar{u},a)\), we reveal the half-edge \((v,b)\) to which \((\bar{u},a)\) is paired in \(G_{T_{1}}\), and include the edge \(\{(\bar{u},a),(v,b)\}\) in \(\mathscr{E}_{T_{1}}\).
Now assume that we have already defined stopping times (with respect to the filtration of \((G_{t})\) and the graphical construction) \(T_{1}\leqslant T_{2}\leqslant\cdots\leqslant T_{k}\), and that we have defined \(\mathscr{E}_{t}\) for \(0\leqslant t\leqslant T_{k}\). In case \(T_{k}=\infty\), set \(T_{k+1}=\infty\); from now on, assume that \(\{T_{k}<\infty\}\) occurs. Let \(T_{k+1}\) be the first time \(t>T_{k}\) when:
* either a switch occurs involving at least one edge of \(\mathscr{E}_{T_{k}}\) (call this _Case 1_),
* or "the contact process tries to use an unexplored edge", that is, a transmission mark appears at a half-edge emanating from some vertex of \(\xi_{t}\), and this half-edge is not part of an edge of \(\mathscr{E}_{T_{k}}\) (_Case 2_);
We set \(\mathscr{E}_{t}=\mathscr{E}_{T_{k}}\) for \(t\in(T_{k},T_{k+1})\), and \(\mathscr{E}_{T_{k+1}}\) is defined as follows.
* In Case 1, there are two sub-cases. First assume that the switch at time \(T_{k+1}\) involves an edge \(e\) of \(\mathscr{E}_{T_{k}}\) and another edge outside \(\mathscr{E}_{T_{k}}\). Then, we let \(\mathscr{E}_{T_{k+1}}=\mathscr{E}_{T_{k}}\backslash\{e\}\). Now assume that the switch at time \(T_{k+1}\) involves two edges \(e,e^{\prime}\) of \(\mathscr{E}_{T_{k}}\), transforming them into the two new edges \(e^{\prime\prime},e^{\prime\prime\prime}\). We then set \(\mathscr{E}_{T_{k+1}}=(\mathscr{E}_{T_{k}}\backslash\{e,e^{\prime}\})\cup\{e^ {\prime\prime},e^{\prime\prime\prime}\}\).
* In Case 2, we reveal the half-edge that is matched at time \(t\) to the half-edge having the transmission mark at that time; letting \(e\) be the edge formed by these two half-edges, we let \(\mathscr{E}_{T_{k+1}}=\mathscr{E}_{T_{k}}\cup\{e\}\).
This completes the description of the exploration, and it should be clear that properties (P1), (P2), (P3) and (P4) listed earlier are indeed satisfied.
Our next step is to use the exploration process as a tool to couple the contact process \((\xi_{t}^{\bar{u}})\) with a herds process.
### A Markov chain lemma and its application
We now prove a general result about coupling two continuous-time Markov chains.
**Lemma 4.1**.: _Let \(\mathcal{X}_{1}\) and \(\mathcal{X}_{2}\) be countable sets, and let \(r_{1}:\mathcal{X}_{1}\times\mathcal{X}_{1}\to[0,\infty)\) and \(r_{2}:\mathcal{X}_{2}\times\mathcal{X}_{2}\to[0,\infty)\) be functions defining the jump rates for continuous-time (non-explosive) Markov chains on \(\mathcal{X}_{1}\) and \(\mathcal{X}_{2}\), respectively. Assume that there is a subset \(\mathcal{X}_{1}^{\prime}\subseteq\mathcal{X}_{1}\) and functions \(\Psi:\mathcal{X}_{1}^{\prime}\to\mathcal{X}_{2}\) and \(f:\mathcal{X}_{1}^{\prime}\to[0,\infty)\) such that the following two conditions hold:_
\[\sum_{y\in\mathcal{X}_{1}\setminus\mathcal{X}_{1}^{\prime}}r_{1}(x,y)\leqslant f (x)\quad\text{for all }x\in\mathcal{X}_{1}^{\prime} \tag{60}\]
_and_
\[\sum_{\begin{subarray}{c}z\in\mathcal{X}_{2},\\ z\neq\Psi(x)\end{subarray}}\left|r_{2}(\Psi(x),z)-\sum_{y\in\Psi^{-1}(z)}r_{1}( x,y)\right|\leqslant f(x)\quad\text{for all }x\in\mathcal{X}_{1}^{\prime}. \tag{61}\]
_Fix \(\bar{x}\in\mathcal{X}_{1}^{\prime}\). Then, there exists a coupling \((\mathcal{A}_{t},\mathcal{B}_{t})_{t\geqslant 0}\) on \(\mathcal{X}_{1}\times\mathcal{X}_{2}\) with the following properties:_
* \(\mathcal{A}_{0}=\bar{x}\) _and_ \((\mathcal{A}_{t})_{t\geqslant 0}\) _is a Markov chain on_ \(\mathcal{X}_{1}\) _with jump rates_ \(r_{1}\)_;_
* \(\mathcal{B}_{0}=\Psi(\bar{x})\) _and_ \((\mathcal{B}_{t})_{t\geqslant 0}\) _is a Markov chain on_ \(\mathcal{X}_{2}\) _with jump rates_ \(r_{2}\)_;_
* _letting_ \[\sigma:=\inf\{t:\mathcal{B}_{t}\neq\Psi(\mathcal{A}_{t})\}\] _and, for any_ \(a>0\)_,_ \[T_{a}:=\inf\{t:f(\mathcal{A}_{t})>a\},\] _we have, for any_ \(t>0\)_,_ \[\mathbb{P}(\sigma\leqslant t\wedge T_{a})\leqslant 2at.\]
Proof.: We will define a continuous-time Markov chain \((\mathcal{A}_{t},\mathcal{B}_{t},\mathcal{W}_{t})_{t\geqslant 0}\) taking values in the set
\[\{(x,\Psi(x),1):\ x\in\mathcal{X}_{1}^{\prime}\}\cup\{(x,y,0):\ x\in\mathcal{ X}_{1},\ y\in\mathcal{X}_{2}\}, \tag{62}\]
starting from \((\bar{x},\Psi(\bar{x}),1)\). The pair \((\mathcal{A}_{t},\mathcal{B}_{t})\) will satisfy the properties in the statement. The third coordinate process \((\mathcal{W}_{t})\) will be a non-increasing process (it jumps at most once, from \(1\) to \(0\)) with the property that for all \(t<\inf\{s:\mathcal{W}_{s}=0\}\), we have \(\mathcal{B}_{t}=\Psi(\mathcal{A}_{t})\). So, we interpret \(\mathcal{W}_{t}\) as the indicator of the event that "the coupling still works at time \(t\)".
In order to define this chain, we need to specify the jump rates. When the third coordinate equals zero (meaning that the coupling is already broken), the first and second coordinates move independently, according to the chains defined by \(r_{1}\) (on \(\mathcal{X}_{1}\)) and \(r_{2}\) (on \(\mathcal{X}_{2}\)), respectively. More precisely, from any triple of the form \((x,y,0)\), the chain jumps as follows:
* for each \(x^{\prime}\in\mathcal{X}_{1}\), it jumps to \((x^{\prime},y,0)\) with rate \(r_{1}(x,x^{\prime})\);
* for each \(y^{\prime}\in\mathcal{X}_{2}\), it jumps to \((x,y^{\prime},0)\) with rate \(r_{2}(y,y^{\prime})\).
We now need to specify the jump rates from points in the first set in the union in (62). In order to do so, we first introduce some notation. For each \(x\in\mathcal{X}_{1}^{\prime}\), we let
\[[x]:=\Psi^{-1}(\Psi(x))\subseteq\mathcal{X}_{1}^{\prime}.\]
For each \(x\in\mathcal{X}_{1}\) and each \(S\subseteq\mathcal{X}_{1}\), we write
\[r_{1}(x,S):=\sum_{y\in S}r_{1}(x,y).\]
Now, fix \(x\in\mathcal{X}_{1}^{\prime}\). The following list describes all the possible jumps that the chain can take from \((x,\Psi(x),1)\) (this starting location is kept fixed throughout the list), and their respective rates:
* \(\mathcal{A}\) _and \(\mathcal{B}\) jump together, stay coupled:_ for each \(y\in\mathcal{X}_{1}^{\prime}\backslash[x]\), jump to \((y,\Psi(y),1)\) with rate \[r_{1}(x,y)\cdot\frac{r_{1}(x,[y])\wedge r_{2}(\Psi(x),\Psi(y))}{r_{1}(x,[y])};\]
* \(\mathcal{A}\) _jumps alone inside \([x]\), stay coupled:_ for each \(y\in[x]\backslash\{x\}\), jump to \((y,\Psi(y),1)=(y,\Psi(x),1)\) with rate \(r_{1}(x,y)\);
* \(\mathcal{A}\) _jumps alone leaving \(\mathcal{X}^{\prime}_{1}\), break coupling:_ for each \(y\in\mathcal{X}_{1}\backslash\mathcal{X}^{\prime}_{1}\), jump to \((y,\Psi(x),0)\) with rate \(r_{1}(x,y)\);
* \(\mathcal{A}\) _jumps alone inside \(\mathcal{X}^{\prime}_{1}\), break coupling:_ for each \(y\in\mathcal{X}^{\prime}_{1}\backslash[x]\), jump to \((y,\Psi(x),0)\) with rate \[r_{1}(x,y)\cdot\left(1-\frac{r_{1}(x,[y])\wedge r_{2}(\Psi(x),\Psi(y))}{r_{1} (x,[y])}\right);\]
* \(\mathcal{B}\) _jumps alone, breaks coupling:_ for each \(z\in\mathcal{X}_{2}\), jump to \((x,z,0)\) with rate \[r_{2}(\Psi(x),z)-(r_{1}(x,\Psi^{-1}(z))\wedge r_{2}(\Psi(x),z)).\]
It is straightforward to check that the marginal rates for \((\mathcal{A}_{t})\) and \((\mathcal{B}_{t})\) are correct, so that items (a) and (b) in the statement of the lemma hold.
For each \(x\in\mathcal{X}^{\prime}_{1}\), let \(\mathcal{R}(x)\) denote the rate at which \((\mathcal{A}_{t},\mathcal{B}_{t},\mathcal{W}_{t})\) jumps from \((x,\Psi(x),1)\) to the set \(\mathcal{X}_{1}\times\mathcal{X}_{2}\times\{0\}\), where the coupling is broken. From the above rates, and then using (60) and (61), it can be seen that
\[\mathcal{R}(x)=r_{1}(x,\mathcal{X}_{1}\backslash\mathcal{X}^{\prime}_{1})+ \sum_{\begin{subarray}{c}z\in\mathcal{X}_{2},\\ z\neq\Psi(x)\end{subarray}}|r_{1}(x,\Psi^{-1}(z))-r_{2}(\Psi(x),z)|\leqslant 2f (x). \tag{63}\]
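Indeed, for each \(z\neq\Psi(x)\), summing the rates of the jumps of the type "\(\mathcal{A}\) jumps alone inside \(\mathcal{X}^{\prime}_{1}\), break coupling" over \(y\in\Psi^{-1}(z)\) gives \(r_{1}(x,\Psi^{-1}(z))-r_{1}(x,\Psi^{-1}(z))\wedge r_{2}(\Psi(x),z)\) (note that \([y]=\Psi^{-1}(z)\) for every such \(y\)), the jumps of the type "\(\mathcal{B}\) jumps alone, breaks coupling" contribute \(r_{2}(\Psi(x),z)-r_{1}(x,\Psi^{-1}(z))\wedge r_{2}(\Psi(x),z)\), and the two add up to \(|r_{1}(x,\Psi^{-1}(z))-r_{2}(\Psi(x),z)|\) by the elementary identity \((a-a\wedge b)+(b-a\wedge b)=|a-b|\), valid for \(a,b\geqslant 0\).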
Next, let \(\sigma^{\prime}:=\inf\{t:\mathcal{W}_{t}=0\}\), and recall that \(T_{a}:=\inf\{t:f(\mathcal{A}_{t})>a\}\). The process
\[M_{t}:=\mathds{1}\{\sigma^{\prime}\leqslant t\wedge T_{a}\}-\int_{0}^{t\wedge \sigma^{\prime}\wedge T_{a}}\mathcal{R}(\mathcal{A}_{s})\;\mathrm{d}s,\qquad t\geqslant 0\]
is easily seen to be a martingale. Then, for any \(t\geqslant 0\),
\[0=M_{0}=\mathbb{E}[M_{t}]=\mathbb{P}(\sigma^{\prime}\leqslant t\wedge T_{a})- \mathbb{E}\left[\int_{0}^{t\wedge\sigma^{\prime}\wedge T_{a}}\mathcal{R}( \mathcal{A}_{s})\;\mathrm{d}s\right]\stackrel{{(63)}}{{\geqslant}}\mathbb{P}(\sigma^{\prime}\leqslant t\wedge T_{a})-2at.\]
Now, recalling that \(\sigma:=\inf\{t:\mathcal{B}_{t}\neq\Psi(\mathcal{A}_{t})\}\), we have that \(\sigma^{\prime}\leqslant\sigma\), so
\[\mathbb{P}(\sigma\leqslant t\wedge T_{a})\leqslant\mathbb{P}(\sigma^{\prime} \leqslant t\wedge T_{a})\leqslant 2at.\]
In the application we have in mind for this lemma, the first Markov chain is the pair \((\xi_{t},\mathscr{E}_{t})\) consisting of the contact process and the exploration process in the random dynamic graph, as described in the previous subsection (recall that this pair is a Markov chain). The second Markov chain is a certain function of the herds process. We will need to give some definitions for both, as well as for the mapping \(\Psi\) between them.
#### 4.3.1 First Markov chain: contact and exploration process
The process \((\xi_{t},\mathscr{E}_{t})_{t\geqslant 0}\) (contact process and exploration process on the dynamic random graph \((G_{t})\)) takes values in the state space
\[\mathcal{X}_{1}:=\{(A,\mathcal{E}):\;A\subseteq[n],\;\mathcal{E}\in\mathscr{P }\},\]
where we recall the definition of \(\mathscr{P}\) in (59). We denote by \(r_{1}(\cdot,\cdot)\) the function giving the jump rates of this chain.
Given \((A,\mathcal{E})\in\mathcal{X}_{1}\), we define the **graph induced by \((A,\mathcal{E})\)**, denoted by \(\mathrm{Graph}(A,\mathcal{E})\), as follows. First enumerate \(\mathcal{E}=\{e_{1},\ldots,e_{m}\}\), with
\[e_{1}=\{(u_{1},a_{1}),(u^{\prime}_{1},a^{\prime}_{1})\},\quad\ldots,\quad e_{ m}=\{(u_{m},a_{m}),(u^{\prime}_{m},a^{\prime}_{m})\}.\]
Then, \(\mathrm{Graph}(A,\mathcal{E})\) is the graph with vertex set \(A\cup\{u_{1},u^{\prime}_{1},\ldots,u_{m},u^{\prime}_{m}\}\) and edge set \(\mathcal{E}\).
#### 4.3.2 Second Markov chain: herds process modulo automorphisms
Recall the definition of the set \(P_{\mathsf{f}}(\mathbb{T}^{d})\) of herd shapes from Definition 2.1. For \(A\in P_{\mathsf{f}}(\mathbb{T}^{d})\), define
\[[A]:=\left\{\begin{array}{c}A^{\prime}\in P_{\mathsf{f}}(\mathbb{T}^{d}):\; \mbox{there is a graph isomorphism }\psi:\mathbb{T}^{d}\to\mathbb{T}^{d}\\ \mbox{such that }\psi(A)=A^{\prime}\end{array}\right\}.\]
This decomposes \(P_{\mathsf{f}}(\mathbb{T}^{d})\) into equivalence classes.
Recall the definition of the set \(\mathcal{S}\) of herd configurations from Definition 2.2. Given \(\xi\in\mathcal{S}\), define \([\xi]:\{[A]:A\in P_{\mathsf{f}}(\mathbb{T}^{d})\}\to\mathbb{N}_{0}\) by setting
\[[\xi]([A])=\sum_{A^{\prime}\in[A]}\xi(A^{\prime}).\]
We then define
\[\mathcal{X}_{2}:=\{[\xi]:\xi\in\mathcal{S}\},\]
the **set of herd configurations modulo automorphisms**. Letting \((\Xi_{t})_{t\geqslant 0}\) be the herds process, we note that, by Lemma 2.4, the process \(([\Xi_{t}])_{t\geqslant 0}\) is a Markov chain on \(\mathcal{X}_{2}\). We let \(r_{2}(\cdot,\cdot)\) denote the function giving the jump rates of this chain.
#### 4.3.3 The mapping \(\Psi\) and the error bound \(f\)
Now that we have defined the pairs \((\mathcal{X}_{1},r_{1})\) and \((\mathcal{X}_{2},r_{2})\) that we will use in our application of Lemma 4.1, we will also define the sets \(\mathcal{X}_{1}^{\prime}\) and the functions \(\Psi:\mathcal{X}_{1}^{\prime}\to\mathcal{X}_{2}\) and \(f:\mathcal{X}_{1}^{\prime}\to[0,\infty)\) that appear in the assumptions of that lemma.
We start with
\[\mathcal{X}_{1}^{\prime}:=\{(A,\mathcal{E})\in\mathcal{X}_{1}:\text{ Graph}(A,\mathcal{E})\text{ is a forest}\}. \tag{64}\]
The mapping \(\Psi\) is easy to understand (Figure 2 provides an instant explanation) but somewhat clumsy to define. Fix \((A,\mathcal{E})\in\mathcal{X}_{1}^{\prime}\). Let \(\mathscr{C}_{1},\ldots,\mathscr{C}_{m}\) be the connected components of \(\text{Graph}(A,\mathcal{E})\) that contain at least one vertex of \(A\). For \(i\in\{1,\ldots,m\}\), let \(A_{i}\) be the set of vertices of \(A\) that intersect \(\mathscr{C}_{i}\). Since \(\mathscr{C}_{i}\) is a tree in which all vertices have degree at most \(d\), there exists an isomorphism \(\psi_{i}\) between \(\mathscr{C}_{i}\) and some connected subgraph of \(\mathbb{T}^{d}\) (in fact there are infinitely many such isomorphisms, but we choose one in some arbitrary way). Then, \(\xi(A,\mathcal{E}):=\sum_{i=1}^{m}\delta_{\psi_{i}(A_{i})}\) is a herd configuration, and we let
\[\Psi(A,\mathcal{E}):=[\xi]\in\mathcal{X}_{2}.\]
It is now straightforward to verify that there exists a constant \(C_{f}>0\) such that conditions (60) and (61) are satisfied with the choice
\[f(A,\mathcal{E}):=C_{f}\frac{(|A|+|\mathcal{E}|)^{2}}{n}.\]
We omit the details.
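Heuristically (this is not a proof, only an indication of why a bound of this form is natural), the jumps counted in (60) and (61) either involve a "collision" with the already-explored structure (a transmission revealing a half-edge attached to a vertex that is already in \(\operatorname{Graph}(A,\mathcal{E})\), or a switch involving two edges of \(\mathcal{E}\)), or they come from finite-size corrections of order \(1/n\) to individual jump rates. There are at most of order \(|A|+|\mathcal{E}|\) triggers for such jumps, and each produces a collision, or a rate correction, of order \((|A|+|\mathcal{E}|)/n\), which explains the quadratic expression above.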
We now have all the ingredients to apply Lemma 4.1. Given an initial condition \((A,\mathcal{E})\in\mathcal{X}_{1}^{\prime}\) for the exploration process (which will often, but not always, be equal to \((\{\bar{u}\},\varnothing)\)), we can obtain the coupling \((\mathcal{A}_{t},\mathcal{B}_{t})_{t\geqslant 0}\) started from \(((A,\mathcal{E}),\Psi(A,\mathcal{E}))\) and satisfying the properties guaranteed by that lemma.
### Proof of Theorem 1.2
For the rest of this section, we assume that \(\lambda<\bar{\lambda}(\mathsf{v})\). By (58), it suffices to prove that, for \(C>0\) large enough, we have
\[n\cdot\mathbb{P}(\xi_{C\log n}^{\bar{u}}\neq\varnothing)\xrightarrow{n\to \infty}0.\]
Let us explain our strategy to prove this. We take advantage of the coupling with the herds process from the previous section. The probability that the contact process started from \(\{\bar{u}\}\) survives until time \(C\log n\) (with \(C\) some large constant), and moreover the coupling remains good (meaning that \(\mathcal{B}_{t}=\Psi(\mathcal{A}_{t})\)) for time \(C^{\prime}\log n\) (with \(C^{\prime}<C\) but still large) is \(o(1/n)\), since this would imply survival of the herds process, which is subcritical, until \(C^{\prime}\log n\). However, the probability that the coupling turns bad before extinction is not \(o(1/n)\), so we have to deal with that event. The most problematic case is that the coupling turns bad due to the exploration process finding an edge that causes the explored graph to no longer be a forest. In that case, apart from events of probability \(o(1/n)\), this problematic edge is deleted after a short amount of time due to a switch (with no other problematic edges appearing in the meantime), and the explored region goes back to being a forest. At this moment when a forest reappears, we can start a brand new coupling between the exploration process (starting from its current state \((\xi_{t},\mathscr{E}_{t})\)) and a herds process (starting from \(\Psi(\xi_{t},\mathscr{E}_{t})\)). Now, this second coupling also turning bad has too low probability (when we consider this event together with the already low probability of the breaking of the first attempt). It is also unlikely that this second coupling stays active for a long time without turning bad, for the same reason as for the first one.

Figure 2: Illustration of the mapping \(\Psi\). Above, the pair \((A,\mathcal{E})\) is depicted (red vertices are those that belong to \(A\), and black vertices are those that belong to \(\operatorname{Graph}(A,\mathcal{E})\) but not to \(A\)). Below, the herd configuration modulo automorphisms \(\Psi(A,\mathcal{E})\) is shown. Note that one of the connected components of \(\operatorname{Graph}(A,\mathcal{E})\) has no counterpart in \(\Psi(A,\mathcal{E})\), because none of its vertices belongs to \(A\).
The above explanation shows that the argument is naturally structured in three stages (of course, not all of them necessarily occur): the first coupling attempt, then the period until a problematic edge is removed, and then the second coupling attempt. We encapsulate Stages 2 and 3 in two lemmas, in reverse order: Lemma 4.2 below deals with Stage 3, and Lemma 4.3 with Stage 2. Having these two lemmas in place, we are able to tell the full story from the beginning of Stage 1, concluding the proof.
**Lemma 4.2**.: _Let \((A,\mathcal{E})\in\mathcal{X}_{1}^{\prime}\), where \(\mathcal{X}_{1}^{\prime}\) is as in (64). Assume that \(|A|+|\mathcal{E}|\leqslant n^{1/6}\). Let \((\xi_{t},\mathscr{E}_{t})_{t\geqslant 0}\) be a contact process and exploration process started from \((\xi_{0},\mathscr{E}_{0})=(A,\mathcal{E})\). Then, letting \(\tau\) denote the extinction time of the contact process, and \(\varphi=\varphi(\lambda,\mathsf{v})\) be the growth index of the herds process (as in (10)), for \(n\) large enough we have_
\[\mathbb{P}\left(\tau>\frac{2}{|\log\varphi|}\log n\right)\leqslant\frac{1}{ \sqrt{n}}. \tag{65}\]
Proof.: Let \((\mathcal{A}_{t},\mathcal{B}_{t})_{t\geqslant 0}\) be the coupling obtained from Lemma 4.1, started from \((\mathcal{A}_{0},\mathcal{B}_{0})=((A,\mathcal{E}),\Psi(A,\mathcal{E}))\). Recalling from the statement of Lemma 4.1
that
\[\sigma:=\inf\{t:\mathcal{B}_{t}\neq\Psi(\mathcal{A}_{t})\},\qquad T_{a}:=\inf\{t:f (\mathcal{A}_{t})\geqslant a\}\]
and abbreviating
\[t_{*}:=\frac{2\log n}{|\log\varphi|},\qquad a_{*}:=\frac{1}{4t_{*}\sqrt{n}}= \frac{|\log\varphi|}{8\sqrt{n}\log n},\]
we bound the probability on the left-hand side of (65) by
\[\mathbb{P}(\tau>t_{*},\ \sigma>t_{*})+\mathbb{P}(\sigma\leqslant t_{*}\wedge T _{a_{*}})+\mathbb{P}(T_{a_{*}}<\sigma\leqslant t_{*}). \tag{66}\]
By Lemma 4.1, we have
\[\mathbb{P}(\sigma\leqslant t_{*}\wedge T_{a_{*}})\leqslant 2t_{*}a_{*}=\frac{ 1}{2\sqrt{n}}.\]
By the definition of \(\sigma\), on the event \(\{\tau>t_{*},\ \sigma>t_{*}\}\) we have that \(\mathcal{B}_{t_{*}}\) is not empty. Then, \(\mathbb{P}(\tau>t_{*},\ \sigma>t_{*})\) is smaller than the probability that a herds process started with fewer than \(n^{1/6}\) particles is still alive by time \(t_{*}\). By (6) and (15), we obtain
\[\mathbb{P}(\tau>t_{*},\ \sigma>t_{*})\leqslant Cn^{1/6}\cdot\varphi^{t_{*}}=Cn ^{1/6}\cdot n^{-2}<\frac{1}{4\sqrt{n}} \tag{67}\]
if \(n\) is large enough.
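Here we also used that \(\varphi<1\) in the subcritical regime under consideration, so that

\[\varphi^{t_{*}}=\exp\left(t_{*}\log\varphi\right)=\exp\left(-\frac{2\log n}{|\log\varphi|}\cdot|\log\varphi|\right)=n^{-2}.\]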
It remains to bound \(\mathbb{P}(T_{a_{*}}<\sigma\leqslant t_{*})\). Recalling that \(f(\mathcal{A}_{t})=C_{f}(|\xi_{t}|+|\mathscr{E}_{t}|)^{2}/n\), if \(T_{a_{*}}<\infty\) we have
\[|\xi_{T_{a_{*}}}|+|\mathscr{E}_{T_{a_{*}}}|\geqslant(na_{*}/C_{f})^{1/2}.\]
Since \(|\xi_{0}|+|\mathscr{E}_{0}|\leqslant n^{1/6}\), we obtain that
\[\text{on }\{T_{a_{*}}<\infty\},\quad|\xi_{T_{a_{*}}}|+|\mathscr{E}_{T_{a_{*}}}| -(|\xi_{0}|+|\mathscr{E}_{0}|)\geqslant(na_{*}/C_{f})^{1/2}-n^{1/6}>n^{1/5}.\]
Now, the process \((|\xi_{t}|+|\mathscr{E}_{t}|)\) only changes at times when \((\mathcal{A}_{t})=(\xi_{t},\mathscr{E}_{t})\) changes. If \(\mathcal{A}_{t}\) has a new infection appearing at time \(t\), then \(|\xi_{t}|+|\mathscr{E}_{t}|\) may increase by at most \(2\) at that time. If, on the other hand, \(\mathcal{A}_{t}\) performs a jump of any other kind, then \(|\xi_{t}|+|\mathscr{E}_{t}|\) stays the same or decreases. Hence, for any \(t\), we have
\[|\xi_{t}|+|\mathscr{E}_{t}|-(|\xi_{0}|+|\mathscr{E}_{0}|)\leqslant 2|\{s \leqslant t:\ |\xi_{s}|=|\xi_{s-}|+1\}|.\]
Putting these observations together, we see that
\[\text{on }\{T_{a_{*}}<\sigma\leqslant t_{*}\},\quad|\{s<\sigma:\;|\xi_{s}|=|\xi_{s-} |+1\}|\geqslant n^{1/5}/2.\]
Moreover, before time \(\sigma\), whenever a new infection appears in \((\mathcal{A}_{t})\), a new particle is also born in \((\mathcal{B}_{t})\). Letting \(\mathcal{N}_{\infty}\) denote the number of particles ever born in \((\mathcal{B}_{t})\) (even after time \(\sigma\)), we obtain the bound
\[\mathbb{P}(T_{a_{*}}<\sigma\leqslant t_{*})\leqslant\mathbb{P}(\mathcal{N}_{ \infty}\geqslant n^{1/5}/2)\leqslant\frac{\mathbb{E}[\mathcal{N}_{\infty}^{p}] }{(n^{1/5}/2)^{p}},\]
for any \(p\geqslant 1\), by Markov's inequality. Recalling that \(\mathcal{B}_{0}\) has at most \(n^{1/6}\) particles, and using Lemma 3.12 and Corollary 3.13, the right-hand side is smaller than
\[\frac{C_{p}(n^{1/6})^{p}}{(n^{1/5}/2)^{p}}=C_{p}^{\prime}\cdot n^{-p/30},\]
for some constants \(C_{p},C_{p}^{\prime}>0\). Taking \(p>15\) and then \(n\) large enough, this is smaller than \(\frac{1}{4\sqrt{n}}\), completing the proof.
**Lemma 4.3**.: _There exist \(\varepsilon>0\) and \(\delta>0\) such that the following holds. Let \((A,\mathcal{E})\in\mathcal{X}_{1}\backslash\mathcal{X}_{1}^{\prime}\) (so that \(\operatorname{Graph}(A,\mathcal{E})\) is not a forest). Assume that \(|A|+|\mathcal{E}|\leqslant n^{\varepsilon}\). Also assume that there is an edge \(e\) in \(\mathcal{E}\) such that \((A,\mathcal{E}\backslash\{e\})\in\mathcal{X}_{1}^{\prime}\), that is, \(\operatorname{Graph}(A,\mathcal{E})\) would become a forest if \(e\) were removed from \(\mathcal{E}\). Letting \((\xi_{t},\mathscr{E}_{t})_{t\geqslant 0}\) denote the contact and exploration process started from \((A,\mathcal{E})\), and letting \(\tau\) denote the extinction time of \((\xi_{t})\), for \(n\) large enough we have_
\[\mathbb{P}\left(\tau>\left(\delta+\frac{2}{|\log\varphi|}\right)\log n\right)< n^{-4\varepsilon}. \tag{68}\]
Proof.: Let \(U\) denote the time when the edge \(e\) disappears, due to being involved in a switch. The rate at which this happens equals \(\mathsf{v}_{n}=\frac{\mathsf{v}}{nd}\) times the number of other edges in the graph, which is \(\frac{nd}{2}-1\), times \(2\) (switches can be positive or negative). So this rate is \(\mathsf{v}(1-\frac{1}{2nd})\). Hence, \(U\) has exponential distribution with parameter \(\mathsf{v}(1-\frac{1}{2nd})\).
Denote by \(B\) the event that:
* for all \(t\in[0,U)\) we have \((\xi_{t},\mathscr{E}_{t}\backslash\{e\})\in\mathcal{X}_{1}^{\prime}\), that is, it stays the case that the removal of \(e\) from the set of edges turns \(\operatorname{Graph}(\xi_{t},\mathscr{E}_{t})\) into a forest;
* \(\operatorname{Graph}(\xi_{U},\mathscr{E}_{U})\) is a forest.
We fix \(\delta>0\) and \(\varepsilon>0\) for now; their values will be chosen at the end of the proof. We define
\[T:=\inf\{t:\;|\xi_{t}|+|\mathscr{E}_{t}|>n^{1/6}\}.\]
We bound the probability in (68) by
\[\mathbb{P}(U>\delta\log n) \tag{69}\] \[+\mathbb{P}\left(U\leqslant\delta\log n,\;T\leqslant U\right)\] (70) \[+\mathbb{P}\left(\{U\leqslant\delta\log n,\;T>U\}\cap B^{c}\right)\] (71) \[+\mathbb{P}\left(\{U\leqslant\delta\log n,\;T>U\}\cap B\cap\left\{ \tau>\left(\delta+\frac{2}{|\log\varphi|}\right)\log n\right\}\right). \tag{72}\]
We bound these four terms separately, starting with the first and last, which are the easiest. We have
\[\mathbb{P}(U>\delta\log n)=\exp\left\{-\mathsf{v}\left(1-\frac{1}{2nd}\right) \delta\log n\right\}=n^{-\mathsf{v}\left(1-\frac{1}{2nd}\right)\delta}.\]
Next, letting \(B^{\prime}:=\{U\leqslant\delta\log n,\;T>U\}\cap B\), and letting \((\mathcal{F}_{t})_{t\geqslant 0}\) be the natural filtration of \((\xi_{t},\mathscr{E}_{t})\), the probability in (72) is
\[\mathbb{P}\left(B^{\prime}\cap\left\{\tau>\left(\delta+\frac{2}{| \log\varphi|}\right)\log n\right\}\right)\] \[=\mathbb{E}\left[\mathds{1}_{B^{\prime}}\cdot\mathbb{P}\left( \tau>\left(\delta+\frac{2}{|\log\varphi|}\right)\log n\;\middle|\;\mathcal{F}_{U}\right)\right]\leqslant\mathbb{P}(B^{\prime})\cdot\frac{1}{\sqrt{n}}\leqslant\frac{1}{ \sqrt{n}},\]
where the first inequality follows from Lemma 4.2.
To bound (70), we note that \(|\xi_{t}|+|\mathscr{E}_{t}|\) can increase by at most two units at a given jump time, and a jump that causes such an increase happens with rate at most \(\lambda d|\xi_{t}|\). We can thus stochastically dominate \((|\xi_{t}|+|\mathscr{E}_{t}|)_{t\geqslant 0}\) by a pure-birth process \((Z_{t})_{t\geqslant 0}\) on \(\mathbb{N}\) which starts from \(Z_{0}=\lceil n^{\varepsilon}\rceil\) and jumps from \(k\) to \(k+2\) with rate \(\lambda dk\) (and has no other kind of jump). Then, the probability in (70) is smaller than
\[\mathbb{P}\left(\max_{0\leqslant t\leqslant\delta\log n}(|\xi_{t }|+|\mathscr{E}_{t}|)>n^{1/6}\right)\] \[\leqslant\mathbb{P}(Z_{\delta\log n}>n^{1/6})\leqslant\frac{ \mathbb{E}[Z_{\delta\log n}]}{n^{1/6}}=\frac{\lceil n^{\varepsilon}\rceil\cdot \exp\{2\lambda d\delta\log n\}}{n^{1/6}}\leqslant 2n^{\varepsilon+2\lambda d\delta-\frac{1}{6}}.\]
We now turn to (71). For the process \((\xi_{t},\mathscr{E}_{t})_{t\geqslant 0}\), let us say that a "bad jump" is a jump time at which either (a) a contact process transmission occurs that causes the inclusion in the exploration process of an edge between two vertices already present in \(\operatorname{Graph}(\xi_{t},\mathscr{E}_{t})\), or (b) a switch occurs that involves two edges already in \(\mathscr{E}_{t}\). The point is that, as long as there are no bad jumps, it remains true that \(\operatorname{Graph}(\xi_{t},\mathscr{E}_{t})\) would become a forest if \(e\) were removed. In particular, letting \(S\) denote the time at which the first bad jump occurs, we have
\[\mathbb{P}(\{U\leqslant\delta\log n,\ T>U\}\cap B^{c})\leqslant\mathbb{P}(U \leqslant\delta\log n,\ T>U,\ S\leqslant U).\]
Now let \(\mathcal{R}_{t}\) denote the rate at which a bad jump occurs from the present state \((\xi_{t},\mathscr{E}_{t})\). It is straightforward to check that there is \(C>0\) such that \(\mathcal{R}_{t}\leqslant C\frac{(|\xi_{t}|+|\mathscr{E}_{t}|)^{2}}{n}\), and that the process
\[Y_{t}:=\mathds{1}\{S\leqslant t,\ S\leqslant T\}-\int_{0}^{t\wedge S\wedge T} \mathcal{R}_{s}\ \mathrm{d}s,\quad t\geqslant 0\]
is a martingale. Then,
\[0=\mathbb{E}[Y_{0}]=\mathbb{E}[Y_{\delta\log n}]\geqslant\mathbb{P}(S \leqslant\delta\log n,\ S\leqslant T)-\delta\log n\cdot C\frac{(n^{1/6})^{2}}{n}.\]
Then, we have
\[\mathbb{P}(U\leqslant\delta\log n,\ T>U,\ S\leqslant U)\leqslant\mathbb{P}(S \leqslant\delta\log n,\ S\leqslant T)\leqslant C\delta\log n\cdot n^{-2/3}.\]
Putting now all our bounds together, we have proved that the probability in (68) is smaller than
\[n^{-\mathsf{v}\left(1-\frac{1}{2nd}\right)\delta}+n^{-1/2}+2n^{\varepsilon +2\lambda d\delta-\frac{1}{6}}+C\delta\log n\cdot n^{-2/3}.\]
By first choosing \(\delta\) small and then choosing \(\varepsilon\) much smaller, this expression is smaller than \(n^{-4\varepsilon}\) when \(n\) is large enough.
Proof of Theorem 1.2.: Let \(\varepsilon\) and \(\delta\) be as in Lemma 4.3. Define
\[\beta:=\frac{4}{|\log\varphi|}+\delta.\]
We will prove that
\[\mathbb{P}(\xi_{\beta\log n}^{V_{n}}\neq\varnothing)\xrightarrow{n\to\infty}0.\]
By (58), this will follow from proving that
\[\lim_{n\to\infty}n\cdot\mathbb{P}(\xi_{\beta\log n}^{\bar{u}}\neq\varnothing)=0,\]
where \(\bar{u}\) is a deterministic vertex. In order to prove this, we take the coupling \((\mathcal{A}_{t},\mathcal{B}_{t})_{t\geqslant 0}\) from Lemma 4.1, with \(\mathcal{A}_{0}=(\xi_{0},\mathscr{E}_{0})=(\{\bar{u}\},\varnothing)\) and \(\mathcal{B}_{0}=\Psi(\mathcal{A}_{0})\). Recall that \(\sigma:=\inf\{t:\mathcal{B}_{t}\neq\Psi(\mathcal{A}_{t})\}\).
Let \(\tau\) be the extinction time of \((\xi_{t})\) (which starts from \(\{\bar{u}\}\)), and also define
\[\beta_{0}:=\frac{2}{|\log\varphi|}\]
and
\[T:=\inf\{t:\;|\xi_{t}|+|\mathscr{E}_{t}|\geqslant n^{\varepsilon}\}.\]
We bound:
\[\mathbb{P}(\tau>\beta\log n)\] \[\leqslant\mathbb{P}(\tau>\beta\log n,\;\sigma>\beta_{0}\log n) \tag{73}\] \[\quad+\mathbb{P}(\sigma\leqslant\beta_{0}\log n,\;T\leqslant\sigma)\] (74) \[\quad+\mathbb{P}(\tau>\beta\log n,\;\sigma\leqslant\beta_{0}\log n,\;T>\sigma). \tag{75}\]
To bound (73), we give the same argument as we have used to bound (67); here it gives:
\[\mathbb{P}(\tau>\beta\log n,\;\sigma>\beta_{0}\log n)\leqslant C\varphi^{ \beta_{0}\log n}=Cn^{-2}.\]
To bound (74), we observe again that \(|\xi_{t}|+|\mathscr{E}_{t}|\) can only increase by \(2\) at any given jump time, and it only increases when there are contact transmissions generating new births. Moreover, before time \(\sigma\), any time when there is a birth for \((\xi_{t},\mathscr{E}_{t})\), there is also a particle birth for \(\mathcal{B}_{t}\). These considerations allow us to bound:
\[\mathbb{P}(\sigma\leqslant\beta_{0}\log n,\;T\leqslant\sigma)\] \[\leqslant\mathbb{P}(\sigma\leqslant\beta_{0}\log n,\;|\{t\leqslant \sigma:\;|\xi_{t}|=|\xi_{t-}|+1\}|\geqslant n^{\varepsilon}/2)\] \[\leqslant\mathbb{P}\left((\mathcal{B}_{t})\text{ has more than }\frac{n^{ \varepsilon}}{2}-2\text{ births}\right).\]
By Corollary 3.13, this is smaller than
\[C_{p}\left(\frac{n^{\varepsilon}}{2}-2\right)^{-p}\]
for any \(p\geqslant 1\). Hence, by taking \(p>2/\varepsilon\) and \(n\) large enough, it is smaller than \(1/n^{2}\).
We now turn to (75). Let us first bound, using Lemma 4.1:
\[\mathbb{P}(\sigma\leqslant\beta_{0}\log n,\;T>\sigma)\leqslant C_{f}\cdot \frac{(n^{\varepsilon})^{2}}{n}\cdot\beta_{0}\log n=C_{f}\beta_{0}n^{-1+2 \varepsilon}\log n. \tag{76}\]
Letting \((\mathcal{F}_{t})_{t\geqslant 0}\) be the natural filtration for \((\mathcal{A}_{t},\mathcal{B}_{t})_{t\geqslant 0}\), we write
\[\mathbb{P}(\tau>\beta\log n,\;\sigma\leqslant\beta_{0}\log n,\;T>\sigma)\] \[\leqslant\mathbb{E}\left[\mathds{1}\{\sigma\leqslant\beta_{0} \log n,\;T>\sigma\}\cdot\mathbb{P}(\tau>\beta\log n\mid\mathcal{F}_{\sigma})\right]\] \[=\mathbb{E}\left[\mathds{1}\{\sigma\leqslant\beta_{0}\log n,\;T> \sigma,\;\mathcal{A}_{\sigma}\notin\mathcal{X}_{1}^{\prime}\}\cdot\mathbb{P} (\tau>\beta\log n\mid\mathcal{F}_{\sigma})\right] \tag{77}\] \[\quad+\mathbb{E}\left[\mathds{1}\{\sigma\leqslant\beta_{0}\log n,\;T>\sigma,\;\mathcal{A}_{\sigma}\in\mathcal{X}_{1}^{\prime}\}\cdot\mathbb{P} (\tau>\beta\log n\mid\mathcal{F}_{\sigma})\right]. \tag{78}\]
To bound (77), we note that on the event \(\{\sigma\leqslant\beta_{0}\log n,\;T>\sigma,\;\mathcal{A}_{\sigma}\notin \mathcal{X}_{1}^{\prime}\}\), we have that \(\mathcal{A}_{\sigma}=(\xi_{\sigma},\mathscr{E}_{\sigma})\) satisfies the assumptions of Lemma 4.3, and then that lemma implies that, on this event, \(\mathbb{P}(\tau>\beta\log n\mid\mathcal{F}_{\sigma})\leqslant n^{-4\varepsilon}\). Together with (76), this implies that (77) is smaller than
\[C_{f}\beta_{0}n^{-1+2\varepsilon}\log n\cdot n^{-4\varepsilon}<n^{-1-\varepsilon}\]
if \(n\) is large enough.
Next, Lemma 4.2 implies that on \(\{\sigma\leqslant\beta_{0}\log n,\;T>\sigma,\;\mathcal{A}_{\sigma}\in \mathcal{X}_{1}^{\prime}\}\), we have \(\mathbb{P}(\tau>\beta\log n\mid\mathcal{F}_{\sigma})<n^{-1/2}\); combining this with (76) shows that (78) is smaller than
\[C_{f}\beta_{0}n^{-1+2\varepsilon}\log n\cdot n^{-1/2}<n^{-5/4}\]
for \(n\) large. This completes the proof.
## 5 Appendix
### Proofs of Lemma 3.7 and Lemma 3.9
Proof of Lemma 3.7.: The proof is the same for the two functions, so we only treat the first. We use the simple bounds, that come from comparison with
a pure birth process,
\[\mathbb{E}[X_{t}\mid\Xi_{0}=\delta_{A}]\leqslant|A|\cdot e^{d\lambda t},\quad \mathbb{E}[X_{t}\mid\Xi_{0}=\delta_{A_{e,1}}+\delta_{A_{e,2}}]\leqslant|A| \cdot e^{d\lambda t},\]
together with the expression (18), to obtain
\[g_{\mathsf{v}}(\xi,t) \leqslant e^{d\lambda t}\cdot\sum_{A}\xi(A)\cdot|A|\cdot|\{\text{ active edges of }A\}|\] \[\leqslant e^{d\lambda t}\cdot\left(\sum_{A}\xi(A)\cdot|A|\right) \left(\sum_{A}\xi(A)\cdot|\{\text{active edges of }A\}|\right)\] \[\leqslant e^{d\lambda t}\cdot(X(\xi)+\mathscr{E}(\xi))^{2}.\]
The statement now readily follows from Corollary 2.3.
The following preliminary result will allow us to perform the exchange of limit and expectation in (32).
**Lemma 5.1**.: _There exist \(c_{1},\ c_{2}>0\) (depending on \(\lambda,\mathsf{v},\varepsilon)\) such that for any \(0\leqslant t<t+s<T\), on \(\{\tau_{\mathrm{sep}}>t\}\) we have_
\[|\widehat{\mathbb{E}}[\mathcal{A}_{t+s}\mid\mathcal{F}_{t}]|\leqslant c_{1}( X(\mathcal{V}_{t})+\mathscr{E}(\mathcal{V}_{t}))^{c_{2}}\cdot s.\]
Proof.: We fix \(s,t\) as in the statement, and let \(N\) denote the number of jumps of the process \((\mathcal{V},\mathcal{W})\) in \([t,t+s]\). We have
\[\mathds{1}_{\{\tau_{\mathrm{sep}}>t\}}\cdot|\widehat{\mathbb{E} }\left[\mathcal{A}_{t+s}\mid\mathcal{F}_{t}\right]|\] \[\leqslant\mathds{1}_{\{\tau_{\mathrm{sep}}>t\}}\cdot\left| \widehat{\mathbb{E}}\left[\mathds{1}_{\{\tau_{\mathrm{sep}}\leqslant t+s,\ N=1\}}\cdot \widehat{\mathbb{E}}[\mathcal{Y}_{T}-\mathcal{X}_{T}\mid\mathcal{F}_{\tau_{ \mathrm{sep}}}]\ \Big{|}\ \mathcal{F}_{t}\right]\right| \tag{79}\] \[\quad+\mathds{1}_{\{\tau_{\mathrm{sep}}>t\}}\cdot\left|\widehat{ \mathbb{E}}\left[\mathds{1}_{\{\tau_{\mathrm{sep}}\leqslant t+s,\ N\geqslant 2\}} \cdot\widehat{\mathbb{E}}[\mathcal{Y}_{T}-\mathcal{X}_{T}\mid\mathcal{F}_{\tau_ {\mathrm{sep}}}]\ \Big{|}\ \mathcal{F}_{t}\right]\right|. \tag{80}\]
We will treat the terms (79) and (80) separately. For both, it will be useful to bound, using domination by a pure birth process,
\[\widehat{\mathbb{E}}[\mathcal{X}_{T}\mid\mathcal{F}_{\tau_{\mathrm{sep}}}] \leqslant e^{d\lambda T}X(\mathcal{V}_{\tau_{\mathrm{sep}}}),\quad\widehat{ \mathbb{E}}[\mathcal{Y}_{T}\mid\mathcal{F}_{\tau_{\mathrm{sep}}}]\leqslant e ^{d\lambda T}X(\mathcal{W}_{\tau_{\mathrm{sep}}})=e^{d\lambda T}X(\mathcal{V}_ {\tau_{\mathrm{sep}}}),\]
which gives
\[|\widehat{\mathbb{E}}[\mathcal{Y}_{T}-\mathcal{X}_{T}\mid\mathcal{F}_{\tau_{ \mathrm{sep}}}]|\leqslant e^{d\lambda T}X(\mathcal{V}_{\tau_{\mathrm{sep}}}). \tag{81}\]
We start bounding (79). On the event \(\{t<\tau_{\mathrm{sep}}\leqslant t+s,\ N=1\}\), we have \(X(\mathcal{V}_{t})=X(\mathcal{W}_{t})=X(\mathcal{V}_{\tau_{\mathrm{sep}}})=X( \mathcal{W}_{\tau_{\mathrm{sep}}})\) (since in this event the only
jump of \((\mathcal{V},\mathcal{W})\) in \([t,t+s]\) is a split which causes the separation of the processes, so there is no change in the number of particles). Then, using (81), we obtain that (79) is smaller than
\[\mathds{1}_{\{\tau_{\mathrm{sep}}>t\}}\cdot e^{d\lambda T}X(\mathcal{V}_{t}) \cdot\widehat{\mathbb{P}}(\tau_{\mathrm{sep}}\leqslant t+s,\ N\geqslant 1\mid \mathcal{F}_{t}).\]
The rate with which the process jumps away from the state \((\mathcal{V}_{t},\mathcal{W}_{t})\) is at most
\[\mu_{1}(\mathcal{V}_{t}):=(d\lambda+1)X(\mathcal{V}_{t})+(\mathsf{v}+\varepsilon )\mathscr{E}(\mathcal{V}_{t}),\]
so the probability above is at most
\[1-\exp\{-\mu_{1}(\mathcal{V}_{t})\cdot s\}\leqslant\mu_{1}(\mathcal{V}_{t}) \cdot s.\]
We have thus proved that (79) is bounded by the desired expression.
We now turn to (80). We start using (81) to bound (80) by
\[\mathds{1}_{\{\tau_{\mathrm{sep}}>t\}}\cdot e^{d\lambda T}\cdot \widehat{\mathbb{E}}\left[\mathds{1}_{\{\tau_{\mathrm{sep}}\leqslant t+s,\ N \geqslant 2\}}\cdot X(\mathcal{V}_{\tau_{\mathrm{sep}}})\ \Big{|}\ \mathcal{F}_{t}\right]\] \[\leqslant\mathds{1}_{\{\tau_{\mathrm{sep}}>t\}}\cdot e^{d \lambda T}\cdot\widehat{\mathbb{E}}\left[\mathds{1}_{\{N\geqslant 2\}} \cdot\max_{u\in[t,T]}X(\mathcal{V}_{u})\ \Big{|}\ \mathcal{F}_{t}\right].\]
By Hölder's inequality, the expectation on the right-hand side is smaller than
\[\mathds{1}_{\{\tau_{\mathrm{sep}}>t\}}\cdot e^{d\lambda T}\cdot\widehat{ \mathbb{P}}\left(N\geqslant 2\mid\mathcal{F}_{t}\right)^{2/3}\cdot\widehat{ \mathbb{E}}\left[\max_{u\in[t,T]}X(\mathcal{V}_{u})^{3}\ \bigg{|}\ \mathcal{F}_{t}\right]^{1/3}. \tag{82}\]
Corollary 2.3 and the Markov property imply that
\[\widehat{\mathbb{E}}\left[\max_{u\in[t,T]}X(\mathcal{V}_{u})^{3}\ \bigg{|}\ \mathcal{F}_{t}\right]^{1/3}\leqslant cX(\mathcal{V}_{t}). \tag{83}\]
Let
\[\mu_{2}(\mathcal{V}_{t}):=2(d\lambda+1)(X(\mathcal{V}_{t})+1)+2(\mathsf{v}+ \varepsilon)(\mathscr{E}(\mathcal{V}_{t})+1).\]
In the event \(\{\tau_{\mathrm{sep}}>t\}\), the process \((\mathcal{V},\mathcal{W})\) jumps away from \((\mathcal{V}_{t},\mathcal{W}_{t})\) with rate smaller than \(\mu_{1}(\mathcal{V}_{t})\) (as previously observed), and after performing a first jump, it performs a second jump with a rate that is smaller than \(\mu_{2}(\mathcal{V}_{t})\). Indeed, after the first jump the number of particles or active edges of \(\mathcal{V}\) and \(\mathcal{W}\) increase by at most \(1\), and the factors \(2\) in the definition of \(\mu_{2}(\mathcal{V}_{t})\) account for the possibility that the first jump is the separation of the two
processes. Letting \((Z_{t})_{t\geqslant 0}\) be a Poisson process with constant rate \(\mu_{2}(\mathcal{V}_{t})\), we bound (on the event \(\{\tau_{\mathrm{sep}}>t\}\)):
\[\widehat{\mathbb{P}}(N\geqslant 2\mid\mathcal{F}_{t})\leqslant \mathbb{P}(Z_{s}\geqslant 2) =1-\mathrm{e}^{-\mu_{2}(\mathcal{V}_{t})s}-\mu_{2}(\mathcal{V}_{t })s\cdot\mathrm{e}^{-\mu_{2}(\mathcal{V}_{t})s}\] \[\leqslant\mu_{2}(\mathcal{V}_{t})s-\mu_{2}(\mathcal{V}_{t})s \cdot\mathrm{e}^{-\mu_{2}(\mathcal{V}_{t})s}\] \[\leqslant(\mu_{2}(\mathcal{V}_{t})s)^{2}.\]
Plugging this bound and (83) back in (82) gives the desired bound.
Proof of Lemma 3.9.: Fix \(t\in[0,T)\). For any \(s\in(0,T-t]\), we have
\[\mathds{1}\{\tau_{\mathrm{sep}}>t\}\cdot\frac{\widehat{\mathbb{E}}[\mathcal{ A}_{t+s}\mid\mathcal{F}_{t}]}{s}\leqslant\mathds{1}\{\tau_{\mathrm{sep}}>t\} \cdot c_{1}(X(\mathcal{V}_{t})+\mathscr{E}(\mathcal{V}_{t}))^{c_{2}}.\]
The random variable on the right-hand side is integrable, by Corollary 2.3. This and the Dominated Convergence Theorem justify the exchange of limit in (32). As explained before (32), this implies that
\[\lim_{s\to 0+}\frac{\widehat{\mathbb{E}}[\mathcal{A}_{t+s}]- \widehat{\mathbb{E}}[\mathcal{A}_{t}]}{s} =\widehat{\mathbb{E}}\left[\mathds{1}\{\tau_{\mathrm{sep}}>t\} \cdot\lim_{s\to 0+}\frac{\widehat{\mathbb{E}}[\mathcal{A}_{t+s}\mid\mathcal{F}_{t} ]}{s}\right]\] \[=\varepsilon\cdot\widehat{\mathbb{E}}[\mathds{1}\{\tau_{\mathrm{ sep}}>t\}\cdot g_{\mathsf{v},\varepsilon}(\mathcal{V}_{t},T-t)].\]
Finally, any function \(g:[0,\infty)\to\mathbb{R}\) which is continuous and has a continuous derivative from the right is necessarily differentiable (with derivative equal to the derivative from the right), so the proof is complete.
|
2308.00615 | Cardiac MRI Orientation Recognition and Standardization using Deep
Neural Networks | Orientation recognition and standardization play a crucial role in the
effectiveness of medical image processing tasks. Deep learning-based methods
have proven highly advantageous in orientation recognition and prediction
tasks. In this paper, we address the challenge of imaging orientation in
cardiac MRI and present a method that employs deep neural networks to
categorize and standardize the orientation. To cater to multiple sequences and
modalities of MRI, we propose a transfer learning strategy, enabling adaptation
of our model from a single modality to diverse modalities. We conducted
comprehensive experiments on CMR images from various modalities, including
bSSFP, T2, and LGE. The validation accuracies achieved were 100.0\%, 100.0\%,
and 99.4\%, confirming the robustness and effectiveness of our model. Our
source code and network models are available at
https://github.com/rxzhen/MSCMR-orient | Ruoxuan Zhen | 2023-07-31T00:01:49Z | http://arxiv.org/abs/2308.00615v1 | # Cardiac MRI Orientation Recognition and Standardization using Deep Neural Networks
###### Abstract
Orientation recognition and standardization play a crucial role in the effectiveness of medical image processing tasks. Deep learning-based methods have proven highly advantageous in orientation recognition and prediction tasks. In this paper, we address the challenge of imaging orientation in cardiac MRI and present a method that employs deep neural networks to categorize and standardize the orientation. To cater to multiple sequences and modalities of MRI, we propose a transfer learning strategy, enabling adaptation of our model from a single modality to diverse modalities. We conducted comprehensive experiments on CMR images from various modalities, including bSSFP, T2, and LGE. The validation accuracies achieved were 100.0%, 100.0%, and 99.4%, confirming the robustness and effectiveness of our model. Our source code and network models are available at [https://github.com/txzhen/MSCMR-orient](https://github.com/txzhen/MSCMR-orient)
## 1 Introduction
Cardiac Magnetic Resonance (CMR) images may exhibit variations in image orientations when recorded in DICOM format and stored in PACS systems. Recognizing and comprehending these differences are of crucial importance in deep neural network (DNN)-based image processing and computation, as DNN systems typically treat images merely as matrices or tensors, disregarding the imaging orientation and real-world coordinates. This study aims to investigate CMR image orientation, with a focus on referencing human anatomy and a standardized real-world coordinate system. The goal is to develop an efficient method for recognizing and standardizing the orientation of CMR images. By achieving this goal, we can ensure consistency and enhance the accuracy of DNN-based image analysis in the context of cardiac MRI.
For CMR images, standardization of their orientations is a prerequisite for subsequent computing tasks utilizing DNN-based methodologies, such as image segmentation [4] and myocardial pathology analysis [1]. Deep learning methods have found widespread use in orientation recognition and prediction tasks. For instance, Wolterink et al. introduced an algorithm that employs a Convolutional Neural Network (CNN) to extract coronary artery centerlines in cardiac CT angiography (CCTA) images [5]. Building upon CMR orientation recognition, our work focuses on developing a method for standardizing and adjusting the image orientations.
This study aims to design a DNN-based approach for achieving orientation recognition and standardization across multiple CMR modalities. Figure 1 illustrates the pipeline of our proposed method. The key contributions of this work are summarized as follows:
1. We propose a scheme to standardize the CMR image orientations and categorize them for classification purposes.
2. We present a DNN-based orientation recognition method tailored for CMR images and demonstrate its transferability to other modalities.
3. We develop a CMR image orientation adjustment tool embedded with an orientation recognition network. This tool greatly facilitates CMR image orientation recognition and standardization in clinical and medical image processing practice.
## 2 Method
In this section, we introduce our proposed method for orientation recognition and standardization. Our framework is built on a categorization of CMR image orientations. We propose a DNN to recognize the orientation of CMR images and embed it into the CMR orientation adjustment tool.
### CMR Image Orientation Categorization
Because of differing data sources and scanning conventions, cardiac magnetic resonance images may be stored in different orientations, and the orientation vector recorded with an image may not correspond to its actual orientation. This can cause problems in tasks such as image segmentation or registration. Taking a 2D image as an example, we designate one orientation as the initial state and label the four corners of the image \(\begin{bmatrix}1&2\\ 3&4\end{bmatrix}\); the orientation of the 2D MR image can then take the 8 variants listed in Table 1. For each image-label pair \((X_{t},\ Y_{t})\), we can transform \(X_{t}\) and \(Y_{t}\) into a chosen orientation to obtain a new image-label pair. If we correctly recognize the orientation of an image, we can apply the inverse transformation to standardize it.
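The 8 orientation labels can be realized with simple flips, transposes, and rotations. The following is a minimal NumPy sketch of this mapping and of the standardization step; it is not the released MSCMR-orient code, and the horizontal/vertical axis convention depends on how the data are stored.

```python
import numpy as np

# A minimal sketch (not the released implementation) of the 8 orientation
# variants of Table 1, applied to a 2D slice.
ORIENTATION_OPS = {
    0: lambda im: im,                  # initial state
    1: np.fliplr,                      # horizontal flip
    2: np.flipud,                      # vertical flip
    3: lambda im: np.rot90(im, 2),     # rotate 180 degrees
    4: lambda im: im.T,                # flip along the main diagonal
    5: lambda im: np.rot90(im, -1),    # rotate 90 degrees clockwise
    6: lambda im: np.rot90(im, 1),     # rotate 270 degrees clockwise
    7: lambda im: im[::-1, ::-1].T,    # flip along the secondary diagonal
}

def apply_orientation(image: np.ndarray, label: int) -> np.ndarray:
    """Transform an image in the initial state into orientation `label`."""
    return ORIENTATION_OPS[label](image)

def standardize(image: np.ndarray, predicted_label: int) -> np.ndarray:
    """Undo a predicted orientation by applying the inverse transformation."""
    inverse = {0: 0, 1: 1, 2: 2, 3: 3, 4: 4, 5: 6, 6: 5, 7: 7}
    return ORIENTATION_OPS[inverse[predicted_label]](image)
```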
### Deep Neural Network
We employ a classical convolutional neural network for orientation recognition. It is a widely adopted approach in image classification tasks, adhering to the standard design pattern for CNNs. The neural network architecture comprises 3 convolutional blocks, each housing a convolutional layer, batch normalization, ReLU activation, and max pooling. These blocks effectively capture features from the input images. Additionally, an average pooling layer and 2 fully connected layers, with 8 units in the output layer, complete the network, enabling orientation prediction.
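As a rough PyTorch sketch of the architecture just described (3 convolutional blocks, an average pooling layer, and 2 fully connected layers with 8 output units); channel widths and kernel sizes are assumptions, not values reported in the paper.

```python
import torch.nn as nn

def conv_block(in_ch: int, out_ch: int) -> nn.Sequential:
    """Convolution + batch normalization + ReLU + max pooling, as described above."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
        nn.MaxPool2d(2),
    )

class OrientationNet(nn.Module):
    """Sketch of the orientation classifier; channel widths are illustrative."""
    def __init__(self, n_classes: int = 8):
        super().__init__()
        self.features = nn.Sequential(
            conv_block(3, 16),   # input: the 3-channel pre-processed slice
            conv_block(16, 32),
            conv_block(32, 64),
        )
        self.pool = nn.AdaptiveAvgPool2d(1)   # average pooling layer
        self.classifier = nn.Sequential(      # 2 fully connected layers
            nn.Linear(64, 32),
            nn.ReLU(inplace=True),
            nn.Linear(32, n_classes),         # 8 orientation classes
        )

    def forward(self, x):
        x = self.pool(self.features(x)).flatten(1)
        return self.classifier(x)
```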
Figure 1: The pipeline of the proposed CMR orientation recognition and standardization method. Initially, the image undergoes pre-processing. Subsequently, the image is input into a CNN to generate a orientation prediction. Guided by this orientation prediction, the adjust tool can standardize the image, ensuring its alignment with the desired orientation.
To train the model effectively, we utilize the cross-entropy loss, which efficiently measures the discrepancy between the predicted orientation and the ground truth orientation label.
### Transfer Learning
When adapting the proposed orientation recognition network to new datasets of different modalities, we employ a transfer learning approach to obtain the transferred model. Initially, we freeze the weights of the convolutional layers and fine-tune the fully connected layers. We repeat this process for the subsequent fine-tuning steps until the model converges. Afterwards, we unfreeze the weights of all layers and proceed to fine-tune the entire model.
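A minimal sketch of this two-stage schedule, assuming the `OrientationNet` sketch above and a standard PyTorch training loop; the optimizer choice, learning rates, and epoch counts are illustrative assumptions.

```python
import torch

def run_epochs(model, loader, loss_fn, optimizer, epochs):
    model.train()
    for _ in range(epochs):
        for images, labels in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(images), labels)
            loss.backward()
            optimizer.step()

def fine_tune(model, loader, loss_fn, stage1_epochs=5, stage2_epochs=5, lr=1e-4):
    """Stage 1: freeze convolutional layers and train the fully connected head.
    Stage 2: unfreeze all layers and fine-tune the entire model."""
    # Stage 1: freeze the convolutional feature extractor.
    for p in model.features.parameters():
        p.requires_grad = False
    head_opt = torch.optim.Adam(model.classifier.parameters(), lr=lr)
    run_epochs(model, loader, loss_fn, head_opt, stage1_epochs)

    # Stage 2: unfreeze everything and fine-tune the whole network.
    for p in model.parameters():
        p.requires_grad = True
    full_opt = torch.optim.Adam(model.parameters(), lr=lr * 0.1)
    run_epochs(model, loader, loss_fn, full_opt, stage2_epochs)
```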
## 3 Experiment
### Dataset
We experiment with the MyoPS dataset [1, 2], which provides three-sequence CMR (LGE, T2, and bSSFP) from 45 patients. We divide the CMR data of the 45 patients into training and validation sets at a ratio of 80% to 20%.
### Data Pre-processing
For each CMR volume, we first apply the 7 transformations in Table 1 so that the dataset covers all 8 orientations. Subsequently, we slice the 3D CMR data into multiple 2D instances.
Given an image-label pair \((X_{t},Y_{t})\), for each \(X_{t}\), we identify the maximum gray value, denoted as \(G\). Subsequently, three truncation operations are performed on \(X_{t}\) using thresholds of \(0.6G,0.8G\), and \(G\) to generate \(X_{1t},X_{2t},\) and \(X_{3t}\), respectively. In this operation, pixels with gray values higher than the threshold are mapped to the threshold gray value. The utilization of different thresholds allows us to capture image characteristics under various gray value window widths, mitigating the impact of extreme gray values. Additionally, grayscale histogram equalization is applied to \(X_{1t},X_{2t},\) and
\begin{table}
\begin{tabular}{c c c c} \hline Label & Operation & Image & Correspondence of coordinates \\ \hline
0 & Initial state & \(\begin{bmatrix}1&2\\ 3&4\end{bmatrix}\) & Target\([x,y,z]=\text{Source}[x,y,z]\) \\
1 & Horizontal flip & \(\begin{bmatrix}2&1\\ 4&3\end{bmatrix}\) & Target\([x,y,z]=\text{Source}[sx-x,y,z]\) \\
2 & Vertical flip & \(\begin{bmatrix}3&4\\ 1&2\end{bmatrix}\) & Target\([x,y,z]=\text{Source}[x,sy-y,z]\) \\
3 & Rotate \(180^{\circ}\) clockwise & \(\begin{bmatrix}4&3\\ 2&1\end{bmatrix}\) & Target\([x,y,z]=\text{Source}[sx-x,sy-y,z]\) \\
4 & Flip along the main diagonal & \(\begin{bmatrix}1&3\\ 2&4\end{bmatrix}\) & Target\([x,y,z]=\text{Source}[y,x,z]\) \\
5 & Rotate \(90^{\circ}\) clockwise & \(\begin{bmatrix}3&1\\ 4&2\end{bmatrix}\) & Target\([x,y,z]=\text{Source}[sx-y,x,z]\) \\
6 & Rotate \(270^{\circ}\) clockwise & \(\begin{bmatrix}2&4\\ 1&3\end{bmatrix}\) & Target\([x,y,z]=\text{Source}[y,sy-x,z]\) \\
7 & Flip along the secondary diagonal & \(\begin{bmatrix}4&2\\ 3&1\end{bmatrix}\) & Target\([x,y,z]=\text{Source}[sx-y,sy-x,z]\) \\ \hline \end{tabular}
\end{table}
Table 1: Orientation Categorization of 2D CMR Images. Here, \(sx\), \(sy\) and \(sz\) respectively denote the size of image in X-axis, Y-axis and Z-axis.
\(X_{3t}\), resulting in \(X^{\prime}_{1t},X^{\prime}_{2t}\), and \(X^{\prime}_{3t}\). Finally, we concatenate these three 2D images into a 3-channel image \([X^{\prime}_{1t},X^{\prime}_{2t},X^{\prime}_{3t}]\), which serves as the input to our proposed DNN.
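A sketch of this truncation-and-equalization step; it uses scikit-image for histogram equalization as an assumption and is an illustration, not the released pipeline.

```python
import numpy as np
from skimage import exposure  # assumption: scikit-image used for equalization

def preprocess_slice(x: np.ndarray) -> np.ndarray:
    """Build the 3-channel input [X'_1, X'_2, X'_3] from a 2D slice X.

    Pixels above each threshold (0.6G, 0.8G, G) are clipped to the threshold,
    each truncated image is histogram-equalized, and the three results are
    stacked along the channel axis.
    """
    g = float(x.max())                        # maximum gray value G
    channels = []
    for frac in (0.6, 0.8, 1.0):
        truncated = np.minimum(x, frac * g)   # clip to the threshold
        equalized = exposure.equalize_hist(truncated)
        channels.append(equalized.astype(np.float32))
    return np.stack(channels, axis=0)         # shape: (3, H, W)
```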
We perform data augmentation by randomly rotating the image slightly and applying random crops and resizing. These approaches introduce variability in the orientation of the images, which aids in improving model generalization and enhances robustness to varying image sizes and aspect ratios.
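A possible torchvision-style expression of this augmentation; the rotation range, output size, and crop scale are assumptions rather than reported hyperparameters.

```python
from torchvision import transforms

# Illustrative augmentation: small random rotations plus random crop-and-resize.
augmentation = transforms.Compose([
    transforms.RandomRotation(degrees=10),
    transforms.RandomResizedCrop(size=224, scale=(0.8, 1.0)),
])
```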
### Results
We initially train the model on the bSSFP modality and subsequently fine-tune it on the T2 and LGE modalities. The training process is depicted in Figure 2. The average accuracy on the dataset is presented in Table 2. The results highlight the model's ability to transfer learning to other modalities, showcasing a remarkable level of accuracy.
## 4 Conclusion
We have introduced a DNN-based orientation recognition and standardization method for multi-sequence CMR images. The experimental results validate the effectiveness of the orientation recognition network in accurately classifying the orientation of multi-sequence CMR images. Our future research will focus on expanding the categorization of CMR image orientations and refining the classification network to further improve classification accuracy.
|
2309.08160 | Cross-Modal Synthesis of Structural MRI and Functional Connectivity
Networks via Conditional ViT-GANs | The cross-modal synthesis between structural magnetic resonance imaging
(sMRI) and functional network connectivity (FNC) is a relatively unexplored
area in medical imaging, especially with respect to schizophrenia. This study
employs conditional Vision Transformer Generative Adversarial Networks
(cViT-GANs) to generate FNC data based on sMRI inputs. After training on a
comprehensive dataset that included both individuals with schizophrenia and
healthy control subjects, our cViT-GAN model effectively synthesized the FNC
matrix for each subject, and then formed a group difference FNC matrix,
obtaining a Pearson correlation of 0.73 with the actual FNC matrix. In
addition, our FNC visualization results demonstrate significant correlations in
particular subcortical brain regions, highlighting the model's capability of
capturing detailed structural-functional associations. This performance
distinguishes our model from conditional CNN-based GAN alternatives such as
Pix2Pix. Our research is one of the first attempts to link sMRI and FNC
synthesis, setting it apart from other cross-modal studies that concentrate on
T1- and T2-weighted MR images or the fusion of MRI and CT scans. | Yuda Bi, Anees Abrol, Jing Sui, Vince Calhoun | 2023-09-15T05:03:08Z | http://arxiv.org/abs/2309.08160v1 | Cross-modal synthesis of structural MRI and functional connectivity networks via conditional vit-gans
###### Abstract
The cross-modal synthesis between structural magnetic resonance imaging (sMRI) and functional network connectivity (FNC) is a relatively unexplored area in medical imaging, especially with respect to schizophrenia. This study employs conditional Vision Transformer Generative Adversarial Networks (cViT-GANs) to generate FNC data based on sMRI inputs. After training on a comprehensive dataset that included both individuals with schizophrenia and healthy control subjects, our cViT-GAN model effectively synthesized the FNC matrix for each subject, and then formed a group difference FNC matrix, obtaining a Pearson correlation of 0.73 with the actual FNC matrix. In addition, our FNC visualization results demonstrate significant correlations in particular subcortical brain regions, highlighting the model's capability of capturing detailed structural-functional associations. This performance distinguishes our model from conditional CNN-based GAN alternatives such as Pix2Pix. Our research is one of the first attempts to link sMRI and FNC synthesis, setting it apart from other cross-modal studies that concentrate on T1- and T2-weighted MR images or the fusion of MRI and CT scans.
Yuda Bi, Anees Abrol, Jing Sui, and Vince Calhoun (Tri-institutional Center for Translational Research in Neuroimaging and Data Science, GSU/GATech/Emory). Index terms: magnetic resonance imaging, generative model, vision transformer, image synthesis.
## 1 Introduction
Generative adversarial networks (GANs) originated as a novel approach to create generative models using a generator and discriminator trained together [1]. GANs have revolutionized image generation, style adaptation, and data augmentation [2, 3]. The use of modality translation tasks in medical imaging is especially notable [4]. The vision transformer (ViT) has revolutionized visual tasks by modeling long-range dependencies and offering a reliable alternative to CNNs by utilizing language processing architectures [5]. The ViT-GAN combines ViT's characteristics with GAN, achieving superior image synthesis performance without CNN components [6, 7]. However, the application of ViT-GANs to medical datasets, particularly brain MRI, remains largely unexplored.
This study presents the conditional ViT-GAN (cViT-GAN), a new generative framework for synthesizing functional network connectivity (FNC) data from sMRI, which uses the sMRI volume and a class identifier as conditions and is regularized by a newly designed correlation loss. FNC expresses temporal correlations between independent neural activities and is commonly depicted as a 2D matrix. To derive FNC matrices, independent component analysis (ICA) was applied to fMRI datasets from the same cohort as the sMRI samples [7]. Existing literature highlights the intricate relationship between structural and functional MRI modalities [8, 9], which are often complementary in nature. Early research indicates that the integration of sMRI and fMRI could enhance diagnostic accuracy for disorders such as schizophrenia [10]. Although fMRI data are rich in temporal information, analyzing them can be computationally demanding. FNC serves as a more efficient alternative, summarizing complex temporal patterns into a 2D layout without significant diagnostic loss. Our research aims to characterize the relationship between sMRI and FNC in diagnosing and understanding schizophrenia. Our cViT-GAN model also produced promising FNC visualizations: from sMRI data alone, it generated FNC matrices whose functional zones closely match those of the original FNC data. This research sheds light on the pathogenesis of schizophrenia by pinpointing these functional areas.
## 2 Related Works
Recent studies have applied GANs and transformer-based architectures to medical imaging tasks, showing superior performance in image synthesis and reconstruction [11, 12, 13, 14]. Specifically, Pan et al. used a transformer-based encoder for translating MRI modalities, demonstrating enhanced performance [14]. Li et al. introduced MedViTGAN for generating synthetic histopathology images, incorporating an auxiliary classifier and adaptive loss mechanism [15]. Additionally, Abrol et al. showed that fusing multi-feature sMRI and fMRI data via deep learning significantly improves the prediction of Alzheimer's disease progression [16].
## 3 Methods
In this section, we explain the methods employed in our cViT-GAN model for image synthesis, specifically for generating FNC matrices from sMRI inputs. We detail the generator and discriminator architectures, as well as the composite loss functions designed to achieve highly accurate FNC matrix synthesis. Importantly, our model incorporates multiple loss components to obtain more accurate results.
### Conditional ViTGAN
The cViT-GAN consists of two main components: a generator \(G\) and a discriminator \(D\). The architecture leverages the principles of GAN to optimize the following objective function:
\[\min_{G}\max_{D}\mathcal{L}(G,D)=\mathbb{E}_{y\sim p_{\text{train}}(y)}[\log D(y)]+\mathbb{E}_{x\sim p_{\text{train}}(x)}[\log(1-D(G(x)))] \tag{1}\]
In the objective function \(\mathcal{L}(G,D)\), \(\mathbb{E}_{y\sim p_{\text{train}}(y)}[\log D(y)]\) represents the expectation of the log-likelihood that the discriminator \(D\) correctly classifies real FNC matrices \(y\). The term \(\mathbb{E}_{x\sim p_{\text{train}}(x)}[\log(1-D(G(x)))]\) reflects the expectation that \(D\) misclassifies the FNC matrices \(\hat{y}=G(x)\) generated from sMRI inputs \(x\).
**Generator**: The generator's purpose is to transform 3D sMRI inputs \(x\) into a 2D network-by-network Functional Network Connectivity (FNC) matrix \(\hat{y}\). This conversion is accomplished through a series of intricate transformer blocks, making use of the inherent patterns and features of the sMRI data.
_Position Embedding with Class Identifier_: Before processing by the generator, the 3D sMRI undergoes a preliminary transformation. It is divided into a set of 3D patches, which are subsequently mapped to a linear embedding space to derive position embedding vectors. The introduction of a class identifier is crucial, as it provides the model with context about the nature of the input, i.e., whether it comes from a schizophrenia patient or a healthy control.
_Transformer Block_: The heart of the generator is the transformer block. Here, the structure captures the complexities of the data through a combination of multi-head self-attention and position-wise feed-forward neural networks. The sequential nature of this design allows the generator to capture long-range dependencies in the data. The formulas show the computation:
\[Z =\text{LayerNorm}(\text{SelfAttention}(X)+X) \tag{2}\] \[\text{Output} =\text{LayerNorm}(Z+\text{FFN}(Z)) \tag{3}\]
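Equations (2) and (3) describe a standard post-norm transformer block; a compact PyTorch sketch follows, where the embedding dimension, head count, and feed-forward width are illustrative assumptions.

```python
import torch.nn as nn

class TransformerBlock(nn.Module):
    """Sketch of Eqs. (2)-(3): post-norm self-attention plus feed-forward block."""
    def __init__(self, dim: int = 256, heads: int = 8, ffn_mult: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)
        self.ffn = nn.Sequential(
            nn.Linear(dim, ffn_mult * dim),
            nn.GELU(),
            nn.Linear(ffn_mult * dim, dim),
        )
        self.norm2 = nn.LayerNorm(dim)

    def forward(self, x):                   # x: (batch, tokens, dim)
        attn_out, _ = self.attn(x, x, x)
        z = self.norm1(attn_out + x)        # Eq. (2)
        return self.norm2(z + self.ffn(z))  # Eq. (3)
```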
_MLP for FNC Fragments_: After the transformer blocks, each resulting embedding vector is further refined by a multi-layer perceptron (MLP). This step generates fragments, i.e., smaller pieces of the desired FNC matrix. The final stage stitches these fragments together, yielding the synthesized FNC matrix \(\hat{y}\).
**Discriminator**: In any generative setup, the discriminator \(D\) plays a pivotal role. In this architecture, \(D\) is trained to discern between authentic FNC matrices \(y\) and the ones generated by our model \(\hat{y}\). Its design draws inspiration from the ViT model, which has established itself as a potent tool for classification tasks in the realm of images.
Figure 1: The cViT-GAN framework with the embedded class identifier for sMRI to FNC synthesis: The model is regularized by VGG perceptual loss and correlation loss.
### Loss Functions
The standard GAN loss may not be sufficient for capturing the intricate relationships in FNC matrices. Therefore, our model employs a composite loss function, which incorporates several other loss components.
**ViT Perceptual Loss:** The ViT Perceptual loss aims to capture high-level features and patterns in FNC matrices, moving beyond mere pixel-level differences. Instead of relying on the hierarchical structure of CNN like VGG, we utilize the attention mechanisms of a pre-trained ViT-16 network to calculate this loss between real and generated FNC matrices. The ViT Perceptual loss is defined as:
\[\mathcal{L}_{\text{ViT\_16}}=\frac{1}{W_{l}H_{l}}\sum_{i,j}\left(F^{l}(y)_{ij }-F^{l}(\hat{y})_{ij}\right)^{2} \tag{4}\]
Here, \(F^{l}(\cdot)\) is the feature map extracted from the \(l\)-th block of the ViT-16 network. \(W_{l}\) and \(H_{l}\) are the width and height, respectively, of the feature map at block \(l\).
**Correlation Loss:** The correlation loss captures relationships between corresponding regions in the FNC matrix. This is particularly important as FNC matrices often show similarity not just in absolute values but also in the relative arrangement of values. The correlation loss is defined as:
\[\mathcal{L}_{\text{corr}}=1-\frac{\text{cov}(y,\hat{y})}{\sigma_{y}\sigma_{ \hat{y}}} \tag{5}\]
In this equation, \(\text{cov}(y,\hat{y})\) is the covariance between \(y\) and \(\hat{y}\), and \(\sigma_{y}\) and \(\sigma_{\hat{y}}\) are their respective standard deviations.
**Total Losses:** In total, the loss function of our cViT-GAN is:
\[\mathcal{L}_{G}=\mathcal{L}_{\text{GAN}}+\lambda_{1}\mathcal{L}_{\text{MSE}}+ \lambda_{2}\mathcal{L}_{\text{ViT\_16}}+\lambda_{3}\mathcal{L}_{\text{corr}} \tag{6}\]
Here, \(\mathcal{L}_{\text{GAN}}\) is the GAN loss for the generator. The hyperparameters \(\lambda_{1}\), \(\lambda_{2}\), and \(\lambda_{3}\) control the balance of the MSE loss, ViT perceptual loss, and correlation loss, respectively.
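A sketch of the composite generator loss in Eq. (6), with the GAN term written in the common non-saturating binary cross-entropy form; the ViT feature extractor callable, the flattening used in the correlation term, and the \(\lambda\) values are illustrative assumptions rather than the paper's implementation.

```python
import torch
import torch.nn.functional as F

def correlation_loss(y: torch.Tensor, y_hat: torch.Tensor) -> torch.Tensor:
    """Eq. (5): one minus the Pearson correlation between real and generated FNC."""
    y, y_hat = y.flatten(), y_hat.flatten()
    yc, yhc = y - y.mean(), y_hat - y_hat.mean()
    corr = (yc * yhc).sum() / (yc.norm() * yhc.norm() + 1e-8)
    return 1.0 - corr

def generator_loss(d_fake, y, y_hat, vit_features, lambdas=(1.0, 1.0, 1.0)):
    """Eq. (6): GAN loss + MSE + ViT perceptual loss + correlation loss.

    `d_fake` is the discriminator output (logits) on generated FNCs;
    `vit_features` is an assumed callable returning feature maps from a
    pre-trained ViT, used for the perceptual term of Eq. (4).
    """
    l_gan = F.binary_cross_entropy_with_logits(d_fake, torch.ones_like(d_fake))
    l_mse = F.mse_loss(y_hat, y)
    l_vit = F.mse_loss(vit_features(y_hat), vit_features(y))   # Eq. (4)
    l_corr = correlation_loss(y, y_hat)
    lam1, lam2, lam3 = lambdas
    return l_gan + lam1 * l_mse + lam2 * l_vit + lam3 * l_corr
```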
## 4 Experiments
**Datasets**: We employed two datasets related to clinical research on schizophrenia, sourced from multiple sites in the United States and China, featuring a total of 827 and 815 participants, respectively. The first dataset amalgamated data from three studies--fBIRN, MPRC, and COBRE--using nine different 3-Tesla scanners with standard echo planar imaging (EPI) sequences. The second dataset originated in China, obtained using three different 3-Tesla scanners. Both datasets comprised a mix of control individuals and schizophrenia patients and included detailed demographic characteristics such as mean age and gender distribution.
**Preprocessing**: The sMRI data were preprocessed with a pipeline consisting of gray-scale segmentation and normalization. We enhanced the robustness of our model by introducing data augmentation techniques, including random rotation and Gaussian noise addition. For the fMRI data, preprocessing involved slice timing correction, realignment, and normalization. The FNC and sMRI feature vectors were derived following methods from our previous research [7]. Specifically, we employed independent component analysis (ICA) to extract time courses, which were subsequently used to compute the FNC matrices via cross-correlation.
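The FNC construction step can be sketched as follows, assuming the ICA time courses are already available as an array; this illustrates only the cross-correlation step described above.

```python
import numpy as np

def fnc_from_timecourses(timecourses: np.ndarray) -> np.ndarray:
    """Compute an FNC matrix as pairwise Pearson correlations between ICA
    time courses of shape (n_components, n_timepoints)."""
    return np.corrcoef(timecourses)
```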
**Models**: In our experiment, we deployed the cViT-GAN model, consisting of both a generator and a discriminator as shown in Figure 1. For comparative analysis, we included the following baseline models: **1) DCGAN**: A deep convolutional generative adversarial network (DCGAN), where both the generator and the discriminator are built using CNNs. **2) Pix2Pix** [3]: This model utilizes a U-Net architecture for the generator and a PatchGAN architecture for the discriminator, focusing primarily on image-to-image translation tasks. **3) cViT-GAN-0**: In this variation of the cViT-GAN model, we only use sMRI as the condition for generating outputs, without including any class identifiers during concatenation. **4) cViT-GAN-1**: This variant of cViT-GAN uses both sMRI and class identifiers as conditions, but excludes the correlation loss and the perceptual loss during training.
**Training and Evaluation**: All models were trained using the AdamW optimizer. The initial learning rate was set uniformly to \(1\times 10^{-4}\) across all architectures, including cViT-GAN, its variations, DCGAN, and Pix2Pix. The training lasted for 300 epochs and was executed on 8 NVIDIA V100 GPUs in a distributed environment. Both the generator and discriminator used a MultiStepLR scheduler, with the learning rate being decreased by a factor of 0.1 at epoch 50 and again at epoch 150. To assess the models' performance comprehensively, we employed a 10-fold cross-validation approach. For evaluation purposes, we compared the generated FNC matrices against actual FNC matrices produced by each model. Our evaluation metrics comprised Pearson correlation and cosine similarity in order to capture both linear and angular relationships between the real and generated matrices. These metrics offer a thorough comprehension of the extent to which the produced FNC matrices emulate the structural and functional traits existing in the real data.
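Both evaluation metrics can be computed directly from the flattened FNC matrices; a small NumPy/SciPy sketch of this comparison step is given below.

```python
import numpy as np
from scipy.stats import pearsonr

def fnc_similarity(real_fnc: np.ndarray, generated_fnc: np.ndarray):
    """Pearson correlation and cosine similarity between two FNC matrices."""
    a, b = real_fnc.ravel(), generated_fnc.ravel()
    pearson, _ = pearsonr(a, b)
    cosine = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return pearson, cosine
```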
## 5 Results
In this section, we compare the experimental results of our conditional cViT-GAN model with various baseline models, specifically focusing on generating "group difference FNC" between the healthy control group (HC) and schizophrenia patients (SZ). We employ Pearson correlation and cosine similarity as our evaluation metrics for these generative methods. Beyond quantitative measures, we also spotlight functional domains where our generated group difference FNC closely matches the actual FNC. This high degree of comparability may underscore significant structural-functional relationships between these groups.
**Basic results**: Table 1 provides a comprehensive comparison of the Pearson correlation and cosine similarity scores among various generative models, including our proposed cViT-GAN. In terms of Pearson correlation and cosine similarity, the cViT-GAN outperforms all other baseline models, achieving a score of \(0.731\) and \(0.732\) respectively. This highlights the advantage of using a ViT architecture for both the generator and discriminator along with class identifiers and the newly designed correlation loss.
**FNC domain visualizations**: This analysis provides visual representations of group difference functional network connectivity matrices (HC-SZ). Figure 2 shows the comparison between the generated group difference FNC with the real one. Remarkably, our cViT-GAN model is proficient at generating FNC matrices that closely mimic actual group difference FNC data, specifically in subcortical regions such as CB-SC, CB-AUD, CB-SM, CB-VS, CB-CC, CB-DM, and CB-CB. In these critical subcortical areas, the correlation between synthetic FNC derived from sMRI and actual group difference FNC data reaches an impressive similarity of up to 0.85. This pivotal finding not only underscores the importance of subcortical structures in distinguishing between HC and SZ groups but also validates the accuracy and utility of our cViT-GAN model in capturing these essential structural-functional connections. The high degree of similarity in subcortical regions suggests that our cViT-GAN model is exceptionally adept at reproducing complex, real-world neurological patterns, especially those that mark differences between healthy and pathological states. This offers promising avenues for advancing our understanding of disorders like schizophrenia, equipping us with more accurate diagnostic tools and targeted treatment strategies. Moreover, our model also revealed significant similarities in differential values for other region pairs, including SC-SM, CC-CC, SM-DM, and VS-DM, with moderate similarities noted in pairs like VS-AUD and CC-SM. These observations contribute additional layers of understanding to how differences in functional networks between the HC and SZ groups might be influenced by foundational structural abnormalities. Collectively, this knowledge is pivotal for the development of improved diagnostic methods and treatment plans that can effectively tackle the complexities of conditions like schizophrenia.
## 6 Conclusion
In conclusion, our study reveals that sMRI data can closely reflect FNC, offering the potential for one imaging modality to inform the other in diagnostics. This deepens our understanding of the brain's structure-function relationship, paving the way for personalized medicine and disease prediction. Our work sets the stage for holistic biomarkers through integrated neuroimaging, and the generative model enables FNC simulation under specific pathological states, yielding new insights into brain functionality and behavior.
\begin{table}
\begin{tabular}{c|c|c|c|c|c|c} \hline Model & Generator & Discriminator & Class Identifier & Correlation Loss & Pearson & Cosine \\ \hline cViT-GAN & ViT & ViT & Yes & Yes & **0.731** & **0.732** \\ \hline DCGAN & U-Net & MLP & Yes & Yes & 0.695 & 0.693 \\ \hline Pix2Pix [3] & U-Net & PatchGAN & Yes & Yes & 0.719 & 0.714 \\ \hline cViT-GAN-0 & ViT & ViT & No & Yes & 0.543 & 0.544 \\ \hline cViT-GAN-1 & ViT & ViT & Yes & No & 0.724 & 0.722 \\ \hline \end{tabular}
\end{table}
Table 1: Comparison of Pearson correlation and cosine similarity of cViT-GAN and baselines.
Figure 2: Generated FNC (a) and real FNC (b) for the group difference analysis of schizophrenia. |
2301.13404 | Opaque Contracts | Firms have access to abundant data on market participants. They use these
data to target contracts to agents with specific characteristics, and describe
these contracts in opaque terms. In response to such practices, recent proposed
regulations aim to increase transparency, especially in digital markets. In
order to understand when opacity arises in contracting and the potential
effects of proposed regulations, we study a moral hazard model in which a
risk-neutral principal faces a continuum of weakly risk-averse agents. The
agents differ in an observable characteristic that affects the payoff of the
principal. In a described contract, the principal sorts the agents into groups,
and to each group communicates a distribution of output-contingent payments.
Within each group, the realized distribution of payments must be consistent
with the communicated contract. A described contract is transparent if the
principal communicates the realized contract to the agent ex-ante, and
otherwise it is opaque. We provide a geometric characterization of the
principal's optimal described contract as well as conditions under which the
optimal described mechanism is transparent and opaque. We apply our results to
the design and description of driver payment schemes on ride-hailing platforms. | Andreas Haupt, Zoe Hitzig | 2023-01-31T04:45:57Z | http://arxiv.org/abs/2301.13404v1 | # Opaque Contracts
###### Abstract
Firms have access to abundant data on market participants. They use these data to target contracts to agents with specific characteristics, and describe these contracts in opaque terms. In response to such practices, recent proposed regulations aim to increase transparency, especially in digital markets. In order to understand when opacity arises in contracting and the potential effects of proposed regulations, we study a moral hazard model in which a risk-neutral principal faces a continuum of weakly risk-averse agents. The agents differ in an observable characteristic that affects the payoff of the principal. In a _described contract_, the principal sorts the agents into groups, and to each group communicates a distribution of output-contingent payments. Within each group, the realized distribution of payments must be consistent with the communicated contract. A described contract is _transparent_ if the principal communicates the realized contract to the agent ex-ante, and otherwise it is _opaque_. We provide a geometric characterization of the principal's optimal described contract as well as conditions under which the optimal described mechanism is transparent and opaque.
## 1 Introduction
Firms have access to abundant data on consumers and employees. They use these data to tailor contracts and market rules to specific participants, optimizing pricing, payment and content decisions with predictive algorithms.
Such tailored rules are often described to market participants in opaque terms. Ridehailing apps deploy complex pricing schemes and shape drivers' expectations about payment through summary statistics. Platforms personalize content to particular users and offer limited descriptions about how users' personal information influences the content they see. Employers have intricate rubrics for determining employee pay and promotions, but may describe these rubrics to employees and prospective employees incompletely. In response to such practices, recent laws and proposed regulations in the United States and the European Union aim to increase transparency in consumer and labor markets.1
When do firms offer opaque contracts, and how does opacity affect consumers and employees? In this paper, we study how firms (and other principals) design and describe contracts for agents. In a _described contract_, a principal communicates different contracts to different groups of agents. Within each group, the ex-post distribution of outcomes must be consistent with the (possibly stochastic) contract communicated ex-ante.
The optimal described contract resolves a novel tradeoff. Relative to transparent contracts, opaque contracts benefit the principal by creating a wedge between the agents' expected payments and true payments. At the same time, opacity hurts the principal when agents are risk averse, by introducing new uncertainty into the contract. The principal must compensate the agents for taking on additional risk.
We begin with a stylized example that introduces the key elements of our model.
### Introductory Example
A firm plans to introduce year-end bonuses for employees to induce higher effort. Since they cannot contract on effort, the firm will offer bonuses to employees who meet a performance target. The firm commits to the bonus schemes they communicate ex-ante--honoring commitments is important for employee trust and retention, and helps the firm steer clear of legal troubles.
The firm has a wealth of data on employee performance that allows them to estimate the effective cost of paying each employee a bonus of $x. These data reveal trends about employees in two divisions of equal size: employees in Division 0 are more expensive to incentivize--inclusive of taxes and other administrative and compliance costs, it costs four times more per dollar to incentivize employees in Division 0 than it does to incentivize employees in Division 1.
Suppose the employees are risk-averse in payments \((x)\) and have quadratic effort \((a)\) costs. In particular, assume that agents have utility \(u(a,x)=ax^{\nicefrac{{1}}{{2}}}-\nicefrac{{1}}{{2}}a^{2}\), where \(a\) is the agent's effort, which also equals the probability that the agent meets their performance target. Suppose further that the firm is risk-neutral and the value to the firm of each employee meeting their performance target is the same. Then the firm's optimal division-wide bonus schemes are given by:
Division 0
Employees who meet their performance targets receive bonuses of \(\nicefrac{{1}}{{3}}\).
\[(\hat{x}_{0})\]
Division 1
Employees who meet their performance targets receive bonuses of \(\nicefrac{{4}}{{3}}\).
\[(\hat{x}_{1})\]
that is, \(x_{0}=\nicefrac{{1}}{{3}}\) and \(x_{1}=\nicefrac{{4}}{{3}}.\) Given these bonuses, the employees' optimal actions are \(a_{0}=\nicefrac{{1}}{{\sqrt{3}}}\) and \(a_{1}=\nicefrac{{2}}{{\sqrt{3}}}.\)2 Assuming that the value of each employee meeting their target is normalized to 1, the firm's payoff is \(v=\nicefrac{{1}}{{2}}a_{0}(1-x_{0})+\nicefrac{{1}}{{2}}a_{1}(1-\nicefrac{{1}}{{4}}x_{1})=\nicefrac{{1}}{{\sqrt{3}}}\approx.58\).
Footnote 2: Where \(a_{0}\) and \(a_{1}\) are determined by the first-order condition with respect to \(a\) of the utility function of employees in Division 0 and Division 1, respectively.
Now suppose the firm instead decides to make a firm-wide commitment, offering the same bonus scheme to all employees. Taking advantage of the fact that the firm is both
_designing the bonus scheme_ and _choosing what to say about it_, they could take a more opaque approach. They could say:
All Employees
Employees who meet their performance targets receive an average bonus of \(\nicefrac{{9}}{{8}}\). Half of employees will receive a bonus of \(\nicefrac{{1}}{{4}}\) and the other half will receive a bonus of \(2\).
\[(\hat{x}_{01})\]
and in fact carry out the scheme
\[x_{01}=\begin{cases}\nicefrac{{1}}{{4}}&\text{if in Division }0\\ 2&\text{if in Division }1.\end{cases}\]
Under this scheme, all employees take the same action assuming that the contract they face is equally likely to be \(\nicefrac{{1}}{{4}}\) or \(2\), i.e. \(a=\nicefrac{{1}}{{2}}(\nicefrac{{1}}{{4}})^{\nicefrac{{1}}{{2}}}+\nicefrac{{1}}{{2}}(2)^{\nicefrac{{1}}{{2}}}=\nicefrac{{1}}{{4}}+\nicefrac{{1}}{{\sqrt{2}}}\).3 The principal's payoff is \(v=\nicefrac{{1}}{{2}}a(1-x_{01}(0))+\nicefrac{{1}}{{2}}a\left(1-\nicefrac{{1}}{{4}}x_{01}(1)\right)=\nicefrac{{5}}{{8}}(\nicefrac{{1}}{{4}}+\nicefrac{{1}}{{\sqrt{2}}})\approx.6.\) Thus, the firm can do better by introducing a firm-wide, instead of division-by-division, bonus scheme. This is because in the opaque scheme, the firm induces the same action from all employees, but then pays the employees differently. Compared to the transparent scheme, employees in Division \(0\) take a higher action in the opaque scheme but receive lower pay, while employees in Division \(1\) take a lower action but receive higher pay. On net, this benefits the firm because they can get higher actions overall but pay more to the "cheap" employees in Division \(1\) than to the "expensive" ones in Division \(0\).
Footnote 3: Where \(a\) is determined by taking the first-order condition of all employees’ utility function with respect to \(a\).
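As a quick numerical check of the two payoffs computed above, the following sketch re-implements the closed-form best responses (\(a^{*}=\mathbb{E}[x^{\nicefrac{{1}}{{2}}}]\)) and the firm's payoffs; it is an illustration of the arithmetic only, not part of the model.

```python
import math

# Cost per bonus dollar, normalized to 1 for Division 0 and 1/4 for Division 1.
cost = {0: 1.0, 1: 0.25}

# With u(a, x) = a * sqrt(x) - a^2 / 2, the optimal action equals E[sqrt(x)].
# Transparent scheme: each division is told (and receives) its own bonus.
bonus_t = {0: 1 / 3, 1: 4 / 3}
action_t = {d: math.sqrt(bonus_t[d]) for d in (0, 1)}
payoff_t = 0.5 * sum(action_t[d] * (1 - cost[d] * bonus_t[d]) for d in (0, 1))

# Opaque scheme: everyone is told the bonus is 1/4 or 2 with equal probability;
# Division 0 in fact receives 1/4 and Division 1 receives 2.
bonus_o = {0: 0.25, 1: 2.0}
action_o = 0.5 * math.sqrt(0.25) + 0.5 * math.sqrt(2.0)
payoff_o = 0.5 * sum(action_o * (1 - cost[d] * bonus_o[d]) for d in (0, 1))

print(round(payoff_t, 3), round(payoff_o, 3))  # approximately 0.577 and 0.598
```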
Note that the firm's gain from the opaque scheme requires that the employees cannot deduce that their payments in fact depend on their division. Although some employees may try to infer how the firm decides who gets the lower bonus versus the higher bonus, there is an overwhelming number of factors to consider--it could depend on division, but it could also depend on region, on whether the employee's position is client-facing, on whether the employee works remotely, and so forth. In the face of such complexity, a modest modeling approach is to assume that all employees can do is take the firm's communication at face value.
So far we have only considered two possibilities: in the first "transparent" scheme, the firm communicates a different contract to each division, and in the second "opaque" scheme, the firm communicates a single contract to all employees. Could the firm get an even higher payoff from the bonus scheme by additionally restructuring the divisions? That is, suppose the firm moved some of the employees in Division \(1\) to Division \(0\), creating Divisions \(0^{\prime}\) (mix of "expensive" and "cheap" employees) and Division \(1^{\prime}\) (all "cheap" employees). Would the firm get an even higher payoff from offering an opaque scheme within the newly created divisions? And why stop there--could the firm do better by creating any arbitrary grouping of cheap and expensive agents, and communicating different opaque contracts to these groups? In this example, the answer is no: The firm cannot do strictly better than offering a single opaque contract to the whole firm.4 The model in this paper develops tools for understanding the firm's optimal strategy in the scenario presented here.
Footnote 4: The firm could also split the employees into divisions or any other grouping that preserved the composition of the overall firm, and attain the optimal payoff. That is, the firm could create two divisions Division \(0^{\prime}\) and \(1^{\prime}\) in which \(\frac{1}{2}\) of the employees in each division came from Division \(0\) and the other half from Division \(1\). The calculation for the exact optimal described contract can be found in Appendix B—it is in fact not too different from the contract \(\hat{x}_{01}\).
### Overview
The preceding example illustrates the key components of our moral hazard model with descriptions. A principal faces a continuum of weakly risk-averse agents, who each have a payoff-relevant observable characteristic. The principal wants to incentivize effort, but effort is costly for the agents and non-contractible. So, the principal offers contracts contingent on an output that is correlated with effort.
The principal chooses a _described contract_, which has three elements. First, the principal chooses a _sorting function_ that sorts agents into different contracts based on their observable characteristics. In the introductory example, the sorting function determines whether all employees are grouped together or sorted into divisions, as well as the composition of "cheap" and "expensive" agents in each division. Second, the principal chooses a contract to _communicate_ to each group of agents. In the example, these are the statements \((\hat{x}_{0},\hat{x}_{1})\) in the transparent contract and \(\hat{x}_{01}\) in the opaque contract. Finally, the principal chooses a contract to _realize_ for each specific agent, with the _consistency_ requirement that within each group, the distribution of realized payments is consistent with the communicated contract. In the transparent scheme in the example, the realized contract \((x_{0},x_{1})\) is trivially consistent with the communicated contract \((\hat{x}_{0},\hat{x}_{1})\) because the two coincide exactly. In the opaque scheme in the example, the realized contract \(x_{01}\) is consistent with the communicated contract \(\hat{x}_{01}\) because it results in the communicated distribution of payments.
The principal knows that agents take the communicated contract at face value, and solves for the optimal described mechanism. We focus on when the optimal described contract is _transparent_--that is, when the realized contract is communicated to the agent ex-ante. In such cases, it would be redundant for regulators to impose transparency requirements, as firms are already incentivized to be transparent. When a described contract is not transparent, we say it is _opaque_. A particular type of opaque contract is a _fully coarse_ contract: a described contract is fully coarse if there is only one communicated contract. When the optimal described contract is fully coarse, the principal may have an easier time implementing it--for example, if the firm in the introductory example did not have the power to additionally restructure the divisions, it would have only the optimal transparent and the optimal fully coarse contract to choose from.5
Footnote 5: In Appendix C, we study a modified version of the problem in which the principal is constrained to offering _coarse_ contracts, i.e. contracts that can group agents together based on their characteristics, but all agents with a particular characteristic must belong to the same group.
In section 3, we provide a geometric characterization of the principal's problem. Our approach relies on concavification techniques familiar from Aumann and Maschler (1995) and the literature on persuasion and information design (Kamenica and Gentzkow, 2011, ff). This technique allows us to gain insight into a joint contract design and information design problem, where the two parts of the design problem interact non-trivially: The optimal output-contingent payment scheme (contract design) depends on the composition of the group of agents to whom it is communicated (information design), and vice versa. This geometric characterization allows us to investigate the _value of opacity_ for the principal, defined as the difference between the principal's payoff from the optimal opaque mechanism and from the optimal transparent mechanism. In addition to investigating the value of opacity for the principal, we study how opacity affects agent welfare.
After using the geometric characterizations to derive sufficient conditions that are useful for applications, we present the result that captures the key tradeoff of the paper. As agents' risk-aversion increases, the value of opacity converges to zero. Opacity helps the principal
by introducing a wedge between agents' expected payments and realized payments, which may be profitable. But opacity also introduces uncertainty--when agents are risk-averse, this uncertainty must be compensated with higher payments.6
Footnote 6: For a particularly simple illustration of this tradeoff, see Appendix B, which compares the example in subsection 1.1 to a nearly-identical example in which employees are risk-neutral.
In the canonical single-agent moral hazard problem (Holmstrom, 1979; Grossman and Hart, 1983), the principal faces a tradeoff between incentives and insurance: increasing the spread between payments for different outputs motivates the agent to exert effort, but also discourages the agent by introducing additional risk into the contract. Our setting differs from a canonical contracting environment only in that there is a continuum of agents who each have an observable characteristic, and thus with these two ingredients the principal has an opportunity to offer an opaque contract. Opacity introduces a new element into this tradeoff. When comparing the potential benefits from moving from a transparent contract to an opaque contract, the principal trades off three quantities: the _incentive increase_ from _some_ agents who take a higher action, the _incentive decrease_ from _some_ agents who take a lower action, and the _risk decrease_ from _all_ agents' risk aversion. Our result shows that in the limit, the decrease from risk outweighs the net benefits from incentives.
Next, in section 4, we apply our results to understand the design and communication of per-ride payment schemes for drivers on ride-hailing platforms. Here, the principal is a ride-hailing platform (e.g. Uber, Grab, DiDi) and the agents are drivers. The drivers' observable characteristic is whether they are available to drive during high-demand or low-demand periods. The drivers exert costly hidden effort: they must drive some distance from their home on the outskirts of the city into town--driving further into town increases the chances that the driver will pick up a ride. The firm chooses per-ride payment schemes and decides whether to communicate transparently (articulating how demand influences per-ride payments) or opaquely. This application allows us to show in detail how different parameters of the model influence the platform's optimal strategy. Further, it illustrates that while opacity on average increases driver welfare relative to transparency, it is not Pareto improving in general--a surprising finding with subtle policy implications.
Before concluding the paper, we discuss the related literature in section 5. This discussion includes an interpretation of our model and results through the lens of the "Bayesian persuasion" paradigm. A brief conclusion in section 6 suggests directions for future work.
## 2 Model
There is a continuum of agents, each of whom takes a costly action \(a\in A\) that affects the payoff of a principal. The principal does not observe the action, but observes output \(q\in Q\) that is informative about the action \(a\), and distributed according to \(\pi:A\to\Delta(Q)\). We denote the probability of a particular output \(q\) by \(\pi(q\mid a)\), with \(\sum_{q\in Q}\pi(q\mid a)=1\).
Each agent contracts with the principal in a particular state of the world \(s\in S\), which is observed by the principal and may or may not be observed by the agents depending on the application. The state \(s\) is distributed in the population according to distribution \(f\), and the state space \(S\) is finite. The agents maximize expected utility, and have symmetric von Neumann-Morgenstern utility functions \(u\colon A\times S\times X\to\mathbb{R}\), where \(X\) is a set of feasible outcomes. The principal is risk neutral with an objective function \(v\colon A\times S\times X\to\mathbb{R}\).
The principal chooses a _described contract_, which has three features. The first feature of a described contract is a collection of _communicated contracts_\(\hat{g}_{k}\colon Q\to\Delta(X)\) where \(K\) is a set
of contract labels and each \(\hat{g}_{k}\) is a contract. We denote the implied probability distribution over outcomes given communicated contract \(\hat{g}_{k}\) by \(\hat{P}_{kq}\coloneqq\mathbb{P}(\hat{g}_{k}(q))\). The principal also chooses an assignment rule \(\sigma\colon S\to\Delta(K)\) which assigns agents to communicated contracts based on the state in which they arrive. We denote the distribution of communicated contracts given state \(s\) by \(\mu_{s}\coloneqq\mathbb{P}(\,\cdot\mid s)\). Finally, the principal chooses a _realized contract_\(g_{k}:Q\times S\to\Delta(X)\) for each \(k\in K\) which specifies the outcome for an agent who receives contract \(k\) in state \(s\). We denote the _induced distribution of outcomes_ given realized contract \(g_{k}(q,s)\) by \(P_{kqs}\coloneqq\mathbb{P}(g_{k}(q,s)).\) Note that the domain of a realized contract \(g_{k}\) is \(Q\times S\) while the domain of a communicated contract \(\hat{g}_{k}\) is \(Q\).
To each agent, the principal communicates only a single contract \(\hat{g}_{k}\). Ex-post, the agent observes outputs and outcomes for all agents who received the same contract. That is, an agent who receives a contract labelled \(k\) sees statistics on outputs and outcomes for all agents who received the contract labeled \(k\)--they see a "database" containing \(\hat{P}_{kq}\).7
Footnote 7: An alternative and equivalent interpretation is to view this as a legal constraint that is common knowledge: The principal will be punished by some third-party if she lies, and the agent knows this, and the Principal knows that the agent knows this... and so forth.
Since the agents who receive contract \(\hat{g}_{k}\) have access to the database \(\hat{P}_{kq}\), the principal must choose realized contracts that are _consistent_ with the communicated contracts. In order to define consistency, we first introduce notation for the distribution of outcomes for a contract labelled \(k\) for output \(q\). We call this quantity the _observed distribution of outcomes_.
**Definition 1**.: _Given realized contract \(g_{k}(q,s),\) the observed distribution of outcomes is a probability distribution_
\[P_{kq}\coloneqq\sum_{s\in S}P_{kqs}\frac{\mu_{s}(k)f(s)}{\sum_{s^{\prime}\in S }\mu_{s^{\prime}}(k)f(s^{\prime})}.\]
A realized contract is _consistent_ with a communicated contract if the observed distribution of outcomes from the realized contract \((P_{kq})\) is the same as the distribution of outcomes implied by the communicated contract.
**Definition 2** (Consistency).: _A realized contract \(g_{k}\) is consistent with a communicated contract \(\hat{g}_{k}\) if \(\hat{P}_{kq}=P_{kq}\) for all \(q\in Q\)._
In sum, the principal chooses a _described contract_ which contains a collection of consistent communicated contracts and realized contracts, as well as a sorting function.
**Definition 3** (Described contract).: _A described contract \(((\hat{g}_{k})_{k\in K},(g_{k})_{k\in K},\sigma)\) consists of communicated contracts \(\hat{g}_{k}(q)\), realized contracts \(g_{k}(q,s)\) and an assignment rule \(\sigma\colon S\to\Delta(K),\) where for each \(k\in K\), \(g_{k}(q)\) is consistent with \(\hat{g}_{k}(q,s).\)_
The following timeline summarizes and clarifies the timing of the game:
1. The principal chooses a described contract \(((g_{k})_{k\in K},(\hat{g}_{k})_{k\in K},\sigma).\)
2. To each agent \(i\) in state \(s\), 1. The principal communicates contract \(\hat{g}_{k}\) with probability \(\mu_{s}(k).\) 2. The agent accepts or rejects the contract. * If the agent rejects, she gets reservation utility \(0.\) * If the agent accepts, she chooses action \(a_{k}^{*}\), assuming the mechanism is \(\hat{g}_{k}\).
3. The outcome \(x\) is realized according to \(g_{k}(q,s)\).
4. The principal and agents' utilities are realized. Agents who received contract \(k\) observe \(P_{kq}\).
The principal's program can be written as:8
Footnote 8: In summation form, the principal’s expected utility is:
\[\sum_{s\in S}\sum_{k\in K}v(a_{k}^{*},g_{k}(q(a_{k}^{*}),s),s)\mu_{s}(k)f(s).\]
\[\max_{(\hat{g}_{k})_{k\in K},(g_{k})_{k\in K},\sigma}\mathbb{E}_{s\sim f,\,k \sim\mu_{s}}[v(a_{k}^{*},g_{k}(q(a_{k}^{*}),s),s)]\] (1) subject to \[\hat{P}_{kq} =P_{kq}\text{ for all }k\in K,q\in Q\] (Consistency) \[a_{k}^{*} \in\arg\max_{a}\mathbb{E}[u(a,\hat{g}_{k}(q(a)),s)]\text{ for all }k\in K\] (IC) \[0 \leq\mathbb{E}[u(a_{k}^{*},\hat{g}_{k}(q(a_{k}^{*})),s)]\text{ for all }k\in K.\] (IR)
We say that a described contract that solves this program is an _optimal described contract_. There are cases when the principal's optimal described contract is not unique. In such cases, we assume that the principal chooses the optimal described contract that maximizes agent welfare. This assumption simplifies the exposition and is necessary for characterizing agent welfare in subsection 3.3. Our analysis will focus on properties of the optimal described contract. To summarize, the optimal described contract, together with the agents' actions \((a_{k}^{*})\) constitute a (welfare-maximizing) subgame perfect equilibrium.
Note that in the final stage of the game (step 3), both the principal's and agents' realized utility depend on the set of _realized_ outcomes, determined by \(g_{k}(q,s)\). This feature of the model implies that when the agents make their decision in step 2b of the game, they may not know as much about their outcome as the principal knows. We say that a described contract is _transparent_ if the realized contract is communicated to the agent ex-ante. This occurs when, in the described contract, there is a one-to-one correspondence between the agent's state and the label of the contract to which the agent is assigned. Or, put another way, the described contract is transparent if for each agent the distribution of outcomes implied by the communicated contract is the same as the outcomes in the realized contract.
**Definition 4** (Transparent described contracts).: _A described contract \(((\hat{g}_{k})_{k\in K},(g_{k})_{k\in K},\sigma)\) is transparent if \(\sigma\colon S\to\Delta(K)\) is a bijective function._
We contrast transparent described mechanisms with _opaque_ described mechanisms.
**Definition 5** (Opaque described contract).: _A described contract \(((\hat{g}_{k})_{k\in K},(g_{k})_{k\in K},\sigma)\) is opaque if it is not transparent._
In an opaque described mechanism, the principal takes advantage of the fact that they face a population of agents, and can thus describe the mechanism with varying amounts of detail without "lying." The principal is constrained by consistency, but otherwise has the freedom to, for example, describe the realized contract for a particular agent stochastically even if it is in fact deterministic. A particular opaque described mechanism will be of interest
in both our theoretical analysis and applications--when does the principal communicate the same mechanism to all agents? We say that a described mechanism is _fully coarse_ when the principal communicates the same contract to all agents.
**Definition 6** (Fully coarse described contracts).: _A described contract \(((\hat{g}_{k})_{k\in K},(g_{k})_{k\in K},\sigma)\) is fully coarse if \(|K|=1\)._
In Appendix C, we will study the principal's optimal described contract when the principal must respect an additional _no-arbitrary-differential-treatment_ constraint. This constraint says that for any two agents \(i,j\) who arrive in the same state \(s\), the principal must communicate the same mechanism \(\hat{g}_{k}\), i.e. \(\sigma(s)\) is injective.9
Footnote 9: The _no-arbitrary-differential-treatment_ constraint may arise in applications for feasibility or fairness reasons. For example, in the introductory example in subsection 1.1, if the Division 0 and Division 1 were in fact different regional offices in different jurisdictions, it may be infeasible to communicate contracts to groups composed of employees from both regions. Or, if the difference underlying the differences in employee cost were in fact due to different job descriptions in the two divisions, it may be perceived as unfair for employees with the same job descriptions to receive different contracts.
## 3 Properties of the Optimal Described Contract
The principal's problem can be understood as a joint contract design and information design problem, where the two parts of the problem interact non-trivially. Nonetheless, we can characterize the principal and agent's utility at the optimal described contract through a concavification argument similar to that introduced in Aumann and Maschler (1995) and adapted to the study of "persuasion" in Kamenica and Gentzkow (2011).
In this section, we restrict our attention to cases in which the agent's utility does not depend on the state.
**Assumption 1**.: _The agent's utility is state-independent, i.e. \(u(a,s,x)=u(a,x)\) for all \(s\in S,a\in A,x\in A\)._
### Geometric Characterizations
We define the principal's _value function_\(V:\Delta(S)\to\mathbb{R}\) to be the value of the objective function when the principal can communicate only one contract to all agents (i.e. the principal is constrained to a _fully coarse_ described contract). Note that when the principal is choosing among fully coarse contracts, the range of \(\sigma\) has a single element, so there is only a single realized contract \(g_{k}\) and a single communicated contract \(\hat{g}_{k}\), which we will call \(g\) and \(\hat{g}\), respectively. That is, when considering only fully coarse contracts, the only realized contract is \(g(q,s)\coloneqq g_{k}(q,s)\), and the only communicated contract is \(\hat{g}(q)\coloneqq\hat{g}_{k}(q)\).
**Definition 7** (Principal value function).: _The principal's value function is given by_
\[V(f)\coloneqq\max\left\{\mathbb{E}_{s\sim f}[v(a^{*},g(q(a^{*}),s),s)] \left|\begin{array}{l}g\colon Q\times S\to\Delta(X)\\ a^{*}\in\arg\max_{a}\mathbb{E}[u(a,\hat{g}(q(a)))]\\ 0\leq\mathbb{E}[u(a^{*},\hat{g}(q(a^{*})))]\end{array}\right.\right\}. \tag{2}\]
We next introduce the concave closure of the principal's value function, which is pictured in the graph on the left in Figure 1. The figure presents the case where the state is binary, and we can thus identify a distribution \(f\) with the probability of one of the states. So, a particular probability distribution \(f\in\Delta(S)\) is represented by a point on the x-axis.
**Definition 8** (Closure of principal value function).: _The concave closure of the principal's value function is_
\[\overline{V}(f)\coloneqq\sup\{z\mid(f,z)\in\mathrm{co}(V)\} \tag{3}\]
_where \(\mathrm{co}(V)\) is the convex hull of the graph of the value function \(V\)._
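For a binary state, the concave closure can be computed numerically as the upper concave envelope of \(V\) over \([0,1]\). The following sketch illustrates the construction with a toy value function; the function used is purely illustrative and is not one derived from the model.

```python
import numpy as np

def concave_closure(grid, values):
    """Upper concave envelope of a function sampled on an increasing 1-D grid,
    computed via the upper convex hull (Andrew's monotone chain)."""
    hull = []
    for x, y in zip(grid, values):
        # Pop the last point while it lies on or below the chord from
        # hull[-2] to the new point (x, y).
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            if (x2 - x1) * (y - y1) - (y2 - y1) * (x - x1) >= 0:
                hull.pop()
            else:
                break
        hull.append((x, y))
    hx, hy = zip(*hull)
    return np.interp(grid, hx, hy)

# Toy example: with a binary state, identify f in Delta(S) with Pr(s = 1).
f_grid = np.linspace(0.0, 1.0, 101)
v_toy = (f_grid - 0.5) ** 2              # an illustrative (convex) value function
v_bar = concave_closure(f_grid, v_toy)   # here the constant 0.25 line, above V
```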
In our first characterization result, we show that the concave closure \(\overline{V}(f)\) of the principal's value function at population distribution \(f\) is the principal's utility at the optimal described contract. The logic behind this result is similar to the logic in models of "persuasion" (Kamenica and Gentzkow, 2011), but important differences lurk beneath the surface in both analysis and interpretation--we note two differences here and return to the discussion in section 5.
The first key difference is that in persuasion models, the concave closure of the value function represents the principal's (or "sender's") utility from the optimal choice of a single object ("the statistical experiment") whereas here the concave closure of the value function represents the principal's utility from the optimal choice of two objects (the composition of groups and the output-contingent payment scheme). Here the value function represents the principal's utility from what can be thought of as an information policy (the sorting of agents into groups) _and_ a contract choice. It is not obvious how the contract choice interacts with the optimal informational scheme.
Note, second, what this implies about the principal's commitment: in persuasion models, the principal (sender) must commit to a statistical experiment--an assumption which may fit a limited range of settings, since the adherence to a statistical experiment is often difficult or impossible to monitor. Meanwhile, in our model of contracting with descriptions, the consistency requirement amounts to a standard form of commitment: The principal commits only to a (possibly stochastic) communicated contract, which is as easy to monitor as any commitment to a (possibly stochastic) contract in canonical contract theory.10
Footnote 10: In this sense our results connect to Lin and Liu (2022). We return to the topic of persuasion, opacity and commitment in section 5, and there discuss an interpretation of our model and results through the lens of persuasion.
**Theorem 1**.: _Given a distribution \(f\in\Delta(S)\), the principal's utility at the optimal described contract lies on the concave closure of the principal's value function, at \(\overline{V}(f)\)._
**Proof.** Consider an arbitrary point \(V^{*}=\overline{V}(f)\) on the concave closure of the value function. We prove two statements about this point: (i) There exists a described contract \((g_{k},\hat{g}_{k},\sigma)\) that achieves \(V^{*}\), and; (ii) there is no described contract that achieves higher utility for the principal than \((g_{k},\hat{g}_{k},\sigma)\) at \(f\).
We begin with (i). If \(\overline{V}(f)=V(f)\) then the optimal fully coarse described contract achieves \(V^{*}\), by definition of \(V(f)\). If \(\overline{V}(f)\neq V(f)\), then there is a maximal convex subset \(I\subseteq\Delta(S)\) such that \(f\in I\) and \(V(f)\neq\overline{V}(f).\) We denote the boundary of the set \(I\) by \(\partial I\). By Caratheodory's theorem, there is a set of at most \(|S|\) boundary points, \(\{\partial_{1},\partial_{2},\ldots,\partial_{|S|}\}\subseteq\partial I\) such that
\[\overline{V}(f) =\sum_{k=1}^{|S|}\lambda_{k}V(\partial_{k}) f =\sum_{k=1}^{|S|}\lambda_{k}\partial_{k}.\]
We may interpret every boundary point \(\partial_{k}\in\Delta(S)\) as a distribution of states \(s\) given that the contract is \(k\).
We define a sorting function \(\sigma\colon S\to\Delta(K)\) such that the distribution of contracts given state \(s\) (\(\mu_{s}\)) satisfies11
Footnote 11: To see that this is a probability distribution, observe: \(\sum_{k\in K}\mu_{s}(k)=\sum_{k\in K}\frac{\lambda_{k}\partial_{k}(s)}{f(s)}= \frac{\sum_{k\in K}\lambda_{k}\partial_{k}(s)}{f(s)}=\frac{f(s)}{f(s)}=1\).
\[\mu_{s}(k)=\frac{\lambda_{k}\partial_{k}(s)}{f(s)}.\]
Consider for each distribution \(\partial_{k}\) the fully coarse communicated contract \(\hat{g}_{k}\) and denote the (unique) realized contract for the fully coarse contract consistent with \(\hat{g}_{k}\) by \(g_{k}\). We claim that \((\sigma,(\hat{g}_{k})_{k\in K},(g_{k})_{k\in K})\) defines a described contract that achieves principal utility of \(\overline{V}(f)\). That this is a described contract, i.e., that \(\hat{g}_{k}\) is consistent with \(g_{k}\), is a direct consequence of the fact that \(\hat{g}_{k}\) and \(g_{k}\) are consistent as the communicated resp. realized contract of a fully coarse contract.
Next, we show that this described contract gives the principal utility of \(\overline{V}(f)\). Note that contracts \((\hat{g}_{k},\,g_{k})\) yield utility \(V(\partial_{k})\) (by definition of \(V\)). By the principal's risk neutrality, we can express the principal's utility contract-by-contract,
\[\sum_{k\in K}\sum_{s\in S}f(s)\mu_{s}(k)V(\partial_{k}) =\sum_{k\in K}\sum_{s\in S}f(s)\frac{\lambda_{k}\partial_{k}(s)}{ f(s)}V(\partial_{k})\] \[=\sum_{k\in K}\lambda_{k}V(\partial_{k})\sum_{s\in S}\partial_{k }(s)=\sum_{k\in K}\lambda_{k}V(\partial_{k})=\overline{V}(f).\]
Next we prove statement (ii). Consider an arbitrary described contract \(((\hat{g}_{k})_{k\in K},(g_{k})_{k\in K},\sigma)\) that yields principal utility \(V\in\mathbb{R}\). Introduce the probability distribution on \(S\) that describes the composition of agents to whom contract \(\hat{g}_{k}\) is communicated:
\[\rho_{k}(s)\coloneqq\frac{f(s)\mu_{s}(k)}{\sum_{s^{\prime}\in S}f(s^{\prime})\mu_{s^{\prime}}(k)}.\]
Observe that the utility that the principal derives from the mass of agents \(\sum_{s\in S}f(s)\mu_{s}(k)\) that receive the communicated contract \(\hat{g}_{k}\) may not be larger than the principal utility from the optimal fully coarse contract, \(V(\rho_{k})\) for this group. This is by definition--each \(\hat{g}_{k}\) is a "coarse" contract in the sense that a single contract is communicated to the mass of agents who receive \(k\). So the _optimal_ coarse contract for the mass of agents \(\sum_{s\in S}f(s)\mu_{s}(k)\) necessarily delivers weakly higher utility to the principal than the arbitrary contract \(\hat{g}_{k}\). Hence, \(V\leq\sum_{k\in K}\sum_{s\in S}f(s)\mu_{s}(k)V(\rho_{k})\). Note that \((\rho_{k},V(\rho_{k}))\) is in the graph of \(V\), and
\[\sum_{k\in K}\sum_{s\in S}f(s)\mu_{s}(k)=\sum_{s\in S}f(s)\left(\sum_{k\in K} \mu_{s}(k)\right)=\sum_{s\in S}f(s)=1,\]
so \(f(s)\mu_{s}(k)\) are weights in a convex combination. Hence, \(V\leq\overline{V}(f)\) by the definition of the concave closure.
Theorem 1 is powerful because it will typically be easier to solve for the principal's optimal fully coarse contract than it will be to solve for the principal's optimal described contract. This is because solving for the optimal described contract involves a nested optimization, optimizing over optimal contracts for all possible partitions of agents into groups. Meanwhile, the optimal coarse contract is the optimal contract for a single partition of agents into groups.
We next define another object that will give further insight into the optimal described contract without requiring an optimization over optimal contracts for all partitions of agents. The _extremal closure of the principal's value function_ is the supremum of the convex hull of the graph of \(V\) evaluated only at the "extreme" distributions in which the entire population belongs to a single state.
**Definition 9** (Extremal closure of principal value function).: _The extremal closure of the principal's value function is_
\[V^{T}(f)\coloneqq\sup\{z\mid(f,z)\in\operatorname{co}(\{(f^{\prime},V(f^{\prime}))\colon f^{\prime}\in I\})\}, \tag{4}\]
_where \(I=\{\delta_{s}\colon s\in S\}\subset\Delta(S)\) is the set of point mass distributions at all \(s\in S.\)_
On the left in Figure 1, we see that when the state is binary, the extremal closure of the principal's value function is a line connecting \(V(0)\) and \(V(1).\) The extremal closure of the principal's value function gives the principal's utility at the optimal transparent contract.
**Proposition 1**.: _Given a distribution \(f\in\Delta(S)\), the principal's utility at the optimal transparent contract is \(V^{T}(f),\) i.e. the extremal closure of the principal's value function evaluated at \(f\)._
**Proof.** We proceed in two steps. We consider an arbitrary point \(V^{T}(f),\) and show: (i) that it is attained by a transparent contract, and (ii) that there is no transparent contract that achieves higher utility.
We begin with (i). A point \(V^{T}(f)\) can be decomposed as \(V^{T}(f)=\sum_{s\in S}\lambda_{s}V(\delta_{s})\) by the definition of \(V^{T},\) where each \(\delta_{s}\) is the point mass distribution at \(s\). As the convex combination is over the extreme points, it must be that \(\lambda_{s}=f(s)\). Each \(V(\delta_{s})\) is the principal's utility at the optimal fully coarse contract \(\hat{g}_{s}\) given that the population distribution is \(\delta_{s},\) by the definition of the principal's value function. So consider a described contract with each \(\hat{g}_{s}\) as the fully coarse contract at \(\delta_{s},\) and each \(g_{s}\) the (unique) realized contract that is consistent with \(\hat{g}_{s}.\) Define \(\sigma\colon S\to\Delta(K)\) such that
\[\mu_{s}(k)=\begin{cases}1&\text{ if }s=k\\ 0&\text{ otherwise.}\end{cases} \tag{5}\]
Note that this \(\sigma\) is bijective. The described contract with communicated contract \(\hat{g}_{s}\) as the fully coarse contract at \(\delta_{s}\) and \(\sigma\colon S\to\Delta(K)\) the bijective sorting function that satisfies (5) (i.e. maps each state \(s\) to contract \(\hat{g}_{s}\)) yields value \(V^{T}(f)\)
\[\sum_{k\in K}\sum_{s\in S}f(s)\mu_{s}(k)V(\delta_{k}) =\sum_{s\in S}f(s)\sum_{k\in K}\mu_{s}(k)V(\delta_{k})=\sum_{s\in S}f(s)V(\delta_{s})=\sum_{s\in S}\lambda_{s}V(\delta_{s})=V^{T}(f).\]
Next, we show (ii)--that there is no transparent contract that achieves higher utility at \(f\) than \(V^{T}(f).\) Consider a transparent contract \(((\hat{g}_{s})_{s\in S},(g_{s})_{s\in S},\sigma)\). As this contract is transparent, for each \(s\in S,\) there exists a unique \(k\in K\) such that \(\mu_{s}(k)=1\). Note that this leads to group compositions that are concentrated on a single \(s\in S\). Each \(\hat{g}_{s}\) and \(g_{s}\) is a fully coarse contract on one of these degenerate distributions, so the value from each contract is bounded from above by \(\sum_{s\in S}f(s)\mu_{s}(k)V(\delta_{k}),\) where
again \(\delta_{k}\) denotes a point-mass distribution at \(k\). In sum, the principal value from all groups is
\[\sum_{k\in K}\sum_{s\in S}f(s)\mu_{s}(k)V(\delta_{k})=\sum_{s\in S}f(s)\mu_{s}(s) V(\delta_{s})\leq V^{T}(f),\]
where the bound is a result of the convex combination being from points on the graph of \(V\).

This proposition is valuable because, together with Theorem 1, it allows us to gain insight into the optimal described contract, without computing the optimal described contract directly. The optimal transparent contract is much easier to solve for than the optimal described contract: it requires only solving for the optimal contract for \(|S|\) groups--it does not require searching through different possible combinations of groups.
To recap: we have defined three objects. The principal's value function, the closure of the principal's value function, and the extremal closure of the principal's value function. These three objects are shown on the left in Figure 1, and give the principal's utility at the optimal fully coarse contract (Definition 7), the optimal described contract (Theorem 1), and the optimal transparent contract (Proposition 1), respectively.
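To make the characterizations concrete, the sketch below tabulates a coarse value function for a binary state, computes its concave closure as the upper convex hull of its graph, and reads off the principal's utility at the optimal fully coarse, described, and transparent contracts, together with the gap between the last two (the "value of opacity" discussed below). It is only a minimal numerical illustration: the tabulated `V` is an arbitrary stand-in for whatever coarse value function the underlying contracting problem produces, and all function names are ours.

```python
import numpy as np

def upper_concave_envelope(x, y):
    """Concave closure of a function tabulated at increasing grid points x (1-D state space)."""
    hull = []
    for px, py in zip(x, y):
        # drop hull points that lie on or below the chord from hull[-2] to the new point
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            if (y2 - y1) * (px - x1) <= (py - y1) * (x2 - x1):
                hull.pop()
            else:
                break
        hull.append((px, py))
    hx, hy = zip(*hull)
    return np.interp(x, hx, hy)  # piecewise-linear upper hull evaluated on the grid

# an arbitrary tabulated coarse value function V(f), with f = Pr(s = 1)
f_grid = np.linspace(0.0, 1.0, 201)
V = 0.6 * f_grid**2 + 0.3 * np.sin(np.pi * f_grid)

V_bar = upper_concave_envelope(f_grid, V)     # optimal described contract (Theorem 1)
V_T = V[0] + (V[-1] - V[0]) * f_grid          # optimal transparent contract, binary state (Proposition 1)

f0 = 0.25
i = np.argmin(np.abs(f_grid - f0))
print(f"fully coarse  V(f0)    = {V[i]:.3f}")
print(f"described     Vbar(f0) = {V_bar[i]:.3f}")
print(f"transparent   V^T(f0)  = {V_T[i]:.3f}")
print(f"gap Vbar - V^T at f0   = {V_bar[i] - V_T[i]:.3f}")
```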
The characterization results thus far give us tools that will be helpful in understanding the principal's side of the problem--they will allow us to understand the shape of the optimal described contract and how much the principal benefits from its ability to offer opaque contracts. Given our motivation to understand regulatory problems, we also need tools to understand the agent's side of the problem--how does the principal's optimal described mechanism affect agent welfare?
So, next we define three objects concerning agent utility that are analogous to the three objects just defined for the principal. The _agent's value function_ is the agents' welfare at the principal's optimal fully coarse described mechanism, where _agent welfare_ for actions \(a_{k}\), realized contract \(g_{k}\) and sorting function \(\sigma(s)\), is the average of the agents' ex-post utility, i.e.
\[\sum_{s\in S}\sum_{k\in K}\mathbb{E}\left[u(a_{k},g_{k}(q(a_{k}),s))\right]\mu_{s}(k)f(s).\]
**Definition 10** (Agent value function).: _The agent's value function is given by_
\[U(f)\coloneqq\max\left\{\mathbb{E}_{s\sim f}[u(a^{*},g(q(a^{*}),s),s)]\left| \begin{array}{l}g\colon Q\times S\to\Delta(X)\\ a^{*}\in\arg\max_{a}\mathbb{E}[u(a,\hat{g}(q(a)))]\\ 0\leq\mathbb{E}[u(a^{*},\hat{g}(q(a^{*})))]\end{array}\right.\right\}. \tag{6}\]
The agent value function is the agents' welfare under the principal's optimal choice of a coarsely described contract. We will next define the _implied agent value function_.
**Definition 11** (Agent implied value function).: _Denote \(\mathcal{I}\) the set of (inclusion-)maximal convex subsets \(I\subseteq\Delta(S)\) such that \(V(f)\neq\overline{V}(f)\). The agents' implied value function is_
\[\tilde{U}(f)=\begin{cases}U(f)&\forall I\in\mathcal{I}:f\notin I\\ \max\{z\mid(f,z)\in\operatorname{co}(U|_{\partial I})\}&\exists I\in\mathcal{ I}\colon f\in I,\end{cases}\]
_where \(\partial I\) denotes the boundary of the set \(I\)._
An example of the agent's implied value function is shown on the right in Figure 1, alongside the principal's value function. Recall, the figure presents the case where the state is binary, and we identify a distribution \(f\) with the probability of one of the states. There
are two important intervals in the closure of the principal's value function: on the interval \(f\in[s^{*},1]\), the principal's value function is equal to its closure (\(V(f)=\overline{V}(f)\)); on the interval \(f\in[0,s^{*}]\) the principal's value function is not equal to its closure. So, in this case, there is a single maximal subset \(I\) such that \(V(f)\neq\overline{V}(f)\), and it is \(I=[0,s^{*}]\) (and so \(\mathcal{I}=\{I\}\)).
These intervals define the agents' implied value function: on the interval \(f\in[s^{*},1]\), i.e. \(f\not\in I\), the agents' implied value function is given by the agents' value function \(U(f)\); on the interval \(f\in[0,s^{*}]\), i.e. \(f\in I\), the agent's implied value function is given by the convex hull of the graph of \(U\) evaluated at the boundary of set \(I\).
As we did for the principal's objective function, we next define the _extremal closure of the agents' value function_. In the binary state case pictured in Figure 1, the extremal closure is a line connecting the points \(U(0)\) and \(U(1).\) In general, the extremal closure of the agents' value function is obtained by taking the supremum of the convex hull of the graph of the agents' value function evaluated only at points \(\delta_{s}\), i.e. points that represent a point-mass distribution at some \(s\in S.\)
**Definition 12** (Extremal closure of agent value function).: _The extremal closure of the agents' value function is_
\[U^{T}(f)\coloneqq\sup\{z\mid(f,z)\in\operatorname{co}(\{(f^{\prime},U(f^{\prime}))\colon f^{\prime}\in I\})\}, \tag{7}\]
_where \(I=\{\delta_{s}\colon s\in S\}\subset\Delta(S)\) is the set of point mass distributions \(\delta_{s}\) at all \(s\in S.\)_
Analogous to Theorem 1 and Proposition 1, the next proposition establishes that the agents' implied value function at \(f\) is agents' welfare at the principal's optimal described contract, and that the extremal closure of the agents' value function at \(f\) is agents' welfare at the principal's optimal transparent contract.
**Proposition 2**.: _Given a distribution \(f\in\Delta(S)\), agent welfare at the optimal described mechanism is given by \(\tilde{U}(f)\), and agent welfare at the optimal transparent mechanism is given by \(U^{T}(f)\)._
As the proofs for the two statements in Proposition 2 are similar to the corresponding proofs for the principal, they can be found in Appendix A.
Figure 1: Principal value and agent welfare at optimal described, fully coarse, and transparent contract, \(|S|=2\).
Note that there are many statistics of the distribution of ex-post agent utility that may be of interest in applications. While welfare (i.e. expected ex-post agent utility) is perhaps the most natural one, other statistics such as the variance of the distribution may also lend important insight into how opacity affects equity. It is worth noting that our geometric characterization method works precisely because we are interested only in agent welfare, which is linear in individual agents' utility. Our characterizations in Proposition 2 rely on the linearity of agent welfare in the same way that our characterizations in Theorem 1 and Proposition 1 rely on the principal's risk neutrality.
### When is Opacity Optimal?
We have stressed that solving for the optimal described contract may be difficult when the state space \(S\) is large. In applications, it may suffice to know whether the optimal described contract has particular properties--such as whether the optimal described contract is opaque or transparent. Our geometric characterizations lead directly to a series of sufficient conditions that serve as tests for particular properties of the optimal described contract. We begin with a corollary of the characterization of the principal's utility at the optimal described contract (Theorem 1).
**Corollary 1**.: _If \(V\) is globally weakly concave, i.e. \(\frac{\partial^{2}V}{\partial f^{2}}\leq 0\) for all \(f\in\Delta(S)\), then there is an optimal described contract that is fully coarse._
Given the characterization in Theorem 1, we know that the value function is equal to its concave closure at \(f\) if and only if there exists an optimal described contract that is fully coarse. This directly gives us Corollary 1, a sufficient condition for an optimal described contract to be fully coarse, which depends only on the value function \(V(f)\), and not on its closure.
Note that if there is an optimal described contract that is fully coarse, the set of optimal contracts is infinite. The principal can choose a \(\sigma\) that creates any partition of agents in which the composition of agent characteristics within each partition cell is the same as the distribution \(f\). This is not the case when the optimal described contract is transparent. If there is an optimal described contract that is transparent, then it is the uniquely optimal described contract. This is because there is only one sorting function \(\sigma\) that creates the partition that corresponds to the optimal transparent contract--the partition \(\{\{s\}\colon s\in S\}\).
Similarly, we can use the geometric characterization of the principal's optimal transparent contract to get a sufficient condition for when the principal's optimal described mechanism is transparent.
**Corollary 2**.: _The optimal described contract is transparent if \(V(f)\) is globally weakly convex, i.e. \(\frac{\partial^{2}V}{\partial f^{2}}\geq 0\) for all \(f\in\Delta(S)\)._
A direct implication of Corollaries 1 and 2 is that we can draw conclusions about whether the optimal described contract is fully coarse from the curvature of the principal's coarse value function in only a region of \(\Delta(S)\), and likewise for transparency.
**Corollary 3**.: _If \(V\) is convex in a neighborhood of some \(f^{*}\in\Delta(S)\), then there is no optimal described contract that is fully coarse. If \(V\) is concave in a neighborhood of some \(f^{*}\in\Delta(S)\), then there is no optimal described contract that is transparent._
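When the coarse value function can only be evaluated numerically, the curvature tests in Corollaries 1-3 can be applied to a tabulated \(V\) via discrete second differences, as in the following sketch (binary state for simplicity; `V_grid` is assumed to come from solving the coarse problem at each grid point, and the tolerance absorbs numerical error).

```python
import numpy as np

def curvature_report(V_grid, tol=1e-6):
    """Apply Corollaries 1-3 to a value function tabulated on a uniform 1-D grid."""
    d2 = np.diff(np.asarray(V_grid), 2)  # discrete second differences
    if np.all(d2 <= tol):
        return "weakly concave everywhere: some optimal described contract is fully coarse (Corollary 1)"
    if np.all(d2 >= -tol):
        return "weakly convex everywhere: the optimal described contract is transparent (Corollary 2)"
    return "mixed curvature: no optimal described contract is fully coarse or transparent (Corollary 3)"
```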
### The Value of Opacity
In addition to generating sufficient conditions that can be useful for determining whether the optimal described contract is transparent or fully coarse, we can use our geometric characterizations to understand how much the principal gains through the opportunity to be opaque. To motivate this exercise, note that the optimal transparent contract is a collection of contracts that solve a textbook principal-agent model with moral hazard. So, the difference between the optimal described contract and the optimal transparent contract gives us insight into how principals facing many agents with different observable payoff-relevant characteristics can benefit from jointly designing contracts and groups of agents to whom they are communicated. This can help to predict in which settings contracts are most likely to be opaque, and can illustrate, in settings where it may be costly to carry out an opaque scheme, whether opacity is worth it to the principal.
Formally, the _value of opacity_ at \(f\in\Delta(S)\) is the difference between the optimal described contract at \(f\) and the optimal transparent contract at \(f\), i.e. \(\overline{V}(f)-V^{T}(f)\).
We show that, in a wide class of problems, as agents become more risk averse, the value of opacity diminishes. In order to demonstrate this, we restrict attention to a particular class of utility functions studied in the contracting literature.
**Assumption 2**.: _Agent utility \(u(a,x)\) can be written \(h(a)\tilde{u}(x)-c(a)\) where \(\tilde{u}\) is real-valued, continuous, increasing, and concave, \(c(a)\) is continuous, and \(h(a)\) is continuous and strictly positive._
Under these assumptions, the realized contract in any optimal described mechanism is deterministic.
**Lemma 1**.: _Assume Assumption 2 holds. Then in any optimal described contract \(((\hat{g}_{k})_{k\in K},(g_{k})_{k\in K},\sigma)\), for each \(k\in K\), the realized contract \(g_{k}(q,s)\) is deterministic, i.e. \(g_{k}\colon Q\times S\to X.\) It follows that in any transparent contract, for every \(k\in K\), the communicated contract \(\hat{g}_{k}\) is also deterministic._
This lemma follows directly from results in Holmstrom (1979) and Grossman and Hart (1983) for the single agent moral hazard problem. Their arguments proceed by showing that any stochastic contract is dominated by a deterministic one. This fact holds because a principal in the canonical moral hazard problem faces a trade-off between providing incentives and providing insurance--giving higher payments for observable outputs creates higher incentives, but also increases the spread between outcomes in the agent's lottery. Under Assumption 2, a stochastic contract, i.e. a contract that offers a lottery conditional on a particular output, increases risk for the agent without improving incentives. The same argument applies for realized contracts \(g_{k}(q,s)\) in our setting.
Although the tradeoff that the principal in our model faces in choosing a _realized_ contract is exactly the same as in the single agent setting, the principal's choice of a _described contract_ introduces a different kind of tradeoff between incentives and insurance. When the principal faces a continuum of agents and has the power to describe contracts, opacity, when it is valuable, improves incentives _on average_ while introducing risk _for all agents_. The value of opacity thus trades off three quantities: the _incentive gains_ from some agents taking a higher action than in the optimal transparent contract, the _incentive losses_ from some agents taking a lower action than in the optimal transparent contract, and the _insurance losses_ from introducing more uncertainty into the communicated contract (i.e. more uncertainty from the agent's perspective).
In the next Proposition, we consider a constant absolute risk aversion utility function with risk aversion parameter \(\rho\), \(\tilde{u}(x)=1-e^{-\rho x}\). In reference to a principal's value function with respect to \(u_{\rho}\), we will use a subscript \(V_{\rho}\).
**Proposition 3**.: _Assume Assumption 2 holds. Then as agents become more risk averse, the value of opacity \(\overline{V}_{\rho}(f)-V_{\rho}^{T}(f)\) converges to zero, i.e._
\[\lim_{\rho\rightarrow\infty}\overline{V}_{\rho}(f)-V_{\rho}^{T}(f)=0.\]
**Proof.** Let \(f\in\Delta(S)\) be a type distribution. Let \((\hat{g},g)\) be an optimal fully coarse contract. We show that there is a transparent contract \(\tilde{g}_{s}\) whose principal utility approaches that of \((\hat{g},g)\).
Define the transparent contract \(\tilde{g}_{s}(q)\) as the realized contract \(g(q,s)\) with an additional payment \(x\) for each \(q\) and \(s\) that makes agents indifferent between the lottery \(\hat{g}(q)\) and \(g(q,s)+x\). It must be that \(g(q,s)\) is deterministic by Lemma 1. Note that under these contracts, agents choose the same action as under the coarse contract \(\hat{g}\). The transfer that the principal needs to pay the agent satisfies \(\mathbb{E}[\tilde{u}_{\rho}(\hat{g}(q(a)))]=\mathbb{E}[\tilde{u}_{\rho}(g(q(a),s)+x)]\). Note that the only randomness in the second expectation is with respect to \(q\). We show that for any \(x>0\) there is \(R\) such that for all \(\rho>R\),
\[\mathbb{E}[\tilde{u}_{\rho}(\hat{g}(q(a)))]<\mathbb{E}[\tilde{u}_{\rho}(g(q(a),s)+x)]. \tag{8}\]
This means that the amount of extra payment needed to make the agent indifferent converges to zero. We prove a more general statement about lotteries, which implies (8) for the lotteries \(l=\hat{g}(q(a))\) and \(l_{i}=g(q,s)\).
**Claim 2**.: _For every compound lottery \(l=\sum_{i=1}^{m}p_{i}\cdot l_{i}\), every \(i=1,2,\ldots,m\), and every \(x>0\), there is \(R\) such that for all \(\rho\geq R\), \(\mathbb{E}[\tilde{u}_{\rho}(l)]<\mathbb{E}[\tilde{u}_{\rho}(l_{i}+x)]\), where \(l_{i}+x\) means the addition of x with certainty to all elements of \(l_{i}\)._
This result means that a sufficiently risk averse agent prefers any branch of a lottery plus any fixed positive amount to the lottery itself. As we can unravel compound lotteries, we may, without loss, assume \(l_{i}\) to be deterministic and \(l\) to be a (non-compound) lottery. We then have
\[1-\exp(-\rho(l_{i}+x))>\sum_{j=1}^{n}p_{j}(1-\exp(-\rho l_{j})) \iff\exp(-\rho(l_{i}+x))<\sum_{j=1}^{n}p_{j}\exp(-\rho l_{j})\] \[\iff 1<\sum_{j=1}^{n}p_{j}\exp(-\rho(l_{j}-l_{i}-x)).\]
Note that by assumption, at least one of the terms \(l_{j}-l_{i}-x\) must be negative (namely the one for \(j=i\)), which means that for high enough \(\rho\), the corresponding summand \(p_{i}\exp(\rho x)\) will be larger than \(1\). As all other summands are nonnegative, this proves the claim.
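A quick numerical check of Claim 2 under CARA utility: for a fixed lottery, a fixed branch of that lottery, and a fixed sure bonus \(x>0\), the comparison flips in favor of the branch-plus-bonus once \(\rho\) is large enough. The lottery and the numbers below are arbitrary illustrative choices.

```python
import numpy as np

u = lambda x, rho: 1.0 - np.exp(-rho * x)  # CARA utility with risk aversion rho

probs   = np.array([0.5, 0.5])             # an arbitrary lottery l
payoffs = np.array([1.0, 5.0])
branch  = payoffs.min()                    # the worst branch l_i
bonus   = 0.05                             # any x > 0

for rho in [0.1, 1.0, 5.0, 20.0]:
    eu_lottery = float(probs @ u(payoffs, rho))
    eu_branch  = float(u(branch + bonus, rho))
    print(f"rho={rho:5.1f}: E[u(l)]={eu_lottery:.4f}  u(l_i + x)={eu_branch:.4f}  "
          f"prefers branch+bonus: {eu_branch > eu_lottery}")
```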
Given the focus on transparency in regulatory conversations about algorithms, it is also valuable to understand how agent welfare under the optimal described contract (which may be opaque) compares to the optimal transparent contract. We next consider the _welfare increase from opacity_, which is the difference between agent welfare at the optimal described contract for distribution \(f\in\Delta(S)\) and the optimal transparent contract at distribution \(f\), i.e. \(\tilde{U}(f)-U^{T}(f).\) We present the following sufficient condition, which is a corollary to Proposition 2.
**Corollary 4**.: _If \(U(f)\) is globally weakly concave, then the welfare increase from opacity is weakly positive. If \(U(f)\) is globally weakly convex, then the welfare increase from opacity is weakly negative._
## 4 Application: Design and Communication of Ride-Hailing Payments
We next turn to an application: a ride-sharing platform (principal) choosing how to jointly design and describe payment schemes to drivers (agents).
We focus on this setting because ride-hailing platforms use rich data to set payment schemes and tailor their communications to different drivers. One commentator summarizes ride-hailing platforms' communication practices with drivers as follows: "using what they know about drivers, their control over the interface and the terms of transaction, they channel the behavior of the driver in the direction they want it to go."12
Footnote 12: [https://www.nytimes.com/interactive/2017/04/02/technology/uber-drivers-psychological-tricks.html](https://www.nytimes.com/interactive/2017/04/02/technology/uber-drivers-psychological-tricks.html)
### Setting
We consider drivers located in the outskirts of the city, who exert effort \(a\) to drive into the city to increase their chance of picking up a passenger. Without loss, we let \(a\) be the probability that the driver picks up a passenger, given that she exerts effort \(a\). The platform cannot contract on the distance driven into the city (\(a\)) because it is difficult to verify. The binary contractible output \(q\in\{0,1\}\) that is correlated with the drivers' effort is whether the driver picks up a passenger (\(q=1\)). The platform's output-contingent contract is thus a per-ride payment for the driver, \(g\).
We assume that drivers are risk averse in money (with square root utility), and that the cost of effort \(a\) is quadratic. The drivers' utility is given by \(u(a,g)=a\sqrt{g}-\frac{1}{2}a^{2}\).
The platform has plentiful data on the drivers. A particular characteristic \(s\) of the drivers that is of interest to the platform is whether drivers tend to drive during high-demand (\(s=h\)) periods or low-demand (\(s=\ell\)) periods. The low state occurs with probability \(f(\ell)=\alpha\).
The platform is risk-neutral, and the state affects its utility function in the following way. When a driver completes a ride, the platform earns \(b_{s}\) and pays the driver \(\tau_{s}g\), where \(b_{s}\) and \(\tau_{s}\) are constants that depend on the state. So, the platform's objective in state \(s\) is given by \(v(a,g,s)=a(b_{s}-\tau_{s}g)\), where \(b_{s},\tau_{s}>0\) for all \(s\).
The "productivity" parameter \(b_{s}\) can be understood as the commission that the platform takes in state \(s\). For instance, if \(b_{\ell}=1\) and \(b_{h}>1\), it would imply that the platform can take a higher commission during high-demand periods--this may be the case if the platform charges surge fares to passengers without proportionally increasing driver pay.13 Meanwhile, the "effective payment" parameter \(\tau_{s}\) can be understood as the effective cost of paying the driver \(\$g\) per ride. For example, if \(\tau_{\ell}>\tau_{h}\), it would imply that paying a driver \(\$g\) per ride costs more in the low-demand state than it costs to pay a driver \(\$g\) per ride in the high-demand state. This may be the case if the platform is operating in a jurisdiction where
they must compensate drivers for idling time, as is the case in New York City.14 If the platform must pay drivers for idling time, then, when demand is low (\(s=\ell\)), the platform pays more per ride than just the per-ride payment $\(g\).
Footnote 14: [https://www.theverge.com/2019/2/1/18206737/nyc-driver-wage-law-uber-lyft-via-juno](https://www.theverge.com/2019/2/1/18206737/nyc-driver-wage-law-uber-lyft-via-juno)
The platform chooses payment schemes and messaging tactics about their payment schemes to incentivize (especially inexperienced) drivers to drive into the city. A communicated contract is a communication that summarizes the payment scheme, perhaps sent as a reminder to incentivize the driver to get on the road. For example, the platform could send a message such as "Drivers in your area are currently earning $10 per ride on average." Or, it might be more specific and send different messages to the two types of drivers: to the \(s=h\) drivers, it could say "Since you tend to drive when demand is high, you will earn $15 per ride."
In addition to choosing the communicated contract, the platform can strategically sort drivers into different cohorts, each of which receives the same message. The platform can stagger its notifications so that different groups of agents get different messages at different times. For legal reasons, the platform cannot lie--its communicated payment rules must be _consistent_ with the realized payment rules within each group.
From a regulatory perspective, it is valuable to know when the platform offers a transparent described contract. In such settings, the minimal legal requirement of "consistency" is strong enough to make the platform tell agents everything there is to know about their payment schemes--imposing transparency requirements would be redundant.
### Discussion
Suppose that it costs the same amount to pay drivers in both states and that, when demand is high (\(s=h\)), drivers are more productive (\(b_{h}>b_{\ell}\)). The difference in the value of the drivers' effort comes from the platform's ability to charge surge pricing to passengers--the platform can charge passengers more for rides when there is high demand.
**Remark 1**.: _Assume \(\tau_{\ell}=\tau_{h}\) and \(b_{\ell}<b_{h}\). Then the optimal described contract is transparent._
The platform's optimal described contract is transparent in this case, and Figure 2 shows that the principal's value function is globally weakly convex for \(b_{h}\in\{1,5,10\}\). In this case, the platform would say specifically to drivers who drive in high demand periods, "If you
drive when demand is high, your compensation will be $X." And to drivers who drive in low demand periods, they would say "If you drive when demand is low, your compensation will be $Y." This is because the platform wants to incentivize drivers to get on the road when demand is high, and there's no benefit to introducing extra uncertainty into the contract.
Next, suppose that the only difference between drivers is the effective cost of paying them.
**Remark 2**.: _Assume \(b_{\ell}=b_{h}\) and \(\tau_{\ell}>\tau_{h}.\) Then the optimal described mechanism is fully coarse._
In this case, the platform's optimal strategy is to give an opaque description of the payments, and not mention how payments will in fact depend on demand. This makes drivers in the low-demand state think that there is some chance that they get higher payments, and so the principal can induce higher actions from them without paying the extra per-dollar cost. As Figure 2 shows, the principal's value function is globally concave for \(\tau\in\{1,5,10\}\).
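The value functions behind Figure 2 can be reproduced numerically from the primitives above. The sketch below grid-searches over deterministic state-contingent per-ride payments (without loss for realized contracts, by Lemma 1) and assumes that, under a fully coarse contract, a driver best-responds to the population-average payment lottery, so her effort is \(a^{*}=\alpha\sqrt{g_{\ell}}+(1-\alpha)\sqrt{g_{h}}\); the parameter values are illustrative. Checking discrete second differences of the resulting \(V(\alpha)\) then reproduces the curvature claims in Remarks 1 and 2 via Corollaries 1 and 2.

```python
import numpy as np

def coarse_value(alpha, b, tau, g_grid=np.linspace(1e-4, 10.0, 600)):
    """Fully coarse value V(alpha) for the ride-hailing example, alpha = Pr(s = l).

    b = (b_l, b_h), tau = (tau_l, tau_h). We grid-search over deterministic
    state-contingent per-ride payments (g_l, g_h); the driver responds to the
    population-average payment lottery, so a* = alpha*sqrt(g_l) + (1-alpha)*sqrt(g_h).
    """
    gl, gh = np.meshgrid(g_grid, g_grid, indexing="ij")
    effort = alpha * np.sqrt(gl) + (1.0 - alpha) * np.sqrt(gh)
    profit = effort * (alpha * (b[0] - tau[0] * gl) + (1.0 - alpha) * (b[1] - tau[1] * gh))
    return profit.max()

alphas = np.linspace(0.0, 1.0, 11)

# Remark 1: equal payment costs, higher productivity when demand is high -> convex V
V1 = np.array([coarse_value(a, b=(1.0, 5.0), tau=(1.0, 1.0)) for a in alphas])
# Remark 2: equal productivity, more expensive payments when demand is low -> concave V
V2 = np.array([coarse_value(a, b=(5.0, 5.0), tau=(5.0, 1.0)) for a in alphas])

print("Remark 1: V convex  (2nd differences >= 0):", bool(np.all(np.diff(V1, 2) >= -1e-3)))
print("Remark 2: V concave (2nd differences <= 0):", bool(np.all(np.diff(V2, 2) <= 1e-3)))
```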
## 5 Related Literature
This paper relates most closely to the literature on mechanism design with an informed principal, and models of joint information design and mechanism design.
In the canonical informed principal problem, the principal has private information that the agent doesn't have. The agent takes the principal's announcement of a contract as a signal about the principal's private information, and the agent solves for a Bayes-Nash equilibrium of the signaling game. Though originally introduced in screening settings (Myerson, 1983; Maskin and Tirole, 1990, 1992), the informed principal problem has also been studied in the presence of moral hazard (Beaudry, 1994; Inderst, 2001; Mekonnen, 2021; Clark, 2021).
Our set up is similar to such models in that the principal has information that the agent doesn't have--here the principal's private information is about the contract itself. But our model also contrasts with the informed principal problem in that the agent takes the principal's announcement "at face value," subject to a consistency condition. That is, the agent does not take the contract as a "signal" about the principal's private information. In some settings of interest, the agent may not even know that she is less informed than the principal, and we connect then to models of contracting with unawareness (Filiz-Ozbay, 2012; Auster, 2013). In other settings that motivate our investigation, there are too many factors that could influence the agent's belief about what the contract is--in the example of section 4, a ride-hailing platform driver knows that the platform uses its extensive data to compute pricing schemes but cannot articulate the platform's type space or form a belief about it. Similar to models of add-on pricing (Ellison, 2005) and shrouded attributes (Gabaix and Laibson, 2006), if the principal doesn't mention a relevant feature of the agents' environment, the agent simply won't think about it. The model is thus similar in spirit to cursed equilibrium. In cursed equilibrium, players in a game fail to update their beliefs based on the information content of the other player's action (Eyster and Rabin, 2005; Esponda, 2008). In our model, the agent does not condition on the "information content" of the principal's choice of description--she does not solve for the equilibrium in a signaling game.
The literature on persuasion and information design also assumes that agents (or "receivers") do not think strategically about the principal's (or "sender's") strategy--they don't
have to because the principal commits to a signal structure about the state (see Bergemann and Morris (2019) for a review). The agent receives a signal about an unknown state, updates her belief about the state given her knowledge of the signal structure to which the principal has committed, and takes an action. Although our analysis relies on the same techniques used in this literature, we differ in both substance and interpretation.
First, in pure information design, the principal changes the agent's utility from taking a particular action by changing her beliefs, rather than changing how she is paid. In our model, the principal jointly designs payments and the "information" that agents have. Although there is an emerging literature developing models that join information design and mechanism design (Dworczak, 2020; Boleslavsky and Kim, 2018; Bergemann et al., 2022; Kwak, 2022), these papers concern a principal who can design agents' information about an exogenous feature of the environment. For example, Kwak (2022) in particular studies a moral hazard problem in which an exogenous unknown state affects output alongside the agent's action. The principal can design an experiment on this unknown state, controlling what the agent believes about her environment. By contrast, in our model, the principal controls what the agent believes about the contract itself.
Further, our model differs from information design models in interpretation. In our model, the agent does not know that the principal has committed to a signal structure and does not update her belief according to Bayes' rule taking the signal structure into account. Instead, in our model, all that a particular agent takes into account when making her decision is the (possibly stochastic) contract \(\hat{g}_{k}(q)\) that is communicated to her. The fact that the principal's realized contracts \(g_{k}(q,s)\) have to be consistent with the communicated contracts disciplines the problem so that the principal has some flexibility in description, but is not in an "anything-goes" environment.
Despite these differences, it should not be too surprising that the analyses we carry out here can be entirely understood through the lens of persuasion. Our problem is in fact equivalent to a "contracting with persuasion" problem in which the agent has a prior on an unknown state \(s\), and the principal commits to a contract \(\psi(m,q,s)\) and a signal structure \(\phi\colon S\to\Delta(M).\) The agent sees a signal realization \(m\sim\phi(s)\) and then forms a posterior belief about the contract she faces, taking into account the principal's commitment. This is not the model we present for two reasons: it does not capture well agents' often limited understanding of their environments in the settings of interest to us, and it relies on the assumption that principals can commit to signal structures. As such, the persuasion model to which ours most closely relates is Lin and Liu (2022), which derives conditions under which it is without loss to replace the commitment assumption with a "credibility" requirement. Their credibility requirement resembles our consistency requirement (though without payments): a disclosure policy is credible if the principal cannot profit from tampering with her signals while keeping the signal distribution unchanged.
## 6 Conclusion
This paper introduced _described contracts_ into a moral hazard framework in which a principal contracts with a continuum of agents. We provided a geometric characterization of the optimal described contract, and conditions under which it is transparent, and conversely, opaque. We showed how opacity alters the canonical contracting tradeoff between incentives and insurance: opacity may create an (on average) profitable wedge between agents' expected and realized payments.
In future work, we hope to better understand the effects of opacity on agents, as regulations that call for more transparency draw on arguments about how transparency affects agents like consumers and workers. In this paper, we looked only at agent welfare--but opaque contracts lead to greater heterogeneity in ex-post agent outcomes than transparent mechanisms. So, if a regulator cares about "equal treatment" of agents who have different characteristics, then allowing for described mechanisms is detrimental to the cause (compared to transparent or uniform contracts). In addition, opaque contracts, although they may lead to higher average welfare, also may entail violations of agents' _ex-post_ individual rationality. And even if agents are not led into a violation of their ex-post rationality constraint, they are "misled" in the sense that their ex-post utility under the described mechanism is different from their ex-post utility _had they known everything that the principal knew_ about the intended payment.
|
2309.13226 | Real3D-AD: A Dataset of Point Cloud Anomaly Detection | High-precision point cloud anomaly detection is the gold standard for
identifying the defects of advancing machining and precision manufacturing.
Despite some methodological advances in this area, the scarcity of datasets and
the lack of a systematic benchmark hinder its development. We introduce
Real3D-AD, a challenging high-precision point cloud anomaly detection dataset,
addressing the limitations in the field. With 1,254 high-resolution 3D items
from forty thousand to millions of points for each item, Real3D-AD is the
largest dataset for high-precision 3D industrial anomaly detection to date.
Real3D-AD surpasses existing 3D anomaly detection datasets available regarding
point cloud resolution (0.0010mm-0.0015mm), 360 degree coverage and perfect
prototype. Additionally, we present a comprehensive benchmark for Real3D-AD,
revealing the absence of baseline methods for high-precision point cloud
anomaly detection. To address this, we propose Reg3D-AD, a registration-based
3D anomaly detection method incorporating a novel feature memory bank that
preserves local and global representations. Extensive experiments on the
Real3D-AD dataset highlight the effectiveness of Reg3D-AD. For reproducibility
and accessibility, we provide the Real3D-AD dataset, benchmark source code, and
Reg3D-AD on our website:https://github.com/M-3LAB/Real3D-AD. | Jiaqi Liu, Guoyang Xie, Ruitao Chen, Xinpeng Li, Jinbao Wang, Yong Liu, Chengjie Wang, Feng Zheng | 2023-09-23T00:43:38Z | http://arxiv.org/abs/2309.13226v3 | # Real3D-AD: A Dataset of Point Cloud Anomaly Detection
###### Abstract
High-precision point cloud anomaly detection is the gold standard for identifying the defects of advancing machining and precision manufacturing. Despite some methodological advances in this area, the scarcity of datasets and the lack of a systematic benchmark hinder its development. We introduce Real3D-AD, a challenging high-precision point cloud anomaly detection dataset, addressing the limitations in the field. With 1,254 high-resolution 3D items (from forty thousand to millions of points for each item), Real3D-AD is the largest dataset for high-precision 3D industrial anomaly detection to date. Real3D-AD surpasses existing 3D anomaly detection datasets available regarding point cloud resolution (0.0010mm-0.0015mm), 360 degree coverage and perfect prototype. Additionally, we present a comprehensive benchmark for Real3D-AD, revealing the absence of baseline methods for high-precision point cloud anomaly detection. To address this, we propose Reg3D-AD, a registration-based 3D anomaly detection method incorporating a novel feature memory bank that preserves local and global representations. Extensive experiments on the Real3D-AD dataset highlight the effectiveness of Reg3D-AD. For reproducibility and accessibility, we provide the Real3D-AD dataset, benchmark source code, and Reg3D-AD on our website:[https://github.com/M-3LAB/Real3D-AD](https://github.com/M-3LAB/Real3D-AD).
## 1 Introduction
**Real3D-AD Motivation: 3D > 2.5D.** There is a pressing need for a high-resolution point cloud anomaly detection dataset that bridges the gap between academia and industry and brings the capabilities of point cloud anomaly detection to the factory floor. Point cloud anomaly detection is widely deployed in real-world production lines. However, the 3D anomaly detection datasets released by academia are RGBD (2.5D), which falls short of the demands of industrial manufacturing. Advanced machining and precision manufacturing require no blind spots throughout the inspection process. Nevertheless, blind spots exist because RGBD datasets are acquired via single-view scanning. The lack of a real point cloud anomaly detection dataset hinders the further development of 3D
anomaly detection. Because of this, it is crucial and urgent to propose a point cloud anomaly detection dataset that meets industrial manufacturing needs.
**Limitations of current 3D-AD datasets and the Real3D-AD advantage.** To address this problem, we present a large-scale, high-resolution 3D anomaly detection dataset, Real3D-AD, to support the research and development of 3D anomaly detection methods. Although two 3D anomaly detection datasets (MVTec 3D-AD [20] and Eyecandies [3]) have been proposed, several limitations remain: 1) The precision of MVTec 3D-AD is insufficient to satisfy the requirements of high-precision point cloud anomaly detection. Specifically, MVTec 3D-AD offers a limited number of points per object (4,147) with a point precision of 0.11mm. Real3D-AD offers a significantly greater number of points per object, estimated at 1.3 million, approximately 100 times more than MVTec 3D-AD. Moreover, the point precision of Real3D-AD reaches up to 0.010mm, ten times finer than that of MVTec 3D-AD. A detailed analysis is given in Table 1. 2) 3D anomaly detection datasets collected with an RGBD camera, such as MVTec 3D-AD and Eyecandies, contain blind spots. Identifying defects may pose a challenge when relying solely on a single view for inspection. Real3D-AD is collected using high-resolution laser scans, making it possible to spot defects anywhere on the product, as shown in Figure 1. 3) The simulated dataset (Eyecandies) does not extend well to realistic scenarios. Owing to the commercial sensitivity of real products, it is difficult to collect CAD models of real-world products, so most researchers adopt simulation software such as the _Blender_ framework [9]. Nevertheless, neither textures nor anomaly details are synthesized with high fidelity. Real3D-AD collects products from real-world applications and obtains excellent prototypes via a high-resolution 3D scanner. Hence, as shown in Table 3, three key characteristics distinguish Real3D-AD from prior 3D anomaly detection datasets: high precision, no blind spots, and realistic, high-accuracy prototypes.
**Benchmark & Baseline.** To expedite research efforts towards developing a universal high-precision point cloud anomaly detection approach, we have constructed a comprehensive and structured large-scale benchmark, referred to as ADBench-3D. Additionally, we have developed a registration-based baseline method that aligns with the prerequisites of high-resolution 3D anomaly detection. Given the practical constraints, the number of training samples available for each category is restricted (less than or equal to four), as creating a high-accuracy prototype for each category is a time-intensive process (requiring up to two days per category). The configuration of ADBench-3D differs from that of contemporary unsupervised anomaly detection tasks in the 3D realm. Section 4.1 provides a
Figure 1: Real3D-AD dataset examples for each category. The blue box indicates the normal images in the training dataset. The red box denotes the abnormal images in the test dataset. There are no blind spots in Real3D-AD, since our data are acquired by scanning all views of each object instead of the single view captured by an RGBD camera.
comprehensive description of the setting. To be more specific, the training examples are limited (\(\leq\) 4), and the test samples are scanned from only one side. The motivation is to simulate the real-world application: the scanning positions on the production line are fixed, and one position can only capture one side of the product. Furthermore, to facilitate precise performance comparison within the community and guarantee replicability, the ADBench-3D framework encompasses a comprehensive end-to-end pipeline that includes data preprocessing, 3D-AD algorithms, evaluation scripts, metrics, and a visualization toolkit. ADBench-3D comprises a set of 8 fundamental 3D anomaly detection methodologies that have been implemented and tested on the Real3D-AD dataset. Moreover, the results presented in Table 4 indicate that the majority of current 3D-AD techniques are unable to attain satisfactory performance on Real3D-AD, as evidenced by their object-level AUROC scores falling short of 50%. Consequently, we propose a registration-based 3D-AD method (Reg3D-AD) that serves as a versatile solution to cater to the requirements of the Real3D-AD dataset. The Reg3D-AD model introduces a novel feature memory bank, as illustrated in Figure 8, designed to preserve both local and global features. The test objects are aligned with the training prototype during the inference process, and their features are extracted locally and globally. The defects are identified by assessing the distance between the features of the test object and those of the training prototypes. Therefore, Real3D-AD and ADBench-3D present a step towards unifying disjoint efforts in 3D anomaly detection research and pave the way toward a deeper understanding of 3D anomaly detection models.
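The scoring logic just described can be sketched as a dual memory bank with nearest-neighbor lookup. This is only a schematic, not the authors' implementation: it assumes the test point cloud has already been registered to the training prototypes (e.g., with coarse registration followed by ICP), uses random stand-ins for the per-point local features and per-object global features that an actual backbone would produce, and combines the two scores with a simple rule of our own. Object-level AUROC is then computed from the resulting scores.

```python
import numpy as np
from scipy.spatial import cKDTree
from sklearn.metrics import roc_auc_score

class DualMemoryBank:
    """Nearest-neighbor anomaly scoring against local (per-point) and global (per-object) features."""

    def __init__(self, local_feats, global_feats):
        # local_feats: (N, d) features pooled from the registered training prototypes
        # global_feats: (M, D) one feature vector per training prototype
        self.local_tree = cKDTree(local_feats)
        self.global_bank = np.asarray(global_feats)

    def score(self, test_local, test_global):
        # per-point anomaly score: distance to the closest stored local feature
        point_scores, _ = self.local_tree.query(test_local, k=1)
        # object-level score: max point score plus global-feature distance
        # (an illustrative combination rule, not the paper's)
        global_dist = np.linalg.norm(self.global_bank - test_global, axis=1).min()
        return point_scores, point_scores.max() + global_dist

# toy usage with random stand-in features
rng = np.random.default_rng(0)
bank = DualMemoryBank(rng.normal(size=(5000, 64)), rng.normal(size=(4, 128)))
labels, scores = [], []
for is_anomalous in [0, 1, 0, 1]:
    shift = 3.0 if is_anomalous else 0.0
    feats = rng.normal(loc=shift, size=(2000, 64))  # registered test point features
    _, obj_score = bank.score(feats, rng.normal(size=128))
    labels.append(is_anomalous)
    scores.append(obj_score)
print("object-level AUROC:", roc_auc_score(labels, scores))
```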
Overall, the main contributions of this paper are:
* We create the first-ever high-resolution 3D anomaly detection dataset (Real3D-AD), enabling the design of high-resolution 3D anomaly detection algorithms, and make it publicly available. Real3D-AD exhibits three primary attributes that set it apart from previous studies on 3D anomaly detection datasets: a high level of precision, an absence of blind spots, and realistic, high-accuracy prototypes.
* The end-to-end pipeline offered by ADBench-3D includes data preparation, data splits, evaluation metrics and scripts, and visualization toolkits. ADBench-3D conducts a large-scale systematic assessment (8 main algorithms on Real3D-AD).
* We propose a general-purpose registration-based 3D anomaly detection method (Reg3D-AD). The efficacy of Reg3D-AD has been demonstrated through comprehensive experimentation on the Real3D-AD dataset, surpassing the next-best approach by a significant margin.
## 2 Related work
**3D-AD Datasets.** The datasets for 2D anomaly detection (2D-AD) are abundant, with a history tracing back to 2007 [29]. Over 20 different datasets are available for 2D-AD [11; 25; 17; 31]. The numerous 2D-AD datasets have given rise to many related works. Some studies have approached the problem from the perspectives of image reconstruction [35; 8], feature distillation [12; 4; 27], and feature comparison [22]. A body of research also focuses on specific scenarios such as few-shot anomaly detection [30; 36; 7] and noisy anomaly detection [16]. In contrast, the number of datasets for 3D anomaly detection (3D-AD) is considerably limited. The first 3D-AD dataset was introduced in 2021, and today there are only two 3D-AD datasets: the MVTec 3D-AD dataset [20] and the Eyecandies dataset [3]. MVTec 3D-AD [20] is a dataset designed for 3D point cloud anomaly detection, and it is the only point cloud dataset for AD. It contains 2,656 pairs of images as training sets, 294 pairs of images as validation sets, and 249 pairs of normal images and 948 pairs of abnormal images that form the test set. The dataset comprises a total of 41 distinct types of anomalies, with a combined count of 1,148 anomaly regions. Each pair of images consists of an RGB image and a tiff image representing the spatial coordinates of each pixel. The resolution of the images varies from 400\(\times\)400 to 900\(\times\)900. The Eyecandies dataset [3] is a novel synthetic dataset comprising ten different categories of candies rendered in a controlled environment. It consists of 13,250 pairs of normal samples and 2,250 pairs of abnormal samples. Each depth image corresponds to six RGB images under different lighting conditions. Both MVTec 3D-AD and Eyecandies are RGBD datasets, limited to single-view information. To further explore the value of spatial information in the AD task, we propose the Real3D-AD dataset, which expands the object information to 3D space. Prototypes in the Real3D-AD training set encompass comprehensive object information from various views. The test set also includes multi-view information on objects, allowing for a more extensive exploration of the value of 3D information in AD tasks.
**3D-AD methods.** Recently, many high-quality papers have emerged in the field of 2D-AD [33; 37; 32; 26]. The release of MVTec 3D-AD also sparked interest in 3D-AD methods [15; 23; 5; 28; 8]. However, 3D anomaly detection remains far less studied than 2D anomaly detection. Some methods only use depth information to remove background noise, which limits the use of depth information. Meanwhile, combining RGB and depth information without compromising performance remains a challenge. Bergmann _et al._[1] propose a point cloud feature extraction network based on the teacher-student model. During training, the student and teacher networks maintain consistent features, and the differences between their extracted features are used to locate anomalies during testing. Horwitz _et al._[15] combine hand-crafted 3D descriptors with the classical KNN-based AD framework. While both of these methods are feasible, their performance is limited. AST [23] performs well on MVTec 3D-AD but only uses depth information to remove the background. AST still uses a 2D-AD method to detect anomalies, ignoring the depth information of the object. M3DM [28] extracts features from point clouds and RGB images separately and fuses them for better decision-making. This approach is superior to BTF but relies heavily on large pre-trained models and memory banks. CPMF [6] also uses the KNN paradigm. However, it projects the point cloud into two-dimensional images from different angles, significantly reducing the complexity and computational cost of feature extraction, and fuses the resulting information for detection. In summary, existing 3D-AD models either perform poorly or rely heavily on pre-trained models and memory banks. Currently, there is a lack of anomaly detection methods that use point cloud information, and the available datasets for research in this field are only MVTec 3D-AD, which provides depth information, and the artificially synthesized Eyecandies [3] dataset. To bring attention and research to this area, we introduce the Real3D-AD dataset.
## 3 Real3D-AD dataset
### Data Collection
We outline the pipeline for generating the Real3D-AD dataset, including the description of a high-resolution scanner, the construction of prototypes, the generation of anomalies, and an assessment of the labor and time required for the process.
**Description of the high-resolution and high-precision 3D scanner.** To obtain precise 3D anomaly detection data, we utilize a high-resolution binocular 3D scanner, the PMAX-S130, as illustrated in Figure 2. The PMAX-S130 optical system comprises a pair of low-distortion lenses, a high-luminance LED, and a blue-light filter. The filter selectively allows only blue light of a specific wavelength to pass through; because blue light of that wavelength has a relatively low concentration in natural and artificial lighting, the filter screens out most ambient light. Nevertheless, blue light-emitting sources in the environment could still pose an obstacle in this context. The image sensor collects light through the lens aperture, so the influence exerted by ambient light is vastly reduced. The device is therefore able to perform scanning operations under the intricate lighting conditions frequently encountered in workshop environments. This is accomplished by employing a cold light source of high-brightness LEDs, which prolongs the device's longevity and reduces heat emissions while ensuring consistent scanning precision. Moreover, the scanning precision is further improved by the low-distortion lenses. The data presented in Table 1 demonstrate that the PMAX-S130 performs better than the Zivid camera (used by MVTec 3D-AD), particularly regarding point precision. Real3D-AD achieves a point precision roughly 10 times finer and a spatial distance roughly 4.28 times smaller than MVTec 3D-AD. Thus, Real3D-AD provides a pathway to advance high-precision point cloud anomaly detection.
| Scanner | PMAX-S130 | Zivid One-Plus\({}^{a}\) |
| --- | --- | --- |
| Dataset | Real3D-AD (Ours) | MVTec 3D-AD [1] |
| FOV | 100cm to 400cm | 60cm to 200cm |
| Point Precision | 0.011mm-0.015mm | 0.11mm |
| Spatial Distance | 0.04mm-0.07mm | 0.37mm |
| 3D Format | ASC, PLY, STL, OBJ, IGES | TIFF |

\({}^{a}\)[https://www.zivid.com](https://www.zivid.com)

Table 1: Comparison of data collection equipment.
Figure 2: PMAX-S130.
**Prototype construction.** The prototype construction process is shown in Figure 3. Initially, the stationary object is scanned while the turntable completes a full 360\({}^{\circ}\) revolution, enabling the scanner to capture the various facets of the object. The object is then flipped over, and the rotation-and-scan process is repeated. After the front and back scans are manually aligned, the algorithm precisely calibrates the stitching. If any gaps remain in the result, the scan-stitching process is repeated until a complete point cloud is obtained.
**Anomaly types and labeling.** Point cloud anomalies fall into two categories: incompleteness and redundancy. CloudCompare (2016) [10] is used to annotate the point cloud data; the labeling process is depicted in Figure 4. First, a _pcd_ file is imported into the CloudCompare software and the viewing angle and point cloud display size are adjusted. The anomalous and non-anomalous regions are then segregated and assigned the corresponding labels for each point cloud. The final annotation is exported as a text file.
**Labor and time costs.** Collecting and labeling the Real3D-AD dataset is labor-intensive. Constructing a prototype takes 1.2 days per object with a team of three: the first person conducts the scans, the second handles manual calibration, and the third is responsible for labeling. Generating anomalies involves a team of four: the first person focuses on incompleteness anomalies, the second on redundancy anomalies, and the remaining two members are primarily responsible for labeling the anomalies. Each anomalous specimen takes 5 hours to complete. Owing to the substantial workload, Real3D-AD required a team of seven people and roughly four months to complete.

Figure 4: Anomaly annotation in Real3D-AD.

Figure 5: Incompleteness annotation in Real3D-AD. The first column shows the normal sample, the second column the abnormal sample, and the third column the annotation of the incompleteness. Specifically, we mark the cross-section and the broken edges as anomalies, which does not introduce extra points into the annotation.

Figure 3: A prototype in the training set is made from two or more scan results.
### Data Statistics
The statistics of the Real3D-AD dataset are presented in Table 2. The table lists the dataset categories, the number of training prototypes, the number of normal and abnormal samples in the test set, and the mean proportion of abnormal points in the abnormal test samples. Real3D-AD comprises a total of 1,254 samples distributed across 12 distinct categories: Airplane, Car, Candybar, Chicken, Diamond, Duck, Fish, Gemstone, Seahorse, Shell, Starfish, and Toffees, all of which are toys from manufacturing lines. Each training set for a specific category contains only four samples, similar to the few-shot scenario in 2D anomaly detection. As Table 2 shows, the low anomaly point ratio makes anomaly detection challenging. Moreover, most categories consist of transparent objects, which makes the Real3D-AD dataset particularly suitable for tasks involving point cloud anomaly detection.
Furthermore, a box-and-whisker plot of the number of points per sample across the Real3D-AD dataset is shown in Figure 6. Two observations can be made. First, the number of points varies considerably across object categories; in particular, the training samples are complete prototypes of the 3D objects, whereas the test samples are scanned from only one side, so training samples contain far more points than test samples. Second, within each test set the difference in point counts between normal and abnormal samples is comparatively small.
### Real3D-AD and other datasets
The comparison in Table 3 shows that Real3D-AD compares favorably with MVTec 3D-AD [20] and Eyecandies [3], particularly in terms of point resolution, point precision, absence of blind spots, and dataset authenticity. Real3D-AD achieves a point resolution of 0.04mm and a point precision of 0.011mm, improving on MVTec 3D-AD by factors of roughly 9 and 10, respectively. Furthermore, Real3D-AD benefits from multi-view scanning, which eliminates any potential blind spots and thereby improves its anomaly detection capabilities. Therefore, Real3D-AD is better suited to high-precision point cloud anomaly detection and can accommodate industrial manufacturing requirements.
| Category | Length [mm] | Width [mm] | Height [mm] | Attribute | Training (Normal) | Test (Normal) | Test (Abnormal) | Total | Anomaly Point Ratio \(\Delta\) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Airplane | 34.0 | 14.2 | 31.7 | Transparency | 4 | 50 | 50 | 104 | 1.18% |
| Car | 35.0 | 29.0 | 12.5 | Transparency | 4 | 50 | 50 | 104 | 1.99% |
| Candybar | 33.0 | 20.0 | 8.0 | Transparency | 4 | 50 | 50 | 104 | 2.37% |
| Chicken | 25.0 | 14.0 | 20.0 | White | 4 | 52 | 54 | 110 | 4.39% |
| Diamond | 29.0 | 29.0 | 18.7 | Transparency | 4 | 50 | 50 | 104 | 5.41% |
| Duck | 30.0 | 22.2 | 29.4 | Transparency | 4 | 50 | 50 | 104 | 2.00% |
| Fish | 37.7 | 24.0 | 4.0 | Transparency | 4 | 50 | 50 | 104 | 2.86% |
| Gemstone | 22.5 | 18.8 | 17.0 | Transparency | 4 | 50 | 50 | 104 | 2.06% |
| Seahorse | 38.0 | 11.2 | 3.5 | Transparency | 4 | 50 | 50 | 104 | 4.57% |
| Shell | 21.7 | 22.0 | 7.7 | Transparency | 4 | 52 | 48 | 104 | 2.25% |
| Starfish | 27.4 | 27.4 | 4.8 | Transparency | 4 | 50 | 50 | 104 | 4.47% |
| Toffees | 38.0 | 12.0 | 10.0 | Transparency | 4 | 50 | 50 | 104 | 2.46% |
| Mean | 30.9 | 20.3 | 13.9 | — | 4 | 50 | 50 | 104 | 3.00% |
| Total | — | — | — | — | 48 | 604 | 602 | 1254 | — |

Table 2: The statistics of Real3D-AD. Note that \(\Delta\) refers to the proportion of abnormal points in abnormal samples.
## 4 Benchmark and Baseline
### Problem Definition and Challenges
**Problem definition.** The ADBench-3D setting can be formally stated as follows. We are given a set of training examples \(\mathcal{T}=\{t_{i}\}_{i=1}^{N}\), where \(t_{1},t_{2},\cdots,t_{N}\) are the training prototypes. In Real3D-AD, the number of prototypes for each category is limited (\(\leq 4\)). Each \(t_{i}\) belongs to one category \(c_{j}\in\mathcal{C}\), where \(\mathcal{C}\) denotes the set of all categories. At test time, given a normal or abnormal sample from a target category \(c_{j}\), the AD model should predict whether or not the test 3D object is anomalous and, if so, localize the anomalous region.
**Training and test sample visualization.** Figure 7 shows the training prototypes and test samples. The images in the blue box, labeled (a)-(d), are prototypes: each training prototype is obtained from a \(\mathbf{360}^{\circ}\) scan, ensuring no areas of limited visibility exist. The images in the orange box, labeled (e)-(h), are test samples. **To simulate real-world conditions, each test sample is scanned from only one side, mirroring the real-world application in which workers or quality-inspection equipment on the production line randomly check one side of a product and identify defects by matching the scanned data against the prototype.**
**Challenges.** There are three main challenges. (1) The training set of each category contains only normal prototypes, with no object- or point-level anomaly annotations. (2) Very few normal prototypes are available: in the ADBench-3D setting there are at most four training prototypes per category. (3) There are unavoidable differences between the test samples and the training prototypes, which need to be addressed.
### ADBench-3D
**Metrics.** We standardize evaluation using metrics designed for 3D anomaly detection: the Area Under the Receiver Operating Characteristic curve (AUROC) and the Area Under the Precision-Recall curve (AUPR/AP). The details of the metrics are given in the supplementary material.
| Dataset | MVTec 3D-AD | Eyecandies | Real3D-AD (Ours) |
| --- | --- | --- | --- |
| Point Resolution | 0.37mm | Not applicable | **0.04mm** |
| Point Precision | 0.11mm | Not applicable | **0.011mm** |
| All Views (No Blind Spot) | ✗ | ✗ | ✓ |
| Real/Synthetic | Real | Synthetic | Real |

Table 3: Comparison of the main datasets.
Figure 6: Point numbers for all samples on a logarithmic scale, visualized by a box-and-whisker plot.
Figure 7: Examples of training and test samples in Real3D-AD.
**Methods.** As discussed in Section 2, existing 3D anomaly detection methods mainly target RGB-D anomaly detection rather than pure point cloud anomaly detection, so we adopt BTF [15] and M3DM [28] as benchmark methods. We build a systematic benchmark, ADBench-3D, for the proposed Real3D-AD dataset, as shown in Table 4. In Table 4, BTF(Raw) means that only the raw coordinate features (xyz) are fed into the BTF pipeline, while BTF(FPFH) incorporates fast point feature histograms (FPFH) [24] into the BTF pipeline. M3DM(PointMAE) denotes M3DM using PointMAE [19] as the point cloud feature extractor and ignoring the RGB branch, and M3DM(PointBERT) denotes M3DM using PointBERT [34] as the point cloud feature extractor and ignoring the RGB branch. PatchCore+FPFH replaces the ResNet [14] backbone with an FPFH feature extractor inside PatchCore [21]; PatchCore+FPFH+Raw additionally injects the raw point coordinates together with the FPFH features into the PatchCore pipeline; and PatchCore+PointMAE adopts the PointMAE feature extractor and merges its features into the PatchCore architecture.
**Toolkit.** To complement ADBench-3D, we release a comprehensive toolkit as starter code for high-precision point cloud anomaly detection. It implements 8 core methods and covers (1) data preprocessing, (2) evaluation scripts and metrics, and (3) visualization utilities. Due to the page limit, the toolkit details are provided in the GitHub repository.
### Reg3D-AD
Inspired by PatchCore [22], we develop a general-purpose registration-based point cloud anomaly detection method (Reg3D-AD), shown in Figure 8, designed to meet the demands of Real3D-AD. Reg3D-AD utilizes a dual-feature representation that preserves both the local and the global features of the training prototypes. The first feature is the coordinate value of each point (its x-, y-, and z-values), which captures the localization attributes of individual points; the second is the PointMAE feature, which characterizes a training prototype as a whole. The goal of the training phase is to build a memory bank of neighborhood-aware features derived from all normal prototypes. Before adding new features to the memory bank, we apply coreset sampling to keep its size manageable.
**Anomaly score calculation.** Before anomaly scores are computed, the test 3D object is registered via the RANSAC algorithm [2]. After registration, the test 3D object is predicted as anomalous if at least one of its points is anomalous, and point-level anomaly segmentation is computed from the mean of the point-level (local) feature score and the global feature score. In particular, with a local feature bank \(\mathcal{M}^{l}\) and a global feature bank \(\mathcal{M}^{g}\), the object-level anomaly score \(s\) for the test object \(x^{test}\) is computed as the mean of the local feature anomaly score \(s^{l}\) and the global feature anomaly score \(s^{g}\). The local feature anomaly score is derived from the maximum score \(s^{l*}\) between the point-level features \(\mathcal{P}(x^{test})\) and their respective nearest neighbours in \(\mathcal{M}^{l}\):
| Category | BTF (Raw) | BTF (FPFH) | M3DM (PointMAE) | M3DM (PointBERT) | PatchCore (FPFH) | PatchCore (FPFH+Raw) | PatchCore (PointMAE) | Reg3D-AD |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Airplane | 0.730 | 0.520 | 0.434 | 0.407 | **0.882** | 0.848 | 0.726 | 0.716 |
| Car | 0.647 | 0.560 | 0.541 | 0.506 | 0.590 | **0.777** | 0.498 | 0.697 |
| Candybar | 0.539 | 0.630 | 0.552 | 0.562 | 0.541 | 0.570 | 0.663 | **0.685** |
| Chicken | 0.789 | 0.432 | 0.683 | 0.673 | 0.837 | **0.853** | 0.827 | 0.852 |
| Diamond | 0.707 | 0.545 | 0.602 | 0.627 | 0.574 | 0.784 | 0.783 | **0.900** |
| Duck | 0.691 | **0.784** | 0.433 | 0.466 | 0.546 | 0.628 | 0.489 | 0.584 |
| Fish | 0.602 | 0.549 | 0.540 | 0.556 | 0.675 | 0.837 | 0.630 | **0.915** |
| Gemstone | **0.686** | 0.648 | 0.644 | 0.617 | 0.370 | 0.359 | 0.374 | 0.417 |
| Seahorse | 0.596 | **0.779** | 0.495 | 0.494 | 0.505 | 0.767 | 0.539 | 0.762 |
| Shell | 0.396 | **0.754** | 0.694 | 0.577 | 0.589 | 0.663 | 0.501 | 0.583 |
| Starfish | 0.530 | **0.575** | 0.551 | 0.528 | 0.441 | 0.471 | 0.519 | 0.506 |
| Toffees | 0.703 | 0.462 | 0.450 | 0.442 | 0.565 | 0.626 | 0.585 | **0.827** |
| Average | 0.635 | 0.603 | 0.552 | 0.538 | 0.593 | 0.682 | 0.594 | **0.704** |

Table 4: ADBench-3D for Real3D-AD. The score indicates object-level AUROC \(\uparrow\). The best results are highlighted in bold.
\[m^{test,*}=\operatorname*{arg\,max}_{m^{test}\in\mathcal{P}(x^{test})}\,\min_{m^{l}\in\mathcal{M}^{l}}\left\|m^{test}-m^{l}\right\|_{2},\quad m^{l*}=\operatorname*{arg\,min}_{m^{l}\in\mathcal{M}^{l}}\left\|m^{test,*}-m^{l}\right\|_{2}, \tag{1}\]
\[s^{l*}=\left\|m^{test,*}-m^{l*}\right\|_{2}. \tag{2}\]
To enhance the robustness of the anomaly detection model, PatchCore employs an importance re-weighting method [18] to adjust the anomaly score:
\[s^{l}=\left(1-\frac{\exp\left\|m^{test,*}-m^{*}\right\|_{2}}{\sum_{m\in \mathcal{N}_{b}(m^{*})}\exp\left\|m^{test,*}-m\right\|_{2}}\right)\cdot s^{l*}, \tag{3}\]
where \(\mathcal{N}_{b}(m^{*})\) denotes the \(b\) nearest patch-features to \(m^{*}\) in the memory bank \(\mathcal{M}^{l}\). The global feature anomaly score \(s^{g}\) is computed analogously using the global feature memory bank \(\mathcal{M}^{g}\). Finally, the total anomaly score of each point cloud is \(s^{t}=(s^{l}+s^{g})/2\).
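To make the scoring concrete, the following is a minimal NumPy/scikit-learn sketch of the local part of Eqs. (1)-(3); it is an illustrative re-implementation (the function name, toy data, and the use of a single in-memory feature bank are our own assumptions, not the released Real3D-AD code), and the global score \(s^{g}\) is obtained analogously from \(\mathcal{M}^{g}\).

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def local_anomaly_score(test_feats, memory_bank, b=3):
    """PatchCore-style local scoring (Eqs. 1-3), sketch only.

    test_feats:  (num_test_points, feat_dim) array of features P(x^test).
    memory_bank: (bank_size, feat_dim) array, the local memory bank M^l.
    Returns the object-level score s^l and the per-point anomaly map.
    """
    nn = NearestNeighbors(n_neighbors=1).fit(memory_bank)
    dists, nn_idx = nn.kneighbors(test_feats)
    point_map = dists[:, 0]                    # distance of every test point to M^l

    i_star = int(np.argmax(point_map))         # index of m^{test,*}
    m_test_star = test_feats[i_star]
    m_l_star = memory_bank[nn_idx[i_star, 0]]  # m^{l*}, its nearest neighbour in M^l
    s_l_star = point_map[i_star]               # Eq. (2)

    # Importance re-weighting (Eq. 3) over the b nearest neighbours of m^{l*} in M^l
    nb = NearestNeighbors(n_neighbors=b).fit(memory_bank)
    _, nb_idx = nb.kneighbors(m_l_star[None, :])
    d_star = np.linalg.norm(m_test_star - m_l_star)
    d_nb = np.linalg.norm(m_test_star - memory_bank[nb_idx[0]], axis=1)
    shift = d_nb.max()                         # shared shift keeps the exponentials stable
    w = 1.0 - np.exp(d_star - shift) / np.exp(d_nb - shift).sum()
    return w * s_l_star, point_map

# Toy usage: the total score is s^t = (s^l + s^g) / 2 once s^g is computed likewise.
rng = np.random.default_rng(0)
s_l, point_map = local_anomaly_score(rng.normal(size=(500, 32)),
                                     rng.normal(size=(2000, 32)))
```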
**Analysis of ADBench-3D.** The results in Table 4 indicate that most existing 3D anomaly detection algorithms do not meet the requirements of Real3D-AD. Revisiting the setting outlined in Section 4.1, it closely resembles few-shot anomaly detection, since the training set for each category comprises a mere 4 prototypes, and most contemporary state-of-the-art 3D anomaly detection algorithms are not designed for such few-shot scenarios. To tackle this challenge, it is essential to make full use of the prototype data and to ensure that the extracted point cloud features are unaffected by relative spatial positions. Table 4 clearly shows that our baseline method, Reg3D-AD, outperforms state-of-the-art 3D anomaly detection methods on the Real3D-AD dataset.
## 5 Limitations \(\&\) Potential negative societal impacts
**Limitations.** There remains broad scope for improvement and exploration beyond our work. For example, our data comes from a 3D scanner and contains only spatial information, which is common practice in industrial production; however, standardized RGB point cloud templates could be obtained by calibrating and stitching multiple RGBD images or by rendering with modeling software. Such RGB point cloud templates could be applied simultaneously to RGB image (2D) anomaly detection and point cloud (3D) anomaly detection. Additionally, our data can be used to generate depth images from different angles by controlling the rendering conditions, enabling anomaly detection from that perspective; this possibility has yet to be explored.
Figure 8: Pipeline of our baseline method. We extract features from the training set and sample the most representative features to the memory bank during training. During inference, we use the prototype as the target to calibrate the test sample and then extract the characteristics of the test sample to compare with the memory bank. We compute the anomaly score for each point according to the distance between test features and the memory bank.
Furthermore, although our baseline outperforms existing anomaly detection methods, it is still susceptible to false detections because the edges of the test point clouds are truncated. More advanced models are therefore expected to address these issues more effectively. We hope that our work, as a first attempt at full-view point cloud anomaly detection, will inspire further exploration in this field.
**Potential negative societal impacts.** Our data is obtained by scanning industrial products, so we do not foresee any negative societal impact.
## 6 Conclusion
In this work, we propose the Real3D-AD dataset to investigate high-precision point cloud anomaly detection, aiming to facilitate research on defect identification for advanced machining and precision manufacturing. Real3D-AD is the largest high-precision 3D industrial anomaly detection dataset to date, comprising 1,254 high-resolution 3D items (\(\geq\) one million points per item) spanning 12 categories of real-world objects. In terms of point cloud precision (0.011mm-0.015mm), \(360^{\circ}\) coverage, and flawless prototypes, Real3D-AD surpasses currently available 3D anomaly detection datasets. In addition, we provide a thorough assessment of the Real3D-AD dataset, highlighting the absence of baseline approaches for high-precision point cloud anomaly detection applications. We put forward a general registration-based 3D anomaly detection method (Reg3D-AD) with a 3D feature coupling unit that preserves both local features and global representations. Experiments on the Real3D-AD dataset show that Reg3D-AD performs significantly better than the next-best approach.
**Acknowledgments.** This work is supported by the National Key R&D Program of China (Grant NO. 2022YFF1202903) and the National Natural Science Foundation of China (Grant NO. 62122035, 62206122).
2306.12965 | Improved Financial Forecasting via Quantum Machine Learning | Sohum Thakkar, Skander Kazdaghli, Natansh Mathur, Iordanis Kerenidis, André J. Ferreira-Martins, Samurai Brito | 2023-05-31T14:57:05Z | http://arxiv.org/abs/2306.12965v2

# Improved Financial Forecasting via Quantum Machine Learning
###### Abstract
Quantum algorithms have the potential to enhance machine learning across a variety of domains and applications. In this work, we show how quantum machine learning can be used to improve financial forecasting. First, we use classical and quantum Determinantal Point Processes to enhance Random Forest models for churn prediction, improving precision by almost 6%. Second, we design quantum neural network architectures with orthogonal and compound layers for credit risk assessment, which match classical performance with significantly fewer parameters. Our results demonstrate that leveraging quantum ideas can effectively enhance the performance of machine learning, both today as quantum-inspired classical ML solutions, and even more in the future, with the advent of better quantum hardware.
## 1 Introduction
Quantum computing is a rapidly evolving field that promises to revolutionize various domains, and finance is no exception. There is a variety of computationally hard financial problems for which quantum algorithms can potentially offer advantages [24, 16, 39, 6], for example in combinatorial optimization [34, 42], convex optimization [30, 43], Monte Carlo simulations [15, 44, 21], and machine learning [41, 18, 1].
In this work, we explore the potential of quantum machine learning methods in improving the performance of forecasting in finance, specifically focusing on two use cases within the business of Itaú Unibanco, the largest bank in Latin America.
In the first use case, we aim to improve the performance of Random Forest methods for churn prediction. We introduce quantum algorithms for Determinantal Point Processes (DPP) sampling [29], and develop a method of DPP sampling to enhance Random Forest models. We evaluate our model on the churn dataset using classical DPP sampling algorithms and perform experiments on a scaled-down version of the dataset using quantum algorithms. Our results demonstrate that, in the classical setting, the proposed algorithms outperform the baseline Random Forest in precision, efficiency, and bottom line, and also offer a precise understanding of how quantum computing can impact this kind of problem in the future. The quantum algorithm run on an IBM quantum processor gives similar results as the classical DPP on small batch dimensions but falters as the dimensions grow bigger due to hardware noise.
In the second use case, we aim to explore the performance of neural network models for credit risk assessment by incorporating ideas from quantum compound neural networks [33]. We start by using quantum orthogonal neural networks [33], which add the property of orthogonality for the trained model weights to avoid redundancy in the learned features [3]. These orthogonal layers, which can be trained efficiently on a classical computer, are the simplest case of what we call compound neural networks, which explore an exponential space in a structured way. For our use case, we design compound neural network architectures
that are appropriate for financial data. We evaluate their performance on a real-world dataset and show that the quantum compound neural network models both have far fewer parameters and achieve better accuracy and generalization than classical fully-connected neural networks.
This paper is organized as follows: In section 2, we focus on the churn prediction use case and present the DPP-based quantum machine learning methods. In section 3, we present quantum neural network models for risk assessment. Finally, in section 4, we conclude the paper and discuss potential future research directions.
## 2 DPP-enhanced Random Forest models for churn prediction
### Use case
In this study, we focus on predicting customer churn in a banking setting. Churn, defined as a customer withdrawing more than a certain amount of money in a single month, is a significant concern for retail banks. Our objective is to predict which customers are most likely to churn in the next three months using customer data from the previous six months.
The primary dataset used in this study consists of 304,000 datapoints, with 153 features for each datapoint. Each datapoint represents a banking customer at a particular month in time, with the features representing various aspects of their activity over the previous six months. The target variable is a binary flag indicating whether or not the customer churned in at least one of the following three months. The data was anonymized and standardized before being split into training and test sets based on time period, with 130,000 datapoints being set aside as the test set and 174,000 datapoints used for training. The data was split in a way that did not produce any significant covariate shift between the train and test sets.
With the end goal of preventing churn, the model works by flagging customers with the highest risk of potential churn. For these flagged customers, the bank can deploy a representative to intervene and better understand their needs. However, resource limitations make it necessary to flag a relatively small number of customers with high confidence. The focus of this exploration was to reduce false positives in the flagged customers to increase the efficiency of bank interventions. In terms of the precision-recall trade-off, our model should be tuned to provide the highest possible precision for low recall values. Despite this simplification to a classification problem, the primary business KPI is the amount of withdrawal money correctly captured by the model, as discussed more in Sec. 2.4.3.
This use case already had a solution in production: a Random Forest classifier [7], whose performance was used as a benchmark. The model in production already captured a significant amount of churn, but there was clear room for improvement in the amount of withdrawals captured (see Fig. 4). Moreover, given the large number of customers in the dataset and the relative homogeneity of the population of interest, there existed an opportunity to employ techniques that explicitly try to explore diversity in the data.
### Determinant sampling
We will now introduce the Determinantal Point Process (DPP), which lies at the core of the methodology behind our solution to the churn problem. DPPs are a class of probabilistic models that can be used to sample diverse subsets of items from a larger set. They were first formalized by Macchi in 1975 as a way to model fermions in quantum mechanics [38]. More recently, these models are showing increasing promise in the context of machine learning [32], where they can be used for a variety of tasks, such as building unbiased estimators for linear regression [13], performing monte carlo estimation [4], and promoting diversity in model outputs [17].
#### 2.2.1 Definitions
A point process \(P\) on a set \(Y\) is a probability measure over the subsets of \(Y\). Sampling from a point process on \(Y\) will produce some subset \(S\subseteq Y\) with probability \(P(S)\). A repulsive point process is a point process in which points that are more similar to each other are less likely to be selected together.
A determinantal point process is a particular case of a repulsive point process, in which the selection probability of a subset of items \(T\subseteq Y\) is given by a determinant. Given a real, symmetric \(n\times n\) matrix \(K\) indexed by the elements of \(Y\):
\[P\{T\subseteq S\}=\det(K_{T,T})\,\]
where \(K_{T,T}\) denotes the \(|T|\times|T|\) submatrix indexed by the set \(T\) and \(n\) is the cardinality of \(Y\). In other words, the marginal distribution \(P\{T\subseteq S\}\) is defined by the subdeterminants of K.
The above is the most general definition, but in machine learning, we typically focus on a slightly more restrictive class of DPPs called \(L\)-ensembles. In \(L\)-ensembles, the whole distribution, not just the marginals, is given by the subdeterminant of a real, symmetric \(n\times n\) matrix \(L\).
\[P\{S\}\propto\det(L_{S,S})\.\]
Just like \(K\), \(L\) is indexed by the elements of \(Y\). Because of some convenient properties of the determinant [32], we can explicitly write down the distribution of an \(L\)-ensemble:
\[P\{S\}=\frac{\det(L_{S,S})}{\det(L+I)}\.\]
In machine learning literature, DPPs are typically defined over a set of points \(X\), with each item \(x_{i}\) a row in the data matrix \(X\). If we preprocess \(X\) such that its columns are orthonormal and choose \(L\) to be the inner-product similarity matrix, i.e. \(L=XX^{T}\), then the distribution becomes even simpler to write down. Instead of explicitly computing the \(L\) matrix, we can write the distribution in terms of the data matrix \(X\) itself, courtesy of the Cauchy-Binet formula,
\[P\{S\}=\frac{\det(X_{S})^{2}}{\det(XX^{T}+I)}. \tag{1}\]
Moreover, the distribution will almost surely produce samples with size \(d\), the rank of the orthogonalized data matrix \(X\). This kind of DPP is denoted \(d\)-DPP. We will focus here on an application of sampling from a \(d\)-DPP from a data matrix X.
#### 2.2.2 Unbiased least-squares regression
One unique feature of the DPP compared to i.i.d sampling techniques is that it can lead to provably unbiased estimators for least-squares linear regression [11][14]. Given an \(n\times d\) data matrix \(X\) and a target vector \(y\in\mathbb{R}^{n}\), where \(n\gg d\), we wish to approximate the least squares solution \(w^{*}=\operatorname*{argmin}_{w}||Xw-y||\). \(w^{*}\) represents the best-fit parameters to a linear model to predict \(y\).
Surprisingly, if we sample \(d\) points \(S\) from DPP\((XX^{T})\) and solve the reduced system of equations \(y_{S}=X_{S}w\), we get an unbiased estimate of \(w^{*}\). Formally, if \(S\sim d\text{-DPP}_{L}(XX^{T})\),
\[\mathbb{E}[X_{S}^{-1}y_{S}]=\operatorname*{argmin}_{w}||Xw-y||=w^{*}\.\]
This allows us to create an "ensemble" of unbiased linear regressors, each trained on a DPP sample. In some regard, this was the inspiration for trying an ensemble of _decision trees_ trained on DPP samples, as detailed in Sec. 2.3.
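As a small sanity check of this property, the sketch below (illustrative NumPy only) samples size-\(d\) subsets exactly from the \(d\)-DPP of Eq. (1) by enumerating all \(\binom{n}{d}\) subsets, which is feasible only for tiny \(n\), and verifies empirically that averaging \(X_{S}^{-1}y_{S}\) over many draws approaches the least-squares solution.

```python
import itertools
import numpy as np

rng = np.random.default_rng(42)
n, d = 8, 3
A = rng.normal(size=(n, d))
X, _ = np.linalg.qr(A)                 # orthonormal columns, as assumed above
y = A @ rng.normal(size=d) + 0.1 * rng.normal(size=n)

# Exact d-DPP over rows: P(S) proportional to det(X_S)^2 (Eq. 1)
subsets = list(itertools.combinations(range(n), d))
probs = np.array([np.linalg.det(X[list(S), :]) ** 2 for S in subsets])
probs /= probs.sum()

w_star, *_ = np.linalg.lstsq(X, y, rcond=None)   # least-squares solution on X

# Average the per-sample estimators X_S^{-1} y_S over many d-DPP draws
estimates = []
for _ in range(20000):
    S = list(subsets[rng.choice(len(subsets), p=probs)])
    estimates.append(np.linalg.solve(X[S, :], y[S]))

print(np.mean(estimates, axis=0))   # close to w_star (unbiasedness)
print(w_star)
```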
#### 2.2.3 Algorithms for sampling
There are several efficient algorithms for sampling from DPPs and computing their properties. The naive sampling method -- calculating all subdeterminants and performing \(l2\) sampling -- takes exponential time. The first major leap in making DPP sampling feasible on today's computers was the "spectral method" [31, 25]. This algorithm performs an eigendecomposition of the kernel matrix before applying a projection-based iterative sampling approach. Thus, the first sample takes \(O(nd^{2})\) time, and subsequent samples take \(O(d^{3})\).
Monte Carlo methods have been proposed to approximate the DPP distribution [2, 35], though they are not exact, and are often still prohibitively slow with a runtime of \(O(n\text{poly}(d))\) per sample.
In a counter-intuitive result, [12] and [8] proposed methods that avoid performing the full DPP sampling procedure on large parts of the basis set. This approach resulted in a significant reduction in runtime, making DPPs more practical for mid-to-large-scale datasets. These techniques allow exact sampling of subsequent \(d\)-DPP samples in \(O(\text{poly}(d))\), independent of the size of the full basis set \(n\). Many of these algorithms are implemented in the open-source DPPy library[20], which we used in the experiments in this paper.
Recent work has shown that quantum computers are in principle able to sample from DPPs in even lower complexity in some cases. We describe this quantum algorithm in Sec. 2.5. This and several other algorithms which arise from the techniques introduced in [29] are a budding area of research in the quantum computing space, and will hopefully inspire more applications like the one we describe in this paper. For example, in [28], DPPs and deterministic DPPs were used to improve the methods for the imputation of clinical data.
### DPP-Random Forest Model
The Random Forest algorithm was introduced in 2001 by Leo Brieman [7] and has since become one of the most popular supervised machine learning algorithms. This model consists of an ensemble of decision trees, each trained on a subsample of rows and features from the dataset. Subsampling makes the model more robust to variance in the training data. In normal operation, this subsampling is done in a data-independent random manner.
In this paper, we propose an extension of the Random Forest, called the DPP-Random Forest (DPP-RF), which utilizes Determinantal Point Processes (DPPs) to subsample rows and features for individual decision trees. These data-dependent DPP samples better preserve the diversity of observations (customers, in our case) and features in comparison to uniform sampling.
#### 2.3.1 DPP sampling methodology
Sampling from a DPP on the entire dataset of 174,000 points is computationally infeasible -- something that future full-scale quantum computers can potentially change. To be able to test these techniques today, a novel sampling procedure had to be developed, which is both efficient and preserves many of the benefits of DPPs. The procedure can be summarized as follows (a code sketch is given after the list):
1. Divide the training set into smaller batches;
2. Sample \(S_{1}\sim d\text{-DPP}(X_{batch}X_{batch}^{T})\) data points from every batch;
3. Sample \(S_{2}\sim d\text{-DPP}(X_{S_{1}}^{T}X_{S_{1}})\) features;
4. Train a first group \(G_{1}\) of \(N_{1}\) decision trees on these small patches of data;
5. Aggregate the patches of data resulting from step 2 to create a larger dataset \(X_{agg}\);
6. Repeat for \(N_{2}\) times: sample \(S_{3}\sim d\text{-DPP}(X_{agg}^{T}X_{agg})\) features to create a long matrix;
7. Train a second group \(G_{2}\) of \(N_{2}\) decision trees on these new datasets;
8. Combine \(G_{1}\) and \(G_{2}\) by aggregating them to make predictions (similar to the classical Random Forest algorithm).
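To illustrate steps 1-4 (and the collection of the row patches reused in steps 5-7), here is a minimal sketch built on the open-source DPPy library [20] and scikit-learn. The function names, batch size, and sample sizes are illustrative choices rather than the production configuration, and the DPPy API shown here may differ slightly between versions.

```python
import numpy as np
from dppy.finite_dpps import FiniteDPP
from sklearn.tree import DecisionTreeClassifier

def kdpp_sample(L, k, rng):
    """Draw k indices from a k-DPP with likelihood kernel L."""
    dpp = FiniteDPP('likelihood', **{'L': L})
    dpp.sample_exact_k_dpp(size=k, random_state=rng)
    return np.array(dpp.list_of_samples[-1])

def dpp_rf_patches(X, y, batch_size=500, k_rows=20, k_cols=10, seed=0):
    """Steps 1-4: train one decision tree per DPP-sampled patch of rows and features."""
    rng = np.random.RandomState(seed)
    trees, row_pool = [], []
    for start in range(0, len(X), batch_size):
        Xb, yb = X[start:start + batch_size], y[start:start + batch_size]
        rows = kdpp_sample(Xb @ Xb.T, k_rows, rng)               # step 2: diverse rows
        cols = kdpp_sample(Xb[rows].T @ Xb[rows], k_cols, rng)   # step 3: diverse features
        tree = DecisionTreeClassifier(random_state=seed)
        tree.fit(Xb[rows][:, cols], yb[rows])                    # step 4
        trees.append((tree, cols))
        row_pool.extend(start + r for r in rows)                 # kept for steps 5-7
    return trees, np.array(row_pool)

# Toy usage on synthetic data (the real pipeline runs on the churn dataset)
rng = np.random.default_rng(1)
X_toy = rng.normal(size=(2000, 32))
y_toy = (X_toy[:, 0] + 0.5 * rng.normal(size=2000) > 0).astype(int)
trees, pool = dpp_rf_patches(X_toy, y_toy)
```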
### Results
We focused on three key performance indicators (KPIs): the precision-recall curve, the training time, and the bottom line. In addition, we benchmarked our proposed DPP-RF method by constructing models on public tabular classification tasks.
Figure 1: Steps 1 to 4 of the DPP-Random Forest algorithm.
Figure 2: Steps 5 to 7 of the DPP-Random Forest algorithm.
#### 2.4.1 Precision-recall
To evaluate the performance of our proposed method, we optimized hyperparameters and measured the precision for a low fixed recall (6% in this case). As seen in Figure 3, our method showed an improvement in precision from 71.6% for the benchmark model to 77.5% with the new model. Our method also provided similar improvements in precision for the relevant range of small recall.
#### 2.4.2 Training time
The DPP-RF model has a longer training time compared to the traditional random forest on a classical computer: it took 54 minutes to train the model with the best hyperparameters, compared to 311 seconds for the benchmark model. The models were trained on a computer with an Intel(r) Core(tm) i5-8350U CPU running at 1.70 GHz, 24 GB of RAM and Windows 10 version 21H2, compilation 19044.2604.
Indeed, the training time of a DPP-RF model depends heavily on the choice of values for its hyperparameters. Notably, the size of the batches from which we take DPP samples -- termed "batch size" -- can increase the runtime dramatically if set higher than 1000. In our hyperparameter search, we limited our search to hyperparameters which keep the runtime reasonable, although still relatively long.
The computational bottleneck in this algorithm is DPP sampling, which was done using the classic SVD-based approach [25]. We believe that improved classical sampling techniques [8] and future quantum techniques (Sec. 2.5) can reduce the runtime dramatically.
Within the bank, the churn model is retrained just once every few months, so the training time was not prohibitive. However, faster sampling algorithms still serve to increase the range of feasible hyperparameters (especially the batch size).
#### 2.4.3 Bottom line -- withdrawals captured
From a business perspective, the most direct indicator of the success of the model is the amount of assets under management (AUM) that can be salvaged via interventions. Thus, we evaluated the amount of money withdrawn every month by the 500 customers flagged by the model, i.e., the 500 customers with the highest predicted probability of churning in one of the following 3 months.
Figure 3: Precision-recall curve for the test set. Using DPP with the Random Forest algorithm shows an improvement of 5.9%
As seen in Figures 4, 5 and 6, our model showed substantial overall improvements. The true financial impact of these predictions depends on the success of the interventions as well as the bank's profit per dollar of AUM.
Figure 4: Benchmark (BM) vs DPP-RF solution: money withdrawn per month by the flagged 500 customers, comparing the benchmark model (blue line) to the DPP-RF one (orange line). On the y axis, we have monetary values (not shown). The green line represents the total amount of money withdrawn by all customers in each month. The purple line is the sum of the 500 largest withdrawals, which is the maximum value that the model could capture. The red line represents the withdrawals captured by randomly flagging 500 observations.
Figure 5: Classical benchmark vs DPP-RF solution - percentage of total withdrawals captured per month, that is, relative to the green line in Fig. 4. On average over the 11 test months, the BM model captures 61.42% of the total, whilst the DPP-RF model captures 62.77% — an improvement of 1.35%
#### 2.4.4 Summary of results
The proposed DPP Random Forest model provides significant improvements in precision and bottom line. The results are summarized in the following table.
#### 2.4.5 Further Benchmarks
We further benchmarked our model on various classification datasets from OpenML. All except one (madelon) of these datasets were used in [22] and preprocessed accordingly. They were chosen to be representative of a wide variety of classification tasks. Each dataset was split into train, validation, and test sets. For each model, 400 sets of hyperparameters were randomly chosen and evaluated on the validation set. Both models used the same hyperparameter space, except for the addition of the batch_size parameter for the DPP-RF. The hyperparameters which gave the best results on the validation set were evaluated on the test set, and the results are reported below. Models were evaluated with the ROC-AUC metric1.
Footnote 1: The area under the receiver operating characteristic curve (ROC-AUC) is a common metric for two-class classification tasks, and evaluates the ability of the model to produce a proper ranking of datapoints by likelihood of being class 1.
### Quantum algorithm for Determinantal Point Processes
#### 2.5.1 Quantum circuits
Classical DPP sampling algorithms have improved significantly since their inception, but it still remains infeasible to sample from large datasets like ours. Recent work by Kerenidis and Prakash [29] has shown
| Metric | Benchmark model | Proposed model |
| --- | --- | --- |
| Precision | 71.6% | 77.5% |
| % total withdrawals captured | 61.42% | 62.77% |
| % maximum possible withdrawals captured | 69.18% | 70.72% |
| Train time | 311 s | 54 min |

Table 1: Summarized comparison between models.
Figure 6: Classical benchmark vs DPP-RF solution - the percentage of maximum money possible to be captured (given n_flags = 500 customers flagged every month), that is, relative to the purple line in Fig. 4. On average over the 11 test months, the BM model captures 69.18% of the total, whilst the DPP-RF model captures 70.72% — an improvement of 1.54%
that a quantum computer can more natively perform DPP sampling, achieving a gate complexity of \(O(nd)\) and a circuit depth of \(O(d\log(n))\) for an orthogonal matrix of size \(n\times d\). The classical time complexity for sampling is \(O(d^{3})\)[25]. Note that when \(n\) is very large, then one can reduce the number of rows to \(O(d^{2})\) before performing the sampling [8].
For a thorough review of the quantum methods and circuits, we refer the reader to [29]. The circuit is described in brief below.
Given an orthogonal matrix \(X=(x^{1},x^{2},\ldots,x^{d})\in\mathbb{R}^{n\times d}\), the quantum DPP circuit applied on \(X\) performs the following operation:
\[\mathcal{D}(X)|0^{n}\rangle=\sum_{\begin{subarray}{c}|S|=d\\ S\in\{0,1\}^{n}\end{subarray}}\det(X_{S})|1_{S}\rangle\, \tag{2}\]
where \(X_{S}\) is the \(\mathbb{R}^{d\times d}\) submatrix obtained after sampling the rows of \(X\) indexed by \(S\); \(1_{S}\) is the characteristic vector of \(S\) (with 1's in the positions indexed by the elements of S) and \(\mathcal{D}(X)\) represents the quantum \(d\)-DPP circuit, as detailed below.
Thus, the probability of sampling \(S\), i.e., of measuring \(|1_{S}\rangle\), is: \(Pr(S)=\det(X_{S})^{2}=\det(L_{S,S})\), where \(L=XX^{T}\). This draws the link between the quantum determinantal sampling circuit and the classical \(d\)-DPP model as seen in eq. 1.
To construct the quantum \(d\)-DPP circuit, we need to first introduce a circuit known as a _Clifford loader_, which performs the following operation:
\[\mathcal{C}(x)=\sum_{i=1}^{n}x_{i}Z^{i-1}XI^{n-i},\quad\text{for}\quad x\in \mathbb{R}^{n}. \tag{3}\]
The Clifford loader was shown to have a log-depth circuit in [29], and is shown for \(n=8\) in Fig. 7, in which the gates represented by vertical lines are RBS gates: parameterized, hamming weight preserving two-qubit gates.
\begin{table}
\begin{tabular}{|c|c|c|} \hline Dataset & Random Forest & DPP Random Forest \\ \hline \begin{tabular}{c} madelon \\ credit-default \\ house-pricing \\ \end{tabular} & \begin{tabular}{c} 0.916 \\ 0.856 \\ **0.948** \\ **0.866** \\ **0.704** \\ **0.710** \\ **0.881** \\ **0.901** \\ **0.962** \\ \end{tabular} &
\begin{tabular}{c} **0.941** \\ 0.856 \\ **0.939** \\ **0.870** \\ **0.710** \\ **0.881** \\ **0.906** \\ **0.963** \\ \end{tabular} \\ \hline \end{tabular}
\end{table}
Table 2: Comparison of DPP Random Forest and Random Forest models for different datasets. The results are reported via the ROC-AUC metric.
Figure 8: DPP circuit as a series of Clifford Loaders.
The full quantum \(d\)-DPP circuit is a series of \(d\) Clifford loaders, one for each orthogonal column of \(X\):
\[\mathcal{D}(X)=\mathcal{C}(x^{1})\mathcal{C}(x^{2})\ldots\mathcal{C}(x^{d}). \tag{4}\]
An example of a \(d\)-DPP circuit as a series of Clifford loaders for \(n=4\) is shown in Fig. 8.
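The correspondence in Eq. (2) can be checked numerically for small \(n\) by building each Clifford loader of Eq. (3) as a sum of Pauli strings and comparing the output amplitudes with the subdeterminants. The NumPy sketch below (illustrative only; a big-endian qubit ordering is assumed) does this for \(n=4\), \(d=2\).

```python
import itertools
import numpy as np

I2 = np.eye(2)
PX = np.array([[0., 1.], [1., 0.]])   # Pauli X
PZ = np.array([[1., 0.], [0., -1.]])  # Pauli Z

def kron_all(mats):
    out = np.array([[1.]])
    for m in mats:
        out = np.kron(out, m)
    return out

def clifford_loader(x):
    """C(x) = sum_i x_i Z^{i-1} X I^{n-i} (Eq. 3), as a dense 2^n x 2^n matrix."""
    n = len(x)
    C = np.zeros((2 ** n, 2 ** n))
    for i in range(n):
        C += x[i] * kron_all([PZ] * i + [PX] + [I2] * (n - i - 1))
    return C

rng = np.random.default_rng(7)
n, d = 4, 2
Q, _ = np.linalg.qr(rng.normal(size=(n, d)))   # orthonormal columns x^1, ..., x^d

# D(X)|0^n> = C(x^1) C(x^2) ... C(x^d) |0^n>, rightmost loader applied first
state = np.zeros(2 ** n)
state[0] = 1.0
for j in reversed(range(d)):
    state = clifford_loader(Q[:, j]) @ state

# The probability of measuring |1_S> equals det(X_S)^2, as in Eq. (2)
for S in itertools.combinations(range(n), d):
    bits = ''.join('1' if q in S else '0' for q in range(n))   # qubit 0 is leftmost
    print(S, round(state[int(bits, 2)] ** 2, 6),
          round(np.linalg.det(Q[list(S), :]) ** 2, 6))
```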
#### 2.5.2 Hardware experiment results
As a hardware experiment, we aimed to implement a simplified version of our algorithm on a quantum processor. We chose to use the "ibmq_guadalupe" 16-qubit chip, which is only capable of running small quantum DPP circuits for matrices of certain dimensions, such as \((4,2),(5,2),(5,3),(6,2),(8,2)\). As a result, we had to reduce the size of our problem.
To accomplish this, we defined reduced train/test sets: a train set of \(\sim\)1000 points from 03/2019 and a test set of \(\sim\)10000 points from 04/2019. The quantum hardware-ready simplified algorithm is outlined in Figure 9. It includes the following steps:
1. Applying PCA to reduce the number of columns from 153 to \(d=2,3\);
2. Dividing the dataset into batches of \(n=4,5,6,8\) points;
3. Sampling \(S\sim d\text{-DPP}(X_{batch}X_{batch}^{T})\) rows from each batch, resulting in small \(d\times d\) patches of data;
4. Aggregating these patches to form a larger dataset, then training one decision tree on this dataset.
We repeated this process for a number of trees and estimated the F1\({}^{2}\) score for every tree. We then compared the results for different sampling methods: uniform sampling, quantum DPP sampling using a simulator, and quantum DPP sampling using a quantum processor.
The IBM quantum computer only allows using RBS gates on adjacent qubits, so we cannot use the circuit described in section 2.5.1. Instead, we use two different Clifford loader architectures which only use adjacent-qubit connectivity, visualized in Fig. 10.
Figure 9: Quantum hardware-ready procedure for DPP sampling.
The diagonal Clifford loader is explained in [29], and the semi-diagonal loader is a modification that halves the circuit depth. As an error mitigation measure, we disregarded all results which did not have the expected hamming weight (\(d\)). The results are shown in the violin plots in Figures 12 and 13.
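This hamming-weight post-selection takes only a few lines of Python on the raw measurement counts; the sketch below is illustrative and assumes the usual bitstring-to-shot-count dictionary returned by typical quantum SDKs.

```python
def postselect_hamming_weight(counts, d):
    """Keep only bitstrings of Hamming weight d and renormalize the counts.

    counts: dict mapping measured bitstrings (e.g. '0110') to shot counts.
    Returns a dict mapping surviving bitstrings to estimated probabilities.
    """
    kept = {b: c for b, c in counts.items() if b.count('1') == d}
    total = sum(kept.values())
    return {b: c / total for b, c in kept.items()} if total else {}

# Example: shots from a 4-qubit 2-DPP circuit; weight-1 and weight-3 strings are noise
raw = {'0110': 430, '1010': 280, '0011': 210, '0100': 50, '1110': 30}
print(postselect_hamming_weight(raw, d=2))
```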
The results indicate that for small matrix dimensions -- up to (6,2) -- the IBM quantum processor gives results similar to the ones achieved with the simulator. However, as the dimensions grow bigger, the samples from the quantum DPP circuits lead to worse classifier performance. This highlights the limitations of the available quantum hardware, which is prone to errors.
## 3 Quantum Neural Networks for credit risk assessment
### Use case
In the second study, we focus on the problem of credit-default prediction associated with credit applications from Small and Medium Enterprises (SMEs). The Credit operation is one of the largest and most important operations in a retail banking institution. Throughout the credit journey (life-cycle) of a customer within the bank, several different models are used at different points of the journey and for different purposes, such as the determination of interest rates, offering of different products, etc.
The credit granting model is a particularly important one since it determines whether or not a credit relationship will be established. It is also particularly challenging in the case of SMEs, where the relationship with the bank often starts only when the SME submits an application for credit, so very little data is available. Given these challenges, we propose the use of quantum techniques aiming at improving the predictive performance of the credit granting model.
The credit granting decision may be seen as a binary classification problem, in which the objective is to predict if the SME will default on credit. More specifically, we are interested in calculating the so-called probability of default (PD), which is given by \(P(\hat{y}=1|\mathbf{x})\). The PD information is used internally for other pipelines primarily concerned with the determination of credit ratings for the SMEs (though in this study, we focus solely on the PD model), so the PD distribution is the main output of interest from the model. For this reason, we do not threshold the probability outputs from the model -- thus, we use threshold-independent classification metrics to evaluate its predictive performance. Namely, the main Key Performance Indicator (KPI) that we use is the Gini score, constructed from the Area Under the Curve (AUC) of the ROC (Receiver Operating Characteristic) curve as \(\text{Gini}=2\times\text{AUC}-1\). The Gini score is easily interpreted by the business team and allows for a holistic estimation of the model's impact.
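For reference, the Gini score can be computed directly from the predicted default probabilities with scikit-learn; the arrays below are placeholder values for illustration only.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

y_true = np.array([0, 0, 1, 0, 1, 1, 0, 1])                   # observed default flags
pd_hat = np.array([0.1, 0.3, 0.7, 0.2, 0.6, 0.9, 0.4, 0.5])   # predicted P(y=1|x)

auc = roc_auc_score(y_true, pd_hat)
gini = 2 * auc - 1
print(f"AUC = {auc:.3f}, Gini = {gini:.3f}")
```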
In this study, we chose to focus on the development of an "internal model" of credit default, which only uses features collected by Itaú, without considering any external information (from credit bureaus, for instance). The dataset used consists of \(\approx 141,500\) observations, each one represented by 32 features: 31 numerical and 1 categorical. Each observation represents a given SME customer in a specified reference month, whose observed target indicates its default behaviour and whose features consist of internal information about the company.
Figure 13: Decision trees performance using Quantum DPP sampling with semi-diagonal Clifford loaders.
The data was anonymized, standardized, and split into training and test sets based on the time period: the training set consists of \(\approx 74,700\) observations covering \(12\) months of data, while the test set contains \(\approx 66,800\) observations covering the subsequent \(8\) months.
The dataset also contains a large number of missing values, which motivated the experimentation of different imputation techniques (such as simple imputer, iterative imputer, MICE [45]). The best results were achieved using the iterative imputer, so this was the pre-processing employed in all the results which will be presented.
### Quantum Neural Networks with Orthogonal and Compound Layers
In recent years, variational/parameterized quantum circuits [5] have become very prominent as NISQ-friendly QML techniques. When applied to classification problems, they are commonly known as Variational Quantum Classifiers (VQC) [23]. The quantum circuits associated with VQCs may be schematically thought of as composed of three layers: the _feature map_\(\mathcal{U}_{\Phi(\vec{x})}\), which encodes classical data \(\vec{x}\) into quantum states; the _variational layer_\(W(\boldsymbol{\theta})\), which is the part of the circuit parameterized by a set of parameters \(\boldsymbol{\theta}\) which are learned in the training process; and finally, the _measurement layer_, which measures the quantum registers and produces classical information used in training and inference.
The feature map and variational layers can take different forms, called _ansatze_, consisting of many possible different quantum gates in different configurations. Such immense freedom raises an important question: how should one choose an architecture for a given problem, and can it be expected to yield a quantum advantage? This question is of major practical importance, and although benchmark results have been shown for very particular datasets [23, 37], there is little consensus on which ansatze are good choices for machine learning.
In our work, we use quantum neural networks with orthogonal and compound layers. Although these neural networks roughly match the general VQC construction, they produce well-defined linear algebraic operations, which not only makes them much more interpretable but gives us the ability to analyze their complexity and scalability. Because we understand the actions of these layers precisely, we are able to identify instances for which we can design efficient classical simulators, allowing us to classically train and test the models on real-scale datasets.
A standard feed-forward neural network layer modifies an input vector by first multiplying it by a weight matrix and then applying a non-linearity to the result. Feed-forward neural networks usually use many such layers and learn to predict a target variable by optimizing the weight matrices to minimize a loss function. Enforcing the orthogonality of these weight matrices, as proposed in [26], brings theoretical and practical benefits: it reduces the redundancy in the trained weights and can avoid the age-old problem of vanishing gradients. However, the overhead of typical projection-based methods to enforce orthogonality prevents mainstream adoption.
In [33], an improved method of constructing orthogonal neural networks using quantum ideas was developed. We shall describe it below in brief.
#### 3.2.1 Data Loaders
In order to perform a machine learning task with a quantum computer, we need to first load classical data into the quantum circuit.
Unary data loading circuits
The first way we will load classical data is an example of _amplitude encoding_, which means that we load the (normalised) vector elements as the amplitudes of a quantum state. In [27], three different circuits to load a vector \(\boldsymbol{x}\in\mathbb{R}^{d}\) using \(d-1\) gates are proposed. The circuits range in depth from \(O(log(d))\) to \(O(d)\), with varying qubit connectivity (see Fig.14). They use the _unary_ amplitude encoding, where a vector \(\boldsymbol{x}=(x_{1},\cdots,x_{d})\) is loaded in the quantum state \(|\boldsymbol{x}\rangle=\frac{1}{\|x\|}\sum_{i=1}^{d}x_{i}\,|e_{i}\rangle\), where \(|e_{i}\rangle\) is the quantum state with all qubits in \(|0\rangle\) except the \(i^{th}\) qubit in state \(|1\rangle\) (e.g., \(|e_{3}\rangle=|00100000\rangle\)). The circuit uses \(RBS\) gates: a parameterized two-qubit hamming weight-preserving gate implementing the unitary given by Eq.5:
\[RBS(\theta)=\left(\begin{array}{cccc}1&0&0&0\\ 0&\cos\theta&\sin\theta&0\\ 0&-\sin\theta&\cos\theta&0\\ 0&0&0&1\end{array}\right)\;. \tag{5}\]
The parameters \(\theta_{i}:i\in\{1,...,d-1\}\) of the \(d-1\) RBS gates are classically pre-computed to ensure they encode the correct vector \(\left|\mathbf{x}\right\rangle\).
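As an illustration of this classical pre-processing, the sketch below computes the angles of a diagonal loader under the amplitude recursion \(x_{1}=\cos\theta_{1}\), \(x_{i}=\cos\theta_{i}\prod_{j<i}\sin\theta_{j}\), \(x_{d}=\prod_{j}\sin\theta_{j}\). This is one common convention and assumes a normalized vector whose last entry is non-negative; signs and gate ordering differ for the parallel and semi-diagonal loaders.

```python
import numpy as np

def diagonal_loader_angles(x):
    """Angles theta_1..theta_{d-1} of a diagonal unary loader for a unit vector x.
    Assumes ||x|| = 1 and x[-1] >= 0 (the last amplitude of this construction
    is a product of sines and hence non-negative)."""
    d = len(x)
    thetas = np.zeros(d - 1)
    r = 1.0                                   # remaining norm, prod_{j<i} sin(theta_j)
    for i in range(d - 1):
        thetas[i] = np.arccos(np.clip(x[i] / r, -1.0, 1.0)) if r > 1e-12 else 0.0
        r = r * np.sin(thetas[i])
    return thetas

def amplitudes_from_angles(thetas):
    """Reconstruct the unary amplitudes produced by the diagonal loader."""
    d = len(thetas) + 1
    amps, r = np.zeros(d), 1.0
    for i in range(d - 1):
        amps[i] = r * np.cos(thetas[i])
        r = r * np.sin(thetas[i])
    amps[-1] = r
    return amps

x = np.array([0.3, -0.5, 0.4, 0.1, 0.6])
x = x / np.linalg.norm(x)
thetas = diagonal_loader_angles(x)
print(np.allclose(amplitudes_from_angles(thetas), x))   # True
```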
#### \(Ry\)-loading circuits
We will also use data loading procedures beyond the unary basis. In particular, for a normalised input vector \(\mathbf{x}\in\mathbb{R}^{d}\), we use \(d\) qubits, where on each of the qubits, we apply an \(RY(\theta)\) rotation gate where the angle parameter on the \(i^{th}\) qubit is \(\theta_{i}=2\pi x_{i}\), according to Eq. 6. Multiplication with \(2\pi\) allows us to cover the entire range of the \(\sin\) and \(\cos\) functions. This technique loads the data in the entire \(2^{d}\)-dimensional Hilbert space encompassing all the hamming weights from \(0\) to \(d\). This loading technique has constant depth independent of \(d\), and we refer to it as the _RY loading_, whose circuit for \(d=8\) is illustrated in Fig. 15.
\[RY(\theta)\left|0\right\rangle=\cos\frac{\theta}{2}\left|0\right\rangle+\sin \frac{\theta}{2}\left|1\right\rangle \tag{6}\]
#### \(H\)-loading circuits
Lastly, we define a different technique for loading the data in the entire \(2^{d}\)-dimensional Hilbert space, which loads the vector in the unary basis and then applies a Hadamard gate on each qubit. This operation applies a Fourier transform on \(\mathbb{Z}_{2}\) and gives us a state encompassing all the hamming weights from \(0\) to \(d\) at no additional cost to the circuit depth. We call this the _H-loading_, whose circuit for \(d=8\) is illustrated in Fig. 16.
\[H\left|0\right\rangle=\frac{\left|0\right\rangle+\left|1\right\rangle}{\sqrt{2 }}\hskip 28.452756ptH\left|1\right\rangle=\frac{\left|0\right\rangle-\left|1 \right\rangle}{\sqrt{2}} \tag{7}\]
#### 3.2.2 Quantum Orthogonal and Compound Layers
Quantum orthogonal layers consist of a unary data loader plus a parametrised quantum circuit made of \(RBS\) gates, while quantum compound layers consist of a general data loader plus a parametrised quantum circuit made of \(RBS\) gates.
Figure 14: Three possible unary data loaders for \(d\)-dimensional vectors (\(d=8\)). From left to right: the parallel, diagonal, and semi-diagonal circuits have respectively a circuit depth of \(log(d)\), \(d\), and \(d/2\). The X gate represents the Pauli X gate, and the vertical lines represent \(RBS\) gates with tunable parameters.
\(RBS\) gates and circuits preserve the hamming weight of the input state: if we use a unary data loader, then the output of the layer will be another vector in unary amplitude encoding. Similarly, if the loaded quantum state is a superposition of only basis states of hamming weight \(k\), so is the output state. More generally, we can think of such hamming-weight preserving circuits with \(n\) qubits as block-diagonal unitaries that act separately on \(n+1\) subspaces, where the \(k^{th}\) subspace is defined by all computational basis states with hamming weight equal to \(k\). The dimension of these subspaces is equal to \(\binom{n}{k}\). The first block of this unitary is an \(n\times n\) orthogonal matrix, such that when a vector is loaded in the unary basis, this circuit simply performs orthogonal matrix multiplication. In general, the \(k\)-th block of this unitary applies a _compound matrix of order \(k\)_ of the \(n\times n\) unary matrix. The dimension of this \(k\)-th order compound matrix is \(\binom{n}{k}\times\binom{n}{k}\). We refer to the layers that use bases beyond the unary as _compound layers_.
There exist many possibilities for building a parametrised quantum circuit made of \(RBS\) gates which can be used in a quantum orthogonal or compound layer, each with different properties.
The **Pyramid circuit** (Fig. 19), proposed in [33], is a parameterized quantum circuit composed of exactly \(n(n-1)/2\) \(RBS\) gates. This circuit requires only adjacent qubit connectivity, which makes it suitable for most superconducting qubit hardware. In addition, when restricted to the unary basis, the pyramid circuit
expresses exactly the Special Orthogonal Group, i.e. orthogonal matrices with the determinant equal to \(+1\). To allow this circuit to express the entire orthogonal group, we can add a final \(Z\) gate on the last qubit. This allows us to express orthogonal matrices with a \(-1\) determinant as well. The pyramid circuit is, therefore, very general and covers all the possible orthogonal matrices of size \(n\times n\).
The **X circuit** (Fig. 20), introduced in [9], uses just \(O(n)\) gates and has nearest-neighbor connectivity. Due to reduced depth and gate complexity, it accumulates less hardware noise.
The **Butterfly circuit** (Fig. 18) is inspired by the classical fast Fourier transform algorithm, and uses \(O(n\log(n))\) gates. It was also introduced in [9], and despite having reduced expressivity compared to the Pyramid circuit, it often performs just as well.
In [33], a method is proposed to train orthogonal layers for the unary basis by computing the gradient of each parameter \(\theta_{i}\) using backpropagation. This backpropagation method for the pyramid circuit (which is the same for any circuit with \(RBS\) gates) takes time \(O(n^{2})\), corresponding to the number of gates, and provides a polynomial improvement in runtime compared to the previously known orthogonal neural network training algorithms which relied on an \(O(n^{3})\) SVD operation [26]. Since the runtime corresponds to the number of gates, it is lower for the butterfly and \(X\) circuits. See Table 3 for full details on the comparison between the three types of circuits. For the compound layers, we need to consider the entire \(2^{n}\times 2^{n}\) space and thus train an exponential size weight matrix, which takes exponential time on a classical computer. In principle, a compound layer can also be trained using the parameter shift rule for quantum circuits, which can be more efficient since the number of parameters is polynomial in the input size, though noise in current quantum hardware makes this impractical for the time being.
#### 3.2.3 Expectation-per-subspace Compound Layer
We describe here a compound layer that we call the Expectation-per-subspace compound layer. This layer involves loading the input vector using a non-unary basis which could be done either via the \(RY\)-loading or the \(H\)-loading circuit as previously defined. Then, we apply a parameterized quantum circuit with RBS gates, e.g. a pyramid circuit, which performs the compound matrix operation on all the fixed hamming weight subspaces. More precisely, we can think of the operation as performing the matrix-vector multiplication of an \(\binom{n}{k}\times\binom{n}{k}\) matrix with an \(\binom{n}{k}\)-dimensional vector for each hamming weight \(k\) from \(0\) to \(n\). Note that for \(0\) and \(n\), the dimension is \(1\) and hence the unitary acts as identity.
If we look at the output quantum state, it defines a distribution over a domain of size \(2^{n}\). Given the exponential size of the distribution, it is not advisable to try and train the entire distribution, since that would take exponential time. However, one can still try to use a loss function that contains some information about the distribution. For example, one can use the expectation of the distribution, which is what normally happens in variational quantum algorithms where one approximates this expectation by using a number of measurement outcomes. Given the fact that our unitary is block-diagonal, one can try to define a more complex loss function that contains more information about the distribution. In particular, one can split the domain of the distribution into \(n+1\) subdomains, one for each subspace, and then train on all these expectations.
This is what we do in the Expectation-per-subspace compound layer, where for each \(k\) from \(0\) to \(n\), we take the outputs corresponding to the hamming weight \(k\) strings and sort them. Now, for each \(k\), we assign values which are equally spaced between two bounds \(a\) and \(b\) (which are \(0\) and \(10\), in our models) to the \(\binom{n}{k}\) strings. We normalise the outputs using the \(L1\)-norm to correspond to a probability distribution over
\begin{table}
\begin{tabular}{|c|c|c|c|} Circuit & Hardware Connectivity & Depth & \# Gates \\ \hline Pyramid & Nearest-Neighbor & \(2n-3\) & \(\frac{n(n-1)}{2}\) \\ X & Nearest-Neighbor & \(n-1\) & \(2n-3\) \\ Butterfly & All-to-all & \(\log(n)\) & \(\frac{n}{2}\log(n)\) \\ \end{tabular}
\end{table}
Table 3: Comparison of different parameterized quantum circuits for orthogonal and compound layers with \(n\) qubits.
the \(\binom{n}{k}\) values between \(a\) and \(b\), and then we calculate the expectation value for that hamming weight. This gives us a set of \(n+1\) values corresponding to each hamming weight. Since for hamming weight \(0\) and \(n\) the dimension of the subspace is \(1\) (the all zero and all one strings), we combine them and calculate the expectation for these two together and make the layer have \(n\) outputs. The entire operation is illustrated in Fig. 22.
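The post-processing just described can be spelled out classically as follows. This is an illustrative sketch (the ordering of the \(n\) outputs and the tie-breaking in the sort are our own choices), with the bounds \(a=0\) and \(b=10\) used in our models:

```python
import numpy as np

def expectation_per_subspace(probs, n, a=0.0, b=10.0):
    """Collapse a 2**n-outcome distribution into n values, one per
    Hamming-weight subspace, with weights 0 and n merged into one output.

    For each weight group: take the probabilities of its strings, sort them,
    assign equally spaced values in [a, b], L1-normalise the probabilities,
    and return the expectation of the assigned values.
    """
    weights = np.array([bin(i).count("1") for i in range(2 ** n)])
    groups = [[0, n]] + [[k] for k in range(1, n)]   # merged group listed first
    out = []
    for ks in groups:
        p = np.sort(np.concatenate([probs[weights == k] for k in ks]))
        vals = np.linspace(a, b, len(p))
        out.append(float(vals @ (p / p.sum())))
    return np.array(out)   # n outputs

probs = np.random.dirichlet(np.ones(2 ** 4))   # a random 4-qubit distribution
print(expectation_per_subspace(probs, 4))      # 4 values, each in [0, 10]
```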
#### 3.2.4 An Orthogonal Neural Network Architecture for Credit Risk Assessment
We first deploy a variant of Residual Neural Networks using orthogonal layers (which we call _OrthoResNN_). We use a three-layered network which takes a \(32\)-dimensional vector as input and outputs a \(2\)-dimensional vector denoting the probability of predicting \(0\) and \(1\) for the data point. Our first layer is a standard feed-forward layer of size \(32\times 8\) followed by a \(\tanh\) activation. Then, we apply an \(8\times 8\) orthogonal layer (semi-diagonal loader and \(X\) circuit). Note that measurements provide the sampled distribution, which corresponds to the square of the probability amplitudes. Since this squaring is a type of non-linearity, we do not apply any additional non-linearity to it but add the bias directly. In the spirit of residual neural networks (ResNN), we make a skip connection by adding the input of the orthogonal layer to the output. Finally, we apply another standard feed-forward layer of size \(8\times 2\) followed by a softmax layer to make it predict the probabilities. This architecture is illustrated in Fig. 23. Moreover, we also perform our simulations for the model without skip connections (which we call _OrthoFNN_). Such a model is a basic feed-forward neural network (FNN) of size \([32,8,8,2]\) where the \(8\times 8\) layer is an orthogonal layer. The architecture is exactly the same as Fig. 23 without the skip connections.
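The forward pass can be sketched as a classical simulation as follows (ours, with random placeholder weights rather than trained parameters; the \(8\times 8\) orthogonal layer is represented directly by an orthogonal matrix, whereas on hardware it is realised by the semi-diagonal loader and \(X\) circuit, and the measured distribution is computed exactly instead of being sampled):

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(8, 32))                      # 32 -> 8 dense layer
W_orth, _ = np.linalg.qr(rng.normal(size=(8, 8)))  # stand-in 8x8 orthogonal matrix
b = rng.normal(size=8)                             # bias added after the "measurement"
W2 = rng.normal(size=(2, 8))                       # 8 -> 2 dense layer

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def ortho_resnn_forward(x):
    h = np.tanh(W1 @ x)                    # feed-forward layer + tanh
    h_unit = h / np.linalg.norm(h)         # the unary loader requires a unit vector
    probs = (W_orth @ h_unit) ** 2         # squared amplitudes = sampled distribution
    z = probs + b                          # squaring already acts as the non-linearity
    z = z + h                              # ResNN-style skip connection
    return softmax(W2 @ z)                 # two class probabilities

x = rng.normal(size=32)
print(ortho_resnn_forward(x))              # e.g. [p0, p1] with p0 + p1 = 1
```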
Figure 22: Expectation-per-subspace Compound Layer. In the final step, we combine the \(\binom{n}{0}\) and the \(\binom{n}{n}\) subspaces and calculate their overall expectation.
#### 3.2.5 A Compound Neural Network Architecture for Credit Risk Assessment
We also define a quantum neural network architecture that contains a quantum compound layer, using a non-unary basis for loading the data and exploring the entire \(2^{n}\)-dimensional Hilbert space. Our first layer is a standard feed-forward layer of size \(32\times 8\) followed by a \(\tanh\) activation. After this, we apply an \(8\times 8\) Expectation-per-subspace compound layer using a non-unary data loading technique. We tried both the \(H\)-loading and \(RY\)-loading techniques. After adding the biases, we apply a \(\tanh\) activation. Then, we apply another standard feed-forward layer of size \(8\times 2\) to get the two outputs. We use softmax to convert the outputs to probabilities. Again, in the experiments, we try the same model architectures with and without the ResNN-like skip connection procedure, which we respectively call _ExpResNN_ and _ExpFNN_. Fig. 24 illustrates the ExpResNN architecture for the \(H\)-loading. The ExpFNN architecture is the same as Fig. 24 without the skip connection. The same models can use the \(RY\)-loading instead of the \(H\)-loading.
Figure 23: Architecture of the OrthoResNN model.
### Results
#### 3.3.1 Classical Results
The neural network experiments were performed using the dataset described in Sec. 3.1, with the iterative imputation method to fill in the missing data. The training of the networks was performed using the JAX package by Google. We train our models for 500 epochs. The learning rate reduces by half after every 100 epochs and is initially set to 0.1. We performed simulation experiments for two types of model architectures: Feed-forward Neural Networks (FNN) and Residual Neural Networks (ResNN). Each of them has three types of layers: the standard fully-connected layer, the orthogonal layer and the expectation-per-subspace compound layer.
In our experimental setup, the fully connected layer (FNN) produced a test Gini score of 53.7%, which we consider the classical benchmark. We performed the same experiment with an orthogonal layer using the semi-diagonal loader and the X circuit (OrthoFNN), which achieved a Gini score of 53.7% as well. Finally, we tried the expectation-per-subspace compound layer with the Hadamard loader and X circuit (ExpFNN), which produced a Gini score of 53.7%. While the performance of the OrthoFNN and ExpFNN remained nearly the same as the FNN layer, these new layers learn the angles of \(2n\) RBS gates instead of \(n^{2}\) elements of the weight matrix, dramatically reducing the number of parameters needed. The results are shown in Table 4.
\begin{table}
\begin{tabular}{c|c} Model & Test Gini \\ \hline FNN & 53.7\% \\ \hline OrthoFNN & 53.7\% \\ ExpFNN & 53.7\% \\ \end{tabular}
\end{table}
Table 4: Comparison between different Feed-Forward Neural Networks.
Figure 24: Architecture of the ExpResNN model using the \(H\)-loading.
We then performed our classical experiments using the more advanced residual neural networks. The model with a standard fully-connected layer (ResNN) gave us a test Gini score of 53.7%, which we took as our benchmark. We then trained the orthogonal residual neural network model (OrthoResNN), which gave us a Gini score of 54.1%. Finally, we tried the residual neural network with the expectation-per-subspace compound layer (ExpResNN) with \(H\)-loading, which gave us a Gini score of 53.9%. Both quantum neural networks proposed in this paper outperformed the standard fully-connected layer and used significantly fewer parameters. The results are shown in Table 5.
#### 3.3.2 Quantum Hardware Results
On a classical computer, inference with an orthogonal layer takes time \(O(n^{2})\), while for a general compound layer this time is exponential in \(n\). On a quantum computer, inference with an orthogonal or compound layer uses a quantum circuit of depth \(O(n)\) (Pyramid or X) or \(O(\log(n))\) (Butterfly), with \(O(n^{2})\) gates. Therefore, one may find a further advantage if the inference is performed on a quantum computer. This motivated us to test the inference step on currently available quantum hardware, which was done for the classically trained OrthoResNN and ExpFNN models.
The data loader and orthogonal/compound layer circuits employed in our model architectures are NISQ-friendly and particularly suitable for superconducting qubits, with low depth and nearest-neighbours qubit connectivity. Thus, we chose to use IBM's 27-qubit machine ibm_hanoi (see Fig. 25).
To perform inference on _ibm_hanoi_, we used the semi-diagonal data loader and X circuit to implement the OrthoResNN model; and the Hadamard loader and X circuit for the ExpFNN model -- the same architectures described in Sec. 3.3.1. Both neural networks were trained classically, and the trained parameters were used to construct our quantum circuits for inference.
Given the large size of the test dataset (\(66,750\) data points), we decided to perform inference using the trained models on a small test subsample of 300 test points, corresponding to the maximum number of circuits we could send in one job to the IBM machine. After testing different subsamples with the
\begin{table}
\begin{tabular}{c|c} Model & Test Gini \\ \hline ResNN & 53.7\% \\ \hline OrthoResNN & 54.1\% \\ ExpResNN & 53.9\% \\ \end{tabular}
\end{table}
Table 5: Comparison between different Residual Neural Networks.
Figure 25: Topology graph of the 27-qubit _ibm_hanoi_ machine used to perform our hardware experiments. The colours in the qubits indicate readout assignment error; and in the connections the CNOT error — dark blue is low, purple is high.
OrthoResNN model3, we selected one for which we achieved a subsample test Gini score of 54.19% using a noiseless simulator (blue ROC curve in Figure 26). The same was done for the ExpFNN experiment, yielding a subsample test Gini of 53.90% with the noiseless simulator. These values were taken as the best possible Gini scores if the inference was performed on noiseless quantum hardware, which could then be compared with the values actually achieved with ibm_hanoi.
Footnote 3: We made sure to pick a representative subsample of observations, for which we would have neither a heavily under nor overestimated Gini score compared to the one for the full test set. By selecting 50 different subsamples of randomly chosen 300 observations and performing the classical inference, we found Gini scores between 45.9% and 64.3%, with an average of 54.44%. This further supports the fact that the subsample that we chose correctly yields the (approximate) expected value for this Gini score distribution, which then yields an unbiased subsample Gini score. The same procedure was performed for the ExpFNN experiment.
The circuits were then run on the quantum processor. Due to its limited Hilbert space of size \(n\), the OrthoResNN has a natural error-rejection procedure: any measurements outside of the unary basis can be disregarded as errors. As a result, the inference yielded a Gini score of 50.19%, as shown in the orange ROC curve in Fig. 26. The achieved Gini was not too far from the noise-free simulation result (54.19%), but there was clearly room for improvement in order to close the 4 pp difference. We also attempted inference with the more complex ExpFNN, which yielded a Gini score of 40.20%, much farther from the noiseless simulation Gini of 53.90%. Since the ExpFNN uses the entire \(2^{n}\)-dimensional Hilbert space, it is more prone to errors due to noise, as the error-rejection procedure used for the OrthoResNN cannot be employed.
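For reference, the error-rejection step mentioned above amounts to postselecting the measurement counts on unary bitstrings. A small illustrative helper (the counts format, bitstring keys mapping to integer counts, is an assumption about the SDK output):

```python
def postselect_unary(counts):
    """Keep only Hamming-weight-1 outcomes and renormalise.

    Any outcome outside the unary basis is treated as a hardware error
    and discarded.
    """
    kept = {s: c for s, c in counts.items() if s.count("1") == 1}
    total = sum(kept.values())
    return {s: c / total for s, c in kept.items()} if total else {}

counts = {"10000000": 480, "01000000": 350, "00000011": 70, "00000000": 100}
print(postselect_unary(counts))  # the last two outcomes are rejected
```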
#### Improving the Hardware Results with Error Mitigation Techniques
Error mitigation and error suppression techniques undoubtedly play a very important role in NISQ-era quantum computing. While these techniques alone may not be sufficient to fully overcome the imperfections of current quantum systems, they can push the practical limits of what can be achieved. As a next step, for the OrthoResNN model, we experimented with various error mitigation and suppression approaches, going beyond the simple hamming-weight postselection procedure, in an attempt to close the gap of 4 pp between the Gini score from the noiseless simulation and the one from hardware execution.
The first approach that we tried was a correlated readout mitigator. This is a purely classical post-processing technique which demands the construction of a calibration circuit for each of the possible \(2^{N}\) states of the full \(N\)-qubit Hilbert space. The calibration circuits' execution (simulated using ibm_hanoi's backend information, in our case) yields a \(2^{N}\times 2^{N}\) assignment matrix, which is used to understand how errors might occur during readout. One can see that this method rapidly becomes intractable as the number of qubits \(N\) increases. In our case, for \(N=8\), the Gini score improved to 50.24%, a small improvement of only 0.05 pp.
Thus, in order to investigate the effect of more robust error suppression and mitigation techniques in our results, we moved on to a new round of hardware experiments, performing the inference by executing the exact same OrthoResNN circuits via the Qiskit Runtime [10] service using the Sampler primitive, which allows one to use circuit optimization as well as error suppression and mitigation techniques, as detailed below.
Firstly, we used circuit optimization at the point of circuit transpilation and compilation by setting the optimization_level parameter to the highest possible value, 3. This performs the following circuit optimization routines: layout selection and routing (VF2 layout pass and SABRE layout search heuristics [36]); 1-qubit gate optimization (chains of single-qubit u1, u2, u3 gates are combined into a single gate); commutative cancellation (cancelling of redundant self-adjoint gates); 2-qubit KAK optimization (decomposition of 2-qubit unitaries into the minimal number of uses of 2-qubit basis gates).
Secondly, we used the Dynamical Decoupling error suppression technique [46, 19]. This technique works as a pulse schedule by inserting a DD pulse sequence into periods of time in which qubits are idle. The DD pulses effectively behave as an identity gate, thus not altering the logical action of the circuit, but having the effect of mitigating decoherence in the idle periods, reducing the impact of errors.
Thirdly, we used the M3 (Matrix-free Measurement Mitigation) error mitigation technique [40] by setting the Sampler resilience_level parameter to 1 (the only option available for the Sampler primitive). This provides mitigated quasi-probability distributions after the measurement. M3 works in a reduced subspace
defined by the noisy input bitstrings that are to be corrected, which is often much smaller than the full \(N\)-qubit Hilbert space. For this reason, this method is much more efficient than the matrix-based readout mitigator technique mentioned above. M3 provides a matrix-free preconditioned iterative solution method, which removes the need to construct the full reduced assignment matrix but rather computes individual matrix elements, which uses orders of magnitude less memory than direct factorization.
By employing these three techniques, we were able to achieve a Gini score of \(53.68\%\) for the OrthoResNN (Fig. 26). This is a \(3.49\) pp improvement from the initial \(50.19\%\) Gini of the unmitigated run, falling only \(0.53\) pp behind the ideal noiseless execution (\(54.21\%\) Gini)! This remarkable result underscores the NISQ-friendliness of the orthogonal layer and highlights the importance of error suppression and mitigation techniques in the NISQ era.
It is important to note that circuit optimization, error suppression, and mitigation techniques typically result in some classical/quantum pre/post-processing overhead to the overall circuit runtime. Some of these techniques are based on heuristics and/or do not have efficient scaling at larger circuit sizes. It is important to balance the desired levels of optimization and resilience with the required time for the full execution, especially as the circuit sizes increase.
#### 3.3.3 Further Benchmarks
We benchmarked the OrthoFNN architecture on different classification datasets from OpenML as we did for the Random Forests (Sec. 2.4.5). We trained fully connected neural networks (FNNs) using the same model parameters and hyperparameters. We first use a fully connected layer of \(16\) output nodes followed by GeLU. Then, we extract the features using a \(16\times 16\) fully connected layer, which is done using the Pyramid circuit in the case of the orthogonal FNN, followed by \(\tanh\). In the end, we use another fully connected layer with \(2\) output nodes. For all the datasets and models, we use a batch size of \(128\) and a learning rate of \(10^{-4}\). Each is trained for \(500\) epochs and evaluated with the ROC-AUC metric. The results are summarised in Table 6, in which one can see that the OrthoFNN model was superior in most of the tasks.
Figure 26: ROC Curves with Gini score for the ideal simulation, hardware execution and the error-mitigated hardware execution.
## 4 Conclusion
In this work, we have explored the potential of quantum machine learning methods in improving forecasting in finance, with a focus on two specific use cases within the Itau business: churn prediction and credit risk assessment. Our results demonstrate that the proposed algorithms, which leverage quantum ideas, can effectively enhance the performance of Random Forest and neural network models, achieving better accuracy and generalization.
In the present day, quantum hardware is not powerful enough to provide real improvements or conclusive large-scale benchmarks. Performance enhancements can be achieved today by turning these quantum ideas into classical ML solutions run on GPUs. However, with the advent of better quantum hardware, we expect these methods to run faster and produce even better results when run on quantum computers.
The general nature of the proposed methods makes them applicable to other use cases in finance and beyond, although they must be tuned to specific datasets and tasks. We hope this work inspires confidence that QML research holds promise both for today as well as for the coming era of scaled, fault-tolerant quantum hardware.
### Disclaimer
This paper is a research collaboration between Itau Unibanco and QC Ware. Any opinions, findings, conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of Itau Unibanco. This paper is not and does not constitute or intend to constitute investment advice or any investment service. It is not and should not be deemed to be an offer to purchase or sell or a solicitation of an offer to purchase or sell, or a recommendation to purchase or sell any securities or other financial instruments. Moreover, all data used in this study is compliant with the Brazilian General Data Protection Law.
|
2309.12138 | Endotrivial complexes | Let $G$ be a finite group, $p$ a prime, and $k$ a field of characteristic
$p$. We introduce the notion of an endotrivial chain complex of $p$-permutation
$kG$-modules, which are the invertible objects in the bounded homotopy category
of $p$-permutation $kG$-modules, and study the corresponding Picard group
$\mathcal{E}_k(G)$ of endotrivial complexes. Such complexes are shown to induce
splendid Rickard autoequivalences of $kG$. The elements of $\mathcal{E}_k(G)$
are determined uniquely by integral invariants arising from the Brauer
construction and a degree one character $G \to k^\times$. Using ideas from
Bouc's theory of biset functors, we provide a canonical decomposition of
$\mathcal{E}_k(G)$, and as an application, give complete descriptions of
$\mathcal{E}_k(G)$ for abelian groups and $p$-groups of normal $p$-rank 1.
Taking Lefschetz invariants of endotrivial complexes induces a group
homomorphism $\Lambda: \mathcal{E}_k(G) \to O(T(kG))$, where $O(T(kG))$ is the
orthogonal unit group of the trivial source ring. Using recent results of
Boltje and Carman, we give a Frobenius stability condition elements in the
image of $\Lambda$ must satisfy. | Sam K. Miller | 2023-09-21T14:59:09Z | http://arxiv.org/abs/2309.12138v4 | # Endotrivial complexes
###### Abstract
Let \(G\) be a finite group, \(p\) a prime, and \(k\) a field of characteristic \(p\). We introduce the notion of an endotrivial complex of \(p\)-permutation \(kG\)-modules, and study the corresponding group of endotrivial complexes, \(\mathcal{E}_{k}(G)\). Such complexes are shown to induce splendid Rickard autoequivalences of \(kG\). The elements of \(\mathcal{E}_{k}(G)\) are determined uniquely by integral invariants arising from the Brauer construction and a degree \(1\) character \(G\to k^{\times}\). Using ideas from Bouc's theory of biset functors, we provide a canonical decomposition of \(\mathcal{E}_{k}(G)\), and as an application, determine complete descriptions of \(\mathcal{E}_{k}(G)\) for abelian groups and \(p\)-groups of normal \(p\)-rank \(1\). We investigate the image of \(\mathcal{E}_{k}(G)\) in the orthogonal unit group of the trivial source ring \(O(T(kG))\) induced via the Lefschetz invariant map, and using recent results of Boltje and Carman, we determine a Frobenius stability condition an orthogonal unit must satisfy to lift to an endotrivial complex.
## 1 Introduction
Let \(G\) be a finite group, \(p\) a prime, and \(k\) a field of characteristic \(p\). Endotrivial \(kG\)-modules, i.e. \(kG\)-modules \(M\) which satisfy \(M^{*}\otimes_{k}M\cong k\oplus P\) for some projective \(kG\)-module \(P\), are an object of interest for group theorists and representation theorists. The work of Dade, Puig, and many others has elucidated much about endotrivial modules and their corresponding group \(\mathcal{T}(G)\), and more generally, endopermutation modules. Such modules arise in many places in the theory of modular representations: for instance, endopermutation modules are closely linked to stable equivalences of Morita type between block algebras (see [15, 1.2]).
In this paper, we adapt this notion of invertibility for chain complexes. We have multiple candidate categories to choose from in order to determine what "invertibility" means, and in the scope of this paper, we choose the homotopy category of bounded chain complexes \(K^{b}(_{kG}\mathbf{mod})\); thus contractible complexes replace the role of projective modules. Our initial definition of an endotrivial chain complex is as follows: if \(C\) is a bounded complex of \(kG\)-modules, \(C\) is endotrivial if and only if \(C^{*}\otimes_{k}C\simeq k[0]\), where \(k[0]\) denotes the chain complex consisting of the trivial module in degree \(0\) and the zero module in all other degrees. We additionally impose the restriction that each module of an endotrivial chain complex is \(p\)-permutation, that is, is a permutation module when restricted to a Sylow subgroup of \(G\). As a desired consequence, endotrivial complexes will induce splendid Rickard equivalences; this is Theorem 4.7.
**Theorem 1.1**.: _Let \(C\) be an endotrivial complex of \(p\)-permutation \(kG\)-modules. Then \(\operatorname{ind}_{\Delta G}^{G\times G}C\), regarded as a complex of \((kG,kG)\)-bimodules, is a splendid Rickard complex for \(kG\) and \(kG\)._
Boltje and Xu showed in [6, 1.5] that any splendid Rickard complex induces a \(p\)-permutation equivalence by taking its Lefschetz invariant (also referred to as its Euler characteristic), that is, taking an alternating sum of its components. A question of interest is determining the image of the Lefschetz invariant map; in particular we wish to construct lifts for \(p\)-permutation equivalences to splendid Rickard complexes, and such constructions may shine light on the interplay between the two equivalences.
The trivial source ring \(T(kG)\) is the Grothendieck group of the category of \(p\)-permutation modules with respect to split exact sequences, and the orthogonal unit group \(O(T(kG))\leq T(kG)^{\times}\) is the subgroup consisting of units whose inverse is its dual. Analogously, the Lefschetz invariant of an endotrivial complex is an orthogonal unit of the trivial source ring, and the analogous question we ask is what the image of the map induced by taking a Lefschetz invariant is in \(O(T(kG))\). To connect things to the previous question, orthogonal units induce \(p\)-permutation equivalences in the same manner as endotrivial complexes induce splendid Rickard complexes.
A classical tool for studying \(p\)-permutation modules is the Brauer construction, which, given any \(p\)-subgroup \(P\) of \(G\) and \(kG\)-module \(M\), functorially constructs a \(kN_{G}(P)\)-module \(M(P)\) on which \(P\) acts trivially, hence a \(k[N_{G}(P)/P]\)-module. This can be seen as an analogue of taking \(P\)-fixed points of a \(G\)-set. For endotrivial complexes of \(kG\)-modules, the Brauer construction induces "local" endotrivial complexes of \(k[N_{G}(P)/P]\)-modules. To each \(p\)-subgroup, one can associate an integer corresponding to the location of non-exactness of the corresponding local endotrivial complex, which we call an _h-mark_, analogous to the marks of a \(G\)-set. In fact, taking h-marks at all \(p\)-subgroups induces a group homomorphism, and up to a twist by a \(k\)-dimension 1 representation, completely characterizes the homotopy class of an endotrivial complex. We describe with more precision and prove these statements in Section 3.
**Theorem 1.2**.: _Denote by \(\mathcal{E}_{k}(G)\) the group of homotopy classes of endotrivial chain complexes of \(kG\)-modules, with group law given by \(\otimes_{k}\). We have a group homomorphism_
\[\mathcal{E}_{k}(G)\to\left(\prod_{P\in s_{p}(G)}\mathbb{Z}\right)^{G}\]
_with kernel consisting of all isomorphism classes of \(k\)-dimension one representations of \(G\), regarded as chain complexes in degree 0._
_In particular, \(\mathcal{E}_{k}(G)\) has rank as an abelian group bounded by the number of conjugacy classes of \(p\)-subgroups of \(G\). Moreover, two endotrivial complexes are homotopy equivalent if and only if they have the same h-marks and isomorphic homology in all degrees._
In addition, we take inspiration from Bouc's theory of biset functors to consider the notion of a faithful subgroup of \(\mathcal{E}_{k}(G)\). We obtain a canonical decomposition of \(\mathcal{E}_{k}(G)\) into faithful constituents of quotient groups of \(G\), which allows us to recursively determine \(\mathcal{E}_{k}(G)\) for abelian groups and groups of normal \(p\)-rank 1. In all such cases, we conclude that every orthogonal unit of the trivial source ring lifts to an endotrivial complex. These computations are carried out in Section 6.
Unfortunately, completely determining \(\mathcal{E}_{k}(G)\) for arbitrary groups does not yet appear feasible, nor does the task of determining the cokernel of \(\Lambda:\mathcal{E}_{k}(G)\to O(T(kG))\) in complete generality. In the case of \(p\)-groups, \(O(T(kG))\) is canonically isomorphic to the unit group of the Burnside ring of \(G\), \(B(G)^{\times}\). In particular for odd \(p\), it is easy to show that \(\Lambda\) is surjective since \(B(P)^{\times}=\{\pm 1\}\) for \(p\)-groups of odd order. The question of \(p=2\) is more difficult - in this case a basis for \(B(P)^{\times}\) was described by Bouc in [8, 7.4], which relies on tensor induction. Unfortunately, tensor induction does not preserve endotriviality of complexes: we give an example in Section 5 demonstrating this failure even for small groups.
In contrast to the previous cases, we find large classes of orthogonal units which cannot be lifted to endotrivial complexes by observing the action induced by the Frobenius automorphism on the corresponding module categories. This theorem is the main result of Section 7. It requires some technical background, which is detailed in Remark 4.4. In particular, there exists an injective homomorphism described by Boltje & Carman in [4, Theorem A], \(\beta_{G}:O(T(kG))\to\prod_{P\in s_{p}(G)}R_{k}(N_{G}(P)/P)\) which associates to each orthogonal unit \(u\in O(T(kG))\) a tuple \((\epsilon_{P}\cdot\rho_{P})_{P\in s_{p}(G)}\) with \(\epsilon_{P}\in\{\pm 1\}\) and \(\rho_{P}\in\operatorname{Hom}(N_{G}(P)/P,k^{\times})\).
**Theorem 1.3**.: _Let \(u\in O(T(kG))\), and suppose \(\beta_{G}(u)=(\epsilon_{P}\cdot\rho_{P})_{P\in s_{p}(G)}\), with \(\rho_{1}\) the trivial representation. If \(u\) lifts to an endotrivial complex, then each \(\rho_{P}\) admits values only in \(\mathbb{F}_{p}^{\times}\)._
_In particular, if \(p=2\), then the only units of \(O(T(kG))\) which may lift arise from units of the Burnside ring of a Sylow \(p\)-subgroup of \(G\) and twists by 1-dimensional representations. If \(p>2\) and all primes \(q\) which divide \(|G|\) satisfy \(q\nmid p-1\), then the only units in \(O(T(kG))\) which lift are 1-dimensional representations and their additive inverses._
The paper is structured as follows. Section 2 recalls most of the preliminary definitions and theorems needed for the paper. Section 3 introduces endotrivial complexes, the group \(\mathcal{E}_{k}(G)\), and the h-mark homomorphism \(\epsilon\), and establishes key structural properties of \(\mathcal{E}_{k}(G)\). Section 4 describes the behavior of \(\mathcal{E}_{k}(G)\) with regards to functorial constructions, describes how one can partially recover the Lefschetz invariant of an endotrivial complex from the h-marks, and proves that endotrivial complexes induce splendid autoequivalences of \(kG\). Section 5 introduces the faithful component of \(\mathcal{E}_{k}(G)\), gives a canonical decomposition of \(\mathcal{E}_{k}(G)\) into faithful components of quotient groups, and describes which modules live in faithful complexes. Section 6 computes \(\mathcal{E}_{k}(G)\) for abelian, dihedral, generalized quaternion, and semidihedral groups. Finally, Section 7 provides a necessary condition for elements of \(O(T(kG))\) to lift to endotrivial complexes, which allows us to completely determine the image of the Lefschetz invariant in some cases.
**Notation:** Throughout, unless stated otherwise, \(G\) is a finite group, \(p\) a prime, and \((K,\mathcal{O},k)\) is a \(p\)-modular system. That is, \(\mathcal{O}\) is a complete discrete valuation ring of characteristic \(0\), \(K\) is its field of fractions and \(k\) its residue field of characteristic \(p\). Note that we make no assumptions about \((K,\mathcal{O},k)\) being large enough for \(G\). All modules are finitely generated.
We denote the set of \(p\)-subgroups of \(G\) by \(s_{p}(G)\). If two subgroups \(H_{1},H_{2}\leq G\) are \(G\)-conjugate, we write \(H_{1}=_{G}H_{2}\), and if \(H_{1}\) is a subgroup of \(H_{2}\) up to \(G\)-conjugacy, we write \(H_{1}\leq_{G}H_{2}\). If \(K,H\leq G\), we write \([G/H]\) to denote a set of coset representatives of \(G/H\) and write \([K\backslash G/H]\) to denote a set of double coset representatives of \(K\backslash G/H\). \(\operatorname{Syl}_{p}(G)\) denotes the set of Sylow \(p\)-subgroups of \(G\).
Let \(R\) be a commutative ring. \({}_{RG}\mathbf{mod}\) denotes the category of finitely generated \(RG\)-modules, \(Ch({}_{RG}\mathbf{mod})\) (resp. \(Ch^{b}({}_{RG}\mathbf{mod})\)) denotes the category of chain complexes (resp. bounded chain complexes) of finitely generated \(RG\)-modules, and \(K({}_{RG}\mathbf{mod})\) (resp. \(K^{b}({}_{RG}\mathbf{mod})\)) denotes the homotopy category (resp. bounded homotopy category) of \({}_{RG}\mathbf{mod}\). Given a \(RG\)-module \(M\), we write \(M^{*}\) for the \(R\)-dual of \(M\), the \(RG\)-module \(\operatorname{Hom}_{R}(M,R)\) with left \(RG\)-action defined by \((g\cdot f)(m)=f(g^{-1}m)\). Given an \((RG,RH)\)-bimodule \(M\), we write \(M^{*}\) for the \(R\)-dual of \(M\), the \((RH,RG)\)-bimodule \(\operatorname{Hom}_{R}(M,R)\) with bimodule action defined by \((h\cdot f\cdot g)(m)=f(gmh)\). Finally, given two \(RG\)-modules \(M,N\), the tensor product \(M\otimes_{R}N\) is again an \(RG\)-module, with diagonal \(G\)-action \(g\cdot(m\otimes n)=(gm\otimes gn)\). This endows \({}_{RG}\mathbf{mod}\) with a symmetric monoidal structure.
**Acknowledgments:** The author is extremely grateful to his supervisor Robert Boltje for the countless hours of discussion, supervision, and assistance he has offered to make this paper possible. He additionally would like to thank the many mathematicians who offered their thoughts during the Dame Kathleen Ollerenshaw workshop including Nadia Mazza, Caroline Lassueur, and Markus Linckelmann, as well as the University of Manchester for their hospitality during the workshop.
Finally, he would like to thank Dan Nakano for asking a very insightful question during a talk by the author, which led to the ideas present in Definition 3.5.
## 2 Preliminaries
In this section, we recall various facts about \(p\)-permutation modules, the Brauer construction, and homotopy theory for chain complexes which will be necessary.
### \(p\)-permutation modules and the Brauer construction
**Definition 2.1**.: A \(kG\)-module \(M\) is a \(p\)_-permutation module_ if \(\operatorname{res}_{P}^{G}M\) is a permutation module for all \(p\)-subgroups \(P\in s_{p}(G)\).
Note that it suffices to check \(\operatorname{res}_{S}^{G}M\) is a permutation module for \(S\in\operatorname{Syl}_{p}(G)\). When working over characteristic \(p\), we have an equivalent characterization of \(p\)-permutation. Given \(H\leq G\), a \(kG\)-module \(M\) is _relatively \(H\)-projective_ if there exists a \(kH\)-module \(N\) for which \(M\) is a direct summand of \(\operatorname{ind}_{H}^{G}N\). If \(\mathcal{X}\) is a set of subgroups of \(G\), say \(M\) is \(\mathcal{X}\)-projective if each indecomposable constituent of \(M\) is \(H\)-projective for some \(H\in\mathcal{X}\). If \(M\) is indecomposable, we say \(M\) has _vertex \(P\)_ and _source \(S\)_ if \(P\leq G\) is minimal with respect to the property that \(M\) is \(P\)-projective, and \(S\) is an indecomposable \(kP\)-module for which \(M\) is a direct summand of \(\operatorname{ind}_{P}^{G}S\). It is well-known that the set of vertices of an indecomposable module forms a full conjugacy class of \(p\)-subgroups of \(G\).
**Theorem 2.2**.: _Let \(M\) be a \(kG\)-module. \(M\) is a \(p\)-permutation module if and only if \(M\) is a direct summand of a permutation module. Moreover, if \(M\) is indecomposable, then \(M\) is a \(p\)-permutation module if and only if \(M\) has trivial source._
Proof.: See [16, 5.10.2 & 5.11.2].
For this reason, indecomposable \(p\)-permutation modules are also referred to as _trivial source modules_. We write \({}_{kG}\mathbf{triv}\) to denote the full subcategory of \({}_{kG}\mathbf{mod}\) with objects given by all \(p\)-permutation modules. It is an additive category which is symmetric monoidal and idempotent complete, but is in general not pre-abelian, since kernels and cokernels of homomorphisms of \(p\)-permutation modules need not be \(p\)-permutation. It is the idempotent completion of the category \({}_{kG}\mathbf{perm}\) of permutation \(kG\)-modules.
In the case of \(p\)-groups, the trivial source modules are easy to describe. This property will be crucial in the sequel, as it connects the theory of \(p\)-permutation \(kP\)-modules to the theory of \(P\)-sets.
**Proposition 2.3**.: _Let \(P\) be a \(p\)-group. Then the isomorphism classes of indecomposable \(p\)-permutation modules are given by \(k[P/Q]\), where \(Q\) runs through all subgroups of \(P\)._
Proof.: See [16, 1.11.4].
_Remark 2.4_.: Similarly, one can define \(p\)-permutation \(\mathcal{O}G\)-modules in the same way. We have a canonical functor \(k\otimes_{\mathcal{O}}-:\mathcal{O}_{G}\mathbf{triv}\to{}_{kG}\mathbf{triv}\) which is essentially surjective on objects and surjective on morphisms, see [16, 5.11.2]. In fact, this functor induces a one-to-one correspondence on isomorphism classes of objects between \({}_{\mathcal{O}G}\mathbf{triv}\) and \({}_{kG}\mathbf{triv}\), that is, given a \(p\)-permutation \(kG\)-module, there is a unique \(p\)-permutation \(\mathcal{O}G\)-module which lifts it.
In the sequel, we will work with chain complexes of \(p\)-permutation \(kG\)-modules. In general, we again have a functor \(Ch({}_{\mathcal{O}G}\mathbf{triv})\to Ch({}_{kG}\mathbf{triv})\), however it is no longer essentially surjective,
since if one attempts to lift a chain complex of \(p\)-permutation \(kG\)-modules to a chain complex of \(p\)-permutation \(\mathcal{O}G\)-modules, the differentials of the resulting graded object may no longer satisfy \(d^{2}=0\). For example, the 3-term chain complex of \(p\)-permutation \(\mathbb{F}_{2}C_{2}\)-modules \(\mathbb{F}_{2}\rightarrow\mathbb{F}_{2}C_{2}\rightarrow\mathbb{F}_{2}\) with each differential the unique nonzero homomorphism between the two modules has no lift to \(\mathcal{O}\).
**Definition 2.5**.: The _trivial source ring_, denoted by \(T(kG)\), is the Grothendieck group of \({}_{kG}\mathbf{triv}\) with respect to split exact sequences. It is a free \(\mathbb{Z}\)-module with canonical basis given by the images of all trivial source \(kG\)-modules. Given a \(kG\)-module \(M\), we write \([M]\in T(kG)\) to denote the class of \(M\) in \(T(kG)\). We write \(O(T(kG))\) for the subgroup of the unit group \(T(kG)^{\times}\) consisting of _orthogonal units_, i.e. units \(x\in T(kG)^{\times}\) for which \(x^{*}=x^{-1}\).
Denote by \(R_{k}(G)\) the _Brauer character ring_ of \(kG\). It is isomorphic to the Grothendieck group of \({}_{kG}\mathbf{triv}\) with respect to short exact sequences, and therefore we have a canonical surjection \(T(kG)\to R_{k}(G)\). It is well-known that \(R_{k}(G)^{\times}=\{\pm\chi\mid\chi\in\mathrm{Hom}(G,k^{\times})\}\), that is, the units are given by 1-dimensional irreducible Brauer characters up to a sign change.
One of the tools used to study \(p\)-permutation modules is the _Brauer construction_ (also called the _Brauer quotient_).
**Definition 2.6**.:
1. Given any subgroups \(Q\leq P\leq G\), the _trace map_\(\mathrm{tr}_{Q}^{P}:M^{Q}\to M^{P}\) is given by \[\mathrm{tr}_{Q}^{P}:m\mapsto\sum_{x\in[P/Q]}x\cdot m.\]
2. Given any subgroup \(P\leq G\), the _Brauer construction_ at \(P\), \(-(P):{}_{kG}\mathbf{mod}\rightarrow{}_{k[N_{G}(P)]}\mathbf{mod}\) is defined by \[M(P):=M^{P}/\sum_{Q<P}\mathrm{tr}_{Q}^{P}(M^{Q}).\] Given any \(kG\)-module \(M\), \(M(P)\) is clearly \(P\)-fixed, and thus, the Brauer construction may also be considered as an additive functor \((-)(P):{}_{kG}\mathbf{mod}\rightarrow{}_{k[N_{G}(P)/P]}\mathbf{mod}\). We consider the Brauer construction in this way, unless stated otherwise - it will be convenient in a few situations to do so.
If \(P\) is not a \(p\)-group, then \(M(P)=0\) for all \(kG\)-modules \(M\); thus we restrict our attention to taking Brauer quotients at \(p\)-subgroups. The Brauer construction restricts to a functor \((-)(P):{}_{kG}\mathbf{triv}\rightarrow{}_{k[N_{G}(P)/P]}\mathbf{triv}\). It extends to a functor of chain complexes, \(Ch({}_{kG}\mathbf{mod})\to Ch({}_{k[N_{G}(P)/P]}\mathbf{mod})\) and similarly for \({}_{kG}\mathbf{triv}\). This functor is rarely exact.
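To illustrate the definition in the smallest nontrivial case (an example we include here for concreteness), let \(p=2\) and \(P=C_{2}=\langle g\rangle\). For the regular module \(M=kP\) we have \(M^{P}=k\cdot(1+g)\) and \(\mathrm{tr}_{1}^{P}(a+bg)=(a+b)(1+g)\), so \(\sum_{Q<P}\mathrm{tr}_{Q}^{P}(M^{Q})=k\cdot(1+g)\) and hence \(M(P)=0\). For the trivial module \(k\), by contrast, \(\mathrm{tr}_{1}^{P}(m)=2m=0\) in characteristic \(2\), so \(k(P)=k\). This is consistent with \(kP\) having vertex \(1\) and \(k\) having vertex \(P\) (compare Proposition 2.8(c) below).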
We now give a list of well-known properties of the Brauer construction.
**Proposition 2.7**.:
1. _Let_ \(M\) _be a_ \(kG\)_-module and_ \(P\in s_{p}(G)\)_._ \(M(P)^{*}\cong(M^{*})(P)\) _is a natural isomorphism of_ \(k[N_{G}(P)/P]\)_-modules._
2. _Given any_ \(kG\)_-module_ \(M\)_,_ \(P\in s_{p}(G)\) _with_ \(P\leq H\leq N_{G}(P)\)_,_ \(\mathrm{res}_{N_{H}(P)/P}^{N_{G}(P)/P}M(P)=\left(\mathrm{res}_{H}^{G}\,M\right) (P)\)_._
3. _Let_ \(M\) _be a_ \(kG\)_-module,_ \(P\in s_{p}(G)\)_, and_ \(x\in G\)_._ \({}^{x}(M(P))\cong({}^{x}M)({}^{x}P)\) _is a natural isomorphism of_ \(k[{}^{x}(N_{G}(P)/P)]\)_-modules._
Proof.: All of these follow directly from the definition of the Brauer construction and are easily verified.
The Brauer construction behaves especially well with \(p\)-permutation modules. The following are well-known properties.
**Proposition 2.8**.:
1. _Let_ \(M,N\) _be_ \(p\)_-permutation_ \(kG\)_-modules and_ \(P\in s_{p}(G)\)_. Then there is a natural isomorphism_ \((M\otimes_{k}N)(P)\cong M(P)\otimes_{k}N(P)\) _of_ \(k[N_{G}(P)/P]\)_-modules._
2. _Let_ \(M\cong kX\) _be a permutation module and_ \(P\in s_{p}(G)\)_. Then_ \(M(P)\) _has as a permutation_ \(k\)_-basis the image of the_ \(P\) _fixed points_ \(X^{P}\) _under the quotient map_ \(M^{P}\to M(P)\)_. In particular,_ \(k[G/P](P)\cong k[N_{G}(P)/P]\) _as_ \(k[N_{G}(P)/P]\)_-modules._
3. _Let_ \(M\) _be a trivial source module._ \(M(P)\neq 0\) _if and only if_ \(P\) _is contained in a vertex of_ \(M\)_._
4. _Let_ \(Q\trianglelefteq P\in s_{p}(G)\) _and let_ \(M\) _be a_ \(p\)_-permutation_ \(kG\)_-module. Then_ \(M(P)\cong M(Q)(P)\)_, regarding_ \(M(Q)\) _as a_ \(kN_{G}(Q)\)_-module. In particular, if_ \(M(P)\neq 0\)_, then_ \(M(R)\neq 0\) _for any subgroup_ \(R\leq P\)_._
Proof.: (a) is [16, 5.8.10]. (b) follows from [16, 5.8.1]. (c) is [16, 5.10.3]. (d) is [16, 5.8.5].
The following theorems will be crucial in the sequel.
**Theorem 2.9**.: _[_16_, 5.8.11]_ _Let \(M,N\) be \(p\)-permutation \(kG\)-modules, and let \(f:M\to N\) be an injective (resp. surjective) homomorphism. Then \(f\) is split injective (resp. surjective) if and only if \(f(P)\) is injective (resp. surjective) for all \(p\)-subgroups \(P\in s_{p}(G)\)._
**Theorem 2.10**.: _[_15_, 6.1]_ _Let \(U\) be an indecomposable \(kG\)-module with vertex \(P\) and trivial source, and let \(M\) be a \(p\)-permutation \(kG\)-module. Let \(f:U\to M\) be a \(kG\)-module homomorphism. The following are equivalent:_
* \(f:U\to M\) _is split injective._
* \(f(P):U(P)\to M(P)\) _is injective._
_Dually, let \(g:M\to U\) be a \(kG\)-module homomorphism. The following are equivalent:_
* \(g:M\to U\) _is split surjective._
* \(g(P):M(P)\to U(P)\) _is surjective._
We give a generalization of this theorem in Section 5.
**Theorem 2.11**.: _[_7_, 7.9]___
1. _Let_ \(C\in Ch^{b}(_{kG}\textbf{triv})\)_._ \(C\) _is contractible if and only if_ \(C(P)\) _is acyclic for all_ \(p\)_-subgroups_ \(P\in s_{p}(G)\)_._
2. _Let_ \(C\in Ch^{b}(_{kG}\textbf{triv})\)_._ \(C\) _is homotopy equivalent to a bounded complex of projective_ \(kG\)_-modules if and only if_ \(C(P)\) _is acyclic for all nontrivial_ \(p\)_-subgroups_ \(1<P\in s_{p}(G)\)_._
For a more detailed treatment of \(p\)-permutation modules, the trivial source ring, and the Brauer quotient, we direct the reader to [14], [16], or [4]. We remark that the Brauer construction is often defined as a functor \({}_{\mathcal{O}G}\textbf{mod}\to{}_{k[N_{G}(P)/P]}\textbf{triv}\). On the other hand, there is no equivalent functor \({}_{\mathcal{O}G}\textbf{mod}\to{}_{\mathcal{O}[N_{G}(P)/P]}\textbf{mod}\) which lifts the Brauer construction over \(k\).
### Chain complexes
Next, we recall the definitions of the tensor product and internal hom of chain complexes.
**Definition 2.12**.: Let \(R\) be a commutative ring and \(C,D\) two bounded chain complexes of \(RG\)-modules with corresponding differentials denoted by \(c_{i},d_{j}\) respectively.
* The _tensor product_ is a chain complex of \(RG\)-modules, denoted \(C\otimes_{R}D\), and is defined as follows. 1. \((C\otimes_{R}D)_{n}=\bigoplus_{i+j=n}C_{i}\otimes_{R}D_{j}\). 2. \(d_{i,j}^{C}:C_{i}\otimes_{R}D_{j}\to C_{i-1}\otimes_{R}D_{j}\) is the homomorphism \(c_{i}\otimes\operatorname{id}\). 3. \(d_{i,j}^{D}:C_{i}\otimes_{R}D_{j}\to C_{i}\otimes_{R}D_{j-1}\) is the homomorphism \((-1)^{i}\operatorname{id}\otimes d_{j}\).
* The _internal hom_ is a chain complex of \(RG\)-modules, denoted \(\operatorname{Hom}_{R}(C,D)\), and is defined as follows. 1. \(\operatorname{Hom}_{R}(C,D)_{n}=\bigoplus_{j-i=n}\operatorname{Hom}_{R}(C_{i},D_{j})\). 2. \(d_{i,j}^{C}:\operatorname{Hom}_{R}(C_{i},D_{j})\to\operatorname{Hom}_{R}(C_{i+ 1},D_{j})\) is the homomorphism \((-1)^{1+j-i}(c_{i+1})^{*}\). 3. \(d_{i,j}^{D}:\operatorname{Hom}_{R}(C_{i},D_{j})\to\operatorname{Hom}_{R}(C_{i},D_{j-1})\) is the homomorphism \((d_{j})_{*}\). Given a \(RG\)-module \(M\) and \(i\in\mathbb{Z}\), we write \(M[i]\) for the chain complex with \(M\) in degree \(i\) and zero modules in all other degrees. The _dual chain complex_ is \(C^{*}=\operatorname{Hom}(C,R[0])\) with the above sign conventions.
**Proposition 2.13**.: _Given \(C\in Ch^{b}(_{RG}\mathbf{mod})\), we have an \(RG\)-linear chain complex isomorphism \(C^{*}\otimes_{R}C\cong\operatorname{End}_{R}(C)\). In particular, \(Ch^{b}(_{RG}\mathbf{mod})\) is a symmetric monoidal category._
Two chain complexes \(C,D\) of \(kG\)-modules are homotopy equivalent, denoted \(C\simeq D\), if and only if they are isomorphic in the homotopy category \(K(_{kG}\mathbf{mod})\). If the Krull-Schmidt theorem holds, this is equivalent to the existence of contractible complexes \(C^{\prime},D^{\prime}\) of \(kG\)-modules such that \(C\oplus C^{\prime}\cong D\oplus D^{\prime}\). The following characterization of bounded contractible chain complexes will be used throughout the paper.
**Proposition 2.14**.: _A bounded chain complex \(C\in K^{b}(_{RG}\mathbf{mod})\) is contractible, i.e. \(C\simeq 0\), if and only if \(C\) is isomorphic to a finite direct sum of chain complexes of the form \(\cdots\to 0\to M\xrightarrow{\sim}M\to 0\to\dots\)._
Recall that a \(kG\)-module homomorphism \(f:M\to N\) is _split_ if and only if there exists a homomorphism \(s:N\to M\) such that \(f=f\circ s\circ f\). In this case, we have \(M=\ker f\oplus\operatorname{im}s\circ f\) and \(N=\operatorname{im}f\oplus\ker f\circ s\). A chain complex is _split_ if and only if all of its differentials are split. The following alternative characterization is well-known.
**Proposition 2.15**.: _A chain complex \(C\in Ch(_{kG}\mathbf{mod})\) is split if and only if \(C\simeq H_{\bullet}(C)\), with \(H_{\bullet}(C)\) viewed as a chain complex with zero maps as differentials. In particular, a chain complex is contractible if and only if it is split acyclic._
Given a bounded chain complex \(C\), if \(C_{i}\neq 0\), \(C_{i^{\prime}}=0\) for all \(i^{\prime}>i\), \(C_{j}\neq 0\), and \(C_{j^{\prime}}=0\) for all \(j^{\prime}<j\), the _length_ of \(C\) is \((i-j)+1\). For example, the contractible chain complex \(\cdots\to 0\to k=k\to 0\to\cdots\) has length \(2\).
We state the version of the Kunneth formula which will be critical for most of the techniques used in this paper.
**Theorem 2.16**.: _(Kunneth formula for complexes of \(RG\)-modules with diagonal tensor product) Let \(R\) be a commutative ring and let \(C\) and \(D\) be complexes of \(RG\)-modules. If \(C_{n}\) and \(d(C_{n})\) are flat for each \(n\), then there is an exact sequence_
\[0\to\bigoplus_{p+q=n}H_{p}(C)\otimes_{R}H_{q}(D)\to H_{n}(C\otimes_{R}D)\to \bigoplus_{p+q=n-1}\operatorname{Tor}_{1}^{RG}(H_{p}(C),H_{q}(D))\to 0\]
_for each \(n\). In particular, if \(R=k\) is a field, then we have an isomorphism_
\[\bigoplus_{p+q=n}H_{p}(C)\otimes_{k}H_{q}(D)\cong H_{n}(C\otimes_{k}D)\]
_for all \(n\)._
Proof.: Adapt the proof of the Kunneth formula given in [22, 3.6.3] with the diagonal tensor product structure.
### Endotrivial modules
We will not rely too heavily on the theory of endotrivial modules, but a specific result later on will be used. We present this section both for reference for later, and to provide some additional motivation for our constructions and questions as things progress.
**Definition 2.17**.:
1. A \(kG\)-module \(M\) is an _endopermutation module_ if \(M^{*}\otimes_{k}M\cong k[X]\), where \(X\) is a \(G\)-set.
2. A \(kG\)-module \(M\) is an _endotrivial module_ if \(M^{*}\otimes_{k}M\cong k\oplus P\), where \(P\) is a projective \(kG\)-module. In other words, \(M^{*}\otimes_{k}M\cong k\) in the stable module category \({}_{kG}\underline{\mathbf{mod}}\).
3. We define the group of endotrivial \(kG\)-modules \(\mathcal{T}(G)\) as follows. The elements of \(\mathcal{T}(G)\) are classes of endotrivial modules which are stably isomorphic, where \(M_{1}\) and \(M_{2}\) are stably isomorphic if \(M_{1}\oplus P_{1}\cong M_{2}\oplus P_{2}\) for some projective \(kG\)-modules. In other words, \(M_{1}\cong M_{2}\) in \({}_{kG}\underline{\mathbf{mod}}\). Multiplication in \(\mathcal{T}(G)\) is induced by \(\otimes_{k}\).
Standard notation for the group of endotrivial modules is \(T(G)\), however since we also use \(T\) to denote the trivial source ring, we switch to \(\mathcal{T}(G)\) to distinguish between the two groups.
_Example 2.18_.: Most known examples of endotrivial modules come from syzygies. Proofs of the following two examples are fairly elementary.
1. Let \(\Omega(M)\) be the kernel of the projective cover \(P\to M\), and define \(\Omega_{i}(M)=\Omega(\Omega_{i-1}(M))\), with \(\Omega_{1}(M)=\Omega(M)\). If \(M\) is endotrivial, \(\Omega_{i}(M)\) is endotrivial for all \(i\in\mathbb{Z}\). This was proven by Alperin; see [18, Example 2.1] for more details.
2. Let \(X\) be a \(G\)-set. The _relative syzygy_\(\Omega_{X}\) is the kernel of the augmentation homomorphism \(kX\to k\). If \(G\) is a \(p\)-group, \(\Omega_{X}\) is an endopermutation module, see [1, Theorem 1]. \(\Omega_{X}\) is an endotrivial module in some rare exceptional cases. For example, when \(G\) is a semidihedral \(2\)-group and \(X=G/H\), where \(H\) is a noncentral subgroup of order \(2\), \(\Omega_{G/H}\) is endotrivial, and \(\Omega_{G/H}^{\otimes 2}\) is torsion in \(\mathcal{T}(G)\) with order \(2\). This was proven by Carlson & Thevenaz in [10, 7.1].
_Remark 2.19_.: Puig showed in [19, 2.4] that \(\mathcal{T}(G)\) is finitely generated as an abelian group, therefore \(\mathcal{T}(G)\) has a decomposition into its torsion subgroup and a (not necessarily unique) free subgroup. One goal of modular representation theorists is to completely determine the structure of \(\mathcal{T}(G)\) for all finite groups, and to determine explicit constructions for generators, an analogy that we will adopt in the sequel. In general, this is an open problem, but is known for many cases, including all \(p\)-groups, which was completed by Carlson & Thevenaz over multiple papers.
In general, torsion in \(\mathcal{T}(G)\) is rare for \(p\)-groups, in fact it only happens for cyclic groups of order at least 3, quaternion groups, and semidihedral groups. On the other hand, if \(P\) is a 2-group, syzygies are almost always how generators for \(\mathcal{T}(P)\) are found, with the sole exception of one exceptional endotrivial \(kQ_{8}\)-module. We refer the reader to [18] for a detailed exposition on endotrivial and endopermutation modules.
### The Burnside ring
We next discuss the Burnside ring, which partially governs the decomposition of \(O(T(kG))\). Additionally, if \(P\) is a \(p\)-group, since the indecomposable \(p\)-permutation \(kP\)-modules are isomorphic to transitive permutation modules, the trivial source ring of \(kP\) is isomorphic to the Burnside ring. We predominantly follow [9] for this subsection, and refer the reader to [12] for a few more details. If \(X\) is a \(G\)-set, we write \([X]\) to denote the image of \(X\) in \(B(G)\).
**Definition 2.20**.: The _Burnside ring_ of a finite group \(G\), denoted \(B(G)\), is the Grothendieck group of the additive category of finite \(G\)-sets, \({}_{G}\)**set**. \(B(G)\) is a ring with product induced by the direct product. We define the _mark homomorphism_ as follows:
\[\mathfrak{m}:B(G) \to\left(\prod_{H\leq G}\mathbb{Z}\right)^{G}\] \[X \mapsto(|X^{H}|)_{H\leq G}\]
Here, the \(G\)-action is given by \(G\)-conjugation on the poset of subgroups of \(G\). It follows that the image of \(\mathfrak{m}\) is \(G\)-equivariant from the property that if \(K,H\leq G\), then \(G/K\cong G/H\) as \(G\)-sets if and only if \(H=_{G}K\).
_Remark 2.21_.: Note that we have a ring homomorphism \(B(G)\to T(kG)\) induced by \(k\)-linearization, \([X]-[Y]\mapsto[kX]-[kY]\). It is in general neither injective nor surjective, but if \(G\) is a \(p\)-group it is an isomorphism.
\(\mathfrak{m}\) is a full-rank injective ring homomorphism, so after tensoring by \(\mathbb{Q}\), we obtain an isomorphism
\[\mathbb{Q}\otimes\mathfrak{m}:\mathbb{Q}\otimes_{\mathbb{Z}}B(G)\xrightarrow {\sim}\left(\prod_{H\leq G}\mathbb{Q}\right)^{G}\]
We set \(\mathbb{Q}B(G):=\mathbb{Q}\otimes_{\mathbb{Z}}B(G)\) and \(\mathbb{Q}\mathfrak{m}:=\mathbb{Q}\otimes\mathfrak{m}\).
We describe the inverse \(\mathbb{Q}\mathfrak{m}^{-1}\). Define the primitive idempotent \(\delta_{[H]}\in\left(\prod_{H\leq G}\mathbb{Q}\right)^{G}\) by \((\delta_{[H]})_{K}=1\) if \(H\) and \(K\) are conjugate and \((\delta_{[H]})_{K}=0\) otherwise. Then, \(\delta_{[H]}\) has preimage
\[e_{H}^{G}=\frac{1}{|N_{G}(H)|}\sum_{K\leq H}|K|\mu(K,H)[G/K]\in\mathbb{Q}B(G),\]
where \(\mu\) is the Mobius function associated to the poset of subgroups of \(G\). Let \([s_{G}]\) denote a set of conjugacy class representatives of subgroups of \(G\). Then \(\mathbb{Q}\mathfrak{m}^{-1}:\left(\prod_{H\leq G}\mathbb{Q}\right)^{G}\to\mathbb{ Q}B(G)\) is as follows:
\[\mathbb{Q}\mathfrak{m}^{-1}:\left(\prod_{H\leq G}\mathbb{Q}\right) ^{G} \to\mathbb{Q}B(G)\] \[(a_{H})_{H\leq G} \mapsto\sum_{H\in[s_{G}]}a_{H}e_{H}^{G}\]
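As a small sanity check (a computation we add here for illustration), take \(G=C_{2}\): then \(e_{1}^{G}=\frac{1}{2}[G/1]\) and \(e_{G}^{G}=[G/G]-\frac{1}{2}[G/1]\). Since \(\mathfrak{m}([G/1])=(2,0)\) and \(\mathfrak{m}([G/G])=(1,1)\), with the coordinates recording the marks at \(1\) and at \(G\), one checks directly that \(\mathbb{Q}\mathfrak{m}(e_{1}^{G})=\delta_{[1]}\) and \(\mathbb{Q}\mathfrak{m}(e_{G}^{G})=\delta_{[G]}\).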
_Remark 2.22_.: In general, the unit group \(B(G)^{\times}\) is very difficult to describe. In fact, by an argument of tom Dieck in [21], the statement "if \(G\) has odd order, then \(B(G)^{\times}=\{\pm[G/G]\}\)" is logically equivalent to the Feit-Thompson odd order theorem. However, if \(P\) is a \(2\)-group, Yalcin gave a description of the generators of \(B(P)^{\times}\) in [23], which Bouc refined using different methods in [8] to describe a basis of \(B(P)^{\times}\). We detail this result in Section 6.2, after more terminology has been introduced.
## 3 Endotrivial complexes and h-marks
In this section, we introduce the notion of an endotrivial chain complex and define the group \(\mathcal{E}_{k}(G)\) of endotrivial \(kG\)-complexes. We will find via the Brauer construction that elements of this group can be described via integral constants, similar to how elements of the Burnside ring can be described via their marks.
**Definition 3.1**.: Let \(C\in Ch^{b}(_{kG}\textbf{triv})\). Say \(C\) is an _endotrivial complex_ if \(\operatorname{End}_{k}(C)\cong C^{*}\otimes_{k}C\simeq k[0]\).
Of course, we can define endotriviality for \({}_{kG}\textbf{mod}\), but the scope of this paper is limited to bounded chain complexes of \(p\)-permutation \(kG\)-modules. For the rest of this paper, "endotrivial complex" means a bounded endotrivial complex of \(p\)-permutation \(kG\)-modules.
**Definition 3.2**.:
1. We define the group \(\mathcal{E}_{k}(G)\) to be the set of all homotopy classes of endotrivial chain complexes of \(p\)-permutation \(kG\)-modules with multiplication induced from \(\otimes_{k}\). It is easy to verify that this is an abelian group, with the inverse of a class given by the class of the dual complex. For an endotrivial complex \(C\), let \([C]\in\mathcal{E}_{k}(G)\) denote the corresponding homotopy class in the group.
2. We call the trivial module \(k\), regarded as a chain complex in degree \(0\), the _trivial endotrivial complex_. \([k]\) is the identity of \(\mathcal{E}_{k}(G)\).
We may abusively refer to elements of \(\mathcal{E}_{k}(G)\) as chain complexes, rather than homotopy classes, when it is permissible to do so. We write \(C\in\mathcal{E}_{k}(G)\) to denote that \(C\) is an endotrivial complex of \(kG\)-modules. One has to take care when defining properties of elements in \(\mathcal{E}_{k}(G)\) via representatives of homotopy classes, to make sure the properties are homotopy invariant.
First, note that the Brauer construction preserves endotriviality.
**Proposition 3.3**.: _If \(G\) and \(H\) are finite groups and \(\mathcal{F}:Ch^{b}(_{kG}\textbf{triv})\to Ch^{b}(_{kH}\textbf{triv})\) is an additive functor such that for all \(C_{1},C_{2}\in Ch^{b}(_{kG}\textbf{triv})\), \(\mathcal{F}(C_{1})\otimes_{k}\mathcal{F}(C_{2})\cong\mathcal{F}(C_{1}\otimes _{k}C_{2})\), \(\mathcal{F}(C_{1}^{*})\cong\mathcal{F}(C_{1})^{*}\), and \(\mathcal{F}(k)\cong k\), then \(\mathcal{F}\) induces a well-defined group homomorphism \(\mathcal{E}_{k}(G)\to\mathcal{E}_{k}(H)\)._
_In particular, if \(C\in\mathcal{E}_{k}(G)\), then \(C(P)\in\mathcal{E}_{k}(N_{G}(P)/P)\), and \(-(P)\) induces a well-defined group homomorphism \(\mathcal{E}_{k}(G)\to\mathcal{E}_{k}(N_{G}(P)/P)\)._
Proof.: Since \(C\otimes_{k}C^{*}\simeq k\), we have the following sequence of \(kH\)-complex homotopy equivalences:
\[k=\mathcal{F}(k)\simeq\mathcal{F}(C\otimes_{k}C^{*})\cong\mathcal{F}(C)\otimes_{ k}\mathcal{F}(C^{*})\cong\mathcal{F}(C)\otimes_{k}\mathcal{F}(C)^{*}.\]
Thus \(\mathcal{F}(C)\) is endotrivial. \(\mathcal{F}\) preserves homotopy equivalences, so the map \(\mathcal{F}:\mathcal{E}_{k}(G)\to\mathcal{E}_{k}(H)\) is well-defined, and it is a group homomorphism since \(\mathcal{F}\) commutes with tensor products by assumption. This holds for the Brauer construction as it satisfies all of the assumed properties.
The following theorem is an equivalent formulation of endotriviality for \(p\)-permutation complexes. It was first communicated to the author by Robert Boltje.
**Theorem 3.4**.: _Let \(C\in Ch^{b}(_{kG}\textbf{triv})\). Then \(C\) is endotrivial if and only if for all \(p\)-subgroups \(P\leq G\), \(C(P)\) has nonzero homology concentrated in one degree, with the nontrivial homology having \(k\)-dimension 1._
Proof.: First, suppose \(C\) is endotrivial, then \(C(P)\) is endotrivial as well. Since \(H_{i}(C)^{*}\cong H_{-i}(C^{*})\) for all \(i\in\mathbb{Z}\), an easy argument using the Kunneth formula shows that \(C(P)\) has nonzero homology concentrated in one degree, with the nonzero homology one-dimensional.
Conversely, suppose for all \(p\)-subgroups \(P\leq G\), \(C(P)\) has homology concentrated in one degree, with the homology having \(k\)-dimension 1. We show \(C\otimes_{k}C^{*}\simeq k\). By the Kunneth formula, we have that \(C\otimes_{k}C^{*}\cong D\), where \(D\) is a chain complex satisfying \(H_{0}(D)\cong k\). Label the differentials of \(D\) by \(d_{n}:D_{n}\to D_{n-1}\). If \(C\) has length \(n\), then \(D\) has length \(2n-1\), with highest nonzero term \(D_{n-1}\) and lowest nonzero term \(D_{-(n-1)}\). Moreover, \(D(P)\) has nonzero homology in degree zero for any \(p\)-subgroup \(P\in s_{p}(G)\), since if \(H_{i}(C(P))\neq 0\), then \(H_{-i}(C(P)^{*})\neq 0\), so by the Kunneth formula and multiplicativity of the Brauer construction, \(H_{0}((C\otimes_{k}C^{*})(P))\cong k\) and \(H_{i}((C\otimes_{k}C^{*})(P))=0\) for \(i\neq 0\).
\(d_{n-1}(P)\) and \(d_{-n+2}(P)\) are injective and surjective respectively over all \(p\)-subgroups \(P\in s_{p}(G)\), and therefore by Theorem 2.9 are split injective and split surjective, respectively. Thus, we have an isomorphism \(D\cong D^{\prime}\oplus(D_{n-1}\xrightarrow{\cong}D_{n-1})\oplus(D_{-n+1}\xrightarrow{\cong}D_{-n+1})\), where \(D^{\prime}\) is a chain complex of length \(2n-3\) for which \(D^{\prime}(P)\) has nonzero homology only in degree zero for all \(P\in s_{p}(G)\). Moreover, the split-off complexes are contractible, so \(D\simeq D^{\prime}\). Inductively applying the previous argument yields the homotopy equivalence \(D\simeq k\), as desired.
If \(C,D\in Ch^{b}(_{kG}\textbf{triv})\) satisfy \(C\simeq D\), then \(C(P)\simeq D(P)\) for all \(p\)-subgroups \(P\leq G\). Moreover, since homology is preserved under homotopy equivalence, we may speak of "the homology" of a homotopy equivalence class of chain complexes.
**Definition 3.5**.: For a class of endotrivial complexes \([C]\), denote by \(h(C)\) the degree \(i\in\mathbb{Z}\) of a representative \(C\) in which \(H_{i}(C)\neq 0\), and abusing notation, set \(H(C):=H_{h(C)}(C)\). We will call the values of \(h(C(P))\) the _h-marks of \(C\) at \(P\)_, and refer to \(H(C(P))\) as _the homology of \(C\) at \(P\)_.
We have a map:
\[\Xi:\mathcal{E}_{k}(G) \to\prod_{P\in s_{p}(G)}\mathbb{Z}\times\operatorname{Hom}(N_{G}(P)/P,k^{\times})\] \[[C] \mapsto\big{(}h(C(P)),H(C(P))\big{)}_{P\in s_{p}(G)}\]
It is straightforward to verify that this map is a well-defined group homomorphism via the Kunneth formula and commutativity of the Brauer construction with tensor products for \(p\)-permutation modules. Projection onto the first term gives another group homomorphism, the _h-mark homomorphism,_
\[\epsilon:\mathcal{E}_{k}(G) \to\left(\prod_{P\in s_{p}(G)}\mathbb{Z}\right)^{G}\] \[[C] \mapsto\left(h(C(P))\right)_{P\in s_{p}(G)}\]
Here, the \(G\)-action is induced by conjugation on \(s_{p}(G)\). The \(G\)-equivariance of the image of \(\epsilon\) arises from Proposition 2.7 (c). Note the connection to the mark homomorphism on the Burnside ring, which is also \(G\)-equivariant under the same action.
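To make the h-marks concrete, consider (anticipating Example 5.3) \(G=C_{p^{n}}\) with \(p\) odd or \(n>1\), and let \(C\) be the complex \(kG\to kG\to k\) obtained by truncating the period-two projective resolution of \(k\). Since \(C(P)\simeq k[0]\) for every nontrivial subgroup \(P\leq G\) and \(h(C(1))=2\), the h-mark tuple is \(\epsilon([C])=(2,0,\ldots,0)\), indexed by the chain of subgroups \(1<C_{p}<\cdots<C_{p^{n}}\).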
**Theorem 3.6**.: \(\ker\epsilon=\{k_{\omega}[0]\mid\omega\in\operatorname{Hom}(G,k^{\times})\}\)_. In other words, up to homotopy, the only endotrivial complexes \(C\) such that \(C(P)\) has homology in degree 0 for all \(p\)-subgroups \(P\in s_{p}(G)\) are the 1-dimensional simple representations of \(kG\)._
Proof.: First note all \(k_{\omega}\) are \(p\)-permutation modules, since there is only one 1-dimensional \(kP\)-module for any \(p\)-group \(P\), the trivial \(kP\)-module. Suppose for contradiction there is another class of endotrivial complexes \([C]\) for which \(\epsilon([C])=(0)_{P\in s_{p}(G)}\). Choose \(C\) to be a representative such that \(C\) is minimal with respect to length. \(C\) must have at least length two, since otherwise all terms would be concentrated in one degree, resulting in one of the classes already described. Let \(i\in\mathbb{Z}\) be the greatest integer such that \(C_{i}\neq 0\) and let \(j\in\mathbb{Z}\) be the least integer such that \(C_{j}\neq 0\) (so \(C\) has length \(i-j+1\)). Since \(h(C)=0\), \(H_{0}(C)\neq 0\), so \(C_{0}\neq 0\). Since the length of \(C\) is at least 2, either \(i>0\) or \(j<0\). We assume \(i>0\); the other case follows dually.
Since \(H_{i}(C)=0\) (as \(h(C)=0\)), \(d_{i}:C_{i}\to C_{i-1}\) is injective. Now, for all \(p\)-subgroups \(P\in s_{p}(G)\), the largest integer \(i^{\prime}\) for which \(C_{i^{\prime}}(P)\neq 0\) satisfies \(i^{\prime}\leq i\). Moreover, since \(h(C(P))=0\), \(H_{i}(C(P))=0\) as well, so \(d_{i}(P)\) is injective as well (possibly the zero map). Since \(P\) is an arbitrary \(p\)-subgroup, Theorem 2.9 implies \(d_{i}\) is split injective. It follows that \(C\) is homotopy equivalent to a chain complex \(C^{\prime}\) of strictly smaller length. This contradicts minimality of the length of \(C\), so we are done.
**Corollary 3.7**.: \(\Xi\) _is injective. Moreover, given integers \((x_{P})_{P\in s_{p}(G)}\) and a linear character \(\rho\in\operatorname{Hom}(G,k^{\times})\), there is at most 1 element \((a_{P},\rho_{P})_{P\in s_{p}(G)}\in\operatorname{im}\Xi\) for which \(a_{P}=x_{P}\) for all \(P\in s_{p}(G)\) and \(\rho_{1}=\rho\)._
_In particular, \(\mathcal{E}_{k}(G)\) is finitely generated, with \(\mathbb{Z}\)-rank bounded by the number of conjugacy classes of \(p\)-subgroups of \(G\) and torsion subgroup isomorphic to \(\operatorname{Hom}(G,k^{\times})\)._
Proof.: This follows immediately from the previous theorem.
_Remark 3.8_.: We will see in the sequel that \(\epsilon\) is rarely a full-rank homomorphism.
By the previous propositions, we can decide whether two endotrivial complexes are homotopy equivalent up to a twist by a 1-dimensional representation by comparing the degrees of the homology of the complexes obtained via the Brauer construction at every \(p\)-subgroup. In particular, if \(G\) has no nontrivial cyclic quotient of \(p^{\prime}\)-order, then \(\epsilon\) is injective, since the only one-dimensional \(kG\)-module is \(k\) itself.
In fact, \(\epsilon\) yields a split exact sequence:
\[0\to\operatorname{Hom}(G,k^{\times})\hookrightarrow\mathcal{E}_{k}(G)\xrightarrow {\epsilon}\operatorname{im}\epsilon\to 0\]
with a retraction of the inclusion given by
\[r:\mathcal{E}_{k}(G) \to\operatorname{Hom}(G,k^{\times})\] \[[C] \mapsto H(C)\]
Therefore, \(\mathcal{E}_{k}(G)\cong\operatorname{Hom}(G,k^{\times})\times\operatorname{im}\epsilon\). With this characterization, we view h-marks as analogues of the usual marks associated to elements of the Burnside ring.
_Remark 3.9_.: Endotrivial complexes, after a possible shift in degree, are examples of _endosplit \(p\)-permutation resolutions,_ an object first defined by Rickard in [20, Section 7] (note in [20], they are called endosplit permutation resolutions). A chain complex \(C\in Ch^{b}(_{kG}\mathbf{mod})\) is an endosplit \(p\)-permutation resolution if \(C\) has homology concentrated in degree \(0\), and \(C^{*}\otimes_{k}C\) is split. It is easy to see that a chain complex is endotrivial if and only if it is (up to a shift) an endosplit \(p\)-permutation resolution of a \(kG\)-module with \(k\)-dimension \(1\). This observation was first communicated to the author by Markus Linckelmann.
As a result, we obtain a lifting theorem to \(\mathcal{O}\).
**Theorem 3.10**.: _[_20_, 7.1]_ _Let \(G\) be a finite group and let \(M\) be a \(kG\)-module that has an endosplit \(p\)-permutation resolution \(X_{M}\). Then \(M\) can be lifted to an \(\mathcal{O}G\)-module that has an endosplit \(p\)-permutation resolution over \(\mathcal{O}\). Moreover, the endosplit \(p\)-permutation resolution over \(\mathcal{O}\) is unique._
**Corollary 3.11**.: _Let \(C\) be an endotrivial \(kG\)-complex. There is a unique (up to isomorphism) complex \(\widehat{C}\) of \(p\)-permutation \(\mathcal{O}G\)-modules satisfying \(\widehat{C}^{*}\otimes_{\mathcal{O}}\widehat{C}\simeq\mathcal{O}\) and \(k\otimes_{\mathcal{O}}\widehat{C}\cong C\)._
Proof.: This follows immediately from the previous theorem after a possible shift in degree of \(C\).
## 4 Basic properties of endotrivial complexes
The following proposition ensures that \(\mathcal{E}_{k}(G)\) is compatible with \(O(T(kG))\).
**Proposition 4.1**.: _The following map is a well-defined group homomorphism._
\[\Lambda:\mathcal{E}_{k}(G) \to O(T(kG))\] \[[C] \mapsto\sum_{i\in\mathbb{Z}}(-1)^{i}[C_{i}]\]
Proof.: To show well-definedness, suppose \(C\simeq C^{\prime}\) are endotrivial complexes. Then there exist contractible complexes \(D,D^{\prime}\) such that \(C\oplus D\cong C^{\prime}\oplus D^{\prime}\), thus \(\Lambda(C)+\Lambda(D)=\Lambda(C^{\prime})+\Lambda(D^{\prime})\in T(kG)\). Since every bounded contractible complex can be expressed as a finite direct sum of complexes of the form \(0\to M\to M\to 0\), \(\Lambda(D)=\Lambda(D^{\prime})=0\), so \(\Lambda(C)=\Lambda(C^{\prime})\). It is an easy verification that \(\Lambda\) commutes with taking duals and tensor products. It follows that \(\Lambda\) is a group homomorphism and \(\Lambda(C\otimes_{k}C^{*})=\Lambda(C)\otimes_{k}\Lambda(C)^{*}=[k]\). Thus \(\Lambda(C)\in O(T(kG))\).
\(\Lambda(C)\) is referred to as the _Lefschetz invariant_ of \(C\). The image of \(\Lambda\) will be of considerable interest in the sequel.
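For instance, for the complexes of Example 5.3 below: if \(G=C_{2}\) and \(C=(kC_{2}\to k)\), then \(\Lambda(C)=[k]-[kC_{2}]\), a nontrivial orthogonal unit of \(T(kC_{2})\cong B(C_{2})\) (its marks at \(1\) and \(C_{2}\) are \(-1\) and \(1\)); whereas if \(G=C_{p^{n}}\) and \(C=(kG\to kG\to k)\), then \(\Lambda(C)=[k]-[kG]+[kG]=[k]\), so \(\Lambda\) is far from injective in general.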
### Relationships between h-marks and functorial constructions
_Remark 4.2_.: Any group isomorphism \(f:G^{\prime}\xrightarrow{\sim}G\) induces an algebra isomorphism \(kG^{\prime}\xrightarrow{\sim}kG\), hence a functor \(\operatorname{iso}_{f}:{}_{kG}\mathbf{mod}\to{}_{kG^{\prime}}\mathbf{mod}\) which restricts to \(\operatorname{iso}_{f}:{}_{kG}\mathbf{triv}\to{}_{kG^{\prime}}\mathbf{triv}\) and induces a group isomorphism \(\operatorname{iso}_{f}:\mathcal{E}_{k}(G)\xrightarrow{\sim}\mathcal{E}_{k}(G^{\prime})\). Note \(\operatorname{iso}_{f}:\mathcal{E}_{k}(G)\xrightarrow{\sim}\mathcal{E}_{k}(G)\) is trivial if \(G=G^{\prime}\) and \(f\) is an inner automorphism. Define the map \(F\) by
\[F:\prod_{P\in s_{p}(G)}\mathbb{Z}\times\operatorname{Hom}(N_{G}( P)/P,k^{\times}) \to\prod_{P^{\prime}\in s_{p}(G^{\prime})}\mathbb{Z}\times \operatorname{Hom}(N_{G^{\prime}}(P^{\prime})/P^{\prime},k^{\times})\] \[(x_{P},\rho_{P})_{P\in s_{p}(G)} \mapsto(x_{f(P^{\prime})},\rho_{f(P^{\prime})})_{P^{\prime}\in s _{p}(G^{\prime})}\]
Since for any \(C\in Ch^{b}(_{kG}\mathbf{mod})\) and \(P\in s_{p}(G^{\prime})\) we have \(\mathrm{iso}_{\bar{f}}\big{(}(\mathrm{iso}_{f}\,C)(P)\big{)}\cong C(f(P))\), where \(\bar{f}:N_{G}(f(P))/f(P)\xrightarrow{\sim}N_{G^{\prime}}(P)/P\) is induced by \(f^{-1}\), we obtain \(\Xi\circ\mathrm{iso}_{f}=F\circ\Xi\).
Restriction, inflation, and the Brauer construction all preserve endotriviality as well. We next describe how the embedding \(\Xi\) behaves with respect to endotriviality-preserving operations.
**Proposition 4.3**.:
1. _Let_ \(H\leq G\)_. The following diagram commutes, where_ \(\pi\) _is the projection map which applies restriction from_ \(N_{G}(P)\) _to_ \(N_{H}(P)\) _in the obvious way:_ \[\begin{CD}\mathcal{E}_{k}(G)@>{\Xi}>{}>\prod_{P\in s_{p}(G)}\mathbb{Z}\times \mathrm{Hom}(N_{G}(P)/P,k^{\times})\\ @V{}V{\operatorname{\mathsf{res}}_{H}^{G}}V@V{}V{\pi}V\\ \mathcal{E}_{k}(H)@>{\Xi}>{}>\prod_{P\in s_{p}(H)}\mathbb{Z}\times\mathrm{Hom}( N_{H}(P)/P,k^{\times})\end{CD}\]
2. _Let_ \(Q\in s_{p}(G)\) _be a_ \(p\)_-subgroup of_ \(G\)_, and regard_ \(-(Q)\) _as a functor_ \({}_{kG}\mathbf{mod}\to{}_{kN_{G}(Q)}\mathbf{mod}\) _for ease of notation. Define the map_ \(B\) _by_ \[B:\prod_{P\in s_{p}(G)}\mathbb{Z}\times\mathrm{Hom}(N_{G}(P)/P,k^{ \times}) \to\prod_{P\in s_{p}(N_{G}(Q))}\mathbb{Z}\times\mathrm{Hom}(N_{N_{G} (Q)}(P)/(P),k^{\times})\] \[(x_{P},\rho_{P})_{P\in s_{p}(G)} \mapsto\left(x_{P},\rho_{P}|_{N_{N_{G}(Q)}(P)}\right)_{P\in s _{p}(N_{G}(Q))}\] _Then the following diagram commutes:_ \[\begin{CD}\mathcal{E}_{k}(G)@>{\Xi}>{}>\prod_{P\in s_{p}(G)}\mathbb{Z}\times \mathrm{Hom}(N_{G}(P)/P,k^{\times})\\ @V{}V{-(Q)}V@V{}V{B}V\\ \mathcal{E}_{k}(N_{G}(Q))@>{\Xi}>{}>\prod_{P\in s_{p}(N_{G}(Q))}\mathbb{Z}\times \mathrm{Hom}(N_{N_{G}(Q)}(P)/P,k^{\times})\end{CD}\]
3. _Let_ \(N\trianglelefteq G\)_. Define the map_ \(I\) _by_ \[I:\prod_{P/N\in s_{p}(G/N)}\mathbb{Z}\times\mathrm{Hom}(N_{G/N}(P/N)/(P/N),k^{\times}) \to\prod_{P\in s_{p}(G)}\mathbb{Z}\times\mathrm{Hom}(N_{G}(P)/P,k^{\times})\] \[(x_{P/N},\rho_{P/N})_{P/N\in s_{p}(G/N)} \mapsto\left(x_{PN/N},\inf_{N_{G/N}(PN/N)/(PN/N)}^{N_{G}(P)/P}\rho_{PN/N}\right)_{P\in s_{p}(G)}\] _Then the following diagram commutes:_ \[\begin{CD}\mathcal{E}_{k}(G/N)@>{\Xi}>{}>\prod_{P/N\in s_{p}(G/N)}\mathbb{Z}\times\mathrm{Hom}(N_{G/N}(P/N)/(P/N),k^{\times})\\ @V{}V{\operatorname{\mathsf{inf}}_{G/N}^{G}}V@V{}V{I}V\\ \mathcal{E}_{k}(G)@>{\Xi}>{}>\prod_{P\in s_{p}(G)}\mathbb{Z}\times\mathrm{Hom}(N_{G}(P)/P,k^{\times})\end{CD}\]
_In particular, (a) and (c), along with the previous remark, imply that if \(f:G^{\prime}\to G\) is any group homomorphism, then the restriction functor along \(f\) preserves endotriviality and induces a group homomorphism \(\text{res}_{f}:\mathcal{E}_{k}(G)\to\mathcal{E}_{k}(G^{\prime})\)._
Proof.: (a) follows from Proposition 2.7 (b). Let \(Q\leq P\leq N_{G}(Q)\) (so \(Q\trianglelefteq P\leq G\)), then (b) follows by Proposition 2.8 (d). Finally, (c) follows by observing that we have a natural isomorphism of \(k[N_{G}(P)]\)-modules \((\inf_{G/N}^{G}M)(P)\cong\inf_{N_{G/N}(PN/N)}^{N_{G}(P)}(M(PN/N))\) for any \(P\in s_{p}(G)\), where in this case, \(-(P)\) is regarded as a functor \({}_{kG}\mathbf{triv}\to{}_{kN_{G}(P)}\mathbf{triv}\).
_Remark 4.4_.: Understanding the interplay between \(\Lambda:\mathcal{E}_{k}(G)\to O(T(kG))\), \(\Xi\), and \(\epsilon\) will be important in determining the image of \(\Lambda\). Given some endotrivial complex \(C\), it is immediate which orthogonal unit it corresponds to, namely \(\Lambda(C)\). However, given only the h-marks \(\epsilon(C)\), the corresponding orthogonal unit may not be clear.
Boltje and Carman in [4] determined a decomposition of \(O(T(kG))\) which we denote by \(\kappa\),
\[\kappa:O(T(kG))\xrightarrow{\sim}B(\mathcal{F})^{\times}\times\text{Hom}(G,k ^{\times})\times\left(\prod_{P\in s_{p}(G)}\text{Hom}(N_{G}(P)/PC_{G}(P),k^{ \times})\right)^{\prime}.\]
Here \(B(\mathcal{F})^{\times}\leq B(S)^{\times}\) is the unit group of the Burnside ring of the fusion system \(\mathcal{F}_{S}(G)\), for \(S\in\text{Syl}_{p}(G)\). We refer the reader to [2] for more details on the fusion system of a finite group, and to [3] for details on \(B(\mathcal{F})\).
The third constituent is additionally subject to the coherence condition
\[\chi_{P}(xPC_{G}(P))=\chi_{P\langle x_{p}\rangle}\big{(}xP\langle x_{p}\rangle C _{G}(P\langle x_{p}\rangle)\big{)}\]
for all \(P\in s_{p}(G)\) and \(x\in G\). Here \(x_{p}\) denotes the \(p\)-part of \(x\); we have a unique decomposition \(x=x_{p}x_{p^{\prime}}=x_{p^{\prime}}x_{p}\), where \(x_{p}\) has \(p\)-power order, and \(x_{p^{\prime}}\) has \(p^{\prime}\)-order. We equivalently consider the homomorphisms in the third constituent as elements of \(\text{Hom}(N_{G}(P),k^{\times})\) whose kernel contains \(PC_{G}(P)\). We denote this component by \(\mathcal{L}_{G}\), for "local" homology. The middle constituent on the other hand corresponds to the "global" homology.
We describe \(\kappa\) in greater detail, following [4]. First, [4, Theorem A] states that there is an injective homomorphism
\[\beta_{G}:T(kG)\to\left(\prod_{P\in s_{p}(G)}R(K[N_{G}(P)/P])\right)^{G},\]
whose image consists of character tuples satisfying the coherence condition from before: for each \(P\in s_{p}(G)\) and \(x\in N_{G}(P)\), one has \(\chi_{P}(xP)=\chi_{P\langle x_{p}\rangle}(xP\langle x_{p}\rangle)\). In particular,
\[(\beta_{G}(x))_{P}=K\otimes_{\mathcal{O}}\widehat{x(P)}\in R(K[N_{G}(P)/P]),\]
where \(\widehat{(-)}\) denotes the isomorphism \(T(kG)\cong T(\mathcal{O}G)\) induced by taking the unique lift of a \(p\)-permutation \(kG\)-module to a \(p\)-permutation \(\mathcal{O}G\)-module. We denote the subgroup of _coherent_ tuples satisfying this condition by
\[\left(\prod_{P\in s_{p}(G)}R(K[N_{G}(P)/P])\right)^{\prime}.\]
Since the unit group of \(R_{K}(G)\) is generated by \(1\)-dimensional characters and their additive inverses, it follows that for every orthogonal unit \(u\in O(T(kG))\), there exist homomorphisms \(\rho_{P}\in\operatorname{Hom}(N_{G}(P)/P,K^{\times})\) and signs \(\epsilon_{P}\in\{\pm 1\}\) such that
\[\beta_{G}(u)=(\epsilon_{P}\cdot\rho_{P})_{P\in s_{p}(G)}.\]
However, the coherence condition implies that if \(x\in G\) is a \(p\)-element, \(\rho_{P}(x)=1\), so each \(\rho_{P}\) descends to a \(1\)-dimensional Brauer character, hence a homomorphism \(\overline{\rho_{P}}\in\operatorname{Hom}(N_{G}(P)/P,k^{\times})\). In this sense, \(\beta_{G}\) can be thought of as the trivial source ring analogue of \(\Xi\). One may explicitly compute \(\epsilon_{P}\cdot\rho_{P}\) by taking the image of \(u(P)\in O(T(k[N_{G}(P)/P]))\) in \(R_{k}(N_{G}(P)/P)\) to obtain the degree \(1\) Brauer character.
Now, for \(S\in\operatorname{Syl}_{p}(G)\), there exists a sequence of homomorphisms
\[B(\mathcal{F})^{\times}\hookrightarrow B(G)^{\times}\xrightarrow{k[-]}O(T(kG ))\xrightarrow{\operatorname{res}_{S}^{G}}O(T(kS))=T(kS)^{\times}\xrightarrow {\sim}B(S)^{\times}\]
whose composition is the identity, giving a decomposition
\[O(T(kG))=B(\mathcal{F})^{\times}\times\ker\big{(}\operatorname{res}_{S}^{G}:O(T(kG))\to O(T(kS))\big{)}.\]
It follows that the kernel is given precisely by units \(u\in O(T(kG))\) for which \(\beta_{G}(u)=(\rho_{P})_{P\in s_{p}(G)}\) for group homomorphisms \(\rho_{P}:N_{G}(P)/P\to k^{\times}\). This implies an isomorphism
\[O(T(kG))\cong B(\mathcal{F})^{\times}\times\left(\prod_{P\in s_{p}(G)} \operatorname{Hom}(N_{G}(P)/P,k^{\times})\right)^{\prime}.\]
Now, we obtain a sequence of homomorphisms whose composition is the identity,
\[\operatorname{Hom}(G,k^{\times})\hookrightarrow\left(\prod_{P\in s_{p}(G)} \operatorname{Hom}(N_{G}(P)/P,k^{\times})\right)^{\prime}\xrightarrow{(\rho_{ P})\mapsto\rho_{1}}\operatorname{Hom}(G,k^{\times}).\]
Here the latter map has kernel given by coherent tuples \((\rho_{P})_{P\in s_{p}(G)}\) for which \(PC_{G}(P)\leq\ker(\rho_{P})\), which is precisely \(\mathcal{L}_{G}\). Therefore, we obtain a second isomorphism
\[\left(\prod_{P\in s_{p}(G)}\operatorname{Hom}(N_{G}(P)/P,k^{\times})\right)^{\prime}\cong\operatorname{Hom}(G,k^{\times})\times\left(\prod_{P\in s_{p}(G)}\operatorname{Hom}(N_{G}(P)/PC_{G}(P),k^{\times})\right)^{\prime}=\operatorname{Hom}(G,k^{\times})\times\mathcal{L}_{G}.\]
This data completely describes \(\kappa\). All claims described here are presented in greater detail and proven in [4].
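As a quick consistency check, suppose \(G\) is a \(p\)-group. Then \(\operatorname{Hom}(G,k^{\times})\) and each \(\operatorname{Hom}(N_{G}(P)/PC_{G}(P),k^{\times})\) are trivial, since these quotients are \(p\)-groups and \(k^{\times}\) has no \(p\)-torsion, and \(B(\mathcal{F}_{G}(G))=B(G)\) since all fusion is inner; the decomposition then reduces to \(O(T(kG))\cong B(G)^{\times}\), consistent with the identification of the trivial source ring of a \(p\)-group with its Burnside ring recalled earlier.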
The next proposition follows easily from the previous discussion and the definition of \(\Xi\). For the next two propositions, we set \(\pi\) to be the composition of surjective group homomorphisms \(\pi:\mathbb{Z}\to\mathbb{Z}/2\xrightarrow{\sim}\{\pm 1\}\).
**Proposition 4.5**.: _The following diagram commutes:_
Remark 2.21 described the inverse of the \(\mathbb{Q}\)-linearized mark homomorphism \(\mathbb{Q}\mathfrak{m}:\mathbb{Q}B(G)\to\left(\prod_{H\leq G}\mathbb{Q}\right)^{G}\). Given the h-marks and the nonzero homology of an endotrivial complex \(C\), we can read off parts of the corresponding orthogonal unit according to the decomposition described in Remark 4.4 as follows:
**Proposition 4.6**.: _The following diagram commutes:_
_Here, for \(S\) the Sylow \(p\)-subgroup of \(G\) associated to \(\mathcal{F}\), \(\phi:\left(\prod_{P\in s_{p}(G)}\mathbb{Z}\right)^{G}\to\mathbb{Q}B(\mathcal{F })^{\times}\) is:_
\[\left(\prod_{P\in s_{p}(G)}\mathbb{Z}\right)^{G}\xrightarrow{\prod\pi}\left( \prod_{P\leq S}\{\pm 1\}\right)^{\mathcal{F}}\xrightarrow{\mathbb{Q}\mathfrak{m}^{-1}} \mathbb{Q}B(\mathcal{F})\]
_In particular, the images of the two composites of maps in the original commutative diagram lie within \(B(\mathcal{F})^{\times}\)._
Proof.: We have a diagram as follows:
By construction of \(\phi\), the right-most triangle commutes, and the top triangle commutes by properties of \(\epsilon\) and restriction. Since \(\mathbb{Q}\mathfrak{m}\) is an isomorphism, it suffices to show the trapezoid on the bottom row commutes, however this follows from the previous proposition.
### Endotrivial complexes induce splendid autoequivalences and lift to \(\mathcal{O}\) uniquely
The goal of this section is to prove the following:
**Theorem 4.7**.: _If \(C\) is an endotrivial complex, then \(\operatorname{ind}_{\Delta G}^{G\times G}C\) is a splendid Rickard autoequivalence of \(kG\), with \(\Lambda(\operatorname{ind}_{\Delta G}^{G\times G}C)=\operatorname{ind}_{\Delta G}^{G\times G}(\Lambda(C))\)._
We first review some necessary definitions. Here, we relax the definition of splendid Rickard complexes first defined by Rickard in [20], by allowing algebras to be direct summands of group algebras rather than block algebras.
**Definition 4.8**.: Let \(A\) and \(B\) be direct summands of group algebras \(kG\) and \(kH\) respectively. We say a chain complex \(\Gamma\) of \((A,B)\)-bimodules is a _splendid Rickard complex between \(A\) and \(B\)_ if the following hold:
1. \(\Gamma\otimes_{B}\Gamma^{*}\simeq A\).
2. \(\Gamma^{*}\otimes_{A}\Gamma\simeq B\).
3. Each component of \(\Gamma\) is a \(p\)-permutation module when viewed as an \((A\otimes_{k}B^{op})\)-module, and each indecomposable constituent has a _twisted diagonal_ vertex, i.e. a subgroup of \(G\times H\) of the form \(\Delta(P,\phi,Q)=\{(\phi(g),g)\mid g\in Q\}\) for some \(P\in s_{p}(G)\), \(Q\in s_{p}(H)\), and \(\phi:Q\to P\) an isomorphism.
We say \(\Gamma\) induces a _splendid (derived) equivalence_ \(D^{b}(_{A}\mathbf{mod})\cong D^{b}(_{B}\mathbf{mod})\), given by the functors \(\Gamma\otimes_{B}-\) and \(\Gamma^{*}\otimes_{A}-\). In fact, these functors induce equivalences \(K^{b}(_{A}\mathbf{mod})\cong K^{b}(_{B}\mathbf{mod})\) as well.
It is easy to see that the set of all splendid Rickard complexes for \(A\) with itself modulo homotopy equivalence form a group with multiplication induced by \(\otimes_{A}\). We denote this group by \(\mathcal{S}_{k}(A)\).
\(p\)-permutation equivalences, first defined in [6] by Boltje & Xu, can be viewed as an analogue of splendid Rickard complexes on a representation ring level. We state the more general definition given in [5], which no longer assumes shared Sylow subgroups.
**Definition 4.9**.: Let \(A\) and \(B\) be direct summands of group algebras \(kG\) and \(kH\) respectively. Write \(T(A,B)\) for the Grothendieck group of \((A,B)\)-bimodules which are \(p\)-permutation modules when viewed as \((A\otimes_{k}B^{op})\)-modules. If \(C\) is a direct summand of the group algebra \(kK\), observe that the tensor product \(\otimes_{B}\) induces a bilinear map \(\cdot_{B}:T(A,B)\times T(B,C)\to T(A,C)\). Denote by \(T^{\Delta}(A,B)\) the subgroup of \(T(A,B)\) generated by bimodules with twisted diagonal vertices.
We say \(\gamma\in T^{\Delta}(A,B)\) is a \(p\)_-permutation equivalence_ if
\[\gamma\cdot_{B}\gamma^{*}=[A]\in T^{\Delta}(A,A)\text{ and }\gamma^{*}\cdot_{A}\gamma=[B]\in T^{\Delta}(B,B).\]
In this case, \(\gamma\) induces a group isomorphism \(T(A)\cong T(B)\) via the homomorphisms \(\gamma\cdot_{B}-\) and \(\gamma^{*}\cdot_{A}-\). Denote the set of \(p\)-permutation equivalences between \(A\) and \(B\) by \(O(T^{\Delta}(A,B))\). If \(A=B\), it is easy to see that \(O(T^{\Delta}(A,A))\) forms a group with multiplication induced by \(\cdot_{A}\).
The composite of the functors \(\operatorname{iso}_{G}^{\Delta G}:{}_{kG}\mathbf{mod}\to{}_{k[\Delta G]} \mathbf{mod}\), where \(\Delta G\) is the diagonal subgroup \(\Delta G=\{(g,g):g\in G\}\), induction \(\operatorname{ind}_{\Delta G}^{G\times G}:{}_{k[\Delta G]}\mathbf{mod}\to{}_{k[ G\times G]}\mathbf{mod}\), and the equivalence of categories given by the identification \({}_{k[G\times G]}\mathbf{mod}\cong{}_{kG}\mathbf{mod}{}_{kG}\) via the group action \(g\cdot m\cdot h:=(g,h^{-1})\cdot m\) will be abusively denoted \(\operatorname{ind}_{\Delta G}^{G\times G}:{}_{kG}\mathbf{mod}\to{}_{kG} \mathbf{mod}{}_{kG}\) when the context is clear. We will show that this functor transfers the necessary properties of endotriviality to splendor. The next two propositions are elementary. Note that a natural isomorphism of functors on preadditive categories extends to a natural isomorphism of functors on their chain complex categories as well.
**Proposition 4.10**.: _Let \(H\leq G\). The contravariant composite functors \(\operatorname{ind}_{H}^{G}\circ(-)^{*}\) and \((-)^{*}\circ\operatorname{ind}_{H}^{G}:{}_{kH}\mathbf{mod}\to{}_{kG}\mathbf{mod}\) are naturally isomorphic._
Proof.: This is a well-known property, so we sketch the proof. Let \(M\) be a \(kH\)-module. By the Yoneda embedding, we have an isomorphism \(\operatorname{ind}_{H}^{G}(M^{*})\cong(\operatorname{ind}_{H}^{G}M)^{*}\) natural in \(M\) if and only if the functors \(\operatorname{Hom}_{kG}(-,\operatorname{ind}_{H}^{G}(M^{*}))\) and \(\operatorname{Hom}_{kG}(-,(\operatorname{ind}_{H}^{G}M)^{*})\) are naturally isomorphic, with the isomorphism natural in \(M\). Let \(N\) be any \(kG\)-module, and denote the trivial \(kG\)-module by
\(k_{G}\) for clarity. Then, using the tensor-hom adjunction, the Frobenius property \(\operatorname{ind}_{H}^{G}(\operatorname{res}_{H}^{G}V\otimes_{k}W)\cong V\otimes _{k}\operatorname{ind}_{H}^{G}W\), and the biadjunction between induction and restriction (which are all natural in both arguments) yields the following sequence of natural isomorphisms.
\[\operatorname{Hom}_{kG}(N,\operatorname{ind}_{H}^{G}(M^{*})) \cong\operatorname{Hom}_{kH}(\operatorname{res}_{H}^{G}N,M^{*})\] \[\cong\operatorname{Hom}_{kH}(M\otimes_{k}(\operatorname{res}_{H} ^{G}N),\operatorname{res}_{H}^{G}k_{G})\] \[\cong\operatorname{Hom}_{kG}(\operatorname{ind}_{H}^{G}(M\otimes _{k}\operatorname{res}_{H}^{G}N),k_{G})\] \[\cong\operatorname{Hom}_{kG}((\operatorname{ind}_{H}^{G}M)\otimes _{k}N,k_{G})\] \[\cong\operatorname{Hom}_{kG}(N,(\operatorname{ind}_{H}^{G}M)^{*})\]
**Proposition 4.11**.: _The bifunctors \(\operatorname{ind}_{\Delta G}^{G\times G}(-)\otimes_{kG}\operatorname{ind}_{ \Delta G}^{G\times G}(-)\) and \(\operatorname{ind}_{\Delta G}^{G\times G}(-\otimes_{k}-):{}_{kG}\mathbf{mod} \times{}_{kG}\mathbf{mod}\to{}_{kG}\mathbf{mod}_{kG}\) are naturally isomorphic in both arguments._
Proof.: This is well-known, see [16] Corollary 2.4.13.
Since the two isomorphisms are natural, they extend to isomorphisms of chain complexes as well. However, there is a technicality in showing the desired isomorphism
\[\operatorname{ind}_{\Delta G}^{G\times G}(C\otimes_{k}C^{*})\cong \operatorname{ind}_{\Delta G}^{G\times G}C\otimes_{kG}\big{(}\operatorname{ ind}_{\Delta G}^{G\times G}C\big{)}^{*},\]
Here, the left \((-)^{*}\) functor is the left \(kG\)-module dual, while the right \((-)^{*}\) functor is the \((kG,kG)\)-bimodule dual. These functors do not automatically commute with the identification \({}_{k[G\times H]}\mathbf{mod}\xrightarrow{\sim}{}_{kG}\mathbf{mod}_{kH}\). For example, if \(M\in{}_{k[G\times H]}\mathbf{mod}\), then first taking its dual, then identifying the resulting module as a bimodule results in a \((kG,kH)\)-bimodule. However, first identifying \(M\) as a \((kG,kH)\)-bimodule then taking a dual results in a \((kH,kG)\)-bimodule. This disparity can be resolved by sending the bimodule to its opposite bimodule.
**Proposition 4.12**.: _The following diagram commutes up to natural isomorphism, where the vertical functors are bimodule identification and \(\operatorname{ind}_{\Delta G}^{G\times G}\) is usual induction:_
Proof.: We construct an isomorphism \(\phi:(k[G\times G]\otimes_{k\Delta G}M)_{1}^{*}\to(k[G\times G]\otimes_{k\Delta G}M)_{2}^{*}\), where \((k[G\times G]\otimes_{k\Delta G}M)_{1}^{*}\) corresponds to the top right composite, that is, for \(a,b\in G\) it has actions defined by:
\[a\cdot f\big{(}(g_{1},g_{2})\otimes m\big{)}\cdot b =(a,b^{-1})\cdot f\big{(}(g_{1},g_{2})\otimes m\big{)}\] \[=f\big{(}(a^{-1},b)(g_{1},g_{2})\otimes m\big{)}\] \[=f\big{(}(a^{-1}g_{1},bg_{2})\otimes m\big{)},\]
and \((k[G\times G]\otimes_{k\Delta G}M)_{2}^{*}\) corresponds to the bottom left composite, that is, it has actions defined by:
\[a\cdot f\big{(}(g_{1},g_{2})\otimes m\big{)}\cdot b =f\big{(}b\cdot\big{(}(g_{1},g_{2})\otimes m\big{)}\cdot a\big{)}\] \[=f\big{(}(bg_{1},a^{-1}g_{2})\otimes m\big{)}.\]
\((k[G\times G]\otimes_{k\Delta G}M)^{*}\) has \(k\)-bases given by both \(\{\delta_{(g,1)\otimes m_{i}}:g\in G,\,i\}\) and \(\{\delta_{(1,g)\otimes m_{i}}:g\in G,\,i\}\), where \(\{m_{i}\}\) is a \(k\)-basis of \(M\). The map \(\phi\) is defined by \(k\)-linearizing the following assignment:
\[\phi:(k[G\times G]\otimes_{k\Delta G}M)_{1}^{*} \to(k[G\times G]\otimes_{k\Delta G}M)_{2}^{*}\] \[\delta_{(g,1)\otimes m} \mapsto\delta_{(1,g)\otimes m}\]
It is straightforward that \(\phi\) is a well-defined bijective mapping, but it is less clear that \(\phi\) is a homomorphism. We verify:
\[\phi(a\cdot\delta_{(g,1)\otimes m_{i}}\cdot b) =\phi((a,b^{-1})\cdot\delta_{(g,1)\otimes m_{i}})\] \[=\phi(\delta_{(g,1)\otimes m_{i}}((a^{-1},b)\cdot-))\] \[=\phi(\delta_{(ag,b^{-1})\otimes m_{i}})\] \[=\delta_{(b^{-1},ag)\otimes m_{i}}\] \[=\delta_{(1,g)\otimes m}((b,a^{-1})\cdot-)\] \[=\delta_{(1,g)\otimes m}(b\cdot-\cdot a)\] \[=a\cdot\delta_{(1,g)\otimes m}\cdot b\] \[=a\cdot\phi(\delta_{(g,1)\otimes m_{i}})\cdot b\]
Thus \(\phi\) is a \((kG,kG)\)-bimodule isomorphism. To see it is natural, first note that all morphisms which arise are of the form \((\operatorname{id}\otimes f)^{*}\) for \(f:M\to N\) any left \(kG\)-module homomorphism. Then, it is straightforward to check the following diagram commutes:
\[\begin{CD}(k[G\times G]\otimes_{k\Delta G}N)_{1}^{*}@>{\phi_{N}}>{}>(k[G\times G]\otimes_{k\Delta G}N)_{2}^{*}\\ @V{}V{(\operatorname{id}\otimes f)^{*}}V@V{}V{(\operatorname{id}\otimes f)^{*}}V\\ (k[G\times G]\otimes_{k\Delta G}M)_{1}^{*}@>{\phi_{M}}>{}>(k[G\times G]\otimes_{k\Delta G}M)_{2}^{*}\end{CD}\]
Proof of Theorem 4.7.: We have a pair of natural isomorphisms corresponding to each interior commutative square below:
\[\begin{CD}{}_{k[\Delta G]}\mathbf{mod}@>{(-)^{*}}>{}>{}_{k[\Delta G]}\mathbf{mod}\\ @V{}V{\operatorname{ind}_{\Delta G}^{G\times G}}V@V{}V{\operatorname{ind}_{\Delta G}^{G\times G}}V\\ {}_{k[G\times G]}\mathbf{mod}@>{(-)^{*}}>{}>{}_{k[G\times G]}\mathbf{mod}\\ @V{}V{\cong}V@V{}V{\cong}V\\ {}_{kG}\mathbf{mod}_{kG}@>{(-)^{*}}>{}>{}_{kG}\mathbf{mod}_{kG}\end{CD}\]
Therefore, \(\operatorname{ind}_{\Delta G}^{G\times G}(C^{*})\cong\left(\operatorname{ind}_{ \Delta G}^{G\times G}C\right)^{*}\), and
\[kG\cong\operatorname{ind}_{\Delta G}^{G\times G}(k)\simeq\operatorname{ind}_{ \Delta G}^{G\times G}(C\otimes_{k}C^{*})\cong\operatorname{ind}_{\Delta G}^{G \times G}C\otimes_{kG}\operatorname{ind}_{\Delta G}^{G\times G}(C^{*})\cong \operatorname{ind}_{\Delta G}^{G\times G}(C)\otimes_{kG}\big{(}\operatorname{ ind}_{\Delta G}^{G\times G}C\big{)}^{*}.\]
If \(C\) is an endotrivial complex, then the components of \(\operatorname{ind}_{\Delta G}^{G\times G}C\) are \(p\)-permutation modules with twisted diagonal vertices, since \(\operatorname{ind}_{\Delta G}^{G\times G}\circ\operatorname{iso}_{G}^{\Delta G}\circ\operatorname{ind}_{H}^{G}=\operatorname{ind}_{\Delta H}^{G\times G}\circ\operatorname{iso}_{H}^{\Delta H}\). Therefore, \(\operatorname{ind}_{\Delta G}^{G\times G}C\otimes_{kG}\operatorname{ind}_{\Delta G}^{G\times G}(C)^{*}\simeq kG\), so \(\operatorname{ind}_{\Delta G}^{G\times G}(C)\) is a splendid Rickard auto-equivalence of \(kG\). The second homotopy equivalence follows similarly. The final statement follows from additivity of induction.
_Remark 4.13_.: In particular, if \(C\) is a lift of an orthogonal unit \(u\in O(T(kG))\), then \(\operatorname{ind}_{\Delta G}^{G\times G}C\) is a splendid lift of the \(p\)-permutation autoequivalence \(\operatorname{ind}_{\Delta G}^{G\times G}u\in O(T^{\Delta}(kG,kG))\). Moreover, \(\operatorname{ind}_{\Delta G}^{G\times G}\) reflects isomorphisms (which we show in the following lemma), implying each unique (up to isomorphism) endotrivial complex defines a corresponding unique (up to isomorphism) splendid autoequivalence. We obtain an injective group homomorphism \(\mathcal{E}_{k}(G)\to\mathcal{S}_{k}(G)\).
**Lemma 4.14**.: _Let \(R\) be a commutative ring, and \(M\) an \(RG\)-module. Then we have a natural isomorphism \((\operatorname{ind}_{\Delta G}^{G\times G}M)^{1\times G}\cong M\). In particular, if \(C_{1},C_{2}\) are chain complexes of \(RG\)-modules satisfying \(\operatorname{ind}_{\Delta G}^{G\times G}C_{1}\cong\operatorname{ind}_{\Delta G}^{G\times G}C_{2},\) then \(C_{1}\cong C_{2}\)._
Proof.: Observe every element of \((\operatorname{ind}_{\Delta G}^{G\times G}M)^{1\times G}\) can be expressed in the form
\[r\left(g,\sum_{g^{\prime}\in G}g^{\prime}\right)\otimes m,\quad\text{for }r \in R,g\in G,\text{ and }m\in M.\]
Then it is straightforward to verify that the homomorphism \(\phi:(\operatorname{ind}_{\Delta G}^{G\times G}M)^{1\times G}\to M\) induced by the \(R\)-linearization of \(\left(g,\sum_{g^{\prime}\in G}g^{\prime}\right)\otimes m\mapsto gm\) is a well-defined natural isomorphism.
## 5 The faithful constituent of \(\mathcal{E}_{k}(G)\)
Bouc introduced a theory of "biset functors," an abstraction of constructions which associate an abelian group to every finite group, with corresponding induction, restriction, transfer, inflation, and deflation maps between groups. This can be viewed as an extension of global Mackey functors. This approach has led to a number of results, such as classifying the Dade group \(D(P)\) for \(p\)-groups \(P\), and inductively determining a basis for the unit group of the Burnside ring for \(2\)-groups, \(B(P)^{\times}\). The former is connected to the study of endotrivial modules, as there is always an embedding \(\mathcal{T}(G)\hookrightarrow D(G)\). We refer the reader to Bouc's text on biset functors [9, Chapters 11, 12] for further details.
Following Bouc, we define the notion of a faithful endotrivial complex, which one may think of as an endotrivial complex containing no parts inflated from a proper quotient group. The set of all of these forms the faithful subgroup of \(\mathcal{E}_{k}(G)\). For ease of notation, we drop bracket notation when referring to elements of \(\mathcal{E}_{k}(G)\). The following definition is adapted from [9, Chapter 6] with a few modifications. Let \(s_{p}^{\triangle}(G)\) denote the set of all normal \(p\)-subgroups of \(G\).
**Definition 5.1**.: Let \(P\in s_{p}^{\triangle}(G)\). Then \(\inf_{G/P}^{G}\) induces injective group homomorphisms \(\mathcal{E}_{k}(G/P)\to\mathcal{E}_{k}(G)\). These homomorphisms are split injective with retraction given by \(\operatorname{def}_{G/P}^{G}:=(-)(P)\). Define the _faithful component_ of \(\mathcal{E}_{k}(G)\), \(\partial\mathcal{E}_{k}(G)\) as follows.
\[\partial\mathcal{E}_{k}(G):=\bigcap_{1<P\in s_{p}^{\triangle}(G)}\ker\big{(}(-) (P)\big{)}.\]
For example, when every \(p\)-subgroup of \(G\) is normal, \(\partial\mathcal{E}_{k}(G)\) consists of endotrivial complexes whose h-marks are all zero and whose local homology is trivial, except possibly at the trivial subgroup. In that case, it follows that \(\partial\mathcal{E}_{k}(G)\) has \(\mathbb{Z}\)-rank at most \(1\), corresponding to the h-mark at the trivial subgroup. Call any \(C\in\partial\mathcal{E}_{k}(G)\) a _faithful_ endotrivial complex.
_Remark 5.2_.: \(\mathcal{E}_{k}\) may be regarded as a partial biset functor, with inflation, isomorphism, and restriction defined as usual, and deflation defined only for \(p\)-groups. In this way, the faithful component defined here is analogous to the faithful component of a biset functor as defined in [9]. However, there is no known induction: usual induction of chain complexes is not multiplicative, and tensor induction of chain complexes (see [13]) does not in general preserve endotriviality.
For example, let \(V_{4}\) have generators \(\sigma,\tau\), and \(C\) be the endotrivial complex of \(kC_{2}\)-modules given by \(kC_{2}\to k\), where the differential is any surjective homomorphism. Then, it is straightforward to show that
\[\operatorname{ten}_{C_{2}}^{V_{4}}C=k[V_{4}/\langle\tau\rangle]\oplus k[V_{4 }/\langle\sigma\tau\rangle]\to kV_{4}\to k.\]
Even without knowing the differentials, we may see that this cannot be an endotrivial complex, since
\[\left(\operatorname{ten}_{C_{2}}^{V_{4}}C\right)(\langle\tau\rangle)\cong k [V_{4}/\langle\tau\rangle]\to 0\to k\]
which cannot possibly have homology concentrated in one degree.
However, we may view \(\mathcal{E}_{k}\) as a biset functor after restricting to the subcategory \(\mathbb{Z}\)-linearly generated by all restriction, inflation, and isomorphism bisets, and deflation bisets only of the form \(\operatorname{def}_{G/P}^{G}\) for normal \(p\)-subgroups \(P\in s_{p}^{\triangle}(G)\).
_Example 5.3_.: The following examples come from Section 6. If \(G=C_{p^{n}}\) with \(p>2\), or \(p=2\) and \(n>1\), then \(\partial\mathcal{E}_{k}(G)\) is generated by the endotrivial complex obtained by truncating the period \(2\) free resolution of \(k\). Say \(G=\langle\sigma\rangle\); then the endotrivial complex \(C\) is as follows:
\[C=\big{(}kG\xrightarrow{d_{2}}kG\xrightarrow{d_{1}}k\big{)},\quad d_{2}=\text{multiplication by }\sigma-1,\quad d_{1}:g\mapsto 1\text{ (the augmentation map)}.\]
Indeed, \(h(C(1))=2\) and \(C(P)\cong k\) for any \(1<P\in s_{p}(G)\). We prove these constructions generate \(\partial\mathcal{E}_{k}(G)\) in Section 6.
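These claims can be checked directly: every nontrivial subgroup \(P\leq G\) acts freely on the basis \(G\) of \(kG\), so \((kG)(P)=k[G^{P}]=0\) and hence \(C(P)=(0\to 0\to k)\simeq k[0]\); at the trivial subgroup, \(d_{1}\) is surjective, its kernel is the augmentation ideal, which coincides with the image of \(d_{2}\), and \(\ker d_{2}\) is the one-dimensional socle \(k\cdot\sum_{g\in G}g\), so \(C\) has homology of \(k\)-dimension one concentrated in degree \(2\), as required by Theorem 3.4.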
If \(p=2\) and \(G=C_{2}\), then \(\partial\mathcal{E}_{k}(G)\) is generated by the endotrivial complex obtained by truncating the period \(1\) free resolution of \(k\),
\[C=\big{(}kC_{2}\to k,\quad\sigma\mapsto 1\big{)}.\]
Finally, if \(G\) is any group of order prime to \(p\), or contains no nontrivial normal \(p\)-subgroup, then vacuously \(\partial\mathcal{E}_{k}(G)=\mathcal{E}_{k}(G)\).
Computing faithful endotrivial complexes which generate the faithful constituent will be the main focus of Section 6, since, as the next theorem implies, determining \(\partial\mathcal{E}_{k}(G)\) is (given an inductive hypothesis) all that is needed to completely determine \(\mathcal{E}_{k}(G)\). The next theorem and proof are adapted from [9, 6.3.3], but reformulated to be presentable in a self-contained manner.
**Theorem 5.4**.: _Let \(G\) be any group. Define the following group homomorphism:_
\[\Phi:\mathcal{E}_{k}(G) \to\prod_{P\in s_{p}^{\triangle}(G)}\mathcal{E}_{k}(G/P)\] \[C \mapsto\left(\bigotimes_{P\leq Q\in s_{p}^{\triangle}(G)}\left(\inf_{G/Q}^{G/P}C(Q)\right)^{\otimes\mu(P,Q)}\right)_{P\in s_{p}^{\triangle}(G)}\]
_Here, \(\mu\) is the Mobius function associated to the poset \(s_{p}^{\triangle}(G)\) of normal \(p\)-subgroups of \(G\)._
1. _The image of_ \(\Phi\) _is contained in_ \(\prod_{P\in s_{p}^{\triangle}(G)}\partial\mathcal{E}_{k}(G/P)\)_._
2. \(\Phi:\mathcal{E}_{k}(G)\to\prod_{P\in s_{p}^{\triangle}(G)}\partial\mathcal{E}_ {k}(G/P)\) _is an isomorphism of groups, with inverse given by_ \[\Psi:\prod_{P\in s_{p}^{\triangle}(G)}\partial\mathcal{E}_{k}(G/P) \to\mathcal{E}_{k}(G)\] \[(C_{P})_{P\in s_{p}^{\triangle}(G)} \mapsto\bigotimes_{P\in s_{p}^{\triangle}(G)}\inf_{G/P}^{G}C_{P}\]
Proof.: Set \(\Phi_{S}\) to be the component of \(\Phi\) at \(S\), that is, \(\Phi=(\Phi_{S})_{S\in s_{p}^{\triangle}(G)}\). To show (a), it suffices to show that \(\big{(}\operatorname{im}\Phi_{S}\big{)}(P)=k\) whenever \(S<P\in s_{p}^{\triangle}(G)\). Fix an endotrivial complex of \(kG\)-modules \(C\). We first exhibit a clever reindexing: for fixed \(S\) and \(P\geq S\) with \(P,S\in s_{p}^{\triangle}(G)\),
\[\left(\bigotimes_{S\leq Q\in s_{p}^{\triangle}(G)}\big{(}\operatorname{inf}_{G/Q}^{G/S}C(Q)\big{)}^{\otimes\mu(S,Q)}\right)(P) =\bigotimes_{S\leq Q\in s_{p}^{\triangle}(G)}\Big{(}\big{(}\operatorname{inf}_{G/Q}^{G/S}C(Q)\big{)}(P)\Big{)}^{\otimes\mu(S,Q)}\] \[=\bigotimes_{S\leq Q\in s_{p}^{\triangle}(G)}\Big{(}\operatorname{inf}_{G/PQ}^{G/P}C(PQ)\Big{)}^{\otimes\mu(S,Q)}\] \[=\bigotimes_{SP\leq X\in s_{p}^{\triangle}(G)}\Big{(}\operatorname{inf}_{G/X}^{G/P}C(X)\Big{)}^{\otimes\big{(}\sum_{S\leq Q\in s_{p}^{\triangle}(G),X=PQ}\mu(S,Q)\big{)}}\]
Set \(s_{X}=\sum_{S\leq Q\in s_{p}^{\triangle}(G),PQ=X}\mu(S,Q)\); then it suffices to show \(s_{X}=0\) unless \(PS=S\). If \(PS\neq S\), then
\[s_{PS}=\sum_{Q\in s_{p}^{\triangle}(G),S\leq Q\leq PS}\mu(S,Q)=0.\]
Then, for \(PS\leq Y\in s_{p}^{\triangle}(G)\),
\[\sum_{X\in s_{p}^{\triangle}(G),PS\leq X\leq Y}s_{X}=\sum_{X\in s_{p}^{ \triangle}(G),PS\leq X\leq Y}\sum_{S\leq Q\in s_{p}^{\triangle}(G),PQ=X}\mu(S, Q)=\sum_{Q\in s_{p}^{\triangle}(G),S\leq Q\leq Y}\mu(S,Q)=0,\]
and inducting on the poset \(s_{p}^{\triangle}(G)\) allows us to conclude \(s_{Y}=0\). Thus, the exponent is zero unless \(P=S\), and we conclude \(\operatorname{im}\Phi_{S}\subseteq\partial\mathcal{E}_{k}(G/S)\).
For (b), by Mobius inversion, it follows that for any endotrivial complex of \(kG\)-modules \(C\),
\[\bigotimes_{P\leq Q\in s_{p}^{\triangle}(G)}\inf_{G/Q}^{G/P}\Phi_{Q}(C)=C(P)\quad\text{for every }P\in s_{p}^{\triangle}(G).\]
In particular, \(\bigotimes_{P\in s_{p}^{\triangle}(G)}\inf_{G/P}^{G}\Phi_{P}(C)=C\), which demonstrates \(\Psi\circ\Phi=\operatorname{id}_{\mathcal{E}_{k}(G)}\). To show that \(\Phi\) and \(\Psi\) are inverses, it suffices to show that \(\Psi\) is injective.
Suppose for contradiction that \(\ker\Psi\neq(k)_{P\in s_{p}^{\triangle}(G)}\), and choose a nontrivial \((C_{P})_{P\in s_{p}^{\triangle}(G)}\in\ker\Psi\). Then the product of all \(\operatorname{inf}_{G/P}^{G}C_{P}\) is the trivial complex. There must exist an \(X\in s_{p}^{\triangle}(G)\) which is maximal with respect to the property that \(C_{X}\) is a nontrivial faithful endotrivial complex. First, assume \(X\) is not a maximal element of \(s_{p}^{\triangle}(G)\). Then, \(C_{X}\) has unique highest nonzero h-mark at a subgroup \(X^{\prime}\geq X\) with respect to subgroup order, where \(X^{\prime}\) does not contain as
subgroup any normal subgroups containing \(X\) besides \(X\) itself. But then, it is impossible that \(h\left(\left(\bigotimes_{P\in s_{p}^{\triangle}(G)}\inf_{G/P}^{G}C_{P}\right)(X) \right)=0\), since for all other \(X<Y\in s_{p}^{\triangle}(G)\), \(h(C_{Y}(X))=0\) by maximality of \(X\), and for all other \(X\not\leq Y\in s_{p}^{\triangle}(G)\), \(h(C_{Y}(X))=0\) since \(C_{Y}\) is a faithful endotrivial \(k[G/Y]\)-complex.
Otherwise, if \(X\) is maximal, it might also be the case that \(C_{X}\) has h-marks \(0\) everywhere, in which case \(C_{X}=k_{\omega}\) for some nontrivial \(\omega\in\operatorname{Hom}(G/X,k^{\times})\). In this case, \(X\) is a global maximum by basic group-theoretic arguments. It follows that \(C_{X}\) is the only chain complex in the tuple \((C_{P})\) for which \((C_{P})(X)\neq k\), by faithfulness. Therefore
\[\bigotimes_{P\in s_{p}^{\triangle}(G)}\inf_{G/P}^{G}C_{P}(X)\cong\left(\bigotimes _{P\in s_{p}^{\triangle}(G)}\inf_{G/P}^{G}C_{P}\right)(X)\neq k.\]
Thus \((C_{P})\) cannot exist, and \(\ker\Psi=(k)_{P\in s_{p}^{\triangle}(G)}\), as desired.
_Remark 5.5_.: Since \(\mathcal{E}_{k}(G)\) decomposes into a direct product of faithful components, to completely determine the structure of \(\mathcal{E}_{k}(G)\), it suffices to determine \(\partial\mathcal{E}_{k}(G)\), assuming we have already determined \(\mathcal{E}_{k}(G)\) for all groups of smaller order. One may ask what other restrictions can be placed upon elements of \(\partial\mathcal{E}_{k}(G)\).
We will show that if \([C]\in\partial\mathcal{E}_{k}(G)\), then there is a representative \(C\) all of whose components are "faithful" as well, in the sense that after applying the Brauer construction at any nontrivial normal subgroup, each component vanishes, with the exception of a lone simple module of \(k\)-dimension one in one degree. Explicitly, if \(\mathcal{X}\) is the set of \(p\)-subgroups of \(G\) which do not contain a nontrivial subgroup normal in \(G\), then there exists some \(i\in\mathbb{Z}\) such that for all \(j\neq i\), \(C_{j}\) is \(\mathcal{X}\)-projective, and \(C_{i}\) contains one indecomposable summand which has vertex \(S\in\operatorname{Syl}_{p}(G)\), and all other summands have vertex contained in \(\mathcal{X}\). To do this, we first generalize Bouc's Theorem 2.11 beyond the scope of projective modules.
**Theorem 5.6**.: _Let \(C\) be a chain complex of \(p\)-permutation \(kG\)-modules, and let \(\mathcal{X}\) be a subset of the \(p\)-subgroups of \(G\) which is closed under \(G\)-conjugation and taking subgroups. The following are equivalent:_
1. _For all_ \(P\not\in\mathcal{X}\)_,_ \(C(P)\) _is acyclic._
2. _There exists a chain complex_ \(D\) _with_ \(C\simeq D\) _such that for all_ \(i\in\mathbb{Z}\)_,_ \(D_{i}\) _is_ \(\mathcal{X}\)_-projective._
_In particular, if \(\mathcal{X}=\{1\}\), we obtain Theorem 2.11._
To prove this, we first state a lemma which refines Theorem 2.10.
**Lemma 5.7**.: _Let \(M,N_{1},\ldots,N_{l}\) be \(p\)-permutation \(kG\)-modules, with each \(N_{i}\) indecomposable with vertex \(P_{i}\), and let \(f:M\to N_{1}\oplus\cdots\oplus N_{l}\) be a \(kG\)-module homomorphism. \(f\) is split surjective if and only if the \(kN_{G}(P_{i})/P_{i}\)-module homomorphism \(f(P_{i})\) is surjective for all \(i\in\{1,\ldots,l\}\)._
Proof.: The forward direction is trivial. We induct on \(l\). Note \(l=1\) is simply the dual statement of Theorem 2.10.
Suppose \(f(P_{i})\) is surjective for all \(i\in\{1,\ldots,l\}\). Since \(f(P_{1})\) is surjective, the dual of Theorem 2.10 implies there exists a direct summand \(M_{1}\) of \(M\) isomorphic to \(N_{1}\) such that \(f(M_{1})=N_{1}\) and \(f|_{M_{1}}\) is an isomorphism. Write \(M=M_{1}\oplus M^{\prime}\). It follows that \(f|_{M^{\prime}}(P_{2})\) surjects onto \(N_{2}(P_{2})\oplus\cdots\oplus N_{l}(P_{2})\), since \(\operatorname{im}f(P_{2})=(N_{1}\oplus\cdots\oplus N_{l})(P_{2})\), and the inductive hypothesis completes the proof.
Proof of Theorem 5.6.: (b) implies (a) is straightforward. Let \(n\) be the minimum integer for which \(C_{n}\neq 0\), and write \(C_{n}=X_{n}\oplus Y_{n}\), where \(X_{n}\) consists of all indecomposable summands with vertex contained in \(\mathcal{X}\), so \(Y_{n}\) consists of all indecomposable summands with vertex not contained in \(\mathcal{X}\). We construct a chain complex \(D\) identical to \(C\), except in degree \(n+1\), where we set \(D_{n+1}=X_{n}\oplus C_{n+1}\), and add the identity map on \(X_{n}\) to the differential of \(D_{n+1}\), \(d_{n+1}\). Since \(C(Q)\) is acyclic for all \(Q\not\in\mathcal{X}\) by assumption, so is \(D(Q)\), as \(X_{n}(Q)=0\) by Proposition 2.8. Therefore \(d_{n+1}\) composed with projection onto \(Y_{n}\) is split surjective by the previous lemma, and by construction, it follows that \(d_{n+1}\) is split surjective. Thus, we have that \(D\cong D^{\prime}\oplus(C_{n}\xrightarrow{\sim}C_{n})\), where \(D^{\prime}\) has lowest nonzero degree \(n+1\).
Note that \(C\) is homotopy equivalent to a shift of the mapping cone of the chain complex homomorphism \(D^{\prime}\to X_{n}\). Moreover, if \(Q\not\in\mathcal{X}\), \(C(Q)\cong D(Q)\simeq D^{\prime}(Q)\), and these complexes are by assumption acyclic. We now perform an inductive argument as follows. If the length of \(C\) is one, then (b) holds, since there is only one nonzero term, which must vanish after applying the Brauer construction at all \(P\not\in\mathcal{X}\), and the rest follows by Proposition 2.8.
Now assume that (a) implies (b) for complexes of length at most \(m\), and suppose \(C\) has length \(m+1\). Then \(D^{\prime}\) has length at most \(m\). Since \(D^{\prime}(P)\) is also acyclic for all \(P\not\in\mathcal{X}\), the inductive hypothesis implies \(D^{\prime}\) is homotopy equivalent to a complex consisting only of \(\mathcal{X}\)-projective modules. Since \(C\) is homotopy equivalent to a shift of the mapping cone \(D^{\prime}\to X_{n}\), and both \(D^{\prime}\) and \(X_{n}\) (as a complex) are homotopy equivalent to complexes with only \(\mathcal{X}\)-projective modules, we conclude \(C\) is also homotopy equivalent to a complex with only \(\mathcal{X}\)-projective modules, by invariance of the mapping cone under homotopy equivalences.
**Lemma 5.8**.: _Let \(C\) be a bounded complex of \(p\)-permutation \(kG\)-modules with the following property: there exists a sub-poset \(\mathcal{Y}\subset s_{p}(G)\) which does not contain Sylow subgroups, is closed under conjugation and taking subgroups, and for which there exists some \(i\in\mathbb{Z}\) such that for all \(P\not\in\mathcal{Y}\), \(\dim_{k}H_{i}(C(P))=1\) and \(C(P)\) is exact in all other degrees. Then \(C\) is homotopy equivalent to a complex \(C^{\prime}\) with the following property: there exists an \(i\in\mathbb{Z}\) such that \(C^{\prime}_{i}=M\oplus N\), with \(M\) being \(\mathcal{Y}\)-projective or \(0\) and \(N\) having Sylow vertices, and for all \(j\neq i\), either \(C^{\prime}_{j}=0\) or \(C^{\prime}_{j}\) is \(\mathcal{Y}\)-projective._
Proof.: We may assume without loss of generality that \(C\) contains no contractible summands. Set \(i=h(C(S))\). Then each component of \(C(S)\) is a direct sum of modules of \(k\)-dimension 1, as the only \(p\)-permutation \(kG\)-modules with vertex \(S\) have \(k\)-dimension 1 upon applying the Brauer construction at \(S\), and \(k[N_{G}(S)/S]\) is semisimple. Therefore, we have a homotopy equivalence \(k_{\omega}[i]\simeq C(S)\) for some \(\omega\in\operatorname{Hom}(N_{G}(S)/S,k^{\times})\). Hence \(C_{i}\) contains a corresponding trivial source direct summand \(E\), which satisfies \(k_{\omega}[i]=E(S)\), and which corresponds to the homotopy equivalence at the \(S\)-local level.
Either \(d_{i}(E)=0\) or \(d_{i}(E)\subseteq M\), where \(M\) is a direct summand of \(C_{i-1}\) of minimal \(k\)-dimension containing \(d_{i}(E)\). If \(d_{i}(E)=0\), we set \(M=0\). Now, define the two-term chain complex \(D\)
\[0\to E\to M\to 0\]
with \(E\) in degree \(i\) and the nonzero differential induced by \(d_{i}\) (possibly 0). We have a chain complex homomorphism \(\phi:D\to C\), with nonzero componentwise maps induced by inclusion.
By construction, \(\phi(S):D(S)\to C(S)\) is a homotopy equivalence. Let \(P\not\in\mathcal{Y}\). We claim that \(C(P)\) is homotopy equivalent to the chain complex \(N[i]\), for some \(k[N_{G}(P)/P]\)-module \(N\) with \(k\)-dimension 1.
Indeed, for any \(p\)-subgroup \(Q\) of \(N_{G}(P)/P\), \(C(P)(Q)\) satisfies \(\dim_{k}H_{i}(C(P)(Q))=1\) and \(C(P)(Q)\) is exact in all other degrees. Therefore, inductively peeling off terms of \(C(P)\) yields a homotopy equivalence with \(N[i]\), and since \(\dim_{k}H_{i}(C(P))=1\), \(N\) must have \(k\)-dimension 1.
We next claim that for any \(p\)-subgroup \(P\not\in\mathcal{Y}\), \(\phi(P):D(P)\to C(P)\) is a quasi-isomorphism. Suppose not; then either \((\phi_{i}(E))(P)\subseteq\operatorname{im}d_{i+1}(P)\) or \((\phi_{i}(E))(P)\not\subseteq\ker d_{i}(P)\). However, in both cases, since \(C(P)\simeq N[i]\), this would imply that \(\phi(E)(P)\) belongs to an indecomposable contractible summand of \(C(P)\) of the form \(0\to N\to N\to 0\). An inductive argument up the poset of \(p\)-subgroups implies that \(\phi(E)(S)\) also belongs to an indecomposable contractible summand of \(C(S)\) of the form \(0\to N^{\prime}\to N^{\prime}\to 0\), for some \(k[N_{G}(S)/S]\)-module \(N^{\prime}\) with \(k\)-dimension 1 as well, which contradicts our construction. Thus \(\phi(P)\) is a quasi-isomorphism for all \(P\not\in\mathcal{Y}\). Moreover, this shows that \(d_{i}(P)\) is the zero map for all \(P\not\in\mathcal{Y}\).
Now, by the previous claim and Theorem 5.6, the mapping cone \(C(\phi)\) is homotopy equivalent to a complex with \(\mathcal{Y}\)-projective components in all degrees. Therefore, by considering the structure of the mapping cone, \(C\) is homotopy equivalent to a complex with \(\mathcal{Y}\)-projective components in all degrees, with the exception of the summands \(E\) of \(C_{i}\) and \(M\) of \(C_{i-1}\). Replace \(C\) with this reduced complex. If \(M\) is \(\mathcal{Y}\)-projective, we are done. Otherwise, suppose \(M\) has vertex \(P\not\in\mathcal{Y}\). In this case, \(C(P)\) contains exactly two nonzero components, \(C(P)_{i}=E(P)\) and \(C(P)_{i-1}=M(P)\). However, since \(\phi(P)\) is a quasi-isomorphism, \((\operatorname{im}d_{i}(E))(P)=0\), and since \(C_{i-2}\) has no summands with vertex \(P\), \((\ker d_{i-1}|_{M})(P)=M(P)\). Since these are the only two summands with vertex \(P\), \(H_{i-1}(C(P))\cong M(P)\neq 0\), a contradiction. Thus \(M\) must be \(\mathcal{Y}\)-projective, and we are done.
Applying this lemma to the faithful case, we immediately obtain a structural result for faithful complexes.
**Corollary 5.9**.: _Let \(C\in\partial\mathcal{E}_{k}(G)\), and let \(\mathcal{X}\) be the set of \(p\)-subgroups of \(G\) which do not contain any nontrivial normal subgroups of \(G\) as subgroups. Then there exists \(i\in\mathbb{Z}\) for which \(C_{j}\) is \(\mathcal{X}\)-projective or \(0\) for all \(j\neq i\), and \(C_{i}=M\oplus N\) where \(M\) is \(\mathcal{X}\)-projective or 0 and \(N\) has Sylow vertices._
Proof.: This follows immediately from the previous lemma by setting \(\mathcal{Y}\) to be the set of all \(p\)-subgroups of \(G\) which contain no nontrivial normal subgroups of \(G\) as subgroups.
## 6 Determining \(\mathcal{E}_{k}(G)\) for some groups
We can now completely deduce the structure of \(\mathcal{E}\) for some classes of well-understood groups. Most of the computations will rely on determining \(\partial\mathcal{E}_{k}(G)\).
_Remark 6.1_.: One source of endotrivial complexes comes from truncating periodic projective resolutions of the trivial \(kG\)-module \(k\). We will first set some conventions. Assume that all projective resolutions are of the form \(P_{\bullet}\to M\), with \(M\in{}_{kG}\mathbf{mod}\) in degree zero of the chain complex unless otherwise stated. By _periodic resolution_, we mean a projective resolution \(P_{\bullet}\) for which there exists \(i\in\mathbb{N}\) such that \(\ker d_{i}\cong M\). These arise from _periodic modules_, modules \(M\) for which \(M\cong\Omega_{i}(M)\) for some \(i\in\mathbb{N}\).
By the _period_ of a periodic resolution or module, we mean the minimum value of \(i>0\) satisfying \(\Omega_{i}(M)=M\). We may _truncate_ a periodic resolution \(P_{\bullet}\) of length \(i\) by taking the chain complex
\[\hat{P}=0\to P_{i}\to\dots\to P_{1}\to M\to 0.\]
It follows that \(H_{i}(\hat{P})\cong M\) and \(H_{j}(\hat{P})=0\) for \(j\neq i\). Note that if the period of a projective resolution is \(n\), the corresponding minimal truncation has length \(n+1\).
It is well-known that the trivial \(kG\)-module \(k\) is periodic if and only if \(G\) has \(p\)-rank 1, in other words, \(G\) has cyclic or quaternion Sylow \(p\)-subgroup. Let \(S\) denote the Sylow \(p\)-subgroup of \(G\). If \(S=C_{2}\), \(k\) has period 1, if \(S=C_{p^{n}}\) for \(p>2\) or \(p=2\) and \(n>1\), then \(k\) has period 2, and if \(S=Q_{2^{n}}\), then \(k\) has period 4. The first two periodic resolutions were given in Section 5 in the case of \(p\)-groups. We refer the reader to [11, Chapter 12.7] for an explicit construction of the periodic resolution of \(k\) as \(kQ_{2^{n}}\)-module for \(n\geq 3\).
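For a concrete worked instance of the truncation convention (this only restates, in the present notation, the cyclic resolution recalled from Section 5): take \(S=C_{p^{n}}=\langle g\rangle\) with \(p^{n}>2\), and let \(N=\sum_{j=0}^{p^{n}-1}g^{j}\) denote the norm element. The period-2 resolution truncates to
\[0\longrightarrow kS\xrightarrow{\;g-1\;}kS\xrightarrow{\;\varepsilon\;}k\longrightarrow 0,\]
which has homology \(k\cdot N\cong k\) in degree 2 and is exact elsewhere, so the minimal truncation indeed has length \(2+1=3\).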
**Proposition 6.2**.: _Let \(G\) be a group with \(p\)-rank 1, and let \(C\) be a minimal truncation of the periodic resolution of the trivial module. Then \(\langle C\rangle_{\mathbb{Z}}=\partial\mathcal{E}_{k}(G)\)._
Proof.: In this case, \(S\in\operatorname{Syl}_{p}(G)\) is either cyclic or generalized quaternion, and in either case, the unique subgroup of order \(p\) is normal. Therefore \(\partial\mathcal{E}_{k}(G)\) has rank at most 1, corresponding to the h-mark at the trivial subgroup. If \(\partial\mathcal{E}_{k}(G)\) were not generated by \(C\), then Theorem 5.8 would imply the existence of a periodic projective resolution of \(k\) with shorter period, contradicting minimality of the truncation.
### \(\mathcal{E}_{k}(G)\) for abelian groups
We first deduce the structure of \(\mathcal{E}_{k}(G)\) for any abelian group \(G\). In this situation, every subgroup is normal, which allows for restriction to preserve structural properties.
**Proposition 6.3**.: _Let \(H\leq G\) be Dedekind groups, i.e. groups for which every subgroup is normal. If \(p\mid|G|\), then restriction induces an injective group homomorphism \(\partial\mathcal{E}_{k}(G)\to\partial\mathcal{E}_{k}(H)\)._
Proof.: Any faithful chain complex \(C\) must have h-marks zero and \(H_{0}(C(P))=k\) for all nontrivial \(p\)-subgroups \(1<P\in s_{p}(G)\). Since \(p\mid|G|\), there exists at least one nontrivial \(p\)-subgroup, so by Theorem 3.6, the only faithful endotrivial complex with h-marks entirely zero is the trivial endotrivial complex \(k\). So if \(C\) is a nontrivial faithful endotrivial complex, \(\operatorname{res}_{H}^{G}C\) must have homology in a nonzero degree, and it follows that \(\operatorname{res}_{H}^{G}C\) also has 0 h-mark at \(P\) and \(H(C(P))=k\) for all nontrivial \(p\)-subgroups \(1<P\in s_{p}(H)\). Thus \(\operatorname{res}_{H}^{G}C\) is a nontrivial faithful endotrivial complex of \(kH\)-modules, and injectivity follows.
**Proposition 6.4**.: _Set \(G=C_{p}\times C_{p}\). Then \(\partial\mathcal{E}_{k}(G)\) is trivial. In particular, if \(p=2\),_
\[\mathcal{E}_{k}(G)=\langle k[G/H_{1}]\to k,k[G/H_{2}]\to k,k[G/H_{3}]\to k,k[1] \rangle_{\mathbb{Z}},\]
_and if \(p\) is odd,_
\[\mathcal{E}_{k}(G)=\langle k[G/H_{1}]\to k[G/H_{1}]\to k,\ldots,k[G/H_{p+1}] \to k[G/H_{p+1}]\to k,k[1]\rangle_{\mathbb{Z}},\]
_where \(H_{1},\ldots,H_{p+1}\) are the \(p+1\) subgroups of \(G\) with index \(p\), and the complexes are inflated truncated periodic free resolutions of \(k\)._
Proof.: Suppose for contradiction that \(\partial\mathcal{E}_{k}(G)\neq 0\). Then there exists some endotrivial chain complex \(C\) for which \(h(C(P))=0\) for all \(1<P\leq C_{p}\times C_{p}\), but \(h(C)\neq 0\). Assume \(h(C)>0\), the other case follows by dualizing. By peeling off contractible terms, it suffices to assume \(C_{i}=0\) for \(i<0\). Therefore, by Theorem 5.8, \(C\) is homotopy equivalent to a truncated periodic free resolution of the \(kG\)-module \(k\). However since \(G\) has \(p\)-rank 2, no such resolution exists, a contradiction.
Now, \(\mathcal{E}_{k}(G)\) can have \(\mathbb{Z}\)-rank at most \(p+2\): there are \(p+3\) subgroups of \(G\), so \(\operatorname{im}\epsilon\) is a subgroup of a free abelian group of \(\mathbb{Z}\)-rank \(p+3\), and since \(\partial\mathcal{E}_{k}(G)\) is trivial, the h-mark at the trivial subgroup is determined by the h-marks at the nontrivial subgroups. All other complexes are copies of the inflated truncated periodic resolutions of cyclic \(p\)-groups or shifts of the trivial endotrivial complex \(k\).
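For a concrete illustration of the generators above (a routine check, not needed for the proof, using that the Brauer construction of a permutation module \(k[X]\) at \(P\) is \(k[X^{P}]\)): let \(p=2\) and \(C=k[G/H_{1}]\xrightarrow{\varepsilon}k\). Then
\[C(H_{1})\simeq kC_{2}\to k,\qquad C(H_{2})\simeq C(H_{3})\simeq C(G)\simeq k[0],\]
so \(C\) has h-mark \(1\) at \(1\) and at \(H_{1}\), and h-mark \(0\) at \(H_{2}\), \(H_{3}\), and \(G\). Together with the analogous complexes for \(H_{2}\), \(H_{3}\) and with \(k[1]\) (whose h-marks are all \(1\)), these h-mark vectors are linearly independent.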
From this, the structure of \(\mathcal{E}_{k}(G)\) for \(G\) abelian follows easily.
**Theorem 6.5**.: _Let \(G\) be an abelian group and let \(s_{p}^{1}(G)\) denote the set of all \(p\)-subgroups of \(G\) for which \(G/P\) has \(p\)-rank at most 1. Then,_
\[\mathcal{E}_{k}(G)\cong\prod_{P\in s_{p}^{1}(G)}\partial\mathcal{E}_{k}(G/P).\]
\(\mathcal{E}_{k}(G)\) _is generated by endotrivial complexes which arise from inflating truncated periodic free resolutions of \(k\) and shifts of 1-dimensional representations._
Proof.: If \(G/P\) has \(p\)-rank greater than 1, \(\partial\mathcal{E}_{k}(G/P)=0\), since by Proposition 6.3, we have an injective group homomorphism \(\partial\mathcal{E}_{k}(G/P)\to\partial\mathcal{E}_{k}(C_{p}\times C_{p})\). However, \(\partial\mathcal{E}_{k}(C_{p}\times C_{p})=0\) by Proposition 6.4, so \(\partial\mathcal{E}_{k}(G/P)=0\).
Conversely, if \(G/P\) has \(p\)-rank at most 1, this case was covered in Proposition 6.2. Additionally, if \(G/P\) has \(p\)-rank 0, i.e. \(P\) is the unique Sylow \(p\)-subgroup of \(G\), then \(\partial\mathcal{E}_{k}(G/P)=\mathcal{E}_{k}(G/P)\) has rank 1 as well. In this case \(\partial\mathcal{E}_{k}(G/P)\) is generated by \(k[1]\) and all 1-dimensional \(k[G/P]\)-modules.
By the isomorphism \(\mathcal{E}_{k}(G)\cong\prod_{P\in s_{p}(G)}\partial\mathcal{E}_{k}(G/P)\) afforded by Theorem 5.4, the result follows.
**Corollary 6.6**.: _If \(G\) is abelian, \(\Lambda:\mathcal{E}_{k}(G)\to O(T(kG))\) is surjective._
Proof.: Matsuda proved in [17, 4.5] that if \(G\) is abelian,
\[B(G)^{\times}=\langle-[G/G],\{[G/G]-[G/H]\mid[G:H]=2\}\rangle.\]
Under the decomposition afforded by \(\kappa\) from [4], we have \(O(T(kG))=B(\mathcal{F})^{\times}\times\operatorname{Hom}(G,k^{\times})\times \mathcal{L}_{G}\). However, since \(N_{G}(P)=PC_{G}(P)=G\) for any \(p\)-subgroup \(P\), \(|\mathcal{L}_{G}|=1\). It is clear that every element belonging to \(1\times\operatorname{Hom}(G,k^{\times})\times\mathcal{L}_{G}\) has a lift. Moreover, it follows from the properties described in Section 4 that \(k[1]\in\mathcal{E}_{k}(G)\) descends to \(-[S/S]\in B(\mathcal{F})\), where \(S\in\operatorname{Syl}_{p}(G).\) If \(p\) is odd, we are done.
Otherwise, let \(Q\leq S\) with \([S:Q]=2\). Let \(H\) be the unique largest \(p^{\prime}\)-subgroup of \(G\). Then, it is routine to verify that the chain complex \(C_{Q}=k[G/QH]\to k\) is endotrivial, and that under restriction, \(\operatorname{res}_{S}^{G}C_{Q}=k[S/Q]\to k\), and descends to \([S/S]-[S/Q]\in B(\mathcal{F})^{\times}\times 1\times 1\). Thus, all 3 constituents of \(\kappa(O(T(kG)))\) are mapped onto, and we conclude \(\Lambda\) is surjective.
### \(B(P)^{\times}\) for 2-groups
We briefly digress to describe some details of \(B(P)^{\times}\) when \(P\) is a 2-group. Recall that in this case, we have an isomorphism \(B(P)^{\times}\cong O(T(kP))\) induced by \(k\)-linearization, which will be crucial for verifying surjectivity of \(\Lambda\) for the rest of this chapter. For the rest of this section, we will identify \(B(P)=T(kP)\) when \(P\) is a \(p\)-group.
_Remark 6.7_.: There are two key facts that will be useful to us.
1. Note that for permutation modules, the Brauer construction is the same as taking \(G\)-fixed points of the underlying permutation basis, by Proposition 2.8. One can define the faithful component of \(B(G)^{\times}\) analogous to Definition 5.1, with the operation of taking fixed points of \(G\)-sets replacing the Brauer construction. This is a consequence of the fact that \(B(G)^{\times}\) is a biset functor. We denote the faithful component of \(B(G)^{\times}\) also by \(\partial B(G)^{\times}\). Via general biset functor theory, we obtain an analogous isomorphism \[B(G)^{\times}\cong\prod_{N\leq G}\partial B(G/N)^{\times}.\]
In generality, \(N\) need not be a \(p\)-subgroup of \(G\). However, in the case where \(G\) is a \(p\)-group, the decomposition of \(B(P)^{\times}\) afforded by general theory and the decomposition of \(\mathcal{E}_{k}(G)\) given in Theorem 5.4 coincide through \(\Lambda\). To be precise, \(\Lambda(\partial\mathcal{E}_{k}(P/Q))\subseteq\partial B(P/Q)^{\times}\) for all \(Q\trianglelefteq P\), after identifying \(B(P)^{\times}\cong O(T(kP))\).
2. For \(P\) a \(2\)-group, Bouc obtained the basis of \(B(P)^{\times}\) by the general theory of rational \(p\)-biset functors. This required computations of \(\partial B(P)^{\times}\) for all \(2\)-groups with normal \(p\)-rank \(1\): they are \(C_{2^{i}}\) for \(i\geq 0\), \(D_{2^{i}}\) for \(i\geq 4\), \(SD_{2^{i}}\) for \(i\geq 4\), and \(Q_{2^{i}}\) for \(i\geq 3\). In this case, it was shown that \(\partial B(P)^{\times}\neq 0\) only for \(C_{2^{i}}\) when \(i\in\{0,1\}\) and \(D_{2^{i}}\) when \(i\geq 4\). The corresponding generators are \(\{-[G/G]\}\) for \(C_{1}\), \(\{[C_{2}/1]-[C_{2}/C_{2}]\}\), and \(\{[D_{2^{n}}/1]+[D_{2^{n}}/D_{2^{n}}]-[D_{2^{n}}/H_{1}]-[D_{2^{n}}/H_{2}]\}\) for \(D_{2^{n}}\), where \(H_{1},H_{2}\leq D_{2^{n}}\) are representatives of the two conjugacy classes of noncentral subgroups of order \(2\). From there, the basis is constructed by obtaining a "genetic basis," a collection of subquotients of \(P\), and tensor inducing then inflating the faithful units from the subquotient to \(P\). We omit these details as they will not be necessary for the scope of this paper.
### \(\mathcal{E}_{k}(G)\) for dihedral \(2\)-groups
_Remark 6.8_.: For non-abelian groups \(G\), it is no longer the case that \(\partial\mathcal{E}_{k}(G)\) necessarily has \(\mathbb{Z}\)-rank at most \(1\).
We next focus on dihedral \(2\)-groups. Assume \(k\) is a field of characteristic \(2\). We use the presentation
\[D_{2^{n}}=\langle a,b\mid a^{2^{n-1}}=b^{2}=1,{}^{b}a=a^{-1}\rangle.\]
Then the normal subgroups of \(D_{2^{n}}\) are as follows: there are \(2\) normal subgroups of index \(2\) isomorphic to \(D_{2^{n-1}}\), namely \(D_{2^{n-1}}^{1}=\langle a^{2},b\rangle\) and \(D_{2^{n-1}}^{2}=\langle a^{2},ba\rangle\), and for each \(k\in\{0,\ldots,n-1\}\), a copy of the cyclic group of \(2^{k}\) elements of index \(2^{n-k}\), \(C_{2^{k}}=\langle a^{2^{n-1-k}}\rangle\). One may compute via the generators that for \(k<n\), the quotient \(D_{2^{n}}/C_{2^{k}}\) is isomorphic to \(D_{2^{n-k}}\). Therefore, we have an isomorphism afforded by Theorem 5.4
\[\mathcal{E}_{k}(D_{2^{n}})\cong\partial\mathcal{E}_{k}(C_{2})\times\partial \mathcal{E}_{k}(C_{2})\times\prod_{i=0}^{n}\partial\mathcal{E}_{k}(D_{2^{i}}).\]
We have already determined \(\partial\mathcal{E}_{k}(D_{2^{i}})\) for \(i=0,1,2\) (recalling \(D_{4}=V_{4}\) and \(\partial\mathcal{E}_{k}(V_{4})=0\)), so it suffices to determine \(\partial\mathcal{E}_{k}(D_{2^{i}})\) for \(i\geq 3\). It turns out that in these cases, the computation is independent of \(i\geq 3\). For \(i\geq 3\), the only conjugacy classes of subgroups of \(D_{2^{i}}\) not containing a nontrivial normal subgroup have representatives \(\{1,\langle b\rangle,\langle ab\rangle\}.\) It follows that \(\partial\mathcal{E}_{k}(D_{2^{i}})\) consists of all endotrivial complexes for which \(C(P)\simeq k\) for all \(P\leq D_{2^{i}}\) not conjugate to one of those three subgroups. Therefore, \(\partial\mathcal{E}_{k}(D_{2^{i}})\) has \(\mathbb{Z}\)-rank at most \(3\).
Let \(i\geq 3\). We first construct an endotrivial complex for \(kD_{2^{i}}\) with \(i\geq 3\) which is faithful. \(D_{2^{i}}\) has three conjugacy classes of subgroups of order \(2\): \(Z(D_{2^{i}})=\langle a^{2^{i-2}}\rangle\), \(\{\langle a^{2k}b\rangle\mid k\in\mathbb{Z}\}\), and \(\{\langle a^{2k+1}b\rangle\mid k\in\mathbb{Z}\}\). Set \(H_{1}=\langle b\rangle\) and \(H_{2}=\langle ab\rangle\) as representatives of the conjugacy classes. We construct a chain complex of \(kD_{2^{i}}\)-modules, \(\Gamma_{i}^{D}\), as follows:
\[\Gamma_{i}^{D}:=0\to kD_{2^{i}}\xrightarrow{d_{2}}k[D_{2^{i}}/H_{1}]\oplus k[ D_{2^{i}}/H_{2}]\xrightarrow{d_{1}}k\to 0,\]
\[d_{2}:x\in D_{2^{i}}\mapsto(xH_{1},xH_{2}),\quad d_{1}:(xH_{1},0)\mapsto 1,(0,xH_{ 2})\mapsto-1.\]
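As a quick sanity check that these maps define a chain complex, note that for any \(x\in D_{2^{i}}\),
\[d_{1}(d_{2}(x))=d_{1}(xH_{1},0)+d_{1}(0,xH_{2})=1-1=0,\]
so \(d_{1}\circ d_{2}=0\).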
Notice that \(\Lambda(\Gamma_{i}^{D})=[kD_{2^{i}}]+[k]-[k[D_{2^{i}}/H_{1}]]-[k[D_{2^{i}}/H_{ 2}]]\in\partial B(D_{2^{i}})^{\times}\). This is isomorphic to the chain complex corresponding to the relative syzygy \(\Omega(\Omega_{D_{2^{i}}/H_{1}\sqcup D_{2^{i}}/H_{2}})\).
**Proposition 6.9**.: \(\Gamma_{i}^{D}\) _is a faithful endotrivial complex of \(kD_{2^{i}}\)-modules._
Proof.: \(N_{D_{2^{i}}}(H_{j})=Z(D_{2^{i}})\cdot H_{j}\) for \(j\in\{1,2\}\), so it follows that \(\Gamma_{i}^{D}(H_{j})\cong kC_{2}\to k\), an endotrivial complex. Moreover, for every subgroup \(H\leq D_{2^{i}}\) not conjugate to \(H_{1}\) or \(H_{2}\) and not equal to \(1\), \(\Gamma_{i}^{D}(H)=k\), the trivial endotrivial complex of \(k[N_{D_{2^{i}}}(H)/H]\)-modules. Therefore if \(\Gamma_{i}^{D}\) is endotrivial, it is faithful. It remains to show that \(\Gamma_{i}^{D}\) has homology in one degree, with its homology having \(k\)-dimension \(1\).
**Lemma 6.10**.: \(H_{2}(\Gamma_{i}^{D})\cong k\)_, \(H_{1}(\Gamma_{i}^{D})=0\), and \(H_{0}(\Gamma_{i}^{D})=0\)._
Proof.: The final of these three assertions is straightforward since \(d_{1}\) is clearly surjective. By dimension-counting, the first assertion holds if and only if the second does. It suffices to show \(\ker d_{1}\subseteq\operatorname{im}d_{2}\). Write, for \(m\in k[G/H_{1}]\oplus k[G/H_{2}]\),
\[m=\left(\sum_{h\in[G/H_{1}]}a_{h}hH_{1},\sum_{h\in[G/H_{2}]}b_{h}hH_{2}\right).\]
Therefore,
\[m\in\ker d_{1}\iff\sum_{h\in[G/H_{1}]}a_{h}=\sum_{h\in[G/H_{2}]}b_{h},\quad a_{h},b_{h}\in k.\]
It follows that
\[\ker d_{1}=\operatorname{span}_{k}\{(g_{1}H_{1},g_{2}H_{2}):g_{1},g_{2}\in G\}.\]
On the other hand,
\[\operatorname{im}d_{2}=\operatorname{span}_{k}\{(gH_{1},gH_{2}):g\in G\}.\]
So it suffices to show that for any \(g_{1},g_{2}\in G\), \((g_{1}H_{1},g_{2}H_{2})\in\operatorname{span}_{k}\{(gH_{1},gH_{2}):g\in G\}=\operatorname{im}d_{2}.\) First, observe that \(abH_{1}=aH_{1}\) and \(abH_{2}=H_{2}\), so for any \(g\in G\),
\[(g_{1}H_{1},gaH_{2})=(g_{1}H_{1},gH_{2})-(gabH_{1},gabH_{2})+(gaH_{1},gaH_{2}).\]
Hence if \((g_{1}H_{1},gH_{2})\in\operatorname{im}d_{2}\), then \((g_{1}H_{1},gaH_{2})\in\operatorname{im}d_{2}\), since the two remaining terms on the right-hand side are diagonal elements lying in \(\operatorname{im}d_{2}\). Now, since \(\langle a\rangle\cap H_{2}=1\) and \([G:H_{2}]=2^{i-1}=|\langle a\rangle|\), every coset of \(H_{2}\) has the form \(a^{m}H_{2}\). Starting from \((g_{1}H_{1},g_{1}H_{2})=d_{2}(g_{1})\in\operatorname{im}d_{2}\) and applying the previous step repeatedly, we obtain \((g_{1}H_{1},g_{1}a^{m}H_{2})\in\operatorname{im}d_{2}\) for all \(m\); in particular, \((g_{1}H_{1},g_{2}H_{2})\in\operatorname{im}d_{2}\).
Thus, \(h(\Gamma_{i}^{D})=2\). By Theorem 3.4, \(\Gamma_{i}^{D}\) is endotrivial.
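The homology computation above can also be checked mechanically in the smallest case. The following is a small, self-contained Python sketch (not part of the argument, and not code from this paper; all names and conventions in it are ours) that verifies Lemma 6.10 for \(i=3\), i.e. for \(D_{8}\) over \(\mathbb{F}_{2}\), by computing ranks of the two differentials.

```python
# Minimal check of Lemma 6.10 for i = 3 (the dihedral group D_8) over GF(2).
# Elements a^m b^e are encoded as pairs (m, e); H1 = <b>, H2 = <ab>.

def mult(x, y):
    # (a^m b^e)(a^n b^f) = a^(m + (-1)^e n) b^(e+f), using b a b^{-1} = a^{-1}
    (m, e), (n, f) = x, y
    return ((m + (-1) ** e * n) % 4, (e + f) % 2)

G = [(m, e) for m in range(4) for e in range(2)]
H1, H2 = [(0, 0), (0, 1)], [(0, 0), (1, 1)]

def cosets(H):
    seen, out = set(), []
    for g in G:
        c = frozenset(mult(g, h) for h in H)
        if c not in seen:
            seen.add(c)
            out.append(c)
    return out

C1, C2 = cosets(H1), cosets(H2)          # four cosets each

def rank_gf2(rows, ncols):
    # Gaussian elimination over GF(2); rows is a list of 0/1 lists.
    rows, rank = [r[:] for r in rows], 0
    for col in range(ncols):
        piv = next((i for i in range(rank, len(rows)) if rows[i][col]), None)
        if piv is None:
            continue
        rows[rank], rows[piv] = rows[piv], rows[rank]
        for i in range(len(rows)):
            if i != rank and rows[i][col]:
                rows[i] = [(a + b) % 2 for a, b in zip(rows[i], rows[rank])]
        rank += 1
    return rank

# d2 : kG -> k[G/H1] (+) k[G/H2],  x |-> (xH1, xH2)
d2 = []
for g in G:
    row = [0] * (len(C1) + len(C2))
    row[C1.index(frozenset(mult(g, h) for h in H1))] = 1
    row[len(C1) + C2.index(frozenset(mult(g, h) for h in H2))] = 1
    d2.append(row)

# d1 : k[G/H1] (+) k[G/H2] -> k, the signed augmentation (signs vanish in char 2)
d1 = [[1] for _ in range(len(C1) + len(C2))]

assert all(sum(r) % 2 == 0 for r in d2)   # d1 o d2 = 0
r2, r1 = rank_gf2(d2, len(C1) + len(C2)), rank_gf2(d1, 1)
dims = (len(G) - r2,                      # dim H_2 = dim ker d2
        (len(C1) + len(C2) - r1) - r2,    # dim H_1 = dim ker d1 - dim im d2
        1 - r1)                           # dim H_0 = dim coker d1
print(dims)                               # expected: (1, 0, 0)
```

The printed dimensions \((1,0,0)\) match \(H_{2}\cong k\) and \(H_{1}=H_{0}=0\).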
_Remark 6.11_.: In relative syzygy terminology, the previous lemma implies \(\Omega(\Omega_{D_{2^{n}}/H_{1}\sqcup D_{2^{n}}/H_{2}})\cong k\).
In fact, \(\Gamma_{i}^{D}\) generates \(\partial\mathcal{E}_{k}(D_{2^{i}})\).
**Theorem 6.12**.: _Let \(i\geq 3\). Then \(\partial\mathcal{E}_{k}(D_{2^{i}})=\langle\Gamma_{i}^{D}\rangle\)._
Proof.: Suppose for contradiction there exists a faithful endotrivial complex \(C\in\partial\mathcal{E}_{k}(D_{2^{i}})\) which is not homotopy equivalent to a repeated tensor product of \(\Gamma_{i}^{D}\) or its dual. We consider the h-marks of \(C\). Recall \(h(\Gamma_{i}^{D})=2\), \(h(\Gamma_{i}^{D}(\langle b\rangle))=h(\Gamma_{i}^{D}(\langle ab\rangle))=1\). Suppose \(h(C)=d\), \(h(C(\langle b\rangle))=e\), and \(h(C(\langle ab\rangle))=f\).
Since \(C\) is not homotopy equivalent to a multiple of \(\Gamma_{i}^{D}\), at least one of \(d\neq 2e\) or \(d\neq 2f\) must hold. First, assume \(d\neq 2e\). Set \(C^{\prime}=C\otimes_{k}(\Gamma_{i}^{D})^{\otimes(-e)}\). Then \(h(C^{\prime})=d-2e\neq 0\), \(h(C^{\prime}(\langle b\rangle))=e-e=0\). We have \(Z(D_{2^{i}})=\langle a^{2^{i-2}}\rangle\cong C_{2}\). Now, \(h(C^{\prime}(\langle a^{2^{i-2}}\rangle))=h(C^{\prime}(\langle a^{2^{i-2}},b\rangle))=0\) since \(C^{\prime}\in\partial\mathcal{E}_{k}(D_{2^{i}})\), and finally since \(\langle a^{2^{i-2}}b\rangle=_{G}\langle b\rangle\), \(h(C^{\prime}(\langle a^{2^{i-2}}b\rangle))=0\). Restricting \(C^{\prime}\) to \(\langle a^{2^{i-2}},b\rangle\cong V_{4}\) yields an endotrivial complex of \(kV_{4}\)-modules, again denoted \(C^{\prime}\), for which \(h(C^{\prime})\neq 0\) but \(h(C^{\prime}(H))=0\) for all nontrivial subgroups \(1<H\leq V_{4}\). However, no such complex exists since \(\partial\mathcal{E}_{k}(V_{4})\) is trivial. Thus, this case cannot occur.
Otherwise, assume that \(d=2e\) but \(d\neq 2f\). Set \(C^{\prime}=C\otimes_{k}(\Gamma_{i}^{D})^{\otimes(-f)}\), then it follows by the same argument as before that \(C^{\prime}\) restricted to \(\langle a^{2^{i-2}},ab\rangle\cong V_{4}\) yields a nontrivial endotrivial complex \(C^{\prime}\in\partial\mathcal{E}_{k}(V_{4})\), a contradiction.
**Corollary 6.13**.: \(\Lambda(-):\mathcal{E}_{k}(D_{2^{n}})\to O(T(kD_{2^{n}}))\) _is surjective. \(\operatorname{rk}_{\mathbb{Z}}\mathcal{E}_{k}(D_{2^{n}})=n+1\)._
Proof.: We have shown that \(\Lambda:\partial\mathcal{E}_{k}(D_{2^{n}})\to\partial B(D_{2^{n}})^{\times}\) is surjective for \(n\geq 3\), and the case \(n=1\) is clear from the abelian case. Moreover, it follows from Matsuda's theorem [17, 4.5] that \(\partial B(V_{4})^{\times}=0\). Thus, \(\Lambda\) is surjective on all components of the decomposition, hence surjective. The final statement follows by counting.
### \(\mathcal{E}_{k}(G)\) for generalized quaternion \(2\)-groups
_Remark 6.14_.: For \(Q_{2^{n}}\), \(n\geq 3\), given by the presentation
\[Q_{2^{n}}=\langle a,b\mid a^{2^{n-1}}=1,{}^{b}a=a^{-1},a^{2^{n-2}}=b^{2}\rangle,\]
the normal subgroup structure of \(Q_{2^{n}}\) for \(n\geq 3\) is similar to the normal subgroup structure of \(D_{2^{n}}\). There are two normal subgroups isomorphic to \(Q_{2^{n-1}}\) and for each \(i\in\{0,\ldots,n-1\}\), there is a normal subgroup isomorphic to \(C_{2^{i}}\) generated by some power of \(a\). One may check via the presentation that for \(i\geq 1\), the quotient \(Q_{2^{n}}/C_{2^{i}}\) is isomorphic to \(D_{2^{n-i}}\).
It follows that
\[\mathcal{E}_{k}(Q_{2^{n}})\cong\partial\mathcal{E}_{k}(Q_{2^{n}})\times \partial\mathcal{E}_{k}(C_{2})\times\partial\mathcal{E}_{k}(C_{2})\times\prod _{i=0}^{n-1}\partial\mathcal{E}_{k}(D_{2^{i}}),\]
recalling that \(\partial\mathcal{E}_{k}(V_{4})=0\). We determined in Proposition 6.2 that \(\partial\mathcal{E}_{k}(Q_{2^{n}})\) is generated by a truncated periodic resolution of \(k\) of period \(4\). In endotrivial module terminology, this implies \([\Omega(k)]\in\mathcal{T}(Q_{2^{n}})\) is torsion with order \(4\). All other faithful constituents we have already determined in the previous subsections as well, so we have a complete set of generators of \(\mathcal{E}_{k}(Q_{2^{n}})\).
**Theorem 6.15**.: \(\Lambda(-):\mathcal{E}_{k}(Q_{2^{n}})\to O(T(kQ_{2^{n}}))\) _is surjective. \(\operatorname{rk}_{\mathbb{Z}}\mathcal{E}_{k}(Q_{2^{n}})=n+1\)._
Proof.: We have shown previously that \(\Lambda:\partial\mathcal{E}_{k}(P)\to\partial B(P)^{\times}\) is surjective when \(P\) is dihedral or cyclic. Moreover, \(\partial B(Q_{2^{n}})\) is trivial, so \(\Lambda\) is surjective on all components of the decomposition of \(\mathcal{E}_{k}(Q_{2^{n}})\), hence surjective.
### \(\mathcal{E}_{k}(G)\) for semidihedral \(2\)-groups
_Remark 6.16_.: For \(SD_{2^{n}}\), \(n\geq 4\), given by the presentation
\[\langle a,b\mid a^{2^{n-1}}=b^{2}=1,{}^{b}a=a^{2^{n-2}-1}\rangle,\]
the normal subgroup structure is similar to the previous normal subgroup structures. There are three subgroups of index two, one isomorphic to \(D_{2^{n-1}}\) given by \(\langle a^{2},b\rangle\), one isomorphic to \(Q_{2^{n-1}}\) given by \(\langle a^{2},ab\rangle\), and one isomorphic to \(C_{2^{n-1}}\) given by \(\langle a\rangle\). Additionally, for each \(i\in\{0,\ldots,n-2\}\) there is a normal subgroup isomorphic to \(C_{2^{i}}\). One may compute via the generators that for \(1\leq i\leq n-2\), the quotient \(SD_{2^{n}}/C_{2^{i}}\) is isomorphic to \(D_{2^{n-i}}\).
It follows that
\[\mathcal{E}_{k}(SD_{2^{n}})\cong\partial\mathcal{E}_{k}(SD_{2^{n}})\times\partial\mathcal{E}_{k}(C_{2})\times\partial\mathcal{E}_{k}(C_{2})\times\prod_{i=0}^{n-1}\partial\mathcal{E}_{k}(D_{2^{i}}),\]
recalling that \(\partial\mathcal{E}_{k}(V_{4})=0\). It remains only to compute \(\partial\mathcal{E}_{k}(SD_{2^{n}})\). \(SD_{2^{n}}\) has two conjugacy classes of subgroups of order \(2\), the center \(Z:=Z(SD_{2^{n}})\) and a full conjugacy class of subgroups contained in \(D_{2^{n-1}}\). Let \(H\) be a representative of this conjugacy class of noncentral subgroups; then \(N_{SD_{2^{n}}}(H)=ZH\cong V_{4}\). Moreover, all subgroups of order at least \(4\) contain \(Z(SD_{2^{n}})\), so the only h-marks which can be nonzero are at \(1\) and \(H\).
Let \(N=N_{G}(H)\). Suppose we have a faithful endotrivial complex \(C\in\partial\mathcal{E}_{k}(G)\). Restricting to \(N\cong V_{4}\), we have that the h-marks at \(Z\) and \(N\) are \(0\) by faithfulness. Suppose the h-mark at \(H\) is \(i\); then the remaining subgroup of order \(2\) in \(N\) is \(SD_{2^{n}}\)-conjugate to \(H\), so it has h-mark \(i\) as well. By the classification of \(\mathcal{E}_{k}(V_{4})\), it follows that the h-mark at \(1\) of \(C\) is \(h(C)=2i\).
Now, since \(Q_{2^{n-1}}\cap H=1\), upon restriction to \(Q_{2^{n-1}}\), the h-mark of \(C\) at \(1\) is \(2i\) and \(0\) elsewhere. By the classification of \(\partial\mathcal{E}_{k}(Q_{2^{n-1}})\), \(i\) must be even, that is, the h-mark at \(1\) must be a multiple of \(4\). We will construct a complex \(\Gamma_{n}^{S}\) with \(h(\Gamma_{n}^{S})=4\) and \(h(\Gamma_{n}^{S}(H))=2\); from the previous paragraphs, it will follow that \(\langle\Gamma_{n}^{S}\rangle_{\mathbb{Z}}=\partial\mathcal{E}_{k}(SD_{2^{n}})\).
To construct this complex, we take a slight detour to recall a theorem of Carlson and Thevenaz in their classification of \(\mathcal{T}(G)\) for \(p\)-groups.
_Theorem 6.17_.: [10, 7.1] \(\mathcal{T}(SD_{2^{n}})\cong\mathbb{Z}\oplus\mathbb{Z}/2\mathbb{Z}\). \(\Omega_{1}(k)\) generates the torsion-free part of \(\mathcal{T}(SD_{2^{n}})\) and \(\Omega_{1}(\Omega_{SD_{2^{n}}/H})\) is the lone nontrivial torsion element of \(\mathcal{T}(SD_{2^{n}})\)._
Let \(E=\Omega(\Omega_{SD_{2^{n}}/H})\). We first define the chain complex corresponding to the construction of \(E\),
\[U=P\to k[SD_{2^{n}}/H]\to k,\]
with \(d_{1}:k[SD_{2^{n}}/H]\to k\) the augmentation map, \(P\) the projective cover of \(\Omega_{SD_{2^{n}}/H}\), and \(d_{2}\) the corresponding covering. Thus \(H_{2}(U)=E\) and \(H_{i}(U)=0\) for \(i\neq 2\). It is routine to compute that \(U(H)\cong kC_{2}\to k\), and \(U(K)\cong k[0]\) for all \(1,H\neq K\leq SD_{2^{n}}\).
Now, it follows by the Künneth formula that \(H_{4}(U^{\otimes 2})\cong E\otimes_{k}E\) and \(H_{i}(U^{\otimes 2})=0\) for \(i\neq 4\). Moreover, by multiplicativity of the Brauer construction \(H_{2}(U^{\otimes 2}(H))\cong k\), \(H_{i}(U^{\otimes 2}(H))=0\) for \(i\neq 2\), and \(U^{\otimes 2}(K)\cong k[0]\) for \(1,H\neq K\leq SD_{2^{n}}.\) Since \([E]^{2}=1\) in \(\mathcal{T}(SD_{2^{n}})\), \(E\otimes_{k}E\cong k\oplus P^{\prime}\) for some projective \(P^{\prime}\). Denoting the differentials of \(U^{\otimes 2}\) by \(\{e_{i}\}\), we have
\[\ker e_{4}\cong k\oplus P^{\prime}\subset P^{\otimes 2}=(U^{\otimes 2})_{4}.\]
Since \(P^{\prime}\) is an injective module as well, it follows that \(P^{\prime}\) is a direct summand of \((U^{\otimes 2})_{4}\), so we have a chain complex homomorphism \(\pi_{P^{\prime}}\) between \(U^{\otimes 2}\) and \(P^{\prime}\), the latter regarded as a chain complex concentrated in degree \(4\), given by projection onto the summand \(P^{\prime}\).
This is well-defined since \(P^{\prime}\subset\ker e_{4}\). Define \(\Gamma_{n}^{S}:=C(\pi_{P^{\prime}})\), the mapping cone of \(\pi_{P^{\prime}}\). It is routine to compute that \(H_{4}(\Gamma_{n}^{S})\cong k\) and \(H_{i}(\Gamma_{n}^{S})=0\) for all \(i\neq 4\). Since we constructed \(\Gamma_{n}^{S}\) by adding projective modules to a single component, the resulting complexes when taking the Brauer construction at any nontrivial subgroup remain unchanged, and thus \(\Gamma_{n}^{S}\) is endotrivial by Theorem 3.4.
The restriction of \(\Gamma_{n}^{S}\) to any of the three maximal subgroups \(D_{2^{n-1}}\), \(Q_{2^{n-1}}\), \(C_{2^{n-1}}\) yields a faithful endotrivial complex. However, only its restriction to \(Q_{2^{n-1}}\) yields a generator of the faithful constituent.
**Theorem 6.18**.: \(\Lambda(-):\mathcal{E}_{k}(SD_{2^{n}})\to O(T(kSD_{2^{n}}))\) _is surjective. \(\operatorname{rk}_{\mathbb{Z}}\mathcal{E}_{k}(SD_{2^{n}})=n+1\)._
Proof.: We have shown previously that \(\Lambda:\partial\mathcal{E}_{k}(P)\to\partial B(P)^{\times}\) is surjective when \(P\) is dihedral, cyclic, or quaternion. Moreover, \(\partial B(SD_{2^{n}})\) is trivial, so \(\Lambda\) is surjective on all components of the decomposition of \(\mathcal{E}_{k}(SD_{2^{n}})\), hence surjective. The final statement follows by counting.
## 7 Not all orthogonal units lift to endotrivial complexes
In this section, we describe a Galois invariance condition which orthogonal units of the trivial source ring must satisfy in order to be lifted to endotrivial complexes. In particular, we obtain a criterion for when orthogonal units cannot be lifted.
**Definition 7.1**.: Let \(k\) be any field and \(\varphi:k\to k\) any field automorphism of \(k\). \(\varphi\) induces a ring automorphism on the group algebra \(\varphi:kG\to kG\) which acts trivially on group elements. Given any \(kG\)-module \(M\), precomposing by \(\varphi^{-1}\) induces a new \(kG\)-module \({}^{\varphi}M:=\operatorname{iso}_{\varphi^{-1}}M\). This induces an endofunctor \({}^{\varphi}(-):{}_{kG}\mathbf{mod}\to{}_{kG}\mathbf{mod}\).
Denote \(kG\)-multiplication for \({}^{\varphi}M\) by \(\cdot_{\varphi}\). Multiplication for \({}^{\varphi}M\) behaves as follows, for \(g\in G,m\in M,c\in k\):
\[g\cdot_{\varphi}m=gm,\quad c\cdot_{\varphi}m=\varphi(c)m.\]
If \(k\) is a finite field extension of \(\mathbb{F}_{p}\), then \(\operatorname{Aut}(k)=\operatorname{Gal}(k/\mathbb{F}_{p})=\langle F\rangle\), where \(F\) is the Frobenius automorphism of \(k\),
\[F:k\to k,\quad x\mapsto x^{p}.\]
Given any \(kG\)-module \(M\), call \({}^{F}M\) the _Frobenius twist_ of \(M\).
**Proposition 7.2**.:
1. \({}^{\varphi}(-)\) _is an exact, additive functor which commutes with tensor products._
2. _If_ \(k/\mathbb{F}_{p}\) _is a finite field extension,_ \({}^{\varphi}(-)\) _restricts to the identity functor on_ \({}_{kG}\mathbf{perm}\) _and restricts to an autoequivalence on_ \({}_{kG}\mathbf{triv}\)_._
3. _If_ \(k/\mathbb{F}_{p}\) _is a finite field extension, then_ \({}^{\varphi}(-)\) _commutes with the Brauer construction, i.e. for any_ \(M\in{}_{kG}\mathbf{mod}\) _and_ \(P\in s_{p}(G)\)_,_ \({}^{\varphi}(M(P))=({}^{\varphi}M)(P)\)_. This is natural._
4. _Let_ \(\chi\in\operatorname{Hom}(G,k^{\times})\) _and let_ \(k_{\chi}\) _be the associated simple 1-dimensional representation. Then_ \({}^{\varphi}k_{\chi}\cong k_{\varphi^{-1}\circ\chi}\)_._
Proof.: (a) and (b) are straightforward. (c) follows from the property \({}^{\varphi}(M^{P})=({}^{\varphi}M)^{P}\) and since \({}^{\varphi}(-)\) does not alter the group action on \(M\), the quotient term in the Brauer construction remains similarly unaltered. Since the functor does not alter morphisms, it is a natural isomorphism.
For (d), regarding both \({}^{\varphi}k_{\chi}\) and \(k_{\varphi^{-1}\circ\chi}\) as 1-dimensional \(k\)-vector spaces, \(\varphi^{-1}\) induces a map \({}^{\varphi}k_{\chi}\to k_{\varphi^{-1}\circ\chi}\). The map is bijective, and we claim it is a \(kG\)-module isomorphism. We compute, for \(m\in{}^{\varphi}k_{\chi}\), \(g\in G\), and \(c\in k\):
\[\varphi^{-1}(g\cdot_{\varphi}m)=\varphi^{-1}(gm)=\varphi^{-1}(\chi(g)m)=( \varphi^{-1}\circ\chi(g))\varphi^{-1}(m)=g\cdot\varphi^{-1}(m)\]
\[\varphi^{-1}(c\cdot_{\varphi}m)=\varphi^{-1}(\varphi(c)m)=c\cdot\varphi^{-1}(m)\]
Thus, \({}^{\varphi}k_{\chi}\cong k_{\varphi^{-1}\circ\chi}\).
_Remark 7.3_.: \({}^{\varphi}(-)\) induces a ring automorphism on \(T(kG)\) and \(R_{k}(G)\). In the case of 1-dimensional Brauer characters, the image of \(\chi\in R_{k}(G)\) under \({}^{\varphi}(-)\) is \(\varphi^{-1}\circ\chi\), by the previous proposition. Because every orthogonal unit \(u\in O(T(kG))\) can be expressed as a collection of 1-dimensional characters up to a sign via \(\beta_{G}\), i.e. homomorphisms \(G\to k^{\times}\), and \({}^{\varphi}(-)\) commutes with the Brauer construction, it follows that
\[\beta_{G}({}^{\varphi}u)=(\epsilon_{P}\cdot(\varphi^{-1}\circ\rho_{P}))_{P\in s _{p}(G)}.\]
In other words, determining the image of \({}^{\varphi}u\in O(T(kG))\) amounts to post-composing \(\varphi^{-1}\) to each local character. Similarly for \(\Xi\) and \(\mathcal{E}_{k}(G)\), it follows that
\[\Xi({}^{\varphi}C)=(\epsilon(C(P)),\varphi^{-1}\circ H(C(P)))_{P\in s_{p}(G)}.\]
We now focus on Frobenius twists. The fixed points of \(F:k\to k\) are precisely the subfield \(\mathbb{F}_{p}\). Recall \(\mathcal{L}_{G}=\left(\prod_{P\in s_{p}(G)}\operatorname{Hom}(N_{G}(P)/PC_{G }(P),k^{\times})\right)^{\prime}\leq O(T(kG))\), the "local homology" subgroup of \(O(T(kG))\).
**Theorem 7.4**.: _Let_
\[(\rho_{P})_{P\in s_{p}(G)}\in 1\times 1\times\mathcal{L}_{G}\leq O(T(kG)).\]
_If \((\rho_{P})_{P\in s_{p}(G)}\) lifts to an endotrivial complex, considering \(\rho_{1}\) to be the trivial character on \(G\), then \({}^{F}(\rho_{P})=\rho_{P}\) for all \(P\in s_{p}(G)\)._
_Equivalently, if \((\rho_{P})_{P\in s_{p}(G)}\in 1\times 1\times\mathcal{L}_{G}\leq O(T(kG))\) contains a character \(\rho_{P}\) for which \({}^{F}\rho_{P}\neq\rho_{P}\), then there does not exist an endotrivial complex whose Lefschetz invariant (after identifying) is \((\rho_{P})_{P\in s_{p}(G)}\)._
Proof.: Let \(u\in O(T(kG))\) correspond to \((\rho_{P})_{P\in s_{p}(G)}\), and suppose \(u\) has a lift \(C\in\mathcal{E}_{k}(G)\). Then \(H(C(P))=\rho_{P}\), so \({}^{F}C\in\mathcal{E}_{k}(G)\) as well. \({}^{F}C\) has the same h-marks as \(C\), and has as local homology
\[H(({}^{F}C)(P))=H({}^{F}(C(P)))={}^{F}(H(C(P)))={}^{F}(\rho_{P}).\]
Now, \(C\otimes_{k}({}^{F}C)^{*}\) has all h-marks equal to \(0\). Since \(\rho_{1}\) is the trivial representation and \(\ker\epsilon=\operatorname{Hom}(G,k^{\times})\leq O(T(kG))\), it follows that \(\rho_{P}\cdot({}^{F}\rho_{P})^{*}\) is also the trivial representation. Thus \(\rho_{P}={}^{F}(\rho_{P})\).
Say \((\rho_{P})_{P\in s_{p}(G)}\in\mathcal{L}_{G}\) is Frobenius-stable if \({}^{F}(\rho_{P})_{P\in s_{p}(G)}=(\rho_{P})_{P\in s_{p}(G)}.\) Say \(u\in O(T(kG))\) is Frobenius-stable if the \(\mathcal{L}_{G}\)-constituent of \(\beta_{G}(u)\) is Frobenius-stable.
**Corollary 7.5**.: _Let \(u\in O(T(kG))\) with \(\beta_{G}(u)=(\epsilon_{P}\cdot\rho_{P})\). If \(u\) has an endotrivial lift \(C\in\mathcal{E}_{k}(G)\), then \(u\) is Frobenius-stable._
_Moreover, each \(\rho_{P}\cdot\rho_{1}^{-1}\) takes values in the \(d\)th roots of unity, for some \(d\mid p-1\). In particular, if for all primes \(q\) which divide \(|G|\), \(q\nmid p-1\), then the only element of \(\mathcal{L}_{G}\) which lifts to an endotrivial complex is the identity._
Proof.: This follows by applying a similar proof as in the previous theorem to the collection of signed tuples \(\beta_{G}(u)\) (see Remark 4.4) and twisting by \(\rho_{1}\) so that the global homology is trivial. Note that the signs \(\epsilon_{P}\) are \(F\)-invariant. The rest are basic observations.
_Example 7.6_.:
1. Let \(p=2\), let \(k\) be a finite field of characteristic 2 containing a primitive 3rd root of unity \(\omega\), and let \(G=A_{4}\). \(kA_{4}\) has three projective indecomposables, given by \(P_{1}=k[A_{4}/C_{3}]\), \(P_{2}=k_{\omega}\otimes_{k}k[A_{4}/C_{3}]\), and \(P_{3}=k_{\omega^{2}}\otimes_{k}k[A_{4}/C_{3}]\), where \(k_{\omega}\) is the simple representation of dimension 1 for which \((123)\cdot 1=\omega\).
Set \(u=k_{\omega}+P_{1}-P_{2}\). One may compute that \(\beta_{G}(u)=(\chi_{1},\chi_{C_{2}},\chi_{V_{4}})\) with \(\chi_{V_{4}}\in\operatorname{Hom}(A_{4}/V_{4},k^{\times})=k_{\omega}\), \(\chi_{C_{2}}\in\operatorname{Hom}(V_{4}/V_{4},k^{\times})=k\), and \(\chi_{1}\in\operatorname{Hom}(A_{4}/A_{4},k^{\times})=k\), so \(u\in O(T(kG))\). \({}^{F}(\chi_{V_{4}})=k_{\omega^{2}}\), so \(u\) cannot lift to an endotrivial complex.
2. On the other hand, some orthogonal units \(u\in 1\times 1\times\mathcal{L}_{G}\leq O(T(kG))\) indeed lift to endotrivial complexes when \(p\) is a sufficiently large divisor of \(|G|\). Let \(G=D_{2n}\) for \(n>2\) not a power of \(2\) and \(p\) be any odd prime dividing \(2n\). Consider the image of \(u=[D_{2n}/D_{2n}]-[D_{2n}/H_{1}]-[D_{2n}/H_{2}]+[D_{2n}/1]\in B(D_{2n})^{\times}\) in \(O(T(kD_{2n}))\), where \(H_{1}\) and \(H_{2}\) are pairwise non-conjugate non-central subgroups of order \(2\). One may verify the unit has a lift similar in construction to the complex presented in Remark 6.8, \[C=0\to kD_{2n}\to k[D_{2n}/H_{1}]\oplus k[D_{2n}/H_{2}]\to k\to 0.\] In fact, this construction can be generalized, and is independent of characteristic after adding signs. Then, it is straightforward to check that \(k_{-}\otimes_{k}k[u]\in 1\times 1\times\mathcal{L}_{G}\), and both terms in the tensor product lift to endotrivial complexes, thus the tensor does as well.
|
2309.05444 | Pushing Mixture of Experts to the Limit: Extremely Parameter Efficient
MoE for Instruction Tuning | The Mixture of Experts (MoE) is a widely known neural architecture where an
ensemble of specialized sub-models optimizes overall performance with a
constant computational cost. However, conventional MoEs pose challenges at
scale due to the need to store all experts in memory. In this paper, we push
MoE to the limit. We propose extremely parameter-efficient MoE by uniquely
combining MoE architecture with lightweight experts. Our MoE architecture
outperforms standard parameter-efficient fine-tuning (PEFT) methods and is on
par with full fine-tuning by only updating the lightweight experts -- less than
1% of an 11B parameters model. Furthermore, our method generalizes to unseen
tasks as it does not depend on any prior task knowledge. Our research
underscores the versatility of the mixture of experts architecture, showcasing
its ability to deliver robust performance even when subjected to rigorous
parameter constraints. Our code used in all the experiments is publicly
available here: https://github.com/for-ai/parameter-efficient-moe. | Ted Zadouri, Ahmet Üstün, Arash Ahmadian, Beyza Ermiş, Acyr Locatelli, Sara Hooker | 2023-09-11T13:31:00Z | http://arxiv.org/abs/2309.05444v1 | # Pushing Mixture of Experts to the Limit: Extremely Parameter Efficient MoE for Instruction Tuning
###### Abstract
The Mixture of Experts (MoE) is a widely known neural architecture where an ensemble of specialized sub-models optimizes overall performance with a constant computational cost. However, conventional MoEs pose challenges at scale due to the need to store all experts in memory. In this paper, we push MoE to the limit. We propose extremely parameter-efficient MoE by uniquely combining MoE architecture with lightweight experts. Our MoE architecture outperforms standard parameter-efficient fine-tuning (PEFT) methods and is on par with full fine-tuning by only updating the lightweight experts - less than 1% of an 11B parameters model. Furthermore, our method generalizes to unseen tasks as it does not depend on any prior task knowledge. Our research underscores the versatility of the mixture of experts architecture, showcasing its ability to deliver robust performance even when subjected to rigorous parameter constraints. Our code used in all the experiments is publicly available here: [https://github.com/for-ai/parameter-efficient-moe](https://github.com/for-ai/parameter-efficient-moe).
## 1 Introduction
A conventional training paradigm is to apply the weights of a model to each input. Arguably, this is not efficient since a given input may not need all of a model's capacity. In contrast, MoEs build on the premise that sub-modular components - so called experts - can specialize to different types of inputs. This emphasis on conditional computation has important efficiency side-effects such as constant inference cost. This has made MoEs an area of significant research and widespread adoption in the era of large-scale Transformers where scaling has increased deployment and latency costs (Shazeer et al., 2018; Riquelme et al., 2021; Du et al., 2022; Fedus et al., 2022).
While the majority of work to date has focused on MoEs as a pretraining strategy, the inherent motivation of MoEs is not confined solely to pretraining. In fact, the merits of MoEs are arguably well suited to an _instruction fine-tuning_ setting where the data is often deliberately structured to
represent a diverse set of tasks, often referred to as multi-task finetuning (Chung et al., 2022; Wei et al., 2022; Sanh et al., 2022; Longpre et al., 2023; Muennighoff et al., 2023).
In this work, we pose the question _can we leverage MoEs for instruction fine-tuning?_ One of the main drawbacks of the MoE paradigm is that it introduces an extremely large number of total parameters (Fedus et al., 2022). Despite the conditional computation, fully fine-tuning an MoE architecture is extremely computationally demanding given the need to update all the parameters. For most practitioners, given the scale of modern LLMs (Brown et al., 2020; Touvron et al., 2023; Kaplan et al., 2020; Anil et al., 2023), this is an infeasible computational cost.
Thus, we focus on a more realistic setting for everyday practitioners - _can we successfully apply MoEs to parameter-efficient fine-tuning (PEFT)_ methods such as (IA)\({}^{3}\) (Liu et al., 2022) or LORA (Hu et al., 2021), which only fine-tune a far smaller number of parameters? This is a significant challenge, not only because our aim is to update only a small percentage of all parameters, but also because we must navigate the optimization challenges inherent to MoEs, already noted by prior work (Chen et al., 2022), in a more constrained environment.
In this work, we propose a new framework that leverages the benefits of MoE in a severely constrained computational environment. We introduce **Mixture of Vectors (MoV)** and **Mixture of LORA (MoLORA)**, a parameter-efficient adaptation of the Mixture of Experts approach. Unlike the standard MoE, our framework can be utilized in a parameter-limited setting due to its lightweight nature. Remarkably, our method achieves performance parity with full fine-tuning on unseen tasks by updating less than 1% of the parameters. It also easily outperforms base parameter-efficient techniques like (IA)\({}^{3}\) or LORA.
We achieve consistent results across T5 models (Raffel et al., 2020) ranging from 770M to 11B parameters, across 12 different tasks from 55 datasets in P3 (Sanh et al., 2022). In summary, our contributions are as follows:
1. We present extremely parameter-efficient MoEs. This architecture leverages MoEs in a more realistic setting using modular and lightweight experts. Our MoEs can be used to fine-tune a dense model by updating less than 1% of its parameters.

Figure 1: _Left_: Our mixture of PEFT experts outperforms SOTA single PEFT methods using a comparable amount of parameters demonstrated for T5-XL (3B). _Right_: Mixture of PEFT approach scales up to 11B; with tiny parameter updates, it approximates or matches full fine-tuning performance.
2. Instruction fine-tuning with our proposed methods consistently outperforms traditional parameter efficient methods on unseen tasks, while maintaining high parameter efficiency across different scales. The mixture of \(\left(\text{IA}\right)^{3}\) vectors (MoV) achieves up to 14.57% and 8.39% improvements over the standard \(\left(\text{IA}\right)^{3}\) at 3B and 11B model sizes respectively. This superiority holds across different model sizes, types of experts and trainable parameter budgets.
3. We show that our recipe can match the performance of _full fine-tuning_ at large scales while updating a tiny fraction of the model parameters. Our results across 8 unseen tasks show that our MoV which updates just 0.32% and 0.86% of the parameters in the 3B and 11B models achieves _highly competitive_ performance to full fine-tuning with a significantly reduced computational cost.
4. Finally, we present an extensive set of ablation studies that systematically evaluate the efficacy of various MoE architectures and PEFT strategies at various model sizes, different adapter types, the number of experts, routing mechanisms, and the importance of optimizing hyper-parameters, especially given the sensitivity of MoE.
## 2 Methodology
The instruction tuning setup is formulated as follows: there is a set of tasks divided into training and held-out evaluation tasks, \(T=T_{\text{train}}\cup T_{\text{eval}}\). The base pretrained model is first fine-tuned on \(T_{train}\) and then evaluated in a zero-shot manner on each unseen task from \(T_{eval}\). The standard approach is to fine-tune all model parameters, which causes high compute and memory costs. Our method offers an efficient alternative using a parameter-efficient mixture of experts. In this section, we describe our framework in detail.
### Parameter-efficient Fine-tuning with \(\left(\text{IA}\right)^{3}\) and LORA Adapters
In this work, we push the mixture of experts (MoE) architecture to an extreme degree of parameter efficiency using _parameter-efficient fine-tuning_ (PEFT) methods. PEFT methods address the challenges associated with updating a large number of parameters - especially emerging at scale when fully fine-tuning an LLM - by restricting weight updates to a limited number of parameters. To show how our method scales with different PEFT techniques, we experiment with both \(\left(\text{IA}\right)^{3}\) and LORA. These methods add a small number of parameters to the existing pre-trained model. We briefly introduce each PEFT method below:
\(\left(\text{IA}\right)^{3}\) introduces three new vectors, \(l_{\text{k}}\in\mathbb{R}^{d_{\text{k}}}\), \(l_{\text{v}}\in\mathbb{R}^{d_{\text{v}}}\), \(l_{\text{ff}}\in\mathbb{R}^{d_{\text{ff}}}\) which re-scale key and value activations in self-attention, and intermediate activations in position-wise feed-forward layers:
\[\text{softmax}\left(\frac{Q(l_{\text{k}}\odot K^{T})}{\sqrt{d_{\text{k}}}} \right)(l_{\text{v}}\odot V);\ \ \left(l_{\text{ff}}\odot\gamma\ (W_{1}x)\right)W_{2}\] ( \[\left(\text{IA}\right)^{3}\] )
where \(Q\), \(K\), \(V\) are query, key, and value projection matrices for self-attention, and \(W_{1}\), \(W_{2}\) are frozen weights of the feed-forward layers in the pretrained model. Since \(\left(\text{IA}\right)^{3}\) only updates \(l_{\text{k}}\), \(l_{\text{v}}\),
and \(l_{\text{ff}}\) rescaling vectors for each Transformer layer*, it is extremely parameter-efficient. For the 3 billion parameter T5 model (Raffel et al., 2020), it only updates 0.018% of the total parameters.
Footnote *: For an encoder-decoder model with L number of layers in both sides, (IA)\({}^{3}\) only introduces \(L(d_{\text{k}}+d_{\text{v}}+d_{\text{fl}})\) new parameters for encoder and \(L(2d_{\text{k}}+2d_{\text{v}}+d_{\text{fl}})\) for decoder, due to the additional encoder-decoder attention block.
Note that, unlike adapters (Houlsby et al., 2019) or prompt-tuning (Lester et al., 2021), the number of new parameters inserted by (IA)\({}^{3}\) is determined by the architecture as the scaling vectors need to be the same size with the corresponding activation dimensions.
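To make the rescaling mechanics concrete, here is a minimal JAX-style sketch of the \(\left(\text{IA}\right)^{3}\) operations above (illustrative only: tensor names and shapes are ours, and multi-head structure, masking, and T5's gated activation \(\gamma\) are omitted):

```python
import jax.numpy as jnp
from jax import nn

def ia3_attention(q, k, v, l_k, l_v):
    # q, k: [seq, d_k], v: [seq, d_v]; l_k, l_v are the learned rescaling vectors.
    scores = q @ (l_k * k).T / jnp.sqrt(k.shape[-1])
    return nn.softmax(scores, axis=-1) @ (l_v * v)

def ia3_ffn(x, W1, W2, l_ff):
    # W1: [d_m, d_ff] and W2: [d_ff, d_m] stay frozen; only l_ff is trained.
    return (l_ff * nn.relu(x @ W1)) @ W2   # gamma shown as ReLU for simplicity
```

Only \(l_{\text{k}}\), \(l_{\text{v}}\), and \(l_{\text{ff}}\) receive gradients; all pretrained weights remain frozen.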
**Low-Rank adaptation** (LORA; Hu et al., 2021) optimizes a low-rank decomposition of the weight updates to dense layers in LLMs. For a pre-trained weight matrix \(W_{0}\in\mathbb{R}^{d_{\text{m}}\times d_{\text{p}}}\) and input activation \(x\in\mathbb{R}^{d_{\text{m}}}\), LORA decomposes the update \(\Delta W\) into two low-rank matrices:
\[h=W_{0}x+\Delta Wx=W_{0}x+BAx\] (LORA)
where \(B\in\mathbb{R}^{d_{\text{p}}\times r}\), \(A\in\mathbb{R}^{r\times d_{\text{m}}}\), and the rank \(r\leq\min(d_{\text{m}},d_{\text{p}})\). During fine-tuning, all pretrained weights are frozen, and only the \(A\) and \(B\) matrices are updated.
LORA adaptation can be used for all the linear layers in each Transformer block including query \(Q\), key \(K\), value \(V\), and output \(O\) of the self-attention and the feed-forward layers \(W_{1}\) and \(W_{2}\). Unlike (IA)\({}^{3}\), LORA adaptation offers more flexibility in terms of the parameters used. We can adjust the capacity by incrementing the rank \(r\) of the matrix decomposition until it reaches its maximum, determined by \(r=\min(d_{\text{m}},d_{\text{p}})\). To illustrate its parameter efficiency, for a T5 3B model, LORA with a rank of 4, updates 0.3% of the model parameters.
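Analogously, a minimal sketch of a LORA-adapted linear map (again with our own illustrative names; any extra scaling factor on the update is a hyperparameter and is omitted here):

```python
import jax.numpy as jnp

def lora_linear(x, W0, A, B):
    # x: [d_m], W0: frozen pretrained weight [d_p, d_m];
    # A: [r, d_m] and B: [d_p, r] are the only trainable parameters (r small).
    return W0 @ x + B @ (A @ x)
```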
Figure 2: _Left_: Overview of the MoV architecture highlighting soft-merging where only the vectors and router are updated for each multi-head attention block, as denoted by color. _Right_: JAX-like pseudo-code illustrating the core implementation of a MoV layer.
### Extremely Parameter Efficient Mixture of Experts
We propose an extremely parameter-efficient Mixture of Experts (MoE) framework that leverages lightweight "adapters" as experts on top of a pretrained dense model. Concretely, the MoE is a family of neural network architecture that enables conditional computation through multiple experts that are activated based on a gating mechanism (router). An MoE layer consists of a router network \(R\) and a set of \(n\) experts \(E_{1},...,E_{n}\) where each expert \(E_{i}\) is a parameterized function. Following Fedus et al. (2022), our router network commonly consists of a dense layer with trainable weights \(W_{g}\in\mathbb{R}^{d_{\text{m}}\times n}\) followed by a _softmax_ function which takes an intermediate token representation \(x\) as input and combines the output of each expert based on the gating scores \(s_{1},...,s_{n}\):
\[s_{i}=R(x)_{i}=\text{softmax}(W_{g}^{T}x)\] (Router) \[y=\sum_{i=1}^{n}s_{i}\cdot E_{i}(x)\] (MoE)
For Transformer models (Vaswani et al., 2023), dense feed-forward layers are replaced by MoE layers where each expert \(E_{i}\) corresponds to an independent dense feed-forward network. This multiplies the total number of model parameters as each expert size and number of experts increase. However, in our parameter-efficient MoE architecture, we replace each expert with a lightweight PEFT adapter such as (IA)\({}^{3}\) vectors or LORA adapters. During fine-tuning, pretrained weights of dense layers remain fixed, while experts and router layers are trained from scratch. Unlike the standard MoE, our lightweight experts learn to adapt the pretrained Transformer layers in the fine-tuning time. In this way, our MoE framework requires a limited number of parameter updates and does not introduce a huge model size in total.
In addition to parameter efficiency, our selection of PEFT adapters enables routing computation with _soft merging_. Concretely, since both (IA)\({}^{3}\) vectors and LORA adapters are linear functions, we compute a weighted average of experts first and then apply a PEFT transformation using the combined expert \(E_{mix}\) similar to Muqeeth et al. (2023):
\[E_{mix}=\sum_{i=1}^{n}s_{i}\cdot E_{i};\ \ y=E_{mix}(x)\] (Soft Merging)
We call the variants of our method as _Mixture of Vectors_ (**MoV**) and _Mixture of LORA_ (**MoLORA**) that leverage (IA)\({}^{3}\) vectors or LORA adapters as experts respectively, both demonstrating consistent gains over the corresponding PEFT method. Figure 2 shows the architecture of a MoV layer together with the corresponding pseudo-code. Only updating a small fraction of parameters through MoV and MoLORA has multiple practical benefits not only to training but to inference time, with the latter being unique to MoE architectures. We provide a brief overview of these gains below:
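For illustration, here is a compact sketch of the soft-merging computation in a MoV layer, in the same JAX style as the pseudo-code referenced in Figure 2 (a simplification with our own names; we show a single token and a single rescaling site):

```python
import jax.numpy as jnp
from jax import nn

def mov_rescale(x, h, expert_vecs, W_g):
    # x: [d_m] token representation fed to the router,
    # h: [d] activation to be rescaled (e.g. values or FFN intermediates),
    # expert_vecs: [n_experts, d] lightweight (IA)^3-style expert vectors,
    # W_g: [d_m, n_experts] router weights (trained from scratch).
    s = nn.softmax(W_g.T @ x)      # gating scores s_1, ..., s_n
    l_mix = s @ expert_vecs        # soft merging: weighted average of the experts
    return l_mix * h               # apply the single combined rescaling vector
```

Because the experts are linear, merging them first and applying the combined vector once is equivalent to mixing the experts' individual outputs, but costs only one rescaling per token plus a small router matrix-vector product.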
**Efficiency in training** Our extremely parameter-efficient MoE formulation leads to a significant reduction in memory. The freezing of most parameters during training reduces the computational
overhead of calculating gradients for model parameters but also reduces the memory requirements of storing the optimizer states for the model. The latter can be quite significant depending on the choice of the optimizer, for instance, variants of Adam (Kingma and Ba, 2017) including AdamW (Loshchilov and Hutter, 2019), require twice the memory required for each parameter, to store the optimizer states (estimates for first and second moments) whereas Adafactor (Shazeer and Stern, 2018) reduces this overhead roughly by half through factored estimation of the second-order parameter moments.
**Efficiency at inference** The inherent structural modularity of our MoV and MoLORA methods allows for significant memory gains at inference time. For traditional MoE models, many copies of the full-fledged feed-forward blocks (or even complete replicas of the model based on specific architecture) need to be stored in memory at inference time which is an expensive undertaking. With our methods, regardless of the exact type, only a single copy of the model backbone needs to be stored in memory in addition to lightweight parameter-efficient experts. This leads to a significant reduction in the memory requirements at inference time.
## 3 Experiments
**Dataset** We conduct instruction-tuning experiments using a comprehensive set of prompt instructions from the Public Pool of Prompts (P3) dataset (Sanh et al., 2022). We follow the same procedure as Raffel et al. (2020), where each task is converted into a text-to-text format using the templates provided in Sanh et al. (2022). P3 is a collection of 62 datasets covering a wide variety of tasks.
**Experimental Setup** For the base pretrained models, we use T5 v1.1+LM adaptation (Lester et al., 2021) that includes T5 models of different sizes ranging from 770M to 11B parameters. For all experiments, we fine-tune using the Adafactor optimizer (Shazeer and Stern, 2018) with a learning rate of \(3e^{-4}\). We set the sequence length to 1024 for the input and 256 for the target, following Sanh et al. (2022). For all parameter-efficient MoE variants, we fine-tune T5 models using a batch size of 32 over 500K steps.
**Baselines** We compare our mixture of parameter-efficient experts against both T0 baseline as the fully fine-tuned model, and the standard parameter-efficient fine-tuning methods (IA)\({}^{3}\) and LORA. For T0 baselines, based on our experiments with different hyperparameters, we find that a larger batch size and learning rate result in better performance, thus, we replicated T0 by fine-tuning for 10k steps with a batch size of 256, and a learning rate of \(1e^{-3}\), following Phang et al. (2023) - these hyperparameters achieve significantly higher results as shown in Table 1. For (IA)\({}^{3}\) and LORA with rank=4, we use the same training hyper-parameters such as learning rate of \(3e^{-4}\) and batch of 32 over 500k steps.
**Metrics** Following the zero-shot evaluation presented in T0 Sanh et al. (2022), we test our method and the baselines on 8 held-out (unseen during training) datasets - ANLI (Nie et al., 2020), HellaSwag (Zellers et al., 2019), WinoGrande (Sakaguchi et al., 2019), and 5 Super Glue datasets Wang et al. (2020). These datasets cover different tasks ranging from coreference resolution, natural language inference, multiple-choice question answering, story completion, and word sense disambiguation. We calculate the median accuracy for each evaluation dataset across different prompt templates and then report the per-dataset result together with an average across all datasets. We also include the mean accuracy for all evaluation datasets in the Appendix.
**Infrastructure** All experiments were conducted on TPU v4 machines with up to 256 pod slices. For training, evaluation, and inference of all the models we experimented with, we used the SeqIO and T5X (Roberts et al., 2022) frameworks, which enable data and model parallelization across TPU cores with integrated sequential data processing.
### Ablations
Given that no work to date has studied MoE in extremely parameter-efficient settings, we also seek to understand key characteristics of our proposed methodology by running rigorous ablations. We detail both briefly, along with the experimental set-up, below:
**Routing Input: Token vs Sentence Embeddings** _How does a pronounced inductive bias for task representations in the form of instruction embedding affect routing and downstream generalization?_ In our main MoV and MoLORA methods, router layers take intermediate embeddings of input tokens as input similar to other MoE architectures (Shazeer et al., 2017; Fedus et al., 2022). However, as an alternative, a sentence embedding can be computed for each instruction (prompt with corresponding input) and be used as input for the router (Ye et al., 2022). To compare both - sentence embeddings for each instruction were derived using the Sentence-T5 encoder (Ni et al., 2022), trained with the T5-XL retrieval model (Ni et al., 2021). This encoder was initialized from the pretrained T5 and trained on diverse data sources as outlined in Ni et al. (2022). Without additional fine-tuning, each instruction sequence which consists of a prompt template and the input sentence, was passed to retrieve the embeddings with a dimension of 768.
**Routing Strategy: Soft vs Discrete** _What is the best routing strategy in parameter-efficient MoEs?_ In our MoE framework, we use soft merging of experts as routing strategy. Soft merging refers to a weighted average of all the experts computed within a specified routing block. As an alternative, discrete top-k routing strategy as used in standard MoE architectures introduces the sparsity and decreases the amount of computation (Shazeer et al., 2018; Fedus et al., 2022). In the top-k routing approach, rather than considering all experts for a decision, only the top 'k' experts, determined by the router, are chosen for the computation. Note that, although the computation is conditional to the top-k experts, the required memory depends on the total number of experts.
We evaluate top-k selection with \(k\in\{1,2\}\), as proposed by previous work (Shazeer et al., 2017; Fedus et al., 2022). Results for these strategies are elaborated in Section 4.4. Additionally, we assess discrete routing with top-k using _load balancing_ following Fedus et al. (2022), which promotes balanced top-k selection through an auxiliary loss, aiming for an equitable workload distribution among experts.
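For reference, a minimal sketch of the discrete top-\(k\) alternative evaluated in this ablation (illustrative only; the load-balancing auxiliary loss and batching are omitted, and the names are ours):

```python
import jax
from jax import nn

def topk_merge(x, expert_vecs, W_g, k=2):
    logits = W_g.T @ x                           # [n_experts] router logits
    top_vals, top_idx = jax.lax.top_k(logits, k)
    s = nn.softmax(top_vals)                     # renormalize over the selected experts
    return s @ expert_vecs[top_idx]              # combine only the top-k experts
```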
## 4 Results and Discussion
**Parameter efficient MoEs vs PEFTs** _How does our MoE recipe compare to a single expert PEFT?_ Table 1 compares zero-shot performance of PEFTs methods ((IA)\({}^{3}\) and LORA), and our variants of parameter-efficient MoE (MoV and MoLORA), using T5-3B as the base model. We observe that our MoE variants (MoV and MoLORA) present a significant performance boost over the standard (IA)\({}^{3}\) vectors and LORA adapters.
MoV using 30 experts achieves a 14.57% performance improvement compared to its dense counterpart
(IA)\({}^{3}\). This improvement is consistent across all unseen tasks and is achieved at a marginal increase in the number of updated parameters - only an additional 0.018% parameters per expert. In the context of LORA, our MoLORA equipped with 15 experts, achieves an average median score increase of 5.70%. This improvement is notably less significant when compared to MoV. We attribute this disparity to the difference in updated parameter count in LORA adapters and (IA)\({}^{3}\) vectors (0.3% vs 0.018%). Overall, learning a mixture for both MoV and MoLORA as opposed to a single dense model leads to notable gains in zero-shot performance.
**MoV outperforms MoLORA given the same parameter budget** Between our methods, MoV achieves a better performance-parameter cost trade-off at the 3B parameter base model scale. As shown in the left plot of Figure 1, MoV with 30 experts, updating only 0.68% of all parameters, achieves nearly the same performance as MoLORA with 15 experts, which updates 4.69% of parameters. This shows the effectiveness of our MoE approaches even with tiny experts at a large base model scale.
**Parameter-efficient MoEs vs full fine-tuning** _How does MoE compare to updating all parameters during fine-tuning?_ As shown in Table 1, when compared to the fully fine-tuned T0-3B, our proposed methods MoV and MoLORA, both with 10 experts, are on par with full fine-tuning. This is impressive as MoV-10 only updates 0.32% of all model parameters. Furthermore, when increasing the number of experts from 10 to 30 for MoV and from 10 to 15 for MoLORA, both methods outperform full fine-tuning by a small margin.
### How do parameter-efficient MoEs scale with base model size?
Figure 1 (right) shows the scaling characteristic of MoV with 60 experts compared with (IA)\({}^{3}\) and full fine-tuning for 770M, 3B and 11B parameters base models. We find that across all model sizes we evaluate, our parameter-efficient MoEs consistently maintain higher performance compared to standard PEFTs and achieve comparable results with full fine-tuning.
**MoV benefits from scaling** At all model sizes, MoV-60 significantly outperforms standard (IA)\({}^{3}\). It is also far closer in performance to full fine-tuning than a single expert. For example, at 770M
\begin{table}
\begin{tabular}{l l c c c c c c c c c c} \hline \hline & **Model** & **\% Params.** & **ANLI** & **CB** & **RTE** & **WSC** & **WIC** & **Copa** & **WNG** & **HS** & **Average** \\ \hline \multirow{2}{*}{_Full-FT_} & T0-3B (Sanh et al., 2022) & 100\% & 33.46 & 50.0 & 64.08 & 64.42 & 50.39 & 74.92 & 50.51 & 27.51 & 51.91 \\ & T0-3B (our replication) & 100\% & 41.08 & 80.36 & 76.17 & 53.37 & 53.92 & 88.94 & 57.46 & 29.19 & 60.06 \\ & & & & & & & & & & & \\ & (IA)\({}^{3}\) & 0.018\% & 34.08 & 50.0 & 66.43 & 56.25 & 55.41 & 79.08 & 52.09 & 29.91 & 52.90 \\ \multirow{4}{*}{_PEFT_} & LORA & 0.37\% & 37.5 & 75.57 & 73.53 & 61.02 & 51.25 & 83.6 & 54.33 & 25.32 & 57.51 \\ & & & & & & & & & & & \\ & MoV-10 & 0.32\% & 38.92 & 75.0 & 78.88 & 62.5 & 52.19 & 85.77 & 55.96 & 30.24 & 59.93 \\ & MoV-30 & 0.68\% & 38.7 & 78.57 & 80.87 & 63.46 & 51.1 & 87.25 & 56.27 & 28.63 & 60.61 \\ \multirow{4}{*}{_Our Method_} & MoV-60 & 1.22\% & 38.83 & 76.79 & 74.55 & 60.1 & 52.66 & 89.79 & 55.49 & 30.47 & 59.83 \\ & MoLORA-10 & 3.18\% & 38.5 & 78.57 & 78.16 & 63.46 & 50.86 & 86.5 & 55.41 & 26.72 & 59.77 \\ & MoLORA-15 & 4.69\% & 40.0 & 80.36 & 80.51 & 62.98 & 50.86 & 89.0 & 55.33 & 27.3 & 60.79 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Average median results on unseen tasks for full model fine-tuning (T0), parameter-efficient fine-tuning methods ((IA)\({}^{3}\) and LORA) and our mixtures of parameter-efficient experts (MoV and MoLORA), using the T5-3B base model (Raffel et al., 2020). Note that our replication of T0 performs significantly higher than the original T0, confirming previous work (Phang et al., 2023; Ivison et al., 2023).
parameters, there is a 12.34% performance gap between (IA)\({}^{3}\) and full fine-tuning vs 5.56% for MoV-60. As the base model scales up, MoV becomes more competitive with full fine-tuning. For 3B and 11B parameter models, MoV-60 achieves performance approximately on par with the full fine-tuning, despite updating less than 1.3% of the total parameters.
**MoLORA outperforms MoV in smaller model size regimes** As discussed in the main results, at larger model sizes MoV achieves a better performance-parameter efficiency trade-off than MoLORA. Conversely, at the 770M scale, MoLORA with 10 experts, which updates 3.18% of total parameters, performs better than MoV-60 and nearly matches the performance of full fine-tuning (Figure 3). Finally, similar to MoV, MoLORA achieves higher performance than LORA at both the 770M and 3B scales.
### How does the number of experts impact the downstream performance?
The center plot of Figure 4 shows the performance of MoV with different numbers of experts at all model sizes. We find that increasing the number of experts generally improves unseen task performance. However, this improvement is contingent upon the specific number of experts and the base model size. For both the 770M and 11B parameter base models, our MoV method achieves its best performance with 60 experts. To illustrate, when the number of experts is increased from 10 to 60, the average median accuracy improves from 52.47 to 53.63 for the 770M model and from 62.3 to 64.08 for the 11B model. However, the 3B model reaches its peak accuracy of 60.61 with just 30 experts (updating 0.68% of the parameters), as performance stagnates when 60 experts are used.
This trend of performance improvement by scaling more experts is further corroborated in the context of MoLORA; when scaling experts from sets of (5, 10, 15), there was a corresponding elevation in the average median score, registering at 58.6, 59.77, and 60.79, respectively.
### What is the best routing strategy in parameter-efficient MoEs?
In Figure 4, the rightmost plot shows the overall unseen task performance when using different routing strategies for MoV. Specifically, we compare the _soft merging_ of 10 experts (dashed line) with
Figure 3: Comparison of the top-performing variants from our proposed mixture of PEFT experts versus their dense counterparts across T5-Large (_Left_) and T5-XL (_Right_).
discrete top-2 and top-1 routing. We observe that soft merging significantly outperforms discrete routing in the MoV-10 setting. Specifically, for discrete routing with top-k experts, where k is 1 and 2, the MoE achieves an average median accuracy of 54.92 and 57.45 respectively. In contrast, using the soft merging approach, where all experts are activated, we observe an accuracy of 59.93.
Furthermore, to understand whether we can recover the performance loss of top-k routing by using load balancing, we integrated the load balancing loss following Fedus et al. (2022). However, we find that top-k selection with \(k=2\) and the load balancing loss leads to a further decrease in performance of 1.5 average median score points.
Together, these results show that in extremely parameter-efficient MoE settings, soft merging enables superior performance. Note that top-2 and top-1 routing strategies (among 10 experts) perform better than MoV with only 2 experts and a single expert (IA)\({}^{3}\) respectively, showing that soft merging performs better when a larger number of experts are used.
### Does pronounced task information in routing lead to higher performance?
To understand the effects of a pronounced inductive bias towards task representations in our MoE framework, we compare using sentence embeddings of instructions with token embeddings as the routing input. These sentence embeddings are obtained offline using an external sentence embedding model. Here, we aim to evaluate how pronounced task information affects the router's decisions and the subsequent generalization capabilities of the model on downstream tasks. The leftmost plot of Figure 4 shows the performance of token routing and sentence routing at all model sizes. We find that token routing exhibits superior performance, with 3.03%, 8.86%, and 0.94% improvements for the 770M, 3B, and 11B base model sizes, respectively. These results suggest that a higher degree of inductive bias towards task datasets is not necessarily beneficial, as our approaches can acquire a diverse set of task knowledge directly from the hidden representations of tokens. Furthermore, token routing enables the use of learned experts and routing layers for unseen tasks without any prior task information.
### Do experts specialize in diverse knowledge across different tasks?
To understand how expert routing differs for different tasks, we take a closer look at how experts are activated for a variety of tasks. Figure 5 shows the mean expert probabilities for MoV with 5 experts that are located in feed-forward layers in the last decoder block at 770M parameter T5
Figure 4: _Left:_ Zero-shot performance of passing embedding of the token sequence to the router vs. passing tokens to the router. _Middle:_ Zero-shot performance across T5 model sizes (Large, XL, XXL) as the number of experts increases. _Right:_ The effectiveness of activating top-k experts.
model. We selected the last decoder block as it has been shown that deeper layers learn more task-specific information (Rogers et al., 2020). We plot the mean routing probabilities for both training tasks and evaluation tasks that are unseen during training, to understand cross-task generalization through the lens of experts, i.e., whether skills learned at training time generalize to unseen tasks at evaluation time. Intuitively, if experts have indeed learned different _skills_, we expect them to contribute in different degrees to tasks that are different in nature. The amount of contribution is directly reflected in the routing probability of each expert since we use soft merging, i.e., a summation of expert vectors weighted by the routing probabilities, as described in Figure 2. As such, the mean routing probabilities plotted in Figure 5 provide an overall picture of the contribution of each expert, depending on the downstream task.
**Specialization across unseen vs seen tasks** As depicted in Figure 5, both evaluation and training tasks lead to the activation of experts at different magnitudes. For example, both quail and super_glue_cb activate Expert 3 the most out of the 5 experts, followed by Expert 4 but are different both in terms of the relative contribution of each expert and the ordering of the remaining 3 experts based on routing probability. A similar pattern can be observed for common_gen & winogrande as they both activate Expert 2 the most but are otherwise different. Overall, the fact that routing specialization seems to occur _regardless_ of whether the downstream task was trained on, suggests that expert specialization is inherent and transferable from seen tasks to unseen tasks.
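As a small illustration of how the quantities in Figure 5 can be obtained, the sketch below averages stored router probabilities over tokens and then over the examples of each task; tensor shapes and the helper name are assumptions for illustration:

```python
import torch

def mean_expert_probabilities(router_logits, task_ids, num_tasks):
    """router_logits: (num_examples, seq_len, num_experts) raw router outputs;
    task_ids: (num_examples,) integer task labels."""
    probs = torch.softmax(router_logits, dim=-1).mean(dim=1)  # average over tokens
    per_task = torch.zeros(num_tasks, probs.size(-1))
    for t in range(num_tasks):
        mask = task_ids == t
        if mask.any():
            per_task[t] = probs[mask].mean(dim=0)  # average over examples of task t
    return per_task
```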
### Hyper-parameters Sensitivity
Given the widely documented sensitivity of MoE-style architectures to hyperparameters (Fedus et al., 2022; Shazeer et al., 2017), we ran extensive ablation studies to uncover the idiosyncrasies of PEFT methods in the context of MoE. We experimented with batch sizes of 32, 128, 256, and 2048, and found that the larger the batch size, the more likely our MoEs are to collapse to a single expert. Our empirical finding resonates with Shen et al. (2023), who also find that a small batch size is necessary for stable training. For instance, when experimenting with a batch size of 2048 and evaluating every 5K steps up to 20K, we observed that the performance of our parameter-efficient MoEs deteriorates after 5K steps, converging to performance levels akin to their dense counterparts. Additionally, we
Figure 5: Mean expert routing probabilities for intermediates activations at the last feedforward layer. Values are averaged across tokens and batch. Experts are weighted differently in soft merging depending on the task. _Left:_ Measured on tasks seen during training. _Right:_ Measured on unseen evaluation tasks.
experimented with learning rates varying from \(3e^{-3}\) to \(6e^{-4}\), where we discovered that, for our methods, a smaller learning rate of \(3e^{-4}\) leads to higher performance relative to their dense PEFT counterparts and full fine-tuning. Smaller learning rates stabilize training in parameter-efficient experts by preventing rapid, imbalanced updates that can suppress diversity and lead to suboptimal solutions.
## 5 Related Work
**Mixture-of-Experts** The Mixture-of-Experts (MoE) approach has been investigated thoroughly in Natural Language Processing (Lou et al., 2022; Mustafa et al., 2022; Shazeer et al., 2017; Lepikhin et al., 2020; Fedus et al., 2022; Du et al., 2022; Zoph et al., 2022; Clark et al., 2022; Zhou et al., 2022; Komatsuzaki et al., 2023; Kudugunta et al., 2021; Zuo et al., 2022) as an effective way of increasing a model's parameter capacity while keeping computation the same as, or close to, that of its dense counterpart, since only certain parts of the model are activated. In the context of MoE, there is a body of work focusing on improving routing (Hazimeh et al., 2021; Lewis et al., 2021; Roller et al., 2021; Zhou et al., 2022), ranging from random routing (Zuo et al., 2022) and activating all experts through a weighted average (Eigen et al., 2014) to sparsely selecting a single or \(k\) experts (Fedus et al., 2022; Du et al., 2022). MoE has also been investigated in multi-task settings, including multilingual neural machine translation (Hazimeh et al., 2021; Kudugunta et al., 2021). Unlike these studies, our research addresses MoE by scaling both the volume of data and the number of tasks, aiming to mitigate the instability inherent in training MoE models, while our primary emphasis remains on achieving efficient fine-tuning. Recently, Shen et al. (2023) highlighted how instruction fine-tuning with scaled tasks can counteract the generalization challenges tied to MoE models. In distinction from this, our study scrutinizes the efficacy of instruction fine-tuning in the MoE domain, specifically concentrating on a unique ensemble of PEFT components, considering that the memory cost of traditional MoE can be prohibitive for many practitioners. Similar to the aforementioned work, Ye et al. (2022) utilized MoE in a multi-task context, employing BART (Lewis et al., 2019) as their pre-trained model. However, they limited their experimental scope to a smaller scale and used replicas of each transformer layer as experts, simply multiplying the model size by the number of experts. Our work, on the other hand, presents extreme parameter efficiency with small experts at a large scale, up to an 11B parameter base model.
**Instruction Tuning** Instruction tuning, as elucidated in previous work (Sanh et al., 2022; Wei et al., 2022; Mishra et al., 2022), is a technique where a language model is fine-tuned over a collection of tasks using paired prompts and responses. The primary goal of this technique is to enable the model to predict responses accurately based on the provided prompts, thereby augmenting its ability to understand and execute instructions effectively. The method has gained considerable attention due to its pronounced success in enhancing zero-shot performance on tasks to which the model has not been previously exposed. Additionally, instruction tuning has contributed to breakthroughs such as Chain of Thought Prompting (Wei et al., 2023), where complex problems are broken down into smaller steps to produce intermediate reasoning along with the final solution, as well as PaLM (Chowdhery et al., 2022) and FLAN (Wei et al., 2022). In our work, we explore instruction fine-tuning with the intention of harnessing its ability to let the model learn from a diverse set of inputs, a setting that mixture-of-experts style models suit well, for enhanced evaluation performance on unseen tasks. Our objective remains to optimize computational efficiency without compromising zero-shot performance.
**Parameter-Efficient Fine-tuning** Houlsby et al. (2019) established "adapters" in the NLP domain to fine-tune BERT. There are many variants of adapters with different design choices (Bapna
et al., 2019; Pfeiffer et al., 2021). Li & Liang (2021) proposed updating soft prompts concatenated to embeddings or layer outputs instead of adapters. Zaken et al. (2022) show that updating only a small subset of parameters during fine-tuning (e.g. just biases) is very effective. Hu et al. (2021) proposed LORA, based on low-rank decomposition matrices of transformer layers. They show superior performance with a smaller parameter budget and no inference cost, as LORA parameters can be applied offline to the baseline model. Liu et al. (2022) proposed \((\text{IA})^{3}\), task-specific vectors that modify attention activations. Instead of using feedforward layers inserted into transformer layers as adapters, they learn vectors that update (by broadcast multiplication) the key, value, and linear layer weight matrices. Unlike other PEFT methods, \((\text{IA})^{3}\) does not induce any additional inference cost and enables mixed batches (from different datasets). The multiplicative nature of \((\text{IA})^{3}\) creates an interesting opportunity for mixture-of-experts style modeling without parallelization overhead. Chen et al. (2023) experiment with different design spaces (essentially a hyperparameter search) for PEFT. They suggest four phases: 1) grouping layers into different sets; 2) adding trainable parameters to each group; 3) deciding which groups should be trained; 4) assigning different training strategies to different groups. Their finding is that different architectures have different best settings. We have chosen \((\text{IA})^{3}\) and LORA as our PEFT components because they offer an optimal balance between performance and parameter efficiency (Mahabadi et al., 2021; Liu et al., 2022).
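To make the contrast between these two PEFT components concrete, below is a schematic sketch (not the reference implementations) of (IA)\({}^{3}\)-style rescaling and a LoRA-style low-rank update around a frozen linear layer; the rank and initialization are illustrative choices:

```python
import torch
import torch.nn as nn

class IA3Linear(nn.Module):
    """(IA)^3-style adaptation: rescale a frozen layer's output with a learned vector."""
    def __init__(self, frozen_linear: nn.Linear):
        super().__init__()
        self.linear = frozen_linear
        for p in self.linear.parameters():
            p.requires_grad = False
        self.scale = nn.Parameter(torch.ones(frozen_linear.out_features))  # broadcast-multiplied

    def forward(self, x):
        return self.linear(x) * self.scale

class LoRALinear(nn.Module):
    """LoRA-style adaptation: add a trainable low-rank update to a frozen layer."""
    def __init__(self, frozen_linear: nn.Linear, rank=4):
        super().__init__()
        self.linear = frozen_linear
        for p in self.linear.parameters():
            p.requires_grad = False
        self.A = nn.Parameter(torch.randn(rank, frozen_linear.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(frozen_linear.out_features, rank))

    def forward(self, x):
        return self.linear(x) + x @ self.A.T @ self.B.T
```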
Several studies have explored PEFT in the context of MoE or in a similar fashion, albeit with certain distinctions. For instance, Wang et al. (2022) focused on single-task fine-tuning employing a mixture of adapters for BERT\({}_{base}\) with 110M parameters (Devlin et al., 2019) and \(RoBERTa_{large}\) with 355M parameters (Liu et al., 2019), incorporating random routing and adopting a few-shot evaluation. In divergence from this, our work centers on instruction-tuning with multiple tasks present during fine-tuning. We underscore the efficacy of this approach by rigorously testing up to an 11B parameter text-to-text model (Raffel et al., 2020), implementing token routing, and strictly emphasizing evaluation on a set of unseen (held-out) tasks to underscore the potential of instruction tuning. In another work, Ponti et al. (2022) introduced Polytropon, which involves learning adapters (termed 'skills') specific to each task and employing a task-skills binary matrix to determine the skill set associated with each task. In their method, input examples dictate the selection of adapters. These adapters are then aggregated, and the resultant single adapter is integrated into the overall architecture. Extending upon the Polytropon framework, Caccia et al. (2023) implemented a distinct skill set for every layer in their variant named Polytropon-S. They introduce a deterministic routing function, delve into supplementary inductive biases, show effectiveness up to 3B models, and do not employ an MoE-style architecture. Our research presents a departure from these two studies. Specifically, our primary experimental setup employs MoEs that do not require any specific task identifier during fine-tuning, thanks to the token routing strategy. In this way, we can evaluate our instruction-tuned MoEs on unseen tasks without any further task-specific few-shot fine-tuning. We show the scaling properties of our MoEs in this setting by fine-tuning models up to 11B parameters.
## 6 Conclusion
This work introduces MoEs in an extremely computationally limited environment. We introduce the Mixture of Vectors (MoV) and Mixture of LoRA (MoLORA) to mitigate the challenges associated with instruction-tuning LLMs at scale. Our method outperforms parameter-efficient techniques and achieves performance parity with full fine-tuning on unseen tasks while updating less than 1% of the parameters of the 3B and 11B models. This percentage may vary depending on the base
model size and the number of experts involved. Our extensive experiments, including rigorous ablations across model sizes, token vs sentence-embedding routing inputs, and soft vs top-k routing, confirm the effectiveness of our approach across diverse unseen tasks, highlighting its superior accuracy and computational efficiency. Furthermore, our framework is versatile: it seamlessly integrates with other parameter-efficient strategies and remains compatible with efficiency-enhancing techniques such as quantization.
**Limitations** A primary constraint of our experimental framework is its focus on text-to-text models, such as T5, without extending the evaluation to decoder-only models such as GPT-style models. We leave this as the subject of future work. Additionally, our assessment is exclusively within the context of fine-tuning; exploring the efficacy of our approach during the pre-training phase remains an avenue for future research.
|
2302.00128 | TBAM: Towards An Agent-Based Model to Enrich Twitter Data | Twitter (one example of microblogging) is widely being used by researchers to
understand human behavior, specifically how people behave when a significant
event occurs and how it changes user microblogging patterns. The changing
microblogging behavior can reveal patterns that can help in detecting
real-world events. However, the Twitter data that is available has limitations,
such as, it is incomplete and noisy and the samples are irregular. In this
paper we create a model, called Twitter Behavior Agent-Based Model (TBAM) to
simulate Twitter pattern and behavior using Agent-Based Modeling (ABM). The
generated data from ABM simulations can be used in place or to complement the
real-world data toward improving the accuracy of event detection. We confirm
the validity of our model by finding the cross-correlation between the real
data collected from Twitter and the data generated using TBAM. | Usman Anjum, Vladimir Zadorozhny, Prashant Krishnamurthy | 2023-01-31T22:25:39Z | http://arxiv.org/abs/2302.00128v1 | # Localization of Events Using Neural Networks in Twitter Data
###### Abstract
Twitter (one example of microblogging) is widely being used by researchers to understand human behavior, specifically how people behave when a significant event occurs and how it changes user microblogging patterns. The changing microblogging behavior can reveal patterns that can help in detecting real-world events. However, the Twitter data that is available has limitations, such as, it is incomplete and noisy and the samples are irregular. In this paper we create a model, called _Twitter Behavior Agent-Based Model (TBAM)_ to simulate Twitter pattern and behavior using Agent-Based Modeling (ABM). The generated data from ABM simulations can be used in place or to complement the real-world data toward improving the accuracy of event detection. We confirm the validity of our model by finding the cross-correlation between the real data collected from Twitter and the data generated using TBAM.
Agent-Based Model, Twitter, Modeling and Simulation, Event Detection.
## 1 Introduction
The widespread use of microblogging services, such as Twitter, which generate immense content has resulted in considerable research focusing on utilizing their counts and semantic content for many different practical applications. For example, researchers can use microblogging data to gain insight into events and how people behave when an event occurs. The change in microblogging behavior when an event occurs creates patterns that can aid in detecting events. Detecting an event is important as it allows local authorities to both respond to the event and inform the public in a timely manner [18].
An event is defined as a real-world one-time occurrence that generates the interest of people and is based on specific spatial and temporal properties [10, 11]. Events have been classified as unexpected or expected [12, 13]. Unexpected events are rare or at least infrequent occurrences that are unpredictable, unidentified, unscheduled or unknown. Prior knowledge about event type, time and location may not be readily available until well after the event has occurred.
The purpose of this paper is to create a novel approach, called _Twitter Behavior Agent-Based Model (TBAM)_, that can simulate microblogging behavior in an event. The necessity for creating the model is due to the limitations researchers have with the real world Twitter data. Such data are scarce and unreliable in terms of the delivery, knowledge of ground truth, and information content. Before using the real world content, further complicated processing of the data would be required so that it is suitable for event detection. The data generated from simulations can be used to understand patterns or to enrich this _underdeveloped_ data.
We define underdeveloped data along two dimensions: _reliability_ and _delivery slate_. Data can have high or low reliability, and data delivery can be regular or sporadic. We can consider medical data from instruments (e.g. EKG data) to have high reliability and a regular delivery slate. On the other hand, Twitter data has low reliability and a sporadic delivery slate.
Twitter content involves humans as the source of the data, which means that the data could intentionally or unintentionally be distorted, and further only provides subjective information mixed with personal emotion. For example, microblogging data usually does not contain complete spatial information (like latitude and longitude) that is essential for accurate event detection. The use of location anonymization techniques for privacy preservation makes the latitude and longitude (_geotags_) not readily available. Moreover, the available data is typically aggregated in space, which reduces event detection accuracy. Alternatively, researchers have used location names found in the message (also called place names) and user locations found in profiles to localize an event. But again, this information is not reliable, as users may use multiple locations or may be slow in updating their location information. This means that the location in the profile and the actual user location need not be consistent. Users may also include incorrect location information, which further reduces data reliability (Atefeh and Khreich 2015). Twitter messages are short in length and can contain ambiguous words, making it hard to obtain correct semantic information from the messages. Hence, Twitter data has low reliability. Another example of data with low reliability is misinformation data, like fake news data (McNair 2017), because fake news data also involves humans as data sources.
The sporadic delivery slate of Twitter data arises because there is no control over the delivery frequency: not all users send out data, and even when they do, it may not be about an event; users may also microblog only when it is convenient or of interest to them. Factors like the number of users who actively microblog, the time of day, population density, the significance of the microblog or event, etc. influence how frequently people send out standard and event-related microblogs. Sensor data is another example of data with a potentially sporadic delivery slate because of battery life, duty cycles, random triggers, etc. For example, many sensors with limited battery life that cannot be replaced for long periods of time generate data only when events are detected, and sometimes with long duty cycles. Other sensors only send out data when there is an external trigger (e.g. motion detection sensors are only triggered and send data when motion is detected). Thus, sensors may have a sporadic delivery slate. Figure 1 shows the placement of different data types according to their reliability and delivery slate.
A consequence of the underdeveloped nature of data is that there might not be granular data or location information. These limitations reduce event detection accuracy. Data generated through models could not only be used as a replacement to real data to understand event change patterns but also complement and enrich the real data by providing information that the real data may not contain. As one example, the generated data may be used to train machine learning models to find event signatures. These machine learning models can then be applied to real data from Twitter to detect events. To create TBAM we use agent-based models (ABM). An ABM implements a top-down modeling approach where we can set different parameters that change the generation and distribution of tweets. An alternative approach for modelling could be through machine-learning, like generative adversarial networks (GAN) which would work as a bottom-up approach. In the bottom-up approach, instead of using parameters to generate
Figure 1: Dimensions of Data
data, the real data is directly used to train a GAN and generate synthetic data. The limitation of the latter is the lack of explainability of the synthetic data.
To generate data using TBAM, we assume that there are users with known locations distributed throughout the (synthetic) world. There is a reference sensor (we call it a social sensor) placed at a known location that counts the number of tweets at radial distances from itself. In Figure 2 the sensor is placed at the origin \((0,0)\). We believe that counting the number of tweets can give us reasonable information about microblogging behavior and we use this rather than focusing on the messages within the tweets (which need further processing). The changes in the number of tweets can be used for event detection. For example, previous work has shown that peaks in a time series of the number of tweets is an indication of an event (Ben Lazreg et al., 2020). We assume the spreading of information about an event is analogous to rumor-spreading (Jin et al., 2013). The rumor spreading model assumes that information about an event spreads out gradually similar to the ripple effect when a stone is dropped in a puddle of water.
Figure 2 also shows different parameters like the probability that a user will tweet, distance and time from event, significance of an event, etc. By changing these parameters we have more control over the delivery slate. This allows generation of synthetic data that can match different scenarios. Finally, we validate our generated synthetic data by comparing it with data obtained from Twitter. Figures 3(a), 3(b) and 3(c) show the comparison between the generated data using TBAM and the data obtained by scraping Twitter around three events. We describe the data sets later.
In summary, our goals and contributions in this paper are as follows:
**Formulation and Algorithm:** We propose a methodology called Twitter Behavior Agent-Based Model (TBAM) and build a simulation using Netlogo to generate data using agent based modeling. Our model is able to identify the major parameters that affect the microblogging behavior in users.
**Accuracy:** Based on our results, we are able to generate data whose correlation with real data is statistically significant, as seen in Figures 3(a), 3(b) and 3(c).
## 2 Literature Review
There are numerous surveys that focus on event detection and how humans behave when an event occurs (Steiger et al., 2015; Atefeh and Khreich, 2015; Cordeiro and Gama, 2016; Garg and Kumar, 2016; Imran et al., 2015; Ajao et al., 2015; Hasan et al., 2018; Ozdikis et al., 2017; Zheng et al., 2018). It is believed that whenever an event occurs, there will be a change in the user behavior which will be reflected in the change in the microblogging activity. These papers have also mentioned how unreliable Twitter data is for event detection and that the data require considerable pre-processing before they can be used.
Figure 2: An example of TBAM’s interface: the left side shows the parameters and the right side shows the simulation space
A part of the enrichment process for underdeveloped data is to use data generated through models. Generated data has been used in prior literature for _data augmentation_ and _data imputation_. Data augmentation and imputation are relatively recently developed techniques. Data augmentation has been used in previous literature for image (e.g., facial data augmentation (Wang et al., 2020)), speech and natural language processing (NLP) (Dai and Adel, 2020) and time-series data to reduce over-fitting (Shorten and Khoshgoftaar, 2019; Wen et al., 2020). Augmentation increases the size of the training data set by geometric and color transformations and deep learning techniques like Generative Adversarial Networks (GAN). Augmentation also alleviates the issue of class imbalance, which is a data set with skewed majority to minority sample ratios (Shorten and Khoshgoftaar, 2019). The effect of different augmentation techniques on time-series data was evaluated in Iwana and Uchida (2020) where there is also a guide for researchers and developers to help select the appropriate data augmentation method for their applications. Generative adversarial networks (GAN) was one of the popular methods used to generate synthetic images in the medical domain (Bowles et al., 2018; Frid-Adar et al., 2018; Han et al., 2018). These works generated images of CT images of liver lesions (Bowles et al., 2018; Frid-Adar et al., 2018) and MR images (Han et al., 2018) which were very close in comparison to the real data. Similarly, cycle-Consistent Generative Adversarial Networks (CycleGANs) were proposed as an image classification method to detect floods using images found in social media (Pouyanfar et al., 2019). An agent-based model simulator called _paysim_ was created to simulate mobile money transaction and to create a synthetic data that is similar to the original data set (Lopez-Rojas et al., 2016).
Data imputation is the task of estimating missing values in a data set. Data imputation is usually done to find missing values in traffic data arising from sensor damage, malfunction, or transmission errors, etc. using low-rank matrix decomposition. Most work on data imputation has focused on using GANs for data imputation by slightly varying its structure or the loss function (Kim et al., 2020). Two of the prominent works that have used GAN as a method for finding missing values in time-series data are found in Luo et al. (2018) and Yoon et al. (2018).
However, there is little literature that has addressed the use of generated data to understand user microblogging behavior. Prior studies have used agent-based modeling (ABM) to study information diffusion, but none of these works focus on how user tweeting behavior changes when an event occurs. In Cui et al. (2013) ABM was used to investigate how information was spread during the 2011 Wenzhou train crash through Sina Weibo. They use the ABM framework to compare information diffusion through word-of-mouth and mass media and to
Figure 3: Comparison of Real Twitter data with TBAM generated data
determine which is a more significant means of spreading information when it comes to social media. ABM has been used to create an information propagation model to study how retweeting occurs [21, 10]. In Pezzoni et al. [20] a retweeting model was created based on two main parameters: the influence of the user (number of followers a user has) and the time at which the tweet was received. In Xiong et al. [20] the retweeting model was based on the susceptible-infected-refractory (SIR) model. Similarly in Gatti et al. [20], ABM was used to study user behavior in a social network. The model was created to predict the sentiment of users and whether they choose to forward, reply or do nothing about a topic.
## 3 Background
In this section we look at how agent-based modeling and the Twitter network work. We design our agent-based model based on these definitions.
### Agent-Based Modeling
ABM has been used in many different fields for analysis and understanding of the real world like biology, chemistry, cyber-security, social and economic modeling, etc. [15]. Agent-based models (ABM) [21] have entities or _agents_. An agent can be an individual or an object that has specific properties and actions. The agents may move around in a two dimensional grid called the _world_. The interactions between the agents can be quite complex but can be defined according to a set of rules. An agent can be autonomous, flexible, adaptable, and self-learning [15]. There may be other models that can simulate scenarios pertaining to human social behavior and social media information dispersion (like system dynamics). However, ABM is able to better represent complex and heterogeneous interactions [21, 15] which makes it suitable for creating our model. Further, the microblogging behavior of one human may be considered actions of an agent, that may be influenced by what the agent sees in the environment and the actions of other agents. We believe that this is best modeled using ABM.
### Twitter as a Microblogging Service
Twitter is currently the fastest growing (and by far the most popular) microblogging service, with a lot of research being done on the generated content to understand human behavior [1]. A Twitter user can send out a standard Twitter message, called a _tweet_, about a specific topic; a tweet can contain short text, links, or images. Messages can be grouped together based on their topic through the use of _hashtags_. Tweets can also be forwarded by other users; these forwarded messages are called _retweets_. A retweet can only be received if a user is in the same network as the user who originally sent out the tweet. Tweets for research and analysis can be obtained from the Twitter API1. The Twitter API can provide past tweets as well as streamed tweets. The Twitter API only allows tweets to be collected over the past two weeks and has restrictions on the number of tweets that can be collected.
Footnote 1: [https://dev.twitter.com/overview/api](https://dev.twitter.com/overview/api)
## 4 Twitter Behavior Agent-Based Model (TBAM) Description
To generate synthetic data we developed the Twitter Behavior Agent-Based Model (TBAM) that uses Agent-Based Modelling (ABM) to simulate how users tweet and how microblogging behavior changes when an event occurs. In this section we provide a detailed overview of the model and the different parameters used to generate TBAM data.
### TBAM Design
The scope of TBAM is to simulate user behavior when an event occurs, more specifically to investigate the change in number of tweets as time and distance from event changes. For our model we consider "local events" [10], i.e., events restricted to a certain region. Our model simulates microblogging behavior of people similar to what may happen within a city or a few small neighborhoods and helps us examine how people's microblogging behavior changes when they have close spatial and temporal proximity to an event.
We use _Netlogo_[22] to create our agent based model. In Netlogo there are four types of agents: turtles, patches, links and observer. The turtles are agents that move around in the _world_. The world is sub-divided into smaller squares called _patches_ and each patch has a unique coordinate. Links are agents that connect two turtles. The observer observes the agents and their interactions. In Netlogo models, time passes in discrete steps called _ticks_.
Figure 2 shows a snapshot of the synthetic world at a particular tick. In our model, the turtles are the Twitter users (people) who send out the tweets. A tweet can be a non-event related tweet (a standard or routine tweet, indicated by green colored users), an event related tweet (indicated by users colored yellow) or a tweet sent out during low Twitter activity (indicated by users colored black). The patches represent the locations over which Twitter users lie and where an event can occur. Initially all patches are colored blue. Once an event occurs, the patches change color to red as they are influenced by the event. In a real world setting a patch could represent a geographical coordinate. A tick is a unit of time over which the total number of tweets is measured. Ticks could be in hours, minutes or seconds depending on the time granularity that is required.
### TBAM Parameter Description
In order to generate data that may accurately reflect real world settings we define different parameters. The parameters are summarized in Table 1 and Table 2. In this section, we provide an overview of these parameters and explain how our model simulates microblogging behavior and event generation. The model is made of two phases. In the first phase, also called the _setup_ phase, the synthetic world settings are created in which the users will tweet. In the second phase, also called the _simulation_ phase, the users tweet and once an event occurs, their microblogging behavior changes.
The setup phase begins with the creation of \(N\) users. The users are randomly distributed throughout the world. Some of the users are clustered together. The number of clusters are defined by the parameter num-clusters. The parameter cluster? determines if the users are clustered or not and percentage-clustering determines what percentage of users are clustered together randomly in each of the clusters. The setup phase also generates concentric circles around the central coordinate, i.e \((0,0)\). The central coordinate is the location of the sensor that counts the number of tweets. Each consequent circle increases its radius by the parameter step. These circles aim to provide a visual understanding of how tweets change with changing distance.
To simulate a Twitter network, some users are linked together with bi-directional links. The links are generated using the _Erdos-Renyi_ model, which has been used in previous literature to study social networks (Erdos and Renyi 1960). In the Erdos-Renyi model, each link between two users is present with a fixed probability, independently of the other links in the network. The parameter probability can be varied to change this probability and create networks with many links or with very few links between users. There is also an option for generating a network with random links between users. The num-links parameter is specific to random networks and randomly creates num-links links between different users. The twitter-network parameter can be set to true or false to choose between the Erdos-Renyi and the random model for generating the network.
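A minimal sketch of how such an Erdos-Renyi link set could be generated is shown below; the function and parameter names are illustrative and do not reproduce the NetLogo implementation:

```python
import random

def erdos_renyi_links(num_users, probability, seed=None):
    """Link each pair of users independently with a fixed probability,
    producing the bi-directional links of an Erdos-Renyi G(n, p) network."""
    rng = random.Random(seed)
    links = []
    for i in range(num_users):
        for j in range(i + 1, num_users):
            if rng.random() < probability:
                links.append((i, j))
    return links

# Example: 1000 users with link probability 0.45, as in Table 1.
links = erdos_renyi_links(num_users=1000, probability=0.45, seed=42)
```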
\begin{table}
\begin{tabular}{|p{113.8pt}|p{113.8pt}|p{113.8pt}|} \hline
**Parameter** & **Description** & **Value** \\ \hline n-events? & binary: chooses between one event or n-events & FALSE \\ \hline event-sources & sets number of events. Only valid if n-events is true & 1 \\ \hline eight-mode? & binary: chooses between spreading event to both diagonal and adjacent patches or only to adjacent patches & true \\ \hline twitter-network & binary: chooses between Erdos-Renyi and random network & Erdos-Renyi \\ \hline num-links & number of links in the random network & -NA- \\ \hline people & number of people microblogging & 1000 \\ \hline probability & probability of a link being created between two people in the Erdos-Renyi Network & 0.45 \\ \hline step & distance between each layer & 7 \\ \hline num-clusters & number of clusters of people & 9 \\ \hline cluster? & binary: choose to cluster people (1) or distribute people uniformly (0) & 1 \\ \hline percentage-clustering & percentage of people in clusters & 0.75 \\ \hline tweet-threshold & used to generate random number that a user will tweet & 0.7 \\ \hline user-interest & duration over which people remain interested about a tweet & 5 \\ \hline event-interest & duration over which people remain interested about tweets related to an event & 5 \\ \hline night-mode & to consider periodicity in tweets and separate users microblogging at day and night & True \\ \hline \end{tabular}
\end{table}
Table 1: Fixed parameters for all TBAM data generation simulations
In the simulation phase, at each tick the total number of tweets sent out is counted. The total number of tweets is the sum of standard tweets, event-related tweets, and tweets sent out during low Twitter activity. It should be noted that we are able to separate these in TBAM, but it may not be possible to do so with scraped Twitter data. At each tick, a random number \(z_{i}\) is generated for each user. The random number is used to model the random conditions under which users may not choose to tweet at a specific time. Since it is hard to determine these conditions, we assume that \(z_{i}\) is a normally distributed random number with mean tweet-threshold and variance 0.2. The scraped Twitter data also follows a roughly normal distribution, which is why we chose the random conditions to be normally distributed. Before an event occurs, a user will only send out a routine tweet. A user will only tweet if \(z_{i}<\) tweet-chance, where \(z_{i}\) is the random number generated for user \(i\) and tweet-chance is the probability of sending out a standard tweet (from Table 2).
At a specific time (tick) and location (patch) the event occurs, and with each tick it spreads across the world. The rumor spreading model has been used as a basis to simulate the spreading of an event's influence (or information) (Wilensky 1997). There may be other models that could be used to simulate the spreading of an event, like the susceptible-infected-refractory (SIR) model (Xiong et al. 2012). In a manner similar to the rumor spreading model, immediately after an event occurs, the event influence starts spreading to all of the neighboring patches (shown by the red colored patches in Figure 2). However, the rate at which the event influence spreads to its adjacent neighbors may not be uniform and may vary with time. Initially, as soon as the event occurs, the event influence immediately spreads to all patches within a fixed radius. The influence then spreads to adjacent neighboring patches at a decreasing rate as more time elapses. This assumption is based on the observations made from the collected Twitter data, which show a sharp rise in the counts immediately after an event. The parameter that affects the spreading of an event is eight-mode?. Setting eight-mode? to true causes the event to spread to both its diagonal and its adjacent neighbours, while setting eight-mode? to false causes the event to spread only to its adjacent neighbours. It should be noted that when eight-mode? is true, the event spreads outwards more quickly.
Once an event occurs, a user can send out either an event-related tweet or a routine tweet. A user will choose to send out a tweet about an event if \(z_{i}<\frac{q_{i}}{q_{i}+\texttt{tweet-chance}}\), where \(z_{i}\) is the random number generated for user \(i\) as described previously, \(q_{i}\) is the probability that user \(i\) will tweet about an event, and tweet-chance is from Table 2. Once a user chooses to tweet about an event, the user will only send out a tweet about the event if they are on a patch that the event has spread to and \(z_{i}<q_{i}\), where \(z_{i}\) is the random number generated for user \(i\) and \(q_{i}\) is the probability that user \(i\) will tweet about an event. There are multiple methods of determining \(q_{i}\). The value of \(q_{i}\) could be fixed or vary with time and distance from the event. For the model, we employ a hybrid approach. In the immediate vicinity of the event, \(q_{i}=\)event-tweet-chance, where event-tweet-chance is one of the parameters from Table 2. But as the distance and time from the event increase, \(q_{i}\) decreases according to Equation 1.
\[q_{i}=\texttt{event-tweet-chance}*[(t-t_{event})^{-\texttt{ndist}/\alpha}*(d_{event})^{-\texttt{ndist}/\beta}] \tag{1}\]
The variable \(t\) is the current time tick measured after the event occurs, \(t_{event}\) is the time tick at which the event occurred, \(d_{event}\) is the distance of the user from the event, and \(\texttt{ndist}/\alpha\) and \(\texttt{ndist}/\beta\) are scaling factors. The value of ndist, together with \(\alpha\) and \(\beta\), determines how rapidly event-tweet-chance decays with changing time and distance. For our model we keep \(\alpha\) fixed at 1 and \(\beta\) fixed at 20. Since we are considering local events and users would generally be in close proximity to the event, the decay of event-tweet-chance with distance should be less rapid than the decay due to time. Hence, we choose a larger value for \(\beta\) than for \(\alpha\). It should be noted that there are many different functions that could be used to simulate the decay of the probability of microblogging about an event. But previous literature (Pezzoni et al. 2013; Sakaki et al. 2010) has considered exponential distributions for tweets, which can also be observed in the collected Twitter data. Hence, we choose a simple exponential function to change how the probability of
\begin{table}
\begin{tabular}{|l|l|l|l|l|} \hline
**Parameter** & **Description** & **VIRG** & **STEM** & **GAR** \\ \hline tweet-chance & probability of a person sending out a tweet & 0.33 & 0.29 & 0.22 \\ \hline event-duration & length of time that event remains active & 31 & 48 & 8 \\ \hline event-tweet-chance & probability of a person sending out a tweet & 0.49 & 0.55 & 0.67 \\ \hline night-tweet-chance & probability of sending out tweets during low & 0.17 & 0.16 & 0.12 \\ & Twitter activity & & & \\ \hline night-duration & the duration of low Twitter activity & 8 & 8 & 8 \\ \hline ndist & scaling factor effecting decay for \(q_{i}\) & 0.07 & 0.12 & 2 \\ \hline \end{tabular}
\end{table}
Table 2: **Variable parameters for different TBAM data generation simulations**
microblogging about an event decays with time and distance in our simulations. Figure 4 is a plot of the function showing how \(q_{i}\) changes as distance and time from event changes when \(\alpha=1\) and \(\beta=20\).
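A minimal sketch of this tweeting logic (the per-tick random draw together with the decay of Equation 1) is given below; the guards against zero elapsed time or distance and the conversion of the stated variance into a standard deviation are our assumptions for illustration, not part of the NetLogo model:

```python
import random

def event_tweet_probability(event_tweet_chance, t, t_event, d_event,
                            ndist, alpha=1.0, beta=20.0):
    """q_i from Equation 1: equals event-tweet-chance next to the event and
    decays as time and distance from the event grow."""
    dt = max(t - t_event, 1)   # guard: apply the decay only after the first tick (assumption)
    dd = max(d_event, 1)       # guard: avoid a zero distance (assumption)
    return event_tweet_chance * (dt ** (-ndist / alpha)) * (dd ** (-ndist / beta))

def decides_to_tweet(chance, tweet_threshold=0.7, variance=0.2):
    """A user tweets at this tick if the draw z_i ~ N(tweet-threshold, variance)
    falls below the relevant chance parameter."""
    z = random.gauss(tweet_threshold, variance ** 0.5)
    return z < chance

# Example: chance of an event tweet 5 ticks after and 3 patches away from the event (VIRG values).
q = event_tweet_probability(0.49, t=10, t_event=5, d_event=3, ndist=0.07)
will_tweet = decides_to_tweet(q)
```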
During the simulation phase, users can also send out retweets. A user will send out a standard retweet if \(z_{i}<\texttt{tweet-chance}\) AND there is a link with another user who has sent out a standard tweet. Consequently, a user will only send out an event related retweet, if \(z_{i}<q_{i}\) and there is a link with another user who has sent out an event related tweet. We define the parameters event-interest and user-interest as the tick duration over which users will keep on talking about an event or a routine tweet. These parameters quantify the importance of standard or event related tweets and the higher these parameters are, the larger will be the number of retweets sent out. For our simulations, we keep these values constant for the simulation duration.
The event ends when event-duration ticks have elapsed. Once the event ends, just like in the rumor spreading model, the patches lose the influence of the event. No new tweets relating to the event are generated, and the event related tweets gradually decrease until they eventually stop. A higher event-duration value signifies that users will continue to generate new tweets about an event for a longer period of time.
There is also the option of choosing multiple events which is done by setting n-events? true. If there are multiple events then event-sources sets the number of event sources. For this paper, we use one event source. In short, event-duration, event-tweet-chance and event-interest determine how significant an event is. If these values are set high then it indicates an event that has a high impact on users' lives and they will tweet and retweet more about the event and remain interested in the event for longer duration. These parameters can be changed to incorporate different types of events.
The data collected from Twitter reveal periods of time with very few tweets being sent out. To incorporate such behavior we introduce the parameter night-mode, which enables or disables the consideration of times of low Twitter activity. If night-mode is enabled, then two parameters affect the low Twitter activity. One parameter, night-duration, affects how long the low-activity period lasts. The other parameter, night-tweet-chance, is a measure of the probability of a user microblogging during the low-activity time period. A user sends out tweets during this time only when \(z_{i}<\texttt{night-tweet-chance}\). Usually night-tweet-chance would be less than tweet-chance, which in turn is usually less than event-tweet-chance.
## 5 TBAM Validation
For our comparison we used three data sets, referred to as VIRG, STEM and GAR, collected directly from the Twitter API using the 'TwitterR' package in R (Gentry 2015). The social sensor (reference point) coordinate was 2 miles from the event along the y-axis, and the data show the number of tweets within a 2.8 mile radius changing with time. The data are related to three events and are summarized in Table 3. Figures 3(a), 3(b) and 3(c) show plots of the data. _Real_ indicates data obtained from Twitter and _TBAM_ indicates data generated through TBAM. Each tick represents the number of tweets sent out in an hour. The occurrence of an event is indicated by the vertical line. From the plots it can be clearly seen that after an event occurs, there is a sharp rise in the number of tweets, similar to the real data.
To generate data using TBAM, we use the parameter values described in Table 1 and Table 2. Table 1 shows the parameters that are kept fixed for all the simulations. Table 2 shows the parameters that are changed according to
Figure 4: Changing \(q_{i}\) with changing distance or ticks (with \(\alpha=1\) and \(\beta=20\))
the Twitter data set they are meant to match. The parameters in Table 2 were estimated by inspection of the Twitter data. Table 4 summarizes how we estimated the different parameters from the Twitter data. The event-duration was estimated as the duration over which the number of tweets sent after the event was higher than the number of tweets sent before the event. For example, in Figure 3(b) the number of tweets in the _Real_ data returns to its pre-event value after 48 ticks; hence, the TBAM event-duration parameter was set to 48 to generate the data in that figure.
Similarly, from the real data we calculate the values for \({tweets_{night}}\), \({tweets_{pre-event}}\) and \({tweets_{post-event}}\). Then using these values we estimate the probabilities for the TBAM parameters of tweet-chance, event-tweet-chance and night-tweet-chance which are then used to generate TBAM data.
The heuristic analysis of the data from Twitter reveals how the different parameters vary for different areas and events. The difference in these parameters could be due to differences in demographics, Twitter usage, and the density of the Twitter network. For all three data sets we considered a similar type of event. VIRG and STEM had roughly similar parameters, but GAR has very different parameters. This is because GAR refers to two events combined as one. The GAR event is different from the other two events as it started off as a festival but ended up as a shooting event. As a result there were more Twitter users than on usual days, and the parameters event-tweet-chance, event-duration and ndist are significantly different from those of the other two events. The event-duration parameter is significantly shorter due to Twitter users leaving the event location, and hence the scaling factor is high to account for the high outflow of Twitter users.
### Model Validation
In order to measure the accuracy of the TBAM generated data compared to the data collected from Twitter, we use the _cross-correlation function (ccf)_ (Shumway and Stoffer 2000). The cross-correlation function between two time series \(x_{t}\) and \(y_{t}\) is given by:
\[\rho_{xy}(s,t)=\frac{\gamma_{xy}(s,t)}{\sqrt{\gamma_{x}(s,s)\gamma_{y}(t,t)}} \tag{2}\]
where
\[\gamma_{xy}(s,t)=cov(x_{s},y_{t})=E[(x_{s}-\mu_{xs})(y_{t}-\mu_{yt})] \tag{3}\]
\begin{table}
\begin{tabular}{|p{142.3pt}|p{142.3pt}|} \hline
**Parameter** & **Description** \\ \hline \(tweets_{night}\) & mean number of tweets sent in low Twitter activity hours \\ \hline \(tweets_{pre-event}\) & mean number of tweets sent before the event (excluding low Twitter activity tweets) \\ \hline \(tweets_{post-event}\) & mean number of tweets sent after the event (excluding low Twitter activity tweets) \\ \hline \hline event-duration & duration over which the number of tweets sent after the event remains higher than the number of tweets sent before the event \\ \hline tweet-chance & \(\frac{tweets_{pre-event}}{tweets_{night}+tweets_{pre-event}+tweets_{post-event}}\) \\ \hline event-tweet-chance & \(\frac{tweets_{post-event}}{tweets_{night}+tweets_{pre-event}+tweets_{post-event}}\) \\ \hline night-tweet-chance & \(\frac{tweets_{night}}{tweets_{night}+tweets_{pre-event}+tweets_{post-event}}\) \\ \hline \end{tabular}
\end{table}
Table 4: Determining the probabilities from Twitter data for TBAM
\begin{table}
\begin{tabular}{|p{142.3pt}|p{142.3pt}|p{142.3pt}|p{142.3pt}|p{142.3pt}|} \hline
**Event Name** & **Reference Name** & **Event Date** & **Event Location (latitude, longitude)** & **Social Sensor Location (latitude, longitude)** \\ \hline Virginia Beach Shootings & VIRG & 05-31-2019 & 36.7509,-76.0575 & 36.77974,-76.05750 \\ \hline STEM School Shootings & STEM & 05-07-2019 & 39.556,-104.9979 & 39.58482,-104.99790 \\ \hline Garlic Festival Shootings & GAR & 07-28-2019 & 36.997778,-121.585278 & 37.02661,-121.58528 \\ \hline \end{tabular}
\end{table}
Table 3: Summary of Real Data
\(\mu_{xs}\) is the mean of time series \(x_{s}\) and \(\mu_{yt}\) is the mean of time series \(y_{t}\).
The cross-correlation measures the dependence between two points on different time series observed at different times. In other words, the _ccf_ measures the linear predictability of the series at time \(s\), say \(x_{s}\), using only the value \(y_{t}\). For the TBAM data, we want to determine whether the trends and patterns in the generated series match those in the original data. Hence, the _ccf_ is a suitable metric and gives an overview of the statistical significance of the agreement between the two data sets.
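As an illustration, the sketch below computes a sample cross-correlation function directly in NumPy rather than with the authors' code; the synthetic series, the maximum lag, and the \(\pm 2/\sqrt{n}\) significance threshold (a common approximation for sample cross-correlations) are all assumptions made for the example.

```python
import numpy as np

def ccf(x, y, max_lag=50):
    """Sample cross-correlation between two equal-length series for lags 0..max_lag."""
    x = (np.asarray(x, dtype=float) - np.mean(x)) / np.std(x)
    y = (np.asarray(y, dtype=float) - np.mean(y)) / np.std(y)
    n = len(x)
    return np.array([np.sum(x[:n - k] * y[k:]) / n for k in range(max_lag + 1)])

# Stand-ins for the real and generated tweet-count series (per tick)
rng = np.random.default_rng(0)
real_counts = rng.poisson(20, 200)
tbam_counts = real_counts + rng.normal(0, 2, 200)        # proxy for TBAM output
random_counts = rng.uniform(1, real_counts.max(), 200)   # uniform random baseline

threshold = 2 / np.sqrt(len(real_counts))  # approximate significance threshold
print("ccf(real, TBAM) at lag 0:  ", ccf(real_counts, tbam_counts)[0])
print("ccf(real, random) at lag 0:", ccf(real_counts, random_counts)[0])
print("significance threshold:    ", threshold)
```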
Figures 5(a), 5(b) and 5(c) show the _ccf_ between the VIRG, STEM and GAR data sets and the TBAM generated data, respectively. It can be seen that the TBAM generated data and the real data have a very similar pattern, as the _ccf_ is higher than the threshold for most lags. Another important observation is that the correlation is also high at \(lag=0\). High correlation at \(lag=0\) indicates a strong statistical relationship and shows that the patterns in both data sets match very closely. This also means that any events found in the real data could also be found at the same position in the TBAM data.
For comparison, and to show that our method generates reasonably accurate data, we measure the _ccf_ between the data from Twitter and uniformly randomly generated data. The random numbers were generated between 1 and the maximum number of tweets in the specific data set. Figures 6(a), 6(b) and 6(c) show the _ccf_ between the VIRG, STEM and GAR data sets and the uniformly randomly generated data, respectively. It is clearly seen from the plots that there is very low correlation at all lags. This shows that the data generated using TBAM is significantly better.
As shown above, TBAM reasonably reproduces microblogging behavior. Unlike with real tweets, here we can (a) identify, control, and tune the parameters that impact tweet counts and microblogging behavioral patterns; (b) separate event-related tweets from standard tweets without looking at semantics; (c) create aggregates in space or time (which we have not done here, but it is straightforward), reflecting real-world data scraping and providing different spatial and temporal granularity; (d) intentionally, randomly, or using other models, introduce gaps or noise; (e) add additional demographics, i.e., groups that tweet more or less; and (f) add system-wide or regional variations, all in a controlled manner. In this way, there is more control over the delivery and reliability of the microblogging data.
## Conclusion
The result from our analysis indicates that the data generated using TBAM can be used to complement and possibly enrich the underdeveloped real-world data. The main application of our data generation model is in detecting and localizing events using crowd-sourced "social sensors" whose aggregate counts of event alerts are collected at
Figure 5: CCF of Real Twitter data with TBAM generated data
Figure 6: CCF of Real Twitter data with randomly generated data |
2309.15131 | Combining optical diffraction tomography with imaging flow cytometry for
characterizing morphology, hemoglobin content, and membrane deformability of
live red blood cells | Integrating optical diffraction tomography with imaging flow cytometry
enables label-free quantifications of the three-dimensional (3D) morphology and
hemoglobin content of red blood cells (RBCs) in their natural form.
Self-rotation of RBCs flowing in a microfluidic channel has been utilized to
achieve various projection directions for 3D reconstruction. However, the
practicality of this technique has not been sufficiently studied. We improved
the accuracy of estimating the rotation angle of RBCs and demonstrated 3D
reconstructions of both healthy and glutaraldehyde-treated RBCs. Results showed
the capability to quantify changes in RBC morphology, hemoglobin content, and
membrane fluctuations generated by glutaraldehyde treatments, demonstrating the
potential to detect changes frequently present in various RBC membrane
disorders. | Yu-Hsiang Chang, Yang-Hsien Lin, Kung-Bin Sung | 2023-09-26T02:58:55Z | http://arxiv.org/abs/2309.15131v1 | Combining optical diffraction tomography with imaging flow cytometry for characterizing morphology, hemoglobin content, and membrane deformability of live red blood cells
###### Abstract
Integrating optical diffraction tomography with imaging flow cytometry enables label-free quantifications of the three-dimensional (3D) morphology and hemoglobin content of red blood cells (RBCs) in their natural form. Self-rotation of RBCs flowing in a microfluidic channel has been utilized to achieve various projection directions for 3D reconstruction. However, the practicality of this technique has not been sufficiently studied. We improved the accuracy of estimating the rotation angle of RBCs and demonstrated 3D reconstructions of both healthy and glutaraldehyde-treated RBCs. Results showed the capability to quantify changes in RBC morphology, hemoglobin content, and membrane fluctuations generated by glutaraldehyde treatments, demonstrating the potential to detect changes frequently present in various RBC membrane disorders.
## 1 Introduction
Healthy red blood cells (RBCs) are in the shape of biconcave discs. This three-dimensional (3D) morphology is crucial for the primary function of RBCs, which is the transportation of oxygen. Deviations in the morphology from the optimal discocytes often hinder normal functions of RBCs and, therefore, are targets for diagnostic methods such as flow cytometry and blood smear [1]. In flow cytometry, which has high throughput but requires bulky instruments, the size of individual RBCs can be estimated from impedance or forward light scattering measurements. However, detailed 3D morphology information is not available. On the other hand, a blood smear provides two-dimensional (2D) images of stained RBCs, revealing cellular morphology and intracellular distributions of hemoglobin. However, both the staining and image interpretation are more demanding regarding resources. Therefore, it is usually performed to facilitate diagnosis rather than for screening.
Quantitative phase imaging (QPI) is an emerging technique that quantifies the intrinsic phase contrast in living cells [2-4]. Moreover, 3D reconstruction of the internal refractive-index (RI) distributions of living cells can be achieved using multiple 2D QPI of the cells at different illumination angles through beam scanning [5, 6], sample rotation [7], or sample flowing [8]. 3D morphology of RBCs from patients with various blood-related diseases has been quantified and has the potential to facilitate the screening and management of the conditions [9-11]. Since QPI does not require labeling and can be implemented with relatively simple and cost-effective hardware, it is suitable for making widespread point-of-care or screening devices. However, tomographic imaging has been achieved mainly by scanning the incident beam on stationary RBCs with low throughput.
A common strategy to increase the data acquisition rate is integrating a microfluidic channel into the QPI instrument and acquiring images of cells or organisms flowing continuously in the medium [12-14]. To acquire QPI data for tomographic reconstruction, Merola _et al._ proposed carefully controlling the flow of RBCs through a microfluidic channel to make them rotate continuously. The rotation angles needed for reconstruction are determined by fitting the RBCs' projected phase images to Zernike polynomials [15]. This method has higher throughput than other works where projection directions are varied by scanning the illumination beam [16], or rotating RBCs with optical traps [17] or dielectrophoresis [18] in microfluidic channels. Yet another method continuously records forwardly scattered light from a line region illuminated by a convergent beam while cells flow across the line [19, 20]. The angular range of projections is limited by the collecting cone of the objective lens and cannot
reach a full 180\({}^{\circ}\) rotation as in the self-rotation method proposed in [15], resulting in lower spatial resolution.
Although 3D RI tomograms have been obtained from continuously flowing RBCs, the method's reliability has not been demonstrated to show its practicality. Specifically, only four abnormally shaped RBCs were reported in [15], and one RBC tomogram was shown in [21]. Therefore, we evaluated the performance of estimating the rotation angle of RBCs and the subsequent 3D reconstruction using simulated RBC QPI projections. With improved accuracy in the rotation angle determined by a modified procedure, we applied optical diffraction tomography [22, 23] to reconstruct 3D RI tomograms of live RBCs flowing in a custom-made microfluidic device. The rotation of RBCs was confined to the central region of the sample channel by a sheath flow, and complex field images of the RBCs were continuously acquired by off-axis digital holographic microscopy (DHM). To test the capability of the developed tomographic imaging flow cytometer in quantifying changes in RBC 3D morphology and hemoglobin content, we treated healthy RBCs with glutaraldehyde that stiffened the membrane and reduced the deformability of RBCs. The treatment was intended to imitate trends observed in abnormalities such as hereditary spherocytosis, hereditary elliptocytosis, and metabolic syndrome [24].
## 2 Materials and Methods
### DHM setup
Quantitative phase images of RBCs flowing in a microfluidic channel were acquired by a DHM setup whose schematic diagram is shown in Fig. 1. Off-axis interferometry is implemented in a common-path arrangement in which a uniform reference beam is generated by a transmission grating [25]. A 532-nm laser is spatially filtered (pinhole diameter 10 \(\upmu\)m), expanded, and focused to the back focal plane of a condenser lens to provide nearly plane-wave illumination at normal incidence on the channel. A water-immersion objective lens (LUMFLN 60X, NA 1.1, Olympus) collects light scattered by the RBCs. It forms an intermediate image that is relayed by a 4f lens system onto a complementary metal-oxide semiconductor camera (VC-12MX-M 180, Vieworks Co., Ltd.) with a transverse magnification of about 84. The field of view is about 200 \(\upmu\)m \(\times\) 200 \(\upmu\)m. Unscattered illumination beam is also collected by the objective lens, magnified, and relayed onto the camera as a uniform beam. A grating (300 grooves/mm UV Transmission Grating, Dynasil Corporation) splits the transmitted light (both scattered and unscattered by the RBCs) into multiple beams, and only the 0 and -1 orders are allowed to pass a clear aperture located at the Fourier plane of the intermediate image. The grating is moved a few millimeters away from the intermediate image plane along the optical axis to shear RBC images of the -1-order beam relative to those of the 0-order beam. The shearing direction is perpendicular to the sample flow direction, and the shearing distance is adjusted so that RBC images in the 0-order (sample) beam overlap with an empty region without flowing RBCs in the -1-order (reference) beam [25, 26]. This strategy for creating a nearly common-path uniform reference beam has two advantages over spatial filtering with a pinhole, as used in original diffraction phase microscopy [27]. First, it is less susceptible to misalignment and movements of optical components such as the pinhole. Second, the grating can be chosen to have roughly equal efficiency in the -1 and 0 orders to achieve higher fringe contrast [28, 29]. The lateral resolution of the optical imaging system was measured to be 0.26 \(\upmu\)m under white light illumination and is close to the theoretical prediction of diffraction-limited systems. Images were acquired at 180 frame/s with an exposure of 1 ms for results reported here.
### Design and fabrication of the microfluidic device
To facilitate the rolling of RBCs for at least one revolution within the field of view (FOV) and prevent RBCs from flowing through out-of-focus regions, we adopted a 3D hydrodynamic focusing method to help confine RBCs near the bottom of the channel [30]. A schematic diagram of the microfluidic device is shown in Fig. 2(a). The 0.3 mm thick central sample channel was shallower than the 1.2 mm thick sheath channels, and the flow rate of the sheath stream was faster than that of the sample stream to achieve 3D hydrodynamic focusing effects. The microfluidic channel was fabricated by casting polydimethylsiloxane (PDMS) against a positive micro-milled aluminum mold and boding the cured PDMS slab onto a coverglass with oxygen plasma surface treatment. Syringe pumps were used to push the sample and sheath fluids with a flow rate of 5 \(\upmu\)l/min and 15-20 \(\upmu\)l/min, respectively. As seen in Fig. 2(b), the faster sheath flow occupied most of the channel height and successfully confined the sample flow to near the bottom of the channel.
Figure 1: Schematic diagram of the imaging cytometer. Po1 & Po2: linear polarizers; QP: quarter waveplate; OL1 & OL2: objective lenses; M1 & M2: mirrors; L1 -L5: positive lenses. Dashed black lines adjacent to the grating indicate the location of an intermediate image of the sample.
### Image Processing, Phase Reconstruction, and Phase Unwrapping
Since many interference images were acquired during the time when an RBC flowed across the FOV, individual RBCs in the raw images were automatically detected by an image cascade network [31] and cropped to speed up the processing. Training of the network is described as follows. First, a mean intensity image was calculated from the whole series of continuously recorded images and subtracted from each recorded image. Second, local nonuniformity in intensity was alleviated by contrast-limited adaptive histogram equalization. Third, we manually labeled more than 13,000 RBCs in about 2,200 recorded interference images and randomly split the labeled RBC images into a training set and a validation set with a 9:1 ratio to train the network using PyTorch. The implemented network on a personal computer equipped with an Intel Core i5-9400F CPU and NVIDIA GeForce RTX 2060 GPU achieved a processing speed of 18 frames/s for 1024x1024 pixels/frame, and an average intersection over union (IoU) of 0.822.
We reconstructed quantitative phase images of individual RBCs by the spatial-frequency filtering method [32, 33]. The method consists of bandpass filtering the recorded images around the spatial frequency of interference fringes, performing inverse Fourier transformation, taking the argument of the complex image, and subtracting a background phase image of the same FOV but without RBCs. The amplitude image of each RBC was also obtained from the inverse Fourier transformation. The 2\(\pi\) phase ambiguity issue was solved by a fast 2D phase-unwrapping algorithm [34]. Moreover, during the movement of RBCs in the channel, the location of RBCs along the optical axis is not always constant. To ensure high image quality, we applied numerical refocusing to obtain the best-focused RBC phase image. We propagated the complex-field (i.e., amplitude and phase) image with the angular spectrum method and searched for the focus-shifted phase image with the maximum Tamura coefficient [35]. After refocusing the complex images, we calculated the center of mass of each RBC phase image and aligned the centers of all phase images belonging to each RBC for subsequent 3D reconstruction.
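A simplified sketch of this refocusing search is given below. It assumes plane-wave propagation in a homogeneous medium, uses placeholder values for the wavelength, pixel size, and search range, and evaluates the Tamura coefficient on the amplitude image for simplicity, whereas the procedure above applies it to the phase image.

```python
import numpy as np

def angular_spectrum_propagate(field, dz, wavelength, dx):
    """Propagate a complex field by a distance dz using the angular spectrum method."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=dx)
    fy = np.fft.fftfreq(ny, d=dx)
    FX, FY = np.meshgrid(fx, fy)
    arg = (1.0 / wavelength) ** 2 - FX ** 2 - FY ** 2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))   # evanescent components suppressed
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * dz))

def tamura(image):
    """Tamura coefficient, sqrt(std/mean), of a non-negative image."""
    return np.sqrt(np.std(image) / np.mean(image))

def best_focus(field, wavelength=0.532e-6, dx=0.1e-6,
               search=np.arange(-5e-6, 5e-6, 0.5e-6)):
    """Return the propagation distance that maximizes the Tamura focus metric."""
    scores = [tamura(np.abs(angular_spectrum_propagate(field, dz, wavelength, dx)))
              for dz in search]
    return search[int(np.argmax(scores))]
```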
Figure 2: (a) Schematic of the microfluidic 3D focusing device. (b) A photograph of the microfluidic device filled with red (sample fluid) and blue (sheath fluid) solutions. The inset on the left shows the horizontal confinement of the sample flow, and the inset on the right shows the vertical confinement of the sample flow.
### Rotation Angle Determination
The orientation of an RBC in the microfluidic channel can be described by two rotation angles designated as \(\theta\) and \(\gamma\) hereafter. As illustrated in Fig. 1, the x-axis and z-axis align with the flow direction in the channel and the optical axis of the DHM setup, respectively. \(\theta\) indicates the angle of rotation around the y-axis when the RBC rolls continuously due to the flow in the channel, and \(\gamma\) refers to the rotation angle around the z-axis due to imbalanced flow speeds at two sides of the RBC. The \(\gamma\) of each RBC phase image was first determined by elliptical fitting of the RBC, and the RBC image was numerically rotated so that the long axis of the RBC was aligned with the y-axis. Since RBCs oriented close to \(\theta\) = 0\({}^{\circ}\), i.e., with a disk-like appearance, have high levels of radial symmetry, which makes \(\gamma\) challenging to determine, we only performed the elliptical fitting on RBCs with their major axis at least five pixels longer than their short axis. \(\gamma\) values of RBC phase images without the elliptical fitting were determined by interpolation.
After re-rotating RBC images around the z-axis, we determined the rolling angle \(\theta\) of each RBC image based on Zernike polynomial fitting of RBC phase images [15] with some modifications. The relationship \(C_{4}\propto\cos^{2}(\theta)\) was used in [15] to calculate \(\theta=\cos^{-1}[(C_{4})^{0.5}]\), where \(C_{4}\) is the Zernike coefficient of defocus. However, based on our experimental results of RBCs moving in the channel, RBC phase images showed both time-varying intracellular mass distributions and nonideal biconcave discs during the movements, which resulted in RBCs with the same rolling angle corresponding to different \(C_{4}\) values. Therefore, we modified the previous method to improve accuracy in the rolling angle estimation. Specifically, RBC phase images in the orientations around \(\theta\)=90\({}^{\circ}\) and \(\theta\)=0\({}^{\circ}\) were first identified.
RBCs with \(\theta\) around 90\({}^{\circ}\) had their axes of symmetry aligned with the x-axis and were identified by finding frames with a local maximum of C\({}_{13}\) in the image stack of each RBC, where C\({}_{13}\) is the Zernike coefficient of horizontal secondary astigmatism. Similarly, RBCs with \(\theta\) around 0\({}^{\circ}\) had their axis of symmetry aligned with the z-axis and were identified by finding frames with a local maximum of C\({}_{4}\)+1/C\({}_{12}\) in the image stack of each RBC, where C\({}_{12}\) is the Zernike coefficient of primary spherical aberration. The Zernike polynomials used are illustrated in Fig. 3, which show similarity to phase images of RBCs in orientations of \(\theta\)=0\({}^{\circ}\) and \(\theta\)=90\({}^{\circ}\). After the frames with \(\theta\) = 90\({}^{\circ}\) and \(\theta\) = 0\({}^{\circ}\) were identified, the whole stack of RBC images was divided into segments of \(\theta\) = 0\({}^{\circ}\)-90\({}^{\circ}\) and \(\theta\) = 90\({}^{\circ}\)-0\({}^{\circ}\), as illustrated in Fig. 4. Within each segment, we assumed that the frame identified as \(\theta\) = 90\({}^{\circ}\) has the minimum C\({}_{4}\)+C\({}_{5}\) and that identified as \(\theta\) = 0\({}^{\circ}\) has the maximum C\({}_{4}\)+C\({}_{5}\). C\({}_{5}\) is the Zernike coefficient of horizontal primary astigmatism. Frames with C\({}_{4}\)+C\({}_{5}\) values below the frame specified as \(\theta\) = 90\({}^{\circ}\) or above the frame identified as \(\theta\) = 0\({}^{\circ}\) within the same segment were deemed outliers and removed from subsequent processing. Finally, the C\({}_{4}\)+C\({}_{5}\) value of every frame within the segment was normalized to a range between 0 and 1, and the rolling angle was calculated as \(\theta=\cos^{-1}[(C_{4}+C_{5})^{0.5}]\).
Figure 3: Normalized Zernike polynomials that are used to fit RBC phase images for determining the rotation angle \(\theta\).
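A sketch of the final normalization and angle-recovery step for one \(0^{\circ}\)-\(90^{\circ}\) segment could look like the following, where the arrays of fitted defocus and horizontal primary astigmatism coefficients are assumed inputs with outlier frames already removed.

```python
import numpy as np

def rolling_angles(c4, c5):
    """Estimate rolling angles (degrees) for one 0-90 degree segment of frames.

    c4, c5: per-frame Zernike coefficients (defocus and horizontal primary
    astigmatism) for frames ordered from theta = 0 deg (maximum C4+C5)
    to theta = 90 deg (minimum C4+C5).
    """
    s = np.asarray(c4, dtype=float) + np.asarray(c5, dtype=float)
    s_norm = (s - s.min()) / (s.max() - s.min())     # normalize C4+C5 to [0, 1]
    return np.degrees(np.arccos(np.sqrt(s_norm)))    # theta = arccos((C4+C5)^0.5)

# Example with coefficients decreasing monotonically across the segment
print(rolling_angles([1.0, 0.8, 0.5, 0.2, 0.0], [0.2, 0.15, 0.1, 0.05, 0.0]))
```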
### Reconstruction of 3D RI Maps
3D RI distributions of individual RBCs were reconstructed from about 200 complex-field images by backpropagation based on Fourier diffraction tomography under Rytov approximation [36]. Since the RBC field images were only obtained under uniaxial rotation of RBCs about the y-axis, the 2D Fourier transform of acquired field images filled a horn torus-like shape in the spatial frequency domain [23]. Artifacts generated by this missing cone problem were reduced by applying positivity and spatial constraints iteratively. In addition, total variation minimization (TVmin) was used to smooth intracellular RI variations while preserving RBCs' edges in the reconstructed images [37]. The procedure for processing a reconstructed 3D RBC RI image is as follows. We applied the TVmin step to the reconstructed image ten times to approximate the spatial extent of the RBC using Otsu thresholding. Then, we performed morphological opening and closing to create a 3D mask for the RBC. Subsequent TVmin and positivity operations were confined within the mask until the total variation could not be improved, or a maximum iteration was reached.
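A rough sketch of this post-processing loop is shown below, using scikit-image building blocks (total-variation denoising, Otsu thresholding, and binary opening/closing) in place of the authors' implementation; the medium refractive index, TV weight, and iteration counts are placeholders, and the positivity step is written here as keeping the RI inside the mask at or above the medium value.

```python
import numpy as np
from skimage.restoration import denoise_tv_chambolle
from skimage.filters import threshold_otsu
from skimage.morphology import binary_opening, binary_closing

def postprocess_tomogram(ri, n_medium=1.334, tv_weight=0.01, max_iter=20):
    """Apply TV smoothing, build an RBC mask, and enforce spatial/positivity constraints."""
    # Initial TV smoothing to approximate the spatial extent of the RBC
    smoothed = ri.copy()
    for _ in range(10):
        smoothed = denoise_tv_chambolle(smoothed, weight=tv_weight)

    # 3D mask from Otsu thresholding followed by morphological opening and closing
    mask = smoothed > threshold_otsu(smoothed)
    mask = binary_closing(binary_opening(mask))

    # Iterate TV smoothing with the constraints confined to the mask
    out = ri.copy()
    for _ in range(max_iter):
        out = denoise_tv_chambolle(out, weight=tv_weight)
        out[~mask] = n_medium                         # spatial constraint outside the cell
        out[mask] = np.maximum(out[mask], n_medium)   # positivity relative to the medium
    return out
```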
### Evaluation of methods for determining rotation angles
To evaluate the accuracy of the estimated rotation angles by the proposed method, quantitative phase images of RBCs were generated from experimentally acquired and reconstructed 3D RI tomograms [11] as testing data. We took 3D RI tomograms of 15 RBCs and sampled 200 combinations of \(\theta\) and \(\gamma\) for each RBC to generate test phase images by projection. Increments of \(\gamma\) between adjacent frames were randomly sampled from the range of \(\pm\)5\({}^{\circ}\). Increments of \(\theta\) between adjacent frames were randomly sampled from the range of \(\pm\)5\({}^{\circ}\) when \(\theta\) was between -30\({}^{\circ}\) and 30\({}^{\circ}\), and from the range of 3\({}^{\circ}\)-8\({}^{\circ}\) at other \(\theta\) values. This setting was determined based on observations of the relative occurrence rate of RBC movements in the recorded interference images (see visualization 1 for an example). Two types of artifacts were digitally added to the test images to imitate artifacts commonly seen in experimentally measured phase images. The first artifact was temporal fluctuations in the total phase, which were found to be about 9% of the mean value. We randomly sampled from a normal distribution with a 9% standard deviation, calculated the corresponding noise in the total phase, and evenly distributed the noise to all the pixels in an RBC phase image. The second artifact was the warping of RBC phase images, which was achieved by shifting pixel positions row-by-row using
Figure 4: An example of estimating rolling angles of a stack of RBC phase images. (a) Frames with a local maximum of C\({}_{13}\) were identified to be oriented at \(\theta\) = 90\({}^{\circ}\) (red circles); (b) frames with a local maximum of C\({}_{4}\)+1/C\({}_{12}\) were determined to be oriented at \(\theta\) = 0\({}^{\circ}\) (yellow circles); (c) C\({}_{4}\)+C\({}_{5}\) of all frames in the image stack, and (d) segments of frames with \(\theta\) between 0\({}^{\circ}\) and 90\({}^{\circ}\) as indicated in (c). Black circles are outlier frames that had C\({}_{4}\)+C\({}_{5}\) above the yellow circles (\(\theta\) = 0\({}^{\circ}\)) or below the red circles (\(\theta\) = 90\({}^{\circ}\)) and were excluded from further processing.
\[x^{\prime}=\ S\times\sin\left(\frac{2\pi\mathrm{rx}}{W}\right)+x, \tag{1}\]
where \(x\) is the original location of a pixel in the x-axis, \(x^{\prime}\) is the new location of the pixel, S is a constant to adjust the amount of warping, and \(W\) is the width of the RBC in the number of pixels. Examples of warping by S=8 are illustrated in Fig. 5(b).
### RBC sample preparation and feature extraction
A droplet of blood (about 2 \(\upmu\)L) was obtained from one of the authors (Y. Chang) by finger prick and diluted in 0.85% phosphate-buffered saline (PBS). Part of the blood sample was diluted in PBS with 0.01% or 0.05% glutaraldehyde to generate stiffening effects and associated morphological changes in RBCs. Adding glutaraldehyde to PBS is known to cause an increase in osmolality, which in turn changes the morphology of RBCs. Therefore, the osmolality of the diluents containing glutaraldehyde was adjusted to be approximately the same as that of the PBS. After 20 minutes of treatment, the cells were extracted by centrifugation and diluted in PBS for imaging in the microfluidic channel.
Three types of features were obtained from reconstructed phase images or RI tomograms of RBCs, including morphology, intracellular content, and membrane stiffness. Morphological features directly quantified from 3D RI tomograms included each RBC's volume and surface area. To better characterize the shape of an RBC without being biased by its size, we also calculated sphericity by \(\pi^{\frac{1}{3}}(6\times\mathit{volume})^{\frac{2}{3}}/(\mathit{surface area})\). In addition, the mean diameter and biconcave disc parameters were quantified from 2D phase images of RBCs oriented at about \(\theta\)=0\({}^{\circ}\). Since RBCs do not contain membrane-bound organelles and consist mostly of hemoglobin molecules, they can be approximated as objects with a homogeneous intracellular refractive index. The phase of a pixel (x, y) in an RBC phase image can be described as
\[\Delta\phi(x,y)=\frac{2\pi}{\lambda}[n_{RBC}-n_{m}]h(x,y), \tag{2}\]
where \(h(x,y)\) is the local height of the RBC, \(n_{m}\) is the refractive index of the diluent and \(n_{RBC}\) is the average refractive index of the RBC. It follows that the phase of a pixel is proportional to the height at the local position under the homogeneity assumption. Therefore, 2D phase images of RBCs can be used to characterize the thickness profile or contour of the RBCs. We adopted the following equation [38],
\[\left(1-\left(\frac{r}{R}\right)^{2}\right)^{\frac{1}{2}}\times\left(B_{0}+B_ {2}\left(\frac{r}{R}\right)^{2}+B_{4}\left(\frac{r}{R}\right)^{4}\right), \tag{3}\]
to fit an RBC's phase expressed as a function of radial distance r, where \(R\) is the average radius, and \(B_{0}\) is the phase at the RBC's center. Both parameters \(B_{2}\) and \(B_{4}\) decrease relative to \(B_{0}\) when the shape of an RBC flattens and deviates from a regular biconcave disc.
Figure 5: Two examples of (a) 3D rendered RI tomograms of RBCs, and (b) quantitative phase images obtained from direct projection of tomograms shown in (a) before (left) and after warping (right).
The intracellular content of RBCs is mostly hemoglobin. The refractive index of the cytosol of an RBC is linearly correlated with the mass density of biomolecules in it [39]. From a 2D phase image of an RBC, it is straightforward to calculate the total phase-area product of all pixels within the image by
\[\text{OV}=\frac{\lambda}{2\pi}\sum_{RBC}\Delta\phi(x,y)\,\Delta x\Delta y, \tag{4}\]
where OV is the optical volume [40], and \(\Delta x\Delta y\) is the area of each pixel in sample space. OV is a convenient parameter for assessing the dry mass of cells from their 2D quantitative phase images. Precisely, if the contribution to the intracellular RI from substances other than hemoglobin can be ignored in healthy RBCs, the dry mass of hemoglobin in an RBC can be estimated by OV/\(\alpha\), where \(\alpha\) is the specific refraction increment of hemoglobin [41]. Moreover, according to Eqs. (2) and (4), dividing OV by the physical volume of the same RBC gives the difference between the average intracellular RI and the medium RI, \(n_{RBC}-n_{m}\), which is proportional to the dry mass density of hemoglobin in the RBC [39].
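These quantities follow directly from Eq. (4); the sketch below illustrates the arithmetic, assuming a phase image in radians, a pixel area in \(\upmu\)m\({}^{2}\), and a refraction increment of 0.203 ml/g for hemoglobin, with all numeric inputs being placeholders.

```python
import numpy as np

def hemoglobin_from_phase(phase, pixel_area_um2, wavelength_um=0.532,
                          alpha_ml_per_g=0.203, cell_volume_fl=None):
    """Optical volume, hemoglobin dry mass, and mean RI difference from a 2D phase image."""
    # Optical volume (Eq. 4): OV = lambda/(2*pi) * sum(phase) * pixel area  [um^3 = fL]
    ov_fl = wavelength_um / (2 * np.pi) * np.sum(phase) * pixel_area_um2

    # Dry mass of hemoglobin: OV / alpha, with 0.203 ml/g equivalent to 0.203 um^3/pg
    dry_mass_pg = ov_fl / alpha_ml_per_g

    # Mean intracellular RI above the medium, if the physical volume is known
    delta_n = ov_fl / cell_volume_fl if cell_volume_fl else None
    return ov_fl, dry_mass_pg, delta_n

# Example with a synthetic flat-topped phase image (radians) and made-up pixel size
phase = np.full((60, 60), 4.0)
print(hemoglobin_from_phase(phase, pixel_area_um2=0.01, cell_volume_fl=90.0))
```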
The deformability or viscoelasticity of the RBC membrane was assessed by quantifying temporal fluctuations of the cell membrane using the same DHM instrument. After driving RBCs into the channel, the syringe pump was stopped for 10 minutes to allow RBCs to settle at the bottom of the channel. Then, we captured 300 interference images of RBCs at 180 frames/s. Quantitative phase images of the RBCs were reconstructed and cropped as described in Sec. 2.3. The temporal fluctuation in phase at each pixel was first quantified as the absolute difference between each image's phase value and the mean phase of all images, and an average fluctuation in phase was obtained over the whole duration of 300 images. That is, the mean membrane fluctuation in phase for each pixel (\(x\), \(y\)) can be calculated as
\[\sigma_{\phi}(x,y)=\frac{\sum_{t}|\Delta\phi(x,y)_{t}-\Delta\phi(x,y)_{average}|}{T}, \tag{5}\]
where \(t\)=1,...\(T\) is the time index of each image, \(\Delta\phi(x,y)_{average}\) is the average phase over the \(T\) images at the pixel. Finally, membrane fluctuations of all pixels in the image were averaged [42].
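A direct NumPy sketch of Eq. (5), applied to a stack of \(T\) phase images of a settled RBC, is shown below; the stack in the example is synthetic.

```python
import numpy as np

def membrane_fluctuation(phase_stack):
    """Mean temporal phase fluctuation per Eq. (5), averaged over all pixels.

    phase_stack: array of shape (T, H, W) holding T phase images of the same RBC.
    """
    mean_phase = phase_stack.mean(axis=0)                       # per-pixel temporal mean
    sigma_phi = np.abs(phase_stack - mean_phase).mean(axis=0)   # Eq. (5) per pixel
    return sigma_phi.mean()                                     # average over the cell

# Synthetic example: 300 frames of a 40x40 phase map with small random fluctuations
rng = np.random.default_rng(1)
stack = 3.0 + 0.05 * rng.standard_normal((300, 40, 40))
print(membrane_fluctuation(stack))
```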
## 3 Results
### Theoretical evaluation of two methods of RBC rotation angle determination
The proposed method for determining the orientation of RBCs was modified from the method proposed by Merola _et al._[15]. We tested the two methods on 200 projections for each of the 15 RBCs to evaluate their performance in estimating the rolling angles. The mean errors in estimated rolling angles in the presence of total phase noise are 9.4\({}^{\circ}\) and 13.9\({}^{\circ}\) for the proposed method and previously reported method, respectively. The mean errors in the presence of total phase noise and image warping are 9.7\({}^{\circ}\) and 11.8\({}^{\circ}\) for the two methods, respectively. The proposed method showed significantly smaller errors (p<0.001) in the estimated rolling angles with or without the warping artifacts. The previously reported method uses normalized C\({}_{4}\) values over the whole sequence of phase images to determine the rolling angle, which is prone to errors since changes in intracellular mass distributions during the movement may alter the relation between C\({}_{4}\) and cos\(\theta\). The proposed method, on the other hand, divides the whole sequence of phase images into segments of 90\({}^{\circ}\) rolling. Normalizing C\({}_{4}\)+C\({}_{5}\) values within each 90\({}^{\circ}\) segment is shown to more accurately recover the orientation of RBCs since image frames captured within each 90\({}^{\circ}\) segment are within 0.5 s and appear to have similar intracellular mass distributions. In addition, the inclusion of an additional C\({}_{5}\) term helps minimize the effects of nonideal shapes of RBCs during their movements on the fitting to the Zernike polynomials.
The performance of the proposed method was further evaluated by reconstructing 3D RI tomograms of the 15 RBCs from the 200 test phase images using the angles estimated. 3D
morphological features, including the volume and surface of reconstructed tomograms, were used as target parameters. The gold standard was obtained by reconstructing 3D RI tomograms using 500 projected complex-field images with known and equally spaced rolling angles. Table 1 shows that the errors of the proposed method are smaller than those of the original method for estimating the rotation angle.
### Imaging healthy and glutaraldehyde-treated RBCs
Reconstructed and unwrapped phase images of an RBC flowing through the FOV of the DHM setup are shown in visualization 1, demonstrating that the flow rate was appropriate to result in at least one complete revolution of RBCs within the FOV. It also shows that the speed of rolling was not constant. The maximum speed of RBCs' movement was found to be about 0.2 mm/s to maintain the natural shape of RBCs. Exemplary reconstruction results of RI tomograms of two healthy RBCs are shown in Fig. 6. The biconcave disc shape of the RBCs can be seen in both selected slices of the 3D tomograms and the surface-rendered graphs.
Influences of glutaraldehyde on RBCs were characterized by quantitative phase images and reconstructed RI tomograms of RBCs flowing through the microfluidic channel. Fig. 7(a)-(e) shows morphological features of RBCs in the three groups (control, 0.01% glutaraldehyde, and 0.05% glutaraldehyde). After the treatment, RBCs shrank in volume and mean diameter, as shown in Fig. 7(a) and 7(b), respectively. This is expected due to the crosslinking effects of glutaraldehyde on proteins [43] and has been measured by low-angle light scattering in flow cytometry [44]. Since both the surface area in Fig. 7(c) and the volume in Fig. 7(a) decreased after the treatment, it is uncertain how the shape of RBCs changed based on the measured surface area alone. Therefore, changes in RBC shape due to glutaraldehyde treatments were assessed with sphericity and biconcave disc parameters according to Eq. (3). As shown in Fig. 7(d) and 7(e) respectively, the sphericity increased and the sum of \(B_{2}\) and \(B_{4}\) decreased relatively to \(B_{0}\) in treated cell populations, indicating that the RBCs' shape deviated from discocytes and resulted in a decreased concave depth. It is noted that concentrations of glutaraldehyde should be interpreted with caution. Fixation properties of glutaraldehyde in the same apparent concentration vary from batch to batch due to differences in the fraction of
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline
 & \multicolumn{2}{c|}{Random rotation + Total phase variation} & \multicolumn{2}{c|}{Random rotation + Total phase variation + Image warping} \\ \cline{2-5}
 & Volume & Surface area & Volume & Surface area \\ \hline
Our method (N = 15) & \(3.5\pm 3.3\)\% & \(3.1\pm 1.9\)\% & \(6.6\pm 3.3\)\% & \(3.9\pm 2.8\)\% \\ \hline
C\({}_{4}\) estimated result (N = 15) & \(10.8\pm 4.2\)\% & \(5.4\pm 4.6\)\% & \(10.29\pm 3.7\)\% & \(6.1\pm 3.3\)\% \\ \hline
\end{tabular}
\end{table}
Table 1: Comparison of errors in volume and surface area of reconstructed RBC tomograms between the proposed method and previously published method [15]
Figure 6: Two examples of reconstructed 3D RI tomograms of RBCs, showing slices at various depths on the left and 3D surface rendering on the right.
monomers and polymers [43]. Therefore, while similar trends in RBC characteristics due to glutaraldehyde treatments have been reported in the literature, the glutaraldehyde concentrations used may vary substantially.
Fig. 7(f) shows that the OV of treated RBCs was higher than that of control RBCs, which is reasonable due to the addition of glutaraldehyde to RBCs that increases intracellular dry mass [43]. The decreased volume and increased intracellular dry mass in treated RBCs combined additively to increase the mass density in treated RBCs, which has been reported in [43]. In the control group results, the average OV of 5.4 fL corresponds to a mean hemoglobin dry mass of 26.6 pg, assuming a refraction increment of 0.203 ml/g for hemoglobin [45]. This value is within the range of typical mean corpuscular hemoglobin (MCH) for healthy human subjects. The ratio between the OV and the physical volume of each RBC in the control group results averaged about 0.06. This value is close to a previously reported average \(n_{RBC}-n_{m}\) value of RBCs measured on samples from seven healthy human subjects [11]. The intracellular hemoglobin concentration could be calculated to be 30 g/dL by assuming the same refraction increment of 0.203 ml/g [45] and ignoring the contributions of other substances to the cellular dry mass. The estimated hemoglobin concentration was slightly smaller than typical values of mean corpuscular hemoglobin concentration (MCHC) in healthy subjects. The discrepancy may be attributed to individual variations in MCHC and refraction increments of hemoglobin, and differences in RBC volume measured by optical diffraction tomography and clinical blood analyzers [46].
Figure 7: Comparison of features extracted from 3D RI tomograms and quantitative phase images of RBCs in the control group (sample size=108) and experimental groups treated with 0.01% (sample size=103) and 0.05% (sample size=95) glutaraldehyde, respectively. (a)-(e) Morphological features. (f) Optical volume or intracellular hemoglobin content. (g) Membrane fluctuation.
The quantified membrane fluctuations are shown in Fig. 7(g) and indicate increased membrane stiffness in treated RBCs, which is expected since glutaraldehyde crosslinks proteins in the membrane and cytosol of RBCs [43].
## 4 Discussion
We implemented 3D imaging flow cytometry of RBCs based on optical diffraction tomography. It is label-free and quantifies 3D morphology and intracellular hemoglobin content of RBCs in their natural discocyte form, which are advantages over conventional blood smears. We demonstrated the capability of the developed cytometer to detect changes in the morphology, hemoglobin content, and deformability of RBCs due to glutaraldehyde treatments. The results indicate the potential of the implemented cytometer to detect changes commonly present in various RBC membrane disorders and metabolic syndrome. Recent evidence has also shown associations between decreased RBC deformability and cardiovascular disease conditions [24]. With further improvements for automated image reconstruction and analysis, the proposed cytometry could be a practical tool to provide fast measurements of important RBC parameters for the diagnosis, treatment monitoring, and management of various conditions.
The key to achieving high-throughput tomographic imaging of single RBCs is using self-rotation of continuously flowing cells to collect complex field images of the cells under multiple projection directions and quantifying the rotation angles from the acquired quantitative phase images [15]. This strategy has been adopted by [21] to image single and coagulated RBCs. We modified the previous method to improve the image quality. First, motion-induced noise known to plague interferometry-based instruments is reduced by a common-path configuration for generating a uniform reference wavefront [25]. Compared to typical diffraction phase microscopy, where a pinhole is used as a spatial filter to generate the uniform reference beam [27], our method removes the pinhole and is more robust to misalignment. Second, the process for determining rotation angles of flowing RBCs by Zernike polynomial fitting [15] was modified to tolerate fluctuations in cellular shape and intracellular mass distributions during the movement of RBCs along the channel. Results show that the modified method improved accuracy in determining both rotation angles and the volume of reconstructed tomograms. Third, we considered the effects of diffraction on the 3D reconstruction of RI tomograms using optical diffraction tomography [36], and applied total variation minimization to smooth intracellular RI distributions [47].
The throughput of the reported imaging flow cytometer is about 43 RBCs/min. It is currently limited by images of RBCs flowing through out-of-focus regions overlapping with those of in-focus RBCs due to incomplete confinement of RBCs to the depth of field by hydrodynamic focusing. This artifact could be reduced by manufacturing microfluidic channels with higher precision. In particular, decreasing the channel height to about twice the diameter of an RBC helps confine RBCs to the bottom half of the channel, and the laminar flow in the channel facilitates continuous rolling of the RBCs. The required precision could be achieved by photolithography with spin-coated photoresist for patterning the channels [20]. Although the flow speed cannot be significantly increased due to shape of RBCs, the throughput could be greatly improved by maximizing the number of flowing RBCs within the FOV of the DHM system. For example, one can increase the density of RBCs flowing in the channel and make the channel sufficiently wide so that the whole FOV is within the channel region.
## 5 Conclusion
To develop tomographic imaging flow cytometry for high-throughput single-cell characterization of RBCs, we designed and fabricated a microfluidic device to produce self-rotation of live RBCs flowing in the channel and complex field images of the RBCs were continuously acquired with a common-path off-axis digital holographic microscope. We modified the procedure of determining the rotation angle of RBCs for 3D reconstruction based
on optical diffraction tomography. Results on simulated RBC projection images showed that the modified approach improved the accuracy in estimating the rotation angle and the volume of reconstructed RBCs as compared to the original method. The developed imaging flow cytometer was validated by imaging healthy RBCs, where the 3D morphology of RBCs was correctly reconstructed as biconcave discs. We further demonstrated the capability of the developed cytometer to quantify changes in RBC 3D morphology, hemoglobin content, and membrane fluctuations generated by glutaraldehyde treatments. The results of this study show the potential of the proposed cytometry to detect changes commonly seen in various RBC membrane disorders and metabolic syndrome.
## Disclosures
All authors state that they have no relevant financial interests in this article and no other conflicts of interest to disclose.
### Acknowledgments
The authors thank National Science and Technology Council in Taiwan for financial support (grant number NSTC 108-2221-E-002-081-MY3). The authors thank Prof. Nien-Tsu Huang of National Taiwan University for suggestions on the design and fabrication of the microfluidic device and Ms. Huai-Ching Hsieh for help with organizing and making figures and the video file.
|
2309.12886 | Implementing Automated Data Validation for Canadian Political Datasets | This paper describes a series of automated data validation tests for datasets
detailing charity financial information, political donations, and government
lobbying in Canada. We motivate and document a series of 200 tests that check
the validity, internal consistency, and external consistency of these datasets.
We present preliminary findings after application of these tests to the
political donations ($\approx10.1$ million observations) and lobbying
($\approx711,200$ observations) datasets, and to a sample of $\approx380,880$
observations from the charities datasets. We conclude with areas for future
work and lessons learnt for others looking to implement automated data
validation in their own workflows. | Lindsay Katz, Callandra Moore | 2023-09-22T14:19:12Z | http://arxiv.org/abs/2309.12886v1 | # Implementing Automated Data Validation for Canadian Political Datasets
###### Abstract
This paper describes a series of automated data validation tests for datasets detailing charity financial information, political donations, and government lobbying in Canada. We motivate and document a series of 200 tests that check the validity, internal consistency, and external consistency of these datasets. We present preliminary findings after application of these tests to the political donations (\(\approx 10.1\) million observations) and lobbying (\(\approx 711,200\) observations) datasets, and to a sample of \(\approx 380,880\) observations from the charities datasets. We conclude with areas for future work and lessons learnt for others looking to implement automated data validation in their own workflows.
## 1 Purpose
The Investigative Journalism Foundation (IJF) has collated and actively maintains eight public interest databases relating to political donations, charity financial information, and government lobbying in Canada. The IJF makes these data publicly available in a form that is clean, interpretable, and can be queried and explored by users with ease. However, there is great variation in the accessibility, completeness, and cleanliness of the raw data sources upon which these databases are built, both across regions and over time. This has necessitated a complex data pipeline built by the IJF which routinely and programmatically updates each database while maintaining data cleanliness and standardization. This data pipeline executes a number of processing steps through which each piece of data must pass to reach its final form.
Automated data testing is a valuable tool for verifying that data are meeting certain standards or expectations held by the user, while simultaneously uncovering inconsistencies or errors within the data (Alexander, 2023a). This is especially beneficial for complex collated databases such as the IJF's, which integrate data from multiple origins across time. Moreover, the construction of all datasets involve fundamental assumptions and programmatic decisions which inform downstream analysis and use. To automate data validation for the IJF, we have developed a bespoke suite of automated tests spanning each of the eight IJF databases using Python's Great Expectations (GX) library. This means of data quality testing facilitates trust and transparency in the data being shared (Alexander, 2023a), and consequently in the news and scholarly articles informed by these data.
In this report, we begin with a review of the current literature and computational tools available pertaining to data quality assessment. We then provide a detailed description of our workflow, following Alexander (2023a). This is followed by a discussion of future work, both in the IJF's unique data testing efforts, and data validation more generally. We then close with a conclusion outlining the main learnings from this work.
## 2 Literature Review
Concerns surrounding the transparency and replicability of published research have gained prominence in recent years, inspiring greater awareness and discussion of the need for reproducibility to be incorporated into scientific workflows (Vilhuber et al., 2022; Alexander, 2023a; Gelman). The issue is highlighted by articles which attempt, and in many cases, fail, to reproduce various published research findings across disciplines (Vilhuber et al., 2022; Trisovic et al., 2022). Such work has shone light on
the need for a transformation of the standards set for published research across disciplines, particularly reproducible workflows and accessible data and code. Data validation is a necessary tool for this. Chapter 3 of Telling Stories With Data [1] is devoted to reproducible, well-documented workflows, and emphasizes that openness of code and data, especially detailing modifications to the original unedited data, are crucial components of reproducibility. Without such a transformation, researchers and journals may continue to publish works which are not replicable, in turn perpetuating public distrust of scientific research, and the publication of misleading conclusions.
[1] also provides a detailed framework for writing a suite of data tests to improve the quality of one's code by documenting the expectations they have of their data at particular points in the code [1]. Specifically, focus is placed on testing for validity, internal consistency, and external consistency of the data. Validity refers to general correctness of variable classes and values (e.g., names do not contain numerals; numeric data is classified as such); internal consistency refers to coherence within the dataset (e.g., component columns summing to the total column); and external consistency relates to coherence of the data with relevant external sources [1]. In providing such a framework, Alexander highlights the fundamental relationship between data transformation and data validation. Data transformation involves strategic decision making based on characteristics we would like the data to have, and data validation involves testing that those characteristics hold true in the data at large.
Taking a more domain-specific focus, [2] discuss concerns surrounding the quality and reproducibility of research studies based on electronic health record (EHR) data. The authors advocate for six considerations to assess the quality of these studies, broadly pertaining to how complete, accurate, transparent, and comprehensive the data and analyses are [14]. Additionally, [15] present a harmonized framework for EHR data quality assessment to encourage users to comprehensively evaluate the fitness of the data to their specific research goals. This framework includes data validation, emphasizing the importance of an alignment between characteristics of the data, and "relevant external benchmarks" [13]. Lee et al. [2] implement the framework developed by [12] within the heart failure domain, illustrating the importance of domain knowledge for developing a comprehensive, accurate set of tests for data quality [11].
Some computational tools have also been developed specifically for machine learning projects [10, 14] present an anomaly detection data validation system for data used in machine learning pipelines, deployed as part of TFX at Google. The authors emphasize the downstream effect that one data error can have on machine learning infrastructure, and the importance of catching assumptions made in the data wrangling process early on [10]. [14] also present a data validation framework for machine learning datasets, called Data Linter. The authors acknowledge that error detection in machine learning data is a time-consuming, error-prone, and iterative process, and present a tool which analyzes the data and offers variable transformation recommendations based on the specific model that will be trained [14]. These works illustrate the ways in which assumptions made about the data can get lost in the data science workflow, and the importance of checking and documenting them to avoid misleading or inaccurate conclusions.
In addition to the domain-specific frameworks for data validation, there also exist a number of more general-purpose computational tools for data testing. [1] provides information and example code on how to use a number of libraries and functions for code testing in the R programming language [15], including **testhat**, **pointblank**, and base R's **stopifnot()** function. Notably, **pointblank** contains built-in test functions which allow users to test that certain characteristics of their data hold. Additionally, [1] provide a comprehensive review of R packages for assessing data quality with applications using publicly available cohort study data. The authors compare each package based on characteristics such as output format, string functionality, and availability of a graphical user interface [11]. Great Expectations is a tool for validating, documenting, and profiling data in the Python programming language. This tool is useful as in addition to providing built-in validation test functions, it also offers an Onboarding Data Assistant tool that profiles your data to create a suite of bespoke validation tests [16].
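As a small illustration of the style of test such tools support, the sketch below uses Great Expectations' older pandas-backed interface (`from_pandas`); the column names, bounds, and sample rows are hypothetical rather than drawn from the IJF's actual test suite, and newer GX releases expose a different API.

```python
import great_expectations as ge
import pandas as pd

# Hypothetical donations extract; column names and limits are illustrative only.
donations = pd.DataFrame({
    "donor_name": ["Jane Doe", "John Smith"],
    "amount": [150.0, 1200.0],
    "date": ["2021-03-15", "2021-06-02"],
})

gdf = ge.from_pandas(donations)

# Validity tests: no missing donors, plausible amounts, ISO-formatted dates
gdf.expect_column_values_to_not_be_null("donor_name")
gdf.expect_column_values_to_be_between("amount", min_value=0, max_value=100000)
gdf.expect_column_values_to_match_regex("date", r"^\d{4}-\d{2}-\d{2}$")

print(gdf.validate())   # summary of which expectations passed or failed
```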
Evidently, much great work has been done in the realm of data validation to build cross-disciplinary frameworks to inform data quality tests and computational tools for the programmatic implementation
of those tests.
## 3 Workflow
### Description of the raw data
This subsection provides a brief overview of the raw, unedited data from which the IJF's databases were created. All eight datasets are continually updated by the IJF with new information as data are added to their source websites.
#### 3.1.1 Charities
The IJF's charities database is composed of three datasets: charity tax returns, charity staff compensation, and gifts received by charities. All of these data were sourced from the Canada Revenue Agency (CRA), covering data from 1990 to the present. The Income Tax Act (1985) legally requires registered charities in Canada to file an annual information return. As outlined on the IJF's methodology page for this database, "A complete information return includes form T3010 Registered Charity Information Return, a copy of the charity's own financial statements, Form T1235, Directors/Trustees and Like Officials Worksheet, and if applicable, Form T1236, Qualified Donees Worksheet / Amounts Provided to Other Organizations and Form T2081, Excess Corporate Holdings Worksheet for Private Foundations" [The Investigative Journalism Foundation, 2023a].
The T3010 form is where all the data for the Tax Returns dataset is sourced. This form is composed of hundreds of distinct fields, called line items. These fields include a very intricate monetary breakdown of the charity's assets, liabilities, revenues, expenditures, and gifts to the organization. Importantly, there have been a number of changes in the T3010 form since 1990, including the numbers and definitions affiliated with each line item. For instance, from 1990 to 1996, total liabilities was line item number 131, which changed to number 65 from 1997 to 2002, and then changed again to number 4350 from 2003 onward. A portion of the financial information section of the 2009 T3010 form, including line item 4350, can be seen in Figure 1.
The charity staff compensation database contains data on the number of staff working at each charity, the total compensation for all positions, and the salary ranges for the highest paid employees. These data are sourced from the compensation section of the T3010 form, where the line item names and definitions have changed notably over time. In particular, the salary brackets defined by the CRA have encompassed different ranges over time. This portion of the T3010 form from 2009 is provided in Figure 2.
Figure 1: Part of the financial information section from the T3010 form from 2009.
Finally, the gifts received by charities data are sourced from the T1236 form which is filed by charities alongside the T3010 tax return form each year if they made donations to qualified donees in that fiscal year. An example of part of this form from 2018 is shown in Figure 3.
#### 3.1.2 Political Donations
The political donations database covers donations made federally, provincially, and territorially, with the earliest records from elections Canada dating from 1993. Records of political donations are required by law to be submitted by political parties and/or candidates and are maintained and published by elections agencies [The Investigative Journalism Foundation, 2023b]. The frequency and scope of reporting required varies across jurisdictions, as does the type of recipients and donors that are allowed to receive and give political donations [The Investigative Journalism Foundation, 2023b]. Maximum legal donation amounts vary across jurisdictions, and who is making the donation (e.g., an individual, corporation, or union). The IJF collected these donations data from elections agency websites, where files were stored as either a downloadable spreadsheet, PDF, or HTML form depending on jurisdiction and year. An example of the raw Nova Scotia donations data in PDF form is shown in Figure 4.
#### 3.1.3 Lobbying
The lobbying data is composed of four databases: lobbying registrations, government funding, lobbying communications, and revolving door (that is, lobbyists who formerly held government positions and have since transitioned into lobbying). In Canada, lobbyists must register with the lobbying registrar of all jurisdictions in which they are active, and disclose specific details on their activities [The Investigative Journalism Foundation, 2023c]. While there is regional variation in the information lobbyists are required to disclose, in general they are mandated to report for which organizations they are lobbying, the laws or subject matters that the lobbyists would like to discuss, and/or what money the lobbyists have or want to receive from the government [The Investigative Journalism Foundation, 2023c]. Figure 5 provides an example of part of the webpage for a lobbying registration at the federal level.
The lobbying registrations and revolving door data were scraped from Federal, provincial, and Yukon lobbyist registries' websites. Subject matter details and the list of government institutions being lobbied were collected for the registrations database, and details on the former public offices
Figure 3: Portion of T1236 form from 2018.
Figure 2: Compensation section from the T3010 form from 2009.
held by lobbyists were used to build the revolving door database. The designated public offices held data can be accessed through the "Lobbyists Details" tab outlined in green in Figure 5.
The government funding data were scraped from the Federal website and each province's website. These data contain information on the amount of funding that the organization received by the government, broken down by source. Finally, lobbying communications data were scraped from the Federal and British Columbia registries (the only regions for which these data are available), and include details on communications between lobbyists and government officials that were disclosed by the lobbyists themselves. The red box in Figure 5 illustrates where on the webpage the communications data can be accessed. These data "detail specific interactions between lobbyists and government officials", making them a valuable supplement to the more general lobbying registrations data [The Investigative Journalism Foundation, 2023c]. Interactions with government officials can include email exchanges, meetings, and phone calls.
Figure 4: Portion of the 2014 political contributions PDF from Elections Nova Scotia.
Figure 5: Screenshot from the Federal registry of lobbyists website.
### Initial data cleaning
As illustrated in the previous section, the unedited, raw data used to build the IJF's eight databases came in many different forms, with structural variation across jurisdictions and over time. As such, the IJF performed some initial data cleaning where they deemed appropriate - that is, where cleaning would improve data usability, but not compromise data authenticity.
Recall that the charity's tax return forms are composed of numerous line items whose numbers and definitions have changed over time. To keep track of this variation in the tax return forms, the IJF built spreadsheets which map every line item number to its correct definition and the years in which it was collected. This type of mapping spreadsheet is also known as a schema. The IJF selected about 250 line items of the over 600 available to include in their published dataset, which include main financial categories in the form and basic identification details about the charity [The Investigative Journalism Foundation, 2023a]. An example of part of the IJF's schema structure is shown in Table 1, where line item 4570 can be seen in Figure 1.
Because all of these data are based on self-reported forms, and only about 1% of charities are audited annually [Canada Revenue Agency], they are prone to many human-made errors such as spelling mistakes, or incorrect dollar amounts recorded. The IJF performed data cleaning and standardization across the charity datasets where appropriate, to improve interpretability and consistency. In all three datasets, they deleted duplicated columns from the raw data and renamed some columns to make them more interpretable. The IJF converted fully capitalized text to lowercase when the text did not consist of proper nouns and converted names and cities to title case. For the tax returns data, they computed which columns add to the totals for each component of the tax return, such as which line items are sub-components of total liabilities. For the gifts received by charities data, the IJF removed rows where the donation amounts were distinctly wrong based on two possible characteristics. These rows are characterized by either a donation amount exactly equal to the charity's unique nine-digit registration number (a likely error in data ingestion), or a donation amount greater than one billion dollars when the charity's total revenues and assets summed to less than one million dollars [The Investigative Journalism Foundation, 2023a].
Since many donations records were only available in static PDF form, optical character recognition (OCR) technology was necessary to convert them to comma-separated value (CSV) form. Converting documents with OCR can lead to a number of errors in the resulting CSV, such as a dollar sign ($) being parsed as a letter S or number 5. The IJF performed extensive manual cleaning to correct these OCR errors wherever possible, checking the original PDF files throughout this process. The IJF also cleaned and standardized a number of columns for clarity. Dates were standardized to YYYY-MM-DD format, donor names in the form of "surname, firstname" were standardized to "firstname surname" format, and abbreviated party names were changed to full party names. Further, the IJF had to collate these data across all jurisdictions to create an amalgamated political donations database.
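As a concrete illustration of these standardization steps, the sketch below shows how the date and name conversions described above could be expressed in pandas. The column names (`donation_date`, `donor_full_name`, `party`) and the party lookup table are hypothetical stand-ins; the IJF's actual cleaning pipeline is more extensive and operates on its own schema.

```python
import pandas as pd

def clean_donations(raw: pd.DataFrame) -> pd.DataFrame:
    """Illustrative sketch of the cleaning steps described above (names assumed)."""
    df = raw.copy()

    # Standardize dates to YYYY-MM-DD, leaving unparseable values blank for review.
    df["donation_date"] = pd.to_datetime(df["donation_date"], errors="coerce").dt.strftime("%Y-%m-%d")

    # Convert "surname, firstname" to "firstname surname" and apply title case.
    df["donor_full_name"] = (
        df["donor_full_name"]
        .str.replace(r"^\s*([^,]+),\s*(.+?)\s*$", r"\2 \1", regex=True)
        .str.title()
    )

    # Expand abbreviated party names using a (hypothetical) lookup table.
    party_map = {"LPC": "Liberal Party of Canada", "CPC": "Conservative Party of Canada"}
    df["party"] = df["party"].replace(party_map)
    return df
```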
The self-reported nature of lobbying registrations, and the amount of free text present within the data, means that there is much variation in spelling of names and targets. In an effort to mitigate some of this variation, title casing was applied to government titles, and any unnecessary numbers
\begin{table}
\begin{tabular}{|l|l|l|l|l|} \hline
**Start year** & **End year** & **IJF table** & **IJF column** & **Column name** \\ \hline \hline
2003 & 2022 & liabilities & ch4300\_liabilities\_accounts\_payable\_accrued\_liabilities & Accounts payable, accrued liabilities \\ \hline
1990 & 1996 & assets & ch123\_assets\_fixed\_other & Other fixed assets (land, buildings) \\ \hline
2003 & 2022 & revenue & ch4570\_revenue\_total\_amount\_from\_govt & Total amount received from government \\ \hline
2003 & 2022 & expenditures & ch5050\_expenditures\_gifts\_to\_qual\_donee & Total amount of gifts to qualified donees \\ \hline
1997 & 2002 & liabilities & ch65\_total\_liabilities & Total liabilities \\ \hline \end{tabular}
\end{table}
Table 1: Part of the IJF’s schema for the CRA tax return form line items from 1990 to 2022.
were removed from the text. In the exact same manner as the donations data, dates and the structure of names were also standardized. Additionally, in the data cleaning process, the IJF discovered that the reporting forms in Quebec and New Brunswick require dollar amounts to be spelled out as text, while those forms in all other jurisdictions require dollar amounts to be written numerically [The Investigative Journalism Foundation, 2023c]. As such, the data for Quebec and New Brunswick had to be converted, both programmatically and manually, to numeric form by the IJF.
As illustrated by this overview of the IJF's initial data cleaning, the process of preparing and creating a dataset requires a number of choices to be made, many of which are informed by characteristics of the raw data only known to those who have access to it. As such, potential errors present in the raw data, in combination with the cleaning and standardization choices made by those involved in dataset construction -- in this case the IJF -- are crucial to consider when developing accurate, valuable expectations to set for the data.
### Test Development
Our approach to developing the test suite can be broadly summarized within the framework introduced by [1] grounded in tests for validity, internal consistency, and external consistency. To implement this framework in data tests, it was necessary to first examine the source forms of the data (e.g., charity tax return forms from 1993 to present which inform the schema), the desired format of the data (i.e., what is displayed publicly), and the methodology employed to create the latter from the former. This approach enables a stronger understanding of the context and structure of the data, what validity and internal consistency look like for the data, and what external data may be relevant for testing, ultimately leading to the development of more comprehensive tests.
#### 3.3.1 Validity
Validity in the IJF's databases largely centers on expectations surrounding missingness. Missingness tests for validity are applied to variables created by the IJF that should be populated across all datasets, jurisdictions, and time, and for which missingness would imply an obvious error. In our test suite, we test the "rid" unique identifier variable and the "added" datetime variable for missingness across all databases where they exist. An additional test for validity we developed checks date format. We developed a test to check that all values in the "date" column of the political donations dataset match the expected YYYY-MM-DD pattern. Another important component of database validity is checking variable classes. An incorrect variable class can lead to misleading results of statistical tests or models, and can be a detriment in data visualization (e.g., a discrete variable which is classified as continuous). The relational database management system used by the IJF checks for variable classes independently; however, in general, variable class checks should be accounted for when developing a test suite.
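A minimal sketch of how these validity tests can be expressed with Great Expectations is shown below, using the classic pandas-backed `ge.from_pandas` entry point (newer GX releases use a context/validator workflow instead). The sample file name, and the assumption that the relevant columns are literally named `rid`, `added`, and `date`, follow the prose above but are otherwise placeholders.

```python
import pandas as pd
import great_expectations as ge

# Hypothetical extract of one IJF table; the real data live in a relational database.
donations = ge.from_pandas(pd.read_csv("donations_sample.csv"))

# IJF-created variables that should never be missing in any database.
donations.expect_column_values_to_not_be_null("rid")
donations.expect_column_values_to_not_be_null("added")

# Donation dates must match the YYYY-MM-DD pattern.
donations.expect_column_values_to_match_regex("date", r"^\d{4}-\d{2}-\d{2}$")

# Run the accumulated expectations and report overall success.
results = donations.validate()
print(results.success)
```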
#### 3.3.2 Internal Consistency
There are a number of columns across all eight databases for which we can reasonably expect either a specific subset of the data, or all of the data, to not be missing. Such tests fall under the internal consistency approach as they focus on the scraped and processed data, for which expectations of missingness are more involved and may depend on other characteristics of the data. As mentioned, there is significant variation in the source data over time and across jurisdictions for all databases, meaning many expectations of missingness are only applicable to data from certain years and regions, in accordance with the IJF's schemas. As a result, these expectations depend on how the data exists internal to the final database, once it has been processed and collated.
To develop missingness expectations for internal consistency, we first looked at all columns present in each dataset, and consulted with the IJF to create a list of those variables where these tests were appropriate. There are two types of tests for missing values we employ: those for data being missing and those for data not being missing. The latter type is applied to columns such as donation amount or lobbyist names, which have been scraped, cleaned, and collated by the IJF, and are expected to be present across all jurisdictions and time. This test type is also used to detect reporting completeness in charity tax returns, which is characterized by a charity reporting a total value for at least one of their expenditures, revenues, assets or liabilities for that fiscal period. We use the former to test for coherence between all line item columns in the charities tax returns database and the IJF's schema
(see the example in Table 1) detailing in which years each line item was collected. We also do so for the salary range line item columns maintained in the charity staff compensation database. Since we know the timeframe in which each line item was present in the T3010 form, we expect that for rows attributed to a fiscal period end date outside a given line item's timeframe on the CRA form, the data for that line item should be null. For instance, line item 65 seen in Table 1 represents total liabilities, and is coded in the IJF schema to have been recorded from 1997 to 2002. Therefore, if our test detects a non-null entry for that line item in a row with a fiscal period end date outside 1997 to 2002, there is an error in either the schema or the parsed data.
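The sketch below illustrates one way to encode this schema-driven missingness expectation, assuming the schema has been loaded as a dataframe with `column`, `start_year`, and `end_year` fields and that a numeric `fiscal_year` column has already been derived from the fiscal period end date; these names are illustrative rather than the IJF's own, and the `row_condition` argument requires a pandas-backed validator.

```python
import pandas as pd
import great_expectations as ge

returns = ge.from_pandas(pd.read_csv("tax_returns_sample.csv"))   # hypothetical extract
schema = pd.read_csv("ijf_schema.csv")                            # one row per line item

for _, item in schema.iterrows():
    # Outside the years in which this line item was collected, its value must be null.
    outside_years = (
        f"fiscal_year < {item['start_year']} or fiscal_year > {item['end_year']}"
    )
    returns.expect_column_values_to_be_null(
        item["column"],
        row_condition=outside_years,
        condition_parser="pandas",
    )
```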
Table 2 provides a summary of all missingness-related tests we developed for internal consistency - that is, all tests developed for where data _should_ be missing, and tests developed for where data _should not_ be missing.
Another core characteristic of an internally consistent database is that the sub-components of a value add up to the correct total. In the political donations data, in addition to the total 'amount' variable, there are separate variables for the monetary and non-monetary contributions. Based on the availability of these variables, we developed a test that assesses whether the amount monetary and amount non-monetary sum to the total amount variable. We also checked for summation consistency across the charities databases. Knowing that there are line items which capture parts of a whole (e.g., lines 4490 to 4650 in Table 1), we set the expectation that the sum of gifts given by the organization in the gifts received by charities database is less than or equal to the "total amount of gifts to qualified donees" value in the expenditures portion of the tax return database. Additionally, we developed a test which checks for internal consistency between the charity's compensation data from their tax return (Figure 2), and the charity's reported compensation expenditures. In particular, we expect that the number of staff paid in each salary range multiplied by the lowest end of that range in the staff compensation data is less than the "Total compensation" amount in the expenditures portion of the tax return data. An example of this is provided in Figures 6 and 7 below -- we can see that total reported compensation from the highest paid employees is equal to \(2\times 40,000=80,000\), which is less than the total compensation amount of \(166,491\), so our expectation is met.
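A sketch of the compensation lower-bound check is given below. The salary-range columns, their dollar floors, and the merged input file are hypothetical; the idea is simply to precompute the lower bound in pandas and then express the comparison as a single-column expectation on the derived difference.

```python
import pandas as pd
import great_expectations as ge

# Hypothetical salary-range columns and the bottom of each range, in dollars.
range_floors = {"staff_40k_to_79k": 40_000, "staff_80k_to_119k": 80_000, "staff_120k_plus": 120_000}

merged = pd.read_csv("compensation_vs_expenditures.csv")  # joined on registration number
merged["compensation_floor"] = sum(
    merged[col].fillna(0) * floor for col, floor in range_floors.items()
)
# A non-positive difference means the reported total covers the computed floor.
merged["floor_minus_total"] = merged["compensation_floor"] - merged["total_compensation"]

tax_ge = ge.from_pandas(merged)
tax_ge.expect_column_values_to_be_between("floor_minus_total", max_value=0)
```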
In developing a test for internal consistency across the lobbying databases, we verified with the IJF team that we can reasonably expect that for each unique lobbying registration present in the revolving door database (based on record identification (RID)), there should be an entry with the same RID in the lobbying registrations database. As such, we added this expectation to our test suite. Figure 8 is a screenshot from the revolving door data, and Figure 9 is a screenshot from the lobbying registrations data. Notice the two distinct RID's found in the revolving door database for Don Stickney at Lululemon, outlined in red and blue respectively, can be found in the registrations database outlined in the same colors. This is an example of what informed our expectation, and what exactly we are testing for with this expectation.
The final test for internal consistency we developed spans all databases for which there is a region variable (i.e., all lobbying databases and the political donations database). We test that all values in the 'region' variable are equal to one of the official English names of the Canadian provinces and territories, with correct capitalization, such as "Newfoundland and Labrador".
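A sketch of this membership test is shown below; whether federal records carry a separate region label such as "Federal" is an assumption that would need to be confirmed against the data before adding it to the allowed set.

```python
import pandas as pd
import great_expectations as ge

PROVINCES_AND_TERRITORIES = [
    "Alberta", "British Columbia", "Manitoba", "New Brunswick",
    "Newfoundland and Labrador", "Northwest Territories", "Nova Scotia",
    "Nunavut", "Ontario", "Prince Edward Island", "Quebec",
    "Saskatchewan", "Yukon",
]

lobbying = ge.from_pandas(pd.read_csv("lobbying_communications_sample.csv"))
# Add "Federal" to the set if federal records are labelled that way in the region column.
lobbying.expect_column_values_to_be_in_set("region", value_set=PROVINCES_AND_TERRITORIES)
```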
Figure 6: Staff compensation data for St. Thomas Elgin Food Bank in 2022.
\begin{table}
\begin{tabular}{|p{85.4pt}|p{85.4pt}|p{85.4pt}|p{85.4pt}|} \hline
Database & Expectation & Applicable data & Columns to be tested \\ \hline
Donations & Expect no data to be missing & All rows & Amount, donor full name, region, political party, donation year, recipient, political entity, donation date \\ \hline
Lobbying Registrations & Expect no data to be missing & All rows & Registration number, org name, region, subject matters, targets, affiliates, categories \\ \hline
Lobbying Communications & Expect no data to be missing & All rows & Subject matters, name, lobbyist, targets \\ \hline
Government Funding & Expect no data to be missing & All rows & Region, entity, registration number, sum, source, financial end \\ \hline
Charities Tax Returns & Expect that tax returns do not have data for line items which were not present on the T3010 associated with the fiscal year & All rows, conditional on the years specified in the IJF's schema for each given line item & All line item columns \\ \hline
Charities Tax Returns & Expect that the value for at least one of total assets, expenditures, revenue, and liabilities is not missing. & All rows except for those attributed to the first return submitted by a charity upon charitable registration, or associated with a charity's status revocation. & All line items representing total assets, expenditures, revenue, and liabilities \\ \hline
Charity Staff Compensation & Expect that the compensation section of tax returns do not have data for line items which were not present on the T3010 associated with the fiscal year. & All rows, conditional on the years specified in the IJF's schema for each given line item. & All line items representing the number of staff paid in various salary ranges. \\ \hline
\end{tabular}
\end{table}
Table 2: Summary of missingness tests developed for internal consistency.
#### 3.3.3 External Consistency
The final element of our test suite checks for external consistency. Tests of this nature require relevant external benchmarks against which we can check data values. We developed such a test for the political donations data. The IJF's methodology page summarizes the legal limits for political donations in each region according to political finance regulations as of November 2022. Using this summary, we intend to test all 2022 donations data against the legal limit for their jurisdiction. We do not screen earlier years' data in this manner because summarizing the evolution of legal donation limits over time for each jurisdiction is a non-trivial research task. This is because legal donation limits evolve not only over time and region, but also by type of donor (e.g., individual, corporate, etc.). Given the need for external data to develop these tests, they require additional research, and as such we plan to develop more tests of this type in future work; details are given in Section 5.
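A sketch of this external-consistency test is shown below. The federal figure of $1,675 comes from the findings discussed later in the paper; the other limits and the column names are placeholders standing in for the values summarized on the IJF's methodology page.

```python
import pandas as pd
import great_expectations as ge

# Placeholder 2022 limits keyed by jurisdiction; only the federal value is taken
# from the text, the rest must be filled in from the IJF methodology page.
LEGAL_LIMITS_2022 = {"Federal": 1675, "Ontario": 3350, "Alberta": 4300}

donations = ge.from_pandas(pd.read_csv("donations_sample.csv"))
for region, limit in LEGAL_LIMITS_2022.items():
    donations.expect_column_values_to_be_between(
        "amount",
        max_value=limit,
        row_condition=f'donation_year == 2022 and region == "{region}"',
        condition_parser="pandas",
    )
```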
### Implementation Process
To implement our data tests programmatically, we employed Python's Great Expectations (GX) library. GX provides a variety of pre-built expectation functions that are easy to implement, with a corresponding glossary outlining what each function tests, its arguments, and its outputs. Importantly, many GX functions can be supplied with a 'row_condition' argument, allowing the user to apply the function to only those rows which meet the specified condition.
Before running the test suite, we needed to first transform some of the data to be passed to its corresponding GX function. For the donations data, we had to remove all non-numeric characters (i.e., dollar signs and commas) from the amount column, and then convert that column to numeric. We also had to create an additional column in this dataframe equal to the absolute difference between the amount value and the sum of the amount monetary and amount non monetary values. While
Figure 8: Screenshot from revolving door database.
Figure 7: Expenditures portion of St. Thomas Elgin Food Bank’s 2022 tax return.
GX does provide a built-in function for comparing multi-column sums, it only has the capability to check those row-wise sums against a single value for all rows in the dataset, meaning we cannot compare column sums to a unique total value (in this case, the amount value) in each row. Performing data manipulation enabled us to perform the test using a different GX function without compromising on its design. For the charity data, we had to similarly remove dollar signs and commas from all line item variables and convert them to numeric. We also had to convert the fiscal period end column to datetime format. Due to the scale of the charities database, and the fact that it is maintained by the IJF in a number of distinct tables, evaluating expectations required extensive data manipulation including merging multiple dataframes on distinct charity registration numbers, transposing dataframes, and computing aggregates.
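The preparation steps described above might look roughly like the following sketch, again with illustrative column names; the essential trick is that the per-row total comparison becomes a fixed-bound expectation on a derived difference column.

```python
import pandas as pd
import great_expectations as ge

df = pd.read_csv("donations_sample.csv")  # hypothetical extract of the donations table

for col in ["amount", "amount_monetary", "amount_non_monetary"]:
    # Strip dollar signs and thousands separators, then coerce to numeric.
    df[col] = pd.to_numeric(
        df[col].astype(str).str.replace(r"[\$,]", "", regex=True), errors="coerce"
    )

# Derived column: |amount - (monetary + non-monetary)| for each row.
df["amount_sum_diff"] = (
    df["amount"] - (df["amount_monetary"] + df["amount_non_monetary"])
).abs()

donations = ge.from_pandas(df)
donations.expect_column_values_to_be_between("amount_sum_diff", min_value=0, max_value=5)
```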
Having written the code to prepare the data as necessary, we implemented our tests on a random sample of about 10,000 rows of data based on ID number where available. In doing so, we found a large number of exceptions to many of these tests. For instance, we found that a large proportion of the fiscal period end date data in the government funding database were missing, and we found a number of rows in the political donations data where 2022 donations exceeded the legal limit for that year. Such findings prompted us to explore whether these exceptions indicate true errors in the data or are indicative of a need to adjust our expectations based on some characteristic of the data we had not been aware of. In many cases it was the latter. To determine this, we looked at the data which failed each test and performed exploratory analysis to detect patterns in these data. This process uncovered very informative and interesting trends. For instance, we found that for some columns with large proportions of missing values, those rows with missing data belonged almost entirely to a subset of regions and/or years. We also found that a large number of rows in the donations dataset with an amount value over the legal donation limit had the name "Contributions Of $200 Or Less/Contributions De 200 $ Ou Moins".
Presenting these observations to the IJF team who are equipped with fundamental domain knowledge on the data, we were able to identify inaccuracies in our expectations, and adjust them accordingly. Expectations for two columns in which we set the expectation that no data should be missing were updated to account for different regional reporting requirements, recalling that these databases are an amalgamation of data across jurisdictions. For instance, the fiscal period end date variable in the government funding data is only collected in the Federal and Saskatchewan jurisdictions, meaning we can only truly expect there to be non-null values for that variable in those two jurisdictions. We adjusted the code for that test to reflect this. Additionally, we learned that donations with a donor name of "Contributions Of $200 Or Less/Contributions De 200 $ Ou Moins" are aggregated donation values, meaning that the legal limit should not be applied to them.
After making these initial adjustments and re-running our test suite, we performed additional exploratory data analysis on tests that did not produce a 100% success rate. Doing so allowed us to confirm that we had correctly accounted for characteristics in the data that we had previously excluded from our expectation code due to a lack of domain knowledge, and to check for any other trends present in the data that did not pass a given test. Having confirmed that there were no remaining patterns that implied inaccurate expectation conditions, we generalized our code by applying it to a larger sample of the data. We then iterated on our data expectations by testing larger samples of the data with each iteration and exploring the flagged data to identify interesting patterns.
This process led us to detect additional characteristics of the donations data which improved the accuracy of our data tests. For instance, in subsequent iterations of the test for external consistency of 2022 donations amounts and the 2022 legal limits, we identified a number of exceptions where the donor name included "Estate of". Upon further investigation, we learned that continuing contributions from a testamentary trust made before 2015 in the Federal jurisdiction have been subject to different legal limits (Furrow, 2015), and those made before November 2017 in British Columbia were not subject to a limit at all (Carman, 2020). Additional iterations uncovered other characteristics of the political
Figure 9: Screenshot from the lobbying registrations database.
donations data and legal regime unbeknown to us, requiring modification of our test code. Further, many rows in the donations data had missing values for monetary amount and non-monetary amount. This disaggregation of amount type was only collected by Elections Canada after the year 2000. Based on this, we adjusted the expectation code to only run the test on post-2000 data. These and other discoveries highlight our lack of knowledge about political donation jurisprudence and record-keeping and the necessity of this knowledge for comprehensive data validation. Table 3 provides a summary of the tests that required adjustment following this iterative implementation process, and describes what adjustments were made and why.
Evidently, the process of implementing our data validation test suite is iterative in nature, and required extensive fundamental domain knowledge to ensure the final tests were as accurate and informative as possible.
## 4 Preliminary Findings
At present, we have implemented all our expectations for the donations and lobbying databases, and most of our expectations for the charities databases. We are actively working to implement the final few expectations for charities in Python. Note that our preliminary findings are based on data queried from the development environment which may be slightly different from what is shown in production.
For the donations data, our expectations that for all rows, the values of donation amount, donor full name, political party, region, donation year, and recipient should not be null were met with a 100% success rate. The expectation that the political entity must not be null had a 97.76% success rate in the data. Further, our test did not catch any exceptions to our expectation that where applicable, monetary and non-monetary donation amounts summed correctly to the reported total within a margin of error of \(\pm 5\) dollars. All donation date values matched the expected regular expression pattern based on the YYYY-MM-DD date format. Finally, for all regions except Federal, British Columbia, and Quebec, there were no donations in 2022 that exceeded the legal limit. Those three regions had 2, 1, and 11 exceptions, respectively. The two at the Federal level belong to individuals who were electoral candidates at the time, meaning they can legally donate beyond $1675. The one exception for British Columbia appears to be a duplicate of another entry belonging to an estate donation. The
\begin{table}
\begin{tabular}{|p{85.4pt}|p{142.3pt}|p{142.3pt}|} \hline
Database & Original Expectation & Condition added and explanation \\ \hline \hline
Donations & Expect donation dates to never be missing. & Region must be equal to one of Federal, Ontario, or British Columbia, as these are the only jurisdictions which collect this variable. \\ \cline{2-3}
 & Expect that 2022 donation amounts do not exceed the legal donation limit for their jurisdiction. & Donor full name must not contain “Estate of”, “Contributions of” or “Total Anonymous Contributions”. Estate contributions have distinct legal limits, and names which contain the latter two phrases are aggregates. Also, the political entity entry must not contain “Leadership”, because the legal limits for donations differ for leadership contestants. \\ \cline{2-3}
 & Expect that for all the donations data, the maximum absolute difference between “amount” and the sum of “amount monetary” and “amount non-monetary” is 5. & The year must be greater than 2000, the jurisdiction must be Federal, and at least one of the “amount monetary” and “amount non-monetary” values must not be null. \\ \hline
Government Funding & Expect the financial end to never be missing. & Region must be equal to one of Federal or Saskatchewan, as these are the only jurisdictions which collect this variable. \\ \hline
\end{tabular}
\end{table}
Table 3: Overview of adjustments made to data tests following initial implementation.
exceptions for Quebec require additional investigation.
For lobbying registrations, the only tests which did not have perfect success rates were those where we tested that all registration numbers, organization names, and regions must not be null. These had 99.66%, 99.44%, and 99.45% success rates, respectively. Across the other three lobbying databases, there were no missing region entries. For government funding, only one test caught exceptions, and that was due to null source entries we did not expect to be missing (98.91% success rate). Two lobbying communications data tests were not perfect - we found two null target entries, and 25 null subject matter entries in the data. Finally, the expectation we set that for each unique lobbyist per organization in the revolving door database, there should be an entry with their name in the lobbying registrations database, had a 99.58% success rate.
Across all donations and lobbying databases, we tested the expectation that all entries for the region variable must be the official English name of one of the Canadian provinces and territories. The only database which failed this test was lobbying communications, where we found 644 rows with a region listed as "Bc_Reports". This exemplifies the value of testing internal consistency, especially with a database as large as the IJF's. Having detected this inconsistency in region name, the IJF can now defer to the raw data, identify where this inconsistency originates, and modify their data cleaning pipeline accordingly such that newly scraped data which have this region name are subject to appropriate standardization.
Because of the scale of the charities databases, we took a random sample of these data by randomly selecting 20,000 registration numbers and querying all tax returns associated with those registration numbers. For this random sample of charities data, our tests detected 61 line item variables where there were non-null values in line items that were not collected on the associated year's reporting forms (according to the IJF's schema). These all warrant further investigation, to detect whether the exceptions stem from an issue in the raw data (e.g., the charity completing an out-of-date form), or the IJF's schema. It should be noted that CRA data includes a number of records from before 1990 which is beyond the scope of the IJF's database. For this reason, the IJF's schema does not account for pre-1990 records, meaning our findings based on this sample and the IJF's schema may have inflated error rates. Our expectation that the sum of gifts given by the organization in the gifts received by charities database is less than or equal to the reported total amount of gifts to qualified donees expenditures line-item value had a \(\approx 95.0\%\) success rate. This may in part be attributed to incomplete T1236 forms being filed. The expectation that the reported total compensation amount in expenditures is greater than the calculated lower bound was run separately from 2002 onwards. Our sample of data from 2003 to 2008 and 2009 to 2022 produced an \(\approx 97.8\%\) and \(\approx 98.5\%\) success rate, respectively. Finally, we test that the value of at least one of total assets, expenditures, revenue or liabilities is not missing unless it is a charity's first return filed. This test had an \(\approx 97.3\%\) success rate.
Evidently, many of the expectations we set for the data were met when tested with code. While some of these expectations may seem simple, testing them programmatically and reporting the results as we have enables users to have a clearer image of the data with which they are working. Further, the tests which did not have a 100% success rate have prompted us to dig deeper into the data to decide whether to adjust our expectations, modify the IJF's methodology, or flag inconsistencies or errors inherent to the raw data.
## 5 Future Work
While this project is still a work in progress, there are some interesting avenues of future work we would like to pursue. For the political donations data, we currently check data from 2022 against the legal limits for that year because we do not have a summary of the evolution of legal donation limits over time. In future, we would like to create this schema and use it to check donation amounts both before and after 2022 against the legal limits for the corresponding year. Creating this schema will be time-consuming, as we will need to check the legal limits for each year individually across all regions and account for any differences based on who is donating (e.g., individuals or candidates) and when they are making the donation (e.g., different electoral events).
Another avenue of future work would be to develop more detailed, comprehensive tests across the charities databases. We would like to set additional expectations that facilitate validation across line items in charities' tax returns (e.g., expenditures, total gifts given to qualified donees, etc.). It would be valuable to implement automated checks to test that the sum of non-null line items add up to the
total value reported using Great Expectations. The IJF has checked this for all the tax returns up to February 2023 themselves, an example of which can be seen in Figure 10, signified by the asterisk next to the total revenue. However, these tests were completed independently of the data cleaning pipeline. Running and deploying these tests with a tool like Great Expectations would allow for newly ingested data to be checked automatically as well. This is a particularly challenging task because for a high proportion of returns where the total is inconsistent with the addends, the tax return itself reports inconsistent or incorrect values and the error in summation is not an error on the part of the IJF's scraping, ingesting, or cleaning process. This problem is thus one of external rather than internal consistency. However, running this validation test is impractical at scale for a dataset of this magnitude. Also, auditing tax returns is the work of the CRA and not the IJF.
It is also of interest to develop more free text-focused expectations, which would allow us to detect more inconsistencies in the data, especially within and across the lobbying databases which contain a large amount of text data. For instance, the IJF methodology pages for the charities and lobbying databases mention changing names written in the form "last name, first name" to the form of "first name last name". Developing an automated data test to check for this pattern would enable the IJF to catch any rows of the data that they missed in the data standardization process.
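One such free-text expectation could be a negative regular-expression check that flags any name still written in "surname, firstname" form, as sketched below with a hypothetical `lobbyist` column name.

```python
import pandas as pd
import great_expectations as ge

registrations = ge.from_pandas(pd.read_csv("lobbying_registrations_sample.csv"))
# Flag values that still look like "Surname, Firstname" after standardization.
registrations.expect_column_values_to_not_match_regex(
    "lobbyist", r"^[A-Za-z.'\- ]+,\s*[A-Za-z.'\- ]+$"
)
```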
Finally, we would like to develop an algorithm for checking duplicate rows in the data that contain only one minor difference, such as extra whitespace between two words. This is beyond the pre-defined functions made available in Great Expectations, and would require additional programming work, especially since we would ideally develop this algorithm for each database published by the IJF.
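A minimal starting point for such an algorithm is sketched below: it normalizes whitespace and letter case before re-checking for duplicates, so rows that differ only by an extra space collide. Handling richer variation (typos, punctuation) would require fuzzier matching, so this is only a first pass, not the eventual per-database algorithm.

```python
import pandas as pd

def whitespace_insensitive_duplicates(df: pd.DataFrame, text_cols: list[str]) -> pd.DataFrame:
    """Return rows that become duplicates once whitespace and case are normalized."""
    normalized = df.copy()
    for col in text_cols:
        normalized[col] = (
            normalized[col]
            .astype(str)
            .str.strip()
            .str.replace(r"\s+", " ", regex=True)  # collapse runs of whitespace
            .str.casefold()
        )
    mask = normalized.duplicated(subset=text_cols, keep=False)
    return df[mask]
```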
For data validation more generally, an interesting future endeavor would be to explore the capacity of large language models (LLMs) such as GPT-4 to produce a suite of data validation tests for a dataset via prompt-based in-context learning. While valuable, existing tools for developing automated data testing such as Great Expectations are inadequate on their own for producing a comprehensive suite of data tests which are as accurate as possible for the data at hand. This is because they are limited to a set of predefined test functions which do not incorporate domain knowledge, and as exemplified in the need to impose row conditions throughout our test suite, domain knowledge is a fundamental prerequisite for accurately understanding and assessing data. Further, developing a suite of tests can be quite time consuming and difficult, particularly for individuals who do not prioritize validation in their database construction process or who were not involved in the database construction process. As such, exploring the quality and breadth of data tests produced by LLMs in comparison to those created by an individual (such as those developed in this work) or by a simple automated data validation suite may encourage others to prioritize data validation in their data pipelines, especially due to the decreased mental effort associated with LLM outsourcing.
Figure 10: Example of IJF charities tax return data where sub-parts do not add up to the reported total (see asterisk).
## 6 Conclusion
Data validation is an important component of all workflows which use data for producing reproducible, transparent, and high-quality work. As illustrated in this project, not only does implementing automated data validation allow the IJF to check the quality, validity and consistency of their data, but it also facilitates transparency in outlining the true methodological assumptions that underpin each database.
The process of developing a data test suite for the IJF presented a number of valuable lessons:
**Understand your data backwards and forwards**. Familiarizing oneself with both the databases presented to the public and those used internally is crucial when beginning to build an expectation suite. Doing so allows for a balance of focus and understanding between the raw and amalgamated data, which leads to the development of more thorough data expectations.
**Don't be afraid to wrangle**. Data wrangling is sometimes necessary to run tests in the way that you desire -- more so when implementing those tests with a tool such as Great Expectations, that has predefined functionality. Data tests should not be modified or compromised to fit the limited test functions available. Users should harness the powerful tool of data wrangling to manipulate their data in such a way that it fits the format necessary for validation.
**Iterate**. This work has exemplified that the process of implementing data validation is necessarily iterative, and is continually being updated by failed tests and new knowledge. This is simplest when breaking the data into manageable chunks and gradually increasing sample size. Exploring the flagged data with each iteration presents an opportunity to identify important trends and characteristics inherent to the data, which can inform test development.
**Expertise is key**. Arguably the most important lesson learned, however, is that relying on the data alone is insufficient for producing a comprehensive suite of tests. Domain knowledge is a crucial component for developing accurate data tests and interpreting the results of those tests. In our case, this involved collaborative efforts with the IJF to acquire knowledge of political donations regulations, historical tax return forms, and regional differences in reporting requirements across all the databases. The danger of a user or data scientist making assumptions about the data based on personal expectations informed only by the data is that they do not always hold true in practice, and could result in misleading conclusions.
Though the data validation process is by its nature never completed, our work offers a fundamental basis for testing that core beliefs about the data hold true at scale. Further, this work has illustrated the extent of resources necessary to build a test suite that is both valuable and accurate. Despite this difficulty, data validation work is vital to reproducible, transparent, and high-quality data analysis workflows across research domains from machine learning to political science to neuroscience. Our work here can serve as a framework for other research projects undergoing similar challenges. The complexity and importance of validation in all data-focused workflows illustrates the need for developments in the realm of making data validation easier and more accessible to implement for individuals of all backgrounds.
|
2309.10049 | Morphological evidence for nanoflares heating warm loops in the solar
corona | Nanoflares are impulsive energy releases by magnetic reconnection in the
braided coronal magnetic field, which is a potential mechanism for heating the
corona. However, there are still sporadic observations of the interchange of
braiding structure segments and footpoints inside coronal loops, which is
predicted to be the morphological evolution of the reconnecting magnetic
bundles in the nanoflare picture. This work aims to detect the evolutions of
the pairs of braiding strands within the apparent single coronal loops observed
in Atmospheric Imaging Assembly (AIA) images. The loop strands are detected on
two kinds of upsampled AIA 193 \AA\ images, which are obtained by upscaling the
Point Spread Function matched AIA images via Bicubic interpolation and are
generated using a super-resolution convolutional neural network, respectively.
The architecture of the network is designed to map the AIA images to
unprecedentedly high spatial resolution coronal images taken by High-resolution
Coronal Imager (Hi-C) during its brief flight. At times, pairs of separate
strands that appear braided together later evolved into pairs of almost
parallel strands with completely exchanged parts. These evolutions offer
morphological evidence that magnetic reconnections between the braiding strands
have taken place, which is further supported by the appearance of transient hot
emissions containing significant high-temperature components (T > 5MK) at the
footpoints of the braiding structures. The brief appearances of the two
rearranging strands support that magnetic reconnections have occurred within
what appears to be a single AIA loop. | Y. Bi, J. J. Yang, Y. Qin, Z. P. Qiang, J. C. Hong, B. Yang, Z. Xu, H. Liu, K. F. Ji | 2023-09-18T18:06:07Z | http://arxiv.org/abs/2309.10049v1 | # Morphological evidence for nanoflares heating warm loops in the solar corona +
###### Abstract
Context:Nanoflares are impulsive energy releases by magnetic reconnection in the braided coronal magnetic field, which is a potential mechanism for heating the corona. However, there are still sporadic observations of the interchange of braiding structure segments and footpoints inside coronal loops, which is predicted to be the morphological evolution of the reconnecting magnetic bundles in the nanoflare picture.
Aims:This work aims to detect the evolutions of the pairs of braiding strands within the apparent single coronal loops observed in Atmospheric Imaging Assembly (AIA) images.
Methods:The loop strands are detected on two kinds of upsampled AIA 193 A images, which are obtained by upscaling the Point Spread Function matched AIA images via Bicubic interpolation and are generated using a super-resolution convolutional neural network, respectively. The architecture of the network is designed to map the AIA images to unprecedentedly high spatial resolution coronal images taken by High-resolution Coronal Imager (Hi-C) during its brief flight.
Results:At times, pairs of separate strands that appear braided together later evolved into pairs of almost parallel strands with completely exchanged parts. These evolutions offer morphological evidence that magnetic reconnections between the braiding strands have taken place, which is further supported by the appearance of transient hot emissions containing significant high-temperature components (T \(>\) 5MK) at the footpoints of the braiding structures.
Conclusions:The brief appearances of the two rearranging strands support that magnetic reconnections have occurred within what appears to be a single AIA loop.
## 1 Introduction
One of the most challenging problems in solar physics is how the solar corona is heated up to a temperature of millions of degrees, far above that of the photosphere, although it is widely accepted that the magnetic field plays a major role in the energetics of the bright corona.
The bright coronal loops are the building blocks of the solar corona. Therefore, understanding the heating mechanisms of coronal loops is important for understanding how the corona is heated. Based on the temperature regime, the loops observed in EUV are classified as warm loops and hot loops (Reale, 2014), which confine plasma at temperatures around 1-1.5 MK and around or above 2 MK, respectively. The model developed by van Ballegooijen et al. (2017) indicated that the Alfven wave turbulence launched from the photosphere can produce enough heat to maintain a peak temperature of about 2.5 MK in the coronal loops. Also, a large number of transverse waves have been deduced from the observed Alfvenic motion of coronal features (McIntosh et al., 2011), spicules (De Pontieu et al., 2007), and network jets (Tian et al., 2014; Shen, 2021), as well as the falling solar prominence knots (Bi et al., 2020).
Energy releases from small-scale magnetic reconnections are another promising mechanism to heat the corona (Klimchuk, 2006). It has been accepted that the small-scale events of magnetic reconnections (Testa et al., 2013; Gupta et al., 2018; Priest et al., 2018; Asgari-Targhi et al., 2019; Chitta et al., 2020) are responsible for the heating of the hot loops or hot plasma in the corona (Klimchuk, 2006; Schmelz et al., 2015; Ishikawa et al., 2017; Yang et al., 2018; Zhang et al., 2023). It seems more controversial how the warm loops are heated. Some warm loops might be globally cooling from the hot loops (Winebarger & Warren, 2005; Viall & Klimchuk, 2011; Li et al., 2015), but many long-lived warm loops would be much less visible in the hot EUV channels. Although Alfven waves originating in the photosphere may provide sufficient energy for heating the warm loops (van Ballegooijen et al., 2017), observations supporting the magnetic reconnection-type heating in the warm loops were also presented, such as the transient brightnesses found around the footpoints of the warm loops (Regnier et al., 2014; Subramanian et al., 2018), short-lived warm loops that impulsively appeared in and faded out during a few minutes and never achieve million
degree temperatures (Winebarger et al., 2013), and the reconnection outflow-like plasma (termed nanojet) within the warm loops composed of misaligned strands (Antolin et al., 2021).
A nanoflare refers to an impulsive energy release in the braided coronal magnetic field (Parker, 1988), and nanoflares are considered the most promising mechanism for the generation of hot plasma in active regions by small-scale reconnection. According to the nanoflare scenario, when the strands reconnect, they exchange segments and footpoints (Berger and Asgari-Targhi, 2009; Klimchuk, 2015). However, the morphological evolutions of sub-arcsecond strands linked to nanoflares are still challenging to identify, probably due to the existing performance limits of coronal observations and the scarcity of coronal observations with spatial resolution less than 1\({}^{\prime\prime}\), such as the High-resolution Coronal Imager (Hi-C; Kobayashi et al., 2014) and the Extreme Ultraviolet Imager (EUI).
It has been reported that two braiding structures (Cirtain et al., 2013) were detected in the images taken by Hi-C, which observed a bandpass dominated by the Fe xii 193 A line with a pixel size of \(\sim\) 0\({}^{\prime\prime}\).1 (\(\sim\) 75 km on the Sun) over a period of a few minutes. Using NLFF extrapolation, Thalmann et al. (2014) found that the braided structure observed by Hi-C was a low-lying twisted flux rope above a penumbral filament region. However, limited by its brief period of observation, Hi-C was unable to confirm whether these tangled loops are associated with energy release in the solar corona. Most recently, using new observations with high spatial resolution (a pixel size of 125-135 km on the Sun) from EUI on board Solar Orbiter, Chitta et al. (2022) reported the untangling of small-scale coronal braids, giving rise to coronal loops that run more parallel with each other. By contrast, the uninterrupted observations taken by the Atmospheric Imaging Assembly (AIA; Lemen et al., 2012) onboard the Solar Dynamics Observatory (SDO) have a larger pixel size of \(\sim\) 0\({}^{\prime\prime}\).6 and can therefore hardly resolve the braiding substructures seen in the Hi-C and EUI images.
Recently, various machine learning (ML) models have been applied to create artificial solar images to further extend the performance of the current solar observations (Kim et al., 2019; Szenicer et al., 2019; Bai et al., 2021; Hong et al., 2021; Dos Santos et al., 2021; Pincei et al., 2021; Yu et al., 2021). In particular, Diaz Baso and Asensio Ramos (2018) adopted a deep neural network approach to deconvolve and superresolve Helioseismic and Magnetic Imager (HMI; Schou et al., 2012) images and found that the synthetic HMI images contained information not present in the original data, which supported that a certain deep convolutional neural network could be applied to solar image enhancement. Super-Resolution (SR) is a classic problem in computer vision, which is to recover a high-resolution image from a low-resolution image. Because there are multiple solutions for any given low-resolution pixel, SR is generally an ill-posed inverse problem. Its solution pipeline is suggested to be equivalent to a deep convolutional neural network (Dong et al., 2016), and then a proper architecture of a convolutional neural network could be applied to directly learn an end-to-end mapping between low- and high-resolution images.
The rest of the paper is structured as follows. Section 2 introduces the details of the algorithm for upscaling the AIA 193 A images. Here, the images are upscaled by the ML-based mapping from AIA to Hi-C 193 A images, as well as by the method of deconvolution and upscaling interpolation. Section 3 presents three events not covered by Hi-C observations, in which the reconnection-like rearrangements of the braiding strands are observed on the upscaled 193 A images and the transient hot emissions are detected on the AIA 94 A images. Discussions and a brief summary are presented in Sects. 4 and 5.
## 2 Method
We use two methods to upscale the AIA 193 A images to a pixel size of 0\({}^{\prime\prime}\).15. Firstly, the AIA images are deconvolved with the Point Spread Function (PSF1; Boerner et al., 2012) via the Richardson-Lucy algorithm, and then the resulting PSF-matched images are upsampled using Bicubic interpolation. We term them PSF-Bicubic upscaled images. Secondly, the ML-upscaled AIA 193 A images are generated using a super-resolution network mapping the AIA 193 A images to Hi-C images.
Footnote 1: The PSF for AIA is available in SolarSoft ([http://www.lmsal.com/solarsoft/](http://www.lmsal.com/solarsoft/)).
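The PSF-Bicubic path can be sketched in Python as below. The AIA point spread function itself must be obtained from SolarSoft, so `psf` is a placeholder here, the number of Richardson-Lucy iterations is an assumption, and scipy's cubic-spline `zoom` stands in for the bicubic interpolation step.

```python
import numpy as np
from scipy import ndimage
from skimage.restoration import richardson_lucy

def psf_bicubic_upscale(aia_img: np.ndarray, psf: np.ndarray, n_iter: int = 25) -> np.ndarray:
    # Richardson-Lucy deconvolution; the routine expects intensities scaled to [0, 1].
    img = aia_img.astype(float)
    img = img / img.max()
    deconvolved = richardson_lucy(img, psf, n_iter)
    # Upsample by a factor of 4 (0.6" -> 0.15" pixels) with cubic interpolation.
    return ndimage.zoom(deconvolved, 4, order=3)
```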
To generate the ML-upscaled AIA 193 A images, we applied the wide activation super-resolution network (WDSR; Yu et al., 2018; Fan et al., 2018), a kind of ML network for single image super-resolution (SR), to upsample the AIA 193 A images by a factor of 4. As shown in Fig. 1, the network mainly consists of 16 residual blocks (He et al., 2016). Each basic block is made up of three convolutional layers and starts with the feature maps being extended to 384 channels with a 1\(\times\)1 kernel. The channel-number expansion before the ReLU activation layer, termed wide activation (Yu et al., 2018), allows more information to pass from shallow layers to deeper ones. A global residual connection is applied to relieve the redundant features potentially generated by deep network architectures (Ledig et al., 2017). A pixel-shuffling layer (Shi et al., 2016) is utilized at the end of the network to upsample the final feature maps into the SR output. Weight Normalization layers (Salimans and Kingma, 2016) are used to ease the training difficulty of deep networks. The total number of trainable parameters for the network amounts to \(1.2\times 10^{6}\).
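A hedged PyTorch sketch of this architecture is given below. The expansion width (384 channels), block count (16), ×4 pixel shuffle, and weight normalization follow the description above; the base feature width (32), single-channel input, and exact layer ordering inside each block are assumptions rather than the authors' released configuration.

```python
import torch
import torch.nn as nn
from torch.nn.utils import weight_norm


class WideBlock(nn.Module):
    """Wide-activation residual block: widen to 384 channels, ReLU, then contract."""

    def __init__(self, n_feats: int = 32, expand: int = 384):
        super().__init__()
        self.body = nn.Sequential(
            weight_norm(nn.Conv2d(n_feats, expand, kernel_size=1)),
            nn.ReLU(inplace=True),
            weight_norm(nn.Conv2d(expand, n_feats, kernel_size=1)),
            weight_norm(nn.Conv2d(n_feats, n_feats, kernel_size=3, padding=1)),
        )

    def forward(self, x):
        return x + self.body(x)  # local residual connection


class WDSRSketch(nn.Module):
    """x4 super-resolution: 16 wide blocks, pixel-shuffle tail, global skip branch."""

    def __init__(self, scale: int = 4, n_feats: int = 32, n_blocks: int = 16):
        super().__init__()
        self.head = weight_norm(nn.Conv2d(1, n_feats, kernel_size=3, padding=1))
        self.body = nn.Sequential(*[WideBlock(n_feats) for _ in range(n_blocks)])
        self.tail = nn.Sequential(
            weight_norm(nn.Conv2d(n_feats, scale ** 2, kernel_size=3, padding=1)),
            nn.PixelShuffle(scale),  # S^2 channels -> x4 larger single-channel map
        )
        self.skip = nn.Sequential(  # global residual branch straight from the input
            weight_norm(nn.Conv2d(1, scale ** 2, kernel_size=5, padding=2)),
            nn.PixelShuffle(scale),
        )

    def forward(self, x):
        return self.tail(self.body(self.head(x))) + self.skip(x)
```

Applied to a \(48\times 48\) input patch, such a model returns a \(192\times 192\) super-resolved patch.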
Five pairs of AIA and Hi-C images are used to train the ML-upscaled model. The observed time difference between each pair of images is less than 3 seconds. Using these images we built the training set and validation set as follows. Firstly, the Hi-C images with a pixel size of 0\({}^{\prime\prime}\).1 are downsampled via bilinear interpolation to a pixel size of 0\({}^{\prime\prime}\).15, a quarter of the pixel size of the AIA images. Secondly, we aligned the Hi-C to AIA images via cross-correlation. Finally, two small patches (indicated
Figure 1: Architecture of the WDSR convolutional neural network for upsampling factor \(\times\)4. All feature maps in each convolutional layer have the same size \(H\times W\) as the input image/patch. For an upscaling factor \(\times S\) (\(\times\)4), the number of channels used as the input for the Pixel-Shuffle layer is exactly \(S^{2}\) (16).
by the lower and upper blanked regions in Fig. 2d) in these images are extracted as the validation and testing sets. The training set is made up of all frames of the data with the validation and test set regions spared out. That is, to prevent the model from learning the known state of those regions shortly before/after a given frame, it is necessary to ensure that the regions in the validation and testing sets are never taken for training from any frame.
The frame patches for training have a size of \(48\times 48\) pixels; they are randomly extracted both spatially and temporally from the training set and are randomly rotated in multiples of 90 degrees. The number of patches used in a single training epoch amounts to 100 and the number of epochs used for training is \(10^{5}\). We set an initial learning rate of \(1.2\times 10^{-5}\), decreased by 90% every 2000 epochs, and use the mean absolute error (MAE) as the loss function for optimizing the network. The code and trained model are based on Pytorch and available at GitHub2.
Footnote 2: [https://github.com/YiBi-TNAO/ML-upscaled-AIA-193-.git](https://github.com/YiBi-TNAO/ML-upscaled-AIA-193-.git)
For the training, validation, and testing sets, the MAE between the ML-upscaled and Hi-C images (ground truth) is seen to flatten out after about \(5\times 10^{4}\) training epochs are performed (Fig. 2e). This indicates that the model has not been overfitted. To further evaluate how well our model enhances the AIA 193 A images, we use the Structural Similarity Index Measure (SSIM; Wang et al., 2004; Aydin et al., 2008) for measuring similarity between the ML-upscaled and Hi-C images. A greater value of SSIM reflects a smaller difference between them. As expected, the value of SSIM in each set increased initially during training and was then close to an asymptotic state (Fig. 2f). Figs. 2a and 2b compare the unprocessed AIA, PSF-Bicubic upscaled AIA, ML-upscaled AIA, and Hi-C images in the testing and validation sets, respectively. On the Hi-C image, an apparent single AIA loop in the validation set appears to be three separate strands, which were identified by Cirtain et al. (2013) as a set of braiding loops. The split strands were clearly visible in the ML-upscaled AIA images, as well as slightly in the PSF-Bicubic-upscaled AIA images (Fig. 2b). Fig. 2c shows the slice plots across the loop in the various images, with three peaks evident in the slice plots taken from the ML-upscaled and Hi-C images. In the testing set, again, a bifurcation of an AIA loop could be seen on both the ML-upscaled AIA and Hi-C images (as indicated by the arrows in Fig. 2a). Thus, while both the PSF-Bicubic and ML upscaling methods are capable of improving the AIA 193 A images in a reasonable manner, the ML upscaling method appears to perform better in both the validation and testing sets.
## 3 Results
To investigate nanoflare candidates, this work focuses on the braiding strands recognized in the upscaled AIA 193 A images. We investigated the AIA 193 A observations of a sample of non-flare active regions (Schmelz et al., 2015), none of which were covered by the Hi-C observation. We present three events as follows.
### Overview of the recognizable braiding strands and their evolutions
In Event 1, a bifurcated loop is observed on the raw AIA 193 A images (Fig. 3a). Subsequently, the loop seemingly split into two separate ones. Both the PSF-Bicubic-upscaled and ML-upscaled images show more detail of the evolution of the loop. In these upscaled images (Figs. 3b and 3c), the AIA loop appears to be a pair of strands braiding with each other at the beginning of Event 1. The disappearance of the thinner strand (indicated by the magenta dot-line at 05:04:56 UT) was followed by the appearance of a newly-formed strand (indicated by the magenta dot-line at 05:09:20 UT), which was parallel with the thicker one that seemed to be nearly unchanged (indicated by the cyan dot-lines).
Figs. 4 and 5 show another two examples (Events 2 and 3) exhibiting similar evolutions of substructures from braiding to parallel with each other, which were observed on both the PSF-
Figure 2: The training, testing, and validation sets for the machine learning. _Panels a-b:_ Comparison of close-ups of the unprocessed AIA, PSF-Bicubic upscaled AIA, MI upscaled AIA, and Hi-C images. The images in _(a)_ and _(b)_ are taken from testing set and validation set, respectively. _Panels c:_ The slice plots exhibiting intensities along the line indicated in _b. _Panel d:_ The Hi-C image with its full field-of-view, in which the lower and upper blanked regions are taken as the validation set and testing set, respectively, and all of the left region is used as the training set. _Panels e-f:_ Time history of the value of MAE (_e_) and SSIM (_f_) between the model outputs and the Hi-C images in each dataset.
Figure 4: The evolution of the braiding strands on 22 June 2010 (Event 2). _Panel a:_ SDO/AIA 193 Å images. _Panels b-c:_ the Bicubic upscaled versions of the PSF-matched AIA 193 Å images and ML-upscaled versions of the AIA 193 Å images, on which the various colored dot-lines indicate the various recognizable strands. _Panels d-e:_ The images of \(I_{(or=1)}\) and \(\Delta I=I_{(or=1)}-I_{(or=4)}\), where the \(I_{(or=1)}\) and \(I_{(or=4)}\) is the Fe XVIII intensity of \(I(94\AA)-I(211\AA)/120-I(171\AA)/450\) smoothed with a Gaussian kernel of 1 and 4 minutes, respectively. These images are taken at the peak time of the hot emission indicated by the brown box in (d), which is only one hot emission identified from 00:35:08 UT to 00:45:08 UT. The brown box is also plotted in the other panels. _Panel f:_ The vertical component (\(B_{s}\)) of the photospheric vector data from SDO/HMI. All images have the same FOV. The evolution of the braiding strands is shown in a movie (anim4.mpeg) available online.
Figure 3: The evolution of the braiding strands on 5 January 2012 (Event 1). _Panel a:_ SDO/AIA 193 Å images. _Panels b-c:_ the Bicubic upscaled versions of the PSF-matched AIA 193 Å images and ML-upscaled versions of the AIA 193 Å images, on which the various colored dot-lines indicate the various recognizable strands. _Panels d-e:_ The images of \(I_{(or=1)}\) and \(\Delta I=I_{(or=1)}-I_{(or=4)}\), where the \(I_{(or=1)}\) and \(I_{(or=4)}\) is the Fe XVIII intensity of \(I(94\AA)-I(211\AA)/120-I(171\AA)/450\) smoothed with a Gaussian kernel of 1 and 4 minutes, respectively. These images are taken at the peak time of the hot emission indicated by the brown box in (d), which is also plotted on the other panels. The boxes in \(d\) outline the locations of all hot emissions that are identified from 05:00:08 UT to 05:10:08 UT. _Panel f:_ The vertical component (\(B_{s}\)) of the photospheric vector data from SDO/HMI. All images have the same Field-Of-View (FOV). The evolution of the braiding strands is shown in a movie (anim3.mpeg) available online.
PSF-Bicubic-upscaled and the ML-upscaled AIA 193 Å images but were hardly seen in the raw AIA images.
### Transient brightenings at the footpoints of the braiding structures
We use an empirical approach to isolate the hot plasma component (produced by Fe XVIII emission) present in the 94 Å channel for coronal diagnostics of impulsive heating. A reasonable estimate of the Fe XVIII emission is \(I\)(94 Å) \(-\) \(I\)(211 Å)/120 \(-\) \(I\)(171 Å)/450 (Del Zanna, 2013). This combination removes the warm plasma component, at around 1 MK, from the emission observed in the 94 Å channel. The peak formation temperature of Fe XVIII is 7.1 MK (Warren et al., 2012), but Fe XVIII emission from plasma at 3 MK may be detected owing to the large fraction of plasma present at this temperature (Del Zanna, 2013). The hot emissions detected with this proxy are therefore believed to have temperatures of at least 3 MK. To improve the signal-to-noise ratio, the light curve of the Fe XVIII intensity from each AIA pixel is smoothed in time; \(\bar{I}_{(\sigma=1)}\) and \(\bar{I}_{(\sigma=4)}\) denote the data smoothed with Gaussian kernels of 1 and 4 minutes, respectively. We apply a high-pass filter to the Fe XVIII intensity, \(\Delta\bar{I}=\bar{I}_{(\sigma=1)}-\bar{I}_{(\sigma=4)}\), to extract short-lived hot emissions, since the fluctuations in the 94 Å channel in response to nanoflare-scale heat pulses last for minutes (Reale et al., 2011; Tajfirouze et al., 2016). A pixel-wise emission enhancement is identified when \(\Delta\bar{I}\) exceeds 1.5 times the standard deviation of \(\bar{I}_{(\sigma=1)}\) for at least one minute. We adopt the 1.5\(\sigma\) threshold because a smaller multiple would likely pick up trivial fluctuations. When three or more such adjacent pixel-wise enhancements are detected at the same time, we define them as a transient hot emission.
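A minimal sketch of this detection procedure is given below. It assumes co-aligned, exposure-normalized (time, y, x) cubes of the 94 Å, 211 Å, and 171 Å channels at a uniform 12 s cadence; the function names and the exact handling of the one-minute persistence criterion are our own choices rather than the implementation used for this study.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d, label

def fe18_proxy(i94, i211, i171):
    """Empirical Fe XVIII proxy (Del Zanna 2013): I(94) - I(211)/120 - I(171)/450."""
    return i94 - i211 / 120.0 - i171 / 450.0

def detect_hot_emissions(i94, i211, i171, cadence_s=12.0, n_sigma=1.5,
                         min_duration_s=60.0, min_pixels=3):
    """Flag transient hot emissions in co-aligned (time, y, x) AIA cubes.

    A pixel is flagged when the high-pass-filtered Fe XVIII proxy exceeds
    n_sigma times the standard deviation of its 1-min-smoothed light curve
    for at least min_duration_s; min_pixels or more adjacent flagged pixels
    at the same time step are counted as one transient hot emission.
    """
    fe18 = fe18_proxy(i94, i211, i171)
    # Gaussian kernels of 1 and 4 minutes, expressed in frames, along the time axis.
    s1 = gaussian_filter1d(fe18, sigma=60.0 / cadence_s, axis=0)
    s4 = gaussian_filter1d(fe18, sigma=240.0 / cadence_s, axis=0)
    delta = s1 - s4                              # high-pass component, Delta I
    threshold = n_sigma * s1.std(axis=0)         # per-pixel 1.5-sigma threshold
    above = delta > threshold

    # Keep only pixels that stay above the threshold for at least min_duration_s.
    min_frames = max(1, int(round(min_duration_s / cadence_s)))
    persistent = np.zeros_like(above)
    for t in range(above.shape[0] - min_frames + 1):
        persistent[t] |= above[t:t + min_frames].all(axis=0)

    # Group adjacent flagged pixels frame by frame into transient hot emissions.
    events = []
    for t in range(persistent.shape[0]):
        labelled, n_regions = label(persistent[t])
        for region in range(1, n_regions + 1):
            mask = labelled == region
            if mask.sum() >= min_pixels:
                events.append((t, mask))
    return delta, events
```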
Hot emissions are found in all the events studied here. As shown in Figs. 3d, 4d, and 5d, the brown boxes outline the hot emissions of interest, which appear to be associated with the evolution of the braiding structures. Specifically, a hot emission took place at one of the footpoints of the braiding strands in Events 1 and 2 (Figs. 3 and 4), respectively; three hot emissions occurred, in turn, at the southeast, southwest, and northwest endpoints of the evolving strands in Event 3 (Fig. 5 and its accompanying animation).
Fig. 6a presents the light curve of the Fe XVIII intensity from each hot emission. The light-gray areas show that the fluctuations in the Fe XVIII intensity last for 1.2-2.2 minutes. Each peak time, indicated by the vertical line, corresponds to the moment when \(\Delta\bar{I}\) reaches its maximum. Comparing the peak times with the evolutions of the braiding strands (Figs. 3, 4, 5, and their accompanying animations) reveals that the hot emissions always reached their peaks before the braiding strands evolved into two parallel ones. Such hot plasma provides evidence that the energy was released by magnetic reconnection
Figure 5: The evolution of the braiding strands on 30 August 2011 (Event 3). _Panel a:_ SDO/AIA 193 Å images. _Panels b-c:_ The Bicubic-upscaled versions of the PSF-matched AIA 193 Å images and the ML-upscaled versions of the AIA 193 Å images, on which the colored dot-lines indicate the recognizable strands. _Panels d-e:_ The images of \(\bar{I}_{(\sigma=1)}\) and \(\Delta\bar{I}=\bar{I}_{(\sigma=1)}-\bar{I}_{(\sigma=4)}\), where \(\bar{I}_{(\sigma=1)}\) and \(\bar{I}_{(\sigma=4)}\) are the Fe XVIII intensity of \(I(94\AA)-I(211\AA)/120-I(171\AA)/450\) smoothed with Gaussian kernels of 1 and 4 minutes, respectively. The boxes in (d) outline the locations of all hot emissions identified from 09:44:08 UT to 09:54:08 UT, and the three brown boxes marking the hot emissions A, B, and C are also plotted on the other panels. The images in (d) and (e) are taken at the peak time of hot emission A. _Panel f:_ The radial component (\(B_{r}\)) of the photospheric vector magnetic field from SDO/HMI. All images have the same FOV. The evolution of the braiding strands is shown in a movie (anim5.mpeg) available online.
(Klimchuk, 2006; Schmelz et al., 2015) during the morphological changes of the braiding strands.
Since the plasma along a given line of sight may have a range of temperatures rather than being isothermal, it is common to describe the coronal temperature distribution by reconstructing differential emission measures (DEMs). Here, we apply the fast, simple, and robust algorithm of Plowman and Caspi (2020) to invert the DEM from AIA images in the six optically thin channels, namely 94 Å, 131 Å, 171 Å, 193 Å, 211 Å, and 335 Å. The temperature grid of the inversion ranges from \(10^{5.5}\) to \(10^{7.0}\) K. The choice of a maximum temperature of 10 MK ensures that the hot emission is not overestimated, since the emission above 10 MK is poorly constrained by the six AIA channels. The DEM analysis demonstrates that the hot plasma includes a significant high-temperature component (\(T>5\) MK), as shown in Fig. 6b, which presents the evolution of the emission measure (EM) at 5-10 MK for each hot emission.
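As an illustration, once a DEM has been inverted on a log-temperature grid, the 5-10 MK emission measure plotted in Fig. 6b follows from integrating the DEM over that band, as in the sketch below; the grid spacing and units reflect common AIA DEM conventions and are assumptions on our part.

```python
import numpy as np

def em_in_band(dem, logt, t_lo=5e6, t_hi=1e7):
    """Integrate a DEM [cm^-5 K^-1] over t_lo-t_hi [K] to obtain the EM [cm^-5].

    dem  : DEM values sampled on the log10(T) grid `logt`
    logt : log10 temperature grid, e.g. np.arange(5.5, 7.05, 0.05)
    """
    t = 10.0 ** np.asarray(logt)
    mask = (t >= t_lo) & (t <= t_hi)
    d, tb = np.asarray(dem)[mask], t[mask]
    # trapezoidal integration of DEM dT over the hot band
    return float(np.sum(0.5 * (d[1:] + d[:-1]) * np.diff(tb)))
```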
Since the coronal plasma is multi-thermal, we estimate the thermal energy following Equation 12 of Aschwanden et al. (2015), where the volume of the transient emission is estimated as \(V=A^{3/2}\), with \(A\) the area of each brightening, and a unity filling factor is assumed. In Fig. 6c, the time evolution of the thermal energy of each hot emission shows that \(E_{peak}\) ranges from \(3.0\times 10^{25}\) to \(1.5\times 10^{26}\) erg and \(\Delta E_{peak}\) ranges from \(1.5\times 10^{24}\) to \(5.5\times 10^{24}\) erg. Here, the values of \(E\) and \(\Delta E\) are estimated from the energy smoothed with a Gaussian kernel of 1 minute (blue curves in Fig. 6c) and from its difference with respect to that smoothed with a Gaussian kernel of 4 minutes (orange curves in Fig. 6c), respectively. This change in thermal energy, \(\Delta E_{peak}\), corresponds to the level of the most common nanoflare energy suggested by Parker (1988).
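The sketch below shows one way to evaluate such an estimate. It assumes the multi-thermal form \(E_{th}\approx 3k_{B}T_{w}\sqrt{EM\cdot V}\), with \(T_{w}\) the EM-weighted temperature, the geometric volume \(V=A^{3/2}\), and a unity filling factor; the exact prefactors should be checked against Equation 12 of Aschwanden et al. (2015), and the numbers in the commented call are purely illustrative, not taken from the events.

```python
import numpy as np

K_B = 1.380649e-16        # Boltzmann constant [erg/K]
ARCSEC_CM = 7.25e7        # ~725 km per arcsec at 1 AU [cm]

def thermal_energy(em_column, area_arcsec2, t_w):
    """Multi-thermal energy estimate E_th ~ 3 k_B T_w sqrt(EM_vol * V) [erg].

    em_column    : mean column emission measure over the brightening [cm^-5]
    area_arcsec2 : area A of the brightening [arcsec^2]
    t_w          : EM-weighted temperature [K]
    """
    area_cm2 = area_arcsec2 * ARCSEC_CM**2
    volume = area_cm2 ** 1.5            # V = A^{3/2}, unity filling factor
    em_volume = em_column * area_cm2    # volume emission measure [cm^-3]
    return 3.0 * K_B * t_w * np.sqrt(em_volume * volume)

# Illustrative call with hypothetical values:
# thermal_energy(em_column=2e27, area_arcsec2=4.0, t_w=6e6)  # ~3e25 erg
```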
The hot emissions were often found to be rooted in unipolar regions according to simultaneous measurements of the radial component \(B_{r}\) of the photospheric magnetic field from SDO/HMI, as in Events 1 and 2 (Figs. 3f and 4f). This excludes the possibility that magnetic reconnection between opposite-polarity magnetic flux on the photosphere produced the hot plasma at the endpoints of the loops (Samanta et al., 2019), unless there is minority-polarity flux invisible in the HMI magnetogram (Wang et al., 2019). Compared to Events 1 and 2, which were centered at (\(-190^{\prime\prime}\), \(280^{\prime\prime}\)) and (\(370^{\prime\prime}\), \(385^{\prime\prime}\)) from Sun center, respectively, Event 3 (Fig. 5f) was centered at (\(-660^{\prime\prime}\), \(-440^{\prime\prime}\)) and was thus observed farther from the disk center. Determining the magnetic polarities in Event 3 from HMI is therefore challenging due to near-limb projection effects.
### Coronal magnetic extrapolation
The braiding strands may mark bundles of coronal magnetic flux winding about each other (Berger and Asgari-Targhi, 2009). This is supported by comparisons of the braiding structures with the nonlinear force-free field (NLFFF) coronal magnetic fields, which are constructed with the optimization method (Wheatland et al., 2000), with the required boundary conditions provided by HMI vector data. The modeled field shows good alignment with the coronal loops in Events 1 and 2, suggesting that the force-free extrapolation can be considered a consistent model of the corona in these events (De Rosa et al., 2009). Similar to the appearance of the braiding strands, the modeled field aligning well with the AIA loop consists of two bundles of field lines twisting about each other (Fig. 7). In Event 3, by contrast, possibly because of magnetogram degradation by near-limb projection effects, the NLFFF field failed to match the majority of the coronal loops.
Figure 6: The evolutions of the identified hot emissions. _Panel a:_ The light curves of the Fe XVIII intensity of \(I(94\AA)-I(211\AA)/120-I(171\AA)/450\) from the hot emissions outlined by the brown boxes in Figs. 3, 4, and 5. In each panel, the light-gray area indicates the time period in which \(\Delta\bar{I}\) is higher than 1.5 times the standard deviation of \(\bar{I}_{(\sigma=1)}\). _Panels b-c:_ The evolutions of the EM at 5-10 MK and the thermal energy \(E\) from the locations of the hot emissions, in which the gray shaded area denotes the standard deviation of 200 Monte Carlo simulations obtained by adding random AIA instrument noise to the EM inversion. In each panel, the dots with thin lines indicate the raw data; the blue and orange curves indicate the data smoothed with Gaussian kernels of 1 and 4 minutes, respectively. The vertical line indicates the peak time of \(\Delta\bar{I}=\bar{I}_{(\sigma=1)}-\bar{I}_{(\sigma=4)}\).
The crossing manner of the strands implies that a localized tangential discontinuity (Parker, 1987) of the magnetic field may exist at the crossing site of the braiding strands. The magnetic free energy carried in the localized current sheet would be converted into heat and kinetic energy when magnetic reconnection occurs there. The amount of free energy is of order \(B_{\perp}^{2}V/8\pi\), where \(B_{\perp}\) is of the order of \(B\sin(\theta)\). Here, \(B\) is estimated as the magnetic field strength along the NLFFF lines, which amounts to \(\sim\) 60 G and \(\sim\) 105 G in Events 1 and 2, respectively. The discontinuity in the field direction is assumed to be of the order of the misalignment angle \(\theta\) between the two braiding strands, which is about 55\({}^{\circ}\) and 29\({}^{\circ}\) in Events 1 and 2, respectively. Moreover, we assume that the characteristic length \(\Delta L\) is of the order of 1\({}^{\prime\prime}\), corresponding to the characteristic width of the strands detected in the upscaled AIA 193 Å images (see Appendix A), and hence the volume \(V\approx(\Delta L)^{3}\). With the numbers estimated above, the magnetic free energy associated with a discontinuity is of the order of \(3.7\times 10^{25}\) erg and \(3.9\times 10^{25}\) erg for Events 1 and 2, respectively. Therefore, the amount of magnetic free energy can account for the thermal energy of the order of \(10^{24}\) erg released by a nanoflare.
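This order-of-magnitude estimate can be reproduced with the short calculation below, assuming 1\({}^{\prime\prime}\) \(\approx\) 725 km on the Sun; the values of \(B\) and \(\theta\) are those quoted above for Events 1 and 2.

```python
import numpy as np

ARCSEC_CM = 7.25e7   # ~725 km per arcsec at 1 AU [cm]

def free_energy(b_gauss, theta_deg, dl_arcsec=1.0):
    """Magnetic free energy B_perp^2 V / (8 pi) [erg] of a tangential
    discontinuity, with B_perp ~ B sin(theta) and V ~ (Delta L)^3."""
    b_perp = b_gauss * np.sin(np.radians(theta_deg))
    volume = (dl_arcsec * ARCSEC_CM) ** 3
    return b_perp**2 * volume / (8.0 * np.pi)

print(f"Event 1: {free_energy(60.0, 55.0):.1e} erg")   # ~3.7e25 erg
print(f"Event 2: {free_energy(105.0, 29.0):.1e} erg")  # ~3.9e25 erg
```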
## 4 Discussions
The investigation of the widths of the detectable loop structures (see Appendix A) shows that the widths of the strands recognized in both the PSF-Bicubic-upscaled and ML-upscaled AIA 193 Å images range from 0\({}^{\prime\prime}\).7 to 1\({}^{\prime\prime}\).3 (Fig. A.2), corresponding to physical sizes of about 500 km to 1000 km, and that each pair of strands is resolved within an AIA loop with a characteristic width of 2\({}^{\prime\prime}\) to 3\({}^{\prime\prime}\) (Aschwanden & Peter, 2017). The uncertainty in the width of the upscaled structures amounts to \(\pm\)0\({}^{\prime\prime}\).3 and is roughly determined by the results from various networks trained with slightly different training sets. The upscaled AIA 193 Å images can therefore resolve strands with widths smaller than 1\({}^{\prime\prime}\).2, which can hardly be resolved in the raw AIA images with a spatial resolution of 1\({}^{\prime\prime}\).2. According to Brooks et al. (2013), the lowest and mean Gaussian widths of loops observed with Hi-C were about 90 km and 270 km, or 0\({}^{\prime\prime}\).12 and 0\({}^{\prime\prime}\).37, respectively. By contrast, the strands in the upscaled AIA 193 Å images always have widths no smaller than the 0\({}^{\prime\prime}\).6 pixel size of the AIA images. This is mostly because no additional information can be extracted from the AIA images, so the minimum width of the strands detected in the ML-upscaled images remains close to the AIA pixel size.
From our observations, we infer the schematic picture of Fig. 8 for two flux tubes braiding in various patterns and their subsequent evolutions due to magnetic reconnection. In the upscaled AIA 193 Å images we can recognize pairs of strands whose footpoints are separated randomly (e.g., Figs. 8b-c), rather than a pair of strands braiding coherently in a well-combed pattern (Fig. 8a), which is difficult to observe owing to the limited resolution and the effect of cross-field diffusion of electrons in tangled magnetic fields (Galloway et al., 2006; Berger & Asgari-Targhi, 2009). Fig. 8b shows two identical tubes braiding with each other; magnetic reconnection between them would result in a total exchange of their footpoints. This picture can explain the morphological evolution exhibited in Events 2 and 3 (Figs. 4 and 5), in which the two braiding strands evolved into two parallel ones. Fig. 8c shows the evolution of two braiding flux tubes with unequal axial flux. As the two nonidentical flux tubes collide with each other, the reconnection halts once all the flux of the smaller tube has reconnected, and in the final state only the outer shell of the larger flux tube is reconnected while the rest
Figure 8: Schematic picture of the evolution of two flux tubes with various braid patterns. _Top row_: The flux tubes braiding in various patterns. _Bottom row_: Their final states after the occurrence of magnetic reconnection at their crossings. _Panel a:_ The two coherent flux tubes braid with each other in a well-combed pattern. _Panels b-d:_ The two flux tubes braid with their legs separated from each other randomly. The two flux tubes in _panels a_ and _b_ initially have identical flux, while the two tubes in _panels c_ and _d_ have nonidentical flux, with the tubes colored cyan having more axial flux than those colored magenta. The pairs of flux tubes in _panels a-c_ initially braid with one crossing, and the two tubes in _panel d_ braid with two crossings.
Figure 7: The modeled magnetic field lines matching the loops. _Left column_: The ML-upscaled versions of the AIA 193 Å images taken in Events 1 (_top_) and 2 (_bottom_), in which the magenta and cyan lines outline the two loop strands. The misalignment angle \(\theta\) between the two lines amounts to 55\({}^{\circ}\) and 29\({}^{\circ}\) in Events 1 and 2, respectively. _Right column_: Field lines traced from the NLFFF field, overlaid on the ML-upscaled AIA images. In Events 1 (_top_) and 2 (_bottom_), the NLFFF field is extrapolated from the HMI vector data taken at 05:00:00 UT on 5 January 2012 and 00:36:00 UT on 22 June 2010, respectively.
remains unreconnected (Linton, 2006). Moreover, when two nonidentical flux tubes braid with two crossings (Fig. 8d), magnetic reconnection between them would occur twice and thus produce two thoroughly separated tubes. Consistently, in Event 1 (Fig. 3) the strands of unequal width braided with two apparent crossings, and subsequently the thinner one became parallel to the other, which remained nearly unchanged. Accordingly, the morphological evolutions of the two braiding strands presented here are consistent with the schematic pictures of two braiding bundles of magnetic flux driven to reconnect with each other.
Pontin et al. (2017) claimed that the existence of crossed loop strands does not always imply magnetic discontinuities and subsequent magnetic reconnection. Here, further evidence for the occurrence of magnetic reconnection between the braiding strands is provided by the transient hot emission around the footpoints of the loop strands. Impulsive brightenings at the footpoints of loops have also been reported at lower temperatures, between 1 and 2 MK (Regnier et al., 2014; Subramanian et al., 2018). By contrast, transient emissions with higher temperatures are detected in the events reported here. According to the EM analysis, the hot emissions contain a temperature component greater than 5 MK. Such temperatures can easily be obtained with reconnection models (Schmelz et al., 2015) but are difficult to reach with wave models (van Ballegooijen et al., 2017).
The footpoints of the loops could have been heated by energetic electrons produced by magnetic reconnection high in the corona, even though hot plasma components were undetectable at the locations of the reconnection sites (e.g. Zhang & Ni, 2019). This is because the density is low in the upper corona and, as predicted by the simulation of Polito et al. (2018), the energetic electrons lose little of their kinetic energy there until they reach the low corona, where the density increases. Given that loops are roughly isothermal along their coronal parts, heat pulses at their footpoints might cause the loops to heat up and become denser as a result of thermal conduction and chromospheric evaporation. However, no measurable increase in the 94 Å emission was found along the entire lengths of the loops reported here. This could be because the 94 Å variations are smaller in the lower-density sections of the loops than at their footpoints (Tajfirouze et al., 2016). To explore the locations and temperatures of the hot plasma created in response to the braiding of the magnetic field forming the coronal loops, and to evaluate the implications of this study, further simulations would be helpful.
## 5 Summary
In this article, the evolutions of the strands braiding with each other within apparently single AIA loops are presented. The main results are summarized as follows:
1. We performed two validations to confirm that substructures recorded in the AIA images can be revealed after the images are appropriately enhanced and upscaled. The widths of the strands composing apparently single AIA loops range from 0\({}^{\prime\prime}\).7 to 1\({}^{\prime\prime}\).3.
2. The braided substructured loops were found to closely match the twisted NLFFF field lines in the events observed near the disk center, where the photospheric magnetic data are properly recorded. The magnetic free energy in the modeled field is sufficient to match the thermal energy required for a nanoflare.
3. The braided strands evolved into pairs of almost parallel ones, accompanied by hot emission at their footpoints, supporting the occurrence of magnetic reconnection in the coronal loops seen in the AIA images.
A comparison of the raw AIA, ML-upscaled AIA, and Hi-C images (Figs. 1 and 2) reveals that the ML algorithm recovers only a portion of the features at scales smaller than \(\sim\) 1\({}^{\prime\prime}\).2 that are visible to Hi-C. The evolution of braided structures associated with nanoflares is therefore expected to be detected more frequently in higher-resolution observations, such as those from EUI on board Solar Orbiter.
###### Acknowledgements.
The authors are grateful to the anonymous referee for detailed comments and useful suggestions that improved this manuscript. We acknowledge the High resolution Coronal Imager instrument team for making the flight data publicly available. MSFC/NASA led the mission and partners include the Smithsonian Astrophysical Observatory in Cambridge, Mass.; Lockheed Martin's Solar Astrophysical Laboratory in Palo Alto, Calif.; the University of Central Lancashire in Lancashire, England; and the Lebedev Physical Institute of the Russian Academy of Sciences in Moscow. The NASA/SDO data used here are courtesy of the AIA and HMI science teams. This work is supported by the National Key Research and Development Program of China (2019YFA0405000), and the Natural Science Foundation of China under grants 12273106, 12073077, 12163004, U2031140, 12073072, 11933009, 12273108, 1203097, 12003068, 11873088, 12173084, and 11973088, the CAS "Light of West China" Program, and the Strategic Priority Research Program of Chinese Academy of Sciences, Grant No. XDB 41000000.