Dataset fields:
- id — string (10 characters)
- title — string (7 to 231 characters)
- abstract — string (3 to 2.43k characters)
- authors — string (5 to 21.5k characters)
- published_date — string (20 characters)
- link — string (33 to 34 characters)
- markdown — string (133 to 1.92M characters)
2309.08549
HINT: Healthy Influential-Noise based Training to Defend against Data Poisoning Attacks
While numerous defense methods have been proposed to prohibit potential poisoning attacks from untrusted data sources, most research works only defend against specific attacks, which leaves many avenues for an adversary to exploit. In this work, we propose an efficient and robust training approach to defend against data poisoning attacks based on influence functions, named Healthy Influential-Noise based Training. Using influence functions, we craft healthy noise that helps to harden the classification model against poisoning attacks without significantly affecting the generalization ability on test data. In addition, our method can perform effectively when only a subset of the training data is modified, instead of the current method of adding noise to all examples that has been used in several previous works. We conduct comprehensive evaluations over two image datasets with state-of-the-art poisoning attacks under different realistic attack scenarios. Our empirical results show that HINT can efficiently protect deep learning models against the effect of both untargeted and targeted poisoning attacks.
Minh-Hao Van, Alycia N. Carey, Xintao Wu
2023-09-15T17:12:19Z
http://arxiv.org/abs/2309.08549v3
# HINT: Healthy Influential-Noise based Training to Defend against Data Poisoning Attacks ###### Abstract While numerous defense methods have been proposed to prohibit potential poisoning attacks from untrusted data sources, most research works only defend against specific attacks, which leaves many avenues for an adversary to exploit. In this work, we propose an efficient and robust training approach to defend against data poisoning attacks based on influence functions, named _Healthy Influential-Noise based Training_. Using influence functions, we craft healthy noise that helps to harden the classification model against poisoning attacks without significantly affecting the generalization ability on test data. In addition, our method can perform effectively when only a subset of the training data is modified, instead of the current method of adding noise to all examples that has been used in several previous works. We conduct comprehensive evaluations over two image datasets with state-of-the-art poisoning attacks under different realistic attack scenarios. Our empirical results show that HINT can efficiently protect deep learning models against the effect of both untargeted and targeted poisoning attacks. Data poisoning, adversarial defense, robust training ## I Introduction Having access to high-quality, clean, and human-annotated data is essential to building and training a well-performing prediction model. However, it is common that an organization only has a limited amount of this type of data on hand. Consequently, organizations are often tasked with collecting additional data from outside sources using techniques such as web scraping and/or crowd-sourcing. This, unfortunately, opens numerous avenues for attacking the proposed model, such as _data poisoning_ attacks in which an attacker injects harmful data into the training routine to affect the final model's utility. For example, in [1, 2], the authors demonstrate the ability of adversarially crafted examples to destroy a DNN's prediction accuracy and [3, 4, 5, 6] show that attackers can force a model to predict the adversarial class on a specific targeted example by only having to modify a small fraction of the training data. Telling if a certain collected data point is benign or malicious is a non-naive task, and there is an active area of research focused on building defense methods against poisoning attacks as well as analyzing the harm that poisoning attacks have on the final model [7, 8, 9, 10]. From a defense perspective, most of the proposed works are attack-specific and are easily defeated by newer types of attacks that consider the underlying defense mechanism. Some other defenses focus on pre-processing the training data in order to detect and remove malicious examples before they are used for training [11, 12, 13, 14]. The pre-processing approach works well when the malicious perturbations are large, or when only a fraction of the dataset is poisoned. However, when those criteria are not met, the defenses based on pre-processing are easily overcome [7, 8]. Another problem inherent to current research on defending against poisoning attacks is the trade-off between model accuracy and the effectiveness of a defense - especially in cases where DNN's are used as the model architecture. Although some modern defense mechanisms have the ability to achieve better generalization [8, 10, 11], the need for more research remains. 
In this work, we consider the realistic scenario of the training dataset only containing a limited number of clean and human-labeled data points, while the remaining points are unverified and malicious data collected from untrustworthy outside sources. In order to defend a classifier against poisoning attacks, we propose Healthy Influential-Noise based Training (HINT)1 - a robust training procedure based on influence functions. Influence functions, originally a product of robust statistics [15], have grown popular over the last few years as an explainability method for understanding black-box model predictions [16]. In this work, we show that influence functions, in addition to explaining the effect an entire training point has on the model parameters and/or test loss, can capture useful information about the impact of each local pixel to the model's prediction. Consequently, those pixels, which are identified as influential, form local regions that cause significant changes in the test loss of the model. By incorporating HINT in the training procedure, we are able to: (1) identify a subset of training examples that have high impact on the model loss, (2) craft the healthy influential-noises that both reduce the harmful regions and boost the helpful regions inside images, and then add them into training examples to reduce the effect of poisoning attacks. HINT can help the trained model predict the correct class with high confidence score. Through extensive experiments, we show that the classification model trained with our HINT can resist different types of untargeted and targeted attacks while retaining good generalization. Footnote 1: We interchangeably use _healthy noise_ as an alternative to _healthy influential-noise_ in the paper The remainder of the paper is as follows. In Section II, we detail closely related works in poisoning attacks and defenses. In Section III, we introduce the influence function, the central mechanism in our method. Section IV details our HINT method. Section V discusses data poisoning attacks and defenses that we use in our evaluation. Then, Section VI shows our experiments on HINT and other baselines. Finally, we offer our concluding remarks in Section VII. ## II Related Work ### _Poisoning Attacks_ Poisoning attacks, which manipulate the training data to compromise the model's performance at test time, can be grouped into two categories: _untargeted attacks_ (or availability attacks) and _targeted attacks_ (or integrity attacks). In untargeted attacks, the attacker manipulates a subset of the training data to degrade the utility of the machine learning model in general, and are originally proposed to attack traditional classification models such as linear regression and support vector machines [17, 18, 19, 20]. In the deep learning setting, there are few modern untargeted attacks [1, 2, 21] that focus on threatening model availability. In addition to performing attacks on DNN models, untargeted attacks focusing on affecting the fairness of a model have been proposed. Specifically, [22, 23] both proposed untargeted poisoning attacks on fair machine learning models and demonstrated the trade-off between fairness and accuracy. Contrasting untargeted attacks, instead of attempting to sabotage the model in general, targeted attacks aim to undermine the integrity of a specific test example (or a set of test examples), which are more challenging to defend against than untargeted attacks. 
The victim model trained on poisoned data crafted using targeted attacks still achieves good overall accuracy, but the predictions on the targeted examples (selected by the attacker) are misclassified into the intended adversarial class. Many proposed attacks [3, 4, 5, 6] can successfully cause a deep learning model (e.g., ResNet or VGG) to predict, with high probability, the adversarial class for a targeted image instead of the actual class. ### _Defenses Against Poisoning Attacks_ There are two main strategies to defend against data poisoning attacks: _filtering defense_ and _robust training_. _Filtering defense_ aims to detect malicious examples in the training data and intervene before they can harm the model. The most common filtering defenses are: 1) applying pre-processing techniques on a pre-trained model; and 2) implementing in-processing strategies during the training phase. In [12], the authors used clustering methods on the activation layers of a neural network to detect poisons in the training set. Data provenance is used in [13] to identify poisoned data by evaluating the likelihood of a data point being poisoned. [14] showed that strong signals in hidden representations often mean that a data point has been attacked. Their method, in turn, examines the distribution shift between malicious and clean inputs to detect and remove poisoned examples. [10] proposed EPIC, an effective defense that performs filtering during the training phase. The common assumption of all these attacks is that the overall fraction of malicious examples in the training set is small, and removing them does not hurt the model's generalization ability. Moreover, heavy computational resources are required to choose the optimal filtering settings for each method correctly. In contrast, our HINT method does not aim to remove malicious examples from the training data. Instead, we generate healthy noise such that when added to an image, it alleviates the effect of the poisoned data. _Robust training_ methods usually apply smoothing and augmentation techniques to make the model more robust to noisy data. In [24], the authors introduced a unified framework to deal with poisoning attacks via randomized smoothing. From the augmentation approach, [25] proposed to use strong data augmentation such as MixUp while [26] combined MixUp with random smoothing noise from differentially private training to achieve more robust defense. [7] and [9] both leveraged the idea of adversarial training, which was proposed initially to deal with evasion attacks, to defend against poisoning attacks. While [9] aims to perform adversarial training against delusive attacks (a.k.a clean-label availability attacks), [7] simulated the attacks during the training phase by creating and injecting targeted poisoning attacks into training data. In [8], the authors proposed optimizing two components, friendly noise, and random noise, to perturb training examples so that they can alleviate the harmful effects of poisoned data without losing the generalization ability of the model. Differentially private SGD (DP-SGD) has also been proposed as a strategy to train a robust model against poisoning attacks [27, 28]. ## III Preliminaries Inspired by the influence function from robust statistics [15], Koh and Liang [16] introduced a method for estimating the influence that a training point \(z=(x,y)\) has on a machine learning model, where \(x\in X\) in the input and \(y\in Y\) is the class. 
Let \(f_{\theta}\) be a classification model parameterized by \(\theta\) and let \(D_{trn}/D_{val}/D_{lst}\) be the training/validation/test sets. Let \(l(\cdot,\theta)\) represent the loss and \(L(D_{trn},\theta)=\frac{1}{|D_{trn}|}\sum_{z_{i}\in D_{trn}}l(z_{i},\theta)\) be the empirical loss to be minimized during training. To see the change in model parameters w.r.t to training point \(z\), the ERM formulation can be modified as: \[\hat{\theta}_{\epsilon,z}=\operatorname*{arg\,min}_{\theta\in\Theta}\frac{1} {|D_{trn}|}\sum_{z_{i}\in D_{trn}}l(z_{i},\theta)+\epsilon l(z,\theta) \tag{1}\] where \(z\) is effectively upweighted by a small weight \(\epsilon\) (usually on order of \(\frac{1}{n}\) where \(n\) is the number of training points). Instead of actually performing training using Eq. 1, [16] shows that it can be estimated without actually having to retrain the model on \(D_{trn}\setminus\{z\}\): \[\mathcal{I}_{up,param}(z)=\frac{d\hat{\theta}_{\epsilon,z}}{d\epsilon}\Big{|} _{\epsilon=0}=-H_{\hat{\theta}}^{-1}\nabla_{\theta}l(z,\hat{\theta}) \tag{2}\] In addition to showing the effect a training point \(z\) has on the parameters, Eq. 2 can be extended to show the influence that \(z\) has on a test point \(z_{test}\). \[\mathcal{I}_{up,loss}(z,z_{test})=-\nabla_{\theta}l(z_{test},\hat{\theta})^{ \top}H_{\hat{\theta}}^{-1}\nabla_{\theta}l(z,\hat{\theta}) \tag{3}\] Eq. 2 and Eq. 3 simulate the effect of \(z\) being removed from the dataset. However, the effect of perturbing \(z\) can be estimated via influence functions as well. Let \(\hat{z}=(x+\delta,y)\) be the perturbed variant of \(z\) by adding a small noise \(\delta\). One can define the parameters resulting from moving \(\epsilon\) mass from \(z\) onto \(\hat{z}\) as: \[\hat{\theta}_{\epsilon,\hat{z},-z}=\operatorname*{arg\,min}_{\theta\in\Theta} \frac{1}{|D_{trn}|}\sum_{z_{i}\in D_{trn}}l(z_{i},\theta)+\epsilon l(\hat{z}, \theta)-\epsilon l(z,\theta) \tag{4}\] Analogous to Eq. 4, Eq. 5 uses the influence function to approximate the effect that modifying a training point \(z\to\hat{z}\) has on the model parameters: \[\mathcal{I}_{pert,param}(z) =\frac{d\hat{\theta}_{\epsilon,\hat{z},-z}}{d\epsilon}\Big{|}_{ \epsilon=0} \tag{5}\] \[=\mathcal{I}_{up,param}(\hat{z})-\mathcal{I}_{up,param}(z)\] \[=-H_{\hat{\theta}}^{-1}\left(\nabla_{\theta}l(\hat{z},\hat{ \theta})-\nabla_{\theta}l(z,\hat{\theta})\right)\] As in the case with Eq. 2, [16] extends Eq. 5 to show how perturbing \(z\to\hat{z}\) would affect the loss of a test point \(z_{test}\): \[\mathcal{I}_{pert,loss}(z,z_{test})=-\nabla_{\theta}l(z_{test},\hat{\theta}) ^{\top}H_{\hat{\theta}}^{-1}\nabla_{x}\nabla_{\theta}l(z,\hat{\theta}) \tag{6}\] The main difference between Eq. 3 and Eq. 6 is that in Eq. 6, the gradient of \(\nabla_{\theta}L(z,\hat{\theta})\) w.r.t \(x\) is additionally calculated. This additional gradient computation captures how changing \(z\) along each dimension of \(x\) affects the loss of a test point. ## IV Healthy Influential-Noise based Training In this section, we propose Healthy Influential-Noise based Training (HINT) which is a training procedure robust to malicious training examples. Our HINT reduces the potential harm caused by untrusted data sources while retaining the model's generalization ability. ### _Framework_ Most robust training approaches manipulate the training data (or a subset of the training data) to train a robust model. 
In our method, we construct a subset \(D_{s}\subset D_{trn}\) by choosing the most influential examples in \(D_{trn}\). Additionally, we denote the training points not chosen to be in the subset as \(D_{u}=D_{trn}\setminus D_{s}\). Let \(\delta_{i}\) is the healthy influential-noise and \(\hat{z}_{i}=(x_{i}+\delta_{i},y_{i})\) be the healthy-perturbed version of \(z_{i}=(x_{i},y_{i})\). With image data, \(\delta_{i}\) lies in the space \(\Delta=\{\delta\in\mathbb{R}^{H\times W}:\|\delta\|_{\infty}\leq\beta\}\), where \(\|\cdot\|_{\infty}\) is the \(L_{\infty}\)-norm and \(\beta\) is the hyper-parameter for bounding the noise. We define the healthy-perturbed training set as \(\hat{D}_{trn}=\hat{D}_{s}\cup D_{u}\), where \(\hat{D}_{s}\) is the set \(D_{s}\)_after_ healthy noise is added. Under this setting, we define the empirical loss function of the defense model as \(L(\hat{D}_{trn},\theta)=L(\hat{D}_{s},\theta)+L(D_{u},\theta)\), where \(L(D_{s},\theta)=\frac{1}{|D_{s}|}\sum_{z_{i}\in D_{s}}l(z_{i},\theta)\) and \(D_{s}\) denotes "any dataset". We define the defender's objective as: \[\min_{\delta\in\Delta}\;L(D_{val},\theta_{\delta})\text{ s.t. }\theta_{\delta}= \operatorname*{arg\,min}_{\theta\in\Theta}L(\hat{D}_{trn},\theta) \tag{7}\] The naive approach to the above problem would be for the defender to try several different healthy noise values \(\delta_{i}\) and to train/optimize several different models. However, this naive approach is intractable since the feasible spaces for \(\Delta\) and \(\Theta\) are sufficiently large, and it will take significant computational resources to train multiple instances of only one particular model architecture. By using the influence function, we avoid the requirement of costly retraining. Specifically, we use influence function to estimate the change in the model's loss on \(D_{val}\) when modifying a particular training data point to efficiently defend against poisoning attacks. In this work, we introduce HINT, a training algorithm with healthy influential-noise, that: (1) selects a subset of training points \(D_{s}\) which have the most impact on the model by calculating the influence of each training point on the model loss; and (2) generates healthy noise for every example in \(D_{s}\) to reduce the success of poisoning attacks without significantly degrading the model performance over the test data. Algorithm 1 fully presents HINT. First, in lines 2-3, we perform pre-training of the model parameterized by \(\theta\) for a few epochs. This pre-training is to warm-up the \(\theta\) parameter to avoid instability in early epochs. Lines 4-10 contain the main routine of HINT. In line 4, we select the most influential training points \(D_{s}\) using Algorithm 2 (discussed in Section IV-B). In lines 6-10, we first check if the round is an update round (which is defined by an update schedule \(S\)), and if it is, we generate and add healthy noise to selected training examples following Algorithm 3 (line 8), and then update \(\hat{D}_{trn}\) (line 9). Details of generating healthy influential-noise will be given in Section IV-C. Regardless if the noise is updated or not, the model parameters are updated on \(\hat{D}_{trn}\) in every epoch (line 10). 
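To make the procedure concrete before the formal listing, here is a minimal PyTorch-style sketch of this training loop under simplifying assumptions: full-batch updates instead of mini-batches, in-memory tensor datasets, and hypothetical helpers `select_influential` and `update_healthy_noise` standing in for Algorithms 2 and 3. It illustrates the loop structure only and is not the authors' implementation.

```python
import torch

def hint_train(model, D_trn, D_val, T, T_pre, gamma, beta, r, lr, schedule,
               select_influential, update_healthy_noise):
    """Sketch of the HINT loop (Algorithm 1). `select_influential` and
    `update_healthy_noise` are hypothetical stand-ins for Algorithms 2 and 3;
    D_trn and D_val are (inputs, labels) tensor pairs held in memory."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = torch.nn.CrossEntropyLoss()

    def sgd_step(x, y):
        # Single full-batch update; mini-batching and shuffling omitted for brevity.
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()

    x, y = D_trn
    model.train()
    for _ in range(T_pre):                      # warm-up on unmodified data (lines 2-3)
        sgd_step(x, y)

    s_idx, u_idx = select_influential(model, D_trn, D_val, r)   # Algorithm 2 (line 4)
    x_s = x[s_idx].clone()                      # \hat{D}_s starts as a copy of D_s (line 5)

    for t in range(T_pre + 1, T + 1):
        if t in schedule:                       # noise update rounds only (lines 7-9)
            x_s = update_healthy_noise(model, x_s, x[s_idx], y[s_idx], D_val, gamma, beta)
        x_hat = torch.cat([x_s, x[u_idx]])      # \hat{D}_trn = \hat{D}_s \cup D_u
        y_hat = torch.cat([y[s_idx], y[u_idx]])
        sgd_step(x_hat, y_hat)                  # parameter update on \hat{D}_trn (line 10)
    return model
```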
``` 0: Training data \(D_{trn}\), validation data \(D_{val}\), train epochs \(T\), pre-train epochs \(T_{pre}\), scaling factor \(\gamma\), healthy noise bound \(\beta\), ratio of selected examples \(r\), learning rate \(\eta\), healthy noise update schedule \(S\) 0: Trained model \(\hat{\theta}\) 1: Initialize \(\theta^{0}\) 2:for\(t=1\dots T_{pre}\)do 3:\(\theta^{t}\leftarrow\theta^{t}-\eta\nabla L(D_{trn},\theta^{t})\) 4:\(\hat{D}_{s},D_{u}\leftarrow\) SecInf(\(D_{trn}\), \(D_{val}\), \(r\)) using Algorithm 2 5:\(\hat{D}_{trn}\gets D_{trn}\), \(\hat{D}_{s}\gets D_{s}\) 6:for\(t=T_{pre}+1\dots T\)do 7:if\(t\in S\)then 8:\(\hat{D}_{s}\leftarrow\) AddNoise(\(\hat{D}_{s},D_{s},\gamma,\beta\)) using Algorithm 3 9:\(\hat{D}_{trn}\leftarrow\hat{D}_{s}\cup\hat{D}_{u}\) 10:\(\theta^{t}\leftarrow\theta^{t}-\eta\nabla L(\hat{D}_{trn},\theta^{t})\) ``` **Algorithm 1** HINT: Healthy Influential-Noise based Training Influence Function on Validation GroupIt is essential to note that Eqs. 3 and 6 consider the influence that a single training point has on a single test point. Calculating the influence score with respect to only one single test point, however, may not produce a good estimation when the training data is poisoned. Additionally, it is computationally expensive to calculate the influence score for each pair of training and test points individually, since each pair requires the inverse Hessian matrix to be calculated (or estimated). Therefore, we extend the influence functions of Eqs. 3 and 6 to estimate the impact that a single training point has on a group of test (or validation) points. Since influence is additive [16], we can extend both equations to consider the loss on a group of validation points. The influence of a training point on the loss of a validation set \(D_{val}\), in both the total removal and perturbation cases are: \[\begin{split}\mathcal{I}_{up,loss}(z,D_{val})=-\nabla_{\theta}L \left(D_{val},\hat{\theta}\right)^{\top}H_{\hat{\theta}}^{-1}\nabla_{\theta} l\left(z,\hat{\theta}\right)\\ =-\left[\nabla_{\theta}\frac{1}{|D_{val}|}\sum_{i=1}^{|D_{val}|}l \left(z_{i},\hat{\theta}\right)\right]^{\top}H_{\hat{\theta}}^{-1}\nabla_{ \theta}l\left(z,\hat{\theta}\right)\end{split} \tag{8}\] \[\begin{split}\mathcal{I}_{pert,loss}(z,D_{val})=-\nabla_{\theta}L \left(D_{val},\hat{\theta}\right)^{\top}H_{\hat{\theta}}^{-1}\nabla_{x}\nabla_ {\theta}l\left(z,\hat{\theta}\right)\\ =-\left[\nabla_{\theta}\frac{1}{|D_{val}|}\sum_{i=1}^{|D_{val}|}l \left(z_{i},\hat{\theta}\right)\right]^{\top}H_{\hat{\theta}}^{-1}\nabla_{x} \nabla_{\theta}l\left(z,\hat{\theta}\right)\end{split} \tag{9}\] Note that for deep learning models, we can compute the influence score using only the top layers instead of the full network, which is a common way since the top layers work as a classifier and the bottom layers work as a feature extractor. Even if we only consider the top layers, computing the inverse Hessian matrix (\(H_{\hat{\theta}}^{-1}\)) is computationally intensive. To avoid the direct computation of the inverse Hessian matrix, we can instead leverage the inverse Hessian-Vector Product (IHVP) method to approximate \(H_{\theta}^{-1}\nabla_{\theta}L(D_{val},\hat{\theta})\) in Eq. 8 and Eq. 9. We use Linear time Stochastic Second-Order Algorithm (LiSSA) [29] to compute the IHVP efficiently. ### _Selecting Influential Examples_ Recall from Section III that the influence function tells how the model loss would change if a data point \(z\) was removed from the training set. 
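As an illustration of how the group influence score of Eq. 8 can be computed without forming \(H_{\hat{\theta}}^{-1}\) explicitly, the sketch below combines Hessian-vector products with a LiSSA-style IHVP estimate in PyTorch. It is a simplified full-batch version (LiSSA samples mini-batches in the recursion, and the paper restricts the computation to the top layers); the function names and default constants are illustrative assumptions, not taken from the authors' code.

```python
import torch
from torch.autograd import grad

def flat_grad(loss, params, create_graph=False):
    """Gradient of `loss` w.r.t. `params`, flattened into one vector."""
    g = grad(loss, params, create_graph=create_graph)
    return torch.cat([gi.reshape(-1) for gi in g])

def influence_on_val(model, loss_fn, z, D_trn, D_val, depth=100, damping=0.01, scale=25.0):
    """Sketch of Eq. 8: I_up,loss(z, D_val) = -grad_val^T H^{-1} grad_z,
    with H^{-1} grad_val approximated by the LiSSA recursion."""
    params = [p for p in model.parameters() if p.requires_grad]

    x_val, y_val = D_val
    v = flat_grad(loss_fn(model(x_val), y_val), params).detach()   # validation-group gradient

    # LiSSA: h <- v + (1 - damping) h - (H h) / scale; after `depth` steps, h / scale approximates H^{-1} v.
    # Full-batch Hessian-vector products are used here for clarity; LiSSA samples mini-batches in practice.
    x_trn, y_trn = D_trn
    h = v.clone()
    for _ in range(depth):
        train_grad = flat_grad(loss_fn(model(x_trn), y_trn), params, create_graph=True)
        hvp = flat_grad(train_grad @ h, params)                     # Hessian-vector product H h
        h = v + (1 - damping) * h - hvp / scale
    ihvp = (h / scale).detach()                                     # approximates H^{-1} grad_val

    x_z, y_z = z                                                    # single example with batch dimension 1
    g_z = flat_grad(loss_fn(model(x_z), y_z), params)
    return -(ihvp @ g_z).item()
```

Since the IHVP depends only on the validation gradient, it is computed once and reused across all training examples when ranking them, which is what keeps the subset selection step tractable.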
In the presence of attacks, poisoned examples in training data can silently change the underlying data distribution. In other words, poisoning attacks shift the model's decision boundary away from the one over clean data, and the poisoned model effectively treats both poisoned and clean examples as normal training data. However, some poisoned examples are more effective than others in changing the model's predictions [8, 10]. Intuitively, we can increase the generalization ability of the model by focusing on the training examples which have a higher impact. To do so, we introduce a subset selection method in Algorithm 2. For each example in the training data, we calculate the influence score using upweighting approach (Eq. 8) as shown in lines 2-3. We then build the subset \(D_{s}\) by selecting the \(\lceil r\times|D_{trn}|\rceil\) examples which have the highest impact. According to Basu et al. [30], the influence function may not accurately estimate the change in loss when working with DNN models. The phenomenon is more noticeable when the models have deeper and wider architectures. The correctness of influence estimation also depends on various factors in the training scheme. However, even though the estimated changes in loss using Eq. 8 does not closely match the actual changes, they are still highly correlated [16]. In fact, our Algorithm 2 constructs the training subset \(D_{s}\) based on the rank in the influence scores, not on the estimated change in loss. ### _Adding Healthy Influential-Noise_ We first make an observation of how the influence score can help explain the effect of training input on the trained model. Eq. 9, \(\mathcal{I}_{pert,loss}(z,D_{val})\), tells us the contribution of each input pixel to the loss of the whole validation set. While the gradient of the loss of the validation set, \(\nabla_{\theta}L(D_{val},\hat{\theta})\), provides the information on how the trained model performs on the unseen validation set, the term \(\nabla_{x}\nabla_{\theta}l(z,\hat{\theta})\) tells us how each input pixel contributes to the loss. Pixels with either significantly positive or negative influence scores will highlight the model's attention. Since the influence score represents the change in loss, a pixel that has a positive (negative) score will cause an increase (decrease) in the loss. Therefore, by crafting a noise in the opposite direction of influence score, i.e., \(\delta=-\mathcal{I}_{pert,loss}(z,D_{val})\), we can perturb the original image in a way that strengthens pixels that have a helpful effect and weakens pixels that have a harmful effect. In the following example, we show that images that are predicted wrongly by the trained model can be altered by adding healthy noises based on their influence scores to increase the probability of being correctly classified by the model. In the second column of Figure 1, we visualize the healthy noise for either clean or poisoned MNIST images. Note that the noise values can be either negative or positive, and we scale the values to be between 0 and 1 in the visualization. From the noise, we get distinct dark and bright regions which give information about the harmful/helpful regions inside the image. Each dark area corresponds to a harmful region, and these patches of pixels cause confusing regions that need to be reduced. On the other side, each bright area corresponds to a helpful region and depicts where healthy influential-noise can be added to improve the prediction ability of the model. 
Therefore, the influence function intuitively provides useful information on how we can perturb the original image to get better classification results from the model. We note that this conclusion is consistent with our observation from Eq. 9. In the third column, we generate healthy-noise perturbed images by adding the healthy noise (middle column) to the original images (first column). By comparing the classification results of the third column with the results of the first column, we can see that adding noise based on the influence scores helps reduce the effect of harmful regions and boost the helpful regions of each image, as the correct class is now predicted with high probability. Therefore, the model benefits from the addition of healthy noise. Moreover, in the case of poisoned images, the healthy noise assigns negative values to the harmful poisoned regions so that the malicious perturbations become less effective. Algorithm 3 demonstrates how our method generates healthy noise for each example in \(D_{s}\). Let \(\delta_{i}\in\Delta\) be the healthy influential-noise corresponding to training input \(x_{i}\), i.e., \(\hat{x}_{i}=x_{i}+\delta_{i}\) in line 2. We optimize the noise within \(L_{\infty}\)-norm \(\beta\)-bound. In other words, \(\delta\) should belong to \(\Delta=\{\delta\in\mathbb{R}^{H\times W}:\|\delta\|_{\infty}\leq\beta\}\). During an update round, for every training example \(\hat{z}_{i}=(\hat{x}_{i},y_{i})\in\hat{D}_{s}\), we first generate the noise as \(\mathcal{I}_{pert,loss}(\hat{z}_{i},D_{val})\) (line 3) and then project it onto the feasible space. After that, we add the healthy noise to the training input and clip pixel values to be within a valid range (line 4). Finally, we update the newly perturbed examples on \(\hat{D}_{s}\) in line 6. We note that for \(\mathcal{I}_{pert,loss}(\hat{z}_{i},D_{val})\) in line 3, the differentiation involves all layers of the entire network to calculate \(\nabla_{x}\nabla_{d}l\left(z,\hat{\theta}\right)\). ``` 0: Perturbed training subset \(\hat{D}_{s}\) at previous update step, selected training subset \(D_{s}\), scaling factor \(\gamma\), healthy noise bound \(\beta\) 0:\(\hat{D}_{s}\) 1:for\(\hat{z}_{i}\in\hat{D}_{s}\) and \(z_{i}\in D_{s}\)do 2:\(\delta_{i}\leftarrow\hat{x}_{i}-x_{i}\) 3:\(\delta_{i}\leftarrow\Pi_{\beta}(\delta_{i}-\gamma\mathcal{I}_{pert,loss}(\hat {z}_{i},D_{val}))\) 4:\(\hat{x}_{i}\leftarrow\text{Clip}(x_{i}+\delta_{i})\) 5:\(\hat{z}_{i}\leftarrow(\hat{x}_{i},y_{i})\) 6: Update new \(\hat{z}_{i}\) in \(\hat{D}_{s}\)return\(\hat{D}_{s}\) ``` **Algorithm 3** AddNoise(\(\hat{D}_{s}\),\(D_{s}\),\(\gamma\),\(\beta\)) - Adding Healthy Influential-Noise ### _Complexity Analysis_ In this section, we analyze the complexity of our proposed method. Let \(p\) be the number of parameters in the model, \(n\) be the size of the training set \(D_{trn}\), \(k\) be the size of the validation set \(D_{val}\), and \(d\) be the number of input features. In Algorithm 2, \(\mathcal{I}_{up,loss}(z,D_{val})\) is computed for each training example before the inputs are sorted from most to least influential. Since \(\left[\nabla_{\theta}\frac{1}{|D_{val}|}\sum_{i=1}^{|D_{val}|}l\left(z_{i}, \hat{\theta}\right)\right]^{\top}H_{\theta}^{-1}\) is fixed, it only needs to be computed once - which helps to reduce the overall running time. 
Algorithm 2 requires \(O(np)\) for calculating the loss of \(n\) training examples and the one computation of IHVP using LiSSA takes \(O(kp+rjp)\), where \(r\) is the recursion depth and \(j\) is the number of recursions. Sorting takes \(O(n\log(n))\) on average, hence, the subset selection procedure takes in total \(O(np+kp+rjp+n\log(n))\). Algorithm 3 generates and updates the healthy influential-noise for each example in the subset \(\hat{D}_{s}\). The noise generation step in line 3 takes a similar running time to perform the IHVP estimation, and \(O(dp)\) is the cost for calculating the gradient with respect to the input, \(\nabla_{x}\nabla_{\theta}l\left(z,\hat{\theta}\right)\). Since \(|D_{s}|\leq n\), Algorithm 3 takes \(O(ndp+kp+rjp)\). In Algorithm 1, the training phase is performed over \(T\) epochs, each of which needs \(O(np)\) to update model's parameters. The training pipeline calls SecInf (Algorithm 2) once in line 4, and AddNoise (Algorithm 3) \(s\) times in line 6-9, where \(s=|S|\). Since \(k\ll n\) and \(s\ll T\), the total complexity for our HINT (Algorithm 1) is \(O(Tp(nd+rj)+n\log(n))\). ## V Data Poisoning Attacks and Defenses ### _Data Poisoning Attacks_ We detail the untargeted and targeted attacks we utilize in our experiments to craft adversarial examples. #### V-A1 Untargeted Attacks The untargeted attacks we consider include projected gradient descent (PGD) [2], delusive adversarial perturbation (DAP) [9], delusive universal random perturbation (DURP) [9], and deep confuse (DC) [1]. PGD Fig. 1: Using influence score to boost model prediction. The first two rows are clean examples. The last two rows are poisoned examples generated by DeepConfuse [1]. Three columns from left to right are: original image, noise generated by HINT, and healthy-noise perturbed image. _Pred_ is the predicted class and _Prob_ is the probability. Red dotted circles are important regions that the influential noise focuses on. was originally proposed as a test-time attack and it utilizes gradient information to generate adversarial perturbations. DAP crafts an adversarial training input \(\tilde{x}_{i}\) by minimizing the loss \(l(f_{\theta}(\tilde{x}_{i}),t_{i})\) where \(t_{i}\) is a class other than \(y_{i}\) and \(\tilde{x}\) is bounded by a small tolerance rate \(\epsilon\). DURP works by adding class-wise random perturbation \(\mu(y_{i})\) to each \(x_{i}\). This means that for all training examples having class \(y_{i}\), the attack will add the same perturbation \(\mu(y_{i})\) to them. DC generates malicious examples using an Auto-Encoder architecture (e.g., UNet for the CIFAR-10 dataset). This attack can craft imperceptible and efficient poisoning examples. #### V-A2 Targeted Attacks The attacker's objective can be formulated as a bi-level optimization problem: \[\min_{\epsilon\in C}l(f_{\theta_{\epsilon}}(x_{t}),y_{adv})\text{ s.t. }\theta_{ \epsilon}=\operatorname*{arg\,min}_{\theta\in\Theta}l(f_{\theta}(x_{i}+ \epsilon_{i}),y_{i}) \tag{10}\] where \(C=\{\epsilon\in\mathbb{R}^{H\times W}:||\epsilon||_{\infty}\leq\xi,\epsilon_{ i}=0\ \forall i\notin D_{p}\}\) is the constraint set of malicious perturbation \(\epsilon\), and \(D_{p}\) is the poisoned set. Commonly, the malicious perturbations lie within \(\xi\)-bounded \(l_{\infty}\) ball to be imperceptible. We consider four different targeted attacks in our experimental evaluation: MetaPoison (MP) [5], gradient matching (GM) [4], bullseye polytope (BP) [3], and feature collision (FC) [6]. 
In four attacks, GM and MP are two modern attacks in training-from-scratch scenario, while BP and FC work efficiently in transfer learning scenario. MP uses a meta-learning approach to approximate the attacker's bilevel objective in Eq. 10. To do so, MP runs multiple unroll steps to approximate the inner optimization, and looks into the training pipeline to evaluate how the perturbation will affect the adversarial loss in future training steps. Then, the method uses the Adam optimizer to update the perturbation. GM optimizes the malicious perturbation via aligning the gradients of targeted and poisoned examples. Therefore, the method only needs one unroll step to compute the gradient. To optimize the malicious perturbation, GM attempts to minimize the negative cosine similarity between adversarial loss and natural loss, where the adversarial (natural) loss is the loss defined in the outer (inner) objective function in Eq. 10. In BP, the attacker crafts poisoned examples such that their representation in feature space is close to the targeted image. BP significantly improves the scalability and transferability of Convex Polytope attack [31]. FC, also known as Poison Frogs attack, has a similar idea to BP that it explores the feature space of images. The method aims to optimize the malicious examples such that they collide with the targeted example in the feature space. ### _Defenses against Data Poisoning Attacks_ In this section, we detail the defense mechanisms that we compare with HINT in Section VI. Specifically, we consider the FRIENDS [8], adversarial training against delusive adversaries (ATDA) [9], and EPIC [10] algorithms. **FRIENDS.** From the observation that each effective poison causes a local increase in the training loss and the whole poisoning set forms local regions in the loss space, FRIENDS aims to optimize the maximum perturbation without changing the model prediction. For each training example \(x_{i}\), the friendly noise \(\epsilon_{i}\) is generated as: \[\epsilon_{i}=\operatorname*{arg\,min}_{e:||\epsilon||_{\infty}\leq\beta}D_{KL }(f_{\theta}(x_{i}+\epsilon)||f_{\theta}(x_{i}))-\lambda||\epsilon||_{2},\] where \(\lambda\) is the scaling factor. Besides the friendly noise, FRIENDS adds random noise from Gaussian, Uniform, or Bernoulli distribution to smooth the training loss. **ATDA.** Following the theoretical proof that the adversarial risk can be the upper bound of natural risk, ATDA adapts the adversarial training technique to defend against untargeted poisoning attacks. By using FAT [32], the adversarial example \(\tilde{x}_{i}\) can be generated as: \[\tilde{x}_{i}=\operatorname*{arg\,min}_{\tilde{x}:||\tilde{x}-x|| _{p}\leq\beta}l(f_{\theta}(\tilde{x}_{i}),y_{i})\] \[\text{s.t. }l(f_{\theta}(\tilde{x}_{i}),y_{i})-\min_{y\in Y}l(f_{ \theta}(\tilde{x}_{i}),y)\geq\tau,\] where \(\tau>0\) is the margin such that an adversarial example would be misclassified. **EPIC.** Different from the previously discussed methods, EPIC finds and drops malicious training points. From the observation that effective poisoned examples are often isolated from others of the same class in the gradient space, EPIC builds a set of medoids of each class, assigns other data points to its closest medoid, and drops isolated medoids during the training. 
The objective to find the set of medoids can be formulated as: \[S\in\operatorname*{arg\,min}_{S\subseteq V,|S|<m}\sum_{i\in V}\min_{j\in S}|| \nabla l(f_{\theta}(x_{i}),y_{i})-\nabla l(f_{\theta}(x_{j}),y_{j})||_{2},\] where \(m\) is the maximum number of medoids, \(S\) is the index set of medoids, and \(V\) is the index set of training data. **Remark.** Our HINT is similar to FRIENDS and ATDA in principle as all three methods perturb training examples to defend against poisoning attacks. But HINT uses a different defense mechanism as aforementioned in Section IV. The healthy noise can capture local harmful/helpful regions formed by influential pixels and, by focusing on those important regions, the trained model is more resilient to attacks. Furthermore, HINT leverages the advantage of influence function, which can estimate the impact of each training example to the model loss, to choose a subset of training examples for perturbation. ## VI Experiments ### _Evaluation Setup_ #### Vi-A1 Dataset and Model We focus on the image classification tasks and use MNIST and CIFAR-10 as our primary evaluation datasets. Each dataset is divided into three subsets (train, validation, and test), and a classification model is trained over the training set. Table I gives the specific details of our dataset construction. For MNIST, we train the CNN model using the SGD method with a learning rate of 0.01. For CIFAR-10, we train the ResNet-18 model using the SGD method with a Nesterov momentum of 0.9 and weight decay of \(5\times 10^{-4}\). Data augmentation techniques such as random crop and horizontal flip are also applied to the training images. Additionally, the initial learning rate is set to 0.1, which is then decreased by a factor of 10 at epochs 30, 50, and 70. #### V-A2 Poisoning Training Data We use the attack algorithms aforementioned in Section V-A to generate adversarial examples and inject them into the clean training data to produce our poisoned training set. **Poisoning training data with untargeted attacks.** In this experiment, we consider the difficult scenario where the poisoned training data contains poisoned examples generated by four different malicious attacks: PGD, DAP, DURP, and DC. We train a victim model over the clean data for each attack type and generate the same amount of poisoned data per attack type based on a poison ratio \(\rho\). For example, using CIFAR-10 with \(\rho=0.4\), we create a poisoned training dataset (with 49,000 images), which has 29,400 clean images and 19,600 poisoned images (4,900 images per attack type). For each attack type, the attacker's budget for perturbation is \(\xi=0.031\) (or 8/255) when attacking CIFAR-10 and \(\xi=0.3\) (or 76/255) when attacking MNIST. The setting is practical since the attacker tries to use a combination of multiple powerful attacks to efficiently generate malicious data. **Poisoning training data with targeted attacks.** When analyzing the ability of defenses against targeted attacks, we only consider the CIFAR-10 as this dataset is commonly used for targeted attack evaluation. In this experiment, we evaluate the effectiveness of defense methods in both training-from-scratch (with GM and MP) and transfer learning (with BP and FC) settings as aforementioned in Section V-A. In both scenarios, we assume that the attacker knows the dataset, model architecture, and training scheme. However, they do not know the model weights. 
Similar to other papers aimed at defending against targeted attacks [8, 10], and the benchmarks in [33], we randomly choose the target and source classes, and then generate 490 poisoned training images (\(\rho=1\%\)). We run the attacks with the same setup, except for some specific hyper-parameters for each attack that we mention in this section. For GM, BP and FC, we train the victim model for 80 epochs and choose \(\xi=0.062\) (or 16/255) as the bound for malicious perturbation. The number of attack iterations is set at 250, 4,000, and 1,000, respectively. Other hyper-parameters follow the default values from the implementation of GM2. For MP, the poisoned data is downloaded from MetaPoison3. We use an equal mix of poison-dog target-bird and poison-frog target-plane settings in the evaluation. The bound for perturbation used in MetaPoison is \(\xi=0.031\). Footnote 2: [https://github.com/JonasGeiping/poisoning-gradient-matching](https://github.com/JonasGeiping/poisoning-gradient-matching) Footnote 3: [https://github.com/wronnyhuang/metaposion](https://github.com/wronnyhuang/metaposion) For transfer learning scenario, we use a similar setup as in the from-scratch scenario to pre-train the victim model. Then, we randomly re-initialize the top layers while freezing the feature extraction layers. The victim model is then optimized on the transfer set. We construct the clean training set (for pre-training the victim model) with 44,100 clean images (90%) selected uniformly from each class. The transfer set consists of the remaining 4,900 images (10%) of the training set, in which 490 examples have been poisoned. We note that transfer learning setting used in our evaluation is not a real transfer learning setting as the clean training and the transfer sets come from the same original dataset. This, however, is the worst-case scenario that a defense has to consider to show its effectiveness, as the setting makes it easier for the attacker to succeed [4, 8]. #### V-A3 HINT and Defense Baselines In this section, we briefly discuss the hyper-parameters for running each defense used in our experiment. **Defense Baselines.** We choose FRIENDS+Bernoulli in our evaluation as it has the best performance according to [8]. Similar to the default setting of FRIENDS, we set \(\beta=0.062\). For ATDA, we choose FAT as the representation and set the default defender's budget \(\beta\) as 0.25 based on the findings presented in [9]. When conducting experiments of ATDA against targeted attacks, we also use \(\beta=0.062\) for a fair comparison with other baselines. EPIC-0.1 is the default representation for EPIC with the poison drop interval \(T\) and warm-up epochs \(K\) as: \(T=4\), \(K=10\) for CIFAR-10; and \(T=2\), \(K=5\) for MNIST. We note that our evaluation uses default values as presented in the FRIENDS, ATDA, and EPIC papers [8, 9, 10] for all other hyper-parameters. Besides the above defenses, we additionally run experiments for a naive training method (W/o Defense) using the default architecture of each dataset. **HINT.** When computing the healthy noise, we only use the weights from the top layers of the model and discard all other weights (i.e., all weights from feature extraction layers). We set the update schedule for healthy noise at epochs 5, 15, and 40 for CIFAR-10 and at epochs 5 and 15 for MNIST. We choose the budget for the healthy noise to be \(\beta=0.062\) and the scaling factor to be \(\gamma=0.1\). 
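To make the role of \(\beta\) and \(\gamma\) concrete, the following is a minimal sketch of a single AddNoise update (Algorithm 3) for one image, assuming the per-pixel influence \(\mathcal{I}_{pert,loss}(\hat{z},D_{val})\) has already been computed (for example, with an IHVP routine like the one sketched earlier). The helper name and the assumed pixel range [0, 1] are illustrative and not taken from the authors' code.

```python
def add_noise_step(x_hat, x, pixel_influence, gamma=0.1, beta=0.062):
    """One AddNoise update (Algorithm 3) for a single image in \hat{D}_s.
    All arguments are torch tensors; `pixel_influence` is
    I_pert,loss(\hat{z}, D_val) with the same shape as x."""
    delta = x_hat - x                          # current healthy influential-noise (line 2)
    delta = delta - gamma * pixel_influence    # step against the per-pixel influence (line 3)
    delta = delta.clamp(-beta, beta)           # projection Pi_beta onto the L_inf ball
    return (x + delta).clamp(0.0, 1.0)         # clip to a valid pixel range, assumed [0, 1] (line 4)
```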
**Metrics.** For each experiment, we report the average and standard deviation of the test accuracy over five trials. When defending against targeted attacks, we additionally report the Attack Success Rate (ASR), which evaluates the success of an attack in changing the prediction of targeted examples. Note that an attack is considered successful only if it can change the predicted class to the intended class, following the evaluation in [4, 8, 10]. We use GPU Tesla V100 (32GB RAM) and CPU Xeon 6258R 2.7 GHz to conduct all experiments. ### _Results_ #### V-B1 Defending against Untargeted Attacks In this experiment, we evaluate the ability of different defense mechanisms to defend against poisoning attacks under different poison ratios. In Table II, we report the average and standard deviation of the test accuracy on CIFAR-10 and MNIST datasets. Each row shows the result for one poisoning set constructed from a particular poison ratio \(\rho\). In most rows, HINT outperforms all \begin{table} \begin{tabular}{c|c|c|c|c|c|c|c} \hline Dataset & \(|D_{trn}|\) & \(|D_{val}|\) & \(|D_{test}|\) & Model & Batch & Epoch \\ \hline MNIST & 59000 & 1000 & 10000 & CNN & 128 & 30 \\ CIFAR-10 & 49000 & 1000 & 10000 & Resnet-18 & 128 & 80 \\ \hline \end{tabular} \end{table} TABLE I: Description of datasets and corresponding models. other defense baselines, demonstrating our defense's effectiveness against poisoning attacks. For HINT and other methods, the test accuracy decreases when \(\rho\) increases, which is not surprising since the attacker is able to inject more poisoning examples into the training set. With a large poison ratio (\(p\geq 0.8\)), our HINT method significantly reduces the effect of poisoned samples when we compared to naive training. It also clearly outperforms other three defense baselines. When we look at the results on the CIFAR-10 with a small poison ratio (\(\rho\leq 0.4\)), HINT is the only defense method that achieves better performance than naive training, while other defenses have significant gaps below (more than \(1.5\%\)). A similar pattern happens to the results on the MNIST when \(\rho\leq 0.6\). In this case, the model still learns both clean and malicious patterns but is able to focus more on the clean data. With FRIENDS and ATDA, when friendly or adversarial noises are added to clean training examples, the decision boundary may move further away from the original one. Recall from Section V-B where we explain that FRIENDS and ATDA add noise to the whole training dataset while our HINT method perturbs only a selected subset. Especially in the case that there is no poisoning attack performed (\(\rho=0.0\)), our HINT method only loses \(0.22\%\) test accuracy compared to naive training on MNIST and even has better accuracy on CIFAR-10. These results match our previous observation in Section IV-C that the model trained with healthy noise does not lose its generalization ability. On both datasets, EPIC is the worst performer in the four defenses. This is because EPIC continuously detects and drops malicious examples, which works more efficiently when poisoned examples are far from their class in the gradient space. However, it is hard for EPIC to detect poisoned examples by untargeted attacks since untargeted attacks significantly perturb multiple training examples of every class to shatter the decision boundary, which also tampers the representations of training examples in the gradient space. 
It is even more challenging when we mix four different attack types with clean data in our setting. In Figure 2, we show the predicted class, along with the probability of the prediction, of HINT and other baselines for five test examples from MNIST. We use red to denote each defense's success in preventing untargeted attacks. Each result shows the predicted class with the confidence score from the trained model using the corresponding defense method. Some pairs of classes have a high chance of confusion in prediction, which shows that untargeted attacks can successfully mislead the victim model and shift the decision boundary. For example, a trained model easily gets confusing an image of digit 7 for digit 2, or an image of digit 5 for digit 6. The results show that our method is more effective than other baselines in helping the trained model avoid the effect of untargeted attacks and give correct prediction results with a high confidence score. #### Iv-B2 Defending against Targeted Attacks Table III shows a comparison of our HINT method with all baselines in terms of ASR and test accuracy on CIFAR-10. We compare the results based on ASR (lower is better) and test accuracy (higher is better). In this setting, our HINT method also significantly outperforms all other baselines. With all attacks, the naive model fails all five trials, except for MP with 4/5. In particular, HINT, FRIENDS, and EPIC succeed in keeping the targeted example safe under the GM attack. However, our method achieves the highest test accuracy of the three defenses. Under other attack methods, HINT only fails once, while most other baselines fail multiple times. In most experiments, our HINT method also consistently has the best test accuracy compared to the other baselines. ATDA is the worst performer in terms of ASR compared to HINT, FRIENDS, and EPIC. This is unsurprising since ATDA uses adversarial perturbation to break the malicious examples by untargeted attacks, while the training data is poisoned by targeted attacks in this experiment. From the results, we can see that all the defense methods work well when defending against GM and MP. In training-from-scratch scenario, when training the model from scratch, defense methods will have a better chance to capture the malicious pattern from the poisoned examples and thereby be able to reduce their effect. Conversely, FRIENDS and EPIC have noticeably dropped in efficiency when defending against BP and FC in transfer learning scenario. Figure 3 and 4 illustrate our results under MP and BP attacks, respectively. Similar to Figure 2, we use red color to denote the defense's success in preventing attacks from falsifying the target's class, and each result shows the predicted class with the confidence score. The model trained with HINT predicts the correct classes for most of the targeted examples with a high confidence score, consistent with the results of defending against untargeted attacks. Especially under the BP attack, HINT is the only method to successfully defend against the attack trials in the last two columns. #### Iv-B3 Sensitivity Analysis Table IV shows the sensitivity analysis of HINT on the ratio of selected examples \(r\) and the defender's budget \(\beta\). We run our HINT method with different \(r\) values to show how the size of the selected subset affects the model's performance. Note that when \(r=1.0\), the result is equivalent to the case of removing the subset selection module. 
The result shows that as \(r\) increases, the test accuracy decreases. This observed trend highlights the contribution of our subset selection module in the whole training method. The trend also matches our previous observation that adding noise into clean examples may cause a drop in test accuracy. Fig. 2: Prediction of defense methods on MNIST test images under multiple untargeted attacks. For hyper-parameter \(\beta\), the test accuracy increases when moving from \(0.031\) to \(0.062\), and decreases when \(\beta>0.062\). This phenomenon is understandable since the healthy noise is not large enough to reduce the effect of malicious perturbation when we choose small \(\beta\). However, when the healthy noise is sufficiently large, it makes the values of the pixels move too far from their original value and breaks the spatial relationship in the images. #### Iv-B4 Execution Time We report the execution time of all defense methods used in our experiments in Table V. For HINT, we report the time for four variants with different \(r\) values. When \(r\) increases, the running time increases since the method needs to compute healthy noise for more training examples. When \(r=1.0\), the running time does not significantly increase compared to when \(r=0.75\) since HINT disables the subset selection module. For other defenses, the running time of FRIENDS is close to the default setting of HINT (\(r=0.5\)), while ATDA and EPIC need more time to execute. ATDA \begin{table} \begin{tabular}{c|c|c|c|c|c|c} \hline Dataset & \(\rho\) & W/o Defense & HINT & ATDA & FRIENDS & EPIC \\ \hline & 0.0 & **93.86 \(\pm\) 0.26** & 93.64 \(\pm\) 0.12 & 91.82 \(\pm\) 0.17 & 89.25 \(\pm\) 0.53 & 87.28 \(\pm\) 0.71 \\ & 0.2 & 92.54 \(\pm\) 0.05 & **92.67 \(\pm\) 0.04** & 90.97 \(\pm\) 0.17 & 89.37 \(\pm\) 0.52 & 86.98 \(\pm\) 0.79 \\ CIFAR-10 & 0.4 & 91.67 \(\pm\) 0.12 & **92.08 \(\pm\) 0.17** & 89.92 \(\pm\) 0.25 & 88.53 \(\pm\) 0.38 & 87.37 \(\pm\) 0.40 \\ & 0.6 & 83.51 \(\pm\) 0.17 & **91.94 \(\pm\) 0.21** & 90.00 \(\pm\) 0.38 & 88.90 \(\pm\) 0.53 & 86.70 \(\pm\) 0.11 \\ & 0.8 & 80.02 \(\pm\) 0.36 & **91.20 \(\pm\) 0.21** & 89.22 \(\pm\) 0.57 & 87.75 \(\pm\) 0.18 & 85.38 \(\pm\) 0.73 \\ & 1.0 & 48.62 \(\pm\) 0.74 & **90.79 \(\pm\) 0.13** & 89.41 \(\pm\) 0.58 & 86.76 \(\pm\) 0.55 & 84.73 \(\pm\) 0.32 \\ \hline & 0.0 & 98.44 \(\pm\) 0.02 & **98.87 \(\pm\) 0.01** & 98.45 \(\pm\) 0.11 & 98.33 \(\pm\) 0.09 & 98.06 \(\pm\) 0.05 \\ & 0.2 & 97.73 \(\pm\) 0.01 & **98.42 \(\pm\) 0.02** & 97.39 \(\pm\) 0.29 & 97.58 \(\pm\) 0.08 & 97.31 \(\pm\) 0.16 \\ MNIST & 0.4 & 97.15 \(\pm\) 0.03 & **98.01 \(\pm\) 0.09** & 97.01 \(\pm\) 0.09 & 97.14 \(\pm\) 0.12 & 96.76 \(\pm\) 0.19 \\ & 0.6 & 96.64 \(\pm\) 0.12 & **96.87 \(\pm\) 0.02** & 96.46 \(\pm\) 0.14 & 96.58 \(\pm\) 0.09 & 96.19 \(\pm\) 0.19 \\ & 0.8 & 95.55 \(\pm\) 0.12 & **95.88 \(\pm\) 0.04** & 95.59 \(\pm\) 0.09 & 95.49 \(\pm\) 0.21 & 95.03 \(\pm\) 0.20 \\ & 1.0 & 74.62 \(\pm\) 0.94 & **88.92 \(\pm\) 0.19** & 80.60 \(\pm\) 1.01 & 77.08 \(\pm\) 1.70 & 77.84 \(\pm\) 1.54 \\ \hline \end{tabular} \end{table} TABLE II: Test accuracy (%) of our method and baselines when defending against multiple untargeted attacks (PGD+DAP+DURP+DC) on CIFAR-10 and MNIST datasets. In this evaluation, \(r=0.5\) and \(\rho\) is the poison ratio. Fig. 4: Prediction of defense methods on CIFAR-10 test images under BP attack. “A” and“I” stand for actual and intended classes, respectively. 
\begin{table} \begin{tabular}{c|c|c|c|c} \hline & \multicolumn{1}{c|}{GM} & \multicolumn{1}{c|}{MP} & \multicolumn{1}{c|}{BP} & \multicolumn{1}{c}{FC} \\ \hline \multirow{2}{*}{W/o Defense} & ASR & 5/5 & 4/5 & 5/5 & 5/5 \\ & Test acc. & 93.69 \(\pm\) 0.19 & 87.48 \(\pm\) 0.41 & 91.41 \(\pm\) 1.34 & 89.40 \(\pm\) 1.39 \\ \hline \multirow{2}{*}{HINT} & ASR & **0/5** & **1/5** & **1/5** & **1/5** \\ & Test acc. & **92.99 \(\pm\) 0.26** & **87.15 \(\pm\) 0.46** & **92.41 \(\pm\) 0.54** & **92.21 \(\pm\) 0.31** \\ \hline \multirow{2}{*}{ATDA} & ASR & 3/5 & 3/5 & 4/5 & 4/5 \\ & Test acc. & 93.64 \(\pm\) 0.27 & 87.45 \(\pm\) 0.61 & 89.50 \(\pm\) 1.51 & 88.37 \(\pm\) 1.73 \\ \hline \multirow{2}{*}{FRIENDS} & ASR & 0/5 & 1/5 & 3/5 & 2/5 \\ & Test acc. & 89.17 \(\pm\) 0.41 & 78.26 \(\pm\) 0.63 & 89.53 \(\pm\) 0.66 & 88.84 \(\pm\) 0.94 \\ \hline \multirow{2}{*}{EPIC} & ASR & 0/5 & 2/5 & 4/5 & 3/5 \\ & Test acc. & 90.36 \(\pm\) 0.43 & 86.68 \(\pm\) 0.23 & 89.65 \(\pm\) 2.34 & 89.19 \(\pm\) 1.61 \\ \hline \end{tabular} \end{table} TABLE III: Attack Success Rate and test accuracy (%) of defense mechanisms against different targeted poisoning attacks. Experiments with MetaPoison is run without augmentation, following the setting in [5, 8]. Fig. 3: Prediction of defense methods on CIFAR-10 test images under MP attack. “A” and“I” stand for actual and intended classes, respectively. generates and adds the noise for every training example in every training step. EPIC executes poison identification, which requires extensive resources to find medoids for each class in gradient space, to drop isolated points in every \(T\) epoch. ## VII Conclusion In this work, we presented an effective robust training framework, HINT, that hardens the model with healthy influential-noise to protect machine learning models from poisoning attacks. Our method uses the influence function as a central mechanism to select examples with the highest impact on the model test loss and crafts the healthy influential-noise. Deep learning models trained with our HINT method are more resilient to the effect of malicious examples. Through comprehensive empirical evaluations, we demonstrate the effectiveness and stability of HINT in defending against powerful untargeted and targeted attacks (e.g., Deep Confuse, Gradient Matching, and Bulleyes Polytope) and its superiority over state-of-the-art defense baselines. These evaluations were conducted in a realistic scenario, highlighting the suitability of our defense mechanism for deployment in sensitive security settings. In future work, we will extend our approach to defend against other attack types, such as back door attacks, which involve injecting specific backdoor patterns into selected training data, manipulating test data to embed the triggers, and causing intentional misclassification. **Reproducibility.** Our source code is available at [https://github.com/minhhao97vn/HINT](https://github.com/minhhao97vn/HINT). ## Acknowledgements This work was supported in part by National Science Foundation under awards 1946391, the National Institute of General Medical Sciences of National Institutes of Health under award P20GM139768, and the Arkansas Integrative Metabolic Research Center at University of Arkansas.
2309.06564
Application of the Thermodynamics of Radiation to Dyson Spheres as Work Extractors and Computational Engines, and their Observational Consequences
I apply the thermodynamics of radiation to Dyson spheres as machines that do work or computation, and examine their observational consequences. I identify four properties of Dyson spheres that complicate typical analyses: globally, they may do no work in the usual sense; they use radiation as the source and sink of energy; they accept radiation from a limited range of solid angle; and they conserve energy flux globally. I consider three kinds of activities: computation at the Landauer limit; dissipative activities, in which the energy of a sphere's activities cascades into waste heat, as for a biosphere; and "traditional" work that leaves the sphere, such as radio emission. I apply the Landsberg formalism to derive efficiency limits in all 3 cases, and show that optical circulators provide an "existence proof" that greatly simplifies the problem and allows the Landsberg limit to be plausibly approached. I find that for computation and traditional work, there is little to no advantage to nesting shells (as in a "Matrioshka Brain"); that the optimal use of mass is generally to make very small and hot Dyson spheres; that for "complete" Dyson spheres we expect optical depths of several; and that in all cases the Landsberg limit corresponds to a form of the Carnot limit. I explore how these conclusions might change in the face of complications such as the sphere having practical efficiencies below the Landsberg limit (using the endoreversible limit as an example); no use of optical circulators; and swarms of materials instead of shells.
Jason T. Wright
2023-09-12T22:49:09Z
http://arxiv.org/abs/2309.06564v2
Application of the Thermodynamics of Radiation to Dyson Spheres as Work Extractors and Computational Engines, and Their Observational Consequences ###### Abstract I apply the thermodynamics of radiation to Dyson spheres as machines that do work or computation, and examine their observational consequences. I identify four properties of Dyson spheres that complicate typical analyses: globally, they may do no work in the usual sense; they use radiation as the source and sink of energy; they accept radiation from a limited range of solid angle; and they conserve energy flux globally. I consider three kinds of activities: computation at the Landauer limit; dissipative activities, in which the energy of a sphere's activities cascades into waste heat, as for a biosphere; and "traditional" work that leaves the sphere, such as radio emission. I apply the Landsberg formalism to derive efficiency limits in all 3 cases, and show that optical circulators provide an "existence proof" that greatly simplifies the problem and allows the Landsberg limit to be plausibly approached. I find that for computation and traditional work, there is little to no advantage to nesting shells (as in a "Matrioshka Brain"); that the optimal use of mass is generally to make very small and hot Dyson spheres; that for "complete" Dyson spheres we expect optical depths of several; and that in all cases the Landsberg limit corresponds to a form of the Carnot limit. I explore how these conclusions might change in the face of complications such as the sphere having practical efficiencies below the Landsberg limit (using the endoreversible limit as an example); no use of optical circulators; and swarms of materials instead of shells. + Footnote †: slugcomment: Accepted to AAS Journals ## 1 Introduction ### The Kinds of SETI and Prior Work on Dyson Spheres There are five primary pillars of modern SETI: radio SETI (Cocconi & Morrison, 1959; Drake, 1961), optical SETI (Schwartz & Townes, 1961), solar system SETI (Bracewell, 1960), waste heat SETI (Dyson, 1960), and exoplanetary SETI (Campbell, 2006). Of these, radio SETI is by far the most developed, followed by optical SETI. The challenge of detecting the interactions of light with terrestrial exoplanetary surfaces and atmospheres makes the last of these an understandably immature field, but waste heat and solar system SETI have a heritage just as old as radio and optical SETI, and remain relatively undeveloped. The premise of waste heat SETI is that life and technology, almost by definition, exploit free energy gradients. In particular, life can exploit these gradients and local resources to reproduce, and "intelligent" life can overcome many energy and resource limitations through the application of technology (Wright et al., 2014). This means that, in principle, technological life might be able to expand and grow to exploit an almost arbitrarily large amount of energy, up to the absolute limit: the total luminosity if a nearby star. Because energy must be conserved, the star's luminosity must be expelled from the system as waste heat, and so one might be able to measure the total energy use of such technology by looking for such heat. Dyson was partly motivated by the development of infrared detector technology, and imagining what might be found once astronomers could search the infrared sky. 
He correctly surmised that there would be many confounding sources, especially stars with circumstellar dust, and that the search for "infrared stars" would have broad astrophysical implications, regardless of whether any Dyson spheres were found. He also correctly pointed out that finding excess infrared emission from a star would hardly be dispositive evidence of alien life, and that some other traces of technology would need to be found to close the case. Wright et al. (2014) and Wright (2020) provide a thorough discussion of the history of waste heat SETI, including the limited number of searches to date. The primary challenge is that waste heat will presumably be found at mid-infrared wavelengths, which are best studied above Earth's atmosphere. A search for such emission thus requires an all-sky midinfrared space mission, of which there have only been three: _IRAS_, _WISE_, and _AKARI_. Carrigan (2009) provided the first thorough search using _IRAS_, Griffith et al. (2015) provided the first limits from _WISE_ for extragalactic sources, and Suazo et al. (2022) provided the first limit for nearby stars. Searches are somewhat hampered, however, by the lack of an underlying theory of the waste heat of circumstellar technological material that can predict its observational consequences. Wright et al. (2014) presented the AGENT formalism for parameterizing the fraction of starlight reprocessed as waste heat and its approximate effects on stellar and galactic spectral energy distributions. Wright (2020) developed a model for spherically symmetric distributions of circumstellar material,1 connecting the AGENT parameters to the physical properties of the material, and Huston & Wright (2022) determined the effects of such material on the surface and evolutionary properties of the star, making the model fully self-consistent. Footnote 1: That work contains at least two errors: Eq. 25 should read \(s=(R_{*}/R)^{2}\) and the solid angle subtended by the star is then \(2\pi(1-\sqrt{1-s^{2}})\); and it is missing the factor of \(\frac{4}{3}\) in the rate of computation described later in this work. These models remain simple and general, however: they generally assume spherical symmetry, assume that material has a characteristic orbital distance from the star, ignore radiative interactions among the material, and do not provide strong guidance regarding the typical temperatures, distributions, or optical depths of the material. To guess at such properties, one must typically invoke specific motivations or purposes for the material. ### Proposed Motivations and Purposes of Dyson Spheres Dyson (1960)'s original conception of an "artificial biosphere" around a star, later dubbed a "Dyson sphere" by Kardashev (1964), was loosely sketched as a logical limit of a species' expansion into space. Later, Dyson acknowledged that capturing all of a star's luminosity to maintain a vast habitat was just one possible motivation for such a project; in Dyson (1966) he generalized the argument for why such structures might exist or what their purpose might be: [T]hink of the biggest possible artificial activities, within limits set only by the laws of physics and engineering...I do not need to discuss questions of motivation, who would want to do these things or why. Why does the human species explode hydrogen bombs or send rockets to the moon? It is difficult to say exactly why. 
My rule is, there is nothing so big nor so crazy that one out of a million technological societies may not feel itself driven to do, provided it is physically possible. With this philosophy, we minimize our consideration of Dyson spheres' reasons to exist beyond the very general idea that they consume large amounts of energy to perform some kind of task. They might arise organically as a species expands into space, and not as part of a grand engineering project, slowly blocking more and more of the star's light the way a growing forest eventually develops a canopy that does the same. Dyson spheres require an enormous extrapolation from current human technology. Humanity has put very roughly 0.1 km\({}^{2}\) of solar panels into Earth orbit, enough to block \(\sim\)10\({}^{-19}\) of the light that would have escaped into space.2 To build a shell around the Sun 1 cm thick at 1 au would require roughly an Earth-mass of material. Being so far beyond our current engineering capabilities, one must wonder what we could possibly say about them, including the possibility that they could even exist. After all, the technology that could rearrange such huge amounts of mass would presumably employ methods we could not imagine. Footnote 2: From Jonathan McDowell (private communication) based on his comprehensive catalog of human objects in space. With solar panels having \(\sim\)20% efficiency, humanity’s space technology is thus \(K\)\(\sim\)0.14 on Kardashev’s scale, where a complete Dyson sphere is \(K=2\) (Gray, 2020). Dyson (1966) addressed this by considering only the laws of physics we are reasonably sure are foundational, ignoring engineering practicalities except to establish physical possibility: I am presenting an existence proof for certain technological possibilities. I describe crude and clumsy methods which would be adequate for doing various things. If there are other more elegant methods for doing the same things, my conclusions will still be generally valid. His "crude and clumsy" methods included ways to disassemble planets to acquire the huge amounts of mass necessary to block a significant fraction of a star's light. There are many disagreements in the literature about the form a Dyson sphere would take. Badescu (1995) analyzed the thermodynamics of Dyson spheres under certain assumptions about how they harvest energy, and attempted to calculate a minimum size for them given that they are biospheres. Badescu & Cathcart (2000) discuss and analyze many uses of a Dyson sphere, defining 3 classes of "stellar engines," including a class A engine that propels the star, and a class B engine that performs work. The latter is the primary focus of this paper (Dyson spheres may serve other purposes as well). Others have argued they would be extremely cold to maximize the work they can do (for instance Lacki (2016)). The proper thermodynamic analysis for Dyson spheres and radiation has also been debated, for instance in Badescu (2014). One important suggestion for the role of a Dyson sphere is that it would be used for computation (e.g. Scharf & Witkowski, 2023). Robert J. Bradbury wrote an influential discussion of "Matrioshka Brains," giant machines that perform as many computations as possible from astronomical sources of power like stars.3 In that and other works it is imagined that Dyson spheres will have a nested structure (thus their namesake, nested Russian matrioshka dolls) in which outer layers exploit the waste heat of inner layers to optimize computational efficiency.
Footnote 3: [https://web.archive.org/web/https://gwern.net/doc/ai/1999-bradbury-matrioshkabrains.pdf](https://web.archive.org/web/https://gwern.net/doc/ai/1999-bradbury-matrioshkabrains.pdf) ### Purpose and Plan of this Paper Here, our purpose is to follow Dyson's original philosophy by identifying only the outer limits of what a Dyson sphere might accomplish, and clarifying the appropriate expressions for their efficiency. We will explore what we can say about the optimal configurations of mass around a star for exploiting energy use for different broad purposes. Real Dyson Spheres, if they exist, will presumably be both sub-optimal and subject to many practical or other constraints we both can and cannot imagine: our purpose is not to guess those, but to see what we might be able to conclude for Dyson spheres generally. One of the keys to the analysis in this paper will be the recognition of Buddhiraju et al. (2018), Li et al. (2020), and others that the use of optical circulators can, in principle, circumvent some practical limits of heat engines using radiation as a pump and sink. Following the spirit of Dyson, we will accept this as an "existence proof" of technologies that simplify the optimal thermodynamics of Dyson spheres, yielding expressions consistent with the Carnot limit for work efficiency. Another contribution of this paper will be to clarify how Dyson sphere efficiencies depend on the kind of activities done with stellar radiation, and to explore three general categories. Most previous papers have assumed "work" is of the traditional sort, i.e. that it is essentially mechanical and is removed from the energy budget of the sphere. But Wright (2020) pointed out this is inappropriate for most work we might imagine a Dyson sphere would do, because Dyson spheres must conserve energy and radiate away _all_ of the energy they receive, not just their waste heat. Finally, this work will find that within a factor of a few, gross Dyson sphere properties are largely insensitive to these details. This is good: it means that we need not guess too precisely about the means and motives of large technological species to search for them. ## 2 Preliminaries We begin with a somewhat didactic presentation of the principles of Dyson sphere thermodynamics to orient the reader with the philosophy of the paper. ### Work, Entropy, and Efficiency One typically calculates the amount of work that can be extracted from a source of heat by looking at the energy flow one can generate between a source of heat and a cooler environment / heat sink as heat flows from warm to cold. One wishes to tap this energy flow to do "work," by which we mean we _remove the energy from the system_ and use it to do some sort of (usually mechanical) task. This extraction of work removes energy from the system, but not entropy. Since we cannot destroy entropy (we can only increase it), it is impossible to extract all of the energy flow for work--at least some of it must be reserved to carry away the entropy that came into the system. When this energy leaves the system it is called "waste heat." The entropy \(\delta S\) associated with a small amount of waste heat \(\delta Q\) is \(\delta Q/T\), meaning that heat at a low temperature has more entropy associated with it per erg than warm temperature heat. If one expels energy as heat at a lower temperature than one accepted it from the warm source, it will contain more entropy per erg than it had going in. 
We can then heuristically think of a heat engine as something that reallocates entropy across a pool of incoming energy. The engine puts some energy out as heat into the cooler environment--at this lower temperature, that energy can "hold" more entropy than it had coming in, so it can remove most or all of the incoming entropy, freeing up the remaining energy to be extracted as work. The amount of energy one can extract from a system to do such work is called the _exergy_ of the system. It depends on the amount of entropy the incoming heat started with, and the temperature at which one can expel heat. The maximum efficiency of a machine is the fraction of incoming energy from the hot source that is exergy. A Carnot engine is an ideal realization of a machine that performs this task optimally, generating no entropy of its own, and has an efficiency given by \[\eta_{\rm Carnot}=\frac{\dot{W}}{\dot{E}_{\rm in}}=1-\frac{T_{c}}{T_{h}} \tag{1}\] where \(\dot{W}\) is the rate at which work is extracted (i.e. the rate at which energy, but not entropy, leaves the system) and \(\dot{E}_{\rm in}\) is the total energy into the system from the hot source. As typically defined, there are many practical difficulties in constructing a heat engine that can achieve this limit using heat transfer, including the need for an infinite heat source and heat sink that do not change their temperature as the machine operates. It also assumes that one can bring a component of the machine to exactly the temperature of the source or sink--in practice this is difficult or impossible to achieve because heat transfer takes finite time. The Carnot limit is thus technically only achieved for infinitesimal temperature differences, and the work is extracted infinitely slowly. For heat engines using a gas or fluid with finite heat capacity exchanging heat with infinite thermal baths acting as pumps and sinks, the ultimate efficiency limit at maximum power is the _endoreversible_ limit (Curzon & Ahlborn, 1975): \[\eta_{\rm endoreversible}=1-\sqrt{\frac{T_{c}}{T_{h}}} \tag{2}\] This limit, somewhat surprisingly, is independent of the details of how quickly heat transfer can take place. A commonly seen expression for thermodynamic efficiency in the limit where radiation at temperature \(T_{h}\) is being used as the heat source and heat is radiated away as radiation with temperature \(T_{c}\), is that of Petela (1964): \[\eta_{\rm radiation}=1-\frac{4}{3}\frac{T_{c}}{T_{h}}+\frac{1}{3}\left(\frac{T_{c}}{T_{h}}\right)^{4} \tag{3}\] This form (equation 3.4 in Landsberg & Tonge, 1980, and repeated in other references) assumes that the radiation is isotropic--that is, it assumes the flux on the absorber is \(\sigma T_{h}^{4}\) and the flux out of the radiator is \(\sigma T_{c}^{4}\). It is thus inappropriate for situations where the incoming radiation comes from a small area on the sky, as with solar panels. It is also inappropriate for situations in which energy cannot be exchanged with the environment, since the radiative fluxes in and out do not balance against the work done (i.e. it does not assume \(\dot{Q}=0\) in the Landsberg formalism below). ### Thermodynamics of Radiation When the source and sink of energy in the system is blackbody radiation, not an infinite-heat-capacity thermal bath at fixed \(T\), we must account for the entropy of that radiation with care. Badescu (2014) and Buddhiraju et al. (2018) both present treatments of this case we can draw from.
For radiation, the entropy per time per unit surface area escaping from a blackbody is given by4 Footnote 4: Some sources erroneously claim the entropy flux of blackbody radiation is simply \(\sigma T^{3}\), but this ignores the fact that radiant energy has an inherent pressure that adds an extra factor of 4/3. See Wu & Liu (2010). \[\dot{s}=\frac{4}{3}\sigma T^{3} \tag{4}\] and the energy flux is \[F=\sigma T^{4} \tag{5}\] Significantly, the ratio \(\dot{s}T/F=4/3\), a coefficient that is absent from treatments of efficiencies with respect to heat exchange with thermal baths. This will have important consequences for our derived limits. ### The Landsberg Limit and Formalism In a foundational paper, Landsberg and Tonge (1980) described the absolute limits of thermodynamic energy conversion of radiation, now referred to as the Landsberg limit. In their formalism, a machine converts energy with two inputs and four outputs: * Energy flux into a system, \(\dot{E}_{p}\), where \(p\) stands for "pump." We will refer to this quantity as \(\dot{E}_{\rm in}\). * Entropy flux into a system, \(\dot{S}_{p}\) We will refer to this quantity as \(\dot{S}_{\rm in}\). * Energy flux into a sink, \(\dot{E}_{s}\). We will refer to this quantity as \(\dot{E}_{\rm out}\). * Entropy flux into a sink, \(\dot{S}_{s}\). We will refer to this quantity as \(\dot{S}_{\rm out}\). * \(\dot{Q}\) the heat flux out of the system into the ambient environment. * \(\dot{W}\) the rate of work done by the system, that is, rate of energy out of the system with no corresponding entropy flux. The machine itself has four properties: * \(\dot{E}\) the increase in internal energy with time. * \(\dot{S}\) the increase in internal entropy with time. * \(T\) the temperature on the boundaries of the machine. * \(\dot{S}_{g}\) the rate of entropy generated by the machine. Global energy conservation is enforced by \[\dot{E}_{\rm out}=\dot{E}_{\rm in}-\dot{E}-\dot{Q}-\dot{W} \tag{6}\] and entropy as accounted for with the identity \[\dot{S}_{\rm out}=\dot{S}_{\rm in}-\dot{S}-\dot{Q}/T+\dot{S}_{g} \tag{7}\] These are shown in Figure 1. This figure is a general case of all similar figures later in this work. Note that the Landsberg formalism is a generalization of calculations that produce the Carnot and radiation limits above, not a contradiction of them. ### Optical Circulators and the Landsberg Limit The practical limits of power extraction from radiation are not clear. Many treatments begin with the premise that the incoming radiation must warm an intermediate blackbody to power a Carnot engine, which creates an inefficiency. In addition, typically when a surface absorbs radiation and comes to some temperature it also emits radiation back towards the source, and a radiator not only disposes of waste heat but absorbs radiation from its environment. Both effects limit the ability to re-use waste heat at a lower temperature, and to efficiently cool. These effects provide a practical limit, typically below the Carnot limit, for many engine designs using radiation. Buddhiraju et al. (2018) (Figure 6) and Li et al. (2020) (Figure 4h) describe schemes using optical circulators to operate at the full Carnot efficiency between two radiation sources that avoids these inefficiencies. The idea is that circulators violate Lorentz reciprocity, which applies to most materials and says that the propagation of light through a system is symmetric with respect to direction of travel. 
A circulator exploits polarization to differentiate between the two directions. In a three port circulator, if light enters port A and exits port B, light entering port B will not exit port A as one would expect under Lorentz reciprocity, but instead exit via port C (and light entering port C emerges from port A). This allows the system to collect light via port A without necessarily returning light via port A, overcoming many practical limitations of heat engines and allowing one to approach the Landsberg limit. In their scheme, a set of circulators in series each feed a set of Carnot engines working at lower and lower temperatures. The first engine operates at \(T_{h}\), the temperature of the incoming radiation. The waste heat from the engine is then passed to another at slightly lower temperature, and another and so on, until the lowest possible temperature for the system is reached. For their system perfectly coupled to the cold radiation sink, this is the cold temperature \(T_{l}\). By increasing the number of steps between the hottest and coldest temperatures, one reduces the inefficiencies that come from extracting work at finite temperature differences and, in the limit of a very large number of steps, one achieves the Landsberg limit. We should also keep in mind that when working with photons as an energy source instead of heat, we might not need to first convert photons to thermal energy and then to electricity or other forms of work; other possibilities may emerge depending on the kind of work to be done. For instance, in photosynthesis photons interact directly with the molecules responsible for plant metabolism; similarly photons might directly perform the activities in question, and thus avoid practical limitations of real heat engines. In what follows, we will therefore adopt the schemes of Buddhiraju et al. (2018) and Li et al. (2020) as proofs of principle that we can ignore issues like feedback and intermediate absorbers when computing limits of Dyson spheres. We will check the validity of this assumption by generalizing our treatment to include other efficiency laws, including the endoreversible case. Figure 1: Schematic after Figure 1 of Landsberg and Tonge (1980) showing the inputs and outputs of energy and entropy in a machine. In that work, inputs and outputs included a “pump” and a “sink”; we use the terms “in” and “out”. In this work, we apply this formalism to a shell or swarm of material at a distance \(R\) from a star. Terms not used in this work because they are zero for Dyson spheres in steady state are in gray. \(L\) is implicitly the luminosity of the star, but in systems with feedback or considering incoming radiation from space it may be higher than this. \(\dot{Q}=0\) because there is no ambient environment to share heat with except via radiation, already accounted for in \(\dot{E}_{\rm out}\). \(\dot{S}_{g}\) will be nonzero for Dyson spheres doing computation, dissipative activities, or working below the Landsberg limit, and \(T\) is its outgoing radiation temperature. \(\dot{W}\) will be zero unless the sphere is emitting low-entropy radiation, as with a radio beacon. In steady-state, \(\dot{E}_{\rm in}=\dot{E}_{\rm out}+\dot{W}=L\), \(\dot{E}=0\), and \(\dot{S}=0\). All similar figures in this work are special cases of this master figure, in some cases with \(\dot{E}_{\rm out,1}+\dot{E}_{\rm out,2}=\dot{E}_{\rm out}\) and \(\dot{S}_{\rm out,1}+\dot{S}_{\rm out,2}=\dot{S}_{\rm out}\). 
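Before applying this formalism to Dyson spheres, it may help to see how different the limits of Section 2.1 are in practice. The following minimal Python sketch (an illustrative aid added here, not part of the original analysis) evaluates the Carnot, endoreversible, and Petela expressions of Eqs. 1-3 for an assumed 5772 K source and a few example sink temperatures.

```python
# Illustrative comparison of the efficiency limits in Eqs. 1-3.
# The source and sink temperatures below are example values only.

def eta_carnot(t_c, t_h):
    return 1.0 - t_c / t_h                      # Eq. 1

def eta_endoreversible(t_c, t_h):
    return 1.0 - (t_c / t_h) ** 0.5             # Eq. 2 (Curzon & Ahlborn 1975)

def eta_petela(t_c, t_h):
    # Eq. 3: isotropic radiation as both source and sink (Petela 1964)
    x = t_c / t_h
    return 1.0 - (4.0 / 3.0) * x + (1.0 / 3.0) * x ** 4

T_h = 5772.0  # K, roughly the solar effective temperature (assumed)
for T_c in (3000.0, 1000.0, 300.0, 50.0):
    print(f"T_c = {T_c:6.0f} K:  Carnot {eta_carnot(T_c, T_h):.3f}  "
          f"endoreversible {eta_endoreversible(T_c, T_h):.3f}  "
          f"Petela {eta_petela(T_c, T_h):.3f}")
```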
## 3 Application to Dyson Spheres ### Satellites and Dyson Spheres Our purpose is to apply the Landsberg formalism properly to the problem of a Dyson sphere. Dyson spheres have some special properties that make many calculations for work done via radiation in the literature inappropriate. Four of special relevance are: * Dyson spheres that extract work and perform it locally, for instance if they are doing computation or maintaining biospheres, must eventually radiate their energy away, raising the temperature of their radiators, and resulting in no work extraction \(\dot{W}\). This complicates our efforts to define and calculate an efficiency for them. * Dyson spheres use radiation for their input energy and output waste heat. We thus need to use the fluxes given in Section 2.2. * Dyson spheres accept radiation coming from a finite source, i.e. from a narrow range of solid angle, and not isotropically.5 Footnote 5: This is different from a “dilute” source of radiation, where blackbody radiation in a given solid angle is at a lower intensity than given by the Planck law by a frequency-independent factor. In both cases less light arrives at a surface, but for different reasons: for “dilute” radiation, it arrives from all directions but has been attenuated, as by a gray absorbing medium, and for the case this paper is concerned with it arrives at the full Planck intensity but over a limited range of solid angle. The thermodynamic difference is in the amount of entropy contained in the radiation: Dilute radiation distributes fewer photons in the same number of modes (directions and frequencies), and since the specific entropy is calculated from the number of ways \(M\) photons can occupy \(N\) modes via the ratio \(M/N\), reducing \(M\) with fixed \(N\) results in a different specific entropy. For radiation restricted to a solid angle \(\Omega\), both the number of photons and the number of modes are reduced by \(\Omega/4\pi\), so the ratio \(M/N\) is fixed, and the blackbody expression for specific entropy holds. See Landsberg and Tonge (1979); Wu and Liu (2010) for a discussion of how to treat dilute radiation. * The Dyson sphere-star system must conserve energy flux globally. In particular, Dyson spheres can only dispose of energy outwards, not back onto the star or inwards to their own interiors. Energy sent back to the star will have implications for stellar feedback and potentially the entropy management of the sphere, but will to first order not affect the global energy budget of the system. In this work, we will presume that Dyson spheres are composed of a large swarm of satellites, each presenting a flat "solar panel" towards the star and a radiator away from it. We make no assumptions about the nature of the "solar panel" except that it collects stellar radiation to perform some sorts of activities, and shares a cross-sectional area with the radiator. We assume the radiator emits radiation as a blackbody of temperature \(T\). We will model this swarm as being in a thin sphere around the star at radius \(R\) and area \(4\pi R^{2}\), as if it were a monolithic shell, which would not be stable (Wright, 2020). Later we will check to see how moving to a more realistic swarm model changes our answers. We will assume for this work that the star radiates as a blackbody with temperature \(T_{*}\) and that it has radius \(R_{*}\), and luminosity \(L=4\pi R_{*}^{2}\sigma T_{*}^{4}\).
Later, when considering feedback onto the star we will briefly distinguish between the effective temperature \(T_{\rm eff}\) defined by the luminosity and radiation, and the blackbody temperature of the surface \(T_{*}\). Dyson spheres cannot accumulate significant amounts of energy. As Wright (2020) discusses, the luminosities involved are so large that the internal energy of a Dyson sphere that attempts to store energy will quickly become extremely and impossibly high, causing them to become chemically and gravitationally unbound. If Dyson spheres are long-lived, they must be in steady state, and radiate away all of the energy they collect. We will thus assume steady state, and that energy is conserved globally. In terms of the Landsberg formalism, this means we can write: \[\dot{Q} = 0 \tag{8}\] \[\dot{E} = 0 \tag{9}\] \[\dot{S} = 0 \tag{10}\] \[\dot{E}_{\rm in} = \dot{E}_{\rm out}+\dot{W} \tag{11}\] \(\dot{Q}\) is zero because there is no ambient environment to which heat can be transferred except deep space, and we already account for the transfer of energy and entropy to deep space via \(\dot{E}_{\rm out}\) and \(\dot{S}_{\rm out}\). Note that the radiation from the star is given by the blackbody law, but it only arrives from a small range of solid angles when it gets to the sphere. We account for this by interpreting terms like \(\dot{E}\) to have units of power and not flux--that is, we explicitly account for the area of our collectors and radiators. Note that this differs from many treatments, including Landsberg & Tonge (1980), which somewhat confusingly and tacitly ignore this distinction, a choice that is equivalent to assuming all radiation fields are isotropic (i.e. incoming radiation arrives at a surface uniformly in all directions). ### Circulators for Dyson Spheres We next need to consider all of the radiation into and out of the Dyson sphere from both directions. For simplicity we will ignore in this work the effects of radiation from deep space and presume it is effectively zero, but in practice for very large spheres it sets an outer limit to the size and efficiency of a sphere. More important will be radiation from the inside surface of a sphere. This radiation will mostly land on the inside of the sphere that radiated it, for no net effect, but some will be returned to the star or an inner sphere. This will alter the energy balance of the star and sphere, and some treatments account for this feedback explicitly (e.g. Wright, 2020; Huston & Wright, 2022). However, since we are looking at the outer limits of what is possible, we can invoke circulators to ignore this effect, as described in Section 2.4. Consider that the shell makes use of three-port circulators, with Port I pointing inward, Port O pointing outward, and Port E connecting to the work extractor (and engine). The circulator can circulate radiation in the cycle \(O\to I\to E\to O\). Doing so would mean each shell would "see," when looking outward, the interstellar radiation field, which it would pass along inwards without absorbing it, and would accept radiation from the inner shell moving outward without returning any radiation from its own emissions. Figures 2 and 3 show the scheme as a worked example of two nested spheres doing different kinds of activities, as described in the next section. Figure 2: Schematic illustrating how to control the flow of radiant energy in a system with one or more Dyson spheres such that there is no radiative feedback inwards.
Circulators have three ports, pointing inward (“I”), outward (“O”), and towards the work extractor / engines in each sphere (“E”). Power from deep space \(L_{\rm space}\) is passed inward by each sphere, striking either the outside of the next smallest sphere (at the next port ”O”, paths 2 and 3) or the interior surface of the sphere that passed it on (at its own port “I”, paths 2a and 3a). Figure 3 shows the 2D geometry corresponding to this scheme. We have ignored the small contribution of \(L_{\rm space}\) and its entropy to the function of the engines except to show its path via small arrows for completeness, but in principle it adds to the outwards energy flow. Here we show a specific worked example involving two nested spheres doing different activities, but the scheme generalizes. This demonstrates that our simple treatment in this paper that ignores ingoing radiation from the spheres is a physically possible limit. Putting this together with Section 2.2, and ignoring the very small amount power and entropy from deep space, we therefore have for a single Dyson sphere around a star \[\dot{E}_{\rm in} = 4\pi R_{*}^{2}\sigma T_{*}^{4} \tag{12}\] \[\dot{S}_{\rm in} = 4\pi R_{*}^{2}\frac{4}{3}\sigma T_{*}^{3}\] (13) \[\dot{E}_{\rm out} = 4\pi R^{2}\sigma T^{4}\] (14) \[\dot{S}_{\rm out} = 4\pi R^{2}\frac{4}{3}\sigma T^{3} \tag{15}\] All of these are shown in Figure 1. Figure 2 has a superficial similarity to the schemes of Buddhiraju et al. (2018) (Figure 6) and Li et al. (2020) for achieving the Landsberg limit with radiation (because it was inspired by them), but it is not identical. First, if we have nested shells they will generally be widely spaced, with large temperature differences among them. Secondly, we must account explicitly for the geometry of spheres and very different radiating areas of our components, specifically with respect to the path of light from deep space, as shown in Figure 3. Finally, because we have finite radiating area, we cannot operate at the Carnot efficiency against the cold radiation of space (i.e. \(T_{0}\) in Figure 2), but only against the radiation temperature of our sphere. ### Three Kinds of Dyson Sphere Stellar Engine Activities In order to make use of this formalism, we need to define what we mean by work and how we wish to quantify the activities the Dyson sphere can do. There are three options we will consider: 1. **Computation:** We can imagine the sphere extracts electrical energy from starlight, uses it to run a computer to do a calculation, and then this computer disposes of the energy as heat. We would thus measure the efficiency of such a computer not based on the energy it consumes, but on the rate of computations it can perform, \(r\). (Our analysis will be general, independent of the actual internal mechanism used to power the computer). 2. **Dissipative Activities:** Most energy use is dissipative. Consider that nearly all work done by technology on Earth--the majority of which is not computation--ultimately dissipates into heat. Fossil fuels used for transportation are temporarily converted to kinetic energy, but all machines suffer friction which converts that energy to heat. This is also true of biological activity: plants convert solar energy into chemical energy, animals consume plants and use this energy in their metabolisms, and in all cases all of this energy is ultimately carried Figure 3: Schematic showing the correspondence between the numbered paths of flow in Figure 2 and the geometry of the Dyson sphere. 
away as heat into the environment. In this case, which is a generalization of computation, there will be some sort of internal rate of power put to use, \(\dot{W}_{\rm internal}\), before it cascades into waste heat. Here, "internal" means it is internal _to the Dyson sphere_; we will conceptually have to define part of the sphere to be the work extractor, from which work \(\dot{W}_{\rm internal}\) leaves, even if work never leaves the sphere itself. The distinction between cases 1 and 2 is purely in terms of how we will measure the efficiency of the sphere. In both cases, since no work leaves the sphere, with respect to Figure 1 we will write \[\dot{W}=0\] (16) 3. **Traditional Work:** Dyson spheres might convert starlight into a low entropy form that leaves the sphere, for instance as a strong, coherent radio signal. This case will track the usual calculations for thermodynamic efficiency best, since they are usually concerned with the rate of work \(\dot{W}\) that leaves the system, which is how we will measure our efficiencies in this third case. In this case, the entire sphere is a work extractor. ## 4 Optimal Efficiencies for Dyson Spheres ### Computation In a computation, each binary logical operation requires the creation of \(S=k\ln{(2)}\) of entropy (from erasure of memory in the system).6 The Landauer limit is derived from this fact, stating that one must expel at least \(TS=kT\ln{(2)}\) of energy as heat after performing a binary logical operation. Footnote 6: Reversible computing _might_ be able to overcome this limit, but even if a Dyson sphere computer performs such calculations, it will still need to overcome some rate of error generation, generating entropy equal to \(k\ln{(2)}\) per bit. Either way, the computer is limited in its computational ability by its ability to dispose of this entropy. An ideal extractor of computation from light would work something like this. A machine receives a very small amount of radiant power \(\delta\dot{E}\) and does \(\delta\dot{E}/(kT\ln{(2)})\) calculations per unit time with it, generating \(\delta\dot{E}\) in waste heat with \(\delta\dot{S}=\delta\dot{E}/T\) entropy per unit time. This heat is radiated away, causing its radiator to increase its temperature such that its radiant power output \(P=A\sigma T^{4}\) increases by \(\delta\dot{E}\), where \(A\) is its area. Differentially, we have \(\delta P=4A\sigma T^{3}\delta T=\delta\dot{E}\). We can then integrate the total effect of an arbitrary, macroscopic amount of energy flux \(\dot{E}\), starting with a very cold radiator. The total energy flux through the system is then \[\dot{E}_{\rm out} = \int_{0}^{\dot{E}}\delta\dot{E} \tag{17}\] \[= \int_{0}^{T}4A\sigma T^{3}\delta T\] (18) \[= A\sigma T^{4} \tag{19}\] as it must be, and the total entropy emitted is \[\dot{S}_{\rm out} = \int_{0}^{\dot{E}}\frac{\delta\dot{E}}{T} \tag{20}\] \[= \int_{0}^{T}4A\sigma T^{2}\delta T\] (21) \[= \frac{4}{3}A\sigma T^{3} \tag{22}\] as it must be. The total number of calculations done in this case is \[r=\frac{\dot{S}_{\rm out}}{k\ln{(2)}}=\frac{4}{3}\frac{L}{kT\ln{(2)}} \tag{23}\] where \(L\) is the luminosity of the star. The factor of \(\frac{4}{3}\) is not a violation of the Landauer limit but arises from the fact that the Landauer limit comes from the thermodynamic definition of heat and entropy, \(dQ=TdS\), which is a _differential_ relationship--it does not imply that \(Q/S=T\) for macroscopic amounts of heat, so we need not necessarily spend \(kT\ln{(2)}\) per calculation in bulk. 
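To get a feel for the scale implied by Eq. 23, the following minimal Python sketch evaluates the Landauer-limit rate for an assumed example: the solar luminosity disposed of at a few radiator temperatures. The constants and temperatures are illustrative choices, not values taken from the text.

```python
import math

# Illustrative evaluation of Eq. 23: r = (4/3) L / (k T ln 2).
# L_sun and the radiator temperatures are assumed example values.

k_B = 1.380649e-23   # J / K
L_sun = 3.828e26     # W

def landauer_rate(L, T):
    """Bit operations per second for power L radiated at temperature T (Eq. 23)."""
    return (4.0 / 3.0) * L / (k_B * T * math.log(2))

for T in (1000.0, 300.0, 50.0):
    print(f"T = {T:6.0f} K: r ~ {landauer_rate(L_sun, T):.2e} operations / s")
```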
We can think of some operations generating heat that is radiated away in the low temperature modes of the blackbody radiation field, having a lower effective \(T\) than the surface of the sphere. We can now apply Landsberg's formalism properly to the case of a Dyson sphere performing calculations as illustrated in Figure 4. By energy balance, because \(\dot{W}=0\) we have \[\dot{E}_{\rm in}=L=4\pi R_{*}^{2}\sigma T_{*}^{4}=\dot{E}_{\rm out}=4\pi R^{2}\sigma T^{4} \tag{24}\] Since we consider \(L\) and \(R\) to be given, this uniquely determines \(T\):7 Footnote 7: We could also fix \(T\) to determine \(R\), of course. \[T=T_{*}\sqrt{\frac{R_{*}}{R}} \tag{25}\] With \(T\) determined, we can then examine entropy balance to determine \(\dot{S}_{g}\): \[\dot{S}_{g}=\dot{S}_{\rm out}-\dot{S}_{\rm in} \tag{26}\] yielding \[r = \frac{4}{3}\frac{L}{kT\ln{(2)}}\left[1-\frac{T}{T_{*}}\right] \tag{27}\] \[= \frac{4}{3}\frac{L}{kT_{*}\ln{(2)}}\left[\sqrt{\frac{R}{R_{*}}}-1\right] \tag{28}\] The factor in square brackets in Eq. 27 is the Carnot efficiency, so we may _heuristically_ say that the sphere extracts work at the Carnot efficiency and uses it to do calculations at a cost of \(\frac{3}{4}kT\ln{(2)}\) per calculation, appropriate for radiation. Note that the low temperature in both these calculations is _not_ the ambient radiation around the sphere (i.e. the interstellar radiation field, whose small contribution we have ignored), but the surface temperature of the machine. This is enforced by energy conservation, the steady state assumption, and the area of the sphere. ### Dissipative Activities We can determine the amount of power a Dyson sphere can devote to dissipative activities by heuristically breaking the system into two pieces: a work extractor generating \(\dot{W}_{\rm internal}\) of work per unit time in the usual thermodynamic sense, and an "engine" that makes use of this work, which ultimately dissipates as heat. We imagine the sphere dedicates only a fraction \(f\) of its radiating surface to passing along entropy received from the star as waste heat from the extractor, and then sends the resulting work across its boundary to an engine which eventually radiates the energy away from the engine using the remaining radiator fraction \((1-f)\). This is shown schematically in Figure 5. The efficiency of the sphere can now be expressed as the efficiency of the work extractor \(\eta=\dot{W}_{\rm internal}/\dot{E}_{\rm in}\). Importantly, this scheme is simply a more detailed accounting of energy flow than that in Figure 1, and could also be interpreted as a way to implement Figure 4. An important consequence of this scheme is that the radiators for the work extractor and engine have a common temperature.8 Footnote 8: Changing this by, for instance, making \(f\) larger amounts to an intermediate case between dissipative activities and traditional activities. For \(f=1\), we recover the traditional activity limit described in the next section.
At maximum efficiency the work extractor produces no entropy, so \[\dot{S}_{\rm in}=\dot{S}_{\rm out,1} \tag{29}\] We also have from energy balance that \[\dot{E}_{\rm in}=\dot{E}_{\rm out,1}+\dot{E}_{\rm out,2}=L \tag{30}\] Together with the values shown in Figure 5, these two equations allow us to solve for \(f\) and \(T\) \[T = T_{*}\sqrt{\frac{R_{*}}{R}} \tag{31}\] \[f = \frac{T}{T_{*}}=\sqrt{\frac{R_{*}}{R}} \tag{32}\] and so by simple energy balance we have \[\dot{W}_{\rm internal}=\dot{E}_{\rm in}-\dot{E}_{\rm out,1}=(1-f)L \tag{33}\] yielding the familiar Carnot efficiency \[\eta=\frac{\dot{W}}{\dot{E}_{\rm in}}=1-\frac{T}{T_{*}}=1-\sqrt{\frac{R_{*}}{R}} \tag{34}\] Using radiation as a source of energy and a sink of waste heat thus allows a work extractor to operate at the Carnot efficiency, consistent with our result for computational rate. ### Traditional Work (That Leaves the Sphere) In the case where the purpose of the sphere is to do work that leaves the sphere, we have a different relation. This work might go out as coherent radiation as in a radio beacon, for instance.9 Footnote 9: This case would also cover something exotic like energy-to-mass conversion (antimatter might be a worthwhile output product). In this case the energy merely changes form, it does not “leave” the sphere, but the essence is the same: we have a colder sphere, because the extracted work is not warming the radiators. In this case, we have the same schematic as Figure 5 except that the "engine" is not present, the energy \(\dot{W}\) leaves the system entirely, and \(f=1\). Operating at maximum efficiency, we have \(\dot{S}_{\rm in}=\dot{S}_{\rm out}\) We then have for the temperature of the outgoing radiation and efficiency \[T = T_{*}\left(\frac{R_{*}}{R}\right)^{\frac{2}{3}} \tag{35}\] \[\dot{W} = L\left(1-\frac{T}{T_{*}}\right) \tag{36}\] Figure 4: Schematic for a Dyson sphere doing pure computation. \(\dot{W}=0\) and is not shown because all incoming energy is used for computation and expelled as waste heat. At maximum efficiency, all of the entropy added to the incoming energy is generated by computation at rate \(r\) at the Landauer limit, giving rise to an internal entropy generation rate \(\dot{S}_{g}\). \(T\) is determined from energy balance, given \(R\). From this, \(\dot{S}_{g}\) can be computed from entropy balance, yielding \(r\). yielding efficiency \[\eta=1-\frac{T}{T_{*}}=1-\left(\frac{R_{*}}{R}\right)^{\frac{2}{3}} \tag{37}\] That is, it satisfies the Carnot efficiency but at a lower temperature (and, so, higher efficiency) than the dissipative case, going as the \(\frac{2}{3}\) power of \(R_{*}/R\) of the stellar temperature instead of the square root. Figure 6 illustrates the scheme. We can now see why the Petela efficiency, Eq. 3, does not apply to Dyson spheres. Badescu (2023) derives a result for this equation in the case where incoming radiation occupies a range of solid angle with geometric factor \(f_{\rm in}\) and outgoing radiation a has a geometric factor \(f_{\rm out}\) (where I have simplified the notation): \[\eta_{\rm radiation}=1-\frac{4}{3}\frac{T}{T_{*}}+\frac{1}{3}\frac{f_{\rm in }}{f_{\rm out}}\left(\frac{T}{T_{*}}\right)^{4} \tag{38}\] By the definition of the geometric factor \(f\equiv\int_{\rm source}\cos\theta d\Omega\)(Badescu, 2023, Eq. 16), a sphere of radius \(R_{*}\) overhead at distance \(R\) has \(f_{\rm in}=\pi(R_{*}/R)^{2}\), and for our isotropic outward radiation, we have \(f_{\rm out}=\pi\). From Eq. 
35, which enforces \(\dot{Q}=0\), we have \(f_{\rm in}=(T_{*}/T)^{3}\), with which substitution Eq. 38 reduces to the Carnot efficiency, consistent with Eq. 36 and Badescu (2023) Table 7, Eq. 3. There is thus no contradiction between our results and those of Petela (1964). ### Endoreversible Limits As an example of a "practical" solution to energy extraction for sunlight, we will also imagine that an engine would operate below the Landsberg limit because it needs to convert incoming starlight to heat to be exchanged in a traditional heat engine, which can only be done with some finite rate, yielding an efficiency given by, for instance, Figure 5: Schematic showing the notional two-part machine used to do dissipative mechanical activities. The energy extractor generates work \(\dot{W}_{\rm internal}\), which is passed on to an “engine,” generating entropy at rate \(\dot{S}_{g}\). Importantly, the two systems share radiators (the extractor uses a fraction \(f\) of them, the engine the rest) so they share a common temperature. The two outputs combine to result in the same scheme as shown in Figures 1 and 4. Global energy balance determines \(T\) via \(L=4\pi R^{2}\sigma T^{4}\), and entropy and energy balance together in the work extractor alone determine \(\dot{W}_{\rm internal}\). Eq. 2. Under this concession, we imagine that this conversion generates some entropy that must be expelled in addition to the ordinary waste heat and any heat from dissipative activities. We consider the notional two-part, extractor-engine scheme in the endoreversible limit in Figure 7. Here, \(T\) has the same values as in the dissipative and calculation cases in the Landsberg limit. For computation, we have \[r = \frac{\dot{S}_{g,2}}{k\ln\left(2\right)} \tag{39}\] \[= \frac{4}{3}\frac{L}{k\ln\left(2\right)}\left(\frac{1}{T}-\frac{1}{\sqrt{TT_{*}}}\right) \tag{40}\] \[= \frac{4}{3}\frac{L}{kT_{*}\ln\left(2\right)}\left(\sqrt{\frac{R}{R_{*}}}-\left(\frac{R}{R_{*}}\right)^{\frac{1}{4}}\right) \tag{41}\] For dissipative activities we have \[\eta=1-\sqrt{\frac{T}{T_{*}}}=1-\left(\frac{R_{*}}{R}\right)^{\frac{1}{4}} \tag{42}\] This situation for traditional work is different than in the Landsberg limit, because we have extra entropy generated in the work extractor, \(\dot{S}_{g,1}\). Since \(\dot{W}_{\rm internal}\) is now specified, we instead determine the temperature via energy balance \[\dot{E}_{\rm out}=L-\dot{W}_{\rm internal}=L\sqrt{\frac{T}{T_{*}}} \tag{43}\] \[4\pi R^{2}\sigma T^{4}=4\pi R_{*}^{2}\sigma T_{*}^{4}\sqrt{\frac{T}{T_{*}}} \tag{44}\] \[T=T_{*}\left(\frac{R_{*}}{R}\right)^{\frac{4}{7}} \tag{45}\] Figure 6: Schematic for work that leaves the sphere, for instance as coherent radio emission (having negligible entropy). In this case, entropy balance determines \(T\), from which energy balance determines \(\dot{W}\). yielding the efficiency \[\eta=1-\sqrt{\frac{T}{T_{*}}}=1-\left(\frac{R_{*}}{R}\right)^{\frac{2}{7}} \tag{46}\] Fig. 9 shows the efficiencies in the four cases (Eqs. 34, 37, 42, and 46) as a function of total shell area. ### Swarms of Material Real Dyson spheres will likely be composed of swarms of material, potentially at a large range of distances. The optimal arrangement of these will depend on many details we are unlikely to be able to guess, but we can at least get a sense of how those details matter by bounding the problem. We will assume the total cross-sectional area of a large number of individual satellites is \(A\).
In the ideal case, elements of a swarm will never shadow each other, and as their number increases they will need to make ever more elaborate maneuvers to avoid this. In the limit that they capture all of the light from the star, we would then have \(A=4\pi R^{2}\), but this may be impossible to achieve. Another bound might be to consider that the swarm elements avoid collisions but do not avoid shadowing, and so regularly block each other's views. For large numbers of small satellites in random orbits, we might expect this to result in the swarm having optical depth \[\tau=\frac{A}{4\pi R^{2}} \tag{47}\] where \(A\) is the total cross-sectional area of the elements, and \(R\) is their typical orbital distance. Note this is an inappropriate approximation for shells, which we assume have \(\tau=\infty\). In this case, the total energy collected \(\dot{E}_{\rm in}\) will be lower than that we would expect for a partial shell of area \(A\) by a factor \[1-e^{-\tau} \tag{48}\] Figure 7: Schematic for endoreversible computation or dissipative activities in a Dyson sphere. In this case, some extra entropy \(\dot{S}_{g,1}\) is generated by the work extractor, since it operates below the Carnot limit. Here, energy balance and \(R\) determine \(T\) as usual via \(L=4\pi R^{2}\sigma T^{4}\) for dissipative activities and computation. For computation, the computational rate \(r\) is then determined from \(\dot{S}_{g,2}\), equal to \(\frac{4}{3}\dot{W}_{\rm internal}/T\). (For traditional work, the engine is not present and we replace \(\dot{W}_{\rm internal}\) with \(\dot{W}\), and then calculate \(T\) from \(L-\dot{W}=4\pi R^{2}\sigma T^{4}\).) as will our rate of computation or work. If we still compute efficiencies in terms of \(L\), this will decrease all of our efficiencies by that factor, as well. In the limit of a very small amount of mass--say, just a single satellite--this factor becomes \[1-\exp\left(-\frac{A}{4\pi R^{2}}\right)\rightarrow\frac{A}{4\pi R^{2}} \tag{49}\] ### Summary of Results for Dyson Spheres Summarizing these results, we can write in general that the efficiency of a Dyson sphere for work is \[\eta=(1-e^{-\tau})\left[1-\left(\frac{T}{T_{*}}\right)^{m}\right] \tag{50}\] where \(\tau\rightarrow\infty\) for a shell and \(\tau=A/4\pi R^{2}\) for swarms of total cross-sectional area \(A\); and where \(m=1\) in the Landsberg limit, and \(m=1/2\) in the endoreversible limit. We might imagine that other practical limitations yield some other value for \(m\) such that \(m<1\). Our relation for the temperature is \[T=T_{*}\left(\frac{R_{*}}{R}\right)^{n} \tag{51}\] where \(n=\frac{1}{2}\) for dissipative activities and computation, and \(n=2/(4-m)\) for traditional work (so, \(\frac{2}{3}\) in the Landsberg limit, and \(\frac{4}{7}\) in the endoreversible limit). Putting this together, the efficiency of a sphere for work in terms of \(R\) is \[\eta=(1-e^{-\tau})\left[1-\left(\frac{R_{*}}{R}\right)^{nm}\right] \tag{52}\] where the exponent \(nm\) ranges in our examples from \(\frac{1}{4}\) to \(\frac{2}{3}\). Finally, the rate of computation is \[r = \frac{4}{3}\frac{L}{kT\ln{(2)}}(1-e^{-\tau})\left[1-\left(\frac{T}{T_{*}}\right)^{m}\right] \tag{53}\] \[= \frac{4}{3}\frac{L}{kT_{*}\ln{(2)}}(1-e^{-\tau})\left[\sqrt{\frac{R}{R_{*}}}-\left(\frac{R}{R_{*}}\right)^{\frac{1-m}{2}}\right] \tag{54}\] where the exponent \((1-m)/2\) is zero in the Landsberg limit and \(\frac{1}{4}\) in the endoreversible limit.
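The summary relations above are simple enough to evaluate directly. The Python sketch below (illustrative only; the solar parameters and the 1 au example distance are assumptions) implements Eqs. 50-54 for a complete shell in both the Landsberg (\(m=1\)) and endoreversible (\(m=1/2\)) cases.

```python
import math

# Numerical sketch of the summary relations Eqs. 50-54 for a complete shell.
# Solar values and the 1 au example distance are assumed for illustration.

k_B, sigma = 1.380649e-23, 5.670374419e-8   # J/K, W m^-2 K^-4
R_star, T_star = 6.957e8, 5772.0            # m, K (assumed solar values)
L = 4 * math.pi * R_star**2 * sigma * T_star**4   # ~3.8e26 W

def shell_temperature(R, m, traditional=False):
    # Eq. 51: n = 1/2 for dissipative activities and computation,
    # n = 2/(4 - m) for traditional work
    n = 2.0 / (4.0 - m) if traditional else 0.5
    return T_star * (R_star / R) ** n

def efficiency(R, tau, m, traditional=False):
    # Eq. 50 (equivalently Eq. 52)
    T = shell_temperature(R, m, traditional)
    return (1 - math.exp(-tau)) * (1 - (T / T_star) ** m)

def computation_rate(R, tau, m):
    # Eq. 53: Landauer-limited computation, with T from the dissipative case
    T = shell_temperature(R, m)
    return (4 / 3) * L / (k_B * T * math.log(2)) \
        * (1 - math.exp(-tau)) * (1 - (T / T_star) ** m)

au, tau_shell = 1.495978707e11, math.inf    # complete shell: tau -> infinity
for m, label in ((1.0, "Landsberg"), (0.5, "endoreversible")):
    print(f"{label:15s} dissipative eta = {efficiency(au, tau_shell, m):.3f}, "
          f"traditional eta = {efficiency(au, tau_shell, m, traditional=True):.3f}, "
          f"r = {computation_rate(au, tau_shell, m):.2e} ops/s")
```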
## 5 Optimal Configurations Next, we turn to see if we can deduce any observational consequences of the various assumptions we have made. We will especially look for any results that are rather insensitive to our assumptions, as they might be general properties of Dyson spheres. ### Absence of Mass Constraints In all cases, we see that for a single shell, the most efficient configuration is one that maximizes \(R\) and minimizes \(T\), performing as much work or computation as possible on the luminosity \(L\). The more mass one has at one's disposal, the larger the sphere that can be built. There are strongly diminishing returns, however. Let us look just at the case of a shell doing dissipative activities in the Landsberg limit. A Dyson sphere at \(\sim\)1 au around a Sun-like star would have roughly \(\eta=1-T/T_{*}\approx 1-300/6000=95\%\) efficiency. This is already very close to unity; no amount of mass or sophistication of technology can improve this by an order of magnitude. Further, since the mass \(M\) required for a sphere presumably scales with its area \(A\), we have that the amount of "wasted" energy from not increasing the size of a sphere to be colder and more efficient scales as: \[1-\eta\propto R^{-\frac{1}{2}}\propto A^{-\frac{1}{4}}\propto M^{-\frac{1}{4}} \tag{55}\] This means that to capture half of the unused energy, one must increase the mass of the sphere by a factor of \(2^{4}=16\). The situation is not much better for computation. There, we have for large \(R\): \[r\propto\sqrt{R}\propto A^{\frac{1}{4}}\propto M^{\frac{1}{4}} \tag{56}\] So doubling the amount of mass increases the computation rate by a factor of \(2^{\frac{1}{4}}\) or around 20%. Thus, the efficiency of a sphere measured as work done (or computations performed) _per gram_ drops sharply with radius. Since presumably there is some cost to acquiring mass, with such strongly diminishing returns we might expect Dyson spheres to be reasonably warm, corresponding to efficiencies of tens of percent. ### Optimum Shell Radius for Fixed Surface Area (Mass) In the opposite limit, we might ask what the optimum distance is for a very small amount of mass, comprising, say, a single satellite of area \(A\). In this case, we do not have the entire luminosity of the star to work with, and we must balance proximity (which gives us more flux) with efficiency. In this case the total work done will be \[\dot{W}=L\frac{A}{4\pi R^{2}}\left[1-\left(\frac{R_{*}}{R}\right)^{nm}\right] \tag{57}\] This function is maximized at \(R=R_{*}\left(\frac{nm+2}{2}\right)^{\frac{1}{nm}}\), the coefficient of which for our values is \(\sim\)1.6, so \(R\) is extremely close to the star. The lesson is that higher fluxes trump higher efficiencies up to very high temperatures--nearly as hot as the star--and so for small amounts of mass the optimal configuration is as close to the star as the engine allows, perhaps at the limits where its components would begin to melt. As the mass (area) available grows, there is a balance between adding new elements at this minimum distance and avoiding shadowing. At some point, one will have enough mass that an optically thick shell is formed, and the best use of additional mass involves expanding the shell to lower its optical depth.
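As a quick numerical check of the single-satellite optimum quoted above, the following sketch evaluates \(R=R_{*}((nm+2)/2)^{1/nm}\) for the four \((n,m)\) combinations used in this paper and confirms it against a brute-force scan of Eq. 57. The grid and scan range are arbitrary illustrative choices.

```python
# Optimum orbital distance for a single small satellite (Eq. 57), checked
# against a brute-force scan.  The (n, m) pairs follow the summary in Section 4.

cases = {
    "dissipative, Landsberg":      (1.0 / 2.0, 1.0),
    "traditional, Landsberg":      (2.0 / 3.0, 1.0),
    "dissipative, endoreversible": (1.0 / 2.0, 0.5),
    "traditional, endoreversible": (4.0 / 7.0, 0.5),
}

def analytic_optimum(nm):
    # R / R_* that maximizes Eq. 57
    return ((nm + 2.0) / 2.0) ** (1.0 / nm)

def scanned_optimum(nm, grid=100000):
    # Brute-force maximum of x**-2 * (1 - x**-nm), with x = R / R_*
    best_x, best_val = 1.0, -1.0
    for i in range(1, grid):
        x = 1.0 + 3.0 * i / grid      # scan R/R_* from just above 1 to 4
        val = x ** -2 * (1.0 - x ** -nm)
        if val > best_val:
            best_x, best_val = x, val
    return best_x

for label, (n, m) in cases.items():
    nm = n * m
    print(f"{label:30s} R_opt/R_* = {analytic_optimum(nm):.2f} "
          f"(scan: {scanned_optimum(nm):.2f})")
```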
We then shift to maximizing the function \[\dot{W}=L\left[1-\exp\left(-\frac{A}{4\pi R^{2}}\right)\right]\left[1-\left(\frac{R_{*}}{R}\right)^{nm}\right] \tag{58}\] For computation, the optimum distance is somewhat different, being the maximum of equation 54, in particular the component \[\left[1-\exp\left(-\frac{A}{4\pi R^{2}}\right)\right]\left[\sqrt{\frac{R}{R_{*}}}-\left(\frac{R}{R_{*}}\right)^{\frac{1-m}{2}}\right] \tag{59}\] which is the computational rate in units of \(\frac{4}{3}L/(kT_{*}\ln{(2)})\), and which can be greater than 1. Both equations must be maximized numerically. The results are shown in Figure 8, compared with the radius of a complete shell of the same area, assuming \(R_{*}=R_{\odot}\). In both cases, the optimum distance for a swarm is quite similar to that of a shell for most distances--for dissipative activities and traditional work the distance is about 40% of the shell distance, and for computation about 70%. The right-hand side of that figure shows that for the swarm, significant shadowing takes place, with optimal optical depths of a few, depending on orbital distance. This might argue that we should not expect Dyson spheres to be "complete," but to provide a few magnitudes of gray extinction, regardless of their purpose. The differences in efficiency of work and computation for the swarm versus the shell are not great, as shown in Figure 9. In all cases, the efficiency does not vary by more than an order of magnitude for a huge range of total areas: there is very little gain per gram to be had once \(\sim\)10% of the starlight is being used, regardless of the assumptions we make. ### Multiple Shells So far we have optimized shells and swarms at a single distance, but there might be superior optima if the mass is spread out over some range of distances. This is especially true if there is benefit to nested shells, with outer shells making use of the waste heat from inner shells, as in the idea of a Matrioshka Brain. We first consider the case of two shells to see if inner shells offer any benefit over a single, outer shell. #### 5.3.1 Computation First, we consider the case of computation in the Landsberg limit. The inner shell, for which we will use the subscript \(1\), does computation at a rate \[r_{1}=\frac{4}{3}\frac{L}{kT_{1}\ln\left(2\right)}\left[1-\frac{T_{1}}{T_{*}}\right] \tag{60}\] Figure 8: _Left:_ Optimum distance for a swarm of random orbiters with fixed total surface area for dissipative activities or traditional work, and for calculations, in the Landsberg limit and with \(R_{*}=R_{\odot}\). The orbital distance of a shell of equal surface area is shown for reference. Values in the endoreversible limit are similar, and not shown. The optimum distance is largely insensitive to the kind of activities done. The solid shell radius is set by the surface area of a sphere. The random swarm would increasingly shadow itself at close distances but miss more flux at large distances; the optimum distance from the balance of these effects is typically about \(40\%\) of the distance it would have as a shell for work and dissipative activities, and \(70\%\) for computation. _Right:_ Optical depth of the swarms from the left panel, expressed as \(\tau=A/(4\pi R^{2})\), now including the endoreversible cases as dashed lines. This ratio is \(1\) for the shell (shown in purple), although shells are assumed to have infinite optical depth.
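The swarm optima quoted in Figure 8 can be reproduced approximately by maximizing Eqs. 58 and 59 directly. The sketch below does this in the Landsberg limit for an assumed total area equal to that of a complete shell at 1 au around a Sun-like star; the specific area and search range are illustrative assumptions.

```python
import math

# Approximate reproduction of the swarm optima of Figure 8: maximize Eq. 58
# (work / dissipative activities) and the bracketed part of Eq. 59
# (computation) over R, in the Landsberg limit.  Solar values and the choice
# A = 4 pi (1 au)^2 are illustrative assumptions.

R_star = 6.957e8                      # m
au = 1.495978707e11                   # m
A = 4 * math.pi * au**2               # total cross-sectional area of the swarm

def capture(R):
    tau = A / (4 * math.pi * R**2)    # Eq. 47
    return 1 - math.exp(-tau)         # Eq. 48

def work_objective(R, nm=0.5):        # Eq. 58 divided by L
    return capture(R) * (1 - (R_star / R) ** nm)

def comp_objective(R, m=1.0):         # bracketed component of Eq. 59
    return capture(R) * ((R / R_star) ** 0.5 - (R / R_star) ** ((1 - m) / 2))

def argmax(f, lo, hi, steps=20000):
    return max((lo + (hi - lo) * i / steps for i in range(1, steps)), key=f)

R_shell = math.sqrt(A / (4 * math.pi))    # radius of a complete shell of area A
for name, f in (("work/dissipative", work_objective), ("computation", comp_objective)):
    R_opt = argmax(f, 2 * R_star, 3 * au)
    print(f"{name}: optimum R is {R_opt / R_shell:.2f} of the equal-area shell radius")
```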
Figure 9: _Left:_ Efficiency of shells and swarms doing different kinds of activities in the Landsberg and endoreversible limits, assuming \(R_{*}=R_{\odot}\). Note the plot is semi-logarithmic, and that all gains beyond \(1\) au\({}^{2}\) in surface area are no more than a factor of \(2\). _Right:_ Computational rate of shells and swarms in units of \(\frac{4}{3}L/(kT_{*}\ln\left(2\right))\). For large areas, the total number of calculations scales very weakly as \(r\propto A^{\frac{1}{4}}\). The second shell accepts the same amount of luminosity, and uses it to perform calculations at the rate \[r_{2}=\frac{4}{3}\frac{L}{kT_{2}\ln{(2)}}\left[1-\frac{T_{2}}{T_{1}}\right] \tag{61}\] so the total rate is \(r_{1}+r_{2}\). At the Landsberg limit, this is \[r=r_{1}+r_{2}=\frac{4}{3}\frac{L}{kT_{2}\ln{(2)}}\left[1-\frac{T_{2}}{T_{*}}\right] \tag{62}\] which is exactly the rate we would have had without the inner shell. This makes sense: the total increase in entropy from the stellar surface to the outer shell is set only by their respective temperatures and radii--an outer shell that perfectly converts this entropy into computation is working at maximum efficiency. An inner shell might do some of these calculations and pass on the radiation to the outer shell to complete, but this merely changes the location of the increase in entropy from calculation. Dyson spheres at the Landsberg and Landauer limits are thus neutral with respect to where the calculations take place. If we imagine there are significant overhead costs to maintaining a sphere, such as for navigation, energy collection, and energy disposal, then the optimum configuration incurs these costs only in an outer shell, and does not bother with inner shells. We can generalize this result to include practical limitations, as given by the endoreversible limit, by extending our analysis to the general \(m<1\) case. We compute the difference in computational rates \(\Delta r\) of two shells versus one shell following the treatment above: \[\Delta r\propto\frac{1}{T_{1}}\left[1-\left(\frac{T_{1}}{T_{*}}\right)^{m}\right]+\frac{1}{T_{2}}\left[1-\left(\frac{T_{2}}{T_{1}}\right)^{m}\right]-\frac{1}{T_{2}}\left[1-\left(\frac{T_{2}}{T_{*}}\right)^{m}\right] \tag{63}\] \[=\frac{1}{T_{2}}\left[\frac{T_{2}}{T_{1}}-\frac{T_{2}}{T_{1}}\left(\frac{T_{1}}{T_{*}}\right)^{m}-\left(\frac{T_{2}}{T_{1}}\right)^{m}+\left(\frac{T_{2}}{T_{*}}\right)^{m}\right] \tag{64}\] \[=\frac{1}{T_{2}}\left[\left(\frac{T_{2}}{T_{1}}\right)^{m}-\left(\frac{T_{2}}{T_{*}}\right)^{m}\right]\left[\left(\frac{T_{2}}{T_{1}}\right)^{1-m}-1\right] \tag{65}\] The first term in brackets is never negative because \(m\geq 0\) and \(T_{1}<T_{*}\). The second term is never positive because \(T_{2}<T_{1}\) and \(m\leq 1\). When \(m=1\) in the Landsberg limit, we recover the result that \(\Delta r=0\), that is, there is no benefit or harm to inner spheres beyond additional overhead per sphere. For other values of \(m\), inner shells only decrease overall computational efficiency. Thus, there is no reason at the Landauer limit to build inner shells, regardless of the efficiency model we assume for the engines providing energy to the computers. Matrioshka brains are not computationally ideal.
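As a quick sanity check on Eqs. (63)-(65) (again a sketch, not part of the derivation), one can sample random shell temperatures and efficiency exponents and confirm that an inner shell never increases the computational rate, with equality only in the Landsberg limit:

```python
import numpy as np

rng = np.random.default_rng(0)

def delta_r(T_star, T1, T2, m):
    """Eq. (63): change in computation rate (up to a positive prefactor) from adding an inner shell."""
    return ((1.0 / T1) * (1.0 - (T1 / T_star)**m)
            + (1.0 / T2) * (1.0 - (T2 / T1)**m)
            - (1.0 / T2) * (1.0 - (T2 / T_star)**m))

worst = -np.inf
for _ in range(100_000):
    T_star = rng.uniform(3000.0, 10000.0)   # stellar temperature
    T1 = rng.uniform(0.02, 0.98) * T_star   # inner shell, cooler than the star
    T2 = rng.uniform(0.02, 0.98) * T1       # outer shell, cooler than the inner shell
    m = rng.uniform(0.0, 1.0)               # efficiency exponent between 0 and the Landsberg value
    worst = max(worst, delta_r(T_star, T1, T2, m))

print("largest sampled delta_r:", worst)                                 # never positive
print("Landsberg case (m = 1): ", delta_r(6000.0, 1000.0, 300.0, 1.0))   # zero to roundoff
```

This mirrors the algebraic argument: the product in Eq. (65) is non-positive for \(0\leq m\leq 1\).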
#### 5.3.2 Dissipative Activities For dissipative activities internal to the system, the two shells extract work \[\dot{W}_{\rm internal,1} = L\left[1-\left(\frac{T_{1}}{T_{*}}\right)^{m}\right] \tag{66}\] \[\dot{W}_{\rm internal,2} = L\left[1-\left(\frac{T_{2}}{T_{1}}\right)^{m}\right]\] (67) \[\dot{W}_{\rm internal} = \dot{W}_{\rm internal,1}+\dot{W}_{\rm internal,2}=L\left[2-\left( \frac{T_{1}}{T_{*}}\right)^{m}-\left(\frac{T_{2}}{T_{1}}\right)^{m}\right] \tag{68}\] which indicates an efficiency greater than one. Here, we see there can be a benefit to additional shells: each provides additional internal work that can be done, at efficiencies near unity. There is a limit, however. If we imagine an infinite number of shells between two temperatures \(T_{h}\) and \(T_{c}\), the total \(\dot{W}_{\rm internal}\) is \[\dot{W}_{\rm internal} = L\left[1-\left(\frac{T_{h}}{T_{*}}\right)^{m}+\int_{T_{c}}^{T_{ h}}\left(1-\left(\frac{T-dT}{T}\right)^{m}\right)\right] \tag{69}\] \[= L\left[1-\left(\frac{T_{h}}{T_{*}}\right)^{m}+m\int_{T_{c}}^{T_{h}} \frac{dT}{T}\right] \tag{70}\] \[= L\left[1-\left(\frac{T_{h}}{T_{*}}\right)^{m}+m\ln\left(\frac{T_ {h}}{T_{c}}\right)\right] \tag{71}\] If we guess at some extreme values to bound the utility of so may shells, \(T_{*}=6000\) K, \(T_{h}=1000\) K, \(T_{c}=10\) K, we have at the Landsberg limit (\(m=1\)) \[\dot{W}_{\rm internal}\approx L(0.83+4.6)\approx 5.4L \tag{72}\] Building five spheres with temperatures \(T_{i}=1,000\)K, \(300\) K, \(100\) K, \(30\) K, and \(10\) K yields \(\dot{W}_{\rm internal}=3.56L\), which is over half of the benefit of an infinite number of spheres. We thus see some benefit to a set of up to several nested spheres at very different temperatures, with diminishing returns, which decreases somewhat as \(m\) does. The optimal arrangement of these shells as a function of available mass is a complex problem that probably does not warrant too detailed a solution since we do not know what practical effects we are missing. The lesson is that for such activities at or near the Landsberg limit we might expect a swarm of material at many distances, with the outer layers exploiting the waste heat of the inner layers. This might also be true for computation if it occurs well above the Landauer limit in a manner where \(r\) is independent of temperature but proportional to \(\dot{W}_{\rm internal}\), and so might justify a Matrioshka brain. #### 5.3.3 Traditional Work at the Landsberg limit We look now to work that leaves the system. Computing the total work for the first shell we have: \[\dot{W}_{1} = L\left[1-\left(\frac{T_{1}}{T_{*}}\right)^{m}\right] \tag{73}\] \[\dot{W}_{2} = L\left(\frac{T_{1}}{T_{*}}\right)^{m}\left[1-\left(\frac{T_{2}}{ T_{1}}\right)^{m}\right] \tag{74}\] where the second shell can only access the remaining energy that has not left the system from the first shell. Summing these two, we have \[\dot{W}=\dot{W}_{1}+\dot{W}_{2}=L\left[1-\left(\frac{T_{2}}{T_{*}}\right)^{m}\right] \tag{75}\] which is again exactly what we would have had if we had not bothered with the first shell, regardless of the efficiency model we adopt. Dyson spheres are thus also neutral to where the work is performed, but simplicity might dictate all work is optimally performed in a single, outer shell. ### Some Other Practical Effects There are many practical limitations that will change our results, mostly having the effect of reducing the efficiencies of Dyson spheres below the limits here. 
Two small effects that may actually make our limits here _pessimistic_ are the actual spectrum of the star and the limitations of optical circulators. Because real stellar spectra are not blackbodies, their outgoing flux has _less_ entropy per unit energy than we have assumed when we interpret their effective temperature \(T_{\rm eff}\) as \(T_{*}\). This ultimately means that one can extract more work or do more computations than we have assumed, depending on the degree of departure from a blackbody. We have invoked optical circulators to ignore energy that falls back onto the star from the Dyson sphere. Including this must be done with care, but for now we will ignore changes to the star's structure and focus on the first-order heating effect on the surface. When such energy lands on a star with temperature \(T_{\rm eff}\equiv(L/(4\pi R_{*}^{2}\sigma))^{\frac{1}{4}}\), the surface heats to a new, higher temperature \(T_{*}\), and returns this energy to the shell, and this must be accounted for explicitly. This new temperature, by energy balance, obeys \[T_{*}=\left(T_{\rm eff}^{4}+T^{4}\right)^{\frac{1}{4}}=T_{\rm eff}\left[1+ \left(\frac{T}{T_{\rm eff}}\right)^{4}\right]^{\frac{1}{4}} \tag{76}\] where \(T\) is the temperature of the shell. This has no effect on the outer temperature of the shell for computation and dissipative activities, which is set entirely by \(L\) and \(R\). The radiation the shell receives back from the star, however, is at a higher temperature and so has less entropy per erg than it had going in.10 This slightly lowers the entropy flux through the shell, by a geometric factor of order \(1+(T/T_{\text{eff}})^{4}\). Since the temperature of a shell doing traditional work is set by entropy balance, this slightly lowers its temperature. Footnote 10: This does not violate the second law of thermodynamics since energy gains entropy as it moves from the stellar core to the surface, and this process can become slightly less effective to compensate for the extra entropy arriving from the outside. For calculations, which work directly with the entropy of the radiation, the result of all of this is an additional amount of computation that can be done, introducing a correction of order \(r(T/T_{\text{eff}})^{3}\) (since \(r\) scales as \(L/T\) times the geometric factor). This is thus, at best, perhaps a few percent for a very warm sphere. The amount of dissipative activities and traditional work go as \(L(1-\frac{3}{4}\dot{S}_{\text{in}}T)\), so decrease by a small amount because \(\dot{S}_{\text{in}}\) is higher. For swarms with finite optical depth, not using circulators has an additional benefit that it allows them to cool via inward radiation into deep space (i.e. via paths 2a and 3a in Figure 3). This will not affect their ability to achieve the Landsberg limit in terms of \(T\), but it will lower \(T\) by increasing their available radiating surface to nearly the full sphere, i.e. by almost a factor of 4. The details will depend on the swarm geometry in complex ways, but the extra cooling will very roughly be a factor of \(\sim\)(\(1+3e^{-\tau}\)). For complete spheres, this will increase the swarm performance levels we have calculated, closer to the shell performances, and move the optimal distances outwards. For dense swarms the improvement is small but for our Landsberg computation case where optimal optical depths are of order 2, it could plausibly raise \(r\) by over 10%. 
In the \(\tau\ll\)1 limit where we consider only isolated swarm elements, optimal values of \(r\) will be higher by up to a factor of 2 or so. These (and doubtless many more effects) are all likely much smaller than the uncertainties introduced by our assumptions and approximations elsewhere, but could be considered in any detailed analysis of a specific design of Dyson sphere. ## 6 Discussion and Conclusions The details of optimal ways to arrange mass in a Dyson sphere depend on assumptions about the nature of the engines. Some overall themes have emerged from our analysis, however: * Unless one has enough mass to capture most of a star's luminosity, the optimal placement of mass is as close to the star as possible, favoring higher intercepted flux over efficiency. This argues that unless a star is suffering significant optical extinction, we should expect the waste heat of industry to be in the mid- or near-infrared, or even in the optical (a possibility explored by Osmanov & Berezhiani (2018) and Lacki (2016).) * Even for complete spheres, the return on investment for additional amounts of mass beyond a small sphere is extremely small, also arguing for relatively high sphere temperatures. * In principle, optical circulators could be used to avoid complications of feedback among elements of a Dyson swarm. Activities might also be done directly with photons, avoiding the need to run heat engines between intermediate absorbers heated and cooled by radiation. * In that ultimate limit, the appropriate optimal (Landsberg) efficiency for Dyson sphere components is the Carnot efficiency. For computation there is an additional factor of \(1/T\) to consider from the Landauer limit. * Unless they radiate their energy out of the system, for instance as low-entropy radio waves, Dyson spheres as a whole do no "work" in a thermodynamic sense, since they must radiate away all energy they consume as waste heat. This raises their temperatures and lowers their efficiencies compared to engines that do traditional work. * The optimal orbital distances as a function of mass have some dependence on the details of the kind of activities performed and the nature of the engines, but these details only matter to a factor of 2 or so. * If Dyson spheres are composed of swarms capturing most of the star's light, they likely have optical depths of a few. * For work that leaves the sphere and computation at the Landauer limit, there is no value to nested spheres, each capturing the inner sphere's waste heat. Matrioshka Brains are thus not optimal configurations for performing calculations. * For dissipative activities, such as maintaining biological activity or computation below the Landauer limit, there is some benefit to widely spaced nested spheres feeding on each others' waste heat. In these cases, we might expect material across a wide range of orbital distances, and the total optical depth to the star could be quite high. We must, of course take all of this with a grain of salt. Real technological development around a star will be subject to many constraints and practical considerations that we probably cannot guess. While we have outlined the ultimate physical limits of Dyson spheres, consistent with Dyson's philosophy and subject only to weak assumptions that there is a cost to acquiring mass, if real Dyson spheres exist they might be quite different than we have imagined here. 
Nonetheless, these conclusions can guide speculation into the nature of what sorts of Dyson spheres might exist, help interpret upper limits set by search programs, and potentially guide future searches. I thank Viorel Badescu for feedback on and suggestions for this manuscript. I thank Caleb Scharf and Brian Lacki for helpful conversations. The Center for Exoplanets and Habitable Worlds and the Penn State Extraterrestrial Intelligence Center are supported by Penn State and its Eberly College of Science. This research has made use of NASA's Astrophysics Data System Bibliographic Services.
2309.09657
Assessment of Immersed Boundary Methods for Hypersonic Flows with Gas-Surface Interactions
Immersed boundary (IB) methods with adaptive mesh refinement (AMR) techniques are assessed for atmospheric entry applications, including effects of chemical nonequilibrium (CNE) and gas-surface interactions (GSI). The performance of a conservative cut-cell and two non-conservative ghost-cell IB methods is assessed in comparison with analytical solutions, data from literature, and results obtained with a reference solver that operates on body-fitted grids. All solvers use the same external thermochemistry library so that all observed differences can be attributed to the underlying numerical methods. Results from eight benchmark cases are reported. Four cases are selected to verify the implementation of chemistry, transport properties, catalytic boundary conditions, and shock capturing. Four validation cases consider blunt geometries with adiabatic/isothermal and inert/catalytic/ablative boundary conditions. Overall, the results obtained with the IB solvers are in very good agreement with the reference data. Discrepancies arise with ghost-cell methods for cases with large temperature or concentration gradients at the wall and are attributed to mass conservation errors. Only a strictly conservative cut-cell IB method is on par with body-fitted grid methods.
Ata Onur Başkaya, Michele Capriati, Alessandro Turchi, Thierry Magin, Stefan Hickel
2023-09-18T10:50:31Z
http://arxiv.org/abs/2309.09657v1
## Highlights **Assessment of Immersed Boundary Methods for Hypersonic Flows with Gas-Surface Interactions** Ata Onur Baskaya, Michele Capriati, Alessandro Turchi, Thierry Magin, Stefan Hickel * Immersed boundary (IB) methods are assessed for applications with strong thermal gradients and gas-surface interactions. * A set of well-defined test cases is established for the verification and validation of IB methods for atmospheric entry. * Ghost-cell based IB methods suffer from conservation errors at cold isothermal walls and reacting ablative walls. * Strictly conservative cut-cell IB method performs on par with body-fitted grid methods. # Assessment of Immersed Boundary Methods for Hypersonic Flows with Gas-Surface Interactions Ata Onur Baskaya Michele Capriati Alessandro Turchi Thierry Magin Stefan Hickel Aerodynamics Group, Faculty of Aerospace Engineering, TU Delft, The Netherlands Aeronautics and Aerospace Department, von Karman Institute for Fluid Dynamics, Belgium Inria, Centre de Mathematiques Appliquees, Ecole Polytechnique, IPP, France Science and Research Directorate, Italian Space Agency, Italy ###### Abstract Immersed boundary (IB) methods with adaptive mesh refinement (AMR) techniques are assessed for atmospheric entry applications, including effects of chemical nonequilibrium (CNE) and gas-surface interactions (GSI). The performance of a conservative cut-cell and two non-conservative ghost-cell IB methods is assessed in comparison with analytical solutions, data from literature, and results obtained with a reference solver that operates on body-fitted grids. All solvers use the same external thermochemistry library so that all observed differences can be attributed to the underlying numerical methods. Results from eight benchmark cases are reported. Four cases are selected to verify the implementation of chemistry, transport properties, catalytic boundary conditions, and shock capturing. Four validation cases consider blunt geometries with adiabatic/isothermal and inert/catalytic/ablative boundary conditions. Overall, the results obtained with the IB solvers are in very good agreement with the reference data. Discrepancies arise with ghost-cell methods for cases with large temperature or concentration gradients at the wall and are attributed to mass conservation errors. Only a strictly conservative cut-cell IB method is on par with body-fitted grid methods. keywords: Immersed boundary method, CFD simulation, Atmospheric entry, Hypersonic flow, Gas-surface interaction, Ablation + Footnote †: journal: Computers & Fluids ## 1 Introduction Hypersonic flows experienced during atmospheric entry of capsules or space debris are characterized by strong shock waves and thermochemical nonequilibrium effects through the excitation of the internal energy modes of species and rapid chemical reactions in the shock layer. The hot gas interacts with the surface thermal protection system (TPS) material installed to protect the spacecraft from this hostile environment. Depending on the characteristics of the TPS material, these gas-surface interactions (GSI) involve catalysis as well as ablation.
While the former accelerates the exothermic recombination reactions leading to increased heat transfer towards the surface, the latter alleviates the heat load by means of physicochemical decomposition and mass loss. These ablative GSI change the shape of the object by surface recession. Understanding these interactions is crucial for predicting the surface stresses and heat fluxes, as well as the uncontrolled trajectory of space debris. Ground testing is indispensable for validation purposes; however, no facility can simultaneously replicate all aspects of atmospheric entry flows [1]. Hence, computational fluid dynamics (CFD) simulations are essential for the efficient aerothermodynamic analysis and design of future spacecraft. Most CFD solvers used for high-speed and high-enthalpy applications employ body-fitted structured grids [2; 3; 4]. In these solvers, alignment of the grid with the shock and the surface needs to be ensured for an accurate prediction of the flow field. Generating these types of grids usually involves strenuous effort from the user especially for detailed features and incremental geometry updates [5]. Unstructured grids have also been explored; however, issues affecting the heat flux predictions at the surface were reported [6; 7]. A promising alternative is the use of adaptive mesh refinement (AMR) techniques based on piecewise Cartesian grids with immersed boundary (IB) methods. There has been a recently growing interest in IB-AMR solvers for atmospheric entry applications [8; 9; 10; 11; 12], mainly for their potential in considering complex and deforming geometries, and better robustness and higher computational efficiency compared to body-fitted mesh-deformation methods. These methods also allow for a relatively straightforward implementation of high-order schemes. However, special care must be taken to have sufficient grid resolution near the boundaries, as it is more difficult for immersed boundary methods to efficiently resolve thin boundary layers over curved surfaces. To address this shortcoming, a blend of Cartesian grids in the fluid and body-fitted grids near the surface can be employed [10; 11]. This approach has been successful in reducing the required number of cells and providing better resolution of the thermal boundary layer. In general, a blended grid approach is well suited for shapes with smooth curvatures. However, it is susceptible to the same drawbacks inherent to body-fitted grids, for instance, their difficult adaptation to complex deforming geometries. Arslanbekov et al. [8], Sekhar and Ruffin [9], and more recently Brahmachary et al. [12] demonstrated the benefits of using IB-AMR solvers for a number of relevant cases. These studies have generally indicated good predictions for wall pressure and skin friction distributions, while emphasizing the difficulty in accurately predicting wall heat fluxes. As with more recent contributions [10; 11], these studies were mostly performed with ghost-cell methods from the family of discrete forcing IB approaches [13]. Ghost-cell methods impose boundary conditions by extrapolating the fluid solution into the solid, i.e. into ghost cells that are neighbouring fluid cells. Relying solely on ghost-point extrapolation does not ensure strict conservation of mass, momentum, and energy. A strictly conservative approach is the cut-cell finite-volume method, which splits fluid and solid domains into consistently deformed finite volumes. 
The implementation of a cut-cell method for three dimensions and high-order schemes is not as straightforward as the ghost-cell approach, and it also introduces additional challenges such as cut-cells with very small fluid volumes. However, the main advantage of the cut-cell method lies in satisfying the conservation of mass, momentum, and energy near the wall [13]. In this paper, ghost-cell and cut-cell IB methods are scrutinized through a curated list of benchmark case studies relevant for atmospheric entry applications. Main contributions of this paper are twofold: * to assess the accuracy of IB methods for applications with strong thermal gradients and gas-surface interactions. * to establish a set of well-defined test cases for the verification and validation of IB methods for atmospheric entry. Results obtained with the IB-AMR solvers INCA [14; 15] and CHESS [16; 17] are compared to reference results obtained with the body-fitted finite-volume solver US3D [6] in addition to data from literature. A consistent comparison to assess the accuracy of the numerical methods is achieved by coupling each of the flow solvers with the same thermochemistry library, Mutation\({}^{++}\)[18]. The paper is structured as follows: Governing equations and modelling approaches are presented in Section 2. Solver methodologies are introduced in Section 3. Results of the benchmark case studies are presented and discussed in Section 4, while the influence of the different IB methodologies is further investigated in Section 5. Concluding remarks are made in Section 6. ## 2 Governing Equations and Models ### Governing Equations The compressible Navier-Stokes equations are solved in their conservative form for a reacting multicomponent fluid, \[\frac{\partial\rho_{i}}{\partial t}+\mathbf{\nabla}\cdot(\rho_{i} \mathbf{u})+\mathbf{\nabla}\cdot\mathbf{J}_{i} =\dot{\omega}_{i}\:, \tag{1}\] \[\frac{\partial\rho\mathbf{u}}{\partial t}+\mathbf{\nabla}\cdot(\rho \mathbf{u}\otimes\mathbf{u})+\mathbf{\nabla}p-\mathbf{\nabla}\cdot\mathbf{\tau} =0\:,\] (2) \[\frac{\partial\rho E}{\partial t}+\mathbf{\nabla}\cdot\left[(\rho E+p )\,\mathbf{u}\right]+\mathbf{\nabla}\cdot\mathbf{q}-\mathbf{\nabla}\cdot(\mathbf{\tau} \cdot\mathbf{u}) =0\:, \tag{3}\] where \(\rho_{i}\) is the species partial density for the \(i^{\text{th}}\) species, \(\mathbf{u}\) is the mixture average velocity, \(\dot{\omega}_{i}\) is the source term associated with the production or consumption of species due to chemical reactions, \(\rho\) is the mixture density, \(p\) is the mixture pressure, and \(E=e+u^{2}/2\) is the specific total energy, which is the sum of the thermodynamic internal energy \(e\) and the kinetic energy. External forces due to gravitational or electromagnetic effects, and radiative energy exchanges are not considered for the cases in this study. Both solvers considered in this work can perform under thermal nonequilibrium with multi-temperature methods, such as that of Park [19]. However, results presented in this paper are obtained with a thermal equilibrium assumption. The ideal gas assumption leads to the equation of state \(p=\rho RT\), where \(R=\mathcal{R}/\overline{M}\) is the mixture gas constant obtained from the universal gas constant \(\mathcal{R}\) and the mixture average molar mass \(\overline{M}\). These mixture properties are modelled according to Dalton's law through their constituent species as \(p=\sum_{i}p_{i}\), \(\rho=\sum_{i}\rho_{i}\), \(R=\sum_{i}y_{i}R_{i}\), with the mass fractions \(y_{i}=\rho_{i}/\rho\). 
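As a simple illustration of these mixture relations (a sketch using standard molar masses, not code from any of the solvers), the mixture gas constant and pressure follow directly from the species partial densities:

```python
R_UNIV = 8.314462618  # universal gas constant, J/(mol K)

# Standard molar masses in kg/mol (assumed here; the solvers obtain species data from Mutation++).
MOLAR_MASS = {"N2": 28.0134e-3, "O2": 31.9988e-3, "NO": 30.0061e-3,
              "N": 14.0067e-3, "O": 15.9994e-3}

def mixture_state(rho_i, T):
    """Mixture density, mass fractions, gas constant, and pressure from species partial densities."""
    rho = sum(rho_i.values())                                    # rho = sum_i rho_i
    y = {s: r / rho for s, r in rho_i.items()}                   # y_i = rho_i / rho
    R_mix = sum(y[s] * R_UNIV / MOLAR_MASS[s] for s in rho_i)    # R = sum_i y_i R_i
    p = rho * R_mix * T                                          # ideal-gas equation of state
    return rho, y, R_mix, p

# Example: undissociated air at rho = 0.01 kg/m^3 and T = 7000 K
rho, y, R_mix, p = mixture_state({"N2": 0.767 * 0.01, "O2": 0.233 * 0.01}, 7000.0)
print(f"R = {R_mix:.1f} J/(kg K), p = {p:.1f} Pa")
```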
Two models are considered for the species diffusion flux \(\mathbf{J}_{i}\): Fick's law with a correction to ensure conservation of mass as \[\mathbf{J}_{i}=-\rho D_{im}\mathbf{\nabla}y_{i}+y_{i}\sum_{j}\rho D_{jm}\mathbf{ \nabla}y_{j}\:, \tag{4}\] with the mixture-averaged diffusion coefficients \(D_{im}=\frac{1-x_{i}}{\sum_{j\neq i}\frac{x_{j}}{\mathscr{D}_{ij}}}\,,\) obtained by Wilke's average of the binary diffusion coefficients \(\mathscr{D}_{ij}\). The second diffusion model uses the solution of the Stefan-Maxwell equations, \[\mathbf{\nabla}x_{i}=\frac{\overline{M}}{\rho}\sum_{j\neq i}\left(\frac{x_{i}\mathbf{ J}_{j}}{M_{j}\mathscr{D}_{ij}}-\frac{x_{j}\mathbf{J}_{i}}{M_{i}\mathscr{D}_{ij}} \right)\;, \tag{5}\] where \(x_{i}\) are the mole fractions, and \(M_{i}\) are the species molar masses. This formulation is computationally costlier, but theoretically more accurate [20]. Viscosity and thermal conductivity are obtained through a linear system solution using an LDL\({}^{\mathrm{T}}\) decomposition as opposed to the common approach of using simplified mixture rules [21; 22; 23]. The viscous stress tensor \(\mathbf{\tau}\) is defined assuming Stokes' hypothesis as \[\mathbf{\tau}=\mu\left[\mathbf{\nabla}\mathbf{u}+(\mathbf{\nabla}\mathbf{u})^{\dagger}- \frac{2}{3}\mathbf{\nabla}\cdot\mathbf{u}\mathbf{I}\right]\,, \tag{6}\] where \(\mu\) is the dynamic (shear) viscosity of the mixture. The total heat flux vector \(\mathbf{q}\) includes the contributions from conduction and mass diffusion, \[\mathbf{q}=-\lambda\mathbf{\nabla}T+\sum_{i}\mathbf{J}_{i}h_{i}(T)\;, \tag{7}\] where \(T\) is the temperature. The first term stems from Fourier's law with the thermal conductivity \(\lambda\) of the mixture, and the second term accounts for the transport of enthalpy by species diffusion, with \(h_{i}\) as the species enthalpy. ### Physicochemical Modelling The models used in state-of-the-art CFD solvers capable of simulating the aforementioned phenomena vary considerably. Broadly, choices need to be made on the thermodynamic database, the treatment of TCNE effects, the transport properties modelling, and the approach for handling GSI. For details we refer to several published studies that evaluate the impact of these selections in modelling thermal nonequilibrium [24; 25], species diffusion [20; 26], viscosity and thermal conductivity [26; 27], rate of catalysis [28], and ablation [29]. As important quantities of interest, such as surface heat fluxes, are highly sensitive to the modelling approaches, large discrepancies between the results obtained with hypersonic CFD codes are common [3; 4]. Hence, this variety of approaches often obscures a clear assessment of the underlying numerical methods, when comparing different solvers. Based on these considerations, each of the solvers used in this study is coupled with the multicomponent thermodynamic and transport properties for ionized gases in C++ (Mutation\({}^{++}\)) open-source library. Mutation\({}^{++}\) provides all required physiochemical models for thermodynamics, transport properties, chemical kinetics, and GSI. A detailed description of Mutation\({}^{++}\) is presented by Scoggins et al. [18]. The caloric properties of the species are approximated with standard NASA-9 polynomial fits [30]. Species mass diffusivities, viscosities, and thermal conductivities are provided by Mutation\({}^{++}\) according to multicomponent Chapman-Enskog formulations [31]. 
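Returning to the diffusion model of Eq. (4), the correction term forces the species fluxes to sum to zero so that total mass is conserved. A minimal 1-D sketch is given below (an illustration only, with placeholder diffusion coefficients rather than Mutation\({}^{++}\) values):

```python
import numpy as np

def fick_corrected_flux(rho, y, D_im, dx):
    """Eq. (4) on a 1-D grid: rho has shape (nx,), y and D_im have shape (ns, nx)."""
    grad_y = np.gradient(y, dx, axis=1)          # d y_i / d x (second-order central differences)
    J_uncorr = -rho * D_im * grad_y              # plain Fick flux: -rho D_im grad(y_i)
    return J_uncorr - y * J_uncorr.sum(axis=0)   # correction: + y_i sum_j rho D_jm grad(y_j)

# Quick check that the corrected fluxes sum to zero for a binary mixture:
nx = 50
x = np.linspace(0.0, 3.0e-3, nx)                              # 1-D grid coordinates
y_N2 = np.linspace(0.6, 0.9, nx)
y = np.vstack([y_N2, 1.0 - y_N2])                             # mass fractions of N2 and O2
D_im = np.vstack([np.full(nx, 1.0e-4), np.full(nx, 3.0e-4)])  # placeholder diffusivities, m^2/s
J = fick_corrected_flux(np.full(nx, 0.02), y, D_im, x[1] - x[0])
print(abs(J.sum(axis=0)).max())   # ~0, to machine precision
```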
The chemical reaction mechanisms, that is, species mass rates and their analytical Jacobians with respect to species densities, are also provided by Mutation\({}^{++}\). Catalytic and ablative surface boundary conditions are imposed by solving a mass balance [32; 18], \[(\rho_{i}v_{blow})_{wall}+(J_{i})_{wall}=\dot{\omega}_{i,wall}\:, \tag{8}\] with \(v_{blow}\) as the surface-normal blowing velocity, which is nonzero only for an ablative boundary. Terms from left to right refer to convective flux due to blowing, diffusive flux, and species source term due to surface reactions. A probability based approach is employed for computing this chemical source term for the surface, written as \[\dot{\omega}_{i,wall}=\gamma m_{i}{\cal F}_{i,impin}\:, \tag{9}\] where \(\gamma={\cal F}_{i,react}/{\cal F}_{i,impin}\) is the ratio of reacting to impinging species fluxes and it describes the efficiency of the process, and \(m_{i}\) is the mass of the \(i^{\rm th}\) species [32]. Assuming the species at the wall have a Maxwellian distribution function, the impinging species flux is \[{\cal F}_{i,impin}=n_{i}\sqrt{\frac{k_{B}T_{w}}{2\pi m_{i}}}\:, \tag{10}\] where \(k_{B}\) is the Boltzmann constant, \(T_{w}\) is the wall temperature, and \(n_{i}\) is the number density of the \(i^{\rm th}\) species [33]. From the mass blowing rate \(\dot{m}=\sum_{i}\dot{\omega}_{i,wall}\), the blowing speed is calculated by \[v_{blow}=\frac{\dot{m}}{\sum_{i}\rho_{i}}\:. \tag{11}\] Values obtained for species densities and mass blowing speeds are then imposed as boundary conditions for the Navier-Stokes equations. ## 3 Numerical Methods We consider three different methodologies for imposing surface boundary conditions in the framework of finite-volume methods. Schematics of the body-fitted, ghost-cell IB, and cut-cell IB approaches are shown in Fig. 1. The arc near the middle of each sketch indicates the surface that demarcates the fluid above from the solid below it. The other black lines are grid lines and the filled dots indicate cell centers. The color code matches the one used for presenting the results in Section 4. Figure 1: Schematics of (a) body-fitted, (b) ghost-cell IB, and (c) cut-cell IB approaches. Ghost cells are striped (violet or blue) and cut-cells are tinted (turquoise). The classical body-fitted grid method, Fig. 1a, simply makes use of the grid's alignment with the geometry. An example stencil is drawn in the sketch and the boundary intercept is indicated by a colored hollow circle. Hollow circles indicate the stencil of the discretization scheme. The fluid-cell solutions and boundary conditions are used to reconstruct quantities at cell interfaces according to the chosen numerical scheme. The two other approaches use IB methods on Cartesian grids. The ghost-cell IB approach [34] seen in Fig. 1b imposes boundary conditions by extending the fluid solution to ghost-cells. The virtual flow solution of ghost-cells is set by extrapolating the nearby fluid solution according to boundary conditions at the nearest fluid-solid interface. Since the grid does not conform with the geometry, the solution at an image point in the fluid needs to be found through interpolation using the surrounding fluid-cell solutions. An example stencil is colored in the sketch, with the fluid points used in the interpolation connected by dotted lines. Ghost-cells are marked with a striped pattern in the sketch.
While being relatively straightforward to implement, this approach does not ensure strict conservation of mass, momentum, and energy at the interface between the fluid and the solid. Fluxes are reconstructed from the fluid-cell and the ghost-cell solutions on the Cartesian grid without considering the location and shape of the fluid-solid interface. Errors in implicitly satisfying the conservative flux boundary condition therefore result from the nonlinearity of the flux function, from the image point interpolation, and from the interface curvature. The cut-cell IB approach [35; 36; 37], see Fig. 1c, ensures strict conservation by considering the flux balance for the part of the cell that belongs to the fluid domain. These consistently deformed finite volumes and their cell faces are colored in the sketch. Fluxes over the cell faces of the cut cells are scaled according to the wetted areas. The exchange of mass (e.g. with surface reactions), momentum, and energy through the fluid-solid interface is calculated from the prescribed boundary conditions and the local fluid solution. The latter is acquired by interpolation from the surrounding cell values and the boundary conditions. An example stencil is colored in the sketch for the cut-cell interpolation. The other stencil in the sketch is identical to the ghost-cell IB approach. This addition to the cut-cell method refers to the specific implementation within the INCA solver and will be discussed in Section 3.2. Cut-cells with a very small fluid volume fraction require a special treatment to ensure stable time integration. They are typically mixed or merged with nearby cells [37]. ### Body-fitted Solver The body-fitted solver considered in this study is US3D, which is a high-fidelity flow solver specifically designed for aerodynamic applications in the hypersonic regime by the University of Minnesota and NASA [6]. It solves the compressible chemically reacting Navier-Stokes equations in a finite-volume framework on unstructured body-fitted grids. Among the several numerical schemes available in the solver, all simulations carried out within this work use the modified Steger-Warming scheme [38], which is suitable for steady computations. A MUSCL approach [39] is employed to obtain second-order accurate fluxes. Both explicit and implicit time integration methods are available; in this work, rapid convergence to steady state is achieved with the data parallel line relaxation (DPLR) method [40]. US3D is equipped with chemistry/multi-temperature source terms and transport properties with the possibility to account for high temperature and high pressure effects. Native routines can be further extended by user-defined subroutines, which allow coupling the solver to external libraries; we refer to Capriati et al. [41] for the coupling with Mutation\({}^{++}\). ### Immersed Boundary Solvers Two IB solvers are considered: one able to use both the cut-cell and the ghost-cell methods, and another using only the latter. Employing a cut-cell IB methodology, INCA is a high-fidelity finite-volume solver for direct numerical simulations (DNS) and large eddy simulations (LES) of the compressible chemically reacting Navier-Stokes equations and provides a large number of different discretization schemes on three-dimensional block-Cartesian AMR grids [14; 15]. For the purposes of this study, a third-order weighted essentially non-oscillatory (WENO) scheme [42] with HLLC flux function [43] is selected to discretize the inviscid terms. 
WENO schemes permit high accuracy in smooth regions, while ensuring stable and sharp capturing of discontinuities. Second-order centered differences are used for the viscous terms and the explicit third-order Runge-Kutta scheme of Gottlieb and Shu [44] is selected for time integration. Chemical source terms are treated using Strang's second-order time splitting scheme [45] to alleviate the numerical stiffness caused by these terms. The chemical source terms thus reduce to a system of ordinary differential equations, which is solved by the VODE library [46]. INCA employs a unique improvement to the common cut-cell methodology [37], which we refer to as the cut-element method [47; 48]. This method represents the fluid-solid interface through cut-elements, which are derived from the Cartesian mesh and the triangulation of the surface geometry. Instead of considering a planar intersection of a finite-volume cell with the wall surface, as typically done in cut-cell methods [49; 50], cut-elements maintain all details of the intersection of the grid with the surface triangulation. The interface within each cut-cell is thus represented by several cut-elements belonging to different surface triangles to yield sub-cell accuracy and robustness for complex geometries. This method is a consistent and conservative extension of the finite volume flux balance to accommodate cells being split by boundaries. Further details on this cut-element methodology and its extension to incorporate GSI and the effects of thermal nonequilibrium are provided in Ref. [51]. INCA employs ghost-cells to allow for the use of unmodified stencils throughout the domain, as shown in Fig. 1c. Moreover, the cut-cell procedure that ensures strict conservation can be switched off to use only the ghost-cell method. We will discuss results obtained with the INCA ghost-cell method for selected cases in Section 5. In contrast to INCA, the IB method implemented in the flow solver CHESS of Politecnico di Bari [17] fully relies on ghost-cells. The numerical method utilized by CHESS is based on the flux vector splitting proposed by Steger and Warming [38] with a second-order MUSCL reconstruction in space [39] for the hyperbolic terms. Discretization of the viscous fluxes uses Gauss's theorem in conjunction with a second-order linear reconstruction of the solution. A third-order explicit Runge-Kutta scheme is employed for time integration of the transport terms in the Navier-Stokes equations. Following the Runge-Kutta time step, chemical source terms are computed by means of a Gauss-Seidel scheme. CHESS also uses AMR to provide appropriate resolution of shocks and boundary layers [52] and uses the same physicochemical models as US3D and INCA [16]. Further details on the solver can be found in the aforementioned works [16; 17]. ## 4 Benchmark Cases We have curated a set of benchmark cases through collaborative effort with several research groups [16]. The goal is to first verify the physicochemical models and the numerical schemes. Once confidence is established over these fundamental aspects, the accuracy and limitations of the IB methods is addressed. We have selected setups that are sufficiently challenging for the methods under assessment, and simple enough to be readily reproduced by others to incentivize collaboration. For IB methods on Cartesian grids, curved geometries were selected to include the entire angular range of fluid-solid interfaces in two dimensions. 
These cases include strong thermal gradients near cold isothermal walls as well as gas-surface interactions such as catalytic reactions and ablative surface blowing. The benchmark cases are summarized in Table 1. The first four cases serve as the verification of the implementations for chemistry, transport properties, the catalytic boundary conditions, and the numerical schemes for shock capturing. Established validation experiments are chosen as the final benchmark cases: the fifth, sixth and seventh cases are 2-D cylinder flows of Knight et al. (2016) with inert adiabatic, inert isothermal, and catalytic isothermal surfaces. As the eighth benchmark, we discuss results for an ablative TPS sample geometry under plasma wind tunnel conditions, for which reference experimental data is provided by Helber et al. (2017). ### 0-D Reactor The first study verifies the chemical source term implementation by considering 5-species air, \([\mathrm{N}_{2},\mathrm{O}_{2},\mathrm{NO},\mathrm{N},\mathrm{O}]\), in an adiabatic reactor. Starting from the chemical nonequilibrium (CNE) initialization provided in Table 2, the system is left free to time-march towards the equilibrium state according to chemical mechanisms from Park (2007); Helber et al. (2017). The solutions provided by all three solvers are shown in Fig. 2. Dissociation of \(\mathrm{N}_{2},\mathrm{O}_{2}\) and the resulting formation of \(\mathrm{NO},\mathrm{N}\), and \(\mathrm{O}\) can be seen. The code-to-code agreement is excellent. ### 1-D Diffusion Problem This test case verifies the implementation of models for transport properties. Viscosity and thermal conductivity are obtained through direct calls to \begin{table} \begin{tabular}{c l l l} \hline \hline & Name & Aspect to Assess & Section \\ \hline 1. & 0-D Reactor & Chemistry & 4.1 \\ 2. & 1-D Diffusion Problem & Mass diffusion & 4.2 \\ 3. & 1-D Catalytic Diffusion Problem & Mass diffusion with catalysis & 4.3 \\ 4. & 1-D Shocktube & Shock capturing & 4.4 \\ 5. & 2-D Cylinder (inert, adiabatic wall) & Chemical nonequilibrium & 4.5.1 \\ 6. & 2-D Cylinder (inert, isothermal wall) & Surface heat flux & 4.5.2 \\ 7. & 2-D Cylinder (fully catalytic, isot. wall) & Surface heat flux with catalysis & 4.5.3 \\ 8. & 2-D Ablator & Surface mass blowing with ablation & 4.6 \\ \hline \hline \end{tabular} \end{table} Table 1: Summary of studied cases. \begin{table} \begin{tabular}{c c c c c} \hline \hline \(\rho\) [kg/m\({}^{3}\)] & \(T\) [K] & \(u\) [m/s] & \(y\)(\(\mathrm{N}_{2}\)) & \(y\)(\(\mathrm{O}_{2}\)) \\ \hline 0.01 & 7000 & 0.0 & 0.767 & 0.233 \\ \hline \hline \end{tabular} \end{table} Table 2: Setup conditions for the 0-D reactor case. Mutation\({}^{++}\), and are exactly equal for all solvers. Therefore, mainly the differences in the implementation of the driving force and boundary conditions are assessed. The setup consists of a 1-D tube with isothermal end walls at different temperatures. The initial and boundary conditions are provided in Table 3. The mixture composition and reaction mechanisms are the same as in the 0-D reactor case. The tube is 3 mm long. It should be pointed out that the computational meshes in US3D and INCA solvers have 100 cells, whereas CHESS results [16] used 400 cells. It has been verified that the US3D and INCA solutions are grid converged on the mesh with 100 cells. In this test case the temperature gradient leads to chemical reactions, which in turn drive mass diffusion. Temperature and mass fraction distributions along the tube are presented in Fig. 3. 
INCA results have been obtained by both Fick's law and Stefan-Maxwell diffusion models. However, for this test case, differences seem to be negligible between the two. Overall, US3D results are matched perfectly with INCA, while slight differences are observed for the mass fraction distributions predicted by CHESS, even though the temperature profiles match exactly. \begin{table} \begin{tabular}{c c c c c c} \hline \hline \(\rho\) [kg/m\({}^{3}\)] & \(T\) [K] & \(T_{\text{left}}\) [K] & \(T_{\text{right}}\) [K] & \(u\) [m/s] & \(y\)(N\({}_{2}\)) & \(y\)(O\({}_{2}\)) \\ \hline 0.02 & 1000 & 800 & 4800 & 0.0 & 0.767 & 0.233 \\ \hline \hline \end{tabular} \end{table} Table 3: Setup conditions for the 1-D diffusion case. Figure 2: Evolution of mass fractions for 5-species air in the 0-D reactor case. ### 1-D Catalytic Diffusion Problem This test case verifies the catalytic boundary condition implementation for a simple [N\({}_{2}\), N] binary mixture along a 1-D tube, for which an analytical solution exists and is derived in the appendix. Setup conditions are given in Table 4. The length of the tube is 0.2 m. One side of the tube at x = 0.0 m is at reservoir conditions, while at the other, at x = 0.2 m, a catalytic wall boundary condition is imposed. The catalytic wall promotes the recombination of nitrogen, that is, \(\rm N+N\to N_{2}\). The reaction rate is controlled by the recombination coefficient \(\gamma\) through Eq. 9. Results obtained with US3D, INCA, and CHESS [16] are compared with the analytical reference solution in Fig. 4. Naturally, for higher values of the recombination coefficient \(\gamma\), mass fraction of molecular nitrogen at the wall increases, and reaches unity for the fully catalytic case with \(\gamma=1.0\). All numerical predictions are in excellent agreement with the analytical solution. The previously noted difference for the CHESS solver in the diffusion problem is not observed here as the diffusion of species are driven predominantly by the surface reactions. \begin{table} \begin{tabular}{c c c c c c c} \hline \(p\) [Pa] & \(T\) [K] & \(T_{\rm wall}\) [K] & \(u\) [m/s] & \(y\)(N\({}_{2}\)) & \(y\)(N) & \(\gamma_{\rm N}\) \\ \hline 100 & 3000 & 3000 & 0.0 & 0.0 & 1.0 & [0.001, 0.01, 0.1, 1.0] \\ \hline \end{tabular} \end{table} Table 4: Setup conditions for the 1-D catalytic diffusion case. Figure 3: Comparison of (a) temperature and (b) mass fraction distributions for the 1-D diffusion case. ### 1-D Shocktube The Riemann problem of Grossman and Cinnella [56] is used to evaluate shock capturing. The unit domain, \(\mathrm{x}=[0,1]\) m, is spatially discretized by 600 cells, in line with the reference resolution. The diaphragm separating the two initial states is set at the midpoint of the tube. The initial conditions for the two states are given in Table 5. Air with 5-species is initially considered at thermodynamic equilibrium. The reaction mechanism is taken from an earlier work of Park [57], to match with the reference [56]. Grossmann and Cinnella applied a thermal nonequilibrium model; however, we have performed tests with Park's two-temperature model [19] and found no significant differences between the translational and vibrational energy modes. Therefore, we show results that have been obtained with a thermal equilibrium assumption. Fig. 5 shows pressure and density profiles \(99\,\mathrm{\SIUnitSymbolMicro s}\) after the initial state. Mass fractions are given in Fig. 6. 
The contact discontinuity and the shock wave traveling in the positive x direction as well as the expansion traveling in the opposite direction are well captured. The peak in density after the shock also matches perfectly with the reference results without any oscillations. Predictions of US3D and INCA for the mass fractions are also in excellent Figure 4: N\({}_{2}\) mass fractions for different recombination coefficients \(\gamma\) for the 1-D catalytic diffusion problem. \begin{table} \begin{tabular}{c c c c c c} \hline \hline \(u_{left}\) [m/s] & \(T_{left}\) [K] & \(p_{left}\) [Pa] & \(u_{right}\) [m/s] & \(T_{right}\) [K] & \(p_{right}\) [Pa] \\ \hline 0.0 & 9000 & 195256 & 0.0 & 300 & 10000 \\ \hline \hline \end{tabular} \end{table} Table 5: Initial conditions for the 1-D shocktube case. agreement with the reference results of Grossmann and Cinnella, see Fig. 5(a). The minor differences between the solvers in their sharp representation of the discontinuity is shown in the close-up view in Fig. 5(b). CHESS [16] predicts a slightly higher N\({}_{2}\) mass fraction, and accordingly less atomic nitrogen, than US3D and INCA. ### 2-D Cylinder 2-D cylinder flows are used for the validation of surface heat flux calculations under inert and catalytic wall conditions. Knight et al. [3] have presented an assessment of five different CFD codes from participating institutions with respect to reference experiments conducted at the high-enthalpy shock tunnel of the German Aerospace Center (DLR) [58]. The experiment investigates the flow past a cylinder with a radius of 0.045 m exposed to a reported total enthalpy of 22.4 MJ/kg. The experimental setup is numerically replicated by imposing the inflow conditions given in Table 6 on the left boundary. Symmetry is imposed along the stagnation line, and the outer boundaries are set as non-reflecting outlets. The reaction mechanism employed for the 5-species air model is taken from Park [54; 55]. As remarked by Knight et al. [3], there appears to be a large variation in the results from different solvers, especially regarding the treatment of the surface. To study this sensitivity, three different surface conditions are tested in the following sections: two inert cases with adiabatic and isothermal conditions, and a third case with a fully catalytic isothermal wall. Figure 5: Comparison of (a) pressure and (b) density distributions for the 1-D shocktube case of Grossman and Cinnella [56]. In the following assessment of IB methods, "BF" is used to denote reference results obtained on body-fitted grids with US3D, "IB-CC" is used for the cut-cell IB method of INCA, and "IB-GP" refers to the ghost-cell IB method of CHESS [16]. #### 4.5.1 Inert Adiabatic Wall The temperature and species mass fractions along the stagnation line are presented for the adiabatic case in Fig. 7. Shock stand-off distance and the dissociation of molecular nitrogen and oxygen in the shock layer are predicted in very good agreement between all methods. The fundamental differences in the implementation of the adiabatic wall boundary condition have no noticeable effect on the results. This is in line with the expectation that truncation and conservation errors are small in the absence of strong gradients. 
\begin{table} \begin{tabular}{l c c c c} \hline \hline \(\mathrm{M_{\infty}}\) & \(u_{\infty}\) [m/s] & \(T_{\infty}\) [K] & \(p_{\infty}\) [Pa] & \(\rho_{\infty}\) [kg/m\({}^{3}\)] \\ \hline 8.98 & 5956 & 901 & 476 & 1.547\(\times 10^{-3}\) \\ \hline \hline \(y\)(\(\mathrm{N_{2}}\)) & \(y\)(\(\mathrm{O_{2}}\)) & \(y\)(\(\mathrm{NO}\)) & \(y\)(\(\mathrm{N}\)) & \(y\)(\(\mathrm{O}\)) \\ \hline 0.7543 & 0.00713 & 0.01026 & \(6.5\times 10^{-7}\) & 0.2283 \\ \hline \hline \end{tabular} \end{table} Table 6: Freestream conditions for the 2-D cylinder case. Figure 6: Comparison of mass fraction distributions for the 1-D shocktube case. #### 4.5.2 Inert Isothermal Wall For the same inflow conditions, an isothermal wall boundary condition with a wall temperature of \(300\,\mathrm{K}\) is imposed on the cylinder surface, in accordance with the specifications by Knight et al. [3]. The numerical predictions for the stagnation line temperature and mass fraction distributions are plotted in Fig. 8. Results obtained with the BF and the IB-CC methods match almost exactly, including the steep temperature and species variations in the boundary layer. Results obtained with the IB-GP method, on the other hand, show a significant difference in the shock stand-off distance. This could be Figure 8: Comparison of (a) temperature and (b) mass fractions along the stagnation line for the inert isothermal 2-D cylinder case by Knight et al. [3]. Figure 7: Comparison of (a) temperature and (b) mass fractions along the stagnation line for the inert adiabatic 2-D cylinder case. attributed to the non-conservative formulation of the ghost-cell IB methodology. Mass conservation errors could manifest as an unphysical blowing from the surface. Consequently, the shock stand-off distance is increased and the whole flow field is modified. The adiabatic case is less affected by these conservation errors because it has much smaller temperature and density gradients near the wall. The IB-CC method handles large temperature and density gradients at isothermal walls much better, because it uses a strictly conservative IB method. In Fig. 9, surface pressure and heat flux distributions are compared with the experimental measurements from Knight et al. [3] and also with the numerical simulations of Nompelis from the same publication. All methods accurately predict the pressure distribution. Heat flux predictions of the BF and the IB-CC methods are in very good agreement. They match the experimental measurements better than the numerical simulations of Nompelis. Slight differences in heat fluxes are expected to be due to the differences in grid resolutions at the surface. Grid convergence studies have been carried out with both the BF and the IB-CC methods as summarized in Table 7 and showcased for the variation in heat fluxes in Fig. 10. Four levels of resolution are considered with the minimum cell size at the surface approximately halving with each step. For both solvers, results obtained on the medium-fine resolution mesh are considered grid converged, as they are essentially identical to the results obtained on the fine meshes. For these grids, smallest cell size near the wall is \(1.0\times 10^{-7}\) m for the BF method and \(6.25\times 10^{-7}\) m for Figure 9: Comparison of surface (a) pressures and (b) heat fluxes for the inert isothermal 2-D cylinder case by Knight et al. [3]. the IB-CC method. 
An interesting observation is that IB-CC method under-predicts the heat flux on coarse meshes, as intuitively expected, whereas the BF method over-predicts the heat flux on coarse meshes. This difference in the convergence trend is a sign of complex interactions between transport and chemistry. For this case, the IB-GP method is not able to predict the heat flux correctly. A similar underprediction has also been reported in literature for another ghost-cell IB-AMR solver by Brahmachary et al. [12], where the issue has been linked to the reconstruction of temperature by linear interpolation. However, the cut-cell IB method also resorts to second-order reconstruction schemes and can predict the heat flux correctly. Therefore, we attribute the observed deficiencies to conservation errors incurred through the ghost-cell IB method. This hypothesis is further discussed in Section 5, where results obtained with two independently developed ghost-cell IB methods are presented. \begin{table} \begin{tabular}{l l l l} \hline Solver \& Grid Resolution & \(\Delta h_{w}\) [\(\mathrm{\SIUnitSymbolMicro m}\)] & \(p_{0}\) [kPa] & \(q_{w,0}\) [\(\mathrm{MW/m^{2}}\)] \\ \hline Experiment [3] & N/A & 52.26 \(\pm\) 3.034 & 7.402 \(\pm\) 0.220 \\ \hline Nompelis [3] & 7.01 & 52.40 & 5.971 \\ \hline BF (coarse) & 0.44 & 54.38 & 8.422 \\ BF (medium-coarse) & 0.22 & 53.29 & 7.576 \\ BF (medium-fine) & 0.1 & 53.05 & 7.345 \\ BF (fine) & 0.05 & 52.95 & 7.308 \\ \hline IB-CC (coarse) & 2.5 & 51.57 & 6.684 \\ IB-CC (medium-coarse) & 1.25 & 51.57 & 7.028 \\ IB-CC (medium-fine) & 0.625 & 51.58 & 7.144 \\ IB-CC (fine) & 0.3125 & 51.58 & 7.189 \\ \hline IB-GP [16] & 1.0 & 52.83 & 0.167 \\ \hline \end{tabular} \end{table} Table 7: Grid resolution and stagnation point details for the inert isothermal 2-D cylinder case, where \(\Delta h_{w}\) is the effective wall-normal cell size at the wall, \(p_{0}\) is the stagnation point pressure, and \(q_{w,0}\) is the stagnation point wall heat flux. #### 4.5.3 Catalytic Isothermal Wall Exothermic catalytic reactions enhance the surface heat flux through a diffusive heat flux contribution. A fully catalytic wall (\(\gamma=1.0\)) at the same temperature of \(300\,\mathrm{K}\) is considered, as Karl et al. [58] state that this boundary condition is closest to what they have assumed for the experiments. It is, however, reasonable to assume that fully catalytic conditions were only achieved for a short duration at the beginning of the experiment. The fully catalytic boundary condition imposes the recombination of all atoms impinging on the surface, while still respecting the physical limits set by species Figure 11: Comparison of (a) mass fractions along the stagnation line and (b) heat fluxes over the surface for the catalytic isothermal 2-D cylinder case by Knight et al. [3]. Figure 10: Grid independence studies with (a) the BF and (b) the IB-CC methods considering surface heat fluxes for the inert isothermal 2-D cylinder case by Knight et al. [3]. diffusion. The species mass fractions along the stagnation line and the total surface heat flux distributions are shown in Fig. 11. Predictions of the BF and the IB-CC methods are in excellent agreement, both in terms of species concentrations and surface heat fluxes. Because the cold wall itself already promotes recombination reactions in the boundary layer, accounting for catalysis leads only to a minor increase in the heat fluxes, which remain within the experimental uncertainties. 
It is therefore difficult to draw conclusions on the effective value of the recombination coefficient in the experiment. Another interesting observation could be made by comparing the level of agreement between the BF and the IB-CC results for the heat fluxes at the inert wall shown in Fig. 9 and with the fully catalytic wall shown in Fig. 11. Taking the BF method as reference, it is seen that at the stagnation point, results of the IB-CC method with the inert wall are 2.7% lower, while with the fully catalytic wall the difference is only 0.2%. It can be argued that this better agreement is mostly associated with the dominant nature of the catalytic boundary condition. Our previous comments regarding the differences seen in the IB-GP method for the inert isothermal case apply here as well. To complete the analysis, contour plots are presented for Mach numbers in Fig. 12, for temperatures in Fig. 13, and for atomic nitrogen concentrations in Fig. 14. These contour plots further confirm the preceding quantitative discussions by once again reflecting the excellent agreement between the BF and the IB-CC methods. From the trace of the sonic line, to the peak shock temperatures, and to the extent of the species boundary layer marked by nitrogen accumulation, the results are in perfect agreement.

Figure 12: Comparison of Mach number contours for the catalytic isothermal 2-D cylinder case by Knight et al. [3] obtained with (a) the BF and (b) the IB-CC methods. The sonic line is indicated with the bright yellow line.

Figure 13: Comparison of temperature contours for the catalytic isothermal 2-D cylinder case by Knight et al. [3] obtained with (a) the BF and (b) the IB-CC methods.

Figure 14: Comparison of atomic nitrogen mass fraction contours for the catalytic isothermal 2-D cylinder case by Knight et al. [3] obtained with (a) the BF and (b) the IB-CC methods.

### 2-D Ablator

To validate the ablative boundary condition implementation and to assess the IB methods for GSI, a subsonic plasma wind tunnel experiment conducted at the von Karman Institute for Fluid Dynamics (VKI) by Helber et al. [53] is considered. The experiment exposes a graphite sample with a hemispherical nose of radius \(25\,\mathrm{mm}\) and a downstream extension of \(250\,\mathrm{mm}\) to nitrogen plasma. The sample undergoes ablation through nitridation reactions \[\mathrm{C_{(solid)}+N\to CN\;,} \tag{12}\] according to Eqs. (8-11) with the nitridation efficiency coefficient \[\gamma=7.91\cdot 10^{-2}\exp\left(-\frac{5663}{T_{\rm wall}}\right)\,. \tag{13}\] The nitridation efficiency was calibrated based on these particular experiments [53]. The simulations discussed in the following include mass blowing due to ablation, but do not account for the very slow shape change of the sample. First, we reproduce the experiment numerically using the BF method. For these simulations, a 9-species nitrogen-carbon mixture is considered, including free electrons and ionized species. These simulations yielded a stagnation point mass blowing rate of \(3.41\) g/m\({}^{2}\)s, which is within the experimental uncertainty range of \(2.8864\pm 0.965\) g/m\({}^{2}\)s. This validates the ablation model based on Eq. 13. Having confidence in the ablation model and its implementation in the BF method, the experimental test case is simplified to a 2-D geometry without ionized species to reduce the computational cost and to avoid straying too far from the objective of evaluating immersed boundary methods for an ablative boundary condition.
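To make Eq. (13) concrete, the following minimal Python sketch evaluates the nitridation efficiency at the reported wall temperature and converts it into a kinetic-limit carbon mass blowing rate using a standard impinging-flux closure. The closure and the assumed near-wall atomic-nitrogen density are illustrative assumptions, since Eqs. (8-11) and the local near-wall flow state are not reproduced in this excerpt.

```python
import math

# Illustrative constants (assumed values, not taken from the paper)
R_U = 8.314462618   # universal gas constant [J/(mol K)]
M_N = 14.007e-3     # molar mass of atomic nitrogen [kg/mol]
M_C = 12.011e-3     # molar mass of carbon [kg/mol]

def nitridation_efficiency(T_wall):
    """Eq. (13): calibrated nitridation efficiency coefficient."""
    return 7.91e-2 * math.exp(-5663.0 / T_wall)

def kinetic_limit_blowing_rate(rho_N, T_wall):
    """Kinetic-limit carbon mass blowing rate [kg/(m^2 s)].

    Assumes the standard impinging-flux closure: each N atom that reacts
    (with probability gamma) removes one carbon atom as CN. This closure is
    an illustrative stand-in, since Eqs. (8-11) are not reproduced here.
    """
    gamma = nitridation_efficiency(T_wall)
    molar_flux_N = (rho_N / M_N) * math.sqrt(R_U * T_wall / (2.0 * math.pi * M_N))
    return gamma * molar_flux_N * M_C

if __name__ == "__main__":
    T_wall = 2407.0   # wall temperature [K], as listed in Table 8
    rho_N = 1.0e-3    # assumed near-wall atomic-N density [kg/m^3] (placeholder)
    print(f"gamma(T_wall)      = {nitridation_efficiency(T_wall):.3e}")
    print(f"kinetic-limit mdot = {kinetic_limit_blowing_rate(rho_N, T_wall) * 1e3:.2f} g/(m^2 s)")
```

With an assumed near-wall density of the order of \(10^{-3}\) kg/m\({}^{3}\), this rough estimate lands in the same few-g/(m\({}^{2}\)s) range as the values discussed below; the intent of the sketch is only to show how Eq. (13) enters a blowing-rate estimate, not to reproduce the solver's boundary condition.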
Freestream conditions of this 2-D case are given in Table 8. A 6-species mixture of \([\mathrm{N_{2},N,CN,C_{3},C_{2},C}]\) is considered with chemical mechanisms from Olynick et al. [59]. For all methods, the grid resolution at the wall is \(1\times 10^{-5}\) m in the wall-normal direction. The mass fractions along the stagnation line and the mass blowing rates over the wall are shown in Fig. 15. Mass fractions for C\({}_{3}\) are not seen as they are almost zero. Predictions of the BF and the IB-CC methods agree well with each other. Overall, the production of CN at the wall and the dissociation of it through gas-phase reactions to form atomic nitrogen are well captured. Mass blowing rates from the BF and the IB-CC methods are also in very good agreement. The IB-GP method shows noticeable discrepancies for the mass fractions along the stagnation line and for the surface mass blowing rates. Despite the apparent quantitative mismatch, the IB-GP method also captures the profiles qualitatively well in the absence of strong gradients near the wall. Temperature and atomic nitrogen contours for the simulations with the BF and the IB-CC methods are shown in Figs. 16 and 17. Results of both methods agree very well on the thermal gradient over the surface and on the recombination of nitrogen as temperature drops.

\begin{table} \begin{tabular}{c c c c c c} \hline \hline \(u_{\infty}\) [m/s] & \(T_{\infty}\) [K] & \(T_{\mathrm{wall}}\) [K] & \(p_{\infty}\) [Pa] & \(y\)(N\({}_{2}\)) & \(y\)(N) \\ \hline 1570 & 10280 & 2407 & 1500 & 9.77659e-05 & 0.9999022341 \\ \hline \hline \end{tabular} \end{table} Table 8: Freestream conditions for the 2-D ablator case.

## 5 On the Importance of Conservative Boundary Conditions

In the previous section, we have demonstrated that INCA with its cut-cell IB method on block-Cartesian AMR meshes performs on par with the reference solver US3D employing body-fitted meshes. The ghost-cell IB method of CHESS yields similar accuracy for cases with adiabatic walls; however, it cannot predict the heat flux at strongly cooled walls. We have attributed these inaccuracies to mass conservation errors, as this is the most striking difference between ghost-cell methods and the strictly conservative cut-cell approach. However, the three solvers clearly differ also in several other aspects, such as the numerical schemes used for advection and diffusion driving forces. It is therefore unclear whether the observed deficiencies are inherent to the ghost-cell method or a particular implementation. To further corroborate the superiority of a conservative cut-cell (or cut-element) IB methodology, we have also applied the ghost-cell method of INCA for selected cases. By switching off the special flux treatment in cut-cells [51] employed in the preceding sections, a standard ghost-cell method is obtained that only relies on the extrapolated fluid solutions near the boundary as described in Section 3. Therefore, mass, momentum, and energy conservation are not exactly satisfied. This ghost-cell method has nominally the same order of convergence as the conservative cut-element method of INCA. The comparison of the two methods is shown for the two most challenging benchmark cases in Fig. 18, where the previous cut-cell based results from the INCA solver are denoted by "INCA-CC" and the ghost-cell based results are denoted by "INCA-GP". Similarly, results with the ghost-cell IB method of the CHESS solver are denoted by "CHESS-GP".
It can be seen that regardless of the various differences between INCA and CHESS, in both cases the ghost-cell methods are unsuccessful in predicting the surface heat fluxes and the mass blowing rates. For the 2-D cylinder case with an isothermal wall, the heat flux prediction of the INCA-GP method is closer in magnitude to the INCA-CC method and to the body-fitted reference from US3D than to the results obtained with the CHESS ghost-cell IB method; however, both ghost-cell methods give clearly wrong results. For the ablative case, the ghost-cell based method of INCA yields an overprediction very similar to that of the IB method of CHESS. Comparable inaccuracies are observed for two independently developed ghost-cell IB methods. The only difference between the ghost-cell IB method of INCA and INCA's cut-cell method, which shows excellent agreement with the reference data, is the conservative flux treatment. This further consolidates the diagnosis that conservation errors inherent to ghost-cell IB methods are responsible for large errors at cold walls. It is expected that these conservation errors converge at the same rate as the truncation errors of the baseline schemes. That is, conservation errors are very small unless gradients of the conservative variables are very large. This explains why errors manifest at cold walls and not at adiabatic walls.

Figure 15: Comparison of (a) mass fractions along the stagnation line and (b) mass blowing rates over the surface for the 2-D ablator case.

Figure 16: Comparison of the temperature contours for the 2-D ablator case obtained with (a) the BF and (b) the IB-CC methods.

Figure 17: Comparison of the atomic nitrogen mass fraction contours for the 2-D ablator case obtained with (a) the BF and (b) the IB-CC methods.

## 6 Conclusion

We have evaluated the accuracy of immersed boundary methods for atmospheric entry conditions, including the influence of chemical nonequilibrium and gas-surface interactions. The benchmark cases have considered the accurate modelling of gas chemistry, mass diffusion, surface catalysis, and surface mass blowing due to ablation. Computational results obtained with the cut-element IB method in the AMR solver INCA are in almost perfect agreement with the reference data for all considered cases, and as accurate as the results obtained with US3D on body-fitted meshes. Particularly for surface heat flux and mass blowing rate predictions, the benefit of an IB method that strictly conserves mass, momentum, and energy, such as the cut-element method in INCA, is clearly demonstrated in this study. After comparing this method with two non-conservative ghost-cell methods implemented in INCA and an independently developed solver, we saliently remark that numerical anomalies causing mispredictions of sensitive surface quantities can occur when using non-conservative IB formulations.

Figure 18: Comparison of (a) heat fluxes over the surface for the isothermal 2-D cylinder case by Knight et al. [3] and (b) mass blowing rates over the surface for the 2-D ablator case. “INCA-CC” refers to INCA with the cut-cell based IB method, “INCA-GP” refers to the ghost-cell IB method of INCA, and “CHESS-GP” refers to the ghost-cell IB method of CHESS [16].
CFD solvers that provide automatic mesh generation and adaptation to represent detailed and moving geometries with IB methods have many promising advantages, but the accuracy of the numerical schemes used for predicting surface quantities must be analyzed rigorously before they can be used for predictive simulations. The selection of a set of well-defined test cases by mutual collaboration between research groups is crucial in converging to a robust consensus on the prediction of these surface states. To that end, this paper establishes such a set of fundamental benchmark cases with reacting surfaces, which can be used for the verification and validation of hypersonic flow solvers, while assessing the accuracy of immersed boundary methods for atmospheric entry applications.

## Acknowledgments

The authors would like to thank Dr. Davide Ninni, Dr. Francesco Bonelli, and Prof. Giuseppe Pascazio from Politecnico di Bari for their collaboration and discussions on the results. From TU Delft, the authors would like to thank Prof. Georg Eitelberg for his insight regarding the experiments conducted at DLR and Dr. Ferdinand Schrijer for his comments on the manuscript. We also thank the Delft High Performance Computing Centre for providing access to DelftBlue and SURF (www.surf.nl) for the support in using the National Supercomputer Snellius.

## Appendix

### Analytical Solution of the 1-D Catalytic Diffusion Problem

Following the derivation proposed by Bariselli [60], substituting Fick's law into the molar continuity equation, and solving for the zero-advection, constant-temperature, steady-state solution, one obtains \[\nabla\cdot\left(n\frac{M_{\rm N}}{\overline{M}}D_{\rm N_{2},N}\nabla(x_{\rm N_{2}})\right)=0\:, \tag{A.1}\] with \(n\) as the number density. For the current binary mixture \(M_{N_{2}}=2M_{N}\) and \(\overline{M}=\sum_{i}x_{i}M_{i}\), which in 1-D leads to \[\frac{d}{d\eta}\left(\frac{1}{x_{\rm N_{2}}+1}\left(\frac{d}{d\eta}x_{\rm N_{2}}\right)\right)=0\:, \tag{A.2}\] with \(\eta\) as the spatial coordinate. Solving for \(x_{\rm N_{2}}\) yields \[x_{\rm N_{2}}=\frac{e^{C_{1}M_{\rm N}\eta}\,e^{C_{2}M_{\rm N}}}{M_{\rm N}}-1\;, \tag{A.3}\] with \(C_{1}\) and \(C_{2}\) as integration constants to be found through the boundary conditions. Firstly, knowing that \((x_{\rm N_{2}})_{\eta=0}=0\) at the free-stream reservoir, \[C_{2}=\frac{\ln M_{\rm N}}{M_{\rm N}}\;. \tag{A.4}\] Secondly, the diffusion flux is equated to the chemical production rate at the wall, \((J_{\rm N_{2}}=\dot{\omega}_{\rm N_{2}})_{\eta=L}\), which gives \[\left(\frac{C_{1}M_{\rm N}}{2-e^{C_{1}LM_{\rm N}}}=\frac{\gamma_{\rm N}}{2D_{\rm N_{2},N}}\sqrt{\frac{k_{B}T}{2\pi m_{\rm N}}}\right)_{\eta=L}\;, \tag{A.5}\] where \(k_{B}\) is the Boltzmann constant. The last expression can be solved iteratively through the Newton-Raphson method. The solution describes the species distribution as a function of the spatial variable \(\eta\).
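Since Eq. (A.5) is transcendental in \(C_{1}\), a short numerical illustration may help. The sketch below solves it with the Newton-Raphson iteration mentioned above, working with the nondimensional unknown \(a=C_{1}M_{\rm N}L\); all physical parameter values (\(\gamma_{\rm N}\), \(D_{\rm N_{2},N}\), \(T\), \(L\)) are placeholders chosen only for illustration, not the values used in the paper's 1-D test case.

```python
import math

# Illustrative parameters (placeholders, not the paper's values)
k_B = 1.380649e-23       # Boltzmann constant [J/K]
N_A = 6.02214076e23      # Avogadro constant [1/mol]
M_N = 14.007e-3          # molar mass of atomic N [kg/mol]
m_N = M_N / N_A          # mass of one N atom [kg]
T = 300.0                # temperature [K]
L = 1.0e-3               # domain length [m]
gamma_N = 0.1            # recombination coefficient (assumed)
D_N2_N = 1.0e-2          # binary diffusion coefficient [m^2/s] (assumed)

# Right-hand side of Eq. (A.5)
rhs = gamma_N / (2.0 * D_N2_N) * math.sqrt(k_B * T / (2.0 * math.pi * m_N))

# Rewrite Eq. (A.5) in terms of a = C1*M_N*L:  a = rhs*L*(2 - exp(a)).
# f(a) = a - rhs*L*(2 - exp(a)) is increasing and convex on (0, ln 2),
# so Newton-Raphson started at a = ln 2 converges monotonically.
def f(a):
    return a - rhs * L * (2.0 - math.exp(a))

def fprime(a):
    return 1.0 + rhs * L * math.exp(a)

a = math.log(2.0)
for _ in range(50):
    step = f(a) / fprime(a)
    a -= step
    if abs(step) < 1e-14:
        break

C1 = a / (M_N * L)
C2 = math.log(M_N) / M_N   # Eq. (A.4)

# Resulting profile, Eq. (A.3): x_N2(eta) = exp(C1*M_N*eta)*exp(C2*M_N)/M_N - 1
for eta in (0.0, 0.5 * L, L):
    x_n2 = math.exp(C1 * M_N * eta) * math.exp(C2 * M_N) / M_N - 1.0
    print(f"eta = {eta:.2e} m   x_N2 = {x_n2:.4f}")
```

With these placeholder values the converged solution gives \(x_{\rm N_{2}}(L)\approx 0.5\), i.e., partial recombination at the catalytic surface; the same routine can be reused with the actual case parameters.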
2309.11018
Conformalized Multimodal Uncertainty Regression and Reasoning
This paper introduces a lightweight uncertainty estimator capable of predicting multimodal (disjoint) uncertainty bounds by integrating conformal prediction with a deep-learning regressor. We specifically discuss its application for visual odometry (VO), where environmental features such as flying domain symmetries and sensor measurements under ambiguities and occlusion can result in multimodal uncertainties. Our simulation results show that uncertainty estimates in our framework adapt sample-wise against challenging operating conditions such as pronounced noise, limited training data, and limited parametric size of the prediction model. We also develop a reasoning framework that leverages these robust uncertainty estimates and incorporates optical flow-based reasoning to improve prediction accuracy. Thus, by appropriately accounting for predictive uncertainties of data-driven learning and closing their estimation loop via rule-based reasoning, our methodology consistently surpasses conventional deep learning approaches on all these challenging scenarios (pronounced noise, limited training data, and limited model size), reducing the prediction error by 2-3x.
Domenico Parente, Nastaran Darabi, Alex C. Stutts, Theja Tulabandhula, Amit Ranjan Trivedi
2023-09-20T02:40:59Z
http://arxiv.org/abs/2309.11018v1
# Conformalized Multimodal Uncertainty Regression and Reasoning

###### Abstract

This paper introduces a lightweight uncertainty estimator capable of predicting multimodal (disjoint) uncertainty bounds by integrating conformal prediction with a deep-learning regressor. We specifically discuss its application for visual odometry (VO), where environmental features such as flying domain symmetries and sensor measurements under ambiguities and occlusion can result in multimodal uncertainties. Our simulation results show that uncertainty estimates in our framework adapt sample-wise against challenging operating conditions such as pronounced noise, limited training data, and limited parametric size of the prediction model. We also develop a reasoning framework that leverages these robust uncertainty estimates and incorporates optical flow-based reasoning to improve prediction accuracy. Thus, by appropriately accounting for predictive uncertainties of data-driven learning and closing their estimation loop via rule-based reasoning, our methodology consistently surpasses conventional deep learning approaches on all these challenging scenarios (pronounced noise, limited training data, and limited model size), reducing the prediction error by 2-3\(\times\).

Conformal inference. Visual odometry.

## 1 Introduction

Cutting-edge deep learning frameworks are striving not only for accurate point predictions but also to express predictive uncertainties. This is achieved by incorporating uncertainty-aware learning and prediction techniques such as Bayesian neural networks, Gaussian processes, Monte Carlo dropout, variational inference, _etc._ [1, 2, 3, 4]. However, the robustness of learning uncertainty-aware prediction comes at a considerable computational cost. For example, methods like Monte Carlo dropout require multiple model samplings to capture the predictive model's distribution, where each sampling entails running model inference. For applications such as autonomous drones, where predictions must be made in real-time and with limited onboard resources, the computational challenges for uncertainty-aware predictions thus become excessive [5, 6]. To address these computational limitations of uncertainty-aware deep learning, there has been a notable surge of interest in _conformal inference_ [7, 8, 9, 10, 11]. This approach offers a systematic framework for estimating predictive uncertainty intervals by incorporating calibration measures into training procedures. The process entails training a learning model on a labeled dataset and utilizing calibration data to construct a predictive uncertainty region around each test instance. In the case of classification problems, conformal inference adapts them into a minimal set prediction task, where there is a high confidence that the true class lies within the prediction set. Likewise, regression problems transform into minimal interval prediction tasks, ensuring that the true value falls within the interval with high confidence. Consequently, by employing conformal prediction, learning models can provide point predictions _as well as_ a reliable measure of confidence associated with each prediction. A key computational advantage of conformal inference is its explicit focus on the uncertainty bounds rather than computing the distribution, which enhances its computational efficiency.
_Yet_, a notable drawback of current conformal inference methods for regression is that they can only predict contiguous (single mode) uncertainty intervals [12, 10, 13, 14]. Meanwhile, predictive uncertainties can be multi-modal in many cases. For example, in Fig. 1, consider an autonomous drone navigating a hallway with multiple similar rooms, doors, and intersections. Determining the vehicle's position in such surroundings will likely involve multimodal uncertainties due to environmental symmetries. Additionally, sensor measurements can be noisy or prone to occlusions, resulting in multiple plausible interpretations. Hence, computationally efficient multimodal uncertainty extraction is crucial. Fig. 1: **Multimodal Uncertainties:** Environmental features such as flying domain symmetries and sensor measurements under ambiguities and occlusion can result in multimodal (disjoint) uncertainty bands in practical applications such as visual odometry. We develop lightweight multimodal uncertainty regression and reasoning to address these practical challenges. Addressing these limitations, our work makes the following key contributions: Using visual odometry (VO) as a driving application, we present a novel conformalized multimodal uncertainty regression by transforming regression into a set prediction problem. Results show that the uncertainty estimates adapt sample-wise against varying operating conditions such as input noise, limited training data, and the model's limited parametric size. We also present multimodal uncertainty reasoning that leverages the robust uncertainty estimates and closes the estimation loop via rule-based reasoning. Specifically, under challenging scenarios such as extreme noise, limited training data, and limited computational constraints, our integrated methodology consistently outperforms traditional deep learning by a 2-3\(\times\) improvement in regression accuracy. ## 2 Multimodal Uncertainty Regression Overcoming the limitations of current conformalized regression methods [10, 12, 13] and using VO as a driving application, we present a novel conformalized multimodal uncertainty regression. VO is a predominant computer vision technique in robotics to estimate the pose of a mounted camera on a drone/robot [15, 16, 17]. Deep learning-based VO employs neural networks to automatically learn and extract features from consecutive image frames to estimate the camera's pose. Our method reformulates the learning-based pose regression task as a _set prediction problem_. In this context, the predicted set can represent uncertainty intervals that are not confined to being contiguous. To achieve this transformation, we extract a calibration set from the initial training data, enabling the conversion of the pose classification challenge into a set prediction task through conformalization, as depicted in Fig. 2(b). The first step involves segmenting the drone's navigational space into \(K\) unique sets, spanning both position and orientation (or pose) dimensions. This segmentation is achieved using a non-uniform space discretization, which is informed by the training set trajectories within each dimension. These trajectories are split into \(K\) quantiles to establish class boundaries. The quantile-based non-uniform discretization groups frequently visited poses into more confined spatial intervals. This allows for their identification with enhanced precision. The outcome of this space encoding process is a one-hot encoded matrix, contingent on the \(K\) classes. 
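A minimal NumPy sketch of this quantile-based discretization step may help make it concrete; the number of classes, the synthetic pose array, and the variable names are illustrative rather than taken from the paper's implementation.

```python
import numpy as np

def quantile_class_edges(train_values: np.ndarray, K: int) -> np.ndarray:
    """Split one pose dimension into K classes using quantiles of the training trajectories."""
    # K-1 interior edges so that frequently visited poses get narrower intervals.
    return np.quantile(train_values, np.linspace(0.0, 1.0, K + 1)[1:-1])

def to_one_hot(values: np.ndarray, edges: np.ndarray, K: int) -> np.ndarray:
    """Map continuous pose values to one-hot rows over the K interval classes."""
    classes = np.digitize(values, edges)          # integers in [0, K-1]
    one_hot = np.zeros((values.size, K))
    one_hot[np.arange(values.size), classes] = 1.0
    return one_hot

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x_train = rng.normal(0.0, 2.0, size=5000)     # stand-in for one pose dimension
    K = 10
    edges = quantile_class_edges(x_train, K)
    labels = to_one_hot(x_train[:5], edges, K)
    print("class edges:", np.round(edges, 2))
    print("one-hot rows:\n", labels)
```

The same routine is applied independently to every position and orientation dimension, which is what produces the per-dimension interval classes described next.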
In the subsequent stage, a neural network extracts features from input images. These features are then relayed to a multi-head classifier, as illustrated in Fig. 2(a). Each classifier head generates the softmax scores for interval classes along its respective pose dimension. Based on the softmax scores, our procedure then targets the _sample-adaptive minimum prediction set_ \(C(X)\subset\{1,...,K\}\) such that the correct class resides within the set with \(1-\alpha\) probability, where \(\alpha\) denotes an arbitrary miscoverage rate (e.g., 10%). This marginal coverage can be expressed as \(1-\alpha\leq\mathbb{P}\{Y\in C(X)\mid X=x\}\leq 1-\alpha+\frac{1}{n+1}\), where \(n\) denotes the size of the calibration set. Conformal scores arise by deducting the softmax output of the appropriate class for every input from one. Finally, \(\hat{q}\) is computed as the \(\frac{\lceil(n+1)(1-\alpha)\rceil}{n}\) empirical quantile of these conformal scores, and the conformalized prediction set \(C(X)\) is formed by incorporating classes with softmax scores greater than \((1-\hat{q})\). The product of the predicted set of classes along each pose dimension produces the net uncertainty regions, which are not necessarily contiguous.

Figs. 2(c-d) show the example multimodal uncertainty regions predicted using the above procedure on the KITTI dataset [18]. The uncertainty regions are cuboid-shaped due to the product of intervals along respective pose dimensions. In Fig. 2(d), the uncertainty cuboids contract as the number of classes \(K\) increases from ten to fifty due to higher precision interval classification. The multimodality and non-contiguity of uncertainty intervals are evident in the figure. The uncertainty estimates also adapt to the drone's motion intricacies. For example, the predictive uncertainty increases at sharp turns.

Figure 3: Optical flow-based correspondence between consecutive frames.

Figure 2: **Multimodal Conformalized Regression: (a) Our framework for embedding conformalization in a deep learning regressor. (b) Example demonstration of our scheme by conformalization of softmax scores of multiple interval classes resulting in multimodal (disjoint) uncertainty interval prediction. Sample demonstration of our scheme for (c) 10-class and (d) 50-class multi-head configuration.**

## 3 Multimodal Uncertainty Reasoning

### Relative Motion Estimation

For reasoning among multimodal uncertainty estimates, our framework determines the relative motion between two frames using the Harris corner detector [20] as shown in Fig. 3. The process identifies key feature points, often located at angles or regions with intensity discontinuities in the first image. Subsequently, the Lucas-Kanade algorithm [21] pinpoints the corresponding key points in the second frame. The algorithm presumes minimal movement of interest points between consecutive frames, which is a reasonable assumption under a sufficiently high sampling rate and consistent brightness. Under this assumption, starting from the optical flow equation for each point \((x,y)\): \(I(x+dx,y+dy,t+dt)\approx I(x,y,t)\), a linear approximation of the optical flow in a neighborhood of the points of interest is computed using a Taylor series. The algorithm assumes that the change in brightness of a pixel of the scene is totally compensated by the gradient of the scene itself, i.e., \(I_{x}u+I_{y}v+I_{t}=0\).
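A minimal OpenCV sketch of this feature-tracking step (corner detection followed by pyramidal Lucas-Kanade tracking) is given below. The file names and tracking parameters are illustrative, and OpenCV's goodFeaturesToTrack routine, used here with its Harris-detector option, stands in for the corner detection stage.

```python
import cv2

# Illustrative file names; any two consecutive grayscale frames will do.
frame0 = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
frame1 = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)

# 1) Detect key feature points in the first frame (Harris-based corner response).
pts0 = cv2.goodFeaturesToTrack(
    frame0, maxCorners=500, qualityLevel=0.01, minDistance=7,
    useHarrisDetector=True, k=0.04,
)

# 2) Track them into the second frame with pyramidal Lucas-Kanade.
pts1, status, err = cv2.calcOpticalFlowPyrLK(
    frame0, frame1, pts0, None, winSize=(21, 21), maxLevel=3,
)

# 3) Discard points with large tracking discrepancies, keeping matched pairs.
good = status.ravel() == 1
x0 = pts0[good].reshape(-1, 2)
x1 = pts1[good].reshape(-1, 2)
print(f"tracked {len(x0)} / {len(pts0)} feature points")
```

The matched point pairs x0 and x1 are the correspondences on which the essential-matrix step discussed next operates.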
Therefore, the displacement vector \([dx,dy]\) can be estimated by minimizing the squared error between points in the initial image and their counterparts in the subsequent frame. Based on the computed optical flow, points of interest are continuously updated, discarding those with significant tracking discrepancies, which also yields the key feature points for the second frame. Based on the feature correspondence, the relative motion between two poses can be extracted by exploiting the essential matrix \(E\), given by \(x_{1}^{T}\cdot E\cdot x_{0}=0\), where \(x_{0}\) is a corresponding point in the first image and \(x_{1}\) is the corresponding point in the destination image. By factoring the matrix using singular value decomposition (SVD), the rotation matrix \(R\) and the translation unit vector \(t\) can be extracted as in [22]. Notably, essential matrix-based rotation and translation estimates often prove challenging in practice due to their sensitivity to small errors in point correspondences, leading to significant inaccuracies in the derived transformations. Despite these challenges, we leverage them solely for discerning uncertainty intervals, i.e., not as the main predictor. As elaborated in the subsequent section, such integrated prediction combining uncertainty-aware deep learning and optical flow-based reasoning yields impressive predictive accuracy across benchmarks, including noise robustness, sample efficiency, and parametric efficiency.

### Uncertainty Discrimination

After determining the relative rotation and translation between two frames, the next step involves discerning among multiple uncertainty intervals to ascertain the optimal value. We initiate this by computing the mean value within each uncertainty interval to approximate the mean directions of displacements. Given that distinct predicted bounds exist for every dimension, we evaluate all possible combinations to identify the most fitting value. The algorithm takes the first pose with the highest softmax score from the prediction model and then iteratively searches for the best pose of the next frame. To find the best prediction of the orientation corresponding to the next frame, we first calculate the corresponding rotation matrix as \(R_{next}=R_{relative}\cdot R_{previous}\), then transform it into a quaternion \(q_{next}\), and finally look for the best prediction by solving: \[\underset{q_{\text{predicted}}\in Q}{\text{minimize}}\quad\|q_{\text{predicted}}-q_{\text{next}}\| \tag{1}\] where \(Q\) is the set of all quaternions describing the orientation of the camera that took the frame. Likewise, to find the best prediction of the spatial position corresponding to the next frame, we use the translation direction \(t_{relative}\) calculated in the relative motion estimation phase and solve: \[\underset{t_{\text{predicted}}\in T}{\text{minimize}}\quad\left\|t_{\text{relative}}-\frac{t_{\text{predicted}}-t_{\text{previous}}}{|t_{\text{predicted}}-t_{\text{previous}}|}\right\| \tag{2}\] where \(T\) is the set of all translation vectors describing the position of the camera that took the frame. Solving these optimization steps selects the uncertainty interval with the highest likelihood for mean prediction and uncertainty discrimination on successive predictions.

Figure 4: **Sample Efficiency:** In the right panel, the proposed framework maintains high accuracy despite a significant reduction in training data. Predictive uncertainties adaptively increase in our framework with lower training data. _[Testcase: RGB-D dataset [19], first test sequence, 50-class multihead]_

Figure 5: **Parametric Efficiency:** In the left panel, the proposed uncertainty-aware prediction framework maintains high accuracy despite a smaller network size of four million parameters.
In the right panel, the proposed framework reduces overfitting risks by uncertainty-oriented predictions. _[Testcase: RGB-D dataset [19], first test sequence, 50-class multihead]_

## 4 Simulation Results

In Table 1 and Figs. 4-6, we compare the proposed framework to a conventional regression model on _sample efficiency_ (i.e., at varying sizes of labeled training data), _parametric efficiency_ (i.e., the necessary deep learning model size), and _noise robustness_. For the comparative study, we only modify the training procedure from conventional to conformalized uncertainty-based reasoning while keeping all other settings, such as feature extraction, training data, _etc._, the same.

**Sample Efficiency:** Gathering sufficient labeled data for applications like VO poses significant challenges [23]. Under these practical challenges, Fig. 4 compares the training sample efficiency of the proposed framework against the conventional. In the right panel, the proposed framework maintains high accuracy even when the training data reduces to 40% of the original. Notably, under limited training data, our framework appropriately expresses higher predictive uncertainties, as seen in the left _vs._ right panels.

**Parametric Efficiency:** Smaller deep networks are essential for edge computing of VO to minimize computational and memory demands, enabling real-time processing on resource-constrained devices. In Fig. 5, we compare the performance of our framework against conventional deep learning regression at two variants of the feature extractor: EfficientNet-b0 (with four million parameters) and EfficientNet-b7 (with \(\sim\)sixty-four million parameters) [24]. Across these diverse configurations, the performance of the conformal models remained strikingly consistent despite the substantial variation in model parameters. While deep learning-based regressors underfit or overfit at too small or too large a model, in the proposed framework, the parametric resources are used under the guidance of uncertainty calibration. Our framework's mean prediction curves in the left and right panels are consistent. In contrast, the uncertainty bounds reduce with higher parameters in the right panel for EfficientNet-b7.

**Noise Robustness:** Input noise tolerance is crucial for VO to ensure accurate pose estimation under challenging environmental conditions such as low light scenarios. In Fig. 6, we compare predictions from both procedures while injecting Gaussian noise into input image pixels under varying noise variance (\(\sigma\)). Notably, the proposed framework also shows exquisite tolerance to input image noise and responds to higher noise levels in input images by expanding predictive uncertainty intervals. Table 1 compares the RMSE (Root Mean Square Error) of pose prediction trajectories on the first testing sequence of the RGB-D Scenes dataset [19] in classical deep learning-based regression against our procedure. By systematically acknowledging the variabilities in training data, model size, and input noise, our multimodal uncertainty-aware procedure improves prediction accuracy by \(\sim\)2-3\(\times\) across all these diverse variations.
When faced with multiple modes of uncertainty (i.e., disjoint uncertainty intervals), the reasoning framework is able to evaluate each mode's implications separately, thus capturing these complexities more accurately than an unimodal distribution, which otherwise oversimplifies. ## 5 Conclusions We presented a novel conformalized multimodal uncertainty regressor and reasoning framework for VO. The proposed framework demonstrated \(\sim\)2-3\(\times\) lower prediction error than conventional deep learning under challenging scenarios such as lower training data, model size, and extreme input noise. \begin{table} \begin{tabular}{l c c c} \hline \hline **Metric** & **Conformal** & **Classical** & **Improvement** \\ \hline 40\% total data & 0.0534 & 0.136 & 2.5\(\times\) \\ 80\% total data & 0.026 & 0.078 & 3\(\times\) \\ \hline Efficientnet-b0 (4M) & 0.029 & 0.083 & 2.9\(\times\) \\ Efficientnet-b3 (10.7M) & 0.031 & 0.077 & 2.5\(\times\) \\ Efficientnet-b7 (63.8M) & 0.028 & 0.088 & 3.1\(\times\) \\ \hline No noise & 0.050 & 0.136 & 2.7\(\times\) \\ Gaussian (\(\sigma=0.05\)) & 0.055 & 0.126 & 2.3\(\times\) \\ Gaussian (\(\sigma=0.1\)) & 0.073 & 0.132 & 1.8\(\times\) \\ Gaussian (\(\sigma=0.2\)) & 0.099 & 0.201 & 2\(\times\) \\ \hline \hline \end{tabular} \end{table} Table 1: Comparison of RMSE (Root Mean Square Error) of Pose Prediction Trajectories in Classical Inference vs. Our Uncertainty-Aware Inference. Figure 6: **Noise Robustness:** In the right panel, the proposed framework shows exquisite resilience to extreme noise. Predictive uncertainty levels appropriately adapt to input image noise levels. _[Testcase: RGB-D dataset [19], first test sequence, 50-class multihead]_
2309.16777
How many words does ChatGPT know? The answer is ChatWords
The introduction of ChatGPT has put Artificial Intelligence (AI) Natural Language Processing (NLP) in the spotlight. ChatGPT adoption has been exponential with millions of users experimenting with it in a myriad of tasks and application domains with impressive results. However, ChatGPT has limitations and suffers from hallucinations, for example producing answers that look plausible but are completely wrong. Evaluating the performance of ChatGPT and similar AI tools is a complex issue that is being explored from different perspectives. In this work, we contribute to those efforts with ChatWords, an automated test system, to evaluate ChatGPT knowledge of an arbitrary set of words. ChatWords is designed to be extensible, easy to use, and adaptable to evaluate also other NLP AI tools. ChatWords is publicly available and its main goal is to facilitate research on the lexical knowledge of AI tools. The benefits of ChatWords are illustrated with two case studies: evaluating the knowledge that ChatGPT has of the Spanish lexicon (taken from the official dictionary of the "Real Academia Española") and of the words that appear in the Quixote, the well-known novel written by Miguel de Cervantes. The results show that ChatGPT is only able to recognize approximately 80% of the words in the dictionary and 90% of the words in the Quixote, in some cases with an incorrect meaning. The implications of the lexical knowledge of NLP AI tools and potential applications of ChatWords are also discussed, providing directions for further work on the study of the lexical knowledge of AI tools.
Gonzalo Martínez, Javier Conde, Pedro Reviriego, Elena Merino-Gómez, José Alberto Hernández, Fabrizio Lombardi
2023-09-28T18:13:02Z
http://arxiv.org/abs/2309.16777v1
# How Many Words Does ChatGPT Know?

###### Abstract

The introduction of ChatGPT has put Artificial Intelligence (AI) Natural Language Processing (NLP) in the spotlight. ChatGPT adoption has been exponential with millions of users experimenting with it in a myriad of tasks and application domains with impressive results. However, ChatGPT has limitations and suffers from hallucinations, for example producing answers that look plausible but are completely wrong. Evaluating the performance of ChatGPT and similar AI tools is a complex issue that is being explored from different perspectives. In this work, we contribute to those efforts with ChatWords, an automated test system, to evaluate ChatGPT knowledge of an arbitrary set of words. ChatWords is designed to be extensible, easy to use, and adaptable to evaluate also other NLP AI tools. ChatWords is publicly available and its main goal is to facilitate research on the lexical knowledge of AI tools. The benefits of ChatWords are illustrated with two case studies: evaluating the knowledge that ChatGPT has of the Spanish lexicon (taken from the official dictionary of the "Real Academia Espanola") and of the words that appear in the Quixote, the well-known novel written by Miguel de Cervantes. The results show that ChatGPT is only able to recognize approximately 80% of the words in the dictionary and 90% of the words in the Quixote, in some cases with an incorrect meaning. The implications of the lexical knowledge of NLP AI tools and potential applications of ChatWords are also discussed, providing directions for further work on the study of the lexical knowledge of AI tools.

## 1 Introduction

The introduction of generative AI tools capable of creating images from text descriptions such as DALL-E or Stable Diffusion [1] and AI natural language processing tools based on large language models such as ChatGPT have put generative AI in the spotlight. ChatGPT reached one million users in the first week after its introduction and more than 100 million within its first two months, thus becoming the computing technology with the fastest adoption ever [2]. ChatGPT is now being used in a myriad of applications, and the technology has been incorporated into a variety of products [3]. ChatGPT achieves impressive performance and can answer a wide variety of questions on many different subjects as well as hold sophisticated conversations, translate or summarize text, follow instructions, or generate code among many other tasks [4]. However, it also has limitations and suffers from hallucinations, producing wrong responses that contain incorrect or false data but look plausible [5, 6]. Therefore, significant efforts are being made to better understand its capabilities and limitations [5]. Recent studies try to automate most of the testing to be able to evaluate ChatGPT performance over thousands or tens of thousands of samples from different publicly available datasets, for example in different types of question and answer tasks [7, 8]. Automation enables not only testing large datasets but also evaluating different ChatGPT parameters and contexts. These studies focus on the performance of specific tasks such as question and answer and instructions. For AI tools that generate content such as ChatGPT, it is also relevant to understand which type of content is generated and in particular its fidelity and diversity [9], especially when the AI tools are used massively, and their content populates the Internet [10].
Moreover, the use of AI-generated data for training can lead to performance degradation and even to a model collapse [11, 12]. In the case of ChatGPT, analyzing the type of content generated is complex because it depends on the task, context, and language [13]. This has triggered research, for example, to try to understand phonological aspects [14], linguistic patterns [15] or lexical richness [16] of AI-generated text and compare them with those of human-written text. In this article, we contribute to the evaluation of ChatGPT by assessing its knowledge of the words of a language. One of the basic features of a language is its vocabulary (words). The Real Academia Espanola (RAE) recognizes more than 90,000 different words1, the Oxford English Dictionary recognizes 171,476 words2 in use and the Larousse Illustré 63,800 words3 in French. However, the average person uses only a small fraction of those [17, 18], typically between one-third and one-half for the most common Indoeuropean languages. Books can also be categorized or ranked based on the richness of their vocabulary. For instance, the Spanish Quixote [19] (Don Quijote de la Mancha) comprises 382,477 words, of which 22,632 are different words, so reading the Quixote is a lexically demanding task. Footnote 1: [https://www.rae.es/obras-academicas/diccionarios/diccionario-de-la-lengua-espanola](https://www.rae.es/obras-academicas/diccionarios/diccionario-de-la-lengua-espanola) Footnote 2: [https://languages.oup.com/](https://languages.oup.com/) Footnote 3: [https://www.editions-larousse.fr/livre/grand-larousse-illustrre-2023-9782035938718](https://www.editions-larousse.fr/livre/grand-larousse-illustrre-2023-9782035938718) Therefore, it is of interest to evaluate the knowledge that AI tools in general, and ChatGPT in particular, have of the words of languages. Interestingly, this process can be automated as the list of words is typically available in dictionaries and prompts can be automatically designed such that ChatGPT produces answers that can be analyzed by a simple parser. In this work, we present ChatWords, a tool to automatically test the knowledge that ChatGPT has of an arbitrary set of words. ChatWords has been designed with a modular approach to enable customizations and additions and to facilitate the evaluation of different lexicons and AI tools. The potential applications include assessing the knowledge of the lexicon of different languages, but also of those of specific domains, or even of incorrect words that may have been wrongly learned by ChatGPT. ChatWords is open-source and publicly available4. To illustrate the benefits of ChatWords, it has been used to evaluate the knowledge that ChatGPT has of the Spanish lexicon and of the words in _The Quixote_. The results show that ChatGPT does not recognize approximately 20% of the Spanish lexicon. Footnote 4: [https://github.com/WordsGPT/ChatWords](https://github.com/WordsGPT/ChatWords) The rest of the paper is organized as follows. Section 2 briefly introduces the importance of words in a language and how they are used. Section 3 discusses the testing of the knowledge that ChatGPT has of a word as well as the process that can be automated together with the limitations that automation introduces. ChatWords is presented in section 4 and its potential benefits are illustrated in section 5 with two case studies.
The potential implications of the lexical knowledge of AI tools like ChatGPT are discussed in that section, followed by a brief analysis of potential applications of ChatWords in section 6. The paper ends with the conclusion in section 7.

## 2 Words: the bricks of a language

"In the beginning was the Word...": this statement from the Gospel of John in the New Testament is often used to introduce the importance of words in languages, since our understanding and analysis of languages start from words [20]. The lexicon of any language, defined as the complete list of all its words, forms the very foundation of human language. It is derived from the Greek term "lexis" (λέξις), which means "speech", "way of speaking" and "word". Words serve as the medium through which the meanings of universal knowledge are represented, making them an essential part of our heritage. For instance, words classified as obsolete, often excluded from general dictionaries of "current language," are crucial for understanding texts from different eras. The permanent loss of words can result in the loss of essential keys to understanding the world. Throughout history, efforts have been made to document and preserve every word of a language, including those that have fallen out of use, those currently in use, and new entries. Since the 19th century, there has been a proliferation of dictionaries that collect historical lexicography in various languages. The desire for completeness has, at times, resulted in monumental works such as the TLL (Thesaurus Linguae Latinae)5, which was started in 1894 and is expected to be completed around 2050. However, attempts to carry out parallel work for the Greek TLG (Thesaurus Linguae Graecae) have encountered insurmountable difficulties [21, 22]. During the 20th century, the utilization of _corpora_ became increasingly prevalent. These compilations, encompassing a diverse array of language samples, serve various objectives, including the comprehensive documentation of languages' lexicons. This documentation not only offers insights from a historical standpoint, but it also provides valuable information about usage patterns. The emergence of digital technology and advancements in database management in the 21st century have facilitated a surge in the volume of entries and their systematic organization. This development has revitalized and propelled forward the _corpora_ projects that were originally initiated in the 20th century. Thus, the Oxford English Corpus (OEC) contains in its latest version nearly 2.1 billion words6, the _Corpus de Referencia del Espanol Actual_ (CREA) contains more than 160 million forms7, and the _Corpus Diacronico del Espanol_ (CORDE) contains 250 million records8 [23]; the _Digitales Worterbuch der Deutschen Sprache (DWDS)_ includes around 50 billion words from historical and contemporary collections9. However, in the case of the _Corpus de reference du francais contemporain_ (CRFC), its preparation is acknowledged to lag behind that of the corpora for other major languages [24].
Footnote 5: [https://thesaurus.badw.de/tll-digital/tll-open-access.html](https://thesaurus.badw.de/tll-digital/tll-open-access.html) Footnote 6: [https://www.sketchengine.eu/oxford-english-corpus](https://www.sketchengine.eu/oxford-english-corpus) Footnote 7: [https://www.rae.es/banco-de-datos/crea](https://www.rae.es/banco-de-datos/crea) Footnote 8: [https://www.rae.es/banco-de-datos/corde](https://www.rae.es/banco-de-datos/corde) Footnote 9: [https://www.duds.de/r](https://www.duds.de/r)

Nowadays, online dictionaries allow for the inclusion of a vast repository of words that were previously discarded and removed from current-use dictionaries of all languages due to the limitations of physical formats. While the deletion of words that had fallen out of use and the inclusion of new entries has been a constant throughout the history of dictionaries [25], the general trend now is to keep all lemmas, to the point of suggesting a "guarantee of comprehensiveness" [26]. Thanks to the possibilities of digital storage and their easy management, lexical repositories are now updated and categorized in real-time, allowing for the almost utopian goal of total conservation from now on. This scenario is contrasted, however, with a possible impoverishment of language in use due to AI learning mechanisms. The overuse of certain words by humans, at the expense of others, could lead to biased learning of a language's lexicon by AI, despite the language being inherently broader and more varied. The constant repetition and degradation processes evident in interactions with AI could compel it, in its pursuit of emulating natural human discourse, to use fewer and fewer words, despite its theoretical access to expansive online digital repositories. The ramifications associated with the potential loss of lexicon for universal knowledge, along with the imperative to safeguard each and every component of it, are succinctly encapsulated in Wittgenstein's renowned statement 5.6 from his _Tractatus_: "The limits of my language mean the limits of my world." [27]. An interesting observation is that words are used with very different frequencies, and only a small fraction of the lexicon concentrates most of the utterances [28], following the so-called Zipf distribution. The Zipf distribution is the discrete version of the continuous Pareto distribution, used by the economist Pareto to describe the unfair distribution of wealth among people. The same Zipf behaviour is observed in multiple natural phenomena, including the Internet and social networks [29]. As discussed in the introduction, most persons only know a fraction of the words [17]. For Spanish, the number of words that an average person can recognize has been estimated to be around 30,000, which is approximately one-third of the words in the dictionary [18].

## 3 Testing if ChatGPT knows a word: automation and limitations

To evaluate if ChatGPT knows a given word, for example "dog", we can simply create an input prompt like, "Do you know the meaning of the word dog in English?" and let ChatGPT answer. Then read the answer to check if ChatGPT produces a valid description of the meaning. This, however, does not scale if we want to test all the words in the dictionary because we have to manually check ChatGPT's response for each and every single word. Therefore, testing must be automated.

### Automating the testing process

To automate testing, we need prompts that instruct ChatGPT to produce an answer that can be processed automatically.
Therefore, the prompts must be designed such that responses can be easily parsed to extract the relevant information. In more detail, we can for example explicitly ask ChatGPT to answer only "yes" or "no", making our questions Boolean. Note that even Boolean questions may not be trivial and may require complex reasoning [30]. This is not our case; we try to make the questions as simple and direct as possible. For example, we use the following prompts:

1. Prompt #1: "Do you know the meaning of the word "X" in Spanish? Please answer yes or no."
2. Prompt #2: "Is "X" a correct word in Spanish? Please answer, yes or no."
3. Prompt #3: "Is "X" a valid word in Spanish? Please answer, yes or no."
4. Prompt #4: "Is the word "X" in the Dictionary of the Real Academia Espanola (RAE)? Please answer yes or no."

These prompts are simple, and the responses are easy to parse, so they can be processed automatically. More elaborate prompts can be used to ask, for example, for the meaning of the word and try to use metrics like perplexity [31] to evaluate the confidence that the tool has in its response. Similarly, the response can be compared with the meaning of the word extracted from a dictionary. However, the automation of such a checking process becomes significantly more complex. Therefore, we will use simple prompts like the ones listed above, leaving the use of more complex textual responses for future work. An intermediate approach is to get the meaning of the word from a dictionary and then ask ChatGPT to give a word with that meaning that starts with the first letters of the word. This may allow automation by checking if the word is part of ChatGPT's answer, which is similar to playing the alphabet game with ChatGPT. This could be the next step toward the automation of more elaborate prompts.

### Limitations of automation

To understand the limitations introduced by these simple prompts in the analysis, 100 words have been randomly selected from the dictionary of the Real Academia Espanola and manually tested using additional prompts to check whether the meaning that ChatGPT gives for a word is correct when it answers yes. In more detail, the prompt: "Do you know the meaning of the word X in Spanish?" was used. In five of the words, the response of ChatGPT was an incorrect meaning; an example is shown in Figure 1. Therefore, the results presented in our analysis are likely to overestimate the lexical knowledge of ChatGPT. However, the percentage of such mistakes was only 5% in our random sample and thus simple prompts can be used to get an initial estimate of the lexical knowledge of ChatGPT. In any case, automating the checking of the meaning of the words, rather than using a simple yes/no response, is of interest to detect the words that ChatGPT misinterprets.

## 4 ChatWords: automating lexical knowledge evaluation in ChatGPT

The basic idea behind our tool is that once the testing of a word has been automated, the same process can be run for a given lexicon such that all words are tested and all responses are parsed and compiled. Therefore, experiments can be defined to evaluate any arbitrary set of words for a specific AI model and configuration and then run. With this goal, ChatWords has been developed with two types of users in mind: 1) programmers or data scientists and 2) users who generally do not have programming skills, for example linguists, and want to use the tool from a simple graphical interface.
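For the first group, the core of an automated word test is a short loop that issues one of the prompts above for each word and parses the yes/no reply. The following sketch illustrates that loop; it assumes the OpenAI Python client, and the model name and parsing heuristic are illustrative rather than the exact ones used by ChatWords.

```python
from openai import OpenAI  # assumes the openai Python package (v1 interface)

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT_1 = 'Do you know the meaning of the word "{word}" in Spanish? Please answer yes or no.'

def knows_word(word: str, model: str = "gpt-3.5-turbo") -> bool:
    """Send Prompt #1 for one word and parse the yes/no answer."""
    response = client.chat.completions.create(
        model=model,
        temperature=0,  # deterministic answers, as in the case studies
        messages=[{"role": "user", "content": PROMPT_1.format(word=word)}],
    )
    answer = response.choices[0].message.content.strip().lower()
    # Simple heuristic parser: treat an answer starting with "yes"/"si" as positive.
    return answer.startswith(("yes", "si", "sí"))

if __name__ == "__main__":
    for w in ["perro", "lavacaras", "abogado"]:
        print(w, "->", "yes" if knows_word(w) else "no")
```

In the actual tool, such requests are issued asynchronously for thousands of words and the four prompts are combined per word; the sketch only shows the single-prompt building block.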
The goal is to support both advanced users who want to modify the tool or even code part of the tool and users who want an easy-to-use web-based application. In the rest of the section, we first describe the overall architecture of the tool and then describe both versions of the tool. The source code of ChatWords is available on a github repository10 offered under the GNU General Public License v3.0 (GPL-3.0). Footnote 10: [https://github.com/WordsGPT/ChatWords](https://github.com/WordsGPT/ChatWords)

### Tool architecture

The tool is divided into two parts, a program that interacts with ChatGPT using its API and a web application that interacts with the program to run experiments and store the results in a database. The interface between both parts is clearly defined, so that each part can be modified independently. The overall architecture is shown in Figure 2; users can interact with ChatWords through the ChatWords application to configure and run their experiments. This application calls a lower-level program that executes the experiments and sends the results to the Web application that then writes them in a simple database. The user can then access the results at any time from the database and keep a record of all experiments and results. Advanced users can directly run the lower-level program from a shell or command line and get the results in a file. The interfaces between the ChatWords program and the ChatWords application are clearly defined to enable programmers to create their own low-level programs; for example, advanced users may create programs that evaluate other AI tools instead of ChatGPT, or use a different programming language. Similarly, the low-level program developed in Python can be used with another implementation of the ChatWords application. This flexibility combined with the availability of the source code for both components is intended to facilitate extensions and modifications of ChatWords.

### ChatWords Program

The block diagram of the program is shown in Figure 3. The parameters and lexicon to evaluate are provided by the ChatWords application and the results are sent back to the ChatWords application. For each word received, prompts are generated and sent to ChatGPT using OpenAI's API. The configuration of ChatGPT is also determined by the user with the ChatWords application. The architecture is modular and intended to provide flexibility to users who want to add new functionalities. In addition to the ChatWords application, the ChatWords program can also be run on its own from the command line. The program is written in Python and can be run from a shell or in other ways outside the web; this gives the user the flexibility to adapt it to their needs. Requests are processed asynchronously, reaching a maximum speed of 3,500 prompts processed per second, which hits the limits of OpenAI's API11. Users can modify the speed of the experiment based on their needs or capabilities.

Figure 1: **Example of ChatGPT answering with an incorrect meaning for a word. “Lavacaras” means “flatterer” and ChatGPT answers with a meaning that seems taken from ”Lava” (wash) and ”Cara” (face).**

Figure 3: **Block diagram of the ChatWords program**

Figure 2: **Block diagram of ChatWords**

### ChatWords application

One of the main design goals of ChatWords is that it can be used by people without programming skills; to that end, a web-based user interface is also provided.
This version is programmed in Nest.js (a progressive Node.js framework for the development of web applications) for the back-end, and Angular (a web development JavaScript framework) for the front-end. The application stores the configuration and results for each experiment in a simple and easy-to-configure database; the application can be easily installed and run. The user has a web interface to select the configuration parameters and input file, and to load results from previous experiments. The tool is designed to support additional configuration parameters as needed; the prompts are also configurable by the user or the programmer. It also provides a template to ease the creation of new experiments for programmers. It is agnostic to the model used for the experiments; even new models not developed yet can be integrated easily. It acts as an intermediary system that helps to create experiments, configure their parameters, load the words to be tested, save the results after they have been obtained, and present the results in real time through its web interface. ChatWords aims to facilitate collaboration between linguists (often lacking technical programming skills) and data engineers. To this end, the ChatWords application allows linguists to set up experiments via a web interface and to upload large amounts of words. From the interface, the experiment can be started, stopped, and the results are shown in real time as they are generated; meanwhile, data engineers provide ChatWords programs with the logic of each experiment. ChatWords is not only model agnostic, but it is also agnostic to the configuration of the experiment itself. For example, in the ChatGPT API it is possible to select parameters such as the temperature or limit the number of tokens of the answer. Therefore, the graphical interface of ChatWords adapts itself to each experiment starting from a JSON configuration file provided by the data engineer that includes metadata about the experiment, such as for example:

{
  "name": "Template experiment",
  "description": "Description of template experiment",
  "configuration": {
    "model": {
      "type": "select",
      "options": [
        { "ChatGPT 3.5": "ChatGPT 3.5" },
        { "ChatGPT 4": "ChatGPT 4" }
      ]
    },
    "temperature": {
      "type": "number",
      "name": "Configuration param1",
      "placeholder": "Introduce the configuration parameter 1",
      "value": 0,
      "step": 0.1,
      "min": 0,
      "max": 1
    }
  }
}

The metadata includes the name of the experiment, the description, and the input of the form to be populated from the graphical interface. According to the previous template, ChatWords renders a form with a selector to choose the ChatGPT model (GPT 3.5 or GPT 4) and the temperature (which controls the degree of randomness in the generated text). Figure 4 shows a diagram with all the steps involved in setting up and running an experiment.

## 5 Case studies: Spanish Lexicon and the Quixote

To validate the functionality of ChatWords and illustrate the benefits of lexicon knowledge evaluation, the Spanish language lexicon and the words used in _The Quixote_ have been used as case studies, and they are discussed in the following subsections.

### Spanish Lexicon

The most authoritative reference source in Spanish is the dictionary produced by the Real Academia de la Lengua Espanola (RAE). The Academy is a typical Enlightenment institution, founded in Madrid in 1713. It was inspired by the precepts of the Academie Francaise, founded in the previous century, to regulate and perfect the French language.
However, since its inception, the interest of Spanish academics focused, among other goals, on the development of a dictionary, whose first edition was published in 1780 [32]. Twenty-three editions of the Dictionary of the Spanish Language (Diccionario de la lengua Española: DLE) have been published since the 18th century. The most recent one is the twenty-third, published in 2014. The dictionary includes lexical entries that belong to the entire Spanish-speaking context. The analysis of the lexicon and the decisions on modifications, additions, the inclusion of new meanings, amendments, etc. are adopted by consensus among the 23 academies of the Spanish language, present in Spain, America, the Philippines, and Equatorial Guinea. The successive updates of the dictionary are reflected through the different electronic versions. The current one is version 23.6, which includes new lemmas, completes etymologies, adds new meanings, or amends different aspects of previous versions12. The dictionary provides information, among other issues, on the specific geographical use of words that are not used generally in the Spanish-speaking community, as well as terms that are rarely used, archaic, out of use, poetic, vulgar, etc.

Figure 4: **Diagram for setting up and running an experiment**

The words of the RAE dictionary were downloaded from the RAE website13 using a publicly available scraper14 that was modified to remove the gender forms of words. The result is a list of 91,168 words that we use to evaluate ChatGPT. To achieve a more balanced version of the list, only the verbal infinitives are included, eliminating all their inflectional variants, which in Spanish amount to approximately 80 forms per verb. Variants of gender and number have also been eliminated, using the singular masculine generic as the unmarked lemma. For example, 'abogado' is the chosen lemma and its variants of gender, "abogada", and number, "abogados", "abogadas", are discarded. The final number of words included in the file used for evaluation is, in any case, very similar to that in the twenty-third edition of the dictionary, the most recent, with 93,111 entries [33].

Footnote 13: [https://dle.rae.es/](https://dle.rae.es/)

Footnote 14: [https://github.com/JorgeDuenasLerin/diccionario-espanol-txt](https://github.com/JorgeDuenasLerin/diccionario-espanol-txt)

### The Quixote

In the second case study, we analyze a specific case, the words used in _The Quixote_, the novel by Miguel de Cervantes. For this evaluation, the commemorative edition of the IV centenary of _Don Quixote de la Mancha_, published by the RAE and the Association of Academies of the Spanish Language [19], has been used. This text showcases a spelling update while retaining archaisms, cultisms, vulgarisms, and certain outdated spellings. These have been retained when it was deemed that their removal would have resulted in misunderstandings or in diminished richness in textual interpretation. The book was processed extracting 22,632 unique words out of the 382,477 words in the text.

### Evaluation

Testing has been done with ChatGPT3.5turbo with no context and default settings for all the parameters except the temperature, which was set to zero for deterministic answers. The following prompts have been used:

1. Prompt #1: "Do you know the meaning of the word "X" in Spanish? Please answer yes or no."
2. Prompt #2: "Is "X" a correct word in Spanish? Please answer, yes or no."
3. Prompt #3: "Is "X" a valid word in Spanish? Please answer, yes or no."
4.
Prompt #4: "Is the word "X" in the Dictionary of the Real Academia Española (RAE)? Please answer yes or no."

The first prompt is the most general one, while the fourth prompt is the most specific, asking ChatGPT about the RAE's dictionary. Note also that the second and third prompts are very similar and should get the same response; they are used to check the consistency of the ChatGPT answers.

The ChatWords user interfaces used to generate the experiment are shown in Figure 5. First, the experiment is generated by selecting the AI model (ChatGPT), its parameters (version 3.5 and temperature = 0) and the low-level program (meaning); then the words are loaded, in this case from a .txt file containing all the words in the RAE's dictionary.

The results for the words in the dictionary are summarized in Figure 6. The first plot shows the percentage of positive responses to each of the prompts (P1 to P4). It can be observed that the first prompt has the largest fraction of positive responses while the last prompt has the lowest, as expected; the second and third prompts are in the middle. Interestingly, the second and third prompts have slightly different percentages, which shows the variability of ChatGPT responses for similar questions. Even for the most generic prompt, there is a non-negligible fraction of negative responses, approximately 20%, which increases for the other prompts to 50-60%. This shows that ChatGPT does not have a complete knowledge of the Spanish lexicon.

The second plot shows the percentages for the sixteen possible combinations of the answers to the four prompts coded in binary as P4,P3,P2,P1. So, for example, '0001' corresponds to a positive answer to the first prompt and a negative for the other three. The fraction of words for which ChatGPT gives a positive response ('1111') for all four prompts drops to 35%, mostly due to a large percentage of words not being identified as part of RAE's dictionary ('0xx1'). Combinations '0001' and '0111' have the largest values apart from '0000', which accounts for approximately 17.5% of the words. This last result confirms that ChatGPT does not seem to have any knowledge of a significant fraction of the words. It is also interesting to note that there is a small percentage of contradictory answers, for example '0101' and '1101', in which ChatGPT says that a word is valid but not correct. Finally, it is important to note that, as discussed in section 3, automated testing is not fully accurate and for a small fraction of the words that ChatGPT recognizes it will have an incorrect meaning.

For _The Quixote_, the same testing was conducted but this time for both ChatGPT3.5turbo and ChatGPT4 to study whether the lexical knowledge has improved in the latest version of the tool. The results are shown in Figure 7. For ChatGPT3.5turbo they follow similar trends as for the RAE dictionary: there is a significant number of negative answers, which increases from the first to the last prompt; the most frequent combinations of the answers are also the same. The main difference is that the percentages of negative answers are lower than for the entire dictionary. This is reasonable because _The Quixote_ uses a fraction of the lexicon that includes most common words but only a small part of the less frequently used words, which are more prone to be unknown to ChatGPT.
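As an illustration of this coding, the short sketch below (with made-up answers) shows how the four yes/no responses per word can be aggregated into the sixteen P4,P3,P2,P1 combinations and reported as percentages; the word list and variable names are illustrative and not part of ChatWords.

```python
from collections import Counter

# Made-up yes/no answers to prompts P1..P4 for a handful of words.
answers = {
    "abogado":   {"P1": True,  "P2": True,  "P3": True,  "P4": True},
    "lavacaras": {"P1": True,  "P2": False, "P3": True,  "P4": False},
    "xyzzy":     {"P1": False, "P2": False, "P3": False, "P4": False},
}

def code(word_answers):
    # Binary string ordered as P4, P3, P2, P1 (e.g. '0001' = only P1 positive).
    return "".join("1" if word_answers[p] else "0" for p in ("P4", "P3", "P2", "P1"))

counts = Counter(code(a) for a in answers.values())
total = sum(counts.values())
for combination, n in sorted(counts.items()):
    print(f"{combination}: {100 * n / total:.1f}%")
```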
Figure 5: **Running an experiment in ChatWords: setting up the experiment (top) and loading the words and running the experiment (bottom)**

Focusing on the comparison between the two versions of ChatGPT, it can be observed that ChatGPT4 provides more consistent responses, with fewer differences between the four prompts. The percentage of positives is lower for the first prompt, while it increases for the other three. These trends are more obvious on the right plot, in which the percentages of answers for which all prompts are all zeros or all ones increase from 11.3% to 12.7% and from 56.9% to 71.4%, respectively. The most notable reduction is for '0001', indicating that ChatGPT4 is more cautious about stating that it knows the meaning of a word (P1). A random sample of 100 words was checked manually by asking ChatGPT4 for the meaning of the words and comparing it with that in the dictionary of the RAE. The number of false positives, that is, words for which ChatGPT4 states that it knows the meaning but then answers with an incorrect meaning, was reduced compared to ChatGPT3.5turbo. This is in line with the other results and confirms that ChatGPT4 provides more reliable answers. Overall, it seems that lexical knowledge has not improved significantly from ChatGPT3.5turbo to ChatGPT4.

Figure 6: **Results for the words in the dictionary of the RAE: percentage of positive answers to each prompt (left), percentage of responses for each of the sixteen possible combinations of answers for the four prompts (right)**

Figure 7: **Results for the words in _The Quixote_: percentage of positive answers to each prompt (left), percentage of responses for each of the sixteen possible combinations of answers for the four prompts (right)**

### Discussion and Implications

The lexical knowledge of ChatGPT has potential implications for the Spanish language. It seems reasonable to assume that ChatGPT will not use words it does not know. To confirm this hypothesis, we ask ChatGPT to write a sentence using a word that it does not know and it refuses to do so; an example is shown in Figure 8. Therefore, as ChatGPT and similar tools are increasingly used to write content that ends up on the Internet, future training datasets may have less lexical diversity. This may cause newer versions of ChatGPT to further reduce their lexicon, leading to less rich content.

There is already a strong debate on whether technology is reducing the lexicon used by newer generations. Although there are not sufficient studies to prove this hypothesis, the increasing speed of communication imposed by newer digital tools forces speakers to choose among their available lexicon [34] very quickly. This may introduce a bias towards using common words, further increasing their frequencies, while other less common words, which need more time and analysis, are used less, increasing the gap between the available lexicon and the lexicon used. AI tools like ChatGPT may contribute to this trend by exposing users to a subset of words only, making the rest invisible and thus reinforcing the trend toward a reduction of the lexicon used, not only by newer AI tools but also by humans.

The mechanisms of linguistic evolution have produced modifications in phonetics and in its orthographic representation, as well as important semantic alterations of many words and expressions throughout the centuries.
Semantic misunderstandings caused by the formal similarity of words, or confusions caused by phonetic analogy or by the inappropriate use of certain phrases that become obscure over time, are some of the phenomena that introduce new meanings, but which sometimes lead to a loss of precision in oral and written communication. Thus, the Latin expression "in flagranti [crimine]" is confused with the present participle, more common in Spanish, "fragante" [from lat. fragrans, fragrantis: that exhales an odor, a perfume]. The confusion results in the now accepted "in fraganti", an adverbial phrase documented in Spanish at least since the 19th century15. The emergence of the Internet implies important changes in the speed of linguistic evolution, particularly in English, in its role as the lingua franca of the network, to the point of coining a new branch of linguistics: Internet Linguistics [35]. Internet English, in some ways, could be compared to a typical koine since it is becoming the common language among users who are native speakers of different languages, and, like all koines, tends to a general simplification, for the sake of a greater ability to spread and understand the message.

Footnote 15: [https://www.rae.es/banco-de-datos/corde](https://www.rae.es/banco-de-datos/corde)

The consequences of the trend toward simplification are also evident in the AI preferences. In this regard, it is common for interactions with AI carried out with the aim of improving the writing of a text to suggest the use of more common or simpler words, sometimes resulting in changes or losses of nuances and meanings. Some examples of authentic Bing AI suggestions are:

* Use "are made collectively" instead of "are made collegially" to use a more common word.
* Use "or modifies" instead of "or amends" to use a simpler word.
* Use "among other things" instead of "among other aspects" to use a more common expression.

Another interesting consideration is the relative knowledge of the lexicon across languages. Are the differences similar to those observed in the performance for different tasks? For example, the performance of ChatGPT-4 for Spanish and English is very similar on some tasks16; does the same apply to the lexicon? Or is there a stronger bias in the lexicon knowledge across languages? This is an interesting topic for future work.

Figure 8: **Asking ChatGPT to write a sentence with a word it does not know, "Destripaterrones", that is used in _The Quixote_ and included in the RAE's dictionary. ChatGPT refuses to write the sentence.**

## 6 Lexical Analysis Applications

To illustrate the potential benefits of ChatWords, this section discusses some applications in which lexical analysis can be useful.

### Language analysis

The most direct application of ChatWords is to evaluate the lexicon knowledge of ChatGPT across languages and how it depends on the version and configuration of ChatGPT. This would provide an understanding of how the tool captures the lexical richness of different languages. The results could also be compared with those of native speakers to put them in context. More generally, instead of a simple yes/no answer, we could ask ChatGPT for the meaning of each word and then compare it against the meaning in a dictionary. This would provide a deeper understanding of the lexical knowledge of ChatGPT, covering not only whether it knows the words but also their meanings. The automation of this application requires a more sophisticated processing of ChatGPT outputs that needs further study.
### Domain specific analysis

Another potential use of ChatWords is to evaluate the knowledge of ChatGPT of the terminology and words used in specific domains such as Law, Medicine, Science, or Engineering in different languages. This can be of interest to users that rely on ChatGPT for specific areas of application.

### Names and places analysis

The tool can also be used to evaluate if ChatGPT knows the names of persons, places, or products. This can be useful for companies and for applications that use ChatGPT, as these are words that are not included in dictionaries. An example of the results obtained is shown in Figure 9: ChatGPT does not know a small village in the northwest of Spain.

Figure 9: **Example of the evaluation of ChatGPT knowledge of places. Rihonor de Castilla is the Spanish part of a village that is half Spanish and half Portuguese ([https://en.wikipedia.org/wiki/Rihonor_de_Castilla](https://en.wikipedia.org/wiki/Rihonor_de_Castilla)).**

### Slang and emerging lexicon analysis

Another interesting aspect is to evaluate the knowledge that ChatGPT has of slang and new words being used by younger generations. For example, the lyrics of trendy trap or reggaeton songs can be analyzed with ChatWords.

### Incorrect word analysis

ChatWords is not only useful to assess the knowledge of correct words; it can also be used to check if ChatGPT has learned incorrect words that may be present in its training dataset. This can be checked by testing a set of false words [36, 37] on ChatWords. It is of interest to understand the impact of using text that may not be correct during training. Although a detailed evaluation of the responses of ChatGPT to pseudowords that do not exist is outside the scope of this work, we have used the Wuggy tool [37] to generate a few pseudowords in Spanish and asked ChatGPT for their meanings. The answers seem to depend on the version of ChatGPT used and, for some of the words, the tool misinterpreted them. For example, when asked about the word "esglo" the answer was "esglo means century in Spanish". "Esglo" does not exist in Spanish but it is quite similar to "siglo", which means century. Therefore, depending on the version and setting, it seems that ChatGPT can mistake invalid words for words that are similar and valid. The importance of accepting incorrect words as valid deserves further study and a detailed analysis of its implications.

### Impact of lexical knowledge on task performance

A more subtle use of ChatWords would be to correlate ChatGPT performance in different tasks with its lexical knowledge. For example, for question and answer, summarization, or translation tasks, we could run ChatWords on the inputs to quantify the knowledge of the words used in the prompt and see if there is a correlation between lexical knowledge and task performance. A simple example of translation shows that the lack of lexical knowledge can lead to incorrect results, and intuitively the same applies to other language processing tasks.
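As a sketch of how such a correlation study could start, the snippet below computes a simple lexical-coverage score for a prompt, i.e. the fraction of its words that a previous ChatWords experiment has marked as known; the scoring function and word sets are illustrative and not part of ChatWords.

```python
import re

def lexical_coverage(prompt: str, known_words: set[str]) -> float:
    """Fraction of the words in a prompt that appear in the set of known words."""
    tokens = re.findall(r"[a-záéíóúüñ]+", prompt.lower())
    if not tokens:
        return 0.0
    return sum(token in known_words for token in tokens) / len(tokens)

# Toy set of words previously marked as known by a ChatWords experiment.
known = {"el", "siglo", "de", "oro", "es", "una", "época"}
print(lexical_coverage("El Siglo de Oro es una época dorada", known))  # 0.875
```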
The potential benefits of ChatWords have been demonstrated with two case studies: the evaluation of the Spanish lexicon and of the words used in _The Quixote_. The applications of lexical knowledge evaluation of AI tools, as well as its implications, have also been discussed, providing multiple directions for future work. ChatWords is publicly available and we expect that its use will facilitate the evaluation of the lexical knowledge of AI tools in the applications discussed in the paper and beyond, as new horizons, investigations and scenarios will be proposed by other researchers.

## 8 Acknowledgements

The authors would like to acknowledge the support of the FUN4DATE (PID2022-136684OB-C21/22) project funded by the Spanish Agencia Estatal de Investigación (AEI) 10.13039/501100011033. The authors would like to thank Blanca Querol for the design of the ChatWords logo and graphical user interface.

Figure 10: **Example of the impact of lack of lexical knowledge on other tasks. ChatGPT does not know the word "majorito", which in Spanish is a type of bobbin lace, and thus it uses the translation of "majadero", which is completely unrelated.**
2309.15295
Scaling solutions as Early Dark Energy resolutions to the Hubble tension
A wide class of scalar field models including Quintessence and K-essence have the attractive property of tracker regimes, where the energy density stored in the field evolves so as to mimic that of the dominant background component for a period of time. During this evolution, for a brief period of time there is an increase in the energy density of the field as it spirals in towards its attractor solution. We show that when the peak of this energy density occurs around the epoch of equality, we can address a key requirement of early dark energy (EDE), postulated as a solution to the Hubble tension. In particular we demonstrate how this can occur in a wide class of Quintessence, axion and K-essence models, before showing that the Quintessence models suffer in that they generally lead to sound speeds incompatible with the requirements of EDE, whereas the K-essence and axion models can do a better job of fitting the data.
Edmund J. Copeland, Adam Moss, Sergio Sevillano Muñoz, Jade M. M. White
2023-09-26T22:20:57Z
http://arxiv.org/abs/2309.15295v2
# Scaling solutions as Early Dark Energy resolutions to the Hubble tension

###### Abstract

A wide class of scalar field models including Quintessence and K-essence have the attractive property of tracker regimes, where the energy density stored in the field evolves so as to mimic that of the dominant background component for a period of time. During this evolution, for a brief period of time there is an increase in the energy density of the field as it spirals in towards its attractor solution. We show that when the peak of this energy density occurs around the epoch of equality, we can address a key requirement of early dark energy (EDE), postulated as a solution to the Hubble tension. In particular we demonstrate how this can occur in a wide class of Quintessence, axion and K-essence models, before showing that the Quintessence models suffer in that they generally lead to sound speeds incompatible with the requirements of EDE, whereas the K-essence and axion models can do a better job of fitting the data.

###### Contents

* 1 Introduction
* 2 Attractor solutions in Quintessence
* 2.1 Exponential potential with a constant slope parameter \(\lambda\)
* 2.2 Exponential potential with a time dependent slope parameter \(\tilde{\lambda}(\phi)\)
* 2.3 The axion potential
* 3 K-essence case: \(\mathcal{L}(X,\phi)=X^{n}-V(\phi)\)
* 4 Comparison with observations
* 5 Conclusion and discussion

## 1 Introduction

Measuring the expansion rate of our Universe (\(H_{0}\)) is without doubt one of the most challenging tasks facing cosmologists [1]. An accurate determination of \(H_{0}\) is essential in obtaining the values of most of our cosmological parameters. It is therefore of little surprise that when two complementary approaches to determine its value fail to agree at the level of order five sigma, there is a concerted effort to understand the differences, and possibly reconcile them [2]. This disagreement between the Hubble parameter determined from direct observations of Cepheids and Type Ia Supernovae (\(H_{0}=73.04\pm 1.04\) km s\({}^{-1}\) Mpc\({}^{-1}\)) [3; 4] and that inferred from measurements of anisotropies in the cosmic microwave background radiation (CMB), assuming a standard six-parameter \(\Lambda\)CDM model (\(H_{0}=67.44\pm 0.58\) km s\({}^{-1}\) Mpc\({}^{-1}\)) [5; 6; 7], has led to what is known as the Hubble tension. Attempts to explain the differences in terms of possible systematics between the two experiments have to date not resolved the discrepancy (although see [8] for a recent discussion of the systematics associated with the direct determination of the Hubble parameter); this has prompted a renewed interest in the possibility that the tension is present because of new physics that has not been properly accounted for in the literature to date.
In particular, it appears that a successful scenario requires that this increase occurring around matter-radiation equality amounts to a brief insertion of around 8% of the energy density, which then dissipates faster than matter in order that it does not leave any lasting imprints in the CMB. Two natural candidates for this behaviour that have to date received relatively little attention come from what are known as Quintessence and K-essence models, scalar field models with dynamics driven by either their potential energy (Quintessence) [11] or non-canonical kinetic energy (K-essence) [12; 13; 14]. For a review of such dynamical fields see [15; 16]. In this paper, we argue that such models can in fact in principle provide a solution to the Hubble tension. They possess a key property that is known as an attractor solution, where the energy density in the scalar field can mimic that of the background dominating energy density (whether it be radiation or matter) [17]. In particular, due to the spiral stability of this solution, the field experiences a relative increase in its energy density for a short period while approaching this attractor, sufficient to account for the required EDE impulse. Moreover, as we will demonstrate, a wide variety of models will also feel attracted to this solution at early times, producing the desired EDE peak at equality while decaying at late times. The layout of the paper is as follows. In section 2, as a toy model, we introduce a scalar field \(\phi\) with an exponential potential of constant slope \(\lambda\), evolving in a \(\Lambda\)CDM cosmology and show analytically and numerically how it can accommodate a period of EDE. In section 2.2 we extend this to a more general potential, with a time-dependent slope, allowing us to derive the conditions required for EDE to emerge in general Quintessence models. In section 2.3 we show how this can work as well for the case of an axion. In section 3 we turn our attention to the case of K-essence. Considering a particular toy model, where \(K(X)=\frac{X^{n}}{M^{dn-4}}\), where \(X\equiv\frac{1}{2}\dot{\phi}^{2}\) (where \(\dot{\phi}\equiv\frac{d\phi}{dt}\)), and \(M\) is a mass scale, (motivated partly by the fact the case \(n=2\) leads to a sound speed \(c_{s}^{2}=\frac{1}{3}\) in the fluctuations of the K-essence field, a result consistent with the findings in [18]), we show analytically in a manner similar to the Quintessence case how these too lead to EDE. In section 4 we describe the numerical solutions obtained and provide an MCMC likelihood analysis showing the significance of the results. In particular, we show the problems faced by Quintessence models, and the fact that the K-essence case with \(n=3/2\) provides a good fit to the data. Finally, we conclude in section 5. ## 2 Attractor solutions in Quintessence In this section, we develop the argument that scalar field evolution in the presence of a background fluid can experience scaling solutions where the energy density of the scalar field aims to become a fixed fraction of that of the dominating background fluid. In following that trajectory, there is a short period of time when the energy density stored in the scalar field itself, increases briefly as it readjusts to its new scaling regime. It is this increase that can provide the input required to address the Hubble tension, and in what follows we first of all show the principle of it. 
We begin by introducing the equations of motion, for a system containing a canonical scalar field \(\phi\) with potential \(V(\phi)\) and two barotropic fluids with energy density \(\rho_{\gamma}\) (radiation) and \(\rho_{m}\), (matter, both baryonic and non-baryonic) with equations of state \(\gamma_{r}=4/3\) and \(\gamma_{m}=1\) respectively, defined in terms of their pressure (\(p\)) and energy density (\(\rho\)) by \(p=(\gamma-1)\rho\). For completeness we also include a cosmological constant \(\rho_{cc}=\frac{\Lambda}{\kappa^{2}}\) (with an associated equation of state \(\gamma_{cc}=0\)) to provide the late time dark energy of the universe, although in the analytic analysis below we will drop this term as it is completely sub-dominant around matter-radiation equality, when the effect we are seeking to explain occurs. However, we keep the full equations in the numerical solutions we compare to in section 4. The Friedmann equation is given by: \[H^{2}=\frac{\kappa^{2}}{3}\left(\rho_{r}+\rho_{m}+\rho_{cc}+\frac{\dot{\phi}^{ 2}}{2}+V(\phi)\right), \tag{1}\] where \(\kappa=\sqrt{8\pi G}\), \(H\equiv\dot{a}/a\) is the Hubble parameter with \(a(t)\) the scale factor and \(\dot{a}\equiv\frac{da}{dt}\). The dynamics and stability of the system will depend on the specific choice for the potential \(V(\phi)\). A natural choice is an exponential potential1, \(V(\phi)=V_{0}\exp\left(-\lambda\kappa\phi\right)\), with slope parameter \(\lambda=\text{const}\), since it presents scaling behavior at late times, as well as the intermediate regime of increased energy density we are searching for. Footnote 1: Notice that the cosmological constant can be either treated as an extra fluid or absorbed into the potential as a constant term. In the former case, the system is described by the formalism shown in section 2.1; otherwise, we would have to consider a time-dependent \(\tilde{\Lambda}(N)\), using the method introduced in section 2.2. ### Exponential potential with a constant slope parameter \(\lambda\) The fluid and scalar field equations of motion are \[\dot{\rho}_{r}= -3H\gamma_{r}\rho_{r}, \tag{2}\] \[\dot{\rho}_{m}= -3H\gamma_{m}\rho_{m},\] \[\dot{\rho}_{cc}= -3H\gamma_{cc}\rho_{cc},\] \[\ddot{\phi}+3H\dot{\phi}+V_{,\phi}\left(\phi\right)=0.\] where \(V_{;\phi}\left(\phi\right)\equiv\frac{dV}{d\phi}\). Following the prescription introduced in [17] we convert these equations to first-order ones by introducing, the dimensionless density parameters \[x= \frac{\kappa\dot{\phi}}{\sqrt{6}H}, y= \frac{\kappa\sqrt{V}}{\sqrt{3}H}, z= \frac{\kappa\sqrt{\rho_{r}}}{\sqrt{3}H}, l= \frac{\kappa\sqrt{\rho_{cc}}}{\sqrt{3}H}, \tag{3}\] which from the Friedmann constraint Eq. (1) gives the dimensionless energy density in matter via \[\Omega_{m}\equiv\frac{\kappa^{2}\rho_{m}}{3H^{2}}=1-(x^{2}+y^{2}+z^{2}+l^{2}), \tag{4}\] whilst for completion, we have the important quantity, the dimensionless energy density in \(\phi\), \[\Omega_{\phi}=\frac{\kappa^{2}\rho_{\phi}}{3H^{2}}=x^{2}+y^{2}. 
\tag{5}\] Differentiating the parameters \((x,y,z,l)\) with respect to the number of e-folds (\(N=\log a\)), leads to the following closed system (using \(\gamma_{r}=4/3,\gamma_{m}=1,\gamma_{cc}=0\)): \[x^{\prime} = \sqrt{\frac{3}{2}}\lambda y^{2}-\frac{x}{2}(3-3x^{2}+3y^{2}-z^{2 }-3l^{2}), \tag{6}\] \[y^{\prime} = -\sqrt{\frac{3}{2}}\lambda xy+\frac{y}{2}(3+3x^{2}-3y^{2}+z^{2}+3 l^{2}),\] (7) \[z^{\prime} = -\frac{z}{2}(1-3x^{2}+3y^{2}-z^{2}-3l^{2}),\] (8) \[l^{\prime} = \frac{l}{2}(3+3x^{2}-3y^{2}+z^{2}+3l^{2}), \tag{9}\] where \(x^{\prime}\equiv\frac{dx}{dN}\) and we have already substituted the exponential potential with a constant slope parameter \(\lambda\), \[V(\phi)=V_{0}\exp{(-\kappa\lambda\phi)}. \tag{10}\] To reiterate, here and in section 3 we drop \(\rho_{cc}\) from Eqs. (6-9) since we are focusing on the effects of the scalar field around matter-radiation equality, where \(l^{2}\ll 1\). Although this set of equations allows us to see the evolution of each energy parameter, it proves convenient to introduce the effective equation of state of the background radiation and matter fields, defined via \[\gamma_{\rm eff}=1+\frac{p_{\gamma}+p_{m}}{\rho_{\gamma}+\rho_{m}}=1+\frac{1} {3}\left(\frac{z^{2}}{1-x^{2}-y^{2}}\right). \tag{11}\] We see that \(\gamma_{\rm eff}\) is a particularly useful parameter to use because it only varies between \(1\leq\gamma_{\rm eff}\leq 4/3\), compared to \(z\) which varies between \(0\) and \(1\). With this in mind, we can replace \(z\) in terms of \(\gamma_{\rm eff}\) and the system of equations (6) - (8) become \[x^{\prime} = \sqrt{\frac{3}{2}}\lambda y^{2}+\frac{3x}{2}(-2+2x^{2}+\gamma_{ \rm eff}(1-x^{2}-y^{2})), \tag{12}\] \[y^{\prime} = -\sqrt{\frac{3}{2}}\lambda xy+\frac{3y}{2}(2x^{2}+\gamma_{\rm eff }(1-x^{2}-y^{2})),\] (13) \[\gamma^{\prime}_{\rm eff} = (\gamma_{\rm eff}-1)(3\gamma_{\rm eff}-4). \tag{14}\] We begin the analysis by noting that throughout both its early and late evolution the scalar field needs to be subdominant, having to satisfy an upper bound at matter domination of \(\Omega_{\phi}<0.02\)[5] (for an example see Figure 1). From Eq. (5), we can therefore neglect terms cubic in \(x\) and \(y\), implying that Eqs. (12)-(14) become \[x^{\prime} \approx \left(\frac{3}{2}\gamma_{\rm eff}-3\right)x+\sqrt{\frac{3}{2}} \lambda y^{2}, \tag{15}\] \[y^{\prime} \approx \frac{3}{2}\gamma_{\rm eff}y-\sqrt{\frac{3}{2}}\lambda xy,\] (16) \[\gamma^{\prime}_{\rm eff} \approx \left(\gamma_{\rm eff}-1\right)\left(3\gamma_{\rm eff}-4\right). \tag{17}\] Note, the nice feature that \(\gamma_{\rm eff}\) has fixed points for both matter and radiation domination (\(\gamma_{\rm eff}=1\) and \(\gamma_{\rm eff}=4/3\), respectively). It is not difficult to show that the fixed point (assuming \(\gamma_{\rm eff}\) constant) is given by \[x_{\rm sc}= \sqrt{\frac{3}{2}}\frac{\gamma_{\rm eff}}{\lambda},\qquad y_{\rm sc }= \left(\frac{3}{2}\frac{\gamma_{\rm eff}(2-\gamma_{\rm eff})}{\lambda^{2}} \right)^{1/2},\qquad\Omega_{\phi}^{\rm sc}= \frac{3\gamma_{\rm eff}}{\lambda^{2}},\qquad\gamma_{\phi}= \gamma_{\rm eff}, \tag{18}\] corresponding to the scaling solutions found in [17] (for \(\gamma_{\rm eff}=1\) and \(\gamma_{\rm eff}=4/3\)). 
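These scaling solutions, and the transient rise in \(\Omega_{\phi}\) as the field approaches them, can be illustrated with a minimal numerical sketch (not the analysis pipeline of section 4) that integrates Eqs. (6)-(8) with the cosmological constant dropped; the slope \(\lambda\) and the initial conditions below are purely illustrative, chosen so that the field is initially subdominant and potential dominated.

```python
import numpy as np
from scipy.integrate import solve_ivp

lam = 8.0  # slope of the exponential potential (illustrative value)

def rhs(N, u):
    # u = (x, y, z); Eqs. (6)-(8) with the cosmological constant dropped (l = 0)
    x, y, z = u
    dx = np.sqrt(1.5) * lam * y**2 - 0.5 * x * (3 - 3 * x**2 + 3 * y**2 - z**2)
    dy = -np.sqrt(1.5) * lam * x * y + 0.5 * y * (3 + 3 * x**2 - 3 * y**2 + z**2)
    dz = -0.5 * z * (1 - 3 * x**2 + 3 * y**2 - z**2)
    return [dx, dy, dz]

# Radiation-dominated start with a small, potential-dominated scalar field
u0 = [1e-8, 1e-4, np.sqrt(0.999)]
sol = solve_ivp(rhs, (0.0, 25.0), u0, dense_output=True, rtol=1e-8, atol=1e-12)

N = np.linspace(0.0, 25.0, 2000)
x, y, z = sol.sol(N)
Omega_phi = x**2 + y**2            # Eq. (5)
N_peak = N[np.argmax(Omega_phi)]   # epoch of the transient rise in the field's energy density
print(f"peak Omega_phi = {Omega_phi.max():.3f} at N = {N_peak:.2f}")
```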
Therefore, depending on the background fluid that is dominating, as long as \(\lambda^{2}>3\gamma_{\rm eff}\), there is a spiral stable attractor solution where \(\phi\) evolves so that its energy density tracks that of the dominating background fluid, ruled by \(\gamma_{\rm eff}\), behaving as radiation in the early universe, and evolving into matter like behaviour in the matter dominated regime. This is well known [17; 19], but there is an interesting element that appears to have been overlooked and could be relevant in addressing the Hubble tension. As shown in Figure 2, due to the spiraling nature of the fixed point, the scalar field will experience oscillations around the attractor in its trajectory. Thus, as these oscillations are damped, the first will lead to a peak in the energy density, which if placed just before matter-radiation equality could alleviate the observed tension. We turn our attention now to determine analytically the properties of the peak, its location in time, and its magnitude in height. Figure 1: Evolution of a scalar field with an exponential potential Eq. (10) in a background containing matter and radiation barotropic fluids. We can see that during its evolution to the scaling solution fixed point, the field has a local peak in its energy density. The solid yellow line corresponds to \(\Omega_{r}\), the dotted purple line to \(\Omega_{m}\), the solid blue line to \(\Omega_{\phi}\), where as the red dashed and green dashed-dotted lines correspond to the kinetic and potential energy contributions to \(\Omega_{\phi}\) respectively. As we can see in Figure 1, the early-time scalar field energy density is dominated by the potential term (\(y\)) up to the peak. Moreover, given that to address the Hubble tension the peak must take place before or at matter-radiation equality, the scalar field will be evolving then in a radiation-dominated universe (\(\gamma_{\rm eff}\sim 4/3\)). Since \(x\ll 1\) and \(x\ll\lambda\), it follows that before the peak has been reached, Eqs. (15)-(16) become \[x^{\prime} \approx -x+\sqrt{\frac{3}{2}}\lambda y^{2}, \tag{19}\] \[y^{\prime} \approx 2y, \tag{20}\] yielding early-time solutions \[x_{early}(N) \approx (x_{i}-a_{i})e^{-\Delta N_{i}}+a_{i}e^{4\Delta N_{i}}, \tag{21}\] \[y_{early}(N) \approx y_{i}e^{2\Delta N_{i}}, \tag{22}\] where \(\Delta N_{i}=N-N_{i}\), \(N_{i}\) is the initial time, \(x_{i}\) and \(y_{i}\) are the respective initial values and \[a_{i}=\sqrt{\frac{3}{2}}\frac{\lambda y_{i}^{2}e^{4N_{i}}}{5}. \tag{23}\] These solutions are valid as long as we can drop the higher order terms in Eqs. (15)-(16), which takes us close to when the peak in \(\Omega_{\phi}\) takes place, a time we call \(N_{1}\). To be more specific we can estimate this time as being the moment \(y_{early}(N)\) first passes its final fixed point value Eq. (18), which implies (assuming the energy density is equally split between \(x\) and \(y\)), \[y_{early}(N_{1})\approx\sqrt{\Omega_{\phi}^{\rm(sc)}/2}. \tag{24}\] Using Eqs. (18) and (22), we obtain the following estimate of the time of the peak \[N_{1}\approx N_{i}+\frac{1}{2}\log\bigg{(}\frac{\sqrt{3\gamma_{\rm eff}}}{y_{ i}\lambda\sqrt{2}}\bigg{)}. \tag{25}\] Figure 2: Evolution of a scalar field with a matter-like fluid (\(\gamma_{\rm eff}=1\)). The evolution of each of the different densities versus \(N\) is represented in the left panel, using the same colour code as in Fig. 1 while in the right panel we show the corresponding \((x,y)\) phase space. 
Note how the peak (black dot) in \(\Omega_{\phi}\) corresponds to the orbits of the scalar field around the fixed point. Notice that we have reintroduced a generic \(\gamma_{\rm eff}\) for \(\Omega_{\phi}^{(\rm sc)}\). This is because the effective equation of state of the universe at the time of the peak (\(\gamma_{\rm eff}\)) will have an impact on the peak height. The change of \(\gamma_{\rm eff}\) would also introduce corrections in the early evolution from Eqs. (21) and (22), but these are negligible when the time of the peak is around matter-radiation equality. The evolution of the scalar field from this point to the peak is non-trivial, but it can be approached in two related ways. The first is to perturb the evolution equations about the solutions Eqs. (21) and (22), evaluated at time \(N_{1}\) to linear order, and solve for the linear fluctuations. The second, related approach is to recognise that we are effectively perturbing around the fixed point solutions Eq. (18) at \(N_{1}\) and so we are interested in the stability of these solutions about the fixed point. This stability analysis (assuming \(\gamma_{\rm eff}\) is constant) introduces eigenvalues \(E_{\pm}\) with \(x(N)\) and \(y(N)\) given by (\(N\geq N_{1}\)) \[x(N) = x_{early}(N_{1})+A_{x}\exp(E_{+}\Delta N_{1})+B_{x}\exp(E_{-} \Delta N_{1}), \tag{26}\] \[y(N) = y_{early}(N_{1})+A_{y}\exp(E_{+}\Delta N_{1})+B_{y}\exp(E_{-} \Delta N_{1}), \tag{27}\] where \(\Delta N_{1}=N-N_{1}\), \(A_{x}\), \(B_{x}\), \(A_{y}\) and \(B_{y}\) are constants with the eigenvalues given by \[E_{\pm}=-\frac{3(2-\gamma_{\rm eff})}{4}\left(1\pm\sqrt{1-\frac{8\gamma_{\rm eff }}{2-\gamma_{\rm eff}}}\right), \tag{28}\] showing that it is a stable spiral. Thus, as the field evolves from its initial position (\(\Omega_{\phi}^{0}\sim 0\)) (when \(x\ll 1\) and \(y\ll 1\)) and starts to grow, it feels the fixed point attractor and begins to evolve towards it. In doing so it experiences a damped oscillatory behaviour which is exactly the one that produces the desired peak, as seen in Figure 2. A comparison between the analytical approximations derived in this section and the full numerical solution can be seen in the plot highlighting the peak in \(\Omega_{\phi}\) in Figure 3. They reproduce the time for the peak, \(N_{1}\), and from that time, perturbing around the fixed point evolves through the peak. Notice that the approximation does not match the late-time evolution of the numerical solution. This is because we are perturbing around the fixed point for the value of \(\gamma_{\rm eff}\) at the time of the peak, \(N_{1}\). After that, the system will evolve as dictated by the effective attractors at the time. In this way, the field will scale with the dominating background fluid, as we can see in Figure 4. So far, in discussing the nature of the peak in \(\Omega_{\phi}\) we have concentrated on the case of an exponential potential with constant slope parameter \(\lambda\), Eq. (10). We have seen how it is possible to understand both the value of the peak height and when it occurs through the fact that starting with \(\Omega_{\phi}\ll 1\), the system wants to head towards its natural scaling solution. However, there is a potential problem, and that is we know the scalar field needs to rapidly become subdominant soon after the peak has been reached, and this is not a natural feature of these models where it is clear from Eq. (18) that the system wants to reach a non-negligble and constant \(\Omega_{\phi}=3/\lambda^{2}\). 
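These estimates are straightforward to evaluate numerically; the short sketch below computes the peak-time estimate of Eq. (25) and the eigenvalues of Eq. (28) for illustrative values of \(\lambda\), \(y_{i}\) and \(\gamma_{\rm eff}\), confirming that the eigenvalues form a complex conjugate pair with negative real part, i.e. a stable spiral.

```python
import numpy as np

lam, y_i, N_i = 8.0, 1e-4, 0.0   # illustrative slope and initial conditions
g_eff = 7.0 / 6.0                # effective equation of state near equality

# Peak-time estimate, Eq. (25)
N_1 = N_i + 0.5 * np.log(np.sqrt(3 * g_eff) / (y_i * lam * np.sqrt(2)))

# Eigenvalues about the scaling fixed point, Eq. (28)
root = np.sqrt(complex(1 - 8 * g_eff / (2 - g_eff)))
E_plus = -0.75 * (2 - g_eff) * (1 + root)
E_minus = -0.75 * (2 - g_eff) * (1 - root)

print("N_1 estimate:", round(N_1, 2))
print("E_+ =", E_plus, " E_- =", E_minus)  # complex conjugate pair => stable spiral
```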
In particular, satisfying the constraint that in the matter-dominated era \(\Omega_{\phi}\lesssim 0.02\)[5] implies that \(\lambda\gtrsim 12\), which is very constraining. Ideally, we would like to have a peak in the energy density driven by the type of mechanism we have described in this section but followed by a decaying \(\Omega_{\phi}\). In section 2.2, we show how to achieve this, by considering a model in which the slope parameter of the potential becomes field-dependent, and tends rapidly to large values as the field evolves. Figure 4: The \(x-y\) orbits of a scalar field around its corresponding fixed point for differing background cosmologies. The blue line represents a scalar field with two background fluids, in which the peak takes place at radiation-matter equality – the true evolution. The dashed red line corresponds to a scalar field evolving in a matter-dominated background, and the dot-dashed green line to a scalar field in a \(\gamma_{\rm eff}=7/6\) background (corresponding to the equation of state of the universe at matter-radiation equality). We can see that the peak in \(\Omega_{\phi}\) (as seen for example in the right pane of Fig. 2) for the full solution (blue) is well approximated by the effective equation of state of the universe, \(\gamma_{\rm eff}=7/6\), before it finally falls down to the late time attractor, given by matter domination. Figure 3: Comparison of the analytic (red and green lines) and full numerical solution (blue line) for the quintessence potential of Eq. (10) showing the first peak in \(\Omega_{\phi}\). The black dot represents the time \(N_{1}\) where the early evolution (dashed red) matches onto the perturbation around the effective fixed point assuming \(\gamma_{\rm eff}=7/6\) (dot-dashed green). The analytic solution accurately predicts the first peak of \(\Omega_{\phi}\). Beyond the peak, differences emerge as the full numerical system follows the global attractor given by matter domination with \(\gamma_{\rm eff}=1\). ### Exponential potential with a time dependent slope parameter \(\tilde{\lambda}(\phi)\) We want to turn our attention to more general Quintessence potentials \(V(\phi)\), where the slope parameter, or more accurately the quantity \(\frac{V_{\phi}(\phi)}{V(\phi)}\) can be time dependent. Our aim is to establish the conditions under which such potentials can provide the necessary input of scalar field energy density around equality, whilst satisfying the requirements of EDE. We will follow the basic approach of section 2.1, where the existence of scaling solutions for fixed \(\gamma_{\rm eff}\) will play an important role. With the definitions from Eqs.(3), we obtain \[x^{\prime} = \sqrt{\frac{3}{2}}\tilde{\lambda}y^{2}-\frac{x}{2}(3-3x^{2}+3y^{ 2}-z^{2}), \tag{29}\] \[y^{\prime} = -\sqrt{\frac{3}{2}}\tilde{\lambda}xy+\frac{y}{2}(3+3x^{2}-3y^{2} +z^{2}),\] (30) \[z^{\prime} = -\frac{z}{2}(1-3x^{2}+3y^{2}-z^{2}),\] (31) \[\tilde{\lambda}^{\prime} = -\sqrt{6}\tilde{\lambda}^{2}(\Gamma-1)x, \tag{32}\] where, additionally we now have \[\Gamma= \frac{V_{,\phi\phi}\left(\phi\right)V(\phi)}{V_{,\phi}\left(\phi \right)^{2}}, \tilde{\lambda}= -\frac{V_{,\phi}\left(\phi\right)}{\kappa V(\phi)}. \tag{33}\] Notice that this system is equivalent to the exponential potential case (Eqs. (6-7)) but with a time-varying \(\lambda\). An important point to make here is that the nature of the approach to a scaling regime does not rely on having \(\tilde{\lambda}=\) constant. 
In fact, there are many cases where the slope is not a constant, and yet we approach the type of scaling regimes described in Section 2.1 - for detailed examples see [16]. In terms of the \(\gamma_{\rm eff}\) parameter defined in Eq. (11), the system of equations can be written as \[x^{\prime} = \sqrt{\frac{3}{2}}\tilde{\lambda}y^{2}+\frac{3x}{2}(-2+2x^{2}+\gamma_{\rm eff}(1-x^{2}-y^{2})), \tag{34}\] \[y^{\prime} = -\sqrt{\frac{3}{2}}\tilde{\lambda}xy+\frac{3y}{2}(2x^{2}+\gamma_{\rm eff}(1-x^{2}-y^{2})), \tag{35}\] \[\gamma^{\prime}_{\rm eff} = (\gamma_{\rm eff}-1)(3\gamma_{\rm eff}-4), \tag{36}\] \[\tilde{\lambda}^{\prime} = -\sqrt{6}\tilde{\lambda}^{2}(\Gamma-1)x. \tag{37}\] As a particular example, we consider the model of Fang et al. [20], as it is a nice extension of the exponential potential model, allowing us to understand the evolution of the system in an analogous manner to that case, whilst seeing how it satisfies the requirements of EDE through the time-dependent slope. The model is defined through its potential \(V(\phi)\) by \[V(\phi)=V_{0}e^{\tilde{\alpha}\phi(\phi+\beta)/2}, \qquad\tilde{\lambda}=-\tilde{\alpha}(2\phi+\beta)/2, \qquad\Gamma=1+\frac{\tilde{\alpha}}{\tilde{\lambda}^{2}}, \tag{38}\] where \(\tilde{\alpha}\) and \(\beta\) are constants, and the time-dependent effective slope \(\tilde{\lambda}\) satisfies \[\tilde{\lambda}^{\prime}=-\sqrt{6}\tilde{\alpha}x. \tag{39}\] Applying the same sub-dominant scalar field energy simplifications as in the last section (i.e., considering that \(x^{2}\ll 1\) and \(y^{2}\ll 1\)) we obtain the following equations describing the system \[x^{\prime} \approx \left(\frac{3}{2}\gamma_{\rm eff}-3\right)x+\sqrt{\frac{3}{2}}\tilde{\lambda}y^{2}, \tag{40}\] \[y^{\prime} \approx \frac{3}{2}\gamma_{\rm eff}y-\sqrt{\frac{3}{2}}\tilde{\lambda}xy, \tag{41}\] \[\gamma^{\prime}_{\rm eff} = (\gamma_{\rm eff}-1)(3\gamma_{\rm eff}-4), \tag{42}\] \[\tilde{\lambda}^{\prime} = -\sqrt{6}\tilde{\alpha}x, \tag{43}\] which for a fixed \(\gamma_{\rm eff}\) and \(\tilde{\alpha}<0\) has the following stable fixed point \[x_{\rm sc} =0, \qquad y_{\rm sc} =0, \qquad\tilde{\lambda}_{\rm sc} \rightarrow\infty. \tag{44}\] Given that the initial conditions are \(x_{i}\ll 1\) and \(y_{i}\ll 1\), we might naively expect no evolution in \(x\) and \(y\), hence in \(\Omega_{\phi}\), implying no peak forming, since they are already close to their fixed points. However, that does not take into account the evolution of \(\tilde{\lambda}\), whose initial value \(\tilde{\lambda}_{i}\) is a long way from its global fixed point, which, as we see from Figure 5, can be crucial. Moreover, given that the evolution equation for \(\tilde{\lambda}\) is \(\tilde{\lambda}^{\prime}\propto x\), it implies that \(\tilde{\lambda}\approx\) const until close to the peak. The only difference between the exponential case of section 2.1 and this more general case is the fact that \(\tilde{\lambda}\) begins to vary slowly as \(x\) begins to evolve, leading eventually to a changing effective fixed point once the peak is approached. This implies that we expect the same behaviour as for the exponential case of section 2.1 up to \(N_{1}\). The early time solutions for \(x\) and \(y\) are once again given by Eqs. (15-16), and it is only when \(x\) begins to grow that we need to take into account the evolution of \(\tilde{\lambda}\). This is a really nice feature: it means that we can always find a solution for the exponential case that fits the early evolution (and the peak) of more generic potential models, as long as \(\tilde{\lambda}\) is not varying rapidly. As another example, we will see this in section 2.3 for the case of the axion.
It is worth stressing that this is more general than the specific result for the potential Eq. (38); it holds because \(\tilde{\lambda}\) remains essentially constant up to the peak, which is a common feature of Quintessence models, as they will always have \(\tilde{\lambda}^{\prime}\propto x\) (see Eq. (37)). In particular, Eq. (25) gives \(N_{1}\), where we use \(\tilde{\lambda}=\tilde{\lambda}_{i}\) as a leading approximation, to obtain \[N_{1}\approx N_{i}+\frac{1}{2}\log\bigg{(}\frac{\sqrt{3\gamma_{\rm eff}}}{y_{i}\tilde{\lambda}_{i}\sqrt{2}}\bigg{)}. \tag{45}\] We should note here that there is nothing special about matter-radiation equality leading to the peak; this means that a degree of fine-tuning is required in order to make the system reach its peak value around that epoch. However, for the perturbation around the scaling fixed point, we can obtain a better estimate by solving Eq. (43) with the early time solution for \(x\), namely Eq. (21), with initial condition \(\tilde{\lambda}(N_{i})=\tilde{\lambda}_{i}\) and with \(\lambda\rightarrow\tilde{\lambda}_{i}\) in Eq. (21). When we do this we obtain \[\tilde{\lambda}_{early}(N)=\tilde{\lambda}_{i}-\sqrt{6}\tilde{\alpha}\left[\frac{a_{i}}{4}\left(e^{4\Delta N_{i}}-1\right)-\left(1-a_{i}\right)\left(e^{-\Delta N_{i}}-1\right)\right], \tag{46}\] where now \(\Delta N_{i}\equiv N-N_{i}\). Given we have determined \(\tilde{\lambda}_{early}(N_{1})\), in Figure 6 we show how well an exponential potential solution with constant \(\lambda=\tilde{\lambda}_{early}(N_{1})\) matches the true evolution up to and beyond the first peak in \(\Omega_{\phi}\). Therefore, we can apply the same bounds on \(\tilde{\lambda}_{early}(N_{1})\) as for the exponential potential case, meaning that to address the Hubble tension we need \(\tilde{\lambda}_{early}(N_{1})\approx 8\). After the peak takes place, the exponential-case approximation quickly fails as \(\tilde{\lambda}\) keeps on increasing towards its fixed point at \(\tilde{\lambda}\to\infty\). However, as we know from the stability analysis, the scaling solution for the scalar field implies that its energy density vanishes at late times.

Figure 5: Evolution of a scalar field with \(V(\phi)=V_{0}e^{\tilde{\alpha}\phi(\phi+\beta)/2}\), \(\tilde{\alpha}<0\), in a background containing both matter and radiation barotropic fluids. Although the fixed point in this case is at \(x=y=0\), the system develops a peak during its trajectory around the period of radiation-matter equality.

Figure 6: Evolution of a scalar field with a matter-like fluid (\(\gamma_{\rm eff}=1\)). The evolution of each of the different densities versus \(N\) is represented in the left panel, while the right panel shows the \(x-y\) phase space during this evolution. We can see that for the model with evolving \(\tilde{\lambda}\) given by potential Eq. (38) (dashed red), there is always a pure exponential case with constant \(\tilde{\lambda}\) that matches the trajectory up to the peak (blue).

Based on a combination of analyzing the behaviour of the system of equations (34)-(37) and the particular case of the Fang et al. model [20], we can make some general statements about the properties of scalar field potentials that can lead in principle to a successful period of EDE. A scalar field will be a good candidate for the Hubble tension if it satisfies the following:

1. **Before the peak:** A predictable evolution of \(\tilde{\lambda}(N)\) that ends in a value of order \(\tilde{\lambda}(N_{1})\sim\mathcal{O}(10)\), which will produce the required peak due to the orbits around the effective scaling solution.
2. **At the peak:** A \(\tilde{\lambda}(N)\) that doesn't vary too rapidly around \(N_{1}\), so the field has enough time to orbit around the fixed point.
3. **After the peak:** A \(\tilde{\lambda}(N)\) that tends to infinity, so that the effective scaling solution for the system goes back to \(x=0\) and \(y=0\) at late times, corresponding to a subdominant \(\Omega_{\phi}\).

In the following subsection, we focus on an axion model in which \(\tilde{\lambda}(N)\) varies very rapidly around the peak. We will show that although this makes it difficult to accurately predict the height of the peak, we are still able to use this method to explain the system's early and late-time evolution.

### The axion potential

A major drawback with using slowly rolling Quintessence models as an explanation of EDE is that they lead to a sound speed \(c_{s}^{2}=1\) for the field \(\phi\), and as we will see in section 4, the data seems to favour \(c_{s}^{2}<1\) around the time of EDE [9; 10; 18]. Fortunately, as we will now show, the techniques we have developed can be applied to the case of a rapidly oscillating field with \(c_{s}^{2}<1\), such as the case of the axion. Consider a generic axion field evolving under the following potential \[V_{\rm Ax}^{(m)}(\phi)=\mu_{m}^{4}\left(1-\cos\left(\frac{\phi}{f_{m}}\right)\right)^{m}, \tag{47}\] where \(\mu_{m}\) and \(f_{m}\) are constants. Using the parametric redefinitions from Eq. (3), we find the following equations of motion2 Footnote 2: For the axion, we find it is numerically more stable to solve for the variable \(1/\tilde{\lambda}\) due to the large increase in \(\tilde{\lambda}\). \[x^{\prime} = \sqrt{\frac{3}{2}}\tilde{\lambda}y^{2}-\frac{x}{2}(3-3x^{2}+3y^{2}-z^{2}), \tag{48}\] \[y^{\prime} = -\sqrt{\frac{3}{2}}\tilde{\lambda}xy+\frac{y}{2}(3+3x^{2}-3y^{2}+z^{2}), \tag{49}\] \[z^{\prime} = -\frac{z}{2}(1-3x^{2}+3y^{2}-z^{2}), \tag{50}\] \[\tilde{\lambda}^{\prime} = \frac{\sqrt{6}x}{2m}\left(\tilde{\lambda}^{2}+\frac{m^{2}}{f_{m}^{2}\kappa^{2}}\right), \tag{51}\] which, with Eq. (33), leads to the following expressions for the axion potential \[\tilde{\lambda}= -\frac{m}{\kappa f_{m}}\frac{\sin\left(\frac{\phi}{f_{m}}\right)}{1-\cos\left(\frac{\phi}{f_{m}}\right)}, \qquad\Gamma= 1-\frac{1}{2m}-\frac{m}{2f_{m}^{2}\kappa^{2}\tilde{\lambda}^{2}}. \tag{52}\] As we can see in Figure 7, the axion model also produces a peak in its energy density that can alleviate the Hubble tension. Focusing on the equation of motion for \(\tilde{\lambda}(N)\), we can see that our analysis from Section 2 can be extended to study the evolution of the axion potential. This is because, since \(\tilde{\lambda}^{\prime}\) depends linearly on \(x\), the value of \(\tilde{\lambda}\) will be frozen at early times, when \(x\ll 1\). Therefore, the system can be approximated by an exponential potential solution at early times, where \(\tilde{\lambda}^{\prime}\to 0\). Connecting this system to our previous analysis, the early evolution of the axion can be approximated by that of an exponential potential until the energy density of the scalar field is of the same order as its corresponding scaling solution. For \(\tilde{\lambda}_{i}<m/f_{m}\kappa\), we can linearize Eq. (51), allowing us to follow the same steps as in section 2.2, and obtain the early evolution for \(x,y\) and \(\tilde{\lambda}\).
After some algebra (and substituting for \(\gamma_{\rm eff}=4/3\)), we obtain \[x_{early}(N) \approx(x_{i}-a_{i})e^{-\Delta N_{i}}+a_{i}e^{4\Delta N_{i}}, \tag{53}\] \[y_{early}(N) \approx y_{i}e^{2\Delta N_{i}}, \tag{54}\] \[\tilde{\lambda}_{early}(N) \approx\tilde{\lambda}_{i}+\frac{m}{f_{m}\kappa}\tan\left(\frac{3\tilde{\lambda}_{i}y_{i}^{2}}{40f_{m}\kappa}e^{4\Delta N_{i}}\right), \tag{55}\] where \(\Delta N_{i}=N-N_{i}\). As expected, the only difference in the early evolution of different models is in \(\tilde{\lambda}_{early}(N)\), which will affect the height of the subsequent peak in energy density. Therefore, we can use these equations to calculate the value for \(N_{1}\), which marks the end of the period in which the early-time approximation is valid, and so the time of the peak. This quantity was introduced in Eq. (25), and is given by \[N_{1}\approx N_{i}+\frac{1}{2}\log\bigg{(}\frac{\sqrt{3\gamma_{\rm eff}}}{y_{i}\tilde{\lambda}_{i}\sqrt{2}}\bigg{)}, \tag{56}\] where \(\gamma_{\rm eff}\) is the effective equation of state of the universe at the time of the peak. Notice that, as mentioned in section 2.2, the fine-tuning in Quintessence EDE models is exactly of the same order as for the axion fields. Although the early evolution for \(x,y\) and \(\tilde{\lambda}\) is well described by Eqs. (53-55) (as shown in Figure 8), the rapid evolution of \(\tilde{\lambda}_{early}(N)\) at \(N=N_{1}\) makes it very difficult to make an analytic prediction of the scalar field's energy density parameter at the peak. This is because even though we have a very good approximation for \(N_{1}\) (and so the time at which the peak takes place), the steep slope of \(\tilde{\lambda}_{early}(N)\) at \(N_{1}\) implies that any uncertainty in the value of \(N_{1}\) will have a significant impact on \(\tilde{\lambda}_{early}(N_{1})\) due to its exponential dependence on \(N\) (see Eq. (55)). Therefore, given that the peak height is very sensitive to the value of \(\tilde{\lambda}_{early}(N_{1})\), it is not possible (without the use of iterative methods to improve the value for \(N_{1}\)) to make accurate analytical predictions on the behaviour of the axion field during the peak.

Figure 7: Evolution of a scalar field with an axion potential Eq. (47) with \(n=2\).

Nevertheless, it is still possible to estimate the late-time evolution of the field's total energy once the peak has taken place. For this, we just need to consider that once the axion field unfreezes, it will start oscillating around the minimum of its potential. During these oscillations, there are periods in which either \(x\) or \(y\) dominates the field's energy content, and these oscillations carry over to the equation of state of the axion field. However, we can average the fluctuations into an effective equation of state that approximates the decay of the scalar field. For this, we must first make the following approximation (valid as long as the field is close to the minimum) to the potential \[V_{\rm Ax}^{(m)}(\phi)\approx\mu_{m}^{4}\left(\frac{\phi^{2}}{2f_{m}^{2}}\right)^{m}. \tag{57}\] This is necessary as we can now apply the virial theorem to estimate the average ratio between \(x\) and \(y\) during these oscillations, which, depending on the exponent in the potential (\(m\)), is given by \(\left\langle x^{2}\right\rangle\approx m\left\langle y^{2}\right\rangle\). Thus, the effective equation of state for the axion field can be averaged to \[\left\langle\gamma_{\phi}\right\rangle=\frac{2\left\langle x^{2}\right\rangle}{\left\langle x^{2}\right\rangle+\left\langle y^{2}\right\rangle}\approx\frac{2m}{m+1}. \tag{58}\] This result agrees with the previous literature [21, 22] and shows that in order for the axion field to act as dark energy we need \(m\leq 2\), such that it decays faster than radiation. In summary, we can predict the behaviour of a scalar field with a generic potential as long as its associated \(\tilde{\lambda}\) varies slowly. For this, we just need to find the value of \(\tilde{\lambda}\) at the time of the peak, and use the same analysis and equations we used in the exponential case but with the approximated \(\tilde{\lambda}_{1}\). The underlying idea of finding a constant-\(\lambda\) case that matches the early evolution of Quintessence fields can be extended to a class of models with non-canonical kinetic energies, known as K-essence models.

Figure 8: Comparison between the numerical evolution \(\tilde{\lambda}(N)\) (blue) against the analytical approximation from Eq. (55) (red dashed) with \(\tilde{\lambda}_{i}=2\) and \(f=0.1\). The black point marks the predicted value for \(N_{1}\) and the subsequent \(\tilde{\lambda}_{1}\), which should be substituted into an exponential potential solution to mimic the peak for the full system. The blue point represents the correct value for \(\tilde{\lambda}_{1}\) needed to have a perfect match between the exponential and the axion peak. Although the analytical expression for \(\tilde{\lambda}_{early}(N)\) matches the evolution of \(\tilde{\lambda}(N)\) at early times, we see that the steep slope of the function at \(N_{1}\) leads to a small error on \(N_{1}\) which in turn leads to a large difference in \(\tilde{\lambda}_{1}\).

In the next section, we will show
Thus, the effective equation of state for the axion field can be averaged to \[\left\langle\gamma_{\phi}\right\rangle=\frac{2\left\langle x^{2}\right\rangle}{\left\langle x^{2}\right\rangle+\left\langle y^{2}\right\rangle}\approx\frac{2m}{m+1}. \tag{58}\] This result agrees with the previous literature [21, 22] and shows that in order for the axion field to act as dark energy we need \(m\leq 2\), such that it decays faster than radiation. In summary, we can predict the behaviour of a scalar field with a generic potential as long as its associated \(\tilde{\lambda}\) varies slowly. For this, we just need to find the value of \(\tilde{\lambda}\) at the time of the peak, and use the same analysis and equations as in the exponential case, but with the approximated \(\tilde{\lambda}_{1}\). The underlying idea of finding a constant-\(\lambda\) case that matches the early evolution of Quintessence fields can be extended to a class of models with non-canonical kinetic energies, known as K-essence models.

Figure 8: Comparison between the numerical evolution \(\tilde{\lambda}(N)\) (blue) against the analytical approximation from Eq. (55) (red dashed) with \(\tilde{\lambda}_{i}=2\) and \(f=0.1\). The black point marks the predicted value for \(N_{1}\) and the subsequent \(\tilde{\lambda}_{1}\), which should be substituted into an exponential potential solution to mimic the peak for the full system. The blue point represents the correct value for \(\tilde{\lambda}_{1}\) needed to have a perfect match between the exponential and the axion peak. Although the analytical expression for \(\tilde{\lambda}_{early}(N)\) matches the evolution of \(\tilde{\lambda}(N)\) at early times, we see that the steep slope of the function at \(N_{1}\) leads to a small error on \(N_{1}\) which in turn leads to a large difference in \(\tilde{\lambda}_{1}\).

In the next section, we will show how they too can lead to observationally viable periods of EDE while satisfying the bound on the sound speed, namely \(c_{s}^{2}<1\).

## 3 K-essence case: \(\mathcal{L}(X,\phi)=X^{n}-V(\phi)\)

So far we have concentrated on the evolution of a canonical scalar field in the early universe, asking how it can address the Hubble tension. To date, relatively little attention has been paid to the role non-canonical fields could play, yet these are known to have some very interesting cosmological properties and arise in a number of particle-inspired settings [12; 13; 14; 16; 23; 24; 25; 26], although there have been questions raised over their ability to act as dark energy [27]. We are going to consider them here as a way of providing successful EDE. With that in mind, we consider a class of simplified such models, with Lagrangians of the form \[\mathcal{L}=\frac{X(\dot{\phi})^{n}}{M^{4(n-1)}}-V(\phi), \tag{3.1}\] where \(X(\dot{\phi})\equiv\frac{1}{2}\dot{\phi}^{2}\), \(M\) is a mass scale introduced to keep the action dimensionless and \(n\) is a constant. Of course, \(n=1\) corresponds to a canonical scalar field. One of the interesting aspects of these models is that they lead to reduced sound speeds for the field \(\phi\). In particular, we find for this case [28; 29] \[c_{s}^{2}=\frac{1}{2n-1}, \tag{3.2}\] depending on the exponent in the kinetic energy function \(X(\dot{\phi})\). A number of proposals for early dark energy have invoked such reduced sound speeds, see for example [18; 21]. In what follows, we keep \(n\) general, allowing us to constrain the full parameter space of \(n\). 
For completeness, we begin by once again introducing the equations of motion for our system, which now contains the non-canonical Lagrangian Eq. (3.1), plus the two barotropic fluids introduced in section 2, with energy density \(\rho_{r}\) (radiation) and \(\rho_{m}\), (matter, both baryonic and non-baryonic) and equations of state \(\gamma_{r}\) and \(\gamma_{m}\) respectively [16]. The Friedmann equation is given by \[H^{2}=\frac{\kappa^{2}}{3}\left(\rho_{r}+\rho_{m}+\frac{2n-1}{2^{n}M^{4(n-1)}} (\dot{\phi}^{2})^{n}+V(\phi)\right), \tag{3.3}\] while the fluid and scalar field equations of motion are \[\dot{\rho}_{r}= -3H\gamma_{r}\rho_{r}, \tag{3.4}\] \[\dot{\rho}_{m}= -3H\gamma_{m}\rho_{m},\] \[\frac{n(2n-1)}{2^{n-1}M^{4(n-1)}}\ \dot{\phi}^{2n-2}\ \ddot{\phi} +\frac{3Hn}{2^{n-1}M^{4(n-1)}}\ \dot{\phi}^{2n-1}+V_{,\phi}\left(\phi \right)=0.\] We may introduce the dimensionless density parameters (note the new definition of \(x\) here) \[x= \frac{\kappa\sqrt{2n-1}}{2^{n/2}M^{4(n-1)}}\frac{\dot{\phi}^{n}}{ \sqrt{3}H}, y= \frac{\kappa\sqrt{V}}{\sqrt{3}H}, z= \frac{\kappa\sqrt{\rho_{r}}}{\sqrt{3}H}, \tag{3.5}\] which when substituted into Eq. (3.3), gives the dimensionless density in matter \[\Omega_{m}\equiv\frac{\kappa^{2}\rho_{m}}{3H^{2}}=1-(x^{2}+y^{2}+z^{2}), \tag{3.6}\] with the dimensionless density in \(\phi\), \[\Omega_{\phi}=\frac{\kappa^{2}\rho_{\phi}}{3H^{2}}=x^{2}+y^{2}. \tag{3.7}\] For consistency with the earlier sections we concentrate on the case of the exponential potential \(V=V_{0}\exp(-\kappa\lambda\phi)\) in Eq (2.10), such that the evolution equations become (with \(\gamma_{r}=4/3,\gamma_{m}=1\)): \[x^{\prime} = \sqrt{\frac{3}{2}}\frac{M^{4(n-1)}\lambda y^{2}}{(2n-1)^{\frac{1 }{2n}}}\left(\frac{\kappa}{\sqrt{3}Hx}\right)^{1-\frac{1}{n}}+\frac{x}{2}\left( \frac{3x^{2}-3}{2n-1}-3y^{2}+z^{2}\right), \tag{3.8}\] \[y^{\prime} = -\sqrt{\frac{3}{2}}\frac{M^{4(n-1)}\lambda xy}{(2n-1)^{\frac{1} {2n}}}\left(\frac{\kappa}{\sqrt{3}Hx}\right)^{1-\frac{1}{n}}+\frac{y}{2}\left( 3+\frac{3x^{2}}{2n-1}-3y^{2}+z^{2}\right),\] (3.9) \[z^{\prime} = \frac{z}{2}\left(-1+\frac{3x^{2}}{2n-1}-3y^{2}+z^{2}\right), \tag{3.10}\] where recall \(x^{\prime}\equiv\frac{dx}{dN}\). Once again working in terms of \(\gamma_{\rm eff}\), the effective equation of state of the background fluids defined in Eq. (2.11), we have \[x^{\prime} = \sqrt{\frac{3}{2}}\frac{M^{4(n-1)}\lambda y^{2}}{(2n-1)^{\frac{1} {2n}}}\left(\frac{\kappa}{\sqrt{3}Hx}\right)^{1-\frac{1}{n}}+\frac{3x}{2}\left( \frac{2n(x^{2}-1)}{2n-1}+\gamma_{\rm eff}(1-x^{2}-y^{2})\right), \tag{3.11}\] \[y^{\prime} = -\sqrt{\frac{3}{2}}\frac{M^{4(n-1)}\lambda xy}{(2n-1)^{\frac{1}{2 n}}}\left(\frac{\kappa}{\sqrt{3}Hx}\right)^{1-\frac{1}{n}}+\frac{3y}{2}\left( \frac{2nx^{2}}{(2n-1)}+\gamma_{\rm eff}(1-x^{2}-y^{2})\right),\] (3.12) \[\gamma^{\prime}_{\rm eff} = (\gamma_{\rm eff}-1)(3\gamma_{\rm eff}-4). \tag{3.13}\] Note, however, that this system of equations is not obviously closed due to the explicit dependence of the system on \(H(N)\). 
It proves convenient to combine \(H(N)\) into a new parameter \(\eta\) such that the system manifestly becomes explicitly closed, \[\eta=\frac{M^{4(n-1)}\lambda}{(2n-1)^{\frac{1}{2n}}}\left(\frac{\kappa}{\sqrt {3}Hx}\right)^{1-\frac{1}{n}}, \tag{3.14}\] leading to the following system of equations \[x^{\prime} = \sqrt{\frac{3}{2}}\eta y^{2}+\frac{3x}{2}\left(\frac{2n(x^{2}-1)} {2n-1}+\gamma_{\rm eff}(1-x^{2}-y^{2})\right), \tag{3.15}\] \[y^{\prime} = -\sqrt{\frac{3}{2}}\eta xy+\frac{3y}{2}\left(\frac{2nx^{2}}{(2n-1)}+ \gamma_{\rm eff}(1-x^{2}-y^{2})\right),\] (3.16) \[\gamma^{\prime}_{\rm eff} = (\gamma_{\rm eff}-1)(3\gamma_{\rm eff}-4),\] (3.17) \[\eta^{\prime} = \eta(n-1)\left(\frac{3}{2n-1}-\sqrt{\frac{3}{2}}\frac{\eta y^{2} }{nx}\right). \tag{3.18}\] We note the close resemblance between these equations and the particular set of Quintessence equations Eqs. (2.40-2.43), where the only differences are in the form of the evolution of \(\eta\) (corresponding to the time-dependent \(\tilde{\lambda}\) in Quintessence) and one of the terms in the evolution of \(x\). The natural late time evolution of the system Eqs. (3.15)-(3.18) for fixed \(\gamma_{\rm eff}\) is \[x_{\rm sc}\to 0,\qquad\qquad y_{\rm sc}\to 0,\qquad\qquad\eta_{\rm sc}=\sqrt{ \frac{2}{3}}\left(\frac{3n}{2n-1}\right)\frac{x}{y^{2}}\to\infty. \tag{3.19}\] Therefore, naively, for a system that starts close to the fixed points \(x=y=0\) we expect little evolution. However, as we saw in Section 2.2, even though \(x\) and \(y\) start close to their late time fixed points, \(\eta\) (in that case \(\tilde{\lambda}\)) starts far from it, and this is what leads to a non-trivial evolution of the field. In fact, the similarity with the set of Quintessence equations Eqs. (2.40-2.43) doesn't end there. It suggests that we should be able to use a similar approach to that adopted in section 2.2 for \(\tilde{\lambda}\). In this way, we may find an evolution for a system with a constant \(\eta\) case that matches the full evolution of K-essence (with varying \(\eta\)) up to the peak, in a similar manner to Quintessence where the exponential case matched the Fang et al. model in Section 2.1. For the case where we have a constant \(\eta=\eta_{c}\), equations (3.15)-(3.18) reduce to \[x^{\prime} = \sqrt{\frac{3}{2}}\eta_{c}y^{2}+\frac{3x}{2}\left(\frac{2n(x^{2} -1)}{2n-1}+\gamma_{\rm eff}(1-x^{2}-y^{2})\right), \tag{3.20}\] \[y^{\prime} = -\sqrt{\frac{3}{2}}\eta_{c}xy+\frac{3y}{2}\left(\frac{2nx^{2}}{( 2n-1)}+\gamma_{\rm eff}(1-x^{2}-y^{2})\right),\] (3.21) \[\gamma^{\prime}_{\rm eff} = (\gamma_{\rm eff}-1)(3\gamma_{\rm eff}-4), \tag{3.22}\] which has the following spiral stable fixed point (assuming a constant \(\gamma_{\rm eff}\) and \(n\leq\frac{2\gamma_{\rm eff}}{2\gamma_{\rm eff}-1}\))3 Footnote 3: Notice that in a radiation dominated universe (\(\gamma_{\rm eff}=4/3\)) this fixed point ceases to exist for \(n\geq 2\), which will be used later to rule out a whole range of K-essence models. \[x_{\rm(sc)}= \sqrt{\frac{3}{2}}\frac{\gamma_{\rm eff}}{\eta_{c}},\quad y_{\rm( sc)}= \sqrt{\frac{3}{2}}\frac{\sqrt{\gamma_{\rm eff}}}{\eta_{c}}\sqrt{ \frac{2n}{(2n-1)}-\gamma_{\rm eff}},\quad\Omega^{\rm(sc)}_{\phi}= \frac{3n\gamma_{\rm eff}}{(2n-1)\eta_{c}^{2}},\quad\gamma^{\rm(sc)}_{ \phi}= \gamma_{\rm eff}. \tag{3.23}\] Figure 9: Evolution of a k-essence (\(X^{2}\)) scalar field with an exponential potential in a background containing both matter and radiation baryotropic fluids. 
Although the late time fixed point is at \(x=y=0\), the system presents a peak during its trajectory as \(\eta\) begins to grow. As inferred from the equation of state, this solution corresponds to the scaling fixed point for this class of K-essence models (reducing to Quintessence for \(n=1\)). To find the corresponding constant value for \(\eta_{c}\) in Eqs. (3.20-3.22) that perfectly describes the peak of the full system, we just need to calculate the value for \(\eta\) close to the peak, at \(N=N_{1}\), which we call \(\eta_{1}\). To relate the value for \(\eta_{1}\) to its initial value \(\eta_{i}\), we will study the case for a set of initial conditions in which both \(x\) and \(y\) start very close to the origin, and \(\eta_{i}\sim O(1)\), such that \(x\ll 1,\ y<1,\ y<\eta_{i}\). To address the Hubble tension, we know that the peak must take place at matter-radiation equality or slightly before. Therefore, for this early evolution of the fields, we can assume that the universe is radiation dominated, meaning that \(\gamma_{\rm eff}=4/3\). With this, Eqs. (3.15)-(3.18) simplify to \[x^{\prime} = \left(2-\frac{3n}{2n-1}\right)x+\sqrt{\frac{3}{2}}\eta y^{2}, \tag{3.24}\] \[y^{\prime} = 2y-\sqrt{\frac{3}{2}}\eta xy,\] (3.25) \[\eta^{\prime} = \eta(n-1)\left(\frac{3}{2n-1}-\sqrt{\frac{3}{2}}\frac{\eta y^{2 }}{nx}\right), \tag{3.26}\] which can be solved to give (by dropping the \(\eta xy\) term in Eq. (3.25)) \[x_{early}(N) = x_{i}e^{2\Delta N_{i}}\left((1-b_{i})e^{-3\Delta N_{i}}+b_{i}e^{ 2\Delta N_{i}}\right)^{\frac{n}{2n-1}}, \tag{3.27}\] \[y_{early}(N) = y_{i}e^{2\Delta N_{i}}\] (3.28) \[\eta_{early}(N) = \eta_{i}\left((1-b_{i})e^{-3\Delta N_{i}}+b_{i}e^{2\Delta N_{i} }\right)^{-\frac{n-1}{2n-1}}, \tag{3.29}\] where \(\Delta N_{i}=N-N_{i},x_{i}=x(N_{i}),y_{i}=y(N_{i}),\eta_{i}=\eta(N_{i})\) and \[b_{i}=\sqrt{\frac{3}{2}}\frac{(2n-1)}{5n}\frac{y_{i}^{2}\eta_{i}}{x_{i}}. \tag{3.30}\] Eqs. (3.27-3.29) provide the early time evolution of the system. These equations are valid as long as we can ignore the higher order term in Eq. (3.25), which becomes important when \(y_{early}(N)\) is in the vicinity of the scaling fixed point, at a time we call \(N_{1}\). Therefore, as for Quintessence, we may define this time as when the following equality is satisfied \[y_{early}(N_{1})\approx\sqrt{\Omega_{\phi}^{\rm(sc)(N_{1})}/2}=\sqrt{\frac{3n \gamma_{\rm eff}}{2(2n-1)\eta_{early}(N_{1})^{2}}}, \tag{3.31}\] where we can see that the fixed point is time-dependent, given the evolution of \(\eta(N)\). Substituting for \(y_{early}(N_{1})\) and \(\eta_{early}(N_{1})\) from Eqs. (3.28) and (3.29), we find \[N_{1}=N_{i}+\frac{2n-1}{2n}\log\left(\sqrt{\frac{3n\gamma_{\rm eff}}{2(2n-1)}} \frac{b_{i}^{\frac{n-1}{2n-1}}}{y_{i}\eta_{i}}\right). \tag{3.32}\] Therefore, the value for \(\eta\) at the time of the peak, \(\eta_{1}\), is given by \[\eta_{1}=\eta_{i}\left(\frac{x_{i}}{y_{i}}\right)^{\frac{n-1}{n}}\left(\frac{ 10}{3}\sqrt{\frac{n}{\gamma_{\rm eff}(2n-1)}}\right)^{\frac{n-1}{n}}, \tag{3.33}\] which we can use for the constant \(\eta_{c}\) scenario in Eqs. (3.20-3.22) to match the peak of the full system. In Figure 10 we can see the specific case for \(n=3/2\) and \(\gamma_{\rm eff}=7/6\), giving a nearly perfect approximation. Therefore, to calculate the trajectory for the constant \(\eta\) system from \(N_{1}\) to the peak, at \(N_{p}\), we just need to perturb around the fixed point solutions shown in Eq. (3.23) for \(\eta_{c}=\eta_{1}\). 
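Before carrying out that perturbation, the matching procedure itself can be checked numerically. The sketch below is only an illustration (Python, with purely illustrative initial data of our own choosing, \(\gamma_{\rm eff}\) held fixed at its radiation value, and no attempt to reproduce the figures): it evaluates \(N_{1}\) and \(\eta_{1}\) from Eqs. (3.30)-(3.33) and then integrates the constant-\(\eta_{c}=\eta_{1}\) system of Eqs. (3.20)-(3.22); the full system (3.15)-(3.18) could be integrated alongside for comparison, as in Figure 10.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative values only (not the paper's): model and initial data.
n, gamma_eff = 1.5, 4.0 / 3.0
x_i, y_i, eta_i, N_i = 1e-6, 1e-4, 20.0, 0.0

# Eqs. (3.30), (3.32) and (3.33): end of the early-time regime and eta at the peak.
b_i = np.sqrt(1.5) * (2*n - 1) / (5*n) * y_i**2 * eta_i / x_i
N_1 = N_i + (2*n - 1) / (2*n) * np.log(
    np.sqrt(3*n*gamma_eff / (2*(2*n - 1))) * b_i**((n - 1) / (2*n - 1)) / (y_i * eta_i))
eta_1 = eta_i * (x_i / y_i)**((n - 1) / n) \
    * ((10.0 / 3.0) * np.sqrt(n / (gamma_eff * (2*n - 1))))**((n - 1) / n)

# Constant-eta system, Eqs. (3.20)-(3.22), with gamma_eff held fixed for simplicity.
def rhs(N, s, eta_c):
    x, y = s
    bg = gamma_eff * (1 - x**2 - y**2)
    dx = np.sqrt(1.5)*eta_c*y**2 + 1.5*x*(2*n*(x**2 - 1)/(2*n - 1) + bg)
    dy = -np.sqrt(1.5)*eta_c*x*y + 1.5*y*(2*n*x**2/(2*n - 1) + bg)
    return [dx, dy]

sol = solve_ivp(rhs, (N_i, N_1 + 8), [x_i, y_i], args=(eta_1,), rtol=1e-8, atol=1e-12)
omega_phi = sol.y[0]**2 + sol.y[1]**2
omega_sc = 3*n*gamma_eff / ((2*n - 1) * eta_1**2)   # scaling-solution estimate, Eq. (3.23)
print(f"N_1 = {N_1:.2f}, eta_1 = {eta_1:.2f}, "
      f"max Omega_phi = {omega_phi.max():.3f}, scaling estimate = {omega_sc:.3f}")
```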
Performing the stability analysis leads to \[x(N) = x_{early}(N_{1})+C_{x}\exp(E_{+}\Delta N_{1})+D_{x}\exp(E_{-} \Delta N_{1}), \tag{3.34}\] \[y(N) = y_{early}(N_{1})+C_{y}\exp(E_{+}\Delta N_{1})+D_{y}\exp(E_{-} \Delta N_{1}), \tag{3.35}\] with \(C_{x}\), \(D_{x}\), \(C_{y}\) and \(D_{y}\) being a new set of constants, and for generic \(n\), we obtain for the eigenvalues \(E_{\pm}\) \[E_{\pm}=\frac{3}{4}\left(\gamma_{\rm eff}-\frac{2n}{2n-1}\right)\left(-1\pm \sqrt{1-\frac{8\gamma_{\rm eff}}{\frac{2n}{2n-1}-\gamma_{\rm eff}}}\right), \tag{3.36}\] which are complex with a negative real part for \(1\leq\gamma_{\rm eff}<4/3\). In Figure 11 we can see a comparison between our semi-analytic prediction and the full numerical calculation. As for Quintessence, our analytic predictions only take us to the peak, since the late time evolution will be determined by the global stability of the system, which we recall is at \(x=0,y=0\) (given in Eqs. (3.23)). The rate of decay for the scalar field from the peak to this fixed point can be calculated by noticing that in general the kinetic energy term, \(x\), dominates the energy density of the scalar field after the peak. Thus, since the effective equation of state for a K-essence field with a Lagrangian of the form Eq. (3.1) is given by \[\gamma_{\phi}=\frac{2nx^{2}}{(2n-1)(x^{2}+y^{2})}, \tag{3.37}\] we know that the equation of state of the decaying scalar field (\(y\approx 0\)) is \[\gamma_{\phi}=\frac{2n}{(2n-1)}. \tag{3.38}\] Figure 10: Comparison between the constant \(\eta_{c}=\eta_{1}\) case, (Eqs. (3.20-3.22)) (red-dashed line) and the full system (blue line) for K-essence with \(n=3/2\) (both systems solved numerically). We can see that the full system tracks the first peak of the scaling solution for a constant \(\eta_{c}\), decaying back to the origin after the peak takes place. Notice that this equation of state is not unique to the exponential potential case, since the potential density parameter \(y\) already vanishes by the time \(\Omega_{\phi}\) starts decaying. This result will allow us to place an upper bound on K-essence models in the following section, where we will also compare these models against observations, finding the equilibrium point between a low speed of sound and a large equation of state is preferred by the data. ## 4 Comparison with observations The analytic arguments presented in sections 2 and 3, suggest that for there to be viable EDE models associated with the tracking regimes of Quintessence and K-essence, we require initial conditions where \(x\ll 1\) and \(y\ll 1\), in order to establish that a peak value \(\Omega_{\phi}\sim 0.1\) can occur around matter-radiation equality. We first confirm this by solving the full numerical solutions for all three cases analysed in sections 2 and 3, namely Quintessence with constant slope parameter \(\lambda\), with time-dependent \(\lambda(\phi)\), (including the case of the axion field) and K-essence with constant slope parameter \(\lambda\) and constant \(n\). Herein we present a summary of the so-far obtained bounds imposed by observations on each case: * **Quintessence with constant \(\lambda\):** This case is highly constrained by observations. Although it is possible to obtain a peak height at equality for small enough \(\lambda\), we know the scalar field needs to rapidly become subdominant soon after the peak has been reached, and this is not a natural feature of these models. From Eq. 
(18) the system wants to reach a non-negligible and constant \(\Omega_{\phi}=3/\lambda^{2}\). This was confirmed in full numerical simulations, showing that these models are inconsistent as EDE models.
* **Slow rolling Quintessence with time-dependent \(\tilde{\lambda}(N)\):** These models, along with axion models, were discussed in section 2.2 and section 2.3 respectively, where we confirmed numerically that it was possible to obtain both the correct peak and the correct late-time behaviour of \(\Omega_{\phi}\). This is encouraging; however, we know that Quintessence models generally have \(c_{s}^{2}=1\), and EDE favours lower sound speeds [9; 10; 18].
* **K-essence with \(K(X)=X^{2}\):** The K-essence model we analysed in section 3 indicates both analytically and numerically that it is possible to have successful models for EDE peaking around matter-radiation equality for \(1\leq n\leq 2\) and a range of values of the slope parameter \(\lambda\). One of the promising features of these models is that, from Eqns. (3.2) and (3.38), in the small \(x\) and \(y\) regimes, \(c_{s}^{2}\) and the effective equation of state (\(\gamma_{\phi}\)) of the field depend on \(n\). In particular we see that for \(1\leq n\leq 2\), then \(1\geq c_{s}^{2}\geq\frac{1}{3}\) and \(2\geq\gamma_{\phi}\geq\frac{4}{3}\). These two conditions bring complementary constraints on the favoured values of \(n\). Given the data prefers \(c_{s}^{2}<1\), this suggests we require \(n>1\), which effectively rules out standard quintessence models (as already discussed), whereas demanding that the energy density in \(\phi\) falls off at least as fast as radiation suggests \(n\leq 2\).

Figure 11: Comparison of our semi-analytic versus full numerical solution for K-essence with \(n=3/2\). The black dot represents \(N_{1}\) where the early analytic evolution (dashed red line) matches the perturbation around the effective fixed point at \(\gamma_{\rm eff}=7/6\) and \(\eta_{c}=\eta_{1}\) (dot-dashed green). We can see that the analytic procedure can describe with a precision of 10% the magnitude of the first peak of the scalar field. After that, both solutions begin to diverge from each other as the full system (blue) follows the global attractor given by \(\gamma_{\rm eff}=1\) and \(\eta\to\infty\), vanishing faster than radiation.

With this in mind, we now turn our attention to the likelihood of these models acting as EDE. To begin the analysis we first integrate the autonomous set of equations into the public Camb code [30]. This was performed in two steps: First, cosmological parameters from Camb (notably the radiation and dark energy densities today, \(\Omega_{\text{r},\,0}\) and \(\Omega_{\text{DE},\,0}\), with \(\Omega_{\text{DE},\,0}=\Omega_{\text{cc},\,0}+\Omega_{\phi,\,0}\)) were passed to a separate module to solve the autonomous system \((x^{\prime},y^{\prime},\gamma^{\prime}_{\text{eff}},\tilde{\lambda}^{\prime},l^{\prime})\) for Quintessence with variable \(\lambda\), and \((x^{\prime},y^{\prime},\gamma^{\prime}_{\text{eff}},\eta^{\prime},l^{\prime})\) for K-essence. The system was evolved from \(N_{i}=\ln(a_{i})\) with \(a_{i}=10^{-6}\), with initial values \(x_{i}\), \(y_{i}\), \(\tilde{\lambda}_{i}\), \(\eta_{i}\) chosen from wide enough flat priors to encompass the full range of peak redshifts and magnitudes. The initial conditions \(\gamma_{\text{eff},\text{i}}\) and \(l_{i}\) were found by a bisection search to give the desired \(\Omega_{\text{r},\,0}\) and \(\Omega_{\text{DE},\,0}\). 
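For concreteness, the shooting step can be sketched as a plain bisection. The snippet below is only illustrative: the helper `omega_de_today` stands in for the actual integration of the autonomous system from \(N_{i}\) to today (here replaced by a dummy monotone function), and the variable being adjusted is a generic initial condition.

```python
import math

# Dummy stand-in for "integrate the autonomous system with this initial
# condition and return Omega_DE today"; the real version would solve the ODEs.
def omega_de_today(l_i):
    return 1.0 - math.exp(-l_i)

def bisect_initial_condition(target, lo, hi, tol=1e-10):
    """Bisection search assuming omega_de_today is monotone increasing in l_i."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if omega_de_today(mid) < target:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

print(bisect_initial_condition(target=0.7, lo=0.0, hi=10.0))
```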
From this, \(\gamma_{\phi}(z)\) and \(c_{s}^{2}\) (which can be calculated analytically for the models we consider) are inputted to Camb to solve for the full cosmological evolution, including perturbations. We note there is a small inconsistency in this approach, as the autonomous equations do not include massive neutrinos in the background evolution. However, we found that when modeling neutrinos as 2 massless species, and 1 massive species with \(m_{\nu}=0.06\,\text{eV}\), there is only a small difference in the resulting \(\gamma_{\phi}(z)\) and observational constraints. For concreteness, we again use the Fang model as an example of Quintessence with variable \(\lambda\). A wide, flat prior is chosen on \(\tilde{\alpha}\), again to encompass the full range of EDE peaks. For K-essence we use a flat prior of \(1\leq n\leq 2\). Rather than solve for the field variables, we solve for the perturbed fluid equations in Camb, \[\frac{d\rho_{\phi}}{d\tau} = -3\mathcal{H}\gamma_{\phi}\rho_{\phi}\,,\] \[\frac{d\delta_{\phi}}{d\tau} = -\bigg{[}ku_{\phi}+\frac{\gamma_{\phi}}{2}\frac{dh}{d\tau}\bigg{]} -3\mathcal{H}(c_{\text{s}}^{2}-\gamma_{\phi}-1)\left(\delta_{\phi}+3\mathcal{H }\frac{u_{\phi}}{k}\right)-3\mathcal{H}\frac{1}{\gamma_{\phi}}\frac{d\gamma_{ \phi}}{d\tau}\frac{u_{\phi}}{k}\,, \tag{4.1}\] \[\frac{du_{\phi}}{d\tau} = -(1-3c_{\text{s}}^{2})\mathcal{H}u_{\phi}+\frac{1}{\gamma_{\phi}} \frac{d\gamma_{\phi}}{d\tau}u_{\phi}+kc_{\text{s}}^{2}\,\delta_{\phi}\,, \tag{4.2}\] where conformal time \(\tau\) is defined by \(dt=a(\tau)d\tau\), \(\mathcal{H}=aH\), \(c_{\text{s}}^{2}\) is defined in the rest-frame of the field, \(\delta_{\phi}\) is the density perturbation, and \(u_{\phi}\equiv\gamma_{\phi}v_{\phi}\) is the heat-flux, where \(v_{\phi}\) is the velocity perturbation. The synchronous gauge metric perturbation is given by \(h\). Note that the dynamics in terms of the field (and its perturbations) are equivalent to the perturbed fluid dynamics. We perform a Markov Chain Monte Carlo (MCMC) analysis of the base \(\Lambda\)CDM, Fang and K-essence models, using the public Cobaya[31] and Camb codes [30]. We assume a spatially flat cosmology, with flat priors on the six base cosmological parameters, \(\left\{H_{0},\Omega_{c}h^{2},\Omega_{\rm b}h^{2},n_{\rm s},\log(10^{10}A_{\rm s} ),\tau\right\}\). The Fang model and K-essence models have an additional 4 parameters, \(\left\{x_{i},y_{i},\tilde{\lambda}_{i},\tilde{\alpha}\right\}\) and \(\left\{x_{i},y_{i},\eta_{i},n\right\}\) respectively. We use the ensemble sampler emcee[32] to sample over the model parameters, running 100 walkers in the ensemble. The minimum \(\chi^{2}\) is then found by BOBYQA minimisation, using the chain best-fit as an initial guess. We use _Planck_ 2018 data [5] in combination with BAO data from BOSS DR12 [33], 6dFGS [34] and SDSS-MGS [35]. The _Planck_ likelihoods used are the TT, TE and EE spectra at \(\ell\geq 30\), the low-\(\ell\) likelihood using the Commander component separation algorithm [36], the low\(-\ell\) EE likelihood from the SimAll algorithm, and lensing [37]. We include the SH0ES \(H_{0}\) measurement from [39]4 and high \(\ell\) CMB data from Act DR4 [41]. Following the Act analysis, we exclude \(\ell<1800\) TT data to minimise double counting of information when combined with _Planck_. Footnote 4: Note that the SHOES measurement of \(H_{0}=73.2\pm 1.3\,\rm{km\ s^{-1}\ Mpc^{-1}}\) is at the upper end of local estimates. 
More recent SH0ES results [4] suggest an even higher level of tension with \(\Lambda\)CDM, whereas alternative methods suggest a lower value, e.g. [40]. The resulting parameter constraints on the base cosmological parameters and \(\chi^{2}\) values are shown in Table 1. For the K-essence model, we find a \(\Delta\chi^{2}=-14.2\) improvement over \(\Lambda\)CDM, which is comparable with previous studies using the axion potential of Eq. (2.47). The main improvement comes from the SH0ES measurement (\(\Delta\chi^{2}_{\rm H0,Riess2020}=-11\)) and Act (\(\Delta\chi^{2}_{\rm ACT}=-6\)). Several features are consistent with previous work using the axion. First, whilst the best-fit \(H_{0}=70.45\,\rm{km\ s^{-1}\ Mpc^{-1}}\) is increased compared to \(\Lambda\)CDM, it does not fully come into alignment with the SH0ES value. Second, there is an increase in the value of \(S_{8}\equiv\sigma_{8}(\Omega_{m}/0.3)^{0.5}\), where \(\sigma_{8}\) is the matter clustering amplitude on scales of \(8h^{-1}\,\rm{Mpc}\), which would increase tensions with the \(S_{8}=0.790^{+0.018}_{-0.014}\) result from DES and KiDS [42]. The resulting reconstruction of \(\Omega_{\phi}(z)\) is shown in Fig. 12. Similar to the axion, there is a preference for \(\Omega_{\phi}\sim 0.06\) at \(z\sim 4000\). The posterior constraint on \(n\) is \(n=1.51^{+0.20}_{-0.28}\), giving a derived value of \(c_{s}^{2}=0.53\pm 0.13\). This is again similar to the axion model, which has \(c_{s}^{2}\approx 0.7\) over the relevant times and scales of interest [43]. For the Fang model, \(\Omega_{\phi}(z)\) is completely suppressed at all redshifts, with similar \(\chi^{2}\) values to \(\Lambda\)CDM. We note the model and parameter ranges are capable of producing peaks similar to K-essence, but in this case \(c_{s}^{2}=1\). This underlines the importance of \(c_{s}^{2}<1\) for a viable EDE model.

## 5 Conclusion and discussion

In this paper we have taken seriously the prospect that the current Hubble tension is a manifestation of new early universe physics. In particular, we have considered an evolving scalar field which, for a short period of time around matter-radiation equality, could briefly enhance the energy density of the universe, and by doing so increase the Hubble parameter beyond the CMB-derived value (\(H_{0}=67.44\pm 0.58\ \rm{km\ s^{-1}\ Mpc^{-1}}\)) [5, 6, 7] and bring it closer to that determined by the SH0ES team (\(H_{0}=73.04\pm 1.04\ \rm{km\ s^{-1}\ Mpc^{-1}}\)) [3, 4]. In doing so, we of course recognise that this may be overkill; the resolution may reside in the way the data is analysed and interpreted (for a recent discussion of the issues concerning direct measurements see [8]). The scalar field models that have been proposed to date have in common a short period just before matter-radiation equality where the energy density increases briefly to around 5-10% of the overall energy density, \(\Omega_{\phi}\sim 0.1\), before reducing again faster than radiation, so as to make sure it does not impact on the matter dominated era that then follows (for a review of models see [9; 10]). The novel element in this work is the recognition that many scalar field models, evolving in the presence of a dominating background fluid, will approach a scaling regime, where the energy density stored in the field mimics that of the fluid and becomes a fixed fraction of the total energy density. In particular, we ask under what conditions such a field will lead to the required behaviour just described around the time of matter-radiation equality. 
We have addressed this question for a class of exponential potential models (Eq. (10) with a constant slope parameter \(\lambda\), Eq. (38) for a time-dependent slope \(\lambda(\phi)\)), Eq. (47) for an axion field and then for a particular class of K-essence models (Eq. (11)). In each case we have been able to obtain solutions where, starting from initial conditions corresponding to \(x\ll 1\) and \(y\ll 1\), we find that as the system evolves, the field initially remains approximately constant before naturally trying to evolve towards the scaling regime it would associate with matter domination. It is in that evolution that there is a brief period where the energy density increases, allowing the field to act like EDE. We explained analytically the evolution in all four cases and showed how to understand both the evolution of \(\Omega_{\phi}\) and the epoch of domination in terms of the potential parameters and initial conditions of the field. We saw that the standard Quintessence model with constant \(\lambda\) could not provide the necessary enhancement in \(\Omega_{\phi}\) whilst also satisfying matter domination constraints on its energy density, but the time-dependent \(\lambda\) quintessence models and the axion model could, as their energy density dropped rapidly just after equality. A set of conditions was established for such models to act as EDE.

\begin{table} \begin{tabular}{|l||c|c|c|} \hline Parameter & \(\Lambda\)CDM & K-essence & Fang \\ \hline \hline \(H_{0}\) & \(68.19\pm 0.37\) (\(68.09\)) & \(69.7^{+1.3}_{-1.6}\) (\(70.45\)) & \(68.20\pm 0.37\) (\(68.29\)) \\ \(\Omega_{\rm b}h^{2}\) & \(0.02248^{+0.00010}_{-0.00012}\) (\(0.02247\)) & \(0.02251^{+0.00016}_{-0.00018}\) (\(0.02251\)) & \(0.02248\pm 0.00012\) (\(0.02250\)) \\ \(\Omega_{c}h^{2}\) & \(0.11824\pm 0.00086\) (\(0.1185\)) & \(0.1240^{+0.0051}_{-0.0062}\) (\(0.1278\)) & \(0.11822^{+0.00078}_{-0.00088}\) (\(0.1180\)) \\ \(n_{\rm s}\) & \(0.9721\pm 0.0034\) (\(0.9711\)) & \(0.9806^{+0.0092}_{-0.0072}\) (\(0.9873\)) & \(0.9717\pm 0.0033\) (\(0.9722\)) \\ \(\log(10^{10}A_{\rm s})\) & \(3.056\pm 0.013\) (\(3.055\)) & \(3.063^{+0.014}_{-0.016}\) (\(3.058\)) & \(3.056\pm 0.013\) (\(3.057\)) \\ \(\tau_{\rm reio}\) & \(0.0583\pm 0.0068\) (\(0.05784\)) & \(0.0570^{+0.0064}_{-0.0072}\) (\(0.05122\)) & \(0.0585\pm 0.0070\) (\(0.05884\)) \\ \hline \(r_{d}h\) & \(100.54^{+0.60}_{-0.68}\) (\(100.4\)) & \(100.65^{+0.64}_{-0.73}\) (\(100.5\)) & \(100.55\pm 0.64\) (\(100.7\)) \\ \(S_{8}\equiv\sigma_{8}(\Omega_{m}/0.3)^{0.5}\) & \(0.8175\pm 0.0094\) (\(0.8195\)) & \(0.830^{+0.012}_{-0.017}\) (\(0.8378\)) & \(0.8171^{+0.0091}_{-0.010}\) (\(0.8153\)) \\ \hline \(\chi^{2}_{\rm H0,Riess2020}\) & \(15.5\) & \(4.5\) (**-11.0**) & \(14.2\) (\(-1.2\)) \\ \(\chi_{\rm Planck}^{2}\) & \(1014.2\) & \(1017.1\) (\(2.8\)) & \(1015.3\) (\(1.1\)) \\ \(\chi_{\rm ACT}^{2}\) & \(240.4\) & \(234.4\) (**-6.0**) & \(240.1\) (\(-0.3\)) \\ \hline \(\chi_{\rm data}^{2}\) & \(2310.1\) & \(2296.0\) (**-14.2**) & \(2309.7\) (\(-0.5\)) \\ \hline \end{tabular} \end{table} Table 1: Mean and best-fit parameter values for the \(\Lambda\)CDM, K-essence and Fang models. Consistent \(\chi^{2}\) values (BAO and SN) have been suppressed.

Figure 12: Reconstruction of \(\Omega_{\phi}(z)\) for the K-essence (left) and the Fang models (right). \(1\) and \(2\sigma\) confidences are indicated by the dark and light blue regions, and the best-fit by the dashed line. The best-fit of the K-essence model has \(n=1.54\).
Moreover, we were able to demonstrate a nice relationship between these models and the K-essence models, which were also successful in leading to the required EDE. The real test of the models though lies in how well they fit all the available data, including the evolution of the associated fluctuations of the field and the formation of structure such as in the CMB. Using an MCMC likelihood fit, we saw that the K-essence model with \(n\approx 3/2\) was a much better fit providing an improved \(\chi^{2}\) fit of 14.3 compared to standard \(\Lambda\)CDM (see Table 1). Although the variable \(\lambda\) model of Fang et al [20], provided an excellent fit to the background EDE cosmology, it failed eventually because of the fact all Quintessence models have a sound speed \(c_{s}^{2}=1\) which the data just does not prefer. Of course we are not pretending that the K-essence model we have analysed is a particularly well-motivated particle physics model. However, it does bring home the possibility that interesting dark energy physics can emerge from such non-canonical terms as they can provide ideal realisations of the constraints emerging on EDE solutions from the background field cosmology, i.e. \(\gamma_{\phi}>\frac{4}{3}\) just after matter-radiation equality, and the perturbations of the fields, i.e. \(c_{s}^{2}<1\) during that period. It would be fascinating to see whether such models exist in the world of particle physics. ## Note added Just as this paper was being completed, we became aware of [44] in which the authors also make use of the fact that dynamical systems can lead to enhanced energy densities as they approach their fixed points. In their case, they attempted to use the same field for both EDE and late time dark energy. The work of EJC and AM was supported by an STFC Consolidated Grant [Grant No. ST/T000732/1], and EJC was also supported by a Leverhulme Research Fellowship [RF-2021 312]. SSM was supported by an STFC studentship [Grant No. ST/V506928/1], and JMMW by an STFC studentship [Grant No. ST/W507702/1].
2309.11795
An optimal control deep learning method to design artificial viscosities for Discontinuous Galerkin schemes
In this paper, we propose a method for constructing a neural network viscosity in order to reduce the non-physical oscillations generated by high-order Discontinuous Galerkin (DG) methods. To this end, the problem is reformulated as an optimal control problem for which the control is the viscosity function and the cost function involves comparison with a reference solution after several compositions of the scheme. The learning process is strongly based on gradient backpropagation tools. Numerical simulations show that the artificial viscosities constructed in this way are just as good or better than those used in the literature.
Léo Bois, Emmanuel Franck, Laurent Navoret, Vincent Vigon
2023-09-21T05:43:15Z
http://arxiv.org/abs/2309.11795v1
An optimal control deep learning method to design artificial viscosities for Discontinuous Galerkin schemes ###### Abstract In this paper, we propose a method for constructing a neural network viscosity in order to reduce the non-physical oscillations generated by high-order Discontinuous Galerkin (DG) methods. To this end, the problem is reformulated as an optimal control problem for which the control is the viscosity function and the cost function involves comparison with a reference solution after several compositions of the scheme. The learning process is strongly based on gradient backpropagation tools. Numerical simulations show that the artificial viscosities constructed in this way are just as good or better than those used in the literature. ## 1 Introduction In computational fluid dynamics, Discontinuous Galerkin (DG) methods can be used as an alternative to finite volumes (FV) methods when a high order of convergence is desired. Indeed, by using polynomials coupled with the weak form of the equation to approximate the solution, DG methods allow to reach arbitrarily high orders of convergence [11, 12]. However, when the solution exhibits shocks or strong gradients, these high order methods introduce non-physical oscillations, which can deteriorate the accuracy of the solution and lead to stability issues. Since shocks and strong gradients easily appear in non-linear conservative systems, even from continuous initial conditions, countermeasures are required to stabilize DG methods. Classical approaches to reduce oscillations and stabilize DG methods are based on slope limiters, filtering techniques or artificial viscosity methods. Slope limiter methods were initially developed for FV methods and then adapted to DG schemes: they use a troubled-cell indicator to identify cells with oscillations and then define fluxes at the cells interfaces with second order polynomial approximation and total variations diminishing property [11, 12]. An alternative is to consider WENO (Weighted Essentially Non Oscillating) type reconstruction to take advantage of the full high-order approximation in the identified troubled cells and their neighbours [13, 14, 15]. Regarding filtering techniques, they involve applying a linear filter using the modal representation of the solution locally in each cell to smooth out the solution [10, 11]. Finally the artificial viscosity method consists in adding a non-linear viscous term to the equation, which makes the solution smoother, thus saving the DG methods from situations that they cannot handle correctly. The viscous term can then be tuned with a local coefficient, which should only be activated in problematic areas and should vanish as the characteristic length of the mesh tends to zero. In this paper, we will focus on artificial viscosity methods. A few different approaches have emerged to deal with different problems. For instance, the artificial viscosity can be function of the divergence of the velocity field [10], function of the modal decay of the solution in each cell [13] or function of the entropy production [12]. A comparative study of some models has been carried out in [11], and shows that these different models behave differently from each other, and that the most suitable model may depend on the test case considered. In addition, each of these models relies on some parameters that have to be adjusted empirically, making the process of finding the right viscosity coefficient even more difficult. 
A more recent approach consists in exploiting the capabilities of neural networks in pattern recognition to design data-driven tools. In short, neural networks can be described as non-linear functions with many parameters --from thousands to millions-- that can be adjusted using gradient descent in order to optimize a given criterion. Examples of this approach can be found in several related topics: neural networks are used as troubled-cell indicators for high order schemes in [14], as classifiers of functions' regularity to control oscillations in spectral methods in [15], or as predictors of the degree of reconstruction for the MOOD algorithm in [1]. Last but not least, in [13] the authors design an artificial viscosity for DG methods using neural networks. In all these examples, the authors use _supervised learning_, where the criterion to optimize is an error between the output of the neural network and a given target. In [13] for instance, for each test case of the training dataset, the target is set to the artificial viscosity model (among a given set of options) that performs best for this specific test case. This method makes the neural network converge toward a kind of good interpolation between known models. However, supervised learning in this way is not necessarily the best option, or may not even be possible in some other contexts. Indeed, an appropriate target is not always available; and when it is, as is the case in the example previously described, using it as a target will not allow the neural network to explore new designs. In this paper we use a different way to train parameterized functions in numerical schemes that is not bound by these limitations. We train the neural network directly in the numerical scheme, and compare the resulting numerical solution to a target solution, instead of considering the output of the neural network itself. This approach relieves us of any prior expectation of what the output of the neural network should look like and only focuses on the result, while having the additional advantage of including the effects of the neural network through many iterations of the scheme, with the gradient computed by an automatic differentiation framework. We speak of a "differentiable physics" approach [14]. This approach has very recently been used to learn discretizations [1, 15, 16]. It can be seen as an optimal control approach, in which we design a closed-loop control of the system to fit the reference solution. A comparable approach is reinforcement learning, which is equivalent to optimal control without using the temporal transition scheme, but only examples of transitions. To our knowledge, this approach has been used for the construction of limiters in [17] and for adjusting the weights in WENO schemes [18]. One of the main ingredients of this approach is the use of deep learning frameworks (like TensorFlow or PyTorch), not only for the implementation of the neural network, but extended to the implementation of the whole numerical scheme. Indeed, the optimization of the parameters is performed with a gradient descent algorithm, which requires the computation of the gradient of the error. Since the error involves the numerical solution produced through many iterations of the parameterized numerical scheme, all the computations need to be differentiated. By implementing the numerical scheme in a deep learning framework, the computation of this highly complex gradient can be fully automated, making the optimization algorithm fairly easy to code. 
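As a toy illustration of this point (this is not the paper's code; the scheme, the single scalar parameter and the cost are deliberately simplistic), the following PyTorch snippet differentiates a loss through fifty compositions of a parameterized update:

```python
import torch

# Toy parameterized scheme: upwind advection plus a theta-controlled diffusion,
# composed n_steps times; autograd then gives d(loss)/d(theta) through all steps.
nx, n_steps, c = 200, 50, 0.4                            # cells, iterations, CFL number
theta = torch.tensor(0.1, requires_grad=True)            # stands in for the weights of pi_theta

def step(u, theta):
    upwind = u - c * (u - torch.roll(u, 1))              # first-order upwind advection
    diffusion = torch.roll(u, -1) - 2 * u + torch.roll(u, 1)
    return upwind + theta * diffusion                    # "artificial viscosity" term

x = torch.linspace(0.0, 1.0, nx)
u = (x > 0.3).float() * (x < 0.6).float()                # discontinuous initial data
u_ref = torch.roll(u, int(n_steps * c))                  # exact shift for pure advection

for _ in range(n_steps):
    u = step(u, theta)

loss = torch.mean((u - u_ref) ** 2)
loss.backward()                                          # gradient through 50 compositions
print(loss.item(), theta.grad.item())
```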
However, the complexity of the gradient is in itself an obstacle to the proper functioning of the algorithm, both because of its high computational cost and because of potential gradient instability. In this paper, we propose an algorithm to address both of these limitations, and provide results of this algorithm when applied to the design of a neural network viscosity for DG schemes. The remainder of this paper is organized as follows. In section 2, we introduce a general framework for the approach we used in this work. In section 3, we describe in detail our application of this framework to the design of an artificial viscosity for DG schemes in 1D. Finally, section 4 gives numerical results for the advection equation, Burgers' equation and Euler's equation, with considerations on the influence of some parameters.

## 2 An optimal control method for parameterized schemes

In this section we describe the method we use to construct an artificial viscosity. Since this method is not specific to this problem, we describe it in general terms, as a general framework to optimize parameters in a numerical scheme for partial differential equations. In our application, these parameters are the weights of a neural network designed to output the coefficient for the artificial viscosity.

### Optimization problem

As an example, let us consider a general hyperbolic equation \[\partial_{t}\mathbf{U}+\nabla\cdot\mathbf{F}(\mathbf{U})=0,\] with \(\mathbf{U}:\mathbb{R}^{d}\times\mathbb{R}^{*}_{+}\to\mathbb{R}^{s}\) the vector of unknowns and \(\mathbf{F}:\mathbb{R}^{s}\to\mathbb{R}^{s\times d}\) the flux of the equation. Let us consider a given discretization of space (finite volumes, DG, WENO schemes, etc.) and time (explicit Euler, Runge-Kutta, etc.), resulting in a numerical scheme of the form \[U^{n+1}=S(U^{n},\pi(U^{n}))=S_{\pi}(U^{n}), \tag{1}\] where \(U^{n}\) is the approximation of the solution \(\mathbf{U}\) at time \(t^{n}\), \(S_{\pi}\) is an iteration of the numerical scheme on one timestep, and \(\pi\) is a part of the numerical scheme that can be modified to serve as a control to the scheme. For instance, \(\pi\) could be a slope limiter for a finite volumes method, a process to compute the weights of a WENO scheme, or --as in this paper-- an artificial viscosity coefficient for a discontinuous Galerkin method, among many other possibilities. In order to express the problem as an optimization problem, we denote by \(V_{\pi}^{N}(U^{0})\) the quantity \[V_{\pi}^{N}(U^{0})=\mathcal{L}\big{(}S_{\pi}(U^{0}),\ S_{\pi}^{2}(U^{0}),\ \dots,\ S_{\pi}^{N}(U^{0})\big{)}, \tag{2}\] where \(S_{\pi}^{n}\) corresponds to \(n\) iterations of the scheme \[S_{\pi}^{n}(U^{0})=(\underbrace{S_{\pi}\circ\dots\circ S_{\pi}}_{n\text{ times}})(U^{0}),\] and \(\mathcal{L}\) is a generic cost function. Thus \(V_{\pi}^{N}(U^{0})\) is a cost function depending on a discrete solution made of \(N\) successive iterations of the numerical scheme \(S_{\pi}\). It can be, for example, an error committed by the scheme compared to a reference solution, or a penalization of the control. Since there is usually little hope of finding a control \(\pi\) that minimizes \(V_{\pi}^{N}(U_{0})\) for all possible initial conditions \(U_{0}\), we focus on a specific distribution of initial conditions \(\mathbb{P}\), and consider the following optimization problem: \[\min_{\pi}\int V_{\pi}^{N}(U_{0})\,d\mathbb{P}(U_{0}). 
\tag{3}\] In order to find a solution to this optimization problem, we choose to parameterize \(\pi\) with a set of parameters \(\theta\), for instance by implementing \(\pi_{\theta}\) as a neural network. The optimization problem thus becomes \[\min_{\theta}J(\theta)=\min_{\theta}\int V_{\pi_{\theta}}^{N}(U_{0})\,d\mathbb{P}(U_{0}). \tag{4}\] This problem is similar to the optimization problem solved by policy gradient methods in reinforcement learning [1].

### Gradient descent and back-propagation

Our approach to solve the optimization problem (4) is to use a mini-batch gradient descent algorithm, relying on automatic differentiation for the computation of the gradient. The gradient descent algorithm consists in starting from an arbitrary set of parameters and iteratively improving it by performing updates of the form \[\theta\leftarrow\theta-\eta\nabla_{\theta}J(\theta),\] or any other alternative, e.g. with momentum, Adam, and so on. The _mini-batch_ version of the algorithm consists in replacing the gradient \(\nabla_{\theta}J(\theta)\) by an approximation using a Monte-Carlo method, meaning that the integral over the distribution \(\mathbb{P}\) of initial conditions is replaced by a sum over a sample \((U_{1}^{0},...,U_{K}^{0})\) of \(\mathbb{P}\): \[\nabla_{\theta}J(\theta)\simeq\sum_{k=1}^{K}\nabla_{\theta}V_{\pi_{\theta}}^{N}(U_{k}^{0}).\] In principle this approximation does not prevent the convergence of the algorithm while making it much faster. In our case, if the distribution \(\mathbb{P}\) is infinite, such an approximation is even required for the computation of the gradient to be possible. From here, the difficulty lies in the computation of \(\nabla_{\theta}V^{N}_{\pi_{\theta}}(U^{0})\) for a given \(U^{0}\). Figure 2 shows the computational graph of this quantity, from which can be derived the following formulae: \[\big{(}\nabla_{\theta}V^{N}_{\pi_{\theta}}\big{)}=\sum_{n=1}^{N}\big{(}\nabla_{\theta}S^{n}_{\pi_{\theta}}\big{)}\big{(}\nabla_{U^{n}}\mathcal{L}\big{)},\] \[\big{(}\nabla_{\theta}S^{n+1}_{\pi_{\theta}}\big{)}=\big{(}\nabla_{\theta}(S_{\pi_{\theta}}\circ S^{n}_{\pi_{\theta}})\big{)}=\big{(}\nabla_{\theta}S^{n}_{\pi_{\theta}}\big{)}\big{(}\nabla_{U}S_{\pi_{\theta}}\big{)}+\big{(}\nabla_{\theta}S_{\pi_{\theta}}\big{)},\] where, for any function \(g(x)\), the gradient \(\nabla_{x}g\) refers to the transpose of its Jacobian matrix. Assuming that all these quantities are well defined, meaning that both \(\mathcal{L}\) and \(S_{\pi_{\theta}}\) are differentiable, we thus obtain a way to compute the gradient \(\nabla_{\theta}V^{N}_{\pi_{\theta}}(U^{0})\). In practice, all these computations are done automatically. Indeed, in the same way that deep learning frameworks (e.g. TensorFlow, PyTorch) allow one to automatically compute the gradient of a neural network w.r.t. its parameters using the _backpropagation_ algorithm, the same frameworks can be used to compute the gradient of any parameterized numerical scheme \(S_{\pi_{\theta}}\) w.r.t. \(\theta\), provided that \(S_{\pi_{\theta}}\) is implemented using the differentiable functions of the framework. More than that, the backpropagation algorithm can be applied to any number of iterations of the numerical scheme, and even to the complete computational graph for \(V^{N}_{\pi_{\theta}}(U^{0})\) shown in Figure 2. In particular, this method illustrates one way these deep learning frameworks can prove useful for optimization tasks in scientific computing. 
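In code, one mini-batch update for problem (4) takes the following shape. The sketch below is only an illustration with placeholder components: the scheme, the cost and the small convolutional network stand in for the DG solver, the cost \(\mathcal{L}\) and \(\pi_{\theta}\) described later, and are not the authors' implementation.

```python
import torch

# One mini-batch gradient step: sample K initial conditions, roll out N
# iterations of the parameterized scheme, and update theta with the
# Monte-Carlo estimate of the gradient of J.
pi_theta = torch.nn.Sequential(torch.nn.Conv1d(1, 16, 3, padding=1),
                               torch.nn.ReLU(),
                               torch.nn.Conv1d(16, 1, 3, padding=1),
                               torch.nn.Softplus())
optimizer = torch.optim.Adam(pi_theta.parameters(), lr=1e-3)

def scheme_step(u):                      # placeholder S_pi: viscosity from the network
    mu = 0.1 * pi_theta(u.unsqueeze(1)).squeeze(1)   # scaled to keep the update stable
    lap = torch.roll(u, -1, 1) - 2 * u + torch.roll(u, 1, 1)
    return u - 0.4 * (u - torch.roll(u, 1, 1)) + mu * lap

K, N, nx = 8, 20, 100
u = torch.rand(K, nx)                    # K random initial conditions sampled from P
u_ref = u.clone()                        # placeholder reference (e.g. from a MUSCL solver)

loss = 0.0
for _ in range(N):
    u = scheme_step(u)
    loss = loss + torch.mean((u - u_ref) ** 2)   # accumulates the cost over iterations
optimizer.zero_grad()
loss.backward()
optimizer.step()
```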
Let us conclude this section with a few observations on this approach to solve problem (4). A first advantage of this optimization algorithm lies in the fact that it does not require any reference \(\pi\), since the error is not computed on the output of \(\pi_{\theta}\) directly, but instead on the numerical solution that stems from \(\pi_{\theta}\). A second advantage is that the optimized function \(J(\theta)\) can take into account many iterations of the numerical scheme \(S_{\pi_{\theta}}\), thus including effects of the control \(\pi_{\theta}\) that would go unnoticed on shorter time scales. In our application to an artificial viscosity, this long-time effect would be the diffusion related to too high a viscosity, as opposed to the short-term oscillations related to too low a viscosity. Finally, note that this method contrasts with classical reinforcement learning algorithms in that it leverages our knowledge of the transition process between two successive states, here \(U^{n}\) and \(U^{n+1}\). Indeed, classical reinforcement learning algorithms usually build implicitly (model-free approaches) or explicitly (model-based approaches) an approximation of this transition process by analyzing examples of transitions, whereas in our case the full knowledge of this transition process, i.e. of the numerical scheme, can be used to compute the gradient of \(J(\theta)\) directly.

### Optimization on sub-trajectories using the reference solutions

Let us mention two obstacles to the application of the method described above. The first one stems from the _depth_ of the computational graph when the number of iterations \(N\) grows higher and higher. Indeed, as \(N\) increases, the computation of \(\nabla_{\theta}V^{N}_{\pi_{\theta}}\) becomes not only more and more expensive, but also more and more subject to gradient instability issues, similarly to very deep neural networks. The second obstacle is that although the algorithm does not require a reference control \(\pi\), it does rely on a function \(\mathcal{L}\) that quantifies the error of a numerical solution, and which may not be easy to determine. One way we have found to partially address both of these issues is to use a reference numerical scheme \(S_{\text{ref}}\), accurate and robust, and the reference solutions \(U^{1}_{\text{ref}},...,U^{N}_{\text{ref}}\) provided by it. First it helps with the measure of the error committed by \(U_{\theta}\), by providing an expected result to compare with. But we can also use the reference solutions to limit the number of iterations the gradient actually goes through to a given number \(m<N\), by replacing problem (4) by: \[\min_{\theta}J(\theta)=\min_{\theta}\int\sum_{n=0}^{N-m}V^{m}_{\pi_{\theta}}(U^{n}_{\text{ref}})\,d\mathbb{P}(U_{0})=\min_{\theta}\int\sum_{n=0}^{N-m}V^{m}_{\pi_{\theta}}(S^{n}_{\text{ref}}(U_{0}))\,d\mathbb{P}(U_{0}). \tag{5}\] In this formulation of the problem, instead of minimizing the error on an entire trajectory with \(N\) iterations, we minimize the sum of the errors on all the sub-trajectories with \(m\) iterations, starting from a point in the reference solution. This process thus limits the size of the computational graph for \(J(\theta)\), while still allowing the parameterized scheme \(S_{\pi_{\theta}}\) to be trained on data at times arbitrarily far from \(t=0\), which would not be the case if we simply picked a small \(N\). 
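The sub-trajectory cost of Eq. (5) is easy to express as a function. The sketch below is a minimal illustration with placeholder names (`scheme_step` and `cost` stand in for \(S_{\pi_{\theta}}\) and the local cost); in training, the starting index \(n\) (and the trajectory index \(k\)) would be drawn at random.

```python
import torch

# Sub-trajectory objective of Eq. (5): start from a stored reference state,
# advance m steps with the parameterized scheme, and sum the per-step cost
# against the reference trajectory.
def subtrajectory_cost(scheme_step, cost, U_ref, n, m):
    """U_ref: tensor of shape (N+1, ...) holding the stored reference states."""
    u = U_ref[n].clone()
    total = 0.0
    for j in range(1, m + 1):
        u = scheme_step(u)
        total = total + cost(u, U_ref[n + j])
    return total

# Toy usage: a linear "scheme", an L2 cost, and a fake reference trajectory.
scheme_step = lambda u: 0.99 * u
cost = lambda u, u_ref: torch.mean((u - u_ref) ** 2)
U_ref = torch.randn(11, 100)
print(subtrajectory_cost(scheme_step, cost, U_ref, n=2, m=4))
```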
Note that for the computation of \(\nabla_{\theta}J(\theta)\) with this new formulation, the sum over the sub-trajectories is also approximated by a Monte-Carlo method, similarly to the integral over the initial conditions. This approach does not allow us to capture very long-time effects of the control, but only effects over medium-size time sequences. For a number of applications, such as the construction of limiters or viscosities, this seems to be sufficient.

### Algorithm

The whole method is described in Algorithm 1. Note that, as mentioned in the previous section, both the integral over the initial conditions and the sum over the sub-trajectories of length \(m\) in (5) are approximated by a Monte-Carlo method, resulting in a kind of double mini-batch gradient descent algorithm. Something not mentioned in this algorithm for the sake of simplicity, but very useful to track the progression of the training, is the computation of a _validation loss_ at the end of each epoch, consisting in the evaluation of \(J(\theta)\) on a set of sub-trajectories generated at the beginning of the training. Also note that in this algorithm, \(S_{\pi_{\theta}}\) and \(S_{\text{ref}}\) could actually consist of several iterations of the corresponding numerical schemes, so that the actual timestep \(\Delta t\) satisfies some stability conditions. Equivalently, we could say that the error \(\mathcal{L}(U^{n},\cdots,U^{n+m})\) could be computed on a subset of instants, thus lowering the memory requirements for the storage of the reference solutions.
```
Start from a random set of parameters \(\theta\)
foreach episode do
    Generate random initial conditions \((U^{0}_{1},\ldots,U^{0}_{K})\sim\mathbb{P}\)
    Compute reference trajectories from \(U^{0}_{k}\) up to \(S^{N}_{\text{ref}}(U^{0}_{k})\) for all \(k\in\{1,\ldots,K\}\)
    foreach epoch do
        Randomly select a set \(I\) of indices \((k,n)\in\{1,\ldots,K\}\times\{0,\ldots,N-m\}\)
        Compute sub-trajectories from \(S^{n}_{\text{ref}}(U^{0}_{k})\) up to \(S^{m}_{\pi_{\theta}}(S^{n}_{\text{ref}}(U^{0}_{k}))\) for all \((k,n)\in I\)
        Compute \(J(\theta)=\sum_{(k,n)\in I}V_{\pi_{\theta}}(S^{n}_{\text{ref}}(U^{0}_{k}))\)
        Update parameters \(\theta\) with \(\nabla J(\theta)\)
    end foreach
end foreach
```
**Algorithm 1** Training algorithm

## 3 Design of an artificial viscosity for discontinuous Galerkin schemes

This section is dedicated to our application of the method previously described to the design of an artificial viscosity for discontinuous Galerkin schemes in one dimension. Sections 3.1 and 3.2 describe the problem and the key elements of the method, such as the numerical scheme, the control \(\pi\) and the cost function \(\mathcal{L}\). Then the remaining subsections give some details on the implementation.

### Discontinuous Galerkin method and artificial viscosity

In this application, we are interested in using discontinuous Galerkin (DG) schemes to solve hyperbolic equations of the form \[\partial_{t}\mathbf{U}+\partial_{x}\mathbf{F}(\mathbf{U})=0, \tag{6}\] with \(\mathbf{U}:\mathbb{R}_{+}\times[x_{\min},x_{\max}]\to\mathbb{R}^{s}\) the conservative variables, and \(\mathbf{F}:\mathbb{R}^{s}\to\mathbb{R}^{s}\) the physical flux. In order to discretize equation (6) with a discontinuous Galerkin method, we consider a spatial mesh of the interval \([x_{\min},x_{\max}]\) made of \(n_{x}\) cells of equal length \(\Delta x\), and introduce a basis of polynomials \((\phi_{1},...,\phi_{p})\) of degree at most \(p-1\) on the reference interval \([-1,1]\). 
Assuming that the components of \(\mathbf{U}\) are polynomials of degree at most \(p-1\) on each cell, with no constraint of continuity at the interfaces of the cells, the \(i\)-th variable on the \(j\)-th cell can be written \[\mathbf{U}_{i,j}(x,t)=\sum_{k=1}^{p}U_{i,j,k}(t)\phi_{k}(\hat{x}),\quad 1\leq i \leq s,1\leq j\leq n_{x},\] involving the change of variable to the reference interval: \(\hat{x}=-1+2\)\(((x-x_{\min})\mod\Delta x)/\Delta x\in[-1,1]\). Assuming that \(\mathbf{F}(\mathbf{U})\) are also polynomials of degree at most \(p-1\) on each cell, it has a similar decomposition with coefficients \((F_{i,j,k})\). Then, integrating (6) against each of the \(\phi_{k}\) leads to the following semi-discrete weak formulation: \[\frac{dU}{dt}M+(F^{\star}-FS)=0, \tag{7}\] where \(M=\left(\int_{-1}^{1}\phi_{k}\phi_{\ell}\right)_{k,\ell}\), \(S=\left(\int_{-1}^{1}\phi_{k}\partial_{x}\phi_{\ell}\right)_{k,\ell}\), and \(F^{\star}\) involves the estimated values at the interfaces of the cells, using a local Lax-Friedrichs flux. Here, the product \(\frac{dU}{dt}M\) is to be understood as \[\left(\frac{dU}{dt}M\right)_{i,j,k}=\sum_{\ell}\frac{dU_{i,j,\ell}}{dt}M_{\ell,k}.\] Finally, a Runge-Kutta method is used for the time integration of (7). We refer to [13] for more details. An important benefit of discontinuous Galerkin schemes is that they can be made to converge in \(O(\Delta x^{p})\) for any arbitrary order \(p\), by using polynomials of high enough degree (\(p-1\) in one dimension). However, when the solution exhibits strong gradients or shocks, high-order DG schemes produce oscillations as those shown in Figure 1, which can ruin the accuracy of the scheme and produce fatal instabilities (e.g negative pressure in the Euler equations). For this reason, a method that is sometimes used consists in adding an artificial viscosity term to the equation to solve, in order to smooth out the solution: \[\partial_{t}\mathbf{U}+\partial_{x}\mathbf{F}(\mathbf{U})=\partial_{x}(\mu \partial_{x}\mathbf{U}). \tag{8}\] The artificial viscosity above depends on a coefficient \(\mu=\mu(x,t)\in\mathbb{R}\) that can locally increase or decrease the amount of smoothing, and that is expected to vanish as the length \(\Delta x\) of the cells tends to zero to recover the original equation asymptotically. In practice, since the places where the viscosity is needed depend on the solution, the viscosity coefficient is taken as a function of \(\mathbf{U}\): \[\mu=\pi(\mathbf{U}).\] Denoting \(\mathbf{G}=\mu\partial_{x}\mathbf{U}=\pi(\mathbf{U})\partial_{x}\mathbf{U}\) and \((G_{i,j,k})\) its coefficients in the discontinuous polynomial basis, the discontinuous Galerkin scheme now reads: \[GM=\pi(U)\left(U^{\star}-US\right),\] \[\frac{dU}{dt}M+\left((F^{\star}-FS)-(G^{\star}-GS)\right)=0.\] where \(U^{\star}\) and \(G^{\star}\) involve the estimated values at the cells interfaces using a centered numerical flux. After time discretization, still with a Runge-Kutta method, we have thus completely defined the numerical scheme \(U^{n+1}=S_{\pi}(U^{n})\). Function \(\pi\) is the one that we intend to design with the use of the method described in the previous section. As discussed in the introduction, some models for \(\pi\) can already be found in the literature, and [12] compare some of them. In the result section, we compare our own viscosity to two of these models, referred to as the derivative-based (DB) and highest modal decay (MDH) models respectively, briefly described in appendix A. 
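To fix ideas, here is a minimal sketch of the semi-discrete operator just described for a scalar law. It uses our own conventions (a nodal Lobatto basis so that boundary values are the first and last nodes, periodic boundaries, and precomputed reference-element matrices `M_inv` and `S`); it is an illustration, not the authors' code.

```python
import numpy as np

# Semi-discrete form (7) with a local Lax-Friedrichs interface flux:
# dU/dt M + (F* - F S) = 0, with a Jacobian factor 2/dx from the mapping to [-1, 1].
def dg_rhs(U, flux, max_speed, M_inv, S, dx):
    """U: (n_cells, p) nodal values; returns dU/dt with the same shape."""
    F = flux(U)
    uL, uR = U[:, -1], np.roll(U[:, 0], -1)               # states on each interface
    fstar = 0.5 * (flux(uL) + flux(uR)) - 0.5 * max_speed * (uR - uL)
    Fsurf = np.zeros_like(U)
    Fsurf[:, -1] += fstar                                  # right interface of each cell
    Fsurf[:, 0] -= np.roll(fstar, 1)                       # left interface of each cell
    return -(2.0 / dx) * (Fsurf - F @ S) @ M_inv

# Degenerate check with p = 1 (one constant basis function per cell): the DG
# scheme reduces to a first-order finite-volume scheme for linear advection.
M_inv, S = np.array([[0.5]]), np.array([[0.0]])
U = (np.linspace(0, 1, 50) > 0.5).astype(float).reshape(-1, 1)
dUdt = dg_rhs(U, flux=lambda u: u, max_speed=1.0, M_inv=M_inv, S=S, dx=0.02)
```

Adding the viscous term of Eq. (8) follows the same pattern, with a second operator computing \(G=\mu\,\partial_{x}\mathbf{U}\) and centered interface values \(G^{\star}\).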
### Definition of the cost function

To design the cost function \(\mathcal{L}\) used to determine the control \(\pi\), we compare the associated numerical solution \(U^{1},\cdots,U^{N}\) (or a sub-trajectory \(U^{n},\cdots,U^{n+m}\)) to a reference solution \(U_{\text{ref}}\) by summing a local-in-time cost function \(C\) over the iterations: \[\mathcal{L}(U^{1},\ldots,U^{N})=\sum_{n=1}^{N}C(U^{n},U_{\text{ref}}^{n}).\] For the reference solution, we use the numerical solution of a second-order MUSCL scheme on a fine grid, which ensures that the reference solution is both accurate and oscillation-free. The local-in-time cost function \(C\), also concerned with both the accuracy of the solution and the presence of oscillations, is taken as a combination of three terms: \[C(U^{n},U_{\text{ref}}^{n})=\omega_{\text{osc}}\,C_{\text{osc}}(U^{n},U_{ \text{ref}}^{n})+\omega_{\text{acc}}\,C_{\text{acc}}(U^{n},U_{\text{ref}}^{n} )+\omega_{\text{visc}}\,C_{\text{vis}}(U^{n}).\] For simplicity, we give below the expression of each term in the scalar case. In the case of a system, we simply take the average cost over the variables.

The aim of the first term is to detect the numerical oscillations. After some testing, we have obtained interesting results with the following \(W^{2,1}\) semi-norm: \[C_{\text{osc}}(U^{n},U^{n}_{\text{ref}})=\Delta x_{\text{ref}}\sum_{i}\left\|D_ {xx}(\Pi_{\text{ref}}(U^{n}))_{i}-D_{xx}(U^{n}_{\text{ref}})_{i}\right\|_{1},\] where, for any approximate quantity \(\mathbf{V}\), \(\mathbf{V}_{i}\) refers to its value in cell \(i\), \(\Pi_{\text{ref}}(U^{n})\) is the projection of the piecewise polynomial solution of the DG scheme on the fine mesh of the reference FV scheme, and \(D_{xx}(U)_{i}=\frac{1}{\Delta x_{\text{ref}}^{2}}(U_{i-1}-2U_{i}+U_{i+1})\) is a finite-difference second derivative. One can obviously use other measures of oscillations, or costs penalizing positivity losses or violations of the local maximum principle.

The second term measures the accuracy of the scheme and is given by the discrete \(L^{1}\) norm of the difference between \(U^{n}\) and \(U^{n}_{\text{ref}}\): \[C_{\text{acc}}(U^{n},U^{n}_{\text{ref}})=\Delta x_{\text{ref}}\sum_{i}\left\| \Pi_{\text{ref}}(U^{n})_{i}-(U^{n}_{\text{ref}})_{i}\right\|_{1}.\] We compare the two solutions on the fine grid in order to highlight the oscillations.

Finally, since the artificial viscosity is a non-physical process, it is natural to look for the smallest viscosity that still suppresses the oscillations. To do so, we use as the third term an \(L^{2}\) penalisation: \[C_{\text{vis}}(U^{n})=\left\|\pi_{\theta}(U^{n})\right\|_{2}^{2},\] with the norm computed directly from the piecewise polynomial viscosity. This cost is standard in optimal control problems. Finally, a good starting point for the weights \(\omega_{\text{osc}}\), \(\omega_{\text{acc}}\) and \(\omega_{\text{visc}}\) could be such that all three terms contribute about equally to the overall error, but further empirical tweaking is necessary to get to the best compromise between diffusion and oscillations, as illustrated in the result section.

### Neural network viscosity function

To apply Algorithm 1, it remains to define the neural network used for the viscosity function \(\pi_{\theta}(U)\) and how it is used in the scheme \(S_{\pi}\). Neural network architecture. In this work we use a residual neural network (ResNet), as introduced in [16], with adequate padding and no pooling so that the size of the output is the same as that of the input.
This is a standard architecture for deep convolutional neural networks. The hyper-parameters of this architecture are its _depth_, i.e. the number of blocks, its _width_, i.e. the number of filters per convolution, and the _kernel size_ of these convolutions. We got good results with a very small version of it, depicted in Figure 5: one block, width 16 and kernel size 3, for a total of about 2000 trainable parameters. We use the rectified linear unit (ReLU) activation function, except for the last layer, which uses the softplus activation function. Also, the last layer is initialized with kernel zero and constant bias \(-3\), so that the initial output of the neural network is a constant vector with value softplus\((-3)\simeq 0.02\). The purpose of this initialization is to start the training with a reasonable viscosity that makes the numerical scheme stable.

Figure 1: Example of oscillations with discontinuous Galerkin schemes. Linear advection with periodic boundary conditions, solutions after one period. All solutions were obtained using a DG scheme of order 4.

Pre-processing and post-processing. The raw input for the neural network is the approximated solution \(U^{n}\) at a given time, which comes as a tensor of values at each quadrature point of each cell. The cells are of equal length but the quadrature points are not uniformly distributed across the cells, which results in an overall non-uniform discretization of the solution. Since convolutional neural networks, such as the one we use, are not adapted to non-uniform discretizations, the input needs to be encoded in some way before being fed to the neural network. We opted for a concatenation of the value of the solution at each quadrature point with the relative position of that quadrature point in the cell, in the form of a one-hot encoding: the first quadrature point of the cell is mapped to the vector \((1,0,...,0)\in\mathbb{R}^{p}\), the second to \((0,1,0,...,0)\in\mathbb{R}^{p}\), and so on, \(p\) being the number of quadrature points per cell. Figure 4 gives an example of this encoding on a single variable. Notably, the input of the neural network does not include information about the resolution of the solution, and therefore the artificial viscosity produced by the neural network is only adapted to the resolution it has been trained with. In order to use the neural network with different resolutions, we multiply the output by a scaling factor, in the same way as it is done in [1]. This scaling factor \(s\) is constant across each cell and involves the size of the cell \(\Delta x\) as well as the jumps of the solution at its interfaces \([\![U]\!]_{L}\) and \([\![U]\!]_{R}\): \[s=\min\{\Delta x,\max\{|[\![U]\!]_{L}|,|[\![U]\!]_{R}|\}\}\] Also, this scaling helps the artificial viscosity get closer to zero where the solution is smooth, which prevents unnecessary diffusion. Figure 4 depicts the whole process for the computation of the viscosity on a fictitious cell.

Integration in the numerical scheme. Since the evaluation of the neural network is relatively expensive compared to the rest of the numerical scheme, we choose to compute the artificial viscosity only once, at the beginning of the timestep, as illustrated in Figure 3. Thus, we do not update its value at each stage of the Runge-Kutta method. We found that this simplification allowed faster computations with no perceptible loss of accuracy.
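The following PyTorch sketch illustrates one possible realisation of this small network, together with the one-hot positional encoding and the jump-based scaling described above. The exact layer layout inside the residual block is an assumption of the sketch; only the sizes (one block, width 16, kernel 3) and the softplus output with bias \(-3\) follow the text.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ViscosityNet(nn.Module):
    """Small ResNet: one residual block, width 16, kernel size 3, softplus output."""
    def __init__(self, p, width=16, kernel=3):
        super().__init__()
        pad = kernel // 2                          # same padding: output size = input size
        self.lift = nn.Conv1d(1 + p, width, kernel, padding=pad)
        self.conv1 = nn.Conv1d(width, width, kernel, padding=pad)
        self.conv2 = nn.Conv1d(width, width, kernel, padding=pad)
        self.head = nn.Conv1d(width, 1, kernel, padding=pad)
        nn.init.zeros_(self.head.weight)           # initial output is the constant softplus(-3)
        nn.init.constant_(self.head.bias, -3.0)

    def forward(self, u):
        # u: (batch, n_cells * p) values of the solution at the quadrature points
        batch, n_pts = u.shape
        p = self.lift.in_channels - 1
        # one-hot encoding of the position of each quadrature point inside its cell
        onehot = torch.eye(p).repeat(n_pts // p, 1).T        # (p, n_pts)
        x = torch.cat([u.unsqueeze(1), onehot.expand(batch, -1, -1)], dim=1)
        h = F.relu(self.lift(x))
        h = h + F.relu(self.conv2(F.relu(self.conv1(h))))    # residual block
        return F.softplus(self.head(h)).squeeze(1)           # raw (unscaled) viscosity

def scale_viscosity(mu_raw, jump_left, jump_right, dx):
    """Per-cell scaling s = min(dx, max(|[U]_L|, |[U]_R|)).
    mu_raw: (batch, n_cells, p); jump_left, jump_right: (batch, n_cells)."""
    s = torch.minimum(torch.full_like(jump_left, dx),
                      torch.maximum(jump_left.abs(), jump_right.abs()))
    return mu_raw * s.unsqueeze(-1)
```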
### Training data

In this work, we learn from initial conditions of a general form, expressed as partial Fourier series: \[U^{0}:x\in[0,1]\mapsto\sum_{n=1}^{20}\tfrac{a_{n}}{n}\cos(2\pi nx)+\tfrac{b_{n }}{n}\sin(2\pi nx), \tag{9}\] with coefficients \(a_{n}\) and \(b_{n}\) following a uniform distribution on \([-1,1]^{s}\). Of course, it is possible to use other types of datasets without difficulties. For instance, for the Euler equations (see Section 4.3), we will use this kind of initialization on the primitive variables instead of the conservative ones. Positive initial conditions can be necessary for some variables: in this case, we subtract from the above functions their minima and add a small positive value \(\varepsilon=0.1\). As the neural network is non-local, the learned viscosity may depend on the solutions generated during the training. In particular, if the network is trained with one particular equation, it may not perform as well on another equation. However, it would be possible to train the network directly on several equations, even though it has not been done in this work.

## 4 Numerical Results

In the following three sections we give numerical results for three different equations: the advection equation, Burgers' equation and the Euler system, respectively. We give some details regarding the training and the influence of some parameters in the advection case, and then simply give the results for Burgers and Euler.

Figure 4: Computing of the viscosity with a neural network. The input variable is encoded and given to the neural network, whose output is scaled to produce the artificial viscosity.

Figure 5: The architecture we use to compute the artificial viscosity: a small ResNet with only one block. Conv(\(w\), \(k\)) represents a 1D convolution with \(w\) filters of size \(k\).

Figure 3: Computations graph for one iteration of the Runge-Kutta 4 scheme. The artificial viscosity is computed only once and used at each step.

Figure 2: View of the computational graph for \(V_{\pi_{\theta}}\).

In all the numerical results presented below, we use the following parameters unless stated otherwise:
* DG scheme: order \(p=4\), Gauss-Lobatto quadrature points, \(32\) cells on \([0,1]\), timestep \(\Delta t=10^{-5}\), RK4 discretization in time,
* Reference FV scheme: order \(2\) (MUSCL), \(2048\) cells on \([0,1]\), timestep \(\Delta t=10^{-5}\), RK2 discretization in time,
* Entire trajectories of \(N=4096\) iterations, sub-trajectories of \(m=512\) iterations,
* \(K=8\) initial conditions per episode,
* \(20\) batches of size \(16\) per episode, arbitrarily high number of episodes.

Values for the weights \(\omega_{\text{osc}}\), \(\omega_{\text{acc}}\) and \(\omega_{\text{visc}}\) will be specified for each test case.

### Advection equation

We will start by validating the approach on the advection equation given by \[\left\{\begin{array}{l}\partial_{t}\rho+a\,\partial_{x}\rho=0,\\ \rho(t=0,x)=\rho_{0}(x),\end{array}\right.\] where \(\rho:\mathbb{R}_{+}\times[0,1]\to\mathbb{R}\) is the advected density and \(a\in\mathbb{R}\) is a constant velocity that we take equal to \(1\).
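A minimal sketch of the sampling of the random initial conditions of Section 3.4 (equation (9)) could look as follows; the function name and the handling of the positivity shift are illustrative assumptions.

```python
import numpy as np

def random_initial_condition(x, n_modes=20, positive=False, eps=0.1, rng=None):
    """Sample a random initial condition as in (9):
    a truncated Fourier series with coefficients a_n, b_n ~ U([-1, 1])."""
    rng = np.random.default_rng() if rng is None else rng
    a = rng.uniform(-1.0, 1.0, n_modes)
    b = rng.uniform(-1.0, 1.0, n_modes)
    u = np.zeros_like(x)
    for n in range(1, n_modes + 1):
        u += a[n - 1] / n * np.cos(2 * np.pi * n * x) \
           + b[n - 1] / n * np.sin(2 * np.pi * n * x)
    if positive:                      # e.g. density or pressure for the Euler equations
        u = u - u.min() + eps
    return u

# Example: one training sample on a grid of 128 points
rho0 = random_initial_condition(np.linspace(0.0, 1.0, 128), positive=True)
```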
We consider periodic boundary conditions. For the initial condition \(\rho_{0}\), we test the viscosity on a common composite function with different kinds of discontinuities: \[\rho_{0}(x)=1+\left\{\begin{array}{ll}e^{-((x-0.125)/0.03)^{2}}&\text{if }x<0.25,\\ 1,&\text{if }5/16\leq x<7/16,\\ 1-\left|(x-\frac{5}{8})\times 16\right|&\text{if }9/16\leq x<11/16,\\ \sqrt{1-(16x-14)^{2}}&\text{if }13/16\leq x<15/16,\\ 0&\text{otherwise.}\end{array}\right.\] In order to observe the long-time effects of the different viscosities, we consider the solution after two periods, at \(t=2\). We take advantage of the simplicity of the problem to discuss the effect of the hyper-parameters of the optimization problem, i.e. the weights \(\omega_{\text{osc}}\), \(\omega_{\text{acc}}\) and \(\omega_{\text{visc}}\) involved in the cost function and the size \(m\) of the sub-trajectories. For simplicity we start by training an artificial viscosity using only two of the three terms in the loss. Since the term in \(C_{\text{osc}}\) on the one hand and the terms in \(C_{\text{acc}}\) and \(C_{\text{visc}}\) on the other pull in opposite directions, we consider using only \(C_{\text{osc}}\) and \(C_{\text{visc}}\) (\(\omega_{\text{acc}}=0\)), or only \(C_{\text{osc}}\) and \(C_{\text{acc}}\) (\(\omega_{\text{visc}}=0\)).

Let us consider the first case. Figure 6 shows a typical training in these conditions: at first, the neural network greatly decreases the amount of viscosity, before adding some back in order to find a better compromise between the two parts of the loss. In order to visualize the resulting viscosity and solution, the neural network is applied to a specific test case, but note that the training was done with random initial conditions as described in section 3.4.

\begin{table} \begin{tabular}{||c|c c c|c c c|c c||} \hline Model & \(\omega_{\rm osc}\) & \(\omega_{\rm acc}\) & \(\omega_{\rm visc}\) & \(C_{\rm osc}\) & \(C_{\rm acc}\) & \(C_{\rm visc}\) & \(L^{2}\) & \(L^{\infty}\) \\ \hline \hline \multirow{3}{*}{DG NN} & \(10^{-5}\) & 0 & \(2\cdot 10^{3}\) & 9.32e+03 & 1.10e-01 & 1.73e-10 & 5.91e-02 & 5.26e-01 \\ & \(10^{-5}\) & 0 & \(4\cdot 10^{3}\) & 9.36e+03 & 9.32e-02 & 1.03e-10 & 5.95e-02 & 5.34e-01 \\ & \(10^{-5}\) & 0 & \(6\cdot 10^{3}\) & 9.41e+03 & 8.35e-02 & 7.44e-11 & 5.97e-02 & 5.37e-01 \\ & \(10^{-5}\) & 0 & \(8\cdot 10^{3}\) & 9.45e+03 & 7.57e-02 & 5.85e-11 & 5.98e-02 & 5.43e-01 \\ \hline \end{tabular} \end{table} Table 1: (Advection - variation \(\omega_{\rm visc}\)) Errors for each model presented in Figure 7.

\begin{table} \begin{tabular}{||c|c c c|c c c|c c||} \hline Model & \(\omega_{\rm osc}\) & \(\omega_{\rm acc}\) & \(\omega_{\rm visc}\) & \(C_{\rm osc}\) & \(C_{\rm acc}\) & \(C_{\rm visc}\) & \(L^{2}\) & \(L^{\infty}\) \\ \hline \hline \multirow{3}{*}{DG NN} & \(10^{-5}\) & 0.2 & 0 & 9.26e+03 & 1.38e-01 & 3.47e-10 & 5.86e-02 & 5.18e-01 \\ & \(10^{-5}\) & 0.8 & 0 & 9.27e+03 & 1.28e-01 & 2.87e-10 & 5.88e-02 & 5.26e-01 \\ & \(10^{-5}\) & 1.6 & 0 & 9.25e+03 & 1.28e-01 & 2.94e-10 & 5.88e-02 & 5.26e-01 \\ & \(10^{-5}\) & 3.2 & 0 & 9.27e+03 & 1.19e-01 & 2.55e-10 & 5.89e-02 & 5.22e-01 \\ \hline \end{tabular} \end{table} Table 2: (Advection - variation \(\omega_{\rm acc}\)) Errors for each model presented in Figure 8.

Figure 6: (Advection) Top: Evolution of the validation loss during training. The contributions of the different terms are shown in color. Middle and bottom: Solution (middle) and viscosity (bottom) on a test case using the neural network viscosity at different points in its training.
The test case uses periodic boundary conditions and consists of two periods (i.e. final time \(t=2\)).

The length \(m\) of the sub-trajectories must be large enough that the effect of the numerical diffusion is clearly visible in the cost functions. Otherwise, the method learns a viscosity that is too large, since its negative effect will not be visible enough in the cost functions. Also, a smaller \(m\) does not necessarily decrease the training time, even though the computation of the gradient is cheaper, since the number of gradient descent steps may be larger.

Figure 8: (Advection - variation \(\omega_{\rm acc}\)) Solution (top) and viscosity (bottom) on a test case with neural network viscosities obtained with different weights \(\omega_{\rm acc}\), the other two weights being set to \(\omega_{\rm osc}=10^{-5}\) and \(\omega_{\rm visc}=0\). The test case uses periodic boundary conditions and consists of two periods (i.e. final time \(t=2\)).

Figure 7: (Advection - variation \(\omega_{\rm visc}\)) Solution (top) and viscosity (bottom) on a test case with neural network viscosities obtained with different weights \(\omega_{\rm visc}\), the other two weights being set to \(\omega_{\rm osc}=10^{-5}\) and \(\omega_{\rm acc}=0\). The test case uses periodic boundary conditions and consists of two periods (i.e. final time \(t=2\)).

Finally, we compare our viscosity to two reference viscosities: the "derivative-based" (DB) viscosity, and the "highest modal decay" (MDH) viscosity, described in appendix A. Figure 10 shows the result with 32 cells for the discontinuous Galerkin scheme, which is the same resolution as the one used for the training of the neural network. Interestingly enough, our viscosity generalises pretty well to other resolutions, thanks to the scaling described in section 3.3. As an illustration, Figures 11, 12 and 13 compare the same three viscosities but used with 64, 128 and 256 cells respectively. Note that in these three examples, the model "DG NN" uses the same viscosity as in Figure 10, trained on 32 cells only. The results in the different figures and in Table 3 show that the neural network viscosity gives the best compromise between accuracy and oscillations, or between \(L^{2}\) and \(L^{\infty}\) errors. Indeed, the MDH method has a lower \(L^{2}\) error but oscillates more and has a larger \(L^{\infty}\) error. The DB approach is clearly more diffusive for this long-time problem.

### Burgers equation

We now consider the Burgers equation given by \[\left\{\begin{array}{l}\partial_{t}\rho+\partial_{x}\left(\frac{\rho^{2}}{2 }\right)=0,\\ \rho(t=0,x)=\rho_{0}(x),\end{array}\right.\] with \(\rho:\mathbb{R}_{+}\times[0,1]\rightarrow\mathbb{R}\) and complemented with periodic boundary conditions. The network, the hyper-parameters and the training process are exactly the same as for the advection equation, with coefficients \(\omega_{\rm acc}=0.5\), \(\omega_{\rm osc}=10^{-5}\) and \(\omega_{\rm visc}=5\). In order to avoid any issues with non-entropic solutions to which the DG scheme could converge, we only consider positive functions in the dataset. The remarks made on the hyper-parameters and the training in the previous section on advection remain valid here. We therefore directly give results comparing a learned viscosity with classical viscosities.
To do this, we consider an initial condition that has not been used in the training phase of the neural network viscosity: \[\rho_{0}(x)=1+\sin(2\pi x),\quad x\in[0,1],\] with final time \(t=1\).

Figure 9: (Advection - variation \(m\)) Solution (top) and viscosity (bottom) on a test case with neural network viscosities obtained using sub-trajectories of different lengths \(m\). The test case uses periodic boundary conditions and consists of two periods (i.e. final time \(t=2\)).

In Figures 14 and 15, we observe, as before, that the classical DG method without viscosity term generates large oscillations close to the discontinuity. Contrary to the transport case, the MDH method is here the most diffusive method. Note that the MDH viscosity acts at the beginning of the simulation and then vanishes as the approximate solution becomes smooth. The neural network and the DB methods give very similar results with less numerical diffusion and small oscillations.

Figure 11: (Advection - comparison with DB and MDH) Solution (top) and viscosity (bottom) on a test case with 64 cells instead of the usual 32. The different viscosities used are the derivative-based (DB), the highest modal decay (MDH) and our neural network based viscosity (NN). The test case uses periodic boundary conditions and consists of two periods (i.e. final time \(t=2\)).

Figure 10: (Advection - comparison with DB and MDH) Solution (top) and viscosity (bottom) on a test case with different viscosities: no viscosity (DG), derivative-based viscosity (DG DB), highest modal decay viscosity (DG MDH) and our neural network based viscosity (DG NN). The test case uses periodic boundary conditions and consists of two periods (i.e. final time \(t=2\)).
Note also that the neural network viscosity is slightly less oscillating at the bottom of the discontinuity. In conclusion, this test case shows that the neural network viscosity still provides good results for such a non-linear equation, which generates discontinuities.

### Euler system

Finally, we present results for the Euler system: \[\left\{\begin{array}{l}\partial_{t}\rho+\partial_{x}\left(\rho u\right)=0,\\ \partial_{t}(\rho u)+\partial_{x}\left(\rho u^{2}+p\right)=0,\\ \partial_{t}E+\partial_{x}\left(Eu+pu\right)=0,\end{array}\right.\] where \(\rho:\mathbb{R}_{+}\times\mathbb{R}\rightarrow\mathbb{R}\) denotes the density, \(u:\mathbb{R}_{+}\times\mathbb{R}\rightarrow\mathbb{R}\) the velocity, \(p:\mathbb{R}_{+}\times\mathbb{R}\rightarrow\mathbb{R}\) the pressure and \(E:\mathbb{R}_{+}\times\mathbb{R}\rightarrow\mathbb{R}\) the energy. The system is completed with a perfect gas law, resulting in the following relation: \[E=\frac{p}{\gamma-1}+\frac{\rho u^{2}}{2},\] where \(\gamma\) is the adiabatic constant, taken equal to \(1.4\) here. Once again we use the same parameters as before, the only difference being that \(\pi_{\theta}\) now has three inputs, one for each conservative variable. The output is still a single viscosity coefficient \(\mu(x)\), since the same viscosity is applied to each equation. The neural network is trained using the coefficients \(\omega_{\mathrm{acc}}=0\), \(\omega_{\mathrm{osc}}=10^{-5}\) and \(\omega_{\mathrm{visc}}=10^{3}\) in the loss. The training dataset is made using initial conditions with the three variables \(\rho\), \(u\), \(p\) chosen according to (9), with the correction to ensure the positivity of the density and the pressure. We compare the different viscosity approaches on two classical test cases: the Sod problem and the Shu-Osher problem.
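A sketch of how such Euler initial data can be assembled from the primitive variables, reusing the Fourier sampler `random_initial_condition` sketched in Section 3.4 (both names are illustrative assumptions of these sketches):

```python
import numpy as np

def euler_initial_state(x, gamma=1.4, rng=None):
    """Random Euler initial data: rho, u, p drawn as in (9) on the primitive
    variables, with rho and p shifted to be positive, then converted to the
    conservative variables (rho, rho*u, E)."""
    rho = random_initial_condition(x, positive=True, rng=rng)
    u = random_initial_condition(x, rng=rng)
    p = random_initial_condition(x, positive=True, rng=rng)
    E = p / (gamma - 1.0) + 0.5 * rho * u**2    # perfect gas law
    return np.stack([rho, rho * u, E])
```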
In these test cases, the DG scheme without viscosity is unstable and is therefore not presented. The Sod test case uses the initial condition \[(\rho_{0},u_{0},p_{0})(x)=\left\{\begin{array}{ll}(1,0,1),&\mbox{if }x<0.5\\ (0.125,0,0.1),&\mbox{otherwise}\end{array}\right.\] on the interval \([0,1]\) with final time \(t=0.2\). We also consider Dirichlet boundary conditions. In Figure 16, we compare the different schemes associated with the different viscosity models on a mesh with \(100\) cells. As for the Burgers equation, the MDH viscosity provides the worst results. This problem can be explained by the fact that the hyper-parameters of the MDH method, taken from [13], may not be optimized for this specific test case. The results of the DB model and the neural network model are close. Our approach seems better on the contact wave and a little bit more oscillating on the shock.

\begin{table} \begin{tabular}{||l|r|r r r|r r||} \hline Model & Cells & \(C_{\mathrm{osc}}\) & \(C_{\mathrm{acc}}\) & \(C_{\mathrm{visc}}\) & \(L^{2}\) & \(L^{\infty}\) \\ \hline \hline \multirow{4}{*}{DG} & 32 & 9.97e+03 & 5.24e-02 & 0.00e+00 & 9.03e-03 & 6.05e-01 \\ & 64 & 9.89e+03 & 2.62e-02 & 0.00e+00 & 4.71e-03 & 5.95e-01 \\ & 128 & 1.01e+04 & 1.36e-02 & 0.00e+00 & 2.52e-03 & 5.81e-01 \\ & 256 & 1.05e+04 & 7.23e-03 & 0.00e+00 & 1.37e-03 & 5.60e-01 \\ \hline \multirow{4}{*}{DG DB} & 32 & 9.18e+03 & 2.49e-01 & 3.30e-07 & 7.82e-02 & 6.04e-01 \\ & 64 & 9.18e+03 & 1.56e-01 & 6.11e-08 & 3.94e-02 & 5.10e-01 \\ & 128 & 9.21e+03 & 8.57e-02 & 7.16e-09 & 1.86e-02 & 5.06e-01 \\ & 256 & 9.22e+03 & 4.36e-02 & 6.94e-10 & 9.13e-03 & 5.02e-01 \\ \hline \multirow{4}{*}{DG MDH} & 32 & 9.57e+03 & 5.94e-02 & 0.00e+00 & 1.22e-02 & 5.57e-01 \\ & 64 & 9.56e+03 & 2.49e-02 & 0.00e+00 & 5.37e-03 & 5.57e-01 \\ & 128 & 9.69e+03 & 1.26e-02 & 0.00e+00 & 2.75e-03 & 5.50e-01 \\ & 256 & 1.00e+04 & 6.60e-03 & 0.00e+00 & 1.45e-03 & 5.37e-01 \\ \hline \multirow{4}{*}{DG NN} & 32 & 9.41e+03 & 8.35e-02 & 4.76e-09 & 1.78e-02 & 5.37e-01 \\ & 64 & 9.30e+03 & 4.14e-02 & 3.96e-10 & 8.48e-03 & 5.22e-01 \\ \cline{1-1} & 128 & 9.27e+03 & 2.20e-02 & 2.86e-11 & 4.98e-03 & 5.08e-01 \\ \cline{1-1} & 256 & 9.34e+03 & 1.28e-02 & 3.00e-12 & 3.11e-03 & 4.94e-01 \\ \hline \end{tabular} \end{table} Table 3: (Advection - comparison with DB and MDH) Errors for each model presented in Figure 10.

Figure 14: (Burgers - comparison with DB and MDH) Solution (top) and viscosity (bottom) of different models for Burgers equation with 32 cells

Figure 15: (Burgers - comparison with DB and MDH) Solution (top) and viscosity (bottom) of different models for Burgers equation with 64 cells

On a grid with 200 cells (Figure 17), the neural network viscosity seems slightly more accurate for all the different components of the solution. This is confirmed by the errors presented in Table 4, for which both the \(L^{2}\) and \(L^{\infty}\) errors are smaller on the two meshes. The second test case is the Shu-Osher test case, whose initial condition is given by: \[(\rho_{0},u_{0},p_{0})(x)=\left\{\begin{array}{ll}(3.857143,2.629369,10.333333)& \mbox{if }x<-4\\ (1+0.2\sin(5x),0,1)&\mbox{otherwise}\end{array}\right.,\] on the interval \([-5,5]\) with final time \(t=1.8\). The solution is composed of several smooth oscillations and a discontinuity. As before, we compare the different approaches on a given mesh with 200 cells. The results in Figure 19 and Table 5 show that our model and the DB model give very similar results, with a slight advantage for the DB model.
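For reference, the two initial conditions above can be written as small helper functions in the primitive variables; the function names are illustrative.

```python
import numpy as np

def sod(x):
    """Sod shock tube on [0, 1], final time t = 0.2 (primitive variables)."""
    left = x < 0.5
    rho = np.where(left, 1.0, 0.125)
    u = np.zeros_like(x)
    p = np.where(left, 1.0, 0.1)
    return rho, u, p

def shu_osher(x):
    """Shu-Osher problem on [-5, 5], final time t = 1.8 (primitive variables)."""
    left = x < -4.0
    rho = np.where(left, 3.857143, 1.0 + 0.2 * np.sin(5.0 * x))
    u = np.where(left, 2.629369, 0.0)
    p = np.where(left, 10.333333, 1.0)
    return rho, u, p
```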
## 5 Conclusion

In this paper, we propose an optimal control approach to optimize a parametric numerical scheme based on its effect after several iterations. The method is a simple gradient method to optimize a given cost function, where the gradient is calculated across a large number of iterations by automatic differentiation. We apply it to the construction of an artificial viscosity for DG methods for one-dimensional hyperbolic equations. The numerical results on different simulations show that the obtained neural network viscosities give equivalent or better results compared with classical artificial viscosities (derivative-based or highest modal decay viscosities).

\begin{table} \begin{tabular}{||l|c|c c c|c c||} \hline Model & Cells & \(C_{\rm osc}\) & \(C_{\rm acc}\) & \(C_{\rm visc}\) & \(L^{2}\) & \(L^{\infty}\) \\ \hline \hline \multirow{2}{*}{DG DB} & 100 & 3.13e+02 & 2.47e-03 & 2.04e-08 & 3.36e-05 & 5.08e-02 \\ & 200 & 3.04e+02 & 1.12e-03 & 2.48e-09 & 1.03e-05 & 3.93e-02 \\ \hline \multirow{2}{*}{DG MDH} & 100 & 2.86e+02 & 2.88e-03 & 0.00e+00 & 5.27e-05 & 5.43e-02 \\ & 200 & 3.00e+02 & 1.24e-03 & 0.00e+00 & 1.27e-05 & 3.90e-02 \\ \hline \multirow{2}{*}{DG NN} & 100 & 3.30e+02 & 1.36e-03 & 4.97e-09 & 1.48e-05 & 4.21e-02 \\ & 200 & 2.94e+02 & 5.86e-04 & 7.09e-10 & 2.92e-06 & 3.11e-02 \\ \hline \end{tabular} \end{table} Table 4: Errors for each model on the Sod test case.

Figure 16: (Euler - Sod) Sod test case, 100 cells

Figure 17: (Euler - Sod) Sod test case, 200 cells

Figure 18: (Euler) Shu-Osher test case, 100 cells

There are several possible ways to extend this work. First, non-physical oscillations have so far been detected with the semi-norm \(W^{2,1}\) of the error with respect to the reference solution. Another possibility would be to design a data-driven detector of the non-physical oscillations, as in [2]. It will also naturally be important to extend this work to 2D/3D problems. Note however that a major difficulty comes from the number of iterations taken into account in the computation of the gradient. In our one-dimensional problem, we succeeded in considering up to 1000 time steps. However, this was possible because of the coarse meshes and small networks. For two-dimensional problems, the sizes of the mesh and the network may be larger and the memory resources may be saturated. To overcome this difficulty, the method could be coupled with a reinforcement learning approach [25] or a neural ODE method [10], for which the gradients are computed by duality. Finally, the same methodology could also be applied to other problems, like estimating optimal slope limiters or WENO stencils.

## Appendix A Reference artificial viscosity models

In the result section, we compare our viscosity to two reference models, which we briefly describe here in the context of a discontinuous Galerkin scheme of order \(p\) in one dimension.
The first one is the simplest one, referred to as the derivative-based (DB) model in the comparative study [26], and reads \[\pi_{\text{DB}}(\mathbf{U})=\min(\mu_{\beta},\mu_{\text{max}}),\quad\mu_{\beta }=c_{\beta}(\tfrac{\Delta x}{p-1})^{2}|\partial_{x}u|,\quad\mu_{\text{max}}=c_ {\text{max}}\tfrac{\Delta x}{p-1}\max_{\text{cell}}|s|,\] where \(u\) is the unique variable in the scalar case and the velocity for the Euler equation, \(s\) is the local wave speed, and \(c_{\beta}\) and \(c_{\max}\) are empirical parameters, set to \(1\) and \(0.5\) respectively.

Figure 19: (Euler) Shu-Osher test case, 200 cells

\begin{table} \begin{tabular}{||l|c|c c c|c c||} \hline Model & Cells & \(C_{\text{osc}}\) & \(C_{\text{acc}}\) & \(C_{\text{visc}}\) & \(L^{2}\) & \(L^{\infty}\) \\ \hline \hline \multirow{2}{*}{DG DB} & 100 & 2.47e+03 & 4.09e-01 & 1.73e-04 & 1.01e-01 & 1.18e+00 \\ & 200 & 2.40e+03 & 1.61e-01 & 2.16e-05 & 2.92e-02 & 1.08e+00 \\ \hline \multirow{2}{*}{DG MDH} & 100 & 2.37e+03 & 5.76e-01 & 1.98e-05 & 1.79e-01 & 1.31e+00 \\ & 200 & 2.25e+03 & 2.53e-01 & 6.00e-13 & 5.21e-02 & 1.25e+00 \\ \hline \multirow{2}{*}{DG NN} & 100 & 2.38e+03 & 3.49e-01 & 5.46e-06 & 8.24e-02 & 1.22e+00 \\ & 200 & 2.42e+03 & 1.71e-01 & 5.29e-06 & 2.95e-02 & 1.23e+00 \\ \hline \end{tabular} \end{table} Table 5: (Euler) Errors for each model on the Shu-Osher test case.

The second one is referred to as the highest modal decay (MDH) model in [13] and was first proposed in [10]. In this model, the viscosity is computed from the variable \(\rho\), which refers to the unique variable in the scalar case and to the density for the Euler equation. The MDH model relies on a modal expansion of \(\rho\) in each cell, \[\rho(x,t)=\sum_{k=0}^{p-1}\hat{\rho}_{k}(t)\psi_{k}(x),\quad\psi_{k}\text{ Legendre polynomials on the cell considered},\] and more specifically on the ratio between the norm of the highest mode and the overall norm: \[r=\log_{10}\frac{\|\hat{\rho}_{p-1}\psi_{p-1}\|_{L^{2}}^{2}}{\|\rho\|_{L^{2}}^ {2}}.\] The viscosity is then taken to increase smoothly with \(r\) from \(0\) to \(\mu_{\max}\) as follows: \[\pi_{\text{MDH}}(\mathbf{U})=\mu_{\max}\left\{\begin{array}{ll}0&\text{if }r<r_{0}-c_{K}\\ \frac{1}{2}\left(1+\sin\frac{\pi(r-r_{0})}{2c_{K}}\right)&\text{if }r_{0}-c_{K}<r<r_{0}+c_{K} \\ 1&\text{otherwise}\end{array}\right.\] The threshold \(r_{0}\) depends on the order \(p\) as \[r_{0}=-\big{(}c_{A}+4\log_{10}(p-1)\big{)},\] and \(c_{A}\) and \(c_{K}\) are empirical parameters set to \(2.5\) and \(0.2\) respectively. These computations give a value for the viscosity coefficient on each cell, which is interpolated by a polynomial of degree \(2\) that takes this value in the middle of the cell and, at the interfaces, the average of the values of the two cells involved, resulting in a continuous function.
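For completeness, a minimal per-cell sketch of these two reference models is given below. The assembly of the cell-wise derivative, wave speed and Legendre coefficients from the DG solution is omitted, and the value of \(\mu_{\max}\) used in the MDH model is assumed to be the same as in the DB model; these choices are assumptions of the sketch.

```python
import numpy as np

def db_viscosity(du_dx, wave_speed, dx, p, c_beta=1.0, c_max=0.5):
    """Derivative-based (DB) model, per cell.
    du_dx: representative |du/dx| per cell; wave_speed: max local wave speed per cell."""
    mu_beta = c_beta * (dx / (p - 1)) ** 2 * np.abs(du_dx)
    mu_max = c_max * (dx / (p - 1)) * np.abs(wave_speed)
    return np.minimum(mu_beta, mu_max)

def mdh_viscosity(rho_modes, wave_speed, dx, p, c_A=2.5, c_K=0.2, c_max=0.5):
    """Highest modal decay (MDH) model, per cell.
    rho_modes: Legendre coefficients of rho in each cell, shape (n_cells, p)."""
    # L2 norms of the Legendre modes on [-1, 1]: ||psi_k||^2 = 2 / (2k + 1)
    weights = 2.0 / (2.0 * np.arange(p) + 1.0)
    energy = rho_modes ** 2 * weights
    r = np.log10(energy[:, -1] / energy.sum(axis=1) + 1e-300)
    r0 = -(c_A + 4.0 * np.log10(p - 1))
    mu_max = c_max * (dx / (p - 1)) * np.abs(wave_speed)   # assumption: same mu_max as DB
    ramp = 0.5 * (1.0 + np.sin(np.pi * (r - r0) / (2.0 * c_K)))
    ramp = np.where(r < r0 - c_K, 0.0, np.where(r > r0 + c_K, 1.0, ramp))
    return mu_max * ramp
```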
2305.00559
Automated reasoning support for Standpoint-OWL 2
We present a tool for modelling and reasoning with knowledge from various diverse (and possibly conflicting) viewpoints. The theoretical underpinnings are provided by enhancing base logics by standpoints according to a recently introduced formalism that we also recall. The tool works by translating the standpoint-enhanced version of the description logic SROIQ to its plain (i.e. classical) version. Existing reasoners can then be directly used to provide automated support for reasoning about diverse standpoints.
Florian Emmrich, Lucía Gómez Álvarez, Hannes Strass
2023-04-30T19:50:14Z
http://arxiv.org/abs/2305.00559v1
# Automated reasoning support for Standpoint-OWL 2

###### Abstract

We present a tool for modelling and reasoning with knowledge from various diverse (and possibly conflicting) viewpoints. The theoretical underpinnings are provided by enhancing base logics by _standpoints_ according to a recently introduced formalism that we also recall. The tool works by translating the standpoint-enhanced version of the description logic \(\mathcal{SROIQ}\) to its plain (i.e. classical) version. Existing reasoners can then be directly used to provide automated support for reasoning about diverse standpoints. Standpoint Logic, OWL 2 DL, Reasoning

## 1 Introduction

The Semantic Web has democratised the production of knowledge sources by providing a set of standards for the specification of vocabularies, rules, and data stores. The standard for authoring ontologies and knowledge bases is the _Web Ontology Language_ OWL 2 [1], a language based on description logic (DL) [2]. Beyond the publication of independently developed sources, a fundamental goal of the Semantic Web is to support the integration and combination of the knowledge embedded within them. However, the interoperability between ontologies is often hindered by semantic heterogeneity, differences in perspectives and other contextual factors. A recent proposal aiming to address these challenges is _Standpoint Logic_ (SL) [3], a framework for multi-perspective reasoning. SL is a multi-modal logic conceived to support the coexistence of multiple standpoints and the establishment of alignments between them. The language supports expressions of the form \(\square_{\mathsf{s}}[\phi]\) and \(\Diamond_{\mathsf{s}}[\phi]\), which express information relative to the _standpoint_ \(\mathsf{s}\) and read, respectively: "according to \(\mathsf{s}\), it is _unequivocal/conceivable_ that \(\phi\)". In the semantics, standpoints are represented by sets of _precisifications_,1 such that \(\square_{\mathsf{s}}[\phi]\) and \(\Diamond_{\mathsf{s}}[\phi]\) hold if \(\phi\) is true in all/some of the precisifications in \(\mathsf{s}\). For the sake of illustration, let us revisit a condensed version of the example provided by Gomez Alvarez et al. [4]. Footnote 1: Precisifications are analogous to the _worlds_ of modal-logic frameworks with possible-worlds semantics. **Example 1**.: _A range of conceptualisations for the notion of forest have been specified for different purposes, giving rise to diverging or even contradictory statements regarding forest distributions. Consider a knowledge integration scenario involving two sources adopting a land cover (\(\mathsf{LC}\)) and a land use (\(\mathsf{LU}\)) perspective on forestry. \(\mathsf{LC}\) characterises a forest as a "forest ecosystem" with a minimum area (F1), where a forest ecosystem is specified as an ecosystem with a certain ratio of tree canopy cover (F2). \(\mathsf{LU}\) defines a forest with regard to the purpose for which an area of land is put to use by humans, i.e. a forest is a maximally connected area with "forest use" (F3).2 Sources \(\mathsf{LC}\) and \(\mathsf{LU}\) agree that forests subsume broadleaf, needleleaf and tropical forests (F4), and they both adhere to the Basic Formal Ontology (\(\mathsf{BFO}\)) [5], an upper-level ontology that formalises general terms, stipulating for instance that land and ecosystem are disjoint categories (F5). Footnote 2: “Forest use” areas may qualify for logging concessions and be classified into, e.g.
agricultural or recreational use. _Using standard DL notation and providing "perspective annotations" by means of correspondingly labelled multi-modal logic box operators, the example can be formalised in a standpoint-enhanced description logic as follows:_
1. \(\square_{\mathsf{LC}}[\mathsf{Forest}\equiv\mathsf{ForestEcosystem}\sqcap \exists\mathsf{hasLand.Area}_{\geq 0.5\mathsf{ha}}]\)
2. \(\square_{\mathsf{LC}}[\mathsf{ForestEcosystem}\equiv\mathsf{Ecosystem}\sqcap \mathsf{TreeCanopy}_{\geq 20\%}]\)
3. \(\square_{\mathsf{LU}}[\mathsf{Forest}\equiv\mathsf{ForestlandUse}\sqcap \mathsf{MCON}]\wedge\square_{\mathsf{*}}[\mathsf{ForestlandUse}\sqsubseteq \mathsf{Land}]\)
4. \(\square_{\mathsf{LC}\cup\mathsf{LU}}[(\mathsf{BroadleafForest}\sqcup\mathsf{ NeedleleafForest}\sqcup\mathsf{TropicalForest})\sqsubseteq\mathsf{Forest}]\)
5. \((\mathsf{LC}\preceq\mathsf{BFO})\wedge(\mathsf{LU}\preceq\mathsf{BFO})\wedge \square_{\mathsf{BFO}}[\mathsf{Land}\sqcap\mathsf{Ecosystem}\sqsubseteq\bot]\)

Notice that _ecosystem_ and _land_ are disjoint categories according to the overarching \(\mathsf{BFO}\) (F5), yet forests are defined as ecosystems according to \(\mathsf{LC}\) (F1) and as lands according to \(\mathsf{LU}\) (F3). As discussed in [3], these kinds of disagreements result in well-reported challenges in the area of Ontology Integration [6, 7] and make ontology merging a non-trivial task. Standpoint logic overcomes the usual tradeoffs by supporting standpoint-dependent knowledge specifications, which allows the statements (F1)-(F5) to be jointly represented. In recent work, Gomez Alvarez et al. [4] introduced _First-Order_ Standpoint Logic (FOSL) and showed favourable complexity results for its _sentential_ fragments, which disallow modal operators being applied to formulas with free variables. Specifically, adding sentential standpoints does not increase the complexity for fragments that are _NP-hard_, which is shown by means of a polytime equisatisfiable translation. These results apply to the sentential standpoint variants of the expressive \(\mathcal{SROIQ}\) family of description logics, the logical basis of OWL 2 DL [1]. In a nutshell, given a knowledge base in sentential Standpoint-\(\mathcal{SROIQ}b_{s}\)3, the provided polytime translation outputs an equisatisfiable knowledge base in plain \(\mathcal{SROIQ}b_{s}\). Beyond establishing tight complexity bounds, this presented us with a way to leverage existing highly optimised OWL reasoners to provide reasoning support for ontology languages extended by standpoint modelling. In this work, we adjust this translation to plain \(\mathcal{SROIQ}\),4 and we present an implementation thereof, which effectively constitutes the first tool supporting automated reasoning on Standpoint-OWL 2 DL in combination with existing off-the-shelf reasoners.

Footnote 3: Notice that the published translation [4] is for the mildly stronger \(\mathcal{SROIQ}b_{s}\) instead of the more mainstream \(\mathcal{SROIQ}\).

Footnote 4: To the best of our knowledge current reasoners do not support \(\mathcal{SROIQ}b_{s}\).

The paper is structured as follows. We first introduce the syntax and semantics of sentential Standpoint-\(\mathcal{SROIQ}\) and describe briefly how \(\mathcal{SROIQ}\) relates to OWL \(2\) (Section 2.1). We then explain how to encode sentential Standpoint-\(\mathcal{SROIQ}\) axioms in an OWL \(2\) DL ontology, and how our implementation translates them in such a way that they can be processed by an OWL \(2\) DL reasoner (Section 3).
We proceed to detail the usage of the command-line tool (Section 4) and we conclude the paper with a discussion of the contributions and future work.

## 2 Background

We next introduce the theoretical background, starting with the "plain" (standpoint-free) description logic \(\mathcal{SROIQ}\), its standpoint-enhanced version, and the web ontology language OWL 2 DL.

### Standpoint Description Logic

Let \(\mathcal{C}\), \(\mathcal{P}_{1}\), and \(\mathcal{P}_{2}\) be finite, mutually disjoint sets called _individual names_, _concept names_ and _role names_, respectively. \(\mathcal{P}_{2}\) is subdivided into _simple role names_ \(\mathcal{P}_{2}^{\mathrm{s}}\) and _non-simple role names_ \(\mathcal{P}_{2}^{\mathrm{ns}}\), the latter containing the _universal role_ \(\mathrm{u}\) and being strictly ordered by some strict order \(\prec\).5 Then, the set \(\mathcal{R}^{\mathrm{s}}\) of _simple role expressions_ is defined by \(r_{1},r_{2}::=\mathsf{s}\mid\mathsf{s}^{-}\), with \(\mathsf{s}\in\mathcal{P}_{2}^{\mathrm{s}}\), while the set of (arbitrary) _role expressions_ is \(\mathcal{R}=\mathcal{R}^{\mathrm{s}}\cup\mathcal{P}_{2}^{\mathrm{ns}}\). The order \(\prec\) is then extended to \(\mathcal{R}\) by making all elements of \(\mathcal{R}^{\mathrm{s}}\) \(\prec\)-minimal. The syntax of _concept expressions_ is given by \(C,D::=\mathsf{A}\mid\{a\}\mid\top\mid\bot\mid\neg C\mid C\sqcap D\mid C\sqcup D\mid\forall r.C\mid\exists r.C\mid\exists r^{\prime}.\mathsf{Self}\mid\leqslant\!n\,r^{\prime}.C\mid\geqslant\!n\,r^{\prime}.C\), with \(\mathsf{A}\in\mathcal{P}_{1}\), \(a\in\mathcal{C}\), \(r\in\mathcal{R}\), \(r^{\prime}\in\mathcal{R}^{\mathrm{s}}\), and \(n\in\mathbb{N}\). The different types of \(\mathcal{SROIQ}\) sentences (called _axioms_) are given in Table 1.6

Footnote 5: In the original definition of \(\mathcal{SROIQ}\), simplicity of roles and \(\prec\) are not given a priori, but meant to be implicitly determined by the set of axioms. Our choice to fix them explicitly upfront simplifies the presentation without restricting expressivity.

Similar to FOL, the semantics of \(\mathcal{SROIQ}\) is defined via interpretations \(\mathcal{I}=(\Delta,\cdot^{\mathcal{I}})\) composed of a non-empty set \(\Delta\) called the _domain of \(\mathcal{I}\)_ and a function \(\cdot^{\mathcal{I}}\) mapping individual names to elements of \(\Delta\), concept names to subsets of \(\Delta\), and role names to subsets of \(\Delta\times\Delta\). This is extended to role and concept expressions and used to define satisfaction of axioms (see Table 1).

In Standpoint-\(\mathcal{SROIQ}\), "plain" \(\mathcal{SROIQ}\) axioms may be preceded by a _standpoint modality_, expressing a standpoint relative to which the axiom is stated to hold. Within such modalities, standpoints may be either referred to by name (e.g. as in F1-F5), or by expressions constructed from names inductively using set operators. Formally, the set \(\mathcal{E}_{\mathcal{S}}\) of _standpoint expressions_ is defined by \(\mathsf{e}_{1},\mathsf{e}_{2}::=*\mid\mathsf{s}\mid\mathsf{e}_{1}\cup\mathsf{e}_{2}\mid\mathsf{e}_{1}\cap\mathsf{e}_{2}\mid\mathsf{e}_{1}\setminus\mathsf{e}_{2}\), where \(\mathsf{s}\in\mathcal{S}\) is a _standpoint name_, and \(*\in\mathcal{S}\) is a special name referring to the _universal standpoint_, i.e. the standpoint comprising all precisifications.
A _sharpening statement_ \(\mathsf{e}_{1}\preceq\mathsf{e}_{2}\) expresses that \(\mathsf{e}_{1}\) pertains to a viewpoint that is at least as narrow as that of \(\mathsf{e}_{2}\) and is syntactic sugar for the axiom \(\square_{\mathsf{e}_{1}\setminus\mathsf{e}_{2}}[\top\sqsubseteq\bot]\).

The set \(\mathbb{S}_{[\mathcal{SROIQ}]}\) of _sentential Standpoint-\(\mathcal{SROIQ}\) sentences_ is now defined as the union \(\mathbb{S}_{[\mathcal{SROIQ}]}:=\mathcal{B}_{R}\cup\mathcal{B}_{T}\), where \(\mathcal{B}_{R}\) consists of all \(\mathcal{SROIQ}\) RIAs and \(\mathcal{B}_{T}\) is inductively defined:

* if \(\phi\) is a \(\mathcal{SROIQ}\) TBox axiom, then \(\phi\in\mathcal{B}_{T}\),
* if \(\phi,\psi\in\mathcal{B}_{T}\), then \(\neg\phi,\phi\wedge\psi,\phi\vee\psi\in\mathcal{B}_{T}\),
* if \(\phi\in\mathcal{B}_{T}\) and \(\mathsf{e}\in\mathcal{E}_{\mathcal{S}}\), then \(\square_{\mathsf{e}}[\phi],\Diamond_{\mathsf{e}}[\phi]\in\mathcal{B}_{T}\).

Any \(\phi\in\mathbb{S}_{[\mathcal{SROIQ}]}\) can be transformed to an equivalent \(\psi\in\mathbb{S}_{[\mathcal{SROIQ}]}\) in _normal form_, where negation only occurs directly before a \(\mathcal{SROIQ}\) TBox axiom or a standpoint modality \(\square_{\mathsf{e}}/\Diamond_{\mathsf{e}}\), and no standpoint modality appears in the scope of another.

In the semantics of sentential Standpoint-\(\mathcal{SROIQ}\), standpoints are represented by sets of so-called _precisifications_ where each precisification corresponds to an ordinary \(\mathcal{SROIQ}\) interpretation. Formally, the semantics of (sentential) Standpoint-\(\mathcal{SROIQ}\) knowledge bases \(\mathcal{K}\subseteq\mathbb{S}_{[\mathcal{SROIQ}]}\) is given by _description logic standpoint structures_ \(\mathfrak{D}=(\Delta,\Pi,\sigma,\gamma)\) where \(\Delta\) is a non-empty set, the interpretation _domain_, \(\Pi\) is a non-empty set of _precisifications_, \(\sigma\) maps each standpoint name \(\mathsf{s}\in\mathcal{S}\) to a subset of \(\Pi\), and \(\gamma\) maps each \(\pi\in\Pi\) to a "plain" \(\mathcal{SROIQ}\) interpretation with domain \(\Delta\). The satisfaction relation for DL standpoint structures and elements of \(\mathbb{S}_{[\mathcal{SROIQ}]}\) is then given by

* \(\mathfrak{D},\pi\models\xi\) iff \(\gamma(\pi)\models\xi\) for \(\mathcal{SROIQ}\) TBox axioms \(\xi\), and
* \(\mathfrak{D},\pi\models\square_{\mathsf{e}}[\phi]\) iff \(\mathfrak{D},\pi^{\prime}\models\phi\) for each \(\pi^{\prime}\in\sigma(\mathsf{e})\) and
* \(\mathfrak{D},\pi\models\Diamond_{\mathsf{e}}[\phi]\) iff \(\mathfrak{D},\pi^{\prime}\models\phi\) for some \(\pi^{\prime}\in\sigma(\mathsf{e})\)

where \(\sigma\) is extended from standpoint names to standpoint expressions in the obvious way, and the satisfaction relation for the Boolean connectives is as usual. In a Standpoint-\(\mathcal{SROIQ}\) knowledge base \(\mathcal{K}\subseteq\mathbb{S}_{[\mathcal{SROIQ}]}\), we consider all formulas \(\phi\) not preceded by a modality to be implicitly of the form \(\square_{*}[\phi]\). For the full technical definitions we refer to the original paper [4].

We finally note that Gomez Alvarez et al. [4] have presented a sentential standpoint version of the description logic \(\mathcal{SROIQ}b_s\), which extends \(\mathcal{SROIQ}\) by _safe Boolean role expressions_, i.e. role expressions of the form \(r_{1}\cup r_{2}\), \(r_{1}\cap r_{2}\) and \(r_{1}\setminus r_{2}\), denoting union, intersection and difference of roles, respectively.
Since to our knowledge there is no reasoner which supports these safe Boolean role expressions, we have restricted the implementation to sentential Standpoint-\(\mathcal{SROIQ}\).

### OWL 2 DL

The _Web Ontology Language OWL 2_ [1] is an expressive knowledge representation language and a W3C-recommended standard for modelling ontologies. There are two alternative ways of defining the semantics of OWL 2 ontologies: the _RDF-Based Semantics_ [8], which assigns meaning to RDF graphs and thus only indirectly to ontology structures via the mapping to RDF graphs [9], and the _Direct Semantics_ [10], which assigns meaning directly to ontology structures. The latter results in a semantics compatible with the model-theoretic semantics of \(\mathcal{SROIQ}\). Moreover, to ensure that OWL 2 ontology structures can be translated into an \(\mathcal{SROIQ}\) knowledge base, certain conditions have to be fulfilled, for instance transitive properties cannot be used in number restrictions. A complete list of restrictions can be found in the OWL 2 Structural Specification document [11, Section 3]. Ontologies that satisfy these conditions are called _OWL 2 DL_ ontologies. Our focus is on OWL 2 DL since this compatibility with \(\mathcal{SROIQ}\) ontologies allows us to implement the translation from sentential Standpoint-\(\mathcal{SROIQ}\) to standard \(\mathcal{SROIQ}\) in OWL 2.

## 3 Standpoint-OWL 2 DL

In order to support standpoint-based reasoning in the semantic web, one may either extend current standards such as OWL 2, or provide procedures to encode the standpoint operators within these languages. We take the latter approach following the lines of the work of Bobillo and Straccia [12], who proposed a methodology to represent fuzzy ontologies in OWL 2 using annotation properties. These properties are broadly used to add comments or labels to entities and axioms of the ontology, as a way to provide supplementary information to the user. In our case, we define the annotation property "standpointLabel", which will be used to add standpoint operators to axioms and to create Boolean combinations of standpoint axioms.

This section illustrates how to encode the sentential Standpoint-\(\mathcal{SROIQ}\) (Section 2.1) constructs that are not available in OWL 2. Most importantly, we provide the syntax to encode Boolean combinations of standpoint axioms, i.e. the axioms in \(\mathcal{B}_{T}\). While this is sufficient to encode standpoint ontologies, we also introduce syntax for the specification of sharpening statements, which are syntactic sugar in Standpoint-\(\mathcal{SROIQ}\), and also for labelling single standard OWL 2 subclass or equivalence axioms with a standpoint operator, which facilitates the enhancement of pre-existing ontologies with standpoints.

**Boolean combinations** Complex standpoint axioms can be added to an ontology by annotating the ontology itself by a standpointLabel with a BoolComb value:7

Footnote 7: The elements in the XML syntax are not case-sensitive, but the name attribute is.
    BoolComb  := <booleanCombination>Formula</booleanCombination>
    Formula   := Axiom | <NOT>Axiom</NOT> | <AND>Formula Formula</AND> | <OR>Formula Formula</OR>
    Axiom     := StdAxiom | <standpointAxiom name="@ax"/> |
                 <Box>SPExpr StdAxiom</Box> | <Diamond>SPExpr StdAxiom</Diamond>
    StdAxiom  := <subClassOf> <LHS>Class</LHS> <RHS>Class</RHS> </subClassOf> |
                 <equivalentClasses> <LHS>Class</LHS> <RHS>Class</RHS> </equivalentClasses>
    SPExpr    := <Standpoint name="s"/> | <INTERSECTION>SPExpr SPExpr</INTERSECTION> |
                 <UNION>SPExpr SPExpr</UNION> | <MINUS>SPExpr SPExpr</MINUS>

where @ax is an axiom name, s is a standpoint name, and Class is a class expression in OWL 2 Manchester syntax [13]. If a named standpoint axiom is mentioned, there has to be an annotated axiom with the same name attribute in the ontology, which then replaces the reference in the Boolean combination.

**Example 2**.: _We can encode the axiom (F3) in Example 1 by annotating the ontology with a standpointLabel in the following way:_

    <standpointLabel>
      <booleanCombination>
        <AND>
          <Box>
            <Standpoint name="LU"/>
            <equivalentClasses>
              <LHS>Forest</LHS>
              <RHS>ForestlandUse and MCON</RHS>
            </equivalentClasses>
          </Box>
          <Box>
            <Standpoint name="*"/>
            <subClassOf>
              <LHS>ForestlandUse</LHS>
              <RHS>Land</RHS>
            </subClassOf>
          </Box>
        </AND>
      </booleanCombination>
    </standpointLabel>

**Sharpening statements** Sharpening statements \(\mathsf{e}_{1}\preceq\mathsf{e}_{2}\) are encoded via annotation of the ontology with a standpointLabel of the form <Sharpening>SPExpr SPExpr</Sharpening>.

**Simple standpoint axioms** Standard subclass and equivalence axioms can be turned into standpoint axioms by adding a standpointLabel annotation of the form

    SPAxiom    := <standpointAxiom>SPOperator</standpointAxiom> |
                  <standpointAxiom name="@ax">SPOperator</standpointAxiom>
    SPOperator := <Box>SPExpr</Box> | <Diamond>SPExpr</Diamond>

with @ax and SPExpr defined as above. This effectively prepends a standard subclass or equivalence axiom by a standpoint operator \(\Box_{\mathsf{e}}/\Diamond_{\mathsf{e}}\) for some standpoint expression \(\mathsf{e}\). If the name attribute of the standpointAxiom element is given, it can be used as a reference in Boolean combinations. Note that a standpoint axiom with a name attribute will not be translated outside of the Boolean combinations that refer to it, since this would render any reference to it equivalent to "true".

## 4 Tool Description

Our command-line tool implements an adaptation to \(\mathcal{SROIQ}\) of the translation from sentential Standpoint-\(\mathcal{SROIQ}b_s\) to standard \(\mathcal{SROIQ}b_s\) proposed by Gomez Alvarez et al. [4]. In this section, we describe how a standpoint-annotated OWL 2 DL ontology is translated to an OWL 2 DL ontology that can be processed by an OWL 2 DL reasoner. Subsequently, we explain how to use the command-line tool and outline some of its additional features.

### Implementation

Our command-line tool8 can parse a sentential Standpoint-\(\mathcal{SROIQ}\) ontology in the syntax provided in Section 3, and translate it to standard OWL 2 DL, for which efficient reasoners already exist, e.g. HermiT [14].
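
To give a feel for how such annotation values can be processed, the following is a minimal Python sketch (not the actual tool, which builds on the Java OWL API) that parses a booleanCombination value following the grammar above into a small tuple-based abstract syntax tree. The example annotation value and the tuple encoding are purely illustrative; tag names are lower-cased because the XML syntax is not case-sensitive.

```python
import xml.etree.ElementTree as ET

def parse_formula(elem):
    """Parse a Formula/Axiom element into a nested tuple."""
    tag = elem.tag.lower()
    kids = list(elem)
    if tag == "booleancombination":
        return parse_formula(kids[0])
    if tag in ("and", "or"):
        return (tag.upper(), parse_formula(kids[0]), parse_formula(kids[1]))
    if tag == "not":
        return ("NOT", parse_formula(kids[0]))
    if tag in ("box", "diamond"):
        return (tag.capitalize(), parse_sp(kids[0]), parse_formula(kids[1]))
    if tag in ("subclassof", "equivalentclasses"):
        lhs = next(k for k in kids if k.tag.lower() == "lhs").text.strip()
        rhs = next(k for k in kids if k.tag.lower() == "rhs").text.strip()
        return (tag, lhs, rhs)
    if tag == "standpointaxiom":
        return ("NamedAxiom", elem.get("name"))
    raise ValueError(f"unexpected element <{elem.tag}>")

def parse_sp(elem):
    """Parse an SPExpr element into a nested tuple."""
    tag = elem.tag.lower()
    kids = list(elem)
    if tag == "standpoint":
        return ("SP", elem.get("name"))
    if tag in ("intersection", "union", "minus"):
        return (tag.upper(), parse_sp(kids[0]), parse_sp(kids[1]))
    raise ValueError(f"unexpected standpoint expression <{elem.tag}>")

# Hypothetical annotation value, mirroring Example 2.
label = """<booleanCombination><AND>
  <Box><Standpoint name="LU"/>
    <equivalentClasses><LHS>Forest</LHS><RHS>ForestlandUse and MCON</RHS></equivalentClasses>
  </Box>
  <Box><Standpoint name="*"/>
    <subClassOf><LHS>ForestlandUse</LHS><RHS>Land</RHS></subClassOf>
  </Box>
</AND></booleanCombination>"""

print(parse_formula(ET.fromstring(label)))
```
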
We use the _OWL API_ for creating, parsing and manipulating OWL 2 ontologies, hence the format of the input ontology can be one of a variety of standardised syntaxes, such as RDF/XML [15] or Manchester syntax [13].

Footnote 8: The source code can be found on the GitHub repository: [https://github.com/cl-tud/standpoint-owl2](https://github.com/cl-tud/standpoint-owl2)

The implemented translation exploits the fact that satisfiable \(\mathbb{S}_{[\mathcal{SROIQ}]}\) knowledge bases are guaranteed to have a model with a bounded number of precisifications, which are represented by integers \(\pi\in\{0,\ldots,p-1\}\) in the encoding. While Gomez Alvarez et al. [4] set this bound to the size of the knowledge base, for our implementation we use the more fine-grained count of the diamonds occurring in positive polarity and the boxes occurring in negative polarity in the standpointLabel annotations.

There are two syntactic impediments of \(\mathcal{SROIQ}\) that need to be addressed by the translation: (a) \(\mathcal{SROIQ}\) does not provide nullary predicates, which we simulate by concept expressions of the form \(\forall u.P\), where \(u\) is the universal role and \(P\) encodes the predicate via a concept name, and (b) \(\mathcal{SROIQ}\) does not directly allow for arbitrary Boolean combinations of axioms, but an equivalent encoding is possible using the universal role \(u\); for instance the expression \(\neg(A\equiv B)\vee(A\sqsubseteq C)\) can be converted to \(\top\sqsubseteq\exists u.(A\sqcap\neg B)\sqcup\exists u.(B\sqcap\neg A)\sqcup\forall u.(\neg A\sqcup C)\).

The translation proceeds in the following way. For each concept name \(A\), role name \(r\) and standpoint \(s\) in the input ontology, we generate the fresh concept and role names \(M\_s\_\pi\), \(A\_\pi\) and \(r\_\pi\) for each \(\pi\in\{0,\ldots,p-1\}\), where \(M\) is a prefix for the nullary standpoint predicates. To avoid altering the original ontology file, we additionally _rebase_ all concept, role and individual names, i.e. update their IRIs with that of the output ontology. Then, for each \(\pi\in\{0,\ldots,p-1\}\), we add the axioms \((\top\sqsubseteq\forall u.M\_*\_\pi)\) and for each standpoint axiom \(\phi\in\mathcal{B}_{T}\) the set of GCIs consisting of \((\top\sqsubseteq\mathrm{trans}(\pi,\phi))\), with \(\mathrm{trans}\) defined as follows.
\[
\begin{aligned}
\mathrm{trans}(\pi, C\sqsubseteq D) &= \forall u.(\neg C\_\pi\sqcup D\_\pi)\\
\mathrm{trans}(\pi, \neg(C\sqsubseteq D)) &= \exists u.(C\_\pi\sqcap\neg D\_\pi)\\
\mathrm{trans}(\pi, \phi_{1}\wedge\phi_{2}) &= \mathrm{trans}(\pi,\phi_{1})\sqcap\mathrm{trans}(\pi,\phi_{2})\\
\mathrm{trans}(\pi, \phi_{1}\vee\phi_{2}) &= \mathrm{trans}(\pi,\phi_{1})\sqcup\mathrm{trans}(\pi,\phi_{2})\\
\mathrm{trans}(\pi, \square_{\mathsf{e}}[\phi]) &= \sqcap_{\pi^{\prime}=0}^{p-1}\bigl(\neg\mathrm{trans}_{\mathcal{E}}(\pi^{\prime},\mathsf{e})\sqcup\mathrm{trans}(\pi^{\prime},\phi)\bigr)\\
\mathrm{trans}(\pi, \Diamond_{\mathsf{e}}[\phi]) &= \sqcup_{\pi^{\prime}=0}^{p-1}\bigl(\mathrm{trans}_{\mathcal{E}}(\pi^{\prime},\mathsf{e})\sqcap\mathrm{trans}(\pi^{\prime},\phi)\bigr)\\
\mathrm{trans}_{\mathcal{E}}(\pi,\mathsf{s}) &= \forall u.M\_s\_\pi\\
\mathrm{trans}_{\mathcal{E}}(\pi,\mathsf{e}_{1}\cap\mathsf{e}_{2}) &= \mathrm{trans}_{\mathcal{E}}(\pi,\mathsf{e}_{1})\sqcap\mathrm{trans}_{\mathcal{E}}(\pi,\mathsf{e}_{2})\\
\mathrm{trans}_{\mathcal{E}}(\pi,\mathsf{e}_{1}\cup\mathsf{e}_{2}) &= \mathrm{trans}_{\mathcal{E}}(\pi,\mathsf{e}_{1})\sqcup\mathrm{trans}_{\mathcal{E}}(\pi,\mathsf{e}_{2})\\
\mathrm{trans}_{\mathcal{E}}(\pi,\mathsf{e}_{1}\setminus\mathsf{e}_{2}) &= \mathrm{trans}_{\mathcal{E}}(\pi,\mathsf{e}_{1})\sqcap\neg\mathrm{trans}_{\mathcal{E}}(\pi,\mathsf{e}_{2})
\end{aligned}
\]

Equivalence axioms are treated as a conjunction of subclass axioms, and negation in front of standpoint modalities is resolved by duality, viz. \(\neg\square_{\mathsf{e}}[\phi]=\Diamond_{\mathsf{e}}[\neg\phi]\) and \(\neg\Diamond_{\mathsf{e}}[\phi]=\square_{\mathsf{e}}[\neg\phi]\). In line with treating plain \(\mathcal{SROIQ}\) axioms as being prepended by \(\square_{*}\), we translate standard subclass and equivalence axioms as being of the form \(\square_{*}[\phi]\), and RIAs are translated by simply replacing the original role names \(r\) by \(r\_\pi\) for all precisifications \(\pi\in\{0,\ldots,p-1\}\).

### Usage

**Translate** An annotated ontology can be directly translated via the command-line tool by providing the ontology file or its IRI. When one or more of the options listed below are used, the output ontology will not be translated automatically, but saved in a separate file. This can be avoided by setting a separate _translate_ flag.

**Import** The _import_ option first imports an ontology into the input file, and then annotates all imported axioms for which standpoint annotation is supported by a box operator with a specified standpoint name. This avoids, for instance, two concepts with the same name (and possibly different IRI bases) occurring in subclass or equivalence axioms being treated as the same concept during translation.

**Query** The most basic functionality of OWL 2 DL reasoners is checking the ontology for inconsistency. Popular reasoners, e.g. HermiT [14], additionally offer to answer queries regarding subclass relations, instances etc. However, these query services are impractical for translated Standpoint-OWL 2 ontologies, since standpoint axioms can only be used in a query if they are translated to standard OWL 2 beforehand. In order to simplify the specification of queries containing standpoint axioms, we have added a _query_ option to the command-line tool. The query language is the language of Boolean combinations, i.e., we can ask whether a given Boolean combination is entailed by the translated ontology.
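
To make the translation defined above concrete, the following is a minimal Python sketch of the \(\mathrm{trans}\) recursion over a toy tuple-based axiom encoding (the same shape as in the parsing sketch earlier), emitting Manchester-style class expressions over the renamed vocabulary. The encoding, names, and output syntax are assumptions for illustration only; this is not the tool's OWL API-based implementation.

```python
P = 2  # illustrative bound p on the number of precisifications

def trans_sp(pi, e):
    """trans_E(pi, e) over standpoint-expression tuples."""
    kind = e[0]
    if kind == "SP":                       # named standpoint s  ->  forall u. M_s_pi
        return f"(u only M_{e[1]}_{pi})"
    a, b = trans_sp(pi, e[1]), trans_sp(pi, e[2])
    if kind == "INTERSECTION":
        return f"({a} and {b})"
    if kind == "UNION":
        return f"({a} or {b})"
    if kind == "MINUS":
        return f"({a} and not {b})"
    raise ValueError(kind)

def trans(pi, phi):
    """trans(pi, phi) over axiom tuples; equivalences would be split into two GCIs."""
    kind = phi[0]
    if kind == "subclassof":               # C subClassOf D  ->  forall u.(not C_pi or D_pi)
        return f"(u only (not {phi[1]}_{pi} or {phi[2]}_{pi}))"
    if kind == "NOT":                      # not (C subClassOf D)  ->  exists u.(C_pi and not D_pi)
        inner = phi[1]
        return f"(u some ({inner[1]}_{pi} and not {inner[2]}_{pi}))"
    if kind == "AND":
        return f"({trans(pi, phi[1])} and {trans(pi, phi[2])})"
    if kind == "OR":
        return f"({trans(pi, phi[1])} or {trans(pi, phi[2])})"
    if kind == "Box":                      # conjunction over all precisifications
        return "(" + " and ".join(
            f"(not {trans_sp(p, phi[1])} or {trans(p, phi[2])})" for p in range(P)) + ")"
    if kind == "Diamond":                  # disjunction over all precisifications
        return "(" + " or ".join(
            f"({trans_sp(p, phi[1])} and {trans(p, phi[2])})" for p in range(P)) + ")"
    raise ValueError(kind)

axiom = ("Box", ("SP", "LU"), ("subclassof", "ForestlandUse", "Land"))
print("Top SubClassOf", trans(0, axiom))   # one GCI (Top subClassOf trans(pi, phi)) per pi
```
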
A query can be given by an expression of the form Formula defined in Section 3 (possibly in a separate file), or in a simplified query syntax for single standpoint axioms. The syntax of a simple query is defined as follows:

    SimpleQuery := [s](SimpleAxiom) | <s>(SimpleAxiom)
    SimpleAxiom := Class sub Class | Class eq Class

where s and Class are as before. The operators [s] and <s> stand for \(\square_{\mathsf{s}}\) and \(\Diamond_{\mathsf{s}}\), respectively, sub for a subsumption relation and eq for equivalence of classes. For instance, the query [LC](Forest sub Ecosystem) asks whether, according to \(\mathsf{LC}\), it is unequivocal that every forest is an ecosystem (which, for Example 1, follows from F1 and F2). The query is first negated, and then added to the input ontology as a Boolean combination. If, after translation, the resulting ontology is inconsistent, the query is a logical consequence of the ontology.

**Dump** Lastly, there is the option to _dump_ the output ontology to the command-line, rather than saving it to a new file, which facilitates reasoning over the translated ontology via pipeline to an OWL 2 DL reasoner.

## 5 Conclusion

In this paper, we have proposed a syntax for sentential Standpoint-\(\mathcal{SROIQ}\) using OWL 2 annotations, which have proved useful for implementing different non-standard description logics. While in this paper we have focused on the sentential fragment of \(\mathcal{SROIQ}\), our approach can be easily extended to more expressive fragments of standpoint \(\mathcal{SROIQ}\), e.g. to support standpoint operators on the level of concepts and roles, which leads to fragments currently under investigation and with interesting applications in ontology alignment. Subsequently, we have provided a translation from sentential Standpoint-\(\mathcal{SROIQ}\) to standard \(\mathcal{SROIQ}\), which is an adjustment of the recently published translation for the more expressive \(\mathcal{SROIQ}b_s\), and finally, we have implemented this translation as a command-line tool, thus effectively providing standpoint-based reasoning support for OWL 2 DL ontologies.

Future work will focus on the usability of the system. The XML syntax for standpointLabel annotations is not user-friendly, and annotating an ontology in this way can be time-consuming, even when using an ontology editor like Protege [16]. A possible approach to alleviating this problem would be to develop a plugin for Protege and to make use of its existing user interface for adding and modifying standpoint axioms, similar to the Fuzzy OWL 2 Protege plugin by Bobillo and Straccia [12]. This will allow for the integration of the modelling support with the translator and reasoner.
2309.14821
Expedited Data Transfers for Serverless Clouds
Serverless computing has emerged as a popular cloud deployment paradigm. In serverless, the developers implement their application as a set of chained functions that form a workflow in which functions invoke each other. The cloud providers are responsible for automatically scaling the number of instances for each function on demand and forwarding the requests in a workflow to the appropriate function instance. Problematically, today's serverless clouds lack efficient support for cross-function data transfers in a workflow, preventing the efficient execution of data-intensive serverless applications. In production clouds, functions transmit intermediate, i.e., ephemeral, data to other functions either as part of invocation HTTP requests (i.e., inline) or via third-party services, such as AWS S3 storage or AWS ElastiCache in-memory cache. The former approach is restricted to small transfer sizes, while the latter supports arbitrary transfers but suffers from performance and cost overheads. This work introduces Expedited Data Transfers (XDT), an API-preserving high-performance data communication method for serverless that enables direct function-to-function transfers. With XDT, a trusted component of the sender function buffers the payload in its memory and sends a secure reference to the receiver, which is picked by the load balancer and autoscaler based on the current load. Using the reference, the receiver instance pulls the transmitted data directly from the sender's memory. XDT is natively compatible with existing autoscaling infrastructure, preserves function invocation semantics, is secure, and avoids the cost and performance overheads of using an intermediate service for data transfers. We prototype our system in vHive/Knative deployed on a cluster of AWS EC2 nodes, showing that XDT improves latency, bandwidth, and cost over AWS S3 and ElasticCache.
Dmitrii Ustiugov, Shyam Jesalpura, Mert Bora Alper, Michal Baczun, Rustem Feyzkhanov, Edouard Bugnion, Boris Grot, Marios Kogias
2023-09-26T10:39:59Z
http://arxiv.org/abs/2309.14821v1
# Expedited Data Transfers for Serverless Clouds ###### Abstract Serverless computing has emerged as a popular cloud deployment paradigm. In serverless, the developers implement their application as a set of chained functions that form a workflow in which functions invoke each other. The cloud providers are responsible for automatically scaling the number of instances for each function on demand and forwarding the requests in a workflow to the appropriate function instance. Problematically, today's serverless clouds lack efficient support for cross-function data transfers in a workflow, preventing the efficient execution of data-intensive serverless applications. In production clouds, functions transmit intermediate, i.e., ephemeral, data to other functions either as part of invocation HTTP requests (i.e., inline) or via third-party services, such as AWS S3 storage or AWS ElastiCache in-memory cache. The former approach is restricted to small transfer sizes, while the latter supports arbitrary transfers but suffers from performance and cost overheads. This work introduces Expedited Data Transfers (XDT), an API-preserving high-performance data communication method for serverless that enables direct function-to-function transfers. With XDT, a trusted component of the sender function buffers the payload in its memory and sends a secure reference to the receiver, which is picked by the load balancer and autoscaler based on the current load. Using the reference, the receiver instance pulls the transmitted data directly from the sender's memory. XDT is natively compatible with existing autoscaling infrastructure, preserves function invocation semantics, is secure, and avoids the cost and performance overheads of using an intermediate service for data transfers. We prototype our system in vHive/Knative deployed on a cluster of AWS EC2 nodes, showing that XDT improves latency, bandwidth, and cost over AWS S3 and ElasticCache. On real-world applications, XDT delivers a 1.3-3.4\(\times\) speed-up over S3, with a cost savings of 2-5\(\times\). Compared to ElastiCache, XDT provides a 2-5% performance improvement while reducing cost by 17-772\(\times\). ## 1 Introduction Serverless computing has emerged as a pervasive cloud technology due to its scalability, resource- and cost-efficiency - factors that benefit both cloud providers and their customers. All major cloud providers have serverless offerings, including AWS Lambda [9], Azure Functions [45], and Google Cloud Functions [30] and Google Cloud Run [59]. In serverless, the application logic is organised as a collection of _stateless functions_ that communicate with each other and with cloud storage services where application state resides. Serverless computing is expressive enough to support various applications such as video encoding [28, 55], compilation [27, 36] and machine learning [34]. The stateless and ephemeral nature of function instances mandates that functions communicate any intermediate and ephemeral state across the functions comprising the application logic. Inter-function communication generally happens when one function, the _producer_, invokes one or more _consumer_ functions in the workflow and passes inputs to them. Crucially, the instances of the consumer functions are not known by the producer at invocation time because they are picked by the cloud provider's load balancer and autoscaler components on demand. 
Also, for many serverless applications, the amount of data communicated across function instances can be large, measuring 10s of MBs or more; examples include video analytics [27, 28, 54, 55], map-reduce style and database analytics [48, 51, 52], and ML training [34]. Inter-function communication can happen in one of two ways. The first is by _inlining_ the data inside function invocation requests. Because these requests traverse the cloud provider's autoscaling infrastructure (i.e, the request _control plane_), providers limit the maximum size of inlined data to at most a few MBs to mitigate the impact of large payloads on the forwarding logic along the request-response path [14, 31]. The second inter-function communication approach is via an intermediate service, which can be a storage service (e.g., AWS S3 or Google Cloud Storage) or an in-memory cache service (e.g., AWS ElastiCache), which requires the producer function to first store the data, then invoke the consumer, and subsequently have the consumer retrieve the data from storage. The indirection via an intermediate service overcomes the payload size limitation of inline transfers but introduces large latency overheads and adds the cost of the intermediate service. For example, we show that transmitting data via S3 and ElastiCache in a MapReduce application's shuffle phase can account for 70% to over 99% of the total processing cost. Researchers have identified the problem of efficient serverless communication and have proposed several solutions. Some seek to improve the performance of storage-based transfers through the use of tiered storage, such as combining an in-memory cache layer (e.g., ElastiCache) with a cold storage layer (e.g., S3) [43, 47, 54, 58]. While tiered storage can somewhat improve performance or cost over a single storage or cache layer, the general disadvantages of through-storage indirection remain. Others try to enable direct function-to-function communication but do so in a way incompatible with the existing autoscaling infrastructure and may pose security risks [60, 63]. Indeed, coding infrastructure management to cloud providers has been a key driver behind the rapid adoption of serverless by cloud application developers. Our work focuses on the problem of seamless, high-performance serverless communication that is non-disruptive with existing serverless infrastructure. To that end, we introduce Expedited Data Transfers (XDT)1, a serverless communication substrate that allows direct communication between two function instances in a manner that is secure, flexible and compatible with the autoscaling infrastructure used by cloud providers. XDT preserves the existing API and invocation semantics of serverless functions while avoiding the need for intermediate storage for arbitrarily-sized data transfers. At the heart of XDT is an explicit separation of the control plane used for function invocation, which is tightly integrated with the autoscaling infrastructure, from the data transfer itself. In simplest terms, with XDT, the producer function buffers the data that needs to be transferred in its memory and sends a secure reference to the data inlined with the invocation to the consumer function. The consumer then directly _pulls_ the data from the producer's memory. Footnote 1: We plan to release the XDT’s source code by the time of publication. More concretely, XDT defines a short-lived namespace of objects with the same lifetime as the function instance. 
This namespace can be accessed by subsequent function instances through secure references that do not expose the underlying infrastructure to the user code. XDT exploits the insight that the lifetime of individual function instances, as controlled by the keep-alive policies implemented to keep function instances active to reduce the chance of cold starts, exceeds the lifetime of intermediate state required across function invocations. XDT naturally supports a variety of inter-function communication patterns, including producer-consumer, scatter (map), gather (reduce), and broadcast. Unlike inline transfers, XDT is not limited to small transfer sizes; compared to through-storage transfers, XDT avoids high-latency data copies to and from a storage layer and the associated monetary cost of storage usage. Critically, XDT is fully compatible with the deployed autoscaling infrastructure and requires only minimal modifications at the endpoints of the existing function invocation control plane. We prototype XDT in vHive/Knative [61] by extending Knative queue-proxy components with XDT support. We evaluate our proposal by deploying a XDT-enabled vHive cluster in AWS EC2. Using a set of microbenchmarks, we show that XDT delivers superior performance versus transfers via S3 storage and ElastiCache in-memory cache for all of the aforementioned communication patterns in serverless computing. The main contributions of our work are as follows: * We show that existing inter-function communication methods fall short of serverless demands for high performance and low cost. The most general serverless communication approach, through-storage transfers, carries latency and bandwidth overheads of 8.1\(\times\) for objects of 100KB versus inline transfers, which are limited in size to at most few MBs. * We introduce XDT, which uses control/data path separation to pass a secure object reference to a consumer function instance as part of an invocation request, and delegates to the consumer pulling the data from the producer's memory. XDT supports various inter-function communication patterns and is fully compatible with serverless autoscaling infrastructure. * We demonstrate that XDT is flexible and fast. On real-world applications, XDT outperforms S3 by 1.3-3.4\(\times\) at a cost savings of 2-5\(\times\). Furthermore, XDT also consistently outperforms ElastiCache (the highest-performance data transfer option available today) by 2-5% while slashing the overall application execution cost by 17-772\(\times\). ## 2 Background ### Serverless Computing Basics A typical serverless application is composed of multiple stateless functions connected in a _workflow_. Functions, deployed as HTTP servers with user-defined handlers, may invoke each other. Two aspects are central to serverless deployments. The first is autoscaling, whereby function instances are launched on-demand, based on observed load. Autoscaling affords extreme scalability and cost-efficiency, as idle functions are rapidly shutdown by the cloud provider. The latter is facilitated by the stateless nature of serverless functions. The second important aspect of serverless is the unavoidable communication between function instances in a workflow. Communication is endemic due to the stateless nature of serverless, which cannot hold or share state across invocations. This often places inter-function communication on the critical path, performance-wise, necessitating a low-latency high-bandwidth communication substrate. 
The quest for high-performance inter-function communication is complicated by the autoscaling aspect of serverless, since a producer instance in a workflow might not know the consumer until the latter is invoked by the producer. It is only at the invocation point that the cloud infrastructure picks either an existing instance of the consumer function or spawns a new one based on observed load. Thus, any optimizations to the inter-function communication substrate must respect the cloud provider's autoscaling policies. The rest of this section details the workings of the autoscaling infrastructure and inter-function communication options in today's clouds. ### Serverless Autoscaling Infrastructure We describe the operation of a serverless autoscaling infrastructure (Fig. 1) using the Knative [4] terminology, since it resembles production clouds, in particular Google Cloud Run [59] and AWS Lambda [9]. The autoscaling infrastructure of serverless is designed to achieve two objectives. The first is responding to changes in load by spawning new function instances when the load increases and shutting down idle instances once the load drops. The second objective is minimizing queuing latency by balancing the load across the active instances. Instance scaling and load-balancing decisions inherently rely on utilization metrics from the active function instances, gathered and stored with the help of the following two components. Each function invocation traverses a provider-managed _queue-proxy_ component, which is in charge of forwarding incoming requests to the function instance with which the queue proxy is co-located. Queue proxy also collects and reports utilization metrics of that instance to the _autoscaler_ control plane component. The _autoscaler_ monitors the load in front of each active instance and implements the scaling policy. To balance the load among all active instances of a function, serverless clouds employ a _load balancer_ whose job is to steer requests to one of the instances. Every function invocation must traverse the load balancer, referred to as the _activator_ in Knative. The activator periodically receives updates from the autoscaler regarding the active instances and their current load. If there is an incoming request for a function and there are no active instances available or all of them are busy, the activator needs to request new instances of that function from the autoscaler. The autoscaler makes a placement decision and spawns a new function instance while the activator buffers the pending function invocation request. Once the new instance is up, the activator steers the buffered invocation to the instance via its corresponding queue proxy. This triplet of components, namely the queue proxy, the autoscaler, and the load balancer, work together to enable autoscaling of serverless functions. The rest of the serverless system is designed around this triplet to deliver seamless scalability to serverless application developers and efficient resource usage to cloud providers. ### Serverless Communication Methods Any meaningful serverless application must combine multiple stateless functions that communicate with each other. The resulting inter-function communication can significantly affect the performance of the entire serverless application. We next discuss the various inter-function communication mechanisms, including solutions deployed in production in today's clouds and proposals from the academic community. 
#### 2.3.1 Commercial Solutions

Serverless clouds, e.g., AWS Lambda or Google Cloud Functions, provide two methods of data transfers: either inline as part of the invocation HTTP request, or via third-party storage or in-memory cache services, e.g., AWS S3, Google Cloud Storage, and AWS ElastiCache. Inline transfers do not require external storage and can thus provide low latency, but pose limitations on the amount of data that can be transferred between function instances. Because invocation requests travel via the serverless autoscaling infrastructure, providers tend to restrict the maximum aggregate size of inlined payloads to reduce the pressure on the load balancer services, which are shared across the datacenter. For instance, in AWS Lambda, inline transfers are limited to 6MB and 256KB per HTTP request/response for synchronous and asynchronous invocation [14], respectively.

In contrast to inline transfers, _through-storage_ transfers can be used for arbitrarily large objects but require the use of an external storage service. For example, in AWS, a function \(A\) that needs to pass a large object as an input to function \(B\) would first save the object in an S3 bucket and pass the corresponding S3 key to \(B\), which would then retrieve the object. Problematically, the use of a storage service adds latency in the critical path of communication and incurs an additional monetary cost to the developer.

The highest-performance alternative to a storage service in today's clouds that is not limited in the amount of data that can be transferred is an in-memory caching service, such as AWS ElastiCache. While offering lower latency and higher bandwidth than storage, ElastiCache comes with a significant cost overhead. For example, as of this writing, AWS S3 storage service is billed at around $0.02 per GB-month [12], whereas ElastiCache costs $0.02 per GB-_hour_ [11] - a cost difference of around 700\(\times\). Moreover, these costs can easily dominate the overall cost of executing serverless applications. Indeed, as we show in §7.2, data transfers can cost nearly \(1000\times\) more than compute time in AWS Lambda when using ElastiCache and \(3\times\) more if using AWS S3. In addition to their cost premium, another disadvantage of in-memory caches compared to storage is their lack of durability; users can set up replication for high availability, but that will further drive up the cost. Finally, neither self-hosted (e.g., on an AWS EC2 node) nor managed caching services support autoscaling by default, meaning that application developers must add that functionality themselves.

Figure 1: Operation of serverless autoscaling infrastructure.

Fig. 2 shows latency and effective bandwidth achieved with synchronous inline transfers and those via AWS S3 and ElastiCache for Redis in AWS Lambda. We obtained these results by instrumenting a pair of AWS Lambda functions, one invoking the other and passing payloads of various sizes with timestamps in user code. These timestamps capture both invocation and data transfer latencies, including those for S3 and ElastiCache in the respective experiments. We observe that the inline data transfers deliver lower latency and higher bandwidth than the storage-based alternative. For instance, for a 100KB transfer, inline achieves \(8.1\times\) and \(1.3\times\) lower latency than S3 and ElastiCache, respectively. However, Lambda users are limited to 6MB maximum object size for inline transfers. Larger objects require the use of through-storage transfers.
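
For concreteness, a through-storage (S3) transfer between two Lambda functions typically follows the pattern sketched below. The bucket name, function name, and key scheme are hypothetical, and IAM setup, error handling, and retries are omitted; the sketch only illustrates the two extra data copies that storage-based transfers introduce.

```python
import json
import uuid
import boto3

s3 = boto3.client("s3")
lam = boto3.client("lambda")
BUCKET = "ephemeral-transfer-bucket"   # assumed to exist

def producer_handler(event, context):
    payload = generate_large_object()                     # e.g., tens of MBs
    key = f"xfer/{uuid.uuid4()}"
    s3.put_object(Bucket=BUCKET, Key=key, Body=payload)   # 1st copy: producer -> S3
    # The invocation itself carries only the S3 key, not the data.
    rsp = lam.invoke(FunctionName="consumer",
                     Payload=json.dumps({"s3_key": key}).encode())
    return json.load(rsp["Payload"])

def consumer_handler(event, context):
    obj = s3.get_object(Bucket=BUCKET, Key=event["s3_key"])
    data = obj["Body"].read()                             # 2nd copy: S3 -> consumer
    return {"size": len(data)}

def generate_large_object():
    return b"x" * (10 * 1024 * 1024)
```
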
#### 2.3.2 Research Proposals To overcome the limitations of inter-function communication methods used in production clouds, recent works have considered two alternative strategies. The first strategy focuses on improving the performance of data transfers through the use of tiered storage. For example, Locus [52] uses different storage tiers for specific purposes, namely Redis for shuffling and S3 for cold storage. Pocket [37] and SONIC [43] employ a similar idea and develop a control-plane solution to multiplex different storage services based on inferred application needs. FaaST [54], Cloudburst [58], and OFC [47] propose using key-value stores as a cache for ephemeral data transfers. While an improvement on production offerings, approaches that rely on one or more storage layers for data transfers fundamentally increase system complexity, impose additional coordination overheads at the application level, and incur latency cost in the critical path for writing and reading data. The second strategy advocated in research papers is to implement direct communication between function instances by exposing IP endpoints to functions [49, 60, 63]. Problematically, the unmediated direct IP communication between serverless instances introduces several issues. First, exposing IP addresses of the instances to untrusted user code is a security concern. Indeed, allowing a malicious user to infer serverless cloud network topology can facilitate denial-of-service and side channel attacks. Secondly, communication using static IP addresses impedes the autoscaler from scaling individual functions independently and places the burden of load-balancing on the user. Indeed, doing so is fundamentally at odds with the autoscaling principle of serverless computing, and is thus highly unattractive for practical usage. ## 3 Serverless Communication Requirements We aim to design an API-preserving data communication method for serverless that enables direct function-to-function transfers in a way that is fully compatible with existing cloud infrastructure. This implies the following requirements: 1. _High performance:_ low latency and high bandwidth across a full range of transfer sizes. 2. _Compliance with existing semantics of serverless function invocations:_ no change to the existing _at-most-once_ semantics [29, 39, 40] for function invocations. 3. _Seamless integration with autoscaling:_ inter-function communication should work seamlessly with the existing autoscaling infrastructure. 4. _API compatibility:_ the communication method should require no or minimal changes to the user code and should support both passing-by-value and passing-by-reference APIs. 5. _Security:_ must keep sensitive provider information, including IP addresses of function instances and the underlying topology, hidden from untrusted user code. As discussed in Section 2.3, existing inter-function communication methods are unable to meet all of these requirements, prompting an alternative solution, which we introduce next. ## 4 Expedited Data Transfers (XDT) ### Design Insights We exploit three insights that lead us to the design presented below. The first insight concerns _control/data paths separation_. Inline transfers in today's serverless clouds transfer the data along with the function invocation message, which results in the inlined object traversing the entire control plane of the function invocation and forces providers to impose strict limitations on the maximum size of inlined objects. 
A better communication method would separate the control (function invocation) from the data transfer. Doing so would naturally unburden the control plane without impacting the functioning of the autoscaling infrastructure. The challenge is doing so without resorting to a storage service, which is what existing through-storage transfers rely on. We address this challenge with the help of the second insight.

Figure 2: Latency and effective bandwidth characteristics of a single data transfer, inline and via AWS S3, and AWS ElastiCache (EC), in AWS Lambda. All axes are logarithmic.

The second insight is that the data that need to be transferred between instances are ephemeral, with lifetimes on the order of a few seconds [36, 37]. Hence, the data lifetime is much shorter than the keep-alive period of serverless functions, which is typically on the order of minutes to increase the likelihood of a warm invocation [56, 9]. Based on the above, we draw one final insight: instead of using a storage service to communicate data across function instances, a producer instance can simply buffer the data in its own memory and have the consumer instance pull from it. This insight forms the foundation for XDT, presented next.

### Design Overview

We introduce Expedited Data Transfers (XDT), a serverless-native high-performance data communication fabric that meets all five serverless communication requirements (§3): high performance, compliance with existing semantics of serverless function invocations, compatibility with autoscaling, a standard API for transferring data in serverless, and security. Following the insights developed in Sec. 4.1, XDT splits the function invocation plane into separate control and data planes. Crucially, the control plane is unchanged, matching the existing serverless architecture (Fig. 1), thus allowing the autoscaling infrastructure to make the load-balancing decision for each incoming invocation by steering the invocation to the least-loaded instances of a function. The control plane carries only the function invocation control messages, i.e., RPCs. The data plane is responsible for transferring the objects.

In simplest terms, a producer function instance in XDT buffers the data to be communicated to the consumer(s) in its own memory and sends a secure reference to the data inline with the invocation to the consumer function(s). The consumer(s) then directly _pull_ the data from the producer's memory. XDT fundamentally replaces push-based data transfers, in which the producer pushes the data through the activator or through a storage layer, with an approach in which the consumer directly pulls the data after the control plane has made its decisions.

Fig. 3 describes XDT operation. Let us assume two serverless functions, a producer and a consumer, each of which may have any number of instances at any point in time. As in the case with existing communication methods, the producer logic invokes the consumer function while passing a data object as an argument. However, in contrast to the existing systems (§2.2), in XDT, consumer function invocations travel to the activator separately from their corresponding objects 1, which remain buffered at their source. After contacting the autoscaler as needed, the activator chooses the instance of the consumer function, to which the activator forwards the invocation for processing 2.
Once the invocation arrives at the target instance, the instance can pull the object from the producer instance 3, using the secure reference enclosed in the invocation message.

Footnote 1: Note that during non-blocking transfers, the producer function's user code allocates the object, with the XDT SDK only holding references to it.

#### 4.2.1 XDT Programming Model

The XDT programming model features a minimalist yet expressive API (Table 1) that supports all three essential communication patterns (§2.3), namely invoking a function, scattering and broadcasting objects to several consumers, and gathering the output of several functions. The XDT API is fully compatible with the API supported by production clouds, such as AWS Lambda and S3's Boto3 [16].

First, XDT supports the standard blocking API, as in AWS Lambda [19], that is, the invoke() call, which invokes a function by its URL, passing a binary data object obj by value. Upon invocation, the XDT SDK is responsible for buffering the object at the producer side until the consumer function instance, chosen by the autoscaling infrastructure, pulls it. In this case, the consumer function starts processing _after_ the object is transferred to the consumer instance.

XDT also supports the standard non-blocking (asynchronous) interface, which is similar to a common key-value store interface like in AWS S3 [16], namely the get() and put() calls. In contrast to using a storage service, with XDT, the sender instance of the producer function can finish the invocation before one of the consumer instances retrieves the transmitted object. To de-couple the function invocation and the data transfer interfaces, XDT introduces _XDT references_ as a first-class primitive. When the producer function calls put(), the runtime returns an XDT reference to a specific object while retaining an immutable copy of the object.2 When the consumer needs to read this object, it calls get(), which pulls the object from the remote server. Each reference is associated with a user-specified number of retrievals N of that object, which complete before the object can be de-allocated at the producer instance. From a user perspective, references are just opaque hashes that do not expose any information regarding the underlying provider infrastructure, and that can be neither generated nor manipulated by user code.

| API Call | Description |
| --- | --- |
| rsp := invoke(URL, obj) | Invoke a function |
| ref := put(obj, N) | Buffer an object locally |
| obj := get(ref) | Fetch a remote object |

Table 1: XDT API description.

Figure 3: XDT architecture overview.

The above programming model allows serverless applications, e.g., those implemented for the AWS Lambda or Knative serverless platforms, to be ported seamlessly using a set of corresponding wrapper functions. To demonstrate the API's portability, we implemented XDT SDKs for applications written in Python and Golang and deployed in a Knative cluster.

#### 4.2.2 XDT Semantics & Error Handling

Function invocations in modern serverless offerings, like AWS Lambda and Azure Functions, provide the _at-most-once_ semantics [29, 39, 40], i.e., a function invocation may execute not more than once even in the presence of a failure.3 Hence, the provider is responsible for exposing the runtime errors to the user logic to handle them [17, 18, 22, 46]. Error handling logic in today's serverless applications varies based on the function composition method.
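
To make the reference semantics of §4.2.1 and the associated failure mode concrete, the following self-contained Python stub mimics the behaviour described above: put() buffers an object under an opaque reference with a retrieval budget, get() pulls it, and a producer shutdown invalidates outstanding references. This is an illustration only, not the actual XDT SDK, whose interfaces are not shown in the text beyond Table 1.

```python
import uuid

class ProducerInstanceStub:
    """Stand-in for the producer-side buffer held by the XDT SDK / queue proxy."""
    def __init__(self):
        self._buf = {}                     # opaque ref -> [object, remaining retrievals]
        self.alive = True

    def put(self, obj, n):
        ref = uuid.uuid4().hex             # opaque reference: no IPs or topology exposed
        self._buf[ref] = [obj, n]
        return ref

    def get(self, ref):
        if not self.alive or ref not in self._buf:
            # Producer shut down (or budget exhausted) before the pull completed: the
            # consumer sees an error and must propagate it so that the workflow can be
            # re-invoked from the producer with its original arguments.
            raise RuntimeError("XDT object unavailable; re-invoke producer")
        obj, left = self._buf[ref]
        if left <= 1:
            del self._buf[ref]             # de-allocate after the last retrieval
        else:
            self._buf[ref][1] = left - 1
        return obj

    def shutdown(self):
        self.alive, self._buf = False, {}  # instance teardown de-allocates everything

producer = ProducerInstanceStub()
ref = producer.put(b"ephemeral-intermediate-data", n=2)
assert producer.get(ref) == b"ephemeral-intermediate-data"   # first consumer pull
producer.shutdown()
try:
    producer.get(ref)                      # second pull after shutdown -> error
except RuntimeError as err:
    print("consumer must surface:", err)
```
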
The user can compose the functions as a direct chain (e.g., the producer makes a blocking call to the consumer) or chain the functions in an asynchronous workflow. In the latter case, an _orchestrator_ invokes the functions within the workflow. The orchestrator can be provider-based (e.g., AWS Step Functions [15] and Azure Durable Functions [44]) or an auxiliary function that drives other functions. Handling certain failures may require re-execution of several functions. In this case, the first function of the sub-workflow must be re-invoked with the same arguments as the original invocation. In this case, the user is responsible to pass the first function's context throughout the sub-workflow down to the function that can detect its failure. Footnote 3: The user can construct primitives with at-least-once semantics by combining primitives with at-most-once semantics and re-try logic. Prior work also shows constructing primitives with the exactly-once semantics [40]. Handling of XDT-related failures follows the same approach. We describe an XDT failure scenario in a two-function workflow with one producer function and one consumer function, which can be recursively generalized to an arbitrary workflow. Crucially, the lifetime of an XDT object is connected to the lifetime of the producer instance, thus a shutdown of a producer instance leads to immediate de-allocation of all the objects, retrievals of which have not completed. For blocking invocations, i.e., the ones invoked with the invoke() call, the producer instance stays alive waiting for the response from the consumer, and may decide to re-invoke the consumer invocation if the previous invocation returns an error. For the non-blocking invocations, an XDT transfer may fail if the producer instance is killed (e.g., due to exceeding the maximum invocation processing time) before the transmitted object is retrieved by a consumer instance. For example, it is possible that the producer function returns success before the transfer is complete, which is followed by the instance shutdown. However, in this case, the consumer function receives the corresponding error when executing XDT get(). The invocation of the consumer can follow the at-least-once semantics approach. To guarantee the correct execution of the entire workflow, the consumer needs to re-invoke the workflow starting from the producer function. Hence, the user code in the consumer function should forward this error to the corresponding entity (i.e., the orchestrator or the driver function) that can re-invoke the producer with the same original arguments. For example, if AWS Step Functions orchestrator is used, the user can define a custom fallback function to handle a particular error code [18]. To summarize, XDT is fully compliant with the existing at-most-once semantics of serverless function invocations and can be enhanced to at-least-once semantics using existing serverless infrastructure by introducing error handling in application logic. ## 5 Implementation We prototype XDT in vHive [61], an open-source framework for serverless experimentation that is representative of production clouds. vHive features the Knative programming model [4] where a function is deployed as a containerized HTTP server's handler (further referred to as _function server_), which is triggered upon receiving an HTTP request, i.e., RPC, sent to a URL assigned to the function by Knative. 
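
As a rough illustration of what such a function server looks like, the sketch below serves a user handler behind a plain HTTP server. Real vHive/Knative functions typically use gRPC and the platform's tooling, so the port, handler shape, and placeholder business logic here are stand-ins rather than the actual deployment code.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class FunctionHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # The platform routes requests for the function's URL to this container.
        body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
        result = user_function(body)                 # user-defined handler logic
        self.send_response(200)
        self.send_header("Content-Length", str(len(result)))
        self.end_headers()
        self.wfile.write(result)

def user_function(payload: bytes) -> bytes:
    return payload.upper()                           # placeholder business logic

if __name__ == "__main__":
    HTTPServer(("", 8080), FunctionHandler).serve_forever()
```
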
Each function instance runs in a separate Kubernetes pod atop a worker host (bare-metal or virtualized) in a serverless cluster. ### XDT Prototype in vHive/Knative We start by describing the implementation of the different software layers of the prototype, required to support blocking function invocations with XDT, followed by a discussion of support for the non-blocking XDT API. #### 5.1.1 XDT Software Development Kit (SDK) XDT relies on an SDK to implement the API, bridging the user logic and the provider components that perform the transfer. At the producer instance's side, the SDK splits the original invocation request into two messages, namely a control message and an object, which comprises the transferred data. The SDK creates and adds an XDT reference to the gRPC request as an HTTP header. The reference comprises an encrypted string, containing the IP address of the pod where instance's function server is running, and the object key, which is unique for that pod. Encryption prevents the user code from obtaining the IP addresses of function instances. At the consumer instance's side, the SDK reconstructs the original request, joining the control message and the object (after the latter has been pulled), before invoking the consumer function in the same way as with the vanilla serverless API. #### 5.1.2 Control and Data Planes XDT uses gRPC [2] for the control plane, a common industry choice that retains compatibility to the rest of HTTP-based control-plane components. For the data plane, we choose the high-performance Cap'n Proto [1] RPC fabric. This fabric runs directly on top of TCP, delivering higher performance when compared to gRPC, whose performance is limited by HTTP compatibility. Both protocols support a wide range of programming languages. #### 5.1.3 Provider Components Extension We extend the Knative queue proxy (QP) for object buffering (SS2.2). QP is a minimal auxiliary provider container, written in Golang. It is deployed per function instance and shares the pod with the function server. The added logic increases the QP memory footprint by 2MB. Because a QP, being a minimal provider container, might be online long before the function server during a cold start, we deploy the following performance optimization. We let the QP retrieve the object on behalf of the consumer function server, instead of the consumer SDK, to overlap retrieving the request with booting the function instance. ### XDT Operation #### 5.2.1 XDT invoke() Operation Fig. 4 shows the request path in the XDT infrastructure following an invoke() call. 1 when the caller function needs to call another function it invokes the SDK. 2 the SDK splits the request into two parts, the XDT object and the control plane message that carries the reference to the object. 3 the SDK sends the control message to the activator and 4 stores the object into a buffer to be fetched later by the consumer's QP (QP\({}_{\text{con}}\)). 5 the activator chooses the instance of the consumer and forwards the control message to the consumer's QP (QP\({}_{\text{con}}\)). 6 QP\({}_{\text{con}}\) extracts the reference from the header, decrypts the reference to extract IP address and the object key, and requests the data by sending a Cap'n Proto RPC request to the producer function's SDK, requesting the data by the object key. 7 SDK at the producer function sends the data to the QP\({}_{\text{con}}\) and de-allocates the object when they are dispatched. 
8 QP\({}_{\text{con}}\) forwards the object to the SDK, which reconstructs the original request, and 9 invokes the function handler. If the response is small, it follows the reverse control plane path through the two QPs and the activator. #### 5.2.2 XDT get() / put() Operation Whereas invoke() is a synchronous call, the two other calls of the XDT API - put() and get() - are asynchronous. While the operation of put() and get() is similar to invoke(), there are a few important differences. The first difference is that put() returns an XDT reference for the object to the user logic. The producer function may pass this reference, like any other string field, to any function that belongs to the same user. Once the consumer function calls get() using the delegated reference, the SDK retrieves the object by sending a Cap'n Proto RPC request directly to the producer instance (i.e., to a Cap'n Proto RPC server inside the SDK), using the IP address and the key in the reference. The asynchronous get()/put() API can be used not only for invocations but also for large responses. The response path follows the control plane path in the reverse order and is used only with small (inline) replies, i.e., \(<\)6MB in AWS Lambda. In the case of a large reply, the XDT-enabled consumer creates a reference to the response object through a put() call and includes the reference in the response. Upon receiving the response, the producer can retrieve the response payload through a get() call. ### Flow Control The XDT design relies on the availability of the pre-allocated buffer in the QP\({}_{\text{con}}\) component to offer high-performance data transfers. If buffers are unavailable, the system needs to engage a flow control mechanism to pace the sending components until the downstream buffers free up. Fortunately, Cap'n Proto RPC works on top of TCP and can rely on its flow control without any changes to the XDT logic, which only needs to buffer and forward the object's chunks along the component chain. Hence, if the number of transmitted objects exceeds the number of available buffers, the subsequent transfers are paused, resulting in the user code blocking in the corresponding XDT API call. ## 6 Methodology ### Evaluation Platform Due to the closed-source nature of commercial cloud infrastructure, we prototype and evaluate XDT in Knative [4]. We deploy a Knative cluster that features XDT-enabled queue-proxy containers on AWS EC2 nodes, similarly to prior work [37, 63, 43], thus ensuring low access time to AWS S3. We use a multi-node cluster of bare-metal m5.16xlarge instances in the 'us-west-1' availability zone to evaluate the baseline and the XDT-enabled serverless settings. This instance type features an Intel Xeon Platinum 8000-series CPU at 3.1GHz with 64 SMT cores, 256GB RAM, EBS storage, and a 20Gb/s NIC. Using the vHive experimentation framework v1.4.2 [61], we set up Knative 1.3 in a multi-node Kubernetes 1.23 cluster [6], running all deployed functions, Knative autoscaling components, and Istio ingress [3]. The pods are scheduled on nodes to ensure that all the data transfers happen across the network (i.e., no local communication), by placing each function on a separate AWS EC2 node. In all experiments, we emulate a stable serverless workflow where enough active instances are present at all times - i.e., there are no cold start delays during the measurements. We achieve this by deploying functions with a fixed number of instances. 
### Measurement Framework We use the measurement infrastructure integrated with the vHive framework [7], which supports end-to-end benchmarking. The vHive framework features a service, called invoker, that injects requests in a common format for all of the studied workloads and waits for the responses from the corresponding workflows, reporting the end-to-end delays. The user code of workloads is annotated with logs, which are then aggregated to determine the end-to-end latency breakdown. Unless specified otherwise, we report average end-to-end latency computed from 10 measurements. For microbenchmarks, which do not have any computational overheads except network processing, we calculate _effective bandwidth_ of a data transfer by dividing the transferred object size by the measured end-to-end latency. ### Baseline and XDT Configurations Our baseline is the through-storage communication approach, which is unencumbered by transfer size limitations inherent in inline transfers. We evaluate two options for the storage service. The first is Amazon S3, which is the baseline configuration used in prior work [34, 36, 37, 63, 52]. The second is ElastiCache, a cloud-native in-memory data store. As shown in prior work [36, 37], ElastiCache provides extremely high performance for inter-function communication but at a high monetary cost as compared to S3. For ElastiCache, we used a single-node Redis cache of the node type cache.m6g.16xlarge having 64 vCPUs with 25 Gb/s NIC, which is one of the peak performance configurations priced at $4.7 per hour. ### Microbenchmarks We use a number of microbenchmarks, implemented in Golang 1.18, each of which evaluates one of the data transfer patterns commonly used in serverless computing (SS4.2.1), namely producer-consumer (1-1), scatter, gather, and broadcast. All these patterns comprise various numbers of instances of the producer and the consumer functions communicating one or more objects from the former to the latter. From here on, by saying a producer (consumer), we mean a producer (consumer) function instance. ### Real-World Workloads We use three data-intensive applications from the vSwarm benchmarking suite [8], which features representative workloads widely used in serverless computing, with their reference inputs. Each workload is comprised of multiple functions, deployed with Knative Serving [5], that call one another using the blocking interface, i.e., a caller function waits for the callee to respond. Each of the workloads uses one or more data transfer patterns to communicate across functions. We modify the workloads to support XDT along with the S3-based and ElastiCache baselines using the same communication API: invoke(),get() and put() (SS4.2.1). The studied workloads feature different communication patterns. _Video Analytics (VID)_ shows the 1-1 and scatter patterns, as it features a pipeline of video streaming, frame decoder and object recognition functions; where the frame decoder function invokes the object recognition function once for several frames in a decoded fragment in the scatter communication pattern. _Stacking Ensemble Training (SET)_ is a distributed ML training application, which fits the serverless programming model well due to its speed, low memory footprint, and low computational complexity [24, 53, 25]. In this workload, the first function broadcasts the training dataset when invoking several training tasks in parallel, and the last function gathers and reconciles the trained ensemble model. 
Hence, this workload's execution highly depends on the efficiency of the broadcast and gather communication patterns. Finally, the _MapReduce (MR)_ workload implements the Aggregation Query from the representative AMPLab Big Data Benchmark [50]. The gather communication pattern execution is critical for the MapReduce workload, due to the data-intensive shuffling phase between the mapper and the reducer functions. Figure 4: XDT operation in a single producer single consumer scenario (only the request path is shown). Dashed arrows show the control plane, solid lines show the data plane, and the thick solid lines show data streaming in the data plane. #### 6.5.1 Cost Model We estimate the cost of executing the applications we study from the application developer's perspective, according to the AWS pricing models [11, 12, 13]. The price of a single function invocation, from the perspective of an application developer, comprises a small fixed fee for invoking a function, another fee proportional to the product of the processing time and maximum memory footprint of that invocation, and the cost of storage for transferring the data. For all studied functions, we assume the maximum function memory footprint of 512MB, and use the processing times as measured in SS7.2. Storage costs are billed on the GB/month (AWS S3 [12]) or GB/hour (AWS ElastiCache [11]) bases. In our cost model, we take the minimal possible price for storing transferred data, assuming that ephemeral storage de-allocates transferred data immediately after the last retrieval. ## 7 Evaluation We compare XDT to through-storage transfers based on Amazon S3 and ElastiCache (EC). We first study the performance of the evaluated communication mechanisms on microbenchmarks. We then assess the performance and cost of real-world applications running in a serverless cloud. ### Microbenchmarks This subsection further quantifies the latency and effective bandwidth characteristics of XDT in common communication scenarios (SS6.4): 1-1, gather, scatter, and broadcast. The 1-1, or producer-consumer, pattern is typical of chained function invocations accomplished via the invoke() API call. Gather, or reduce, is essential for applications with functions whose input is the output of several other functions and that use the put()/get() API. Scatter, or map, is important when functions have a large fan-out of calls to other functions, passing the objects via the invoke() and the put()/get() APIs. Broadcast is used by functions that distribute the same data among many consumers, accomplished via a single put() call followed by multiple get() calls with the _same_ S3 key or XDT reference. #### 7.1.1 Producer-Consumer Communication We focus on the 1-1 (producer-consumer) pattern to study the latency characteristics of the serverless communication choices. Latency is a key metric for interactive, user-facing cloud services, with both median and tail latency considered critical. Figure 5 plots the median and tail (99th percentile) latency for S3, ElastiCache and XDT-based transfers for 10KB (small) and 10MB (large) objects. For small objects, transfers through ElastiCache in-memory cache offer much lower latency than transfers through S3, a cold storage service. The median (tail) latency with ElastiCache is 89% (92%) lower than that with S3. XDT offers a further improvement compared to ElastiCache, with median (tail) latency 12% (10%) lower than ElastiCache. 
XDT has better latency than transfers through S3 and ElastiCache because XDT avoids writing and reading the object on intermediate nodes. For large objects, the median (tail) latency of the ElastiCache-based transfers is 87% (90%) lower than the S3-based ones. XDT shows median and tail transfer latencies 45% and 34% shorter than those with ElastiCache. Larger object sizes incur higher write and read latencies while transferring the objects through third-party services, which explains the performance advantages of XDT over both S3 and ElastiCache. Figure 5: Transfer latency cumulative distribution functions (CDFs) for S3, ElastiCache (EC) and XDT in the _1-1_ workflow. Note the logarithmic scale on the horizontal axis. Figure 6: Transfer latency of the scatter, gather, and broadcast communication patterns with the fan degrees of 4 and 16. Note that both subfigures use a logarithmic scale on the vertical axis, but the scales differ across subfigures. #### 7.1.2 Collective Communication We evaluate the speed of the collective communication patterns, namely the gather, scatter, and broadcast, by comparing their latency and effective bandwidth, which is calculated as the size of the transferred objects divided by the end-to-end time of the transfer. We next compare the latency for gather, scatter, and broadcast patterns. We study fan-in (gather) and fan-out (scatter, broadcast) degrees of 4 and 16, and consider two transfer sizes: 10KB (small) and 10MB (large). Figure 5(a) shows the results for S3, ElastiCache, and XDT transfers of 10KB. For the small transfers, ElastiCache consistently outperforms S3, delivering a latency 9.2-11.0\(\times\) lower at the fan degree of 4 and 7.8-10.8\(\times\) lower at the fan degree of 16. This result corroborates prior work [37] that also noted that transfers via in-memory storage such as ElastiCache significantly improve performance over transfers via S3. XDT consistently matches or outperforms ElastiCache, with a latency up to 1.16\(\times\) lower than ElastiCache. These trends persist for larger 10MB transfers as well, shown in Figure 5(b). ElastiCache continues to outperform transfers through S3, with the transfer latency up to 7.7\(\times\) lower. Meanwhile, XDT improves on ElastiCache by delivering 1.2-1.9\(\times\) lower latency. Lastly, XDT achieves higher effective bandwidth than both S3 and ElastiCache. For 10MB transfers with a fan degree of 32, XDT achieves 16.4Gb/s (82% of the NIC peak bandwidth of 20Gb/s). In contrast, ElastiCache-based transfers deliver 14.0Gb/s (70% of the peak bandwidth) while S3-based transfers deliver 5.5Gb/s (28% of the peak bandwidth). ### Real-World Workloads Next, we study three data-intensive applications (SS6.5), presenting their end-to-end latency along with a detailed breakdown of the sources of latency (Figure 7) and estimating the associated cost (Table 2) of executing an invocation for each of the studied applications. **Video Analytics** spends 39% and 5% of its execution time in transferring the video fragment and the frames in the S3-based and ElastiCache-based configuration respectively. With XDT, this fraction decreases to 4%, yielding an overall processing time reduction of 36% and 2% compared to the S3 and ElastiCache baselines, respectively. This speedup comes from 9.5\(\times\) and 1.2\(\times\) faster transmission of video and frames, respectively. 
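The cost figures that follow are derived from the per-invocation model of Sec. 6.5.1. A rough sketch of how such an estimate can be computed is shown below; the function name, rates, and the example at the end are illustrative placeholders we introduce here, not quoted AWS prices.

```python
def invocation_cost_usd(exec_seconds, mem_gb, transferred_gb, storage_gb_hour_price,
                        request_fee=2e-7, gb_second_rate=1.7e-5):
    """Rough per-invocation cost in the spirit of Sec. 6.5.1:
    fixed request fee + compute fee (time x memory) + ephemeral storage cost.
    All rates here are placeholders, not actual AWS prices."""
    compute = request_fee + exec_seconds * mem_gb * gb_second_rate
    # minimal-storage assumption: transferred data lives only until its last retrieval
    storage = transferred_gb * storage_gb_hour_price * (exec_seconds / 3600.0)
    return compute + storage

# e.g., a 512MB function running for 2s that ships 0.1GB through a cache-like tier
print(invocation_cost_usd(2.0, 0.5, 0.1, storage_gb_hour_price=0.01))
```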
In terms of cost, a single invocation processed in an XDT-enabled system lowers the cost by 3\(\times\) and 56\(\times\) compared to the S3- and ElastiCache-based configurations, respectively, by avoiding the need for intermediate storage or cache. **Stacking Ensemble Training** spends 76% and 14% of execution time in data communication in the S3-based and ElastiCache-based configurations, respectively. The largest fraction of data communication is the _gather trained models_ latency component, accounting for 34% and 4% of the overall execution time in the S3-based and ElastiCache-based configurations, respectively. Using XDT decreases the gather fraction to 3% of the end-to-end latency, driving the overall data communication fraction with XDT down to 12%. Thus, XDT delivers a 3.4\(\times\) speedup over the S3 baseline and 1.05\(\times\) vs. ElastiCache. Cost-wise, XDT is cheaper by 2\(\times\) and 17\(\times\) when compared to the S3- and ElastiCache-based alternatives, respectively. \begin{table} \begin{tabular}{|l|r|r|r|r|r|r|r|} \hline & \multicolumn{3}{c|}{S3} & \multicolumn{3}{c|}{ElastiCache} & XDT \\ \hline App & Comp. & Stor. & Total & Comp. & Stor. & Total & Total (comp.) \\ \hline VID & 37 & 18 & 55 & 14 & 913 & 928 & 17 \\ \hline SET & 95 & 30 & 125 & 69 & 1104 & 1172 & 70 \\ \hline MR & 180 & 416 & 595 & 125 & 99667 & 99792 & 129 \\ \hline \end{tabular} \end{table} Table 2: Cost estimation (in \(USD\times 10^{-6}\)) for compute (Comp.) and storage (Stor.) spending when executing a single invocation for the S3-, ElastiCache-, and XDT-based configurations, based on AWS Lambda [13], AWS S3 [12], and AWS ElastiCache [11] prices as of 1/1/2023. Figure 7: Latency breakdown of real-world workloads, deployed in XDT-, ElastiCache (EC)-, and S3-based systems. **MapReduce** shows 70% and 62% of execution time spent in communication for the S3 and ElastiCache configurations, respectively. Moreover, 40% of the overall time in the S3 baseline is spent retrieving the original input from S3 and writing back the results to S3, which we do not optimize with XDT. The rest, i.e., 30% of the time, is subject to XDT optimization. XDT achieves a 1.26\(\times\) overall speedup over the S3 baseline and 1.05\(\times\) over ElastiCache. XDT's speedup is achieved due to a significant decrease in data shuffling, namely the mapper-put and reducer-get phases, which are reduced by 23.4\(\times\) and 4.8\(\times\), respectively, compared to the S3 baseline, and by 30% and 55%, respectively, compared to ElastiCache. Compared to the two previously-discussed workloads, the cost of executing the MapReduce workload is reduced by an even greater amount with XDT, namely by 5\(\times\) and 772\(\times\) vs. the S3- and ElastiCache-based alternatives, respectively. This large cost reduction associated with XDT is attributable to the large amount of ephemeral data transferred during the shuffle phase of MapReduce, making through-storage/cache transfers particularly expensive. ### Summary XDT enables efficient transfer of ephemeral data across functions without adding cost or complexity to the application logic. In line with the requirements articulated in §3, XDT delivers high performance, compatibility with existing semantics and API, native autoscaling, and security. By design, XDT is much faster than transferring data via conventional storage services, such as AWS S3. XDT avoids unnecessary writing and reading to the durable tier of storage services, which incurs high latency overheads and carries a monetary cost. 
Compared to an in-memory cache, XDT offers similar or better performance _without_ the staggering cost or complexity overheads associated with using an additional service. Indeed, our results show that the cost overheads of an in-memory cache are prohibitive, exceeding compute costs by one to two orders of magnitude (Table 2). Meanwhile, the use of an additional service for the caching tier burdens the developer with additional design complexity in the application logic and may require manual reconfiguration (or further application complexity) to accommodate changes in load or data volume. In contrast, XDT avoids the need for an additional service and its bandwidth naturally scales with the number of producer and consumer instances. ## 8 Related Work Prior works employ direct inter-function communication approaches [49, 60, 63] by exposing the IP addresses of function instances to the user code. Doing so increases the attack surface and places the burden of load balancing and scaling on the user. We argue that such optimizations undermine the core principle of serverless, namely the cloud provider's transparent management of cloud infrastructure, for which autoscaling is the feature of paramount importance. Other works [54, 58, 37, 43, 37] consider a number of ephemeral storage service designs, aiming to provide high-performance transfers at low cost. However, we show in SS7.2 that the cost of even the slowest tier (e.g., AWS S3 as in several works [52, 37, 43]) can dominate the overall cost of executing a data-intensive application in serverless clouds. Other prior works [54, 57, 56, 26, 26] consider extending serverless with a distributed shared memory (DSM) tier and pass references or capabilities over the DSM around instead of data objects. In contrast to these proposals, the data objects transmitted via XDT are immutable, avoiding the complexity of supporting data consistency models. Like in the XDT design, researchers have proposed separating the control and data planes to avoid centralized bottlenecks and deliver high performance. For example, Crab [38] and Prism [32] follow a similar separation to reduce the load on L4 and L7 load balancers, respectively. XDT ships a function invocation's data along the compute for processing, which is complementary to the approaches that ship compute to data or data to compute. Shredder [65] suggests running compute operations directly at the storage tier. Kayak [64] and Bhardwaj et al. [21] investigate the balance between moving data vs. moving compute, suggesting hybrid schemes to combine both. XDT enables high-performance transfers without making assumptions on function instances co-location and data locality, which makes it fundamentally different to the following prior works. SAND [10] accelerates data communication proposing a hierarchical messaging bus to facilitate transfers between co-located function instances. FaaSFlow [41], Sledge [42], and Wukong [23] focus on leveraging locality to accelerate the execution of serverless multi-function applications. Nightcore [33] suggests exchanging messages over OS pipes for co-located functions. Despite the potential efficiency gains, today's commercial systems, e.g., AWS Lambda, tend to avoid serverless function co-location as such placements may lead to hotspots [20, 9], instead relying on statistical multiplexing across a wide server fleet. ## 9 Conclusion The performance of data-intensive serverless applications heavily depends on the efficiency of inter-function data transfers. 
The data transfer options available in today's serverless clouds and those proposed by researchers fall short of serverless applications' demands. In response, we introduce XDT, a high-speed API-preserving direct function-to-function communication method that integrates seamlessly with the existing autoscaling infrastructure. XDT leverages control/data
2303.18098
Simulation of a Solar Jet Formed from an Untwisting Flux Rope Interacting with a Null Point
Coronal jets are eruptions identified by a collimated, sometimes twisted spire. They are small-scale energetic events compared with flares. Using multi-wavelength observations from the Solar Dynamics Observatory/Atmospheric Imaging Assembly (SDO/AIA) and a magnetogram from Hinode/Spectro-Polarimeter (Hinode/SP), we study the formation and evolution of a jet occurring on 2019 March 22 in the active region NOAA 12736. A zero-$\beta$ magnetohydrodynamic (MHD) simulation is conducted to probe the initiation mechanisms and appearance of helical motion during this jet event. As the simulation reveals, there are two pairs of field lines at the jet base, indicating two distinct magnetic structures. One structure outlines a flux rope lying low above the photosphere in the north of a bald patch region and the other structure shows a null point high in the corona in the south. The untwisting motions of the observed flux rope was recovered by adding an anomalous (artificial) resistivity in the simulation. A reconnection occurs at the bald patch in the flux rope structure, which is moving upwards and simultaneously encounters the field lines of the null point structure. The interaction of the two structures results in the jet while the twist of the flux rope is transferred to the jet by the reconnected field lines. The rotational motion of the flux rope is proposed to be an underlying trigger of this process and responsible for helical motions in the jet spire.
Jiahao Zhu, Yang Guo, Mingde Ding, Brigitte Schmieder
2023-03-31T14:42:25Z
http://arxiv.org/abs/2303.18098v1
# Simulation of a Solar Jet Formed from an Untwisting Flux Rope Interacting with a Null Point ###### Abstract Coronal jets are eruptions identified by a collimated, sometimes twisted spire. They are small-scale energetic events compared with flares. Using multi-wavelength observations from the Solar Dynamics Observatory/Atmospheric Imaging Assembly (SDO/AIA) and a magnetogram from Hinode/Spectro-Polarimeter (Hinode/SP), we study the formation and evolution of a jet occurring on 2019 March 22 in the active region NOAA 12736. A zero-\(\beta\) magnetohydrodynamic (MHD) simulation is conducted to probe the initiation mechanisms and appearance of helical motion during this jet event. As the simulation reveals, there are two pairs of field lines at the jet base, indicating two distinct magnetic structures. One structure outlines a flux rope lying low above the photosphere in the north of a bald patch region and the other structure shows a null point high in the corona in the south. The untwisting motions of the observed flux rope were recovered by adding an anomalous (artificial) resistivity in the simulation. A reconnection occurs at the bald patch in the flux rope structure, which is moving upwards and simultaneously encounters the field lines of the null point structure. The interaction of the two structures results in the jet while the twist of the flux rope is transferred to the jet by the reconnected field lines. The rotational motion of the flux rope is proposed to be an underlying trigger of this process and responsible for helical motions in the jet spire. Solar magnetic fields(1503); Solar magnetic reconnection(1504); Magnetohydrodynamics(1964); Solar activity(1475); Solar active region(1974) Jiahao Zhu, Yang Guo, Mingde Ding, Brigitte Schmieder ## 1 Introduction Coronal jets are one of the most common phenomena in the solar atmosphere. They are typically collimated beams with helical motions in a transient process. Modern observations of jets have already been taken through a wide range of wavelengths from X-ray to extreme ultraviolet (EUV) (Shibata et al., 1992; Cirtain et al., 2007; Patsourakos et al., 2008). Observations with high spatio-temporal resolutions from the Atmospheric Imaging Assembly (AIA; Lemen et al., 2012) on board the Solar Dynamics Observatory (SDO; Pesnell et al., 2012) and simultaneous spectral observations from the Interface Region Imaging Spectrograph (IRIS; De Pontieu et al., 2014) provide an opportunity to study the origin and development of jets (Joshi et al., 2020; Yang et al., 2020). Although the origins and morphology of jets have been investigated for decades, the results are still inconclusive (Liu et al., 2011; Shibata et al., 1994; Raouafi et al., 2016; Schmieder, 2022). Usually jets can be classified into standard jets, which fit the standard reconnection picture, and blowout jets, analogous to a miniature version of blowout explosion at their bases (Moore et al., 2010). Coronal jets extend in length from tens of Mm to about 400 Mm and possess a speed from about 100 km s\({}^{-1}\) to 600 km s\({}^{-1}\) (Nistico et al., 2009; Paraschiv et al., 2010). They are not isolated phenomena and are believed to be closely related to other activities like coronal mass ejections, particle acceleration in the solar wind, and even to coronal heating and so on (Nitta et al., 2006; Yu et al., 2014; Joshi et al., 2020). There are still many questions to be solved for solar coronal jets. 
Magnetic reconnection is assumed to be responsible for explosive jet events in the corona. Fan-spine structure is composed of a fan-like dome with a null point connecting an outer spine to a remote region (Parnell et al., 1996; Priest and Titov, 1996). The reconnection can easily occur at a null point and transfer the plasma to the far end of the spine. When reconnection takes place at a null point, energy stored in the magnetic field is continuously converted into the kinetic energy and thermal energy of plasma. And then triggered by a breakout structure, an intensive explosion or rapid motion of plasma can be observed in the corona. In terms of the magnetic topology, fan-spine structures (Liu et al., 2011), eruption of magnetic flux ropes (Adams et al., 2014; Zhu et al., 2017) and bald patches with a continuous build-up of electric current (Guo et al., 2013; Schmieder et al., 2013) may easily trigger jet events. Bald patch is a region where the magnetic field lines are tangential to the photosphere and the direction of field lines points from negative polarity to positive polarity (Titov et al., 1993). The magnetic configuration of parasitic polarities situated in a large opposite polarity tends to generate such topologies. In some studies, successive jets in coronal holes exhibit a homologous behaviour of recurrence, resulting from impulsive reconnection between closed and open field lines (Pariat et al., 2010). Although spectral and EUV images have been widely used in the analysis of the jet formation and evolution, the lack of simulation is still a restriction on exploring the nature of jets (Schmieder, 2022). Recently, state-of-the-art magnetohydrodynamic (MHD) models with high performance computing techniques help to understand the origin of jets. Fang et al. (2014) simulated the process of a twisted flux rope emerging in an open magnetic field and found that magnetic reconnection is responsible for the explosive events. Gonzalez-Aviles et al. (2020) performed simulations of different parameters to probe the impacts on the origin and morphology of the jets. They found the significance of the resistivity and thermal conductivity in constructing the jets. Wyper et al. (2019) simulated a helical jet caused by a sigmoidal flux rope eruption, with artificial surface motions injecting free energy into the active region. This model exhibits a coupled mechanism of the breakout reconnection and ideal MHD instability in the process of jet eruption. However, most of the simulations about jets mentioned above are ideal MHD models controlled by artificial settings. Simulations utilizing magnetograms from observations have not yet been widely studied. A force-free field is widely used to study the stable magnetic configuration before eruptions (Wiegelmann et al., 2006; Wiegelmann and Sakurai, 2021). Such fields can be described by \(\left(\nabla\times B\right)\times B=0\) and it becomes \(\nabla\times B=\alpha\left(r\right)B\) if considering the divergence-free condition. When the torsional parameter \(\alpha\left(r\right)\) equals to zero or is a constant, the field is a potential field or a linear force-free field, which can be solved in such particular situations. To further explore the actual physical conditions, a non-linear force-free field (NLFFF) is solved where \(\alpha\left(r\right)\) varies with position. 
Although no general analytic solutions exist because of the nonlinearity of the equation, various algorithms have been proposed and implemented to solve this issue (Schrijver et al., 2006; Metcalf et al., 2008). In this aspect, an NLFFF extrapolated from the observed magnetogram and the data-driven simulation based on such a field further help us to understand the magnetic structures of the active region and nature of flux rope eruptions (Guo et al., 2019) including some complex solar eruptions (Jiang et al., 2022). Jiang et al. (2016) simulated a coronal magnetic field which transformed from the pre-eruptive to eruptive state following a long-duration quasi-static evolution, and found the establishment of a positive feedback between emerging flux and external magnetic reconnection. Therefore, a promising method is to perform the data-constrained or data-driven simulation that can reproduce the jet eruption with helical motions as observed. Recently, statistical studies on the origin of coronal jets draw a conclusion that mini-filament eruptions play a critical role in triggering both standard and blowout jets (Sterling et al., 2022; Baikie et al., 2022). The rotational motion of a mini-filament along with its upward motion to the top of an arch system can result in magnetic reconnection. The reconnection between the inner and ambient field lines decreases the bound force imposed on the mini-filament and thus leads to its eruption. The jet followed with the mini-filament eruption shows a mixture of both cold and hot material and exhibits a helical motion, often seen in coronal mass ejections (Sheeley and Wang, 2007) and active region flares (Zhang et al., 2022). These characteristics suggest that the relations between solar eruptions at different scales show quite similar features. Besides, ideal experiments of magnetic reconnection have also been performed. Farid et al. (2022) aimed to check the presence of a mini-filament. They used the flux-rope insertion method to create an energized NLFFF model of coronal jets and identified the consequent eruption as an untwisting jet. Pariat et al. (2015) demonstrated that for getting a helical jet, it is a primary requirement to store sufficient magnetic free energy and impulsively release them. They thought the pre-eruption magnetic configuration might not be so important. Wyper et al. (2017) on the other hand proposed that the magnetic breakout is a universal model for solar eruptions including coronal jets. However, current simulations can not spontaneously produce a mini-filament structure directly from observed magnetograms. In this paper, we analyze the motion of a jet through multi-wavelength observations from AIA and IRIS. Then we simulate the active region with a zero-\(\beta\) MHD simulation using the Message Passing Interface Adaptive Mesh Refinement Versatile Advection Code (MPI-AMRVAC; Keppens et al., 2003, 2012; Porth et al., 2014; Xia et al., 2018) and find the structure of a null point and a flux rope at the jet base. In Section 2, we show the observations of the jet and depict its kinematic evolution. The extrapolation and simulation methods are also described. Section 3 shows the results and Section 4 presents a summary and discussion about our simulation. We show the importance of the twisting flux rope in the reconnection of this jet event. 
## 2 Observations and Methods ### Instruments SDO/AIA provides regular full-disk EUV observations at several different wavelengths with a spatial resolution of \(0.6^{\prime\prime}\) and a temporal cadence of 12 s. The AIA channels respond to a wide range of temperature, (0.1-10 MK), and can be used to explore solar features over a wide range of scales. IRIS is widely used to detect the transition region between the chromosphere and corona. It switches to different raster modes and obtains spectral data with different resolutions. Also, it can provide high-resolution slit-jaw images (SJIs) in several wavebands such as C II 1330 A, as a supplement to the SDO/AIA observations. The C II 1330 A SJIs cover a specific region with a field of view of \(60^{\prime\prime}\times 68^{\prime\prime}\) and a pixel size of \(0.16^{\prime\prime}\times 0.32^{\prime\prime}\). We also adopt simultaneous spectroscopic observations right above the reconnection site to analyze the Doppler shift in this region. We use the Mg II k lines in this work, which forms at chromospheric temperatures, and the SJI 1330 for the comparison of the structures of AIA. The magnetogram used in the magnetic field extrapolation and numerical simulation comes from the Spectro-Polarimeter (SP; Lites et al., 2013) of the Solar Optical Telescope (SOT; Suematsu et al., 2008; Shimizu et al., 2008; Tsuneta et al., 2008; Ichimoto et al., 2008) on board Hinode (Kosugi et al., 2007). It was obtained by scanning this region step-by-step along the east-west direction in a fixed band of wavelengths centered on the Zeeman-sensitive Fe I lines at 6302 A. The observation started at 01:02:05 UT and ended at 01:59:30 UT, right before the jet event which started at around 02:03 UT. The whole magnetogram covers a field of view of \(270^{\prime\prime}\times 164^{\prime\prime}\) with a pixel size of \(0.32^{\prime\prime}\) along the slit and a scanning step size of \(0.3^{\prime\prime}\). To compensate the stretch caused by the long-time scanning, we scale its spatial resolution with the assistance of the magnetogram from the Helioseismic and Magnetic Imager (HMI; Scherrer et al., 2012; Schou et al., 2012) on board SDO by comparing the main common features in the magnetograms, which have already been corrected for projection effects. The pixel size of the HMI magnetogram is \(0.5^{\prime\prime}\). The position information between the instruments we used is shown in Figure 1. ### Observations The jet event started at around 02:03 UT on 2019 March 22 in the active region (AR) 12736 and lasted for several minutes. This event has been intensively studied from an observational point of view (Yang et al., 2020; Joshi et al., 2020, 2021, 2022; Schmieder, 2022; Schmieder et al., 2022). Multi-waveband observations from SDO/AIA recorded this intense eruption in all its wavebands. We select four EUV images and one UV image provided by SDO/AIA as shown in Figure 2, which reflect different temperatures in the chromosphere and corona. At the EUV brightness peak time around 02:06 UT, the helical motion of the jet around its axis can be recognized in most of the wavebands through the bright and dark patterns. Meanwhile, IRIS observed the active region and one of its slit position was just above the reconnection site of the jet base at the early eruption phase. Figure 2 shows some observational features, such as the shape of the jet base. An X-shaped reconnection site and a flux rope can also be inferred by the heated brightening structures as shown in Figure 2. 
SOT/SP on board Hinode obtained magnetic field on the photosphere from 01:02 UT to 01:59 UT by scanning this region with a spectrograph. A potential field can be extrapolated when the magnetic field is in a stable or quasi-stable state. Usually it is valid doing so before eruptions occur. As the magnetogram shows, this magnetic field possesses a major positive polarity at northeast and a major negative polarity at southwest, and several parasitic negative polarities around the positive one. A fan-spine structure may easily form in such a magnetic configuration around those parasitic polarities. In this case, the fan lies above main polarities in the east and the outer spine connects to the far side in the west region. We focus on the small field of view of the jet base using the SDO/AIA full-disk observations. To study the propagation of the twisting jet, we select a slit along the direction of the jet indicated as the solid white line in Figure 3(a) and present the time-distance map of the slit in Figure 3(b). The figure shows the evolution of the jet spire, a bulk motion of bright plasma with a velocity of \(\sim 300\) km s\({}^{-1}\) followed by a motion of dark structure at around 02:04 UT. This suggests that the hot outflows set in through magnetic reconnection at first, followed by the eruption of the cold material, which may come from the flux rope in the jet base, and move along the reconnected field lines. The curved pattern in Figure 3(b) also shows the helical motion of the jet, indicating that a twist structure, probably a flux rope, may exist previously at the jet base. To further look into the helical motion of the jet, we draw five equi-distanced cuts perpendicular to the slit in Figure 3(a). Figure 4 displays the time-distance maps of the five cuts. The eruptive motion of the bright and dark patterns shown by slices in two perpendicular directions in both Figure 3(b) and Figure 4, as well as the recurrent features labeled by dashed white lines in Figure 4, exhibit the helical motion of the jet. The panels of Figure 4 in sequence also shows the separation between these two patterns. The dark wave feature in the last panel of Figure 4, which is sliced from the transverse direction, clearly shows an oscillatory motion in the eruption. IRIS performed spectral observations with a 4-step raster for the region covering the reconnection site during the whole process of the jet event. At the time of the jet eruption, the profile of the Mg II lines shows a broad extension in both the red and blue wings, which indicates complex material motions in the transition region. The presence of complex motions has been separately verified by Joshi et al. (2020). We adopt the moment method to calculate the average Doppler velocity regardless of the irregular profile. We mainly analyze the Mg II k lines of the first slit position, which is located just above the reconnection site during the peak time. We thus obtain the Doppler velocity from observations around the peak time and then compare it with the numerical values calculated from the data-constrained simulation in Section 3. ### Magnetic field extrapolation and data-constrained simulation Based on the vector magnetic field obtained from SOT/SP, we make an NLFFF extrapolation to study the magnetic topology structure in AR 12736. 
To further investigate the relationship between the magnetic structure and observational morphology, we scale the spatial resolution of the SOT/SP magnetogram by comparing it with the SDO/HMI magnetogram that has already been corrected for the projection effects. Meanwhile, we resize the original data to reduce the spatial resolution by twice for relieving the computational burden. The NLFFF is reconstructed from the magneto-frictional module implemented by Guo et al. (2016, 2016) in MPI-AMRVAC. It can relax the potential field together with the bottom boundary to NLFFF in the computation domain. The method can be applied to multiple situations, such as Cartesian or spherical coordinates with uniform grids or adaptive mesh refinement grids. We note that it is hard to attain a completely force-free field relaxed from an observed magnetogram, because the Lorentz force deduced from observations is not nicely balanced in the photosphere due to the role of gravity (Zhu et al., 2016). So, the NLFFF might not be in a complete equilibrium state initially. Figures 5(a) and 5(b) show the vertical magnetic field and the distribution of current density magnitude in the extrapolated field 1.2 Mm above the photosphere. We zoom in the region of interest showing a current density isosurface of 0.03 A m\({}^{-2}\) and the distribution of \(J_{z}\) in Figures 5(c) and 5(d), respectively. The concentrated current reveals a strong magnetic flux rope structure in the vicinity of the small parasitic negative polarities in the north. It is the same structure detected in the HMI vector magnetograms by Joshi et al. (2020). And the elongated double-peaked \(J_{z}\) pattern with hooks confirms the existence of the flux rope (Aulanier and Dudik, 2019; Joshi et al., 2020). Figure 6(a) displays two magnetic structures in the extrapolated field, a flux rope and a null point. Figure 6(b) shows the flux rope lying above a bald patch region low in the photosphere. By tracing the magnetic field lines of the flux rope, we find that they are actually a part of the bald patch field lines. Figure 6(c) shows the null point in the south above the flux rope. The X-shaped reconnection site is reproduced with the spine connecting to a remote region in the west. To probe the nature of the jet, we conduct a data-constrained MHD simulation based on the result of the NLFFF extrapolation. The model we adopt is the same as that in Guo et al. (2019), which solves the zero-\(\beta\) MHD equations. For the boundary conditions, we select the data-constrained case, which fixes the bottom boundary, rather than the data-driven case, because of the limitation of the scanned magnetogram from SOT/SP. The MHD equations are solved in the dimensionless forms in MPI-AMRVAC. In this model, the gravity and gas pressure gradient are omitted in the momentum equation, and the energy equation including the heat conduction term and others is dropped entirely. In other words, we only solve the continuity equation, momentum equation and magnetic induction equation to get the density, velocity, and the magnetic field. The initial condition is set to be the relaxed NLFFF and the initial density distribution is calculated by assuming a temperature distribution of a stepwise function of height. We set the computation grid to be \(288\times 200\times 200\), namely \(166\times 85\times 85\) Mm\({}^{3}\) in real scale with the resolution to be \(0.58\times 0.42\times 0.42\) Mm\({}^{3}\). 
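For reference, the reduced system described above — continuity, a momentum equation without gravity or gas pressure, and the induction equation — takes the standard zero-\(\beta\) form below; this is our own transcription of the usual equations, consistent with the description in the text, since the paper does not write them out explicitly:

\[
\begin{aligned}
\frac{\partial \rho}{\partial t} + \nabla\cdot(\rho\mathbf{v}) &= 0,\\
\rho\,\frac{\partial \mathbf{v}}{\partial t} + \rho\,(\mathbf{v}\cdot\nabla)\mathbf{v} &= \frac{1}{\mu_{0}}\,(\nabla\times\mathbf{B})\times\mathbf{B},\\
\frac{\partial \mathbf{B}}{\partial t} &= \nabla\times\left(\mathbf{v}\times\mathbf{B}-\eta\,\nabla\times\mathbf{B}\right),
\end{aligned}
\]

where \(\eta\) is the resistivity discussed next.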
Also, an anomalous resistivity is added in this model to facilitate the occurrence of magnetic reconnection. When the current density \(J\) of a local point is higher than the critical value \(J_{c}\), the resistivity \(\eta\) is set to be \(\eta=\eta_{0}\times(J-J_{c})^{2}/{J_{c}}^{2}\) and otherwise to be zero. Here, we set \(\eta_{0}\) and \(J_{c}\) to be \(5\times 10^{-4}\) and \(1000\) in normalized units, respectively. The whole simulation time is about half an hour. We output the snapshots of the simulation results every 24 s for further comparisons with the AIA and IRIS observations. ## 3 Results In the NLFFF extrapolation, we recognize a flux rope low in the photosphere related to the north parasitic polarities, and an X-shaped current sheet high in the corona in a south region between the two main polarities, which is also the reconnection site as shown in Figure 6. At the early time of the simulation, the eastern footpoint of the flux rope slowly drifts close to the positive polarity in its southeast. Meanwhile, the main body of the flux rope rotates around its axis counterclockwise and approaches the reconnection site located high in the south region. The rotation slowly changes the magnetic configuration and the kinetic energy of plasma is accumulated through the continuous reconnection. When the field lines of the flux rope encounter the X-shaped current sheet, they join the reconnection process with other field lines within the sheet. Then, the explosive burst takes place and the jet with helical motion comes out of the null point along the spine connecting to the faraway side in the major negative polarity. At last, the field lines shrink to a low magnetic arch structure after the burst. Thus, we can detect the hot reconnection outflows and later the cold material from the flux rope, corresponding to the bright and dark patterns in AIA observations, respectively. The general evolution is revealed by four typical moments shown in Figure 7. To compare the simulation with the observations of SDO/AIA and IRIS, we project the simulation to the line of sight and superpose it on the AIA observations in Figure 8. Since the magnetogram from SOT/SP is scanned through a period of time, we could not give an exact time that the simulation starts from. Thus we use T\({}_{0}\) to represent the start time and focus on the time evolution of simulations and observations. The simulation matches the observations quite well in the perspective of the triangular shape at the base. The field lines of the flux rope fit with the anemone-like arcade and the field lines at the null point display a triangular shape like the jet. Note that only magnetic field lines are displayed in Figure 8, and the plasma is assumed to move along the field lines. We find that the time-sequential images match roughly synchronously with the real-time evolution revealed in the observations. All these results indicate that the simulation reproduces the observations to a satisfactory degree. We further extract the velocity and density information from the simulation snapshots. And then we analyze the Doppler velocity along the line of sight and compare it with that from observations. The results are shown in Figure 9. The Doppler velocity from the simulation is derived from the Gaussian fitting based on the distribution function of density squared versus velocity along the line of sight as shown by each white line in Figure 9(b). 
Note that every white line is perpendicular to the IRIS slit and can be projected to one point on the slit. Then we can obtain the velocity distribution from the simulation at these points along the slit. We discard values whose magnitudes greatly exceed the maximum absolute value found in the observations and then spatially interpolate the remaining values onto the observational sampling interval. This is intended for calculating the cross-correlation between the Doppler velocities from observations and simulations. Because the zero-\(\beta\) simulation lacks much of the thermodynamic information needed for detailed line synthesis, we adopt this averaged approach when analyzing the Doppler velocities. For the observational result at a selected time, we compare it with each simulation snapshot. The best result, with a correlation coefficient of 0.71, is shown in Figure 9(e). The time difference between the observation and simulation is about 6.5 minutes for this best matched case, which is acceptable considering the simplified MHD equations used in the simulation. This jet event has already been studied by two different groups, as mentioned in Section 2. Different opinions were proposed regarding the trigger mechanism of the jet event. Yang et al. (2020) did an NLFFF extrapolation to derive the main magnetic structure of this event. They confirmed a fan-spine magnetic topology and found two flux ropes in the extrapolated results. Also, they found a null point responsible for the reconnection site, which is the same as the one we find in this work. In their opinion, the breakout-type reconnection caused by one of the flux ropes leads to the eruption of the jet. By contrast, Joshi et al. (2020) checked the magnetogram evolution prior to the occurrence of the event and analyzed the vector magnetic field map in comparison with an ideal MHD simulation. They found a flux rope along the polarity inversion line in the north, which migrates towards the south due to photospheric motions, leading to the formation of a small parasitic bipole. They also verified the rotational motion of the flux rope through analysis of the IRIS spectra in their Figure 8(b). Though less obvious, the rotational characteristics can also be seen in our Figure 9(d): strong, extended bidirectional shifts appear around \(y=75\); moving from \(y=75\) along the \(y\) axis the blue shifts fade, while the red shifts fade in the opposite direction. Joshi et al. (2020) further found that the bipole is also a bald patch region, above which a current sheet forms. Thus the twist of the flux rope is transferred into the jet through reconnection without a direct eruption. These two studies reach a consensus that the helical motion of the jet comes from the pre-existing twist in the flux rope. Nevertheless, the critical difference between these two studies lies in how the jet erupts, namely, how the twist is transferred into the jet. Such a difference could be ascribed to the different methods used in the two studies, since the study by Yang et al. (2020) is based on the magnetic field extrapolation together with a high-resolution observation from NVST, while that of Joshi et al. (2020) focuses on the evolution of the magnetogram and comparisons with ideal MHD simulations. Neither of them performed dynamic simulations of this active region. Thus, we make a zero-\(\beta\) MHD simulation in this work to probe how the jet is triggered. The Doppler velocity of our simulation is in accordance with the IRIS observations in terms of the correlation analysis. 
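A rough sketch of this comparison, as we carry it out conceptually, is given below; the array names and shapes are illustrative, and the density-squared weighted mean is a simplified stand-in for the Gaussian fit of the density-squared-versus-velocity distribution described above.

```python
import numpy as np

def synthetic_doppler(v_los, rho):
    """Density^2-weighted line-of-sight velocity along one ray; a weighted-mean
    stand-in for fitting the density^2-vs-velocity distribution with a Gaussian."""
    w = rho ** 2
    return float(np.sum(w * v_los) / np.sum(w))

def best_matching_snapshot(v_obs, snapshots):
    """Cross-correlate the observed Doppler profile along the slit with the
    synthetic profile of every simulation snapshot; return the best match."""
    scores = [np.corrcoef(v_obs, v_sim)[0, 1] for v_sim in snapshots]
    best = int(np.argmax(scores))
    return best, scores[best]
```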
Specifically, our simulation shows that the flux rope lies low at the bald patch and the null point is located high in the corona. However, they are not far from each other when observed from the viewing angle of the spacecraft. In that sense, it is hard to distinguish their distinct features only according to the spectrum that is actually an integration along the line of sight. Therefore, pure analysis of observations might not lead to a solid conclusion. The combination of MHD simulations and observations is needed for learning the physical nature of solar eruptive events. ## 4 Summary and Discussion With the simultaneous observations from AIA and IRIS, we analyze the evolution of the jet in AR 12736 on 2019 March 22. We draw a time-slice map for the slit along the jet axis and find that the jet is twisting around its axis as propagating outwards at a speed of \(\sim 300\) km s\({}^{-1}\). The jet erupts with hot matter followed by cold matter. We suppose that they correspond to the hot reconnection outflows through a null point and the cold material from the flux rope. The helical motion comes from the untwisting of the flux rope. Furthermore, we utilize an NLFFF to check the topological structure of the magnetic field with the photospheric vector magnetic field from SOT/SP. A null point structure in the south and a flux rope in the north of a bald patch region are found to exist in the active region. Then, we adopt a zero-\(\beta\) MHD simulation based on the results of the NLFFF extrapolation. It reveals that the jet may be produced by the magnetic reconnection at the null point. With the untwisting motion of the flux rope and slipping motion of its footpoint, the inner field lines of the flux rope encounter and then reconnect with the field lines originally situated at the null point. The reconnection outflows propagate along the spine line connected to the major negative polarity in the west region, which correspond to the bright and dark patterns in the AIA images. We experiment with both the SDO/HMI magnetogram and SOT/SP magnetogram and the reconstruction of the NLFFF for further MHD simulations. An X-shaped current sheet was found in both cases; however, the flux rope can only be found in the SOT/SP case. Although we adopt the same magnetogram as that used in Yang et al. (2020), the magnetic structures obtained are different in the two cases. We attribute the different results to the usage of different extrapolation methods. Moreover, in our trials without extra anomalous resistivity, the rotational movement of the flux rope near the photosphere cannot be reproduced in the simulation. The resistivity is artificially added in the regions where the current density is large and thus the flux rope erupts with high resistivity and reveals untwisting motions. This implies that the rotational untwisting motion of the flux rope at the beginning comes from the reconnection in the flux rope. Furthermore, the ideal MHD simulations conducted by Wyper et al. (2018) show that minority-polarity intrusions may add free energy into the field and can lead to both intermittent low-level reconnection and explosive, high-energy-release reconnection above these regions. Though the moving of the parasitic polarity is not considered in our data-constrained simulation, this may partly account for the impacts of the resistivity added in our experiments. Our MHD simulation does not contain heat conduction and radiation process. 
Therefore, the physical parameters we obtain in the numerical simulations are not quantitatively accurate. Recently, Zhou et al. (2022) conducted an MHD simulation of a sheared arcade configuration, in which a magnetic flux rope is formed and erupts through reconnection. The flux rope in their simulation rotates against its twist and transfers the twist into the surrounding field lines, which resembles our results. However, further studies are needed for clarifying the mechanism of the rotation of the flux rope leading to the reconnection between the flux rope and the spine structure connected to the remote side of the active region. In particular, we should focus on the role of the anomalous resistivity in driving the reconnection in the inner part of the flux rope and leading to its rotational motion around its axis and its upward motion in the chromosphere. We should also improve the data-constrained simulations, such as adding some specific magnetic structures and photospheric motions in order to fit the observations more closely. On the other hand, we also aim to carry out simulations with full MHD equations including heat conduction and radiative cooling to learn the physics more accurately in such jet events. J.H.Z., Y.G. and M.D.D. are supported by the National Key Research and Development Program of China (2022YFF0503004, 2021YFA1600504, and 2020YFC2201201) and NSFC (11773016 and 11733003). We thank the two anonymous referees very much for their constructive suggestions. We deeply appreciate the free access to the magnetogram and observation data provided by SDO/AIA, SDO/HMI, SOT/SP and IRIS science teams. J.H.Z. thanks Ye Qiu and Ze Zhong for their help in the work. Computational resources were provided by the High Performance Computing Center (HPCC) at Nanjing University.
2309.13196
ClusterFormer: Clustering As A Universal Visual Learner
This paper presents CLUSTERFORMER, a universal vision model that is based on the CLUSTERing paradigm with TransFORMER. It comprises two novel designs: 1. recurrent cross-attention clustering, which reformulates the cross-attention mechanism in Transformer and enables recursive updates of cluster centers to facilitate strong representation learning; and 2. feature dispatching, which uses the updated cluster centers to redistribute image features through similarity-based metrics, resulting in a transparent pipeline. This elegant design streamlines an explainable and transferable workflow, capable of tackling heterogeneous vision tasks (i.e., image classification, object detection, and image segmentation) with varying levels of clustering granularity (i.e., image-, box-, and pixel-level). Empirical results demonstrate that CLUSTERFORMER outperforms various well-known specialized architectures, achieving 83.41% top-1 acc. over ImageNet-1K for image classification, 54.2% and 47.0% mAP over MS COCO for object detection and instance segmentation, 52.4% mIoU over ADE20K for semantic segmentation, and 55.8% PQ over COCO Panoptic for panoptic segmentation. For its efficacy, we hope our work can catalyze a paradigm shift in universal models in computer vision.
James C. Liang, Yiming Cui, Qifan Wang, Tong Geng, Wenguan Wang, Dongfang Liu
2023-09-22T22:12:30Z
http://arxiv.org/abs/2309.13196v3
# ClusterFormer: Clustering As A Universal Visual Learner ###### Abstract This paper presents ClusterFormer, a universal vision model that is based on the Clustering paradigm with Transformer. It comprises two novel designs: 1) _recurrent cross-attention clustering_, which reformulates the cross-attention mechanism in Transformer and enables recursive updates of cluster centers to facilitate strong representation learning; and 2) _feature dispatching_, which uses the updated cluster centers to redistribute image features through similarity-based metrics, resulting in a transparent pipeline. This elegant design streamlines an explainable and transferable workflow, capable of tackling heterogeneous vision tasks (_i.e_., image classification, object detection, and image segmentation) with varying levels of clustering granularity (_i.e_., image-, box-, and pixel-level). Empirical results demonstrate that ClusterFormer outperforms various well-known specialized architectures, achieving 83.41% top-1 acc. over ImageNet-1K for image classification, 54.2% and 47.0% mAP over MS COCO for object detection and instance segmentation, 52.4% mIoU over ADE20K for semantic segmentation, and 55.8% PQ over COCO Panoptic for panoptic segmentation. For its efficacy, we hope our work can catalyze a paradigm shift in universal models in computer vision. ## 1 Introduction Computer vision has seen the emergence of specialized solutions for different vision tasks (_e.g_., ResNet [34] for image classification, Faster RCNN [70] for object detection, and Mask RCNN [33] for instance segmentation), aiming for superior performance. Nonetheless, neuroscience research [73, 65, 82, 5] has shown that the human perceptual system exhibits exceptional interpretive capabilities for complex visual stimuli, without task-specific constraints. This trait of human perceptual cognition diverges from current computer vision techniques [95, 44, 46], which often employ diverse architectural designs. Human vision possesses a unique attention mechanism that selectively focuses on relevant parts of the visual field while disregarding irrelevant information [81, 40]. This can be likened to a clustering approach [2, 3, 89], in which individual pixel points are decomposed and reorganized into relevant concepts to address various tasks. This is essentially a hierarchical process that involves combining basic visual features, such as lines, shapes, and colors, to create higher-level abstractions of objects, scenes, and individuals [79; 59; 66; 27]. (Figure 1: ClusterFormer is a clustering-based universal model, offering superior performance over various specialized architectures.) Inspired by the remarkable abilities of the human vision system, this work aims to develop a universal vision model that can replicate this unparalleled prowess. To this end, we employ a clustering-based strategy that operates at varying levels of granularity for visual comprehension. By solving different vision tasks (_i.e_., image classification, object detection, and image segmentation), we take into account the specificity at which visual information is grouped (_i.e_., image-, box-, and pixel-level). We name our approach ClusterFormer (§3.2), as it utilizes a Clustering mechanism integrated within the TransFormer architecture to create a universal network. The method begins by embedding images into discrete tokens, representing essential features that are grouped into distinct clusters.
The cluster centers are then recursively updated through a recurrent clustering cross-attention mechanism that considers associated feature representations along the center dimension. Once center assignments and updates are complete, features are dispatched based on updated cluster centers, and then both are fed into the task head for the target tasks. ClusterFormer enjoys a few attractive qualities. _Flexibility_: ClusterFormer is a clustering-anchored approach that accommodates a broad array of visual tasks with superior performance (see Fig. 1) under one umbrella. The core epistemology is to handle various tasks with different levels of granularity (_e.g_., image-level classification, box-level detection, pixel-level segmentation, _etc_.), moving towards a universal visual solution. _Transferability_: The cluster centers generated by the ClusterFormer encoder are directly employed by the task head as initial queries for clustering, allowing the entire architecture to transfer the underlying representation for target-task predictions (see Table 4). This elegant design facilitates the transferability of knowledge acquired from the upstream task (_i.e_., encoder trained on ImageNet [72]) to downstream tasks (_e.g_., decoder trained on instance segmentation on COCO [49]). _Explainability_: Regardless of the target tasks, ClusterFormer's decision-making process is characterized by a transparent pipeline that continuously updates cluster centers through similarity-based metrics. Since the reasoning process is naturally derivable, the model inference behavior is ad-hoc explainable (see §4.2). This distinguishes ClusterFormer from most existing unified models [17; 44; 95] that fail to elucidate precisely how a model works. To effectively assess our method, we experimentally show: In §4.1.1, with the task of image classification, ClusterFormer outperforms traditional counterparts, _e.g_., by \(\mathbf{0.13}\sim\mathbf{0.39}\%\) top-1 accuracy compared with Swin Transformer [53] on ImageNet [72], by training from scratch. In §4.1.2, when using our ImageNet-pretrained model, our method can be expanded to the task of object detection and greatly improve the performance compared to Dino [96] over Swin Transformer on COCO [49] (\(\mathbf{0.8}\sim\mathbf{1.1}\%\) mAP). In addition, our method can also adapt to more generic per-pixel tasks, _a.k.a._, semantic segmentation (see §4.1.3), instance segmentation (see §4.1.4), and panoptic segmentation (see §4.1.5). For instance, we achieve performance gains of \(\mathbf{0.6}\sim\mathbf{1.3}\%\) mIoU for semantic segmentation on ADE20K [101], \(\mathbf{1.0}\sim\mathbf{1.4}\%\) mAP for instance segmentation on MS COCO [49] and \(\mathbf{1.5}\sim\mathbf{1.7}\%\) PQ for panoptic segmentation on COCO Panoptic [42] compared with Mask2Former [17] over Swin Transformer. Our algorithm is extensively tested, and the efficacy of the core components is also demonstrated through a series of ablative studies outlined in §4.2. ## 2 Related Work **Universal Vision Model.** Transformers [81] have been instrumental in driving the universal ambition, fostering models that are capable of tackling tasks of different specificity with the same architecture and embody the potential of these recent developments [23; 17; 16; 95; 96; 4; 80; 30; 57; 86] in the field. In the vision regime, mainstream research endeavors have been concentrating on the development of either encoders [53; 88] or decoders [44; 94].
The encoder is centered around the effort of developing foundation models [4; 53; 24; 22], trained on extensive data that can be adapted and fine-tuned to diverse downstream tasks. For instance, Swin Transformer [53] capably serves as a general-purpose backbone for computer vision by employing a hierarchical structure consisting of shifted windows; ViT-22B [22], parameterizes the architecture to 22 billion and achieves superior performance on a variety of vision tasks through learning large-scale data. Conversely, research on decoders [23; 17; 16; 95; 94; 44; 96; 87; 50; 20; 52; 19; 21; 51; 93; 76; 37; 99; 25; 48] is designed to tackle homogeneous target tasks, by using queries to depict visual patterns. For instance, Mask2Former [17] incorporates mask information into the Transformer architecture and unifies various segmentation tasks (_e.g_., semantic, instance, and panoptic segmentation); Mask-DINO [44] extends the decoding process from detection to segmentation by directly utilizing query embeddings for target task predictions. Conceptually different, we streamline an elegant systemic workflow based on clustering and handle heterogeneous visual tasks (_e.g._, image classification, object detection, and image segmentation) at different clustering granularities. **Clustering in Vision.** Traditional clustering algorithms in vision [39; 28; 29; 55; 91; 1; 10; 61; 6; 58] can be categorized into the hierarchical and partitional modes. The hierarchical methods [62; 38] involve the modeling of pixel hierarchy and the iterative partitioning and merging of pixel pairs into clusters until reaching a state of saturation. This approach obviates the necessity of a priori determination of cluster quantity and circumvents the predicaments arising from local optima. [98; 12]. However, it exclusively considers the adjacent pixels at each stage and lacks the capacity to assimilate prior information regarding the global configuration or dimensions of the clusters. [69; 64]. In contrast, partitional clustering algorithms [78; 36] directly generate a flat structure with a predetermined number of clusters and exclusively assign pixels to a single cluster. This design exhibits a dynamic nature, allowing pixels to transition between clusters [11; 63]. By employing suitable measures, this approach can effectively integrate complex knowledge within cluster centers. As a powerful system, human vision incorporates the advantages of both clustering modes [89; 83; 67]. We possess the capability of grouping analogous entities at different scales. Meanwhile, we can also effectively categorize objects purely based on their shape, color, or texture, without having the hierarchical information. Drawing on the above insights, we reformulate the attention mechanism (SS3.2 ) in Transformer architectures [81] from the clustering's perspective to decipher the hierarchy of visual complexity. ## 3 Methodology ### Preliminary **Clustering.** The objective of clustering is to partition a set of data points, denoted by \(X\in\mathbb{R}^{n\times d}\), into \(C\) distinct clusters based on their intrinsic similarities while ensuring that each data point belongs to only one cluster. Achieving this requires optimizing the stratification of the data points, taking into account both their feature and positional information, to form coherent and meaningful groupings. Clustering methodologies typically employ advanced similarity metrics, such as cosine similarity, to measure the proximity between data points and cluster centroids. 
Additionally, they consider the spatial locality of the points to make more precise group assignments. **Cross-Attention for Generic Clustering.** Drawing inspiration from the Transformer decoder architecture [81], contemporary end-to-end architecture [17; 9] utilize a query-based approach in which a set of \(K\) queries, \(\mathbf{C}=[\mathbf{c}_{1};\cdots;\mathbf{c}_{K}]\in\mathbb{R}^{K\times D}\), are learned and updated by a series of cross-attention blocks. In this context, we rethink the term "\(\mathbf{C}\)" to associate queries with cluster centers at each layer. Specifically, cross-attention is employed at each layer to adaptively aggregate image features and subsequently update the queries: \[\mathbf{C}\leftarrow\mathbf{C}+\mathrm{softmax}_{HW}(\mathbf{Q}^{C}(\mathbf{K}^{I})^{\top}) \mathbf{V}^{I}, \tag{1}\] where \(\mathbf{Q}^{C}\!\in\!\mathbb{R}^{K\times D}\), \(\mathbf{V}^{I}\!\in\!\mathbb{R}^{HW\times D}\), \(\mathbf{K}^{I}\!\in\!\mathbb{R}^{HW\times D}\) represent linearly projected features for query, key, and value, respectively. The superscripts "\(C\)" and "\(I\)" denote the features projected from the center and image features, respectively. Motivated by [95], we follow a reinterpretation of the cross-attention mechanism as a clustering solver by considering queries as cluster centers and applying the _softmax_ function along the query dimension (\(K\)) instead of the image resolution (\(HW\)): \[\mathbf{C}\leftarrow\mathbf{C}+\mathrm{softmax}_{K}(\mathbf{Q}^{C}(\mathbf{K}^{I})^{\top})\bm {V}^{I}. \tag{2}\] ### ClusterFormer In this subsection, we present ClusterFormer (see Fig. 2(a)). The model has a serial of hierarchical stages that enables multi-scale representation learning for universal adaptation. At each stage, image patches are tokenized into feature embedding [81; 53; 24], which are grouped into distinct clusters via a unified pipeline -- first _recurrent cross-attention clustering_ and then _feature dispatching_. **Recurrent Cross-Attention Clustering.** Considering the feature embeddings \(\mathbf{I}\in\mathbb{R}^{HW\times D}\) and initial centers \(\mathbf{C}^{(0)}\), we encapsulate the iterative Expectation-Maximization (EM) clustering process, \[\begin{split}\text{$E$-step:}\quad\hat{\mathbf{M}}^{(t)}& =\!\operatorname{softmax}_{K}(\mathbf{Q}^{C^{(t)}}(\mathbf{K}^{I})^{\top}),\\ \text{$M$-step:}\quad\mathbf{C}^{(t+1)}&=\!\hat{\mathbf{M} }^{(t)}\mathbf{V}^{I}\!\in\!\mathbb{R}^{K\!\times\!D},\end{split} \tag{3}\] where \(t\in\{1,\cdots,T\}\) and \(\hat{\mathbf{M}}\in[0,1]^{K\times HW}\) represents the "soft" cluster assignment matrix (_i.e._, probability maps of \(K\) clusters). As defined in Section 3.1, \(\mathbf{Q}^{C}\in\mathbb{R}^{K\times D}\) denotes the query vector projected from the center \(\mathbf{C}\), and \(\mathbf{V}^{I},\mathbf{K}^{I}\in\mathbb{R}^{HW\times D}\) correspond to the value and key vectors, respectively, projected from the image features \(\mathbf{I}\). The Recurrent Cross-Attention approach iteratively updates cluster membership \(\hat{\mathbf{M}}\) (_i.e._, \(E\)-step) and centers \(\mathbf{C}\) (_i.e._, \(M\)-step). This dynamic updating strategy embodies the essence of partitional clustering. It enjoys a few appealing characteristics: * _Efficiency:_ While the vanilla self-attention mechanism has a time complexity of \(\mathcal{O}(H^{2}W^{2}D)\), the _Recurrent Cross-Attention_ approach exhibits a lower bound of \(\mathcal{O}(TKHWD)\). 
This is primarily due to the fact that \(TK\ll HW\) (_i.e._, 4165 in Swin [53]\(vs.\) 1200 in ours). Specifically, considering the nature of the pyramid architecture [88; 53] during the encoding process, \(TK\) can indeed be much smaller than \(HW\), especially in the earlier stages. It is important to note that during each iteration, merely the \(\mathbf{Q}\) matrix requires an update, while the \(\mathbf{K}\) and \(\mathbf{V}\) matrices necessitate a single computation. Consequently, the whole model enjoys systemic efficiency (see Table 6c). * _Transparency:_ The transparency hinges on the unique role that cluster centers play in our _Recurrent Cross-Attention_ mechanism. The cluster centers, derived through our clustering process, act as 'prototypes' for the features they cluster. These 'prototypes' serve as a form of a representative sample for each cluster, reflecting the most salient or characteristic features of the data points within that cluster. Moreover, the _Recurrent Cross-Attention_ method adheres to the widely-established EM clustering algorithm, offering a lucid and transparent framework. This cluster center assignment behaves in a human-understandable manner (see Fig. 3) during representation learning and fosters ad-hoc explainability, allowing for a more intuitive understanding of the underlying relationships. * _Non-parametric fashion:_ The _Recurrent Cross-Attention_ mechanism achieves a recursive nature by sharing the projection weights for query, key, and value across iterations. This approach effectively ensures recursiveness without the introduction of additional learnable parameters (see Table 6b). Since the overall architecture is hierarchical, _Recurrent Cross-Attention_ is able to thoroughly explore the representational granularity, which mirrors the process of hierarchical clustering: \[\mathbf{C}^{l}=\operatorname{RCA}^{l}(\mathbf{I}^{l},\mathbf{C}^{l}_{0}), \tag{4}\] where RCA stands for the recurrent cross-attention layer. \(\mathbf{I}^{l}\) is the image feature map at different layers by standard pooling operation with \(H/2^{l}\times W/2^{l}\) resolution. \(\mathbf{C}^{l}\) is the cluster center matrix for \(l^{th}\) layer and \(\mathbf{C}^{l}_{0}\) is the initial centers at \(l^{th}\) layer. The parameters for _Recurrent Cross-Attention_ at different layers, _i.e._, \(\{\operatorname{RCA}^{l}\}_{l=1}^{L}\), are not shared. In addition, we initialize the centers from image grids: \[[\mathbf{c}^{(0)}_{1};\cdots;\mathbf{c}^{(0)}_{K}]=\operatorname{FFN}(\operatorname{ Adptive\_Pooling}_{K}(\mathbf{I})), \tag{5}\] Figure 2: (a) Overall pipeline of ClusterFormer. (b) Each _Recurrent Cross-Attention Clustering_ layer carries out \(T\) iterations of cross-attention clustering (_E_-step) and center updating (_M_-step) (see Eq. 3). (c) The _feature dispatching_ redistributes the feature embeddings on the top of updated cluster centers (see Eq. 6). where FFN stands for Position-wise Feedforward Network which is an integral part of the Transformer architecture. It comprises two fully connected layers along with an activation function used in the hidden layer. \(\text{Adaptive\_Pooling}_{K}(\mathbf{I})\) refers to select \(K\) feature centers from \(\mathbf{I}\) using adaptive sampling, which calculates an appropriate window size to achieve a desired output size adaptively, offering more flexibility and precision compared to traditional pooling methods. 
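To make the recurrent clustering step concrete, the following is a minimal PyTorch-style sketch of Eqs. 3 and 5 under stated assumptions; it is not the authors' released implementation. The class name `RecurrentCrossAttentionClustering`, the use of average pooling to realize \(\operatorname{Adaptive\_Pooling}_{K}\), the single-head formulation, and the default values of `num_centers` and `num_iters` are illustrative choices rather than details taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class RecurrentCrossAttentionClustering(nn.Module):
    """Minimal sketch of the E/M updates in Eq. 3 with center initialization (Eq. 5).

    The q/k/v projection weights are shared across the T iterations, so the
    recursion introduces no additional learnable parameters.
    """

    def __init__(self, dim, num_centers=100, num_iters=3):
        super().__init__()
        self.num_centers = num_centers
        self.num_iters = num_iters
        self.q_proj = nn.Linear(dim, dim)  # centers  -> Q^C
        self.k_proj = nn.Linear(dim, dim)  # features -> K^I
        self.v_proj = nn.Linear(dim, dim)  # features -> V^I
        self.init_ffn = nn.Sequential(     # FFN in Eq. 5
            nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, dim)
        )

    def init_centers(self, feats):
        # Eq. 5: pool K initial centers from the flattened image grid, then apply the FFN.
        pooled = F.adaptive_avg_pool1d(feats.transpose(1, 2), self.num_centers)
        return self.init_ffn(pooled.transpose(1, 2))            # (B, K, D)

    def forward(self, feats):
        # feats: (B, HW, D) flattened tokens of one stage.
        centers = self.init_centers(feats)                      # C^(0)
        k = self.k_proj(feats)                                   # computed once
        v = self.v_proj(feats)                                   # computed once
        assign = None
        for _ in range(self.num_iters):                          # T iterations
            q = self.q_proj(centers)                             # only Q is refreshed
            logits = torch.bmm(q, k.transpose(1, 2))             # (B, K, HW)
            assign = logits.softmax(dim=1)                       # E-step: softmax over the K centers
            centers = torch.bmm(assign, v)                       # M-step: re-estimate centers
        return centers, assign
```

The sketch mirrors the efficiency argument above: the key and value projections of the \(HW\) tokens are computed once, and each of the \(T\) iterations only refreshes the \(K\times D\) query block, which is where the \(\mathcal{O}(TKHWD)\) cost comes from.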
**Feature Dispatching.** After the cluster assignment, the proposed method employs an adaptive process that dispatches each patch within a cluster based on similarity (see Fig. 2(c)), leading to a more coherent and representative understanding of the overall structure and context within the cluster. For every patch embedding \(p_{i}\in I\), the updated patch embedding \(p_{i}^{\prime}\) is computed as: \[p_{i}^{\prime}=p_{i}+\mathrm{MLP}\big(\frac{1}{K}\sum_{k=1}^{K}\mathrm{sim}(C_{k},p_{i})\cdot C_{k}\big) \tag{6}\] This equation represents the adaptive dispatching of feature embeddings by considering the similarity between the feature embedding and the cluster centers (\(C\)), weighted by their respective similarities. By incorporating the intrinsic information from the cluster centers, the method refines the feature embeddings, enhancing the overall understanding of the image's underlying structure and context. All feature representations are utilized for handling the target tasks in the decoding process. In §3.3, we discuss more details about the implementation of the end tasks. ### Implementation Details The implementation details and framework of ClusterFormer are shown in Fig. 2(a). We followed the architecture and configuration of Swin Transformer [53]. The code will be available here. * _Encoder._ The encoding process generates a representation hierarchy, denoted as \(\{\mathbf{I}^{l}\}\) with \(l=\{1,2,3,4\}\), for a given image \(I\). The pipeline begins with the feature embedding to convert the images into separate feature tokens. Subsequently, multi-head computing [81; 53] is employed to partition the embedded features among the heads. Center initialization (Eq. 5) is then adopted as a starting point for the cluster centers, and the recurrent cross-attention clustering (Eq. 3) is utilized to recursively update these centers. Once the centers have been updated, the features are dispatched based on their association with the updated centers (Eq. 6). The subsequent decoding process leverages both the centers and the features, which guarantees well-rounded learning. * _Adaptation to Image Classification._ The classification head is a single-layer Multilayer Perceptron (MLP) that takes the cluster centers from the encoder for predictions. * _Adaptation to Detection and Segmentation._ The downstream task head has six Transformer decoder layers with the core design of recurrent cross-attention clustering (Eq. 4). Each layer has 3 iterations. ## 4 Experiment We evaluate our methods over five vision tasks, _viz._, image classification, object detection, semantic segmentation, instance segmentation, and panoptic segmentation, on four benchmarks. **ImageNet-1K for Image Classification.** ImageNet-1K [72] includes high-resolution images spanning 1,000 distinct categories (_e.g._, animals, plants, and vehicles). Following conventional procedures, the dataset is split into 1.2M/50K/100K images for train/validation/test splits. **MS COCO for Object Detection and Instance Segmentation.** The COCO [49] dataset features dense annotations for 80 common objects in daily contexts. Following standard practices [49], the dataset is split into 115K/5K/20K images for train2017/val2017/test-dev splits. **ADE20K for Semantic Segmentation.** The ADE20K [101] dataset offers an extensive collection of images with pixel-level annotations, containing 150 diverse object categories in both indoor and outdoor scenes. The dataset comprises 20K/2K/3K images for train/val/test splits.
**COCO Panoptic for Panoptic Segmentation.** The COCO Panoptic dataset [42] includes 80 "thing" categories and a carefully annotated set of 53 "stuff" categories. In line with standard practices [42], the COCO Panoptic dataset is split into 115K/5K/20K images for the train/val/test splits as well. The ensuing section commences by presenting the main results of each task (SS4.1), succeeded by a series of ablative studies (SS4.2), which aim to confirm the efficacy of each modulating design. ### Main Results #### 4.1.1 Experiments on Image Classification **Training.** We use mmclassification2 as codebase and follow the default training settings. The default configuration for our model involves setting the number of centers to 100. To optimize the model's performance, we employ cross-entropy as the default loss function, which is widely used in classification tasks and helps in minimizing the difference between predicted probabilities and ground truth. For the training details, we run the model for 300 epochs, allowing sufficient time for the model to learn and converge. To manage the learning rate, we initialize it at 0.001 as default. The learning rate is then scheduled using a cosine annealing policy, which gradually decreases the learning rate over time. Due to limitations in our GPU capacity, we are constrained to set the total batch size at 1024. Models are trained _from scratch_ on sixteen A100 GPUs. Footnote 2: [https://github.com/open-mmlab/mmclassification](https://github.com/open-mmlab/mmclassification) **Results on ImageNet.** Table 1 illustrates our compelling results over different famous methods. ClusterFormer exceeds the Swin Transformer [53] by **0.13\(\%\)** and **0.39\(\%\)** on Tiny-based and Small-based models with fewer parameters (_i.e._, 27.85M \(vs.\) 28.29M and 48.71M \(vs.\) 49.61M), respectively. On top-5 accuracy, our approach also outperforms the Swin-Tiny and Swin-Small with gains of **0.71\(\%\)** and **0.84\(\%\)**, respectively. In addition, our margins over the ResNet family [34] are **3.44\(\%\)\(\sim\) **4.76\(\%\)** on top-1 accuracy with on-par parameters (_i.e._, 27.85M \(vs.\) 25.56M and 48.71M \(vs.\) 44.55M). #### 4.1.2 Experiments on Object Detection **Training.** We use mmdetection3 as codebase and follow the default training settings. For a fair comparison, we follow the training protocol in [17]: 1) the number of instances centers is set to 100; 2) a linear combination of the \(\mathcal{L}_{1}\) loss and the GIoU Loss is used as the optimization objective for bounding box regression. Their coefficients are set to 5 and 2, respectively. In addition, the final object centers are fed into a small FFN for object classification, trained with a binary cross-entropy loss. Moreover, we set the initial learning rate to \(1\times 10^{-5}\), the training epoch to 50, and the batch size to 16. We use random scale jittering with a factor in \([0.1,2.0]\) and a crop size of \(1024\times 1024\). Footnote 3: [https://github.com/open-mmlab/mmdetection](https://github.com/open-mmlab/mmdetection) **Test.** We use one input image scale with shorter side as 800. **Metric.** We adopt AP, AP\({}_{50}\), AP\({}_{75}\), AP\({}_{S}\), AP\({}_{M}\), and AP\({}_{L}\). **Performance Comparison.** In Table 2, we present the numerical results for ClusterFormer for object detection. We observe that it surpasses all counterparts [70, 7, 56, 77, 9, 75, 60, 102, 71, 96] with remarkable gains with respect to mAP. 
In particular, ClusterFormer-Tiny exceeds the vanilla Deformable DETR [102], Sparse-DETR [71], and DINO [96] over Swin-T [53] by **6.5\(\%\)**, **3.4\(\%\)**, and **0.8\(\%\)** in terms of mAP, respectively. In addition, our approach also outperforms these methods over Swin-S [53], _i.e._, _54.2\(\%\)_vs_\(48.3\%\)_vs_\(49.9\%\)_vs_\(53.3\%\) in terms of mAP, respectively. Notably, ClusterFormer achieves impressive performance without relying on additional augmentation. #### 4.1.3 Experiments on Semantic Segmentation **Training.** We use mmsegmentation4 as codebase and follow the default training settings. The training process for semantic segmentation involves setting the number of cluster centers to match the number of semantic categories, which is 150 for ADE20K [101]. Following the approach employed in recent works [97, 17, 74], we adopt a combination of the standard cross-entropy loss and an auxiliary dice loss for the loss function. By default, the coefficients for the cross-entropy and dice losses are set to \(5\) and \(1\), respectively. In addition, we configure the initial learning rate to \(1\times 10^{-5}\), the number of training epochs to 50, and the batch size to 16. \begin{table} \begin{tabular}{c c c c} \hline \hline \multicolumn{2}{c}{Method} & \#Params & top-1 & top-5 \\ \hline \hline Context Cluster-Tiny\(\downarrow\)[58] & 5.3M & 71.68\(\%\) & 90.49\(\%\) \\ DeiT-Tiny\(\downarrow\)[80] & 5.72M & 74.50\(\%\) & 92.25\(\%\) \\ PVG-Tiny\(\downarrow\)[31] & 9.46M & 78.39\(\%\) & 94.38\(\%\) \\ ResNet-50-\(\downarrow\)[34] & 25.25M & 76.55\(\%\) & 93.06\(\%\) \\ Swin-Tiny\(\downarrow\)[53] & 28.29M & 81.18\(\%\) & 95.61\(\%\) \\ **ClusterFormer-Tiny** & 27.85M & **81.31\(\%\)** & **96.32\(\%\)** \\ \hline Context Cluster-Small\(\downarrow\)[58] & 14.0M & 77.42\(\%\) & 93.69\(\%\) \\ DeiT-Small\(\downarrow\)[80] & 20.25M & 80.69\(\%\) & 95.06\(\%\) \\ PVG-Small\(\downarrow\)[31] & 29.02M & 82.00\(\%\) & 95.97\(\%\) \\ ResNet-10-\(\downarrow\)[34] & 44.53M & 77.97\(\%\) & 94.06\(\%\) \\ Swin-Small\(\downarrow\)[53] & 49.61M & 83.02\(\%\) & 96.29\(\%\) \\ **ClusterFormer-Small** & 48.71M & **83.41\(\%\)** & **97.13\(\%\)** \\ \hline \hline \end{tabular} \end{table} Table 1: **Classification top-1 and top-5 accuracy on ImageNet [72] val (see §4.1.1 for details).** Furthermore, we employ random scale jittering, applying a factor within the range of [0.5, 2.0], and utilize a crop size with a fixed resolution of \(640\times 640\) pixels. **Test.** During the testing phase, we re-scale the input image with a shorter side to 640 pixels without applying any additional data augmentation at test time. **Metric.** Mean intersection-over-union (mIoU) is used for assessing image semantic segmentation performance. **Performance Comparison.** Table 3 shows the results on semantic segmentation. Empriailly, our method compares favorably to recent transformer-based approaches [54, 15, 32, 100, 74, 90, 95, 17]. For instance, ClusterFormer-Tiny surpasses both recent advancements, _i.e_., kMaX-Deeplab [95] and Mask2Former [17] with Swin-T [53] (_i.e_., **49.1\(\%\)_vs_. \(48.3\%\)_vs_. \(48.5\%\)), respectively. Moreover, ClusterFormer-Small achieves **52.4\(\%\)**mIoU and outperforms all other methods in terms of mIoU, making it competitive with _state-of-the-art_ methods as well. #### 4.1.4 Experiments on Instance Segmentation **Training.** We adopt the same training strategy for instance segmentation by following SS4.1.2. 
For instance segmentation, we change the training objective by utilizing a combination of the binary cross-entropy loss and the dice Loss for instance mask optimization. **Test.** We use one input image scale with a shorter side of 800. **Metric.** We adopt AP, AP\({}_{50}\), AP\({}_{75}\), AP\({}_{S}\), AP\({}_{M}\), and AP\({}_{L}\). **Performance Comparison.** Table 4 presents the results of ClusterFormer against famous instance segmentation methods [33, 8, 14, 43, 13, 26, 23, 18, 17, 44] on COCO test-dev. ClusterFormer shows clear performance advantages over prior arts. For example, ClusterFormer-Tiny outperforms the universal counterparts Mask2Former [17] by \(1.4\%\) over Swin-T [53] in terms of mAP and on par with the _state-of-the-art_ method, Mask-Dino [44] with Swin-T backbone. Moreover, ClusterFormer-Small surpasses all the competitors, _e.g_., yielding significant gains of \(1.0\%\) and \(0.5\%\) mAP compared to Mask2Former and Mask-Dino over Swin-S, respectively. Without bells and whistles, our method establishes a new _state-of-the-art_ on COCO instance segmentation. #### 4.1.5 Experiments on Panoptic Segmentation **Training.** Following the convention [84, 17], we use the following objective for network learning: \[\mathcal{L}^{\text{Panoptic}}=\lambda^{\text{th}}\mathcal{L}^{\text{th}}+ \lambda^{\text{st}}\mathcal{L}^{\text{st}}+\lambda^{\text{aux}}\mathcal{L}^{ \text{aux}}, \tag{7}\] \(\mathcal{L}^{\text{th}}\) and \(\mathcal{L}^{\text{st}}\) represent the loss functions for things and stuff, respectively. To ensure a fair comparison, we follow [95, 85] and incorporate an auxiliary loss calculated as a weighted sum of four different loss terms, specifically, a PQ-style loss, a mask-ID cross-entropy loss, an instance discrimination \begin{table} \begin{tabular}{c||c|c|c c c c c} \hline \hline Algorithm & Backbone & Epoch & mAP\({}_{50}\) & AP\({}_{75}\) & AP\({}_{S}\) & AP\({}_{M}\)T & AP\({}_{L}\)T \\ \hline Faster R-CNN-[70] & ResNet-101 & 36 & 41.7 & 62.3 & 45.7 & 24.7 & 46.0 & 53.2 \\ Cascade R-CNN-[71] & ResNet-101 & 36 & 42.8 & 61.1 & 46.7 & 24.9 & 46.5 & 56.4 \\ Grid R-CNN-[56] & ResNet-50 & 24 & 40.4 & 58.5 & 43.6 & 22.7 & 43.9 & 53.0 \\ EfficientDet-[77] & Efficient-B30 & 300 & 45.4 & 63.9 & 49.3 & 27.1 & 49.5 & 61.3 \\ DETRCNN-[79] & ResNet-50 & 150 & 39.9 & 60.4 & 41.7 & 17.6 & 43.4 & 59.4 \\ Sparse R-CNN-[75] & ResNet-101 & 36 & 46.2 & 65.1 & 50.4 & 29.5 & 49.2 & 61.7 \\ Conditional DETRCNN-[60] & ResNet-50 & 50 & 41.1 & 61.9 & 43.5 & 20.4 & 44.5 & 59.9 \\ \hline Deformable DETRCNN-[102] & Swin-T & 50 & 45.3\(\pm\)0.6 & 65.2\(\pm\)0.20 & 49.8\(\pm\)0.21 & 27.0\(\pm\)0.26 & 49.1\(\pm\)0.24 & 60.7\(\pm\)0.29 \\ & Swin-T & 50 & 48.3\(\pm\)0.6 & 68.7\(\pm\)0.22 & 52.1\(\pm\)0.35 & 30.5\(\pm\)0.25 & 51.6\(\pm\)0.24 & 64.4\(\pm\)0.29 \\ Sparse-DETRCNN-[71] & Swin-T & 50 & 48.6\(\pm\)0.6 & 69.6\(\pm\)0.20 & 53.5\(\pm\)0.23 & 30.1\(\pm\)0.27 & 51.8\(\pm\)0.21 & 64.9\(\pm\)0.29 \\ Sparse-DETRCNN-[71] & Swin-S & 50 & 49.9\(\pm\)0.21 & 70.3\(\pm\)0.27 & 54.0\(\pm\)0.29 & 32.5\(\pm\)0.22 & 53.6\(\pm\)0.28 & 66.2\(\pm\)0.25 \\ & \multirow{2}{*}{DINO} & \multirow{2}{*}{[96]} & \multirow{2}{*}{Swin-T} & 51.2\(\pm\)0.26 & 66.4\(\pm\)0.25 & 55.3\(\pm\)0.26 & 31.3\(\pm\)0.24 & 55.1\(\pm\)0.58 & 65.6\(\pm\)0.26 \\ & & & & & & & \\ \cline{2-7} & \multirow{2}{*}{[96]} & \multirow{2}{*}{Swin-S} & 51.2\(\pm\)0.26 & 66.4\(\pm\)0.25 & 55.3\(\pm\)0.26 & 33.1\(\pm\)0.27 & 55.1\(\pm\)0.58 & 66.3\(\pm\)0.26 \\ & & & & & & & \\ \cline{2-7} & \multirow{2}{*}{[96]} & \multirow{2}{*}{Swin-S} & 
51.3\(\pm\)0.26 & 70.9\(\pm\)0.38 & **57.6\(\pm\)**0.23 & 33.8\(\pm\)0.23 & 56.4\(\pm\)0.26 & **69.9\(\pm\)0.26** \\ \hline \hline \multirow{2}{*}{**ClusterFormer**} & Ours-Tiny & 50 & 52.0\(\pm\)0.32 & 70.4\(\pm\)0.25 & 57.5\(\pm\)0.22 & 34.2\(\pm\)0.25 & 54.8\(\pm\)0.29 & 64.8\(\pm\)0.22 \\ & Ours-Small & **54.2\(\pm\)0.32** & **71.8\(\pm\)0.16** & **59.1\(\pm\)**0.17 & **35.6\(\pm\)**0.28 & **57.2\(\pm\)**0.20 & **67.4\(\pm\)**0.18 \\ \hline \hline \end{tabular} \end{table} Table 2: Quantitative results on COCO [49] test-dev for object detection (see §4.1.2 for details). \begin{table} \begin{tabular}{c|c|c|c} \hline \hline Algorithm & Backbone & Epoch & mIoU\({}_{\blacksquare}\) \\ \hline \hline FCN-CNN-[54] & ResNet-50 & 50 & 36.0 \\ DeeplabV3+ECCV3+[15] & ResNet-50 & 50 & 42.7 \\ APCNet-CVPR3[32] & ResNet-50 & 100 & 43.4 \\ SETRCNN-[100] & ViT-L & 100 & 49.3 \\ Segmentero-Tiny [74] & ViT-B & 100 & 52.1 \\ Segformer-[90] & ViT-B & 100 & 51.4 \\ \hline kMaX-Deeplabcovins[95] & ConvNeXt-T & 100 & 48.5\(\pm\)0.15 \\ ConvNeXt-S & 100 & 51.6\(\pm\)0.23 \\ Mask2Former-CVPR320 & Swin-T & 100 & 48.5\(\pm\)0.24 \\ & Swin-S & 100 & 51.1\(\pm\)0.21 \\ \hline \hline \multirow{2}{*}{**ClusterFormer**} & Ours-Tiny & 100 & 49.1\(\pm\)0.19 \\ & & **52.4\(\pm\)**0.22 \\ \hline \end{tabular} \end{table} Table 3: Quantitative results on ADE20K [101] val for semantic segmentation (see §4.1.3 for details). loss, and a semantic segmentation loss. More information about \(\mathcal{L}^{\text{aux}}\) can be found in [85; 95]. The coefficients \(\lambda^{\text{th}}\), \(\lambda^{\text{st}}\), and \(\lambda^{\text{aux}}\) are assigned the values of 5, 3, and 1, respectively. Furthermore, the final centers are input into a small feed-forward neural network (FFN) for semantic classification, which is trained using a binary cross-entropy loss. Moreover, we set the initial learning rate to \(1\times 10^{-5}\), the number of training epochs to 50, and the batch size to 16. We also employ random scale jittering with a factor range of [0.1, 2.0] and a crop size of 1024\(\times\)1024. **Test.** We use one input image scale with a shorter side of 800. **Metric.** We employ the PQ metric [42] and report PQ\({}^{\text{Th}}\) and PQ\({}^{\text{St}}\) for the "thing" and "stuff" classes, respectively. To ensure comprehensiveness, we also include mAP\({}^{\text{Th}}_{\text{pan}}\), which evaluates mean average precision on "thing" classes using instance segmentation annotations, and mIoU\({}_{\text{pan}}\), which calculates mIoU for semantic segmentation by merging instance masks belonging to the same category, using the same model trained for the panoptic segmentation task. **Performance Comparison.** We perform a comprehensive comparison against two divergent groups of _state-of-the-art_ methods: universal approaches [46; 17; 44] and specialized panoptic methods [41; 92; 16; 45; 85; 97; 94]. As shown in Table 5, ClusterFormer outperforms both types of rivals. For instance, the performance of ClusterFormer-Tiny clear ahead compared to Mask2Former (_i.e._, **54.7\(\%\) PQ \(vs.\)\(53.2\%\) PQ) and Mask-Dino [44] (_i.e._, **54.7\(\%\) PQ \(vs.\)\(53.6\%\) PQ) on the top of Swin-T [53], and ClusterFormer-Small achieves promising gains of **1.7\(\%\)** and **0.9\(\%\)** PQ against Mask2Former and Mask-Dino over Swin-S, respectively. Moreover, in terms of mAP\({}^{\text{Th}}_{\text{pan}}\) and mIoU\({}_{\text{pan}}\), the ClusterFormer also achieves outstanding performance beyond counterpart approaches. 
### Ablative Study This section ablates ClusterFormer's key components on ImageNet [72] and MS COCO [49] validation split. All experiments use the tiny model. **Key Component Analysis.** We first investigate the two major elements of ClusterFormer, specifically, _Recurrent Cross-Attention Clustering_ for center updating and _Feature Dispatching_ for feature updating. We construct a Baseline model without any center updating and feature dispatching technique. As shown in Table (a)a, Baseline achieves \(74.59\%\) top-1 and \(91.73\%\) top-5 accuracy. Upon applying _Recurrent Cross-Attention Clustering_ to the Baseline, we observe consistent and substantial improvements for both top-1 accuracy (\(74.59\%\rightarrow\textbf{80.57}\%\)) and top-5 accuracy (\(91.73\%\rightarrow\textbf{95.22}\%\)). This highlights the importance of the center updating strategy \begin{table} \begin{tabular}{c|c|c|c c c c c} \hline \hline Algorithm & Backbone & Epoch & mAP1 & AP\({}_{\text{PQ}}\)\(\uparrow\) & AP\({}_{\text{PS}}\)\(\uparrow\) & AP\({}_{\text{Es}}\) & AP\({}_{\text{BH}}\)\(\uparrow\) & AP\({}_{\text{E}}\)\(\uparrow\) \\ \hline \hline Mask R-CNN\({}_{\text{mean}}\)[33] & ResNet-101 & 12 & 36.1 & 57.5 & 38.6 & 18.8 & 39.7 & 49.5 \\ Cascade MR-CNN\({}_{\text{mean}}\)[8] & ResNet-101 & 12 & 37.3 & 58.2 & 40.1 & 19.7 & 40.6 & 51.5 \\ HTC\({}_{\text{mean}}\)[14] & ResNet-101 & 20 & 39.6 & 61.0 & 42.8 & 21.3 & 42.9 & 55.0 \\ PointEnd\({}_{\text{mean}}\)[43] & ResNet-S01 & 12 & 36.3 & 56.9 & 38.7 & 19.8 & 39.4 & 48.5 \\ BlendMask\({}_{\text{mean}}\)[13] & ResNet-101 & 36 & 38.4 & 60.7 & 41.3 & 18.2 & 41.5 & 53.3 \\ QueryInst\({}_{\text{mean}}\)[26] & ResNet-101 & 36 & 41.0 & 63.3 & 44.5 & 21.7 & 44.4 & 60.7 \\ SOLO\({}_{\text{mean}}\)[23] & Swin-L & 50 & 46.7 & 72.7 & 50.6 & 29.2 & 50.1 & 60.9 \\ SparseNet\({}_{\text{mean}}\)[18] & ResNet-50 & 36 & 37.9 & 59.2 & 40.2 & 15.7 & 39.4 & 56.9 \\ MaxFormer\({}_{\text{mean}}\)[17] & Swin-T & 44.3\({}_{\text{mean}}\)[16] & 67.3\({}_{\text{mean}}\)[15] & 47.7\({}_{\text{mean}}\)[15] & 23.9\({}_{\text{mean}}\)[15] & 48.1\({}_{\text{mean}}\)[16] & 66.4\({}_{\text{mean}}\)[15] \\ Swin-S & 50 & 46.0\({}_{\text{mean}}\)[15] & 68.4\({}_{\text{mean}}\)[15] & 49.8\({}_{\text{mean}}\)[15] & 25.4\({}_{\text{mean}}\)[11] & 49.7\({}_{\text{mean}}\)[15] & 67.4\({}_{\text{mean}}\)[15] \\ Swin-T & 50 & 46.0\({}_{\text{mean}}\)[15] & 68.4\({}_{\text{mean}}\)[15] & 49.8\({}_{\text{mean}}\)[15] & 25.4\({}_{\text{mean}}\)[11] & 49.7\({}_{\text{mean}}\)[15] & 67.4\({}_{\text{mean}}\)[15] \\ & Swin-T & 50 & 46.3\({}_{\text{mean}}\)[15] & 66.9\({}_{\text{mean}}\)[15] & 56.2\({}_{\text{mean}}\)[15] & 26.0\({}_{\text{mean}}\)[15] & 48.7\({}_{\text{mean}}\)[15] & 64.0\({}_{\text{mean}}\)[15] \\ Mask-Dino\({}_{\text{mean}}\)[44] & Swin-S & 46.5\({}_{\text{mean}}\)[15] & 70.1\({}_{\text{mean}}\)[15] & **52.2\({}_{\text{mean}}\)[15]** & **27.6\({}_{\text{mean}}\)[15]** & 49.9\({}_{\text{mean}}\)[15] & 69.5\({}_{\text{mean}}\)[15] \\ \hline \hline \multirow{2}{*}{**ClusterFormer**} & Our-Tiny & 50 & 45.9\({}_{\text{mean}}\)[15] & 69.1\({}_{\text{mean}}\)[15] & 49.5\({}_{\text{mean}}\)[15] & 25.2\({}_{\text{mean}}\)[15] & 50.1\({}_{\text{mean}}\)[15] & 68.8\({}_{\text{mean}}\)[15] \\ & Ours-Tiny & 50 & **47.0\({}_{\text{mean}}\)[15]** & 51.8\({}_{\text{mean}}\)[15] & 27.3\({}_{\text{mean}}\)[15] & **50.50\({}_{\text{mean}}\)[15]** & **72.6\({}_{\text{mean}}\)[15]** & **72.6\({}_{\text{mean}}\)[15]** \\ \hline \hline \end{tabular} \end{table} Table 4: Quantitative results on COCO [49] test-dev for 
instance segmentation (see §4.1.4 for details). \begin{table} \begin{tabular}{c|c|c|c|c c c c} \hline \hline Algorithm & Backbone & Epoch & PQ\({}_{\text{PQ}}\)\(\uparrow\) & PQ\({}_{\text{PQ}}\)\(\uparrow\) & \({}_{\text{PQ}}\)\(\uparrow\) & mAP\({}_{\text{pan}}\)\(\uparrow\) & mIoU\({}_{\text{pan}}\)\(\uparrow\) \\ \hline \hline Panoptic-FPN\({}_{\text{mean}}\)[41] & ResNet-101 & 20 & 44.0 & 52.0 & 31.9 & 34.0 & 51.5 \\ UPSNet\({}_{\text{mean}}\)[92] & ResNet-101 & 12 & 46.2 & 52.8 & 36.5 & 36.3 & 56.9 \\ Panoptic-DeepLab\({}_{\text{mean}}\)[16] & Xception-71 & 12 & 41.2 & 44.9 & 35.7 & 31.5 & 55.4 \\ Panoptic-FCN\({}_{\text{mean}}\)[45] & ResNet-50 & 12 & 44.3 & 50.0 & 35.6 & 35.5 & 55.0 \\ Max-DeepLab\({}_{\text{mean}}\)[58] & Max-L & 55 & 51.1 & 57.0 & 42.2 & – & – \\ CMT-DeepLab\({}_{\text{mean}}\)[94] & Axial-R104\({ and validates the effectiveness of our approach, even without explicitly performing clustering. Furthermore, after incorporating _Feature Dispatching_ into the Baseline, we achieve significant gains of **3.99\(\%\)** in top-1 accuracy and **2.95\(\%\)** in top-5 accuracy. Finally, by integrating both core techniques, ClusterFormer delivers the best performance across both metrics. This indicates that the proposed _Recurrent Cross-Attention Clustering_ and _Feature Dispatching_ can work synergistically and validates the effectiveness of our comprehensive algorithmic design. **Recurrent Cross-attention Clustering.** We next study the impact of our _Recurrent Cross-attention Clustering_ (Eq.4) by contrasting it with the cosine similarity updating, basic cross-attention [81], Criss-attention [35] and \(K\)-Means cross-attention [95]. As illustrated in Table (c)c, our _Recurrent Cross-Attention_ proves to be _effective_ - it outperforms the cosine similarity, vanilla, Criss and \(K\)-Means by **2.52\(\%\)**, **1.64\(\%\)**, **1.40\(\%\)** and **0.15\(\%\)** top-1 accuracy respectively, and _efficient_ - its #Params are significantly less than the other vanilla and Criss-attention and on par with \(K\)-Means, in line with our analysis in SS3.2. To gain further insights into recursive clustering, we examine the effect of the recursion number \(T\) in Table (b)b. We discover that performance progressively improves from \(81.06\%\) to **81.31\(\%\)** in top-1 accuracy when increasing \(T\) from \(1\) to \(3\), but remains constant after running additional iterations. We also observe that #Params increase as \(T\) increases. Consequently, we set \(T=3\) as the default to strike an optimal balance between accuracy and computation cost. **Multi-head Dimension.** We then ablate the head embedding dimension for the attention head in Table (d)d. We find that performance significantly improves from \(71.69\%\) to **82.40\(\%\)** in top-1 accuracy when increasing the dimension from \(16\) to \(48\), but #Params steadily increase as the dimension grows. For a fair comparison with Swin [53], we set the head dimension to 32 as our default. **Feature Dispatching.** We further analyze the influence of our _Feature Dispatching_. As outlined in Table (e)e, in a standard manner without any dispatching method, the model attains \(80.57\%\) top-1 accuracy and \(95.22\%\) top-5 accuracy. By applying a vanilla fully connected layer to update the feature, we witness a marginal increase of **0.26\(\%\)** in top-1 accuracy. 
Moreover, using the confidence-based updating method [68] and the fully connected layer with similarity, the model demonstrates noticeable enhancements of \(0.12\%\) and \(0.39\%\) in top-1 accuracy, respectively. Last, our method yields significant performance advancements across both metrics, _i.e_., **81.31\(\%\)** top-1 and **96.32\(\%\)** top-5 accuracy. **Decoder Query Initialization.** Finally, we examine the impact of query initialization in the decoder on a downstream task (_i.e_., instance segmentation) in Table 6f. For free parameter initialization, the base model can achieve \(44.2\%\) in terms of mAP. By applying direct feature embedding, the method has a slight improvement of \(0.3\%\) mAP. In addition, the model exhibits improvements in mAP, achieving \(44.9\%\) and \(45.1\%\), respectively, by employing the mixed query selection [44] and scene-adaptive embedding [47]. (Table 6: A set of **ablative studies** on ImageNet [72] validation and MS COCO [49] test-dev split (see §4.2). The adopted designs are marked in red.) Notably, ClusterFormer achieves the highest performance in all three metrics, _i.e_., \(45.9\%\) mAP, \(69.1\%\) AP\({}_{50}\) and \(49.5\%\) AP\({}_{75}\), respectively. The empirical evidence proves that our design -- using the cluster centers from the encoder to derive the initial queries for the decoder -- facilitates the transferability of representation learning. **Ad-hoc Explainability.** We visualize the cluster assignment map for image classification in Fig. 3. This figure provides an insightful illustration of how ClusterFormer groups similar features together. Each color represents a cluster of features that share common characteristics. ## 5 Conclusion This study adopts an epistemological perspective centered on the clustering-based paradigm, which advocates a universal vision framework named ClusterFormer. This framework aims to address diverse visual tasks with varying degrees of clustering granularity. By leveraging insights from clustering, we customize the cross-attention mechanism for recursive clustering and introduce a novel method for feature dispatching. Empirical findings provide substantial evidence to support the effectiveness of this systematic approach. Based on its efficacy, we argue deductively that the proposed universal solution will have a substantial impact on a wider range of visual tasks when viewed through the lens of clustering. This remains open for our future endeavors. **Acknowledgement.** This research was supported by the National Science Foundation under Grant No. 2242243.
2310.20359
Homogeneous continuous images of smaller weight
We show that every infinite crowded space can be mapped onto a homogeneous space of countable weight, and that there is a homogeneous space of weight continuum that cannot be mapped onto a homogeneous space of uncountable weight strictly less than continuum.
István Juhász, Jan van Mill
2023-10-31T10:58:57Z
http://arxiv.org/abs/2310.20359v1
# Homogeneous continuous images of smaller weight ###### Abstract. We show that every infinite crowded space can be mapped onto a homogeneous space of countable weight, and that there is a homogeneous space of weight \(\mathfrak{c}\) that cannot be mapped onto a homogeneous space of weight strictly between \(\omega\) and \(\mathfrak{c}\). Key words and phrases:Homogeneous space, topological group, weight, continuous image 2020 Mathematics Subject Classification: 54A25, 54C05, 54D30 The first author was supported by NKFIH grant no. K129211. ## 1. Introduction For a space \(X\), we let \(w(X)\) denote its weight. Tkachenko [14] proved that if \(X\) is a Tychonoff space, then for every infinite regular cardinal \(\tau\leq w(X)\), there is a continuous image of \(X\) of weight \(\tau\). The same result was obtained independently but later by Juhasz in [9]. Several related'reflecting' properties were found for the class of \(\omega\)-narrow topological groups in [6]. It was shown there for example that every \(\omega\)-narrow topological group \(G\) admits a continuous homomorphism onto a topological group of weight \(\tau\), where \(\tau\leq w(G)\) is any infinite regular cardinal. See [6] for more results and references. It is an intriguing problem whether the assumption about the regularity of \(\tau\) is essential in these results. Although it is not stated explicitly in [6], their assumption on \(\omega\)-narrowness is essential. Examples are easily found. Koppelberg [11] proved that the homeomorphism group of any \(D(2)^{\kappa}\), \(\kappa\geq\omega\), is algebraically simple (for \(\kappa=\omega\), this is due to Anderson [1]). Hence the homeomorphism group of \(D(2)^{\omega_{1}}\) endowed with the compact-open topology, being of weight \(\omega_{1}\), does not admit a continuous homomorphic image of weight \(\omega\). As far as we know, for the class of topologically homogeneous spaces no'reflecting' properties were obtained in the literature. Since there are many for topological groups, it is a natural question whether these can be extended to the more general class of homogeneous spaces. See [3] for a survey containing old and new problems on homogeneous spaces. We show that the results on topological groups mentioned above cannot be extended in ZFC to homogeneous spaces by proving that every infinite crowded space can be mapped onto an infinite homogeneous space of countable weight, and that there is a homogeneous space of weight \(\mathfrak{c}\) that cannot be mapped onto any homogeneous space of weight strictly between \(\omega\) and \(\mathfrak{c}\). We also comment on the cardinality of (generalizations of) almost-compact spaces. ## 2. Continuous images of countable weight In this and the next section, all spaces under discussion are Tychonoff. As usual, \(\mathbb{N}\), \(\mathbb{R}\) and \(\mathbb{I}\) denote the spaces of natural numbers, real numbers and the closed interval \([0,1]\), respectively. **Lemma 2.1**.: _Every infinite crowded space maps onto \(\mathbb{N}\) or \(\mathbb{I}\)._ Proof.: Assume that there is a continuous map \(f\colon X\to\mathbb{R}\) such that \(f(X)\) is not zero-dimensional. Then \(f(X)\) contains a nontrivial closed interval \(J\) on which \(f(X)\) can be retracted. Hence we may assume that for all continuous functions \(f\colon X\to\mathbb{R}\), \(f(X)\) is zero-dimensional. As a consequence, \(X\) is zero-dimensional. Assume that there is a continuous function \(f\colon X\to\mathbb{R}\) such that \(f(X)\) is not compact. 
Then \(f(X)\) contains a closed copy of \(\mathbb{N}\) on which \(f(X)\), being zero-dimensional, can be retracted. Hence we can assume that for all continuous functions \(f\colon X\to\mathbb{R}\), \(f(X)\) is compact; that is, \(X\) is pseudocompact. Assume that there is a continuous function \(f\colon X\to\mathbb{R}\) such that \(f(X)\) is uncountable. Then \(f(X)\), being compact, contains a Cantor set \(K\). And since \(f(X)\) is zero-dimensional, it can be retracted onto \(K\). Now it suffices to observe that \(K\) can be mapped onto \(\mathbb{I}\). Let \(bX\) be a zero-dimensional compactification of \(X\). Since \(bX\) is crowded and zero-dimensional, there is a continuous surjection \(f\colon bX\to D(2)^{\omega}\). But then by pseudocompactness of \(X\), \(f(X)=D(2)^{\omega}\), and so we are done by what we just observed. **Corollary 2.2**.: _Every infinite crowded space can be mapped onto an infinite homogeneous space of countable weight._ Proof.: Simply observe that by collapsing the two endpoints of \(\mathbb{I}\) to a single point, we obtain a homogeneous space (the resulting quotient is homeomorphic to the circle, which is homogeneous). The lemma and corollary are not true if \(X\) is not crowded. For let \(X=W(\omega_{1})\), the space of all countable ordinal numbers. Then every continuous image of \(X\) of countable weight is compact and countable, hence has an isolated point. Hence if it is infinite, it is neither \(\mathbb{N}\) nor \(\mathbb{I}\), and not homogeneous. ## 3. Continuous images of uncountable weight We now formulate and prove the main result in this note. **Theorem 3.1**.: _There is a homogeneous space \(X\) of weight \(\mathfrak{c}\) such that if \(f\colon X\to Y\) is a continuous surjection and \(\omega<w(Y)<\mathfrak{c}\), then \(Y\) is not homogeneous._ Let \(\mathcal{A}\) be a Mrowka family on \(\omega\). That is, \(\mathcal{A}\) is a MAD-family such that the Cech-Stone compactification of \(\Psi(\mathcal{A})\), the \(\Psi\)-space \(\omega\cup\mathcal{A}\) of \(\mathcal{A}\), coincides with its \(1\)-point compactification. That is, \(\Psi(\mathcal{A})\) is _almost-compact_. That such a family exists is well known; see Mrowka [12] (see also [7, 8.6.1]). The family \(\mathcal{A}\) has cardinality \(\mathfrak{c}\) by construction, hence \(w(\Psi(\mathcal{A}))=\mathfrak{c}\). Now in \(\Psi(\mathcal{A})\) we replace every point of \(\omega\) by a copy of the Cantor set \(K=D(2)^{\omega}\) to obtain a space \(X\), as follows. The underlying set of \(X\) is \((\omega\times K)\cup\mathcal{A}\). A basic neighborhood of a point \(\langle n,k\rangle\), where \(n\in\omega\) and \(k\in K\), has the form \(\{n\}\times C\), where \(C\) is any open neighborhood of \(k\) in \(K\). And a basic neighborhood of \(A\in\mathcal{A}\) in \(X\) has for \(n\in\omega\) the form \(U_{n}(A)=\{A\}\cup\bigcup\{\{m\}\times K:m\in A,m\geq n\}\). It is clear that \(X\) is zero-dimensional and locally homeomorphic to \(K\), hence \(X\) is homogeneous. Let \(f\colon X\to\Psi(\mathcal{A})\) be the function that sends \(A\) in \(X\) to \(A\) in \(\Psi(\mathcal{A})\), for every \(A\in\mathcal{A}\), and every \(\{n\}\times K\) to \(\{n\}\), for \(n\in\omega\). **Lemma 3.2**.: \(f\) _is perfect and open._ Proof.: It is clear that \(f\) is continuous and has compact fibers. It is also clear that \(f\) is open. Hence it suffices to prove that \(f\) is closed. To this end, let \(E\) be any closed subset of \(X\). We claim that \(\Psi(\mathcal{A})\setminus f(E)\) is open.
If \(p\in(\Psi(\mathcal{A})\setminus f(E))\cap\omega\), then \(\{p\}\) is a neighborhood of \(p\) that is contained in \(\Psi(\mathcal{A})\setminus f(E)\). Now assume that \(p=A\in\mathcal{A}\). Then \(A\) in \(X\) does not belong to \(E\). Hence there exists \(n\) such that \(U_{n}(A)\cap E=\emptyset\). But \(f^{-1}(f(U_{n}(A)))=U_{n}(A)\), hence \(f(U_{n}(A))\cap f(E)=\emptyset\). But \(f(U_{n}(A))\) is a neighborhood of \(A\) in \(\Psi(\mathcal{A})\), hence we are done. **Lemma 3.3**.: _If \(Z\) is a zero-set in \(X\), then \(f(Z)\) is a zero-set in \(\Psi(\mathcal{A})\)._ Proof.: Let \(\xi\colon X\to[0,1]\) be continuous such that \(\xi^{-1}(0)=Z\). Define \(\eta\colon\Psi(\mathcal{A})\to[0,1]\) as follows: \(\eta(p)=\min\xi(f^{-1}(p))\). It is clear that \(\eta\) is well-defined since \(f\) is perfect (Lemma 3.2). To prove it is continuous, we only need to check continuity at the points \(A\in\mathcal{A}\). By first-countability, we can check that by considering convergent sequences. A typical sequence that converges to \(A\) in \(\Psi(\mathcal{A})\) has the form \(C=\{m\in A:m\geq n\}\) for some \(n\). By the definition of the topology on \(X\), \(\{f^{-1}(c):c\in C\}\) converges to \(A\) in \(X\). Hence \(\{\xi(f^{-1}(c)):c\in C\}\) converges to \(\eta(A)=\xi(A)\) in \([0,1]\). But then so do the minima of these compact sets. We will show that \(\eta^{-1}(0)=f(Z)\), which does the job. Indeed, pick an arbitrary \(p\in\eta^{-1}(0)\). Assume first that \(p=A\in\mathcal{A}\). Then since \(\eta(A)=\xi(A)\), \(A\in Z\), hence \(A=f(A)\in f(Z)\). Assume next that \(p=n\in\omega\). Then \(f^{-1}(p)=\{n\}\times K\), hence there exists \(k\in K\) such that \(\xi(\langle n,k\rangle)=0\). Hence \(\langle n,k\rangle\in Z\), so that \(n=f(\langle n,k\rangle)\in f(Z)\). From this we conclude that \(\eta^{-1}(0)\subseteq f(Z)\). For the reverse inclusion, take an arbitrary \(z\in Z\), and consider \(f(z)\). Again, there are two cases. Assume first that \(f(z)=A\in\mathcal{A}\). Then \(z=A\), hence \(\eta(f(z))=\xi(A)=0\). Assume next that \(f(z)=n\in\omega\). Then \(z\in\{n\}\times K\), hence \(\min(\xi(\{n\}\times K))=0\). That is, \(\eta(f(z))=\eta(n)=0\). **Corollary 3.4**.: \(X\) _is almost compact._ Proof.: By Gillman and Jerison [5, 6J] we must show that of any two disjoint zero-sets in \(X\), at least one of them is compact. Hence, assume that \(Z_{0}\) and \(Z_{1}\) are disjoint zero-sets in \(X\) such that neither \(Z_{0}\) nor \(Z_{1}\) is compact. Assume that \(Z_{0}\cap\mathcal{A}\) is finite. There is a clopen compact \(C\) in \(X\) that contains \(Z_{0}\cap\mathcal{A}\). Hence \(S_{0}=Z_{0}\setminus C\) is a noncompact zero-set in \(X\) that misses \(\mathcal{A}\). But then \(f(S_{0})\) is by Lemma 3.2 an infinite closed subset of \(\Psi(\mathcal{A})\) that misses \(\mathcal{A}\). But every infinite closed subset of \(\Psi(\mathcal{A})\) intersects \(\mathcal{A}\) since \(\mathcal{A}\) is MAD. We may consequently assume without loss of generality that \(Z_{0}\cap\mathcal{A}\) and \(Z_{1}\cap\mathcal{A}\) are both infinite. But they are clearly zero-sets of \(X\). But then by Lemma 3.3, \(\mathcal{A}\) contains two disjoint infinite zero-sets of \(\Psi(\mathcal{A})\), which contradicts \(\Psi(\mathcal{A})\) being almost compact. Write \(\beta\Psi(\mathcal{A})=\Psi(\mathcal{A})\cup\{\infty\}\). We are now in a position to present the proof of Theorem 3.1. To this end, let \(g\colon X\to Y\) be continuous, where \(\omega<w(Y)<\mathfrak{c}\) and \(Y\) is homogeneous.
We will show that this leads to a contradiction. The space \(Y\) is almost compact (hence locally compact) by Gillman and Jerison [5, 6J]. Write \(\beta X=X\cup\{\infty_{X}\}\), \(\beta Y=Y\cup\{\infty_{Y}\}\), and let \(h=\beta g\colon\beta X\to\beta Y\) be the Stone extension of \(g\). **Lemma 3.5**.: \(Y\) _is compact._ Proof.: Assume that \(Y\) is not compact. Then \(h(\infty_{X})=\infty_{Y}\), and since \(h(X)=Y\), we get \(h^{-1}(\{\infty_{Y}\})=\{\infty_{X}\}\). Since \(Y\) is locally compact, \(w(\beta Y)<\mathfrak{c}\) from which it follows that the character of \(\infty_{X}\) in \(X\) is less than \(\mathfrak{c}\). But then the character of \(\infty\) in \(\beta\Psi(\mathcal{A})\) is less than \(\mathfrak{c}\), which is a contradiction. Put \(y_{0}=h(\infty_{X})\). Take an arbitrary \(y_{1}\in X\setminus\{y_{0}\}\). Let \(U\) be a compact neighborhood of \(y_{1}\) in \(Y\) that misses \(y_{0}\). Then \(h^{-1}(U)\) is a compact subset of \(X\) and hence is of countable weight. But then \(U\) is of countable weight. By homogeneity, every point of \(Y\) has a compact neighborhood of countable weight. But then by compactness, \(Y\) has countable weight, which is a contradiction. ## 4. On locally \(\mathfrak{c}\) and \(\mathfrak{c}\)-fair spaces In the previous section, we constructed an almost-compact first-countable and homogeneous space of weight and cardinality \(\mathfrak{c}\). By Arhangel'skii's celebrated result from [2], every first-countable Lindelof space has cardinality at most \(\mathfrak{c}\). In the light of this it is natural to wonder whether there is a bound on the cardinality of first-countable almost-compact spaces. Our following general results answer this, but we think they are of independent interest in themselves. For the spaces in this section, unless otherwise stated, no separation axiom is assumed. **Definition 4.1**.: A topological space \(X\) is 1. _locally_ \(\mathfrak{c}\) if every point in \(X\) has a neighborhood of cardinality \(\leq\,\mathfrak{c}\); 2. \(\mathfrak{c}\)_-fair_ if the closure of every subset of \(X\) of cardinality \(\mathfrak{c}\) also has cardinality \(\mathfrak{c}\). 3. We call a \(\kappa\)-sequence \(\{F_{\alpha}:\alpha<\kappa\}\) of subsets of \(X\)_strongly increasing_ if \(F_{\alpha}\subset\operatorname{int}F_{\alpha+1}\) for all \(\alpha<\kappa\). We now present the main result of this section. **Theorem 4.2**.: _(i) Assume that the space \(X\) is both locally \(\mathfrak{c}\) and \(\mathfrak{c}\)-fair, moreover \(\kappa\leq\mathfrak{c}\) is a regular cardinal such that for every strongly increasing \(\kappa\)-sequence \(\{F_{\alpha}:\alpha<\kappa\}\) of closed subsets of \(X\) their union \(\bigcup\{F_{\alpha}:\alpha<\kappa\}\) is closed in \(X\). 
Then \(|X|>\mathfrak{c}\) implies that for every subset \(A\) of \(X\) with \(|A|\leq\mathfrak{c}\) there exists a clopen subset \(U\) of \(X\) with \(A\subset U\) and \(|U|=\mathfrak{c}\) such that \(L(U)\geq\kappa\)._ _(ii) If \(X\) is both locally \(\mathfrak{c}\) and \(\mathfrak{c}\)-fair, moreover we have \(t(x,X)<cf(\mathfrak{c})\) for all points \(x\in X\), then \(|X|>\mathfrak{c}\) implies the existence of a clopen subset \(U\) of \(X\) with \(|U|=L(U)=\mathfrak{c}\)._ Proof.: (i) It is straightforward from \(|X|>\mathfrak{c}\) and \(X\) being locally \(\mathfrak{c}\) and \(\mathfrak{c}\)-fair, that we may define by transfinite recursion a strongly increasing \(\kappa\)-sequence \(\{F_{\alpha}:\alpha<\kappa\}\) of closed subsets of \(X\) such that \(F_{0}=\overline{A}\) and \(\{\operatorname{int}F_{\alpha}:\alpha<\kappa\}\) is _strictly_ increasing, moreover \(|F_{\alpha}|=\mathfrak{c}\) for all \(\alpha<\kappa\). It is clear that then \[U=\bigcup\{F_{\alpha}:\alpha<\kappa\}=\bigcup\{\operatorname{int}F_{\alpha}:\alpha<\kappa\}.\] Hence, by our assumption, \(U\) is clopen with \(A\subset U\) and \(|U|=\mathfrak{c}\), moreover the strictly increasing open cover \(\{\operatorname{int}F_{\alpha}:\alpha<\kappa\}\) of \(U\) witnesses \(L(U)\geq\kappa\) because \(\kappa\) is regular. (ii) If \(\mathfrak{c}\) is regular then we may just use part (i) for \(\kappa=\mathfrak{c}\) to get the clopen \(U\) with \(|U|=L(U)=\mathfrak{c}\) because by our assumption the union of every increasing \(\mathfrak{c}\)-sequence of closed subsets of \(X\) is closed. If, however, \(\mathfrak{c}\) is singular then we first fix a sequence \(\{\kappa_{\xi}:\xi<cf(\mathfrak{c})\}\) of regular cardinals \(\kappa_{\xi}\geq cf(\mathfrak{c})\) that converges to \(\mathfrak{c}\). Then we may repeatedly apply part (i) to obtain an increasing sequence \(\{U_{\xi}:\xi<cf(\mathfrak{c})\}\) of clopen sets in \(X\) such that \(|U_{\xi}|=\mathfrak{c}\) and \(L(U_{\xi})\geq\kappa_{\xi}\) for all \(\xi<cf(\mathfrak{c})\). In fact, what we do to get \(U_{\xi}\) given \(\{U_{\eta}:\eta<\xi\}\) is to use part (i) with the choice \(A=\bigcup\{U_{\eta}:\eta<\xi\}\). Then, using again that \(t(x,X)<cf(\mathfrak{c})\) for all \(x\in X\), the set \(U=\bigcup\{U_{\xi}:\xi<cf(\mathfrak{c})\}\) is clopen with \(|U|=\mathfrak{c}\), moreover we have \(L(U)\geq L(U_{\xi})\geq\kappa_{\xi}\) for all \(\xi<cf(\mathfrak{c})\), hence \(L(U)=\mathfrak{c}\). We note that by \(|X\setminus U|=|X|>\mathfrak{c}\) and \(X\) being locally \(\mathfrak{c}\), we trivially have \(L(X\setminus U)=|X|>\mathfrak{c}\). **Corollary 4.3**.: _Assume that \(X\) is a locally Lindelof regular space with \(\psi(X)=t(X)=\omega\) and \(|X|>\mathfrak{c}\). Then there is a clopen subset \(U\) of \(X\) with \(|U|=L(U)=\mathfrak{c}\)._ Proof.: It follows from Shapirovskii's strengthening of Arhangel'skii's theorem in [13], see also 2.27 of [8], that \(X\) is locally \(\mathfrak{c}\). Since we have \(t(X)=\omega\), the \(\mathfrak{c}\)-fair property of \(X\) follows if we can show that \(|\overline{S}|\leq\mathfrak{c}\) whenever \(S\) is any countable subset of \(X\). But for such a set \(S\) we have \(L(\overline{S})\leq w(X)\leq\mathfrak{c}\) by the regularity of \(X\), and then \(|\overline{S}|\leq\mathfrak{c}\) follows since we already know that \(X\) is locally \(\mathfrak{c}\). 
Finally, the assumption \(t(X)=\omega<cf(\mathfrak{c})\) implies that we may apply part (ii) of Theorem 4.2 to conclude that there is a clopen subset \(U\) of \(X\) with \(|U|=L(U)=\mathfrak{c}\). Now, if \(X\) is almost compact and first countable then it is locally compact, \(\psi(X)=t(X)=\omega\), and for every clopen \(U\subset X\) either \(U\) or \(X\setminus U\) is compact, hence it is immediate from Corollary 4.3 that \(|X|\leq\mathfrak{c}\). This shows that Arhangel'skii's theorem may be extended to almost compact spaces, giving the answer to our motivating question. If, in addition to the assumptions of Corollary 4.3, \(X\) is also connected, then \(|X|>\mathfrak{c}\) would lead to a contradiction, hence we get that any locally Lindelof and connected regular space \(X\) with \(\psi(X)=t(X)=\omega\) has cardinality \(\leq\mathfrak{c}\). We note that the assumption of regularity of \(X\) in the last statement, and hence in Corollary 4.3 as well, cannot be weakened to the Hausdorff property. Indeed, by Corollary 2.6 of [10] there is a locally countable anti-Urysohn space \(X\) with \(|X|=2^{\mathfrak{c}}\), and any anti-Urysohn space is connected in a strong sense. (We recall that a Hausdorff space is anti-Urysohn if any two non-empty regular closed sets in it intersect.) But any locally countable space clearly satisfies all the other assumptions of Corollary 4.3. In contrast to this, the following corollary of Theorem 4.2 for connected spaces requires only the Hausdorff property. **Corollary 4.4**.: _If \(X\) is any connected, locally \(\mathfrak{c}\) and sequential Hausdorff space, then \(|X|\leq\mathfrak{c}\)._ Proof.: Assume, on the contrary, that \(|X|>\mathfrak{c}\). It is well-known that sequential spaces have countable tightness, moreover the closure of any countable set in a sequential Hausdorff space has cardinality \(\leq\mathfrak{c}\), and these clearly imply that \(X\) is \(\mathfrak{c}\)-fair. Moreover, \(t(X)=\omega\) also implies that the union of every increasing \(\omega_{1}\)-sequence of closed sets in \(X\) is closed. Thus we may apply Theorem 4.2 to obtain a clopen subset \(U\) of \(X\) with \(|U|=\mathfrak{c}\). But as \(X\) is connected, we would then have \(X=U\), contradicting \(|X|>\mathfrak{c}\).
2304.00116
Enhancing Large Language Models with Climate Resources
Large language models (LLMs) have significantly transformed the landscape of artificial intelligence by demonstrating their ability in generating human-like text across diverse topics. However, despite their impressive capabilities, LLMs lack recent information and often employ imprecise language, which can be detrimental in domains where accuracy is crucial, such as climate change. In this study, we make use of recent ideas to harness the potential of LLMs by viewing them as agents that access multiple sources, including databases containing recent and precise information about organizations, institutions, and companies. We demonstrate the effectiveness of our method through a prototype agent that retrieves emission data from ClimateWatch (https://www.climatewatchdata.org/) and leverages general Google search. By integrating these resources with LLMs, our approach overcomes the limitations associated with imprecise language and delivers more reliable and accurate information in the critical domain of climate change. This work paves the way for future advancements in LLMs and their application in domains where precision is of paramount importance.
Mathias Kraus, Julia Anna Bingler, Markus Leippold, Tobias Schimanski, Chiara Colesanti Senni, Dominik Stammbach, Saeid Ashraf Vaghefi, Nicolas Webersinke
2023-03-31T20:24:14Z
http://arxiv.org/abs/2304.00116v1
# Enhancing Large Language Models with Climate Resources ###### Abstract Large language models (LLMs) have significantly transformed the landscape of artificial intelligence by demonstrating their ability in generating human-like text across diverse topics. However, despite their impressive capabilities, LLMs lack recent information and often employ imprecise language, which can be detrimental in domains where accuracy is crucial, such as climate change. In this study, we make use of recent ideas to harness the potential of LLMs by viewing them as agents that access multiple sources, including databases containing recent and precise information about organizations, institutions, and companies. We demonstrate the effectiveness of our method through a prototype agent that retrieves emission data from ClimateWatch ([https://www.climatewatchdata.org/](https://www.climatewatchdata.org/)) and leverages general Google search. By integrating these resources with LLMs, our approach overcomes the limitations associated with imprecise language and delivers more reliable and accurate information in the critical domain of climate change. This work paves the way for future advancements in LLMs and their application in domains where precision is of paramount importance. 1FAU Erlangen-Nuremberg, Germany 2University of Oxford, United Kingdom 3University of Zurich, Switzerland 4ETH Zurich, Switzerland [email protected], [email protected], [email protected], [email protected], [email protected], [email protected], [email protected], [email protected] ## 1 Introduction **Motivation.** Large language models have revolutionized the field of artificial intelligence (AI) in recent years [14, 15, 16]. Models such as T0 [13], LLaMA [15], PaLM [16], GPT-3 [14], GPT-4 [15], or instruction finetuned models, such as ChatGPT [14], have demonstrated their exceptional capabilities in generating human-like text across various domains, including language translation, summarization, and question answering. However, despite their impressive performance, LLMs are not without limitations. First, LLMs by themselves only have access to information from their training data (i.e., data from the past). Thus, LLMs cannot access the most recent information, which prohibits their usage in the domain of climate change to a large extent. Another critical drawback is their tendency to use imprecise language, which can lead to unreliable or inaccurate predictions in domains where precision and responsibility are crucial. Thus, criticism of these systems has grown, with a recent letter from several experts in the field asking, "Should we let machines flood our information channels with propaganda and untruth?" (Future of Life Institute 2023). One such domain where precision and accuracy are paramount is research on climate reporting [13]. Accurate data and information are essential for policymakers, scientists, and the general public to understand the magnitude and urgency of the climate crisis and take necessary actions to mitigate its impact [12]. Inaccurate information, on the other hand, can lead to inappropriate policies or delayed actions, exacerbating the consequences of climate change [17]. However, retrieving and interpreting accurate data from a vast amount of climate-related information is a challenging task [1]. One example of a valuable source of relevant information for understanding a firm's climate-related performance is annual reports, which contain both quantitative and qualitative information [1]. 
However, this type of information is not naturally represented in LLMs and must be retrieved separately. **Contribution.** To address this challenge, we use recent developments that treat LLMs as agents that access multiple sources, including databases containing precise information about organizations, institutions, and companies. By integrating these resources with LLMs, our approach mitigates the limitations associated with imprecise language, providing more reliable and accurate information in the critical domain of climate change. We present a prototype LLM agent that retrieves emission data from ClimateWatch ([https://www.climatewatchdata.org/](https://www.climatewatchdata.org/)) and leverages general Google search. Through two exemplary experiments, we demonstrate the potential of LLMs in the area of climate change. We anticipate that our approach will encourage further research on integrating LLMs with accurate data sources, ultimately fostering more responsible and reliable AI systems. **Findings.** Our study demonstrates the potential of LLMs as agents that access multiple sources of information to deliver reliable and accurate information in the domain of climate change. We have shown how our prototype agent can retrieve and process data from ClimateWatch, a database containing precise information on emissions from countries, and how it can leverage general Google search to complement the information retrieved from ClimateWatch and provide additional context to the generated text. Our experiments and analyses showcase how LLM agents can be used to access multiple data sources and combine this information to provide accurate and reliable responses. **Implications.** The implications of our study are significant for both practice and research. In practice, this work is not limited to the field of climate change but can be used to improve the accuracy and reliability of AI systems in other domains where precision and responsibility are crucial. Policymakers, scientists, and the general public can benefit from more accurate and reliable information to make informed decisions and take necessary actions to mitigate the impact of climate change. In research, our study paves the way for future advancements in LLMs and their application in domains where precision is of paramount importance. Further research is needed to explore the potential of integrating LLMs with other data sources, such as scientific papers, reports, and databases. The remainder of this paper is structured as follows. Section 2 gives a brief overview of recent developments in the field of LLMs, LLM agents, NLP in climate change, and the use of multiple sources in the context of LLMs. Section 3 describes the setting in which we run our experiments, and Section 4 discusses example prompts that need to access information from ClimateWatch, general internet search, or a combination thereof. Section 5 discusses these results and implications of this work. Section 6 concludes this work. ## 2 Background ### Large Language Models Large language models are a type of artificial neural network that have demonstrated remarkable ability to generate human-like text across diverse topics [1, 2, 3]. These models are primarily transformer models and are trained on massive amounts of text data to learn the patterns and relationships within the data [20]. The recent breakthroughs with models like T0 [13], LLaMA [32], PaLM [2], GPT-3 [2], and GPT-4 [2] have further highlighted the potential of LLMs, with applications ranging from natural language processing to chatbots [2] and virtual assistants [1]. 
LLMs have also demonstrated the ability to complete tasks that they were not explicitly trained on, making them a versatile tool for a variety of applications. However, despite their impressive capabilities, LLMs are also limited in several ways. One of the primary limitations is their tendency to employ imprecise language, which can be detrimental in domains where accuracy and responsibility are crucial. This has led to concerns about the ethical implications of using LLMs for certain applications, such as automated content creation or decision-making. In this vein, multiple experts from the field have demanded a pause in the development of large AI systems in their letter "Pause Giant AI Experiments: An Open Letter" [23]. ### LLM Agents An LLM agent employs the capabilities of state-of-the-art language models to perform complex tasks and make informed decisions. These agents can autonomously determine which actions to take, including utilizing various tools and observing their outputs or providing responses to user queries [16]. By leveraging the LLM's vast knowledge and understanding of natural language, agents can efficiently navigate through an array of tools and select the most appropriate one based on the given context. This enables the LLM agent to provide reliable, accurate, and contextually relevant solutions in diverse applications and domains. In contrast to a basic LLM, an LLM agent does not rely solely on the trained weights of the network and is thus capable of accessing data generated after the time of training. This is crucial in many contexts, such as climate change, where new information from companies, institutions, and organizations constantly needs to be incorporated into the AI system in order to provide an informed response. Figure 1: Setup for LLM agents that can access multiple sources, such as databases or general internet search. ### NLP in Climate Change Climate change is one of the most pressing challenges facing humanity today, with implications for everything from the environment to the economy [15]. In order to effectively address this challenge, it is critical to have accurate and reliable data about greenhouse gas emissions and other climate-related factors [14]. In the field of climate change, accurate and up-to-date information is of paramount importance to ensure that decision-makers, researchers, and the public can make informed choices and develop effective strategies [12]. Recent transformer models have started to be employed within the climate change domain, yielding improved accuracy in typical classification tasks [13, 14, 15, 16]. These models are capable of accounting for the context of words, enabling them to detect complex and implicit topic patterns in addition to many trivial cases. However, recent advances in LLMs have pushed the boundary towards more general models which are not restricted to typical classification tasks, but can serve as systems that can potentially be used to make judgments about, e.g., the quality of climate reporting. ### Use of Multiple Data Sources Given the challenges associated with obtaining accurate and reliable data, the use of multiple data sources has become increasingly common in research and decision-making in topics related to climate change [1]. By combining data from multiple sources, it is possible to enhance the accuracy and reliability of the results, as well as identify patterns and relationships that might not be apparent from a single data source alone. 
However, the integration of different data sources can be a challenging task, as the data may be in different formats, have different levels of granularity, or be subject to different biases. This has led to the development of various tools and techniques for data integration [15]. The potential of combining different data sources to enhance the accuracy and reliability of the results has significant implications for a wide range of applications, from scientific research to business decision-making. ## 3 Setting ### LLM Agent In this work, we build upon the LangChain package.1 In the LangChain ecosystem, the LLM agent operates by utilizing the ReAct framework. This framework enables the agent to select the most suitable tool based on the tool's description, provided that each tool has an associated description. The ReAct framework allows the LLM agent to work with any number of tools. Our LLM agent specifically employs the "text-davinci-003" model from OpenAI with a temperature setting of 0. This particular configuration maximizes precision, ensuring that the agent generates appropriate responses while maintaining a high degree of accuracy. The combination of the ReAct framework and the LLM's vast knowledge base empowers the agent to autonomously navigate through diverse tasks, making it an invaluable asset in a wide range of applications. Footnote 1: [https://github.com/hwchase17/langchain](https://github.com/hwchase17/langchain) The use of LLM agents in LangChain offers several advantages, including their ability to process vast amounts of information, understand context, and adapt to new tasks quickly. By harnessing the power of LLMs, agents can efficiently handle complex challenges and provide precise solutions that cater to the specific needs of users. This adaptability enables the LLM agent to continually evolve its understanding and decision-making processes, staying up-to-date with the latest developments and trends in various fields. As a result, the LLM agent becomes a valuable tool for users, researchers, and organizations alike, offering insights and solutions that might otherwise be difficult or time-consuming to obtain. In conclusion, LLM agents represent a significant advancement in the field of artificial intelligence, particularly in the context of LangChain and the ReAct framework. By leveraging the extensive knowledge and comprehension abilities of LLMs, these agents can autonomously navigate a diverse array of tools and generate contextually relevant, accurate, and reliable solutions. The versatility and adaptability of LLM agents make them an essential asset in various applications and domains, highlighting the immense potential for their future development and integration into increasingly complex and sophisticated AI systems. ### LLM Tools In our setting, we use two tools. Note that this only serves as an exemplary use case and we plan to extend this setting to include multiple data sources. The first tool employed by our LLM agent is a Python-based module that utilizes the powerful pandas library to access and manipulate data stored in dataframes. This tool enables the agent to seamlessly interface with structured data. In our case, this tool allows the agent to access data from ClimateWatch and perform complex data processing tasks using Python code. By leveraging the capabilities of pandas, the agent can efficiently filter, sort, aggregate, and transform the data, allowing it to extract precise and relevant information as needed. 
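To illustrate the kind of dataframe operations this tool performs, the following minimal sketch filters a ClimateWatch-style dataframe for one country and aggregates its yearly emission columns. The file name and loading step are assumptions made for this sketch; the column layout (a 'Country' column plus one column per year, with emissions in MtCO2e) follows the dataframe that appears in the experiments of Section 4.

```python
import pandas as pd

# Hypothetical export of the ClimateWatch emissions table: one row per country,
# one column per year of emissions (MtCO2e).
df = pd.read_csv("climatewatch_emissions.csv")

# Filter the rows for a single country and keep the year columns of interest.
years = ["2010", "2011", "2012", "2013", "2014", "2015"]
italy = df[df["Country"] == "Italy"][years]

# Aggregate: the mean over the selected years gives the average annual emission.
average_emission = italy.iloc[0].astype(float).mean()
print(f"Average emission of Italy 2010-2015: {average_emission:.2f} MtCO2e")
```

Applied to the yearly values that appear in Listing 2, this computation reproduces the average of 406.83 MtCO2e that the agent reports in its final answer.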
The integration of this tool with the LLM agent ensures that the AI system has direct access to accurate and up-to-date data, which is vital for generating reliable and responsible insights in the domain of climate change. The second tool, "google-serper," is designed to facilitate general Google searches within the context of the LLM agent. This tool allows the agent to perform targeted queries on the web, tapping into the vast repository of knowledge available through Google's search engine. By leveraging "google-serper," the LLM agent can complement the information retrieved from structured data sources, such as ClimateWatch, with additional context and insights from various web sources. This integration enables the agent to generate richer, more comprehensive responses to user queries. ### Input Prompt The prompt used in our study consists of a series of instructions that guide the agent on how to answer a given question. The prompt is structured in a way that allows the agent to access multiple sources of information, including ClimateWatch and Google search. Listing 1 shows this input prompt. The prompt begins with a section that describes the available tools to the agent, which are ClimateWatch and Serper Search. The agent is instructed to prioritize the use of ClimateWatch for answering general questions about emissions and to only use Google search if necessary for answering questions about current events. The ClimateWatch tool is described as handling pandas dataframes in Python, which suggests that the agent should be able to read and manipulate data in this format. The prompt then presents a format for the agent to follow, which consists of five sections: Question, Thought, Action, Action Input, and Observation. The Question section presents the input question that the agent must answer. The Thought section provides guidance for the agent to think about what to do, and the Action section describes the action that the agent should take. The Action Input section specifies the input to the action, and the Observation section presents the result of the action. These five sections can repeat multiple times until the agent is confident that it has the final answer, at which point it is instructed to provide the Final Answer. The prompt concludes with a reminder to the agent to prioritize the use of ClimateWatch and to only use Google Search if the needed information is not provided by ClimateWatch. This ensures that the agent uses reliable and accurate data sources for generating its response.2 Footnote 2: In our experiments, we noticed that the agent tends to use Google search, even in cases when the information is available in the more reliable source of ClimateWatch. Overall, the prompt is designed to guide the agent in accessing multiple sources of information and to prioritize the use of precise and reliable data sources for generating its response. By following the format presented in the prompt, the agent is able to systematically retrieve and process information to provide accurate and reliable answers to the input questions. 
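Complementing the full prompt shown in Listing 1 below, the following sketch illustrates how the two-tool ReAct agent described in this section could be assembled with a 2023-era LangChain API. The class and function names reflect that API as we understand it and may differ between LangChain versions; the ClimateWatch file path, the Serper API key configuration, and the exact tool wiring are assumptions, and the prioritization instructions of Listing 1 would additionally be supplied through the agent's prompt-customization arguments (omitted here for brevity).

```python
import pandas as pd
from langchain.agents import AgentType, Tool, create_pandas_dataframe_agent, initialize_agent
from langchain.llms import OpenAI
from langchain.utilities import GoogleSerperAPIWrapper

# LLM configured as described above: text-davinci-003 with temperature 0.
llm = OpenAI(model_name="text-davinci-003", temperature=0)

# Tool 1: ClimateWatch emissions data exposed through a pandas dataframe agent.
df = pd.read_csv("climatewatch_emissions.csv")  # hypothetical export file
climatewatch_agent = create_pandas_dataframe_agent(llm, df, verbose=True)
climatewatch_tool = Tool(
    name="ClimateWatch",
    func=climatewatch_agent.run,
    description=("useful for when you need to answer general questions about "
                 "emissions. You are working with a pandas dataframe in Python."),
)

# Tool 2: low-cost Google search via the Serper API (expects a SERPER_API_KEY
# environment variable to be set).
search = GoogleSerperAPIWrapper()
serper_tool = Tool(
    name="Serper Search",
    func=search.run,
    description=("A low-cost Google Search API. Useful for when you need to answer "
                 "questions about current events. Input should be a search query."),
)

# ReAct-style agent that selects a tool based on its description at each step.
agent = initialize_agent(
    tools=[climatewatch_tool, serper_tool],
    llm=llm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True,
)

agent.run("What is the average emission of Italy between 2010 and 2015?")
```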
```
Answer the following questions as best you can. Prioritize the "ClimateWatch" tool and only use other tools if necessary. You have access to the following tools:

ClimateWatch: useful for when you need to answer general questions about emissions. You are working with a pandas dataframe in Python.
Serper Search: A low-cost Google Search API. Useful for when you need to answer questions about current events. Input should be a search query.

Use the following format:

Question: the input question you must answer
Thought: you should always think about what to do
Action: the action to take, should be one of [ClimateWatch, Serper Search]
Action Input: the input to the action
Observation: the result of the action
... (this Thought/Action/Action Input/Observation can repeat N times)
Thought: I now know the final answer
Final Answer: the final answer to the original input question

Begin! Remember to prioritize the "ClimateWatch" tool and only use other tools if the needed information is not provided by "ClimateWatch".

Question: {input}
{agent_scratchpad}
```
Listing 1: Input prompt. ## 4 Experiments In the following we showcase two experiments: in the first, the LLM agent only needs to access the ClimateWatch tool, and in the second, it has to combine information from ClimateWatch and google-serper. ### Example with Single Data Source **Input:** What is the average emission of Italy between 2010 and 2015? The example with a single data source illustrates how the LLM agent uses the prompt to answer a specific question about the average emissions of Italy between 2010 and 2015. Listing 2 shows an extract of the chain of thought of the LLM agent. The process can be broken down into several steps: 1. Upon receiving the input question, the LLM agent first identifies the goal of finding the average emission of Italy between 2010 and 2015. 2. The agent then determines that the best data source for answering this question is the ClimateWatch tool, which is a pandas dataframe in Python. 3. The agent uses the python_repl_ast action to print the columns of the dataframe to gain an understanding of the available data. 4. With this information, the agent determines the relevant columns for answering the question, which are the ones corresponding to Italy and the years 2010 to 2015. 5. The agent then calculates the average emission of Italy between 2010 and 2015 and provides the final answer. Overall, the LLM agent follows a systematic approach to answering the input question, starting with identifying the goal and determining the best data source to use. The agent then processes the data to provide the final answer, with each step of the process being captured in the prompt response. This approach highlights the potential of LLMs to effectively retrieve and process information from data sources and provide accurate and reliable answers to complex questions. Listing 2: Chain of thought for input "What is the average emission of Italy between 2010 and 2015?"
Thought: I need to find the average emission of Italy between 2010 and 2015
Action: ClimateWatch
Action Input: pandas dataframe
Thought: I need to find out what the dataframe contains
Action: python_repl_ast
Action Input: print(df.columns)
Index(['Country', 'Data source', 'Sector', 'Gas', 'Unit', '2019', '2018', ..., '1990'], dtype='object')
Observation: None
Thought: I now know the columns of the dataframe
... 
Thought: I need to filter the dataframe for Italy and select the columns from 2010 to 2015
Action: python_repl_ast
Action Input: df[df['Country'] == 'Italy'][['2010', '2011', '2012', '2013', '2014', '2015']]
Observation:
   2010    2011    2012    2013    2014    2015
 448.37  437.82  419.63  387.64  367.92  379.58
Thought: I now know the final answer
Final Answer: Italy's emissions from 2010 to 2015 were 448.37 MtCO2e in 2010, 437.82 MtCO2e in 2011, 419.63 MtCO2e in 2012, 387.64 MtCO2e in 2013, 367.92 MtCO2e in 2014, and 379.58 MtCO2e in 2015.
Thought: I now know the average emission of Italy between 2010 and 2015
Final Answer: The average emission of Italy between 2010 and 2015 was 406.83 MtCO2e.
### Example with Combined Data Sources **Input:** "Which European country has the most ambitious net zero plans? How did the emissions of this country develop over the last 10 years? Remember to only include single countries." The example with combined data sources showcases how the LLM agent can effectively combine information from multiple data sources to answer a complex question. In this case, the question asks for the European country with the most ambitious net zero plans and how the emissions of this country developed over the last 10 years. Listing 3 shows an extract of the chain of thought of the LLM agent. The process can be broken down into several steps: 1. Upon receiving the input question, the LLM agent determines that it needs to find the European country with the most ambitious net zero plans and then look at the emissions data for that country. 2. The agent decides to use the Google Search tool to search for information on the most ambitious net zero plans in Europe. 3. The agent retrieves information from Google Search that Finland has passed arguably the world's most ambitious climate target into law, aiming to be the first developed country to reach net zero in 2035 and net negative by 2040. 4. The agent then determines that the best data source to look at Finland's emissions data is the ClimateWatch tool. 5. The agent uses the python_repl_ast action to filter the ClimateWatch tool's pandas dataframe to only include data for Finland. 6. The agent then observes that Finland's emissions have decreased from 38.54 MtCO2e in 1990 to 58.42 MtCO2e in 2019, providing the final answer to the question. Overall, the LLM agent demonstrates the potential to effectively combine information from multiple data sources to answer complex questions. Note, however, that the LLM agent provides information about the emissions from 1990 to 2019 and not over the last 10 years as asked for. However, with the recent developments, we believe such shortcomings will be resolved very soon.3 Footnote 3: Note also that we did not use the most recent LLM models in these experiments. Listing 3: Chain of thought for input "Which European country has the most ambitious net zero plans? How did the emissions of this country develop over the last 10 years? Remember to only include single countries."
Thought: I should look for the country with the most ambitious net zero plans and then look at the emissions data for that country.
Action: Serper Search
Action Input: "European country with most ambitious net zero plans"
Observation: Finland has passed arguably the world's most ambitious climate target into law. It aims to be the first developed country to reach net zero, in 2035, and net negative - absorbing more CO2 than it emits - by 2040. May 31, 2022
Thought: I should now look at the emissions data for Finland. 
Action: ClimateWatch
Action Input: Finland emissions data
Thought: I need to find the emissions data for Finland
Action: python_repl_ast
Action Input: df[df['Country'] == 'Finland']
Observation:
    Country   Data source                Sector      Gas    Unit   2019  \
82  Finland  ClimateWatch  Total including LUCF  All GHG  MtCO2e  58.42
     2018   2017   2016   2015 ...   1999   1998   1997   1996   1995   1994  \
82  62.43  60.86  63.65  -0.28 ...  39.51  40.18  43.79  45.69  39.23  43.21
     1993   1992   1991   1990 ...
Final Answer: Finland has the most ambitious net zero plans and its emissions have decreased from 38.54 MtCO2e in 1990 to 58.42 MtCO2e in 2019.
## 5 Discussion **Limitations.** While the described approach offers a promising solution for leveraging the potential of LLMs in domains where accuracy and responsibility are crucial, there are also limitations to the current state-of-the-art. One of the primary limitations of current LLMs is their limited context length, which can make it difficult for them to accurately capture complex relationships and patterns that require a broader context. However, progress is being made in this area, with recent models like GPT-4 demonstrating significantly improved performance on longer sequences of text (OpenAI 2023a). Another limitation of LLM agents is their potential to misunderstand the question, as seen in our second experiment. This highlights the importance of developing robust methods for evaluating the accuracy and reliability of the results. Finally, in this work, we merely used two data sources. In practice, hundreds or thousands of data sources should be used to combine the most precise information on each topic. While this potentially raises technical issues, recent efforts by companies such as Microsoft, which recently published a paper on the use of thousands of tools for NLP applications, are promising and offer potential solutions to overcome these limitations (Liang et al., 2023). **Implications for NLP in Climate Change.** The described approach has significant implications for the field of NLP in the context of climate change research and policy-making. By leveraging the potential of LLMs to generate human-like text and integrating multiple data sources, it is possible to enhance the accuracy and reliability of the results, as well as facilitate more informed decision-making. This has significant implications for a wide range of applications, from climate modeling to environmental policy-making. Additionally, the approach used here offers a solution for addressing some of the limitations associated with LLMs, such as the outdated information stored in the model itself and their tendency to employ imprecise language. By incorporating precise data sources and developing methods for evaluating the accuracy and reliability of the results, it is possible to overcome some of these limitations and enhance the applicability of LLMs in domains where precision and responsibility are crucial. **Implications for Research.** The described approach has implications beyond the specific application of climate change research and policy-making. It represents a novel solution for leveraging the potential of LLMs in domains where precision and accuracy are paramount, and highlights the potential of combining multiple data sources to enhance the reliability and accuracy of the results. Potential use cases include, for instance, applications in healthcare or legal reasoning. Here, this work can serve as a starting point for future research in this area. 
While there are still limitations to the current state-of-the-art, development is progressing at an incredible pace, and it is only a matter of time until the majority of these limitations are solved as well. Overall, the described approach represents a significant step forward for the field of NLP and has the potential to drive advancements in both research and practice. **Carbon Footprint.** While LLMs have demonstrated impressive capabilities in generating human-like text across diverse topics, they come with a significant energy consumption cost (Hershcovich et al., 2022). Training these models requires massive amounts of computational resources, which in turn generates considerable greenhouse gas emissions. Recent estimates suggest that training a single large LLM, like GPT-3, can emit as much as 626,000 pounds of carbon dioxide equivalent, which is roughly equivalent to the lifetime emissions of five average cars (Hao, 2019). These emissions can have a significant impact on the environment, particularly in light of the urgent need to reduce global carbon emissions. As such, there is a need for further research into methods for optimizing the training and inference of LLMs to minimize their environmental impact. For example, recent approaches like LoRA (Hu et al., 2021) or low-precision optimizations (Dettmers et al., 2022, 2022) aim to reduce the energy consumption and carbon emissions associated with LLMs. Further research in this area is crucial to ensure that the benefits of LLMs are not outweighed by their environmental costs, and to enable their widespread adoption in a sustainable manner. ## 6 Conclusion In conclusion, this paper demonstrates the potential of LLMs in the field of climate change by employing an approach that integrates multiple sources of information to correctly answer questions that cannot be answered using only the original model weights. Our prototype LLM agent retrieves information from general Google search and emission data from ClimateWatch to provide reliable and accurate information. Through two exemplary experiments, we showcase how such an LLM agent can operate to enhance the accuracy and reliability of climate-related text generation. This work contributes to the exploration of LLM applications in domains where up-to-date and accurate information is critical, and encourages further research on integrating LLMs with external data sources for more responsible and reliable AI systems.
2309.16788
Quantum Computing, Math, and Physics (QCaMP): Introducing quantum computing in high schools
The nascent but rapidly growing field of Quantum Information Science and Technology has led to an increased demand for skilled quantum workers and an opportunity to build a diverse workforce at the outset. In order to meet this demand and encourage women and underrepresented minorities in STEM to consider a career in QIST, we have developed a curriculum for introducing quantum computing to teachers and students at the high school level with no prerequisites. In 2022, this curriculum was delivered over the course of two one-week summer camps, one targeting teachers and another targeting students. Here, we present an overview of the objectives, curriculum, and activities, as well as results from the formal evaluation of both camps and the outlook for expanding QCaMP in future years.
Megan Ivory, Alisa Bettale, Rachel Boren, Ashlyn D. Burch, Jake Douglass, Lisa Hackett, Boris Kiefer, Alina Kononov, Maryanne Long, Mekena Metcalf, Tzula B. Propp, Mohan Sarovar
2023-09-28T18:26:17Z
http://arxiv.org/abs/2309.16788v2
# Quantum Computing, Math, and Physics (QCaMP): Introducing quantum computing in high schools ###### Abstract The nascent but rapidly growing field of Quantum Information Science and Technology has led to an increased demand for skilled quantum workers and an opportunity to build a diverse workforce at the outset. In order to meet this demand and encourage women and underrepresented minorities in STEM to consider a career in QIST, we have developed a curriculum for introducing quantum computing to teachers and students at the high school level with no prerequisites. In 2022, this curriculum was delivered over the course of two one-week summer camps, one targeting teachers and another targeting students. Here, we present an overview of the objectives, curriculum, and activities, as well as results from the formal evaluation of both camps and the outlook for expanding QCaMP in future years. Quantum Information Science and Technology, Quantum Education, Quantum Outreach ## I Introduction Quantum Information Science and Technology (QIST) is a nascent field that has been recognized by the US government as a key strategic investment area due to its promising applications in sensing, time-keeping, computing, and communications, and it is set to see significant investment in the years to come [1, 2]. As the field expands, so do the demands for a workforce with the skills, expertise, and educational experiences needed to meet the ever-evolving requirements. There is broad recognition that a critical need exists to grow the QIST workforce across the full educational spectrum - not just at the PhD level - and to engage new communities if the US is to maintain leadership in the field [1]. Current efforts to address the QIST workforce demand largely focus on graduate level programs and upskilling quantum-adjacent professionals from computer science, engineering, math, and other STEM-fields. Unfortunately these fields traditionally lack diversity [3]. We believe that introducing QIST concepts at the high school level will enable us to reach a more diverse group of students, and this awareness will enable students to be better prepared for a career in QIST. To that end, we launched the Quantum Computing, Math, and Physics camp (QCaMP) in 2022 to introduce QIST concepts to high school teachers and students with the following goals: * Introduce topics through meaningful hands-on puzzles, experiments, and demos with no advanced mathematics or prerequisites, * Provide teachers with the tools, experience, and modules that are in line with classroom standards and easy to integrate into high school curricula, * Expose students to different career pathways, * Reach and relate to participants with diverse demographics, including gender, racial/ethnic, regional, and socio-economic diversity. QCaMP brought together 27 contributors from 8 institutions spanning national laboratories, academia, industry, and educational non-profits to teach 32 students and 20 teachers about QIST over two one-week-long virtual camps. To encourage socio-economic diversity, materials were sent to participants free-of-charge and each participant received a stipend. Financial stress disproportionately affects the underrepresented communities we hope to reach with this program [4]. As such, prospective participants in programs like this one face the decision of working summer jobs to support themselves and their families vs attending summer camps that could prepare them for better educational and career opportunities. 
We sought to mitigate these financial impacts by providing teachers with \(\$1000\) for the week and students with \(\$250\) for the week. Stipend amounts were chosen to be comparable to a 1-week teacher salary in NM and to half-week at minimum wage for teachers and students, respectively. These stipends made up the majority of QCaMP's costs. We present an overview of our curriculum in Section II, outcomes from the pilot camp evaluations in Section III, and outlooks for further refining and expanding the program in Section IV. ## II Curriculum QIST researchers from national laboratories and academia collaboratively developed the curriculum around topics shown in Table I. Our aim was to make the material accessible to a broad range of students by avoiding any prerequisite math, physics, or computer science courses and to incorporate experiential and active learning wherever possible. Beginning with no assumed background knowledge, the curriculum introduces students to relevant topics, culminating in a research project using IBM's cloud-based quantum computer (IBM Q composer) at the end of the week. The curricula for the teacher camp and student camp were largely the same. Key differences include the following: teacher camps included additional time for pedagogical discussions and resource sharing while student camps included career talks and follow-on extracurricular educational opportunities. These topics were covered during discussion and Q&A time and via mini-talks at the end of each day. For both camps, each day ended with a Q&A session utilizing Padlet.com. This tool offered an anonymous platform for soliciting feedback, questions, and comments from the participants. The following morning, we addressed the common misconceptions or areas that needed additional instruction. In the following, we describe each module in more detail. ### _Classical Bits and Gates_ On Monday morning, we began by introducing classical computing fundamentals, including bits and gates. We used the formalism for bits and gates developed in [5] and utilized in [6] - white and black balls representing 0 and 1 bits, respectively, which are dropped through various boxes representing gates. Basic gates including NOT, CNOT, and SWAP were introduced and reinforced through guided exercises in which participants worked out truth tables for individual gates and determined the output of various combinations of gates. We also discussed how information could be stored in anything with two mutually exclusive states, setting the stage for the later lab tours showcasing different qubit hardware types. Finally, we acknowledged that current hardware remains noisy and described simple schemes for error detection and correction. ### _Probability and Statistics_ We acknowledged that students would need some understanding of probability and statistics in order to interpret and understand randomness in the outcomes of quantum experiments. A very short and practical introduction to probability and statistics was provided on Monday afternoon. We found that although most students did not have formal training in these topics, they had intuition for the concept of probability and could reason about events, likelihoods, _etc._ The probability portion of the lesson explained basic combinatorics principles, including mutually exclusive and independent events. 
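As a hypothetical illustration of the level at which these ideas were pitched (the snippet below is our own sketch, not part of the camp materials), a short simulation can contrast mutually exclusive outcomes of a single coin flip with independent outcomes of two coin flips.

```python
import random

random.seed(0)
trials = 100_000

# One fair coin: "heads" and "tails" are mutually exclusive outcomes,
# so P(heads or tails) = P(heads) + P(tails) = 1.
heads = sum(random.random() < 0.5 for _ in range(trials))
print("P(heads) ~", heads / trials)  # close to 0.5

# Two fair coins: the flips are independent events,
# so P(both heads) = P(heads) * P(heads) = 0.25.
both_heads = 0
for _ in range(trials):
    flip1 = random.random() < 0.5
    flip2 = random.random() < 0.5
    if flip1 and flip2:
        both_heads += 1
print("P(both heads) ~", both_heads / trials)  # close to 0.25
```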
Then, the statistics portion focused on practical computations the participants would perform in the hands-on portions of the camp, especially in the final project's analysis of quantum circuits run on IBM's cloud-based quantum processors. We introduced concepts such as mean, variance, and standard deviation; provided intuition for each of these quantities; and then demonstrated how they can be computed on real data sets using the Google Sheets spreadsheet application. ### _Light Labs_ On Tuesday, we included more physics-related topics using hands-on activities that had students and teachers step away from the computer. We provided participants with necessary materials prior to the start of the camp, shipping them lasers, apertures, and polarizers that they could use in their experiments (see Section II-I for more details about these kits). We began by exploring how light passes through apertures. We also used the University of Colorado at Boulder's PhET Interactive Simulations [7] to hone in on these concepts which culminated in a double slit experiment by the end of the morning. Using the concepts they learned performing these lab experiments, we connected the wave-particle nature of light to their observations and began to introduce the concepts of superposition and measurement. Tuesday afternoon was devoted to polarization in a lesson adapted from the University of Waterloo's Schrodinger's Class materials [8]. In order to accommodate students who have not yet taken trigonometry, a more conceptual approach was taken, exploring phenomena through direct observations of the effects of polarizers. The activity started by exploring how a single polarizer blocks light from a computer screen and any other sources of light that teachers and students wanted to observe. To connect back to Monday's lessons, the topic of mutually exclusive states was reintroduced, highlighting the fact that polarization provides mutually exclusive states that can store information. Next, teachers and students used two polarizers to directly observe which polarization states were mutually exclusive, followed by an introduction of the concept of measurement to make sense of their observations. Finally, the camp participants observed the surprising effect of inserting a diagonal polarizer between vertical and horizontal polarizers, bringing in the concept of superposition and relating it again to measurement. ### _Superposition and the Hadamard Gate_ On Wednesday morning, we formally introduced the concepts of superposition and measurement, drawing from examples camp participants encountered the day before in the lab session. We presented the differences between classical and quantum superposition and found that the most challenging portion of this lesson for both students and teachers was grasping the physical concept behind mixed quantum states. There was ample time dedicated at the beginning of this session for brainstorming and discussion on its physical realization in the world. Teachers were additionally provided content on how superposition could be represented using vectors, building upon high school trigonometric concepts of the unit circle to guide them to a basic understanding of a mixed state as it would be depicted in a Bloch sphere. Conceptual details about the Bloch sphere were presented to the students in a separate mini-talk (see Section II-G), but teachers found the math very useful in this session. 
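To sketch the vector picture presented to the teachers (using numbers of our own choosing rather than the camp worksheets), a qubit state can be written as a two-component vector whose squared amplitudes give the measurement probabilities:

```python
import numpy as np

# Computational basis states as vectors.
ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

# An equal superposition of |0> and |1>.
superposition = (ket0 + ket1) / np.sqrt(2)

# Measurement probabilities are the squared magnitudes of the amplitudes.
probabilities = np.abs(superposition) ** 2
print(probabilities)  # [0.5 0.5]: a 50/50 chance of observing 0 or 1
```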
We then finally introduced the quantum Hadamard gate, building upon the foundation introduced on the first day with classical gates. In a similar vein, black and white colored balls represented observed qubits. Dropping them through the H box (i.e., the Hadamard gate) produced a 'misty' state as described in [5], where the balls emerged in some superposition of black and white. We included additional exercises combining classical and quantum gates to cement this learning. While performing these puzzles, the participants were also taught about the intrusive nature of measurement and the induced collapse of the 'misty' state into one of the two observed black or white balls. ### _Entanglement_ Entanglement was introduced as a final module on Thursday morning. While we initially developed an activity to measure the CHSH value using the QuVis simulations [9], we realized during the teacher camp that this was much too advanced for our audience. For the student camp, we significantly modified this lesson to conceptually discuss the differences between entanglement and classical correlation. We also acknowledged that scientists are still debating the explanations behind these concepts in an effort to encourage similar discussion amongst the students. The many-worlds theory seemed to inspire the most discussion, as students could relate somewhat easily due to similar interpretations appearing in pop culture entertainment. ### _IBM Q_ The camp was structured to culminate in a final project using IBM Q, reflecting our guiding principle that the best way to learn how a quantum computer works is to actually use one. In fact, access to quantum hardware immediately distinguishes this field from its predecessor, classical computing. The invention of the Personal Computer (PC) came decades after computers became useful for solving complex problems. Computing access and network access for everyone revolutionized our society. More and more quantum computing architectures are becoming publicly available through the cloud and - typically - accessible to everyone. On Wednesday afternoon, we introduced camp participants to the IBM Quantum Composer [10], a visual drag-and-drop user interface that requires no prerequisite programming experience. Teachers and students were familiarized with this tool first by reproducing various classical gate puzzles and using the individual qubit "Prob of \(|1\rangle\)" readout as well as the State Vector plots to verify their results. Later, these features were used to reinforce the results of puzzles incorporating the Hadamard gate. Finally, teachers and students learned how to perform measurements and run small single and multi-qubit circuits on IBM's publicly available quantum computers. Building upon the probability and statistics lesson, they interpreted resulting histograms to understand read-out errors and CNOT errors. Many participants were shocked to learn that real quantum computers exist today and can be accessed via the cloud for free. As a final project, on Thursday and Friday, we designed a hands-on research exercise using IBM Q that built upon the material learned throughout the camp, making quantum computing come to life and providing a practical and relevant tool for the students and teachers to use after the course. For this exercise, participants were divided into small groups, and each group was assigned one of IBM's quantum computers. The teams constructed quantum circuits and submitted them as jobs using Quantum Composer. 
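The circuits themselves were built graphically in Quantum Composer; for readers who prefer code, the following Qiskit sketch builds the same kind of two-qubit Bell-state circuit and estimates the fraction of correlated outcomes from a counts dictionary. The use of Qiskit and of the counts shown below is our own illustration rather than part of the camp materials, the numbers in the dictionary are made up, and API details may vary between Qiskit versions.

```python
from qiskit import QuantumCircuit

# Bell-state circuit: Hadamard on qubit 0, then CNOT from qubit 0 to qubit 1.
bell = QuantumCircuit(2, 2)
bell.h(0)
bell.cx(0, 1)
bell.measure([0, 1], [0, 1])
print(bell.draw())

# Hypothetical counts returned after running the circuit (e.g., 1024 shots on
# an IBM Q backend); the values are invented for illustration only.
counts = {"00": 492, "11": 478, "01": 29, "10": 25}

shots = sum(counts.values())
correlated = counts.get("00", 0) + counts.get("11", 0)
print(f"Fraction of correlated outcomes: {correlated / shots:.3f}")
```

For an ideal Bell state the outcomes 00 and 11 each occur with probability one half, so the correlated fraction approaches 1; deviations from this reflect the readout and CNOT errors that the groups quantified in their spreadsheets.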
We provided a worksheet detailing research objectives along with a series of questions to spark critical thinking as they progressed through the exercise. We also provided spreadsheets to record and evaluate their results. Specifically, the camp participants were asked to evaluate CNOT error and readout error for each experimental device to predict which computer would perform best. Then the participants proceeded to run 5 experiments for each single-qubit and multi-qubit quantum circuit on their assigned quantum computer. The multi-qubit quantum circuit was designed to generate Bell states to illustrate correlations in measurement outcomes. They recorded measurement outcomes for \(|0\rangle\) and \(|1\rangle\) in the spreadsheet for statistical analysis. Correlation in measurement outcomes revealed which two-qubit experiments produced entangled qubits. Groups were asked to compare their measured error rates to the other groups' and to IBM's specified errors. Working through the final project introduced the students and teachers to numerous concepts in QIS experimentation, but it also taught us which concepts and activities were the most challenging for them. Students best grasped how to characterize quantum devices and how to get results by running Quantum Composer. Quantum circuit implementation and device characterization proved to be one of the least challenging aspects of the activity due to the substantial preparation from the week's curriculum. In fact, it was using a spreadsheet and the concept of correlation for data analysis that presented the largest challenge, as most students had never used a spreadsheet prior to the research exercise. Therefore, our lesson not only taught quantum information concepts, but it also taught the students practical skills needed for quantitative tasks. Overall, correlation was the most difficult concept for them to grasp as statistical analysis is absent in most high school classrooms. However, throughout this camp and final project, the students gained a better understanding of correlation and how it connects with measurement of entangled qubits. These observations, and those in the previous paragraph, were based on the interactions between the instructors and students rather than a quantitative evaluation. The main outcome of the IBM Q research exercise was reinforcement of the scientific method - hypothesis, experiment, analysis, results-driven hypothesis justification - through critical thinking and hands-on experimentation. Using the IBM Quantum Experience and spreadsheets also provided practical skills that the students could use as they progress into the next academic stage of their lives, whether in STEM or other related fields. ### _Additional Sessions_ In addition to the main modules described above, a number of mini-talks were incorporated throughout the week to provide participants with exposure to career pathways, follow-on opportunities, resource sharing, lab tours, etc. Here we provide details on a few of these additional sessions. **Vectors and the Bloch Sphere:** Building upon concepts introduced during the superposition and Hadamard gate module, the topics of vectors and the Bloch sphere were reintroduced utilizing an interwoven format [11]. First, students were guided through contrasting addition of "normal numbers" with addition of walking directions in a hands-on manner. This form of highly active learning has been found effective in promoting science efficacy and attitude in informal science outreach [12]. 
Next, students were guided through an analysis of the state-space of a classical coin: a line of mixed states connecting two pure endpoints (heads and tails). This idea was then incorporated into the larger state-space of a quantum coin, where the previously-learned superposition states along the equator were contrasted with the maximally mixed state at the sphere's center. Finally, students were introduced to vector manipulation on the sphere via rotations about an axis, connecting the visual representation to the fundamental Pauli single-qubit gates discussed previously during the module. **Superconducting Qubits and Superconducting Testbed Virtual Lab Tour:** After the participants had been introduced to IBM's Quantum Composer, this talk provided an "under-the-hood" view of superconducting qubit operation and hardware. We gave an overview of the basic principles and operation of superconducting qubits and used the virtual lab tour at DOE's Advanced Quantum Testbed [13] to introduce participants to typical hardware and lab environments for this qubit platform. **Atomic Qubits and Trapped Ion Virtual Lab Tour:** To tie in the light lab activities performed earlier in the week, we also gave participants an introduction to atomic qubits, which included the basics of confining an ion with oscillating voltages, Doppler/laser cooling, and \(\pi\)/2 pulses to create superposition states. We showed photographs of laboratory systems, single ions, and chains of ions. With support from the UC Berkeley trapped ion group, we concluded this mini-talk with a live walk-through of their lab. **Big Picture:** On the final day of the teacher and student camps, there was a "big picture" mini-talk that provided a high-level survey of the history, development, and current status of quantum computing. The aim was to put the topics that were discussed during the week within the context of the large, dynamic, world-wide QIST research effort and to connect the concrete progress made by the camp participants to the current state of the field, e.g., to show that some of the activities they engaged in during the week such as the IBM Q experiments are also part of current QIST research practice. This talk concluded by encouraging participants to find out more about QIST and join the research community in this exciting endeavor to build and leverage a quantum computer. **Near-Peer Mentorship:** As a closing activity, one of the instructors (a junior post-doc) led a near-peer mentorship session [14], answering student questions about education and careers in quantum information. As a transgender scientist, the instructor was also able to speak to the experience of working in QIST as multiple genders, as well as provide tools for resilience for all students. LGBTQ+ belonging in QIST was heavily emphasized. This session was especially important for transgender and gender-non-conforming students, who exhibit lower persistence in STEM than their cisgender peers [15]. ### _Incorporating QIST in the High School Classroom_ To aid teachers in incorporating QIST lesson modules in their classrooms, each topic covered during QCaMP was linked with relevant Next Generation Science Standards (NGSS) [16]. NGSS are currently adopted by twenty states in the United States of America, in addition to twenty-four states that have science standards influenced by the NGSS framework. In particular, the focus was to expand on NGSS Disciplinary Core Idea (DCI) PS4: Waves and Their Applications in Technologies for Information Transfer. 
For high school students, this DCI includes standards that cover classical waves concepts such as mechanical waves and electromagnetic waves, as well as a conceptual introduction to the wave-particle duality. By expanding on this DCI, teachers can build upon classical physics already covered in their classrooms to transition into quantum concepts. In addition to NGSS, topics were also linked with Quantum Information Science (QIS) Key Concepts for K-12 Physics from the National Q-12 Education Partnership [17], as well as Advanced Placement (AP) Physics 2 learning objectives from the College Board. Table II clarifies which frameworks, standards, and learning objectives are linked to topics covered in QCaMP. During the teacher camp, debrief sessions were intentionally placed throughout the week after each lesson module so teachers could have the opportunity to discuss with each other and the instructors their perspectives on the lessons. Debriefs were structured so that teachers could share key concepts they learned, how they would modify the lesson modules so they could meet the needs of their students, and any lingering questions. At the end of the week, teachers also had the opportunity to share additional resources and strategies with each other to support student success in the classroom. The debrief sessions not only enabled the teachers to reflect on their learning of the material, but also helped us identify adjustments for future iterations of the lessons. ### _Materials_ To facilitate hands-on learning despite the virtual format of the camp, we sent all participants a kit of materials prior to the camp start date. Materials included necessary items for the Probability and Statistics module (Section II-B), Light Labs modules (Section II-C), and a copy of the Q is for Quantum book [5] that informed the box gate formalism of the Classical Bits and Gates (Section II-A) and Superposition and the Hadamard Gate (Section II-D) modules. For convenience, we include a list of materials, vendors, and rough costing in Table III below. ## III Outcomes ### _Methods_ To evaluate impact on teacher and student attitudes about camp goals, the evaluation team worked with the camp team to design a survey with questions that reflected these areas. The survey was administered online at the end of each teacher and student camp through the Research Electronic Data Capture (REDCap) software [19]. Evaluation was approved through the local Institutional Review Board (IRB). Questions asked about demographic areas and whether respondents felt they gained certain skills from the camp. Participants also had opportunities to provide open-ended feedback about their experiences. ### _Diversity_ Evaluation captured feedback from a diverse set of participants within both teacher and student cohorts. #### Iii-B1 Teachers In total, 15 middle and high school teachers completed evaluation surveys. The primary subject areas of these teachers included science (e.g. biology, chemistry, computer science, physics) as well as engineering, math, and robotics. Their years of experience as a classroom teacher ranged from 1 to 34 years with the average being 12 years. Demographic questions captured teachers' race/ethnicity, as presented in Table IV below. 7 teacher participants identified with underrepresented racial groups in STEM: 6 participants identified as Hispanic or Latino and 1 identified as Native Hawaiian or Other Pacific Islander. Regarding gender identity, 9 teachers identified as male and 6 identified as female. 
#### Iii-B2 Students The 20 QCaMP student participants who completed an evaluation survey also represented diverse identities. Table V below shows the students' race/ethnicity responses. 7 student participants identified with underrepresented racial groups in STEM: 2 identified as American Indian or Alaska Native, 2 as Black or African American, and 3 as Hispanic or Latino. Regarding gender identity, 11 students identified as male, 8 as female, and 1 as gender non-binary. As the camp was open to high school students in the summer, students were asked what grade level they would enter in the following fall. Nearly half of the students (\(n=9\)) reported entering the 12th grade; the responses are provided in Table VI below. To capture college generational status, students were asked if either of the student's parent(s) or guardian(s) earned a college degree. The responses are tallied in Table VII, with the majority of students (\(n=12\)) indicating that "yes," one of their parent(s) or guardian(s) earned a college degree. 7 students responded "no," which means that if they choose to pursue a post-secondary education, they would be a first-generation college student. For the questions in Tables VIII and IX, there were five levels of agreement for the response options that ranged from Strongly Disagree to Strongly Agree with a Neutral option in the middle. To calculate averages, all responses were given a value ranging from 1 (assigned to responses of "Strongly Disagree") to 5 (assigned to responses of "Strongly Agree"). The questions were prefaced with "As a result of my attendance at QCaMP..." to help participants focus on impact from the camp experience specifically. ### _Teacher Camp Evaluation_ Overall, teachers reported strong positive impacts of the camp. In particular, they felt more confident delivering instruction in areas such as quantum and physics, that their students would benefit from teachers attending QCaMP, and that they learned new ways to engage students in QCaMP material. All questions and averages can be found in Table VIII. There were also opportunities for teachers to answer open-ended questions. For example, they were asked why they decided to pursue QCaMP, and most teachers reported wanting to expand their knowledge regarding quantum computing and to be able to implement it in classrooms or extracurricular activities. The following are some of the teachers' direct responses: * _"I'm going into my first year of teaching and I thought it wold [sic] be extremely helpful to be informed about the research being done currently and how I could create lessons that inform students about it!"_ * _"Several students in my STEM club have wanted to learn more about quantum computing, and I've always been curious about how to code for them."_ Regarding what elements of the camp were enjoyed the most, teachers appreciated the subject experts (scientists) involved in the camp and the interaction with other teachers. Finally, there was an opportunity to provide any other feedback; the following are some of the responses: * _"An overall excellent experience, thank you for the opportunity to uncover so much I knew nothing about and get me excited about the future of computing!"_ * _"This was an amazing camp! Thank you so much for informing me about all these topics and connecting me with other teachers across the US! 
It was so much fun and I know the kids will equally enjoy this camp!"_ ### _Student Camp Evaluation_ Students also reported an overall positive experience, particularly when asked if they understand more about quantum computing, how mistakes form part of the scientific process, and if they are more likely to take a class in related subjects during the upcoming school year. All questions and averages can be found in Table IX. Similar to teacher evaluations, students had the opportunity to answer a number of open-ended questions. They reported a variety of aspects they enjoyed most out of the camp, including meeting the subject experts (scientists), participating in experiments, and learning about career opportunities. The following are some of their responses: * _"I got to learn a lot about things I don't learn about in school, and it was really cool."_ * _"Getting to see real scientists explain quantum mechanics and computers to us."_ For any additional feedback, students provided suggestions for future QCaMPs, particularly suggesting that we host an in-person camp. Other responses reflected the positive time the students seemed to have. The following are some of their responses: * _"Overall it is a good experience even though it was defeniently [sic] new for me."_ * _"It was fun and I really learned a lot"_ ## IV Outlook Due to the overwhelmingly positive evaluation feedback and continued growth in the QIST field, we will build upon 2022's developed curriculum to offer QCaMP 2023. This next camp has a few notable differences from 2022. Firstly, by popular request, in addition to a virtual option, we will also host in-person cohorts in regions with multiple participants. There will be an in-person cohort of teachers in Albuquerque, NM, and in-person cohorts of students in Albuquerque and Santa Fe, NM. The virtual and in-person camps will take place simultaneously - one week for teachers and another week for students - with virtual instruction by QIST researchers to limit demand on researcher time. We are anticipating 24 teachers and 50 students in 2023. Secondly, we are adjusting the 2023 curriculum to incorporate even more time and activities on IBM's Quantum Composer as it was one of the more popular activities from 2022. In addition to the research project, hands-on IBM Q exercises will be included in the Probability and Statistics module and the Superposition module. This change will not only help reinforce concepts through active learning, but also underscore connections between topics across the curriculum. To accommodate, the Intro to IBM Q module will occur on Monday afternoon, swapping with the Probability and Statistics module. We also decided to spread the Light Labs over two days to better balance experimental activities and screen time. An overview of the 2023 curriculum can be found in Table X. Thirdly, we are excited to partner with SparCQS, an NSF Center for Quantum Networks initiative, led and operated out of Northern Arizona University. Expanding the reach of a traditional outreach program, SparCQS includes a mobile quantum laboratory bringing 'hands-on' quantum science experiences directly to schools and communities [20]. SparCQS will join our in-person Albuquerque cohorts on Friday of both the teacher and student camps. During this time, we will introduce our virtual participants to various online quantum games. 
We would like to expand further in future years by offering in-person cohorts across the country as well as virtual options to continue to make QCaMP accessible to as many under-represented communities as possible. To be successful, we will need the help of our broader QIST community to recruit students in their regions, to identify classroom spaces for hosting in-person cohorts, to facilitate in-person cohorts (student supervision, tech support, etc.), and to help identify potential financial sponsors for their region's participant stipends. As another option, we are moving toward making all of our material (lesson slides, worksheets, and video recordings) freely available for anyone who wishes to use them in their own classrooms or camps. Two of the most impactful outcomes of this effort have been sparking further curiosity by students into more complicated concepts in quantum information science and creating a channel between expert QIST researchers and prospective QIST students. Following a pilot program at JSTI Virtual 2021 [21], wherein we used a similar curriculum spread over a two-week virtual program, one of the students worked with an organizer on science fair projects investigating quantum teleportation and quantum machine learning. One of these projects received numerous regional, national, and international awards, opening the previously unattainable opportunity to gain acceptance to top US undergraduate institutions. This student will be starting in a top undergraduate physics program focusing on QIST. This example demonstrates that it is possible to teach quantum computing to high school age students, and the outcomes can be profound. ## Acknowledgment We acknowledge leadership activities performed by Amy Tapia of the Community Engagement Office at Sandia National Laboratories, Faith Dukes of the K-12 Programs at Lawrence Berkeley National Laboratory, and Yolanda Lozano of the Computer Science Alliance; and classroom facilitation by Emily Clauss of La Cueva High School and Cari Hushman of the Department of Education at the University of New Mexico. Additional QCaMP mini-talk speakers were Will Kindel (IQM), Francis Vigil (NIEA), Kayla Lee (IBM), and Lauren Thomas Quigley (IBM). Thanks to the Haeffner Group at UC Berkeley for facilitating a virtual trapped ion lab tour. This material is based upon work supported by the U.S. Department of Energy, Office of Science, National Quantum Information Science Research Centers, Quantum Systems Accelerator. Additional support is acknowledged from Sandia National Laboratories, Lawrence Berkeley National Lab, and IEEE. Tz. B. Propp acknowledges funding from National Science Foundation Grant No. PHY-1630114. This article has been co-authored by employees of National Technology & Engineering Solutions of Sandia, LLC under Contract No. DE-NA0003525 with the U.S. Department of Energy (DOE). The authors own all right, title and interest in and to the article and are solely responsible for its contents. The United States Government retains and the publisher, by accepting the article for publication, acknowledges that the United States Government retains a non-exclusive, paid-up, irrevocable, world-wide license to publish or reproduce the published form of this article or allow others to do so, for United States Government purposes. 
The DOE will provide public access to these results of federally sponsored research in accordance with the DOE Public Access Plan [https://www.energy.gov/downloads/doe-public-access-plan](https://www.energy.gov/downloads/doe-public-access-plan). This paper was prepared for information purposes, and is not a product of HSBC Europe or its affiliates. Neither HSBC Europe nor any of its affiliates make any explicit or implied representation or warranty and none of them accept any liability in connection with this paper, including, but not limited to, the completeness, accuracy, reliability of information contained herein and the potential legal, compliance, tax or accounting effects thereof.
2309.09870
Zero-Shot Policy Transferability for the Control of a Scale Autonomous Vehicle
We report on a study that employs an in-house developed simulation infrastructure to accomplish zero shot policy transferability for a control policy associated with a scale autonomous vehicle. We focus on implementing policies that require no real world data to be trained (Zero-Shot Transfer), and are developed in-house as opposed to being validated by previous works. We do this by implementing a Neural Network (NN) controller that is trained only on a family of circular reference trajectories. The sensors used are RTK-GPS and IMU, the latter for providing heading. The NN controller is trained using either a human driver (via human in the loop simulation), or a Model Predictive Control (MPC) strategy. We demonstrate these two approaches in conjunction with two operation scenarios: the vehicle follows a waypoint-defined trajectory at constant speed; and the vehicle follows a speed profile that changes along the vehicle's waypoint-defined trajectory. The primary contribution of this work is the demonstration of Zero-Shot Transfer in conjunction with a novel feed-forward NN controller trained using a general purpose, in-house developed simulation platform.
Harry Zhang, Stefan Caldararu, Sriram Ashokkumar, Ishaan Mahajan, Aaron Young, Alexis Ruiz, Huzaifa Unjhawala, Luning Bakke, Dan Negrut
2023-09-18T15:30:54Z
http://arxiv.org/abs/2309.09870v1
# Zero-Shot Policy Transferability for the Control of a Scale Autonomous Vehicle ###### Abstract We report on a study that employs an in-house developed simulation infrastructure to accomplish zero shot policy transferability for a control policy associated with a scale autonomous vehicle. We focus on implementing policies that require no real world data to be trained (Zero-Shot Transfer), and are developed in-house as opposed to being validated by previous works. We do this by implementing a Neural Network (NN) controller that is trained only on a family of circular reference trajectories. The sensors used are RTK-GPS and IMU, the latter for providing heading. The NN controller is trained using either a human driver (via human in the loop simulation), or a Model Predictive Control (MPC) strategy. We demonstrate these two approaches in conjunction with two operation scenarios: the vehicle follows a waypoint-defined trajectory at constant speed; and the vehicle follows a speed profile that changes along the vehicle's waypoint-defined trajectory. The primary contribution of this work is the demonstration of Zero-Shot Transfer in conjunction with a novel feed-forward NN controller trained using a general purpose, in-house developed simulation platform. ## I Introduction ### _Motivation_ Simulation can be a powerful tool for designing better robots and autonomous vehicles as it can reduce design costs, accelerate the design cycle, and enable the testing of a larger pool of candidate designs in a diverse set of scenarios difficult to reproduce in reality [1]. The idea of using simulation in robot design is hindered by the so called simulation-to-reality, or sim2real, gap [2]: in many cases, an algorithm that performs well in simulation may display poor performance in reality due to hard to model aspects present in the real system [3], e.g., slackness in the steering mechanism, delays in actuation, complex sensor behavior. When Machine Learning (ML) comes into play in designing control policies, ZST [4] is even more elusive since one has to make do exclusively with synthetic data. Past contributions that focused on ZST either came short of demonstrating performance in the real world [5], or required a mixture of simulated and real data, e.g. [6, 7]. ### _Contribution_ We outline a general purpose approach that uses our Autonomy Research Testbed (ART) platform [8] to synthesize an ML-based controller in simulation and subsequently demonstrate it on a scale vehicle, thus achieving ZST. Our contributions are threefold. **(i)** We propose the use of a physics-based high fidelity simulator to synthesize control policies without resorting to domain randomization or domain adaptation to achieve ZST. **(ii)** We do not resort to off-the-shelf proven algorithms. Rather, we train a NN via imitation learning on a low diversity dataset obtained by driving the vehicle along a small family of circles of different radii. The model is trained to imitate either a human driver or a Model Predictive Control (MPC) algorithm. Producing training data in simulation and training the NN model takes minutes. **(iii)** We demonstrate that our simulator, called Chrono [9, 10], used in conjunction with ART can support autonomy stack development in simulation with good sim2real transferability traits. ### _Related Work_ Our approach is similar to [11], except that perception is carried out differently (GPS and IMU as opposed to camera), and we draw exclusively on simulated data. 
In [5], the authors train an all-terrain-vehicle in simulation to drive off-road while avoiding obstacles but their policy is only demonstrated in simulation. In [12], the authors focus on automatic test generation for AVs and propose "robustness values" for their Autonomy Stack (A-Stack), but do not provide evidence for the translation of this robustness value into real world results. AutoVRL, a platform similar to the ART one discussed herein, is presented in [13], but its authors do not demonstrate real-world operation of their A-Stack. Unlike these contributions that only demonstrate performance in simulation, a body of literature shows performance in reality by focusing on combining simulation and real-world data. Most of these relate to autonomous agents that use camera and LiDAR sensors for end-to-end solutions in which an ML model directly takes sensor inputs while it outputs control commands. In [12, 14] the perception training is done purely on real world datasets, i.e., KITTI [15] and YCB [16], respectively. Many works focus on _Domain Translation_ (DT) where a separate ML module is tasked with changing the appearance of sensor outputs. For instance, in [17] the authors use a Generative Adversarial Network (GAN) to learn a mapping function from simulated images to a potential real world counterpart, highlighting their use of unlabeled reality data. The A-Stack is then trained on this enhanced simulation data. Similar to this, the VISTA simulator [7] uses a data-driven approach to generate sensor data, integrating the process described above into the simulator. In [18], the authors do the inverse of the operation described above. They train their A-Stack in simulation, and then generate "VR-Goggles" for their vehicle. When driving in reality, the sensor data is first processed by the VR-Goggles to look more like simulated data, and then passed to the A-Stack. This also requires reality data for training the goggles. While all of these approaches achieve simulation based A-Stack training with transfer to reality, they still require pre-processing of real world data for their testing, stopping short of ZST. _Domain Randomization_ (DR) has been widely used to impart robustness to a control policy synthesized in simulation. This is commonly done for robotic-arm control policies, see, for instance, [19]. In [4], the authors discuss the effectiveness of different DR techniques. In our work, we choose to not employ DR for ZST and instead emphasize the role that an accurate simulator and model can play in accomplishing ZST. Against this backdrop, our effort is motivated by two observations. Firstly, the DR technique often lacks clarity on why it succeeds or fails. Secondly, DR can serve as a reliable option for enhancing robustness provided one already achieved ZST with a good simulator and model. Finally, in this study, we showcase the integration of the Chrono simulation engine and the ART platform, both of which are being collaboratively developed with input from this group. Looking beyond Chrono, commonly used simulators include CARLA [20], Isaac Sim [21], MuJoCo [22], webots [23], Coppelia Robotics [24], Gazebo [25] and PyBullet [26]. Several of these solutions highlight photorealism and fast computation times, producing results that are plausible but not necessarily physically meaningful since they draw on game engines [27]. 
This revokes some of the benefits of simulation-based training, as one is no longer guaranteed to be able to exactly replicate physical scenarios/tests, and may make determining failure causes difficult. Finally, in addition to embracing a physics-based approach to simulation, one aspect that sets apart Chrono is its ability to embed humans in the loop and either allow them to guide the data generation process for training (as done in this contribution), or synthesize and test autonomy solutions that come into play in human-robot interaction applications. ## II Background The multi-physics simulator used in this study is called Chrono [9]. Two modules, Chrono::Vehicle [28] and Chrono::Sensor [29], provide high-fidelity, physics-based vehicle and sensor simulation, respectively, and can be leveraged for synthesizing control policies. While this work focuses on a scaled on-road car equipped with GPS and IMU sensors, Chrono allows for the combination of various wheeled and tracked vehicles, and proprioceptive and exteroceptive sensor types, e.g., camera, LiDAR [8, 5]. It has good terramechanics support for off-road mobility fidelity, with the user able to choose among several terrain models [30, 31]. Furthermore, the platform supports human-in-the-loop simulation, and the interaction of multiple autonomous agents, whether in intricate traffic scenarios or convoy operations utilizing Synchrono [32, 33]. ART/ATK provides a ROS 2 framework that leverages the Chrono simulation engine to enable autonomy algorithm synthesis in simulation followed by demonstration in the real world. A more detailed description is provided in Sec. III-A. ## III Preliminaries ### _ART/ATK and Chrono_ The control synthesis takes place in simulation, in line with ZST expectations. The Chrono simulator is used to produce the time evolution of the scale vehicle. The ART in "ART/ATK" provides a ROS2-based basic autonomy stack. It is implemented in Python and is Docker-containerized so that _the same_ autonomy stack, running on _the same_ processor (NVIDIA Jetson AGX card), is used both in simulation and the real world. The autonomy stack is deployed on the SAV (from "scaled Autonomous Vehicle"), which has a Chrono digital twin, dtSAV. Finally, the ATK component is a utility that produces the Docker container infrastructure required to accommodate the ART autonomy stack, be it in simulation or in real-world testing [8]. An IMU sensor provides heading information, and an RTK GPS delivers centimeter-scale accuracy for localization. An Extended Kalman Filter (EKF) is used for velocity estimation, utilizing the 4-DOF model described in Sec. III-B. In simulation, Chrono provides dtSAV with ground truth information for the position and heading; the same EKF is used for velocity estimation. ### _4-DOF Vehicle Model and Error State_ State estimation and MPC, the latter used in generating training data, call for a simple vehicle model. This model is not the Chrono dtSAV vehicle (which is highly nonlinear and complex), but a low-fidelity replica that captures dtSAV's dynamics well enough. In other words, when estimating state in simulation or producing a command via MPC, one uses a Chrono simulation, inside which we run a second simulation of the 4-DOF vehicle described in this subsection. The states of the 4-DOF vehicle model are \(\mathbf{q}=[x,y,\theta,v]^{T}\), where \(x\) and \(y\) are the Cartesian position coordinates, \(\theta\) is the heading angle, and \(v\) is the longitudinal velocity. 
The commands \(\mathbf{u}\) consist of a steering value \(\delta\) in the range \([-1,1]\), and a throttle input \(\alpha\) in the range \([0,1]\). The dynamics of the 4-DOF vehicle model is captured by the following differential equations, whose four rows are referred to as Eqs. (1a)-(1d), top to bottom: \[\dot{\mathbf{q}}=\mathbf{f}(\mathbf{q},\mathbf{u})=\begin{bmatrix}\dot{x}\\ \dot{y}\\ \dot{\theta}\\ \dot{v}\end{bmatrix}=\begin{bmatrix}\cos(\theta)\cdot v\\ \sin(\theta)\cdot v\\ v\cdot\tan(\beta\delta)/l\\ T(\alpha,v)\cdot\gamma R_{w}/I_{w}\end{bmatrix}.\tag{1}\] Equations (1a), (1b), and (1c) describe a simplified bicycle model [34], where \(\beta\) maps the steering command \(\delta\) to the wheel steering angle, and \(l\) is the wheel base of the vehicle. Assuming that the wheel has no slip, Eq. (1d) approximates the longitudinal acceleration \(\dot{v}\) based on motor torque \(T(\alpha,v)\), gear ratio \(\gamma\), and wheel radius and moment of inertia, \(R_{w}\) and \(I_{w}\). See [35] for more details. As illustrated in Fig. 1, given a reference trajectory, the error state \(\mathbf{e}=[e_{1},e_{2},e_{3},e_{4}]^{T}\) with respect to the reference state \(\mathbf{q}_{r}=[x_{r},y_{r},\theta_{r},v_{r}]^{T}\) can be computed using Eq. (2) [36]. \[\mathbf{e}=\begin{pmatrix}\cos\theta&\sin\theta&0&0\\ -\sin\theta&\cos\theta&0&0\\ 0&0&1&0\\ 0&0&0&1\end{pmatrix}\begin{pmatrix}x_{r}-x\\ y_{r}-y\\ \theta_{r}-\theta\\ v_{r}-v\end{pmatrix}.\tag{2}\] Fig. 1: Error state relative to target reference trajectory. ### _MPC Details_ The MPC solution embraced is described in [35]. This control policy has already been tuned in simulation and tested in reality, providing insight into sim2real transferability [3]. Therein, the salient conclusion was that the MPC solution was not robust, which is not an issue since it is used only to generate training data for the NN controller. The MPC is posed as \[e_{t+1}=A_{t}\cdot e_{t}+B_{t}\cdot u_{t}\tag{3}\] \[J_{t}^{*}(e_{t})=\min_{u_{k}}\;e_{N}^{T}Qe_{N}+\sum_{k=0}^{N-1}\left[e_{k}^{T}Qe_{k}+(u_{k}-u_{r})^{T}R(u_{k}-u_{r})\right].\tag{4}\] In Eq. (3), we linearized the error dynamics using Eqs. (1) and (2) [35]. Equation (4) describes the optimization problem used to generate the next optimal command. For more details, please see [3, 37]. ## IV Method ### _Training Data_ #### IV-A1 Human-in-the-loop produced training data This approach is schematically captured in Fig. 2. Since Chrono supports human-in-the-loop (HIL) simulation, a human drives dtSAV in the virtual world (manual driving) to collect data that registers what the driver does when dtSAV strays away from a given trajectory. We record the error state and corresponding control commands at each time step of the simulation. Seven reference trajectories are used, one at a time, for the training process. They are circular paths with radii of \(2~{}\mathrm{m}\), \(5~{}\mathrm{m}\), and \(25~{}\mathrm{m}\), traversed both clockwise and counter-clockwise, plus a \(30~{}\mathrm{m}\) straight-line path, which can be thought of as a circle with infinite radius. For the multispeed control, we used seven trajectories with the same geometric shapes but different target velocities - half of the course had a \(1~{}\mathrm{m}/\mathrm{s}\) velocity prescribed for the vehicle, the other half had a \(2~{}\mathrm{m}/\mathrm{s}\) reference velocity, with a transition velocity in between. The data helped the NN model learn how changes in speed elicit changes in the throttle position. Collecting training data in simulation was both simple and fast (it took minutes to generate). 
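To make the data-collection step above concrete, the following sketch shows how the 4-DOF dynamics of Eq. (1) and the error state of Eq. (2) could be evaluated in plain Python when logging (error state, command) training records. This is a minimal sketch rather than the authors' implementation: the parameter values, the placeholder torque map `motor_torque`, and the record-logging convention are assumptions made here for readability.

```python
import numpy as np

# Illustrative parameters only; the real SAV values are not given in the text.
L_WHEELBASE = 0.3                    # wheel base l [m] (assumed)
BETA = 0.35                          # steering-command-to-wheel-angle map beta [rad] (assumed)
GAMMA, R_W, I_W = 1.0, 0.08, 1e-3    # gear ratio, wheel radius [m], wheel inertia [kg m^2] (assumed)

def motor_torque(alpha, v):
    """Placeholder for the torque map T(alpha, v); the actual map comes from the powertrain model."""
    return 0.1 * alpha - 0.01 * v

def dynamics(q, u):
    """4-DOF dynamics of Eq. (1): q = [x, y, theta, v], u = [delta, alpha]."""
    x, y, theta, v = q
    delta, alpha = u
    return np.array([
        np.cos(theta) * v,                            # (1a) x_dot
        np.sin(theta) * v,                            # (1b) y_dot
        v * np.tan(BETA * delta) / L_WHEELBASE,       # (1c) theta_dot
        motor_torque(alpha, v) * GAMMA * R_W / I_W,   # (1d) v_dot
    ])

def error_state(q, q_ref):
    """Error state of Eq. (2), expressed in the vehicle frame."""
    x, y, theta, v = q
    x_r, y_r, theta_r, v_r = q_ref
    c, s = np.cos(theta), np.sin(theta)
    rot = np.array([[ c, s, 0, 0],
                    [-s, c, 0, 0],
                    [ 0, 0, 1, 0],
                    [ 0, 0, 0, 1]])
    return rot @ np.array([x_r - x, y_r - y, theta_r - theta, v_r - v])

# One explicit-Euler step of the low-fidelity model, and the error logged as a training feature.
q = np.array([0.0, 0.0, 0.0, 1.0])
q_ref = np.array([0.1, 0.0, 0.0, 1.0])
u = np.array([0.0, 0.3])
q_next = q + 0.01 * dynamics(q, u)
print(error_state(q, q_ref))
# During HIL (or MPC) data collection, each simulation step would append one record:
#   records.append((error_state(q, q_ref), u))   # 4-dim feature, 2-dim label
```

The same error-state computation is what the trained controller consumes at run time, which is why the records take this form.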
#### IV-A2 MPC Training An alternative to having a human drive dtSAV in simulation is to use an existing MPC controller, see Sec. III-C, and record the input commands issued by the MPC while it works to maintain a predefined trajectory. The MPC issued commands to dtSAV to make it follow the same reference trajectories used in the HIL data collection. The training data contains the error state and the corresponding MPC command issued in each time step. The reason why _our_ MPC policy was not useful in reality was that it was not robust - SAV in reality did much worse than dtSAV in simulation, a prime manifestation of the sim2real gap. ### _Imitation Learning Based Controller_ We employ a feed-forward Neural Network (NN) that upon training will drive dtSAV and subsequently SAV, thus accomplishing ZST. Figure 3 depicts the two-hidden-layer NN that engages in supervised learning. Training data is generated via HIL only, or via MPC only, see Sec. IV-A. For NN training, the input data \(\mathbf{E}\in\mathbb{R}^{4\times n}\) is the set of error states \(\mathbf{e}\in\mathbb{R}^{4\times 1}\), matched with the output data \(\mathbf{U}\in\mathbb{R}^{2\times n}\), the corresponding control commands \(\mathbf{u}\in\mathbb{R}^{2\times 1}\). The NN is trained to produce a mapping between the error state and control command, \(\mathbf{f}:\mathbf{e}\rightarrow\mathbf{u}\). The NN training and inference are carried out in Keras Core [38], using PyTorch as a backend [39]. The training converges fast since the input and output spaces are small, as are the NN's depth and width. ## V Experimental Results ### _Experimental Setup_ This section reports on experimental work done to assess the extent to which the control policy synthesized in simulation transferred to the real world. We established four NN policies in simulation: the training data was produced either by a human driver or by the MPC; and, for each of these two scenarios, we had two sub-cases: the velocity along the course was kept constant, or it changed based on the location of the SAV along its trajectory. The assessment of the four NNs took place on the top of a parking lot. SAV was given waypoints, and the NN controller used the IMU heading and RTK-GPS information to pass through the waypoints while following the prescribed speed regimen. The waypoint selection was mindful of the topology of the parking lot, and yielded a path that was roughly rectangular, with a width by length of approximately 34 \(\times\) 72 \(\mathrm{m}\), see Fig. 4. There were sinusoidal portions and regions where the speed changed along the way. For more experiment details, see the uploaded movie. Fig. 2: Training Process Demonstration: upper half is the pipeline for collecting HIL (manual control) training data; lower half shows data collection using MPC. Fig. 3: Feed Forward Layers Setup. Fig. 4: Experiment Setup: NN was trained using dtSAV-generated data and subsequently tested in a parking lot using SAV. The red dashed line in the right-most image represents the parking lot reference trajectory. ### _Experimental Results_ #### V-B1 Constant Velocity Tracking The reference trajectory is first followed using constant velocity along the entire path. The loop around the parking lot shown in Fig. 4 has a sinusoidal segment along one width, and an arc on the opposite one. Sample simulation and reality results are displayed in Fig. 5. The control profiles for throttle and steering are shown in Fig. 6. We ran five additional tests for each controller in each scenario. 
For each GPS output reading, we found the closest point on the reference trajectory, and computed the distance between the vehicle and this reference point. We then averaged these across the five tests for each reference point, and displayed them as a plot with respect to the reference location used. These absolute errors are shown in Fig. 7. Fig. 5: Results for simulation and reality testing: (a) NN controller trained by MPC; (b) NN controller trained by manual driving. Fig. 6: Control Profiles comparing SAV and dtSAV: (a) NN controller trained via MPC; (b) NN trained via HIL. #### V-B2 Tracking a Complex Speed Profile We used the same reference trajectory, but had a speed profile that changed along the vehicle trajectory: it was 1 m/s around the four corners, and climbed to 2 m/s in between. Figure 8 displays results for the two NN controllers. Heat maps show the speed of the vehicle along the trajectory, both in simulation and reality. ## VI Discussion The results in Fig. 5 indicate that the feed-forward NN used in this study accomplishes ZST regardless of whether the synthetic training data was produced via MPC or HIL. The sim2real gap is small, as demonstrated in Fig. 7. A notable difference between simulation and reality can be noted in the commands issued, see Fig. 6. We hypothesize that this is likely due to: a steady slant of the road for parking lot drainage purposes, which is present in reality but not in simulation; and, an unavoidable slack in the steering mechanism (with zero steering command input, the vehicle still tries to steer in the direction that the terrain is slanted towards). As shown in Fig. 6, for the straight-line portion the reality steering has negative steering values (to compensate for the tilted road) while in simulation zero steering is maintained. Another observation from Fig. 6 is that there is a qualitative difference between the NNs trained with HIL data and MPC data. The HIL training data displays smoother changes for steering commands and prefers to merge back to the reference trajectory slowly. Conversely, the MPC-generated training data has higher transients since it solves an optimization problem that was not instructed to account for smoothness of the ensuing maneuver. This explains why the MPC-data trained NN controller follows trajectories more precisely. Finally, for the case when the prescribed velocity changes along the trajectory, dtSAV achieves a wider range of speeds compared to SAV. Likewise, MPC data training leads to a more responsive controller than when using HIL data since the MPC training takes the vehicle's powertrain model into consideration. Indeed, the results in Fig. 8b look sharper than the ones in Fig. 8d. Correspondingly, when transferring the control policies into reality, the multi-speed control is more precise for the controller trained with MPC (in Fig. 8a) than for the one trained with HIL data (in Fig. 8c). Given the complexity of real-world environments, expecting exact matches between simulated and real results is exceedingly difficult to achieve. However, it is desirable to quantify this gap in safety-critical applications [40], and at a minimum to see traits that manifest in simulation carry over to the real world. In Figure 8, we illustrate this with multi-speed training: the HIL driver (while generating training data in the simulation) is unable to match desired speeds as precisely as the MPC. 
This trait is learned by the NN and appears in the real-world scenario, where the NN trained on HIL data does not match desired speeds accurately. Additionally, the HIL driver prioritizes smoother steering inputs, while the MPC prioritizes error mitigation over smoothness, as evident in the steering control inputs in Fig. 6. ## VII Conclusions Our contributions are threefold. We demonstrated ZST for a scale vehicle using RTK-GPS and IMU sensor fusion. We established a feed-forward NN controller trained to imitate a human driver or the behavior of an MPC controller. Finally, anchored by Chrono and ART/ATK, we established an open-source platform that enables the synthesis of autonomy algorithms in simulation and their demonstration in reality. The salient strength of the Chrono-ART/ATK platform is that the same ROS2 ART autonomy stack, running on the same hardware, is exercised both in simulation and reality. Since Chrono supports HIL, it enables a driver to operate a digital twin, and the generated data can subsequently be used to synthesize ML-based control policies. Ongoing work focuses on increasing the determinism of the ART/ATK autonomy stack; investigating the ZST problem for a rover-like vehicle with four steerable wheels; and using simulation to synthesize autonomy stacks for ground vehicles operating on deformable terrains. Fig. 7: Absolute sim2real errors: (a) NN controller trained by MPC; (b) NN controller trained by HIL. Fig. 8: Simulation vs. Reality Testing, variable-speed case: (a) SAV using MPC-trained NN controller; (b) dtSAV using MPC-trained NN controller; (c) SAV using HIL-trained NN controller; (d) dtSAV using HIL-trained NN controller.
2304.02729
Constructing Phylogenetic Networks via Cherry Picking and Machine Learning
Combining a set of phylogenetic trees into a single phylogenetic network that explains all of them is a fundamental challenge in evolutionary studies. Existing methods are computationally expensive and can either handle only small numbers of phylogenetic trees or are limited to severely restricted classes of networks. In this paper, we apply the recently-introduced theoretical framework of cherry picking to design a class of efficient heuristics that are guaranteed to produce a network containing each of the input trees, for datasets consisting of binary trees. Some of the heuristics in this framework are based on the design and training of a machine learning model that captures essential information on the structure of the input trees and guides the algorithms towards better solutions. We also propose simple and fast randomised heuristics that prove to be very effective when run multiple times. Unlike the existing exact methods, our heuristics are applicable to datasets of practical size, and the experimental study we conducted on both simulated and real data shows that these solutions are qualitatively good, always within some small constant factor from the optimum. Moreover, our machine-learned heuristics are one of the first applications of machine learning to phylogenetics and show its promise.
Giulia Bernardini, Leo van Iersel, Esther Julien, Leen Stougie
2023-03-31T15:04:42Z
http://arxiv.org/abs/2304.02729v1
# Constructing Phylogenetic Networks via Cherry Picking and Machine Learning+ ###### Abstract Combining a set of phylogenetic trees into a single phylogenetic network that explains all of them is a fundamental challenge in evolutionary studies. Existing methods are computationally expensive and can either handle only small numbers of phylogenetic trees or are limited to severely restricted classes of networks. In this paper, we apply the recently-introduced theoretical framework of cherry picking to design a class of efficient heuristics that are guaranteed to produce a network containing each of the input trees, for datasets consisting of binary trees. Some of the heuristics in this framework are based on the design and training of a machine learning model that captures essential information on the structure of the input trees and guides the algorithms towards better solutions. We also propose simple and fast randomised heuristics that prove to be very effective when run multiple times. Unlike the existing exact methods, our heuristics are applicable to datasets of practical size, and the experimental study we conducted on both simulated and real data shows that these solutions are qualitatively good, always within some small constant factor from the optimum. Moreover, our machine-learned heuristics are one of the first applications of machine learning to phylogenetics and show its promise. Introduction Phylogenetic networks describe the evolutionary relationships between different objects: for example, genes, genomes, or species. One of the first and most natural approaches to constructing phylogenetic networks is to build a network from a set of gene trees. In the absence of incomplete lineage sorting, the constructed network is naturally required to "display", or embed, each of the gene trees. In addition, following the parsimony principle, a network assuming a minimum number of reticulate evolutionary events (like hybridization or lateral gene transfer) is often sought. Unfortunately, the associated computational problem, called Hybridization, is NP-hard even for two binary input trees [5], and indeed existing solution methods do not scale well with problem size. For a long time, research on this topic was mostly restricted to inputs consisting of two trees. Proposed algorithms for multiple trees were either completely impractical or ran in reasonable time only for very small numbers of input trees. This situation changed drastically with the introduction of so-called cherry-picking sequences [12]. This theoretical setup opened the door to solving instances consisting of many input trees like most practical datasets have. Indeed, a recent paper showed that this technique can be used to solve instances with up to 100 input trees to optimality [21], although it was restricted to binary trees all having the same leaf set and to so-called "tree-child" networks. Moreover, its running time has a (strong) exponential dependence on the number of reticulate events. In this paper, we show significant progress towards a fully practical method by developing a heuristic framework based on cherry picking comprising very fast randomised heuristics and other slower but more accurate heuristics guided by machine learning. Admittedly, our methods are not yet widely applicable since they are still restricted to binary trees. However, our set-up is made in such a way that it may be extendable to general trees. 
Despite their limitations, we see our current methods already as a breakthrough as they are not restricted to tree-child networks and scale well with the number of trees, the number of taxa and the number of reticulations. In fact, we experimentally show that our heuristics can easily handle sets of 100 trees in a reasonable time: the slowest machine-learned method takes 4 minutes on average for sets consisting of 100 trees with 100 leaves each, while the faster, randomised heuristics already find feasible solutions in 2 seconds for the same instances. As the running time of the fastest heuristic depends at most quadratically on the number of input trees, linearly on the number of taxa, and linearly on the output number of reticulations, we expect it to be able to solve much larger instances still in a reasonable amount of time. In addition, in contrast with the existing algorithms, our methods can be applied to trees with different leaf sets, although they have not been specifically optimized for this kind of input. Indeed, we experimentally assessed that our methods give qualitatively good results only when the leaf sets of the input trees have small differences in percentage (up to 5-15%); when the differences are larger, they return feasible solutions that are far from the optimum. Some of the heuristics we present are among the first applications of machine learning in phylogenetics and show its promise. In particular, we show that crucial features of the networks generated in our simulation study can be identified with very high test accuracy (99.8%) purely based on the trees displayed by the networks. It is important to note at this point that no method is able to reconstruct any specific network from displayed trees as networks are, in general, not uniquely determined by the trees they display [14]. In addition, in some applications, a phenomenon called "incomplete lineage sorting" can cause gene trees that are not displayed by the species network [26], and hence our methods, and other methods based on the Hybridization problem, are not (directly) applicable to such data. We focus on _orchard_ networks (also called _cherry picking_ networks), which are precisely those networks that can be drawn as a tree with additional horizontal arcs [19]. Such horizontal arcs can for example correspond to lateral gene transfer (LGT), hybridization and recombination events. Orchard networks are broadly applicable: in particular, the orchard network class is much bigger than the class of tree-child networks, to which the most efficient existing methods are limited [1]. Related work. Previous practical algorithms for Hybridization include PIRN [25], PIRNs [13] and Hybroscale [1], exact methods that are only applicable to (very) small numbers of trees and/or to trees that can be combined into a network with a (very) small reticulation number. Other methods such as PhyloNet [22] and PhyloNetworks [18] also construct networks from trees but have different premises and use completely different models. The theoretical framework of cherry picking was introduced in [8] (for the restricted class of temporal networks) and [12] (for the class of tree-child networks) and was later turned into algorithms for reconstructing tree-child [21] and temporal [6] networks. These methods can handle instances containing many trees but do not scale well with the number of reticulations, due to an exponential dependence. 
The class of orchard networks, which is based on cherry picking, was introduced in [17] and independently (as cherry-picking networks) in [10], although their practical relevance as trees with added horizontal edges was only discovered later [19]. The applicability of machine-learning techniques to phylogenetic problems has not yet been fully explored, and to the best of our knowledge existing work is mainly limited to phylogenetic tree inference [2, 28] and to testing evolutionary hypotheses [11]. Our contributions.We introduce Cherry Picking Heuristics (CPH), a class of heuristics to combine a set of binary phylogenetic trees into a single binary phylogenetic network based on cherry picking. We define and analyse several heuristics in the CPH class, all of which are guaranteed to produce feasible solutions to Hybridization and all of which can handle instances of practical size (we run experiments on tree sets of up to 100 trees with up to 100 leaves which were processed in on average 4 minutes by our slowest heuristic). Two of the methods we propose are simple but effective randomised heuristics that proved to be extremely fast and to produce good solutions when run multiple times. The main contribution of this paper consists in a machine-learning model that potentially captures essential information about the structure of the input set of trees. We trained the model on different extensive sets of synthetically generated data and applied it to guide our algorithms towards better solutions. Experimentally, we show that the two machine-learned heuristics we design yield good results when applied to both synthetically generated and real data. We also analyse our machine-learning model to identify the most relevant features and design a non-learned heuristic that is guided by those features only. Our experiments show that this heuristic leads to reasonably good results without the need to train a model. This result is interesting per se as it is an example of how machine learning can be used to guide the design of classical algorithms, which are not biased towards certain training data. A preliminary version of this work appeared in [4]. Compared to the preliminary version, we have added the following material: (i), we defined a new non-learned heuristic based on important features and experimentally tested it (Section 5.3); (ii), we extended the experimental study to data generated from non-orchard networks (Section 5.2.3), data generated from a class of networks for which the optimum number of reticulations is known (Section 5.2.1) and to input trees with different leaf sets (Section 5.2.6); and (iii), we provided a formal analysis of the time complexity of all our methods (Section 4.1) and conducted experiments on their scalability (Section 5.2.5). ## 2 Preliminaries A _phylogenetic network_\(N=(V,E,X)\) on a set of taxa \(X\) is a directed acyclic graph \((V,E)\) with a single _root_ with in-degree 0 and out-degree 1, and the other nodes with either (i) in-degree 1 and out-degree \(k>1\) (_tree nodes_); (ii) in-degree \(k>1\) and out-degree 1 (_reticulations_); or (iii) in-degree 1 and out-degree 0 (_leaves_). The leaves of \(N\) are biunivocally labelled by \(X\). A surjective map \(\ell:E\rightarrow\mathbb{R}^{\geq 0}\) may assign a nonnegative _branch length_ to each edge of \(N\). We will denote by \([1,n]\) the set of integers \(\{1,2,...,n\}\). Throughout this paper, we will only consider binary networks (with \(k=2\)), and we will identify the leaves with their labels. 
We will also often drop the term "phylogenetic", as all the networks considered in this paper are phylogenetic networks. The _reticulation number_ \(r(N)\) of a network \(N\) is \(\sum_{v\in V}\max\left(0,d^{-}(v)-1\right),\) where \(d^{-}(v)\) is the in-degree of \(v\). A network \(T\) with \(r(T)=0\) is a _phylogenetic tree_. It is easy to verify that binary networks with \(r(N)\) reticulations have \(|X|+r(N)-1\) tree nodes. **Cherry-picking.** We denote by \(\mathcal{N}\) a set of networks and by \(\mathcal{T}\) a set of trees. An _ordered_ pair of leaves \((x,y),\ x\neq y\), is a _cherry_ in a network if \(x\) and \(y\) have the same parent; \((x,y)\) is a _reticulated cherry_ if the parent \(p(x)\) of \(x\) is a reticulation, and \(p(y)\) is a tree node and a parent of \(p(x)\) (see Figure 1). A pair is _reducible_ if it is either a cherry or a reticulated cherry. Notice that trees have cherries but no reticulated cherries. _Reducing_ (or _picking_) a cherry \((x,y)\) in a network \(N\) (or in a tree) is the action of deleting \(x\) and replacing the two edges \((p(p(x)),p(x))\) and \((p(x),y)\) with a single edge \((p(p(x)),y)\) (see Figure 1(a)). If \(N\) has branch lengths, the length of the new edge is \(\ell(p(p(x)),y)=\ell(p(p(x)),p(x))+\ell(p(x),y)\). A reticulated cherry \((x,y)\) is reduced (picked) by deleting the edge \((p(y),p(x))\) and replacing the other edge \((z,p(x))\) incoming to \(p(x)\), and the consecutive edge \((p(x),x)\), with a single edge \((z,x)\). The length of the new edge is \(\ell(z,x)=\ell(z,p(x))+\ell(p(x),x)\) (if \(N\) has branch lengths). Reducing a non-reducible pair has no effect on \(N\). In all cases, the resulting network is denoted by \(N_{(x,y)}\): we say that \((x,y)\) affects \(N\) if \(N\neq N_{(x,y)}\). Any sequence \(S=(x_{1},y_{1}),\ldots,(x_{n},y_{n})\) of ordered leaf pairs, with \(x_{i}\neq y_{i}\) for all \(i\), is a _partial cherry-picking sequence_; \(S\) is a cherry-picking sequence (CPS) if, for each \(i<n\), \(y_{i}\in\{x_{i+1},\ldots,x_{n},y_{n}\}\). Given a network \(N\) and a (partial) CPS \(S\), we denote by \(N_{S}\) the network obtained by reducing in \(N\) each element of \(S\), in order. We denote by \(S\circ(x,y)\) the sequence obtained by appending pair \((x,y)\) at the end of \(S\). We say that \(S\) fully reduces \(N\) if \(N_{S}\) consists of the root with a single leaf. \(N\) is an _orchard network_ (ON) if there exists a CPS that fully reduces it, and it is _tree-child_ if every non-leaf node has at least one child that is a tree node or a leaf. A _normal_ network is a tree-child network such that, in addition, the two parents of a reticulation are always incomparable, i.e., one is not a descendant of the other. If \(S\) fully reduces all \(N\in\mathcal{N}\), we say that \(S\) fully reduces \(\mathcal{N}\). In particular, in this paper we will be interested in CPS which fully reduce a set of trees \(\mathcal{T}\) consisting of \(|\mathcal{T}|\) trees of total size \(||\mathcal{T}||\). Figure 1: \((x,y)\) is picked in two different networks. In **(a)**, \((x,y)\) is a cherry, and in **(b)**, \((x,y)\) is a reticulated cherry. After picking, degree-two nodes are replaced by a single edge. **Hybridization.** The Hybridization problem can be thought of as the computational problem of combining a set of phylogenetic trees into a network with the smallest possible reticulation number, that is, to find a network that displays each of the input trees in the sense specified by Definition 1, below. See Figure 2 for an example. 
The definition describes not only what it means to display a tree but also to display another network, which will be useful later. **Definition 1**.: _Let \(N=(V,E,X)\) and \(N^{\prime}=(V^{\prime},E^{\prime},X^{\prime})\) be networks on the sets of taxa \(X\) and \(X^{\prime}\subseteq X\), respectively. The network \(N^{\prime}\) is displayed in \(N\) if there is an embedding of \(N^{\prime}\) in \(N\): an injective map of the nodes of \(N^{\prime}\) to the nodes of \(N\), and of the edges of \(N^{\prime}\) to edge-disjoint paths of \(N\), such that the mapping of the edges respects the mapping of the nodes, and the mapping of the nodes respects the labelling of the leaves._ We call _exhaustive_ a tree displayed in \(N=(V,E,X)\) with the whole \(X\) as a leaf set. Note that Definition 1 only involves the topologies of the networks, disregarding possible branch lengths. In the following problem definition, the input trees may or may not have branch lengths, and the output is a network without branch lengths. We allow branch lengths for the input because they will be useful for the machine-learned heuristics of Section 4. Hybridization **Input:** A set of phylogenetic trees \(\mathcal{T}\) on a set of taxa \(X\). **Output:** A network displaying \(\mathcal{T}\) with minimum possible reticulation number. ## 3 Solving the Hybridization Problem via Cherry-Picking Sequences We will develop heuristics for the Hybridization problem using cherry-picking sequences that fully reduce the input trees, leveraging the following result by Janssen and Murakami. **Theorem 1** ([10], Theorem 3).: _Let \(N\) be a binary orchard network, and \(N^{\prime}\) a (not necessarily binary) orchard network on sets of taxa \(X\) and \(X^{\prime}\subseteq X\), respectively. If a minimum-length CPS \(S\) that fully reduces \(N\) also fully reduces \(N^{\prime}\), then \(N^{\prime}\) is displayed in \(N\)._ Figure 2: The two trees in **(b)** are displayed in the network **(a)**. Notice that Hybridization remains NP-hard for binary orchard networks. For binary networks we have the following lemma, a special case of [10, Lemma 1]. **Lemma 1**.: _Let \(N\) be a binary network, and let \((x,y)\) be a reducible pair of \(N\). Then reducing \((x,y)\) and then adding it back to \(N_{(x,y)}\) results in \(N\)._ Note that Lemma 1 only holds for binary networks: in fact, there are different ways to add a pair to a non-binary network, thus the lemma does not hold unless a specific rule for adding pairs is specified (inspect [10] for details). Theorem 1 and Lemma 1 provide the following approach for finding a feasible solution to Hybridization: find a CPS \(S\) that fully reduces all the input trees, and then uniquely reconstruct the binary orchard network \(N\) for which \(S\) is a minimum-length CPS, by processing \(S\) in the reverse order. \(N\) can be reconstructed from \(S\) using one of the methods underlying Lemma 1 proposed in the literature, e.g., in [10] (illustrated in Figure 3) or in [21]. The following lemma relates the length of a CPS \(S\) and the number of reticulations of the network constructed from \(S\). **Lemma 2** ([20]).: _Let \(S\) be a CPS on a set of taxa \(X\). The number of reticulations of the network \(N\) reconstructed from \(S\) is \(r(N)=|S|-|X|+1\)._ In the next section we focus on the first part of the heuristic: producing a CPS that fully reduces a given set of phylogenetic trees. 
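For concreteness, the sketch below shows one way the reverse reconstruction described above (and illustrated in Figure 3) could be implemented for binary networks. It is a simplified illustration, not the code of [10] or [21]: the edge-list representation, the node naming, and the function `network_from_cps` are choices made here for the example. By Lemma 2, the network returned for a CPS \(S\) on taxa \(X\) has \(|S|-|X|+1\) reticulations.

```python
from itertools import count

def network_from_cps(cps):
    """Rebuild a binary orchard network from a cherry-picking sequence (CPS).

    Pairs are processed in reverse order (cf. Figure 3): if the first element
    of a pair is not yet a leaf of the network, it is added as a cherry with
    the second element; otherwise, a reticulation is added above the first
    element, with an incoming edge from a new parent of the second element.
    Returns the network as a list of directed edges (parent, child).
    """
    fresh = count()
    edges = []
    parent = {}                              # current parent of every leaf label

    def new_node():
        return ("v", next(fresh))            # fresh internal node, distinct from leaf labels

    def subdivide(leaf):
        """Insert a new node on the edge above `leaf` and return it."""
        u = new_node()
        edges.remove((parent[leaf], leaf))
        edges.extend([(parent[leaf], u), (u, leaf)])
        parent[leaf] = u
        return u

    # Initialise with the root above the second element of the last pair;
    # in a valid CPS every later second element reappears as a first element.
    root, last_leaf = ("root", 0), cps[-1][1]
    edges.append((root, last_leaf))
    parent[last_leaf] = root

    for x, y in reversed(cps):
        if x not in parent:                  # add (x, y) back as a cherry
            t = subdivide(y)
            edges.append((t, x))
            parent[x] = t
        else:                                # add (x, y) back as a reticulated cherry
            r = subdivide(x)                 # new reticulation above x
            t = subdivide(y)                 # new tree node above y
            edges.append((t, r))
    return edges

# Example from Figure 3: S = (x,y), (x,w), (w,y).
print(network_from_cps([("x", "y"), ("x", "w"), ("w", "y")]))
```

Reducing the returned edge set with the same sequence, pair by pair, brings it back to a single root-leaf edge, which is the property the heuristics below rely on.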
### Randomised Heuristics We define a class of randomised heuristics that construct a CPS by picking one reducible pair of the input set \(\mathcal{T}\) at a time and by appending this pair to a growing partial sequence, as described in Algorithm 1 (the two subroutines PickNext and CompleteSeq will be later described in details). We call this class CPH (for Cherry-Picking Heuristics). Recall that \(\mathcal{T}_{S}\) denotes the set of trees \(\mathcal{T}\) after reducing all trees with a (partial) CPS \(S\). The while loop at lines 2-5 produces, in general, a partial CPS \(S\), as shown in Example 1. To make it into a CPS, the subroutine CompleteSeq at line 6 appends at Figure 3: The ON reconstructed from the sequence \(S=(x,y),(x,w),(w,y)\). The pairs are added to the network in reverse order: if the first element of a pair is not yet in the network, it is added as a cherry with the second element (see the pair \((x,w)\)). Otherwise, a reticulation is added above the first element with an incoming edge from a new parent of the second element (see the pair \((x,y)\)). the end of \(S\) a sequence \(S^{\prime}\) of pairs such that each second element in a pair of \(S\circ S^{\prime}\) is a first element in a later pair (except for the last one), as required by the definition of CPS. These additional pairs do not affect the trees in \(\mathcal{T}\), which are already fully reduced by \(S\). Algorithm 2 describes a procedure CompleteSeq that runs in time linear in the length of \(S\). ``` INPUT: A set \(\mathcal{T}\) of phylogenetic trees OUTPUT: A CPS reducing \(\mathcal{T}\). 1:\(S\leftarrow\emptyset\); 2:while there is a reducible pair in \(\mathcal{T}_{S}\)do 3:\((x,y)\leftarrow\mathsf{PickNext}(\mathcal{T}_{S})\); 4:\(S\gets S\circ(x,y)\); 5: Reduce \((x,y)\) in all trees of \(\mathcal{T}_{S}\); 6:\(S\leftarrow\mathsf{CompleteSeq}(S)\); 7:return\(S\); ``` **Algorithm 1** CPH **Example 1**.: _Let \(\mathcal{T}\) consist of the 2-leaf trees \((x,y)\) and \((w,z)\). A partial CPS at the end of the while loop in Algorithm 1 could be, e.g., \(S=(x,y),(w,z)\). The trees are both reduced to one leaf, so there are no more reducible pairs, but \(S\) is not a CPS. To make it into a CPS either pair \((y,z)\) or pair \((z,y)\) can be appended: e.g., \(S\circ(y,z)=(x,y),(w,z),(y,z)\) is a CPS, and it still fully reduces the two input trees._ The class of heuristics given by Algorithm 1 is concretised in different heuristics depending on the function PickNext at line 3 used to choose a reducible pair at each iteration. To formulate them we need to introduce the following notions of height pair and trivial pair. Let \(N\) be a network with branch lengths and let \((x,y)\) be a reducible pair in \(N\). The _height pair_ of \((x,y)\) in \(N\) is a pair \((h_{x}^{N},h_{y}^{N})\in\mathbb{R}_{\geq 0}^{2}\), where \(h_{x}^{N}=\ell(p(x),x)\) and \(h_{y}^{N}=\ell(p(y),y)\) if \((x,y)\) is a cherry (indeed, in this case, \(p(x)=p(y)\)); \(h_{x}^{N}=\ell(p(y),p(x))+\ell(p(x),x)\) and \(h_{y}^{N}=\ell(p(y),y)\) if \((x,y)\) is a reticulated cherry. The _height_\(h_{(x,y)}^{N}\) of \((x,y)\) is the average \((h_{x}^{N}+h_{y}^{N})/2\) of \(h_{x}^{N}\) and \(h_{y}^{N}\). Let \(\mathcal{T}\) be a set of trees whose leaf sets are subsets of a set of taxa \(X\). An ordered leaf pair \((x,y)\) is a _trivial pair_ of \(\mathcal{T}\) if it is reducible in all \(T\in\mathcal{T}\) that contain both \(x\) and \(y\), and there is at least one tree in which it is reducible. 
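Before instantiating PickNext, the skeleton below transcribes Algorithm 1 into Python together with one possible realisation of CompleteSeq; the concrete PickNext rules defined next plug into it. The tree-set operations reducible_pairs and reduce_in_all are assumed to be supplied by the tree data structure and are not implemented here; the final assertion replays Example 1.

```
# A sketch of the CPH framework of Algorithm 1, parameterised by pick_next.
# The helpers reducible_pairs(trees) and reduce_in_all(trees, pair) are assumed
# to be provided by the tree-set data structure.

def cph(trees, pick_next, reducible_pairs, reduce_in_all):
    S = []
    while reducible_pairs(trees):          # lines 2-5 of Algorithm 1
        pair = pick_next(trees)
        S.append(pair)
        reduce_in_all(trees, pair)
    return complete_seq(S)                 # line 6

def complete_seq(S):
    """One possible CompleteSeq: append pairs so that every second element
    either reappears later as a first element or is the final second element,
    as the definition of a CPS requires."""
    if not S:
        return S
    final_y = S[-1][1]
    later_firsts, dangling = set(), []
    for (x, y) in reversed(S):
        if y != final_y and y not in later_firsts and y not in dangling:
            dangling.append(y)
        later_firsts.add(x)
    targets = dangling + [final_y]
    return S + list(zip(targets, targets[1:]))

# Example 1: the partial CPS (x,y),(w,z) is completed by appending (y,z).
assert complete_seq([("x", "y"), ("w", "z")]) == [("x", "y"), ("w", "z"), ("y", "z")]
```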
We define the following three heuristics in the CPH class, resulting from as many possible implementations of PickNext.

* Rand: Function PickNext picks uniformly at random a reducible pair of \(\mathcal{T}_{S}\).
* LowPair: Function PickNext picks a reducible pair \((x,y)\) with the lowest average of values \(h_{(x,y)}^{T}\) over all \(T\in\mathcal{T}_{S}\) in which \((x,y)\) is reducible (ties are broken randomly).
* TrivialRand: Function PickNext picks a trivial pair if there exists one and otherwise picks a reducible pair of \(\mathcal{T}_{S}\) uniformly at random.

**Theorem 2**.: _Algorithm 1 computes a CPS that fully reduces \(\mathcal{T}\), for any function PickNext that picks, in each iteration, a reducible pair of \(\mathcal{T}_{S}\)._ Proof.: The sequence \(S\) is initiated as an empty sequence. Then, each iteration of the while loop (lines 2-5) of Algorithm 1 appends one pair to \(S\) that is reducible in at least one of the trees in \(\mathcal{T}\), and reduces it in all trees. Hence, in each iteration, the total size of \(\mathcal{T}_{S}\) is reduced, so the algorithm finishes in finite time. Moreover, at the end of the while loop, each tree in \(\mathcal{T}_{S}\) is fully reduced, thus the partial CPS \(S\) fully reduces \(\mathcal{T}\). As CompleteSeq only appends pairs at the end of \(S\), the result of this subroutine still fully reduces all trees in \(\mathcal{T}\). In Section 5 we experimentally show that TrivialRand produces the best results among the proposed randomised heuristics. In the next section, we introduce a further heuristic step for TrivialRand which improves the output quality.

### Improving Heuristic TrivialRand via Tree Expansion

Let \(\mathcal{T}\) be a set of trees whose leaf sets are subsets of a set of taxa \(X\), let \(S\) be a partial CPS for \(\mathcal{T}\) and let \(\mathcal{T}_{S}\) be the tree set obtained by reducing in order the pairs of \(S\) in \(\mathcal{T}\). With respect to a trivial pair \((x,y)\), each tree \(T\in\mathcal{T}_{S}\) is of one of the following types: (i) \((x,y)\) is reducible in \(T\); or (ii) neither \(x\) nor \(y\) are leaves of \(T\); or (iii) \(y\) is a leaf of \(T\) but \(x\) is not; or (iv) \(x\) is a leaf of \(T\) but \(y\) is not. Suppose that at some iteration of TrivialRand, the subroutine PickNext returns the trivial pair \((x,y)\). Then, before reducing \((x,y)\) in all trees, we do the following extra step: for each tree of type (iv), replace leaf \(x\) with cherry \((x,y)\). We call this operation the _tree expansion_: see Figure 4(c). The effect of this step is that, after reducing \((x,y)\), leaf \(x\) disappears from the set of trees, which would not necessarily have been the case before, because of trees of type (iv). Tree expansion followed by the reduction of \((x,y)\) can, alternatively, be seen as relabelling leaf \(x\) in any tree of type (iv) by \(y\). The choice of describing this relabelling as tree expansion is just for the purpose of proving Lemma 3. To guarantee that a CPS \(S\) produced with tree expansion implies a feasible solution for Hybridization, we must show that the network \(N\) reconstructed from \(S\) displays all the trees in the input set \(\mathcal{T}\).
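A sketch of the three selection rules and of the tree-expansion step is given below. It assumes helper functions on the current tree set (reducible_pairs, trivial_pairs, and heights returning the values \(h^{T}_{(x,y)}\) over the trees in which the pair is reducible), as well as leaf access and a relabelling operation on trees; all of these names are illustrative and not the interface of our implementation.

```
# Sketches of the PickNext rules of Rand, LowPair and TrivialRand, plus the
# tree-expansion step. The helpers reducible_pairs, trivial_pairs and heights,
# and the tree attributes/methods used in expand_trees, are assumed to exist.
import random

def pick_rand(trees, reducible_pairs, **_):
    return random.choice(list(reducible_pairs(trees)))

def pick_lowpair(trees, reducible_pairs, heights, **_):
    pairs = list(reducible_pairs(trees))
    avg = {p: sum(heights(trees, p)) / len(heights(trees, p)) for p in pairs}
    lowest = min(avg.values())
    return random.choice([p for p in pairs if avg[p] == lowest])  # break ties at random

def pick_trivialrand(trees, reducible_pairs, trivial_pairs, **_):
    trivial = list(trivial_pairs(trees))
    if trivial:
        return random.choice(trivial)
    return random.choice(list(reducible_pairs(trees)))

def expand_trees(trees, x, y):
    """Tree expansion for a trivial pair (x, y): in every tree of type (iv),
    i.e. containing x but not y, replace leaf x by the cherry (x, y) --
    equivalently, relabel x as y once (x, y) is reduced."""
    for T in trees:
        if x in T.leaves and y not in T.leaves:      # type (iv)
            T.replace_leaf_with_cherry(x, y)         # assumed tree operation
```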
We prove that indeed this is the case with the following steps: (1), we consider the networks \(N_{T}\) obtained by "reverting" a partial CPS \(S\) obtained right after applying tree expansion to a tree \(T_{S}\): in other words, to obtain \(N_{T}\) we add to the partially reduced tree \(T_{S}\) the trivial pair \((x,y)\) and then all the pairs previously reduced by \(S\) in the sense of Lemma 1. We show that \(N_{T}\) always displays \(T\), the original tree; (2), we prove that this holds for an arbitrary sequence of tree expansion operations; and (3), since the CPS obtained using tree expansions fully reduces the networks of point (2), and since these networks display the trees in the original set \(\mathcal{T}\), we have the desired property by Theorem 1. We prove this more formally with the following lemma. **Lemma 3**.: _Let \(S\) be the CPS produced by TrivialRand using tree expansion with input \(\mathcal{T}\). Then the network reconstructed from \(S\) displays all the trees in \(\mathcal{T}\)._ Proof.: Let us start with the case where only \(1\) tree expansion occurs. Let \(S^{(i-1)}\) be the partial CPS constructed in the first \(i-1\) steps of TrivialRand, and let \(i\) be the step in which we pick a trivial pair \((x,y)\). For each \(T\in\mathcal{T}_{S^{(i-1)}}\) that is reduced by \(S^{(i-1)}\) to a tree \(T^{(i-1)}\) of type (iv) for \((x,y)\), let \(S^{(i-1)}_{T}\) be the subsequence of \(S^{(i-1)}\) consisting only of the pairs that subsequently affect \(T\). We use the partial CPS \(S^{i}_{T}=S^{(i-1)}_{T}\circ(x,y)\) to reconstruct a network \(N_{T}\) with a method underlying Lemma 1, starting from \(T^{(i-1)}\): see Figure 4(d). Figure 4: Tree expansion of \(T\)**(a)** with the trivial cherry \((x,y)\) of \(\mathcal{T}_{(y,z)}\). **(b)** After picking cherry \((y,z)\), leaf \(y\) is missing in \(T^{(1)}\). **(c)** Leaf \(x\) is replaced by the cherry \((x,y)\). After completion of the heuristic, we have \(S_{T}=(y,z),(x,y),(y,w),(w,z)\). **(d)** The network \(N_{T}\) reconstructed from \(S^{1}\cdot(x,y)\). Note that the input tree \(T\) is displayed in \(N_{T}\) (solid edges). For trees of type (i)-(iii), \(N_{T}=T\). We call the set \(\mathcal{N}_{\mathcal{T}}\), consisting of the networks \(N_{T}\) for all \(T\in\mathcal{T}\), the _expanded reconstruction_ of \(\mathcal{T}\). Note that, by construction and Lemma 1, all the elements of \(\mathcal{N}_{\mathcal{T}}\) after reducing, in order, the pairs of \(S^{(i-1)}\circ(x,y)\), are trees: in particular, they are equal to the trees of \(\mathcal{T}_{S^{(i-1)}\circ(x,y)}\) in which all the labels \(y\) have been replaced by \(x\). We denote this set of trees \((\mathcal{N}_{\mathcal{T}})_{S^{(i-1)}\circ(x,y)}\). We can generalise this notion to multiple trivial pairs: we denote by \(\mathcal{N}_{\mathcal{T}}^{(j)}\) the expanded reconstruction of \(\mathcal{T}\) with the first \(j\) trivial pairs, and suppose we added the \(j\)-th pair \((w,z)\) to the partial CPS \(S\) at the \(k\)-th step. Consider a tree \(T^{\prime}\in(\mathcal{N}_{\mathcal{T}}^{(j-1)})_{S^{(k-1)}}\) of type (iv) for \((w,z)\), and let \(N_{T}^{(j-1)}\in\mathcal{N}_{\mathcal{T}}^{(j-1)}\) be the network it originated from. Let \(S_{T}^{(k-1)}\) be the subsequence of \(S^{(k-1)}\) consisting only of the pairs that subsequently affected \(N_{T}^{(j-1)}\). Then \(N_{T}^{(j)}\) is the network reconstructed from \(S_{T}^{(k-1)}\circ(w,z)\), starting from \(T^{\prime}\). 
For trees of \((\mathcal{N}_{\mathcal{T}}^{(j-1)})_{S^{(k-1)}}\) that are of type (i)-(iii) for \((w,z)\), we have \(N_{T}^{(j)}=N_{T}^{(j-1)}\). The elements of \(\mathcal{N}_{\mathcal{T}}^{(j)}\) are all networks \(N_{T}^{(j)}\). For completeness, we define \(\mathcal{N}_{\mathcal{T}}^{(0)}=\mathcal{T}\) and \(\mathcal{N}_{\mathcal{T}}^{(1)}=\mathcal{N}_{\mathcal{T}}\). By construction, \(S\) fully reduces all the networks in \(\mathcal{N}_{\mathcal{T}}^{(j)}\), thus the network \(N\) reconstructed from \(S\) displays all of them by Theorem 1. We prove that \(N_{T}^{(j)}\) displays \(T\) for all \(T\in\mathcal{T}\), and thus \(N\) displays the original tree set \(\mathcal{T}\) too, by induction on \(j\). In the base case, we pick \(j=0\) trivial pairs, so the statement is true by Theorem 1. Now let \(j>0\). The induction hypothesis is that each network \(N_{T}^{(j-1)}\in\mathcal{N}_{\mathcal{T}}^{(j-1)}\) displays the tree \(T\in\mathcal{T}\) it originated from. Let \((w,z)\) be the \(j\)-th trivial pair, added to the sequence at position \(k\). Let \(T^{\prime}\in(\mathcal{N}_{\mathcal{T}}^{(j-1)})_{S^{(k-1)}}\) be a tree of type (iv) for \((w,z)\), and let \(N_{T}^{(j-1)}\) be the network it originates from. Then there are two possibilities: either \(z\) is a leaf of \(N_{T}^{(j-1)}\) or it is not. In case it is not, then adding \((w,z)\) to \(N_{T}^{(j-1)}\) does not create any new reticulation, and clearly \(N_{T}^{(j)}\) keeps displaying \(T\). If \(z\) does appear in \(N_{T}^{(j-1)}\), then it must have been reduced by a pair \((z,v)\) of \(S^{(k-1)}\) (otherwise \(T^{\prime}\) would not be of type (iv)). Then the network \(N_{T}^{(j)}\) has an extra reticulation, created with the insertion of \((z,v)\) at some point after \((w,z)\) during the backwards reconstruction. In both cases, by [10, Lemma 10]\(N_{T}^{(j-1)}\) is displayed in \(N_{T}^{(j)}\), and thus by the induction hypothesis \(T\) is displayed too. ### Good Cherries in Theory By Lemma 1 the binary network \(N\) reconstructed from a CPS \(S\) is such that \(S\) is of minimum length for \(N\), that is, there exists no shorter CPS that fully reduces \(N\). By Theorem 1 if \(S\), in turn, fully reduces \(\mathcal{T}\), then \(N\) displays all the trees in \(\mathcal{T}\). Depending on \(S\), though, \(N\) is not necessarily an optimal network (i.e., with minimum reticulation number) among the ones displaying \(\mathcal{T}\): see Example 2. Let \(\mathsf{OPT}(\mathcal{T})\) denote the set of networks that display \(\mathcal{T}\) with the minimum possible number of reticulations (in general, this set contains more than one network). Ideally, we would like to produce a CPS fully reducing \(\mathcal{T}\) that is also a minimum-length CPS fully reducing some network of \(\mathsf{OPT}(\mathcal{T})\). In other words, we aim to find a CPS \(\tilde{S}=(x_{1},y_{1}),\ldots,(x_{n},y_{n})\) such that, for any \(i\in[1,n]\), \((x_{i},y_{i})\) is a reducible pair of \(\tilde{N}_{\tilde{S}^{(i-1)}}\), where \(\tilde{S}^{(0)}=\emptyset\), \(\tilde{S}^{(k)}=(x_{1},y_{1}),\ldots,(x_{k},y_{k})\) for all \(k\in[1,n]\), and \(\tilde{N}\in\mathsf{OPT}(\mathcal{T})\). Let \(S=(x_{1},y_{1}),\ldots,(x_{n},y_{n})\) be a CPS fully reducing \(\mathcal{T}\) and let \(\mathsf{OPT}^{(k)}(\mathcal{T})\) consist of all networks \(N\in\mathsf{OPT}(\mathcal{T})\) such that each pair \((x_{i},y_{i})\), \(i\in[1,k]\), is reducible in \(N_{S^{(i-1)}}\). 
**Lemma 4**.: _A CPS \(S\) reducing \(\mathcal{T}\) reconstructs an optimal network \(\tilde{N}\) if and only if each pair \((x_{i},y_{i})\) of \(S\) is reducible in \(\tilde{N}_{S^{i-1}}\), for all \(i\in[1,n]\)._ Proof.: (\(\Rightarrow\)) By Lemma 1, \(S\) is a minimum-length CPS for the network \(\tilde{N}\) that is reconstructed from it; and a CPS \(C=(w_{1},z_{1}),\ldots,(w_{n},z_{n})\) reducing a network \(N\) is of minimum length precisely if, for all \(j\in[1,n]\), \((w_{j},z_{j})\) is a reducible pair of \(N_{C^{(j-1)}}\) (otherwise the pair \((w_{j},z_{j})\) could be removed from \(C\) and the new sequence would still reduce \(N\)). (\(\Leftarrow\)) If all pairs of \(S\) affect some optimal network \(\tilde{N}\), then \(S\) is a minimum-length CPS for \(\tilde{N}\), thus \(\tilde{N}\) is reconstructed from \(S\) (and it displays \(\mathcal{T}\) by Theorem 1). Lemma 4 implies that if some pair \((x_{i},y_{i})\) of \(S\) does not reduce any network in \(\mathsf{OPT}^{(i-1)}(\mathcal{T})\), then the network reconstructed from \(S\) is not optimal: see Example 2. **Example 2**.: _Consider the set \(\mathcal{T}\) of Figure 2b: \(S=(y,x),(y,z),(w,x),(x,z)\) is a CPS that fully reduces \(\mathcal{T}\) and consists only of pairs successively reducible in the network \(N\) of Fig. 2a, thus it reconstructs it by Lemma 1. Now consider \((w,x)\), which is reducible in \(\mathcal{T}\) but not in \(N\), and pick it as first pair, to obtain e.g. \(S^{\prime}=(w,x),(y,z),(y,x),(w,x),(x,z)\). The network \(N^{\prime}\) reconstructed from \(S^{\prime}\), depicted in Figure 5, has \(r(N^{\prime})=2\), whereas \(r(N)=1\)._ Suppose we are incrementally constructing a CPS \(S=(x_{1},y_{1}),\ldots,(x_{n},y_{n})\) for \(\mathcal{T}\) with some heuristic in the CPH class. If we had an oracle that at each iteration \(i\) told us if a reducible pair \((x,y)\) of \(\mathcal{T}^{(i-1)}\) were a reducible pair in some \(N\in\mathsf{OPT}^{(i-1)}(\mathcal{T})\), then, by Lemma 4, we could solve Hybridization optimally. Unfortunately no such exact oracle can exist (unless \(P=NP\)). However, in the next section we exploit this idea to design machine-learned heuristics in the CPH framework. Figure 5: Network \(N^{\prime}\) of Example 2. Predicting Good Cherries via Machine Learning In this section, we present a supervised machine-learning classifier that (imperfectly) simulates the ideal oracle described at the end of Section 3.3. The goal is to predict, based on \(\mathcal{T}\), whether a given cherry of \(\mathcal{T}\) is a cherry or a reticulated cherry in a network \(N\) displaying \(\mathcal{T}\) with a close-to-optimal number of reticulations, without knowing \(N\). Based on Lemma 4, we then exploit the output of the classifier to define new functions PickNext, that in turn define new machine-learned heuristics in the class of CPH (Algorithm 1). Specifically, we train a random forest classifier on data that encapsulates information on the cherries in the tree set. Given a partial CPS, each reducible pair in \(\mathcal{T}_{S}\) is represented by one data point. Each data point is a pair \((\mathbf{F},\mathbf{c})\), where \(\mathbf{F}\) is an array containing the features of a cherry \((x,y)\) and \(\mathbf{c}\) is an array containing the probability that the cherry belongs to each of the possible classes described below. Recall that cherries are ordered pairs, so \((x,y)\) and \((y,x)\) give rise to two distinct data points. 
The classification model learns the association between \(\mathbf{F}\) and \(\mathbf{c}\). The true class of a cherry \((x,y)\) of \(\mathcal{T}\) depends on whether, for the (unknown) network \(N\) that we aim to reconstruct: (class 1) \((x,y)\) is a cherry of \(N\); (class 2) \((x,y)\) is a reticulated cherry of \(N\); (class 3) \((x,y)\) is not reducible in \(N\), but \((y,x)\) is a reticulated cherry; or (class 4) neither \((x,y)\) nor \((y,x)\) are reducible in \(N\). Thus, for the data point of a cherry \((x,y)\), \(\mathbf{c}[i]\) contains the probability that \((x,y)\) is in class \(i\), and \(\mathbf{c}[1]+\mathbf{c}[2]\) gives the predicted probability that \((x,y)\) is reducible in \(N\). We define the following two heuristics in the CPH framework. * Given a threshold \(\tau\in[0,1)\), function PickNext picks the cherry with the highest predicted probability of being reducible in \(N\) if this probability is at least \(\tau\); or a random cherry if none has a probability of being reducible above \(\tau\). * Function PickNext picks a random trivial pair, if there exists one; otherwise it uses the same rules as ML. In both cases, whenever a trivial pair is picked, we do tree expansion, as described in Section 3.2. Note that if \(\tau=0\), since the predicted probabilities are never exactly \(0\), ML is fully deterministic. In Section 5.2.7 we show how the performance of ML is impacted by the choice of different thresholds. To assign a class to each cherry, we define 19 features, summarised in Table 1, that may capture essential information about the structure of the set of trees, and that can be efficiently computed and updated at every iteration of the heuristics. The _depth_ (resp. _topological_ depth) of a node \(u\) in a tree \(T\) is the total branch length (resp. the total number of edges) on the root-to-\(u\) path; the depth of a cherry \((x,y)\) is the depth of the common parent of \(x\) and \(y\); the depth of \(T\) is the maximum depth of any cherry of \(T\). The (topological) leaf distance between \(x\) and \(y\) is the total branch length of the path from the parent of \(x\) to the lowest common ancestor of \(x\) and \(y\), denoted by \(\operatorname{LCA}(x,y)\), plus the total length of the path from the parent of \(y\) to \(\operatorname{LCA}(x,y)\) (resp. the total number of edges on both paths). In particular, the leaf distance between the leaves of a cherry is zero. ### Time Complexity Designing algorithms with the best possible time complexity was not the main objective of this work. However, for completeness, we provide worst-case upper bounds on the running time of our heuristics. The omitted proofs can be found in Appendix A. We start by stating a general upper bound for the whole CPH framework in the function of the time required by the PickNext routine. **Lemma 5**.: _The running time of the heuristics in the CPH framework is \(\mathcal{O}(|\mathcal{T}|^{2}|X|+cost(\text{PickNext}))\), where \(cost(\text{PickNext})\) is the total time required to choose reducible pairs over all iterations. In particular, Rand takes \(\mathcal{O}(|\mathcal{T}|^{2}|X|)\) time._ Proof.: An upper bound for the sequence length is \((|X|-1)|\mathcal{T}|\) as each tree can individually be fully reduced using at most \(|X|-1\) pairs. 
Hence, the while loop of \begin{table} \begin{tabular}{l l l} \hline \hline Num & Feature name & Description \\ \hline 1 & Cherry in tree & Ratio of trees that contain cherry \((x,y)\) \\ 2 & New cherries & Number of new cherries of \(\mathcal{T}\) after picking cherry \((x,y)\) \\ 3 & Before/after & Ratio of the number of cherries of \(\mathcal{T}\) before/after picking cherry \((x,y)\) \\ 4 & Trivial & Ratio of trees with both leaves \(x\) and \(y\) that contain cherry \((x,y)\) \\ 5 & Leaves in tree & Ratio of trees that contain both leaves \(x\) and \(y\) \\ \hline \multicolumn{3}{l}{_Features measured by distance (d) and topology (t)_} \\ \hline \(6_{d,t}\) & Tree depth & Avg over trees with \((x,y)\) of ratios “depth of the tree/max depth over all trees” \\ \(7_{d,t}\) & Cherry depth & Avg over trees with \((x,y)\) of ratios “depth of \((x,y)\) in the tree/depth of the tree” \\ \(8_{d,t}\) & Leaf distance & Avg over trees with \(x\) and \(y\) of ratios “\(x\)-\(y\) leaf distance/depth of the tree” \\ \(9_{d,t}\) & Leaf depth \(x\) & Avg over trees with \(x\) and \(y\) of ratios “root-\(x\) distance/depth of the tree” \\ \(10_{d,t}\) & Leaf depth \(y\) & Avg over trees with \(x\) and \(y\) of ratios “root-\(y\) distance/depth of the tree” \\ \(11_{d,t}\) & LCA distance & Avg over trees with \(x\) and \(y\) of ratios “\(x\)-\(\operatorname{LCA}(x,y)\) distance/\(y\)-\(\operatorname{LCA}(x,y)\) distance” \\ \(12_{d,t}\) & Depth \(x/y\) & Avg over trees with \(x\) and \(y\) of ratios “root-\(x\) distance/root-\(y\) distance” \\ \hline \hline \end{tabular} \end{table} Table 1: Features of a cherry \((x,y)\). Features 6-12 can be computed for both branch lengths and unweighted branches. We refer to these two options as _distance_ and _topological distance_, respectively. Algorithm 1 is executed at most \((|X|-1)|\mathcal{T}|\) times. Moreover, reducing the pair and updating the set of reducible pairs after one iteration takes \(O(1)\) time per tree. Combining this with the fact that \(\mathsf{CompleteSeq}\) takes \(\mathcal{O}(|S|)=\mathcal{O}(|X||\mathcal{T}|)\) time, we obtain the stated time complexity. Since choosing a random reducible pair takes \(\mathcal{O}(1)\) time at each iteration, \(\mathsf{Rand}\) takes trivially \(\mathcal{O}(|\mathcal{T}|^{2}|X|)\) time. Note that, by Lemma 2, the number of reticulations \(r(N)\) of the network reconstructed from the output CPS is bounded by \((|X|-1)|\mathcal{T}|-|X|+1\) and thus the time complexity of \(\mathsf{Rand}\) is also \(\mathcal{O}(r(N)|\mathcal{T}|)\). Let us now focus on the time complexity of the machine-learned heuristics \(\mathsf{ML}\) and \(\mathsf{TrivialML}\). At any moment during the execution of the heuristics, we maintain a data structure that stores all the current cherries in \(\mathcal{T}\) and allows constant-time insertions, deletions, and access to the cherries and their features. A possible implementation of this data structure consists of a hashtable \(\mathsf{cherryfeatures}\) paired with a list \(\mathsf{cherylist}\) of the pairs currently stored in \(\mathsf{cherryfeatures}\). We will use \(\mathsf{cherrylist}\) to iterate over the current cherries of \(\mathcal{T}\), and \(\mathsf{cherryfeatures}\) to check whether a certain pair is currently a cherry of \(\mathcal{T}\) and to access its features. 
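The sketch below shows one way to realise this bookkeeping in Python: a dictionary keyed by ordered pairs plays the role of cherryfeatures and a list of pairs plays the role of cherrylist, with lazy deletion so that each update is constant-time. The class name, the lazy-deletion choice and the example feature values are ours, not the implementation in the repository.

```
# One possible realisation of the cherryfeatures/cherrylist bookkeeping.
class CherryStore:
    def __init__(self):
        self.cherryfeatures = {}   # ordered pair (x, y) -> dict of feature values
        self.cherrylist = []       # pairs inserted so far, used for iteration

    def add(self, pair, features):
        if pair not in self.cherryfeatures:
            self.cherrylist.append(pair)
        self.cherryfeatures[pair] = features

    def remove(self, pair):
        # lazy deletion: stale entries in cherrylist are skipped when iterating
        self.cherryfeatures.pop(pair, None)

    def __contains__(self, pair):
        return pair in self.cherryfeatures

    def current_pairs(self):
        return [p for p in self.cherrylist if p in self.cherryfeatures]

# Both orderings of a cherry are stored, each with its own feature values
# (here only feature 1 of Table 1, "Cherry in tree", as a placeholder).
store = CherryStore()
store.add(("x", "y"), {"cherry_in_tree": 0.8})
store.add(("y", "x"), {"cherry_in_tree": 0.8})
assert ("x", "y") in store and len(store.current_pairs()) == 2
```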
Note that the total number of cherries inserted in \(\mathsf{cherryfeatures}\) over all the iterations is bounded by the total size of the trees \(||\mathcal{T}||\) because up to two cherries can be created for each internal node over the whole execution. We will assume that we have constant-time access to the leaves of each tree: specifically, given \(T\in\mathcal{T}\) and \(x\in X\), we can check in constant time whether \(x\) is currently a leaf of \(T^{1}\). InitialisationThe cherries of \(\mathcal{T}\) can be identified and features 1-3 can be initially computed in \(\mathcal{O}(||\mathcal{T}||)\) time by traversing all trees bottom-up. Features 4-5 can be computed in \(\mathcal{O}(\min\{|\mathcal{T}|\cdot||\mathcal{T}||,|\mathcal{T}|\cdot|X|^{2}\})\) time by checking, for each \(T\in\mathcal{T}\) and each cherry \((x,y)\) of \(\mathcal{T}\), whether both \(x\) and \(y\) appear in \(T\). Features \(6_{d,t}\) to \(12_{d,t}\) can also be initially computed with a traversal of \(\mathcal{T}\) made efficient by preprocessing each tree in linear time to allow constant-time LCA queries [7] and by storing the depth (both topological and with the branch lengths) of each node. We also store the topological and branch length depth of each tree and their maximum value over \(\mathcal{T}\). Altogether this gives the following lemma. **Lemma 6**.: _Initialising all features for a tree set \(\mathcal{T}\) of total size \(||\mathcal{T}||\) over a set of taxa \(X\) requires \(\mathcal{O}(\min\{|\mathcal{T}|\cdot||\mathcal{T}||,|\mathcal{T}|\cdot|X|^{2}\})\) time and \(\mathcal{O}(||\mathcal{T}||)\) space._ The next lemma provides an upper bound on the time complexity of updating the distance-independent features. **Lemma 7**.: _Updating features 1-5 for a set \(\mathcal{T}\) of \(|\mathcal{T}|\) trees of total size \(||\mathcal{T}||\) over a set of taxa \(X\) requires \(\mathcal{O}(|\mathcal{T}|(||\mathcal{T}||+|X|^{2}))\) total time and \(\mathcal{O}(||\mathcal{T}||)\) space._ Since searching for trivial cherries at each iteration of the randomised heuristic \(\mathsf{TrivialRand}\) can be done with the same procedure we use for updating feature 4 in the machine-learned heuristics, which in particular requires \(\mathcal{O}(|\mathcal{T}|\cdot||\mathcal{T}||)\) time, we have the following corollary. **Corollary 1**.: _The time complexity of \(\mathsf{TrivialRand}\) is \(\mathcal{O}(|\mathcal{T}|\cdot||\mathcal{T}||)=\mathcal{O}(|\mathcal{T}|^{2} \cdot|X|)\)._ The total time required for updating the distance-dependent features raises the time complexity of \(\mathsf{ML}\) and \(\mathsf{TrivialML}\) to quadratic in the input size. However, the extensive analysis reported in Appendix A shows that this is only due to the single feature \(6_{d}\), and without such a feature, the machine-learned heuristics would be asymptotically as fast as the randomised ones. Since Table 3 in Appendix C shows that this feature is not particularly important, in future work it could be worth investigating whether disregarding it leads to equally good results in shorter time. 
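Before moving on, as an illustration of the feature initialisation described at the beginning of this subsection, the sketch below computes the per-node depths of a single tree and evaluates the branch-length leaf distance used in feature \(8_d\) for one pair. For brevity the LCA is found by walking parent pointers, whereas the implementation discussed above relies on a linear-time preprocessing with constant-time LCA queries [7]; the function names and the networkx representation are illustrative only.

```
# A sketch of the per-tree preprocessing behind the distance-based features:
# one traversal stores, for every node, its depth in branch length and in edges.
import networkx as nx

def preprocess(T, root):
    depth_len, depth_top = {root: 0.0}, {root: 0}
    for u, v in nx.bfs_edges(T, root):
        depth_len[v] = depth_len[u] + T.edges[u, v].get("length", 1.0)
        depth_top[v] = depth_top[u] + 1
    return depth_len, depth_top

def lca(T, u, v, depth_top):
    while u != v:                       # repeatedly move the deeper node up
        if depth_top[u] < depth_top[v]:
            u, v = v, u
        u = next(iter(T.predecessors(u)))
    return u

def leaf_distance(T, x, y, depth_len, depth_top):
    """Branch-length leaf distance between x and y: the distance from p(x) to
    LCA(x,y) plus the distance from p(y) to LCA(x,y); zero for a cherry."""
    px = next(iter(T.predecessors(x)))
    py = next(iter(T.predecessors(y)))
    a = lca(T, x, y, depth_top)
    return (depth_len[px] - depth_len[a]) + (depth_len[py] - depth_len[a])
```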
**Lemma 8**.: _The time complexity of \(\mathsf{ML}\) and \(\mathsf{TrivialML}\) is \(\mathcal{O}(||\mathcal{T}||^{2})\)._ ### Obtaining Training Data The high-level idea to obtain training data is to first generate a phylogenetic network \(N\); then to extract the set \(\mathcal{T}\) of all the exhaustive trees displayed in \(N\); and finally, to iteratively choose a random reducible pair \((x,y)\) of \(N\), to reduce it in \(\mathcal{T}\) as well as in \(N\), and to label the remaining cherries of \(\mathcal{T}\) with one of the four classes defined in Section 4 until the network is fully reduced. We generate two different kinds of binary orchard networks, normal and not normal, with branch lengths and up to 9 reticulations using the LGT (lateral gene transfer) network generator of [16], imposing normality constraints when generating the normal networks. For each such network \(N\), we then generate the set \(\mathcal{T}\) consisting of all the exhaustive trees displayed in \(N\). If \(N\) is normal, \(N\) is an optimal network for \(\mathcal{T}\)[24, Theorem 3.1]. This is not necessarily true for any LGT-generated network, but even in this case, we expect \(N\) to be reasonably close to optimal, because we remove redundant reticulations when we generate it and because the trees in \(\mathcal{T}\) cover all the edges of \(N\). In particular, for LGT networks \(r(N)\) provides an upper bound estimate on the minimum possible number of reticulations of any network displaying \(\mathcal{T}\), and we will use it as a reference value for assessing the quality of our results on synthetic LGT-generated data. ## 5 Experiments The code of all our heuristics and for generating data is written in Python and is available at [https://github.com/estherjulien/learn2cherrypick](https://github.com/estherjulien/learn2cherrypick). All experiments ran on an Intel Xeon Gold 6130 CPU @ 2.1 GHz with 96 GB RAM. We conducted experiments on both synthetic and real data, comparing the performance of Rand, TrivialRand, ML and TrivialML, using threshold \(\tau=0\). Similar to the training data, we generated two synthetic datasets by first growing a binary orchard network \(N\) using [16], and then extracting \(\mathcal{T}\) as a subset of the exhaustive trees displayed in \(N\). We provide details on each dataset in Section 5.2. We start by analysing the usefulness of tree expansion, the heuristic rule described in Section 3.2. We synthetically generated 112 instances for each tree set size \(|\mathcal{T}|\in\{5,10,20,50,100\}\) (560 in total), all consisting of trees with 20 leaves each, and grouped them by \(|\mathcal{T}|\); we then ran TrivialRand 200 times (both with and without tree expansion) on each instance, selected the best output for each of them, and finally took the average of these results over each group of instances. The results are in Figure 6, showing that the use of tree expansion brought the output reticulation number down by at least 16% (for small instances) and up to 40% for the larger instances. We consistently chose to use this rule in all the heuristics that detect trivial cherries, namely, TrivialRand, TrivialML, ML (although ML does not explicitly favour trivial cherries, it does check whether a selected cherry is trivial using feature number 2), and the non-learned heuristic that will be introduced in Section 5.3. ### Prediction Model The random forest is implemented with Python's scikit-learn[15] package using default settings. 
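To make the prediction pipeline concrete, the sketch below fits a scikit-learn random forest on rows of the 19 features of Table 1 with the four class labels of Section 4, and uses its class probabilities to implement the threshold rule of ML. The arrays are random placeholders standing in for the training data of Section 4.2, and the function and variable names are our own.

```
# A minimal sketch of the prediction model and of the ML selection rule.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X_train = rng.random((500, 19))          # one row of Table 1 features per cherry (placeholder)
y_train = rng.integers(1, 5, size=500)   # classes 1-4 of Section 4 (placeholder)
model = RandomForestClassifier().fit(X_train, y_train)   # default settings, as in Section 5.1

def pick_ml(candidate_pairs, feature_rows, tau=0.0):
    """ML rule: return the cherry with the highest predicted probability of
    being reducible in the unknown network (class 1 + class 2); fall back to a
    uniformly random cherry if no probability reaches the threshold tau."""
    probs = model.predict_proba(np.asarray(feature_rows))
    cols = list(model.classes_)
    p_reducible = probs[:, cols.index(1)] + probs[:, cols.index(2)]
    best = int(np.argmax(p_reducible))
    if p_reducible[best] >= tau:
        return candidate_pairs[best]
    return candidate_pairs[rng.integers(len(candidate_pairs))]
```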
Figure 6: Number of reticulations output by TrivialRand with and without using tree expansion. The height of the bars is the average reticulation number over each group, obtained by selecting the best of 200 runs for each instance.

We evaluated the performance of our trained random forest models on different datasets in a holdout procedure: namely, we removed \(10\%\) of the data from each training dataset, trained the models on the remaining \(90\%\) and used the holdout \(10\%\) for testing. The accuracy was assessed by assigning to each test data point the class with the highest predicted probability and comparing it with the true class. Before training the models, we balanced each dataset so that each class had the same number of representatives. Each training dataset differed in terms of the number \(M\) of networks used for generating it and the number of leaves of the networks. For each dataset, the number \(L\) of leaves of each generated network was uniformly sampled from \([2,\max L]\), where \(\max L\) is the maximum number of leaves per network. We constructed LGT networks using the LGT generator of [16]. This generator has three parameters: \(n\) for the number of steps, \(\alpha\) for the probability of lateral gene transfer events, and \(\beta\) for regulating the size of the biconnected components of the network (called _blobs_). The combination of these parameters determines the level (maximum number of reticulations per blob), the number of reticulations, and the number of leaves of the output network. In our experiments, \(\alpha\) was uniformly sampled from \([0.1,0.5]\) and \(\beta=1\) (see [16] for more details). To generate normal networks we used the same generator with the same parameters, but before adding a reticulation we checked whether it respected the normality constraints and only added it if it did. Each generated network gave rise to a number of data points: the total number of data points per dataset is shown in Table 4 in Appendix B. Each row of Table 4 corresponds to a dataset on which the random forest can be trained, obtaining as many ML models. We tested all the models on all the synthetically generated instances: we show these results in Figures 18, 19 and 20 in Appendix C. In Section 5.2 we will report the results obtained for the best-performing model for each type of instance. Among the advantages of using a random forest as a prediction model is the ability to compute feature importance, shown in Table 3 in Appendix B. Some of the most useful features for a cherry \((x,y)\) appear to be 'Trivial' (the ratio of the trees containing both leaves \(x\) and \(y\) in which \((x,y)\) is a cherry) and 'Cherry in tree' (the ratio of trees that contain \((x,y)\)). This was not unexpected, as these features are well-suited to identify trivial cherries. 'Leaf distance' (t,d), 'LCA distance' (t) and 'Depth \(x/y\)' (t) are also important features. The rationale behind these features was to try to identify reticulated cherries. This was also the idea for the feature 'Before/after', but this has, surprisingly, a very low importance score. In future work, we plan to conduct a thorough analysis of whether some of the seemingly least important features can be removed without affecting the quality of the results.

### Experimental Results

We assessed the performance of our heuristics on instances of four types: normal, LGT, ZODS (binary non-orchard networks), and real data. Normal, LGT and ZODS data are synthetically generated.
We generated the normal instances much as we did for the training data: we first grew a normal network using the LGT generator and then extracted all the exhaustive trees displayed in the network. We generated normal data for different combinations of the following parameters: \(L\in\{20,50,100\}\) (number of leaves per tree) and \(R\in\{5,6,7\}\) (reticulation number of the original network). Note that, for normal instances, \(|\mathcal{T}|=2^{R}\). For every combination of the parameters \(L\) and \(R\) we generated 48 instances: by _instance group_ we indicate the set of instances generated for one specific parameter pair. For the LGT instances, we grew the networks using the LGT generator, but unlike for the normal instances we then extracted only a subset of the exhaustive trees from each of them, up to a certain amount \(|\mathcal{T}|\in\{20,50,100\}\). The other parameters for LGT instances are the number of leaves \(L\in\{20,50,100\}\) and the number of reticulations \(R\in\{10,20,30\}\). For a fixed pair \((L,|\mathcal{T}|)\), we generated 16 instances for each possible value of \(R\), and analogously, for a fixed pair \((L,R)\) we generated 16 instances for each value of \(|\mathcal{T}|\). The 48 instances generated for a fixed pair of values constitute a LGT instance group. We generated non-orchard binary networks using the ZODS generator [27]. This generator has two user-defined parameters: \(\lambda\), which regulates the speciation rate, and \(\nu\), which regulates the hybridization rate. Following [9] we set \(\lambda=1\) and we sampled \(\nu\in[0.0001,0.4]\) uniformly at random. Like for the LGT instances, we generated an instance group of size 48 for each pair of values \((L,|\mathcal{T}|)\) and \((L,R)\), with \(L\in\{20,50,100\}\), \(|\mathcal{T}|\in\{20,50,100\}\), \(R\in\{10,20,30\}\). Finally, the real-world dataset consists of gene trees on homologous gene sets found in bacterial and archaeal genomes, was originally constructed in [3] and made binary in [21]. We extracted a subset of instances (Table 2) from the binary dataset, for every combination of parameters \(L\in\{20,50,100\}\) and \(|\mathcal{T}|\in\{10,20,50,100\}\). For the synthetically generated datasets, we evaluated the performance of each heuristic in terms of the output number of reticulations, comparing it with the number of reticulations of the network \(N\) from which we extracted \(\mathcal{T}\). For the normal instances, \(N\) is the optimal network [24, Theorem 3.1]; this is not true, in general, for the LGT and ZODS datasets, but even in these cases, \(r(N)\) clearly provides an estimate (from above) of the optimal value, and thus we used it as a reference value for our experimental evaluation. \begin{table} \begin{tabular}{l c c c c c} \hline \hline \(L\) & \multicolumn{5}{c}{\(|\mathcal{T}|\)} & Tot. Trees \\ \hline & 10 & 20 & 50 & 100 & \\ \hline 20 & 50 & 50 & 50 & 50 & 1684 \\ 50 & 20 & 20 & 20 & 20 & 290 \\ 100 & 5 & 5 & 1 & 0 & 53 \\ \hline \hline \end{tabular} \end{table} Table 2: Number of real data instances for each group (combination of parameters \(L\) and \(|\mathcal{T}|\)). For real data, in the absence of the natural estimate on the optimal number of reticulations provided by the starting network, we evaluated the performance of the heuristics comparing our results with the ones given by the exact algorithms from [21] (TreeChild) and from [1] (Hyproscale), using the same datasets that were used to test the two methods in [21]. 
These datasets consist of rather small instances (\(|\mathcal{T}|\leq 8\)); for larger instances, we ran TrivialRand 1000 times for each instance group, selected the best result for each group, and used it as a reference value (Figure 10). We now describe in detail the results we obtained for each type of data and each of the algorithms we tested.

#### 5.2.1 Experiments on Normal Data

For the experiments in this section we used the ML model trained on 1000 normal networks with at most 100 leaves per network (see Figure 18 in Appendix C). We ran the machine-learned heuristics once for each instance and then averaged the results within each instance group (recall that one instance group consists of the sets of all the exhaustive trees of 48 normal networks having the same fixed number of leaves and reticulations). The randomised heuristics Rand and TrivialRand were run \(\min\{x(I),1000\}\) times for each instance \(I\), where \(x(I)\) is the number of runs that can be executed in the same time as one run of ML on the same instance. We omitted the results for LowPair because they were at least 44% worse on average than the worst-performing heuristic we report. In Figure 7 we summarise the results. Solid bars represent the ratio between the average reported reticulation number and the optimal value, for each instance group and for each of the four heuristics. Dashed bars represent the ratio between the average (over the instances within each group) of the best result among the \(\min\{x(I),1000\}\) runs for each instance \(I\) and the optimum. Figure 7: Experimental results for normal data. Each point on the horizontal axis corresponds to one instance group. In the left graph, the height of each bar gives the average of the results over all instances of the group, scaled by the optimum value for the group. The right graph compares the average output of ML within each instance group and the average of the best output given by TrivialRand for each instance of a group. The shaded areas represent 95% confidence intervals. The machine-learned heuristics ML and TrivialML seem to perform very similarly, both leading to solutions close to optimum. The average performance of TrivialRand is around 4 times worse than the machine-learned heuristics; in contrast, if we only consider the best solution among the multiple runs for each instance, they are quite good, having only up to 49% more reticulations than the optimal solution, but they are still at least 4% worse (29% worse on average) than the machine-learned heuristics' solutions: see the right graph of Figure 7. The left graph of Figure 7 shows that the performance of the randomised heuristics seems to be negatively impacted by the number of reticulations of the optimal solution, while we do not observe a clear trend for the machine-learned heuristics, whose performance is very close to optimum for all the considered instance groups. Indeed, the number of existing phylogenetic networks with a certain number of leaves grows exponentially in the number of reticulations, thus making it less probable to reconstruct a "good" network with random choices. This is consistent with the existing exact methods being FPT in the number of reticulations [23, 21]. The fully randomised heuristic Rand always performed much worse than all the others, indicating that identifying the trivial cherries has a great impact on the effectiveness of the algorithms (recall that ML implicitly identifies trivial cherries).
#### 5.2.2 Experiments on LGT Data For the experiments on LGT data we used the ML model trained on 1000 LGT networks with at most 100 leaves per network (see Figure 19 in Appendix C). The setting of the experiments is the same as for the normal data (we run the randomised heuristics multiple times and the machine-learned heuristics only once for each instance), with two important differences. First, for LGT data we only take proper subsets of the exhaustive trees displayed by the generating networks, and thus we have two kinds of instance groups: one where in each group the number of trees extracted from a network and the number of leaves of the networks are fixed, but the trees come from networks with different numbers of reticulations; and one where the number of reticulations of the generating networks and their number of leaves are fixed, but the number of trees extracted from a network varies. The second important difference is that the reference value we use for LGT networks is not necessarily the optimum, but it is just an upper bound given by the number of reticulations of the generating networks which we expect to be reasonably close to the optimum (see Section 4.2). The results for the LGT datasets are shown in Figure 8. Comparing these results with those of Figure 7, it is evident that the LGT instances were more difficult than the normal ones for all the tested heuristics: this could be due to the fact that the normal instances consisted of all the exhaustive trees of the generating networks, while the LGT instances only have a subset of them and thus carry less information. The machine-learned heuristics performed substantially better (up to 80% on average) than the best randomised heuristic TrivialRand in all instance groups but the ones with the smallest values for parameters \(R,|\mathcal{T}|\) and \(L\), for which the performances are essentially overlapping. On the contrary, the advantage of the machine-learned methods is more pronounced when the parameters are set to the highest values. This is because the larger the parameters, the more the possible different networks that embed \(\mathcal{T}\), thus the less likely for the randomised methods to find a good solution. From the graphs on the right of Figure 8, it seems that the number of reticulations has a negative impact on both machine-learned and randomised heuristics, the effect being more pronounced for the randomised ones. The effect of the number of trees \(|\mathcal{T}|\) on the quality of the solutions is not as clear (Figure 8, left). However, we can still see that the trend of ML and TrivialRand is the same: the "difficult" instance groups are so for both heuristics, even if the degradation in the quality of the solutions for such instance groups is less marked for ML than for TrivialRand. Figure 8: Experimental results for LGT data. Each point on the horizontal axis corresponds to one instance group. For the graphs on the left, there is one group for each fixed pair \((L,|\mathcal{T}|)\) consisting of 16 instances coming from LGT networks for each value of \(R\in\{10,20,30\}\). For the graphs on the right, there is one group for each fixed pair \((L,R)\) consisting of 16 instances coming from LGT networks for each value of \(|\mathcal{T}|\in\{20,50,100\}\). In the top graphs, the height of each bar gives the average of the results over all instances of the group, each scaled by the number of reticulations of the generating network. 
The bottom graphs compare the average output of ML within each instance group and the average of the best output given by TrivialRand for each instance group. The shaded areas represent 95% confidence intervals.

#### 5.2.3 Experiments on ZODS Data

For the experiments on ZODS data we used the ML model trained on 1000 LGT networks with at most 100 leaves per network (see Figure 20 in Appendix C). The setting of the experiments is the same as for the LGT data, and the results are shown in Figure 9. At first glance, the performance of the randomised heuristics seems to be better for ZODS data than for LGT data (compare Figures 8 and 9), which sounds counterintuitive. Recall, however, that all the graphs show the ratio between the number of reticulations returned by our methods and a reference value, i.e., the number of reticulations of the generating network: while we expect this reference to be reasonably close to the optimum for LGT networks, this is not the case for ZODS networks. In fact, a closer look at ZODS networks shows that they have a large number of redundant reticulations which could be removed without changing the set of trees they display, and thus their reticulation number is in general considerably larger than the optimum. This is an inherent effect of the ZODS generator not having any constraints on the reticulations that can be introduced, and it is more marked on networks with a small number of leaves. Having a reference value significantly larger than the optimum makes the ratios shown in Figure 9 small (close to 1, especially for TrivialRand on small instances) without implying that the results for the ZODS data are better than the ones for the LGT data. The graphs of Figures 8 and 9 are thus not directly comparable.

Figure 9: Experimental results for ZODS data. Each point on the horizontal axis corresponds to one instance group. For the graphs on the left, there is one group for each fixed pair \((L,|\mathcal{T}|)\) consisting of 16 instances coming from ZODS networks for each value of \(R\in\{10,20,30\}\). For the graphs on the right, there is one group for each fixed pair \((L,R)\) consisting of 16 instances coming from ZODS networks for each value of \(|\mathcal{T}|\in\{20,50,100\}\). In the top graphs, the height of each bar gives the average of the results over all instances of the group, each scaled by the number of reticulations of the network the instance originated from. The bottom graphs compare the average output of ML within each instance group and the average of the best output given by TrivialRand for each instance of the group. The shaded areas represent 95% confidence intervals.

The reference value for the experiments on ZODS data not being realistically close to the optimum, however, does not invalidate their significance. Indeed, the scope of such experiments was just to compare the performance of the machine-learned heuristics on data entirely different from those they were trained on with the performance of the randomised heuristics, which should not depend on the type of network that was used to generate the input. As expected and in contrast with normal and LGT data, the results show that the machine-learned heuristics perform worse than the randomised ones on ZODS data, consistent with the ML methods being trained on a completely different class of networks.

#### 5.2.4 Experiments on Real Data

We conducted two sets of experiments on real data, using the ML model trained on 1000 LGT networks with at most 100 leaves each.
For sufficiently small instances, we compared the results of our heuristics with the results of two existing tools for reconstructing networks from binary trees: TreeChild[21] and Hybroscale[1]. Hybroscale is an exact method performing an exhaustive search on the networks displaying the input trees, therefore it can only handle reasonably small instances in terms of the number of input trees. TreeChild is a fixed-parameter (in the number of reticulations of the output) exact algorithm that reconstructs the best _tree-child_ network, a restricted class of phylogenetic networks, and due to its fast-growing computation time cannot handle large instances either. We tested ML and TrivialRand against Hybroscale and TreeChild using the same dataset used in [21], in turn taken from [3]. The dataset consists of ten instances for each possible combination of the parameters \(L\in\{10,20,30,40,50,60,80,100,150\}\) and \(|\mathcal{T}|\in[2,8]\). In Figure 10 we show results only for the instance groups for which Hybroscale or TreeChild could output a solution within 1 hour, consistent with the experiments in [21]. As a consequence of Hybroscale and TreeChild being exact methods (TreeChild only for a restricted class of networks), they performed better than both ML and TrivialRand on all instances they could solve, although the best results of TrivialRand are often close (no worse than 15%) and sometimes match the optimal value. The main advantage of our heuristics is that they can handle much larger instances than the exact methods. In the conference version of this paper [4] we showed the results of our heuristics on large real instances, using a ML model trained on 10 networks with at most 100 leaves each. These results demonstrated that consistently with the simulated data, the machine-learned heuristics gave significantly better results than the randomised ones for the largest instances. When we first repeated the experiments with the new models trained on 1000 networks with \(\mathsf{max}L=100\), however, we did not obtain similar results: instead, the results of the randomised heuristics were better or only marginally worse than the machine-learned ones on almost all the instance groups, including the largest. Puzzled by these results, we conducted an experiment on the impact of the training set on real data. The results are reported in Figure 11, and show that the choice of the networks on which we train our model has a big impact on the quality of the results for the real datasets. This is in contrast with what we observed for the synthetic datasets, for which only the class of the training networks was important, not the specific instances of the networks themselves. According to what was noted in [21], this is most likely due to the fact that the real phylogenetic data have substantially more structure than random synthetic datasets, and the randomly generated training networks do not always reflect this structure. By chance, the networks we used for training the model we used in [4] were similar to real phylogenetic networks, unlike the 1000 networks in the training set of this paper. Figure 10: Comparison of ML, TrivialRand, Hybroscale, and TreeChild on real data. Each point on the horizontal axis corresponds to one instance group, consisting of 10 instances for a fixed pair \((L,|\mathcal{T}|)\). In the top graph, the height of each bar gives the average, over all instances of the group, of the number of reticulations returned by the method. 
The bottom graphs compare the average output of ML within each instance group and the average of the best output given by TrivialRand within the group. The shaded areas represent 95% confidence intervals. #### 5.2.5 Experiments on Scalability We conducted experiments to study how the running time of our heuristics scales with increasing instance size for all datasets. In Figure 12 we report the average of the running times of \(\mathsf{ML}\) for the instances within each instance group with a 95% confidence interval, for an increasing number of reticulations (synthetic datasets) or number of trees (real dataset). The datasets and the instance groups are those described in the previous sections. Note that we did not report the running times of the randomised heuristics because they are meant to be executed multiple times on each instance, and in all the experiments we bounded the number of executions precisely using the time required for one run of \(\mathsf{ML}\). We also compared the running time of our heuristics with the running times of the exact methods TreeChild and Hybroscale. The results are shown in Figure 13 and are consistent with the execution times of the exact methods growing exponentially, while the running time of our heuristics grows polynomially. Note that networks with more reticulations are reduced by longer CPS and thus the running time increases with the number of reticulations. Figure 11: Ratio between the performance of \(\mathsf{ML}\) and the best value output by TrivialRand for different instance groups and different training sets. TrivialRand is executed \(\min\{x(I),1000\}\) times for each instance \(I\), \(x(I)\) being the number of runs that could be completed in the same time as one run of \(\mathsf{ML}\) on \(I\). The results are then averaged within each group. Each blue line represents the results obtained training the model with a different set of 10 randomly generated LGT networks with at most 100 leaves each. The green line corresponds to the training set used in [4]; the orange line represents one of the best-performing sets; the red line corresponds to the training set we used for the experiments on LGT and ZODS data in this paper, consisting of 1000 randomly generated LGT networks. #### 5.2.6 Experiments on Non-Exhaustive Input Trees The instances on which we tested our methods so far all consisted of a set of exhaustive trees, that is, each input tree had the same set of leaves which coincided with the set of leaves of the network. However, this is not a requirement of our heuristics, which are able to produce feasible solutions also when the leaf sets of the input trees are different, that is when their leaves are proper subsets of the leaves of the optimal networks that display them. To test their performance on this kind of data, we generated 18 LGT instance groups starting from the instances we used in Section 5.2.2 and removing a certain percentage \(p\) of leaves from each tree in each instance uniformly at random. Specifically, we generated an instance group for each value of \(p\in\{5,10,15,20,25,50\}\) starting from the LGT instance groups with \(L=100\) leaves and \(R\in\{10,20,30\}\) reticulations. Since the performances of the two machine-learned heuristics were essentially overlapping for all of the other experiments, and since TrivialRand performed consistently better than the other randomised heuristics, we limited this test to ML and TrivialRand. The results are shown in Figure 14. 
In accordance with intuition, the performance of both methods decreases with an increasing percentage of removed leaves, as the trees become progressively less informative. However, the degradation in the quality of the solutions is faster for ML than for TrivialRand, consistent with the fact that ML was trained on exhaustive trees only: when the difference between the training data and the input data becomes too large, the behaviour of the machine-learned heuristic becomes unpredictable. We leave the design of algorithms better suited for trees with missing leaves to future work.

Figure 12: The running time (in seconds) of ML for the instance groups described in Sections 5.2.1, 5.2.2, 5.2.3, 5.2.4. The solid lines represent the average of the running times for the instances within each instance group. The shaded areas represent 95% confidence intervals.

#### 5.2.7 Effect of the Threshold on ML

We tested the effectiveness of adding a threshold \(\tau>0\) to ML on the same datasets of Sections 5.2.1, 5.2.2 and 5.2.3 (normal, LGT and ZODS). Recall that each instance group consists of 48 instances. We ran ML ten times for each threshold \(\tau\in\{0,0.1,0.3,0.5,0.7\}\) on each instance, took the lowest output reticulation number and averaged these results within each instance group. The results are shown in Figure 15. For all types of data, a threshold \(\tau\leq 0.3\) is beneficial, intuitively indicating that when the probability of a pair being reducible is small it gives no meaningful indication, and thus random choices among these pairs are more suitable. The seemingly best value for the threshold, though, is different for different types of instances. The normal instances seem to benefit from quite high values of \(\tau\), the best among the tested values being \(\tau=0.7\). While the optimal \(\tau\) value for normal instances could be even higher, we know from Figure 7 that it must be \(\tau<1\), as the random strategies are less effective than the one based on machine learning for normal data. For the LGT and the ZODS instances, the best threshold seems to be around \(\tau=0.3\), while very high values (\(\tau=0.7\)) are counterproductive. This is especially true for the LGT instances, consistent with the randomised heuristics being less effective for them than for the other types of data (see Figure 8). These experiments should be seen as an indication that introducing some randomness may improve the performance of the ML heuristics, at the price of running them multiple times. We defer a more thorough analysis to future work.

Figure 13: The running time of ML on the real dataset described in Section 5.2.4 compared with the running time of the exact methods Hybroscale and TreeChild on the same dataset. The solid lines represent the average running times within each instance group. The shaded areas represent 95% confidence intervals.

### A Non-Learned Heuristic Based on Important Features

In this section we propose \(\mathsf{FeatImp}\), yet another heuristic in the CPH framework. Although \(\mathsf{FeatImp}\) does not rely on a machine learning model, we defined the rules to choose a cherry on the basis of the features that were found to be the most relevant according to the model we used for \(\mathsf{ML}\) and \(\mathsf{TrivialML}\). To identify the most suitable rules, we trained a classification tree using the same features and training data as the ones used for the \(\mathsf{ML}\) heuristic (see Figure 17 in Appendix B).
We then selected the most relevant features used in such tree and used them to define the function \(\mathsf{PickNext}\) listed by Algorithm 3: namely, the features \(4\), \(8_{t}\), \(11_{d}\) and \(12_{t}\) of Table 1 (the ratio of trees having both leaves \(x\) and \(y\) in which \((x,y)\) is reducible, the average of the topological leaf distance between \(x\) and \(y\) scaled by the depth of the trees, the average of the ratios \(d(x,\mathsf{LCA}(x,y))/d(y,\mathsf{LCA}(x,y))\) and the average of the topological distance from \(x\) to the root over the topological distance from \(y\) to the root, respectively). To compute and update these quantities we proceed as described in Section 4.1 and Appendix A. The general idea of the function \(\mathsf{PickNext}\) used in \(\mathsf{FeatImp}\) is to mimic the first splits of the classification tree by progressively discarding the candidate reducible pairs that are not among the top \(\alpha\%\) scoring for each of the considered features, for some input parameter \(\alpha\). Figure 14: Ratio between the number of reticulations outputted by \(\mathsf{ML}\) and \(\mathsf{TrivialRand}\)Best and the reference value for an increasing percentage of removed leaves on LGT data. Each point on the horizontal axis corresponds to a certain percentage of leaves removed from each tree; each line represents the average, within the instances of a group \((L,R)\) with a certain percentage of removed leaves, of the output reticulation number divided by the reference value. The shaded areas represent 95% confidence intervals. We implemented \(\mathsf{FeatImp}\) and tested it on the same instances as Sections 5.2.1, 5.2.2 and 5.2.3 with \(\alpha=20\). The results are shown in Figure 16. As expected, \(\mathsf{FeatImp}\) works consistently worse than \(\mathsf{ML}\) on all the tested datasets, and it also performs worse than \(\mathsf{TrivialRand}\) on most instance groups. However, it is on average \(12\%\) better than \(\mathsf{TrivialRand}\) on the LGT instance group having \(50\) leaves and \(30\) reticulations and on all the LGT instance groups with \(100\) leaves, which are the most difficult for the randomised heuristics, as already noticed in Section 5.2.2. The results it provides for such difficult instances are only on average \(20\%\) worse than those of \(\mathsf{ML}\), with the advantage of not having to train a model to apply the heuristic. These experiments are not intended to be exhaustive, but should rather be seen as an indication that machine learning can be used as a guide to design smarter non-learned heuristics. Possible improvements of \(\mathsf{FeatImp}\) include using different values of \(\alpha\) for different features, introducing some randomness in Line 8 (that is, instead of choosing the single top-scoring pair, choosing one among the top \(\alpha\%\) at random), or using fewer/more features. Figure 15: The reticulation number when running \(\mathsf{ML}\) with different thresholds on the instance groups of Sections 5.2.1, 5.2.2 and 5.2.3. Each instance was run \(10\) times, and the lowest reticulation value of these runs was selected. The shaded areas represent \(95\%\) confidence intervals. ## 6 Conclusions Our contributions are twofold: first, we presented the first methods that allow reconstructing a phylogenetic network from a large set of large binary phylogenetic trees. Second, we show the promise and the limitation of the use of machine learning in this context. 
Our experimental studies indicate that machine-learned strategies, consistent with intuition, are very effective when the training data have a structure similar enough to the test data. In this case, the results we obtained with machine learning were the best among all the tested methods, and the advantage is particularly evident in the most difficult instances. Furthermore, preliminary experiments indicate that the performance of the machine-learned methods can even be improved by introducing appropriate thresholds, in fact mediating between random choices and predictions. However, when the training data do not sufficiently reflect the structure of the test data, repeated runs of the fast randomised heuristics lead to better results. The non-learned cherry-picking heuristic we designed based on the most relevant features of the input (identified using machine learning) shows yet another interesting direction. Our results suggest many interesting directions for future work. First of all, we have seen that machine learning is an extremely promising tool for this problem since it can identify cherries and reticulated cherries of a network, from displayed trees, with very high accuracy. It would be interesting to prove a relationship between the machine-learned models' accuracy and the produced networks' quality. In addition, Figure 16: Comparison of the results of FeatImp, ML and TrivialRand on the instance groups described in Sections 5.2.1, 5.2.2 and 5.2.4. Each point on the horizontal axis corresponds to an instance group; each line represents the average, within the instance group, of the output reticulation number divided by the reference value. The shaded areas represent 95% confidence intervals. do there exist algorithms that exploit the high accuracy of the machine-learned models even better? Could other machine learning methods than random forests, or more training data, lead to even better results? Our methods are applicable to trees with missing leaves but perform well only if the percentage of missing leaves is small. Can modified sets of features be defined that are more suitable for input trees with many missing leaves? Moreover, we have seen that combining randomness with machine learning can lead to better results than either individual approach. However, we considered only one strategy to achieve this. What are the best strategies for combining randomness with machine learning for this, and other, problems? From a practical point of view, it is important to investigate whether our methods can be extended to deal with nonbinary input trees and to develop efficient implementations: in fact, we point out that our current implementations are in Python and not optimised for speed. Faster implementations could make machine-learned heuristics with nonzero thresholds even more effective. Finally, can the machine-learning-based approach be adapted to other problems in the phylogenetic networks research field? ## Appendix A Time Complexity **Lemma 7**.: _Updating features 1-5 for a set \(\mathcal{T}\) of \(|\mathcal{T}|\) trees of total size \(||\mathcal{T}||\) over a set of taxa \(X\) requires \(\mathcal{O}(|\mathcal{T}|(||\mathcal{T}||+|X|^{2}))\) total time and \(\mathcal{O}(||\mathcal{T}||)\) space._ Proof.: Let \(F^{i}_{(x,y)}\) denote the current value of the \(i\)-th feature for a cherry \((x,y)\). 
When reducing a cherry \((x,y)\) in a tree \(T\) (thus deleting \(x\) and \(p(x)=p(y)\) and then adding a direct edge from \(p(p(y))\) to \(y\)), we check whether the other child of \(p(p(y))\) is a leaf \(z\) or not. If not, no new cherry is created in \(T\), thus the features 1-4 remain unaffected for all the cherries of \(\mathcal{T}\). Otherwise, \((z,y)\) and \((y,z)\) are new cherries of \(T\) and we can distinguish two cases. 1. \((z,y)\) and \((y,z)\) are already cherries of \(\mathcal{T}\). Then, \(F^{1}_{(y,z)}\) and \(F^{1}_{(z,y)}\) are increased by \(\frac{1}{|\mathcal{T}|}\); \(F^{4}_{(y,z)}\) and \(F^{4}_{(z,y)}\) are increased by \(\frac{1}{|\mathcal{T}^{y,z}|}\), where \(|\mathcal{T}^{y,z}|\) is the number of trees that contain both \(y\) and \(z\) and is equal to \(|\mathcal{T}|F^{5}_{(y,z)}\). To update features 2 and 3 we use two auxiliary data structures \(\texttt{new\_cherries}_{(y,z)}\) and \(\texttt{new\_cherries}_{(z,y)}\) to collect the distinct cherries that would originate after picking \((y,z)\) and \((z,y)\) in each tree, respectively. These structures must allow efficient insertions, membership queries, and iteration over the elements2, and can be deleted before picking the next cherry in \(\mathcal{T}\). If the other child of \(p(p(z))\) is a leaf \(w\), we add \((z,w)\) and \((w,z)\) to \(\texttt{new\_cherries}_{(y,z)}\) and \((y,w)\) and \((w,y)\) to \(\texttt{new\_cherries}_{(z,y)}\) (unless they are already present). Footnote 2: For example, hashtables paired with lists. 2. \((z,y)\) and \((y,z)\) are new cherries of \(\mathcal{T}\). Then we insert them into \(\texttt{cherryfeatures}\). We initially set \(F^{1}_{(y,z)}=F^{1}_{(z,y)}=\frac{1}{|\mathcal{T}|}\), and for features 2-3 we create the same data structures as the previous case. To compute \(F^{5}_{(y,z)}=F^{5}_{(z,y)}\) we first compute \(|\mathcal{T}^{y,z}|\) by checking whether \(y\) and \(z\) are both leaves of \(T\) for each \(T\in\mathcal{T}\). Then we set \(F^{5}_{(y,z)}=F^{5}_{(z,y)}=\frac{|\mathcal{T}^{y,z}|}{|\mathcal{T}|}\) and \(F^{4}_{(y,z)}=F^{4}_{(z,y)}=\frac{1}{|\mathcal{T}^{y,z}|}\). Once we have reduced \((x,y)\) in all trees, we count the elements of each of the auxiliary data structures \(\mathsf{new\_cherries}\) and update features 2-3 of the corresponding cherries accordingly. Since picking a cherry can create up to two new cherries in each tree, and for each new cherry we add up to two elements to an auxiliary data structure, this step requires \(\mathcal{O}(|\mathcal{T}|)\) time for each iteration. Feature 5 must be updated for all the cherries corresponding to the unordered pairs \(\{x,w\}\) with \(w\neq y\). To do so, when we reduce \((x,y)\) in a tree \(T\) we go over its leaves: for each leaf \(w\neq y\) we decrease \(F^{5}_{(x,w)}\) and \(F^{5}_{(w,x)}\) by \(\frac{1}{|\mathcal{T}|}\) (if \((x,w)\) and \((w,x)\) are currently cherries of \(\mathcal{T}\)). This requires \(\mathcal{O}(|X|^{2})\) total time per tree over all the iterations, because we scan the leaves of a tree only when we reduce a cherry in that tree. Computing feature 5 when new cherries of \(\mathcal{T}\) are created (case 2) requires constant time per tree per cherry. The total number of cherries created in \(\mathcal{T}\) over all the iterations cannot exceed \(2||\mathcal{T}||\), thus the total time required to update feature 5 is \(\mathcal{O}(|\mathcal{T}|(||\mathcal{T}||+|X|^{2}))\). We arrived at the following result. 
**Lemma 8**.: _The time complexity of ML and TrivialML is \(\mathcal{O}(||\mathcal{T}||^{2})\)._ Proof.: Recall that during the initialization phase, we store the depth of each node, both topological and with respect to the branch lengths, and we preprocess each tree to allow constant-time LCA queries. Note that reducing cherries in the trees does not affect the height of the nodes nor their ancestry relations, thus it suffices to preprocess the tree set only once at the beginning of the algorithm. When we reduce a cherry \((x,y)\) in a tree \(T\), this may affect the depth of \(T\) as a consequence of the internal node \(p(x)\) being deleted. We thus visit \(T\) to update its depth (both topological and with the branch lengths), and after updating the depth of all trees, we update the maximum value over the whole set \(\mathcal{T}\) accordingly. In order to describe how to update the features \(6_{d,t}-12_{d,t}\) we denote by \(\mathsf{old\_depth}^{t}(T)\) the topological depth of \(T\) before reducing \((x,y)\), \(\mathsf{new\_depth}^{t}(T)\) its depth after reducing \((x,y)\), and use analogous notation for the distances \(\mathsf{old\_dist}^{t}\) and \(\mathsf{new\_dist}^{t}\) between two nodes of a tree and for the depth, the max depth, and distances with the branch lengths. Whenever the value of the maximum topological depth changes, we update the value of feature \(6_{t}\) for all the current cherries \((z,w)\) as \(F^{6_{t}}_{(z,w)}=\frac{F^{6_{t}}_{(z,w)}\cdot\mathsf{old\_max\_depth}^{t}}{ \mathsf{new\_max\_depth}^{t}}\). Since the maximum topological depth can change \(\mathcal{O}(|X|)\) times over all the iterations, and the total number of cherries at any moment is \(\mathcal{O}(|\mathcal{T}||X|)\), these updates require \(\mathcal{O}(|\mathcal{T}||X|^{2})\) total time. We do the same for feature \(6_{d}\), but since the maximum branch-length depth can change once per iteration in the worst case, this requires \(\mathcal{O}(||\mathcal{T}||^{2})\) time overall. Features \(8_{d,t}-12_{d,t}\) must be then updated to remove the contribution of \(T\) for the cherries \((x,w)\) and \((w,x)\) for each leaf \(w\neq x\neq y\) of \(T\), because \(x\) and \(w\) will no longer appear together in \(T\). These updates require \(\mathcal{O}(1)\) time per leaf and can be done as follows. We set \[F_{(x,w)}^{8_{t}}=\frac{F_{(x,w)}^{8_{t}}\cdot|\mathcal{T}^{x,w}|-\frac{\text{ old dist}^{t}(x,w)}{\text{old depth}^{t}(T)}}{|\mathcal{T}^{x,w}|-1} \tag{1}\] and use analogous formulas to update \(F_{(x,w)}^{8_{d}}\) and features \(9_{d,t}-12_{d,t}\) for \((x,w)\) and \((w,x)\). We finally need to further update all the features \(6_{d,t}-12_{d,t}\) for all the cherries of a tree \(T\) in which \((x,y)\) has been reduced and whose depth has changed, including the newly created ones. This can be done in \(\mathcal{O}(1)\) time per cherry per tree with opportune formulas of the form of Equation 1. We have obtained the stated bound. ## Appendix B Random Forest Models \begin{table} \end{table} Table 3: Feature importances of random forest trained on the biggest dataset (\(M=1000\) and \(\max L=100\)) based on normal (a) and LGT (b) network data. Higher importance indicates that a feature has more effect on the trained model. The values sum up to one. The descriptions of the features are given in Table 1. \begin{table} \end{table} Table 4: Trained random forest models on different datasets for different combinations of \(\max L\) (maximum number of leaves per network) and \(M\) (number of networks). 
Each row in the table represents one model. For each model, the testing accuracy is given under “Accuracy”, and the total number of data points retrieved from all \(M\) networks is given under “Num. data”. Each dataset is split for training and testing \((90\%-10\%)\). The training duration for the random forest is given in column “Training” and the time needed to generate the training data is given in column “Data gen.”, in hours per core (we used 16 cores in total). ## References ## Appendix C Heuristic Performance of ML Models Figure 19: Results for \(\mathsf{ML}\) on LGT instances for different training datasets, similar to the description of Fig. 18, with \(L\in\{20,50,100\}\), \(R\in\{10,20,30\}\) and \(|\mathcal{T}|\in\{20,50,100\}\). Figure 20: Results for ML on ZODS instances for different training datasets, similar to the description of Fig. 18, with \(L\in\{20,50,100\}\), \(R\in\{10,20,30\}\) and \(|\mathcal{T}|\in\{20,50,100\}\).
2310.20371
Optimal work fluctuations for thermally isolated systems in weak processes
The fluctuation-dissipation relation for the classical definition of work is extended to thermally isolated systems, in classical and quantum realms. From this, the optimal work variance is calculated, showing it achieves its minimum possible value, independent of the rate of the process, in a so-called quasistatic variance, related to the difference between the quasistatic work and the difference of Helmholtz's free energy of the system. The result is corroborated by the example of the classical driven harmonic oscillator, whose probability density function of the work distribution is non-Gaussian and constant for different rates. The optimal variance is calculated for the quantum Ising chain as well, showing its finiteness if the linear response validity criterion is complied with. A stronger definition of the arbitrary constant for the relaxation function of thermally isolated systems is obtained in the course of this work.
Pierre Nazé
2023-10-31T11:30:45Z
http://arxiv.org/abs/2310.20371v1
# Optimal work fluctuations for thermally isolated systems in weak processes ###### Abstract The fluctuation-dissipation relation for the classical definition of work is extended to thermally isolated systems, in classical and quantum realms. From this, the optimal work variance is calculated, showing it achieves its minimum possible value, independent of the rate of the process, in a so-called quasistatic variance, related to the difference between the quasistatic work and the difference of Helmholtz's free energy of the system. The result is corroborated by the example of the classical driven harmonic oscillator, whose probability density function of the work distribution is non-Gaussian and constant for different rates. The optimal variance is calculated for the quantum Ising chain as well, showing its finiteness if the linear response validity criterion is complied with. A stronger definition of the arbitrary constant for the relaxation function of thermally isolated systems is obtained in the course of this work. ## I Introduction Fluctuation-dissipation relations are important identities able to furnish information about optimal control of dissipated averages and fluctuations. In the context of classical and isothermal processes, such a relation has been shown for quadratic potentials [1], slowly-varying [2] and weak [3] processes. Using the quantum definition of the work, its breakdown has been shown for slowly-varying and weak processes [4; 5]. The aim of this work is to obtain the optimal work fluctuations of classical and quantum thermally isolated systems using the classical definition of work. This is done by means of an extension of the fluctuation-dissipation relation. By contrast with the isothermal case, such a relation presents a breakdown, presenting an extra quasistatic variance in the equality, which is independent of the rate of the process, and related to the difference between the quasistatic work and the difference of Helmholtz's free energy of the system. When the protocol is optimal, the optimal variance achieves its minimal value in this quasistatic variance. To exemplify this, the driven harmonic oscillator is presented. In particular, its work probability distribution is non-Gaussian and independent of the rate. The optimal variance for the quantum Ising chain is calculated as well, showing its finiteness if the linear response validity criterion is complied with. ## II Weak processes I start defining notations and developing the main concepts to be used in this work. This section is based on the technical introductory section of Ref. [3]. Consider a classical system with a Hamiltonian \(\mathcal{H}(\mathbf{z}(\mathbf{z_{0}},t),\lambda(t))\), where \(\mathbf{z}(\mathbf{z_{0}},t)\) is a point in the phase space \(\Gamma\) evolved from the initial point \(\mathbf{z_{0}}\) until time \(t\), with \(\lambda(t)\) being a time-dependent external parameter. Initially, the system is at equilibrium with a heat bath at inverse temperature \(\beta\equiv\left(k_{B}T\right)^{-1}\), where \(k_{B}\) is Boltzmann's constant. The heat bath is then removed from the system, and during a switching time \(\tau\), the external parameter is changed from \(\lambda_{0}\) to \(\lambda_{0}+\delta\lambda\). 
The average work performed on the system during this interval of time is \[\overline{W}(\tau)\equiv\int_{0}^{\tau}\left\langle\partial_{\lambda}\mathcal{H}(t)\right\rangle_{0}\dot{\lambda}(t)dt, \tag{1}\] where \(\partial_{\lambda}\) is the partial derivative with respect to \(\lambda\) and the superscripted dot the total time derivative. The generalized force \(\left\langle\partial_{\lambda}\mathcal{H}\right\rangle_{0}\) is calculated using the averaging \(\left\langle\cdot\right\rangle_{0}\) over the initial canonical ensemble. The external parameter can be expressed as \[\lambda(t)=\lambda_{0}+g(t)\delta\lambda, \tag{2}\] where, to satisfy the initial conditions of the external parameter, the protocol \(g(t)\) must satisfy the boundary conditions \(g(0)=0\), \(g(\tau)=1\). Linear-response theory aims to express average quantities up to first order in some perturbation parameter, considering how this perturbation affects the observable to be averaged and the probabilistic distribution [6]. In our case, we consider that the parameter does not considerably change during the process, \(|g(t)\delta\lambda/\lambda_{0}|\ll 1\), for all \(t\in[0,\tau]\) and \(\lambda_{0}\neq 0\). The generalized force can be approximated to first order as [7] \[\begin{split}\left\langle\partial_{\lambda}\mathcal{H}(t)\right\rangle_{0}=&\left\langle\partial_{\lambda}\mathcal{H}\right\rangle_{0}-\widetilde{\Psi}_{0}\lambda(t)\\ &+\int_{0}^{t}\Psi_{0}(t-t^{\prime})\dot{\lambda}(t^{\prime})dt^{\prime},\end{split} \tag{3}\] where \[\Psi_{0}(t)=\beta\left\langle\partial_{\lambda}\mathcal{H}(0)\partial_{\lambda}\mathcal{H}(t)\right\rangle_{0}-\mathcal{C} \tag{4}\] is the relaxation function and \(\widetilde{\Psi}_{0}\equiv\Psi_{0}(0)-\left\langle\partial_{\lambda\lambda}^{2}\mathcal{H}\right\rangle_{0}\) [6]. The constant \(\mathcal{C}\) is arbitrary, and its chosen value I am going to discuss in the next section. Combining Eqs. (1) and (3), the average work performed at the linear response of the generalized force is \[\begin{split}\overline{W}(\tau)=&\,\delta\lambda\,\langle\partial_{\lambda}\mathcal{H}\rangle_{0}-\frac{\delta\lambda^{2}}{2}\widetilde{\Psi}_{0}\\ &+\frac{1}{2}\int_{0}^{\tau}\int_{0}^{\tau}\Psi_{0}(t-t^{\prime})\dot{\lambda}(t^{\prime})\dot{\lambda}(t)dt^{\prime}dt,\end{split} \tag{5}\] where the symmetric property of the relaxation function was used [6]. Such an equation holds for finite-time and weak processes. Our treatment throughout this work will be classical, but the same reasoning with similar arguments leads to the same average work for quantum systems, where the observables become operators, and averages, traces. ## III Constant \(\mathcal{C}\) In previous works, I have observed that the double integral in Eq. (5) depends on the path, which would indicate that the other terms are the contribution of the quasistatic work \(W_{\text{qs}}\). However, the constant \(\mathcal{C}\) must be chosen properly to produce such a result. For isothermal processes, it is chosen such that the relaxation function decorrelates for long times \[\lim_{t\rightarrow\infty}\Psi_{0}(t)=0, \tag{6}\] which is nothing more than a feature of the Second Law of Thermodynamics. However, for thermally isolated systems, such an operation does not make any sense, because the relaxation function does not decorrelate. 
One alternative is the definition proposed by Kubo [6] where \(\mathcal{C}\) is calculated such that \[\lim_{s\to 0^{+}}s\widetilde{\Psi}_{0}(s)=0, \tag{7}\] where \(\widetilde{\cdot}\) is the Laplace transform. This definition, in my opinion, although the success verified _a posteriori_[8; 9; 10; 11], lacks an _a priori_ physical motivation. In what follows I propose an alternative which will furnish a value to \(\mathcal{C}\) agreeing with the Second Law of Thermodynamics. ## IV Cumulant series Jarzynski's equality is well recognized as a generalization of the Second Law of Thermodynamics [1]. I am going to propose a definition of \(\mathcal{C}\) which will agree with such a relation. According to it, it holds the following cumulant series expansion for the irreversible work \[\beta W^{\text{irr}}=\beta(\overline{W}-\Delta F)=\sum_{n=2}^{\infty}\frac{(- \beta)^{n}}{n!}\kappa_{n}, \tag{8}\] where \(\overline{W}\) is the average work for a thermally isolated system, \(\kappa_{n}\) the cumulants for the work probability distribution and \(\Delta F\) is the difference of Helmholtz's free energy. Writing in terms of the excess work, one has \[\beta W^{\text{ex}} =\beta(\overline{W}-W_{\text{qs}}) \tag{9}\] \[=\sum_{n=2}^{\infty}\frac{(-\beta)^{n}}{n!}\kappa_{n}+\beta( \Delta F-W_{\text{qs}}) \tag{10}\] In particular, using linear response theory, one has \[\beta W_{2}^{\text{ex}}=\frac{\beta^{2}}{2}\kappa_{2}+\beta(\Delta F-W_{\text {qs}})_{2}, \tag{11}\] where the terms were calculated until the second order in the parameter perturbation. Using \[\beta W_{2}^{\text{ex}}-\frac{\beta^{2}}{2}\kappa_{2}=-\beta\mathcal{C}-\frac {\beta^{2}}{2}\delta\lambda^{2}\langle\partial_{\lambda}\mathcal{H}(0)\rangle _{0}^{2} \tag{12}\] one has \[\beta\mathcal{C}=-\frac{\beta^{2}}{2}\delta\lambda^{2}\langle\partial_{\lambda }\mathcal{H}(0)\rangle_{0}^{2}+\beta(W_{\text{qs}}-\Delta F)_{2}. \tag{13}\] However, by the definitions of \(\Delta F\) and \(W_{\text{qs}}\), one has \[\beta(W_{\text{qs}}-\Delta F)_{2}=\frac{\beta^{2}}{2}\delta\lambda^{2} \langle\partial_{\lambda}\mathcal{H}(0)\rangle_{0}^{2}. \tag{14}\] Therefore \[\mathcal{C}=0, \tag{15}\] which is a stronger and physically more meaningful definition for such a constant than that proposed by Kubo. Such a result is corroborated by different works, for classical and quantum systems [8; 9; 10; 11]. Using the classical definition of work, the cumulant series can be extended using the quantum treatment. ## V Fluctuation-dissipation relation From the approximation of the cumulant series for linear response theory deduced in the previous section, observe that it holds the following fluctuation-dissipation relation \[\beta W_{2}^{\text{ex}}=\frac{\beta^{2}}{2}\sigma_{W_{2}}^{2}-\frac{\beta^{2} \delta\lambda^{2}}{2}\langle\partial_{\lambda}\mathcal{H}(0)\rangle_{0}^{2}, \tag{16}\] where \(\sigma_{W_{2}}^{2}\) is the variance of the work probability distribution calculated until the second-order in the parameter perturbation. That relation implies that the excess work expends less energy than its irreversible work counterpart. The breakdown in the relation when compared to isothermal cases occurs due to the difference between the quasistatic work and the difference of Helmholtz's free energy. To exemplify such a result, consider a linear-driven harmonic oscillator, whose Hamiltonian is \[\mathcal{H}(\lambda(t))=\frac{p^{2}}{2}+\lambda(t)\frac{q^{2}}{2}, \tag{17}\] where \(\lambda(t)=\lambda_{0}+\delta\lambda(t/\tau)\). 
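For concreteness, the work statistics of this oscillator can be estimated numerically by sampling canonical initial conditions and integrating Hamilton's equations. The following minimal Python sketch is only an illustration with assumed parameters (it is not the simulation used for the figures); the work of each trajectory is taken as the energy difference between final and initial times, which is the classical definition of work for a thermally isolated system.

```python
import numpy as np

# Illustrative parameters (assumptions); the paper uses
# delta_lambda/lambda_0 = 0.01 and 1e5 initial conditions.
beta, lam0, dlam, tau = 1.0, 1.0, 0.01, 10.0
n_traj, n_steps = 10_000, 2_000
dt = tau / n_steps

def lam(t):
    # linear protocol lambda(t) = lambda_0 + delta_lambda * t / tau
    return lam0 + dlam * t / tau

rng = np.random.default_rng(0)
# canonical initial conditions for H = p^2/2 + lam0*q^2/2 at inverse temperature beta
q = rng.normal(0.0, 1.0 / np.sqrt(beta * lam0), n_traj)
p = rng.normal(0.0, 1.0 / np.sqrt(beta), n_traj)
E0 = 0.5 * p**2 + 0.5 * lam0 * q**2

t = 0.0
for _ in range(n_steps):
    # velocity-Verlet step with the time-dependent force F = -lambda(t) * q
    a = -lam(t) * q
    q = q + p * dt + 0.5 * a * dt**2
    p = p + 0.5 * (a - lam(t + dt) * q) * dt
    t += dt

W = 0.5 * p**2 + 0.5 * lam(tau) * q**2 - E0   # work along each trajectory
print("mean work:", W.mean(), " work variance:", W.var())
```

Sample means and variances of \(W\) obtained in this way can then be compared with the linear-response expressions above, e.g. Eqs. (5), (16) and (18).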
Its solution is known for the full dynamics, from where the average work and work variance can be calculated. Also, the quasistatic work is known [8] \[W_{\rm qs}=\frac{1}{\beta}\left(\sqrt{\frac{\lambda_{0}+\delta\lambda}{\lambda_{0}}}-1\right). \tag{18}\] Considering \(\delta\lambda/\lambda_{0}=0.01\), Fig. 1 depicts the fluctuation-dissipation relation expressed in Eq. (16). Here, \(\delta\lambda^{2}\langle\partial_{\lambda}\mathcal{H}(0)\rangle_{0}^{2}=2.5\times 10^{-5}\). ## VI Optimal work fluctuations It has been shown that for thermally isolated systems performing weak processes the optimal excess work is null [12]. Therefore, by the fluctuation-dissipation relation, the optimal variance of the work for thermally isolated systems achieves its minimum possible value, given by \[\sigma_{W_{2}}^{2*}=\delta\lambda^{2}\langle\partial_{\lambda}\mathcal{H}(0)\rangle_{0}^{2}, \tag{19}\] which is independent of the rate of the process. It is indeed an intrinsic characteristic of the system. One may consider it as a quasistatic variance for the thermally isolated system. This unexpected result shows that although one achieves by the optimal protocol the quasistatic work for arbitrary rates, there is always an intrinsic error associated. In particular, it is expected that \(\sigma_{W_{2}}^{2*}(\tau)\propto 1/\beta^{2}\), because of the average on the canonical ensemble. In this situation, if the system starts at \(T\approx 0\), the variance diverges. To exemplify it, consider again the driven harmonic oscillator, but driven with the optimal protocol for linear response, given by [8; 12] \[g^{*}(t)=\frac{t}{\tau}+\frac{\delta(t)-\delta(\tau-t)}{4\lambda_{0}\tau}. \tag{20}\] The optimal work variance is exhibited in Fig. 2 for different rates. In this particular case, \(\beta^{2}\delta\lambda^{2}\langle\partial_{\lambda}\mathcal{H}(0)\rangle_{0}^{2}=2.5\times 10^{-5}\). Figure 3 depicts the optimal probability distribution function of the work, which is also non-Gaussian [3] and constant for different rates. I used in the computational simulation \(10^{5}\) initial conditions, and \(\delta\lambda/\lambda_{0}=0.01\). Figure 3: Optimal work probability density function for the driven harmonic oscillator. Here, \(\delta\lambda/\lambda_{0}=0.01\) for \(10^{5}\) initial conditions. The histogram is non-Gaussian (\(\mu_{3}>0\)) and does not change with different rates. ## VII Quantum Ising chain One important problem to deal with nowadays is the performance of quantum annealing, aiming at applications in quantum computing [13]. In particular, a question not so explored is the work fluctuations present in driving processes. Consider then the quantum Ising model, whose Hamiltonian operator is \[\mathcal{H}=-J\sum_{i=1}^{N}\sigma_{i}^{x}\sigma_{i+1}^{x}-\Gamma\sum_{i=1}^{N}\sigma_{i}^{z}, \tag{21}\] where each one of the \(N\) spins has a vector \(\vec{\sigma}_{i}:=\sigma_{i}^{x}\mathbf{x}+\sigma_{i}^{y}\mathbf{y}+\sigma_{i}^{z}\mathbf{z}\) composed of the Pauli matrices. The parameter \(J\) is the coupling energy and \(\Gamma\) is the transverse magnetic field. Also, the system is subjected to periodic boundary conditions and has an even number of spins. In Ref. [12] I have found the optimal protocol that produces a null excess work for such a system. Under those circumstances, the work fluctuations will be given by the quasistatic variance of the system. 
In particular, this quantity is \[\sigma_{W_{2}}^{2*}=2\delta\Gamma^{2}\sum_{n,m=1}^{N/2}\tanh\beta\epsilon_{n}\tanh\beta\epsilon_{m}, \tag{22}\] where \[\epsilon_{n}=2\sqrt{J^{2}+\Gamma^{2}-2\Gamma J\cos{(\pi(2n-1)/N)}}. \tag{23}\] In particular, for the purpose of assessing the performance of quantum annealing, it is interesting to observe how the quasistatic variance behaves for a system that starts at \(T=0\). In this case, one has \[\sigma_{W_{2}}^{2*}\propto\delta\Gamma^{2}N^{2}(N+1)^{2}, \tag{24}\] which indicates that the quasistatic variance diverges if the system is in the thermodynamic limit, where \(N\gg 1\). However, in this situation, linear response only works for very small perturbations [11]. Therefore, choosing \(\delta\Gamma\propto N^{-2}\) will compensate for the divergence produced by the thermodynamic limit, generating a finite quasistatic variance for the system. Knowing how the quasistatic variance of a system behaves could be an additional criterion to evaluate the validity of linear response in quantum phase transition situations, beyond the criterion proposed in Ref. [11]. ## VIII Final remarks In this work, in order to find the optimal work fluctuation for thermally isolated systems performing weak adiabatic processes, in classical and quantum realms, for the classical definition of work, the fluctuation-dissipation relation was extended. The equality presents a breakdown in comparison to the isothermal case, where an extra quasistatic variance appears, related to the difference between the quasistatic work and the difference of Helmholtz's free energy of the system. From this, the optimal work variance was calculated, showing that it achieves its minimum value, independent of the rate of the process. The result was corroborated by the example of the driven harmonic oscillator. The optimal variance for the quantum Ising chain was calculated as well, showing its finiteness if the linear response validity criterion is complied with. The arbitrary constant for the relaxation function of thermally isolated systems was shown to be equal to zero to agree with the Second Law of Thermodynamics.
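As a small numerical illustration of Eqs. (22) and (23) (a sketch with assumed parameter values, not part of the original analysis), the quasistatic variance can be evaluated directly to inspect how it behaves when \(\delta\Gamma\) is scaled as \(N^{-2}\):

```python
import numpy as np

def quasistatic_variance(J, Gamma, beta, N, dGamma):
    # Eq. (23): single-mode energies of the transverse-field Ising chain
    n = np.arange(1, N // 2 + 1)
    eps = 2.0 * np.sqrt(J**2 + Gamma**2 - 2.0 * Gamma * J * np.cos(np.pi * (2 * n - 1) / N))
    # Eq. (22): the double sum factorizes as (sum_n tanh(beta * eps_n))^2
    return 2.0 * dGamma**2 * np.tanh(beta * eps).sum() ** 2

# assumed values: J = 1, Gamma = 0.5, low temperature (beta = 50)
for N in (8, 16, 32, 64, 128):
    print(N, quasistatic_variance(J=1.0, Gamma=0.5, beta=50.0, N=N, dGamma=0.01 / N**2))
```

With this scaling of \(\delta\Gamma\) the printed values stay bounded as \(N\) grows, in line with the discussion above.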
2309.13800
Enumerating All Maximal Clique-Partitions of an Undirected Graph
We address the problem of enumerating all maximal clique-partitions of an undirected graph and present an algorithm based on the observation that every maximal clique-partition can be produced from the maximal clique-cover of the graph by assigning the vertices shared among maximal cliques, to belong to only one clique. This simple algorithm has the following drawbacks: (1) the search space is very large; (2) it finds some clique-partitions which are not maximal; and (3) some clique-partitions are found more than once. We propose two criteria to avoid these drawbacks. The outcome is an algorithm that explores a much smaller search space and guarantees that every maximal clique-partition is computed only once. The algorithm can be used in problems such as anti-unification with proximity relations or in resource allocation tasks when one looks for several alternative ways to allocate resources.
Mircea Marin, Temur Kutsia, Cleo Pau, Mikheil Rukhaia
2023-09-25T01:14:49Z
http://arxiv.org/abs/2309.13800v1
# Enumerating All Maximal Clique-Partitions of an Undirected Graph ###### Abstract We address the problem of enumerating all maximal clique-partitions of an undirected graph and present an algorithm based on the observation that every maximal clique-partition can be produced from the maximal clique-cover of the graph by assigning the vertices shared among maximal cliques, to belong to only one clique. This simple algorithm has the following drawbacks: (1) the search space is very large; (2) it finds some clique-partitions which are not maximal; and (3) some clique-partitions are found more than once. We propose two criteria to avoid these drawbacks. The outcome is an algorithm that explores a much smaller search space and guarantees that every maximal clique-partition is computed only once. The algorithm can be used in problems such as anti-unification with proximity relations or in resource allocation tasks when one looks for several alternative ways to allocate resources. **Acknowledgments.** Partially supported by the Austrian Science Fund (FWF) under project P 35530 and by the Shota Rustaveli National Science Foundation of Georgia under project FR-21-16725. ## 1 Introduction In this paper, we are interested in computing all maximal clique-partitions in a graph. The original motivation comes from anti-unification with proximity relations. Anti-unification is a well-known technique in computational logic. It was introduced in [14, 15] and was quite intensively investigated in the last years, see, e.g. [2, 3, 12, 11, 7]. Given two first-order logic terms \(t_{1}\) and \(t_{2}\), it aims at computing a least general generalization of those terms. That means, one is looking for a term \(s\) from which \(t_{1}\) and \(t_{2}\) can be obtained by variable substitutions. Such an \(s\) is called a generalization of \(t_{1}\) and \(t_{2}\). Moreover, there should be no other generalization \(r\) of \(t_{1}\) and \(t_{2}\), which can be obtained from \(s\) by a substitution. For instance, if \(t_{1}\) and \(t_{2}\) are the ground terms \(f(a,a)\) and \(f(b,b)\), then anti-unification computes their least general generalization \(f(x,x)\). Replacing variable \(x\) by \(a\) (resp. by \(b\)) in it, one gets \(f(a,a)\) (resp. \(f(b,b)\)). Note that \(f(x,y)\) and \(x\) are also generalizations of \(f(a,a)\) and \(f(b,b)\), but they are not least general. Anti-unification has been successfully used in inductive reasoning, inductive logic programming, reasoning and programming by analogy, term set compression, software code clone detection, etc. In many applications, that can be also relevant for anti-unification, one has to deal with imprecise or vague information. In such circumstances, one tends to consider two objects the same, if they are "sufficiently close" to each other. However, such a proximity relation is not transitive. Nontransitivity has to be dealt with in a special way. Proximity relations (reflexive symmetric fuzzy binary relations) characterize the notion of 'being close' numerically. They become crisp once we fix the threshold from which on, the distance between the objects can be called 'close'. Symbolic constraint solving (for unification, matching, and anti-unification constraints) over proximity relations has been studied recently by various authors, e.g., [12, 13, 1, 8, 9]. The approaches can be characterized as class-based and block-based. 
Considering proximity relations as (weighted) undirected graphs, a proximity class of a vertex is its neighborhood (i.e., the set of vertices to which the current vertex is connected by an edge), while a proximity block is a clique. In the class-based approach to proximity constraint solving, two objects are considered proximal if one of them belongs to the proximity class of another. In the block-based approach, two objects are proximal if they belong to the same _unique_ maximal proximity block. The block-based approach is the one that is closely related to the subject of this paper. To compute a minimal complete set of generalizations of two first-order logic terms with this approach, one needs to consider all maximal clique-partitions of the graph induced by the proximity relation between constants and between function symbols. For instance, if \(a\) is close to both \(b\) and \(c\), but \(b\) and \(c\) are not close to each other, then \(f(a,a)\) and \(f(b,c)\) have two minimal common generalizations: \(f(a,x)\) and \(f(x,a)\). In this example, the proximity graph would be \((\{a,b,c\},\{(a,b),(a,c)\})\). It has two maximal clique partitions \(\{\{a,b\},\{c\}\}\) and \(\{\{a,c\},\{b\}\}\) that tell exactly which symbols should be considered the same. In the first case these are \(a\) and \(b\), leading to the generalization \(f(a,x)\), and in the second case they are \(a\) and \(c\), giving \(f(x,a)\). Also, in the block-based approach to approximate unification, one would need to maintain maximal clique-partitions of the proximity graph in order to detect that, e.g., \(f(x,x)\) and \(f(b,c)\) are not unifiable in the abovementioned proximity relation, see, e.g., [9]. Also, the resource allocation problem, when one looks for several alternative ways to allocate resources, can be an application area of the algorithm considered in this paper. Whereas the problem of computing all maximal cliques is well studied [5, 17, 16, 6], the problem of computing all maximal clique-partitions became of interest only recently. To the best of our knowledge, the only previous study of it is the one reported in 2022 by C. Pau in her PhD thesis [13, Sect. 3.3.2]. In this paper we provide a more in-depth analysis of the problem and propose another algorithm which performs better than the one described in [13]. For a given undirected graph \(G\), in order to compute all its maximal clique-partitions, we use a kind of top-down approach. First, we compute the maximal clique cover of \(G\), and the list \(S\) of all graph vertices which are shared among maximal cliques. By a systematic enumeration of all possibilities to assign each vertex in \(S\) to only one clique where it belongs, we obtain an algorithm that finds all clique-partitions of \(G\) in a tree-like search space starting from the maximal clique-cover of \(G\). Our algorithm is optimal in the following sense: (1) It computes only maximal clique-partitions, by avoiding computations below some nodes of the search space which yield non-maximal clique-partitions, and (2) Each maximal clique-partition is computed only once. Moreover, our algorithm computes the maximal clique-partitions incrementally, in the following sense: If one does not want to get all solutions, he/she can stop the algorithm after computing a certain number of solutions. As a result, the computation of maximal clique-partitions can be streamlined with other operations on them. The paper is structured as follows. 
In Section 2 we introduce some preliminary notions, the search space \(\mathbb{T}^{S}(G)\) for maximal clique-partitions of an undirected graph \(G\) and its main properties. The following two sections describe our main contributions: a criterion to avoid the computation of nonmaximal clique-partitions (Section 3), and a criterion to avoid the redundant computations of the same maximal clique-partition (Section 4). In Section 5 we indicate how to combine these two criteria and define an algorithm to enumerate all maximal clique-partitions of an undirected graph. An analysis of the runtime complexity of our algorithm is performed in Section 7. In the last section we draw some conclusions. ## 2 Preliminaries We consider undirected graphs \(G=(V,E)\) where \(V\) is the set of nodes and \(E\) is the set of edges. A **clique** in \(G\) is a nonempty subset of \(V\) such that every two vertices of it are incident. A clique \(C\) is **maximal** if it is not a proper subset of another clique. A **cover** of \(G\) is a finite family \(\{V_{1},\ldots,V_{m}\}\) of nonempty sets of nodes such that \(\bigcup_{i=1}^{m}V_{i}=V\). A **clique-cover** of \(G\) is a cover \(\mathcal{P}\) of \(V\) such that every \(C\in\mathcal{P}\) is a clique in \(G\). A partition into cliques, or shortly **clique-partition** of \(G\), is a partition \(\mathcal{P}\) of \(V\) such that every \(C\in\mathcal{P}\) is a clique in \(G\). \(\mathcal{P}\) is a **maximal clique-partition** of \(G\) if \(\mathcal{P}\) is a clique-partition of \(G\) and \(\mathcal{P}\) does not contain two different cliques \(C,C^{\prime}\) such that \(C\cup C^{\prime}\) is a clique. A graph may have several maximal clique-partitions. In the literature, a problem that was studied intensively is to compute a maximal clique-partition with the smallest number of cliques. Tseng's algorithm [18], introduced to solve this problem, was motivated by its application in the design of processors. Later, Bhasker and Samad [4] proposed two other algorithms. They also derived the upper bound on the number of cliques in a partition and showed that there exists a partition containing a maximal clique of the graph. A problem closely related to clique-partition is the vertex coloring problem, which requires to color the vertices of a graph in such a way that two adjacent vertices have different colors. In fact, a clique-partitioning problem of a graph is equivalent to the coloring problem of its complement graph. Both problems are NP-complete [10]. We write \(\mathbb{N}\) for the set of natural numbers starting from \(1\), and \(\mathbb{N}^{*}\) for the monoid of finite sequences of numbers from \(\mathbb{N}\) with the operation of sequence concatenation and neutral element \(\varepsilon\). If \(n\in\mathbb{N}\) we assume that \([n]\) is the set of natural numbers \(k\) such that \(1\leq k\leq n\). From now on we assume that \(G=(V,E)\) is an undirected graph for which we know: 1. An enumeration \(\mathit{cfg}_{0}:=[\overline{C}_{1},\ldots,\overline{C}_{m}]\) of all maximal cliques of \(G\). We denote the maximal cliques of \(G\) with identifiers with an overbar. The value \(m\) indicates the number of maximal cliques of graph \(G\). 2. For every vertex \(v\in V\) and set of nodes \(C\subseteq V\) we define: * \(\mathit{cliques}(v):=\{i\mid v\in\overline{C}_{i}\}\), and \(d(v):=|\mathit{cliques}(v)|\), * \(\mathit{cliques}(C):=\bigcap_{v\in C}\mathit{cliques}(v)\) for every set of nodes \(C\subseteq V\). 3. 
An enumeration \(S:=[v_{1},\ldots,v_{s}]\) of all nodes \(v\in V\) with \(d(v)>1\). 4. \(Rgd:=\{k\in[m]\mid\text{there is a vertex $v\in V$ with $\mathit{cliques}(v)=\{k\}$}\}\). \(\mathit{cliques}(v)\) is the set of indices of maximal cliques where node \(v\) belongs, and \(\mathit{cliques}(C)\) is the set of indices of maximal cliques which contain \(C\). Thus, a nonempty set of nodes \(C\) is not a clique iff \(\mathit{cliques}(C)=\emptyset\). The nodes \(v\in S\) are those that belong to more than one maximal clique. To transform the maximal clique cover \(\{\overline{C}_{1},\ldots,\overline{C}_{m}\}\) into a clique partition, we must assign every node \(v\in S\) to belong to only one clique. To formalize this process, we introduce a couple of auxiliary notions. A **configuration** is an enumeration \([C_{1},\ldots,C_{m}]\) of empty sets or cliques of \(G\), such that \(\bigcup_{k=1}^{m}C_{k}=V\) and \(C_{i}\subseteq\overline{C}_{i}\) for all \(1\leq i\leq m\). In particular, \(\mathit{cfg}_{0}=[\overline{C}_{1},\ldots,\overline{C}_{m}]\) is a configuration. Every configuration \(\mathit{cfg}=[C_{1},\ldots,C_{m}]\) represents a clique cover denoted by \[\mathit{repr}(\mathit{cfg}):=\{C_{i}\mid 1\leq i\leq m\text{ and }C_{i}\neq \emptyset\}.\] We distinguish two sets of configurations of interest: the set \(\Pi_{cp}(G)\) of configurations \(\mathit{cfg}\) for which \(\mathit{repr}(\mathit{cfg})\) is a clique partition of \(G\); and the set \(\Pi_{\mathit{mcp}}(G)\) is the set of configurations \(\mathit{cfg}\) for which \(\mathit{repr}(\mathit{cfg})\) is a maximal clique partition of \(G\). **Lemma 1**.: _Every maximal clique-partition of \(G\) is represented by a configuration in \(\Pi_{\mathit{mcp}}(G)\)._ Proof.: Let \(\mathcal{P}\) be a maximal clique-partition of \(G\). Then for every \(C\in\mathcal{P}\) there exists a maximal clique \(\varphi(C)\) such that \(C\subseteq\varphi(C)\). If \(C_{1},C_{2}\in\mathcal{P}\) and \(\varphi(C_{1})=\varphi(C_{2})\) then \(C_{1}\cup C_{2}\subseteq\varphi(C_{1})\) is a clique, and the maximality of \(\mathcal{P}\) implies \(C_{1}=C_{2}\). Thus \(\varphi\) is injective and we can define \(\mathit{cfg}=[C_{1},\ldots,C_{m}]\) by \[C_{k}:=\left\{\begin{array}{ll}C&\text{if $C\in\mathcal{P}$ and $\varphi(C)= \overline{C}_{k}$},\\ \emptyset&\text{otherwise}\end{array}\right.\] for all \(k\in[m]\). Then \(\mathit{repr}(\mathit{cfg})=\mathcal{P}\) because \(\varphi\) is injective. Thus \(\mathit{cfg}\in\Pi_{\mathit{mcp}}(G)\). For every node \(v\in V\) we define the relation \([C_{1},\ldots,C_{m}]\rightarrow_{(v,i)}[C_{1}^{\prime},\ldots,C_{m}^{\prime}]\) to hold if \(v\in C_{i}\) for some \(1\leq i\leq m\), \(C_{i}^{\prime}=C_{i}\) and \(C_{j}^{\prime}=C_{j}-\{v\}\) for all \(j\in[m]-\{i\}\). This relation corresponds to the decision to assign node \(v_{i}\) to the \(i\)-th clique of the configuration. ### The search space \(\mathbb{T}^{S}(G)\) It is easy to see that, if \(S=[v_{1},v_{2},\ldots,v_{s}]\) and \(i_{1}\in\mathit{cliques}(v_{1}),i_{2}\in\mathit{cliques}(v_{2})\),..., \(i_{s}\in\mathit{cliques}(v_{s})\) then \(\mathit{cfg}_{0}=[\overline{C}_{0},\ldots,\overline{C}_{m}]\rightarrow_{(v_{ 1},i_{1})}\mathit{cfg}_{1}\rightarrow_{(v_{2},i_{2})}\ldots\rightarrow_{(v_{ s},i_{s})}\mathit{cfg}_{s}\) is a sequence of decision steps that ends with a configuration whose representation is a partition of \(G\). 
We let \(\mathbb{T}^{S}(G)\) be the tree with root \(\mathit{cfg}_{0}\) and edges \(\mathit{cfg}\rightarrow_{(v,i)}\mathit{cfg}^{\prime}\) which correspond to the decision to keep the shared node \(v\in S\) in the \(i\)-th component of the configuration. We will use \(\mathbb{T}^{S}(G)\) as the search space for maximal clique-partitions, and let \(\mathit{Leaf}(\mathbb{T}^{S}(G))\) be the set of leaf configurations in \(\mathbb{T}^{S}(G)\). **Example 1**.: _The simple graph_ _has four maximal cliques: \(\overline{C}_{1}=\{x_{1},x_{2},x_{3}\}\), \(\overline{C}_{2}=\{x_{1},x_{4}\}\), \(\overline{C}_{3}=\{x_{4},x_{5}\}\), \(\overline{C}_{4}=\{x_{4},x_{6}\}\) and two nodes shared among maximal cliques: \(S=[v_{1},v_{2}]\) where \(v_{1}=x_{4}\), \(v_{2}=x_{1}\). In this example we have \(\mathit{cliques}(x_{1})=\{1,2\}\), \(\mathit{cliques}(x_{4})=\{2,3,4\}\). The exhaustive search space for maximal clique partitions is the tree \(\mathbb{T}^{S}(G)\) depicted below:_ _where the leaf configurations are_ \(\mathit{cfg}_{1}=[\overline{C}_{1},\{x_{4}\},\{x_{5}\},\{x_{6}\}]\) _with_ \(\mathit{repr}(\mathit{cfg}_{1})=\{\overline{C}_{1},\{x_{4}\},\{x_{5}\},\{x_{6}\}\}\)_._ \(\mathit{cfg}_{2}=[\overline{C}_{1},\emptyset,\overline{C}_{3},\{x_{6}\}]\) _with_ \(\mathit{repr}(\mathit{cfg}_{2})=\{\overline{C}_{1},\overline{C}_{3},\{x_{6}\}\}\)_._ \(\mathit{cfg}_{3}=[\overline{C}_{1},\emptyset,\{x_{5}\},\overline{C}_{4}]\) _with_ \(\mathit{repr}(\mathit{cfg}_{3})=\{\overline{C}_{1},\{x_{5}\},\overline{C}_{4}\}\)_._ \(\mathit{cfg}_{4}=[\{x_{2},x_{3}\},\overline{C}_{2},\{x_{5}\},\{x_{6}\}]\) _with_ \(\mathit{repr}(\mathit{cfg}_{4})=\{\{x_{2},x_{3}\},\overline{C}_{2},\{x_{5}\},\{x_{6}\}\}\)_._ \(\mathit{cfg}_{5}=[\{x_{2},x_{3}\},\{x_{1}\},\overline{C}_{3},\{x_{6}\}]\) _with_ \(\mathit{repr}(\mathit{cfg}_{5})=\{\{x_{2},x_{3}\},\{x_{1}\},\overline{C}_{3},\{x_{6}\}\}\)_._ \(\mathit{cfg}_{6}=[\{x_{2},x_{3}\},\{x_{1}\},\{x_{5}\},\overline{C}_{4}]\) _with_ \(\mathit{repr}(\mathit{cfg}_{6})=\{\{x_{2},x_{3}\},\{x_{1}\},\{x_{5}\},\overline{C}_{4}\}\) _Only the final configurations \(\mathit{cfg}_{2},\mathit{cfg}_{3},\mathit{cfg}_{4}\) represent maximal clique-partitions of \(G\): \(\mathcal{P}_{1}=\{\overline{C}_{1},\overline{C}_{3},\{x_{6}\}\}=\mathit{repr}(\mathit{cfg}_{2})\), \(\mathcal{P}_{2}=\{\overline{C}_{1},\{x_{5}\},\overline{C}_{4}\}=\mathit{repr}(\mathit{cfg}_{3})\), and \(\mathcal{P}_{3}=\{\{x_{2},x_{3}\},\overline{C}_{2},\{x_{5}\},\{x_{6}\}\}=\mathit{repr}(\mathit{cfg}_{4})\). The other final configurations in \(\mathbb{T}^{S}(G)\) represent non-maximal clique-partitions of \(G\). _ #### 2.1.1 Properties of the search space \(\mathbb{T}^{S}(G)\) The following are immediate consequences of the definition: if \(S=[v_{1},v_{2},\ldots,v_{s}]\) is the list of nodes shared among the maximal cliques of \(G\) then 1. \(\mathbb{T}^{S}(G)\) has depth \(s\), and all its leaf configurations occur at depth \(s\). 2. Every internal configuration at depth \(\ell<s\) in \(\mathbb{T}^{S}(G)\) has \(d(v_{\ell})\) children. For every configuration \(\mathit{cfg}\) in \(\mathbb{T}^{S}(G)\) there is a unique path \[\mathit{cfg}_{0}\rightarrow_{(v_{1},i_{1})}\mathit{cfg}_{1}\rightarrow_{(v_{2},i_{2})}\ldots\rightarrow_{(v_{\ell},i_{\ell})}\mathit{cfg}\] from the root configuration to \(\mathit{cfg}\). We let \(\delta(\mathit{cfg}):=[i_{1},\ldots,i_{\ell}]\) be the sequence of assignment decisions made for the shared nodes \(v_{1},\ldots,v_{\ell}\in S\). 
**Lemma 2**.: _If \(\mathit{cfg}\in\mathbb{T}^{S}(G)\) with \(\delta(\mathit{cfg})=[i_{1},\ldots,i_{\ell}]\) then all descendants \([C_{1},\ldots,C_{m}]\) of \(\mathit{cfg}\) in \(\mathbb{T}^{S}(G)\), including \(\mathit{cfg}\), have \(C_{i}\neq\emptyset\) for every \(i\in\mathit{Rgd}\cup\{i_{1},\ldots,i_{\ell}\}\)._ Proof.: If \(i=i_{p}\in\{i_{1},\ldots,i_{\ell}\}\) then the shared node \(v_{p}\in S\) was assigned to the clique with index \(i\), thus \(C_{i}\neq\emptyset\) because \(v_{p}\in C_{i}\). If \(i\in\mathit{Rgd}\) then \(\overline{C}_{i}\) has a node \(v\) with \(\mathit{cliques}(v)=\{i\}\). Node \(v\) persists in the \(i\)-th component of all configurations in \(\mathbb{T}^{S}(G)\). In particular, \(C_{i}\neq\emptyset\) because \(v\in C_{i}\). **Lemma 3**.: \(\Pi_{\mathit{mcp}}(G)\subseteq\mathit{Leaf}(\mathbb{T}^{S}(G))\)_._ Proof.: Let \(\mathit{cfg}=[C_{1},\ldots,C_{m}]\in\Pi_{\mathit{mcp}}(G)\) and let \(\mathcal{P}=\mathit{repr}(\mathit{cfg})\). For every \(v\in S\), there is a unique \(\kappa(v)\in[m]\) such that \(v\in C_{\kappa(v)}\). Then \(\mathit{cfg}_{0}\rightarrow_{(v_{1},\kappa(v_{1}))}\mathit{cfg}_{1}\rightarrow_{(v_{2},\kappa(v_{2}))}\ldots\rightarrow_{(v_{s},\kappa(v_{s}))}\mathit{cfg}_{s}\) is a valid sequence of decision steps. Moreover, \(\mathit{cfg}_{s}=\mathit{cfg}\) is a final configuration in \(\mathbb{T}^{S}(G)\) and \(\mathit{repr}(\mathit{cfg}_{s})=\mathcal{P}\). **Corollary 1**.: _Every maximal clique-partition is represented by a configuration in \(\mathbb{T}^{S}(G)\)._ Proof.: Immediate consequence of Lemmas 1 and 3. From these preliminary results, we derive the following algorithm to find all clique-partitions of \(G\): we traverse systematically (e.g., in a depth-first manner) the search space \(\mathbb{T}^{S}(G)\), and for every final configuration \(\mathit{cfg}\) in \(\mathbb{T}^{S}(G)\) we check if \(\mathit{repr}(\mathit{cfg})\) is a maximal clique-partition of \(G\). This method has the following drawbacks: 1. The search space can be huge, with many final configurations for non-maximal clique-partitions. For instance, in Example 1, the final configurations \(\mathit{cfg}_{1}\), \(\mathit{cfg}_{5}\) and \(\mathit{cfg}_{6}\) represent non-maximal clique partitions. 2. Some maximal clique-partitions may be represented by more than one final configuration. We wish to prune the search space as much as possible, to eliminate the computation of configurations for non-maximal clique-partitions, and to ensure the computation of exactly one configuration for every maximal clique-partition. 
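For illustration purposes only (this sketch is ours and is not the algorithm proposed in this paper), the baseline method just described can be prototyped as follows: it derives \(\mathit{cliques}(v)\), \(S\) and the decision steps \(\rightarrow_{(v,i)}\) from a given list of maximal cliques, enumerates all final configurations of \(\mathbb{T}^{S}(G)\) depth-first, keeps only those representing maximal clique-partitions, and removes duplicate partitions explicitly; these are precisely the two drawbacks addressed by the criteria of Sections 3 and 4.

```python
from itertools import combinations

def maximal_clique_partitions(max_cliques):
    """Naive enumeration over the search tree T^S(G) described above.
    max_cliques: the list [C1, ..., Cm] of all maximal cliques of G."""
    cliques_ = [set(C) for C in max_cliques]
    vertices = set().union(*cliques_)
    cliques_of = {v: [i for i, C in enumerate(cliques_) if v in C] for v in vertices}
    S = [v for v in vertices if len(cliques_of[v]) > 1]      # shared vertices

    def is_clique(block):
        # a set of nodes is a clique of G iff cliques(block) is nonempty,
        # i.e. the block is contained in some maximal clique
        return any(block <= C for C in cliques_)

    def is_maximal(partition):
        # maximal: no two blocks can be merged into a single clique
        return not any(is_clique(A | B) for A, B in combinations(partition, 2))

    def dfs(k, config):
        if k == len(S):                         # all shared vertices assigned
            partition = [C for C in config if C]
            if is_maximal(partition):
                yield partition
            return
        v = S[k]
        for i in cliques_of[v]:                 # decision: keep v only in clique i
            child = [set(C) for C in config]
            for j in cliques_of[v]:
                if j != i:
                    child[j].discard(v)
            yield from dfs(k + 1, child)

    seen = set()
    for P in dfs(0, [set(C) for C in cliques_]):
        key = frozenset(frozenset(B) for B in P)
        if key not in seen:                     # drawback (2): drop repeated partitions
            seen.add(key)
            yield [sorted(B) for B in P]

# Example 1, with x1..x6 encoded as 1..6
for P in maximal_clique_partitions([{1, 2, 3}, {1, 4}, {4, 5}, {4, 6}]):
    print(P)
```

On the graph of Example 1 this prints exactly the three maximal clique-partitions \(\mathcal{P}_{1}\), \(\mathcal{P}_{2}\) and \(\mathcal{P}_{3}\) (in some order), after having explored the full tree \(\mathbb{T}^{S}(G)\).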
**Definition 1**.: _We say that a configuration \(\mathit{cfg}=[C_{1},\ldots,C_{m}]\in\mathbb{T}^{S}(G)\) with \(\delta(\mathit{cfg})=[i_{1},\ldots,i_{\ell}]\) is a \(T1\)_**-node**_, or that it has property \(T1\), if either_ **(a)**: \(\mathit{cliques}(C_{a})\cap\mathit{Rgd}\neq\emptyset\) _for some clique index_ \(a\in\{i_{1},\ldots,i_{\ell}\}-\mathit{Rgd}\)_, or_ **(b)**: \(\mathit{cliques}(C_{a})\cap\mathit{cliques}(C_{b})\neq\emptyset\) _for distinct clique indices_ \(a,b\in\{i_{1},\ldots,i_{\ell}\}\)_._ _We let \(\mathbb{T}^{S}_{\{i_{T1}\}}(G)\) be the result of pruning from \(\mathbb{T}^{S}(G)\) all subtrees whose root is a \(T1\)-node._ **Proposition 1**.: _If \(\mathit{cfg}\) is a \(T1\)-node of \(\mathbb{T}^{S}(G)\) then there is no final configuration \([C_{1},\ldots,C_{m}]\) below or equal to \(\mathit{cfg}\) such that \(\mathit{repr}(\mathit{cfg})\) is a maximal clique-partition._ Proof.: Let \([C_{1},\ldots,C_{m}]\) be a final configuration below or equal to \(\mathit{cfg}\) in \(\mathbb{T}^{S}(G)\). (a) If \(\mathit{cliques}(C_{a})\cap\mathit{Rgd}\neq\emptyset\) for some clique index \(a\in\{i_{1},\ldots,i_{\ell}\}-\mathit{Rgd}\) then \(C_{a}\neq\emptyset\) by Lemma 2, and there exists \(b\in\mathit{Rgd}\) such that \(C_{a}\subseteq\overline{C}_{b}\). In this case we have: (1) \(b\neq a\) because \(b\in\mathit{Rgd}\) and \(a\not\in\mathit{Rgd}\); (b) \(C_{b}\neq\emptyset\) by Lemma 2; and (3) \(C_{a}\cup C_{b}\) is a clique included in \(\overline{C}_{b}\) because \(C_{a}\subseteq\overline{C}_{b}\) and \(C_{b}\subseteq\overline{C}_{b}\). Therefore, \(\mathcal{P}\) is not a maximal clique-partition. (b) If \(\mathit{cliques}(C_{a})\cap\mathit{cliques}(C_{b})\neq\emptyset\) for distinct \(a,b\in\{i_{1},\ldots,i_{\ell}\}\) then \(C_{a}\neq\emptyset\neq C_{b}\) by Lemma 2, and there exists \(p\in\mathit{cliques}(C_{a})\cap\mathit{cliques}(C_{b})\) such that \(C_{a}\subseteq\overline{C}_{p}\) and \(C_{b}\subseteq\overline{C}_{p}\). Thus \(C_{a}\cup C_{b}\subseteq\overline{C}_{p}\), hence \(C_{a}\cup C_{b}\) is a clique and \(\mathcal{P}\) is not maximal clique-partition. **Corollary 2**.: \(\mathit{Leaf}(\mathbb{T}^{S}_{\{i_{T1}\}}(G))=\Pi_{\mathit{mcp}}(G)\)_._ Proof.: Immediate consequence of Lemma 3, Proposition 1 and the obvious observation that every \(\mathit{cfg}\in\mathbb{T}^{S}(G)\) with \(\mathit{repr}(\mathit{cfg})\) non-maximal is not in \(\mathbb{T}^{S}_{\{iT1\}}(G)\) because it is a \(T1\)-node. **Example 2**.: _An enumeration of the maximal cliques of the undirected graph \(G\):_ _is \(\mathit{cfg}_{0}:=[\overline{C}_{1},\overline{C}_{2},\overline{C}_{3}, \overline{C}_{4}]\) where \(\overline{C}_{1}=\{1,2,3\}\), \(\overline{C}_{2}=\{2,3,4\}\), \(\overline{C}_{3}=\{4,5,6\}\), \(\overline{C}_{4}=\{5,6,7\}\). Then \(d[1]=d[7]=1\) and \(d[v]=2\) for all vertices \(v\in\{2,3,4,5,6\}\). Therefore \(\mathit{Rgd}=\{1,4\}\) and we can choose \(S=[2,3,4,5,6]\). The tree \(\mathbb{T}^{S}(G)\) has \(\sum_{i=1}^{5}\prod_{j=1}^{i}2=62\) non-root configurations and \(2^{5}=32\) final configurations, whereas \(\mathbb{T}^{S}_{\{iT1\}}(G)\) has 25 non-root configurations, as shown in Fig. 1. 
The final configurations in \(\mathbb{T}^{S}_{\{iT1\}}(G)\) are \(\mathit{cfg}_{1},\mathit{cfg}_{2},\mathit{cfg}_{3},\mathit{cfg}_{4}, \mathit{cfg}_{5},\mathit{cfg}_{7},\mathit{cfg}_{9},\mathit{cfg}_{11}\), and_ \[\mathit{repr}(\mathit{cfg}_{1})=\mathit{repr}(\mathit{cfg}_{5})=\{\{1,2,3 \},\{4,5,6\},\{7\}\},\quad\mathit{repr}(\mathit{cfg}_{2})=\{\{1,2,3\},\{4,5,6 \},\{7\}\},\] \[\mathit{repr}(\mathit{cfg}_{3})=\{\{1,2,3\},\{4,5\},\{6,7\}\}, \quad\mathit{repr}(\mathit{cfg}_{4})=\{\{1,2,3\},\{4,6\},\{5,7\}\},\] \[\mathit{repr}(\mathit{cfg}_{7})=\{\{1,2\},\{3,4\},\{5,6,7\}\}, \quad\mathit{repr}(\mathit{cfg}_{9})=\{\{1,3\},\{2,4\},\{5,6,7\}\},\] \[\mathit{repr}(\mathit{cfg}_{11})=\{\{1\},\{2,3,4\},\{5,6,7\}\}.\] ## 4 Avoiding repeated computations of the same maximal clique-partition In Example 2, the final configurations \(\mathit{cfg}_{1}:=[\overline{C}_{1},\{4\},\emptyset,\overline{C}_{4}]\) and \(\mathit{cfg}_{5}:=[\overline{C}_{1},\emptyset,\{4\},\overline{C}_{4}]\) represent the same maximal clique-partition: \(\mathit{repr}(\mathit{cfg}_{1})=\mathit{repr}(\mathit{cfg}_{5})=\{\{1,2,3\}, \{4\},\{5,6,7\}\}\). Thus, there are situations when the search space \(\mathbb{T}^{S}_{!(T1)}(G)\) has the following undesirable feature: different final configurations represents the same maximal clique-partition. This implies that some computations are redundant: some maximal clique-partitions will be generated more than once. Note that, in Example 2, the configurations \(\mathit{cfg}_{1}\) and \(\mathit{cfg}_{5}\) which represent the same maximal clique-partition \(\mathcal{P}=\{\overline{C}_{1},\{4\},\overline{C}_{4}\}\) have the following property: \(\mathit{cfg}_{1}=[C_{1},\ldots,C_{m}]\), \(\mathit{cfg}_{5}=[C^{\prime}_{1},\ldots,C^{\prime}_{m}]\) and there exist \(1\leq j\neq i\leq m\) such that \(C_{i}=C^{\prime}_{j}\neq\emptyset\) and \(C^{\prime}_{i}=\emptyset=C_{j}\). The following lemma indicates that this is a general property of configurations which represent the same maximal clique-partition: **Lemma 4**.: _If the distinct final configurations \(\mathit{cfg}=[C_{1},\ldots,C_{m}],\mathit{cfg}^{\prime}=[C^{\prime}_{1}, \ldots,C^{\prime}_{m}]\) in \(\mathbb{T}^{S}_{!(T1)}(G)\) have \(\mathit{repr}(\mathit{cfg})=\mathit{repr}(\mathit{cfg}^{\prime})\) then there exist \(1\leq j\neq i\leq m\) such that \(C_{i}=C^{\prime}_{j}\neq\emptyset\) and \(C^{\prime}_{i}=C_{j}=\emptyset\)._ Proof.: Let \(I:=\{k\mid 1\leq k\leq m\text{ and }C_{k}\neq\emptyset\}\). Then \(\mathit{repr}(\mathit{cfg})=\{C_{k}\mid k\in I\}\). Moreover, * \(\mathit{repr}(\mathit{cfg})=\mathit{repr}(\mathit{cfg}^{\prime})\) implies the existence of a permutation \(\pi:\{1,\ldots,m\}\to\{1,\ldots,m\}\) such that \(C^{\prime}_{k}=C_{\pi(k)}\) for all \(k\in[m]\), * \(\mathit{cfg}\neq\mathit{cfg}^{\prime}\) implies the existence of \(i\in I\) such that \(C_{i}\neq C^{\prime}_{i}\). This implies \(i\neq\pi(i)\). Let \(j=\pi^{-1}(i)\). Then \(j\neq i\) and \(C_{i}=C_{\pi(j)}=C^{\prime}_{j}\). Since \(i\in I\), we have \(C_{i}=C^{\prime}_{j}\neq\emptyset\). Note that \(j\neq\pi(j)\) because \(j=\pi(j)\) implies \(j=\pi(\pi^{-1}(i))=i\), contradiction. It remains to prove that \(C^{\prime}_{i}=C_{j}=\emptyset\). From \(\emptyset\neq C_{i}\subseteq\overline{C}_{i}\) and \(C_{\pi(i)}=C^{\prime}_{i}\subseteq\overline{C}_{i}\) we learn that \(C_{i}\cup C_{\pi(i)}\) is a clique included in \(\overline{C}_{i}\). 
We must have \(C^{\prime}_{i}=C_{\pi(i)}=\emptyset\) because otherwise \(C_{i}\) and \(C_{\pi(i)}\) would be different cliques of \(\mathit{repr}(\mathit{cfg})\) with \(C_{j}\cup C_{\pi(j)}\) a clique, which contradicts the assumption that \(\mathit{repr}(\mathit{cfg})\) is maximal clique-partition. From \(\emptyset\neq C^{\prime}_{j}=C_{\pi(j)}\subseteq\overline{C}_{j}\) and \(C_{j}\subseteq\overline{C}_{j}\) we learn that \(C_{\pi(j)}\cup C_{j}\) is a clique included in \(\overline{C}_{j}\). We must have \(C_{j}=\emptyset\) because otherwise \(C_{j}\) and \(C_{\pi(j)}\) would be different cliques of \(\mathit{repr}(\mathit{cfg})\) with \(C_{j}\cup C_{\pi(j)}\) a clique, which contradicts the assumption that \(\mathit{repr}(\mathit{cfg})\) is maximal clique-partition. Lemma 4 allows us to define a criterion to eliminate from \(\mathbb{T}^{S}_{!(T1)}(G)\) the redundant computations of maximal clique-partitions. **Definition 2**.: _We say that configuration \(\mathit{cfg}=[C_{1},\ldots,C_{m}]\in\mathbb{T}^{S}_{!(T1)}(G)\) with \(\delta(\mathit{cfg})=[i_{1},\ldots,i_{\ell}]\) is a \(T2\)_**-node**_, or that it has property \(T2\), if there exist \(1\leq j<i\leq m\) such that \(j\in\{i_{1},\ldots,i_{\ell}\}-\mathit{Rgd}\) and \(i\in\mathit{cliques}(C_{j})\). We let \(\mathbb{T}^{S}_{!(T_{1},T_{2})}(G)\) be be the result of pruning from \(\mathbb{T}^{S}_{!(T1)}(G)\) all subtrees whose root is a \(T2\)-node._ From now on we write \(\mathit{Leaf}(\mathbb{T}_{!(T1,T2)})^{S}(G))\) for the set of configurations in \(\mathbb{T}^{S}_{!(T1,T2)}(G)\) at depth \(s\). If we let \(\Pi^{!}_{mcp}(G)\) be the set of configurations \([C_{1},\ldots,C_{m}]\) from \(\Pi_{mcp}(G)\) such that \[\text{for all }1\leq j<i\leq m\text{, if }C_{j}\neq\emptyset\text{ then }i\not\in \mathit{cliques}(C_{j})\] then the following lemmas hold: **Lemma 5**.: _For every maximal clique-partition \(\mathcal{P}\) of \(G\) there is exactly one configuration \(\mathit{cfg}\in\Pi^{!}_{mcp}(G)\) with \(\mathit{repr}(\mathit{cfg})=\mathcal{P}\)._ Proof.: For every configuration \(\mathit{cfg}=[C_{1},\ldots,C_{m}]\in\mathit{Leaf}(\mathbb{T}^{S}_{!(T1)}(G))\) we define the measure \[m(\mathit{cfg}):=\sum_{\begin{subarray}{c}1\leq i\leq m\\ \overline{C_{i}}=\emptyset\end{subarray}}i.\] First, we prove that \(\Pi^{!}_{mcp}(G)\) has at least one configuration whose representation is \(\mathcal{P}\). By Lemma 1, there exists \(\mathit{cfg}^{\prime}_{0}=[C_{1},\ldots,C_{m}]\in\Pi_{mcp}(G)\) with \(\mathit{repr}(\mathit{cfg})=\mathcal{P}\). If \(\mathit{cfg}\in\Pi^{!}_{mcp}(G)\) then we are done. Otherwise there exists \(1\leq j<i\leq m\) such that \(\emptyset\neq C_{j}\subseteq\overline{C}_{i}\). Then \(C_{i}=\emptyset\) because otherwise \(C_{i},C_{j}\) would be different components of \(\mathcal{P}\) with \(C_{i}\cup C_{j}\subseteq\overline{C}_{i}\) and this contradicts the assumption that the clique-partition \(\mathcal{P}\) is maximal. Thus, we can define the configuration \(\mathit{cfg}^{\prime}_{1}=[C^{\prime}_{1},\ldots,C^{\prime}_{m}]\in\Pi_{mcp}(G)\) with \[C^{\prime}_{k}:=\left\{\begin{array}{ll}C_{j}&\text{if }k=i,\\ \emptyset&\text{if }k=j,\\ C_{k}&\text{otherwise}\end{array}\right.\] If follows that \(m(\mathit{cfg}^{\prime}_{0})-m(\mathit{cfg}^{\prime}_{1})=i-j>0\). 
In this way we can build a sequence of configurations \(\mathit{cfg}^{\prime}_{0},\mathit{cfg}^{\prime}_{1},\ldots\in\Pi_{mcp}(G)\) with \(\mathcal{P}=\mathit{repr}(\mathit{cfg}^{\prime}_{0})=\mathit{repr}( \mathit{cfg}^{\prime}_{1})=\ldots\) and \(m(\mathit{cfg}^{\prime}_{0})>m(\mathit{cfg}^{\prime}_{1})>\ldots\) Since the ordering \(>\) on natural numbers is well-founded, this construction will eventually end with a configuration \(\mathit{cfg}^{\prime}_{p}\in\Pi^{!}_{mcp}(G)\) and \(\mathit{repr}(\mathit{cfg}^{\prime}_{p})=\mathcal{P}\). Hence \(\Pi^{!}_{mcp}(G)\) has at least one configuration whose representation is \(\mathcal{P}\). It remains to show that there are no two configurations \(\mathit{cfg}=[C_{1},\ldots,C_{m}],\mathit{cfg}^{\prime}=[C^{\prime}_{1}, \ldots,C^{\prime}_{m}]\in\Pi^{!}_{mcp}(G)\) with \(\mathit{repr}(\mathit{cfg})=\mathit{repr}(\mathit{cfg}^{\prime})\). If this were the case then, by Lemma 4, there exist \(1\leq j\neq i\leq m\) such that \(C_{i}=C^{\prime}_{j}\neq\emptyset\) and \(C^{\prime}_{i}=C_{j}=\emptyset\). If \(j<i\) then \(\emptyset\neq C^{\prime}_{j}=C_{i}\subseteq\overline{C}_{i}\), which contradicts the assumption that \(\mathit{cfg}^{\prime}\in\Pi^{!}_{mcp}(G)\). If \(j>i\) then \(\emptyset\neq C_{i}=C^{\prime}_{j}\subseteq\overline{C}_{j}\), which contradicts the assumption that \(\mathit{cfg}\in\Pi^{!}_{mcp}(G)\). **Lemma 6**.: \(\mathit{Leaf}(\mathbb{T}^{S}_{!(T1,T2)}(G))=\Pi^{!}_{mcp}(G)\)_._ Proof.: First, we prove that \(\mathit{Leaf}(\mathbb{T}^{S}_{!(T1,T2)}(G))\subseteq\Pi^{!}_{mcp}(G)\). Let \(\mathit{cfg}=[C_{1},\ldots,C_{m}]\in\mathit{Leaf}(\mathbb{T}^{S}_{!(T1,T2)}(G))\) with \(\delta(\mathit{cfg})=[i_{1},\ldots,i_{s}]\). Since \(\mathit{Leaf}(\mathbb{T}^{S}_{!(T1,T2)}(G))\subseteq\mathit{Leaf}(\mathbb{T}^{ S}_{!(T1)}(G))=\Pi_{mcp}(G)\), we only have to show that, for all \(1\leq j<i\leq m\), if \(C_{j}\neq\emptyset\) then \(i\not\in\mathit{cliques}(C_{j})\). If this were not the case, then there exist \(j\in\{i_{1},\ldots,i_{s}\}\cup Rgd\) and \(j<i\leq m\) such that \(i\in\mathit{cliques}(C_{j})\). We observe that \(j\not\in Rgd\) because otherwise \(\mathit{cliques}(C_{j})=\{j\}\), which contradicts the assumption \(i\in\mathit{cliques}(C_{j})\). This implies that \([C_{1},\ldots,C_{m}]\) is \(T2\)-node of \(\mathbb{T}^{S}_{\{?(T1)\}}(G)\), which contradicts the assumption that \([C_{1},\ldots,C_{m}]\in\mathit{Leaf}(\mathbb{T}^{S}_{\{?(T1,T2)\}}(G))\). To finish the proof, we must show that \(\Pi^{!}_{\mathit{mcp}}(G)\subseteq\mathit{Leaf}(\mathbb{T}^{S}_{\{?(T1,T2) \}}(G))\). Since \[\Pi^{!}_{\mathit{mcp}}(G)\subseteq\Pi_{\mathit{mcp}}(G)=\mathit{Leaf}( \mathbb{T}^{S}_{\{?(T1)\}}(G)),\] it is sufficient to prove that every configuration \(\mathit{cfg}\in\Pi_{\mathit{mcp}}(G)-\Pi^{!}_{\mathit{mcp}}(G)\) is below a \(T2\)-node of \(\mathbb{T}^{S}_{\{?(T1)\}}(G)\). Let \(\mathit{cfg}=[C_{1},\ldots,C_{m}]\) with \(\delta(\mathit{cfg})=[i_{1},\ldots,i_{s}]\). Then there exist \(1\leq j<i<m\) such that \(\emptyset\neq C_{j}\subseteq\overline{C}_{i}\). We observe that \(j\not\in Rgd\) because otherwise the only maximal clique which contains \(C_{j}\) is \(\overline{C}_{j}\). Therefore we must have \(j\in\{i_{1},\ldots,i_{s}\}-Rgd\). This implies that \(\mathit{cfg}\) is \(T2\)-node. The following corollary is an immediate consequence of the previous two lemmas. 
**Corollary 3**.: _Every maximal clique-partition is produced by a single configuration from \(\mathit{Leaf}(\mathbb{T}^{S}_{!(T1,T2)}(G))\)._

**Example 3**.: _The tree \(\mathbb{T}^{S}_{!(T1,T2)}(G)\) for the graph \(G\) from Example 2 is shown in Figure 2. \([\overline{C}_{1},\{4\},\{5,6\},\overline{C}_{4}]\in\mathbb{T}^{S}_{!(T1)}(G)\) is a \(T2\)-node because it is of the form \([C_{1},C_{2},C_{3},C_{4}]\) and there exist \(j=2<3=i\) such that \(j\in\{2,3\}-\mathit{Rgd}\) and \(C_{2}\subseteq\overline{C}_{3}\). Compared to \(\mathbb{T}^{S}_{!(T1)}(G)\), the total number of non-root nodes in \(\mathbb{T}^{S}_{!(T1,T2)}(G)\) has dropped from 25 to 22, and_
\[\mathit{Leaf}(\mathbb{T}^{S}_{!(T1,T2)}(G))=\Pi^{!}_{\mathit{mcp}}(G)=\{\mathit{cfg}_{2},\mathit{cfg}_{3},\mathit{cfg}_{4},\mathit{cfg}_{5},\mathit{cfg}_{7},\mathit{cfg}_{9},\mathit{cfg}_{11}\}.\]
_Every maximal clique-partition is produced by a single final configuration in \(\mathbb{T}^{S}_{!(T1,T2)}(G)\):_
\[\begin{array}{ll}\mathit{repr}(\mathit{cfg}_{2})=\{\{1,2,3\},\{4,5,6\},\{7\}\},&\mathit{repr}(\mathit{cfg}_{3})=\{\{1,2,3\},\{4,5\},\{6,7\}\},\\ \mathit{repr}(\mathit{cfg}_{4})=\{\{1,2,3\},\{4,6\},\{5,7\}\},&\mathit{repr}(\mathit{cfg}_{5})=\{\{1,2,3\},\{4\},\{5,6,7\}\},\\ \mathit{repr}(\mathit{cfg}_{7})=\{\{1,2\},\{3,4\},\{5,6,7\}\},&\mathit{repr}(\mathit{cfg}_{9})=\{\{1,3\},\{2,4\},\{5,6,7\}\},\\ \mathit{repr}(\mathit{cfg}_{11})=\{\{1\},\{2,3,4\},\{5,6,7\}\}.\end{array}\]

## 5 The combined detection of \(T1\)-nodes and \(T2\)-nodes

Let \(\mathit{cfg}\rightarrow_{(v_{\ell},i_{\ell})}\mathit{cfg}^{\prime}\) be a branch of \(\mathbb{T}^{S}_{!(T1,T2)}(G)\), and \(\delta(\mathit{cfg}^{\prime})=[i_{1},\ldots,i_{\ell}]\). The only visible change from \(\mathit{cfg}\) to \(\mathit{cfg}^{\prime}\) is that of the components with indices from the set \(\mathit{cliques}(v_{\ell})-\{i_{\ell}\}\). When we are about to check if \(\mathit{cfg}^{\prime}\) has property \(T1\) or \(T2\), we know that \(\mathit{cfg}\) does not have property \(T1\) or \(T2\) because it has already passed this test. Therefore, it is sufficient to consider only the set of clique indices \(\mathcal{J}_{\ell}:=\mathit{cliques}(v_{\ell})\cap\mathcal{I}_{\ell}\) where \(\mathcal{I}_{\ell}:=\{i_{1},\ldots,i_{\ell}\}-\mathit{Rgd}\), and to check if \(\mathit{cfg}^{\prime}\) is a configuration \([C_{1},\ldots,C_{m}]\) which satisfies one of the following conditions for some \(j\in\mathcal{J}_{\ell}\):

1. \(C_{j}\subseteq\overline{C}_{i}\) for some \(i\neq j\), where either (a) \(i\in Rgd\) (in this case \(\mathit{cfg}^{\prime}\) is a \(T1\)-node), or (b) \(i>j\) (in this case \(\mathit{cfg}^{\prime}\) is a \(T2\)-node); or
2. there exists \(i\in\mathcal{I}_{\ell}-\{j\}\) such that \(C_{i}\cup C_{j}\) is a clique. In this case \(\mathit{cfg}^{\prime}\) is a \(T1\)-node.

Condition 1.(a) is equivalent to \(\mathit{cliques}(C_{j})\cap Rgd\neq\emptyset\), and condition 2 is equivalent to \(\mathit{cliques}(C_{i})\cap\mathit{cliques}(C_{j})\neq\emptyset\).

## 6 Enumerating all maximal clique-partitions

In this section we describe an algorithm to compute the maximal clique-partitions one-by-one, on request.
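As a reference point for the pseudocode that follows, the combined \(T1\)/\(T2\) test of Section 5 can be sketched in Python as below. This is an illustrative rendering under our own naming conventions (e.g. `cliques_of_set`, `chosen`), not the authors' implementation:

```python
def cliques_of_set(C, cliques_of):
    """Indices k of the maximal cliques that include C
    (the intersection of cliques(v) over the vertices v of C)."""
    idx = None
    for v in C:
        idx = set(cliques_of[v]) if idx is None else idx & cliques_of[v]
    return idx if idx is not None else set()

def is_t1_or_t2(cfg, chosen, Rgd, cliques_of, v_last):
    """Prune test run after assigning vertex v_last; cfg maps clique index -> component."""
    I = set(chosen.values()) - Rgd           # touched, non-rigid clique indices
    J = I & cliques_of[v_last]               # components possibly changed in the last step
    for j in J:
        ids_j = cliques_of_set(cfg[j], cliques_of)
        if ids_j & Rgd:                      # condition 1(a): T1-node
            return True
        if any(i > j for i in ids_j):        # condition 1(b): T2-node
            return True
        for i in I - {j}:                    # condition 2: C_i ∪ C_j is a clique -> T1-node
            if ids_j & cliques_of_set(cfg[i], cliques_of):
                return True
    return False
```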
The following global data is assumed to be available: * \(\overline{C}_{1},\ldots,\overline{C}_{m}\): the maximal cliques of \(G\) * \(\mathit{cliques}(v)=\{k\in[m]\mid v\in\overline{C}_{k}\}\) for all \(v\in V\) * \(Rgd=\{k\mid\mathit{cliques}(v)=\{k\}\) for some \(v\in V\}\) * \(S=[v_{1},\ldots,v_{s}]\): an enumeration of all vertices \(v\in V\) with \(d(v)>1\) During the computation we will keep track of the following information: * \(\ell\): the depth of the search in tree \(\mathbb{T}^{S}_{!(T1,T2)}(G)\) * \(\mathit{cfg}[\ell]\): the current configuration of the search in tree \(\mathbb{T}^{S}_{!(T1,T2)}(G)\) * \(\mathit{choice}[i]\) for \(1\leq i\leq\ell\): the index of the clique where vertex \(v_{i}\in S\) is assigned. If \(\mathit{choice}[i]==0\) then vertex \(v_{i}\) was not yet been assigned to any clique. ``` procedureinitSearch() \(\mathit{cfg}[1]:=[\overline{C}_{1},\ldots,\overline{C}_{m}]\); \(\ell:=1\); for\(i:=1\) to \(s\)do \(\mathit{choice}[i]:=0\); findNextClique(); procedurehasNext() return\(\mathit{cfg}[1]\neq\)null procedurenext() if(\(\mathit{cfg}[1]==\)null) returnnull; ``` Figure 2: The search tree \(\mathbb{T}^{S}_{!(T1,T2)}(G)\) for the graph from Example 2. **else** \(result:=repr(cfg[\ell])\); \(\mathtt{findNextClique()}\); \(\mathtt{return}\)\(\mathit{result}\); procedure findNextClique() **if**\(s>0\) **while**\(\ell\geq 1\) \(V_{\ell}:=\{k\in\mathit{cliques}(v_{\ell})\mid k>\mathit{choice}[\ell]\text{ and }(k\in Rgd\text{ or }\mathit{cliques}(C_{k})\cap Rgd=\emptyset)\}\); **if**\(V_{\ell}\) is empty \(\mathit{choice}[\ell]:=0\); \(\ell:=\ell-1\); **else** \(i:=\min V_{\ell}\); // keep \(v_{\ell}\) only in clique with index \(i\) \(\mathit{choice}[\ell]:=i\); \(\mathit{cfg}[\ell]:=[C_{1}^{\prime},\ldots,C_{m}^{\prime}]\) where \(C_{k}^{\prime}:=\left\{\begin{array}{ll}C_{k}-\{v_{\ell}\}&\text{if }k\in\mathit{cliques}(v_{\ell})-\{i\},\\ C_{k}&\text{otherwise}\end{array}\right.\) **if** (isT10rT2(\(\mathit{cfg}[\ell]\))) **continue**; **else** \(\mathtt{if}(\ell==s)\) **return**; // maximal clique-partition detected **else** \(\mathit{cfg}[\ell+1]:=\mathit{cfg}[\ell]\); \(\ell:=\ell+1\); \(\mathit{cfg}[1]:=\mathtt{null}\); procedure isT10rT2(\([C_{1},\ldots,C_{m}]\)) \(I:=\{choice[k]\mid 1\leq k\leq\ell\}-Rgd\); \(J:=I\cap\mathit{cliques}(v_{\ell})\); **for** all \(j\in J\) **if**\(\mathit{cliques}(C_{j})\cap Rgd\neq\emptyset\) or \(\mathit{cliques}(C_{j})\cap\{i\mid j<i\leq m\}\neq\emptyset\) **return**true**; **for** all \(i\in I\) **if**\(i\in J\) **if**\((j<i)\) and \(\mathit{cliques}(C_{i})\cap\mathit{cliques}(C_{j})\neq\emptyset\) **return**true**; **else** \(\mathit{cliques}(C_{i})\cap\mathit{cliques}(C_{j})\neq\emptyset\) **return**true**; **return**false; ## 7 Complexity **Theorem 1**.: _If the set of vertices of \(G\) is \(\{v_{1},\ldots,v_{n}\}\) then the number of maximal clique-partitions of \(G\) is at most \(\prod_{i=1}^{n}d(v_{i})\)._ Proof.: The result directly follows from the facts that (1) every maximal clique-partition is produced by some final configuration of \(\mathbb{T}^{5}(G)\), and (2) \(\mathbb{T}^{5}(G)\) has \(\prod_{i=1}^{n}d(v_{i})\) final configurations. It is easy to see that this upper bound can be reached. Just consider the graph with two maximal cliques: \(C_{1}=\{p_{1},\ldots,p_{n},\textit{true}\}\) and \(C_{2}=\{p_{1},\ldots,p_{n},\textit{false}\}\). The set of all maximal clique-partitions imitates the truth assignment in propositional logic, containing \(2^{n}\) maximal clique-partitions. 
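This bound can be checked directly for small \(n\). The following standalone Python sketch (our own brute-force check, not the enumeration algorithm above) counts the maximal clique-partitions of the two-clique graph \(\overline{C}_{1}=\{p_{1},\ldots,p_{n},\mathit{true}\}\), \(\overline{C}_{2}=\{p_{1},\ldots,p_{n},\mathit{false}\}\):

```python
from itertools import combinations

def partitions(items):
    """All set partitions of a list of vertices (blocks are Python sets)."""
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for smaller in partitions(rest):
        for k in range(len(smaller)):
            yield smaller[:k] + [smaller[k] | {first}] + smaller[k + 1:]
        yield [{first}] + smaller

def count_maximal_clique_partitions(V, adj):
    def is_clique(block):
        return all(v in adj[u] for u, v in combinations(block, 2))
    count = 0
    for P in partitions(list(V)):
        # every block is a clique, and no two blocks can be merged into a clique
        if all(is_clique(B) for B in P) and \
           not any(is_clique(A | B) for A, B in combinations(P, 2)):
            count += 1
    return count

n = 3
shared = {f"p{i}" for i in range(n)}
V = shared | {"true", "false"}
adj = {v: set() for v in V}
for clique in (shared | {"true"}, shared | {"false"}):
    for u, v in combinations(clique, 2):
        adj[u].add(v)
        adj[v].add(u)

print(count_maximal_clique_partitions(V, adj))   # 8 == 2**n for n = 3
```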
This theorem implies that the algorithm is exponential in the number of vertices shared among multiple cliques. On the other hand, the length of each branch of the algorithm is polynomially bounded, since it requires at most as many steps as there are vertices shared among maximal cliques. Therefore, every single maximal clique-partition can be computed in polynomial time.

**Experimental results.** To test the performance of our algorithm, we implemented it in Java and Mathematica [19], and ran it on a MacBook Air M2 with 8-core CPU and 8 GB RAM. We indicate the runtimes to enumerate all maximal clique-partitions of some graphs from the following families:

1. \(G_{n}\), \(n\geq 2\), obtained by extending the complete graph \(K_{n}\) with \(n\) new vertices, and connecting every vertex of \(K_{n}\) with a distinct new vertex.
2. \(H_{n}\) of order \(4\,n\) (\(n\geq 2\)), with three maximal cliques of order \(2\,n\): two are mutually disjoint, and the third one shares \(n\) vertices with each of the other two maximal cliques.
3. Graphs \(G_{m,n}\) with set of vertices \(V=\{v_{1},v_{2},\ldots,v_{m+n-1}\}\) and \(n\) maximal cliques \(\overline{C}_{1},\ldots,\overline{C}_{n}\) such that \(\overline{C}_{i}=\{v_{j}\mid i\leq j<i+m\}\) for all \(1\leq i\leq n\). Examples of graphs \(G_{m,n}\) are \(G_{3,2}\), \(G_{3,3}\), and \(G_{3,4}\).

## 8 Conclusion

We developed an algorithm for computing all maximal clique-partitions in an undirected graph. The algorithm starts from the maximal clique cover of the graph and revises it, reducing the number of vertices shared among cliques by assigning them to one of the cliques they belong to. In this process, we avoid the computation of undesirable answers by detecting and discarding the search states which produce only non-maximal clique-partitions or duplicate answers. Our algorithm is optimal in the following sense: it enumerates all maximal clique-partitions, and each of them is computed only once. The set of computed partitions can be exponentially large with respect to the number of vertices shared among maximal cliques, but every answer can be computed in polynomial time (starting from all maximal cliques). Besides, the computations of different maximal clique-partitions correspond to the exploration of different branches of the search tree for solutions, and they can be carried out independently, in parallel with each other.

Our algorithm is iterative in the sense that it enumerates the computed answers one by one, on demand. This is highly desirable because the total number of maximal clique-partitions can be exponentially large and we may want to start analyzing and processing them as soon as possible. In many practical applications, we are not interested in enumerating all maximal clique-partitions. Often, our algorithm can be easily adjusted to reduce the search space and compute only the preferred ones. For instance, we can impose the constraint to keep a group of vertices in the same clique by assigning them simultaneously (whenever possible) to the same single clique originating from a maximal clique of the graph.
2309.09680
Modifier-Adaptation for Real-Time Optimal Periodic Operation
In this paper, we present the periodic modifier-adaptation formulation of the dynamic real time optimization. The proposed formulation uses gradient information to update the problem with affine modifiers so that, upon convergence, its solution matches the optimal steady periodic trajectory. Unlike other state of the art modifier-adaptation techniques, the proposed approach is able to converge not only to optimal steady states, but also to optimal periodic trajectories. The full control scheme to take the system from its current state to the optimal periodic trajectory is detailed. The convergence of the computed reference to the optimal periodic behaviour is shown by means of a periodic version of the quadruple tank benchmark.
Victor Mirasierra, Daniel Limon
2023-09-18T11:36:52Z
http://arxiv.org/abs/2309.09680v1
# Modifier-Adaptation for Real-Time Optimal Periodic Operation ###### Abstract In this paper, we present the periodic modifier-adaptation formulation of the dynamic real time optimization. The proposed formulation uses gradient information to update the problem with affine modifiers so that, upon convergence, its solution matches the optimal steady periodic trajectory. Unlike other state of the art modifier-adaptation techniques, the proposed approach is able to converge not only to optimal steady states, but also to optimal periodic trajectories. The full control scheme to take the system from its current state to the optimal periodic trajectory is detailed. The convergence of the computed reference to the optimal periodic behaviour is shown by means of a periodic version of the quadruple tank benchmark. ## 1 Introduction Economic optimization plays a major role in most industries, since it allows to optimize the performance of the real plant operation [1]. To achieve optimal performance, optimization problems leverage system data and models to compute the trajectory that minimize the economic cost. To arrange the system information into an optimization framework, multidisciplinary teams are often involved. They must have deep knowledge about the real systems and be able to build detailed models that mirror their behaviour. The complexity and possible change over time of real systems (e.g. due to deterioration) make the identification task expensive and prone to errors, which leads to plant-model mismatch and ultimately may lead to a loss of performance in the controlled system. In two-layer control schemes [2], the economic optimization is splitted in two main layers. The first one, called real time optimization (RTO), computes the optimal steady behaviour, while the second one, known as advanced control, calculates the inputs required to take the system to that reference. To transform the optimal reference computed by the RTO into a valid reference to the advance controller, often an intermediate layer known as the steady-state target optimization (SSTO) [3] is used. One of the strengths of two-layer control schemes is their ability to use different models for the different layers, being the model from the RTO layer usually more complex and global, while the one from the advanced control layer is generally faster and able to quickly react to disturbances. This allows to keep a high performance from the detailed RTO, while keeping the control fast from the advanced control. Standard formulations of the RTO deal with the optimization of the plant operated at equilibrium points. However, there exist many scenarios where the plant operates optimally with a periodic behaviour, such as HVAC systems, solar plants, water distribution networks, electric networks, among others. For these systems with periodic nature, a dynamic RTO is better suited because of its ability to calculate not only the optimal steady state, but also the optimal periodic trajectory. This constitutes a generalization of the standard RTO and usually comes at the expense of an increased complexity because of the larger number of variables. While dynamic RTO schemes may theoretically converge to the optimal steady operation, it is still sensitive to plant-model mismatch, thus making it susceptible to a performance decrease. 
In order to cope with the issues derived from the plant-model mismatch, modifier-adaptation (MA) formulations of the RTO emerged and have been studied over the last decades [4; 5; 6; 7] with promising results. They update the model-based RTO problem with affine modifiers that incorporate information of the real system. Upon convergence of the modifiers, the modified problem is able to calculate either the optimal steady operation of the real system or the optimal input profile of a batch process [8] from an initially inaccurate model. Modifier-adaptation schemes have been mainly built upon the standard RTO to compute the optimal steady state of a system. In this work we present a periodic modifier adaptation scheme which is built upon a dynamic RTO and calculates upon convergence the optimal periodic trajectory of a real system. The proposed approach can be seen as a generalization of the MA scheme proposed in [9] to include optimal periodic behaviour. The structure of the paper is the following: In Section 2 we introduce the problem under consideration, along with the two-layer control scheme. Then, in Section 3 we analyse how to modify the dynamic RTO so that, upon convergence, its solution matches the first order necessary conditions of optimality of the optimal problem. Then, in Section 5 we detail how to transform the optimal operation computed by the dynamic RTO into a valid reference for the advanced control layer. Section 4 shows a way to design the advanced layer to follow a reference. In Section 6 we present the full algorithms to implement the two-layer control scheme with periodic modifier-adaptation. A simplified version of this algorithm is used in Section 7 on the quadruple tank benchmark example to test the performance of the proposed approach. Finally, Section 8 discusses the conclusions. ## 2 Problem formulation Consider a system that is described by the following (unknown) discrete-time state-space representation: \[x_{k+1}=f_{p,k}(x_{k},u_{k}), \tag{1}\] where \(x_{k}\in\mathbb{R}^{n_{x}}\) and \(u_{k}\in\mathbb{R}^{n_{u}}\) are respectively the states and inputs of the system at time \(k\), and \(f_{p,k}:\mathbb{R}^{n_{x}\times n_{u}}\rightarrow\mathbb{R}^{n_{x}}\) represents the dynamics of the real system at time \(k\). Each step in \(k\) represents \(t_{T}\) seconds. Let system (1) be periodic with known period \(Tt_{T}\) seconds, i.e. \(f_{p,k}=f_{p,k+T}\), and let \(x_{0}\) be the initial state. At the first step of each period, given the sequence of \(T\) next inputs \(\mathbf{u}_{T}=\begin{bmatrix}u_{0}^{T}&u_{1}^{T}&\cdots&u_{T-1}^{T}\end{bmatrix} ^{T}\in\mathbb{R}^{Tn_{u}}\), then the \(T\) following states of the system (1) are defined by the time-invariant function \(F_{p}:\mathbb{R}^{n_{x}\times Tn_{u}}\rightarrow\mathbb{R}^{Tn_{x}}\) so that: \[\mathbf{x}_{T}=\begin{bmatrix}x_{1}^{T}&x_{2}^{T}&\cdots&x_{T}^{T}\end{bmatrix} ^{T}=F_{p}(x_{0},\mathbf{u}_{T}). \tag{2}\] At any time \(k\), the states and inputs of system (1) can be subject to (possibly nonlinear) constraints of the form: \[g_{k}(x_{k},u_{k})\leq 0, \tag{3}\] which are also periodic with period \(Tt_{T}\) seconds. Considering the periodic constraint \(x_{0}=x_{T}\), at the first step of each period the constraints (3) can also be expressed by its compact form: \[G(\mathbf{x}_{T},\mathbf{u}_{T})\leq 0,\] where \(G:\mathbb{R}^{Tn_{x}\times Tn_{u}}\rightarrow\mathbb{R}\). 
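For concreteness, the one-period rollout \(F_{p}\) in (2) and the stacked constraint \(G\) can be written in a few lines. The sketch below is our own illustration (NumPy, with placeholder callables `f_steps[k]` and `g_steps[k]` standing for \(f_{p,k}\) and \(g_{k}\)); it is not taken from the paper. The periodicity condition \(x_{T}=x_{0}\) is handled separately as an equality constraint.

```python
import numpy as np

def lift_period(f_steps, x0, u_seq):
    """x_T = F_p(x0, u_T): stack the T successor states over one period."""
    x, states = np.asarray(x0, dtype=float), []
    for f_k, u_k in zip(f_steps, u_seq):
        x = np.asarray(f_k(x, u_k), dtype=float)
        states.append(x)
    return np.concatenate(states)

def stacked_constraints(g_steps, x0, u_seq, x_stack, nx):
    """G(x_T, u_T) <= 0, evaluating g_k at (x_k, u_k) for k = 0, ..., T-1."""
    xs = [np.asarray(x0, dtype=float)] + \
         [x_stack[k * nx:(k + 1) * nx] for k in range(len(u_seq) - 1)]
    return np.concatenate([np.atleast_1d(g_k(xs[k], u_seq[k]))
                           for k, g_k in enumerate(g_steps)])
```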
The optimal economic control problem calculates the infinite sequence of inputs that, when applied to the system (1), minimizes the economic cost given by the stage cost function \(\phi_{k}(x_{k},u_{k})\) over time. Let the stage cost function \(\phi_{k}\) be periodic with period \(Tt_{T}\) seconds and consider the periodic constraint \(x_{0}=x_{T}\), then at the first step of each period, the time-invariant cost function \(\Phi\) represents the sum of stage cost functions \(\phi_{k}\) over the \(T\) future steps and is defined as: \[\Phi(\mathbf{x}_{T},\mathbf{u}_{T})=\sum_{i=0}^{T-1}\phi_{i}(x_{i},u_{i}). \tag{4}\] Given the initial state of the system \(x_{0}\), the optimal economic control problem can be formulated as follows: \[\begin{split}\min_{\mathbf{u}_{\infty}}&\sum_{k=0 }^{\infty}\phi_{k}(x_{k},u_{k})\\ \mathrm{s.t.}& x_{k+1}=f_{p,k}(x_{k},u_{k}),\quad \text{for all }k=0,1,\ldots,\infty\\ & g_{k}(x_{k},u_{k})\leq 0,\quad\text{for all }k=0,1,\ldots, \infty\end{split} \tag{5}\] In real applications, the previous fomulation is seldom implemented because of two main reasons: (i) the system dynamics (i.e. \(f_{p,k}\)) are usually unknown, and (ii) the infinite number of decision variables hinders the problem's readiness for implementation. In practice, problem (5) is often tackled using a two-layer control scheme (Figure 1). In this scheme, the upper layer, also known as real time optimization (RTO), calculates the optimal operation of the system. Whereas the lower one, known as advanced control, computes the input sequence required to take the system from its current state to a given reference. These layers are separate and usually use different models and time horizons, so we consider also an intermediate layer called steady trajectory target optimization (STTO) which turns the optimal operation computed by the upper layer into a valid steady reference for the lower layer. The basis of the two layer architecture is to split the main control problem into two smaller problems of different complexities and which are solved with different frequencies. On one hand, the RTO usually works with a complex and accurate model of the global plant. This model typically describes the fundamental and generally slow behaviour of the plant, which results in large time scales and low update frequency for the RTO. On the other hand, the advanced control generally uses a local dynamic model of system. It uses simple and fast models and its time scales are short. One of the benefits of this scheme is that the upper layer does not need to be recalculated with the same frequency as the lower one. This reduces computational costs, while keeping the control fast. Figure 1: Control diagram. In this work we use dynamic real-time optimization (DRTO) as the upper layer. Unlike standard RTO, which aims to calculate the optimal steady setpoint \((x^{s},u^{s})\), the objective of the DRTO is to compute the optimal periodic trajectory \((\mathbf{\hat{x}}_{T}^{\mathrm{drto}},\mathbf{u}_{T}^{\mathrm{drto}})\) with a predefined period of \(Tt_{T}\) seconds. The optimal periodic trajectory can be seen as a generalization of the optimal steady setpoint, since they lead to the same solution for \(T=1\). Consequently, the DRTO can lead to better steady performance than the standard RTO, at the expense of it being a more intricate problem. In the case of periodic systems, it has been proven that the DRTO formulation is able to capture their optimal steady operation [10]. 
The DRTO uses a model of the real system \(F_{m}\), instead of the real system dynamics \(F_{p}\) described in (2): \[\mathbf{\hat{x}}_{T}=\begin{bmatrix}\hat{x}_{1}^{T}&\hat{x}_{2}^{T}&...&\hat {x}_{T}^{T}\end{bmatrix}^{T}=F_{m}(x_{0},\mathbf{u}_{T}), \tag{6}\] where \(\hat{x}_{k}\) is the state predicted by the model at time \(k\). Like (2), each step in \(k\) equals \(t_{T}\) seconds. Because of the complexity of real systems, models are usually unable to perfectly capture the real dynamics, leading to plant-model mismatch, i.e. \(x_{k+1}\neq\hat{x}_{k+1}\). One iteration of the DRTO is solved every \(t_{D}=D(Tt_{T})\) seconds, with \(D\) being a positive integer. Given the period \(T\), they can be formulated as: \[\begin{split}(x_{0}^{\mathrm{drto}},\mathbf{\hat{x}}_{T}^{ \mathrm{drto}},&\mathbf{u}_{T}^{\mathrm{drto}})=\\ \operatorname*{arg\,min}_{x_{0},\mathbf{\hat{x}}_{T},\mathbf{u}_{T}}& \sum_{i=0}^{T-1}\phi_{i}(\hat{x}_{i},u_{i})\\ \text{s.t.}&\mathbf{\hat{x}}_{T}=F_{m}(x_{0}, \mathbf{u}_{T})\\ & G(\mathbf{\hat{x}}_{T},\mathbf{u}_{T})\leq 0\\ &\hat{x}_{T}=x_{0}.\end{split} \tag{7}\] The aforementioned formulation of the DRTO computes the optimal periodic operation for the available model of the system. However, due to plant-model mismatch, we know that this operation may not be optimal for the real system and might even lead to constraint violation. In the next section we present a reformulation of (7) which uses gradient based modifiers to update the base model so that, upon convergence, the solution of the modified DRTO matches the optimal periodic operation. Later, in Sections 4 and 5, the advanced control and the steady trajectory target optimization layers will be detailed. ## 3 Periodic Modifier-Adaptation Modifier-adaptation (MA) methodologies arose to correct the plant-model mismatch at the RTO level [4; 9]. They use measures and gradients from the system to build modifiers that update the RTO with affine terms. Upon convergence, MA schemes guarantee the satisfaction of the first order necessary conditions for optimality of the optimal problem. Traditionally, MA schemes have been built upon the standard RTO, which ultimately calculates the optimal steady setpoint. In this section we generalize state of the art approaches and show how to apply modifier-adaptation to the DRTO problem (7) and address the plant-model mismatch for optimal periodic trajectories. Zeroth and first order modifiers will be presented to update the dynamic model and ensure that, upon convergence, the optimal solution of the modified DRTO matches the optimal periodic trajectory of the system. Let each iteration of the DRTO be labelled by index \(l\). Then, given the modifiers \(\lambda_{l}^{x}\in\mathbb{R}^{Tn_{x}\times n_{x}}\), \(\lambda_{l}^{u}\in\mathbb{R}^{Tn_{x}\times Tn_{u}}\) and \(\epsilon_{l}\in\mathbb{R}^{Tn_{x}}\), we introduce the periodic modifier-adaptation (P-MA) formulation of the DRTO at iteration \(l\): \[\begin{split}(x_{0}^{\text{drto}},\mathbf{\hat{x}}_{T}^{\text{ drto}},&\mathbf{u}_{T}^{\text{drto}})=\\ \operatorname*{arg\,min}_{x_{0},\mathbf{\hat{x}}_{T},\mathbf{u}_ {T}}&\Phi(\mathbf{\hat{x}}_{T},\mathbf{u}_{T})\\ \text{s.t.}&\mathbf{\hat{x}}_{T}=F_{m}(x_{0}, \mathbf{u}_{T})+\lambda_{l}^{x}x_{0}+\lambda_{l}^{u}\mathbf{u}_{T}+\epsilon_{l }\\ & G(\mathbf{\hat{x}}_{T},\mathbf{u}_{T})\leq 0\\ & M\mathbf{\hat{x}}_{T}=x_{0}.\end{split} \tag{8}\] where \(M\) represents the constant matrix that ensures that the periodic constraint meets, i.e. \(\hat{x}_{T}=x_{0}\). 
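A minimal sketch of how (8) can be set up with CasADi's Python interface is given below (the paper uses CasADi from Matlab for its experiments in Section 7). Here `F_m`, `Phi` and `G_fun` are user-supplied symbolic callables standing for the model, cost and constraints; everything else (names, the solver choice `ipopt`, the argument `n_g` for the number of inequality constraints) is our own assumption:

```python
import casadi as ca
import numpy as np

def build_pma_drto(F_m, Phi, G_fun, M, lam_x, lam_u, eps, nx, nu, T, n_g):
    x0 = ca.MX.sym("x0", nx)
    U = ca.MX.sym("U", T * nu)
    # Modified model: x_hat_T = F_m(x0, U) + lam_x x0 + lam_u U + eps
    x_hat = F_m(x0, U) + ca.mtimes(ca.DM(lam_x), x0) \
                       + ca.mtimes(ca.DM(lam_u), U) + ca.DM(eps)
    g = ca.vertcat(G_fun(x_hat, U),                  # inequality constraints G <= 0
                   ca.mtimes(ca.DM(M), x_hat) - x0)  # periodicity M x_hat = x0
    nlp = {"x": ca.vertcat(x0, U), "f": Phi(x_hat, U), "g": g}
    solver = ca.nlpsol("pma_drto", "ipopt", nlp)
    lbg = np.concatenate([-np.inf * np.ones(n_g), np.zeros(nx)])
    ubg = np.zeros(n_g + nx)
    return solver, lbg, ubg

# Usage (illustrative):
# sol = solver(x0=theta_guess, lbg=lbg, ubg=ubg)
# theta_opt = np.array(sol["x"]).ravel()   # stacked [x0_drto; u_T_drto]
```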
After solving problem (8), the DRTO identifies a set of variables \(\mathbf{r}_{e}^{\text{drto}}\in\mathbb{R}^{Tn_{r}}\) that univocally defines the optimal economic trajectory \[\mathbf{r}_{e}^{\text{drto}}=r_{e}(\mathbf{\hat{x}}_{T}^{\text{drto}},\mathbf{ u}_{T}^{\text{drto}}) \tag{9}\] and passes it to the STTO, which then transforms it into a valid reference for the MPC \((\mathbf{\hat{z}}_{N,j}^{\text{ref}},\mathbf{v}_{N,j}^{\text{ref}})\) (See Figure 1). Now, we see how to calculate the modifiers \(\lambda_{l}^{x},\lambda_{l}^{u}\) and \(\epsilon_{l}\) so that, upon convergence, the KKT conditions of problem (8) converge to those of the optimal problem. ### KKT Matching In this section we show how to update the modifiers \(\lambda_{l}^{x},\lambda_{l}^{u}\) and \(\epsilon_{l}\) so that the first order necessary conditions of optimality, also known as KKT conditions, of the P-MA DRTO (8) match with those of the optimal problem. Given period \(T\), the real optimal periodic trajectory \((\mathbf{x}_{T}^{\text{opt}},\mathbf{u}_{T}^{\text{opt}})\) can be computed as the optimal solution to the following optimization problem: \[\begin{split}(x_{0}^{\text{opt}},\mathbf{x}_{T}^{\text{opt}},& \mathbf{u}_{T}^{\text{opt}})=\\ \operatorname*{arg\,min}_{x_{0},\mathbf{x}_{T},\mathbf{u}_{T}}& \Phi(\mathbf{x}_{T},\mathbf{u}_{T})\\ \text{s.t.}&\mathbf{x}_{T}=F_{p}(x_{0},\mathbf{u}_{T}) \\ & G(\mathbf{x}_{T},\mathbf{u}_{T})\leq 0\\ & M\mathbf{x}_{T}=x_{0}.\end{split} \tag{10}\] For the sake of simplicity and comparison, we define \(\theta=\begin{bmatrix}x_{0}\\ \mathbf{u}_{T}\end{bmatrix}\) and reformulate (10) as: \[\min_{\theta} \Phi^{\theta}(F_{p}^{\theta}(\theta),\theta)\] (11a) s.t. \[G^{\theta}(F_{p}^{\theta}(\theta),\theta)\leq 0 \tag{11b}\] \[M_{1}F_{p}^{\theta}(\theta)+M_{2}\theta=\mathbf{0}, \tag{11c}\] where \(\Phi^{\theta},F_{p}^{\theta}\), \(G^{\theta}\), \(M1\) and \(M2\) are functions derived from rewritting the ones in (10) in terms of \(\theta\), i.e. \(\mathbf{x}_{T}=F_{p}^{\theta}(\theta)\) and similar, and (11c) corresponds to the periodic constraint. We also define a modified version of the dynamic RTO (8) at step \(l\): \[\min_{\theta} \Phi^{\theta}(F_{m}^{\theta}(\theta)+(\Lambda_{l}^{\theta})^{T} \theta+\epsilon_{l},\theta)\] s.t. \[G^{\theta}(F_{m}^{\theta}(\theta)+(\Lambda_{l}^{\theta})^{T} \theta+\epsilon_{l},\theta)\leq 0 \tag{12}\] \[M_{1}(F_{m}^{\theta}(\theta)+(\Lambda_{l}^{\theta})^{T}\theta+ \epsilon_{l})+M_{2}\theta=\mathbf{0}\] where \(\epsilon_{l}\) and \(\Lambda_{l}^{\theta}=\begin{bmatrix}\lambda_{l}^{x}&\lambda_{l}^{u}\end{bmatrix}^ {T}\) refers to the zeroth and first order modifiers respectively at step \(l\). 
The Lagrangian function associated to the problem (11) is: \[\mathbb{L}_{p}(\theta)= \Phi^{\theta}(F_{p}^{\theta}(\theta),\theta)+\pi_{1}^{T}\left(G^ {\theta}(F_{p}^{\theta}(\theta),\theta)\right)+\] \[\pi_{2}^{T}\left(M_{1}F_{p}^{\theta}(\theta)+M_{2}\theta\right),\] and its gradient with respect to the decision variable \(\theta\) is: \[\frac{\partial\mathbb{L}_{p}}{\partial\theta}= \frac{\partial\Phi^{\theta}}{\partial F_{p}^{\theta}}(\frac{ \partial F_{p}^{\theta}}{\partial\theta})+\frac{\partial\Phi^{\theta}}{ \partial\theta}+\pi_{1}^{T}\Big{[}\frac{\partial G^{\theta}}{\partial F_{p}^{ \theta}}(\frac{\partial F_{p}^{\theta}}{\partial\theta})+\frac{\partial G^{ \theta}}{\partial\theta}\Big{]}+\] \[\pi_{2}^{T}\Big{(}M_{1}(\frac{\partial F_{p}^{\theta}}{\partial \theta})+M_{2}\Big{)}.\] Analogously, the gradient of the Lagrangian function associated to problem (12) is the following: \[\frac{\partial\mathbb{L}_{m}}{\partial\theta}= \frac{\partial\Phi^{\theta}}{\partial F_{m}^{\theta}}(\frac{ \partial F_{m}^{\theta}}{\partial\theta}+\Lambda_{\infty}^{\theta})+\frac{ \partial\Phi^{\theta}}{\partial\theta}+\] \[\pi_{1}^{T}\Big{[}\frac{\partial G^{\theta}}{\partial F_{m}^{ \theta}}(\frac{\partial F_{m}^{\theta}}{\partial\theta}+\Lambda_{\infty}^{ \theta})+\frac{\partial G^{\theta}}{\partial\theta}\Big{]}+\] \[\pi_{2}^{T}\Big{(}M_{1}(\frac{\partial F_{m}^{\theta}}{\partial \theta}+\Lambda_{\infty}^{\theta})+M_{2}\Big{)}.\] Let \(\theta^{*}\) be the (a priori unknown) optimal operation of the system, then the KKT conditions associated to problem (11) are: \[\frac{\partial\mathbb{L}_{p}}{\partial\theta}(\theta^{*})=0 \tag{13a}\] \[G^{\theta}(F_{p}^{\theta}(\theta^{*}),\theta^{*})\leq 0\] (13b) \[M_{1}F_{p}^{\theta}(\theta)+M_{2}\theta=\mathbf{0}\] (13c) \[\pi_{1}^{*},\pi_{2}^{*}\geq 0\] (13d) \[(G^{\theta}(F_{p}^{\theta}(\theta^{*}),\theta^{*}))\pi_{1}^{*}=0\] (13e) \[(M_{1}F_{p}^{\theta}(\theta)+M_{2}\theta)_{j}\pi_{2,j}^{*}=0,\ \ j=0,1, \ldots,n_{x}+Tn_{u}. \tag{13f}\] Analogously, the KKT conditions associated to problem (12) are: \[\frac{\partial\mathbb{L}_{m}}{\partial\theta}(\theta^{*})=0 \tag{14a}\] \[G^{\theta}(F_{m}^{\theta}(\theta^{*})+(\Lambda_{l}^{\theta})^{T} \theta^{*}+\epsilon_{l},\theta^{*})\leq 0\] (14b) \[M_{1}(F_{m}^{\theta}(\theta^{*})+(\Lambda_{l}^{\theta})^{T} \theta^{*}+\epsilon_{l})+M_{2}\theta^{*}=\mathbf{0}\] (14c) \[\pi_{1}^{*},\pi_{2}^{*}\geq 0\] (14d) \[\big{(}G^{\theta}\big{(}F_{m}^{\theta}(\theta^{*})+(\Lambda_{l}^ {\theta})^{T}\theta^{*}+\epsilon_{l},\theta^{*}\big{)}\big{)}\pi_{1}^{*}=0\] (14e) \[\big{(}M_{1}(F_{m}^{\theta}(\theta^{*})+(\Lambda_{l}^{\theta})^{ T}\theta^{*}+\epsilon_{l})+M_{2}\theta^{*}\big{)}_{j}\,\pi_{2,j}^{*}=0,\] (14f) \[j=0,1,\ldots,n_{x}+Tn_{u},\] Therefore, the KKT conditions of both problems match upon convergence of the modifiers (represented by \(l=\infty\)) if and only if: \[\frac{\partial\mathbb{L}_{p}}{\partial\theta}(\theta^{*})=\frac{ \partial\mathbb{L}_{m}}{\partial\theta}(\theta^{*})=\mathbf{0} \tag{15a}\] \[F_{p}^{\theta}(\theta^{*})=F_{m}^{\theta}(\theta^{*})+(\Lambda_ {\infty}^{\theta})^{T}\theta^{*}+\epsilon_{\infty}. 
\tag{15b}\] To meet (15a), we need to set the first order modifiers \(\Lambda^{\theta}\) so that: \[\frac{\partial F_{p}^{\theta}}{\partial\theta}(\theta^{*})=\frac{\partial F_ {m}^{\theta}}{\partial\theta}(\theta^{*})+\Lambda_{\infty}^{\theta}\] Thus, the optimal modifiers \(\big{(}\lambda_{\infty}^{x},\lambda_{\infty}^{u}\big{)}\) must be computed as: \[\Lambda_{\infty}^{\theta}=\big{[}\lambda_{\infty}^{x}\quad\lambda_{\infty}^{u }\big{]}^{T}=\frac{\partial F_{p}^{\theta}}{\partial\theta}(\theta^{*})-\frac {\partial F_{m}^{\theta}}{\partial\theta}(\theta^{*}). \tag{16}\] To converge to the optimal modifiers, we follow an update policy similar to the one proposed in [9]. Let \(\theta_{l}^{\text{drto}}\) be the solution of the DRTO (8) at iteration \(l\), then the modifiers at iteration \(l+1\) are calculated as: \[\Lambda_{l+1}^{\theta}=\big{[}\lambda_{l+1}^{x}\quad\lambda_{l+1}^{u}\big{]}^ {T}=\frac{\partial F_{p}^{\theta}}{\partial\theta}(\theta_{l}^{\text{drto}})- \frac{\partial F_{m}^{\theta}}{\partial\theta}(\theta_{l}^{\text{drto}}). \tag{17}\] While the gradients of the model can usually be easily computed with user-defined precision, e.g. by numeric or analytical differentiation (Section 3.2), the gradients of the real system often are cumbersome and rely on noisy measures to carry out estimations. The estimation of such gradients is out of the scope of this paper and the reader is referred to other works such as [11, 12]. Given the modifiers \(\Lambda_{\infty}^{\theta}\) computed in (16), in order to meet (15b), the modifier \(\epsilon_{\infty}\) must be set as: \[\epsilon_{\infty}=F_{p}^{\theta}(\theta^{*})-\big{(}F_{m}^{\theta}(\theta^{*} )+(\Lambda_{\infty}^{\theta})^{T}\theta^{*}\big{)}. \tag{18}\] Applying an update like the one from (17), we get to the following update for \(\epsilon_{l}\): \[\epsilon_{l+1}=F_{p}^{\theta}(\theta_{l}^{\text{drto}})-\big{(}F_{m}^{\theta} (\theta_{l}^{\text{drto}})+(\Lambda_{l}^{\theta})^{T}\theta_{l}^{\text{drto}} \big{)}. \tag{19}\] ### Gradients of a linear model In this section, we derive the analytical expression for the gradients of a linear model. Given the discrete-time linear model \[\hat{x}_{k+1}=f_{m,k}(x_{k},u_{k})=A_{k}\hat{x}_{k}+B_{k}u_{k}, \tag{20}\] we have that \[\hat{\mathbf{x}}_{T}=F_{m}(x_{0},\mathbf{u}_{T})=\mathcal{F}^{x}x_{0}+ \mathcal{F}^{u}\mathbf{u}_{T}, \tag{21}\] or equivalently \[\hat{\mathbf{x}}_{T}=F_{m}^{\theta}(\theta)=\begin{bmatrix} \mathcal{F}^{x}&\mathcal{F}^{u}\end{bmatrix}\theta, \tag{22}\] where \[\mathcal{F}^{x}=\begin{bmatrix}A_{0}\\ A_{0}A_{1}\\ \vdots\\ \prod_{i=0}^{T-1}A_{i}\end{bmatrix},\quad\mathcal{F}^{u}=\begin{bmatrix}B_{0} \\ A_{1}B_{0}&B_{1}\\ \vdots&\vdots&\ddots\\ (\prod_{i=1}^{T-1}A_{i})B_{0}&(\prod_{i=2}^{T-1}A_{i})B_{1}&\ldots&B_{T-1} \end{bmatrix}. \tag{23}\] Therefore, the gradients of the model are constant and can be explicitly computed as \(\frac{\partial F_{m}^{\theta}}{\partial\theta}=\begin{bmatrix}\mathcal{F}^{x }&\mathcal{F}^{u}\end{bmatrix}^{T}\). ## 4 MPC for periodic operation Model predictive controllers (MPCs) are one of the multiple choices for the bottom layer of the two-layer scheme introduced in Section 2, often refered to as advanced control. Its objective is to calculate the control sequence that takes the system from its current state \(z_{j}\) to the reference given by the STTO. Contrary to the DRTO, the MPC generally has a more local and fast nature, which may make its correspondent real system different from the one presented in (1). 
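Before turning to the MPC layer in detail, the two computational ingredients just derived — the constant model gradients \([\mathcal{F}^{x}\ \mathcal{F}^{u}]\) of Section 3.2 and the modifier updates (17) and (19) — can be sketched as follows. This is our own NumPy rendering; the plant gradients and the plant response are assumed to be supplied by an external estimator, and all function names are ours:

```python
import numpy as np

def linear_model_gradients(A_list, B_list):
    """F^x (T nx x nx) and F^u (T nx x T nu) of (23) for x_{k+1} = A_k x_k + B_k u_k."""
    T, nx = len(A_list), A_list[0].shape[0]
    nu = B_list[0].shape[1]
    Fx = np.zeros((T * nx, nx))
    Fu = np.zeros((T * nx, T * nu))
    Phi = np.eye(nx)
    for k in range(T):                         # block row k corresponds to x_{k+1}
        for j in range(k + 1):
            Prod = np.eye(nx)
            for i in range(j + 1, k + 1):      # A_k ... A_{j+1}
                Prod = A_list[i] @ Prod
            Fu[k*nx:(k+1)*nx, j*nu:(j+1)*nu] = Prod @ B_list[j]
        Phi = A_list[k] @ Phi                  # A_k ... A_0
        Fx[k*nx:(k+1)*nx, :] = Phi
    return Fx, Fu

def update_modifiers(dFp_dtheta, dFm_dtheta, Fp_val, Fm_val, Lambda_old, theta):
    """Eqs. (17) and (19), with gradients stored as (n_x + T n_u) x (T n_x) arrays,
    following the paper's convention Lambda = dF_p/dtheta - dF_m/dtheta."""
    Lambda_new = dFp_dtheta - dFm_dtheta
    eps_new = Fp_val - (Fm_val + Lambda_old.T @ theta)
    return Lambda_new, eps_new
```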
Let the local system be defined as \[z_{j+1}=f_{p,j}^{\text{mpc}}(z_{j},v_{j}), \tag{24}\] where \(z_{j}\in\mathbb{R}^{n_{z}}\) and \(v_{j}\in\mathbb{R}^{n_{v}}\) represent respectively the states and inputs of the local system at time \(j\), and \(f_{p,j}^{\text{mpc}}:\mathbb{R}^{n_{z}\times n_{v}}\rightarrow\mathbb{R}^{n_{z}}\) represents the dynamics of the real system at time \(j\). Note that the local system is parameterized by \(j\) to indicate that the discretization time of the local system (\(t_{N}\) seconds) is generally different than that of the global system parameterized by \(k\) (\(t_{T}\) seconds). Therefore, the system is periodic with period \(t_{N}L=t_{T}T\) seconds. The MPC solves at each time step \(j\) an optimization problem to calculate the optimal sequence of control inputs. Given a reference at time \(j\) (\(\hat{\mathbf{z}}_{N,j}^{\text{ref}},\mathbf{v}_{N,j}^{\text{ref}}\)), we use an offset free MPC formulation based on [13; 14]. Let the model of the MPC local system (24) be defined as \[\hat{z}_{j+1}=f_{m}^{\text{mpc}}(\hat{z}_{j},v_{j},d_{j}), \tag{25}\] where \(f_{m}^{\rm mpc}\) is usually a linear system to allow fast MPC implementations and \(d_{j}\in\mathbb{R}^{n_{z}}\) is the so-called disturbance at time \(j\). In contrast to the local system (24), the model \(f_{m}^{\rm mpc}\) is time-invariant and its dependence of time comes through the disturbances \(d_{j}\). Local constraints are also considered, but for the sake of simplicity, only as box constraints on the inputs \(v_{j}\). More general constraints require robust formulations of the MPC to guarantee recursive feasibility [15; 16; 17] and are out of the scope of this work. The sequence of disturbances \(d_{j}\) is periodic over the periodic horizon \(L\) and are updated in such a way that, upon convergence, \[f_{m}^{\rm mpc}(z_{j},v_{j},d_{j})=f_{p,j}^{\rm mpc}(z_{j},v_{j}). \tag{26}\] Now we present a simple way to estimate the disturbances \[d_{j+L}=d_{j}+K^{d}\left(f_{p,j}^{\rm mpc}(z_{j},v_{j})-f_{m}^{\rm mpc}(z_{j},v _{j},d_{j})\right), \tag{27}\] where \(K^{d}\in\mathbb{R}^{n_{z}\times n_{z}}\) is a filtering matrix and the matrix \(K_{d}\) is stable. Given the current state \(z_{j}\) and the sequence of future disturbances \(\mathbf{d}_{N}=\begin{bmatrix}d_{j}&d_{j+1}&\cdots&d_{j+N-1}\end{bmatrix}\), the offset-free periodic MPC at time step \(j\) is formulated as follows: \[\begin{split}\left(\mathbf{z}^{*},\mathbf{v}^{*}\right)=\\ \operatorname*{arg\,min}_{\hat{\mathbf{z}}_{N},\mathbf{v}_{N}}& \ell^{\rm mpc}(\hat{\mathbf{z}}_{N},\mathbf{v}_{N},\hat{\mathbf{z}}_{N,j}^{\rm ref },\mathbf{v}_{N,j}^{\rm ref})\\ \text{s.t.}&\hat{z}_{i+1}=f_{m}^{\rm mpc}(\hat{z}_{i},v_{i},d_{j+i}), \text{ for all }i=0,1,\ldots,N-1\\ &v^{L}\leq v_{i}\leq v^{U},\text{ for all }i=0,1,\ldots,N-1\\ &\hat{z}_{0}=z_{j},\end{split} \tag{28}\] where \(\ell^{\rm mpc}\) is a cost function that penalizes the distance between the reference sequences of states and inputs, and \(\hat{\mathbf{z}}_{N}=\begin{bmatrix}\hat{z}_{1}&\hat{z}_{2}&\cdots&\hat{z}_{N}\end{bmatrix}, \mathbf{v}_{N}=\begin{bmatrix}v_{0}&v_{1}&\cdots&v_{N-1}\end{bmatrix}\). The current local state \(z_{j}\) is considered known. The MPC follows a receding horizon scheme, which means that only the first computed input \(v_{0}^{*}\) is applied to the system at each iteration of the MPC. ## 5 Steady trajectory target optimization (STTO) The solution of the DRTO presented in Section 3 leads to the reference trajectory \(r_{e}^{\rm drto}\). 
The objective of the STTO is to transform this reference trajectory into a valid target \((\hat{\mathbf{z}}_{L,j}^{\rm ref},\mathbf{v}_{L,j}^{\rm ref})\) for the MPC defined in Section 4, i.e. one feasible for the MPC constraints. The first step is to match the time scale of the DRTO (\(t_{T}\)) with that of the MPC (\(t_{N}\)). Usually, the DRTO works with longer time steps than the MPC (\(t_{T}>t_{N}\)). Therefore, one must transform the reference given by the DRTO into one with the same time scale of the MPC. The reference given by the DRTO spans a total duration of \(Tt_{T}\) seconds. To transform it into the time scale of the MPC, just divide it into segments of \(t_{N}\) seconds and check which value of the reference trajectory \(\mathbf{r}_{e}^{\rm drto}\) corresponds to each segment. To avoid dealing with segments that comprise two or more values, we assume that the DRTO sampling time \(t_{T}\) is a multiple of the MPC sampling time \(t_{N}\). The new reference with time scale \(t_{N}\) is denoted \(\mathbf{r}_{e}^{\rm stto}\) and its length is \(L\). Then, this reference is shifted to match the current time step \(j\).

Let \(\ell^{\text{stto}}:\mathbb{R}^{Ln_{z}\times Ln_{v}\times Ln_{r}}\to\mathbb{R}\) be a function that penalizes the distance between the trajectories of states and inputs of the MPC \((\mathbf{\hat{z}},\mathbf{v})\) and the reference trajectory \((\mathbf{r}_{e}^{\text{stto}})\). Then, the STTO problem at step \(j\) can be formulated as:
\[\begin{split}\left(\mathbf{\hat{z}}_{L,j}^{\text{ref}},\mathbf{v}_{L,j}^{\text{ref}}\right)=\operatorname*{arg\,min}_{\mathbf{\hat{z}},\mathbf{v}}&\quad\ell^{\text{stto}}(\mathbf{\hat{z}},\mathbf{v},\mathbf{r}_{e}^{\text{stto}})\\ \text{s.t.}&\quad\hat{z}_{i+1}=f_{m}^{\text{mpc}}(\hat{z}_{i},v_{i},d_{j+i}),\text{ for all }i=0,1,\ldots,L-1\\ &\quad v^{L}\leq v_{i}\leq v^{U},\text{ for all }i=0,1,\ldots,L-1.\end{split} \tag{29}\]

```
Init
  1. Initialize \(l=0\) and the modifiers \(\hat{\lambda}_{0}^{x},\hat{\lambda}_{0}^{u},\hat{\epsilon}_{0}\) to zero.
Loop
  2. Given the modifiers \(\hat{\lambda}_{l}^{x},\hat{\lambda}_{l}^{u},\hat{\epsilon}_{l}\), compute the optimal trajectory with the DRTO defined in (8) and (9) and obtain \(\mathbf{r}_{e}^{\text{drto}}\).
  3. Pass \(\mathbf{r}_{e}^{\text{drto}}\) as a reference to the STTO and MPC Algorithm.
  4. Estimate the gradients of the model and the real system and update the modifiers according to (17) and (19):
     \(\lambda_{l+1}^{x}=\frac{\partial F_{p}}{\partial x_{0}}\big|_{(x_{0}^{\text{drto}},\mathbf{u}_{T}^{\text{drto}})}-\frac{\partial F_{m}}{\partial x_{0}}\big|_{(x_{0}^{\text{drto}},\mathbf{u}_{T}^{\text{drto}})}\)
     \(\lambda_{l+1}^{u}=\frac{\partial F_{p}}{\partial\mathbf{u}_{T}}\big|_{(x_{0}^{\text{drto}},\mathbf{u}_{T}^{\text{drto}})}-\frac{\partial F_{m}}{\partial\mathbf{u}_{T}}\big|_{(x_{0}^{\text{drto}},\mathbf{u}_{T}^{\text{drto}})}\)
     \(\epsilon_{l+1}=F_{p}(x_{0}^{\text{drto}},\mathbf{u}_{T}^{\text{drto}})-\big(F_{m}(x_{0}^{\text{drto}},\mathbf{u}_{T}^{\text{drto}})+\hat{\lambda}_{l}^{x}x_{0}^{\text{drto}}+\hat{\lambda}_{l}^{u}\mathbf{u}_{T}^{\text{drto}}\big)\)
  5. Wait until next iteration and update \(l=l+1\).
End Loop
```
**Algorithm 1** DRTO algorithm (executed each \(t_{D}\) seconds)

```
Init
  1. Initialize disturbances of the first period to zero, i.e. \(d_{j}=0\), where \(j=0,1,\ldots,L-1\), and set \(j=0\).
Loop
  1. Get the current state \(z_{j}\).
  2. Given \(\mathbf{r}_{e}^{\text{drto}}\) from the DRTO Algorithm, shift it to match the current time step \(j\) and solve the STTO layer detailed in Section 5 to obtain \((\mathbf{\hat{z}}_{N,j}^{\text{ref}},\mathbf{v}_{N,j}^{\text{ref}})\).
  3. Given the reference trajectory \((\mathbf{\hat{z}}_{N,j}^{\text{ref}},\mathbf{v}_{N,j}^{\text{ref}})\), compute the MPC control input \(v_{j}\) from (28).
  4. Apply input \(v_{j}\) to the local system.
  5. Estimate the disturbance for the next period \(d_{j+L}\) using (27).
  6. Wait until next iteration and update \(j=j+1\).
End Loop
```
**Algorithm 2** STTO and MPC algorithm (executed each \(t_{N}\) seconds)

## 7 Illustrative example

In this section we show the performance of the periodic modifier-adaptation formulation of the DRTO introduced in this paper. For the sake of clarity, we omit the STTO and MPC layers and show how the reference trajectory computed by the P-MA formulation of the DRTO converges to the optimal periodic trajectory. To study the performance of the proposed approach, we test it against a periodic version of the quadruple tank process. This benchmark first proposed in [18] has been widely used to test different controllers [19]. The quadruple-tank system scheme is shown in Figure 2 and consists of four interconnected tanks that share water according to the following physical equations:
\[\begin{split} S\frac{dh_{1}}{dt}&=-a_{1}\sqrt{2gh_{1}}+a_{3}\sqrt{2gh_{3}}+\frac{\gamma_{a}q_{a}}{3600}\\ S\frac{dh_{2}}{dt}&=-a_{2}\sqrt{2gh_{2}}+a_{4}\sqrt{2gh_{4}}+\frac{\gamma_{b}q_{b}}{3600}\\ S\frac{dh_{3}}{dt}&=-a_{3}\sqrt{2gh_{3}}+(1-\gamma_{b})\frac{q_{b}}{3600}\\ S\frac{dh_{4}}{dt}&=-a_{4}\sqrt{2gh_{4}}+(1-\gamma_{a})\frac{q_{a}}{3600}.\end{split} \tag{30}\]
Figure 2: Quadruple-tank system diagram, reproduced from [19].

And are subject to the following box constraints:
\[\mathbf{h}_{\min}\leq\begin{bmatrix}h_{1}\\ h_{2}\\ h_{3}\\ h_{4}\end{bmatrix}\leq\mathbf{h}_{\max}\qquad\mathbf{q}_{\min}\leq\begin{bmatrix} q_{a}\\ q_{b}\end{bmatrix}\leq\mathbf{q}_{\max}. \tag{31}\] The quadruple tank process has some relevant properties: * It presents large coupling between its subsystems. * It dynamics are nonlinear. * States can be measured. * States and inputs are hard constrained. * Its real gradients can be analytically computed with the physical equations. We use a compact notation to define the parameters of the plant: \[\mathbf{a}=\begin{bmatrix}a_{1}\\ a_{2}\\ a_{3}\\ a_{4}\end{bmatrix},\mathbf{x}=\begin{bmatrix}h_{1}\\ h_{2}\\ h_{3}\\ h_{4}\end{bmatrix},\mathbf{u}=\begin{bmatrix}q_{a}\\ q_{b}\end{bmatrix},\boldsymbol{\gamma}=\begin{bmatrix}\gamma_{a}\\ \gamma_{b}\end{bmatrix},\] where water levels \(\mathbf{x}\) corresponds to the states and water flows \(\mathbf{u}\) to the inputs of the system. Information about the parameters is collected in Table 1 and (32). The periodic nature of the system is induced through parameter \(\boldsymbol{\gamma}\), whose cycle is shown in (32), where each column represent a constant value of \(\boldsymbol{\gamma}\) for \(t_{T}=3600\) seconds. Therefore, the plant is periodic with period \(T=7\) hours. \[\boldsymbol{\gamma}_{\text{cycle}}=\begin{bmatrix}0.3&0.4&0.5&0.7&0.6&0.4&0.2\\ 0.6&0.5&0.4&0.2&0.3&0.5&0.7\end{bmatrix}. \tag{32}\] The model of the system (30) is a discrete linear model with discretization time set to 5 seconds and linearized at the point: \[x_{0}=\begin{bmatrix}0.7293\\ 0.8102\\ 0.6594\\ 0.9408\end{bmatrix},\quad u_{0}=\begin{bmatrix}1.948\\ 2.00\end{bmatrix},\quad\boldsymbol{\gamma}_{0}=\begin{bmatrix}0.3\\ 0.4\end{bmatrix}\] Therefore, the model can be written as: \[\begin{split} x_{k+1}&=\begin{bmatrix}0.945&0&0.040&0\\ 0&0.940&0&0.032\\ 0&0&0.959&0\\ 0&0&0&0.967\end{bmatrix}(x_{k}-x_{0})+\\ &\begin{bmatrix}0.0135&0.0006\\ 0.0005&0.0180\\ 0&0.0272\\ 0.0319&0\end{bmatrix}(u_{k}-u_{0})+x_{0}.\end{split} \tag{33}\] At every time step, the system is subject to the box constraints on inputs and states from (31), i.e. \[\mathbf{h}_{\min}\leq\mathbf{x}\leq\mathbf{h}_{\max}\qquad\mathbf{q}_{\min}\leq \mathbf{u}\leq\mathbf{q}_{\max}. \tag{34}\] Given the economic parameters \(c=1\) and \(p=20\), the economic cost of operating the plant at each discrete time step is given by \[\phi(x_{k},u_{k})=(q_{a}^{2}+cq_{b}^{2})+p\frac{0.012}{S(h_{1}+h_{2})}.\] We apply Algorithm 1 to compute the optimal periodic trajectory for the system. The control process, i.e. steps (iii) and (iii) and Algorithm 2, is omited for the sake of clarity. The periodic constant is taken as \(T=7\) and the optional filtering of the modifiers proposed in Remark 1 has not been taken into account. The integration of the real process as well as the computation of the optimal trajectory and the P-MA DRTO reference trajectory have been computed using the CasADi optimization tool in Matlab [20]. The gradients of the real process have been computed using numerical differentiation on the real system (30), while those of the linear model have been computed using the results from Section 3.2. Figures 3 and 4 show the optimal trajectory computed by the DRTO with no first order modifiers. The inclusion and convergence of the zeroth order modifier \(\epsilon_{l}\) guarantees that the predicted trajectory matches the response of the real system. 
However, the lack of first order modifiers entails that, upon convergence, the computed input sequence is not optimal. Moreover, since the KKT conditions of this DRTO do not change with time, the sequence of inputs predicted at each iteration is constant over time. Notice how in Iteration 1 (Figures 3 and 4), all the modifiers are set to zero and the optimal predicted behaviour is a single steady state. This is due to the time-invariant model used by the DRTO, which differs vastly from the real periodic behaviour of the system.

\begin{table}
\begin{tabular}{c|c|c|c}
 & Value & Unit & Description \\ \hline \hline
\(S\) & \(0.03\) & \(\mathrm{m}^{2}\) & Cross-section of the tanks \\
\(\mathbf{a}\) & \(\left[\begin{array}{c}1.31\\ 1.51\\ 0.927\\ 0.882\end{array}\right]\times 10^{-4}\) & \(\mathrm{m}^{2}\) & Discharge constants \\
\(\mathbf{h}_{\max}\) & \(\left[\begin{array}{c}1.36\\ 1.36\\ 1.30\\ 1.30\end{array}\right]\) & \(\mathrm{m}\) & Maximum water level \\
\(\mathbf{h}_{\min}\) & \(\left[\begin{array}{c}0.2\\ 0.2\\ 0.2\\ 0.2\end{array}\right]\) & \(\mathrm{m}\) & Minimum water level \\
\(\mathbf{q}_{\max}\) & \(\left[\begin{array}{c}3.6\\ 4.0\end{array}\right]\) & \(\mathrm{m}^{3}/\mathrm{h}\) & Maximum water flow \\
\(\mathbf{q}_{\min}\) & \(\left[\begin{array}{c}0\\ 0\end{array}\right]\) & \(\mathrm{m}^{3}/\mathrm{h}\) & Minimum water flow \\
\(g\) & \(9.81\) & \(\mathrm{m}/\mathrm{s}^{2}\) & Gravity acceleration \\
\end{tabular}
\end{table}
Table 1: Parameters of the plant

Figure 5 shows how the P-MA DRTO achieves convergence to the optimal sequence of states, and Figure 6 shows that this convergence is also achieved with the optimal sequence of inputs. After 15 iterations, the sequences of states and inputs computed by the P-MA DRTO are sufficiently close to the optimal sequences.

Figure 4: Sequence of inputs calculated by the DRTO with the zeroth order modifier in different iterations (solid lines) vs optimal sequence of inputs (dashed lines).
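For readers who want to reproduce the setup, a minimal Python sketch of the plant dynamics (30), the input cycle (32) and the economic stage cost is given below, using the parameter values of Table 1. The integration scheme (forward Euler), the constant input, and all names are our own choices, not the authors':

```python
import numpy as np

S = 0.03                                          # tank cross-section [m^2]
a = np.array([1.31, 1.51, 0.927, 0.882]) * 1e-4   # discharge constants [m^2]
g = 9.81                                          # gravity [m/s^2]

gamma_cycle = np.array([[0.3, 0.4, 0.5, 0.7, 0.6, 0.4, 0.2],
                        [0.6, 0.5, 0.4, 0.2, 0.3, 0.5, 0.7]])  # eq. (32), one hour per column

def tank_dynamics(h, q, gamma):
    """dh/dt of (30); h = [h1..h4] in m, q = [qa, qb] in m^3/h."""
    qa, qb = np.asarray(q, dtype=float) / 3600.0  # convert flows to m^3/s
    ga, gb = gamma
    return np.array([
        -a[0] * np.sqrt(2 * g * h[0]) + a[2] * np.sqrt(2 * g * h[2]) + ga * qa,
        -a[1] * np.sqrt(2 * g * h[1]) + a[3] * np.sqrt(2 * g * h[3]) + gb * qb,
        -a[2] * np.sqrt(2 * g * h[2]) + (1 - gb) * qb,
        -a[3] * np.sqrt(2 * g * h[3]) + (1 - ga) * qa,
    ]) / S

def stage_cost(h, q, c=1.0, p=20.0):
    """Economic cost per discrete time step, as given in Section 7."""
    qa, qb = q
    return (qa**2 + c * qb**2) + p * 0.012 / (S * (h[0] + h[1]))

# Illustrative forward-Euler rollout over one period (T = 7 hours, dt = 5 s),
# starting at the linearization point with the nominal constant input:
h = np.array([0.7293, 0.8102, 0.6594, 0.9408])
q = np.array([1.948, 2.00])
dt = 5.0
for gamma in gamma_cycle.T:
    for _ in range(int(3600 / dt)):
        h = h + dt * tank_dynamics(h, q, gamma)
```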
2308.16489
Unraveling the Emission Mechanism of the HBL Source Mrk 180 with Multi-Wavelength Data
Markarian (Mrk) 180 is a high-frequency-peaked BL Lacertae (HBL) object, located at a redshift of 0.045, and a potential candidate for high-energy cosmic-ray acceleration. In this work, we carry out a temporal and spectral study using Fermi Large Area Telescope (Fermi-LAT) $\gamma$-ray data collected over 12.8 years. The 12.8-year-long, 30-day-binned Fermi-LAT $\gamma$-ray light curve does not show any significant enhancement of the flux. To understand the underlying physical mechanism, we focus our study on multi-wavelength spectral analysis. We construct the multi-wavelength spectral energy distribution (MWSED) using Swift X-ray, ultraviolet and optical data, and X-ray Multi-Mirror Mission (XMM-Newton) data, which have been analysed thoroughly. The SED has been modelled with three different scenarios: (i) a pure leptonic scenario, and lepto-hadronic scenarios in which we consider two types of hadronic interactions, namely (ii) line-of-sight interactions of ultrahigh-energy cosmic rays (UHECR; $E\gtrsim 10^{17}$ eV) with the cosmic background radiation, and (iii) interactions of relativistic protons with the cold protons within the blazar jet. We present a detailed comparative study of these three models. In an earlier study, Mrk 180 was associated with the Telescope Array (TA) hotspot of UHECRs at $E>57$ EeV, which motivates us to check whether Mrk 180 can be a source of UHECRs contributing to the TA hotspot. From our study, we find that, for conservative strengths of the extragalactic magnetic field, Mrk 180 is unlikely to be a source of the UHECR events.
Sandeep Kumar Mondal, Saikat Das, Nayantara Gupta
2023-08-31T06:52:20Z
http://arxiv.org/abs/2308.16489v1
# Unraveling the Emission Mechanism of the HBL Source Mrk 180 with Multi-Wavelength Data ###### Abstract: Markarian (Mrk) 180 is a high-frequency-peaked BL Lacertae (HBL) object, located at a redshift of 0.045, and a potential candidate for high-energy cosmic-ray acceleration. In this work, we carry out a temporal and spectral study using Fermi Large Area Telescope (Fermi-LAT) \(\gamma\)-ray data collected over 12.8 years. For the temporal study, the 12.8-year-long, 30-day binned Fermi-LAT \(\gamma\)-ray light curve does not show any significant enhancement of the flux. To understand the underlying physical mechanism, we focus on multi-wavelength spectral analysis. We construct the multi-wavelength spectral energy distribution (MWSED) using Swift X-ray, ultraviolet & optical, and X-ray Multi-Mirror Mission (XMM-Newton) data, which have been analysed thoroughly. The SED has been modelled with three different scenarios: (i) a pure leptonic scenario, and a lepto-hadronic scenario with two types of hadronic interactions, namely (ii) line-of-sight interactions of ultrahigh-energy cosmic rays (UHECRs; \(E\gtrsim 10^{17}\) eV) with the cosmic background radiation, and (iii) interactions of relativistic protons with the cold protons within the blazar jet. We present a detailed comparative study of these three models. In an earlier study, Mrk 180 was associated with the Telescope Array (TA) hotspot of UHECRs at \(E>57\) EeV, which motivates us to check whether Mrk 180 can be a source of UHECRs contributing to the TA hotspot. From our study, we find that, for conservative strengths of the extragalactic magnetic field, Mrk 180 is unlikely to be a source of the UHECR events. ## 1 Introduction An Active Galactic Nucleus (AGN) is the central core of an active galaxy, emitting highly variable radiation ranging from radio to very high-energy (VHE; E\(\gtrsim\)30 GeV) \(\gamma\)-rays. The emission from the central core is powered by accretion onto a supermassive black hole (SMBH), which leads to the formation of collimated jets along the direction of the angular momentum. BL Lacs are a subclass of AGN whose jets are directed towards the line of sight of the observer and whose spectra are featureless. The low-energy hump of the BL Lac SED is attributed to synchrotron radiation from the relativistic leptons, and the most prevalent explanation for the high-energy hump is the inverse Compton (IC) scattering process; in the case of BL Lacs, it is considered to be synchrotron self-Compton (SSC) emission. VHE \(\gamma\)-rays can also be produced by photohadronic (p\(\gamma\)) or hadronuclear (pp) interactions of the cosmic rays with the ambient medium in the emission zone or blazar jet, or by proton synchrotron emission. Mrk 180 is a BL Lac object located at redshift 0.045 (R.A. = 174.11008 deg, Decl. = 70.1575 deg), discovered by the Swiss astronomer Fritz Zwicky. In March 2006, VHE \(\gamma\)-ray emission was detected from this source for the first time [1], triggered by an optical outburst. This source was monitored by several telescopes, e.g. Fermi-LAT, Swift, the Major Atmospheric Gamma Imaging Cherenkov Telescope (MAGIC), XMM-Newton, Monitoring of Jets in Active Galactic Nuclei with VLBA Experiments (MOJAVE), KVA, and ASM. An earlier study [2] identified Mrk 180 as a possible source of UHECRs in the context of explaining the origin of the TA hotspot at \(E>57\) EeV. 
Motivated by this earlier study, we carry out a comprehensive study of Mrk 180 to ascertain the underlying mechanism of its high-energy \(\gamma\)-ray emission and whether it can be the source of UHECRs beyond 57 EeV contributing to the TA hotspot. In this work, we have mainly carried out a temporal and spectral study of Mrk 180. ## 2 Data Analysis **Fermi-LAT:** The Fermi-LAT is an imaging, pair-conversion, wide-field-of-view, high-energy \(\gamma\)-ray telescope that can detect photons of energy from 20 MeV to more than 300 GeV, with a field of view of 2.4 sr [3]. The LAT is Fermi's primary instrument. For this work, we have extracted the Pass 8 Fermi-LAT \(\gamma\)-ray data of Mrk 180 from the Fermi Science Support Center (FSSC) data server, covering August 2008 to May 2021, almost 12.8 years. We have analyzed these data with the Fermipy tool (v1.0.1; [4]), following the standard data analysis procedure. The long-term Fermi-LAT \(\gamma\)-ray light curve is shown in Fig. 1, and the SED used to construct the MWSED is shown in Figs. 2 and 3. **SWIFT XRT and UVOT:** The Neil Gehrels Swift Observatory is a multi-wavelength space-based observatory with three instruments onboard: the Burst Alert Telescope (BAT), the X-Ray Telescope (XRT), and the Ultraviolet and Optical Telescope (UVOT) [5]. We collected all the XRT and UVOT data of Mrk 180 over the period from August 2008 to May 2021 (44 observations). The standard data reduction procedure (e.g., source & background region selection) has been followed for the analysis. The final X-ray SED has been obtained after modelling with xspec (v12.11.0; [6]). Similarly, we have obtained UVOT SED points in all six filters, accounting for Galactic extinction. We have used the following extinction coefficient values corresponding to the different Swift-UVOT wavebands, obtained from the python module 'extinction' 1: U: 0.05584, V: 0.03460, B: 0.04603, UVW1: 0.07462, UVM2: 0.10383, UVW2: 0.09176. Footnote 1: [https://extinction.readthedocs.io/en/latest/#](https://extinction.readthedocs.io/en/latest/#) **XMM-Newton X-ray Data Analysis:** XMM-Newton is a space-borne X-ray observatory consisting of three imaging X-ray cameras (European Photon Imaging Camera or EPIC), two grating X-ray spectrometers (Reflection Grating Spectrometer or RGS), and one optical monitor (OM). The three EPIC cameras are the primary instrument aboard XMM-Newton; two of them are MOS-CCD cameras and the remaining one is the pn-CCD camera. From the data archive of XMM-Newton 2, we found two observations of Mrk 180: 0094170101 and 0094170301, of 20 ks and 8 ks respectively. We have followed the standard data reduction procedure 3 to extract the SED. Finally, we obtained SED data points from both the MOS and pn detectors. Thereafter, we used xspec (v12.11.0; [6]) to model these spectra. We also analyzed the OM image-mode data, following the same data reduction procedure. The first observation, 0094170101, contains only a single u-band data point, which is insufficient for further analysis, whereas the second observation, 0094170301, does not contain any image file suitable for further study. So, our multi-wavelength data do not contain any XMM-Newton OM data. Footnote 2: [http://nxsa.esac.esa.int/nxsa-web/#search](http://nxsa.esac.esa.int/nxsa-web/#search) Footnote 3: [https://www.cosmos.esa.int/web/xmm-newton/sas-threads](https://www.cosmos.esa.int/web/xmm-newton/sas-threads)
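For readers unfamiliar with Fermipy, the sketch below shows how a Fermi-LAT analysis of the kind described at the start of this section is typically driven (binned likelihood setup, fit, 30-day binned light curve, and SED extraction). It is a hedged illustration only: the configuration file name and the source name are placeholders, and the exact selections used here are not reproduced.

```python
from fermipy.gtanalysis import GTAnalysis

# 'config.yaml' is a placeholder configuration (ROI centre, time/energy range,
# IRFs, catalogs); the source name below must match the entry in the ROI model.
gta = GTAnalysis('config.yaml', logging={'verbosity': 3})
gta.setup()                       # livetime cube, exposure, source maps

gta.optimize()                    # coarse fit of all ROI sources
gta.free_source('Mrk 180')        # placeholder name; use the catalog name of the source
gta.free_source('galdiff')        # Galactic diffuse component
gta.free_source('isodiff')        # isotropic diffuse component
gta.fit()                         # full likelihood fit

# 30-day binned light curve and SED points used to build the MWSED
lc = gta.lightcurve('Mrk 180', binsz=30 * 86400.0)
sed = gta.sed('Mrk 180')
gta.write_roi('mrk180_roi')
```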
**Archival Data: MOJAVE, MAGIC & SSDC:** We have used archival data from MOJAVE, MAGIC and SSDC. MOJAVE is a long-term program to monitor radio brightness and polarization variations in jets associated with active galaxies visible in the northern sky. We have collected the MOJAVE data for Mrk 180, which consist of seven observations, from the MOJAVE/2cm Survey Data Archive 4. This research has made use of data from the MOJAVE database that is maintained by the MOJAVE team [7]. MAGIC is a system of two Imaging Atmospheric Cherenkov Telescopes (IACT), situated on the Canary Island of La Palma, which can detect \(\gamma\)-rays of energy between 30 GeV and 100 TeV. VHE \(\gamma\)-rays from Mrk 180 were detected during an optical outburst in 2006 [1]. We have used those data from 5 for our study. Lastly, we have collected data from the SSDC SED builder 6 and shown them with grey squares in the multi-wavelength SEDs (Figs. 2 and 3). Footnote 4: [https://www.cv.nrao.edu/MOJAVE/sourcepages/](https://www.cv.nrao.edu/MOJAVE/sourcepages/) Footnote 5: [http://nxsa.esac.esa.int/nxsa-web/#search](http://nxsa.esac.esa.int/nxsa-web/#search) ## 3 Long-term Fermi-LAT gamma-Ray Light curve The 12.8-year-long, 30-day binned Fermi-LAT \(\gamma\)-ray light curve (Fig. 1) does not show any significant variation in the \(\gamma\)-ray flux. There are a few data points with high \(\gamma\)-ray flux but large error bars, so a further temporal study is not feasible in this case. Figure 1: Application of the Bayesian Block method on the Fermi-LAT \(\gamma\)-ray light curve of Mrk 180 (MJD 54682.65-59355.67) ## 4 Multi-Wavelength SED Modeling We have constructed the MWSED with the analyzed and archival data (as described in Sec. 2), covering the radio to \(\gamma\)-ray wavebands. The MWSED has been modelled with three different models, discussed below. **Pure Leptonic Modeling:** We have considered a spherical emission region of radius \(R\) within the jet, moving with a Doppler factor \(\delta_{D}\), where relativistic electrons and positrons accelerated in the jet lose energy through synchrotron radiation in a steady and uniform magnetic field \(B\), and also through SSC emission. From the maximum likelihood analysis of the Fermi-LAT data, a log-parabola injection was found to best fit the data. Following [8], we have used a log-parabolic spectrum for the injected electrons in the blob to explain the MWSED of Mrk 180. We have used the open-source code 'GAMERA' [9] to model the MWSED. We consider a constant escape of the electrons from the emission region over the dynamical timescale. We find that the time-evolved electron spectrum reaches a steady state after nearly 100 days, and this spectrum has been used in this work. 
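To illustrate the leptonic ingredients above, the short sketch below evaluates a log-parabolic electron injection spectrum and normalizes it to an assumed injection luminosity. The reference energy, spectral parameters, and luminosity are placeholder values for illustration only, not the best-fit parameters obtained with GAMERA in this work.

```python
import numpy as np

# Log-parabolic injection: dN/dE ~ (E/E0)^-(alpha + beta*log10(E/E0))
E0 = 1.0e9               # reference energy in eV (placeholder)
alpha, beta = 2.0, 0.1   # placeholder log-parabola parameters
L_inj = 1.0e43           # assumed injection luminosity in erg/s (placeholder)
eV_per_erg = 6.242e11

E = np.logspace(7, 13, 600)                       # electron energies in eV
shape = (E / E0) ** -(alpha + beta * np.log10(E / E0))

# Normalize so that the total injected power equals L_inj
norm = L_inj * eV_per_erg / np.trapz(E * shape, E)   # normalization in 1/(eV s)
dNdE = norm * shape

print("peak of E^2 dN/dE at E = %.2e eV" % E[np.argmax(E**2 * dNdE)])
```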
**UHECR Interactions:** We have assumed a power-law injection of protons into the interstellar medium (ISM) within an energy range of 0.1-100 EeV. The ultra-high-energy protons escape from the emission region and propagate through the extragalactic medium, interacting with CMB and EBL photons. In this process, electrons, positrons, \(\gamma\)-rays, and neutrinos are produced through \(\Delta\)-resonance and Bethe-Heitler pair production. The neutral pions decay to \(\gamma\) photons (\(\pi^{\circ}\rightarrow\gamma\gamma\)) and the charged pions decay to neutrinos (\(\pi^{+}\rightarrow\mu^{+}+\nu_{\mu}\to e^{+}+\nu_{e}+\bar{\nu}_{\mu}+\nu_{\mu}\)). The resulting cosmogenic neutrinos propagate undeflected by magnetic fields and unattenuated by interactions with other particles. The secondary \(e^{\pm}\) and \(\gamma\)-rays initiate an electromagnetic (EM) cascade by undergoing pair production, inverse-Compton upscattering of the background photons, and synchrotron radiation in the extragalactic magnetic field (EGMF). The resulting spectrum extends down to GeV energies.

Figure 2: Left: (a) Pure leptonic modeling of the MWSED of Mrk 180; Right: (b) Leptonic + hadronic (UHECR) modeling of the MWSED of Mrk 180; with the residual plots corresponding to the modeling. The data color codes are mentioned in the plots.

We have used the publicly available simulation framework CRPropa 3 [10, 11] to propagate UHECR protons (for simplicity we have considered only protons) from their source to the observer. The secondary EM particles are propagated in the CRPropa simulation chain, using an EM thinning value of \(\eta=0.6\). **pp Interactions:** Relativistic protons of much lower energy than UHECRs remain trapped in the magnetic field of the emission region and interact with the cold protons within it. The proton-proton interactions result in the production of neutral and charged pions. These pions decay into secondary particles, e.g. electrons/positrons, neutrinos, and \(\gamma\)-rays. We have considered a power-law proton injection spectrum within the emission region, with a proton spectral index \(\alpha_{p}\) and an energy range of 10-10\({}^{4}\) GeV, using the publicly available code GAMERA for the time-independent \(pp\) modelling. We have balanced the total charge in the emission region to determine the total number of protons. The \(\gamma\)-ray spectrum produced in \(pp\) interactions has been corrected for internal absorption by the lower-energy photons inside the blob, and also for absorption by the EBL. ### Jet Power We have calculated the total kinematic jet power using the following equation: \(P_{\rm tot}^{k}=P_{e}+P_{B}+P_{p}=\pi R^{2}\Gamma^{2}c(u_{e}^{\prime}+u_{p}^{\prime}+u_{B}^{\prime})\), where \(P_{\rm tot}^{k}\) is the kinematic jet power, \(\Gamma\) is the bulk Lorentz factor, and \(u_{e}^{\prime}\), \(u_{p}^{\prime}\) and \(u_{B}^{\prime}\) are the energy densities of the relativistic electrons (and positrons), protons, and magnetic field, respectively, in the comoving jet frame [12, 13]. The primed and unprimed notations denote quantities in the comoving jet frame and the AGN frame, respectively. We have maintained the charge-neutrality condition in the jet. If we add the jet power of cold protons, the luminosity budget in the proton-proton interaction model exceeds the Eddington luminosity, as discussed in [12, 13]. We calculated the Eddington luminosity of Mrk 180 from its mass, obtaining 5.06-6.51\(\times 10^{46}\) erg/s. We compare only the kinematic jet power to the Eddington luminosity, as has been done in earlier papers.
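The kinematic jet-power bookkeeping above is simple enough to reproduce numerically. The sketch below evaluates \(P_{\rm tot}^{k}=\pi R^{2}\Gamma^{2}c(u_{e}^{\prime}+u_{p}^{\prime}+u_{B}^{\prime})\) and compares it to the quoted Eddington range; the blob radius, Lorentz factor, and energy densities are placeholder values for illustration, not the best-fit values listed in Table 1 of the paper.

```python
import numpy as np

c = 2.998e10                 # speed of light [cm/s]

def kinematic_jet_power(R, Gamma, u_e, u_p, u_B):
    """P_tot^k = pi R^2 Gamma^2 c (u_e' + u_p' + u_B'), all quantities in CGS."""
    return np.pi * R**2 * Gamma**2 * c * (u_e + u_p + u_B)

# Placeholder (illustrative) jet parameters, NOT the paper's best-fit values
R     = 1.0e16               # blob radius [cm]
Gamma = 10.0                 # bulk Lorentz factor
u_e   = 1.0e-3               # electron/positron energy density [erg/cm^3]
u_p   = 5.0e-3               # proton energy density [erg/cm^3]
u_B   = 1.0e-3               # magnetic energy density [erg/cm^3]

P_jet = kinematic_jet_power(R, Gamma, u_e, u_p, u_B)
L_edd = (5.06e46, 6.51e46)   # Eddington luminosity range quoted in the text [erg/s]

print(f"P_jet = {P_jet:.2e} erg/s, sub-Eddington: {P_jet < L_edd[0]}")
```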
## 5 Results The 12.8-year-long, 30-day binned Fermi-LAT \(\gamma\)-ray light curve (Fig. 1) of Mrk 180 does not show any significant flaring throughout this time, and the error bars of the high-energy \(\gamma\)-ray data points are large. To learn about the physical processes, we further investigate the long-term MWSED of Mrk 180. The MWSED shows the double-hump structure, which has been modelled with GAMERA, considering a simple one-zone spherical emission region within the jet. In Figs. 2 and 3, we have shown the MWSEDs fitted with the different models, e.g. pure leptonic and lepto-hadronic. We have also shown the residual ((Data-Model)/error) plot corresponding to the fit of each model in Figs. 2 and 3.

Figure 3: Leptonic + hadronic (\(pp\)) modeling of the MWSED of Mrk 180 and the residual plot corresponding to this modeling; the grey-shaded region denotes the difference between the attenuated and unattenuated regions of the total SED.

In the case of the pure leptonic model, the first hump is produced by the synchrotron radiation of the relativistic electrons, and the second hump by the up-scattering of the synchrotron photons by the relativistic electrons. From Fig. 2 (a) we can see that the SED from the pure leptonic model cannot fit the Swift UV data points. The slope of the observed X-ray and \(\gamma\)-ray data points cannot be explained with the slope of the theoretical SED; it poorly fits the \(\gamma\)-ray data points, and the highest-energy \(\gamma\)-ray data point cannot be fitted with this model. The residual plot corresponding to the pure leptonic model shows that this model poorly fits the Swift UV, X-ray, and MAGIC data. So this model cannot explain the MWSED at all. To improve the fit, particularly in the VHE \(\gamma\)-ray regime, we check the fit with lepto-hadronic models (Fig. 2 (b) and Fig. 3). In the case of UHECRs, the escape of protons from the blazar jet can dominate over the energy loss inside the blazar jet. We have considered a power-law injection of protons into the ISM between 0.1-100 EeV with a proton spectral index (\(\alpha_{p}\)) of 2.2. In this model, we consider the three-dimensional propagation of UHECRs to calculate the fraction of them that survives within 0.1\({}^{\circ}\) of the initial emission direction. We consider a random turbulent EGMF given by a Kolmogorov power spectrum with an RMS field strength of B\({}_{\rm rms}\approx 10^{-5}\) nG and a coherence length of 0.5 Mpc. We consider all these factors to calculate the \(\gamma\)-rays reaching the observer from the direction of the BL Lac. Fig. 2 (b) shows the resulting fit corresponding to this model. The green curve indicates the spectrum of cosmogenic photons. In this case, the highest-energy MAGIC data point can be fitted, but the fit to the X-ray data points has not improved. Moreover, the Swift UV data cannot be fitted well with this model. The residual plot corresponding to this model looks almost the same as that of the pure leptonic model between 10\({}^{-5}\) and 10\({}^{11}\) eV, except for the MAGIC data points. We subsequently consider the \(pp\) interactions within the jet. As explained in Sec. 4, the relativistically accelerated protons interact with cold protons and produce neutral and charged pions, which decay into photons, leptons, and neutrinos. The cold proton density is assumed to be n\({}_{\rm H}\)=1.2\(\times\)10\({}^{6}\) cm\({}^{-3}\). Fig. 3 shows an improvement in both the SED and the residuals. The SED fits the Swift UV data points and matches the slope of the X-ray and \(\gamma\)-ray data. We have not shown the residuals for the Swift optical data points, as they cannot be fitted with any of these models. Most of the \(\gamma\)-ray data points can be fitted in this model. The total kinematic jet power corresponding to each model is less than the Eddington luminosity of Mrk 180, as mentioned in Table 1. Also, the best-fit parameter values corresponding to each model are listed in the same table. 
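The residuals quoted above are straightforward to compute once model curves and data points are available. The following sketch evaluates the (Data - Model)/error quantity shown in the residual panels, with made-up arrays standing in for the actual flux points and model SEDs.

```python
import numpy as np

# Placeholder data: observed flux points, their errors, and a model evaluated
# at the same energies (arbitrary but consistent units).
flux_obs = np.array([2.1e-12, 1.5e-12, 9.0e-13, 4.0e-13])
flux_err = np.array([3.0e-13, 2.5e-13, 2.0e-13, 1.5e-13])
flux_mod = np.array([1.8e-12, 1.6e-12, 1.1e-12, 2.5e-13])

residual = (flux_obs - flux_mod) / flux_err     # (Data - Model) / error
chi2 = np.sum(residual**2)                      # simple goodness-of-fit summary

print("residuals:", np.round(residual, 2))
print("chi^2 / n =", chi2 / len(residual))
```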
To check whether Mrk 180 can be associated, as a UHECR source, with the TA hotspot at \(E>57\) EeV, we propagate UHECRs from the source to the Earth in a random turbulent magnetic field given by the Kolmogorov power spectrum. We consider three different combinations of the RMS value of the EGMF (B\({}_{\rm rms}\)) and composition (\({}^{1}\)H and \({}^{56}\)Fe) at the source, as shown in Fig. 4. The turbulence correlation length of the EGMF is taken to be 0.5 Mpc. The Galactic magnetic field (GMF) model is taken to be the one given in Jansson & Farrar [14]. We inject cosmic rays with a generic power-law spectrum given by \(dN/dE\sim E^{-2}\) and perform three-dimensional simulations including both GMF and EGMF in CRPropa 3 [10, 11]. After considering different B\({}_{\rm rms}\) values and compositions, we found that, for conservative strengths of the EGMF, the contribution of this source to the TA hotspot is disfavored. Thus, Mrk 180 may not be a plausible UHECR source for explaining the TA hotspot. ## 6 Discussions & Conclusion There is no significant flux variation in the long-term Fermi-LAT \(\gamma\)-ray light curve (Fig. 1). Also, the error bars of the high-energy \(\gamma\)-ray data points are too large to carry out a detailed temporal study of this source. Hence a more detailed analysis of the light curve cannot give us any useful information. We modelled the MWSED with GAMERA to explore the underlying emission mechanism of Mrk 180. It is found that a single-zone pure leptonic model cannot explain the multi-wavelength spectrum of Mrk 180 properly. We considered single-zone lepto-hadronic models to obtain better fits to the data. The residuals of the three models are compared, and the \(pp\) interaction model is found to give a better fit to the multi-wavelength data compared to the other two models; however, more observational data is necessary to explain the radiation mechanisms in Mrk 180, as our results show large residuals in all the cases. We look forward to future multi-wavelength campaigns covering all the frequencies over a long period to monitor this source more closely and give a definitive conclusion. Ref. [2] calculated the probability for several sources to contribute to the TA hotspot; Mrk 180 is one of them. It is important to know the role of Mrk 180 as a UHECR accelerator, and whether it can generate events above 57 EeV. In our study, we found that Mrk 180 is disfavoured as a source of the UHECR events contributing to the TA hotspot for conservative strengths of the magnetic field. In the future, with more observational data, it would be interesting to study the association of Mrk 180 with the TA hotspot.

Figure 4: Arrival direction of UHECRs at \(E>57\) EeV from Mrk 180 to Earth. The blue line shows the Galactic plane. The purple point and the purple dotted curve show the TA hotspot center and the \(20^{\circ}\) region around it. Similarly, the green dotted curve shows the \(20^{\circ}\) region around Mrk 180. The color bar indicates the energy per nucleon (E/z) of the observed events. From left, the figures correspond to (a) pure proton injection and B\({}_{\rm rms}\approx 10^{-3}\) nG; (b) pure proton injection and B\({}_{\rm rms}\approx 10^{-5}\) nG; (c) Fe injection and B\({}_{\rm rms}\approx 10^{-5}\) nG
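As a rough, back-of-the-envelope cross-check of this conclusion, one can use the standard small-angle estimate for the RMS deflection of a UHECR in a turbulent field, \(\theta_{\rm rms}\propto Z\,E^{-1}\,B_{\rm rms}\sqrt{D\,\lambda_{c}}\). The sketch below evaluates this estimate for a 57 EeV proton and the two conservative field strengths considered above. The order-unity prefactor depends on convention, the source distance is an approximate value assumed for z = 0.045, and this simple scaling does not replace the CRPropa simulations used in the text.

```python
import numpy as np

def theta_rms_deg(E_EeV, D_Mpc, B_nG, lam_Mpc=0.5, Z=1, prefactor=1.0):
    """Small-angle RMS deflection of a cosmic ray in a turbulent field.

    theta ~ prefactor * Z * (E/100 EeV)^-1 * sqrt(D/10 Mpc) * sqrt(lam/1 Mpc) * (B/1 nG),
    with an order-unity prefactor in degrees (its exact value depends on convention).
    """
    return (prefactor * Z * (100.0 / E_EeV)
            * np.sqrt(D_Mpc / 10.0) * np.sqrt(lam_Mpc / 1.0) * (B_nG / 1.0))

D = 195.0                    # approximate comoving distance to z = 0.045 in Mpc (assumption)
for B in (1e-3, 1e-5):       # nG, the two conservative EGMF strengths considered
    print(f"B_rms = {B:g} nG -> theta_rms ~ {theta_rms_deg(57.0, D, B):.4f} deg")
```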
2309.09858
Unsupervised Open-Vocabulary Object Localization in Videos
In this paper, we show that recent advances in video representation learning and pre-trained vision-language models allow for substantial improvements in self-supervised video object localization. We propose a method that first localizes objects in videos via an object-centric approach with slot attention and then assigns text to the obtained slots. The latter is achieved by an unsupervised way to read localized semantic information from the pre-trained CLIP model. The resulting video object localization is entirely unsupervised apart from the implicit annotation contained in CLIP, and it is effectively the first unsupervised approach that yields good results on regular video benchmarks.
Ke Fan, Zechen Bai, Tianjun Xiao, Dominik Zietlow, Max Horn, Zixu Zhao, Carl-Johann Simon-Gabriel, Mike Zheng Shou, Francesco Locatello, Bernt Schiele, Thomas Brox, Zheng Zhang, Yanwei Fu, Tong He
2023-09-18T15:20:13Z
http://arxiv.org/abs/2309.09858v2
# Unsupervised Open-Vocabulary Object Localization in Videos ###### Abstract In this paper, we show that recent advances in video representation learning and pre-trained vision-language models allow for substantial improvements in self-supervised video object localization. We propose a method that first localizes objects in videos via a slot attention approach and then assigns text to the obtained slots. The latter is achieved by an unsupervised way to read localized semantic information from the pre-trained CLIP model. The resulting video object localization is entirely unsupervised apart from the implicit annotation contained in CLIP, and it is effectively the first unsupervised approach that yields good results on regular video benchmarks. + Footnote †: *: Equal contribution. Ke Fan is the first intern author, Zechen Bai is the first FTE author; they contributed equally. Work done during Ke Fan's internship at the AWS Shanghai AI Lab.
## 1 Introduction Recent work [3] demonstrated that vision transformers, combined with self-supervised learning, can create a feature space in which patterns of the same object class cluster. This feature space was used in a series of papers to learn unsupervised segmentation models [19, 33, 25]. In another line of research, Locatello et al. [17] proposed an approach where so-called _slots_ localize individual objects in the scene and represent their visual patterns and properties. This type of approach belongs to the field of _object-centric learning (OCL)_. So far, much of the OCL literature has focused on static images. Only recently has it become possible to obtain good results not only on synthetic but also on real-world datasets [25]. Few papers have tried to apply a slot-based OCL approach to video data [15, 8]. These works scale to real-world data by adding some form of weak annotation. In contrast, this paper proposes an OCL pipeline for _real-world_ video data with two main goals: (1) partitioning the videos into spatially, temporally, and semantically meaningful groups _without using any labeled training data_; and (2) labeling those groups using an off-the-shelf vision-language model, such as CLIP [22]. Since CLIP was trained to align text with global image features, it is initially unable to align text with local features from individual objects. However, CLIP can be fine-tuned on image-text data to make it align local patterns to text [20]. In this paper, we show that paired image-text data is not necessary to allow for such alignment. We propose a self-supervised objective to fine-tune the last layer of the CLIP vision transformer, which trains on image data only. **Overview.** The overall framework is composed of three parts, as depicted in Figure 2. The individual processing stages are visualized in Figure 1. The first part yields bottom-up object-centric video representations (slots) that localize the regions in the video with a spatio-temporal grouping model (Section 3). The second part adapts the off-the-shelf image-text alignment model CLIP [22] to assign text to the video slots (Section 4). Finally, a merging procedure leverages overlapping information from the text and the image to improve both localization and labeling (Section 5). Our contributions are twofold: * We provide the first approach that localizes objects with spatio-temporal consistency in real-world videos, without labelled training data. * We assign text labels to video slots using a pre-trained CLIP model without additional supervised fine-tuning. ## 2 Related Work **Object-centric localization.** Our research contributes to the field of object-centric learning, which involves extracting individual objects from visual input using neural networks with inductive biases. 
Some approaches, such as using an information bottleneck with pixel-level reconstruction [27, 17, 30] or examining perceptual similarity through self-supervised representation pretraining [9, 6], have been previously explored. Our contribution is introducing temporal coherence into the mix, which improves object emergence and enhances the objectness signal by leveraging the Common Fate principle. Furthermore, there have been recent advances in object-centric learning for video applications [15, 8, 27, 1]. These approaches improve upon previous image-level methods by incorporating temporal modules into the network architecture. However, they either require additional annotation signals in the form of the first frame and optical flow, or they rely on complex modules to enable temporal tracking or matching of object slots in neighboring frames. In comparison, our method does not require any additional signals or specifically designed network modules, yet still achieves competitive performance on real-world videos. **CLIP for low-level vision tasks.** Recent advances in vision-language models have been driven by large-scale Contrastive Language-Image Pre-Training (CLIP) model [22]. While CLIP has shown promising results in low-level vision tasks, its training setup has been criticized for lacking localization ability [20, 16, 7, 31]. Some solutions involve using gradient activation maps [26] or extra annotation to finetune CLIP [31]. Our contribution is decoupling the tasks of object-centric learning and vision-language, allowing us to finetune CLIP with only a moderate _unlabeled_ dataset for semantic feature extraction, without the need for additional annotations. This approach stands in contrast to previous works that require additional training annotations for both localization and semantic labeling using a single vision-language module [31]. ## 3 Video slot learning **Problem setup.** Given a video \(V\in\mathbb{R}^{T\times H\times W\times 3}\), the task is to extract \(K\) video slots \(S_{\text{video}}\in\mathbb{R}^{K\times D_{\text{slots}}}\) that bind to object features in the video, as well as a video alpha mask \(\alpha\in\mathbb{R}^{K\times T\times H\times W}\) used to segment the video into \(K\) tubes. **Self-supervised video encoder.** For a pre-defined \(p\times p\times 1\) patch size, we tokenize the video \(V\) and flatten the spatio-temporal dimension to a sequence of tokens \(\text{tn}_{V}=\text{Tokenize}(V)\in\mathbb{R}^{T\times H^{\prime}\times W^{ \prime}\times D}\), where \(H^{\prime}=\lceil\frac{H}{p}\rceil\) and \(W^{\prime}=\lceil\frac{W}{p}\rceil\). After adding spatio-temporal positional encoding, the tokens are processed by a video encoder \(\text{E}_{\text{V}}\) as \(D\)-dimensional video token features, \(F=\text{E}_{\text{V}}(V)\in\mathbb{R}^{T\times H^{\prime}\times W^{\prime} \times D}\). The video encoder consists of a stack of transformer blocks, an architecture that has been shown effective for self-supervised vision representation learning without annotation [3, 28, 10]. **Learning video slots.** We generalize DINOSAUR [25], a state-of-the-art object-centric learning method for _static_ images, to videos. DINOSAUR passes the tokens of an im age, _e.g._, a frame at time step \(t\) through a pre-trained image encoder \(\text{E}_{t}\) and obtains \(D\)-dimensional image features \(f_{t}=\text{E}_{t}(\text{tn}_{V_{t}})\in\mathbb{R}^{H^{\prime}\times W^{\prime} \times D}\). 
It then flattens the spatial dimensions of \(f_{t}\) and applies slot-attention [17] as follows: \[S_{\text{img},t}=\text{SlotAttention}(f_{t}.\text{reshape}(H^{\prime}W^{\prime},D)).\] \(S_{\text{img},t}\) effectively represents image slots, each corresponding to a group of pixels that are supposed to cover an object. To use DINOSAUR on videos, a straightforward approach would be to run it separately on each frame of the video. However, this requires running slot-attention \(T\) times and results in poor temporal consistency among slots in different frames. Instead, we run slot-attention only once on the entire spatio-temporal features \(F\in\mathbb{R}^{T\times H^{\prime}\times W^{\prime}\times D}\) of the video. Specifically, we flatten the spatial and temporal dimensions and extract video slots for all frames as follows: \[S_{\text{video}}=\text{SlotAttention}(F.\text{reshape}(TH^{\prime}W^{\prime},D))\in\mathbb{R}^{K\times D_{\text{slots}}}.\] \(S_{\text{video}}\) are video slots that capture video features. Since patches from the same object tend to have similar features across subsequent frames, the slot attention mechanism tends to group them into the same slot, which ensures temporal consistency _by design_. **Slot decoder and training loss.** Given \(S_{\text{video}}\), each slot \(S_{i}\in\mathbb{R}^{D_{\text{slots}}}\) is first broadcast to a 3D volume \(V_{i}\in\mathbb{R}^{T\times H^{\prime}\times W^{\prime}\times D_{\text{slots}}}\) and added to a spatio-temporal positional encoding \(pe\); then a 2-layer Multi-Layer Perceptron (MLP) is applied at each position individually: \[y_{i},\alpha_{i}=\text{MLP}(V_{i}+pe)\] where we have \(\textbf{y}=[y_{1},\dots,y_{K}]\) with \(y_{i}\in\mathbb{R}^{T\times H^{\prime}\times W^{\prime}\times D}\) as the feature reconstruction, and \(\alpha=[\alpha_{1},\dots,\alpha_{K}]\) with \(\alpha_{i}\in\mathbb{R}^{T\times H^{\prime}\times W^{\prime}}\) as the alpha mask holding the patch-to-slot assignment weights. Specifically, let \(\alpha_{\cdot,j}=[\alpha_{1,j},\dots,\alpha_{K,j}]\) be the alpha mask weights for the \(j\)-th patch; then \(\sum_{i=1}^{K}\alpha_{i,j}=1\). We say that the \(j\)-th patch belongs to the \(i^{*}\)-th slot if \(i^{*}=\arg\max_{i}\alpha_{i,j}\). Similar to previous work [27, 25], during training we use the following reconstruction loss of the video features \(F\) based on the slot-attention outputs **y** and \(\alpha\): \[L=\text{MSE}(\textbf{y}\cdot\alpha,F).\]
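The sketch below illustrates the spatio-temporal grouping and broadcast-decoder steps described above in PyTorch. The slot-attention module is a deliberately simplified, generic implementation (no GRU refinement), not the authors' exact one, and all shapes and hyper-parameters are illustrative placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MiniSlotAttention(nn.Module):
    """Simplified slot attention (no GRU/MLP refinement), for illustration only."""
    def __init__(self, dim, n_slots, n_iter=3):
        super().__init__()
        self.slots = nn.Parameter(torch.randn(n_slots, dim) * 0.1)
        self.q, self.k, self.v = nn.Linear(dim, dim), nn.Linear(dim, dim), nn.Linear(dim, dim)
        self.n_iter = n_iter

    def forward(self, x):                         # x: (N, D) flattened tokens
        slots = self.slots                        # (K, D)
        k, v = self.k(x), self.v(x)
        for _ in range(self.n_iter):
            # softmax over slots: tokens compete for slots
            attn = torch.softmax(self.q(slots) @ k.t() / k.shape[-1] ** 0.5, dim=0)
            attn = attn / (attn.sum(dim=1, keepdim=True) + 1e-8)  # weighted mean per slot
            slots = attn @ v
        return slots                              # (K, D)

T, Hp, Wp, D, K = 4, 14, 14, 64, 6                # illustrative sizes
feats = torch.randn(T, Hp, Wp, D)                 # frozen video-encoder features F

# Spatio-temporal grouping: one slot-attention pass over all T*Hp*Wp tokens
slot_attn = MiniSlotAttention(D, K)
slots = slot_attn(feats.reshape(T * Hp * Wp, D))  # (K, D) video slots

# Broadcast decoder: tile each slot over (T, Hp, Wp), add a positional encoding,
# and map it with a small MLP to a feature reconstruction plus an alpha logit.
pos = torch.randn(T, Hp, Wp, D)                   # learned PE in practice
mlp = nn.Sequential(nn.Linear(D, 128), nn.ReLU(), nn.Linear(128, D + 1))
vols = slots[:, None, None, None, :].expand(K, T, Hp, Wp, D) + pos
out = mlp(vols)                                   # (K, T, Hp, Wp, D+1)
y, alpha = out[..., :D], torch.softmax(out[..., D], dim=0)  # alpha sums to 1 over slots

recon = (y * alpha.unsqueeze(-1)).sum(dim=0)      # (T, Hp, Wp, D)
loss = F.mse_loss(recon, feats)                   # L = MSE(y . alpha, F)
print("reconstruction loss:", loss.item())
```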
## 4 Labeling video slots **Problem Setup.** Given a video \(V\in\mathbb{R}^{T\times H\times W\times 3}\), its alpha mask \(\alpha\in\mathbb{R}^{K\times T\times H^{\prime}\times W^{\prime}}\), and a list of \(N\) potential labels with corresponding features \(f_{\text{text}}\in\mathbb{R}^{N\times D_{\text{sem}}}\), the task is to come up with a slot-to-label assignment matrix \(A\in\mathbb{R}^{K\times N}\). **A naive baseline with CLIP.** CLIP [22] is a vision-language model pre-trained on an internet-scale dataset. For each image-text pair, CLIP summarizes the text and vision features in tokens \(\mathit{CLS}_{t}\) and \(\mathit{CLS}_{v}\) and aligns them using contrastive learning. The tokens \(\mathit{CLS}\) are constructed using an N-stack transformer. Since CLIP can be used to label almost any full image, a straightforward approach for slot labeling would be to wrap the video slots into bounding boxes, crop the regions, and send each of them to CLIP. However, this method suffers from two problems. First, it requires multiple runs of the CLIP visual encoder for one input image. Second, when a slot is non-convex in shape, its corresponding bounding box would inevitably include content that does not belong to the slot and eventually bias the features from CLIP. In the following, we describe a more efficient and unbiased approach to slot labeling, which requires minor modifications of the pre-trained CLIP model but no additional labels.

Figure 2: **Proposed framework. Given an input video, we first localize objects by slot attention with a video encoder pretrained with self-supervision. Next, we extract semantic features for each slot by a patch-based CLIP finetuned from its vanilla version. Then, slots are named by matching slot semantic features to text features from a curated list of text prompts. Finally, the named slots are optimized to alleviate over-segmentation caused by part-whole hierarchies.**

**Efficient slot labeling with semantic features.** Assume there is an image semantic encoder \(E_{\text{sem}}\) that operates on the same patch size \(p\) as in Section 3. We encode the \(t\)-th frame \(I_{t}\) into a set of semantic patch features \(\{P_{t}\}=E_{\text{sem}}(I_{t})\in\mathbb{R}^{H^{\prime}\times W^{\prime}\times D_{\text{sem}}}\). For each video slot, we average the patch features in this slot to obtain the slot semantic feature \(\{f_{\text{slot}}\}\in\mathbb{R}^{K\times D_{\text{sem}}}\). Given a list of potential semantics and corresponding features \(\{f_{\text{text}}\}\in\mathbb{R}^{N\times D_{\text{sem}}}\), we compute the cosine similarity between \(f_{\text{slot}}\) and \(f_{\text{text}}\). Let \(A\) be the matrix of slot-text similarity: \(A_{ij}=\cos(f_{\text{slot},i},f_{\text{text},j})\). For each slot, we find the index of the text with maximum feature cosine similarity, \(j^{*}=\arg\max_{j}A_{ij}\), and label the slot accordingly.
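The labeling rule above amounts to a (slot-weighted) average of patch features followed by a cosine-similarity argmax against text embeddings. Below is a small PyTorch sketch of that computation; random tensors stand in for patch-based CLIP features and text features, soft alpha weights are used for the averaging (a hard per-patch assignment would also match the text), and the threshold for "unnamed" slots is the one discussed later.

```python
import torch
import torch.nn.functional as F

K, T, Hp, Wp, Dsem, N = 6, 4, 14, 14, 512, 30   # illustrative sizes

alpha   = torch.rand(K, T, Hp, Wp)              # patch-to-slot assignment weights
alpha   = alpha / alpha.sum(dim=0, keepdim=True)
patches = torch.randn(T, Hp, Wp, Dsem)          # patch-based CLIP patch features (placeholder)
f_text  = torch.randn(N, Dsem)                  # CLIP text features of label prompts (placeholder)

# Slot semantic features: alpha-weighted average of patch features per slot
w = alpha.reshape(K, -1)                                   # (K, T*Hp*Wp)
p = patches.reshape(-1, Dsem)                              # (T*Hp*Wp, Dsem)
f_slot = (w @ p) / w.sum(dim=1, keepdim=True)              # (K, Dsem)

# Cosine-similarity matrix A_ij and label assignment j* = argmax_j A_ij
A = F.normalize(f_slot, dim=-1) @ F.normalize(f_text, dim=-1).t()   # (K, N)
best_sim, best_idx = A.max(dim=1)

lam = 0.0                                        # threshold below which a slot stays "unnamed"
labels = [int(j) if s > lam else None for j, s in zip(best_idx, best_sim)]
print(labels)
```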
**Local semantics from CLIP.** Although the output of CLIP's visual encoder contains both the \(\mathit{CLS}_{v}\) token and image patch tokens, the contrastive loss used during pre-training only enforces alignment of the text token \(\mathit{CLS}_{t}\) with \(\mathit{CLS}_{v}\), not with the other visual tokens. And in practice, those other patch tokens do not align with the text features \(\mathit{CLS}_{t}\), as observed by [20, 16] and confirmed by our own initial experiments. Therefore, the visual encoder from a vanilla CLIP cannot be used without further modification as the semantic encoder \(E_{\text{sem}}\). To fix this issue, [20, 16] adapt the CLIP visual encoder to local semantic features by fine-tuning it on additional vision data with image-text pairing, for example segmentation masks [16]. **Patch-based CLIP.** In contrast, we adapt the pre-trained CLIP visual encoder to a _patch-based CLIP_ visual encoder using only _unlabeled images_, as illustrated in Figure 3. The idea is based on the following observation: the semantically informative token \(\mathit{CLS}_{v}\) is the output of the ViT's last layer, so we can train a new function that re-projects the information from the last layer's input to new patch features. Formally, given an input image \(I\) and the pre-trained CLIP visual encoder \(\mathrm{E}_{\text{CLIP}}^{v}\), we consider both its output token \(\mathit{CLS}_{v}\in\mathbb{R}^{D_{\text{sem}}}\) and the patch tokens \(P^{\prime}\in\mathbb{R}^{M\times D_{\text{sem}}}\) of the ViT that were used in the penultimate layer to construct \(\mathit{CLS}_{v}\). We replace the last layer by a new multi-head self-attention module \(\mathrm{E}_{\text{MHSA}}^{v}\) and obtain new patch tokens \(\hat{P}=\mathrm{E}_{\text{MHSA}}^{v}(P^{\prime})\). This new module is trained with self-supervision, under the assumption that a pre-trained CLIP has its \(\mathit{CLS}_{v}\) close to the corresponding text token \(\mathit{CLS}_{t}\). We use \(\mathit{CLS}_{v}\) as a proxy for the semantic representation space. We first compute a cross-attention with the query being \(\mathit{CLS}_{v}\), while keys and values are both \(\hat{P}\): \[\hat{P}_{v}=\text{Softmax}(\hat{P}\cdot\mathit{CLS}_{v}^{T})\cdot\hat{P}. \tag{1}\] We then minimize the InfoNCE loss [21] between the aggregated feature \(\hat{P}_{v}\) and \(\mathit{CLS}_{v}\) across data samples: \[\phi_{ij}=\frac{\hat{P}_{v,i}}{|\hat{P}_{v,i}|}\cdot\frac{\mathit{CLS}_{j}}{|\mathit{CLS}_{j}|}, \tag{2}\] \[\mathcal{L}_{\text{InfoNCE}}=-\frac{1}{k}\sum_{i}^{k}\log\Big(\frac{e^{\phi_{ii}}}{\sum_{j}^{k}e^{\phi_{ij}}}\Big). \tag{3}\] We use this scheme to adapt pre-trained CLIP patch features to the image-text joint space while training only on unlabeled image data. We do not use any visual annotation or image-text pairs. Intuitively, our modification of CLIP simply allows us to read out information that was already contained in the CLIP model, but was not readily available.

Figure 3: **Patch-based CLIP finetuning**. We replace the last ViT layer by a multi-head self-attention module to re-project semantic information to the new patch tokens. We then use cross-attention to encourage those patches that contain the main context to be similar to the \(\mathit{CLS}_{v}\) token. Maybe surprisingly, this suffices to get semantically meaningful patch features. Importantly, we do not use any labeled data during this fine-tuning step.

## 5 Post-processing and joint optimization The localization and text assignment procedures above already provide a good basis, but they leave room for improvement. In the following, we describe two post-processing steps that improve clustering on the text and the video data, respectively. **Optimizing labels with curated vocabulary.** The similarity-based algorithm described in Section 4 requires a vocabulary of possible labels. In principle, the patch-based CLIP is capable of embedding and recognizing any word that appears in its large-scale training set. However, an overly large vocabulary can lead to sub-optimal results due to synonyms and to less robust ranking against a too-long list of text features. In practice, a good vocabulary should include both _target labels_ and _common background labels_. Target labels are the dataset-specific foreground object categories of interest.
Specifically, for each pair of slots \(S_{i}\) and \(S_{j}\), we consider them as being part of the same object if they have the same label and are spatial neighbors in at least one frame. This merging process is repeated until there's no pair of slots to merge. Figure 4 illustrates this process. Although seemingly simple, this step is crucial to harness the part-whole issue. It builds both parts of our pipeline so far: the localization and the semantic label information. This concludes the description of our approach. The next section describes our evaluation pipeline. ## 6 Datasets, implementations and baselines **Implementation details.** (1) _Vision backbone._ We use the original VideoMAE module with two modifications: 1) we set each token to represent a \(p\times p\times 1\) data patch instead of \(p\times p\times 2\), and 2) we use 3D trigonometric function combination as the positional encoding. This modified VideoMAE is pre-trained on Something-Something-v2 [11] without supervision from human annotation. (2) _Patch-based CLIP_ We train the patch-based CLIP model on ImageNet-1K [23] dataset without any class label. Specifically, we train a multi-head self-attention network to replace the last layer of the CLIP vision Transformer encoder. **Other baselines.** For video slot learning, we compare against a number of OCL pipelines that output slots for each frame. Those baselines do not perform slot association across frames and do not produce video slots (i.e., \(K\) coherent tubes for a video). Nevertheless, their output can be sent into the patch-based CLIP to be named. (1) _Slot-Attention_[17] uses a convolutional neural network to encode an image and extracts \(K\) object-centric features using slot attention. (2) _SAVi_[15] is an extension of Slot Attention to videos that initializes the slots with some form of weak supervision, and then uses the slot from the previous frame to initialize the current frame. Since our method is completely unsupervised, for a fair comparison, we do not provide this weak annotation in our evaluation. (3) _DINOSAUR_[25] is a pixel grouping method that manages to group pixels on real-world data by freezing a pre-trained network as the backbone and reconstructing the extracted features. Out of the three baselines, only DINOSAUR uses a pre-trained backbone (self-supervised with DINO [3] on unlabeled ImageNet-1K). **Datasets.** (1) _ImageNet-VID_ (I-VID) is a common benchmark for video object detection which includes 30 object categories of the ImageNet DET dataset. (2) _YouTube-VIS_ (Y-VIS) is a large-scale video instance segmentation dataset, which also includes object bounding boxes. **Metrics.** To evaluate the localization performance of our method, we apply standard metrics in _object localization_ tasks. Similar to previous studies [25, 29], we report _CorLoc_, which is the proportion of images where at least one object was correctly localized, and _DecRate_, which measures the percentage of objects that are correctly detected out of all the ground-truth objects. To further evaluate the overall labeling performance, we use the traditional object detection metric mean Average Precision (mAP), which examines the accuracy of both object localization and semantic classification. Besides, to understand how well our framework learns video slots, we quantify whether a slot indeed captures a single object. Empirically, we observe that there are several other typical cases that deviate from that ideal behavior. 
To quantitatively analyze the challenge posed by part-whole hierarchies, we categorize slots that overlap with objects based on whether they contain a single object (SO), part of an object (PO) or a group of objects (GO). We label Figure 4: **Joint optimization**. The image on the left shows one frame of video slots with target names, the image on the right shows the result from merging. The two slots for the bus are merged thus better localizing the object, while they don’t further merge with the slot for car since they share different semantics. Figure 5: **Three types of slots.** We show examples of a slot containing a single object, part of an object or a group of objects. For simplicity we only colorize slots overlapping with objects, and slots in one image are colored differently. all those that do not overlap with any object or have only minor overlap with an object as background (BG). See Figure 5 for a qualitative explanation and Appendix for details. Please note that _the classification of these categories is only meant to give a qualitative and informative assessment of the distribution of errors_. ## 7 Experiments The complete design space is large, it includes, for video slot learning, 1) image (e.g., DINO) versus video backbone (e.g., VideoMAE) and 2) spatio-temporal versus per-frame grouping, and for video slot labeling, 3) vanilla CLIP versus patch-aligned adaptation and 4) joint optimization using both text and image features. This paper does not attempt to exhaust this space. However, we will provide evidence that combining a video backbone with spatio-temporal grouping gives the best video slot learning ability, that CLIP adaptation is crucial, and that joint learning proves to be very effective. This conclusion is not at all unexpected. We now proceed to examine our (incomplete) evaluations with a set of questions. **Q1: How does our complete pipeline compare to the baselines?** We present a quantitative evaluation on two datasets in Table 1. As pixel-reconstruction methods, slot-attention and SAVi perform poorly on both real-world video datasets across all metrics, confirming the ineffectiveness of the pixel-space reconstruction loss. DINOSAUR, which relies on feature reconstruction, performs better, particularly in terms of CorLoc and DecRate. However, its mAP score is only around 8, because it fails to align localization with semantics. In contrast, our model outperforms all other approaches across all metrics on both datasets, thanks to its ability to achieve spatio-temporal consistency and joint optimization with semantics. **Q2: Does the choice of vision backbone and grouping algorithm matter for slot learning?** As described in Section 3, video slot learning depends on the choice of vision backbone and grouping algorithm. Table 2 demonstrates that a video-based backbone, such as VideoMAE, leads to superior performance in spatio-temporal slot learning across all metrics. Conversely, the spatial-only slot learning produces better results with an image-based backbone. Therefore, the combination of video backbone and grouping algorithm is crucial and must be selected accordingly. Although spatial DINO outperforms our model in terms of CorLoc and DecRate, we contend that these metrics pertain to image-based approaches. The higher mAP\({}_{50}\) score suggests that the temporal consistency ultimately improves the correct labeling and overall detection performance. 
To further illustrate the benefits of temporal consistency, we provide additional qualitative visualizations in Figure 6. **Q3: How important is CLIP-based joint optimization?** To address the importance of image-text alignment, we conduct a quantitative comparison of joint optimization with and without image-text alignment, as well as different design choices for alignment, using object localization and detection metrics. Our findings, summarized in Table 3, reveal that image-text alignment plays a critical role in joint optimization, with the choice of alignment method impacting all evaluation metrics. Notably, when joint optimization is applied, incorrect labeling using vanilla CLIP significantly impacts CorLoc and DecRate, while alignment using patch-based CLIP leads to significant improvements across all three metrics: we can see from the first two rows only mAP is different, and the one with patch-based CLIP is better. These results stress the importance of carefully tuning image-text alignment for optimal performance. ### Part-whole problem with joint optimization To estimate the effectiveness of slot localization in identifying objects, we assess SO, PO, GO, and BG on both I-VID and Y-VIS datasets, and present the results in Table 4. Since slots are a direct output of video slot learning, they are more inclined to focus on parts of objects, accounting for \(24.05\%\) and \(34.80\%\) of the results (the PO column), rather than the entire object, which only represents \(8.87\%\) and \(6.04\%\). We observe that slots rarely contain several objects (GO column), and instead, often capture background \begin{table} \begin{tabular}{c c c c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{3}{c}{ImageNet-VID} & \multicolumn{3}{c}{Youtube-VIS} \\ \cline{2-6} & CorLoc & DecRate & mAP\({}_{50}\) & CorLoc & DecRate & mAP\({}_{50}\) \\ \hline Slot-Attention & 11.64 & 7.83 & 0.51\({}^{*}\) & 10.27 & 6.57 & 0.51\({}^{*}\) \\ SAVi & 16.23 & 10.55 & 1.23\({}^{*}\) & 11.10 & 7.18 & 0.90\({}^{*}\) \\ DINOSAUR & 48.04 & 39.58 & 7.34\({}^{*}\) & 44.80 & 34.63 & 8.15\({}^{*}\) \\ \hline **Ours** & **60.90** & **53.75** & **29.23** & **63.74** & **55.05** & **35.19** \\ \hline \hline \end{tabular} \end{table} Table 1: **Comparison with other object-centric methods on object localization and slot labeling.** Note that numbers with \({}^{*}\) superscript means that the method uses our slot labeling module when computing the data. \begin{table} \begin{tabular}{c c c c c} \hline \hline Backbone & Video Slot & CorLoc & DecRate & mAP\({}_{50}\) \\ \hline \multirow{2}{*}{DINO} & \(\times\) & 62.12 & 54.79 & 28.75 \\ & ✓ & 59.84 & 52.98 & 28.40 \\ \hline \multirow{2}{*}{VideoMAE} & \(\times\) & 57.16 & 50.25 & 25.77 \\ & ✓ & 60.90 & 53.75 & 29.23 \\ \hline \hline \end{tabular} \end{table} Table 2: **Ablations on vision backbones and video slots learning algorithm** on ImageNet-VID. classes (\(65\%\) and \(58\%\); BG column). The reason for the relatively low proportion of SO can be attributed to the fact that visual similarity and common-fate inductive bias can only offer limited information when describing an object as a whole. Since semantics is not inherently present in the visual modality, it is challenging to capture the full meaning of an object with visual cues alone. Our joint optimization approach demonstrates a significant increase in the ratio of SO slots. 
As illustrated in Table 4, the percentage of SO slots rises to \(49.33\%\) and \(40.75\%\), respectively, after the optimization with slot labeling. The reduction in PO and BG ratio indicates the source of improvement. First, slots that were previously tracking a significant part of the same object (PO) are merged to form a larger new slot. Further, each BG slot either has no overlap to any object, or only has minor overlap to one object. The reduction in BG ratio indicates that some of the latter case are correctly labeled and eventually merged. This serves as an evidence that the joint optimization directly alleviate the part-whole issue. It should be noted that the proportion of GO also slightly increases, indicating that the model occasionally merge slots too aggressively for multiple instances in the same semantic category. This phenomenon again reflects the challenging associated with part-whole hierarchies. Regarding the accuracy of labeling, Table 3 presents the mAP score, which takes into account both localization and semantic labeling performance. With the aid of named localization and joint optimization, the model achieves a substantial increase in mAP score of around \(24\) (\(5.20\to 29.23\)) and \(30\) (\(5.96\to 35.19\)), respectively. Although these results fall short of the SOTA performance achieved by models using labels [4, 12], considering that our framework is entirely unsupervised, the results are encouraging. ### Qualitative case studies **Spatio-temporal video slots.** Our model offers a significant advantage over ordinary image slots in the form of spatio-temporal video slots. These slots consistently focus on the same part across frames, which ensures tempo \begin{table} \begin{tabular}{c c c c c c} \hline \hline Dataset & Joint & SO & PO & GO & BG \\ \hline \multirow{2}{*}{I-VID} & \(\times\) & 8.87\% & 24.05\% & 1.96\% & 65.13\% \\ & ✓ & 49.33\% & 20.42\% & 7.01\% & 23.24\% \\ \hline \multirow{2}{*}{Y-VIS} & \(\times\) & 6.04\% & 34.80\% & 0.77\% & 58.39\% \\ & ✓ & 40.75\% & 28.32\% & 5.64\% & 25.29\% \\ \hline \hline \end{tabular} \end{table} Table 4: **Statistics of video slots.** We counted the percentage for each type of slots. **SO**: Single Object. **PO**: Part of an Object. **GO**: Group of Objects. **BG**: Slots for not _annotated_ regions in the dataset. Figure 6: **Consistent localization in video.** We compare the localization performance from DINOSAUR and ours on two 8-frame video samples. We visualize the spatially interpolated localization for patch-based slot assignment. DINOSAUR is an image-based baseline thus its localization has no temporal consistency. In contrast, ours, as a spatio-temporal localization method, provides temporal consistent localization, as the same region or object are consistently localized by the same slot (indicated by the same color) across frames. 
\begin{table} \begin{tabular}{c c c c c} \hline \hline Dataset & Joint & CLIP & CorLoc & DecRate & mAP\({}_{50}\) \\ \hline \multirow{3}{*}{I-VID} & \multirow{3}{*}{\(\times\)} & Vanilla & 42.99 & 34.40 & 0.43 \\ & & Patch-based & 42.99 & 34.40 & 5.20 \\ \cline{2-5} & & Vanilla & 15.65 & 13.73 & 3.04 \\ & ✓ & Patch-based & 60.90 & 53.75 & 29.23 \\ \hline \multirow{3}{*}{Y-VIS} & \multirow{3}{*}{\(\times\)} & Vanilla & 39.36 & 30.31 & 0.77 \\ & & Patch-based & 39.36 & 30.31 & 5.96 \\ \cline{1-1} \cline{2-5} & ✓ & Vanilla & 25.54 & 21.27 & 11.08 \\ \cline{1-1} \cline{2-5} & & Patch-based & 63.74 & 55.05 & 35.19 \\ \hline \hline \end{tabular} \end{table} Table 3: **Ablations on patch-based CLIP.** We use the VideoMAE backbone and spatio-temporal video slot to conduct this ablation. ral consistency and improves object-centric localization in videos. Figure 6 provides visualizations that highlight the differences between image-based and video-based object-centric localization on two videos. While DINOSAUR can localize objects at a certain level in each frame, its results do not account for temporal consistency. This is evident as one object may be localized by a different number of slots among frames, and the background area may contain a totally different slot segmentation. In contrast, our model produces consistent localization across frames for all slots, including the background ones. This demonstrates that video slots tend to track all content in a short video clip. Moreover, this tracking behavior also handles objects that move in and out of frames. In the first sample, the biker disappears after frame 5, and its yellow slot never appears again. Similarly, the house in the background gradually appears and occupies the purple slot from frame 3. **Patch-based CLIP.** Two qualitative examples comparing the labeling results from patch-based CLIP and vanilla CLIP are shown in Figure 7. The results clearly demonstrate that patch features from vanilla CLIP are inadequate for direct labeling algorithm, whereas the patch-based CLIP provides much better alignment and improved performance. **Labeling any slot.** Given an appropriate list of semantics, our approach has the potential to name any slot. Figure 8 shows a single-frame example of the localization from video slots and their corresponding names. The result contains not only correct localization and labeling for the central objects, but also with reasonable names for other slots. Inevitably, there are slots that are not easy to name as their semantic features are not similar to any text features. Reasons of low similarity include 1) the precise name of the visual content is not included in the semantic list, 2) the visual content is a composition of multiple semantics (e.g. the red slot is a combination of leaves, road and fence), and 3) the visual content is incomplete or ambiguous (e.g. the light green slot only shows part of a building). Nevertheless, this example demonstrates both the potential of our model for open-set detection and segmentation, and at the same time also the challenging nature of these downstream tasks under an unsupervised setting. ## 8 Discussion **On Supervision Signal.** Our video object localization is entirely unsupervised apart from the implicit annotation contained in CLIP. Besides self-supervised pretraining and video slot learning, the patch-based CLIP finetuning is also without human annotation. 
Further, it is worth noting that neither the pretrained nor the finetuned CLIP model generates any pseudo annotation to guide the learning process of the framework. The patch-based CLIP model is only used for label assignment at the inference stage. **Future work.** Handling long videos is undeniably an intriguing task, but its comprehensive treatment lies beyond the scope of the current work. However, we foresee potential extensions of our method from an engineering perspective in future research. For instance, the model can be readily expanded from processing a fixed number of frames to accommodating a larger and more flexible range of frames with relatively minor adjustments, such as position embedding interpolation or temporal frame sub-sampling. ## 9 Conclusion In this paper, we proposed a novel unsupervised method for object localization in videos that leverages the power of slot attention combined with a pre-trained CLIP image-text model. By fine-tuning the last layer of CLIP in a self-supervised manner on unlabeled image data, we established a means for applying the image-text mapping to local patch tokens. Through a joint optimization process with both visual and semantic information, we achieved, for the first time, high-quality and spatio-temporally consistent object localization outcomes on real-world video datasets. While the current coarse localization provides an encouraging starting point, we acknowledge that it still suffers from the low resolution of the patch tokens. Moreover, to complete our object detection and tracking pipeline, we are yet to move beyond semantic localization and accurately differentiate between individual object instances. Nevertheless, we are optimistic that our findings will pave the way for label-free training on complex video analysis tasks. Figure 8: **Labeling any slot. The first image shows the input frame. The second image shows the localization of that frame, as well as names for all slots. Slot names with an underscore are those having a low cosine similarity.** Figure 7: **Comparing patch-based CLIP and vanilla CLIP. Vanilla CLIP cannot find the right name for the visual content in each slot, while patch-based CLIP names each slot correctly.**
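To make the "Labeling any slot" step concrete, the following is a minimal sketch of the kind of cosine-similarity assignment it describes, assuming pooled patch-based CLIP features per slot and CLIP text embeddings for a candidate semantic list; the function and variable names (`label_slots`, `slot_feats`, `text_feats`, `tau`) are illustrative placeholders and not from the paper.

```python
# A minimal sketch (not the authors' implementation) of naming slots from a list of
# semantics: each slot's pooled patch-based CLIP feature is compared against CLIP text
# embeddings by cosine similarity, and slots whose best similarity falls below a
# threshold are kept but flagged (rendered with an underscore in Figure 8).
import numpy as np

def label_slots(slot_feats, text_feats, names, tau=0.2):
    """slot_feats: (K, D) pooled patch-based CLIP features, one row per slot.
    text_feats: (C, D) CLIP text embeddings for the candidate semantic list."""
    s = slot_feats / np.linalg.norm(slot_feats, axis=1, keepdims=True)
    t = text_feats / np.linalg.norm(text_feats, axis=1, keepdims=True)
    sim = s @ t.T                          # (K, C) cosine similarities
    best = sim.argmax(axis=1)              # best-matching semantic per slot
    labels = []
    for k, c in enumerate(best):
        if sim[k, c] >= tau:
            labels.append(names[c])
        else:
            labels.append("_" + names[c])  # low-similarity slot, flagged
    return labels
```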
2309.16978
Counting Pairs of Conics Satisfying the Poncelet Triangle Condition
We say that a pair of smooth plane conics satisfies the Poncelet Triangle Condition if the two conics intersect transversally and there is a triangle (possibly defined over the algebraic closure instead of the original base field) inscribed in one conic and circumscribed in the other. Chipalkatti's paper showed that over a finite field $\mathbb{F}_q$ with characteristic away from $2,3$, a randomly chosen pair of smooth conics over $\mathbb{F}_q$ with transversal intersection has about a $1/q$ chance of satisfying the PTC. We improve this result and show that the exact probability is given by $\frac{q^2 - 5q + 5}{q^3 - 5q^2 + 6q}$. We also correct the conjecture made in Chipalkatti's paper, and show that the asymptotic probability for the tetragon case is $1/q$.
Nathan Kaplan, Tianhao Wang
2023-09-29T04:50:18Z
http://arxiv.org/abs/2309.16978v1
# Counting Pairs of Conics Satisfying the Poncelet Triangle Condition ###### Abstract We say that a pair of smooth plane conics satisfies the Poncelet Triangle Condition if they intersect transversally and there is a triangle (possibly defined over the algebraic closure instead of the original base field) inscribed in one conic and circumscribed in the other. Chipalkatti's paper showed that over a finite field \(\mathbb{F}_{q}\) with characteristic away from \(2,3\), a randomly chosen pairs of smooth conics over \(\mathbb{F}_{q}\) with transversal intersection has about \(1/q\) chance to satisfy the PTC condition. We will improve this result and show that the exact probability is given by \(\frac{q^{2}-5q+5}{q^{2}-5q^{2}+6q}\). We will also correct the conjecture made in Chipalkatti's paper, and show that the asymptotic probability for the tetragons case is \(1/q\). ###### Contents * 1 Introduction * 1.1 Poncelet triangle * 1.2 Pencil of conics * 2 Counting pairs of smooth conics satisfying the PTC in each pencil * 2.1 Intersection type (1,1,1,1) * 2.2 Intersection type (2,1,1) * 2.3 Intersection type (2,2) * 2.4 Intersection type (4) * 2.5 Intersection type (3,1) * 3 Generalization to tetragon case * 4 Appendix * 4.1 Counting the number of pencils of each intersection type * 4.2 Computation data for small \(q\) and conjectures for higher \(n\)-gons Introduction ### Poncelet triangle Conics over field of characteristic \(2\) is very different from conics over other finite field. In this paper, we are always assuming that the base field has characteristic away from \(2\). We first recall a few facts about plane conics and Poncelet Triangle. Let \(\mathcal{A}\) be a projective plane conic defined over \(\mathbb{P}^{2}(\mathbb{F}_{q})\) with defining equation given by \(f(x,y,z)=ax^{2}+bxy+cxz+d\,y^{2}+eyz+f\,z^{2}\). Then we can associate a symmetric matrix to this conic: \[A=\begin{bmatrix}a&b/2&c/2\\ b/2&d&e/2\\ c/2&e/2&f\end{bmatrix}\] This matrix has the property that 1. \(\begin{bmatrix}x&y&z\end{bmatrix}A\begin{bmatrix}x\\ y\\ z\end{bmatrix}=f(x,y,z)\) 2. \(\det\,A\neq 0\) if and only if \(\mathcal{A}\) is smooth. **Definition 1.1**.: _Let \((\mathcal{A},\mathcal{B})\) be a pair of smooth projective plane conics defined over \(\mathbb{F}_{q}\). Then we say that it satisfies the **Poncelet Triangle Condition (PTC)** if_ 1. _The two conics intersect transversally_ 2. _There is a triangle defined over_ \(\overline{\mathbb{F}_{q}}\) _that is inscribed in one conic and circumscribed in the other_ Denote \(B^{*}\) to be the dual conic of \(B\) consisting of tangent lines to \(B\). Define \(E\subset\mathcal{A}\times B^{*}\) consisting of \((P,\xi)\) where \(P\in\xi\). The idea of the Poncelet construction is that we start from \((P,\xi)\in E\). Then the tangent line \(\xi\) intersect \(\mathcal{A}\) at another point \(P^{\prime}\), and we let \(\xi^{\prime}\) to be the other tangent line to \(B\) passing through \(P^{\prime}\). We denote this operation by \(j\) \[j\,:\,(P,\xi)\mapsto(P^{\prime},\xi^{\prime}) \tag{1}\] Then the existence of a Poncelet triangle is equivalent to there is a pair \((P,\xi)\in E\) such that \(j^{3}(P,\xi)=(P,\xi)\). Now, we state the classical result by Cayley on the existence of Poncelet Triangle: Denote \(A,B\) to be the symmetric matrices associated to the conics \(\mathcal{A},\mathcal{B}\). 
Let \(\Delta(t)=\det(tA+B)\) and assuming that \(\sqrt{\Delta}\) has the following Maclaurin series expansion \[\sqrt{\Delta}=H_{0}+H_{1}t+H_{2}t^{2}+\cdots\] then we have the following criterion for a pair of conics to satisfy the PTC. **Theorem 1.2** (Cayley).: _The pair of smooth projective plane conics \((\mathcal{A},\mathcal{B})\) with transversal intersection satisfies the PTC if and only if \(H_{2}=0\)_ For the proof of this theorem, we refer to a paper by Griffiths and Harris [1]. Eventually the existence of a Poncelet Triangle will be equivalent to certain point of the elliptic curve \(y^{2}=\det(xA+B)\) being a \(3\)-torsion point. Hence we expect that the characteristic \(3\) case will give a different result. Now, we state the density result obtained in Chipalkatti's paper [2] **Theorem 1.3** (Chipalkatti).: _Let \(\mathbb{F}_{q}\) be a finite field with characteristic away from \(2,3\). Denote \(\Psi\) to be the set of pairs of smooth conics with transversal intersection, and \(\Gamma\) to be the subset of pairs satisfying PTC. Then we have_ \[\frac{q-16}{q(q+1)}\leq\frac{|\Gamma|}{|\Psi|}\leq\frac{q+5}{(q-2)(q-3)}\] _In particular \(\frac{|\Gamma|}{|\Psi|}=\frac{1}{q}+O\left(\frac{1}{q^{2}}\right)\). If the ground field has characters \(3\), then \(\frac{|\Gamma|}{|\Psi|}=\frac{2}{q}+O\left(\frac{1}{q^{2}}\right)\)._ We will follow the same technique as used in Chipalkatti's paper, and give the exact density. We get the following results: **Theorem 1.4**.: _Assuming that the ground field has characteristic away from \(2,3\)._ 1. _The total number of pairs of smooth conics with transversal intersection and satisfies PTC is given by_ \[|\Gamma|=q^{9}-q^{8}-q^{7}+q^{5}+q^{4}-q^{3}=(q^{5}-q^{2})(q+1)q(q-1)^{2}\] _We factored out_ \(q^{5}-q^{2}\) _as it is the number of smooth conics defined over_ \(\mathbb{F}_{q}\)_. When doing the counting, we can perform a change of coordinate to fix the first conic, and count the number of second conic so that the pair satisfies the PTC. So we do expect a factor of_ \(q^{5}-q^{2}\) _here._ 2. _The overall density is given by_ \(\frac{|\Gamma|}{|\Psi|}=\frac{q^{2}-5q+5}{q^{3}-5q^{2}+6q}\)__ ### Pencil of conics Given two plane conics with equation \(F(x,y,z)\neq G(x,y,z)\), we call the family of conics \(\pi=\{\eta F+G=0:\eta\in P^{1}(\mathbb{F}_{q})\}\) as a **pencil** of conics, and \(F,G\) as generators of this pencil. We will use \(\eta\in\mathbb{F}_{q}\cup\{\infty\}\) where \(\eta=\infty\) corresponds to the conic \(\{F=0\}\). We see that any two distinct members in this pencil will have same intersections as the intersection of \(F\) and \(G\). If we require that \(F\), \(G\) intersect transversally in \(4\) points over \(\overline{\mathbb{F}_{q}}\), then these \(4\) points will determine the pencil uniquely (the pencil consists of all conics passing through the \(4\) points), and there are exactly \(5\) classes of such pencils up to projective automorphisms over \(\mathbb{F}_{q}\). We use the notation \((1,1,1,1)\) to denote the pencil of conics where the two generators intersect in \(4\) points, and all the \(4\) points are defined over \(\mathbb{F}_{q}\). Similarly, we use \((2,1,1)\) to denote the intersection type where we have \(2\) conjugate points defined over \(\mathbb{F}_{q^{2}}\), and two other \(\mathbb{F}_{q}\) point. In total, we have the following \(5\) classes of pencils where the, and here is the number of pencils of each classes. I leave it unsimplified so it would be clear where the counting comes from. 
The proof of this counting is included in the Appendix. \begin{tabular}{c|c} Intersection Type & Number of pencils of this type \\ \hline \((1,1,1,1)\) & \(\frac{1}{4!}(q^{2}+q+1)(q^{2}+q)q^{2}(q^{2}-3(q+1)+3)\) \\ \hline \((2,1,1)\) & \(\frac{1}{2.2}(q^{4}+q^{2}+1-q^{2}-q-1)(q^{2})(q^{2}-1)\) \\ \hline \((2,2)\) & \(\frac{1}{2.2\cdot 2}(q^{4}-q)(q^{4}-q-(q^{2}+1-q-1))\) \\ \hline \((3,1)\) & \(\frac{1}{3}(q^{6}+q^{3}+1-q^{2}-q-1-(q^{2}+q+1)(q^{3}-q))(q^{2}+q+1)\) \\ \hline \((4)\) & \(\frac{1}{4}(q^{8}+q^{4}+1-q^{4}-q^{2}-1-(q^{2}+q+1)(q^{4}-q^{2}))\) \\ \end{tabular} The idea in Chipalkatti's paper is that we classify pairs of smooth conics with transversal intersection upto projective automorphisms defined over \(\mathbb{F}_{q}\), and then compute the density of pairs satisfying PTC in each class. Chipalkatti computed the number of smooth pairs of conics satisfying the PTC in the pencil with intersection type \((1,1,1,1)\), and did estimation for the other pencils. We will compute that number in the pencils of the other four intersection types. Counting pairs of smooth conics satisfying the PTC in each pencil From this point on, we are going to always assume that the base field has characteristic away from \(2,3\). We will also show that pair of the same smooth conics cannot satisfy the PTC. So when we count the number of pairs of conics satisfying the PTC, we are always assuming that the pair consists of two distinct smooth conics. **Lemma 2.1**.: _Given a smooth conic \(\mathcal{A}\) over \(\mathbb{F}_{q}\) where the characteristic is not \(2,3\), then the pair \((\mathcal{A},\mathcal{A})\) does not satisfy the PTC._ Proof.: Let \(A\) be the symmetric matrix corresponding the smooth conic \(\mathcal{A}\). Then we have \(\Delta(t)=\det(tA+A)=(t+1)^{3}\det(A)\). Then the numerator of \(H_{2}\) is given by \(-a_{1}^{2}+4a_{0}a_{2}\) where \(a_{i}\) is the coefficient \(t^{i}\) in \(\Delta(t)\). In this case, we get \(H_{2}=3\det(A)\neq 0\) as \(\mathcal{A}\) is smooth. Another consequence of this computation is that pairs of the same singular conics will satisfy the PTC. ### Intersection type (1,1,1,1) This is already computed in Chipalkatti's paper. We will review the computation again as the other cases are using exactly the same method. We pick an explicit generators for the pencil, and assume that the first conic is given by \(C_{r}=rF+G\) and the second conic is given by \(C_{s}=sF+G\) where \(r,s\in P^{1}(\mathbb{F}_{q})\). Then we compute the \(H_{2}(r,s)\) (We will drop the denominator) which will be degree \(2\) in \(r\) and degree \(4\) in \(s\). We view this as an equation in \(r\) and compute its discriminant which we denote by \({}^{\prime}disc\,H_{2}\)". * Generators of the pencil: \(F=xy\), \(G=z^{2}+yz+xz\) * Singular members: \(\eta_{1}=0\), \(\eta_{2}=1\), \(\eta_{3}=\infty\) * \(H_{2}(r,s)=r^{2}+(6s^{2}-4s^{3}-4s)r+s^{4}\) * \(disc\,H_{2}=16(s^{2}-s+1)(s(s-1))^{2}\) * Density of smooth pairs satisfying PTC: \(\frac{q-5}{(q-2)(q-3)}\) * Proof: We want to count all the \((r,s)\) with \(r,s\neq 0,1\) (Ensure that \(C_{r},C_{s}\) are smooth conics) such that \(H_{2}(r,s)=0\). Note that \(H_{2}(r,s)\) is always a quadratic polynomial in \(r\). Hence for a fixed value of \(s\), the number of \(r\) such that \(H_{2}(r,s)=0\) depends on the value of \(disc\,H_{2}(s)\). When \(s=0,1\), \(disc\,H_{2}(s)\) equal to zero, and there is one value of \(r\) corresponding to each of that \(s\) value, which are \(r=0,1\) respectively. 
We also see that when \(r=0,1\), there is exactly one value of \(s\) which makes \(H_{2}(r,s)=0\) which are \(s=0,1\) respectively. Now we assume that \(s\neq 0,1\), and we know that if \(H_{2}(r,s)=0\), \(r\neq 0,1\) either. So we do not need to worry that smoothness of \(C_{r}\) when assuming that \(s\neq 0,1\). Further, the square part of \(disc\,H_{2}(s)\) is always a non-zero square. To use the same notation as in Chipalkatti's paper, we denote the non-square part of \(disc\,H_{2}(s)\) as \(f(s)=s^{2}-s+1\). Then \(disc\,H_{2}(s)\) is zero/non-zero square if and only if \(f(s)\) is zero/non-zero square. For \(f(s)=s^{2}-s+1=(s-1/2)^{2}+\frac{3}{4}\), it has two distinct roots over \(\mathbb{F}_{q}\) if and only if \(-3\) is a square in \(\mathbb{F}_{q}\) (we assumed that characteristic of \(\mathbb{F}_{q}\) is not \(3\), so \(-3\neq 0\) in \(\mathbb{F}_{q}\)). So we will separate it into the corresponding two cases. If \((s-1/2)^{2}+\frac{3}{4}=y^{2}\) for some \(y\in\mathbb{F}_{q}\), then we have We call the first term by \(a\in\mathbb{F}_{q}\setminus\{0\}\), and the second term will be \(\frac{3}{4a}\). Then by solving \(s,y\) in terms of \(a\), we get \[s=\frac{3+4a-4a^{2}}{8a},y=\frac{4a^{2}+3}{8a}\] Hence we parameterize all the \(s\) where \(f(s)\) is a square in terms of \(a\) We see that \(a\) and \(\frac{-3}{4a}\) gives the same value for \(s\). Hence each \(s\) value with \(f(s)\) being a square has two distinct \(a\) corresponding to it except when \(a=\frac{-3}{4a}\). In which case, \(a=\pm\frac{\sqrt{-3}}{2}\), and \(s=\frac{1}{2}\pm\frac{\sqrt{-3}}{2}\) and \(f(s)=0\). In summary, if \(-3\) is not a square in \(\mathbb{F}_{q}\), then \(f(s)\) is never zero, and each \(s\) with \(f(s)\) being a non-zero square has two \(a\) corresponding to it in the above parameterization. Hence there are \(\frac{q-1}{2}\) choices of \(s\) value which makes \(f(s)\) being a non-zero square. We also need to exclude the value \(s=0,1\), then we get \(\frac{q-5}{2}\) choices of \(s\) value which makes \(disc\,H_{2}(s)\) being a non-zero square. Each \(s\) value of such has two corresponding \(r\) such that \(H_{2}(r,s)=0\). Therefore, we get in total \(q-5\) pairs of \((r,s)\) such that the conic pair \((C_{r},C_{s})\) satisfies the PTC. If \(-3\) is a square in \(\mathbb{F}_{q}\), then there are 2 values of \(s\) that make \(f(s)=0\) and \(\frac{q-3}{2}(a\neq 0,\pm\frac{\sqrt{-3}}{2})\) choices of \(s\) value that make \(f(s)\) to be a non-zero square. Then by excluding \(s=0,1\) as previous, we get in total \(2+2\cdot\frac{q-7}{2}=q-5\) pairs of conics in this pencil that satisfy the PTC. ### Intersection type (2,1,1) * Generators: \(F=xy\), \(G=y^{2}+yz+xz+ez^{2}\) where \(T^{2}+T+e\) irreducible over \(\mathbb{F}_{q}\). * Singular members: \(\eta=\infty\) * \(H_{2}(r,s)=r^{2}(4e-1)+r(4s^{3}e^{2}-6s^{2}e-4se+4s-2)+(-s^{4}e^{2}+6s^{2}e-4 s+3)\) * \(disc\,H_{2}=(16)(s^{2}e^{2}-se-3e+1)(s^{2}e-s+1)^{2}\) * Density of smooth pairs satisfying PTC: \(\frac{q-1}{q(q-1)}=\frac{1}{q}\) * Proof: Since \(T^{2}+T+e\) is irreducible over \(\mathbb{F}_{q}\), we know that \(1-4e\) is not a square over \(\mathbb{F}_{q}\). In particular, it is not zero. Hence \(H_{2}(r,s)\) is a quadratic polynomial in variable \(r\) (The coefficient for \(r^{2}\) in \(H_{2}\) is never zero). Note that \((s^{2}e-s+1)\) (which is the square part in \(disc\,H_{2}\)) has no roots over \(\mathbb{F}_{q}\) since its discriminant is \(1-4e\) which cannot be a square by assumption. 
Therefore, \(disc\,H_{2}\) is zero/non-zero square if and only if \((s^{2}e^{2}-se-3e+1)\) is zero/non-zero square if and only if \(s^{2}-se^{-1}-3e^{-1}+e^{-2}\) is zero/non-zero square Then we follow the same proof as in previous. We factor \(s^{2}-se^{-1}-3e^{-1}+e^{-2}=(s-b)^{2}+c\) where \(b=\frac{e^{-1}}{2}\) and \(c=\frac{3(1-4e)}{4e^{2}}\neq 0\) (If \(c=0\), then \(1-4e=0\) which is a contradiction). If \((s-b)^{2}+c=y^{2}\) for some \(y\in\mathbb{F}_{q}\), then we have \((y-s+b)(y+s-b)=c\). We call the first term as \(a\) which is non-zero as \(c\neq 0\), and the second term as \(c/a\). Then by solving \(s\) in terms of \(\alpha\), we get \[s=\frac{c+2ba-a^{2}}{2a}\qquad a\in\mathbb{F}_{q}\setminus\{0\}\] Note that \(a,-c/a\) will give the same value for \(s\). Now, we separate into two cases. The first case is \(\sqrt{-c}\in\mathbb{F}_{q}\). Then \((s-b)^{2}+c\) has two roots over \(\mathbb{F}_{q}\) which are \(s=b\pm(\sqrt{-c})\). Note that happens exactly when \(a=-c/a=\pm\sqrt{-c}\). Then for the other \(q-3\) choices of \(a\), two of them will give one \(s\) value which makes \((s-b)^{2}+c\) to be a non-zero square. In summary, we have 2 \(s\)-values such that \(\mathrm{disc}H_{2}=0\), which means that each of them corresponds to exactly 1 \(r\)-value so that they satisfy the PTC condition. There are also \(\frac{q-3}{2}\)\(s\)-values such that \(\mathrm{disc}H_{2}\) is a non-zero square, which means that each of them corresponds to 2 \(r\)-values so that they satisfy the PTC condition. Then we have \(2+\frac{q-3}{2}\cdot 2=q-1\) number of pairs of \((r,s)\) that satisfies the PTC condition. The second case is \(\sqrt{-c}\not\in\mathbb{F}_{q}\). Then in this case, \((s-b)^{2}+c\) has no roots over \(\mathbb{F}_{q}\), and there are \(\frac{q-1}{2}\) choice of \(s\)-values which makes it a non-zero square. In total, there are \(\frac{q-1}{2}\cdot 2=q-1\) smooth pairs of conics that satisfy the PTC in this pencil. ### Intersection type (2,2) * Generators: \(F=xy\), \(G=e_{1}x^{2}+e_{2}y^{2}+xz+yz+z^{2}\) where \(T^{2}+T+e_{1}\), \(T^{2}+T+e_{2}\) are irreducible over \(\mathbb{F}_{q}\). * Singular members: \(\eta_{3}=\infty\), \(\eta_{1}=\frac{1+\sqrt{(1-4e_{1})(1-4e_{2})}}{2}\), \(\eta_{2}=\frac{1-\sqrt{(1-4e_{1})(1-4e_{2})}}{2}\) * \(H_{2}(r,s)=r^{2}(-16e_{1}e_{2}+4e_{1}+4e_{2}-1)+r(16se_{1}e_{2}+4s^{3}-6s^{2}-4 se_{1}-4se_{2}+8e_{1}e_{2}+4s-2e_{1}-2e_{2})+(-s^{4}-24s^{2}e_{1}e_{2}+48e_{1}^{2}e_{2} ^{2}+6s^{2}e_{1}+6s^{2}e_{2}+16se_{1}e_{2}-24e_{1}^{2}e_{2}-24e_{1}e_{2}^{2}-4 se_{1}-4se_{2}+3e_{1}^{2}+3e_{2}^{2}+6e_{1}e_{2})\) * \(discH_{2}=(16)(s^{2}+12e_{1}e_{2}-s-3e_{1}-3e_{2}+1)(-s^{2}+4e_{1}e_{2}+s-e_{1 }-e_{2})^{2}\) * Density of smooth pairs satisfying PTC: \(\frac{q-5}{(q-3)(q-2)}\) * Proof: We know that \(1-4e_{1},1-4e_{2}\) are not square over \(\mathbb{F}_{q}\) by the assumption. In particular, neither of them can be zero. Then the coefficient for \(r^{2}\) in \(H_{2}\) is \((-16e_{1}e_{2}+4e_{1}+4e_{2}-1)=-(4e_{1}-1)(4e_{2}-1)\neq 0\). Hence \(H_{2}\) is always a quadratic polynomial in \(r\). The square term in \(disc\,H_{2}\) is \((-s^{2}+4e_{1}e_{2}+s-e_{1}-e_{2})\). It has two roots which are exactly \(\eta_{1},\eta_{2}\). Hence for \(s\neq\eta_{1},\eta_{2}\), we know that \(disc\,H_{2}\) is zero/non-zero square if and only if \(s^{2}+12e_{1}e_{2}-s-3e_{1}-3e_{2}+1\) is zero/non-zero square. When \(s=\eta_{1},\eta_{2}\), we know that \(disc\,H_{2}=0\). Hence there are one \(r\)-value each corresponding to the \(s\)-value above. 
We see that \(H_{2}(\eta_{1},\eta_{1})=0\) and \(H_{2}(\eta_{2},\eta_{2})=0\). Further, \(H_{2}(\eta_{1},s)=(s-\eta_{1})^{4}\) and \(H_{2}(\eta_{2},s)=(s-\eta_{2})^{4}\). Hence for \(s\neq\eta_{1},\eta_{2}\), if \(H_{2}(r,s)\) has solutions for \(r\), then \(r\neq\eta_{1},\eta_{2}\) either. We follow the same strategy as above, We write \(s^{2}+12e_{1}e_{2}-s-3e_{1}-3e_{2}+1=(s-b)^{2}+c\) where \(b=\frac{1}{2}\), \(c=\frac{3}{4}(4e_{1}-1)(4e_{2}-1)\neq 0\). We still separate it into two cases, but we need to add one more condition that \(s\neq\eta_{1},\eta_{2}\). This will remove 4 choices for \(\alpha\) (Not two choices since \(b\pm\sqrt{-c}\neq\eta_{1},\eta_{2}\)). Hence there are either \(\frac{q-5}{2}\) choices of \(s\in\mathbb{F}_{q}\setminus\{\eta_{1},\eta_{2}\}\) which makes \(disc\,H_{2}\) to be non-zero square (\(\sqrt{-c}\not\in\mathbb{F}_{q}\)), or \(\frac{q-7}{2}\) choices of \(s\in\mathbb{F}_{q}\setminus\{\eta_{1},\eta_{2}\}\) makes \(disc\,H_{2}\) into non-zero square, and 2 choices of \(s\) makes \(disc\,H_{2}\) into zero (\(\sqrt{-c}\in\mathbb{F}_{q}\)). In summary, we have \(q-5\) smooth pairs of conics that satisfy the PTC in this pencil. ### Intersection type (4) * Generators: \(F=x^{2}-ay^{2}\), \(G=z^{2}-by^{2}+2cxy\) where \(\sqrt{a}\not\in\mathbb{F}_{q}\), \(\sqrt{b^{2}-4ac^{2}}\not\in\mathbb{F}_{q}\). * Singular members: \(\eta=\infty\) * \(H_{2}(r,s)=r^{2}(4ac^{2}-b^{2})+r(4s^{3}a^{2}+6s^{2}ab-4sac^{2}+4sb^{2}+2bc^{2})+ (-s^{4}a^{2}+6s^{2}ac^{2}+4sbc^{2}+3c^{4})\) * \(discH_{2}=(16)(s^{2}a^{2}+sab-3ac^{2}+b^{2})(s^{2}a+sb+c^{2})^{2}\) * Density of smooth pairs satisfying PTC: \(\frac{q-1}{q(q-1)}=\frac{1}{q}\) * Proof: This is very similar to the \((2,1,1)\) case. So we sketch the proof instead of writing it in details. 1. \(H_{2}\) is always a degree 2 polynomial in \(r\) by the assumption. 2. The square term of \(discH_{2}\) has no roots over \(\mathbb{F}_{q}\). \(discH_{2}\) is zero/non-zero square if and only if \(s^{2}a^{2}+sab-3ac^{2}+b^{2}\) is zero/non-zero square. 3. There is only one singular member \(\eta=\infty\). We follow the same proof as in the pencil with intersection type \((2,1,1)\), and we get \(q-1\) pairs of smooth conics in this pencil satisfying the PTC. ### Intersection type (3,1) * Generators: \(F=y^{2}-xz\), \(G=x^{2}+by^{2}+cxy+yz\) where \(T^{3}+bT^{2}+cT+1\) is irreducible over \(\mathbb{F}_{q}\). * Singular members: None * \(H_{2}(r,s)=r^{2}(3s^{4}+4s^{3}b+6s^{2}c-c^{2}+12s+4b)+r(2s^{4}b+4s^{3}b^{2}-4s ^{3}c+6s^{2}bc+4sc^{2}-18s^{2}-4sb+2c)+(-s^{4}b^{2}+4s^{4}c+12s^{3}+6s^{2}b+4sc+3)\) * \(H_{2}(\infty,s)=3s^{4}+4s^{3}b+6s^{2}c-c^{2}+12s+4b\) (This is exactly the coefficient of \(r^{2}\) term in \(H_{2}(r,s)\)) * \(H_{2}(r,\infty)=3r^{2}+2rb-b^{2}+4c\) with discriminant \(16(b^{2}-3c)\) * \(discH_{2}=(16)(s^{2}(b^{2}-3c)+s(bc-9)+c^{2}-3b)(s^{3}+s^{2}b+sc+1)^{2}\) * Density of smooth pairs satisfying PTC: \(\frac{q+1}{q(q+1)}=\frac{1}{q}\) * Proof: The key difference with the previous case is that \(H_{2}\) could be non-quadratic in terms of \(r\), and \(discH_{2}(s)\) could also be either linear or quadratic. Also, this pencil has no singular members, so we also need to consider the case when \(r,s=\infty\). Also, by Lemma 2.1, we know that \(H_{2}(\infty,\infty)\neq 0\). Hence we do not need to worry about the case when both \(r,s=\infty\). We will first see that we always treat \(H_{2}\) as a quadratic polynomial in \(r\) by ignoring the case when \(r=\infty\). 
The coefficient for \(r^{2}\) in \(H_{2}(r,s)\) is exactly \(H_{2}(\infty,s)\). Hence if \(H_{2}(\infty,s)\) has a root \(s_{0}\in\mathbb{F}_{q}\), then we get \((\infty,s_{0})\) when counting the the pairs of conics satisfying PTC. For that value of \(s_{0}\), we see that \(discH_{2}\) is the square of the coefficient of the linear term in \(H_{2}\). If this number is not zero, then \(discH_{2}\) is a non-zero square, and we would get two values of \(r\) such that \(H_{2}(r,s_{0})=0\) if we are assuming that \(H_{2}\) is a quadratic polynomial. What really happens here is that \(H_{2}(r,s_{0})\) is a linear polynomial, and there is one value \(r_{0}\in\mathbb{F}_{q}\) such that \(H_{2}(r_{0},s_{0})=0\). Note that we also have \(H_{2}(\infty,s_{0})=0\), and hence we can view \(r_{0}\) and \(\infty\) as the two solutions to \(H_{2}(r,s_{0})=0\). If the linear term of \(H_{2}\) is zero, then \(discH_{2}=0\), and we are expecting one value of \(r\) such that \(H_{2}(r,s_{0})=0\) if we are assuming that \(H_{2}\) is quadratic. In this case, we claim that \(H_{2}(r,s_{0})\) equal to a non-zero number. If this were zero, then any values of \(r\in\mathbb{F}_{q}\) will make \(H_{2}(r,s_{0})=0\). In particular, \(H_{2}(s_{0},s_{0})=0\), but this is a contradiction to Lemma 2.1, which states that the pair of the the same smooth conics cannot satisfy the PTC. Therefore, in this case, \(H_{2}(r,s_{0})\) is a fixed non-zero number, and \(H_{2}(r,s_{0})=0\) has no solutions for \(r\in\mathbb{F}_{q}\). The one hypothetical solution is exactly given by \(r=\infty\). In summary, we can always assume that \(H_{2}(r,s)\) is a quadratic polynomial, and ignore the case where \(r=\infty\). Then we follow the same method as previous. The first observation is that the square factor in \(discH_{2}\) is never zero by assumption. Hence \(discH_{2}\) is zero/non-zero square if and only if \(s^{2}(b^{2}-3c)+s(bc-9)+c^{2}-3b\) is zero/non-zero square. As pointed out in Chipalkatti's paper ([2], Page 11), \(b^{2}-3c\) and \(bc-9\) cannot be both zero. Hence \(discH_{2}\) is either quadratic or linear, but never a constant polynomial. We separate it into three cases. In the first case, if \(b^{2}-3c\) is zero, then \(discH_{2}\) is a linear function, and there are \(\frac{q-1}{2}\) choice of \(s\) which makes \(discH_{2}\) to be non-zero square, and \(1\) choice of \(s\) makes it zero. This in total gives \(q\) pairs of \((r,s)\). Note that the discriminant of \(H_{2}(r,\infty)\) is also zero in this case. Hence there is one more pair \((r_{0},\infty)\) coming from \(s=\infty\). In total, we get \(q+1\) pairs of smooth conics in this pencil satisfying the PTC. In the second case, we assume that \(b^{2}-3c\) is a non-zero square. Then for \(s=\infty\), there are two pairs \((r_{0},\infty)\), \((r_{1},\infty)\) that satisfies the PTC. For \(s\neq\infty\), we follow the same method as in the \((2,1,1)\) case, and get \(q-1\) pairs of \((r,s)\) with \(H_{2}(r,s)=0\). This in total gives \(q+1\) pairs again. In the third case, we assume that \(b^{2}-3c\) is not a square over \(\mathbb{F}_{q}\). Then there is no \(r\) such that \((r,\infty)\) satisfies the PTC. Further by dividing out by \(b^{2}-3c\), we see that \(discH_{2}\) is zero/non-zero square if and only if \(s^{2}+s\frac{bc-9}{b^{2}-3c}+\frac{c^{2}-3b}{b^{2}-3c}\) is zero/non-square. We still use the same method as before. 
In one case, there are \(2\) values of \(s\) makes this polynomial zero, and \(q-(\frac{q-3}{2}+2)=\frac{q-1}{2}\) values of \(s\) makes it into non-square. In total, there are \(2+\frac{q+1}{2}\cdot 2=q+1\) pairs of \((r,s)\) that satisfies the PTC. The other case is that \(s^{2}+s\frac{bc-9}{b^{2}-3c}+\frac{c^{2}-3b}{b^{2}-3c}\) has no roots in \(\mathbb{F}_{q}\), and there are \(q-(\frac{q-1}{2})=\frac{q+1}{2}\) number of \(s\) makes it non-square. Hence there are in total \(q+1\) pairs of \((r,s)\) that satisfies the PTC. In all cases, there are in total \(q+1\) pairs of smooth conics in this pencil satisfying the PTC. By combing the number of smooth pairs of conics satisfying the PTC in each pencil class with the number of pencils of each class, we get the Theorem 1.4 stated in the section 1.1. Generalization to tetragon case The construction of the Poncelet Triangle using the \(j\) map also extends to arbitrary \(n\)-gons (See equation (1)). There is also a corresponding Cayley's Criterion on the existence of Poncelet \(n\)-gons involving the Hankel determinant with entries in \(H_{2},H_{3},....\). For example, the existence of a Poncelet tetragon is equivalent to \(H_{3}=0\). Chipalkatti conjectured based on computational data that the density of pairs of smooth conics having an Poncelet tetragon is asymptotically1\(3/q\). However, we found that the computation data is only valid for pairs of conics in the pencil of intersection type \((1,1,1,1)\). We will show that the true asymptotic density is actually \(1/q\). Footnote 1: We fix a prime \(p\) away from finitely many bad characteristics, then let \(q=p^{k}\). The way we are taking this limit is by taking \(k\to\infty\) **Theorem 3.1**.: _Given a random pair of smooth conics with transversal intersection, then the probability that it has a Poncelet tetragon is asymptotically \(1/q\). Further, the asymptotic probabilities in each pencil class are given by_ \begin{tabular}{c|c|c} _Pencil intersection type_ & _Density of pencils in this type_ & _Probability of having a Poncelet tetragon_ \\ \hline \((1,1,1,1)\) & \(1/24\) & \(3/q\) \\ \hline \((2,1,1)\) & \(1/4\) & \(1/q\) \\ \hline \((2,2)\) & \(1/8\) & \(3/q\) \\ \hline \((3,1)\) & \(1/3\) & \(0\) \\ \hline \((4)\) & \(1/4\) & \(1/q\) \\ \end{tabular} Proof.: A pair of smooth conics having a Poncelet tetragon is equivalent to \(H_{3}=0\). What we will do is to factor \(H_{3}(r,s)\) in each pencil class and use the Hasse-Weil bound to estimate the number of rational points on \(H_{3}(r,s)\). The integer before \(1/q\) is actually the number of \(\mathbb{F}_{q}\)-components of \(H_{3}(r,s)\). We also dropped the smoothness requirement as smoothness is a generic condition, and it will not affect the asymptotic probability calculation. In the pencil with intersection type \((1,1,1,1)\) with generators same as previous, we compute that \[{}^{2}H_{3}(r,s)=(-2rs+s^{2}+r)(s^{2}-r)(s^{2}+r-2s)\] We omit some details here, but the \(3\) components here is indicating that \(H_{3}(r,s)\) in other pencils will also have \(3\) components defined over \(\overline{\mathbb{F}_{q}}\). This is proved by using change of coordinates over \(\overline{\mathbb{F}_{q}}\) to change from one pencil to another and change of generators inside the pencil, and keeping track of the changes in the variables \(r,s\). Note that each component is a geometrically irreducible quadratic. Hence \(H_{3}(r,s)\) has \(3\) geometrically irreducible components and each of them are defined over \(\mathbb{F}_{q}\). 
By the Hasse-Weil estimate, we know that \(H_{3}(r,s)\) has \(3q+O(\sqrt{q})\) number of rational points, and hence the density of pairs of conics having a Poncelet tetragon is \(\frac{3}{q}+O(q^{-3/2})\) in this pencil. Similarly, in the pencil with intersection type \((2,1,1)\), we get \[H_{3}(r,s)=(-2rse+s^{2}e+r-1)(s^{4}e^{2}-2s^{3}e+4r^{2}e-8rse+6s^{2}e-r^{2}+2 rs-2s+1)\] Over \(\mathbb{F}_{q^{2}}\), the second term factor as \[e^{2}(s^{2}+Ar+Bs+e^{-1})(s^{2}-Ar+(-2e^{-1}-B)s+e^{-1})\] where \(A=e^{-1}\sqrt{1-4e}\in\mathbb{F}_{q^{2}}\setminus\mathbb{F}_{q}\) by the assumption that \(T^{2}+T+e\) is irreducible over \(\mathbb{F}_{q}\), and \(B=-e^{-1}-e^{-1}(\sqrt{1-4e})\) Therefore, we know that \(H_{3}\) has 3 components over \(\overline{\mathbb{F}_{q}}\). One of the component is defined over \(\mathbb{F}_{q}\), contributing \(q+O(\sqrt{q})\) number of rational points. The rational points of the second \(\mathbb{F}_{q}\) component is a subset of the intersection of its two \(\overline{\mathbb{F}_{q}}\) components, which has at most 4 points. Therefore, \(H_{3}(r,s)\) has \(q+O(\sqrt{q})\) number of rational points, and the density of pairs conics with a Poncelet tetragon is given by \(\frac{1}{q}+O(q^{-3/2})\) The density results for the other pencils follows from the same computation. We will just list the factorization of the corresponding \(H_{3}(r,s)\) here: For the pencil class with intersection type \((2,2)\), we have \[H_{3}(r,s)=(-2rs+s^{2}+r+4e_{1}e_{2}-e_{1}-e_{2})(s^{2}+Ar+Bs+C)(s^{2}-Ar+(-2-B) s+C)\] where \(A=\sqrt{(1-4e_{1})(1-4e_{2})}\), \(B=-1-\sqrt{(1-4e_{1})(1-4e_{2})}\), \(C=e_{1}+e_{2}-4e_{1}e_{2}\). Note that \(A\), \(B\in\mathbb{F}_{q}\) by the assumption that \(T^{2}+T+e_{1},T^{2}+T+e_{2}\) are irreducible over \(\mathbb{F}_{q}\) (\(1-4e_{1}\), \(1-4e_{2}\) are non-square in \(\mathbb{F}_{q}\), and hence their product is a square in \(\mathbb{F}_{q}\)). Therefore, \(H_{3}(r,s)\) has 3 components over \(\mathbb{F}_{q}\) (and also over \(\overline{\mathbb{F}_{q}}\)). It follows that the density in this pencil is \(3/q+O(q^{-3/2})\) For the pencil class with intersection type (4), we have \[H_{3}(r,s)=a^{2}(2rsa-s^{2}a+rb+c^{2})(s^{2}+Ar+Bs+a^{-1}c^{2})(s^{2}-Ar+(2a^{ -1}b-B)s+a^{-1}c^{2})\] where \(A=a^{-1}\sqrt{b^{2}-4ac^{2}}\), \(B=a^{-1}b-a^{-1}\sqrt{b^{2}-4ac^{2}}\). Note that \(A\), \(B\not\in\mathbb{F}_{q}\) by the assumption. Hence \(H_{3}\) has 3 components over \(\overline{\mathbb{F}_{q}}\), and only one of them is defined over \(\mathbb{F}_{q}\), and the other two are conjugate \(\mathbb{F}_{q^{2}}\) components. Similar to the \((2,1,1)\) case, we get the density \(1/q+O(q^{-3/2})\) in this pencil. 
For the pencil class with intersection type \((3,1)\), we get \[H_{3}(r,s)=r^{3}s^{6} +2r^{3}s^{5}b+r^{2}s^{6}b+4r^{2}s^{5}b^{2}-rs^{6}b^{2}+2rs^{5}b^{ 3}-s^{6}b^{3}+5r^{3}s^{4}c\] \[-6r^{2}s^{5}c+4rs^{6}c+10r^{2}s^{4}bc-8rs^{5}bc+4s^{6}bc+5rs^{4}b^ {2}c-2s^{5}b^{2}c-5r^{3}s^{2}c^{2}\] \[+20r^{2}s^{3}c^{2}-20rs^{4}c^{2}+8s^{5}c^{2}-2r^{3}sbc^{2}+5r^{2} s^{2}bc^{2}+20r^{3}s^{3}-45r^{2}s^{4}+36rs^{5}-8s^{6}\] \[+20r^{3}s^{2}b-40r^{2}s^{3}b+30rs^{4}b-4s^{5}b+8r^{3}sb^{2}-20r^{2 }s^{2}b^{2}+20rs^{3}b^{2}-5s^{4}b^{2}-r^{3}c^{3}\] \[+2r^{2}sc^{3}-4r^{3}sc+30r^{2}s^{2}c-40rs^{3}c+20s^{4}c+4r^{3}bc-8 r^{2}sbc+10rs^{2}bc-r^{2}c^{2}+4rsc^{2}\] \[-8r^{3}+36r^{2}s-45rs^{2}+20s^{3}+4r^{2}b-6rsb+5s^{2}b+rc+2sc+1\] Over \(\mathbb{F}_{q^{3}}\), it factor as \[H_{3}(r,s)= (rs^{2}+(-2\alpha)rs+(2\alpha+b)s^{2}+(-2\alpha^{2}-2ba-c)r+(2a^{2 }+2b\alpha+2c)s+1)\] \[\cdot(rs^{2}+(-2\alpha^{\prime})rs+(2\alpha^{\prime}+b)s^{2}+(-2 \alpha^{\prime 2}-2ba^{\prime}-c)r+(2\alpha^{\prime 2}+2ba^{\prime}+2c)s+1)\] \[\cdot(rs^{2}+(-2\alpha^{\prime\prime})rs+(2\alpha^{\prime\prime} +b)s^{2}+(-2\alpha^{\prime\prime\prime}-2ba^{\prime\prime}-c)r+(2\alpha^{ \prime\prime\prime}+2ba^{\prime\prime}+2c)s+1)\] where \(\alpha\in\mathbb{F}_{q^{3}}\) is a root to \(T^{3}+bT^{2}+cT+1\), and \(\alpha^{\prime},\alpha^{\prime\prime}\) are its conjugates. The rational points of \(H_{3}(r,s)\) will be a subset of the intersection of these 3 conjugate cubic curves defined over \(\mathbb{F}_{q^{3}}\), which is a finite number less than or equal to 9. Hence the density in this pencil is \(0+O(q^{-2})\). Appendix ### Counting the number of pencils of each intersection type 1. \((1,1,1,1)\): There is no restriction on first point. The second point cannot be the same as the first point. The third point not on the line passing through the first two. The last point is not on any of the 3 lines passing through two previous points. We divide by \(4!\) as the order of the 4 points does not matter. \(\frac{1}{4!}(q^{2}+q+1)(q^{2}+q)q^{2}(q^{2}-3(q+1)+3)\) 2. \((2,1,1)\): We pick first point in \(\mathbb{F}_{q^{2}}\setminus\mathbb{F}_{q}\), and the second point is the conjugate to the first point. We require that the third point not on the line passing though first two points. Note that the line passing through the first two points is a \(\mathbb{F}_{q}\) line, and hence we remove \(q+1\) choices for the third point. Note that the line connecting the first/second point with the third point is not defined over \(\mathbb{F}_{q}\) (those two lines are conjugate to each other), and it has at most \(1\)\(\mathbb{F}_{q}\) point (a line with two \(\mathbb{F}_{q}\) points are defined over \(\mathbb{F}_{q}\)) which is the third point. Hence the last point is not on the line passing through the first two, and not equal to the third point. \(\frac{1}{2\cdot 2}(q^{4}+q^{2}+1-q^{2}-q-1)(q^{2})(q^{2}-1)\) 3. \((2,2)\): We pick first point in \(\mathbb{F}_{q^{2}}\setminus\mathbb{F}_{q}\), and the second point is conjugate to first point. The third point is also a \(\mathbb{F}_{q^{2}}\setminus\mathbb{F}_{q}\) point, that is not on the line passing through the first two points. The last point is the conjugate to the third points. Note that the last point is not on the line passing through any two points of the previous. This is because that the line connecting the first two points, and the last two points are \(\mathbb{F}_{q}\) lines. \(\frac{1}{2\cdot 2\cdot 2}(q^{4}-q)(q^{4}-q-(q^{2}+1-q-1))\) 4. 
\((3,1)\): We pick first point in \(\mathbb{F}_{q^{3}}\setminus\mathbb{F}_{q}\). All its conjugate lie on the same line if and only if they lie on a \(\mathbb{F}_{q}\) line. Passing through each \(\mathbb{F}_{q^{3}}\setminus\mathbb{F}_{q}\) point, there is at most one \(\mathbb{F}_{q}\) line (two points determine a line, \(P,\sigma(P)\) must all on that line). There are in total \(q^{2}+q+1\) number of \(\mathbb{F}_{q}\) lines, and each of them has \(q^{3}-q\) number of \(\mathbb{F}_{q^{3}}\setminus\mathbb{F}_{q}\) point, and there is no repetition among them. Therefore, there are in total \(q^{6}+q^{3}+1-q^{2}-q-1-(q^{2}+q+1)(q^{3}-q)\) number of choice for the first point. If the last \(\mathbb{F}_{q}\) point is on one of the line passing through two points of the first three, then it is on all those 3 lines since those 3 lines are conjugate. However, this is impossible as those 3 lines does not intersect at 1 point. So there is no restriction to the last \(\mathbb{F}_{q}\) point. \(\frac{1}{3}(q^{6}+q^{3}+1-q^{2}-q-1-(q^{2}+q+1)(q^{3}-q))(q^{2}+q+1)\) 5. \((4)\): Same as previous, the first \(\mathbb{F}_{q^{4}}\setminus(\mathbb{F}_{q}\cup\mathbb{F}_{q^{2}})\) point cannot lies on a \(\mathbb{F}_{q}\) line. If 3 of them lies on the same line, the all of them must lies on the same line, and that line is a \(\mathbb{F}_{q}\) line. \(\frac{1}{4}(q^{8}+q^{4}+1-q^{4}-q^{2}-1-(q^{2}+q+1)(q^{4}-q^{2}))\) ### Computation data for small \(q\) and conjectures for higher \(n\)-gons All the below computations are done by SageMath. We will always fix the conic \(B\) to be to \(x^{2}+y^{2}+z^{2}\) (which is equivalent to fixing the second matrix to be \(I\)), and count the number of first conic \(\mathcal{A}\) such that there is a Poncelet n-gon in the pair \((\mathcal{A},B)\). We will forget the requirement of transversal intersection and smoothness. Both of the conditions are generic, and hence they will not influence the main term when we are computing the asymptotic density. The computation is done by running over all possible symmetric matrices \(3\times 3\) matrices \(A\) over \(\mathbb{F}_{q}\), and count the numbers of \(A\)'s such that the pair \((A,I)\) satisfying the Poncelet n-gon condition. When stating the density, we divide the previous number by \(q^{5}\), so that the approximate density would that number times \(1/q\). 1. \(n=4\) (tetragon case): 2. \(n=5\) 3. \(n=6\) We see that the density for the tetragon case is approximately \(1/q\), which is similar to what we obtained in the previous section. We denote \(f_{n}(r,s)\) to be the polynomial that determine if a pair of conics \((C_{r}=rF+G,C_{s}=sF+G)\) in a pencil with generator \(F,G\) satisfies the Poncelet \(n\)-gon condition or not. For example, \(f_{3}(r,s)=H_{2}(r,s)\), \(f_{4}(r,s)=H_{3}(r,s)\), and \(f_{5}(r,s)=H_{2}H_{4}-H_{3}^{2}\). Then the integer sequence \(\mu_{n}\) in Chipalkatti's paper ([2], Page 13, Conjecture 4.1) is exactly the number of \(\mathbb{F}_{q}\) components of the corresponding \(f_{n}\) in the pencil with intersection type \((1,1,1,1)\). We already see that in section 3 that \(f_{4}=H_{3}\) in the pencil class with intersection type \((1,1,1,1)\) has 3 geometrically irreducible \(\mathbb{F}_{q}\) components. This explains \(\mu_{4}=3\) in Chipalkatti's conjecture. 
Similarly, the \(\mu_{6}=4\) in the conjecture comes from the \(4\) \(\mathbb{F}_{q}\) components of \(f_{6}(r,s)\) in the pencil class with intersection type \((1,1,1,1)\): \[f_{6}(r,s)=(s^{4}-4r^{2}s+6rs^{2}-4s^{3}+r^{2})(s^{4}+4r^{2}s-6rs^{2}-3r^{2}+4rs)\] \[\cdot(-4rs^{3}+s^{4}+6rs^{2}+r^{2}-4rs)(-4rs^{3}+3s^{4}+6rs^{2}-4s^{3}-r^{2})\] We correct Chipalkatti's conjecture regarding the proportion of conic pairs in \(\mathbb{P}^{2}(\mathbb{F}_{q})\) that satisfy the \(n\)-gon condition as follows: **Conjecture 4.1**.: _The proportion of conic pairs in \(\mathbb{P}^{2}(\mathbb{F}_{q})\) that satisfy the n-gon condition is asymptotically equal to \(\mu_{n}/q\), where the \(\mu_{n}\) are given by_ \[\mu_{5}=1,\mu_{6}=2,\,...\] _Note that we already proved that \(\mu_{3}=1\) and \(\mu_{4}=1\)._
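As a companion to the brute-force experiment of Section 4.2, the following is a minimal sketch (plain Python rather than the SageMath code actually used) specialized to the triangle case \(n=3\): fixing \(B=I\), we have \(\det(tA+I)=\det(A)t^{3}+e_{2}(A)t^{2}+\operatorname{tr}(A)t+1\), so by the numerator formula from the proof of Lemma 2.1 the Cayley condition \(H_{2}=0\) becomes \(4e_{2}(A)-\operatorname{tr}(A)^{2}=0\). As in Section 4.2, smoothness and transversality are dropped since both are generic; the observed ratio should be roughly \(1/p\).

```python
from itertools import product

def ptc_triangle_count(p):
    """Count symmetric 3x3 matrices A over F_p with 4*e2(A) - tr(A)^2 = 0 (mod p),
    i.e. the pair (A, I) meets Cayley's H_2 = 0 criterion for a Poncelet triangle."""
    count = 0
    for a, b, c, d, e, f in product(range(p), repeat=6):
        # A = [[a, b, c], [b, d, e], [c, e, f]]
        tr = a + d + f                                            # coefficient of t in det(tA + I)
        e2 = (a * d - b * b) + (a * f - c * c) + (d * f - e * e)  # coefficient of t^2
        if (4 * e2 - tr * tr) % p == 0:                           # numerator of H_2 vanishes
            count += 1
    return count

for p in (5, 7, 11):
    n = ptc_triangle_count(p)
    print(p, n, n / p**5)   # the last column should be close to 1/p
```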
2305.19746
Adaptive and Explainable Deployment of Navigation Skills via Hierarchical Deep Reinforcement Learning
For robotic vehicles to navigate robustly and safely in unseen environments, it is crucial to decide on the most suitable navigation policy. However, most existing deep reinforcement learning based navigation policies are trained with a hand-engineered curriculum and reward function, which makes them difficult to deploy in a wide range of real-world scenarios. In this paper, we propose a framework to learn a family of low-level navigation policies and a high-level policy for deploying them. The main idea is that, instead of learning a single navigation policy with a fixed reward function, we simultaneously learn a family of policies that exhibit different behaviors under a wide range of reward functions. We then train the high-level policy, which adaptively deploys the most suitable navigation skill. We evaluate our approach in simulation and the real world and demonstrate that our method can learn diverse navigation skills and adaptively deploy them. We also illustrate that our proposed hierarchical learning framework provides explainability by supplying semantics for the behavior of an autonomous agent.
Kyowoon Lee, Seongun Kim, Jaesik Choi
2023-05-31T11:19:36Z
http://arxiv.org/abs/2305.19746v2
# Adaptive and Explainable Deployment of Navigation Skills ###### Abstract For robotic vehicles to navigate robustly and safely in unseen environments, it is crucial to decide the most suitable navigation policy. However, most existing deep reinforcement learning based navigation policies are trained with a hand-engineered curriculum and reward function which are difficult to be deployed in a wide range of real-world scenarios. In this paper, we propose a framework to learn a family of low-level navigation policies and a high-level policy for deploying them. The main idea is that, instead of learning a single navigation policy with a fixed reward function, we simultaneously learn a family of policies that exhibit different behaviors with a wide range of reward functions. We then train the high-level policy which adaptively deploys the most suitable navigation skill. We evaluate our approach in simulation and the real world and demonstrate that our method can learn diverse navigation skills and adaptively deploy them. We also illustrate that our proposed hierarchical learning framework presents explainability by providing semantics for the behavior of an autonomous agent. ## I Introduction Autonomous navigation of mobile robots has gained much interest due to a wide variety of important applications in industry. These include assistive robots [1], a last-mile delivery [2], a guidance at airports [3], and warehouse navigation [4], among others. To navigate a robot to a desired goal in complex, cluttered, and highly dynamic environments with multiple static and dynamic obstacles, a reliable and robust navigation policy is essential. Traditional approaches to address the navigation problem typically consist of a series of modules, each of which is specifically designed for solving a particular sub-task of navigation problems such as human detection, prediction of future trajectories of humans, and path planning [5, 6]. However, these approaches require extensive computational efforts and rely on manually engineered parameters, which limit the ability of mobile robots to operate in previously unseen environments, or across different robotic platforms, as the modular system suffers from a lack of generalization and sub-optimal performance [7]. In this regard, deep reinforcement learning (DRL) approaches have recently been proposed as learning based solutions for autonomous navigation, which directly map raw sensor observations to controls and have shown remarkable results compared to traditional methods and robustness to sensor noise [8, 9, 10, 11]. However, existing DRL-based approaches require hand-engineered reward functions that can be exceedingly time consuming. Moreover, in challenging scenarios, DRL-based approaches with a fixed reward function easily get stuck in local optima which makes complex situations including navigating across narrow corridors or dense crowds and turning corners, unsolvable. Another potential problem of existing DRL-based robot navigation is that a policy represented by a deep neural network often lacks transparency and cannot provide explanations on decision-making reasons. Recent works investigate DRL methods which are interpretable and provide decision-making reasons [12, 13]. However, they are only applicable in video games or in simple simulation environments. Another line of research provides explanations in real-world robots by highlighting input features that a deep network most refers to when making a decision [14]. 
Nevertheless, this approach has the limitation that it is not directly relevant to increasing the performance of the policy model. To address the aforementioned problems, we develop a framework of learning to adaptively deploy navigation skills that are explainable with hierarchical reinforcement learning (HRL).1 Specifically, our approach can be divided into two phases. The first phase trains a family of low-level navigation policies, each of which is optimized for a particular skill vector in a continuous representation. This skill vector is associated with a corresponding reward function. For example, a skill vector that imposes an additional penalty for the robot to be more cautious with obstacles can exhibit socially Fig. 1: An overview of the proposed hierarchical framework of navigation skills. A high-level policy invokes low-level navigation skills from a raw sensory observation. A low-level policy is adopted from a continuous skill vector among the infinite number of skills, which then drives a robot while understanding the context. aware navigation behavior, while a skill vector encouraging the robot getting closer to its goal may result in more aggressive behavior. In the second phase, we train the high-level policy which deploys the most suitable navigation skill for every time step. The main contributions of this work can be summarized as follows: * Proposal of a hierarchical reinforcement learning approach which learns diverse navigation skills and deploys them. * Extensive evaluation of our approach on various scenarios including a real-world robot navigation which demonstrates effectiveness and explainability of our approach. ## II Related Work ### _Reinforcement Learning for Navigation_ Recent advances in deep reinforcement learning have enabled the solving of complex navigation tasks from raw sensory measurements. Long et al. [8] propose a DRL-based multi-agent navigation framework to train collision avoidance policy with proximal policy optimization (PPO) [15] using 2D LiDAR observation. This DRL-based collision avoidance policy is further integrated into a hybrid control framework [16] and a conventional global planner from robot operating system (ROS) [17]. A similar approach is proposed in [10], however, additional ego-safety and social-safety rewards are used to consider human-awareness. Apart from using 2D LiDAR observation, navigation that learns from RGB images has also been studied [18, 19]. ### _Reinforcement Learning with Sparse Rewards_ Existing approaches which handle sparse rewards in reinforcement learning (RL) can be divided into two categories, curriculum reinforcement learning and reward shaping. In the context of curriculum reinforcement learning, the implicit curriculum uses goal relabeling techniques which randomly sample goals from failed trajectories [20, 21]. The explicit curriculum considers the difficulty of goals during training, e.g., by generating the goal further from the start in navigation or increasing the number of obstacles [22, 8]. The reward shaping technique modifies the reward signal by learning a parameterized dense reward function [23]. Sibling rivalry [24] uses a model-free, dynamic reward shaping method that preserves optimal policies on sparse-rewards tasks. On the other hand, AutoRL [25] uses large-scale hyperparameter optimization to shape the reward. 
### _Hybrid Control framework for Navigation_ To further improve the robustness and effectiveness of navigation policy by leveraging the strength of multiple local planners, a hybrid control framework has been widely adopted which uses a high-level switching controller to manage a set of low-level control rules. For example, Jin et al. [26] propose hand-designed switching rules to combine goal-navigation and obstacle avoidance, Shucker et al. [27] introduce switching rules to handle challenging cases where collisions or noises cannot be handled appropriately, and Fan et al. [16] classify the scenarios into three cases by considering a robot's sensor measurement and using both a PID and DRL-based controller. Most similar to our work, Kastner et al. [28] propose a hierarchical navigation system integrating model-based and learning-based local planners by training an agent which decides between multiple local planners. However, the approach uses only one DRL-based local planner with the traditional model-based planners. As a result, navigation systems following this approach are often strongly dependent on deployed model-based planners. Instead, we train a family of policies that exhibit different behaviors and focus on how to adaptively deploy them. ## III Problem Formulation HRL decomposes a general RL into a hierarchy of multiple sub-problems which themselves are RL problems [29]. Higher-level problems abstract an original RL problem and adopt which lower-level problems to solve, while lower-level problems are defined to solve the original problem given an abstraction from the higher-level problems. In this work, we decompose a problem of learning navigation skills into a hierarchy of two sub-problems, a high-level problem and a low-level problem. The high-level problem is to find a macro policy which understands the context of a navigation environment and adopts which navigation skills to invoke. The macro policy observes a current state and demonstrates the context that the agent encounters. The low-level problem is to discover a set of navigation skills that drives a mobile robot to reach a target position. Each low-level navigation skill issues a command by giving primitive actions such as motor velocities, conditioned on the context distinguished by the macro policy. We formalize each sub-problem as a Markov decision process (MDP), in particular, a goal-conditioned MDP [30] for learning general-purpose navigation skills. A high-/low-level problem is modeled as an MDP \(\langle\mathcal{S}^{\mathrm{high/low}},\mathcal{A}^{\mathrm{high/low}}, \mathcal{G},P^{\mathrm{high/low}},R_{g}^{\mathrm{high/low}},\gamma\rangle\), where \(\mathcal{S}\) is the set of states, \(\mathcal{G}\) is the set of goals, \(\mathcal{A}\) is the set of actions, \(P:\mathcal{S}\times\mathcal{A}\times\mathcal{S}\rightarrow[0,+\infty)\) is the transition probability, \(R_{g}:\mathcal{S}\times\mathcal{A}\times\mathcal{A}\rightarrow\mathbb{R}\) is the goal-conditioned reward function, \(\gamma\in[0,1]\) is the discount factor, and each superscript \(\mathrm{high}\) and \(\mathrm{low}\) represents a notation for the high-level and the low-level problem respectively. To build the hierarchy, an action from the high-level agent \(a^{\mathrm{high}}\in\mathcal{A}^{\mathrm{high}}\) is observed as a state by the low-level agent, where this high-level action implies the context of the environment. 
In addition to \(a^{\mathrm{high}}\), the low-level agent observes the same state as the high-level agent, which results in \(s^{\mathrm{low}}=(s^{\mathrm{high}},a^{\mathrm{high}})\in\mathcal{S}^{ \mathrm{low}}\). While interacting with the environment and observing the state \(s^{\mathrm{low}}\), each low-level agent receives a reward from a reward function \(R_{g}^{\mathrm{low}}(s^{\mathrm{low}},a^{\mathrm{low}},a^{\mathrm{high}})\) that consists of multiple reward terms which we will describe in more detail in the following section. To effectively train each low-level agent to present discriminative primitive actions conditioned on \(a^{\mathrm{high}}\), they are trained with different reward functions by weighting each reward term in \(R_{g}^{\mathrm{low}}\) by \(a^{\mathrm{high}}\). Under this problem definition, the optimal low-level policies and the high-level policy in the hierarchy are achieved by an off-the-shelf RL algorithm which maximizes a corresponding value function formulated as follows: \[V^{\mathrm{low}}(s^{\mathrm{low}},g|\pi^{\mathrm{high}})\triangleq \mathbb{E}_{\begin{subarray}{c}a_{t}^{\mathrm{low}}\sim\pi^{\mathrm{low}}(a_{t} ^{\mathrm{low}}|s_{t}^{\mathrm{low}},g),\\ a_{t}^{\mathrm{high}}\sim\pi^{\mathrm{high}}(a_{t}^{\mathrm{high}}|s_{t}^{ \mathrm{high}},g),\\ s_{t+1}^{\mathrm{high}}\sim P(s_{t+1}^{\mathrm{high}}|s_{t}^{\mathrm{low}},a_{t }^{\mathrm{low}})\end{subarray}}\] \[\Bigg{[}\sum_{t=0}^{\infty}\gamma^{t}R_{g}^{\mathrm{low}}(s_{t}^{ \mathrm{low}},a_{t}^{\mathrm{low}},a_{t}^{\mathrm{high}})\Big{|}s_{0}^{\mathrm{ low}}=s^{\mathrm{low}}\Bigg{]}, \tag{1}\] \[V^{\mathrm{high}}(s^{\mathrm{high}},g|\pi^{\mathrm{low}})\triangleq \mathbb{E}_{\begin{subarray}{c}a_{t}^{\mathrm{high}}\sim\pi^{\mathrm{high}}(a_ {t}^{\mathrm{high}}|s_{t}^{\mathrm{high}},g),\\ a_{t}^{\mathrm{high}}\sim\pi^{\mathrm{low}}(a_{t}^{\mathrm{low}}|s_{t}^{ \mathrm{low}},a_{t}^{\mathrm{high}}),\\ s_{t+1}^{\mathrm{high}}\sim P(s_{t+1}^{\mathrm{high}}|s_{t}^{\mathrm{high}},a_ {t}^{\mathrm{high}})\end{subarray}}\] \[\Bigg{[}\sum_{t=0}^{\infty}\gamma^{t}R_{g}^{\mathrm{high}}(s_{t}^ {\mathrm{high}},a_{t}^{\mathrm{high}},a_{t}^{\mathrm{low}})\Big{|}s_{0}^{ \mathrm{high}}=s^{\mathrm{high}}\Bigg{]}. \tag{2}\] ## IV Approach In the hierarchy, the high-level agent and the low-level agent interact with each other. To train agents in the hierarchy, we decompose a training procedure into two phases. In this section, we describe how each component of MDP is defined and how the agent is trained for each training phase. ### _Learning a Family of Navigation Skills_ In the first phase, we learn a family of low-level navigation policies, each optimized for a particular skill vector in continuous representation. We present a detailed setup of phase 1 as follows: #### Iv-A1 State space A state \(s_{t}^{\mathrm{low}}\) at time step \(t\) consists of four components: the raw 2D LiDAR measurements \(s_{t}^{l}\in\mathbb{R}^{512}\) casting 512 rays over 360\({}^{\circ}\) with up to 6m range, the linear and angular velocity of the robot \(s_{t}^{e}\in\mathbb{R}^{2}\), the relative goal state \(s_{t}^{g}\in\mathbb{R}^{2}\) represented in the polar coordinate, and the high-level action \(a_{t}^{\mathrm{high}}\in\mathbb{R}^{5}\). We call \(a_{t}^{\mathrm{high}}\)_a skill vector_ since it induces a specific behavior by weighting a number of reward terms. 
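As a concrete illustration of the observation just described, here is a minimal sketch of how \(s^{\mathrm{low}}\) could be assembled under the stated dimensions (512 LiDAR ranges up to 6 m, a 2-D velocity, a 2-D polar goal, and the 5-D skill vector); the function and argument names are illustrative placeholders, not the authors' code.

```python
# A minimal sketch of assembling the low-level observation s_low = (s_high, a_high),
# where s_high = (LiDAR scan, robot velocity, relative goal) as described above.
import numpy as np

def build_low_level_state(lidar_ranges, velocity, goal_polar, skill_vector):
    lidar = np.clip(np.asarray(lidar_ranges, dtype=np.float32), 0.0, 6.0)  # (512,) ranges up to 6 m
    vel = np.asarray(velocity, dtype=np.float32)        # (2,) linear and angular velocity
    goal = np.asarray(goal_polar, dtype=np.float32)     # (2,) goal in polar coordinates
    skill = np.asarray(skill_vector, dtype=np.float32)  # (5,) skill vector a_high
    assert lidar.shape == (512,) and vel.shape == (2,)
    assert goal.shape == (2,) and skill.shape == (5,)
    return np.concatenate([lidar, vel, goal, skill])    # (521,) low-level observation
```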
#### IV-A2 Action space Considering nonholonomic kinematic constraints, an action \(a_{t}^{\mathrm{low}}\in\mathbb{R}^{2}\) is composed of the linear velocity \(v_{t}\in[0,0.5]\) and the angular velocity \(w_{t}\in[-0.64,0.64]\), which are normalized into \([-1,1]\) in the neural network. #### IV-A3 Reward function Our goal is to learn a family of navigation policies with a diverse set of behavioral characteristics. Thus, we set the reward function parameterized by the skill vector \(a^{\mathrm{high}}\) as follows: \[R_{t}^{\mathrm{low}}(s_{t}^{\mathrm{low}},a_{t}^{\mathrm{low}},a_{t}^{\mathrm{high}})=r_{\mathrm{success}}+a^{\mathrm{high}}[r_{\mathrm{collision}}\;r_{\mathrm{progress}}\;r_{v}\;r_{w}\;r_{\mathrm{safety}}]^{\top}.\] The first term, \(r_{\mathrm{success}}\), is the success reward, which is 1 if the goal is achieved and 0 otherwise. The collision reward, \(r_{\mathrm{collision}}\), is -1 when the robot collides with obstacles and 0 otherwise. The progress reward, \(r_{\mathrm{progress}}\), is the difference between the previous distance to the goal and the current distance to the goal. The driving reward, \(r_{v}\), is the positive linear speed, which encourages the robot to drive as fast as possible. The turning reward, \(r_{w}\), is the negative angular speed for smooth driving. Finally, inspired by the social-safety zone in [10], \(r_{\text{safety}}\) is the safety reward that gives a penalty with value \(1-\frac{d_{t}}{r+0.5}\) when the robot enters the safety zone, where \(d_{t}\) is the distance to the closest obstacle and \(r\) is the robot radius. Fig. 3: Actor-critic architecture used for learning a family of navigation skills (left) and learning to deploy skills (right). \(\mathrm{Conv}\) denotes a convolutional layer and \(\mathrm{Fc}\) denotes a fully-connected layer. Fig. 2: A two-phase training procedure of the proposed hierarchical framework. A family of navigation skills is trained in the first phase, while a high-level policy is trained in the second phase. To train the infinite family of navigation skills in the first phase, a single policy network which is conditioned on a continuous skill vector is utilized. The skill vector \(a^{\mathrm{high}}\) is sampled from a fixed distribution per episode to construct \(s^{\mathrm{low}}\) along with a scan observation. The high-level agent is then trained by taking actions through the low-level skill it deploys. #### IV-A4 Network architecture The neural network architecture used for learning a family of navigation skills is illustrated on the left of Fig. 3. We follow the architecture in the prior work [8], except that the policy and the value function are additionally parameterized by the skill vector \(a^{\mathrm{high}}\), which is concatenated with the output of the third layer, the current velocity \(s_{v}\), and the relative goal state \(s_{g}\). #### IV-A5 Training procedure As each individual policy is trained under a similar reward structure, we use a single deep neural network to train the entire family of low-level policies simultaneously. Inspired by a universal policy [31], we train a policy \(\pi^{\mathrm{low}}:(s_{t}^{\mathrm{low}},a_{t}^{\mathrm{high}})\to a_{t}^{\mathrm{low}}\) that is conditioned on both the state of the robot \(s_{t}^{\mathrm{low}}\) and the skill vector \(a_{t}^{\mathrm{high}}\). 
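As an illustration of how the skill vector weights the individual terms, the following is a minimal sketch of the low-level reward computation; the exact scaling of each term and the sign conventions (e.g. whether the turning and safety penalties carry their own minus signs or absorb them into the corresponding weights) are assumptions of this sketch rather than details of the actual implementation. The example call uses one of the fixed skill vectors listed later in the experiments.

```python
import numpy as np

def low_level_reward(skill_vector, reached_goal, collided,
                     prev_goal_dist, goal_dist, v, w,
                     obstacle_dist, robot_radius):
    """Skill-weighted reward: r_success + a_high . [r_collision, r_progress, r_v, r_w, r_safety]."""
    r_success = 1.0 if reached_goal else 0.0
    r_collision = -1.0 if collided else 0.0
    r_progress = prev_goal_dist - goal_dist      # > 0 when moving toward the goal
    r_v = v                                      # reward fast forward motion
    r_w = -abs(w)                                # penalise sharp turning (sign is an assumption)
    safety_zone = robot_radius + 0.5
    # Penalty grows as the robot moves deeper into the safety zone (sign is an assumption).
    r_safety = -(1.0 - obstacle_dist / safety_zone) if obstacle_dist < safety_zone else 0.0
    terms = np.array([r_collision, r_progress, r_v, r_w, r_safety])
    return r_success + float(np.dot(skill_vector, terms))

# Example non-terminal step weighted by the skill vector [0.2, 0, 0, 0.1, 0.1];
# the step values below are placeholders chosen for illustration.
r = low_level_reward(np.array([0.2, 0.0, 0.0, 0.1, 0.1]),
                     reached_goal=False, collided=False,
                     prev_goal_dist=4.0, goal_dist=3.9,
                     v=0.4, w=0.2, obstacle_dist=0.6, robot_radius=0.3)
print(round(r, 3))  # small negative value for this non-terminal step
```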
In contrast to existing approaches which learn a single policy, at the beginning of each episode we sample a skill vector from a predefined distribution and fix it during the rollout to train the corresponding policy \(\pi_{a_{t}^{\mathrm{high}}}:s_{t}^{\mathrm{low}}\to a_{t}^{\mathrm{low}}\), which we call a skill. ### _Learning to Deploy Skills_ The second phase of our approach is to learn a high-level policy to adaptively deploy the navigation skills learned in phase 1. The following provides a detailed formulation. #### IV-B1 State space A state \(s_{t}^{\mathrm{high}}\) is composed of three components, \(s_{t}^{l}\), \(s_{t}^{v}\) and \(s_{t}^{g}\), which are used in the first phase. #### IV-B2 Action space An action \(a_{t}^{\mathrm{high}}\in\mathbb{R}^{5}\) is a skill vector that determines the learned low-level behavior characteristic. #### IV-B3 Reward function The objective of the second phase is to minimize the time to reach the desired goal, and we use the following sparse reward function: \[R_{t}^{\mathrm{high}}(s_{t}^{\mathrm{high}},a_{t}^{\mathrm{high}},a_{t}^{\mathrm{low}})=\begin{cases}0&\text{if the goal is reached}\\ -1&\text{otherwise},\end{cases}\] where the agent gets a reward of 0 when the goal is achieved, and -1 otherwise. #### IV-B4 Network architecture We parameterize each of the policies and value functions with the architecture from the prior work [8], except that the policy predicts a particular skill vector that will be used by the low-level policy. We further feed the output of the high-level policy as an input to the low-level policy to infer the command velocity of the robot during a rollout, as shown in Fig. 3. #### IV-B5 Training procedure After training the set of navigation skills, we can perform hierarchical control by training the high-level policy. During a rollout, the high-level policy predicts the skill vector which determines the behavior characteristic. This skill vector is observed as an additional input by the low-level policy, which outputs the command velocity of the robot. Note that, in the second phase, gradients flow only through the high-level policy, not the low-level policy. Because in this sparse reward setting an informative reward is only provided when the goal is achieved, we use the hindsight experience replay (HER) technique [20], which handles the sparse-reward challenge by revisiting previous states in the replay buffer and storing additional trajectories with hindsight goals to generate reward signals. ## V Experiments and results ### _Setup_ We train the policies using an OpenAI-gym-compatible simulator that we specially design to integrate into the robot operating system (ROS), with the goal of open-sourcing it for easy comparison of various approaches, including state-of-the-art learning-based approaches and conventional ones (see Fig. 4). Fig. 4: Training environments used to train the policies, with both randomized maps and obstacles. We randomize training environments so that the agent learns diverse skills. To control the difficulty, we manipulate the number of humans, obstacles, and corridors, and the width of the corridors. Fig. 5: Unseen environments used to evaluate the policies. The robot navigates from the green square to the goal located at the red square. To create scenarios with various difficulty levels, we adopt the map generation method from [28] to generate indoor (50m\(\times\)50m) and outdoor (20m\(\times\)20m) maps. 
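Putting the two phases together, the following is a minimal sketch of how the trained levels interact at deployment time and of the sparse high-level reward above; the interfaces of `env`, `high_policy`, and `low_policy` are placeholders assumed for illustration and do not correspond to the released code.

```python
import numpy as np

def high_level_reward(reached_goal):
    """Sparse phase-2 reward: 0 once the goal is reached, -1 for every other step."""
    return 0.0 if reached_goal else -1.0

def hierarchical_step(env, high_policy, low_policy, s_high, goal):
    """One control step of the hierarchy.

    The high-level policy outputs a 5-dimensional skill vector a_high; the frozen
    skill-conditioned low-level policy turns (s_high, a_high) into a velocity command.
    """
    a_high = high_policy(s_high, goal)                 # skill vector in R^5
    s_low = np.concatenate([s_high, a_high])           # s_low = (s_high, a_high)
    v, w = low_policy(s_low, goal)                     # linear and angular velocity
    s_high_next, reached_goal, done = env.step(v, w)   # placeholder environment interface
    return s_high_next, a_high, high_level_reward(reached_goal), done
```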
We randomize the sensor noise and the static and dynamic obstacles, and we control the motion of the dynamic obstacles using a learning-based collision avoidance method that extends the prior work [8] with an \(A^{*}\) global planner so that the obstacles follow sub-goals generated from the global path, which yields more realistic long-range behavior. We terminate the simulation when the robot reaches the goal, collides with an obstacle, or exceeds 1000 time steps. To train the policies, we use soft actor-critic (SAC) [32] with an automatically tuned entropy coefficient, employing the Adam optimizer [33] with a learning rate of 1e-4 for both the actor and the critic, a target smoothing coefficient of 5e-4, and a replay buffer size of 1M. We compare our approach with a traditional model-based method, the dynamic window approach (DWA); two DRL-based approaches, LONG [8] and GRING [17]; and four agents trained with fixed skill vectors, which we term \(\pi_{\rm goal-oriented}\), \(\pi_{\rm socially-aware}\), \(\pi_{\rm safe}\), and \(\pi_{\rm mild}\), where each policy is trained with the skill vector \([0,0.2,0.1,0,0]\), \([0.1,0,0,0.1,0.2]\), \([0.2,0,0,0.1,0.1]\), and \([0.1,0.1,0.1,0.2,0]\), respectively. ### _Quantitative Analysis_ We evaluate the trained policies in three unseen large environments (see Fig. 5). These scenarios include (a) a corridor with 4 dynamic obstacles, (b) a building with 9 dynamic obstacles, and (c) a shopping mall with 8 dynamic obstacles, which are designed to evaluate the abilities of the agent, including global planning, collision avoidance, and safe navigation, among other real-world situations. We validate the effectiveness of our method with four metrics: success rate, collision rate, arrival time, and path length. Fig. 6 shows the evaluation results with 100 episodes in each scenario. Fig. 6: Quantitative results in three unseen environments with 100 episodes. We evaluate the proposed method by comparing four metrics: success rate, collision rate, time to reach the goal, and path length. Our hierarchical method (blue bar) presents comparable performance in all evaluation environments. Fig. 7: Results of the qualitative analysis. We compare the average value of the skill vector with standard deviation among the three evaluation environments. The high-level agent deploys slow but careful skills in the first environment, while it deploys smooth but fast skills in the other environments. Fig. 8: Trajectories of our approach in the unseen building environment (top) and skill vectors in particular situations (bottom). To evaluate only the performance of the learned navigation policies, none of the approaches uses a global planner, which is frequently used to handle the long-range navigation challenge. The traditional model-based approach, DWA, fails in dense crowd scenarios. Further, in all evaluation environments, our approach shows comparable performance to the other baselines, including the DRL-based agents and the agents trained with fixed skill vectors, except for the goal-oriented policy. LONG and GRING present comparable results in densely crowded environments that a local navigation policy can solve, but their performance drops significantly in situations where a global navigation policy is required. An interesting observation is that \(\pi_{\rm goal-oriented}\) performs well in long-range and dynamic environments without the crash or safety rewards typically used in prior works. We assume that this is due to the goal-oriented reward, which encourages exploration and goal achievement. 
It would be an appealing research direction to study the role of the reward terms more rigorously. ### _Qualitative Analysis_ To qualitatively analyze the effectiveness of the proposed approach, we interpret the average skill vector along the trajectory for each unseen evaluation environment in Fig. 7. In the corridor environment, there is a long wall, and the robot needs to take a long detour to successfully reach the goal. To this end, the high-level agent deploys skills that produce slow but careful motor commands with a relatively high crash skill value. On the other hand, in the other two environments, the building and the shopping mall, there are few static obstacles on the straight line from the initial position of the robot to the target position. In these environments, the skill vector has large progress and forward values, so the robot exhibits agile movement. In addition, the high-level agent produces smooth motor commands through the smooth skill value. The average discomfort skill value in the last two crowded environments is relatively large, which validates the effectiveness of our approach. The whole trajectories of our approach in the building environment are shown in Fig. 8, where green lines represent successful trajectories. The skill vectors demonstrate the interpretable deployment of the learned navigation skills. At the beginning, where there are few obstacles around the robot as shown in Fig. 8(a), the progress and forward skill values are large for fast navigation towards the goal. In the middle of the navigation, shown in (b), though the progress and forward skill values are still large, the discomfort value increases when the robot is surrounded by humans, which leads the robot to avoid collisions with them. When the robot enters the narrow corridor in (c), the smooth value increases to prevent oscillatory behaviors. Around the goal, which is close to obstacles in (d), the crash value is high so that the robot arrives at the goal safely. The skill vector presents the same tendency across the other evaluation scenarios. We refer the reader to the videos in the supplementary material for further information. ### _Real-World Experiment_ To demonstrate the validity of the proposed approach, we collect expert trajectories with our robot, which has two 270\({}^{\circ}\) LiDARs that are merged to cover 360\({}^{\circ}\). We then evaluate how well the high-level policy understands the context in a real-world shopping mall. Fig. 9 shows the skill vectors generated by the high-level policy in different scenarios. The high-level policy encourages the robot to keep a safe distance from crowds and to decrease its speed in narrow corridors. On the other hand, the goal-oriented navigation skill with high speed is preferred in open space. ## VI Conclusion In this paper, we propose a hierarchical learning framework to train reinforcement-learning-based navigation policies that provides explainability through the semantics of the agent's behavior and reduces the effort of designing hand-engineered reward functions. The evaluation results in unseen environments, including real-world cases, demonstrate the adaptive and explainable deployment of the learned navigation skills. 
## Acknowledgment This work was supported by the Industry Core Technology Development Project, 20005062, Development of Artificial Intelligence Robot Autonomous Navigation Technology for Agile Movement in Crowded Space, funded by the Ministry of Trade, Industry & Energy (MOTIE, Republic of Korea) and was supported by Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. 2022-0-00984, Development of Artificial Intelligence Technology for Personalized Plug-and-Play Explanation and Verification of Explanation, No.2019-0-00075, Artificial Intelligence Graduate School Program (KAIST)). Fig. 9: Demonstration of our approach in unseen real scenarios. The scenarios are navigating in (a) cluttered environment, (b) narrow corridor, (c) dense crowd and (d) open space. The skill vectors are predicted by transferred high-level policy.
2309.11971
Tangents and pointwise Assouad dimension of invariant sets
We study the fine scaling properties of sets satisfying various weak forms of invariance. Our focus is on the interrelated concepts of (weak) tangents, Assouad dimension, and a new localized variant which we call the pointwise Assouad dimension. For general attractors of possibly overlapping bi-Lipschitz iterated function systems, we establish that the Assouad dimension is given by the Hausdorff dimension of a tangent at some point in the attractor. Under the additional assumption of self-conformality, we moreover prove that this property holds for a subset of full Hausdorff dimension. We then turn our attention to an intermediate class of sets: namely planar self-affine carpets. For Gatzouras--Lalley carpets, we obtain precise information about tangents which, in particular, shows that points with large tangents are very abundant. However, already for Bara\'nski carpets, we see that more complex behaviour is possible.
Antti Käenmäki, Alex Rutar
2023-09-21T11:05:41Z
http://arxiv.org/abs/2309.11971v1
# Tangents and pointwise Assouad dimension of invariant sets ###### Abstract. We study the fine scaling properties of sets satisfying various weak forms of invariance. Our focus is on the interrelated concepts of (weak) tangents, Assouad dimension, and a new localized variant which we call the pointwise Assouad dimension. For general attractors of possibly overlapping bi-Lipschitz iterated function systems, we establish that the Assouad dimension is given by the Hausdorff dimension of a tangent at some point in the attractor. Under the additional assumption of self-conformality, we moreover prove that this property holds for a subset of full Hausdorff dimension. We then turn our attention to an intermediate class of sets: namely planar self-affine carpets. For Gatzouras-Lalley carpets, we obtain precise information about tangents which, in particular, shows that points with large tangents are very abundant. However, already for Baranski carpets, we see that more complex behaviour is possible. ###### Contents * 1 Introduction * 1.1 Weak tangents, tangents, and pointwise Assouad dimension * 1.2 Main results and outline of paper * 1.3 Some variants for future work * 1.4 Notation * 2 Tangents and pointwise Assouad dimension * 2.1 Tangents and weak tangents * 2.2 Level sets and measurability * 2.3 Tangents and pointwise dimensions of general sets * 2.4 Tangents of dynamically invariant sets * 3 Assouad dimension of non-autonomous self-similar sets * 3.1 Non-autonomous self-similar sets * 3.2 Metric trees * 3.3 Reduction to symbolic representation * 3.4 Regularity properties of Assouad dimension * 3.5 Proof of the Assouad dimension formula * 4 Tangent structure and dimension of Gatzouras-Lalley carpets * 4.1 Gatzouras-Lalley and Baranski carpets * 4.2 Approximate squares and symbolic slices * 4.3 Tangents of Gatzouras-Lalley carpets * 4.4 Upper bounds for the pointwise Assouad dimension * 4.5 Dimensions of level sets of pointwise Assouad dimension * 5 Tangent structure and dimension of Baranski carpets * 5.1 Dimensions and decompositions of Baranski carpets * 5.2 Pointwise Assouad dimension along uniformly contracting sequences * 5.3 Baranski carpets with few large tangents ## 1 Introduction One of the most fundamental concepts at the intersection of analysis and geometry is the notion of a _tangent_. For sets exhibiting a high degree of local regularity--such as manifolds, or rectifiable sets--at almost every point in the set and at all sufficiently high resolutions, the set looks essentially linear. Moreover, the concept of a tangent is particularly relevant in the study of a much broader class of sets: those equipped with some form of dynamical invariance. This relationship originates in the pioneering work of Furstenberg, where one associates to a set a certain dynamical system of "zooming in". Especially in the past two decades, the study of tangent measures has played an important role in the resolution of a number of long-standing problems concerning sets which look essentially the same at all small scales; see, for example, [11, 12, 13, 14]. In contrast, (weak) tangents also play an important role in the geometry of metric spaces. One of the main dimensional quantities in the context of embeddability properties of metric spaces is the Assouad dimension, first introduced in [10]. It turns out that the Assouad dimension, which bounds the worst-case scaling at all locations and all small scales, is precisely the maximal Hausdorff dimension of weak tangents, i.e. 
sets which are given as a limit of small pieces of enlarged copies of the original set; see [15]. We refer the reader to the books [12, 13, 14] for more background and context on the importance of Assouad dimension in a variety of diverse applications. In this document, we study the interrelated concepts of tangents and Assouad dimension, with an emphasis on sets with a weak form of dynamical invariance. Our motivating examples include attractors of iterated function systems where the maps are affinities (or even more generally bi-Lipschitz contractions); or the maps are conformal and there are substantial overlaps. In both of these situations, the sets exhibit a large amount of local inhomogeneity. As we will see, these classes of sets exhibits a rich variety of behaviour while still retaining some fundamental properties. ### Weak tangents, tangents, and pointwise Assouad dimension Throughout, we will work in \(\mathbb{R}^{d}\) for some \(d\in\mathbb{N}\), though many of our results hold in the broader context of bounded doubling metric spaces. We let \(B(x,r)\) denote the closed ball with centre \(x\) and radius \(r\). Now, fix a compact set \(K\subset\mathbb{R}^{d}\). We say that a compact set \(F\subset B(0,1)\) is a _weak tangent_ of \(K\subset\mathbb{R}^{d}\) if there exists a sequence of similarity maps \((T_{k})_{k=1}^{\infty}\) with similarity ratios \(\lambda_{k}\) diverging to infinity such that \(0\in T_{k}(K)\) and \[F=\lim_{k\to\infty}T_{k}(K)\cap B(0,1)\] with respect to the Hausdorff metric on compact subsets of \(B(0,1)\). We denote the set of weak tangents of \(K\) by \(\operatorname{Tan}(K)\). More strongly, we say that \(F\) is a _tangent of \(K\) at \(x\)_ if \(F\) is a weak tangent and the similarity maps \(T_{k}\) are homotheties which map \(x\) to \(0\); i.e. \(T_{k}(y)=r(y-x)\). We denote the set of tangents of \(K\) at \(x\) by \(\operatorname{Tan}(K,x)\). We refer the reader to SS2.1 for precise definitions. Closely related to the notion of a weak tangent is the _Assouad dimension_ of \(K\), which is the dimensional quantity \[\dim_{\operatorname{A}}K=\inf\Bigl{\{}s:\exists C>0\,\forall 0<r \leq R<1\,\forall x\in K\] \[N_{r}(B(x,R)\cap K)\leq C\Bigl{(}\frac{R}{r}\Bigr{)}^{s}\Bigr{\}}.\] Here, for a general bounded set \(F\), \(N_{r}(F)\) is the smallest number of closed balls with radius \(r\) required to cover \(F\). It always holds that \(\dim_{\operatorname{H}}K\leq\overline{\dim}_{\operatorname{B}}K\leq\dim_{ \operatorname{A}}K\), where \(\dim_{\operatorname{H}}K\) and \(\overline{\dim}_{\operatorname{B}}K\) denote the Hausdorff and upper box dimensions respectively. In some sense, the Assouad dimension is the largest reasonable notion of dimension which can be defined using covers. Continuing this analogy, we also introduce a localized version of the Assouad dimension which we call the _pointwise Assouad dimension_. Given \(x\in K\), we set \[\dim_{\operatorname{A}}(K,x)=\inf\Bigl{\{}s:\exists C>0\,\exists \rho>0\,\forall 0<r\leq R<\rho\] \[N_{r}(B(x,R)\cap K)\leq C\Bigl{(}\frac{R}{r}\Bigr{)}^{s}\Bigr{\}}.\] Figure 1. Some self-embeddable sets, which are attractors of the iterated function systems depicted in Figure 2. The choice of \(\rho>0\) in the definition of \(\dim_{\mathrm{A}}(K,x)\) ensures a sensible form of bi-Lipschitz invariance: if \(f\colon K\to K^{\prime}\) is bi-Lipschitz, then \(\dim_{\mathrm{A}}(K,x)=\dim_{\mathrm{A}}(f(K),f(x))\). 
It is immediate from the definition that \[\dim_{\mathrm{A}}(K,x)\leq\dim_{\mathrm{A}}K.\] Moreover, if for instance \(K\) is Ahlfors-David regular, then \(\dim_{\mathrm{A}}(K,x)=\dim_{\mathrm{A}}K\) for all \(x\in K\). We note here that an analogous notion of pointwise Assouad dimension for measures was introduced recently in [1]. An observation which goes back essentially to Furstenberg, but was observed explicitly in [11], is that the Assouad dimension is characterized by weak tangents: \[\dim_{\mathrm{A}}K=\max\{\dim_{\mathrm{H}}F:F\in\mathrm{Tan}(K)\}.\] Motivated by this relationship, our primary goal in this document is to address the following questions: * Does it hold that \(\dim_{\mathrm{A}}(K,x)=\max\{\dim_{\mathrm{H}}F:F\in\mathrm{Tan}(K,x)\}\)? * Is there necessarily an \(x_{0}\in K\) so that \(\dim_{\mathrm{A}}K=\dim_{\mathrm{H}}F\) for some \(F\in\mathrm{Tan}(K,x_{0})\)? If not, is there an \(x_{0}\in K\) so that \(\dim_{\mathrm{A}}K=\dim_{\mathrm{A}}(K,x_{0})\)? * What is the structure of the level set of pointwise Assouad dimension \(\{x\in K:\dim_{\mathrm{A}}(K,x)=\alpha\}\) for some \(\alpha\geq 0\)? In the following section, we discuss our main results and provide some preliminary answers which indicate that these questions are, in general, quite subtle. ### Main results and outline of paper We begin by stating some easy properties of the pointwise Assouad-type dimensions for general compact sets \(K\subset\mathbb{R}^{d}\). Firstly, by Proposition 2.2, \[\sup\{\overline{\dim}_{\mathrm{B}}F:F\in\mathrm{Tan}(K,x)\}\leq\dim_{\mathrm{A }}(K,x)\leq\dim_{\mathrm{A}}K\] for all \(x\in K\) and, by Proposition 2.8 (ii), there is always an \(x\in K\) so that \(\overline{\dim}_{\mathrm{B}}K\leq\dim_{\mathrm{A}}(K,x)\). However, in general one cannot hope for more than this: an example in [13] already provides a compact set \(K\subset\mathbb{R}\) such that \(\dim_{\mathrm{A}}K=1\) but \(\dim_{\mathrm{A}}(K,x)=0\) for all \(x\in K\) (see Example 2.10 for more detail); and moreover, in Example 2.11, we construct a compact set \(K\subset\mathbb{R}\) with a point \(x\in K\) so that \(\dim_{\mathrm{A}}(K,x)=1\) but each \(F\in\mathrm{Tan}(K,x)\) consists of at most two points. However, many commonly studied families of "fractal" sets have a form of dynamical invariance, which is far from the case for general sets. As a result, it is of interest to determine general conditions under which the Assouad dimension is actually attained as the pointwise Assouad dimension at some point. To this end, we make the following definition. **Definition 1.1**.: We say that a compact set \(K\) is _self-embeddable_ if for each \(z\in K\) and \(0<r\leq\operatorname{diam}K\), there is a constant \(a=a(z,r)>0\) and a function \(f\colon K\to B(z,r)\cap K\) so that \[ar|x-y|\leq|f(x)-f(y)|\leq a^{-1}r|x-y| \tag{1.1}\] for all \(x,y\in K\). We say that \(K\) is _uniformly self-embeddable_ if the constant \(a(z,r)\) can be chosen independently of \(z\) and \(r\). The class of self-embeddable sets is very broad and includes, for example, attractors of every possibly overlapping iterated function system \(\{f_{i}\}_{i\in\mathcal{I}}\), where \(\mathcal{I}\) is a finite index set and \(f_{i}\) is a bi-Lipschitz map from \(\mathbb{R}^{d}\) to \(\mathbb{R}^{d}\) with Lipschitz constant bounded away from \(1\). The class of _uniformly_ self-embeddable sets includes the attractors of finite overlapping self-conformal iterated function systems. 
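To illustrate Definition 1.1 with an explicit constant (a computation not carried out elsewhere in the text), consider the middle-thirds Cantor set \(C\subset[0,1]\), so that \(\operatorname{diam}C=1\). Given \(z\in C\) and \(0<r\leq 1\), choose \(n\geq 0\) with \(3^{-n}\leq r<3^{-n+1}\) and let \(I\) be the level-\(n\) construction interval containing \(z\). The canonical similarity \(f\colon C\to C\cap I\) has ratio \(3^{-n}\in(r/3,r]\) and satisfies \(C\cap I\subset B(z,r)\cap C\), so \[\tfrac{1}{3}r|x-y|\leq|f(x)-f(y)|\leq r|x-y|\leq 3r|x-y|\qquad\text{for all }x,y\in C,\] and hence \(C\) is uniformly self-embeddable with \(a=1/3\).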
It is perhaps useful to compare uniform self-embeddability with quasi self-similarity, as introduced by Falconer [14]. Our assumption is somewhat stronger since we also require the upper bound to hold in (1.1). This assumption is critical to our work since, in general, maps satisfying only the lower bound can decrease Assouad dimension. We also note that uniform self-embeddability is the primary assumption in [1, Theorem 2.1]. Within this general class of sets, we establish the following result which guarantees the existence of at least one large tangent under self-embeddability, and an abundance of tangents under uniform self-embeddability. **Theorem A**.: _Let \(K\subset\mathbb{R}^{d}\) be compact and self-embeddable. Then:_ 1. \(\overline{\dim_{\mathrm{B}}}\,K\leq\dim_{\mathrm{A}}(K,x)\) _for all_ \(x\in K\)_._ 2. _There is an_ \(x\in K\) _and_ \(F\in\mathrm{Tan}(K,x)\) _so that_ \(\mathcal{H}^{\dim_{\mathrm{A}}K}(F)>0\)_. In particular,_ \(\dim_{\mathrm{H}}F=\dim_{\mathrm{A}}(K,x)=\dim_{\mathrm{A}}K\)_._ _If \(K\) is uniformly self-embeddable, then there is a constant \(c>0\) so that_ \[\dim_{\mathrm{H}}\{x\in K:\exists F\in\mathrm{Tan}(K,x)\text{ with }\mathcal{H}^{\dim_{\mathrm{A}}K}(F)\geq c\}=\dim_{\mathrm{H}}K. \tag{1.2}\] The proof of Theorem A can be obtained by combining Theorem 2.12, Proposition 2.13, and Theorem 2.14. As a special case of the result for uniformly self-embeddable sets, suppose \(K\) is the attractor of a finite self-similar IFS in the real line with Hausdorff dimension \(s<1\). In this case there is a dichotomy: either \(\mathcal{H}^{s}(K)>0\), in which case \(K\) is Ahlfors-David regular, or \(\dim_{\mathrm{A}}K=1\). In particular, (1.2) cannot be improved in general to give a set with positive Hausdorff \(s\)-measure. Beyond being of general interest, we believe this result will be a useful technical tool in the study of Assouad dimension for general attractors of bi-Lipschitz invariant sets. For instance, a common technique in studying attractors of iterated function systems is to relate the underlying geometry to symbolic properties associated with the coding space. Upper bounding the Hausdorff dimension of tangents is _a priori_ easier since one may fix in advance a coding for the point. This is the situation, for example, in [1, Theorem 5.2]. In Theorem A, we have established weak conditions which guarantee the existence of at least one large tangent, and relatively strong conditions which guarantee a set of points of full Hausdorff dimension with large tangents. A natural question to address is the following: to what extent do the results for uniformly self-embeddable sets extend to more general sets? Moreover, can we obtain even more precise information for specific families of sets? With these questions in mind, we now turn our attention to two specific families of affine iterated function systems in the plane: specifically, the planar self-affine carpets of _Gatzouras-Lalley_[13] and _Baranski_[2]. Note that these sets are self-embeddable but (except for some degenerate cases) not uniformly self-embeddable. We defer precise definitions and notation to SS4.1; see Figure 2 for examples of the generating maps in these classes. In the following statement, let \(\eta\colon\,\mathbb{R}^{2}\to\mathbb{R}\) be the orthogonal projection onto the first coordinate axis and for \(x\in\mathbb{R}^{2}\) let \(\ell_{x}\) be the vertical line containing \(x\). **Theorem B**.: _Let \(K\) be a Gatzouras-Lalley carpet. 
Then_ \[\mathcal{H}^{\dim_{\mathrm{H}}K}\big{(}\{x\in K:\dim_{\mathrm{A}}(K,x)\neq \dim_{\mathrm{A}}K\}\big{)}=0.\] _On the other hand, for any \(\dim_{\mathrm{B}}K\leq\alpha\leq\dim_{\mathrm{A}}K\),_ \[\dim_{\mathrm{H}}\{x\in K:\dim_{\mathrm{A}}(K,x)=\alpha\}=\dim_{\mathrm{H}}K.\] _Moreover, if \(\eta(K)\) satisfies the SSC, then for any \(x\in K\),_ 1. \(\max\{\dim_{\mathrm{H}}F:F\in\mathrm{Tan}(K,x)\}=\dim_{\mathrm{B}}\eta(K)+ \dim_{\mathrm{A}}\ell_{x}\cap K\)_,_ 2. \(\dim_{\mathrm{A}}(K,x)=\max\{\dim_{\mathrm{B}}K,\dim_{\mathrm{B}}\eta(K)+ \dim_{\mathrm{A}}\ell_{x}\cap K\}\)_._ Of course, if \(\alpha\notin[\dim_{\mathrm{B}}K,\dim_{\mathrm{A}}K]\), then \(\{x\in K:\dim_{\mathrm{A}}(K,x)=\alpha\}=\emptyset\). It follows immediately from Theorem B that \[\dim_{\mathrm{A}}(K,x)=\max\{\dim_{\mathrm{H}}F:F\in\mathrm{Tan}(K,x)\}\] if and only if \(\dim_{\mathrm{A}}\ell_{x}\cap K\geq\dim_{\mathrm{B}}K-\dim_{\mathrm{B}}\eta(K)\). Moreover, if \(s=\dim_{\mathrm{H}}K\), then \(\mathcal{H}^{s}(K)>0\) and furthermore \(\mathcal{H}^{s}(K)<\infty\) if and only if \(K\) is Ahlfors-David regular (see [13]), in which case the results are trivial. We thus see that the majority of points, from the perspective of Hausdorff \(s\)-measure, have tangents with Hausdorff dimension attaining the Assouad dimension of \(K\). However, we still have an abundance of points with pointwise Assouad dimension giving any other reasonable value. Figure 2. Generating maps associated with a Gatzouras–Lalley and Baranski system. The parameters from the Baranski carpet correspond to the example in Corollary 5.6 with \(\delta=1/40\). Note that, in order to obtain (i) and (ii), the strong separation condition in the projection is required, or the pointwise Assouad dimension formula could fail along sequences which are "arbitrarily close together at small scales". The formula holds for more general Gatzouras-Lalley carpets if one restricts attention to points where this does not happen (see Definition 4.3). The proof of Theorem B is obtained by combining Theorem 4.12 and Theorem 4.14. The dimensional results given in (i) and (ii) exhibit a precise version of a well-known phenomenon: at small scales, properly self-affine sets and measures look like products of the projection with slices. For Gatzouras-Lalley carpets with projection onto the first coordinate axis satisfying the strong separation condition, slices through \(x\) are precisely attractors of a non-autonomous iterated function system corresponding to the sequence of columns containing the point \(x\) (such a phenomenon was exploited in a more general setting in [10]). In fact, as a pre-requisite to the proof of Theorem B, we establish a general formula for the Assouad dimension of non-autonomous self-similar sets satisfying the open set condition and with contraction ratios bounded uniformly from below. This is given in Theorem 3.7. The proof of Theorem 3.7, and indeed of Theorem B, depends on some general properties of Assouad dimension which are elementary but may be of independent interest: certain subadditive regularity properties given in Corollary 3.5, and a reformulation of the Assouad dimension in terms of disc-packing bounds given in Proposition 3.6. However, it turns out that the fact that Gatzouras-Lalley carpets have an abundance of large tangents does not extend to the non-dominated setting. 
**Theorem C**.: _There exists a Baranski carpet \(K\) such that_ \[\dim_{\mathrm{H}}\{x\in K:\dim_{\mathrm{A}}(K,x)=\dim_{\mathrm{A}}K\}<\dim_{ \mathrm{H}}K.\] In other words, the conclusion of Theorem A for uniformly self-embeddable sets does not necessarily extend beyond the uniformly self-embeddable case, even in the seemingly minor generalization consisting only of strictly diagonal self-affine functions acting in \(\mathbb{R}^{2}\). The proof of Theorem C is given in Corollary 5.6, and it follows from a more general result, Theorem 5.4, describing when Baranski carpets satisfying certain separation conditions have a large number of large tangents. Theorem 5.4 in turn follows from more general formulas for the pointwise Assouad dimension at points which are coded by sequences that contract uniformly in one direction; see Proposition 5.3 for a precise formulation. ### Some variants for future work Let \(\phi\colon(0,1)\to(0,1)\) be a fixed function. We then define the _pointwise \(\phi\)-Assouad dimension_, given by \[\dim_{\mathrm{A}}^{\phi}(K,x)=\inf\Bigl\{s:\exists C>0\,\forall 0<r<1\quad N_{r^{1+\phi(r)}}\bigl{(}B(x,r)\cap K\bigr{)}\leq Cr^{-\phi(r)s}\Bigr\}.\] It is straightforward to see that \[\dim_{\mathrm{A}}^{\phi}(K,x)=\limsup_{r\to 0}\frac{\log N_{r^{1+\phi(r)}} \big{(}B(x,r)\cap K\big{)}}{\phi(r)\log(1/r)}.\] The \(\phi\)-Assouad dimensions are an example of _dimension interpolation_ [11] and have been studied in detail in [10, 11]. In the specific case that \(\phi(R)=\frac{1}{\theta}-1\) for some \(\theta\in(0,1)\), this corresponds precisely to the Assouad spectrum [13] which (abusing notation) we may denote by \(\dim_{\mathrm{A}}^{\theta}(K,x)\). In general, we expect the properties of the pointwise Assouad spectrum to be substantially different from the properties of the pointwise Assouad dimension. For instance, for Gatzouras-Lalley carpets in particular, as \(\theta\to 0\) one would expect to only witness the box dimension at every point in \(K\), and as \(\theta\to 1\) one would expect to witness the Hausdorff dimension of a maximal tangent, even when this quantity may be smaller than the pointwise Assouad dimension. In particular, it may happen that \(\lim_{\theta\to 0}\dim_{\mathrm{A}}^{\theta}(K,x)>\lim_{\theta\to 1}\dim_{ \mathrm{A}}^{\theta}(K,x)\). For intermediate values of \(\theta\), since the pointwise Assouad spectrum is determined by exponentially separated pairs of scales, it is likely that the value would depend in an essential way on the local dimensions of Bernoulli measures projected onto the first coordinate axis. One might also consider the dual notion of the _pointwise lower dimension_, defined for \(x\in K\) by \[\dim_{\mathrm{L}}(K,x)=\sup\Bigl\{s:\exists C>0\,\exists\rho>0\,\forall 0<r\leq R<\rho\quad N_{r}(B(x,R)\cap K)\geq C\Bigl{(}\frac{R}{r}\Bigr{)}^{s}\Bigr\}.\] It is established in [13] that the lower dimension may be analogously characterized as the minimum of Hausdorff dimensions of weak tangents. Therefore, a natural question is to ask if similar results hold for the pointwise lower dimension as well. In particular, the proofs we have given for Theorem A do not immediately translate to the case of the lower dimension since overlaps may increase dimension. On the other hand, the results given in Theorem B translate directly to the analogous lower dimension counterparts with appropriate modifications. 
Finally, we note that an analogous notion for the pointwise Assouad dimension of measures was recently introduced in [1]. It would be interesting to investigate the relationship between these two notions of pointwise dimension. ### Notation Throughout, we work in \(\mathbb{R}^{d}\) equipped with the usual Euclidean metric. Write \(\mathbb{R}_{+}=(0,\infty)\). Given functions \(f\) and \(g\), we say that \(f\lesssim g\) if there is a constant \(C>0\) so that \(f(x)\leq Cg(x)\) for all \(x\) in the domain of \(f\) and \(g\). We write \(f\approx g\) if \(f\lesssim g\) and \(g\lesssim f\). ## 2 Tangents and pointwise Assouad dimension ### Tangents and weak tangents To begin this section, we precisely define the notions of tangent and weak tangent, and establish the fundamental relationship between the dimensions of tangents and the pointwise Assouad dimension. Given a set \(E\subset\mathbb{R}^{d}\) and \(\delta>0\), we denote the \(\delta\)-neighbourhood of \(E\) by \[E^{(\delta)}=\{x\in\mathbb{R}^{d}:\exists y\in E\text{ such that }|x-y|<\delta\}.\] Now given a non-empty subset \(X\subset\mathbb{R}^{d}\), we let \(\mathcal{K}(X)\) denote the set of non-empty compact subsets of \(X\) equipped with the _Hausdorff metric_ \[d_{\mathcal{H}}(K_{1},K_{2})=\max\{p_{\mathcal{H}}(K_{1};K_{2}),p_{\mathcal{H} }(K_{2};K_{1})\}\] where \[p_{\mathcal{H}}(K_{1};K_{2})=\inf\{\delta>0:K_{1}\subset K_{2}^{(\delta)}\}.\] If \(X\) is compact, then \((\mathcal{K}(X),d_{\mathcal{H}})\) is a compact metric space itself. We also write \[\operatorname{dist}(E_{1},E_{2})=\sup\{\delta>0:E_{1}^{(\delta/2)}\cap E_{2}^{ (\delta/2)}=\emptyset\}\] for non-empty sets \(E_{1},E_{2}\subset\mathbb{R}^{d}\). We say that a set \(F\in\mathcal{K}(B(0,1))\) is a _weak tangent_ of \(K\subset\mathbb{R}^{d}\) if there exists a sequence of similarity maps \((T_{k})_{k=1}^{\infty}\) with \(0\in T_{k}(K)\) and similarity ratios \(\lambda_{k}\) diverging to infinity such that \[F=\lim_{k\to\infty}T_{k}(K)\cap B(0,1)\] in \(\mathcal{K}(B(0,1))\). We denote the set of weak tangents of \(K\) by \(\operatorname{Tan}(K)\). A key feature of the Assouad dimension is that it is characterized by Hausdorff dimensions of weak tangents. This result is originally from [17, Proposition 5.7]. We refer the reader to [14, Section 5.1] for more discussion on the context and history of this result. **Proposition 2.1** ([17]).: _We have_ \[\alpha\coloneqq\dim_{\mathrm{A}}K=\max_{F\in\operatorname{Tan}(K)}\dim_{ \mathrm{H}}F.\] _Moreover, the maximizing weak tangent \(F\) can be chosen so that \(\mathcal{H}^{\alpha}(F)>0\)._ In a similar flavour, we say that \(F\) is a _tangent of \(K\) at \(x\in K\)_ if there exists a sequence of similarity ratios \((\lambda_{k})_{k=1}^{\infty}\) diverging to infinity such that \[F=\lim_{k\to\infty}\lambda_{k}(K-x)\cap B(0,1)\] in \(\mathcal{K}(B(0,1))\). We denote the set of tangents of \(K\) at \(x\) by \(\operatorname{Tan}(K,x)\). Of course, \(\operatorname{Tan}(K,x)\subset\operatorname{Tan}(K)\). Unlike in the case for weak tangents, we require the similarities in the construction of the tangent to in fact be homotheties. This choice is natural since, for example, a function \(f\colon\,\mathbb{R}\to\mathbb{R}\) is differentiable at \(x\) if and only if the set of tangents of the graph of \(f\) at \((x,f(x))\) is the singleton \(\{B(0,1)\cap\ell\}\) for some non-vertical line \(\ell\) passing through the origin. 
In practice, compactness of the group of orthogonal transformations in \(\mathbb{R}^{d}\) means this restriction will not cause any technical difficulties. We observe that upper box dimensions of tangents provide a lower bound for the pointwise Assouad dimension. **Proposition 2.2**.: _For any compact set \(K\subset\mathbb{R}^{d}\) and \(x\in K\), \(\dim_{\rm A}(K,x)\geq\overline{\dim}_{\rm B}\,F\) for any \(F\in\operatorname{Tan}(K,x)\)._ Proof.: Let \(\alpha>\dim_{\rm A}(K,x)\) and suppose \(F\in\operatorname{Tan}(K,x)\): we will show that \(\overline{\dim}_{\rm B}\,F\leq\alpha\). First, get \(C>0\) such that for each \(0<r\leq R<1\), \[N_{r}(B(x,R)\cap K)\leq C\Big{(}\frac{R}{r}\Big{)}^{\alpha}.\] Let \(\delta>0\) be arbitrary, and get a similarity \(T\) with similarity ratio \(\lambda\) such that \(T(x)=0\) and \[d_{\mathcal{H}}(T(K)\cap B(0,1),F)\leq\delta.\] Then there is a uniform constant \(M>0\) so that \[M\cdot N_{\delta}(F)\leq N_{\delta}(T(K)\cap B(0,1))=N_{\delta\lambda^{-1}}(K \cap B(x,\lambda^{-1}))\leq C\Big{(}\frac{\lambda^{-1}}{\delta\lambda^{-1}} \Big{)}^{\alpha}=C\delta^{-\alpha}.\] In other words, \(\overline{\dim}_{\rm B}\,F\leq\alpha\). One should not expect equality to hold in general: in Example 2.11, we construct an example of a compact set \(K\subset\mathbb{R}\) and a point \(x\in K\) so that \(\dim_{\rm A}(K,x)=1\) but every \(F\in\operatorname{Tan}(K,x)\) consists of at most 2 points. ### Level sets and measurability We now make some observations concerning the multifractal properties of the function \(x\mapsto\dim_{\rm A}(K,x)\). In particular, we are interested in the following quantities: \[\mathcal{A}(K,\alpha)=\{x\in K:\dim_{\rm A}(K,x)=\alpha\}\qquad \text{and}\qquad\varphi(\alpha)=\dim_{\rm H}\mathcal{A}(K,\alpha).\] We use the convention that \(\dim_{\rm H}\emptyset=-\infty\). Observe that \(\varphi\) is a bi-Lipschitz invariant. Let \(\mathcal{K}(\mathbb{R}^{d})\) denote the family of compact subsets of \(\mathbb{R}^{d}\), equipped with the Hausdorff distance \(d_{\mathcal{H}}\). We recall that \(B(x,r)\) denotes the closed ball at \(x\) with radius \(r\), and we let \(B^{\circ}(x,r)\) denote the open ball at \(x\) with radius \(r\). Given a compact set \(K\subset\mathbb{R}^{d}\), we let \(N_{r}^{\circ}(K)\) denote the minimal number of open sets with diameter \(r\) required to cover \(K\), and \(N_{r}^{\rm pack}(K)\) denote the size of a maximal centred packing of \(K\) by closed balls with radius \(r\). Then, for \(0<r_{1}\leq r_{2}\), we write \[\mathcal{N}_{r_{1},r_{2}}^{\circ}(K,x)=N_{r_{1}}^{\circ}(B(x,r_{2})\cap K) \qquad\text{and}\qquad\mathcal{N}_{r_{1},r_{2}}(K,x)=N_{r_{1}}^{\rm pack}(B^{ \circ}(x,r_{2})\cap K).\] The following lemma is standard. **Lemma 2.3**.: _Fix \(0<r_{1}\leq r_{2}\). Then:_ 1. \(\mathcal{N}_{r_{1},r_{2}}^{\circ}:\mathcal{K}(\mathbb{R}^{d})\times\mathbb{R}^{d }\to[0,d]\) _is lower semicontinuous._ 2. \(\mathcal{N}_{r_{1},r_{2}}:\mathcal{K}(\mathbb{R}^{d})\times\mathbb{R}^{d}\to[0,d]\) _is upper semicontinuous._ We can use this lemma to establish the following fundamental measurability results. **Proposition 2.4**.: _The following measurability properties hold:_ 1. _The function_ \((K,x)\mapsto\dim_{\mathrm{A}}(K,x)\) _is Baire class 2._ 2. 
\(\mathcal{A}(K,\alpha)\) _is Borel for any compact set_ \(K\)_._ Proof.: Since \(\mathbb{R}^{d}\) is doubling, \[\dim_{\mathrm{A}}(K,x)=\inf\Bigl{\{}s:\exists C>0\,\exists M\in \mathbb{N}\ \forall M\leq k\leq n\] \[\mathcal{N}_{2^{-n},2^{-k}}(K,x)\leq C2^{(n-k)s}\Bigr{\}}.\] Equivalently, we may use \(\mathcal{N}_{r_{1},r_{2}}^{\circ}\) in place of \(\mathcal{N}_{r_{1},r_{2}}\). In particular, \[\{(K,x):\dim_{\mathrm{A}}(K,x)>s\}=\bigcap_{C=1}^{\infty}\bigcap_{M=1}^{ \infty}\bigcup_{k=M}^{\infty}\bigcup_{n=k}^{\infty}(\mathcal{N}_{2^{-n},2^{-k} })^{-1}(C2^{(n-k)s},\infty).\] and \[\{(K,x):\dim_{\mathrm{A}}(K,x)<t\}=\bigcup_{C\in\mathbb{Q}\cap(0,\infty)} \bigcup_{M=1}^{\infty}\bigcap_{k=M}^{\infty}\bigcap_{n=k}^{\infty}(\mathcal{N }_{2^{-n},2^{-k}})^{-1}(-\infty,C2^{(n-k)t}).\] Thus \(\{(K,x):\dim_{\mathrm{A}}(K,x)\in(s,t)\}\) is a \(G_{\delta\sigma}\)-set, i.e. it is a countable union of sets expressible as a countable intersection of open sets, so \(\dim_{\mathrm{A}}\) is Baire class 2. Of course, the same argument also show that \(x\mapsto\dim_{\mathrm{A}}(K,x)\) is Baire class 2 for a fixed compact set \(K\), so that \(\mathcal{A}(K,\alpha)\) is \(G_{\delta\sigma}\) and, in particular, Borel. ### Tangents and pointwise dimensions of general sets We now establish some general results on the existence of tangents for general sets. These results will also play an important technical role in the following sections: for many of our applications, it is not enough to have positive Hausdorff \(\alpha\)-measure for \(\alpha=\dim_{\mathrm{A}}K\), since in general Hausdorff \(\alpha\)-measure does not interact well with the Hausdorff metric on \(\mathcal{K}\bigl{(}B(0,1)\bigr{)}\). Recall that the _Hausdorff \(\alpha\)-content_ of a set \(E\) is given by \[\mathcal{H}_{\infty}^{\alpha}(E)=\inf\left\{\sum_{i=1}^{\infty}(\operatorname{ diam}U_{i})^{\alpha}:E\subset\bigcup_{i=1}^{\infty}U_{i},U_{i}\ \text{open}\right\}.\] Of course, \(\mathcal{H}_{\infty}^{\alpha}(E)\leq\mathcal{H}^{\alpha}(E)\) and \(\mathcal{H}_{\infty}^{\alpha}(E)=0\) if and only if \(\mathcal{H}^{\alpha}(E)=0\). We recall (see, e.g. [13, Theorem 2.1]) that \(\mathcal{H}_{\infty}^{\alpha}\) is upper semicontinuous on \(\mathcal{K}(B(0,1))\). Moreover, if \(0<\mathcal{H}^{\alpha}(E)<\infty\), then the density theorem for Hausdorff content implies that \(\mathcal{H}^{\alpha}\)-almost every \(x\in E\) has a tangent with uniformly large Hausdorff \(\alpha\)-content. We use these ideas in the following proofs. We begin with a straightforward preliminary lemma which is proven, for example, in [16, Lemma 3.11]. **Lemma 2.5**.: _Let \(K\subset\mathbb{R}^{d}\) be compact. Then \(\operatorname{Tan}(\operatorname{Tan}(K))\subset\operatorname{Tan}(K)\)._ Proof.: First suppose \(E\in\operatorname{Tan}(K)\) and \(F\in\operatorname{Tan}(E)\). Write \(E=\lim_{n\to\infty}T_{n}(K)\cap B(0,1)\) and \(F=\lim_{n\to\infty}S_{n}(E)\cap B(0,1)\) for some sequences of similarities \((T_{n})\) and \((S_{n})\) with similarity ratios diverging to infinity. For each \(\epsilon>0\), let \(N\) be sufficiently large so that \[d_{\mathcal{H}}(S_{N}(E)\cap B(0,1),F)\leq\frac{\epsilon}{2}.\] Suppose \(S_{N}\) has similarity ratio \(\lambda_{N}\), and let \(M\) be sufficiently large so that \[d_{\mathcal{H}}(T_{M}(K)\cap B(0,1),E)\leq\frac{\epsilon}{2\lambda_{N}}\] It follows that \[d_{\mathcal{H}}(S_{N}\circ T_{M}(K)\cap B(0,1),F)\leq\epsilon.\] But \(\epsilon>0\) was arbitrary, as required. 
Now, given a set with positive and finite Hausdorff measure, we can always find a tangent with large Hausdorff content. **Lemma 2.6**.: _Let \(K\subseteq\mathbb{R}^{d}\) be a compact set with \(0<\mathcal{H}^{\alpha}(K)<\infty\). Then for \(\mathcal{H}^{\alpha}\)-almost every \(x\in K\), there is an \(F\in\operatorname{Tan}(K,x)\) such that \(\mathcal{H}^{\alpha}_{\infty}(F)\geq 1\)._ Proof.: By the same proof as [14, Theorem 6.2], for \(\mathcal{H}^{\alpha}\)-almost every \(x\in K\), there is a sequence of scales \((r_{n})_{n=1}^{\infty}\) converging to zero such that \[1\leq\lim_{n\to\infty}r_{n}^{-\alpha}\mathcal{H}^{\alpha}_{\infty}\big{(}B(x, r_{n})\cap K\big{)}.\] Then \[\mathcal{H}^{\alpha}\big{(}r_{n}^{-1}(K-x)\cap B(0,1)\big{)}=r_{n}^{-\alpha} \mathcal{H}^{\alpha}_{\infty}\big{(}B(x,r_{n})\cap K\big{)}\xrightarrow{n\to \infty}1.\] But Hausdorff \(\alpha\)-content is upper semicontinuous, so passing to a subsequence if necessary, \[F=\lim_{n\to\infty}\big{(}r_{n}^{-1}(K-x)\cap B(0,1)\big{)}\] satisfies \(\mathcal{H}^{\alpha}_{\infty}(F)\geq 1\). Of course, we can combine the previous two results to obtain the following improvement of Proposition 2.1. **Corollary 2.7**.: _Let \(K\) be a compact set with \(\dim_{\mathrm{A}}K=\alpha\). Then there is a weak tangent \(F\in\operatorname{Tan}(K)\) with \(\mathcal{H}^{\alpha}_{\infty}(F)\geq 1\)._ Proof.: By Proposition 2.1, there is \(E\in\operatorname{Tan}(K)\) such that \(\mathcal{H}^{\alpha}(E)>0\). By [14, Theorem 4.10], there is a compact \(E^{\prime}\subset E\) such that \(0<\mathcal{H}^{\alpha}(E^{\prime})<\infty\). Then by Lemma 2.6, there is \(F^{\prime}\in\operatorname{Tan}(E^{\prime})\) with \(\mathcal{H}^{\alpha}_{\infty}(F^{\prime})\geq 1\). But \(F^{\prime}\subset F\) for some \(F\in\operatorname{Tan}(E)\), and by Lemma 2.5, \(F\in\operatorname{Tan}(K)\) with \(\mathcal{H}^{\alpha}_{\infty}(F)\geq\mathcal{H}^{\alpha}_{\infty}(F^{\prime})\geq 1\). We now establish bounds on the pointwise Assouad dimension and tangents for general sets. **Proposition 2.8**.: _Let \(K\subset\mathbb{R}^{d}\). Then:_ 1. _If_ \(K\) _is analytic, for any_ \(s\) _such that_ \(\mathcal{H}^{s}(K)>0\)_, there is a set_ \(E\subset K\) _with_ \(\mathcal{H}^{s}(E)>0\) _so that for each_ \(x\in E\)_, there is a tangent_ \(F\in\operatorname{Tan}(\overline{K},x)\) _with_ \(\mathcal{H}^{s}_{\infty}(F)\geq 1\)_._ 2. _If_ \(K\) _is compact, there is an_ \(x\in K\) _such that_ \(\dim_{\operatorname{A}}(K,x)\geq\overline{\dim}_{\operatorname{B}}K\)_._ Proof.: The proof of (i) follows directly from Lemma 2.6, recalling that we can always find a compact subset \(E\subset K\) such that \(0<\mathcal{H}^{s}(E)<\infty\) (combine [11, Theorem 8.19] and [13, Corollary B.2.4]). We now see (ii). Let \(\overline{\dim}_{\operatorname{B}}K=t\). We first observe that for any \(r>0\), there is an \(x\in K\) so that \(\overline{\dim}_{\operatorname{B}}B(x,r)\cap K=t\). In particular, we may inductively construct a nested sequence of balls \(B(x_{k},r_{k})\) with \(\lim_{k\to\infty}r_{k}=0\) so that \(\overline{\dim}_{\operatorname{B}}K\cap B(x_{k},r_{k})=t\) for all \(k\in\mathbb{N}\). Since \(K\) is compact, take \(x=\lim_{k\to\infty}x_{k}\in K\). We verify that \(\dim_{\operatorname{A}}(K,x)\geq t\). Let \(C>0\) and \(\rho>0\) be arbitrary. Since the \(x_{k}\) converge to \(x\) and the \(r_{k}\) converge to \(0\), get some \(k\) so that \(B(x_{k},r_{k})\subset B(x,\rho)\). 
Thus for any \(\epsilon>0\) and \(r>0\) sufficiently small depending on \(\epsilon\), \[N_{r}\big{(}B(x,\rho)\cap K\big{)}\geq N_{r}\big{(}B(x_{k},r_{k})\cap K\big{)} \geq C\left(\frac{r_{k}}{r}\right)^{t-\epsilon}.\] Thus \(\dim_{\operatorname{A}}(K,x)\geq t\). **Remark 2.9**.: Note that compactness is essential in Proposition 2.8 (ii) since there are sets with \(\overline{\dim}_{\operatorname{B}}K=1\) but every point is isolated: consider, for instance, the set \(E=\{(\log n)^{-1}:n=2,3,\ldots\}\). In this case, \(\overline{E}=E\cup\{0\}\) and \(\dim_{\operatorname{A}}(\overline{E},0)=1\). This example also shows that (ii) can hold with exactly 1 point. Finally, we construct some general examples which go some way to showing that the results for general sets given in this section are sharp. **Example 2.10**.: In general, the Assouad dimension can only be characterized by weak tangents rather than by tangents. For example, consider the set \(K\) from [14, Example 2.20], defined by \[K=\{0\}\cup\{2^{-k}+\ell 4^{-k}:k\in\mathbb{N},\ell\in\{0,1,\ldots,k\}\}.\] Since \(K\) contains arithmetic progressions of length \(k\) for all \(k\in\mathbb{N}\), \(\dim_{\operatorname{A}}K=1\). However, \(\dim_{\operatorname{A}}(K,x)=0\) for all \(x\in K\) and, therefore, by Proposition 2.2, \(\dim_{\operatorname{H}}F=0\) for all \(F\in\operatorname{Tan}(K,x)\) and \(x\in K\). **Example 2.11**.: We give an example of a compact set \(K\) and a point \(x\in K\) so that \(\dim_{\operatorname{A}}(K,x)=1\) but each \(F\in\operatorname{Tan}(K,x)\) consists of at most finitely many points. Set \(a_{k}=4^{-k^{2}}\) and observe that \(ka_{k+1}/a_{k}\leq 1/k\). For each \(k\in\mathbb{N}\), write \(\ell_{k}=\lfloor 2^{k}/k\rfloor\) and set \[K=\{0\}\cup\bigcup_{k=1}^{\infty}\left\{a_{k}\frac{2^{k}-\ell_{k}}{2^{k}},a_{ k}\frac{2^{k}-\ell_{k}-1}{2^{k}},\ldots,a_{k}\right\}\] and consider the point \(x=0\). First observe that for all \(\epsilon>0\) and all \(k\) sufficiently large depending on \(\epsilon\), \[N_{2^{-k}\cdot a_{k}}\big{(}B(0,a_{k})\cap K\big{)}\geq\frac{\ell_{k}}{2}\geq 2 ^{(1-\epsilon)k}\] which gives that \(\dim_{\mathrm{A}}(K,0)=1\). On the other hand, for \(k\in\mathbb{N}\), \[a_{k}^{-1}K\cap B(0,1)\subset[0,a_{k+1}/a_{k}]\cup[1/k,1].\] Since \(ka_{k+1}/a_{k}\leq 1/k\), it follows that for any \(\lambda\geq 1\), the set \(\lambda K\cap B(0,1)\) is contained in a union of two intervals whose lengths become arbitrarily small as \(\lambda\) diverges to \(\infty\). Thus any tangent \(F\in\mathrm{Tan}(K,0)\) consists of at most 2 points. ### Tangents of dynamically invariant sets We recall from Proposition 2.8 (ii) that the Assouad dimension of \(K\) need not be attained as the Assouad dimension of a point, and even the Assouad dimension at a point need not be attained as the upper box dimension of a tangent at that point. For self-embeddable sets, we can prove directly that the Assouad dimension of \(K\) is attained as the Hausdorff dimension of a tangent. In fact, the tangent can be chosen to have positive \(\mathcal{H}^{\alpha}\)-measure for \(\alpha=\dim_{\mathrm{A}}K\). **Theorem 2.12**.: _Let \(K\subseteq\mathbb{R}^{d}\) be compact and self-embeddable with \(\alpha=\dim_{\mathrm{A}}K\). 
Then there is a dense set of points \(x\in K\) for which there exist \(F\in\mathrm{Tan}(K,x)\) such that \(\mathcal{H}^{\alpha}_{\infty}(F)\gtrsim_{\alpha}1\)._ Proof.: By self-embeddability and since \(\dim_{\mathrm{A}}(K,x)=\dim_{\mathrm{A}}(f(K),f(x))\) for a bi-Lipschitz map \(f\), it suffices to construct a single point \(x\) with this property. First, begin with some arbitrary ball \(B(x_{1},r_{1})\) with \(x_{1}\in K\) and \(0<r_{1}\leq 1\). Since \(K\) is self-embeddable, get a bi-Lipschitz map \(f_{1}\colon K\to K\cap B(x_{1},r_{1})\). Since \(\dim_{\mathrm{A}}f_{1}(K)=\alpha\), by Corollary 2.7 there is a weak tangent \(F_{1}\) of \(f_{1}(K)\) such that \(\mathcal{H}^{\alpha}_{\infty}(F_{1})\geq 1\). Since \(F_{1}\) is a weak tangent of \(f_{1}(K)\), there is a similarity \(T_{1}\) with similarity ratio \(\lambda_{1}\geq 1\) such that \(0\in T_{1}(K)\) and \[d_{\mathcal{H}}\big{(}T_{1}(f_{1}(K))\cap B(0,1),F_{1}\big{)}\leq 1.\] Then choose \(x_{2}\in K\) and \(r_{2}\leq 1/2\) so that \(B(x_{2},r_{2})\subset T_{1}^{-1}B^{\circ}(0,1)\). Repeating the above construction, next with the ball \(B(x_{2},r_{2})\), and iterating, we obtain a sequence of similarity maps \((T_{n})_{n=1}^{\infty}\) each with similarity ratio \(\lambda_{n}\geq n\), bi-Lipschitz maps \(f_{n}\), and compact sets \(F_{n}\) such that 1. \(T_{n+1}^{-1}B(0,1)\subseteq T_{n}^{-1}B(0,1)\), 2. \(d_{\mathcal{H}}\big{(}T_{n}(f_{n}(K))\cap B(0,1),F_{n}\big{)}\leq\frac{1}{n}\), and 3. \(\mathcal{H}^{\alpha}_{\infty}(F_{n})\geq 1\). Let \(x=\lim_{n\to\infty}T_{n}^{-1}(0)\) and note by \(1\) that \(x\in T_{n}^{-1}B(0,1)\) for all \(n\in\mathbb{N}\). Let \(h_{n}\) be a similarity with similarity ratio \(1/2\) such that \[d_{\mathcal{H}}\Big{(}\frac{\lambda_{n}}{2}(f_{n}(K)-x)\cap B(0,1),h_{n}(F_{n })\Big{)}\leq\frac{1}{n}.\] Observe that \(\mathcal{H}^{\alpha}_{\infty}(h_{n}(F_{n}))\geq 2^{-\alpha}\). Thus passing to a subsequence if necessary, since \(f_{n}(K)\subseteq K\), we may set \[F_{0}=\lim_{n\to\infty}\frac{\lambda_{n}}{2}(f_{n}(K)-x)\cap B(0,1)\qquad\text{ and}\qquad F=\lim_{n\to\infty}\frac{\lambda_{n}}{2}(K-x)\cap B(0,1).\] and observe that \(F_{0}\subseteq F\). Again passing to a subsequence if necessary, by compactness of the orthogonal group, 2 and the triangle inequality, there is an isometry \(h\) so that \(\lim_{n\to\infty}h\circ h_{n}(F_{n})=F_{0}\). Thus by upper semicontinuity of Hausdorff content, \[\mathcal{H}^{\alpha}_{\infty}(F)\geq\mathcal{H}^{\alpha}_{\infty}(F_{0})\geq \lim_{n\to\infty}\mathcal{H}^{\alpha}_{\infty}(h\circ h_{n}(F_{n}))=2^{-\alpha}\] as required. We recall from Proposition 2.8 (ii) that, for a general compact set \(K\), the upper box dimension of \(K\) provides a lower bound for the pointwise Assouad dimension at _some_ point. For self-embeddable sets, we observe that the upper box dimension provides a uniform lower bound for the pointwise Assouad dimension at _every_ point. On the other hand, the upper box dimension _does not_ lower bound the maximal dimension of a tangent. For an example of this phenomenon, see Theorem 4.12. **Proposition 2.13**.: _Let \(K\subseteq\mathbb{R}^{d}\) be self-embeddable. Then for any \(x\in K\), we have \(\dim_{\mathrm{A}}(K,x)\geq\overline{\dim}_{\mathrm{B}}\,K\)._ Proof.: Fix \(\alpha<\overline{\dim}_{\mathrm{B}}\,K\) and \(x\in K\). Let \(C>0\) and \(\rho>0\) be arbitrary. Since \(K\) is self-embeddable, there is some bi-Lipschitz map \(f\colon K\to B(x,\rho)\) so that \(f(K)\subseteq K\). 
Since \(\overline{\dim}_{\mathrm{B}}\,f(K)>\alpha\), there is some \(0<r\leq\rho\) so that \[N_{r}(B(x,\rho)\cap K)\geq N_{r}(f(K))\geq C\Big{(}\frac{\rho}{r}\Big{)}^{ \alpha}.\] Since \(C>0\) and \(\rho>0\) were arbitrary, \(\dim_{\mathrm{A}}(K,x)\geq\alpha\), as required. Now assuming uniform self-embeddability, we will see that the set of points with tangents that have positive \(\mathcal{H}^{\alpha}\)-measure has full Hausdorff dimension for \(\alpha=\dim_{\mathrm{A}}K\). Since uniformly self-embeddable sets satisfy the hypotheses of [10, Theorem 4], it always holds that \(\overline{\dim}_{\mathrm{B}}\,K=\dim_{\mathrm{H}}\,K\) (see also [10, Theorem 2.10]). On the other hand, it can happen in this class of sets that \(\overline{\dim}_{\mathrm{B}}\,K<\alpha\): for example, this is the situation for self-similar sets in \(\mathbb{R}\) with \(\overline{\dim}_{\mathrm{B}}\,K<1\) which fail the weak separation condition; see [14, Theorem 1.3]. We provide a subset of full Hausdorff dimension for which each point has a tangent with positive Hausdorff \(\alpha\)-measure. The idea of the proof is essentially as follows. Let \(F\) be a weak tangent for \(K\) with strictly positive Hausdorff \(\alpha\)-content. For each \(s<\overline{\dim}_{\mathrm{B}}\,K\), using the implicit method of [10, Theorem 4], we can construct a well-distributed set of \(N\) balls at resolution \(\delta\), where \(\delta^{-s}\ll N\). Then, inside each ball, using uniform self-embeddability, we can map an image of an approximate tangent \(F\) where \(T_{\delta}\) has similarity ratio \(\lambda\). Choosing \(N\) to be large, the resulting collection of images of the approximate tangent \(F\) is again a family of well-distributed balls at resolution \(\lambda^{-1}\delta\), with \((\lambda^{-1}\delta)^{-s}\approx N\). Repeating this construction along a sequence of tangents converging to \(F\) yields a set \(E\) with \(\dim_{\mathrm{H}}E\geq s\) such that each \(x\in E\) has a tangent which is an image of \(F\) (up to some negligible distortion), which has positive Hausdorff \(\alpha\)-content by upper semicontinuity of content. We fix a compact set \(K\). To simplify notation, we say that a function \(f\colon K\to K\) is in \(\mathcal{G}(z,r,c)\) for \(z\in K\) and \(c,r>0\) if \(f(K)\subset B(z,r)\) and \[cr|x-y|\leq|f(x)-f(y)|\leq c^{-1}r|x-y|\] for all \(x,y\in K\). **Theorem 2.14**.: _Let \(K\subset\mathbb{R}^{d}\) be uniformly self-embeddable and let \(\alpha=\dim_{\mathrm{A}}K\). Then_ \[\dim_{\mathrm{H}}\{x\in K:\exists F\in\mathrm{Tan}(K,x)\text{ with }\mathcal{H}_{\infty}^{\alpha}(F)\gtrsim 1\}= \dim_{\mathrm{H}}K=\overline{\dim}_{\mathrm{B}}K.\] _Proof._ Write \(\alpha=\dim_{\mathrm{A}}K\). If \(\overline{\dim}_{\mathrm{B}}K=0\) we are done; otherwise, let \(0<s<\overline{\dim}_{\mathrm{B}}K\) be arbitrary. Since \(K\) is uniformly self-embeddable, there is a constant \(a\in(0,1)\) so that for each \(z\in K\) and \(0<r\leq\operatorname{diam}K\) there is a map \(f\in\mathcal{G}(z,r,a)\). Next, from Corollary 2.7, there is a compact set \(F\subset B(0,1)\) with \(\mathcal{H}_{\infty}^{\alpha}(F)\geq 1\) and a sequence of similarities \((T_{k})_{k=1}^{\infty}\) with similarity ratios \((\lambda_{k})_{k=1}^{\infty}\) such that \[F=\lim_{k\to\infty}T_{k}(K)\cap B(0,1)\] with respect to the Hausdorff metric. Set \(Q_{k}=T_{k}^{-1}(B(0,1))\cap K\). We will construct a Cantor set \(E\subset K\) of points each of which has pointwise Assouad dimension at least \(\alpha\) and has \(\dim_{\mathrm{H}}E\geq s\). 
We begin with a preliminary construction. First, since \(s<\overline{\dim}_{\mathrm{B}}K\), there is some \(r_{0}>0\) and a collection of points \(\{y_{i}\}_{i=1}^{N_{0}}\subset K\) such that \(|y_{i}-y_{j}|>3r_{0}\) for all \(i\neq j\) and \(N_{0}\geq 2^{s}a^{-s}r_{0}^{-s}\). Now for each \(i\), take a map \(\phi_{i}\in\mathcal{G}(y_{i},r_{0},a)\). Write \(\mathcal{I}=\{1,\ldots,N_{0}\}\), and for \(\mathrm{i}=(i_{1},\ldots,i_{n})\in\mathcal{I}^{n}\) set \[\phi_{\mathrm{i}}=\phi_{i_{1}}\circ\cdots\circ\phi_{i_{n}},\] and, having fixed some \(x_{0}\in K\), write \(x_{1}=\phi_{\mathrm{i}}(x_{0})\in\phi_{\mathrm{i}}(K)\). Observe that if the maximal length of a common prefix of \(\mathrm{i}_{1}\) and \(\mathrm{i}_{2}\) is \(m\), then \[\operatorname{dist}(\phi_{\mathrm{i}}(K),\phi_{\mathrm{j}}(K))\geq r_{0}(ar_{0 })^{m}.\] We now begin our inductive construction. Without loss of generality, we may assume that \(\lambda_{n}\geq 12\) for all \(n\in\mathbb{N}\) and \(r_{0}\leq 1\). First, for each \(n\in\mathbb{N}\), define constants \((m_{n})_{n=1}^{\infty}\subset\{0\}\cup\mathbb{N}\) and \((\rho_{n})_{n=1}^{\infty}\) converging monotonically to zero from above by the rules 1. \(2^{-m_{n}}\leq\dfrac{a^{2}r_{0}\lambda_{n}^{-1}}{3}\), 2. \(\rho_{0}=\operatorname{diam}K\), and 3. \(\rho_{n}\coloneqq\rho_{n-1}\cdot\dfrac{a\lambda_{n}^{-1}\cdot(ar_{0})^{m_{n}} }{3}\). Next, for \(n\in\mathbb{N}\cup\{0\}\) we inductively choose points \(y_{n,\mathtt{i}}\in K\) and maps \(\Psi_{n,\mathtt{i}}\in\mathcal{G}(y_{n,\mathtt{i}},\rho_{n},a)\) for \(\mathtt{i}\in\mathcal{I}^{m_{1}}\times\cdots\times\mathcal{I}^{m_{n}}\). Let \(\varnothing\) denote the empty word and let \(y_{0,\varnothing}\in K\) be arbitrary and let \(\Psi_{0,\varnothing}\) denote the identity map. Then for each \(\mathtt{k}=\mathtt{i}\mathtt{j}\) with \(\mathtt{i}\in\mathcal{I}^{m_{1}}\times\cdots\times\mathcal{I}^{m_{n-1}}\) and \(\mathtt{j}\in\mathcal{I}^{m_{n}}\), sequentially choose: 1. \(\psi_{n,\mathtt{k}}\in\mathcal{G}(\Psi_{n-1,\mathtt{i}}(x_{\mathtt{j}}),\rho_ {n}\lambda_{n}a^{-1},a)\) 2. \(y_{n,\mathtt{k}}=\psi_{n,\mathtt{k}}\circ T_{n}^{-1}(0)\) 3. \(\Psi_{n,\mathtt{k}}\in\mathcal{G}(y_{n,\mathtt{k}},\rho_{n},a)\) Finally, write \(\mathcal{J}_{0}=\{\varnothing\}\), \(\mathcal{J}_{n}=\mathcal{I}^{m_{1}}\times\cdots\times\mathcal{I}^{m_{n}}\) for \(n\in\mathbb{N}\), and let \[E_{n}=\bigcup_{\mathtt{i}\in\mathcal{J}_{n}}B(y_{n,\mathtt{k}},3\rho_{n}) \qquad\text{and}\qquad E=K\cap\bigcap_{n=1}^{\infty}E_{n}.\] Suppose \(\mathtt{i}\in\mathcal{J}_{n-1}\) and \(\mathtt{j}\in\mathcal{I}^{m_{n}}\). Since \(x_{\mathtt{j}}\in K\), \(\Psi_{n-1,\mathtt{i}}(K)\subset B(y_{n-1,\mathtt{i}},\rho_{n-1})\), and \(y_{n,\mathtt{i}\mathtt{j}}\in\psi_{n,\mathtt{i}\mathtt{j}}(K)\subset B(\Psi_ {n-1,\mathtt{i}}(x_{\mathtt{j}}),\rho_{n-1})\), we conclude since \(\rho_{n-1}\geq 3\rho_{n}\) that \[B(y_{n,\mathtt{i}\mathtt{j}},3\rho_{n})\subset B(y_{n-1,\mathtt{i}},3\rho_{n- 1}).\] Moreover, \(y_{n,\mathtt{i}\mathtt{j}}\in K\), so the sets \(E_{n}\) are non-empty nested compact sets and therefore \(E\) is non-empty. We next observe the following fundamental separation properties of the balls in the construction of the sets \(E_{n}\). Let \(n\in\mathbb{N}\) and suppose \(\mathtt{j}_{1}\neq\mathtt{j}_{2}\) in \(\mathcal{I}^{m_{n}}\) and \(\mathtt{i}\in\mathcal{J}_{n-1}\) (writing \(\mathcal{J}_{0}=\{\varnothing\}\)). Suppose \(\mathtt{j}_{1}\) and \(\mathtt{j}_{2}\) have a common prefix of maximal length \(m\). 
First recall that \(|x_{\mathtt{j}_{1}}-x_{\mathtt{j}_{2}}|\geq r_{0}(ar_{0})^{m}\), so that \[|\Psi_{n-1,\mathtt{i}}(x_{\mathtt{j}_{1}})-\Psi_{n-1,\mathtt{i}}(x_{\mathtt{ j}_{2}})|\geq\rho_{n-1}(ar_{0})^{m+1}.\] Then, since for \(j=1,2\) \[y_{n,\mathtt{i}\mathtt{j}_{j}}\in\psi_{n,\mathtt{i}\mathtt{j}_{j}}(K)\subset B \Big{(}\Psi_{n-1,\mathtt{i}}(x_{\mathtt{j}_{j}}),\frac{\rho_{n-1}(ar_{0})^{m_{ n}}}{3}\Big{)}\] we observe that \[|y_{n,\mathtt{i}\mathtt{j}_{1}}-y_{n,\mathtt{i}\mathtt{j}_{2}}|\geq\rho_{n-1} (ar_{0})^{m+1}-2\frac{\rho_{n-1}(ar_{0})^{m_{n}}}{3}\geq\frac{\rho_{n-1}(ar_{0 })^{m+1}}{3}.\] Since we assumed that \(\lambda_{n}\geq 12\), by the triangle inequality \[\operatorname{dist}\bigl{(}B(y_{n,\mathtt{i}\mathtt{j}_{1}},3\rho_{n}),B(y_{n, \mathtt{i}\mathtt{j}_{2}},3\rho_{n})\bigr{)}\geq\frac{\rho_{n-1}(ar_{0})^{m+1} }{3}-6\rho_{n}\geq\frac{\rho_{n-1}(ar_{0})^{m+1}}{6}. \tag{2.1}\] We first show that \(\dim_{\mathrm{H}}E\geq s\). By the method of repeated subdivision, define a Borel probability measure \(\mu\) with \(\operatorname{supp}\mu=E\) and for \(\mathtt{i}\in\mathcal{J}_{n}\), \[\mu(B(y_{n,\mathtt{i}},3\rho_{n})\cap K)=\frac{1}{\#\mathcal{J}_{n}}.\] Now suppose \(U\) is an arbitrary open set with \(U\cap E\neq\emptyset\). Intending to use the mass distribution principle, we estimate \(\mu(U)\). Assuming that \(U\) has sufficiently small diameter, let \(n\in\mathbb{N}\) be maximal so that \[\operatorname{diam}U\leq\frac{a^{-1}\lambda_{n}}{2}\rho_{n}=\frac{\rho_{n-1}( ar_{0})^{m_{n}}}{6}.\] By (2.1), there is a unique \(\mathtt{i}\in\mathcal{J}_{n}\) such that \(U\cap B(y_{n,\mathtt{i}},3\rho_{n})\neq\varnothing\). We first recall by choice of the constants \(m_{n}\) that \[\rho_{n} =(\operatorname{diam}K)\cdot\Big{(}\frac{a^{2}r_{0}\lambda_{n}^{-1 }}{3}\Big{)}^{n}(ar_{0})^{m_{1}+\cdots+m_{n}}\] \[\geq(\operatorname{diam}K)2^{-(m_{1}+\cdots+m_{n})}(ar_{0})^{m_{1 }+\cdots+m_{n}}.\] There are two cases. First assume \(\rho_{n}/6<\operatorname{diam}U\). Thus \[\mu(U) \leq\frac{1}{\#\mathcal{J}_{n}}\leq\Big{(}\frac{1}{2}ar_{0}\Big{)} ^{s(m_{1}+\cdots+m_{n})}\leq(\operatorname{diam}K)^{-s}\rho_{n}^{s}\] \[\leq\Big{(}\frac{6}{\operatorname{diam}K}\Big{)}^{s}\cdot( \operatorname{diam}U)^{s}.\] Otherwise, let \(k\in\{0,\ldots,m_{n+1}-1\}\) be so that \[\frac{\rho_{n}(ar_{0})^{k+1}}{6}<\operatorname{diam}U\leq\frac{\rho_{n}(ar_{0 })^{k}}{6}.\] By (2.1), \(U\) intersects at most \(N_{0}^{m_{n}-k}\) balls \(B(y_{n+1,\omega},3\rho_{n+1})\) for \(\omega\in\mathcal{J}_{n+1}\), so since \(2^{-sk}\leq 1\), \[\mu(U) \leq\frac{1}{\#\mathcal{J}_{n}\cdot N_{0}^{k}}\leq(\operatorname{ diam}K)^{-s}\rho_{n}^{s}\cdot(2^{-s}(ar_{0})^{s})^{k}\] \[\leq\Big{(}\frac{6}{ar_{0}\operatorname{diam}K}\Big{)}^{s}\cdot \Big{(}\frac{\rho_{n}(ar_{0})^{k+1}}{6}\Big{)}^{s}\] \[\leq\Big{(}\frac{6}{ar_{0}\operatorname{diam}K}\Big{)}^{s}\cdot \big{(}\operatorname{diam}U\big{)}^{s}.\] This treats all possible small values of \(\operatorname{diam}U\), so there is a constant \(M>0\) such that \(\mu(U)\leq M(\operatorname{diam}U)^{s}\). Thus \(\dim_{\operatorname{H}}E\geq s\) by the mass distribution principle. Now fix \[C=(3+a^{-2})^{-\alpha}.\] We will show that each \(z\in E\) has a tangent with Hausdorff \(\alpha\)-content at least \(C\). Let \(z\in E\) and define \[S_{n}(x)=\frac{x-z}{\rho_{n}(3+a^{-2})}.\] Our tangent will be an accumulation point of the sequence \((S_{n}(K)\cap B(0,1))_{n=1}^{\infty}\). Now fix \(n\in\mathbb{N}\). 
Since \(z\in E\), there is some \(\omega\in\mathcal{J}_{n}\) so that \(z\in B(y_{n,\omega},3\rho_{n})\). By choice of \(y_{n,\omega}\), \(Q_{n}=B\big{(}\psi_{n,\omega}^{-1}(y_{n,\omega}),\lambda_{n}^{-1}\big{)}\cap K\) so that \[\psi_{n,\omega}(Q_{n})\subseteq B\big{(}y_{n,\omega},\rho_{n}a^{-2}\big{)} \cap K\subseteq B\big{(}z,\rho_{n}(3+a^{-2})\big{)}\cap K\] and therefore, writing \(\Phi_{n}=S_{n}\circ\psi_{n,\omega}\circ T_{n}^{-1}\), \[\Phi_{n}(T_{n}(K)\cap B(0,1))\subset S_{n}(K)\cap B(0,1).\] Then for \(x,y\in T_{n}(K)\cap B(0,1)\), by the choice of \(\psi\) in (4), \[\frac{|x-y|}{3+a^{-2}}\leq|\Phi_{n}(x)-\Phi_{n}(y)|\leq\frac{|x-y|}{a^{2}(3+a^{-2 })}. \tag{2.2}\] Now, passing to a subsequence \((n_{k})_{k=1}^{\infty}\), we can ensure that \[\lim_{k\to\infty}\Phi_{n_{k}}(F)=Z_{0}\qquad\text{and}\qquad\lim_{k\to\infty}S_ {n_{k}}(K)\cap B(0,1)=Z.\] Moreover, recall that \(\lim_{k\to\infty}T_{n_{k}}(K)\cap B(0,1)=F\) and \(\mathcal{H}_{\infty}^{\alpha}(F)\geq 1\). Observe by (2.2) that \(\mathcal{H}_{\infty}^{\alpha}(\Phi_{n_{k}}(F))\geq C\) for each \(k\), so by upper semicontinuity of Hausdorff content, \(\mathcal{H}_{\infty}^{\alpha}(Z_{0})\geq C\). But again by (2.2), \[d_{\mathcal{H}}\big{(}Z_{0},\Phi_{n_{k}}(T_{n_{k}}(K)\cap B(0,1))\big{)}\leq d _{\mathcal{H}}(Z_{0},\Phi_{n_{k}}(F))+\frac{d_{\mathcal{H}}\big{(}F,T_{n_{k}} (K)\cap B(0,1)\big{)}}{a^{2}(3+a^{-2})}\] so in fact \(Z_{0}\subset Z\) and \(\mathcal{H}_{\infty}^{\alpha}(Z)\geq C\), as claimed. **Remark 2.15**.: We note that the upper distortion bound in the definition of uniform self-embeddability is used only at the very last step to guarantee that the images \(\Phi_{n_{k}}(T_{n_{k}}(K)\cap B(0,1))\) converge to a large set whenever the \(T_{n_{k}}(K)\cap B(0,1)\) converge to a large set. ## 3. Assouad dimension of non-autonomous self-similar sets ### Non-autonomous self-similar sets The notion of a non-autonomous self-conformal set was introduced and studied in [16], where under certain regularity assumptions the authors prove that the Hausdorff and box dimensions are equal and given by the zero of a certain pressure function. In this section, we consider a special case of their construction. For each \(n\in\mathbb{N}\), let \(\Phi_{n}=\{S_{n,j}\}_{j\in\mathcal{J}_{n}}\) be a finite family of similarity maps \(S_{n,j}\colon\,\mathbb{R}^{d}\to\mathbb{R}^{d}\) of the form \[S_{n,j}(\boldsymbol{x})=r_{n,j}O_{n,j}\boldsymbol{x}+\boldsymbol{d}_{n,j}\] where \(r_{n,j}\in(0,1)\) and \(O_{n,j}\) is an orthogonal matrix. To avoid degenerate situations, we assume that associated with the sequence \((\Phi_{n})_{n=1}^{\infty}\) there is an invariant compact set \(X\subset\mathbb{R}^{d}\) (that is \(S_{n,j}(X)\subset X\) for all \(n\in\mathbb{N}\) and \(j\in\mathcal{J}_{n}\)) and moreover that \[\lim_{n\to\infty}\sup\{r_{1,j_{1}}\cdots r_{n,j_{n}}:j_{i}\in\mathcal{J}_{i} \text{ for each }i=1,\ldots,n\}=0. \tag{3.1}\] Under these assumptions, associated with the sequence \((\Phi_{n})_{n=1}^{\infty}\) is an _attractor_ \[K=\bigcap_{n=1}^{\infty}\bigcup_{(j_{1},\ldots,j_{n})\in\mathcal{J}_{1}\times \cdots\times\mathcal{J}_{n}}S_{1,j_{1}}\circ\cdots\circ S_{n,j_{n}}(X).\] Since \(X\) is compact and invariant under any map \(S_{n,j}\) with \(j\in\mathcal{J}_{n}\), finiteness of each \(\mathcal{J}_{n}\) implies that \(K\) is the intersection of a nested sequence of compact sets and therefore non-empty and compact. 
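For a concrete feeling for this construction, the following minimal numerical sketch (illustration only, and not part of the formal development) lists the level-\(n\) cylinder intervals \(S_{1,j_{1}}\circ\cdots\circ S_{n,j_{n}}(X)\) whose intersection over \(n\) is the attractor \(K\). The two alternating families of similarities below, with invariant set \(X=[0,1]\), are an assumption chosen purely for this example.

```python
# Minimal numerical sketch (illustration only): level-n cylinder intervals of a
# toy non-autonomous IFS on X = [0, 1].  The two alternating families below are
# an assumption made purely for illustration; each maps X into X and the
# non-degeneracy condition (3.1) holds.
from itertools import product

PHI_ODD = [(1/4, 0.0), (1/4, 3/8), (1/4, 3/4)]   # three maps x -> x/4 + d
PHI_EVEN = [(1/3, 0.0), (1/3, 2/3)]              # two maps x -> x/3 + d

def families(n):
    """The (ratio, translation) pairs defining Phi_1, ..., Phi_n (alternating)."""
    return [PHI_ODD if k % 2 else PHI_EVEN for k in range(1, n + 1)]

def cylinders(n):
    """All level-n cylinder intervals S_{1,j_1} o ... o S_{n,j_n}([0, 1])."""
    intervals = []
    for word in product(*families(n)):
        r, d = 1.0, 0.0                 # the composed map x -> r*x + d
        for rk, dk in word:             # compose with the next map on the right
            d = r * dk + d
            r = r * rk
        intervals.append((d, d + r))
    return intervals

if __name__ == "__main__":
    level4 = sorted(cylinders(4))
    print("number of level-4 cylinders:", len(level4))      # 3 * 2 * 3 * 2 = 36
    print("first few:", [(round(a, 5), round(b, 5)) for a, b in level4[:3]])
```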
The sequence \((\Phi_{n})_{n=1}^{\infty}\) is called a _non-autonomous iterated function system (IFS)_ and the attractor \(K\) is called the _non-autonomous self-similar set_. We refer the reader to [16, SS2] for more detail on this construction in a general setting. **Definition 3.1**.: We say that the non-autonomous IFS \((\Phi_{n})_{n=1}^{\infty}\) 1. satisfies the _open set condition_ if the invariant compact set \(X\) can be chosen to have non-empty interior \(U=X^{\circ}\) so that for each \(n\in\mathbb{N}\) and \(j,j^{\prime}\in\mathcal{J}_{n}\), \(S_{n,j}(U)\subset U\) and \(S_{n,j}(U)\cap S_{n,j^{\prime}}(U)=\emptyset\) for \(j\neq j^{\prime}\in\mathcal{J}_{n}\); and 2. has _uniformly bounded contractions_ if there is an \(r_{\min}>0\) so that \(r_{\min}\leq r_{n,j}\) for all \(n\in\mathbb{N}\) and \(j\in\mathcal{J}_{n}\). Since \(\operatorname{Leb}\bigl{(}\sum_{j\in\mathcal{J}_{n}}S_{n,j}(U)\bigr{)}\leq \operatorname{Leb}(U)\) and \(\operatorname{Leb}(S_{n,j}(U))\geq r_{\min}^{d}>0\), the above two conditions combine to give the following additional condition: 1. There is an \(M\in\mathbb{N}\) so that \(\#\mathcal{J}_{n}\leq M\) for all \(n\in\mathbb{N}\). Our main goal in this section is, assuming the open set condition and uniformly bounded contractions, to establish an explicit formula for \(\dim_{\mathrm{A}}K\), depending only on the \(r_{n,j}\). This will be done in Theorem 3.7. In order to obtain this result, we first make a reduction to a symbolic representation of the attractor \(K\), which we will denote by \(\Delta\). Since this symbolic construction will later be required in SS4, we establish this concept in a somewhat more general context. ### Metric trees First, fix a reference set \(\Omega\) and write \(\mathcal{T}_{0}=\{\Omega\}\). Let \(\{\mathcal{T}_{k}\}_{k=1}^{\infty}\) be a sequence of partitions of \(\Omega\) so that \(\mathcal{T}_{k+1}\) is a refinement of the partition \(\mathcal{T}_{k}\). For each \(Q\in\mathcal{T}_{k}\) with \(k\in\mathbb{N}\), there is a unique _parent_\(\widehat{Q}\in\mathcal{T}_{k-1}\) with \(Q\subset\widehat{Q}\). Suppose that for any \(\gamma_{1}\neq\gamma_{2}\in\Omega\) there is a \(k\in\mathbb{N}\) such that there are \(Q_{1}\neq Q_{2}\in\mathcal{T}_{k}\) so that \(\gamma_{1}\in Q_{1}\) and \(\gamma_{2}\in Q_{2}\). We call such a family \(\{\mathcal{T}_{k}\}_{k=0}^{\infty}\) a _tree_, and write \(\mathcal{T}=\bigcup_{k=0}^{\infty}\mathcal{T}_{k}\). Now, suppose that there is a function \(\rho\colon\mathcal{T}\to(0,\infty)\) which satisfies 1. \(0<\rho(Q)<\rho(\widehat{Q})\), and 2. there is a sequence \((r_{k})_{k=1}^{\infty}\) converging to zero from above such that \(\rho(Q)\leq r_{k}\) for all \(Q\in\mathcal{T}_{k}\). The function \(\rho\) induces a metric \(d\) on the space \(\Omega\) by the rule \[d(\gamma_{1},\gamma_{2})=\inf\{\rho(Q):Q\in\mathcal{T}\text{ and }\{\gamma_{1}, \gamma_{2}\}\subset Q\}.\] In particular, \(\operatorname{diam}(Q)=\rho(Q)\) with respect to the metric \(d\). We then refer to the data \((\Omega,\{\mathcal{T}_{k}\}_{k=0}^{\infty},\rho)\) as a _metric tree_. We say that a subset \(\mathcal{A}\subset\mathcal{T}\) is a _section_ if \(Q_{1}\cap Q_{2}=\emptyset\) whenever \(Q_{1},Q_{2}\in\mathcal{A}\) with \(Q_{1}\neq Q_{2}\). If \(\bigcup_{Q\in\mathcal{A}}Q=Q_{0}\), we say that \(\mathcal{A}\) is a _section relative to \(Q_{0}\)_. Note that sections are necessarily countable and, for example, each \(\mathcal{T}_{k}\) for \(k\in\mathbb{N}\cup\{0\}\) is a section relative to \(\Omega\). 
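To illustrate the induced metric, consider the symbolic tree attached to the toy alternating system sketched above, with valuation \(\rho([j_{1},\ldots,j_{n}])=r_{1,j_{1}}\cdots r_{n,j_{n}}\) (this symbolic representation is formalized in the next subsection). The following sketch (illustration only; the ratio lists are the same assumed example) computes \(d(\gamma_{1},\gamma_{2})\) as the diameter of the smallest cylinder containing both words, namely the product of the ratios along their longest common prefix.

```python
# Minimal sketch (illustration only): the metric induced on a symbolic tree by
# the valuation rho([j_1, ..., j_n]) = r_{1,j_1} * ... * r_{n,j_n}.  The ratio
# lists are the same toy assumption as above: three maps of ratio 1/4 at odd
# levels and two maps of ratio 1/3 at even levels.
RATIO_LISTS = [[1/4, 1/4, 1/4] if n % 2 else [1/3, 1/3] for n in range(1, 41)]
# RATIO_LISTS[n - 1][j] plays the role of r_{n,j}.

def d(gamma1, gamma2):
    """Distance between two words: rho of the smallest common cylinder."""
    diam = 1.0                          # rho of the root cylinder (all of Omega)
    for n, (a, b) in enumerate(zip(gamma1, gamma2)):
        if a != b:
            return diam                 # the words separate at level n + 1
        diam *= RATIO_LISTS[n][a]       # descend into the common cylinder
    return diam                         # agreement along the whole given prefix

if __name__ == "__main__":
    print(d((0, 1, 2, 0), (0, 1, 1, 0)))   # common prefix (0, 1): 1/4 * 1/3
    print(d((0, 0, 0, 0), (1, 0, 0, 0)))   # no common prefix: distance 1
```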
The set of sections is equipped with a partial order \(\mathcal{A}_{1}\preccurlyeq\mathcal{A}_{2}\) if for each \(Q_{1}\in\mathcal{A}_{1}\) there is a \(Q_{2}\in\mathcal{A}_{2}\) such that \(Q_{1}\subset Q_{2}\). This partial order is equipped with a _join_, that is, for finite family of sections \(\mathcal{A}_{1},\ldots,\mathcal{A}_{m}\), there is a unique section \(\mathcal{A}_{1}\vee\cdots\vee\mathcal{A}_{m}\) such that \[\mathcal{A}_{i}\preccurlyeq\mathcal{A}_{1}\vee\cdots\vee\mathcal{A}_{m}\qquad \text{for all}\qquad i=1,\ldots,m\] and moreover \(\mathcal{A}_{1}\vee\cdots\vee\mathcal{A}_{m}\) is minimal with respect to \(\preccurlyeq\) with this property. A metric tree is equipped with a natural family of sections relative to \(\Omega\) which respect the geometry of the metric \(d\). We define \[\mathcal{T}(r)=\{Q\in\mathcal{T}:\rho(Q)\leq r<\rho(\widehat{Q})\}\] where, abusing notation, we write \(\rho(\widehat{\Omega})=\infty\). Property \(1\) above ensures that this is indeed a section and property 2 ensures that \(\mathcal{T}_{k}\preccurlyeq\mathcal{T}(r)\) for all \(k\) sufficiently large. ### Reduction to symbolic representation Now that we have defined the metric tree, we introduce a symbolic representation of the set \(K\). Let \(\Delta=\prod_{n=1}^{\infty}\mathcal{J}_{n}\). For \((j_{1},\ldots,j_{n})\in\mathcal{J}_{1}\times\cdots\times\mathcal{J}_{n}\), we denote the _cylinder_ \[[j_{1},\ldots,j_{n}]=\{j_{1}\}\times\cdots\times\{j_{n}\}\times\prod_{k=n+1}^{ \infty}\mathcal{J}_{k}.\] We associate with this cylinder the valuation \(\rho([j_{1},\ldots,j_{n}])=r_{1,j_{1}}\cdots r_{n,j_{n}}\). Let \(\mathcal{T}_{n}\) denote the set of all cylinders corresponding to finite sequences in \(\mathcal{J}_{1}\times\cdots\times\mathcal{J}_{n}\). It is clear that this sequence of partitions, equipped with the valuation \(\rho\) (recalling the non-degeneracy assumption (3.1)), induces the structure of a metric tree on \(\Delta\). We also define a natural projection \(\pi\colon\Delta\to K\) by \[\{\pi((j_{n})_{n=1}^{\infty})\}=\bigcap_{n=1}^{\infty}S_{1,j_{1}}\circ\cdots \circ S_{n,j_{n}}(X).\] Again, this map is well-defined by (3.1). A direct argument shows that \(\pi\) is Lipschitz. We now prove that \(\dim_{\rm A}K=\dim_{\rm A}\Delta\). The open set condition ensures that the only work in this result is to handle the mild overlaps which occur from adjacent rectangles. In fact, our result will follow from the following standard elementary lemma for metric spaces which are "almost bi-Lipschitz equivalent". **Lemma 3.2**.: _Let \((X,d_{1})\) and \((Y,d_{2})\) be non-empty bounded metric spaces and suppose there is a function \(f\colon X\to Y\) and constants \(M\in\mathbb{N}\) and \(c>0\) so that for any \(0<r<1\),_ 1. \(\operatorname{diam}(f(B(x,r)))\leq cr\) _for all_ \(x\in X\)_; and_ 2. _for every_ \(y\in Y\) _there are_ \(x_{1},\ldots,x_{M}\in X\) _such that_ \(B(y,r)\subset\bigcup_{i=1}^{M}f(B(x_{i},r))\)_._ _Then \(\dim_{\rm A}X=\dim_{\rm A}Y\)._ _Proof._ Without loss of generality, we may assume that \(c\geq 1\). Throughout the proof, let \(\epsilon>0\) and \(0<r\leq R<1\) be arbitrary. First, let \(x\in X\) be arbitrary and, writing, \(N=N_{r}(f(B(x,R)))\), get \(y_{1},\ldots,y_{N}\in Y\) so that \(f(B(x,R))\subset\bigcup_{i=1}^{N}B(y_{i},r)\). 
Since \(\operatorname{diam}f(B(x,R))\leq cR\), \[N\lesssim_{\epsilon}\left(\frac{cR}{r}\right)^{\dim_{\rm A}Y+\epsilon}\lesssim \left(\frac{R}{r}\right)^{\dim_{\rm A}Y+\epsilon}.\] Moreover, for each \(i=1,\ldots,N\), there are \(x_{i,1},\ldots,x_{i,M}\in X\) such that \(B(y_{i},r)\subset\bigcup_{j=1}^{M}f(B(x_{i,j},r))\). Thus since \(\{B(x_{i,j},r):i=1,\ldots,N\) and \(j=1,\ldots,M\}\) is a cover for \(B(x,R)\), it follows that \[N_{r}(B(x,R))\leq NM\lesssim_{\epsilon}\left(\frac{R}{r}\right)^{\dim_{\mathrm{A} }Y+\epsilon}.\] Since \(\epsilon>0\) and \(0<r\leq R<1\) are arbitrary, we see that \(\dim_{\mathrm{A}}X\leq\dim_{\mathrm{A}}Y\). Conversely, let \(y\in Y\) be arbitrary and get \(x_{1},\ldots,x_{M}\in X\) such that \(B(y,R)\subset\bigcup_{i=1}^{M}f(B(x_{i},R))\). Moreover, for each \(i=1,\ldots,M\), writing \(N_{i}=N_{c^{-1}r}(B(x_{i},R))\), there are \(x_{i,1},\ldots,x_{i,N_{i}}\) where \(B(x_{i},R)\subset\bigcup_{j=1}^{N_{i}}B(x_{i,j},c^{-1}r)\) and \[N_{i}\lesssim_{\epsilon}\left(\frac{CR}{r}\right)^{\dim_{\mathrm{A}}X+ \epsilon}\lesssim\left(\frac{R}{r}\right)^{\dim_{\mathrm{A}}X+\epsilon}.\] Thus since \(\{f(B(x_{i,j},c^{-1}r)):i=1,\ldots,M\) and \(j=1,\ldots,N_{i}\}\) is a cover for \(B(y,R)\) with \(\operatorname{diam}f(B(x_{i,j},c^{-1}r))\leq r\), it follows that \[N_{r}(B(y,R))\lesssim_{\epsilon}\sum_{i=1}^{M}\sum_{j=1}^{N_{i}}\lesssim_{ \epsilon}\left(\frac{R}{r}\right)^{\dim_{\mathrm{A}}X+\epsilon}.\] Again since \(\epsilon>0\) and \(0<r\leq R<1\) are arbitrary, we get \(\dim_{\mathrm{A}}Y\leq\dim_{\mathrm{A}}X\), completing the proof. We now obtain our result on the Assouad dimension as a direct corollary. **Corollary 3.3**.: _Let \(\{\Phi_{n}\}_{n=1}^{\infty}\) be a sequence of self-similar IFSs with associated non-autonomous self-similar set \(K\) and metric tree \(\Delta\). Suppose the IFS also satisfies the open set condition and has uniformly bounded contractions. Then \(\dim_{\mathrm{A}}K=\dim_{\mathrm{A}}\Delta\)._ _Proof._ Let \(0<r<1\). First, recall that the map \(\pi\colon\Delta\to K\) is Lipschitz. Moreover, if \([i_{1},\ldots,i_{m}],[j_{1},\ldots,j_{\ell}]\in\Delta(r)\), then \[S_{1,i_{1}}\circ\cdots\circ S_{m,i_{m}}(U)\cap S_{1,j_{1}}\circ\cdots\circ S_ {m,j_{\ell}}(U)=\emptyset\] and by the uniformly bounded contraction assumption, \[\operatorname{Leb}\left(S_{1,i_{1}}\circ\cdots\circ S_{m,i_{m}}(U)\right) \approx\operatorname{Leb}\left(S_{1,j_{1}}\circ\cdots\circ S_{\ell,j_{\ell}}( U)\right)\approx r^{d}.\] But for \(x\in K\), \(\operatorname{Leb}(B(x,r))\approx r^{d}\). Thus there is a constant \(M\in\mathbb{N}\) not depending on \(r\) so that if \(x\in K\) is arbitrary, there are cylinders \(I_{1},\ldots,I_{M}\in\Delta(r)\) so that \(B(x,r)\subset\pi(I_{1})\cup\cdots\cup\pi(I_{M})\) so that each \(I_{j}\in\Delta(r)\) and therefore \(\operatorname{diam}I_{j}\leq r\). Thus the conditions for Lemma 3.2 are satisfied and \(\dim_{\mathrm{A}}K=\dim_{\mathrm{A}}\Delta\). ### Regularity properties of Assouad dimension In this section, we establish two regularity properties related to the Assouad dimension. **Lemma 3.4**.: _Let \(A=\mathbb{R}^{+}\) or \(A=\{\kappa_{0}n:n\in\mathbb{N}\}\) for some \(\kappa_{0}>0\). Suppose \(f\colon A\times A\to\{-\infty\}\cup\mathbb{R}\) is any function satisfying the following two assumptions:_ 1. \(f\) _is bounded from above._ _._ 2. 
_For all_ \(x,y,z\in A\)_,_ \[f(x,y+z)\leq\frac{y\cdot f(x,y)+z\cdot f(x+y,z)}{y+z}.\] _Then_ \[\beta \coloneqq\limsup_{y\to\infty}\limsup_{x\to\infty}f(x,y)\] \[=\ \limsup_{y\to\infty}\limsup_{x\to\infty}f(x,y)\] \[=\ \limsup_{y\to\infty}\sup_{x\in A}f(x,y)\] \[=\ \inf_{y\in A}\limsup_{x\to\infty}f(x,y).\] _Moreover, if \(B\subset A\) is of the form \(B=\{\kappa n:n\in\mathbb{N}\}\) for some \(\kappa>0\), then_ \[\beta=\lim_{\begin{subarray}{c}y\to\infty\\ y\in B\end{subarray}}\sup_{x\in B}f(x,y).\] _Proof._ We assume that \(\beta>-\infty\): the proof for \(\beta=-\infty\) is similar (and substantially easier). Let \(C\in\mathbb{R}\) be such that \(f(x,y)\leq C\) for all \((x,y)\in A\times A\). Note that applying (ii) inductively, we obtain for any \(\{y_{i}\}_{i=1}^{\ell}\subset A\) and \(y\in A\) \[f\big{(}y,\sum_{i=1}^{\ell}y_{i}\big{)}\leq\frac{\sum_{i=1}^{\ell}y_{i}f\big{(} y+\sum_{j=1}^{i-1}y_{j},y_{i}\big{)}}{\sum_{i=1}^{\ell}y_{i}}. \tag{3.2}\] We take the empty sum to be \(0\). We first show that the limit defining \(\beta\) exists. Write \(h(y)=y\cdot\limsup_{x\to\infty}f(x,y)\). Applying (3.2), \[h(y_{1}+y_{2}) =(y_{1}+y_{2})\limsup_{x\to\infty}f(x,y_{1}+y_{2})\] \[\leq(y_{1}+y_{2})\limsup_{x\to\infty}\frac{y_{1}f(x,y_{1})+y_{2}f (x+y_{1},y_{2})}{y_{1}+y_{2}}\] \[\leq h(y_{1})+h(y_{2}).\] Therefore the function \(h\colon A\to\mathbb{R}\) is subadditive, so the limit \(\lim_{y\to\infty}h(y)/y\) exists and is equal to \(\inf_{y\in A}h(y)/y\). Note that the same argument applies with a supremum in place of the limit supremum. We next show for each \(\epsilon>0\) and all \(y\) sufficiently large depending on \(\epsilon\) and all \(x\in A\), \[f(x,y)\leq\beta+3\epsilon. \tag{3.3}\] By the definition of \(\beta\), there are \(y_{0}\) and \(K\) so that for all \(x\geq K\), \(f(x,y_{0})\leq\beta+\epsilon\). Now let \(y\in A\) be arbitrary and write \(y=\ell y_{0}+t\) for some \(\ell\in\mathbb{N}\cup\{0\}\) and \(0<t\leq y_{0}\). Applying (3.2), there is some \(M\in A\) (depending only on \(y_{0}\) and \(C\) so that for all \(y\geq M\), \[f(x,y)\leq\frac{y_{0}\sum_{i=0}^{\ell-1}f(x+iy_{0},y_{0})+tf(x+\ell y_{0},t)}{ y}\] \[\leq\frac{\ell y_{0}}{y}(\beta+\epsilon)+\frac{Ct}{y}\leq\beta+2\epsilon.\] Now let \(x\in(0,K)\) and \(y\geq M+K\), and set \(t=K-x\). Again applying (3.2), \[f(x,y) \leq\frac{tf(x,t)+(y-t)f(K,y-t)}{y}\] \[\leq\frac{t}{y}C+\frac{y-t}{y}(\beta+2\epsilon)\leq\beta+3\epsilon\] for all \(y\) sufficiently large since \(t\leq K\). This proves (3.3). Finally, suppose \(B\subset A\) is of the form \(B=\{\kappa n:n\in\mathbb{N}\}\) for some \(\kappa>0\). First, note that since \(B\subset A\), \[\beta\geq\lim_{\begin{subarray}{c}y\to\infty\\ y\in B\end{subarray}}\sup_{x\in B}f(x,y)\] and moreover the limit exists as proven above. Conversely, let \((x,y)\in A\times A\) be arbitrary with \(y\geq 2\kappa\) and get \((x_{0},y_{0})\in B\times B\) and \((t_{x},t_{y})\in A\times A\) with \(t_{x},t_{y}\leq\kappa\) such that \(x=x_{0}+t_{x}\) and \(y-t_{x}=y_{0}+t_{y}\). 
Then applying (ii) twice, \[f(x_{0},y_{0}) \geq\frac{y_{0}+t}{y_{0}}\cdot f(x,y-t_{x})-\frac{t}{y_{0}}f(x+y_ {0},t)\] \[\geq f(x+t_{x},y-t_{x})-\frac{C\kappa}{y-2\kappa}\] \[\geq\frac{y}{y-t_{x}}\cdot f(x,y)-\frac{t_{x}}{y-t_{x}}f(x,y-t_{x })-\frac{C\kappa}{y-2\kappa}\] \[\geq f(x,y)-2\frac{C\kappa}{y-2\kappa}.\] Therefore since \(C\) and \(\kappa\) are fixed and \(y_{0}\geq y-2\kappa\), \[\limsup_{\begin{subarray}{c}y_{0}\to\infty\\ y_{0}\in B\end{subarray}}\sup_{x\in B}f(x_{0},y_{0})\geq\limsup_{y\to\infty} \sup_{x\in A}f(x,y)=\beta\] as required. As an application, we can use this subadditivity result to obtain a nice reformulation of the Assouad dimension of an arbitrary set. Let \(X\) be a compact doubling metric space and for \(\delta\in(0,1)\) and \(r\in(0,1)\), write \[\psi(r,\delta)=\sup_{x\in X}N_{r\delta}\big{(}B(x,r)\cap K\big{)}\] and then set \[\Psi(r,\delta)=\frac{\log\psi(r,\delta)}{\log(1/\delta)}.\] One can think of \(\Psi(r,\delta)\) is the best guess for the Assouad dimension of \(X\) at scales \(0<r\delta<\delta<1\). This heuristic is made precise in the following result. **Corollary 3.5**.: _Let \(X\) be a compact doubling metric space. Then_ \[\dim_{\mathrm{A}}X=\limsup_{\delta\to 0}\limsup_{r\to 0}\Psi(r,\delta)=\lim_{\delta\to 0} \sup_{r\in(0,1)}\Psi(r,\delta). \tag{3.4}\] _Proof._ Since \(X\) is doubling, there is an \(M\geq 0\) so that \(\Psi(r,\delta)\in[0,M]\). Moreover, given \(r,\delta_{1},\delta_{2}\in(0,1)\), by covering balls \(B(x,r\delta_{1})\) by balls of radius \(r\delta_{1}\delta_{2}\), \[\psi(r,\delta_{1}\delta_{2})\leq\psi(r,\delta_{1})\psi(r\delta_{1},\delta_{2})\] and therefore \[\Psi(r,\delta_{1}\delta_{2}) =\frac{\log\psi(r,\delta_{1}\delta_{2})}{\log(1/\delta_{1}\delta _{2})}\] \[\leq\frac{\log\psi(r,\delta_{1})+\log\psi(r\delta_{1},\delta_{2})} {\log(1/\delta_{1}\delta_{2})}\] \[=\frac{\log(1/\delta_{1})\Psi(r,\delta_{1})+\log(1/\delta_{2}) \Psi(r\delta_{1},\delta_{2})}{\log(1/\delta_{1})+\log(1/\delta_{2})}.\] Thus with the change of coordinate \(g(x,y)=(e^{-x},e^{-y})\), the second equality in (3.4) follows by applying Lemma 3.4 to the function \(\Psi\circ g\). To see the first equality in (3.4), it is a direct consequence of the definition of the Assouad dimension that \[\limsup_{\delta\to 0}\limsup_{r\to 0}\Psi(r,\delta)\leq\dim_{\mathrm{A}}K\] and that there are sequences \((\delta_{n})_{n=1}^{\infty}\) and \((r_{n})_{n=1}^{\infty}\) with \(\lim_{n\to\infty}\delta_{n}=0\) such that \[\lim_{\delta\to 0}\sup_{r\in(0,1)}\Psi(r,\delta)\geq\limsup_{n\to\infty}\Psi(r_{n},\delta_{n})\geq\dim_{\mathrm{A}}K,\] as required. \(\square\) Finally, we prove that in the definition of the Assouad dimension one may replace the exponent associated to localized coverings of balls of the same size by an exponent coming from localized packings of balls which may have different sizes. This will be useful since the natural covers appearing from the symbolic representation of \(K\) consist of cylinders which may have very non-uniform diameters when indexed by length. First, for a metric space \(X\), \(x\in X\), and \(R\in(0,1)\), denote the family of all localized centred packings by \[\operatorname{pack}(X,x,R)=\left\{\{B(x_{i},r_{i})\}_{i=1}^{\infty}:\begin{array} []{c}0<r_{i}\leq R,x_{i}\in X,B(x_{i},r_{i})\subset B(x,R),\\ B(x_{i},r_{i})\cap B(x_{j},r_{j})=\emptyset\text{ for all }i\neq j\end{array}\right\}.\] In our proof, we will also require the Assouad dimension of a measure. 
Given a compact doubling metric space \(X\) and a Borel measure \(\mu\) with \(\operatorname{supp}\mu=X\), the _Assouad dimension of \(\mu\)_ is given by \[\dim_{\mathrm{A}}\mu=\inf\Bigl{\{}\alpha\geq 0:\forall x\in X \,\forall 0<r\leq R<\operatorname{diam}X\] \[\frac{\mu(B(x,R))}{\mu(B(x,r))}\lesssim_{\alpha}\left(\frac{R}{ r}\right)^{\alpha}\Bigr{\}}.\] The main result of [20] (the original Russian version can be found in [20]) is that for a compact doubling metric space \(X\), \[\dim_{\mathrm{A}}X=\inf\{\dim_{\mathrm{A}}\mu:\mathrm{supp}\,\mu=X\}.\] In the following result, we observe that the existence of good measures provides a convenient way to control the localized disk packing exponent. **Proposition 3.6**.: _Let \(X\) be a bounded metric space. Then_ \[\dim_{\mathrm{A}}X=\inf\Bigl{\{}\alpha:\forall 0<R<1\,\forall x\in X \,\forall\{B(x_{i},r_{i})\}_{i=1}^{\infty}\in\mathrm{pack}(X,x,R)\] \[\sum_{i=1}^{\infty}r_{i}^{\alpha}\lesssim_{\alpha}R^{\alpha} \Bigr{\}}.\] _Proof._ That \[\dim_{\mathrm{A}}X\leq\inf\Bigl{\{}\alpha:\forall 0<R<1\,\forall x \in X\,\forall\{B(x_{i},r_{i})\}_{i=1}^{\infty}\in\mathrm{pack}(X,x,R)\] \[\sum_{i=1}^{\infty}r_{i}^{\alpha}\lesssim_{\alpha}R^{\alpha} \Bigr{\}}\] is immediate by specializing to packings with \(r_{i}=r\) for some \(0<r\leq R\), using the equivalence (up to a constant factor) of covering and packing counts. Now to show the lower bound, if \(X\) is not doubling, then \(\dim_{\mathrm{A}}X=\infty\) and the result is trivial. Otherwise, by passing to the completion (which does not change the value of the Assouad dimension) and recalling that a bounded doubling metric space is totally bounded, we may assume that \(X\) is also compact. Thus let \(\alpha>\dim_{\mathrm{A}}X\) be arbitrary. By [20, Theorem 1], there is a probability measure \(\mu\) with \(\mathrm{supp}\,\mu=X\) and \(\dim_{\mathrm{A}}\mu<\alpha\). Then for any \(0<R<1\), \(x\in X\), and \(\{B(x_{i},r_{i})\}_{i=1}^{\infty}\in\mathrm{pack}(X,x,R)\), by disjointness, \[\mu\bigl{(}B(x,R)\bigr{)}\geq\sum_{i=1}^{\infty}\mu\bigl{(}B(x_{i},r_{i}) \bigr{)}\gtrsim\mu\bigl{(}B(x,R)\bigr{)}\sum_{i=1}^{\infty}\left(\frac{r_{i}} {R}\right)^{\alpha}.\] Therefore, \[\sum_{i=1}^{\infty}r_{i}^{\alpha}\lesssim R^{\alpha}\] which, since \(\alpha>\dim_{\mathrm{A}}X\) was arbitrary, yields the claimed result. \(\square\) ### Proof of the Assouad dimension formula We can now state and prove the desired formula for the Assouad dimension of the non-autonomous self-similar set \(K\). Let \(n\in\mathbb{N}\) and \(m\in\mathbb{N}\) be arbitrary, and let \(\theta(n,m)\) denote the unique value satisfying the equation \[\sum_{j_{1}\in\mathcal{J}_{n+1}}\cdots\sum_{j_{m}\in\mathcal{J}_{n+m}}\prod_{ k=1}^{m}r_{n+k,j_{k}}^{\theta(n,m)}=1.\] Note that \(\theta(n,m)\) is precisely the similarity dimension of the IFS \[\Phi_{n+1}\circ\cdots\circ\Phi_{n+m}=\{f_{1}\circ\cdots\circ f_{m}:f_{i}\in\Phi_{ n+i}\}.\] We establish the following formula for the Assouad dimension of \(K\). **Theorem 3.7**.: _Let \((\Phi_{n})_{n=1}^{\infty}\) be a non-autonomous IFS satisfying the open set condition and with uniformly bounded contraction ratios. Denote the associated non-autonomous self-similar set by \(K\). Then_ \[\dim_{\rm A}K=\lim_{m\to\infty}\sup_{n\in\mathbb{N}}\theta(n,m). \tag{3.5}\] _Proof._ Let us first show that the limit in (3.5) exists by verifying that the function \(\theta(n,m)\) satisfies the assumptions of Lemma 3.4 with \(A=\mathbb{N}\). 
First, \(\theta(n,m)\in[0,d]\) since \(\#\mathcal{J}_{n}\geq 1\) for all \(n\in\mathbb{N}\) and the open set condition along with scaling properties of \(d\)-dimensional Lebesgue measure forces \(\sum_{j\in\mathcal{J}_{n}}r_{n,j}^{d}\leq 1\). Thus it remains to verify assumption (ii) in Lemma 3.4. Let \(n,m_{1},m_{2}\in\mathbb{N}\) be arbitrary. Recalling the definitions of \(\theta(n,m_{1})\) and \(\theta(n+m_{1},m_{2})\), by Holder's inequality with exponents \((m_{1}+m_{2})/m_{1}\) and \((m_{1}+m_{2})/m_{2}\), \[1 =\sum_{j_{1}\in\mathcal{J}_{n+1}}\cdots\sum_{j_{m_{1}+m_{2}}\in \mathcal{J}_{n+m_{1}+m_{2}}}\left(\prod_{k=1}^{m_{1}}r_{n+k,j_{k}}\right)^{ \theta(n,m_{1})}\left(\prod_{k=m_{1}+1}^{m_{1}+m_{2}}r_{n+k,j_{k}}\right)^{ \theta(n+m_{1},m_{2})}\] \[\geq\sum_{j_{1}\in\mathcal{J}_{n+1}}\cdots\sum_{j_{m_{1}+m_{2}} \in\mathcal{J}_{n+m_{1}+m_{2}}}\left(\prod_{k=1}^{m_{1}+m_{2}}r_{n+k,j_{k}} \right)^{\frac{m_{1}\theta(n,m_{1})+m_{2}\theta(n+m_{1},m_{2})}{m_{1}+m_{2}}}.\] But \(\theta(n,m_{1}+m_{2})\) is the unique value satisfying \(\phi(\theta(n,m_{1}+m_{2}))=1\), where \[\phi(s)=\sum_{j_{1}\in\mathcal{J}_{n+1}}\cdots\sum_{j_{m_{1}+m_{2}}\in \mathcal{J}_{n+m_{1}+m_{2}}}\left(\prod_{k=1}^{m_{1}+m_{2}}r_{n+k,j_{k}}\right) ^{s}\] is monotonically decreasing in \(s\), yielding the desired inequality. In particular, the limit in (3.5) exists. Let us now verify the formula. First, recall from Corollary 3.3 that \(\dim_{\rm A}K=\dim_{\rm A}\Delta\). Let \(\epsilon>0\) be fixed and let \(M\) be sufficiently large so that for all \(n\in\mathbb{N}\) and \(m\geq M\), \[|\theta(n,m)-s|\leq\epsilon\qquad\text{where}\qquad s=\lim_{m\to\infty}\sup_ {n\in\mathbb{N}}\theta(n,m).\] Now fix a cylinder \([j_{1},\ldots,j_{n}]\subset\Delta\) for some \((j_{1},\ldots,j_{n})\in\mathcal{J}_{1}\times\cdots\times\mathcal{J}_{n}\) and write \(R=\operatorname{diam}([j_{1},\ldots,j_{n}])=r_{1,j_{1}}\cdots r_{n,j_{n}}\). Note that if \(m\geq M\), by definition of \(\theta(n,m)\) \[\sum_{j_{n+1}\in\mathcal{J}_{n+1}}\cdots\sum_{j_{n+m}\in\mathcal{J}_{n+m}} \prod_{k=1}^{n+m}r_{k,j_{k}}^{\theta(n,m)}=R^{\theta(n,m)}.\] But the family of cylinders \[\left\{[j_{1},\ldots,j_{n+m}]:(j_{n+1},\ldots,j_{n+m})\in\mathcal{J}_{n+1} \times\cdots\times\mathcal{J}_{n+m}\right\}\] forms a packing of \(B(x,R)\). Thus since \(m\geq M\) is arbitrary, by Proposition 3.6, \(\dim_{\mathrm{A}}K\geq s-\epsilon\). Conversely, let us upper bound \(\dim_{\mathrm{A}}K\). Recall that \(\epsilon>0\) is fixed as above and let \(m\geq M\) be fixed. Now let \(0<r\leq R<1\) and fix a ball \(B(x,R)\subset\Delta\). By definition of the metric on \(\Delta\), \(B(x,R)=[j_{1},\ldots,j_{n}]\) where \(r_{1,j_{1}}\cdots r_{n,j_{n}}\leq R\). We inductively build a sequence of covers \((\mathcal{B}_{k})_{k=1}^{\infty}\) for \(B(x,R)\) such that each \(\mathcal{B}_{k}\) is composed only of cylinder sets and \[\sum_{[i_{1},\ldots,i_{\ell}]\in\mathcal{B}_{k}}(r_{1,i_{1}}\cdots r_{\ell,i_ {\ell}})^{s+\epsilon}\leq R^{s+\epsilon}. \tag{3.6}\] and \[r_{1,i_{1}}\cdots r_{\ell,i_{\ell}}\geq r\cdot r_{\min}^{m}\qquad\text{for all}\qquad[i_{1},\ldots,i_{\ell}]\in\mathcal{B}_{k}. \tag{3.7}\] Begin with \(\mathcal{B}_{1}=\{[j_{1},\ldots,j_{n}]\}\), which clearly satisfies the requirements. Now suppose we have constructed \(\mathcal{B}_{k}\) for some \(k\in\mathbb{N}\). Let \([i_{1},\ldots,i_{\ell}]\in\mathcal{B}_{k}\) be an arbitrary cylinder set. If \(r_{1,i_{1}}\cdots r_{\ell,i_{\ell}}\leq r\), do nothing; this guarantees that (3.7) holds. 
Otherwise, replace the cylinder \([i_{1},\ldots,i_{\ell}]\) with the family of cylinders \[\{[i_{1},\ldots,i_{\ell},j_{1},\ldots,j_{m}]:(j_{1},\ldots,j_{m})\in\mathcal{J }_{\ell+1}\times\cdots\times\mathcal{J}_{\ell+m}\}.\] The choice of \(m\geq M\) and the definition of \(\theta(\ell,m)\) ensures that (3.6) holds. Repeat this process until every cylinder in \(\mathcal{B}_{k}\) has diameter \(\leq r\). That this process terminates at a finite level \(k\) is guaranteed by (3.1). Thus replacing each cylinder \([i_{1},\ldots,i_{\ell}]\) with a ball \(B(x_{i_{1},\ldots,i_{\ell}},r)\) for some \(x_{i_{1},\ldots,i_{\ell}}\in[i_{1},\ldots,i_{\ell}]\), by (3.6) and (3.7) the corresponding cover has \[\sum_{[i_{1},\ldots,i_{\ell}]\in\mathcal{B}_{k}}r^{s+\epsilon}\leq r_{\min}^ {-m(s+\epsilon)}\sum_{[i_{1},\ldots,i_{\ell}]\in\mathcal{B}_{k}}(r_{1,i_{1}} \cdots r_{\ell,i_{\ell}})^{s+\epsilon}\lesssim R^{s+\epsilon}\] which guarantees that \(\dim_{\mathrm{A}}K\leq s+\epsilon\), as claimed. ## 4 Tangent structure and dimension of Gatzouras-Lalley carpets In this section, we introduce the definitions of Gatzouras-Lalley and Baranski carpets and prove our main results on tangents and pointwise Assouad dimension of Gatzouras-Lalley carpets. ### Gatzouras-Lalley and Baranski carpets #### 4.1.1 Defining the maps Fix an index set \(\mathcal{I}\) with \(\#\mathcal{I}\geq 2\), and for \(j=1,2\) fix contraction ratios \((\beta_{i,j})_{i\in\mathcal{I}}\) in \((0,1)\) and translations \((d_{i,j})_{i\in\mathcal{I}}\in\mathbb{R}\). We then call the IFS \(\{T_{i}\}_{i\in\mathcal{I}}\)_diagonal_ when \[T_{i}(x_{1},x_{2})=(\beta_{i,1}x_{1}+d_{i,1},\beta_{i,2}x_{2}+d_{i,2})\text{ for each }i\in\mathcal{I}.\] Let \(\eta_{j}\) denote the orthogonal projection onto the \(j^{\text{th}}\) coordinate axis, i.e. \(\eta_{j}(x_{1},x_{2})=x_{j}\). We denote by \(\Lambda_{j}=\{S_{i,j}\}_{i\in\mathcal{I}}\) the _projected systems_, where \(\eta_{j}\circ T_{j}=S_{i,j}\circ\eta_{j}\). We will often write \(\eta=\eta_{1}\) to denote simply the projection onto the first coordinate axis. Of course, \(S_{i,j}(x)=\beta_{i,j}x+d_{i,j}\) are iterated function systems of similarities. Let \(\mathcal{I}^{*}=\bigcup_{n=0}^{\infty}\mathcal{I}^{n}\), and for \(\mathtt{i}=(i_{1},\ldots,i_{n})\in\mathcal{I}^{*}\) and \(j=1,2\), write \[T_{\mathtt{i}} =T_{i_{1}}\circ\cdots\circ T_{i_{n}},\] \[S_{\mathtt{i},j} =S_{i_{1},j}\circ\cdots\circ S_{i_{n},j}\] and \[p_{\mathtt{i}} =p_{i_{1}}\cdots p_{i_{n}},\] \[\beta_{\mathtt{i},j} =\beta_{i_{1},j}\cdots\beta_{i_{n},j}.\] For \(n\in\mathbb{N}\) and \(\gamma\in\Omega\coloneqq\mathcal{I}^{\mathbb{N}}\), we write \(\gamma\!\!\upharpoonright_{n}\) to denote the unique prefix of \(\gamma\) in \(\mathcal{I}^{n}\). Now \(\eta_{j}\) induces an equivalence relation \(\sim_{j}\) on \(\mathcal{I}\) where \(i\sim i^{\prime}\) if \(S_{i,j}=S_{i^{\prime},j}\). Let \(\eta_{j}\colon\mathcal{I}\to\mathcal{I}/\sim_{j}\) denote the natural projection. Intuitively, \(\eta_{j}(i)\) is the set of indices which lie in the same column or row as the index \(i\). Then \(\eta_{j}\) extends naturally to a map on \(\Omega\) by \(\eta_{j}((i_{n})_{n=1}^{\infty})=(\eta_{j}(i_{n}))_{n=1}^{\infty}\subset\eta_ {j}(\mathcal{I})^{*}\); and similar extends to a map on \(\mathcal{I}^{*}\). For notational clarity, we will refer to words in \(\mathcal{I}^{*}\) using upright indices, such as \(\mathtt{i}\), and words in \(\eta_{j}(\mathcal{I}^{*})\) using their underlined variants, such as \(\mathtt{i}\). 
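The following sketch (illustration only) records this notation for an assumed three-map diagonal system on \([0,1]^{2}\): maps \(0\) and \(1\) share the same horizontal part \(S_{i,1}\) and hence lie in the same column under \(\eta=\eta_{1}\), while map \(2\) occupies its own column. The word maps \(T_{\mathtt{i}}\), the products \(\beta_{\mathtt{i},j}\), and the projected word \(\eta(\mathtt{i})\) are computed directly from the definitions. This particular example also satisfies the Gatzouras-Lalley conditions introduced in Definition 4.1 below, and it is reused in the numerical illustrations later in this section.

```python
# Minimal sketch (illustration only) of the word notation for an assumed
# three-map diagonal system T_i(x, y) = (b1*x + d1, b2*y + d2) on [0,1]^2.
# Maps 0 and 1 have the same horizontal part, so eta identifies them.
MAPS = {
    0: dict(b1=1/2, b2=1/4, d1=0.0, d2=0.0),
    1: dict(b1=1/2, b2=1/4, d1=0.0, d2=1/2),
    2: dict(b1=1/3, b2=1/4, d1=2/3, d2=0.0),
}
COLUMN = {0: "A", 1: "A", 2: "B"}     # eta(i): indices with equal S_{i,1} identified

def T(i, x, y):
    """The diagonal map T_i applied to a point (x, y)."""
    m = MAPS[i]
    return (m["b1"] * x + m["d1"], m["b2"] * y + m["d2"])

def T_word(word, x, y):
    """T_i = T_{i_1} o ... o T_{i_n}: the innermost map T_{i_n} acts first."""
    for i in reversed(word):
        x, y = T(i, x, y)
    return (x, y)

def betas(word):
    """(beta_{i,1}, beta_{i,2}): coordinatewise products of contraction ratios."""
    b1 = b2 = 1.0
    for i in word:
        b1, b2 = b1 * MAPS[i]["b1"], b2 * MAPS[i]["b2"]
    return b1, b2

def eta(word):
    """The projected (column) word eta(i)."""
    return tuple(COLUMN[i] for i in word)

if __name__ == "__main__":
    w = (0, 2, 1)
    print("T_w(0, 0) =", T_word(w, 0.0, 0.0))        # (1/3, 1/32)
    print("betas(w)  =", betas(w))                   # (1/12, 1/64)
    print("eta(w)    =", eta(w))                     # ('A', 'B', 'A')
```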
Note that if \(\mathtt{i}\sim_{j}\mathtt{j}\), then \(S_{\mathtt{i},j}=S_{\mathtt{j},j}\) and \(\beta_{\mathtt{i},j}=\beta_{\mathtt{j},j}\). In particular, we may unambiguously write \(S_{\mathtt{i},j}\) and \(\beta_{\mathtt{i},j}\) for \(\mathtt{i}\in\eta_{j}(\mathcal{I}^{*})\). Associated with the IFS \(\{T_{i}\}_{i\in\mathcal{I}}\) is a unique non-empty compact _attractor_ \(K\), satisfying \(K=\bigcup_{i\in\mathcal{I}}T_{i}(K)\). Note that the projected IFS \(\{S_{i,j}\}_{i\in\mathcal{I}}\) has attractor \(K_{j}=\eta_{j}(K)\) for \(j=1,2\). Recalling that \(\Omega=\mathcal{I}^{\mathbb{N}}\), let \(\pi\colon\Omega\to K\) denote the continuous map \[\big{\{}\pi\big{(}(i_{n})_{n=1}^{\infty}\big{)}\big{\}}=\lim_{n\to\infty}T_{i_{1}}\circ\cdots\circ T_{i_{n}}(K).\] Without loss of generality, and for the remainder of this document, we will assume that \(K\subset[0,1]^{2}\). We can now introduce our two primary classes of self-affine sets. **Definition 4.1**.: We say that the carpet is of type _Gatzouras-Lalley_ if: 1. \(T_{i}((0,1)^{2})\cap T_{j}((0,1)^{2})=\emptyset\) for all \(i\neq j\), 2. either \(S_{i,1}((0,1))=S_{j,1}((0,1))\) or \(S_{i,1}((0,1))\cap S_{j,1}((0,1))=\emptyset\) for all \(i,j\), and 3. \(\beta_{i,1}>\beta_{i,2}\) for all \(i\in\mathcal{I}\); and type _Baranski_ if: 1. \(T_{i}((0,1)^{2})\cap T_{j}((0,1)^{2})=\emptyset\) for all \(i\neq j\), and 2. either \(S_{i,\ell}((0,1))=S_{j,\ell}((0,1))\) or \(S_{i,\ell}((0,1))\cap S_{j,\ell}((0,1))=\emptyset\) for all \(i,j\) and \(\ell=1,2\). Moreover, we say that an IFS \(\{f_{i}\}_{i\in\mathcal{I}}\) with attractor \(K\) satisfies the _strong separation condition_ (or SSC for short) if \(f_{i}(K)\cap f_{j}(K)=\emptyset\) for all \(i\neq j\in\mathcal{I}\).

#### 4.1.2 Dimensions of Gatzouras-Lalley carpets

To conclude this section, we recall some standard results on the dimensions of Gatzouras-Lalley carpets. We defer the corresponding results for Baranski carpets to §5.1. Before we do this, we first recall the notion of the lower dimension, which is in some sense dual to the definition of the Assouad dimension. Let \(K\subset\mathbb{R}^{d}\) be compact. Then the _lower dimension_ of \(K\) is given by \[\dim_{\mathrm{L}}K=\sup\Bigl{\{}s:\exists C>0\,\forall 0<r\leq R<1\,\forall x\in K\] \[N_{r}(B(x,R)\cap K)\geq C\Bigl{(}\frac{R}{r}\Bigr{)}^{s}\Bigr{\}}.\] In order to state our results on the Hausdorff dimension, we must also introduce some notation for Bernoulli measures. Let \(\mathcal{P}\) denote the collection of probability vectors on \(\mathcal{I}\), i.e. \[\mathcal{P}=\mathcal{P}(\mathcal{I})\coloneqq\Bigl{\{}(p_{i})_{i\in\mathcal{I}}:p_{i}\geq 0\text{ for all }i\text{ and }\sum_{i\in\mathcal{I}}p_{i}=1\Bigr{\}}.\] Equip \(\mathcal{P}\) with the topology inherited from \(\mathbb{R}^{\mathcal{I}}\). Given \(\boldsymbol{p}\in\mathcal{P}\), considering \(\boldsymbol{p}\) as a probability measure on \(\mathcal{I}\), we let \(\boldsymbol{p}^{\mathbb{N}}\) denote the infinite product measure supported on \(\Omega\). We let \(\mu_{\boldsymbol{p}}=\pi_{*}\boldsymbol{p}^{\mathbb{N}}\) denote the corresponding invariant measure on \(K\), where \(\pi_{*}\) denotes the pushforward map. Note that the projections \(\eta_{j}\) also induce natural maps \(\eta_{j}\colon\mathcal{P}(\mathcal{I})\to\mathcal{P}(\eta_{j}(\mathcal{I}))\) by \(\eta_{j}(\boldsymbol{p})_{\xi}=\sum_{i\in\eta_{j}^{-1}(\xi)}\boldsymbol{p}_{i}\).
Given a probability vector \(\boldsymbol{p}\in\mathcal{P}\), we write \[H(\boldsymbol{p})=-\sum_{i\in\mathcal{I}}p_{i}\log p_{i}\qquad\text{and}\qquad\chi_{j}(\boldsymbol{p})=-\sum_{i\in\mathcal{I}}p_{i}\log\beta_{i,j}.\] We now recall the main results of [10]--stated below in (i) and (ii)--as well as the result of [14]--stated below in (iii). We also note that the same proof as given in [14] (which is explained more precisely in [12, Theorem 2.13]) gives the analogous result for the lower dimension. **Proposition 4.2** ([10, 14]).: _Let \(K\) be a Gatzouras-Lalley carpet._ 1. _The Hausdorff dimension of_ \(K\) _is given by_ \[\dim_{\mathrm{H}}K=\sup_{\boldsymbol{p}\in\mathcal{P}}s(\boldsymbol{p})\] _where_ \[s(\boldsymbol{p})\coloneqq\frac{H(\boldsymbol{p})}{\chi_{2}(\boldsymbol{p})}+H(\eta(\boldsymbol{p}))\Big{(}\frac{1}{\chi_{1}(\boldsymbol{p})}-\frac{1}{\chi_{2}(\boldsymbol{p})}\Big{)}.\] _Moreover, the supremum is always attained at an interior point of_ \(\mathcal{P}\)_._ 2. _The box dimension of_ \(K\) _exists and is given by the unique solution_ \(\dim_{\mathrm{B}}K\) _of_ \[\sum_{i\in\mathcal{I}}\beta_{i,1}^{\dim_{\mathrm{B}}\eta(K)}\beta_{i,2}^{\dim_{\mathrm{B}}K-\dim_{\mathrm{B}}\eta(K)}=1\qquad\text{where}\qquad\sum_{\underline{j}\in\eta(\mathcal{I})}\beta_{\underline{j},1}^{\dim_{\mathrm{B}}\eta(K)}=1.\] 3. _The Assouad dimension of_ \(K\) _is given by_ \[\dim_{\mathrm{A}}K=\dim_{\mathrm{B}}\eta(K)+\max_{\underline{\ell}\in\eta(\mathcal{I})}t(\underline{\ell})\] _where_ \(t(\underline{\ell})\) _is defined as the unique solution to the equation_ \[\sum_{j\in\eta^{-1}(\underline{\ell})}\beta_{j,2}^{t(\underline{\ell})}=1.\] _Similarly, the lower dimension of_ \(K\) _is given by_ \[\dim_{\mathrm{L}}K=\dim_{\mathrm{B}}\eta(K)+\min_{\underline{\ell}\in\eta(\mathcal{I})}t(\underline{\ell}).\]

#### 4.1.3 Regular points and interior words

We conclude this section with the notion of a regular point and an interior word. **Definition 4.3**.: We say that a point \(x\in K\) is _regular_ if for each \(r\in(0,1)\), there is an \(\mathtt{i}\in\mathcal{I}^{*}\) with \(\beta_{\mathtt{i},1}\lesssim r\) such that \(B(\eta(x),r)\cap\eta(K)\subset S_{\mathtt{i},1}(\eta(K))\). Given \(\mathtt{i}\in\mathcal{I}^{*}\), we say that \(\mathtt{i}\) is an _interior word_ if \(S_{\mathtt{i},1}([0,1])\subset(0,1)\). We let \(\mathcal{B}_{n}\subset\mathcal{I}^{n}\) denote the set of interior words of length \(n\). The following lemma is standard. Here, and elsewhere, given an \(n\in\mathbb{N}\) and \(\mathcal{Y}\subset\mathcal{I}^{n}\), we write \(\mathcal{Y}^{\mathbb{N}}\cong\Omega\) to denote the natural bijection. We will frequently abuse notation and interchangeably refer to elements in the subsystem or in the full system. **Lemma 4.4**.: _Let \(K\) be a Gatzouras-Lalley carpet._ 1. _If_ \(\eta(K)\) _satisfies the SSC, then each_ \(x\in K\) _is regular._ 2. _Suppose_ \(\gamma\in\mathcal{B}_{n}^{\mathbb{N}}\cong\Omega\) _for some_ \(n\in\mathbb{N}\)_. Then_ \(\pi(\gamma)\) _is regular._ We can now guarantee the existence of large subsystems consisting only of regular points. This result is essentially [11, Lemma 4.3]. **Proposition 4.5** ([11, 10]).: _Let \(K\) be a Gatzouras-Lalley carpet corresponding to the IFS \(\{T_{i}\}_{i\in\mathcal{I}}\). Then for every \(\epsilon>0\), there is an \(n\in\mathbb{N}\) and a family \(\mathcal{J}\subset\mathcal{I}^{n}\) so that the IFS \(\Lambda=\{T_{\mathtt{j}}:\mathtt{j}\in\mathcal{J}\}\) with attractor \(K_{\epsilon}\) satisfies the following conditions:_ 1. _each_ \(\mathtt{i}\in\mathcal{J}\) _is an interior word,_ 2.
\(\dim_{\mathrm{H}}K_{\epsilon}\geq\dim_{\mathrm{H}}K-\epsilon\)_,_ 3. \(\dim_{\mathrm{B}}\eta(K_{\epsilon})\geq\dim_{\mathrm{B}}\eta(K)-\epsilon\)_, and_ 4. _there are_ \(0<\rho_{2}<\rho_{1}<1\) _so that_ \(\beta_{\mathtt{i},1}=\rho_{1}\) _and_ \(\beta_{\mathtt{i},2}=\rho_{2}\) _for all_ \(\mathtt{i}\in\mathcal{I}\) _and each column has the same number of maps._ _In particular, each \(x\in K_{\epsilon}\) is a regular point with respect to the IFS \(\{T_{i}\}_{i\in\mathcal{I}}\) and \(\dim_{\mathrm{A}}K_{\epsilon}=\dim_{\mathrm{H}}K_{\epsilon}=\dim_{\mathrm{L}}K _{\epsilon}\)._ Proof.: First, if \(K\) is contained in a vertical line, then \(K\) is the attractor of a self-similar IFS in \(\mathbb{R}\) and the result is substantially easier. Now applying [11, Lemma 4.3], there exists a family \(\mathcal{J}_{0}\subset\mathcal{I}^{n_{0}}\) with attractor \(K_{0}\) satisfying conditions (ii), (iii), and (iv). By condition (iv), \(t(\mathtt{i})=t\) for all \(\mathtt{i}\in\mathcal{J}_{0}\) and \[\dim_{\mathrm{H}}K_{0}=\dim_{\mathrm{B}}\eta(K)+t\] and since \(K\) is not contained in a vertical line, we may assume that \(\dim_{\mathrm{B}}\eta(K_{0})>0\). Since \(\eta(K_{0})\) is the attractor of a self-similar IFS, iterating \(\mathcal{J}_{0}\) if necessary and removing the maps in the first and last column, obtain a family \(\mathcal{J}\subset\mathcal{J}_{0}^{n}\) with corresponding attractor \(K_{\epsilon}\) such that \(t(\mathtt{j})=t\) for any \(\mathtt{j}\in\mathcal{J}\), and \(\dim_{\mathrm{B}}\eta(K_{\epsilon})\geq\dim_{\mathrm{B}}\eta(K)-\epsilon\). Since words which correspond to rectangles that do not lie in the first or last column are necessarily interior words, combining this construction with Lemma 4.4 provides a family \(\mathcal{J}\) satisfying the desired properties. ### Approximate squares and symbolic slices A common technique when studying invariant sets for iterated function systems on some index set \(\mathcal{I}\) is to first reduce the problem to a symbolic problem on the coding space \(\mathcal{I}^{*}\). However, the main technical complexity in understanding the dimension theory Gatzouras-Lalley carpets, and more generally self-affine sets, is that the cylinder sets \(T_{\mathtt{i}}(K)\) are often exponentially distorted rectangles. As a result, we will keep track of two symbolic systems simultaneously, which together will capture the geometry of the set \(K\). Fix a Gatzouras-Lalley IFS \(\Lambda=\{T_{i}\}_{i\in\mathcal{I}}\). We first introduce some notation for handling cylinders. We then associate with the IFS \(\Lambda\), and the related defining data that we introduced in SS4.1, two important metric trees: first, the metric tree of _approximate squares_, and second the metric tree of _symbolic slices_. First, recall that \(\Omega=\mathcal{I}^{\mathbb{N}}\) is the space of infinite sequences on \(\mathcal{I}\). Given \(k\in\mathbb{N}\cup\{0\}\) and a word \(\mathtt{i}\in\mathcal{I}^{k}\), we define the _cylinder_ corresponding to \(\mathtt{i}\) by \[[\mathtt{i}]=\{\gamma\in\Omega:\gamma|_{k}=\mathtt{i}\}.\] The family of cylinders \(\{[\mathtt{i}]:\mathtt{i}\in\mathcal{I}^{k}\}_{k=0}^{\infty}\) defines a tree: we will often abuse notation and simply refer to \(\{\mathcal{I}^{k}\}_{k=0}^{\infty}\) as a tree. We will associate with this tree a variety of metrics, such as those induced by the maps \(\mathtt{i}\mapsto\beta_{\mathtt{i},\mathtt{j}}\) for \(j=1,2\). We will also use the same notation for the projected words \(\{\eta(\mathcal{I}^{k})\}_{k=0}^{\infty}\). 
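Before turning to approximate squares, we record a quick numerical illustration (not part of the formal development) of the formula in Proposition 4.2(iii) for the assumed three-map example from §4.1.1: each quantity appearing there is the root of a Moran-type equation \(\sum r^{s}=1\) and can be evaluated by bisection.

```python
# Hedged numerical illustration of Proposition 4.2(iii) for the assumed
# three-map example: column "A" contains maps 0 and 1 (heights 1/4, width 1/2),
# column "B" contains map 2 (height 1/4, width 1/3).
def moran(ratios, tol=1e-12):
    """The unique s >= 0 with sum(r ** s) == 1, found by bisection."""
    lo, hi = 0.0, 1.0
    while sum(r ** hi for r in ratios) > 1:   # enlarge the bracket if needed
        hi *= 2
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if sum(r ** mid for r in ratios) > 1:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

PROJ = {"A": 1/2, "B": 1/3}              # horizontal ratios of the two columns
COLUMNS = {"A": [1/4, 1/4], "B": [1/4]}  # vertical ratios beta_{j,2} by column

if __name__ == "__main__":
    dimB_proj = moran(list(PROJ.values()))          # dim_B eta(K) ~ 0.7879
    t = {c: moran(r) for c, r in COLUMNS.items()}   # t(A) = 1/2, t(B) = 0
    print("dim_B eta(K) ~", round(dimB_proj, 4))
    print("t:", {c: round(v, 4) for c, v in t.items()})
    print("dim_A K ~", round(dimB_proj + max(t.values()), 4))   # ~ 1.2879
    print("dim_L K ~", round(dimB_proj + min(t.values()), 4))   # ~ 0.7879
```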
Next, we define the _metric tree of approximate squares_. Before we do this, we introduce the notion of a _pseudo-cylinder_. Suppose \(\mathtt{i}\in\mathcal{I}^{k}\) and \(\underline{\mathtt{j}}\in\eta(\mathcal{I}^{\ell})\). We then write \[P(\mathtt{i},\underline{\mathtt{j}})=\{\gamma=(i_{n})_{n=1}^{\infty}\in \Omega:(i_{1},\ldots,i_{k})=\mathtt{i}\text{ and }\eta(i_{k+1},\ldots,i_{k+\ell})=\underline{ \mathtt{j}}\}.\] Figure 3: Two iterations of a Gatzouras–Lalley IFS within a cylinder, with a wide pseudo-cylinder in highlighted in blue and a tall pseudo-cylinder in red. If in addition \[\beta_{\mathtt{i}\mathtt{j},1}\geq\beta_{\mathtt{i},2}, \tag{4.1}\] we refer to the pseudo-cylinder as _wide_; otherwise, we refer to the pseudo-cylinder as _tall_. Note that map \((\mathtt{i},\mathtt{j})\mapsto P(\mathtt{i},\mathtt{j})\) is injective. Another equivalent way to understand the psuedo-cylinder \(P(\mathtt{i},\mathtt{j})\) is as a finite union of cylinders inside the cylinder \([\mathtt{i}]\), all of which lie inside the same column; that is, \[P(\mathtt{i},\mathtt{j})=\bigcup_{\mathtt{k}\in\eta^{-1}(\mathtt{j})}[ \mathtt{i}\mathtt{k}]. \tag{4.2}\] We refer the reader to Figure 3 for a depiction of the definition of a pseudo-cylinder. Now given an infinite word \(\gamma\in\Omega\), let \(L_{k}(\gamma)\) be the maximal integer so that \[\beta_{\gamma_{1},1}\cdots\beta_{\gamma_{L_{k}(\gamma)},1}\geq\beta_{\gamma_ {1},2}\cdots\beta_{\gamma_{k},2}.\] In other words, \(L_{k}(\gamma)\) is chosen so that the level \(L_{k}(\gamma)\) rectangle has approximately the same width as the height of the level \(k\) rectangle. Write \(\gamma|_{L_{k}(\gamma)}=\mathtt{i}\mathtt{j}\) where \(\mathtt{i}\in\mathcal{I}^{k}\). We then define the _approximate square_\(Q_{k}(\gamma)\subset\Omega\) by \[Q_{k}(\gamma)=P(\mathtt{i},\eta(\mathtt{j})).\] While different \(\gamma\) may define the same approximate square, the choice of \(\mathtt{i}\) and \(\eta(\mathtt{j})\) are unique. Note that the index \(L_{k}(\gamma)\) is chosen precisely to be as large as possible while still satisfying the condition (4.1). In other words, one can think of the wide pseudo-cylinders as "interpolating" between the cylinder \(P(\mathtt{i},\varnothing)=[\mathtt{i}]\) and the approximate square \(P(\mathtt{i},\eta(\mathtt{j}))=Q_{k}(\gamma)\), with the approximate square lying precisely at the threshold at which the pseudo-cylinder becomes tall. Of course, \(Q_{k+1}(\gamma)\subset Q_{k}(\gamma)\) and moreover for any \(\gamma,\gamma^{\prime}\in\Omega\), either \(Q_{k}(\gamma)=Q_{k}(\gamma^{\prime})\) or \(Q_{k}(\gamma)\cap Q_{k}(\gamma^{\prime})=\emptyset\). Denote the set of all approximate squares by \[\mathcal{S}_{k}=\{Q_{k}(\gamma):\gamma\in\Omega\}\qquad\text{and}\qquad \mathcal{S}=\bigcup_{k=0}^{\infty}\mathcal{S}_{k}.\] As discussed above, every approximate square is uniquely associated with a pair \((\mathtt{i},\mathtt{j})\), so we may therefore define a metric induced by \(\rho(Q)=\beta_{\mathtt{i},2}\), which makes the collection of approximate squares into a metric tree. To conclude this section, we define the _metric tree of symbolic slices_. Suppose we fix a word \(\gamma\in\Omega\). The word \(\gamma=(i_{n})_{n=1}^{\infty}\) defines for each \(n\in\mathbb{N}\) a self-similar IFS \(\Phi_{n}=\{S_{j,2}:j\in\eta^{-1}(\eta(i_{n}))\}\). This IFS is precisely the IFS corresponding to the column containing the index \(i_{n}\). 
Note that there are only finitely many possible choices for the \(\Phi_{n}\), so the sequence \((\Phi_{n})_{n=1}^{\infty}\) has as an attractor a non-autonomous self-similar set \(K_{\eta(\gamma)}\) and corresponding metric tree \(\Omega(\eta(\gamma))\), as defined in SS3. This non-autonomous IFS has uniformly bounded contractions and satisfies the OSC with respect to the open interval \((0,1)\). For notational simplicity, we denote the cylinder sets which compose this metric tree as \[\mathcal{F}_{\eta(\gamma),n}=\{[j_{1},\ldots,j_{n}]:(j_{1},\ldots,j_{n})\in \Phi_{1}\times\cdots\times\Phi_{n}\}\quad\text{and}\quad\mathcal{F}_{\eta( \gamma)}=\bigcup_{n=0}^{\infty}\mathcal{F}_{\eta(\gamma),n}.\] We call \(K_{\eta(\gamma)}\) the _symbolic slice_ associated with the word \(\gamma\). If the projected IFS \(\{S_{\mathbb{i}}1\}_{\mathbb{i}\in\eta(\mathcal{I})}\) satisfies the SSC, then if \(x=\eta(\pi(\gamma))\), \[\{x\}\times K_{\eta(\gamma)}=\eta^{-1}(x)\cap K\] is precisely the vertical slice of \(K\) containing \(x\). In general, \(K_{\eta(\gamma)}\) is always contained inside a vertical slice of \(K\). The symbolic fibre \(K_{\eta(\gamma)}\) (and its associated Assouad dimension) was introduced and studied in [11, SS1.2] in the more general setting of overlapping diagonal carpets. ### Tangents of Gatzouras-Lalley carpets It turns out that the pointwise Assouad dimension at \(x=\pi(\gamma)\) is closely related to the Assouad dimension of the symbolic fibre \(K_{\eta(\gamma)}\). In this section, we make this notion precise, and moreover use it to construct large tangents for Gatzouras-Lalley carpets. In our main result in this section, we prove that approximate squares containing a fixed word \(\gamma\in\Omega\) converge in Hausdorff distance to product sets of weak tangents of \(K_{\eta(\gamma)}\) with the projection \(\eta(K)\), up to some finite distortion and contributions from adjacent approximate squares. First, we define \[\Phi_{k,\gamma}(x,y)=\big{(}S^{-1}_{\gamma\mathbb{i}_{L_{k}(\gamma)},1}(x),S^ {-1}_{\gamma\mathbb{i}_{L},2}(y)\big{)}.\] By choice of \(L_{k}(\gamma)\), the maps \(\Phi_{k,\gamma}\) are (up to some constant-size distortion) homotheties. One can think of \(\Phi_{k,\gamma}\) as mapping the approximate square \(\pi(Q_{k}(\gamma))\) to the unit square \([0,1]^{2}\). **Proposition 4.6**.: _Let \(K\) be a Gatzouras-Lalley carpet and let \(\gamma\in\Omega\) be arbitrary. Suppose \((\mathbb{i}_{n})_{n=1}^{\infty}\) is any sequence such that \(\eta(\mathbb{i}_{n})=\eta(\gamma\mathbb{i}_{n})\). Then_ \[p_{\mathcal{H}}\left(\eta(K)\times(S^{-1}_{\mathbb{i}_{n},2}(K_{\eta(\gamma)} )\cap[0,1]);\Phi_{n,\gamma}(K)\cap[0,1]^{2}\right)\lesssim\kappa^{n} \tag{4.3}\] _where \(\kappa=\max\{\frac{\beta_{i,2}}{\beta_{i,1}}:i\in\mathcal{I}\}\in(0,1)\). Moreover, suppose \(\gamma\) is regular. Then for any \(\gamma\in\Omega\) and \(F\in\mathrm{Tan}(K,\pi(\gamma))\), there is an \(E\in\mathrm{Tan}(K_{\eta(\gamma)})\) and a similarity map \(h\) so that \(h(F)\subset\eta(K)\times E\)._ _Proof._ We first prove that \[d_{\mathcal{H}}\left(\eta(K)\times(S^{-1}_{\mathbb{i}_{n},2}(K_{\eta(\gamma)} )\cap[0,1]),\Phi_{n,\gamma}(\pi(Q_{n}(\gamma)))\right)\lesssim\kappa^{n}\] Fix \(n\in\mathbb{N}\) and write \(k=L_{n}(\gamma)\). Let \(Q_{n}(\gamma)=P(\gamma\mathbb{i}_{n},\mathbb{j})\) and enumerate \(\eta^{-1}(\mathbb{j})=\{\mathbb{j}_{1},\ldots,\mathbb{j}_{n}\}\subset\mathcal{ I}^{k-n}\). 
Observe that \(\eta(T_{\mathbb{j}_{i}}(K))=S_{\mathbb{j}_{i},1}(K)\) does not depend on the choice of \(i=1,\ldots,m\). Now \(\Phi_{n,\gamma}(T_{\gamma\mathbb{i}_{n}\mathbb{j}_{i}}(K))\) is contained in the rectangle \([0,1]\times S_{\mathbb{j}_{i},2}(K)\). Moreover, the rectangle \([0,1]\times S_{\mathbb{j}_{i},2}(K)\) has height \(\lesssim\kappa^{n}\). Therefore \[d_{\mathcal{H}}\left(\eta(K)\times\bigcup_{i=1}^{m}S_{\mathbb{j}_{i},2}([0,1] ),\Phi_{n,\gamma}(Q_{n}(\gamma))\right)\lesssim\kappa^{n}. \tag{4.4}\] But approximating the set \(S_{\mathbb{i}_{n},2}([0,1])\cap K_{\eta(\gamma)}\) at level \(n\) with cylinders at level \(k=L_{n}(\gamma)\), using the fact that \(\eta(\mathbb{i}_{n})=\eta(\gamma\mathbb{i}_{n})\), \[d_{\mathcal{H}}\left(S^{-1}_{\mathbb{i}_{n},2}(K_{\eta(\gamma)})\cap[0,1], \bigcup_{i=1}^{m}S_{\mathbb{j}_{i},2}([0,1])\right)\lesssim\kappa^{n}. \tag{4.5}\] Combining (4.4) and (4.5) gives the claim. In particular, noting that \(Q_{n}(\gamma)\subset K\) and \(\Phi_{n,\gamma}(Q_{n}(\gamma))\subset[0,1]^{2}\) gives (4.3). Now suppose in addition that \(x=\pi(\gamma)\) is regular and let \(r>0\) be arbitrary. Since \(x\) is regular, there is an \(n\in\mathbb{N}\) with \(r\leq\beta_{\gamma\upharpoonright_{n}!}\lesssim r\) such that \[B(x,r)\cap K\subset\bigcup_{j=1}^{\ell}T_{\mathbf{i}_{j}}(K)\] where \[\{\mathbf{i}_{1},\ldots,\mathbf{i}_{\ell}\}=\{\mathbf{i}\in\mathcal{I}^{n}: \eta(\mathbf{i})=\eta(\gamma\upharpoonright_{n})\text{ and }T_{\mathbf{i}}(K)\cap B(x,r)\neq\emptyset\}.\] Now exactly as before, each rectangle \(T_{\mathbf{i}_{j}}(K)\) has width \(\approx r\) and height \(\lesssim r\kappa^{n}\). Therefore identifying \(x\in K\) with the analogous point \(x\in K_{\eta(\gamma)}\), there is a similarity map \(h_{r}\) with contraction ratio in some interval \([1,c]\) so that \[p_{\mathcal{H}}\left(r^{-1}(K-x)\cap B(0,1);h_{r}(\eta(K))\times r^{-1}(K_{ \eta(\gamma)}-x)\right)\lesssim\kappa^{n}.\] Now suppose \(F\in\operatorname{Tan}(K,x)\) so that \(F=\lim_{n\to\infty}r_{n}^{-1}(K-x)\cap B(0,1)\). Passing to a subsequence, we may assume that the \(h_{r_{n}}\) have contraction ratios converging to some \(r_{0}\geq 1\). Thus passing again to a subsequence, let \(F_{0}=\lim_{n\to\infty}(r_{0}r_{n})^{-1}(K-x)\cap B(0,1)\). Since \(r_{0}\geq 1\), we have \(F\subset F_{0}\). Passing again to a subsequence, let \[\lim_{n\to\infty}(r_{0}r_{n})^{-1}(K_{\eta(\gamma)}-x)\cap B(0,1)=E\in \operatorname{Tan}(K_{\eta(\gamma)}).\] Thus \(r_{0}^{-1}F\subset F_{0}\subset\eta(K)\times E\), as claimed. To conclude this section, we establish our general result which guarantees the existence of product-like tangents for arbitrary points in Gatzouras-Lalley carpets. **Proposition 4.7**.: _Let \(K\) be a Gatzouras-Lalley carpet. Then for each \(x\in K\), there is an \(F\in\operatorname{Tan}(K,x)\) so that_ \[\mathcal{H}^{\dim_{\mathrm{H}}\eta(K)+\dim_{\mathrm{A}}K_{\eta(\gamma)}}(F) \gtrsim 1,\] _where \(\gamma\in\Omega\) is such that \(\pi(\gamma)=x\). In particular,_ \[\dim_{\mathrm{A}}(K,x)\geq\max\{\dim_{\mathrm{H}}\eta(K)+\dim_{\mathrm{A}}K_{ \eta(\gamma)},\dim_{\mathrm{B}}K\}.\] Proof.: We will construct the set \(F\) essentially as a product \(\eta(K)\times E\) where \(E\) is a _weak_ tangent of \(K_{\eta(\gamma)}\). First, recall from Corollary 3.3 that \(\dim_{\mathrm{A}}K_{\eta(\gamma)}=\dim_{\mathrm{A}}\Omega(\eta(\gamma))\). 
Thus there is a sequence \((n_{k})_{k=1}^{\infty}\) diverging to infinity and words \(\mathbf{i}_{k}\in\mathcal{I}^{n_{k}}\) with \(\eta(\mathbf{i}_{k})=\gamma\upharpoonright_{n_{k}}\) such that \[E\coloneqq\lim_{k\to\infty}S_{\mathbf{i}_{k},2}^{-1}(K_{\eta(\gamma)})\cap[0,1]\] has \(\mathcal{H}^{\dim_{\mathrm{A}}K_{\eta(\gamma)}}(E)\gtrsim 1\). Thus by Proposition 4.6 applied along the sequence \((\mathtt{i}_{k})_{k=1}^{\infty}\), since the images \(\Phi_{n,\gamma}^{-1}([0,1]^{2})\) are rectangles with bounded eccentricity containing \(\pi(\gamma)\), there is a tangent \(F\in\operatorname{Tan}(K,x)\) containing an image of \(\eta(K)\times E\) under a bi-Lipschitz map with constants depending only on \(K\). But \(\eta(K)\) is Ahlfors-David regular so that \[\mathcal{H}^{\dim_{\mathtt{H}}\eta(K)+\dim_{\mathtt{A}}K_{\eta(\gamma)}}(F) \geq\mathcal{H}^{\dim_{\mathtt{H}}\eta(K)+\dim_{\mathtt{A}}K_{\eta(\gamma)}} \big{(}\eta(K)\times E\big{)}\gtrsim 1\] as claimed. The result concerning \(\dim_{\mathtt{A}}(K,x)\) then follows by Proposition 2.2 and Proposition 2.13. ### Upper bounds for the pointwise Assouad dimension We now prove our main upper bound for the pointwise Assouad dimension of Gatzouras-Lalley carpets. As a result of the local inhomogeneity of Gatzouras-Lalley carpets, obtaining good upper bounds requires some care. We will prove a sequence of lemmas which, morally, provide optimal covers for a variety of symbolic objects: these covers will then be combined to obtain our general upper bound for the pointwise Assouad dimension. We first show that, as a result of the vertical alignment of their component cylinders, pseudo cylinders can essentially be covered by their projection. Recall that \(\mathcal{S}\) denotes the set of all approximate squares. Then if \(P(\mathtt{i},\underline{\mathtt{j}})\) is any wide pseudo-cylinder, we can write it as a union of the approximate squares in the family \[\mathcal{Q}(\mathtt{i},\underline{\mathtt{j}})=\{Q\in\mathcal{S}:\ Q=P( \mathtt{i},\underline{\mathtt{k}})\text{ for some }\underline{\mathtt{k}}\in\eta( \mathcal{I}^{*})\text{ and }Q\subset P(\mathtt{i},\underline{\mathtt{j}})\}.\] Since each \(Q=P(\mathtt{i},\underline{\mathtt{k}})\) for some \(\underline{\mathtt{k}}\), we have \(Q\in S(\beta_{\mathtt{i},2})\) so that this family of approximate squares forms a section. **Lemma 4.8**.: _Let \(P(\mathtt{i},\underline{\mathtt{j}})\) be a wide pseudo-cylinder. Then_ \[\#\mathcal{Q}(\mathtt{i},\underline{\mathtt{j}})\approx\left(\frac{\beta_{ \mathtt{i}\underline{\mathtt{j}},1}}{\beta_{\mathtt{i},2}}\right)^{\dim_{ \mathtt{B}}\eta(K)}.\] Proof.: First, enumerate \(\mathcal{Q}(\mathtt{i},\underline{\mathtt{j}})=\{Q_{1},\ldots,Q_{m}\}\), and for each \(i=1,\ldots,m\), there is a unique \(\underline{\mathtt{k}}_{i}\) so that \(Q_{i}=P(\mathtt{i},\underline{\mathtt{k}}_{i})\). Moreover, \(\{\underline{\mathtt{k}}_{1},\ldots,\underline{\mathtt{k}}_{m}\}\) forms a section relative to \([\underline{\mathtt{j}}]\), so that writing \(s=\dim_{\mathtt{B}}\eta(K)\) and recalling that \(\eta(K)\) is the attractor of a self-similar IFS satisfying the open set condition, \[\sum_{i=1}^{m}\beta_{\underline{\mathtt{k}}_{i},1}^{s}=\beta_{\underline{ \mathtt{j}},1}^{s}. \tag{4.6}\] But \(\beta_{\mathtt{i}\underline{\mathtt{k}}_{i},1}\approx\beta_{\mathtt{i},2}\) since each \(Q_{i}\) is an approximate square, which gives the desired result. 
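The identity (4.6) is the only property of \(\eta(K)\) used in the count above: at the exponent \(s=\dim_{\mathrm{B}}\eta(K)\), the weights \(\underline{\mathtt{k}}\mapsto\beta_{\underline{\mathtt{k}},1}^{s}\) are additive across sections of the projected symbolic space. The following sketch illustrates this numerically; it is not part of the argument, the contraction ratios are invented for the example, and the helper `moran_dimension` is only a convenience for solving the Moran equation by bisection.

```python
from itertools import product

# Invented contraction ratios for the projected (column) IFS {S_{i,1}}.
ratios = [0.5, 0.3, 0.1]

def moran_dimension(rs, tol=1e-12):
    """Solve sum(r**s for r in rs) = 1 for s by bisection on [0, 1]."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if sum(r ** mid for r in rs) > 1:
            lo = mid  # sum still too large, so the dimension exceeds mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

s = moran_dimension(ratios)  # plays the role of dim_B eta(K)

# Fix a projected word j (here a single letter) and take the section of all
# words of length |j| + 2 refining it; (4.6) says their s-weights add up to
# the s-weight of j itself.
beta_j = ratios[0]
section = [beta_j * ra * rb for ra, rb in product(ratios, repeat=2)]
print(sum(b ** s for b in section), beta_j ** s)  # equal up to the tolerance
```

Applied to the section \(\{\underline{\mathtt{k}}_{1},\ldots,\underline{\mathtt{k}}_{m}\}\) appearing in the proof, this additivity is exactly what converts the count \(\#\mathcal{Q}(\mathtt{i},\underline{\mathtt{j}})\) into the ratio \((\beta_{\mathtt{i}\underline{\mathtt{j}},1}/\beta_{\mathtt{i},2})^{\dim_{\mathrm{B}}\eta(K)}\).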
In the next result, we provide good covers for cylinder sets using approximate squares with diameter bounded above by the height of the corresponding rectangle. Heuristically, a cylinder set can first be decomposed into approximate squares using Lemma 4.8, and an "average" approximate square itself has box dimension the same as the box dimension of \(K\). To make this notion precise, we simply reverse the order: we begin with a good cover for the box dimension of \(K\), and take the image under some word \(\mathtt{i}\). The image of each approximate square is a wide pseudo-cylinder, so we may apply Lemma 4.8 to complete the bound. **Lemma 4.9**.: _Let \(\epsilon>0\) be arbitrary. Suppose \(\mathtt{i}\in\mathcal{I}^{*}\) and \(0<r\leq\beta_{\mathtt{i},2}\). Then_ \[\left(\frac{\beta_{\mathtt{i},2}}{r}\right)^{\dim_{\mathrm{B}}K- \epsilon}\cdot\left(\frac{\beta_{\mathtt{i},1}}{\beta_{\mathtt{i},2}}\right)^ {\dim_{\mathrm{B}}\eta(K)} \lesssim_{\epsilon}\#\{Q\in\mathcal{S}(r):Q\subset[\mathtt{i}]\}\] \[\lesssim_{\epsilon}\left(\frac{\beta_{\mathtt{i},2}}{r}\right)^ {\dim_{\mathrm{B}}K+\epsilon}\cdot\left(\frac{\beta_{\mathtt{i},1}}{\beta_{ \mathtt{i},2}}\right)^{\dim_{\mathrm{B}}\eta(K)}\] Proof.: First note that, since distinct approximate squares have small overlaps, it is proven in [10, Lemma 2.1] that \[\dim_{\mathrm{B}}K=\lim_{\delta\to 0}\frac{\log\#\mathcal{S}(\delta)}{\log(1/ \delta)}.\] Now let \(\epsilon>0\) be arbitrary and fix \(\mathtt{i}\in\mathcal{I}^{*}\) and \(0<r\leq\beta_{\mathtt{i},2}\). Write \(\delta=r/\beta_{\mathtt{i},2}\) so \[(1/\delta)^{\dim_{\mathrm{B}}K-\epsilon}\lesssim_{\epsilon}\#\mathcal{S}( \delta)\lesssim_{\epsilon}(1/\delta)^{\dim_{\mathrm{B}}K+\epsilon}.\] Enumerate \(\mathcal{S}(\delta)=\{Q_{1},\ldots,Q_{m}\}\) and for each \(i=1,\ldots,m\), we may write \(Q_{i}=P(\mathtt{j}_{i},\underline{\mathtt{k}}_{i})\) for some \(\mathtt{j}_{i}\in\mathcal{I}^{*}\) and \(\underline{\mathtt{k}}_{i}\in\eta(\mathcal{I}^{*})\). Then for each \(i=1,\ldots,m\), \[\mathcal{Q}(\mathtt{i}\mathtt{j}_{i},\underline{\mathtt{k}}_{i})\subset \mathcal{S}(r)\qquad\text{and}\qquad[\mathtt{i}]=\bigcup_{i=1}^{m}\bigcup_{Q \in\mathcal{Q}(\mathtt{j}_{i},\underline{\mathtt{k}}_{i})}Q.\] Thus by Lemma 4.8 applied to each pseudo-cylinder \(P(\mathtt{i}\mathtt{j}_{i},\underline{\mathtt{k}}_{i})\), since \(Q_{i}\) is an approximate square and \(\beta_{\mathtt{j}_{i},\underline{\mathtt{k}}_{i},1}\approx\beta_{\mathtt{j}_ {i},2}\), \[\#\{Q\in\mathcal{S}(r):Q\subset[\mathtt{i}]\} =\sum_{i=1}^{m}\#\mathcal{Q}(\mathtt{i}\mathtt{j}_{i},\underline{ \mathtt{k}}_{i})\] \[\approx\sum_{i=1}^{m}\left(\frac{\beta_{\mathtt{i}\mathtt{j}_{i} \underline{\mathtt{k}}_{i},1}}{\beta_{\mathtt{j}_{i},2}}\right)^{\dim_{ \mathrm{B}}\eta(K)}\] \[\lesssim_{\epsilon}\left(\frac{\beta_{\mathtt{i},2}}{r}\right)^ {\dim_{\mathrm{B}}K+\epsilon}\cdot\left(\frac{\beta_{\mathtt{i},1}}{\beta_{ \mathtt{i},2}}\right)^{\dim_{\mathrm{B}}\eta(K)}\] with the lower bound following identically, as claimed. To conclude our collection of preliminary lemmas, we use the Assouad dimension of the symbolic fibre \(K_{\eta(\gamma)}\) to control the size of "column sections" of approximate squares. We note that the word \(\mathtt{i}\) appears in the hypothesis but not the conclusion: this is simply to clarify the application of this lemma when it is used in Proposition 4.11. **Lemma 4.10**.: _Let \(\epsilon>0\) and \(\gamma\in\Omega\) be arbitrary. Suppose \(k\in\mathbb{N}\) and \(Q_{k}(\gamma)=P(\mathtt{i},\underline{\mathtt{j}})\). 
Let \(\mathcal{A}\) be any section of \(\mathcal{I}^{*}\) satisfying \(\eta^{-1}(\underline{\mathtt{j}})\prec\mathcal{A}\). Then_ \[\sum_{\mathtt{k}\in\mathcal{A}}\beta_{\mathtt{k},2}^{\dim_{\mathtt{A}}K_{\eta (\gamma)}+\epsilon}\lesssim_{\epsilon,\gamma}1.\] Proof.: The assumption on the section \(\mathcal{A}\) precisely means that \(\{\mathtt{ik}:\mathtt{k}\in\mathcal{A}\}\) is a section relative to \(\mathtt{i}\) in \(\mathcal{F}_{\eta(\gamma)}\). Then by Proposition 3.6 applied to the metric space \(\Omega(\eta(\gamma))\) (recalling that \(\dim_{\mathtt{A}}\Omega(\eta(\gamma))=\dim_{\mathtt{A}}K_{\eta(\gamma)}\) from Corollary 3.3), since \(\mathcal{A}\) is a section, \[\sum_{\mathtt{k}\in\mathcal{A}}\left(\frac{\beta_{\mathtt{k},2}}{\beta_{ \mathtt{i},2}}\right)^{\dim_{\mathtt{A}}K_{\eta(\gamma)}+\epsilon}\lesssim _{\epsilon,\gamma}1.\] Cancelling the \(\beta_{\mathtt{i},2}\) gives the desired result. Finally, by combining the various counts that we have established earlier in this section, we are now in position to compute the upper bound for the pointwise Assouad dimension. Let us begin with an intuitive explanation for this proof. Since \(x\) is regular, we will reduce the problem of computing covers of balls to computing covers for approximate squares. Thus suppose we fix an approximate square \(P(\mathtt{i},\underline{\mathtt{j}})\), which is the union of cylinders \(\{\mathtt{ik}:\eta(\mathtt{k})=\underline{\mathtt{j}}\}\). We wish to cover this set with approximate squares in \(\mathcal{S}(r)\). There are two cases. First, the rectangle corresponding to the cylinder \(\mathtt{ik}\) has height greater than or equal to \(r\), in which case we simply keep this cylinder and obtain a good bound for the cover using Lemma 4.9: this is the family \(\mathcal{A}_{1}\). Otherwise, the rectangle is shorter, and we instead want to cover groups of cylinders simultaneously. Such groups are precisely wide pseudo-cylinders corresponding to elements of \(\mathcal{A}_{2}\) and have height \(r\), which we can then cover using Lemma 4.8. These covers are then combined using Lemma 4.10. **Proposition 4.11**.: _Let \(K\) be a Gatzouras-Lalley carpet and suppose \(x=\pi(\gamma)\in K\). Then_ \[\dim_{\mathtt{A}}(K,x)\geq\max\{\dim_{\mathtt{B}}K,\dim_{\mathrm{H}}\eta(K)+ \dim_{\mathtt{A}}K_{\eta(\gamma)}\}\] _with equality if \(x\) is regular._ Proof.: Recalling the general lower bound proven in Proposition 4.7, we must show that \[\dim_{\mathtt{A}}(K,x)\leq\max\{\dim_{\mathtt{B}}K,\dim_{\mathrm{H}}\eta(K)+ \dim_{\mathtt{A}}K_{\eta(\gamma)}\}\eqqcolon\zeta\] when \(x\) is regular. We obtain this bound by a direct covering argument. Let \(\epsilon>0\): we will prove that for any \(k\in\mathbb{N}\) and approximate square \(Q_{k}(\gamma)=P(\mathtt{i},\underline{\mathtt{j}})\), if \(0<r\leq\beta_{\mathtt{i},2}\), then \[\#\{Q\in\mathcal{S}(r):Q\subset Q_{k}(\gamma)\}\lesssim_{\epsilon}\left(\frac{ \beta_{\mathtt{i},2}}{r}\right)^{\zeta+\epsilon}. \tag{4.7}\] Assuming this, since \(x\) is regular, for any ball \(B(x,R)\), there is an \(R^{\prime}\lesssim R\) and at most two approximate squares \(Q_{1},Q_{2}\in\mathcal{S}(R^{\prime})\) lying in the same column such that \(B(x,R)\subset\pi(Q_{1})\cup\pi(Q_{2})\). Since \(Q_{1},Q_{2}\) lie in the same column, \(Q_{j}=Q_{k_{j}}(\gamma_{j})\) for some \(k_{j}\in\mathbb{N}\) where \(\eta(\gamma_{j})=\eta(\gamma)\). Moreover, if \(0<r\leq R\) and \(Q\in\mathcal{S}(r)\) is arbitrary, then \(\operatorname{diam}\pi(Q)\lesssim r\). 
Thus (4.7) immediately gives the correct bound, up to a constant factor, for \(N_{r}(B(x,R)\cap K)\). It remains to prove (4.7). Fix an approximate square \(Q_{k}(\gamma)=P(\mathtt{i},\mathtt{j})\) and suppose \(0<r\leq\beta_{\mathtt{i},2}\) is arbitrary. First, let \[\mathcal{A}_{0}=\eta^{-1}(\mathtt{j})\vee\mathcal{F}_{\eta(\gamma)}(r/\beta_ {\mathtt{i},2})\qquad\text{and}\qquad\mathcal{A}=\{\mathtt{i}\mathtt{k}: \mathtt{k}\in\mathcal{A}_{0}\}.\] We then decompose \(\mathcal{A}=\mathcal{A}_{1}\cup\mathcal{A}_{2}\), where \[\mathcal{A}_{1}=\mathcal{A}\setminus\mathcal{F}_{\eta(\gamma)}(r)\qquad \text{and}\qquad\mathcal{A}_{2}=\mathcal{A}\cap\mathcal{F}_{\eta(\gamma)}(r).\] First, suppose \(\mathtt{i}\mathtt{k}\in\mathcal{A}_{1}\). Then, by definition, \(\beta_{\mathtt{i}\mathtt{k},2}>r\) which, by definition of \(\mathcal{A}_{0}\), implies that \(\eta(\mathtt{k})=\mathtt{j}\). Thus by Lemma 4.9 applied to the cylinder \(\mathtt{i}\mathtt{k}\) and scale \(r\), since \(\dim_{\mathrm{B}}\eta(K)\leq\dim_{\mathrm{B}}K\) and \(\beta_{\mathtt{i}\mathtt{k},1}\approx\beta_{\mathtt{i},2}\), \[\#\{Q\in\mathcal{S}(r):Q\subset[\mathtt{i}\mathtt{k}]\}\lesssim_{\epsilon} \left(\frac{\beta_{\mathtt{i}\mathtt{k},2}}{r}\right)^{\dim_{\mathrm{B}}K+ \epsilon}\left(\frac{1}{\beta_{\mathtt{k},2}}\right)^{\dim_{\mathrm{B}}\eta(K )}. \tag{4.8}\] Otherwise, suppose \(\mathtt{i}\mathtt{k}\in\mathcal{A}_{2}\subset\mathcal{F}_{\eta(\gamma)}(r)\). Since \(\eta^{-1}(\mathtt{j})\preccurlyeq\mathcal{A}_{0}\), there is a \(\mathtt{j}^{\prime}\) so that \(\eta(\mathtt{k})\mathtt{j}^{\prime}=\mathtt{j}\). Thus choice of \(\mathtt{j}^{\prime}\) ensures that \[P(\mathtt{i}\mathtt{k},\mathtt{j}^{\prime})=Q_{k}(\gamma)\cap[\mathtt{i} \mathtt{k}].\] Thus by Lemma 4.8 and since \(Q_{k}(\gamma)=P(\mathtt{i},\mathtt{j})\) is an approximate square, \[\#\{Q\in\mathcal{S}(r):Q\subset Q_{k}(\gamma)\cap[\mathtt{i}\mathtt{k}]\}=\# \mathcal{Q}(\mathtt{i}\mathtt{k},\mathtt{j}^{\prime}\approx\left(\frac{1}{ \beta_{\mathtt{k},2}}\right)^{\dim_{\mathrm{B}}\eta(K)}. 
\tag{4.9}\] Thus by applying (4.8) and (4.9) to the respective components and recalling that \(\beta_{\mathtt{i}\mathtt{k},2}\approx r\) whenever \(\mathtt{i}\mathtt{k}\in\mathcal{A}_{2}\), \[\#\{Q \in\mathcal{S}(r):Q\subset Q_{k}(\gamma)\}\] \[=\sum_{\mathtt{i}\mathtt{k}\in\mathcal{A}_{1}}\#\{Q\in\mathcal{S} (r):Q\subset[\mathtt{i}\mathtt{k}]\}+\sum_{\mathtt{i}\mathtt{k}\in\mathcal{A }_{2}}\#\{Q\in\mathcal{S}(r):Q\subset Q_{k}(\gamma)\cap[\mathtt{i}\mathtt{k}]\}\] \[\lesssim_{\epsilon}\sum_{\mathtt{i}\mathtt{k}\in\mathcal{A}_{1}} \left(\frac{\beta_{\mathtt{i}\mathtt{k},2}}{r}\right)^{\dim_{\mathrm{B}}K+ \epsilon}\left(\frac{1}{\beta_{\mathtt{k},2}}\right)^{\dim_{\mathrm{B}}\eta(K )}+\sum_{\mathtt{i}\mathtt{k}\in\mathcal{A}_{2}}\left(\frac{1}{\beta_{ \mathtt{k},2}}\right)^{\dim_{\mathrm{B}}\eta(K)}\] \[\lesssim\sum_{\mathtt{i}\mathtt{k}\in\mathcal{A}_{1}}\left(\frac {\beta_{\mathtt{i}\mathtt{k},2}}{r}\right)^{\zeta+\epsilon}\left(\frac{\beta_{ \mathtt{i},2}}{\beta_{\mathtt{i}\mathtt{k},2}}\right)^{\zeta+\epsilon}\beta_{ \mathtt{k},2}^{\dim_{\mathrm{A}}K_{\eta(\gamma)}+\epsilon}+\sum_{\mathtt{i} \mathtt{k}\in\mathcal{A}_{2}}\left(\frac{\beta_{\mathtt{i},2}}{r}\right)^{ \zeta+\epsilon}\beta_{\mathtt{k},2}^{\dim_{\mathrm{A}}K_{\eta(\gamma)}+\epsilon}\] \[=\left(\frac{\beta_{\mathtt{i},2}}{r}\right)^{\zeta+\epsilon} \sum_{\mathtt{k}\in\mathcal{A}_{0}}\beta_{\mathtt{k},2}^{\dim_{\mathrm{A}}K_{ \eta(\gamma)}+\epsilon}\] \[\lesssim\left(\frac{\beta_{\mathtt{i},2}}{r}\right)^{\zeta+\epsilon}\] where in the last line follows by Lemma 4.10 applied to the section \(\mathcal{A}_{0}\). Thus (4.7) follows, and therefore our desired result. ### Dimensions of level sets of pointwise Assouad dimension Given an index \(i\in\mathcal{I}\), let \(\Phi_{\eta(i)}\) denote the IFS corresponding to the column containing the index \(i\), that is \[\Phi_{\eta(i)}=\{S_{j,2}:j\in\mathcal{I}\text{ and }\eta(j)=\eta(i)\}.\] Now given a word \(\gamma=(i_{n})_{n=1}^{\infty}\in\Omega\), recall that the symbolic slice \(K_{\eta(\gamma)}\) is the non-autonomous self-similar set associated with the IFS \(\{\Phi_{\eta(i_{n})}\}_{n=1}^{\infty}\). Since there are only finitely many choices for the \(\Phi_{\eta(i_{n})}\), the hypotheses of Theorem 3.7 are automatically satisfied and \[\dim_{\mathrm{A}}K_{\eta(\gamma)}=\lim_{m\to\infty}\sup_{n\in\mathbb{N}}\theta _{\eta(\gamma)}(n,m)\] where \[\sum_{(j_{1},\ldots,j_{m})\in\eta^{-1}(\eta(i_{1},\ldots,i_{n}))}\prod_{k=1}^ {m}\beta_{j_{k},2}^{\theta_{\eta(\gamma)}(n,m)}=1.\] We now obtain our main formula for the pointwise Assouad dimension of arbitrary points in Gatzouras-Lalley carpets. **Theorem 4.12**.: _Let \(K\) be a Gatzouras-Lalley carpet. Then for every \(x\in K\) with \(x=\pi(\gamma)\), there is an \(F\in\operatorname{Tan}(K,x)\) with \(\mathcal{H}^{s}(F)\gtrsim 1\) where_ \[s \coloneqq\,\dim_{\mathrm{B}}\eta(K)+\dim_{\mathrm{A}}K_{\eta( \gamma)}\] \[=\,\dim_{\mathrm{B}}\eta(K)+\lim_{m\to\infty}\sup_{n\in\mathbb{N} }\theta_{\eta(\gamma)}(n,m)\] _In particular,_ \[\max\{\dim_{\mathrm{H}}F:F\in\operatorname{Tan}(K,x)\}\geq s\quad\text{and} \quad\dim_{\mathrm{A}}(K,x)\geq\max\{s,\dim_{\mathrm{B}}\eta(K)\}\] _where both inequalities are equalities if \(x\) is regular. 
In particular, if \(\eta(K)\) satisfies the strong separation condition then equality holds for all \(x\in K\)._ _Proof._ By Proposition 4.7, there is an \(F\in\operatorname{Tan}(K,x)\) so that \[\mathcal{H}^{\dim_{\mathrm{H}}\eta(K)+\dim_{\mathrm{A}}K_{\eta(\gamma)}}(F) \gtrsim 1.\] Moreover, \(\dim_{\mathrm{A}}K_{\eta(\gamma)}=\lim_{m\to\infty}\sup_{n\in\mathbb{N}} \theta_{\eta(\gamma)}(n,m)\) by Theorem 3.7. The formula for \(\dim_{\mathrm{A}}(K,x)\), including the case when \(x\) is regular, then follows by Proposition 4.11. If \(x\) is regular, it moreover follows from Proposition 4.6 that for any \(F\in\operatorname{Tan}(K,x)\), there is a similarity map \(h\) and a weak tangent \(E\in\operatorname{Tan}(K_{\eta(\gamma)})\) so that \(h(F)\subset\eta(K)\times E\). Since \(\dim_{\mathrm{B}}\eta(K)=\dim_{\mathrm{H}}\eta(K)\), \[\dim_{\mathrm{H}}F=\dim_{\mathrm{H}}h(F)\leq\dim_{\mathrm{B}}\eta(K)+\dim_{ \mathrm{H}}E\leq\dim_{\mathrm{B}}\eta(K)+\dim_{\mathrm{A}}K_{\eta(\gamma)}\] as required. Finally, we recall that if \(\eta(K)\) satisfies the strong separation condition, then each \(x\in K\) is regular by Lemma 4.4 (i). Our next goal is to prove that the set of pointwise Assouad dimensions forms an interval. The main observation required in the proof is a stability result for the expression \(\theta_{\eta(\gamma)}(n,m)\) when \(m\) is large. In order to facilitate the proof, we establish some notation. First suppose \(m\in\mathbb{N}\) and \(\mathtt{i}\in\mathcal{I}^{m}\). Define \(\phi_{\mathtt{i}}\colon[0,1]\to\mathbb{R}\) by \[\phi_{\mathtt{i}}(s)=\sum_{\begin{subarray}{c}\mathtt{j}\in\mathcal{I}^{n}\\ \eta(\mathtt{j})=\eta(\mathtt{i})\end{subarray}}\beta_{\mathtt{j},2}^{s}.\] Since \(\phi_{\mathtt{i}}\) is strictly decreasing with \(\phi_{\mathtt{i}}(0)>1\) and \(\phi_{\mathtt{i}}(1)<1\), there is a unique \(t(\mathtt{i})\) so that \(\phi_{\mathtt{i}}(t(\mathtt{i}))=1\). Note that \(0<s_{\min}\leq t(\mathtt{i})\leq s_{\max}<1\) where \(s_{\min}=\min\{t(i):i\in\mathcal{I}\}\) and \(s_{\max}=\max\{t(i):i\in\mathcal{I}\}\). Of course, the function \(t\) is chosen precisely so that \[\theta_{\eta(\gamma)}(n,m)=t(\gamma_{n+1},\ldots,\gamma_{n+m}).\] We can now prove, essentially, that the set of fibre dimensions form an interval. **Lemma 4.13**.: _Let \(K\) be a Gatzouras-Lalley carpet and suppose \(\dim_{\mathrm{L}}K<\alpha<\dim_{\mathrm{A}}K\). Then for all \(k_{0}\in\mathbb{N}\) sufficiently large, for all \(n\in\mathbb{N}\) there is \(\mathtt{i}_{n}\in\mathcal{B}_{k_{0}}^{n}\subset\mathcal{I}^{k_{0}n}\) satisfying_ \[\lim_{m\to\infty}\sup_{n\in\mathbb{N}}t(\mathtt{i}_{n+1}\cdots\mathtt{i}_{n+m} )=\alpha-\dim_{\mathrm{B}}\eta(K).\] _Proof._ Let \(\mathtt{i}\in\mathcal{I}^{m}\) and \(j\in\mathcal{I}\) be arbitrary. We first show that \(|t(\mathtt{i}j)-t(\mathtt{i})|\) converges to zero uniformly as \(m\) diverges to infinity. First, if \(j\in\mathcal{I}\) is arbitrary, then \[\phi_{\mathtt{i}j}(t(\mathtt{i}))=\sum_{\begin{subarray}{c}i\in\mathcal{I}\\ \eta(i)=\eta(j)\end{subarray}}\beta_{i,2}^{t(\mathtt{i})}\approx 1. \tag{4.10}\] On the other hand, \[\phi_{\mathtt{i}j}(t(\mathtt{i})+\epsilon)\leq\phi_{\mathtt{i}j}(t(\mathtt{i }))\cdot(\min\{\beta_{i,2}:i\in\mathcal{I}\})^{m\epsilon}\] so that, if \(t(\mathtt{i})+\epsilon\geq t(\mathtt{i}j)\), applying (4.10), we observe that \((\min\{\beta_{i,2}:i\in\mathcal{I}\})^{m\epsilon}\approx 1\) which forces \(\epsilon\approx 1/m\). The same argument also holds for the lower bound. 
Iterating the above bound, we have therefore proven that for any \(m,k\in\mathbb{N}\), \(\mathtt{i}\in\mathcal{I}^{m}\), and \(\mathtt{j}\in\mathcal{I}^{k}\), \[|t(\mathtt{i}\mathtt{j})-t(\mathtt{i})|\lesssim\frac{k}{m}. \tag{4.11}\] We now proceed with our general construction. First, fixing any interior word \(\mathtt{j}\in\mathcal{I}^{*}\) and \(i\in\mathcal{I}\) so that \(\dim_{\mathrm{A}}K=\dim_{\mathrm{B}}\eta(K)+t(\mathtt{i})\), \[\dim_{\mathrm{A}}K=\dim_{\mathrm{B}}\eta(K)+\lim_{k\to\infty}t(\mathtt{j}^{k});\] and similarly for the lower dimension. Thus for all sufficiently large \(k_{0}\), there are words \(\mathtt{j}_{L},\mathtt{j}_{A}\in\mathcal{B}_{k_{0}}\) so that \[\dim_{\mathrm{B}}\eta(K)+t(\mathtt{j}_{L})<\alpha<\dim_{\mathrm{B}}\eta(K)+t( \mathtt{j}_{A}).\] We inductively construct \((\mathtt{j}_{L,k},\mathtt{j}_{A,k})_{k=1}^{\infty}\) so that, for each \(k\in\mathbb{N}\), 1. \(\alpha-\dim_{\mathrm{B}}\eta(K)-\frac{1}{k}\leq t(\mathrm{j}_{L,k})\leq\alpha-\dim _{\mathrm{B}}\eta(K)\), 2. \(\alpha-\dim_{\mathrm{B}}\eta(K)\leq t(\mathrm{j}_{A,k})\leq\dim_{\mathrm{A}}K+ \dim_{\mathrm{B}}\eta(K)+\frac{1}{k}\), 3. \(\mathrm{j}_{L,k},\mathrm{j}_{A,k}\in\mathcal{B}_{k_{0}}^{*}\) and, for \(k\geq 2\), \(\mathrm{j}_{L,k},\mathrm{j}_{A,k}\in\{\mathrm{j}_{L,k-1},\mathrm{j}_{A,k-1}\}^ {*}\), and 4. \(|\mathrm{j}_{L,k}|\geq k\) and \(|\mathrm{j}_{A,k}|\geq k\). First, set \(\mathrm{j}_{L,1}=\mathrm{j}_{L}\) and \(\mathrm{j}_{A,1}=\mathrm{j}_{A}\) which clearly satisfy the desired properties. Now suppose we have constructed \(\mathrm{j}_{L,k}\) and \(\mathrm{j}_{A,k}\). Since \(t(\mathrm{j}_{A,k})\geq\alpha-\dim_{\mathrm{B}}\eta(K)\), for any \(m\in\mathbb{N}\), \[\lim_{n\to\infty}t(\mathrm{j}_{L,k}^{m}\mathrm{j}_{A,k}^{n})\geq\dim_{ \mathrm{B}}\eta(K)-\alpha.\] Moreover, \(t(\mathrm{j}_{L,k}^{m})\leq\dim_{\mathrm{B}}\eta(K)-\alpha\) and, by taking \(m\geq k\) sufficiently large and applying (4.11), for all \(n\in\mathbb{N}\) sufficiently large, \[|t(\mathrm{j}_{L,k}^{m}\mathrm{j}_{A,k}^{n+1})-t(\mathrm{j}_{L,k}^{m} \mathrm{j}_{A,k}^{n})|\leq\frac{1}{k+2}<\frac{1}{k+1}.\] Combining these two observations, there is a pair \(m,n\) so that \(\mathrm{j}_{A,k+1}\coloneqq\mathrm{j}_{L,k}^{m}\mathrm{j}_{A,k}^{n}\in \mathcal{B}_{k_{0}}^{*}\) satisfies conditions 1 and 4. The identical argument gives \(\mathrm{j}_{L,k+1}\in\mathcal{B}_{k_{0}}^{*}\) satisfying 2, as claimed. To complete the proof, since \(\mathrm{j}_{L,k}\in\mathcal{B}_{k_{0}}^{*}\) for all \(k\in\mathbb{N}\), we may identify the sequence \((\mathrm{j}_{L,k})_{k=1}^{\infty}\) with a sequence \((\mathrm{i}_{n})_{n=1}^{\infty}\) where \(\mathrm{i}_{n}\in\mathcal{B}_{k_{0}}\) for all \(n\in\mathbb{N}\). It immediately follows from 4 that \[\lim_{m\to\infty}\sup_{n\in\mathbb{N}}t(\mathrm{i}_{n+1}\cdots\mathrm{i}_{n+m} )\geq\alpha-\dim_{\mathrm{B}}\eta(K).\] To establish the converse bound, it suffices to show for every \(k\in\mathbb{N}\) that \[\lim_{m\to\infty}\sup_{n\in\mathbb{N}}t(\mathrm{i}_{n+1}\cdots\mathrm{i}_{n+m })\leq\alpha-\dim_{\mathrm{B}}\eta(K)+\frac{1}{k}.\] By 3, for all \(k\in\mathbb{N}\), there is a \(K\in\mathbb{N}\) so that for all \(n\geq K\), \(\mathrm{i}_{n}\in\{\mathrm{j}_{L,k},\mathrm{j}_{A,k}\}^{*}\). For each \(\ell\in\mathbb{N}\), write \(\mathrm{k}_{\ell}=\mathrm{i}_{K\ell+1}\cdots\mathrm{i}_{K(\ell+1)}\) and note that \(\mathrm{k}_{\ell}\in\{\mathrm{j}_{L,k},\mathrm{j}_{A,k}\}^{*}\) for all \(\ell\in\mathbb{N}\). 
Thus for any \(n,m\in\mathbb{N}\), \[t(\mathrm{k}_{\ell+1}\cdots\mathrm{k}_{\ell+m})\leq\frac{1}{m}\sum_{i=1}^{m}t (\mathrm{k}_{\ell+i})\leq\alpha-\dim_{\mathrm{B}}\eta(K)+\frac{1}{k}.\] But by Lemma 3.4 and the subadditivity property of \(t\) established in Theorem 3.7, \[\lim_{m\to\infty}\sup_{n\in\mathbb{N}}t(\mathrm{i}_{n+1}\cdots\mathrm{i}_{n+m })=\lim_{m\to\infty}\sup_{n\in\mathbb{N}}t(\mathrm{k}_{n+1}\cdots\mathrm{k}_{n +m})\] which gives the claim. To conclude this section, we assemble the results proven in the prior two sections to obtain our main result. **Theorem 4.14**.: _Let \(K\) be a Gatzouras-Lalley carpet. Then for any \(\dim_{\mathrm{B}}K\leq\alpha\leq\dim_{\mathrm{A}}K\),_ \[\dim_{\mathrm{H}}\{x\in K:\dim_{\mathrm{A}}(K,x)=\alpha\}=\dim_{\mathrm{H}}K. \tag{4.12}\] _Otherwise, if \(\alpha\notin[\dim_{\rm B}K,\dim_{\rm A}K]\), then \(\{x\in K:\dim_{\rm A}(K,x)=\alpha\}=\emptyset\). However,_ \[\mathcal{H}^{\dim_{\rm H}K}\big{(}\{x\in K:\dim_{\rm A}(K,x)\neq\dim_{\rm A}K \}\big{)}=0. \tag{4.13}\] _Proof._ Note that if \(\dim_{\rm B}K=\dim_{\rm A}K\), then \(\dim_{\rm A}(K,x)=\dim_{\rm A}K\) for all \(x\in K\) and the results are clearly true. Thus we may assume that \(\dim_{\rm H}K<\dim_{\rm B}K<\dim_{\rm A}K\). We first establish (4.12). Let \(\epsilon>0\) be arbitrary and \(\dim_{\rm B}K\leq\alpha\leq\dim_{\rm A}K\). Apply Proposition 4.5 and get \(k\in\mathbb{N}\) and a family \(\mathcal{J}\subset\mathcal{B}_{k}\) with corresponding attractor \(K_{\epsilon}\) satisfying \(\dim_{\rm H}K-\epsilon\leq\dim_{\rm H}K_{\epsilon}=\dim_{\rm A}K_{\epsilon}\) and \(\dim_{\rm B}\eta(K)-\epsilon\leq\dim_{\rm B}\eta(K_{\epsilon})\). If \(\alpha<\dim_{\rm A}K\), iterating the system if necessary, by Lemma 4.13 get a sequence \((\mathtt{i}_{n})_{n=1}^{\infty}\) with \(\mathtt{i}_{n}\in\mathcal{B}_{k}\) for all \(n\in\mathbb{N}\) and moreover \[\lim_{m\to\infty}\sup_{n\in\mathbb{N}}t(\mathtt{i}_{n+1}\cdots\mathtt{i}_{n+m })=\alpha-\dim_{\rm B}\eta(K). \tag{4.14}\] If instead \(\alpha=\dim_{\rm A}K\), instead simply take \(\mathtt{i}_{n}=i_{0}^{k}\) where \(i_{0}\in\mathcal{I}\) is any word such that \(\dim_{\rm A}K=\dim_{\rm B}\eta(K)+t(i_{0})\). Note that \(t(\mathtt{j})=\dim_{\rm A}K_{\epsilon}-\dim_{\rm B}\eta(K_{\epsilon})\) for any \(\mathtt{j}\in\mathcal{J}\). Thus by taking \(\epsilon\) to be sufficiently small, we may assume that \(t(\mathtt{j})\leq\alpha-\dim_{\rm B}\eta(K)\) for all \(\mathtt{j}\in\mathcal{J}\). Now, let \((N_{k})_{k=1}^{\infty}\) be a sequence of natural numbers satisfying \(\lim_{k\to\infty}N_{k}/k=\infty\) and write \[\Omega_{0}=\prod_{k=1}^{\infty}\mathcal{J}^{N_{k}}\times\{\mathtt{i}_{1}\} \times\cdots\times\{\mathtt{i}_{k}\}.\] By taking each \(N_{k}\) to be sufficiently large, we may ensure that \(\dim_{\rm H}\pi(\Omega_{0})\geq\dim_{\rm H}K_{\epsilon}-\epsilon\). Fix \(\gamma\in\Omega_{0}\): it remains to verify that \(\dim_{\rm A}(K,\pi(\gamma))=\alpha\). Since \(\gamma\in\mathcal{B}_{k}^{\mathbb{N}}\), \(\pi(\gamma)\) is a regular point of \(K\) by Lemma 4.4 (ii). By passing to the subsystem induced by \(\mathcal{B}_{k}\subset\mathcal{I}^{k}\), write \(\gamma=(\mathtt{k}_{k})_{k=1}^{\infty}\) where \(\mathtt{k}_{k}\in\mathcal{B}_{k}\). 
Thus by Theorem 4.12 and Lemma 3.4, \[\dim_{\rm A}(K,x)=\max\bigl{\{}\dim_{\rm B}K,\lim_{m\to\infty}\sup_{n\in \mathbb{N}}t(\mathtt{k}_{n+1}\cdots\mathtt{k}_{n+m})\bigr{\}}.\] Since \(\mathtt{i}_{1}\cdots\mathtt{i}_{m}\) appears as a subword of of \(\gamma\) for arbitrarily large \(m\), by (4.14) and since \(\alpha>\dim_{\rm B}K\), it follows that \(\dim_{\rm A}(K,x)\geq\alpha\). We now obtain the upper bound. Let \(\epsilon>0\) be arbitrary. By (4.14), there is an \(\ell_{0}\in\mathbb{N}\) so that whenever \(\ell\geq\ell_{0}\), we have \(t(\mathtt{i}_{j+1}\cdots\mathtt{i}_{j+\ell})\leq\alpha-\dim_{\rm B}\eta(K)+\epsilon\). Let \(m\) be sufficiently large so that \(\ell_{0}/m\leq\epsilon\). Since \(\lim_{k\to\infty}N_{k}/k=\infty\), for all \(n\) sufficiently large, there is a \(j\in\mathbb{N}\) so that \[\mathtt{k}_{n+1}\cdots\mathtt{k}_{n+m}=\mathtt{j}_{1}\cdots\mathtt{j}_{m-\ell }\mathtt{i}_{j+1}\cdots\mathtt{i}_{j+\ell}.\] Thus for \(m,n\) sufficiently large, if \(\ell\geq\ell_{0}\), \[t(\mathtt{k}_{n+1}\cdots\mathtt{k}_{n+m}) \leq\frac{(m-\ell)\cdot t(\mathtt{j}_{1}\cdots\mathtt{j}_{m-\ell })+\ell\cdot t(\mathtt{i}_{j+1}\cdots\mathtt{i}_{j+\ell})}{m}\] \[\leq\frac{m-\ell}{m}\cdot(\alpha-\dim_{\rm B}\eta(K))+\frac{\ell} {m}(\alpha-\dim_{\rm B}\eta(K)+\epsilon)\] \[\leq\alpha-\dim_{\mathrm{B}}\eta(K)+\epsilon\] and similarly if \(\ell<\ell_{0}\), recalling that \(t(\mathtt{i}_{j+1}\cdots\mathtt{i}_{j+\ell})\leq 1\), \[t(\mathtt{k}_{n+1}\cdots\mathtt{k}_{n+m})\leq\alpha-\dim_{\mathrm{B}}\eta(K)+ \frac{\ell_{0}}{m}\leq\alpha-\dim_{\mathrm{B}}\eta(K)+\epsilon.\] Therefore \[\limsup_{m\to\infty}\limsup_{n\to\infty}t(\mathtt{k}_{n+1}\cdots\mathtt{k}_{n +m})\leq\alpha-\dim_{\mathrm{B}}\eta(K)+\epsilon\] and since \(\epsilon>0\) was arbitrary, \[\lim_{m\to\infty}\sup_{n\in\mathbb{N}}t(\mathtt{k}_{n+1}\cdots\mathtt{k}_{n+m })=\limsup_{m\to\infty}\limsup_{n\to\infty}t(\mathtt{k}_{n+1}\cdots\mathtt{k}_ {n+m})\leq\alpha-\dim_{\mathrm{B}}\eta(K)\] so that \(\dim_{\mathrm{A}}(K,x)\leq\alpha\), as claimed. Of course, we recall as well that \(\dim_{\mathrm{B}}K\leq\dim_{\mathrm{A}}(K,x)\leq\dim_{\mathrm{A}}K\) by Proposition 2.13. We finally consider the points \(x\) such that \(\dim_{\mathrm{A}}(K,x)<\dim_{\mathrm{A}}K\). Let \(i_{0}\in\mathcal{I}\) be such that \(\dim_{\mathrm{A}}K=\dim_{\mathrm{B}}\eta(K)+t(i_{0})\). Let \[\mathcal{J}_{M}\coloneqq\{(i_{1},\ldots,i_{M})\in\mathcal{I}^{M}:(i_{1}, \ldots,i_{M})\neq(i_{0},\ldots,i_{0})\}\] have attractor \(K_{M}\subset K\). Since \(\mathcal{J}_{M}\) is a proper subsystem, \(\dim_{\mathrm{H}}K_{M}<\dim_{\mathrm{H}}K\) so that \(\mathcal{H}^{\dim_{\mathrm{H}}K}(K_{M})=0\). Now let \(x\in K\) have \(\dim_{\mathrm{A}}(K,x)<\dim_{\mathrm{A}}K\). Suppose \(x=\pi(\gamma)\) where \(\gamma=(i_{n})_{n=1}^{\infty}\), so that \[\dim_{\mathrm{A}}(K,x)\geq\max\left\{\dim_{\mathrm{B}}K,\dim_{\mathrm{B}} \eta(K)+\lim_{m\to\infty}\sup_{n\in\mathbb{N}}t(i_{n+1},\ldots,i_{n+m})\right\}.\] Since \(\dim_{\mathrm{A}}(K,x)<\dim_{\mathrm{A}}K\), \[\lim_{m\to\infty}\sup_{n\in\mathbb{N}}t(i_{n+1},\ldots,i_{n+m})<t(i_{0}).\] In particular, there is a constant \(M\) so that \(\gamma\) does not contain \(i_{0}^{M}\) as a subword. Thus \(x\in K_{M}\) for some \(M\) and therefore \[\mathcal{H}^{\dim_{\mathrm{H}}K}\left(\{x\in K:\dim_{\mathrm{A}}(K,x)<\dim_{ \mathrm{A}}K\}\right)\leq\sum_{M=1}^{\infty}\mathcal{H}^{\dim_{\mathrm{H}}K}( K_{M})=0\] as required. 
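The mechanism behind Theorem 4.14 is easiest to see in a small computation of the quantity \(\lim_{m\to\infty}\sup_{n\in\mathbb{N}}t(\gamma_{n+1}\cdots\gamma_{n+m})\). The sketch below is purely illustrative: the two columns and their vertical contraction ratios are invented, and the helper `t` simply solves the Moran-type equation defining \(t(\mathtt{i})\) by bisection. Words confined to one column return that column's fibre dimension, mixed words return intermediate values, and prescribing the letter frequencies of long windows, as in the proofs of Lemma 4.13 and Theorem 4.14, realises any value in between.

```python
# Two invented columns of a toy Gatzouras-Lalley carpet, listed by the
# vertical contraction ratios beta_{j,2} of the maps in each column.
columns = {"A": [0.3, 0.3], "B": [0.45, 0.1]}

def t(word, tol=1e-12):
    """Solve prod_k sum_{j in column(word_k)} beta_{j,2}**s = 1 for s.

    This is the equation defining t(i) (equivalently theta_{eta(gamma)}(n, m)
    for the corresponding window) when the k-th letter lies in column word_k.
    """
    def F(s):
        val = 1.0
        for c in word:
            val *= sum(r ** s for r in columns[c])
        return val
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if F(mid) > 1 else (lo, mid)
    return 0.5 * (lo + hi)

print(t("A" * 8))   # the column-A fibre dimension, roughly 0.576
print(t("B" * 8))   # the column-B fibre dimension, roughly 0.490
print(t("AB" * 4))  # an alternating window gives an intermediate value
```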
**Remark 4.15**.: We recall that if \(K\) is a Gatzouras-Lalley carpet, then \(\mathcal{H}^{\dim_{\mathrm{H}}K}(K)>0\), with \(\mathcal{H}^{\dim_{\mathrm{H}}K}(K)<\infty\) if and only if \(K\) is Ahlfors regular; see [11]. In particular, the positivity of the Hausdorff measure guarantees that the claim (4.13) in Theorem 4.14 is not vacuous; and, if the Hausdorff measure is finite, Theorem 4.14 is trivial. ## 5 Tangent structure and dimension of Baranski carpets ### Dimensions and decompositions of Baranski carpets Recall the definition of the Baranski carpet and basic notation from SS4.1. Suppose \(K\) is a Baranski carpet and \(\gamma\in\Omega\) is arbitrary. For each \(k\in\mathbb{N}\), we define a probability vector \(\boldsymbol{\xi}_{k}(\gamma)\) by the rule \[\boldsymbol{\xi}_{k}(\gamma)_{i}=\frac{\#\{1\leq\ell\leq k:\gamma_{\ell}=i\}} {k}\quad\text{for each }i\in\mathcal{I}.\] In other words, \(\boldsymbol{\xi}_{k}(\gamma)\) is the distribution of the letter frequencies in the first \(k\) letters of \(\gamma\). We then define \[\Gamma_{k}(\gamma)=\frac{\chi_{1}(\boldsymbol{\xi}_{k}(\gamma))}{\chi_{2}( \boldsymbol{\xi}_{k}(\gamma))}.\] The function \(\Gamma_{k}\) induces a partition \(\Omega=\Omega_{0}\cup\Omega_{1}\cup\Omega_{2}\) by \[\Omega_{0} =\{\gamma:\liminf_{k\to\infty}\Gamma_{k}(\gamma)\leq 1\leq \limsup_{k\to\infty}\Gamma_{k}(\gamma)\}\] \[\Omega_{1} =\{\gamma:\limsup_{k\to\infty}\Gamma_{k}(\gamma)<1\}\] \[\Omega_{2} =\{\gamma:1<\liminf_{k\to\infty}\Gamma_{k}(\gamma)\}.\] We now recall the dimensional formula for a general Baranski carpet. First, we decompose \(\mathcal{P}=\mathcal{P}_{1}\cup\mathcal{P}_{2}\) where \[\mathcal{P}_{j}=\{\boldsymbol{w}\in\mathcal{P}:\chi_{j}(\boldsymbol{w})\leq \chi_{j^{\prime}}(\boldsymbol{w})\}.\] Now given a measure \(\boldsymbol{w}\in\mathcal{P}_{j}\), recall [1, Corollary 5.2] which states that \[\dim_{\mathrm{H}}\pi_{*}\boldsymbol{w}^{\mathrm{N}}=\frac{H(\eta_{j}( \boldsymbol{w}))}{\chi_{j}(\boldsymbol{w})}+\frac{H(\boldsymbol{w})-H(\eta_{j} (\boldsymbol{w}))}{\chi_{j^{\prime}}(\boldsymbol{w})}.\] Here and for the remainder of this document, for notational simplicity, given \(j=1\) we write \(j^{\prime}=2\) and given \(j=2\) we write \(j^{\prime}=1\). We also introduce some notation for symbolic slices both in the horizontal and vertical directions. Given \(\gamma\in\Omega\) and \(j\in 1,2\), let \(\theta_{\eta_{j}(\gamma),j}\) be defined by the rule \[\sum_{(j_{1},\ldots,j_{m})\in\eta_{j}^{-1}(\eta_{j}(i_{1},\ldots,i_{n}))}\prod _{k=1}^{m}\beta_{j_{k},j}^{\theta_{\eta_{j}(\gamma),j}(n,m)}=1.\] The value \(\theta_{\eta(\gamma)}=\theta_{\eta_{1}(\gamma),1}\) was defined previously in the context of a Gatzouras-Lalley carpet. As is the case with a Gatzouras-Lalley carpet, if we denote by \(K_{\eta_{j}(\gamma),j}\) the non-autonomous self-similar set associated with the non-autonomous self-similar IFS \(\{S_{i,j}:i\in\eta^{-1}(\eta(\gamma_{k}))\}_{k=1}^{\infty}\), then \[\dim_{\mathrm{A}}K_{\eta_{j}(\gamma),j}=\lim_{m\to\infty}\sup_{n\in\mathbb{N}} \theta_{\eta_{j}(\gamma),j}(n,m).\] Assuming \(\eta_{1}(K)\) (resp. \(\eta_{2}(K)\)) satisfies the SSC, then \(K_{\eta_{1}(\gamma),1}\) (resp. \(K_{\eta_{2}(\gamma),2}\)) is precisely the intersection of \(K\) with the vertical (resp. horizontal) line containing \(x=\pi(\gamma)\). We now recall [15, Theorem 2.12] concerning the Assouad dimension and the main result of [1] on the Hausdorff dimensions of Baranski carpets. 
While this result is not stated explicitly, the relevant details can be obtained directly by inspecting the proof. **Proposition 5.1** ([1, 15]).: _Let \(K\) be a Baranski carpet such that \(\Omega_{1}\neq\emptyset\) and \(\Omega_{2}\neq\emptyset\). Then:_ 1. _For each_ \(j=1,2\)_,_ \[\dim_{\mathrm{H}}\pi(\Omega_{0}\cup\Omega_{j})\leq d_{j}\] _where_ \[d_{j}=\max_{\mathbf{w}\in\mathcal{P}_{j}}\left(\frac{H(\eta_{j}(\mathbf{w}))}{\chi_{j} (\mathbf{w})}+\frac{H(\mathbf{w})-H(\eta_{j}(\mathbf{w}))}{\chi_{j^{\prime}}(\mathbf{w})} \right).\] _In particular,_ \(\dim_{\mathrm{H}}K=\max\{d_{1},d_{2}\}\)_._ 2. _We have_ \[\dim_{\mathrm{A}}K=\max_{j=1,2}\left\{\dim_{\mathrm{B}}\eta_{j}(K)+t_{j}\right\}\] _where_ \[t_{j}=\max_{\underline{\ell}\in\eta_{j}(\mathcal{I})}t_{j}(\underline{\ell})\] _and_ \(t_{j}(\underline{\ell})\) _is the unique solution to the equation_ \[\sum_{i\in\eta_{j}^{-1}(\underline{\ell})}\beta_{i,j^{\prime}}^{t_{j}(\underline{\ell})}=1.\]

### Pointwise Assouad dimension along uniformly contracting sequences

In this section, we state a generalization of our results on Gatzouras-Lalley carpets to Baranski carpets, with the caveat that we restrict our attention to points coded by sequences which contract uniformly in one direction. The arguments are similar to the Gatzouras-Lalley case so we only include detail when the proofs diverge. Handling more general sequences would result in a more complicated formula for the pointwise Assouad dimension depending on the scales at which the contraction ratio is greater in one direction than the other, which we will not treat here. We begin by defining the analogues of pseudo-cylinders and approximate squares. Fix \(j=1,2\). Suppose \(\mathtt{i}\in\mathcal{I}^{k}\) and \(\underline{\mathtt{j}}\in\eta_{j}(\mathcal{I}^{\ell})\). We then write \[P_{j}(\mathtt{i},\underline{\mathtt{j}})=\{\gamma=(i_{n})_{n=1}^{\infty}\in \Omega:(i_{1},\ldots,i_{k})=\mathtt{i}\text{ and }\eta_{j}(i_{k+1},\ldots,i_{k+\ell})=\underline{ \mathtt{j}}\}.\] We call a pseudo-cylinder _wide_ if \(\beta_{\mathtt{i}\underline{\mathtt{j}},j}\geq\beta_{\mathtt{i},j^{\prime}}\); otherwise, we call the pseudo-cylinder _tall_. Now let \(\gamma\in\Omega\) be arbitrary and let \(k\in\mathbb{N}\). Let \(j\) be chosen so that \(\beta_{\gamma|_{k},j}\geq\beta_{\gamma|_{k},j^{\prime}}\). We then let \(L_{k,j}(\gamma)\geq k\) denote the maximal integer so that \[\beta_{\gamma|_{L_{k,j}(\gamma)},j}\geq\beta_{\gamma|_{k},j^{\prime}}.\] Write \(\gamma|_{L_{k,j}(\gamma)}=\mathtt{ij}\) and define the approximate square \[Q_{k}(\gamma)=P_{j}(\mathtt{i},\eta_{j}(\mathtt{j})).\] Similarly to the Gatzouras-Lalley case, the collection of approximate squares forms a metric tree when equipped with the valuation \(\rho(P_{j}(\mathtt{i},\eta_{j}(\mathtt{j})))=\beta_{\mathtt{i},j^{\prime}}\). Note that for each approximate square \(Q\), there is a unique choice for \(j\) except precisely when \(\beta_{\gamma|_{k},j}=\beta_{\gamma|_{k},j^{\prime}}\), so indeed \(\rho\) is well-defined. Similarly as in the Gatzouras-Lalley case, given a pseudo-cylinder \(P_{j}(\mathtt{i},\underline{\mathtt{j}})\), we write \[\mathcal{Q}_{j}(\mathtt{i},\underline{\mathtt{j}})=\max\{\mathcal{A}: \mathcal{A}\text{ is a section of }\mathcal{S}\text{ relative to }P_{j}(\mathtt{i},\underline{ \mathtt{j}})\}\] where \(\mathcal{S}\) is the collection of all approximate squares and the maximum is with respect to the partial ordering on sections. That the maximum always exists follows from the properties of the join. 
In the case when the pseudo-cylinder is wide, this coincides precisely with the definition in the Gatzouras-Lalley case. However, unlike in the Gatzouras-Lalley case, we will also have to handle tall pseudo-cylinders, which have a more complex structure. This additional structure is handled in the following covering lemma. **Lemma 5.2**.: 1. _Let_ \(P_{j}(\mathtt{i},\underline{\mathtt{j}})\) _be a wide pseudo-cylinder. Then_ \[\#\mathcal{Q}_{j}(\mathtt{i},\underline{\mathtt{j}})\approx\left(\frac{\beta_ {\mathtt{ij},j}}{\beta_{\mathtt{i},j^{\prime}}}\right)^{\dim_{\mathrm{B}}\eta_ {j}(K)}.\] 2. _Let_ \(P_{j}(\mathtt{i},\underline{\mathtt{j}})\) _be a tall pseudo-cylinder. Then_ \[\#\mathcal{Q}_{j}(\mathtt{i},\underline{\mathtt{j}})\lesssim\left(\frac{\beta_ {\mathtt{i},j^{\prime}}}{\beta_{\mathtt{ij},j}}\right)^{\dim_{\mathrm{B}}\eta _{j^{\prime}}(K)}.\] 3. _Let_ \(\epsilon>0\) _be arbitrary. Suppose_ \(\mathtt{i}\in\mathcal{I}^{*}\) _and let_ \(j\) _be chosen so that_ \(\beta_{\mathtt{i},j^{\prime}}\leq\beta_{\mathtt{i},j}\)_. Let_ \(0<r\leq\beta_{\mathtt{i},j}\)_. Then_ \[\#\{Q\in\mathcal{S}(r):Q\subset[\mathtt{i}]\}\lesssim_{\epsilon}\left(\frac{ \beta_{\mathtt{i},j^{\prime}}}{r}\right)^{\dim_{\mathrm{B}}K+\epsilon}\cdot \left(\frac{\beta_{\mathtt{i},j}}{\beta_{\mathtt{i},j^{\prime}}}\right)^{ \dim_{\mathrm{B}}\eta_{j}(K)}.\] 4. _Let_ \(\epsilon>0\) _and_ \(\gamma\in\Omega\) _be arbitrary. Suppose_ \(k\in\mathbb{N}\) _and_ \(j=1,2\) _are such that_ \(Q_{k}(\gamma)=P_{j}(\mathtt{i},\underline{\mathtt{j}})\)_. Let_ \(\mathcal{A}\) _be any section of_ \(\mathcal{I}^{*}\) _satisfying_ \(\eta_{j}^{-1}(\underline{\mathtt{j}})\prec\mathcal{A}\)_. Then_ \[\sum_{\mathtt{k}\in\mathcal{A}}\beta_{\mathtt{k},j^{\prime}}^{\dim_{\mathrm{A} }K_{\eta_{j}(\gamma),j}+\epsilon}\lesssim_{\epsilon,\gamma}1.\] Proof.: The proof of (i) is identical to the proof given in Lemma 4.8 and similarly the proof of (iv) is identical to that of Lemma 4.10. We now prove (ii). In order to do this, we must understand the structure of the pseudo-cylinder \(P_{j}(\mathtt{i},\underline{\mathtt{j}})\). Heuristically, when (for instance) \(j=1\), \(P_{j}(\mathtt{i},\underline{\mathtt{j}})\) is a union of cylinders which fall into one of two types: those which are tall, and those which are wide. If a cylinder is tall, we apply (i) in the opposite direction to cover it with approximate squares, and if a cylinder is wide, we group nearby cylinders together to form approximate squares. We then combine these counts using the slice dimension \(t_{j}\), which is bounded above by \(\dim_{\mathrm{B}}\eta_{j^{\prime}}(K)\). 
Write \(\mathcal{A}=\eta_{j}^{-1}(\underline{\mathtt{j}})\) and partition \(\mathcal{A}=\mathcal{A}_{1}\cup\mathcal{A}_{2}\) where \[\mathcal{A}_{1}=\{\mathtt{k}\in\mathcal{A}:\beta_{\mathtt{i}\mathtt{k},j^{ \prime}}\geq\beta_{\mathtt{i}\underline{\mathtt{j}},j}\}\qquad\text{and} \qquad\mathcal{A}_{2}=\mathcal{A}\setminus\mathcal{A}_{1}.\] First, for \(\mathtt{k}\in\mathcal{A}_{1}\), note that \(P_{j^{\prime}}(\mathtt{i}\mathtt{k},\varnothing)\) is a wide pseudo-cylinder and we set \[\mathcal{B}_{1}=\bigcup_{\mathtt{k}\in\mathcal{A}_{1}}\mathcal{Q}_{j^{\prime }}(\mathtt{i}\mathtt{k},\varnothing).\] By applying (i), since \(\beta_{\mathtt{i}\mathtt{k},j}\approx\beta_{\mathtt{i}\underline{\mathtt{j}},j}\), \[\#\mathcal{B}_{1}=\sum_{\mathtt{k}\in\mathcal{A}_{1}}\#\mathcal{Q}_{j^{\prime }}(\mathtt{i}\mathtt{k},\varnothing)\approx\sum_{\mathtt{k}\in\mathcal{A}_{1}} \left(\frac{\beta_{\mathtt{i}\mathtt{k},j^{\prime}}}{\beta_{\mathtt{i} \underline{\mathtt{j}},j}}\right)^{\dim_{\mathrm{B}}\eta_{j^{\prime}}(K)} \tag{5.1}\] Otherwise if \(\mathtt{k}\in\mathcal{A}_{2}\), let \(1_{1}(\mathtt{k})\) denote the prefix of \(\mathtt{k}\) of maximal length so that \(\beta_{\mathtt{i}1_{1}(\mathtt{k}),j^{\prime}}\geq\beta_{\mathtt{i}\underline {\mathtt{j}},j}\). Writing \(\mathtt{k}=1_{1}(\mathtt{k})1_{2}(\mathtt{k})\), this choice guarantees that \[\mathcal{B}(\mathtt{k})\coloneqq P_{j}(\mathtt{i}1_{1}(\mathtt{k}),\eta_{j}( 1_{2}(\mathtt{k})))\] is the unique approximate square contained in \([\mathtt{i}]\) containing \([\mathtt{i}\mathtt{k}]\). Finally, let \[\mathcal{A}_{2}^{\prime}=\{1_{1}(\mathtt{k}):\mathtt{k}\in\mathcal{A}_{2}\} \qquad\text{and}\qquad\mathcal{B}_{2}=\{\mathcal{B}(\mathtt{k}):\mathtt{k}\in \mathcal{A}_{2}\}.\] We then note that, since \(\beta_{\mathtt{i}1,j^{\prime}}\approx\beta_{\mathtt{i}\underline{\mathtt{j}},j}\) by the choice of \(1_{1}(\mathtt{k})\), \[\#\mathcal{B}_{2}\approx\sum_{1\in\mathcal{A}_{2}^{\prime}}\left(\frac{\beta_{ \mathtt{i}1,j^{\prime}}}{\beta_{\mathtt{i}\underline{\mathtt{j}},j}}\right)^ {\dim_{\mathrm{B}}\eta_{j^{\prime}}(K)} \tag{5.2}\] To conclude, observe that \(\mathcal{Q}_{j}(\mathtt{i},\underline{\mathtt{j}})=\mathcal{B}_{1}\cup \mathcal{B}_{2}\) and applying (5.1) and (5.2), \[\#\mathcal{Q}_{j}(\mathtt{i},\underline{\mathtt{j}}) =\#\mathcal{B}_{1}+\#\mathcal{B}_{2}\] \[\lesssim\sum_{\mathtt{k}\in\mathcal{A}_{1}}\left(\frac{\beta_{ \mathtt{i}\mathtt{k},j^{\prime}}}{\beta_{\mathtt{i}\underline{\mathtt{j}},j}} \right)^{\dim_{\mathrm{B}}\eta_{j^{\prime}}(K)}+\sum_{1\in\mathcal{A}_{2}^{ \prime}}\left(\frac{\beta_{\mathtt{i}\underline{\mathtt{1}},j^{\prime}}}{ \beta_{\mathtt{i}\underline{\mathtt{j}},j}}\right)^{\dim_{\mathrm{B}}\eta_{j ^{\prime}}(K)}\] \[=\left(\frac{\beta_{\mathtt{i},j^{\prime}}}{\beta_{\mathtt{i} \underline{\mathtt{j}},j}}\right)^{\dim_{\mathrm{B}}\eta_{j^{\prime}}(K)}\sum _{\mathtt{k}\in\mathcal{A}_{1}\cup\mathcal{A}_{2}^{\prime}}\beta_{\mathtt{k},j^{\prime}}^{\dim_{\mathrm{B}}\eta_{j^{\prime}}(K)}\] \[\leq\left(\frac{\beta_{\mathtt{i},j^{\prime}}}{\beta_{\mathtt{i}\underline{ \mathtt{j}},j}}\right)^{\dim_{\mathrm{B}}\eta_{j^{\prime}}(K)}\] where the last line follows since \(\mathcal{A}_{1}\cup\mathcal{A}_{2}^{\prime}\succcurlyeq\eta_{j}^{-1}(\underline {\mathtt{j}})\) is a section and \(\dim_{\mathrm{B}}\eta_{j^{\prime}}(K)\geq t_{j}(\underline{\mathtt{j}})\) where \[\sum_{\mathtt{k}\in\mathcal{A}_{1}\cup\mathcal{A}_{2}^{\prime}}\beta_{ \mathtt{k},\underline{\mathtt{j}}^{\prime}}^{t_{j}(\underline{\mathtt{j}})}=1.\] Finally, we combine 
the bounds given in (i) and (ii) with a similar argument to the proof of Lemma 4.9 to obtain (iii). Let \(\epsilon>0\) be arbitrary and fix \(\mathtt{i}\in\mathcal{I}^{*}\) and \(j=0,1\) so that \(0<r\leq\beta_{\mathtt{i},j^{\prime}}\leq\beta_{\mathtt{i},j}\). Write \(\delta=r/\beta_{\mathtt{i},j^{\prime}}\) so, recalling the proof of [1, Theorem B], \[\#\mathcal{S}(\delta)\lesssim_{\epsilon}(1/\delta)^{\dim_{\mathrm{B}}K+ \epsilon}.\] Now enumerate \[\mathcal{S}(\delta)=\{Q_{1,j},\ldots,Q_{m_{j},j}\}\cup\{Q_{1,j^{\prime}}, \ldots,Q_{m_{j^{\prime}},j^{\prime}}\}\] where for each \(z=j,j^{\prime}\) and \(1\leq i\leq m_{z}\), \[Q_{i,z}=P_{z}(\mathtt{j}_{i,z},\mathtt{k}_{i,z})\] for some \(\mathtt{j}_{i,z}\in\mathcal{I}^{*}\) and \(\underline{\mathtt{k}}_{i,z}\in\eta_{z}(\mathcal{I}^{*})\). Observe that each \(P_{z}(\mathtt{i}\mathtt{j}_{i,z},\underline{\mathtt{k}}_{i,z})\) is a wide pseudo-cylinder if \(z=j\) and a tall pseudo-cylinder if \(z=j^{\prime}\). Thus we may complete the proof in the same way as Lemma 4.9, by applying (i) to the wide pseudo-cylinders and (ii) to the tall pseudo-cylinders. We can now prove the following formulas for the pointwise Assouad dimension. **Proposition 5.3**.: _Let \(K\) be a Baranski carpet. Then for each \(j=1,2\), if \(\eta_{j}(K)\) satisfies the SSC, for all \(\gamma\in\Omega_{j}\) and \(x=\pi(\gamma)\),_ \[\dim_{\mathrm{A}}(K,x)=\max\{\dim_{\mathrm{B}}K,\dim_{\mathrm{B}}\eta_{j}(K)+ \dim_{\mathrm{A}}K_{\eta_{j}(\gamma),j}\}\] _and_ \[\max\{\dim_{\mathrm{H}}F:F\in\operatorname{Tan}(K,x)\}=\dim_{\mathrm{B}}\eta_ {j}(K)+\dim_{\mathrm{A}}K_{\eta_{j}(\gamma),j}.\] _Furthermore,_ \[\dim_{\mathrm{A}}K_{\eta_{j}(\gamma),j}=\lim_{m\to\infty}\sup_{n\in\mathbb{N} }\theta_{\eta_{j}(\gamma),j}(n,m)\leq\max_{\underline{\ell}\in\eta_{j}( \mathcal{I})}t_{j}(\underline{\ell}).\] Proof.: If \(\gamma\in\Omega_{j}\), by definition there is a constant \(\kappa\in(0,1)\) so that \[\frac{\beta_{\gamma\downarrow_{k},j^{\prime}}}{\beta_{\gamma\downarrow_{k},j} }\lesssim\kappa^{n}.\] In particular, there is a constant \(\kappa^{\prime}\in(0,1)\) so that each maximal cylinder \([\mathtt{i}]\) contained in \(Q_{k}(\gamma)\) has \(\beta_{\mathtt{i},j^{\prime}}/\beta_{\mathtt{i},j}\lesssim(\kappa^{\prime})^{k}\), which converges to zero. Thus the same proof as given in Proposition 4.11 but instead applying Lemma 5.2 in place of the analogous bounds for Gatzouras-Lalley carpets gives that \[\dim_{\mathrm{A}}(K,x)\leq\max\{\dim_{\mathrm{B}}K,\dim_{\mathrm{B}}\eta_{j}( K)+\dim_{\mathrm{A}}K_{\eta_{j}(\gamma),j}\}.\] Similarly, the same proof as Proposition 4.7 shows that \[\max\{\dim_{\mathrm{H}}F:F\in\mathrm{Tan}(K,x)\}=\dim_{\mathrm{B}}\eta_{j}(K)+ \dim_{\mathrm{A}}K_{\eta_{j}(\gamma),j}.\] Finally, using the same subadditivity properties of \(\theta_{\eta(\gamma),j}(n,m)\) established at the beginning of the proof of Theorem 3.7, \[\lim_{m\to\infty}\sup_{n\in\mathbb{N}}\theta_{\eta_{j}(\gamma),j}(n,m)\leq \max_{\ell\in\eta_{j}(\mathcal{I})}t_{j}(\ell).\] as required. ### Baranski carpets with few large tangents In contrast to Gatzouras-Lalley carpets, the analogue of Theorem 4.14 need not hold for Baranski carpets. We first give a precise characterization of when a Baranski carpet has few large tangents. Fix the definitions of \(t_{j}\) and \(d_{j}\) from Proposition 5.1. **Theorem 5.4**.: _Let \(K\) be a Baranski carpet such that \(\eta_{j}(K)\) satisfies the SSC and \(\Omega_{j}\neq\emptyset\) for \(j=1,2\). 
Suppose for one of \(j=1,2\), \(d_{j}<d_{j^{\prime}}\) and \(\dim_{\mathrm{B}}\eta_{j}(K)+t_{j}>\dim_{\mathrm{B}}\eta_{j^{\prime}}(K)+t_{j^ {\prime}}\). Then_ \[\dim_{\mathrm{H}}\{x\in K:\dim_{\mathrm{A}}(K,x)=\dim_{\mathrm{A}}K\}<\dim_{ \mathrm{H}}K.\] Proof.: Suppose \(d_{1}<d_{2}\) and \(\dim_{\mathrm{B}}\eta_{1}(K)+t_{1}>\dim_{\mathrm{B}}\eta_{2}(K)+t_{2}\) (the opposite case follows analogously). By Proposition 5.1, \(\dim_{\mathrm{H}}K=d_{2}\) and \(\dim_{\mathrm{A}}K=\dim_{\mathrm{B}}\eta_{1}(K)+t_{1}\). In particular, by Proposition 5.3, if \(\dim_{\mathrm{A}}(K,x)=\dim_{\mathrm{A}}K=\dim_{\mathrm{B}}\eta_{1}(K)+t_{1}\), then necessarily \(x=\pi(\gamma)\) where \(\gamma\in\Omega_{0}\cup\Omega_{1}\). But \(\dim_{\mathrm{H}}\pi(\Omega_{0}\cup\Omega_{1})=d_{1}<d_{2}=\dim_{\mathrm{H}}K\), as required. **Remark 5.5**.: In the context of Theorem 5.4, one can in fact prove that the following are equivalent: 1. \(\dim_{\mathrm{H}}\{x\in K:\dim_{\mathrm{A}}(K,x)=\dim_{\mathrm{A}}K\}<\dim_{ \mathrm{H}}K\). 2. \(\dim_{\mathrm{H}}\{x\in K:\exists F\in\mathrm{Tan}(K,x)\text{ s.t. }\dim_{\mathrm{H}}F=\dim_{\mathrm{A}}K\}<\dim_{\mathrm{H}}K\). 3. For one of \(j=1,2\), \(d_{j}<d_{j^{\prime}}\) and \(\dim_{\mathrm{B}}\eta_{j}(K)+t_{j}>\dim_{\mathrm{B}}\eta_{j^{\prime}}(K)+t_{j^ {\prime}}\). Such a proof follows similarly to the Gatzouras-Lalley case with appropriate modifications to restrict attention only to the family \(\Omega_{1}\) or \(\Omega_{2}\). The only additional observation required is that [11, Lemma 4.3] also holds in the Baranski case and the uniform subsystem can be chosen so the maps are contracting strictly in direction \(j\) and the dimension of the corresponding attractor is arbitrarily close to \(d_{j}\). In particular, if one of the above equivalent conditions hold and without loss of generality \(d_{1}>d_{2}\) and \(\dim_{\mathrm{B}}\eta_{1}(K)+t_{1}<\dim_{\mathrm{B}}\eta_{2}(K)+t_{2}\), then the Hausdorff dimension of the level set \(\varphi(\alpha)=\dim_{\mathrm{H}}\{x\in K:\dim_{\mathrm{A}}(K,x)=\alpha\}\) is given by the piecewise formula \[\varphi(\alpha)=\begin{cases}\dim_{\mathrm{H}}K&:\dim_{\mathrm{B}}K\leq\alpha \leq\dim_{\mathrm{B}}\eta_{1}(K)+t_{1}\\ d_{2}&:\dim_{\mathrm{B}}\eta_{1}(K)+t_{1}<\alpha\leq\dim_{\mathrm{A}}K.\end{cases}\] We leave the remaining details to the curious reader. With Theorem 5.4 in hand, we can now give an explicit example of a Baranski carpet which has few large tangents. **Corollary 5.6**.: _There is a Baranski carpet \(K\) such that_ \[\dim_{\mathrm{H}}\{x\in K:\dim_{\mathrm{A}}(K,x)=\dim_{\mathrm{A}}K\}<\dim_{ \mathrm{H}}K.\] _Proof._ Fix some \(\delta\in[0,1/6)\) and define parameters \(\beta=1/4-\delta\), \(\alpha_{1}=1/3-\delta\), and \(\alpha_{2}=1/6-\delta\). Now define the families of maps \[\Phi_{1} =\{(x,y)\mapsto(\alpha_{1}x,\beta y+i\beta):i=0,\ldots,3\}\] \[\Phi_{2,a} =\{(x,y)\mapsto(\alpha_{2}x+\alpha_{1}+j\alpha_{2},\beta y+i \beta):j=0,1;i=0,1\}\] \[\Phi_{2,b} =\{(x,y)\mapsto(\alpha_{2}x+\alpha_{1}+j\alpha_{2},\beta y+i \beta):j=3,4;i=2,3\}\] and then set \[\Phi_{2}=\Phi_{2,a}\cup\Phi_{2,b}\qquad\text{and}\qquad\Phi=\Phi_{1}\cup\Phi_{ 2,a}\cup\Phi_{2,b}.\] We abuse notation and use functions and indices interchangeably. Now \(\Phi\) is a Baranski IFS with three columns corresponding to \(\Phi_{1}\), \(\Phi_{2,a}\), and \(\Phi_{2,b}\). This carpet is conjugate to the carpet generated by the maps depicted in Figure 2b. Note that if \(\delta>0\), both projected IFSs satisfy the SSC. 
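The remainder of the proof reduces to maximising two explicit one-variable functions and solving a single Moran-type equation. The sketch below is only a numerical sanity check of the constants quoted in the computation that follows; it is not part of the argument, takes \(\delta=0\), and uses a crude grid search together with bisection.

```python
from math import log

# delta = 0 in the construction above
alpha1, alpha2, beta = 1 / 3, 1 / 6, 1 / 4

def H(p):  # two-block entropy: mass p on Phi_2, mass 1 - p on Phi_1
    return 0.0 if p in (0.0, 1.0) else -p * log(p) - (1 - p) * log(1 - p)

def chi1(p):  # horizontal Lyapunov exponent of the vector z(p)
    return -p * log(alpha2) - (1 - p) * log(alpha1)

def D1(p):
    return H(p) / chi1(p)

def D2(p):
    return log(4) / (-log(beta)) + (H(p) - log(4)) / chi1(p)

grid = [k / 10**5 for k in range(1, 10**5)]
print(max(D1(p) for p in grid))  # about 0.489536
print(max(D2(p) for p in grid))  # about 0.529533

# slice equation (1/3)**s + 2*(1/6)**s = 1 for the value s quoted below
lo, hi = 0.0, 1.0
while hi - lo > 1e-12:
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if (1 / 3) ** mid + 2 * (1 / 6) ** mid > 1 else (lo, mid)
print(0.5 * (lo + hi))  # about 0.72263
```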
We now simplify the dimensional expression in Proposition 5.1 (ii) for our specific system. First, for \(\mathbf{w}\in\mathcal{P}\), set \(p=\sum_{i\in\Phi_{2}}\mathbf{w}_{i}\). Note that \(\chi_{1}(\mathbf{w})=-p\log\alpha_{2}-(1-p)\log\alpha_{1}\) and \(\chi_{2}(\mathbf{w})=-\log\beta\) depend only on \(p\). But since entropy and projected entropy are maximized uniquely by uniform vectors, defining the vector \(\mathbf{z}(p)\in\mathcal{P}\) by \[\mathbf{z}(p)_{i}=\begin{cases}\frac{1-p}{4}:i\in\Phi_{1}\\ \frac{p}{8}:i\in\Phi_{2}\end{cases}\] we necessarily have \[\frac{H(\eta_{1}(\mathbf{w}))}{\chi_{1}(\mathbf{w})}+\frac{H(\mathbf{w})-H( \eta_{1}(\mathbf{w}))}{\chi_{2}(\mathbf{w})} \leq\frac{H(\eta_{1}(\mathbf{z}(p)))}{\chi_{1}(\mathbf{z}(p))}+\frac{H( \mathbf{z}(p))-H(\eta_{1}(\mathbf{z}(p)))}{\chi_{2}(\mathbf{z}(p))}\] \[=\frac{-p\log p-(1-p)\log(1-p)}{-p\log\alpha_{2}-(1-p)\log\alpha_ {1}}\] \[=:D_{1}(p)\] and \[\frac{H(\eta_{2}(\mathbf{w}))}{\chi_{2}(\mathbf{w})}+\frac{H(\mathbf{w})-H(\eta_{ 2}(\mathbf{w}))}{\chi_{1}(\mathbf{w})} \leq\frac{H(\eta_{2}(\mathbf{z}(p)))}{\chi_{2}(\mathbf{z}(p))}+\frac{H(\bm {z}(p))-H(\eta_{2}(\mathbf{z}(p)))}{\chi_{1}(\mathbf{z}(p))}\] \[=\frac{\log 4}{-\log\beta}+\frac{-p\log p-(1-p)\log(1-p)-\log 4}{-p \log\alpha_{2}-(1-p)\log\alpha_{1}}\] \[=:D_{2}(p).\] Moreover, writing \(p_{0}=\frac{\log\alpha_{1}-\log\beta}{\log\alpha_{1}-\log\alpha_{2}}\), \(\mathbf{z}(p)\in\mathcal{P}_{1}\) if and only if \(p\in[0,p_{0}]\) and \(\mathbf{z}(p)\in\mathcal{P}_{2}\) if and only if \(p\in[p_{0},1]\). We thus observe that \[\dim_{\mathrm{H}}K=\sup_{p\in[0,1]}D(p)\qquad\text{where}\qquad D(p)=\begin{cases} D_{1}(p)&:0\leq p\leq p_{0}\\ D_{2}(p)&:p_{0}\leq p\leq 1\end{cases}.\] Now, a manual computation directly shows that, substituting \(\delta=0\), \[\sup_{p\in[0,1]}D_{1}(p)\approx 0.489536\qquad\qquad\sup_{p\in[0,1]}D_{2}(p) \approx 0.529533\] and moreover the maximum of \(D_{2}(p)\) is attained at a value \(p_{2}\in(p_{0},1)\). Thus for all \(\delta\) sufficiently close to \(0\), since all the respective quantities are continuous functions of \(\delta\), there is a value \(p_{2}\in(p_{0},1)\) so that \[d_{1}\leq\sup_{p\in[0,1]}D_{1}(p)<\sup_{p\in[0,1]}D(p)=D_{2}(p_{2})=d_{2}.\] (In fact, one can show that this is the case for all \(\delta\in(0,1/6)\), but this is not required for the proof.) On the other hand, when \(\delta=0\), \(t_{1}=2\) whereas \(t_{2}=1+s<2\) where \(s\approx 0.72263\) is the unique solution to \[\left(\frac{1}{3}\right)^{s}+2\cdot\left(\frac{1}{6}\right)^{s}=1.\] Thus for all \(\delta\) sufficiently close to \(0\), the conditions for Theorem 5.4 are satisfied, as required. ## Acknowledgements The authors thank Roope Anttila for interesting discussions around the topics in this document. They also thank Amlan Banaji for some comments on a draft version of this document. AR was supported by EPSRC Grant EP/V520123/1 and the Natural Sciences and Engineering Research Council of Canada. This project began while AR visited AK at the University of Oulu, which was funded by a scholarship from the London Math Society and the Cecil King Foundation. He thanks the various members of the department for their hospitality during the visit.
2305.00571
$F$-pure thresholds and $F$-Volumes of some non principal ideals
We compute the $F$-pure threshold of non necessarily principal ideals which satisfy a geometric generic condition about their Newton polygons. We also contribute some evidence in favor of the conjectured equality between the $F$-pure threshold and the log canonical threshold of ideals. These results are obtained by generalizing the theory of splitting polytopes to the case of ideals. As applications of our results we obtain geometric lower bounds for the recently introduced $F$-volume of a collection of ideals.
Wágner Badilla-Céspedes, Edwin León-Cardenal
2023-04-30T20:37:41Z
http://arxiv.org/abs/2305.00571v2
# \(F\)-pure thresholds and \(F\)-volumes of some non principal ideals ###### Abstract. We compute the \(F\)-pure threshold of non necessarily principal ideals which satisfy a geometric generic condition about their Newton polygons. We also contribute some evidence in favor of the conjectured equality between the \(F\)-pure threshold and the log canonical threshold of ideals. These results are obtained by generalizing the theory of splitting polytopes to the case of ideals. As applications of our results we obtain geometric lower bounds for the recently introduced \(F\)-volume of a collection of ideals. Key words and phrases:\(F\)-pure thresholds, \(F\)-volume, non principal ideals, Newton polygons, splitting polytopes 2020 Mathematics Subject Classification: Primary 13A35; Secondary 14B05, 14M25 ## 1. Introduction The problem of quantifying singularities of an algebraic variety \(X\) at a given point \(x\) can be approached in several ways, but the first thing to consider is the characteristic of the ground field. Over fields of characteristic zero it is common to use the log canonical threshold of \(X\) at the point \(x\), \(\operatorname{lct}_{x}(X)\), which is a device that can be defined analytically (via integration when available, e.g. \(\mathbb{R}\) or \(\mathbb{C}\)) or algebro-geometrically (via resolution of singularities). Over fields of prime characteristic, the corresponding device is called the \(F\)-pure threshold of \(X\) at the point \(x\), it is denoted by \(\operatorname{fpt}_{x}(X)\) and its definition uses the Frobenius map. While defined in an entirely different way, it turns out that both invariants shares several properties and moreover it is known that \(\operatorname{fpt}_{x}(X)\) approaches to \(\operatorname{lct}_{x}(X)\) as long as \(p\) increases. Furthermore, there is a largely open conjecture relating in a subtle way the \(F\)-pure threshold of \(X\) to the log canonical threshold via reduction mod \(p\). For definiteness, consider a polynomial ring \(R\) over a field \(\Bbbk\) of positive characteristic \(p\). The \(F\)-pure threshold (at zero) of an ideal \(\mathfrak{a}\subseteq\mathfrak{m}\) is defined as the limit when \(e\to\infty\) of \(p^{-e}(\max\{c\in\mathbb{Z}_{\geq 0}\mid\mathfrak{a}^{c}\not\subseteq\mathfrak{m}^{[p^{ c}]}\})\). Here \(\mathfrak{m}\) denotes the maximal ideal of \(R\) and \(\mathfrak{m}^{[p^{c}]}\) is the Frobenius power of \(\mathfrak{m}\). This definition was given by Takagi and Watanabe [14] and since those days many properties and connections with other objects have been elucidated as it is summarized in the surveys [1, 14] and the references therein. However it is acknowledged by the community that the task of computing \(\operatorname{fpt}_{x}(X)\) is hard in general and few examples of explicit calculations are known, among them [13, 1, 15, 16, 17, 18, 19]. Moreover, it was just a few years ago that there appeared the first computational routine for compute some examples [1]. Another serious difficulty that emerges when it comes to compute explicit examples of \(F\)-pure thresholds is that the existing methods are designed mostly for the case of non-principal ideals, the situation in this regard is similar to the one for test ideals [10]. To the best of our knowledge the more general results in this direction are due to Shibuta and Takagi [14], who present several explicit calculations of \(F\)-pure thresholds of binomial ideals by using linear programming. 
In this work we attack the case of non-principal ideals by extending the techniques of splitting polytopes developed by Hernandez [11, 12]. The splitting polytope \(\mathcal{P}_{f}\) of a polynomial \(f\in R\) is a compact convex polytope that resembles the more classical notion of Newton polyhedron. In fact there are several useful connections between these two objects, allowing one to formulate some algebro-geometric conditions on the hypersurface defined by \(f\) in terms of \(\mathcal{P}_{f}\). By using the definition of the Newton polyhedron of a polynomial mapping \(\boldsymbol{f}=(f_{1},\ldots,f_{t})\) [14, 15] we define and study a splitting polytope for an ideal \(\mathfrak{a}=(f_{1},\ldots,f_{t})\). In our first main result we provide a geometric condition under which \(\operatorname{fpt}(\mathfrak{a})\) equals the \(F\)-pure threshold of its corresponding term ideal as defined in [1]. **Theorem A** (Theorem 3.5).: _Let \(\mathfrak{a}\subseteq\mathfrak{m}\) be a nonzero ideal with term ideal \(\mathfrak{a}^{\circ}\), and take a list \(\boldsymbol{f}=(f_{1},\ldots,f_{t})\) of minimal generators of \(\mathfrak{a}\). Suppose that \(\mathcal{P}_{\boldsymbol{f}}\) has a unique maximal point \(\rho=(\rho_{1},\ldots,\rho_{t})\) with \(\rho_{i}\in\mathbb{R}^{l_{i}}\), and set for \(i\in\{1,\ldots,t\}\)_ \[S_{i}=\sup\left\{\ell\in\mathbb{Z}_{\geq 0}\mid\sum_{j_{i}=1}^{l_{i}}\rho_{i,j_{i}}^{(e)}\leq p-1\quad\text{for every}\quad 0\leq e\leq\ell\right\}.\] _If \(I\) denotes the set of indices for which \(S_{i}\) is finite, then the following assertions hold._ 1. _If_ \(I=\emptyset\)_, then_ \(\operatorname{fpt}(\mathfrak{a})=\operatorname{fpt}(\mathfrak{a}^{\circ})\)_._ 2. _If_ \(I\neq\emptyset\) _and_ \(r=\operatorname{Card}(I)\)_, then_ \[\operatorname{fpt}(\mathfrak{a})\geq\sum_{i\in\{1,\ldots,t\}\setminus I}|\rho_{i}|+\sum_{i\in I}|\langle\rho_{i}\rangle_{S}|+\frac{r}{p^{S}},\] _where_ \(S=\min_{i\in I}S_{i}\)_._ In our second main result we contribute several cases in favor of a famous conjecture of Mustata, Takagi, and Watanabe [13, Conjecture 3.6] asserting that there are infinitely many primes \(p\) for which the \(F\)-pure threshold of the reduction mod \(p\) of \(\mathfrak{a}=(f_{1},\ldots,f_{t})\) equals the log canonical threshold of \(\mathfrak{a}\). **Theorem B** (Theorem 3.8).: _Assume the hypotheses of Theorem A._ 1. _If_ \(\operatorname{lct}(\mathfrak{a}^{\circ})>t\) _and_ \(I\neq\emptyset\)_, with_ \(\operatorname{Card}(I)=r=t\)_, then_ \(\operatorname{fpt}(\mathfrak{a}_{p})=\operatorname{lct}(\mathfrak{a})=t\)_, for all the primes_ \(p\) _verifying_ \[\rho_{1}^{(1)}+\cdots+\rho_{N}^{(1)}\geq tp.\] 2. _If_ \(\operatorname{lct}(\mathfrak{a}^{\circ})\leq t\)_, then_ \(\operatorname{fpt}(\mathfrak{a}_{p})=\operatorname{lct}(\mathfrak{a})\) _for all prime numbers_ \(p\) _for which the entries of_ \(\rho\) _add without carrying modulo_ \(p\)_._ We present two applications of our techniques. We show in Example 3.9 how to use the splitting polytopes to recover some of the results of Shibuta and Takagi [14]. Our third main result, and a second application, deals with a recent invariant of singularities in characteristic \(p\) measuring the volume of the constancy regions for generalized mixed test ideals [1]. This invariant is called the \(F\)-volume and it is capable, for instance, of detecting \(F\)-pure complete intersections.
Explicit computation of this invariant is again very hard, so as an application of our study of splitting polytopes for ideals \(\mathfrak{a}=(f_{1},\ldots,f_{t})\), we provide geometrical lower bounds for the \(F\)-volume of a collection of principal ideals. **Theorem C** (Theorem 4.5).: _Consider a polynomial mapping \(\mathbf{f}=(f_{1},\ldots,f_{t})\) of elements in \(\mathfrak{m}\). Suppose that \(\mathcal{P}_{\mathbf{f}}\) has a unique maximal point \(\rho=(\rho_{1},\ldots,\rho_{t})\) with \(\rho_{i}\in\mathbb{R}^{l_{i}}\), and set for \(i\in\{1,\ldots,t\}\)_ \[S_{i}=\sup\left\{\ell\in\mathbb{Z}_{\geq 0}\mid\sum_{j_{i}=1}^{l_{i}}\rho_{i,j_{i}}^{(e)}\leq p-1\quad\text{for every}\quad 0\leq e\leq\ell\right\}.\] _If \(I\) denotes the set of indices for which \(S_{i}\) is finite, then_ \[\prod_{i=1}^{t}\left(|\langle\rho_{i}\rangle_{S_{i}}|+\frac{1}{p^{S_{i}}}\right)\leq\operatorname{Vol}_{F}^{\mathfrak{m}}((f_{1}),\ldots,(f_{t})).\] ## 2. Splitting polytopes and Newton polyhedra of polynomial mappings One of the most powerful tools in the study of singularities in positive characteristic is the Frobenius map. A classical result by Kunz [14] shows that the regular rings \(R\) are exactly those for which \(R^{1/p^{e}}\) is flat over \(R\), so very often we have to check properties against \(R^{1/p^{e}}\). For instance, it might be useful to know if for a non-negative real number \(\alpha\) and a fixed \(f\in R\), \(Rf^{\alpha}\subseteq R^{1/p^{e}}\) splits for at least one \(e\geq 1\). Since in general \(f^{\alpha}\) is not an element of \(R^{1/p^{e}}\), one may approximate \(\alpha\) by a sequence of rational numbers \(\alpha_{e}\in\frac{1}{p^{e}}\mathbb{Z}_{\geq 0}\) with the property \(\lim_{e\to\infty}\alpha_{e}=\alpha\); then one requires that some (or all) of the inclusions \(Rf^{\alpha_{e}}\subseteq R^{1/p^{e}}\) split over \(R\). ### Basics on \(p\)-expansions A key construction that allows one to carry out this strategy is that of \(p\)-expansions. This construction was first introduced by Renyi in [13] under the more general definition of \(\beta\)-expansions. **Definition 2.1**.: Let \(p\) be an integer with \(p\geq 2\). Any real number \(\alpha\in[0,1)\) can be uniquely written as \[\alpha=\sum_{k\geq 1}\frac{\alpha^{(k)}}{p^{k}}, \tag{2.1.1}\] with \(0\leq\alpha^{(k)}\leq p-1\), and the further assumption that the sequence \(\{\alpha^{(k)}\}_{k\geq 1}\) does not eventually take only the value \(p-1\). We will say that the sequence \(\{\alpha^{(k)}\}_{k\geq 1}\) represents \(\alpha\) and call (2.1.1) the \(p\)-expansion of \(\alpha\).1 Footnote 1: Note that the \(p\)-expansion of \(\alpha\) is different from its \(p\)-adic expansion, which is not defined for general real numbers. The representation \(\{\alpha^{(k)}\}_{k\geq 1}\) is an infinite sequence which is obtained by using the 'greedy' algorithm of Renyi. We denote by \(\langle\alpha\rangle_{e}:=\sum_{k=1}^{e}\frac{\alpha^{(k)}}{p^{k}}\) the \(e\)-th _truncation_ of \(\alpha\) in (2.1.1), and additionally we set \(\langle\alpha\rangle_{0}=\langle 0\rangle_{e}=0\) and \(\langle\alpha\rangle_{\infty}=\alpha\). **Definition 2.2**.: For a vector \(\alpha=(\alpha_{1},\ldots,\alpha_{m})\in\mathbb{R}^{m}\) we define \(|\alpha|\) as the sum \(\alpha_{1}+\cdots+\alpha_{m}\) of the entries of \(\alpha\). When \(\alpha\in[0,1)^{m}\) we denote by \(\langle\alpha\rangle_{e}\) the vector \((\langle\alpha_{1}\rangle_{e},\ldots,\langle\alpha_{m}\rangle_{e})\).
We also say that \(\alpha_{1},\ldots,\alpha_{m}\) add without carrying modulo \(p\) if \(\alpha_{1}^{(k)}+\cdots+\alpha_{m}^{(k)}\leq p-1\) for every \(k\geq 1\). Note that \(\alpha_{1},\dots,\alpha_{m}\) add without carrying mod \(p\) if and only if the \(p\)-expansions of the integers \(p^{e}\langle\alpha_{1}\rangle_{e},\dots,p^{e}\langle\alpha_{m}\rangle_{e}\) add without carrying for all \(e\geq 1\). If moreover \(p\) is assumed to be a prime number, then we have the classical Lucas's Lemma. **Lemma 2.3** ([10]).: _Take \(r=(r_{1},\dots,r_{m})\in\mathbb{Z}_{\geq 0}^{m}\). Then \(\binom{|r|}{r}:=\frac{|r|!}{r_{1}!\cdots r_{m}!}\not\equiv 0\bmod p\) if and only if the \(p\)-expansions of the entries of \(r\) add without carrying._ **Corollary 2.4**.: _The entries of a vector \((\alpha_{1},\dots,\alpha_{m})\in[0,1]^{m}\) add without carrying mod \(p\) if and only if, for every \(e\geq 1\)_ \[\binom{p^{e}\langle\alpha_{1}\rangle_{e}+\dots+p^{e}\langle\alpha_{m}\rangle_{e}}{p^{e}\langle\alpha_{1}\rangle_{e},\dots,p^{e}\langle\alpha_{m}\rangle_{e}}\not\equiv 0\bmod p.\] ### Splitting polytopes of polynomial mappings In this section we introduce our main combinatorial tool, the splitting polytope, introduced first by Mustata, Takagi and Watanabe [11], and further developed by Hernandez [1, 1]. Our constructions follow closely the ones given by Hernandez. Given \(\alpha,\beta\in\mathbb{R}^{m}\), we denote by \(\alpha\preceq\beta\) the component-wise inequality, and denote by \(\mathbf{1}_{m}\) the vector \((1,\dots,1)\in\mathbb{R}^{m}\). Let \(f=\sum_{a\in\mathbb{Z}_{\geq 0}^{m}}c_{a}x^{a}\) be a nonconstant polynomial in a ring \(R=\Bbbk[x_{1},\dots,x_{m}]\), with \(\Bbbk\) an arbitrary field. The support of \(f\) is the set \[\operatorname{Supp}(f)=\{a\in\mathbb{Z}_{\geq 0}^{m}\mid c_{a}\neq 0\}.\] If \(\operatorname{Card}(\operatorname{Supp}(f))=n\), then the \(m\times n\) matrix \(\mathcal{E}_{f}\) having as columns the elements of \(\operatorname{Supp}(f)\) is called the _exponent matrix_ of \(f\). **Definition 2.5**.: The set \(\mathcal{P}_{f}:=\{\gamma\in\mathbb{R}_{\geq 0}^{n}\mid\mathcal{E}_{f}\cdot\gamma\preceq\mathbf{1}_{m}\}\) is called the _splitting polytope of_ \(f\). Consider now a finite list \(\{f_{1},\dots,f_{t}\}\) of polynomials in \(R=\Bbbk[x_{1},\dots,x_{m}]\) belonging to the irrelevant ideal \(\mathfrak{m}\) of \(R\). In order to define a splitting polytope for the polynomial mapping \(\boldsymbol{f}=(f_{1},\dots,f_{t})\,:\Bbbk^{m}\mapsto\Bbbk^{t}\) we need some technical assumptions. For any \(1\leq i\leq t\), assume that \(f_{i}\) has exactly \(n_{i}\) monomials, i.e. \(\operatorname{Card}(\operatorname{Supp}(f_{i}))=n_{i}\). Moreover, take \(l_{1}=n_{1}\) and define \(l_{i}\), for \(i\geq 2\), as the unique integer \(0\leq l_{i}\leq n_{i}\) with the property that the first \(l_{i}\) monomials in \(f_{i}\) do not appear in the list \(\{f_{1},\dots,f_{i-1}\}\). The polynomial \(\hat{f}_{i,\boldsymbol{f}}=\hat{f}_{i}\) obtained by adding just the first \(l_{i}\) terms of \(f_{i}\) verifies \(\operatorname{Card}(\operatorname{Supp}(\hat{f}_{i}))=l_{i}\); the corresponding exponent matrix is denoted by \(\widehat{\mathcal{E}}_{i}\). Note that the definitions of \(\hat{f}_{i}\) and \(\widehat{\mathcal{E}}_{i}\) depend crucially on the polynomial mapping \(\boldsymbol{f}\). **Definition 2.6**.: Given a polynomial mapping \(\boldsymbol{f}=(f_{1},\dots,f_{t})\) we set \(N=l_{1}+\dots+l_{t}\) and define the following. 1.
The _exponent matrix_ of \(\boldsymbol{f}\) is the \(m\times N\) matrix \(\mathcal{E}_{\boldsymbol{f}}\) having as columns the different elements of \(\cup_{i=1}^{t}\operatorname{Supp}(\hat{f}_{i})\), i.e. \(\mathcal{E}_{\boldsymbol{f}}=(\widehat{\mathcal{E}}_{1}\mid\dots\mid \widehat{\mathcal{E}}_{t})\). 2. The _splitting polytope_ of \(\boldsymbol{f}\) is the set \[\mathcal{P}_{\boldsymbol{f}}=\{\gamma\in\mathbb{R}_{\geq 0}^{N}\mid\mathcal{E}_{ \boldsymbol{f}}\cdot\gamma\preceq\mathbf{1}_{m}\}.\] Observe that the sets \(\mathcal{P}_{f}\) and \(\mathcal{P}_{\boldsymbol{f}}\) are bounded convex polytopes, in fact, they lie inside \([0,1]^{n}\) and \([0,1]^{N}\) respectively. Moreover since, for instance, \(\mathcal{P}_{f}\) is a compact set, the function \(|\cdot|:\,\mathbb{R}^{n}\to\,\,\mathbb{R}\) defined by \(|(\gamma_{1},\dots,\gamma_{n})|=\gamma_{1}+\dots+\gamma_{n}\) reaches a maximum, let us say \(M\). The set \(\{\gamma\in\mathcal{P}_{f}\mid|\gamma|=M\}\) defines a proper face of \(\mathcal{P}_{f}\), that is called maximal. **Example 2.7**.: Consider the mapping \(\mathbf{f}=(x^{a+1}+xy^{b},yz^{c})\) with \(a\geq 0\) and \(b,c\geq 1\), then \[\mathcal{E}_{\mathbf{f}}=\begin{pmatrix}a+1&1&0\\ 0&b&1\\ 0&0&c\end{pmatrix},\] and the splitting polytope of \(\mathbf{f}\) is given by \(\mathcal{P}_{\mathbf{f}}=\{(\gamma_{1},\gamma_{2},\gamma_{3})\in\mathbb{R}_{\geq 0 }^{3}\mid(a+1)\gamma_{1}+\gamma_{2}\leq 1,b\gamma_{2}+\gamma_{3}\leq 1,\text{ and }c \gamma_{3}\leq 1\}\), see Figure 1. The polytope \(\mathcal{P}_{\mathbf{f}}\) is given by the convex hull of the vectors: \((0,0,0)\), \(v_{1}=\left(\frac{1}{a+1},0,0\right),\ v_{2}=\left(\frac{1}{a+1},0,\frac{1}{c} \right),\ v_{3}=\left(0,0,\frac{1}{c}\right),\ v_{4}=\left(0,\frac{c-1}{bc}, \frac{1}{c}\right),\ v_{5}=\left(\frac{bc-c+1}{bc(a+1)},\frac{c-1}{bc},\frac{1 }{c}\right),\ v_{6}=\left(0,\frac{1}{b},0\right),\) and \(v_{7}=\left(\frac{b-1}{b(a+1)},\frac{1}{b},0\right)\). The maximal face of \(\mathcal{P}_{\mathbf{f}}\) depends on the values of \(a\) and \(c\), as follows. * If \(c=1\) or if \(c\neq 1\) and \(a\neq 0\), then the maximal face is \(\{v_{5}\}\). * If \(c\neq 1\) and \(a=0\), then the maximal face is the edge joining \(v_{2}\) and \(v_{5}\). **Example 2.8**.: For \(\mathbf{f}=(x^{a}+y^{b}+z^{c},-x^{a}+xyz+x^{2}y^{2}z^{2},y^{b}+xyz+x^{3}y^{2})\) one has \[\mathcal{E}_{\mathbf{f}}=\begin{pmatrix}a&0&0&1&2&3\\ 0&b&0&1&2&2\\ 0&0&c&1&2&0\end{pmatrix}.\] **Remark 2.9**.: Given a list of polynomials \(\mathbf{f}\), one may be persuaded to think that \(\mathcal{E}_{\mathbf{f}}=\mathcal{E}_{F}\), where \(F\) is the polynomial obtained by adding the different monomials of each \(f_{i}\) in \(\mathbf{f}\). In general this is not the case, since the set of monomials in \(F\) may be smaller due to cancellations, cf. Example 2.8. We shall see later that a very convenient condition over the splitting polytope \(\mathcal{P}_{\mathbf{f}}\) is that the maximal face consists of just one vertex \(\rho\in[0,1]^{N}\), this condition is often expressed by saying that \(\mathcal{P}_{\mathbf{f}}\) has a unique maximal point. First we need a technical lemma for which we will fix an integer \(e\geq 1\), and recall that the \(e\)-th truncation of \(\rho\) in its \(p\)-expansion is denoted by \(\langle\rho\rangle_{e}\). **Lemma 2.10**.: _Assume that \(\mathcal{P}_{\mathbf{f}}\) has a unique maximal point \(\rho\in[0,1]^{N}\), which is written as \(\rho=(\rho_{1},\ldots,\rho_{t})\) with \(\rho_{i}\in\mathbb{R}^{l_{i}}\), according to Definition 2.6. 
Then the following statements hold._ 1. _If for some_ \(\gamma\in\mathbb{R}^{N}_{\geq 0}\)_,_ \(|\gamma|=|\langle\rho\rangle_{e}|\) _and_ \(\mathcal{E}_{\boldsymbol{f}}\gamma=\mathcal{E}_{\boldsymbol{f}}\langle\rho \rangle_{e}\)_, then_ \(\gamma=\langle\rho\rangle_{e}\)_. Moreover, if there exist vectors_ \(\gamma,\delta\in\mathbb{R}^{N}_{\geq 0}\)_, with_ \(\delta\preceq\langle\rho\rangle_{e}\) _and such that_ \(|\gamma|=|\delta|\) _and_ \(\mathcal{E}_{\boldsymbol{f}}\gamma=\mathcal{E}_{\boldsymbol{f}}\delta\)_; then_ \(\gamma=\delta\)_._ 2. _Let_ \(i\) _be a fixed index in_ \(\{1,\ldots,t\}\)_. If for some_ \(\delta\in\mathbb{R}^{l_{i}}_{\geq 0}\)_,_ \(|\delta|=|\langle\rho_{i}\rangle_{e}|\) _and_ \(\widehat{\mathcal{E}}_{i}\,\delta=\widehat{\mathcal{E}}_{i}\langle\rho_{i} \rangle_{e}\)_, then_ \(\delta=\langle\rho_{i}\rangle_{e}\)_. Moreover, if there exist vectors_ \(\gamma,\delta\in\mathbb{R}^{l_{i}}_{\geq 0}\)_, with_ \(\delta\preceq\langle\rho_{i}\rangle_{e}\) _and such that_ \(|\gamma|=|\delta|\) _and_ \(\widehat{\mathcal{E}}_{i}\,\gamma=\widehat{\mathcal{E}}_{i}\,\delta\)_; then_ \(\gamma=\delta\)_._ Proof.: The case of just one polynomial \(f\) is stated and proved in [11, Lemma 24]. The proof given there can be easily extended to the present setting to justify the first part. Note, nevertheless, that different arguments are required to prove the second part, since our hypotheses do not imply in general that \(\rho_{i}\) is a maximal point of \(\mathcal{P}_{f_{i}}\) (see Remark 2.12). So we start by considering the vector \[\gamma=(\langle\rho_{1}\rangle_{e},\ldots,\langle\rho_{i-1}\rangle_{e},\delta,\langle\rho_{i+1}\rangle_{e},\ldots,\langle\rho_{t}\rangle_{e})\in\mathbb{R} ^{N}_{\geq 0}.\] It is clear that \(|\gamma|=|\langle\rho\rangle_{e}|\), since \(|\delta|=|\langle\rho_{i}\rangle_{e}|\). From \(\widehat{\mathcal{E}}_{i}\,\delta=\widehat{\mathcal{E}}_{i}\langle\rho_{i} \rangle_{e}\) follows \[\mathcal{E}_{\boldsymbol{f}}\gamma=\widehat{\mathcal{E}}_{1}\langle\rho_{1} \rangle_{e}+\cdots+\widehat{\mathcal{E}}_{i-1}\langle\rho_{i-1}\rangle_{e}+ \widehat{\mathcal{E}}_{i}\,\delta+\widehat{\mathcal{E}}_{i+1}\langle\rho_{i+ 1}\rangle_{e}+\cdots+\widehat{\mathcal{E}}_{t}\langle\rho_{t}\rangle_{e}= \mathcal{E}_{\boldsymbol{f}}\langle\rho\rangle_{e},\] and the first part of the Lemma gives \(\gamma=\langle\rho\rangle_{e}\), hence \(\delta=\langle\rho_{i}\rangle_{e}\). For the second claim assume that there exist \(\gamma,\delta\in\mathbb{R}^{l_{i}}_{\geq 0}\), with \(\delta\preceq\langle\rho_{i}\rangle_{e}\) and such that \(|\gamma|=|\delta|\). Taking \(\gamma^{\prime}=\gamma+\langle\rho_{i}\rangle_{e}-\delta\) gives \(|\gamma^{\prime}|=|\langle\rho_{i}\rangle_{e}|\) and \(\widehat{\mathcal{E}}_{i}\,\gamma^{\prime}=\widehat{\mathcal{E}}_{i}\langle \rho_{i}\rangle_{e}\). Again the first part of the Lemma implies \(\gamma^{\prime}=\langle\rho_{i}\rangle_{e}\), and consequently \(\gamma=\delta\). Although the terminology is a little bit technical, the following proposition provides an explicit formula for the coefficient of a relevant monomial in the power \(p^{e}|\langle\rho_{i}\rangle_{e}|\) of a polynomial \(f\). Here we assume that in our mapping \(\boldsymbol{f}=(f_{1},\ldots,f_{t})\), every member can be written as \(f_{i}=c_{a_{i,1}}x^{a_{i,1}}+\cdots+c_{a_{i,n_{i}}}x^{a_{i,n_{i}}}\), i.e. \(\operatorname{Card}(\operatorname{Supp}(f_{i}))=n_{i}\). **Proposition 2.11**.: _Under the assumptions of Lemma 2.10, the following statements hold._ 1. 
_The coefficient of the monomial_ \(x^{p^{e}\,\widehat{\mathcal{E}}_{i}\langle\rho_{i}\rangle_{e}}\) _in_ \(f_{i}^{p^{e}|\langle\rho_{i}\rangle_{e}|}\) _is_ \(\binom{p^{e}|\langle\rho_{i}\rangle_{e}|}{p^{e}\langle\rho_{i}\rangle_{e}}c_{a_ {i}}^{b_{i,e}}\)_, where_ \(b_{i,e}\) _is the vector in_ \(\mathbb{Z}^{n_{i}}_{\geq 0}\) _having_ \(p^{e}\langle\rho_{i}\rangle_{e}\) _in the first_ \(l_{i}\) _entries and_ \(0\) _in the remaining ones. Moreover, the coefficient of the monomial_ \(x^{p^{e}\mathcal{E}_{\boldsymbol{f}}\langle\rho\rangle_{e}}\) _in the polynomial_ \(g:=\prod_{i=1}^{t}f_{i}^{p^{e}|\langle\rho_{i}\rangle_{e}|}\) _is_ \(\prod_{i=1}^{t}\binom{p^{e}|\langle\rho_{i}\rangle_{e}|}{p^{e}\langle\rho_{i} \rangle_{e}}c_{a_{i}}^{b_{i,e}}\)_._ 2. _Let_ \(v\) _be an element of_ \(\frac{1}{p^{e}}\mathbb{Z}^{N}_{\geq 0}\)_, satisfying_ \(v\preceq\langle\rho\rangle_{e}\) _and written in the form_ \(v=(v_{1},\ldots,v_{t})\) _with_ \(v_{i}\in\frac{1}{p^{e}}\mathbb{Z}^{l_{i}}_{\geq 0}\)_. Then, the coefficient of the monomial_ \(x^{p^{e}\,\widehat{\mathcal{E}}_{i}\,v_{i}}\) _in_ \(f_{i}^{p^{e}|v_{i}|}\) _is_ \(\binom{p^{e}|v_{i}|}{p^{e}v_{i}}c_{a_{i}}^{d_{i,e}}\)_, where_ \(d_{i,e}\) _is a vector of the form_ \(p^{e}(v_{i},0,\ldots,0)\in\mathbb{Z}^{n_{i}}_{\geq 0}\)_. Moreover, the coefficient of the monomial_ \(x^{p^{e}\mathcal{E}_{\boldsymbol{f}}v}\) _in the polynomial_ \(g=\prod_{i=1}^{t}f_{i}^{p^{e}|v_{i}|}\) _is_ \(\prod_{i=1}^{t}\binom{p^{e}|v_{i}|}{p^{e}v_{i}}c_{a_{i}}^{d_{i,e}}\)_._ Proof.: In order to show the first part, we start by fixing an index \(i\in\{1,\ldots,t\}\). The multinomial theorem implies that for a positive integer \(Q\), \[f_{i}^{Q}=(c_{a_{i,1}}x^{a_{i,1}}+\cdots+c_{a_{i,n_{i}}}x^{a_{i,n_{i}}})^{Q}= \sum_{k_{1}+\cdots+k_{n_{i}}=Q}\binom{Q}{k_{1},\ldots,k_{n_{i}}}c_{a_{i}}^{k}x^{ \mathcal{E}_{f_{i}}k}. \tag{2.2.1}\] We denote the set of indices in the sum (2.2.1) by \(I=\{k=(k_{1},\ldots,k_{n_{i}})\in\mathbb{Z}^{n_{i}}_{\geq 0}\mid|k|=Q\}\). Let \(I^{\prime}\) denote the subset of \(I\) formed by vectors of the form \((k_{1},\ldots,k_{l_{i}},0,\ldots,0)\). 
Then (2.2.1) becomes: \[f_{i}^{Q} =\sum_{k\in I^{\prime}}\binom{Q}{k_{1},\ldots,k_{l_{i}},0,\ldots,0}c^{k}_{a_{i}}x^{\mathcal{E}_{f_{i}}k}+\sum_{k\in I\setminus I^{\prime}}\binom{Q}{k_{1},\ldots,k_{n_{i}}}c^{k}_{a_{i}}x^{\mathcal{E}_{f_{i}}k}\] \[=\sum_{k\in I^{\prime}}\binom{Q}{k_{1},\ldots,k_{l_{i}}}c^{k}_{a_{i}}x^{\widehat{\mathcal{E}}_{i}\,k}+\sum_{k\in I\setminus I^{\prime}}\binom{Q}{k_{1},\ldots,k_{n_{i}}}c^{k}_{a_{i}}x^{\mathcal{E}_{f_{i}}k}.\] If \(\rho\in\mathbb{R}^{N}_{\geq 0}\) denotes the unique maximal point of \(\mathcal{P}_{\boldsymbol{f}}\), and \(\rho_{i}\in\mathbb{R}^{l_{i}}\) are as in Definition 2.6, then \[f_{i}^{Q} =\sum_{\begin{subarray}{c}k\in I^{\prime}\\ \widehat{\mathcal{E}}_{i}\,k=p^{e}\,\widehat{\mathcal{E}}_{i}\langle\rho_{i}\rangle_{e}\end{subarray}}\binom{Q}{k_{1},\ldots,k_{l_{i}}}c^{k}_{a_{i}}x^{\widehat{\mathcal{E}}_{i}\,k}+\sum_{\begin{subarray}{c}k\in I^{\prime}\\ \widehat{\mathcal{E}}_{i}\,k\neq p^{e}\,\widehat{\mathcal{E}}_{i}\langle\rho_{i}\rangle_{e}\end{subarray}}\binom{Q}{k_{1},\ldots,k_{l_{i}}}c^{k}_{a_{i}}x^{\widehat{\mathcal{E}}_{i}\,k}\] \[+\sum_{k\in I\setminus I^{\prime}}\binom{Q}{k_{1},\ldots,k_{n_{i}}}c^{k}_{a_{i}}x^{\mathcal{E}_{f_{i}}k}.\] Taking \(Q=p^{e}|\langle\rho_{i}\rangle_{e}|\) and using Lemma 2.10 (2), we get \[f_{i}^{p^{e}|\langle\rho_{i}\rangle_{e}|} =\binom{p^{e}|\langle\rho_{i}\rangle_{e}|}{p^{e}\langle\rho_{i,1}\rangle_{e},\ldots,p^{e}\langle\rho_{i,l_{i}}\rangle_{e}}c^{b_{i,e}}_{a_{i}}x^{p^{e}\,\widehat{\mathcal{E}}_{i}\langle\rho_{i}\rangle_{e}}+\sum_{\begin{subarray}{c}k\in I^{\prime}\\ \widehat{\mathcal{E}}_{i}\,k\neq p^{e}\,\widehat{\mathcal{E}}_{i}\langle\rho_{i}\rangle_{e}\end{subarray}}\binom{p^{e}|\langle\rho_{i}\rangle_{e}|}{k_{1},\ldots,k_{l_{i}}}c^{k}_{a_{i}}x^{\widehat{\mathcal{E}}_{i}\,k}\] \[+\sum_{k\in I\setminus I^{\prime}}\binom{p^{e}|\langle\rho_{i}\rangle_{e}|}{k_{1},\ldots,k_{n_{i}}}c^{k}_{a_{i}}x^{\mathcal{E}_{f_{i}}k},\] which gives the desired conclusion. The proof for the coefficient of \(g:=\prod_{i=1}^{t}f_{i}^{p^{e}|\langle\rho_{i}\rangle_{e}|}\) now follows from a straightforward calculation and Lemma 2.10 (1). Finally, similar considerations lead to the proof of the second part. **Remark 2.12**.: Suppose that \(\mathcal{P}_{\boldsymbol{f}}\) is the splitting polytope of the mapping \(\boldsymbol{f}=(f_{1},\ldots,f_{t})\), and assume that the unique maximal point \(\rho\in\mathcal{P}_{\boldsymbol{f}}\subseteq[0,1]^{N}\) is written as \(\rho=(\rho_{1},\ldots,\rho_{t})\) with \(\rho_{i}\in\mathbb{R}^{l_{i}}\). Then Definitions 2.6 and 2.5 yield \(\rho_{i}\in\mathcal{P}_{\widehat{f_{i}}}\), but in general, the 'maximality' of a point is not preserved by 'projection'. Consider for example the mapping \(\boldsymbol{f}=(f,g)\) given by \(f=x^{a+1}+xy^{b}\) and \(g=x^{a+1}+yz^{c}\). Assume further that \(a,b\geq 1\) and \(c\geq 2\). In this case \(l_{1}=2,\ l_{2}=1,\ N=3\), and \(\mathcal{P}_{\boldsymbol{f}}\) coincides with the splitting polytope of Example 2.7. One has \(\mathcal{E}_{f}=\begin{pmatrix}a+1&1\\ 0&b\end{pmatrix}\) and \(\mathcal{P}_{f}=\{(\gamma_{1},\gamma_{2})\in\mathbb{R}^{2}_{\geq 0}\mid(a+1)\gamma_{1}+\gamma_{2}\leq 1,\text{ and }b\gamma_{2}\leq 1\}\), see Figure 2. Note that the maximal face of \(\mathcal{P}_{\boldsymbol{f}}\) is \(\left\{\left(\frac{bc-c+1}{bc(a+1)},\frac{c-1}{bc},\frac{1}{c}\right)\right\}\) and its 'projection' on the first two coordinates is \(\left(\frac{bc-c+1}{bc(a+1)},\frac{c-1}{bc}\right)\).
This is a point of the polytope \(\mathcal{P}_{f}\), but in general it is not equal to its maximal point \(\left\{\left(\frac{b-1}{b(a+1)},\frac{1}{b}\right)\right\}\). ### Newton polyhedra of polynomial mappings The construction of the splitting polytope of a polynomial \(f\) resembles the more commonly used notion of the Newton polyhedron of \(f\). In fact, our definition of \(\mathcal{P}_{\boldsymbol{f}}\) was inspired by the construction of the Newton polyhedron \(\mathcal{N}_{\boldsymbol{f}}=\mathcal{N}_{(f_{1},\ldots,f_{t})}\) of a polynomial mapping \(\boldsymbol{f}=(f_{1},\ldots,f_{t}):\mathbb{k}^{m}\mapsto\mathbb{k}^{t}\) that we address next assuming the notations of Subsection 2.2. **Definition 2.13**.: Let \(A\) be a nonempty subset of \(\mathbb{Z}_{\geq 0}^{m}\). The Newton polyhedron \(\mathcal{N}_{A}\) associated to \(A\) is the convex hull in \(\mathbb{R}_{\geq 0}^{m}\) of the set \[\bigcup_{a\in A}\left(a+\mathbb{R}_{\geq 0}^{m}\right).\] Classically one associates a Newton polyhedron \(\mathcal{N}_{f}\) to a polynomial \(f=\sum_{a\in\mathbb{Z}_{\geq 0}^{m}}c_{a}x^{a}\) in the maximal ideal \(\mathfrak{m}\) of \(R=\Bbbk[x_{1},\ldots,x_{m}]\) by setting \(A=\operatorname{Supp}(f)\). The same definition works over more general rings \(R\), for example for formal power series over not necessarily closed fields or \(\Bbbk\)-analytic functions over complete valued fields \(\Bbbk\). General references about the subject can be found for instance in [14, 15, 16]. There are two customary notions of 'genericity' for a fixed \(\mathcal{N}_{f}\). We say that \(f\) is _non-degenerate with respect to_ \(\mathcal{N}_{f}\) if for any compact face \(\eta\) of \(\mathcal{N}_{f}\) the system of equations \[f_{\eta}=\partial f_{\eta}/\partial x_{1}=\cdots=\partial f_{\eta}/\partial x_{m}=0 \tag{2.3.1}\] has no solution in \((\Bbbk^{\times})^{m}\), where \(f_{\eta}\) denotes the polynomial obtained from \(f\) by restricting the support to \(\eta\). For the second genericity condition, a set \(A\subseteq\mathbb{Z}_{\geq 0}^{m}\) is said to be _convenient_ if it contains a point on every coordinate axis. Then a polynomial \(f\) is called convenient when its support is a convenient set, and in such a case \(\mathcal{N}_{f}\) is also called convenient. A third condition is given by Hernandez [11, Definition 29], where he says that \(\mathcal{N}_{f}\) is in _diagonal position_ if the vector \(\mathbf{1}_{m}\) intersects a compact face of \(\mathcal{N}_{f}\). It is elementary to show, following Kouchnirenko's [14] ideas, that this condition is also generic. Note moreover that every convenient \(\mathcal{N}_{f}\) will automatically be in diagonal position. Historical notes and discussions of some of these (and related) genericity conditions can be found in [15, 16]. Consider now a polynomial mapping \(\boldsymbol{f}=(f_{1},\ldots,f_{t}):\Bbbk^{m}\mapsto\Bbbk^{t}\). There are essentially two ways of attaching a Newton polyhedron to \(\boldsymbol{f}\). The most frequent definition consists of the Minkowski sum of the \(\mathcal{N}_{f_{i}}\) for \(i=1,\ldots,t\), which is equivalent to considering the Newton polyhedron \(\mathcal{N}_{\prod_{i=1}^{t}f_{i}}\) of the product of all the \(f_{i}\)'s.
The second option is obtained from Definition 2.13 by taking \(A=\bigcup_{i=1}^{t}\operatorname{Supp}(f_{i})=\bigcup_{i=1}^{t}\operatorname{Supp}(\hat{f_{i}})\); we stick to the latter definition in the sequel and denote by \(\mathcal{N}_{\boldsymbol{f}}\) the Newton polyhedron of \(\boldsymbol{f}=(f_{1},\ldots,f_{t})\). Moreover one may consider the ideal \(\mathfrak{a}=\mathfrak{a}_{\boldsymbol{f}}=(f_{1},\ldots,f_{t})\) generated by the elements of the polynomial mapping \(\boldsymbol{f}\). We define \(\mathcal{N}_{\mathfrak{a}}\) as the Newton polyhedron of the set \(A=\bigcup_{f\in\mathfrak{a}}\operatorname{Supp}(f)\) and note that \(\mathcal{N}_{\mathfrak{a}}=\mathcal{N}_{\boldsymbol{f}}\), from which it follows that \(\mathcal{N}_{\mathfrak{a}}\) depends only on \(\mathfrak{a}\) and not on a system of generators, see [13, Section 2.1]. Discussions, comparisons and examples of these definitions (and the corresponding non-degeneracy conditions that they imply) are given in [1, 10, 12, 13]. The relations between \(\mathcal{N}_{\mathfrak{a}}=\mathcal{N}_{\boldsymbol{f}}\) and \(\mathcal{P}_{\boldsymbol{f}}\) for principal ideals have been developed by Hernandez in [16, Section 4.4]. It is not difficult to adapt those ideas to non-principal ideals, as we discuss in the remainder of this section. **Proposition 2.14**.: _The following equality holds_ \[\left\{|\gamma|\mid\gamma\in\mathcal{P}_{\boldsymbol{f}}\setminus\{(0,\dots,0)\}\right\}=\left\{\tau\in\mathbb{R}_{\geq 0}\mid(1/\tau,\dots,1/\tau)\in\mathcal{N}_{\boldsymbol{f}}\right\}.\] _In particular, \(\max\left\{|\gamma|\mid\gamma\in\mathcal{P}_{\boldsymbol{f}}\setminus\{(0,\dots,0)\}\right\}=\max\left\{\tau\in\mathbb{R}_{\geq 0}\mid(1/\tau,\dots,1/\tau)\in\mathcal{N}_{\boldsymbol{f}}\right\}.\)_ Proof.: For \(\gamma\in\mathcal{P}_{\boldsymbol{f}}\setminus\{(0,\dots,0)\}\) we have \(\mathcal{E}_{\boldsymbol{f}}\cdot\gamma\preceq\boldsymbol{1}_{m}\) and then we choose \(\delta\in\mathbb{R}_{\geq 0}^{m}\) such that \(\mathcal{E}_{\boldsymbol{f}}\gamma+\delta=\boldsymbol{1}_{m}\). By writing \(\gamma=(\gamma_{1},\dots,\gamma_{t})\in\mathbb{R}^{l_{1}}\times\dots\times\mathbb{R}^{l_{t}}\), we get \[\boldsymbol{1}_{m}=\widehat{\mathcal{E}}_{1}\,\gamma_{1}+\dots+\widehat{\mathcal{E}}_{t}\,\gamma_{t}+\delta=\sum_{j_{1}=1}^{l_{1}}\gamma_{1,j_{1}}a_{1,j_{1}}+\dots+\sum_{j_{t}=1}^{l_{t}}\gamma_{t,j_{t}}a_{t,j_{t}}+\delta.\] Setting \(\tau=|\gamma|>0\) yields \(\frac{1}{\tau}\boldsymbol{1}_{m}=(1/\tau,\dots,1/\tau)=\sum_{i=1}^{t}\sum_{j_{i}=1}^{l_{i}}\frac{\gamma_{i,j_{i}}}{\tau}a_{i,j_{i}}+\frac{1}{\tau}\delta\), but \(a_{i,j_{i}}\in\bigcup_{i=1}^{t}\operatorname{Supp}(f_{i})\) and \(\sum_{i=1}^{t}\sum_{j_{i}=1}^{l_{i}}\frac{\gamma_{i,j_{i}}}{\tau}=\frac{|\gamma|}{\tau}=1\), which forces \((1/\tau,\dots,1/\tau)\in\mathcal{N}_{\boldsymbol{f}}\). The other inclusion follows by similar considerations. Proposition 2.14 allows us to rephrase the condition that \(\mathcal{N}_{\boldsymbol{f}}\) is in diagonal position in terms of the splitting polytope \(\mathcal{P}_{\boldsymbol{f}}\). **Proposition 2.15**.: _Assume that \(\mathcal{N}_{\boldsymbol{f}}\) is in diagonal position and let \(\eta\) be the face of \(\mathcal{N}_{\boldsymbol{f}}\) that is intersected by \(\boldsymbol{1}_{m}\)._
Then the maximal face of \(\mathcal{P}_{\boldsymbol{f}}\) is given by_ \[\left\{\gamma\in\mathbb{R}_{\geq 0}^{N}\mid\mathcal{E}_{\boldsymbol{f}}\cdot\gamma=\boldsymbol{1}_{m}\text{ and }\gamma_{j}=0\text{ if the }j\text{-th column of }\mathcal{E}_{\boldsymbol{f}}\text{ does not belong to }\eta\right\}.\] Proof.: The proof is an easy variation of the one given for the case of a principal ideal [16, Lemma 32]. ## 3. \(F\)-pure thresholds of non principal ideals ### \(F\)-pure thresholds and term ideals Take a polynomial ring \(R\) over a field \(\mathbb{k}\) of prime characteristic \(p\), and set \(\mathfrak{m}=(x_{1},\dots,x_{m})\) for the homogeneous maximal ideal of \(R\). For \(e\geq 1\) we denote by \(\mathfrak{m}^{[p^{e}]}\) the Frobenius power of \(\mathfrak{m}\), i.e. the ideal \((x_{1}^{p^{e}},\dots,x_{m}^{p^{e}})\). **Definition 3.1**.: For a nonzero ideal \(\mathfrak{a}\subseteq\mathfrak{m}\), the following integer is well defined: \[\nu_{\mathfrak{a}}(p^{e})=\max\{c\in\mathbb{Z}_{\geq 0}\mid\mathfrak{a}^{c}\not\subseteq\mathfrak{m}^{[p^{e}]}\}.\] We also define the \(F\)-pure threshold of \(\mathfrak{a}\) by \[\operatorname{fpt}(\mathfrak{a})=\lim_{e\to\infty}\frac{\nu_{\mathfrak{a}}(p^{e})}{p^{e}}.\] The existence of the previous limit is proved in [17], while the non-obvious fact that \(\operatorname{fpt}(\mathfrak{a})\) is a rational number is proved in [1]. **Example 3.2**.: It is not difficult to show that if \(\mathfrak{a}\) defines a nonsingular subvariety of codimension \(r\), then \(\operatorname{fpt}(\mathfrak{a})=r\). Also it is known [11, 12] that for a monomial ideal \(\mathfrak{a}\) the \(F\)-pure threshold is given by the same formula as the log canonical threshold \(\operatorname{lct}(\mathfrak{a})\): \[\operatorname{fpt}(\mathfrak{a})=\operatorname{lct}(\mathfrak{a})=\max\{\tau\in\mathbb{R}_{\geq 0}\mid\boldsymbol{1}_{m}\in\tau\cdot\mathcal{N}_{\mathfrak{a}}\}.\] This equality is a consequence of the stronger fact that in the monomial case, the multiplier ideal and the test ideal of \(\mathfrak{a}\) coincide. Example 3.2 reveals a deep connection between the characteristic zero world with \(\operatorname{lct}(\mathfrak{a})\) and the characteristic \(p\) world with the \(F\)-pure threshold. This analogy will be exploited further when we present our contribution to Conjecture 3.7 in Theorem 3.8. We borrow the following definition from [1]. **Definition 3.3**.: For a nonzero ideal \(\mathfrak{a}\subseteq\mathfrak{m}\), the term ideal \(\mathfrak{a}^{\circ}\) of \(\mathfrak{a}\) is the ideal generated by all the monomials \(x^{a}\) with \(a\in\mathcal{N}_{\mathfrak{a}}\). Take a not necessarily principal ideal \(\mathfrak{a}\subseteq\mathfrak{m}\), and consider the polynomial mapping \(\boldsymbol{f}=(f_{1},\ldots,f_{t})\) formed by a list of minimal generators of \(\mathfrak{a}\). **Proposition 3.4**.: _The following properties hold._ 1. \(\operatorname{fpt}(\mathfrak{a})\leq\operatorname{fpt}(\mathfrak{a}^{\circ})\)_._ 2. \(\operatorname{fpt}(\mathfrak{a})\leq\min\{t,\operatorname{fpt}(\mathfrak{a}^{\circ})\}\)_._ 3. \(\operatorname{fpt}(\mathfrak{a}^{\circ})=\max_{\gamma\in\mathcal{P}_{\boldsymbol{f}}}|\gamma|\)_._ Proof.: The first two properties are well known, see e.g. [10]. The third property follows from the discussion in Example 3.2 and Proposition 2.14; alternatively, one may adapt the proof for a principal ideal [11, Proposition 36].
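Proposition 3.4 (3) identifies \(\operatorname{fpt}(\mathfrak{a}^{\circ})\) with the value of a linear program: maximize \(|\gamma|\) subject to \(\mathcal{E}_{\boldsymbol{f}}\cdot\gamma\preceq\mathbf{1}_{m}\) and \(\gamma\succeq 0\). The following sketch (not part of the original paper; the solver choice and all names are assumptions made for illustration) evaluates it for the exponent matrix of Example 2.7 with \(a=1\), \(b=2\), \(c=3\), i.e. for the ideal \((x^{2}+xy^{2},\,yz^{3})\) appearing in Example 3.6 below.

```python
# Sketch of Proposition 3.4 (3): fpt(a°) = max |gamma| over the splitting polytope,
# computed here with an off-the-shelf LP solver.
import numpy as np
from scipy.optimize import linprog

E_f = np.array([[2, 1, 0],
                [0, 2, 1],
                [0, 0, 3]])            # columns = exponent vectors of the hat-f_i
m, N = E_f.shape

res = linprog(c=-np.ones(N),           # linprog minimizes, so maximize |gamma| via -1's
              A_ub=E_f, b_ub=np.ones(m),
              bounds=[(0, None)] * N,
              method="highs")

print(-res.fun)                        # 1.0 = fpt(a°), attained at rho = (1/3, 1/3, 1/3)
print(res.x)                           # an optimal gamma
```

Note that this only computes \(\operatorname{fpt}(\mathfrak{a}^{\circ})\); the uniqueness of the maximal point required in Theorem 3.5 still has to be checked separately, e.g. via the diagonal-position criterion of Proposition 2.15.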
Our next goal is to make explicit the conditions required to get \(\operatorname{fpt}(\mathfrak{a})=\operatorname{fpt}(\mathfrak{a}^{\circ})\). Our formulation extends previous results only available for principal ideals [11, Theorem 42]. **Theorem 3.5**.: _Let \(\mathfrak{a}\subseteq\mathfrak{m}\) be a nonzero ideal with term ideal \(\mathfrak{a}^{\circ}\), and take a list \(\boldsymbol{f}=(f_{1},\ldots,f_{t})\) of minimal generators of \(\mathfrak{a}\). Suppose that \(\mathcal{P}_{\boldsymbol{f}}\) has a unique maximal point \(\rho=(\rho_{1},\ldots,\rho_{t})\) with \(\rho_{i}\in\mathbb{R}^{l_{i}}\), and set for \(i\in\{1,\ldots,t\}\)_ \[S_{i}=\sup\left\{\ell\in\mathbb{Z}_{\geq 0}\mid\sum_{j_{i}=1}^{l_{i}}\rho_{i,j_{i}}^{(e)}\leq p-1\quad\text{for every}\quad 0\leq e\leq\ell\right\}.\] _If \(I\) denotes the set of indices for which \(S_{i}\) is finite, then the following assertions hold._ 1. _If_ \(I=\emptyset\)_, then_ \(\operatorname{fpt}(\mathfrak{a})=\operatorname{fpt}(\mathfrak{a}^{\circ})\)_._ 2. _If_ \(I\neq\emptyset\) _and_ \(r=\operatorname{Card}(I)\)_, then_ \[\operatorname{fpt}(\mathfrak{a})\geq\sum_{i\in\{1,\ldots,t\}\setminus I}|\rho_{i}|+\sum_{i\in I}|\langle\rho_{i}\rangle_{S}|+\frac{r}{p^{S}},\] _where_ \(S=\min_{i\in I}S_{i}\)_._ Proof.: By Proposition 3.4 (1) and (3) it suffices to show that \(|\rho|\leq\operatorname{fpt}(\mathfrak{a})\) when \(I=\emptyset\). From Proposition 2.11 (1), \[\prod_{i=1}^{t}\binom{p^{e}|\langle\rho_{i}\rangle_{e}|}{p^{e}\langle\rho_{i}\rangle_{e}}c_{a_{i}}^{b_{i,e}}x^{p^{e}\mathcal{E}_{\boldsymbol{f}}\langle\rho\rangle_{e}} \tag{3.1.1}\] appears as a summand of the polynomial \(g=\prod_{i=1}^{t}f_{i}^{p^{e}|\langle\rho_{i}\rangle_{e}|}\), but \(I=\emptyset\) implies that \(\binom{p^{e}|\langle\rho_{i}\rangle_{e}|}{p^{e}\langle\rho_{i}\rangle_{e}}\not\equiv 0\bmod p\) for each \(\rho_{i}\), making the coefficient in (3.1.1) nonzero. Now, from \(\langle\rho\rangle_{e}\prec\rho\), we have \(p^{e}\mathcal{E}_{\boldsymbol{f}}\langle\rho\rangle_{e}\prec p^{e}\mathcal{E}_{\boldsymbol{f}}\rho\preceq p^{e}\mathbf{1}_{m}\). But then \(p^{e}\mathcal{E}_{\boldsymbol{f}}\langle\rho\rangle_{e}\in\operatorname{Supp}(g)\), whereas \(x^{p^{e}\mathcal{E}_{\boldsymbol{f}}\langle\rho\rangle_{e}}\notin\mathfrak{m}^{[p^{e}]}\), implying that \(g\notin\mathfrak{m}^{[p^{e}]}\). That way, \(\mathfrak{a}^{p^{e}|\langle\rho\rangle_{e}|}\nsubseteq\mathfrak{m}^{[p^{e}]}\) because \(g\) was already a member of \(\mathfrak{a}^{p^{e}|\langle\rho\rangle_{e}|}\). We conclude that \(|\langle\rho\rangle_{e}|\leq\frac{\nu_{\mathfrak{a}}(p^{e})}{p^{e}}\), which gives \(|\rho|\leq\operatorname{fpt}(\mathfrak{a})\) by taking the limit. For the second part of the proof we assume without loss of generality that \(I=\{1,\ldots,r\}\), then we define \(S=\min_{1\leq i\leq r}S_{i}\).
Note that \[\begin{array}{c}\rho_{1,1}^{(S_{1}+1)}+\cdots+\rho_{1,l_{1}}^{(S_{1}+1)}\geq p\\ \rho_{2,1}^{(S_{2}+1)}+\cdots+\rho_{2,l_{2}}^{(S_{2}+1)}\geq p\\ \vdots\\ \rho_{r,1}^{(S_{r}+1)}+\cdots+\rho_{r,l_{r}}^{(S_{r}+1)}\geq p.\end{array}\] Then there exist non-negative integers \(n_{1,1},\ldots,n_{1,l_{1}},n_{2,1},\ldots,n_{2,l_{2}},\ldots,n_{r,1},\ldots,n_{r,l_{r}}\) such that \[\begin{array}{ll}n_{1,1}+\cdots+n_{1,l_{1}}=p-1&\text{and}&0\leq n_{1,j_{1}}\leq\rho_{1,j_{1}}^{(S_{1}+1)}\\ n_{2,1}+\cdots+n_{2,l_{2}}=p-1&\text{and}&0\leq n_{2,j_{2}}\leq\rho_{2,j_{2}}^{(S_{2}+1)}\\ \vdots&\vdots&\vdots\\ n_{r,1}+\cdots+n_{r,l_{r}}=p-1&\text{and}&0\leq n_{r,j_{r}}\leq\rho_{r,j_{r}}^{(S_{r}+1)},\end{array}\] with at least one index in each row verifying the strict inequality, say \(n_{i,1}<\rho_{i,1}^{(S_{i}+1)}\). For \(e\) big enough we define the vectors \[\begin{cases}\alpha_{1}=\langle\rho_{1}\rangle_{S}+\left(\frac{n_{1,1}}{p^{S+1}}+\frac{p-1}{p^{S+2}}+\cdots+\frac{p-1}{p^{S+e}},\frac{n_{1,2}}{p^{S+1}},\ldots,\frac{n_{1,l_{1}}}{p^{S+1}}\right)\\ \vdots\\ \alpha_{r}=\langle\rho_{r}\rangle_{S}+\left(\frac{n_{r,1}}{p^{S+1}}+\frac{p-1}{p^{S+2}}+\cdots+\frac{p-1}{p^{S+e}},\frac{n_{r,2}}{p^{S+1}},\ldots,\frac{n_{r,l_{r}}}{p^{S+1}}\right).\end{cases}\] In this way \(\alpha=(\alpha_{1},\ldots,\alpha_{r})\in\mathbb{R}_{\geq 0}^{l_{1}+\cdots+l_{r}}\) verifies \(p^{S+e}\alpha\in\mathbb{Z}_{\geq 0}^{l_{1}+\cdots+l_{r}}\) and moreover \[|\alpha|=\sum_{i=1}^{r}|\langle\rho_{i}\rangle_{S}|+\frac{r(p^{e}-1)}{p^{S+e}}.\] Since \(\alpha_{i}\preceq\langle\rho_{i}\rangle_{S+e}\) for \(i\in\{1,\ldots,r\}\), Proposition 2.11 (2) implies that \[\prod_{i=1}^{r}\binom{p^{S+e}|\alpha_{i}|}{p^{S+e}\alpha_{i}}c_{a_{i}}^{d_{i,e}}\cdot\prod_{i=r+1}^{t}\binom{p^{S+e}|\langle\rho_{i}\rangle_{S+e}|}{p^{S+e}\langle\rho_{i}\rangle_{S+e}}c_{a_{i}}^{b_{i,e}}\] appears as the coefficient of the monomial \(x^{p^{S+e}\widehat{\mathcal{E}}_{1}\alpha_{1}}\cdots x^{p^{S+e}\widehat{\mathcal{E}}_{r}\alpha_{r}}x^{p^{S+e}\widehat{\mathcal{E}}_{r+1}\langle\rho_{r+1}\rangle_{S+e}}\cdots x^{p^{S+e}\widehat{\mathcal{E}}_{t}\langle\rho_{t}\rangle_{S+e}}\) in the polynomial \(g=\prod_{i=1}^{r}f_{i}^{p^{S+e}|\alpha_{i}|}\cdot\prod_{i=r+1}^{t}f_{i}^{p^{S+e}|\langle\rho_{i}\rangle_{S+e}|}\). The definition of \(S\) and Corollary 2.4 give \(\binom{p^{S+e}|\alpha_{i}|}{p^{S+e}\alpha_{i}}\not\equiv 0\bmod p\) for each \(\alpha_{i}\), and one can show similarly that \(\binom{p^{S+e}|\langle\rho_{i}\rangle_{S+e}|}{p^{S+e}\langle\rho_{i}\rangle_{S+e}}\not\equiv 0\bmod p\) for \(i\in\{r+1,\ldots,t\}\). In conclusion \(g\in\mathfrak{a}^{p^{S+e}|\alpha|+p^{S+e}|\langle(\rho_{r+1},\ldots,\rho_{t})\rangle_{S+e}|}\setminus\mathfrak{m}^{[p^{S+e}]}\), and letting \(e\to\infty\) gives finally \[\sum_{i=1}^{r}|\langle\rho_{i}\rangle_{S}|+\sum_{i=r+1}^{t}|\rho_{i}|+\frac{r}{p^{S}}\leq\operatorname{fpt}(\mathfrak{a}).\] **Example 3.6**.: Take \(R=\mathbb{F}_{p}[x,y,z]\) and let \(\mathfrak{a}\) be the ideal generated by \(\boldsymbol{f}=(x^{2}+xy^{2},yz^{3})\). Then \(\mathcal{P}_{\boldsymbol{f}}\) is the splitting polytope of Example 2.7 which has a unique maximal point \(\rho=\left(\frac{1}{3},\frac{1}{3},\frac{1}{3}\right)\). For \(p=2\), \(1/3=\sum_{e\geq 1}\frac{\alpha^{(e)}}{2^{e}}\) where \(\alpha^{(e)}=0\) for \(e\) odd and \(\alpha^{(e)}=1\) for \(e\) even, then \(S=S_{1}=1,\ S_{2}=\infty\) and \(\langle\rho_{1}\rangle_{1}=(0,0)\).
Theorem 3.5 (2) then gives \(\operatorname{fpt}(\mathfrak{a})\geq|\langle\rho_{1}\rangle_{1}|+\frac{1}{2}+\frac{1}{3}=\frac{5}{6}\). If \(p=3\), then \(1/3=\sum_{e\geq 2}\frac{2}{3^{e}}\), again \(S=S_{1}=1,\ S_{2}=\infty\) and \(\operatorname{fpt}(\mathfrak{a})\geq\frac{2}{3}\). Assume now that \(p=6w+1\) for some \(w\geq 1\), then \(1/3=\sum_{e\geq 1}\frac{2w}{p^{e}}\) and \(S_{1}=S_{2}=\infty\), so that \(I=\emptyset\), hence \(\operatorname{fpt}(\mathfrak{a})=|\rho|=1\) by Theorem 3.5 (1). Finally if \(p=6w+5\) for some \(w\geq 0\), then \(1/3=\sum_{e\geq 1}\frac{\alpha^{(e)}}{p^{e}}\) where \(\alpha^{(e)}=2w+1\) for \(e\) odd and \(\alpha^{(e)}=4w+3\) for \(e\) even. In this case \(S=S_{1}=1\), \(S_{2}=\infty\) and \(\langle\rho_{1}\rangle_{1}=\left(\frac{2w+1}{p},\frac{2w+1}{p}\right)\), giving \(\operatorname{fpt}(\mathfrak{a})\geq\frac{4w+2}{p}+\frac{1}{p}+\frac{1}{3}=1-\frac{1}{3p}\). Summarizing, \[\begin{cases}\operatorname{fpt}(\mathfrak{a})\geq\frac{5}{6}&\text{if}\quad p=2\\ \operatorname{fpt}(\mathfrak{a})\geq\frac{2}{3}&\text{if}\quad p=3\\ \operatorname{fpt}(\mathfrak{a})=1&\text{if}\quad p\equiv 1\mod 6\\ \operatorname{fpt}(\mathfrak{a})\geq 1-\frac{1}{3p}&\text{if}\quad p\equiv 5\mod 6.\end{cases}\] ### Relations with the log canonical threshold Let \(\mathfrak{a}\) be an ideal in \(\mathbb{Q}[x_{1},\dots,x_{m}]\) and consider a log resolution of \(\mathfrak{a}\) defined over \(\mathbb{Q}\). By this we mean a proper birational map \(\pi_{\mathbb{Q}}:Y\to\mathbb{A}_{\mathbb{Q}}^{m}\), with \(Y\) smooth, and such that \(\mathfrak{a}\cdot\mathcal{O}_{Y}=\mathcal{O}_{Y}(-D)\) is invertible and \(\operatorname{Exc}(\pi)\cup\operatorname{Supp}(D)\) is a simple normal crossing divisor. Denote by \(K_{\pi}\) the relative canonical divisor on \(Y\) and denote by \(D\) the effective divisor given by \(\pi^{-1}(\mathfrak{a})\); then the _log-canonical threshold of_ \(\mathfrak{a}\) (at the origin) is given by \[\operatorname{lct}(\mathfrak{a})=\sup\{\alpha\in\mathbb{R}_{+}\mid\lceil K_{\pi}-\alpha D\rceil\text{ is effective}\},\] where \(\lceil H\rceil\) denotes the round up divisor of \(H\). Given a prime number \(p\), one may consider the reduction \(\mathfrak{a}_{p}=\mathfrak{a}\cdot\mathbb{F}_{p}[x_{1},\dots,x_{m}]\) of \(\mathfrak{a}\) mod \(p\), and then it is very natural to ask about the relationship between the corresponding thresholds. Mustata, Takagi, and Watanabe [11, Theorem 3.4] prove that for \(p\gg 0\), we have \[\operatorname{fpt}(\mathfrak{a}_{p})\leq\operatorname{lct}(\mathfrak{a})\quad\text{and}\quad\lim_{p\to\infty}\operatorname{fpt}(\mathfrak{a}_{p})=\operatorname{lct}(\mathfrak{a}).\] The authors also posed the following conjecture, which still represents an interesting open challenge. For historical comments and some references for the proven cases consult the survey [1]. **Conjecture 3.7** ([11, Conjecture 3.6]).: There are infinitely many primes \(p\) for which \(\operatorname{fpt}(\mathfrak{a}_{p})=\operatorname{lct}(\mathfrak{a})\). By using our characterization of \(\operatorname{fpt}(\mathfrak{a})\) in terms of \(\mathcal{P}_{\boldsymbol{f}}\), namely Theorem 3.5, we contribute some instances of the conjecture when \(\mathfrak{a}\) is a not necessarily principal ideal. **Theorem 3.8**.: _Assume the hypotheses of Theorem 3.5._ 1.
_If_ \(\operatorname{lct}(\mathfrak{a}^{\circ})>t\) _and_ \(I\neq\emptyset\)_, with_ \(\operatorname{Card}(I)=r=t\)_, then_ \(\operatorname{fpt}(\mathfrak{a}_{p})=\operatorname{lct}(\mathfrak{a})=t\)_, for all the primes_ \(p\) _verifying_ \[\rho_{1}^{(1)}+\dots+\rho_{N}^{(1)}\geq tp.\] 2. _If_ \(\operatorname{lct}(\mathfrak{a}^{\circ})\leq t\)_, then_ \(\operatorname{fpt}(\mathfrak{a}_{p})=\operatorname{lct}(\mathfrak{a})\) _for all prime numbers_ \(p\) _for which the entries of_ \(\rho\) _add without carrying modulo_ \(p\)_._ Proof.: By Proposition 3.4 (3) we have that \(|\rho|=\operatorname{fpt}(\mathfrak{a}_{p}^{\circ})\) and by general properties of monomial ideals \(|\rho|=\operatorname{fpt}(\mathfrak{a}_{p}^{\circ})=\operatorname{lct}(\mathfrak{a}^{\circ})\). For the primes \(p\) satisfying \(\rho_{1}^{(1)}+\dots+\rho_{N}^{(1)}\geq tp\) we have \(S=0\) in Theorem 3.5 (2), giving \(\operatorname{fpt}(\mathfrak{a}_{p})\geq t\), which forces the equality. Using the well known inequality \(\operatorname{lct}(\mathfrak{a})\leq t\) gives \[t=\operatorname{fpt}(\mathfrak{a}_{p})\leq\operatorname{lct}(\mathfrak{a})\leq t,\] implying the assertion in the first part. For the second part note that for the set of prime numbers for which the entries of \(\rho\) add without carrying modulo \(p\), we get \(\operatorname{fpt}(\mathfrak{a}_{p})=\operatorname{fpt}(\mathfrak{a}_{p}^{\circ})\) from the first part of Theorem 3.5. The claim now follows from the inequalities \[\operatorname{fpt}(\mathfrak{a}_{p})\leq\operatorname{lct}(\mathfrak{a})\leq\operatorname{lct}(\mathfrak{a}^{\circ})=\operatorname{fpt}(\mathfrak{a}_{p}^{\circ}).\] Computational approaches to the calculation of \(F\)-pure thresholds have also been considered. The only software that we are aware of is the Macaulay2 package _FrobeniusThresholds_ [1], designed to estimate and calculate \(F\)-pure thresholds in several cases. This package incorporates some of the techniques for principal ideals given in [1, 1, 15]. From an algorithmic point of view, Theorems 3.5 and 3.8 require the verification of the fact that \(\mathcal{P}_{\boldsymbol{f}}\) has a unique maximal point. At first sight this task looks very complicated; however, by Proposition 2.15 one is instead faced with the easier task of checking that \(\mathcal{N}_{\boldsymbol{f}}\) is in diagonal position. This latter condition can be treated algorithmically even in higher dimensions by using available software dealing with the well-known theory of Newton polygons. Another algorithmic approach to the explicit calculation of \(F\)-pure thresholds (and log canonical thresholds) is considered by Shibuta and Takagi [17], where they proved that \(\operatorname{lct}(\mathfrak{a})\) is computable by linear programming for several classes of binomial ideals. The following example is taken from their work and illustrates how to cope with the condition that \(\mathcal{P}_{\boldsymbol{f}}\) has a unique maximal point. **Example 3.9** (Space Monomial Curves).: Take \(R=\mathbb{F}_{p}[x,y,z]\) and let \(\mathfrak{a}\) be the ideal generated by \(\boldsymbol{f}=(x^{a}-y^{b},z^{c}-x^{d}y^{e})\), with \(a,b,c\geq 1\), \(d\geq e\geq 0\) and \(ae+bd\geq ab\). Then \(\mathcal{P}_{\boldsymbol{f}}=\{(\gamma_{1},\gamma_{2},\gamma_{3},\gamma_{4})\in\mathbb{R}_{\geq 0}^{4}\mid a\gamma_{1}+d\gamma_{4}\leq 1,b\gamma_{2}+e\gamma_{4}\leq 1,\text{ and }c\gamma_{3}\leq 1\}\).
Since \(\mathcal{N}_{\boldsymbol{f}}\) has just one compact face generated by \((a,0,0)\), \((0,b,0)\) and \((0,0,c)\), Proposition 2.15 guarantees that the maximal face of \(\mathcal{P}_{\boldsymbol{f}}\) is given by \(\{(\gamma_{1},\gamma_{2},\gamma_{3},\gamma_{4})\mid a\gamma_{1}=1,b\gamma_{2}=1,c\gamma_{3}=1\text{ and }\gamma_{4}=0\}=\{(1/a,1/b,1/c,0)\}=\{\rho\}\). For instance if \(p\equiv 1\mod abc\) and \(ab+bc+ac\leq p-1\), then Theorem 3.8 (2) gives \(\operatorname{fpt}(\mathfrak{a})=|\rho|=1/a+1/b+1/c=\operatorname{lct}(\mathfrak{a})\). ## 4. Mixed Generalized Test Ideals and \(F\)-Volumes Test ideals were introduced by Hochster and Huneke [11, 12] as the characteristic \(p\) analogues of multiplier ideals. We follow the description of Blickle, Mustata, and Smith [1] and consider \(R\) a polynomial ring over a field \(\mathbb{k}\) of prime characteristic \(p\). Assume further that \(R\) is \(F\)-finite, meaning that the Frobenius map is finite. **Definition 4.1**.: 1. Given an ideal \(\mathfrak{b}\subseteq R\), the \(p^{e}\)-th _Frobenius root_ of \(\mathfrak{b}\), denoted \(\mathfrak{b}^{[1/p^{e}]}\), is the smallest ideal \(\mathfrak{a}\) of \(R\) such that \(\mathfrak{b}\subseteq\mathfrak{a}^{[p^{e}]}\). 2. Given a collection \(\mathfrak{a}_{1},\ldots,\mathfrak{a}_{t}\) of ideals in \(R\) and \(c=(c_{1},\ldots,c_{t})\in\mathbb{R}_{\geq 0}^{t}\), the _mixed generalized test ideal_ with exponents \(c_{1},\ldots,c_{t}\) is defined as \[\tau(\mathfrak{a}_{1}^{c_{1}}\cdots\mathfrak{a}_{t}^{c_{t}})=\bigcup_{e>0}(\mathfrak{a}_{1}^{\lceil c_{1}p^{e}\rceil}\cdots\mathfrak{a}_{t}^{\lceil c_{t}p^{e}\rceil})^{[1/p^{e}]}.\] Since \(R\) is Noetherian, the family of ideals \(\left\{(\mathfrak{a}_{1}^{\lceil c_{1}p^{e}\rceil}\cdots\mathfrak{a}_{t}^{\lceil c_{t}p^{e}\rceil})^{[1/p^{e}]}\right\}_{e>0}\) stabilizes, therefore \(\tau(\mathfrak{a}_{1}^{c_{1}}\cdots\mathfrak{a}_{t}^{c_{t}})=(\mathfrak{a}_{1}^{\lceil c_{1}p^{e}\rceil}\cdots\mathfrak{a}_{t}^{\lceil c_{t}p^{e}\rceil})^{[1/p^{e}]}\) for \(e\) big enough. Note that it may happen that for a given collection of ideals \(\mathfrak{a}_{1},\ldots,\mathfrak{a}_{t}\) there exist real vectors \(c=(c_{1},\ldots,c_{t})\) and \(d=(d_{1},\ldots,d_{t})\) such that \(\tau(\mathfrak{a}_{1}^{c_{1}}\cdots\mathfrak{a}_{t}^{c_{t}})=\tau(\mathfrak{a}_{1}^{d_{1}}\cdots\mathfrak{a}_{t}^{d_{t}})\). The subset of \(\mathbb{R}_{\geq 0}^{t}\) for which this occurs is called the _constancy region_ of \(\tau(\mathfrak{a}_{1}^{c_{1}}\cdots\mathfrak{a}_{t}^{c_{t}})\), see for instance [1, 1, 1]. The measure of a related set in \(\mathbb{R}^{t}_{\geq 0}\) was introduced under the name of \(F\)-volumes by Badilla-Cespedes, Nunez-Betancourt and Rodriguez-Villalobos [1]. **Definition 4.2**.: Consider a collection \(\mathfrak{a}_{1},\ldots,\mathfrak{a}_{t},\mathfrak{b}\) of ideals in \(R\) and assume that for each \(i\in\{1,\ldots,t\}\), \(\mathfrak{a}_{i}\subseteq\sqrt{\mathfrak{b}}\).
Given \(e\in\mathbb{Z}_{\geq 0}\) we define \[\mathrm{V}^{\mathfrak{b}}_{\mathfrak{a}_{1},\ldots,\mathfrak{a}_{t}}(p^{e})=\{(n_{1},\ldots,n_{t})\in\mathbb{Z}^{t}_{\geq 0}\ \mid\mathfrak{a}_{1}^{n_{1}}\cdots\mathfrak{a}_{t}^{n_{t}}\not\subseteq\mathfrak{b}^{[p^{e}]}\}.\] **Theorem 4.3** ([1, Theorem 2.13]).: _The limit_ \[\lim_{e\to\infty}\frac{1}{p^{et}}\cdot\mathrm{Card}(\mathrm{V}^{\mathfrak{b}}_{\mathfrak{a}_{1},\ldots,\mathfrak{a}_{t}}(p^{e}))\] _converges._ The previous limit is called the \(F\)-volume of \(\mathfrak{a}_{1},\ldots,\mathfrak{a}_{t}\) with respect to \(\mathfrak{b}\) and it is denoted by \(\mathrm{Vol}^{\mathfrak{b}}_{F}(\mathfrak{a}_{1},\ldots,\mathfrak{a}_{t})\). If \(R\) is a regular ring, then the \(F\)-volume measures the sum of the volumes of the constancy regions where \(\tau(\mathfrak{a}_{1}^{n_{1}}\cdots\mathfrak{a}_{t}^{n_{t}})\not\subseteq\mathfrak{b}\). When \(R\) is a polynomial ring over a field \(\Bbbk\) of prime characteristic \(p\), recall that we reserved the notation \(\mathfrak{m}=(x_{1},\ldots,x_{m})\) for the homogeneous maximal ideal of \(R\). In this case it is shown [1] that the \(F\)-volume of \(\mathfrak{a}_{1},\ldots,\mathfrak{a}_{t}\) with respect to \(\mathfrak{m}\) is bounded from above by the minimum between \(\mathrm{fpt}(\mathfrak{a}_{1})\cdots\mathrm{fpt}(\mathfrak{a}_{t})\) and the volume of the pyramid formed by \(\mathbb{R}^{t}_{\geq 0}\) and the plane with equation \(x_{1}+\ldots+x_{t}=\mathrm{fpt}(\mathfrak{a}_{1}+\ldots+\mathfrak{a}_{t})\). No nontrivial lower bounds are known for the \(F\)-volume of \(\mathfrak{a}_{1},\ldots,\mathfrak{a}_{t}\) with respect to \(\mathfrak{m}\), so we provide some bounds in terms of our splitting polytopes \(\mathcal{P}_{\boldsymbol{f}}\). We start with an easy calculation. **Proposition 4.4**.: _Let \(\boldsymbol{f}=(f_{1},\ldots,f_{t})\) be a polynomial mapping of elements in \(\mathfrak{m}\), and take \(\mathfrak{a}_{1}^{\circ},\ldots,\mathfrak{a}_{t}^{\circ}\) to be the collection formed by the term ideals of the principal ideals \((f_{1}),\ldots,(f_{t})\). Assume that a point \(\gamma\in\mathcal{P}_{\boldsymbol{f}}\) is written as \(\gamma=(\gamma_{1},\ldots,\gamma_{t})\) with \(\gamma_{i}\in\mathbb{R}^{l_{i}}\), then_ \[\max_{\gamma\in\mathcal{P}_{\boldsymbol{f}}}|\gamma_{1}|\cdot\cdots\cdot|\gamma_{t}|\leq\mathrm{Vol}^{\mathfrak{m}}_{F}(\mathfrak{a}_{1}^{\circ},\ldots,\mathfrak{a}_{t}^{\circ}).\] Proof.: By definition \(\gamma\in\mathcal{P}_{\boldsymbol{f}}\) means that \(\mathcal{E}_{\boldsymbol{f}}\langle\gamma\rangle_{e}\prec\mathcal{E}_{\boldsymbol{f}}\gamma\preceq\mathbf{1}_{m}\), implying \(x^{p^{e}\,\widehat{\mathcal{E}}_{1}\langle\gamma_{1}\rangle_{e}}\cdots x^{p^{e}\,\widehat{\mathcal{E}}_{t}\langle\gamma_{t}\rangle_{e}}\in(\mathfrak{a}_{1}^{\circ})^{p^{e}|\langle\gamma_{1}\rangle_{e}|}\cdots(\mathfrak{a}_{t}^{\circ})^{p^{e}|\langle\gamma_{t}\rangle_{e}|}\setminus\mathfrak{m}^{[p^{e}]}\).
Therefore \((p^{e}|\langle\gamma_{1}\rangle_{e}|,\ldots,p^{e}|\langle\gamma_{t}\rangle_{e}|)\) is a point of \(\mathrm{V}^{\mathfrak{m}}_{\mathfrak{a}_{1}^{\circ},\ldots,\mathfrak{a}_{t}^{\circ}}(p^{e})\), and we have the bound \[p^{et}\prod_{i=1}^{t}|\langle\gamma_{i}\rangle_{e}|\leq\mathrm{Card}\left(\mathrm{V}^{\mathfrak{m}}_{\mathfrak{a}_{1}^{\circ},\ldots,\mathfrak{a}_{t}^{\circ}}(p^{e})\right).\] Dividing by \(p^{et}\) and letting \(e\to\infty\) yields \(\prod_{i=1}^{t}|\gamma_{i}|\leq\mathrm{Vol}^{\mathfrak{m}}_{F}(\mathfrak{a}_{1}^{\circ},\ldots,\mathfrak{a}_{t}^{\circ})\), from which the conclusion easily follows. Finally we use the proof of Theorem 3.5 for the construction of a lower bound for the \(F\)-volume of the collection of principal ideals derived from a polynomial mapping \(\boldsymbol{f}=(f_{1},\ldots,f_{t})\). **Theorem 4.5**.: _Consider a polynomial mapping \(\boldsymbol{f}=(f_{1},\ldots,f_{t})\) of elements in \(\mathfrak{m}\). Suppose that \(\mathcal{P}_{\boldsymbol{f}}\) has a unique maximal point \(\rho=(\rho_{1},\ldots,\rho_{t})\) with \(\rho_{i}\in\mathbb{R}^{l_{i}}\), and set for \(i\in\{1,\ldots,t\}\)_ \[S_{i}=\sup\left\{\ell\in\mathbb{Z}_{\geq 0}\mid\sum_{j_{i}=1}^{l_{i}}\rho_{i,j_{i}}^{(e)}\leq p-1\quad\text{for every}\quad 0\leq e\leq\ell\right\}.\] _If \(I\) denotes the set of indices for which \(S_{i}\) is finite, then_ \[\prod_{i=1}^{t}\left(|\langle\rho_{i}\rangle_{S_{i}}|+\frac{1}{p^{S_{i}}}\right)\leq\operatorname{Vol}_{F}^{\mathfrak{m}}((f_{1}),\ldots,(f_{t})).\] Proof.: Note that in the case \(I=\emptyset\), the polynomial \[g=\prod_{i=1}^{t}f_{i}^{p^{e}|\langle\rho_{i}\rangle_{e}|}\notin\mathfrak{m}^{[p^{e}]},\] by the proof of Theorem 3.5 (1). This implies that \((p^{e}|\langle\rho_{1}\rangle_{e}|,\ldots,p^{e}|\langle\rho_{t}\rangle_{e}|)\) belongs to \(\operatorname{V}_{(f_{1}),\ldots,(f_{t})}^{\mathfrak{m}}(p^{e})\), and also \[p^{et}\prod_{i=1}^{t}|\langle\rho_{i}\rangle_{e}|\leq\operatorname{Card}\left(\operatorname{V}_{(f_{1}),\ldots,(f_{t})}^{\mathfrak{m}}(p^{e})\right).\] Dividing by \(p^{et}\) and letting \(e\to\infty\) concludes the proof of this case. We follow the same strategy when \(I\neq\emptyset\), but using this time the polynomial \[g=\prod_{i=1}^{r}f_{i}^{p^{S+e}|\alpha_{i}|}\cdot\prod_{i=r+1}^{t}f_{i}^{p^{S+e}|\langle\rho_{i}\rangle_{S+e}|},\] from the second part of the proof of Theorem 3.5. Therefore \[p^{(S+e)t}\left(\prod_{i=1}^{r}|\alpha_{i}|\cdot\prod_{i=r+1}^{t}|\langle\rho_{i}\rangle_{S+e}|\right)\leq\operatorname{Card}\left(\operatorname{V}_{(f_{1}),\ldots,(f_{t})}^{\mathfrak{m}}(p^{S+e})\right).\] The conclusion is achieved as before, dividing by \(p^{(S+e)t}\) and taking the limit. **Example 4.6**.: Take \(R=\mathbb{F}_{p}[x,y,z]\) and \(\boldsymbol{f}=(x^{2}+xy^{2},yz^{3})\) as in Example 3.6. Then \[\begin{cases}\operatorname{Vol}_{F}^{\mathfrak{m}}((x^{2}+xy^{2}),(yz^{3}))\geq\frac{1}{6}&\text{if}\quad p=2\\ \operatorname{Vol}_{F}^{\mathfrak{m}}((x^{2}+xy^{2}),(yz^{3}))\geq\frac{1}{9}&\text{if}\quad p=3\\ \operatorname{Vol}_{F}^{\mathfrak{m}}((x^{2}+xy^{2}),(yz^{3}))\geq\frac{2}{9}&\text{if}\quad p\equiv 1\mod 6\\ \operatorname{Vol}_{F}^{\mathfrak{m}}((x^{2}+xy^{2}),(yz^{3}))\geq\frac{2}{9}-\frac{1}{9p}&\text{if}\quad p\equiv 5\mod 6.\end{cases}\] Of course the inequality of Theorem 4.5 can be strict, as we will show in Example 4.7 below. This demonstrates the need to continue looking for bounds for the \(F\)-volume.
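Before turning to that example, note that the lower bound of Theorem 4.5 is easy to evaluate once the maximal point \(\rho\) is known. The sketch below (ours, not part of the paper; all names are illustrative) does this with exact rational arithmetic, computing digits via the greedy expansion of Definition 2.1; for numbers whose base-\(p\) expansion terminates one should use the non-terminating expansion instead (as done in Example 3.6 for \(p=3\)), which this simple version does not handle. It reproduces the \(p=2\) and \(p=5\) cases of Example 4.6.

```python
# Evaluate the bound of Theorem 4.5: product over i of |<rho_i>_{S_i}| + 1/p^{S_i},
# with the convention 1/p^infinity = 0 and <rho_i>_infinity = rho_i.
from fractions import Fraction

E_MAX = 200   # if no carry appears up to this depth we treat S_i as infinite

def p_digits(alpha, p, depth):
    """First `depth` digits alpha^(1), ..., alpha^(depth) of the p-expansion of alpha in [0, 1)."""
    digits, rest = [], Fraction(alpha)
    for _ in range(depth):
        rest *= p
        d = int(rest)          # greedy digit
        digits.append(d)
        rest -= d
    return digits

def theorem_4_5_bound(rho_blocks, p):
    """rho_blocks groups the coordinates of rho as (rho_1, ..., rho_t), as in Definition 2.6."""
    bound = Fraction(1)
    for rho_i in rho_blocks:
        rows = [p_digits(r, p, E_MAX) for r in rho_i]
        # S_i = largest level up to which the digit sums stay <= p - 1
        S_i = next((lvl for lvl in range(E_MAX)
                    if sum(row[lvl] for row in rows) > p - 1), None)
        if S_i is None:                               # no carrying detected: S_i = infinity
            factor = sum(Fraction(r) for r in rho_i)  # the factor is |rho_i|
        else:
            trunc = sum(Fraction(row[k], p ** (k + 1))
                        for row in rows for k in range(S_i))
            factor = trunc + Fraction(1, p ** S_i)
        bound *= factor
    return bound

rho = [(Fraction(1, 3), Fraction(1, 3)), (Fraction(1, 3),)]    # rho_1, rho_2 from Example 3.6
print(theorem_4_5_bound(rho, 2))    # 1/6, the p = 2 case of Example 4.6
print(theorem_4_5_bound(rho, 5))    # 1/5 = 2/9 - 1/(9*5), the p = 5 case of Example 4.6
```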
**Example 4.7**.: Let \(R=\mathbb{F}_{2}[x,y]\) and consider the polynomial mapping \(\boldsymbol{f}=(x,x+y^{2})\). Then \(\operatorname{Vol}_{F}^{\mathfrak{m}}((x),(x+y^{2}))=3/4\) [BCNBRV22, Example 2.15]. On the other hand, \(\mathcal{P}_{\boldsymbol{f}}\) is exactly \([0,1]\times\left[0,\frac{1}{2}\right]\) with maximal point \((\rho_{1},\rho_{2})=(1,1/2)\). Since \(S_{1}=S_{2}=\infty\), we obtain \(|\rho_{1}|\cdot|\rho_{2}|=1/2\). ## Acknowledgments We thank Luis Nunez-Betancourt for helpful comments and suggestions. We also thank Manuel Gonzalez Villa for inspiring conversations.
2309.14809
ENIGMA-51: Towards a Fine-Grained Understanding of Human-Object Interactions in Industrial Scenarios
ENIGMA-51 is a new egocentric dataset acquired in an industrial scenario by 19 subjects who followed instructions to complete the repair of electrical boards using industrial tools (e.g., electric screwdriver) and equipment (e.g., oscilloscope). The 51 egocentric video sequences are densely annotated with a rich set of labels that enable the systematic study of human behavior in the industrial domain. We provide benchmarks on four tasks related to human behavior: 1) untrimmed temporal detection of human-object interactions, 2) egocentric human-object interaction detection, 3) short-term object interaction anticipation and 4) natural language understanding of intents and entities. Baseline results show that the ENIGMA-51 dataset poses a challenging benchmark to study human behavior in industrial scenarios. We publicly release the dataset at https://iplab.dmi.unict.it/ENIGMA-51.
Francesco Ragusa, Rosario Leonardi, Michele Mazzamuto, Claudia Bonanno, Rosario Scavo, Antonino Furnari, Giovanni Maria Farinella
2023-09-26T10:14:44Z
http://arxiv.org/abs/2309.14809v2
ENIGMA-51: Towards a Fine-Grained Understanding of Human-Object Interactions in Industrial Scenarios ###### Abstract ENIGMA-51 is a new egocentric dataset acquired in an industrial scenario by 19 subjects who followed instructions to complete the repair of electrical boards using industrial tools (e.g., electric screwdriver) and equipments (e.g., oscilloscope). The 51 egocentric video sequences are densely annotated with a rich set of labels that enable the systematic study of human behavior in the industrial domain. We provide benchmarks on four tasks related to human behavior: 1) untrimmed temporal detection of human-object interactions, 2) egocentric human-object interaction detection, 3) short-term object interaction anticipation and 4) natural language understanding of intents and entities. Baseline results show that the ENIGMA-51 dataset poses a challenging benchmark to study human behavior in industrial scenarios. We publicly release the dataset at [https://iplab.dmi.unict.it/ENIGMA-51](https://iplab.dmi.unict.it/ENIGMA-51). ## 1 Introduction Every day, humans interact with the surrounding world to achieve their goals. These interactions are often complex and require multiple steps, skills, and involve different objects. For example, in an industrial workplace, when performing maintenance of industrial machinery, a worker interacts with several objects and tools while repairing the machine (e.g., _wear PPEs, take the screwdriver_), testing it (e.g., _press the button on the electric panel_), and writing a report (e.g., _take the pen, write the report_). To properly assist humans, an intelligent system should be able to model human-object interactions (HOIs) from real-world observations captured by users wearing smart cameras (e.g., smart glasses) [10, 14, 39]. It is also plausible that predicting human-object interactions in advance can benefit an intelligent system help workers to avoid mistakes, or to improve their safety. For example, during the execution of a maintenance procedure, an AI assistant should be able to understand when the worker is interacting with the objects, show technical information, provide instructions on how to interact with each object, alert the worker of potential safety risks (e.g., _Before touching the electrical board, turn off the power supply!_), and suggest what the next interaction is. Furthermore, an intelligent system should be able to have a natural language conversation with workers. It should also be able to extract useful information from their speech, and figure out what they are trying to achieve. This way, it can provide assistance for supporting their needs, preferences, and goals. In general, tasks focused on understanding human behaviour have been extensively studied thanks to the availability of public datasets that consider multiple domains [24, 55, 2, 36] or specific ones, such as kitchens [12, 37, 73], daily life [44, 32], and industrial-like scenarios [53, 47]. However, since data acquisition in a real industrial scenario is challenging due to privacy issues, safety and industrial secret protection, the datasets available to date do not reflect real industrial environments, considering proxy activities such as employing toy models made of textureless parts [47, 53]. Considering what stated above, to enable research in this field, we present ENIGMA-51, a new dataset composed of 51 egocentric videos acquired in an industrial environment which simulates a real industrial laboratory. 
The dataset was acquired by 19 subjects who wore a Microsoft HoloLens 2 [40] headset and followed audio and AR instructions provided by the device to complete repairing procedures on electrical boards. The subjects interact with industrial tools such as an electric screwdriver and pliers, as well as with electronic instruments such as a power supply and an oscilloscope while executing the steps to complete a specific procedure. Apart the current interactions, we annotated which objects and hands will be involved in future interactions, as well as the time to contact (TTC) to indicate when the future interaction will start. This allows us to explore the task of predicting the future human-object interactions considering the industrial domain. Textual instructions used for the data acquisition, also allow to study tasks which focus on the knowledge extraction of intents and entities from the text while users are interacting with the objects. In the industrial domain these tasks have not been explored due to the lack of public egocentric datasets explicitly annotated with intents and entities. Together with the manually annotations, we release the pseudo-labels and the pre-extracted features to enable further investigations beyond the current study. In particular, we generated hands and objects segmentation masks [30], and hands keypoints [11]. The provided visual features are extracted with DINOV2 [42] and CLIP [45]. To allow further research in the context of scalable models trained using synthetic data, we share the 3D models of the laboratory and all considered industrial objects. Figure 1 shows examples of images acquired in the industrial environment where the dataset was acquired together with the annotations. To highlight the usefulness of the proposed dataset, we performed baseline experiments related to 4 fundamental tasks focused on understanding human behavior from first person vision in the considered industrial context: 1) Untrimmed Temporal Detection of Human-Object Interactions, 2) Egocentric Human-Object Interaction (EHOI) Detection, 3) Short-Term Object Interaction Anticipation and 4) Natural Language Understanding of Intents and Entities. In sum the contributions of this work are as follows: 1) we introduce ENIGMA-51, a new dataset composed of 51 egocentric videos acquired in an industrial domain; 2) we manually annotated the dataset with a rich set of annotations aimed at studying human behavior; 3) we propose a benchmark to study human behavior in an industrial environment exploring 4 different tasks, showing that the current state-of-the-art approaches are not sufficient to solve the considered problems in the industrial setting; 4) we provide additional labels and features exploiting foundational models, with the aim to push research on additional tasks on the proposed industrial dataset. The ENIGMA-51 dataset and its annotations are available at the following link: [https://iplab.dmi.unict.it/ENIGMA-51](https://iplab.dmi.unict.it/ENIGMA-51). ## 2 Related Work Our work is related to previous research lines which are revised in the following sections. ### Ego Datasets for Human Behavior Studies Previous works have proposed egocentric datasets focusing on human behavior understanding. The Activity of Daily Living (ADL) [44] dataset is one of the first datasets acquired from the egocentric perspective. It includes 20 Figure 1: Frames have been annotated with a rich set of labels (top-left). 
Sequences have been annotated by determining the interaction key frame (bottom-center), assigning the verb (green) and the active object (orange). For each interaction key frame, we provide objects and hand bounding boxes and the relation between them. In the past frames, we annotated also the next active objects and we derived the time to contact (TTC) (bottom-left). We also generated pseudo-labels for semantic masks and hand keypoints, and we released 3D models for the objects and for the laboratory (top-right). Moreover, a specific instruction belonging to the procedure is associated with each interaction key frame (bottom-right). egocentric videos where participants were involved in daily activities. It comprises temporal action annotations aimed to study egocentric activities. EGTEA Gaze+ [34] focuses on cooking activities involving 32 subjects who recorded 28 hours of videos. It has been annotated with pixel-level hand masks and 10325 action annotations including 19 action verbs and 51 object nouns. The THU-READ [59] dataset is composed of 1920 RGB-D sequences captured by 8 participants who performed 40 different daily-life actions. The EPIC-Kitchens datasets [12, 13] are collections of egocentric videos that capture natural actions in kitchen settings. EPIC-Kitchens-55 [12] consists of 432 videos with annotations for 352 objects and 125 verbs. EPIC-Kitchens-100 [13] is a larger version of EPIC-Kitchens-55 with more videos (700), scenes (45) and hours (100). Assembly101 [53] simulates an industrial scenario and it is composed of 4321 assembly and disassembly videos of toy vehicles made of textureless parts. It offers a multi-view perspective, comprising static and egocentric recordings annotated with 100K coarse and 1M fine-grained action segments and with 18M 3D hand poses. While these datasets explore actions and activities, other datasets have been proposed to study human-object interactions from the egocentric perspective in a more fine-grained fashion. The Grasp Understanding (GUN-71 [50]) dataset, contains 12,000 images of hands manipulating 28 objects labelled with 71 grasping categories. The 100 Days Of Hands (100DOH) [54] dataset captures hands and objects involved in generic interactions. It consists of 100K frames collected over 131 days with 11 types of interactions. It comprises bounding boxes around the hands and the active objects, the side of the hands and the contact state (which indicates if the hand is touching an object or not). Other works focused on human-object interactions providing egocentric video datasets. EPIC-KITCHENS VISOR [15] contains videos from EPIC-KITCHENS-100 [13] annotated with 272K semantic masks for 257 classes of objects, 9.9M interpolated dense masks, and 67K human-object interactions. The authors of [36] proposed the HOI4D dataset which is composed of 2.4 million RGB-D egocentric frames across 4000 sequences acquired in 610 indoor rooms. The authors of [18] studied hands interacting with articulated objects (e.g., scissors, laptops) releasing the ARCTIC dataset. It comprises 2.1M high-resolution images annotated with 3D hand and object meshes and with contact information. The VOST dataset [60] focuses on objects that dramatically change their appearance. It includes 713 sequences where the objects have been annotated with segmentation masks. Ego4D [24] is a massive-scale dataset composed of 3670 hours of daily-life activity videos acquired in different domains by 923 unique participants. 
It comes with a rich set of annotations to address tasks concerning the understanding of the past, present, and future. More related to our work are datasets acquired in the industrial-like domain [47, 53]. Unlike Assembly101 [53] and MECCANO [47] we consider an industrial setting which simulates a real industrial laboratory. Unlike Assembly101, we provide fine-grained annotations to study different aspects of human behavior. Table 1 shows the key attributes of the analyzed datasets. Previous datasets have focused on kitchens, daily activities, and industrial-like scenarios exploring different aspects of the human behavior. In order to perform a systematic study on human behaviour and human-object interactions in an industrial domain, we present the ENIGMA-51 dataset with a rich set of fine-grained egocentric videos together with annotations. ### Untrimmed Temporal Detection of Human-Object Interactions The proposed untrimmed temporal detection of human-object interactions task is related to previous research on untrimmed action detection. Existing approaches focus on one-stage methods, performing both temporal action detection and classification within a single network, aiming to identify actions without using action proposals. Recent \begin{table} \begin{tabular}{l l l l l l l l} \hline **Dataset** & **Year** & **Video?** & **EHOI Annotations?** & **Settings** & **Hours** & **Sequences** & **Subjects** \\ \hline ENIGMA-51 (ours) & 2024 & ✓ & ✓ & Industrial & 22 & 51 & 19 \\ MECCANO[47] & 2023 & ✓ & ✓ & Industrial-like & 7 & 20 & 20 \\ \hline Ego4D[24] & 2022 & ✓ & ✓ & Multi-domain & 3670 & 9650 & 923 \\ THU-READ[59] & 2019 & ✓ & ✓ & Daily activities & 224 & 1920 & 8 \\ EPIC-KITCHENS-VISOR[15] & 2022 & ✓ & ✓ & Kitchen activities & 100 & 700 & 45 \\ HOI4D[36] & 2022 & ✓ & ✓ & ✓ & Objects manipulation & 22 & 4000 & N/A \\ VOST[60] & 2023 & ✓ & ✓ & Daily + Industrial-like & 4 & 713 & N/A \\ ARCTIC[18] & 2023 & ✓ & ✓ & Object manipulation & 2 & 339 & 10 \\ \hline 100 Days of Hands[55] & 2020 & X & ✓ & Daily activities & 3144 & 27000 & 1350+ \\ GUN-71[50] & 2015 & X & ✓ & Daily activities & N/A & N/A & 8 \\ \hline Assembly101[53] & 2022 & ✓ & X & Industrial-like & 513 & 362 & 53 \\ EGTEA Gaze+[34] & 2017 & ✓ & X & Cooking activities & 28 & 86 & 32 \\ ADL[44] & 2012 & ✓ & X & Daily activities & 10 & 20 & 20 \\ \hline \end{tabular} \end{table} Table 1: Overview of egocentric datasets with a particular focus on those that allow the study of human-object interactions sorted by the number of hours. works achieved state-of-the-art results using Vision Transformers. The authors of [69] proposed ActionFormer, a transformer network designed for temporal action localization in videos. This method estimates action boundaries through a combination of multiscale feature representation and local self-attention, which effectively models temporal dependencies. TriDet [56] uses a Trident-head to model the action boundary by estimating the relative probability distribution around the boundary. Features are extracted through a feature pyramid and aggregated with the proposed scalable granularity perception layer. Other methods focused on masked video modeling for pretraining one-stage methods. In particular, Intern-Video [66] uses a combination of generative and discriminative self-supervised learning techniques by implementing masked video modeling and video-language contrastive learning. 
Recently, the authors of [64] proposed VideoMAE V2, which scaled VideoMAE [61] for building video foundation models through a dual masking strategy. In this work, we assess the performance of state-of-the-art temporal action detection methods on the proposed ENIGMA-51 dataset considering ActionFormer [69]. ### Egocentric HOIs Detection Several works have explored different aspects of human-object interactions (HOIs) from the egocentric perspective. The authors of [55] proposed a method based on the Faster-RCNN [49] object detector to detect the hands and the objects present in the image, categorizing objects as either _active_ or _passive_, determining the side of the hands (_left_ or _right_), and predicting the contact state between the hand and the associated active object. The authors of [48, 47] investigated human-object interactions predicting bounding boxes around the active objects and the verb which describes the interaction exploiting multimodal signals with different instances of SlowFast networks [20]. The authors of [3] presented an architecture for detecting human-object interactions using two YOLOv4 object detectors [4] and an attention-based technique. The authors of [24] explored object transformations introducing the novel task of object state change detection and classification. While most of the analysis of human-object interactions relies on bounding box annotations, some works exploited hand poses and semantic segmentation masks [37, 15], contact boundaries [71], which represents the spatial area where the interaction occurs, or additional modalities, such as depth maps and instance segmentation masks, to learn more robust representations [33]. In this work, we evaluate the HOIs detection method proposed in [33] exploiting the fine-grained human-object interaction annotations of the ENIGMA-51. ### Short-Term Object Interaction Anticipation Past works addressed different variants of the short-term object interaction anticipation task. The authors of [21] focused their study on the prediction of the next-active objects by analyzing their trajectories over time. The authors of [29] proposed a model that exploits a predicted visual attention probability map and the hands' positions to predict next-active objects. The authors of [16] predicted future actions exploiting hand-objects contact representations. In particular, the proposed approach predicts future contact maps and segmentation masks, which are exploited by the Egocentric Object Manipulation Graphs framework [17] for predicting future actions. The short-term object interaction anticipation task has been more formally defined in [24]. To tackle the task, the authors of [24] released a two-branch baseline composed of an object detector [49] to detect next-active objects and a SlowFast [20] 3D network to predict the verb and the time to contact. The proposed baseline was extended by the authors of [8] who replaced Faster-RCNN with DINO [70], and SlowFast with a VideoMAE-pretrained transformer network [61]. Recently, StillFast an end-to-end approach has been proposed by [46]. The method simultaneously processes a high-resolution still image and a video with a low spatial resolution, but a high temporal resolution. Recent state-of-the-art performances have been achieved by [43] exploiting language. They proposed a multimodal transformer-based architecture able to summarise the action context leveraging pre-trained image captioning and vision-language models. 
Due to its end-to-end training ability, in this work, we used StillFast [46] as a baseline for the short-term object interaction anticipation benchmark on ENIGMA-51. ### Natural Language Understanding of Intents and Entities Understanding intents and entities from text to extract knowledge about human-object interactions in the industrial domain is a task that has not been explored due to the lack of public egocentric datasets suitable for this task. The authors of [62] addressed both intent classification and slot filling as a seq2seq problem, using an architecture that takes text input, generates ELMo embeddings [51], and incorporates one BiLSTM [52] and self-attention layers for each task, outputting task-specific BIO (Beginning Inside and Outside) labels. In [9] the BERT architecture has been explored to tackle the limited generalization capabilities of natural language understanding and propose a joint intent and classification architecture. The authors of [6] incorporate pre-trained word embeddings from language models and combine them with sparse word and character level n-grams features alongside a Transformer architecture. While some works use speech-to-text models to convert speech input into text, others handle speech directly (Spoken Language Understanding). Earlier approaches proposed RNN-based or LSTM-based contextual SLU [57, 26] which take into account previously predicted intents and slots. The authors of [25] proposed a BiLSTM-based architecture to manage the interrelated connections between intent and slots. In [68] has been introduced the Token-and-Duration Transducer (TDT) architecture for Automatic Speech Recognition (ASR), able to jointly predict both a token and its duration, enabling the skipping of input frames during inference based on the predicted duration output, resulting in significantly improved efficiency. Since the ENIGMA-51 dataset comprises textual instructions about the activities performed by subjects, we exploited this textual information to explore the task of predicting intents and entities to extract knowledge about human-object interactions in the industrial domain. ## 3 The ENIGMA-51 Dataset In our ENIGMA laboratory there are 25 different objects that can be grouped into fixed objects (such as an _electric panel_) and movable objects (such as a _screwdriver_). Differently than other egocentric datasets [47, 53] that contain industrial-like objects without textures, ENIGMA-51 includes real industrial objects as shown in Figure 1. The complete list of the objects present in the ENIGMA laboratory is reported in the supplementary material. ### Data Acquisition To collect data suitable to study human behavior in industrial domain, we designed two procedures consisting of instructions that involve humans interacting with the objects present in the laboratory to achieve the goal of repairing two electrical boards (see Figure 1 for visual examples). In particular, we designed two repairing procedures, one for each electrical board (_high and low voltage_), with the help of industrial experts. For each procedure, we considered 4 different versions varying the use of a _screwdriver_ or _electric screwdriver_ and the electrical component to solder (_resistor, capacitor or transformer_). 
Each procedure is composed of more than 100 steps, referencing objects and actions that were expected to be carried out in the industrial laboratory such as _Turn on the welder using the switch on the corresponding socket (second from right)_ and _Set the temperature of the welder to 480 \({}^{\circ}\)C using the yellow "UP" button_. Based on these instructions, we developed a custom Microsoft HoloLens 2 [40] application which provided the instructions through audio, images and AR during the acquisition phase1. Considering that we designed two different repair procedures, each subject acquired at least one repairing video for each electric board obtaining a total of 51 videos. The 19 participants had different levels of experience in repairing electrical boards and using industrial tools. An example of the captured data is reported in Figure 1. For each participant, we acquired the RGB stream from the Microsoft HoloLens 2 with a resolution of 2272x1278 pixels with a framerate of _30 fps_. The average duration of the captured videos is 26.32min for a total of 22 hours of videos. We also synchronized the audio instructions with the captured video by assigning a timestamp when the user moved to the next instruction. Footnote 1: Additional information about the repairing procedures are available in the supplementary material ### Data Annotation We labelled the ENIGMA-51 dataset with a rich set of fine-grained annotations that can be used and combined to study different aspects of human behavior. Table 2 summarizes statistics about the collected dataset. **Temporal and Verb Annotations:** We identified all _interaction key frames_ in the 51 videos. For each identified interaction key frame, we assigned a timestamp and a verb describing the interaction. Our verb taxonomy is composed of 4 verbs: _first-contact_, _de-contact_, _take_, _and release_. The 4 considered verbs represent the basic actions that a user performs to interact with objects. Note that the difference between _first-contact_ and _take_ is that _first-contact_ happens when the hand touches an object without taking it (e.g., pressing a button), while _de-contact_ is the first frame in which the hand-object contact breaks (e.g., end of pressing a button) and _release_ when the object is no longer held in the hand (e.g., put the screwdriver on the table). With this procedure, we annotated 14,036 interactions. Figure 1 reports an example of an interaction key frame with all the provided annotations, while Figure 2-left shows the verbs distribution in the 51 videos. **Object Annotations:** We considered 25 object classes which include both fixed (e.g., electric panel, oscilloscope) and movable objects (e.g., screwdriver, pliers) to assign a class to the objects present in the interaction key frames and in the past frames2. 
Each object annotation consists in a tuple \((class,x,y,w,h,state)\), where \(class\) represents the class of the object, \((x,y,w,h)\) are the 2D coordinates which \begin{table} \begin{tabular}{l c c c c} \hline \hline **Splits** & **Train** & **Val** & **Test** & **Total** \\ \hline \# Videos & 27 & 8 & 16 & 51 \\ \# Videos Length & \(\simeq\)11h & \(\simeq\)4h & \(\simeq\)7h & \(\simeq\)22h \\ \# Images & 25,311 & 8,528 & 11,666 & 45,505 \\ \# Objects & 152,865 & 53,486 & 68,784 & 275,135 \\ \# Active Objects & 4,709 & 1,700 & 2,933 & 9,342 \\ \# Hands & 31,249 & 11,322 & 13,902 & 56,473 \\ \# Hands in contact & 5,039 & 1,833 & 3,171 & 100,043 \\ \# Interactions frames & 6,386 & 2,150 & 4,061 & 12,597 \\ \# Interactions & 7,133 & 2,406 & 4,497 & 14,036 \\ \# Past frames & 19,090 & 6,437 & 7,683 & 33,210 \\ \# Next Object Interactions & 21,535 & 7,280 & 8,499 & 37,314 \\ \hline \hline \end{tabular} \end{table} Table 2: Statistics of the ENIGMA-51 dataset considering the Training, Validation and Test splits. define the bounding box around the object in the frame, and the \(state\) indicates if the object is involved in an interaction or not (_active object vs. passive object_). With this annotation procedure, we annotated 275,135 objects. Figure 2-right reports the distributions of the objects over the 51 videos of the ENIGMA-51 dataset. **Hands Annotations:** We annotated hands bounding boxes in the interaction key frames and in past frames. To speed up this annotation process, we generated pseudo-labels by processing the interaction key frames with a hand-object detector [55], considering only the information related to the hands. Then, the annotators manually refined the bounding boxes, correcting the side of the hand and associating the hand with the previously labelled active object. Following this procedure, we labelled a total of 56,473 hands. **EHOI Annotations:** For each of the interaction key frames, we considered: 1) hands and active object bounding boxes, 2) hand side (left and right), 3) hand contact state (contact and no contact), 4) hand-object relationships, and 5) object categories. For each hand, we assigned the _hand contact state_ to _contact_ if the hand was involved in an interaction of the type _first-contact_ or _take_, and _no-contact_ for the _release_ and _de-contact_ categories. Additionally, to make the annotations consistent and uniform, we assigned the _hand contact state_ to _contact_ even for the hands that were already in physical contact with objects. Following this procedure, we annotated 12,597 interaction frames, 17,363 hands of which 10,043 were in contact, and 9.342 active objects. **Next Object Interaction Annotations:** Starting from the interaction key frame, we sampled frames every 0.4 seconds going backward up to 1.2 seconds before the beginning of the interaction timestamp. With this sampling strategy, we obtained 33210 past frames. We annotated the past frames with next object interaction annotations which consists of a tuple \((class,x,y,w,h,state,ttc)\) where \(class\) represents the class of the object, \((x,y,w,h)\) are the 2D bounding box coordinates, \(state\) indicates if the objects will be involved in an interaction and \(ttc\) is a real number which indicates the time in seconds between the current timestamp and the beginning of the interaction. Figure 1 - bottom-left shows an example of labelled past frames. 
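The annotation tuples above lend themselves to a simple programmatic representation. The snippet below is a minimal sketch of a Python-side record for the next object interaction annotations; the field names and the helper are illustrative assumptions and do not reflect the exact layout of the released annotation files.

```python
# Illustrative record for a next object interaction annotation
# (class, x, y, w, h, state, ttc); field names are assumptions.
from dataclasses import dataclass

FPS = 30  # the egocentric videos are recorded at 30 fps

@dataclass
class NextObjectInteraction:
    obj_class: str   # one of the 25 object classes
    x: float         # top-left corner of the 2D bounding box
    y: float
    w: float         # width and height of the bounding box
    h: float
    active: bool     # whether the object will be involved in the next interaction
    ttc: float       # time to contact, in seconds from the current frame

    def contact_frame_offset(self) -> int:
        """Frame offset at which the annotated interaction is expected to start."""
        return round(self.ttc * FPS)

# Example: an object annotated 0.8 s before the interaction key frame.
ann = NextObjectInteraction("screwdriver", 410.0, 220.0, 96.0, 64.0, True, 0.8)
assert ann.contact_frame_offset() == 24
```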
**Utterances:** Based on the instructions used for the acquisition of the dataset, we collected 265 textual utterances, which represent the types of questions that a worker might pose to a supervisor colleague while following a procedure within an industrial setting such as _"How can I use the oscilloscope?"_ or _"Which is the next step that I do?"_. We manually annotated user goals as "intents" (e.g. "object-instructions") and key information as "entities" (e.g. "object") considering 24 intent classes and 4 entity types3. To enrich this set of utterances, we generated similar synthetic data by interacting with ChatGPT [41]. This study resulted in the creation of 100 unique utterances for each intent3. The generated data was divided into three sets, G10, G50, and G100 which contain respectively 10, 50, and 100 generated unique utterances for each intent. Note that, all the utterances in G10 are also in G50 and G100, and all the utterances in G50 are also in G100. Footnote 3: See the supplementary material for more details. **Additional Resources:** In order to enrich the ENIGMA-51 dataset, we release a set of resources useful to improve the impact of the dataset. We provide segmentation masks for the hands and the objects using SAM-HQ [30] and the 2D pose for the hands with MMPOSE [11]. We also extracted visual representations through DINOv2 [42] and CLIP [45]. The 3D models of ENIGMA Laboratory and of all industrial objects within it have been acquired using the Matterport [38] and ARTECE EVA [1] scanners, to enable the use of synthetic data to train scalable methods3. Figure 2: Distribution of verb (left) and object (right) classes over the 51 videos composing the ENIGMA-51 dataset. ## 4 Benchmark and baselines results ### Untrimmed Temporal Detection of Human-Object Interactions **Task:** We consider the problem of detecting 4 basic human-object interactions ("_take_", "_release_", "_first-contact_", and "_de-contact_") from the untrimmed egocentric videos of the ENIGMA-51 dataset. Differently from the standard definition of untrimmed action detection, in this task, a prediction is represented as a tuple \((\hat{c},\hat{t}_{k},s)\), where \(\hat{c}\) and \(\hat{t}_{k}\) are respectively the predicted class and key timestamp (the timestamp of the interaction key frame) and \(s\) is a confidence score. **Evaluation Measures:** We evaluated our baselines using point-level detection mAP (p-mAP) [58]. We considered predictions as correct when they satisfied two criteria: 1) the interaction class matched the ground truth and 2) the difference between the predicted and ground truth timestamps is within a certain temporal threshold. We considered different temporal offset thresholds ranging from 1 to 10 seconds with an increment of one second [22, 23]; we averaged these values obtaining the mp-mAP values. **Baseline:** Our baseline for this task is based on ActionFormer [69]. It takes the pre-extracted video features as input and gives action boundaries (start and end timestamps) as outputs. Given our focus on predicting the timestamp when the HOI occurs, we considered only the predicted action start4 as output given by ActionFormer. Footnote 4: Additional information on implementation details, experiments, and results are reported in the supplementary material. **Results:** Table 3 reports the results of the baseline. 
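For reference, the point-level matching behind the p-mAP measure can be sketched as follows. This is an illustrative re-implementation of the protocol described above (the class must match and the timestamp error must fall within the temporal threshold; predictions are matched greedily in order of confidence), not the official evaluation code.

```python
# Illustrative point-level AP / p-mAP computation for timestamped predictions.
import numpy as np

def point_ap(preds, gts, threshold):
    """preds: list of (timestamp, score) for one class; gts: list of timestamps."""
    preds = sorted(preds, key=lambda p: -p[1])            # high confidence first
    matched = [False] * len(gts)
    tp, fp = np.zeros(len(preds)), np.zeros(len(preds))
    for i, (t, _) in enumerate(preds):
        dists = [abs(t - g) if not matched[j] else np.inf for j, g in enumerate(gts)]
        j = int(np.argmin(dists)) if dists else -1
        if j >= 0 and dists[j] <= threshold:
            matched[j], tp[i] = True, 1                   # true positive
        else:
            fp[i] = 1                                     # false positive
    tp, fp = np.cumsum(tp), np.cumsum(fp)
    recall = tp / max(len(gts), 1)
    precision = tp / np.maximum(tp + fp, 1e-9)
    ap = 0.0                                              # 101-point interpolation
    for r in np.linspace(0, 1, 101):
        above = precision[recall >= r]
        ap += (above.max() if above.size else 0.0) / 101
    return ap

def mp_map(per_class_preds, per_class_gts, thresholds=range(1, 11)):
    """Average per-class AP over classes and over temporal thresholds (1-10 s)."""
    return np.mean([
        np.mean([point_ap(per_class_preds.get(c, []), per_class_gts[c], th)
                 for c in per_class_gts])
        for th in thresholds
    ])
```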
We considered three variants of the task: 1) detecting only _contact_ and _de-contact_ interactions (first row), 2) considering only _take_ and _release_ interactions (second row), and 3) considering all the four interactions (third row). We obtained mp-mAP values of 41.45%, 66.41%, and 35.93%, respectively, for _"take vs. release"_, _"first contact vs. de-contact"_, and _"all interactions"_. The results highlight that detecting _"take"_ and _"release"_ interactions (first row) are more challenging compared to finding _"first contact"_ and _"de-contact"_ interactions (41.45% vs. 66.41%) due to the different semantic complexity. Moreover, when all the four interactions are considered, the performance decreases, obtaining a mp-mAP of 35.93%4. Footnote 5: [https://github.com/fpv-iplab/stillfast](https://github.com/fpv-iplab/stillfast) ### Egocentric HOI Detection **Task:** We consider the problem of detecting EHOIs from egocentric RGB images following the task definition proposed in [55, 33]. Given an input image, the aim is to predict the triplet \(<\)_hand, hand contact state, active object\(>\)_. Additional details about the task are reported in [55, 33]. **Baselines:** The adopted baseline is based on the method proposed in [55]. We used the implementation proposed in [33] which extends a two-stage object detector with additional modules that exploit hand features to predict the _hand contact state_ (in contact or not in contact), the _side of hand_ (left and right), and an _offset vector_ that indicates which object the hand is interacting with. Since the considered baseline is able to detect at most one contact per hand, we selected a subset of the \(12,597\) interaction frames. This subset contains \(15,955\) hands of which \(8,753\) are in contact with an object, for a total of \(7,680\) active objects. **Evaluation Measures:** We used the following metrics based on standard _Average Precision_[55, 33]: 1) _AP Hand_: _AP_ of the hand detections, 2) _AP Hand+Side_: _AP_ of the hand detections considering the correctness of the hand side, 3) _AP Hand+State_: _AP_ of the hand detections considering the correctness of the hand state, 4) _mAP Hand+Obj_: _mAP_ of the \(<\)_hand, active object\(>\) detected pairs, and 5) _mAP Hand+All_: combinations of _AP Hand+Side_, _AP Hand+State_, and _mAP Hand+Obj_ metrics. **Results:** Table 4 reports the results obtained with the proposed baseline. Results show that the baseline achieved a _AP Hand_ of \(90.81\%\), a _AP Hand + Side_ of \(90.35\%\) ), a _mAP H.+State_ of \(73.31\%\), a _mAP H.+Obj_ of \(46.51\%\) and a _mAP H.+All_ of \(46.24\%\), pointing out that the use of domain-specific data in training is needed to exploit the knowledge of the industrial objects to support workers in the industrial domain. ### Short-Term Object Interaction Anticipation **Task:** The short-term object interaction anticipation task [24] aims to detect and localize the next-active objects, to predict the verb that describes the future interaction, and to determine when the interaction will start. Formally, the task consists in predicting future object interactions from a video \(V\) and a timestamp \(t\). The models can only use the video frames up to time \(t\) and have to produce a set of predictions for the object interactions that will occur after a time interval \(\delta\). 
Predictions consist of a bounding box over the next-active objects, a noun label, a verb label describing the future interaction, a real number indicating how soon the next interaction will start, and a confidence score. **Evaluation Measures:** We evaluated the model's performance with Top 5 mean Average precision measures [24] that capture different aspects of the task: Top-5 mAP Noun, Top-5 mAP Noun+Verb, Top-5 mAP Noun+TTC, and Top-5 mAP Noun+Verb+TTC, which is also referred to as Top-5 mAP Overall. **Baseline:** We adopted StillFast [46] as the baseline5. The model has been designed to extract 2D features from the considered past frame and 3D features from the input video clip. Feature stacks are merged through a combined feature pyramid layer and sent to the prediction head which is based on the Faster-RCNN head [49]. Features are fused and used to predict object (noun), verb probability distributions and time-to-contact (ttc) through linear layers along with the related prediction score \(s\). **Results:** Table 5 reports the results on test set of the ENIGMA-51 dataset considering the Top-5 mAP measures. StillFast obtains a Noun Top-5 mAP of 78.79%, demonstrating the ability to detect and classify the next-active objects processing images and videos simultaneously. When verbs and time to contact are predicted, performance drops according to Noun+Verb Top-5 mAP of 62.58%, Noun+TTC Top-5 mAP of 35.77%, and Overall Top-5 mAP of 27.83%. Qualitative results are reported in the supplementary material. ### NL Understanding of Intents and Entities **Task:** We considered the problem of classifying the intent of a user utterance, falling into one of the considered 24 classes, as well as the problem of entity slot filling, including four different slot types: _"object"_, _"board"_, _"component"_ and _"procedure"_. Given an input utterance \(U\), the task is to predict the intent class \(i\), and to detect any entities \(e\), if present, as well as the slot types \(t\) associated to them, outputting zero or more _<e, t>_ couples. The complete list of intents/entities is reported in the supplementary material. **Evaluation Measures:** We evaluate the baseline using the standard accuracy, and F1-score evaluation measures. **Baseline:** The baseline is based on the DIETClassifier [6]. We performed the tokenization and featurization steps before passing the utterances to the model. Specifically, we used the SpacyNLP, SpacyTokenizer, CountVectorsFeaturizer, SpacyFeaturizer and DIETClassifier components offered by the Rasa framework [5]. **Results:** Table 6 reports the results obtained for intent and entity classification. Five different variants of the training set (see Section 3.2) were explored: real data, real data + G10 data, real data + G50 data, real data + G100 data, and G100. The best results for the intent classification have been obtained using only real data obtaining an accuracy of 0.867 and an F1-score of 0.844. The baseline suffers when generated data are included, which introduces noise and makes performance worse, reaching an accuracy of 0.584 (-0.283) and an F1-score of 0.564 (-0.280). These results suggest that, in this challenging industrial scenario, generative models, such as GPT [41] are not yet capable of generating appropriate data with regard to understand human's intent in this domain, and the use of manually annotated data is still necessary. 
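As a concrete reference for the measures used in this comparison, the snippet below sketches how the intent-classification accuracy and F1-score can be computed from model predictions; the macro averaging of the F1-score is an assumption, since only standard accuracy and F1-score are stated.

```python
# Illustrative computation of the intent-classification metrics (Table 6);
# the choice of macro-averaged F1 is an assumption.
from sklearn.metrics import accuracy_score, f1_score

def intent_metrics(true_intents, predicted_intents):
    return {
        "accuracy": accuracy_score(true_intents, predicted_intents),
        "f1": f1_score(true_intents, predicted_intents, average="macro"),
    }

# Example with a few of the 24 intent classes.
y_true = ["object-instructions", "turn-object-on", "where-object", "next"]
y_pred = ["object-instructions", "turn-object-off", "where-object", "next"]
print(intent_metrics(y_true, y_pred))
```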
Instead, considering the ability to predict the entities of human's utterances which represent more simple concepts with respect to the human's intents, only generated data (last row) are enough. In particular, the model trained with the G100 set obtains better performance than one trained only with real data (1.00 vs. 0.994 for accuracy and 1.00 vs. 0.981 for F1-score)6. Footnote 6: Additional details about the used prompting, the implementation details, and the results are reported in the supplementary material. ## 5 Conclusion We proposed ENIGMA-51, a new egocentric dataset acquired in an industrial environment and densely annotated to study human behavior. In addition, we performed baseline experiments aimed to study different aspects of human behavior in the industrial domain addressing four tasks. Existing methods show promising results but are still far from reaching reasonable performance to build an intelligent assistant able to support workers in the industrial domain. This opens up opportunities for future in-depth investigations. \begin{table} \begin{tabular}{c c c c c} \hline \hline **Training** & **Accuracy** & **F1-score** & **Accuracy** & **F1-score** \\ \hline real & **0.867** & **0.844** & 0.994 & 0.981 \\ real+G10 & 0.830 & 0.815 & 1.00 & 1.00 \\ real+G50 & 0.792 & 0.773 & 1.00 & 1.00 \\ real+G100 & 0.792 & 0.784 & 1.00 & 1.00 \\ G100 & 0.584 & 0.564 & **1.00** & **1.00** \\ \hline \hline \end{tabular} \end{table} Table 6: Results for intents and entities classification considering different sets of training data. \begin{table} \begin{tabular}{l c c c c c c c c c c} \hline \hline **Setting** & \multicolumn{6}{c}{**p-mAP (\%) temporal offset threshold (s)**} & \multicolumn{6}{c}{**mp-mAP (\%)**} \\ \cline{2-10} & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & \\ \hline “take vs. release” & 27.40 & 32.97 & 36.88 & 40.08 & 42.15 & 43.70 & 45.52 & 47.48 & 48.81 & 49.50 & 41.45 \\ “first contact vs. de-contact” & 56.97 & 59.93 & 62.43 & 64.22 & 66.09 & 67.78 & 69.35 & 70.93 & 72.40 & 74.02 & 66.41 \\ “all interactions” & 29.64 & 31.69 & 33.28 & 34.60 & 35.91 & 36.96 & 37.95 & 38.88 & 39.84 & 40.58 & 35.93 \\ \hline \hline \end{tabular} \end{table} Table 3: Comparisons of p-mAP under different temporal offset thresholds on 3 different interaction settings. \begin{table} \begin{tabular}{c c c c} \hline **Noun** & **N+V** & **N+TTC** & **Overall** \\ \hline 78.79 & 62.58 & 35.77 & 27.83 \\ \hline \hline \end{tabular} \end{table} Table 5: Results% in Top-5 mean Average Precision for the Short-Term Object Interaction Anticipation task. N stands for noun, N+V stands for Noun+Verb and N+TTC stands for Noun+Time to Contact. ## Acknowledgements This research is supported by Next Vision7 s.r.l., by MISE - PON I&C 2014-2020 - Progetto ENIGMA - Prog n. F/190050/02/X44 - CUP: B61B19000520008, and by the project Future Artificial Intelligence Research (FAIR) - PNRR MUR Cod. PE0000013 - CUP: E63C22001940006. Footnote 7: Next Vision: [https://www.nextvisionlab.it/](https://www.nextvisionlab.it/) ## Supplementary Material This document is intended for the convenience of the reader and reports additional information about the collection and the annotations of the proposed dataset, as well as implementation details of the adopted baselines. This supplementary material is related to the following submission: * F. Ragusa, R. Leonardi, M. Mazzamuto, C. Bonanno, R. Scavo, A. Furnari, G. M. Farinella. ENIGMA-51: Towards a Fine-Grained Understanding of Human-Object Interactions in Industrial Scenarios. 
In IEEE Winter Conference on Applications of Computer Vision (WACV), 2024. The reader is referred to the manuscript and to our web page [https://iplab.dmi.unict.it/ENIGMA-51/](https://iplab.dmi.unict.it/ENIGMA-51/) to download the dataset and for further information. * **Stop**: Using this command, the operator stops the video recording (the application will play a sound for confirmation). ### Data Annotation #### 7.2.1 Object Annotations In our industrial setting, we considered both fixed (e.g., _oscilloscope_, _power supply_) and movable objects (e.g., _screwdriver_, _electric board_) present in the industrial laboratory. In particular, our object taxonomy is composed of 25 different objects: _power supply, power supply cables, oscilloscope, oscilloscope probe tip, oscilloscope ground clip, welder station, welder base, welder probe tip, electric screwdriver, electric screwdriver battery, battery connector, screwdriver, pliers, high voltage board, low voltage board, low voltage board screen, register, left red button, left green button, right red button, right green button, socket 1, socket 2, socket 3, and socket 4_. Figure 5 reports the object class distribution grouping them into _fixed_ and _movable_ objects. The dataset has been labelled manually by a group of annotators that used the VGG Image Annotator [63] with a custom project (see Figure6). #### 7.2.2 Utterances Classifying intents and entities within the industrial domain can be beneficial in the development of intelligent assistants that support workers during their interactions and ensure enhanced workplace safety. Using the instructions that guided participants to acquire the ENIGMA-51 dataset, we obtained 265 textual utterances that simulate the kinds of questions a worker may have for a supervisor colleague as they carry out a procedure in an industrial setting. We manually labelled these utterances as "intents" (e.g. "object-instructions") considering a taxonomy of 24 classes and as "entities" (e.g. "object") considering 4 entity types. Table 8 reports the list of the 24 intent classes with an associated description. Each entity has been annotated using square Figure 4: A screenshot captured from the developed application, during the acquisition phase. Figure 3: The ENIGMA-51 dataset has been acquired in an industrial laboratory. We show some interaction key frames with the related verb (in green) and the object involved in the interaction (in orange). brackets to denote its starting and ending characters in the text and round brackets to enclose the entity type. As a result, each entity is annotated following the [entity](type) form. Table 9 reports the list of the 4 considered entities. To enrich this set of utterances, we generated similar utterances through the prompting of ChatGPT [41], obtaining 100 unique utterances for each intent. Due to the unique structure of "inform" utterances, which consist of only an entity and optionally an article, generating a set of 100 utterances was unfeasible; hence, a total of 10 utterances for the "inform" intent were produced. The "inform" intent is defined for conversations in which a worker's question cannot be adequately answered solely by performing slot filling on their initial utterance. 
This is often the case when some of \begin{table} \begin{tabular}{l|l|l|l|l} \hline **Step** & **Description** & **Step** & **Description** & **Step** & **Description** \\ \hline 1 & Star at the wobench & 41 & Place the place on the wobench & 61 & Place the phrase on the wobench \\ 2 & Prouncence the voice command “Recon” to start recording, & 42 & Lower the soldering ion temperature to the minimum & 82 & Place the phrase on the wobench \\ ing, and wait for the acoustic signal for confirmation & 43 & Turn the lamp located on the wobench using the socket switch & 83 & Turn off the soldering ion using the socket switch \\ & 44 & Exit the laboratory & 44 & Fix the board to the wobench using the screwdriver \\ 5 & Enter the laboratory and close the door & 45 & Observe the power supply & 84 & Turn on the power supply using the socket switch \\ 6 & Go to panel A and observe it for a moment & 46 & Adjust the current knob of the power supply until the green LED lights up & 86 & Adjust the power supply \\ 7 & Press the two green buttons on panel A & 47 & Connect the power supply cables to the board’s power points & 5 \\ 8 & Head back to the wobench and sit down & 48 & Observe the board for a few seconds to verify the red LED turning on & 88 & Observe the power supply \\ 9 & Observe the low-voltage board for a while & 49 & Turn off the power supply using the socket switch & 89 & Turn off the power supply using the socket switch \\ 10 & Take the low-voltage board and place it on the work area & 50 & Set the current and voltage knobs of the power supply to & 90 & Set the current and voltage knobs of the power supply to \\ 11 & Remove the screws from the wobench using the screwdriver & 51 & Disconnect the clip and probe of the oscilloscope & 91 & Disconnect the power supply cables \\ 12 & Secure the board to the wobench using the socket switch & 52 & Turn on the oscilloscope using the socket switch & 92 & Observe the board for a few seconds to verify the red LED turning on \\ 13 & Observe the power supply & 53 & Activate channel 2 of the oscilloscope using the “CH2 MENU” button with a blue outline & 93 & Turn on the oscilloscope using the socket switch \\ 14 & Turn on the power supply using the socket switch & 54 & Connect the ground clip of the probe to test point 1 & 94 & Activate channel 2 of the oscilloscope using the “CH2 MENU” button \\ 15 & Adjust the current knob of the power supply until the green LED lights up & 55 & Use the probe tip to check for signals at test point 2 on the board & 95 & Connect the ground clip of the probe to test point 1 \\ 16 & Adjust the voltage knob of the power supply to set a voltage of & 56 & Press the “Auto Set” button on the oscilloscope & 96 & Use the probe tip to check for signals at test point 2 on the board \\ 17 & Connect the power supply cables to the board’s power points & 57 & Observe the oscilloscope’s display & 97 & Press the “Auto Set” button on the oscilloscope \\ points & Observe the board for a few seconds to verify the red LED turning on & 58 & Rotate the “position” knobserve the celloscope” study & 98 & Observe the oscilloscope’s display \\ 19 & Turn off the power supply using the socket switch & 59 & Repeat the previous three steps for the remaining test points (from number 3 to number 7) & 99 & Route the “position” knob above the “CH2 MENU” button with a blue outline randomly \\ 20 & Set the current and voltage knobs of the power supply to & 60 & Set the current and voltage knobs of the power supply to & 100 & Repeat the previous three steps for 
remaining test points \\ 21 & Disconnect the power supply cables & 61 & Disconnect the ground clip and probe from the oscilloscope & 101 & Turn off the power supply using the socket switch \\ 22 & Remove the board from the wobench using the screwdriver & 62 & Decaulve channel 2 of the oscilloscope & 102 & Set the current and voltage knobs of the power supply to \\ 23 & Unscrew the 4 screws on the back of the board using the screwdriver & 63 & Turn off the oscilloscope using the socket switch & 103 & Disconnect the ground clip and probe from the oscilloscope \\ 24 & Remove the board from the wobench using the screwdriver & 64 & Remove the board from the wobench using the screwdriver & 104 & Decaulve channel 2 of the oscilloscope \\ 25 & Observe the soldering iron & 65 & Observe the soldering iron & 105 & Turn off the oscilloscope using the socket switch \\ 26 & Turn on the soldering iron using the socket switch & 66 & Turn on the soldering iron using the socket switch & 106 & Remove the board from the wobench using the screwdriver \\ 27 & Set the soldering iron temperature to 200 degrees using the yellow “UP” button & 67 & Set the soldering iron temperature to 200 degrees using \\ 28 & Grab the black capacitor on the board with pliers & 68 & Grab the black capacitor on the board with pliers & 108 & Turn on the soldering iron using the socket switch \\ 29 & Take the soldering iron’s probe & 69 & Take the soldering iron’s probe & 109 & Set the soldering iron temperature to 200 degrees using the yellow “UP” button \\ 30 & Touch the first pin of the black capacitor with the soldering iron’s probe for 5 seconds & 70 & Touch the first pin of the black capacitor with the soldering iron’s probe for 5 seconds & 110 & Read to panel A \\ 31 & Touch the second pin of the capacitor on the back of the board for 5 seconds & 71 & Touch the second pin of the capacitor on the back of the board for 5 seconds & 111 & Observe panel A for a moment \\ 32 & Place the soldering iron’s probe & 72 & Place the piner on the wobench & 112 & Press the two red buttons on panel A \\ 33 & Place the soldering iron’s probe & 73 & Place the soldering iron’s probe & 113 & Head to the door \\ 34 & Place the board vertically & 74 & Place the board vertically & 114 & Exit the laboratory \\ 35 & Grab the black capacitor on the board with pliers & 75 & Grab the black capacitor on the board with pliers & 115 & Enter the laboratory \\ 36 & Take the soldering iron’s probe & 76 & Take the soldering iron’s probe & 116 & Pronounce the voice command “Stop” to end the recording, her an acoustic signal for confirmation \\ 37 & Touch the first pin of the black capacitor on the back of the board for 5 seconds & 77 & Touch the first pin of the black capacitor on the back of the board for 5 seconds & 110 & Remove the voice command “Stop” to end the recording, her an acoustic signal for confirmation \\ 38 & Touch the second pin of the capacitor on the back of the board for 5 seconds & 78 & Touch the second pin of the capacitor on the back of the board for 5 seconds & 110 & Observe panel A for a moment \\ 39 & Place the soldering iron’s probe & 79 & Place the soldering iron’s probe & 112 & Press the two red buttons on panel A \\ 40 & Place the board on the work area & 80 & Place the board on the work area & 113 & Head to the door \\ \hline \end{tabular} \end{table} Table 7: Low Voltage Board Repair Procedure (Standard Screwdriver Version) the required entities for formulating an appropriate response are missing. 
For example, in the following conversation: * Worker: _What's this object? I don't know how to use it._ * Assistant: _Which object?_ * Worker: _The oscilloscope._ The worker's first utterance falls under the "object-instructions" intent, whereas the worker's second utterance falls under the "inform" intent. Examples of utterances belonging to the inform intent include _"[high voltage](board) board [testing](procedure) procedure"_ and _"[screwdriver](object)"_. The used prompt for each intent, except for "inform" and "out-of-scope" intents, was the following: _"Imagine being an operator working inside an industrial laboratory. You can communicate with someone who knows the laboratory perfectly, including all the present objects and possible procedures that can be carried out. There are several intents you could have while operating within this industrial laboratory. This is one: \(<\)intent description\(>\). Since you'll have to communicate with the other person through text messages, try to avoid all forms of greeting and politeness. For this intent, imagine 100 unique sentences you would say to your interlocutor to express your intent and achieve the desired result."_ Please note that \(<\)intent description\(>\) was replaced with the description of each specific intent, using the descriptions listed in Table 8. Exceptions were made for the "inform" intent, for which we prompted the model to generate 10 unique sentences, and the "out-of-scope" intent, for which we used the following prompt: _"Imagine Figure 5: We report the object class distribution over the 51 videos of ENIGMA-51 grouping them into two categories: _fixed_ (orange) and _movable_ (blue). Figure 6: VGG Image Annotator tool. being an operator working inside an industrial laboratory. You can communicate with someone who knows the laboratory perfectly, including all the present objects and possible procedures that can be carried out. There are several intents you could have while operating within this industrial laboratory, which I will list below: \(<\)full list of intent descriptions\(>\). Since you'll have to communicate with the other person through text messages, try to avoid all forms of greeting and politeness. Knowing these intents, generate 100 unique sentences that are out of scope." Please note that \(<\)full list of intent descriptions\(>\) was replaced with the full list of intent descriptions listed in Table 8. Examples of the obtained utterances include: _"Provide [high voltage](board) board [repair](procedure) procedure tutorial now."_, _"Quick status check: alerts for [ battery charger](object)?"_, _"I require an image of the [display](component) that belongs to the [low voltage](board) board."_, _"Where's the PPE kept?"_, _"[high voltage](board) board [testing][procedure) procedure"_ and _"I need help with my car's engine trouble; can you assist me?"_ for the "procedure-tutorial" "object-warnings" "board-detail" "where-PPE", "inform", "out-of-scope" respectively. As ChatGPT was not able to generate 100 unique utterances, we carried out additional duplicate filtering and re-prompted the model in order to generate more utterances, until we met the criteria of gathering 100 unique utterances for each intent. We hypothesize that the inability to generate a set of unique utterances is due to the many constraints expressed in our prompt, which on the other hand was designed to generate utterances that reflected the real ones collected in the same laboratory setting. 
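The [entity](type) annotation form described above can be parsed with a small helper such as the one below. This is an illustrative snippet, not part of the released annotation tooling.

```python
# Illustrative parser for the [entity](type) annotation form used in the utterances.
import re

ENTITY_PATTERN = re.compile(r"\[(?P<entity>[^\]]+)\]\((?P<type>[^)]+)\)")

def parse_annotated_utterance(text):
    """Return the plain utterance and the list of <entity, type> pairs."""
    entities = [(m.group("entity"), m.group("type")) for m in ENTITY_PATTERN.finditer(text)]
    plain = ENTITY_PATTERN.sub(lambda m: m.group("entity"), text)
    return plain, entities

utterance = "I require an image of the [display](component) that belongs to the [low voltage](board) board."
plain, entities = parse_annotated_utterance(utterance)
# plain    -> "I require an image of the display that belongs to the low voltage board."
# entities -> [("display", "component"), ("low voltage", "board")]
```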
### Additional Resources #### 7.3.1 3D models of the laboratory and objects To enable the use of synthetic data to train scalable methods, we acquired the 3D models of the laboratory and all the 25 industrial objects. We used two different 3D scanners to create 3D models. Specifically, we used the structured-light \begin{table} \begin{tabular}{l l} \hline Intent & Description \\ \hline “greet” & Greet and start a conversation \\ “procedure-tutorial” & Ask a specific question about the ongoing procedure \\ “object-warnings” & Know if there are alerts for a specific object \\ “turn-object-on” & Turn on an object \\ “turn-object-off” & Turn off an object \\ “which-ppe-procedure” & Know which PPE is required to perform a specific procedure \\ “which-ppe-object” & Know which PPE is required to use a specific object \\ “object-instructions” & Know how to use a specific object \\ “is-object-on” & Find out if an object is turned on or off \\ “object-time” & Find out how long an object has been used \\ “where-board” & Know where a specific electronic board is located, or identify it on the working area \\ “board-detail” & Know the location of a component on an electronic board \\ “where-object” & Know where a specific object is located, or identify it on the working area \\ “object-detail” & Know the location of a component on an object \\ “start” & Start a procedure \\ “next” & Hear the next step in the ongoing procedure \\ “previous” & Hear the previous step in the ongoing procedure \\ “repeat” & Hear the current step in the ongoing procedure \\ “all-objects” & Know what objects are present in the laboratory \\ “ok-objects” & Know what objects can be used \\ “on-objects” & Know what objects are powered \\ “where-ppe” & Know where the PPEs are located \\ “inform” & Specify an entity \\ “out-of-scope” & This category includes all questions that are not relevant to the previous intents \\ \end{tabular} \end{table} Table 8: The 24 intent classes considered during our collection. \begin{table} \begin{tabular}{l l} Entity & Example \\ \hline “object” & [soldering iron](object) \\ “board’ & [low voltage](board) \\ “component” & [display](component) \\ “procedure” & [repair](procedure) \\ \end{tabular} \end{table} Table 9: The table reports our entity taxonomy composed of 4 classes. 3D scanner Artec Eva8 for scanning the objects, and the MatterPort9 device to scan the industrial laboratory. Figure 7 illustrates the 3D models of the laboratory and some industrial objects within the set of ENIGMA-51. The 3D model of the laboratory weighs 30MB and covers an area of approximately 20 square meters, instead, the weight of the object's 3D model varies from \(5\) to \(20\) MB. Footnote 8: [https://www.artec3d.com/portable-3d-scanners/artec-eva](https://www.artec3d.com/portable-3d-scanners/artec-eva) Footnote 9: [https://matterport.com/](https://matterport.com/) #### 7.3.2 Hands and Objects Segmentation using SAM-HQ SAM-HQ [30] is an advanced extension of the Segment Anything Model (SAM [31]), designed to enhance the segmentation of complex objects. SAM originally offered impressive scaling and zero-shot capabilities, but its mask prediction quality fell short, especially with intricate structures. To address this limitation, the authors of [30] proposed HQSAM, which retains SAM's promptable design, efficiency, and zero-shot generalizability while accurately segmenting any object. 
Considering the challenging nature of the industrial objects of ENIGMA-51, we opted to use SAM-HQ as it proves to be a suitable solution for accurate segmentation. Figure 8 shows a comparison between SAM and SAM-HQ, highlighting the better accuracy of the segmentation masks generated by SAM-HQ for wires and small buttons.
Figure 8: Comparison between SAM-HQ (left) and standard SAM (right). We also report the bounding boxes (in green) used to generate the segmentation masks.
**Implementation details:** For the mask extraction, we used the SAM-HQ code provided in the official repository10. We used the bounding-box annotations from the ENIGMA-51 dataset to prompt SAM-HQ, which enabled the generation of the desired masks. The checkpoint file "sam_hq_vit_h.pth", pretrained on HQSeg-44K [30], was used for the model. During the inference phase, SAM-HQ generated a total of 55,427 hand masks and 270,519 object masks. The inference process required 6 hours using an NVIDIA A30 GPU. The semantic masks have been organized in structured JSON files, and they are released with the ENIGMA-51 dataset. Footnote 10: [https://github.com/SysCv/sam-hq](https://github.com/SysCv/sam-hq)
#### 7.3.3 Hand keypoints using MMPose
Since the hands are the main channel through which humans interact with objects, we extracted hand keypoints using the MMPose [11] framework with the aim of releasing pseudo-labels useful to study human-object interactions with the proposed ENIGMA-51 dataset. MMPose [11] is an open-source toolbox based on PyTorch, part of the OpenMMLab project, which is able to simultaneously detect hands and localize their 2D keypoints.
**Implementation details:** We used the code provided in the official repository11. Since MMPose requires an input hand box, we used our hand annotations. We employed the pre-trained "onehand10k" model, which has been trained on images belonging to the OneHand10K [67] dataset with a resolution of 256x256. The model outputs keypoints for each hand, and each keypoint is associated with a confidence score ranging from 0 to 1. The confidence score allows us to filter out keypoints with lower accuracy. We saved all the extracted information in a JSON file. In total, we processed 30,747 left-hand bounding boxes and 24,680 right-hand bounding boxes using the MMPose framework. Figure 9 shows some examples of 2D hand keypoints extracted with MMPose.
Figure 9: Hand Keypoints extracted with MMPose.
#### 7.3.4 Features extraction using DINOv2
DINOv2 [42] is a family of foundation models that produce universal features suitable for both image-level visual tasks (such as image classification, instance retrieval, and video understanding) and pixel-level visual tasks (including depth estimation and semantic segmentation).
**Implementation details:** We used the official implementation12 with the publicly available dinov2_vitg14 pretrained model. Image preprocessing involved a transformation pipeline consisting of resizing and centre cropping the images to a resolution of 224x224, followed by converting them to tensors and applying normalization with the mean and standard deviation values of ImageNet. Each frame was then processed using the model, obtaining a tensor of size (1, 1536). The output tensors representing the extracted features were saved in _.npy_ format and they will be released with the ENIGMA-51 dataset.
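For reference, a minimal sketch of this extraction step is given below; the torch.hub entry point and the 1536-dimensional output correspond to the dinov2_vitg14 model, while the exact resize value and file handling are illustrative assumptions.

```python
import numpy as np
import torch
from PIL import Image
from torchvision import transforms

# DINOv2 ViT-g/14 backbone from the official torch.hub entry point
model = torch.hub.load("facebookresearch/dinov2", "dinov2_vitg14").eval()

preprocess = transforms.Compose([
    transforms.Resize(256),          # resize (exact value assumed)
    transforms.CenterCrop(224),      # centre crop to 224x224
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],  # ImageNet statistics
                         std=[0.229, 0.224, 0.225]),
])

image = Image.open("frame_000001.jpg").convert("RGB")  # illustrative frame path
with torch.no_grad():
    features = model(preprocess(image).unsqueeze(0))   # tensor of shape (1, 1536)

np.save("frame_000001.npy", features.cpu().numpy())
```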
Footnote 12: [https://github.com/facebookresearch/dinov2](https://github.com/facebookresearch/dinov2)
#### 7.3.5 Features extraction using CLIP
To provide a set of features allowing further analysis and the study of downstream tasks with the ENIGMA-51 dataset, we exploited CLIP [45] to extract text-image representations. We also used these features to explore human-object interactions with foundation models trained with generic and diverse data and without domain-specific data. CLIP [45] is an advanced method for image representation learning from natural language supervision. It involves joint training of image and text encoders to predict correct pairings of (image, text) training examples. CLIP's architecture includes a simplified version of ConVIRT [72] trained from scratch, allowing for efficient and effective image representation learning.
**Implementation details:** We used the public implementation available at the following GitHub repository: [https://github.com/moein-shariatnia/OpenAI-CLIP](https://github.com/moein-shariatnia/OpenAI-CLIP). The pretrained _ViT-L/14@336px_ model has been used, and the images were processed through a specific preprocessing step composed of resizing, center cropping, tensor transformation, and normalization. The output of CLIP is a tensor of size (1, 768), which is saved in _.npy_ format. All the extracted features will be released with the ENIGMA-51 dataset.
## 8 Benchmark and Baselines Details
### Untrimmed Temporal Detection of Human-Object Interactions
Starting from the manually labeled timestamp of a key interaction, we defined the ground truth interaction temporal boundaries to employ our baseline based on ActionFormer [69]. We tested two different strategies to set the interaction temporal boundaries. The first consists of setting the start and end boundaries 15 frames before and after the labeled timestamp, respectively. In the second approach, we set the action start at the labeled interaction timestamp and determined the action end empirically, allowing the model to observe hand movements after the labeled interaction timestamp for "take" and "release" actions. For "first contact" and "de-contact" interactions, the action end time was set 15 frames prior to the annotated timestamp. The second approach demonstrated better consistency in comparison to the first, despite yielding comparable mmp-mAP scores during evaluations of "take" and "release" interactions (\(41.45\%\) versus \(42.27\%\)). In particular, when applying a temporal threshold of 1 second, the second strategy yielded a p-mAP of \(27.40\%\), distinctly outperforming the \(11.25\%\) achieved by the first. This observation highlights that although both methods produce comparable results, the second approach has an advantage in setting the action at the labeled timestamp. This ensures reliable performance even at lower time thresholds.
**Implementation Details:** We used a Two-Stream (TS) network [65] to extract video features. Each video chunk is set to a size of \(6\), and there is no overlapping between adjacent chunks. With a video frame rate of \(30\), we get \(5\) chunks per second. For appearance features, we extract data from the Flatten 673 layer of ResNet-200 [27] from the central frame of each chunk. Motion features are extracted from the global pool layer of BN-Inception [28] from optical flow fields computed from the \(6\) consecutive frames within each chunk.
Motion and appearance features are then concatenated. We used models pre-trained on ActivityNet to extract these feature vectors13. Footnote 13: [https://github.com/yjxiong/anet2016-cuhk](https://github.com/yjxiong/anet2016-cuhk). We tested different numbers of feature pyramid levels and different regression ranges. We found reliable results when using \(3\) levels of the feature pyramid, with regression ranges of [0, 2], [2, 5], and [5, 10000], respectively. We trained the model for \(60\) epochs using a learning rate of \(0.0001\), \(5\) warmup epochs, and a weight decay of \(0.05\), following a cosine scheduler. All the experiments were conducted using \(4\) Nvidia A30 graphics cards.
### Egocentric Human-Object Interaction Detection
To perform the experiments for the EHOI detection task using the baseline based on [33], we used a machine with a single _NVIDIA A30_ GPU and an _Intel Xeon Silver 4310_ CPU. We scaled all the images to a resolution of 1280x720 pixels. We trained the model using _Stochastic Gradient Descent_ (SGD) for 80,000 iterations, an initial learning rate of 0.001, which is decreased by a factor of 10 after 40,000 and 60,000 iterations, and a minibatch size of 4 images. Figure 10 shows qualitative results of the adopted baseline. These qualitative results provide insights into the importance of incorporating domain-specific data during the training phase to extract object knowledge useful for providing services to workers in the industrial domain.
Figure 10: Qualitative results of the adopted baseline for the EHOI detection task.
### Short-Term Object Interaction Anticipation
We achieve the short-term object interaction anticipation task with our baseline based on [46]. Figure 11 shows some qualitative results. In the first row we report correct predictions, while in the second row we report wrong predictions. The predictions are represented with green bounding boxes reporting the score, the noun and verb classes, and the TTC, while the ground truth is shown in red with the noun and verb classes and the TTC.
**Implementation Details:** At training time, to obtain high-resolution images and low-resolution videos, we used the same parameters used in [46]. At test time, we feed to the networks still images of height \(H\) = 800 pixels and videos of height \(h\) = 256 pixels. The 2D backbone of the still branch is a ResNet-50 architecture. The weights of this backbone and the ones of the standard feature pyramid layer are initialized from a Faster R-CNN model [49] pre-trained on the COCO dataset [35]. The 3D network which composes the fast branch is an X3D-M model [19] pre-trained on Kinetics [7]. The model has been trained with a base learning rate of 0.001 and a weight decay of 0.0001. The learning rate is lowered by a factor of 10 after 15 and 30 epochs. The model is trained in half precision on four NVIDIA V100 GPUs with a batch size of 32.
### Natural Language Understanding of Intents and Entities
We split our real dataset using an 80:20 ratio, with an identical test split employed across all experiments, which is uniformly distributed across intent and entity classes. DIETClassifier [6] was adopted for intent and entity prediction. The model has been trained on an Intel Core i5 CPU for 100 epochs with a learning rate of 0.001 and a variable batch size which linearly increases for each epoch from 64 to 256. Table 10 reports the results for the intent classification task (first four columns) and for the entity classification task (last four columns).
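The intent and entity scores reported in Table 10 are standard classification metrics; a minimal sketch of how they can be computed from gold and predicted labels with scikit-learn is shown below. The weighted averaging over classes is our assumption about how per-class scores are aggregated, and the label lists are toy examples.

```python
from sklearn.metrics import accuracy_score, precision_recall_fscore_support


def intent_scores(y_true, y_pred):
    """Accuracy / precision / recall / F1 for the intent labels of the test split."""
    precision, recall, f1, _ = precision_recall_fscore_support(
        y_true, y_pred, average="weighted", zero_division=0
    )
    return {
        "accuracy": accuracy_score(y_true, y_pred),
        "precision": precision,
        "recall": recall,
        "f1": f1,
    }


# Toy example with three test utterances
print(intent_scores(["inform", "start", "next"], ["inform", "start", "repeat"]))
```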
Five different variations of the training set were explored: real data, real data + G10 data, real data + G50 data, real data + G100 data, and G100; and four different metrics were used for both intent and entity classification task: accuracy, precision, recall, and F1-score. The best results for the intent classification task have been obtained using only real data for training, with an accuracy, precision, recall, and F1-score of 0.867, 0.840, 0.867, and 0.844 respectively. However, using only generated data (G100) for training leads to poorer performances, with an accuracy, precision, recall, and F1-score of 0.584 (-0.283), 0.622 (-0.218), 0.584 (-0.283), 0.564 (-0.280) respectively. These results, in conjunction with the difficulties encountered during the prompting process, suggest that generated data does not fully reflect real utterances, and modern generative models may not accommodate all the constraints imposed by our specific context, thus affecting the model's performance. Exploring the use of combinations of real and generated data for training, we observe the best performances when the generated data is not predominant over the real data. In fact, we obtained an accuracy, precision, recall, F1-score of 0.830 (-0.037), 0.822 (-0.018), 0.830 (-0.037), and 0.815 (-0.029) respectively using real data and G10 for training, an accuracy, precision, recall, F1-score of 0.792 (-0.075), 0.788 (-0.052), 0.792 (-0.075), 0.773 (-0.071) respectively using real data and G50 for training, and an accuracy, precision, recall, F1-score of 0.792 (-0.075), 0.794 (-0.046), 0.792 (-0.075), 0.784 (-0.060) respectively using real data and G100. The performance deteriorates significantly with the addition of G10 to real data, subsequently showing a gradual decline as more generated data is introduced, ultimately stabilizing as G100 is added. Regarding entity classification, the results show that generated data alone is able to represent entities as how they're encountered in the real setting. In fact, best results are attained using G100 alone for training with an accuracy, precision, recall, and F1-score of 1.0, 1.0, 1.0, 1.0 respectively. However, real data itself attains near-perfect performances, with an accuracy, precision, recall, and F1-score of 0.994 (-0.006), 0.965 (-0.035), 1.0 (\(\pm\)0), 0.981 (-0.019) respectively. Figure 12 presents qualitative results for three distinct utterances for both intent and entity classification tasks. In the top row, we observe an utterance with an incorrect intent prediction ("object-instructions" instead of "object-warnings") with a confidence of 0.32. This utterance did not contain any entities, and the absence of entities was correctly predicted with a confidence of 0.98. These results suggest that our model exhibited uncertainty in determining the appropriate class for the utterance. This uncertainty could be attributed to the fact that both "object-instructions" and "object-warnings" often contain utterances formulated in a very similar manner. However, the model's capabilities concerning entity classification tend to be highly accurate with a significant confidence score. Moving to the middle row, we encounter an utterance with an incorrect intent prediction ("inform" instead of "where-object") with a confidence score of 0.94. This utterance contained an entity of the "object" type, and the presence and class of this entity were correctly predicted with a confidence of 0.91. 
These results suggest that our model exhibited a high level of confidence in classifying the utterance, despite its incorrect classification. This observation may indicate that this particular utterance shares significant similarities with those typically found in the "where-object" intent. Similarly, in this instance, the model's capabilities enable accurate entity classification with a high confidence score. Lastly, the bottom row showcases an utterance with a correct intent prediction ("which-PPE-procedure") with a confidence score of 0.91. This utterance featured entities of both "procedure" and "board" types, and the presence and classes of these entities were correctly predicted with confidence scores of 0.74 and 0.96, respectively. These results suggest that our model exhibited a high level of confidence in classifying the utterance, and its classification was indeed correct. Ultimately, in this latter scenario as well, the model's capabilities enable accurate entity classification, resulting in a confidence score ranging from moderate to high.
\begin{table} \begin{tabular}{c c c c c c c c c} & \multicolumn{4}{c}{**Intent**} & \multicolumn{4}{c}{**Entity**} \\ \hline **Training** & **Accuracy** & **Precision** & **Recall** & **F1-score** & **Accuracy** & **Precision** & **Recall** & **F1-score** \\ \hline real & **0.867** & **0.840** & **0.867** & **0.844** & 0.994 & 0.965 & 1.0 & 0.981 \\ real+G10 & 0.830 & 0.822 & 0.830 & 0.815 & 1.0 & 1.0 & 1.0 & 1.0 \\ real+G50 & 0.792 & 0.788 & 0.792 & 0.773 & 1.0 & 1.0 & 1.0 & 1.0 \\ real+G100 & 0.792 & 0.794 & 0.792 & 0.784 & 1.0 & 1.0 & 1.0 & 1.0 \\ G100 & 0.584 & 0.622 & 0.584 & 0.564 & **1.0** & **1.0** & **1.0** & **1.0** \\ \hline \end{tabular} \end{table} Table 10: Results for intents and entities classification considering different sets of training data.
Figure 11: We reported qualitative results of our baseline based on StillFast [46] for the short-term object interaction anticipation task.
Figure 12: Qualitative results showing two incorrect intent predictions (first two rows) and a correct prediction (last row), alongside correct entity predictions.
## 6 Conclusion
2310.20417
Radio-continuum spectra of ram pressure stripped galaxies in the Coma Cluster
$Aims:$ We used the nearby Coma Cluster as a laboratory in order to probe the impact of ram pressure on star formation as well as to constrain the characteristic timescales and velocities for the stripping of the non-thermal ISM. $Methods:$ We used high-resolution ($6.5'' \approx 3\,\mathrm{kpc}$), multi-frequency ($144\,\mathrm{MHz} - 1.5\,\mathrm{GHz}$) radio continuum imaging of the Coma Cluster to resolve the low-frequency radio spectrum across the discs and tails of 25 ram pressure stripped galaxies. With resolved spectral index maps across these galaxy discs, we constrained the impact of ram pressure perturbations on galaxy star formation. We measured multi-frequency flux-density profiles along each of the ram pressure stripped tails in our sample. We then fit the resulting radio continuum spectra with a simple synchrotron aging model. $Results:$ We showed that ram pressure stripped tails in Coma have steep ($-2 \lesssim \alpha \lesssim -1$) spectral indices. The discs of galaxies undergoing ram pressure stripping have integrated spectral indices within the expected range for shock acceleration from supernovae ($-0.8 \lesssim \alpha \lesssim -0.5$), though there is a tail towards flatter values. In a resolved sense, there are gradients in spectral index across the discs of ram pressure stripped galaxies in Coma. These gradients are aligned with the direction of the observed radio tails, with the flattest spectral indices being found on the `leading half'. From best-fit break frequencies we estimated the projected plasma velocities along the tail to be on the order of hundreds of kilometers per second, with the precise magnitude depending on the assumed magnetic field strength.
I. D. Roberts, R. J. van Weeren, D. V. Lal, M. Sun, H. Chen, A. Ignesti, M. Brüggen, N. Lyskova, T. Venturi, M. Yagi
2023-10-31T12:45:23Z
http://arxiv.org/abs/2310.20417v1
# Radio-continuum spectra of ram pressure stripped galaxies in the Coma Cluster
###### Abstract
Context: The population of galaxies in the local Universe is bi-modal in terms of specific star formation rate. This fact has led to a broad distinction between star-forming galaxies (typically cold-gas rich and late type) and quenched galaxies (typically cold-gas poor and early type). The ratio between quenched and star-forming galaxies is much larger in clusters than in the field, and pinpointing the physical processes driving this excess quenching in clusters is an open question.
Aims: We used the nearby Coma Cluster as a laboratory in order to probe the impact of ram pressure on star formation as well as to constrain the characteristic timescales and velocities for the stripping of the non-thermal ISM.
Methods: We used high-resolution (\(6.5^{\prime\prime}\approx 3\,\mathrm{kpc}\)), multi-frequency (\(144\,\mathrm{MHz}-1.5\,\mathrm{GHz}\)) radio continuum imaging of the Coma Cluster to resolve the low-frequency radio spectrum across the discs and tails of 25 ram pressure stripped galaxies. With resolved spectral index maps across these galaxy discs, we constrained the impact of ram pressure perturbations on galaxy star formation. We measured multi-frequency flux-density profiles along each of the ram pressure stripped tails in our sample. We then fit the resulting radio continuum spectra with a simple synchrotron aging model.
Results: We showed that ram pressure stripped tails in Coma have steep (\(-2\lesssim\alpha\lesssim-1\)) spectral indices. The discs of galaxies undergoing ram pressure stripping have integrated spectral indices within the expected range for shock acceleration from supernovae (\(-0.8\lesssim\alpha\lesssim-0.5\)), though there is a tail towards flatter values. In a resolved sense, there are gradients in spectral index across the discs of ram pressure stripped galaxies in Coma. These gradients are aligned with the direction of the observed radio tails, with the flattest spectral indices being found on the 'leading half'. From best-fit break frequencies we estimated the projected plasma velocities along the tail to be on the order of hundreds of kilometers per second, with the precise magnitude depending on the assumed magnetic field strength.
Conclusions:
## 1 Introduction
The Coma Cluster (Abell 1656) is the nearest, massive (\(M_{200}\gtrsim 10^{15}\,\mathrm{M}_{\odot}\), Ho et al. 2022) galaxy cluster to the Milky Way, and is thus an invaluable laboratory for studying the effects of environmentally-driven galaxy evolution. Galaxies in Coma are primarily passive, early-type galaxies but with a minority population of star-forming disc galaxies that are H i-deficient relative to non-cluster galaxies (e.g. Shapley 1934; Godwin et al. 1983; Gavazzi 1987, 1989; Bravo-Alfaro et al. 2000; Terlevich et al. 2001; Poggianti et al. 2004; Mahajan et al. 2010; Smith et al. 2012; Molnar et al. 2022). Among the star-forming galaxy population in Coma, multiple studies have found evidence for anomalously large star formation rates (SFRs), even compared to non-cluster galaxies (e.g. Bothun & Dressler 1986; Donas et al. 1995; Miller et al. 2009b; Roberts & Parker 2020; Roberts et al. 2021a; Boselli et al. 2022; Roberts et al. 2022a).
At a distance of \(\sim 100\,\mathrm{Mpc}\), Coma is both small enough on the sky (\(\sim\)a few degrees across) to be efficiently mapped by survey telescopes with \(\sim\)degree-scale fields of view, but still near enough that \(\sim\)kiloparsec physical scales can be probed with \(\sim\)arcsecond angular resolution. The combination of these two facts have led to a number of resolved studies of environmental quenching in Coma, covering a relatively large number of member galaxies. These works suggest that perturbations from ram pressure stripping are likely a key factor driving the environmental quenching observed in Coma today (e.g. Poggianti et al. 2004; Yagi et al. 2007; Smith et al. 2010; Yagi et al. 2010; Gavazzi et al. 2018; Cramer et al. 2019; Chen et al. 2020; Roberts & Parker 2020; Cramer et al. 2021; Roberts et al. 2021a; Lal et al. 2022). Ram pressure stripping (e.g. Gunn & Gott 1972) is a product of the relative motion between satellite galaxies and the intracluster medium (ICM), which drives an external pressure incident on the galaxy. The efficiency of this stripping will dictate the relevant timescale for quenching star formation. In the limit of a 'weak' ram pressure, where only the circumgalactic medium (CGM) and diffuse atomic gas are removed, then the quenching timescale will be set by the depletion time of the surviving gas in the disc (\(\sim\) 1 Gyr timescales, e.g. Bigiel et al., 2011; Saintonge et al., 2017). This scenario is analogous to 'galaxy starvation', a mechanism often cited to explain the quenching of galaxies in dense environments (e.g. Larson et al., 1980; Balogh et al., 2000; Peng et al., 2015). If ram pressure is strong enough to overcome not only the gravitational potential on the atomic gas, but also the denser, centrally concentrated molecular gas component, then the star-forming gas can be stripped directly and quenching will proceed on a much shorter timescale. Ram pressure may also alter the physical conditions of gas in star-forming regions, even if it is not directly removed from the disc. One possibility is that ram pressure compresses gas in the interstellar medium (ISM), both promoting the conversion between atomic and molecular gas as well as increasing the densities of existing molecular gas. Should this occur, then it would also predict enhanced SFRs in cluster galaxies, assuming a normal Kennicutt-Schmidt relation (Schmidt, 1959; Kennicutt, 1998). This framework has been used to explain large SFRs amongst Coma Cluster galaxies dating back four decades (Bothun and Dressler, 1986), and with the onset of recent large CO surveys of cluster galaxies, evidence for ram-pressure enhanced molecular gas densities and star formation in such environments has become more widespread (e.g. Gavazzi et al., 2001; Merluzzi et al., 2013; Vulcani et al., 2018; Moretti et al., 2020, 20; Vulcani et al., 2020, 2020; Boselli et al., 2021; Cramer et al., 2021; Durret et al., 2021; Hess et al., 2022; Lee et al., 2022; Roberts et al., 2022, 2023; Brown et al., 2023; Moretti et al., 2023). We also note that enhanced SFRs may be a result of gas flows toward the galaxy centre, driven by ram pressure (Zhu et al., 2023). Thus, in principle, ram pressure is sufficient to explain both the aforementioned large population of quenched galaxies in Coma and the tendency for many of the remaining star-forming members to show high SFRs. 
Cluster galaxies currently experiencing ram pressure stripping are generally identified in one of two ways: either by observing a one-sided tail of stripped material (e.g. Gavazzi, 1978; Shostak et al., 1982; Gavazzi and Jaffe, 1987; Gavazzi et al., 2001; Kenney et al., 2004; Chung et al., 2007; Yagi et al., 2010; Merluzzi et al., 2013, 2016; Poggianti et al., 2017; Boselli et al., 2018; Roberts et al., 2021, 2022; Venturi et al., 2022; Edler et al., 2023), or (often less reliably) by identifying morphological features in the stellar light of a galaxy that are consistent with perturbations from ram pressure (McPartland et al., 2016; Poggianti et al., 2016; Roberts and Parker, 2020; Durret et al., 2021; Roberts et al., 2022; Vulcani et al., 2022). Most commonly, ram pressure tails are observationally identified through the H\(\alpha\) Balmer line tracing ionized gas, the 21 cm hydrogen line tracing atomic gas, or the radio continuum tracing synchrotron emission from cosmic ray electrons (CREs), with the earliest examples of late-type galaxies undergoing ram pressure stripping coming from observations in the radio continuum (Gavazzi, 1978; Gavazzi and Jaffe, 1987). A primary asset of observing in the radio continuum is the large primary beam size, particularly at low (\(\lesssim\) 1 GHz) frequencies, which allows entire clusters at low redshift to be efficiently imaged and thus galaxies undergoing ram pressure stripping to be identified across the full cluster volume. In particular, the high sensitivity (\(\sim\) 100 \(\mu\)Jy beam\({}^{-1}\)) and angular resolution (\(\sim\) 6'') of the Low Frequency Array (LOFAR) Two-metre Sky Survey (LoTSS, van Haarlem et al., 2013; Shimwell et al., 2017, 2019, 2022) have led to a roughly ten-fold increase in the known number of star-forming galaxies in low-redshift groups and clusters with stripped radio tails (Roberts et al., 2021, 2022; Edler et al., 2023; Ignesti et al., 2023). In addition to being clear signs of environmental galaxy evolution, ram pressure stripped radio tails also deposit CREs and magnetic fields into the ICM, which contribute to the seed particle population for diffuse radio halos and relics (Ge et al., 2019). Radio continuum emission in star-forming galaxies is a mix of thermal free-free emission from Hii regions and non-thermal synchrotron emission originating from CREs accelerated by shocks from supernovae. At frequencies below \(\sim\) 1 GHz, the radio spectrum is dominated by the non-thermal synchrotron component. For galaxies undergoing ram pressure stripping, this includes emission from stripped tails that is believed to be tracing CREs accelerated by supernovae in the disc that are advectively transported out of the galaxy under ram pressure (Murphy et al., 2009; Vollmer et al., 2013; Muller et al., 2021; Roberts et al., 2021, 2022). For different CRE acceleration mechanisms and efficiencies, the energy spectral index, \(\gamma\), for supernovae is estimated to lie in the range from \(-2\) to \(-2.6\), which corresponds to a synchrotron spectral index, \(\alpha\), between \(-0.5\) and \(-0.8\) (\(\gamma=2\alpha-1\), e.g. Bell, 1978; Bogdan and Volk, 1983; Biermann and Strom, 1993). After CREs are injected into the ISM they are then subject to energy loss mechanisms which alter \(\alpha\) in a frequency-dependent fashion. Examples of energy loss mechanisms include ionization losses, synchrotron losses, and inverse-Compton losses. Synchrotron and inverse-Compton losses most strongly impact high-energy CREs.
This acts to steepen \(\alpha\) overall and, in particular, introduces a high-frequency exponential cut-off to the (previously) power-law spectrum, occurring at the so-called 'break frequency' (\(\nu_{b}\)). If synchrotron and inverse-Compton losses are dominant, then over time the break frequency will evolve to lower frequency and the radiative age of a given plasma can be estimated from the location of this break (e.g. Miley, 1980). In the context of star formation, this means that young star-forming regions will have spectral indices which are close to the injection values (\(-0.8\lesssim\alpha\lesssim-0.5\)), whereas older plasma corresponding to a prior episode of star formation will be characterized by a steeper \(\alpha\). In the context of ram pressure stripping, radiative age estimates have been used to constrain the bulk velocity of the stripped plasma for some cluster galaxies, generally finding speeds on the order of hundreds of kilometers per second (Vollmer et al., 2021; Ignesti et al., 2023). Conversely, ionization losses flatten the spectral index by preferentially affecting low-energy CREs and thus the low-frequency portion of the spectrum. The strength of ionization losses increases with ISM density (e.g. Longair, 2011), a fact which has been used to explain the flatter-than-injection spectral indices (i.e. \(\alpha>-0.5\)) that are sometimes observed near strongly star-forming regions in galaxies (Basu et al., 2015). There is also tentative evidence for unusually flat spectral indices in the discs of galaxies undergoing ram pressure stripping (Ignesti et al., 2022; Roberts et al., 2022), for which ionization losses provide a potential explanation given that compression from ram pressure is capable of increasing local ISM densities (e.g. Cramer et al., 2020; Troncoso-Iribarren et al., 2020; Moretti et al., 2020, 2020; Roberts et al., 2022, 2023; Brown et al., 2023). A number of wide-area radio continuum surveys of the Coma Cluster have been published, spanning a decade in frequency, including: LOFAR at 144 MHz (Shimwell et al., 2022, also see Bonafede et al., 2022), the upgraded Giant Metrewave Telescope (uGMRT) at 400 MHz and 700 MHz (Lal, 2020; Lal et al., 2022), and the Very Large Array (VLA) at 1.5 GHz (Miller et al., 2009; Chen et al., 2020). In this work we synergize data from these surveys in order to complete an in-depth study of the low-frequency radio spectrum of ram pressure stripped galaxies in Coma and their stripped tails. With a working physical resolution of \(\sim\) 3 kpc, and excellent sensitivity to extended, diffuse emission, we are able to probe spectral properties both across the star-forming discs and along the ram pressure stripped tails. From these datasets we derive constraints on ram-pressure induced star formation, cosmic-ray losses in the ISM, and the characteristic timescales for gas stripping. The structure of this paper is as follows: In Sect. 2 we describe the multi-frequency radio continuum imaging used in this work, as well as our sample of ram pressure stripped galaxies. In Sect. 3 we report integrated spectral index measurements over both galaxy discs and stripped tails. In Sect. 4 we show resolved maps of spectral index covering the galaxy discs, primarily between 144 MHz and 1.5 GHz. In Sect. 5 we measure multi-frequency flux-density profiles along observed stripped tails and apply a simple spectral aging model to these observed radio spectra.
From this model we extract estimates for synchrotron break frequencies and (projected) bulk stripping velocities along the tails. Finally, in Sects. 6 and 7 we provide a brief discussion and high-level summary of this work. Throughout we assume a standard \(\Lambda\)CDM cosmology with \(\Omega_{\rm M}=0.3\), \(\Omega_{\Lambda}=0.7\), and \(H_{0}=70\) km s\({}^{-1}\) Mpc\({}^{-1}\). We take the redshift of the Coma Cluster to be \(z_{\rm Coma}=0.0234\) (Rines et al., 2016), which corresponds to a luminosity distance of 102 Mpc and a scale of 0.47 kpc/\({}^{\prime\prime}\). The radio synchrotron spectrum is described as \(S_{\nu}\propto\nu^{\alpha}\), where \(S_{\nu}\) is the flux density, \(\nu\) is the frequency, and \(\alpha\) is the spectral index.
## 2 Data and galaxy sample
### Radio imaging
#### 2.1.1 LOFAR 144 MHz
The LOFAR images in this work are from the LOFAR Two-metre Sky Survey (LoTSS, Shimwell et al., 2017, 2022), a wide-area continuum survey of the northern sky between 120 MHz and 168 MHz using the LOFAR high-band antenna (HBA). We use three 8 h LoTSS pointings (P192+27, P195+27, P195+30) that combine to map the Coma cluster out to the virial radius. Each pointing is processed, imaged, and mosaicked following the procedure outlined in Shimwell et al. (2022). This includes initial processing to correct for direction-independent errors, followed by direction-dependent1 calibration to account for the variation in ionospheric effects across the large (primary-beam FWHM: \(\sim\) 4 deg) field-of-view. Direction-dependent solutions are applied during imaging with DDFacet (Tasse et al., 2018) and images for each pointing are mosaicked with neighbouring pointings in order to produce the final, mosaicked image for each field (Shimwell et al., 2022). For each field we checked the LoTSS flux-density scale against both the TGSS (Intema et al., 2009) and 7C (Hales et al., 2007) surveys. For all three fields we measured a \(\sim\) 20% flux-scale offset in LoTSS, both when comparing to TGSS and to 7C. When directly comparing TGSS and 7C, no flux-scale differences were measured. Thus we applied a \(\sim\) 20% flux-scale correction (varying slightly from field to field) to each of the three LoTSS mosaics. The final LoTSS images for P192+27, P195+27, and P195+30 reach rms levels at the field centres of 70, 65, and 60 \(\mu\)Jy per 6\({}^{\prime\prime}\) beam, respectively. For all frequency bands used in this analysis we work at a common angular resolution of 6.5\({}^{\prime\prime}\), corresponding to a physical resolution of \(\sim\) 3 kpc at the distance of the Coma Cluster. We convolve the LoTSS images to this working resolution using the imsmooth function in CASA (CASA Team et al., 2022). Footnote 1: [https://github.com/hhardcastle/ddf-pipeline](https://github.com/hhardcastle/ddf-pipeline)
#### 2.1.2 uGMRT 400 and 700 MHz
At 400 and 700 MHz we use imaging from uGMRT of the Coma Cluster from Lal (2020) and Lal et al. (2022). This imaging covers an area of \(\sim\) 4 deg focused on the central and south-west regions of Coma, which is made up of three pointings in Band 3 (\(250-550\) MHz) and eight pointings in Band 4 (\(550-850\) MHz). The on-source times range between 1.8 h and 4.7 h for pointings in Band 3 and 1.3 h and 3.4 h for pointings in Band 4. We refer to Lal (2020) and Lal et al. (2022) for all further details on the observing set-up. To process the raw data we used the Source Peeling and Atmospheric Modeling pipeline (SPAM, Intema et al., 2009).
We followed the recommended workflow for wide-band uGMRT data2, which first splits the dataset into \(\sim\) 50 MHz sub-bands that are each processed individually by the main SPAM pipeline. After processing, sub-bands were imaged together with WSClean (Offringa et al., 2014) to form a final wide-band image. For final imaging we use Briggs weighting (Briggs, 1995) with a robust parameter of \(-0.5\). This results in rms levels between 26 and 30 \(\mu\)Jy (depending on the pointing) per \(\sim\) 5.5\({}^{\prime\prime}\) beam in Band 3, and rms levels between 13 and 26 \(\mu\)Jy per \(\sim\) 3.5\({}^{\prime\prime}\) beam in Band 4. All Band 3 and Band 4 pointings are convolved to our working resolution of 6.5\({}^{\prime\prime}\), reprojected onto a common pixel grid, and then mosaicked together to produce a single mosaic image for each frequency band. The largest angular scale for the uGMRT is \(\sim\) 33\({}^{\prime}\) in Band 3 and \(\sim\) 20\({}^{\prime}\) in Band 4, both of which are much larger than the angular size of 2\({}^{\prime}\) at 144 MHz for the largest source (NGC4848) in our sample. Footnote 2: [http://www.intema.nl/doku.php?id=huibintemaspampipeline](http://www.intema.nl/doku.php?id=huibintemaspampipeline)
#### 2.1.3 VLA 1.5 GHz
For our highest frequency data we used two different VLA 1.5 GHz data-sets, from Chen et al. (2020) and Miller et al. (2009), which together cover more than a square degree of the Coma Cluster. Throughout this work we assumed that the observed flux density at 1.5 GHz is solely due to non-thermal synchrotron emission - in other words, a 'thermal fraction' (\(f_{\rm th}\)) of zero. While this is unlikely to be strictly true, previous works have found typical thermal fractions of \(f_{\rm th}\lesssim 10\%\) at 1.5 GHz for nearby star-forming galaxies (e.g. Condon, 1992; Niklas et al., 1997; Tabatabaei et al., 2017; Ignesti et al., 2022). The observations from Chen et al. (2020) include one pointing centred on D100 and one pointing centred on NGC4848, both well-known examples of ram pressure stripped galaxies in the Coma Cluster. Here we briefly highlight some main features of the data and processing and we refer to Chen et al. (2020) for a more comprehensive discussion. The NGC4848 and D100 fields were observed for an on-source time of 12.1 h and 6.3 h, respectively, in the VLA B-configuration. Data were processed following standard procedures in CASA. After calibration the continuum data was imaged using tclean with Briggs weighting and a robust parameter of 0.5, giving a synthesized beam size of \(\sim\) 4\({}^{\prime\prime}\) for both fields. At the pointing centre the resulting rms noise is 5.9 \(\mu\)Jy beam\({}^{-1}\) for the NGC4848 field and 8.0 \(\mu\)Jy beam\({}^{-1}\) for the D100 field. Both pointings were then convolved to our working resolution of 6.5\({}^{\prime\prime}\). The observations from Miller et al. (2009a) also include two fields, one covering the centre of the Coma Cluster (Coma 1, which has substantial overlap with the D100 field from Chen et al. 2020) and one to the southwest of the cluster centre (Coma 3). Again, here we only provide a brief description of the data processing, but a detailed outline is available in Miller et al. (2009a). For each field, 11 separate pointings were obtained in order to achieve roughly uniform sensitivity across a \(30^{\prime}\times 50^{\prime}\) area. Each pointing was observed for \(\sim 1\) h in the VLA B-configuration.
The data were reduced and imaged with the Astronomical Image Processing System (AIPS). Final images for each field have a restoring beam size of \(4.4^{\prime\prime}\) and reach rms levels of \(20-25\,\mu\)Jy at the image centre. We obtained the Coma 1 and Coma 3 images from the CDS3 and both were then smoothed to our working resolution of \(6.5^{\prime\prime}\). Footnote 3: [https://cdsarc.cds.unistra.fr/viz-bin/cat/J/AJ/137/4436](https://cdsarc.cds.unistra.fr/viz-bin/cat/J/AJ/137/4436) Lastly, we re-projected all four VLA images from Miller et al. (2009a) and Chen et al. (2020) (after smoothing to a \(6.5^{\prime\prime}\) beam) onto a common pixel grid and mosaicked them together in the image plane. The image combination was weighted by the inverse of the local rms level in each image. Where the Miller et al. (2009a) and Chen et al. (2020) images overlap, the majority of the weight in the combined mosaic is thus given to the deeper Chen et al. (2020) data. In most cases the Miller et al. (2009a) images are not deep enough to detect the stripped tails; however, these data still provide important insight into the spectral properties over the discs of galaxies not covered by the Chen et al. (2020) observations. We note that the largest angular scale for the VLA in B-configuration is \(120^{\prime\prime}\) for full synthesis observations4. This is nearly identical to the angular size of the largest source in our sample at \(144\,\mathrm{MHz}\), therefore for the longest tails in our sample (e.g. NGC4848, MRK0057) it is possible that there is a small amount of missing flux density at \(1.5\,\mathrm{GHz}\). That said, for an aging synchrotron population, it is expected that the observed tail length will scale inversely with the square root of the frequency (Ignesti et al. 2022b) and therefore we expect that the tail lengths at \(1.5\,\mathrm{GHz}\) will be intrinsically shorter than at \(144\,\mathrm{MHz}\) (by roughly a factor of three). This would bring the expected tail lengths at \(1.5\,\mathrm{GHz}\) within the B-configuration largest angular scale. Footnote 4: [https://science.nrao.edu/facilities/vla/docs/manuals/propvla](https://science.nrao.edu/facilities/vla/docs/manuals/propvla)
### Ram pressure stripped galaxy sample
Roberts et al. (2021a) and Roberts et al. (2021b) published a large sample of \(\sim 150\) galaxies with one-sided radio tails at \(144\,\mathrm{MHz}\) that are likely undergoing ram pressure stripping in low-\(z\) groups and clusters. This sample includes 29 galaxies in the Coma Cluster. The galaxy sample used in this work includes all ram pressure stripped Coma galaxies from Roberts et al. (2021a) that are also within the observing area of at least one of the \(400\,\mathrm{MHz}\), \(700\,\mathrm{MHz}\), and \(1.5\,\mathrm{GHz}\) datasets described above. This restriction results in 25 galaxies with observations at at least two frequencies. We also included two supplemental ram pressure stripped galaxies from Coma in our sample. These two galaxies, NGC4911 and KUG1258+287, were not part of the Roberts et al. (2021a) sample for technical reasons, but do show one-sided stripped tails at \(144\,\mathrm{MHz}\) (and other frequencies). NGC4911 is known to have an H\(\alpha\) stripped tail identified by Yagi et al. (2010), and also shows a clear one-sided tail in LoTSS at \(144\,\mathrm{MHz}\) (see Fig. 10) in the same direction as the H\(\alpha\) tail. The reason that NGC4911 was not included in the Roberts et al.
(2021a) sample, despite the radio continuum tail, is that NGC4911 has a specific star formation rate (sSFR \(=\mathrm{SFR}/M_{\star}=10^{-11.2}\,\mathrm{yr}^{-1}\)) that falls just below the star-forming galaxy selection of sSFR \(\gtrsim 10^{-11}\,\mathrm{yr}^{-1}\) imposed by Roberts et al. (2021a). However, NGC4911 clearly shows signatures of ongoing star formation (Gregg et al. 2003; Yagi et al. 2010) and we opt to include it in our sample for completeness. KUG1258+287 has a one-sided, \(\sim 40\,\mathrm{kpc}\) tail at \(144\,\mathrm{MHz}\) in LoTSS (see Fig. 11) but lacks an SDSS redshift, presumably due to its close proximity to a bright foreground star. The Roberts et al. (2021a) sample relies on SDSS spectroscopic redshifts and therefore does not include KUG1258+287. However, KUG1258+287 does have a spectroscopic redshift of \(z=0.0298\) from Edwards & Fadda (2011), and thus has a projected velocity offset of \(\sim 1700\,\mathrm{km}\,\mathrm{s}^{-1}\) from the systemic velocity of Coma. The velocity offset, alongside a projected cluster-centric radius of \(0.34\,\mathrm{R}_{180}\), places KUG1258+287 as a Coma member galaxy according to the criteria of Roberts et al. (2021a), and therefore we include it in our ram pressure galaxy sample. This results in a total ram pressure stripping galaxy sample of 25 galaxies. We list these galaxies along with their general properties and frequency coverage in Table 1. In Fig. 1 we show the local rms measured from \(\sim 3^{\prime}\) cutout images centred on each galaxy, for each of our frequency bands. Generally speaking, we reach comparable sensitivity to freshly injected electrons in all frequency bands (\(\alpha\approx-0.7\)), but in most cases do not reach equivalent sensitivity in our high-frequency bands to steep-spectrum emission (\(\alpha\lesssim-1\)) detected by the LOFAR HBA.
Figure 1: One-sigma sensitivity (rms) as a function of frequency for each of the observing bands in this work. All values are measured from images at our working resolution of \(6.5^{\prime\prime}\). Data points correspond to the local rms measured around each galaxy. For the L-band, the filled markers correspond to galaxies that are covered by the Chen et al. (2020) observations and the open markers correspond to galaxies only covered by the Miller et al. (2009a) observations. Shaded bands correspond to the width of each observing band. For reference we plot lines corresponding to spectral indices of \(-0.5\), \(-1\), and \(-1.5\) anchored to the lowest frequency band.
The points skewed to high rms at 1.5 GHz in Fig. 1 correspond to those galaxies that are covered by the L-band observations from Miller et al. (2009a) but not from Chen et al. (2020).
## 3 Integrated spectral properties: galaxy and tail
We measured integrated spectral indices for each galaxy disc and corresponding stripped tail in our sample. Flux densities for the galaxy disc were measured over an elliptical aperture centred on the right ascension and declination of each galaxy (see Table 1). The semi-major axis of the ellipse was taken to be the Petrosian 90% light radius (listed in Table 1) and the axis ratio and position angle were determined through 2D Sérsic fits. All structural parameters were measured in the \(r\)-band and obtained from the NASA-Sloan Atlas5.
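A minimal sketch of such an elliptical-aperture measurement on a radio map is given below (using photutils). The centre, pixel scale, axis ratio, and position angle are placeholders, and the conversion from the Jy/beam aperture sum to a flux density is only indicated schematically; it is not a statement of the authors' exact measurement code.

```python
import numpy as np
from astropy.io import fits
from photutils.aperture import EllipticalAperture, aperture_photometry

# Illustrative cutout image in units of Jy/beam
image = np.squeeze(fits.getdata("lotss_cutout_ngc4848.fits"))

# Aperture geometry (all numbers are placeholders): centre in pixels,
# semi-major axis from the Petrosian r90, axis ratio b/a, and position angle
xc, yc = 150.0, 150.0
pix_scale = 1.5                      # arcsec per pixel (assumed)
a_pix = 24.7 / pix_scale             # r90 ~ 24.7" (NGC4848, Table 1)
b_pix = 0.5 * a_pix                  # assumed axis ratio
aperture = EllipticalAperture((xc, yc), a=a_pix, b=b_pix, theta=np.deg2rad(30.0))

phot = aperture_photometry(image, aperture)

# Convert the Jy/beam sum into a flux density by dividing by the beam area in pixels
beam_area_pix = 1.1331 * (6.5 / pix_scale) ** 2   # Gaussian beam, 6.5" FWHM
flux_density_jy = phot["aperture_sum"][0] / beam_area_pix
```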
Footnote 5: [http://nsatlas.org/](http://nsatlas.org/) Flux densities for the stripped tails were measured over rectangular apertures that extend from the outer edge of the galaxy disc (as defined above) along the direction of the tail. The length and width of these apertures were set by hand in order to enclose tail emission within the 2\(\sigma\) contour at 144 MHz. We measured tail flux densities over the same aperture for each frequency band. The galaxy disc and tail apertures for each galaxy are shown in Appendix B. A minority of galaxies in the sample overlap, in projection, with the diffuse radio halo in Coma that is well detected by LOFAR at 144 MHz (Bonafede et al., 2022). Thus direct 144 MHz flux density measurements for these galaxies will include both a contribution from CREs originating from the galaxy and from CREs originating from the radio halo. To account for this, we measure the median 'background' level around (but not including) each galaxy and its stripped tail. This level is then taken to be the local level of the radio halo and is subtracted off of each galaxy cutout image. The flux densities quoted in Table 2, and used throughout the rest of the paper, are then measured from images after this subtraction. While the radio halo is not detected in our 400 MHz, 700 MHz, or 1.5 GHz images, we still follow the same procedure when measuring flux densities at these higher frequencies in case there is any low-level contribution from the halo that is not immediately obvious from the images. We are implicitly assuming here that the brightness of the radio halo is constant over the length scale of the galaxy plus stripped tail (\(\sim 10-50\) kpc for our sample). We used a threshold of 3\(\sigma\) to determine whether or not significant emission was detected in each aperture. If the measured flux density was below this threshold then we instead quote a 3\(\sigma\) upper limit. We consider two sources of error on the measured flux densities. First a random error set by the noise in the image, and second an error on the flux calibration which is specific to each observing band. To measure the random error we measure flux densities within 500 randomly-positioned apertures in source-free regions of the image surrounding the target galaxy. We then take the random error to be the sigma-clipped standard deviation of the resulting 500 aperture flux densities. For flux calibration we assume a relative error of 10% for LOFAR HBA, 5% for uGMRT B3 and B4, and 5% for VLA L-band. The total error on the flux density (\(\delta S_{\nu}\)) in each aperture is then given by the sum in quadrature of the random error and the calibration error. Generally speaking, the flux calibration uncertainties are the dominant sources of error. Measured flux densities are listed in Table 2. The 'disc + tail' flux densities from Table 2 agree well with measured flux densities for overlapping sources from Lal et al. (2022) (see Table 3 in Lal et al. 2022). Any small differences are due to the fact that our disc apertures, while internally consistent for this paper, do not always enclose all of the radio emission around the galaxies in our sample (see images in Appendix B). Spectral indices were calculated using two different methods, depending on the number of frequency bands where sig \begin{table} \begin{tabular}{l c c c c c c c c c c c} \hline \hline Galaxy & GMP & R.A. & Decl. 
& \(z\) & log \(M_{\bullet}\) & log SFR & \(r_{\rm B0}\) & 144 MHz & 400 MHz & 700 MHz & 1.4 GHz \\ & deg & deg & log(M\({}_{\odot}\)) & log(M\({}_{\odot}\) yr\({}^{-1}\)) & \({}^{\circ}\) & & & & & \\ (1) & (2) & (3) & (4) & (5) & (6) & (7) & (8) & (9) & (10) & (11) & \\ \hline NGC4848 & 4471 & 194.5235 & 28.2427 & 0.0240 & 10.77 \(\times\) 0.03 & 0.99 \(\pm\) 0.02 & 24.7 & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) \\ NGC4853 & 4156 & 194.6466 & 27.5964 & 0.0256 & 10.81 \(\pm\) 0.03 & 0.41 \(\pm\) 0.09 & 10.2 & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) \\ NGC4858 & 3816 & 194.7585 & 28.115 & 0.0341 & 10.16 \(\pm\) 0.05 & 0.56 \(\pm\) 0.06 & 10.1 & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) \\ NGC4911 & 2374 & 195.2337 & 27.7960 & 0.2616 & 1.21 \(\pm\) 0.04 & 0.05 \(\pm\) 0.51 & 34.1 & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) \\ IC3913 & 5422 & 194.1191 & 27.2913 & 0.0251 & 0.97 \(\pm\) 0.04 & \(-0.12\pm\) 0.07 & 22.4 & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) \\ IC3949 & 3396 & 194.7334 & 27.8334 & 0.0251 & 10.60 \(\pm\) 0.022 & \(-0.16\pm\) 0.06 & 20.3 & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) \\ IC4040 & 2559 & 155.1880 & 28.0574 & 0.0255 & 10.28 \(\pm\) 0.07 & 0.64 \(\pm\) 0.07 & 15.1 & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) \\ KUG1250+276 & – & 193.2088 & 27.4018 & 0.0285 & 9.75 \(\pm\) 0.06 & 1.14 \(\pm\) 0.07 & 10.6 & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) \\ IC401525+275 & 4351 & 194.5776 & 27.3108 & 0.0247 & 9.70 \(\pm\) 0.07 & \(-0.09\pm\) 0.05 & 9.1 & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) \\ KUG1255+283 & 4555 & 194.9095 & 28.0618 & 0.0271 & 9.92 \(\pm\) 0.06 & 0.07 \(\pm\) 0.11 & 17.8 & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) \\ KUG1256+2878 & 3509 & 194.8468 & 28.4887 & 0.0284 & 9.57 \(\pm\) 0.04 & \(-0.50\pm\) 0.06 & 9.0 & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) \\ KUG1257+287 & 3271 & 194.9159 & 27.5765 & 0.0167 & 9.04 \(\pm\) 0.06 & \(-0.66\pm\) 0.07 & 11.6 & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) \\ KUG1257+2888 & 3253 & 194.9172 & 28.6308 & 0.0178 & 9.37 \(\pm\) 0.07 & \(-0.32\pm\) 0.03 & 16.7 & \(\checkmark\) & \(\checkmark\) & \(\times\) & \(\times\) & \(\times\) \\ IC401528+287 & 2536 & 195.1691 & 28.5190 & 0.0298 & – nificant flux density was detected. When three or more frequency bands are detected, we calculated the spectral index as the slope of the best-fit linear relationship between \(\ln S_{\nu}\) and \(\ln\nu\). Fits are done using orthogonal distance regression, specifically scipy.odr, and the uncertainty on the spectral index is taken to be the statistical uncertainty on the best-fit slope. 
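As a concrete illustration of this fitting step, the sketch below fits a power law in log-log space with scipy.odr using the NGC4848 disc flux densities quoted in Table 2; the way the flux-density uncertainties are propagated into log space is our assumption rather than the exact weighting used by the authors.

```python
import numpy as np
from scipy import odr

# Frequencies [Hz] and NGC4848 disc flux densities [mJy] with uncertainties (Table 2)
nu = np.array([144e6, 400e6, 700e6, 1.5e9])
s_mjy = np.array([69.4, 47.3, 36.5, 23.6])
s_err = np.array([7.0, 4.7, 3.7, 2.4])


def log_powerlaw(beta, log_nu):
    # ln S = alpha * ln(nu) + constant: a straight line in log-log space
    return beta[0] * log_nu + beta[1]


# Propagate flux-density errors into log space: d(ln S) ~ dS / S
data = odr.RealData(np.log(nu), np.log(s_mjy), sy=s_err / s_mjy)
fit = odr.ODR(data, odr.Model(log_powerlaw), beta0=[-0.7, 17.0]).run()
alpha, alpha_err = fit.beta[0], fit.sd_beta[0]
print(f"alpha = {alpha:.2f} +/- {alpha_err:.2f}")  # close to the -0.45 quoted for the NGC4848 disc
```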
When only two frequency bands are detected, we calculated spectral indices via the direct method, namely \[\alpha=\frac{\ln(S_{\nu,1}/S_{\nu,2})}{\ln(\nu_{1}/\nu_{2})}\pm\delta\alpha, \tag{1}\] \begin{table} \begin{tabular}{l c c c c c c} \hline \hline Galaxy & Component & \multicolumn{3}{c}{\(S_{\nu}\) [mJy]} & \multicolumn{3}{c}{\(\alpha_{\rm disc}\)} & \(\alpha_{\rm tail}\) \\ & & 144 MHz & 400 MHz & 700 MHz & 1.4 GHz & & \\ \hline NGC4848 & Disc & \(69.4\pm 7.0\) & \(47.3\pm 4.7\) & \(36.5\pm 3.7\) & \(23.6\pm 2.4\) & \(-0.45\pm 0.03\) & \(-1.2\pm 0.2\) \\ & Tail & \(28.0\pm 2.9\) & \(5.1\pm 0.7\) & \(3.3\pm 0.7\) & \(2.2\pm 0.2\) & \(-0.45\pm 0.03\) & \(-1.2\pm 0.2\) \\ NGC4853 & Disc & \(10.3\pm 1.0\) & \(6.0\pm 0.6\) & \(4.4\pm 0.4\) & \(2.7\pm 0.3\) & \(-0.57\pm 0.02\) & \(-1.3\pm 0.5\) \\ & Tail & \(2.2\pm 0.3\) & \(0.3\pm 0.1\) & \(0.4\pm 0.1\) & \(<0.3\) & & \\ NGC4858 & Disc & \(22.7\pm 2.3\) & \(16.7\pm 1.7\) & \(13.0\pm 1.3\) & \(7.5\pm 0.7\) & & \\ & Tail & \(16.9\pm 1.7\) & \(5.7\pm 0.6\) & \(3.7\pm 0.4\) & \(2.1\pm 0.1\) & & \\ NGC4911 & Disc & \(93.0\pm 9.4\) & \(51.1\pm 5.1\) & \(37.7\pm 3.8\) & \(22.2\pm 2.2\) & \(-0.60\pm 0.02\) & \(-1.7\pm 0.5\) \\ & Tail & \(9.5\pm 1.1\) & \(0.6\pm 0.3\) & \(<0.6\) & \(0.3\pm 0.1\) & & \\ IC3913 & Disc & \(11.9\pm 1.3\) & \(5.2\pm 0.6\) & \(3.6\pm 0.4\) & \(2.6\pm 0.4\) & \(-0.70\pm 0.07\) & \(-1.3\pm 0.0\) \\ & Tail & \(3.0\pm 0.4\) & \(0.8\pm 0.1\) & \(0.4\pm 0.1\) & \(<0.3\) & & \\ IC3949 & Disc & \(5.4\pm 0.6\) & \(3.4\pm 0.3\) & \(3.2\pm 0.3\) & \(1.8\pm 0.2\) & \(-0.44\pm 0.07\) & \(<-1.0\) \\ & Tail & \(1.4\pm 0.3\) & \(<0.3\) & \(<0.3\) & \(<0.1\) & & \\ IC4040 & Disc & \(56.3\pm 5.7\) & \(36.8\pm 3.7\) & \(27.0\pm 2.7\) & \(16.8\pm 1.7\) & \(-0.51\pm 0.04\) & \(-1.2\pm 0.1\) \\ & Tail & \(14.3\pm 1.6\) & \(6.0\pm 0.7\) & \(2.2\pm 0.3\) & \(1.0\pm 0.1\) & & \\ & Disc & \(5.2\pm 0.6\) & \(3.5\pm 0.4\) & \(2.2\pm 0.2\) & — & \(-0.54\pm 0.10\) & \(-1.5\pm 0.4\) \\ & Tail & \(1.7\pm 0.3\) & \(0.4\pm 0.1\) & \(<0.3\) & — & \(-0.54\pm 0.10\) & \(-1.5\pm 0.4\) \\ & Disc & \(8.3\pm 0.8\) & \(5.9\pm 0.6\) & \(4.5\pm 0.4\) & \(3.4\pm 0.4\) & & \\ & Tail & \(11.4\pm 1.2\) & \(3.6\pm 0.4\) & \(2.4\pm 0.3\) & \(1.0\pm 0.3\) & \(-0.38\pm 0.02\) & \(-1.0\pm 0.1\) \\ & Tail & \(27.6\pm 2.8\) & \(17.0\pm 1.7\) & \(13.0\pm 1.4\) & \(7.8\pm 0.8\) & & \\ KUG1255+283 & Tail & \(5.8\pm 0.7\) & \(1.7\pm 0.3\) & \(<1.2\) & \(0.6\pm 0.1\) & \(-0.53\pm 0.03\) & \(-1.0\pm 0.1\) \\ & Disc & \(1.6\pm 0.2\) & \(0.5\pm 0.1\) & \(0.2\pm 0.1\) & \(0.4\pm 0.1\) & & \\ & Tail & \(0.8\pm 0.1\) & \(<0.3\) & \(<0.3\) & \(<0.2\) & & \\ & Disc & \(3.4\pm 0.4\) & \(2.2\pm 0.2\) & \(1.8\pm 0.2\) & \(1.1\pm 0.1\) & & \\ & Tail & \(1.0\pm 0.2\) & \(0.2\pm 0.1\) & \(<0.3\) & \(0.1\pm 0.1\) & & \\ & Tail & \(8.9\pm 0.9\) & \(4.1\pm 0.5\) & — & — & \(-0.76\pm 0.16\) & \(-1.9\pm 0.4\) \\ & Tail & \(7.4\pm 0.8\) & \(1.1\pm 0.4\) & — & — & \\ & Disc & \(9.8\pm 1.0\) & \(6.1\pm 0.6\) & — & \(3.3\pm 0.4\) & \(-0.47\pm 0.00\) & \(-1.7\pm 0.2\) \\ & Tail & \(16.9\pm 1.7\) & \(2.9\pm 0.5\) & — & \(<2.1\) & & \\ & Tail & \(12.2\pm 1.3\) & \(8.1\pm 0.8\) & \(7.7\pm 0.8\) & \(4.5\pm 0.5\) & \(-0.40\pm 0.07\) & \(-1.1\pm 0.3\) \\ & Tail & \(10.9\pm 1.2\) & \(3.3\pm 0.4\) & \(0.7\pm 0.3\) & \(1.2\pm 0.1\) & & \\ KUG1259+279 & Tail & \(4.8\pm 0.6\) & \(<0.9\) & — & — & \(-0.60\pm 0.14\) & \(<-1.7\) \\ & Disc & \(10.0\pm 1.0\) & \(5.8\pm 0.6\) & — & — & \(-0.53\pm 0.14\) & \(-2.4\pm 0.6\) \\ & Tail & \(6.1\pm 0.7\) & \(0.5\pm 0.3\) & — & — & \(-0.53\pm 0.14\) & \(-2.4\pm 0.6\) \\ & Disc 
& \(12.3\pm 1.2\) & \(8.7\pm 0.9\) & \(6.2\pm 0.6\) & \(4.5\pm 0.5\) & & \\ MRK53 & Tail & \(13.2\pm 1.4\) & \(5.7\pm 0.6\) & \(2.8\pm 0.3\) & \(<1.5\) & \(-0.44\pm 0.03\) & \(-1.0\pm 0.1\) \\ & Disc & \(10.8\pm 1.1\) & \(7.3\pm 0.7\) & \(5.7\pm 0.6\) & \(4.2\pm 0.4\) & \(-0.41\pm 0.01\) & \(-1.0\pm 0.1\) \\ & Tail & \(6.6\pm 0.7\) & \(2.7\pm 0.3\) & \(1.3\pm 0.2\) & \(<0.6\) & & \\ MRK57 & Disc & \(12.2\pm 1.2\) & \(8.0\pm 0.8\) & \(5.8\pm 0.6\) & \(4.6\pm 0.5\) & & \\ & Tail & \(20.4\pm 2.1\) & \(7.2\pm 0.8\) & \(2.5\pm 0.3\) & \( with \[\delta\alpha=\frac{1}{\ln(\nu_{1}/\nu_{2})}\sqrt{\left(\frac{\delta_{1}}{S_{\nu,1}} \right)^{2}+\left(\frac{\delta_{2}}{S_{\nu,2}}\right)^{2}}. \tag{2}\] Spectral indices for both disc and tail regions are listed in Table 2. In Fig. 2 (top) we show the spectral index integrated over the tail versus the spectral index integrated over the disc for each galaxy in our sample. For the galaxy discs, spectral indices are broadly consistent with the typical injection spectral indices expected from star formation (\(\sim-0.8\) to \(-0.5\), e.g. Bell, 1978; Bogdan & Volk, 1983; Condon, 1992). The median spectral index over galaxy discs is \(-0.47\pm 0.1\). This agrees with the mean \(144\,\mathrm{MHz}-1.5\,\mathrm{GHz}\) spectral index of \(-0.55\pm 0.14\) for the sample of 76 nearby star-forming galaxies from Heesen et al. (2022). Thus we do not find evidence for different broadband spectral indices between 'normal' star-forming discs and galaxy discs undergoing ram pressure stripping. This is consistent with previous work in the Virgo Cluster that makes the same conclusion based off of spectral index measurements at gigahertz frequencies (Vollmer et al., 2010). There is a collection of galaxies with spectral indices that are marginally flatter than typical injection values (i.e. \(>-0.5\)). We discuss potential origins of this spectral flattening in more detail in Sect. 6.1, but it is likely connected to increased ionization losses (due to ISM compression from ram pressure) and/or shock acceleration due to the galaxy-ISM interaction. Spectral indices integrated over the tail region are clearly steeper than the disc spectral indices. This is true for the sample as a whole but also true for each galaxy individually. The contrast between disc and tail spectral indices is consistent with a framework where the synchrotron emission from the tail is due to aged cosmic ray electrons removed from the galaxy disc by ram pressure (see also: Chen et al., 2020; Muller et al., 2021; Ignesti et al., 2022; Venturi et al., 2022). We extend our analysis of spectral aging along the tail in Sect. 5.2, where we fit flux density profiles along the tail with a synchrotron aging model in order to derive characteristic stripping timescales for this sample of galaxies. There has been some evidence for flatter-than-injection spectral indices, specifically at low frequencies, in ram pressure stripped galaxies (Ignesti et al., 2022; Roberts et al., 2022). To probe whether this is the case for our sample we measured low- and high-frequency spectral indices for each galaxy disc. The low-frequency spectral index (\(\alpha_{\mathrm{low,\,disc}}\)) is measured between \(144\,\mathrm{MHz}\) and \(400\,\mathrm{MHz}\) and the high-frequency spectral index (\(\alpha_{\mathrm{high,\,disc}}\)) is measured between \(700\,\mathrm{MHz}\) and \(1.5\,\mathrm{GHz}\). The results are shown in Fig. 2 (bottom) where we plot the corresponding colour-colour diagram. 
Given the short separations between frequency bands, the error bars on \(\alpha_{\mathrm{low,\,disc}}\) and \(\alpha_{\mathrm{high,\,disc}}\) are large and the galaxy discs are broadly consistent with no curvature, i.e. \(\alpha_{\mathrm{low,\,disc}}=\alpha_{\mathrm{high,\,disc}}\). The median value for \(\alpha_{\mathrm{low,\,disc}}\) is \(-0.43\pm 0.20\) and \(-0.59\pm 0.24\) for \(\alpha_{\mathrm{high,\,disc}}\). This may be a hint of spectral curvature and flat low-frequency spectral indices in these galaxies, but we do not have the statistical power to confirm this.

Figure 2: _Top:_ Tail spectral index versus galaxy disc spectral index. When emission is detected in three or more frequency bands, the spectral index and its error are determined from orthogonal distance regression fits. When emission is only detected in two frequency bands, the spectral index and its error are determined via the direct method (Eq. 1). For galaxies where emission in the tail is only detected at one frequency (see Table 2), \(\alpha_{\mathrm{tail}}\) is shown as a \(3\sigma\) upper limit. _Bottom:_ Radio colour-colour diagram for the galaxy discs. The high-frequency spectral index (\(y\)-axis) is measured between \(700\,\mathrm{MHz}\) and \(1.5\,\mathrm{GHz}\) and the low-frequency spectral index (\(x\)-axis) is measured between \(144\,\mathrm{MHz}\) and \(400\,\mathrm{MHz}\). In both the top and bottom panels the dashed lines correspond to the one-to-one relation.

## 4 Disc spectral index maps

In Fig. 3 we show non-thermal spectral index maps covering the disc of each galaxy; as mentioned in Sect. 2.1.3, the thermal contribution is largely negligible at these frequencies. For the majority of galaxies, \(\alpha\) is measured using fluxes at \(144\,\mathrm{MHz}\) and \(1.5\,\mathrm{GHz}\), in order to maximize the lever arm between the two frequencies, which in turn minimizes the uncertainty of the pixel-by-pixel spectral index measurements. For galaxies that are not covered by the \(1.5\,\mathrm{GHz}\) imaging, we measure the spectral index between \(144\,\mathrm{MHz}\) and the highest available frequency. This corresponds to \(700\,\mathrm{MHz}\) for KUG1250+276, and \(400\,\mathrm{MHz}\) for KUG1257+288B, KUG1259+279, and KUG1259+284. Each panel in Fig. 3 lists the highest frequency that is used to derive the spectral index map. The disc region is defined as it was in the previous section, the \(r\)-band Petrosian 90% light radius, and is shown by the dashed lines in Fig. 3. We only include pixels in the spectral index map that have \(>2\sigma\) detections in both of the frequency bands that are used. For reference, when only including calibration uncertainties (which dominate over the galaxy discs), the typical uncertainty on the spectral index maps between \(144\,\mathrm{MHz}-1.5\,\mathrm{GHz}\) is \(\sim 0.05\). For spectral index maps between \(144\,\mathrm{MHz}-700\,\mathrm{MHz}\) and \(144\,\mathrm{MHz}-400\,\mathrm{MHz}\), the typical uncertainties are \(\sim 0.07\) and \(\sim 0.11\), respectively.

Figure 3: Spectral index maps for each galaxy disc in our sample. Spectral indices are measured between 144 MHz and \(\nu_{\mathrm{max}}\), as listed in each figure panel. The dashed ellipse in each panel corresponds to the \(r\)-band Petrosian 90% radius, the 'x' marks the optical galaxy centre, and the filled circle shows the \(6.5\arcsec\) beam FWHM. In the lower right of each panel we show an arrow which points in the projected direction of the stripped tail. The typical uncertainties on the spectral index maps are \(\sim 0.05\) between \(144\,\mathrm{MHz}-1.5\,\mathrm{GHz}\), \(\sim 0.07\) between \(144\,\mathrm{MHz}-700\,\mathrm{MHz}\), and \(\sim 0.11\) between \(144\,\mathrm{MHz}-400\,\mathrm{MHz}\).

For 18/25 galaxies, the spectral index at the galaxy centre (marked by the 'x' in Fig. 3) is consistent with values typical of recent injection from star formation (i.e. \(-0.8\) to \(-0.5\)). The remaining seven galaxies have central spectral indices that are flatter than \(-0.5\). In the bottom right of each panel in Fig. 3 we also show an arrow pointing in the projected direction of the radio continuum tail for each galaxy.

For many galaxies in Fig. 3, the synchrotron emission is truncated within the stellar disc, which reflects the outside-in nature of ram pressure stripping. In particular, galaxies are preferentially 'missing' radio emission on their leading halves (i.e. opposite to the direction of the tail), corresponding to CREs being transported from the leading half along the direction of the ram pressure wind to form the stripped radio tail. This truncation is present, to varying extents, for 14/25 galaxies. Those galaxies with radio emission that fills the optical disc may represent galaxies at an earlier stage of stripping, though we note that the majority of galaxies that do not show this truncation are also only marginally resolved at \(6.5\arcsec\). For these galaxies there may still be a truncated CRE distribution that is masked by the beam size. We show spectral index maps in Fig. 3, which require detections at both frequencies, but note that this truncation signature is not a product of differences in depth between the two frequencies. When plotting single-frequency flux density maps instead, the same truncation signature is visible regardless of which frequency is used.

Across the disc there is a gradient pattern in the spectral index maps that is present for a majority of galaxies in the sample. For galaxies that show a spectral index gradient, the orientation of this gradient is roughly aligned with the direction of the stripped tail. The steepest spectral indices are generally found in the direction of the tail (relative to the galaxy centre). This pattern is consistent with cosmic rays being transported from 'further upstream' in the galaxy and aging as they are transported along the tail direction. For most galaxies in Fig. 3, the flattest spectral indices are not always found at the galaxy centre, but instead are often on the leading half of the galaxy, opposite to the direction of the stripped tail. We quantitatively demonstrate this tendency for the flattest spectral indices to be found on the leading halves of galaxies by identifying the pixel in the spectral index map with the flattest spectrum, given by pixel coordinates \((i_{\rm flat},j_{\rm flat})\). When determining \(i_{\rm flat}\) and \(j_{\rm flat}\) we filtered each spectral index map with a \(3\times 3\) uniform filter so that each pixel was averaged with its eight nearest neighbours. We also only considered pixels where all eight of the surrounding pixels have detected spectral indices; in other words, we did not consider pixels on the edge of the map. Lastly, given the pixel \((i_{\rm flat},j_{\rm flat})\), we calculated the radial offset of this pixel from the optical galaxy centre (\(r_{\rm flat}\)) as well as the angular orientation between this pixel and the direction of the radio tail (\(\theta_{\rm flat}\)).
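As an illustration of the map-making and peak-finding steps described above, the sketch below builds a two-band spectral index map with a \(2\sigma\) cut and locates the flattest \(3\times 3\)-averaged pixel. It assumes the two images are already convolved to the common beam and share the same pixel grid; array names and thresholds are placeholders rather than the exact pipeline used here.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def spectral_index_map(img1, rms1, img2, rms2, nu1, nu2, snr_cut=2.0):
    """Pixel-by-pixel spectral index between two matched-resolution images.
    Pixels below snr_cut * rms in either band are masked with NaN."""
    good = (img1 > snr_cut * rms1) & (img2 > snr_cut * rms2)
    alpha = np.full(img1.shape, np.nan)
    alpha[good] = np.log(img1[good] / img2[good]) / np.log(nu1 / nu2)
    return alpha

def flattest_pixel(alpha_map):
    """(i_flat, j_flat): location of the flattest spectral index after 3x3 averaging.
    NaNs propagate through the box filter, so any pixel whose 3x3 neighbourhood is
    incomplete (map edges or masked pixels) is excluded from the search."""
    smoothed = uniform_filter(alpha_map, size=3, mode="constant", cval=np.nan)
    if np.all(np.isnan(smoothed)):
        return None
    return np.unravel_index(np.nanargmax(smoothed), smoothed.shape)
```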
Formally, \(\theta_{\rm flat}\) is defined such that \[\cos\theta_{\rm flat}=\@vec{u}_{\rm flat}\cdot\@vec{u}_{\rm tail}, \tag{3}\] where \(\@vec{u}_{\rm flat}\) and \(\@vec{u}_{\rm tail}\) are unit vectors pointing between the optical galaxy centre and \((i_{\rm flat},j_{\rm flat})\) and away from the galaxy centre in the direction of the radio tail, respectively. In this definition, \(\theta_{\rm flat}=180\deg\) corresponds to \((i_{\rm flat},j_{\rm flat})\) being directly opposite to the tail direction and \(\theta_{\rm flat}>90\deg\) corresponds to \((i_{\rm flat},j_{\rm flat})\) being on the leading half of the galaxy disc.

In Fig. 4 we show \(\theta_{\rm flat}\) (azimuthal axis) and \(r_{\rm flat}\) (radial axis) for each galaxy disc on a polar plot. It is immediately clear that the distribution of \(\theta_{\rm flat}\) values does not uniformly cover the parameter space but instead is skewed towards large values, with the majority of galaxies having \(120\deg\leq\theta_{\rm flat}\leq 180\deg\) and \(r_{\rm flat}\) offset from zero. Only for a minority of galaxies in the sample is \(r_{\rm flat}\) smaller than the HWHM beam size. These cases are consistent with the spectral index being flattest at the galaxy centre, and thus \(\theta_{\rm flat}\) is not particularly meaningful in this scenario. Such an offset between the galaxy centre and the location of the flattest spectral index for galaxies undergoing ram pressure stripping was first observed for NGC4522 in the Virgo Cluster by Vollmer et al. (2004), and seems to be relatively commonplace for such galaxies in the Coma Cluster. While a majority of galaxies show this pattern, not all do. For example, IC4040, KUG1255+275, D100, and GMP3618 all have spectral index distributions that are quite symmetric about the galaxy centre, though KUG1255+275, D100, and GMP3618 are also only marginally resolved. KUG1257+288B shows an irregular spectral index map. There is a region of flat emission on the leading half and then a second region of flat emission to the north that appears to follow a spiral arm in the galaxy (see Fig. 14). We also note that the short frequency spacing (144 MHz to 400 MHz), and the relatively low S/N detection of KUG1257+288B at 400 MHz, may be contributing to the irregular map in Fig. 3.

## 5 Spectral properties along the tail

### Radio tail flux-density profiles

For each galaxy, and for each frequency band, we extracted flux-density profiles along the observed radio tail in rectangular apertures. The height (i.e. the length parallel to the tail) of each aperture is \(7\arcsec\approx 3.5\) kpc, slightly larger than the FWHM common beam size of \(6.5\arcsec\), and the width (i.e. the length perpendicular to the tail) of each aperture differs from galaxy to galaxy and is set in order to enclose the lowest contour level (i.e. \(2\sigma\)) shown in Appendix B. The uncertainty on flux density measurements was computed following the procedure described in Sect. 3.

Figure 4: In polar coordinates, the location of the peak (i.e. flattest) spectral index within each disc, relative to the direction of the radio tail. The azimuthal axis shows \(\theta_{\rm flat}\) and the radial axis shows \(r_{\rm flat}\) (see text for details). The shaded region corresponds to \(3.25\arcsec\), the half-width at half-maximum of the beam. Angles greater than \(90\degr\) correspond to points on the leading half (LH) and angles less than \(90\degr\) correspond to points on the trailing half (TH). The peak spectral index is systematically found on the leading half (\(\theta_{\rm flat}>90\degr\)) of ram pressure stripped galaxies.
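The quantities plotted in Fig. 4 follow from Eq. (3) once the flattest pixel, the optical centre, and the tail direction are expressed in a common frame. The sketch below works in simplified pixel coordinates with an assumed angle convention; the actual measurements are made in sky coordinates.

```python
import numpy as np

def flat_pixel_geometry(ij_flat, ij_centre, tail_angle_rad, pix_scale_arcsec):
    """Return (r_flat [arcsec], theta_flat [deg]) as in Eq. (3).

    ij_flat, ij_centre : (row, col) pixel coordinates of the flattest pixel and centre
    tail_angle_rad     : tail direction in the same (col, row) frame [rad]
    """
    dy = ij_flat[0] - ij_centre[0]
    dx = ij_flat[1] - ij_centre[1]
    r_pix = np.hypot(dx, dy)
    if r_pix == 0:
        return 0.0, np.nan  # orientation undefined at the galaxy centre
    u_flat = np.array([dx, dy]) / r_pix          # unit vector centre -> flattest pixel
    u_tail = np.array([np.cos(tail_angle_rad), np.sin(tail_angle_rad)])
    cos_theta = np.clip(u_flat @ u_tail, -1.0, 1.0)
    return r_pix * pix_scale_arcsec, np.degrees(np.arccos(cos_theta))
```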
The extent of the tail flux-density profiles is determined by the length of the observed tail at \(144\,\mathrm{MHz}\), as for all galaxies the observed tails are longest in this frequency band. The first tail aperture is set such that the inner edge of the aperture is set against the edge of the optical galaxy disc. Apertures continue along the tail in \(7^{\prime\prime}\) steps until significant emission is no longer detected in the \(144\,\mathrm{MHz}\) band. We consider significant emission to be apertures where \(S_{\nu}>3\,\delta S_{\nu}\). This approach ensures that for all flux-density profiles there is detected emission in at least one frequency band, namely \(144\,\mathrm{MHz}\).

In Fig. 5 we show normalized flux-density for our four frequency bands as a function of distance along the tail, \(\ell\), which is defined such that \(\ell=0\) in the first bin (i.e. just off of the galaxy disc). The flux-density profiles are normalized by the value at \(\ell=0\) and therefore Fig. 5 shows the relative declines as a function of distance along the tail. We characterize the rate of this decline with a simple model of exponential decay, where the normalized flux density is given by \[\frac{S_{\nu}}{S_{\nu}(\ell=0)}=e^{-\ell/\ell_{s}}, \tag{4}\] where \(\ell_{s}\) is the scale length of the flux-density decline. For each panel in Fig. 5 we fit for the best-fit scale length in order to estimate a quantitative length scale associated with the ram pressure tails as a function of frequency. The best-fit exponential model is shown by the dashed lines in Fig. 5 and in Fig. 6 we show \(\ell_{s}\) as a function of frequency for the four bands in this work. For a very simple model of constant stripping velocity in a constant magnetic field with no re-acceleration, one expects that the observed tail lengths will decline as the square root of the frequency (Ignesti et al. 2022b). Therefore in Fig. 6 we also overplot the best-fit model of square-root decline as a function of frequency. This expectation broadly fits the observed scale lengths, signifying that this simple synchrotron aging model in a relatively constant magnetic field may be a reasonable approximation to reality for our galaxy sample, though we note that the scale length at \(1.5\,\mathrm{GHz}\) is a \(2-3\sigma\) outlier from this simple model. The scale lengths shown in Fig. 6 are derived from the full galaxy sample together, and thus should be thought of as a description of the average behaviour for ram pressure stripped radio tails in Coma.

Figure 5: Normalized flux-density radial profiles along the stripped tails. Each solid line and markers correspond to an individual galaxy in the sample. Line colours for individual galaxies are consistent across the four panels. The dashed line in each panel shows the best-fit exponential decline for each frequency band.

Figure 6: Exponential scale-length as a function of frequency for the radial profiles shown in Fig. 5. The solid and dashed lines show the best-fit model of square-root decline, and its \(1\sigma\) confidence interval. Such a decline is the expectation for the very simple scenario of constant stripping velocity in a uniform magnetic field (see text for details).
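One possible implementation of the scale-length fits of Eq. (4), together with the square-root frequency scaling expected for constant stripping speed in a uniform field, is sketched below; the \(\ell\) and flux arrays passed to these functions are placeholders for the measured profiles.

```python
import numpy as np
from scipy.optimize import curve_fit

def exp_profile(ell, ell_s):
    """Eq. (4): normalized flux density along the tail."""
    return np.exp(-ell / ell_s)

def fit_scale_length(ell_kpc, s_norm, s_err, guess_kpc=5.0):
    """Best-fit exponential scale length (and 1-sigma error) for one frequency band."""
    popt, pcov = curve_fit(exp_profile, ell_kpc, s_norm, sigma=s_err,
                           p0=[guess_kpc], absolute_sigma=True)
    return popt[0], float(np.sqrt(pcov[0, 0]))

def sqrt_decline(nu_ghz, ell_s_ref, nu_ref_ghz=0.144):
    """Expected scaling ell_s proportional to nu**-0.5, anchored at a reference band."""
    return ell_s_ref * np.sqrt(nu_ref_ghz / nu_ghz)
```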
### Synchrotron aging model Given the support for a scenario of spectral aging, both from the observed radial profiles and the difference in spectral index between the galaxy discs and tails, here we explicitly fitted a model of synchrotron aging to the observed spectra along the tail on a galaxy-by-galaxy basis. This model shares many qualitative similarities with the aging model from Ignesti et al. (2023), though the two differ in their specific implementations. One main difference is that the model from Ignesti et al. (2023) explicitly assumes that plasma is transported along the tail at a constant velocity, whereas this is not assumed in our aging model. We only applied our aging model to galaxies with detected flux density in at least one frequency band, for at least three apertures along the tail. Of the galaxies listed in Table 1, six do not satisfy this criterion (NGC4853, IC3949, KUG1250+276, KUG1256+287B, GMP3618, and GMP5226) and were therefore excluded from this portion of the analysis. Our aging model is based on a Jaffe and Perola (1973) model ('JP model') for a population of electrons subject to synchrotron and inverse-Compton losses. The JP model has three free parameters: the dimensionless normalization of the spectrum (\(S_{0}\)), the injection spectral index (\(\alpha_{0}\)), and the break frequency (\(\nu_{b}\)). We fit the radio continuum spectrum with a JP model for each distance bin along the stripped tail. For a given galaxy, we assumed that \(S_{0}\) and \(\alpha_{0}\) are constant across all distance bins of the tail. This is equivalent to assuming that the removal of cosmic rays from the galaxy, and subsequent transport along the tail, is roughly in a steady state over the radiative timescales that we are sensitive to (\(\sim 100\,\mathrm{Myr}\), depending on the magnetic field strength). Thus \(S_{0}\) and \(\alpha_{0}\) were fit simultaneously across all distance bins while a different value for \(\nu_{b}\) was fit for each bin. The result of this approach is that the relative difference in flux density, for a given frequency, across distance bins is determined solely by the location of the break frequency in the best-fit spectrum. For neighbouring distance bins \(d_{1}\), \(d_{2}\), where \(d_{2}>d_{1}\) (i.e. the distance \(d_{2}\) is further offset from the galaxy than \(d_{1}\)), we enforced that \(\nu_{b,1}>\nu_{b,2}\). In other words, our model requires the break frequency to shift monotonically to lower frequencies as you move along the stripped tail. This is an aging requirement and reflects the assumption that more distant electrons should have been stripped earlier and thus subject to greater radiative losses. We emphasize that the aging model described above may or may not provide a good description of the observed continuum spectra along the stripped tails. For example, if electrons in the stripped tails are subject to fresh injection or substantial re-acceleration (extra-planar star formation could be a potential source of this) then this model will likely be a poor descriptor of the data. Indeed, we intentionally defined our aging model in this way so that we can identify both stripped tails that are consistent with a simple synchrotron aging framework and those that are not. Furthermore, there are a number of assumptions that are implicitly made in our model construction, including: the assumption of a uniform magnetic field along each tail, that the plasma in each distance bin has the same geometrical properties (i.e. 
volume, filling fraction), that the break frequency is constant within each distance bin, and that adiabatic losses are negligible. The latter appears at least roughly true given the spectral index decline that has been observed along ram pressure stripped tails (e.g. Vollmer et al. 2004; Muller et al. 2021; Ignesti et al. 2022b), suggesting that the timescale for synchrotron losses is shorter than that for adiabatic losses. The remainder of these assumptions are difficult to validate observationally and thus add uncertainties to this model that are important to bear in mind when interpreting the subsequent scientific results.

The aging model was implemented using the \(\mathsf{synchrofit}\)6 package (Turner et al. 2018; Turner 2018) in Python to calculate theoretical spectra, assuming a JP model, given values for \(S_{0}\), \(\alpha_{0}\), and \(\nu_{b}\). We then found the best-fit values for \(S_{0}\), \(\alpha_{0}\) (constant across all distance bins), and \(\nu_{b}\) (varying across all distance bins) using Markov chain Monte Carlo (MCMC) implemented with the Python package emcee7 (Foreman-Mackey et al. 2013). For each distance bin we included all four frequencies when fitting, even if emission is not detected at one or more frequencies, though we did require that there is detected emission in at least one frequency band. For fitting purposes, when significant emission was not detected (for a given frequency and aperture), we included \(3\sigma\) upper limits in the likelihood following Sawicki (2012). We used the following uniform priors for all fit parameters:

\[S_{0}\in(1\times 10^{4},5\times 10^{8})\] \[\alpha_{0}\in(-0.8,-0.5)\] \[\nu_{b,i}\in\begin{cases}(0.05,10),&\text{if }i=1\\ (0.05,\nu_{b,i-1}),&\text{otherwise}\end{cases},\]

where \(S_{0}\) and \(\alpha_{0}\) are dimensionless and \(\nu_{b,i}\) is the break frequency for each of the distance bins along the tail in GHz. We assessed convergence using the auto-correlation time, \(\tau\), and considered our MCMC chain to be 'converged' if the chain length, \(N\), satisfied \(N>100\times\tau\). For each fit we ran the MCMC chain until this inequality was satisfied, which typically corresponded to a few hundred thousand steps.

Footnote 6: [https://github.com/synchrofit/synchrofit](https://github.com/synchrofit/synchrofit)

Footnote 7: [https://emcee.readthedocs.io/en/stable/](https://emcee.readthedocs.io/en/stable/)

In Fig. 7 we show an example of our spectral modelling for MRK0057, with panels in Fig. 7 showing the observed continuum spectrum and best-fit aging model at different distances, \(\ell\), along the tail. In Appendix B we include analogous plots for all galaxies in our sample, in addition to 'corner plots' showing the posterior distributions for all of the model parameters in our fits. We use Fig. 7 as an example to highlight some general characteristics of the spectral fitting results. MRK0057 is an example of a galaxy where the radio continuum spectra along the stripped tail are well described by our aging model. There are no clear offsets between the data and the best-fit model spectra in Fig. 7 for any of the distance bins. MRK0057 is also an example where the spectrum in the first distance bin, i.e. \(\ell=0\), is consistent with a powerlaw. In other words, we can only derive a lower limit on the break frequency. We find this to be the case for the majority (but not all) of the galaxies in our sample.
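To make the fitting procedure concrete, the sketch below assembles a prior with the monotonic break-frequency constraint and a Gaussian likelihood with a crude treatment of non-detections, driven by emcee. The spectral shape here is a simple power law with an exponential cutoff standing in for the true JP model (which the analysis above evaluates with synchrofit), and the upper-limit term is a simplified stand-in for the Sawicki (2012) prescription; it should be read as a schematic under these assumptions, not the actual implementation.

```python
import numpy as np
import emcee

def model_spectrum(nu_ghz, s0, alpha0, nu_b):
    # Stand-in for the JP spectrum: power law with an exponential cutoff
    return s0 * nu_ghz ** alpha0 * np.exp(-nu_ghz / nu_b)

def log_prior(theta):
    s0, alpha0, nu_b = theta[0], theta[1], theta[2:]
    if not (1e4 < s0 < 5e8 and -0.8 < alpha0 < -0.5):
        return -np.inf
    if np.any(nu_b <= 0.05) or nu_b[0] > 10.0 or np.any(np.diff(nu_b) >= 0):
        return -np.inf   # enforce nu_b,1 > nu_b,2 > ... (aging along the tail)
    return 0.0

def log_likelihood(theta, nu_ghz, flux, err, det):
    s0, alpha0, nu_b = theta[0], theta[1], theta[2:]
    logl = 0.0
    for i in range(len(nu_b)):                       # loop over distance bins
        model = model_spectrum(nu_ghz, s0, alpha0, nu_b[i])
        d = det[i]
        logl += -0.5 * np.sum(((flux[i, d] - model[d]) / err[i, d]) ** 2)
        if np.any(model[~d] > 3.0 * err[i, ~d]):     # crude 3-sigma upper-limit veto
            return -np.inf
    return logl

def log_prob(theta, nu_ghz, flux, err, det):
    lp = log_prior(theta)
    return lp + log_likelihood(theta, nu_ghz, flux, err, det) if np.isfinite(lp) else -np.inf

# Usage sketch (flux, err, det have shape [n_bins, n_freq]):
# sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob, args=(nu, flux, err, det))
# sampler.run_mcmc(p0, nsteps)
# tau = sampler.get_autocorr_time(tol=0)  # keep sampling until nsteps > 100 * tau
```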
Widening our frequency range moving forward will allow us to determine whether these cases are truly represented by powerlaw spectra, or if they are simply well approximated by a powerlaw over the frequency range considered here.

Figure 7: Radio continuum spectra and best-fit aging model along the stripped tail of MRK0057. Data points show observed flux densities and the solid black line shows the best-fit (median) aging model along with the 84%, 95%, and 99.7% credible regions (shading). Error bars on flux densities are determined following the procedure outlined in Sect. 3 and error bars on the frequency axis correspond to the bandwidth of the observing band. If significant (\(>3\sigma\)) flux density is not detected in an aperture then we show \(3\sigma\) upper limits. We list the best-fit break frequency (\(\nu_{b}\)) as well as the distance along the stripped tail (\(\ell\)) in each panel.

Generally speaking, all galaxies in our sample have continuum spectra along their observed tails that are broadly consistent with our aging model. Inspection of the spectrum plots for individual galaxies in Appendix B shows that the vast majority of flux-density measurements are consistent with the 99% model envelopes. Where there are deviations from the aging model they are most often found at high frequency. We show this more quantitatively with the parameter \(\Delta\) which we define for each frequency band as: \[\Delta_{\nu}=\frac{|S_{\nu}-S_{\nu,\rm model}^{50}|}{\sqrt{(\delta S_{\nu})^{2}+(\sigma_{\nu,\rm model})^{2}}}, \tag{5}\] where \(S_{\nu}\) and \(\delta S_{\nu}\) are the observed flux density and uncertainty, \(S_{\nu,\rm model}^{50}\) is the median model flux density, and \(\sigma_{\nu,\rm model}\) is the standard deviation of the model realizations at a given frequency, \(\nu\). Thus \(\Delta_{\nu}\) is a measure of the agreement between observations and the median model, accounting both for the observational error and the spread in the model posteriors.

Fig. 8 shows the distribution of \(\Delta_{\nu}\) for all measured flux densities across all galaxies in our sample, divided by frequency band. The vast majority of flux-density measurements have \(\Delta_{\nu}<2\), indicating modest deviations between model and data. This quantitatively confirms our previous statement that the majority of flux-density measurements along the tail are reasonably well reproduced by our aging model. It does seem that deviations between model and data become somewhat larger at high frequency (i.e. 1.5 GHz), though given the relatively small number of 1.5 GHz detections relative to the lower frequency bands, this statement is subject to small-number statistics. Fig. 8 only includes formal detections but we note that the vast majority of the upper limits along the tail are also consistent with the best-fit models (see Appendix B). The bulk of the model outliers (\(\Delta_{\nu}\ga 3\)) seen in Fig. 8 are from a single galaxy in our sample, NGC4858. In Appendix A we provide a detailed discussion of the possible origins of these model deviations at high frequency. In particular, we show that some, though likely not all, of the high-frequency deviations could be attributed to extraplanar star formation in the stripped tail.

Figure 8: Distributions of offsets (\(\Delta_{\nu}\), see Eq. 5) from best-fit models for each frequency band.

Lastly, now having established that our aging model is a good descriptor of the data in the vast majority of cases, in Fig. 9 we show the best-fit parameter values derived from this fitting. The top panel of Fig. 9 shows the best-fit injection spectral indices for each galaxy. Roughly half of the galaxies have best-fit injection indices between \(-0.7\) and \(-0.5\) and the other half have steeper spectral indices of \(\sim-0.8\), pushing up against the boundary of our prior. We note that the majority of galaxies with best-fit injection indices near \(-0.8\) have sparse frequency coverage (e.g. only covered by \(144\,\mathrm{MHz}\) and \(400\,\mathrm{MHz}\) imaging) and/or have a small number of flux density detections along the tail at frequencies higher than \(144\,\mathrm{MHz}\). Thus it may be that the relatively steep injection indices in Fig. 9 are due to some aging in the spectrum, but that the corresponding curvature is not distinguished as a result of the poor frequency coverage.

The bottom panel of Fig. 9 shows the best-fit break frequencies as a function of distance along the tail. By construction the break frequency decreases monotonically for a given galaxy with increasing distance along the tail. At fixed \(\ell\) there is substantial scatter in the break frequency across the galaxies in our sample. This may indicate a range of plasma ages across different tails, or equally possibly a range of magnetic field strengths (or both). The smallest break frequencies observed are \(\sim 100-200\,\mathrm{MHz}\). This reflects our requirement for detected emission at \(144\,\mathrm{MHz}\), which becomes increasingly less likely for \(\nu_{b}<100\,\mathrm{MHz}\). Probing potential extensions of the stripped tails where \(\nu_{b}<100\,\mathrm{MHz}\) may be possible moving forward with observations from the LOFAR low-band antenna (LBA).

### Stripping timescales

Given that our aging model shows broad agreement with measured tail flux-densities, we can constrain the timescales over which material is removed from these galaxies by estimating the radiative age of the plasma as a function of distance along the radio tail. The radiative age (Miley, 1980) is given by \[t_{\mathrm{rad}}\approx 3.2\times 10^{4}\,\frac{\sqrt{B}}{B^{2}+B_{\mathrm{CMB}}^{2}}\,\frac{1}{\sqrt{\nu_{b}(1+z)}}\,\,\mathrm{Myr}, \tag{6}\] where \(\nu_{b}\) is the break frequency in MHz, \(z\) is the source redshift, \(B\) is the magnetic field in \(\mu\)G, and \(B_{\mathrm{CMB}}\) is the CMB equivalent magnetic field given by \(B_{\mathrm{CMB}}=3.25(1+z)^{2}\)\(\mu\)G. Thus given the best-fit break frequency from our aging model, and an assumed magnetic field strength, we can estimate a radiative age for each distance bin along the observed radio tails.

In Fig. 10 we show \(t_{\mathrm{rad}}\) as a function of \(\ell\) for each of the galaxies in our sample. Since we do not have constraints on the magnetic field strength in the stripped tails, we consider a range of possible values. We set the low end of this range to be the minimum energy loss magnetic field, which is given by \(B_{\mathrm{min}}=B_{\mathrm{CMB}}/\sqrt{3}\approx 1.96\,\mu\)G for the Coma Cluster; we consider an intermediate field strength of \(6\,\mu\)G, and a maximum field strength of \(10\,\mu\)G. This maximum of \(10\,\mu\)G is set in order to encapsulate the small number of observational estimates on field strengths in ram pressure stripped tails in other clusters (Muller et al., 2021: \(2-4\,\mu\)G, Vollmer et al., 2021: \(6-7\,\mu\)G, Ignesti et al., 2022b: \(<10\,\mu\)G). Depending on the assumed magnetic field strength, we find radiative ages that increase from tens of megayears near the galaxy disc out to two or three hundred megayears at the edge of the tail.
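The conversion from break frequency to radiative age (Eq. 6), and the subsequent velocity estimate from the slope of a linear \(t_{\rm rad}\)–\(\ell\) fit, could be coded as follows. The Coma redshift used here is an assumed round value, and the \(\ell\) and \(\nu_{b}\) inputs in the commented call are placeholders.

```python
import numpy as np

Z_COMA = 0.023                       # assumed redshift of the Coma Cluster
B_CMB = 3.25 * (1.0 + Z_COMA) ** 2   # CMB-equivalent field [uG]

def radiative_age_myr(nu_b_mhz, b_ug, z=Z_COMA):
    """Eq. (6): radiative age [Myr] for break frequency nu_b [MHz] and field B [uG]."""
    return 3.2e4 * np.sqrt(b_ug) / (b_ug ** 2 + B_CMB ** 2) / np.sqrt(nu_b_mhz * (1.0 + z))

def projected_velocity_kms(ell_kpc, nu_b_mhz, b_ug):
    """Projected bulk velocity from the slope of a linear t_rad(ell) fit."""
    t_myr = radiative_age_myr(np.asarray(nu_b_mhz, float), b_ug)
    slope_myr_per_kpc = np.polyfit(np.asarray(ell_kpc, float), t_myr, 1)[0]
    return 977.8 / slope_myr_per_kpc   # 1 kpc/Myr corresponds to ~977.8 km/s

# Illustrative call with placeholder values:
# v = projected_velocity_kms([0.0, 3.5, 7.0], [2000.0, 700.0, 300.0], b_ug=6.0)
```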
Figure 9: Best-fit parameters from the aging model fits to the stripped tails. Data points are coloured according to galaxy and are consistent between panels. _Top:_ Distribution of best-fit injection spectral indices on a galaxy-by-galaxy basis. Error bars correspond to \(68\%\) credible regions from the posterior distribution. _Bottom:_ Best-fit break frequency as a function of distance along tail. Each marker corresponds to a single measurement for an individual galaxy and errorbars span the \(68\%\) credible region. In the \(\ell=0\) bin, upper limits correspond to galaxies with purely powerlaw spectra and are shown at \(\nu_{b}=10\,\mathrm{GHz}\), which is the upper limit of our prior distribution. To improve readability, in each distance bin data points are randomly shifted along the \(x\)-axis according to a Gaussian distribution with \(\mu=0\) and \(\sigma=0.5\,\mathrm{kpc}\).

At fixed magnetic field strength, Fig. 10 shows that a linear relationship between \(t_{\rm rad}\) and \(\ell\) is a good descriptor for all galaxies in our sample. This is consistent with the commonly-made assumption that CRE transport in ram pressure stripped tails is dominated by advection. As a result of this, we determined the slope of the best-fit linear trend between \(t_{\rm rad}\) and \(\ell\) in order to estimate a constant bulk velocity for the plasma being stripped along the tail. The data points and best-fit trends in Fig. 10 are then coloured according to this bulk velocity. Roughly speaking, the best-fit velocities are on the order of hundreds of kilometers per second. The specific values for each galaxy and magnetic field assumption are listed in Table 3. We stress that these are projected velocities, since \(\ell\) is projected on the sky, and therefore must be lower limits on the 3D stripping speed.

Figure 10: Radiative age versus projected distance along the tail for each of the galaxies in our sample. For each distance bin we plot three estimates of radiative age corresponding to assumed magnetic field strengths of 1.96 (minimum-loss field), 6, and 10 \(\mu\)G. Radiative age increases monotonically with decreasing magnetic field strength. For each assumed magnetic field strength, we show the best-fit linear relationship between radiative age and distance along tail, which gives an estimate for the projected bulk velocity of cosmic rays along the tail. Data markers and best-fit lines are coloured according to the best-fit projected velocity.

## 6 Discussion

### Radio continuum properties in the discs of ram pressure stripped galaxies

The first part of the analysis in this work is focused on probing the spectral properties of radio continuum emission within the discs of the ram pressure stripped galaxies in our sample. While ram pressure is often associated with tails of stripped debris (e.g. jellyfish galaxies), it can also significantly alter the conditions of both the thermal and non-thermal ISM within the disc, for example through outside-in quenching (e.g. Cortese et al. 2012; Schaefer et al. 2017; Yoon et al. 2017; Schaefer et al. 2019), gas compression (e.g. Cramer et al. 2020; Troncoso-Iribarren et al. 2020; Roberts et al. 2022a), localized starbursts (e.g. Gavazzi et al. 2001; Tomicic et al. 2018; Boselli et al. 2021; Hess et al. 2022; Roberts et al. 2022c,a), magnetic field strength amplification (e.g. Gavazzi & Boselli 1999; Murphy et al. 2009; Vollmer et al. 2013; Chen et al. 2020), etc.
All of these examples can be constrained, to some extent, with multi-frequency radio continuum observations like those in this work. Many previous works have found evidence for enhanced star formation in galaxies undergoing ram pressure stripping (e.g. Ebeling et al. 2014; Poggianti et al. 2016; Vulcani et al. 2018; Roberts & Parker 2020; Wang et al. 2020; Roberts et al. 2021a). This is thought to be a product of the external pressure both increasing molecular gas densities, as well as increasing the efficiency with which atomic gas in converted to the molecular form, which in both cases can lead to an increase in star formation prior to gas depletion (e.g. Bothun & Dressler 1986; Gavazzi et al. 2001; Merluzzi et al. 2013; Lee et al. 2017; Cramer et al. 2020; Troncoso-Iribarren et al. 2020; Moretti et al. 2020,a; Boselli et al. 2021; Hess et al. 2022; Roberts et al. 2022a, 2023). One can then ask, for this scenario, what is the expected effect on radio continuum emission. In principle, the increased ISM densities should lead to increased ionization losses and thus a flattening of the low-frequency spectral index (e.g. Basu et al. 2015; Chyzy et al. 2018). This has previously been invoked to explain flat spectral indices at low frequencies for galaxies undergoing ram pressure stripping (Ignesti et al. 2022b; Roberts et al. 2022c). In this work we found tentative evidence for this flattening. In the colour-colour plot in Fig. 2, the median low-frequency spectral index is marginally flatter than the high-frequency spectral index. Furthermore, there are a handful of galaxies with \(\alpha_{\rm low,disc}>\alpha_{\rm high,disc}\) at \(>2\sigma\) significance, but no galaxies with \(\alpha_{\rm low,disc}<\alpha_{\rm high,disc}\) at the same significance level. Ultimately this is still unconvincing in a statistical sense in this work, and a wider range in frequency is necessary for stronger constraints to be possible. This should include both higher (e.g. 5 GHz) and lower (50 MHz, LOFAR LBA) frequency data in order to extend the lever arm in both directions. The spectral index maps in Fig 3 also support such a scenario. For a majority of galaxies in the sample, the flattest spectral indices are found on the the leading half, opposite to the tail direction (Fig. 4). The leading half is the site that should be associated with the strongest gas compression, though it is also subject to shocks are driven into the ISM by ram pressure at the galaxy - ICM interface (Vollmer et al. 2004; Murphy et al. 2009; Pedrini et al. 2022). Such an offset between the galaxy centre and the location of the flattest spectrum for a galaxy undergoing ram pressure stripping, was first noted by Vollmer et al. (2004) for NGC4522 in the Virgo Cluster. Vollmer et al. (2004) suggest that this is a signature of a ram pressure induced shock, which accelerates CREs leading to a flat spectral index. This is corroborated by a ridge of polarized emission on the leading edge of NGC4522, which would not be expected from the turbulent motions associated with star formation. It is possible, if not likely, that a combination of gas compression/star formation and shocks is contributing to the observed flat spectral indices on the leading halves of the galaxies in this work. Roberts et al. (2022a) have shown that many of the galaxies in this sample do show evidence for enhanced star formation on their leading halves. 
Moving forward, observations of polarized radio flux for this sample will be insightful in order to determine whether similar ridges of polarized emission are present for ram pressure stripped galaxies in Coma, as have been seen for similar types of galaxies in the Virgo Cluster (Vollmer et al. 2004; Chyzy et al. 2007; Murphy et al. 2009; Vollmer et al. 2010, 2013).

### Spectral aging along the stripped tails

For the ram pressure tails considered in this work, the decline in flux density with increasing distance from the galaxy is broadly consistent with a simple model of synchrotron aging. This is generally true for all four frequencies, though there are some hints of deviations at 1.5 GHz (but this is also the frequency band with the fewest detections in the tails). This is consistent with previous studies where it is argued that these stripped tails visible in the radio continuum are formed from CREs that are accelerated by star formation in the galaxy disc, and subsequently removed from the galaxy by ram pressure (e.g. Chen et al. 2020; Roberts et al. 2021a). In the absence of newly injected CREs or re-acceleration, this framework will lead to steeper spectral indices in the stripped tails than the galaxy discs (assuming that the disc is actively star forming). This has been previously confirmed for a relatively small number of galaxies (Vollmer et al. 2004; Chen et al. 2020; Muller et al. 2021; Ignesti et al. 2022b; Roberts et al. 2022c; Venturi et al. 2022; Ignesti et al. 2023), and here we show that it holds for a relatively large set of ram pressure stripped galaxies in Coma (Fig. 2). The steep spectral indices imply that low-frequency imaging is particularly useful for identifying galaxies with ram pressure tails in low-\(z\) groups and clusters, as evidenced by the large number of such tailed sources identified by Roberts et al. (2021a) compared to surveys at higher frequency (e.g. Miller et al. 2009a; Chen et al. 2020). The dominant mode of CRE transport in forming ram pressure stripped tails is believed to be advection/streaming (Murphy et al. 2009; Vollmer et al. 2013; Muller et al. 2021; Ignesti et al. 2022a).

\begin{table}
\begin{tabular}{l c c c}
\hline \hline
Galaxy & \multicolumn{3}{c}{\(v\) [km s\({}^{-1}\)]} \\
\cline{2-4}
 & \(B\approx 2\,\mu\)G & \(B=6\,\mu\)G & \(B=10\,\mu\)G \\
\hline
NGC4848 & \(198\pm 10\) & \(348\pm 18\) & \(626\pm 32\) \\
NGC4858 & \(79\pm 6\) & \(140\pm 11\) & \(253\pm 19\) \\
NGC4911 & \(117\pm 12\) & \(206\pm 20\) & \(371\pm 36\) \\
IC3913 & \(95\pm 41\) & \(158\pm 64\) & \(269\pm 108\) \\
IC4040 & \(123\pm 10\) & \(216\pm 18\) & \(386\pm 32\) \\
KUG1255+275 & \(63\pm 2\) & \(111\pm 4\) & \(201\pm 8\) \\
KUG1255+283 & \(135\pm 18\) & \(239\pm 32\) & \(426\pm 56\) \\
KUG1257+288 & \(118\pm 6\) & \(207\pm 10\) & \(377\pm 19\) \\
KUG1258+287 & \(179\pm 8\) & \(316\pm 14\) & \(573\pm 26\) \\
KUG1258+279A & \(119\pm 5\) & \(209\pm 9\) & \(380\pm 16\) \\
KUG1259+279 & \(177\pm 47\) & \(298\pm 79\) & \(529\pm 139\) \\
KUG1259+284 & \(132\pm 7\) & \(233\pm 12\) & \(423\pm 22\) \\
MRK0053 & \(108\pm 31\) & \(174\pm 47\) & \(303\pm 82\) \\
MRK0056 & \(115\pm 6\) & \(202\pm 10\) & \(367\pm 19\) \\
MRK0057 & \(140\pm 8\) & \(247\pm 14\) & \(445\pm 26\) \\
MRK0058 & \(85\pm 11\) & \(149\pm 20\) & \(266\pm 34\) \\
MRK0060 & \(238\pm 79\) & \(400\pm 130\) & \(710\pm 229\) \\
GMP2601 & \(75\pm 12\) & \(132\pm 22\) & \(240\pm 39\) \\
\hline
\end{tabular}
\end{table}
Table 3: Stripped plasma velocity estimates
We estimate a constant streaming velocity (\(t_{\rm rad}\propto\ell\)) for each galaxy in Fig. 10. While our estimates on the bulk velocity cover a relatively broad range due to the uncertainty in the magnetic field strength, they are consistent with previous estimates in the literature that derive stripping timescales from synchrotron radiative ages. Vollmer et al. (2021) estimate a bulk velocity limit of \(\gtrsim 140\,{\rm km\,s^{-1}}\) based on 1.4 and 5 GHz observations of the Virgo galaxy NGC4330. Ignesti et al. (2023) have published estimates for plasma velocities in the ram pressure stripped tails of galaxies in the cluster Abell 2255. For five galaxies with monotonically decreasing flux-density profiles, Ignesti et al. (2023) constrain the projected bulk velocity to be between 160 and \(430\,{\rm km\,s^{-1}}\), based off of synchrotron cooling timescale arguments. These estimates are derived assuming a minimum-loss magnetic field, and thus formally are lower limits. Constraints from hydrodynamical simulations generally show that gas velocities during ram pressure stripping are \(\lesssim 1000\,{\rm km\,s^{-1}}\)(e.g. Tonnesen and Bryan 2010; Steinhauser et al. 2016; Choi et al. 2022). These gas velocity constraints can be compared to to bulk plasma velocities constrained in this work under the assumption that magnetic fields remain frozen in to the ISM during stripping, and thus that the stripped cosmic rays also track the stripped gaseous ISM. ### A broad picture of the impact from ram pressure on the non-thermal ISM in Coma satellites To conclude, we discuss a general picture of the impact from ram pressure on the radio continuum spectra of star-forming satellites, that is consistent with the observations of Coma Cluster galaxies presented in this work. As galaxies traverse the ICM in galaxy clusters, the non-thermal ISM is both perturbed within the galaxy disc as well as stripped off of the galaxy to form a radio tail. The formation of the radio tail can also be aided by the presence of ICM magnetic field accreted onto the galaxy via magnetic draping (Dursi and Pfrommer 2008; Pfrommer and Jonathan Dursi 2010; Muller et al. 2021). In clusters, this typically occurs on first infall towards orbital pericenter (Roberts et al. 2021a; Smith et al. 2022). For a snapshot in time, the fraction of star-forming cluster galaxies (at low-\(z\)) with stripped radio tails is roughly twenty per cent (Roberts et al. 2021b), though this figure does not include those galaxies that may have been previously tailed and are now sufficiently stripped, nor those galaxies in a pre-stripping phase that may develop a tail in the future. It may be that most, or all, star-forming galaxies in clusters form a radio tail, but this remains an open question. Perturbations within the disc manifest themselves in the synchrotron spectral index - both in a resolved and integrated sense. Perturbations are strongest on the leading half of the disc where the spectral index becomes flattest. In some cases the spectral index becomes flatter than the typical expected range for star formation (Fig. 3; see also, Ignesti et al. 2022b; Roberts et al. 2022). This could be a result of increased ionization losses related to compression of the ISM by the external ram pressure (e.g. Moretti et al. 2020b,a; Troncoso-Iribarren et al. 2020; Ignesti et al. 2022b; Roberts et al. 2022c,a, 2023; Moretti et al. 2023). 
Magnetic field lines should also be compressed in the disc by ram pressure, which may contribute to the unusually high radio luminosity to SFR ratios in cluster galaxies - particularly those with ram pressure stripped tails (e.g. Gavazzi and Boselli 1999; Murphy et al. 2009; Vollmer et al. 2013; Chen et al. 2020). Ridges of polarized emission have also been observed along the leading edge of ram pressure stripped galaxies (e.g. Vollmer et al. 2004; Vollmer and Huchtmeier 2007; Chyzy et al. 2007), a further signature of the interaction at the ISM - ICM interface, and that may indicate that CRE acceleration from shocks is also playing a role in the observed flat spectral indices. In many cases the distribution of synchrotron emission becomes truncated, particularly on the leading half, due to the outside-in removal of CREs from the disc. Through this truncation, CREs are transported, likely through advection/streaming (e.g. Murphy et al. 2009; Vollmer et al. 2013; Ignesti et al. 2022a), from the disc downstream along the stripped tail. As CREs move further along the tail, their spectra age through synchrotron and inverse-Compton losses, leading to a decrease in radio flux density and curvature in the spectrum. This aging along the tail is consistent with a relatively constant projected stripping velocity, on the order of hundreds of kilometers per second. Observational estimates of this stripping velocity is highly dependent on the assumed magnetic field strength, which is very poorly constrained observationally. Most galaxies show no evidence for fresh injection of CREs or re-acceleration in the stripped tail, suggesting that any radio flux density produced by extra-planar star formation in the tail is subdominant to that from the aging plasma stripped off of the disc. This is consistent with the low SFRs observed in ram pressure stripped tails (e.g. Sun et al. 2007; Hester et al. 2010; Fumagalli et al. 2011; Vulcani et al. 2018; Poggianti et al. 2019; Junais et al. 2021). For example, a typical extra-planar Hii region with a SFR of \(10^{-3}\,{\rm M_{\odot}\,yr^{-1}}\) will only produce a \(1.5\,{\rm GHz}\) flux density of \(\sim 1\,\mu{\rm Jy}\) (at the distance of Coma). The framework outlined above is consistent with the observations of ram pressure stripped galaxies in Coma presented here. Future observations of polarized intensity for these galaxies would provide a further test of its consistency. Constraining the applicability of this picture beyond requires both the identification of a significant number of ram pressure stripped galaxies in another (likely nearby) cluster, and multi-frequency radio continuum observations of said galaxies. The stripped tails of jellyfish galaxies in Abell 2255 are broadly consistent with this picture (Ignesti et al. 2023), and moving forward, the Virgo Cluster is another obvious testing ground where significant work has already been done at high frequencies (e.g. Crowl et al. 2005; Chyzy et al. 2007; Murphy et al. 2009; Vollmer et al. 2004, 2010, 2013). With recent low-frequency data (see the VIC-TORIA project, Edler et al. 2023), direct comparisons between galaxies in Virgo and this work can be made. ## 7 Conclusions In this work we have presented a comprehensive radio continuum study of 25 ram pressure stripped satellite galaxies in the Coma Cluster. We use nearly continuous frequency coverage between 144 MHz and 1.5 GHz to constrain radio spectral properties from galaxy discs all the way across their stripped radio tails. 
Below we itemize the main scientific conclusions from this work: * Spectral indices integrated over the galaxy discs cover a range between \(-0.8\) and \(-0.3\). Roughly half of the galaxy disc sample have integrated spectral indices that are \(>-0.5\) i.e. flatter than the typical value for injection by star formation (Fig. 2). * Ram pressure stripped tails have steep integrated spectral indices. All tails in our sample are consistent with \(\alpha_{\rm tail}<-1\). The integrated spectral indices over the tails are clearly steeper than those measured over the disc, consistent with spectral aging as the plasma is removed from the galaxy (Fig. 2). * Resolved spectral index maps of the discs show that the leading halves of the disc (i.e. the direction opposite to the tail direction) have systematically flatter spectral indices compared to the rest of the disc. This is consistent with a scenario where gas is compressed by ram pressure leading to enhanced local star formation and/or increased CRE ionization losses and/or the presence of ram pressure induced shocks (Figs. 3, 4). * The scale lengths of the stripped radio tails decrease with increasing observing frequency. This decrease is roughly in proportion to the inverse square root of the frequency, which is the expectation from a simple model of constant stripped plasma velocity in a uniform magnetic field (Figs. 5, 6). * For the vast majority of galaxies in our sample, radio continuum spectra extracted at various distances along the observed tails are well reproduced by a simple model of spectral aging due to synchrotron and inverse-Compton losses (Figs. 7, 9, 8). * The radio tails in our sample show linear relationships between radiative age and distance along the tail, implying roughly constant (projected) bulk stripping velocities with magnitudes on the order of hundreds of kilometers per second (Figs. 10). This work is an example of the value that can be derived from sensitive, high-resolution radio continuum observations in the context of advancing our understanding of ram pressure stripping in galaxy clusters. Moving forward this analysis could be further by homogenizing the frequency coverage for all galaxies in the sample. Specifically, this will require filling in gaps in the 700 MHz and 1.5 GHz imaging. Lower frequency data from the LOFAR LBA will likely also become available moving forward. An ultimate goal of broadband, multi-frequency imaging (e.g. from \(\sim\)tens of megahertz to \(\sim\)a few gigahertz) covering the full Coma Cluster within the virial radius would allow for a complete study of the environmental impact on the non-thermal ISM in galaxies part of the nearest, massive cluster. This is, in principle, achievable thanks to the large primary beams of radio facilities like LOFAR, the uGMRT, the VLA, and MeerKAT (not to mention next-generation facilities), but will still require a significant observational commitment. ###### Acknowledgements. The authors thank William Forman for comments on a draft of the paper and his effort invested into obtaining the uGMRT data used in this work, Neal Miller for comments on a draft of the paper, and Larry Rudnick for helpful discussions on flux-density scale during the preparation of the paper. IDR and RJRV acknowledge support from the ERC Starting Grant Cluster Web 804208. DVL acknowledges the support of the Department of Atomic Energy, Government of India, under project no. 12-R&D-TFR-5.02-0700. 
MS acknowledges support from the NASA grant 80NSSC22X1508 and the USRA SOFIA grant 09 0221 provided by NASA. A acknowledges funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 33824, PI Poggianti), and the INAF funding program "Ricerca Fondamentale 2022" (PI A. Ignesti). This work would not have been possible without the following software packages: AstroPy (Astropy Collaboration et al., 2013), CASA (CASA Team et al., 2022), ChainConsumer (Hinton, 2016), CMasher (van der Velden, 2020), DS9 (Joye & Mandel, 2003), emcee (Foreman-Mackey et al., 2013), Matplotlib (Hunter, 2007), NumPy (Harris et al., 2020), Photutils (Bradley et al., 2022b), Regions (Bradley et al., 2022a), rsmf ([https://rsmf.readthedocs.io/en/latest/source/howto.html](https://rsmf.readthedocs.io/en/latest/source/howto.html)), SciPy (Virtanen et al., 2020), Synchrofit (Turner et al., 2018; Turner, 2018). This work is based in part on data collected at Subaru Telescope, which is operated by the National Astronomical Observatory of Japan. We thank the staff of the GMRT that made these observations possible. The GMRT is run by the National Centre for Radio Astrophysics of the Tata Institute of Fundamental Research. The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc. LOFAR (van Haarlem et al., 2013) is the Low Frequency Array designed and constructed by ASTRON. It has observing, data processing, and data storage facilities in several countries, which are owned by various parties (each with their own funding sources), and which are collectively operated by the ILT foundation under a joint scientific policy. The ILT resources have benefited from the following recent major funding sources: CNRS-INSU, Observatoire de Paris and Universite d'Orleans, France; BMBF, MIWF-NRW, MPG, Germany; Science Foundation Ireland (SFI), Department of Business, Enterprise and Innovation (DBEI), Ireland; NWO, The Netherlands; The Science and Technology Facilities Council, UK; Ministry of Science and Higher Education, Poland. Funding for the NASA-Sloan Atlas has been provided by the NASA Astrophysics Data Analysis Program (08-ADR08-0072) and the NSF (AST-121644). Funding for SDSS-III has been provided by the Alfred P. Sloan Foundation, the Participating Institutions, the National Science Foundation, and the U.S. Department of Energy. The SDSS-III web site is http://www.sdss3.org. SDSS-III is managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS-III Collaboration including the University of Arizona, the Brazilian Participation Group, Brookhaven National Laboratory, University of Cambridge, University of Florida, the French Participation Group, the German Participation Group, the Instituto de Astrofisica de Canarias, the Michigan State/Notre Dame/JINA Participation Group, Johns Hopkins University, Lawrence Berkeley National Laboratory, Max Planck Institute for Astrophysics, New Mexico State University, Ohio State University, Pennsylvania State University, University of Portsmouth, Princeton University, the Spanish Participation Group, University of Tokyo, University of Utah, Vanderbilt University, University of Virginia, University of Washington, and Yale University.
2310.20403
Multi-Base Station Cooperative Sensing with AI-Aided Tracking
In this work, we investigate the performance of a joint sensing and communication (JSC) network consisting of multiple base stations (BSs) that cooperate through a fusion center (FC) to exchange information about the sensed environment while concurrently establishing communication links with a set of user equipments (UEs). Each BS within the network operates as a monostatic radar system, enabling comprehensive scanning of the monitored area and generating range-angle maps that provide information regarding the position of a group of heterogeneous objects. The acquired maps are subsequently fused in the FC. Then, a convolutional neural network (CNN) is employed to infer the category of the targets, e.g., pedestrians or vehicles, and such information is exploited by an adaptive clustering algorithm to group the detections originating from the same target more effectively. Finally, two multi-target tracking algorithms, the probability hypothesis density (PHD) filter and multi-Bernoulli mixture (MBM) filter, are applied to estimate the state of the targets. Numerical results demonstrated that our framework could provide remarkable sensing performance, achieving an optimal sub-pattern assignment (OSPA) less than 60 cm, while keeping communication services to UEs with a reduction of the communication capacity in the order of 10% to 20%. The impact of the number of BSs engaged in sensing is also examined, and we show that in the specific case study, 3 BSs ensure a localization error below 1 m.
Elia Favarelli, Elisabetta Matricardi, Lorenzo Pucci, Enrico Paolini, Wen Xu, Andrea Giorgetti
2023-10-31T12:27:48Z
http://arxiv.org/abs/2310.20403v1
# Multi-Base Station Cooperative Sensing ###### Abstract In this work, we investigate the performance of a joint sensing and communication (JSC) network consisting of multiple base stations (BSs) that cooperate through a fusion center (FC) to exchange information about the sensed environment while concurrently establishing communication links with a set of user equipments (UEs). Each BS within the network operates as a monostatic radar system, enabling comprehensive scanning of the monitored area and generating range-angle maps that provide information regarding the position of a group of heterogeneous objects. The acquired maps are subsequently fused in the FC. Then, a convolutional neural network (CNN) is employed to infer the category of the targets, e.g., pedestrians or vehicles, and such information is exploited by an adaptive clustering algorithm to group the detections originating from the same target more effectively. Finally, two multi-target tracking algorithms, the probability hypothesis density (PHD) filter and multi-Bernoulli mixture (MBM) filter, are applied to estimate the state of the targets. Numerical results demonstrated that our framework could provide remarkable sensing performance, achieving an optimal sub-pattern assignment (OSPA) less than \(60\,\mathrm{cm}\), while keeping communication services to UEs with a reduction of the communication capacity in the order of \(10\%\) to \(20\%\). The impact of the number of BSs engaged in sensing is also examined, and we show that in the specific case study, 3 BSs ensure a localization error below \(1\,\mathrm{m}\). joint sensing and communication, tracking, orthogonal frequency division multiplexing, millimeter-wave, artificial intelligence, convolutional neural network ## I Introduction The forthcoming generation of mobile radio networks is poised to offer a range of emerging functionalities, including innovative services. Notably, the ability to perform effective sensing using radio frequency (RF) signals has become feasible due to the evolution toward larger antenna arrays, namely massive multiple-input multiple-output (mMIMO), and higher frequency bands [1, 2]. The joint sensing and communication (JSC) approach leverages existing communication infrastructure to provide sensing capabilities, offering advantages such as reduced costs and improved spectral and energy efficiency when compared to dedicated spectrum- and transceiver-dependent systems like radar [3]. This convergence of sensing and communication systems envisioned for future networks will enable ubiquitous sensing services that rely on capturing reflections from non-collaborative objects, thus playing a critical role, e.g., in intelligent vehicular networks [4]. Furthermore, the growing interest in sensing stems from its potential to support various applications, such as traffic monitoring, autonomous driving, safety in industrial environments, and environmental mapping [5, 6]. The advent of mMIMO technology in millimeter-wave (mmWave) bands facilitates the detection, tracking, and precise localization of pedestrians, vehicles, drones, and other moving objects in real-time scenarios [7]. This enables the acquisition of range profiles of targets, a kind of target fingerprint, as scatterers in complex objects may be resolved into different range cells. 
At the same time, the enormous advancement of artificial intelligence (AI), and particularly image identification, has generated a vast and solid portfolio of solutions that could also be exploited in the field of integrated sensing and communication (ISAC) [8, 9, 10]. This work aims to investigate the possibility of using multi-sensor fusion techniques combined with multi-target tracking algorithms, to exploit range-angle radar maps obtained through a set of cooperating BSs with monostatic sensing capability and orthogonal frequency division multiplexing (OFDM) signals. The main contributions can be summarized as follows: * We propose a soft map fusion strategy based on range-angle maps obtained at each BS. Figure 1: An urban scenario with \(6\) monostatic JSC base stations (BSs) aiming at monitoring pedestrians (point-like targets) and vehicles (extended targets) in a surveillance area. BSs communicate with their user equipments (UEs) while simultaneously sensing the surrounding environment via dedicated sensing beams. The fusion center (FC) collects measurements from the BSs via the backhaul network, fuses them to create likelihood maps, and performs detection, target identification, and multiple target tracking. * We present an AI-based approach to infer the target category that is then exploited by an adaptive clustering methodology capable of managing point-like and extended targets. * The adaptive clustering is then combined with tracking algorithms to perform target state estimation and prediction. Two different tracking algorithms, the probability hypothesis density (PHD) and multi-Bernoulli mixture (MBM) filter, are compared. * We propose the optimal sub-pattern assignment (OSPA) metric and aggregate downlink capacity to evaluate the sensing and communication capabilities. * Finally, we investigate the impact of the number of cooperative BSs performing sensing on the localization and communication performance. In this work, capital and lowercase boldface letters represent matrices and vectors, respectively; \(\mathbf{D}_{q,t}\) stands for a matrix dependent on indexes \(q\) and \(t\), while \(\mathbf{v}_{t,p}\) represents the \(p\)th column selected by the matrix \(\mathbf{V}_{t}\). \(\mathbf{I}_{n}\) is the \(n\times n\) identity matrix; \(\|\cdot\|_{p}\) stands for the \(p\)-norm; \(|\cdot|\) represents the cardinality of a set; \(\delta(\cdot)\) is the Dirac delta function; \(|\cdot|\) represents the round operator; \((\cdot)^{c}\) stands for conjugate; \(\mathbf{x}\thicksim\mathcal{CN}(\mathbf{0},\boldsymbol{\Sigma})\) denotes a zero-mean circularly symmetric complex Gaussian random vector with covariance \(\boldsymbol{\Sigma}\); and \(\mathbf{x}\thicksim\mathcal{N}(\boldsymbol{\mu},\boldsymbol{\Sigma})\) denotes the real-valued Gaussian random vector with mean \(\boldsymbol{\mu}\) and covariance \(\boldsymbol{\Sigma}\). The rest of the paper is organized as follows. Section II presents the JSC model. Section III describes the data fusion strategy, target identification methodology, clustering scheme, and tracking algorithms. System performance is evaluated in Section IV, and conclusions are drawn in Section V. ## II System Model This work considers a JSC network, and a scenario, like the one portrayed in Figure 1. In particular, the considered system consists of several monostatic JSC BSs transmitting OFDM signals at mmWave using mMIMO technology. 
Each of these BSs is connected to an FC via backhaul; the FC allows them to cooperate in performing the detection and tracking of targets in the surveillance area. As shown later, the sensing task is accomplished through range-Doppler maps that each BS can generate by scanning the environment using a dedicated sensing beam. Moreover, to ensure communication functionality, each BS scans the environment for sensing and communicates with UEs in its respective cell using the same time-frequency resources via multiple beams. To keep interference among the sensing beams of different BSs at a negligible level, we consider the proper use of frequency division (FD) or time division (TD) through coordination. Each monostatic BS is equipped with two separate uniform linear arrays (ULAs), one for transmission and one for reception, with \(N_{\mathrm{T}}\) and \(N_{\mathrm{R}}\) antennas respectively, and both with a half-wavelength separation between the elements. In particular, the transmitted waveform is used for communication and sensing, while the sensing receiver (Rx) only collects backscattered signals. More specifically, considering the downlink communication toward UEs, each BS transmits frames consisting of \(M\) OFDM symbols and \(K\) subcarriers, and the same signals are simultaneously used to sense the environment. A multi-beam radiation pattern is used to split the power between sensing and communication by exploiting spatial diversity, as explained later. In particular, each BS uses a communication beam for the UE while steering a sensing beam scanning the environment within the angular interval \([-\Theta_{0},\Theta_{0}]\) with steps \(\Delta\Theta\). In each sensing direction, a subset \(M_{\mathrm{s}}<M\) of OFDM symbols is collected by the Rx. The OFDM time-frequency grid containing the transmitted (complex) symbol for each sensing direction can be represented by a matrix \(\mathbf{X}_{\mathrm{s}}\in\mathbb{C}^{K\times M_{\mathrm{r}}}\) with elements \(x_{k}^{(m)}\), where \(k\) is the subcarrier index and \(m\) is the OFDM symbol (or time) index. Starting from this grid, a precoding operation is performed on its elements with the beamformer \(\mathbf{w}_{\mathrm{T}}\in\mathbb{C}^{N_{\mathrm{T}}\times 1}\) to map each complex symbol to each antenna and obtain the vector of the transmitted symbols \(\tilde{\mathbf{x}}_{k}^{(m)}=\mathbf{w}_{\mathrm{T}}x_{k}^{(m)}\). As previously mentioned, a multi-beam radiation pattern is considered at the Tx to split the total available power between the communication and sensing directions. Hence, the beamforming vector \(\mathbf{w}_{\mathrm{T}}\) is defined as follows \[\mathbf{w}_{\mathrm{T}}=\frac{\sqrt{P_{\mathrm{T}}G_{\mathrm{T}}^{\mathrm{a}} }}{N_{\mathrm{T}}}\left(\sqrt{\rho_{\mathrm{p}}}\mathbf{a}_{\mathrm{T}}^{c}( \theta_{\mathrm{T,s}})+\sqrt{1-\rho_{\mathrm{p}}}\mathbf{a}_{\mathrm{T}}^{c}( \theta_{\mathrm{T,c}})\right) \tag{1}\] where \(\rho_{\mathrm{p}}\in[0,1]\) is the fraction of power reserved for the sensing beam, \(P_{\mathrm{T}}\) is the transmit power, \(G_{\mathrm{T}}^{\mathrm{a}}\) is the transmit array gain along the beam steering direction, and \(\mathbf{a}_{\mathrm{T}}(\theta_{\mathrm{T,c}})\in\mathbb{C}^{N_{\mathrm{T}}\times 1}\) and \(\mathbf{a}_{\mathrm{T}}(\theta_{\mathrm{T,s}})\in\mathbb{C}^{N_{\mathrm{T}}\times 1}\) are the steering vectors associated with the communication and sensing directions, respectively, being \(\theta_{\mathrm{T,c}}\) and \(\theta_{\mathrm{T,s}}\) the respective direction of departures (DoDs). 
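As a concrete reference for the multi-beam precoder in Equation (1), the following minimal sketch builds \(\mathbf{w}_{\mathrm{T}}\) for a half-wavelength ULA in Python/NumPy; the array size, power split, steering angles, and the assumption \(G_{\mathrm{T}}^{\mathrm{a}}=N_{\mathrm{T}}\) are illustrative placeholders rather than the values used in this work.

```python
import numpy as np

def ula_steering(theta_rad: float, n_ant: int) -> np.ndarray:
    """Steering vector of a half-wavelength-spaced ULA for angle theta."""
    return np.exp(1j * np.pi * np.arange(n_ant) * np.sin(theta_rad))

# Illustrative parameters (not taken from the paper)
N_T = 64                      # transmit antennas
P_T = 1.0                     # transmit power
G_T = N_T                     # transmit array gain along the steering direction (assumption)
rho_p = 0.3                   # fraction of power reserved for the sensing beam
theta_s = np.deg2rad(20.0)    # sensing direction
theta_c = np.deg2rad(-35.0)   # communication direction

# Multi-beam transmit beamformer, Eq. (1): conjugate steering vectors weighted
# by the sensing/communication power split.
a_s = np.conj(ula_steering(theta_s, N_T))
a_c = np.conj(ula_steering(theta_c, N_T))
w_T = np.sqrt(P_T * G_T) / N_T * (np.sqrt(rho_p) * a_s + np.sqrt(1 - rho_p) * a_c)

# Precoding one OFDM symbol x_k^(m) onto the antennas
x_km = (1 + 1j) / np.sqrt(2)  # example QPSK symbol
x_tilde = w_T * x_km          # vector of transmitted symbols, one entry per antenna
print(x_tilde.shape)          # (64,)
```

The \(\sqrt{\rho_{\mathrm{p}}}\) and \(\sqrt{1-\rho_{\mathrm{p}}}\) factors are what split the available transmit power between the sensing and communication beams.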
Starting from the vector of the transmitted symbols \(\tilde{\mathbf{x}}_{k}^{(m)}\), the vector \(\tilde{\mathbf{y}}_{k}^{(m)}\in\mathbb{C}^{N_{\mathrm{R}}\times 1}\) of symbols received at each antenna, after OFDM demodulation, is given by \[\tilde{\mathbf{y}}_{k}^{(m)}=\mathbf{H}_{k}^{(m)}\tilde{\mathbf{x}}_{k}^{(m)} +\tilde{\mathbf{n}}_{k} \tag{2}\] where \(\mathbf{H}_{k}^{(m)}\in\mathbb{C}^{N_{\mathrm{R}}\times N_{\mathrm{T}}}\) is the channel matrix for the \(m\)th symbol and the \(k\)th subcarrier, which will be defined later, and \(\tilde{\mathbf{n}}_{k}\sim\mathcal{CN}(\mathbf{0},\sigma_{\mathrm{N}}^{2}|N_{ \mathrm{R}_{\mathrm{N}}})\) is the noise vector.1 Footnote 1: Both inter-carrier interference (ICI) and inter-symbol interference (ISI) are considered negligible. Spatial combining is then performed in the considered sensing direction, \(\theta_{\mathrm{R,s}}=\theta_{\mathrm{T,s}}\), by using the receiving beamforming vector \(\mathbf{w}_{\mathrm{R}}=\mathbf{a}_{\mathrm{R}}^{c}(\theta_{\mathrm{R,s}})\). This yields the grid of the received symbols \(\mathbf{Y}_{\mathrm{s}}\in\mathbb{C}^{K\times M_{\mathrm{r}}}\), whose \((k,m)\) elements are defined as \(y_{k}^{(m)}=\mathbf{w}_{\mathrm{R}}^{T}\tilde{\mathbf{y}}_{k}^{(m)}\). The received symbols grid collected in each sensing direction is then used to generate range-angle maps, as explained in Section II-B. ### _Target Models_ This work considers both point-like targets, such as pedestrians, and extended targets, such as vehicles. Specifically, vehicles are represented by a model comprising \(12\) reflection points. These include \(4\) points to capture planar reflections originating from the front, back, and sides of the vehicle (characterized by a narrow visibility function and a substantial radar cross-section (RCS)), \(4\) points to account for the wheelhouses, and \(4\) points to simulate the corners [11, 12, 13]. Now, considering \(L\) as the total number of reflections from both extended and point-like targets, the channel matrix already introduced in Equation (2) is given by \[\mathbf{H}_{k}^{(m)}=\sum_{l=1}^{L}\beta_{l}e^{j2\pi mT_{\mathrm{s}}f_{\mathrm{ D},l}}e^{-j2\pi k\Delta f\tau_{\mathrm{s}}}\mathbf{a}_{\mathrm{R}}(\theta_{l}) \mathbf{a}_{\mathrm{T}}^{T}(\theta_{l}) \tag{3}\] where \(\Delta f=1/T\) is the subcarrier spacing, \(T_{\mathrm{s}}=T+T_{\mathrm{cp}}\) is the total OFDM symbol duration including the cyclic prefix time \(T_{\mathrm{cp}}\). Additionally, \(f_{\mathrm{D},l}\) refers to the Doppler shift, \(\tau_{\mathrm{I}}\) represents the round-trip delay, \(\theta_{l}\) denotes the direction of arrival (DoA), and \(\mathbf{a}_{\mathrm{R}}(\theta_{l})\) represents the array response vector at the Rx for the \(l\)th backscattered signal. The complex term \(\beta_{l}=|\beta_{l}|\,e^{j\phi_{l}}\) includes phase shift and attenuation along the \(l\)th propagation path. 
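To make the structure of the sensing channel in Equation (3) explicit, the sketch below assembles \(\mathbf{H}_{k}^{(m)}\) as a sum of rank-one contributions, one per reflection point; all numerical values (gains, delays, Doppler shifts, angles, and OFDM numerology) are illustrative placeholders.

```python
import numpy as np

def ula_steering(theta_rad, n_ant):
    # Half-wavelength ULA steering vector
    return np.exp(1j * np.pi * np.arange(n_ant) * np.sin(theta_rad))

# Illustrative array and OFDM parameters (placeholders, not the paper's values)
N_T, N_R = 64, 64
delta_f = 120e3               # subcarrier spacing [Hz]
T = 1 / delta_f
T_cp = T / 8
T_s = T + T_cp                # total OFDM symbol duration including the cyclic prefix

# Illustrative reflection points: (complex gain, delay [s], Doppler [Hz], angle [rad]);
# in the monostatic case DoA and DoD coincide.
paths = [
    (0.5 * np.exp(1j * 0.3), 0.6e-6, 900.0, np.deg2rad(15.0)),
    (0.1 * np.exp(-1j * 1.1), 1.1e-6, -300.0, np.deg2rad(-40.0)),
]

def channel_matrix(k, m):
    """H_k^(m) of Eq. (3): sum of rank-one contributions over the reflections."""
    H = np.zeros((N_R, N_T), dtype=complex)
    for beta, tau, f_D, theta in paths:
        phase = np.exp(1j * 2 * np.pi * m * T_s * f_D) * np.exp(-1j * 2 * np.pi * k * delta_f * tau)
        a_R = ula_steering(theta, N_R)[:, None]   # receive steering (column)
        a_T = ula_steering(theta, N_T)[None, :]   # transmit steering transposed (row)
        H += beta * phase * (a_R @ a_T)
    return H

H = channel_matrix(k=10, m=3)
print(H.shape)  # (64, 64)
```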
The signal-to-noise ratio (SNR) at each receiving antenna related to the \(l\)th reflection point (hence the sensing SNR) becomes \[\begin{split}\text{SNR}_{l}^{(\mathrm{s})}&=\rho_ {\mathrm{p}}\cdot\gamma_{l}\cdot\frac{P_{\mathrm{T}}G_{\mathrm{T}}^{\mathrm{a} }G_{\mathrm{R}}}{\sigma_{\mathrm{N}}^{2}}\,|\beta_{l}|^{2}\\ &=\rho_{\mathrm{p}}\cdot\gamma_{l}\cdot\frac{P_{\mathrm{T}}G_{ \mathrm{T}}^{\mathrm{a}}G_{\mathrm{R}}}{N_{0}K\Delta f}\frac{c^{2}\sigma_{ \mathrm{res},l}}{(4\pi)^{3}f_{\mathrm{c}}^{2}d_{\mathrm{f}}^{4}}\end{split} \tag{4}\] where \(G_{\mathrm{R}}\) represents the gain of a single antenna element at the Rx, \(\gamma_{l}=|\mathrm{AF}(\theta_{\mathrm{T},s}-\theta_{l})|^{2}\in[0,1]\) denotes the normalized array gain at the Tx, which considers the imperfect alignment between the sensing direction and the target DoA, \(N_{0}\) is the one-sided noise power spectral density (PSD) at Rx, \(d_{l}\) represents the distance between the \(l\)th reflection point and the BS, \(\sigma_{\mathrm{res},l}\) corresponds to the RCS, \(f_{\mathrm{c}}\) is the carrier frequency and \(c\) is the speed of light. The RCSs \(\sigma_{\mathrm{res},l}\) of scatterers for both pedestrians and vehicles are random and modeled according to a Swerling I type distribution whose mean value, \(\bar{\sigma}_{\mathrm{res}}\), can be found in Table I[14]. It is important to note that the number of backscattered signals \(L\) depends on the relative angular position with respect to the BS and varies over time according to a visibility function [11], as objects are moving. ### _Measurement Model_ As mentioned, each BS detects objects in the environment by scanning using a multi-beam pattern defined in Equation (1). Specifically, the communication beam is directed toward a UE, while the sensing direction changes over time, sequentially pointing toward various directions following a predefined angular increment. In each direction, a set of \(M_{\mathrm{s}}\) OFDM symbols is collected to form the grid of received symbols \(\mathbf{Y}_{\mathrm{s}}\), which is then used to obtain a range-angle map. The period required to complete a full scan, denoted as \(T_{\mathrm{scan}}\), depends on the chosen number of sensing directions and on the symbol duration \(T_{\mathrm{s}}\). Once all the symbols are acquired and assembled into the matrix \(\mathbf{Y}_{\mathrm{s}}\), the first step involves an element-wise division between \(\mathbf{Y}_{\mathrm{s}}\) and \(\mathbf{X}_{\mathrm{s}}\), an operation often indicated as reciprocal filtering [15, 16]. This division aims to eliminate the influence of the transmitted symbols and generate a new matrix denoted as \(\mathbf{G}_{\mathrm{s}}\). Subsequently, a double-periodogram is performed on the rows and columns of \(\mathbf{G}_{\mathrm{s}}\) to obtain a range-Doppler map [15]. From this map, a range-angle map \(\mathbf{D}_{q,t}\) is derived at the \(q\)th BS and \(t\)th scan by selecting the column of the periodogram with the maximum value and uniquely associating it with the corresponding scan direction.2 Footnote 2: The estimation of target parameters turns out to be a frequency estimation problem; hence, since the periodogram represents (asymptotically) the log-likelihood, the column with the maximum value is selected. ## III Data Fusion, Target Classification, and Target-Oriented Processing According to the block diagram depicted in Figure 2, each BS exchanges the range-angle map denoted as \(\mathbf{D}_{q,t}\) with the FC. 
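Referring back to the measurement model of Section II-C, the following sketch illustrates, under simplified assumptions, the per-BS processing that turns the received symbol grid into a range-Doppler profile: reciprocal filtering (element-wise division by the transmitted symbols) followed by a double periodogram. The grid sizes and the synthetic single-path data are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative grid: K subcarriers x M_s OFDM symbols for one sensing direction
K, M_s = 256, 64
X_s = np.exp(1j * 2 * np.pi * rng.random((K, M_s)))   # unit-modulus transmitted symbols

# Synthetic received grid: one target inducing a linear phase across subcarriers
# (delay/range) and across symbols (Doppler), plus noise
phase_range = np.exp(-1j * 2 * np.pi * 0.05 * np.arange(K))[:, None]
phase_doppler = np.exp(1j * 2 * np.pi * 0.02 * np.arange(M_s))[None, :]
Y_s = 0.1 * X_s * phase_range * phase_doppler
Y_s += 0.01 * (rng.standard_normal((K, M_s)) + 1j * rng.standard_normal((K, M_s)))

# Reciprocal filtering removes the influence of the transmitted symbols
G_s = Y_s / X_s

# Double periodogram: IFFT across subcarriers (range) and FFT across symbols (Doppler)
pad_r, pad_d = 4 * K, 4 * M_s
P = np.abs(np.fft.fft(np.fft.ifft(G_s, n=pad_r, axis=0), n=pad_d, axis=1)) ** 2

# The column containing the maximum is associated with the current scan direction
# to build one column of the range-angle map D_{q,t}
col = np.unravel_index(np.argmax(P), P.shape)[1]
range_profile = P[:, col]
print(range_profile.argmax(), col)
```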
The FC employs a linear uniform grid, with resolution \(\Delta_{\mathrm{x}}\) and \(\Delta_{\mathrm{y}}\) (with \(N_{\mathrm{x}}\) and \(N_{\mathrm{y}}\) points) as a baseline. The received maps are rotated and translated according to the specific BS position and ULA orientation, and resampled at the baseline grid to ensure consistent map fusion. Subsequently, the resampled range-angle maps, represented as \(\overline{\mathbf{D}}_{q,t}\), are combined via element-wise summation to yield the soft map \(\mathbf{L}_{t}=\sum_{q=1}^{N_{\mathrm{s}}}\overline{\mathbf{D}}_{q,t}\), where \(N_{\mathrm{s}}\) is the number of BSs performing sensing.3 Footnote 3: Since \(\mathbf{L}_{t}\) are obtained via periodogram estimation, they can be interpreted as target log-likelihood maps, hence their summation results from noise independence among BS. ### _Target Identification_ Each target exhibits a different reflection pattern related to its geometrical shape and RCS, namely its reflection fingerprint. To this end, a convolutional neural network (CNN) is adopted to infer the target category (pedestrian or vehicle) directly from the resampled and fused soft maps \(\mathbf{L}_{t}\) which contain such information. Following Figure 2, a first step named image cropping is required to isolate each target from the others. A square window with side \(W_{\mathrm{size}}\) pixels is selected to frame each target. Such windows are centered in the predicted target position at time \(t\), inferred by the tracking algorithms exploiting information extracted during the previous time step \(t-1\). To generate the training set for the CNN, we consider a scenario where actual target positions and categories are known. To increase the classifier performance and robustness in the presence of imperfect target state predictions, which result in a misalignment between targets and relative frames, during training, the real target position is perturbed, adding Gaussian noise (which acts as a random displacement) with standard deviation \(\sigma_{\mathrm{w}}\) on both \(x\) and \(y\) directions. This solution leads to more accurate target classification, reducing the generalization error. At the end of the training phase, the CNN can infer the target category in real time and in a different scenario. \begin{table} \begin{tabular}{|c|c|} \hline Reflection & \(\bar{\sigma}_{\mathrm{res}}\,[\mathrm{m}^{2}]\) \\ \hline \hline Pedestrian & 1 \\ \hline Surfaces & 20 \\ Wheelhouses & 0 \\ Corners & 5 \\ \hline \end{tabular} \end{table} Table I: Average RCS for different point reflections ### _Adaptive Clustering_ A three-step clustering procedure is employed to extract detections from the soft maps, enabling effective handling of extended and point-like objects (refer to Figure 2 green block for a visual representation of the clustering procedure). The main steps of the proposed strategy can be summarized as follows: 1. An excision filter is implemented with threshold \(\gamma_{\mathrm{d}}\), to remove points with low values from the \(\mathbf{L}_{t}\) maps which are likely produced by noise. 2. A k-nearest neighbors (k-NN) algorithm with \(k=1\) and adaptive gate \(\xi_{\mathrm{k}}\) (to ignore residual points distant from each target) are applied to cluster data that likely belong to a previously detected target [17]. It is important to highlight that the parameter \(\xi_{\mathrm{k}}\) can be adapted and varied depending on the target category. 
Section IV compares the solution with fixed values of \(\xi_{\mathrm{k}}\) and the adaptive solution. 3. The remaining points (i.e., map points larger than \(\gamma_{\mathrm{d}}\) and outside the gate \(\xi_{\mathrm{k}}\)) are clustered through the density-based spatial clustering of applications with noise (DBSCAN) algorithm, with a maximum distance between points belonging to the same cluster \(\xi_{\mathrm{d}}\), and a minimum number of points to form a cluster \(N_{\mathrm{d}}\)[18]. Finally, each cluster centroid is stored in the matrix \(\mathbf{Z}_{t}\), representing target detections extracted from the soft maps. ### _Tracking Algorithms_ For all the tracking algorithms, we adopt the following state vector to represent the state of each target \[\mathbf{s}_{t,n}=\left(s_{t,n,x},s_{t,n,y},s_{t,n,v_{\mathrm{e}}},s_{t,n,v_{ \mathrm{y}}}\right)^{\mathrm{T}} \tag{5}\] where \(t\) and \(n\) are the time and target indexes. The first two elements of the vector correspond to the target position coordinates, while the last two represent the target velocity components. To update the target position coordinates, we use the information extracted from the map, while the target velocity components are inferred by considering both the previous target position at time \(t-1\) and the current target position. The PHD filter is a widely adopted algorithm in literature [19, 20]. One possible implementation suggests approximating the target intensity function as a Gaussian mixture (GM) with a predefined number of components, which takes the following form \[D_{t-1|t-1}(\mathbf{x})=\sum_{h=1}^{\mathcal{H}_{t-1|t-1}}w_{t-1|t-1}^{(h)} \mathcal{N}_{\mathbf{x}}(\boldsymbol{\mu}_{t-1|t-1}^{(h)},\mathbf{P}_{t-1|t- 1}^{(h)}) \tag{6}\] where \(\mathbf{x}\) is a generic random finite set (RFS), \(\mathcal{H}_{t-1|t-1}\) represents the number of Gaussian components in the intensity function, \(w_{t-1|t-1}^{(h)}\) is the \(h\)th component weight, and \(\boldsymbol{\mu}_{t-1|t-1}^{(h)}\) and \(\mathbf{P}_{t-1|t-1}^{(h)}\) represent mean and covariance of the considered component. The intensity function can be interpreted as an atypical probability density function (p.d.f.) whose integral returns the estimated number of targets in the scenario. The prediction step infers the intensity function in the consecutive time step, i.e., \(D_{t|t-1}(\mathbf{x})\), through a linear Kalman predictor [21]. During prediction, the probability of survival \(P_{\mathrm{s}}\) is considered constant, so are the transition matrix \(\mathbf{F}\) and the process noise covariance matrix \(\mathbf{Q}\); the last one represents the motion uncertainty. A set of \(\mathcal{B}\) birth components is added to the predicted intensity function \(D_{t|t-1}(\mathbf{x})\) to represent the possibility of new targets spawning in the surveillance area. The total number of components after prediction is then \(\mathcal{H}_{t|t-1}=\mathcal{H}_{t-1|t-1}+\mathcal{B}\). In the update step, the predicted components are updated through the Kalman update equations, as in [21], with the measurements \(\mathbf{Z}_{t}\) extracted from the maps \(\mathbf{L}_{t}\). During this step, the detection probability \(P_{\mathrm{d}}\) is considered constant, and the covariance matrix \(\mathbf{R}_{t}\) for each measurement is estimated from the selected map detection points, as will be highlighted in Equation (15). 
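As a minimal, self-contained counterpart to the three-step clustering procedure of Section III-B, the sketch below applies an excision threshold to a synthetic soft map and then groups the surviving points with DBSCAN from scikit-learn; the thresholds and the map are placeholder values, and the k-NN gating around previously tracked targets is omitted for brevity.

```python
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(1)

# Illustrative fused soft map L_t on an N_x x N_y grid (placeholder values)
N_x, N_y = 200, 200
L_t = rng.random((N_x, N_y))
L_t[50:55, 80:85] += 5.0          # synthetic "target" blob

# Step 1: excision filter removes low-value map points likely produced by noise
gamma_d = 3.0
xi, yi = np.nonzero(L_t > gamma_d)
points = np.column_stack([xi, yi]).astype(float)   # pixel coordinates of survivors

# Step 3: DBSCAN groups the remaining points; eps plays the role of the maximum
# intra-cluster distance xi_d and min_samples the role of N_d
xi_d, N_d = 3.0, 4
labels = DBSCAN(eps=xi_d, min_samples=N_d).fit_predict(points)

# Cluster centroids form the detection set Z_t
Z_t = np.array([points[labels == c].mean(axis=0) for c in set(labels) if c != -1])
print(Z_t)
```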
The overall amount of components in the posterior can be written as \(\mathcal{H}_{t|t}=\mathcal{H}_{t|t-1}(M_{t}+1)\), where \(M_{t}\) denotes the number of measurements at time instant \(t\). To estimate the number of targets from the PHD posterior, it is enough to sum the weight of the components and round it to the closest integer \[\widehat{N}_{\mathrm{obj}}=\left[\sum_{h=1}^{\mathcal{H}_{t|t}}w_{t|t}^{(h)}\right] \tag{7}\] while for the \(n\)th target state estimation, we extract the mean value of the \(n\)th most likely component \[\hat{\mathbf{s}}_{t,n}=\operatorname*{argmax}_{w_{t|t}^{(h)}}\boldsymbol{\mu} _{t|\ell}^{(h)} \tag{8}\] Figure 2: Block diagram of the sensing processing chain exploiting BS cooperation, target classification, and target-specific tracking. The BSs scan the environment generating range-angle maps and resample them according to a predefined grid. Resampled range-angle maps are then shared with the FC and fused in a single map. Target identification is performed at the FC through map cropping and classification (red block). Then clustering is performed to merge detections generated by the same target (green block). Finally, tracking algorithms perform target state estimation (pink block). The MBM filter is an alternative to the PHD filter for multiple target tracking problems that exploit the association probability between measurements and targets [22, 23]. The MBM filter is used to approximate the target multi-object p.d.f. \[\mathrm{MBM}_{t-1|t-1}(\mathbf{x})=\sum_{g=1}^{\mathcal{G}_{t-1|t-1}}w_{t-1|t-1 }^{(g)}\mathrm{MB}_{t-1|t-1}^{(g)}(\mathbf{x}) \tag{9}\] where \(\mathcal{G}_{t-1|t-1}\) represents the number of multi-Bernoulli (MB) components or global hypothesis in the MBM distribution, and \(w_{t-1|t-1}^{(g)}\) stands for the \(g\)th MBM component weight. The MB distribution in Equation (9) can be written as follows \[\mathrm{MB}_{t-1|t-1}^{(g)}(\mathbf{x})=\sum_{\boldsymbol{\|}\boldsymbol{ \mathcal{G}}_{t}=\mathbf{x}}\prod_{l=1}^{\mathcal{L}_{t-1}^{(g)}|t-1}\mathrm{ B}_{t-1|t-1}^{(g,l)}(\mathbf{x}_{l}) \tag{10}\] where \(\mathcal{L}_{t-1|t-1}^{(g)}\) represents the number of Bernoulli components or local hypothesis in the MB distribution, and the summation is performed for all the possible unions of mutually disjoint RFS that generate \(\mathbf{x}\), which means to evaluate all the possible data associations between measurements and targets [23]. The single Bernoulli component in Equation (10) can be written as \[\mathrm{B}_{t-1|t-1}^{(g,l)}(\mathbf{x}_{l})=r_{t-1|t-1}^{(g,l)}\mathcal{N}_{ \mathbf{x}_{l}}(\boldsymbol{\mu}_{t-1|t-1}^{(g,l)},\mathbf{P}_{t-1|t-1}^{(g,l )}) \tag{11}\] where \(r_{t-1|t-1}^{(g,l)}\) represents the existence probability of the \(l\)th local hypothesis in the \(g\)th global hypothesis, \(\boldsymbol{\mu}_{t-1|t-1}^{(h)}\) and \(\mathbf{P}_{t-1|t-1}^{(h)}\) represent the mean and the covariance of the considered component, respectively. During prediction, linear Kalman prediction is performed again to infer the parameters in the consecutive time step. To account for new spawning objects, a set of \(\mathcal{B}\) Bernoulli components is added to each global hypothesis. For both algorithms, to exploit the prior information about the environment, the components are generated following the scenario layout, i.e., the number of hypotheses, their mean value, covariance, and weight are based on the lanes and crosswalk positions in the environment. 
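As a small numerical illustration of the estimation step in Equations (7) and (8), the snippet below extracts the estimated number of targets and the corresponding states from a toy GM-PHD posterior; the component weights and means are arbitrary.

```python
import numpy as np

# Toy GM-PHD posterior: weights and means of the Gaussian components (arbitrary values)
weights = np.array([0.95, 0.85, 0.10, 0.05])
means = np.array([[10.0,  4.0, 1.0, 0.0],
                  [25.0, -3.0, 0.0, 1.2],
                  [40.0, 12.0, 0.5, 0.5],
                  [ 2.0,  2.0, 0.0, 0.0]])   # (x, y, v_x, v_y) per component

# Eq. (7): estimated number of targets = rounded sum of the weights
n_obj = int(np.rint(weights.sum()))

# Eq. (8): take the means of the n_obj components with the largest weights
order = np.argsort(weights)[::-1]
estimates = means[order[:n_obj]]
print(n_obj, estimates)
```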
The overall number of components after the prediction step can be evaluated as \((\mathcal{L}_{t-1|t-1}+\mathcal{B})\mathcal{G}_{t-1|t-1}\). In the update phase, a linear Kalman update is performed to derive the updated parameters, considering the most likely association between measurements and targets [22]. Estimations \(\hat{\mathbf{s}}_{t,n}\) are then extracted from the posterior distribution, considering the mean value \(\boldsymbol{\mu}_{t|t}^{(i,j)}\) of the MB components with existence probability \(r_{t|t}^{(i,j)}\geq\gamma_{\mathrm{e}}\) from the MBM component with highest probability \(w_{t|t}^{(i)}\). ### _Motion and Measurement Model_ To model clutter measurements representing false alarm detection extracted by the clustering procedure, a Poisson point process (PPP) is considered, whose intensity is defined as \(\lambda_{\mathrm{c}}\). Target death is modeled through a constant probability of survival \(P_{\mathrm{s}}\). During prediction, if a component is associated with a missed detection, its weight is multiplied by a factor proportional to \(P_{\mathrm{s}}\), which means that consecutive missed detections lead to unlikely target state components. A linear prediction model is selected to track the behavior of both extended and point-like targets. This is justified by the low value of \(T_{\mathrm{scan}}\) compared to the target velocity, which allows to approximate target motions as piecewise linear among consecutive acquisitions. The corresponding transition matrix and process noise covariance matrix are \[\mathbf{F}=\begin{bmatrix}1&0&T_{\mathrm{scan}}&0\\ 0&1&0&T_{\mathrm{scan}}\\ 0&0&1&0\\ 0&0&0&1\end{bmatrix} \tag{12}\] \[\mathbf{Q}=\alpha_{\mathrm{q}}\,T_{\mathrm{scan}}\cdot\mathbf{I}_{4} \tag{13}\] where \(\alpha_{\mathrm{q}}\) is a parameter that represents the prediction uncertainty about the target motion. In this work, only position information about the targets is estimated through measurements, while consecutive position measurements are used to infer the velocity.4 With these assumptions, the following measurement matrix is considered Footnote 4: Although possible we consider the BS do not estimate target Doppler. \[\mathbf{H}=\begin{bmatrix}1&0&0&0\\ 0&1&0&0\end{bmatrix}. \tag{14}\] Because of the high-resolution maps, multiple detections (closely spaced map pixels) from each target are generated, leading to a non-diagonal measurement covariance matrix. Thus such a matrix needs to be estimated. Let us define the set of map points \(\mathbf{L}_{t}^{(\mathbf{z}_{t,m})}\) extracted after clustering, specified by \(\mathbf{z}_{t,m}\), from the measurement matrix (map) \(\mathbf{L}_{t}\). Indicating with \(\mathbf{V}_{t}^{(\mathbf{z}_{t,m})}\), the \(2\times N_{\mathbf{z}_{t,m}}\) matrix containing the pixel coordinates relative to \(\mathbf{L}_{t}^{(\mathbf{z}_{t,m})}\), the sample covariance measurement matrix can be calculated as \[\mathbf{R}_{t}=\frac{1}{N_{\mathbf{z}_{t,m}}-1}\sum_{p=1}^{N_{\mathbf{z}_{t,m }}}(\mathbf{v}_{t,p}^{(\mathbf{z}_{t,m})}-\mathbf{z}_{t,m})(\mathbf{v}_{t,p}^ {(\mathbf{z}_{t,m})}-\mathbf{z}_{t,m})^{\mathrm{T}} \tag{15}\] where \(N_{\mathbf{z}_{t,m}}\) represents the number of map points associated to the \(m\)th measurement \(\mathbf{z}_{t,m}\), and \(\mathbf{v}_{t,p}^{(\mathbf{z}_{t,m})}\) stands for the \(p\)th map point coordinates in the matrix \(\mathbf{V}_{t}^{(\mathbf{z}_{t,m})}\). 
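The following sketch puts Equations (12)-(15) together: it builds the constant-velocity matrices \(\mathbf{F}\), \(\mathbf{Q}\), and \(\mathbf{H}\), estimates the measurement covariance \(\mathbf{R}_{t}\) from the map points of one cluster, and runs a single Kalman prediction/update on one Gaussian component. The scan period, \(\alpha_{\mathrm{q}}\), and the synthetic cluster are illustrative.

```python
import numpy as np

T_scan = 0.1      # scan period [s] (placeholder)
alpha_q = 1.0     # motion uncertainty parameter (placeholder)

# Eq. (12)-(14): constant-velocity transition, process noise, and measurement matrices
F = np.array([[1, 0, T_scan, 0],
              [0, 1, 0, T_scan],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
Q = alpha_q * T_scan * np.eye(4)
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)

# Eq. (15): sample covariance of the map points assigned to one measurement z_{t,m}
rng = np.random.default_rng(2)
V = np.array([30.0, 12.0]) + rng.normal(scale=[0.8, 0.3], size=(25, 2))  # cluster pixels
z_tm = V.mean(axis=0)                                                    # cluster centroid
R_t = (V - z_tm).T @ (V - z_tm) / (V.shape[0] - 1)

# One Kalman prediction/update cycle for a single Gaussian component (mu, P)
mu, P = np.array([29.0, 11.0, 0.0, 0.0]), np.eye(4)
mu_pred, P_pred = F @ mu, F @ P @ F.T + Q
S = H @ P_pred @ H.T + R_t
K = P_pred @ H.T @ np.linalg.inv(S)
mu_upd = mu_pred + K @ (z_tm - H @ mu_pred)
P_upd = (np.eye(4) - K @ H) @ P_pred
print(mu_upd)
```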
### _Post-Processing_ A set of post-processing procedures is implemented to manage the complexity of the algorithms and ensure good estimation accuracy. In the PHD filter, pruning, capping, and merging are implemented sequentially to reduce the number of components in the posterior intensity function. Pruning removes all the components in the posterior whose weights \(w_{t|t}^{(h)}\) are under a predefined threshold \(\gamma_{\mathrm{p}}\) [20]. Then, capping is realized on the remaining components selecting the \(\gamma_{\mathrm{q}}\) components with the greatest \(w_{t|t}^{(h)}\), by fixing the maximum number of components in the posterior to \(\gamma_{\mathrm{q}}\) [22]. Finally, on the remaining components in the set \(\zeta_{\mathrm{m}}\), merging of those whose average distance, defined in the following equation, is lower than a predefined threshold \(\gamma_{\mathrm{s}}\) is performed: \[d(\boldsymbol{\mu}_{t|t}^{(i)},\boldsymbol{\mu}_{t|t}^{(j)})=\|\boldsymbol{\mu}_{t|t}^{(i)}-\boldsymbol{\mu}_{t|t}^{(j)}\|_{2} \tag{16}\] where weights, mean, and covariance are updated as follows: \[w^{(k)}_{t|t}=\sum_{i\in\zeta_{\mathrm{m}}}w^{(i)}_{t|t}\] \[\boldsymbol{\mu}^{(k)}_{t|t}=\sum_{i\in\zeta_{\mathrm{m}}}w^{(i)}_{t|t}\boldsymbol{\mu}^{(i)}_{t|t}\] \[\mathbf{P}^{(k)}_{t|t}=\sum_{i\in\zeta_{\mathrm{m}}}w^{(i)}_{t|t}\mathbf{P}^{(i)}_{t|t}+(\boldsymbol{\mu}^{(i)}_{t|t}-\boldsymbol{\mu}^{(k)}_{t|t})(\boldsymbol{\mu}^{(i)}_{t|t}-\boldsymbol{\mu}^{(k)}_{t|t})^{\mathrm{T}}\] where \(k\) represents the new index assigned to the derived component, and \(i\) represents the index of the merged component. In the MBM filter, during the update phase, a gate for eligible data associations allows pruning all the weak association hypotheses with \(w^{(k)}_{t|t}<\xi_{\mathrm{a}}\). Both the MBM and the MB components are pruned with the thresholds \(\gamma_{\mathrm{g}}\) and \(\gamma_{\mathrm{l}}\), respectively. Then, the residual MBM components are capped with a threshold \(\gamma_{\mathrm{c}}\). To increase the estimation accuracy, in the most likely MBM components, the MB components closer than \(\gamma_{\mathrm{m}}\) are merged as previously described. ## IV Numerical Results ### _Performance Metrics_ From the communication perspective, the aggregate network capacity, intended as the sum rate of each BS in the downlink, is considered to assess the communication performance. Considering \(N_{\mathrm{s}}\) BSs dedicated to both sensing and communication among the \(N_{\mathrm{tot}}\) available BSs (so \(N_{\mathrm{tot}}-N_{\mathrm{s}}\) are for communication only) and a fraction of power dedicated to sensing \(\rho_{\mathrm{p}}\) (1), the overall aggregate network capacity can be written as \[\begin{split}&C(\rho_{\mathrm{p}})=(N_{\mathrm{s}}\,\Delta f\,K\,\log_{2}(1+(1-\rho_{\mathrm{p}})\,\mathrm{SNR}^{(\mathrm{c})})\\ &+(N_{\mathrm{tot}}-N_{\mathrm{s}})\,\Delta f\,K\,\log_{2}(1+\mathrm{SNR}^{(\mathrm{c})}))/N_{\mathrm{tot}}\end{split} \tag{17}\] where \(\mathrm{SNR}^{(\mathrm{c})}\) is the communication SNR experienced by the users.5 Footnote 5: To keep the presentation of numerical results simple, we consider all the UEs experience the same SNR. To evaluate the network localization capability, the OSPA is selected as a single-value metric [1, 2] \[\mathrm{OSPA}=\sqrt{\frac{1}{N_{\mathrm{c}}}\bigg{(}\min_{\pi\in\Pi_{N_{\mathrm{c}}}}\sum_{i=1}^{N_{\mathrm{t}}}d^{(c)}\big{(}\mathbf{p}_{i},\hat{\mathbf{p}}_{\pi(i)}\big{)}^{2}+c^{2}\,(N_{\mathrm{c}}-N_{\mathrm{t}})\bigg{)}} \tag{18}\] where \(N_{\mathrm{t}}\) and \(N_{\mathrm{c}}\) denote the smaller and the larger cardinality between the sets of true and estimated target positions, \(\mathbf{p}_{i}\) and \(\hat{\mathbf{p}}_{\pi(i)}\) are a true position and its associated estimate, \(d^{(c)}(\mathbf{x},\mathbf{y})=\min\{c,\|\mathbf{x}-\mathbf{y}\|_{2}\}\) is the Euclidean distance saturated at the cutoff \(c\), and \(\Pi_{N_{\mathrm{c}}}\) is the set of permutations of \(\{1,\ldots,N_{\mathrm{c}}\}\).
For both tracking algorithms, the following parameter values are adopted. The MB merging threshold is set to \(\gamma_{\rm m}=5\). In the PHD filter, the pruning threshold of the components is \(\gamma_{\rm p}=100\cdot 10^{-6}\), while the maximum number of components is fixed to \(\gamma_{\rm q}=10\). In the MBM filter, the pruning threshold on the probability of existence is \(\gamma_{\rm l}=100\cdot 10^{-6}\), while the pruning threshold on the MBM components is set to \(\gamma_{\rm g}=10^{-15}\). The maximum number of MBM components is \(\gamma_{\rm c}=10\). The gate for the admissible associations is set to \(\xi_{\rm a}=14\). The existence threshold is \(\gamma_{\rm e}=0.99\). For both the algorithms, the birth components for new appearing objects are initialized with covariance \({\bf P}^{(b)}=0.1\cdot{\bf I}_{4}\), with position \(\mathbf{\mu}^{(b)}\) reflecting the possible target spawn position. A recovery component is initialized centered in the scenario with covariance \({\bf P}^{(b)}=5\cdot{\bf I}_{4}\). ### _Target Classification Performance_ In Figure 3, the classification accuracy for \(\rho_{\rm p}=0.3\), and varying the number of BSs selected for sensing \(N_{\rm s}\), is reported in green for both PHD and MBM, on the top and bottom plots, respectively. It can be noticed that both algorithms ensure a target classification accuracy greater than \(0.85\) when the number of BSs is \(N_{\rm s}\geq 3\). Classification performance is highly influenced by the number of BSs adopted for sensing; reducing the number of BSs reduces the number of detected reflection points of extended targets (because of the reduction of spatial diversity), resulting in more similar target fingerprints between pedestrians and vehicles in the fused maps. It is also interesting to notice that with \(N_{\rm s}=6\) BSs, the target classification accuracy is greater than \(98\%\), representing a remarkable result. ### _Sensing and Communication Performance_ In Figure 3, the OSPA metric for \(\rho_{\rm p}=0.3\), and varying the number of sensors \(N_{\rm s}\), is illustrated for PHD and MBM, on the top and bottom, respectively. Blue dotted curves represent the algorithm performance with \(\xi_{\rm k}=4\) for both pedestrians and vehicles; red dashed curves refer to the performance with \(\xi_{\rm k}=6\), again for both pedestrians and vehicles. Solid yellow curves represent the performance of the AI-based solution, whose gates are adapted for pedestrians (\(\xi_{\rm k}=4\)) and vehicles (\(\xi_{\rm k}=6\)) based on the target identification. As can be noticed, the adaptive gate achieves a lower localization error for both algorithms. For the PHD filter, the solution with adaptive gating presents an error lower than \(1\,\mathrm{m}\) when the number of sensors is \(N_{\rm s}\geq 3\). Similarly, the MBM filter exhibits an OSPA lower than \(0.7\,\mathrm{m}\) considering \(N_{\rm s}\geq 3\).
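For reference, the OSPA values discussed in this section can be reproduced with the standard assignment-based definition of order \(2\) given above; the sketch below uses SciPy's linear_sum_assignment for the optimal matching, and the cutoff value and the example point sets are arbitrary.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def ospa(true_pts: np.ndarray, est_pts: np.ndarray, cutoff: float = 2.0) -> float:
    """OSPA distance of order 2 between two 2-D point sets (empty sets handled)."""
    m, n = len(true_pts), len(est_pts)
    if m == 0 and n == 0:
        return 0.0
    if m == 0 or n == 0:
        return cutoff
    # Pairwise Euclidean distances, saturated at the cutoff c
    d = np.linalg.norm(true_pts[:, None, :] - est_pts[None, :, :], axis=-1)
    d = np.minimum(d, cutoff)
    # Optimal assignment between the two sets (minimum over permutations)
    row, col = linear_sum_assignment(d)
    n_max = max(m, n)
    cost = np.sum(d[row, col] ** 2) + cutoff ** 2 * (n_max - min(m, n))
    return float(np.sqrt(cost / n_max))

true_pts = np.array([[0.0, 0.0], [10.0, 5.0], [20.0, -3.0]])
est_pts = np.array([[0.3, -0.2], [10.4, 5.5]])
print(ospa(true_pts, est_pts))   # combined localization and cardinality error in metres
```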
The performance degradation experienced when \(N_{\rm s}<3\) is due to target misclassification. In this case, the adaptive solution is affected by the mismatch between the real target classes and the estimated ones, resulting in an incorrect assignment of the gating parameter \(\xi_{\rm k}\). To emphasize the benefit produced by the adoption of adaptive gating (see Figure 4), the number of BSs devoted to sensing is fixed. At the same time, the OSPA metric is reported over the first \(100\) acquisitions. Blue areas represent the OSPA produced by a fixed gate \(\xi_{\rm k}=4\) for both pedestrians and vehicles, red areas refer to the solution with \(\xi_{\rm k}=6\), and yellow areas represent the adaptive solution. It is important to highlight the increase in the localization performance thanks to adaptive gating for both algorithms, which results in reduced OSPA peaks. From a communication perspective, the BS aggregate capacity is evaluated with Equation (17), considering \(\rho_{\rm p}=0.3\). The worst case for communication is when all the BSs are performing joint communication and sensing, i.e., \(N_{\rm s}=6\). In this situation, the downlink capacity is \(C=0.9\,\text{Gbit/s}\). On the contrary, without performing sensing (\(N_{\rm s}=0\)), the downlink capacity can reach \(C=1.1\,\text{Gbit/s}\). As a compromise, using \(3\) BSs for JSC and \(3\) for communication only, the downlink capacity can be maintained greater than \(C=1\,\text{Gbit/s}\). Figure 3: Localization performance and classification accuracy varying the number of BSs devoted for sensing \(N_{\rm s}\) for the PHD (top) and MBM (bottom) Figure 4: Localization performance over time for the PHD (top) and MBM (bottom) ## V Conclusion In this work, we presented a framework to perform JSC with OFDM waveforms exploiting cooperation and data fusion among BSs to improve localization performance. Furthermore, leveraging different target reflection fingerprints in the soft maps, we developed a CNN classifier to identify the object type and adapt the multi-target tracking to the specific object type. A three-step clustering strategy based on adaptive gating is proposed to manage point-like and extended targets and exploit target identification. Then, two multi-target tracking algorithms are used, the PHD and MBM filters, to track all the targets in the surveillance area. The overall system is tested in a vehicular scenario with two types of targets, pedestrians and vehicles. To explore the communication/sensing trade-off, we investigated the sensing performance varying the number of cooperating BSs, considering that a fraction of transmit power is devoted to the sensing beams. The system performance has been evaluated through the OSPA metric, target classification accuracy, and communication performance via aggregate downlink capacity. Numerical results show that adaptive gating aided by target identification performs better than the simpler target-agnostic solution when the target classification accuracy is greater than \(90\%\). For example, by choosing \(N_{\mathrm{s}}=3\) BSs, a classification accuracy around \(0.9\) is reached, with an OSPA error lower than \(1\) m for the PHD filter and around \(0.7\) m for the MBM filter, while also ensuring a downlink capacity greater than \(1\) Gbit/s.
With \(N_{\mathrm{s}}=6\) sensing BSs, a target classification accuracy larger than \(98\%\) is reached and the localization error drops below \(0.7\) m for both tracking algorithms, at the cost of a \(10\%\) penalty on the downlink capacity, i.e., from \(1\) Gbit/s to \(0.9\) Gbit/s.
2309.09694
Noise-Augmented Boruta: The Neural Network Perturbation Infusion with Boruta Feature Selection
With the surge in data generation, both vertically (i.e., volume of data) and horizontally (i.e., dimensionality), the burden of the curse of dimensionality has become increasingly palpable. Feature selection, a key facet of dimensionality reduction techniques, has advanced considerably to address this challenge. One such advancement is the Boruta feature selection algorithm, which successfully discerns meaningful features by contrasting them to their permutated counterparts known as shadow features. However, the significance of a feature is shaped more by the data's overall traits than by its intrinsic value, a sentiment echoed in the conventional Boruta algorithm where shadow features closely mimic the characteristics of the original ones. Building on this premise, this paper introduces an innovative approach to the Boruta feature selection algorithm by incorporating noise into the shadow variables. Drawing parallels from the perturbation analysis framework of artificial neural networks, this evolved version of the Boruta method is presented. Rigorous testing on four publicly available benchmark datasets revealed that this proposed technique outperforms the classic Boruta algorithm, underscoring its potential for enhanced, accurate feature selection.
Hassan Gharoun, Navid Yazdanjoe, Mohammad Sadegh Khorshidi, Amir H. Gandomi
2023-09-18T11:59:06Z
http://arxiv.org/abs/2309.09694v1
# Noise-Augmented Boruta: The Neural Network Perturbation Infusion with Boruta Feature Selection ###### Abstract With the surge in data generation, both vertically (i.e., volume of data) and horizontally (i.e., dimensionality) the burden of the curse of dimensionality has become increasingly palpable. Feature selection, a key facet of dimensionality reduction techniques, has advanced considerably to address this challenge. One such advancement is the Boruta feature selection algorithm, which successfully discerns meaningful features by contrasting them to their permutated counterparts known as shadow features. However, the significance of a feature is shaped more by the data's overall traits than by its intrinsic value, a sentiment echoed in the conventional Boruta algorithm where shadow features closely mimic the characteristics of the original ones. Building on this premise, this paper introduces an innovative approach to the Boruta feature selection algorithm by incorporating noise into the shadow variables. Drawing parallels from the perturbation analysis framework of artificial neural networks, this evolved version of the Boruta method is presented. Rigorous testing on four publicly available benchmark datasets revealed that this proposed technique outperforms the classic Boruta algorithm, underscoring its potential for enhanced, accurate feature selection. Feature Selection, Boruta, Neural networks, Perturbation analysis, Feature importance. ## I Introduction With the emergence of data centers and the advent of big data technologies in recent years, there has been a marked influence on the processes of data generation and storage, where these advancements have acted as powerful enablers for high-throughput systems, substantially augmenting the capacity to generate data both in terms of the number of data points (sample size) and the range of attributes or features collected for each data point (dimensionality) [1]. The explosive surge in the volume of gathered data has heralded unprecedented opportunities for data-driven insights. Yet, high-dimensionality simultaneously poses distinct challenges that obstruct the success of machine learning algorithms. This dichotomy is particularly emphasized in the so-called _"curse of dimensionality"_, a term coined by Richard Bellman [2], which encapsulates the challenges faced in handling high-dimensional data spaces. The curse of dimensionality is effectively addressed by employing a collection of techniques collectively referred to as dimensionality reduction. Dimensionality reduction can be categorized into two primary branches: 1. Feature extraction: the process of creating a smaller collection of new features from the original dataset while still preserving the majority of the vital information. 2. Feature selection: the process of identifying and choosing the most relevant features from the original dataset based on their contribution to the predetermined relevance criterion. Feature selection, similar to machine learning models, is classified into supervised, unsupervised, and semi-supervised types, depending on the availability of well-labeled datasets. Furthermore, supervised feature selection is divided into three main subcategories, namely (interested readers in delving deeper into feature extraction and feature selection, and their various types, are encouraged to refer to [3]): 1. Filter methods: rank features based on statistical measures and select the top-ranked features. 2. 
Wrapper methods: evaluates subsets of features which best contribute to the accuracy of the model 3. Hybrid methods: leverages the strengths of both filter and wrapper methods by first implementing a filter method to simplify the feature space and generate potential subsets, and then using a wrapper method to identify the most optimal subset [1]. 4. Embedded methods: utilize specific machine learning models that use feature weighting functionality embedded in the model to select the most optimal subset during the model's training [4]. Random Forest is a widely used algorithm for embedded feature selection. The Random Forest (RF) algorithm is a type of ensemble classifier that uses a concurrent set of decision trees, termed component predictors. RF applies a bootstrapping technique that randomly creates \(n\) training subsets from the main dataset, and this process is performed \(m\) times, leading to the construction of \(m\) independent decision trees. Each tree is built using a random subset of features. The ultimate decision is made based on the majority vote of the component predictors [5]. RF leverage permutation importance to calculate feature importance. Each tree in the forest contributes to the classification of instances it wasn't used to train. After the initial classification, the feature values are shuffled randomly and the classification process is repeated. The importance of a feature is determined by comparing the correct classifications before and after the permutation of feature values. If shuffling a feature's values leads to a large decrease in accuracy, then that feature is considered important. The final feature importance is obtained by averaging the importance of all individual trees. The utilization of Z-score is another viable approach, wherein the average value is divided by its standard deviation to derive an importance measure [6]. Algorithm 1 outlines the process of calculating feature importance in a RF model. ``` Random Forest, Instances, Features ``` 0: Feature Importance \(V_{\text{orig}}\gets 0\) \(V_{\text{perm}}\gets 0\) for each tree \(t\) in Random Forest do iftree \(t\) did not use the current instance for training then Classify all instance \(V_{\text{orig}}^{(t)}\leftarrow\) number of correct votes Permute feature values Classify all instance again \(V_{\text{perm}}^{(t)}\leftarrow\) number of correct votes \(I^{(t)}\leftarrow\frac{V_{\text{perm}}^{(t)}-V_{\text{perm}}^{(t)}}{\text{number of instances}}\) endif endfor \(I\leftarrow\frac{1}{\text{number of trees}}\sum_{t=1}^{\text{number of trees}}I^{(t)}\) return\(I\) ``` **Algorithm 1** Calculate Feature Importance by RF [6] argued that the trustworthiness of the evaluation of feature significance is grounded in the presumption that the separate trees cultivated within the random forest are unrelated while numerous analyses have occasionally demonstrated that this presupposition might not hold true for certain datasets. Furthermore, they contended that distinguishing genuinely important features becomes difficult when dealing with a large number of variables, as some may seem important due to random data correlations. Accordingly, the importance score by itself is inadequate to pinpoint significant associations between features and the target [6]. They address this issue by proposing Boruta algorithm. In Random Forest, the importance of features is calculated in comparison to each other. 
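As a simplified counterpart to Algorithm 1, the following sketch estimates permutation importance for a random forest on a held-out split instead of the per-tree out-of-bag samples; the synthetic dataset and forest size are illustrative.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20, n_informative=5, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
baseline = rf.score(X_te, y_te)                      # accuracy before any permutation

rng = np.random.default_rng(0)
importance = np.zeros(X.shape[1])
for f in range(X.shape[1]):
    X_perm = X_te.copy()
    X_perm[:, f] = rng.permutation(X_perm[:, f])     # shuffle one feature, keep the rest
    importance[f] = baseline - rf.score(X_perm, y_te)  # accuracy drop = importance

print(np.argsort(importance)[::-1][:5])              # five most important feature indices
```

In this simplified variant, the drop in accuracy after permuting a feature plays the role of the difference between the correct votes before and after permutation in Algorithm 1.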
However, in Boruta, the main idea is to evaluate the importance of features in competition with a set of random features called shadow features. In this process, every feature in the dataset is duplicated and their values are shuffled randomly. The Random Forest algorithm is applied repeatedly, randomizing the shadow features each time and calculating feature importance for all attributes (original features and shadow features). If the importance of a given feature consistently exceeds the highest importance among all the shadow features, it is classified as important. The measure of consistency is established through a statistical test, grounded on the binomial distribution, which quantifies how frequently the feature's importance overtakes the Maximum Importance of the Random Attributes (MIRA). If this count (called 'hits') significantly outnumbers or undershoots the expected count, the feature is deemed 'important' or 'unimportant' respectively. This process iterates until either all features are conclusively categorized, or a predetermined iteration limit is reached. Algorithm 2 succinctly illustrates the steps of the Boruta algorithm [6]. ``` Let \(\mathcal{F}\) be the set of all features Let \(\mathcal{H}\) be the empty list to store the importance history Let \(maxIter\) be the maximum number of iterations for\(iter=1\) to \(maxIter\)do Create \(\mathcal{F}^{\prime}=\mathcal{F}\cup\{\text{shuffled copies of all }f\in\mathcal{F}\}\) (the shadow features) Train a \(RF\) classifier on the dataset using \(\mathcal{F}^{\prime}\) Compute \(I=RF\):importance, the importance score for all features in \(\mathcal{F}^{\prime}\) Compute \(maxShadow=\max(I_{f^{\prime}})\) where \(f^{\prime}\in\mathcal{F}^{\prime}\) are the shadow features for each\(f\in\mathcal{F}\)do Add \(I_{f}\) to \(\mathcal{H}_{f}\), the importance history for feature \(f\) if\(\bar{I}_{f}>maxShadow\)then Mark \(f\) as important elseif\(\bar{I}_{f}<maxShadow\) for some threshold number of times in \(\mathcal{H}_{f}\)then Mark \(f\) as unimportant endif endfor return The set of features marked as important ``` **Algorithm 2** Boruta Algorithm for Feature Selection Since the introduction of Boruta, this algorithm has been extensively and successfully utilized in across diverse research domains, including medicine [7, 8, 9, 10], cybersecurity [11], engineering [12, 13], and environmental [14, 15, 16, 17] studies. Even the Boruta algorithm has been successfully employed to reduce the dimensionality of features extracted from images by deep networks [18]. While the Boruta algorithm has indeed been successful in feature selection, contributing to improved predictive performance as highlighted in the literature, it's crucial to note that in Boruta, features are merely permuted. This permutation does not alter the inherent attributes of a feature. A similar phenomenon occurs in the Random Forest algorithm when calculating feature importance through permutation. However, The relevance of the feature is determined by the data's characteristics, not its value [19]. Therefore, in this study, we introduce a new variant of the Boruta algorithm. In the traditional Boruta algorithm, shadow features are constructed merely by random shuffling, which does not alter a feature's properties. To address this, in our study, the shadow features are not only shuffled but also perturbed with normal noise. 
Additionally, instead of employing RF for calculating feature importance, we have utilized the perturbation analysis approach within neural networks. The remainder of this paper is structured as follows: Section II offers a comprehensive discussion of the proposed algorithm. Section III details the dataset used and outlines the experimental design. The findings from the experiments are presented and analyzed in Section IV. Lastly, Section V provides concluding remarks and suggests avenues for future research. ## II Proposed Method ### _Noise-augmented shadow features_ The value of a feature in a dataset is often viewed as less important than the overall characteristics of the data, in terms of providing insight into the predictive modeling process [19]. This perspective holds true for the Boruta algorithm, where the shadow features, bearing the same characteristics as the original ones, are utilized. Yet, it should be noted that even though the permutation of these shadow features disrupts the original relationship with the target variable, the essence of the Boruta algorithm--that each original feature competes with random features mirroring their own characteristics--remains intact. This study aims to further the current understanding of the role and potential of shadow features by questioning the foundational assumption of their inherent similarity to the original features. To this end, the concept of 'noise-augmented shadow features' is proposed, where the characteristics of the shadow features are deliberately modified. This new approach allows for an exploration into whether diversifying the characteristics of shadow features can lead to improved feature selection performance. The theoretical rationale behind this new approach is to provide a broader spectrum of random features for the original ones to compete against, thereby enriching the competition space and potentially enhancing the robustness of the feature selection process. This investigation is driven by the belief that the performance of a feature selection algorithm may be influenced not only by the relevancy of the features but also by the diversity and characteristics of the shadow features. Algorithm 3 clearly outlines the steps involved in generation of noise-augmented shadow features. In this approach, each original feature undergoes a process of augmentation with white noise - a random value possessing zero mean and standard deviation equal to that computed from the original feature. This noise generation step mimics the statistical characteristics of the original feature while simultaneously disrupting its inherent relationship with the target variable. Subsequently, a random permutation is applied, which further ensures that any patterns or dependencies present in the original feature set do not unduly influence the shadow features. ``` 1: Let \(F\) be the set of all features 2:\(F\) 3:for each feature \(f\) in \(F\)do 4:\(\delta\leftarrow\) compute standard deviation of feature \(f\) 5:\(Noise\leftarrow\) generate white noise with 0 mean and \(\delta\) 6:\(Shadow_{i}\leftarrow\) augment feature \(f\) with \(Noise\) then permute randomly 7:endfor 8:return\(F_{NS}\): Noise-augmented shadow features. ``` **Algorithm 3** Generation of Noise-Augmented Shadow Features ### _Perturbation-based assessment of feature importance_ The concept of perturbation analysis offers a solution to quantify the influence of each variable within the framework of neural network models. 
In the procedure, disturbances are intentionally introduced to the neural network's inputs. To maintain control over the experiment, only one input variable is altered during each iteration, keeping the remainder unchanged. The variable that, when disturbed, yields the most significant impact on the dependent variable is then recognized as the variable of greatest importance [20]. Figure 1 shows the general scheme of perturbation analysis. In light of the above, this study introduces a novel variant of the Boruta feature selection method, inspired by the perturbation analysis paradigm employed in Artificial Neural Networks (ANNs). This approach incorporates the use of noise-augmented shadow features and utilizes a Shallow ANN as the underlying base model. Let's consider a dataset, \(D=(x_{1},y_{1}),(x_{2},y_{2}),...,(x_{N},y_{N})\), where \(x_{i}\) represents the \(i^{th}\) observation vector in a \(d\)-dimensional feature space, and \(y_{i}\) corresponds to the label of the \(i^{th}\) observation. The first stage involves the creation of training and testing datasets, denoted by \(D_{train}\) and \(D_{test}\), respectively. In this proposed variant, \(D_{train}\) is solely used for feature selection, while \(D_{test}\) is reserved exclusively to evaluate the performance of the selected features. Thus, the feature selection process does not have any access to or influence from the test dataset, thereby ensuring an unbiased assessment of the feature selection process. Algorithm 4 offers a step-by-step delineation of the proposed method. In the proposed method, given \(D_{train}\) a new train set \(\mathcal{D}^{\prime}\) is constructed by a combination of original features and their noise-augmented counterparts (shadow features). This set is then normalized to prepare it for the learning algorithm (ANN shallow learner). The F1 score of the trained model by \(\mathcal{D}^{\prime}\) is then used as the baseline performance metric. Next, each feature in the \(\mathcal{D}^{\prime}\) is perturbed individually by adding a noise factor and shuffling while keeping the other features unchanged. The perturbed F1 score of the model is computed and the difference between the baseline and perturbed scores is noted. The difference in scores effectively quantifies the influence of perturbing each feature, and these differences are then normalized. This step allows us to measure the influence Fig. 1: Perturbation analysis scheme. of each feature on the model's performance. Afterward, a competition takes place between the original features and their noise-augmented shadow counterparts. The most influential shadow feature (i.e., the one with the highest normalized difference in F1 score after perturbation) sets a threshold. If an original feature's impact on the F1 score exceeds this threshold, it is considered important and one _'hit'_ is recorded. This process is repeated for a pre-specified number of iterations. At the end of these iterations, the features that have accumulated at least one hit are selected. ## III Experimental Setup ### _Data sets_ To evaluate the proposed method's performance, it was applied to four publicly available datasets. These datasets, each unique in their characteristics, offer a diverse range of scenarios to thoroughly test the robustness and efficacy of the proposed method. 
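Before describing the datasets, a minimal sketch of the selection loop outlined above is given: noise-augmented shadow features are appended to the training set, a shallow MLP provides a baseline F1 score, each original and shadow feature is perturbed in turn, and a 'hit' is recorded whenever an original feature's score drop exceeds the largest drop among the shadows. The dataset, network size, iteration count, and noise multiplier below are illustrative choices, not the exact configuration of Section III.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.metrics import f1_score
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=600, n_features=10, n_informative=4, random_state=0)
d = X.shape[1]

def add_noise_shadows(X):
    """Shadow features: each column perturbed with zero-mean noise (std of that column)
    and then randomly permuted, which breaks any relationship with the target."""
    shadows = X + rng.normal(0.0, X.std(axis=0), size=X.shape)
    return np.hstack([X, rng.permuted(shadows, axis=0)])

hits = np.zeros(d, dtype=int)
n_iter, n_mult = 10, 50                      # iterations and std multiplier (illustrative)
for _ in range(n_iter):
    X_ext = StandardScaler().fit_transform(add_noise_shadows(X))
    clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=300).fit(X_ext, y)
    baseline = f1_score(y, clf.predict(X_ext))
    drops = np.zeros(X_ext.shape[1])
    for f in range(X_ext.shape[1]):
        X_pert = X_ext.copy()
        noise = rng.normal(0.0, n_mult * X_pert[:, f].std(), X_pert.shape[0])
        X_pert[:, f] = rng.permutation(X_pert[:, f] + noise)  # perturb and shuffle one feature
        drops[f] = baseline - f1_score(y, clf.predict(X_pert))
    threshold = drops[d:].max()              # most influential noise-augmented shadow
    hits += drops[:d] > threshold            # one hit per original feature that beats it

selected = np.flatnonzero(hits >= 1)
print(selected)
```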
Brief descriptions of each dataset are presented below: * Smartphone-based recognition of human activities and postural transitions (SB-RHAPT) [21]: comprises of data collected from smartphone sensors recording basic activities and postural transitions. Each record is associated with an activity encoded as six different classes. * APS Failure at Scania Trucks (APSF) [22]: consists of data collected from the Air Pressure system (APS) of Scania Trucks. This dataset is imbalanced as most records represent normal operation while a small fraction corresponds to the APS failure. Missing values within this dataset are replaced with the mean value of the respective feature. * Epileptic seizure recognition (ESR) [23]: constitutes recorded EEG values over a certain period, aiming to distinguish between the presence and absence of epileptic seizure activity. The original target variable involves 5 different categories of brain activity patterns, four of them correspond to non-seizure activity and one category indicates seizure activity. In this study, the target variable is converted into a binary classification task to discern between seizure and non-seizure activities. This simplification leads to an imbalance in the dataset. * Parkinson's disease classification (PDC) [24]: incorporates instances representing various biomedical voice measurements from individuals, some of whom are afflicted with Parkinson's Disease. The dataset, designed for binary classification, categorizes instances into two classes of Parkinson's Disease and Healthy. Summarized information about the utilized datasets, including the number of instances, features, and classes, can be found in Table I. ### _Experiment configurations_ In this study, the performance of the proposed method has been compared with the original Boruta algorithm. For feature selection using the Boruta algorithm, Random Forest is utilized. Two principal parameters used in this study to tune the Random Forest are the number of estimators and the maximum depth. To obtain the optimum value for these two parameters, a Random Forest was initially trained on each dataset with all features, and the best value was determined via a greedy search. Table II presents the optimal parameter values for Boruta's estimators across each dataset. The Random Forest model obtained at this stage is employed in the Boruta algorithm for feature selection. In configuring the method proposed in this study, more parameters need to be decided upon. The first set of these parameters pertains to the learner model based on the artificial neural network. Given that in the proposed methodology, the learner is solely used for feature selection, and features are chosen based on the impact their perturbation has on reducing model accuracy, thus fine-tuning the learner at this stage is not critical. What is required here is to select a network architecture that can generate a minimum accuracy above 50 percent. Therefore, through trial and error, simple models capable of achieving an accuracy above 50 percent with all features are utilized. The chosen architecture can vary for each dataset based on the dataset's inherent characteristics. 
\begin{table} \begin{tabular}{l c c c} \hline Dataset & Instances & Features & Classes \\ \hline Recognition of Human Activities & 10299 & 561 & 6 \\ Failure at Scania Trucks & 76000 & 171 & 2 \\ Epileptic Seizure Recognition & 11500 & 179 & 2 \\ Parkinson’s Disease (PD) classification & 756 & 755 & 2 \\ \hline \end{tabular} \end{table} TABLE I: Summary of datasets

Table III depicts the architecture employed for each dataset. For all models, the number of epochs is set to 100. An observation that can be made from this table is that most models are very lightweight, which contributes to reducing the problem's complexity. The subsequent parameter, denoted as 'n', serves as the standard deviation multiplier during the perturbation of features used to assess the degree of accuracy reduction in the model. In this study, an 'n' value of 50 has been adopted.

## IV Discussion and Results

This section presents the numerical results and discussion. As mentioned in the previous sections, the evaluation of the model is based on the F1 score, which provides a more robust measure in scenarios involving imbalanced datasets. Each of the datasets has been randomly divided into training and testing sets at a ratio of 70% to 30%, with stratified sampling on the target variable. The proposed method and the original Boruta algorithm are each run on the training dataset, with a maximum of 100 iterations. The selected features are then used for the final evaluation. To this end, the training set used in feature selection is filtered down to the selected features, the model is retrained, and it is then evaluated on the test set. It is important to note that the test set is never exposed to the model at any stage of training. In the proposed method, an ANN - more specifically, a multi-layer perceptron (MLP) - is likewise employed for the evaluation of the selected features on the test set. In this stage, in contrast to the feature selection stage, it is necessary to fine-tune the neural network for evaluating the derived feature set. Table V shows the architecture of the tuned network for each dataset; here, the number of epochs is set to 1000. The evaluation results for the proposed method after fine-tuning the MLP are shown in Table IV. From an initial observation of the results, it can be inferred that the Noise-augmented Boruta consistently outperforms the standard Boruta in terms of F1 score across all datasets. Notably, this improvement is achieved with a significantly reduced number of selected features in most cases, indicating a more efficient feature selection by the Noise-augmented Boruta.

To gain a more robust understanding of performance variability - considering the inherent randomness in the MLP (e.g., weight initialization) and RF (e.g., random subsamples) - each model was run 100 times. In each run, the entire dataset, filtered to include only the selected features, was randomly split into training and testing subsets (a sketch of this protocol is given below). This procedure can be likened to K-fold cross-validation but with a higher number of repetitions. It allows the model to experience various potential distributions within the dataset, thus bolstering its robustness against unseen distributions. Furthermore, it captures the effects of variability originating from the inherent randomness of the algorithms. Table VI summarizes the performance results from the 100 runs, while Figure 2 illustrates the distribution of the evaluation metric (F1 score) for both the proposed method and Boruta across the four datasets.
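A minimal version of this repeated-split protocol might look as follows; the 70/30 stratified split, the 100 repetitions, and the F1 metric follow the description above, while the estimator settings and naming are our own illustration.

```python
# Sketch of the 100-run evaluation protocol: random stratified 70/30 splits of
# the dataset restricted to the selected features, scored with F1.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import f1_score

def repeated_evaluation(X, y, selected, n_runs=100, hidden=(128, 512, 128)):
    scores = []
    X_sel = X[:, selected]                     # keep only the selected features
    for run in range(n_runs):
        X_tr, X_te, y_tr, y_te = train_test_split(
            X_sel, y, test_size=0.30, stratify=y, random_state=run)
        model = MLPClassifier(hidden_layer_sizes=hidden, max_iter=1000)
        model.fit(X_tr, y_tr)
        scores.append(f1_score(y_te, model.predict(X_te), average="weighted"))
    return np.mean(scores), np.std(scores)     # mean and std over the runs
```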
The comparative analysis displayed in Table VI convincingly establishes the superiority of the Noise-augmented Boruta method over the conventional Boruta approach. Examining each dataset, it becomes apparent that the proposed method is more effective at eliminating redundant or non-essential features, consistently selecting a significantly smaller feature set. A smaller feature set leads to simpler, less complex models that offer more interpretable results and reduce computational demand. Most impressively, this winnowing process does not compromise model performance. In fact, the Noise-augmented Boruta method equals or surpasses the F1 score achieved by the traditional Boruta across all tested datasets. The improvement is clear, from an increase in the F1 score on the SB-RHAPT dataset from 98.0886% to 98.8012%, and a rise on the PDC dataset from 80.2250% to 81.1630%. Even in instances like the APSF and ESR datasets, where the F1 score sees only slight growth, the proposed method proves its robustness, maintaining competitive performance despite the substantial reduction in the number of features. It is worth mentioning that the Noise-augmented Boruta method yielded lower variance in the F1 scores compared to the traditional Boruta method.

\begin{table} \begin{tabular}{l l} \hline Dataset & Random Forest \\ \hline Smartphone-Based Recognition of Human Activities & (200, None) \\ Failure at Scania Trucks & (200, None) \\ Epileptic Seizure Recognition & (200, None) \\ Parkinson’s Disease (PD) classification & (200, 10) \\ \hline \end{tabular} Note: The sequence of numbers (\(i_{1}\),\(i_{2}\)) presents the number of estimators and the max depth, respectively. \end{table} TABLE II: Boruta estimator configuration

\begin{table} \begin{tabular}{l l} \hline Dataset & Shallow learner \\ \hline Smartphone-Based Recognition of Human Activities & (5) \\ Failure at Scania Trucks & (16) \\ Epileptic Seizure Recognition & (5,8,5) \\ Parkinson’s Disease (PD) classification & (5) \\ \hline \end{tabular} Note: Each number signifies the number of neurons in a layer. Where a sequence of numbers is presented, such as (\(i_{1}\),\(i_{2}\),\(i_{3}\)), these correspond to multiple hidden layers within the network. \end{table} TABLE III: Shallow learner configuration

TABLE IV: Comparison result of the proposed method with Boruta

\begin{table} \begin{tabular}{l l} \hline Dataset & MLP \\ \hline Smartphone-Based Recognition of Human Activities & (512, 512, 256) \\ Failure at Scania Trucks & (64, 256, 64) \\ Epileptic Seizure Recognition & (128, 512, 128) \\ Parkinson’s Disease (PD) classification & (1024, 1024, 512) \\ \hline \end{tabular} \end{table} TABLE V: MLP optimal architecture

\begin{table} \begin{tabular}{l c c c c} \hline Dataset & \multicolumn{2}{c}{Boruta} & \multicolumn{2}{c}{Noise-augmented Boruta} \\ \cline{2-5} & Sel. Feat. & F1 score (\%) & Sel. Feat. & F1 score (\%) \\ \hline SB-RHAPT & 479 & 98.0886\(\pm\)0.3068 & 104 & 98.8012\(\pm\)0.2209 \\ APSF & 55 & 87.2672\(\pm\)0.7972 & 22 & 87.6904\(\pm\)0.9294 \\ ESR & 178 & 95.0774\(\pm\)0.4941 & 138 & 95.8550\(\pm\)0.4358 \\ PDC & 78 & 80.2250\(\pm\)3.0855 & 29 & 81.1630\(\pm\)2.9937 \\ \hline \end{tabular} \end{table} TABLE VI: Comparison result of the proposed method with Boruta - 100 runs

Building on this comparative analysis, Table VII presents the results of a rigorous statistical analysis comparing the proposed method and the Boruta method; a sketch of the testing procedure is given below. This analysis is based on the outcomes of 100 runs for each method across four distinct datasets: SB-RHAPT, APSF, ESR, and PDC. The Shapiro-Wilk test was first applied to check for normality in the distribution of results. For three out of the four datasets--SB-RHAPT, APSF, and PDC--the p-values observed in the Shapiro-Wilk test for both methods exceeded the 0.05 threshold, indicating a reasonable assumption of normal distribution. Therefore, the two-sample t-test was employed for these datasets. However, for the ESR dataset, the proposed method's results deviated from a normal distribution, as evidenced by its p-value of 0.0002063. As a result, the Mann-Whitney U test was employed as an appropriate non-parametric alternative for this dataset.

TABLE VII: Statistical comparison of proposed method with Boruta
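For reference, the normality check and the subsequent two-sample tests described above can be reproduced along these lines (a sketch only; the per-run F1 score arrays and the significance level are placeholders):

```python
# Sketch of the statistical comparison: Shapiro-Wilk for normality, then a
# two-sample t-test when both samples look normal, Mann-Whitney U otherwise.
from scipy import stats

def compare_runs(f1_boruta, f1_noise_aug, alpha=0.05):
    normal_a = stats.shapiro(f1_boruta).pvalue > alpha
    normal_b = stats.shapiro(f1_noise_aug).pvalue > alpha
    if normal_a and normal_b:
        test_name, result = "t-test", stats.ttest_ind(f1_boruta, f1_noise_aug)
    else:
        test_name, result = "Mann-Whitney U", stats.mannwhitneyu(f1_boruta, f1_noise_aug)
    return test_name, result.pvalue
```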
Across all datasets, the p-values resulting from the comparative tests were well below the 0.05 level, reinforcing that the difference in performance between the two methods is statistically significant. Furthermore, as the proposed method consistently yielded higher F1 scores, it can be concluded that the proposed method outperforms the Boruta method on the considered datasets.

For a deeper understanding of the comparison between the two models, Figure 3 offers detailed insights into the models' confidence levels, as assessed by their prediction entropy. The prediction entropy for every instance \(x_{j}\) in the test set is calculated as [25]:

\[H(x_{j})=-\sum_{i=1}^{n}p(c_{i}|x_{j})\log_{2}p(c_{i}|x_{j}) \tag{1}\]

where \(H(x_{j})\) represents the entropy for the \(j^{th}\) instance and \(p(c_{i}|x_{j})\) denotes the probability that instance \(x_{j}\) belongs to class \(c_{i}\) (\(i=1,...,n\)). The prediction entropy is normalized by dividing by the maximum possible entropy, making the results range between 0 (indicating complete certainty in the prediction) and 1 (indicating complete uncertainty). A prediction with lower entropy means the model is more certain of its decision, while a higher entropy suggests the opposite. To elucidate the relationship between prediction confidence and accuracy, Figure 3 separates and visualizes the distribution of entropies for both correctly and incorrectly predicted samples. This segregation provides a valuable perspective: if, for instance, incorrect predictions predominantly have high entropy, it indicates that the model is generally unsure when it errs. On the other hand, if incorrect predictions have low entropy, it suggests that the model is confidently making those mistakes. Figure 3 provides evidence of the proposed method's superiority, showcasing its enhanced confidence across all four datasets compared to the Boruta algorithm.
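The normalized prediction entropy of Eq. (1) can be computed directly from a model's class probabilities; the snippet below is a minimal illustration of that calculation (variable names are ours).

```python
# Normalized prediction entropy of Eq. (1): 0 = fully confident, 1 = fully
# uncertain. `proba` is an (instances x classes) array of predicted class
# probabilities, e.g. the output of an sklearn model's predict_proba().
import numpy as np

def normalized_prediction_entropy(proba, eps=1e-12):
    proba = np.clip(proba, eps, 1.0)                    # avoid log2(0)
    entropy = -np.sum(proba * np.log2(proba), axis=1)   # H(x_j)
    return entropy / np.log2(proba.shape[1])            # divide by max entropy
```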
### _Ablation study_

As mentioned previously, \(n\) (the multiplier of \(\sigma\)) has been fixed at a value of 50 in this investigation. This particular selection is motivated by the fact that the normalized data often yields a small standard deviation; assigning a larger multiplier ensures sufficient perturbation of the features. Nonetheless, it raises a pertinent question about the influence of this coefficient on the efficacy of the proposed methodology. Table VIII and Figure 4 display the results of the proposed method with 'n' values set to 5, 20, and 50. During this analysis, the other parameters, including the shallow learner structure, the number of iterations, and the number of epochs, were kept constant. The results are obtained from 100 evaluations after feature selection, as in the previous section.

From the results of the ablation study, it can be inferred that increasing 'n', and therefore the perturbation, influences the feature selection process and subsequently the ANN's performance in different ways across the datasets. For instance, the 'Smartphone-Based Recognition of Human Activities' dataset demonstrates that a greater perturbation might lead to more stringent feature selection, resulting in fewer features being selected while maintaining similar performance levels. This could suggest that larger perturbations help to highlight only the most influential features, as minor ones might be 'washed out' due to the higher noise levels. In the 'Failure at Scania Trucks' and 'Epileptic Seizure Recognition' datasets, an increase in 'n' appears to reveal more features that contribute to the performance of the ANN, indicating that a greater degree of perturbation might be useful in uncovering hidden or complex relationships in the data. However, the results from the 'Parkinson's Disease (PD) classification' dataset provide a nuanced view, suggesting that there might not be a linear relationship between the magnitude of perturbation and the performance of the ANN. Here, the number of selected features and the F1 score do not demonstrate a consistent trend with increasing perturbation, highlighting the intricacies of the ANN's response to perturbations in this context. Thus, while the perturbation multiplier 'n' clearly impacts the ANN's behavior, the nature and extent of this impact can vary greatly based on the dataset's inherent properties. This underscores the importance of fine-tuning 'n' based on specific dataset characteristics to optimize the ANN's performance.

Overall, the proposed method has proven capable of selecting crucial features even with variations in \(n\). The comparison with the number of features selected by the Boruta algorithm also demonstrates that it continues to select fewer features. In other words, the proposed method consistently outperforms the Boruta algorithm with respect to the quantity of selected features. This aligns with the objective of feature selection, which is to select the minimum possible number of features while still maintaining adequate performance in modeling the response variable.

Fig. 2: Box-plots of F1 score for the proposed method and Boruta.

Fig. 3: Histogram graphs of the predictive entropy results. B and NB indicate Boruta and Noise-augmented Boruta, respectively.

Fig. 4: Ablation study results for n = 5, 20, 50.

\begin{table} \begin{tabular}{l c c c c c c c} \hline \hline Dataset & Number of all features & \multicolumn{2}{c}{n = 5} & \multicolumn{2}{c}{n = 20} & \multicolumn{2}{c}{n = 50} \\ \cline{3-8} & & Sel. Feat. & F1 score (\%) & Sel. Feat. & F1 score (\%) & Sel. Feat. & F1 score (\%) \\ \hline SB-RHAPT & 561 & 119 & 99.0087\(\pm\)0.2428 & 117 & 98.8782\(\pm\)0.2288 & 104 & 98.8012\(\pm\)0.2209 \\ APSF & 171 & 8 & 79.8743\(\pm\)2.0264 & 23 & 86.7719\(\pm\)1.2292 & 22 & 87.6904\(\pm\)0.9294 \\ ESR & 178 & 31 & 95.5689\(\pm\)0.4350 & 112 & 95.8388\(\pm\)0.4324 & 138 & 95.8550\(\pm\)0.4358 \\ PDC & 755 & 36 & 81.7975\(\pm\)2.6048 & 25 & 79.3420\(\pm\)2.6942 & 29 & 81.1630\(\pm\)2.9937 \\ \hline \hline \end{tabular} \end{table} TABLE VIII: Ablation analysis results on \(n\)

## V Conclusion

The innovation of this method lies in the intentional modification of the shadow features' characteristics, differing from traditional approaches where shadow features retain the same statistical properties as their original counterparts. Accordingly, this study proposed a new variant of Boruta, called Noise-augmented Boruta. In light of the comprehensive evaluation conducted in this study, it can be concluded that the proposed noise-augmented Boruta methodology offers substantial improvements over the classic Boruta algorithm, with the proposed model consistently selecting fewer but more essential features across multiple datasets. This performance adheres to the fundamental principle of feature selection: reducing model complexity while preserving predictive power. Moreover, the conducted ablation study provides valuable insights into the role and impact of the standard deviation multiplier 'n' within the proposed methodology. The multiplier, by influencing the perturbation, exerts substantial control over the feature selection process and the subsequent performance of the Artificial Neural Network. Importantly, this relationship is not linear, and the specific characteristics of the dataset strongly influence the optimal value for 'n'.

In conclusion, the proposed noise-augmented Boruta methodology presents a promising advance in the domain of feature selection. Its superior performance, coupled with the insightful findings from the ablation study, demonstrates its potential for broad applicability across various machine-learning tasks. However, careful tuning of its perturbation parameter 'n' is critical to ensure optimal results, emphasizing the need for a context-specific approach when applying this technique. A possible direction to extend this work is incorporating uncertainty metrics. This would pivot the focus towards not just discerning features that decrease model performance when perturbed, but also understanding the model's certainty regarding such perturbations.
2309.07861
CiwaGAN: Articulatory information exchange
Humans encode information into sounds by controlling articulators and decode information from sounds using the auditory apparatus. This paper introduces CiwaGAN, a model of human spoken language acquisition that combines unsupervised articulatory modeling with an unsupervised model of information exchange through the auditory modality. While prior research includes unsupervised articulatory modeling and information exchange separately, our model is the first to combine the two components. The paper also proposes an improved articulatory model with more interpretable internal representations. The proposed CiwaGAN model is the most realistic approximation of human spoken language acquisition using deep learning. As such, it is useful for cognitively plausible simulations of the human speech act.
Gašper Beguš, Thomas Lu, Alan Zhou, Peter Wu, Gopala K. Anumanchipalli
2023-09-14T17:10:39Z
http://arxiv.org/abs/2309.07861v1
# CiwaGAN: Articulatory Information Exchange

###### Abstract

Humans encode information into sounds by controlling articulators and decode information from sounds using the auditory apparatus. This paper introduces CiwaGAN, a model of human spoken language acquisition that combines unsupervised articulatory modeling with an unsupervised model of information exchange through the auditory modality. While prior research includes unsupervised articulatory modeling and information exchange separately, our model is the first to combine the two components. The paper also proposes an improved articulatory model with more interpretable internal representations. The proposed CiwaGAN model is the most realistic approximation of human spoken language acquisition using deep learning. As such, it is useful for cognitively plausible simulations of the human speech act.

Gašper Beguš\({}^{1}\), Thomas Lu\({}^{1}\), Alan Zhou\({}^{2}\), Peter Wu\({}^{1\dagger}\), Gopala K. Anumanchipalli\({}^{1\dagger}\) \({}^{1}\)University of California, Berkeley, \({}^{2}\)Johns Hopkins University

generative AI, cognitive modeling, electromagnetic articulography, articulatory phonetics, information theory

## 1 Introduction

Humans use sounds to exchange information. Sounds of speech are produced by moving articulators. During language acquisition, humans learn to control articulators such that the resulting sound approximates the input language that they hear. At the same time, they need to learn to encode information into sounds and decode it from them. The learning process is thus highly complex and fully unsupervised: with the exception of visual stimuli from the lips and tongue tip, language-acquiring children do not have direct access to muscle activity or articulatory movements. These two aspects of the human speech process--information exchange and articulatory learning--have been modeled separately thus far. It has been shown that unsupervised deep learning models can learn to generate Electromagnetic Articulography (EMA) channels in a Generative Adversarial Network (GAN) setting [1]. On the other hand, a body of work shows that GAN-based models can learn to identify meaningful properties of human language in a fully unsupervised manner and use them to encode and decode information [2, 3, 4, 5, 6]. This paper combines the two proposals in Beguš [3] and Beguš et al. [1] and tests whether a model that includes both information exchange and articulatory learning can learn linguistically meaningful properties in an interpretable way. The proposed model is the closest approximation of human language learning using deep learning approaches known to the authors.

## 2 The Model

We propose a new unsupervised model of spoken language that combines the Articulation GAN architecture [1] with the ciwGAN architecture [3] into the _CiwaGAN_ proposal, which stands for Categorical InfoWave Articulation GAN. Unlike in the Articulatory GAN proposal [1], the Articulatory Generator in CiwaGAN takes latent code variables \(c\) in addition to uniformly distributed latent variables \(z\), from which it generates 12 EMA channels and a channel for voicing. The \(c\) variable is a one-hot vector. A pre-trained physical model of sound production (ema2wav; [8]) then turns the 12 EMA channels and the channel for voicing into waveforms. The generated sounds are sent to the Discriminator and the Q-network. The Discriminator forces the Articulatory Generator to learn articulatory representations such that its outputs mimic real speech data.
The Q-network, on the other hand, takes the generated audio and needs to decode the unique code from the Generator's latent space. The Discriminator thus mimics imitation and the Q-network mimics information exchange in human spoken language communication. The structure of the Articulatory GAN and ema2wav is taken from [1], but we use an improved unpublished ema2wav model [8] with increased audio quality. We also introduce new hyperparameter choices that improve training: a decreased stride and filter size, which reduce noise and jitter in the EMA channels (Figure 4). In addition to the original ArticulationGAN [1] objective:

\[\max_{D}\min_{G}V(D,G)=\mathbb{E}_{x\sim P_{x}}[D(x)]-\mathbb{E}_{z\sim P_{z},\,c\sim P_{c}}[D(\mathcal{A}(G(z,c)))]\]

we introduce an additional "Q-network" following [9] and [3], which takes as input the waveform output of the ema2wav model and attempts to recover the latent code \(c\) that was passed into the Articulatory Generator. Along with the Articulatory Generator, this network is optimized against the following "Q-loss" in an attempt to approximate the posterior distribution \(Q(c|x)\):

\[\max_{G,Q}V_{Q}(G,Q)=\mathbb{E}_{x\sim\mathcal{A}(G(z,c))}\left[\mathbb{E}_{c^{\prime}\sim P(c|x)}\log Q(c^{\prime}|x)\right]\]

Putting this together, we get a new zero-sum game in which the Articulatory Generator is not only optimized to generate realistic gestures, but to do so in a way that its latent code \(c\) is recoverable from the final auditory output:

\[\min_{G,Q}\max_{D}V(D,G)-V_{Q}(G,Q)\]

Prior work in articulatory speech synthesis primarily focuses on supervised methods with the objective of generating spoken language from articulatory data [10, 11, 12, 13, 14, 8]. Siriwardena et al. and Shamma et al. [15, 16, 17] propose an autoencoder architecture for unsupervised articulatory learning that does not model information exchange. The advantage of the GAN architecture [18, 19, 20, 7] over autoencoders is that the Generator is trained by imitation and imagination rather than by reproducing input data. From a cognitive modeling perspective, GANs are a more realistic approximation of human speech learning, and our model captures information exchange and articulatory learning simultaneously, which is not the case in previous models.

While GANs bring several advantages in modeling human spoken language learning, the new proposal also comes with several implementational challenges. The training objective is among the most challenging in the unsupervised speech processing paradigm. Learning in the Articulatory Generator is fully unsupervised: the Generator needs to learn the 12 articulatory EMA channels plus voicing based on feedback from the Discriminator and the Q-network, which receive only audio inputs and no EMA data. In other words, the Generator network needs to learn to generate a completely different modality from what the Discriminator and Q-network receive for their evaluations. At the same time, the Generator needs to learn to encode information into the generated data. The model never gets any kind of explicit information that would force it to learn lexical items. In principle, the Generator could encode any property of speech into the latent codes. Yet, given that pairing unique codes with lexical items is highly informative, the Generator predominantly learns to use these lexical items to convey information.
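To make the interaction of the three networks concrete, one training step implementing the combined objective can be sketched roughly as follows. This is our own schematic in PyTorch, not the released implementation: the module interfaces (`gen`, `ema2wav`, `disc`, `q_net`), the latent sizes, and the cross-entropy form of the Q-loss are illustrative assumptions.

```python
# Schematic single update for the CiwaGAN objective: the Discriminator scores
# real vs. generated audio (WGAN-style), while the Generator and Q-network are
# trained jointly so that the one-hot code c is recoverable from the audio.
import torch
import torch.nn.functional as F

def training_step(gen, ema2wav, disc, q_net, real_audio,
                  opt_g, opt_d, opt_q, n_codes=9, z_dim=91):
    batch = real_audio.size(0)
    code_idx = torch.randint(n_codes, (batch,))
    c = F.one_hot(code_idx, n_codes).float()
    z = torch.rand(batch, z_dim)                  # uniform latent variables

    ema = gen(torch.cat([c, z], dim=1))           # 12 EMA channels + voicing
    fake_audio = ema2wav(ema)                     # pre-trained physical model

    # Discriminator update: distinguish real from generated waveforms.
    d_loss = disc(fake_audio.detach()).mean() - disc(real_audio).mean()
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator + Q-network update: fool D and make c recoverable from audio.
    g_adv = -disc(fake_audio).mean()
    q_loss = F.cross_entropy(q_net(fake_audio), code_idx)
    gq_loss = g_adv + q_loss
    opt_g.zero_grad(); opt_q.zero_grad()
    gq_loss.backward()
    opt_g.step(); opt_q.step()
    return d_loss.item(), gq_loss.item()
```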
Additionally, the ema2wav model is trained on a single speaker of British English, while the training data contains approximately 600 speakers of Standardized American English. Our model offers an advantage from the cognitive modeling perspective: humans learn to utilize their individual articulators based on auditory feedback from various individuals. However, this setting significantly increases the complexity of the training objectives. Finally, the learning is fully unsupervised in that the Generator and the Q-network never directly access the actual training data.

The improved ema2wav model (the "Physical model" in Figure 1) is trained on the MNGU0 database [21], which consists of EMA recordings and corresponding waveforms of a single speaker of British English. This model is an updated version of Wu et al. [8] that initializes weights with those of a neural spectrum-to-waveform vocoder [22]. This form of transfer learning improves the fidelity of the trained ema2wav model. The Discriminator is trained on nine TIMIT words: _water, like, carry, greasy, ask, year, suit, dark, wash_, and between 600 and 700 distinct tokens are used for each word. These words were chosen because they were well-attested content words in TIMIT. The Generator is trained with a one-hot vector of length 9, corresponding to the 9 lexical items. It has been shown elsewhere that lexical learning happens even if the number of classes and the number of words do not match [3]. We train the model for 309,000 steps. For each Generator update, the Discriminator and the Q-network are updated five times. The entire code for the implementation of CiwaGAN is available at [https://github.com/gbegus/articulationGAN](https://github.com/gbegus/articulationGAN).

\begin{table} \begin{tabular}{c c|c c|c c} \hline \hline \multicolumn{2}{c|}{Articulatory Generator} & \multicolumn{2}{c|}{Discriminator} & \multicolumn{2}{c}{Q-Network} \\ \hline Layer & Dimension & Layer & Dimension & Layer & Dimension \\ \hline c, z & 100 x 1 & input & 20480 x 1 & input & 20480 x 1 \\ fc + reshape & 16 x 1024 & conv0 & 5120 x 64 & conv0 & 5120 x 64 \\ upconv & 32 x 512 & conv1 & 1280 x 128 & conv1 & 1280 x 128 \\ upconv & 64 x 512 & conv2 & 320 x 256 & conv2 & 320 x 256 \\ upconv & 128 x 256 & conv3 & 80 x 512 & conv3 & 80 x 512 \\ upconv & 128 x 256 & conv4 & 20 x 1024 & conv4 & 20 x 1024 \\ upconv & 256 x 13 & flatten = logit & 1 x 1 & flatten = logit & 9 x 1 \\ \hline \hline \end{tabular} \end{table} Table 1: The structure of the Articulatory Generator (based on [7, 3, 1]).

## 3 Results

### Evaluation

The performance of CiwaGAN is sufficiently high that the outputs can be automatically evaluated (as opposed to being evaluated by a trained phonetician [1]). We evaluate the outputs with a fine-tuned Whisper ASR model [23]. The pretrained Whisper-small model [23] is fine-tuned using a combination of TIMIT lexical items and CiwaGAN outputs manually annotated by the authors. The fine-tuning dataset consists of approximately 100 tokens from the TIMIT dataset for each of the 9 words used in training, combined with 800 samples annotated by the authors, for a total of approximately 1,700 tokens. The model is fine-tuned for 250 steps and achieves a word error rate of 22.5%.
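As an illustration of this evaluation setup, transcribing generated outputs with a fine-tuned Whisper checkpoint and scoring them against the intended words might look like the following; the checkpoint path and the use of the `jiwer` package for word error rate are our own placeholders, not part of the paper's pipeline.

```python
# Sketch: transcribe generated audio with a fine-tuned Whisper model and
# compute word error rate against the intended lexical items.
from transformers import pipeline
import jiwer

asr = pipeline("automatic-speech-recognition",
               model="path/to/finetuned-whisper-small")   # hypothetical path

def evaluate_outputs(wav_paths, target_words):
    hypotheses = [asr(p)["text"].strip().lower() for p in wav_paths]
    return hypotheses, jiwer.wer(list(target_words), hypotheses)
```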
The entirety of the fine-tuning dataset and all evaluations made by Whisper, as well as the model's checkpoints, are available at [http://doi.org/10.17605/OSF.IO/JBWYH](http://doi.org/10.17605/OSF.IO/JBWYH).

### Evidence of information exchange

To test whether the model learns to encode linguistically meaningful information using articulatory representations, we utilize the technique for analyzing underlying values of individual latent variables [2, 24] by setting individual variables to extreme values outside the training range distribution. It has been shown that setting the latent codes to extreme values (e.g. to [20,0,0]) when the training only contained latent codes with values of 0 or 1 (e.g. [1,0,0]) results in a near-categorical output of a single lexical item in the ciwGAN architecture [3]. In other words, this technique reveals the underlying learned representation for each latent code when individual latent variables are set to extreme values. By applying the extreme value technique to the Articulatory Generator, we are testing the technique on a new frontier. By setting individual latent variables to extreme values, we can uncover the underlying representation of each unit and obtain near-categorical performance on lexical learning. Previous tests of this method have been limited to unimodal data. In this study, we explore the underlying values of individual variables and their near-categorical performance when the generated data differs in modality from the input received by the Q-network: articulatory data vs. auditory input. Extreme values in CiwaGAN thus do not reveal the underlying audio words, but the underlying articulatory gestures that result in the words which the Q-network learns to classify.

To quantify lexical learning, we generate 100 outputs for each one-hot code (but with values set at 20 instead of 1) while keeping the latent space constant across all 9 one-hot levels. This results in 900 outputs transcribed by fine-tuned Whisper. This data is then fit to a multinomial logistic regression using the _nnet_ package in R [25], with the nine transcribed words (plus a category for other words, "else") as the dependent variable and the unique code as the independent variable. The model with code as a predictor fits the data significantly better (\(\text{df}=72,\text{AIC}=1944.6\)) than the model without this predictor (\(\text{df}=8,\text{AIC}=3518.8\)). Figure 3 illustrates the extent of lexical learning in the CiwaGAN architecture. Most words display a single pronounced peak corresponding to a specific code when its value is set to 20 instead of 1. For example, with the code set to [0,0,0,0,0,0,0,0,20], the network produces _suit_ 94 times out of 100 samples. Similarly, _greasy_ is produced 72 times (out of 100) with the code [0,0,0,0,20,0,0,0], and _like_ is transcribed 59 times for the code [0,0,0,0,0,0,0,0,0]. Other prominently encoded words include _water_, _year_, _wash_, and _carry_.

Figure 2: Ten cropped outputs independently sampled for four latent codes \(c\) with individual variables set to extreme values (20 or 15). Under each waveform and spectrogram (0–5000 Hz) is a transcription by fine-tuned Whisper.

Figure 3: Estimates of the multinomial logistic regression model with Whisper transcriptions of the 900 generated words (100 per category; grouped into the 9 words and an "else" condition for all other words) as the dependent variable and the code as the predictor when the value of each code is set to 20.
In contrast, words like _ask_ and _dark_ show no distinct peaks. Figure 2 demonstrates that the audio quality of words generated using the extreme value technique remains high even when individual code values significantly exceed the training range.

### Evidence of articulatory learning

In CiwaGAN, the Articulatory Generator learns specific articulatory gestures that result in individual words. By setting unique code values to extremes, we can induce the output of specific words. Consequently, CiwaGAN provides a framework for analyzing both lexical and articulatory learning. To quantitatively assess articulatory learning, we perform a direct comparison between the EMA recordings generated when the code is set to [0,0,0,0,0,0,0,15], which forces the output _suit_ (in 10/10 cases), and the actual EMA recording of the word _suit_ from the MNGU0 corpus. The generated EMA is LOESS-smoothed with a span of 0.2, yielding highly interpretable articulatory gestures (Figure 4). With the new hyperparameters that better model EMA data (lower filter size and stride), the EMA channels exhibit relatively minimal noise and jitter. Substantial jitter is evident during phases where specific articulators are not crucial to the articulation of the word.

To quantify the similarity between generated and real EMA, we perform Dynamic Time Warping (DTW) and conduct correlation tests on the DTW-aligned time series; a minimal sketch of this comparison is given below. The comparison between real and generated EMA (Table 2 and Figure 5) shows that correlations are highest for articulators that are more relevant to the articulation of _suit_, such as the lips and tongue tip. In contrast, the tongue dorsum has a weaker correlation between the real and generated EMA. For the articulation of _suit_, the positions of the lips (especially on the x-axis) and the tongue tip are crucial. The correlation for the tongue tip (on the y-axis) is \(r=0.94\). For the lower lip on the y-axis, it is \(r=0.79\), while the upper lip has a correlation of \(r=0.93\) on the x-axis and \(r=0.86\) on the y-axis. Qualitatively, Figure 5 illustrates that the articulatory gesture of the upper lip in the generated data closely mirrors that of the real EMA data, exhibiting a nearly identical looped pathway.
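The DTW-plus-correlation comparison can be approximated as follows. This is a sketch only: the paper performs the analysis in R, whereas here we assume the Python `fastdtw` and `scipy` packages and pass the two EMA trajectories in as one-dimensional arrays.

```python
# Sketch: align a generated EMA channel to the corresponding real channel with
# DTW, then correlate the aligned series, as reported in Table 2.
import numpy as np
from fastdtw import fastdtw
from scipy.stats import pearsonr

def dtw_distance_and_correlation(generated, real):
    distance, path = fastdtw(generated, real)            # global DTW alignment
    gen_idx, real_idx = zip(*path)
    aligned_gen = np.asarray(generated)[list(gen_idx)]
    aligned_real = np.asarray(real)[list(real_idx)]
    r, _ = pearsonr(aligned_gen, aligned_real)            # Pearson's r on aligned series
    return distance, r
```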
## 4 Conclusion

This paper proposes a new model that features several properties of human language: unsupervised learning, information exchange, articulatory learning, and the production-perception loop. We show that information exchange can occur even when the production model (the Articulatory Generator) needs to learn to generate a modality distinct from the perception network's modality: articulatory movements vs. audio inputs. The model allows simulations of human speech activity that incorporate a realistic approach to information exchange alongside a detailed analysis of articulatory gesture learning. The proposed model thus represents one of the closest modeling approximations of human language acquisition.

\begin{table} \begin{tabular}{l r r r r} \hline \hline Place & x,DTW & x,Cor test & y,DTW & y,Cor test \\ \hline lower lip & 367.2 & 0.79 & 154.5 & 0.18 \\ tongue tip & 168.4 & -0.43 & 149.3 & 0.94 \\ lower incisor & 170.1 & -0.19 & 119.7 & 0.79 \\ upper lip & 166.8 & 0.93 & 133.1 & 0.86 \\ tongue body & 212.4 & -0.65 & 153.1 & -0.02 \\ tongue dorsum & 182.7 & 0.84 & 217.9 & 0.04 \\ \hline \hline \end{tabular} \end{table} Table 2: The minimum global DTW distance between smoothed generated EMA and real EMA using the _dtw_ package in R [26] and the corresponding Pearson’s product-moment correlations between the aligned time series for each of the 12 channels.

Figure 4: The 12 EMA channels with a channel for voicing generated by the Articulatory Generator for the sixth output (in Figure 2) of the code [0,0,0,0,0,0,0,0,15], transcribed as _suit_. The blue line represents LOESS smoothing with a span of 0.2 used for calculating correlations in Table 2.

Figure 5: A comparison between generated (green triangles) and real EMA channels (blue circles) for the word _suit_. The generated samples are from the sixth example of _suit_ in Figure 2, when the code is set to [0,0,0,0,0,0,0,15], which generates _suit_ in 10 out of 10 trials. The real EMA data is taken from the MNGU0 database [21] and multiplied by a factor of 5 to match the magnitude of the generated EMA.
2309.15732
Deep Learning-based Analysis of Basins of Attraction
This research addresses the challenge of characterizing the complexity and unpredictability of basins within various dynamical systems. The main focus is on demonstrating the efficiency of convolutional neural networks (CNNs) in this field. Conventional methods become computationally demanding when analyzing multiple basins of attraction across different parameters of dynamical systems. Our research presents an innovative approach that employs CNN architectures for this purpose, showcasing their superior performance in comparison to conventional methods. We conduct a comparative analysis of various CNN models, highlighting the effectiveness of our proposed characterization method while acknowledging the validity of prior approaches. The findings not only showcase the potential of CNNs but also emphasize their significance in advancing the exploration of diverse behaviors within dynamical systems.
David Valle, Alexandre Wagemakers, Miguel A. F. Sanjuán
2023-09-27T15:41:12Z
http://arxiv.org/abs/2309.15732v2
# Deep Learning-based Analysis of Basins of Attraction

###### Abstract

This study showcases the effectiveness of convolutional neural networks (CNNs) in characterizing the complexity and unpredictability of basins of attraction for diverse dynamical systems. This novel method is optimal for exploring different parameters of dynamical systems, since the conventional methods are computationally expensive for characterizing multiple basins of attraction. Additionally, our research includes a comparison of different CNN architectures for this task, showing the superiority of our proposed characterization method over the conventional methods, even with obsolete architectures.

pacs: \(05.45.-a\), \(05.45.Df\), \(07.05.Mh\)

**The application of machine learning algorithms to complex dynamical systems has opened up new possibilities for predicting the behavior of these systems under different initial conditions. One important aspect of studying dynamical systems is understanding their basins of attraction, which are the regions in the phase space where initial conditions lead towards a particular attractor. Since we are interested in the asymptotic behavior of trajectories, the basins of attraction provide a visual tool that gives a clear image of which sets of initial conditions bring their associated trajectories to specific attractors in phase space. In chaotic systems, the boundary sets between these basins can be fractal, which can make the long-term prediction of initial conditions challenging. Basins of attraction can be characterized using various metrics, such as the fractal dimension, the basin entropy, the boundary basin entropy, and the Wada property. These measurements provide a quantitative understanding of the behavior of dynamical systems. Moreover, their prediction involves complex nonlinear operations that are well adapted for machine learning algorithms. Additionally, convolutional neural networks are particularly suitable for this task due to the representation of basins as images, where each pixel corresponds to an initial condition and each color represents a different attractor. The use of machine learning algorithms in this context constitutes an important step towards predicting the sensitivity of the system to initial conditions, and improving our understanding of the behavior of complex dynamical systems.**

## I Introduction

Complex dynamical systems are typically modeled through differential or difference equations and therefore follow deterministic rules. However, even though they are deterministic, this does not necessarily mean that these systems are easy to predict. As is well known, in the case of chaotic dynamics, a small variation in the initial conditions may result in trajectories going to a different attractor after a sufficiently long run. In particular, in an experimental scenario a slight variation in the initial conditions can completely alter the long-term evolution when the system happens to be chaotic. This inherent property of chaos is closely related to fractal geometry, so it is not surprising that both disciplines are so closely connected [19]. Fractal geometry and basins of attraction are related concepts in the study of dynamical systems. Fractal geometry is a mathematical framework for describing objects that display self-similar patterns at different scales. In the context of dynamical systems, fractal sets such as the Mandelbrot set are often used to illustrate the behavior of iterated functions [19].
Basins of attraction refer to the regions in the phase space of a dynamical system where initial conditions lead towards a particular attractor, and boundary sets refer to the separation between the different basins [2; 16]. The relationship between fractal geometry and basins of attraction lies in the fact that the boundary sets can be fractal, providing a manifestation of the complex and intricate structure of the phase space. In some multistable dynamical systems, the basin boundaries display complicated fractal structures at every scale, hindering the long-term prediction of initial conditions starting in the vicinity of such boundaries. The analysis of basins of attraction with different metrics brings practical and theoretical information on the dynamical system. The asymptotic behavior of the system is embodied into a single structure that matches an initial condition to an attractor. By tracking the properties of this structure as a parameter of the system vary, we gain insight into how the system evolves with that parameter modification. It enhances the understanding of the predictability and stability of the system and also eases the identification of the bifurcation points. Obviously, the applications of these ideas can be useful for the analysis of different problems in science and technology that are modeled using dynamical systems. The unpredictability of a basin can be characterized using various metrics such as the fractal dimension, the basin en
2309.13343
Two vs. Four-Channel Sound Event Localization and Detection
Sound event localization and detection (SELD) systems estimate both the direction-of-arrival (DOA) and class of sound sources over time. In the DCASE 2022 SELD Challenge (Task 3), models are designed to operate in a 4-channel setting. While beneficial to further the development of SELD systems using a multichannel recording setup such as first-order Ambisonics (FOA), most consumer electronics devices rarely are able to record using more than two channels. For this reason, in this work we investigate the performance of the DCASE 2022 SELD baseline model using three audio input representations: FOA, binaural, and stereo. We perform a novel comparative analysis illustrating the effect of these audio input representations on SELD performance. Crucially, we show that binaural and stereo (i.e. 2-channel) audio-based SELD models are still able to localize and detect sound sources laterally quite well, despite overall performance degrading as less audio information is provided. Further, we segment our analysis by scenes containing varying degrees of sound source polyphony to better understand the effect of audio input representation on localization and detection performance as scene conditions become increasingly complex.
Julia Wilkins, Magdalena Fuentes, Luca Bondi, Shabnam Ghaffarzadegan, Ali Abavisani, Juan Pablo Bello
2023-09-23T11:32:53Z
http://arxiv.org/abs/2309.13343v1
# Two vs. Four-Channel Sound Event Localization and Detection ###### Abstract Sound event localization and detection (SELD) systems estimate both the direction-of-arrival (DOA) and class of sound sources over time. In the DCASE 2022 SELD Challenge (Task 3), models are designed to operate in a 4-channel setting. While beneficial to further the development of SELD systems using a multichannel recording setup such as first-order Ambisonics (FOA), most consumer electronics devices rarely are able to record using more than two channels. For this reason, in this work we investigate the performance of the DCASE 2022 SELD baseline model using three audio input representations: FOA, binaural, and stereo. We perform a novel comparative analysis illustrating the effect of these audio input representations on SELD performance. Crucially, we show that binaural and stereo (i.e. 2-channel) audio-based SELD models are still able to localize and detect sound sources _laterally_ quite well, despite overall performance degrading as less audio information is provided. Further, we segment our analysis by scenes containing varying degrees of sound source polyphony to better understand the effect of audio input representation on localization and detection performance as scene conditions become increasingly complex. Julia Wilkins\({}^{1}\), Magdalena Fuentes\({}^{1}\), Luca Bondi\({}^{2}\), Shabnam Ghaffarzadegan\({}^{2}\), Ali Abavisani\({}^{2}\), Juan Pablo Bello \({}^{1}\)\({}^{1}\) New York University, New York, NY, USA, \({}^{2}\) Bosch Research, Pittsburgh, PA, USA [email protected] [https://github.com/juliawilkins/SELD-2v4-DCASE23/](https://github.com/juliawilkins/SELD-2v4-DCASE23/) sound event localization and detection, sound source localization, spatial audio, explainability ## 1 Introduction Sound Event Localization and Detection (SELD) is the process of estimating the direction-of-arrival (DOA) and class of sound events over time, given an input audio signal. SELD systems can translate well to a variety of real-world applications, including navigation for autonomous systems and assistive robotic devices. SELD methods are rooted in traditional signal processing techniques for multichannel audio processing, such as Steered Response Power [1] and acoustic intensity vectors [2]. For human-inspired audio recordings (e.g. binaural recordings), interaural time difference (ITD) and interaural level difference (ILD) are commonly used to characterize the direction of arrival of sounds [3]. However, these cues alone have shown limitations in terms of localization accuracy in real-world scenes that are particularly noisy, reverberant, or polyphonic [4, 5, 6]. Deep learning approaches were recently popularized to address these challenges in the context of SELD tasks [7, 8, 9, 10, 11]; most systems still utilize signal processing-based features like generalized cross correlation (GCC) and Mel spectrograms but benefit from automatic feature learning to improve robustness in difficult scene conditions [7, 11, 12, 13]. For example, in [14], authors use a CRNN architecture with magnitude and phase spectrograms from multichannel audio to show accurate DOA estimation and multiple sound source detection in reverberant conditions. In the DCASE 2022 SELD challenge (Task 3), models were evaluated using real multichannel sound recordings. Participants had access to real recordings for development and could also use additional synthetic or real data for training. 
The challenge operates in a multichannel setting, utilizing two formats of 4-channel recordings: first-order Ambisonics (FOA) and a tetrahedral microphone array. We are interested in exploring the capabilities of current SELD systems using more commonly found 2-channel microphone setups, namely binaural and stereo, as typical consumer electronics devices lack such complex 4-channel configurations. There is little prior research quantifying the effect of using various audio input representations (i.e. 2 vs. 4-channel audio) for SELD tasks in deep learning-based systems. In the psychoacoustics community, this effect is well-studied; it is known that there is a general loss in spatial understanding between 4-channel audio configurations (e.g. Ambisonics) vs. 2-channel configurations (e.g. binaural or stereo) [15, 16]. Humans can localize lateral sound sources well in binaural and stereo settings, but front-back confusion may increase without sufficient spatial information [17, 18, 3]. Further, perceiving the elevation of sound sources when listening to stereo audio in particular has been shown to be very difficult, largely due to the lack of interaural cues present in this recording configuration, unlike that of a binaural setup [16]. However, these phenomena are underexplored in the context of deep learning-based systems for SELD. In [19], the authors compared sound event detection performance using synthetic FOA, binaural, and monaural audio data in a CRNN-based system. Our approach differs significantly in that we provide a quantitative analysis of localization _and_ detection performance, we use an FOA dataset of real recordings in addition to synthetic data and decode these recordings to binaural, and lastly we include the stereo audio configuration as a point of comparison, as this is common in consumer electronics devices today. In this work we present a novel comparative analysis of the DCASE 2022 SELD baseline model across FOA, binaural, and stereo audio input representations. To the best of our knowledge, this is the first work quantifying the effect of these audio configurations on both localization and detection performance in a deep learning-based SELD system. We show that lateral sound source localization remains fairly accurate in the 2-channel settings despite an overall degradation in SELD performance, and provide an analysis of performance in scenes of varying levels of polyphonic sound source complexity.

## 2 Problem Formulation

In this manuscript, we examine the problem of Sound Event Localization and Detection (SELD) under different audio input representations: first-order Ambisonics (FOA), binaural, and stereo recordings. In this context, _detection_ refers to determining the number of active sound sources per class over time, while _localization_ aims at identifying the azimuth and elevation angle for each of the active sources over time. While Ambisonics recordings provide state-of-the-art performance in SELD [20], in practical applications we hypothesize that binaural and stereo recordings are more accessible. We rely on the most popular framework used by participants in the DCASE 2022 Challenge Task 3. A multichannel audio recording is fed as input to a Convolutional Neural Network (CNN), whose output is a 4-dimensional matrix arranged according to the Multi-Activity Coupled Cartesian DOA (ACCDOA) format [21].
For a given class, time instant, and sound source index, the model produces a three-dimensional vector \((x,y,z)\) whose orientation represents the direction of arrival of the sound, and whose intensity is directly proportional to the likelihood of a sound of that class being present at that time.

**First-order Ambisonics (FOA)**: FOA is a 4-channel, 3D audio recording format. In FOA, each channel corresponds to a spherical harmonic component representing a change in sound pressure in a specific direction [22]. The channels _W, Y, Z, X_ map to the omni-directional, left-right, vertical, and front-back directions of sound pressure change, respectively.

**Binaural**: The binaural recording technique aims to capture 3D audio in just two channels, ideally simulating the experience of a human perceiving auditory cues. Binaural audio is typically recorded using two microphones placed in the ears of a dummy head (e.g. Neumann KU100), or synthesized using the head-related transfer functions (HRTFs) of such a dummy head [23]. Binaural recordings deliver immersive spatial sound containing amplitude, time, and timbral differences between the two channels, vs. traditional stereo recordings where only amplitude and time differences are available.

**Stereo**: In stereo recordings, two microphones are used to capture the left and right audio channels independently. This differs from binaural recordings; in the binaural configuration the goal is to simulate a human's listening experience. Critically, in a stereo setup, elevation differentiation cannot be perceived; binaural recordings contain the filtering effect of the head, ear pinna, and torso, and this is not present in a stereo recording configuration [16].

## 3 Experimental Setup

### Datasets

Following the setup of the DCASE 2022 Task 3 challenge, we rely on the STARSS22 dataset [24], together with a synthetic mixture (SYNMIX) for baseline training1 provided by the organizers of the challenge. The STARSS22 dataset is comprised of 121 recordings of various lengths of real sound scenes across 13 sound event classes, with around 5 hours of audio recordings in 4-channel FOA format and an interpolated tetrahedral microphone array. At the time of this work, the evaluation set was not yet released, so we use the "development" partition of train and test, consisting of 67 and 54 recordings, respectively. The dataset contains instances with up to 5 simultaneous sound sources, and up to 4 simultaneous sources of the same class, though 2-source polyphony is much more frequent. Footnote 1: [https://zenodo.org/record/6406873#Y._%BuzMK2o](https://zenodo.org/record/6406873#Y._%BuzMK2o).

Due to the small size of the STARSS22 dataset, a base set of synthetic data was also provided to participants (SYNMIX). This data is synthesized using audio samples from FSD50k [25] convolved with Spatial Room Impulse Responses from the TAU-NIGENS Spatial Sound Events 2020 [26] and 2021 [27] datasets. The dataset contains 1200 one-minute synthesized FOA recordings across classes mapped to the classes present in STARSS22, with a maximum polyphony of 2 sources. Both datasets are annotated at 100 ms resolution with labels of sound source class, azimuth, and elevation, as well as additional flags for overlapping sound events. The azimuth angles are \(\phi\in[-180^{\circ},180^{\circ}]\) and the elevation angles are \(\theta\in[-90^{\circ},90^{\circ}]\), with \(0^{\circ}\) at the front. Note that azimuth angles increase counterclockwise.
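For illustration, each labeled source in these annotations can be turned into the ACCDOA-style target described in Sec. 2: a unit direction-of-arrival vector scaled by class activity. The helper below is our own sketch of that conversion (the function name and the activity convention are assumptions).

```python
# Convert an annotated (azimuth, elevation) pair in degrees into an (x, y, z)
# ACCDOA-style target: a unit DOA vector whose magnitude encodes class activity.
import numpy as np

def accdoa_target(azimuth_deg, elevation_deg, active=1.0):
    az, el = np.deg2rad(azimuth_deg), np.deg2rad(elevation_deg)
    x = np.cos(el) * np.cos(az)     # front-back component
    y = np.cos(el) * np.sin(az)     # left-right component (azimuth counterclockwise)
    z = np.sin(el)                  # vertical component
    return active * np.array([x, y, z])

# Example: an active source 90 degrees to the left on the horizontal plane.
print(accdoa_target(90.0, 0.0))     # -> approximately [0., 1., 0.]
```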
### Input representations To fairly compare the three multichannel audio representations, we look at the problem of sound localization on the horizontal plane only by removing the elevation component, thus fixing elevation to \(0^{\circ}\) in the ground truth. We train and test separately for each input representation using the same acoustic scenes, simply replacing the original FOA audio representation with binaural or stereo audio, as per following procedures. **FOA \(\rightarrow\) Binaural**: To decode the original FOA audio from the STARRS22 and synthetic datasets to binaural, we used the BinauralDecoder plug-in from the IEM Plug-In Suite2. This decoder uses pre-processed Neumann KU100 dummy head HRTFs via the magnitude least-squares (MagLS) method proposed in [28]. We apply this binaural decoding to all FOA audio used in training and testing, yielding 2-channel binaural audio for our experiments3. Footnote 2: [https://plugins.iem.at/docs/plugindescriptions/#binauraldecoder](https://plugins.iem.at/docs/plugindescriptions/#binauraldecoder). Footnote 3: [https://github.com/juliawilkins/ambisonics2binaural_simple](https://github.com/juliawilkins/ambisonics2binaural_simple). **FOA \(\rightarrow\) Stereo**: To convert our FOA audio to stereo, we used a very simple translation: \(\mathit{left}=W+Y\) and \(\mathit{right}=W-Y\), following [29]. Note that \(W\) is the omnidirectional signal and \(Y\) is the first-order horizontal (left-right) component. An increase in air pressure from left causes an increase in values of \(Y\) and an increase in pressure from the right causes a decrease in values of \(Y\). Because of this, the simple translation above allows us to move easily from FOA to left and right channels yielding 2-channel stereo audio. ### Baseline model The model used for our analysis is the DCASE 2022 Task 3 Baseline model4. The architecture is similar to the CRNN-based model initially proposed in [7], with extensions to accommodate simultaneous sources of the same class in the Multi-ACCDOA format [21]. The input to the model is the multichannel audio, segmented into 5-second chunks, yielding a sequence of 50 x 0.1 second frames. In the FOA configuration, Mel spectrogram features are used to capture frequency information and intensity vectors provide spatial information. In the binaural and stereo settings, we modify the model slightly to use Mel spectrograms and GCC features. GCC features are commonly used in 2-channel localization settings to capture Time Difference of Arrival (TDOA) information between two microphones. Audio is resampled to 24kHz, and 64 Mel coefficients are computed from an STFT on windows of 1024 samples with a hop size of 480 samples. The model has 604.5K trainable parameters. Models are trained for a multi-output regression task, with a mean-squared-error loss, for 200 epochs using 1 RTX 8000 GPU, in batches of 64 samples with a learning rate of \(10^{-3}\). The model checkpoint with the lowest validation loss is selected. Footnote 4: [https://github.com/sharathadavanne/seld-dcase2022](https://github.com/sharathadavanne/seld-dcase2022). ### Data augmentation via Audio Channel Swapping (ACS) An initial exploration of the STARSS and SYNMIX datasets revealed that the distribution of azimuth angles across sound sources was largely imbalanced, with far more sound sources in the front and right regions than in the left and back. Following [30], we hypothesize that localization performance on the real test dataset could be improved by balancing this distribution. 
To balance this distribution, we use a data augmentation technique known as Audio Channel Swapping (ACS) [31]. We perform 3 transformations involving azimuth to simulate the rotation of sound sources by \(90^{\circ}\), \(180^{\circ}\), and \(270^{\circ}\). We perform different permutations of swapping and negating the \(X\) and \(Y\) FOA channels directly. This simple augmentation strategy not only quadruples our overall dataset size but more importantly gives us a uniform distribution of azimuth angles. We show that this augmentation has a significant impact on localization performance in Table 1. Please refer to [31] for more details on ACS. ### Evaluation metrics We use the joint localization and detection metrics as defined by the DCASE 2022 Task 3 SELD Challenge in the following analysis. The F-Score and error rate (ER) capture location-dependent detection. True Positives (TP) and False Positives (FP) are considered with a tolerance of \(20^{\circ}\) in the direction of arrival. Class-dependent localization error (LE) and localization recall (LR) measure localization performance without considering the spatial threshold. See [32] for more details on SELD metrics. ## 4 Results ### A baseline model for FOA input Prior to evaluating the impact of different input representations, we first assess the performance of a baseline model trained and evaluated on FOA input using varied training data configurations. The STARSS22 and SYNMIX datasets are both quite imbalanced in terms of the distribution of sound sources across azimuth angles. As described in Section 3.4, we use Audio Channel Swapping (ACS) to mitigate this problem and balance the distribution at train time. Table 1 reports results for 5 training data configurations: **A**: training and evaluating only in azimuth using the STARSS22 dataset; **B**: adding the SYNMIX dataset to **A**'s training; **C**: adding ACS augmentation to **B**'s training; **B\({}^{+E}\)**: training and evaluating B in both azimuth and elevation; **C\({}^{+E}\)**: training and evaluating C in both azimuth and elevation. Note that B\({}^{+E}\) and C\({}^{+E}\) help us to understand the impact of removing elevation in the overall metrics. By comparing C\({}^{+E}\) and C, we see how removing elevation improves all metrics, as one could expect given the reduced degrees of freedom in the predictions. Moreover, we see an improvement in the joint localization and detection metrics across the board with the addition of the augmented data. Hence, we use **C** as our reference configuration to assess the impact of the input representation in the following sections. ### Comparing audio input representations Table 2 reports results when changing input representation, moving from the highly-privileged FOA representation to binaural and stereo audio. Our experiments show that as one moves from FOA to binaural and stereo, overall SELD model performance degrades. While this is to be expected because binaural and stereo audio are not designed to capture full spatial audio, this is the first quantification of deep learning-based SELD performance across these audio input representations on real multichannel recordings, and it lays the groundwork for the deeper analysis that follows. 
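Returning to the ACS augmentation used for configuration C, the following is a minimal sketch of the three azimuth rotations obtained by permuting and negating the \(Y\) and \(X\) channels, assuming the W, Y, Z, X channel ordering introduced earlier and \(X\propto\cos\phi\), \(Y\propto\sin\phi\) on the horizontal plane; the exact permutations used in [31] may follow a different sign convention.

```python
import numpy as np

# FOA channel indices: 0 = W, 1 = Y, 2 = Z, 3 = X
ROTATIONS = {
    90:  lambda w, y, z, x: (w, x,  z, -y),   # phi -> phi + 90
    180: lambda w, y, z, x: (w, -y, z, -x),   # phi -> phi + 180
    270: lambda w, y, z, x: (w, -x, z,  y),   # phi -> phi + 270
}

def rotate_foa(foa, deg):
    """Rotate all sources in an FOA clip by `deg` degrees of azimuth
    (counterclockwise) by permuting/negating the Y and X channels."""
    w, y, z, x = foa
    return np.stack(ROTATIONS[deg](w, y, z, x))

def rotate_azimuth_label(phi, deg):
    """Shift a ground-truth azimuth label accordingly, wrapped to [-180, 180)."""
    return (phi + deg + 180) % 360 - 180

# Quadrupling the dataset: the original clip plus three rotated copies
foa = np.random.randn(4, 24000)
augmented = [foa] + [rotate_foa(foa, d) for d in (90, 180, 270)]
```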
\begin{table} \begin{tabular}{c c c c c c} \hline \hline **Conf.** & **SELD \(\downarrow\)** & **ER \(\downarrow\)** & **F \(\uparrow\)** & **LE \(\downarrow\)** & **LR \(\uparrow\)** \\ \hline A & 0.65 & 0.73 & 15.3\% & 53.7\({}^{\circ}\) & 27\% \\ B & 0.47 & 0.62 & 34.5\% & 22.5\({}^{\circ}\) & 51\% \\ C & 0.42 & 0.56 & 43.3\% & 16.9\({}^{\circ}\) & 54.1\% \\ \hline B\({}^{+E}\) & 0.53 & 0.70 & 27.3\% & 26.1\({}^{\circ}\) & 47.5\% \\ C\({}^{+E}\) & 0.48 & 0.62 & 33\% & 22.7\({}^{\circ}\) & 51\% \\ \hline \hline \end{tabular} \end{table} Table 1: Results with **FOA** input across different configurations; A: STARSS22; B: A + SYNMIX; C: B with ACS; B\({}^{+E}\) and C\({}^{+E}\): B and C are trained and evaluated using both azimuth and elevation. Results are reported on the STARSS22 DCASE dev-test set. \(\downarrow\) indicates metrics that are better when value is lower, \(\uparrow\) viceversa. \begin{table} \begin{tabular}{c c c c c c} \hline \hline **Input** & **SELD \(\downarrow\)** & **ER \(\downarrow\)** & **F \(\uparrow\)** & **LE \(\downarrow\)** & **LR\(\uparrow\)** \\ \hline FOA & 0.42 & 0.56 & 43.3\% & 16.9\({}^{\circ}\) & 54.1\% \\ Binaural & 0.50 & 0.67 & 33.9\% & 30.1\({}^{\circ}\) & 49.2\% \\ Stereo & 0.60 & 0.76 & 21.7\% & 42.9\({}^{\circ}\) & 38.8\% \\ \hline \hline \end{tabular} \end{table} Table 2: Results for models trained using STARSS22 + SYNMIX using ACS, with different audio input representations. Results are reported on the STARSS22 DCASE development-test set. \(\downarrow\) indicates metrics that are better when value is lower, \(\uparrow\) viceversa. Figure 1: Normalized confusion matrices showing true vs. predicted quadrant of sources across audio configurations. The FOA model performs near-perfect at distinguishing front and back sources, while front and back sources are commonly confused in binaural and stereo settings. Quadrants of size \(90^{\circ}\) are defined based on the azimuth angle of a sound source: Front \(\in[-45^{\circ},45^{\circ}]\), Left \(\in[45^{\circ},135^{\circ}]\), Back \(\in[135^{\circ},\pm 180^{\circ}]\cup[\pm 180^{\circ},-135^{\circ}]\), Right \(\in[-135^{\circ},-45^{\circ}]\) ### Localization error by sound source quadrant We are also interested in dissecting localization performance to understand where key success and failure points occur in terms of sound source position and polyphonic scene conditions. In Figure 1, we show a set of confusion matrices illustrating the distribution of true quadrants of sound sources vs. predicted quadrants across audio input representations. We segment the \(90^{\circ}\) quadrants as follows, based on azimuth angle: \(\text{Front}\in[-45^{\circ},45^{\circ}]\), \(\text{Left}\in[45^{\circ},135^{\circ}]\), \(\text{Back}\in[135^{\circ},\pm 180^{\circ}]\cup[\pm 180^{\circ},-135^{\circ}]\), \(\text{Right}\in[-135^{\circ},-45^{\circ}]\). Notably, using the FOA representation, the model has near-perfect performance in terms of distinguishing front vs. back sources. In the binaural setting, we see an increase in front-back confusion, and in the stereo setting this error is glaring as 48% of sources in the front are predicted in the back quadrant. In fact, this is a well-studied topic in psychoacoustics related to the cone of confusion phenomenon, which occurs when a sound source is equidistant to both the left and right ears [33, 34, 35]. Thus, it is difficult for the listener to distinguish whether a sound source is in front or behind them. It is likely that our binaural model is affected by this as well. 
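A minimal sketch of the quadrant assignment behind Figure 1 is given below; azimuths follow the counterclockwise convention stated earlier, and the handling of points lying exactly on a quadrant boundary is an assumption.

```python
import numpy as np

def azimuth_to_quadrant(phi):
    """Map an azimuth in degrees (counterclockwise, 0 = front) to a 90-degree quadrant."""
    phi = (phi + 180) % 360 - 180          # wrap to [-180, 180)
    if -45 <= phi < 45:
        return "Front"
    if 45 <= phi < 135:
        return "Left"
    if -135 <= phi < -45:
        return "Right"
    return "Back"                          # |phi| >= 135

def quadrant_confusion(true_az, pred_az):
    """Row-normalised true-vs-predicted quadrant confusion matrix
    for matched reference/prediction azimuth pairs."""
    quads = ["Front", "Left", "Back", "Right"]
    idx = {q: i for i, q in enumerate(quads)}
    mat = np.zeros((4, 4))
    for t, p in zip(true_az, pred_az):
        mat[idx[azimuth_to_quadrant(t)], idx[azimuth_to_quadrant(p)]] += 1
    return mat / np.maximum(mat.sum(axis=1, keepdims=True), 1)
```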
Across audio input representations, the accuracy of source detection in the left and right quadrants is fairly consistent, showing reliability in terms of lateral sound source detection given 2- or 4-channel audio input. In Figure 2, we analyze average localization error (LE) based on the quadrant of the ground truth sound sources. In the FOA setting, the difference of LE between the left, right, and back quadrants is quite small, however the error for sources in the front is nearly double that of the other quadrants. In the binaural setting, LE increases in the front and back quadrants, approximately doubling that of the FOA setting, though this increase is much less notable in the lateral (left-right) regions. Further, in the stereo context, we find similar trends but with overall poorer performance. The front and back LE are over three times that of the FOA model, with less significant degradation in the performance of the left and right quadrants. Here, we crucially observe that despite the binaural and stereo models struggling to localize sources in the front quadrant in particular compared to the FOA system, these 2-channel models are still able to localize sources laterally quite well. ### SELD performance in polyphonic conditions The DCASE SELD challenge is unique in that the test dataset contains real audio recordings with multiple overlapping sound sources. Hence, investigating SELD model performance in complex polyphonic conditions can help us better understand how these systems handle more complex scene conditions that are closer to reality. In Figure 3, we analyze localization recall (LR) of the FOA, binaural, and stereo models in the presence of 1, 2, 3, and 4 simultaneous sources (this encapsulates both simultaneous sources of the same or different classes). Note that approximately \(56\%\) of frames contain 1 source, \(31\%\) contain 2, \(10\%\) contain 3, and \(3\%\) contain 4 or more simultaneous sources, so we normalize by source count accordingly in Figure 3. We show that LR steadily decreases in all audio configurations as the number of polyphonic sound sources increases in Figure 3. The model struggles to detect the correct number of sources as the scene conditions become increasingly complex, though proportionally the decrease in recall is relatively similar across audio contexts as polyphony increases. We also analyze localization error (LE) across polyphonic conditions. Here we find that while on average LE increases as we use less-informative audio representations (i.e. stereo), it is not a fully monotonically increasing trend across polyphonic conditions. In the FOA setting, the LE is similar regardless of level of polyphony. In the binaural and stereo settings, there is a much larger spread of LE across conditions, however not in a monotonically increasing manner, e.g. in the stereo setting the average LE is \(31.3^{\circ}\) in the occurrence of 3 overlapping sources vs. \(46.1^{\circ}\) for 2 sources. We hypothesize that there are many interacting effects contributing to this, including but not limited to class imbalance in different polyphonic conditions, simultaneous sources of the same class, and the nature of the LE metric as it does not take false negatives into account. ## 5 Conclusion This work presents a novel comparative analysis of the DCASE 2022 SELD baseline model across first-order Ambisonics, binaural, and stereo audio input representations. 
We show quantitatively that while localization and detection performance decreases given less informative audio representations, binaural and stereo-based SELD models are still able to localize lateral sound sources relatively well. These findings could be highly informative in the development of applications such as an audio-visual navigation system equipped with a stereo microphone configuration and a camera; if we are confident in lateral source localization based on auditory cues, we can lean more on visual cues for sources directly in front of the camera. Future work in this space could entail an investigation into the effect of sound source class or of overlapping sources of the same class on localization performance across polyphonic conditions and audio input representations. Figure 3: Localization recall in multiple audio representations, segmented by number of simultaneous sources in the test data and normalized by number of sources satisfying each condition. Figure 2: Average localization error across audio representations, based on ground truth sound source quadrant position. Results are normalized by number of instances of sound sources per quadrant.
2309.17125
Style Transfer for Non-differentiable Audio Effects
Digital audio effects are widely used by audio engineers to alter the acoustic and temporal qualities of audio data. However, these effects can have a large number of parameters which can make them difficult to learn for beginners and hamper creativity for professionals. Recently, there have been a number of efforts to employ progress in deep learning to acquire the low-level parameter configurations of audio effects by minimising an objective function between an input and reference track, commonly referred to as style transfer. However, current approaches use inflexible black-box techniques or require that the effects under consideration are implemented in an auto-differentiation framework. In this work, we propose a deep learning approach to audio production style matching which can be used with effects implemented in some of the most widely used frameworks, requiring only that the parameters under consideration have a continuous domain. Further, our method includes style matching for various classes of effects, many of which are difficult or impossible to be approximated closely using differentiable functions. We show that our audio embedding approach creates logical encodings of timbral information, which can be used for a number of downstream tasks. Further, we perform a listening test which demonstrates that our approach is able to convincingly style match a multi-band compressor effect.
Kieran Grant
2023-09-29T10:40:19Z
http://arxiv.org/abs/2309.17125v1
# Audio Engineering Society ###### Abstract Digital audio effects are widely used by audio engineers to alter the acoustic and temporal qualities of audio data. However, these effects can have a large number of parameters which can make them difficult to learn for beginners and hamper creativity for professionals. Recently, there have been a number of efforts to employ progress in deep learning to acquire the low-level parameter configurations of audio effects by minimising an objective function between an input and reference track, commonly referred to as style transfer. However, current approaches use inflexible black-box techniques or require that the effects under consideration are implemented in an auto-differentiation framework. In this work, we propose a deep learning approach to audio production style matching which can be used with effects implemented in some of the most widely used frameworks, requiring only that the parameters under consideration have a continuous domain. Further, our method includes style matching for various classes of effects, many of which are difficult or impossible to be approximated closely using differentiable functions. We show that our audio embedding approach creates logical encodings of timbral information, which can be used for a number of downstream tasks. Further, we perform a listening test which demonstrates that our approach is able to convincingly style match a multi-band compressor effect. ## 1 Introduction Digital audio effects are a vital tool in a number of areas such as music production and sound design. However, these effects can have a large number of parameters which can be difficult to interpret and cause frustration for beginners. Similarly, experienced audio engineers can spend a large amount of time manually tweaking parameter settings in order to mix audio signals or for audio tasks such as mastering. The use of machine learning models to assist in controlling these effect parameters is becoming more common and can streamline the process of achieving the desired audio transformation. Recent solutions [1][2] have explored the use of differentiable digital signal processing modules to ease the training of such machine learning models. However, these approaches generally either use black-box modelling techniques, which don't allow for parameters to be adjusted after matching and lack interpretability [3], or require that the audio effects are implemented in an auto-differentiation framework [4]. In general, commercial audio effects are instead implemented in frameworks specifically designed for digital-signal processing tasks such as Steinberg's Virtual Studio Technology (VST) format. Naturally, users will develop preferences for particular effect implementations which may make them hesitant to switch to a differentiable digital signal processing (DDSP) equivalent. This means that the practical application of these models which are only usable with auto-differentiable implementations of effects are limited. In this work, we address these gaps in the current literature by developing a method for audio style matching using generic (not necessarily differentiable) digital audio effects. To achieve this, we train a \(\beta\)-VAE [5] network to capture audio features across a range of effect classes in a disentangled latent space. We then use the encoder section of this pre-trained \(\beta\)-VAE model in a Siamese network [6] to learn a joint representation of both the input and reference audio. 
Finally, a simple feed-forward neural network is employed to transform these joint representations into parameter settings for the target effect. In order to implement backpropagation during training, we employ a numerical gradient approximation method for the non-differentiable effect parameters [7]. After pre-training of the audio encoder model, the weights of the network up to the joint input-reference representation are frozen, allowing for the effect controller network to be retrained for any number of unseen audio effects. Our novel contributions in this work include the introduction of a pretrained, generalised audio encoder for downstream audio production tasks. By utilising this encoder, we demonstrate improved training stability and enhanced style matching performance compared to end-to-end training methods that rely on numerically estimated gradients for learning encoder weights. Additionally, we expand the range of effects that can be effectively incorporated in a style matching system. While our approach yields promising results for in-dataset effects, the generalisation to unseen effects remains an ongoing challenge. We provide an open-source implementation1 of our approach, as well as listening examples2. Footnote 1: Available from: [https://github.com/kieran-grant/nd-audio-style-transfer/](https://github.com/kieran-grant/nd-audio-style-transfer/) Footnote 2: Available from: [https://ind-audio-style-transfer.streamlit.app/](https://ind-audio-style-transfer.streamlit.app/) Figure 1: Architecture diagram of both the spectrogram \(\beta\)-VAE and the end-to-end network. The latter uses the pre-trained VAE audio encoder for creating embeddings of both the input and target audio. ## 2 Related Work Over the past decade, deep learning has found various applications in audio processing, such as text-to-speech synthesis [8], genre/mood classification [9], and digital signal processing [1][3]. One application of deep learning to audio production is the development of black-box modeling techniques for analogue signal processing equipment. Some recent examples include the modeling of vintage guitar amplifiers [3][10] that contain non-linearities due to the distortion created by passing a high-gain signal through a vacuum tube, which are not easily replicated using traditional analytic methods. Nevertheless, such black-box approaches typically fix the parameter settings for a specific effect, requiring network retraining for each potential configuration. An alternative strategy to black-box modeling of DAFX involves the utilisation of differentiable digital signal processing (DDSP) modules. Engel et al. [1] were among the first researchers to implement various classic DSP algorithms, such as filters, reverbs, and additive synthesisers, in an auto-differentiation framework. This approach enables the modules to be directly incorporated into neural networks and trained end-to-end using backpropagation. However, this DDSP approach presents a challenge, as each DSP module must be expressed as a differentiable transfer function. Furthermore, this approach does not readily extend to users who wish to utilise implementations of non-differentiable audio effects, which they may be familiar with, or which are more esoteric. Inspired by the interpretability and flexibility of DDSP approaches, Ramirez et al. [7] have presented a method of end-to-end training using arbitrary DSP modules, i.e. those that are not necessarily differentiable. 
This approach relies on a numerical method for the estimation of gradients called Simultaneous Perturbation Stochastic Approximation (SPSA) [11] to allow for backpropagation. The approach is shown to be effective in a number of applications such as guitar tube-amplifier emulation, automatic music mastering and the removal of pops and breaths from recorded speech. However, the technique is yet to be used to allow arbitrary effects to be used for audio production style transfer. A common paradigm in audio style transfer is the use of Siamese networks [6] which allow for meaningful representations of both input and reference audio to be learnt and combined for further downstream tasks such as parameter control. Typically, these networks use spectrogram representations of the audio and image encoders to learn dense representations. Sheng and Fazekas [12] have utilised such a Siamese network to control a dynamic range compressor via a random forest controller network. In a separate study, Mimilakis et al. [13] proposed a method for controlling a parametric equaliser to match the vocal qualities of an input recording to a reference recording. The architecture used in this work is based on the Siamese architecture proposed by Steinmetz et al. [2] who implemented a differentiable equaliser and compressor for style transfer, while also comparing differentiation techniques such as neural proxies, SPSA and auto-differentiation for the task. ## 3 Methodology ### Model Architecture #### 3.1.1 Spectrogram \(\beta\)-VAE As part of the Siamese architecture of our model, we require a mapping from the audio domain to some latent encoding. To achieve this, we construct a \(\beta\)-VAE [5] network with the objective of reconstructing a spectrogram representation of the input audio. Ideally, we aim for timbral qualities to be captured by this latent encoding which correspond directly to parameter settings for a wide range of generic audio effects. Hence, our latent space must be large enough to capture information about frequency and temporal content of the spectrogram, without being so large as to create many linearly dependent factors which would unnecessarily increase model complexity. The spectrogram \(\beta\)-VAE itself comprises 4 convolutional layers, with batch normalisation and ReLU activations at each layer. The kernel (3), stride (2) and padding (1) sizes for each layer are shared, and the number of channels increase at each layer (8, 16, 32 and 32 respectively). The 32-channel image obtained from the convolutional network is then flattened and passed through two linear layers representing the \(\mu\) and log-variance for our probabilistic sampling for a 128-dimensional latent space. The decoder is implemented as a mirror image of the encoder network using the relevant inverse function at each layer to create a reconstruction of the spectrogram from this latent space. #### 3.1.2 Controller Network To map from these embeddings to parameter settings for a given effect module we employ a simple feed-forward controller network. This network takes the 256D concatenated input and reference encoding as inputs and maps these to the \(P\) continuous parameters of the given effect. In our implementation we use a total of 4 linear hidden layers with 128, 128, 64 and 32 nodes respectively. At each layer we apply layer-normalisation followed by Leaky ReLU activation with a negative slope of 1e\(-\)3. 
At the final layer we use a sigmoid activation in order to map each of the \(P\) network outputs into the range \((0,1)\), which corresponds to the normalised value of each parameter. The end-to-end network outputs the transformed audio signal, \(\hat{y}\), by applying the predicted parameter configuration to the effect module and using this to transform the original input audio signal. As we make no assumption about the differentiability of the effect module under consideration, we employ numerical estimation techniques to calculate gradients during backpropagation. To achieve this, we adopt the approach proposed by Ramirez et al. [7], which utilises the simultaneous perturbation stochastic approximation (SPSA) method for efficient gradient estimation. ### Dataset Generation #### 3.2.1 Digital Audio Effects For this work, we utilise a suite of legacy open-source audio plugins from mda-vst which are implemented using the Virtual Studio Technology 3 (VST3) framework, a format which is ubiquitous in real-world audio engineering. In order to interface with these plugins in a code-driven environment, we use Spotify's Pedalboard library3, which allows us to process audio and update parameter settings directly. For each effect, we only consider adjustment of continuous parameters in order to be usable with our gradient estimation techniques described in the previous section. Footnote 3: Available from: [https://github.com/spotify/pedalboard](https://github.com/spotify/pedalboard) The mda-vst suite comprises over 30 audio plugins including software instruments and effects. For this work, we concentrate on a subset of widely used effect classes. These are given in Table 1. #### 3.2.2 Audio We adopt a data generation approach similar to Steinmetz et al. [2] and Mimilakis et al. [13] for self-supervised training without labeled or paired data. Figure 2 shows the main data generation pipeline. Each training datapoint starts with sampling a random full-length audio recording from dataset \(\mathcal{D}\). From a non-silent section of the source, a random patch of audio is taken as our audio sample. We apply scene augmentation, including pitch shifting and time-shifting, to create dataset variance. We then clone this augmented datapoint into two patches, \(x_{i}\) and \(x_{r}\). \(x_{i}\) remains unaffected, while \(x_{r}\) is processed by our chosen audio effect with a random parameter configuration \(\theta\) and peak-normalized to -12dBFS to ensure similar levels between the input and reference, minimising the influence of volume in the style matching. Matching audio production styles in real-world scenarios is challenging when the reference audio is from a different source than the input recording. To address this, we divide \(x_{i}\) and \(x_{r}\) into segments \(a\) and \(b\) of equal length. During model training, we randomly select either segment \(a\) or \(b\) as the input to the model, while the other segment serves as the reference. We then compute audio domain loss by comparing the held-out ground truth from the reference recording that corresponds to the same segment as the input recording. 
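A minimal PyTorch sketch of the encoder and controller architectures described in Section 3.1 is given below; the layer sizes follow the text, while the use of lazily initialised linear layers for the flattened convolutional output and the reparameterisation details are implementation conveniences assumed here, not details given by the paper.

```python
import torch
import torch.nn as nn

class SpectrogramEncoder(nn.Module):
    """Convolutional encoder of the spectrogram beta-VAE (Section 3.1.1):
    four conv layers (8, 16, 32, 32 channels; kernel 3, stride 2, padding 1),
    each with batch norm and ReLU, followed by mu / log-variance heads."""
    def __init__(self, latent_dim=128):
        super().__init__()
        chans, layers, in_c = [8, 16, 32, 32], [], 1
        for out_c in chans:
            layers += [nn.Conv2d(in_c, out_c, kernel_size=3, stride=2, padding=1),
                       nn.BatchNorm2d(out_c), nn.ReLU()]
            in_c = out_c
        self.conv = nn.Sequential(*layers)
        self.to_mu = nn.LazyLinear(latent_dim)       # flattened size inferred lazily
        self.to_logvar = nn.LazyLinear(latent_dim)

    def forward(self, spec):                         # spec: (batch, 1, freq, time)
        h = self.conv(spec).flatten(start_dim=1)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterisation
        return z, mu, logvar

class ControllerNetwork(nn.Module):
    """Maps the concatenated input/reference embeddings (256-D) to P normalised
    effect parameters in (0, 1) (Section 3.1.2)."""
    def __init__(self, num_params, embed_dim=128):
        super().__init__()
        dims, layers, in_d = [128, 128, 64, 32], [], 2 * embed_dim
        for d in dims:
            layers += [nn.Linear(in_d, d), nn.LayerNorm(d), nn.LeakyReLU(1e-3)]
            in_d = d
        layers += [nn.Linear(in_d, num_params), nn.Sigmoid()]
        self.net = nn.Sequential(*layers)

    def forward(self, z_input, z_reference):
        return self.net(torch.cat([z_input, z_reference], dim=-1))
```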
\begin{table} \begin{tabular}{|c|c|} \hline **mda-vst Plugin Name** & **Audio Effect Class** \\ \hline Ambience & Reverb \\ Combo & Amp simulator \\ Delay & Delay \\ Dynamics & Compressor/limiter/gate \\ Overdrive & Soft distortion \\ RingMod & Ring modulation \\ \hline _Leslie_ & _Rotary speaker simulator_ \\ _MultiBand_ & _Multi-band compressor_ \\ _Thru-Zero Flanger_ & _Tape-flanging simulator_ \\ \hline \end{tabular} \end{table} Table 1: mda-vst implementations and their respective general audio effect class. Effects in italics were not used during spectrogram \(\beta\)-VAE training. ### Model Training #### 3.3.1 Spectrogram \(\beta\)-Vae We used a total of six different mda-vst effect modules for model training: Ambience, Delay, Dynamics, Overdrive, RingMod and Combo. We cycled through these effects such that a single effect was used at each training epoch. With this strategy we aimed to create a generalised encoder which can capture timbral characteristics which are universal across different effect classes. While the audio effect classes seen in training are not exhaustive of those used in real-world application, they provide a broad spectrum of effect types which create a variety of spectral, harmonic and temporal transformations to audio data. To train the spectrogram \(\beta\)-VAE model, we applied a STFT to the target audio signal from Figure 2 with a length of \(131,072\) samples at a sample rate of 24kHz (approximately 5 seconds of audio). The STFT was performed using 4096 FFT bins, with a Hann window size of 2048 and a hop length of 1024. The magnitude of the resulting complex-valued spectrogram was computed by taking the power of the absolute value of the spectrogram with an exponent of 0.3. The use of this exponential compression aided in enhancing the visual contrast of the image, thereby leading to a stronger distinction between the signal and ambient noise. The normalised spectrogram was then used as the input for the spectrogram \(\beta\)-VAE. The weights of the network were randomly initialised, and we used a learning rate of 5e\(-\)4. We mitigated issues with vanishing KL-divergence loss and the resultant degrading effect on representations by employing cyclic KL-annealing [14] during training. The final model was trained for 500 epochs with a training size of 2,500 and validation size of 250 examples per epoch. Training was performed using a single Nvidia 3070 RTX GPU and the weights which achieved the lowest overall validation loss were stored for downstream tasks. #### 3.3.2 End-to-end Network For each of the nine audio effects under consideration, the Siamese network was trained end-to-end, with the weights for the audio encoder frozen from the earlier spectrogram \(\beta\)-VAE training. We followed previous work from Steinmetz et al. [2] and implemented an audio domain loss for our model training objective. The overall loss is given by \[\mathcal{L}_{\text{E2E}}=\mathcal{L}_{\text{MRSTFT}}+\alpha\cdot\mathcal{L}_{ \text{MAE}}, \tag{1}\] Where \(\mathcal{L}_{\text{MAE}}\) is the mean-absolute error between the predicted audio and the ground truth, \(\alpha\geq 0\) is a weighting term, and \(\mathcal{L}_{\text{MRSTFT}}\) is the Multi-Resolution STFT error which compares the predicted and target signals at multiple STFT resolutions [15]. The end-to-end network was trained on CPU, with each effect trained for 30 epochs with 5,000 training examples per epoch and 500 validation examples per epoch. 
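Because the effect module is treated as a black box, gradients with respect to the predicted parameters are estimated numerically during backpropagation. The following is a minimal sketch of an SPSA-style estimator wrapped as a PyTorch autograd Function; it is a simplified stand-in for the estimator of Ramirez et al. [7], and the `effect_fn` callable, the clamping of parameters to \([0,1]\), and single-example handling are assumptions made for illustration.

```python
import torch

class SPSAEffect(torch.autograd.Function):
    """Backpropagate through a black-box audio effect via SPSA.
    `effect_fn(audio, params)` is any non-differentiable function that
    returns the processed audio as a tensor; the input audio is assumed
    not to require gradients."""

    @staticmethod
    def forward(ctx, audio, params, effect_fn, epsilon=1e-2):
        ctx.save_for_backward(audio, params)
        ctx.effect_fn, ctx.epsilon = effect_fn, epsilon
        return effect_fn(audio, params)

    @staticmethod
    def backward(ctx, grad_output):
        audio, params = ctx.saved_tensors
        fn, eps = ctx.effect_fn, ctx.epsilon
        # Simultaneously perturb all parameters by +/- epsilon in a random direction
        delta = (torch.randint(0, 2, params.shape) * 2 - 1).to(params.dtype)
        with torch.no_grad():
            y_plus = fn(audio, (params + eps * delta).clamp(0, 1))
            y_minus = fn(audio, (params - eps * delta).clamp(0, 1))
            directional = (y_plus - y_minus) / (2 * eps)       # ~ Jacobian @ delta
            grad_params = (grad_output * directional).sum() / delta  # delta_i = +/-1
        # No gradients for the audio input, the callable, or epsilon
        return None, grad_params, None, None

# Usage: y_hat = SPSAEffect.apply(input_audio, predicted_params, apply_plugin, 1e-2)
```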
The learning rate was initialised to 1e\(-\)3 and an LR scheduler was utilised which reduced the learning rate by a factor of 10 at 80% and 95% of training progress. Our \(\alpha\) weighting of the \(\mathcal{L}_{\text{MAE}}\) contribution was set to 100, and the \(\epsilon\) value for SPSA was set to 1e\(-\)2. Figure 2: Dataset generation pipeline. Only the target audio segment is used during spectrogram \(\beta\)-VAE training, while paired input/target is used during end-to-end network training. ### _Datasets_ We trained both the spectrogram \(\beta\)-VAE and end-to-end models using the Centre for Speech Technology Voice Cloning Toolkit (VCTK) corpus [16]. This dataset comprises speech data from 109 native English speakers with various accents, reading around 400 sentences each, mostly from newspaper passages. We reserve 11 speakers as a validation set. For style matching evaluation, we utilise a subset of the Device and Produced Speech (DAPS) Dataset [17], which includes studio-quality recordings from 20 English speakers, and the MusDB18 dataset [18], consisting of 150 full-track songs spanning different musical styles. ## 4 Evaluation ### _Audio Encoder_ #### 4.1.1 Experiment Design To evaluate the effectiveness of our \(\beta\)-VAE model in separating audio effect classes, we utilised a random forest (RF) classifier to process the embeddings derived from the audio encoder mapping. We compare this non-linear transformation of audio data to a principal component analysis (PCA) approach to extract features from the original spectrograms. To standardise our dataset for analysis, we generated 6,000 data points (1,000 data points per 6 effect classes) with fixed parameter settings. These data points were transformed into 128D latent embeddings using both our trained encoder and PCA dimensionality reduction. We then randomly partitioned the embeddings into training (85%) and test (15%) sets, and subsequently trained the RF classifier separately on each of the embeddings. We report and compare the test accuracy and F1-scores of the RF classifier for both approaches. To evaluate the ability of the latent embeddings to capture timbral changes under different effect configurations we calculated the mutual information (MI) between each parameter for a given effect, and its relevant encodings. We first generated 10,000 parameter configurations for each audio effect and transformed a single audio datapoint under these configurations, computing the latent embedding obtained via our trained audio encoder. We performed Canonical Correlation Analysis (CCA) between the \(10000\times 128\) embedding matrix and \(10000\times P\) parameter settings matrix to obtain a \(10000\times 2\) reduced space. We then compared the MI for each parameter and axis in the reduced space in turn, and report the maximum MI value (MMI) achieved. Here, a larger MMI corresponds to a greater degree of variance in the embedding under changes to a particular parameter (thus capturing a specific and perceptible timbral change), while an MMI of 0 indicates that the latent embedding is not affected by changes to the relevant parameter. #### 4.1.2 Results and Discussion Results of the RF classifier experiment are presented in Table 2. Our audio encoder's embeddings consistently achieve higher accuracy and F1-scores across all effect classes compared to the PCA approach. This validates the use of a non-linear approach for embedding complex audio data. 
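A minimal scikit-learn sketch of the CCA-based MMI computation described above is given below, where `embeddings` and `params` stand for the \(10000\times 128\) latent codes and \(10000\times P\) parameter settings; the particular mutual-information estimator is an assumption, as the text does not specify one.

```python
import numpy as np
from sklearn.cross_decomposition import CCA
from sklearn.feature_selection import mutual_info_regression

def max_mutual_information(embeddings, params):
    """embeddings: (N, 128) latent codes, params: (N, P) parameter settings.
    Returns, for each parameter, the maximum MI against the two CCA axes."""
    cca = CCA(n_components=2)
    z_proj, _ = cca.fit_transform(embeddings, params)          # (N, 2) projection
    mmi = []
    for p in range(params.shape[1]):
        mi_per_axis = [mutual_info_regression(z_proj[:, [axis]], params[:, p])[0]
                       for axis in range(2)]
        mmi.append(max(mi_per_axis))
    return np.array(mmi)
```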
Table 3 displays the results of our experiment examining the audio embeddings' ability to capture timbral changes. Generally, we observe a strong correlation with changes in the audio signal's frequency content. For example, parameters like Overdrive's muffle and RingMod's freq_hz significantly impact the audio's harmonic content and achieve the highest MMI in their respective classes. Our \begin{table} \begin{tabular}{l c c c c} \hline \hline & \multicolumn{2}{c}{\(\beta\)-VAE} & \multicolumn{2}{c}{PCA} \\ \cline{2-5} **Effect** & **Accuracy** & **F1-Score** & **Accuracy** & **F1-Score** \\ \hline **Ambience** & 0.053 & 0.053 & 0.25 & 0.19 \\ **Combo** & 0.07 & 0.07 & 0.17 & 0.15 \\ **Delay** & 1.07 & 0.03 & 0.01 & 0.01 \\ **Dynamics** & 10.07 & 10.07 & 0.09 & 0.10 \\ **Overdrive** & 10.08 & 10.06 & 0.00 & 0.00 \\ **RingMod** & 1.07 & 10.07 & 0.03 & **0.23** \\ \hline \hline \end{tabular} \end{table} Table 2: Classification accuracy and F1-score on a holdout test set for a Random Forest (RF) classifier fitted separately to our spectrogram \(\beta\)-VAE embeddings and 128D PCA dimensionality reduction embeddings. audio and spectrogram normalisation have also successfully mitigated the influence of volume on embeddings. However, some parameter settings exhibit little correlation with changes in the audio's latent embedding. In particular, subtle compression and delay settings show little correlation to their embeddings. ### End-to-End Network #### 4.2.1 Experiment Design To assess the performance of our end-to-end network for the task of style transfer for unseen audio sources, we conducted a comparative analysis using the three datasets discussed in Section 3.4. We used nine separately trained instances of our end-to-end network (one for each of the audio effect implementations in Table 1) trained on the VCTK dataset, and compared quantitative performance on the unseen DAPS and MusDB18 datasets. We compare the performance of our end-to-end model using both the fixed pretrained audio encoder from Section 4.1 as well as training the audio encoder per effect using the SPSA gradients. We also provide a _baseline_ experiment, which corresponds to the direct error between the unaffected input and effected ground truth. Similarly to Steinmetz et al. [2] we recorded the Multi-Resolution STFT (MRSTFT) error (with window sizes 32, 128, 512, 2048, 8192 and 32768) as well as the Perceptual Evaluation of Speech quality (PESQ) as measures of perceptual error. #### 4.2.2 Results and Discussion The results in Tables 4 and 5 show consistently better performance when using a pretrained encoder, rather than including the audio encoder in the end-to-end training. Further, training was less stable when using uninitialised audio encoder weights, requiring the learning rate to be reduced to 3e\(-\)5 to avoid gradient issues. However, our approach performs more poorly than the baseline across a number of effects. In particular, style matching modulation and temporal effects such as Ambience and Delay were consistently worse than not applying an effect at all. There may be several factors which contribute to this, such as the encoder not accurately capturing the timbral characteristics of these effects. Analysis of Tables 4 and 5 also reveals that model performance is largely consistent across the unseen datasets. 
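For reference, the following is a minimal sketch of the audio-domain objective of Equation (1) using the MRSTFT window sizes listed above; the spectral term here is a simplified log-magnitude L1 distance, whereas the loss of [15] also includes a spectral-convergence term and specific hop sizes, so this is an approximation rather than the exact objective used in training.

```python
import torch

def mrstft_loss(pred, target, fft_sizes=(32, 128, 512, 2048, 8192, 32768)):
    """Simplified multi-resolution STFT loss: mean L1 distance between
    log-magnitude spectrograms at several resolutions."""
    loss = 0.0
    for n_fft in fft_sizes:
        window = torch.hann_window(n_fft, device=pred.device)
        def spec(x):
            return torch.stft(x, n_fft=n_fft, hop_length=n_fft // 4,
                              window=window, return_complex=True).abs()
        loss = loss + torch.mean(torch.abs(torch.log(spec(pred) + 1e-7)
                                           - torch.log(spec(target) + 1e-7)))
    return loss / len(fft_sizes)

def e2e_loss(pred, target, alpha=100.0):
    """L_E2E = L_MRSTFT + alpha * L_MAE (Equation 1), with alpha = 100 as in Section 3.3.2."""
    return mrstft_loss(pred, target) + alpha * torch.mean(torch.abs(pred - target))
```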
### Listening Test #### 4.3.1 Experiment Design In order to judge perceptual model performance when style matching for _unknown_ effect implementations we conduct a listening test, inspired by the Multiple Stimuli with Hidden Reference and Anchor (MUSHRA) design. The test was conducted online using the webMUSHRA framework [20]. Participants were asked to rate how well a number \begin{table} \begin{tabular}{c c c c c c c c c c c c} \hline \hline \multicolumn{2}{c}{**Overdrive**} & \multicolumn{2}{c}{**RingMod**} & \multicolumn{2}{c}{**Ambience**} & \multicolumn{2}{c}{**Combo**} & \multicolumn{2}{c}{**Delay**} & \multicolumn{2}{c}{**Dynamics**} \\ \hline **Param** & **MMI** & **Param** & **MMI** & **Param** & **MMI** & **Param** & **MMI** & **Param** & **MMI** & **Param** & **MMI** \\ \hline muffle & 0.51 & freq\_hz & 0.74 & mix & 0.53 & hpf\_freq & 0.21 & fb\_mix & 0.49 & output\_db & 0.12 \\ drive & 0.00 & feedback & 0.54 & size\_m & 0.28 & hpf\_reso & 0.46 & 1,delay\_ms & 0.34 & gate\_thr\_db & 0.12 \\ output\_db & 0.00 & fine\_hz & 0.00 & hf\_damp & 0.08 & drive\_s\_h & 0.02 & 1,delay\_day & 0.06 & mix & 0.10 \\ & & & output\_db & 0.00 & bias & 0.01 & feedback & 0.03 & release\_ms & 0.08 \\ & & & & output\_db & 0.00 & 0.00 & fb\_tone\_lo\_hi & 0.01 & gate\_rel\_ms & 0.04 \\ & & & & & output\_db & 0.00 & 0.00 & fb\_tone\_lo\_hi & 0.01 & gate\_rel\_ms & 0.04 \\ & & & & & & & & & limiter\_db & 0.03 \\ & & & & & & & & ratio & 0.02 \\ & & & & & & & & & & thresh\_db & 0.02 \\ & & & & & & & & & gate\_att\_s & 0.01 \\ & & & & & & & & & attack\_s & 0.00 \\ \hline **mean** & 1.00 & **mean** & 0.43 & **mean** & 0.22 & **mean** & 0.34 & **mean** & 0.19 & **mean** & 0.05 \\ \hline \hline \end{tabular} \end{table} Table 3: Maximum Mutual Information (MMI) calculated across 2D-projection of latent embeddings via Canonical Correlation Analysis (CCA) for each effect and parameter used during Spectrogram \(\beta\)-VAE training. of conditions match the audio production style of a reference audio source. We utilised audio effect implementations from Spotify's Pedalboard library to generate the source audio data, employing four distinct effect types: overdrive/distortion, reverb, delay, and compression/EQ. We transformed audio from the DAPS dataset by applying three different random parameter configurations for each of the effect classes. To achieve style matching, we selected the corresponding effect implementation in the mda-vst library (as outlined in Table 1) and the corresponding trained end-to-end model. The _random baseline_, which acts as a low-quality condition, was generated from a random parameter configuration. The _reference_ was also included as a condition with its label hidden. Additionally, for the compression effect, we compared our methodology to the DeepAFx-ST style matching model proposed by Steinmetz et al. [2]. We did not include this condition for other effects as DeepAFx-ST is only able to style match compression and equalisation. #### 4.3.2 Results and Discussion From the listening test, we collected responses from 10 participants for our 12 style-matching examples. One submission was disregarded as it rated all samples at 100, leaving nine valid responses for the user evaluation. Therefore, each of the 12 examples garnered a total of 27 ratings for each stimulus. The outcomes of this assessment are illustrated in Figure 3. Our results show a similar trend to the offline evaluation of our end-to-end network in the previous section. 
Specifically, our findings indicate that the network was able to better match the style of effects that are more transparent than those with a more obvious acoustic effect. Upon inspection of the audio and parameter configurations predicted by the network, we found that the transformations for Delay and Ambience were subtle in comparison to their given reference. This is in contrast to the random settings of the baseline which, while not matching the exact style of the reference, at least applied a more perceptible effect. \begin{table} \begin{tabular}{c|c c c c c c c c c} \hline \hline \multirow{2}{*}{**Dataset**} & \multirow{2}{*}{**Audio Encoder**} & \multirow{2}{*}{**Ambience**} & \multicolumn{6}{c}{**Audio Effect**} \\ \cline{3-10} & & & **Dalay** & **Dynamics** & **Flanger** & **Leslie** & **MultiBand** & **Overdrive** & **RingMod** \\ \hline \multirow{3}{*}{**VCTK**} & _Baseline_ & **0.443** & 1.449 & **1.053** & **0.613** & **0.665** & **0.497** & 0.629 & 1.138 & 1.777 \\ \cline{2-10} & Untrained & 0.600 & 1.525 & 1.218 & 0.803 & 0.806 & 0.641 & 0.608 & 0.794 & 1.712 \\ \cline{2-10} & Pretrained & 0.525 & **1.055** & 1.078 & 0.701 & 0.696 & 0.621 & **0.492** & **0.640** & **1.621** \\ \hline \multirow{3}{*}{**DAPS**} & _Baseline_ & **0.514** & 1.496 & **1.025** & **0.595** & 0.753 & **0.519** & 0.662 & 1.245 & 1.858 \\ \cline{2-10} & Untrained & 0.650 & 1.534 & 1.152 & 0.799 & 0.880 & 0.657 & 0.602 & 0.882 & 1.988 \\ \cline{2-10} & Pretrained & 0.545 & **1.048** & 1.028 & 0.648 & **0.742** & 0.607 & **0.472** & **0.639** & **1.856** \\ \hline \multirow{3}{*}{**MusDB18**} & _Baseline_ & **0.454** & 1.601 & **0.703** & **0.482** & **0.692** & **0.620** & 0.964 & 1.534 & **1.459** \\ \cline{2-10} & Untrained & 0.646 & 1.518 & 0.905 & 1.062 & 0.920 & 0.791 & 0.888 & 1.101 & 2.998 \\ \cline{2-10} & Pretrained & 0.518 & **1.040** & 0.765 & 0.554 & 0.748 & 0.727 & **0.725** & **0.797** & 2.410 \\ \hline \hline \end{tabular} \end{table} Table 4: Multi-Resolution STFT (MRSTFT) loss across the nine effect classes and three datasets. For each effect-dataset pair, we calculate mean loss across 5,000 examples. In this instance a lower score is better. 
\begin{table} \begin{tabular}{c|c c c c c c c c c} \hline \hline \multirow{2}{*}{**Dataset**} & \multirow{2}{*}{**Audio Encoder**} & \multirow{2}{*}{**Ambience**} & \multirow{2}{*}{**Combo**} & \multirow{2}{*}{**Delay**} & \multirow{2}{*}{**Dynamics**} & \multicolumn{2}{c}{**Flanger**} & \multirow{2}{*}{**Leslie**} & \multirow{2}{*}{**MultiBand**} & \multirow{2}{*}{**Overdrive**} & \multirow{2}{*}{**RingMod**} \\ \cline{3-10} & & & & & & & & & & & \\ \hline \multirow{3}{*}{**VCTK**} & _Baseline_ & 2.621 & 2.600 & **1.817** & 4.174 & **2.418** & **3.436** & 4.358 & 3.073 & **1.537** \\ \cline{2-10} & Untrained & 2.027 & **3.609** & 1.306 & 3.957 & 1.670 & 2.990 & 4.373 & 4.014 & 1.120 \\ \cline{2-10} & Pretrained & **2.625** & 3.403 & 1.611 & **4.207** & **2.386** & 3.316 & **4.451** & **4.224** & 1.148 \\ \hline \multirow{3}{*}{**DAPS**} & _Baseline_ & **2.439** & 2.359 & **1.736** & 4.167 & **2.285** & **3.255** & 4.385 & 2.927 & **1.458** \\ \cline{2-10} & Untrained & 1.742 & **3.564** & 1.203 & 3.925 & 1.446 & 2.683 & 4.398 & 4.059 & 1.078 \\ \cline{1-1} \cline{2-10} & Pretrained & 2.407 & 3.378 & 1.534 & **4.209** & 2.039 & 3.133 & **4.486** & **4.213** & 1.097 \\ \hline \multirow{3}{*}{**MusDB18**} & _Baseline_ & **2.862** & 1.786 & **2.446** & 4.075 & **2.611** & **2.605** & 4.004 & 3.557 & **1.511** \\ \cline{1-1} \cline{2-10} & Untrained & 1.805 & **3.736** & 1.475 & 3.707 & 1.450 & 2.117 & 4.134 & 4.040 & 1.088 \\ \cline{1-1} \cline{2-10} & Pretrained & 2.780 & 3.689 & 2.324 & **4.177** & 2.339 & 2.518 & **4.272** & **4.221** & 1.090 \\ \hline \hline \end{tabular} \end{table} Table 5: Perceptual Evaluation of Speech Quality (PESQ) scores across the nine effect classes and three datasets. The output of the PESQ algorithm is a value in the range 1 (_poor_) to 5 (_excellent_) [19]. Using an unseen distortion implementation significantly degraded the quality of the style matching for the Overdrive effect. In this case, the effect was subject to the transparency issue seen with other effect classes - i.e, very little drive was applied in all examples. As seen in Figure 3, the performance of the MultiBand compressor for style matching was significantly better than random settings (\(p<1\mathrm{e}{-7}\)) and, on average, performed within 6% of the DeepAFx-ST model [2]. ## 5 Conclusion In this work, we have developed a method for audio production style-transfer using non-differentiable audio effects. We trained an audio autoencoder which has been shown to be able to separate digital audio effect classes, and capture parameter changes for a number of effect classes. This pretrained encoder was then used in a Siamese style-matching network. However, this network was shown to perform poorly with many unseen implementations of effect classes in a user evaluated listening test. Despite this, the approach was shown to work well for a multi-band compressor, whose mean performance was within 6% of a state-of-the-art approach using auto-differentiation. In future work, we suggest revisiting the encoder network and examining the possible advantages of conditioning on the latent space and employing distinct encoders for effect classes. One approach for the former would be to use perceptually-regulated variational timbre spaces as proposed by Esling et al. [21]. The latter can be achieved by assigning an encoder for each umbrella effect class (e.g. temporal effects, modulation effects, etc.). This may encourage the encoder to learn features that are more specific to that particular class of effect.
2309.07336
Analysis of Superconducting Qubit Layouts Using InductEx
InductEx is a software tool used for the analysis of integrated circuit designs and extraction of design parameters by way of numerical electromagnetic field solving. This tool was originally developed with Rapid Single Flux Quantum (RSFQ) chips in mind, but it has a broad applicability and can be extended to other processes. In this poster, we report a comprehensive analysis of a superconducting aluminum two qubit chip. This analysis was performed with InductEx. We report the design of a two qubit chip which has the characteristics necessary to execute single and two qubit gates. Ahead of fabrication, several design characteristics have been extracted from this quantum chip design in order to verify that it satisfies basic design principles of transmon qubits. These characteristics are reported in this poster and they include the calculation of chip anharmonicities, qubit frequencies, resonator frequencies as well as g-factors and dispersive shifts. Design constraints which are satisfied by these extracted parameters are discussed. Additionally, qualitative aspects of the chip have been obtained from current density maps and are reported here. Taken as a whole, this analysis demonstrates the broad applicability of Inductex to integrated circuit design and particularly to the problem of quantum circuit layout optimization.
Sean Crowe, Benjamin Taylor, Nicholas Ferrante, Brad Liu, Susan Berggren
2023-09-13T22:10:56Z
http://arxiv.org/abs/2309.07336v1
# Analysis of Superconducting Qubit Layouts Using InductEx ###### Abstract InductEx [1] is a software tool used for the analysis of integrated circuit designs and extraction of design parameters by way of numerical electromagnetic field solving. This tool was originally developed with Rapid Single Flux Quantum (RSFQ) chips in mind, but it has a broad applicability and can be extended to other processes. In this poster, we report a comprehensive analysis of a superconducting aluminum two qubit chip. This analysis was performed with InductEx. We report the design of a two qubit chip which has the characteristics necessary to execute single and two qubit gates. Ahead of fabrication, several design characteristics have been extracted from this quantum chip design in order to verify that it satisfies basic design principles of transmon qubits. These characteristics are reported in this poster and they include the calculation of chip anharmonicities, qubit frequencies, resonator frequencies as well as g-factors and dispersive shifts. Design constraints which are satisfied by these extracted parameters are discussed. Additionally, qualitative aspects of the chip have been obtained from current density maps and are reported here. Taken as a whole, this analysis demonstrates the broad applicability of InductEx to integrated circuit design and particularly to the problem of quantum circuit layout optimization. quantum computing, superconducting qubits, circuit QED ## I Extended Poster Abstract When designing a superconducting qubit chip, one typically starts with a circuit schematic of the chip, where the qubits are realized as capacitively shunted SQUIDs, each coupled to a resonator which in turn is coupled to a readout line. Circuit parameters are then chosen in order to satisfy certain design constraints related to chip performance [2]. Creating a circuit layout which is faithful to the schematic is a non-trivial problem which requires special care. In this poster, we detail the optimization of the quantum chip layout shown in Fig. 1. This chip layout consists of two transmon qubits and a tunable coupler. The coupler design was primarily motivated by [3]. It was created using the open source program KLayout, and InductEx was used to perform the extraction of the circuit parameters. Qubit and coupler capacitances were determined by using InductEx to calculate the capacitance matrix between all pieces of metal on the device layer of the chip. The capacitance between the first qubit and the ground was found to be \(C_{q1}=108\text{~{}fF}\). Critical currents of the Josephson junctions in the qubits were determined based on the junction area and the critical current density of the fabrication process. The first qubit is a fixed frequency qubit with only one junction. The critical current of this junction is \(I_{c1}=30.0\text{~{}nA}\). Fig. 1: Simulation of a readout operation on our two qubit chip. This readout operation was performed with InductEx, and the color map is based on the magnitude of the current density vector field. The driving frequency is \(7\text{~{}GHz}\). Taken together, these figures imply the excitation energy \(E_{01}^{(1)}=4.43\ \mathrm{GHz}\), and an anharmonicity \(\alpha_{1}=198\ \mathrm{MHz}\). The anharmonicity was computed using numerical diagonalization of the Hamiltonian. The ratio of the Josephson energy of the junction to the charging energy of the capacitor was found to be \(E_{j}^{(1)}/E_{C}^{(1)}=83.1\). 
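As a numerical cross-check of the figures quoted above for the first qubit, the following is a minimal sketch using the standard asymptotic transmon relations; the poster itself obtains the anharmonicity from numerical diagonalization of the Hamiltonian, so small differences are expected.

```python
import numpy as np

e = 1.602176634e-19       # elementary charge [C]
h = 6.62607015e-34        # Planck constant [J s]
Phi0 = h / (2 * e)        # magnetic flux quantum [Wb]

def transmon_parameters(C_shunt, I_c):
    """Charging energy E_C = e^2 / 2C, Josephson energy E_J = Phi0 * I_c / 2pi,
    and the asymptotic 01 transition frequency f01 ~ (sqrt(8 E_J E_C) - E_C) / h."""
    E_C = e**2 / (2 * C_shunt)                # [J]
    E_J = Phi0 * I_c / (2 * np.pi)            # [J]
    f01 = (np.sqrt(8 * E_J * E_C) - E_C) / h  # [Hz]
    return E_C, E_J, f01

E_C, E_J, f01 = transmon_parameters(C_shunt=108e-15, I_c=30.0e-9)
print(f"E_J/E_C = {E_J / E_C:.1f}, f01 = {f01 / 1e9:.2f} GHz, E_C/h = {E_C / h / 1e6:.0f} MHz")
# E_J/E_C ~ 83 and f01 ~ 4.4 GHz, consistent with the values quoted above
```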
The second qubit is a flux biased SQUID shunted by a capacitor. It is therefore tunable. The critical current of the SQUID is \(I_{c2}=40.0\ \mathrm{nA}\), and the ratio of its junction widths is \(W_{1}/W_{2}=3\). The shunting capacitor's value was found numerically to be \(C_{q2}=108\ \mathrm{fF}\). The excitation energy of this qubit at zero biasing field is therefore \(E_{01}^{(2)}(0)=5.16\ \mathrm{GHz}\) with an anharmonicity of \(\alpha_{2}=196\ \mathrm{MHz}\). Because of the asymmetry of this transmon, the lowest reachable excitation energy is \(E_{01}^{(2)}\left(\frac{1}{2}\Phi_{0}\right)=2.58\ \mathrm{GHz}\), which occurs when half of a flux quantum is biasing the SQUID loop. The ratio of the Josephson energy to the charging energy for this qubit was found to be \(E_{j}^{(2)}/E_{C}^{(2)}=111\). The tunable coupler is also a flux biased SQUID shunted by a capacitor. The value of its capacitance was found to be \(C_{tc}=126\ \mathrm{fF}\). The critical current of its junction is \(I_{ct}=35.0\ \mathrm{nA}\). The ratio of its junction widths is also \(W_{1}/W_{2}=3\). Based on the capacitance and the critical current, the excitation energy of the tunable coupler is \(E_{01}^{(t)}(0)=4.47\ \mathrm{GHz}\) with an anharmonicity of \(\alpha_{t}=196\ \mathrm{MHz}\). The lowest reachable excitation energy for the coupler occurs when half of a flux quantum is biasing the SQUID loop and this energy is \(E_{01}^{(t)}\left(\frac{1}{2}\Phi_{0}\right)=2.24\ \mathrm{GHz}\). The Josephson-to-charging energy ratio for the coupler is \(E_{j}^{(t)}/E_{C}^{(t)}=113\). In addition to qubit frequencies, resonator frequencies were also calculated. This calculation was done by treating the resonator as a harmonic oscillator and then extracting the inductance and capacitance of the resonator. The inductance which was extracted for the first resonator was found to be \(L_{r1}=1.96\ \mathrm{nH}\). The extracted capacitance for this device was found to be \(C_{r1}=744\ \mathrm{fF}\). This implies a fundamental resonant frequency of \(f_{r1}=6.55\ \mathrm{GHz}\). An analytic estimate can be obtained from the formula \(f_{r1}=\frac{c}{4l\sqrt{\varepsilon_{\mathrm{eff}}}}\), where \(l\) is the length of the resonator and \(\varepsilon_{\mathrm{eff}}\) is the effective dielectric constant of the resonator. Because the fabrication process uses a thick substrate, we can approximate \(\varepsilon_{\mathrm{eff}}\approx\frac{1}{2}\left(\varepsilon_{\mathrm{substrate}}+1\right)=6.2\) for silicon [5]. The length of the resonator was determined to be \(4320\mu\mathrm{m}\) by numerical integration. Taken together, these imply an analytic approximation for the frequency to be \(f_{r1}^{\mathrm{analytic}}=6.96\ \mathrm{GHz}\). This is close to the numerical value obtained by InductEx; however, this analytic approximation does not account for the detailed geometry of the resonator and coupler. We therefore consider the numerically obtained frequency more reliable. Given the characteristics of the resonator, it is possible to obtain estimates for the g-factor of the first qubit and also of the dispersive shift. The g-factor is given by \(g_{q1-r1}=2e\beta V_{\mathrm{rms}}\langle 0|\hat{n}|1\rangle\), where \(\beta=\frac{C_{q1-r1}}{C_{q1-r1}+C_{q1}}\), \(V_{\mathrm{rms}}=\sqrt{\frac{hf_{r1}}{2C_{r1}}}\), and \(\hat{n}\) is an operator which measures the number of Cooper pairs on the capacitor [4]. \(C_{q1-r1}\) is the capacitive coupling between the resonator and the first qubit. Numerically this value was found to be \(C_{q1-r1}=7.59\ \mathrm{fF}\). 
In units where \(h=1\), this works out to be \(g_{q1-r1}=67.9\ \mathrm{MHz}\). Finally, the dispersive shift was found to be \(\chi_{\mathrm{q1}}=347\ \mathrm{kHz}\). The second resonator couples directly to the tunable coupler. InductEx was used to calculate both the inductance and capacitance of this second resonator. The value of inductance was found to be \(L_{r2}=1.99\ \mathrm{nH}\). The value of the capacitance was found to be \(C_{rt}=740\ \mathrm{fF}\). This implies a fundamental frequency for the second resonator of \(f_{rt}=6.51\ \mathrm{GHz}\). Additionally, the capacitive coupling between the second readout resonator and the tunable coupler was found to be \(C_{rt-t}=6.54\ \mathrm{fF}\). Again, taken together, this implies the g-factor \(g_{rt-t}=54.7\ \mathrm{MHz}\) and a dispersive shift of \(\chi_{t}=226\ \mathrm{kHz}\). The third resonator couples to the second qubit. The inductance of this resonator was found to be \(L_{r3}=1.95\ \mathrm{nH}\). The value of the capacitance for this resonator was found to be \(C_{r3}=722\ \mathrm{fF}\). This implies a fundamental frequency of \(f_{r3}=6.66\ \mathrm{GHz}\). The capacitive coupling between the third readout resonator and the second qubit was found to be \(C_{r3-\mathrm{q2}}=7.72\ \mathrm{fF}\). Given this information, the g-factor between the second qubit and the resonator was found to be \(g_{r3-\mathrm{q2}}=75.7\ \mathrm{MHz}\) and the dispersive shift was found to be \(\chi_{q2}=607\ \mathrm{kHz}\). Finally, the capacitive coupling between the two qubits and their respective XY drive lines was computed. The coupling between the first qubit and its XY drive line was found to be \(C_{\mathrm{d1-q1}}=0.201\ \mathrm{fF}\). The coupling between the second qubit and its drive line was found to be \(C_{d2-q2}=0.194\ \mathrm{fF}\). In conclusion, we have used InductEx in order to perform a detailed analysis of a two qubit superconducting chip. This analysis included extraction of qubit frequencies, resonator frequencies, and anharmonicities. Extraction of resonator capacitances allowed for predictions of dispersive shifts and coupling strengths between qubits and resonators. Moreover, extraction of coupling strengths between the qubits and the coupler allows the characteristics of two qubit gates to be anticipated ahead of fabrication. Taken as a whole, this analysis demonstrates that InductEx is a robust tool which is applicable to the problem of superconducting qubit layout optimization.
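As a supplementary cross-check of the resonator and coupling figures reported above, the following is a minimal sketch that treats each readout resonator as a quarter-wave line (so \(f=1/(4\sqrt{LC})\) for the total extracted inductance and capacitance) and approximates the charge matrix element by its transmon asymptotic form; both are simplifications not used by InductEx, but they reproduce the quoted values to within a few percent.

```python
import numpy as np

e, h = 1.602176634e-19, 6.62607015e-34   # elementary charge [C], Planck constant [J s]

def quarter_wave_frequency(L_total, C_total):
    """Fundamental frequency of a quarter-wave resonator from its total
    extracted inductance and capacitance."""
    return 1.0 / (4.0 * np.sqrt(L_total * C_total))

def coupling_g(C_coupling, C_qubit, C_res, f_res, EJ_over_EC):
    """g = 2 e beta V_rms <0|n|1> / h, with the charge matrix element
    approximated as <0|n|1> ~ (E_J / 8 E_C)^(1/4) / sqrt(2)."""
    beta = C_coupling / (C_coupling + C_qubit)
    V_rms = np.sqrt(h * f_res / (2 * C_res))
    n01 = (EJ_over_EC / 8.0) ** 0.25 / np.sqrt(2.0)
    return 2 * e * beta * V_rms * n01 / h

f_r1 = quarter_wave_frequency(1.96e-9, 744e-15)               # ~6.55 GHz
g_q1_r1 = coupling_g(7.59e-15, 108e-15, 744e-15, f_r1, 83.1)  # ~69 MHz vs. the quoted 67.9 MHz
print(f"f_r1 = {f_r1 / 1e9:.2f} GHz, g_q1-r1 = {g_q1_r1 / 1e6:.1f} MHz")
```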
2309.06028
Viscous torque in turbulent magnetized AGN accretion disks and its effects on EMRI's gravitational waves
The merger of supermassive black holes (SMBHs) produces mHz gravitational waves (GW), which are potentially detectable by future Laser Interferometer Space Antenna (LISA). Such binary systems are usually embedded in an accretion disk environment at the centre of the active galactic nucleus (AGN). Recent studies suggest the plasma environment imposes measurable imprints on the GW signal if the mass ratio of the binary is around $q \sim 10^{-4}-10^{-3}$. The effect of the gaseous environment on the GW signal is strongly dependent on the disk's parameters, therefore it is believed that future low-frequency GW detections will provide us with precious information about the physics of AGN accretion disks. We investigate this effect by measuring the viscous torque via modelling the evolution of magnetized tori around the primary massive black hole. Using GRMHD HARM-COOL code, we perform 2D and 3D simulations of weakly-magnetized thin accretion disks, with a possible truncation and transition to advection-dominated accretion flow (ADAF). We study the angular momentum transport and turbulence generated by magnetorotational instability (MRI). We quantify the disk's effective alpha viscosity and its evolution over time. We apply our numerical results to quantify the relativistic viscous torque on a hypothetical low-mass secondary black hole via a 1D analytical approach, and we estimate the GW phase shift due to the gas environment.
Fatemeh Hossein Nouri, Agnieszka Janiuk
2023-09-12T07:59:41Z
http://arxiv.org/abs/2309.06028v2
Accretion disk's environmental effects on gravitational waves from LISA for extreme mass ratio black hole binaries ###### Abstract The merger of supermassive black holes (BBH) produces mHz gravitational waves (GW), which are potentially detectable by future Laser Interferometer Space Antenna (LISA). Such binary systems are usually embedded in an accretion disk environment at the centre of the active galactic nuclei (AGN). Recent studies suggest the plasma environment imposes measurable imprints on the GW signal if the mass ratio of the binary is around \(\mathrm{q}\sim 10^{-4}-10^{-3}\). The effect of the gaseous environment on the GW signal is strongly dependent on the disk's parameters, therefore it is believed that future low-frequency GW detections will provide us with precious information about the physics of AGN accretion disks. We investigate this effect by measuring the disk torques on the binary system by modelling several magnetized tori. Using GRMHD HARM-COOL code, we perform 2D simulations of weakly-magnetized thin accretion disks, with a possible truncation and transition to advection-dominated accretion flow (ADAF). In our numerical simulations, we study the angular momentum transport and turbulence generated by the magnetorotational instability (MRI). We quantify the disk's effective alpha viscosity and its evolution over time. We apply our numerical results to estimate the relativistic viscous torque and GW phase shift due to the gas environment. ## I Introduction Binary supermassive black holes (SMBHs) are expected to be formed in galaxy mergers where their corresponding SMBHs pair up into binaries. Such binary systems are usually embedded in a gaseous environment at the center of active galactic nuclei (AGN). Stellar dynamical friction and torques from gas are expected to bring the binary to the sub-parsec scale [1]. In about a sub-parsec binary separation, the system emits low-frequency gravitational waves (\(\sim\)mHz) through the inspiral and merger phases, which are possibly detectable by future observatories such as the Laser Interferometric Space Antenna (LISA) [2]. So far, several candidate binary SMBHs have been identified in AGN and quasars using different observational methods, including NuSTAR and Chandra X-ray observations [3; 4] and periodic features in AGN light curves [5]. The current strongest candidates are OJ 287 and PG 1302-102, which both display periodicity in their lightcurves [6]. The accretion of gas by the binary may result in appreciable electromagnetic (EM) radiation. On the other hand, the orbital frequency of the binary can be affected by the torque imposed by the gas environment and may cause a phase shift in the GW signal. The EM emissions and the GW signal together may provide us with a useful probe of the gas in galaxy cores and a diagnostic of the physics of black hole accretion. In recent years, several analytical and numerical studies have been done to study this electromagnetic emission mostly for a binary system with equal masses in general relativistic hydrodynamics [7; 8] and general relativistic magnetohydrodynamics (GR MHD) [9; 10; 11; 12; 13]. Several groups have studied the interaction of the gaseous circumbinary disk with the binary system with the 2D and 3D nonrelativistic hydrodynamic simulations. They include the viscous gas described by the Navier-Stokes fluid equations, interacting with an equal mass circular binary, assuming the binary orbit to remain fixed [14; 15; 16] or evolving with time [17; 18]. 
Depending on the assumptions for the live binary orbit, grid resolution and circumbinary disk parameters, they found that the exchange of angular momentum between the disk and the binary may lead to binary expansion or shrinkage. Although the equal mass binary studies are more common in the literature, the GW phase shift due to the environmental effect is more significant for the BBH with extreme mass ratios (\(10^{-4}-10^{-3}\)) and more probable to be detected with the future laser interferometric observatories. Based on semi-analytical studies by Yunes et al. (2011) [19] and Kocsis et al. (2011) [20] the accretion disk environment imposes measurable imprints on the GW signal in the extreme mass ratio inspirals. These studies show that depending on the disk parameters, the perturbation of the GW phase is between 10 and 1000 radians per year, detectable by LISA. Derdzinski et al. (2019) performed 2D nonrelativistic numerical simulations inspired by the work of Duffell et al. (2014) [21] for measuring the torques on a Jupiter-like planet, embedded in a protoplanetary disk [22]. They applied the same code to the AGN context and found if the disk's surface density is high enough, the phase shift exceeds a few radians in a 5-year LISA observation. Garg et al. (2022) [23] have done a parameterized study on the gas torque measurements in an analytical approach for intermediate-mass black hole binaries embedded in \(\alpha\)-disks. They quantified the torque over a wide range of Shakura-Sunyaev disk's characteristics such as surface density and Mach number, as well as primary BH's mass and the binary's mass ratio. Most recently, Speri et al. (2023)[24] performed Bayesian analysis to determine how well the environmental effects can be measured using gravitational wave observations from LISA, with a focus on the torque induced by planetary-type migration. In this paper, we focus on the case of extreme mass ratio binaries. We simulate a geometrically thin, Keplerian disk orbiting in the plane of a spinning BH. The low-mass companion BH is not included in our numerical simulation, instead, we use an analytic estimation to measure the viscous torque on the secondary BH. The radiative cooling is not included in our simulations, therefore our models are only applicable to the radiatively inefficient accretion flows [25], e.g. low-luminous AGN, such as NGC 5548 [26; 27]. The previous works in the literature assumed the artificial thin disk \(\alpha\)-prescription as the mechanism for the angular momentum transport [28]. In their approach, the \(\alpha\) viscosity is assumed a typical constant value (\(\sim 0.01\)-\(0.1\)) for the entire disk and for the entire time of the inspiral phase. However, in a realistic approach, one needs to include the magnetic field evolution to provide the physical mechanism for the angular momentum transport caused by the magnetorotational instability (MRI) [29]. To include this realistic assumption, we seeded the initial magnetic field to have a weakly magnetized plasma and evolved the disk in time, using a 2D grid. We quantify the effective value of \(\alpha\) directly from the Reynolds and Maxwell contribution to the stress-energy tensor, as computed for different parts of the disk and over time. As the measured torque varies over time and radius, our calculations impose upper and lower caps on the GW phase shift estimation derived from the \(\alpha\) disk approach. 
We evolve the system with GR MHD equations, therefore our study includes the curved spacetime and spin effects. In addition to the spin effect, we study the effects of other physical parameters such as magnetic field strength, and its configuration. In section II we describe the initial configuration of our simulation and the unit conversions. The numerical results are presented in Sec. III with a detailed discussion on MRI analysis and \(\alpha\) viscosity computation. Sec. IV is devoted to our estimations of the viscous torque and its fluctuations. In Sec. V we discuss the importance of our results on the future GW detection by LISA with a rough estimation of possible dephasing due to the gaseous environment. Finally, the summary and conclusions are given in Sec. VI. ## II Methods and setups ### Numerical methods and initial configuration We use our version of the GR MHD code HARM [30; 31], which uses numerical algorithms developed initially by Gammie et al. (2003) [32] and Noble et al. (2006) [33]. HARM is a finite-volume code with an HLL shock-capturing scheme. The background spacetime is frozen and fixed to the Kerr metric. The hydro equations are evolved in the modified spherical Kerr-Schild coordinates. The following radial and angular maps from Kerr-Schild (KS) coordinates to modified Kerr-Schild (MKS) coordinates are used, which increase the resolution close to the black hole and the equatorial plane, respectively, to resolve the thin disk accurately: \(r_{KS}=\exp(r_{MKS})\), \(\theta_{KS}=\pi\theta_{MKS}+\frac{(1-h)}{2}\sin(2\pi\theta_{MKS})\). The coordinate parameter \(h\) is set to \(0.3\) for all models in this paper. The gas pressure is calculated using a polytropic equation of state \(P=\kappa\rho^{\Gamma}\), with \(\kappa=0.1\), and \(\Gamma=4/3\). The initial density configuration is based on the Dihingia et al. (2021) prescription for thin disks [34]. The distribution of the density on the equator is defined as: \[\rho_{e}=\left(\frac{\Theta_{0}}{\kappa}\right)^{\frac{1}{\Gamma-1}}\left(\frac{f(x)}{x^{2}}\right)^{\frac{1}{\Gamma-1}}. \tag{1}\] The disk is truncated at the innermost stable circular orbit radius, \(r_{SCO}\), and the density on the entire grid is derived from: \[\rho(r,\theta)=\rho_{e}\,\exp\left(-\frac{\alpha_{disk}^{2}z^{2}}{\mathcal{H}^{2}}\right);\quad z=r\cos(\theta). \tag{2}\] The \(\Theta_{0}\) parameter is the dimensionless temperature set to \(\Theta_{0}=0.001\), the parameter \(\alpha_{disk}=2\) is chosen to keep the disk thin, and \(x=\sqrt{r}\). We refer the readers to eqs.(4-13) from [34] for definitions of \(f(x)\) and \(\mathcal{H}\). The initial poloidal magnetic field configuration is based on [35] with the nonzero azimuthal component of the magnetic vector potential defined as: \[A_{\phi}=r^{3/4}\frac{m^{5/4}}{(m^{2}+\cos^{2}\theta)^{5/8}}. \tag{3}\] Here \(m\) is a constant and defines the inclination angle of the initial magnetic field. For this study, we choose \(m=0.1\) (low inclination angle) and \(m=0.5\) (high inclination angle) for different cases as shown in Fig. 1. We normalize the initial field strength with a given value of the ratio of the maximum gas pressure to the maximum magnetic pressure, \(\beta=P_{gas}^{max}/P_{mag}^{max}\). The radial profile of \(P_{g}/P_{B}\) ratio on the equator at the initial time is shown in Fig. 2. We summarize the initial parameters of our simulations in Table 1. The grid resolution for all the cases is \(1056\times 528\), with the outer boundary at \(r=1000\,r_{g}\).
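To make the field initialisation above concrete, the following sketch evaluates the vector potential of Eq. (3) on an \((r,\theta)\) grid and rescales it so that the maximum gas-to-magnetic pressure ratio matches a target \(\beta\), as described in the text. It is only a schematic: the gas-pressure array is a placeholder (in HARM it comes from the polytropic thin-disk solution of Eqs. (1)-(2)), and the curl is taken with flat-space axisymmetric expressions rather than the GR \(\sqrt{-g}\) factors used in the code.

```python
import numpy as np

# Schematic initialisation of the poloidal field of Eq. (3) and the beta normalisation.
r = np.linspace(6.0, 1000.0, 512)            # radius in r_g
th = np.linspace(1e-3, np.pi - 1e-3, 256)    # polar angle
R, TH = np.meshgrid(r, th, indexing="ij")

m = 0.1                                      # field inclination parameter (low-inclination case)
A_phi = R**0.75 * m**1.25 / (m**2 + np.cos(TH)**2)**0.625

# flat-space axisymmetric curl: B_r = d(sin(th) A_phi)/dth / (r sin th), B_th = -d(r A_phi)/dr / r
B_r = np.gradient(np.sin(TH) * A_phi, th, axis=1) / (R * np.sin(TH))
B_th = -np.gradient(R * A_phi, r, axis=0) / R
P_mag = 0.5 * (B_r**2 + B_th**2)

# placeholder gas pressure standing in for the polytropic thin-disk solution
P_gas = np.exp(-((R - 100.0) / 200.0)**2) * np.exp(-(np.cos(TH) * R / 5.0)**2)

beta_target = 50.0                           # desired P_gas^max / P_mag^max
A_phi *= np.sqrt(P_gas.max() / (beta_target * P_mag.max()))
# (the field would then be recomputed from the rescaled potential before the run starts)
print("rescaled so that max(P_gas)/max(P_mag) =", beta_target)
```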
All cases are evolved for about \(t\sim 60000\) code units. ### Physical scales and physical units In HARM, we use geometric units: \(G=c=M=1\). In order to convert quantities to physical units we follow [36], where the spatial and time units are scaled with the mass of the primary BH as follows: \[\begin{split} L_{unit}&=\frac{GM}{c^{2}}=1.48\times 10^{5}\frac{M}{M_{\odot}}cm,\\ T_{unit}&=\frac{r_{g}}{c}=4.9\times 10^{-6}\frac{M}{M_{\odot}}s.\end{split} \tag{4}\] \begin{table} \begin{tabular}{l l l l l} case & \(\beta\) & m & BH spin & \(\beta_{max,\,eq.}\) \\ \hline \(\beta\)1-m0.5-a0.7 & 1 & 0.5 & 0.7 & 185244 \\ \(\beta\)10-m0.1-a0.7 & 10 & 0.1 & 0.7 & 31246 \\ \(\beta\)50-m0.1-a0.7 & 50 & 0.1 & 0.7 & 156000 \\ \(\beta\)50-m0.1-a0.94 & 50 & 0.1 & 0.94 & 39313 \\ \end{tabular} \end{table} Table 1: The initial setup parameters for all the simulations with grid resolution: \(1056\times 528\). The density scale is related to the length unit by \(\rho_{unit}=M_{scale}/L_{unit}^{3}\), and the mass scale is set to \(M_{scale}=1\times 10^{-5}M_{\odot}\) for the \(\beta\)50-m0.1-a0.94 case, and \(M_{scale}=2\times 10^{-6}M_{\odot}\) for the rest of the models. The density scales are chosen in order to create disks with surface density \(\Sigma\sim 10^{3}\) g cm\({}^{-2}\) around radius \(r\sim 100\ r_{g}\) as suggested by [21]. These scaling factors give the measured accretion rate \(\sim 0.01-1\ \dot{M}_{Edd}\) for our simulations (see Sec. III for a detailed discussion on the accretion rates). If we assume the primary black hole has the mass of \(M=10^{6}M_{\odot}\), the outer boundary is located at \(r=1000r_{g}\sim 1.5\times 10^{12}\)cm and the evolution time is about \(t\sim 60000\,M\sim 3.5\) days. According to these scales, we can claim that our model only covers the inner part of the AGN disks (extends to \(10^{14}-10^{16}\)cm; [37]), and the evolution time covers only a fraction of the LISA observational time for SMBH inspiral (\(\sim\) several years; [21]). However, this evolution time is insufficient to make the entire disk turbulent for the selected resolution. Therefore, we consider only \(r<200\ r_{g}\) for our torque measurements. The fluid completes more than 20 orbits at this radius based on the Keplerian orbital frequency. According to observations, the Seyfert 2 galaxy GSN 069 at a redshift of \(z=0.018\) with nine-hour X-ray quasi-periodic eruptions is a candidate for extreme mass ratio binary black holes. The primary BH is a low mass SMBH of a few times \(10^{5}M_{\odot}\) with a relatively high Eddington ratio of about 0.5. The variability observed in GSN 069 may be explained by the interaction between an existing accretion disk and an orbiting secondary body according to Miniutti et al. (2019) [38]. However, another possible explanation has been suggested based on self-gravitational-lensing in SMBHs by Ingram et al. (2021) [39] for low-mass AGNs such as GSN 069 and RX J1301.9+2747. Our moderately high Eddington ratio models can resemble this type of object, therefore, we scale our results for such low-mass primary SMBH (\(M_{p}\sim 10^{5}-10^{6}M_{\odot}\)) with an intermediate-mass companion BH. ## III Numerical results ### Disk evolution and MRI analysis We evolved our models for about \(t\approx 60000\) M. This time is long enough to let the MRI be active to form the turbulent structure at the inner part of the disk (\(r<200\ r_{g}\)), and short enough to avoid the magnetic field dissipation due to the anti-dynamo effect in a 2D simulation.
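As a quick numerical check of these scalings, the snippet below applies Eq. (4) with the rounded coefficients quoted above for a \(10^{6}M_{\odot}\) primary; it reproduces the \(\sim 3.5\)-day physical duration of the runs. The density scale printed at the end simply follows from \(\rho_{unit}=M_{scale}/L_{unit}^{3}\) and is shown for orientation only.

```python
import numpy as np

M_sun_g = 1.989e33            # solar mass [g]

def code_units(M_over_Msun):
    """Geometric-unit scalings of Eq. (4) for a primary BH of mass M."""
    L_unit = 1.48e5 * M_over_Msun    # r_g = GM/c^2 [cm]
    T_unit = 4.9e-6 * M_over_Msun    # r_g/c [s]
    return L_unit, T_unit

L_unit, T_unit = code_units(1e6)     # M_p = 1e6 M_sun
t_code = 60000.0                     # total evolution time in code units (M)
print(f"r_g      = {L_unit:.2e} cm")
print(f"t_final  = {t_code * T_unit / 86400:.1f} days")

# density scale for M_scale = 2e-6 M_sun (three of the four models)
rho_unit = 2e-6 * M_sun_g / L_unit**3
print(f"rho_unit = {rho_unit:.2e} g/cm^3")
```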
In Fig. 3 we show the final configuration of the density and magnetic field lines at the end of the simulations for different models. At the earlier evolution time, for cases \(\beta\)10-m0.1-a0.7, \(\beta\)50-m0.1-a0.7, and \(\beta\)50-m0.1-a0.94, the magnetic field is amplified exponentially due to the MRI and magnetic winding, which creates turbulent magnetized fluid with quite high accretion rates. As a result, the disk expands vertically, launches the outflows and becomes slightly thicker geometrically compared to the initial configuration. However, in these cases, the thin structure of the disk is preserved during evolution, creating a stream that flows radially toward the BH on the equator. This observation suggests that the MRI's channel solution is developed in our simulations (channel solutions represent a specific form of poloidal MRI, which is characterised by prominent radial extended features [40; 29; 41]). At later times, we observe that the disks are divided into two distinguishable parts (for these three cases). The inner region is denser, geometrically thicker and more magnetized, while the outer part is geometrically thinner, less dense and less magnetized. The formation of the inner 'mini-torus' has been observed before in magnetized thick disks in 2D simulations [42]. In order to investigate the MRI's effect and the formation of the inner torus, we compare the radial profiles of the MRI fastest growing mode wavelength, \(\lambda_{MRI}\), to the scale height of the disk in Fig. 4. Figure 1: The 2D profile of density and magnetic field lines at the initial time, for \(\beta\)50-m0.5-a0.94 (top) and \(\beta\)50-m0.1-a0.94 cases (bottom). Figure 2: The radial profile of \(\beta\) parameter on the equator at the initial time for all the cases. The instability is suppressed when \(\lambda_{MRI}\) exceeds the scale height [43; 44], which happens for \(r<20\ r_{\rm g}\) in the \(\beta\)50-m0.1-a0.94 case, where the mini-torus is formed. On the other hand, the visualized data show that the magnetic field lines loop inside the inner torus and form a magnetic barrier at its boundaries, and therefore, create a plasmoid structure. A detailed study on the formation of plasmoids due to magnetic reconnection and their observational effects should be done with high-resolution 3D simulations [45]. However, with current observations from our 2D moderate-resolution simulation we can explain the existence of the inner torus as a result of two physical processes intensifying each other: (i) the effective MRI at the outer radii makes the fluid lose its angular momentum and drag inwards, while the less effective MRI at the inner radii makes the hot magnetized fluid slow down and pile up over time, and (ii) at the same time the magnetic field is amplified and creates loops in the inner region, causing the plasma to be trapped and disconnected from the rest of the disk. Therefore, the inner torus is formed and becomes stable till the end of the simulations. For \(\beta\)10-m0.1-a0.7 and \(\beta\)50-m0.1-a0.7 cases, multiple loop structures are visible. Fig. 4 shows the comparison of the MRI fastest growing mode's wavelength \(\lambda_{MRI}\), with the disk scale height \(H\) at the final time. We observe that for all models, the MRI is suppressed locally at the smaller radii, where the \(\lambda_{MRI}>H\). The bottom panel of the same figure shows the surface density profile over the same radial region at the same time.
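The suppression criterion used above can be illustrated with a small sketch that compares the common fastest-growing-mode estimate \(\lambda_{MRI}\approx 2\pi v_{A}/\Omega_{K}\) with the local scale height \(H\approx c_{s}/\Omega_{K}\). The radial profiles below are toy placeholders (a strongly magnetized, low-\(\beta\) inner region and a weakly magnetized outer disk), not simulation output, and the paper's exact definition of \(\lambda_{MRI}\) may carry additional relativistic factors.

```python
import numpy as np

def mri_suppressed(r, c_s, plasma_beta, M=1.0):
    """Compare lambda_MRI ~ 2*pi*v_A/Omega_K with the scale height H ~ c_s/Omega_K;
    the MRI is taken as suppressed where lambda_MRI > H (the criterion used above)."""
    Omega_K = np.sqrt(M) / r**1.5
    v_A = c_s * np.sqrt(2.0 / plasma_beta)   # Alfven speed from the local gas beta
    lam = 2 * np.pi * v_A / Omega_K
    H = c_s / Omega_K
    return lam, H, lam > H

# toy midplane profiles: strongly magnetized (low beta) inner region, weak field outside
r = np.linspace(10.0, 200.0, 400)            # radius in r_g
c_s = 0.05 / np.sqrt(r)                      # thin disk with H/r ~ 0.05
beta = 10.0 * (r / 20.0)**3                  # beta rising outward

lam, H, sup = mri_suppressed(r, c_s, beta)
print(f"MRI suppressed for r < {r[sup].max():.0f} r_g" if sup.any() else "MRI resolved everywhere")
```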
The dense inner torus is distinguishable for \(\beta\)10-m0.1-a0.7, \(\beta\)50-m0.1-a0.7, and \(\beta\)50-m0.1-a0.94 models. Figure 3: The 2D profiles of density and field lines at the final time for different models. The dense and strongly magnetized inner torus is visible in cases \(\beta\)50-m0.1-a0.94 and \(\beta\)10-m0.1-a0.7 separated from the turbulent thin disk. The \(\beta\)50-m0.1-a0.7 has multiple plasmoid structures at the inner part. The model \(\beta\)1-m0.5-a0.7 turns into the MAD state at the early time of the evolution, with the magnetic field preserved in a vertical configuration for most parts of the disk. Completing our MRI analysis, we investigate the possibility of transition to a magnetically arrested disk (MAD), where the disk becomes magnetically dominated and the MRI is suppressed. The MAD status is considered a probable scenario for the disks at the center of galaxies. The 3D numerical simulations done by Liska et al. (2020) [46] suggested that for a long enough evolution the disk eventually turns into the MAD state, and in this state the final disk's characteristics are not sensitive to the exact initial conditions it started with. The recent observations by the Event Horizon Telescope confirm that the MAD state is more favourable for observed cases such as M87 [47]. To investigate this, we measure the ratio of the magnetic flux to the square root of the mass flux at the BH horizon, \(\Phi_{B}/\sqrt{\dot{M}}\), for different cases. Based on the literature, the MAD state happens when this ratio is high enough (\(\sim 15\)) [48]. Fig. 5 shows that the \(\beta\)50-m0.1-a0.94 case, for instance, becomes magnetically arrested for a part of the evolution. During this period of time, the MRI does not act as an effective process for the angular momentum transport, and the accretion rate drops significantly as illustrated in Fig. 6. At this point, we would like to highlight our \(\beta\)1-m0.5-a0.7 case, which started with a higher inclination angle for the magnetic field initial configuration. The changes are quite dramatic for this case, and it turns to the MAD state at an earlier time and in an episodic way (such episodic accretion rates are commonly observed in MADs [34; 49]). More specifically, at the inner part, the magnetic winding causes high magnetic pressure and creates a magnetic barrier which reduces the accretion rate to as low as \(0.01\times\dot{M}_{Edd}\), while the other cases keep accretion rates above \(0.1\times\dot{M}_{Edd}\) for a long period of evolution. At the further radii, in a tiny fraction of the disk, we have a turbulent structure with the \(\lambda_{MRI}\) standing below the scale height and yet high enough to be resolved with current resolution. However, the vertical configuration of the magnetic field is preserved for the most part of the disk (\(r>50\,r_{g}\)) till the end of the simulation, and makes the disk expand vertically and become less dense compared to the other cases. To investigate this further, we performed a test with a similar setup but a lower initial magnetic field, i.e. \(\beta=50\). (The result of this test is not presented in our figures). The result shows that even with a weaker initial magnetic field, the MRI cannot be triggered at larger radii and the GR MHD evolution keeps the magnetic field in vertical geometry in most parts of the disk. Therefore, the vertical configuration most likely results in either a weak MRI or a MAD state.
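A schematic version of this horizon diagnostic is sketched below: it integrates the radial magnetic flux and the rest-mass flux over a horizon slice and forms the ratio quoted above. The arrays are toy placeholders for the innermost radial zone of the HARM output, and normalisation conventions for this parameter differ between codes, so only the form of the diagnostic is meant to be illustrative.

```python
import numpy as np

def mad_parameter(rho, u_r, B_r, sqrtg, dtheta, dphi=2*np.pi):
    """Ratio Phi_B / sqrt(Mdot) from axisymmetric arrays sampled along theta at the
    horizon radius (schematic form only; code conventions and normalisations vary)."""
    Phi_B = 0.5 * np.sum(np.abs(B_r) * sqrtg) * dtheta * dphi    # horizon magnetic flux
    Mdot = -np.sum(rho * u_r * sqrtg) * dtheta * dphi            # rest-mass accretion rate
    return Phi_B / np.sqrt(abs(Mdot))

# toy horizon slices (placeholders for the output at the innermost radial zone)
theta = np.linspace(1e-3, np.pi - 1e-3, 256)
dtheta = theta[1] - theta[0]
rho = np.sin(theta)**2 + 0.01
u_r = -0.1 * np.ones_like(theta)     # inflow
B_r = 0.3 * np.cos(theta)            # split-monopole-like horizon field
sqrtg = 4.0 * np.sin(theta)          # schematic sqrt(-g) near the horizon

phi_mad = mad_parameter(rho, u_r, B_r, sqrtg, dtheta)
print(f"Phi_B / sqrt(Mdot) ~ {phi_mad:.1f}  (MAD threshold ~ 15)")
```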
Similar works in the literature confirm that a modest change in the initial field configuration may signify MAD structure [34; 43; 50]. Overall, the results presented in Figs. 4 & 5 determine that the MRI and its effects are not only suppressed locally at the inner part of the disk but are also suppressed globally during the evolution when the disk turns into the MAD state. However, we should emphasize that the accurate evolution of the disk under the MAD condition requires 3D simulations [44; 48; 51]. ### \(\alpha\) measurement The MRI analysis presented in Sec.III.1 shows that this instability does not remain active everywhere in the disk and for the entire time of the evolution. Therefore the viscous effects driven by MRI vary with radius and time as well. In this section, we compute the equivalent \(\alpha\) viscosity caused by the MRI in the turbulent fluid and compare it with the constant values used in the literature. Following the prescription given by McKinney et al. (2012) [43] for turbulent relativistic fluid, we calculate equivalent \(\alpha\) viscosity by considering the dominant Reynolds and Maxwell terms in the stress-energy tensor as: \[\begin{split}\alpha=\alpha_{R}+\alpha_{M},\\ \alpha_{R}\approx\frac{\rho_{0}\delta u_{r}\delta u_{\phi}\sqrt{ \delta^{\phi\phi}}}{P_{tot}},\\ \alpha_{M}\approx-\frac{b_{r}b_{\phi}\sqrt{\delta^{\phi\phi}}}{P _{tot}}.\end{split} \tag{5}\] In these equations \(P_{tot}=P_{g}+P_{B}\) is the total pressure, and \(b^{\mu}\) is the magnetic field 4-vector (see Eq.(8) from [32] for the definition). The 4-velocity fluctuations are defined by: \[\begin{split}\delta u_{r}(r,\theta,t)=u_{r}(r,\theta,t)-\overline {u_{r}(r,t)},\\ \delta u_{\phi}(r,\theta,t)=u_{\phi}(r,\theta,t)-\overline{u_{ \phi}(r,t)}.\end{split} \tag{6}\] Figure 4: The comparison of the MRI fastest growing mode’s wavelength with the disk scale height at the final time (top), and the surface density at the same time for all the models (bottom). The high-density plasmoid structures are formed at the inner radii, where the MRI is suppressed. The surface density is scaled for the central BH mass \(10^{6}M_{\odot}\). The average of quantity \(Q\) (i.e. velocity and viscosity components) is taken vertically and weighted by density as: \[\overline{Q(r,t)}=\frac{\int_{-H}^{H}\sqrt{-g}\rho Qdz}{\int_{-H}^{H}\sqrt{-g} \rho dz}, \tag{7}\] where \(H\) is the disk's scale height. The comparison between the Maxwell and Reynolds contributions to the total volume averaged \(\alpha\) over time is shown in Fig. 7 for case \(\beta\)10-m0.1-a0.7. The \(\alpha\) values are vertically averaged according to Eq.(7). Fig. 8 shows the radial profiles of \(\alpha_{R}\) at three time snapshots, \(t=4\times 10^{4},4.5\times 10^{4},5\times 10^{4}\). These figures show that \(\alpha_{M}\) has a bigger contribution to the averaged \(\alpha\) (more than 90%), while \(\alpha_{R}\) fluctuates significantly due to the turbulence. At some radii, \(\alpha_{R}\) may vary in the range of [-0.08,0.08]. The torque's fluctuation is discussed in Sec. V in detail. In Fig. 9 we show the volume averaged of \(\alpha\) versus time for all the models except \(\beta\)1-m0.5-a0.7, which turns into MAD state periodically at the early time of the evolution. The volume average is taken over the turbulent part of the disk (inner radius of the grid to \(r=200\,r_{g}\)). The comparison between the models for time averaged \(\alpha\) is illustrated in Fig. 10. 
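The stress-based viscosity of Eqs. (5)-(7) can be evaluated directly from snapshot data. The sketch below is a schematic NumPy version, assuming 2D \((r,\theta)\) arrays restricted to \(|z|<H\); the quantity written as metric_fac stands for the \(\sqrt{\delta^{\phi\phi}}\) normalisation in Eq. (5) and is assumed to be supplied along with \(\sqrt{-g}\), and the demo values at the end are random placeholders rather than simulation output.

```python
import numpy as np

def vertical_average(Q, rho, sqrtg, axis=1):
    """Density-weighted vertical average of Eq. (7) over the theta axis."""
    w = rho * sqrtg
    return np.sum(w * Q, axis=axis) / np.sum(w, axis=axis)

def effective_alpha(rho, u_r, u_phi, b_r, b_phi, P_tot, sqrtg, metric_fac):
    """Schematic evaluation of Eq. (5): Reynolds + Maxwell contributions to alpha.
    All inputs are (r, theta) arrays from one snapshot; metric_fac plays the role of
    the sqrt(delta^phi phi) factor in the paper's normalisation (assumed supplied)."""
    du_r   = u_r   - vertical_average(u_r,   rho, sqrtg)[:, None]   # Eq. (6)
    du_phi = u_phi - vertical_average(u_phi, rho, sqrtg)[:, None]
    alpha_R = rho * du_r * du_phi * metric_fac / P_tot
    alpha_M = -b_r * b_phi * metric_fac / P_tot
    return (vertical_average(alpha_R, rho, sqrtg),
            vertical_average(alpha_M, rho, sqrtg))

# tiny demo on random placeholder arrays (nr x ntheta)
rng = np.random.default_rng(0)
shape = (64, 32)
rho, sqrtg = rng.uniform(0.5, 1.0, shape), np.ones(shape)
aR, aM = effective_alpha(rho,
                         rng.normal(0.0, 0.01, shape),   # u_r
                         rng.normal(1.0, 0.05, shape),   # u_phi
                         rng.normal(0.0, 0.05, shape),   # b_r
                         rng.normal(0.0, 0.05, shape),   # b_phi
                         np.full(shape, 0.1),            # P_tot
                         sqrtg, np.ones(shape))
print("alpha_R + alpha_M at the first radii:", (aR + aM)[:3])
```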
The time average is taken over the second half of the evolution (\(30000<t<t_{final}\)) to make sure that the MRI is being triggered and making the disk turbulent up to \(r\sim 200r_{g}\). These results show that the average value of \(\alpha\) may vary by a factor of 2 for highly magnetized cases such as \(\beta\)10-m0.1-a0.7 and \(\beta\)50-m0.1-a0.94. It reaches 0.3 at the highest and finally converges to \(\alpha\approx 0.12-0.15\) for all the cases including \(\beta\)50-m0.1-a0.7 at the late evolution. The average \(\alpha\) drops (with a delay) in all these three cases after entering the MAD status. On the other hand, the radial profile in Fig. 10 demonstrates that \(\alpha\) changes significantly over radius, i.e. it becomes very small at the inner radii where the MRI is suppressed, and gradually increases at the larger radii, reaching \(\alpha\approx 0.2\) at \(r\sim 150r_{g}\) for \(\beta\)50-m0.1-a0.94 and \(\beta\)10-m0.1-a0.7 cases, while the less magnetized case \(\beta\)50-m0.1-a0.7 settles down to \(\alpha\approx 0.03\) for \(100\,r_{g}<r<200\,r_{g}\). Since the MRI and its effects are highly dependent on the grid resolution, to complete our numerical observations, we performed a higher resolution simulation for case \(\beta\)50-m0.1-a0.94, which resulted in a similar qualitative evolution. The highly-magnetized plasmoid structure has been created at the inner radii, distinguishable from the turbulent thin disk with the MRI channel solution at the further radii. However, the measured volume-averaged viscosity goes through less change over time and it settles down to \(\alpha\approx 0.12\) during the second half of the evolution. Figure 5: The ratio of the magnetic flux to the square root of the mass flux computed at the BH horizon for all the cases. The disks turn to the MAD state when this ratio goes above 15. Figure 6: The accretion rate for different cases compared with the Eddington accretion limit. The results are scaled for a central BH with the mass of \(10^{6}M_{\odot}\). Figure 7: The volume-averaged \(\alpha_{M}\) and \(\alpha_{R}\) versus time for case \(\beta\)10-m0.1-a0.7. The \(\alpha_{R}\) contribution to the total \(\alpha\) is less than 10%. Figure 8: The fluctuations of \(\alpha_{R}\) at time snapshots \(t=40000,45000,50000\) computed for \(\beta\)10-m0.1-a0.7. The data is zoomed-in for radii [80,200]. ## IV Analytical results: torque measurements In a realistic scenario, where the disk experiences radiative cooling and magnetically-driven heating, the isothermal and adiabatic components of the linear gas torque are derived from the analytical approach given by Tanaka et al. (2002) [52] and Lyra et al. (2010) [53]. In this approach, the linear torque is estimated for a very low mass companion with a mass ratio (\(q<10^{-4}\)) which is known as migration type I. For a higher mass ratio such as \(q\sim 10^{-3}\) (known as migration type II), the companion is massive enough to carve a low-density region (gap) inside the disk and therefore experiences smaller torque caused by the viscosity (Lin and Papaloizou (1986) [54]). These approaches are used to explain the migration in planetary disks. A similar one-dimensional model was adopted by Armitage and Natarajan (2002) [55] for supermassive black holes merging in accretion disks, which was further advanced by Shapiro (2013) [8].
In the latter, the evolution equation of the surface density was derived in the curved spacetime, considering the gas effect on the secondary BH (viscous torque) and the secondary BH's gravitational tidal effects on the disk (tidal torque). In this section, we apply the numerical results from our GR MHD simulations, reported in Sec. III, to estimate the viscous torque in the same 1D general relativistic approach given by [8]. Since the secondary BH is not included in our simulation, we can make different assumptions about its mass and different scenarios it may go through while inspiraling in the turbulent gaseous environment. The results in this section are scaled for a primary BH mass of \(10^{6}M_{\odot}\) and mass ratio of \(10^{-3}\). In Sec. IV.1 we present the torque measurements from the time-averaged values, and in Sec. IV.2 we discuss the torque fluctuations and their effects on the orbital evolution. ### Viscous torque: 1D GR-hybrid thin disk model The Newtonian one-dimensional thin disk prescription was used by Garg et al. [23] and Derdzinski et al. [21] to compute the viscous torque for migration type II as: \[T_{r,Newt}=-3\pi r^{2}\Omega_{2}\nu\Sigma, \tag{8}\] where \(\Omega_{2}\approx\sqrt{M_{p}/r^{3}}\) is the orbital frequency of the secondary BH assuming a circular orbit for the extreme mass ratio case, \(M_{p}\) is the mass of the primary BH, \(\Sigma=\int_{-H}^{H}\rho\,dz\) is the disk surface density and \(\nu=\alpha c_{s}H\) is the kinematic viscosity. For a relativistic formalism, we follow the simple 1D GR-hybrid model given by Shapiro (2013) [8]; in this approach the rest-mass accretion rate due to the viscous torque is \[\dot{M}_{GR}=2\pi\left[\frac{\mathcal{G}}{\mathcal{Q}}3r^{1/2}\frac{\partial}{\partial r}\left(r^{1/2}\nu\Sigma_{GR}\frac{\mathcal{D}^{2}}{\mathcal{C}}\right)\right], \tag{9}\] where \(\mathcal{G}\), \(\mathcal{Q}\), \(\mathcal{C}\), \(\mathcal{D}\) and the relativistic surface density \(\Sigma_{GR}\) are defined as: \[\begin{split}&\mathcal{G}=\mathcal{B}\mathcal{C}^{-1/2},\\ &L^{+}=M_{p}\mathcal{C}(1-2ax^{-3}+a^{2}x^{-4}),\\ &\mathcal{Q}=2x^{1/2}\partial L^{+}/\partial r,\\ &\mathcal{B}=1+ax^{-3},\\ &\mathcal{C}=1-3x^{-2}+2ax^{-3},\\ &\mathcal{D}=1-2x^{-2}+a^{2}x^{-4},\\ &\Sigma_{GR}=\int_{-H}^{H}\rho u^{t}\sqrt{-g}\,dz,\end{split} \tag{10}\] for the Kerr metric in the Boyer-Lindquist coordinates, where \(M_{p}\) and \(a\) are the mass and spin of the primary BH and \(x=\sqrt{r}\). Figure 9: The volume-averaged \(\alpha\) weighted by density versus time for \(\beta\)10-m0.1-a0.7, \(\beta\)50-m0.1-a0.7 and \(\beta\)50-m0.1-a0.94 cases computed inside the turbulent disk \(r<200\) and \(-H<z<H\). Figure 10: The time averaged radial profile of total \(\alpha\) for \(\beta\)10-m0.1-a0.7, \(\beta\)50-m0.1-a0.7 and \(\beta\)50-m0.1-a0.94 cases. The time average is taken from the second half of the evolution. The relativistic viscous torque will be computed as: \[T_{\nu,GR}=-\dot{M}_{GR}r^{2}\Omega_{2} \tag{11}\] We recover results identical to Eq. (8) in the weak-field approximation, where \(\mathcal{G}\), \(\mathcal{Q}\), \(\mathcal{C}\), and \(\mathcal{D}\) approach unity. Our results show that the Newtonian approach overestimates the viscous torque, especially at the inner regions; for instance, the relativistic viscous torque is \(\sim\)30% lower at \(r\sim 100r_{g}\). On the other hand, the binary is inspiraling due to losing energy by emitting gravitational waves.
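As a small illustration of the relativistic correction, the sketch below evaluates the factors \(\mathcal{B}\), \(\mathcal{C}\), \(\mathcal{D}\) and \(\mathcal{G}\) of Eq. (10) for a spin of \(a=0.94\); they approach unity at large radii, which is the weak-field limit in which Eq. (9) reduces to the Newtonian expression. (The full \(\sim 30\%\) difference quoted above also involves \(\Sigma_{GR}\) and \(\mathcal{Q}\), which are not reproduced here.)

```python
import numpy as np

def nt_factors(r, a, M=1.0):
    """Relativistic correction factors of Eq. (10), with x = sqrt(r/M) in G = c = M = 1 units."""
    x = np.sqrt(r / M)
    B = 1 + a * x**-3
    C = 1 - 3 * x**-2 + 2 * a * x**-3
    D = 1 - 2 * x**-2 + a**2 * x**-4
    G = B / np.sqrt(C)
    return B, C, D, G

for r in (10.0, 100.0, 1000.0):
    B, C, D, G = nt_factors(r, a=0.94)
    print(f"r = {r:6.0f} r_g :  B = {B:.3f}  C = {C:.3f}  D = {D:.3f}  G = {G:.3f}")
```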
So, one can define the effective GW torque to be compared with the viscous torque: \[T_{GW}=\frac{1}{2}qM_{p}r\dot{r}_{GW}\Omega_{2}, \tag{12}\] where \(q\) is the mass ratio and \(\dot{r}_{GW}\) is derived from the quadrupole approximation for the evolution of the orbital separation as follows [56]: \[\dot{r}_{GW}=-\frac{64}{5}\frac{(GM)^{3}}{c^{5}}\frac{1}{1+q^{-1}}\frac{1}{1+q}\frac{1}{r^{3}}. \tag{13}\] Fig. 11 shows the ratio \(T_{\nu,GR}/T_{GW}\) computed with \(q=0.001\) for different models in our simulations. The values for \(\Sigma\), \(\alpha\) and \(c_{s}\) used in Eq.(9) are taken from our simulations' time-averaged numerical measurements. This result shows that the average viscous torque may reach up to a few per cent of the gravitational torque and produce a measurable phase shift in the GW signal (for the selected mass ratio and \(M_{p}\)). In comparison with results from an analytical (and Newtonian) study done by Garg et al. (2022) [23], which considered a constant value \(\alpha=0.01\), we conclude that our numerical results provide higher values for \(\alpha\). Therefore, the secondary BH experiences a larger viscous torque at binary separations around \(r\sim 100r_{g}\). Obviously, \(T_{\nu}\) is scaled by other quantities such as the surface density, sound speed and scale height. Hence, the GW phase shift can still be used to probe the density and temperature of AGN disks for \(\alpha\sim 0.1\) at this binary separation. Generally, since the MRI's features are highly dependent on the grid resolution and accurate evolution of the magnetic field in 3D, we postpone the discussion about probing the magnetic field's strength and configuration to future studies. However, with our current 2D models, for the \(\beta\)50-m0.1-a0.7 and \(\beta\)1-m0.5-a0.7 cases, we can claim that there are particular magnetic field strengths and configurations which result in less effective MRI turbulence and, hence, smaller viscous torques. ### Torque fluctuations What we estimated as the torque in the previous section is the time-averaged value of the viscous torque, which we can equivalently call the linear torque. However, in a realistic scenario, we do not have a one-dimensional laminar gas flow. Instead, we have to deal with a nonlinear turbulent fluid which continuously interacts with the secondary orbiting BH. These nonlinear hydrodynamic interactions lead to rapid changes in density, velocity and magnetic fields, which enhance or suppress the torque value over time. The deviation from the linear torque can affect the orbital evolution of the binary and introduce an additional phase shift in GWs. Zwick et al. (2022) [57] presented a detailed study on the stochastic torque or time-variable torque estimations and their measurable effects in the GW signals. As they suggested, there are two important sources for these fluctuations: one is the disk-driven fluctuations experienced by the low-mass orbiting object from the turbulent fluid, and the second is the perturber-driven fluctuations occurring due to asymmetries in the gas flow near a sufficiently massive secondary BH. The perturber-driven fluctuations may depend on many physical processes including the stochastic accretion and tidal effects of the secondary BH, as well as gas friction and small-scale gas dynamics. Therefore, performing numerical simulations where the low-mass perturber is included is absolutely essential for these studies [21].
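Before turning to the fluctuations, the size of the time-averaged comparison above can be checked with a back-of-the-envelope sketch of Eqs. (8), (12) and (13). All disk inputs below are assumptions chosen for illustration (\(\alpha=0.1\) from the time-averaged measurements, \(H/r=0.05\), and the \(\Sigma\sim 10^{3}\ \mathrm{g\,cm^{-2}}\) value quoted earlier near \(r\sim 100\,r_{g}\)), not values read off the simulations; with them the Newtonian viscous-to-GW torque ratio comes out at a fraction of a per cent, broadly consistent with the per-cent-level values in Fig. 11.

```python
import numpy as np

# Order-of-magnitude sketch of Eqs. (8), (12), (13) in G = c = M_p = 1 units.
M_p_g, r_g_cm = 1.989e39, 1.48e11             # 1e6 Msun primary and its gravitational radius
q, r = 1.0e-3, 100.0                          # mass ratio and separation [r_g]

Omega2 = r**-1.5                              # Keplerian orbital frequency of the secondary
H = 0.05 * r                                  # assumed thin-disk scale height
c_s = H * Omega2                              # sound speed consistent with H
nu = 0.1 * c_s * H                            # assumed alpha = 0.1

Sigma_cgs = 1.0e3                             # g/cm^2, the value quoted near r ~ 100 r_g
Sigma = Sigma_cgs / (M_p_g / r_g_cm**2)       # convert to geometric units

T_visc = -3 * np.pi * r**2 * Omega2 * nu * Sigma            # Eq. (8)
rdot_gw = -(64.0 / 5.0) / ((1 + 1/q) * (1 + q) * r**3)      # Eq. (13)
T_gw = 0.5 * q * r * rdot_gw * Omega2                       # Eq. (12), M_p = 1

print(f"T_visc / T_GW ~ {T_visc / T_gw:.1e}")               # per-cent level, cf. Fig. 11
```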
On the other hand, for fluctuation studies, one needs to measure the torque directly from a relatively long simulation in the presence of the perturber and output it frequently for the Fourier analysis. Newtonian torque measurements and their Fourier analysis were presented by Nelson (2005) [58] for turbulent, magnetized protoplanetary disks. They found that the torque fluctuations contain high-amplitude, low-frequency components, and that the stochastic migration dominates over type I migration for their models evolved for 100-150 planet orbits. For a relativistic approach, one can follow the direct torque computations from Farris et al. (2011) [7], where the torque density is derived as: \[\frac{dT}{dr}=\int\sqrt{-g}T_{\nu}^{\mu}\nabla_{\mu}\phi^{\nu}Rdzd\phi, \tag{14}\] in 3D cylindrical coordinates. Here, \(\phi^{\mu}\equiv(\partial_{\phi})^{\mu}\) is the Killing vector. See Appendix B from [7] for the details of the derivation of Eq.(14). Figure 11: The ratio of the viscous torque to the effective GW torque for all models scaled for a fixed primary BH mass \(M_{p}=10^{6}M_{\odot}\) and mass ratio of \(q=0.001\). In our study, the torque fluctuations do not appear in the average values we presented in Sec. IV.1, and overall we observed that the average viscous torque is negative (by its definition) and therefore facilitates the shrinkage of the binary orbit. On the other hand, the direct computation of the torque is not possible in our simulation, since this calculation depends on the stress-energy components and the metric derivatives with respect to \(\phi\) (rotation around the \(z\) axis), while we perform our simulation in an axisymmetric setup where the perturbations of the secondary BH on the metric and stress-energy tensor are neglected. However, it is still interesting to investigate the fluctuations of the viscous torque and have a qualitative discussion on this topic. Taking a close look at the case \(\beta\)50-m0.1-a0.94, for instance, shows that the viscous torque may deviate from the time-averaged torque by up to a factor of five. Fig. 12 shows the ratio of the viscous torque over the time-averaged torque versus time for two selected radii (\(r=50r_{g}\) and \(r=75r_{g}\)). As we observe, this ratio can change dramatically over time and it may even have a different sign during the evolution. Measuring a positive viscous torque may seem incorrect. However, we should remind the readers that when \(\alpha\) is computed from the Reynolds and Maxwell contributions, it may take negative values by definition in a turbulent fluid (see \(\alpha_{R}\) in Fig. 8 for instance). We postpone the direct and accurate computations of the torque and its fluctuations to future studies where the orbiting perturber is included in our simulations. ## V Discussion: Phase shift due to gas and other important physical scenarios Using \(\alpha\) and the linear torque computed in Sec. III.2 and IV, we can estimate the orbital evolution and GW dephasing due to the gas environment. Here we follow the analysis given by Zwick et al. (2022) [57], derived based on [52; 59], so the flux of angular momentum induced by the viscous torque is estimated from the local \(\alpha\), density and sound speed as: \[\dot{L}_{T}=-\alpha\frac{6\pi r^{7/2}c_{s}(r)^{3}\rho(r)}{\sqrt{GM}}, \tag{15}\] and we can derive the variation in the binary separation from \[\dot{r}=\dot{r}_{GW}+2\frac{\dot{L}_{T}}{Mq}\sqrt{\frac{r}{GM}}\equiv\dot{r}_{GW}+\dot{r}_{T}.
\tag{16}\] Finally, the phase shift is approximated by \[\delta\phi=\phi_{vac}-\phi\approx 2\pi\int f_{GW}(r)\frac{\dot{r}_{gas}}{\dot{r}_{GW}^{2}}dr. \tag{17}\] Approximating the GW frequency for a binary with mass ratio \(0.001\) and \(M_{p}=10^{6}M_{\odot}\) from [21], we predict the dephasing would be roughly around \(\sim 10\) radians for about \(10^{5}\) inspiral orbits. In comparison with the analytical work by Garg et al. (2022) [23] and the numerical study by Derdzinski et al. (2019) [21], which used a constant \(\alpha\sim 0.01\), our computed viscous torque is slightly larger because our directly-measured \(\alpha\) is larger by more than one order of magnitude. Generally, for LISA, there are sources with a few up to a few hundred SNR and the phase of the GW signal can be reconstructed within the accuracy of 1/SNR [60]. Therefore, the predicted phase shift must be detectable by LISA for a few years of observational time [20; 21; 23]. Figure 12: The ratio of the viscous torque to the time-averaged viscous torque versus time at radii \(r=50r_{g}\) and \(r=75r_{g}\) for the case \(\beta\)50-m0.1-a0.94. The fluctuations in the measured torque affect the orbital evolution of the binary system. As our final discussion, we should emphasise that all the results presented in this work are order-of-magnitude estimations for torque measurements and their effects on the GW detections. This study is limited in many ways, most importantly: 1- The MRI-driven turbulence and its dynamical and thermal effects are required to be studied in long, high-resolution 3D simulations [61]. Our 2D setup does not allow us to perform a long evolution due to the anti-dynamo theorem [62] and/or study the MAD state accretion [44; 46; 48]. A consistent way of computing the torque in the general relativistic formalism is also not possible in an axisymmetric simulation [7]. 2- The AGN disks are optically thin and can radiate efficiently [63]. Therefore, radiative cooling plays a crucial role in energy deposition and maintaining the thin structure of the accretion disk, which is missing in our study. In a realistic scenario, radiative cooling competes with viscous heating to reach thermal equilibrium. However, based on the discussion given by Narayan et al. (1998) [64] for a low-luminous, radiatively inefficient disk, the gas might fail to radiate energy at a rate that balances viscous heating. In this case, the heat generated by viscosity will be advected inwards with the flow instead of being radiated away. As a result, the disk becomes hot and geometrically thick, and the inner regions of accretion disks are replaced by ADAFs. In our simulations, the hot dense inner region has been adopted in the initial condition via temperature normalisation. During the evolution, this region was separated from the main thin-disk structure and sustained as a result of MRI suppression and the high magnetic field. However, it would be interesting to study this inner torus evolution under the influence of radiative cooling. So far, several GR MHD groups have included radiative transfer processes to study the emission spectrum from the accretion disks in their simulations [65; 66; 67; 68; 69; 70; 71; 72; 73]. 3- The low-mass orbiting object is not included in our simulation, therefore the direct computation of the torque is not possible. On the other hand, this means that all the perturber-driven effects are neglected in our simulations.
We assumed a mass ratio of \(q=0.001\), so the secondary BH is massive enough to open a gap region inside the disk and therefore the gas torque is estimated as the viscous torque (migration type II). However, this assumption may be far from realistic, since numerical simulations have shown that once the gap is carved the gas can still flow into the gap from the disk and contributes to the net torque on the secondary BH and even expands the orbit [22]. In addition, based on the discussion given by Zwick et al. (2022) [57] the perturber-driven torque fluctuations can be highly affected by the disk parameters, especially the Mach number, and become noticeably important in GW detections. A further complication may occur for a retrograde perturber, as well as in case of a highly eccentric orbit [74; 75; 76; 77]. Moreover, the presence of the low-mass secondary black hole provides an opportunity to study Lindblad or orbital resonance torques in numerical simulations. Armitage and Natarajan (2002) [55] have observed the formation of tightly wound spiral waves that mediate angular momentum exchange between the secondary and the disk in their thin disk model. Lindblad resonance is considered an important mechanism for angular momentum transport and heating disks in binary black hole systems (see Hirata (2011) [78] for a relativistic treatment of the Lindblad torques in curved spacetime in the extreme mass ratio limit). At this point, it is worth mentioning that besides gas-induced torques such as the MRI-driven viscosity and Lindblad resonance, the tidally distorted primary BH's horizon can gravitationally couple to the orbiting low-mass secondary, transferring energy and angular momentum from the black hole to the orbit [79] and cause an additional phase shift in the GW signal. Evolving a supermassive binary black hole in magnetized fluid for hundreds of orbits (or longer) requires expensive GR MHD simulations in dynamic spacetime. However, for future studies, it is possible to simplify the problem in the case of extreme mass ratios by evolving only MHD equations in a fixed spacetime and adding the secondary BH as a perturber. Such approximations are taken by Combi et al. (2021)[80] and Sukova et al. (2021)[81] for adjusting metric and hydrodynamic equations respectively. The secondary's accretion can also be added as an additional sink term in the source part of the hydro equations (see eq.(6) from [21] as an example). ## VI Summary and Conclusions In this study, we evolved several magnetized thin disk models in 2D to quantify the viscous \(\alpha\) parameter in turbulent fluid developed by magnetorotational instability. We used the results of these simulations to estimate the viscous torque experienced by the hypothetical low-mass secondary BH inspiraling the primary black hole inside an AGN disk. In the end, we estimated the phase shift in the GW signal caused by the disk's environmental effect based on the torque magnitude. We observed the disks with well-resolved MRI have an average \(\alpha\) viscosity that varies around \(0.1-0.3\) during the evolution. The MRI is suppressed at the inner part of the disk, close to the primary BH, so the value of the \(\alpha\) viscosity is negligible in this region. However, time-averaged \(\alpha\) reaches \(\approx 0.1-0.2\) at larger radii where the fluid is turbulent and the MRI fastest growing mode is resolvable. 
Altering the initial conditions, we found that the initial magnetic field with a lower inclination angle plays an important role in triggering and sustaining MRI, while the field configuration with a higher inclination angle turns the disk into the MAD state with episodic high magnetic flux at the horizon. The initially weakly magnetized case makes the MRI saturated at a later time and overall has smaller \(\alpha\), and therefore, smaller viscous torque compared to the other cases. Moreover, we found that the BH spin does not change the results significantly. We applied the numerical results from the GR MHD simulations to estimate the viscous torque using the GR-Hybrid approach for the general relativistic one-dimensional thin disk. We found that the time-averaged viscous torque can be as large as \(\sim 1\%\) of the GW torque for a mass ratio of \(q=10^{-3}\) at radii around \(r\sim 100~{}r_{\rm g}\), where \(\alpha\) is maximal. This extra torque from the environment appears as faster shrinkage of the binary's orbit and phase shift in the GW signal. We also observed the Newtonian-calculated torque deviates from the relativist one up to 30% higher at radii around \(r\sim 100~{}r_{\rm g}\). Monitoring the viscous torque at different radii shows that the fluctuations in the torque values may change dramatically, and even sometimes it changes the torque's sign or deviates from the average value by a factor of five. The study of torque fluctuations is essential for the binary's orbital evolution and should be included in future numerical studies where the secondary BH is included. This study was one step toward a realistic estimation of the disk's environmental effects on possible future GW detections. We study the disk's relativistic viscous effects generated by the magnetically driven instability in the turbulent fluid. However, for complete and accurate studies, it is important to include the low-mass secondary BH in future 3D GRMHD simulations. ###### Acknowledgements. The authors thank Andrea Derdzinski, Bozena Czerny, Hector Olivares and Scott Noble for helpful discussions and advice throughout this project. This work was supported by grant No. 2019/35/B/ST9/04000 from the Polish National Science Center, Poland. We acknowledge the support from the PL-Grid infrastructure under the computational grant 'plglisa'. We also acknowledge support from the Interdisciplinary Center for Mathematical Modeling of the Warsaw University.
2307.16797
Shepherding control and herdability in complex multiagent systems
We study the shepherding control problem where a group of "herders" need to orchestrate their collective behaviour in order to steer the dynamics of a group of "target" agents towards a desired goal. We relax the strong assumptions of targets showing cohesive collective behavior in the absence of the herders, and herders owning global sensing capabilities. We find scaling laws linking the number of targets and minimum herders needed, and we unveil the existence of a critical threshold of the density of the targets, below which the number of herders needed for success significantly increases. We explain the existence of such a threshold in terms of the percolation of a suitably defined herdability graph and support our numerical evidence by deriving and analysing a PDE describing the herders dynamics in a simplified one-dimensional setting. Extensive numerical experiments validate our methodology.
Andrea Lama, Mario di Bernardo
2023-07-31T16:04:53Z
http://arxiv.org/abs/2307.16797v2
# Shepherding and herdability in complex multiagent systems ###### Abstract We study the shepherding problem where a group of "herders" need to drive the dynamics of a group of "target" agents. In particular, we relax two strong assumptions made in the existing literature, namely that the targets' own dynamics shows cohesive behaviour (e.g. flocking) in the absence of herders and that the herders possess global sensing capability. We find, for the first time, scaling laws linking the number of targets to be herd to the minimum number of herders that can solve the problem. Surprisingly we observe that, when limited sensing is present, the number of herders needed to successfully achieve the herding task significantly increases as the targets' density becomes lower than a critical threshold. We explain the existence of such a threshold in terms of the percolation of a suitably defined herdability graph. Exhaustive numerical experiments validate and confirm the effectiveness of our methodology. In many physical situations, it can become advantageous for a complex multiagent system to induce the emergence of some desired collective behaviour in another group of agents which would otherwise behave differently. A paradigmatic example is known in the literature as the _shepherding_ problem [1]. It requires a group of agents, the herders, to cooperate and self-organize so as to drive a second group of agents, the targets, towards some desired goal region in the plane. Classical examples from Nature and Technology include shepherd dogs driving a flock of sheep towards a desired location [2], a group of predators coordinating to corral and isolate a group of preys (e.g. dolphins hunting fish [3; 4]), and multi-robot systems containing the spread of pollutants in the environment [5] or driving other agents to a safe enclosure [1; 6; 7; 8; 9]. Shepherding has been extensively studied in the literature and many solutions exist, mostly in the case of one herder driving one or more targets [10; 8; 11]; the case of multiple herders having been studied more seldom, e.g., [6; 12; 7]. Remarkably, when the herders are outnumbered by the targets, most of the existing solutions explicitly rely on the simplifying assumption of targets exhibiting some cohesive collective behaviour of their own, e.g. sheep flocking together in the absence of sheep dogs [2; 6; 10; 13]. Then, the herders can exploit the targets' collective behaviour to effectively solve the problem, without the need to actively keep the herd cohesive [6; 10; 11]. Relaxing this assumption makes the problem much more cumbersome to solve theoretically, as recently noted in [10], and is also unrealistic in many applications such as environmental cleanup via multi-robot systems [5] or the confinement of microbial populations [14] where target agents (pollutant particles or bacteria) do not necessarily fulfill this hypothesis. There is also a second strong and, most importantly, unrealistic assumption that most of the current solutions often adopt: that the herders possess unlimited sensing capabilities, i.e. that they all know the positions of all other herders and all targets in the region of interest [12]. Moreover, as noted independently in the recent literature, e.g. [11; 12], most existing approaches adopt centralized or distributed strategies that do not exploit a crucial feature of complex systems. 
Namely, the fact that, as in natural systems, shepherding control solutions should not be engineered into the model _a priori_ but should emerge out of the herders following simpler local engagement rules with the targets and between themselves giving rise to collective behavior apt to solve the shepherding task. A striking example is that described in [15], where a phenomenological model is used to describe the emergent collective behaviour that two or more humans show when asked to solve the shepherding problem in a virtual reality setting (e.g. starting oscillating around the targets to contain them). In this Letter, contrary to the existing literature, we remove at the same time both of the assumptions described above and investigate if and how multiple cooperating herders, driven solely by local information, can globally solve the shepherding problem in the presence of limited sensing and the absence of any collective behaviour of the targets. In so doing, taking shepherding as a paradigmatic task, we discuss the crucial problem of investigating, if and how the collective behaviour of a complex multiagent system can be engineered in order to solve a distributed control task. We consider the shepherding problem in \(\mathbb{R}^{2}\) (see Fig. 1a), where \(N\) herders have to corral to a goal region \(\Omega_{G}\), and to contain therein, \(M\) targets. Without loss of generality, we assume the herders are initially randomly distributed in an annular region \(\Omega_{0H}\) of radius \(r_{0H}\), and the targets in a circular region \(\Omega_{0T}\supset\Omega_{G}\) of radius \(r_{0T}\) with \(r_{0T}<r_{0H}\). \(\Omega_{G}\) is a circle of radius \(r^{*}\). For the sake of simplicity, all regions are centered at the origin. Let \(\mathbf{H}\in\mathbb{R}^{2N}\) be the vector of the herders' positions \(\mathbf{H}=[\mathbf{H}_{1},\,\mathbf{H}_{2},\,...,\,\mathbf{H}_{N}]\) with \(\mathbf{H}_{i}\in\mathbb{R}^{2}\) being the cartesian coordinates of the \(i\)-th herder, \(i=1,...,N\), and \(\mathbf{T}\in\mathbb{R}^{2M}\) the vector of the targets' positions \(\mathbf{T}=[\mathbf{T}_{1},\,\mathbf{T}_{2},\,...,\,\mathbf{T}_{M}]\), with \(\mathbf{T}_{a}\in\mathbb{R}^{2}\) being the cartesian coordinates of target \(a\), \(a=1,...,M\). When polar coordinates are needed, we will use \((\rho_{i},\theta_{i})\) for the herders and \((r_{a},\varphi_{a})\) for the targets. To ensure that the targets do not show any kind of flocking or collective behaviour, we use the following overdamped Langevin equation to model the dynamics of a generic target \[\dot{\mathbf{T}}_{a}=\sigma\mathbf{N}_{a}(t)+c\mathbf{I}_{a}^{TH}(\mathbf{H}, \mathbf{T}_{a},\lambda) \tag{1}\] with \(\mathbf{N}_{a}(t)\) being a two-dimensional white Gaussian noise term with unitary variance and with \(\sigma>0\), and \(\mathbf{I}_{a}^{TH}(\mathbf{H},\mathbf{T}_{a},\lambda)\) a term describing the force exerted by herders on target \(a\), that we assume to be a repulsion vanishing after a typical distance \(\lambda\) (see Fig. 1b), of strength regulated by \(c>0\), that we set as \[\mathbf{I}_{a}^{TH}(\mathbf{H},\mathbf{T}_{a},\lambda)=\sum_{i=1}^{N}i_{T}(| \mathbf{T}_{a}-\mathbf{H}_{i}|)\frac{\mathbf{T}_{a}-\mathbf{H}_{i}}{|\mathbf{ T}_{a}-\mathbf{H}_{i}|} \tag{2}\] with the function \(i_{T}\) chosen as \(i_{T}(x)=\frac{1}{2}[1-\tanh(\gamma(x-\lambda)/\lambda)]\) with \(\gamma>1\). According to the above dynamics, in the absence of nearby herders (\(I^{TH}\to 0\)), targets behave as independent random walkers. 
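To make the target model concrete, here is a minimal numerical sketch of Eqs. (1)-(2): a single Euler-Maruyama step of the overdamped Langevin dynamics with the tanh repulsion kernel \(i_{T}\). Parameter values and the two-herder geometry are illustrative only, not those used in the experiments reported below.

```python
import numpy as np

def repulsion(T, H, lam, gamma=5.0):
    """I^TH of Eq. (2): sum over herders of i_T(|T - H_i|) times the unit vector
    pointing away from each herder (gamma > 1 sharpens the decay past lambda)."""
    d = T - H                                   # (N, 2) displacements from each herder
    dist = np.linalg.norm(d, axis=1)
    i_T = 0.5 * (1 - np.tanh(gamma * (dist - lam) / lam))
    return np.sum(i_T[:, None] * d / dist[:, None], axis=0)

def step_target(T, H, dt, sigma=0.1, c=1.0, lam=1.0, rng=np.random.default_rng()):
    """One Euler-Maruyama step of the overdamped Langevin dynamics of Eq. (1)."""
    drift = c * repulsion(T, H, lam)
    return T + drift * dt + sigma * np.sqrt(dt) * rng.standard_normal(2)

# toy setup: one target pushed away from two nearby herders (values are illustrative)
T = np.array([0.0, 0.0])
H = np.array([[-0.5, 0.0], [0.0, -0.5]])
for _ in range(100):
    T = step_target(T, H, dt=0.01)
print("target drifted to", T)
```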
Then, as no flocking or other collective behaviour can be exploited, the herders need to collectively deal with each of the targets separately in order to tame their dynamics towards the goal region. Without loss of generality, we model the dynamics of the herders as follows \[\begin{cases}\dot{\rho}_{i}=-\alpha I_{i,\rho}^{HH}(\mathbf{H},r^{*})-\beta I_{i}^{HT}(\mathbf{H},\mathbf{T},\xi)+\Theta(\dot{\mathbf{H}}_{i},v_{H})\\ \dot{\theta}_{i}=-\alpha I_{i,\theta}^{HH}(\mathbf{H})-\beta I_{i}^{HT}(\mathbf{H},\mathbf{T},\xi)+\Theta(\dot{\mathbf{H}}_{i},v_{H})\end{cases} \tag{3}\] where \(I_{i}^{HH}(\mathbf{H},r^{*})\) describes their dynamics in the absence of nearby targets, \(\Theta(\dot{\mathbf{H}}_{i},v_{H})\) is some term limiting the herders' maximum speed to the upper bound \(v_{H}\), and the term \(I_{i}^{HT}(\mathbf{H},\mathbf{T},\xi)\) describes the herders' reaction to the presence of targets in their sensing region of size \(\xi\); \(\alpha\) and \(\beta\) being two positive parameters. When there are no targets nearby, we assume the herders simply tend to converge towards \(\Omega_{G}\), spreading themselves uniformly on the boundary \(\partial\Omega_{G}\) of the goal region via one-hop interactions, moving according to (3) with \(I^{HT}=0\), and \(I^{HH}\) set as \[\begin{cases}I_{i,\rho}^{HH}(\mathbf{H},r^{*})=\sum_{j=i\pm 1}(\rho_{i}-\rho_{j})+(\rho_{i}-r^{*})\\ I_{i,\theta}^{HH}(\mathbf{H})=\sum_{j=i\pm 1}(\theta_{i}-\theta_{j})\end{cases} \tag{4}\] This guarantees that, when all the targets have been collected inside \(\Omega_{G}\), the herders tend to self-distribute themselves on \(\partial\Omega_{G}\). This, together with a suitable choice of \(r^{*}\) (see Supplementary Information for further details), ensures that the herders _cage_ the targets therein; their repulsive forces \(I^{TH}\) creating a closed containment field, as is often observed in Nature [11; 16; 17] and used in Robotics [18]. When targets are present, the term \(I^{HT}\) is instead chosen so that the herders can exploit the repulsion they exert on the targets to drive them towards \(\Omega_{G}\). In particular, see Fig. 1b, we assume that, at every time step, each generic herder \(\mathbf{H}_{i}\) selects one target to control, say \(\widehat{\mathbf{T}}_{i}\) with polar coordinates \((\hat{r}_{i},\hat{\varphi}_{i})\), within its sensing region following a set of simple _local_ rules. Specifically, as done in [12], and also observed in experiments on human coordination and decision-making reported in [16], herder \(i\) only scouts for targets in the angular sector \(\Delta\theta_{i}=[(\theta_{i}+\theta_{i-1})/2,(\theta_{i+1}+\theta_{i})/2]\) within the maximum sensing distance \(\xi\), i.e. such that \(|\mathbf{T}_{a}-\mathbf{H}_{i}|\leq\xi\). Targets inside \(\Omega_{G}\) (i.e. such that \(r_{a}<r^{*}\)) are not considered. If no target satisfies these conditions, then \(I^{HT}=0\); otherwise, the target to chase \(\widehat{\mathbf{T}}_{i}\) is selected as the one with the largest distance from \(\Omega_{G}\). As observed in natural shepherding scenarios, see for example [15], the herder will then try to place itself behind \(\widehat{\mathbf{T}}_{i}\) to push it towards \(\Omega_{G}\) (see Fig. 1b). 
This is achieved by setting \[\begin{cases}I_{i,\rho}^{HT}(\mathbf{H},\mathbf{T},\xi)=\rho_{i}-(\hat{r}_{i}+\lambda)\\ I_{i,\theta}^{HT}(\mathbf{H},\mathbf{T},\xi)=\theta_{i}-\hat{\varphi}_{i}\end{cases} \tag{5}\] in equation (3), where, w.l.o.g., we assume the herder aims at positioning itself at a distance \(\lambda\) behind the target. Figure 1: (a) Spatial configuration of the shepherding problem. \(N\) herders (blue diamonds), initially randomly distributed in \(\Omega_{0H}\) (green annulus), have to corral to \(\Omega_{G}\) (blue circle) and contain therein \(M\) targets (magenta dots), which are initially randomly distributed in \(\Omega_{0T}\) (yellow circle). (b) Description of target-herder interactions. Solid black arrows depict (average) directions of motion. The area shaded in magenta represents the sensing area of radius \(\xi\) of each herder, while the dashed lines depict the boundaries of the sector \(\Delta\theta_{i}\) of herder \(i\). Targets feel the repulsion of the herders if the latter are within the shaded blue area of radius \(\lambda\). Finally, we assume that (i) \(\beta\gg\alpha\), so that the reaction of the herders to the targets is dominant over their own dynamics; (ii) \(\xi>\lambda\), i.e. the sensing radius of the herders is larger than the radius of the reaction zone of the targets; and (iii) that the maximum velocity of the herders \(v_{H}\) is greater than the average escaping velocity of the targets, given by \(c\) in Eq. (1), i.e. \(v_{H}>c\). Also (see Supplementary Information) the radius of the goal region is chosen so that \(r^{*}\propto N\), implying that for large enough \(N\) the problem is always trivially solved as \(r^{*}\gg r_{0T}\) (here we study the nontrivial case where \(r^{*}<r_{0T}\)). Next, we study the _herdability_ of a group of \(M\) targets by a group of \(N\) herders [19]. In particular, we say that \(M\) target agents are _herdable_ by \(N\) herders if the latter are able to corral a fraction, say \(\chi>\chi^{*}\), of the former towards \(\Omega_{G}\) in finite time. For \(\chi^{*}\), we select typical values in control theory, e.g. \(\chi^{*}\in\{0.9,\,0.95,\,0.99\}\). Given the dynamics of the agents, we will then look for the _minimal_ number \(N^{*}(M)\) of herders rendering \(M\) targets herdable. For the sake of comparison, we start with the case where herders possess infinite sensing capabilities, i.e. by setting \(\xi=\infty\). As depicted in Fig. 2a, we find that, over a wide range of target group sizes, \(N^{*}(M)\) scales as the square root of the number of targets, \(N^{*}(M)\sim\sqrt{M}\) (equivalently, the number of herdable targets grows quadratically with the number of herders). In the finite sensing case, see Fig. 2b, the scaling \(N^{*}(M)\sim\sqrt{M}\) appears instead only for \(M\) greater than some critical threshold \(M^{\rm low}\). Below such a threshold, the number of herders \(N\) needed to successfully complete the task significantly increases. Counterintuitively, this means that lowering the number of targets to be herded does not necessarily make the herding task easier to complete when herders only possess finite sensing. Note that, in general, the minimum number of herders \(N^{*}(M)\) required to shepherd \(M\) targets must be such that (i) the herders are able to collectively sense all the targets in due time, despite them being independent random walkers, and (ii) the diffusion flow of the \(M\) targets is balanced by the transport flow induced by the \(N^{*}\) herders. 
From a simple dimensional argument, as the \(M\) targets are distributed in a _two_-dimensional circular domain while the \(N\) herders tend to arrange themselves on its _one_-dimensional boundary, condition (ii) is satisfied for \(N^{*}(M)\sim\sqrt{M}\) (as observed in Fig. 2a), while condition (i) is trivially satisfied when the herders possess infinite sensing (\(\xi=\infty\)). On the contrary, in the finite sensing case (\(\xi<\infty\)) the herders, following their local target selection rules, need to move from target to target in a timely manner in order to collectively sense all the targets and fulfill condition (i). As their density decreases, e.g. \(M<M^{\rm low}\), the targets become sparser and sparser, preventing the herders from exploring the region of interest and scouting for all of them in due time using solely their local information. Then, the only way to fulfill condition (i) is to deploy a number of herders \(N^{*}\) which is large enough to allow the targets furthest from \(\Omega_{G}\) to be observed at all times, losing the scaling \(N^{*}(M)\sim\sqrt{M}\) observed in the infinite sensing case. When \(M>M^{\rm low}\) instead, the increase in the density of the target agents triggers the ability of the herders to navigate and explore the domain of interest according to their local rules and information, and to collectively sense and corral the targets to the goal region, even if not all of them are being sensed by an herder at every time step, recovering the scaling law of the infinite sensing case. Figure 2: (a-b) Values of the fraction \(\chi\) of successfully herded targets obtained for different values of \(M\) and \(N\) when \(r_{0T}=60\). Results are averaged over 30 simulations; the increments of \(N\) and \(\sqrt{M}\) have values \(\Delta N=1\), \(\Delta\sqrt{M}=1\). (a) When \(\xi=\infty\) we need \(N^{*}\propto\sqrt{M}\) herders to successfully shepherd \(M\) targets for any value of \(\chi^{*}>0.9\). (b) If \(\xi<\infty\), we recover the scaling \(N^{*}\propto\sqrt{M}\) of the infinite sensing case only above a critical threshold \(M>M^{\rm low}(\chi^{*})\). Level curves for \(\chi^{*}=0.9\), \(0.95\) and \(0.99\) are given in white and labelled accordingly. The dashed vertical line is the theoretical estimate \(\widehat{M^{\rm low}}\) obtained by studying the percolation of the herdability graph \(\mathcal{G}\). (c) Scaling of the critical threshold \(M^{\rm low}\) as a function of the herders’ sensing radius \(\xi\). The numerically observed values of \(M^{\rm low}\) for two different system sizes (scatter dots), evaluated by direct inspection, are compared with the theoretical estimate \(\widehat{M^{\rm low}}\) (dashed line) for different values of \(\xi\) and \(r_{0T}\). Error bars represent the maximum precision of the computation given the stepsize \(\Delta\sqrt{M}=1\) used in the simulations. Simulations were carried out at the given values of \((\xi/r_{0T})\) to fulfill the hypothesis that \(\xi\gg\lambda\), mitigate considerable finite size effects, and give at least three points for each of the cases \(r_{0T}=60\) and \(r_{0T}=120\). Results for \(\chi^{*}\in\{0.90,\,0.95\}\) are reported in the Supplementary Information, confirming the observed scaling. 
To explain the existence of \(M^{\rm low}\) we need to unfold the conditions under which, by moving from target to target according to their local information, the herders can fulfill condition (i) and eventually sense and corral all the targets, including those with the largest distance from \(\Omega_{G}\). To this aim, we introduce the _herdability graph_ (see Fig. 3a) as the directed geometric graph \(\widehat{\mathcal{G}}\) whose nodes are the targets; an edge \(\widehat{\mathcal{G}}_{ab}\) exists between two nodes \(a\) and \(b\) if an herder, say herder \(i\), that is chasing target \(a\) (e.g. \((\rho_{i},\theta_{i})=(r_{a}+\lambda,\varphi_{a})\)), can also sense target \(b\), i.e. if \(\mathbf{T}_{b}\) is such that \(|\mathbf{H}_{i}-\mathbf{T}_{b}|\leq\xi\). Hence, if the herder is chasing target \(a\), target \(b\) can be eventually chased if it becomes the furthest from \(\Omega_{G}\). To simplify the analysis, we approximate \(\widehat{\mathcal{G}}\) with the undirected herdability graph \(\mathcal{G}\), whose nodes are the targets, and \(\mathcal{G}_{ab}=\mathcal{G}_{ba}=1\) if \(|\mathbf{T}_{a}-\mathbf{T}_{b}|\leq\xi\), with \(\hat{\mathcal{G}}\to\mathcal{G}\) when \(\xi/\lambda\to\infty\) (see Fig. 3 for a schematic interpretation of this approximation). Under the above approximation, if there exists a path in \(\mathcal{G}\) from \(a\) to a generic node \(z\) then, theoretically, an herder that is initially chasing \(a\) can eventually also sense and start chasing \(z\) in due time. By varying \(M\) and \(\xi\), we can obtain an estimate \(\widehat{M^{\rm low}}\) of the critical threshold \(M^{\rm low}\) by computing the percolation threshold of \(\mathcal{G}\) at \(t=0\), when targets are randomly and uniformly distributed in a circle of radius \(r_{0T}\), studying the size of the largest connected component of \(\mathcal{G}\) as a function of the number of targets and the herders' sensing radius \(\xi\) (see Supplementary Material). A comparison between the estimated (\(\widehat{M^{\rm low}}\)) and actual critical threshold (\(M^{\rm low}\)) is depicted in Fig. 2b. Since the percolation threshold of a geometric graph in 2d scales as \(1/\xi^{2}\) [20], we also expect from our computation that \(M^{\rm low}\sim 1/\xi^{2}\). This is confirmed by our numerical observations as reported in Fig. 2c, where we compare the estimate obtained from the percolation threshold \(\widehat{M^{\rm low}}\) of \(\mathcal{G}\) with the numerically detected values of \(M^{\rm low}\) for different values of \(\xi\) and \(r_{0T}\) when \(\chi^{*}=0.99\). Indeed, our theoretical argument is effective in capturing the observed behaviour despite the finite size effects (\(N\) is \(\mathcal{O}(10^{1})\), \(M\) is \(\mathcal{O}(10^{2})\)), the use of an approximated undirected herdability graph (\(\xi\gg\lambda\)), and the fact that the percolation threshold is estimated at \(t=0\), with the graph neglecting the presence of multiple herders. Further details and an additional validation of the method for different values of \(r_{0T}\) and \(\chi^{*}\) are reported in the Supplementary Information. Also, we can estimate the minimum number of herders \(N^{*}\) needed to sense \(M\) targets as the smallest number of herders that need to be arranged uniformly on a circle of radius \(r_{0T}\) so as to completely cover it with their sensing regions of radius \(\xi\) and hence ensure that no targets can escape through their gaps without being sensed. 
Such a value can be computed as \(N^{*}\sim\pi r_{0T}/\xi\), which, for the values of the parameters used in Fig. 2b, gives \(N^{*}\sim 19\). In summary, we wish to emphasize that the relevance of our findings is twofold. Firstly, we investigated the herdability of a multiagent system of non-interacting targets, removing the strong hypotheses made in the literature on shepherding, namely that the targets exhibit flocking (or other types of collective behaviour) and that the herders possess infinite sensing. In so doing, we uncovered, for the first time in the literature, the existence of a critical density threshold on the targets that renders them "herdable" by a group of herders. Secondly, we explained such a threshold in terms of the percolation of a suitable herdability graph and captured how the results scale with respect to the sensing radius \(\xi\) of the herders. Note that, contrary to the classical paradigm studied in the literature on controlling complex networks, where control is attained by influencing nodes or edges of a given network (e.g. [21]), the shepherding problem is a paradigmatic example where the collective dynamics of a group of agents (e.g. flocking sheep or stochastically moving bacteria) must be controlled by driving the emerging behaviour of a complex system (the herders) interacting with it. Our results can be beneficial to inform the analysis and improve the design of complex shepherding systems, particularly in the case of non-interacting targets, in biological [14], environmental [5] and robotics applications [1]. They can also be useful in the study of animal and insect groups where one species hunts or gathers the individuals of another. Future directions include the study of the possible benefits of _active_ space exploration by the herders, which in our model only _passively_ chase observed targets, and of different individual and collective dynamics for the targets, where herders could (i) deploy optimal strategies exploiting the collective behaviour of the targets and (ii) perform predictions when the targets show sufficiently ordered collective dynamics. All the numerical simulations were carried out in MATLAB using a forward Euler scheme for the herders, and an Euler-Maruyama scheme for the targets, with time step \(\Delta t=0.03\) and total duration \(t=3000\), where the value of \(t\) was chosen such that by doubling it the slope of \(N^{*}(M)\) in the \(\xi=\infty\) case would not change. The sizes of \(\Omega_{0T}\), \(\Omega_{0H}\), and \(\Omega_{G}\) are \(r_{0T}=60\) or \(120\), \(r_{0H}=r_{0T}+60\), and \(r^{*}=(3N\lambda)/(4\pi)\). The parameters of the targets' dynamics were chosen as \(\sigma=1\), \(c=8\), \(\lambda=2.5\), \(\gamma=10\). The parameters of the herders' dynamics were chosen as \(v_{H}=10\), \(\alpha=1\), \(\beta=10^{4}\).
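For illustration, the herders' local target-selection rule and the resulting chase term of Eq. (5) used in these simulations can be sketched as follows (a simplified Python rendering by us of the rules described in the text, not the MATLAB code used for the simulations; the function name is ours and angular wrap-around is handled only crudely):

```python
import numpy as np

def herder_chase_term(i, H_polar, T_polar, r_star, xi, lam):
    """Select the target chased by herder i and return (I_rho^HT, I_theta^HT) of Eq. (5).

    H_polar: (N, 2) herder coordinates (rho, theta); T_polar: (M, 2) target coordinates (r, phi)."""
    N = len(H_polar)
    rho_i, theta_i = H_polar[i]
    # Angular sector assigned to herder i, delimited by its neighbours i-1 and i+1
    th_prev, th_next = H_polar[(i - 1) % N, 1], H_polar[(i + 1) % N, 1]
    lo, hi = 0.5 * (theta_i + th_prev), 0.5 * (th_next + theta_i)

    best, best_r = None, -np.inf
    for r_a, phi_a in T_polar:
        if r_a < r_star:                  # target already inside the goal region: ignore
            continue
        if not (lo <= phi_a <= hi):       # outside the herder's angular sector (no wrap-around handling)
            continue
        # Cartesian herder-target distance must be within the sensing radius xi
        d = np.hypot(rho_i * np.cos(theta_i) - r_a * np.cos(phi_a),
                     rho_i * np.sin(theta_i) - r_a * np.sin(phi_a))
        if d <= xi and r_a > best_r:      # chase the sensed target farthest from the goal
            best, best_r = (r_a, phi_a), r_a
    if best is None:
        return 0.0, 0.0                   # no admissible target: I^HT = 0
    r_hat, phi_hat = best
    # Aim at a point a distance lam behind the selected target, Eq. (5)
    return rho_i - (r_hat + lam), theta_i - phi_hat
```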
2310.03033
Benchmarking Local Robustness of High-Accuracy Binary Neural Networks for Enhanced Traffic Sign Recognition
Traffic signs play a critical role in road safety and traffic management for autonomous driving systems. Accurate traffic sign classification is essential but challenging due to real-world complexities like adversarial examples and occlusions. To address these issues, binary neural networks offer promise in constructing classifiers suitable for resource-constrained devices. In our previous work, we proposed high-accuracy BNN models for traffic sign recognition, focusing on compact size for limited computation and energy resources. To evaluate their local robustness, this paper introduces a set of benchmark problems featuring layers that challenge state-of-the-art verification tools. These layers include binarized convolutions, max pooling, batch normalization, fully connected. The difficulty of the verification problem is given by the high number of network parameters (905k - 1.7 M), of the input dimension (2.7k-12k), and of the number of regions (43) as well by the fact that the neural networks are not sparse. The proposed BNN models and local robustness properties can be checked at https://github.com/ChristopherBrix/vnncomp2023_benchmarks/tree/main/benchmarks/traffic_signs_recognition. The results of the 4th International Verification of Neural Networks Competition (VNN-COMP'23) revealed the fact that 4, out of 7, solvers can handle many of our benchmarks randomly selected (minimum is 6, maximum is 36, out of 45). Surprisingly, tools output also wrong results or missing counterexample (ranging from 1 to 4). Currently, our focus lies in exploring the possibility of achieving a greater count of solved instances by extending the allotted time (previously set at 8 minutes). Furthermore, we are intrigued by the reasons behind the erroneous outcomes provided by the tools for certain benchmarks.
Andreea Postovan, Mădălina Eraşcu
2023-09-25T01:17:14Z
http://arxiv.org/abs/2310.03033v1
# Benchmarking Local Robustness of High-Accuracy Binary Neural Networks for Enhanced Traffic Sign Recognition ###### Abstract Traffic signs play a critical role in road safety and traffic management for autonomous driving systems. Accurate traffic sign classification is essential but challenging due to real-world complexities like adversarial examples and occlusions. To address these issues, binary neural networks offer promise in constructing classifiers suitable for resource-constrained devices. In our previous work, we proposed high-accuracy BNN models for traffic sign recognition, focusing on compact size for limited computation and energy resources. To evaluate their local robustness, this paper introduces a set of benchmark problems featuring layers that challenge state-of-the-art verification tools. These layers include binarized convolutions, max pooling, batch normalization, and fully connected layers. The difficulty of the verification problem is given by the high number of network parameters (905k - 1.7 M), of the input dimension (2.7k-12k), and of the number of regions (43), as well as by the fact that the neural networks are not sparse. The proposed BNN models and local robustness properties can be checked at [https://github.com/ChristopherBrix/vnncomp2023_benchmarks/tree/main/benchmarks/traffic_signs_recognition](https://github.com/ChristopherBrix/vnncomp2023_benchmarks/tree/main/benchmarks/traffic_signs_recognition). The results of the 4th International Verification of Neural Networks Competition (VNN-COMP'23) revealed that 4 out of 7 solvers can handle many of our randomly selected benchmarks (minimum is 6, maximum is 36, out of 45). Surprisingly, the tools also output wrong results or missing counterexamples (ranging from 1 to 4). Currently, our focus lies in exploring the possibility of achieving a greater count of solved instances by extending the allotted time (previously set at 8 minutes). Furthermore, we are intrigued by the reasons behind the erroneous outcomes provided by the tools for certain benchmarks. ## 1 Introduction Traffic signs play a crucial role in ensuring road safety and managing traffic flow, both in urban and highway driving. For autonomous driving systems, the accurate recognition and classification of traffic signs, known as _traffic sign classification (recognition)_, are essential components. This process involves two main tasks: firstly, isolating the traffic sign within a bounding box, and secondly, classifying the sign into a specific traffic category. The focus of this work lies on the latter task. Creating a robust traffic sign classifier is challenging due to the complexity of real-world traffic scenes. Common issues faced by classifiers include a lack of _robustness_ against _adversarial examples_ [20] and occlusions [22]. _Adversarial examples_ are inputs that cause classifiers to produce erroneous outputs, and _occlusions_ occur naturally due to various factors like weather conditions, lighting, and aging, which make traffic scenes unique and diverse. To address the lack of robustness, one approach is to formally verify that the trained classifier can handle both adversarial and occluded examples. Binary neural networks (BNNs) have shown promise in constructing traffic sign classifiers, even in devices with limited computational resources and energy constraints, often encountered in autonomous driving systems. 
BNNs are neural networks (NNs) with binarized weights and/or activations constrained to \(\pm 1\), reducing model size and simplifying image recognition tasks. The long-term goal of this work is to provide formal guarantees of specific properties, like robustness, that hold for a trained classifier. This objective leads to the formulation of the _verification problem_: given a trained model and a property to be verified, does the model satisfy that property? The verification problem is translated into a constraint satisfaction problem, and existing verification tools can be employed to solve it. However, due to its NP-complete nature [15], this problem is experimentally challenging for state-of-the-art tools. In our previous work [17], we proposed high-accuracy BNN models explicitly for traffic sign recognition, with a thorough exploration of accuracy, model size, and parameter variations for the produced architectures. The focus was on BNNs with high accuracy and compact model size, making them suitable for devices with limited computation and energy resources, while also reducing the number of parameters to facilitate the verification task. The German Traffic Sign Recognition Benchmark (GTSRB) [6] was used for training, and testing involved similar images from GTSRB, as well as Belgian [2] and Chinese [5] datasets. This paper builds upon the models with the best accuracy from the previous study [17] and presents a set of benchmark problems to verify local robustness properties of these models. The novelty of the proposed benchmarks lies in the fact that traffic sign recognition is done using binarized neural networks; to the best of our knowledge, this has not been done before [9, 19]. Compared to existing benchmarks, the types of layers used determine a complex verification problem; they include _binarized convolution layers_ to capture advanced features from the image dataset, _max pooling layers_ for model size reduction while retaining relevant features, _batch normalization layers_ for scaling, and _fully connected (dense) layers_. The difficulty of the verification problem is given by the high number of network parameters (905k - 1.7 M), of the input dimension (2.7k-12k), and of the number of regions (43), as well as by the fact that the neural networks are not sparse. Discussions with organizers and competitors in the Verification of Neural Network Competition (VNN-COMP)1 revealed that no tool competing in 2022 could handle the proposed benchmark. Additionally, in VNN-COMP 2023 [4], the benchmark was considered fairly complex by the main developer of the winning solver \(\alpha,\beta\)-CROWN2. Footnote 1: [https://github.com/stanleybak/vnncomp2023/issues/2](https://github.com/stanleybak/vnncomp2023/issues/2) Footnote 2: [https://github.com/Verified-Intelligence/alpha-beta-CROWN](https://github.com/Verified-Intelligence/alpha-beta-CROWN) We publicly released our benchmark in May 2023. In the VNN-COMP 2023, which took place in July 2023, our benchmark was used in scoring, being nominated by at least 2 competing tools. 4 out of 7 tools were able to find an answer for the randomly generated instances. Most instances were solved by \(\alpha,\beta\)-CROWN (39 out of 45), but it received penalties for 3 results due to either an incorrect answer or a missing counterexample. Most correct answers were given by Marabou3 (19), with only 1 incorrect answer. 
Footnote 3: [https://github.com/NeuralNetworkVerification/Marabou](https://github.com/NeuralNetworkVerification/Marabou) Currently, we are investigating the reasons why the tools were not able to solve all instances and why incorrect answers were given. Additionally, more tests will be performed on randomly generated instances, and we will examine the particularities of the input images and of the trained networks which cannot be handled by solvers due to timeouts or incorrect answers. The rest of the paper is organized as follows. In Section 2 we present related work, focusing on comparing the proposed benchmark with others competing in VNN-COMP. Section 3 briefly describes deep neural networks and binarized neural networks and formulates the robustness property. In Section 4 we describe the anatomy of the trained neural networks whose local robustness is checked. In Section 5 we introduce the verification problem and its canonical representation (VNN-LIB and ONNX formats). Section 6 presents the methodology for benchmark generation and the results of the VNN-COMP 2023. ## 2 Related Work There exist many approaches for the verification of neural networks (see [21] for a survey); however, few tackle the verification of binarized neural networks. Verifying properties using Boolean encoding [16] is an alternative approach to validate characteristics of a specific category of neural networks, known as binarized neural networks, which possess binary weights and activations. The proposed technique involves reducing the verification problem from a mixed integer linear programming problem to a Boolean satisfiability problem. By encoding the problem in Boolean logic, they exploit the capabilities of modern SAT solvers, combined with a counterexample-guided search method, to verify various properties of these networks. A primary focus of their research is assessing the networks' resilience against adversarial perturbations. The experimental outcomes demonstrate the scalability of this approach when applied to medium-sized deep neural networks employed in image classification tasks. However, their neural networks do not have convolution layers and can handle only a simple dataset like MNIST, where images are black and white and there are just 10 classes to classify. Also, no tool implementing the approach was released to be tested. The paper [7] focuses on the verification of binarized neural networks; it extends the Marabou [15] tool to support _Sign Constraints_ and verifies a network that uses both binarized and non-binarized layers. For testing, they used the Fashion-MNIST dataset, trained using the XNOR-Net architecture, and obtained an accuracy of only 70.97%. This extension could not be used in our case due to the fact that we have binarized convolution layers, which the tool cannot handle. In the 2022 edition of the verification of neural networks competition (VNN-COMP), there were various benchmarks subject to verification [3]; however, none involved traffic signs. To the best of our knowledge, there is only one paper which deals with a traffic signs dataset [12], namely GTSRB. However, they considered only subsets of the dataset, and their trained models consist of only fully connected (FC) layers with ReLU activation functions, not convolutions, ranging from 70 to 1300 neurons. Furthermore, they do not mention the accuracy of their trained models, so we cannot compare it with ours. Moreover, the benchmarks from VNN-COMP 2022 [10] used for image classification tasks are listed in Table 1. 
As one can observe, no benchmark uses binarized convolutions or batch normalization layers. Discussions with competition organizers revealed that no tool from the 2022 competition could handle our benchmark4. Footnote 4: See [https://github.com/stanleybak/vnncomp2023/issues/2](https://github.com/stanleybak/vnncomp2023/issues/2), intervention from user stanleybak on May 17, 2023 The report of this year's neural network verification competition (VNN-COMP 2023) is still in draft form, but we present here the differences between our benchmark and the others. \begin{table} \begin{tabular}{c c c c c} **Category** & **Benchmark** & **Network Types** & **\#Neurons** & **Input Dimension** \\ \hline \multirow{5}{*}{CNN \& ResNet} & Cifar Bias Field & Conv. + ReLU & 45k & 16 \\ & Large ResNets & ResNet (Conv. + ReLU) & 55k - 286k & 3.1k - 12k \\ & Oval21 & Conv. + ReLU & 3.1k - 6.2k & 3.1k \\ & SRI ResNet A/B & ResNet (Conv. + ReLU) & 11k & 3.1k \\ & VGGNet16 & Conv. + ReLU + MaxPool & 13.6M & 1 - 95k \\ \hline Fully-Connected & MNIST FC & FC. + ReLU & 512 - 1.5k & 784 \\ \end{tabular} \end{table} Table 1: Benchmarks proposed in the VNN-COMP 2022 for image classification tasks Table 2, taken from the draft report, presents all the scored benchmarks, i.e. benchmarks which were nominated by at least 2 competing tools and are used in their ranking. The column Network Type presents the types of layers of the trained neural network, the column # of Params represents the number of parameters of the trained neural network, the column Input Dimension represents the dimension of the input (for example, for an image with dimension 30x30 pixels and an RGB channel the dimension is 30x30x3, which means that the verification problem contains 30x30x3 variables), the Sparsity column represents the degree of sparsity of the trained neural network and, finally, the column # of Regions represents the number of regions determined by the verification problem (for example, for our German Traffic Sign Recognition Benchmark there are 43 traffic sign classes). Our proposed benchmark, Traffic Signs Recognition, is more complex than the others, as it cumulatively involves a high number of parameters, a high input dimension, a high number of regions, and no sparsity. ## 3 Theoretical Background ### Deep Neural Networks _Neural networks_, inspired by the human brain, are computational models composed of interconnected nodes called artificial neurons. These networks have gained attention for their ability to learn and perform complex tasks. The nodes compute outputs using _activation functions_, and synaptic _weights_ determine the strength of connections between nodes. Training is achieved through optimization algorithms, such as _backpropagation_, which adjust the weights iteratively to minimize the network's error. A _deep neural network (DNN)_ [7] can be conceptualized as a directed graph, where the nodes, also known as neurons, are organized in _layers_. The input layer is responsible for receiving initial values, such as pixel intensities in the case of image inputs, while the output layer generates the final predictions or results. Hidden layers, positioned between the input and output layers, play a crucial role in extracting and transforming information. During the evaluation or inference process, the input values propagate through the network, layer by layer, using connections between neurons. 
Each neuron applies a specific mathematical operation to the inputs it receives, followed by the _activation function_ that introduces _nonlinearity_ to the network. The activation function determines the neuron's output based on the weighted sum of its inputs and an optional bias term. Different layer types are employed in neural networks to compute the values of neurons based on the preceding layer's neuron values. Those relevant for our work are introduced in Section 3.2. \begin{table} \begin{tabular}{c c c c c c} **Name** & **Network Type** & **\# of Params** & \begin{tabular}{c} **Input** \\ **Dimension** \\ \end{tabular} & **Sparsity** & **\# of Regions** \\ \hline nn4sys & Conv, FC, Residual + ReLU, Sigmoid & 33k - 37M & 1-308 & 0-66\% & 1 - 11k \\ \hline VGGNet16 & Conv + ReLU + MaxPool & 138M & 150k & 0-99\% & 1 \\ \hline Collins Rul CNN & Conv + ReLU, Dropout & 60k - 262k & 400-800 & 50-99\% & 2 \\ \hline TLL Verify Bench & FC + ReLU & 17k - 67M & 2 & 0\% & 1 \\ \hline Acas XU & FC + ReLU & 13k & 5 & 0-20\% & 1-4 \\ \hline \multirow{2}{*}{cGAN} & FC, Conv, Conv/Transpose, & \multirow{2}{*}{500k-68M} & \multirow{2}{*}{5} & \multirow{2}{*}{0-40\%} & \multirow{2}{*}{2} \\ & Residual + ReLU, BatchNorm, AvgPool & & & & \\ \hline Dist Shift & FC + ReLU, Sigmoid & 342k-855k & 792 & 98.9\% & 1 \\ \hline ml4acopf & FC, Residual + ReLU, Sigmoid & 4k-680k & 22-402 & 0-7\% & 1-600 \\ \hline Traffic Signs Recognition & Conv+Sign+MaxPool+BatchNorm, FC & 905k-1.7M & 2.7k-12k & 0\% & 43 \\ \hline ViT & Conv, FC, Residual + ReLU, Softmax, BatchNorm & 68k-76k & 3072 & 0\% & 9 \\ \hline \end{tabular} \end{table} Table 2: Benchmarks proposed in the VNN-COMP 2023 ### Binarized Neural Networks A BNN [12] is a feedforward network where weights and activations are mainly binary. [15] describes BNNs as a sequential composition of blocks, each block consisting of linear and non-linear transformations. One could distinguish between _internal_ and _output blocks_. There are typically several _internal blocks_. The layers of the blocks are chosen in such a way that the resulting architecture fulfills requirements on, for example, accuracy, model size, and number of parameters. Typical layers in an internal block are: _1)_ linear transformation (LIN) _2)_ binarization (BIN) _3)_ max pooling (MP) _4)_ batch normalization (BN). A linear transformation of the input vector can be based on a fully connected layer or a convolutional layer. In our case it is a convolutional layer, since our experiments have shown that a fully connected layer cannot synthesize the features of traffic signs well and therefore yields low accuracy. The linear transformation is followed either by a binarization or a max pooling operation. Max pooling helps in reducing the number of parameters. One can swap binarization with max pooling; the result would be the same. We use this sequence as Larq [9], the library we used in our experiments, implements convolution and binarization in the same function. Finally, scaling is performed with a batch normalization operation [13]. There is _one output block_ which produces the predictions for a given image. It consists of a dense layer that maps its input to a vector of integers, one for each output label class. It is followed by a function which outputs the index of the largest entry in this vector as the predicted label. We make the observation that, if the MP and BN layers are omitted, then the input and output of the internal blocks are binary, in which case the input to the output block is binary as well. 
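To make the block structure concrete, such a composition of blocks could be written with Larq roughly as follows (an illustrative sketch assembled by us from the layer types listed above; the filter counts, kernel sizes and input size are placeholders and do not reproduce the exact architectures of Section 4):

```python
import tensorflow as tf
import larq as lq

# Binarized weights via the straight-through sign estimator, as provided by Larq [9].
kwargs = dict(kernel_quantizer="ste_sign", kernel_constraint="weight_clip", use_bias=False)

model = tf.keras.Sequential([
    # First internal block: only the convolution weights are binarized here;
    # the real-valued input image itself is not quantized (see the remark below).
    lq.layers.QuantConv2D(16, (3, 3), input_shape=(48, 48, 3), **kwargs),  # LIN (+ BIN of weights)
    tf.keras.layers.MaxPooling2D((2, 2)),                                   # MP: reduces the number of neurons
    tf.keras.layers.BatchNormalization(scale=False),                        # BN: scaling

    # Second internal block: activations coming in are binarized as well.
    lq.layers.QuantConv2D(32, (2, 2), input_quantizer="ste_sign", **kwargs),
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.BatchNormalization(scale=False),

    # Output block: a dense layer with one output per traffic-sign class;
    # the predicted label is the index of the largest entry (argmax) of this vector.
    tf.keras.layers.Flatten(),
    lq.layers.QuantDense(43, input_quantizer="ste_sign", **kwargs),
])
```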
The input of the first block is never binarized, as doing so drastically reduces the accuracy. ### Properties of (Binarized) Neural Networks: Robustness _Robustness_ is a fundamental property of neural networks that refers to their ability to maintain stable and accurate outputs in the presence of perturbations or adversarial inputs. Adversarial inputs are intentionally crafted inputs designed to deceive or mislead the network's predictions. As defined by [15], _local robustness_ ensures that, for a given input \(x\) from a set \(\chi\), the output of the neural network \(F\) remains unchanged within a specified perturbation radius \(\epsilon\), implying that small variations in the input space do not result in different outputs. The output for the input \(x\) is represented by its label \(l_{x}\). We consider the \(L_{\infty}\) norm, defined as \(||x||_{\infty}=\sup\limits_{n}|x_{n}|\), but other norms can also be used, e.g. \(L_{0}\) [17]. **Definition 3.1** (Local robustness.).: A feedforward neural network \(F\) is locally \(\epsilon\)-robust for an input \(x,x\in\chi\), if there does not exist \(\tau,||\tau||_{\infty}\leq\epsilon\), such that \(F(x+\tau)\neq l_{x}\). Figure 1: A fully connected DNN with 4 input nodes, 3 output nodes and 3 hidden layers _Global robustness_ [16] is an extension of local robustness and is defined as the expected maximum safe radius over a given test dataset, representing a collection of inputs. **Definition 3.2** (Global robustness.).: A feed-forward neural network \(F\) is globally \(\epsilon\)-robust if for any \(x,x\in\chi\), and \(\tau,||\tau||_{\infty}\leq\epsilon\), we have that \(F(x+\tau)=l_{x}\). The definitions above cannot be used in a computational setting. Hence, [15] proposes Definition 3.3 for local robustness, which is equivalent to Definition 3.1. **Definition 3.3** (Local robustness.).: A network is \(\epsilon\)-locally robust in the input \(x\) if for every \(x^{\prime}\), such that \(||x-x^{\prime}||_{\infty}\leq\epsilon\), the network assigns the same label to \(x\) and \(x^{\prime}\). For our setting, the input is an image represented as a vector whose entries are the pixel values. Hence, the inputs are the vector \(x\) and the perturbation \(\epsilon\). This formulation can also be applied to all inputs simultaneously (all images from the test set of the dataset); in that case, _global robustness_ is addressed. However, the number of parameters involved in checking the _global robustness_ property increases enormously. Hence, in this paper, the benchmarks propose verification of local robustness only. ## 4 Anatomy of the Binarized Neural Networks For benchmarking, we propose the two BNN architectures for which we obtained the best accuracy [17], as well as an additional one. More precisely, the best accuracy for the GTSRB and Belgium datasets is \(96.45\%\) and \(88.17\%\), respectively, and was obtained for the architecture from Figure 2, with input size \(64\times 64\) (see Table 3). The number of parameters is almost \(2\)M and the model size is \(225.67\) KiB (for the binary model) compared to \(6932.48\) KiB (for the Float-32 equivalent). The best accuracy for the Chinese dataset (\(83.9\%\)) is obtained by another architecture, namely the one from Figure 3, with input size \(48\times 48\) (see Table 4). This architecture is more efficient from the point of view of computationally limited devices and formal verification, having \(900\)k parameters, a size of \(113.64\) KiB (for the binary model), and \(3532.8\) KiB (for the Float-32 equivalent). 
Also, the second architecture gave the best average accuracy, and the decrease in accuracy for GTSRB and Belgium is small, namely \(1.17\%\) and \(0.39\%\), respectively. One can observe that the best architectures were obtained for input images of \(48\times 48\) and \(64\times 64\) pixels, with max pooling layers, which reduce the number of neurons, and batch normalization layers, which perform the scaling that leads to good accuracy. We also propose for benchmarking an XNOR architecture (Figure 4), i.e. one containing only binary parameters, for which we obtained the best results for input images of \(30\times 30\) pixels on GTSRB (see Table 5). \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{**Input size**} & \multirow{2}{*}{**\#Neur**} & \multicolumn{3}{c|}{**Accuracy**} & \multicolumn{3}{c|}{**\#Params**} & \multicolumn{2}{c|}{**Model Size (in KiB)**} \\ \cline{3-10} & & **German** & **China** & **Belgium** & **Binary** & **Real** & **Total** & **Binary** & **Float-32** \\ \hline 64px \(\times\) 64px & \(1024\) & **96.45** & **81.50** & **88.17** & 1772896 & 2368 & 1775264 & 225.67 & 6932.48 \\ \hline \end{tabular} \end{table} Table 3: Best results for the architecture from Figure 2. Dataset for train: GTSRB. Figure 2: Accuracy Efficient Architecture for GTSRB and Belgium dataset ## 5 Model and Property Specification: VNN-LIB and ONNX Formats The VNN-LIB (Verified Neural Network Library) format [10] is a widely used representation for encoding and exchanging information related to the verification of neural networks. It serves as a standardized format that facilitates the communication and interoperability of different tools and frameworks employed in the verification of neural networks. The VNN-LIB format typically consists of two files that provide a detailed specification of the neural network model (see Section 5.1), along with relevant properties and constraints (see Section 5.2). These files encapsulate important information, including the network architecture, weights and biases, input and output ranges, and properties to be verified. ### Model Representation In machine learning, the representation of models plays a vital role in facilitating their deployment and interoperability across various frameworks and platforms. One commonly used format is the H5 format, which is an abbreviation for _Hierarchical Data Format version 5_. The H5 format provides a structured and efficient means of storing and organizing large amounts of data, including the parameters and architecture of machine learning models. It is widely supported by popular deep learning frameworks, such as TensorFlow and Keras, allowing models to be saved, loaded, and shared in a standardized manner. However, while the H5 format serves as a convenient model representation for specific frameworks, it may lack compatibility when transferring models between different frameworks or performing model verification. This is where the _Open Neural Network Exchange_ (ONNX) format comes into play. ONNX offers a vendor-neutral, open-source alternative that allows models to be represented in a standardized format, enabling seamless exchange and collaboration across multiple deep learning frameworks. The VNN-LIB format, which is used for the formal verification of neural network models, leverages ONNX as its underlying model representation. ### Property specification For property specification, the VNN-LIB standard uses the SMT-LIB format. 
The SMT-LIB (Satisfiability Modulo Theories-LIBrary) language [7] is a widely recognized formal language utilized for the formalization of Satisfiability Modulo Theories (SMT) problems. \begin{table} \begin{tabular}{|l|c|c|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{**Input size**} & \multirow{2}{*}{**\#Neur**} & \multicolumn{3}{c|}{**Accuracy**} & \multicolumn{3}{c|}{**\#Params**} & \multicolumn{2}{c|}{**Model Size (in KiB)**} \\ \cline{3-10} & & **German** & **China** & **Belgium** & **Binary** & **Real** & **Total** & **Binary** & **Float-32** \\ \hline 48px \(\times\) 48px & 256 & **95.28** & **83.90** & **87.78** & 904288 & 832 & 905120 & 113.64 & 3532.80 \\ \hline \end{tabular} \end{table} Table 4: Best results for the architecture from Figure 3. Dataset for train: GTSRB. Figure 3: Accuracy Efficient Architecture for Chinese dataset A VNN-LIB file is structured as follows5, and the elements involved have the following semantics for the considered image classification task: 1. definition of input variables representing the values of the pixels \(X_{i}\) (\(i=\overline{1,P}\), where \(P\) is the dimension of the input image: \(N\times M\times 3\) pixels). For the file above, there are 2700 variables, as the image has dimension \(30\times 30\) and the channel used is RGB. 2. definition of the output variables representing the values \(Y_{j}\) (\(j=\overline{1,L}\), where \(L\) is the number of classes of the images in the dataset). For the file above, there are 43 variables, as the GTSRB categorises the traffic sign images into 43 classes. 3. bounding constraints for the input variables. Definition 5.1 is used for generating the property, taking into account that the vector \(x\) (its elements are the pixel values of the image) and \(\epsilon\) (the perturbation) are known. For example, if \(\epsilon=10\) and the value of the pixel \(X^{\prime}_{2699}\) of the image with index 1678 from GTSRB is 24, the generated constraints bounding the values of the pixel \(X_{2699}\) perturbed by \(\epsilon\), for which the predicted label should still hold, are: (assert (<= X_2699 34.0000000)) (assert (>= X_2699 14.0000000)) 4. constraints involving the output variables assessing the value of the output label. For example, if the verification problem is formulated as: _Given the image with index \(1678\), the perturbation \(\epsilon~{}=~{}10\) and the trained model, find if the perturbed images are in class \(38\)_, the generated constraint is as follows, which actually represents the negation of the property to be checked: (assert (or (>= Y_0 Y_38)... (>= Y_37 Y_38) (>= Y_39 Y_38)... (>= Y_42 Y_38))) ## 6 Benchmarks Proposal and Experimental Results of the VNN-COMP 2023 To meet the requirements of the VNN-COMP 2023, the benchmark datasets must conform to the ONNX format for defining the neural networks, while the problem specifications are expected to adhere to the VNN-LIB format. Figure 4: XNOR(QConv) architecture \begin{table} \begin{tabular}{|l|c|c|c|c|} \hline **Model description** & **Acc** & \begin{tabular}{c} **\#Binary** \\ **Params** \\ \end{tabular} & \multicolumn{2}{c|}{**Model Size (in KiB)**} \\ & & & **Binary** & **Float-32** \\ \hline QConv(16, 3\(\times\)3), QConv(32, 2\(\times\)2), D(43) & 81.54 & 1005584 & 122.75 & 3932.16 \\ \hline \end{tabular} \end{table} Table 5: XNOR(QConv) architecture. Image size: 30px \(\times\) 30px. Dataset for train and test: GTSRB.
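For illustration, a specification file with the structure described above can be produced programmatically along the following lines (a Python sketch written by us; the helper name and defaults are illustrative, this is not the generation script released with the benchmark, and details such as clipping pixel values to their valid range are omitted):

```python
def write_vnnlib(path, pixels, eps, true_label, n_classes=43):
    """Write a local-robustness query in VNN-LIB/SMT-LIB syntax.

    pixels: flat list of input pixel values; the file encodes the negation of
    'every input within eps of the image keeps label true_label'."""
    with open(path, "w") as f:
        # 1. input variables X_0 ... X_{P-1}
        for i in range(len(pixels)):
            f.write(f"(declare-const X_{i} Real)\n")
        # 2. output variables Y_0 ... Y_{L-1}
        for j in range(n_classes):
            f.write(f"(declare-const Y_{j} Real)\n")
        # 3. L_infinity bounding constraints around the given image
        for i, v in enumerate(pixels):
            f.write(f"(assert (<= X_{i} {v + eps}))\n")
            f.write(f"(assert (>= X_{i} {v - eps}))\n")
        # 4. negated robustness property: some other class dominates the true label
        others = " ".join(f"(>= Y_{j} Y_{true_label})"
                          for j in range(n_classes) if j != true_label)
        f.write(f"(assert (or {others}))\n")
```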
In order to assess the adversarial robustness of these networks, the problem specifications encompassed perturbations within the infinity norm around zero, with radius denoted as \(\epsilon=\{1,3,5,10,15\}\). To achieve this, we randomly selected three distinct images from the test set of the GTSRB dataset for each model and have generated the VNNLIB files for each epsilon in the set, in the way we ended up having 45 VNNLIB files in total. We were constrained to generate the small benchmark which includes just 45 VNNLIB files because of the total timeout which should not exceed 6 hour, this is the maximum timeout for a solver to address all instances, consequently a timeout of 480 seconds was allocated for each instance. For checking the generated VNNLIB specification files for submitted in the VNNCOMP 2023 as specified above as well as to generate new ones you can check [https://github.com/apostovan21/vnncomp2023](https://github.com/apostovan21/vnncomp2023). Our benchmark was used for scoring the competing tools. The results for our benchmark, as presented by the VNN-COMP 2023 organizers, are presented in Table 6. The meaning of the columns is as follows. Verified is number of instances that were UNSAT (no counterexample) and proven by the tool. Falsifieid is number that were SAT (counterexample was found) and reported by the tool. Fastest is the number where the tool was fastest (this did not impact the scoring in this year competition). Penalty is the number where the tool gave the incorrect result or did not produce a valid counterexample. Score is the sum of scores (10 points for each correct answer and \(-150\) for incorrect ones). Percent is the score of the tool divided by the best score for the benchmark (so the tool with the highest score for each benchmark gets 100) and was used to determine final scores across all benchmarks. Currently, we are investigating if the number of solved instances could be higher if the time is increased (the deadline used was 8 minutes). Also, it is interesting why the tools gave incorrect results for some benchmarks. ## 7 Conclusions Building upon our prior study that introduced precise binarized neural network models for traffic sign recognition, this study presents standardized challenges to gauge the resilience of these networks to local variations. These challenges were entered into the VNN-COMP 2023 evaluation, where 4 out of 7 tools produced results. Our current emphasis is on investigating the potential for solving more instances by extending the time limit (formerly set at 8 minutes). Additionally, we are keen to comprehend the factors contributing to incorrect outputs from the tools on specific benchmark tasks. ## Acknowledgements This work was supported by a grant of the Romanian National Authority for Scientific Research and Innovation, CNCS/CCCDI - UEFISCDI, project number PN-III-P1-1.1-TE-2021-0676, within PNCDI III. \begin{table} \begin{tabular}{l c c c c c c} \# & **Tool** & **Verified** & **Falsified** & **Fastest** & **Penalty** & **Score** & **Percent** \\ \hline 1 & Marabou & 0 & 18 & 0 & 1 & 30 & 100\% \\ 2 & PyRAT & 0 & 7 & 0 & 1 & -80 & 0\% \\ 3 & NeuralSAT & 0 & 31 & 0 & 4 & -290 & 0\% \\ 4 & alpha-beta-CROWN & 0 & 39 & 0 & 3 & -60 & 0\% \\ \end{tabular} \end{table} Table 6: VNN-COMP 2023 Results for Traffic Signs Recognition Benchmark
2309.10892
Artificial Intelligence-Enabled Intelligent Assistant for Personalized and Adaptive Learning in Higher Education
This paper presents a novel framework, Artificial Intelligence-Enabled Intelligent Assistant (AIIA), for personalized and adaptive learning in higher education. The AIIA system leverages advanced AI and Natural Language Processing (NLP) techniques to create an interactive and engaging learning platform. This platform is engineered to reduce cognitive load on learners by providing easy access to information, facilitating knowledge assessment, and delivering personalized learning support tailored to individual needs and learning styles. The AIIA's capabilities include understanding and responding to student inquiries, generating quizzes and flashcards, and offering personalized learning pathways. The research findings have the potential to significantly impact the design, implementation, and evaluation of AI-enabled Virtual Teaching Assistants (VTAs) in higher education, informing the development of innovative educational tools that can enhance student learning outcomes, engagement, and satisfaction. The paper presents the methodology, system architecture, intelligent services, and integration with Learning Management Systems (LMSs) while discussing the challenges, limitations, and future directions for the development of AI-enabled intelligent assistants in education.
Ramteja Sajja, Yusuf Sermet, Muhammed Cikmaz, David Cwiertny, Ibrahim Demir
2023-09-19T19:31:15Z
http://arxiv.org/abs/2309.10892v1
Artificial Intelligence-Enabled Intelligent Assistant for Personalized and Adaptive Learning in Higher Education ###### Abstract This paper presents a novel framework, Artificial Intelligence-Enabled Intelligent Assistant (AIIA), for personalized and adaptive learning in higher education. The AIIA system leverages advanced AI and Natural Language Processing (NLP) techniques to create an interactive and engaging learning platform. This platform is engineered to reduce cognitive load on learners by providing easy access to information, facilitating knowledge assessment, and delivering personalized learning support tailored to individual needs and learning styles. The AIIA's capabilities include understanding and responding to student inquiries, generating quizzes and flashcards, and offering personalized learning pathways. The research findings have the potential to significantly impact the design, implementation, and evaluation of AI-enabled Virtual Teaching Assistants (VTAs) in higher education, informing the development of innovative educational tools that can enhance student learning outcomes, engagement, and satisfaction. The paper presents the methodology, system architecture, intelligent services, and integration with Learning Management Systems (LMSs) while discussing the challenges, limitations, and future directions for the development of AI-enabled intelligent assistants in education. Artificial Intelligence, Natural Language processing, Large Language Models (LLM), Transformers, GPT, Protege Effect ## 1 Introduction The landscape of higher education is experiencing a significant transformation, propelled by rapid advancements in digital technology and the evolving needs of a diverse and globally distributed student population (Altbach et al., 2009). Traditional teaching methods, while effective in many contexts, often struggle to provide personalized support and instant feedback, particularly in fields that demand a significant amount of text-based learning, critical thinking, and analytical skills (Means et al., 2009). These fields, such as Creativity and Critical Analysis, and Society and Culture, can pose challenges for students to master without adequate support (Holmes et al., 2019). This has led to a growing interest in exploring innovative solutions that can enhance the learning experience and outcomes for students in these fields, and beyond (Popenci and Kerr, 2017). Artificial Intelligence (AI) and Natural Language Processing (NLP) have emerged as promising technologies with the potential to revolutionize the educational landscape. NLP and knowledge generation systems have been used actively for communicating data and information (Baydaroglu et al., 2023) in environmental (Sermet & Demir, 2018) and health (Zhang et al., 2023; Sermet & Demir, 2021) domains. The advent of AI-enabled tools, such as virtual teaching assistants (VTAs), offers a unique opportunity to bridge the gap between traditional teaching practices and the evolving needs of students (Winkler & Sollner, 2018). VTAs can provide personalized support, instant feedback, and adaptive learning experiences, thereby enhancing student engagement, satisfaction, and learning outcomes (Fryer et al., 2017). Moreover, these AI-enabled solutions are not limited to text-based materials. Advanced deep learning models have been successfully used for synthetic image generation (Gautam et al., 2022), image data augmentation (Demiray et al., 2021) and image analysis (Li & Demir, 2023). 
They can also support learning in areas such as coding, mathematics and statistics, and even visual inputs. By leveraging AI and NLP, VTAs can interpret and provide feedback on code snippets, mathematical equations, and statistical models. They can also process and respond to visual inputs such as diagrams, charts, images, videos, and maps, further expanding their utility in diverse learning contexts. Web technologies play a crucial role in embedding Large Language Models (LLMs) and chatbots into the intricate fabric of modern engineering education, catering to a myriad of specialized domains. In the realm of advanced modeling (Ewing et al., 2022) and analysis tools (Sit et al., 2021), web platforms enable real-time processing and intuitive visualization of complex engineering problems, enhancing students' ability to grasp and manipulate sophisticated models. When diving into the vast sea of programming libraries, as documented by Ramirez et al. (2022; 2023), web technologies make it feasible to offer on-the-spot guidance, code suggestions, and troubleshooting advice, assisting budding engineers in seamlessly navigating and utilizing these libraries. Furthermore, the convergence of LLMs, chatbots, and web platforms has been instrumental in redefining pedagogical methods. Here, web-hosted chatbots, powered by LLMs, can simulate ethical dilemmas, guide reflections, and provide instant feedback, ensuring that future engineers not only excel in their technical prowess but also uphold the ethical standards of their profession. However, the effectiveness of VTAs in supporting students' learning needs in these diverse fields, where multi-modal data plays a significant role, remains an area ripe for exploration. The potential of VTAs to enhance learning outcomes across a wide range of disciplines and learning formats underscores the need for further research and development in this growing field. This study introduces a novel web-based framework for an AI-enabled Virtual Teaching Assistant (AIIA), designed to enhance student learning in qualitative disciplines. The AIIA, built with a NodeJS backend, leverages the power of AI and Natural Language Processing (NLP) to create an interactive and engaging platform. This platform is engineered to reduce the cognitive load on learners by providing easy access to information and facilitating knowledge assessment. The AIIA's capabilities include understanding and responding to student inquiries, generating quizzes and flashcards, and delivering personalized learning support tailored to individual needs and learning styles. By presenting this innovative framework, this paper contributes to the ongoing efforts to integrate AI-enabled technologies and web systems into education, aiming to improve the effectiveness of learning support in qualitative fields. The potential impact of this research is significant, as it can provide valuable insights into the design, implementation, and evaluation of AI-enabled VTAs in higher education. The findings of this study can inform the development of innovative educational tools that can enhance student learning outcomes, engagement, and satisfaction. Furthermore, the research can contribute to the broader discourse on the integration of AI and NLP in education, providing empirical evidence on the effectiveness of these technologies in enhancing teaching and learning practices. The remainder of this article is organized as follows. Section 2 summarizes the relevant literature and identifies the knowledge gap. 
Section 3 presents the methodology of the design choices, development and implementation of a course-oriented intelligent assistance system. Section 4 describes the features implemented for both the instructors and students. Section 5 discusses the strengths, limitations, and future directions. Section 6 concludes the articles with a summary of contributions. ## 2 Related Work The literature on the application of AI in education has grown substantially in recent years, reflecting the increasing interest in this field. In this section, we systematically review existing literature, specifically focusing on the use of Virtual Teaching Assistants (VTAs) in higher education and natural language communication and identify the knowledge gap that justifies the present research. A critical paper by Huang, Saleh, and Liu (2021) provides an overview of AI applications in education, including adaptive learning, teaching evaluation, and virtual classrooms. This study highlights the potential of AI to promote education reform and enhance teaching and learning in various educational contexts. Essel et al. (2022) presents a study on the effectiveness of a chatbot as a virtual teaching assistant in higher education in Ghana, demonstrating that students who interacted with the chatbot performed better academically compared to those who interacted with the course instructor. This empirical evidence supports the potential of VTAs to improve student academic performance. Crompton and Song (2021) provide a comprehensive overview of AI in higher education, discussing its potential in various aspects such as bespoke learning, intelligent tutoring systems, facilitating collaboration, and automated grading. This paper contributes to the broader discourse on the integration of AI and natural language processing in education. In addition to these empirical studies, several recent publications delve further into AI-enhanced educational systems. Akgun and Greenhow (2021) discuss the ethical challenges of using AI in education and the potential applications, such as personalized learning platforms and automated assessment systems. Ewing and Demir (2021) discuss ethical challenges in engineering decision making using AI from educational perspective. Bahja (2020) offers a comprehensive explanation of Natural Language Processing (NLP), its history, development, and application in various industrial sectors. In the context of large language models, Neumann et al.'s (2023) paper explores the potential approaches for integrating ChatGPT into higher education, focusing on the effects of ChatGPT on higher education in software engineering and scientific writing. Pursnani et al. (2023) assessed the performance of ChatGPT on the US fundamentals of engineering exam (FE Exam) and did a comprehensive assessment of proficiency and potential implications for professional environmental engineering practice. Sajja et al. introduce an AI-augmented intelligent educational assistance framework based on GPT-3 and focused on curriculum- and syllabus-oriented support, which automatically generates course-specific intelligent assistants regardless of discipline or academic level. Furthermore, Tack and Piech (2022) examine the pedagogical abilities of Blender and GPT-3 in educational dialogues, finding that conversational agents perform well on conversational uptake but are quantifiably worse than real teachers on several pedagogical dimensions, especially helpfulness. 
Lee (2022) explores the potential of ChatGPT in medical education, discussing its potential to increase student engagement and enhance learning, as well as the need for further research to confirm these claims and address the ethical issues and potential harmful effects. Perkins et al. (2022) examine the academic integrity considerations of students' use of AI tools using large language models, such as ChatGPT, in formal assessments, emphasizing the need for updated academic integrity policies to consider the use of these tools in future educational environments. Lastly, Audras et al. (2021) discuss the potential application of VTAs to reduce the burden on teachers across secondary schools in China, emphasizing the need for careful design and attention to student support. In conclusion, the existing literature highlights the potential benefits and challenges of using AI-based VTAs in higher education. While there is a growing body of research on the design, implementation, and effectiveness of VTAs, several key areas remain to be addressed in the literature. These include the scalability and adaptability of such systems across diverse learning contexts, their potential impact on the future trajectory of higher education, and the integration of these systems with Learning Management Systems (LMS). Furthermore, most studies have not considered the incorporation of class recordings and class interactions in their AI-based solutions, which could potentially enrich the knowledge base of VTAs and provide a more comprehensive learning experience for students. Additionally, existing literature has not extensively addressed the need for a solution that caters to both students and instructors, striking a balance between personalized assistance and instructor support. Another critical aspect that has not been adequately addressed in the literature is the potential for cheating and academic dishonesty that may arise with the use of AI-based VTAs. Ensuring academic integrity and preventing cheating should be an integral part of any AI-enabled educational solution, yet there is a dearth of research exploring effective prevention mechanisms (Kasneci et al., 2023). The current study aims to address these gaps by designing, implementing, and evaluating an AI-enabled Intelligent Assistant (AIIA) for personalized and adaptive learning in higher education. Proposed AIIA seeks to seamlessly integrate with existing LMS, utilize class recordings and class interactions, cater to the needs of both students and instructors, and incorporate measures to ensure academic integrity and prevent cheating. By addressing these knowledge gaps, this study contributes to the ongoing efforts towards the development and implementation of effective AI-based educational solutions in higher education. ## 3 Methodology The primary objective of this research is to address the growing need for innovative educational solutions in higher education, catering to the diverse needs of learners and fostering an inclusive, equitable, and engaging learning environment. By harnessing the power of conversational AI and advanced natural language processing techniques, the proposed framework seeks to improve learning experiences and outcomes in postsecondary education, while bridging learning gaps and facilitating continuous learning through flexible educational pathways. 
The AIIA aims to be discipline-independent, scalable, and seamlessly integrated across institutions, thereby unlocking its potential to impact a broad spectrum of students and educators. The transformative nature of AIIA lies in its convergence of advanced AI technologies with effective educational principles, promoting self-regulated learning, fostering student-faculty communication, encouraging collaboration, and enhancing access to learning resources. The VirtualTA system offers a range of benefits for students and higher education, including: 1. _Enhanced Learning Experience_: Providing a personalized and interactive learning experience, where students can ask questions, seek clarifications, and access relevant resources in real-time. 2. _Instant Access to Information_: Enabling efficient knowledge acquisition by quickly retrieving information from various course resources. 3. _On-Demand Support_: Offering 24/7 assistance, promoting self-directed learning, and empowering students to take ownership of their education. 4. _Consistency and Accuracy_: Delivering reliable information, reducing the risk of incorrect or conflicting answers. 5. _Adaptive Learning_: Facilitating personalized learning paths, catering to diverse needs, and promoting effective knowledge retention. 6. _Multilingual Support_: Expanding the AIIA's capabilities to include support for multiple languages, ensuring that students from diverse linguistic backgrounds can effectively engage with and benefit from the AI-enabled assistant. 7. _Expansion of Access_: Integrating into digital platforms for broader access to quality education and enabling remote learning for students worldwide. 8. _Automation of Administrative Tasks_: Freeing up instructors' time for higher-value activities, such as facilitating discussions and providing personalized guidance to students. 9. _Personalized Learning, Continuous Assessment and Feedback_: Utilizing adaptive self-learning mechanisms and providing timely and constructive guidance for students to take an active role in their learning journey. 10. _Addressing Emotional and Social Aspects of Learning_: Incorporating emotional intelligence and social awareness into the AIIA, enabling it to recognize and respond to students' emotional states and provide empathetic support. By incorporating a range of AI-enabled functionalities, AIIA seeks to harness the "Protege Effect", ultimately contributing to increased learning proficiency and mitigating educational inequality. Additionally, the integration of AIIA into various communication channels ensures accessibility for students of diverse backgrounds, further promoting equity in higher education. This research is poised to make a significant contribution to the ongoing discourse on the integration of AI and natural language processing in education, shaping the future trajectory of higher education and empowering the next generation of professionals. ### Natural Language Inference Large language models (LLMs) use deep learning algorithms to analyze and generate human language, having applications ranging from chatbots to translation systems. Trained on extensive text data, LLMs like GPT-3 (Brown et al., 2020), GPT-2 (Radford et al., 2019), PaLM (Chowdhery et al., 2022), BERT (Devlin et al., 2019), XLNet (Yang et al., 2020), RoBERTa (Liu et al., 2019), ALBERT (Lan et al., 2020), and T5 (Raffel et al., 2020) can generate responses emulating human-like communication.
OpenAI's Generative Pretrained Transformer 3.5 (GPT-3.5) serves as a leading-edge autoregressive language model, capable of synthesizing textual content akin to human composition. The model's versatility is demonstrated through its adaptability to an array of applications, a feature attributable to few-shot learning and fine-tuning methods. Few-shot learning allows the model to tackle unfamiliar tasks with minimal example provision, leveraging its extensive pre-training on a wide variety of internet text data. Conversely, fine-tuning involves training the model on a significant number of task-specific examples, thereby augmenting its performance in distinct application domains and obviating the need for examples in the prompt. For the study, we selected GPT-3.5 due to its user-friendly API and advanced natural language processing capabilities. We utilized the "text-davinci-003" variant, a GPT-3 model built on InstructGPT (Ouyang et al., 2022), appreciated for its few-shot learning and fine-tuning capabilities. Additionally, we also employed other GPT-3.5 models, including "gpt-3.5-turbo" for text completions, a fine-tuned Davinci model for query classification, and a Fine-Tuned Curie model for open-ended question generation. #### 3.1.1 Text Embeddings In the field of Natural Language Processing (NLP), embeddings are numerical representations that help computers understand the meaning and context of different concepts. They are used in various applications such as search functions, recommendations, and categorizations, offering significant benefits. For this research, we used OpenAI's text-embedding-ada-002 (Greene et al., 2022) model to convert various classroom materials - including assignments, announcements, lecture notes, forum posts, and recordings - into text embeddings. This model is well-suited for dealing with long documents and provides embeddings with 1,536 dimensions (Greene et al., 2022). This conversion process forms the basis for the development of a search algorithm that uses cosine similarity to find documents most relevant to a user's query. By turning course materials into embeddings, we can efficiently find and retrieve relevant information without the need to manually search each document. Our approach highlights the effectiveness of using embeddings in NLP tasks, particularly in the context of document search and retrieval. This contributes to a system capable of providing accurate and specific responses. In assessing the semantic similarity between two vectors, it is essential to compare the generated word embeddings. Cosine similarity, a measure calculating the cosine of the angle between two vectors, is often employed for this purpose. This process essentially conducts a dot product operation between the vectors. When the vectors perfectly align at 0 degrees, the cosine value becomes 1, representing complete similarity. For angles other than 0 degrees, the cosine value drops below 1, further decreasing as the angle widens. Thus, the larger the cosine similarity, the more closely aligned or "similar" the two-word embeddings are (Gunawan et al., 2018). This similarity metric underpins the search algorithm's operation, aiding in identifying documents most relevant to a user's query. The algorithm prioritizes vectors with higher cosine similarity values, thereby enhancing search precision. 
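To make the similarity-based ranking just described concrete, the following minimal Python sketch scores pre-computed course-material embeddings against a query embedding and keeps only the strongest matches. It is an illustration only: the function names are invented for this sketch, the production backend described later runs on NodeJS rather than Python, and the top-10 and 0.75 cut-offs simply mirror the retrieval behavior reported in the following paragraphs.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine of the angle between two embedding vectors (1.0 means identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def retrieve_context(query_emb: np.ndarray, doc_embs: list, docs: list,
                     top_k: int = 10, threshold: float = 0.75) -> list:
    """Rank stored course-material chunks by similarity to the query embedding.

    Keeps at most `top_k` chunks and drops any whose similarity falls below
    `threshold`; the surviving text would form the context passed to the LLM.
    """
    scored = [(cosine_similarity(query_emb, emb), doc) for emb, doc in zip(doc_embs, docs)]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [(score, doc) for score, doc in scored[:top_k] if score >= threshold]
```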
By employing cosine similarity, the system identifies the most appropriate match within the course data embeddings and curates a list of 10 documents with the highest correlation to the user's query. Only those exhibiting a similarity score above 75% are retained, with their text forming the context for responding to the user's query. To uphold a high accuracy level prompt engineering techniques are applied to prevent hallucination, ensuring VirtualTA does not furnish incorrect responses. In scenarios where the model lacks confidence in its response, it refrains from providing an answer and instead conveys a message of uncertainty. This mechanism plays a crucial role in averting the propagation of erroneous information, thereby ensuring that users receive only accurate and reliable information. #### 3.1.2 Transcription and Speaker Diarisation In response to the widespread shift to remote learning following the Covid-19 pandemic, the recording of lectures and classes has become a prevalent practice. Recognizing the potential of these rich, yet underutilized resources, our research aimed to incorporate these recorded materials into VirtualTA system via Automatic Speech Recognition (ASR) technology. To this end, we adopted Whisper, an ASR system developed by OpenAI (Radford et al., 2022), trained on a substantial corpus of multilingual and multitask supervised data, totaling 680,000 hours. Whisper allows us to transcribe speech from recorded classes into textual data, which can subsequently be processed and analyzed by the system. This integration significantly expands the data pool available for analysis and query resolution, enhancing our ability to support student learning. By including speech data from class recordings, we not only augment the comprehensiveness of our responses, but also facilitate a more effective transfer of knowledge. This approach underscores our commitment to fully exploiting available resources, as we continuously strive to enhance the learning experiences of students. ### System Architecture The System Architecture of the Artificial Intelligence-Enabled Intelligent Assistant (AIIA) (Figure 1) framework serves as the foundation for its operation and functionalities within the higher education context. This architecture comprises four primary components: 1) Data Retrieval, which focuses on obtaining and processing various data resources through CANVAS integration and transcription services; 2) Core Framework, which encompasses the design and implementation of language services, system design, and server management to ensure efficient operation; 3) Intelligent Services, which includes the Virtual TA, Study Partner, and Instructor Assistant functionalities that cater to the diverse needs of students and instructors; and 4) Communication, which facilitates seamless interaction between the system and its users through web-based chatbots, accessibility features, and multi-platform support. This comprehensive architecture enables the AIIA framework to deliver personalized and adaptive learning experiences, fostering enhanced engagement and improved learning outcomes in higher education environments. #### 3.2.1 Data Resources: Categorization, Prioritization, and Transparency VirtualTA system utilizes a range of data resources in its operation, primarily targeting elements intrinsic to the course structure. 
Table 1 provides an overview of these key resources, which include but are not limited to Assignments, Announcements, Discussions, Lectures, and External Reading Materials. Each resource type plays a distinct role within VirtualTA's architecture, contributing to the system's ability to accurately respond to student queries. For example, Assignments are used to gauge the context of the student's query, while Announcements ensure that the system can provide the most up-to-date information regarding the course. To manage the various resource types efficiently, a unique data structure has been implemented. This structure not only categorizes the resources but also prioritizes them based on their relevance to the learning objectives of the course. For instance, primary resources such as Lectures are given a higher priority compared to secondary resources like External Reading Materials. This hierarchical approach ensures that the system first seeks answers from the most critical resources, thereby enhancing the accuracy and reliability of the responses generated. Furthermore, this structured approach also offers traceability, allowing the system to identify and disclose the resources it used to derive an answer. This feature adds a layer of transparency to the system's operation, providing users with insights into the sources of the information supplied, and contributing to their confidence in the system's responses.

| Resource | Description |
| --- | --- |
| Assignments | Instructor-issued tasks intended for gauging students' comprehension and course progression. VirtualTA system utilizes this data not only for tracking assignment deadlines but also for understanding the context of the assignment. This assists in detecting whether a student's query is assignment-related, enabling more targeted and effective assistance. |
| Discussions | Forums for students to engage in discourse about course-related topics. Utilized by VirtualTA to answer questions and provide insights. |
| Announcements | Vital notifications issued by instructors regarding course alterations, deadlines, or events. VirtualTA system maintains an updated record of these announcements, facilitating accurate and timely responses to students' inquiries with the most current information available. |
| Lectures | Recorded or live teaching sessions that deliver course content. VirtualTA uses lecture transcripts to answer queries related to course content. |
| Reading Materials | Required or recommended readings for the course. VirtualTA can use these materials to answer relevant questions, summarize complex readings, or create reading plans. |
| Quizzes | Short assessments designed to test a student's grasp of recent course material. VirtualTA can assist students in quiz preparation by generating similar questions for practice. |

Table 1: Overview of Key Course Resources Used by VirtualTA System

Figure 1: System architecture of VirtualTA

#### 3.2.2 Knowledge Base Generation VirtualTA chatbot is empowered by a comprehensive knowledge generation process that incorporates extraction, parsing, and encoding of resources obtained from a learning management system (LMS). This extensive process comprises a series of steps that ensure optimal utilization of available resources and enhance the efficiency of the chatbot.
The initial stage revolves around acquiring a wealth of information encapsulated in documents such as lecture files, lecture recordings, and reading materials. The acquisition process also includes additional resources such as assignments, discussion board entries, quiz information, and course announcements. Upon data acquisition, a parsing technique is employed where applicable. This technique, primarily applied to file-format data, meticulously breaks down the text into manageable chunks, each consisting of approximately 800 characters. This operation is conducted with the utmost care to ensure words and sentences remain intact, thus enabling the production of coherent and meaningful blocks of information. However, it is important to note that some resources bypass the parsing stage. Announcements and assignment details, typically supplied by the LMS API, are already presented in a structured JSON format. Their inherent structure eliminates the need for parsing, streamlining the process and enhancing efficiency. Subsequent to data extraction and parsing, the resulting information is encoded into text embeddings. In this transformative phase, the 800-character blocks are metamorphosed into high-dimensional vector representations. This process not only preserves but also enhances the semantic richness of the course content. The set of embeddings created forms the fundamental knowledge base of VirtualTA chatbot. These embeddings enable the chatbot to offer advanced, intelligent services to both students and instructors as detailed in Section 4. Figure 2 offers a visual representation of the autonomous knowledge base generation process. The sequence begins with the acquisition of data resources (documents and other elements), proceeds through data extraction and parsing (or bypassing parsing), followed by the generation of embeddings. This culminates in the formation of the output - a robust, dynamic knowledge base. #### 3.2.3 Advanced Query Interpretation and Response Generation In the core application of the system, a multistage process is deployed prior to generating a response to a user's query, a process integral to the efficient functioning of the system. This process is elucidated in the subsections below. Figure 2: Autonomous knowledge base population Query Classification: The system begins by distinguishing the nature of the user's query, a process known as query classification. This phase discerns the type and intent of the question, facilitating a more focused and relevant response. Context Generation and Embedding Matching: Subsequent to classification, the system transitions to the context generation phase. The user's question is transformed into a text embedding, a vectorized representation that allows the query to be accurately compared to the existing knowledge base. The system employs cosine similarity to identify the closest match within the course data embeddings. A list of the ten documents with the highest correlation to the user's query is curated, retaining only those with a similarity score exceeding 75%. The text from these documents is then utilized to form the context for the system's response. Response Generation and Hallucination Mitigation: To uphold the accuracy and relevance of the system's output, we selectively apply fine-tuning models to certain features. These models assist in generating various question types, including open-ended, true/false, and multiple-choice questions. 
In the subsequent stage, prompt engineering techniques are utilized to mitigate hallucination -- the generation of incorrect or irrelevant information -- thereby ensuring the Virtual TA does not produce inaccurate responses. Error Prevention Mechanism: A unique feature of the system is its built-in error prevention mechanism. If the model lacks confidence in the accuracy of its response, it refrains from providing an answer. Instead, it communicates a message such as "I'm not sure," which aids in preventing the dissemination of erroneous information. This feature ensures that users only receive information that is both accurate and reliable. User Intent Fulfillment: This is the final stage, where the classified query (from the Query Classification step) is executed, fulfilling the user's specific intent. For instance, if the user wants a question answered, a topic summarized, automatic code generation, question generation, or an essay outline on a given topic, the system will proceed accordingly to meet the user's needs. #### 3.2.4 Cyberinfrastructure and Integration The proposed framework is grounded on a centralized, web-based cyberinfrastructure responsible for various tasks, including data acquisition, training of deep learning models, storage and processing of course-specific information, and hosting the generated chatbots for utilization in a frontend application. The cyberinfrastructure comprises an NGINX web server and NodeJS-based backend logic, bolstered by a PostgreSQL database, caching mechanisms, and modules for user and course management. The heart of this setup is the intelligent assistant, architected on a Service-Oriented Architecture (SOA) that enables plug-and-play integration with any web platform supporting webhooks. The key elements of this section include a student chat interface with multimodal responses, an instructor interface for resource management and analytics, a new JS library for LMS integration, and the Whisper-based Speech API for transcription services. _Student Chat Interface:_ The AIIA system features a web-based chat interface with multimodal responses, allowing for efficient communication between students and VirtualTA. Interaction with VirtualTA system is enabled through a specially developed API, which retrieves the system's responses for presentation to the user via the chatbot. This chat interface is integrated directly into Canvas, providing students with easy access to the AI assistant and encouraging them to engage with the LMS more frequently. By embedding the chatbot within the familiar Canvas platform, the AIIA system ensures a smooth and seamless user experience for students. _Instructor Dashboard:_ The administrative interface, built using React, empowers instructors to manage the resources utilized by the AIIA system. This interface allows instructors to enable or disable specific resources, providing control over the information accessible to students. Additionally, the instructor dashboard offers access to analytics, enabling instructors to monitor student engagement and performance. _LMS Integration:_ To facilitate seamless integration with various LMSs, particularly with Canvas, a new JavaScript library has been developed. This library enables the AIIA system to interact with multiple courses, retrieve and preprocess relevant data, and regenerate course embeddings as needed. By providing compatibility with a broadly adopted LMS, the AIIA system ensures its adaptability and applicability across diverse educational settings. 
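As a rough illustration of the kind of call such an integration library wraps, the sketch below pulls a course's assignment list from the Canvas REST API and returns the already structured JSON, which can then be embedded into the knowledge base described in Section 3.2.2. It is written in Python only for consistency with the other sketches in this paper (the library described above is JavaScript), and the host URL, token handling, and omission of pagination and error handling are simplifying assumptions rather than the system's actual code.

```python
import requests

def fetch_assignments(base_url: str, course_id: int, token: str) -> list:
    """Retrieve the assignment list for one course via the Canvas REST API.

    `base_url` is the institution's Canvas host (e.g. "https://canvas.example.edu").
    Pagination, retries, and error handling are omitted for brevity.
    """
    resp = requests.get(
        f"{base_url}/api/v1/courses/{course_id}/assignments",
        headers={"Authorization": f"Bearer {token}"},
        params={"per_page": 100},
        timeout=30,
    )
    resp.raise_for_status()
    # Assignment names, descriptions, and due dates arrive as structured JSON,
    # so this resource can skip the parsing step and be embedded directly.
    return resp.json()
```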
_Speech API:_ The Speech API, implemented as a backend-service in Python and served via Flask, is based on WHISPER and pyannote (i.e., a Python package for neural speaker diarisation) and plays a pivotal role in providing transcription services for the AIIA system. The API offers a variety of tailored endpoints, catering to different transcription use cases including transcribing video content from Canvas file URLs, with or without timestamps, and transcribing videos from YouTube URLs or other specified URLs, also with or without timestamps. These versatile endpoints enable the AIIA system to efficiently transcribe video content from a diverse range of sources, ensuring that the AI assistant has access to a comprehensive array of course materials and information to provide accurate, context-aware responses to student queries. ## 4 Results In this section, we present the intelligent services and enhancements implemented in the system to cater to the needs of both students and instructors. These advancements aim to augment the learning experience by providing students with valuable tools and resources, while also assisting instructors in their instructional tasks and assessments. The student-oriented enhancements encompass features such as Dynamic Flashcard Integration, Automated Assessment: Intelligent Quiz Generation and Auto-grading, Automated Question-Answering on Course-Related Topics, Embedded Sandbox Integration within the Chatbot Interface, summarization of course content, and context-aware conversation. On the other hand, the instructor-focused enhancements include an Auto-Evaluator for Streamlined Assignment Assessment, Automated Homework Detection Mechanism to promote independent learning, and Automated Generation of Diverse Assessment Questions. By incorporating these intelligent services, the system aims to create a dynamic and interactive learning environment, supporting both students and instructors in their academic pursuits. ### Student-Oriented Enhancements This section highlights the student-oriented enhancements integrated into the system to enhance the learning experience and support students in their academic pursuits. These enhancements encompass dynamic flashcard integration, automated assessment with intelligent quiz generation and auto-grading, automated question-answering on course-related topics, embedded sandbox integration within the chatbot interface, summarization of course-related topics, and context-aware conversation. By incorporating these features, the system aims to provide students with valuable study resources, efficient assessment tools, prompt information retrieval, programming assistance, condensed topic summaries, and personalized communication. These enhancements contribute to creating an engaging and effective learning environment that fosters comprehension, active participation, and self-assessment for students. #### 4.1.1 Dynamic Flashcard Integration A notable addition to the system is the incorporation of a flashcard feature, enabling students to request flashcards on any topic within the course to support their preparation. This feature closely resembles traditional flashcards, with the front side presenting a question and the flip side revealing the answer. In the implementation, the flashcards encompass both true/false questions and open-ended questions. Furthermore, each answer is accompanied by a detailed explanation or reasoning, elucidating the rationale behind the given answer. 
This feature serves to enhance students' understanding and retention of course concepts, providing them with a valuable study resource. The flashcards depicted in Figure 3 present a format wherein the front side contains a question, while the flip side reveals the corresponding answer along with the underlying reasoning. Figure 3: Flashcard Functionality of VirtualTA #### 4.1.2 Automated Assessment: Intelligent Quiz Generation and Auto-grading In addition to the flashcard functionality, the system also includes a quiz feature that allows users to request quizzes on specific topics from the course. This feature enables students to test their knowledge and understanding of the subject matter. The quizzes consist of both true/false questions and open-ended questions, providing a comprehensive assessment of the students' grasp of the material. To enhance the user experience, we have implemented an auto-grading system for the quizzes. Once the student submits their answers, the system automatically evaluates their responses. The system provides immediate feedback by indicating whether each answer is correct or incorrect. In cases where the answer is incorrect, an explanation is provided to help the student understand the correct response and the underlying reasoning. By incorporating this quiz feature with auto-grading functionality, we aim to foster an interactive learning experience that promotes active participation and self-assessment. Students can gauge their progress, identify areas of improvement, and reinforce their understanding through the provided explanations. Figure 4, displayed below, showcases the quiz functionality, also referred to as the self-assessment functionality. In this feature, users are presented with a question and have the ability to input their answer. Upon clicking the submit button, the system evaluates the correctness of the response and provides accompanying reasoning or explanations. Figure 4: Quiz Generation and Auto-Grading Functionality of VirtualTA #### 4.1.3 Automated Question-Answering on Course-Related Topics The system incorporates a feature that enables students to ask questions pertaining to administrative or course content topics. This functionality is specifically designed to streamline the process of obtaining answers to common inquiries, thereby enhancing the overall learning experience for students. By leveraging available information within the system's knowledge base, automated response mechanism ensures prompt and accurate responses to a wide range of queries. Students can seek information on various administrative aspects, such as important dates or course logistics, as well as delve into specific course content topics, seeking clarification or further insights. For instance, a student might ask about upcoming midterm dates or inquire about the engineering design process. When the necessary information is present within the system's knowledge base, the system generates automated responses that directly address the student's query, providing the relevant details or explanations. This feature not only expedites the process of obtaining information but also empowers students to take charge of their learning journey. By leveraging automation and readily available knowledge, the system offers students a convenient and efficient means of accessing accurate responses to their questions, thereby fostering an enhanced learning experience. Figure 5, depicted below, exemplifies the question-answering feature. 
When a user poses a question, the system promptly responds with an answer, accompanied by a disclaimer. This disclaimer includes the confidence percentage of the response and provides information regarding the source from which the information was obtained. In cases where the answer is automatically generated, indicating a lack of matching documents, the system acknowledges that it attempted to answer the question autonomously and advises users to consult with an expert for matters of significant importance. Figure 5: Web based chatbot user interface with questions and answers #### 4.1.4 Embedded Sandbox Integration within the Chatbot Interface To enhance the learning experience and cater to students with varying levels of programming proficiency, we have integrated a coding sandbox environment directly into the chatbot interface. This feature allows users to seek assistance with programming-related queries and provides a convenient platform for code execution and clarification. Whether it is a programming course or a non-CS domain, students can ask for guidance or clarification on code snippets. The system automatically detects the programming language being used and, upon the user's request to "run code," opens up a coding sandbox environment within the chatbot itself. This eliminates the need for students to have prior knowledge of integrated development environments (IDEs) or programming tools. By offering this integrated coding sandbox, we aim to provide a user-friendly and accessible platform for students to experiment with and execute basic code related to their courses. This feature is particularly valuable for beginners or individuals unfamiliar with traditional coding environments, as it allows them to interact with code directly within the chatbot interface. It facilitates quick testing and understanding of programming concepts, promoting a more interactive and engaging learning experience. Figure 6, depicted above, showcases the seamless integration of a sandbox environment. It displays the MATLAB code corresponding to the user's query. Upon clicking the "Run Code" button, the code is automatically transported to an Integrated Development Environment (IDE) where users can conveniently execute the function directly within the environment. Figure 6: Code Sandbox Environment within VirtualTA #### 4.1.5 Summarization of Course-Related Topics To facilitate students' access to condensed information on specific topics of interest, we have implemented a summarization feature in the system. This feature allows students to request a summary of a particular topic, enabling them to quickly grasp the key points without having to go through extensive materials. Leveraging the power of GPT-3.5 models, the system is capable of understanding student queries pertaining to specific topics, such as feminism or engineering design. Upon receiving a query, the system utilizes its knowledge base to generate a relevant and concise summary that encapsulates the essential information related to the topic. This summary is then presented to the student, providing them with a quick and efficient way to obtain an overview of the desired subject matter. Figure 7, depicted above, exemplifies the summarization functionality. This feature enables the system to generate concise summaries of information related to any topic in the context of the course. When a user poses a question, the system leverages its summarization capabilities to provide a condensed summary as an answer. 
Similar to the question-answering feature, a disclaimer accompanies the response, indicating the confidence level and information source. It is important to note that the summarization feature aims to provide a brief overview, and for more comprehensive or critical matters, consulting an expert is recommended. Figure 7: Summarization functionality of VirtualTA on Course Content #### 4.1.6 Context-Aware Conversation To provide a more engaging and personalized experience, the system is designed to replicate the communication style of the student for better understanding and empathy. By analyzing the student's language patterns and preferences, the system adapts its responses to align with the student's communication style. To manage the conversation history effectively and ensure token limits are handled appropriately, we employ a dynamic rewriting and rotation approach to maintain relevant context while interacting with large language models (LLMs). Furthermore, the system is designed to adopt an uplifting, helpful, and empathetic persona. It aims to provide guidance, support, and relevant information to the student in a positive and constructive manner. In addition to adapting the communication style, the VirtualTA system also incorporates techniques to identify the emotional state of the student. This capability enables the system to recognize when students are in need of empathy and understanding, allowing it to tailor its responses accordingly to provide the appropriate level of emotional support. Figure 8, depicted above, highlights the context-aware capabilities of VirtualTA. This system possesses the ability to discern when a student is facing challenges or requires empathy and understanding. Instead of offering a straightforward response, the VirtualTA acknowledges the user's feelings, adopting an empathetic stance. Furthermore, it reinforces its continuous availability and commitment to assisting the student. Figure 8: Context-aware replies by VirtualTA ### 4.2 Instructor-Focused Enhancements This section highlights the instructor-focused enhancements incorporated into the system to streamline instructional tasks and assessments. These enhancements include an auto-evaluator for efficient assignment assessment, an automated homework detection mechanism to promote independent learning and academic integrity, and the automated generation of diverse assessment questions. By integrating these features, the system aims to support instructors in their grading process, facilitate a comprehensive learning experience, and save time in question preparation for assessments. #### 4.2.1 Auto-Evaluator for Streamlined Assignment Assessment In order to assist instructors in the grading process, we have incorporated an auto-evaluator feature for assignments. This feature allows instructors to provide the system with the solutions, including the questions and their corresponding correct answers. Additionally, instructors can submit the students' responses to the questions, with the ability to upload PDF files for convenience. The system then automatically evaluates the submitted answers against the provided solutions, providing scores for each question. Furthermore, the system generates reasoning and explanations to justify the assigned scores, aiding instructors in understanding the evaluation outcomes.
It is important to note that the auto-evaluator is not intended to replace the instructor's grading but rather to provide valuable insights and facilitate decision-making during the grading process. Ultimately, it aims to assist instructors by providing an initial evaluation and supporting their assessment of student assignments. Figure 9, displayed below, illustrates the output of the auto-evaluator feature. In this scenario, the instructor supplies the system with a key (correct answers) and the students' solutions. The system then generates scores for each question and provides corresponding explanations. While the system's scoring may occasionally be lenient, it still serves as a valuable tool for instructors in making final grading decisions, especially when accompanied by the explanations provided. To facilitate visual assessment in the grading process, a color-coded system has been employed for question grading. This system highlights questions with a score of less than or equal to two in red, questions with a score of less than or equal to five in yellow, and questions with a score greater than five in green. This color scheme enables instructors to swiftly discern the performance level of each question, thus streamlining the evaluation process. Figure 9: Auto-Evaluator Output
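A minimal sketch of how the per-question scoring and the red/yellow/green color coding described above could be composed is shown below. The prompt wording, the injected `complete` callable, and the score parsing are illustrative assumptions, not the deployed implementation, which relies on GPT-3.5 models behind the system's backend.

```python
import re
from typing import Callable

GRADING_PROMPT = (
    "You are grading one exam question.\n"
    "Question: {question}\n"
    "Answer key: {key}\n"
    "Student answer: {student}\n"
    "Reply with 'Score: <0-10>' on the first line, followed by a short explanation."
)

def color_for(score: int) -> str:
    """Color code used in the instructor view: red for <= 2, yellow for <= 5, green above 5."""
    if score <= 2:
        return "red"
    if score <= 5:
        return "yellow"
    return "green"

def grade_question(question: str, key: str, student: str,
                   complete: Callable[[str], str]) -> dict:
    """Score one student answer against the key using an injected LLM completion callable."""
    reply = complete(GRADING_PROMPT.format(question=question, key=key, student=student))
    match = re.search(r"\d+", reply)          # first integer in "Score: N ..."
    score = min(10, int(match.group())) if match else 0
    return {"score": score, "explanation": reply, "color": color_for(score)}
```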
#### 4.2.2 Automated Homework Detection Mechanism The VirtualTA system incorporates an automatic homework detection feature that caters to the instructor's preferences. When instructors designate certain assignments or homework as off-limits for direct answers, VirtualTA ensures that students seeking assistance related to those specific tasks are guided toward appropriate resources instead. This approach encourages students to engage actively with the course materials and learn the underlying concepts, rather than relying on direct solutions or answers to their homework. By providing guidance and directing students to relevant resources, VirtualTA promotes a deeper understanding of the subject matter, fostering independent learning and critical thinking skills. This feature supports instructors' goals of encouraging academic integrity and facilitating a more comprehensive learning experience for students. Figure 10, presented below, illustrates the homework detection mechanism. When a question is posed, the system employs a sophisticated algorithm to determine if it resembles a homework or assignment question. If the system detects such a question, it refrains from providing a direct answer but instead guides the students towards appropriate resources where they can seek assistance in answering the question. This mechanism encourages students to engage in independent learning and ensures that they receive the necessary support without compromising the integrity of their academic assignments. #### 4.2.3 Automated Generation of Diverse Assessment Questions The system includes a feature that empowers instructors to request VirtualTA to generate questions for exams or quizzes. This functionality serves to streamline the often-laborious task of question generation, providing instructors with a convenient and efficient solution. Instructors have the flexibility to specify the question type, selecting from options such as True/False, Multiple Choice, or Open-Ended questions.
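To illustrate how such a request might be assembled, the sketch below maps the instructor's chosen question type to a prompt template and hands it to an LLM completion callable. The template wording and function names are assumptions made for illustration; the actual deployment uses fine-tuned GPT-3.5 models (for example, a fine-tuned Curie model for open-ended questions) rather than these literal prompts.

```python
from typing import Callable

QUESTION_TEMPLATES = {
    "true_false": "Write {n} true/false questions about: {topic}. State the correct answer after each question.",
    "multiple_choice": "Write {n} multiple-choice questions (four options, mark the correct one) about: {topic}.",
    "open_ended": "Write {n} open-ended exam questions about: {topic}, each with a model answer.",
}

def generate_questions(topic: str, qtype: str, n: int,
                       complete: Callable[[str], str]) -> str:
    """Ask an LLM completion callable for `n` questions of the requested type."""
    if qtype not in QUESTION_TEMPLATES:
        raise ValueError(f"Unsupported question type: {qtype}")
    return complete(QUESTION_TEMPLATES[qtype].format(n=n, topic=topic))

# Example: generate_questions("the engineering design process", "multiple_choice", 5, gpt35_complete)
```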
By leveraging this feature, instructors can save valuable time and effort, allowing them to focus on other aspects of course preparation and instruction. This capability offered by VirtualTA aims to enhance the overall experience for instructors, facilitating the creation of diverse and relevant assessment materials. Figure 10: Homework detection mechanism ## 5 Discussions The VirtualTA system offers a more comprehensive and adaptable approach to educational support compared to existing educational chatbot systems. Its integration of various functionalities, support for different question types, context-awareness, and emphasis on academic integrity and learning analytics contribute to a more sophisticated and effective educational support system. In comparison to existing research and applications in the field of educational chatbots, the VirtualTA system introduces novel features and addresses specific challenges in higher education, proving it to be a valuable contribution to the field of educational technology and chatbot development. Firstly, the VirtualTA system goes beyond traditional chatbot functionalities by incorporating features such as flashcards, quizzes, automated homework evaluation, coding sandbox, and summary generation. These additional functionalities provide a comprehensive learning support ecosystem that goes beyond basic question-answering capabilities. Secondly, the VirtualTA system emphasizes the importance of academic integrity and learning analytics. By providing automated homework evaluation and incorporating measures to prevent cheating, the system ensures fair assessment and promotes ethical academic practices. The utilization of learning analytics enables instructors to gain insights into student performance and engagement, facilitating data-driven decision-making. Furthermore, the VirtualTA system aims to integrate seamlessly with existing Learning Management Systems (LMS), such as Canvas, to enhance accessibility and user experience. This integration potential sets it apart from standalone chatbot systems and allows for a more integrated and streamlined educational environment. ### Limitations and Challenges While the VirtualTA system demonstrates great potential, it is important to acknowledge the challenges and limitations encountered during its development. Throughout the development of the VirtualTA system, we encountered several challenges and limitations that shaped the implementation. One major challenge we faced was the handling of PDF files, which often contain unstructured data. Extracting structured information from PDFs proved to be a complex task, especially when dealing with scanned copies that require Optical Character Recognition (OCR) to parse the content accurately. While we did not implement OCR functionality at the time of writing this paper, it remains a limitation that can be addressed in future iterations of the system. Another challenge we encountered was related to the integration of Learning Management Systems (LMS). LMS platforms typically lack standardized methods for requesting data in a desired format. As a result, we had to devise workarounds to extract and process the necessary information from the LMS. This required careful development of a custom LMS library to ensure compatibility and efficient data retrieval. Additionally, integrating the Whisper ASR system posed challenges due to the limitations of the API.
The API imposes a constraint of 25MB on the data size (Brockman et al., 2023), while many class recordings, including video files (MP4), exceed this limit. To overcome this limitation, the videos were partitioned into smaller chunks or compressed to reduce the file size, enabling their use within the Whisper API. Furthermore, the frequent updates and advancements in the underlying models posed another challenge. As the models evolved, we needed to upgrade the APIs and adapt the system to leverage the latest technological improvements. Staying abreast of the newer developments in the field required continuous effort to ensure VirtualTA system remained up-to-date and aligned with state-of-the-art techniques. These challenges and limitations underscore the iterative nature of the system development, where ongoing improvements and future enhancements can address these areas and further enhance the system's capabilities.

### Opportunities and Future Directions

The research findings and development of VirtualTA system open numerous opportunities and future directions for further improvements. By addressing these opportunities and future directions, VirtualTA system can further revolutionize the role of AI in higher education, enhancing student learning experiences and paving the way for the next generation of educational technology.

1. Enhanced Natural Language Understanding: Invest in research and development to improve the system's natural language understanding capabilities by exploring advanced natural language processing techniques, such as semantic parsing, entity recognition, and sentiment analysis.
2. Personalization and Adaptive Learning: Develop adaptive learning algorithms to personalize VirtualTA system, addressing the unique needs and learning styles of each student, fostering engagement, and contributing to more effective learning outcomes.
3. Multimodal Learning Support: Integrate multimedia resources, such as video lectures, interactive simulations, and visual aids, to provide comprehensive and diverse learning support for various learning styles and preferences.
4. User Feedback and Evaluation: Conduct rigorous user feedback and evaluation studies to gather insights into VirtualTA system's effectiveness and usability. Feedback from students, instructors, and educational stakeholders will help identify areas of improvement and validate the system's impact on student learning outcomes.
5. Integration with Multiple LMSs: Investigate the feasibility of integrating the AIIA with a broader range of LMS platforms, ensuring compatibility with various institutions and expanding its reach.
6. Real-time Video Interaction: Implement real-time video interaction features, enabling students to virtually attend lectures, ask questions, and receive immediate feedback from the AI assistant or instructors.
7. Instructor-Assistant Collaboration: Enhance VirtualTA system to include features that foster collaboration between instructors and the AI assistant, allowing them to share content, coordinate responses, and provide combined support to students.
8. Gamification and Engagement: Integrate gamification elements within VirtualTA system to motivate students, enhance engagement, and create a more enjoyable learning experience.
9. Longitudinal Studies: Conduct long-term studies to assess the impact of VirtualTA system on student performance, retention, and overall academic outcomes.
10. Ethical Considerations and Privacy: Investigate the ethical implications of using AI in education, addressing concerns related to data privacy, algorithmic bias, and the potential impact on the human role in education.

## 6 Conclusions

This research has presented the design, implementation, and evaluation of an Artificial Intelligence-Enabled Intelligent Assistant (AIIA) for personalized and adaptive learning in higher education. Through the integration of advanced AI technologies and natural language processing techniques, VirtualTA system aims to enhance learning outcomes and promote student engagement, while addressing the diverse needs of learners in qualitative disciplines. The system's capabilities span various functionalities, including responsive question-answering, flashcard integration, automated assessment, embedded coding sandbox, summarization of course content, and context-aware conversation. Additionally, the system offers instructor-focused enhancements, such as auto-evaluation for assignment grading, homework detection mechanisms, and automated question generation. By providing a comprehensive suite of tools and resources, VirtualTA system has the potential to revolutionize the role of AI in higher education. However, it is crucial to acknowledge the challenges and limitations encountered during the development process, which can be addressed in future iterations. The opportunities and directions outlined in this paper provide a roadmap for further advancements in VirtualTA system and the broader field of AI-enabled educational technology.

In conclusion, VirtualTA system represents a significant contribution to the ongoing efforts to integrate AI and natural language processing into educational contexts. By fostering self-regulated learning, promoting student-faculty communication, and expanding access to learning resources, the AIIA framework aims to enhance the effectiveness of learning support and shape the future trajectory of higher education. As we continue to refine the system and explore new avenues of research and development, we move closer to realizing the full potential of AI-enabled educational technology in transforming the higher education landscape, empowering learners, and nurturing the next generation of professionals.

## Funding

Funding for this project was provided by the National Oceanic & Atmospheric Administration (NOAA), awarded to the Cooperative Institute for Research to Operations in Hydrology (CIROH) through the NOAA Cooperative Agreement with The University of Alabama (NA22NW84320003) and National Science Foundation (#2230710).

### Availability of Data and Materials

All data that is produced and analyzed in the manuscript is readily available and presented in the manuscript.
2310.03755
Physics Informed Neural Network Code for 2D Transient Problems (PINN-2DT) Compatible with Google Colab
We present an open-source Physics Informed Neural Network environment for simulations of transient phenomena on two-dimensional rectangular domains, with the following features: (1) it is compatible with Google Colab, which allows automatic execution in a cloud environment; (2) it supports two-dimensional time-dependent PDEs; (3) it provides a simple interface for the definition of the residual loss, boundary condition loss and initial condition loss, together with their weights; (4) it supports Neumann and Dirichlet boundary conditions; (5) it allows for customizing the number of layers and neurons per layer, as well as an arbitrary activation function; (6) the learning rate and number of epochs are available as parameters; (7) it automatically differentiates the PINN with respect to spatial and temporal variables; (8) it provides routines for plotting the convergence (with running average), the initial conditions learnt, 2D and 3D snapshots from the simulation, and movies; (9) it includes a library of problems: (a) non-stationary heat transfer; (b) wave equation modeling a tsunami; (c) atmospheric simulations including thermal inversion; (d) tumor growth simulations.
Paweł Maczuga, Maciej Sikora, Maciej Skoczeń, Przemysław Rożnawski, Filip Tłuszcz, Marcin Szubert, Marcin Łoś, Witold Dzwinel, Keshav Pingali, Maciej Paszyński
2023-09-24T07:08:36Z
http://arxiv.org/abs/2310.03755v2
Physics Informed Neural Network Code for 2D Transient Problems (PINN-2DT) Compatible with Google Colab ###### Abstract We present an open-source Physics Informed Neural Network environment for simulations of transient phenomena on two-dimensional rectangular domains, with the following features: (1) it is compatible with Google Colab which allows automatic execution on cloud environment; (2) it supports two dimensional time-dependent PDEs; (3) it provides simple interface for definition of the residual loss, boundary condition and initial loss, together with their weights; (4) it support Neumann and Dirichlet boundary conditions; (5) it allows for customizing the number of layers and neurons per layer, as well as for arbitrary activation function; (6) the learning rate and number of epochs are available as parameters; (7) it automatically differentiates PINN with respect to spatial and temporal variables; (8) it provides routines for plotting the convergence (with running average), initial conditions learnt, 2D and 3D snapshots from the simulation and movies (9) it includes a library of problems: (a) non-stationary heat transfer; (b) wave equation modeling a tsunami; (c) atmospheric simulations including thermal inversion; (d) tumor growth simulations. **Keywords:** Physics Informed Neural Networks, 2D non-stationary problems, Google Colab, Wave equations, Atmospheric simulations, Tumor growth simulations ## 1 Program summary _Program Title:_ PINN-2DT _Licensing provisions:_ MIT license (MIT) _Programming language:_ Python _Nature of problem:_ Solving non-stationary problems in 2D _Solution method:_ Physics Informed Neural Networks. The implementation requires definition of PDE loss, initial conditions loss, and boundary conditions loss _Additional comments including Restrictions and Unusual features:_ The code is prepared in a way to be compatible with Google Colab ## 2 Introduction The goal of this paper is to replace the functionality of the time-dependent solver we published using isogeometric analysis and fast alternating directions solver [5, 6, 7] with the Physics Informed Neural Network (PINN) python library that can be easily executed on Colab. The PINN proposed in 2019 by Prof. Karniadakis revolutionized the way in which neural networks find solutions to initial-value problems described using partial differential equations [1] This method treats the neural network as a function approximating the solution of the given partial differential equation \(u(x)=PINN(x)\). After computing the necessary differential operators, the neural network and its appropriate differential operators are inserted into the partial differential equation. The residuum of the partial differential equation and the boundary-initial conditions are assumed as the loss function. The learning process involves sampling the loss function at different points by calculating the PDE residuum and the initial boundary conditions. The PINN methodology has had exponential growth in the number of papers and citations since its creation in 2019. It has multiple applications, from solid mechanics [15], geology [4], medical applications [11], and even the phase-field modeling of fracture [14]. Why use PINN solvers instead of classical or higher order finite element methods (e.g., isogeometric analysis) solvers? PINN/VPINN solvers have affordable computational costs. They can be easily implemented using pre-existing libraries and environments (like Pytorch and Google Colab). They are easily parallelizable, especially on GPU. 
They have great approximation capabilities, and they enable finding solutions to a family of problems. With the introduction of modern stochastic optimizers such as ADAM [3], they easily find high-quality minimizers of the loss functions employed. In this paper, we present the PINN library with the following features * It is implemented in Pythorch and compatible with Google Colab. * It supports two-dimensional problems defined on a rectangular domain. * It is suitable for smooth problems without singularities resulting from large contrast material data. * It enables the definition of the PDE residual loss function in the space-time domain. * It supports the loss function for defining the initial condition. * It provides loss functions for Neumann and Dirichlet boundary conditions. * It allows for customization of the loss functions and their weights. * It allows for defining an arbitrary number of layers of the neural network and an arbitrary number of neurons per layer. * The learning rate, the kind of activation function, and a number of epochs are problem-specific parameters. * It automatically performs differentiation of the PINN with respect to spatial and temporal variables. * It provides tools for plotting the convergence of all the loss functions, together with the running average. * It enables the plotting of the exact and learned initial conditions. * It plots 2D or 3D snapshots from the simulations. * It generates gifs with the simulation animation. We illustrate our PINN-2DT code with four numerical examples. The first one concerns the model heat transfer problem. The second one presents the solution to the wave equation. The third one is the simulation of the thermal inversion, and the last one is the simulation of brain tumor growth. There are the following available PINN libraries. First and most important is the DeepXDE library [12] by the team of Prof. Karniadakis. It is an extensive library with huge functionality, including ODEs, PDEs, complex geometries, different initial and boundary conditions, and forward and inverse problems. It supports several tensor libraries such as TensorFlow, PyTorch, JAX, and PaddlePaddle. Another interesting library is IDRLnet [13]. It uses pytorch, numpy, and Matplotlib. This library is illustrated on four different examples, namely the wave equation, Allan-Cahn equations, Volterra integrodifferential equations, and variational minimization problems. What is the novelty of our library? Our library is very simple to use and compatible with Google Colab. It is a natural "copy" of the functionality of the IGA-ADS library [5] into the PINN methodology. It contains a simple, straightforward interface for solving different time-dependent problems. Our library can be executed without accessing the HPC center just by using the Colab functionality. The structure of the paper is the following. In Section 2, we recall the general idea of PINN on the example of the heat transfer problem. Section 3 is devoted to our code structure, from Colab implementation, model parameters, basic Python classes, how we define initial and boundary conditions, loss functions, how we run the training, and how we process the output. Section 4 provides four examples from heat transfer, wave equation, thermal inversion, and tumor growth simulations. We conclude the paper in Section 5. ## 3 Physics Informed Neural Network for transient problems on the example of heat transfer problem Let us consider a strong form of the exemplary transient PDE, the heat transfer problem. 
Find \(u\in C^{2}(0,1)\) for \((x,y)\in\Omega=[0,1]^{2}\), \(t\in[0,T]\) such that \[\underbrace{\frac{\partial u(x,y,t)}{\partial t}}_{\text{temperature evolution}}-\underbrace{\varepsilon} _{\text{diffusion term}}^{2}-\varepsilon\frac{\partial^{2}u(x,y,t)}{\partial x^{2}} -\varepsilon\frac{\partial^{2}u(x,y,t)}{\partial y^{2}}=f\underbrace{(x,y,t)} _{\text{focing}},(x,y,t)\in\Omega\times[0,T], \tag{1}\] with initial condition \[u(x,y,0)=u_{0}(x,y) \tag{2}\] and zero-Neumann boundary condition \[\frac{\partial u}{\partial n}=0\ (x,y)\in\partial\Omega \tag{3}\] In the Physics Informed Neural Network approach, the neural network is the solution, namely \[u(x,y,t)=PINN(x,y,t)=A_{n}\sigma\left(A_{n-1}\sigma(...\sigma(A_{1}[x,y,t]+B_{ 1})...+B_{n-1}\right)+B_{n} \tag{4}\] where \(A_{i}\) are matrices representing DNN layers, \(B_{i}\) represent bias vectors, and \(\sigma\) is the non-linear activation function, e.g., sigmoid, which as we have shown in [2], is the best choice for PINN. We define the loss function as the residual of the PDE \[LOSS_{PDE}(x,y,t)=\left(\frac{\partial PINN(x,y,t)}{\partial t}-\epsilon \frac{\partial^{2}PINN(x,y,t)}{\partial x^{2}}-\epsilon\frac{\partial^{2}PINN (x,y,t)}{\partial y^{2}}-f(x,y,t)\right)^{2} \tag{5}\] We also define the loss for training the initial condition as the residual of the initial condition \[LOSS_{Init}(x,y,0)=\left(PINN(x,y,0)-u_{0}(x,y)\right)^{2} \tag{6}\] as well as the loss of the residual of the boundary condition \[LOSS_{BC}(x,y,t)=\left(\frac{\partial PINN(x,y,t)}{\partial n}(x,y,t)-0\right) ^{2} \tag{7}\] The sketch of the training procedure is the following * Select points \((x,y,t)\in\Omega\times[0,T]\) randomly * Correct the weights using the strong loss \[A_{i,j}^{k}=A_{i,j}^{k}-\eta\frac{\partial LOSS_{PDE}(x,y,t)}{\partial A_{i, j}^{k}}\] (8) \[B_{i}^{k}=B_{i}^{k}-\eta\frac{\partial LOSS_{PDE}(x,y,t)}{\partial B_{i}^{k}}\] (9) where \(\eta\in(0,1)\) is the training rate. * Select point \((x,y)\in\partial\Omega\) randomly \[A_{i,j}^{k}=A_{i,j}^{k}-\eta\frac{\partial LOSS_{BC}(x,y,t)}{\partial A_{i, j}^{k}}\] (10) \[B_{i}^{k}=B_{i}^{k}-\eta\frac{\partial LOSS_{BC}(x,y,t)}{\partial B_{i}^{k}}\] (11) where \(\eta\in(0,1)\). * Select point \((x,y,0)\in\Omega\times\{0\}\) randomly \[A_{i,j}^{k}=A_{i,j}^{k}-\eta\frac{\partial LOSS_{Init}(x,y,0)}{\partial A_{i, j}^{k}}\] (12) \[B_{i}^{k}=B_{i}^{k}-\eta\frac{\partial LOSS_{Init}(x,y,0)}{\partial B_{i}^{k}}\] (13) where \(\eta\in(0,1)\). * Until \(w_{PDE}LOSS_{PDE}+w_{BC}LOSS_{BC}+w_{Init}LOSS_{Init}\leq\delta\) In practice, this simple stochastic gradient method is replaced by a more sophisticated e.g., ADAM method [3]. ## 4 Structure of the code ### Colab implementation Our code is available at [https://github.com/pmaczuga/pinn-notebooks](https://github.com/pmaczuga/pinn-notebooks) The code can be downloaded, opened in Google Colab, and executed in the fully automatic mode. The code has been created to be compatible with Google Colab, and it employs the pytorch library. ``` fromtypingimportCallable importmatplotlib.pyplotasplt importnumpyasmp importtorch... ``` The code can automatically run on a cluster of GPUs, as provided by the Google Colab computing environment ``` device=torch.device("cuda"iftorch.cuda.is_available()else"cpu") ``` ### Parameters There are the following model parameters that the user can define * LENGTH, TOTAL_TIME. The code works in the space-time domain, where the training is performed by selecting point along \(x\), \(y\) and \(t\) axes. 
The LENGTH parameter defines the dimension of the domain along \(x\) and \(y\) axes. The domain dimension is [0,LENGTH]x[0,LENGTH]x[0,TOTAL_TIME]. The TOTAL_TIME parameter defines the length of the space-time domain along the \(t\) axis. It is the total time of the transient phenomena we want to simulate. * N_POINTS. This parameter defines the number of points used for training. By default, the points are selected randomly along \(x\), \(y\), and \(t\) axes. It is easily possible to extend the code to support different numbers of points or different distributions of points along different axes of the coordinate system. * N_POINTS_PLOT. This parameter defines the number of points used for probing the solution and plotting the output plots after the training. * WEIGHT_RESIDUAL, WEIGHT_INITIAL, WEIGHT_BOUNDARY. These parameters define the weights for the training of residual, initial condition, and boundary condition loss functions. * LAYERS, NEURONS_PER_LAYER. These parameters define the neural network by providing the number of layers and number of neurons per neural network layer. * EPOCHS, and LEARNING_RATE provide a number of epochs and the training rate for the training procedure. Below we provide the exemplary values of the parameters as employed for the wave equation simulations ``` #ParametersLENGTH=2. TOTAL_TIME=.5 N_POINTS=15 N_POINTS_PLOT=150WEIGHT_RESIDUAL=0.03 WEIGHT_INITIAL=1.0 WEIGHT_BOUNDARY=0.0005 LAYERS=10 NEURONS_PER_LAYER=120 EPOCHS=150.000 LEARNING_RATE=0.00015 GRAVITY=9.81 ``` ### PINN class The PINN class defines the functionality for a simple neural network accepting three features as input, namely the values of \((x,y,t)\) and returning a single output, namely the value of the solution \(u(x,y,t)\). We provide the following features: * The f routine compute the values of the approximate solution at point \((x,y,t)\). * The routines dfdt, dfdx, dfdy compute the derivatives of the approximate solution at point \((x,y,t)\) with respect to either \(x\), \(y\), or \(t\) using the pytorch autograd method. ``` classPINN(nn.Module): def__init__(self,num_hidden:int,dim_hidden:int,act=nn.Tanh()): deff(pinn:PINN,x:torch.Tensor,y:torch.Tensor,t:torch.Tensor)->torch.Tensor: returnpinn(x,y,t) defdf(output:torch.Tensor,input:torch.Tensor,order:int=1)->torch.Tensor: df_value=output for_inrange(order): df_value=torch.autograd.grad( diff_value, input, grad_outputs=torch.ones_like(input), create_graph=True, retain_graph=True, )[0] returndf_value defdfdt(pinn:PINN,x:torch.Tensor,y:torch.Tensor,t:torch.Tensor,order:int=1): f_value=f(pinn,x,y,t) returndf(f_value,t,order=order) defdfdx(pinn:PINN,x:torch.Tensor,y:torch.Tensor,t:torch.Tensor,order:int=1): f_value=f(pinn,x,y,t) returndf(f_value,y,order=order) ``` ### Processing initial and boundary conditions Since the training is performed in the space-time domain [0,LENGTH]x[0,LENGTH]x[0,TOTAL_TIME], we provide in * get_interior_points the functionality to identify the points from the training of the residual loss, in * get_initial_points the functionality to identify points for the training of the initial loss, and in * get_boundary_points the functionality for training the boundary loss. ``` defget_boundary_points(x_domain,y_domain,t_domain,m_points, / device=torch.device("cpu"),requires_grad=True): """.+-----.+.'/.'/ / / / / / / /.'/.'/.' =.*.' 
= x_linspace=torch.linspace(x_domain[0],x_domain[1],n_points) y_linspace=torch.linspace(y_domain[0],y_domain[1],n_points) t_linspace=torch.linspace(t_domain[0],t_domain[1],n_points) x_grid,t_grid=torch.meshgrid(x_linspace,t_linspace,indexing="ij") y_grid,-=torch.meshgrid(y_linspace,t_linspace,indexing="ij") x_grid=x_grid.reshape(-1,1).to(device) x_grid.requires_grad=requires_grad y_grid=y_grid.reshape(-1,1).to(device) y_grid.requires_grad=requires_grad t_grid=t_grid.reshape(-1,1).to(device) t_grid.requires_grad=requires_grad x0=torch.full_like(t_grid,x_domain[0],requires_grad=requires_grad x1=torch.full_like(t_grid,x_domain[1],requires_grad=requires_grad y0=torch.full_like(t_grid,y_domain[0],requires_grad=requires_grad y1=torch.full_like(t_grid,y_domain[1],requires_grad=requires_grad) down=(x_grid,y0,t_grid) up=(x_grid,y1,t_grid left=(x0,y_grid,t_grid right=(x1,y_grid,t_grid) returndown,up,left,right defget_initial_points(x_domain,y_domain,t_domain,n_points,\ device=torch.device("cpu"),requires_grad=True); x_linspace=torch.linspace(x_domain[0],x_domain[1],n_points) yl_linspace=torch.linspace(y_domain[0],y_domain[1],n_points) x_grid,y_grid=torch.meshgrid(x_linspace,y_linspace,indexing="ij") x_grid=x_grid.reshape(-1,1).to(device) x_grid.requires_grad=requires_grad y_grid.requires(-1,1).to(device) y_grid.requires_grad=requires_grad t0=torch.full_like(x_grid,t_domain[0],requires_grad=requires_grad) return(x_grid,y_grid,t0) defget_interior_points(x_domain,y_domain,t_domain,n_points,\ device=torch.device("cpu"),requires_grad=True): x_raw=torch.linspace(x_domain[0],x_domain[1],steps=n_points,requires_grad=requires_grad) y_raw=torch.linspace(y_domain[0],y_domain[1],steps=n_points,requires_grad= requires_grad) t_raw=torch.linspace(t_domain[0],t_domain[1],steps=n_points,requires_grad= requires_grad) grids=torch.meshgrid(x_raw,y_raw,t_raw,indexing="ij") x=grids[0].reshape(-1,1).to(device) y=grids[1].reshape(-1,1).to(device) t=grids[2].reshape(-1,1).to(device) returnx,y,t ### Loss functions Inside the Loss class, we provide interfaces for the definition of the loss functions. Namely, we define the residual_loss, initial_loss and boundary_loss. Since the initial and boundary loss is universal, and residual loss is problem specific, we provide fixed implementations for the initial and boundary losses, assuming that the initial state is prescribed in the initial_condition routine and that the boundary conditions are zero Neumann. The code can be easily extended to support different boundary conditions. ``` classLoss:... 
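# NOTE (sketch, not part of the published listing): train_model below calls the Loss
# object as loss_fn(pinn) and indexes the result as loss[0]..loss[3], so the class
# presumably also exposes a __call__ that combines the three criteria with the weights
# WEIGHT_RESIDUAL, WEIGHT_INITIAL and WEIGHT_BOUNDARY, roughly along these lines:
#
#   def __call__(self, pinn: PINN):
#       residual = self.residual_loss(pinn)
#       initial = self.initial_loss(pinn)
#       boundary = self.boundary_loss(pinn)
#       total = self.weight_r * residual + self.weight_i * initial + self.weight_b * boundary
#       return total, residual, initial, boundary
#
# (the attribute names weight_r / weight_i / weight_b are assumptions).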
defresidual_loss(self,pinn:PINN): x,y,t=get_interior_points(self.x_domain,self.y_domain,self.t_domain,\ self.n_points,pinn.device()) u=f(pinn,x,y,t) z=self.floor(x,y) loss=#HEHEDEFINE RESIDULLOSS returnloss.pow(2).mean definitial_loss(self,pinn:PINN): x,y,t=get_initial_points(self.x_domain,self.y_domain,self.t_domain,\ self.n_points,pinn.device()) pinn_init=self.initial_condition(x,y) loss=f(pinn,x,y,t)-pinn_init returnloss.pow(2).mean() defboundary_loss(self, pinn: PINN): down, up, left, right = get_boundary_points(self.x_domain, self.y_domain, self.t_domain, \ self.n_points, pinn.device()) x_down, y_down, t_down = down x_up, y_up, t_up = up x_left, y_left, t_left = left x_right, y_right, t_right = right loss_down = dfdy( pinn, x_down, y_down, t_down ) loss_up = dfdy( pinn, x_up, y_up, t_up ) loss_left = dfdx( pinn, x_left, y_left, t_left ) loss_right = dfdx( pinn, x_right, y_right, t_right ) return loss_down.pow(2).mean() + \ loss_up.pow(2).mean() + \ loss_left.pow(2).mean() + \ loss_right.pow(2).mean() The initial condition is defined in the initial_condition routine, which returns a value of the initial condition at point \((x,y,0)\). ``` #Initialcondition definitial_condition(x:torch.Tensor,y:torch.Tensor)->torch.Tensor:... res=#HEREDEFINETHEIINITIALCOBDITION#1(z,y,0) returnres ``` ### Training During the training, we select the Adam [3] optimizer, and we prescribe that for every 1000 epochs of training, we will write the summary of the values of the residual, initial, and boundary losses. The user can modify this optimizer and the reporting frequency. ``` deftrain_model( nn_approximator:PINN, loss_fn:Callable, learning_rate:int=0.01, max_epochs:int=1_000 )->PINN: optimizer=torch.optim.Adam(nn_approximator.parameters(),lr=learning_rate) loss_values=[] residual_loss_values=[] initial_loss_values=[] boundary_loss_values=[] start_time=time.time() forepochinrange(max_epochs): try: loss:torch.Tensor=loss_fn(nn_approximator) optimizer.zero_grad() loss[0].backward() optimizer.step() loss_values.append(loss[0].item()) residual_loss_values.append(loss[1].item()) initial_loss_values.append(loss[2].item()) boundary_loss_values.append(loss[3].item()) if(epoch+1)%1000==0: epoch_time=timetime.time()-start_time start_time start_time=time.time() print(f^Epoch:{epoch+1}-Loss:{float(loss[0].item()):>7f}, \ ResidualLoss:{float(loss[i].item()):>7f}, \ InitialLoss:{float(loss[2].item()):>7f}, \ BoundaryLoss:{float(loss[3].item()):>7f}=) exceptKeyboardInterrupt: break returnnn_approximator, np.array(loss_values), \ np.array(residual_loss_values), \ np.array(initial_loss_values), \ np.array(boundary_loss_values) ``` ### Output We provide several routines for plotting the convergence of the loss function (see Fig. 1, ``` #Plotting #Lossfunction average_loss=running_average(loss_values,window=100) fig,ax=plt.subplots(figsize=(8,6),dp1=100) ax.set_title('Lossfunction(runningaverage)') ax.set_xlabel("Epoch") ax.set_label("Loss") ax.plot(average_loss) ax.set_yscale('log') ``` for plotting the running average of the loss (see Fig. 2), ``` average_loss=running_average(initial_loss_values,window=100) fig,ax=plt.subplots(figsize=(8,6),dp1=100) ax.set_title('Initial_lossfunction(runningaverage)') ax.set_xlabel("Epoch") ax.set_ylabel("Loss") ax.plot(average_loss) ax.set_yscale('log') ``` base_dir='.' 
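# The plotting snippets above and below rely on a helper `running_average` that is not
# shown in this excerpt. A minimal sketch of such a helper (an assumption; the version
# in the repository may differ) is:
def running_average(values, window=100):
    # simple moving average used to smooth the loss history before plotting
    return np.convolve(values, np.ones(window) / window, mode="valid")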
x,y,-=get_initial_points(x_domain,y_domain,t_domain,N_POINTS_PLOT,requires_grad=False) z=initial_condition(x,y) fig=plot_color(z,x,y,N_POINTS_PLOT,N_POINTS_PLOT,"Initialcondition-exact") t_value=0.0 t=torch.full_like(x,t_value) z=pinn(x,y,t) fig=plot_color(z,x,y,N_POINTS_PLOT,N_POINTS_PLOT,"Initialcondition-PINN") for plotting the initial conditions in 3D (see Fig. 4) ``` #Plotting#Initialcondition x,y,-=get_initial_points(x_domain,y_domain,t_domain,N_POINTS_PLOT,requires_grad=False) z=initial_condition(x,y) fig=plot_3D(z,x,y,N_POINTS_PLOT,N_POINTS_PLOT,"Initialcondition-exact") z=pinn(x,y,t) fig=plot_3D(z,x,y,N_POINTS_PLOT,N_POINTS_PLOT,f"Initialcondition-pinn") for plottingthesnapshots of the solution (see Fig. 5) ``` defplot(idx,t_value): t=torch.full_like(x,t_value) z=pinn(x,y,t) fig=plot_color(z,x,y,N_POINTS_PLOT,N_POINTS_PLOT,f"PINNfort=(t_value)") fig=plot_3D(z,x,y,N_POINTS_PLOT,N_POINTS_PLOT,f"PINNfort=(t_value)") #plt.savefig(base_dir+'/img.('034).png'.format(idx)) time_values=np.arange(0,TOTAL_TIME,0.01) Figure 4: Heat equation. Initial conditions in 3D. Figure 3: Heat equation. Initial conditions in 2D. foridx,t_valinenumerate(time_values): plot(idx,t_value) z=pimx(y,y,t) fig=plot.color(z,x,y,N_POINTS_PLOT,N_POINTS_PLOT,f^PINNfort={t_value}") fig=plot.3D(z,x,y,N_POINTS_PLOT,N_POINTS_PLOT,f^PINNfort={t_val}") #plt.savefig(base_dir+'/img/img.(:03d).png'.format(idx)) time_values=np.arange(0,TOTAL_TIME,0.01) foridx,t_valinenumerate(time_values): plot(idx,t_val) and for the generation of the animated gif with the simulation results. ``` fromgoogle.colabimportdrive drive.mount('/content/drive') importimages frames=[] foridxinrange(len(time_values)): image=image.v2.imread(base_dir+'/img/img.(:03d).png'.format(idx)) frames.append(image) imageio.minsave('./tsunami_wave12.gif',#outputgif frames,#arrayofinputframes duration=0.1)#optional:framespersecon ``` ## 5 Examples of the instantiation ### Heat transfer In this section, we present the numerical results for the model heat transfer problem described in Section 2. 
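Before turning to the problem-specific definitions, the following sketch shows how the pieces described above are typically wired together in a notebook. The `PINN` and `train_model` calls follow the listings shown earlier, while the `Loss` constructor arguments are an assumption, since its full signature is not reproduced in this excerpt.

```
# Minimal driver sketch (assumptions noted in comments)
pinn = PINN(LAYERS, NEURONS_PER_LAYER).to(device)
loss_fn = Loss(x_domain, y_domain, t_domain, N_POINTS)   # constructor arguments assumed
pinn, loss_values, residual_loss_values, initial_loss_values, boundary_loss_values = \
    train_model(pinn, loss_fn, learning_rate=LEARNING_RATE, max_epochs=EPOCHS)
```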
The residual loss function \(\mathit{LOSS}_{PDE}(x,y,t)=\left(\frac{\partial PINN(x,y,t)}{\partial t}-\frac{\partial^{2}PINN(x,y,t)}{\partial x^{2}}-\frac{\partial^{2}PINN(x,y,t)}{\partial y^{2}}-f(x,y,t)\right)^{2}\) translates into the following code

```
def residual_loss(self, pinn: PINN):
    x, y, t = get_interior_points(self.x_domain, self.y_domain,
                                  self.t_domain, self.n_points, pinn.device())
    loss = dfdt(pinn, x, y, t, order=1) \
        - dfdx(pinn, x, y, t, order=2) \
        - dfdy(pinn, x, y, t, order=2)
    return loss.pow(2).mean()
```

We employ the manufactured solution technique, where we assume the solution of the following form

\[u(x,y,t)=e^{-2\pi^{2}t}\sin(\pi x)\sin(\pi y) \tag{14}\]

over \(\Omega=[0,1]^{2}\). To obtain this particular solution, we set up the zero Dirichlet boundary conditions, which require the following code

```
def boundary_loss_dirichlet(self, pinn: PINN):
    down, up, left, right = get_boundary_points(self.x_domain, self.y_domain,
                                                self.t_domain, self.n_points, pinn.device())
    x_down, y_down, t_down = down
    x_up, y_up, t_up = up
    x_left, y_left, t_left = left
    x_right, y_right, t_right = right
    loss_down = f(pinn, x_down, y_down, t_down)
    loss_up = f(pinn, x_up, y_up, t_up)
    loss_left = f(pinn, x_left, y_left, t_left)
    loss_right = f(pinn, x_right, y_right, t_right)
    return loss_down.pow(2).mean() + \
        loss_up.pow(2).mean() + \
        loss_left.pow(2).mean() + \
        loss_right.pow(2).mean()
```

We also set up the initial state

\[u_{0}(x,y)=\sin\left(\pi x\right)\sin\left(\pi y\right) \tag{15}\]

which translates into the following code

```
def initial_condition(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    res = torch.sin(torch.pi * x) * torch.sin(torch.pi * y)
    return res
```

The default setup of the parameters for this simulation is the following:

```
LENGTH = 1.
TOTAL_TIME = 1.
N_POINTS = 15
N_POINTS_PLOT = 150
WEIGHT_RESIDUAL = 1.0
WEIGHT_INITIAL = 1.0
WEIGHT_BOUNDARY = 1.0
LAYERS = 4
NEURONS_PER_LAYER = 80
EPOCHS = 20_000
LEARNING_RATE = 0.002
```

The convergence of the loss function is presented in Fig. 1. The running average of the loss is presented in Fig. 2. The comparison of exact and trained initial conditions is presented in Fig. 3 in 2D and Fig. 4 in 3D. The snapshot from the simulation is presented in Fig. 5 for time moment \(t=0.1\). The mean square error of the computed simulation is presented in Fig. 6. We can see the high accuracy of the trained PINN results.

### Wave equation

In our simulation, we run the wave propagation in the "swimming pool"; thus, we assume \(z(x,y)=0\). It implies some simplifications in the PDE

\[\frac{\partial^{2}u(x,y,t)}{\partial t^{2}}-g\left(\frac{\partial u(x,y,t)}{\partial x}-\frac{\partial z(x,y)}{\partial x}\right)\frac{\partial u(x,y,t)}{\partial x}-g\left(u(x,y,t)-z(x,y)\right)\frac{\partial^{2}u(x,y,t)}{\partial x^{2}}-g\left(\frac{\partial u(x,y,t)}{\partial y}-\frac{\partial z(x,y)}{\partial y}\right)\frac{\partial u(x,y,t)}{\partial y}-g\left(u(x,y,t)-z(x,y)\right)\frac{\partial^{2}u(x,y,t)}{\partial y^{2}}=0 \tag{19}\]

In the Physics Informed Neural Network approach, the neural network represents the solution,

\[u(x,y,t)=PINN(x,y,t)=A_{n}\sigma\left(A_{n-1}\sigma(\ldots\sigma(A_{1}[x,y,t]+B_{1})\ldots)+B_{n-1}\right)+B_{n} \tag{20}\]

with \(A_{i}\) the matrices representing the layers, \(B_{i}\) the bias vectors, and \(\sigma\) the sigmoid activation function [2].
We define the loss function as the residual of the PDE \[LOSS_{PDE}(x,y,t)=\left(\frac{\partial^{2}PINN(x,y,t)}{\partial t ^{2}}-g\left(\frac{\partial PINN(x,y,t)}{\partial x}\right)^{2}-g\left(PINN(x,y,t)-z(x,y)\right)\frac{\partial^{2}PINN(x,y,t)}{\partial x^{2}}\right.\] \[\left.-g\left(\frac{\partial PINN(x,y,t)}{\partial y}\right)^{2}- g\left(PINN(x,y,t)-z(x,y)\right)\frac{\partial^{2}PINN(x,y,t)}{\partial y^{2}} \right)^{2} \tag{21}\] This residual translates into the following code ``` defresidual_loss(self,pinn:PINN): x,y,t=get_interior_points(self.x_domain,self.y_domain,self.t_domain,self.n_points,pinn.device()) u=f(pinn,x,y,t) z=self.floor(x,y) loss=dfdt(pinn,x,y,t,order=2)-GRAVITY*(dfdx(pinn,x,y,t)**2+(u-z)*dfdx(pinn,x,y,t,order=2)+dfdy(pinn,x,y,t)**2+(u-z)*dfdy(pinn,x,y,t,order=2)) returnloss.pow(2).mean() ``` We also define the loss for training of the initial condition. It is defined as the residual of the initial condition \[LOSS_{Init}(x,y,0)=(PINN(x,y,0)-u_{0}(x,y))^{2} \tag{22}\] Figure 6: Heat equation. Numerical error of the trained PINN solution to the heat transfer problem with manufactured solution. Similarly, we define the loss of the residual of the boundary conditions \[LOSS_{BC}(x,y,t)=\left(\frac{\partial PINN(x,y,t)}{\partial n}(x,y,t)-0\right)^{2} \tag{23}\] We do not have to change the code for the initial and boundary conditions, we just provide an implementation of the initial state ``` definitial_condition(x:torch.Tensor,y:torch.Tensor)->torch.Tensor: r=torch.sqrt((x-LENGTH/2)**2+(y-LENGTH/2)**2) res=2*torch.exp(-(r)**2*30)+2 returnres ``` The convergence of the loss is summarized in Fig. 7. The snapshots of the simulation are presented in Fig. 8. ### Thermal inversion In this example, we aim to model the thermal inversion effect. The numerical results presented in this section are the PINN version of the thermal inversion simulation performed using isogeometric finite element method code [5] described in [9]. The scalar field \(u\) in our simulation represents the water vapor forming a cloud. The source represents the evaporation of the cloud evaporation of water particles near the ground. The thermal inversion effect is obtained by introducing the advection field as the gradient of the temperature. Following [10] we define \(\frac{\partial T}{\partial y}=-2\) for lower half of the domain (\(y<0.5\)), and \(\frac{\partial T}{\partial y}=2\) for upper half of the domain (\(y>0.5\)). We focus on advection-diffusion equations in the strong form. We seek the cloud vapor concentration field \([0,1]^{2}\times[0,1]\ni(x,y,t)\to u(x,y,t)\in\mathcal{R}\) \[\frac{\partial u(x,y,t)}{\partial t}+\left(b(x,y,t)\cdot\nabla \right)u(x,y,t)-\nabla\cdot\left(K\nabla u(x,y,t)\right)=f(x,y,t)\ (x,y,t)\in\Omega\times(0,T] \tag{24}\] \[\nabla u\cdot n=0\ \text{in}\ \partial\Omega\times(0,T]\] (25) \[u(x,y,0)=u_{0}(x,y)\ \text{in}\ \Omega\times 0 \tag{26}\] This PDE translates into \[\frac{\partial u(x,y,t)}{\partial t}+\frac{\partial T(y)}{ \partial y}\frac{\partial u(x,y,t)}{\partial y}-0.1\frac{\partial u(x,y,t)}{ \partial x^{2}}-0.01\frac{\partial u(x,y,t)}{\partial y^{2}}=f(x,y,t)\ (x,y,t)\in\Omega \times(0,T] \tag{28}\] \[\nabla u\cdot n=0\ \text{in}\ \partial\Omega\times(0,T]\] (29) \[u(x,y,0)=u_{0}(x,y)\ \text{in}\ \Omega\times 0 \tag{30}\] In PINN, the neural network represents the solution, \[u(x,y,t)=PINN(x,y,t)=A_{n}\sigma\left(A_{n-1}\sigma(...\sigma(A_{1}[x,y,t]+B_ {1})...+B_{n-1})+B_{n}\right) \tag{31}\] Figure 7: Wave equation. Convergence of the loss function. 
Figure 8: Wave equation simulation. where \(A_{i}\) are matrices representing DNN layers, \(B_{i}\) represent bias vectors, and \(\sigma\) is the sigmoid activation function. We define the loss function as the residual of the PDE \[\left(\frac{\partial PINN(x,y,t)}{\partial t}+\frac{\partial T(y)}{\partial y} \frac{\partial PINN(x,y,t)}{\partial y}-0.1\frac{\partial PINN(x,y,t)}{ \partial x^{2}}-0.01\frac{\partial PINN(x,y,t)}{\partial y^{2}}-f(x,y,t)\right)^ {2} \tag{33}\] This residual translates to the following code ``` defresidual_loss(self,pinn:PINN): x,y,t=get_interior_points(self.x_domain,self.y_domain,self.t_domain,self.n_points,pinn.device()) loss=dft(pinn,x,y,t).to(device) -self.dTy(y,t)*dfdy(pinn,x,y,t).to(device) -self.Kx*dfdx(pinn,x,y,t,order=2).to(device) -self.Ky*dfdy(pinn,x,y,t,order=2).to(device) -self.Ky*dfdy(pinn,x,y,t,order=2).to(device) returnloss.pow(2).mean ``` We add the definitions of the Kx and Ky variables into the Loss class. We do not change the implementation of the initial and boundary conditions, but we provide the definition of the initial state and forcing ``` defsource(self,y,t): d=0.7 res=torch.clamp((torch.cos(t*math.pi)-d)*1/(1-d),min=0) res2=(150-1200*y)*resresres3=torch.where(t<=0.3,res2,0) res4=torch.where(y<=0.125,res3,0) returnres4.to(device) ``` During the training, we use the following global parameters ``` LENGTH=1. TOTAL_TIME=1. N_POINTS=15 N_POINTS_PLOT=150 WEIGHT_RESIDUAL=20.0 WEIGHT_INITIAL=1.0 WEIGHT_BOUNDARY=10.0 LAYERS=2 NEURONS_PE_LAYER=600 EPOCHS=30_000 LEARNING_RATE=0.002 ``` The convergence of the loss function is summarized in Fig. 9. The snapshots from the simulations are presented in Fig. 10. In the thermal inversion, the cloud vapor that evaporated from the ground stays close to the ground, due to the distribution of the temperature gradients. ### Tumor growth The last example concerns the brain tumor growth model, as described in [11]. We seek the tumor cell density \([0,1]^{2}\times[0,1]\ni(x,y,t)\to u(x,y,t)\in\mathcal{R}\), such that \[\frac{\partial u(x,y,t)}{\partial t}=\nabla\cdot\left(D(x,y) \nabla u(x,y,t)\right)+\rho u(x,y,t)\left(1-u(x,y,t)\right)\ (x,y,t)\in\Omega \times(0,T] \tag{34}\] \[\nabla u\cdot n=0\ \text{in}\ \partial\Omega\times(0,T]\] (35) \[u(x,y,0)=u_{0}(x,y)\ \text{in}\ \Omega\times 0 \tag{36}\] which translates into \[\frac{\partial u(x,y,t)}{\partial t}-\frac{\partial D(x,y)}{ \partial x}\frac{\partial u(x,y,t)}{\partial x}-D(x,y)\frac{\partial^{2}u(x,y,t)}{\partial x^{2}}\] \[-\frac{\partial D(x,y)}{\partial y}\frac{\partial u(x,y,t)}{ \partial y}-D(x,y)\frac{\partial^{2}u(x,y,t)}{\partial x^{2}}-\rho u(x,y,t) \left(1-u(x,y,t)\right)=0 \tag{38}\] and Here, \(D(x,y)\) represents the tissue density coefficient, where \(D(x,y)=0.13\) for the white matter, \(D(x,y)=.013\) for the gray matter, and \(D(x,y)=0\) for the cerebrospinal fluid (see [11] for more details). Additionally, \(\rho=0.025\) denotes the proliferation rate of the tumor cells. We simplify the model, and remove the derivatives of the tissue density coefficient: \[\frac{\partial u(x,y,t)}{\partial t}-D(x,y)\frac{\partial^{2}u(x,y,t)}{ \partial x^{2}}-D(x,y)\frac{\partial^{2}u(x,y,t)}{\partial x^{y}}-\rho u(x,y,t )\left(1-u(x,y,t)\right)=0. \tag{39}\] As usual, in PINN, the neural network represents the solution, \[u(x,y,t)=PINN(x,y,t)=A_{n}\sigma\left(A_{n-1}\sigma(...\sigma(A_{1}[x,y,t]+B_{1 })...+B_{n-1})+B_{n}\right. 
\tag{40}\] with \(A_{i}\), and \(B_{i}\) representing matrices and bias vectors, and \(\sigma\) is the sigmoid activation function. We define the loss function as the residual of the PDE \[LOSS_{PDE}(x,y,t)=\] \[\left(\frac{\partial u(x,y,t)}{\partial t}-\frac{\partial D(x,y) }{\partial x}\frac{\partial u(x,y,t)}{\partial x}-D(x,y)\frac{\partial^{2}u(x,y,t)}{\partial x^{2}}\right.\] \[\left.-\frac{\partial D(x,y)}{\partial y}\frac{\partial u(x,y,t) }{\partial y}-D(x,y)\frac{\partial^{2}u(x,y,t)}{\partial x^{y}}-\rho u(x,y,t) \left(1-u(x,y,t)\right)\right)^{2} \tag{41}\] This translates into the following code: ``` defresidual_loss(self,pinn:PINN): x,y,t=get_interior_points( self.x_domain,self.y_domain, self.t_domain,self.n_points,pinn.device()) rho=0.025 defD_fun(x,y)->torch.Tensor: res=torch.zeros(x.shape,dtype=x.dtype,device=pinn.device()) dist=(x-0.5)*2+(y-0.5)*2 res[dist<0.25]=0.13 res[dist<0.02]=0.013 returnres D=D_fun(x,y) u=f(pinn,x,y,t) loss=dfdt(pinn,x,y,t) -D*dfdt(pinn,x,y,t,order=2) -D*dfdt(pinn,x,y,t,order=2) -rho*u=(1-u) returnloss.pow(2).mean() ``` The initial and boundary condition loss functions are unchanged. The initial state is given as follows: Figure 9: Thermal inversion. Convergence of the loss function. Figure 10: Thermal inversion simulation. We summarize in Fig. 11 the convergence of the loss function. We also show how the initial data has been trained in Fig. 12. Additionally, Fig. 13 presents the snapshots from the simulation. Figure 11: Tumor growth. Convergence of the loss function. Figure 12: Tumor growth. Convergence of the loss function. ## 5 Examples of the Instantiation Figure 13: Tumor growth. Snapshots from the simulation. ## 6 Conclusions We have created a code [https://github.com/pmaczuga/pinn-notebooks](https://github.com/pmaczuga/pinn-notebooks) that can be downloaded and opened in the Google Colab. It can be automatically executed using Colab functionality. The code provides a simple interface for running two-dimensional time-dependent simulations on a rectangular grid. It provides an interface to define residual loss, initial condition loss, and boundary condition loss. It provides examples of Dirichlet and Neumann boundary conditions. The code also provides routines for plotting the convergence, generating snapshots of the simulations, verifying the initial condition, and generating the animated gifs. We also provide four examples, the heat transfer, the wave equation, the thermal inversion from advection-diffusion equations, and the brain tumor model. ## 7 Acknowledgements The work of Maciej Paszynski, Witold Dzwinel, Pawel Maczuga, and Marcin Los was supported by the program "Excellence initiative - research university" for the AGH University of Science and Technology. The visit of Maciej Paszynski at Oden Institute was partially supported by J. T. Oden Research Faculty Fellowship.
2309.11232
Growth of curvature and perimeter of temperature patches in the 2D Boussinesq equations
In this paper, we construct an example of temperature patch solutions for the two-dimensional, incompressible Boussinesq system with kinematic viscosity such that both the curvature and perimeter grow to infinity over time. The presented example consists of two disjoint, simply connected patches. The rates of growth for both curvature and perimeter in this example are at least algebraic.
Jaemin Park
2023-09-20T11:44:53Z
http://arxiv.org/abs/2309.11232v2
# Growth of curvature and perimeter of temperature patches in the 2D boussinesq equations ###### Abstract. In this paper, we construct an example of temperature patch solutions for the two-dimensional, incompressible Boussinesq system with kinematic viscosity such that both the curvature and perimeter grow to infinity over time. The presented example consists of two disjoint, simply connected patches. The rates of growth for both curvature and perimeter in this example are at least algebraic. ## 1. Introduction and the main results In this paper, we investigate the long-time behavior of the two-dimensional incompressible Boussinesq equations in the absence of thermal diffusivity: \[\begin{split}\rho_{t}+u\cdot\nabla\rho&=0,\\ u_{t}+u\cdot\nabla u&=-\nabla p-\rho e_{2}+\nu \Delta u,\quad t>0,\ x\in\mathbb{R}^{2}\\ \nabla\cdot u&=0,\\ (\rho(t,x),u(t,x))|_{t=0}&=(\rho_{0}(x),u_{0}(x)), \end{split} \tag{1.1}\] where \(e_{2}=(0,1)^{T}\) and \(\nu>0\) is the kinematic viscosity coefficient. The system (1.1) describes the evolution of the temperature distribution \(\rho\), of a viscous, heat-conducting fluid moving in an external gravitational force field, assuming that the Boussinesq approximation is valid [3, 13, 22, 25]. The primary goal of this paper is to provide an example of a patch-type temperature distribution whose curvature and perimeter grow as time approaches infinity. ### Overview of long-time behavior in the Boussinesq equations Before presenting the precise statement of the main theorem, let us provide a brief review of the relevant literature. #### 1.1.1. Global well-poseness In the presence of kinematic viscosity as in (1.1), Hou-Li [15] and Chae [5] obtained global-in-time regularity results in \((\rho,u)\in H^{m}\times H^{m-1}\) and \((\rho,u)\in H^{m}\times H^{m}\), respectively, with \(m\geq 3\). In essence, when the initial data are sufficiently smooth and decay rapidly, these results ensure the existence of a global-in-time strong solution. For cases with rough initial data, Abidi-Hmidi [1] and Hmidi-Keraani [14] proved the existence of global weak solutions in \((\rho,u)\in B^{0}_{2,1}\times\left(L^{2}\cap B^{-1}_{\infty,1}\right)\) and \((\rho,u)\in L^{2}\times(L^{2}\cap H^{s})\), where \(s\in[0,2)\) and \(B^{s}_{p,q}\) represents the Besov spaces. #### 1.1.2. Long-time behavior of classical solutions In the study of long-time behavior of solutions, stability analysis, by itself, provides qualitative information about their long-term behavior, moreover it also plays a crucial role in proving various solution features, as exploited in [6, 9]. Denoting \[\rho_{s}^{\alpha}:=\alpha y,\quad u_{s}^{\beta}:=(\beta y,0)^{T},\quad\alpha, \beta\in\mathbb{R},\] one can easily see that any pair \((\rho_{s}^{\alpha},u_{s}^{\beta})\) is a steady solution (time-independent) for the system (1.1). In [24], the authors established stability under perturbations in a Gevrey class near the Couette flow \(((\rho_{s}^{\alpha},u_{s}^{1})\) with \(\alpha\leq 0)\), when considering the Boussinesq system in the spatial domain \(\mathbb{T}\times\mathbb{R}\). Near the hydrostatic equilibria \((\rho_{s}^{\alpha},0)\), under perturbations in a Sobolev space, Doering-Wu-Zhao-Zheng [8] established stability (when \(\alpha<0\)) and instability (when \(\alpha>0\)), considering the Boussinesq system in a general Lipschitz domain. Also Tao-Wu-Zhao-Zheng [27] conducted another stability analysis with relaxed assumptions on the initial data in a spatially periodic domain. 
Regarding a long-time behavior of solutions without an assumption on the smallness of the initial data, several quantitative results are available in the literature. Since the density \(\rho\) is transported by an incompressible flow, one cannot expect any growth or decay of \(\|\rho(t)\|_{L^{p}}\). However, a creation of small scale by the flow might induce the growth of finer norms of \(\rho\) (or vorticity \(\omega:=\nabla\times u\)) over time. Indeed, considering (1.1) in a bounded domain, Ju [16] showed \(\|\rho\|_{H^{1}}\lesssim e^{ct^{2}}\), which was further improved to an exponential bound \(e^{ct}\) in \(\mathbb{T}^{2}\) by Kukavica-Wang [21]. Subsequently, Kukavica-Massatt-Ziane [20] achieved a slightly better upper bound \(\|\rho\|_{H^{2}}\lesssim C_{\epsilon}e^{ct}\) for any small \(\epsilon>0\). In addition to these upper bounds on growth rates, an interesting lower bound was obtained by Brandolese-Schonbek [4] proving that in \(\mathbb{R}^{2}\), the kinetic energy \(\|u\|_{L^{2}}\) must grow faster than \(c(1+t)^{1/4}\) as \(t\to\infty\), provided that the initial density \(\rho_{0}\) does not have a zero average. Recently, Kiselev-Park-Yao [18] showed that for a large class of initial data, the Sobolev norms \(\|\rho\|_{H^{m}}\), for \(m\geq 1\), must grow at least at some algebraic rate in \(\mathbb{T}^{2}\) and \(\mathbb{R}^{2}\). Besides these quantitative analyses of norm growth, one might expect some asymptotic behavior due to the damping effect induced by viscosity. In this direction, Kukavica-Massatt-Ziane [20] and Aydin-Kukavica-Ziane [2] showed that \(\|\nabla u\|_{L^{2}}\) and \(\|\nu\Delta u-\mathbb{P}(\rho e_{2})\|_{L^{2}}\), where \(\mathbb{P}\) denotes the Leray projection, converge to \(0\) as \(t\to\infty\) in a bounded domain. #### 1.1.3. Temperature patch problem An interesting class of solutions to a transport equation, \[\rho_{t}+u\cdot\nabla\rho=0, \tag{1.2}\] is called _patch solutions_. These are weak solutions composed of characteristic functions. For instance, if \(\rho_{0}=1_{D}\) for some bounded domain \(D\), the solution remains as a characteristic function \(\rho(t)=1_{D_{t}}\) for some time-dependent domain \(D_{t}\), provided that the velocity field \(u\) is suitably regular. In a more general sense, in this paper, we refer to a solution \(\rho\) as a patch solution, if it can be expressed as a linear combination of characteristic functions defined on some bounded domains. As transport phenomena are prevalent in fluid dynamics, the long-time behavior of patch solutions has been a subject of active study in various fluid models, particularly concerning the behavior of the patch boundary. In certain two-dimensional models, it has been observed that the curvature of the patch boundary may grow to infinity ([17] for the 2D Euler), the perimeter also may grow to infinity ([6, 10, 9] for the 2D Euler) as \(t\to\infty\), or even a singularities can develop in a finite time ([19, 12] for the generalized surface quasi-geostrophic equations). In the context of the 2D Boussinesq equations (1.1), the temperature distribution is also transported by the velocity field, making it natural to explore patch solution \(\rho\). The global existence of patch solutions have been rigorously proved by Danchin-Zhang [7] and Gancedo-Garcia-Juarez [11]. 
More precisely, [11, Theorem 3.1] states that if \(\rho_{0}=1_{D_{0}}\) for a simply connected domain \(D_{0}\in\mathbb{R}^{2}\) with \(\partial D_{0}\in W^{2,\infty}\), and \(u_{0}\in C^{\infty}_{c}(\mathbb{R}^{2})\) is divergence-free, then * there is a unique global solution \((\rho(t),u(t))\) such that \(u\in L^{1}((0,T);W^{2,\infty}(\mathbb{R}^{2}))\) and \(\rho(t)=1_{D_{t}}\), where \(D_{t}=X_{t}(D_{0})\) with \(X_{t}\), a flow map associated to the velocity field \(u\). More precisely \(X_{t}\) is the unique map determine by \[\frac{dX_{t}}{dt}(x)=u(t,X_{t}(x)),\quad X_{0}(x)=x,\text{ for all }x\in\mathbb{R}^{2}.\] (1.3) * \(\partial D_{t}\in L^{\infty}_{loc}(\mathbb{R}^{+};W^{2,\infty})\), which ensures that the curvature cannot grow to infinity in a finite time. Now, let us consider an initial data \((\rho_{0},u_{0})\) such that the initial temperature density consists of multiple patches with smooth boundaries. The existence of global weak solution is guaranteed. Indeed, for a set \(D_{0}\) as described in **(A1)**, it is well-known that \(1_{D_{0}}\in B^{\alpha}_{2,2}\) for any \(\alpha<\frac{1}{2}\) (e.g., [26, Proposition 3.6]), thus \(\rho_{0}\in B^{\alpha}_{2,2}\). In this case, classical embedding theorems in Besov spaces (e.g., [28, Subsection 2.7]), yield that \(\rho_{0}\in B^{0}_{2,1}\cap B^{0}_{p,\infty}\) for \(p\in(2,4)\). Combining this with \(u_{0}\in C^{\infty}_{c}(\mathbb{R}^{2})\), the well-posedness theorem [14, Theorem 1.2] ensures that there exists a unique weak solution \((\rho,u)\) to (1.1) in the class, \[(\rho,u)\in C(\mathbb{R}^{+};B^{0}_{2,1}\cap B^{0}_{p,\infty})\times C( \mathbb{R}^{+};H^{2}). \tag{1.4}\] However, technically speaking, the global existence results for patch solutions in [11] are not directly applicable to the initial data as above, since \(\rho_{0}\) consists of two disjoint patches instead of a single patch. More precisely, it is not trivial to see whether the temperature distribution \(\rho\) in (1.4) qualifies as a patch solution, and if it does, whether boundary regularity can persist; A rough velocity \(u\in C(\mathbb{R}^{+};H^{2})\) does not guarantee enough regularity of a flow map \(x\mapsto X_{t}(x)\) to ensure any regularity of the boundary of the set \(X_{t}(D_{0})\). Since our primary focus lies elsewhere and considering that the main ideas from [11] can be readily applied to the proof, we will only state a theorem concerning the global existence of patch solutions involving multiple patches. The detailed proof is left to the interested reader. **Theorem 1.1**.: _[_11_, Theorem 3.1]_ _Let \(N\in\mathbb{N}\). For \(i=1,\ldots,N\), let us pick real numbers \(a_{i}\in\mathbb{R}\) and simply connected bounded domains \(D_{i}\subset\mathbb{R}^{2}\) such that \(\overline{D}_{i}\) are disjoint and \(\partial D_{i}\in W^{2,\infty}\). Let us consider initial data \((\rho_{0},u_{0})\) such that \(\rho_{0}=\sum_{i=1}^{N}a_{i}1_{D_{i}}\) and \(u_{0}\in H^{2}(\mathbb{R}^{2})\) is divergence-free. Then there exists a unique weak solution \((\rho,u)\) to (1.1) such that \(u\in C(\mathbb{R}^{+};H^{2})\cap L^{1}_{loc}(\mathbb{R}^{+};W^{2,\infty}( \mathbb{R}^{2}))\). In addition, for almost every \(t\geq 0\)_ \[\rho(t)=\sum_{i=1}^{N}a_{i}1_{D_{i,t}},\quad D_{i,t}=X_{t}(D_{i}),\] _where \(X_{t}\) is the flow map generated by the velocity \(u\). 
Lastly, we have persistence of the curvature, \(\partial D_{i,t}\in L^{\infty}_{loc}(\mathbb{R}^{+};W^{2,\infty})\) for \(i=1,\ldots,N\)._ ### Main results The goal of this paper is to construct an example of initial data with a patch-type temperature distribution such that under the dynamics (1.1), the temperature patch exhibits a growth of the curvature and the perimeter. To this end, we will consider initial data \((\rho_{0},u_{0})\) satisfying the following assumptions: * \(\rho_{0}(x)=1_{D_{0}}(x)-1_{D_{0}^{*}}\) for a simply connected domain such that \(\overline{D_{0}}\subset\mathbb{R}\times\mathbb{R}^{+}\), \(|D_{0}|=1\) and \(\partial D_{0}\in C^{\infty}\), and \[D_{0}^{*}=\left\{x\in\mathbb{R}^{2}:(x_{1},-x_{2})\in D_{0}\right\}.\] See Figure 1 for an illustration. * \(u_{0}\in C_{c}^{\infty}(\mathbb{R}^{2})\) and \(u_{0}\) is divergence-free. Moreover, denoting \(u_{0}=(u_{01},u_{02})\), we assume that \(u_{02}\) is odd in \(x_{2}\) and \(u_{01}\) is even in \(x_{2}\). When considering \((\rho_{0},u_{0})\) satisfying **(A1)** and **(A2)**, the uniqueness part of Theorem 1.1 ensures the preservation of the \(x_{2}\)-odd symmetry in the solution \(\rho(t)\) obtained in Theorem 1.1: \(\rho(t,x_{1},-x_{2})=-\rho(t,x_{1},x_{2})\). In other words, \(\rho(t)\) takes the form \[\rho(t)=1_{D_{t}}-1_{D_{t}^{*}},\text{ where }D_{t}=X_{t}(D_{0})\text{ and }D_{t}^{*}=\left\{x\in\mathbb{R}^{2}:(x_{1},-x_{2})\in D_{t}\right\}.\] Now, we are ready to state the paper's main theorem: **Theorem 1.2**.: _Let \((\rho_{0},u_{0})\) satisfy **(A1)** and **(A2)**. Then, the global patch-type solution \(\rho(t)=1_{D_{t}}-1_{D_{t}^{*}}\), where \(D_{t}^{*}\) denotes the \(x_{2}\) symmetric copy of \(D_{t}\), satisfies the following:_ * _Infinite growth of curvature: We have_ \[\limsup_{t\to\infty}t^{-\frac{1}{6}}|\kappa(t)|=\infty,\] _where_ \(\kappa(t)\) _is the maximum curvature of_ \(\partial D_{t}\)_._ * _Infinite growth of perimeter: Denoting_ \(L_{t}\) _be the distance between a far-left and a far-right point on_ \(\partial D_{t}\)_, we have_ \[\limsup_{t\to\infty}t^{-\frac{1}{6}}L_{t}=\infty.\] Figure 1. Illustration of the initial patch \(\rho_{0}\) _Since \(D_{t}\) is simply connected, we have infinite growth of perimeter._ ## 2. preliminaries In this section, we collect several well-known conserved properties and some useful uniform estimates for the solutions. Due to the incompressibility of the flow, the conservation \(L^{p}\) norms of the density follows immediately, \[\|\rho(t)\|_{L^{p}}=\|\rho_{0}\|_{L^{p}},\text{ for all }p\in[1,\infty]. \tag{2.1}\] Another well-known conserved quantity is the total energy of the system. We define the potential energy \(E_{P}(t)\) and the kinetic energy \(E_{K}(t)\) as follows: \[E_{P}(t):=\int_{\mathbb{R}^{2}}\rho(t,x)x_{2}dx,\quad E_{K}(t):=\frac{1}{2} \int_{\mathbb{R}^{2}}|u(t,x)|^{2}dx.\] Then it follows straightforwardly from (1.1) that \[\frac{d}{dt}E_{P}(t) =\int_{\mathbb{R}^{2}}\rho_{t}x_{2}dx=\int_{\mathbb{R}^{2}}-u \cdot\nabla\rho x_{2}dx=\int_{\mathbb{R}^{2}}\rho u_{2}dx, \tag{2.2}\] \[\frac{d}{dt}E_{K}(t) =\int_{\mathbb{R}^{2}}u\cdot u_{t}dx=\int_{\mathbb{R}^{2}}-\rho e _{2}\cdot u+\nu\Delta u\cdot udx=\int_{\mathbb{R}^{2}}-\rho u_{2}dx-\nu\int_ {\mathbb{R}^{2}}|\nabla u|^{2}dx. \tag{2.3}\] By summing up the above two quantities and integrating over time, we obtain \[E_{P}(0)+E_{K}(0)=E_{P}(t)+E_{K}(t)+\nu\int_{0}^{t}\!\|\nabla u(t)\|_{L^{2}}^{ 2}dt=:E_{T}(t)+\nu\int_{0}^{t}\!\|\nabla u(t)\|_{L^{2}}^{2}dt. 
\tag{2.4}\] Under the assumptions **(A1)** and **(A2)**, both \(E_{P}(t)\) and \(E_{K}(t)\) are always nonnegative. Thus the above energy equality gives us a uniform bound for the vorticity \(\omega:=\nabla\times u\), \[\int_{0}^{t}\!\|\omega(t)\|_{L^{2}}^{2}dt\leq\int_{0}^{t}\!\|\nabla u(t)\|_{L^ {2}}^{2}dt\leq C(\rho_{0},u_{0})(1+\nu^{-1})\text{ for all }t>0. \tag{2.5}\] ## 3. Uniform in time estimates In this section, let us derive another uniform time estimate that is simple but will play a crucial role in proving the main theorem. Roughly speaking, the estimate in (2.5) tells us that time-averaged vorticity dissipates eventually, for instance, \(\frac{1}{T}\int_{T}^{2T}\!\|\omega(t)\|_{L^{2}}^{2}dt\to 0\), as \(T\to\infty\). Now, let us consider the vorticity equation, \[\omega_{t}+u\cdot\nabla\omega=-\partial_{1}\rho+\nu\Delta\omega, \tag{3.1}\] which can be easily derived by taking the curl operator in the second equation of (1.1). Assuming \(\omega\) is sufficiently small in some sense, the quadratic term \(u\cdot\nabla\omega\) is comparatively less dominant (in a weak sense) when compared to the linear terms in (3.1). Consequently, considering the dissipation of vorticity, we may anticipate a convergence towards zero for the quantity \(-\partial_{1}\rho+\nu\Delta\omega\). Establishing such asymptotic behavior in a rigorous sense may be nontrivial. However, in the next lemma, we will derive a uniform estimate which exhibits a convergence towards zero of a time-averaged quantity of \(-\partial_{1}\rho+\nu\Delta\omega\). **Lemma 3.1**.: _Let \((\rho,u)\) be a solution to (1.1) with initial data \((\rho_{0},u_{0})\) satisfying **(A1)** and **(A2)**. Then,_ \[\int_{0}^{t}\lVert\partial_{1}\Delta^{-1}\rho-\nu\omega\rVert_{\dot{H}^{1}}^{2} dt\leq(1+\nu^{-1})C(\rho_{0},u_{0}),\text{ for all }t>0. \tag{3.2}\] **Remark 3.2**.: _When considering (1.1) in a bounded domain, in [20, 2], it was shown that for a general initial data, \(\lVert\nu\Delta u-\mathbb{P}(\rho e_{2})\rVert_{L^{2}}\) converges to \(0\) as \(t\to\infty\), where \(\mathbb{P}\) is the Leray projection, which is equivalent to \(\lVert\nu\omega-\partial_{1}\Delta^{-1}\rho\rVert_{\dot{H}^{1}}\to 0\). When considering the Boussinesq system in an unbounded domain, obtaining such a result for general initial data becomes nontrivial. The challenge arises from the fact that while the proof provided in [20, 2] relies on a uniform bound of the kinetic energy, the kinetic energy in an unbounded domain may not be uniformly bounded in time in general. Although the total energy is inferred to be bounded from (2.4), it remains possible for the kinetic energy to increase indefinitely throughout the evolution without a lower bound of the potential energy. In our case, this issue is overcome by the assumptions **(A1)** and **(A2)**, which ensure a uniform lower bound of the potential energy._ Proof.: The proof relies on the second derivative of the potential energy. To begin, we recall the expression of \(E_{P}^{\prime\prime}(t)\) from [18]: **Lemma 3.3**.: _[_18_, Lemma 2.1, Lemma 2.3]_ _Let \((\rho,u)\) be a solution to (1.1) with initial data \((\rho_{0},u_{0})\) satisfying **(A1)** and **(A2)**. Then the potential energy \(E_{P}(t)\) satisfies_ \[E_{P}^{\prime\prime}(t)=A(t)+B(t)-\int|\nabla\partial_{1}\Delta^{-1}\rho(t)|^{ 2}dx\quad\text{ for all }t\geq 0, \tag{3.3}\] _where_ \[A(t):=\sum_{i,j=1}^{2}\int_{\Omega}((-\Delta)^{-1}\partial_{2}\rho)\partial_{ i}u_{j}\partial_{j}u_{i}\,dx,\text{ and }B(t):=\nu\int_{\Omega}\rho\Delta u_{2}dx. 
\tag{3.4}\] _Furthermore, \(A(t)\) satisfies_ \[\int_{0}^{t}|A(t)|dt\leq C(\rho_{0})\int_{0}^{t}\lVert\nabla u(t)\rVert_{L^{2 }}^{2}dt,\text{ for all }t>0. \tag{3.5}\] Towards the proof of (3.2), we rewrite \(B(t)\), using the Biot-Savart law (\(u_{2}=\partial_{1}\Delta^{-1}\omega\)), as \[B(t)=\int\rho\Delta(\partial_{1}\Delta^{-1}\omega)dx=-\int\partial_{1}\rho \omega dx.\] Therefore (3.3) can be also rewritten as \[E_{P}^{\prime\prime}(t)=A(t)-\nu\int\partial_{1}\rho\omega dx-\int|\nabla \partial_{1}\Delta^{-1}\rho|^{2}dx. \tag{3.6}\] From the vorticity equation (3.1), we compute \[\nu\frac{d}{dt}\left(\frac{1}{2}\lVert\omega(t)\rVert_{L^{2}}^{2}\right)=\nu \int\omega\omega_{t}dx=\nu\int\omega(-\partial_{1}\rho+\nu\Delta\omega)dx=- \int\nu\omega\partial_{1}\rho dx-\int\nu^{2}|\nabla\omega|^{2}dx. \tag{3.7}\] Combining this with (3.6), we obtain \[\frac{d}{dt}\left(E^{\prime}_{P}(t)+\frac{\nu}{2}\|\omega(t)\|_{L^{2 }}^{2}\right) =A(t)-2\int\nu\omega\partial_{1}\rho dx-\int|\nabla\partial_{1} \Delta^{-1}\rho|^{2}dx-\int\nu^{2}|\nabla\omega|^{2}dx\] \[=A(t)-2\int\nabla\nu\omega\cdot\nabla\partial_{1}\Delta^{-1}\rho dx -\int|\nabla\partial_{1}\Delta^{-1}\rho|^{2}dx-\int\nu^{2}|\nabla\omega|^{2}dx\] \[=A(t)-\int|\nabla(\partial_{1}\Delta^{-1}\rho-\nu\omega)|^{2}dx.\] Thus integrating this over time, we obtain \[E^{\prime}_{P}(t)+\frac{\nu}{2}\|\omega(t)\|_{L^{2}}^{2}+\int_{0}^{t}\| \partial_{1}\Delta^{-1}\rho-\nu\omega\|_{\dot{H}^{1}}^{2}dt=E^{\prime}_{P}(0) +\frac{\nu}{2}\|\omega_{0}\|_{L^{2}}^{2}+\int_{0}^{t}A(t)dt,\text{ for all }t>0. \tag{3.8}\] Finally, sending \(E^{\prime}_{P}(t)\) on the left-hand side to the other side, \[\int_{0}^{t}\|\partial_{1}\Delta^{-1}\rho-\nu\omega\|_{\dot{H}^{1}}^{2}dt\leq C (\rho_{0},u_{0})+\int_{0}^{t}A(t)dt-E^{\prime}_{P}(t)\leq(1+\nu^{-1})C(\rho_{0 },u_{0})-E^{\prime}_{P}(t),\] where the last inequality follows from (2.5) and (3.5). Recalling \(E^{\prime}_{P}(t)=\int_{\mathbb{R}^{2}}\rho u_{2}dx\) from (2.2), and using the Cauchy-Schwarz inequality, we have \(|E^{\prime}_{P}(t)|\leq C\|\rho(t)\|_{L^{2}}\|u\|_{L^{2}}\leq C(\rho_{0},u_{0})\). This gives the desired estimate (3.2). **Remark 3.4**.: _In the case \(\Omega=\mathbb{T}^{2}\) (or in a suitable bounded domains), the Sobolev inequality allows us to derive a more concise estimate:_ \[\int_{0}^{t}\|\partial_{1}\rho\|_{\dot{H}^{-2}}^{2}dt\leq C(\rho_{0},u_{0})(1 +\nu^{-1}). \tag{3.9}\] _Indeed, the triangular inequality and the Sobolev inequality give_ \[\|\partial_{1}\Delta^{-1}\rho\|_{L^{2}(\mathbb{T}^{2})}^{2}\leq \|\nu\omega\|_{L^{2}(\mathbb{T}^{2})}^{2}+\|\nu\omega-\partial_{1} \Delta^{-1}\rho\|_{L^{2}(\mathbb{T}^{2})}^{2}\leq \|\nu\omega\|_{L^{2}(\mathbb{T}^{2})}^{2}+\|\nu\omega-\partial_{1} \Delta^{-1}\rho\|_{\dot{H}^{1}(\mathbb{T}^{2})}^{2}.\] _Thus, integrating over time and combining it with (3.2) and (2.5), we obtain (3.9)._ ## 4. Lemmas for curvature and perimeter In this section, we study relations between the curvature/perimeter of a patch and the (negative) Sobolev norms. Throughout the section, \(D\) is always assumed to be a simply connected bounded domain such that \(\overline{D}\subset\mathbb{R}\times\mathbb{R}^{+}\), \(|D|=1\) and \(\partial D\in C^{\infty}\). Also, we will denote the disk centered at \(x\in\mathbb{R}^{2}\) by \(B_{r}(x)\subset\mathbb{R}^{2}\). A constant \(C>0\) will denote a universal constant that does not depend on any variables, while it might vary from line to line. 
Let us recall the Pestov-Ionin theorem, which asserts that every simple-closed curve with a curvature of at most one encloses a unit disk. In other words, it holds that \[\sup\big{\{}r:B_{r}(x)\subset D,\text{ for some }x\in\mathbb{R}^{2}\big{\}} \leq\frac{1}{\max_{x\in\partial D}|\kappa(x)|}, \tag{4.1}\] where \(\kappa(x)\) is the signed curvature at \(x\in\partial D\). In the next lemma, we explore how the radius of a maximal disk within \(D\) can be constrained in terms of the negative Sobolev norms of \(1_{D}\). **Lemma 4.1**.: _Suppose \(D\) contains a disk with radius \(r>0\). Then there exists a universal constant \(C>0\) such that for any \(\Omega\in C_{c}^{\infty}(\mathbb{R}^{2})\),_ \[r^{3}\leq C\left(\|\partial_{1}\Delta^{-1}(1_{D}-1_{D^{*}})- \Omega\|_{\dot{H}^{1}(\mathbb{R}^{2})}+\|\Omega\|_{L^{2}(\mathbb{R}^{2})}\right), \tag{4.2}\] _where \(D^{*}:=\{(x_{1},x_{2})\subset\mathbb{R}\times\mathbb{R}^{-}:(x_{1},-x_{2})\in D\}\)._ **Remark 4.2**.: _Since \(1_{D}-1_{D^{*}}\in L^{2}(\mathbb{R}^{2})\) and it has a zero average, we have \(\partial_{1}\Delta^{-1}(1_{D}-1_{D^{*}})\in L^{2}(\mathbb{R}^{2})\)[23, Proposition3.3]. Therefore, if we simply take \(\Omega:=\partial_{1}\Delta^{-1}(1_{D}-1_{D^{*}})\), then Lemma 4.1 tells us that the radius \(r\) of a maximal disk contained in \(D\) must satisfy_ \[r^{3}\leq \|\partial_{1}\Delta^{-1}(1_{D}-1_{D^{*}})\|_{L^{2}(\mathbb{R}^{ 2})}.\] _In our proof of the main theorem, we do not have any smallness of \(\|\partial_{1}\Delta^{-1}\rho(t)\|_{L^{2}(\mathbb{R}^{2})}\), therefore, we will make a use of carefully chosen \(\Omega\) in the application of the lemma._ Proof.: Let \(B_{0}\) be a disk contained in \(D\). Without loss of generality, we assume that the center of \(B_{0}\) lies on the \(x_{2}\)-axis, that is, \(B_{0}=B_{r}((0,b))\) for some \(b>0\). Clearly, we must have \(r<1\), due to the assumption \(|D|=1\). Next, let us consider a sequence of horizontally translated disks \(B_{n}:=\left\{(x_{1},x_{2})\in\mathbb{R}^{2}:(x_{1}-2rn,x_{2})\in B_{r}(b)\right\}\) for \(n\in\mathbb{N}\) and choose \(N^{*}:=\inf\left\{n\in\mathbb{N}:|D\cap B_{n}|\leq\frac{r^{2}}{16}\right\}.\) We claim that \[|D\cap B_{N^{*}}|\leq\frac{r^{2}}{16}\text{ and }N^{*}\leq \frac{32}{r^{2}}. \tag{4.3}\] The first statement is clear by the definition of \(N^{*}\). To see the upper bound of \(N^{*}\), let us suppose, to the contrary, that \(N^{*}>\frac{32}{r^{2}}\) and denote \(n^{*}{:=}\lfloor\frac{32}{r^{2}}\rfloor\), where \(\lfloor a\rfloor\) denotes the largest integer not exceeding \(a\). Since \(r<1\), we have \(n^{*}>\frac{16}{r^{2}}\). This implies that \(|D\cap B_{n}|\geq\frac{r^{2}}{16}\) for all \(n=1,...n^{*}\). However, in this case, we must have \[1=|D|\geq\sum_{n=1}^{n^{*}}|D\cap B_{n}|\geq n^{*}\frac{r^{2}}{16 }>1,\] which is a contradiction. 
Towards the proof of the lemma, we define a function \(x_{1}\mapsto g(x_{1})\) and \(x_{2}\mapsto h(x_{2})\) as \[g(x_{1}):=\begin{cases}0,&\text{ if }x_{1}\leq 0,\\ \frac{1-\cos(\pi x_{1}/r)}{2}&\text{ if }x_{1}\in(0,r],\\ 1&\text{ if }x_{1}\in(r,2rN^{*}],\\ \frac{1+\cos(\pi(x_{1}-2rN^{*})/r)}{2}&\text{ if }x_{1}\in(2rN^{*},2rN^{*}+r], \\ 0&\text{ if }x_{1}>2rN^{*}+r,\end{cases}\] and \[h(x_{2}):=\begin{cases}\frac{1+\cos(\frac{\pi(x_{2}-b)}{r})}{2}& \text{ if }x_{2}\in(b-r,b+r),\\ 0&\text{ otherwise.}\end{cases}\] And we define \(f=f(x_{1},x_{2})\) as \[\begin{cases}f(x_{1},x_{2}):=g(x_{1})h(x_{2}),&\text{ if }x_{2}\geq 0,\\ f(x_{1},x_{2}):=f(x_{1},-x_{2}),&\text{ if }x_{2}<0.\end{cases}\] From the properties of \(g\) and \(h\), it is clear that the support of \(f\) is contained in the vertical strip bounded by \(\{x_{1}=0\}\cup\{x_{1}=2rN^{*}+r\}\) whose width is \(2rN^{*}+r\leq\frac{C}{r}\) (see (4.3)). At the same time, the support of \(f\) in \(\mathbb{R}\times\mathbb{R}^{+}\) lies in the horizontal strip whose width is less than \(2r\). Consequently, we have \[|\text{supp}(f)|,\ |\text{supp}(\nabla f)|,\ |\text{supp}(\Delta f)|\leq C. \tag{4.4}\] Now, denoting \(\mu=1_{D}-1_{D^{*}}\), we see that for any \(\Omega\in C_{c}^{\infty}(\mathbb{R}^{2})\), \[\int_{\mathbb{R}^{2}}\mu(x)\partial_{1}f(x)dx =\int_{\mathbb{R}^{2}}\mu(x)\Delta^{-1}\Delta\partial_{1}f(x)dx=- \int_{\mathbb{R}^{2}}\partial_{1}\Delta^{-1}\mu(x)\Delta f(x)dx\] \[=\int_{\mathbb{R}^{2}}\left(\partial_{1}\Delta^{-1}\mu(x)- \Omega(x)\right)\Delta f(x)dx+\int_{\mathbb{R}^{2}}\Omega(x)\Delta f(x)dx\] \[\leq \|\partial_{1}\Delta^{-1}\mu-\Omega\|_{\dot{H}^{1}}\|\nabla f\|_ {L^{2}}+\|\Omega\|_{L^{2}}\|\Delta f\|_{L^{2}}, \tag{4.5}\] where we used the integration by parts and the Cauchy-Schwarz inequality to get the last inequality. We will estimate the left/right-hand side of the inequality (4.5) separately. To get a lower bound of the left-hand side, we notice that \(\mu\) and \(\partial_{1}f\) are both odd in \(x_{2}\), therefore, \[\int_{\mathbb{R}^{2}}\mu(x)\partial_{1}f(x)dx =2\int_{\mathbb{R}\times\mathbb{R}^{+}}\mu(x)\partial_{1}f(x)dx=2 \int_{D}\partial_{1}f(x)dx\] \[=2\int_{D\cap B_{0}}\partial_{1}f(x)dx+2\int_{D\cap B_{N^{*}}} \partial_{1}f(x)dx, \tag{4.6}\] where the last equality follows from the fact that, by the definition of \(f\) (especially the definition of \(g\)), \(\partial_{1}f(x_{1},x_{2})=g^{\prime}(x_{1})h(x_{2})=0\) if \(x\in(B_{0}\cup B_{N^{*}})^{c}\). 
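The following short numerical sketch is only a sanity check of this construction (with illustrative values \(r=0.3\), \(b=0.5\), \(N^{*}=4\) that are not tied to any particular domain \(D\)): it evaluates \(g\) and \(h\) on a fine grid and confirms the derivative bounds \(\sup|g^{\prime}|,\sup|h^{\prime}|\approx\pi/(2r)\) and \(\sup|g^{\prime\prime}|\approx\pi^{2}/(2r^{2})\) that are used in the estimates below.

```python
import numpy as np

# Illustrative parameters only: any 0 < r < 1, b >= r and integer N* behave the same way.
r, b, N_star = 0.3, 0.5, 4

def g(x1):
    out = np.zeros_like(x1)
    ramp_up = (x1 > 0) & (x1 <= r)
    plateau = (x1 > r) & (x1 <= 2 * r * N_star)
    ramp_down = (x1 > 2 * r * N_star) & (x1 <= 2 * r * N_star + r)
    out[ramp_up] = (1 - np.cos(np.pi * x1[ramp_up] / r)) / 2
    out[plateau] = 1.0
    out[ramp_down] = (1 + np.cos(np.pi * (x1[ramp_down] - 2 * r * N_star) / r)) / 2
    return out

def h(x2):
    out = np.zeros_like(x2)
    bump = (x2 > b - r) & (x2 < b + r)
    out[bump] = (1 + np.cos(np.pi * (x2[bump] - b) / r)) / 2
    return out

x1 = np.linspace(-1.0, 2 * r * N_star + r + 1.0, 100001)
x2 = np.linspace(b - 2 * r, b + 2 * r, 100001)
dg, dh = np.gradient(g(x1), x1), np.gradient(h(x2), x2)

print(np.abs(dg).max(), np.pi / (2 * r))                          # sup|g'|  vs  pi/(2r)
print(np.abs(dh).max(), np.pi / (2 * r))                          # sup|h'|  vs  pi/(2r)
print(np.abs(np.gradient(dg, x1)).max(), np.pi**2 / (2 * r**2))   # sup|g''| vs  pi^2/(2r^2)
```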
Using \(B_{0}\subset D\) and the definition of \(g,h\), we can estimate the first integral as \[\int_{D\cap B_{0}}\partial_{1}f(x)dx =\int_{B_{0}}g^{\prime}(x_{1})h(x_{2})dx\geq\int_{B_{0},\ \left\{\frac{r}{4}<x_{1}<\frac{3r}{4}\right\}}g^{\prime}(x_{1})h(x_{2})dx\] \[\geq\int_{\frac{r}{4}}^{\frac{3r}{4}}\int_{b-\frac{\sqrt{7}}{4}r}^ {b+\frac{\sqrt{7}}{4}r}g^{\prime}(x_{1})h(x_{2})dx_{2}dx_{1}\] \[\geq\int_{\frac{r}{4}}^{\frac{3r}{4}}\int_{-\frac{\sqrt{7}}{4}r}^ {\frac{\sqrt{7}}{4}r}\frac{\pi}{2r}\sin\left(\frac{\pi x_{1}}{r}\right)\frac{ (1+\cos(\pi x_{2}/r))}{2}dx_{2}dx_{1}\] \[=\pi r\int_{1/4}^{1/2}\int_{0}^{\frac{\sqrt{7}}{4}}\sin(\pi x_{1}) (1+\cos(\pi x_{2}))dx_{2}dx_{1}\] \[=\pi r\int_{1/4}^{1/2}\sin(\pi x_{1})dx_{1}\int_{0}^{\sqrt{7}/4}(1 +\cos(\pi x_{2}))dx_{2}\] \[\geq\pi r\frac{\sqrt{2}}{2}\frac{\sqrt{7}}{4}\geq\pi r\frac{\sqrt{14}}{8}.\] By using \(|\partial_{1}f|_{L^{\infty}}\leq\frac{\pi}{2r}\), the second integral can be estimated as \[\int_{D\cap B_{N^{*}}}\partial_{1}f(x)dx\leq|D\cap B_{N^{*}}|\frac{\pi}{2r}\leq r ^{2}/16\cdot\frac{\pi}{2r}=\frac{\pi r}{32},\] where the second inequality follows from (4.3). Thus, in (4.6), we see that \[\int_{\mathbb{R}^{2}}\mu(x)\partial_{1}f(x)dx\geq 2\left(\frac{\pi r\sqrt{14}}{8 }-\frac{\pi r}{32}\right)\geq Cr. \tag{4.7}\] Let us estimate the right-hand side of (4.5) Again, using the properties of \(g,h\), we have that \(\|\Delta f\|_{L^{\infty}}\leq\)\(\|g^{\prime\prime}\|_{L^{\infty}}+\)\(\|h^{\prime\prime}\|_{L^{\infty}}\leq\frac{C}{r^{2}}\) and \(\|\nabla f\|_{L^{\infty}}\leq\)\(\|g^{\prime}\|_{L^{\infty}}+\)\(\|h^{\prime}\|_{L^{\infty}}\leq\frac{C}{r}\). Combining this with (4.4), we get \[\|\Delta f\|_{L^{2}}\leq\frac{C}{r^{2}},\text{ and }\|\nabla f\|_{L^{2}}\leq \frac{C}{r}\leq\frac{C}{r^{2}}, \tag{4.8}\] where the last inequality follows from \(r<1\). Plugging this and (4.7) into (4.5), we obtain the desired estimate (4.2). Now, we make a lemma to estimate the parameter of the domain \(D\). **Lemma 4.3**.: _Let \(L>0\) be the distance between a far-left and a far-right points on \(\partial D\), and let \(A:=\int_{\mathbb{R}^{2}}1_{D}(x)x_{2}dx\). Then, there exists a universal constant \(C>0\) such that for any \(\Omega\in C_{c}^{\infty}(\mathbb{R}^{2})\),_ \[1\leq C(A+1)(1+L^{3})\left(\|\partial_{1}\Delta^{-1}(1_{D}-1_{D^{*}})-\Omega \|_{\dot{H}^{1}}+\|\Omega\|_{L^{2}}\right), \tag{4.9}\] _where \(D^{*}:=\{(x_{1},x_{2})\subset\mathbb{R}\times\mathbb{R}^{-}:(x_{1},-x_{2})\in D\}\)._ **Remark 4.4**.: _As explained in Remark 4.2, if we simply choose \(\Omega=\partial_{1}\Delta^{-1}(1_{D}-1_{D^{*}})\) in (4.9), then we get_ \[1\leq C(A+1)(1+L^{3})\|\partial_{1}\Delta^{-1}(1_{D}-1_{D^{*}})\|_{L^{2}}.\] _This inequality indeed tells us that assuming \(\int_{\mathbb{R}^{2}}1_{D}(x)x_{2}dx\) is bounded, the perimeter of \(\partial D\) grows to infinity, as \(\|\partial_{1}(1_{D}-1_{D^{*}})\|_{\dot{H}^{-2}}\) goes to zero. In our proof of the main theorem, we will use the slightly finer estimate stated in the lemma, due to the lack of smallness of \(\|\partial_{1}\rho(t)\|_{\dot{H}^{-2}}\)._ Proof.: Without loss of generality, let us assume that \(\inf\left\{x_{1}:(x_{1},x_{2})\in D\right\}=0\) so that a far-right point of \(\partial D\) can be denoted by \((L,x_{2}^{r})\) for some \(x_{2}^{r}>0\). 
Using that \(|D|=1\), we see \[\int_{\mathbb{R}^{2},\ \{x_{2}>4A\}}1_{D}dx \leq\frac{1}{4A}\int_{\mathbb{R}^{2},\ \{x_{2}>4A\}}1_{D}(x)x_{2}dx\leq\frac{1}{4}, \tag{4.10}\] \[\int_{\mathbb{R}^{2},\ \{x_{2}<\frac{1}{4L}\}}1_{D}dx =\int_{-\infty}^{\infty}\int_{0}^{\frac{1}{4L}}1_{D}(x)dx\leq \frac{1}{4}. \tag{4.11}\] This implies \[4A>\frac{1}{4L}. \tag{4.12}\] Indeed, if it were not true, we would have \[1=|D|=\int_{\mathbb{R}^{2}}1_{D}dx\leq\int_{\mathbb{R}^{2},\ \{x_{2}>4A\}}1_{D}dx+ \int_{\mathbb{R}^{2},\ \{x_{2}<\frac{1}{4L}\}}1_{D}dx\leq\frac{1}{2},\] which is a contradiction. Moreover, the above estimates (4.10) and (4.11) imply that at least a half of \(D\) is contained in the horizontal strip bounded by \(\{x_{2}=4A\}\) and \(\left\{x_{2}=\frac{1}{4L}\right\}\). Therefore, taking away \(\left\{0\leq x_{1}\leq\frac{1}{32A},\ 0\leq x_{2}\leq 4A\right\}\cup\left\{L- \frac{1}{32A}\leq x_{1}\leq L,\ 0\leq x_{2}\leq 4A\right\}\) from \(D\), whose total measure is at most \(\frac{1}{4}\), we have that \[\left|\left\{x\in D:\frac{1}{32A}\leq x_{1}\leq L-\frac{1}{32A}, \quad\frac{1}{4L}\leq x_{2}\leq 4A\right\}\right|\geq\frac{1}{4}. \tag{4.13}\] Towards the proof of the lemma, we choose nonnegative smooth functions \(g(x_{1})\) and \(h(x_{2})\) satisfying \[\begin{cases}g(x_{1})=0\text{ if }x_{1}\leq 0,\\ 0\leq g^{\prime}(x_{1})\leq 1\text{ for }x_{1}\in(0,L)\text{ and }g^{\prime}(x_{1})=1\text{ for }x_{1}\in(\frac{1}{32A},L-\frac{1}{32A}),\\ g(x_{1})\text{ is symmetric about the axis }\{x_{1}=L\},\text{ that is, }g(x_{1})=g(2L-x_{1})\text{ for }x_{1}\geq L,\\ 0\leq g(x_{1})\leq L,\,|g^{\prime}(x_{1})|\leq 1\text{ and }|g^{\prime\prime}(x_{1})|\leq 32A\text{ for all }x_{1}\in\mathbb{R},\end{cases}\] and \[\begin{cases}h(x_{2})=0\text{ for }x_{2}\leq 0\text{ or }x_{2}\geq 4A+\frac{1}{4L},\\ h(x_{2})=1\text{ for }x_{2}\in(\frac{1}{4L},4A),\\ 0\leq h(x_{2})\leq 1,\,|h^{\prime}(x_{2})|\leq 4L\text{ and }|h^{\prime\prime}(x_{2})|\leq 32L^{2} \text{ for all }x_{2}\in\mathbb{R}.\end{cases}\] A construction of \(g,h\) satisfying above properties is straightforward. For such \(g,h\), we define \[f(x_{1},x_{2}):=g(x_{1})h(x_{2})\text{ for }x_{2}\geq 0\text{ and }f(x_{1},x_{2})=-f(x_{1},-x_{2}),\text{ for }x_{2}<0.\] Clearly, \(|\text{supp}(f)|\leq C(AL+1)\), thus the above properties of \(g,h\) give us that \[|\text{supp}(\nabla f)|\leq C(AL+1)\leq CAL, \tag{4.14}\] where the last inequality is due to (4.12). Furthermore, noticing that \(g(x_{1})\) is linear for \(x_{1}\in(\frac{1}{32A},L-\frac{1}{32A})\), it is not difficult to see that \[|\text{supp}(\Delta f)|\leq C. \tag{4.15}\] Next, denoting \(\mu:=1_{D}-1_{D^{*}}\) and following the same computations in (4.5), we get \[\int_{\mathbb{R}^{2}}\mu(x)\partial_{1}f(x)dx\leq \|\partial_{1}\Delta^{-1}\mu-\Omega\|_{\dot{H}^{1}}\|\nabla f\|_{ L^{2}}+\|\Omega\|_{L^{2}}\|\Delta f\|_{L^{2}},\text{ for any }\Omega\in C_{c}^{\infty}(\mathbb{R}^{2}). \tag{4.16}\] Using \(x_{2}\)-odd symmetry of \(\mu\) and \(f\), we see that \(\int_{\mathbb{R}^{2}}\mu(x)\partial_{1}f(x)dx=2\int_{D}g^{\prime}(x_{1})h(x_{2 })dx\), while the properties of \(g,h\) give us that \[\int_{D}g^{\prime}(x_{1})h(x_{2})dx\geq\int_{\frac{1}{32A}}^{L- \frac{1}{32A}}\int_{\frac{1}{4L}}^{4A}1_{D}(x_{1},x_{2})dx_{2}dx_{1}\geq\frac{ 1}{4},\] where the last inequality follows from (4.13). Thus, we have \[\int_{\mathbb{R}^{2}}\mu(x)\partial_{1}f(x)dx\geq\frac{1}{2}. 
\tag{4.17}\] On the other hand, we have that \[\|\Delta f\|_{L^{\infty}}\leq \|g^{\prime\prime}\|_{L^{\infty}}+\|h^{\prime\prime}\|_{L^{\infty} }\|g\|_{L^{\infty}}\leq 32A+32L^{3},\quad\|\nabla f\|_{L^{\infty}}\leq\] \[\|g^{\prime}\|_{L^{\infty}}+\|h^{\prime}\|_{L^{\infty}}\|g\|_{L^{ \infty}}\leq 1+4L^{2}.\] Combining this with (4.14) and (4.15), we get \[\|\Delta f\|_{L^{2}}\leq C(A+L^{3})\leq C(A+1)(1+L^{3}),\quad\|\nabla f\|_{L^ {2}}\leq C(AL^{3}+AL)\leq C(A+1)(1+L^{3}).\] Plugging this and (4.17) into (4.16), we obtain the desired estimate (4.9). ## 5. Proof of the main theorem In this section, we prove the main theorem of the paper. Let \(\rho(t)=1_{D_{t}}-1_{D_{t}^{*}}\) be the global solution with the initial data \((\rho_{0},u_{0})\) satisfying the assumptions **(A1)** and **(A2)**. In the rest of the proof, \(C\) denotes some positive constant that depends on only \((\rho_{0},u_{0},\nu)\) and might vary from line to line. From (2.5) and Lemma 3.1, we have that for any \(n\in\mathbb{N}\), we can find \(T_{n}>0\) such that \(T_{n}\mapsto\infty\) and \[\int_{T_{n}}^{2T_{n}}\!\!\|\omega(t)\|_{L^{2}}^{2}+\|\partial_{1}\Delta^{-1} \rho(t)-\nu\omega(t)\|_{\dot{H}^{1}}^{2}dt\leq\frac{1}{n}.\] Therefore, there exists \(t_{n}\in[T_{n},2T_{n}]\) such that \[\|\omega(t_{n})\|_{L^{2}}+\|\partial_{1}\Delta^{-1}\rho(t_{n})-\nu\omega(t_{n })\|_{\dot{H}^{1}}\leq\frac{1}{\sqrt{nT_{n}}}. \tag{5.1}\] We prove the growth of curvature **(a)** first. **Proof of Theorem 1.2, part (a)**. Let \(B_{r}(x^{*})\) be the largest disk contained in \(D_{t_{n}}\), centered at \(x^{*}=(x_{1}^{*},x_{2}^{*})\) with radius \(r\). Then the Pestov-Ionin theorem (see (4.1)) tells us that \[r\geq\frac{1}{\max_{x\in\partial D_{t_{n}}}|\kappa(t_{n})|}, \tag{5.2}\] where \(\kappa(t_{n})\) is the signed curvature of \(\partial D_{t_{n}}\). Applying Lemma 4.1 with \(D=D_{t_{n}}\) and \(\Omega=\omega(t_{n})\), we obtain \[r^{3}\leq C\left(\|\partial_{1}\Delta^{-1}\rho(t_{n})-\omega(t_{n})\|_{\dot{H} ^{1}}+\|\omega(t_{n})\|_{L^{2}}\right)\leq\frac{C}{\sqrt{nT_{n}}},\] where the last inequality follows from (5.1). Combining this with (5.2) yields that \[\max_{x\in\partial D_{t_{n}}}|\kappa(t_{n})|\geq(nT_{n})^{\frac{1}{6}}\geq( nt_{n})^{\frac{1}{6}},\] where we used \(t_{n}\in[T_{n},2T_{n}]\) for the last inequality. Since \(t_{n}\mapsto\infty\), as \(n\to\infty\), we obtain the desired infinite growth of the curvature. **Proof of Theorem 1.2, part (b)**. Let \(L_{n}>0\) be the distance between a far-left and a far-right points on \(\partial D_{t_{n}}\). From the energy conservation (2.4), we see that \(E_{P}(t)\leq E_{T}(0)<C\) for all \(t>0\). Therefore, applying Lemma 4.3 with \(D=D_{t_{n}}\), \(\Omega=\omega(t_{n})\) and \(A=\frac{1}{2}E_{P}(t_{n})\leq C\), we obtain \[1\leq C(L_{n}^{3}+1)\left(\|\partial_{1}\Delta^{-1}\rho(t_{n})-\omega(t_{n}) \|_{\dot{H}^{1}}+\|\omega(t_{n})\|_{L^{2}}\right)\leq\frac{C}{\sqrt{nT_{n}}}( L_{n}^{3}+1),\] where the last inequality follows from (5.1). Since \(T_{n}\to\infty\) as \(n\to\infty\), the above inequalities give us that \(L_{n}\geq C(nT_{n})^{\frac{1}{6}}\geq C(nt_{n})^{\frac{1}{6}}\) for all \(n\in\mathbb{N}\), where \(t_{n}\leq 2T_{n}\) was used to justify the second inequality. Since \(D_{t_{n}}\) is simply connected, this proves the desired infinite growth of perimeter. ### Acknowledgements The author was partially supported by the SNF grant 212573-FLUTURA and the Ambizione fellowship project PZ00P2-216083. 
The author also extends gratitude to Eduardo Garcia-Juarez and Yao Yao for their valuable discussions and insightful suggestions during the course of this research.
2309.08763
Skyrmion-driven topological spin and charge Hall effects in diffusive antiferromagnetic thin films
We investigate topological Hall effects in a metallic antiferromagnetic (AFM) thin film and/or at the interface of an AFM insulator-normal metal bilayer with a single skyrmion in the diffusive regime. To determine the spin and charge Hall currents, we employ a Boltzmann kinetic equation with both spin-dependent and spin-flip scatterings. The interaction between conduction electrons and static skyrmions is included in the Boltzmann equation via the corresponding emergent magnetic field arising from the skyrmion texture. We compute intrinsic and extrinsic contributions to the topological spin Hall effect and spin accumulation, induced by an AFM skyrmion. We show that although the spin Hall current vanishes rapidly outside the skyrmion, the spin accumulation can be finite at the edges far from the skyrmion, provided the spin diffusion length is longer than the skyrmion radius. In addition, we show that in the presence of a spin-dependent relaxation time, the topological charge Hall effect is finite and we determine the corresponding Hall voltage. Our results may help to explore antiferromagnetic skyrmions by electrical means in real materials.
Amir N. Zarezad, Józef Barnaś, Anna Dyrdał, Alireza Qaiumzadeh
2023-09-15T21:01:34Z
http://arxiv.org/abs/2309.08763v2
# Topological spin Hall effect in antiferromagnets ###### Abstract We investigate topological Hall effects in a metallic antiferromagnetic (AFM) thin film and/or at the interface of an AFM insulator-normal metal bilayer with a single skyrmion in the diffusive regime. To determine the spin and charge Hall currents, we employed a Boltzmann kinetic equation with both spin-dependent and spin-flip scatterings. The interaction between conduction electrons and static skyrmions is included in the Boltzmann equation _via_ the corresponding emergent magnetic field arising from the skyrmion texture. We compute intrinsic and extrinsic contributions to the topological spin Hall effect and spin accumulation, induced by an AFM skyrmion. We show that although the spin Hall current vanishes rapidly outside the skyrmion, the spin accumulation can be finite at the edges far from the skyrmion, provided the spin diffusion length is longer than the skyrmion radius. In addition, We show that in the presence of a spin-dependent relaxation time, the topological charge Hall effect is finite and we determine the corresponding Hall voltage. Our results may help to explore antiferromagnetic skyrmions by electrical means in real materials. ## I Introduction Interplay between charge currents and magnetic textures is important from both fundamental and applied points of view [1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14]. Interactions between current and ferromagnetic skyrmions may lead to emerging phenomena, including topological Hall and skyrmion Hall effects [15; 16]. The spin-polarized charge current flowing in a ferromagnetic (FM) layer containing a topological skyrmion drags it along the electric field and simultaneously deflects it toward one of the film edges, that is, along the direction perpendicular to the electric field under the influence of gyrotropic forces. The latter phenomenon is called the skyrmion Hall effect [17; 18; 19; 20; 21; 22]. This phenomenon was studied theoretically and observed experimentally; see, e.g. Refs. [23; 24; 25; 26; 20; 21] for an overview. The motion of rigid topological skyrmions in both longitudinal and transverse directions in the present of charge currents in FM metals are conveniently described by Thiele's equation [29; 28; 29]. In turn, FM skyrmions affect the flow of spin-polarized charge currents in FM metal by deflecting their trajectories in the direction perpendicular to the external electric field [30; 31; 32; 33; 34; 35; 36; 37]. This is a phenomenon similar to the anomalous Hall effect in FM metals with uniform magnetization direction [38; 39; 40]. The origin of this skyrmion-induced topological Hall effect is the emergence of a real-space Berry curvature, induced by skyrmion textures [41; 31], while in the anomalous Hall effect, the Berry curvature in the momentum space arises from the spin-orbit couplings [42; 43; 44]. The FM skyrmion Hall effect and the topological Hall effect of spin-polarized currents in FM metals are reciprocal phenomena with a similar origin. On the other hand, AFM skyrmions, unlike their FM counterparts, do not show any skyrmion Hall effect [45; 46; 47; 48; 49; 50; 51; 52]. This deflection-free motion of AFM skyrmions is one of the advantages of AFM systems versus their FM counterparts in practical spintronic devices. However, topological skyrmion textures may still create a real-space Berry curvature [53] and hence some type of topological Hall effects are expected to be present in AFM systems in the presence of skyrmions. 
Charge currents in compensated AFM metals are not spin polarized, and thus the two equally populated opposite spins can be deflected in opposite directions in the presence of the AFM skyrmion-induced Berry curvature. Therefore, one can expect the topological _spin_ Hall effect instead of the topological charge Hall effect. Recently, this effect has been investigated theoretically in a few publications [54; 55; 56; 57], but has not been measured experimentally yet. The absence of a topological charge Hall effect was recently confirmed in a chiral AFM system [58]. In Ref. [54], the authors employed an SU(2) semiclassical framework in combination with the ab-initio description of the electronic structure and demonstrated the emergence of a sizeable transverse spin current and a vanishing transverse charge current in a synthetic AFM skyrmion lattice. In Ref. [55], the authors showed a finite topological spin Hall effect and the absence of topological charge Hall effect in a compensated AFM skyrmion crystal on a honeycomb lattice. Akosa _et al_[56] computed the topological spin and charge Hall effects in a finite size AFM square lattice using a tight-binding model in terms of Landauer-Buttiker formalism, implemented in the Kwant code, in the presence of electrostatic impurities. They found a zero topological charge Hall effect and a nonzero topological spin Hall effect in such systems. Finally, in Ref. [57], using Landauer-Buttiker formula, the authors showed that a vector chirality, formed by the AFM Neel vector, gives rise to a finite topological spin Hall effect in bulk system of AFM half-skyrmions or merons but not in skyrmions. In the present article, we investigate the topological charge and spin Hall effects in a diffusive AFM layer with a skyrmion texture, by means of the quasiclassical Boltzmann kinetic equation. We consider both spin-dependent and spin-flip scatterings in our formalism, and we find analytical expressions for transverse spin and charge currents as well as spin accumulation in the presence of an AFM skyrmion. We show that spin-dependent scatterings generate a nonzero topological charge Hall effect. In addition, we compute both intrinsic (disorder independent, which is arising from the real-space Berry curvature) and extrinsic (disorder dependent, which is arising from the spin-flip scatterings) contributions to the topological spin Hall effect and spin accumulation. ## II Model We consider a two-sublattice compensated AFM square lattice that hosts a single static skyrmion. The system consists of an electronic subsystem and a AFM spin subsystem that interact to each other via an sd exchange interaction. The effective Hamiltonian of the system can be written in the following form [56; 59; 60; 61]: \[\hat{H}=-t\gamma_{\bf k}\big{(}\hat{\tau}_{x}\otimes\hat{s}_{0}\big{)}-J\big{(} \hat{\tau}_{z}\otimes{\bf n}\big{)}\cdot\hat{\bf s}. \tag{1}\] Here, \(t\!>\!0\) is the hopping parameter, \(\gamma_{\bf k}=z^{-1}\sum_{i=1}^{z}\exp\big{(}-i{\mathbf{k}}\cdot{\mathbf{\delta}}_{i} \big{)}\) is the lattice structure factor with \({\mathbf{\delta}}_{i}\) denoting the nearest-neighbor vectors and \(z\) being the coordination number. For a square lattice (\(z\)=4), we have \(\gamma_{\bf k}=\big{(}\cos(k_{x}a)+\cos(k_{y}a)\big{)}/2\), where \(a\) is the lattice constant. 
Furthermore, \(J\) is the sd exchange interaction that parameterizes the coupling of mobile electrons and localized magnetic moments, \(\hat{\mathbf{\tau}}\) and \(\hat{\mathbf{s}}\) are the vectors of Pauli matrices representing the AFM sublattice and spin degrees of freedom, respectively, \(\hat{s}_{0}\) is the identity matrix in the spin subspace, and \({\bf n}\) is the unit vector of the staggered Neel order parameter field. In general, the unit vector of the staggered order parameter field can be decomposed into two terms, \({\bf n}={\bf n}_{0}+{\bf n}_{r}\), where \({\bf n}_{0}\) represents the background homogeneous field while the second term \({\bf n}_{r}\) represents the noncollinear magnetic texture [62]. The corresponding eigenvalues and eigenstates of the AFM Hamiltonian in the absence of any texture, \({\bf n}_{\bf r}={\bf 0}\), are given by [56; 59]; \[\varepsilon_{\eta}({\bf k})=\eta\sqrt{t^{2}\gamma_{\bf k}^{2}+J^{2}}, \tag{2}\] \[\ket{\Psi_{\eta}^{s}}=\frac{1}{\sqrt{2}}\Big{(}\sqrt{1+s\eta{\bf P}_{\bf k}} \ket{A}+\eta\sqrt{1-s\eta{\bf P}_{\bf k}}\ket{B}\Big{)}\otimes\ket{\sigma}, \tag{3}\] where, \(\eta=+1\) (\(-1\)) corresponds to the conduction (valence) band, and \(s=+1\) (\(-1\)) corresponds to the spin up (down) state, \(\ket{A(B)}\) refers to the AFM sublattice A (B) projection, \(\ket{\sigma}=\ket{\uparrow(\downarrow)}\) denotes up (down) spin projection, and finally we defined \({\rm P}_{\bf k}=J/\sqrt{t^{2}\gamma_{\bf k}^{2}+J^{2}}\). We consider a compensated AFM system that preserved combined time and inversion symmetry (\(PT\) symmetry) and thus the electronic band dispersion, Eq. (2), is spin-degenerate. To find analytical results, we assume the Fermi level near the maximum of the conduction (minimum of the valence) band, and consider the electronic dispersion around the \(\Gamma\) point. Accordingly, the structure factor is approximated as \(\gamma_{\bf k}\approx(1-a^{2}k^{2}/4)\), while the energy dispersion around the \(\Gamma\) point becomes \(\varepsilon_{\eta}\approx\eta\big{(}\sqrt{J^{2}+t^{2}}-\hbar^{2}k^{2}/(2m_{ \rm eff})\big{)}\). The first term here is the energy at the \(\Gamma\) point and the second term is the effective kinetic energy of electrons with an effective mass \(m_{\rm eff}=2\hbar^{2}\sqrt{J^{2}+t^{2}}/a^{2}t^{2}\). The profile of an AFM skyrmion can be modelled in the spherical coordinates as, \[{\bf n}_{r}=\big{(}\cos\Phi\sin\Theta,\sin\Phi\sin\Theta,\cos\Theta\big{)}, \tag{4}\] where the polar \(\Theta(r)\) and azimuthal \(\Phi(r)\) angles are defined as [56; 36] \[\Theta =2\pi-4\ \text{arctan}\Big{(}\!\exp(4r/r_{\rm sk})\Big{)}, \tag{5a}\] \[\Phi =q\text{Arg}(\text{x}+\text{iy})+\text{c}\frac{\pi}{2}. \tag{5b}\] Here, \(r=\sqrt{x^{2}+y^{2}}\) is the distance from the skyrmion center, \(r_{\rm sk}\) denotes the radius of the skyrmion core, \(p=\pm 1\) stands for the skyrmion polarity, \(q=\pm 1\) denotes skyrmion vorticity, and \(c=\pm 1\) defines the chirality of the skyrmion. For a spin texture slowly varying in space, the exchange interaction term - the second term in the Hamiltonian (1) - can be diagonalized by a unitary gauge transformation. The result is a uniform spin background in the presence of an emerging SU(2) gauge field that interacts with itinerant electrons [63; 64; 65; 41; 9; 41]. 
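As a quick numerical cross-check of the dispersion (2), the sketch below builds the \(4\times 4\) Bloch Hamiltonian of Eq. (1) for a homogeneous Neel vector (\({\bf n}_{\bf r}={\bf 0}\)) and verifies that its eigenvalues reproduce \(\pm\sqrt{t^{2}\gamma_{\bf k}^{2}+J^{2}}\), each doubly degenerate, as expected for the spin-degenerate bands; the parameter values are illustrative only.

```python
import numpy as np

t, J, a = 1.0, 0.4, 1.0                      # hopping, sd exchange, lattice constant (illustrative)
s0 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
tau_x, tau_z = sx, sz                        # sublattice Pauli matrices obey the same algebra

def bloch_hamiltonian(kx, ky, n=(0.0, 0.0, 1.0)):
    """4x4 Hamiltonian of Eq. (1) for a uniform staggered order parameter n."""
    gamma = 0.5 * (np.cos(kx * a) + np.cos(ky * a))
    n_dot_s = n[0] * sx + n[1] * sy + n[2] * sz
    return -t * gamma * np.kron(tau_x, s0) - J * np.kron(tau_z, n_dot_s)

for kx, ky in [(0.0, 0.0), (0.3, 0.1), (1.2, 2.0)]:
    gamma = 0.5 * (np.cos(kx * a) + np.cos(ky * a))
    analytic = np.sqrt((t * gamma) ** 2 + J ** 2)            # Eq. (2)
    numeric = np.linalg.eigvalsh(bloch_hamiltonian(kx, ky))  # sorted: (-E, -E, +E, +E)
    print(f"k=({kx},{ky}): numeric={np.round(numeric, 6)}, analytic=+/-{analytic:.6f}")
```

Repeating the loop with a tilted \({\bf n}\) returns the same spectrum, since only the unit length of \({\bf n}\) enters Eq. (2).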
The emergent magnetic field induced by an AFM texture depends on the spin, band, and sublattice indices [66; 56], \[B_{\rm em,\eta}^{\alpha,s}({\mathbf{r}})=-s\big{(}1+s\eta\alpha{\bf P}_{\bf k} \big{)}\frac{\hbar}{2e}N_{x,y}({\mathbf{r}})\hat{z}, \tag{6}\] where \(\alpha=+(-)\) refers to the \(A(B)\) sublattice, \(\hbar=h/(2\pi)\) is the reduced Planck constant, \(e\) is the electron charge, and \(N_{x,y}({\mathbf{r}})={\bf n}\cdot(\partial_{x}{\bf n}\times\partial_{y}{\bf n})\) is the topological charge density. Without loss of generality, we assume the Fermi level is located in the conduction band and in the rest of this article, we set \(\eta=+1\) and drop this index. We consider a narrow AFM stripe of width \(2w\geq 2r_{\rm sk}\) and length \(2L\), that includes a single skyrmion at its center. ## III Boltzmann kinetic equation To describe spin and charge transports in the diffusive regime in the presence of emerging magnetic field induced by a static AFM skyrmion, we employ a semi-classical transport theory based on the Boltzmann kinetic equation [67; 68; 69], \[\mathbf{v_{k}}\cdot\frac{\partial f_{s}}{\partial\mathbf{r}}-\frac {e}{\hbar}\Big{(}\mathbf{E}+\mathbf{v_{k}}\times\mathbf{B}_{\mathrm{em}}^{s} \Big{)}\cdot\frac{\partial f_{s}}{\partial\mathbf{k}}\] \[\qquad=-\frac{f_{s}-\langle f_{s}\rangle}{\tau_{s}}-\frac{\langle f _{s}\rangle-\langle f_{-s}\rangle}{\tau_{\mathrm{sf}}},\] where \(f_{s}=f_{s}(\mathbf{r},\mathbf{k})\) is the nonequilibrium distribution function for electrons with spin \(s=+/-\) (or equivalently \(s=\uparrow/\downarrow\)), \(\mathbf{v_{k}}\) is the electron velocity, \(\mathbf{E}=E_{x}\hat{x}\) is the external electric field applied along the stripe, \(\langle f_{s}\rangle=\int d^{2}\Omega_{\mathbf{k}}f_{s}/\int d^{2}\Omega_{ \mathbf{k}}\) is the angular average over the momentum space, and \(\Omega_{\mathbf{k}}\) is the solid angle in the momentum space. The first term on the right hand side describes the spin-conserving relaxation processes with \(\tau_{s=\uparrow(\downarrow)}\) being the corresponding spin-dependent scattering time. The second term describes spin mixing relaxation processes with \(\tau_{\mathrm{sf}}\) denoting the corresponding spin-flip relaxation time. The total spin-dependent emerging magnetic field is the sum of the two sublattices contributions, \[\mathbf{B}_{\mathrm{em}}^{s}(\mathbf{r})=\sum_{\alpha=A,B}\mathbf{B}_{\mathrm{em} }^{\alpha,s}(\mathbf{r})=sB_{\mathrm{em}}(\mathbf{r})\hat{z}, \tag{7}\] where for a skyrmion profile, defined in Eqs. (4) and (5), the emerging magnetic field amplitude reads [18], \[\frac{B_{\mathrm{em}}}{B_{0}}=-\frac{8}{r/r_{sk}}\,\sin\left(4 \arctan\left(\exp\left(\frac{4r}{r_{\mathrm{sk}}}\right)\right)\right)\frac{ \exp(\frac{4r}{r_{\mathrm{sh}}})}{1+\exp(\frac{8r}{r_{\mathrm{sh}}})}, \tag{8}\] with \(B_{0}=(h/e)(\pi r_{\mathrm{sk}}^{2})^{-1}\). The emergent magnetic field is normal to the AFM layer and has an opposite sign for up and down itinerant electron spins. Therefore, this magnetic field effectively deflects the electron trajectory of spin-up and spin-down electrons in opposite directions. If there is no asymmetry between the spin up and spin down electron subbands, this leads to a vanishing topological charge Hall effect, while the spin Hall effect is nonzero. However, in the presence of an asymmetry between spin subbands, e.g., due to different relaxation times or breaking the time-reversal symmetry of electronic bands, both charge and spin Hall effects may occur, as we will show later. 
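To make the skyrmion-induced field concrete, the sketch below (illustrative units with \(r_{\rm sk}=1\) and \(q=c=1\)) evaluates the texture of Eqs. (4)-(5), computes the topological charge density \(N_{x,y}\) and the resulting skyrmion number, and tabulates the emergent-field amplitude of Eq. (8); numerically, the total emergent flux integrates to about two flux quanta \(h/e\), consistent with summing the two sublattice contributions of Eq. (6). This is only a sanity check of the formulas quoted above, not part of the transport calculation.

```python
import numpy as np

r_sk = 1.0              # skyrmion core radius (arbitrary length unit, illustrative)
q, c = 1, 1             # vorticity and chirality entering Eq. (5b)

def B_em_over_B0(r):
    """Emergent-field amplitude of Eq. (8), in units of B_0 = (h/e)/(pi r_sk^2)."""
    x = 4.0 * r / r_sk
    return -8.0 * r_sk / r * np.sin(4.0 * np.arctan(np.exp(x))) * np.exp(x) / (1.0 + np.exp(2.0 * x))

# Skyrmion texture of Eqs. (4)-(5) on a grid around the core.
L = 5.0 * r_sk
x = np.linspace(-L, L, 801)
y = np.linspace(-L, L, 801)
X, Y = np.meshgrid(x, y, indexing="ij")
r = np.hypot(X, Y) + 1e-12
Theta = 2.0 * np.pi - 4.0 * np.arctan(np.exp(4.0 * r / r_sk))
Phi = q * np.arctan2(Y, X) + c * np.pi / 2.0
n = np.stack([np.cos(Phi) * np.sin(Theta), np.sin(Phi) * np.sin(Theta), np.cos(Theta)])

# Topological charge density N_xy = n . (d_x n x d_y n) and the skyrmion number.
dx, dy = x[1] - x[0], y[1] - y[0]
N_xy = np.einsum("iab,iab->ab", n,
                 np.cross(np.gradient(n, dx, axis=1), np.gradient(n, dy, axis=2), axis=0))
print("skyrmion number:", round(float(N_xy.sum() * dx * dy / (4 * np.pi)), 3))   # ~ +/-1 (one skyrmion)

# Total emergent flux from Eq. (8), in units of h/e (note B_0 * pi * r_sk^2 = h/e).
rr = np.linspace(1e-4, 12.0 * r_sk, 40001)
integrand = B_em_over_B0(rr) * 2.0 * np.pi * rr
flux = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(rr))
print("flux / (h/e):", round(float(flux / (np.pi * r_sk**2)), 3))                # ~ 2
print("B_em/B_0 at r = 0.5 r_sk:", round(float(B_em_over_B0(0.5 * r_sk)), 3))
```

Opposite spins see this flux with opposite sign, which is the real-space origin of the spin-dependent deflection discussed above.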
## IV Analytical solutions of Boltzmann equation To solve the Boltzmann kinetic equation, Eq. (7), we decompose the nonequilibrium distribution function into an equilibrium component \(f_{s}^{0}\) and small nonequilibrium perturbations [70; 71; 37], \[f_{s}=f_{s}^{0}-\frac{\partial f_{s}^{0}}{\partial\varepsilon} \Big{(}-e\mu_{s}(\mathbf{r})+g_{s}(\mathbf{r},\mathbf{k})\Big{)}, \tag{9}\] where \(-e\mu_{s}(\mathbf{r})\) and \(g_{s}(\mathbf{r},\mathbf{k})\) are the isotropic (zeroth velocity moment) and anisotropic (first velocity moment) parts, respectively. The first part is related to spin accumulation, while the second part represents a shift of the electron sphere in the momentum space. Both \(\mu_{s}(\mathbf{r})\) and \(g_{s}(\mathbf{r},\mathbf{k})\) should be determined from the Boltzmann equation. Inserting Eq. (9) into Eg. (7), and separating the odd and even velocity moments of the distribution function, up to linear order in the emerging magnetic field, we find [70; 71; 37], \[g_{s}(\mathbf{k},\mathbf{r})=-e\tau_{s}\mathbf{v_{k}}\cdot\left( \mathbf{E}-\nabla_{\mathbf{r}}\mu_{s}(\mathbf{r})-\frac{e\tau_{s}}{m_{\mathrm{ eff}}}\mathbf{E}\times\mathbf{B}_{\mathrm{em}}^{s}(\mathbf{r})\right), \tag{10}\] \[\nabla^{2}\delta\mu(\mathbf{r})-\frac{\delta\mu(\mathbf{r})}{ \lambda_{\mathrm{sd}}^{2}}=\frac{e\tau}{m_{\mathrm{eff}}}(\hat{z}\times\mathbf{ E})\cdot\nabla\mathbf{B}_{\mathrm{em}}(\mathbf{r}). \tag{11}\] Here, \(\delta\mu=(\mu_{\uparrow}-\mu_{\downarrow})/2\) is the net spin accumulation, \(\tau=(\tau_{\uparrow}+\tau_{\downarrow})/2\) is the spin-averaged relaxation time, \(\lambda_{\mathrm{sd}}\) is the spin diffusion length, defined as \(\lambda_{\mathrm{sd}}^{-2}=(l_{\uparrow}^{-2}+l_{\downarrow}^{-2})/2\), where \(l_{s}^{2}=v_{\mathrm{F}}^{2}\tau_{s}\tau_{\mathrm{sf}}/2\) and \(v_{\mathrm{F}}\) is the electron Fermi velocity. The difference between two spin-dependent scatterings can be quantified by a spin asymmetry relaxation time parameter \(p_{\tau}=(\tau_{\uparrow}-\tau_{\downarrow})/(\tau_{\uparrow}+\tau_{\downarrow})\). We can also define spin asymmetry of the spin-dependent conductivity as \(p_{\sigma}=(\sigma_{\uparrow}-\sigma_{\downarrow})/(\sigma_{\uparrow}+\sigma_{ \downarrow})\). In AFM metals with degenerate spin bands \(p_{\tau}=p_{\sigma}\) while in spin nondegenerate bands, such as FM metals, these two parameters can be different [37]. The spin current density is determined from the formula \(\mathbf{j}_{s}=-e(2\pi)^{-2}\int d^{2}\mathbf{k}\,f_{s}(\mathbf{r},\mathbf{k })\mathbf{v_{k}}\), which upon using Eq. (10) leads to the following relation, \[\mathbf{j}_{s}(\mathbf{r})=\sigma_{s}\Big{(}\mathbf{E}-\nabla_{\mathbf{r}}\mu_{s} (\mathbf{r})-\frac{e\tau_{s}}{m_{\mathrm{eff}}}\mathbf{E}\times\mathbf{B}_{ \mathrm{em}}^{s}(\mathbf{r})\Big{)}, \tag{12}\] where \(\sigma_{s}=(e^{2}/2h)(v_{F}k_{F}\tau_{s})\) is the spin-dependent conductivity. The total charge and spin current densities are respectively defined as, \[\mathbf{j}^{\mathrm{ch}} =\mathbf{j}_{\uparrow}+\mathbf{j}_{\downarrow}, \tag{13a}\] \[\mathbf{j}^{\mathrm{sp}} =\mathbf{j}_{\uparrow}-\mathbf{j}_{\downarrow}. \tag{13b}\] We are interested in the average of the spin accumulation along the transport direction, i.e., \(x\) direction. 
Equation (11) can then be rewritten as, \[\frac{d^{2}\overline{\delta\mu}(y)}{dy^{2}}-\frac{\overline{\delta\mu}(y)}{ \lambda_{\mathrm{sd}}^{2}}=\frac{e\tau E_{x}}{m_{\mathrm{eff}}}\frac{d \overline{B}_{\mathrm{em}}(y)}{dy}, \tag{14}\] where we defined \(\overline{F}(y)=(2L)^{-1}\int_{-L}^{L}dxF(\mathbf{r})\). The general solution of this differential equation is the sum of homogeneous and particular solutions. The homogeneous solution includes two unknown constants that must be determined from the appropriate boundary conditions. Similarly, using Eqs. (12) and (13), we find the following differential equations for the transverse charge and spin current densities: \[\overline{f}_{y}^{\rm{b}}(y)=-\sigma_{0}\left(\frac{d\overline{\mu}(y)}{dy}+p_{ \tau}\frac{d\overline{\delta}\overline{\mu}(y)}{dy}\right)+\frac{e\tau\sigma_{ 0}}{m_{\rm{eff}}}p_{\tau}E_{x}\overline{B}_{\rm{em}}(y), \tag{15a}\] \[\overline{f}_{y}^{\rm{p}}(y)=-\sigma_{0}\left(p_{\tau}\frac{d \overline{\mu}(y)}{dy}+\frac{d\overline{\delta}\overline{\mu}(y)}{dy}\right)+ \frac{e\tau\sigma_{0}}{2m_{\rm{eff}}}\big{(}1+p_{\tau}^{2}\big{)}E_{x}\overline {B}_{\rm{em}}(y), \tag{15b}\] where \(\sigma_{0}=\sigma_{\uparrow}+\sigma_{\downarrow}\) is the total longitudinal conductivity of the AFM metal and \(\mu=(\mu_{\uparrow}+\mu_{\downarrow})/2\) is the spin-averaged chemical potential. To solve the differential equations (14) and (15), and to find expressions for the spin accumulation, spin Hall current, and charge Hall voltage in the assumed stripe geometry, we need now to use the appropriate boundary conditions. Since the AFM stripe has a finite width with an open boundary condition, the transverse component of the charge current density must be zero, \(\overline{f}_{y}^{\rm{ch}}(y)=0\). Consequently, using Eq. (15a) one finds the average transverse electric field and Hall voltage, \[\overline{E}_{y}(y)=-\frac{d\overline{\mu}(y)}{dy}=p_{\tau}\left(\frac{d \overline{\delta}\overline{\mu}(y)}{dy}-\frac{2e\tau}{m_{\rm{eff}}}E_{x} \overline{B}_{\rm{em}}(y)\right), \tag{16a}\] \[V_{H}=\int_{-w}^{w}\overline{E}_{y}(y)dy. \tag{16b}\] As it is evident from the above expressions, the Hall voltage is finite only if there is an asymmetry between the relaxation time of up and down spins, i.e., \(p_{\tau}\neq 0\). This emergent Hall voltage consists of two contributions. The first term on the right-hand side of Eq. (16a) is proportional to the spatial variation of the spin accumulation, and the second term on the right-hand side of Eq. (16a) is directly proportional to the emerging magnetic field. To find the spin current density and the Hall voltage, we need to know the spatial dependence of the average spin accumulation \(\overline{\delta}\overline{\mu}(y)\). Now, we should implement other boundary conditions. The spin current should be zero at both edges, \(\overline{f}_{y}^{\rm{p}}(\pm w)=0\). Using this boundary condition, together with Eqs. 
(14) and (15b), we find the nonequilibrium spin accumulation profile as, \[\overline{\delta}\overline{\mu}(y)= \frac{e\tau E_{x}}{2m_{\rm{eff}}}\Bigg{(}\frac{\sinh(\frac{y}{ \lambda_{\rm{sd}}})\exp(\frac{-w}{\lambda_{\rm{sd}}})}{\cosh(\frac{w}{\lambda _{\rm{sd}}})}\int_{-w}^{+w}\overline{B}_{\rm{em}}(\tilde{y})\exp(\frac{\tilde{ y}}{\lambda_{\rm{sd}}})d\tilde{y}^{\rm{sp}}\] \[+\int_{-w}^{+w}\overline{B}_{\rm{em}}(\tilde{y})\frac{y-\tilde{y} }{|y-\tilde{y}|}\exp(\frac{-|y-\tilde{y}|}{\lambda_{\rm{sd}}})d\tilde{y} \Bigg{)}, \tag{17}\] The first term on the right-hand side of this expression is the homogeneous solution of Eq. (14), and the second term is its particular solution. Having the spin accumulation, Eq. (17), we find the spin Hall current density from Eq. (15b), \[\overline{f}_{y}^{\rm{sp}}(y)=\sigma_{0}(1-p_{\tau}^{2})\frac{e\tau E_{x}}{m _{\rm{eff}}}\Bigg{[}\frac{1}{2\lambda_{\rm{sd}}}\Bigg{(}\int_{-w}^{+w} \overline{B}_{\rm{em}}(\tilde{y})\exp(-\frac{|y-\tilde{y}|}{\lambda_{\rm{sd}} })d\tilde{y}\Bigg{)}\] \[-\frac{\cosh(\frac{y}{\lambda_{\rm{sd}}})\exp(\frac{-w}{\lambda _{\rm{sd}}})}{\cosh(\frac{w}{\lambda_{\rm{sd}}})}\int_{-w}^{+w}\overline{B}_ {\rm{em}}(\tilde{y})\exp(\frac{\tilde{y}}{\lambda_{\rm{sd}}})d\tilde{y} \Bigg{)}+\overline{B}_{\rm{em}}(y)\Bigg{]}. \tag{18}\] Figure 1: Spatial variation of the spin accumulation, generated by the skyrmion-induced real-space Berry curvature, Eq. (17), for different spin-diffusion lengths. The spin accumulation is normalized to \(\mu_{0}=(e\tau E_{x}B_{0}r_{\rm{sk}})/m_{\rm{eff}}\) and we set \(w=3r_{\rm{sk}}\). The inset shows the profile of the skyrmion-induced magnetic field, Eq. (8). Figure 2: Spatial variation of the topological spin Hall current density, Eq. (18), generated by the skyrmion-induced real-space Berry curvature, for several spin diffusion lengths. The spin current is normalized to \(j_{0}=(1-p_{\tau}^{2})(e\sigma_{0}\tau E_{x}B_{0})/m_{\rm{eff}}\) and we set \(w=3r_{\rm{sk}}\). This expression describes the spatial dependence of the spin Hall current density in the AFM stripe in the presence of an emerging field of a skyrmion. This transverse spin current has two contributions. There is an extrinsic contribution arising from the spin accumulation gradient (the first two integrals) and an intrinsic contribution from the emergent magnetic field (the last term). ## V Spin accumulation, spin current density, and Hall voltage The spin accumulation and spin current density are calculated from Eqs. (17) and (18), respectively. The integrals cannot be calculated analytically and thus we integrate them numerically. Figure 1 shows the spatial variation of the nonequilibrium spin accumulation, Eq. (17), for various spin diffusion lengths. The inset shows the profile of the emerging magnetic fields, Eq. (8). The spin accumulation displays a nonmonotonic spatial behaviour, rising within the first half of the skyrmion radius but declining as it extends towards the edges. Beyond the skyrmion region, the spin accumulation gradually vanishes for shorter diffusion lengths, whereas it reaches a constant value for longer diffusion lengths. Reducing the spin diffusion length results in a decrease in the spin accumulation amplitude as one may expect. The associated topological spin Hall current flowing across the stripe is presented in Fig. 2 for various spin diffusion lengths. The spin Hall current reduces dramatically outside the skyrmion core, and eventually vanishes outside the skyrmion. 
Inside the skyrmion core, the amplitude of the spin current increases with reducing the spin diffusion length, while it decreases away from the skyrmion core. Finally, in Fig. 3 we plot the Hall voltage as a function of the spin diffusion length for the indicated values of the spin asymmetry parameter \(p_{\tau}\). When the spin asymmetry of the relaxation time increases, the amplitude of the Hall voltage increases as well. In turn, an increase in the spin diffusion length leads to a decrease in the Hall voltage. ## VI Summary and concluding remarks We computed the topological charge and spin Hall effects as well as the spin accumulation in a compensated AFM system, arising from a real-space Berry curvature induced by a skyrmion. We calculated these quantities in the diffusive regime using the semiclassical Boltzmann formalism. Our model describes a single skyrmion in either a metallic AFM thin film or at the interface of an AFM insulator and a metal. We considered both spin-dependent scattering and spin-flip mechanisms in our calculations. We found both intrinsic and extrinsic contributions to the spin Hall effect and spin accumulation, and showed that the spin Hall current vanishes rapidly outside the skyrmion. On the other hand, the spin accumulation, which is a measurable quantity, can be finite in systems with the spin diffusion length larger than the skyrmion size. In addition, we showed that the Hall voltage can be finite in the presence of a spin-dependent relaxation time in such systems. Direct detection of skyrmions is a challenge because of the absence of net magnetization in compensated AFM systems. We argue that the electrical detection of AFM skyrmions is possible by measuring the spin accumulation and/or Hall voltage. ## Acknowledgements This work has been supported by the Norwegian Financial Mechanism 2014- 2021 under the Polish - Norwegian Research Project NCN GRIEG "2Dtronics" no. 2019/34/H/ST3/00515.
2308.00016
Alpha-GPT: Human-AI Interactive Alpha Mining for Quantitative Investment
One of the most important tasks in quantitative investment research is mining new alphas (effective trading signals or factors). Traditional alpha mining methods, either hand-crafted factor synthesizing or algorithmic factor mining (e.g., search with genetic programming), have inherent limitations, especially in implementing the ideas of quants. In this work, we propose a new alpha mining paradigm by introducing human-AI interaction, and a novel prompt engineering algorithmic framework to implement this paradigm by leveraging the power of large language models. Moreover, we develop Alpha-GPT, a new interactive alpha mining system framework that provides a heuristic way to ``understand'' the ideas of quant researchers and outputs creative, insightful, and effective alphas. We demonstrate the effectiveness and advantage of Alpha-GPT via a number of alpha mining experiments.
Saizhuo Wang, Hang Yuan, Leon Zhou, Lionel M. Ni, Heung-Yeung Shum, Jian Guo
2023-07-31T16:40:06Z
http://arxiv.org/abs/2308.00016v1
# Alpha-GPT: Human-AI Interactive Alpha Mining for Quantitative Investment ###### Abstract. One of the most important tasks in quantitative investment research is mining new alphas (effective trading signals or factors). Traditional alpha mining methods, either hand-crafted factor synthesizing or algorithmic factor mining (e.g., search with genetic programming), have inherent limitations, especially in implementing the ideas of quants. In this work, we propose a new alpha mining paradigm by introducing human-AI interaction, and a novel prompt engineering algorithmic framework to implement this paradigm by leveraging the power of large language models. Moreover, we develop Alpha-GPT, a new interactive alpha mining system framework that provides a heuristic way to "understand" the ideas of quant researchers and outputs creative, insightful, and effective alphas. We demonstrate the effectiveness and advantage of Alpha-GPT via a number of alpha mining experiments. + Footnote †: Corresponding author ## 1. Introduction A trading alpha (A * We propose AlphaBot, an algorithm with domain knowledge compilation and decompilation methods to employ the LLM as a mediator for human-AI interaction. * We develop Alpha-GPT, a systematic framework incorporating AlphaBot to realize our proposed paradigm and a tool for quantitative researchers. ## 2. User Interface of Alpha-GPT In Figure 3 we introduce the user interface (UI) of Alpha-GPT, which is composed of three main components: Session Manager, Dialog Box, and the Alpha Mining Dashboard. **Dialog Box**: Users input their trading ideas and thoughts into the dialog box. In response, generated seed alphas, alpha searching progress, the final report of alpha mining, as well as the performance of the generated alphas, are all organized into system messages that provide comprehensive feedback. Users can then analyze the results and provide further direction for alpha mining, and this dialog continues until effective alphas are found. **Mining Session Manager**: Alpha-GPT features a session-based user interface that stores past interaction history via the session manager. These sessions also serve to organize user-generated data, which can then be used to further enhance the system's performance. **Alpha Mining Dashboard**: On the right half, the dashboard is used to display and analyze alpha mining results. It provides users with more detailed descriptions of the session. **Experiment Monitor** displays the alpha mining experiment progress and current system load as well as all historical alphas generated during a session. If a specific alpha is selected, its performances are plotted and visualized on the **Analytic Panel**. The available plotting features include a fitness curve across generations for genetic programming, a backtest curve for a single alpha, and other peripheral analyses such as IC distribution and Signal decay. 
Furthermore, **Alpha Dashboard** includes one-click storage and deployment, enabling further application and analysis downstream. ## 3. Architecture and Technology Challenges Figure 4 shows the system framework we proposed for the interactive alpha mining paradigm by distilling and abstracting from the architecture design of Alpha-GPT. Since UI has been introduced in Section 2, we only introduce the design idea and technical challenge of other modules behind the system. ### AlphaBot Layer AlphaBot is the key layer of Alpha-GPT, as it plays a mediator role in human-AI interaction. Specifically, this layer consists of four functional modules: 1) **Knowledge compiler** automatically converts the intents/ideas/thoughts of quantitative researchers into domain-specific prompts and instructions for an LLM query; 2) **LLM** provides APIs or local deployment options for mainstream large language models such as GPT-4; 3) **Thought decompiler** translates the natural language output of an LLM into a configuration understandable by the algorithmic alpha mining layer; 4) **Knowledge Library** incorporates extra knowledge, information, literature and data about alpha mining to improve the performance and accuracy of LLMs. #### 3.1.1. Knowledge compiler On domain-specific tasks like alpha mining, a lot of user requests will include terminology that can only be found in finance. Without further context, a traditional LLM would be unable to understand the meaning behind the inputs. This then introduces the need for the knowledge compiler module. Leveraging the in-context learning capability of LLM (Hu et al., 2018), the module enhances the original user request by providing additional context and clarifying keywords in prompts (Hu et al., 2018). Specific phrases like "_you are a quant researcher developing formulaic alphas_" help narrow the scope of possible responses to answers more suited for factor mining. Since the alpha expressions must also be in a certain format to be valid, there is a section in the prompt with each possible component of an expression along with an explanation of how it can be utilized (e.g. _"high_LD": highest intraday price of stocks_). This addition helps the LLM correlate the user intentions with certain functions that can fulfill that goal, and helps ensure that the output will be valid. Because this terminology is applicable to the wider finance domain, this portion of the prompt is also used in the thought decompiler to prevent hallucinations. Such a module is required to allow users to make requests in natural language without the ambiguities involved. #### 3.1.2. Large Language Model There are currently two main ways of utilizing LLMs, each with their own characteristics. * **Online Request**: A swath of companies offer access to their pre-built large language models through APIs. These are products that charge a nominal amount per set amount of tokens and include GPT, Claude, and Bard, to name a few. One advantage of Figure 1. Evolution of alpha mining techniques. APIs is that it allows for ease of use, and removes the need for computing power. * **Local Deployment**: LLMs can also be developed locally from the ground up. This approach allows greater customization in how the model functions. They can be pre-trained on specific datasets to align them towards certain purposes. This involves domain fine-tuning by learning on relevant documents, and also reinforcement learning from human feedback (Hari et al., 2019). 
The result is an LLM that is specifically built for a subset of tasks. However, the user then requires a large amount of computing power (Hari et al., 2019; Chen et al., 2020) to train and update these models.

#### 3.1.3. Thought Decompiler

There is a significant gap between LLM responses and the desired output structure in alpha mining. Specifically, the gaps can be summarized as challenges from the following perspectives:

* **Natural language to structured data**: We need to convert the LLM response from natural language to structured data. For each LLM response message, we need to extract a list of expression blocks from the raw output. Each expression block follows a certain organization style, such as its short name, expression, and a paragraph of natural language description.
* **Token size limit**: Since most state-of-the-art LLMs apply a Transformer-based architecture with self-attention mechanisms, limits on the input sequence length (number of tokens) are a common concern. In this way, both inputs and outputs of LLMs have an upper bound on their lengths. This may cause two issues: 1) we cannot keep sending the full conversation history to the LLM, since it will exceed the input token size limit; 2) the size of a single LLM response is limited, meaning that we can only get a limited number of expressions per message.

Figure 2. Alpha-GPT internal working pipeline: After a user inputs their ideas, the system goes into the knowledge compilation module. It uses external memory to pull similar examples, and combines them into the system prompt. The module passes everything to the LLM which creates valid alpha expressions and config files. These alphas are evaluated via Alpha Search, and results are presented to the user along with an interpretation provided by the Thoughts Decompiler.

To address these issues, we propose an iterative LLM reasoning procedure (Algorithm 1). We apply a parser based on regular expressions to parse LLM outputs. For each LLM-generated alpha, we validate its correctness both syntactically and semantically. We adopt an abstract syntax tree parser for syntax checking, and for semantic correctness, we evaluate the expression with mock data and runtime context to capture any exceptions being thrown. In practice, the proportion of correct alphas might be low (only 4 out of 10 alphas generated per round), making the alpha generation process inefficient. Meanwhile, since the conversation history is appended to LLM inputs, these incorrect expressions may also affect the generation process that follows. Hence, we apply an iterative correcting procedure where we prompt the LLM to re-generate incorrect alphas. Moreover, to address the problem of exceeding the token size limit, at each round we dynamically check whether the token size limit is exceeded; if so, the input message is truncated and reorganized to reduce the number of tokens.

Figure 3. User interface of Alpha-GPT. Five modules of the UI are annotated as: (A) Dialog Box, (B) Mining Session Manager, (C) Experiment Monitor, (D) Analytic Panel, and (E) Alpha Dashboard.

#### 3.1.4. Knowledge Library

As mentioned in Section 3.1.1, few-shot in-context learning requires an external memory that supports efficient retrieval of demonstrations relevant to trading ideas. In Alpha-GPT, we design a protocol for organizing contents from across multiple sources. These range from an existing collection of alpha expressions to financial literature.
When the user makes a request, the knowledge library encodes that query, and finds similar documents that it can incorporate into the prompt as examples. ### Algorithmic Alpha Mining Layer This layer serves a search enhancement function in Alpha-GPT. Specifically, it implements an algorithmic alpha mining workflow by taking seed alphas and improving them with the received search commands and configurations from AlphaBot, and pushes the most appropriate alphas back to AlphaBot. It consists of four modules: the **algorithmic alpha search** module generates alpha candidates according to the commands from AlphaBot, qualified alphas are selected from these candidates using the **evaluation and backtesting** module, these alphas are further pruned according to a specific prediction target (e.g., contribution to the return of future 10 days) in the **alpha selection** module, and the final alphas are "one-click" deployed by the **alpha deployment** module to guarantee the smoothness and correctness of real-time computing during online trading. #### 3.2.1. Alpha Search Enhancement The most popular alpha search algorithm used in industry is genetic programming (GP), which starts from a number of alpha seeds and iteratively selects formulaic Figure 4. Alpha-GPT system architecture. Part of this figure is cited from (Becker et al., 2018; Becker et al., 2018; Becker et al., 2018; Becker et al., 2018; Becker et al., 2018). alphas expressed by trees using random crossing and mutation subtrees according to the fitness of a scoring function. However, GP currently suffers from three problems: 1) **Overfitting**, which is extremely dangerous for quantitative trading. This can be mitigated by strategies such as out-of-sample evaluation incorporated in GP iterations, fit regularization to reduce function complexity, and early stopping of iterations. These methods help ensure alphas generalize well beyond training data, improving their reliability. 2) **Loss of diversity** in alphas may result in the aggregation and accumulation of investment risk and increase the uncertainty of returns. Alpha diversification can be realized by enforcing more constraints in GP's iteration process, and it helps discover robust alpha factors that are resilient to changing market conditions. 3) **Invalid alphas** are easily generated by GP. For example, \(log(0)\) and \(\sqrt{-5}\), or a sum of two values with incompatible units (e.g, volume + close). Incorporating a rule base encompassing mathematical rules, unit consistency rule and financial domain-specific rules could regulate alpha expression generation. #### 3.2.2. Evaluation and Backtesting The most straightforward method of evaluating alpha is through backtesting, which reveals the alpha's specific performance in an investment strategy. This process, however, introduces three significant challenges: 1) **Introduction of future information**: information from a further point in time when backtesting could have disastrous consequences on result accuracy. To mitigate this, Timestamping is used to assign a time label to each piece of input data. This technique allows the backtesting system to more accurately replicate market conditions and validate test results, thereby enhancing the reliability of alpha evaluation. 2) **Estimation of transaction costs**: conventional coarse-grained backtesting cannot accurately measure transaction costs, which is crucial for short-term alphas. 
To solve this, we conduct a simulation-based (Han et al., 2017) backtest with trade matching rules using more detailed data, such as order book level data, for selected alphas. This enables us to model how transaction costs and market price impact alphas at a microstructure level. 3) **The computational cost**: the compute-power necessary for alpha mining is also significant, and we address this problem in the computation acceleration layer. #### 3.2.3. Alpha Selection The Alpha Selection module furthers the selection process in the following ways: 1) **Deduplication and Decorrelation**: new alphas need to be distinct from existing ones, but calculating pair-wise similarity between large amounts of alphas can be time-consuming. Algorithms like KD-Tree, Locality-Sensitive Hashing (LSH) (Han et al., 2017), or Approximate Nearest Neighbors (ANN) can swiftly determine each prospective alpha's maximum correlation with others in the pool. 2) **Importance Scoring**: while an alpha's IC score and backtest may reflect individual performance, in real scenarios, multiple alphas are combined together in investment strategies, and these metrics do not accurately reflect how they perform in a larger set. A group of alphas with low IC scores may outperform a subset composed of the highest-scoring alphas. Thus, importance scoring techniques such as Shapley Additive Explanations (SHAP) (Krause et al., 2017) and Local Interpretable Model-agnostic Explanations (LIME) (Krause et al., 2017) measure this contribution and provide a comprehensive understanding of alphas in conjunction with one another. #### 3.2.4. Alpha Deployment In this module, three key aspects need to be properly managed to guarantee the smooth and correct real-time computation during online trading: 1) **Dependency Management:** This involves maintaining and supervising all alpha-data interdependencies to ensure sequential computation and traceability of issues. 2) **Stream-Batch Unification:** Inconsistencies between live trading and historical backtesting of alphas are unacceptable. By adopting the Kappa Architecture, a unified real-time processing layer is maintained. Thus, all data is processed in the same manner, eliminating inconsistencies between batch and stream processing during alpha generation. 3) **Automatic Alpha Verification:** This is employed to validate all system-maintained alphas, monitor data quality, and identify discrepancies. This ongoing verification ensures the reliability, timeliness, and accuracy of the deployed alphas. ### Alpha Computation Acceleration Layer Alpha computation requires preprocessing financial data from various sources such as price-volume data, financial statement data, etc. Because of the computational overhead of processing high-frequency data such as orderbook level data, computational acceleration plays a key role. Also, in alpha mining, billions of alphas are calculated, making the speed of alpha calculations crucial for effectively exploring the alpha search space. Below we outline a few key computation acceleration techniques employed. **Streaming Algorithms:** particularly beneficial in performing rolling window computations on large time-series datasets. They incrementally update calculations with each new data point in the window, significantly optimizing computational efficiency and memory usage for these sliding-scale operations such as ts_corr(rolling correlation of time-series data). 
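To make the streaming idea concrete, the following is a minimal Python sketch of an incremental rolling-correlation operator in the spirit of ts_corr; the class name, window convention, and example data are illustrative assumptions rather than the system's actual implementation. It maintains running sums over a fixed window, so each new observation is processed in O(1) time instead of rescanning the whole window.

```python
from collections import deque
import math

class TsCorr:
    """Incremental rolling Pearson correlation over a fixed window.

    Running sums are updated per tick, so the cost per observation is O(1)
    rather than O(window) as in a naive recomputation.
    """

    def __init__(self, window: int):
        self.w = window
        self.buf = deque()                    # (x, y) pairs currently inside the window
        self.sx = self.sy = 0.0               # running sums of x and y
        self.sxx = self.syy = self.sxy = 0.0  # running sums of x^2, y^2 and x*y

    def update(self, x: float, y: float) -> float:
        self.buf.append((x, y))
        self.sx += x; self.sy += y
        self.sxx += x * x; self.syy += y * y; self.sxy += x * y
        if len(self.buf) > self.w:            # evict the oldest sample
            ox, oy = self.buf.popleft()
            self.sx -= ox; self.sy -= oy
            self.sxx -= ox * ox; self.syy -= oy * oy; self.sxy -= ox * oy
        n = len(self.buf)
        if n < self.w:
            return float("nan")               # window not yet full
        cov = n * self.sxy - self.sx * self.sy
        var_x = n * self.sxx - self.sx * self.sx
        var_y = n * self.syy - self.sy * self.sy
        if var_x <= 0.0 or var_y <= 0.0:
            return float("nan")               # a constant series inside the window
        return cov / math.sqrt(var_x * var_y)

if __name__ == "__main__":
    close = [10.0, 10.2, 10.1, 10.4, 10.6, 10.5, 10.9]
    volume = [100, 120, 90, 150, 170, 160, 200]
    op = TsCorr(window=5)
    for c, v in zip(close, volume):
        print(op.update(c, v))
```

A production version would also guard against floating-point cancellation (for example with Welford-style updates), but the overall structure (update the sums, evict the oldest sample, emit the statistic) captures the essence of the streaming approach.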
**Vectorized Computation:** enables efficient financial data processing by eliminating explicit loops, leveraging hardware capabilities for concurrent processing, and optimizing memory management. **SIMD** (Single Instruction, Multiple Data) and **SIMT** (Single Instruction, Multiple Threads) (Krause et al., 2017): allows simultaneous computations on multiple data points, fully exploiting hardware capabilities. **Memory Optimization**: pre-allocation of memory pool, layout optimization, and zero-copy, etc, are employed to minimize performance loss from discontinuous memory access and unnecessary memory allocation. **Data Partitioning:** Divides large-scale data into smaller, manageable chunks for independent processing. **Multithreading**: enables parallel computations on each partition using multiple threads. This approach significantly boosts computational speed when processing large financial datasets. **GPU Acceleration**(Krause et al., 2017): employs CUDA cores for parallel processing, transforming CPU-bound operations into GPU-accelerated tasks and improves performance for data-intensive computations. ## 4. Experiments We conduct experiments on Alpha-GPT with the goal of verifying the following research questions (RQ): * **RQ1**: Can Alpha-GPT generate expressions that are consistent with the input trading ideas? * **RQ2**: How effective is the algorithmic alpha mining layer in enhancing the seed alphas from LLM? * **RQ3**: Can users effectively interact with Alpha-GPT to guide the mining process? * **RQ4**: Can Alpha-GPT successfully explain the trading ideas behind alpha expressions? ### Experimental Setup The specifications of Alpha-GPT and other relevant information about our experiments include: **Data and operators**: We use inter-day volume-price data of stock markets. This data includes the basic candlestick chart data (OHLCV), volume-weighted average price (VWAP), and sector data. We also include 19 basic operators implemented in (Krishnan et al., 2017) including time-series operations, cross-sectional operations, group-wise operations and basic element-wise operations. **Knowledge Library:** We construct the knowledge library based on the alphas proposed in (Bang et al., 2018). For each alpha, we first decompose it into sub-expressions and explain them. Then we explain the combination of these sub-expressions to form the whole trading idea. Document embeddings are indexed via Faiss1. Note that we only employed external memory when generating alphas for trading ideas that align well with those in (Bang et al., 2018). Footnote 1: [https://github.com/facebookresearch/faiss](https://github.com/facebookresearch/faiss) **LLM and Adapter:** For natural language interaction, we use OpenAI's chat completion API with "gpt-3.5-turbo-16k-0613" model base. For the embedding model used in knowledge retrieval, we use OpenAI's "text-ada-embedding-002" API with embedding dimension of 1536. The LLM generates a batch of alphas at a time, and will be asked to correct alphas with syntax or semantic errors. **Alpha searching and evaluation:** Alphas are searched by the genetic programming model with a fitness score defined by the Information Coefficient (IC). We evaluate the performance of those alphas on out-of-sample criteria such as IC, annual return, Sharpe ratio, etc. 
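As a rough illustration of how such an IC-based evaluation can be computed, the sketch below calculates a daily rank IC (the cross-sectional Spearman correlation between alpha values and forward returns, averaged over days) from a matrix of alpha values. The array layout, forward-return horizon, and synthetic data are assumptions chosen for illustration and do not describe the evaluation pipeline used in this work.

```python
import numpy as np
from scipy.stats import spearmanr

def daily_rank_ic(alpha: np.ndarray, fwd_ret: np.ndarray) -> np.ndarray:
    """Cross-sectional rank IC per day.

    alpha   : (T, N) alpha values for T days and N stocks
    fwd_ret : (T, N) forward returns aligned with the same days and stocks
    Returns a length-T array of Spearman correlations (NaN where degenerate).
    """
    T = alpha.shape[0]
    ics = np.full(T, np.nan)
    for t in range(T):
        a, r = alpha[t], fwd_ret[t]
        mask = np.isfinite(a) & np.isfinite(r)   # drop missing values
        if mask.sum() > 2:
            ics[t], _ = spearmanr(a[mask], r[mask])
    return ics

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    alpha = rng.normal(size=(250, 300))          # one year of days, 300 stocks
    noise = rng.normal(scale=5.0, size=alpha.shape)
    fwd_ret = 0.05 * alpha + noise               # weakly predictive toy returns
    ic = daily_rank_ic(alpha, fwd_ret)
    print("mean IC:", np.nanmean(ic), "ICIR:", np.nanmean(ic) / np.nanstd(ic))
```

The mean of the daily series gives the IC reported in tables such as Table 1, while its mean divided by its standard deviation (the ICIR) is a common stability measure.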
\begin{table}
\begin{tabular}{l c c c c c c}
\hline \hline
 & **Trend Discrepancy** & **Shape** & **RSI** & **Momentum** & **Mean Reversion** & **Flow of Funds** \\
\hline
**Before search enhancement** & 0.01151 & 0.00995 & 0.01109 & 0.00951 & 0.01130 & 0.00952 \\
**After search enhancement** & 0.02256 & 0.02190 & 0.02527 & 0.02763 & 0.02187 & 0.02160 \\
\hline \hline
\end{tabular}
\end{table}
Table 1. Comparison of average top-20 out-of-sample IC between alphas generated by Alpha-GPT before and after search enhancement.

Figure 5. Trading patterns and the corresponding alphas generated by Alpha-GPT that capture them.

Figure 6. Backtest net values of alphas after different stages of interactions.

Figure 7. Alphas generated based on trading ideas and the corresponding explanations generated by Alpha-GPT.

### Idea-Formula Consistency

We first demonstrate that Alpha-GPT can generate formulaic alphas that are consistent with the user's given trading idea. Figure 5 illustrates three alpha expressions generated based on given trading ideas and their correspondence to the patterns in the candlestick chart. The candlestick chart is plotted from the weekly data of the S&P500 index from 2020 to 2023. The first trading idea aims to capture golden cross patterns. The alpha value reflects the divergence between the fast and slow moving average curves. The second trading idea characterizes the breakout signals of Bollinger bands, and the corresponding alpha is a binary signal that gets activated when the upper bound is crossed. The third trading idea aims to capture three consecutive bullish movements on the candlestick chart, and the generated alpha successfully identified those patterns. These examples demonstrate that the generated alphas correctly capture the trading ideas.

### Search Enhancement

Table 1 compares the out-of-sample IC of alphas before and after search enhancement by the algorithmic alpha mining layer on 7 different trading ideas. We can see that search enhancement significantly improves the performance of Alpha-GPT and is critical in the Alpha-GPT workflow.

### Human-AI Interaction

Figure 6 illustrates the backtest curve of the alpha generated throughout the human-AI interaction. The backtest is conducted on US stock data from 2012 to 2021. The initial trading idea is to characterize the long-short imbalance. After several rounds of search enhancement and user interaction, the backtest performance of the resulting alphas significantly improved. More details about these interactions are presented in the UI example in Figure 3.

### Alpha Explanation

Figure 7 presents examples of alpha expressions generated by Alpha-GPT based on given trading ideas, and the corresponding natural language explanations of these alphas, also generated by Alpha-GPT. From these examples we can see that Alpha-GPT can provide appropriate explanations of the generated alphas, relieving human researchers of the burden of interpreting these expressions themselves.

## 5. Related Work

Many algorithms have been studied for formulaic alpha mining. Examples include Monte Carlo random search, Markov-chain Monte Carlo (Zhou et al., 2019), genetic programming (Zhou et al., 2019) and their variants (Beng et al., 2019), and reinforcement learning (Zhou et al., 2019). However, these methods all require the user to directly define the algorithmic configurations, providing limited interactivity compared with Alpha-GPT.
Meanwhile, LLMs such as GPT (Zhou et al., 2019) have demonstrated emergent abilities (Zhou et al., 2019) and achieved superior performance on various tasks. Besides, LLMs have also shown great reasoning (Zhou et al., 2019; Zhou et al., 2019) and planning capabilities (Zhou et al., 2019). In this way, an LLM can be regarded as a core thinking module and be integrated with various peripheral tools (Zhou et al., 2019) to form intelligent LLM-powered agents (Zhou et al., 2019).

## 6. Conclusion and Future Work

In this paper, we propose a new paradigm and a new system for alpha mining by leveraging the power of large language models. Further study on prompt engineering, LLM fine-tuning, alpha search algorithms, and knowledge library construction can be conducted to improve the capability of this system.
2309.06996
Dynamics Reflects Quantum Phase Transition of Rabi Model
As the simplest and most fundamental model describing the interaction between light and matter, a breakdown in the rotating wave approximation of the Rabi model leads to phase transition versus coupling strength when the frequency of the qubit greatly surpasses that of the oscillator. Besides the phase transition revealed in the ground state, we show that the dynamics of physical quantities can reflect such a phase transition for this model. In addition to the excitation of the bosonic field in the ground state, we show that the witness of inseparability (entanglement), mutual information, quantum Fisher information, and the variance of cavity quadrature can be employed to detect the phase transition in quench. We also reveal the negative impact of temperature on checking the phase transition by quench. This model can be implemented using trapped ions, superconducting artificial atoms coupled bosonic modes, and quantum simulations. By reflecting the phase transition in a fundamental quantum optics model without imposing the thermodynamic limit, this work offers an idea to explore phase transitions by non-equilibrium process for open quantum systems.
M. Li, Y. N. Wang, Z. Y. Song, Y. M. Zhao, X. L. Zhao, H. Y. Ma
2023-09-13T14:45:07Z
http://arxiv.org/abs/2309.06996v2
# Dynamics Reflects Quantum Phase Transition of Rabi Model ###### Abstract As the simplest and most fundamental model describing the interaction between light and matter, a breakdown in the rotating wave approximation of the Rabi model leads to phase transition versus coupling strength when the frequency of the qubit greatly surpasses that of the oscillator. Besides the phase transition revealed in the ground state, we show that the dynamics of physical quantities can reflect such a phase transition for this model. In addition to the excitation of the bosonic field in the ground state, we show that the witness of inseparability (entanglement), mutual information, quantum Fisher information, and the variance of cavity quadrature can be employed to detect the phase transition in quench. We also reveal the negative impact of temperature on checking the phase transition by quench. This model can be implemented using trapped ions, superconducting artificial atoms coupled bosonic modes, and quantum simulations. By reflecting the phase transition in a fundamental quantum optics model without imposing the thermodynamic limit, this work offers an idea to explore phase transitions by non-equilibrium process for open quantum systems. ## I Introduction The interaction between light and matter, as well as between harmonic oscillators and atoms, is pervasive in nature and often associated with quantum phase transitions. Quantum phase transition occurs when a non-thermal parameter scans across a critical point, causing a sudden and significant change in the ground state properties of the system, often accompanied by spontaneous symmetry breaking [1; 2]. Such phase transitions can be revealed by various means and have attracted wide attention in quantum information [3; 4] and quantum metrology [5; 6]. Quantum phase transitions have emerged as a prominent area of research in the field of condensed matter physics. Superradiance is one of quantum phase transition occurs when the coupling strength between the two subsystems exceeds a critical threshold. The exploration of this phenomenon commenced with the Dicke model, a theoretical framework that delves into the collective behavior of a multitude of atoms interacting with a single harmonic-oscillator mode of the electromagnetic field. Within this model, the atoms demonstrate quantum-coherent collective behavior, leading to a flurry of captivating dynamics and effects [7]. Namely, the superradiance phase transition is commonly examined under the assumption of the thermodynamic limit, and the interaction between natural atoms and the cavity field is significantly weaker in comparison to the bare atom and cavity frequencies [1]. Recently, significant advancements have been made in superconducting qubit circuits, leading to the achievement of the highly-anticipated strong and ultrastrong-coupling regime [8; 9; 10]. Furthermore, trapped ion quantum simulation presents an opportunity to replicate a similar model by ascribing the oscillatory motion as the harmonic degree of freedom [11]. Given that the artificial atom or ion investigated in these studies play the role as a two-level system, namely, qubits, the quantum Rabi model (QRM) can showcase behavior that bears resemblance to a superradiance phase transition [12; 13; 14; 15; 16; 17]. Then we use quantum phase transition hereafter to represent superradiance phase transition in this work. 
In this work, we first show the ground-state quantum phase transition for the Rabi model by several physical quantities, such as the energy-level structure, excitations of the qubit and the cavity field, quantum Fisher information, measures of entanglement (inseparability), mutual information, and variance of the cavity field quadrature. Notably, we explore these quantities without the constraint of a thermodynamic limit, but instead by considering a higher ratio between the eigen-frequency of the qubit and the cavity field. Then going beyond this equilibrium by quenching the coupling strength across the critical point of the ground state phase transition, the dynamics of the quantities are examined to indicate the phase. Temperature should be considered for a cavity field interacting with entangled atom pairs in the presence of decoherence [18]. We also check the influence of thermal excitation on the dynamics and propose potential experimental platforms. This comprehensive investigation propose a method for illuminating the occurrence of phase transition by non-equilibrium processes. The organization of this study is as follows: In Sec. II, we present the Rabi model with a significant disparity in eigen-frequencies between the qubit and cavity field. In Sec. III, we show the phase diagram concerning various quantities versus the coupling strength and the eigen-frequency with constraint. In Sec. IV, we check the behavior of these quantities during a quench to observe and characterize the phase transition. In Sec. V, we explore the impact of environmental temperature on the dynamics. In Sec. VI, we propose potential experimental platforms to realize this work. Finally, we conclude our work in Sec. VII. ## II The model The quantum Rabi model [19; 20] with a single-mode bosonic field (such as a cavity mode) coupled to a two-level atom (generic qubit) as depicted in Fig. 1, is described by the Hamiltonian (\(\hbar=1\) hereafter) \[\hat{H}=\omega_{c}\hat{a}^{\dagger}\hat{a}+\frac{\omega_{q}}{2}\hat{\sigma}_{z }+g(\hat{a}+\hat{a}^{\dagger})(\hat{\sigma}_{-}+\hat{\sigma}_{+}), \tag{1}\] where \(\hat{a}^{\dagger}(\hat{a})\) is the creation (annihilation) operator for the single mode cavity field with frequency \(\omega_{c}\), and \(\hat{\sigma}_{z}\) is the Pauli z-basis operator with commutation relation \([\hat{\sigma}_{+},\hat{\sigma}_{-}]=\hat{\sigma}_{z}\). The parameter \(g\) represents the coupling strength between the two subsystems. In the case of trapped ion systems, the phase-transition-poisonous vector potential item can be neglected safely [11; 12; 13]. In the regime of weak coupling (\(g\ll\omega_{c},\omega_{q}\)), one can simplify the quantum Rabi model by applying the rotating wave approximation, resulting in Jaynes-Cummings model [21; 22; 23] which has been investigated widely in cavity QED system [24]. In scenarios where the coupling strength reaches or exceeds the magnitude of the frequencies of the cavity mode and qubit, the rotating wave approximation becomes invalid. This breakdown paves the way for the emergence of the strong, ultrastrong and even deep-strong coupling regime, facilitating connections between manifolds characterized by different total excitations [11]. Many exotic physical properties have been investigated in a plenty of strong coupling quantum systems such as trapped ions [11; 12; 13], circuit QED [25; 26; 27; 28; 10], and photonic system [29]. 
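As an illustrative numerical sketch (not the computation used for the figures in this paper), the Hamiltonian in Eq. (1) can be represented on a truncated Fock space and diagonalized directly. The truncation dimension and the parameter values \(\omega_{c}=0.1\), \(\omega_{q}=10\) (so that \(\omega_{c}\omega_{q}=1\)) are assumptions chosen for illustration.

```python
import numpy as np

def rabi_operators(wc, wq, g, n_fock=150):
    """Matrix of Eq. (1) and the photon-number operator on a truncated Fock
    space (n_fock cavity levels) tensored with the two-level atom.
    n_fock must be large enough to converge deep in the superradiant phase."""
    a = np.diag(np.sqrt(np.arange(1, n_fock)), k=1)   # cavity annihilation operator
    ad = a.conj().T
    sz = np.diag([1.0, -1.0])                         # qubit basis ordered (|e>, |g>)
    sm = np.array([[0.0, 0.0], [1.0, 0.0]])           # sigma_-
    H = (wc * np.kron(ad @ a, np.eye(2))
         + 0.5 * wq * np.kron(np.eye(n_fock), sz)
         + g * np.kron(a + ad, sm + sm.conj().T))
    n_op = np.kron(ad @ a, np.eye(2))
    return H, n_op

if __name__ == "__main__":
    wc, wq = 0.1, 10.0                                # keeps wc * wq = 1
    for g in (0.1, 0.3, 0.45, 0.55, 0.7):
        H, n_op = rabi_operators(wc, wq, g)
        vals, vecs = np.linalg.eigh(H)
        psi0 = vecs[:, 0]                             # ground state
        n_mean = np.real(psi0.conj() @ n_op @ psi0)   # cavity occupation
        gap = vals[1] - vals[0]                       # gap to the first excited state
        print(f"g = {g:.2f}  <a^dag a> = {n_mean:8.4f}  gap = {gap:.4e}")
```

Sweeping the coupling in this way exposes the ground-state quantities (occupation, energy gap) whose behavior across the critical coupling is discussed next.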
Quantum phase transition is a focus of research usually considered in the thermodynamic limit since this limit leads to the non-analytic behavior of the free energy or partition function. However, it was recently realized that a quantum phase transition can also occur in a small system with only a two-level atom coupled to a bosonic mode, described by the quantum Rabi model [16]. Since the smaller ratio of the frequencies \(\frac{\omega_{c}}{\omega_{q}}\) in the model, the more obviously of the superradiance phase transition versus the coupling strength, we focus on the situation with \(\omega_{c}\omega_{q}=1\) in the Hamiltonian (1). ## III The phase diagram The critical coupling strength for the quantum phase transition of the Hamiltonia (1) is determined to be \[g_{\rm c}=\frac{\sqrt{\omega_{c}\omega_{q}}}{2}. \tag{2}\] To characterize the quantum phase transition of this model, we examine the behavior of several quantities in the ground state. In addition to the excitations in the cavity traditionally, we check the energy gap between the ground state and the first excited state, the quantum Fisher information (QFI), the partial transposed criteria for entanglement, the mutual information, and the fluctuation of the quadrature of the cavity field. These diverse physical quantities serve as indicators for observing and analyzing the phase transition within the scope of this work. As \(\omega_{c}\to 0\) under the condition \(\omega_{c}\omega_{q}=1\), the quantities in the ground state tend to be non-analytical at the Figure 1: Sketch of the quantum Rabi model: a single two-level atom (qubit) coupled to the cavity field with coupling strength \(g\). The ground (excited) state of the atom is labeled as \(|g\rangle\) (\(|e\rangle\)). Here, \(\gamma_{c}\) and \(\gamma_{q}\) are the cavity and atomic decay rates, respectively. Figure 2: Quantities used to reveal the phase transition in the ground state versus \(\omega_{c}\) and \(g\) under the condition \(\omega_{c}\omega_{q}=1\). (a) The energy gap between the first excited states and the ground states. (b) The occupation of the cavity field \(\langle\hat{a}^{\dagger}\hat{a}\rangle\). (c) The quantum Fisher information of the cavity field defined in Eq. (3). (d) The partial transposed criteria for entanglement. (e) The mutual information. (f) The minimum variance of the cavity field quadrature. critical point as shown in Fig.2, which supports a second-order phase transition at zero temperature. As an example, the excitation in the cavity field plays the role as the order parameter being zero in the normal phase while acquiring positive values in the superradiance phase as shown in Fig.2 (b). QFI is a fundamental concept in quantum metrology that stems from the classical Fisher information [30]. It plays a crucial role in quantifying the sensitivity of a quantum state used in parameter estimation, encompassing the critical factors of quantum superposition and entanglement. To improve the precision of parameter estimation in semi-classical Rabi model, there are works generalize the approximation expression for maximal QFI in two-level system [31]. The symmetric aspect of the the rotating-wave and counterrotating-wave terms are discussed in the Rabi model [32]. QFI have been investigated for the atom in Jaynes-Cummings model coupling with the Ohmic reservoir [33]. These works manifest QFI has been investigated in various aspects in order to benefit parameter estimation. 
Higher QFI indicates enhanced sensitivity of the quantum system to fluctuations in the measured parameter. We focus on evaluating the QFI pertaining to the state of the cavity field \(\hat{\rho}_{c}\) with respect to \(\hat{\rho}_{c}(\theta)=e^{i\theta\hat{G}}\hat{\rho}_{c}e^{-i\theta\hat{G}}\), where \(\theta\) is the parameter need to be estimated as accurately as possible with respect to the phase-shift generator \(\hat{G}\), which depends on the target in certain investigation [34; 35; 36]. The QFI reads \[F=4\sum_{n}p_{n}(\Delta\hat{G})_{n}^{2}-\sum_{m\neq n}\frac{8p_{m}p_{n}}{p_{m} +p_{n}}|\langle\psi_{m}|\hat{G}|\psi_{n}\rangle|^{2}, \tag{3}\] where \(\hat{\rho}_{c}|\psi_{n}\rangle\)=\(p_{n}|\psi_{n}\rangle\). The first term of Eq. (3) is an expectation for each pure state \(|\psi_{n}\rangle\) with \((\Delta\hat{G})_{n}^{2}\equiv\langle\psi_{n}|\hat{G}^{2}|\psi_{n}\rangle-| \langle\psi_{n}|\hat{G}|\psi_{n}\rangle|^{2}\). The second term denotes the negative correction. Here, the QFI provides a quantitative measure for the precision attainable in estimating the parameter \(\theta\) in experiments conducted on the quantum state. While states with larger QFI are indeed valuable for a single-mode linear interferometer and can be treated as a kind of variance, we go beyond their practical utility and view QFI as a witness for the phase transition in this work. When \(\frac{\omega_{c}}{\omega_{q}}\) is sufficiently small and \(\omega_{c}\omega_{q}=1\), the critical coupling strength equals 0.5. As shown in Fig. 2(c), the behavior of QFI versus the coupling strength \(g\) and \(\omega_{c}\) coincides with the phase diagram reflected by the energy gap between the ground state and the first excited state, and that by \(\langle\hat{a}^{\dagger}\hat{a}\rangle\) as shown in Fig.2 (a) and (b), respectively. The eigen-energy degeneracy coincides with a spontaneous breaking of the \(Z_{2}\) parity symmetry which provides a fundamental view for this phase transition. The partial transposed criterion provides a methodology for measuring entanglement in bipartite quantum systems [37; 38]. It involves checking the eigenvalues of the partially transposed density matrix of the hybrid system. The presence of negative eigenvalues of the partially transposed density matrix means the presence of entanglement. To capture this phenomenon and serve as a witness for the phase transition in this work, we employ the absolute value of the summation of the negative eigenvalues \(\lambda_{i}\) of the partially transposed density matrix to witness the phase transition: \(|E^{T}|=\sum_{\lambda_{i}<0}|\lambda_{i}|\), in this work. As shown in Fig. 2(d), the phase diagram indicated by \(|E^{T}|\) coincides with those indexed by the energy gap between the ground state and the first excited state, \(\langle\hat{a}^{\dagger}\hat{a}\rangle\), and the QFI. However, it is crucial to acknowledge that the partial transpose criterion can only serve as a sufficient condition for entanglement since there exist entangled states that maintain a positive-definite nature after undergoing partial transposition [37; 38]. This means the superradiance phase offers the source of entangled states which is useful in quantum information process. Mutual information (von Neumann mutual information) is another quantity demonstrating correlation between the subsystems. It characterizes the information of sub-system '\(A\)' by exploring its counterpart '\(B\)' [39; 40; 3]. 
It is defined as \(I^{M}=S_{A}+S_{B}-S_{AB}\), where \(S_{A(B)}\) is the entropy for \(A(B)\) system and \(S_{AB}\) is that for the hybrid bipartite system [41]. The entropy can be calculated as \(S(t)=-Tr[\rho(t)log(\rho(t))]\) where \(Tr[\bullet]\) denotes the trace of \(\bullet\). Although its definition is distinct from the entanglement witness \(|E^{T}|\) mentioned above, their behavior is quite similar in indicating the phase transition in this work as shown in Fig. 2(d) and (e). Entanglement and mutual information provide quantum resource in quantum information process. This inspires us seeking quantum source in other quantum phase transitions. In general, it is believed that there is a relationship between phase transitions and fluctuations in physical systems [1; 2]. To accurately capture the phase transitions in the strong and ultra-strong coupling regime by using quadrature measurements, it is crucial to define positive and negative frequency cavity-photon operators as \(\hat{X}^{+}=\sum_{j,k>j}X_{jk}|j\rangle\langle k|\) and \(\hat{X}^{-}=(\hat{X}^{+})^{\dagger}\), with \(X_{jk}\equiv\langle j|\hat{a}^{\dagger}+\hat{a}|k\rangle\), in the dressed eigen-basis \(|j\rangle\), \(|k\rangle\) of the Hamiltonian (1) with eigen-values \(\omega_{j}\) and \(\omega_{k}\), respectively [42; 43]. This step is essential for excluding unphysical streams of output photons in experiments. In the limit of weak coupling, these operators coincide with \(\hat{a}\) and \(\hat{a}^{\dagger}\), respectively. And the similar operators can be defined for \(\hat{\sigma}_{-}\) and \(\hat{\sigma}_{+}\)[42; 43]. Using the defined operators above, we can assess the minimum variance of the quadrature of the cavity field defined as \(\hat{X}(\theta)=\frac{\hat{X}^{-}e^{i\theta}+\hat{X}^{+}e^{-i\theta}}{\sqrt{2}}\), with \(\theta\in[0,2\pi)\). The minimum variance reads \(V_{m}^{a}=\langle\hat{X}^{2}(\theta_{m})\rangle-\langle\hat{X}(\theta_{m}) \rangle^{2}\) versus \(\theta_{m}\). The landscape of \(V_{m}^{a}\) versus \(g\) and \(\omega_{c}\) coincides with those of the quantities used above by comparing Fig. 2(f) to the other panels Fig. 2. The behaviors of all these quantities approaching the critical point indicate that the smaller \(\frac{\omega_{c}}{\omega_{q}}\), the more obvious of the phase transition. Exploring these quantities not only provides additional avenues for studying phase transitions in experiments, but also allows us to understand the essence of phase transitions from multiple perspectives. To gain insight into the phase transition, we examine the density matrices of the atom, cavity field, and the Wigner function of the cavity mode in the ground states in different phase regions. It can be seen in Fig. 3, in the normal phase, there is no excitation in both the cavity and the atom (qubit). The Wigner function is that of the vacuum state with Gaussian distribution as shown in Fig. 3 (\(a_{2}\)). However, upon entering the strong coupling regime, a distinctive feature emerges in the Wigner function. Negative texture between the two peaks indicate a Schr\(\ddot{o}\)dinger cat state, a nonclassical resource which can be used in quantum information process [44]. For the condition \(\omega_{c}/\omega_{q}=0.1\), one excitation of qubit is equivalent to ten excitation of the cavity field. This leads to the low level of the excitation of the qubit in the superradiance phase as shown in the inset of Fig. 3 (\(b_{1}\)). 
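To make the correlation witnesses used above concrete, the following sketch evaluates \(|E^{T}|\) (the summed magnitude of the negative eigenvalues of the partially transposed density matrix) and the mutual information \(I^{M}=S_{A}+S_{B}-S_{AB}\) for a pure joint state of the cavity and qubit; the subsystem ordering (cavity first) and the small maximally entangled test state are illustrative assumptions, and the pure-state case (\(S_{AB}=0\)) is assumed, unlike the mixed states produced by the dissipative dynamics later on.

```python
import numpy as np

def vn_entropy(rho):
    """Von Neumann entropy S = -Tr[rho log rho] (natural logarithm)."""
    vals = np.linalg.eigvalsh(rho)
    vals = vals[vals > 1e-12]
    return float(-np.sum(vals * np.log(vals)))

def correlation_witnesses(psi, dim_c, dim_q=2):
    """|E^T| and mutual information for a pure state on cavity (dim_c) x qubit (dim_q)."""
    rho = np.outer(psi, psi.conj()).reshape(dim_c, dim_q, dim_c, dim_q)
    rho_c = np.einsum("iqjq->ij", rho)          # trace out the qubit
    rho_q = np.einsum("ipiq->pq", rho)          # trace out the cavity
    # Partial transpose on the qubit: swap its bra and ket indices.
    rho_pt = rho.transpose(0, 3, 2, 1).reshape(dim_c * dim_q, dim_c * dim_q)
    eigs = np.linalg.eigvalsh(rho_pt)
    E_T = float(np.sum(np.abs(eigs[eigs < 0])))
    S_c, S_q = vn_entropy(rho_c), vn_entropy(rho_q)
    S_cq = 0.0                                   # the joint state is pure here
    return E_T, S_c + S_q - S_cq

if __name__ == "__main__":
    # Toy maximally entangled state on a 2 x 2 space, just to exercise the functions.
    dim_c = 2
    psi = np.zeros(dim_c * 2, dtype=complex)
    psi[0 * 2 + 1] = 1 / np.sqrt(2)
    psi[1 * 2 + 0] = 1 / np.sqrt(2)
    E_T, I_M = correlation_witnesses(psi, dim_c)
    print(E_T, I_M)                              # expected: 0.5 and 2 ln 2
```

Applied to the Rabi ground state obtained as in the earlier sketch (with the same cavity-first tensor ordering), these two functions reproduce the kind of witnesses plotted in Fig. 2(d) and (e).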
## IV Dynamics reflect phase transition In this work, we put forward a conjecture that the dynamics of the quantities (include the above quantities) can reveal the quantum phase transition. To verify this idea, we check the dynamics of the quantities in terms of whether the coupling strength quenches across the critical point or not. Before we check the dynamics in quench, we know that complete isolation of a quantum system from its environment remains a formidable challenge. For the open quantum system, we resort to numerical calculations by solving the master equation under the inherent damping effects of both the cavity field and qubit. However, we implement the finite size of a 50-photon Hilbert space which imposes certain limits on the accuracy of our numerical calculations. In the strong and ultrastrong coupling regimes, to exclude unphysical counting of photons, an effective approach for describing the system involves solving the master equations in the eigen-basis of the Hamiltonian (1). In this case, the master equation reads \[\dot{\rho}(t)=i[\rho(t),\hat{H}]+\mathcal{L}_{\hat{a}}\rho(t)+\mathcal{L}_{ \hat{\sigma}^{-}}\rho(t), \tag{4}\] where \(\mathcal{L}_{\hat{a}}\) and \(\mathcal{L}_{\hat{\sigma}^{-}}\) are Liouvillian superoperators describing the decoherence of the cavity field and qubit [45]. They read \(\mathcal{L}_{\hat{x}}\rho(t)=\sum_{j,k>j}\Gamma^{jk}_{\hat{x}}\bar{n}(\Delta_{ kj},T)\mathcal{D}[|k\rangle\langle j|\rho(t)+\sum_{j,k>j}\Gamma^{jk}_{\hat{x}}(1+\bar{n}( \Delta_{kj},T))\mathcal{D}[|j\rangle\langle k|]\rho(t)\) for \(\hat{x}=\hat{a},\hat{\sigma}^{-}\) with \(\mathcal{D}[\mathcal{O}]\rho(t)=\frac{1}{2}(2\mathcal{O}\rho(t)\mathcal{O}^{ \dagger}-\rho(t)\mathcal{O}^{\dagger}\mathcal{O}-\mathcal{O}^{\dagger}\mathcal{ O}\rho(t))\). The relaxation coefficients \(\Gamma^{jk}_{\hat{x}}=2\pi d(\Delta_{kj})\alpha^{2}_{\hat{x}}(\Delta_{kj})|C^{\hat{x}}_{jk}|^{2}\) with \(d(\Delta_{kj})\) being the spectral density of the baths, \(\alpha_{\hat{x}}(\Delta_{kj})\) denoting the system-bath coupling strength, \(\Delta_{kj}=\omega_{k}-\omega_{j}\), and \(C^{\hat{x}}_{jk}=-i\langle j(|\hat{x}-\hat{x}^{\dagger})|k\rangle\). \(\bar{n}(\Delta_{kj},T)=[\exp(\Delta_{kj}/T)-1]^{-1}\) is the mean number of quanta in a mode with frequency \(\Delta_{kj}\) and temperature \(T\) (Boltzmann constant \(k_{B}\)=1). When considering a cavity coupling to the momentum quadrature of a field in one-dimension waveguides, the spectral density \(d(\Delta_{kj})\) is constant and \(\alpha^{2}_{\hat{x}}(\Delta_{kj})\propto\Delta_{kj}\). Then the relaxation coefficients reduce to \(\Gamma^{jk}_{\hat{x}}=\gamma_{c}\left(\Delta_{kj}/\omega_{0}\right)|C^{\hat{x}} _{jk}|^{2}\) where \(\gamma_{c}\) is the standard damping rate. These assumptions can be realized in circuit-QED [46] or trapped ion system [11; 12; 13]. The influence of dephasing and Lamb shifts in current experiments can be deemed negligible as they do not exert significant influence [11; 12; 13]. ### Dynamics in Quench The concept of dynamical quantum phase transition has emerged from the analogy between an equilibrium partition function and the return probability in many-body unitary dynamics [47]. This expansion of criticality to non-stationary scenarios involves sudden changes in the macroscopic properties of quantum systems over time. Dynamical phase transition in quantum systems is usually investigated by quench [47; 48; 49]. However, quench is not limited to investigate dynamical quantum phase transition. 
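As a minimal illustration of this quantity, the sketch below evaluates \(G(t)\) and \(f(t)\) for a quench of the coupling strength in a closed (unitary) system, neglecting the dissipation described by Eq. (4); the Fock-space truncation and the quench values are assumptions chosen for illustration, and \(N\) is taken as 1 for this two-component model.

```python
import numpy as np

def rabi_h(wc, wq, g, n_fock=150):
    """Compact builder for the matrix of Eq. (1), cavity tensor qubit ordering."""
    a = np.diag(np.sqrt(np.arange(1, n_fock)), k=1); ad = a.conj().T
    sz = np.diag([1.0, -1.0]); sm = np.array([[0.0, 0.0], [1.0, 0.0]])
    return (wc * np.kron(ad @ a, np.eye(2))
            + 0.5 * wq * np.kron(np.eye(n_fock), sz)
            + g * np.kron(a + ad, sm + sm.conj().T))

def return_rate(H0, H1, times):
    """f(t) = -ln|<psi_g0| exp(-i H1 t) |psi_g0>| for the quench H0 -> H1 (unitary, N = 1)."""
    _, vecs0 = np.linalg.eigh(H0)
    psi0 = vecs0[:, 0]                            # pre-quench ground state
    vals1, vecs1 = np.linalg.eigh(H1)
    c = vecs1.conj().T @ psi0                     # amplitudes in the post-quench eigenbasis
    f = []
    for t in times:
        G = np.abs(np.sum(np.abs(c) ** 2 * np.exp(-1j * vals1 * t)))
        f.append(-np.log(max(G, 1e-300)))
    return np.array(f)

if __name__ == "__main__":
    wc, wq = 0.1, 10.0
    H0, H1 = rabi_h(wc, wq, 0.35), rabi_h(wc, wq, 0.65)   # quench across the critical point
    t = np.linspace(0.0, 100.0, 400)
    f = return_rate(H0, H1, t)
    print("max return rate:", f.max())
```

The overlap is computed by expanding the pre-quench ground state in the post-quench eigenbasis, so no explicit time propagator is needed; including the cavity and qubit damping of Eq. (4) would instead require propagating the density matrix with a master-equation solver.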
The behavior of physical quantities in quench can be used to reveal other intriguing physics. In this work, we would check the behaviors of the physical quantities mentioned above in quench to examine whether they can be used to reveal the quantum phase transition. The return rate is usually employed in exploring dynamical phase transition \(f(t)=-\frac{1}{N}\ln G(t)\)[47; 48; 49]. This quantity behaves non-analytically versus time when the Loschmidt overlap \(G(t)\) vanishes. Here the Loschmidt overlap \(G(t)\) is defined as \(G(t)=|\langle\psi_{g_{0};0}|e^{-iH^{\prime}t}|\psi_{g_{0};0}\rangle|\). \(|\psi_{g_{0};0}\rangle\) denotes the initial state with coupling strength \(g_{0}\) in the Hamiltonian. This quantity measures the overlap between the time evolved state \(e^{-iH^{\prime}t}|\psi_{g_{0};0}\rangle\) and the initial state \(|\psi_{g_{0};0}\rangle\) following a sudden change of the Figure 3: (\(a_{1}\)) and (\(a_{2}\)) show the ground state and Wigner function of cavity mode when coupling \(g=0.2\) and (\(b_{1}\)). (\(b_{2}\)) are those when coupling \(g=0.7\) when \(\omega_{c}=0.1\). The insets in (\(a_{1}\)) and (\(a_{2}\)) show the corresponding ground state of the qubit. parameter \(g\) in the post-quench Hamiltonian, namely, \(H(g_{0})\xrightarrow{quench}H(g^{\prime})\) in this work. To employ this quantity reflecting the phase transition, we check the behavior of \(f(t)\) as shown in Fig. 4(a) when the coupling strength changes suddenly from \(g_{0}\) to \(g^{\prime}=g_{0}+0.3\). It can be seen that the cusps appear when the coupling strength quenches across the critical point, namely \(g_{0}\in[0.2,0.5]\). Fig. 4(b)-(f) show the dynamics of \(\langle\hat{X}^{-}\hat{X}^{+}\rangle\), QFI, entanglement witness \(|E^{T}|\), mutual information \(I^{M}\) and the minimum variance \(V_{m}^{a}\) versus the initial coupling strength \(g_{0}\) in the quench \(H(g_{0})\xrightarrow{quench}H(g_{0}+0.3)\). In this manner of quench, there are three parameter regions, namely \(g_{0}<0.2\); \(g_{0}\in[0.2,0.5]\); \(g_{0}>0.5\), with distinct dynamical characters. It depends on whether the coupling strength \(g\) quench across the critical point, namely \(g_{c}=0.5\), or not for all these quantities. It is interesting to notice that the behavior of \(\langle\hat{X}^{-}\hat{X}^{+}\rangle\) and \(V_{m}^{a}\) are similar when \(g_{0}<g_{c}\). \(|E^{T}|\) and \(I^{M}\) behave similarly versus \(g_{0}\) although their formalism are different obviously. This hints resemblance between these quantities. These characters of dynamics provide avenues to check such a phase transition in experiments by observing the dynamics of physical quantities. And it suggests a dynamical manner to obtain quantum resources. To gain insight into the dynamics of the states during the quench, we check examples of the Wigner function for states when \(g_{0}=0.35\) changes to \(g^{\prime}=0.65\) abruptly in Fig. 5 at zero temperature. Negative scars in the Wigner function indicate the states being non-classical starting from the vacuum. Such nonclassical states prefer redundancy encoding in quantum information [44]. Figure 6: The dynamics of the quantities when the coupling strength \(g\) changes suddenly from \(0.35\) to \(0.65\) versus temperature \(T\). The other parameters are same to those in Fig. 4. Figure 5: The dynamics of the QFI when the coupling strength \(g\) changes suddenly from \(g_{0}=0.35\) to \(g^{\prime}=0.65\) at temperature \(T=0\). 
(\(a_{1}\))-(\(a_{4}\)) are the Wigner functions of the cavity mode at \(t\)=10, 35, 55, and 95, respectively. \(\gamma_{c}\)=\(\gamma_{q}\)=\(0.01\omega_{c}\). The other parameters are the same as those in Fig. 4.

Figure 4: The dynamics of the quantities when the coupling strength \(g\) changes suddenly from \(g_{0}\) to \(g^{\prime}=g_{0}+0.3\). (\(a\)) The return rate \(f(t)\). (\(b\)) \(\langle\hat{X}^{-}\hat{X}^{+}\rangle\). (\(c\)) The quantum Fisher information. (\(d\)) The absolute value of the summation of the negative eigenvalues of the partial transposed density matrix. (\(e\)) The mutual information. (\(f\)) The minimum variance of the cavity field quadrature. \(\gamma_{c}\)=\(\gamma_{q}\)=\(0.01\omega_{c}\).

## V Influence of temperature of the environment

In the results above, a zero-temperature environment is assumed. However, temperature is indeed a factor that should be considered when a cavity interacts with atoms [18]. It is necessary to examine the influence of temperature on the dynamics. In Fig. 6, we plot the dynamics of the quantities mentioned in Sec. IV above versus the temperature. It is clear that, as the temperature \(T\) increases, the characteristic bends in \(f(t)\), \(\langle\hat{X}^{-}\hat{X}^{+}\rangle\), QFI, and the variance \(V_{m}^{a}\), as well as the correlations reflected by \(|E^{T}|\) and \(I^{M}\), tend to disappear and become gentle during the evolution. According to the master Eq. (4), higher temperature leads to more intense incoherent dissipation and driving. These are negative factors for quantum resources and for the application of these quantities as indicators of the phase transition. This means that low temperature benefits revealing the phase transition by the dynamics.

## VI Potential experimental systems

In this work, we consider the phase transition of the Rabi model with the coupling strength varying across the weak and strong regimes, and reveal the phase transition by the dynamics of several quantities. There are different quantum platforms to check these results. For instance, within a single trapped ion system, it is feasible to adjust the coupling strength between the atom and the Boson mode to achieve the Rabi model. This provides an opportunity to investigate the phase transitions and the dynamics in the ultrastrong and deep strong-coupling regimes, overcoming the limitations of the rotating-wave approximation [11; 12]. The entanglement and correlation between the Boson mode and the two-level system can be detected in such models. A similar phase transition can be illustrated by employing a \({}^{171}\)Yb\({}^{+}\) ion within a Paul trap through the adiabatic adjustment of the coupling strength between the ion and its spatial mode, without any thermodynamic constraints [13]. In addition to trapped ion systems, the strong-coupling regime can also be achieved in circuit quantum electrodynamics setups, where superconducting artificial atoms are coupled to on-chip cavities [10] or coupled to the electromagnetic continuum of a one-dimensional waveguide [26]. These systems offer the tunability that expands the capabilities of quantum optics, enabling exciting investigations into ultrastrong interactions between light and matter. Yet, the quantum Rabi model in the ultra-strong coupling regime can be realized by a superconducting circuit embedded in a cavity QED setup [25].
Through the coupling of a flux qubit and an LC oscillator using Josephson junctions, it is possible to realize the circuits Rabi model with a wide range of coupling strengths [27]. This enables the exploration of the ground-state phase transition and entanglement mentioned in our work. Quantum simulation offers various avenues to realize the Rabi model in diverse systems. One proposal is using a circuit quantum electrodynamics chip with moderate coupling between a resonator and transmon qubit to achieve precise digital quantum simulation of deep-strong coupling dynamics [28]. This proposal will enable exploration of extreme coupling regimes and quantum phase transitions as mentioned in our work. A practical implementation of a photonic analog simulator for the quantum Rabi model has been achieved using femtosecond laser-written waveguide superlattices. This advancement offers a tangible experimental platform for investigating the intricate physics of light-matter interaction in the deep strong coupling regime [29]. In these potential platforms, tuning the coupling quickly is necessary to realize quench. ## VII Conclusion In addition to traditional indicators such as oscillator occupation and qubit excitation, we first show that the quantum Fisher information, qubit-oscillator entanglement, mutual information, and the variance of the cavity field quadrature display minimal values below the transition point. However, when the coupling strength is tuned across the quantum critical point, these quantities undergo swift and substantial growth. This transition is attributed to the degeneracy between the ground state and the first excited state. The physical quantities display singular behavior when the ratio of the frequencies of oscillator and the qubit approaches zero. This behavior is analogous to approaching closer to a thermodynamic limit in superradiant phase transition. Nonclassical Schrodinger cat state is revealed by Wigner function in the superradiant phase. Then we examined the dynamics of the Rabi model to investigate the probability of indicating the quantum phase transition by dynamics. The quantities used to witness the phase transition in the ground state all behave differently depend on whether it quenches across the critical point. It offers avenues to reveal the phase transition by quantities in non-equilibrium process. And it is shown that temperature is poisonous to applying this quench method result from the incoherent process. There are platforms can be considered to realize our work, for example, the trapped ion system, circuit quantum electrodynamics setups like superconducting artificial atoms coupled Boson modes, quantum simulation using circuit quantum electrodynamics chip or femtosecond laser-written waveguide superlattices. These systems allow the researchers to vary the experimental parameters and study their influence on the phase transition. ###### Acknowledgements. X. L. Zhao thanks H. J. Xing for helpful discussions and National Natural Science Foundation of China, No.12005110 and Natural Science Foundation of Shandong Province, China, No.2R2020QA078.
2309.13425
MiliPoint: A Point Cloud Dataset for mmWave Radar
Millimetre-wave (mmWave) radar has emerged as an attractive and cost-effective alternative for human activity sensing compared to traditional camera-based systems. mmWave radars are also non-intrusive, providing better protection for user privacy. However, as a Radio Frequency (RF) based technology, mmWave radars rely on capturing reflected signals from objects, making them more prone to noise compared to cameras. This raises an intriguing question for the deep learning community: Can we develop more effective point set-based deep learning methods for such attractive sensors? To answer this question, our work, termed MiliPoint, delves into this idea by providing a large-scale, open dataset for the community to explore how mmWave radars can be utilised for human activity recognition. Moreover, MiliPoint stands out as it is larger in size than existing datasets, has more diverse human actions represented, and encompasses all three key tasks in human activity recognition. We have also established a range of point-based deep neural networks such as DGCNN, PointNet++ and PointTransformer, on MiliPoint, which can serve to set the ground baseline for further development.
Han Cui, Shu Zhong, Jiacheng Wu, Zichao Shen, Naim Dahnoun, Yiren Zhao
2023-09-23T16:32:36Z
http://arxiv.org/abs/2309.13425v2
# MiliPoint: A Point Cloud Dataset for mmWave Radar ###### Abstract Millimetre-wave (mmWave) radar has emerged as an attractive and cost-effective alternative for human activity sensing compared to traditional camera-based systems. mmWave radars are also non-intrusive, providing better protection for user privacy. However, as a Radio Frequency (RF) based technology, mmWave radars rely on capturing reflected signals from objects, making them more prone to noise compared to cameras. This raises an intriguing question for the deep learning community: _Can we develop more effective point set-based deep learning methods for such attractive sensors?_ To answer this question, our work, termed _MiliPoint2_, delves into this idea by providing a large-scale, open dataset for the community to explore how mmWave radars can be utilised for human activity recognition. Moreover, MiliPoint stands out as it is larger in size than existing datasets, has more diverse human actions represented, and encompasses all three key tasks in human activity recognition. We have also established a range of point-based deep neural networks such as DGCNN, PointNet++ and PointTransformer, on MiliPoint, which can serve to set the ground baseline for further development. Footnote 2: Available at [https://github.com/yizzfz/MiliPoint/](https://github.com/yizzfz/MiliPoint/) ## 1 Introduction In modern systems, sensors play a vital role in allowing intelligent decision-making [13; 5]. Millimetre-Wave radar (mmWave radar) is often employed in automotive, industrial and civil applications. This type of sensor is particularly advantageous as it offers a good balance between resolution, accuracy, and cost [7; 15]. In this work, we focus on exploring the potential of mmWave radars as sensors for human activity sensing. Despite the high accuracy of camera-based systems demonstrated for various tasks in this domain [27; 3], their intrusive nature has raised considerable concerns in terms of user privacy. The utilization of Radio-Frequency (RF) signals for human activity analysis presents an attractive alternative due to their non-intrusive nature. When compared with traditional low frequency RF sensors, like WiFi and Bluetooth, mmWave radars can utilize a much higher bandwidth and achieve a finer resolution. Together with the multiple-input multiple-output (MIMO) technique, mmWave radars can serve as 3D imaging sensors and enable advanced human activity recognition tasks to be performed. Meanwhile, the short wavelength of mmWave signals facilitates the development of a small-factor and low-cost sensor. However, as a RF-based technique, mmWave radars rely on the reflected signal phase from an object to detect its spatial feature, which can be prone to noise and is less accurate than cameras and lidars. A comparison between mmWave radars and other commonly seen sensors is shown in Table 1. As shown, mmWave radar is a cost-effective, non-intrusive sensing solution that can be advantageously used in various sensing scenarios. Researchers have demonstrated the effectiveness of mmWave radar in many human activity sensing tasks. However, the varying operation conditions and task specifications of radar-based human pose estimation make comparisons between existing methods and evaluations of their generalizability challenging. For instance, single person identification is the focus of Zhao _et al._[31], while Pegoraro _et al._[16] show mmWave radars can concurrently identify up to three people. 
Sengupta _et al._[20] concentrate on differentiating human arm motions with fixed-location subjects, whereas An _et al._[1] cover 12 actions which showcase a variety of human postures, and the number of samples can span from a few thousand to approximately 160k. In terms of hardware, a single-chip \(77\,\mathrm{GHz}\) radar with an integral transmitter and receiver is used in various research [31]; nevertheless, two radars [6, 20] or \(60\,\mathrm{GHz}\) radar with separate transmitters and receivers [10] are also evaluated by researchers. Furthermore, parameters like the radar chirp configuration, which can have a major impact on the detection result, have been neglected by many existing studies. This study presents the development of _MiliPoint_, a standardised dataset, designed for the facilitation of future research in this domain, enabling researchers to make cross-comparisons in a uniformed framework. In this paper, we make the following contributions: * We introduce the _MiliPoint_ dataset, which includes three main tasks in human activity recognition: identification, action classification and keypoint estimation. * \(4.08\times\) more than the most action-diverse dataset - and 545K frames of data, \(3.26\times\) greater than the largest dataset in existence. * We implemented and tested the performance of existing point-based DNNs on MiliPoint, and found that action classification is a particular challenging task, compared to identity classification and keypoint estimation. ## 2 Related Work We begin by introducing the mechanics of millimeter wave sensing in Section 2.1. Section 2.2 surveys existing mmWave datasets and elucidates how MiliPoint differs from them. Following this, Section 2.3 outlines the popular deep neural network (DNN) models proposed for 3D point sets. ### Millimeter Wave Sensing A mmWave signal refers to an electromagnetic signal between \(30\,\mathrm{GHz}\) to \(300\,\mathrm{GHz}\) that has a wavelength of sub \(1\,\mathrm{cm}\). Signals at this frequency band can have a much larger bandwidth (a few gigahertz) than the traditional RF signals, which make them very suitable for short-range radar applications as the resolution of a radar is directly determined by its signal bandwidth. Meanwhile, the short wavelength allows many antennas to be integrated into a single small-factor platform, enabling it to determine the angle-of-incident of the signal refection and depict the 3D spatial feature of the scene. Although it is less accurate than 3D cameras and lidars, mmWave radars still offer several distinct advantages such as cost-effectiveness, non-intrusiveness, and lack of reliance on various viewing conditions. All these features give mmWave radar an increased popularity in human activity sensing. mmWave radars often use frequency modulated continuous wave (FMCW) to detect objects in the scene. Figure 1 presents the workflow of a typical mmWave Radar. The radar transmits millimeter wave signals. The object in front of the sensor then reflects the signal back and the signal is picked \begin{table} \begin{tabular}{l c c c c} \hline \hline Sensor type & 3D camera & Lidar & Traditional RF & mmWave Radar \\ \hline Cost & Medium & High & Low & Low \\ Intrusiveness & High & Medium & Low & Low \\ Resolution & High & High & Low & Medium \\ Viewing condition requirement & High & Medium & Low & Low \\ \hline \hline \end{tabular} \end{table} Table 1: A comparison of different sensors, mmWave radar is a cost-effective, non-intrusive sensor compared to other solutions. 
up by the receiver. The distance and angle of the object are encoded in the frequency and phase of the reflected signal. Following this, the on-chip data processing unit mixes and applies a low-pass filter to the signal to produce an Intermediate Frequency (IF) signal. Two Fast Fourier Transforms (FFTs) are then applied on this mixed signal, before a Constant False Alarm Rate algorithm is used for peak detection. This, together with the FFT for the angle, provides the user with a data packet that contains the 3D coordinates of the objects in the scene.

Figure 1: An illustration of how a typical mmWave radar works. The radar has several transmitters (TX) and receivers (RX) for transmitting and collecting the reflected signals. These signals are then mixed and filtered to form an Intermediate Frequency (IF) signal. Subsequent to this, three Fast Fourier Transforms (FFTs) are implemented on the range, velocity, and angle domains. A Constant False Alarm Rate (CFAR) algorithm is also utilized to detect potential peaks from the FFT outputs. Eventually, the \((x,y,z)\) coordinates of the objects in the metric space are acquired.

### Existing mmWave Datasets

Although many mmWave radar frameworks have been proposed in the human activity recognition literature, only a few researchers have released their datasets publicly. These are summarized in Table 2. Existing datasets focus primarily on a single task, with a majority being devoted to keypoint estimation. Meanwhile, CubeLearn [30] and RadHAR [22] are two datasets specifically designed for action classification. Previous datasets have limited the number of frames collected, with the largest datasets, mRI [1] and RadHAR [22], containing a meagre 160K frames. Additionally, the range of human actions included is not extensive, with the greatest total being 12 in the mRI dataset [1].

\begin{table} \begin{tabular}{l c c c c} \hline \hline Dataset & Task & Participants & Dataset size & Action involved \\ \hline mmPose [20] & K & 2 & 15k & 4 \\ MARS [2] & K & 4 & 40k & 10 \\ HuPR [12] & K & 6 & 141k & 3 \\ mRI [1] & K & 20 & 160k & 12 \\ CubeLearn [30] & A & 8 & 1k & 6 \\ RadHAR [22] & A & 2 & 167k & 5 \\ \hline MiliPoint & A,I,K & 11 & 545k & 49 \\ \hline \hline \end{tabular} \end{table} Table 2: A comparison to existing mmWave datasets (A=Action classification, I=Identification, K=Keypoint estimation). Our dataset, MiliPoint, is far more diverse in both tasks and actions, and also has a much larger dataset size.

Our work is the first mmWave dataset that includes all three main tasks in human activity recognition: identification, action classification, and keypoint estimation. It also fills a critical gap in terms of size and diversity, with 11 participants performing a total of 49 different actions across 545k frames. This provides a more comprehensive picture of human movements than has ever before been possible for mmWave radar sensing.

### Point-based Neural Networks

Point clouds, composed of 3D points representing an object's shape, are commonly used in computer graphics and 3D sensing [19]. Graph neural networks (GNNs) process point clouds directly as individual points, rather than as voxels [4] or multi-view images [23; 28]. The unordered point sets can be treated as nodes and used as inputs for a machine learning system. PointNet [17] and PointNet++ [18] use sampling to reduce high-dimensional unordered points in the metric space into fixed-length feature vectors, and deep neural networks to process these features.
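To make this set-based processing idea concrete, the snippet below gives a minimal sketch of a permutation-invariant point encoder in PyTorch: a shared per-point MLP followed by symmetric max pooling, which is the core idea behind PointNet-style feature extraction. The layer widths, class count and module names are illustrative assumptions, not the architectures benchmarked later in this paper.

```python
import torch
import torch.nn as nn

class TinyPointEncoder(nn.Module):
    """Minimal PointNet-style encoder: shared per-point MLP + max pooling.

    Illustrative only; layer widths, class count and head are assumptions,
    not the architectures benchmarked in this paper.
    """

    def __init__(self, num_classes: int = 11, feat_dim: int = 128):
        super().__init__()
        # The same MLP is applied to every point, so the pooled output is
        # invariant to the ordering of the points in the set.
        self.point_mlp = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, feat_dim), nn.ReLU(),
        )
        self.head = nn.Linear(feat_dim, num_classes)

    def forward(self, points: torch.Tensor) -> torch.Tensor:
        # points: (batch, num_points, 3) radar point cloud
        per_point = self.point_mlp(points)     # (batch, num_points, feat_dim)
        global_feat, _ = per_point.max(dim=1)  # symmetric pooling over the set
        return self.head(global_feat)          # (batch, num_classes)

# Example: a batch of 8 frames, each padded or sampled to 64 points.
logits = TinyPointEncoder()(torch.randn(8, 64, 3))
print(logits.shape)  # torch.Size([8, 11])
```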
Given it is a natural abstraction to view a set of points as a graph [21; 25; 26], DGCNN employs EdgeConv to derive local neighbourhood information, which can then be stacked to comprehend global features [26]. With the increased use of self-attention modules for Natural Language Processing [24], Zhao _et al._ applied this computation pattern to point clouds in their method, Point Transformer [29]. Ma _et al._ shed a new perspective on this problem; rather than constructing a complex network architecture, they developed a simple residual Multi-Layer-Perceptron (MLP) network - termed PointMLP - that requires no intricate local geometrical extractors yet yields very competitive results [14]. In this work, we create a benchmark utilising several representative networks including PointNet++ and PointMLP (purely point-based), DGCNN (GNN based), and PointTransformer (Transformer-based).

## 3 Dataset

In this section, we provide a detailed overview of the dataset collection and construction process. Section 3.1 outlines the data collection procedure, Section 3.2 describes participant recruitment, Section 3.3 discusses the associated tasks and their respective specifications, and Section 3.4 explains the data processing.

### Data Collection

We conducted an in-person data collection in which participants were asked to perform a series of low-intensity cardio-burning fitness movements 4. The exercise video was chosen with meticulous consideration given to factors such as intensity, diversity of movements, and movement speed. The video lasts around 30 minutes with 49 different actions; each action lasts 30 seconds with a 10-second break in between. The participants are kept anonymous to protect their privacy, and our released data consists purely of point clouds from our mmWave sensor and ground truth keypoints, with no imagery content. The information captured by the camera is used to calculate the keypoints, and the original video is immediately discarded to ensure the continued protection of participant privacy.

Footnote 4: [https://www.youtube.com/watch?v=cZu9u_jodyU](https://www.youtube.com/watch?v=cZu9u_jodyU)

We present the physical data collection setup in Figure 2, which shows how the mmWave radar, Zed 2 stereo camera, and monitor are assembled. The human participants are instructed to stand in front of both the mmWave and stereo camera sensors, and follow the movements displayed on the monitor. The mmWave radar is connected to power and its output data is transmitted through a serial port to our workstation. The stereo camera is placed behind the mmWave radar, but configured to be at a different height. This setup has been verified to yield a high-quality stream of frames.

Figure 2: Front and back view of the data collection setup. (1) a mmWave radar, (2) a Zed 2 stereo camera, (3) a monitor displaying movement for the participant to follow, and (4) is the designated area for the participant to stand. An overview of the mmWave radar board is shown in (c).

As illustrated in Figure 1, we use an on-the-fly data processing approach with the mmWave radar chip to obtain data packets at our workstation. These data packets contain information about the points \((x,y,z)\), which are represented by a dataset \(d\in\mathbb{R}^{N\times 3}\), where \(N\) indicates the number of points. We used the TI IWR1843 mmWave radar, a commercial off-the-shelf radar that has received great popularity among researchers due to its 3D imaging capability and processing power.
The radar operates between \(77\,\mathrm{GHz}\) to \(81\,\mathrm{GHz}\), has three transmitters and four receivers that operate in a time-division multiplexing mode, and has an on-chip DSP processor that applies the described data processing and outputs point clouds to the workstation. The radar was configured to have a chirp time of \(100\,\mathrm{us}\) and a chirp slope of \(40\,\mathrm{MHz}/\mathrm{us}\), to utilize the full \(4\,\mathrm{GHz}\) available bandwidth and achieve a range resolution of \(4\,\mathrm{cm}\). The ADC sampling rate was set to \(5\,\mathrm{MHz}\). The CFAR threshold was empirically set to \(10\,\mathrm{dB}\) in both the range and Doppler direction, which gives a reasonable number of points per frame in our experimental environment. We utilized the Zed 2 Stereo Camera for producing ground truth data on the keypoint estimation task. The stereo camera calculates the disparity between two views to give a depth map of the scene, and applies a posture estimating neural network to get 3D skeleton models of people in the scene. Given the camera parameters, the 3D coordinates of the skeleton with respect to the camera can be calculated through simple trigonometry. The Zed Camera System offers an impressive depth accuracy of less than \(1\%\) up to \(3\) meters and less than \(5\%\) up to \(15\) meters. While high-end industrial level optical tracking systems, such as the OpticTrack system and their Motion Capture Suits, may provide a more precise baseline, we found that the Zed 2 Camera already offers a very strong performance. Figure 2 shows the exact experimental setup for the data collection. A mmWave radar (1) is placed in front of the participant's designated area (4) and behind the radar is a monitor displaying the movement for the participant to follow (3), and a Zed 2 Stereo camera (2). The area (4) is set to \(1\,\mathrm{m}\) by \(1\,\mathrm{m}\). The distances from the radar and camera to the area centre are \(0.65\,\mathrm{m}\) and \(3\,\mathrm{m}\), and the heights are \(1\,\mathrm{m}\) and \(0.7\,\mathrm{m}\), respectively. The positions are chosen to avoid occluding as much as possible. During data collection, the radar data and camera images are timestamped and synchronized based on their time-of-arrival to the workstation, at 24 frames per second. After data acquisition, the camera data is calibrated to the radar coordinate system to serve as the ground truth. ### Participant Recruitment A total of 11 participants were recruited through university emailing lists and word-of-mouth, with 4 females and 7 males. The average height and weight of the participants were \(171.84\,\mathrm{cm}\pm 10.41\) and \(67.73\,\mathrm{kg}\pm 13.08\), respectively. All participants had none mobility impairments. All participants were given information explaining the nature and purpose of the procedures involved in this study and signed a consent form before starting the experiment. The study was approved by the Faculty of Engineering Research Ethics Committee, University of Bristol. ### Tasks The next step after data collection is to design tasks. In this case, three tasks are established, which are: _identification_, _keypoint estimation_, and _action classification_; an overview is presented in Figure 3. The process of identification involves analyzing the collected data in order to discriminate between unique individuals. This requires making comparisons between various characteristics. 
In doing so, the DNN model is expected to be capable of recognizing specific traits that are associated with particular individuals. In our identification task, the output labels are integers ranging from \(0\) to \(10\) which correspond to the \(11\) unique participants. Action classification requires the recognition of behaviour patterns. Our raw data gathered by the mmWave radar can be broken down into sets of frames, each of which is annotated with an action, as detailed in the Appendix. This segmentation of data greatly facilitates the recognition of actions. Finally, keypoint estimation involves detecting interest points or key locations in the input data, which typically involves identifying various keypoint landmarks on a human body. The detection labels each frame's points according to their position, size, and orientation, allowing for the development of a better understanding of human posture from the input data. We designed two tasks for keypoint estimation with varying levels of difficulty. The first task requires detection of keypoints from the human body, including 'Right Shoulder', 'Right Elbow', 'Left Shoulder', 'Left Elbow', 'Right Hip', 'Right Knee', 'Left Hip', 'Left Knee' and 'Head'. The second task presents a challenge by requiring detection of additional keypoints, namely 'Nose', 'Neck', 'Left Wrist', 'Left Ankle', 'Left Eye', 'Left Ear', 'Right Wrist', 'Right Ankle', 'Right Eye' and 'Right Ear'. Notably, 'Head' is excluded due to the finer granularity of facial keypoints.

### Data Processing Pipeline

The mmWave radar produces data packets in the form of point clouds that encode the spatial shape of the subject. The number of points in each data packet depends on the scene and can vary from a few points to a few hundred. The number of points at each frame is not constant since it depends on the instantaneous signal reflection from the subject. To make the input size consistent across frames, we set an upper limit \(k\) on the point cloud population in each packet. Point clouds with more points are randomly sampled down to \(k\) and point clouds with fewer points are zero-padded. This is equivalent to a data frame \(d\in\mathbb{R}^{k\times 3}\). To create a single data point, we then stack \(s\) consecutive frames, forming a data point \(d\in\mathbb{R}^{s\times k\times 3}\). We process the collected data for each participant, thus providing labels for the identification task. The ground truth for both keypoint estimation tasks is derived from the Zed 2 detection results, which serve as the reference for the mmWave radar sensor. The action labels at each timestamp are derived from the video content and are synchronized to the collected data, as the participants were instructed to always follow the action in the video. We also manually scrutinized and discarded incorrect labels when the participants failed to follow the video.

## 4 Evaluation

We first explain our setup in Section 4.1. Section 4.2 then shows how various point-based DNN models perform on MiliPoint and how an important hyperparameter, the number of stacked frames, is picked for each task.

### Experiment Setup

To assess the usability of our dataset, we ran several representative point-based deep neural networks (DNNs) with a split of \(80\%\), \(10\%\) and \(10\%\) for the training, validation, and testing partitions, respectively. All the models shown in the evaluation are implemented in Pytorch and Pytorch Geometric [8].
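Before training, each sample is prepared as described in Section 3.4; the sketch below illustrates how a variable-sized radar frame could be randomly sampled or zero-padded to \(k\) points and how \(s\) consecutive frames could be stacked into a single \((s,k,3)\) data point. The function names and the value of \(k\) here are illustrative assumptions, not the exact preprocessing code released with MiliPoint.

```python
import numpy as np

def pad_or_sample(frame, k):
    """Return a (k, 3) array: randomly subsample a frame with too many points,
    zero-pad a frame with too few (mirroring the pipeline in Section 3.4)."""
    n = frame.shape[0]
    if n >= k:
        idx = np.random.choice(n, k, replace=False)
        return frame[idx]
    out = np.zeros((k, 3), dtype=frame.dtype)
    out[:n] = frame
    return out

def stack_frames(frames, k):
    """Stack s consecutive frames into one data point of shape (s, k, 3)."""
    return np.stack([pad_or_sample(f, k) for f in frames])

# Example: s = 5 raw frames, each with a varying number of detected points.
raw = [np.random.randn(np.random.randint(10, 200), 3) for _ in range(5)]
x = stack_frames(raw, k=64)
print(x.shape)  # (5, 64, 3)
```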
These models are trained mainly on two hardware systems. System one has 4 NVIDIA RTX2080TI cards, while system two has 2 NVIDIA RTX3090TI cards. Running all networks on all downstream tasks took around 300 GPU hours. The Adam optimizer [11] is used together with a CosineAnnealing learning rate schedule [9], and the learning rate is set to \(3e^{-5}\). Each data point is run three times with different random seeds to calculate its average and standard deviation values. We set the stacking to \(s=5\) for identification and keypoint estimation, but \(s=50\) for action classification. We further justify these hyperparameter choices in Section 4.2 and also in our Appendix.

Figure 3: The three tasks: identification, keypoint estimation, and action classification. We show the raw radar point cloud on the first row and expected predictions on the second row.

### Results

We present the results of different point-based methods on MiliPoint in Table 3. A row labelled _Random_ is also included to show the random guess accuracy for the various classification tasks. It is noteworthy that keypoint estimation is evaluated by means of Euclidean distances to the ground truth, and thus a lower value signifies better performance. The _Random_ results for identification and action classification are calculated from the number of labels. For keypoint estimation, we employ models using randomised weights and record their results across three distinct random seeds. We report Top1 accuracy for identification, both Top1 and Top3 accuracy for action classification, and mean localization error (MLE) for keypoint estimation.

We evaluate four different point-based DNN methods, namely DGCNN [26], Point Transformer [29], PointNet++ [18] and PointMLP [14]. The results presented in Table 3 indicate that point-based methods can perform quite effectively for identity classification, achieving an accuracy of greater than \(75\%\) across all DNNs evaluated. Conversely, action classification appears to be much more challenging, with the highest accuracy recorded being below \(40\%\). Action classification is a challenging task, as it requires constructing semantic meaning from a sequence of frames, and this is especially challenging when the point cloud data is sparse and noisy. We chose to stack 50 frames for this task, since an action typically takes one to two seconds, and our frame rate is 24 frames per second. It is apparent that certain point-based methods perform better than others; this is evident in Table 3, where PointNet++ and PointMLP have outperformed the other methods in the MiliPoint benchmark.

As mentioned earlier, the stacking choices for these tasks are different. With the present framework, the stack will pile up contiguous frames both before and after the current frame. Since our frame rate is 24 frames per second, the action classification task naturally requires a higher stacking value \(s\). The results in Figure 4 demonstrate that there is a plateau effect, indicating that when the stacking number \(s\) reaches a certain limit, it ceases to contribute to the network's final performance. As indicated in Figure 4, we found that PointNet++ performs the best when \(s=5\) and \(s=50\) on identification and action classification, respectively. Following a few manual experiments, we found that \(s=5\) produces optimal results for both keypoint estimation and identification tasks, while \(s=50\) is superior for action classification.
It is worth noting that higher \(s\) values require more computing and memory resources when training the network, so we summarise all the 'turning points' of the plateaus and report them in Table 4 in our Appendix.

\begin{table} \begin{tabular}{l c c c c c} \hline \hline \multirow{2}{*}{Model} & \multirow{2}{*}{_Iden (Acc\% \(\uparrow\))_} & \multicolumn{2}{c}{_Action (Acc\% \(\uparrow\))_} & \multicolumn{2}{c}{_Keypoint (MLE in \(cm\downarrow\))_} \\ \cline{3-6} & & Top1 & Top3 & 9 point & 18 point \\ \hline Random & \(7.69\) & \(2.59\) & \(7.69\) & \(155.74\pm 1.32\) & \(161.64\pm 2.11\) \\ \hline DGCNN & \(77.65\pm 0.92\) & \(13.61\pm 2.09\) & \(34.59\pm 2.74\) & \(16.53\pm 0.11\) & \(18.51\pm 0.03\) \\ Point Transformer & \(83.94\pm 0.81\) & \(29.27\pm 0.55\) & \(50.44\pm 1.18\) & \(14.99\pm 0.03\) & \(17.03\pm 0.13\) \\ PointNet++ & \(87.30\pm 0.27\) & \(34.45\pm 0.80\) & \(54.96\pm 1.21\) & \(13.55\pm 0.03\) & \(14.94\pm 0.03\) \\ PointMLP & \(95.88\pm 0.40\) & \(18.37\pm 0.08\) & \(35.94\pm 0.14\) & \(13.12\pm 0.30\) & \(14.11\pm 0.22\) \\ \hline \hline \end{tabular} \end{table} Table 3: Accuracy (Acc \(\uparrow\)) and mean localization error (MLE \(\downarrow\)) values for different point-based DNN methods running on our MiliPoint dataset. Iden, Action and Keypoint mean Identification, Action classification and Keypoint estimation respectively.

## 5 Discussion

In this paper, we introduce a novel mmWave radar dataset composed of three distinct tasks related to human activity sensing. Our high-quality dataset is expected to be a valuable training and evaluation resource for further research into point-based deep learning methods. We hope it will prove to be an instrumental asset to the field. However, in Section 5.2 we comprehensively address the limitations of our dataset, and in Section 5.3 we discuss potential future work that can resolve these limitations.

### Why mmWave Sensing?

We briefly explained the particular advantages of using mmWave radar in Section 1 and compared different sensors in Table 1. We expand on that comparison here, focusing on camera-based and lidar-based systems. When compared to camera-based or infrared systems, mmWave and lidar technology provide an attractive solution due to their non-intrusiveness and robustness under varying lighting and atmospheric conditions. The non-intrusive nature of these sensors provides a greater guarantee for user privacy. Atmospheric conditions, such as dust, smoke, and fog, present a formidable obstacle to visual sensors such as cameras. To contend with these issues, lidar and mmWave technologies provide a reliable solution [13] - explaining why lidar has also become a popular choice for autonomous driving applications. mmWave radar is an attractive option because it is relatively low-cost and is able to fit within small-form devices. Moreover, with regard to resolution, mmWave radar provides a finer resolution than other options such as microwave radar at the same range.

### Limitations

A major limitation of our dataset is that the data collection experiment was conducted using only one mmWave radar, whereas in reality, multiple radars can be deployed for the same task [6]. On the other hand, introducing additional radars into the data collection process would bring significant complications. The relative positions and angles between the radars can have a significant impact on the sensing quality, and there is also a risk that the radars may potentially interfere with one another.
The concern of interference naturally brings up another issue: our data collection is predominantly conducted in a relatively stable indoor environment. It is entirely possible that the outside world may contain more complex scenarios which can produce signals that significantly interfere with our sensor signal, thus compromising the quality of the sensing. Another major limitation is that we only consider a limited range of human movements, primarily those that focus on the limbs, such as hands and legs. As a result, we do not capture more complex postures that a human might take, such as sitting or lying down. Another particular problem with radar sensing is the multi-path effect, where multiple reflection paths of the RF signal cause noises and ghost targets in radar imaging. However, this issue is less significant in the mmWave frequency band when compared with the traditional UWB bands, making mmWave less sensitive to location or distance changes given that all the experiments are conducted in a clear line of sight and without any neighboring clutters. Meanwhile, the question of how to mitigate Figure 4: An illustration of the plateau effect with stacking more frames (\(s\)). We show this effect on two tasks, keypoint estimation (measured in MLE loss) and action classification (measured in Top1 accuracy). We generally pick the turning point as the optimal stacking, as this offers a good balance between performance and run-time efficiency. The detail is discussed in Section 4.2. the multipath effect in a complex and diverse environment is left as future work, as this on its own can be a huge research topic. Finally, the radar we used has three transmitters and four receivers that were originally designed for automotive driving applications, which has more azimuth antennas than elevation ones. This can potentially result in poor elevation resolution and affect the performance of certain actions when the height information is critical. Nevertheless, the sensor presently employed is the most widespread within the sector, in other words, it can be treated as a standardised sensor currently in this domain. A variation in the number of transmitters and receivers or their relative positioning necessitates rigorous cooperation and re-engineering on the device side, which is beyond the scope of this paper. ### Future Work One research direction is using the raw IF signal as the dataset input (See Figure 1). While the point cloud is an effective spatial representation of the subject motion, it is a high-level data representation derived from the IF signal, where a large proportion of information may have been discarded. This also brings up the research question that whether the data processing chain in Figure 1 is optimal or other signal processing techniques, like Capon beamforming rather than angle-FFT, can increase the accuracy of the radar point cloud and, hence, the performance of the proposed tasks. However, capturing and processing the IF signal requires a significantly higher data bandwidth and computation resources, and, therefore, is left as future work. Another area of potential future research involves utilising multiple radars for estimating human activity. Such a cooperative system would enable the collection of more comprehensive data; however, it is also sensitive to relative positions of each radar, creating the potential for interference resulting from the mmWave transmissions. 
The dataset currently focuses on the single-radar case, so the implications of a multi-radar system are left as an area for future exploration.

## 6 Conclusion

In this paper, we introduce MiliPoint, a dataset designed to systematically evaluate the performance of point-based DNNs on mmWave radar data. The goal of MiliPoint is to bridge the gap between this accessible mmWave sensor and various downstream tasks by providing a diverse yet systematic mmWave radar dataset. MiliPoint is the largest mmWave radar dataset assembled to date in terms of the number of frames collected, and it covers three primary downstream tasks: identification, action classification and keypoint estimation, with a diverse set of associated actions labelled. With this dataset, the research community can delve deeper into applying deep learning to advance the capabilities of mmWave radars.
2309.04159
Anti-phase synchronization in a population of swarmalators
Swarmalators are oscillatory systems endowed with a spatial component, whose spatial and phase dynamics affect each other. Such systems can demonstrate fascinating collective dynamics resembling many real-world processes. Through this work, we study a population of swarmalators where they are divided into different communities. The strengths of spatial attraction, repulsion as well as phase interaction differ from one group to another. Also, they vary from inter-community to intra-community. We encounter, as a result of variation in the phase coupling strength, different routes to achieve the static synchronization state by choosing several parameter combinations. We observe that when the inter-community phase coupling strength is sufficiently large, swarmalators settle in the static synchronization state. On the other hand, with a significant small phase coupling strength the state of anti-phase synchronization as well as chimera-like coexistence of sync and async are realized. Apart from rigorous numerical results, we have been successful to provide semi-analytical treatment for the existence and stability of global static sync and the anti-phase sync states.
Samali Ghosh, Gourab Kumar Sar, Soumen Majhi, Dibakar Ghosh
2023-09-08T06:58:36Z
http://arxiv.org/abs/2309.04159v1
# Anti-phase synchronization in a population of swarmalators ###### Abstract Swarmalators are oscillatory systems endowed with a spatial component, whose spatial and phase dynamics affect each other. Such systems can demonstrate fascinating collective dynamics resembling many real-world processes. Through this work, we study a population of swarmalators where they are divided into different communities. The strengths of spatial attraction, repulsion as well as phase interaction differ from one group to another. Also, they vary from inter-community to intra-community. We encounter, as a result of variation in the phase coupling strength, different routes to achieve the static synchronization state by choosing several parameter combinations. We observe that when the inter-community phase coupling strength is sufficiently large, swarmalators settle in the static synchronization state. On the other hand, with a significant small phase coupling strength the state of anti-phase synchronization as well as chimera-like coexistence of sync and async are realized. Apart from rigorous numerical results, we have been successful to provide semi-analytical treatment for the existence and stability of global static sync and the anti-phase sync states. and split phase wave. Lately, Ceron et al. [39] demonstrates that the edition of non-identical frequencies of the oscillators, local coupling, and chirality lead to new dynamics including beating clusters and lattices of vortices. Nevertheless, research in this fascinating world of swarmalators is still in its infancy and there are adequate scopes of further investigation leading to the possible revelation of new emerging collective dynamics due to the bidirectional reciprocity between the phase and the spatial dynamics. One of the most pivotal characteristics of many real-world networked systems is that of community structures or clustering [40; 41] referring to the compartmental subdivisions of networked systems. This, precisely, corresponds to the organization of the units of the system in strongly intra-connected communities or groups while possessing weaker inter-group connections. From numerous social systems including collaboration networks to biological networks, such as metabolic networks, regulatory networks, and food webs, are naturally found to exhibit community structures [40; 41; 42]. The problem of detection and characterization of these communities [43; 44; 41; 45] is one of the preeminent issues in the study of structural network theory. Through this article, we assume a community-structured framework of the underlying network and demonstrate the genesis of multiple variants of collective patterns in interacting communities of swarmalators. We, specifically, study a population of swarmalators where they are distributed in two communities. We analyze how the trade-off between the intra- and inter-community interactions affects the generic interplay between the phase and spatial dynamics of swarmalators. The phase interactions along with the spatial attraction and repulsion differ in each community. Under such a network setup, we encounter diverse routes to the static synchronization state as the inter-community phase coupling strength increases, for different choices of parameter values. Besides the states like active and static async or active phase wave, we detect anti-phase synchrony and the chimera state in the process towards the emergence of in-phase synchronization. 
We must here emphasize the remarkable fact that the anti-phase synchronization state arises in the sole presence of repulsive coupling even when the network size is considerably large, which we do not experience in the case of simple phase oscillator models without any spatial dynamics [46]. We have also provided semi-analytical treatment concerning the stability analysis of the global static synchronization and the anti-phase synchronization state. We should mention here that, in a stereotypical community network structure, the strength of inter-community interaction is usually considered to be smaller than that of the intra-community interaction strength [40]. In this work, however, we have not necessarily followed this convention. We have varied the inter-community phase coupling strength over a feasible range where a number of diverse collective states are observed. ## II Proposed mathematical model We consider \(N\) number of swarmalators moving in a two-dimensional region. We randomly distribute them in \(p\) groups. Let \(C_{i}\) denote the set of indices of swarmalators belonging to the \(i\)-th group. Then trivially we have, \(\sum_{i=1}^{p}|C_{i}|=N\), where \(|C_{i}|\) denotes the cardinality of the set \(C_{i}\) and \(\cup_{i=1}^{p}C_{i}=\{1,2,\cdots,N\}\). Suppose, without loss of generality, that the \(i\)-th swarmalator belongs to the \(n\)-th group. Then we can write the governing equations as, \[\dot{\mathbf{x}}_{i}=\mathbf{v}_{i}+\sum_{m=1}^{p}\frac{1}{|C_{m}| }\sum_{j\in C_{m}\backslash\{i\}}\Bigg{[}\frac{\mathbf{x}_{j}-\mathbf{x}_{i}} {|\mathbf{x}_{j}-\mathbf{x}_{i}|}\big{(}1+\] \[J_{n,m}\cos(\theta_{j}-\theta_{i})\big{)}-\frac{\mathbf{x}_{j}- \mathbf{x}_{i}}{|\mathbf{x}_{j}-\mathbf{x}_{i}|^{2}}\Bigg{]}, \tag{1}\] \[\dot{\theta}_{i}=\omega_{i}+\sum_{m=1}^{p}\frac{K_{n,m}}{|C_{m}|}\sum_{j\in C_ {m}\backslash\{i\}}\frac{\sin(\theta_{j}-\theta_{i})}{|\mathbf{x}_{j}- \mathbf{x}_{i}|}, \tag{2}\] where \(i=1,2,\cdots,N\). \(\mathbf{x}_{i}\equiv(x_{i},y_{i})\) is the spatial position in two-dimensional plane and \(\theta_{i}\) is the internal phase of the \(i\)-th swarmalator. \(\omega_{i}\) and \(\mathbf{v}_{i}\) are the natural frequency, and self-propulsion velocity of the \(i\)-th swarmalator, respectively. The spatial attraction, repulsion as well as phase interaction functions are chosen the same as in Ref. [26] where all the swarmalators belong to a single group, i.e., \(p=1\). The spatial attraction term ensures that the swarmalators remain close to each other without dispersing indefinitely, whereas spatial repulsion among them is necessary to avoid collision. They can be perceived as long-range attraction and short-range repulsion. \(J_{n,m}\) highlights how phases of those two swarmalators affect their spatial attraction. We assume \(J_{n,m}>0\) so that swarmalators which are in nearby phases attract each other spatially due to the presence of the term \(\cos(\theta_{j}-\theta_{i})\). Similarly, \(K_{n,m}\) indicates the phase coupling strength between the two groups \(C_{n}\) and \(C_{m}\) (note that, here by group \(C_{n}\), we mean the swarmalators belonging to the \(n\)-th group, without ambiguity). When \(K_{n,m}>0\), swarmalators' phases are attractively coupled and the phase coupling is repulsive when \(K_{n,m}<0\). For symmetry, \(J_{n,m}=J_{m,n}\), and \(K_{n,m}=K_{m,n}\). Then, for \(p\) groups, the number of distinct parameters related to \(J\) and \(K\) are \((p^{2}+p)/2\) each. 
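For readers who wish to reproduce the dynamics, the following is a minimal (unoptimised) sketch of the right-hand sides of Eqs. (1)-(2) for a community-structured population, written with \(\omega_{i}=0\) and \(\mathbf{v}_{i}=\mathbf{0}\) as adopted later in the text. The array layout, parameter values and variable names are our own illustrative choices, not the code used to generate the figures.

```python
import numpy as np

def swarmalator_rhs(x, theta, group, J, K):
    """Right-hand sides of Eqs. (1)-(2) with omega_i = 0 and v_i = 0.

    x: (N, 2) positions, theta: (N,) phases, group: (N,) community indices,
    J, K: (p, p) symmetric coupling matrices.
    """
    N = len(theta)
    dx = np.zeros_like(x)
    dtheta = np.zeros(N)
    sizes = np.bincount(group)  # |C_m| for each community m
    for i in range(N):
        n = group[i]
        for j in range(N):
            if j == i:
                continue
            m = group[j]
            rij = x[j] - x[i]
            dist = np.linalg.norm(rij)
            # phase-dependent spatial attraction and hard-core repulsion
            attract = (rij / dist) * (1.0 + J[n, m] * np.cos(theta[j] - theta[i]))
            repel = rij / dist**2
            dx[i] += (attract - repel) / sizes[m]
            # distance-weighted phase coupling
            dtheta[i] += K[n, m] / sizes[m] * np.sin(theta[j] - theta[i]) / dist
    return dx, dtheta

# Example: N = 100 swarmalators split equally into two communities.
rng = np.random.default_rng(0)
N = 100
group = np.repeat([0, 1], N // 2)
x = rng.uniform(-1, 1, (N, 2))
theta = rng.uniform(0, 2 * np.pi, N)
J = np.full((2, 2), 0.1)
K = np.array([[-0.1, 0.3], [0.3, -0.2]])
dx, dtheta = swarmalator_rhs(x, theta, group, J, K)
```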
We work with \(p=2\) groups in this article which leaves us with \(J_{1,1}\equiv J_{1}\), \(J_{2,2}\equiv J_{2}\), \(J_{1,2}=J_{2,1}\equiv J_{3}\), \(K_{1,1}\equiv K_{1}\), \(K_{2,2}\equiv K_{2}\), and \(K_{1,2}=K_{2,1}\equiv K_{3}\), say. Effectively we have these six parameters in hand which we vary to obtain different collective behaviors. Also note that, the model defined by Eqs. (1)-(2) is a generalization of the model proposed by O'Keefle et al. [26]. We work with swarmalators having identical natural frequencies and velocities, i.e., \(\omega_{i}=\omega\) and \(\mathbf{v}_{i}=\mathbf{v}\) for all \(i\). By moving to a proper reference frame, we set \(\omega=|\mathbf{v}|=0\). ## III Results First, we assume that the swarmalators are distributed in equal numbers in two populations (we remove this assumption later in Appendix A to show that the results do not change if they are distributed unequally as long as \(N\) is sufficiently large). For simplicity, let \(C_{i}\) denote both the \(i\)-th population and the set of indices of swarmalators belonging to that population, whenever appropriate, for \(i=1,2\). Then \(J_{1}\) measures the extent to which the phases of swarmalators belonging to \(C_{1}\) affect their spatial attraction and similarly \(J_{2}\) for the group \(C_{2}\). \(J_{3}\) gauges the phase-dependent spatial attraction when swarmalators belong to different groups. \(K_{1}\), and \(K_{2}\) are the phase coupling strengths between swarmalators in \(C_{1}\) and \(C_{2}\), respectively, whereas \(K_{3}\) is the strength of phase interaction when they are in different groups. The values of these control parameters decide the fate of the swarmalator system where we observe various emerging collective states by changing these values. Before moving forward to describe these states, first, we define some order parameters that are useful to measure several properties of the emerging states. ### Order parameters To measure the amount of synchrony in swarmalators' phases throughout the population, we define \[re^{i\psi}=\frac{1}{N}\sum_{j=1}^{N}e^{i\theta_{j}}. \tag{3}\] Here \(r\) lies between 0 to 1 by definition and gives an indication of the overall synchrony in swarmalators' phases. Phases are completely synchronized when \(r=1\), or else asynchronous behavior is present. \(\psi\) is the mean phase of the overall population. We measure the phase coherence among swarmalators belonging to the \(p\)-th group by \[r_{p}e^{i\psi_{p}}=\frac{1}{|C_{p}|}\sum_{j\in C_{p}}e^{i\theta_{j}}, \tag{4}\] where \(r_{p}\) again lies between 0 to 1 and \(\psi_{p}\) is the average phase of \(p\)-th group. We also define \[Re^{i\Psi}=\frac{1}{N}\sum_{j=1}^{N}e^{2i\theta_{j}}, \tag{5}\] which is useful to examine anti-phase synchrony where a phase difference of \(\pi\) is observed among swarmalators' phases. In the anti-phase synchrony state, \(R=1\) but \(r\neq 1\). In some of the collective states (discussed later in Sec. III.2) we observe a correlation between swarmalators' phases \(\theta_{j}\) and their spatial angle \(\phi_{j}=\tan^{-1}(y_{j}/x_{j})\). For this, we define the following order parameters, \[S_{\pm}e^{i\Psi\pm}=\frac{1}{N}\sum_{j=1}^{N}e^{i(\phi_{j}\pm\theta_{j})}, \tag{6}\] which quantifies the correlation between phases and spatial angles. We take the maximum of \(S_{\pm}\) and define \(S=\max\{S_{+},S_{-}\}\). A nonzero value of \(S\) indicates the presence of a correlation between swarmalators' spatial angles and phases. 
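The order parameters \(r\), \(R\), and \(S\) defined above are straightforward to evaluate numerically from the instantaneous positions and phases; a short sketch (with our own variable names) reads as follows.

```python
import numpy as np

def order_parameters(x, theta):
    """Compute r (Eq. 3), R (Eq. 5) and S (Eq. 6) from positions and phases."""
    r = np.abs(np.mean(np.exp(1j * theta)))    # global phase coherence
    R = np.abs(np.mean(np.exp(2j * theta)))    # close to 1 for anti-phase sync
    phi = np.arctan2(x[:, 1], x[:, 0])         # spatial angles
    S_plus = np.abs(np.mean(np.exp(1j * (phi + theta))))
    S_minus = np.abs(np.mean(np.exp(1j * (phi - theta))))
    return r, R, max(S_plus, S_minus)

# Example: a perfectly anti-phase configuration gives r ~ 0 and R = 1.
theta = np.concatenate([np.zeros(100), np.pi * np.ones(100)])
x = np.random.randn(200, 2)
print(order_parameters(x, theta))
```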
In one of the collective states, swarmalators arrange themselves inside an annular-like structure and they rotate around this annulus. Their phases also keep changing from 0 to \(2\pi\). To distinguish this state from others, \(\gamma\) is defined as \[\gamma=\frac{N_{rot}}{N}, \tag{7}\] where \(N_{rot}\) is the number of swarmalators executing at least one full circle rotation in both spatial location and phase. \(\gamma\) gives the fraction of such swarmalators and consequently lies between 0 and 1. We find both stationary and non-stationary states in our model, where swarmalators become static in position and phase in the stationary states but keep moving in the non-stationary ones. To separate these, we measure the mean velocity, denoted by \(V\), which is defined as \[V=\left\langle\frac{1}{N}\sum_{i=1}^{N}\sqrt{\dot{x}_{i}^{2}+\dot{y}_{i}^{2}+ \dot{\theta}_{i}^{2}}\right\rangle_{t}, \tag{8}\] where \(\langle\cdots\rangle_{t}\) stands for the time average, which is taken after discarding the initial transients. With the knowledge of these order parameters, we proceed to study the emerging collective states of our model.

### Emerging collective states

We investigate the twin activities of synchronization and swarming in our model. For simplicity, we take the values of \(J_{1}\), \(J_{2}\), and \(J_{3}\) to be equal to 0.1 and fix \(K_{1}=-0.1\), \(K_{2}=-0.2\). These choices of parameter values are arbitrary and solely made for a case study of our model. We relax this choice in the subsequent sections. However, the natural indication after performing numerical simulations is that \(K_{3}\), which determines the inter-group phase coupling, is the most crucial parameter. That is why we keep it as a free parameter and study our model's collective states while varying it.

The model exhibits six long-term collective states: _anti-phase sync_, _chimera state_, _active async_, _static async_, _active phase wave_, and _static sync_ when we vary \(K_{3}\) inside the interval \([-0.75,0.5]\). Figure 1(b)-(f) display the states by scatter plots in the \((x,y)\) plane, where the swarmalators are represented by dots colored according to their phases \(\theta\). Figure 1(a) reveals the variation of the order parameters as a function of \(K_{3}\). The order parameters \(r,S,\gamma,R,V\) are plotted by blue, red, magenta, purple, and green-colored dotted lines, respectively. Table 1 provides information regarding the values of the order parameters in these states. Next, we discuss these collective states and their structural properties in detail.

We start from the left endpoint of the interval. Here, \(K_{3}\ll K_{1},K_{2}\). The population breaks into two disjoint clusters formed by the two groups of swarmalators. Both clusters are stationary in spatial position and phase. Swarmalators inside each cluster are completely synchronized. But one cluster is synchronized at a common phase which is at a \(\pi\) difference from the common phase of the other cluster. We call this the _anti-phase sync_ state (see Fig. 1(b)). Look at the white region of the parameter space of Fig. 1(a), where, by the definition of \(R\) in Eq. (5), \(R\approx 1\) in this state (purple curve). Since the overall population's phases are distributed at a \(\pi\) difference in two equally sized sub-populations, we get \(r\approx 0\) (blue curve). Being a static state, it also gives \(\gamma\approx 0\) (magenta curve) and \(V\approx 0\) (green curve).
The other order parameter \(S\) holds a nonzero value that is less than \(R\) in this state (red curve). For a compact view of the order parameters, we refer the reader to Table 1. Section V.2 presents this state in a more detailed way. See Movie 1 of the Supplementary Material for the time evolution of this state. When we gradually increase \(K_{3}\), swarmalators in one community remain fully synchronized in phase but in the other community, asynchrony starts to appear. The cluster formation in the anti-phase sync state remains \begin{table} \begin{tabular}{c c c c c} \(r\) & \(R\) & \(S\) & \(\gamma\) & \(V\) & Emerging state \\ \hline \(\approx 1\) & \(\approx 1\) & \(0<S\ll 1\) & \(\approx 0\) & \(\approx 0\) & Static sync \\ \(\approx 0\) & \(0<R\ll 1\) & \(\approx 0\) & \(\approx 0\) & \(\approx 0\) & Static async \\ \(\approx 0\) & \(0<R\ll 1\) & \(0<S\ll 1\) & \(\ll 1\) & \(\neq 0\) & Active async \\ \(\approx 0\) & \(0<R\ll 1\) & \(0\ll S<1\) & \(0\ll\gamma<1\) & \(\neq 0\) & Active phase wave \\ \(0<r\ll 1\) & \(0\ll R<1\) & \(0\ll S<1\) & \(\neq 0\) & \(\neq 0\) & Chimera \\ \(\approx 0\) & \(\approx 1\) & \(0<S<R\) & \(\approx 0\) & \(\approx 0\) & Anti-phase sync \\ \end{tabular} \end{table} Table 1: This table shows how the emerging states of the population of swarmalators are identified with the order parameters \(r\), \(R\), \(S\), \(\gamma\), and \(V\). Figure 1: **Order parameters along with the snapshots of the emerging states.** (a) Variation of different order parameters with \(K_{3}\). (b) Anti-phase sync for \(K_{3}=-0.6\), (c) chimera for \(K_{3}=-0.4\), (d) active async for \(K_{3}=-0.25\) & static async for \(K_{3}=0.0\), (e) active phase wave for \(K_{3}=0.1\) and (f) static sync for \(K_{3}=0.3\). Simulations are performed for \(N=200\) swarmalators for \(T=5000\) time units and step-size \(dt=0.01\) using Heun’s method. In all cases, swarmalators are initially placed inside the box \([-1,1]\times[-1,1]\) uniformly at random, while their phases are drawn randomly from \([0,2\pi]\). We fix \(J_{1}=J_{2}=J_{3}=0.1\) and \(K_{1}=-0.1,K_{2}=-0.2\). Note that there is a long transient until the states are achieved. The order parameters are calculated with the last 10% data after discarding the initial transients. intact here (Fig. 1(c)). However, asynchrony in one cluster brings some activity inside that cluster in the sense that swarmalators now move. This state is best visualized when studied in terms of \(r_{1}\) and \(r_{2}\). The synchronized cluster gives \(r_{1}=1\), and the desynchronized one gives \(r_{2}<1\). This co-existence of synchronized and desynchronized swarmalator communities is reminiscent of the chimera state found in the study of coupled oscillators [47; 48; 49] and we simply name this state as _chimera state_. All the five order parameters \(r,S,\gamma,R,V\) show nonzero values here (pink region in Fig. 1). See Table 1 for more details. We also discuss this state in detail in Sec. VI. Movie 2 of the Supplementary Material demonstrates the time evolution of the chimera state. On further increment of \(K_{3}\) from the chimera state, we encounter the swarmalators moving and arranging themselves within a circular disc and their phases are totally incoherent, i.e., \(r\approx 0\). The activity never dies and they keep moving in the two-dimensional plane which gives \(V\neq 0\). This state is named as _active async_ as the swarmalators maintain movement in the \((x,y)\) plane, and their phases are desynchronized. 
Find Table 1 for the description of order parameters in this state. Also, observe the cyan region in Fig. 1. The activity dies keeping the disc structure with the incoherent phase nature when \(K_{3}\) is increased beyond this state. This is the static async state. The only difference between this state and the active async state is that \(V=0\) is in this state. Static async state prevails over the yellow region in Fig. 1. Figure 1(d) represents a snapshot at a particular time instant for both these states. See Movies 3 & 4 of the Supplementary Material for the time evolution to this state. Moving to the right with increasing \(K_{3}\) from the static async state, we observe another collective state where the swarmalators arrange themselves inside an annular ring and oscillate to achieve regular cycles in both phase and space. This state was termed as _active phase wave_ in previous studies [26]. A snapshot of this motion is best illustrated in Fig. 1(e). By our definition, \(\gamma\) is nonzero in this state. Find the green region of Fig. 1 for the occurrence of this state (Movie 5 of the Supplementary Material best describes this state). Finally, to the extreme right of this parameter region where \(K_{3}\) is sufficiently large and positive, phases of the swarmalators throughout the population get synchronized and they form a disc structure in the plane. This previously reported state is known as the _static sync_[26]. The value of \(r\) is the maximum here which is observed by the blue curve in the purple region. \(R\) is also close to 1 here by definition. Figure 1(f) illustrates a snapshot of this state (also see Movie 6 of the Supplementary Material). Till now we have only varied \(K_{3}\) and studied the emerging six collective states. Now, we simultaneously vary \(J_{3}\) along with \(K_{3}\) and observe the dynamical behaviors. The resulting parameter space is shown in Fig. 2. In this figure, the \(J_{3}\)-\(K_{3}\) parameter plane is divided into \(100\times 100\) mesh points, and at each point, we simulate our model for \(T=5000\) time units. The value of order parameter \(R\) is calculated over the last 10% data and the mesh point is colored according to this value. We observe from Fig. 2 that the emerging states are robust concerning variation in \(J_{3}\). The top yellow region corresponds to the static sync state where \(K_{3}\) is positive and \(r,R\approx 1\). The yellow region towards the bottom corresponds to the anti-phase sync state where \(R\approx 1\), but \(r\approx 0\) (not shown here). The red and black curves are the analytical predictions for achieving the static sync and anti-phase sync state, respectively. Find Sec. V.1 and V.2 for the derivation of these curves. So far we have always considered \(J_{1}=J_{2}\). Appendix B demonstrates the picture when we work with \(J_{1}\neq J_{2}\). The emerging states remain the same which can be seen from Fig. 9. ## IV Emerging collective states from identical communities: dynamical routes We know from Ref. [26] that with a single community structure, our model exhibits five long-term states depending on the parameter values. These states are static sync, static async, static phase wave, splinter phase wave, and active phase wave of which the last two are non-stationary states. Here we assume that both communities are in the same state which belongs to one of these five states. This means the communities are identical with \(J_{1}=J_{2}\) and \(K_{1}=K_{2}\). 
Furthermore, we fix \(J_{3}=0.1\) and analyze the routes from static anti-phase sync to static sync by varying the parameter \(K_{3}\) over a range to perceive the collective states. ### Static sync We start from the static sync state for both the communities (\(J_{1}=J_{2}=0.1\), \(K_{1}=K_{2}=1.0\)) and Figure 2: \(J_{3}\)-\(K_{3}\) **Parameter space for \(J_{1}=J_{2}\)**. (a) \(J_{1}=J_{2}=0.1\). (b) \(J_{1}=J_{2}=0.5\). The model is integrated with \(N=200\) swarmalators using Heun’s method with step-size \(dt=0.01\) for \(T=5000\) time units. Order parameter \(R\) is calculated with the last 10% data after discarding the transients. Colorbar stands for the value of \(R\). Red and black curves are analytical predictions Eqs. (21) and (25), respectively. Here \(K_{1}=-0.1,K_{2}=-0.2\). change \(K_{3}\). Firstly, in the negative \(K_{3}\) region, we notice the population forms two clusters that are static in both phase and position. They are separated by a phase difference of \(\pi\) from each other, which is the anti-phase sync state. An increment of \(K_{3}\) shows the persistence of cluster structure but with a lower phase difference (see Movie 7 of the Supplementary Material). This cluster synchronization state (which is not the anti-phase sync state) exists over a small interval of \(K_{3}\) before finally yielding the static sync state. With increasing \(K_{3}\), we find \[\text{anti-phase sync}\rightarrow\text{cluster sync}\rightarrow\text{ static sync}.\] See Fig. 3(a) for the behavior of order parameters here. ### Static async Here, we take the two groups initially at the static async state by choosing the parameter values as \(J_{1}=J_{2}=0.1\) and \(K_{1}=K_{2}=-1.0\) and vary \(K_{3}\). Starting from a relatively lower value of \(K_{3}\) in comparison to \(K_{1}\) and \(K_{2}\), we notice the presence of an anti-phase sync state where two synchronized, stable clusters maintain a phase difference of \(\pi\). When we increase \(K_{3}\) in very small magnitude, we observe the chimera state where one group of swarmalators are fully phase coherent and in the other group they are out of synchrony. We encounter the active async state as we further increase \(K_{3}\). After this, activity dies and the swarmalators arrange themselves in a static async by adjusting their spatial position with further increments of \(K_{3}\). From this, we spot the emergence of an active phase wave state by increasing \(K_{3}\). As \(K_{3}\) is further increased, the whole community accomplishes themselves in static sync finally. Figure 3(b) portrays the phase transitions in this case. The route is \[\text{anti-phase sync}\rightarrow\text{chimera}\rightarrow\text{active async} \rightarrow\text{static async}\rightarrow\text{active phase wave}\rightarrow\text{static sync}.\] Compared to the earlier case where both the communities were in a static sync state, here we observe that the intermediate dynamics are relatively richer when the communities are in static async. ### Static phase wave Here both communities are in a static phase wave state. Primarily, here we deal with a phase-dependent aggregation model as \(J_{1}=J_{2}=1.0\) and \(K_{1}=K_{2}=0\). So, intra-community phase coupling is absent here. 
Phase interaction only takes place through the inter Figure 3: **Behavior of the order parameters for identical swarmalator communities.** Order parameters as a function of \(K_{3}\) where initially both the communities are in (a) static sync (\(J_{1}=J_{2}=0.1\), \(K_{1}=K_{2}=1.0\)), (b) static async (\(J_{1}=J_{2}=0.1\), \(K_{1}=K_{2}=-1.0\)), (c) static phase wave (\(J_{1}=J_{2}=1.0\), \(K_{1}=K_{2}=0.0\)), (d) splintered phase wave (\(J_{1}=J_{2}=1.0\), \(K_{1}=K_{2}=-0.1\)) and (e) active phase wave (\(J_{1}=J_{2}=1.0\), \(K_{1}=K_{2}=-0.75\)). Simulation parameters \((dt,T,N)=(0.01,5000,200)\). The order parameters are calculated with the last 10% data. Here, we fix \(J_{3}=0.1\). community structure via \(K_{3}\). We can trace the anti-phase sync in the negative \(K_{3}\) region as before. The swarmalators follow a path from anti-phase sync to static sync through a static phase wave state which is deformed in nature, i.e., they are distributed in a non-uniform pattern in the 2D plane (in the existing static phase wave state, they are distributed uniformly in an annular ring). We can trace this deformed state very close to the \(K_{3}=0\) region. Movie 8 of the Supplementary Material best describes this state. Swarmalators arrange themselves into static sync for \(K_{3}>0\). Here, the route can be noted down as: anti-phase sync \(\rightarrow\) deformed static phase wave \(\rightarrow\) static sync. The order parameters can be found in Fig. 3(c). ### Splintered phase wave So far we deal with the scenario where both the communities are in static states initially. Here we start with two identical non-stationary states namely splinter phase wave. We keep the parameter values \(J_{1}=J_{2}=1.0\) and \(K_{1}=K_{2}=-0.1\) and vary \(K_{3}\). We observe anti-phase sync where two static, synchronized clusters persist with a phase difference \(\pi\) for a relatively smaller value of \(K_{3}\) compared to \(K_{1}\) and \(K_{2}\). Increasing the value of \(K_{3}\), we mark the splintered phase wave state \(-0.4<K_{3}<0.22\). Here the swarmalators split into two clusters where the mean phases of the clusters differ from each other by approximately \(\pi\) (Movie 9 of the Supplementary Material). Moving to the right with an increasing value of \(K_{3}\), we notice some of the swarmalators start to execute a full cycle rotation spatially but their phases do not change from \(0\) to \(2\pi\) as in the active phase wave state. This peculiar state can be deciphered as the simultaneous coexistence of splintered phase wave and active phase wave states (see Movie 10 of the Supplementary Material for an illustration of the state). We observe the mixed activity of splintered and active phase wave states when \(-0.22<K_{3}<0.22\). Further increasing \(K_{3}\), the swarmalators are again divided into two clusters but this time they maintain a difference of mean phases around \(0\) (see the time evolution of this state in Movie 11 of the supplementary Material). This state exists for \(0.22<K_{3}<0.4\). Finally, the whole population reaches static synchrony after a certain value of \(K_{3}\) (\(\approx 0.4\)). The overall route is depicted as: anti-phase sync \(\rightarrow\) splintered phase wave (mean phase difference close to \(\pi\)) \(\rightarrow\) mixed (splintered and active phase waves) \(\rightarrow\) splintered phase wave (mean phase difference close to \(0\)) \(\rightarrow\) static sync. ### Active phase wave Here the story starts with two identical active phase wave states. 
The parameter values are \(J_{1}=J_{2}=1.0\) and \(K_{1}=K_{2}=-0.75\). To analyze the route from anti-phase synchrony to static synchrony, we vary \(K_{3}\) over a broad range. In this case, the anti-phase sync state is found for a relatively larger negative value of \(K_{3}\) compared to the previous cases (\(K_{3}<-2.12\)). Increasing the value of \(K_{3}\), swarmalators start to segregate into two clusters and we observe activity emerging in the system. Some of the swarmalators undergo a full circle rotation in space and phase and consequently, \(\gamma\) exhibits a small non-zero value around \(-2.16<K_{3}<-1.28\). Movie 12 of the Supplementary Material demonstrates the state best. On further increment of \(K_{3}\), we notice their oscillations increase in amplitude until all of them start to execute regular cycles in both phase and spatial angle, i.e., the swarmalators settle in the active phase wave state. The value of \(\gamma\) is close to \(1\) and \(S\) is very small. With a further increment of \(K_{3}\), their activity begins to diminish gradually and they are again separated into two clusters. The phase difference also decreases between the two clusters and \(\gamma\) is very small compared to \(1\) near \(1.12<K_{3}<2.0\) (see Movie 13 of the Supplementary Material). Ultimately we find static sync for \(K_{3}>2.0\). The route in this case becomes anti-phase sync \(\rightarrow\) mixed (splintered and active phase waves) \(\rightarrow\) active phase wave \(\rightarrow\) mixed (splintered and active phase waves) \(\rightarrow\) static sync. ## V Analytical findings In the previous section, we explored the dynamic states of our model with various parameter values. The most striking result that we encountered is the occurrence of anti-phase sync with a reasonably small value of \(K_{3}\) and on the other hand, a sufficiently large and positive value of \(K_{3}\) results in the whole population in the static sync state. These two static states exist at the opposite extremes of \(K_{3}\) values. Now, we try to establish the criteria for achieving these states. ### Static sync state Before going into the study of the static sync state, we first analyze the phase dynamics of our model where spatial positions do not affect the phases. In that case, the phase equation becomes \[\dot{\theta}_{i}=\omega_{i}+\sum_{m=1}^{2}\frac{K_{n,m}}{|C_{m}|}\sum_{j\in C _{m}\setminus\{i\}}\sin(\theta_{j}-\theta_{i}), \tag{9}\] where \(i\in C_{n}\). We move to the continuum limit where \(|C_{p}|\rightarrow\infty\), \(p=1,2\). Considering the probability den sity function \(\rho_{n}(\theta,t)\) of oscillators belonging to the \(n\)-th group, the Fokker-Planck equation can be written as \[\frac{\partial\rho_{n}}{\partial t}+\frac{\partial}{\partial\theta}(\rho_{n}v_{n })=0, \tag{10}\] where the velocity \(v_{n}(\theta^{n},t)\) is given by \[v_{n}(\theta^{n},t)=\omega+\sum_{m=1}^{2}K_{n,m}\int\sin(\theta^{m}-\theta^{n} )\rho_{m}(\theta^{m},t)d\theta^{m}. \tag{11}\] We define the complex order parameter \[z_{n}(t)=\sum_{m=1}^{2}K_{n,m}\int e^{i\theta^{m}}\rho_{m}(\theta^{m},t)d \theta^{m}. \tag{12}\] Using this, Eq. (11) is re-written as \[v_{n}(\theta^{n},t)=\omega+\frac{1}{2i}(z_{n}e^{-i\theta^{n}}-z_{n}^{*}e^{i \theta^{m}}), \tag{13}\] where \(*\) denotes complex conjugate. 
Following Ott-Antonsen ansatz Ott-Antonsen (1993), we choose a special class of density functions that has an invariant manifold of Poisson kernels, \[\rho_{n}(\theta^{n},t)=\frac{1}{2\pi}\left\{1+\left[\sum_{k=1}^{\infty}[a_{n} (t)e^{i\theta^{n}}]^{k}+c.c.\right]\right\}. \tag{14}\] where the unknown function \(a_{n}(t)\) must be found self-consistently. Inserting this form of \(\rho_{n}\) given by Eq. (14) into Eq. (10), we find that \(\rho_{n}\) satisfies the Fokker-Planck equation for all harmonics \(k\) if \(a_{n}\) satisfies \[\dot{a}_{n}+i\omega a_{n}+\frac{1}{2}\left[a_{n}^{2}z_{n}-z_{n}^{*}\right]=0. \tag{15}\] Further inserting Eq. (14) into Eq. (12) and after performing the integration, the complex order parameter \(z_{n}\) is expressed in terms of \(a_{n}\) as \[z_{n}(t)=\sum_{m=1}^{2}K_{n,m}a_{m}^{*}(t). \tag{16}\] Then the amplitude equation for \(a_{1}\) becomes \[\dot{a}_{1}= -i\omega a_{1}-\frac{1}{2}(K_{1,1}a_{1}^{*}+K_{1,2}a_{2}^{*})\] \[+\frac{1}{2}(K_{1,1}a_{1}+K_{1,2}a_{2}). \tag{17}\] Similarly, we find the equation for \(\dot{a}_{2}\) by interchanging \(1\)'s and \(2\)'s in Eq. (17). We move to the polar coordinates to rewrite the amplitude equations by defining \(a_{n}=r_{n}e^{-i\phi_{n}}\), \(n=1,2\). We further define \(\Phi=\phi_{1}-\phi_{2}\). Substituting these into the amplitude equations and after simplifying, we get \[\dot{r_{1}} =\frac{1-r_{1}^{2}}{2}(K_{1}r_{1}+K_{3}r_{2}\cos\Phi), \tag{18}\] \[\dot{r_{2}} =\frac{1-r_{2}^{2}}{2}(K_{2}r_{2}+K_{3}r_{1}\cos\Phi),\] (19) \[\dot{\Phi} =-K_{3}\left(\frac{r_{1}^{2}+r_{2}^{2}+2r_{1}^{2}r_{2}^{2}}{2r_{1 }r_{2}}\right)\sin\Phi \tag{20}\] (note that, \(K_{1,1}\equiv K_{1}\), \(K_{2,2}\equiv K_{2}\), and \(K_{1,2}=K_{2,1}\equiv K_{3}\)). We integrate Eqs. (18)-(20) with initial conditions \((r_{1}(0),r_{2}(0),\Phi(0))=(0.9,0.9,\pi-0.1)\) and demonstrate their variation as functions of \(K_{3}\) in Fig. 4 where \(K_{1}\) and \(K_{2}\) are fixed to \(-0.1\) and \(-0.2\), respectively. When \(K_{3}<-0.2\), \(r_{1}=r_{2}=1\) and \(\Phi=\pi\) which represents the anti-phase sync state (we study this state in detail in the next section). On the opposite side, for \(K_{3}>0.2\), we get \(r_{1}=r_{2}=1\) and \(\Phi=0\) which stand for the sync state. In the middle region \(-0.2<K_{3}<0.2\), chimera-like states appear. In the global sync state, the phases of the swarmalators throughout the entire population become identical, which yields \(r_{1}=r_{2}=1\) and \(\Phi=0\). This is the trivial solution to Eqs. (18)-(20) and the Jacobian matrix at this steady state gives eigenvalues \(-2K_{3},-K_{1}-K_{3},-K_{2}-K_{3}\). From this, the sync state stability condition is achieved as \[K_{3}>\max\{0,-K_{1},-K_{2}\}. \tag{21}\] When we consider the phase dynamics of swarmalators, Eq. (2), the spatial effect is to be dealt with. However, from numerical simulations, we observe that when \(J_{1}=J_{2}\) and \(J_{3}>J_{1}\), the stability condition Eq. (21) holds for achieving the static sync state. This is demonstrated by the red lines in Fig. 2. For non-identical values of \(J_{1}\) and \(J_{2}\), the spatial distributions of swarmalators in the two communities do not remain the same. The spatial positions having an impact on the phase dynamics, in turn, affect the critical \(K_{3}\) in Eq. (21). The small deviation from the condition given by Eq. (21) (plotted by the red curve) is observed in Fig. 9 (Appendix B) where we present our results with \(J_{1}=0.1\) and \(J_{2}=0.5\). 
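The reduced system (18)-(20) is only three-dimensional and can be integrated directly to reproduce the limiting behaviors just described. The sketch below is an illustration (the integrator, tolerances, and the two sampled values of \(K_{3}\) are arbitrary choices); it uses the same initial condition \((r_{1}(0),r_{2}(0),\Phi(0))=(0.9,0.9,\pi-0.1)\) and \(K_{1}=-0.1\), \(K_{2}=-0.2\) as in Fig. 4.

```python
import numpy as np
from scipy.integrate import solve_ivp

K1, K2 = -0.1, -0.2

def rhs(t, y, K3):
    """Right-hand side of the reduced amplitude equations (18)-(20)."""
    r1, r2, Phi = y
    dr1 = 0.5 * (1.0 - r1**2) * (K1 * r1 + K3 * r2 * np.cos(Phi))
    dr2 = 0.5 * (1.0 - r2**2) * (K2 * r2 + K3 * r1 * np.cos(Phi))
    dPhi = -K3 * (r1**2 + r2**2 + 2.0 * r1**2 * r2**2) / (2.0 * r1 * r2) * np.sin(Phi)
    return [dr1, dr2, dPhi]

y0 = [0.9, 0.9, np.pi - 0.1]
for K3 in (-0.5, 0.5):
    sol = solve_ivp(rhs, (0.0, 5000.0), y0, args=(K3,), rtol=1e-8, atol=1e-10)
    r1, r2, Phi = sol.y[:, -1]
    print(f"K3 = {K3:+.2f}: r1 = {r1:.3f}, r2 = {r2:.3f}, Phi mod 2pi = {Phi % (2*np.pi):.3f}")
# With K1 = -0.1 and K2 = -0.2, Eq. (21) predicts static sync (r1 = r2 = 1, Phi = 0) for K3 > 0.2,
# while sufficiently negative K3 drives the system to the anti-phase branch (r1 = r2 = 1, Phi = pi);
# intermediate K3 produces the chimera-like behavior visible in Fig. 4.
```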
### Anti-phase sync state In the anti-phase sync state, the two groups get separated from each other in the phase component. Their Figure 4: **Variation of order parameters \(r_{1}\), \(r_{2}\) and phase difference \(\Phi\) as functions of \(K_{3}\).** We integrate Eqs. (18)-(20) starting from initial conditions \((r_{1}(0),r_{2}(0),\Phi(0))=(0.9,0.9,\pi-0.1)\) for \(T=5000\) time units. Then they are time averaged over the last 10% data and plotted as functions of \(K_{3}\). We fix \(K_{1}=-0.1,K_{2}=-0.2\) like in Fig. 1. (a) The variations of \(r_{1}\) (blue) and \(r_{2}\) (red) are plotted. (b) Represents the change of \(\Phi\) (magenta) with varying \(K_{3}\). phases are fully synchronized within each group, but there is a phase difference of \(\pi\) between these two groups (see Fig. 5(b)). When \(J_{3}\) is absent, i.e., \(J_{3}=0\), these two groups arrange themselves spatially in the form of a disc where these discs overlap. The radius of these discs depends on the choices of \(J_{1}\) and \(J_{2}\). But when the value of \(J_{3}\) is nonzero, swarmalators belonging to different groups start to reduce the attraction between them (since the strength of attraction between these two groups is \(1-J_{3}\) as phase difference is exactly \(\pi\)). As a result, these two groups form disjoint clusters in the plane. See Fig. 5(a). In the anti-phase sync state, \(r_{1}=r_{2}=1\) and \(\Phi=\pm\pi\). These also satisfy Eqs. (18)-(20) in the steady state. Linearizing these equations around this steady state and calculating the Jacobian matrix, yields the eigenvalues \(2K_{3}\), \(K_{3}-K_{1}\), and \(K_{3}-K_{2}\). This gives the stability condition of the anti-phase sync state as \[K_{3}<\min\{0,K_{1},K_{2}\}. \tag{22}\] We use Eq. (22) to find the stability condition of the anti-phase sync state found in our systems defined by Eqs (1)-(2). Since in our model, the phase dynamics of the swarmalators are influenced by the spatial dynamics, we first study this effect in the anti-phase sync state. From simulation results, we find that in the anti-phase sync state with nonzero \(J_{3}\), swarmalators form disjoint clusters in a two-dimensional plane. Swarmalators belonging to group \(C_{1}\) make a cluster among them where their phases are synchronized and the other cluster is formed by the swarmalators similarly belonging to \(C_{2}\). This can be considered as a two-particle system where swarmalators belonging to the same group are represented by their center of positions and synchronized phase [37]. Let \(\mathbf{x}_{C_{1}}\) and \(\mathbf{x}_{C_{2}}\) be the center of positions of \(C_{1}\) and \(C_{2}\) and \(\theta_{C_{1}}\), \(\theta_{C_{2}}\) be their synchronized phase angles, respectively. Then from Eq. (1), we can write \[0=\left[\frac{\mathbf{x}_{C_{2}}-\mathbf{x}_{C_{1}}}{|\mathbf{x}_{C_{2}}- \mathbf{x}_{C_{1}}|}\big{(}1+J_{3}\cos(\theta_{C_{2}}-\theta_{C_{1}})\big{)}- \frac{\mathbf{x}_{C_{2}}-\mathbf{x}_{C_{1}}}{|\mathbf{x}_{C_{2}}-\mathbf{x}_ {C_{1}}|^{2}}\right]. \tag{23}\] This gives us the distance between the center of positions of \(C_{1}\) and \(C_{2}\) as \[|\mathbf{x}_{C_{2}}-\mathbf{x}_{C_{1}}|=\frac{1}{1-J_{3}}, \tag{24}\] since \(|\mathbf{x}_{C_{2}}-\mathbf{x}_{C_{1}}|\neq 0\) and \(\theta_{C_{2}}-\theta_{C_{1}}=\pm\pi\). This is plotted in the black line in Fig. 5(c) where the red dots are simulation results. 
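The eigenvalues used above for the anti-phase sync state, and those of the static sync state in the previous subsection, follow from a routine linearization of Eqs. (18)-(20). As a purely illustrative check, they can be recovered symbolically:

```python
import sympy as sp

r1, r2, Phi, K1, K2, K3 = sp.symbols('r1 r2 Phi K1 K2 K3', real=True)

# Reduced amplitude equations (18)-(20)
f = sp.Matrix([
    (1 - r1**2) / 2 * (K1 * r1 + K3 * r2 * sp.cos(Phi)),
    (1 - r2**2) / 2 * (K2 * r2 + K3 * r1 * sp.cos(Phi)),
    -K3 * (r1**2 + r2**2 + 2 * r1**2 * r2**2) / (2 * r1 * r2) * sp.sin(Phi),
])
J = f.jacobian([r1, r2, Phi])

for name, point in [("static sync", {r1: 1, r2: 1, Phi: 0}),
                    ("anti-phase sync", {r1: 1, r2: 1, Phi: sp.pi})]:
    eigs = [sp.simplify(e) for e in J.subs(point).eigenvals()]
    print(f"{name}: {eigs}")
# static sync     -> [-2*K3, -K1 - K3, -K2 - K3]  (all negative exactly when Eq. (21) holds)
# anti-phase sync -> [ 2*K3,  K3 - K1,  K3 - K2]  (all negative exactly when Eq. (22) holds)
```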
When the swarmalators form separate groups in spatial positions, their effective phase coupling strength changes since it depends on the distance between the swarmalators. This is why Eq. (22) does not stand valid for our model. To find the stability condition of the anti-phase sync state, we need to investigate the effect of spatial position carefully. The average distance \(R_{1}\) between two particles in \(C_{1}\) can be considered as the half of its diameter (maximum distance between particles in \(C_{1}\)) which is a function of \(J_{1}\),\(J_{2}\), and \(J_{3}\), i.e., \(R_{1}(J_{1},J_{2},J_{3})\). Similarly for \(C_{2}\) it is \(R_{2}(J_{1},J_{2},J_{3})\). On the other hand, the average distance of the particle throughout the whole population then becomes \(R_{1}+R_{2}+1/(1-J_{3})=R_{3}\), say. The effective ratio of \(K_{3}\) to \(K_{1}\) can be written down as \(R_{1}K_{3}/R_{3}K_{1}\) and that of \(K_{3}\) to \(K_{2}\) is \(R_{2}K_{3}/R_{3}K_{1}\). Then from Eq. (22), we write down the stability condition of the anti-phase sync state as \[K_{3}<\min\{0,\frac{R_{3}K_{1}}{R_{1}},\frac{R_{3}K_{2}}{R_{2}}\}. \tag{25}\] Figure 5: **Anti-phase sync state.** The entire swarmalator population forms two disjoint clusters in space where the clusters belong to the two communities. Simulation parameters: \(J_{1}=J_{2}=0.5\). \(K_{1}=-0.1,K_{2}=-0.2,K_{3}=-1.5\). \((dt,T,N)=(0.01,5000,200)\). \(J_{3}=0.5\) in (a) and (b). (a) Snapshot at \(T=5000\) time units showing the spatial structure of the swarmalators in the anti-sync state where they are colored according to their phases. (b) The phases of the swarmalators are plotted against their respective indices at \(T=5000\) time units where red and blue dots correspond to swarmalators belonging to the first and second communities, respectively. The phase difference is \(\pi\) between the communities. (c) The distance between the center of masses of the two clusters is plotted as a function of \(J_{3}\). Red dots are simulation results and the black curve indicates the analytical prediction, Eq. (24). Due to the complexity of the model, we are unable to find explicit expressions for \(R_{1}\) and \(R_{2}\). But from numerical simulations, we observe that these quantities depend majorly on the values of \(J_{1}\) and \(J_{2}\) for respective groups and not on \(J_{3}\). We can approximately write \(R_{1}\approx R_{1}(J_{1})\) and \(R_{2}\approx R_{2}(J_{2})\). To verify our results, we take \(J_{1}=J_{2}=0.1\) and \(K_{1}=-0.1,K_{2}=-0.2\). Numerical simulations suggest \(R_{1}\approx 0.98\approx R_{2}\). The curve defined by Eq. (25) is drawn with these values and is plotted in black in Fig. 2(a). With \(J_{1}=J_{2}=0.5\) and same \(K_{1}\) and \(K_{2}\) we find \(R_{1}\approx 0.7\approx R_{2}\). The separatrix curve is again calculated and plotted in black in Fig. 2(b). Both curves match very well with our numerical results. We also verify our findings with unequal \(J_{1}\) and \(J_{2}\) in Appendix B. ## VI Chimera state The co-existence of coherence and incoherence is known as chimera state [47; 48; 49]. We find that for certain parameter values, there is complete synchrony among one group of swarmalators but the other group is desynchronized. This means one of \(r_{1}\) and \(r_{2}\) is \(1\) and the other one is strictly less than \(1\). We display one such occurrence of chimera state in Fig. 6. Snapshot at \(t=300\) time units with \(N=200\) swarmalators is shown in Fig. 
6(a) where the two groups arrange themselves in the \(x\)-\(y\) plane in the shape of non-overlapping half discs. Here \(r_{1}=1\) and \(r_{2}<1\). This is evident when we look at the phases of the swarmalators in Fig. 6(b). We plot the phases of the swarmalators against their indices where red and blue dots stand for groups one and two, respectively. We further study the nature of the chimera state. For a case study, we fix \(J_{1}=J_{2}=J_{3}=0.1\) and set \(K_{1}\) and \(K_{2}\) to \(-0.1\) and \(-0.2\), respectively. By careful investigation, we find that a chimera state exists for these parameter values when \(-0.56<K_{3}<-0.28\). In the chimera state, \(r_{1}\) stays fixed to \(1\) but \(r_{2}\) is always less than \(1\). Moreover, we observe oscillation in \(r_{2}\), which means it varies with time. So, the chimera we report in this work is _breathing chimera_. We establish this by drawing Fig. 7 where \(r_{2}\) is plotted as a function of time for various values of \(K_{3}\). It is to be noted that, with decreasing \(K_{3}\) the magnitude of \(r_{2}\) keeps increasing. Eventually around \(K_{3}\approx-0.57\), \(r_{2}\) goes to \(1\), which is the anti-phase sync state. ## VII Conclusion The phase-dependent spatial aggregation and position-dependent phase synchronization are at the core of swarmalator dynamics. Swarmalators endowed with spatial and phase interactions are competent to exhibit complex collective behaviors. These states can be found in real-world systems like Japanese tree frogs [24], magnetic domain walls [51], Janus matchsticks [52], robotic swarms [53; 54] etc. To this end, studies are being carried out on swarmalator models by defining suitable interaction functions, network structures, cou Figure 6: **Chimera state.** One of the communities is completely phase synchronized and the presence of asynchrony is found in the other community. Simulation parameters: \(J_{1}=J_{2}=J_{3}=0.1\). \(K_{1}=-0.1\), \(K_{2}=-0.2\), \(K_{3}=-0.4\), \((dt,T,N)=(0.01,5000,200)\). (a) Snapshot at \(T=5000\) time units demonstrating the chimera state. (b) Snapshots of the swarmalators’ phases are plotted against their indices. The red and blue dots refer to the first and second communities, respectively. Swarmalators are synchronized in the first community (red dots) but desynchronized in the second one (blue dots). Figure 7: **Breathing chimera**. We delineate the breathing nature of the chimera state. Parameter values used: \(J_{1}=J_{2}=J_{3}=0.1\). \(K_{1}=-0.1,K_{2}=-0.2\). Simulation parameters \((dt,T,N)=(0.01,2000,200)\). The order parameter \(r_{1}\) measuring the phase coherence among the swarmalators in the first community acquires the value \(1\). But \(r_{2}\) is less than \(1\) due to the presence of asynchrony in the second community. The temporal evolution of \(r_{2}\) is plotted for several \(K_{3}\) values, \(K_{3}=-0.3\) (blue), \(-0.35\) (red), \(-0.4\) (yellow), and \(-0.57\) (magenta). We observe oscillatory behavior of \(r_{2}\) which reveals the breathing nature of the chimera state. The magnitude of \(r_{2}\) increases and the oscillation decays with decreasing \(K_{3}\) until it reaches the maximum value \(1\) where the oscillation completely dies. pling schemes, etc. (We refer the reader to [27] for a recent review on swarmalator systems.) In this article, we have studied a population of swarmalators where they are distributed in two communities. The intra and inter-community coupling strengths have been carefully varied to observe different emerging states. 
Two of them, viz., the anti-phase sync and the chimera state are not commonly observed in swarmalator systems and to the best of our knowledge, have not been studied rigorously (the anti-phase state has been reported previously in [37; 39] and chimera like states were observed in [55]). The novelty of our work lies in the fact that we have found an anti-phase sync state with all the intra and inter-phase coupling strengths being negative. It can be inferred that the imposed community structure is responsible for this. The chimera state encountered is also due to the interplay between swarmalators belonging to different communities. Although we were not able to provide any mathematical formulation for the chimera state, our model still can be used as a testbed for future works on chimera states in swarmalator systems. We have also conspicuously illustrated the phase transitions by varying the inter-community phase coupling strength \(K_{3}\). The emerging states are characterized in terms of some order parameters. Anti-phase sync state is perceived for a sufficiently small (negative) value of \(K_{3}\) and the sync state is detected for a positive large value of it. We study these two states in detail and provide semi-analytical conditions for achieving these states. We also study the different routes from the anti-phase sync state to the sync state by assuming that the two communities are identical to start with. Moreover, we have established our results when the parameters \(J_{3}\) and \(K_{3}\) are varied simultaneously. We can highlight the limitation of our work by pointing at the inability to explicitly incorporate the spatial dynamics in the analysis of the anti-phase sync and static sync state. This might be wiped out if some simpler type of spatial interaction functions is used other than the power laws used in our model. It also remains to be seen what happens when more than two communities are considered. The model can be simplified by reducing the spatial dimension placing the swarmalators on a ring and then imposing the community interactions. Future works can also be carried out with our model by considering nonidentical swarmalators by drawing frequencies from Gaussian or Lorentzian distributions. Through preliminary inspection, we observed that some of the emerging states that we reported here (static async, static sync) will have their analogous counterparts for nonidentical swarmalators. But for the existence of other states like anti-phase, chimera, etc., a deep and systematic investigation is required. ### Data availability statement The data that support the findings of this study are openly available in the GitHub repository [56]. ## Appendix A Unequal community sizes In the main text of our paper, we discussed the case where the communities are of equal size and studied different states. Here, we cover the scenario where the two communities have unequal sizes. The total population size is \(N\). These swarmalators are distributed in two communities. Let \(p_{1}\) and \(p_{2}\) denote the probabilities that the \(i\)-th swarmalator belongs to the first, and second communities, respectively. Clearly, \(p_{1}+p_{2}=1\). For equal community sizes, \(p_{1}\) is essentially equal to \(p_{2}\). Here, we take \(p_{1}\neq p_{2}\) so that the communities are unequal in size. We study two cases, one where \(p_{1}=0.6\) and the other one \(p_{1}=0.7\). The parameter values are \(J_{1}=J_{2}=J_{3}=0.1\), \(K_{1}=-0.1\), and \(K_{3}=-0.2\), the same values which were used in Sec. 
III.2. In both cases, what we observe that the same qualitative behavior of all the order parameters. As a result, the emerging states remain unaltered. In Fig. 8, we have shown the phase transition. In the case of equal population sizes, the order parameter \(r\) is approximately zero in anti-phase sync. Due to an equal number of swarmalators in each group, the terms within the summation in Eq. (3) nullify each other. But if we choose unequal sub-populations, \(r\) has a non-zero value Figure 8: **Phase transition with unequal community sizes.** Simulations are performed for a total population of \(N=200\) swarmalators. (a) \(p_{1}=0.6\) and (b) \(p_{1}=0.7\). Other parameter values used are \(J_{1}=J_{2}=J_{3}=0.1\) and \(K_{1}=-0.1,K_{2}=-0.2\). We observe the same states as in Fig. 3 with equal community sizes. It establishes the fact that our reported results are robust and independent of the initial distribution of swarmalators in the communities. depending on the ratio of swarmalators. ## Appendix B \(J_{1}\neq J_{2}\) We study the case where the \(J\)'s (phase-dependent spatial coupling strengths among communities) are not equal i.e., \(J_{1}\neq J_{2}\). For instance, we take \(J_{1}=0.1,J_{2}=0.5\). \(K_{1}\) and \(K_{2}\) are kept fixed at \(-0.1\) and \(-0.2\), respectively. The resulting behavior is demonstrated through Fig. 9. The overall collective states remain the same. It can be observed if we compare Fig. 9 with Fig. 2 (where \(J_{1}=J_{2}\)). ## Appendix C Nonidentical swarmalators For our study, we have considered swarmalators with identical frequencies in both the communities, i.e., \(\omega_{i}=\omega\) for \(i=1,2,\ldots,N\), and it is further set to zero by moving to a proper reference frame. Here, we draw the frequencies from the Gaussian distribution with zero mean and unit standard deviation to make the nonidentical swarmalators. We observe that the sync state takes place for a larger inter-community phase coupling strength \(K_{3}\) compared to identical swarmalators. The phases never become static and keep evolving with time which is seen via Fig. 10(a)-(c) where snapshots are taken at different time units. For small coupling strength \(K_{3}\), the async state is realized. Here also, the phases are non-stationary. In Fig. 10(d)-(f), snapshots of the async state are shown at \(T=2000,3500\), and \(5000\) time units, respectively. However, we were unable to detect the emergence of anti-phase and chimera states. A rigorous study through minute exploration of the parameters is needed when one considers nonidentical swarmalators.
2309.15054
Near Real-Time Position Tracking for Robot-Guided Evacuation
During the evacuation of a building, the rapid and accurate tracking of human evacuees can be used by a guide robot to increase the effectiveness of the evacuation [1],[2]. This paper introduces a near real-time human position tracking solution tailored for evacuation robots. Using a pose detector, our system first identifies human joints in the camera frame in near real-time and then translates the position of these pixels into real-world coordinates via a simple calibration process. We run multiple trials of the system in action in an indoor lab environment and show that the system can achieve an accuracy of 0.55 meters when compared to ground truth. The system can also achieve an average of 3 frames per second (FPS) which was sufficient for our study on robot-guided human evacuation. The potential of our approach extends beyond mere tracking, paving the way for evacuee motion prediction, allowing the robot to proactively respond to human movements during an evacuation.
Mollik Nayyar, Alan Wagner
2023-09-26T16:34:18Z
http://arxiv.org/abs/2309.15054v1
# Near Real-Time Position Tracking for Robot-Guided Evacuation ###### Abstract During the evacuation of a building, the rapid and accurate tracking of human evacuees can be used by a guide robot to increase the effectiveness of the evacuation [1, 2]. This paper introduces a near real-time human position tracking solution tailored for evacuation robots. Using a pose detector, our system first identifies human joints in the camera frame in near real-time and then translates the position of these pixels into real-world coordinates via a simple calibration process. We run multiple trials of the system in action in an indoor lab environment and show that the system can achieve an accuracy of 0.55 meters when compared to ground truth. The system can also achieve an average of 3 frames per second (FPS) which was sufficient for our study on robot-guided human evacuation. The potential of our approach extends beyond mere tracking, paving the way for evacuee motion prediction, allowing the robot to proactively respond to human movements during an evacuation. ## I Introduction There are many factors that influence how people evacuate. Debris or lack of visibility may hinder their ability to move to an exit. Injuries or disabilities may prevent them from using certain exits. And disorientation or confusion may increase the hesitancy to evacuate. The most common problem with evacuees is simply that they do not evacuate when an alarm sounds [3]. Hesitancy to evacuate may prove fatal because during a real emergency the time to reach safety may be limited and existing escape routes may become congested. It has also been observed that the onset of an emergency typically causes uncertainty in the people nearby [4, 5]. Depending on the type of emergency, people may not evacuate at all, they may freeze and remain motionless, or become compliant blindly following any instructions they encounter. During an evacuation, the behavior of other evacuees nearby is often a determining factor impacting when and how quickly a person evacuates [5, 1]. Research using video from real emergency evacuations has shown, however, that having a guide during an evacuation significantly reduces the delay people take prior to evacuating [3, 6, 7]. We believe that a robot that acts as a guide can reduce the time required to evacuate and thereby save lives. For a robot to guide people to an exit, it must be able to track the person in near real-time. We seek to use robots to guide evacuees to uncongested exits during an evacuation. A successful evacuation robot will need to keep pace with an evacuee or evacuees and will need to be able to track the position of the people in its local environment. Such tracking will need to be near real-time for the robot to make timely navigation decisions and for it to be able to discern if an evacuee has stopped following it. If robots are going to serve as evacuation guides, then they will likely need to use the existing available camera infrastructure. For buildings such as schools and high-rise residences, locations that might be best served by an evacuation robot, the potential cost of adding a new camera or motion tracking system would inhibit the use of evacuation robots. On the other hand, many buildings have an existing security or surveillance camera system. If a standard resolution security camera system could be used to provide perceptual information, then the adoption of robots for evacuation is more affordable and therefore more likely. 
For this reason, our work has focused on the development of perceptual techniques that could use the existing camera infrastructure to provide 1-3Hz centralized evacuee position information. If such a system could be developed, then the need for expensive, fast perceptual processing on the robot would be reduced. Our work thus focuses on a computationally efficient, vision-based tracking system, that uses static cameras located indoors and can be realized using off-the-shelf components. The methodology presented here relies on open-source pose detection models and a simple calibration process that does not need camera intrinsic matrix or distortion model. The system can work with low resolution cameras that may already be available in many buildings and does not require large datasets to be collected for any new environment. A small set of calibration images are sufficient to generate the camera to world space model. The lower accuracy of our system compared to traditional motion capture systems is compensated for by its simplicity, affordability, and general applicability. In the next section, we first discuss the position estimation system and the methodology used for generating position estimates. Then we present the modifications needed for making it near real-time and finally, we present results and conclusions on the accuracy of the system operating in a new environment. ## II Indoor Position Estimation System Before we discuss the near real-time system, we first present a general camera-based position estimation system. This system was originally developed for a physical robot-guided evacuation experiment [2]. The experiment consisted of running a total of 106 subjects in individual and group conditions with a robot available for guidance during an emergency. The subjects were asked to perform a reading task and were not informed about the emergency. During the task, three fire alarms placed in the environment were activated unbeknownst to the evacuees. The objective was to observe and collect data on evacuee behavior during a robot-guided evacuation. Evacuee motion data was then used to create a model of evacuee behavior during the evacuation. A system of four cameras were used to cover the entire space and videos of evacuee motion were collected for post processing and model creation. The following subsections discuss the pose detection and camera calibration steps in more detail. ### _Human Pose Detection_ Human poses in the environment were extracted using an open source deep learning library called AlphaPose [8]. The specific AlphaPose model version used was the YoloV3 model [9] with a ResNet152 backbone trained on the COCO Keypoint 2017 dataset with 17 body keypoints. This model was used for evacuee pose detection on 640x480 resolution videos collected during the experiment. The left ankle keypoint was used to determine the location of the subject in the environment. Figure 1 shows an example of the pose detections of one of the subjects during the experiment. A different YoloV3 model was trained to detect the robot in the environment to extract the bounding box locations of the robot. The bounding box was then converted to a pixel point in the camera frame which was then used to estimate the robot's position using the camera model. 
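To make the localization step concrete, the sketch below shows how a single ground-contact pixel might be pulled out of one pose detection and one robot bounding box. It is an illustration only: the flat [x, y, score, ...] keypoint layout and the COCO-17 ordering (index 15 = left ankle) are assumptions about the detector output, not code from the study.

```python
# Illustrative sketch (not the authors' code): reduce detections to the single pixel
# used for position estimation. Assumes a flat [x1, y1, s1, x2, y2, s2, ...] keypoint
# list in COCO-17 order, where index 15 is the left ankle.
LEFT_ANKLE = 15

def ankle_pixel(person_result, min_score=0.3):
    """Return (x, y) of the left ankle, or None if the keypoint confidence is too low."""
    kp = person_result["keypoints"]
    x, y, score = kp[3 * LEFT_ANKLE: 3 * LEFT_ANKLE + 3]
    return (x, y) if score >= min_score else None

def robot_pixel(bbox):
    """Reduce a robot bounding box (x_min, y_min, x_max, y_max) to the midpoint of its
    bottom edge, i.e., the approximate ground-contact pixel."""
    x_min, y_min, x_max, y_max = bbox
    return ((x_min + x_max) / 2.0, y_max)

# Hypothetical example detection
person = {"keypoints": [0.0] * 51}                   # 17 keypoints * (x, y, score)
person["keypoints"][45:48] = [310.0, 420.0, 0.91]    # left ankle seen at pixel (310, 420)
print(ankle_pixel(person))                           # -> (310.0, 420.0)
print(robot_pixel((250, 300, 330, 410)))             # -> (290.0, 410)
```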
### _Camera - World Space conversion_ To convert the pixels of the detected keypoints to world space, calibration images were collected in the environment at known ground truth distances from the cameras and the AlphaPose model was used to obtain the keypoint locations in pixels. These distance-pixel calibrations were used to create a polynomial model relating the image frame y-axis pixel location to the distance from the camera. For the X-axis (or horizontal distance in the image frame), the width of a calibration object in pixels and inches taken at different y-axis locations was used to create a model for the X direction. Combining the two, we obtained the coordinates of the subject with respect to the camera. This was then transformed into world space coordinates by incorporating the camera's world space coordinates. This conversion was performed for each of the cameras to obtain a global track of the subject in the environment as shown in Figure 2. An example of the camera model is shown in Figure 3. For some cameras, the positive \(X\) and \(Y\) axes of the camera had a different alignment with the positive \(X\) axis and \(Y\) axis in the world space. This change in orientation of the cameras was accounted for before making the relevant coordinate transformations for the subject positions. Finally, the tracks of both the evacuee and the robot were used to train an autoregressive motion prediction model that used the past positions of the evacuee and the robot to predict the position of the evacuee 0.25 seconds in the future. The actual and predicted tracks are shown in the Fig. 4. ## III Near Real-Time Setup For effective robot-guided evacuations the position estimates must be available to the robot in near real-time. One of the limitations of the system discussed thus far was the need for post-processing of the videos. This prompted the need to develop a near real-time system of position estimation. For us, near real-time is approximately 1-2 HZ. Additionally, Fig. 1: The figure shows pose output from the AlphaPose model of a single subject following the robot. The different keypoints representing the body locations can be seen. The ankle keypoints were used for estimating the subject’s position in the environment. Fig. 3: The figure shows the pixel to distance mapping for the y-axis of the camera frame. Fig. 2: The figure shows a global track of a single subject in the shepherding condition. The different colors correspond to the tracks generated from the different cameras placed in the environment. since it was not possible to obtain ground truth tracks of the subjects during the physical experiment, the accuracy of the position estimates could not be determined. To address these concerns, we decided to set up a similar system in a new environment and incorporate ground truth accuracy measurements in a near real-time position estimation setting. The subsequent sections discuss the near real-time setup used for this work and the results from the trial runs of the system. There are two essential elements to a real-time position tracking system. First, the ability to acquire camera frames from multiple cameras quickly and second, performing computationally efficient image processing on the frames for detection of humans and extracting position estimates. To ensure that the system does not get overloaded with new frames, older frames may need to be dropped to perform image processing on new frames as soon as they are available. 
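Before turning to the frame transport, the camera-to-world conversion described in Sec. II can be sketched as follows. All calibration numbers, the polynomial degrees, and the camera pose in this snippet are illustrative placeholders, not the calibration actually used in the experiment.

```python
import numpy as np

# Illustrative sketch of the pixel-to-world conversion: (i) a polynomial fit of
# y-pixel -> forward distance from calibration images, and (ii) a y-dependent
# meters-per-pixel scale for the horizontal offset. All values below are made up.
y_pix_cal  = np.array([460, 400, 350, 310, 280, 258])        # ankle y-pixel in calibration images
distance_m = np.array([1.0, 1.5, 2.0, 2.5, 3.0, 3.5])        # known distances from the camera (m)
m_per_px   = np.array([0.004, 0.006, 0.008, 0.010, 0.012, 0.014])  # horizontal scale at those rows

dist_model  = np.polynomial.Polynomial.fit(y_pix_cal, distance_m, deg=2)
scale_model = np.polynomial.Polynomial.fit(y_pix_cal, m_per_px, deg=1)

def pixel_to_world(px, py, cam_xy=(0.0, 0.0), cam_yaw=0.0, img_width=640):
    """Map a pixel (px, py) to world (X, Y) for a camera at cam_xy with heading cam_yaw."""
    forward = dist_model(py)                              # distance along the camera axis (m)
    lateral = (px - img_width / 2.0) * scale_model(py)    # offset from the optical axis (m)
    c, s = np.cos(cam_yaw), np.sin(cam_yaw)               # rotate into world axes, add camera position
    return cam_xy[0] + c * forward - s * lateral, cam_xy[1] + s * forward + c * lateral

print(pixel_to_world(310, 420, cam_xy=(1.2, 0.0), cam_yaw=np.pi / 2))
```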
To achieve this, we use real-time acquisition and transmission of frames using the 'imagezmq' image transport library [10], a lightweight peer to peer message passing library built on ZMQ and its PyZMQ bindings. Similar to data messages in ROS (Robot operating system), imagezmq is also a message passing protocol but is optimized for speed and ease of use for opencv images. It provides a couple of message passing protocols namely REQ/REP and PUB/SUB. * In the REQ/REP messaging protocol, each image sender must a REPLY before continuing. This is a blocking protocol. * In PUB/SUB, each image sender sends an image, but does not expect a REPLY from the central image hub. It can continue sending images without waiting for an acknowledgement from the image hub. This is a non-blocking protocol. ROS is built on a PUB/SUB protocol by default (it also provides a REQ/REP system using services), however, ROS requires all systems on the network to be able to operate ROS nodes and connect to a singular ROS master. On the other hand, imagezmq operates as python library with minimal dependencies and does not rely on a message broker system like rosmaster and instead uses much more efficient peer to peer system without any additional process overhead. Additionally, it allows setting up multiple camera sources on a local network. ### _Hardware setup_ Like the system mentioned in the previous section, we used two Raspberry Pis as the client image sender systems with a Raspberry Pi camera as the indoor surveillance camera. The Raspberry Pi's were connected to a local network and were configured to start sending frames to a ground station on a static IP address once the frames start getting captured. The system initializes the cameras, establishes a connection to the ground station and then starts sending the unprocessed camera frames. However, since the system is set up in a REQ/REP protocol, the Raspberry Pis will only send a frame if they receive a reply from the ground station that the previous frame was received. This allows the ground station to perform image processing on an image frame and then send the reply to the cameras for the next one ensuring that processing is performed on the latest frames and does not queue up old frames in a buffer. ### _Pose Estimation_ For this work, a new and light weight model was used for pose detection. Poses were extracted using an open-source deep learning model called YOLOv7 [11]. Typically, Yolo models are well known for their fast object detection but for this work YOLOv7 model's pose estimation pipeline was used. It is trained on the COCO Keypoint 2017 dataset with 17 body keypoints which is consistent with the AlphaPose model used previously. ## IV Result The near real-time position tracking system described in this paper was used to track the positions of a subject in the environment. 20 trials of a subject's motion were collected with different motions in each track. Environmental markers were used to extract the ground truth track of the subject in each trial. A grid of 2ft x 2ft cells was marked on the environment floor. Each cell represented the world space coordinates of the center of the cell. The subject moved and stood over position marked cells during the trials. The ground truth positions of the subject's motion were extracted from the cells that the subject stood over during the trials. The Fig. 5 shows the layout of the environment and the locations of the cameras. 
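Referring back to the acquisition pipeline of Sec. III, the blocking REQ/REP transport can be sketched in a few lines with imagezmq. The host address, port, and camera settings below are placeholders; the two functions would run on the Raspberry Pi clients and on the ground station, respectively.

```python
import socket
import cv2
import imagezmq

def run_camera_client(ground_station="tcp://192.168.1.10:5555"):
    """Runs on each Raspberry Pi: capture a frame, send it, then wait for the reply."""
    sender = imagezmq.ImageSender(connect_to=ground_station)
    cam_name = socket.gethostname()
    cap = cv2.VideoCapture(0)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # send_image() blocks until the hub replies, so stale frames never queue up on the Pi
        sender.send_image(cam_name, frame)

def run_ground_station():
    """Runs on the ground station: receive the newest frame, process it, then release the sender."""
    hub = imagezmq.ImageHub()              # binds tcp://*:5555 by default
    while True:
        cam_name, frame = hub.recv_image()
        # ... pose detection and pixel-to-world conversion would go here ...
        hub.send_reply(b"OK")              # the reply allows that camera to send its next frame
```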
The actual track and the estimated positions from the tracking system are presented in Fig. 7. Due to noise and erroneous detections in the pose detection model inference step, values that fell beyond 1.5 times the Inter-Quartile Range (IQR) were filtered out. The aggregated mean and standard deviation of the estimated position error across all 20 trials are (M=0.556, SD=0.069) meters, with a minimum error of 0.43 meters and a maximum error of 0.71 meters. The average FPS across all 20 trials was (M=3.063, SD=0.238), with a minimum of 2.6 and a maximum of 3.45. Fig. 4: The figure shows the actual and the predicted track of one of the subjects during the physical experiment. The distances on the axes are in meters. ## V Conclusions In this paper, we demonstrate a near real-time position tracking system using inexpensive, off-the-shelf components. The results suggest that the system can achieve an accuracy of 0.55 meters at an average frame rate of 3 frames per second with a two-camera setup. With this system, tracking multiple individuals can become a challenge, as small errors in detection and noise make it hard to distinguish the positions of two or more closely situated individuals. Such erroneous detections can poison the robot's estimate of the number of people and their positions. Additionally, a faster frame rate may be desired in some applications. Any process that causes the ground station system to lag will create inconsistent position stamps, thereby decreasing the frame rate even further. Despite some of these limitations, the system was capable of detecting the location of the human subject in the environment, and we observe in Fig. 7 that the detected tracks closely follow the ground truth track. By sequentially tracking the real-time positions of evacuees, the data can be harnessed to create a predictive model such as the one used in [2]. Such models could perhaps forecast potential evacuee trajectories, enabling the robot to not only follow but anticipate human movements. In doing so, the robot can strategize its actions more effectively, ensuring a smoother, safer evacuation process. ## Acknowledgment This material is based upon work supported by the National Science Foundation under Grant Numbers CNS-1830390 and IIS-2045146. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.
2309.13431
Three-dimensional graphene on a nano-porous 4H-SiC backbone: a novel material for food sensing applications
Sensors which are sensitive to volatile organic compounds and thus able to monitor the conservation state of food are precious because they work non-destructively and make it possible to avoid direct contact with the food, ensuring hygienic conditions. In particular, the monitoring of rancidity would solve a widespread issue in food storage. The sensor discussed here is produced utilizing a novel three-dimensional arrangement of graphene, which is grown on a crystalline silicon carbide (SiC) wafer previously porousified by chemical etching. This approach allows a very high surface-to-volume ratio. Furthermore, the structure of the sensor surface features a large amount of edges, dangling bonds, and active sites, which make the sensor, on a chemically robust skeleton, chemically active, particularly to hydrogenated molecules. The interaction of the sensor with such compounds is read out by measuring the sensor resistance in a four-wire configuration. The sensor performance has been assessed on three hazelnut samples: sound hazelnuts, spoiled hazelnuts, and stink bug hazelnuts. A resistance variation of about DeltaR = 0.13 +/- 0.02 Ohm between sound and damaged hazelnuts has been detected. Our measurements confirm the ability of the sensor to discriminate between sound and damaged hazelnuts. The sensor signal is stable for days, providing the possibility to use this sensor for the monitoring of the storage state of fats and foods in general.
Stefano Veronesi, Ylea Vlamidis, Letizia Ferbel, Carmela Marinelli, Chiara Sanmartin, Isabella Taglieri, Georg Pfusterschmied, Markus Leitgeb, Ulrich Schmid, Fabio Mencarelli, Stefan Heun
2023-09-23T17:08:33Z
http://arxiv.org/abs/2309.13431v1
Three-dimensional graphene on a nano-porous 4H-SiC backbone: a novel material for food sensing applications ###### Abstract BACKGROUND: Sensors which are sensitive to volatile organic compounds and thus able to monitor the conservation state of food, are precious because they work non-destructively and allow to avoid direct contact with the food, ensuring hygienic conditions. In particular, the monitoring of rancidity would solve a widespread issue in food storage. RESULTS: The sensor discussed here is produced utilizing a novel three-dimensional arrangement of graphene, which is grown on a crystalline silicon carbide (SiC) wafer previously porousified by chemical etching. This approach allows a very high surface-to-volume ratio. Furthermore, the structure of the sensor surface features a large amount of edges, dangling bounds, and active sites, which make the sensor, on a chemically robust skeleton, chemically active, particularly to hydrogenated molecules. The interaction of the sensor with such compounds is read out by measuring the sensor resistance in a four wire configuration. The sensor performance has been assessed on three hazelnut samples: sound hazelnuts, spoiled hazelnuts, and stink bug hazelnuts. A resistance variation of about \(\Delta R=0.13\pm 0.02\)\(\Omega\) between sound and damaged hazelnuts has been detected. CONCLUSIONS: Our measurements confirm the ability of the sensor to discriminate between sound and damaged hazelnuts. The sensor signal is stable for days, providing the possibility to use this sensor for the monitoring of the storage state of fats and foods in general. ## 1 Introduction Food traceability, quality control, and contamination issues represent hot topics to improve food production, distribution, and consumption. The possibility to follow the preservation state of food, in order to ensure the best quality of the products, minimize food losses, and take care of the consumers health is the driving force for a large body of research work. Indeed, a proper monitoring of the food conservation state has a strong impact on both the health of consumers and the food waste issue. This issue has been included within the 17 goals of the United Nations (UN) Sustainable Development, in particular in the goal 12 [1]. The UN estimates that more than 13% of the food is lost from farm to processing, and a further 17% at the consumer level. Moreover, this trend is unchanged from 2016 to 2021, far from the target to reducing losses by 50% by 2030. Besides, waste of food produces an additional contribution to the global warming, as well. Research is strongly involved in the effort to mitigate waste of food and to protect the health of consumers, developing new and improved sensors and studying intelligent packaging. Nevertheless, the use of food sensors remains limited. Importantly, the development of sensors able to monitor the degradation of fats present in food, both from natural occurrence or added during the food processing, is in an early stage despite the large number of possible applications. Volatile Organic Compounds (VOCs) are responsible of flavors and aromas of plants and fruits, and their oxidative processes are related to an alteration of taste and odor [2]. In addition, they are a by-product of the fat oxidation/degradation that can occur during the food lifetime, resulting in changes in the sensory perception of the product. Therefore, a way to monitor the degradation of fats in food is to detect VOCs. 
The efficiency of detection is crucial for a VOCs sensor. High detection efficiency can be obtained by maximizing the probability that a target analyte meets an active site of the detector and interacts with it. This goal can be achieved by increasing the efficiency of the active sites and/or by maximizing the number of active sites per unit of area and/or by increasing the useful sensor surface. Therefore, the availability of materials with a large surface-to-volume ratio represents a benefit. A largely utilized platform in developing detectors is graphene, which has been used to implement optoelectronic applications [3], wearable electronics [4, 5], sensors [5, 6, 7, 8, 9, 10], and biosensors [11]. The outstanding properties of graphene can be further tailored by chemical functionalization. However, in many fields such as catalysis [12], supercapacitors [13, 14], water filtration [15, 16], and drug delivery [17], a three-dimensional structure increases the surface above average compare to the 2D counterparts. Three-dimensional structures are also used to realize high performance electrodes [18, 19], gas detection sensors [20], and battery cathodes [21]. A three-dimensional arrangement of graphene combines the outstanding properties of graphene with the requirement of a large active surface area in developing high sensitivity detectors. The sensor presented here is based on a novel three-dimensional graphene arrangement (3DG in the following) that was recently developed [22]. 3DG samples are realized via growth of epitaxial graphene on the Si-face of a 4HSiC wafer that has been previously porousified via photoelectrochemical etching [23, 24, 25]. This material has already been utilized for hydrogen storage purposes and demonstrated catalytic properties, which allowed to chemisorb, for the first time in a pristine graphene material, hydrogen atoms starting from molecules [26]. Hazelnut (Corylus avellana L.) is a dried fruit largely considered throughout the world as relevant raw material for chocolate, confectionery and bakery industries [27]. Turkey, Italy, Spain, and the USA are the most important producers (FAO, 2006), even if new producers, located in the Southern Hemisphere, like Australia and Chile, are emerging [28, 29]. As 90% of the production is processed, the commercial quality of hazelnuts is mainly determined by the requirements of the confectionery industry [30] and, in this sense, good analytical methods for quality sorting and provenance identification are required. As with most dried fruits, the lipid matrix is very sensitive to chemical changes which affect the qualitative characteristics and for this reason techniques of storage have been proposed in recent years [31, 32]. The research of non-destructive sensors to discriminate hazelnuts has been performed for a long time [33]. Unfortunately, no reliable results to apply on line and on time have been reached. Here, a sensor from 3DG has been used to perform measurements on a blank sample and on three further samples, i.e., sound hazelnuts, spoiled hazelnuts, and stink bug hazelnuts, respectively. The sensor resistance \(R_{\text{s}}\) is used as a sensitive and stable signal. A specific surface chemical functionalization to improve sensor performance and selectivity is discussed. 
## 2 Sensor fabrication The starting material used for sensor fabrication are Nitrogen-doped 4H-SiC wafers with a thickness of 350 \(\mu\)m and a bulk resistivity of 0.106 \(\Omega\)cm, oriented 4' off-axis with respect to the (0001) basal plane, corresponding to the Si-face. The wafers are porousified through a metal-assisted photochemical etching (MAPCE), followed by a photo-electrochemical etching (PECE), according to the procedure described in Ref. [24]. The main steps of the porousification process are schematically reported in **Figure 1**(a). The porousified wafers are cut into pieces with dimensions 2 mm x 7 mm, which have then been utilized for the epitaxial graphene growth. The graphene growth is performed via SiC thermal decomposition in an ultra high vacuum (UHV) environment (see Figure 1(b)) at a base pressure < 1 x 10\({}^{-10}\) mbar, annealing the sample at about 1370C for 3 minutes. The growth procedure is reported in detail in Ref. [22]. After the growth, the graphene quality has been verified by Raman spectroscopy, reported in Figure 1(c). The 2D Raman peak has a FWHM of 54 cm\({}^{-1}\), slightly larger than expected for monolayer graphene. The ratio I(2D)/I(G) is 0.86, lower than the usual value for epitaxial graphene. As shown and discussed in detail in Ref. [22], both FWHM and I(2D)/I(G) values are mainly due to the strain-doping effect [34; 35; 36] of the graphene grown on the porous layer of the 4H-SiC wafer. The presence of D and D' peaks indicates the presence of defects, mainly related to the reduced graphene grain dimensions in these porous structures. In order to assess the homogeneity and the quality of the graphene inside the porous structure, a few sacrificial samples have been cleaved, and cross sections were investigated by scanning electron microscopy (SEM) (see Figure 1(d)) and Raman spectroscopy, as well, as reported in Ref. [26]. The etching parameters produce a porous layer of about 20 \(\mu\)m thickness, as sketched in Figure 1(e). As a sensor signal, the variation of the resistance of the 3DG is measured in a four-wire (4W) configuration. An alternating current of \(I=1\)\(\mu\)A is supplied to the sensor while the voltage drop \(V_{4W}\) is measured with a lock-in amplifier. This technique allows a sensitive measurement of the sensor resistance \(R_{s}\)with negligible impact from the contact resistance. Figure 1: (a) Scheme at the MAPCE-PECE porousification process. (b) A porous 4H-SiC sample during the growth of epitaxial graphene. (c) Raman spectrum from the top surface of a 3D-graphene sensor. Shape and intensity ratio of the 2D and G peaks demonstrate the good quality of the graphene. (d) SEM image taken at 5 kV (beam current 11.7 pA) on the cross sectional edge of a 3DG sensor mechanically cleaved. The scale bar corresponds to 300 nm. (e) Sketch of a 3DG sensor. Before starting the evaluation of the sensor in detecting VOCs, its ability to respond to simpler physical stimuli has been verified. The first test of the sensor has been performed by illuminating it with a green laser. Upon illumination under UHV conditions, an increase in the current due to photoelectrons is detected, while the sensor resistance drops by 15%, as shown in **Figure 2**(a). We want to underline that the photon energy is lower than the SiC bandgap, therefore the absorption must be due to the graphene top layer. This demonstrates the ability of the sensor to detect a photocurrent. 
We add that light detection is not the focus of this work, and therefore we have not performed a calibration to quantitatively evaluate the sensor sensitivity as a function of wavelength. In a second measurement, the sensor was exposed to a flux of atomic hydrogen (exposure pressure 10-7 mbar) under UHV conditions. For details, see Ref. [26]. Hydrogen molecules are cracked with a Tectra hydrogen cracker via thermal dissociation of the hydrogen molecules on a hot tungsten tube at 1700 K. Therefore, the sensor is heated by the cracker, resulting in a signal even without hydrogen supply, as shown in Figure 2(b) (blank, blue line). This shows that the sensor is also an efficient thermometer. In the hydrogenation experiment, however, besides the thermal variation, a different signal dynamics is observed with respect to the blank, demonstrating the ability of the sensor to detect the hydrogen uptake, even at low exposure pressure. ## 3 3D-graphene as food storage state sensor Since VOCs have a relatively low molecular weight and a high vapor pressure, they are ideal targets for gas phase detection. The ability of the sensor to detect VOCs related to the degradation of the storage state of hazelnuts has been demonstrated utilizing three hazelnut batches. The first was made by perfectly preserved hazelnuts, the second by spoiled hazelnuts, and the third by stink bug hazelnuts. The experiments were performed in an air-tight glass container with the sensor mounted on the bottle cap. During experiments, the glass container is closed, to avoid exchange of air from the inside to the outside, and vice-versa. In the first series of experiments, the experimental protocol adopted to assess the sensor performance contemplated four measurements. The signal is acquired for a long time, some days, in order to understand the influence of the environment on the signal and to evaluate a possible saturation effect during the measurements. The first measurement is a blank experiment. The glass Figure 2: (a) Variation of the sensor resistance upon illumination with a green laser for 10 s. (b) Variation of the sensor resistance upon hydrogenation (red line). Blue line: blank experiment without hydrogen. container is empty and closed, and thus the sensor is just exposed to air. Next, we performed three measurements with sound, spoiled, and stink bug hazelnuts, respectively. An amount of hazelnuts corresponding to about 60% of the volume is introduced in the glass container. This first series of experiments has been performed in an air-conditioned room, without any further active control of the sensor temperature. The results of the complete series of measurements performed on blank, sound, spoiled, and stink bug hazelnuts are shown in **Figure 3**. As can be seen, the data seems to show a small increase in the sensor resistance exposed to sound hazelnuts, but the error bars of the sensor resistance exposed to different environments are so large that it is hard to draw further conclusions. Analyzing the data set more closely, the origin of these fluctuations becomes clear. In spite of the fact that the experiment is performed in an air-conditioned room, the residual daily temperature fluctuations induce a signal variation that is much larger than the single measurement accuracy. The effect is highlighted in **Figure 4** where the circadian temperature fluctuation of the laboratory is evident. Each point in the figure is an average over 15 minutes of acquisition, and the whole data set spans for about 48 h. 
The standard deviation of a single average is about 0.006 \(\Omega\) while the oscillation due to temperature fluctuations is around 0.045 \(\Omega\), nearly an order of magnitude larger. Figure 4 is a clear indication that the sensor temperature must be kept constant in order to avoid measurement fluctuations and to increase the overall accuracy. The ability of the sensor to detect a temperature variation was already shown in Figure 2(b) when the sensor was heated by the hydrogen cracker. Here, in the measurements with the hazelnuts, the parasitic, temperature-induced signal is greater than the target signal. Thus, an active feedback is Figure 3: Sensor resistance in a series of measurements performed for more than a month. The black points represent blanks, blue points refer to the sensor exposed to sound hazelnuts, green points to spoiled hazelnuts, and red points to stink bug hazelnuts. Error bar of each data point is the standard deviation of the respective measurement. Color lines are the average of each population, and blueish and greyish areas the standard deviation for healthy hazelnuts and blank, respectively. required to keep the sensor temperature constant. This is achieved with a resistive heater (50 \(\Omega\), 1 W), a K-type thermocouple which reads the sensor temperature, and a temperature controller (LakeShore Model 331). The transition from room temperature to the set point is shown in **Figure 5**. The sensor temperature was set to 40' C, and the long term stability is better than 0.07' C in 60 hours of acquisition. For comparison, the figure shows the readout variation induced by a temperature fluctuation of \(\pm\)0.5' C (red bar). Furthermore, the heater allows to be operated up to 200' C. This feature can be utilized to periodically clean and degas the sensor, if required. We add that the sensor material itself is stable in temperature up to at least 900' C, allowing to employ the sensor in a wide range of environmental conditions. After this, a second series of measurements has been performed, in the same configuration of those shown in Figure 3, but with the addition of the sensor temperature stabilization. The data set obtained from this second experiment is shown in **Figure 6**. Working at a constant sensor temperature has clearly improved sensor performance. Now the sensor resistance exposed to sound hazelnut is clearly greater than the blank resistance and that of the harmed hazelnut samples. The sensor resistance increases or decreases depending on the interaction with the target molecules. If in the interaction an electron is released, then the resistance decreases, while if during the interaction an electron is bound, the resistance increases. Therefore, as the oxidative processes change the VOCs composition, the sensor resistance changes consequently. These results are summarized in Table 1. Indeed, both spoiled and stink bug hazelnuts produce similar VOCs changes [37] and hence result in very similar sensor resistance. Therefore, this sensor shows, for the first time to the best of our knowledge, the ability to discriminate values between sound and damaged hazelnuts. Figure 4: Sensor resistance variation during a two day long acquisition. The main oscillation is due to the residual circadian temperature oscillation of the air-conditioned laboratory. Thick green line is the overall average, and the thinner green lines the related standard deviation at the border of the blueish area. 
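The ΔR row of Table 1 follows directly from the measured means and standard deviations; the small worked sketch below (uncertainties combined by simple addition, which reproduces the tabulated values) makes the comparison between the batches explicit.

```python
# Worked example using only the values quoted in Table 1 (resistances in Ohm).
samples = {
    "blank":     (9.860, 0.020),
    "healthy":   (9.992, 0.016),
    "spoiled":   (9.830, 0.030),
    "stink bug": (9.870, 0.040),
}

blank_R, blank_s = samples["blank"]
for name in ("healthy", "spoiled", "stink bug"):
    R, s = samples[name]
    # difference with respect to the blank; uncertainties summed as in Table 1
    print(f"{name:9s}: dR = {R - blank_R:+.3f} +/- {s + blank_s:.3f} Ohm")

# Separation between the healthy batch and the two damaged batches
for name in ("spoiled", "stink bug"):
    R, s = samples[name]
    gap = samples["healthy"][0] - R
    err = samples["healthy"][1] + s
    print(f"healthy - {name}: {gap:+.3f} +/- {err:.3f} Ohm")
```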
## 4 Discussion The detection of VOCs, in particular, allows the control of many degradation processes responsible for the rancidity of fat-containing foods. Nuts naturally contains a lot of VOCs such as alcohols, aldehydes, ketones, esters, and ethers [38]. The concentration of aldehydes and alcohols are commonly related to the oxidation of fatty acids [38] and can be utilized to monitor the deterioration of nuts. In particular, the rancid flavor is related to a pool of molecules whose most relevant compounds are hexanal and nonanal. The sensor that we have developed is able to discriminate between samples of healthy hazelnuts and samples in which degradation occurred. The sensor is sensitive to temperature, and we have shown that it requires a careful temperature stabilization to work properly. With this temperature stabilization, however, data fluctuation is dramatically reduced, allowing to clearly discriminate between healthy and harmed fruits. Even if in the present work the active temperature control has been performed with standard research laboratory equipment, it is easy to integrate an on-chip temperature control in future devices. Our results are promising and open the possibility to develop a sensitive device to monitor the storage state of hazelnuts, presumably working for different kinds of nuts. In the present work, we have developed a sensitive device to monitor the storage state of hazelnuts, which is able to monitor the storage state of hazelnuts, and we have shown that the temperature stabilization is dramatically reduced, allowing to clearly discriminate between healthy and harmed fruits. \begin{table} \begin{tabular}{|l|l l l l|} \hline & blank & healthy hazelnuts & spoiled hazelnuts & stink bug hazelnuts \\ \hline Measured sensor resistance (\(\Omega\)) & \(9.86\pm 0.02\) & \(9.992\pm 0.016\) & \(9.83\pm 0.03\) & \(9.87\pm 0.04\) \\ \hline \(\Delta\)R with respect to blank (\(\Omega\)) & 0 & \(0.132\pm 0.036\) & -\(0.03\pm 0.05\) & \(0.01\pm 0.06\) \\ \hline \end{tabular} \end{table} Table 1: Sensor resistance measured for the blank and the healthy, spoiled, and stink bug hazelnuts. Each value is the average of the corresponding data, and the reported error is their standard deviation. Figure 5: Variation in sensor temperature and resistance during 15 minutes, including the switch on of the temperature stabilization. The reddish area visualizes the effect of a \(\pm 0.5\)’C temperature fluctuation on the resistance readout. investigation, we have not performed a specific functionalization to increase the sensitivity and selectivity of the sensor. Indeed, the pristine sensor is already able to discriminate between healthy and damaged hazelnuts. The difference between the sensor resistance exposed to healthy fruits and to damaged hazelnuts is more than 6 times the standard deviation of the measurement, giving a high confidence to their discrimination. Moreover, the device architecture allows a reading of the sensor in real time and a demonstrated long term stability, desirable characteristics for the monitoring of stored hazelnuts during their industrial processing. The sensor gives an averaged evaluation of the stored nuts, and in principle a single sensor can monitor several containers if an appropriate gas sampling is provided. The sensor can be degassed any time the monitored container is changed, to avoid possible interference in the results. 
In order to quantitatively relate the sensor readout to the hexanal/nonanal concentration, a calibration with a known amount of the target molecules is necessary, also in the perspective of a specific functionalization. Work is in progress to compare the performance of the pristine material with metal-functionalized sensors. We have successfully loaded the porous matrix with gold and palladium nanoparticles, and experiments are ongoing to test and compare sensors with both types of functionalization. A further possibility is to modify the sensor surface with molecular receptors sensitive to the target molecules. The organic functionalization of graphene samples has already been obtained [39, 40], and the development of specific receptors will finally allow the production of sensors with a high degree of specificity.

Figure 6: Sensor resistance in a series of measurements spanning more than 20 days. The black points represent blanks, blue points refer to the sensor exposed to sound hazelnuts, green points to stink bug hazelnuts, and red points to spoiled hazelnuts.

## 5 Conclusion

We report the successful operation of a sensor able to monitor the conservation state of hazelnuts. The sensor is based on a novel material architecture and realized with a graphene layer epitaxially grown on a porousified crystalline SiC substrate, namely 4H-SiC(0001). The sensor resistance is determined with a lock-in technique in a four-wire configuration setup. The sensor operates at constant temperature, here 40 °C, to avoid the interference of ambient temperature fluctuations with the measurements. In a preliminary investigation, the sensor demonstrated the ability to discriminate between a sample of healthy hazelnuts and samples of harmed hazelnuts, with a high degree of confidence and a signal variation of more than 6 standard deviations between healthy and harmed nuts. This approach offers a good perspective to achieve a commercial device for the monitoring of the rancidity of food and to improve its preservation conditions. Work is in progress to test functionalized sensors and maximize their performance.

###### Acknowledgements.

The authors want to acknowledge Soremartec Italia (Ferrero Group) for providing the hazelnut samples and Dr. Valentina Zannier from CNR-Nano for the SEM measurement of the porous material.
2302.00059
NASiam: Efficient Representation Learning using Neural Architecture Search for Siamese Networks
Siamese networks are one of the most trending methods to achieve self-supervised visual representation learning (SSL). Since hand labeling is costly, SSL can play a crucial part by allowing deep learning to train on large unlabeled datasets. Meanwhile, Neural Architecture Search (NAS) is becoming increasingly important as a technique to discover novel deep learning architectures. However, early NAS methods based on reinforcement learning or evolutionary algorithms suffered from ludicrous computational and memory costs. In contrast, differentiable NAS, a gradient-based approach, has the advantage of being much more efficient and has thus retained most of the attention in the past few years. In this article, we present NASiam, a novel approach that uses for the first time differentiable NAS to improve the multilayer perceptron projector and predictor (encoder/predictor pair) architectures inside siamese-networks-based contrastive learning frameworks (e.g., SimCLR, SimSiam, and MoCo) while preserving the simplicity of previous baselines. We crafted a search space designed explicitly for multilayer perceptrons, inside which we explored several alternatives to the standard ReLU activation function. We show that these new architectures allow ResNet backbone convolutional models to learn strong representations efficiently. NASiam reaches competitive performance in both small-scale (i.e., CIFAR-10/CIFAR-100) and large-scale (i.e., ImageNet) image classification datasets while costing only a few GPU hours. We discuss the composition of the NAS-discovered architectures and emit hypotheses on why they manage to prevent collapsing behavior. Our code is available at https://github.com/aheuillet/NASiam.
Alexandre Heuillet, Hedi Tabia, Hichem Arioui
2023-01-31T19:48:37Z
http://arxiv.org/abs/2302.00059v1
# NASiam: Efficient Representation Learning using Neural Architecture Search for Siamese Networks ###### Abstract Siamese networks are one of the most trending methods to achieve self-supervised visual representation learning (SSL). Since hand labeling is costly, SSL can play a crucial part by allowing deep learning to train on large unlabeled datasets. Meanwhile, Neural Architecture Search (NAS) is becoming increasingly important as a technique to discover novel deep learning architectures. However, early NAS methods based on reinforcement learning or evolutionary algorithms suffered from ludicrous computational and memory costs. In contrast, differentiable NAS, a gradient-based approach, has the advantage of being much more efficient and has thus retained most of the attention in the past few years. In this article, we present NASiam, a novel approach that uses for the first time differentiable NAS to improve the multilayer perceptron projector and predictor (encoder/predictor pair) architectures inside siamese-networks-based contrastive learning frameworks (e.g., SimCLR, SimSiam, and MoCo) while preserving the simplicity of previous baselines. We crafted a search space designed explicitly for multilayer perceptrons, inside which we explored several alternatives to the standard ReLU activation function. We show that these new architectures allow ResNet backbone convolutional models to learn strong representations efficiently. NASiam reaches competitive performance in both small-scale (i.e., CIFAR-10/CIFAR-100) and large-scale (i.e., ImageNet) image classification datasets while costing only a few GPU hours. We discuss the composition of the NAS-discovered architectures and emit hypotheses on why they manage to prevent collapsing behavior. Our code is available on GitHub. Deep Learning, NAS, Self-Supervised Learning, Siamese Networks + Footnote †: This work was performed using HPC resources from GENCI-IDRIS (Grant 20XX-AD011012644). ## I Introduction Deep Learning (DL) has experienced rapid growth in the past few years. Two DL subfields have received much attention: Unsupervised Representation Learning and Automated Deep Learning (AutoDL). Unsupervised representation learning aims to make DL models learn strong representations from unlabeled data. This is especially useful when considering that data labeling is often a costly and laborious human-made process. One of the most common approaches to unsupervised visual representation learning is siamese networks [1]. Siamese networks consist of two weight-sharing branches (i.e., "twins") applied to two or more inputs. The output feature vectors of the two branches are compared to compute a loss (e.g., a "contrastive" loss). In the case of representation learning, the inputs are usually data augmentations of the same image, and the siamese networks seek to maximize the similarity between the output feature vectors of the two branches [2, 3, 4]. On the other hand, AutoDL tries to remove the human factor from the DL pipeline. Architecture design is one part of this pipeline that has proven particularly relevant to automate. Most DL architectures are handcrafted and lack the certainty of an optimal solution [5, 6, 7]. Neural Architecture Search (NAS) aims to solve this issue by using a meta-learner to search for neural network architectures relevant to a given task (e.g., image classification, semantic segmentation, or object detection). NAS algorithms efficiently browse large search spaces that would prove challenging to navigate manually. 
The first NAS works used reinforcement learning [8, 9] or evolutionary methods [10, 11] but proved particularly inefficient, with thousands of GPU days needed to obtain a competitive architecture (e.g., 2000 GPU days for NASNet [8]). Nowadays, most approaches use differentiable NAS [12, 13, 14] (i.e., gradient-based search process) as it requires far less computational resources and often yields better results. This article leverages differentiable NAS to discover encoder (projector) and predictor architectures (i.e., Multilayer Perceptrons) that enable backbone Convolutional Neural Networks (CNNs) to efficiently learn strong representations from unlabeled data. To the extent of our knowledge, this is the first time that NAS has been applied to enhance the architecture of Siamese networks. Thus, we improved the performance of several siamese network frameworks such as SimSiam [15], SimCLR [2], or MoCo [4], with an encoder-predictor pair discovered by a meta-learner inspired by DARTS [16], a popular differentiable NAS method. We dubbed our approach NASiam ("Neural Architecture Search for Siamese Networks"). We show that NASiam reaches competitive results on small-scale (CIFAR-10, CIFAR-100 [17]) and large-scale (ImageNet [18]) datasets. Thus, Section III highlights our main following contributions: * A novel way to design encoder/predictor pairs for siamese networks using differentiable neural architecture search. * A novel search space specifically designed for the Multi-Layer Perceptron (MLP) heads of encoder/predictor pairs. The rest of the article is structured as follows: Section II features a short survey on related differentiable NAS and siamese networks works. In Section III, the proposed method is presented. Section IV presents the results of different image classification experiments and showcases a discussion on the composition of the discovered encoder/predictor pair architectures, and Section V brings a conclusion to our work while giving some insights about future work. ## II Related Work This section briefly recalls related work in differentiable NAS, siamese networks for representation learning, and neural architecture search for contrastive learning. ### _Differentiable Neural Architecture Search_ One of the most trending differentiable NAS family of methods is derived from Differentiable ARchiTecture Search (DARTS) [16]. This method uses Stochastic Gradient Descent (SGD) to optimize a set of architectural parameters (denoted \(\alpha\)) that represent operations inside building blocks (i.e., elementary components of the network) called "cells". A cell \(C\) can be considered as a direct acyclic graph whose nodes represent states. Its edges are a mix of \(K\) different operations \(O=\{o_{1},...,o_{K}\}\) that define the search space \(S\). Multiple cells can be stacked up to form a global "supernet" that encompasses all candidate architectures. Furthermore, as part of a weight-sharing mechanism, DARTS only searches for two types of cells: _normal_ cells (i.e., cells that make up most of the network) and _reduction_ cells (i.e., cells that perform dimension reduction). The _reduction_ cells are typically positioned at the 1/3 and 2/3 of the supernet. As the SGD on \(\alpha\) occurs while the supernet containing all cells is being trained on a given dataset, DARTS is practically solving a bi-level optimization problem. 
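To make this continuous relaxation concrete, here is a minimal PyTorch sketch of a DARTS-style mixed edge; it is an illustration rather than the reference DARTS implementation, the class name `MixedOp` and the toy operation list are ours, and the softmax-weighted mixture corresponds to Eq. (1) just below.

```python
# Minimal sketch (not the official DARTS code) of the continuous relaxation used by
# differentiable NAS: each edge mixes K candidate operations with softmax-normalized
# architectural weights alpha, as formalized in Eq. (1) below.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixedOp(nn.Module):                      # hypothetical helper name
    def __init__(self, candidate_ops):
        super().__init__()
        self.ops = nn.ModuleList(candidate_ops)
        # one architectural weight per candidate operation on this edge
        self.alpha = nn.Parameter(torch.zeros(len(candidate_ops)))

    def forward(self, x):
        weights = F.softmax(self.alpha, dim=0)  # continuous relaxation of the choice
        return sum(w * op(x) for w, op in zip(weights, self.ops))

# toy usage: a 3-operation edge acting on 16-channel feature maps
ops = [nn.Conv2d(16, 16, 3, padding=1), nn.MaxPool2d(3, 1, 1), nn.Identity()]
edge = MixedOp(ops)
out = edge(torch.randn(2, 16, 8, 8))   # alpha and the supernet weights are then
                                       # optimized jointly (bi-level problem)
```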
Moreover, the \(\alpha\) weights are relaxed into a continuous form by discretization through a softmax [19] operation as follows: \[\overline{o}_{i,j}(x)=\sum_{k=1}^{K}\frac{exp(\alpha_{i,j}^{k})}{\sum_{k^{ \prime}=1}^{K}exp(\alpha_{i,j}^{k^{\prime}})}o_{k}(x) \tag{1}\] where \(\overline{o}_{i,j}(x)\) is the mixed output of edge \(e_{i,j}\) for input feature \(x\) and \(\alpha_{i,j}^{k}\in\alpha_{i,j}\) is the weight associated with operation \(o_{k}\in O\) for \(e_{i,j}\). Several works attempted to improve on DARTS. P-DARTS [3] significantly reduced the search time by progressively deepening the architecture when searching, leading to a better search space approximation and regularization. PC-DARTS [20] attempted to reduce DARTS' memory cost by sampling only a portion of the supernet to avoid redundancy in the search space exploration. FairDARTS [12] tried to solve two critical problems that occurred in DARTS, the over-representation of _skip_ connections and the uncertainty in the probability distribution of operations. To this end, the authors used the sigmoid function rather than softmax (see Eq. 1) and crafted a novel loss function that can push \(\alpha\) values towards 0 or 1. DARTS- [21] introduced auxiliary skip connections that are less prone to become dominant, thus ensuring fairer competition with the other operations. \(\beta\)-DARTS [13] introduced a new regularization method, called _Beta-Decay_, that prevents the architectural parameters from saturating. _Beta-Decay_ led to increased robustness and better generalization ability. Finally, D-DARTS [22] proposed a mechanism to distribute the search process to the cell level. This approach led to the individualization of each cell, thus increasing the diversity among the candidate architectures and expanding the search space. ### _Siamese Neural Networks_ Bromley et al. [1] first proposed the Siamese Neural Networks (SNNs) architecture as "twin" (i.e., identical and sharing the same weights) models that process two or more inputs and compare their outputs. The central intuition behind this concept is that comparing the output feature vectors will highlight the discrepancies between the inputs. Hence, this approach is advantageous in signature [1] or face [23] recognition applications. Another application of SNNs is unsupervised representation learning, also designated as Self-Supervised Learning (SSL). In particular, it is possible to learn representations from unlabeled data by feeding variations of the same input to twin Convolutional Neural Network (CNN) [24] models and computing the similarity between the output feature vectors. This similarity metric is used as a loss function, leading the SNNs to learn robust representations (i.e., resisting disturbance in the input data). This process is denoted as contrastive unsupervised learning. Momentum Contrast (MoCo) [25] pre-trains a CNN using unsupervised learning with a momentum encoder and fine-tunes its classifier head on standard supervised linear classification. The authors of MoCo show that the unsupervised pre-trained approach can surpass standard CNN on multiple ImageNet [18] computer vision tasks. SimCLR [2] added a Multi-Layer Perceptron (MLP) head as a predictor and highlighted the critical role of strong data augmentation and large batches (e.g., 8000) in contrastive learning. Following up on this, [4] proposed an improved version of MoCo (dubbed MoCo V2) that added a two-layer MLP head in the encoder and modified the data transforms according to those of SimCLR. 
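For readers unfamiliar with the momentum encoder mentioned above, the sketch below shows the standard exponential-moving-average update used by MoCo-style frameworks; the momentum value 0.999 is a typical choice, and the two toy encoders are placeholders rather than the actual MoCo backbone.

```python
# Sketch of the momentum-encoder update used by MoCo-style frameworks: the key
# encoder is an exponential moving average (EMA) of the query encoder, so it
# evolves slowly and provides consistent targets. Typical momentum is m ~ 0.999.
import copy
import torch
import torch.nn as nn

query_encoder = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 32))
key_encoder = copy.deepcopy(query_encoder)      # initialized as an exact copy
for p in key_encoder.parameters():
    p.requires_grad = False                     # updated only via EMA, not by SGD

@torch.no_grad()
def momentum_update(m: float = 0.999):
    for q, k in zip(query_encoder.parameters(), key_encoder.parameters()):
        k.mul_(m).add_((1.0 - m) * q)           # theta_k <- m*theta_k + (1-m)*theta_q

momentum_update()                               # called once per training step
```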
Bootstrap Your Own Latent (BYOL) [26] proposed an SNN framework centered around an _online_ network and a _target_ network. The output of the _target_ network is iteratively bootstrapped to serve as input to the _online_ network. The authors showed that BYOL could learn more robust representations than previous approaches. Finally, SimSiam [15] introduced a simpler SNN architecture that removes the need for negative sample pairs, momentum encoders, and large batches. More specifically, SimSiam implements a _stop-grad_ mechanism that stops gradient backpropagation in one of the two branches of the twin model. Despite being a more straightforward approach than previous baselines, SimSiam reaches a competitive score on ImageNet classification. ### _Neural Architecture Search for Contrastive Self-Supervised Learning_ A handful of previous works have already explored using NAS for contrastive Self-Supervised Learning (SSL). [27] first introduced a method to leverage NAS to improve existing SSL frameworks such as SimCLR [2]. Their approach, dubbed SSNAS, is derived from DARTS [16] and reached competitive performance compared with supervised models. [28] proposed CSNAS, a novel way to search for SSL-focused CNN architectures using Sequential Model-Based Optimization. CSNAS leverages a cell-based search space similar to DARTS [16] and performs contrastive SSL using PIRL [29]. The authors showed that CSNAS managed to overperform or match both handcrafted architectures and supervised NAS models on image classification tasks. Another work of note is SSWP-NAS [30], a proxy-free weight-preserving NAS method for SSL. Similarly to CSNAS, SSWP-NAS is based on DARTS and navigates through a cell-based search space to discover new CNN architectures. SSWP-NAS overperformed previous SSL NAS methods and reached competitive results compared to supervised NAS approaches. In a drastically different approach, Contrastive Neural Architecture Search (CTNAS) [31] refactors NAS with Contrastive Learning. A Neural Architecture Comparator is designed to drive the search process by comparing candidate architectures with a baseline architecture. Thus, in this approach, contrary to other works, Contrastive Learning is used to enhance NAS rather than the other way around. In this article, we propose to go further than the previous works listed above by using differentiable NAS to directly enhance the Siamese (i.e., MLP) architecture rather than improve the backbone CNN (which is similar to what trending NAS frameworks such as DARTS [16] or FBNet [14] do). In Section IV, we show that our NASiam approach is able to discover novel Siamese architectures reaching higher performance than standard Contrastive SSL frameworks such as SimCLR [2] or MoCo [4]. ## III Proposed Approach This section highlights the key ideas behind our proposed approach: searching for the multi-layer perceptron components of the encoder/predictor pair and crafting an original search space specific to contrastive learning with siamese neural networks. ### _Searching for an Encoder/Predictor Pair_ First, we focused on SimSiam [15] as a simple baseline upon which to build our approach. SimSiam uses a siamese architecture consisting of an encoder \(f\) and a predictor \(h\). The encoder \(f\) is composed of a baseline CNN (e.g., ResNet50 [7]) and of a projector head (i.e., a three-layer MLP) that is duplicated on twin branches that take variations of the same image as input. 
A two-layer MLP \(h\) is then added on top of one of the branches to act as a predictor head. The discrepancy between the output feature vectors of the two branches is computed using a contrastive loss as follows: \[\mathcal{L}=\frac{1}{2}\left(\mathcal{D}(p_{1},\texttt{stopgrad}(z_{2}))+\mathcal{D}(p_{2},\texttt{stopgrad}(z_{1}))\right) \tag{2}\] where \(z_{1}=f(x_{1})\), \(z_{2}=f(x_{2})\), \(p_{1}=h(z_{1})\), \(p_{2}=h(z_{2})\) for input images \(x_{1}\) and \(x_{2}\), stopgrad is a mechanism that stops gradient backpropagation (in other words, the argument inside stopgrad is detached from the gradient computation), and \(\mathcal{D}\) is the negative cosine similarity defined as follows: \[\mathcal{D}(p,z)=-\frac{p}{||p||_{2}}\cdot\frac{z}{||z||_{2}} \tag{3}\] where \(||.||_{2}\) is the \(l_{2}\) norm. In our proposed approach, we kept most of the global structure of the underlying siamese framework. However, we used a differentiable NAS method to search for an encoder projector head architecture of up to \(n\) layers and a predictor architecture of up to \(m\) layers. More specifically, we consider a set \(O=\{o_{1},...,o_{K}\}\) of candidate operations. We search for two cells (see Section II-A) \(C_{e}\) and \(C_{p}\) for the encoder and predictor, respectively. Contrary to DARTS [16], each cell is structured as a linear sequence of layers where each layer is a mixed output of \(|O|=K\) operations. Each operation \(o\) in each layer \(i\) is weighted by a parameter \(\alpha_{i}^{o}\). The sets of architectural parameters for \(C_{e}\) and \(C_{p}\) are denoted \(\alpha_{e}\) and \(\alpha_{p}\) respectively. Similarly to Eq. 1, operation values in each layer are discretized as follows: \[\overline{o}_{i}(x)=\sum_{k=1}^{K}\sigma_{SM}(\alpha_{i}^{k})o_{k}(x) \tag{4}\] where \(\overline{o}_{i}\) is the mixed operation of layer \(i\), \(\alpha_{i}^{k}\) is the architectural weight assigned to \(o_{k}\in O\) for layer \(i\), and \(\sigma_{SM}\) denotes the _softmax_ operation. The supernet encompassing \(f\) and \(h\) is trained on a portion of a dataset while \(C_{e}\) and \(C_{p}\) are simultaneously searched on another portion of the same dataset. Hence, we solve a bi-level optimization problem formulated as \[\begin{split}\underset{\alpha_{e},\alpha_{p}}{\text{min}}\ \mathcal{L}_{val}(w^{*}(\alpha_{e},\alpha_{p}),\alpha_{e},\alpha_{p}),\\ \text{s.t. }\ w^{*}(\alpha_{e},\alpha_{p})=\underset{w}{\text{argmin}}\ \mathcal{L}_{train}(w,\alpha_{e},\alpha_{p}),\end{split} \tag{5}\] where \(w\) denotes the supernet weights, \(\mathcal{L}_{train}(w,\alpha_{e},\alpha_{p})=\mathcal{L}(w,\alpha_{e},\alpha_{p})\) is the training loss, and \(\mathcal{L}_{val}(w^{*},\alpha_{e},\alpha_{p})=\mathcal{L}(w^{*},\alpha_{e},\alpha_{p})\) is the validation loss. Once the search phase is complete, for each layer \(i\) of each cell, we select the best-performing operation according to the discretized weights \(\alpha_{e}\) and \(\alpha_{p}\) to form the encoder/predictor architecture genotype \(G\). The whole neural architecture search process is detailed in Algorithm 1. Note that some siamese frameworks (e.g., SimCLR and MoCo) do not rely on a predictor.
Hence, in that case, we only performed NAS for the MLP projector head of the encoder (i.e., only searching for cell \(C_{e}\)). In Section IV, we show that NASiam can consistently improve the performance of popular siamese frameworks (SimSiam, SimCLR, MoCo, and BYOL) in both small-scale (CIFAR-10 and CIFAR-100 [17]) and large-scale (ImageNet [18]) image classification datasets. ### _Crafting a Contrastive Learning-Specific Search Space_ To accompany our novel NASiam approach (see Section III-A), we crafted an original search space \(S\) specifically designed for MLPs. \(S\) comprises the following 7 operation blocks: linear + batch_norm + ReLU, linear + batch_norm + Hardswish, linear + batch_norm + SiLU, linear + batch_norm + ELU, max_pool_3x3 (1-dimensional) + batch_norm, avg_pool_3x3 (1-dimensional) + batch_norm, and Identity (_skip connection_). Hence, \(S\) includes several types of fully connected layers, each featuring a different activation function. The motivation behind adding activation functions to the search space is to increase diversity among the candidate architectures and explore alternatives to the classic ReLU function (e.g., Hardswish [32], or Mish [33]). To that end, it also makes it possible to mix different activation functions according to the type of network (i.e., projector or predictor) and the location inside that network. In contrast, previous baselines [2, 4, 15] only relied on a single activation for both networks regardless of their respective architectures. While unconventional, including pooling layers in the search space is helpful, as we show in Section IV that they can help prevent collapsing. Moreover, the authors of SimSiam [15] indicated that insufficient or too many Batch Normalization (BN) layers could cause the model to underperform severely or become unstable. They empirically demonstrate that the optimal setting for SimSiam is to place BNs after every layer except for the predictor's output layer. Hence, we follow this assertion by adding BNs after every linear and pooling operation except for the predictor's final layer. Finally, we also included the identity operation so that the search algorithm can modulate the number of layers in the architecture. This way, we can indicate a maximum number of layers \(n\), and the search algorithm can craft an architecture of size \(m<n\) by "skipping" layers. ## IV Experiments This section presents the results of our image classification experiments on small-scale (CIFAR-10, CIFAR-100) and large-scale (ImageNet) datasets. ### _Experimental Settings_ We used RTX 3090 and Tesla V100 Nvidia GPUs to conduct our experiments. We searched for predictor/encoder pairs for 100 epochs on CIFAR-10, and CIFAR-100 [17] using the SGD optimizer with \(lr=0.06\), \(wd=5e-4\), and a batch size of 512. We set a maximum of 6 layers for the encoder. If the baseline siamese framework relies on a predictor, we search for a 4-layer predictor architecture. The whole search process on these settings takes around 2.3 GPU hours on a single GPU. We did not search on the full ImageNet [18] dataset as it is prohibitively expensive (i.e., it takes around 12 GPU days on a single GPU). Instead, we transferred our best CIFAR-searched architecture to ImageNet. For the pre-training and linear classification phases, we kept the same settings as [15]. Our code is based on PyTorch 1.12. 
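As a concrete illustration of Section III, the sketch below wires the seven candidate blocks of the search space \(S\) (Section III-B) into a searchable MLP cell whose layers are mixed according to Eq. (4). It is a condensed approximation under our own naming (`SearchableMLPCell`, `mlp_candidates`), not the released NASiam code, and it omits details such as dimension changes between projector layers and the BN-free predictor output layer.

```python
# Condensed sketch (not the authors' released code) of the MLP search space S and
# the per-layer mixture of Eq. (4). Class and function names are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

def mlp_candidates(dim):
    """The 7 candidate blocks of S. The paper places BN after every linear and
    pooling op except the predictor's output layer; that exception is omitted here."""
    def linear_block(act):
        return nn.Sequential(nn.Linear(dim, dim), nn.BatchNorm1d(dim), act)
    return nn.ModuleList([
        linear_block(nn.ReLU()),
        linear_block(nn.Hardswish()),
        linear_block(nn.SiLU()),
        linear_block(nn.ELU()),
        nn.Sequential(nn.MaxPool1d(3, stride=1, padding=1), nn.BatchNorm1d(dim)),
        nn.Sequential(nn.AvgPool1d(3, stride=1, padding=1), nn.BatchNorm1d(dim)),
        nn.Identity(),                       # lets the search shorten the cell
    ])

class SearchableMLPCell(nn.Module):
    """A linear sequence of mixed layers; alpha holds one weight per op per layer."""
    def __init__(self, dim, num_layers):
        super().__init__()
        self.layers = nn.ModuleList(mlp_candidates(dim) for _ in range(num_layers))
        self.alpha = nn.Parameter(torch.zeros(num_layers, 7))  # 7 = |S|

    def forward(self, x):
        for layer_ops, a in zip(self.layers, self.alpha):
            w = F.softmax(a, dim=0)                             # Eq. (4)
            x = sum(wk * op(x) for wk, op in zip(w, layer_ops))
        return x

cell = SearchableMLPCell(dim=512, num_layers=4)    # e.g., a predictor cell C_p
z = cell(torch.randn(8, 512))                      # batch of 8 feature vectors
```

After the search, the operation with the largest weight in each row of `alpha` would be kept to form the discrete genotype, mirroring the selection step described above.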
### _Ablation Study on the Importance of Pooling Layers_

We conducted an ablation study on the importance of including pooling layers in our novel search space \(S\) (see Section III-B). To this end, we simply removed max_pool_3x3 and avg_pool_3x3 from \(S\) to form \(S^{\prime}\). When comparing the results in Table I, we can observe that, when searching on \(S^{\prime}\) rather than \(S\), the validation top-1 accuracy of NASiam drops significantly (by around 3 %). Moreover, when analyzing the genotypes searched on CIFAR-10 and CIFAR-100 using \(S\), it appears that the predictor architectures always contain pooling layers (making up to 40 % of the total architecture). In addition, Fig. 2 shows that the model (see Eq. 2) achieved better similarity and faster convergence when searched on \(S\) rather than \(S^{\prime}\). Thus, these findings highlight the critical role pooling layers play in ensuring high performance and preventing collapse, especially concerning the encoder architecture.

Fig. 1: Layout of the NASiam architecture. Siamese network encoder/predictor (projection MLPs) architectures are searched using differentiable NAS wrapped around a Siamese framework such as SimSiam [2], which is the baseline used in the present figure. First, an input image x is augmented to produce two variations \(x_{1}\) and \(x_{2}\). Each of these two inputs is then fed into one of the two branches of the siamese network. While both \(x_{1}\) and \(x_{2}\) go through an encoder equipped with an MLP projection head, \(x_{2}\) is further processed by an MLP predictor. Finally, a negative cosine contrastive loss is computed and backpropagated to minimize the similarity between the two branches' output feature maps. Both the encoder and the decoder contain cells (i.e., \(C_{e}\) and \(C_{p}\), respectively) that are designed using a differentiable NAS approach. Architectural parameters for \(C_{e}\) and \(C_{p}\) are denoted \(\alpha_{e}\) and \(\alpha_{p}\) respectively.

Fig. 2: Plot of the negative cosine contrastive loss while pretraining two NASiam models on CIFAR-10. The baseline framework is SimSiam with a ResNet18 backbone. The two models are searched on search spaces \(S\) (blue line) and \(S^{\prime}\) (red line) respectively. The model searched on \(S\) achieves better similarity, thus making the relevance of pooling layers clear.

### _Incidence of Data Augmentations on the NAS process_

In self-supervised learning, data augmentations are paramount to prevent the model from overfitting and the contrastive loss (see Eq. 2) from saturating to -1. In contrast, differentiable neural architecture search methods [12, 16, 22] scarcely employ data augmentation as they only train the supernet for a small number of epochs (e.g., 50). Thus, a legitimate question is how the strong data augmentation policy used in SSL frameworks can interfere with the differentiable search process. To answer this question, we searched for two different SimSiam [15] models with and without the data augmentation policy activated on CIFAR-10 and compared the resulting architectures. Table II shows that deactivating the data augmentation policy leads to a degenerated architecture with a dominance of skip connections (50 % of the architecture) associated with performance collapse. Furthermore, Fig. 3 shows that, during the search phase, the similarity loss converges significantly faster towards -1, thus presenting a collapsing behavior. This observation correlates with the architectural collapse described in numerous differentiable NAS studies [12, 13, 34]. This collapsing behavior is akin to overfitting for NAS and is caused by the high prominence of _skip connections_ due to their unfair advantage (compared to parametric operations). Thus, data augmentation clearly has a positive impact on the differentiable search process and should not be deactivated, in contrast with supervised learning.

Fig. 3: Plot of the negative cosine contrastive loss while pretraining two NASiam models on CIFAR-10. The baseline framework is SimSiam with a ResNet18 backbone. The two models are searched with and without data augmentation respectively.

### _Preliminary Results on CIFAR_

To quickly assess the behavior of our novel approach NASiam, we first conducted preliminary experiments on small-scale CIFAR datasets [17]. We searched NASiam architectures for 100 epochs on CIFAR-10 and CIFAR-100 using the CIFAR version of ResNet18 [7] as the encoder backbone. Then, we performed unsupervised pretraining for 800 epochs with a cosine annealing schedule before training a linear classifier using frozen features for 100 epochs. In these settings, NASiam overperforms SimSiam by 1.4 % and 0.4 % on CIFAR-10 and CIFAR-100 respectively (see Table III and Table IV). In addition, Fig. 4 shows us that NASiam can achieve better similarity than SimSiam without saturating the contrastive loss to \(-1\) (i.e., a "collapsing" behavior). Furthermore, results were also positive when using alternative siamese frameworks, with NASiam overperforming both MoCo V2 [4] and SimCLR [2].

Fig. 4: Plot of the negative cosine contrastive loss when pretraining SimSiam and NASiam for 800 epochs on CIFAR-10. NASiam converges faster without collapsing and achieves better similarity than SimSiam.

### _Results on ImageNet_

We conducted image classification experiments on ImageNet [18] as a standard practice to evaluate the performance of our novel approach on large-scale datasets. As stated in Section IV-A, we transferred our best CIFAR architecture instead of searching directly on ImageNet to save computational resources. Then, we performed unsupervised pretraining on ImageNet for 100 epochs before training a linear classifier with frozen features for 100 epochs. The results are presented in detail in Table V. As for CIFAR (see IV-D), NASiam consistently achieves better linear classification results than the baseline frameworks, thus validating its usefulness.

### _Object Detection and Instance Segmentation Results on COCO_

Table VI displays the results of transferring our NASiam models pretrained on ImageNet [18] to Microsoft COCO [35] object detection and instance segmentation tasks. We can see that NASiam consistently overperforms handcrafted SSL architectures in both tasks. Hence, NASiam architectures can successfully generalize to computer vision tasks other than image classification.

### _Discussion on the Composition of the Architectures_

Some facts are noteworthy when comparing encoder/predictor architectures discovered on CIFAR-10 by our novel approach (see Section III-A) with those of SimSiam [15]. First, in Fig. 5, we can see that both ResNet50 and ResNet18 [17] NASiam architectures are significantly deeper than the original SimSiam architecture. Furthermore, a remarkable fact is that the ReLU activation function is in the minority in the discovered architectures (and even disappeared completely from the ResNet50 one). Instead, a mix of different activation functions is preferred, with SiLU and Hardswish having a high prominence. Thus, this may indicate that ReLU, despite its popularity, is not the optimal activation function for performing contrastive learning. In addition, the optimizer always selected at least one AvgPool3x3+BN layer to be part of the predictor architecture, hence validating the relevance of including pooling layers in the search space (as already highlighted in Section IV-B). Finally, when comparing both NAS-discovered architectures, we can observe that the ResNet50 one possesses a deeper encoder than the ResNet18-based architecture (i.e., 6 vs. 4 layers), with additional Linear+BN+Swish and Linear+BN+SiLU blocks. However, the two predictor architectures retain the same depth and a similar composition. This is coherent with the recommendations of the authors of SimSiam [15], where they selected a shallower architecture when training on CIFAR-10 with ResNet18 rather than ResNet50. One hypothesis to explain this discrepancy in architectural sparsity is that ResNet18, being a shallower model than ResNet50, has a less powerful innate ability to extract representations and hence produces less complex feature maps that would not require a deep projector to be analyzed. Using a deeper architecture could even lead to adverse effects. To confirm this hypothesis, we tried to fit a ResNet18 model on CIFAR-10 with the deeper encoder/predictor pair discovered for ResNet50. Fig. 6 clearly shows that this architectural setting quickly led to a collapsing behavior (with the contrastive loss rapidly saturating to -1 as soon as epoch 350) with a higher variance than the ResNet18-searched architecture. Hence, this validates the ability of our NASiam approach to discover backbone-specific architectures.

Fig. 5: Composition of encoder/predictor pair architectures. **(Top)** SimSiam model. **(Bottom left)** NASiam model searched for 100 epochs on CIFAR-10 using SimSiam as the baseline framework with ResNet18 as the backbone CNN. **(Bottom right)** NASiam model searched for 100 epochs on CIFAR-10 using SimSiam as the baseline framework with ResNet50 as the backbone CNN. ResNet18-searched and ResNet50-searched architectures are clearly different, with ResNet50 needing a deeper encoder.

Fig. 6: Plot of the negative cosine similarity loss while pretraining NASiam with ResNet18 using architectures searched either with ResNet18 or ResNet50 as backbone. The ResNet50-searched architecture quickly collapses towards -1 and has high variance while the ResNet18-searched one converges as expected.

## V Conclusion

In this article, we presented NASiam, a novel approach for contrastive learning with siamese networks that searches for efficient encoder/predictor pairs using differentiable neural architecture search (see Section III-A). This universal method can enhance many existing siamese frameworks while preserving their underlying structure. In addition, NASiam is efficient as it only costs a few GPU hours. Section IV showed that NASiam discovers encoder/predictor pair architectures that efficiently learn robust representations and overperform previous baselines in small-scale and large-scale image classification datasets. These empirical results support our intuition that the encoder and predictor architectural designs play a decisive role in representation learning.
We hope this work will pave the way to further improvements for MLP-headed siamese networks.
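For reference, the contrastive objective of Eq. (2)-(3) and its stop-gradient can be sketched in a few lines of PyTorch; the helper names below are illustrative rather than taken from the paper's code, and a loss that saturates to \(-1\) is the collapse symptom discussed in Section IV.

```python
# Compact sketch of the symmetric negative-cosine objective of Eq. (2)-(3),
# with stop-gradient implemented via .detach(). Function names are illustrative.
import torch
import torch.nn.functional as F

def neg_cosine(p, z):
    # D(p, z) in Eq. (3); z is detached so gradients only flow through p
    return -F.cosine_similarity(p, z.detach(), dim=-1).mean()

def siamese_loss(p1, p2, z1, z2):
    # L in Eq. (2): symmetric over the two augmented views
    return 0.5 * (neg_cosine(p1, z2) + neg_cosine(p2, z1))

# toy check with random features: the loss lives in [-1, 1]
z1, z2 = torch.randn(4, 128), torch.randn(4, 128)
p1 = torch.randn(4, 128, requires_grad=True)
p2 = torch.randn(4, 128, requires_grad=True)
loss = siamese_loss(p1, p2, z1, z2)
loss.backward()
```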
2310.20198
Structured Two-Stage True-Time-Delay Array Codebook Design for Multi-User Data Communication
Wideband millimeter-wave and terahertz (THz) systems can facilitate simultaneous data communication with multiple spatially separated users. It is desirable to orthogonalize users across sub-bands by deploying frequency-dependent beams with a sub-band-specific spatial response. True-Time-Delay (TTD) antenna arrays are a promising wideband architecture to implement sub-band-specific dispersion of beams across space using a single radio frequency (RF) chain. This paper proposes a structured design of analog TTD codebooks to generate beams that exhibit quantized sub-band-to-angle mapping. We introduce a structured Staircase TTD codebook and analyze the frequency-spatial behaviour of the resulting beam patterns. We develop the closed-form two-stage design of the proposed codebook to achieve the desired sub-band-specific beams and evaluate their performance in multi-user communication networks.
Aditya Wadaskar, Ding Zhao, Ibrahim Pehlivan, Danijela Cabric
2023-10-31T05:44:24Z
http://arxiv.org/abs/2310.20198v2
# Structured Two-Stage True-Time-Delay Array Codebook Design for Multi-User Data Communication ###### Abstract Wideband millimeter-wave and terahertz (THz) systems can facilitate simultaneous data communication with multiple spatially separated users. It is desirable to orthogonalize users across sub-bands by deploying frequency-dependent beams with a sub-band-specific spatial response. True-Time-Delay (TTD) antenna arrays are a promising wideband architecture to implement sub-band-specific dispersion of beams across space using a single radio frequency (RF) chain. This paper proposes a structured design of analog TTD codebooks to generate beams that exhibit quantized sub-band-to-angle mapping. We introduce a structured _Sticrase TTD_ codebook and analyze the frequency-spatial behaviour of the resulting beam patterns. We develop the closed-form two-stage design of the proposed codebook to achieve the desired sub-band-specific beams and evaluate their performance in multi-user communication networks. ## I Introduction Millimeter-wave and terahertz (THz) systems offer large bandwidths [1, 2, 3] which, besides enabling high data rates, can facilitate simultaneous data communication with multiple spatially separated users occupying non-overlapping sub-bands. To support such sub-band-specific data communication, base stations need to deploy directional beams with a sub-band-specific spatial response, where all frequency resources within a sub-band form a beam to serve a particular user [4, 5], as shown in Fig. 1. While the conventional analog phased arrays can only generate frequency-flat spatial responses, fully digital or hybrid analog-digital arrays that leverage multiple RF chains for enhanced beamforming capabilities incur high costs and power consumption. True-Time-Delay (TTD) arrays are a promising candidate for sub-band beamforming owing to their low-complexity implementation of frequency-dependent beams using a single RF chain. Works in [6, 7, 8, 9] use analog TTD arrays to implement a fully dispersive rainbow beam codebook scanning a continuous range of angles for expedited beam training. Recent works, namely Joint-Phase-Time-Arrays (JPTA) [4] and mmFlexible [5], leverage analog TTD-inspired architectures to generate beams with quantized sub-band-specific dispersion in space. The algorithm proposed in [4] iteratively optimizes the per-antenna delays and phase shifts, whereas the algorithm in [5] is based on a closed-form Least-Squares solution. In contrast with [4, 5], this paper adopts a structured beamsynthesis methodology rooted in principles of array design and frequency-spatial beam-pattern analysis to design sub-band beams, rather than target-based optimization or pattern-fitting. The main contributions of the paper are summarized as follows: We propose a structured delay-phase codebook called _Staircase TTD_ codebook in Sec. II, and study the frequency-spatial characteristics of resulting beams in Sec. III. We then develop a closed-form design of the proposed codebook to implement dual-stage frequency-spatial filtering to achieve the required sub-band-specific spatial responses in Sec. IV. Sec. V presents simulation results that compare the performance of Staircase TTD codebooks with state-of-the-art methods. Finally, Sec. VII presents concluding remarks and future steps. _Notation:_ Scalars, vectors, and matrices are denoted by non-bold, bold lower-case, and bold upper-case letters, respectively. 
For a given matrix \(\mathbf{A}\), \(e^{\mathbf{A}}\) and \(\log(\mathbf{A})\) denote matrices with the \((i,j)^{th}\) element given by \(e^{A_{i,j}}\) and \(\log\mathbf{A}_{i,j}\) respectively. Further, the \(n^{th}\) element of a vector \(\mathbf{v}\) is denoted as \(\mathbf{v}_{n}\). Conjugate, transpose and Hermitian transpose are denoted by \((.)^{*}\), \((.)^{\text{T}}\), and \((.)^{\text{H}}\) respectively.

## II System Model

We consider a cellular system where a Base Station (BS) simultaneously serves \(K\) users (UE) spatially distributed at angles \(\theta^{(k)}\)\(\forall\)\(k=1,...,K\). The BS operates over the bandwidth \(BW\) and transmits an Orthogonal Frequency Division Multiplexing (OFDM) signal with a total of \(M_{tot}\) subcarriers at carrier frequency \(f_{c}\), where the frequency of the \(m^{th}\) subcarrier is given by \(f_{m}=f_{c}-BW/2+BW(m-1)/(M_{tot}-1)\)\(\forall\)\(m\in\{1,...,M_{tot}\}\). Each UE operates over a non-overlapping contiguous bandwidth \(BW/K\) with a total of \(M_{tot}/K\) sub-carriers. The BS is equipped with an \(N_{T}\times 1\) analog TTD array with uniform half-wavelength spacing (\(\lambda_{c}/2=c/(2f_{c})\), where \(c\) is the speed of light). Each antenna element is controlled with time delays and phase shifts, which are denoted by vectors \(\boldsymbol{\tau},\boldsymbol{\Phi}\in\mathbb{R}^{N_{T}\times 1}\) respectively. The frequency-dependent precoder at the BS \(\mathbf{w}_{TTD}[m]\in\mathbb{C}^{N_{T}\times 1}\) is thus obtained as follows: \[\mathbf{w}_{TTD}[m]=\frac{1}{\sqrt{N_{T}}}e^{j(2\pi f_{m}\boldsymbol{\tau}+\boldsymbol{\Phi})} \tag{1}\] The goal is to design the per-antenna delays \(\tau_{n}\) and phase shifts \(\phi_{n}\)\(\forall n\in\{1,...,N_{T}\}\) to generate beams with the desired sub-band-to-angle mapping.

Fig. 1: Sub-band-specific beamforming for simultaneous multi-user data communication with analog True-Time-Delay arrays.

### _Uniform Staircase TTD codebook_

We introduce the uniform Staircase TTD codebook that is designed based on two sets of delay and phase increments applied at different antenna spacing intervals. The high-frequency delay and phase increments (\(\Delta\tau_{h}\), \(\Delta\phi_{h}\)) occur at every consecutive antenna element, whereas the low-frequency increments (\(\Delta\tau_{l}\), \(\Delta\phi_{l}\)) occur at a spacing of \(D\) antenna elements. The resulting delay and phase vectors resemble a staircase function of step size \(D\), where the delay at the \((n+1)^{th}\) antenna is given as follows: \[\tau_{n+1}=\left\{\begin{array}{cc}\tau_{n}+\Delta\tau_{h}+\Delta\tau_{l}&\text{if }\bmod(n,D)=0\\ \tau_{n}+\Delta\tau_{h}&\text{otherwise}\end{array}\right. \tag{2}\] where \(\bmod(.)\) denotes the modulo operator. The per-antenna phase shifts apply increments in a similar manner.
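A small NumPy sketch of the recursion in Eq. (2) and of the precoder in Eq. (1) is given below; the function names and the toy parameter values are illustrative, and the indexing convention used for the jump positions is our assumption rather than something fixed by the paper.

```python
# Minimal NumPy sketch (illustrative, not from the paper) of the uniform Staircase
# TTD codebook of Eq. (2) and the frequency-dependent precoder of Eq. (1).
import numpy as np

def staircase_delays_phases(n_t, d, d_tau_h, d_tau_l, d_phi_h, d_phi_l):
    """Recursion of Eq. (2): high-frequency increments at every element,
    low-frequency increments every D elements (0-indexed convention assumed)."""
    tau = np.zeros(n_t)
    phi = np.zeros(n_t)
    for n in range(1, n_t):
        jump = (n % d == 0)
        tau[n] = tau[n - 1] + d_tau_h + (d_tau_l if jump else 0.0)
        phi[n] = phi[n - 1] + d_phi_h + (d_phi_l if jump else 0.0)
    return tau, phi

def ttd_precoder(f_m, tau, phi):
    """Eq. (1): w_TTD[m] = exp(j(2*pi*f_m*tau + phi)) / sqrt(N_T)."""
    return np.exp(1j * (2 * np.pi * f_m * tau + phi)) / np.sqrt(len(tau))

# toy numbers: 32 antennas, step size D = 4, 60 GHz carrier
fc, n_t, d = 60e9, 32, 4
tau, phi = staircase_delays_phases(n_t, d, d_tau_h=1e-12, d_tau_l=5e-12,
                                   d_phi_h=0.1, d_phi_l=0.3)
w = ttd_precoder(fc, tau, phi)   # one precoding vector per subcarrier frequency
```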
Under special condition \(\bmod(N_{T},D)=0\), it is possible to realize the Kronecker decomposition of the Staircase TTD combiner in (1) to obtain delays and phases that can be expressed as follows: \[\boldsymbol{\tau}= \underbrace{(\Delta\tau_{l}+D\Delta\tau_{h})}_{\Delta\tau_{jump}} [0,...,\frac{N_{T}}{D}-1]^{T}\oplus[0,...,D-1]^{T}\underbrace{\Delta\tau_{h }}_{\Delta\tau_{step}} \tag{3}\] \[\boldsymbol{\Phi}= \underbrace{(\Delta\phi_{l}+D\Delta\phi_{h})}_{\Delta\phi_{jump}} [0,..,\frac{N_{T}}{D}-1]^{T}\oplus[0,...,D-1]^{T}\underbrace{\Delta\phi_{h}} _{\Delta\phi_{step}}\] where \(\oplus\) denotes the Kronecker summation of two vectors \(\mathbf{a}\in\mathbb{C}^{N_{1}\times 1}\) and \(\mathbf{b}\in\mathbb{C}^{N_{2}\times 1}\), defined as \(\mathbf{a}\oplus\mathbf{b}\in\mathbb{C}^{N_{1}N_{2}\times 1}=\log(e^{ \mathbf{a}}\otimes e^{\mathbf{b}})\), where \(\otimes\) denotes Kronecker product. For ease of notation, we define \(\Delta\tau_{jump}=\Delta\tau_{l}+D\Delta\tau_{h}\) and \(\Delta\phi_{jump}=\Delta\phi_{l}+D\Delta\phi_{h}\) as the Staircase _jump_ parameters, and \(\Delta\tau_{step}=\Delta\tau_{h}\) and \(\Delta\phi_{step}=\Delta\phi_{h}\) as the _step_ parameters as shown in Fig. 2, since the two parameters govern the inter- and intra-step behaviour of the staircase TTD codebook. Consequently, the delays and phases of the uniform Staircase TTD codebook in (2) can be expressed as follows: \[\tau_{n+1}=\left\{\begin{array}{cc}\tau_{n}+\Delta\tau_{jump}-(D-1)\Delta \tau_{step};&\bmod(n,D)=0\\ \tau_{n}+\Delta\tau_{step};&\text{otherwise}\end{array}\right. \tag{4}\] ## III Frequency-spatial analysis of Staircase TTD ### _Frequency-angle mapping of each sub-array_ The uniform Staircase TTD codebook can be visualized as the superposition of \(D\) uniform TTD sub-arrays (shown in Fig. 2) with antenna spacing \(D\lambda_{c}/2\), delay spacing \(\Delta\tau_{jump}\) and phase spacing \(\Delta\phi_{jump}\). Since the antenna spacing exceeds the critical \(\lambda_{c}/2\) spacing by a factor of \(D\), the resulting beams exhibit \(D\) grating lobes or spectral copies for each frequency. Each sub-array would have an identical frequency-beam-centre mapping owing to identical uniform TTD array parameters. Based on (1), the precoder for each sub-array \(\widetilde{\mathbf{w}}_{TTD}[m]\in\mathbb{C}^{N_{T}/D\times 1}\) is determined by: \[\widetilde{\mathbf{w}}_{TTD}[m]=\sqrt{\frac{D}{N_{T}}}e^{j\pi[0,...,\frac{N_{T }}{D}-1]^{T}(2f_{m}\Delta\tau_{jump}+\Delta\phi_{jump}/\pi)} \tag{6}\] The array response vector \(\mathbf{\tilde{a}}_{D}(\theta,f_{m})\in\mathbb{C}^{N_{T}/D\times 1}\) for each sub-array with \(D\lambda_{c}/2\) antenna-spacing at an angle of arrival \(\theta\) can be given as follows: \[\mathbf{\tilde{a}}_{D}(\theta,f_{m})=e^{-j\pi\frac{f_{m}}{f_{c}}[0,...,\frac{ N_{T}}{D}-1]^{T}D\sin\theta} \tag{7}\] The frequency-dependent beamforming gain at angle \(\theta\) can thus be obtained as \(\tilde{G}(\theta,f_{m})=|\widetilde{\mathbf{w}}_{TTD}^{H}[m]\mathbf{\tilde{a}}_ {D}(\theta,f_{m})|^{2}\), which can be simplified as follows: \[\tilde{G}(\theta,f_{m})=\left|\frac{\sin\left(\frac{N_{T}}{D}\frac{\pi}{2} \Psi_{jump}(f_{m})\right)}{\sin\left(\frac{\pi}{2}\Psi_{jump}(f_{m})\right) }\right|^{2} \tag{8}\] where \(\Psi_{jump}(f_{m})=2f_{m}\Delta\tau_{jump}+\Delta\phi_{jump}/\pi+D(f_{m}/f _{c})\sin\theta\). The beam-centre for frequency \(f_{m}\), denoted by \(\theta^{\star}(f_{m})\) or \(\theta^{\star}_{m}\), corresponds to the angle that maximizes the beamforming gain function, i.e. 
\(\theta^{\star}_{m}=\{\theta|G(\theta,f_{m})=N_{T}/D\}\), and can be obtained by solving \(\Psi_{jump}(f_{m})=2z\), \(z\in\mathbb{Z}\). Owing to grating lobes, each frequency \(f_{m}\) will have \(D\) beam-centre solutions, which are given as follows: \[\begin{split}\theta^{\star}(f_{m},q)=\sin^{-1}\left[1-\frac{2}{D} (q-1)\frac{f_{c}}{f_{m}}-\right.\\ \left.mod\left(2f_{c}\frac{\Delta\tau_{jump}}{D}+\frac{\Delta\phi_{jump}}{D \pi}\frac{f_{c}}{f_{m}}+1,2\frac{f_{c}}{Df_{m}}\right)\right]\end{split} \tag{9}\] where each value of \(q=1,...,D\) corresponds to a distinct spectral copy of the main beam. As is evident from (9), the \(D\) spectral copies for each frequency \(f_{m}\) have an angular separation of \(\Delta\sin\theta_{m}^{\star}=\frac{2}{D}\frac{f_{c}}{f_{m}}\approx\frac{2}{D}\) when \(f_{c}>>BW\). Thus, the \(D\lambda_{c}/2\) array-spacing partitions the angular region into \(D\) non-overlapping segments of uniform sinusoidal width, within which each spectral copy is confined, as shown in Fig. 3(a,c). The grating factor \(D\) thus determines the number and relative spacing of spectral beam copies. Further, the slope of the frequency-beam-centre map, denoted by \(\frac{\partial\sin\theta_{m}^{\star}}{\partial f_{m}}\) can be obtained from (9) as \(-\frac{2\Delta\tau_{jump}}{D}\). This tells us that \(\Delta\tau_{jump}\) determines the extent of frequency-dependent angular dispersion of each spectral copy within its segment. Setting \(\Delta\tau_{jump}=-\frac{D\sin\theta_{m}}{2f_{c}}\) creates a directional beam at \(\theta_{o}\)\(\forall f_{m}\), with spectral copies at \(\theta_{m}^{\star}=\sin^{-1}(\mathrm{mod}\)\((\sin\theta_{o}-2\frac{q-1}{D}\frac{f_{c}}{f_{m}}+1,2)-1)|_{q=2,...,D}\), as seen in Fig. 3(a,b). When \(\frac{1}{f_{c}}<<|\Delta\tau_{jump}|<\frac{1}{BW}\), each spectral copy exhibits partial dispersion within its respective spectral segment. When \(|\Delta\tau_{jump}|\geq\frac{1}{BW}\), each spectral copy maps to its entire angular segment in at least one mapping cycle, as seen in Fig. 3(c,d). ### _Superposition of the \(D\) sub-arrays: Spatial filtering_ Section III-A obtains the beamforming gain \(\tilde{G}(\theta,f_{m})\) (8) and frequency-angle mapping \(\theta_{m}^{\star}\) of grating lobes (9) for the \(D\) identical uniform TTD sub-arrays that constitute the Staircase TTD codebook. Since these \(D\) sub-arrays are uniformly separated in space (\(\lambda_{c}/2\) antenna spacing), time (\(\Delta\tau_{step}\)) and phase (\(\Delta\phi_{step}\)), as shown in Fig. 2, the effective phase separation between adjacent sub-arrays can be expressed as \(\pi\Psi_{o}(f_{m})\), where \(\Psi_{o}(f_{m})=2f_{m}\Delta\tau_{step}+(f_{m}/f_{c})\sin\theta+\Delta\phi_{ step}/\pi\). 
Thus, the overall beamforming gain \(G(\theta,f_{m})\) of the entire Staircase TTD codebook can be expressed as the exponentially weighted sum of \(\tilde{G}(\theta,f_{m})\), as shown in (10), which can be simplified to obtain (11): \[G(\theta,f_{m})=\big{|}\sum_{q=1}^{D}e^{-j\pi(q-1)\Psi_{o}(f_{m})}.\vec{\textbf{ w}}_{TTD}^{H}[m]\vec{\textbf{a}}_{(D)}\big{|}^{2} \tag{10}\] \[G(\theta,f_{m})=\tilde{G}(\theta,f_{m})\.\ \ \underbrace{\frac{\sin\left((D \pi/2)\Psi_{o}(f_{m})\right)}{\sin\left((\pi/2)\Psi_{o}(f_{m})\right)}}_{\text {Spatial filter: }F(\theta,f_{m})} \tag{11}\] The term \(F(\theta,f_{m})=\big{|}\frac{\sin((D\pi/2)\Psi_{o}(f_{m}))}{\sin((\pi/2)\Psi_ {o}(f_{m}))}\big{|}^{2}\) represents the frequency-spatial filter response that results from the superposition of the \(D\) TTD sub-arrays, uniformly separated in phase, space and time. The filter \(F(\theta,f_{m})\) is centred at angle \(\theta_{o}(f_{m})\), which corresponds to the gain maximizing trajectory about which the filter's spatial response is symmetric, and can be obtained by solving \(\Psi_{o}(f_{m})=2z\), \(z\in\mathbb{Z}\), as follows: \[\theta_{o}(f_{m})=\sin^{-1}\left(1-\mathrm{mod}(2f_{c}\Delta\tau_{step}+\frac {\Delta\phi_{step}}{\pi}\frac{f_{c}}{f_{m}}+1,2\frac{f_{c}}{f_{m}})\right) \tag{12}\] The _step_ delay \(\Delta\tau_{step}\) makes the filter's spatial response frequency-dependent as seen in Fig. 4(a). This is reminiscent of dispersive rainbow beam codebooks constructed using uniform TTD arrays in [6, 7, 8, 9]. Further, the 3dB angular width of the filter for a given \(f_{m}\) is given by \(\Delta\sin\theta=\frac{2\times 0.886}{D}\)[10, Chapt 22.7]. Thus, for each frequency, the filter retains beam patterns corresponding to roughly one spectral segment of angular width \(\Delta\sin\theta\approx\frac{2}{D}\) out of the \(D\) spectral copies present in the parent beam-pattern \(\tilde{G}(\theta,f_{m})\) as shown in Fig. 4(b,d), thereby resulting in the sub-band-specific spatial responses shown in Fig. 4(c,e). Through the systematic design of grating lobe parameters (\(D\), \(\Delta\tau_{jump}\), \(\Delta\phi_{jump}\)) and filter parameters (\(\Delta\tau_{step}\), \(\Delta\phi_{step}\)), we can achieve the required Fig. 3: Frequency-beam-centre map and beamforming gain \(\tilde{G}(\theta,f_{m})\) for each uniform TTD sub-array for \(D=3\), \(N_{T}/D=10\), \(\Delta\phi_{jump}=0\). (**a,b**) Directional grating lobes with \(\Delta\tau_{jump}=-D\sin(\pi/6)/2f_{c}\). (**c,d**) Complete dispersion with frequency diversity, \(\Delta\tau_{jump}=2/BW\). directional sub-band-specific beams. ## IV Two-stage design of sub-band-beams In this section, we propose the two-stage design of the Staircase codebook parameters \(\Delta\tau_{jump}\), \(\Delta\phi_{jump}\), \(\Delta\tau_{step}\), \(\Delta\phi_{step}\) and \(D\) defined in (4) and (5), to construct sub-band-specific beams to simultaneously communicate with \(K\) users located at sinusoidally equidistant angles \(\theta^{(q)}\)\(\forall q\in\{1,...,K\}\) in the sector \([\theta_{1},\theta_{2}]\), with uniform \((BW/K)\) sub-band assignment to each user, as shown in Fig. 5(a). 
The \(K\) UE angles \(\theta^{(q)}\)\(\forall q\in\{1,...,K\}\) are given as follows: \[\theta^{(q)}=\sin^{-1}\left(\sin\theta_{1}+(q-1)\frac{\sin\theta_{2}-\sin \theta_{1}}{K-1}\right) \tag{13}\] ### _Sub-band beam design with uniform Staircase codebooks_ **Stage I:** The first step towards designing the required beam pattern is constructing \(K\) directional grating lobes exactly at the required angles \(\theta^{(q)}\)\(\forall q\in\{1,...,K\}\) in (13), as shown in Fig. 5(b). We know that the angular separation between adjacent grating lobes is \(\frac{2}{D}\frac{f_{c}}{f_{m}}\), where \(D\in\mathbb{Z}\) is the step size of the uniform Staircase codebook. Hence, in order to fit exactly \(K\) grating lobes in \([\theta_{1},\theta_{2}]\), we must select \(D\) as the smallest integer satisfying \(\gamma|\sin\theta_{2}-\sin\theta_{1}|\geq(K-1)\frac{2}{D}\), where \(\gamma=1+\frac{BW}{2f_{c}}-\frac{BW}{2Kf_{c}}\) is the beam-squint2 correction factor. Thus, \(D\) can be computed as follows: Footnote 2: Upon setting \(\Delta\tau_{jump}=-\frac{D\sin\theta_{1}}{2f_{c}}\), all spectral copies except the first copy at \(\theta_{1}\), exhibit beam-squint. Hence, the angular separation between the first and \(K^{th}\) grating lobes is \(\frac{2(K-1)}{\gamma D}\) where \(\gamma=\frac{1}{f_{c}}\left(f_{c}+\frac{BW}{2}-\frac{BW}{2K}\right)\) \[D=\left\lceil\frac{2(K-1)}{\gamma|\sin\theta_{2}-\sin\theta_{1}}\right\rceil \tag{14}\] Further, setting \(\Delta\tau_{jump}=\frac{-D\sin\theta_{1}}{2f_{c}}\) and \(\Delta\phi_{jump}=0\) creates \(D\) grating lobes at \(\theta_{act}^{(i)}\)\(\forall i=1,...,D\), given as follows, out of which \(\theta_{act}^{(q)}|_{q=1,...,K}\) fall in the range \([\theta_{1},\theta_{2}]\). \[\theta_{act}^{(q)}=\sin^{-1}\left(\mathrm{mod}\left(\sin\theta_{1}+(q-1)\frac {2}{D}+1,2\right)-1\right) \tag{15}\] **Stage II:** The next step is to design the frequency-spatial filter \(F(\theta,f_{m})\) to achieve the desired sub-band-specific filtering of the grating lobes as shown in Fig. 5(b). For given grating lobes at \(\theta_{act}^{(i)}|_{i=1,...,D}\), the choice of filter parameters \(\Delta\tau_{step}\) and \(\Delta\phi_{step}\) determines the exact sub-band-angle mapping achieved, as is seen in the examples in Fig. 6. In order to ensure \(K\) equal sub-bands that map to the \(K\) angles \(\theta_{act|q=1,...,K}^{(q)}\), we need to design \(\Delta\tau_{step}\) and \(\Delta\phi_{step}\) in a manner as to make the filter-centre trajectory \(\theta_{o}(f_{m})\) intersect the \(K\) grating lobes at the centres of the respective sub-bands, as shown in Fig. 5(b). For example, to achieve the beam pattern in Fig. 5(a), the first sub-band centred at \(f^{(1)}=f_{c}-BW/2+BW/(2K)\) must map to \(\theta_{act}^{(1)}=\theta_{1}\) whereas the \(K^{th}\) sub-band centred at \(f^{(K)}=f_{c}+BW/2-BW/(2K)\) must map to \(\theta_{act}^{(K)}=\theta_{2}\). Consequently, \(\Delta\tau_{step}\) and \(\Delta\phi_{step}\)3 can be obtained as follows: Footnote 3: \(\Delta\phi_{step}\) is obtained by solving \(\theta_{o}(f^{(K)})=\theta_{2}\) in (12) with a substitution of \(\Delta\tau_{step}\) from (16), which upon simplification gives (17). 
\[\Delta\tau_{step}=-\frac{1}{2}\frac{\partial\sin\theta_{o}(f_{m})}{\partial f _{m}}=\frac{f^{(1)}\sin\theta_{1}-f^{(K)}\sin\theta_{2}}{2f_{c}(K-1)\frac{BW}{K }} \tag{16}\] \[\Delta\phi_{step}=-\pi\frac{f^{(K)}}{f_{c}}\left(\sin\theta_{2}+2f_{c}\Delta \tau_{step}\right) \tag{17}\] ### _Mapping discrepancies with uniform Staircase codebooks_ The first step to generating directional sub-band-specific beams mapped to angles \(\theta^{(q)}\big{|}_{q=1,...,K}\) as shown in (13), Fig. 5: (a) Target sub-band-angle mapping. (b) Design of grating lobes and spatial filter \(F(\theta,f_{m})\) to achieve the beam-pattern in (a). involves setting \(\Delta\tau_{jump}=-\frac{D\sin\theta_{1}}{2f_{c}}\) and \(\Delta\phi_{jump}=0\). This results in grating lobes at angles \(\theta_{act}^{(i)}|_{i=1,...,K}\) as shown in (15). Since the uniform Staircase codebook constrains the (uniform) step-size \(D\) to be an integer, designing \(D\) as per (14) results in a mismatch or discrepancy between the target and actual angular levels, i.e. \(\theta^{(q)}\neq\theta_{act}^{(q)}\)\(\forall q\in\{2,...,K\}\), as can be seen in Fig. 7(a), where the target sub-band-angle map is shown in red. This can be verified by substituting \(D=\lceil\frac{2(K-1)}{\gamma\left|\sin\theta_{2}-\sin\theta_{1}\right|}\rceil\) into (15) and comparing with (13). Thus, staircase TTD codebooks with uniform step-size suffer from mapping discrepancies which inhibit our ability to achieve the desired sub-band-angle map. ### _Alternative Staircase to overcome mapping discrepancies_ In this section, we formulate a Staircase TTD codebook with non-uniform step-size, relaxing the requirement of \(D\) being an integer. The uniform Staircase TTD codebook described in (4) and (5) can be visualized as having element-wise increments of \(\Delta\tau_{step}\) with _wrapping around_ by a magnitude of \(-(\Delta\tau_{jump}-D\Delta\tau_{step})\) occurring at every \(n^{th}\) array element satisfying \(\mathrm{mod}(n-1,D)=0\). This _wrapping around_ is triggered by the array index \(n\) and results in a Staircase codebook with uniform integer step-size \(D\). Instead, we can define a new Staircase TTD codebook where the wrapping around is triggered every time a certain magnitude threshold is exceeded, in the following manner. \[\begin{split}\tau_{n}&=\mathrm{mod}\left((n-1) \Delta\tau_{step},D\Delta\tau_{step}-\Delta\tau_{jump}\right)\\ \phi_{n}&=\mathrm{mod}\left((n-1)\Delta\phi_{step}, D\Delta\phi_{step}-\Delta\phi_{jump}\right)\end{split} \tag{18}\] This new formulation results in a Staircase TTD codebook with non-uniform step-size. Thus, the parameter \(D\), which now controls only the angular spacing between grating lobes, is no longer constrained to be an integer, and can be selected as: \[D=\frac{2(K-1)}{\gamma\left(\sin\theta_{2}-\sin\theta_{1}\right)} \tag{19}\] With \(\Delta\tau_{jump}=-\frac{D\sin\theta_{1}}{2f_{c}}\), the actual grating lobes now coincide with the target angular levels \(\theta^{(q)}=\theta_{act}^{(q)}\forall\)\(q=1,...,K\), thereby resolving the mapping discrepancy as seen in Fig. 7(b). Table I summarises the Staircase TTD codebook design to achieve sub-band-specific beams shown in Fig. 5(a). ### _Constraints on achievable sub-band-angle mappings_ The Staircase TTD codebook formulation described in Sec. 
IV-C can realize sub-band-beams that map to sinusoidally equidistant angles (13) in a specified sector \([\theta_{1},\theta_{2}]\) in monotonically increasing (\(\theta_{1}<\theta_{2}\)) or monotonically decreasing (\(\theta_{1}>\theta_{2}\)) patterns. For a given array size \(N_{T}\), a sub-band angle map occupying the sector \([\theta_{1},\theta_{2}]\) can be realized only if the following condition, which ensures that the array is large enough to induce _wrapping around_, holds: \[\Big{\lceil}\frac{2(K-1)}{\gamma\left(\sin\theta_{2}-\sin\theta_{1}\right)} \Big{\rceil}<N_{T} \tag{20}\] Further, cyclic rotations of the monotonic sub-band-angle maps as shown in Fig. 8 are possible only when \(\gamma|\sin\theta_{2}-\sin\theta_{1}|>2(K-1)/(K+1)\), and can be achieved by merely changing the filter parameter \(\Delta\phi_{step}\), keeping all other codebook parameters fixed. For example, we can map the first sub-band centred at \(f^{(1)}=f_{c}-BW/2+BW/(2K)\) to angle \(\theta^{(i)}\), \(i\in\{1,...,K\}\) by setting \(\Delta\phi_{step}\) as \(\Delta\phi_{step}=-\pi\Big{(}\frac{f^{(1)}}{f_{c}}\sin\theta^{(i)}+2f^{(1)} \Delta\tau_{step}\Big{)}\). ## V Numerical Results This section studies the performance of sub-band-beams designed using the Staircase TTD codebook for the system model described in Sec. II, in terms of the spectral efficiency of the \(1\) BS and \(K\) UE network. We present performance comparison with state-of-the-art methods, namely, the iterative weighted Least Squares optimization algorithm (JPTA iter.) presented in [4] with 20 training iterations, and the closed-form Least Squares solution (mmFlexible) proposed by [5]. \begin{table} \begin{tabular}{|c|} \hline **Given:**\(K\) UE at angles \(\theta^{(q)}|_{q=1,...,K}\in[\theta_{1},\theta_{2}]\), \(\theta_{1}\neq\theta_{2}\) \\ BS has \(N_{T}\times 1\) Analog TTD array. \(\gamma=1+\frac{BW}{2f_{c}}-\frac{BW}{2Kf_{c}}\) \\ \hline Design TTD delays and phase shifts \(\boldsymbol{\tau},\boldsymbol{\Phi}\in\mathbb{R}^{N_{T}\times 1}\) as follows: \\ 1. \(D=\frac{2(K-1)}{\gamma(\sin\theta_{2}-\sin\theta_{1})}\); \(\Delta\tau_{jump}=-\frac{D\sin\theta_{1}}{2f_{c}}\); \(\Delta\phi_{jump}=0\) \\ 2. \(\Delta\tau_{step}\), \(\Delta\phi_{step}\) based on (16) and (17). \\ 3. \(\tau_{n}=\mathrm{mod}\left((n-1)\Delta\tau_{step},D\Delta\tau_{step}-\Delta\tau_ {jump}\right)\) \\ \(\phi_{n}=\mathrm{mod}\left((n-1)\Delta\phi_{step},D\Delta\phi_{step}-\Delta \phi_{jump}\right)\) \\ \hline \(\theta^{(q)}=\theta_{act}^{(q)}=\sin^{-1}\left(\sin\theta_{1}+(q-1)\frac{\sin \theta_{2}-\sin\theta_{1}}{K-1}\right)\big{|}_{q=1,...,K}\) \\ \hline \end{tabular} \end{table} TABLE I: Staircase TTD codebook design to realize sub-band-specific beams described in Sec. II and shown in Fig. 5. Fig. 7: **(a) Uniform Staircase codebook (4),(5) enforces \(D\in\mathbb{Z}\), resulting in discrepancy between target (shown in red) and actual sub-band-angle maps. **(b)** New Staircase (18) allows \(D\in\mathbb{R}\), thereby resolving mapping discrepancy. Here, \(\{\theta_{1},\theta_{2}\}=\{-\pi/6,\pi/4\}\).** All methods are compared with the theoretical upper bound represented by the ideal best-case beam. The BS operates at \(f_{c}=60GHz\) with \(M_{tot}=4096\) subcarriers. We consider \(BW=2GHz\), \(N_{T}=32\), \(K=5\), and Signal-to-Noise Ratio (SNR) of \(10dB\), unless specified otherwise. Spectral efficiency results are averaged over all realizable beam patterns as per Table I for \(\{\theta_{1},\theta_{2}\}\in[-75^{\circ},75^{\circ}]\). Fig. 
9(a) studies spectral efficiency as a function of the number of users \(K\) (or sub-bands), for \(\{\theta_{1},\theta_{2}\}\in[-75^{\circ},75^{\circ}]\). JPTA iter [4] performs the best for all \(K\), closely followed by mmFlexible [5]. For \(K=2\), Staircase TTD suffers noticeable degradation compared to JPTA iter and mmFlexible. However, as the number of users increases (\(K>4\)), the performance of Staircase TTD matches up to that of JPTA iter and mmFlexible. This can be explained by studying the achieved beamforming gain sliced at the target angles \(\theta^{(k)}|_{k=1,...,K}\) (eqn. (13)), denoted by \(\mathcal{B}_{k}(f_{m})\) and defined as follows. \[\mathcal{B}_{k}(f_{m})=G(\theta^{(k)},f_{m})\ \ \forall\ k=1,...,K,\ \ \forall f_{m} \tag{21}\] where \(G(\theta,f_{m})\) is the beamforming gain function defined in (11). Fig. 10(a) and Fig. 10(b) depict the on-target beamforming gain \(\mathcal{B}_{k}(f_{m})|_{k=1,...,K}\) for \(K=2\) and \(K=5\) users respectively, for \(\{\theta_{1},\theta_{2}\}=\{-30^{\circ},40^{\circ}\}\). For \(K=2\), the average on-target gain achieved by Staircase TTD is lower than both mmFlexible and JPTA iter. However, when \(K=5\), Staircase TTD achieves comparable on-target gain to both mmFlexible and JPTA iter. This is because the beam design methodology of Staircase TTD, which involves aligning the on-target-gain maxima with the respective sub-band centres, is not target-gain optimal for smaller \(K(<4)\), and is hence outperformed by the optimization-rooted mmFlexible and JPTA iter. However, a higher \(K\) places stricter constraints on beam optimization, making the optimal solution converge to the beam design methodology of Staircase TTD as \(K\) increases. This can be seen in Fig. 10(b), where Staircase TTD not only achieves comparable average on-target gain to JPTA iter and mmFlexible, but also has its gain maxima aligned with those of JPTA iter and mmFlexible when \(K=5\). This explains the observations made from Fig. 9(a). Fig. 9: Performance evaluation of Staircase TTD sub-band-specific beams for multi-user data communication. Here, \(f_{c}=60GHz\), \(BW=2GHz\), \(K=5\), \(N_{T}=32\), \(\theta_{1},\theta_{2}\in(-75^{\circ},75^{\circ})\), and \(SNR=10dB\) unless specified otherwise. Fig. 9(b) and Fig. 9(c) study the effect of \(BW\) and BS array size \(N_{T}\), respectively, on the spectral efficiency for \(K=5\) users. When \(BW/f_{c}\leq 5\%\), Staircase TTD achieves comparable performance to both JPTA iter and mmFlexible. For \(BW/f_{c}>5\%\), Staircase TTD is seen to exhibit greater robustness to beam squint effects compared to mmFlexible, but is outperformed by JPTA iter. In Fig. 9(c), Staircase TTD has comparable spectral-efficiency to both JPTA iter and mmFlexible for \(N_{T}\leq 64\). Staircase TTD matches up to JPTA iter and considerably outperforms mmFlexible as \(N_{T}\) increases thereafter. Fig. 9(d) shows that Staircase TTD achieves comparable performance to both JPTA iter and mmFlexible across SNRs for \(K=5\), \(N_{T}=32\), and \(BW=2\)GHz. Therefore, in summary, Staircase TTD achieves comparable performance to that of JPTA iter and mmFlexible when \(K>4\), \(BW/f_{c}\leq 5\%\) and \(N_{T}\leq 64\), while outperforming mmFlexible when \(BW/f_{c}>5\%\) and \(N_{T}>64\). 
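To make the design concrete, the following is a minimal numerical sketch of the closed-form construction summarized in Table I (the function and variable names are ours, and the defaults mirror the simulation setup above; it simply transcribes equations (16), (17), (19) and the non-uniform Staircase codebook (18)):

```python
import numpy as np

def staircase_ttd_codebook(K, theta1, theta2, fc=60e9, BW=2e9, NT=32):
    """Sketch of the Staircase TTD design of Table I: per-antenna delays tau_n and
    phase shifts phi_n realizing K sub-band-specific beams covering [theta1, theta2]."""
    s1, s2 = np.sin(theta1), np.sin(theta2)
    gamma = 1 + BW / (2 * fc) - BW / (2 * K * fc)          # beam-squint correction factor
    D = 2 * (K - 1) / (gamma * (s2 - s1))                  # non-integer step size, eq. (19)
    dtau_jump, dphi_jump = -D * s1 / (2 * fc), 0.0
    f1 = fc - BW / 2 + BW / (2 * K)                        # centre of the first sub-band
    fK = fc + BW / 2 - BW / (2 * K)                        # centre of the K-th sub-band
    dtau_step = (f1 * s1 - fK * s2) / (2 * fc * (K - 1) * BW / K)   # eq. (16)
    dphi_step = -np.pi * (fK / fc) * (s2 + 2 * fc * dtau_step)      # eq. (17)
    n = np.arange(NT)                                      # plays the role of (n - 1), n = 1, ..., N_T
    tau = np.mod(n * dtau_step, D * dtau_step - dtau_jump)           # eq. (18)
    phi = np.mod(n * dphi_step, D * dphi_step - dphi_jump)
    return tau, phi

# Example: K = 5 sub-band-beams between -30 and 40 degrees
tau, phi = staircase_ttd_codebook(5, np.deg2rad(-30), np.deg2rad(40))
```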
## VI Future work While this work focuses on analog codebook design for sub-band-beam synthesis and theoretical performance evaluation in terms of spectral efficiency, our future work would study the practical challenges in RF front-end design to enable the prescribed sub-band-multiplexed multi-user data communication in realistic multi-user networks. In particular, we would study the impact of TTD hardware constraints, namely, delay range constraints [11], limited phase shifter resolution, and non-linearity of circuit delays [12], on the performance of sub-band-beams. In addition, a study of cross-sub-band interference and its mitigation is imperative for enabling sub-band-specific multi-user communication. Further, we would also study analog Staircase TTD codebooks with multi-stage frequency-spatial filtering, and multi-RF chain Staircase codebooks to realize beam patterns with arbitrary sub-band-angle mapping for highly flexible user-resource assignment. ## VII Conclusions This paper proposes a structured, closed-form design of analog TTD codebook based on dual-stage frequency-spatial filter design to realize directional sub-band-beams to support simultaneous multi-user data communication. By implementing sub-band-selective filtering of directional grating lobes, it achieves beams with the required sub-band-angle mapping. It also delineates constraints on achievable sub-band-angle maps using the proposed codebook. The proposed method, besides espousing a conceptual visualization of sub-band-beam design, presents a low-cost and low-complexity analog TTD codebook design that matches the performance of optimization-rooted state-of-the-art approaches in large networks and exhibits reasonable robustness to beam-squint at large bandwidths.
2309.16549
The subpower membership problem of 2-nilpotent algebras
The subpower membership problem SMP(A) of a finite algebraic structure A asks whether a given partial function from A^k to A can be interpolated by a term operation of A, or not. While this problem can be EXPTIME-complete in general, Willard asked whether it is always solvable in polynomial time if A is a Mal'tsev algebra. In particular, this includes many important structures studied in abstract algebra, such as groups, quasigroups, rings, and Boolean algebras. In this paper we give an affirmative answer to Willard's question for a large class of 2-nilpotent Mal'tsev algebras. We furthermore develop tools that might be essential in answering the question for general nilpotent Mal'tsev algebras in the future.
Michael Kompatscher
2023-09-28T16:00:37Z
http://arxiv.org/abs/2309.16549v1
# The subpower membership problem of 2-nilpotent algebras ###### Abstract The subpower membership problem \(\mathrm{SMP}(\mathbf{A})\) of a finite algebraic structure \(\mathbf{A}\) asks whether a given partial function from \(A^{k}\) to \(A\) can be interpolated by a term operation of \(\mathbf{A}\), or not. While this problem can be EXPTIME-complete in general, Willard asked whether it is always solvable in polynomial time if \(\mathbf{A}\) is a Mal'tsev algebras. In particular, this includes many important structures studied in abstract algebra, such as groups, quasigroups, rings, Boolean algebras. In this paper we give an affirmative answer to Willard's question for a big class of 2-nilpotent Mal'tsev algebras. We furthermore develop tools that might be essential in answering the question for general nilpotent Mal'tsev algebras in the future. subpower membership problem, Mal'tsev algebra, compact representation, nilpotence, clonoids This paper was supported by the Charles University project UNCE/SCI/022. ###### Acknowledgements. I would like to thank Peter Mayr for introducing me to difference clonoids, and giving several helpful comments on earlier versions of this paper. ## 1 Introduction It is a recurring and well-studied problem in algebra to describe the closure of a given list of elements under some algebraic operations (let us only mention the affine and linear closure of a list of vectors, or the ideal generated by a list of polynomials). But also in a computational context, this problem has a rich history, appearing in many areas of computer science. In its formulation as _subalgebra membership problem_, the task is to decide whether a given finite list of elements of an algebraic structure generates another element or not. Depending on the algebraic structures studied, a variety of different problems emerges. One of the most well-known examples is the _subgroup membership problem_, in which the task is to decide, if for a given set of permutations \(\alpha_{1},\ldots,\alpha_{n}\) on a finite set \(X\), another permutation \(\beta\) belongs to the subgroup generated by \(\alpha_{1},\ldots,\alpha_{n}\) in \(S_{X}\). This problem can be solved in polynomial-time by the famous Schreier-Sims algorithm [30], whose runtime was analysed in [15] and [19]. The existence of such efficient algorithms is however not always guaranteed: if the symmetric group \(S_{X}\) is for instance replaced by the full transformation semigroup on \(X\), the corresponding membership problem is \(\mathsf{PSPACE}\)-complete [22]. A common feature of many algorithms for the subalgebra membership problem is to generate canonical generating sets of some sorts (such as computing the basis of a vector space via Gaussian elimination, or computing a Grobner basis via Buchberger's algorithm to solve the ideal membership problem [6]). But, in general, this is where the similarities end - depending on the algebraic structure, and the encoding of the input, the problem can range over a wide range of complexities, and have applications in vastly different areas such as cryptography [28, 29], computer algebra [6, 24], or proof complexity [22, 21]. In this paper, we study a version of the subalgebra membership problem that is called the _subpower membership problem_. 
For a fixed, finite algebraic structure \(\mathbf{A}\) (henceforth also just called an _algebra_) its subpower membership problem \(\mathrm{SMP}(\mathbf{A})\) is the problem of deciding if a given tuple \(\mathbf{b}\in\mathbf{A}^{k}\) is in the subalgebra of \(\mathbf{A}^{k}\) generated by some other input tuples \(\mathbf{a}_{1},\ldots,\mathbf{a}_{n}\in\mathbf{A}^{k}\) (here \(n\) and \(k\) are not fixed, but part of the input). This is equivalent to checking whether the \(n\)-ary partial function that maps \(\mathbf{a}_{1},\ldots,\mathbf{a}_{n}\) component-wise to \(\mathbf{b}\) can be interpolated by a term function of \(\mathbf{A}\). For example, if \(p\) is a prime, \(\mathrm{SMP}(\mathbb{Z}_{p})\) is the problem of checking whether some vector \(\mathbf{b}\in\mathbb{Z}_{p}^{k}\) is in the linear closure of \(\mathbf{a}_{1},\ldots,\mathbf{a}_{n}\in\mathbb{Z}_{p}^{k}\); this can easily be solved by Gaussian elimination. More generally, for any finite group \(\mathbf{G}\), \(\mathrm{SMP}(\mathbf{G})\) can be solved in polynomial time by a version of the Schreier-Sims algorithm [32]. Besides being a natural problem in algebra, the subpower membership problem has found applications in learning algorithms [7, 12, 17]. Moreover, an efficient algorithm for \(\mathrm{SMP}(\mathbf{A})\) implies that it is also feasible to represent the relations invariant under \(\mathbf{A}\) by some generating set of tuples. It was in particular remarked (see e.g. [9]) that a polynomial-time algorithm for \(\mathrm{SMP}(\mathbf{A})\) would allow one to define infinitary constraint satisfaction problems, in which the constraint relations are given by some generating tuples (with respect to \(\mathbf{A}\)). This infinitary version of CSPs has the benefit that most of the algebraic machinery for CSPs (see e.g. [3]) still applies. Exhaustively generating the whole subalgebra generated by \(\mathbf{a}_{1},\ldots,\mathbf{a}_{n}\) in \(\mathbf{A}^{k}\) gives an exponential time algorithm for \(\mathrm{SMP}(\mathbf{A})\). And, in general, we cannot expect to do better: In [23] Kozik constructed a finite algebra \(\mathbf{A}\) for which \(\mathrm{SMP}(\mathbf{A})\) is \(\mathsf{EXP}\)-complete. Even semigroups can have a \(\mathsf{PSPACE}\)-complete subpower membership problem [8]. However, for so-called _Mal'tsev algebras_, better upper bounds are known. Mal'tsev algebras are algebras defined by having a _Mal'tsev term_ \(m\), i.e. a term satisfying the identities \(y=m(x,x,y)=m(y,x,x)\) for all \(x,y\). Mal'tsev algebras lie at the intersection of many areas of mathematics: they include algebraic structures of ubiquitous importance (groups, fields, vector spaces), but also appear in logic (Boolean algebras, Heyting algebras), commutative algebra (rings, modules, \(K\)-algebras), and non-associative mathematics (quasigroups, loops). Mayr showed in [25] that the subpower membership problem of every Mal'tsev algebra is in \(\mathsf{NP}\). His proof is based on the fact that every subalgebra \(\mathbf{R}\leq\mathbf{A}^{n}\) has a small generating set, which generates every element of \(\mathbf{R}\) in a canonical way (a so-called _compact representation_). Thus, to solve the subpower membership problem, one can "guess" a compact representation of the subalgebra generated by \(\mathbf{a}_{1},\ldots,\mathbf{a}_{n}\), and then check in polynomial time if it generates \(\mathbf{b}\). 
If such a compact representation can moreover be found in _deterministic_ polynomial time, then \(\mathrm{SMP}(\mathbf{A})\) is in \(\mathsf{P}\); this is, in fact, the dominant strategy to prove tractability. So far, the existence of such polynomial time algorithms was verified for groups and rings [32, 15], supernilpotent algebras [25], and algebras that generate residually finite varieties [9]. On the other hand, no examples of \(\mathsf{NP}\)-hard or intermediate complexity are known. This leads to the question whether \(\mathrm{SMP}(\mathbf{A})\in\mathsf{P}\) for _all_ finite Mal'tsev algebras \(\mathbf{A}\) [32]. On a broader scale, this question was also posed for algebras with _few subpowers_ [17, Question 8]. An elementary class of Mal'tsev algebras for which the question still remains open are _nilpotent_ algebras. In fact, they can also be seen as an important stepping stone in answering [17, Question 8], as nilpotent Mal'tsev algebras coincide with nilpotent algebras with few subpowers. Generalizing the concept of nilpotent groups, nilpotent algebras are defined by having a central series of congruences. While they have several nice structural properties, in general nilpotent algebras do not satisfy the two finiteness conditions mentioned above (supernilpotence, residual finiteness); thus they are a natural starting point when trying to generalize known tractability results. But even for 2-nilpotent algebras not much is known; so far, polynomial-time algorithms were only constructed by ad-hoc arguments for concrete examples (such as Vaughan-Lee's 12-element loop [26]). The first contribution of this paper is to prove that all 2-nilpotent algebras of size \(p\cdot q\) for two primes \(p\neq q\) have a tractable subpower membership problem. In fact, we prove an even stronger result in Theorem 2: \(\mathrm{SMP}(\mathbf{A})\) is in \(\mathsf{P}\), whenever \(\mathbf{A}\) has a central series \(0_{\mathbf{A}}<\rho<1_{\mathbf{A}}\) such that \(|\mathbf{A}/\rho|=p\) is a prime, and the blocks of \(\rho\) have size coprime to \(p\). While this is still a relatively restricted class of nilpotent algebras, our methods have the potential to generalize to all 2-nilpotent Mal'tsev algebras and beyond. Thus, our newly developed tools to analyze SMP can be regarded as the second main contribution. More specifically, in Theorem 11 we show that whenever \(\mathbf{L}\otimes\mathbf{U}\) is a _wreath product_ (see Section 3), such that \(\mathbf{U}\) is supernilpotent, then \(\mathrm{SMP}(\mathbf{L}\otimes\mathbf{U})\) reduces to \(\mathrm{SMP}(\mathbf{L}\times\mathbf{U})\) (which is polynomial-time solvable by [25]) and a version of the subpower membership problem for a multi-sorted algebraic object called a _clonoid_ from \(\mathbf{U}\) to \(\mathbf{L}\). This reduction in particular applies to all 2-nilpotent algebras; an analysis of clonoids between affine algebras then leads to Theorem 2. If, in future research, we could get rid of the condition of \(\mathbf{U}\) being supernilpotent, this would provide a strong tool in studying general Mal'tsev algebras, as every Mal'tsev algebra with non-trivial center can be decomposed into a wreath product. Our paper is structured as follows: Section 2 contains preliminaries and some background on universal algebra. In Section 3 we discuss how Mal'tsev algebras with non-trivial center can be represented by a wreath product and we introduce the concept of _difference clonoid_ of such a representation. 
In Section 4 we discuss some situations, in which the subpower membership problem of a wreath product can be reduced to the membership problem of the corresponding difference clonoid. In particular, we prove Theorem 11. Section 5 contains an analysis of clonoids between \(\mathbb{Z}_{p}\) and coprime Abelian groups, which then leads to the proof of our main result, Theorem 20. In Section 6 we discuss some possible directions for future research. ## 2 Preliminaries In the following, we are going to discuss some necessary notions from universal algebra. For more general background we refer to the textbooks [4, 11]. For background on commutator theory we refer to [14] and [2]. For an introduction to Malt'sev algebras and compact representations we refer to [5, Chapters 1.7-1.9]. In this paper, we are going to denote tuples by lower case bold letters, e.g. \(\mathbf{a}\in A^{k}\). In order to avoid double indexing in some situations, we are going to use the notation \(\mathbf{a}(i)\) to denote the \(i\)-th entry of \(\mathbf{a}\), i.e. \(\mathbf{a}=(\mathbf{a}(1),\mathbf{a}(2),\ldots,\mathbf{a}(k))\). However, otherwise we are going to follow standard notation as used e.g. in [4]. ### Basic notions for general algebras An _algebra_\(\mathbf{A}=(A;(f_{i}^{\mathbf{A}})_{i\in I})\) is a first-order structure in a purely functional language \((f_{i})_{i\in I}\) (where each symbol \(f_{i}\) has an associated _arity_). We say \(\mathbf{A}\) is finite if its domain \(A\) is finite. A _subalgebra_\(\mathbf{B}=(B;(f_{i}^{\mathbf{B}})_{i\in I})\) of an algebra \(\mathbf{A}=(A;(f_{i}^{\mathbf{A}})_{i\in I})\) (denoted \(\mathbf{B}\leq\mathbf{A}\)) is an algebra obtained by restricting all _basic operations_\(f_{i}^{\mathbf{A}}\) to a subset \(B\subseteq A\) that is invariant under all \(f_{i}^{\mathbf{A}}\)'s. The subalgebra generated by a list of elements \(a_{1},\ldots,a_{n}\), denoted by \(\mathrm{Sg}_{\mathbf{A}}(a_{1},\ldots,a_{n})\) is the smallest subalgebra of \(\mathbf{A}\) that contains \(a_{1},\ldots,a_{n}\). The _product_\(\prod_{i\in I}\mathbf{A}_{i}\) of a family of algebras \((\mathbf{A}_{i})_{i\in I}\) in the same language is defined as the algebra with domain \(\prod_{i\in I}A_{i}\), whose basic operations are defined coordinate-wise. The power \(\mathbf{A}^{n}\) is the product of \(n\)-many copies of \(\mathbf{A}\). Subalgebras of (finite) powers of \(\mathbf{A}\) are sometimes also called _subpowers_ of \(\mathbf{A}\), which motivates the name "subpower membership problem". So, formally the subpower membership problem of \(\mathbf{A}\) can be stated as follows: \[\begin{array}{ll}\mbox{\rm SMP}(\mathbf{A})&\\ \mbox{\rm Input: }\mathbf{b},\mathbf{a}_{1},\ldots,\mathbf{a}_{n}\in\mathbf{A}^{k} \mbox{ for some }n,k\in\mathbb{N}\\ \mbox{\rm Question: Is }\mathbf{b}\in\mbox{\rm Sg}_{\mathbf{A}^{k}}(\mathbf{a}_{1}, \ldots,\mathbf{a}_{n})?\end{array}\] Note that the subpowers of \(\mathbf{A}\) are exactly the relations on \(A\) that are invariant under \(\mathbf{A}\). A _congruence_\(\alpha\) of \(\mathbf{A}\) is an equivalence relation on \(A\) that is invariant under \(\mathbf{A}\). We write \(\mbox{\rm Con}(\mathbf{A})\) for the lattice of all congruence of \(\mathbf{A}\). We denote the minimal and maximal element of this lattice by \(0_{\mathbf{A}}=\{(x,x)\mid x\in A\}\) and \(1_{\mathbf{A}}=\{(x,y)\mid x,y\in A\}\). For every congruence \(\alpha\in\mbox{\rm Con}(\mathbf{A})\), one can form a quotient algebras \(\mathbf{A}/\alpha\) in the natural way. 
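As a concrete instance of \(\mathrm{SMP}(\mathbf{A})\) as stated above, consider \(\mathbf{A}=\mathbb{Z}_{p}\): as noted in the introduction, membership then amounts to solving a linear system modulo \(p\) by Gaussian elimination. A minimal sketch (the function name is ours):

```python
import numpy as np

def smp_zp(gens, b, p):
    """Decide SMP(Z_p): is b in the subalgebra of Z_p^k generated by gens?
    Over Z_p the generated subalgebra is the linear span of the generators,
    so this is Gaussian elimination on the augmented system [A | b] mod p."""
    A = (np.array(gens, dtype=np.int64).T) % p            # k x n, columns are the generators
    M = np.hstack([A, (np.array(b, dtype=np.int64) % p).reshape(-1, 1)])
    rows, cols = M.shape
    r = 0
    for c in range(cols - 1):                             # pivot only over generator columns
        piv = next((i for i in range(r, rows) if M[i, c]), None)
        if piv is None:
            continue
        M[[r, piv]] = M[[piv, r]]
        M[r] = (M[r] * pow(int(M[r, c]), p - 2, p)) % p   # normalize pivot (p is prime)
        for i in range(rows):
            if i != r and M[i, c]:
                M[i] = (M[i] - M[i, c] * M[r]) % p
        r += 1
    # the system is inconsistent iff some row reads (0, ..., 0 | nonzero)
    return not any(M[i, -1] and not M[i, :-1].any() for i in range(rows))

# is (1, 3, 0) generated by (1, 1, 0) and (0, 1, 0) over Z_5?  (yes: 1*(1,1,0) + 2*(0,1,0))
print(smp_zp([[1, 1, 0], [0, 1, 0]], [1, 3, 0], 5))       # True
```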
The _term operations_ of an algebra \(\mathbf{A}\) are all finitary operations that can be defined by a composition of basic operations of \(\mathbf{A}\). Two standard ways to represent them is by terms or circuits in the language of \(\mathbf{A}\). For a term or circuit \(t(x_{1},\ldots,x_{n})\) in the language of \(\mathbf{A}\), we write \(t^{\mathbf{A}}(x_{1},\ldots,x_{n})\) for the induced term operation on \(A\). Occasionally, if it is clear from the context, we are not going to distinguish between a term/circuit and the corresponding term operation. The term operations of an algebra \(\mathbf{A}\) are closed under composition and contain all projections, therefore they form an algebraic object called a _clone_. For short, we denote this _term clone_ of an algebra \(\mathbf{A}\) by \(\mbox{\sf Clo}(\mathbf{A})\). Note that \(\mbox{\rm Sg}_{\mathbf{A}^{k}}(\mathbf{a}_{1},\ldots,\mathbf{a}_{n})=\{t( \mathbf{a}_{1},\ldots,\mathbf{a}_{n})\mid t\in\mbox{\sf Clo}(\mathbf{A})\}\). We call a ternary operation \(m^{\mathbf{A}}(x,y,z)\in\mbox{\sf Clo}(\mathbf{A})\) a _Mal'tsev term_ if it satisfies the identities \(m^{\mathbf{A}}(y,x,x)=m^{\mathbf{A}}(x,x,y)=y\) for all \(x,y\in A\), and call \(\mathbf{A}\) a _Mal'tsev algebra_ if it has a Mal'tsev term. For instance, every group is a Mal'tsev algebra with Mal'tsev term \(m(x,y,z)=xy^{-1}z\). Mal'tsev terms are a classic topic of study in universal algebra (see e.g. [4, Chapter 7]), and are in particular known to characterize congruence permutable varieties. ### Clonoids We are also going to rely on a multi-sorted generalisation of clones, so-called _clonoids_ that were first introduced in [1] (in a slightly less general way). For a set of operations between two sets \(\mathcal{C}\subseteq\{f\colon A^{n}\to B\mid n\in\mathbb{N}\}\), and \(k\in\mathbb{N}\) let us write \(\mathcal{C}^{(k)}=\{f\colon A^{k}\to B\mid f\in\mathcal{C}\}\) for the subset of \(k\)-ary functions. Then, for two algebras \(\mathbf{A}=(A,(f_{i})_{i\in I})\), \(\mathbf{B}=(B,(g_{j})_{j\in J})\) (in possibly different domains and languages), a set \(\mathcal{C}\subseteq\{f\colon A^{n}\to B\mid n\in\mathbb{N}\}\) is called a _clonoid from \(\mathbf{A}\) to \(\mathbf{B}\)_, or \((\mathbf{A},\mathbf{B})\)_-clonoid_, if it is closed under composition with term operations of \(\mathbf{A}\) from the inside, and \(\mathbf{B}\) from the outside, i.e.: \(\forall n,k\in\mathbb{N}\) **(1)**: \(f\in\mathcal{C}^{(n)},t_{1},\ldots,t_{n}\in\mbox{\sf Clo}(\mathbf{A})^{(k)} \Rightarrow f\circ(t_{1},\ldots,t_{n})\in\mathcal{C}^{(k)}\) **(2)**: \(s\in\mbox{\sf Clo}(\mathbf{B})^{(n)},f_{1},\ldots,f_{n}\in\mathcal{C}^{(k)} \Rightarrow s\circ(f_{1},\ldots,f_{n})\in\mathcal{C}^{(k)}\). ### Commutator theory Commutator theory is the subfield of universal algebra that tries to generalise notions such as central subgroups, nilpotence, or solvability from group theory to general algebras. The most commonly used framework is based on so-called term-conditions, which we outline in the following. Let \(\mathbf{A}\) be an algebra. 
For congruences \(\alpha,\beta,\gamma\in\mbox{\rm Con}(\mathbf{A})\) we say that \(\alpha\)_centralizes \(\beta\) modulo \(\gamma\)_ (and write \(C(\alpha,\beta;\gamma)\)) if and only if for all \(p(\mathbf{x},\mathbf{y})\in\mbox{\sf Clo}(\mathbf{A})\), and all tuples \(\mathbf{a},\mathbf{b}\in A^{n}\), \(\mathbf{c},\mathbf{d}\in A^{m}\), such that \(a_{i}\sim_{\alpha}b_{i}\) for \(i=1,\ldots,n\) and \(c_{j}\sim_{\beta}d_{j}\) for \(j=1,\ldots,m\), the implication \[p(\mathbf{a},\mathbf{c})\sim_{\gamma}p(\mathbf{a},\mathbf{d})\Rightarrow p(\mathbf{b},\mathbf{c})\sim_{\gamma}p(\mathbf{b},\mathbf{d})\] holds. A congruence \(\alpha\) is called _central_ if \(C(\alpha,0_{\mathbf{A}};1_{\mathbf{A}})\) holds. The _center_ is the biggest central congruence. An algebra \(\mathbf{A}\) is called _\(n\)-nilpotent_ if there is a _central series of length \(n\)_, i.e. a series of congruences \(0_{\mathbf{A}}=\alpha_{0}\leq\alpha_{1}\leq\cdots\leq\alpha_{n}=1_{\mathbf{A}}\), such that \(C(\alpha_{i+1},1_{\mathbf{A}};\alpha_{i})\) for \(i=0,\ldots,n-1\). An algebra \(\mathbf{A}\) is called _Abelian_, if it is \(1\)-nilpotent, i.e. \(C(1_{\mathbf{A}},1_{\mathbf{A}};0_{\mathbf{A}})\) holds. We are, however, not going to work directly with these definitions. There is a rich structural theory in the special case of Mal'tsev algebras (and, more generally, in congruence modular varieties [14]) that gives us very useful characterizations of many commutator theoretical properties. By a result of Herrmann [16], a Mal'tsev algebra \(\mathbf{A}\) is Abelian if and only if it is _affine_, i.e. all of its term operations are affine combinations \(\sum_{i=1}^{n}\alpha_{i}x_{i}+c\) over some module; in particular the Mal'tsev term is then equal to \(x-y+z\). More generally, we are going to use a result of Freese and McKenzie [14] that states that a Mal'tsev algebra \(\mathbf{A}\) with a central congruence \(\rho\) can always be written as a _wreath product_ \(\mathbf{L}\otimes\mathbf{U}\), such that \(\mathbf{L}\) is affine and \(\mathbf{U}=\mathbf{A}/\rho\). We are going to discuss such wreath product representations in Section 3. Lastly, we want to mention that the definition of the relation \(C\) naturally generalizes to higher arities \(C(\alpha_{1},\ldots,\alpha_{n},\beta;\gamma)\). This notion was first introduced by Bulatov; we refer to [14] and [2] for more background on _higher commutators_. In particular, an algebra is called _\(k\)-supernilpotent_ if \(C(1_{\mathbf{A}},\ldots,1_{\mathbf{A}};0_{\mathbf{A}})\), where \(1_{\mathbf{A}}\) appears \(k+1\) times. There are several known characterizations of supernilpotent Mal'tsev algebras. We are mainly going to use the following: [Proposition 7.7. in [2]] Let \(\mathbf{A}\) be a \(k\)-supernilpotent Mal'tsev algebra, \(0\in A\) a constant and \(t,s\) two \(n\)-ary terms in the language of \(\mathbf{A}\). Then \(t^{\mathbf{A}}=s^{\mathbf{A}}\) if and only if they are equal on all tuples from the set \(S=\{\mathbf{a}\in A^{n}\mid|\{i\colon\mathbf{a}(i)\neq 0\}|\leq k\}\). (In fact, this is a characterization of \(k\)-supernilpotence for Mal'tsev algebras.) 
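To illustrate the savings that this interpolation property gives, here is a small brute-force sketch (the function name is ours): two term operations of a \(k\)-supernilpotent Mal'tsev algebra can be compared on the polynomially many tuples with at most \(k\) entries different from \(0\), instead of on all \(|A|^{n}\) tuples.

```python
from itertools import combinations, product

def equal_on_small_support(t, s, A, n, k, zero=0):
    """Compare two n-ary operations t, s : A^n -> A on all tuples with at most
    k entries different from `zero`.  When t and s are term operations of a
    k-supernilpotent Mal'tsev algebra, this already decides t == s, using
    O(n^k |A|^k) evaluations instead of |A|^n."""
    for positions in combinations(range(n), min(k, n)):
        for values in product(A, repeat=len(positions)):
            tup = [zero] * n
            for pos, val in zip(positions, values):
                tup[pos] = val
            if t(*tup) != s(*tup):
                return False
    return True

# Example over the Abelian (hence 1-supernilpotent) group Z_4:
# x + 3y and x - y define the same term operation, and support-1 tuples suffice to see it.
Z4 = range(4)
print(equal_on_small_support(lambda x, y: (x + 3 * y) % 4,
                             lambda x, y: (x - y) % 4, Z4, n=2, k=1))   # True
```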
### Compact representations and SMP For any subset \(R\subseteq A^{n}\), we define its _signature_ \(\operatorname{Sig}(R)\) to be the set of all triples \((i,a,b)\in\{1,\ldots,n\}\times A^{2}\), such that there are \(\mathbf{t}_{a},\mathbf{t}_{b}\in R\) that agree on the first \(i-1\) coordinates, and \(\mathbf{t}_{a}(i)=a\) and \(\mathbf{t}_{b}(i)=b\); we then also say that \(\mathbf{t}_{a},\mathbf{t}_{b}\) are _witnesses_ for \((i,a,b)\in\operatorname{Sig}(R)\). If \(\mathbf{A}\) is a Mal'tsev algebra, and \(\mathbf{R}\leq\mathbf{A}^{n}\), then it is known that \(\mathbf{R}\) is already generated by every subset \(S\subseteq\mathbf{R}\) with \(\operatorname{Sig}(S)=\operatorname{Sig}(\mathbf{R})\) [5, Theorem 1.8.2.]. In fact, \(\mathbf{R}\) is then equal to the closure of \(S\) under the Mal'tsev operation \(m\) alone, and a tuple \(\mathbf{a}\) is in \(\mathbf{R}\) iff it can be written as \(m(\ldots m(\mathbf{a}_{1},\mathbf{b}_{2},\mathbf{a}_{2}),\ldots,\mathbf{b}_{n},\mathbf{a}_{n})\), for some \(\mathbf{a}_{i},\mathbf{b}_{i}\in S\). For given \(\mathbf{a}\in\mathbf{R}\) such elements \(\mathbf{a}_{i},\mathbf{b}_{i}\in S\) can be found in time polynomial in \(|S|\), by picking \(\mathbf{a}_{1}\) such that \(\mathbf{a}_{1}(1)=\mathbf{a}(1)\), and \(\mathbf{a}_{i},\mathbf{b}_{i}\in S\) as witnesses for \(m(\ldots m(\mathbf{a}_{1},\mathbf{b}_{2},\mathbf{a}_{2}),\ldots,\mathbf{b}_{i-1},\mathbf{a}_{i-1})(i)\) and \(\mathbf{a}(i)\) at position \(i\). A _compact representation_ of \(\mathbf{R}\leq\mathbf{A}^{n}\) is a subset \(S\subset\mathbf{R}\) with \(\operatorname{Sig}(S)=\operatorname{Sig}(\mathbf{R})\) and \(|S|\leq 2|\operatorname{Sig}(\mathbf{R})|\leq 2n|A|^{2}\). So, informally speaking, compact representations are small generating sets of \(\mathbf{R}\) with the same signature. It is not hard to see that compact representations always exist. Generalizations of compact representations exist also for relations on different domains (\(\mathbf{R}\leq\mathbf{A}_{1}\times\cdots\times\mathbf{A}_{n}\)), and relations invariant under algebras with few subpowers; we refer to [5, Chapter 2] for more background. By the above, \(\operatorname{SMP}(\mathbf{A})\) reduces in polynomial time to the problem of finding a compact representation of \(\operatorname{Sg}_{\mathbf{A}^{k}}(\mathbf{a}_{1},\ldots,\mathbf{a}_{n})\) for some input tuples \(\mathbf{a}_{1},\ldots,\mathbf{a}_{n}\in A^{k}\). We are going to denote this problem by \(\operatorname{CompRep}(\mathbf{A})\). Conversely, it was shown in [9] that finding a compact representation has a polynomial Turing reduction to \(\operatorname{SMP}(\mathbf{A})\). Note further that, to solve \(\operatorname{CompRep}(\mathbf{A})\) it is already enough to find a subset \(S\subseteq R\) with \(\operatorname{Sig}(S)=\operatorname{Sig}(R)\) of polynomial size, since such a set \(S\) can then be thinned out to a compact representation. Let us call a set of pairs \(\{(\mathbf{c},p_{\mathbf{c}})\mid\mathbf{c}\in S\}\) an _enumerated compact representation_ of \(\operatorname{Sg}_{\mathbf{A}^{k}}(\mathbf{a}_{1},\ldots,\mathbf{a}_{n})\), if \(S\) is a compact representation of \(\operatorname{Sg}_{\mathbf{A}^{k}}(\mathbf{a}_{1},\ldots,\mathbf{a}_{n})\), and every \(p_{\mathbf{c}}\) is a circuit in the language of \(\mathbf{A}\) of polynomial size (in \(n\) and \(k\)), such that \(p_{\mathbf{c}}(\mathbf{a}_{1},\ldots,\mathbf{a}_{n})=\mathbf{c}\). Enumerated compact representations were already (implicitly) used in several proofs. In [9, Theorem 4.13.] 
it was shown that, for algebras with few subpowers, enumerated compact representations always exist; this was used to prove that \(\operatorname{SMP}(\mathbf{A})\in\mathsf{coNP}\). Moreover, all of the known polynomial time algorithms for \(\operatorname{CompRep}(\mathbf{A})\), in fact, compute enumerated compact representations. We are in particular going to need the following result that follows from [25]: [[25]] Let \(\mathbf{A}\) be a finite supernilpotent Mal'tsev algebra. Then, there is a polynomial time algorithm that computes an enumerated compact representations of \(\operatorname{Sg}_{\mathbf{A}^{k}}(\mathbf{a}_{1},\ldots,\mathbf{a}_{n})\), for given \(\mathbf{a}_{1},\ldots,\mathbf{a}_{n}\in A^{k}\). Theorem 2 can be seen as a generalization of Gaussian elimination from affine to supernilpotent algebras. We remark that Theorem 2, although not explicitly stated as such in [25], follows directly from the algorithm computing a _group representations_\((T_{1},T_{2},\ldots,T_{k})\) of \(\operatorname{Sg}_{\mathbf{A}^{k}}(\mathbf{a}_{1},\ldots,\mathbf{a}_{n})\) and the fact that for such a group representation, there is a constant \(q\) such that \(T=(T_{1}+q\cdot T_{2}+\cdots+q\cdot T_{k})\) has the same signature as \(\operatorname{Sg}_{\mathbf{A}^{k}}(\mathbf{a}_{1},\ldots,\mathbf{a}_{n})\) (see Lemma 3.1. in [25]). Thus, \(T\) together with its defining circuits forms an enumerated compact representation of \(\operatorname{Sg}_{\mathbf{A}^{k}}(\mathbf{a}_{1},\ldots,\mathbf{a}_{n})\). We are furthermore going to use that there is an algorithm that allows us to fix some values of a relation given by enumerated compact representation: Let \(\mathbf{A}\) be a Mal'tsev algebra. Then, there is a polynomial-time algorithm \(\texttt{Fix-values}(R,a_{1},\ldots,a_{m})\) that, for a given compact representation \(R\) of \(\mathbf{R}=\operatorname{Sg}_{\mathbf{A}^{k}}(X)\), and constants \(a_{1},\ldots,a_{m}\in A\), returns a compact representation \(R^{\prime}\) of \(\{\mathbf{x}\in\mathbf{R}\mid\mathbf{x}(1)=a_{1},\ldots,\mathbf{x}(m)=a_{m}\}\) (or \(\emptyset\) if the relation is empty). If \(R\) is moreover enumerated then \(\texttt{Fix-values}\) also computes polynomial size circuits defining the elements of \(R^{\prime}\) from \(X\). The existence of such a Fix-values algorithm for compact representation is a well-known result ([7], see also [5, Algorithm 5]); the additional statement about _enumerated_ compact representation follows easily from bookkeeping the defining circuits. We prove Lemma 3 in Appendix A. ## 3 Wreath products and difference clonoids In this section, we discuss how to represent Mal'tsev algebras with non-trivial center by a so-called _wreath product_\(\mathbf{L}\otimes\mathbf{U}\), and associate to it its _difference clonoid_, which gives us a measure on how far it is from being the direct product \(\mathbf{L}\times\mathbf{U}\). Let \(\mathbf{U}=(U,(f^{\mathbf{U}})_{f\in F})\) and \(\mathbf{L}=(L,(f^{\mathbf{L}})_{f\in F})\) be two algebras in the same language \(F\), such that \(\mathbf{L}\) is affine. Furthermore, let \(0\in L\) and \(T=(\hat{f})_{f\in F}\) be a family of operations \(\hat{f}\colon U^{n}\to L\), for each \(f\in F\) of arity \(n\). 
Then we define the _wreath product \(\mathbf{L}\otimes^{T,0}\mathbf{U}\)_ as the algebra \((L\times U,(f^{\mathbf{L}\otimes^{T}\mathbf{U}})_{f\in F})\) with basic operations \[f^{\mathbf{L}\otimes^{T,0}\mathbf{U}}((l_{1},u_{1}),\ldots,(l_{n},u_{n}))=(f^{ \mathbf{L}}(l_{1},\ldots,l_{n})+\hat{f}(u_{1},\ldots,u_{n}),f^{\mathbf{U}}(u_{ 1},\ldots,u_{n})),\] (where \(+\) is the addition on \(\mathbf{L}\) with respect to neutral element \(0\)). For simplicity, we are going to write \(\mathbf{L}\otimes\mathbf{U}\), if \(T\) and \(0\) are clear from the context. The name _wreath product_ refers to the fact that this is a special case of VanderWerf's wreath products [31]. We remark that recently also alternative names for \(\mathbf{L}\otimes\mathbf{U}\) were suggested, such as _central extension_ (by Mayr) and _semidirect product_ (by Zhuk). By a result of Freese and McKenzie we can represent Mal'tsev algebras with non-trivial centers as wreath products: [Proposition 7.1. in [14]] Let \(\mathbf{A}\) be a Mal'tsev algebra with a central congruence \(\alpha\), and let \(\mathbf{U}=\mathbf{A}/\alpha\). Then there is an affine algebra \(\mathbf{L}\), an element \(0\in L\) and a set of operations \(T\), such that \(\mathbf{A}\cong\mathbf{L}\otimes^{T,0}\mathbf{U}\). Note that, for a fixed quotient \(\mathbf{U}=\mathbf{A}/\alpha\), there is still some freedom in how to choose the operations \(f^{\mathbf{L}}\) of \(\mathbf{L}\), and the operations \(\hat{f}\colon U^{n}\to L\) in \(T\) (by adding/subtracting constants). To get rid of this problem, we are from now on always going to assume that \(\mathbf{L}\) preserves \(0\), i.e. \(f^{\mathbf{L}}(0,0,\ldots,0)=0\) for all \(f\in F\). It is then easy to observe that wreath products \(\mathbf{L}\otimes^{0,T}\mathbf{U}\) behaves nicely with respect to the direct product \(\mathbf{L}\times\mathbf{U}\) in the same language: Let \(\mathbf{A}\) be a Mal'tsev algebra with wreath product representation \(\mathbf{A}=\mathbf{L}\otimes^{0,T}\mathbf{U}\). Then \(t^{\mathbf{A}}=s^{\mathbf{A}}\Rightarrow t^{\mathbf{L}\times\mathbf{U}}=s^{ \mathbf{L}\times\mathbf{U}}\). Proof.: Note that, for every term \(t\) in the language of \(\mathbf{A}\): \[t^{\mathbf{A}}((l_{1},u_{1}),\ldots,(l_{n},u_{n}))=(t^{\mathbf{L}}(l_{1}, \ldots,l_{n})+\hat{t}(u_{1},\ldots,u_{n}),t^{\mathbf{U}}(u_{1},\ldots,u_{n})),\] for some \(\hat{t}\colon U^{n}\to L\) (this can be shown by induction over the height of the term tree). Clearly \(t^{\mathbf{A}}=s^{\mathbf{A}}\) implies \(t^{\mathbf{U}}=s^{\mathbf{U}}\), and \(t^{\mathbf{L}}-s^{\mathbf{L}}=c\), \(\hat{t}-\hat{s}=-c\) for some constant \(c\in L\). Since, by our assumptions, the operations of \(\mathbf{L}\) preserve \(0\), we get \(t^{\mathbf{L}}=s^{\mathbf{L}}\) and \(\hat{t}=\hat{s}\). Thus \(t^{\mathbf{L}\times\mathbf{U}}=s^{\mathbf{L}\times\mathbf{U}}\). In other terminology, the map \(t^{\mathbf{A}}\mapsto t^{\mathbf{L}\times\mathbf{U}}\) is a surjective _clone homomorphism_ from \(\mathsf{Clo}(\mathbf{A})\) to \(\mathsf{Clo}(\mathbf{L}\times\mathbf{U})\), i.e. a map that preserves arities, projections and compositions. The converse of Observation 3.1 does however not hold, since this map is usually not injective. We define the _difference clonoid_\(\mathrm{Diff}_{0}(\mathbf{A})\) as the kernel of the clone homomorphisms in the following sense: Let \(\mathbf{A}=\mathbf{L}\otimes^{0,T}\mathbf{U}\) be a Mal'tsev algebra given as a wreath product. 
**(1)**: _We define the equivalence relation_ \(\sim\) _on_ \(\mathsf{Clo}(\mathbf{A})\) _by_ \[t^{\mathbf{A}}\sim s^{\mathbf{A}}:\Leftrightarrow t^{\mathbf{L}\times\mathbf{ U}}=s^{\mathbf{L}\times\mathbf{U}}\] **(2)**: _the_ difference clonoid \(\mathrm{Diff}_{0}(\mathbf{A})\) _is defined as the set of all operation_ \(\hat{r}\colon U^{n}\to L\)_, such that there are_ \(t^{\mathbf{A}}\sim s^{\mathbf{A}}\in\mathsf{Clo}(\mathbf{A})\) _with:_ \[t^{\mathbf{A}}((l_{1},u_{1}),\ldots,(l_{n},u_{n})) =(t^{\mathbf{L}}(\mathbf{l})+\hat{t}(\mathbf{u}),t^{\mathbf{U}}( \mathbf{u})) \tag{1}\] \[s^{\mathbf{A}}((l_{1},u_{1}),\ldots,(l_{n},u_{n})) =(t^{\mathbf{L}}(\mathbf{l})+\hat{t}(\mathbf{u})+\hat{r}(\mathbf{ u}),t^{\mathbf{U}}(\mathbf{u})) \tag{2}\] In the following, we will stick to the following convention: Function symbols with a hat will always denote operations from some power of \(U\) to \(L\). For operations \(t,s\colon A^{n}\to A\), and \(\hat{r}\colon U^{n}\to L\) such as in (1) and (2) we are slightly going to abuse notation, and write \(s=t+\hat{r}\) and \(\hat{r}=(s-t)\). We next show that \(\mathrm{Diff}_{0}(\mathbf{A})\) is indeed a clonoid from \(\mathbf{U}\) to \(\mathbf{L}\) (extended by the constant \(0\)). Let \(\mathbf{A}=\mathbf{L}\otimes^{0,T}\mathbf{U}\) be a Mal'tsev algebra given as wreath product. Then: 1. _For all_ \(t\in\mathsf{Clo}(\mathbf{A})\)_,_ \(\hat{r}\in\operatorname{Diff}_{0}(\mathbf{A})\) _also_ \(t+\hat{r}\in\mathsf{Clo}(\mathbf{A})\)_,_ 2. \(\operatorname{Diff}_{0}(\mathbf{A})\) _is a_ \((\mathbf{U},(\mathbf{L},0))\)_-clonoid._ Proof.: To prove (1), let \(t\in\mathsf{Clo}(\mathbf{A})\) and \(\hat{r}\in\operatorname{Diff}_{0}(\mathbf{A})\). By definition of the difference clonoid, \(\hat{r}=s_{1}-s_{2}\) for two terms \(s_{1},s_{2}\in\mathsf{Clo}(\mathbf{A})\), with \(s_{1}\sim s_{2}\). In particular, \(s_{1}^{\mathbf{U}}=s_{2}^{\mathbf{U}}\). For any Mal'tsev term \(m\in\mathsf{Clo}(\mathbf{A})\), necessarily \(\hat{m}(u,u,v)=\hat{m}(v,u,u)=0\) holds. This implies that \[t+\hat{r}=m(t,s_{2},s_{1})\in\mathsf{Clo}(\mathbf{A}).\] We next prove (2). So we only need to verify that \(\operatorname{Diff}_{0}(\mathbf{A})\) is closed under composition with \(\mathsf{Clo}(\mathbf{U})\) (from the inside), respectively \(\mathsf{Clo}((\mathbf{L},0))\) (from the outside). To see that \(\operatorname{Diff}_{0}(\mathbf{A})\) is closed under \((\mathbf{L},0)\), note that \(0\in\operatorname{Diff}_{0}(\mathbf{A})\), as \(t-t=0\), for every term \(t\in\mathsf{Clo}(\mathbf{A})\). Further \((\mathbf{L},0)\) is closed under \(+\); for this, let \(\hat{r}_{1},\hat{r}_{2}\in\operatorname{Diff}_{0}(\mathbf{A})\). By (1), we know that \(t+\hat{r}_{1}\in\mathsf{Clo}(\mathbf{A})\), for some term \(t\in\mathsf{Clo}(\mathbf{A})\). Again, by (1) also \((t+\hat{r}_{1})+\hat{r}_{2}))\in\mathsf{Clo}(\mathbf{A})\), which shows that \(\hat{r}_{1}+\hat{r}_{2}\in\operatorname{Diff}_{0}(\mathbf{A})\). For all unary \(e^{\mathbf{L}}\in\mathsf{Clo}(\mathbf{L})\), and \(t\sim s\) with \(\hat{r}=t-s\), note that \(e^{\mathbf{A}}t-e^{\mathbf{A}}s=e^{\mathbf{L}}\circ\hat{r}\in\operatorname{ Diff}_{0}(\mathbf{A})\). Since \(\mathbf{L}\) is affine, \(\mathsf{Clo}(\mathbf{L},0)\) is generated by \(+\) and its unary terms, thus \(\operatorname{Diff}_{0}(\mathbf{A})\) is closed under \((\mathbf{L},0)\). 
To see that \(\operatorname{Diff}_{0}(\mathbf{A})\) is closed under \(\mathbf{U}\) from the inside, simply notice that \(t(x_{1},\ldots,x_{n})\sim s(x_{1},\ldots,x_{n})\) implies \(t(f_{1}(\mathbf{x}),\ldots,f_{n}(\mathbf{x}))\sim s(f_{1}(\mathbf{x}),\ldots,f_{n}(\mathbf{x}))\), for all terms \(f_{1},\ldots,f_{n}\). If \(\hat{r}=t^{\mathbf{A}}-s^{\mathbf{A}}\), then \(\hat{r}\circ(f_{1}^{\mathbf{U}},\ldots,f_{n}^{\mathbf{U}})=t\circ(f_{1}^{\mathbf{U}},\ldots,f_{n}^{\mathbf{U}})-s\circ(f_{1}^{\mathbf{U}},\ldots,f_{n}^{\mathbf{U}})\in\operatorname{Diff}_{0}(\mathbf{A})\). We remark that the choice of the constant \(0\in L\) is not relevant in this construction: since for every \(c\in L\) the map \(\hat{r}\mapsto\hat{r}+c\) is an isomorphism between the \((\mathbf{U},(\mathbf{L},0))\)-clonoid \(\operatorname{Diff}_{0}(\mathbf{A})\) and the \((\mathbf{U},(\mathbf{L}^{\prime},c))\)-clonoid \(\operatorname{Diff}_{c}(\mathbf{A})\) (where \(f^{\mathbf{L}^{\prime}}(\mathbf{l})=f^{\mathbf{L}}(\mathbf{l}-(c,c,\ldots,c))+c\)). Our goal in the next section is to reduce the subpower membership problem to a version of the subpower membership problem for the difference clonoid, in which we ask for membership of a tuple \(\mathbf{l}\in L^{k}\) in the subalgebra of \(\mathbf{L}\) given by the image of \(\mathbf{u}_{1},\ldots,\mathbf{u}_{n}\in U^{k}\) under the clonoid. In fact, it will be more convenient for us to ask for a compact representation, which is why we define the following problem, for a clonoid \(\mathcal{C}\) from \(\mathbf{U}\) to \(\mathbf{L}\). \(\operatorname{CompRep}(\mathcal{C})\): \(\operatorname{Input}\): A list of tuples \(\mathbf{u}_{1},\ldots,\mathbf{u}_{n}\in U^{k}\). \(\operatorname{Output}\): A compact representation of \(\mathcal{C}(\mathbf{u}_{1},\ldots,\mathbf{u}_{n})=\{f(\mathbf{u}_{1},\ldots,\mathbf{u}_{n})\mid f\in\mathcal{C}\}\leq\mathbf{L}^{k}\). In the case of the difference clonoid \(\mathcal{C}=\operatorname{Diff}_{0}(\mathbf{A})\) the image algebra \(\mathbf{L}\) is affine and contains a constant \(0\). This problem is thus equivalent to finding a generating set of \(\mathcal{C}(\mathbf{u}_{1},\ldots,\mathbf{u}_{n})\), as a subgroup of \((L,+,0,-)^{k}\), of polynomial size. By running Gaussian elimination (generalized to finite Abelian groups), or by simply applying Theorem 2, one can then compute a compact representation of \(\mathcal{C}(\mathbf{u}_{1},\ldots,\mathbf{u}_{n})\). ## 4 The subpower membership problem of wreath products In this section we discuss our main methodological results. We show that, in some cases, the subpower membership problem \(\operatorname{SMP}(\mathbf{L}\otimes\mathbf{U})\) of a wreath product can be reduced to \(\operatorname{CompRep}(\mathbf{L}\times\mathbf{U})\) and \(\operatorname{CompRep}(\mathcal{C})\). We first show how such a reduction can be achieved relatively easily in the case where \(\mathsf{Clo}(\mathbf{L}\otimes\mathbf{U})\) contains \(\mathsf{Clo}(\mathbf{L}\times\mathbf{U})\) (i.e. the identity map is a retraction of the clone homomorphism from Observation 6): Let \(\mathbf{A}=\mathbf{L}\otimes^{(0,T)}\mathbf{U}\) be a finite Mal'tsev algebra, and let \(\mathcal{C}=\operatorname{Diff}_{0}(\mathbf{A})\). Further assume that \(\mathsf{Clo}(\mathbf{L}\times\mathbf{U})\subseteq\mathsf{Clo}(\mathbf{A})\). 
Then \(\operatorname{CompRep}(\mathbf{A})\) (and hence also \(\operatorname{SMP}(\mathbf{A})\)) reduces in polynomial time to \(\operatorname{CompRep}(\mathbf{L}\times\mathbf{U})\) and \(\operatorname{CompRep}(\mathcal{C})\). Proof.: Let \(\mathbf{a}_{1},\ldots,\mathbf{a}_{n}\in A^{k}\) be an instance of \(\mathrm{CompRep}(\mathbf{A})\); our goal is to find a compact representation of \(\mathbf{B}=\mathrm{Sg}_{\mathbf{A}^{k}}(\mathbf{a}_{1},\ldots,\mathbf{a}_{n})\). Let us write \(\mathbf{l}_{i}\) and \(\mathbf{u}_{i}\) for the projection of \(\mathbf{a}_{i}\) to \(L^{k}\) and \(U^{k}\) respectively. Let us further define \(\mathbf{B}^{+}=\mathrm{Sg}_{(\mathbf{L}\times\mathbf{U})^{k}}(\mathbf{a}_{1},\ldots,\mathbf{a}_{n})\). Then \[\mathbf{B} =\{(t^{\mathbf{L}}(\mathbf{l}_{1},\ldots,\mathbf{l}_{n})+\hat{t}(\mathbf{u}_{1},\ldots,\mathbf{u}_{n}),t^{\mathbf{U}}(\mathbf{u}_{1},\ldots,\mathbf{u}_{n}))\mid t\text{ is an $F$-term}\},\text{ and}\] \[\mathbf{B}^{+} =\{(t^{\mathbf{L}}(\mathbf{l}_{1},\ldots,\mathbf{l}_{n}),t^{\mathbf{U}}(\mathbf{u}_{1},\ldots,\mathbf{u}_{n}))\mid t\text{ is an $F$-term}\}.\] Since \(\mathsf{Clo}(\mathbf{L}\times\mathbf{U})\subseteq\mathsf{Clo}(\mathbf{A})\), we can pick a Mal'tsev term of \(\mathbf{A}\) that is of the form \(m^{\mathbf{A}}((l_{1},u_{1}),(l_{2},u_{2}),(l_{3},u_{3}))=(l_{1}-l_{2}+l_{3},m^{\mathbf{U}}(u_{1},u_{2},u_{3}))\). Moreover, by Lemma 9, every term \(t^{\mathbf{A}}\in\mathsf{Clo}(\mathbf{A})\) can be uniquely written as the sum of \(t^{\mathbf{L}\times\mathbf{U}}\) (which by assumption is also in \(\mathsf{Clo}(\mathbf{A})\)) and some \(\hat{t}\in\mathcal{C}\). Thus, every element of \(\mathbf{B}\) is equal to the sum of an element of \(\mathbf{B}^{+}\) and an expression \(\hat{t}(\mathbf{u}_{1},\ldots,\mathbf{u}_{n})\). Let \(C^{+}\) be a compact representation of \(\mathbf{B}^{+}\), and \(\hat{C}\) a compact representation of \(\mathcal{C}(\mathbf{u}_{1},\ldots,\mathbf{u}_{n})\). Then, it follows that every tuple in \(\mathbf{B}\) can be written as \[m(\ldots,m(\mathbf{c}_{1},\mathbf{d}_{2},\mathbf{c}_{2}),\ldots,\mathbf{d}_{n},\mathbf{c}_{n})+\hat{\mathbf{r}}_{1}-\hat{\mathbf{s}}_{2}+\hat{\mathbf{r}}_{2}-\ldots-\hat{\mathbf{s}}_{n}+\hat{\mathbf{r}}_{n}, \tag{3}\] for \(\mathbf{c}_{i},\mathbf{d}_{i}\in C^{+}\) and \(\hat{\mathbf{r}}_{i},\hat{\mathbf{s}}_{i}\in\hat{C}\). (We are aware that tuples in \(C^{+}\) and \(\hat{C}\) have different domains; here we follow the same convention as in Notation 8). Moreover, in formula (3), any pair \(\mathbf{c}_{i},\mathbf{d}_{i}\) (respectively \(\hat{\mathbf{r}}_{i},\hat{\mathbf{s}}_{i}\)) witnesses a fork in the \(i\)-th coordinate. By our choice of \(m\) it is easy to see that formula (3) can be rewritten as \[m(\ldots,m(\mathbf{c}_{1}+\hat{\mathbf{r}}_{1},\mathbf{d}_{2}+\hat{\mathbf{s}}_{2},\mathbf{c}_{2}+\hat{\mathbf{r}}_{2}),\ldots,\mathbf{d}_{n}+\hat{\mathbf{s}}_{n},\mathbf{c}_{n}+\hat{\mathbf{r}}_{n}).\] Thus the elements \(\mathbf{c}_{i}+\hat{\mathbf{r}}_{i},\mathbf{d}_{i}+\hat{\mathbf{s}}_{i}\) witness forks of \(\mathbf{B}\) in the \(i\)-th coordinate. If we define \(D=\{\mathbf{c}+\hat{\mathbf{r}}\mid\mathbf{c}\in C^{+},\hat{\mathbf{r}}\in\hat{C}\}\), then it follows that \(\mathrm{Sig}(D)=\mathrm{Sig}(\mathbf{B})\). Moreover \(D\subset\mathbf{B}\), and it is of polynomial size in \(n\) and \(k\), as \(|D|\leq|C^{+}|\cdot|\hat{C}|\). Thus \(D\) can be thinned out in polynomial time to a compact representation of \(\mathbf{B}\), which finishes our proof. 
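The fork-repairing evaluation used in this proof (and recalled in Section 2.4) is easy to make explicit. The following sketch, with names of our choosing, decides membership in a subpower from a compact representation using only a Mal'tsev operation \(m\):

```python
def member(a, S, m):
    """Given a compact representation S of a subpower R <= A^k (a list of
    k-tuples with Sig(S) = Sig(R)) and a Mal'tsev operation m on A, decide
    whether the tuple a lies in R by repairing one coordinate at a time."""
    a = tuple(a)
    k = len(a)
    t = next((c for c in S if c[0] == a[0]), None)        # match the first coordinate
    if t is None:
        return False
    for i in range(1, k):
        if t[i] == a[i]:
            continue
        # witnesses for the fork (i, t[i], a[i]): two tuples of S agreeing on coordinates < i
        w = next(((b, c) for b in S for c in S
                  if b[:i] == c[:i] and b[i] == t[i] and c[i] == a[i]), None)
        if w is None:
            return False                                   # missing fork => a is not in R
        b, c = w
        # m(y, x, x) = y keeps coordinates < i fixed; m(x, x, y) = y repairs coordinate i
        t = tuple(m(t[j], b[j], c[j]) for j in range(k))
    return t == a

# e.g. for the affine Mal'tsev operation on Z_4:  m = lambda x, y, z: (x - y + z) % 4
```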
We remark that, by following the proof of Theorem 10, also finding _enumerated_ compact representations in \(\mathbf{A}\) can be reduced to finding _enumerated_ compact representations in \(\mathbf{L}\times\mathbf{U}\) and \(\mathcal{C}\) (if \(\mathcal{C}\) is given by some finite set of operations that generate it as a clonoid). Unfortunately, the conditions of Theorem 10 are not met for general wreath-products, not even if both \(\mathbf{U}\) and \(\mathbf{L}\) are both affine (the dihedral group \(D_{4}\) can be shown to be a counterexample). But, if \(\mathbf{U}\) is supernilpotent, then we are able to prove the following reduction, independent of the conditions of Theorem 10: Let \(\mathbf{A}=\mathbf{L}\otimes\mathbf{U}\) be a finite Mal'tsev algebra, and let \(\mathcal{C}=\mathrm{Diff}_{0}(\mathbf{A})\) for some \(0\in A\). Further, assume that \(\mathbf{U}\) is supernilpotent. Then \(\mathrm{SMP}(\mathbf{A})\) reduces in polynomial time to \(\mathrm{CompRep}(\mathcal{C})\). Proof.: Let \(\mathbf{a}_{1},\ldots,\mathbf{a}_{n},\mathbf{b}\in A^{k}\) an instance of \(\mathrm{SMP}(\mathbf{A})\); our goal is to check whether \(\mathbf{b}\in\mathbf{B}=\mathrm{Sg}_{\mathbf{A}^{k}}(\mathbf{a}_{1},\ldots, \mathbf{a}_{n})\). Let us write \(\mathbf{l}_{i}\) and \(\mathbf{u}_{i}\) for the projection of \(\mathbf{a}_{i}\) to \(L^{k}\) and \(U^{k}\) respectively, and \(\mathbf{l}_{b}\) and \(\mathbf{u}_{b}\) for the projections of \(\mathbf{b}\) to \(L^{k}\) and \(U^{k}\). Let \(F\) be the signature of \(\mathbf{A}\) and \(\mathbf{L}\times\mathbf{U}\), and let \(\mathbf{B}^{+}=\mathrm{Sg}_{(\mathbf{L}\times\mathbf{U})^{k}}(\mathbf{a}_{1}, \ldots,\mathbf{a}_{n})\). Then \[\mathbf{B} =\{(t^{\mathbf{L}}(\mathbf{l}_{1},\ldots,\mathbf{l}_{n})+\hat{t} (\mathbf{u}_{1},\ldots,\mathbf{u}_{n}),t^{\mathbf{U}}(\mathbf{u}_{1},\ldots, \mathbf{u}_{n})\mid t\text{ is $F$-term}\},\text{ and}\] \[\mathbf{B}^{+} =\{(t^{\mathbf{L}}(\mathbf{l}_{1},\ldots,\mathbf{l}_{n}),t^{ \mathbf{U}}(\mathbf{u}_{1},\ldots,\mathbf{u}_{n})\mid t\text{ is $F$-term}\}.\] Recall the definition of \(t^{\mathbf{A}}\sim s^{\mathbf{A}}\) from Definition 7. If \(T\) is a \(\sim\)-transversal set of \(\{t^{\mathbf{A}}\in\mathsf{Clo}(\mathbf{A})\mid t^{\mathbf{U}}(\mathbf{u}_{1}, \ldots,\mathbf{u}_{n})=\mathbf{u}_{b}\}\), then clearly \(\mathbf{b}\in B\) iff \(\exists t\in T\) and \(\mathbf{d}\in\mathcal{C}(\mathbf{u}_{1},\ldots,\mathbf{u}_{n})\) with \(\mathbf{b}=t(\mathbf{a}_{1},\ldots,\mathbf{a}_{n})+\mathbf{d}\). So, intuitively speaking, the goal of this proof is to first compute such a transversal set, by computing an enumerated compact representation of \(\{(\mathbf{l},\mathbf{u})\in\mathbf{B}^{+}\mid\mathbf{u}=\mathbf{u}_{b}\}\) and then use it together with a compact representation of \(\mathcal{C}(\mathbf{u}_{1},\ldots,\mathbf{u}_{n})\) to check membership of \(\mathbf{b}\) in \(\mathbf{B}\). In practice we need however to consider a relation of higher arity than \(\mathbf{B}^{+}\), since term operations of \(\mathbf{L}\times\mathbf{U}\) are not uniquely determined by their value on \(\mathbf{a}_{1},\ldots,\mathbf{a}_{n}\). So let \(S\) be the degree of supernilpotence of \(\mathbf{U}\) (and hence also \(\mathbf{L}\times\mathbf{U}\)). 
If we think about \(\mathbf{a}_{1},\ldots,\mathbf{a}_{n}\) as the columns of a matrix of dimension \(k\times n\), then let \(\tilde{\mathbf{a}}_{1},\ldots,\tilde{\mathbf{a}}_{n}\in A^{l}\) be its extension by rows that enumerate \(H=\{(a_{1},\ldots,a_{n})\in A^{n}\mid|\{i\colon a_{i}\neq 0\}|\leq S\}\) (hence \(l\leq k+|A|^{S}\binom{n}{S}\)). It follows from Theorem 2 that we can compute an enumerated compact representation \(\tilde{C}\) of \(\mathrm{Sg}_{(\mathbf{L}\times\mathbf{U})^{l}}(\tilde{\mathbf{a}}_{1},\ldots,\tilde{\mathbf{a}}_{n})\) in polynomial time in \(n\) and \(l\). So, every element in \(\tilde{B}=\mathrm{Sg}_{(\mathbf{L}\times\mathbf{U})^{l}}(\tilde{\mathbf{a}}_{1},\ldots,\tilde{\mathbf{a}}_{n})\) can be written as \(m(\ldots m(\tilde{\mathbf{c}}_{1},\tilde{\mathbf{d}}_{2},\tilde{\mathbf{c}}_{2}),\ldots,\tilde{\mathbf{d}}_{l},\tilde{\mathbf{c}}_{l})\), for \((\tilde{\mathbf{c}}_{i},p_{\tilde{\mathbf{c}}_{i}}),(\tilde{\mathbf{d}}_{i},p_{\tilde{\mathbf{d}}_{i}})\in\tilde{C}\), where \(\tilde{C}\) is of size at most \(2l|A|^{2}\), and every element \(\tilde{\mathbf{c}}\in\tilde{C}\) satisfies \(p_{\tilde{\mathbf{c}}}(\tilde{\mathbf{a}}_{1},\ldots,\tilde{\mathbf{a}}_{n})=\tilde{\mathbf{c}}\) for the given circuit \(p_{\tilde{\mathbf{c}}}\) of polynomial size. By Theorem 1, in an \(S\)-supernilpotent algebra, every term operation is already completely determined by its values on the subset \(H\). It follows that every \(n\)-ary term operation of \(\mathbf{L}\times\mathbf{U}\) can be uniquely described by a circuit \(m(\ldots m(p_{\tilde{\mathbf{c}}_{1}},p_{\tilde{\mathbf{d}}_{2}},p_{\tilde{\mathbf{c}}_{2}}),\ldots,p_{\tilde{\mathbf{d}}_{l}},p_{\tilde{\mathbf{c}}_{l}})\) for \(\tilde{\mathbf{c}}_{i},\tilde{\mathbf{d}}_{i}\in\tilde{C}\). By definition of \(\sim\), it follows that also every \(n\)-ary term operation of \(\mathbf{A}\) is \(\sim\)-equivalent to the operation described by such a circuit \(m(\ldots m(p_{\tilde{\mathbf{c}}_{1}},p_{\tilde{\mathbf{d}}_{2}},p_{\tilde{\mathbf{c}}_{2}}),\ldots,p_{\tilde{\mathbf{d}}_{l}},p_{\tilde{\mathbf{c}}_{l}})\) for \(\tilde{\mathbf{c}}_{i},\tilde{\mathbf{d}}_{i}\in\tilde{C}\). We are however only interested in terms \(t\) such that \(t^{\mathbf{U}}\) maps \(\mathbf{u}_{1},\ldots,\mathbf{u}_{n}\) to the value \(\mathbf{u}_{b}\). By Lemma 3, we can also compute an enumerated compact representation \(\tilde{C}^{\prime}\) of \(\{(\tilde{\mathbf{l}},\tilde{\mathbf{u}})\in\mathrm{Sg}_{(\mathbf{L}\times\mathbf{U})^{l}}(\tilde{\mathbf{a}}_{1},\ldots,\tilde{\mathbf{a}}_{n})\mid\tilde{\mathbf{u}}(i)=\mathbf{u}_{b}(i)\text{ for all }i=1,\ldots,k\}\) in polynomial time. (Although we only prove Lemma 3 for fixing variables to constants, we remark that it can straightforwardly be generalized to fixing the value of the variables to domains \(L\times\{u\}\). Alternatively, this can also be achieved by regarding \(\mathrm{Sg}_{(\mathbf{L}\times\mathbf{U})^{l}}(\tilde{\mathbf{a}}_{1},\ldots,\tilde{\mathbf{a}}_{n})\) as a subalgebra of \(\mathbf{U}^{l}\times\mathbf{L}^{l}\), which however would require us to work with relations on different domains). If \(\tilde{C}^{\prime}=\emptyset\), then we output "False", as then \(\mathbf{u}_{b}\notin\mathrm{Sg}_{\mathbf{U}^{k}}(\mathbf{u}_{1},\ldots,\mathbf{u}_{n})\). Otherwise, let \(C=\{p_{\tilde{\mathbf{c}}}^{\mathbf{A}}(\mathbf{a}_{1},\ldots,\mathbf{a}_{n})\mid\tilde{\mathbf{c}}\in\tilde{C}^{\prime}\}\). Also, let \(\hat{C}\) be a compact representation of \(\mathcal{C}(\mathbf{u}_{1},\ldots,\mathbf{u}_{n})\). 
By our proof, every element of \(\{(\mathbf{l},\mathbf{u})\in\mathbf{B}\mid\mathbf{u}=\mathbf{u}_{b}\}\) is equal to the sum of an element \(m^{\mathbf{A}}(\ldots,m^{\mathbf{A}}(\mathbf{c}_{1},\mathbf{d}_{2},\mathbf{c}_{2}),\ldots,\mathbf{d}_{n},\mathbf{c}_{n})\) with \(\mathbf{c}_{i},\mathbf{d}_{i}\in C\) and an element of \(\mathcal{C}(\mathbf{u}_{1},\ldots,\mathbf{u}_{n})\). Since \(m\) is an affine Mal'tsev operation when restricted to \(\{(\mathbf{l},\mathbf{u})\in\mathbf{B}\mid\mathbf{u}=\mathbf{u}_{b}\}\), this means that \(\mathbf{b}\in\mathbf{B}\) iff \(\mathbf{l}_{b}\) is in the affine closure of all elements \(\mathbf{c}+\hat{\mathbf{r}}\) with \(\mathbf{c}\in C\) and \(\hat{\mathbf{r}}\in\hat{C}\). But this can be checked in polynomial time (by generalized Gaussian elimination, or Theorem 2), which finishes the proof. ## 5 Clonoids between affine algebras We continue our paper with an analysis of clonoids between affine algebras to prove our main result, Theorem 20. For a prime \(p\), let us write \(\mathbb{Z}_{p}\) for the cyclic group of order \(p\), i.e. \(\mathbb{Z}_{p}=(\{0,1,\ldots,p-1\},+,0,-)\). Let us further define the idempotent reduct \(\mathbb{Z}_{p}^{id}=(\{0,1,\ldots,p-1\},x-y+z)\). Using the unary terms \(ax=x+\cdots+x\) (\(a\)-times), for \(a\in\mathbb{Z}_{p}\), we can regard \(\mathbb{Z}_{p}\) as a vector space over the \(p\)-element field. More generally, using this notation, we will also consider finite Abelian groups \((L,+,0,-)\) as modules over \(\mathbb{Z}_{|L|}\). For short, we are going to denote constant \(1\)-tuples by \(\mathbf{1}=(1,1,\ldots,1)\in\mathbb{Z}_{p}^{n}\). For two vectors \(\mathbf{a},\mathbf{x}\in\mathbb{Z}_{p}^{n}\), we further denote by \(\mathbf{a}\cdot\mathbf{x}=\sum_{i=1}^{n}\mathbf{a}(i)\cdot\mathbf{x}(i)\) the standard inner product. Then \(\mathsf{Clo}(\mathbb{Z}_{p})=\{\mathbf{x}\mapsto\mathbf{a}\cdot\mathbf{x}\mid\mathbf{a}\in\mathbb{Z}_{p}^{n}\}\) and \(\mathsf{Clo}(\mathbb{Z}_{p}^{id})=\{\mathbf{x}\mapsto\mathbf{a}\cdot\mathbf{x}\mid\mathbf{a}\in\mathbb{Z}_{p}^{n},\mathbf{a}\cdot\mathbf{1}=1\}\). In this section, we are going to study clonoids between affine algebras \(\mathbf{U}\) and \(\mathbf{L}\), such that \(|U|=p\) for some prime \(p\), and \(p\nmid|L|\). Since every such affine algebra \(\mathbf{U}\) has \(x-y+z\) as a term operation, it makes sense to study the special case \(\mathbf{U}=\mathbb{Z}_{p}^{id}\). As we are in particular interested in difference clonoids, we furthermore can assume that \(\mathbf{L}\) contains a constant operation \(0\) (see Lemma 9), and hence the operations of the Abelian group \((L,+,0,-)\). We remark that our analysis is structurally similar to (but not covered by) Fioravanti's classification of \((\mathbb{Z}_{p},\mathbb{Z}_{q})\)-clonoids [13]. ### \((\mathbb{Z}_{p}^{id},\mathbf{L})\)-clonoids satisfying \(p\nmid|L|\) and \(f(x,x,\ldots,x)=0\) Throughout this subsection, let \(p\) be a prime, \(\mathbf{L}=(L,+,0,-)\) an Abelian group with \(p\nmid|L|\), and \(\mathcal{C}\) a \((\mathbb{Z}_{p}^{id},\mathbf{L})\)-clonoid satisfying \(f(x,x,\ldots,x)=0\) for all \(f\in\mathcal{C}\) and \(x\in\mathbb{Z}_{p}\). In other words, for every \(n\in\mathbb{N}\), \(\mathcal{C}\) maps all tuples from the diagonal \(\Delta^{n}=\{(x,x,\ldots,x)\in\mathbb{Z}_{p}^{n}\}\) to \(0\). We are going to prove that \(\mathcal{C}\) is generated by its binary elements, and therefore by any set of generators \(B\) of \(\mathcal{C}^{(2)}\leq\mathbf{L}^{\mathbb{Z}_{p}^{2}}\). 
Moreover, from \(B\), we are going to construct a canonical generating set of the \(n\)-ary functions \(\mathcal{C}^{(n)}\leq\mathbf{L}^{\mathbb{Z}_{p}^{n}}\). We are, in particular, going to use the following set of coefficient vectors for every \(n>2\): \[C_{n}=\{\mathbf{a}\in\mathbb{Z}_{p}^{n}\mid\exists i>1\colon\mathbf{a}(1)=\mathbf{a}(2)=\ldots=\mathbf{a}(i-1)=0,\mathbf{a}(i)=1\}.\] Every 2-dimensional subspace \(V\leq\mathbb{Z}_{p}^{n}\) containing the diagonal \(\Delta^{n}\) has a unique parameterization by the map \[e_{\mathbf{c}}(x,y)=x(\mathbf{1}-\mathbf{c})+y\mathbf{c}=(x,(1-\mathbf{c}(2))x+\mathbf{c}(2)y,\ldots,(1-\mathbf{c}(n))x+\mathbf{c}(n)y),\] for some \(\mathbf{c}\in C_{n}\). Proof.: To see this, note that \(V\) contains \(\mathbf{1}\), and can therefore be parameterized by \(e_{\mathbf{d}}(x,y)\), for some \(\mathbf{d}\notin\Delta^{n}\). So there is an index \(i\) with \(\mathbf{d}(1)=\ldots=\mathbf{d}(i-1)\neq\mathbf{d}(i)\). If \(\mathbf{d}\notin C_{n}\), then we define \(\mathbf{c}=(\mathbf{d}(i)-\mathbf{d}(1))^{-1}(\mathbf{d}-\mathbf{d}(1)\mathbf{1})\); clearly \(\mathbf{c}\in C_{n}\), and \(\mathbf{c}\) and \(\mathbf{1}\) still generate \(V\). It is further not hard to see that different elements of \(C_{n}\) generate different planes together with \(\mathbf{1}\), thus we obtain a unique parameterization of \(V\) by \(e_{\mathbf{c}}(x,y)\). Let \(f\in\mathcal{C}^{(2)}\). Then, there is a function \(f_{n}\in\mathcal{C}^{(n)}\), such that \[f_{n}(x_{1},x_{2},\ldots,x_{n})=\begin{cases}f(x_{1},x_{2})\text{ if }x_{2}=x_{3}=\ldots=x_{n}\\ 0\text{ else}.\end{cases}\] Proof.: We prove the lemma by induction on \(n\). For \(n=2\), we simply set \(f_{2}=f\). For the induction step \(n\to n+1\), we first define \(t_{n+1}(x_{1},x_{2},\ldots,x_{n},x_{n+1})\) as the sum \[\sum_{\mathbf{a}\in\mathbb{Z}_{p}^{n-1}}f_{n}(x_{1},x_{2}+\mathbf{a}(1)(x_{n+1}-x_{n}),\ldots,x_{n}+\mathbf{a}(n-1)(x_{n+1}-x_{n}))\] \[-\sum_{\mathbf{a}\in\mathbb{Z}_{p}^{n-1}}f_{n}(x_{1},x_{1}+\mathbf{a}(1)(x_{n+1}-x_{n}),\ldots,x_{1}+\mathbf{a}(n-1)(x_{n+1}-x_{n})).\] Note that, if \(x_{n+1}\neq x_{n}\), then \(t_{n+1}\) evaluates to \(\sum_{\mathbf{a}\in\mathbb{Z}_{p}^{n-1}}f_{n}(x_{1},\mathbf{a})-\sum_{\mathbf{a}\in\mathbb{Z}_{p}^{n-1}}f_{n}(x_{1},\mathbf{a})=0\). On the other hand, if \(x_{n}=x_{n+1}\), then the second sum is equal to \(0\), while the first one is equal to \(p^{n-1}f_{n}(x_{1},x_{2},\ldots,x_{n})\). By the induction hypothesis, the function \(f_{n+1}=p^{-(n-1)}t_{n+1}\) satisfies the statement of the lemma (note that \(p^{-(n-1)}\) exists modulo \(|L|\), since \(p\nmid|L|\)). We can prove an analogous statement for all 2-dimensional subspaces of \(\mathbb{Z}_{p}^{n}\) containing \(\Delta^{n}\): **Lemma 14**.: _Let \(f\in\mathcal{C}^{(2)}\), and \(\mathbf{c}\in C_{n}\). Then there is a function \(f^{\mathbf{c}}\in\mathcal{C}^{(n)}\), such that_ \[f^{\mathbf{c}}(x_{1},x_{2},\ldots,x_{n})=\begin{cases}f(x,y)\text{ if }(x_{1},x_{2},\ldots,x_{n})=e_{\mathbf{c}}(x,y)\\ 0\text{ else.}\end{cases}\] Proof.: Let \(\mathbf{c}\in C_{n}\). There is a matrix \(\mathbf{T}\in\mathbb{Z}_{p}^{n\times n}\), such that \(\mathbf{T}\cdot\mathbf{1}=\mathbf{1}\) and \(\mathbf{T}\cdot(\mathbf{1}-\mathbf{c})=\mathbf{e}_{1}\). Let \(f_{n}\) be as in Lemma 2, and \(f^{\mathbf{c}}:=f_{n}\circ T\). Note that by the first condition, all rows of \(T\) sum up to \(1\), hence \(T\) can be expressed by terms of \(\mathbb{Z}_{p}^{id}\). 
Then \(f^{\mathbf{c}}(e_{\mathbf{c}}(x,y))=f_{n}(T(x(\mathbf{1}-\mathbf{c})+y\mathbf{c}))=f_{n}(x\mathbf{e}_{1}+y(\mathbf{1}-\mathbf{e}_{1}))=f(x,y)\), and \(f^{\mathbf{c}}(\mathbf{x})=0\) for \(\mathbf{x}\notin e_{\mathbf{c}}(\mathbb{Z}_{p}^{2})\). We are now ready to prove the main result of this section:

**Lemma 15**.: _Let \(\mathcal{C}\) be a \((\mathbb{Z}_{p}^{id},\mathbf{L})\)-clonoid satisfying \(\forall f\in\mathcal{C},x\in\mathbb{Z}_{p}\colon f(x,\ldots,x)=0\), and let \(B\) be a generating set of \(\mathcal{C}^{(2)}\leq\mathbf{L}^{\mathbb{Z}_{p}^{2}}\). Then_
1. \(\mathcal{C}\) _is the_ \((\mathbb{Z}_{p}^{id},\mathbf{L})\)_-clonoid generated by_ \(B\)_, and_
2. \(B_{n}:=\{f^{\mathbf{c}}\mid f\in B,\mathbf{c}\in C_{n}\}\) _is a generating set of_ \(\mathcal{C}^{(n)}\) _in_ \(\mathbf{L}^{\mathbb{Z}_{p}^{n}}\)_._

Proof.: For any \(g\in\mathcal{C}^{(n)}\) and \(\mathbf{c}\in C_{n}\), let us define the binary operation \(g_{\mathbf{c}}=g(e_{\mathbf{c}}(x,y))\in\mathcal{C}^{(2)}\). By Lemma 14, \(g_{\mathbf{c}}\) generates a function \(g_{\mathbf{c}}^{\mathbf{c}}\in\mathcal{C}^{(n)}\) that agrees with \(g\) on all tuples of the form \(e_{\mathbf{c}}(x,y)\), and that is \(0\) elsewhere. Since every point of \(\mathbb{Z}_{p}^{n}\setminus\Delta^{n}\) is in the image of a unique map \(e_{\mathbf{c}}\), we get \(g=\sum_{\mathbf{c}\in C_{n}}g_{\mathbf{c}}^{\mathbf{c}}\). Every element of the form \(g_{\mathbf{c}}^{\mathbf{c}}\) can clearly be written as a linear combination of elements \(f^{\mathbf{c}}\), where \(f\in B\). It follows that \(B_{n}\) generates \(\mathcal{C}^{(n)}\) in \(\mathbf{L}^{\mathbb{Z}_{p}^{n}}\), and that the clonoid generated by \(B\) is \(\mathcal{C}\). We remark that if \(\mathbf{L}=\mathbb{Z}_{q}\) for a prime \(q\neq p\), and \(B\) is a basis of the vector space \(\mathcal{C}^{(2)}\leq\mathbf{L}^{\mathbb{Z}_{p}^{2}}\), then also \(B_{n}\) is a basis. The generating set \(B_{n}\) can be used to efficiently decide the following version of the subpower membership problem for \(\mathcal{C}\):

**Lemma 16**.: _Let \(\mathcal{C}\) be a \((\mathbb{Z}_{p}^{id},\mathbf{L})\)-clonoid satisfying \(\forall f\in\mathcal{C},x\in\mathbb{Z}_{p}\colon f(x,\ldots,x)=0\). Then we can solve \(\operatorname{CompRep}(\mathcal{C})\) in polynomial time._

Proof.: By Lemma 15, \(\mathcal{C}^{(n)}\) is the linear closure of \(B_{n}\). Thus \(\mathcal{C}(\mathbf{u}_{1},\ldots,\mathbf{u}_{n})\) is equal to the linear closure of \(B_{n}(\mathbf{u}_{1},\ldots,\mathbf{u}_{n}):=\{f^{\mathbf{c}}(\mathbf{u}_{1},\ldots,\mathbf{u}_{n})\mid f\in B,\mathbf{c}\in C_{n}\}\). Note that the \(i\)-th entry \(f^{\mathbf{c}}(\mathbf{u}_{1},\ldots,\mathbf{u}_{n})(i)\) of such a generating element can only be different from \(0\) if \((\mathbf{u}_{1},\ldots,\mathbf{u}_{n})(i)\) lies in the \(2\)-dimensional subspace generated by the diagonal \(\Delta^{n}\) and \(\mathbf{c}\). Thus, there are at most \(k\) many vectors \(\mathbf{c}\in C_{n}\) such that \(f^{\mathbf{c}}(\mathbf{u}_{1},\ldots,\mathbf{u}_{n})\neq\mathbf{0}\); let \(\mathbf{c}_{1},\ldots,\mathbf{c}_{l}\) be an enumeration of them. Clearly \(D=\{f^{\mathbf{c}}(\mathbf{u}_{1},\ldots,\mathbf{u}_{n})\mid f\in B,\mathbf{c}\in\{\mathbf{c}_{1},\ldots,\mathbf{c}_{l}\}\}\) generates \(\mathcal{C}(\mathbf{u}_{1},\ldots,\mathbf{u}_{n})\); note that we can compute it in linear time \(O(kn)\). 
From the generating set \(D\) we can compute a compact representation of \(\mathcal{C}(\mathbf{u}_{1},\ldots,\mathbf{u}_{n})\) in polynomial time (by generalized Gaussian elimination, or Theorem 10).

### General \((\mathbb{Z}_{p}^{id},\mathbf{L})\)-clonoids

For an arbitrary \((\mathbb{Z}_{p}^{id},\mathbf{L})\)-clonoid \(\mathcal{C}\), let us define the subclonoid \(\mathcal{C}^{\Delta}=\{f\in\mathcal{C}\colon f(x,\ldots,x)=0\}\). We then show that every \(f\in\mathcal{C}\) can be written in a unique way as the sum of an element of \(\mathcal{C}^{\Delta}\) and a function that is generated by \(\mathcal{C}^{(1)}\). For this, we need the following lemma:

**Lemma 17**.: _For any \(f\in\mathcal{C}^{(n)}\), let us define_ \[f^{\prime}(\mathbf{x})=p^{(1-n)}\sum_{\begin{subarray}{c}\mathbf{a}\in\mathbb{Z}_{p}^{n}\\ \mathbf{a}\cdot\mathbf{1}=1\end{subarray}}f(\mathbf{a}\cdot\mathbf{x},\mathbf{a}\cdot\mathbf{x},\ldots,\mathbf{a}\cdot\mathbf{x}).\] _Then \(f-f^{\prime}\in\mathcal{C}^{\Delta}\), and \(f^{\prime}\) is generated by \(\mathcal{C}^{(1)}\)._

Proof.: By definition, \(f^{\prime}\) is in the clonoid generated by the unary function \(f(x,x,\ldots,x)\in\mathcal{C}^{(1)}\). Thus, to prove the lemma, it is only left to show that \(f-f^{\prime}\in\mathcal{C}^{\Delta}\), or, in other words, that \(f(\mathbf{x})=f^{\prime}(\mathbf{x})\) for \(\mathbf{x}\in\Delta\). But this is not hard to see, since \[f^{\prime}(x,x,\ldots,x)=p^{(1-n)}\sum_{\begin{subarray}{c}\mathbf{a}\in\mathbb{Z}_{p}^{n}\\ \mathbf{a}\cdot\mathbf{1}=1\end{subarray}}f(x,x,\ldots,x)=f(x,x,\ldots,x).\]

It follows in particular from Lemma 17 and Lemma 15 that every \((\mathbb{Z}_{p}^{id},\mathbf{L})\)-clonoid \(\mathcal{C}\) is generated by any set \(A\cup B\), such that \(A\) generates \(\mathcal{C}^{(1)}\) in \(\mathbf{L}^{\mathbb{Z}_{p}}\) and \(B\) generates \(\mathcal{C}^{(2)}_{\Delta}\) in \(\mathbf{L}^{\mathbb{Z}_{p}^{2}}\). Note that the clonoid generated by \(A\) does not need to be disjoint from \(\mathcal{C}^{\Delta}\). We can, however, still prove results analogous to the previous section.

**Lemma 18**.: _Let \(\mathcal{C}\) be a \((\mathbb{Z}_{p}^{id},\mathbf{L})\)-clonoid, let \(A\) be a generating set of \(\mathcal{C}^{(1)}\leq\mathbf{L}^{\mathbb{Z}_{p}}\) and \(B\) a generating set of \(\mathcal{C}^{(2)}_{\Delta}\leq\mathbf{L}^{\mathbb{Z}_{p}^{2}}\). For every \(n\), let us define \(A_{n}=\{\sum_{\mathbf{a}\in\mathbb{Z}_{p}^{n},\mathbf{a}\cdot\mathbf{1}=1}f(\mathbf{a}\cdot\mathbf{x})\mid f\in A\}\) and let \(B_{n}\) be defined as in Lemma 15. Then \(A_{n}\cup B_{n}\) is a generating set of \(\mathcal{C}^{(n)}\) in \(\mathbf{L}^{\mathbb{Z}_{p}^{n}}\)._

Proof.: We already know from Lemma 15 that \(B_{n}\) generates \(\mathcal{C}^{(n)}_{\Delta}\leq\mathbf{L}^{\mathbb{Z}_{p}^{n}}\). By Lemma 17, every element \(f\in\mathcal{C}^{(n)}\) can be uniquely written as the sum of \(f^{\prime}\) and \(f-f^{\prime}\). Furthermore \(f^{\prime}\), by definition, is generated by \(A_{n}\), and \(f-f^{\prime}\) is in \(\mathcal{C}^{(n)}_{\Delta}\), which finishes our proof.

Lemma 18 allows us to straightforwardly generalize Lemma 16 to arbitrary \((\mathbb{Z}_{p}^{id},\mathbf{L})\)-clonoids:

**Lemma 19**.: _Let \(\mathcal{C}\) be a \((\mathbb{Z}_{p}^{id},\mathbf{L})\)-clonoid. Then \(\operatorname{CompRep}(\mathcal{C})\in\mathsf{P}\)._

Proof.: Let \(A_{n}\) and \(B_{n}\) be defined as in Lemma 18. 
Our goal is to compute a compact representation of \(\mathcal{C}(\mathbf{u}_{1},\ldots,\mathbf{u}_{n})\) for some given \(\mathbf{u}_{1},\ldots,\mathbf{u}_{n}\in\mathbb{Z}_{p}^{k}\). By Lemma 18, every \(g\in\mathcal{C}\) decomposes into the sum of \(g^{\prime}\) and \(g-g^{\prime}\), where \(g^{\prime}\) is generated by \(A_{n}\) and \(g-g^{\prime}\) is generated by \(B_{n}\). Thus any image \(g(\mathbf{u}_{1},\ldots,\mathbf{u}_{n})\) is in the linear closure of all tuples \(f(\mathbf{u}_{1},\ldots,\mathbf{u}_{n})\), for \(f\in A_{n}\), and of \(B_{n}(\mathbf{u}_{1},\ldots,\mathbf{u}_{n})=\{f^{\mathbf{c}}(\mathbf{u}_{1},\ldots,\mathbf{u}_{n})\mid f\in B,\ \mathbf{c}\in C_{n}\}\) in \(\mathbf{L}^{k}\). There are at most \(|A|\)-many tuples of the first form. Furthermore, as in the proof of Lemma 16 we can compute a generating set of \(B_{n}(\mathbf{u}_{1},\ldots,\mathbf{u}_{n})\) in polynomial time. By generalized Gaussian elimination (or Theorem 2), we can obtain a compact representation from these generators in polynomial time.

Lemma 19 allows us to finish the proof of our main result:

**Theorem 20**.: _Let \(\mathbf{A}\) be a finite Mal'tsev algebra with a central series \(0_{\mathbf{A}}<\rho<1_{\mathbf{A}}\) such that \(|\mathbf{A}/\rho|=p\) is a prime, and the blocks of \(\rho\) are of size coprime to \(p\). Then \(\operatorname{SMP}(\mathbf{A})\in\mathsf{P}\)._

Proof.: By Theorem 5, \(\mathbf{A}\) is isomorphic to a wreath product \(\mathbf{L}\otimes\mathbf{U}\), such that \(\mathbf{U}\), \(\mathbf{L}\) are affine with \(|U|=p\) and \(|L|\) coprime to \(p\). By Theorem 11, \(\operatorname{SMP}(\mathbf{A})\) reduces to \(\operatorname{CompRep}(\operatorname{Diff}_{0}(\mathbf{A}))\) in polynomial time. The difference clonoid is a clonoid from \(\mathbf{U}\) to \((\mathbf{L},0)\). Since both \(\mathbf{L}\) and \(\mathbf{U}\) are affine, and therefore have term operations describing \(x-y+z\), \(\operatorname{Diff}_{0}(\mathbf{A})\) is also a clonoid from \(\mathbb{Z}_{p}^{id}\) to \((L,+,0,-)\). By Lemma 19, \(\operatorname{CompRep}(\operatorname{Diff}_{0}(\mathbf{A}))\) is solvable in polynomial time, which finishes the proof.

**Corollary 21**.: _For every nilpotent Mal'tsev algebra \(\mathbf{A}\) with \(|A|=pq\) for distinct primes \(p\neq q\), we have \(\operatorname{SMP}(\mathbf{A})\in\mathsf{P}\)._

Proof.: If \(\mathbf{A}\) is affine, then the result holds by (generalized) Gaussian elimination. So assume that \(\mathbf{A}\) is 2-nilpotent, but not affine. So \(\mathbf{A}\) is isomorphic to \(\mathbf{L}\otimes\mathbf{U}\), and wlog. \(|L|=q\) and \(|U|=p\). Then the result follows directly from Theorem 20.

## 6 Discussion

In Theorem 20 we proved that every Mal'tsev algebra that can be written as a wreath product \(\mathbf{L}\otimes\mathbf{U}\) with \(|U|=p\) and \(p\nmid|L|\) has a tractable subpower membership problem. But, since the reduction discussed in Theorem 11 extends beyond this case, it is natural to ask whether the tractability also extends to all those cases:

**Question 22**.: _Is \(\mathrm{SMP}(\mathbf{L}\otimes\mathbf{U})\in\mathsf{P}\) for every supernilpotent Mal'tsev algebra \(\mathbf{U}\)?_

In particular, if \(\mathbf{U}\) is affine, Question 22 asks whether the subpower membership problem of all finite 2-nilpotent Mal'tsev algebras can be solved in polynomial time. By Theorem 11, this reduces to computing compact representations with respect to the clonoids between affine algebras. Thus answering the question requires a better understanding of such clonoids. 
A very recent result [27] studies such clonoids in the case where \(\mathbf{U}\) has a distributive congruence lattice, and \(\mathbf{L}\) is coprime to \(\mathbf{U}\). Such clonoids are always generated by functions of bounded arity (as in Lemma 14), thus we expect a similar argument as in Lemma 19 to work in solving \(\mathrm{CompRep}(\mathcal{C})\). We remark that the fact that every _full_ clonoid between such \(\mathbf{U}\) and \(\mathbf{L}\) is finitely generated was already implicitly used in [18] to obtain a polynomial-time algorithm for checking whether two circuits over a 2-nilpotent algebra are equivalent. However, [27] does not cover all clonoids between affine algebras; e.g. for the case \(\mathbf{U}=\mathbb{Z}_{p}\times\mathbb{Z}_{p}\) and coprime \(\mathbf{L}\) nothing is known so far. A reason why much emphasis is placed on coprime \(\mathbf{U}\) and \(\mathbf{L}\) is that their wreath products \(\mathbf{L}\otimes^{0,T}\mathbf{U}\) are not supernilpotent (for non-trivial operations \(T\)), and therefore not covered by Theorem 2. In fact, finite Mal'tsev algebras in finite language are supernilpotent if and only if they decompose into the direct product of nilpotent algebras of prime power size (see e.g. [2, Lemma 7.6.]). It is further still consistent with our current knowledge that the conditions of Theorem 10 are always met for coprime \(\mathbf{L}\) and \(\mathbf{U}\). This naturally leads to the question:

**Question 23**.: _Is \(\mathsf{Clo}(\mathbf{L}\times\mathbf{U})\subseteq\mathsf{Clo}(\mathbf{L}\otimes\mathbf{U})\), for all finite nilpotent Mal'tsev algebras \(\mathbf{L}\otimes\mathbf{U}\) where \(\mathbf{L}\) and \(\mathbf{U}\) are of coprime size?_

In fact, in an unpublished proof [20], a positive answer to Question 23 is given in the case that \(\mathsf{Clo}(\mathbf{L}\otimes\mathbf{U})\) contains a constant operation. A more general version of Question 23 would ask whether every finite nilpotent Mal'tsev algebra \(\mathbf{A}\) has a Mal'tsev term \(m\), such that \((A,m)\) is supernilpotent. Lastly, we would like to mention that recently the property of _short pp-definitions_ was suggested as a witness for \(\mathrm{SMP}(\mathbf{A})\in\mathsf{coNP}\). While Mal'tsev algebras that generate residually finite varieties have short pp-definitions [10], it is not known whether this is true in the nilpotent case. Thus we ask:

**Question 24**.: _Does every finite nilpotent Mal'tsev algebra \(\mathbf{A}\) have short pp-definitions (and hence \(\mathrm{SMP}(\mathbf{A})\in\mathsf{NP}\cap\mathsf{coNP}\))?_

Studying Question 24 might especially be a useful approach to discuss the complexity of algebras of high nilpotent degree, if studying the corresponding difference clonoids turns out to be too difficult or technical an endeavor.
2309.11226
Towards a Prediction of Machine Learning Training Time to Support Continuous Learning Systems Development
The problem of predicting the training time of machine learning (ML) models has become extremely relevant in the scientific community. Being able to predict a priori the training time of an ML model would enable the automatic selection of the best model both in terms of energy efficiency and in terms of performance in the context of, for instance, MLOps architectures. In this paper, we present the work we are conducting towards this direction. In particular, we present an extensive empirical study of the Full Parameter Time Complexity (FPTC) approach by Zheng et al., which is, to the best of our knowledge, the only approach formalizing the training time of ML models as a function of both dataset's and model's parameters. We study the formulations proposed for the Logistic Regression and Random Forest classifiers, and we highlight the main strengths and weaknesses of the approach. Finally, we observe how, from the conducted study, the prediction of training time is strictly related to the context (i.e., the involved dataset) and how the FPTC approach is not generalizable.
Francesca Marzi, Giordano d'Aloisio, Antinisca Di Marco, Giovanni Stilo
2023-09-20T11:35:03Z
http://arxiv.org/abs/2309.11226v1
# Towards a Prediction of Machine Learning Training Time to Support Continuous Learning Systems Development

###### Abstract

The problem of predicting the training time of machine learning (ML) models has become extremely relevant in the scientific community. Being able to predict _a priori_ the training time of an ML model would enable the automatic selection of the best model both in terms of energy efficiency and in terms of performance in the context of, for instance, MLOps architectures. In this paper, we present the work we are conducting towards this direction. In particular, we present an extensive empirical study of the Full Parameter Time Complexity (FPTC) approach by Zheng _et al._, which is, to the best of our knowledge, the only approach formalizing the training time of ML models as a function of both dataset's and model's parameters. We study the formulations proposed for the Logistic Regression and Random Forest classifiers, and we highlight the main strengths and weaknesses of the approach. Finally, we observe how, from the conducted study, the prediction of training time is strictly related to the context (i.e., the involved dataset) and how the FPTC approach is not generalizable.

Keywords: Machine Learning, Training Time Prediction, Formal Analysis.

## 1 Introduction

The problem of energy efficiency and sustainability of machine learning (ML) systems is becoming increasingly important within the scientific community [7, 23, 8], as also highlighted by the UN's Sustainable Development Goals (e.g., Goal 9 or Goal 12) [18]. Generally, the energy consumption of ML models is directly related to the _training phase time complexity_. This means that the longer it takes to train a model, the more energy is required by the system. For this reason, predicting _a priori_ the training time of an ML model would be a significant advance in this direction, enabling the automatic selection of the most efficient ML model. The training time prediction of ML models also becomes highly relevant in the context of MLOps and, in general, _continuous learning_ or _learning-enabled_ systems, where the ML model is constantly re-trained with new data [3]. As highlighted in [17], engineering this kind of system is always very challenging since the development processes are often ad-hoc and specific to the use case. For this reason, having an _a priori_ estimation of the training time can help in standardizing some phases of the development process in contexts where, for instance, the computational power for training the model is very limited (e.g., IoT devices [25]). In addition, selecting the most efficient ML model can help stakeholders satisfy other relevant quality properties of software architectures, like _performance_ [13]. In this paper, we present the work we are conducting towards a prediction of ML training time. In particular, we present an extensive empirical evaluation of the Full Parameter Time Complexity (FPTC) approach proposed by Zheng _et al._ in [24], which is, to the best of our knowledge, the only approach so far that formulates the ML training time as a function of dataset's and ML model's parameters. 
Specifically, differently from what has been done in [24], where the authors use only one dataset, we use the FPTC approach to predict the training time of a Logistic Regression [15] and Random Forest [21] classifier on a heterogeneous set of data, and we compare the predicted time with the actual training time of the method, highlighting the main strengths and weaknesses of the approach1. Footnote 1: The replication package of the experiments is available here: [https://shorturl.at/DGMX1](https://shorturl.at/DGMX1) The paper is structured as follows: in Section 2 we discuss some related works in the context of training time prediction; Section 3 describes in detail the FPTC approach; Section 4 presents the conducted experiment and the research questions we want to answer; Section 5 shows the experiment's results and discuss them w.r.t. the research questions; finally Section 6 presents some future works and concludes the paper. ## 2 Related Work Nowadays, the estimation of the running time of the training phase of ML models is primarily conducted through empirical analysis relying on a set of common characteristics. In [12], the authors performed empirical analyses to assess the impact of different dataset characteristics, such as sample size, class type, missing values and dimensionality, on the performance of classification algorithms, considering both accuracy and elapsed time. In [2], a rule-based learning algorithm was derived through an empirical evaluation of the performance of eight classifiers on 100 classification datasets, comparing them based on various accuracy and computational time measures. The empirical results were combined with the dataset characteristic measures to formulate rules to determine which algorithms were best suited for solving specific classification problems. Finally, in [16], a model was developed to predict the running time of ML pipelines through empirical analysis of different ML algorithms with a heterogeneous set of data. The approach was used to predict the timeout of an ML pipeline. Considering non-empirical analyses, to the best of our knowledge, [24] is the first attempt to provide an a priori estimation of the training time for various ML models without actually running the code. In this work, the authors propose a method to quantitatively evaluate the time efficiency of an ML classifier called Full Parameter Time Complexity (FPTC). The authors derive FPTC for five classification models, namely Logistic Regression, Support Vector Machine, Random Forest, K-Nearest Neighbors, and Classification and Regression Trees. FPTC depends on several variables, including the number of attributes, the size of the training set, and intrinsic characteristics of the algorithms, such as the number of iterations in Logistic Regression or the number of Decision Trees in a Random Forest. A coefficient \(\omega\) was introduced to establish the relationship between the running time and FPTC. The coefficient \(\omega\) can be obtained through a preliminary experiment on a small sampled dataset under different execution environments. When the physical execution environment changes, the coefficient \(\omega\) should be reevaluated to reflect the new conditions. Based on this state-of-the-art analysis, we observe that most of the studies concerning the training time of ML models tend to rely on empirical approaches. The only approach formalizing the training time as a function of datasets' and ML models' parameters is [24]. 
In this paper, we aim to highlight the strengths and weaknesses of this approach by conducting an extensive evaluation of the method.

## 3 Background Knowledge

In this section, we describe in detail the FPTC method [24], where the training time of several ML models is defined as a function of different parameters of the dataset, of the model itself, and of a coefficient (\(\omega\)) that reflects the influence of the execution environment on the actual training time of the model. In this work, we focus on the formulation of the training time for two particular ML models, i.e., Logistic Regression (_LogReg_) [15] and Random Forest (_RF_) [21], while we leave the analysis of other methods to future works. The FPTC for the Logistic Regression classifier is defined as: \[FPTC_{LogReg}=F(Qm^{2}vn)*\omega_{LogReg} \tag{1}\] where \(n\) is the number of rows of the dataset, \(v\) is the number of dataset's features, \(m\) is the number of classes of the dataset, \(Q\) is the number of model's iterations during the training phase, and \(\omega_{LogReg}\) is the slope of a regression function computed by comparing the results of the first part of Equation 1 with the actual training time of a Logistic Regression model using a subset of the training datasets. The FPTC for the Random Forest classifier is defined instead as: \[FPTC_{RF}=F(s(m+1)nv\log_{2}(n))*\omega_{RF} \tag{2}\] where \(n\), \(m\), and \(v\) are the same variables as above, while \(s\) is the number of trees of the random forest. \(\omega_{RF}\) is again defined as the slope of a regression function computed by comparing the results of the first part of Equation 2 with the actual training time of a Random Forest classifier on a set of synthetic datasets. Concerning \(\omega\), the authors state that this variable reflects the influence of the execution environment on the actual training time of the model. Hence, this value should vary only when an ML model runs on a different environment. We detail in Section 4 how \(\omega\) has been computed in our experiment.

## 4 Experimental Setting

This section describes the experiments we conducted to evaluate the FPTC method. In particular, with our experiments, we aim to answer the following two research questions:

**RQ1.**: Is the slope (\(\omega\)) parameter of FPTC only dependent on the execution environment?

**RQ2.**: Is the FPTC able to predict the training time of an ML model?

In Section 4.1, we describe the experimental setting used to compute the slope parameter, while in Section 4.2 we describe the experiment conducted to predict the training time of the Logistic Regression and Random Forest models. All the experiments have been executed on a DELL XPS 13 2019 with an Intel Core i7 processor, 16GB of RAM, and Ubuntu 22.04.2 LTS.

### Slope Computation

To answer **RQ1**, we must assess if the slope computation only depends on the execution environment. That is, given the same environment and the same ML model, the slope should not change significantly if the dataset used to compute the slope changes. To answer this question, we performed an experiment that computes a set of slopes using a synthetic dataset \(D_{s}\) with 6,167 rows and 10,000 features. In particular, we calculate a set of slopes corresponding to 19 subsets of \(D_{s}\), each one with a different subset of features. 
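To make the two formulations concrete, the following is a minimal Python sketch of Equations 1 and 2 and of the getFPTC step used in the algorithms below; the function names are ours, \(F(\cdot)\) is read as a plain product of the parameters, and the slope \(\omega\) is assumed to have been estimated beforehand:

```python
import numpy as np

def fptc_logreg(n, v, m, Q, omega):
    # Eq. (1): n rows, v features, m classes, Q training iterations, omega slope
    return Q * (m ** 2) * v * n * omega

def fptc_random_forest(n, v, m, s, omega):
    # Eq. (2): s is the number of trees of the Random Forest
    return s * (m + 1) * n * v * np.log2(n) * omega

# For instance, with the Compas values reported in Table 1
# (6,167 rows, 400 features, 2 classes, 721 LogReg iterations):
# predicted_time = fptc_logreg(6167, 400, 2, 721, omega_logreg)
```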
Next, we compared the different slopes obtained. It is worth noticing that, in [24], the authors compute the slope on the same dataset on which they want to predict the training time. In this experiment, we use a synthetic dataset different from the ones on which we predict the training time. We have chosen a synthetic dataset instead of a real one to have better control over its number of features and instances. In addition, a synthetic dataset can be easily released and used for computing the slopes in further experiments.

```
Input:  (Synthetic dataset D_s, ML model M, number of starting features f = 501,
         number of features to add a = 501, number of starting rows s = 100,
         number of rows to add p = 1,000)
Output: (List of slopes at increasing number of features)

n  = number of rows of D_s        // in our case 6,167
m' = number of features of D_s    // in our case 10,000
slopes = {}
for i in 1..20 do
    D'_s = subset of D_s with f features
    while features of D'_s < m' do
        tt = []; fptcs = []
        m = features of D'_s
        /* split D'_s into sub-datasets and get training times and FPTC */
        for (r = s; r < n; r += p) do
            D''_s = dataset of r rows from D'_s
            train M on D''_s
            t = training time of M
            fptc = getFPTC(D''_s, M)
            add t to tt; add fptc to fptcs
        reg = LinearRegression()
        train reg on tt and fptcs
        omega = slope of reg
        append omega to slopes[m]
        D'_s = D'_s + a other features from D_s
for m in keys of slopes do
    slopes[m] = median of slopes[m]
return slopes
```
**Algorithm 1** Slope computation

Algorithm 1 shows the procedure we followed to compute the slopes. The algorithm takes as input a synthetic dataset \(D_{s}\), an ML model \(M\) (in our case, \(M\) is either a Logistic Regression or a Random Forest classifier), and a set of parameters useful for the analysis: \(f\), i.e., the number of starting features of the synthetic dataset \(D_{s}\); \(a\), i.e., the number of features to add at each iteration; \(s\), i.e., the number of rows of the first sub-dataset used to compute the slope; and \(p\), i.e., the number of rows to add to each other sub-dataset. In our case, \(f=501\), \(a=501\), \(s=100\), and \(p=1,000\). The algorithm returns a list of slopes, each one corresponding to a subset \(D^{\prime}_{s}\) of \(D_{s}\) with a number of features lower than or equal to the ones in \(D_{s}\). At the first iteration, \(D^{\prime}_{s}\) has 501 features. Next, \(D^{\prime}_{s}\) is split into a set of sub-datasets \(D^{\prime\prime}_{s}\) with an increasing number of rows ranging from 100 to the total number of rows, with a delta of 1,000 rows between consecutive sub-datasets. These sub-datasets are used to compute the training time of the model \(M\) and the relative _FPTC_ prediction using Equations 1 and 2 for Logistic Regression and Random Forest, respectively. After computing them for each sub-dataset \(D_{s}^{\prime\prime}\), the training times and the _FPTC_ predictions are used to train a _Linear Regression_ model and to get its slope \(\omega\). The obtained slope is added to a dictionary of slopes with the key equal to the number of features of \(D_{s}^{\prime}\). Finally, the number of features of \(D_{s}^{\prime}\) is increased by \(a\). This procedure continues until the number of features of \(D_{s}^{\prime}\) equals the number of features of \(D_{s}\). 
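As an illustration of the slope-fitting step at the core of Algorithm 1, the sketch below (our own, with hypothetical helper names) measures the training time of the model on each sub-dataset, evaluates the raw FPTC term (i.e., Equation 1 or 2 with \(\omega=1\)), and fits a linear regression whose coefficient is the slope \(\omega\):

```python
import time
import numpy as np
from sklearn.linear_model import LinearRegression

def fit_slope(sub_datasets, make_model, raw_fptc):
    # sub_datasets: list of (X, y) pairs with an increasing number of rows
    # make_model:   factory returning a fresh, untrained classifier
    # raw_fptc:     callable computing the FPTC term of Eq. (1) or (2) with omega = 1
    times, fptcs = [], []
    for X, y in sub_datasets:
        model = make_model()
        start = time.perf_counter()
        model.fit(X, y)
        times.append(time.perf_counter() - start)
        fptcs.append(raw_fptc(X, y, model))
    reg = LinearRegression().fit(np.array(fptcs).reshape(-1, 1), np.array(times))
    return reg.coef_[0]  # the slope omega for this number of features
```

With scikit-learn, `make_model` could for instance be `lambda: LogisticRegression(penalty="l2", solver="sag", max_iter=10000)`, matching the hyper-parameter settings reported in Section 4.2.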
This whole process is repeated 20 times, and the median slope of each subset \(D_{s}^{\prime}\) is finally returned.

### Training Time Prediction

To answer **RQ2**, we conducted a set of experiments to predict, using the FPTC method, the training time of a Logistic Regression and a Random Forest classifier on 7 heterogeneous datasets. Then we compared the predicted training time with the actual training time of the model.

```
Input:  (Dataset D, ML model M, List of slopes S)
Output: (List of Root Mean Squared Errors RMSE, List of Mean Absolute Percentage Errors MAPE)

trainingTimes = []
for i in 1..100 do
    train M on D
    t = training time of M
    add t to trainingTimes
tt = mean(trainingTimes)
RMSE = []; MAPE = []
for omega in S do
    FPTC = getFPTC(D, M, omega)
    rmse = getRMSE(tt, FPTC)
    map  = getMAPE(tt, FPTC)
    add rmse to RMSE
    add map to MAPE
return RMSE, MAPE
```
**Algorithm 2** Training time prediction

The algorithm takes as input a dataset \(D\), the ML model \(M\), and the list of slopes \(S\) computed with the procedure described in Algorithm 1, and returns a list of Root Mean Squared Errors \(RMSE\) [5] and Mean Absolute Percentage Errors \(MAPE\) [6], one for each slope. The experiment can be divided into two steps. In the first step, the algorithm computes the training time of the ML model \(M\) on \(D\) 100 times and then calculates the mean of the times. In the second step, for each slope \(\omega\), the algorithm computes the _FPTC_ and the RMSE and MAPE between the actual training time and the _FPTC_. Finally, the list of errors is returned. In the evaluation, we have employed 7 heterogeneous datasets which differ in terms of dimensions, to evaluate if the FPTC method works better on certain kinds of datasets. The list of employed datasets is reported in Table 1. Footnote 2: Before running Algorithm 2, following the guidelines reported in [19], all the data has been scaled by removing the mean (\(\mu\)) and dividing by the variance (\(\sigma\)) for each feature. Concerning the ML classifiers, we used the implementations from the _scikit-learn_ library [19] and, following the hyper-parameter settings of [24], we set the _l2_ penalty and _sag_ solver for the Logistic Regression, while we set the number of trees of the Random Forest classifier to 80. Finally, we set the maximum number of iterations of the Logistic Regression to 10,000. Table 1 summarizes, for each dataset, the values of the different parameters of the two FPTC formulations for Logistic Regression and Random Forest classifiers. In particular, together with the dimensions of the datasets, we also report the number of iterations required by the Logistic Regression to train and the number of trees of the Random Forest.

## 5 Experimental Results and Discussion

In this section, we present the results of our experimental evaluation and discuss them with respect to the research questions defined in Section 4. Finally, we present some threats to validity of our evaluation.

### Addressing RQ1

Figure 1 reports the boxplot of the variation of the slopes computed with an increasing number of features of the synthetic dataset. In particular, Figure 1a reports the slopes computed for the Logistic Regression classifier, while Figure 1b reports the slopes computed for the Random Forest classifier. 
\begin{table} \begin{tabular}{|l|c|c|c|c|c|} \hline & \multicolumn{3}{c|}{**Dataset Coefficients**} & \multicolumn{2}{c|}{**ML Methods Coefficients**} \\ \hline **Dataset** & **Instances** & **Features** & **Classes** & **LogReg Iters** & **RF Trees** \\ \hline Adult [11] & 30940 & 101 & 2 & 635 & 100 \\ \hline Antivirus [22] & 373 & 531 & 2 & 840 & 100 \\ \hline APS [1] & 60000 & 162 & 2 & 5068.73 & 100 \\ \hline Arcene [9] & 100 & 10000 & 2 & 1089 & 100 \\ \hline Compas [4] & 6167 & 400 & 2 & 721 & 100 \\ \hline Dexter [10] & 300 & 20000 & 2 & 855.91 & 100 \\ \hline German [20] & 1000 & 59 & 2 & 33.93 & 100 \\ \hline \end{tabular} \end{table} Table 1: Values of FPTC parameters for each dataset

Concerning the Logistic Regression model, it can be seen (in Figure 1a) how the slopes have generally low variability. An exception is given by the slopes computed with 501 and 1,002 features, which are, on average, higher than the others. In particular, the median of the slopes computed using 501 features is around 0.02 points higher than the others, while the median of the slopes calculated using 1,002 features is about 0.04 points higher than the others. In all the other cases, the median slope ranges from \(1.83*10^{-9}\) to \(1.85*10^{-9}\). Concerning the Random Forest classifier, it can be seen from Figure 1b how the slopes present a higher variability among them, starting from a value around \(8.5*10^{-10}\) using 501 features down to a value of \(2*10^{-10}\) using 9,519 features. In particular, it can be noticed from the figure that the value of the slope tends to decrease with an increase in the number of the dataset's features. Moreover, we study the significance of the results of the slopes by performing the ANOVA test [14] for both experiments. This test checks the null hypothesis that all groups (i.e., all the slopes computed using the same number of features) have the same mean; if the confidence value (_p-value_) is \(>0.05\), the null hypothesis cannot be rejected. Concerning the Logistic Regression classifier, the test returned a _p-value_ of 0.002, meaning the groups do not have the same mean. However, performing the same ANOVA test excluding the slopes computed with 501 and 1,002 features returns a _p-value_ of 0.352, so the null hypothesis of equal means cannot be rejected. This means that, excluding the slopes computed with 501 and 1,002 features, all the others have overall the same mean. Concerning the Random Forest classifier, the _p-value_ returned is \(9.022*10^{-222}\), confirming the high variability of the slopes.

Figure 1: Slope variation with an increasing number of dataset's features

From this analysis of the slope variations, we can conclude how, differently from what is stated in [24], the slopes do not change only when the execution environment changes, but they are also related to the number of features of the dataset used to compute them, in particular when using a Random Forest classifier. **Answer to RQ1:** The slopes computed under the same execution environment but using an increasing number of features are pretty stable for the Logistic Regression classifier. Instead, they present a higher variance for the Random Forest classifier. Hence, we can conclude how the slope is also related to the number of features of the dataset used to compute it. 
### Addressing RQ2

Figures 2 and 3 report the errors in the predictions of the FPTC method compared to the actual training time of the Logistic Regression and Random Forest classifier, respectively, for all the datasets described in Section 4. In particular, in each figure, the left y-axis reports the RMSE, while the right y-axis reports the MAPE. On the x-axis, we report the number of features of the synthetic dataset used to compute the relative slope. Near each dataset name, we also report its number of features.

Figure 2: RMSE and MAPE at different slope values for LogReg

Concerning the Logistic Regression classifier, it can be seen from Figure 2 how the FPTC method can predict the training time of the model for some datasets while it fails for others. In particular, the FPTC method can predict the training time of the LogReg on the _Antivirus_ dataset (with an RMSE and MAPE almost equal to 0 using the slope computed with 9,009 features of the synthetic dataset), _Arcene_ (with an RMSE and MAPE almost equal to 0 using the slope computed with 6,006 features), _Compas_ (with an RMSE and MAPE almost equal to 0 using the slope computed with 4,004 features), and _Dexter_ (with an RMSE and MAPE almost equal to 0 using the slope computed with 501 features). In contrast, the FPTC method is not able to predict the training time of the LogReg on _Adult_ (with the lowest MAPE equal to 9.5 using the slope computed with 1,503 features) and _APS_ (with the lowest MAPE equal to 9.0 using the slope computed with 1,503 features). It is worth noting that the high MAPE for the _German_ dataset may be influenced by the low values of FPTC and true running time, causing this metric to increase [6]. This is also supported by a low value of the RMSE. Table 2 reports the mean and standard deviation of the training time and FPTC in seconds for each selected dataset. From this table, it can be seen how the FPTC method tends to underestimate the real training time, especially in _Adult_ (with a delta of almost 2 seconds between the actual training time and the predicted one) and _APS_ (with a delta of almost 50 seconds between the actual training time and the predicted one). Finally, following the low variability of the slopes computed in Section 5.1, we notice how the slopes' variation does not greatly influence the FPTC predictions. Figure 3 reports the same metrics computed for the Random Forest classifier. Differently from the Logistic Regression classifier, here we notice how the FPTC method is more sensitive to the variation of the slopes, which causes the predictions to increase or decrease significantly. This behaviour is explained by the high variability of the slopes shown in Section 5.1. In addition, it can be seen from the charts that the FPTC method can always predict the real training time for a specific slope value, achieving a value of zero for both RMSE and MAPE. However, we also notice how the value of the slope leading to the optimal predictions is not constant and varies between the datasets. The only dataset on which the FPTC method is not able to correctly predict the training time is the _APS_ dataset, with the lowest MAPE of around 15 points. 
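Since the discussion above relies on RMSE and MAPE computed between the mean measured training time and the FPTC prediction (Algorithm 2), the short sketch below, in our own notation, also illustrates why MAPE inflates when the true training time is small:

```python
import numpy as np

def rmse(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def mape(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.mean(np.abs((y_true - y_pred) / y_true)))

# With the German values of Table 2 (true 0.019 s, predicted 0.015 s), the absolute
# error is only 0.004 s, yet the relative error is already about 21%; the same
# absolute error on the APS training time (about 400 s) would be negligible.
```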
\begin{table} \begin{tabular}{|l|c|c|} \hline **Dataset** & **Training Time (seconds)** & **FPTC (seconds)** \\ \hline Adult & 16.54 \(\pm\) 0.042 & 14.77 \(\pm\) 0.066 \\ \hline Antivirus & 1.15 \(\pm\) 0.012 & 1.214 \(\pm\) 0.006 \\ \hline APS & 400.156 \(\pm\) 1.126 & 356.81 \(\pm\) 1.803 \\ \hline Arcene & 7.711 \(\pm\) 0.012 & 7.953 \(\pm\) 0.006 \\ \hline Compas & 12.802 \(\pm\) 5.366 & 12.956 \(\pm\) 0.065 \\ \hline Dexter & 37.597 \(\pm\) 0.403 & 37.5 \(\pm\) 0.188 \\ \hline German & 0.019 \(\pm\) 0.003 & 0.015 \(\pm\) 7.342 \(\ast\) 10\({}^{-5}\) \\ \hline \end{tabular} \end{table} Table 2: Mean and stand. dev. of training time and FPTC for LogReg model

Figure 3: RMSE and MAPE at different slope values for Random Forest

\begin{table} \begin{tabular}{|l|c|c|} \hline **Dataset** & **Training Time (seconds)** & **FPTC (seconds)** \\ \hline Adult & 2.15 \(\pm\) 0.012 & 2.60 \(\pm\) 2.383 \\ \hline Antivirus & 0.07 \(\pm\) 8.368 \(\ast\) 10\({}^{-17}\) & 1.20 \(\pm\) 0.711 \\ \hline APS & 37.54 \(\pm\) 0.698 & 11.49 \(\pm\) 6.469 \\ \hline Arcene & 0.13 \(\pm\) 0.004 & 0.79 \(\pm\) 0.874 \\ \hline Compas & 0.99 \(\pm\) 0.009 & 1.23 \(\pm\) 1.758 \\ \hline Dexter & 0.217 \(\pm\) 0.005 & 2.76 \(\pm\) 2.452 \\ \hline German & 0.11 \(\pm\) 0.004 & 1.3 \(\pm\) 0.677 \\ \hline \end{tabular} \end{table} Table 3: Mean and stand. dev. of training time and FPTC for RF model

Table 3 reports the mean and standard deviation of the actual training time and the predicted one for the Random Forest classifier. Differently from above, in this case, we notice a higher variability among the predicted training times, especially in _Adult_, _APS_, _Compas_, and _Dexter_. In addition, we notice how for the _APS_ dataset (which is the one yielding the worst performance), the FPTC method underestimates the real training time. Finally, as noticed above, the low training time of some datasets (namely, _Antivirus_, _Arcene_, _Dexter_) explains the high value of the related MAPE metric for them. From this analysis, we can conclude how the FPTC method is able to predict the training time of a Logistic Regression and Random Forest classifier under certain circumstances (i.e., datasets) while it does not work in others. However, we do not notice any correlation between specific characteristics of the dataset (e.g., number of features) and the correctness of the predictions. Moreover, we see how the correctness of the predictions is directly correlated to the value of the slope, which is again not only dependent on the execution environment but also varies with the dataset used to compute it, as shown in Section 5.1. **Answer to RQ2:** The FPTC method is able to predict the training time of the Logistic Regression and Random Forest classifiers under certain circumstances (i.e., datasets), while it fails in others. The correctness of the predictions (especially for the Random Forest classifier) is strongly related to the value of the slope, which depends on the dataset used to compute it.

### Threats to Validity

**Internal validity:** We adopted a synthetic dataset to compute the slopes to answer **RQ1**. However, a real-world dataset could include more complexity and variability not considered in this experiment. To address this threat, we clarify that the goal of our experiment was to prove that the value of the slope is not only dependent on the execution environment. Hence, any dataset (synthetic or not) that proves this hypothesis is effective. 
**External validity:** The results of our experiments may apply only to the selected ML models and datasets. Concerning the selection of the datasets, we selected several datasets heterogeneous in their dimensions, making our results sufficiently general. Concerning the ML models, we analysed two of the most widely adopted ML models for classification, while we will analyse the others in future works.

## 6 Conclusion and Future Work

In this paper, we have presented the work we are conducting towards predicting the training time of ML models. In particular, we have extensively evaluated the work proposed in [24], which is the only approach so far that formulates the training time as a function of the dataset's and model's parameters. In this paper, we have considered the formulations proposed for the Logistic Regression and Random Forest classifiers, and we have shown how the proposed approach is not always able to predict the training time successfully. Further, from the results shown in Section 5.2, there is no evidence of any correlation between the dataset size and the correctness of the predictions. Instead, from the results shown in Section 5.1, there is a correlation between the number of dataset features and the value of the slope used in the FPTC formulation (which is, again, not only dependent on the execution environment as stated in [24]). In the future, we want to analyse more deeply the formulations proposed for the different ML models and overcome the observed limitations. In particular, we want to investigate if some specific characteristics of the dataset or ML model influence the training time and are not considered in the current formulation.
2309.07444
Research on self-cross transformer model of point cloud change detecter
With the vigorous development of the urban construction industry, engineering deformation or changes often occur during the construction process. To combat this phenomenon, it is necessary to detect changes in order to find construction defects in time, ensure the integrity of the project, reduce labor costs, and avoid the inconvenience and danger caused to roads. In the study of change detection in 3D point clouds, researchers have published various methods, mostly based on traditional threshold-distance approaches (C2C, M3C2, M3C2-EP), while others convert 3D point clouds into DSMs, which loses a lot of the original information. Although deep learning is used in remote sensing, for change detection of 3D point clouds the data are mostly converted into two-dimensional patches, and neural networks are rarely applied directly; we prefer a network that outputs changes at the level of pixels or points. Therefore, in this article, we build a network for 3D point cloud change detection and propose a new Cross transformer module suitable for change detection. We also simulate tunnel data for change detection and test our network on them.
Xiaoxu Ren, Haili Sun, Zhenxin Zhang
2023-09-14T05:54:54Z
http://arxiv.org/abs/2309.07444v1
# Research on Self-Cross-Transformer Model of Point Cloud Change Detection

###### Abstract

With the vigorous development of the urban construction industry, engineering deformation or changes often occur during the construction process. To combat this phenomenon, it is necessary to detect changes in order to find construction defects in time, ensure the integrity of the project, reduce labor costs, and avoid the inconvenience and danger caused to roads. In the study of change detection in 3D point clouds, researchers have published various methods, mostly based on traditional threshold-distance approaches (C2C, M3C2, M3C2-EP), while others convert 3D point clouds into DSMs, which loses a lot of the original information. Although deep learning is used in remote sensing, for change detection of 3D point clouds the data are mostly converted into two-dimensional patches, and neural networks are rarely applied directly; we prefer a network that outputs changes at the level of pixels or points. Therefore, in this article, we build a network for 3D point cloud change detection and propose a new Cross transformer module suitable for change detection. We also simulate tunnel data for change detection and test our network on them.

3D Change detection, Point cloud, Deep Learning, Siamese Network, Transformer

## 1 Introduction

With the vigorous development of the construction industry, the problem of detecting engineering changes during construction is particularly significant, and we need to detect changes across different time periods in engineering applications. While many existing studies use 2D images for change detection, 3D point clouds bring some complementary information about height, which seems to be useful in the context of detecting changes on construction sites, since the main modifications occur on the height axis. Furthermore, spectral variability of the same object over time, differences in viewing angle between 2D image acquisitions, and perspective and distortion effects can complicate change retrieval based on 2D data (Qin et al., 2016). Although change detection in 3D data in engineering has been addressed in several studies, experimental comparisons are so far lacking. To the best of our knowledge, the only comparative analysis (Shirowzhan et al., 2019) is still at a qualitative level and excludes deep learning (DL) methods, which represent the state-of-the-art in remote sensing. In this paper, we first design a framework for point cloud change detection and propose a new cross-attention mechanism suitable for change detection. We conducted comparative tests on real datasets as well as on real datasets plus simulated datasets (in which changes can be introduced). We then compare representative approaches from the state of the art, ranging from classical distance-based approaches to recent deep learning developments, in the context of aerial lidar surveys (ALS) in urban areas.

## 2 Related Works

In this section, we briefly review general methods for change detection in 3D point clouds. Existing 3D PC-based methods for urban environment change detection and characterization are reviewed. Although there are many ways to convert PCs to DSMs, we will not focus on these studies as they are not directly related to the scope of our paper. 
Unlike 2D images organized in a regular pixel grid (2D grid), 3D point clouds generated by lidar are disordered and irregular, which makes it difficult to extract information from these data, and comparing them between time stamps is even more difficult. In fact, the location and distribution of points can also be very different in unaltered regions. Therefore, some methods convert a 3D point cloud into a 2D matrix that provides elevation information in each pixel. These 2D grids are called digital surface models (DSMs). The idea of these methods for detecting changes between two 3D point clouds is to compute the DSMs of the two point clouds and directly subtract them to retrieve the differences. It was first used for architectural change extraction in Murakami et al. (1999). It is still frequently used due to its simplicity and quality of results, and this method is also commonly used in the Earth observation community (Okyay et al., 2019); the resulting difference map can also be segmented using the Otsu thresholding algorithm, which computes a threshold from the histogram of values by minimizing the intra-class variance (Otsu, 1979); since DSMs contain artifacts (e.g., due to interpolation in hidden parts or the difficulty of retrieving precise building boundaries) (Gharibbafghi et al., 2019), several more complex pipelines have been proposed to derive more precise and finer-grained changes than simply positive or negative ones. For example, Choi et al. (2009) use DSM differences to identify change regions, and then segment each change region by filtering and grouping; after empirical thresholding of the DSM difference, the size, height, and shape of the remaining pixel clusters can also be used to select 3D building changes (Dini et al., 2012); from 3D point clouds, DSMs and digital terrain models (DTMs) can be extracted by relying on ground points, and Teo et al. (2012) retrieve and classify each object at each date using the DSM difference and the DTM, so that the segmented objects can be compared between the two time periods to determine changes; Pang et al. (2014) extract building change candidates by thresholding DSM changes; DSM differences with basic thresholds or further refinements are still widely used for 3D urban monitoring (Warth et al., 2019) and for post-disaster building damage assessment in the change detection literature (Wang et al., 2020). With the rise of deep learning methods in Earth observation, change detection in 2D imagery has also benefited from this advance. 
For example, applying a convolutional neural network (CNN) to RGB images can assess building damage due to earthquakes (Kalantar et al., 2020); this study compared three different architectures, and the best results were obtained with a Siamese architecture. Siamese networks are used for change detection or similarity computation between two inputs, and thus they have been heavily used in remote sensing applications (Shi et al., 2020); they can provide reliable results even in the case of heterogeneous inputs such as optical and synthetic aperture radar (SAR) images (Mou et al., 2017). Since (Zhang et al., 2019) aim to use airborne lidar (ALS) and photogrammetric point clouds to obtain bi-temporal multimodal 3D information, they chose a Siamese architecture in which one branch is fed with the DSM (either directly with the DSM difference or with two channels), and the other branch with the corresponding RGB orthoimage; in their study, they also computed the change regions using only the DSM information. Another family of 3D change detection methods relies directly on raw point clouds. First, Girardeau-Montaut et al. (2005) proposed a cloud-to-cloud (C2C) comparison based on Hausdorff point-to-point distances and octree subdivisions of PCs to speed up computation; Lague et al. (2013) developed a more refined method for measuring the average surface variation along the normal direction, extracting surface normals and orientations at a scale consistent with the local surface roughness. This approach is called Multiscale Model-to-Model Cloud Comparison (M3C2); this second technique allows distinguishing between positive and negative changes, which is not possible with the C2C method, and, more importantly, requires less computational effort (Shirowzhan et al., 2019). There are also semantic-based approaches. Among them, Awrangieb et al. (2015) first extract boundaries from lidar data and aerial images, extract 2D footprints of buildings, and then compare the footprints to highlight changes on 2D maps; (Xu et al., 2015) also propose to segment each point cloud to extract buildings, and then create a 3D surface disparity map by computing the point-to-plane distance between points in the first set and the closest plane in the second set. While rasterizing 3D data into a 2D elevation matrix (known as a digital surface model (DSM)) can be considered a valuable solution, this rasterization process results in some loss of information because only the highest point is kept, and it remains difficult to process such unstructured point cloud data directly with standard tools designed for 2D images. While existing studies that directly deal with 3D point clouds rely on hand-crafted features or distance computations, very few deep learning models are used to directly address change detection on raw 3D point clouds. In recent years, deep learning methods have achieved good results in remote sensing and other fields. Therefore, designing a high-precision deep network model that can directly process 3D point clouds can provide a new method for point cloud change detection. This leads us to the next chapter.

## 3 Method

### Background

To address the problem of change detection and representation in 2D images, recent studies propose to use deep Siamese fully convolutional networks (FCNs). Such a network consists of a common encoder-decoder architecture with skip connections. To extract features, both images pass through the encoder part, which consists of two branches, one for each image. 
Each branch is a series of conventional convolution and pooling layers that extracts information from the data at several scales. The particularity of the Siamese network is that, at each pooling step, the difference between the features extracted by the two branches is kept and concatenated at the corresponding scale in the decoder part (Daudt et al., 2018). If the two inputs are very similar, the two branches of the encoder can share weights so that features are extracted in the same way; when the inputs differ significantly, for example when the images come from two types of sensors (such as optical and radar), the weights may be kept independent, resulting in a so-called pseudo-Siamese network (Zhan et al., 2017). To address the 3D part of the problem, we propose to rely on deep networks capable of performing semantic segmentation directly on the point cloud. To this end, we consider the core module of the recent Point Transformer (Zhao et al., 2021), a network that achieves very good results on segmentation and classification tasks. As in 2D image encoders, the principle is to apply an attention mechanism, here adapted to 3D point clouds: the authors of the Point Transformer took architectures inspired by transformers from natural language processing and 2D vision, applied them to point clouds, and added the positional information that point clouds naturally carry. We further improve on the Point Transformer and propose a new module, the Cross-Transformer. This cross-attention mechanism computes attention across the two point clouds and is better suited to the change detection task. ### Our framework To extend the Siamese principle to 3D point clouds, we propose to embed a modified Point Transformer architecture into a deep Siamese network in which the point clouds from the two epochs pass through the same encoder with shared weights. As in common encoder-decoders with skip connections, at each scale of the decoding part we concatenate the difference of the extracted features associated with the corresponding encoding scale (see Figure 1). In practice, computing such feature differences is not straightforward, because the two point clouds do not contain the same number of points and are not defined at the same locations, even in unchanged regions. During decoding we therefore perform a feature transfer: for each point of the second point cloud, the features of the spatially corresponding (neighboring) points in the first point cloud are retrieved, the difference is computed, and it is added to the features of the original point cloud. Our method combines the Self-Transformer and the Cross-Transformer modules within the Point Transformer framework, exploiting their respective advantages to extract discriminative local descriptors, uses dynamic graph convolution as the feature embedding of the transformer, and finally achieves point-wise change detection on the point clouds. Following (Wang et al., 2019), we use dynamic graph convolutions as the feature embedding of the Self-Transformer module. Unlike (Zhao et al., 2021), which uses an MLP directly as the transformer embedding, we verified experimentally that using dynamic graph convolution as the feature embedding gives better results.
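As an illustration of the cross-epoch feature transfer mentioned above, the following sketch retrieves, for every point of the second cloud, the feature of its nearest spatial neighbor in the first cloud and returns the feature difference that can be concatenated as a Siamese skip connection in the decoder. This is only a minimal sketch of one possible implementation; the tensor shapes, the use of a single nearest neighbor, and the choice of PyTorch are our own assumptions.

```python
import torch

def nearest_neighbor_feature_difference(xyz1, feat1, xyz2, feat2):
    """For each point of epoch 2, subtract the feature of its nearest
    epoch-1 point; the result is aligned with the epoch-2 points.

    xyz1:  (N1, 3) coordinates of epoch-1 points
    feat1: (N1, C) features of epoch-1 points
    xyz2:  (N2, 3) coordinates of epoch-2 points
    feat2: (N2, C) features of epoch-2 points
    returns: (N2, C) per-point feature difference
    """
    # Pairwise Euclidean distances between the two epochs, shape (N2, N1).
    dist = torch.cdist(xyz2, xyz1)
    # Index of the closest epoch-1 point for every epoch-2 point.
    nn_idx = dist.argmin(dim=1)
    # Transfer epoch-1 features to epoch-2 locations and subtract.
    transferred = feat1[nn_idx]          # (N2, C)
    return feat2 - transferred

# Example usage with random data:
xyz1, feat1 = torch.rand(1024, 3), torch.rand(1024, 64)
xyz2, feat2 = torch.rand(900, 3), torch.rand(900, 64)
diff = nearest_neighbor_feature_difference(xyz1, feat1, xyz2, feat2)  # (900, 64)
```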
The dynamic graph convolution is beneficial because it dynamically updates the graph structure from one layer to the next: the network can learn to group points semantically instead of being limited to local features of spatially close points. In this module, the dynamically computed graph is the essential ingredient: the graph is recomputed from the k nearest neighbors in the feature space produced by each layer, which is the key difference from graph CNNs that operate on a fixed input graph. With dynamic graph updates, the receptive field gradually grows until it can cover the whole point cloud. At each layer \(l\), a different graph \(G^{l}=(V^{l},E^{l})\) is constructed, whose edges \((i,j_{l1}),\ldots,(i,j_{lk})\) connect \(x^{(l)}_{i}\) to the points \(x^{(l)}_{j_{l1}},\ldots,x^{(l)}_{j_{lk}}\) closest to it; in other words, the architecture learns how to build the graph used in each layer rather than taking it as a fixed constant constructed before evaluating the network. In our implementation, we compute a pairwise distance matrix in the feature space and take the k closest points for each point, instead of selecting neighbors within a fixed coordinate distance; this turns spatial proximity into semantic proximity. On this basis, we design a self-transformer-based feature encoding module that processes point clouds and effectively extracts 3D features of the scene. The Self-Transformer encoder-decoder module consists of an encoder and a decoder (Figure 1). We adopt a twin structure in the encoder: the two branches share an encoder made of four SA (set abstraction) layers interleaved with Self-Transformer layers, and the network is downsampled and upsampled using PointNet++ (Qi et al., 2017); positional encoding is added as well. We use a vector attention operator rather than scalar attention; the advantage of vector attention is verified in (Zhao et al., 2021). Compared with scalar attention, vector attention computes the attention weights differently and can modulate the attention of each individual feature channel. Figure 1: Our network framework diagram. We use a subtraction relation and add the positional encoding \(\sigma\) to both the attention vector and the transformed features: \[y_{t}=\sum_{x_{j}\in X(t)}\rho\left(\beta\left(\varphi(x_{t})-\omega\left(x_{j}\right)+\sigma\right)\right)\odot\left(\alpha\left(x_{j}\right)+\sigma\right) \tag{1}\] \[F_{out}=F_{in}+y_{t} \tag{2}\] Here the relation function is a subtraction, and \(\beta\) is a mapping function, an MLP with two linear layers and a ReLU non-linearity, that generates the attention vectors used for feature aggregation; \(y_{t}\) is the output attention feature, \(F\) is the total output feature, and \(\varphi\), \(\omega\), and \(\alpha\) are point-wise feature transformations such as linear projections or MLPs. \(\sigma\) is a position encoding function and \(\rho\) is a normalization function such as softmax. The subset \(X(t)\subseteq X\) is the local neighborhood (specifically the k nearest neighbors) of \(x_{t}\). The differences between our module and the Point Transformer are that we use the Dynamic Graph CNN (DGCNN) (Wang et al., 2019) as the feature embedding and apply an L1 normalization within \(\rho\).
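The sketch below illustrates the subtraction-based vector attention of Eqs. (1)-(2). It is written so that the queries and the keys/values may come from the same cloud (self-attention) or from two different epochs, as in the cross-attention variant described in the next paragraph. This is a simplified sketch under our own assumptions (PyTorch, single head, naive k-NN search, a small MLP as position encoding, L1 normalization as one reading of "adding the L1 norm to \(\rho\)"); it is not the authors' exact implementation.

```python
import torch
import torch.nn as nn

class VectorAttention(nn.Module):
    """Subtraction-based vector attention with positional encoding (Eqs. 1-2)."""
    def __init__(self, dim, k=16):
        super().__init__()
        self.k = k
        self.phi = nn.Linear(dim, dim)    # query transform  (phi)
        self.omega = nn.Linear(dim, dim)  # key transform    (omega)
        self.alpha = nn.Linear(dim, dim)  # value transform  (alpha)
        self.beta = nn.Sequential(        # mapping MLP producing attention vectors
            nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.pos_enc = nn.Sequential(     # positional encoding sigma
            nn.Linear(3, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, xyz_q, feat_q, xyz_s, feat_s):
        """xyz_q/feat_q: (N, 3)/(N, C) query cloud; xyz_s/feat_s: (M, 3)/(M, C)
        support cloud supplying keys and values."""
        # k nearest support points of every query point (naive search).
        idx = torch.cdist(xyz_q, xyz_s).topk(self.k, largest=False).indices  # (N, k)
        pos = self.pos_enc(xyz_q[:, None, :] - xyz_s[idx])                   # (N, k, C)
        q = self.phi(feat_q)[:, None, :]                                     # (N, 1, C)
        k_feat = self.omega(feat_s)[idx]                                     # (N, k, C)
        v = self.alpha(feat_s)[idx] + pos                                    # (N, k, C)
        # Channel-wise attention vectors from the subtraction relation.
        w = self.beta(q - k_feat + pos)
        # rho: L1-normalize the attention vectors over the k neighbors
        # (softmax would be the usual alternative mentioned in the text).
        w = w / (w.abs().sum(dim=1, keepdim=True) + 1e-8)
        y = (w * v).sum(dim=1)                                               # (N, C)
        return feat_q + y  # residual output, Eq. (2)
```

Self-attention within one epoch corresponds to `layer(xyz, feat, xyz, feat)`, while the cross-attention described in the following paragraph would correspond to `layer(xyz_t2, feat_t2, xyz_t1, feat_t1)`, so that the queries come from one epoch and the keys/values from the neighborhood of the same location in the other epoch.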
Inspired by the self-attention mechanism of the previous section, the main innovation of this paper is a cross-attention mechanism for change detection, which we call the Cross-Transformer. We still use Equation (1) in this module. As in natural language processing, the linearly transformed feature of \(x_{t}\) is used as the query Q, the linearly transformed features of the k points in the local neighborhood of \(x_{t}\) (taken in the other point cloud) are used as the keys K, and a different linear transformation of those same k points gives the values V. The difference from self-attention is that the attention is computed by fusing information from the two point clouds. We still use subtraction to compute the attention score; our experiments verified that subtraction is more effective than other operations, which is expected for a change detection task in which the difference between the two epochs should be emphasized. Concretely, Q and K come from different point clouds: KNN is used to find the k neighboring points in the other epoch, the information is fused by taking the difference to compute the attention score between corresponding positions of the two point clouds, and the resulting attention weights are finally assigned to the features of the k surrounding points. ## 4 Experiments ### Dataset We use two data sets for training and testing. The first is an open-pit iron mine in Liaoning Province, China, surveyed on September 12, 2017 and January 12, 2017; it covers an area of 0.6 square kilometers with a maximum depth of 170 meters. The second data set comes from a shield tunnel in Huaide County, Jilin Province, China, acquired in 2021 and 2022. Because the amount of real change in the tunnel is far from sufficient for training, we added simulated changes to this data set in addition to its own changes, including falling wires, the installation of the internal catenary, and the construction of infrastructure; the simulated changes are realistic and can be used as research data. For testing, we held out one third of each of the two data sets. ### Results As shown in Figure 2, ground subsidence change detection is performed on Dataset 1. In the figure, red represents correct detections in the changed category, purple represents correct detections in the unchanged category, blue marks missed points of the changed class, and yellow marks missed points of the unchanged class; the results show a good detection effect. The metrics in Table 1 confirm the high accuracy. \begin{table} \begin{tabular}{c|c c c c c} mine & \multicolumn{5}{c}{mean evaluation} \\ \hline & OA (\%) & mRecall (\%) & mPrecision (\%) & mF1 (\%) & mIoU (\%) \\ test data & 98.05 & 97.44 & 95.76 & 96.34 & 93.46 \\ \hline \end{tabular} \end{table} Table 1: Change detection results on the mine data set. Figure 2: Mine test results. For the tunnel change detection, we split the test tunnel into two parts for easier visualization. Figure 3 shows the results, and Table 2 reports the detection metrics, which are good on both test parts 1 and 2.
In Figure 3 we compare the predictions with the ground truth: although the internal facilities of the tunnel are complex, most changes can still be detected; for example, the slight change caused by an added wire under the water pipe on the right side of the tunnel wall in test 1 is still detected. ## 5 Discussion We evaluated the method on each of the three test sets. Although a systematic comparison with many traditional methods is still pending, we did compare our model on the simulated data used in other work (de Gelis et al., 2023); on those urban datasets, our model clearly outperforms the traditional algorithms. We observe that most changes can be detected: in the ground subsidence of the mining site we study elevation changes, which is similar to urban change detection (de Gelis et al., 2023), and in the more complex tunnel our network still detects most of the changes. Some changes, however, are missed because the structures are too complex and too similar, in particular changes located very close to the tunnel wall; this deserves attention and leaves room for further improvement of the network. ## 6 Conclusion In this article, we propose a new point cloud change detection method that uses a deep neural network combining a dynamic graph convolutional network with a Transformer model to build a change detection network and extract descriptors with strong discriminative power. In addition, motivated by the change detection task, we design a new network module, the Cross-Transformer, and propose a cross-attention mechanism that computes attention scores between point clouds, improves the local fusion of the two point clouds, and locates change areas more accurately. We apply deep learning to 3D point cloud change detection; although dataset labeling still relies on manual work, the results are no longer presented in the form of patches, and point-level change detection has been achieved. Our future work will focus more on model optimization for 3D point clouds, and we hope that more researchers will work on 3D point cloud change detection and provide more data sets for this task.
2309.10138
Optimal Agnostic Control of Unknown Linear Dynamics in a Bounded Parameter Range
Here and in a follow-on paper, we consider a simple control problem in which the underlying dynamics depend on a parameter $a$ that is unknown and must be learned. In this paper, we assume that $a$ is bounded, i.e., that $|a| \le a_{\text{MAX}}$, and we study two variants of the control problem. In the first variant, Bayesian control, we are given a prior probability distribution for $a$ and we seek a strategy that minimizes the expected value of a given cost function. Assuming that we can solve a certain PDE (the Hamilton-Jacobi-Bellman equation), we produce optimal strategies for Bayesian control. In the second variant, agnostic control, we assume nothing about $a$ and we seek a strategy that minimizes a quantity called the regret. We produce a prior probability distribution $d\text{Prior}(a)$ supported on a finite subset of $[-a_{\text{MAX}},a_{\text{MAX}}]$ so that the agnostic control problem reduces to the Bayesian control problem for the prior $d\text{Prior}(a)$.
Jacob Carruth, Maximilian F. Eggl, Charles Fefferman, Clarence W. Rowley
2023-09-18T20:33:39Z
http://arxiv.org/abs/2309.10138v1
# Optimal Agnostic Control of Unknown Linear Dynamics in a Bounded Parameter Range ###### Abstract Here and in the follow-on paper [6], we consider a simple control problem in which the underlying dynamics depend on a parameter \(a\) that is unknown and must be learned. In this paper, we assume that \(a\) is bounded, i.e., that \(|a|\leq a_{\rm MAX}\), and we study two variants of the control problem. In the first variant, Bayesian control, we are given a prior probability distribution for \(a\) and we seek a strategy that minimizes the expected value of a given cost function. Assuming that we can solve a certain PDE (the Hamilton-Jacobi-Bellman equation), we produce optimal strategies for Bayesian control. In the second variant, agnostic control, we assume nothing about \(a\) and we seek a strategy that minimizes a quantity called the regret. We produce a prior probability distribution \(d{\rm Prior}(a)\) supported on a finite subset of \([-a_{\rm MAX},a_{\rm MAX}]\) so that the agnostic control problem reduces to the Bayesian control problem for the prior \(d{\rm Prior}(a)\).
2309.17131
Micromagnetics of ferromagnetic/antiferromagnetic nanocomposite materials. Part I: Towards the mesoscopic approach
In the first of two articles, we present here a novel mesoscopic micromagnetic approach for simulating materials composed of ferromagnetic and antiferromagnetic phases. Starting with the atomistic modeling of quasi one-dimensional systems, we explicitly show how the material parameters for the mesoscopic model of an antiferromagnet can be derived. The comparison between magnetization profiles obtained in atomistic and mesoscopic calculations (using a Heusler alloy as an example) proves the validity of our method. This approach opens up the possibility to recover the details of the magnetization distribution in ferromagnetic/antiferromagnetic materials with the resolution of a few nanometers covering length scales up to several hundreds of nanometers.
Sergey Erokhin, Dmitry Berkov, Andreas Michels
2023-09-29T10:49:40Z
http://arxiv.org/abs/2309.17131v1
# Micromagnetics of ferromagnetic/antiferromagnetic nanocomposite materials. ###### Abstract In the first of two articles, we present here a novel mesoscopic micromagnetic approach for simulating materials composed of ferromagnetic and antiferromagnetic phases. Starting with the atomistic modeling of quasi one-dimensional systems, we explicitly show how the material parameters for the mesoscopic model of an antiferromagnet can be derived. The comparison between magnetization profiles obtained in atomistic and mesoscopic calculations (using a Heusler alloy as an example) proves the validity of our method. This approach opens up the possibility to recover the details of the magnetization distribution in ferromagnetic/antiferromagnetic materials with the resolution of a few nanometers covering length scales up to several hundreds of nanometers. micromagnetics, Heusler alloys, magnetic nanocomposites, antiferromagnets, neutron scattering ## I Introduction The discovery of strong ferromagnetism of Ni\({}_{2}\)MnIn precipitates embedded in an antiferromagnetic (AFM) NiMn matrix [1] revealed that Heusler-based materials, which have potential applications in the areas of magnetic shape memory, magnetocaloric, and giant magnetoresistance, might possess an extremely large coercivity at room temperature. Indeed, in [2] it was demonstrated that the coercive field of such precipitates exceeds 5 T, which is remarkable for a rare-earth-free ferromagnetic (FM) material. Detailed structural and magnetic characterization of the samples, including annealing-time and annealing-temperature studies of the segregation process [3] and FM resonance measurements [4], supported earlier presumptions of the existence of 5 to 50 nm-sized inclusions that possess FM-like properties. The most significant experimentally observed feature was a vertical shift of the extracted hysteresis loop of ferromagnetic precipitates [2] suggesting a strong exchange coupling of the precipitates to the AFM matrix. Additionaly, the shape of the extracted magnetization loop, especially its abrupt magnetization jump near zero field, suggested that the system is composed of at least two different magnetic phases. We emphasize that the described functional properties are not the unique prerogative of this compound: similar FM properties are observed in other Heusler alloys too and are reported for Ni\({}_{50}\)Mn\({}_{50-\mathrm{x}}\)Sb\({}_{\mathrm{x}}\) in [5] and for Ni\({}_{50}\)Mn\({}_{50-\mathrm{x}}\)Sn\({}_{\mathrm{x}}\) in [6]. The aim of this computational research is the development of a comprehensive mesoscopic micromagnetic model for materials consisting of FM and AFM phases, such as Heusler-type alloys. Mesoscopic calculations are needed due to two reasons: first, the sizes of the FM crystallites are relatively large (in these materials up to 50 nm), rendering simulations in frames of an atomistic model not feasible due to their enormous computational effort; second, investigations of collective phenomena in such systems are highly desirable. At the same time, initial simulations at the atomic level are nevertheless necessary, because experimentally measured mesoscopic material parameters for both AFM and FM phases are lacking. Our model, which integrates both atomistic and mesoscopic simulations, enables a micromagnetic analysis of the system and furnishes a detailed quantitative account of the corresponding remagnetization processes. 
The crucial aspect of our study is the comparison of results obtained using our model with the previously cited experimental data. This two-parts article is organized as follows: the present (first) part explains the atomistic and mesoscopic approaches to the micromagnetic model of Heusler alloys and provides all the necessary prerequisites for the three-dimensional (3D) mesoscopic calculations; the second part [7] contains the results of this full 3D model, and a quantitative comparison to the experimentally measured hysteresis loop of the material under study. As a first step, we conducted a thorough literature search of lattice constants, total magnetic moments, anisotropy constants, and Curie temperatures of the constituent materials provided by experimental studies and density functional theory (DFT) calculations (Sec. II). Next, we mapped those parameters on a simplified atomic lattice structure and choose the model to describe the exchange interaction between the different magnetic phases (Sec. III). In order to have the possibility to validate our results and present them in the most convenient form of magnetization profiles, we restrict ourselves at this stage to simulations of quasi 1D structures (Sec. IV). This methodology allows us to determine all the required mesoscopic parameters by comparing the magnetization distributions obtained by mesoscopic and atomistic calculations (Sec. V). The developed approach is then generalized to 3D systems, comprising FM precipitates in an AFM matrix. In the second part [7], we employ mesoscopic micromagnetic modeling to simulate a single FM inclusion in an AFM matrix, allowing us to reveal the details of the magnetization distribution inside both FM and AFM constituents of this system. We explicitly demonstrate that the model where the AFM matrix is treated as being _monocrystalline_, is qualitatively incorrect, because it does not lead to a _hysteretic_ magnetization reversal process, in strong contradiction to experimental observations. To resolve this issue, we introduce a model where the AFM matrix is _polycrystalline_, taking into account magnetic interactions between its different crystallites, thus obtaining the AFM/FM _polycrystalline_ nanocomposite. Finally, we provide a quantitative comparison between the experimentally observed magnetization loop and our simulation results, demonstrating the validity of our model. ## II Structural and magnetic parameters To find out which parameters are known reliably enough to be used in atomistic simulations, we have performed a thorough search in the literature devoted to experimental results and DFT calculations of FM and AFM phases. Experimental data obtained by x-ray diffraction demonstrate that Ni\({}_{2}\)MnIn has a crystal structure of \(L2_{1}\) type (cubic austenitic state) with a lattice parameter \(a\) that is slightly above 6 A [8; 9]. A detailed study of the structural and magnetic properties [10], including cumulative experimental results from various scientific groups, allows to conclude that this FM material has a low Curie temperature of \(T_{\rm C}=310\) K and a total magnetic moment of \(\mu_{\rm tot}=4.1\,\mu_{\rm B}\)/f.u. (\(\mu_{\rm B}=\) Bohr magneton); corresponding parameters (required for the modeling) are listed in Table 1. 
For structural and magnetic parameters of other Heusler alloys with compositions different from that studied here, Ni\({}_{50}\)Mn\({}_{34}\)In\({}_{16}\) and Ni\({}_{50}\)Mn\({}_{35}\)In\({}_{15}\), but demonstrating similar magnetic properties, see [11] and [12], respectively. First principles calculations for this material (see Table 2) provide the value of the lattice parameter \(a\), which is close to the experimental data. Unfortunately, the calculated total magnetic moment substantially depends on the particular DFT methodology. Therefore, we have relied on experimental data for \(\mu_{\rm tot}\) in our simulations of this FM material. The tetragonal structure of AFM NiMn has been determined by x-ray and neutron diffraction experiments [19; 20]. In these studies it was also found that the magnetic moment of Ni atoms in this structure is almost zero, while the atomic magnetic moment on the Mn sublattices is about \(4\,\mu_{\rm B}\). Magnetic measurements [20] revealed a Neel temperature of \(T_{\rm N}=1070\) K, which is extremely high compared to typical antiferromagnets (525 K for NiO and 293 K for CoO). Experimentally obtained lattice parameters, magnetic moments, and Neel temperatures for NiMn are collected in Table 3. We also note that this AFM material is the subject of recent investigations, including the characterization of exchange-bias systems based on NiMn films [21; 22] and NiMn/CoFe multilayers used in microwave applications [23]. First principles calculations [24; 25; 26; 27] (see Table 4) yielded values for the magnetic moment of Mn atoms which are considerably smaller compared to the corresponding experimental results; hence, we have used the latter values for our atomistic simulations (see Table 3). We are not aware of any experimental data for anisotropy constants of the AFM phase (only the anisotropy type is known). Taking into account that these constants are essential for atomistic simulations, we have searched for _ab initio_ calculations of corresponding values. Unfortunately, this approach faces significant challenges for _ab initio_ theories based on the local spin-density formalism. In accordance with experiment, _ab initio_ studies have found that the preferred orientation \begin{table} \begin{tabular}{c|c|c|c} \(a\,(\mathrm{\AA})\) & \(\mu_{\rm tot}\,(\mu_{\rm B}\)/f.u.) & \(T_{\rm C}\,(\mathrm{K})\) & Ref. \\ \hline 6.071 & 4.1 & 290 & [8] \\ 6.072 & 4.1 & 300 & [9] \\ 6.07 & 4.2 & 310 & [10; 13] \\ \end{tabular} \end{table} Table 1: Lattice parameter \(a\), total magnetic moment \(\mu_{\rm tot}\), and Curie temperature \(T_{\rm C}\) of of Ni\({}_{2}\)MnIn determined experimentally with corresponding references. \begin{table} \begin{tabular}{c|c|c|c} \(a\,(\mathrm{\AA})\) & \(\mu_{\rm tot}\,(\mu_{\rm B}\)/f.u.) & \(T_{\rm C}\,(\mathrm{K})\) & Ref. \\ \hline 6.071 & 4.1 & 290 & [8] \\ 6.072 & 4.1 & 300 & [9] \\ 6.07 & 4.2 & 310 & [10; 13] \\ \end{tabular} \end{table} Table 1: Lattice parameter \(a\), total magnetic moment \(\mu_{\rm tot}\), and Curie temperature \(T_{\rm C}\) of of Ni\({}_{2}\)MnIn determined experimentally with corresponding references. \begin{table} \begin{tabular}{c|c|c|c} \(a\,(\mathrm{\AA})\) & \(c\,(\mathrm{\AA})\) & \(\mu_{\rm tot}\,(\mu_{\rm B}\)/f.u.) & \(T_{\rm N}\,(\mathrm{K})\) & Ref. 
\\ \hline 3.174 & 3.524 & 4.0 & 1140 & [19] \\ 3.74 & 3.52 & \(3.8\pm 0.3\) & \(1072\pm 40\) & [20] \\ \end{tabular} \end{table} Table 3: Experimental parameters of NiMn: lattice parameters \(a\) and \(c\), atomic magnetic moment \(\mu\) of Mn sublattices, and Néel temperature \(T_{\rm N}\). of magnetic moments is perpendicular to the tetragonal axis of the elementary cell, but the value of even the first-order anisotropy coefficient strongly differs in dependence on the specific _ab initio_ method: \(-1.7\times 10^{6}\,\mathrm{erg/cm^{3}}\) in [26] versus \(-9.65\times 10^{6}\,\mathrm{erg/cm^{3}}\) in [24]. Determination of the next-order (in-plane) magnetic anisotropy energy [26] predicted an orientation of magnetic moments along the edges of the tetragonal cell, resulting in the in-plane anisotropy value to be only \(8\,\%\) of its out-of-plane counterpart. It was pointed out by the authors that the latter value is at the limit of accuracy of such a calculation. We have decided to use the values obtained in [26], as a more advanced method utilizing generalized gradient corrections is used in this study. Experimental and theoretical investigations of anisotropy in the FM phase are absent. However, a large magnetization drop at zero field seen in the hysteresis of the FM phase [2] suggests a very low anisotropy coefficient value for this phase. First principles calculations were also used to study the exchange stiffness in monocrystalline NiMn [27] and the corresponding intergrain exchange coupling for thin films [28]. The latter study demonstrated that this coupling remains significant even in the presence of relatively large spatial shifts of neighboring atomic planes. ## III Atomic lattice structure and mapping of magnetic parameters Simulations of a real AFM/FM interface are not feasible, because there are too many unknown structural parameters related to the corresponding interfacial disorder. For this reason we have mapped the atomistic parameters described in the previous section onto a simple cubic lattice. Examples of generated structures are presented in Fig. 1, where the colored spheres indicate the positions of atoms belonging to the different phases or sublattices. Red and yellow spheres represent atoms belonging to the different sublattices of the AFM, blue spheres--atoms of the soft FM phase. The aim of this figure is to illustrate that even for a simple cubic lattice there exist several possibilities to arrange the atoms at the AFM/FM interface, which will change the corresponding exchange interaction. Note that in our model we do not explicitly introduce a pinned intermediate layer between the FM precipitates and the AFM matrix as done in [1] to explain the vertical shift of the hysteresis loop. The reasons are twofold: first, the parameters of such a layer would be completely unknown, increasing the number of adjustable parameters of our model; second, such a layer is not necessary to explain neither the high coercivity of the system nor the vertical shift of the magnetization loop, as it will be shown in the next sections. Therefore, the atomic magnetic moments of the FM phase are directly coupled with the moments of the AFM phase via exchange-coupling coefficients between the atoms of different phases. The following atomistic results are obtained using a system of atoms arranged on a cubic lattice with dimensions of \(N_{x}\times N_{y}\times N_{z}=333\times 8\times 8\). 
Its length \(L=100\,\mathrm{nm}\) is chosen in such a way that it can easily incorporate a domain wall (e.g., Bloch and Neel types) appearing in both AFM and FM phases. Periodic boundary conditions in all directions are applied. At the current stage, the magnetodipolar interaction is not included, because we expect that the major contribution to the system energy comes from the exchange interaction (both within the FM and AFM phases, and from the interphase exchange coupling) and the strong intrinsic anisotropy of the AFM matrix. If necessary, the magnetodipolar interaction can added and evaluated using the lattice-Ewald method. Atomistic magnetic parameters (magnetic moment Figure 1: Part of the simulated atomistic structures containing the AFM/FM interfaces: red and yellow spheres represent atoms belonging to different AFM sublattices; blue—atoms of the soft ferromagnet (FM inclusions). Colored planes show the planes occupied by atoms of the same type. (a) Structure with an averaged zero interphase exchange between FM and AFM phases due to the equal fractions of atoms belonging to different sublattices of the AFM phase on the interface boundary. (b) Structure with the maximal interphase exchange coupling on both sides of a FM inclusion. The quasi 1D structure used in actual simulations consists of \(\sim\)20000 atoms arranged on a simple cubic lattice. Figure 2: Tetragonal anisotropy surface (left) and its fourfold symmetry in the \(x\)-\(y\)-plane (right) obtained for parameters of a NiMn AFM. and anisotropy coefficient \(k\)) found in the literature for both phases (see Sec. II) were mapped to the corresponding mesoscopic values using the data presented in Sec. II and the standard relations \[\mu_{\rm s}=\frac{M_{\rm s}V_{\rm a}}{n_{\rm at}}=\frac{M_{\rm s}a^{3}}{n_{\rm at }},\;k=\frac{KV_{\rm a}}{n_{\rm at}}=\frac{Ka^{3}}{n_{\rm at}}. \tag{1}\] Here, \(n_{\rm at}\) is the number of atoms per unit cell and \(V_{\rm a}\) denotes the unit cell volume. For our case of a simple cubic lattice, we have \(n_{\rm at}=1\) and \(V_{\rm a}=a^{3}\). An elementary cell size of \(a=0.3\,\)nm has been chosen, close to the corresponding parameter of materials under consideration. The anisotropy energy density of the tetragonal magnetocrystalline anisotropy of the AFM phase has the following form (see Fig. 2): \[\epsilon_{\rm a}=K_{1}\sin^{2}\theta+K_{2}\sin^{4}\theta+K_{2}^{{}^{\prime}} \sin^{4}\theta\cos 4\phi, \tag{2}\] where \(\theta\) and \(\phi\) are the polar and azimuthal angles. For an easy-plane anisotropy type, \(K_{1}<0\) and \(K_{2}=0\). The constant \(K_{2}^{{}^{\prime}}\) defines the basal-plane anisotropy of the fourth order. As it can be seen from comparison of the constants \(K_{1}\) and \(K_{2}^{{}^{\prime}}\) (see Table 5) and from Fig. 2, for our AFM material the anisotropy energy is dominated by the easy-plane anisotropy and slightly disturbed by the fourfold symmetry in the \(x\)-\(y\)-plane. For the FM phase, the standard cubic anisotropy type was chosen. As the exact value of the anisotropy constant \(K_{1}\) of the FM phase is unknown, we have chosen a small value for this parameter, typical for a soft FM material. Table 5 contains the values of magnetization and anisotropy constants used as input in our mesoscopic simulations and critical temperatures used to calculate the interatomic exchange coefficients for atomistic modeling. 
To determine the interatomic exchange constants \(J_{ij}\), we have used the mean-field expression \[J_{ij}=\frac{3k_{\rm B}T_{\rm C}}{z}, \tag{3}\] which establishes the connection between these constants and the Curie temperature \(T_{\rm C}\) of the ferromagnetic material; here, \(k_{\rm B}\) is the Boltzmann constant and \(z\) is the number of nearest neighbors in the atomic lattice (we omit the correction factor in this expression whose value is usually close to unity). For our choice of the elementary cell geometry (simple cubic lattice), \(z=6\) for both FM and AFM phases. The exchange constant for the AFM material obeys the same expression, but with the Neel temperature \(T_{\rm N}\) instead of \(T_{\rm C}\) in Eq. (3) and the minus sign before the whole expression used due to the AFM exchange between sublattices. Finally, the interatomic exchange coupling \(J_{\rm AFM^{*}FM}\) between FM and AFM phases is determined in our model by the equation \[J_{\rm AFM^{*}FM}=\kappa\,\frac{J_{\rm FM}+|J_{\rm AFM}|}{2}, \tag{4}\] where \(J_{\rm FM}\) and \(J_{\rm AFM}\) are the exchange constants of the FM and AFM phases and the dimensionless coefficient \(0\leq\kappa\leq 1\) accounts for the possible weakening of the interphase exchange due to interphase boundary imperfections. Note that for the quasi 1D atomistic simulations shown below we have used the geometry shown in Fig. 1, where only one of two AFM sublattices is exchange coupled to the FM phase. As a test for our atomistic simulations, we have compared numerically computed magnetization curves for a homogeneous uniaxial AFM with the corresponding critical fields derived analytically (see [29] for details). For the parameters of the uniaxial AFM listed in Table 5, this analytical theory predicts an AFM disruption field of \(H_{\rm ex}=660\,\)kOe and a spin-flop field of \(H_{\rm sf}=9.2\times 10^{4}\,\)kOe. Figure 3 shows that both values are perfectly reproduced in our simulations. We note that these values are important for the further modeling: they show that if the external field does not exceed \(660\,\)kOe (\(66\,\)T), the nearest-neighbor magnetic moments in the cubic AFM remain in the perfect antiparallel configuration. All the following results have been obtained in external fields satisfying this condition. In order to validate our method for FM materials, a standard calculation of the Bloch-wall profile was carried out using the well known material parameters of \begin{table} \begin{tabular}{c|c|c} & Ni\({}_{2}\)MnIn (FM) & NiMn (AFM) \\ \hline \(M_{\rm s}\) (G) & 720 & \(\pm\) 716 \\ anisotropy symmetry & cubic & tetragonal \\ \(K\) (erg/cm\({}^{3}\)) & \(K_{1}=1.0\times 10^{4}\) & \(K_{1}=-1.7\times 10^{6}\) \\ & & \(K_{2}^{{}^{\prime}}=0.136\times 10^{6}\) \\ \(T_{\rm C(N)}\) (K) & 310 & 1070 \\ \end{tabular} \end{table} Table 5: Mesoscopic magnetic parameters used in the simulations. Figure 3: Magnetization curve of an uniaxial AFM with parameters taken from Table 5. Black solid line—result of the numerical modeling using our atomistic approach. Blue arrows indicate critical field values obtained analytically [29]: spin-flop field \(H_{\rm sf}\) and AFM disruption field \(H_{\rm ex}\). Co. First, following the scheme explained above, the saturation magnetization \(M_{\rm s}=1440\,\)G, the first-order anisotropy coefficient \(K_{1}=4.1\times 10^{6}\,\)erg/cm\({}^{3}\), and the exchange coefficient calculated using \(T_{\rm C}=1360\,\)K were mapped onto a simple cubic lattice. 
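As a rough numerical illustration of the parameter mapping in Eqs. (1), (3), and (4), the short sketch below evaluates the atomistic moment, anisotropy, and exchange constants for the Co test case quoted above. It is only a sanity-check calculation following the formulas in the text, not part of the simulation code; the \(a=0.3\) nm simple cubic cell is carried over from the Heusler setup as an assumption, and the value of \(\kappa\) is purely illustrative.

```python
# Minimal sketch of the atomistic parameter mapping, Eqs. (1), (3), (4).
# CGS units (erg, G, cm); input values for the Co test case quoted in the text.

K_B = 1.380649e-16      # Boltzmann constant [erg/K]
MU_B = 9.274e-21        # Bohr magneton [erg/G]

a = 0.3e-7              # lattice constant of the simple cubic model [cm] (assumed)
n_at = 1                # atoms per unit cell (simple cubic)
z = 6                   # nearest neighbors (simple cubic)

M_s = 1440.0            # saturation magnetization of Co [G]
K1 = 4.1e6              # first-order anisotropy constant [erg/cm^3]
T_C = 1360.0            # Curie temperature of Co [K]

V_a = a**3
mu_s = M_s * V_a / n_at            # atomic magnetic moment, Eq. (1)
k = K1 * V_a / n_at                # atomic anisotropy constant, Eq. (1)
J_fm = 3.0 * K_B * T_C / z         # FM exchange constant, Eq. (3)

# Interphase coupling for a hypothetical weakening kappa, Eq. (4);
# the AFM constant follows from T_N = 1070 K (NiMn) with a minus sign.
kappa = 0.2
J_afm = -3.0 * K_B * 1070.0 / z
J_interface = kappa * (J_fm + abs(J_afm)) / 2.0

print(f"mu_s = {mu_s:.3e} emu = {mu_s / MU_B:.2f} mu_B per atom")
print(f"k    = {k:.3e} erg per atom")
print(f"J_FM = {J_fm:.3e} erg, J_interface = {J_interface:.3e} erg")
```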
In the starting magnetization configuration, atomic magnetic moments were arranged according to the magnetization profile shown in Fig. 4 with the thin blue line, while the moments at both ends were fixed in opposite directions. The energy minimization procedure in the absence of an external field produces a magnetization profile along the \(x\) direction of the sample, which can be compared to the known analytical solution for a _uniaxial_ anisotropy. If the anisotropy axis is in the \(z\) direction, the corresponding expression has the form \(M_{z}/M_{\rm s}=-\tanh\left[\sqrt{K_{1}/A}(x-L/2)\right]\), where \(A\) is the mesoscopic exchange-stiffness constant. Plotting this expression with the standard exchange stiffness for Co, \(A=3.1\times 10^{-6}\,\)erg/cm, we obtain a remarkable agreement between the analytical result and the atomistic simulations, as shown in Fig. 4(a). For illustrative purposes and for further comparison with the corresponding wall profile in NiMn, the same parameters were used to obtain the Bloch wall in a FM with a _cubic_ anisotropy [Fig. 4(b)]. The same procedure as described above has been applied to the AFM NiMn with a tetragonal magnetocrystalline anisotropy and the \(y\)-\(z\)-plane chosen as the easy anisotropy plane. Figure 5 displays the magnetization profiles of the two AFM sublattices, which is qualitatively the same as for the cubic FM shown in Fig. 4(b). For a relatively small in-plane anisotropy constant, we obtain here a tanh-like Bloch-wall magnetization profile that is too close to the initial approximation (sin-like in this instance). For this reason, we had to perform additional calculations with an enlarged magnetic anisotropy constant with the result shown in Fig. 5(b). This enlargement ensures the accuracy in the determination of the exchange-stiffness constant. ## IV Quasi one-dimensional model: atomistic simulation results At this point, all the necessary parameters for the atomistic micromagnetic simulations in frames of our model are defined. In the following, we present selected results obtained using the quasi 1D model described above and discuss hysteresis loops and magnetization distributions at particular external fields. In our study we have varied three parameters of the model, whose variations may lead to strong changes in the magnetization reversal process: the size \(d_{\rm FM}\) of Figure 5: (a) Magnetization profiles of Bloch walls obtained in atomistic simulations using two AFM sublattices (red and yellow lines) in NiMn with magnetic parameters taken from Table 5. (b) The same as in (a), but with the magnetic anisotropy coefficients increased by one order of magnitude. Thin lines are initial magnetization profiles. Figure 4: Magnetization profiles of Bloch walls obtained in atomistic simulations of the FM phase for the cases of (a) uniaxial and (b) cubic anisotropy. Solid blue lines represent the \(m_{z}\) component of the unit magnetization vector field \(\mathbf{m}\), solid green lines encode the \(m_{y}\) component. Red line is the analytical solution for the Bloch wall in an uniaxial material (see main text). Thin blue and green lines are initial magnetization profiles (before applying the energy minimization procedure). Materials parameters for Co were assumed. the FM inclusion, the exchange weakening \(\kappa\) on the AFM/FM boundary, and the orientation of the AFM anisotropy axes with respect to the external field direction, along which the \(z\)-axis of our coordinate system was directed. 
Therefore, the parameter space of our simulations is as follows (with \(c_{\rm FM}\) the FM volume fraction): * size of the FM inclusion: \(d_{\rm FM}=3\,\ldots 50\,\)nm (\(c_{\rm FM}=3\,\ldots 50\,\)%) * exchange weakening: \(\kappa=0.0\,\ldots\,1.0\) * anisotropy-planes orientation of the AFM phase: easy-plane anisotropies in the \(x\)-\(z\), \(y\)-\(z\), or \(x\)-\(y\) coordinate planes. Figures 6 and 7 display the results of the atomistic modeling. Hysteresis loops for different combinations of the parameters listed above are shown. The change of the hysteresis loops under the variation of the exchange weakening \(\kappa\) between the AFM and FM phases is presented in Fig. 6 for the example of a FM inclusion with a size of \(d_{\rm FM}=12.6\,\)nm. Without the coupling between the phases (\(\kappa=0\)), the FM inclusion reverses its magnetization at a very small negative external field due to the weak magnetocrystalline anisotropy of the FM phase. When the coupling strength increases, the coercive field increases up to \(5\,\)kOe for this size, while for much smaller inclusions the coercivity achieves \(20\,\)kOe (data not shown). If the coupling strength is constant (Fig. 7), the coercivity decreases for larger inclusion sizes. Independent on the inclusion size, the magnetization reversal follows the same scenario: an abrupt magnetization rotation of a significant fraction of magnetic moments of the FM phase, followed by the gradual remagnetization of remaining moments at higher fields. The orientation of the anisotropy axes in the AFM phase with respect to the external field (see Fig. 1) defines the type of domain wall that is responsible for the magnetization reversal in the FM phase. For instance, a \(x\)-\(z\) easy plane anisotropy in the AFM results in Neel-type walls in the FM inclusion, while a \(y\)-\(z\) easy plane leads to Bloch-type walls. The third case of a \(x\)-\(y\) easy plane demonstrates significant deviations from the perfectly aligned state already at positive fields, because in this geometry the magnetic moments are initially magnetized in the direction that is perpendicular to the easy plane (the magnetization reversal curve in this case is symmetric with a zero coercivity). The considerations presented above are valid for coupled AFM/FM systems (i.e., with \(\kappa>0\)). In the following, we discuss in Figs. 8 and 9 the details of the remagnetization processes for small and large FM inclusions. Figure 8 shows the magnetization distribution at the coercive field of the system with \(d_{\rm FM}=12.6\,\)nm. Figure 8: Magnetization profiles for the quasi 1D model at the coercive field for a structure with \(d_{\rm FM}=12.6\,\)nm. Red and yellow lines represent the \(m_{z}\) profiles of different AFM sublattices; blue line—\(m_{z}\) of the soft FM inclusion. In the upper image, the exchange weakening is \(\kappa=0.05\), in the lower image \(\kappa=1.0\) (perfect coupling). Figure 6: Remagnetization curves of the FM phase for various exchange couplings \(\kappa\) between the FM inclusion (thickness: \(d_{\rm FM}=12.6\,\)nm) and the AFM matrix with an \(y\)-\(z\) easy plane anisotropy. Figure 7: Remagnetization curves of the FM phase for various sizes \(d_{\rm FM}\) of a FM inclusion in an AFM matrix with an \(y\)-\(z\) easy plane anisotropy. Exchange weakening between the phases is \(\kappa=0.2\). 
The FM inclusion is almost in a single-domain state and already a strongly reduced exchange coupling (\(\kappa=0.05\)) is sufficient to deflect the magnetic moments on both sides of the AFM/FM boundary. However, the coupling strength is not large enough as to eliminate the discontinuity in the magnetization distribution between the two phases. On the contrary, the magnetization profile in a system with the perfect coupling (\(\kappa=1.0\)) is continuous (lower panel in Fig. 8). An inclusion with \(d_{\rm FM}=36.6\,\)nm, being almost three times larger than \(d_{\rm FM}=12.6\,\)nm, can easily incorporate a significantly inhomogeneous magnetization configuration in spite of the weak coupling (Fig. 9). The strongly coupled system (\(\kappa=1.0\)) demonstrates how the magnetization state in the FM phase might significantly influence the distribution of magnetic moments in the AFM state. From the experimental point of view, such a magnetization profile is expected to give rise to a strong small-angle neutron scattering (SANS) signal [30]. Therefore, using SANS, it should be possible to obtain some (indirect) information on the magnetization state of magnetic moments belonging to the AFM phase in the neighborhood of the FM inclusion. ## V Towards the mesoscopic approach The typical sizes of simulated structures in atomistic simulations (especially in 3D) are limited to several hundreds of interatomic distances in each spatial direction. Therefore, for the modeling of the remagnetization process in 3D systems with sizes of hundreds of nanometers, the transition to the mesoscopic length scale is mandatory. In this section, we describe the steps necessary to introduce and validate the corresponding mesoscopic model. First, in order to implement a mesoscopic model for an AFM, it is necessary to introduce an effective vector that describes the direction of the magnetic moments in the AFM sublattices with a spatial resolution of a few nanometers (analogous to the standard procedure in micromagnetics). This vector coincides with the magnetization direction of one of the sublattices and is antiparallel to the moments of the second sublattice (under the above described limitation of the external field magnitude). In the following discussion, we will refer to this vector as a _descriptor_ of the spatial distribution of magnetization in Figure 11: Results of the fitting of domain-wall profiles for the determination of the mesoscopic exchange coupling. (a) AFM material: red and yellow circles—atomistic simulation results for Bloch walls in two AFM sublattices; green crosses—mesoscopic fit using the exchange stiffness \(B\) shown in the inset. (b) FM phase: blue circles—atomistic result; green crosses—mesoscopic fit using different spatial regions, indicating the pinning of the domain wall at the center. Figure 10: Test fit using atomistic and mesoscopic parameters of Co. Red line—analytical solution for uniaxial anisotropy; blue circles—atomistic simulation result; green crosses—result of the mesoscopic simulation using the exchange-stiffness constant \(A_{\rm ex}\) shown in the inset, which was obtained by fitting the mesoscopic profile to the atomistic one. the AFM phase. Second, in analogy to conventional micromagnetics, we have to define the size of finite elements having a constant "magnetization" (usually a few nanometers, \(3.5\,\mathrm{nm}\) in our case) and all the interaction coefficients both within one phase and between the different phases. 
The mesoscopic saturation magnetization and anisotropy coefficient for a FM and for both sublattices of an AFM can be derived from their atomistic counterparts using Eqs. (1). In order to obtain the mesoscopic exchange-stiffness constant \(A_{\mathrm{ex}}\) and its analogue for the AFM phase \(B_{\mathrm{ex}}\), it is necessary to employ a special fitting procedure due to the (weakly) nonlocal nature of the exchange interaction. This procedure consists of finding such an exchange-stiffness constant for which the mesoscopically simulated Bloch-wall profile coincides with its atomistic counterpart. As a test calculation (see Fig. 10), we have used the magnetic materials parameters of Co, obtaining again for the exchange-stiffness constant the textbook value of \(3.1\times 10^{-6}\,\mathrm{erg/cm}\). Figure 11 presents the results of the fittings for AFM and FM phases, giving the values \(B_{\mathrm{ex}}=2.9\times 10^{-6}\,\mathrm{erg/cm}\) for the AFM phase and \(A_{\mathrm{ex}}=0.6\times 10^{-6}\,\mathrm{erg/cm}\) for FM phase. In the latter case an enlarged anisotropy constant was used in order to increase the accuracy of the fit (see the discussion related to Fig. 5). We have also pinned the domain wall at the center of the simulated region (\(x=50\,\mathrm{nm}\)) to keep the magnetization profile symmetric. Next, we have compared the magnetization profiles obtained by the atomistic and mesoscopic modeling for the system containing a FM inclusion within the AFM phase. This comparison is of special importance, because it allows us to draw the conclusion about the reliability of the mesoscopic approach for the following simulations. Corresponding plots are shown in Fig. 12. We note that atomistic and mesoscopic calculations were performed independently, only using the magnetic materials parameters corresponding to the respective spatial scale. As a result, we achieved an excellent match between atomistic and mesoscopic approaches that allows us to study the magnetization distribution in AFM/FM composites at a length scale of the order of hundreds of nanometers, while resolving its details at the nanometer scale. Special attention deserves the relation between the exchange-weakening coefficients \(\kappa\) present in both the atomistic and mesoscopic methods. This relation is highly nontrivial, because these coefficients lead in different approaches to the weakening of the exchange coupling between two very different "building blocks" of corresponding models. We have revealed this relation by comparing remagnetization curves obtained by both methods. The result is presented in Fig. 13, where the same values for the exchange-weakening coefficient \(0\leq\kappa\leq 1\) for the two approaches have been used. Figure 12: Comparison of atomistic and mesoscopic magnetization profiles at different external field values (see insets) (\(\kappa=1\); \(d_{\mathrm{FM}}=30\,\mathrm{nm}\)). Atomistic approach: red and yellow lines correspond to the AFM sublattices; blue lines—FM phase. Mesoscopic approach: magenta line—AFM; green line—FM. Note that there is no analogue of the yellow line in the mesoscopic approach. Figure 13: Comparison of the remagnetization curves of the FM phase for (a) atomistic and (b) mesoscopic simulations and for different values of the exchange weakening \(\kappa\) on the AFM/FM interface (see legend in the left panel). Note that while the curves for large couplings (\(\kappa\geq 0.5\)) are quite similar, the behavior for small (but nonzero) \(\kappa\) is very different. 
## VI Conclusion In the first of two articles, we have presented a combined atomistic/mesoscopic approach for micromagnetic simulations of systems containing coupled ferromagnetic (FM) and antiferromagnetic (AFM) phases. The reliability of the atomistic simulations was tested using single-phase FM and AFM materials, where we have simulated Bloch-wall magnetization profiles and compared them with analytical results for the same system. Further, the developed methodology was validated by comparing the results of mesoscopic simulations with the corresponding ones of atomistic modeling for quasi one-dimensional systems, being either single-phase or consisting of a FM inclusion within an AFM phase. In the second part [7], we will extend our simulation methodology to polycrystalline systems and show that the experimentally found large coercivity of \(\sim\)5 T for a Heusler-type alloy [2] can be naturally explained by the pinning of the FM phase via the extremely hard AFM.
2309.12049
Constraint on the event rate of general relativistic instability supernovae from the early JWST deep field data
General relativistic instability supernovae at ~10 < z < ~15 are predicted to be observed as red faint point sources, and they can be detected only in the reddest filters in JWST/NIRCam (F444W and F356W). They should be observed as persistent point sources with little flux variations for a couple of decades because of time dilation. We search for static point sources detected only in the F444W filter or only in the F444W and F356W filters in the early JWST deep field data. No real point source of such kind is identified. Therefore, the general relativistic instability supernova rate at ~10 < z < ~15 is constrained to be less than ~ 8e-7 Mpc-3 yr-1 for the first time.
Takashi J. Moriya, Yuichi Harikane, Akio K. Inoue
2023-09-21T13:17:07Z
http://arxiv.org/abs/2309.12049v1
Constraint on the event rate of general relativistic instability supernovae from the early JWST deep field data ###### Abstract General relativistic instability supernovae at \(10\lesssim z\lesssim 15\) are predicted to be observed as red faint point sources, and they can be detected only in the reddest filters in JWST/NIRCam (_F444W_ and _F356W_). They should be observed as persistent point sources with little flux variations for a couple of decades because of time dilation. We search for static point sources detected only in the _F444W_ filter or only in the _F444W_ and _F356W_ filters in the early JWST deep field data. No real point source of such kind is identified. Therefore, the general relativistic instability supernova rate at \(10\lesssim z\lesssim 15\) is constrained to be less than \(\sim 8\times 10^{-7}\) Mpc\({}^{-3}\) yr\({}^{-1}\) for the first time. keywords: stars: Population III - supernovae: general - dark ages, reionization, first stars - early Universe ## 1 Introduction General relativistic instability supernovae (GRSNe) are theoretically predicted explosions of supermassive stars (SMSs) having \(10^{4}-10^{5}\) M\({}_{\odot}\). Most SMSs are likely to collapse directly to black holes through general relativistic instability (e.g., Shibata & Shapiro, 2002), but a fraction of SMSs are suggested to explode before collapsing to black holes (e.g, Fuller et al., 1986; Montero et al., 2012; Chen et al., 2014; Nagele et al., 2020, 2022, 2023). They can have extremely high explosion energies of around \(10^{55}\) erg (e.g., Chen et al., 2014) and thus they can become extremely luminous supernovae (SNe, e.g., Moriya et al., 2021). Especially, Moriya et al. (2021) show that GRSNe at high redshifts can become extremely red transients. They can be detected only in the _F444W_ filter if they appear at \(13\lesssim z\lesssim 15\) and only in the _F444W_ and _F356W_ filters if they appear at \(10\lesssim z\lesssim 13\) in JWST/NIRCam. GRSNe are predicted to have a plateau phase without a significant luminosity change lasting for about 2 years in the rest frame. Because of time dilation, the plateau phase can last for \(20-30\) years if they appear at \(10\lesssim z\lesssim 15\). Thus, high-redshift GRSNe are observed as persistent static red point sources for \(20-30\) years after they reach the plateau phase. Following the successful launch in December 2021, JWST has released early deep field imaging data from NIRCam. These data have already been used to identify record-breaking high-redshift galaxy candidates (e.g., Harikane et al., 2023; Finkelstein et al., 2022b; Donnan et al., 2023; Naidu et al., 2022; Castellano et al., 2022; Bouwens et al., 2022; Atek et al., 2023; Yan et al., 2023b). Some high-redshift galaxy candidates have also been suggested to be high-redshift SNe (Yan et al., 2023a), but further observations are required to confirm. In this work, we search for GRSN candidates in the early JWST deep field data, and provide a constraint on the GRSN rate at \(10\lesssim z\lesssim 15\) for the first time. We adopt a \(\Lambda\)CDM cosmology with \(H_{0}=70\) km s\({}^{-1}\) Mpc\({}^{-1}\), \(\Omega_{M}=0.3\), and \(\Omega_{\Lambda}=0.7\). All the magnitudes in this paper are in the AB system. ## 2 Early JWST Deep Field Data We use the NIRCam images made in Harikane et al. (2023), and construct multi-band photometric catalogs in a similar manner to Harikane et al. (2023). Here we briefly describe the data reduction and the catalog construction. We refer to Harikane et al. 
(2023) for more details. We use the four JWST NIRCam data sets obtained in the early release observations (ERO) and early release science (ERS) programs, ERO SMACS J0723 and Stephan's Quintet (Pontoppidan et al., 2022), ERS Cosmic Evolution Early Release Science (Finkelstein et al., 2022a), and ERS GLASS (Treu et al., 2022). We reduced the raw data using the JWST pipeline version 1.6.3 development version (1.6.3dev34+g6889f49, almost the same as 1.7.0), with the Calibration Reference Data System (CRDS) context file of jwst_0995.pmap released in October 2022, whose calibration values were derived using calibration observations of three different standard stars placed in all of the 10 NIRCam detectors. In addition to the standard reduction by the pipeline, we conducted the "wisp" subtraction by using a script provided by the NIRCam team1, the removal of the striping pattern by using a script provided in the CEERS team (Bagley et al., 2022), the sky background subtraction using SExtractor (version 2.5.0; Bertin and Arnouts, 1996). In the reduced images, the limiting magnitudes were measured with 0\(\aas@@fstack{\prime\prime}\)1, 0\(\aas@@fstack{\prime\prime}\)2, and 0\(\aas@@fstack{\prime\prime}\)3-diameter circular apertures by randomly placing apertures in sky areas using Python packages Astropt/photutils. Sky areas were defined as pixels without objects detected by SExtractor. The effective area and limiting magnitudes are summarized in Table 1. We construct multi-band source catalogs from the JWST data using the _F444W_ band as the detection band. Source detection and photometry is performed with SExtractor(Bertin and Arnouts, 1996). We run SExtractor in the dual-image mode for each image with its detection image (_F444W_). The total magnitudes are estimated with MAG_AUTO in SExtractor. Finally, we correct for the galactic extinction using Schlegel et al. (1998) and Schlafly and Finkbeiner (2011) and make final photometric catalogs. ## 3 Searching for GRSN Events Figure 1 presents expected magnitudes of the GRSN model at its light-curve plateau phase presented by Moriya et al. (2021). Table 1 presents the redshift ranges in which we expect to observe GRSNe in the _F444W_ and _F356W_ filters with the 5\(\sigma\) limiting magnitudes in each JWST deep field based on the GRSN light-curve model. The redshift ranges depend on the field because each field has different 5\(\sigma\) limiting magnitudes. First, we searched for static point sources that are not detected in the _F277W_ and _F356W_ filters with the 5\(\sigma\) significance, but are detected in the _F444W_ filter with more than the 5\(\sigma\) significance. Source detections are evaluated based on fluxes measured in the 0\(\aas@@fstack{\prime\prime}\)1 and/or 0\(\aas@@fstack{\prime\prime}\)2-diameter circular apertures in the original (not PSF-matched) images. We searched for static point sources that are fainter than 26.0 mag, because the GRSN model predicts that it becomes fainter than 26.0 mag at \(z\gtrsim 10\) (Fig. 1). The definition of a point source in this paper is sources with CLASS_STAR \(>0.9\). We found 60 objects matching these criteria. We checked them by eyes and we identified them as either cosmic rays or artifacts because of their shapes and extensions. We also searched for faint (\(>26.0\) mag) static point sources that are detected with more than 5\(\sigma\) significance in both _F356W_ and _F444W_ filters, but not detected in the _F277W_ filter with the 5\(\sigma\) significance. 
We only found three candidate objects. The significant reduction in the number of candidates compared with the _F444W_-only search indicates that most candidates observed only in the _F444W_ filter are likely artificial, because the chance of detecting random artificial objects at the same location in both the _F444W_ and _F356W_ images is presumed to be low. Through visual inspection, we identified them as artifacts because of their shapes and extensions. The three candidates are found to be either near the edge of a detector or blended with another object, which may have caused the artifacts. ## 4 Constraint on GRSN Event Rate As described in the previous section, we did not find any real static point sources fainter than 26.0 mag that are detected only in the _F444W_ filter. We also did not find any real static point sources fainter than 26.0 mag that are detected only in the _F444W_ and _F356W_ filters. These facts show that there has been no GRSN event in the survey fields in about 2 years in the rest frame (\(\sim 20-30\) years in the observer frame) in the redshift ranges shown in Table 1. The exact redshift ranges that each field can constrain are slightly different because of the different limiting magnitudes, but they are roughly at \(10\lesssim z\lesssim 15\). The total comoving volume of the survey fields is \(6.2\times 10^{5}\) Mpc\({}^{3}\). The upper limit on the GRSN event rate is estimated by assuming that the number of GRSNe in this volume in the last 2 years is less than one. In this way, the GRSN event rate at \(10\lesssim z\lesssim 15\) is estimated to be less than about \(8\times 10^{-7}\) Mpc\({}^{-3}\) yr\({}^{-1}\) in the comoving frame. Cosmic star-formation rate (SFR) density \(\rho_{\rm SFR}\) at \(z\sim 9-16\) has been constrained by Harikane et al. (2023). If a fraction \(f_{\rm GRSN}\) of Figure 1: Expected brightness during the light-curve plateau of the GRSN predicted by Moriya et al. (2021). \begin{table} \begin{tabular}{l c c c c c c c} \hline Field & Area & _F444W_ limit\({}^{a}\) & _F356W_ limit\({}^{a}\) & _F277W_ limit\({}^{a}\) & \multicolumn{2}{c}{GRSN detection redshift range\({}^{b}\)} & Total volume\({}^{c}\) \\ & (arcmin\({}^{2}\)) & (mag) & (mag) & (mag) & _F444W_ and _F356W_ & _F444W_ only & (Mpc\({}^{3}\)) \\ \hline SMACS J0723 & 11.0 & 29.6 & 29.9 & 29.8 & \(10.4\leq z\leq 13.4\) & \(13.4\leq z\leq 16.0\) & \(7.7\times 10^{4}\) \\ GLASS & 6.8 & 29.6 & 29.9 & 29.6 & \(10.2\leq z\leq 13.4\) & \(13.4\leq z\leq 16.0\) & \(5.0\times 10^{4}\) \\ CEERS1 & 8.4 & 29.1 & 29.7 & 29.5 & \(10.1\leq z\leq 13.2\) & \(13.2\leq z\leq 15.4\) & \(5.8\times 10^{4}\) \\ CEERS2 & 8.5 & 29.4 & 29.6 & 29.5 & \(10.1\leq z\leq 13.1\) & \(13.1\leq z\leq 15.8\) & \(6.2\times 10^{4}\) \\ CEERS3 & 8.4 & 29.2 & 29.7 & 29.6 & \(10.2\leq z\leq 13.2\) & \(13.2\leq z\leq 15.5\) & \(5.7\times 10^{4}\) \\ CEERS6 & 8.4 & 29.0 & 29.7 & 29.5 & \(10.1\leq z\leq 13.2\) & \(13.2\leq z\leq 15.2\) & \(5.6\times 10^{4}\) \\ Stephan's Quintet & 37.2 & 28.6 & 28.9 & 28.8 & \(9.6\leq z\leq 12.4\) & \(12.4\leq z\leq 14.7\) & \(2.6\times 10^{5}\) \\ \hline \end{tabular} \({}^{a}\) \(5\sigma\) limiting magnitudes from Harikane et al. (2023). The limiting magnitudes are measured in a 0.2 arcsec diameter circular aperture. \({}^{b}\) Based on the light-curve model in Moriya et al. (2021). The redshift range is determined by the 5\(\sigma\) limiting magnitudes. See Fig. 1. 
\({}^{c}\) Comoving volumes within the GRSN detection redshift range including both "_F444W_ and _F356W_" and "_F444W_ only". \end{table} Table 1: Early JWST deep field data and their redshift ranges for GRSN detection. SMSs having \(10^{4}-10^{5}\) M\({}_{\odot}\) explodes as GRSNe, the GRSN event rate \(R_{\rm GRSN}\) can be expressed as \[R_{\rm GRSN}=\rho_{\rm SFR}\,f_{\rm GRSN}\,\Psi(\Gamma), \tag{1}\] where \[\Psi(\Gamma)\equiv\frac{\int_{10^{4}{\rm M_{\odot}}}^{10^{5}{\rm M_{\odot}}}\phi\left(M,\Gamma\right)dM}{\int_{0.1{\rm M_{\odot}}}^{10^{5}{\rm M_{\odot}}}M\phi\left(M,\Gamma\right)dM}, \tag{2}\] and \(\phi(M,\Gamma)\propto M^{-\Gamma}\) is the initial mass function (IMF). We assume that stars between 0.1 M\({}_{\odot}\) and \(10^{5}\) M\({}_{\odot}\) are formed in the redshift ranges we are interested in. Figure 2 shows that our GRSN event rate limit provides the following constraint at \(10\lesssim z\lesssim 15\), \[f_{\rm GRSN}\Psi(\Gamma)\lesssim 10^{-3}\ {\rm M_{\odot}^{-1}}. \tag{3}\] The Salpeter IMF (\(\Gamma=2.35\)) gives \(\Psi(2.35)=4\times 10^{-7}\ {\rm M_{\odot}^{-1}}\) and the flat IMF (\(\Gamma=0\)) gives \(\Psi(0)=2\times 10^{-5}\ {\rm M_{\odot}^{-1}}\). Thus, with the current constraint (Eq. 3), the fractions of SMSs exploding as GRSNe are constrained to be \(f_{\rm GRSN}\lesssim 3000\) for the Salpeter IMF and \(f_{\rm GRSN}\lesssim 50\) for the flat IMF. Because these constraints are well above \(f_{\rm GRSN}=1\), our upper limit is still satisfied even if all SMSs explode as GRSNe. Indeed, SMS formation rates at \(10\lesssim z\lesssim 15\) are predicted to be \(10^{-8}-10^{-12}\ {\rm Mpc^{-3}\ yr^{-1}}\) (e.g., Agarwal et al., 2012; Chon et al., 2016; Habouzit et al., 2016; Dunn et al., 2018; Chiaki et al., 2023), which is lower than our upper limit estimate of \(\sim 8\times 10^{-7}\ {\rm Mpc^{-3}\ yr^{-1}}\). A further, more meaningful constraint on the GRSN rate and its fraction can be obtained as the JWST deep field areas become wider. ## 5 Summary The GRSN event rate at \(10\lesssim z\lesssim 15\) has been estimated to be \(\lesssim 8\times 10^{-7}\ {\rm Mpc^{-3}\ yr^{-1}}\) based on the fact that no GRSN candidates have been discovered in the early JWST deep field data. The estimated upper limit of the GRSN event rate (\(\sim 8\times 10^{-7}\ {\rm Mpc^{-3}\ yr^{-1}}\)) is still too high to constrain the formation rate of SMSs and the fraction of SMSs that explode as GRSNe. As the area of the JWST deep fields increases, we will be able to further constrain the GRSN rate. We encourage searches for faint point sources that are detected only in _F444W_ or only in both _F444W_ and _F356W_ in future JWST deep images. They will provide us with critical information on the formation rate and fate of SMSs that can be compared to theoretical predictions of the first star formation. ## Acknowledgements TJM thanks Sunmyon Chon and Gen Chiaki for useful discussions. This work is supported by the Grants-in-Aid for Scientific Research of the Japan Society for the Promotion of Science (JP20H00174, JP21K13966, JP21H04997, JP21K13953, JP23H00131). This work is based on observations made with the NASA/ESA/CSA James Webb Space Telescope. The data were obtained from the Mikulski Archive for Space Telescopes at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-03127 for JWST. These observations are associated with programs 2732, 2736, 1324, and 1345. 
The authors acknowledge the ERO, GLASS, and CEERS teams led by Klaus M. Pontoppidan, Tommaso Treu, and Steven L. Finkelstein, respectively, for developing their observing programs with a zero-exclusive-access period. ## Data Availability The data underlying this article will be shared on reasonable request to the corresponding author.
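As a back-of-the-envelope check of the numbers quoted in Section 4, the sketch below recomputes the event-rate upper limit from the total comoving volume and the rest-frame plateau duration, and the IMF-weighted fraction \(\Psi(\Gamma)\) of Eq. (2) for the Salpeter and flat slopes. It is not part of the original analysis; Python with NumPy is assumed, and the \(f_{\rm GRSN}\) limits follow from the quoted bound \(f_{\rm GRSN}\Psi(\Gamma)\lesssim 10^{-3}\ {\rm M_{\odot}^{-1}}\) of Eq. (3).

```python
import numpy as np

# Numbers quoted in Section 4.
V_COMOVING = 6.2e5   # Mpc^3, total comoving survey volume
DT_REST = 2.0        # yr, rest-frame duration of the GRSN plateau probed

# Zero detections -> require fewer than one event in V * dt.
rate_upper = 1.0 / (V_COMOVING * DT_REST)
print(f"GRSN rate < {rate_upper:.1e} Mpc^-3 yr^-1")   # ~8e-7

def psi(gamma, m_lo=0.1, m_hi=1e5, m1=1e4, m2=1e5):
    """IMF-weighted SMS fraction of Eq. (2), with phi(M) proportional to M^-gamma."""
    def power_integral(a, b, p):
        # integral of M^p dM between a and b (p = -1 handled separately)
        if np.isclose(p, -1.0):
            return np.log(b / a)
        return (b ** (p + 1.0) - a ** (p + 1.0)) / (p + 1.0)
    number_of_sms = power_integral(m1, m2, -gamma)                 # numerator of Eq. (2)
    total_stellar_mass = power_integral(m_lo, m_hi, 1.0 - gamma)   # denominator of Eq. (2)
    return number_of_sms / total_stellar_mass                      # Msun^-1

for gamma, label in [(2.35, "Salpeter IMF"), (0.0, "flat IMF")]:
    p = psi(gamma)
    f_limit = 1e-3 / p   # from f_GRSN * Psi <~ 1e-3 Msun^-1 (Eq. 3)
    print(f"{label}: Psi = {p:.1e} Msun^-1  ->  f_GRSN <~ {f_limit:.0f}")
```

This recovers \(\Psi\approx 4\times 10^{-7}\) and \(2\times 10^{-5}\ {\rm M_{\odot}^{-1}}\) and \(f_{\rm GRSN}\) limits of order a few thousand and a few tens, in line with the estimates quoted above at the order-of-magnitude level.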
2309.03144
Testing the Braneworld Theory with Identical Particles
Various attempts to go beyond the theory of General Relativity start from the assumption that spacetime is not a 4-dimensional but rather a higher-dimensional manifold. Among others, braneworld scenarios postulate that the spacetime we effectively observe is actually a 4-dimensional brane embedded in a higher-dimensional spacetime. In general, braneworld models predict a departure from the Newton gravity law in the nonrelativistic regime. Based on this fact, we propose an experimental test that uses a pair of gravitationally interacting identical particles to determine the validity of certain braneworld models and provide numerical results that should be compared with experimental data. In particular, we consider the Randal-Sundrum braneworld model and study two cases of 5-dimensional gravity theories: the Einstein-Hilbert gravity with the negative cosmological constant and the Einstein-Gauss-Bonnet (nearly-Chern-Simons) gravity.
Ivana Stojiljković, Dušan Đorđević, Aleksandra Gočanin, Dragoljub Gočanin
2023-09-06T16:40:12Z
http://arxiv.org/abs/2309.03144v1
# Testing the Braneworld Theory with Identical Particles ###### Abstract Various attempts to go beyond the theory of General Relativity start from the assumption that spacetime is not a 4-dimensional but rather a higher-dimensional manifold. Among others, braneworld scenarios postulate that the spacetime we effectively observe is actually a 4-dimensional brane embedded in a higher-dimensional spacetime. In general, braneworld models predict a departure from the Newton gravity law in the nonrelativistic regime. Based on this fact, we propose an experimental test that uses a pair of gravitationally interacting identical particles to determine the validity of certain braneworld models and provide numerical results that should be compared with experimental data. In particular, we consider the Randal-Sundrum braneworld model and study two cases of 5-dimensional gravity theories: the Einstein-Hilbert gravity with the negative cosmological constant and the Einstein-Gauss-Bonnet (nearly-Chern-Simons) gravity. ## I Introduction Newton's law of gravity has been experimentally verified a long time ago [1] and since then has been repeatedly tested using various experimental setups, both in the lab and in cosmological observations. However, we know that this law is only an approximation, as a more fundamental theory is general relativity (GR). Although its predictions have been corroborated in many different ways to a high level of precision [2; 3], we are aware that GR does not amount to a full story about spacetime and gravity. The main symptom of the theory's incompleteness is the appearance of singularities, both in the centre of a black hole and at the beginning of the universe. In order to solve this problem, many attempts to quantize gravity have been made [4]. The most naive procedure is based on using a linear approximation of Einstein's field equations, which leads to a non-renormalizable theory, and, therefore, cannot be trusted to arbitrary high energies. Therefore, it seems that some radical change in our basic assumptions has to be made when dealing with the quantum theory of gravity. Some approaches, among which string theory is perhaps the most notable one, start from a theory defined in a number of spacetime dimensions greater than four. Effective, 4-dimensional physics is then obtained from the Kalutza-Klein compactification [5; 6], where one assumes that additional spatial dimensions are small enough and thus experimentally inaccessible at current energies. For phenomenological reasons, an alternative procedure of eliminating extra dimensions was proposed that does not assume the existence of small extra dimensions [7; 8; 9]. Those models are known under the name of braneworld models and assume that our world is a 3-brane, embedded in a higher-dimensional spacetime. Matter fields are usually confined on this brane, while gravity is a priori free to propagate in all dimensions. Naturally, those models lead to some conclusions that differ from the standard gravity theories. In this paper, we will propose a way to experimentally test the predictions of some braneworld scenarios. ## II Braneworld scenarios For simplicity, we will mostly focus on the so-called Randal-Sundrum (RS) II model (see [10] for a review). We start from a 5D Einstein gravity theory with the negative cosmological constant, where the action is defined as \[\frac{1}{16\pi G}\int_{\mathcal{M}_{5}}\mathrm{d}^{5}x\;\sqrt{-g}\left(R-2 \Lambda\right)+\frac{1}{8\pi G}\int_{\partial\mathcal{M}_{5}}\mathrm{d}^{4}x \sqrt{-h}K. 
\tag{1}\] The Gibbons-Hawking-York term has to be included in the presence of a boundary so that the variation principle is satisfied. We then insert a brane \(Q\) of constant tension \(T\), usually defined as a hyper-surface for which one coordinate in a preferred coordinate system is constant. The relevant term that we add to the action is \[T\int_{Q}\mathrm{d}^{4}x\;\sqrt{-h}. \tag{2}\] Away from the brane, the anti-de Sitter (AdS) spacetime solves equations of motion. However, appropriate junction conditions have to be imposed that patch together solutions on both sides of the brane. Also, more complicated matter fields that are localised on the brane could be added. In order to derive Newton's law of gravity from underlying relativistic theory, one has to use the action and equations of motion, and therefore for different theories, we can get different results. Generically, we write the potential on the brane in the form \[V(r)=-\frac{Gm}{r}(1+\Delta(r)). \tag{3}\] In the case of RS-II model, for distances \(r\gg\frac{1}{k}\) (\(k\) is the inverse of the bulk AdS radius \(l_{\text{AdS}}\)) we have [11; 12] \[\Delta(r)=\frac{2}{3k^{2}r^{2}}-\frac{4\ln kr}{k^{4}r^{4}}+\frac{(16-12\ln 2 )}{3k^{4}r^{4}}+\ldots \tag{4}\] while for small distances (\(r\ll\frac{1}{k}\)), \[\Delta(r)=\frac{4}{3\pi kr}-\frac{1}{3}-\frac{kr}{2\pi}\ln kr+0.089237810kr+\ldots \tag{5}\] One can also find an approximate potential interpolating between those two extremes. It is given by [13] \[\Delta(r)=\frac{4}{3\pi}\Big{(}\frac{kr\cos kr-\sin kr}{k^{2}r^{2 }}\int_{+\infty}^{kr}\frac{\cos t}{t}\text{d}t\] \[+\frac{\cos kr+kr\sin kr}{k^{2}r^{2}}\int_{+\infty}^{kr}\frac{ \sin t}{t}\text{d}t+\frac{\pi}{2k^{2}r^{2}}\Big{)}. \tag{6}\] There are other braneworld models that one could also study. For example, in [14], modified Newton potential was derived, with the following large distances behaviour \[\Delta(r)=-\frac{e^{-2\sqrt{2}kr}}{4(\sqrt{2}-1)kr}, \tag{7}\] where \(k\gg\frac{1}{r}\) is the inverse length parameter. There are also models that start from a different number of spacetime dimensions or that involve more than one brane [12]. Attempts to experimentally verify those corrections have been made in the context of classical physics [19]. It is important to note that results concerning the modification of Newton's potential follow from the tree-level computations and therefore do not contain nonzero powers of \(\hbar\). This is in contrast with the one-loop calculations of the gravity propagator that yields a similar result (though using the ideas from AdS/CFT, those two can be seen on equal footing [15]). Namely, if we were to quantize GR (despite it being non-renormalizable), we could obtain modifications of Newton's potential at the one-loop level, but those would be suppressed by powers of \(\hbar\). The potentials that we analyse in this paper are not of this type. This means that we don't have to claim any results from quantum gravity in order to talk about those modified potentials; we only assume the existence of additional dimensions. Also, corrections of the form \(\frac{1}{r^{3}}\) in Newton's potential are well-known in GR (they are responsible for the precession of Mercury orbit). However, they are not in any way connected to the corrections we are dealing with here, which are nonrelativistic. ## III Two identical particles Let us consider two identical spin-\(\frac{1}{2}\) particles that have zero charge with respect to any internal symmetry. 
This means that the only way those two particles can interact is via gravitational force. For small masses (energies), classical gravity is well-approximated by Newton's theory, which allows us to assume the following form of the two-particle Hamiltonian, \[H=\frac{\mathbf{P}^{2}}{4m}+\frac{\mathbf{p}^{2}}{m}-\frac{Gm^{2}}{r}, \tag{8}\] where \(\mathbf{P}\) is the total momentum of the system and \(\mathbf{p}\) is the relative momentum. Eigenstates of the Hamiltonian (8) are of the form \[|\psi\rangle=\frac{e^{i\vec{K}\cdot\vec{R}}}{\sqrt{V}}R_{nl}(r){Y_{l}}^{m}(\theta,\varphi)\otimes|\chi\rangle. \tag{9}\] We assume that the spatial volume \(V\) of the box in which we constrain our quantum system is much larger than the relevant scales of bound states of Hamiltonian (8) so that we can neglect the influence of this box on the spectrum. In case we place the system in some external (non-constant) potential, we demand that the relevant scales of variations in the potential are large compared to any scales in (8) so that we can again restrict ourselves to the case of Hamiltonian (8). The interchange of the two particles corresponds to \(\vec{r}\rightarrow-\vec{r}\), which induces the change \({Y_{l}}^{m}(\theta,\varphi)\rightarrow(-1)^{l}{Y_{l}}^{m}(\theta,\varphi)\). Preparing the spin state of the system in an appropriate manner (singlet or triplet), we are able to control the parity of the relative angular momentum of the eigenstates, as the total state of the two particles has to be antisymmetric. In the case of a \(\frac{1}{r}\) potential, states with different values of \(l\) have the same energy as long as the principal quantum number is the same. This is related to Figure 1: Two identical spin-\(\frac{1}{2}\) fermions with zero charges, interacting through a modified Newton's potential. Depending on the spin polarisation, the total energy of the system is different. the \(SO(4)\) symmetry of the Hamiltonian (8). However, in the case of braneworld models, this Hamiltonian is corrected using the previously described potentials. The new Hamiltonian has only rotational \(SO(3)\) symmetry, and therefore states with different angular momentum have different energy. We want to determine the eigenenergies of the braneworld Hamiltonians resulting from various braneworld scenarios. First, let us consider the RS-II model with Einstein-Hilbert action, where the correction to Newton's potential is given by (6). There are two regimes that we will consider. The first one is when perturbation theory is applicable. Eigenenergies of the unperturbed Hamiltonian (8) are well-known, as it is the same (up to a numerical factor) as the Hamiltonian of the hydrogen atom. This means that the energies (disregarding the centre of mass energy) are given by \[E_{n}=-\frac{a^{2}\hbar^{2}}{mn^{2}}, \tag{3.3}\] where \(a=\frac{Gm^{3}}{2\hbar^{2}}\). We now consider the corrected potential \(-\frac{Gm^{2}(1+\Delta(r))}{r}\) and treat the \(\Delta(r)\) term as a perturbation, using first-order perturbation theory to obtain corrections to the energies of the \(n=2\) level. We will see that the perturbative treatment is valid as long as \(1\ll ka_{0}\), where \(a_{0}=\frac{2\hbar^{2}}{Gm^{3}}\). As the perturbation also has \(SO(3)\) symmetry, we only need to calculate diagonal matrix elements to obtain the corrections to the energy levels. In Figure 2, we present those corrections as a function of the dimensionless parameter \(\frac{1}{ka_{0}}\). 
To be more precise, we plot dimensionless quantity \[U=8\frac{a_{0}}{k^{2}}\int_{0}^{+\infty}\mathrm{d}x\,|\psi(x)|^{2}(-x)\Delta( x), \tag{3.4}\] such that first-order corrections to energy levels are given by \(E^{\prime}=\frac{Gm^{2}}{8a_{0}}U\). It is important to note that our system is nonrelativistic so that we can use Newtonian approximation. For this to be true, we must have \(p\sim\frac{\hbar}{a_{0}}\ll mc\). In the case when the perturbation theory is not applicable, we can make the following observation. In this regime, \(a_{0}\) is at least of the same order as \(1/k\), or possibly larger. Wave functions peak around \(a_{0}\), and this means that we can use the form of the potential for \(r\ll\frac{1}{k}\). It turns out that potential \(-\frac{A}{r}-\frac{B}{r^{2}}\) corresponds to a Hamiltonian that we can exactly diagonalize. It can be shown that for \(\frac{1}{ka_{0}}>\frac{3\pi}{32}\approx 0.3\), the particles would merge, and our analysis based on quantum mechanics and Newton's gravity would be inappropriate [16]. ### Chern-Simons gravity So far, we assumed that the 5-dimensional theory is well-described by Einstein's gravity. In five dimensions, one can also consider Chern-Simons (CS) gravity, defined in the metric formulation as \[S_{CS}=\frac{1}{16\pi G}\int\mathrm{d}^{5}x\sqrt{-g}\Big{[}R-2\Lambda \tag{3.5}\] \[+\frac{1}{4k^{2}}\left(R^{2}-4R^{\mu\nu}R_{\nu\mu}+R^{\mu\nu\rho \sigma}R_{\rho\sigma\mu\nu}\right)\Big{]}.\] This theory has an enlarged \(SO(4,2)\) symmetry group [17]. CS theory has non-vanishing torsion, but here we restrict to the case of torsion-less geometries, as they are much better understood. More generally, the CS action (3.5) can be generalized by substituting the constant parameter \(\frac{1}{4k^{2}}\) with some general parameter \(\frac{\alpha}{4k^{2}}\), thus obtaining the Einstein-Gauss-Bonnet theory. One can calculate the modification of Newton's potential for this generalized gravity theory [13] and obtain that, for the CS case, there are no corrections. Moreover, if we take \(\alpha\) close to the CS value, we get corrections that are small enough so that perturbation theory can be used for all values of \(\frac{1}{ka_{0}}\). Figure 3: Difference of energies between \(l=0\) and \(l=1\) state for the first excited level. Figure 2: Corrections to the energies of the first excited level for \(l=0\) (red line) and \(l=1\) (blue line); color online. The correction to the potential takes the form [13] \[\Delta_{\alpha}(r)=\frac{4(1-\alpha)}{3\pi(1+\alpha)}\Bigg{(}\frac{ (\beta x\cos(\beta x)-\sin(\beta x))\int_{+\infty}^{\beta x}\frac{\cos(t)}{t} \,\mathrm{d}t}{x^{2}}\] \[+\frac{(\cos(\beta x)+\beta x\sin(\beta x))\int_{+\infty}^{\beta x }\frac{\sin(t)}{t}\,\mathrm{d}t}{x^{2}}+\frac{\pi}{2x^{2}}-\frac{\beta}{x}+ \beta^{2}\gamma\] \[\times\big{(}\sin(\beta\gamma x)\int_{+\infty}^{\beta\gamma x} \frac{\cos(t)}{t}\,\mathrm{d}t-\cos(\beta\gamma x)\int_{+\infty}^{\beta\gamma x }\frac{\sin(t)}{t}\,\mathrm{d}t\big{)}\Bigg{)}, \tag{11}\] where \(x=kr\), and \(\beta\) and \(\gamma\) are numerical constants defined in [13]; for \(\alpha=0.95\) they are given as \(\beta\approx 3.86443\) and \(\gamma\approx 0.637448\). Again, we are using approximate potential, with approximate numerical values of the parameters, as our goal is to demonstrate the possibility of detection of extra dimensions and provide some rough theoretical data that could be made more precise with more advanced numerical techniques. 
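The following is a minimal numerical sketch (not the authors' code) of the first-order shifts discussed above. It assumes hydrogen-like \(n=2\) radial functions with the gravitational Bohr radius \(a_{0}=2\hbar^{2}/Gm^{3}\), rewrites the interpolating correction of Eq. (6) in terms of the standard cosine and sine integrals via \(\int_{+\infty}^{x}\frac{\cos t}{t}\,\mathrm{d}t=\mathrm{Ci}(x)\) and \(\int_{+\infty}^{x}\frac{\sin t}{t}\,\mathrm{d}t=\mathrm{Si}(x)-\pi/2\), and reports \(\langle 2,l|-Gm^{2}\Delta(r)/r|2,l\rangle\) in units of \(Gm^{2}/a_{0}\); its normalization therefore differs from the quantity \(U\) plotted in Figure 2. Python with NumPy/SciPy is assumed.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import sici   # sici(x) returns (Si(x), Ci(x))

def delta_rs2(x):
    """Interpolating RS-II correction Delta as a function of x = k*r (Eq. 6)."""
    si, ci = sici(x)
    return (4.0 / (3.0 * np.pi)) * (
        (x * np.cos(x) - np.sin(x)) / x**2 * ci
        + (np.cos(x) + x * np.sin(x)) / x**2 * (si - np.pi / 2.0)
        + np.pi / (2.0 * x**2)
    )

# Hydrogen-like n = 2 radial functions, rho = r / a0,
# normalised so that the integral of R^2 rho^2 d(rho) equals 1.
def R20(rho):
    return (2.0 - rho) * np.exp(-rho / 2.0) / (2.0 * np.sqrt(2.0))

def R21(rho):
    return rho * np.exp(-rho / 2.0) / (2.0 * np.sqrt(6.0))

def first_order_shift(R, inv_ka0):
    """<2,l| -G m^2 Delta(r)/r |2,l> in units of G m^2 / a0, for a given 1/(k a0)."""
    ka0 = 1.0 / inv_ka0
    integrand = lambda rho: R(rho) ** 2 * rho * delta_rs2(ka0 * rho)
    value, _ = quad(integrand, 0.0, 80.0, limit=500)
    return -value

for inv_ka0 in (0.02, 0.05, 0.10):
    print(inv_ka0, first_order_shift(R20, inv_ka0), first_order_shift(R21, inv_ka0))
```

The same quadrature can be fed with \(\Delta_{\alpha}(r)\) of Eq. (11) instead of the Einstein-Hilbert correction; in either case the shifts should come out negative and larger in magnitude for \(l=0\) than for \(l=1\), since the \(l=0\) state has more weight at small \(r\) where the correction is strongest.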
In Figure 4, we present corrections to the energy level \(n=2\), \(l=0\), and \(n=2\), \(l=1\) in the case \(\alpha=0.95\). Note that the form of the corrections is expected. The energy is shifted more for greater values of \(\frac{1}{ka_{0}}\), but the rate of this shift is decreasing. Also, for large \(\frac{1}{ka_{0}}\), the energy difference between the two levels decreases. This can be concluded from the asymptotic form of the gravitational potential. Finally, we can draw the difference between energies of \(l=0\) and \(l=1\) level, obtaining Figure 5. Due to the approximate nature of our constants, previous graphs may not give the best numerical values for large \(\frac{1}{ka_{0}}\), but the form of the graph should be correct. ## IV Testing the braneworld hypothesis Let us now propose a way to empirically verify whether the consequences of the braneworld hypothesis are valid or not. The procedure is essentially based on the fact that the beyond-Newtonian gravitational potential (of the kind we considered above) lifts the orbital degeneracy of the energy levels for a pair of gravitationally interacting particles. Consider, for example, the following two states of a pair of identical spin-\(\frac{1}{2}\) particles, \[|\Psi_{1}\rangle =\frac{1}{\sqrt{2}}|2,0,0\rangle\otimes(|\uparrow\downarrow \rangle-|\downarrow\uparrow\rangle), \tag{12}\] \[|\Psi_{2}\rangle =|2,1,0\rangle\otimes|\uparrow\uparrow\rangle. \tag{13}\] Both states are anti-symmetric as a whole, the particles being fermions. The first state has the anti-symmetric singlet state in the spin sector, while the orbital part is symmetric (\(l=0\)). On the other hand, the second state has a symmetric spin part and an anti-symmetric orbital part (note that we could also choose some other spin state from the symmetric triplet and also some other value for the magnetic quantum number, namely \(\pm 1\)). The braneworld potential (6) implies that states with different orbital quantum numbers \(l\) have a different energy, which is in contrast with the Newtonian case. In principle, we may also consider a superposition of the above two states, \[|\Psi\rangle=\frac{1}{\sqrt{2}}(|\Psi_{1}\rangle+|\Psi_{2}\rangle), \tag{14}\] and use the standard Mach-Zehnder (MZ) interferometer [18] with the first beam splitter selecting the states by the total spin projection \(S_{z}\). If the energies of the states \(|\Psi_{1}\rangle\) and \(|\Psi_{2}\rangle\) were the same, as in the Newtonian case, the final state after passing through the MZ interferometer would be the same, namely, \(|\Psi\rangle\). If, on the other hand, the energies of the two states were different, as predicted by the braneworld model, the final state of the two particles would be \[|\Psi_{\text{final}}\rangle=\frac{1}{\sqrt{2}}(e^{-i\phi}|\Psi_{1}\rangle+| \Psi_{2}\rangle), \tag{15}\] Figure 4: Corrections to the energies of the first excited level for \(l=0\) (red line) and \(l=1\) (blue line) in nearly-CS case; color online. Figure 5: Difference of energies between \(l=0\) and \(l=1\) state for the first excited level in the nearly-CS case. where the relative phase is determined by the energy difference, \[\phi=(E_{n=2,l=1}-E_{n=2,l=0})t/\hbar. \tag{10}\] The second beam splitter in the MZ scheme should postselect the state \(|\Psi\rangle\). If \(\phi\) is non-zero, i.e. if the braneworld corrections exist, the statistics of detectors' clicks would be modified to \(\cos^{2}\frac{\phi}{2}\) and \(\sin^{2}\frac{\phi}{2}\). 
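As a rough illustration of this readout (not taken from the paper), the sketch below converts an assumed level splitting and free-evolution time into the accumulated phase and the corresponding click statistics; both input numbers are placeholders chosen only to give a visible effect.

```python
import numpy as np

HBAR = 1.054571817e-34  # J s

# Placeholder inputs for illustration only (not values from the paper).
delta_E = 1.0e-34   # J, hypothetical E(n=2, l=1) - E(n=2, l=0)
t = 1.0             # s, hypothetical free-evolution time in the interferometer

phi = delta_E * t / HBAR              # relative phase acquired by |Psi_1>
p_postselect = np.cos(phi / 2.0) ** 2  # probability of recovering |Psi>
p_orthogonal = np.sin(phi / 2.0) ** 2  # probability of the orthogonal outcome

print(f"phi = {phi:.3f} rad, P(|Psi>) = {p_postselect:.3f}, P(orth) = {p_orthogonal:.3f}")
# For phi = 0 (degenerate levels, the Newtonian case) P(|Psi>) = 1, as stated above.
```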
In this way, we can experimentally obtain the value of the energy gap if it exists. Therefore, if one could put our particles in the superposition state \(|\Psi\rangle\), simple MZ interferometry could be used to refute (or support) the braneworld scenario. If the outcome of the experiment were positive, in support of the braneworld, one could estimate the initially undetermined bulk parameter \(\frac{1}{k}=l_{\rm AdS}\). In Figure 6, we give a scheme of the experimental proposal. ## V Conclusion In this work, we explored the possibility of testing the braneworld scenario in laboratory conditions. We have demonstrated that the braneworld RS-II model predicts a modification of the energy spectrum of gravitational bound states by lifting the characteristic orbital degeneracy associated with the Newtonian case. For a pair of identical fermions, the energy spectrum depends on whether the system is in the singlet or triplet spin state. From a practical point of view, the crucial step would be to prepare the particles in the superposition state (11). Due to the fermionic symmetry constraint, this might be achieved by solely manipulating the spins of the two particles, which is the main reason for working with a pair of identical fermions. We studied two cases of 5-dimensional theories of gravity - Einstein's gravity and nearly-Chern-Simons gravity - and presented the numerical results concerning energy splitting, which appears solely due to the braneworld hypothesis. To test the existence of the splitting, one could, in principle, set up a Mach-Zehnder interferometer, as explained in the text. We believe that bounded gravity states, in combination with quantum mechanics, could be the most straightforward way to test the braneworld models. Controlling the masses of the particles (thus controlling the parameter \(a_{0}\)), one can obtain a graph of energy difference and compare it to one of the figures from the previous section. In this way, one could discriminate between different models of 5-dimensional gravity. Another proposal for testing the RS model can be found in [19]. A possible extension of our work would be to make a comparison to a similar experimental set-up [20; 21], discussing the entanglement between quantum particles induced by gravity [22; 23; 24; 25; 26]. In particular, one could use potentials analysed in this paper to see the differences induced by the corrections coming from the RS-II model. In the case of EH gravity, and using the limit of small or large distance, this was done recently in [27; 28]. It would be interesting to analyse other experimental proposals and to see the effects coming from different gravity models in the 5-dimensional bulk. Effects analysed in this work could also be relevant for cosmology [29]. ## VI Acknowledgement The authors express their gratitude to Caslav Brukner for his hospitality and productive discussions during their visit to the Institute for Quantum Optics and Quantum Information (IQOQI) Vienna. The authors also thank A.C. de la Hamette and V. S. Kabel for useful discussion on the content of this paper. This paper is produced as a result of the BBQUANT project, with active participation from A.G. and D.G. This project was conducted within the Serbian Science and Diaspora Collaboration Program: Knowledge Exchange Vouchers, supported by the Science Fund of the Republic of Serbia. Work of I.S., D.D., A.G. and D.G. 
is supported by the funding provided by the Faculty of Physics, University of Belgrade, through grant number 451-03-47/2023-01/200162 by the Ministry of Science, Technological Development and Innovations of the Republic of Serbia.
2309.10676
Remanence Increase in SrFe$_{12}$O$_{19}$/Fe Exchange-Decoupled Hard-Soft Composite Magnets Owing to Dipolar Interactions
In the search for improved permanent magnets, fueled by the geostrategic and environmental issues associated with rare-earth-based magnets, magnetically hard (high anisotropy)-soft (high magnetization) composite magnets hold promise as alternative magnets that could replace modern permanent magnets, such as rare-earth-based and ceramic magnets, in certain applications. However, so far, the magnetic properties reported for hard-soft composites have been underwhelming. Here, an attempt to further understand the correlation between magnetic and microstructural properties in strontium ferrite-based composites, hard SrFe$_{12}$O$_{19}$ (SFO) ceramics with different contents of Fe particles as soft phase, both in powder and in dense injection molded magnets, is presented. In addition, the influence of soft phase particle dimension, in the nano- and micron-sized regimes, on these properties is studied. While Fe and SFO are not exchange-coupled in our magnets, a remanence that is higher than expected is measured. In fact, in composite injection molded anisotropic (magnetically oriented) magnets, remanence is improved by 2.4% with respect to a pure ferrite identical magnet. The analysis of the experimental results in combination with micromagnetic simulations allows us to establish that the type of interaction between hard and soft phases is of a dipolar nature, and is responsible for the alignment of a fraction of the soft spins with the magnetization of the hard. The mechanism unraveled in this work has implications for the development of novel hard-soft permanent magnets.
Jesús Carlos Guzmán-Mínguez, Cecilia Granados-Miralles, Patrick Kuntschke, César de Julián Fernández, Sergey Erokhin, Dmitry Berkov, Thomas Schliesch, Jose Francisco Fernández, Adrián Quesada
2023-09-19T15:02:06Z
http://arxiv.org/abs/2309.10676v1
Remanence Increase in SrFe\({}_{12}\)O\({}_{19}\)/Fe Exchange-Decoupled Hard-Soft Composite Magnets Owing to Dipolar Interactions ###### Abstract In the search for improved permanent magnets, fueled by the geostrategic and environmental issues associated with rare-earth-based magnets, magnetically hard (high anisotropy)-soft (high magnetization) composite magnets hold promise as alternative magnets that could replace modern permanent magnets, such as rare-earth-based and ceramic magnets, in certain applications. However, so far, the magnetic properties reported for hard-soft composites have been underwhelming. Here, an attempt to further understand the correlation between magnetic and microstructural properties in strontium ferrite-based composites, hard SrFe\({}_{12}\)O\({}_{19}\) (SFO) ceramics with different contents of Fe particles as soft phase, both in powder and in dense injection molded magnets, is presented. In addition, the influence of soft phase particle dimension, in the nano- and micron-sized regimes, on these properties is studied. While Fe and SFO are not exchange-coupled in our magnets, a remanence that is higher than expected is measured. In fact, in composite injection molded anisotropic (magnetically oriented) magnets, remanence is improved by 2.4% with respect to a pure ferrite identical magnet. The analysis of the experimental results in combination with micromagnetic simulations allows us to establish that the type of interaction between hard and soft phases is of a dipolar nature, and is responsible for the alignment of a fraction of the soft spins with the magnetization of the hard. The mechanism unraveled in this work has implications for the development of novel hard-soft permanent magnets. hard-soft composites; permanent magnets; ferrites; dipolar interactions + Footnote †: journal: Nanomaterials ## 1 Introduction The challenge of constantly improving the performance of permanent magnets has been ongoing for decades. Lately, the geostrategic and environmental issues associated with rare-earth elements (REE) as raw materials have brought forward the need to develop alternative permanent magnets that could, at least partially, substitute them [1]. One of the strategies to obtain REE-free substitutes is to fabricate composite magnets, based on a hard magnetic phase with high coercivity and a soft phase with high magnetization, in which the magnetic coupling between hard and soft phases leads to an increase in remanence and energy product, which is one of the figures of merit of permanent magnets [2; 3; 4; 5; 6]. The most pursued strategy consists of exploiting the exchange-coupling between hard and soft particles in order to gain remanence while avoiding a detrimental coercivity loss [7; 8]. According to the model, provided that the soft particles are below a given size threshold [9] and that structural coherency at the interface exists, effective exchange coupling should align at remanence the spins of the soft with the magnetization of the hard at reasonable coercivity penalties. However, we recently demonstrated that not only is it extremely challenging to meet the structural requirements associated with this approach, but in addition, robust exchange-coupling may lead to a collapse of the coercivity [10; 11; 12], as a consequence of the lowered onset for domain wall propagation [13]. 
Focusing on systems based on hexaferrites, such as SrFe\({}_{12}\)O\({}_{19}\), as the hard magnetic phase, results so far have not met the expectations for exchange-coupled hard-soft composite magnets [14]. In fact, more than 30 years after the pioneering work by Kneller and Hawig [15], the effect is not being exploited by the industry. When using Fe as the soft phase, works in the field have reported increases in magnetization without a significant loss of coercivity [16; 17; 18; 19] in exchange-coupled powders. Unfortunately, the samples studied were neither densified nor magnetically oriented, which does not enable extracting conclusions on the real remanence of the composites. It is important to explain as well, that although single-phase behavior in the demagnetization curves is usually associated with effective exchange-coupling between hard-soft particles, experiments have shown that the coercive field distribution associated with broad particle size distributions in the composite may lead to single-phase curves in the absence of exchange-coupling [11; 12]. More recently, the possibility of exploiting dipolar interactions within the composite magnets to improve their remanence, instead of exchange-coupling, has been discussed [10]. In fact, composites consisting of SFO and FeCo nanowires (in which the dipolar interaction, through shape anisotropy, plays a crucial role) have been reported to present a significant increase in remanence and energy product [20]. Here, the microstructural and magnetic properties of magnetically oriented SFO/Fe composite powders and the corresponding anisotropic dense magnets are studied as a function of soft phase content and particle size. The magnetic orientation assures the maximum remanence as required for magnets. Scanning Electron Microscopy (SEM) is employed to reveal particle sizes and geometric distribution of hard and soft phases, while demagnetization curves measured with Vibrating Sample Magnetometry (VSM) and Permagraph instruments detail the magnetic properties of the magnetically aligned powders and magnets. Analyzed in combination with micromagnetic simulations, the correlation between microstructure and magnetic properties will be established. ## 2 Materials and Methods The raw materials used in this study were commercial strontium ferrite (SrFe\({}_{12}\)O\({}_{19}\), 99.99%) powders supplied by Max Baermann GmbH (Germany-China) (Bergisch Gladbach, Germany) and various commercial Fe particles (99.99%) with different particle sizes ranging from 50 nm to 11 \(\upmu\)m. The hard-soft composite samples compositions were prepared by incorporating 5, 10 and 15 vol% of soft phase to SrFe\({}_{12}\)O\({}_{19}\) (SFO) powders, using different sizes of Fe particles. All samples were studied both in the form of oriented powders and in the form of injection molded magnets. The preparation of the magnetically oriented powder samples for the magnetic characterization consisted of dispersing 30-50 mg of powder in a bonding glass inside a capsule under an externally applied magnetic field H = 0.3 T generated by a NdFeB N42 magnet. The particles, although in proximity, were isolated from one another after this procedure. The injection molded oriented magnets were fabricated at the company Max Baermann GmbH following their usual industrial production method. The injection-molded pieces were squares of 2 cm in lateral size and 4 mm in height. 
In order for the demagnetizing boundary conditions to be comparable, all magnet samples had the same sizes and shapes. The mixing was carried out by means of a low-energy dry mixing method [21]. The dry dispersion process consisted of shaking SrFe\({}_{12}\)O\({}_{19}\)/Fe particle mixtures in a 60 cm\({}^{3}\) nylon container for 5 min at 50 rpm using a tubular-type mixer (Mixer/Mill, 8000D, SPEX Sample Prep., Metujen, NJ, USA). The particle size and morphology of the powders were evaluated using secondary electron images of field emission scanning electron microscopy, FE-SEM (Hitachi S-4700, Hitachi, Tokyo, Japan). Structural analysis was performed by X-ray diffraction (XRD) with a Bruker D8 diffractometer using Cu K\(\alpha\) radiation (\(\lambda\) = 1.5418 A) and a Lynxeye XE-T detector. Subsequent Rietveld analysis of the XRD data was performed by FullProf Suite [22]. Thermogravimetric analysis (TGA) in air was employed to determine the oxidation temperature of the Fe powders between RT and 900 \({}^{\circ}\)C, using a TA Instruments Q50 system. The magnetic properties of the samples were measured using the custom-made VSM described in reference [23], applying a maximum H-field of 1.4 T, and a Permagraph Instrument (Permagraph C, Magnet-Physik, Cologne, Germany). The measurements were carried out at room temperature, and the sensitivity of the instrument was 0.1 Am\({}^{2}\)/kg and 0.025 mT, according to specifications. For the micromagnetic simulations, we employed a micromagnetic algorithm initially developed for the simulation of the magnetization distribution of magnetic nanocomposites [24; 25]. The magnetization distribution in an isolated sphere of soft phase with diameter D under the influence of an external magnetic field (produced by the hard phase) is simulated by considering the four standard contributions to the total magnetic energy: external field, magnetic anisotropy, exchange and dipolar interaction. The following material parameters (typical for Fe) were used for simulating the soft material: saturation magnetization Ms = 220 Am\({}^{2}\)/kg, cubic magnetocrystalline anisotropy Kcub = 20 kJ/m\({}^{3}\) with easy axis along the \(<\)100\(>\) directions and the exchange stiffness constant A\({}^{\text{bulk}}\) = 21 pJ/m [10]. The particles under study had D = 15-50 nm and were discretized in the non-regular mesh with the typical mesh element size of 3 nm. Open boundary conditions were applied in all simulations. ## 3 Results and Discussion ### Hard-Soft Composite Powders Figure 1 shows the SEM micrographs corresponding to the starting SFO powder and the Fe powders with three different particle sizes. The particle distribution in the SFO powders is, as previously reported [26], of a bimodal nature, with larger particles in the 2-5 \(\upmu\)m range surrounded by smaller particles in the 200-500 nm range. All SFO particles have platelet shapes, as expected. The SEM characterization of the three Fe powders reveals that the smaller-sized powder, Figure 1b, is formed by particles with an average size of 50 nm that are arranged in contact with each other, presumably for electrostatic and magnetostatic reasons. The powder depicted in Figure 1c again reveals a bimodal particle size distribution, where 100-200 nm Fe particles coexist with larger 1-2 \(\upmu\)m particles. For simplicity, we refer to this powder in the following as 1 \(\upmu\)m Fe powder. The larger-sized Fe powder, shown in Figure 1d, presents an average particle size of 11 \(\upmu\)m. 
As the samples were fabricated and manipulated in air conditions, the XRD patterns of the Fe powders were measured in order to study their oxidation state. Figure 2a shows the pattern and refinement of the 50 nm Fe powders as a selected example. From the refinement of the patterns for each Fe powder, we concluded that, for 50 nm Fe particles, the powders were composed of 77 wt% Fe, 17 wt% FeO and 6 wt% Fe\({}_{3}\)O\({}_{4}\). For 1 \(\upmu\)m Fe powders, the sample consisted of 79 wt% Fe, 15 wt% FeO and 6 wt% Fe\({}_{3}\)O\({}_{4}\). For 11 \(\upmu\)m particles, the XRD refinement detected 100 wt% Fe. Figure 2b presents the TGA of the 50 nm Fe powders. It could be inferred that the onset for oxidation of the powders was approximately 370 \({}^{\circ}\)C, which hinted at the effectiveness of the original oxide surface layer in preventing further oxidation. The saturation magnetization (\(M\)s) values measured for the Fe powders reveal \(M\)s = 175 Am\({}^{2}\)/kg (50 nm), \(M\)s = 186 Am\({}^{2}\)/kg (1 \(\upmu\)m) and \(M\)s = 187 Am\({}^{2}\)/kg (11 \(\upmu\)m). These values are consistent with the Fe wt% extracted from XRD given that the theoretical saturation magnetization value for pure Fe is \(M\)s = 220 Am\({}^{2}\)/kg [27], except for the value of the 11 \(\upmu\)m powder, which suggests that it might be partially oxidized as well. We speculate that the lower reactivity of the larger Fe particles leads to oxide layers that are amorphous, and therefore, undetectable for XRD. Figure 3 shows the magnetization curves of Fe/SFO-oriented composite powders fabricated with 50 nm sized Fe particles with soft phase contents of 5 vol%, 10 vol% and 15 vol%, as well as the curves of the individual SFO and Fe (50 nm) phases. From the individual SFO and Fe phases curves, we can see that the hard phase (SFO) presents a coercive field of 324 kA/m, while the saturation magnetization (_M_s) value is 69 Am\({}^{2}\)/kg. The soft phase (Fe) shows coercivity at (\(H_{\rm C}\)) \(\sim\) 2 kA/m and _M_s = 186 Am\({}^{2}\)/kg. This value is lower than expected for Fe (\(\sim\)220 Am\({}^{2}\)/kg) [28], which is due to the fact that these Fe nanoparticles are protected by a Fe\({}_{3}\)Si layer at their surface, as indicated by the supplier, Figure 1: SEM micrographs showing particle size and morphology of the starting (**a**) SFO and Fe powders with (**b**) 50 nm, (**c**) 1 \(\upmu\)m and (**d**) 11 \(\upmu\)m average particle sizes. Figure 2: (**a**) XRD pattern and corresponding Rietveld refinement for the 50 nm Fe powder. \(\dagger\) and \({}^{*}\) denote diffraction maxima from FeO and Fe\({}_{3}\)O\({}_{4}\) respectively. (**b**) TGA of the 50 nm Fe powder performed in air heating between RT and 900 \({}^{\circ}\)C. which lowers \(M\)s. The presence of this layer excludes the hypothesis of exchange-coupling between Fe and SFO powders in the mixtures. Regarding the nanocomposites, for the sample with 5 vol% Fe (red curve), \(M\)s = 75 Am\({}^{2}\)/kg and \(H_{\text{C}}\) = 312 kA/m. In the sample with 10 vol% Fe (green curve), \(H_{\text{C}}\) = 290 kA/m and \(M\)s = 84 Am\({}^{2}\)/kg. Finally, the sample with 15 vol% Fe (blue curve) presents \(H_{\text{C}}\) = 264 kA/m and \(M\)s = 91 Am\({}^{2}\)/kg. As expected, in hard-soft composites, coercivity decreases, while \(M\)s increases with increasing soft content. 
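A quick consistency check of the saturation values quoted above is to treat \(M\)s as additive by mass, converting the nominal vol% loadings to weight fractions with assumed bulk densities (roughly 5.1 g/cm\({}^{3}\) for SrFe\({}_{12}\)O\({}_{19}\) and 7.87 g/cm\({}^{3}\) for Fe; these densities are not taken from the paper). A minimal sketch in Python:

```python
# Mass-weighted mixing of saturation magnetization (additive; no coupling required).
RHO_SFO, RHO_FE = 5.1, 7.87     # g/cm^3, assumed bulk densities (not from the paper)
MS_SFO, MS_FE = 69.0, 175.0     # Am^2/kg, measured values quoted in the text

for vol_fe in (0.05, 0.10, 0.15):
    wt_fe = vol_fe * RHO_FE / (vol_fe * RHO_FE + (1.0 - vol_fe) * RHO_SFO)
    ms_mix = wt_fe * MS_FE + (1.0 - wt_fe) * MS_SFO
    print(f"{vol_fe:.0%} Fe -> weight fraction {wt_fe:.3f}, expected Ms ~ {ms_mix:.0f} Am^2/kg")
# Gives roughly 77, 84 and 92 Am^2/kg, close to the measured 75-76, 84-86 and 91-93 Am^2/kg.
```

The rough agreement supports the additivity of \(M\)s in these mixtures; the deviation of interest discussed below concerns the remanence rather than the saturation.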
Based on the steep drop in \(H_{\text{C}}\) for 15 vol%, we consider 10 vol% Fe the sample that presents the most competitive compromise in magnetic performance (based on \(H_{\text{C}}\) and \(M\)s). It is important to note that the S shape observed in the second quadrant of the demagnetization curve of the composites (more evident for the 15 vol% sample but displayed by the other two as well), as detailed in the inset of Figure 3, strongly hints at an absence of exchange-coupling between hard and soft magnetic phases [11; 12; 15; 20]. Composites fabricated with 1 \(\upmu\)m and 11 \(\upmu\)m diameter Fe particles present similar trends (not shown). Table 1 summarizes the magnetic properties of the oriented powder samples studied. Remanent magnetization (\(M_{\text{R}}\)) presents an interesting behavior, as shown in Figure 4. In a multiphase magnetic material, in the absence of coupling, remanence is an additive property [28], the value of which can be calculated using the expression: \[\left[M_{r}\right]_{exp}=M_{r,\,\,hard}\times wt\%_{hard}+M_{r,\,\,soft}\times wt\%_{soft} \tag{1}\] \begin{table} \begin{tabular}{l c c c c} \hline \hline \multicolumn{1}{c}{**Sample**} & \(H_{\text{C}}\) (kA/m) & \(M_{\text{R}}\) (Am\({}^{2}\)/kg) & \(M_{\text{S}}\) (Am\({}^{2}\)/kg) & **Type** \\ \hline SFO & 323 & 65.1 & 68.7 & Oriented Powder \\ \hline SFO + 5 vol\% Fe 50 nm & 301 & 62.5 & 76.4 & Oriented Powder \\ \hline SFO + 10 vol\% Fe 50 nm & 287 & 63.5 & 86 & Oriented Powder \\ \hline SFO + 15 vol\% Fe 50 nm & 259 & 61.3 & 93.1 & Oriented Powder \\ \hline 100\% Fe 50 nm & 3 & 3.7 & 174.6 & Oriented Powder \\ \hline \hline \end{tabular} \end{table} Table 1: Magnetic parameters for the oriented powder samples. Figure 3: Magnetization vs. applied field curves of SFO/Fe composite powders using 50 nm Fe particles and for different soft phase concentrations, including the curves of the individual SFO and Fe phases. Using the \(M_{\rm R}\) values for Fe and SFO measured in Figure 3, and given the assumption that no exchange-coupling occurs in these composite powders, Figure 4 shows the expected (calculated) \(M_{\rm R}\) values together with the values experimentally measured (extracted from Figure 3). By first analyzing the case of non-oriented powders, it can be observed that the calculated and measured values practically coincide for 50 nm Fe particles, and slight deviations are measured for the 1 \(\upmu\)m Fe size. We attribute these fluctuations/deviations to the random arrangement of non-oriented particles. A very different scenario occurs for magnetically oriented powders. As can be observed, for all Fe contents and 50 nm and 1 \(\upmu\)m particle sizes, the measured remanence is larger than the expected calculated value. This evidence excludes the hypothesis of total decoupling between hard and soft phases and suggests a magnetizing-type coupling that is only activated in the magnetically oriented state. The plausible reasons for this observation will be discussed later in the manuscript. ### Injection-Molded Hard-Soft Composite Magnets In order to investigate the potential of these composite powders as dense magnets, anisotropic (magnetically aligned) injection-molded permanent magnets were fabricated at the Max Baermann GmbH pilot production line. An Fe content of 10 vol% was selected, and three different types of injection-molded permanent magnets were fabricated using the three Fe powders presented above with different particle sizes. Figure 5 shows the demagnetization curves, measured in a closed loop in a Permagraph instrument, for the three Fe particle sizes and for a single soft phase content (10 vol%), as well as a reference 100% ferrite sample. Figure 5a shows that coercivity decreases and the squareness of the demagnetization curve is significantly affected. While squareness is mainly lost, it is important to remark that, in contrast with the VSM curves measured Figure 4: Remanence (\(M_{\rm R}\)) as a function of Fe content. The graph shows two groups of three curves, corresponding to the measured values of the composites fabricated with 50 nm and 1 \(\upmu\)m Fe powders and the values calculated by the linear combination of the \(M_{\rm R}\) values of Fe and SFO individual phases, for both oriented and non-oriented powders. 
Figure 5 shows the demagnetization curves, measured in a closed loop in a permagraph instrument, for the three Fe particle sizes and for a single soft phase content (10 vol%), as well as a reference 100% ferrite sample. Figure 5a shows that coercivity decreases and the squareness of the demagnetization curve is significantly affected. While squareness is mainly lost, it is important to remark that, in contrast with the VSM curves measured Figure 4: Remanence (\(M_{\rm R}\)) as a function of Fe content. The graph shows two groups of three curves, corresponding to the measured values of the composites fabricated with 50 nm and 1 \(\upmu\)m Fe powders and the values calculated by the linear combination of the \(M_{\rm R}\) values of Fe and SFO individual phases, for both oriented and non-oriented powders. in Figure 3 in powders, the smooth shapes of the demagnetization curves in Figure 4 are indicative of a system behaving as a single magnetic phase, suggesting an interparticle coupling between two phases. In Figure 5b, we observe that for pure SFO (black line), remanent polarization \(J_{\mathrm{R}}=0.248\) T, for the composite with 50 nm Fe particles, \(J_{\mathrm{R}}=0.255\) T, for 1 \(\upmu\)m Fe particles \(J_{\mathrm{R}}=0.255\) T and lastly for 11 \(\upmu\)m, \(J_{\mathrm{R}}=0.250\) T. As for the powders, while a linear combination of hard and soft remanences should lead to a \(\sim\)10% decrease in remanence in the composite with respect to the pure ferrite magnets, we measure a 2.4% increase in remanence, with respect to the pure ferrite magnet, for 50 nm and 1 \(\upmu\)m Fe particle sizes and for a 10 vol% soft content; while a 0.8% increase is measured for 11 \(\upmu\)m particles. Hence, we observe an anomalous non-monotonous variation in \(M_{\mathrm{R}}\) and \(H_{\mathrm{C}}\) with the particle size increase. Table 2 summarizes the magnetic properties of the bonded magnets studied. The porosity of the bonded magnets was calculated by measuring the density of the samples and using the theoretical densities. The pure ferrite magnet presents the lowest porosity 3.2%, while the composite magnets have porosities between 6-6.2%. Given that volume magnetization \(M\) depends on porosity \(p\) according to the formula \(M=(1-p)M_{\mathrm{S}}\), this entails that the increase in \(J\) observed cannot be explained by porosity changes. It is also worth noting that the injection molding process is carried out at 250 \({}^{\circ}\)C, which is below the temperature at which the Fe powders oxidize, according to Figure 2b, and therefore, we expect no oxidation of the soft phase. Figure 6 shows the SEM characterization of the surface of the injection-molded magnet made with 10 vol% 11 \(\upmu\)m Fe particles. The size and morphology of both phases in the system SFO/Fe can be distinguished, where the smaller SFO particles embedded within \begin{table} \begin{tabular}{l c c c c} \hline \hline \multicolumn{1}{c}{**Sample**} & \(H_{\mathrm{C}}\) **(kA/m)** & \(J\) **(T)** & **Type** & **Porosity (\%)** \\ \hline SFO & 219.6 & 0.248 \(\pm\) 0.0025 & Bonded Magnet & 3.8 \\ \hline SFO + 10 vol\% Fe 50 nm & 144.8 & 0.255 \(\pm\) 0.0025 & Bonded Magnet & 6.2 \\ \hline SFO + 10 vol\% Fe 1 \(\upmu\)m & 143.6 & 0.255 \(\pm\) 0.0025 & Bonded Magnet & 6.2 \\ \hline SFO + 10 vol\% Fe 11 \(\upmu\)m & 184.7 & 0.25 \(\pm\) 0.0025 & Bonded Magnet & 6 \\ \hline \hline \end{tabular} \end{table} Table 2: Magnetic parameters and porosity of the injection moulded magnets. 
Figure 5: (**a**) Magnetic polarization \(J\) as a function of the applied magnetic field \(H_{\text{eff}}\) of injection-molded composite magnets with 10 vol% Fe content for three different Fe particle sizes. (**b**) Remanence values for the four samples as a function of particle size. the polymer form a percolated matrix that surrounds the larger Fe particles, which seem to be isolated. Figure 6b in particular clearly shows the presence of a void around the Fe particle in the center of the micrograph that prevents it from being in direct contact with the surrounding SFO matrix. Although the reasons behind the formation of this microstructure have not been investigated, we speculate that the fluid dynamics during the process of polymer wetting may be affected by the presence of the significantly larger Fe particles. A crucial consequence of this absence of direct contact is that we can again rule out that the increase in remanence is due to effective exchange-coupling at the interface [15]. This observation emphasizes the surprising single-phase behavior of the demagnetization curve of the composite magnets and the increase in remanence observed. ### Micromagnetic Simulations A micromagnetic study of magnetization reversal in SFO/Fe samples was performed using an approach specifically developed for modeling the magnetization distribution in nanocomposites. The details of this simulation technique can be found in [24; 25]. In all presented simulations, a cubical modeling volume with sides measuring 200 nm was discretized into 400,000 mesh elements, each sized about 3 nm. This high-performance calculation not only allows us to recover the details of magnetization distribution inside the crystallites, but also enables us to study a significant number of different crystallites, which is important for investigating magnetic interactions between them. All modeled samples have 20 vol% porosity. Four standard contributions to the total micromagnetic energy are taken into account: external magnetic field, anisotropy, exchange coupling and magnetodipolar interaction energies. Periodic boundary conditions are used. We performed simulations varying the particle diameter between 15 and 60 nm and for three soft phase concentrations: 5%, 10% and 15% (volume fractions of magnetic material). The exchange coupling between crystallites was set to zero and anisotropy axes were oriented in the initial direction of the magnetic field. For every parameter set, the magnetization reversal of the composite was modeled and the corresponding demagnetization curves were calculated. In this manner, remanence was extracted from every curve and is presented in Figure 7 as a function of the soft phase concentration for each crystallite size. In all cases, the calculated remanence decreases with both increasing soft content and increasing particle size, as expected in hard-soft composites [3; 4; 29; 30]. The larger the soft particle size, the steeper the decrease in remanence. Figure 6: SEM images taken on the surface of the magnet at (**a**) \(\times\)1000 and (**b**) \(\times\)6000 magnifications showing the microstructure of injection-molded magnets with 10 vol% Fe particles of 11 \(\upmu\)m. 
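For orientation, the standard characteristic length scales of Fe can be estimated from the simulation parameters listed above; an Fe density is assumed for the unit conversion, and the commonly quoted bulk anisotropy \(K_{1}\approx 48\) kJ/m\({}^{3}\) is included for comparison. Definitions and numerical prefactors vary between references, so these values only bracket the single-domain and domain-wall scales discussed in the next paragraph; this estimate is not part of the original study.

```python
import numpy as np

MU0 = 4e-7 * np.pi
RHO_FE = 7870.0        # kg/m^3, assumed Fe density (not from the paper)
MS = 220.0 * RHO_FE    # A/m, from the 220 Am^2/kg used in the simulations
A_EX = 21e-12          # J/m, exchange stiffness used in the simulations

l_ex = np.sqrt(2.0 * A_EX / (MU0 * MS**2))   # magnetostatic exchange length
d_coherent = 2.0 * np.sqrt(24.0) * l_ex      # coherent-rotation diameter (one common convention)
print(f"exchange length ~ {l_ex * 1e9:.1f} nm, coherent diameter ~ {d_coherent * 1e9:.0f} nm")

for K, label in [(20e3, "K used in the simulations"), (48e3, "bulk Fe K1 (literature)")]:
    wall_width = np.pi * np.sqrt(A_EX / K)   # Bloch-wall width
    print(f"{label}: domain-wall width ~ {wall_width * 1e9:.0f} nm")
```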
It can be observed that Fe particles only behave as magnetically single-domain for diameter D = 15 nm. For larger diameters, such as the particles used in the composites fabricated here, a vortex configuration forms, with the external spins (those closer to the surface of the particle) forming a closed circular loop while the central spins are aligned, as if an aligned magnetic rod was located at the center of the Fe particles. Figure 8b portrays this by showing an augmented section of the spins inside a 50 nm Fe particle. These calculations only qualitatively agree with the theoretical threshold between single and multidomain regimes defined by the coherent size (around 24 nm) but also by the domain wall length (around 65 nm) [25] and they illustrate the magnetic vortex/multidomain structure of Fe particles even in the nanodomains. It has been demonstrated, by the shape of the demagnetization curves of the powders and the micrographs of the magnets showing voids around Fe, that no exchange-coupling takes place between SFO and Fe particles. Under this circumstance, the increase in remanence experimentally measured for the composite samples (in both powder and injection molded magnet forms) can only be explained by a certain degree of alignment of the spins of the soft phase with the magnetization of the hard, which happens even if Fe particles are in a multidomain state. Based on the difference between measured and calculated (using expression 1) remanence in the oriented powders (~10% on average) and the \(M_{\mathrm{R}}\) of Fe and SFO, it can be inferred that the fraction of Fe spins that actually aligned with the hard phase at remanence, and thus contribute to the overall increase in the remanence of the magnet, is approximately 4%. Looking at the spin configuration in Figure 7, we suggest that a plausible explanation is that the internal spins of the vortex structure are aligned with the internal field created by the hard SFO phase; i.e., due to the dipolar interaction between the hard and soft phase. The self-demagnetizing field in the Fe particles, proportional to the Ms of Fe, easily overcomes the internal field created by SFO, especially near the particle surface due to the minimization of the magnetostatic energy, which makes the spins of the Fe particle circularly curl to minimize the stray fields. However, the internal spins in the vortex structure are subjected to far inferior self-demagnetizing fields and they are, therefore, more likely to align with the hard particles. This alignment will be parallel or antiparallel depending on the geometric Figure 7: Micromagnetically calculated values of remanence in SFO/Fe composites as a function of Fe content and for different Fe particle sizes. distribution of the field lines inside the magnet, which in turn depends on the distance and geometric arrangement of SFO and Fe particles. It is nevertheless safe to assume that, given the parallel alignment of the magnetization of all SFO particles inside the magnetically oriented bonded magnet, the internal magnetic fields will lead to a net alignment of the soft spins in the direction parallel to the magnetization of the hard. This mechanism is consistent with the small (-4%) fraction of Fe spins that are estimated to be aligned in the magnet and the fact that the remanence increase, with respect to the theoretically expected, is observed irrespective of the Fe particle size. 
## 4 Conclusions The magnetic properties and the microstructure of SrFe\({}_{12}\)O\({}_{19}\)/Fe hard-soft composites, in powders and injection-molded magnet form, have been studied as a function of soft phase content (between 5 and 15 vol%) and soft particle size (between 50 nm and 11 \(\upmu\)m). While coercivity decreases with soft phase concentration, as expected in hard-soft composites, a remanence that is larger than expected is measured in both oriented powders and oriented-bonded magnets. In fact, the hard-soft composite injection-molded magnets present a 2.4% increase in remanence with respect to identically prepared pure ferrite magnets, for all particle sizes explored. The lack of exchange-coupling between hard and soft phases, evidenced by the absence of direct contact between SFO and Fe particles seen in the microstructure of the magnets and the shape of the demagnetization curves, points to dipolar interactions as the cause for the remanence increase observed. The micromagnetic simulations performed reveal that a vortex spin configuration can form in spherical Fe soft particles with diameters above 15 nm. We suggest that the spins at the core of the vortex align with the hard phase, explaining the observation and the fact that it occurs for all particle sizes studied and only when the particles are magnetically oriented. These results open pathways to improving the remanence in hard-soft ferrite-based composites in the absence of exchange-coupling, which would be of great interest as the strict requirements associated with effective exchange-coupling would not have to be met, which in turn enhances the applicability of the method at an industrial level. This has ramifications as well in the ultimate development of hard-soft permanent magnets with enhanced performance, of any composition. **Author Contributions:** Conceptualization, A.Q., C.d.J.F. and J.F.F.; methodology, J.C.G.-M., C.G.-M., P.K. and T.S.; software, D.B. and S.E.; formal analysis, C.G.-M. and J.C.G.-M.; experiments, C.G.-M., P.K., T.S. and J.C.G.-M.; simulations, D.B. and S.E.; writing--original draft preparation, A.Q.; writing--review and editing, all authors; visualization, C.G.-M. and A.Q.; supervision, A.Q., C.d.J.F. and J.F.F; funding acquisition, A.Q., C.d.J.F. and J.F.F. All authors have read and agreed to the published version of the manuscript. Figure 8: (**a**) Micromagnetically simulated images of the spin configuration of isolated Fe particles of different diameters between 15–50 nm. (**b**) Detail of the spin configuration of a 50 nm diameter Fe particle. **Funding:** This work was supported by the European Commission through Project H2020 No. 720853 (AMPPHIBIAN). It is also supported by the Spanish Ministry of Science and Innovation through Grants RTI2018-095303-BC51, RTI2018-095303-A-C52, PID2021-124585NB-C33, TED2021-130957B-C51 funded by MCIN/ AEI/10.13039/501100011033, by "ERDF A way of making Europe" and by the "European Union NextGenerationEU/PRTR". Financed by the European Union--NextGenerationEU (National Sustainable Mobility Center CN00000023, Italian Ministry of University and Research Decree n. 1033--17/06/2022, Spoke 11--Innovative Materials and Lightweighting). C.G.-M. acknowledges financial support from grant RYC2021-031181-I funded by MCIN/ AEI/10.13039/501100011033 and by the "European Union NextGenerationEU/PRTR". A.Q. acknowledges financial support from MICINN through the Ramon y Cajal Program (RYC-2017-23320). 
The opinions expressed are those of the authors only and should not be considered as representative of the European Union or the European Commission's official position. Neither the European Union nor the European Commission can be held responsible for them. **Data Availability Statement:** The data is available on reasonable request from the corresponding author. **Conflicts of Interest:** The authors declare no conflict of interest.
2309.14410
A novel eccentricity parameterization for transit-only models
We present a novel eccentricity parameterization for transit-only fits that allows us to efficiently sample the eccentricity and argument of periastron, while being able to generate a self-consistent model of a planet in a Keplerian orbit around its host star. With simulated fits of 330 randomly generated systems, we demonstrate that typical parameterizations often lead to inaccurate and overly precise determinations of the planetary eccentricity. However, our proposed parameterization allows us to accurately -- and often precisely -- recover the eccentricity for the simulated planetary systems with only transit data available.
Jason D. Eastman
2023-09-25T18:00:00Z
http://arxiv.org/abs/2309.14410v2
# A novel eccentricity parameterization for transit-only models ###### Abstract We present a novel eccentricity parameterization for transit-only fits that allows us to efficiently sample the eccentricity and argument of periastron, while being able to generate a self-consistent model of a planet in a Keplerian orbit around its host star. With simulated fits of 330 randomly generated systems, we demonstrate that typical parameterizations often lead to inaccurate and overly precise determinations of the planetary eccentricity. However, our proposed parameterization allows us to accurately - and often precisely - recover the eccentricity for the simulated planetary systems with only transit data available. planetary systems, planets and satellites: detection, stars ## 1 Introduction Despite the unquestioned success of _Kepler_, _K2_, and _TESS_, the abundance of candidates and faintness of their hosts is a significant problem for follow up and characterization with the limited high precision radial velocity facilities, leaving the majority of known exoplanets without measured masses or eccentricities. The planetary eccentricity's impact on the transit lightcurve was first described by Tingley & Sackett (2005). Barnes (2007) noted that we could determine a lower limit on the eccentricity from the transit photometry using an independent constraint on the star, which was expanded on by Ford et al. (2008), and later applied to a real system as the "photoeccentric effect" (Dawson & Johnson, 2012). It is also the same basic idea behind "astrodensity profiling" (Kipping et al., 2012) and used to argue about the ensemble eccentricities of certain populations of planets (Van Eylen & Albrecht, 2015). All of these methods model the transiting planet assuming a circular orbit, derive a constraint on the stellar density from the lightcurve (Seager & Mallen-Ornelas, 2003), and compare it to the known stellar density to determine the planetary eccentricity. With the advent of several all-sky surveys that provide quality, absolute, broad-band photometry across the stellar spectrum and precise, astrometric parallaxes from _Gaia_, we can use Spectral Energy Distribution (SED) and evolutionary modeling to determine independent stellar densities for nearly all exoplanet host stars - often only limited by systematic errors (Tayar et al., 2022) - making this technique widely applicable. However, the inconsistent treatment of the properties of the star and the properties of the planet that orbits it is embodied in the ensemble analyses of the complete set of Kepler Objects of Interest (KOIs) and K2 Candidates (Thompson et al., 2018; Mayo et al., 2018), because the true constraint on the eccentricity from the lightcurve is degenerate and difficult to sample. Indeed, we will show that using naive parameterizations of the eccentricity that are common when including radial velocities leads to inaccurate and overly precise eccentricities from the lightcurve, despite passing convergence criteria commonly used in the literature to determine the quality of Markov Chain Monte Carlo fits. Here we present a more efficient parameterization of the planetary eccentricity that allows us to create a self-consistent global model of the star and planet, efficiently constrain the eccentricity only using the transit light curve, and generate an accurate eccentricity posterior that is often surprisingly precise. 
We also validate the accuracy and precision of our models by fitting 330 simulated transit light curves. ## 2 Degeneracy between \(e\) and \(\omega_{*}\) The eccentricity \(e\) and argument of periastron \(\omega_{*}\) is most commonly parameterized as \(\sqrt{e}\cos\omega_{*}\) and \(\sqrt{e}\sin\omega_{*}\), which reduces the covariance between \(e\) and \(\omega_{*}\), eliminates the problematic angular parameter \(\omega_{*}\), and naturally imposes a uniform prior on both \(e\) and \(\omega_{*}\)(Anderson et al., 2011; Eastman et al., 2013). When we fit radial velocities or astrometry, the eccentricity and argument of periastron are well constrained and this parameterization is well behaved1. Unfortunately, the \(\sqrt{e}\cos\omega_{*}\) and \(\sqrt{e}\sin\omega_{*}\) parameterization has a diabolical covariance when fitting a transit and stellar density alone, owing to the fact that essentially, we have one constraint (the transit duration) to constrain two physical parameters (\(e\) and \(\omega_{*}\)) and the translation between the physical parameters and the constraint is not straight-forward. Figure 1 shows the covariance between \(\sqrt{e}\cos\omega_{*}\) and \(\sqrt{e}\sin\omega_{*}\) with contours of constant transit duration (assuming a fixed inclination, stellar density, and orbital period). The shaded regions denote the typical constraint from the transit duration consistent with a circular orbit (red), half that duration (green), or 1.5 times that duration (blue). This covariance is inefficient for Differential Evolution (DE) or Affine Invariant (AI) Markov Chain Monte Carlo (MCMC) algorithms to sample. These algorithms essentially draw two random points from the Probability Distribution Function (PDF) that define a vector to draw the next step. For linear covariances, that vector is a natural means of efficiently sampling that covariance. For curved covariances, like in figure 1, it often leads to proposed steps in the low-likelihood regions interior to the curve, causing the acceptance rate - and therefore the efficiency of the MCMC - to plummet (\(\lesssim 1\%\) is typical for transit-only fits parameterized as \(\sqrt{e}\cos\omega_{*}\) and \(\sqrt{e}\sin\omega_{*}\)-\(\sim 20\times\) less than ideal). More sinister, this covariance is particularly difficult to fully explore, and can often pass less strict convergence criteria without properly sampling the tips of the covariance between \(\sqrt{e}\cos\omega_{*}\) and \(\sqrt{e}\sin\omega_{*}\), biasing the eccentricity toward its starting value and underestimating the uncertainties in both \(e\) and \(\omega_{*}\). ## 3 Reparameterizing \(e\) and \(\omega_{*}\) To make sampling this diabolical degeneracy more efficient, we can reparameterize \(e\) and \(\omega_{*}\). Ideally, we would diagonalize the fisher matrix of the entire global model to make all parameters independent of one another. However, Carter et al. (2008) showed that, even with a significant number of simplifying approximations, the ideal parameterization of just the transit depends on the parameters themselves. That is, no such general parameterization exists. Fortunately, the DE-MCMC and AI-MCMC algorithms work well in the regime of linear covariances, so the parameters need not be uncorrelated, just linearly correlated on the scale of a typical constraint. 
Let us first consider the observed quantity, the transit duration from first to fourth contact, \(T_{14}\), which is approximately equal to: \[T_{14}\approx\frac{P}{\pi}\arcsin\left[\frac{R_{*}}{a}\frac{C}{\sin i}\right]\frac{\sqrt{1-e^{2}}}{1+e\sin\omega_{*}}, \tag{1}\] where \(C\) is the transit chord, equal to \[C=\sqrt{(1+R_{P}/R_{*})^{2}-b^{2}}, \tag{2}\] and \(b\) is the impact parameter, equal to \[b=\frac{a}{R_{*}}\cos i\frac{1-e^{2}}{1+e\sin\omega_{*}}. \tag{3}\] Unfortunately, the arcsin in equation 1 requires an expensive and complex numerical approach to both deriving the physical parameters necessary to generate a model and computing the Jacobian to correct the implicit priors to be physical. Instead, the transit duration can be parameterized as a unitless scaling factor: the velocity the planet would have if its orbit were circular, \(V_{c}\), divided by the velocity of the planet at the time of transit in its eccentric orbit, \(V_{e}\), which Winn (2010) showed is approximately equal to \[\frac{V_{c}}{V_{e}}\approx\frac{\sqrt{1-e^{2}}}{1+e\sin\omega_{*}}. \tag{4}\] Comparing equation 1 with equation 4, we see that we have entirely ignored the duration's dependence on the period, inclination, and planet size. This makes it far more manageable, and dramatically simplifies the covariance in well-constrained fits, but means that in regimes where the period, inclination, or planet size is not well-constrained (such as single or grazing transits), this parameter is a poor substitute for the observed transit duration and remains highly degenerate with the observed quantities. Figure 1: The covariance between \(\sqrt{e}\cos\omega_{*}\) and \(\sqrt{e}\sin\omega_{*}\) for transit-only fits, shown with contours in equal transit duration, only including solutions with \(0\leq e<1\). The shaded regions denote the typical constraint from the transit duration consistent with a circular orbit (red), half that duration (green), or 1.5 times that duration (blue). We see that the vast majority of parameter space is eliminated, including preferentially eliminating high eccentricity solutions. We can also see the diabolically non-linear covariance that is difficult to sample. Differential evolution or Affine invariant samplers will draw a significant number of steps in the unlikely regions inside the contours, severely impacting their efficiency. While this quantity is only approximately equal to the velocity, it is important to recognize that this approximation does not impact the accuracy of our model. It is merely a tool to step through parameter space, from which we derive the precise values of the parameters from which we generate the physical model without approximating the planet's velocity during transit. However, there are several subtle problems this reparameterization introduces, which we address in the following subsections. ### Transit chord The transit duration only scales nicely with the planet's velocity when the transit chord, \(C\), is known. Unfortunately, the transit chord, even with a fixed \(a/R_{*}\) and \(i\), changes as a function of \(e\) and \(\omega_{*}\) (and therefore \(V_{c}/V_{e}\)), introducing a covariance similar to the one we were trying to avoid. Therefore, it is also useful to reparameterize the orbital inclination as the transit chord. For large swaths of parameter space, \(V_{c}/V_{e}\) and \(C\) are linearly correlated on the scale of a typical constraint. 
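As a purely illustrative aid before discussing the worst-case behavior, the short sketch below evaluates equations (1)-(4) numerically; the function name and the example values are ours and are not taken from the paper or from any published fitting code.

```python
import numpy as np

def transit_duration(P, a_over_rstar, inc, rp_over_rstar, e, omega):
    """First-to-fourth-contact duration T14 of eqs. (1)-(3).
    P in days; inc and omega in radians."""
    b = a_over_rstar * np.cos(inc) * (1 - e**2) / (1 + e * np.sin(omega))   # eq. (3)
    chord = np.sqrt((1 + rp_over_rstar)**2 - b**2)                           # eq. (2)
    vc_over_ve = np.sqrt(1 - e**2) / (1 + e * np.sin(omega))                 # eq. (4)
    return (P / np.pi) * np.arcsin(chord / (a_over_rstar * np.sin(inc))) * vc_over_ve  # eq. (1)

# Illustrative numbers only: a warm-giant-like configuration
t14 = transit_duration(P=10.0, a_over_rstar=20.0, inc=np.radians(89.5),
                       rp_over_rstar=0.1, e=0.3, omega=np.radians(60.0))
print(f"T14 ~ {t14 * 24:.2f} hours")
```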
Even in the worst cases, the covariance is no more diabolical than the original parameterization. The result is that we accept more proposed steps, explore parameter space more thoroughly, and converge faster. ### Deriving \(\omega_{*}\) In order to derive the physical parameters \(e\) and \(\omega_{*}\) from \(V_{c}/V_{e}\), we must introduce another parameter. Fitting for \(\omega_{*}\) directly is an obvious choice, but angular parameters are periodic, which creates several difficulties with MCMC algorithms. Formally, periodic parameters can never converge, because the likelihood is identical at integer multiples of the period. Even in practice, it is sometimes possible for multiple chains to get stuck in separate minima, making it practically impossible to converge. Rejecting steps near an artificial boundary could bias the posterior. Choosing just \(e\sin\omega_{*}\) or just \(\sin\omega_{*}\) would allow us to solve for \(e\), but not \(\omega_{*}\) in its full \(2\pi\) range. This can be generally solved by reparameterizing a single angle \(\omega_{*}\) into two fitted parameters, \(L\cos\omega_{*}\) and \(L\sin\omega_{*}\), bounded such that \((L\cos\omega_{*})^{2}+(L\sin\omega_{*})^{2}\leq 1\). Then, we compute \(\omega_{*}=\text{atan2}\,(L\sin\omega_{*},L\cos\omega_{*})\). Note that fitting two parameters with one constraint necessarily introduces a degeneracy, but it is perfectly linear (everywhere along the line from the center to the edge of the unit circle has equal likelihood), which is well-handled by modern MCMC algorithms. Therefore, we sample in both \(L\sin\omega_{*}\) and \(L\cos\omega_{*}\), and marginalize over the new, meaningless parameter \(L\). ### Sign of the quadratic solution Further, when we solve Equation 4 for \(e\), it is a quadratic, \[0=\left[\left(\frac{V_{c}}{V_{e}}\right)^{2}\sin^{2}\omega_{*}+1\right]e^{2}+\left[2\left(\frac{V_{c}}{V_{e}}\right)^{2}\sin\omega_{*}\right]e+\left[\left(\frac{V_{c}}{V_{e}}\right)^{2}-1\right], \tag{5}\] meaning there are two solutions for \(e\) given values of \(V_{c}/V_{e}\) and \(\omega_{*}\). We must somehow choose between the solution that uses the positive sign and the solution that uses the negative sign. Because \(L\), when using \(L\sin\omega_{*}\), \(L\cos\omega_{*}\), is a totally degenerate quantity, we tried using \(L\) to choose between the two quadratic solutions and avoid introducing another parameter, but the convergence times were slower. See Appendix A for a more detailed discussion, but for this reason, we advocate fitting an additional sign parameter, \(S\), to choose between these solutions. We bound \(0\leq S<2\), and when \(S<1\), we choose the solution with the positive sign. ### \(M_{\text{P}}\) parameterization The planet mass, \(M_{\text{P}}\), is most often parameterized as the observed RV semi-amplitude, \(K\), which we cannot measure for the transit-only fits we describe here. In cases where the RV data is available, it should be included in the model, as the additional expense of computing the RV model is small relative to the dramatic decrease in runtime required to properly sample the \(e-\omega_{*}\) degeneracy, even with our re-parameterization proposed here. Figure 2: The covariance between \(V_{c}/V_{e}\) and \(\omega_{*}\), as shown with contours in equal \(V_{c}/V_{e}\) for transit-only fits. The shaded regions are the same as Figure 1. The X-axis is log spaced. 
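Putting the last two subsections together, a minimal sketch of how the physical \(e\) and \(\omega_{*}\) could be recovered from the fitted quantities \(V_{c}/V_{e}\), \(L\cos\omega_{*}\), \(L\sin\omega_{*}\), and \(S\) is given below; this is our own illustration of equation (5) and the sign convention for \(S\), not the actual implementation used in the paper.

```python
import math

def physical_e_omega(vc_over_ve, L_cos_w, L_sin_w, S):
    """Map the fitted (Vc/Ve, L*cos(w), L*sin(w), S) to (e, omega_star).
    Returns None when no physical 0 <= e < 1 solution exists (step rejected)."""
    omega = math.atan2(L_sin_w, L_cos_w)       # full 2*pi range, L marginalized over
    k = vc_over_ve**2
    sw = math.sin(omega)
    # quadratic of eq. (5): a2*e^2 + a1*e + a0 = 0
    a2 = k * sw**2 + 1.0
    a1 = 2.0 * k * sw
    a0 = k - 1.0
    disc = a1**2 - 4.0 * a2 * a0
    if disc < 0.0:
        return None
    sign = 1.0 if S < 1.0 else -1.0            # S in [0, 2): S < 1 picks the positive root
    e = (-a1 + sign * math.sqrt(disc)) / (2.0 * a2)
    if not (0.0 <= e < 1.0):
        return None
    return e, omega

# Sanity check: round-trip a known (e, omega_star)
e0, w0 = 0.3, math.radians(60.0)
vcve = math.sqrt(1 - e0**2) / (1 + e0 * math.sin(w0))
for S in (0.5, 1.5):
    print(S, physical_e_omega(vcve, 0.5 * math.cos(w0), 0.5 * math.sin(w0), S))
```

For this example the positive root reproduces the input eccentricity, while the negative root is unphysical and the step would be rejected.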
Still, we advocate explicitly modeling \(M_{\rm P}\) anyway, so we can create a self-consistent physical model without having to complicate our more general global model with edge cases that determine which combination of data sets constrains which parameters, necessitating various approximations. Instead, we always fit for some parameter related to \(M_{\rm P}\) and require the user to explicitly place priors to constrain it, e.g., from the Chen & Kipping (2017) mass-radius relation or impose an explicit assumption that the planet mass is negligible/zero. In order to determine the inclination from the transit chord, we must know \(a/R_{*}\). Since we derive it from Kepler's law, \(M_{*}\), \(R_{*}\), and the planet period rather than fit it, we must know or neglect the planetary mass. But computing the mass from the normally fitted RV semi-amplitude requires the inclination, setting up a nasty system of equations to solve. Instead of neglecting the planet mass or solving that system of equations, we reparameterize the RV semi-amplitude as the planetary mass, \(M_{\rm P}\), which also allows us to impose a more physical prior. For simplicity and to impose intuitive priors, we advocate always fitting in \(M_{\rm P}\), regardless of the parameterization or data being fit. ### \(T_{\rm T}\) vs \(T_{\rm C}\) With high eccentricities and inclinations, the time of conjunction \(T_{\rm C}\), often used as the transit time and defined as the time the true anomaly is \(\pi/2-\omega_{*}\), can differ by more than 10 minutes from the observed quantity \(T_{\rm T}\), the time of minimum projected separation between the planet and star, as seen by an observer on Earth (Eastman et al., 2019). We normally prefer stepping in \(T_{\rm C}\) because the time of periastron \(T_{\rm P}\), required to compute the model, is trivially computed from \(T_{\rm C}\). However, for these eccentric, inclined orbits, the covariance between \(T_{\rm C}\), \(V_{c}/V_{e}\), and \(C\) is curved and inefficient to sample. It is well worth the computational overhead to numerically compute \(T_{\rm P}\) from the fitted parameter \(T_{\rm T}\) at each step to avoid this complex covariance. ### Priors The final complication is that, as with any non-physical parameterization, we must be careful about the implicit prior it imposes. The uniform step in \(V_{c}/V_{e}\) imposes a non-physical prior that strongly biases \(e\) toward high eccentricities. Similarly, the uniform step in the transit chord imposes a non-physical prior that strongly biases the orbital inclination toward grazing transits. We must correct for these priors by weighting the likelihood of the step by the absolute value of the Jacobian of the transformation between the two parameterizations. In general, this Jacobian is the absolute value of the determinant of a square matrix where the \(i\)th column and \(j\)th row is equal to \(\partial X_{i}/\partial Y_{j}\), \(X\) is an array of the parameterized variables, and \(Y\) is an array of the parameters we wish to have uniform priors. When most of the parameters are identical or uncorrelated between \(X\) and \(Y\), the determinant dramatically simplifies, and in our case, is much simpler than we might have feared: \[\left|\frac{\partial V_{c}/V_{e}}{\partial e}\frac{\partial C}{\partial\cos i}\right|=\left|\frac{e+\sin\omega_{*}}{\sqrt{1-e^{2}}(1+e\sin\omega_{*})^{2}}\frac{b^{2}}{\cos i\,C}\right|. \tag{6}\] 
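For illustration, the Jacobian weight of equation (6) could be applied as a log-likelihood correction along the following lines; this is a sketch of the correction only, with function and variable names of our own choosing, and the example numbers are made up.

```python
import math

def log_jacobian_weight(e, omega, b, chord, cos_i):
    """ln|J| of eq. (6): corrects uniform steps in Vc/Ve and in the transit
    chord back to uniform priors in e and cos(i). omega in radians."""
    dv_de = abs((e + math.sin(omega)) /
                (math.sqrt(1.0 - e**2) * (1.0 + e * math.sin(omega))**2))
    dc_dcosi = abs(b**2 / (cos_i * chord))
    return math.log(dv_de * dc_dcosi)

# Example: weight to add to the log-likelihood of a proposed step
print(log_jacobian_weight(e=0.3, omega=math.radians(60.0),
                          b=0.4, chord=1.02, cos_i=0.02))
```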
To confirm our corrected implicit priors reproduce our expected physical priors, we must first figure out what we expect. By using the transit chord as a parameter, we implicitly exclude non-transiting systems (for which the transit chord is imaginary) in a way that cannot be corrected.2 Fortunately, imposing such a prior is desired because it is a real selection effect that must be accounted for when characterizing the system. That is, eccentric systems are more likely to transit (Burke, 2008), and so, all else being equal, a planet we detect via transit should a priori be expected to be more eccentric than a planet detected via RVs. Such a prior is also practically required because it avoids the infinite volume of parameter space with identical likelihood (a non-transiting model light curve is a flat line for all planets) that is impossible for an MCMC to reasonably explore. In fact, whenever we fit a transit light curve, we explicitly add this limit anyway to avoid sampling the infinite volume of non-transiting parameter space. Footnote 2: Similarly, had we used \(\sqrt{1-b^{2}}\) as the transit chord, as is sometimes advocated for its simplicity, we would a priori exclude \(b>1\) grazing transits, which also cannot be corrected. However, whether or not the planet transits depends on \(a/R_{*}\) (which itself is derived from the planetary period, \(M_{*}\), \(R_{*}\), and \(M_{\rm P}\)), \(\cos i\), \(e\), and \(\omega_{*}\), and so imposing a prior that the planet must transit necessarily skews all of those priors away from uniform, even if they were sampled directly. That is the behavior we want, but it makes it difficult to verify that our corrected priors are sensible because they are not trivially uniform. Figure 3: The covariance between \(e\) and \(\omega_{*}\), as shown with contours in equal \(V_{c}/V_{e}\) for transit-only fits. The shaded regions are the same as Figure 1. So first, we must figure out what priors we actually expect. We created a dummy model where we fit the parameters \(e\), \(\omega_{*}\), and \(\cos i\), which are the parameters we expect to be biased by our reparameterization. We also fit \(M_{P}\), \(\log M_{*}/M_{\odot}\), \(R_{*}\), \(\log P\), and \(R_{P}/R_{*}\) so we can derive the impact parameter. In this simplified fit, the likelihood function is constant except when it exceeds our imposed boundaries for each parameter of \(0<e<1\), \(-\pi<\omega_{*}<\pi\), \(0<\cos i<1\), \(0.5<M_{*}/M_{\odot}<2\), \(1<M_{P}/M_{\oplus}<300\), \(0.5<R_{*}<2\), \(0.5<P/\mathrm{days}<100\), and \(0.01<R_{P}/R_{*}<0.1\), where the likelihood is set to 0 (and the step is always rejected). We then run our MCMC over this function. Thus, the posteriors we generate with no further constraints produce our priors. The parameters we are most concerned about are \(e\), \(\omega_{*}\), \(\cos i\), and in this simple example, they are stepped in directly and uniformly bounded, and so are trivially uniform, as shown as a black line in Figure 4. This fit is just a sanity check to show we are doing things correctly and it behaves as we expect. Next, we create an identical fit except we also reject steps that do not transit, \(b=a/R_{*}\cos i(1-e^{2})/\left(1+e\sin\omega_{*}\right)<1+R_{P}/R_{*}\). We also have all the necessary information to impose the limit that the star and planet should not collide during periastron, so we also reject steps where \(e>1-\left(R_{P}+R_{*}\right)/a\). 
This now includes the detection bias from \(e\), \(\omega_{*}\), and \(\cos i\) and is shown as a red line in Figure 4. These are the desired priors we wish to preserve. We note that the particular distribution of the priors shown here depends on the detailed bounds of the other parameters described above and is only intended as a demonstration. While these bounds were chosen to roughly span the planets _TESS_ is sensitive to, it should not be considered a general result, and these bounds should be far less restrictive in a fit with actual data. Then, we do the same fit, but stepping in our proposed re-parameterization: \(V_{c}/V_{e}\), \(L\cos\omega_{*}\), \(L\sin\omega_{*}\), \(C\), \(S\), \(M_{P}\), \(\log M_{*}/M_{\odot}\), \(R_{*}\), \(\log\left(P/\text{days}\right)\), and \(R_{P}/R_{*}\). We impose the same bounds as above after deriving the physical parameters, in addition to a few additional constraints that are required to bound the fit in the physical realm: \(L\cos{\omega_{*}}^{2}+L\sin{\omega_{*}}^{2}<1\), \(0<S<2\), and \(0<C<1+R_{P}/R_{*}\), \(0<V_{c}/V_{e}<2\). Then we rerun the fit. As expected, the resultant prior, shown in green, does not recover our desired prior (red) due to the change in parameterization. Finally, we run the same fit, but weight our likelihood by the Jacobian (eq 6). We see that this corrected prior (blue) is the same as our desired prior (red) - that is, our Jacobian correctly recovers our physical priors. ### Reparameterization summary In summary, for transit-only fits, we reparameterize \(T_{\text{C}}\), \(\sqrt{e}\cos\omega_{*}\), \(\sqrt{e}\sin\omega_{*}\), \(M_{\text{P}}\), and \(\cos i\), as \(T_{\text{T}}\), \(V_{c}/V_{e}\), \(C\), \(L\sin\omega_{*}\), \(L\cos\omega_{*}\), \(S\), and \(M_{\text{P}}\), enabling us to recover the physical parameters easily from well-behaved fitted parameters. Adding two additional, degenerate parameters to the fit may seem like a bad idea, but as long as we can map the fitted parameters to a likelihood, it is not fundamentally problematic, and the DE-MCMC or AI-MCMC algorithms deal with poorly constrained parameters like \(L\) and \(S\) much better than the curved degeneracy between \(\sqrt{e}\cos\omega_{*}\) and \(\sqrt{e}\sin\omega_{*}\), resulting in a dramatic improvement to the mixing times and accuracy of the result. There may be better ways to reparameterize the eccentricity in transit-only cases, but we note that common parameterizations in terms of transit duration (\(T_{FWHM}\) and \(\tau\), \(T_{14}\) and \(T_{23}\)) are poor choices owing to uncorrectable priors that a priori exclude physically allowed regions of parameter space and may significantly bias the inferred parameters from grazing transits (Carter et al., 2008) that require significant additional complexity to correct (Gilbert, 2022). Our reparameterization is fundamentally and importantly different than other transit duration parameterizations (e.g., Tingley and Sackett, 2005; Bakos et al., 2007; Kipping, 2010), because we compute an explicit \(e\) and \(\omega_{*}\) at each step and generate a projected Keplerian orbit without approximation, allowing us to recover a realistic constraint on the eccentricity from a transit alone, while also modeling a physical Keplerian orbit around a real star. ### Tests with simulated systems To test our ability to recover an accurate and precise eccentricity and demonstrate the advantage of our reparameterization, we simulated 330 planetary systems. 
Each system had 1 planet, and we randomly drew parameters uniformly distributed between \(0\leq e\leq 1\), \(0\leq\omega_{*}<2\pi\), \(0.5\leq M_{*}/M_{\odot}\leq 2\), \(-0.5\leq\left[\text{Fe}/\text{H}\right]_{0}\leq 0.5\), \(202<EEP<454\), \(0.001\leq M_{P}/M_{J}\leq 13\), \(\log\left(3\right)\leq\log P\leq\log\left(365\right)\) (systems with a single transit were allowed), and a value for \(\cos i\) that transits (including grazing transits and accounting for eccentricity). Notably, these simulated systems will over-represent high eccentricity planets, which are not actually distributed uniformly in \(e\), and long period planets, which are not actually detected uniformly in \(\log P\). We used the MIST relations to define self consistent simulated values for \(R_{*}\), \(T_{\text{eff}}\), and \(\left[\text{Fe}/\text{H}\right]\) based on the randomly drawn parameters, then created a TESS-like light curve, sampled at 2 minute cadence for 1 year with 20 ppm precision (i.e., a bright target in _TESS_'s continuous viewing zone). No correlated noise or data gaps were inserted. The times were converted from the implicit target frame to the "observed" (Solar System Barycenter) frame, accounting for the light travel time throughout the planet's orbit. Next, we set up fits using EXOFASTv2(Eastman et al., 2019) as closely as possible to what we would do for blind TESS follow up fit. We excluded the data outside of \(T_{\rm T}\pm T_{14}\) to speed up the fit. We imposed a prior on \(M_{P}\) and disabled the exoplanet mass-radius relation to avoid a potentially problematic degeneracy with the Chen & Kipping (2017) relation for \(\sim 1R_{J}\) planets, but this has a negligible impact on the inferred eccentricity. Understanding that a BLS search of the _TESS_ data will return reasonably accurate values for the transit time (\(T_{\rm T}\)), duration (\(T_{\rm FWHM}\)), period (\(P\)), and depth (\(\delta\)) of a transit, we use the exact values of those derived from the simulated parameters to initialize the simulated fits. We started each fit at the simulated transit time and \(R_{P}/R_{*}=\sqrt{\delta}\), which will be roughly correct for non-grazing systems, but systematically small for grazing systems, as one would likely do when modeling a real, unknown system. Rather than model the star with a simulated SED or MIST models, we imposed Gaussian priors on \(M_{*}\) and \(R_{*}\) equal to the simulated values with uncertainties of 3%, simulating a typical (systematics dominated) stellar constraint from spectroscopy and an SED. We also fixed the added variance to 0 and out of transit flux to 1. We started most fits at the simulated period, with \(i=90^{\circ}\), and the value of \(V_{c}/V_{e}\) that reproduces the observed transit duration. There are two classes of exceptions to this procedure. First, in the cases where a grazing geometry would require a non-physical eccentricity to reproduce the observed transit duration, we set the starting guess for \(e\) to 95% of its maximum physically allowed value (\(e=0.95(1-(R_{*}+R_{P})/a)\)). If the transit duration is shorter than the nominal circular orbit, we start \(\omega_{*}\) at \(\pi/2\) to minimize the transit duration given \(e\), and if the transit duration is longer than the nominal circular orbit, we start \(\omega_{*}\) at \(-\pi/2\) to maximize the transit duration given \(e\). Then, we assume \(b=1\) and derive the starting values for \(C\) and \(V_{c}/V_{e}\). 
Second, for single transit systems, we start at a circular orbit (\(e=0\)), a central crossing transit (\(i=90^{\circ}\)), and start the period at a value to match the transit duration, \(T_{\rm FWHM}\), \[P=\frac{\pi T_{\rm FWHM}^{3}G(M_{*}+M_{\rm P})}{4R_{*}^{3}}. \tag{7}\] Rather than remove the out of transit baseline as in fits with multiple transits, we include the entire lightcurve so that the out of transit baseline effectively sets a lower limit on the period. The exception to this exception is when the period implied by a circular orbit is already excluded by the out of transit data. Then, we set the period to the minimum allowed by the data and scale \(V_{c}/V_{e}\) as above, including the possibility of modifying the impact parameter for a duration that is still too small. We disabled the constraint from the Claret (2017) limb darkening tables, and fixed \(F_{0}=1\) and the transit variance to 0. We enable parallel tempering, run an unthinned 10,000 step preliminary MCMC, then restart the fit at the best-fit found among all MCMC links to optimize the starting position. We ran each of the final fits for 2.5 days on a supercomputer with 8 threads each, resulting in fits that typically do not pass our strict convergence criteria, but what we would usually consider reliable. We ran each fit two times, only changing the parameterization of the fit - once with our previous standard parameterization of \(\sqrt{e}\cos\omega_{*}\), \(\sqrt{e}\sin\omega_{*}\), and \(\cos i\) and again with our new default parameterization for transit-only fits, \(V_{c}/V_{e}\), \(L\cos\omega_{*}\), \(L\sin\omega_{*}\), \(S\), \(C\).3 Footnote 3: We actually ran 3 other combinations of parameterizations comparing the tradeoff between \(\cos i\) vs \(C\), and \(S\) vs letting \(L\) define the sign, but they were clearly inferior. Both parameterizations have statistically significant outliers in eccentricity and had some systems that were particularly poorly mixed (less than 50 independent draws or a Gelman-Rubin Statistic of greater than 1.5). However, the \(\sqrt{e}\cos\omega_{*}\) and \(\sqrt{e}\sin\omega_{*}\) parameterization was far worse - 55 of 330 (17%) systems had \(>3\sigma\) outliers, with the worst at 55 sigma discrepant. Figure 4: (a) - The implicit prior on \(e\) for an unconstrained fit that is uniform in \(\cos i\), \(\sqrt{e}\cos\omega_{*}\), and \(\sqrt{e}\sin\omega_{*}\) (black), a fit that is uniform in \(\cos i\), \(\sqrt{e}\cos\omega_{*}\), and \(\sqrt{e}\sin\omega_{*}\), but rejecting non-transiting systems (red), a fit that is uniform in transit chord, \(V_{c}/V_{e}\), \(L\sin\omega_{*}\), and \(L\cos\omega_{*}\) (green), and a fit that is uniform in transit chord, \(V_{c}/V_{e}\), \(L\sin\omega_{*}\), and \(L\cos\omega_{*}\), but corrected by the Jacobian to impose uniform priors (blue). That is, the red line is the desired prior, and its agreement with the blue demonstrates that we are correctly calculating and applying the Jacobian. See the text for details. (b) – Same as (a), but for \(\omega_{*}\). (c) – Same as (a), but for \(\cos i\). These fits typically did not travel far from their starting values and the uncertainties were significantly smaller relative to the fit of the same data with the new parameterization. While 121 (37%) of fits were particularly poorly mixed, the remaining were especially worrisome because they achieved reasonable values for the convergence statistics and showed no obvious signs of bias. 
This is a cautionary tale for why we must never trust convergence statistics alone. In contrast, the new parameterization only had 7 (2%) systems with \(>3\sigma\) outliers, the worst outlier of 7.4 sigma, and 7 systems that were poorly mixed. The two biggest outliers were both poorly mixed, single-transit systems, which we might expect to be problematic. However, the remaining 5 failures are all eccentric fits that just so happen to have \(V_{c}/V_{e}\sim 1\) where we systematically underestimate the eccentricity. We investigated each of these and found that the simulated values happen to lie in narrowly allowed regions of \(e-\omega_{*}\) space. Our sampling did find these solutions, and correctly reported their likelihood given the priors and the \(e-\omega_{*}\) degeneracy. We note that the prevalence of such systems in our simulations is exaggerated by the uniform draw in eccentricity we used to generate the simulated systems. In the real world, we expect low eccentricity systems to be far more common, and so outliers like this to be even rarer occurrences, with an accurate probability reflected in the posteriors. The acceptance rate for both parameterizations is still relatively poor, but the average 1.3% acceptance rate with the new parameterization is more than double the average acceptance rate of the old parameterization (0.6%). In addition, many of the rejected steps with the new parameterization are rejected immediately due to a non-physical eccentricity, without having to compute an expensive model, meaning the new parameterization evaluates \(\sim\)50% more models in the same amount of time. As a result, they typically reached much higher levels of mixing in the same amount of time (or mixed to the same level much faster). The results for eccentricity are summarized in Figure 5. We highlight systems we might expect to be problematic: where the fits were poorly mixed (\(50<T_{Z}<200\) or \(1.3<R_{Z}<1.5\)), the signal to noise was in the bottom 10%, the transit duration was consistent with circular (i.e., \(e\) and \(\omega_{*}\) were as degenerate as possible), a grazing geometry (i.e., inclination was poorly constrained), or a single transit (i.e., the period was poorly constrained). Typically, however, these problematic fits accurately reported an imprecise constraint. ## 4 Discussion This new parameterization opens the door to efficient and accurate ensemble analyses of transiting systems that include eccentricity. Even in cases where we have a poor constraint on the eccentricity, we are far better off incorporating that lack of knowledge into the covariant parameters rather than fixing the eccentricity to zero and biasing the inferred planetary parameters with a non-physical model. By including the eccentricity, the global model can self-consistently link the planetary and stellar models, enabling us to use the star to constrain the planet as we outline here, but also use the transit to constrain the star (Eastman et al., 2023) without any changes to the underlying model. Having the stellar radius sets a physical scale to the system, which also allows us to model the system in the proper reference frame - that of the target's barycenter. Work by J.D.E. was funded by NASA ADAP 80NSSC19K1014. The computations in this paper were run on the FASRC cluster supported by the FAS Division of Science Research Computing Group at Harvard University.
2309.14376
A Platform for Addressing Individual Magnetite Islands Grown Epitaxially on Ru(0001) and Manipulating Their Magnetic Domains
We have grown high-quality magnetite micrometric islands on ruthenium stripes on sapphire through a combination of magnetron sputtering (Ru film), high-temperature molecular beam epitaxy (oxide islands), and optical lithography. The samples have been characterized by atomic force microscopy, Raman spectroscopy, X-ray absorption and magnetic circular dichroism in a photoemission microscope. The magnetic domains on the magnetite islands can be modified by the application of current pulses through the Ru stripes in combination with magnetic fields. The modification of the magnetic domains is explained by the Oersted field generated by the electrical current flowing through the stripes underneath the magnetite nanostructures. The fabrication method is applicable to a wide variety of rock salt and spinel oxides.
Sandra Ruiz-Gómez, Eva María Trapero, Claudia Fernández-González, Adolfo del Campo, Cecilia Granados-Miralles, José Emilio Prieto, Muhammad Waqas Khaliq, Miguel Angel Niño, Michael Foerster, Lucía Aballe, Juan de la Figuera
2023-09-24T12:46:07Z
http://arxiv.org/abs/2309.14376v1
A Platform for Addressing Individual Magnetite Islands Grown Epitaxially on Ru(0001) and Manipulating Their Magnetic Domains ###### Abstract We have grown high-quality magnetite micrometric islands on ruthenium stripes on sapphire through a combination of magnetron sputtering (Ru film), high-temperature molecular beam epitaxy (oxide islands), and optical lithography. The samples have been characterized by atomic force microscopy, Raman spectroscopy, X-ray absorption and magnetic circular dichroism in a photoemission microscope. The magnetic domains on the magnetite islands can be modified by the application of current pulses through the Ru stripes in combination with magnetic fields. The modification of the magnetic domains is explained by the Oersted field generated by the electrical current flowing through the stripes underneath the magnetite nanostructures. The fabrication method is applicable to a wide variety of rock salt and spinel oxides. Cryst. Growth Des. 2023, 23, 5785-5791 ## 1 Introduction The quality of materials can severely impact their properties. This truism has been thoroughly proven by the microelectronics industry, where the current capabilities lean on the ability to grow compositionally controlled materials with extremely low defect densities. In other areas, there is still a great deal of margin for improvement. Such is the case of spintronics, where often polycrystalline materials are used. This is reasonable, as many properties are averaged over larger scales. For example, the magnetic domain walls are often thicker than the polycrystalline grain size. However, there are examples in which this is not the case. For example, skyrmion motion is affected by defects[1] and the ability to confine and control spin waves at the edges of nanostructures is likely to require atomically perfect materials.[2] In the past few years we have explored the growth of several ferrimagnetic and antiferromagnetic oxides of high crystalline quality by means of oxygen-assisted high-temperature molecular beam epitaxy on single-crystal Ru(0001) substrates.[3] This method has been successfully used to obtain atomically flat micrometer-wide and nanometer-high triangular islands of several ferrimagnetic spinel ferrites, among them iron spinel (i.e., magnetite),[4] cobalt ferrite,[5] and nickel ferrite,[6] as well as antiferromagnetic Ni\({}_{x}\)Co\({}_{1-x}\)O[7] and Ni\({}_{x}\)Fe\({}_{1-x}\)O islands of similar quality,[8] in addition to rare-earth oxides such as ceria[9] and praseodymia.[10] The magnetic oxides grown in such a way present magnetic domains in remanence which are orders of magnitude larger than those typically observed in thin films. This is attributed in part to the lack of antiphase boundaries,[11] as each of the islands arises from a single nucleus and the growth process is stopped before coalescence. In the case of magnetite, magnetic closure domains dictated by shape anisotropy have been observed[4] and modified through the application of external magnetic fields.[12, 38] The use of single-crystal bulk Ru(0001) substrates can be substituted by thin Ru films deposited on insulating substrates, as proved by the growth of ceria[13] and graphene[14] on such films. We have recently characterized those thin films as substrates[15] and found them to be of a quality comparable to that of bulk single crystals. Furthermore, the quality of oxide islands grown on the films is similar to that of those grown on single crystals. 
In particular, magnetite crystals show a Verwey transition as detected by Raman spectroscopy.[16] One advantage of such Ru films is that they can be removed by standard etchants developed by the microelectronics industry.[17] We thus propose that a powerful platform for the implementation of electrical control of high-quality nanostructures of magnetic oxides is their growth by high-temperature oxygen-assisted molecular beam epitaxy on thin films of ruthenium, with a final step of optical lithography to define conductive paths on an otherwise insulating substrate. In the present work, we present validation for such a platform for the specific case of magnetite islands. After growing them on thin Ru films as substrates, we lithographically defined stripes in the metal. We then checked that the magnetite islands were unaffected by the procedures of etching and removing the resist. Finally we demonstrated the modification of the magnetic domains of the nanostructures by several means, as observed by X-ray magnetic circular dichroism (XMCD) in photoemission microscopy (PEEM), thus validating our approach. ## 2 Experimental Methods Ru films have been grown, following reports by several groups that indicated epitaxial growth,[13, 14, 15] by direct-current magnetron sputtering in a home-made sputtering chamber with a base pressure of 10\({}^{-6}\) mbar. The sapphire substrates, with the (0001) orientation, 99.998% pure, and polished to 0.3 nm, were provided by Siegert Wafer. The growth, using 99.95% Ru targets from EvoChem, was performed with a typical power of 30 W over 5\(-\)10 min with a sample-to-target distance of 10 cm. The sapphire substrates were heated to 500 \({}^{\circ}\)C before and during the growth. Magnetite islands have been grown in ultrahigh-vacuum chambers equipped with low-energy electron microscopy (LEEM), which permits the observation of the growth front in real time. We have used the Elmitec LEEM III instrument at the Instituto de Quimica Fisica Blas Cabrera as well as a similar instrument at the CIRCE station of the Alba synchrotron.[19] The samples were degassed at temperatures up to 1000 \({}^{\circ}\)C. After such a procedure the Ru films usually present a sharp (1 \(\times\) 1) low-energy electron diffraction (LEED) pattern that corresponds to a well-ordered Ru(0001) surface.[13] In cases where other LEED patterns were observed, a mild sputtering cycle with Ar ions at 1 keV was performed, and the annealing step was repeated. The magnetite islands were grown following our established protocol,[40, 20] by depositing iron in a background molecular oxygen pressure of 1 \(\times\) 10\({}^{-6}\) mbar at a substrate temperature of 900 \({}^{\circ}\)C. The iron source was a 5 mm diameter iron rod heated by electron bombardment within a water jacket. Typical deposition rates were about 15 min per Fe layer. The samples were spin-coated with a high-contrast AZ 1512 HS photoresist with a typical thickness of 1.2 \(\mu\)m. A lithographic pattern was defined by exposure in a laser lithography system and developed, yielding a multistriped pattern of narrow channels 10\(-\)30 \(\mu\)m wide and 500 \(\mu\)m long which gradually widened over 1 mm to a width of 100 \(\mu\)m. 
To etch the noncovered areas, we employed a solution of 9 wt % of acetic acid and 22 wt % ceric ammonium nitrate in water.[41, 42] After the growth of the magnetite islands, the sample was coated with a photoresist, and a stripe pattern was projected onto the sample and developed (the protocol is summarized in Figure 1a). The etchant used, designed for Ru, efficiently removed the Ru in the exposed regions. However, the etchant also damaged the resist in the covered areas to the point that solvents such as acetone and pyrrolidone were not effective in removing it. Instead, we have successfully used piranha solution (\(\mathrm{H_{2}SO_{4}}\) + \(\mathrm{H_{2}O_{2}}\)) to such an end. The samples, one of which is shown in Figure 1, were contacted by wire-bonding with 100 \(\mu\)m Al wire to a printed circuit board (PCB) mounted in an Alba sample holder which included a coil for generating a magnetic field in the plane of the PCB.[21] The resistance of the individual stripes was about 300 \(\Omega\), which is in good agreement with the Ru resistivity[3] and the channel geometry considering a Ru stripe height of 10 nm. A setup for generating and measuring current pulses was mounted inside the high-voltage rack of the PEEM microscope. It included a 40 V/60 ns pulse generator from AVTECH electrosystems, Model AVI-MP-P, a custom-made polarity switch box, the device itself, and a 50 \(\Omega\) resistor to ground. The shape of the pulses both before the device and between the 50 \(\Omega\) resistor and the device was monitored with a Tektronix TPS 2048S digital oscilloscope. The pulses were applied with the high voltage of the microscope switched off to prevent damage to the stripes. ## 3 Results and Discussion The Ru films grown by magnetron sputtering on Al\({}_{2}\)O\({}_{3}\)(0001) single crystals usually have a high density of steps, although some authors like Sauerbrey et al.[13] reported that such films can have even a lower density of steps than well-prepared single crystals. Upon Fe deposition under an oxygen atmosphere, the Ru film is first covered by a FeO(111) wetting layer whose thickness is two atomic layers for the conditions we used,[24, 25, 26, 27, 28] followed by the growth of 3-dimensional magnetite islands with thicknesses ranging from a few nanometers to a hundred nanometers and a lateral extension of several micrometers.[20] Figure 1: (a) Schematic of the growth procedure. From left to right: (I) growth of a Ru film by magnetron sputtering, (II) growth of magnetite islands by high-temperature oxygen-assisted molecular beam epitaxy, and (III) lithographic definition of stripes. (b) Optical microscopy of the sample with the developed resist on top. (c) Sample mounted in the holder that allows the application of external magnetic fields. Depending on the particular details of the growth temperature and postprocessing of the samples (like further annealing steps), the magnetite islands either extend down to the Ru substrate or sit on top of the FeO wetting layer. Our goal here is to establish that the islands correspond to magnetite, so we refer the reader to the published works on this subject.[21, 26] The density of islands on the Ru films is comparable to that of magnetite islands grown on single-crystal Ru substrates. It is likely that the presence of the wetting layer decouples the nucleation of the magnetite islands from the local density of the steps to some extent. We have also tried to grow the oxide islands on prepatterned substrates. 
However, we obtained a much higher density of smaller islands, likely due to contamination introduced during the processing steps. The resultant devices were characterized by atomic force microscopy, by Raman spectroscopy and by X-ray absorption spectroscopy (XAS) in PEEM. The microscope images presented in Figure 2a,b show triangular islands on stripes which are elevated by 13 nm, while the islands on top have a typical thickness of 30 nm. Presumably, the elevated areas correspond to the Ru stripes on which the triangular islands (magnetic crystals) have grown and the deeper regions correspond to the bare sapphire areas. Figure 3: (a) XAS spectroscopy image of a Ru stripe and the sapphire around it, acquired at a photoelectron energy near the maximum of the L\({}_{3}\) white line. The field of view is 10 \(\mu\)m. (b) The same area presenting the XMCD image. (c) XAS and XMCD spectra acquired on a single domain of a magnetite island. Figure 2: (a) Optical microscopy image of a Ru stripe with magnetite triangular islands. (b) Atomic force microscopy images of the area marked in (a) with a red square. The locations where the Raman spectra in c) were acquired are marked with circles of the same color as each spectrum. (c) Raman spectra acquired in the different regions of the film. From bottom to top: spectra on a ruthenium stripe (blue), on the bare sapphire (gray), on an island on sapphire (black), and on an island on the Ru stripe (green). The islands present a range of different heights, from 20 nm to more than 40 nm. However, there are also islands detected directly on the sapphire. Raman spectra are shown in Figure 2c. They were acquired respectively on the Ru stripe outside of any island (blue spectrum), on the bare sapphire areas outside of the islands (gray spectrum), on an island on the bare sapphire (black spectrum), and on an island on the Ru stripe (green spectrum). The sapphire areas only show a very small peak near 420 cm\({}^{-1}\), characteristic of single-crystal \(\alpha\)-Al\({}_{2}\)O\({}_{3}\)[29]. The Ru stripe shows a peak at 192 cm\({}^{-1}\) which corresponds to the E\({}_{2g}\) transverse optical phonon from the shear motion of the two sublattices of the hcp structure [30]. All of the islands present several peaks, the most prominent of which is that at 665 cm\({}^{-1}\). Two other peaks appear at 310 and 535 cm\({}^{-1}\). All these peaks arise from the magnetite spinel structure and are assigned to the A\({}_{1g}\) mode, and to two of the T\({}_{2g}\) modes [16, 31, 13]. This is an indication that the lithographic steps did not destroy the magnetite structure of the islands. This is true not only for the islands that were protected during the etching of the Ru (green spectrum) but also for the islands that were exposed in order to remove the Ru film (black spectrum). The fact that the magnetite spinel modes are detected on both types of islands proves that the magnetite islands not only survived the brief piranha immersion employed to remove the remains of the resist but also, in the case of islands sitting on sapphire, survived the Ru etching agent. On the other hand, that the Ru mode is detected in both types implies that whether covered by the resist or by the magnetite islands, the underlying Ru is protected. In the former case, this is the intended outcome and it is achieved by using a resist with a thickness of over 1 \(\mu\)m. 
However, in the latter case, this indicates that magnetite is also effective in protecting the underlying Ru even if the islands are a few tens of nanometers thick. This is consistent with the islands on the bare sapphire being apparently thicker than those on Ru by an amount similar to the Ru thickness. Whether the Ru underlying the magnetite islands survives might be related to the particular etching times employed. The islands sitting on sapphire in another sample, which was etched for a longer time, did not show the Ru peak underneath. The devices have also been characterized by XAS, using the iron L\({}_{3,2}\) absorption edges, as shown in Figure 3. In the XAS image, triangular-shaped magnetite islands are detected both on the Ru stripes and on the sapphire substrate. The Ru stripe appears dark in Figure 3a due to work function differences. The islands present sides oriented along the compact directions of the Ru(0001) surface, as ascertained by LEED (not shown). In the images, the Ru stripes are aligned at 45\({}^{\circ}\) with respect to the horizontal direction. In Figure 3c we show the averaged XAS spectrum acquired at a magnetic single domain of a triangular island with opposite circular polarizations of the X-rays, together with the corresponding XMCD spectrum. We first note that the XAS spectrum is characteristic of magnetite [33], thus confirming the observation by Raman spectroscopy that the magnetite islands have survived all the steps involved in the lithography process. We also note that the XAS observation implies that there is not a large "dead" surface layer in the magnetite islands. While the Raman signal contains the contribution of the full thickness of the islands, which is in the range of tens of nanometers, XAS-PEEM is far more surface sensitive. While the precise mean free path of very low-energy electrons in oxides applicable to our measurement kinetic energy of 1 eV is still under discussion, experimental values in magnetite are in the range from 5 nm [34] to 1.4 nm [35]. To image the magnetic domains we make use of the XMCD effect: XAS images were acquired with opposite circular polarizations and then subtracted pixel-by-pixel. There are several energy ranges at the L\({}_{3}\) edge which provide magnetic contrast, as shown in Figure 3c. The XMCD spectrum consists of two negative peaks separated by a positive one. The former are considered to arise mainly from Fe\({}^{2+}\) and Fe\({}^{3+}\) in octahedral positions, respectively, while the latter corresponds to Fe\({}^{3+}\) in a tetrahedral environment. The XMCD images presented in this work have been recorded at the first negative peak. Thus, they map the local magnetization in magnetite, which corresponds to the direction of the magnetic moment of the Fe\({}^{2+}\) iron atoms at octahedral positions. The magnetic contrast image of the region displayed in Figure 3a is shown in Figure 3b. The images acquired provide only the component of the magnetization along the X-ray beam incidence direction, which is orthogonal to the stripe orientation (measuring along different azimuthal sample orientations can be performed to allow the reconstruction of the full magnetization vector [4]). Thus, the white areas correspond to domains with the magnetization pointing along the incoming X-ray beam direction and black ones in the opposite direction. Gray areas indicate either no net magnetization or a magnetization along a direction perpendicular to the incoming X-ray direction: i.e., along the stripe axis. Figure 4: (a) Schematic of the directions of the Ru stripes relative to the applied external magnetic field, and the direction of the X-ray beam. (b, c) XMCD-PEEM images after applying a magnetic field of 45 mT (b) or 39 mT (c) in the directions indicated in each image. The field of view of the images is 10 \(\mu\)m. 
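The pixel-by-pixel contrast computation described above is simple enough to sketch; the routine below is our own minimal illustration, not the analysis code used for the published images, and the asymmetry normalization is a common convention that we assume here (the text itself only specifies a subtraction).

```python
import numpy as np

def xmcd_contrast(i_plus, i_minus, normalize=True, eps=1e-12):
    """Pixel-by-pixel XMCD contrast from two XAS images taken with opposite
    circular polarizations. If normalize is True, the difference is divided
    by the sum (an asymmetry image); otherwise the plain difference is returned."""
    i_plus = np.asarray(i_plus, dtype=float)
    i_minus = np.asarray(i_minus, dtype=float)
    diff = i_plus - i_minus
    return diff / (i_plus + i_minus + eps) if normalize else diff

# Toy 4x4 example with a dichroic signal confined to one corner
rng = np.random.default_rng(0)
base = 1.0 + 0.01 * rng.standard_normal((4, 4))
signal = np.zeros((4, 4))
signal[:2, :2] = 0.05
print(xmcd_contrast(base * (1 + signal), base * (1 - signal)).round(3))
```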
Gray areas indicate either no net magnetization or a magnetization along a direction perpendicular to the incoming X-ray direction, i.e., along the stripe axis. To check the possibility of manipulating the magnetic domains with an external magnetic field, we mounted a device in a special sample holder with a mini-electromagnet that allows application of an in-plane magnetic field.[22] The geometry and directions of the X-ray beam relative to the applied magnetic field are shown in the schematic in Figure 4. We note that, as we have discussed for spinel islands grown on Ru(0001), there are two different types of domain patterns: for very thin islands, the domain walls are often pinned by the defects induced by the substrate steps,[36] while taller islands tend to have domains governed by shape anisotropy.[4] The saturation magnetization of the magnetite islands grown on Ru is expected to be of a magnitude similar to that of the bulk material, as estimated from comparisons of the domain wall width with micromagnetic simulations.[4] Regarding the magnetic fields required to modify their domains, to the best of our knowledge there are no measurements of hysteresis cycles of individual islands. Nanometer-thick magnetite films on Ru(0001) have a reported coercive field around 30 mT.[37] Our own research on magnetite islands grown on bulk Ru crystals has shown that fields of a few mT are enough to modify the shape anisotropy patterns in a reversible way[38] and that fields of 40-50 mT are needed to modify their magnetization patterns in remanence.[12]

Figure 4: (a) Schematic of the directions of the Ru stripes relative to the applied external magnetic field, and the direction of the X-ray beam. (b, c) XMCD-PEEM images after applying a magnetic field of 45 mT (b) or 39 mT (c) in the directions indicated in each image. The field of view of the images is 10 \(\mu\)m.

The image in Figure 4b corresponds to the domains observed after applying a magnetic field of 45 mT,[12, 22] while the next panel shows the domains observed after reversing the applied magnetic field. Many of the smaller islands observed are single-domain, while the larger ones contain several magnetic domains. Even in the latter case, the majority of the domains of each island point in the direction of the applied field. After confirming the effect of applying an external magnetic field to the magnetite islands, we first "reset" the magnetic configuration by applying a large magnetic field as shown in Figure 4b, obtaining the initial configuration shown in Figure 5a. We then find the highest magnetic field that does not modify the magnetic configuration of the larger island (Figure 5c) and the lowest field that does modify the configuration (Figure 5d). We found that in the central island a field larger than 15 mT is required to change the domains. Finally, we use the former as a bias and apply electrical pulses through the Ru stripe underneath the island. For a current of 0.13 A, the estimated Oersted field is 4 mT. Depending on the polarity of the pulses, the orientation of the Oersted field changes. If the Oersted field and the magnetic field applied by the sample holder coil are antiparallel so that the net field is smaller, no change is observed in the island (Figure 5e). If they are parallel, they add, the combined magnetic field is above the 15 mT threshold, and changes in the domains are observed (Figure 5f).
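As a rough consistency check, the field generated just above the centre of a wide, thin current-carrying stripe can be approximated by the infinite-sheet result \(B\approx\mu_{0}I/(2w)\). The short sketch below uses an assumed stripe width (the width is not quoted in the text; the \(\sim\)13 nm thickness is taken from the AFM step height) and is meant only as an order-of-magnitude illustration, not as the authors' calculation.

```python
# Order-of-magnitude estimate of the Oersted field above a thin current-carrying stripe.
# The stripe width below is an assumed value used only for illustration; the ~13 nm
# thickness follows the step height measured by AFM.
import math

mu0 = 4 * math.pi * 1e-7   # vacuum permeability (T*m/A)
I = 0.13                   # pulse current (A)
w = 17e-6                  # assumed stripe width (m)
t = 13e-9                  # Ru film thickness (m), from the AFM step height

# Current density through the stripe cross-section
J = I / (w * t)            # A/m^2

# Infinite-sheet approximation: B ~ mu0 * (I/w) / 2 just above the stripe centre
B = mu0 * I / (2 * w)      # T

print(f"current density ~ {J:.1e} A/m^2")    # ~6e11 A/m^2 for these inputs
print(f"Oersted field   ~ {B * 1e3:.1f} mT") # a few mT, comparable to the quoted value
```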
The current density produced by the pulse is estimated from the total current and the stripe dimensions to be \(6\times 10^{11}\) A/m\({}^{2}\). The Oersted field is the most straightforward method of modifying the magnetic domains in a nanostructure on top of a conductive stripe.[39, 40] In this sense, our results are not unexpected. However, we stress that the main point we present is that magnetic domains can be switched in a device where the shape of the nanostructures is defined by growth and not by methods such as focused ion beam or lithography and thus offers the promise of obtaining nanostructures with atomically perfect edges for future studies of magnetic domain manipulation in epitaxial oxide materials. Figure 5: (a) Initial configuration obtained after applying a magnetic field of 45 mT in the direction indicated. (b) Schematic of the stripe. (c) Image after applying in (a) a magnetic field of 14.4 mT in the opposite direction (no changes are observed in the large island). (d) After applying a slightly higher field (15.6 mT) changes are detected in the large island. (e) Image acquired after simultaneously applying a magnetic field of 14.4 mT and one 60 ns pulse of positive polarity. No changes are observed. (f) Configuration after applying the same magnetic field and a negative polarity pulse. Changes are observed, similar to those with a higher magnetic field (d). ## Conclusions We have grown high-quality magnetite islands on a Ru film deposited on sapphire and we have defined stripes lithographically on the system. The magnetite islands survive the etching process, as proved by microspot Raman spectroscopy and atomic force microscopy. The islands also show the expected X-ray absorption spectra of magnetite. Magnetic domains can be observed on them by XMCD-PEEM. The domains can be modified by the application of an external magnetic field of a magnitude similar to that required for islands grown on Ru single crystals. Injecting current through the Ru stripes underneath the magnetite islands produces changes in selected islands when the Oersted field of the current pulses adds to the applied external magnetic field. This validates the oxide islands grown by molecular beam epitaxy on Ru films deposited on sapphire as an excellent platform to study the switching effects of currents on oxide structures, both ferrimagnetic and (in the future) antiferromagnetic. 
## Authors

* Sandra Ruiz-Gómez -- _Max-Planck-Institut für Chemische Physik fester Stoffe, Dresden 01187, Germany_
* Eva María Trapero -- _Instituto de Química Física Blas Cabrera (IQF), CSIC, Madrid 28006, Spain_
* Claudio Fernández-González -- _Max-Planck-Institut für Chemische Physik fester Stoffe, Dresden 01187, Germany_
* Adolfo del Campo -- _Instituto de Cerámica y Vidrio, CSIC, Madrid 28049, Spain_
* Cecilia Granados-Miralles -- _Instituto de Cerámica y Vidrio, CSIC, Madrid 28049, Spain_
* José Emilio Prieto -- _Instituto de Química Física Blas Cabrera (IQF), CSIC, Madrid 28006, Spain_
* Muhammad Waqas Khaliq -- _Alba Synchrotron Light Facility, Barcelona 08290, Spain_
* Miguel Ángel Niño -- _Alba Synchrotron Light Facility, Barcelona 08290, Spain_
* Michael Foerster -- _Alba Synchrotron Light Facility, Barcelona 08290, Spain_
* Lucía Aballe -- _Alba Synchrotron Light Facility, Barcelona 08290, Spain_

Complete contact information is available at: [https://pubs.acs.org/10.1021/acs.cgd.3c00388](https://pubs.acs.org/10.1021/acs.cgd.3c00388)

## Author Contributions

The experiments were planned by J.d.l.F. and S.R.-G. The Ru films were grown by E.M.T. The magnetite islands were grown by S.R.-G. and J.E.P. The tracks were etched by S.R.-G. and C.F.-G. The experiments were performed by J.d.l.F., S.R.-G., C.G.-M., C.F.-G., L.A., M.A.N., and M.F. J.d.l.F. wrote the manuscript with input and revisions from all the authors.

## Notes

The authors declare no competing financial interest.

## Acknowledgments

This work was supported by Grants PID2021-124585NB-C31, RTI2018-095303-B-C51, RTI2018-095303-B-C53, and TED2021-130957B-C54 funded by MCIN/AEI/10.13039/501100011033, by "ERDF A way of making Europe", by the "European Union NextGenerationEU/PRTR", and by the Grant S2018-NMT-4321 funded by the Comunidad de Madrid and by "ERDF A way of making Europe". We thank María Acebrón from IMDEA Nanoscience for her assistance with the optical lithography and etching steps. C.G.-M. acknowledges financial support from grant RYC2021-031181-I funded by MCIN/AEI/10.13039/501100011033 and by the "European Union NextGenerationEU/PRTR".
2309.04165
Pair-Interactions of Self-Propelled SiO2-Pt Janus Colloids: Chemically Mediated Interactions
Driven by the necessity to achieve a thorough comprehension of the bottom-up fabrication process of functional materials, this experimental study investigates the pair-wise interactions, or collisions, between chemically active SiO2-Pt Janus Colloids. These collisions are categorized based on the Janus colloids' orientations before and after they make physical contact. In addition to the hydrodynamic interactions, the Janus colloids are also known to affect each other's chemical field, resulting in chemophoretic interactions, which depend on the degree of surface anisotropy in reactivity and solute-surface interaction. These interactions lead to a noticeable decrease in particle speed and changes in orientation that correlate with the contact duration and yield different collision types. Our findings reveal distinct configurations of contact during collisions, whose mechanisms and likelihood are found to depend primarily on the chemical interactions. Such estimates of collisions and their characterization in dilute suspensions shall have a key impact in determining the arrangement and time scales of dynamical structures and assemblies of denser suspensions, and potentially the functional materials of the future.
Karnika Singh, Harishwar Raman, Shwetabh Tripathi, Hrithik Sharma, Akash Choudhary, Rahul Mangal
2023-09-08T07:14:06Z
http://arxiv.org/abs/2309.04165v3
# Pair-Interactions of Self-Propelled SiO\({}_{2}\)-Pt Janus Colloids: ###### Abstract Driven by the necessity to achieve a thorough comprehension of the bottom-up fabrication process of functional materials, this experimental study investigates the pair-wise interactions or collisions between chemically active SiO\({}_{2}\)-Pt Janus Colloids. These collisions are categorized based on the Janus colloids' orientations before and after they make physical contact. In addition to the hydrodynamic interactions, the Janus colloids are also known to affect each other's chemical field, resulting in chemophoretic interactions, which depend on the degree of surface anisotropy in reactivity and solute-surface interaction. These interactions lead to a noticeable decrease in particle speed and changes in orientation that correlate with the contact duration and yield different collision types. Our findings reveal distinct configurations of contact during collisions, whose mechanisms and likelihood is found to be dependent primarily on the chemical interactions. Such estimates of collision and their characterization in dilute suspensions shall have key impact in determining the arrangement and time scales of dynamical structures and assemblies of denser suspensions, and potentially the functional materials of the future. + Footnote †: These authors contributed equally to this work. + Footnote †: These authors contributed equally to this work. ## I Introduction Artificial micro-swimmers utilize self-generated gradients to move autonomously. Due to their distinct motion characteristics that deviate from the random thermal fluctuations of Brownian motion, and their heightened responsiveness to their surroundings, they can serve as potential agents of drug delivery, water remediation, and microscale mixing in futuristic technologies [1; 2; 3; 4]. These synthetic swimmers are widely recognized for their ability to emulate key aspects of the locomotion observed in biological micro-swimmers, making them valuable models for studying biological micro-swimming.The last couple of decades have witnessed a thrust of both numerical and analytical studies focusing on mainly two kinds of artificial swimmers. One is (i) Janus colloids: utilizing the in-built asymmetry in their physico-chemical composition, they successfully break the fore-aft symmetry to generate the necessary solutal gradients for their diffusio-phoretic propulsion.[5] Most commonly used system is of SiO\({}_{2}\)-Pt Janus colloids performing active motion in aqueous H\({}_{2}\)O\({}_{2}\) solution due to a mechanism known as _self-diffusiophoresis_.[6; 7] Other similar mechanisms such as _self-thermophoresis_[8] and _self-electrophoresis_[9] that use self-generated temperature and electric/ionic gradients respectively, are also well-known. The other type consists of (ii) Swimming droplets: isotropic droplets of one fluid dispersed in another immiscible fluid which utilize spontaneous asymmetry in the interfacial tension (\(\gamma\)) through mechanisms such as a change in surfactant activity via interfacial reactions[10; 11; 12] or adsorption-depletion of surfactants triggered by micellar solubilization.[13; 14; 15; 16] The gradient in interfacial tension results in Marangoni stress at the interface that drives the fluid from low \(\gamma\) toward high \(\gamma\), resulting in droplet propulsion. Unlike random Brownian motion, individually, each class of artificial swimmers exhibits ballistic movement in a specific direction over short time intervals. 
However, at long time scales, transition to random motion is observed due to orientational thermal fluctuations.[6; 17; 13] This long-time transition to random motion remains a major challenge in the successful implementation of these artificial micro-swimmers in various applications. Therefore, in the recent past, to gain better insights about their motion and to incorporate directionality, several studies have tried to understand the influence of various factors such as external flow,[8; 18; 19; 20; 21; 22] presence of chemical solutes,[23; 24; 25; 26; 27; 28; 29] the impact of fixed topographical features,[30; 31; 32; 33] presence of tracers,[34; 35; 36] or tuning the viscosity of the surrounding medium.[37] These artificial micro-swimmers have also been shown to form unique dynamic self-assembibles that can naturally transform into new phases or structures.[17; 38; 39; 40; 41; 42; 43; 44; 45; 46]. Together they can work to accomplish tasks that a single swimmer may not be capable of achieving in isolation [47]. Study of their collective behavior is also expected to provide useful insights into biological flocking/swarming,[17; 48; 49] predator-prey interactions[50], formation of bacterial colonies and multicellular bodily responses, [51] and inter-cellular communications [52]. The collective functionality of the swimmers stems from the inter-swimmer(s) interactions. In biological swimmers, besides chemical signaling, the interactions are mostly dominated by the flow-field of the individual swimmers,[53] captured by the well-known squirmer model, [54; 55; 56] coupled with steric alignments.[57; 58; 59] In artificial micro-swimmers too, hydrodynamic interactions govern most of the interactions, leading to several interesting phenomena, including viscosity reduction in suspensions,[60] synchronized motion, [61] and pattern formation.[17] The collective functionality of the chemically powered micro-swimmers can also be tuned by comprehending the interactions of underlying constituents, which is the subject of studies performed in the past and have been highlighted in Table 1. In one of the first studies, Palacci _et al._ performed an equilibrium characterization via the classic 'Jean-Perrin' sedimentation experiment [62]. In addition to the activity-Temperature analogy, through their subsequent experiments on Pt\(-\)Au colloids, Theurkauff _et al._ found an onset of chemotactic aggregation and dynamic clusters at semi-dilute concentrations; their Keller-Segel-type model found that chemical interactions governed the clustering [63]. Later, Ginot _et al._ provided a thermodynamic description of the cluster formation and reported that their evolution and kinematics were dictated by an effective adhesion energy spawning predominantly from chemical interactions [39; 64]. Using a Langevin description, Saha _et al._ built a coarse-grained framework for studying chemotactic aggregation in diffusion- & reaction-limited regimes [65]. The former condition exhibits clumping patterns and fluctuations, whereas the latter yields long-ranged instabilities. In parallel, Pohl & Stark performed 2D Brownian dynamics simulations of dry active suspensions interacting chemically, that influence their translational and rotational mobilities [66; 67]. They showed that only when the modification to these two mobilities counteract (particles drifting towards a chemical sink, but propulsion axis turned away), a chemotactic collapse can be avoided and the dynamic clustering can be observed. 
In addition to the aforementioned experiments and simulations, a series of theoretical studies in the continuum framework revealed a complex interplay of chemo-hydrodynamic interactions: a set of chemical and hydrodynamic signals, generally simplified and decoupled in the low fluid advection limit. These fields get disturbed whenever a swimming colloid is in the vicinity of another colloid's signals, which breaks the symmetry of viscous stresses and invokes an additional kinematic response. In their semi-analytical study on pair interactions, Sharifi-Mood _et al._ showed that assuming uniform mobility on the Janus colloid's surface, depending on the incidence angle and catalyst coverage, pairwise interactions between solute emitting Janus colloids can result in their pairing or escape [68]. This assumption was relaxed by the more recent works by Michelin and co-workers[70; 71; 73], and provided two generalized frameworks for semi-dilute suspensions: one based on the method of reflections [70] and another on force-coupling method based on multipole expansion of chemo-hydrodynamic signals [71]. A parallel theoretical study by Saha _et al._[69] constructed a directory of possible non-reciprocal pair interactions by exploring the parameter space of modifications to translational and rotational motion in a quasi-2D space. Dominated by chemical interactions, a dimer state (mutually chemotactic) and four dynamical states emerged from combinations of chemotactic-anti chemotactic interactions: attractively scattered, repulsively scattered, chasing, and orbits. Despite this recent progress elucidating interactions among H\({}_{2}\)O\({}_{2}\) fueled active Janus Particles (JPs), detailed experimental investigations on the isolated pair-wise interactions have been missing. In the decades-old context of inertially sedimenting particles[74], meticulous investigation on the approach and departure of pair-collisions has facilitated recent insights into large-scale particle-laden turbulent flows[75][76]. In the same spirit, here we undertake an experimental examination, to carefully explore the approach, contact and departure during the pair-wise interactions of SiO\({}_{2}\)-Pt JPs constrained to move in a 2D (_x-y_) plane. The concentration of the JPs is kept very low to avoid clustering and cross-interactions among different pairs. Very recently, Sharan _et al._ studied pair-interactions of SiO\({}_{2}\)-Cu and SiO\({}_{2}\)-Pt JPs on a 1-D crack and 2-D plane. For SiO\({}_{2}\)-Cu JPs the study demonstrated far-field repulsive interactions preventing contact. For SiO\({}_{2}\)-Pt JPs, they reported that the interactions are mostly steric in nature and no far-field hydrodynamic or chemical interactions were observed [72]. They also reported (_i_) no significant speed fluctuations during the interaction, (_ii_) 3-4 seconds of contact where JPs slide over each other, (_iii_) post-collision departure due to free in-plane thermal reorientations. In this study of SiO\({}_{2}\)-Pt JPs on a 2D plane, while we confirm the absence of far-field chemo-hydrodynamic interactions, we observe that the collision dynamics is not simply dictated by steric interactions, instead, the near-contact chemical interactions play a major role in determining both contact time and the manner in which they slide over each other. With detailed characterization, we demonstrate that depending on the approach orientation of the JPs, these interactions are capable of generating significant speed and orientation fluctuations. 
In the end, the experimental observations regarding the rotational motion of the JPs during the collision are supported by simple qualitative estimates of the chemical torque for various pair orientations. ## II Materials and Methods SiO\({}_{2}\)-Pt Janus particles (JPs) were synthesized using the drop-casting method as reported by Love _et al._[77] Briefly, a stock solution of 5 \(\pm\) 0.35 \(\mu\)m SiO\({}_{2}\) particles (Sigma-Aldrich, 5 wt.% solids) was diluted with deionized (DI) water (1:3 v/v) to prepare a 1 wt.% particle suspension. The suspension was drop-cast on a plasma-treated glass substrate. Plasma treatment was done using a plasma cleaner (PDC-32-G2, Harrick Plasma) in the presence of oxygen to make the glass slide hydrophilic, which assists in the uniform spreading of the particle suspension. A particle monolayer is formed on slow water evaporation, which was confirmed _via_ optical microscopy. A thin layer of platinum (Pt; thickness \(\sim\) 15 nm) was then deposited on the particles via DC magnetron sputtering using a bench-top sputter coater (BT150, Hind High Vacuum Co., HHV). Due to the self-shadowing effect, only the top half of the particles get coated, making them Janus. These JPs were then dispersed in DI water using sonication. To propel the JPs, an appropriate amount of 30 wt.% H\({}_{2}\)O\({}_{2}\) aqueous solution (Thermo Fisher Scientific) is added to the particle mixture to achieve an overall H\({}_{2}\)O\({}_{2}\) concentration of \(\sim\)3 wt.%. For visualization an optical cell is prepared by sticking a polydimethylsiloxane (PDMS) ring of height 2 mm and diameter 1 cm on a O\({}_{2}\)-plasma-treated glass slide. The cell is filled with the particle solution (with H\({}_{2}\)O\({}_{2}\)) and sealed with a cover slip. Due to the higher density of JPs, they sediment towards the bottom of the cell and propel in the 2D (\(x\)-\(y\)) plane. Particles are imaged in the bright-field mode using an inverted microscope (IX73, Olympus) coupled with a CMOS camera (Oryx 10GigE, Teledyne FLIR) of resolution 1800 \(\times\) 1026 pixel [2]. A thermal stage is also mounted on the microscope to maintain a constant temperature (25 \({}^{\circ}\)C) throughout the experiment. Movies are recorded for around 5 min at 20 frames per second. After the acquisition of videos, particle tracking was done using the MOSAIC plugin in ImageJ-J, which uses an image correlation-based approach to obtain an individual particle's center of mass position [\(x(t)\),\(y(t)\)] in the laboratory frame. An in-house written Python code was used to detect the orientation of JPs. 
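The orientation-detection step is not spelled out above. The following is a minimal sketch of one common way it could be implemented for bright-field images in which the Pt-coated cap appears darker than the SiO\({}_{2}\) hemisphere: the in-plane director is taken along the vector joining the centroids of the dark and bright halves of a cropped particle image. Function and variable names are illustrative and are not those of the in-house code.

```python
import numpy as np

def estimate_orientation(crop):
    """Estimate the in-plane orientation angle (degrees) of a Janus particle.

    `crop` is a 2D grayscale array containing a single particle; the Pt cap is
    assumed to appear darker than the SiO2 hemisphere in bright-field imaging.
    """
    y, x = np.indices(crop.shape)
    thresh = crop.mean()              # crude split into dark / bright pixels
    dark = crop < thresh
    bright = ~dark

    # Centroids of the two halves
    cx_dark, cy_dark = x[dark].mean(), y[dark].mean()
    cx_bright, cy_bright = x[bright].mean(), y[bright].mean()

    # Assumed convention: the director points from the dark (Pt) half towards
    # the bright (SiO2) half of the particle image.
    nx, ny = cx_bright - cx_dark, cy_bright - cy_dark
    return np.degrees(np.arctan2(ny, nx))
```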
\begin{table} \begin{tabular}{p{113.8pt} p{113.8pt} p{113.8pt} p{113.8pt}} \hline \hline Investigation & Description & Conditions/Regime & Result/Insight \\ \hline Palacci _et al._ (2010) [62]; Theurkauff _et al._ (2012) [63] & \({}^{\circ}\)Jean-Perrin’ type experiments on settling active colloids & Pt\(-\)Latex; Pt\(-\)Au Proto 50\% surface fraction & \(T_{\text{eff}}\sim Pe^{2}\); Dynamic clustering above 3\% fraction, grows \(\propto V\), governed by chemical interactions \\ Ginot _et al._ (2015; 2018) [39, 64] & Formulated equation of state; Experiments on cluster evolution and kinematics & Pt\(-\)Au, Up to 80\% area fraction; Up to 10\% area fraction & Effective inter-particle ‘adhesion’ \(\sim Pe\); Derived aggregation-fragmentation rates \\ Sharifi-Mood _et al._ (2016) [68] & Semi-analytical study on chemo-hydrodynamic pair interactions & \(Pe_{f}\ll 1\), non-Brownian, uniform mobility & Incidence angle and catalyst coverage determine assembly or escape \\ Saha _et al._ (2019) [69] & Far-field framework for a directory of non-reciprocal pair interactions & Quasi 2D dynamics, \(Pe_{f}\ll 1\), \(d_{cc}\gg a\) & Chemical interactions govern pairing, waltzing \& scattering \\ Varma \& Michelin (2019)[70]; Rojas-Pérez _et al._ (2021) [71] & Generalized \& offline & \(Pe_{f}\ll 1\), non-Brownian & Chemo-hydrodynamic fields exhibit intricate coupling, spawning novel multi-body interactions \\ Sharan _et al._ (2023) [72] & Experiments \& model of surface-bound pair collision & Pt\(-\)\& Cu\(-\)SiO\({}_{2}\) on 1D crack \& 2D plane Cu\(-\)SiO\({}_{2}\) repel at \(d_{cc}\gtrsim 7a\) \\ This work to characterize configurations and contact times & Pt\(-\)SiO\({}_{2}\) on 2D plane & Collisions are non-steric \& occur over few seconds dictated by chemical field over \(d_{cc}<3a\) \\ \hline \hline \end{tabular} \end{table} Table 1: Past studies on suspensions of Janus colloids addressing chemo-hydrodynamic interactions and their impact on emergent behavior. Here, \(d_{cc}\) is the center-to-center distance between particles of radius \(a\), \(Pe\) is the Péclet number that denotes the persistence length of the active Brownian particle, \(Pe_{f}\) is the Péclet number associated with solute advection via fluid flow, and \(V\) is the propulsion velocity. ## III Results and Discussion ### Isolated active JP motion Firstly, we investigate the motion of isolated 5 \(\mu\)m JPs in 3 wt. \(\%\) H\({}_{2}\)O\({}_{2}\) aqueous solution. Decomposition of H\({}_{2}\)O\({}_{2}\) on the Pt patch results in a chemical gradient across the JP, causing its self-diffusiophoretic propulsion (see supporting movie S1). As seen in figure 1 (a), the isotropic nature of the trajectories with no preference for either \(x\) or \(y\) direction confirms the absence of any convective drift in JPs. During their active motion, the JPs maintain the half-moon orientation with their normal to the Janus plane (**n**) oriented parallel to the bottom wall, as seen in the optical micrographs shown in 1 (b). 
Hydrodynamic and chemophoretic interactions with the bottom wall are well-known to force the active JPs to adopt this stable orientation.[20; 78; 32] The rotational time scale \(\tau_{\rm r}\) of JPs is obtained by fitting the velocity auto-correlation function \(C(\Delta t)(=\langle{\bf v}_{\rm inst.}(\Delta t).{\bf v}_{\rm inst.}(0)\rangle)\) with the form \(C(\Delta t)=4D\delta(\Delta t)+\langle|{\bf v}_{\rm inst.}|\rangle^{2}\cos \left(\omega\Delta t\right)\exp\frac{-\Delta t}{\tau_{\rm r}}\), where \(\delta\) is the Dirac delta function, and \(\omega\) is the angular speed of the active JPs, if any (see supporting figure S1 (d)). The instantaneous velocity \({\bf v}_{\it inst.}\) is defined as \({\bf v}_{\rm inst.}=\frac{{\bf r}_{\rm i+1}-{\bf r}_{\rm i}}{t_{\rm i+1}-t_{ \rm i}}\). Here \({\bf r}_{\rm i}\) is the instantaneous position vector at time \(t_{\rm i}\). Figure 1 (c) shows the Probablity distribution of \(\langle|{\bf v}_{\rm inst.}|\rangle\) and \(\tau_{\rm r}\). The distribution is attributed to the variation in particle coating, interactions with the substrate, size variation, etc. ### Observing pair-interactions: approach and departure In this study, we have carefully documented 120 pair collisions. The concentration of the particle suspension was adjusted adequately to maintain the low number density of JPs, which mostly limited the interactions among the JPs to two-body (pair) collisions only. Figures 2(a-f) show a few representative collisions and figure 2(g) represents the inter-particle center-to-center distance \(d_{\rm cc}\) as a function of time \(t\) for these trajectories. From the figure, it is evident that the collisions occur in three sequential events. At first, the JPs move towards each other as \(d_{\rm cc}\) decreases with time \(t\). Afterward, \(d_{\rm cc}\) reaches its minimum value (\(\sim 5\)\(\mu\)m) and remains constant for some time \(t_{\rm contact}\), during which the active JPs maintain close contact and interact by sliding or rolling against each other. These interactions occur in different forms and determine the contact length and time, which is a subject of exploration in this article. Eventually, the JPs detach from each other and drift apart, also depicted by the subsequent increase in \(d_{\rm cc}\) with \(t\), as shown in figure 2(g). The optical micrographs of the colliding JPs are shown in the inset images of the figure 2(a-f) for three distinct periods during their physical contact: the first moment of contact, the midpoint of the collision, and just before their separation. Careful inspection of the observed collisions reveals that the active JPs predominantly collide with their SiO\({}_{2}\) hemispheres coming in contact first. Very rarely, it was observed that the Pt side of either JP comes in contact with either side of the other JP. In such a case, the approaching particles scatter instantly, as depicted in collision (f). We also observed that, towards the end of their interaction, the equators of the JPs align bringing their Pt sides closer, which triggers their separation. The observations of approach and departure can be understood by the chemophoretic interactions originating from the H\({}_{2}\)O\({}_{2}\) decomposition occurring on the Pt hemisphere of the active JP. 
These interactions occur due to the chemical fields generated by the neighboring active JPs in the system and usually decay as (\(\sim\frac{1}{d_{\rm cc}^{2}}\)).[80; 79; 81] Whenever the reactive side of the JP approaches another particle's silica side, the accumulation of the reaction products in the interstitial region forces the JPs to move away from each other. This repulsive interaction is much stronger when two reactive sides are brought close to each other.[68] Furthermore, JPs were only observed to change their orientation at close distances and we did not observe significant long-range reorientations. The JPs considered here are half coated, which even with a three-fold mobility difference, would yield a weak force-dipole hydrodynamic field[82]: dominant flow field being \(1/r^{3}\) i.e., source-dipole. Whereas, the chemical field decays more slowly as \(1/r^{2}\). Furthermore, the momentum dampening of hydrodynamic effects might be exacerbated near boundaries exhibiting no-slip friction, while boundary-enhanced solute accumulation intensifies the repulsive chemical interactions. Hence, the observations of direct contact between JPs, as illustrated in figure 2, wherein JPs collide with their SiO\({}_{2}\) hemispheres and subsequently move apart when their Pt sides come closer, support the expected chemophoretic interactions. Sharifi-Mood _et al._ predicted assembly of JPs in specific configurations,[68] which we do not observe in our experiments and is a key differentiation. One of the primary reasons behind this difference might be that their study doesn't account for mobility differences of the catalytic and inert halves that arise due to these halves having different interactions with solute molecules. Previous experiments[83] have shown that catalytic halves exhibit higher slip velocity and thus, higher mobility coefficient. Other possible contributing reasons behind this difference could be assumptions of neglecting Brownian fluctuations and variations of speeds in the colliding JPs. We suspect that these differences allow the JPs to slide along each other and eventually detach in our experiments. A recent semi-analytical study by Nasouri and Golestanian [84] demonstrated that axisymmetrically interacting JPs can exhibit up to three fixed points, that de termine the stability of distance between JPs. Their results for uniform mobility JPs were in close agreement with Sharifi-Mood _et al._[68], however, they found no fundamental addition of fixed points for the case of non-uniform mobility. They attributed this to mobility anisotropy only causing hydrodynamic modifications that are less important than chemical interactions. Although our current study does not indicate any stable fixed points, as all JPs eventually depart, we do observe a distribution of contact times that correlates with the various configurations of collisions. Furthermore, since most of the configurations are non-axisymmetric, we suspect that a significant role is played by asymmetry in mobility, which can introduce a chemical preference for the reorientation of JPs, during contact. Hence, in the next subsections, we characterize these configurations and analyze their kinematics of approach and departure. ### Classification of collisions Each active JP undergoing a collision is characterized by its instantaneous speed \(\mathbf{v}_{\text{inst.}}\), and the orientation of a unit vector normal to the equator line of the JP, denoted by \(\mathbf{n}\). 
We broadly classify the collisions into 4 categories by defining the orientation of the JPs (i.e. \(\mathbf{n}_{1}\) and \(\mathbf{n}_{2}\)) with respect to an imaginary line connecting their centers (as depicted in figure 3(a)):

1. _Cis_: \(\mathbf{n}_{1}\) and \(\mathbf{n}_{2}\) are on the same side with respect to the centre-to-centre line.
2. _Trans_: \(\mathbf{n}_{1}\) and \(\mathbf{n}_{2}\) are on opposite sides with respect to the centre-to-centre line.
3. _Ortho_: An intermediate between the _Cis_ and _Trans_ states, where \(\mathbf{n}_{1}\) and \(\mathbf{n}_{2}\) are orthogonal to each other, with one of them oriented along the centre-to-centre line.
4. _Head-on_: Another possible intermediate state, where both \(\mathbf{n}_{1}\) and \(\mathbf{n}_{2}\) lie along the centre-to-centre line.

It should be emphasized that here \(\mathbf{n}_{1}\) and \(\mathbf{n}_{2}\) always point in opposite directions, as collisions only take place between the SiO\({}_{2}\) hemispheres. Figure 3(b) shows the relative frequency of the different approach orientations right before the collisions. Our findings reveal that the likelihood of a _Trans_ approach is almost twice that of _Cis_. Considering the repulsive nature of the chemical interactions due to solutes being generated at the Pt side, a higher likelihood of the _Trans_ approach is expected for the following reasons.

1. _Ortho_ and _Head-on_ are highly specific cases in which the normal vectors are very close to being perpendicular and parallel, respectively; thus the orientations generally fluctuate and form either a _Cis_ or a _Trans_ configuration.
2. As the JPs approach each other, the _Cis_ configuration experiences a higher chemical torque, as the catalytic sides exhibit higher mobilities and also provide a chemically repulsive solute, yielding the largest chemical torque of all configurations and, thus, a lower probability of contact upon approach (further details and explanation are provided in the next subsection).

Post-collision, most JP pairs depart in the _Trans_ configuration, with some exceptions being _Cis_ departures, where JPs approach _Cis_ and detach from each other as _Cis_ with almost parallel \(\mathbf{n}_{1}\) and \(\mathbf{n}_{2}\). These pure _Cis_ collisions occur rarely and constitute only \(\sim\) 12.5 % of the total _Cis_ collisions observed in this study.

Figure 1: (a) Representative trajectories (\(\sim\) 60 s) of SiO\({}_{2}\)-Pt JPs in 3 wt. % H\({}_{2}\)O\({}_{2}\) solution. The scale bar corresponds to a length of 100 \(\mu\)m. (b) Bright-field optical micrograph of the active JPs. The scale bar corresponds to a length of 10 \(\mu\)m. (c) Probability distribution of average instantaneous speed (\(|\mathbf{v}_{\text{inst.}}|\)) and reorientation timescale \(\tau_{r}\) of the active JPs. Note that the distribution obtained using nearly 40 active JPs is fitted with a suitable log-normal distribution curve for better visualization.

Figure 2: (a-f) Representative trajectories of active JP pairs involved in different collisions. Colored segments in the trajectories indicate that the active JPs remain in contact. Time-lapse images in the insets, from left to right, show the orientation of the JPs during the first instance of contact, the mid-point of the collision, and just before separation, respectively. Scale bars correspond to a length of 10 \(\mu\)m. (g) Time variation of the inter-particle distance \(d_{\text{cc}}\) for the collisions shown in figures (a-f). The dotted line indicates the size of an individual particle, _i.e._ the least possible value of \(d_{\text{cc}}\).
Therefore, we further refined the classification of collisions by additionally taking into account the orientations of the active JP pair while detaching from each other, as (i) _Ortho-Trans (O-T)_, (ii) _Head-on-Trans (H-T)_, (iii) _Trans-Trans (T-T)_, (iv) _Cis-Cis (C-C)_, and (v) _Cis-Trans (C-T)_. Here, the first and second designations indicate the approaching and detaching orientations of the JPs. Within the scope of this study, we did not observe other combinations such as _T-C, O-C_, and _H-C_. In figure 3(c), we present a schematic representation of the interaction process for the active JP pair during these five types of collisions. Overall, the orientation-dependent surrounding fluid flow and self-generated chemical fields dictate the sliding/rolling actions of the JP pair, which we will elaborate on in later sections. Meanwhile, the preliminary observations drawn from figure 3(c) are that for an _O-T_ collision, the particle whose orientation vector is aligned parallel to the center-to-center line undergoes significant rotation as it encounters the other JP; as a result, the pair reorients to a _Trans_ configuration. In _H-T_ and _T-T_ collisions, the particles approach and roll in opposite directions till their equators are aligned, after which they detach in a _Trans_ configuration. The rare _C-C_ collisions are characterized by shorter rolling/sliding distances of the JPs. The approaching orientation of the JPs is such that their chemical fields are aligned, causing them to roll in opposite directions and depart in a _Cis_ configuration. In contrast, JPs undergoing _C-T_ collisions rotate in opposite directions, with one JP rolling more than the other and subsequently departing in a _Trans_ configuration. Figure 4(a) displays the probability distribution of contact time (\(t_{\text{contact}}\)) for the observed collisions (see figure S2(a) in the Supporting Information for the raw data). _T-T_, _C-C_, and _C-T_ exhibit a wide \(t_{\text{contact}}\) distribution due to their multiple possible approach orientations, whereas _O-T_ and _H-T_ collisions have a narrow \(t_{\text{contact}}\) distribution due to the very specific nature of their approach orientation. Among the broader distributions (_T-T, C-C, and C-T_), _C-T_ collisions spend a longer contact time on average, whereas _C-C_ collisions are relatively quicker. To obtain a simple estimate of \(t_{\text{contact}}\) that takes into account both the approaching orientation and the JPs' speeds, we computed \(\lambda=\beta/|\mathbf{v}_{\text{relative}}|\), where \(\beta=a(|\cos\phi_{1}|+|\cos\phi_{2}|)\) is the estimate for the contact length \(l_{\text{contact}}\), \(a\) is the radius of the JP, and \(\phi_{1}\) and \(\phi_{2}\) are the angles between the center-to-center line and the propulsion axes of JP\({}_{1}\) and JP\({}_{2}\), respectively, as depicted in figure 3(c). Also, \(\mathbf{v}_{\text{relative}}=\mathbf{v}_{\text{inst.,1}}-\mathbf{v}_{\text{inst.,2}}\) just before the onset of the physical contact. Figure 4(b) shows the comparison of the experimentally measured \(t_{\text{contact}}\) with the estimated \(\lambda\). Excluding _C-T_ collisions with higher \(\lambda\), we find that \(t_{\text{contact}}\propto\lambda\) with a proportionality constant \(c=3.61\pm 0.32\), indicating that \(\lambda\) underestimates \(t_{\text{contact}}\). The linear relation between \(t_{\text{contact}}\) and \(\lambda\) indicates that most of the collisions follow a simple unidirectional sliding/rolling of the JPs.
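The geometric estimate above amounts to only a few lines of arithmetic. The following sketch computes \(\beta\) and \(\lambda\) for a hypothetical approach and converts \(\lambda\) into a contact-time estimate using the fitted proportionality constant; the numerical inputs in the example are invented for illustration and are not measured values.

```python
import numpy as np

a = 2.5e-6       # particle radius in m (5 um JPs)
c_fit = 3.61     # fitted proportionality constant, t_contact ~ c * lambda

def contact_time_estimate(phi1_deg, phi2_deg, v_rel):
    """Estimate the contact time from the approach geometry.

    phi1_deg, phi2_deg: angles between each JP's propulsion axis and the
    center-to-center line (degrees); v_rel: relative speed just before contact (m/s).
    """
    phi1, phi2 = np.radians([phi1_deg, phi2_deg])
    beta = a * (abs(np.cos(phi1)) + abs(np.cos(phi2)))  # contact-length estimate
    lam = beta / v_rel                                  # time-scale estimate lambda
    return c_fit * lam

# Example with made-up numbers: a fairly head-on approach at ~5 um/s relative speed
print(contact_time_estimate(20.0, 30.0, 5e-6))  # -> a few seconds of contact
```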
Factors such as surface roughness, particle size polydispersity, and the thickness and uniformity of the Pt coating on the JPs may cause this simple estimate to underestimate the time of contact, resulting in \(c>1\). For example, for _H-T_ collisions, it was also observed that the active JPs are locked for a few seconds at the beginning of the collision in the _Head-on_ state due to their opposing hydrodynamic interactions, until a perturbation is induced by their thermal fluctuations, causing them to rotate and behave like JPs in a _T-T_ collision; such effects are not considered here. The positive deviation of the \(t_{\text{contact}}\) values for _C-T_ collisions with high \(\lambda\) (high \(\beta\) and/or low \(|\mathbf{v}_{\text{relative}}|\)) from the expected linear trend is expected to arise because of the rather complex sequence of steps during the physical contact of the JPs, including intermittent switches of their rotation direction, which we will explain in a later section. Figure 4(c) suggests that collisions with higher \(\beta\) are more likely (see figure S2(b) in the Supporting Information for the raw data), which we shall also explain in later sections.

Figure 3: (a) Schematic representing the different collisions observed. (b) Relative frequency distribution of the observed collisions. (c) Schematic depicting the sliding and rolling actions of the active JP pairs and the resulting orientation changes during various collision scenarios. The navy lines in the first row of figures indicate the rolling/sliding distance of the individual active JPs.

Figure 4: (a) Fitted lognormal probability distribution curves of the contact time \(t_{\text{contact}}\) for various collision scenarios classified based on the approach and departure orientations of the active JP pairs. (b) Variation of \(t_{\text{contact}}\) with the approach parameter \(\lambda\), and the corresponding linear fit (C-T collisions are excluded from the fit). (c) Fitted normal probability distribution curve of \(\beta\). Inset depicts the two configurations at the extreme values of \(\beta\).

### Analyzing the kinematics of collision

Next, we aim to understand how a specific type of pair collision impacts the translation and rotation of the JPs as they approach and depart. As stated in the introduction section, although we observe a lack of far-field hydrodynamic interactions, the dynamics of collisions and contact are not simply steric but are determined by the chemical field distribution and phoretic slip-based interactions, which are elaborated below. As shown in the schematic in figure 5(a), for an isolated SiO\({}_{2}\)-Pt JP, the resultant surface (osmotic) slip due to the JP's self-generated chemical field causes it to propel forward. The clockwise (Cw) slip is known to generate a counterclockwise (CCw) torque T\({}_{CCw}\) about the propulsion direction, _i.e._ the positive \(z\)-axis, and vice versa. Since these torques are equal and opposite, as expected, an isolated active JP does not experience net rotation due to propulsion [31]. When the JP approaches close to a solid boundary, such that the chemical field on the Pt side is not altered (in our case, when a JP interacts with the SiO\({}_{2}\) side of the other JP), the asymmetric viscous stress distribution of the surface slip can cause its rotation through a torque termed the 'propulsive torque' by Mozaffari _et al._[31].
This propulsive torque tries to orient the JP's \(\mathbf{n}\) away from the boundary, which is different from the opposing viscous torque that tries to bring \(\mathbf{n}\) towards the wall to roll the JP along the direction of motion parallel to the wall. On the other hand, if one JP interacts with an external asymmetric chemical field (in our case induced by another active JP), the mobility differences can yield a 'chemical torque' [79; 84; 85]. In the case of SiO\({}_{2}\)-Pt, both the catalytic and inert sides exhibit repulsive interactions with the released solute, yielding a slip from the lower-concentration to the higher-concentration region on the surface. However, the Pt side has been reported to have a significantly higher slip velocity[82], suggesting an equally higher mobility on the catalytic side and rendering a reorientation such that the catalytic side faces furthest from the solute. During the pair-wise interactions, depending on the approach orientation of the JPs, both chemical and propulsive torques can contribute to the rotation of the interacting JPs, which we will discuss below through specific collisions. In addition to rotation, the collisions can also influence their translational motion through chemophoretic and hydrodynamic interactions, resulting in a change in the velocity \(\mathbf{v}_{\text{inst.}}\) of the JPs.

Figure 5: Illustration of an isolated active JP experiencing phoretic slips and corresponding torques with no net rotation. Here \(\theta_{n}\) represents the orientation of the JP with respect to the lab frame X-axis.

#### iii.2.1 The Ortho-Trans collision

Figure 6: For a representative _O-T_ collision (\(\beta=2.93\)\(\mu\)m): (a) X-Y trajectories of the JPs, color-coded with instantaneous speeds. (b) Time-lapse micrographs demonstrating the sequence of the JPs' position and orientation. (c) Variation of the instantaneous speed \(|\mathbf{v}_{\text{inst.}}|\) and instantaneous orientation \(\theta_{\mathbf{n}}\) for the participating JPs with time. (d) Schematics demonstrating the rotational dynamics of the JPs (left to right in time).

In Figure 6(a), we present the 2D trajectories of JPs participating in a representative _O-T_ collision (\(\beta=2.93\)\(\mu\)m) (see supporting movie S2). Time-lapse images shown in figure 6(b) demonstrate the JPs' position and orientation at different instances during the collision. The evolution of the instantaneous speed \(|\mathbf{v}_{\text{inst.}}|\) (computed with \(\Delta t=0.5\) s to eliminate noise) and the orientation angle \(\theta_{\mathbf{n}}\) for the JP pair is demonstrated in figure 6(c). On approach, just before the physical contact, the speed of the faster JP (2) decreases, while JP (1) speeds up, and before detachment they return to their respective pre-collision speeds. Just before the impact, JP (1) first rotates CCw slightly, while JP (2) rotates Cw. This rotation allows the JPs to come in contact with an _Ortho_ orientation. Subsequently, the direction of rotation is reversed before the eventual departure. To discern the aforementioned process, we first note that the JPs approach each other in a near-_Ortho_ manner. During this approach, the effect of far-field interactions remains weak, as affirmed by \(\theta_{\mathbf{n}}\) in figure 6(c). Near contact, JP (1) experiences a Cw propulsive torque due to its bottom surface experiencing a higher shear in the narrow gap between the SiO\({}_{2}\) sides. Whereas, JP (2) experiences a weaker torque due to its surface slip being zero at the point of _Ortho_ contact, i.e., along \(\mathbf{n}_{2}\).
This is shown in the trajectories in figure 6(a) and the schematic in figure 6(d, left), where JP (2) experiences a very slight reorientation. These rotations due to the propulsive torque bring the JP pair to figure 6(d, middle), where their exact _Ortho_ orientation experiences an opposing chemical torque due to the Pt side of JP (1). This chemical torque acts Cw on JP (2) and arises because of the lower mobility coefficient of the SiO\({}_{2}\) side; similarly, JP (1) rotates CCw. The rotation continues until the two Pt sides touch each other and the chemical repulsion of the solutes is strong enough to break the contact. The Appendix (Section 5) provides further details on the estimates of the chemical torque and on how an _Ortho_ contact always experiences a chemical torque towards the _Trans_ orientation. On the other hand, modifications in the translational motion of the JPs during the collision are rather straightforward. As JP (2) is more aligned with the center-to-center line, it experiences more steric hindrance due to JP (1), explaining the strong decrease in its speed. Whereas, the lesser alignment of JP (1) with respect to the center-to-center line allows it to gain some momentum from the incoming JP (2), resulting in its speeding up. After reorientation, once their equators align, the JPs depart and gradually return to their pre-collision speeds.

#### iii.2.2 The Head-on-Trans and Trans-Trans collision

The behavior of \(H-T\) and \(T-T\) (high \(\beta\)) collisions (see representative movies S3 and S4 in the Supporting Information) has a distinct character, which we discuss through the representative collisions shown in figure 7. Trajectories and temporal changes in the JPs' position and orientation over the course of the collision are shown in figure 7(a,c). Evidently, the JPs approach each other from their SiO\({}_{2}\) sides, which ensures that the chemical interactions remain weak initially. The opposing hydrodynamic fields and lubrication effects[81] substantially slow down both JPs during their approach; they recover their original speeds as they detach, as depicted in the evolution of the instantaneous speed \(|\mathbf{v}_{\text{inst.}}|\) for the two collisions. Furthermore, considering the higher \(\beta\) (4.93 \(\mu\)m) for the _H-T_ collision in comparison with the lower \(\beta\) (4.65 \(\mu\)m) for the _T-T_ collision, the speed reduction is more pronounced in the former and the recovery is also slower. As the JPs try to maintain their propulsive momentum and move past each other, essentially in all _Head-on_ collisions, the JPs first transition to the _Trans_ orientation. In such an orientation, the rotational behavior of the JPs appears to be an intriguing two-step process, wherein the JPs first rotate to achieve an _Ortho_ orientation. This is followed by a rotation in opposite directions to again achieve a _Trans_ orientation before detachment. This two-step process of _T-O_ and then _O-T_ is illustrated in figure 7(g). The _T-O_ transition is due to JP (2) experiencing a propulsive torque in the CCw direction. Consequently, JP (1) experiences a cog-like Cw rotation dominated by the slip of JP (2), as the propulsive torque on JP (1) is weak due to its slip being weak at contact. The subsequent _O-T_ transition follows the mechanism discussed in the previous section III.4.1. Figure 7(h-j) shows a _T-T_ collision with low \(\beta=1.81\)\(\mu\)m. The JPs undergo relatively smaller speed fluctuations and, importantly, only a single-step rotation under the influence of the mutual chemical field.
In the limiting case where JPs approach in a grazing manner (\(\beta\sim 0\)\(\mu\)m), they do not experience any significant speed reduction. #### iii.2.3 The Cis-Trans and Cis-Cis collision Next, we discuss the case of _C-T_ collisions (see representative movie S5 in the supporting information). The trajectories and time-images shown in figure 8(a,b) demonstrate the JPs position and orientation at different instances during a representative collision. Figure 8(c) illustrate the corresponding evolution of \(|\mathbf{v}_{\text{inst.}}|\) and \(\theta_{\mathbf{n}}\) for the participating JPs. Unlike a _C-C_ collision (see figure 8), in this case, the JPs undergo more complicated speed and orientation changes. In this collision, despite a _Cis_ approach the orientation vectors of the JPs are significantly misaligned such that there is always a finite approaching velocity along the center-to-center line at the time of first contact. As a result, at the contact, the two reactive Pt sides remain distant, and primary interactions are mostly steric, resulting in an initial slow-down of the JPs. Immediately after the contact, the initial rotation is dictated by the altered propulsive torque distribution due to the thin gap between the SiO\({}_{2}\) sides of the JPs. This aligns the equators of the JPs bringing their reactive Pt sides closer. Such an orientation provides an enhanced driving push due to their combined chemical field initially enhancing the speeds of the participating JPs. It was also observed that the alignment of the two JPs with respect to the center-to-center line, measured in terms of \(\phi\) (see schematic shown in figure 8(e)) was not the same (\(\phi_{1}(=15^{\circ})<\phi_{2}(=25^{\circ})\)). Due to this orientation difference, we believe that JP (1) experiences more resistance and thus undergoes more speed reduction during the approach and slower acceleration (both translation and rotation). This competition between the JPs brings JP (1) to the offside front of JP (2). The chemical field of JP (1)'s Pt side now starts to affect the rotation of JP (2), forcing a Cw chemical torque, bringing the JPs to an _Ortho_ orientation followed by _Trans_ separation (as discussed in section III.4.1). The apparent direction reversal maneuver performed by JP (1) results in an unpredictably larger \(t_{\text{contact}}\) (see figure 4(b)). Using figure 8(g-i), we finally discuss the behavior of a JP pair undergoing a _C-C_ collision (see representative movie S6 in the supporting information). Time images shown in figure 8(g) demonstrate the JPs' position and orientation at different instances during the collision. While approaching, the JPs align with their normal vectors being nearly parallel. As evident from the evolution of the instantaneous speed \(|\mathbf{v}_{\text{inst.}}|\) of the participating JPs (figure 8(h)), such an orientation provides an enhanced driving push due to their combined chemical field initially enhancing the speeds of the participating JPs. Upon detachment, their speeds almost return to their pre-collision values. This observation is consistent with recent numerical simulations [68]. The proximity of the Pt sides suggests that chemical torques should force rotate the JPs to attain a _head-on_ orientation. Also, this effect is expected to be stronger than the _Ortho_ case (refer to Appendix). However, the rotation of JPs appears to be consistent with the propulsive torques. 
Although this anomalous behavior is not very clear, one possible reason could be the increased strength of the propulsive torque due to the increased slip in the gap between the JPs. Nonetheless, the occurrence of such collisions is very low.

Figure 7: For a representative _H-T_ collision (\(\beta=4.93\)\(\mu\)m): (a) X-Y trajectories of the JPs, color-coded with instantaneous speeds. (b) Time-lapse micrographs demonstrating the sequence of the JPs' position and orientation. (c) Variation of the instantaneous speed \(|\mathbf{v}_{\text{inst.}}|\) and instantaneous orientation \(\theta_{\mathbf{n}}\) for the participating JPs with time. For a representative _T-T_ collision (\(\beta=4.65\)\(\mu\)m): (d) X-Y trajectories of the JPs, color-coded with instantaneous speeds. (e) Time-lapse micrographs demonstrating the sequence of the JPs' position and orientation. (f) Variation of the instantaneous speed \(|\mathbf{v}_{\text{inst.}}|\) and instantaneous orientation \(\theta_{\mathbf{n}}\) for the participating JPs with time. (g) Schematics demonstrating the rotational dynamics of the JPs in the _T-T_ collision (left to right in time). (h) X-Y trajectories of JPs, color-coded with instantaneous speeds, participating in a representative _T-T_ collision with low \(\beta\) (= 1.81 \(\mu\)m). (i) Time-lapse micrographs demonstrating the sequence of the JPs' position and orientation. (j) Variation of the instantaneous speed \(|\mathbf{v}_{\text{inst.}}|\) and orientation \(\theta_{\mathbf{n}}\).

### Non-contact near-field interactions

In addition to the physical collisions, we also observed events where active JPs interact despite being physically distant, i.e., \(5\mu m<d_{\text{cc}}<15\mu m\) (see Movie S7 in the Supporting Information). Figure 9(a) displays the trajectories of two active JPs engaging in one such _non-contact_ interaction. To distinguish the orientation changes induced by these interactions from those caused by Brownian fluctuations, in figure 9(b) we show the \(\Delta t\) values representing the time taken to undergo the measured orientation change \(\Delta\theta\) for several pairs of JPs maintaining \(d_{\text{cc}}>5\)\(\mu\)m at all times. The experimentally measured values are compared with the theoretically estimated values \(\sim\frac{\langle\Delta\theta^{2}\rangle}{2D_{\text{r}}}\) (solid line). Here, \(D_{\text{r}}\) is the experimentally measured rotational diffusivity of isolated JPs. Interactions where the theoretically estimated values are greater than the experimentally measured ones (encircled) represent orientation fluctuations caused by the _non-contact_ interactions. Furthermore, another key observation is that in all such interactions, the Pt side of at least one of the JPs is oriented towards the other JP, facilitating the chemical interactions. Figure 9(c) shows the optical micrograph images for a few representative _non-contact_ interactions when the JPs are at their closest, corresponding to the least interparticle distance \(d_{\mathrm{cc,min.}}\). While such interactions can be long-ranged and span distances of up to multiple particle diameters, we did not observe any noticeable change in \(\theta_{\mathbf{n}}\) for an active JP that could potentially be caused by the other JP beyond a \(d_{\mathrm{cc,min.}}\)\(\sim 3d\) (i.e., 15 \(\mu m\)). This affirms the negligible role of far-field hydrodynamic interactions in governing the pair interactions (contact or non-contact) of self-propelled SiO\({}_{2}\)-Pt JPs.
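The Brownian benchmark used in this comparison is simply the time a freely diffusing particle would need to accumulate the observed angular change. A minimal sketch of that check is given below; the numbers in the example are illustrative only, and \(D_{\text{r}}\) would in practice be the measured rotational diffusivity of isolated JPs.

```python
import numpy as np

def brownian_reorientation_time(delta_theta_deg, D_r):
    """Time for in-plane rotational diffusion to produce a mean-square angular
    displacement <dtheta^2> equal to the observed orientation change."""
    dtheta = np.radians(delta_theta_deg)
    return dtheta**2 / (2.0 * D_r)

# Example with illustrative values: a 30 degree turn and D_r = 0.05 rad^2/s
dt_measured = 2.0                                      # s, observed during the encounter
dt_brownian = brownian_reorientation_time(30.0, 0.05)  # ~2.7 s for these inputs
# If dt_measured is well below dt_brownian, the turn is faster than thermal
# rotation alone would allow, pointing to a genuine near-field interaction.
print(dt_measured, dt_brownian)
```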
An increase in the population of active JPs in the system leads to a higher occurrence of multi-particle interactions (\(n>2\)). Understanding the pairwise interactions of isolated active JP pairs is a crucial aspect of comprehending the mechanics of crowded active systems and their dynamic assemblies, which generally germinate with the collision of two JPs. Having established that an individual active JP generally undergoes reorientation and speed reduction while physically interacting with a similar active JP, it is reasonable to assume that the consequences would be more pronounced for relatively denser systems. To validate this, we compare the distributions of \(\langle|\mathbf{v}_{\mathrm{inst.}}|\rangle\) and \(\tau_{\mathrm{r}}\) values of active JPs in a relatively densely populated system (particle area fraction \(\psi\sim 0.02\), see optical micrograph in figure 9(d)) to those of a dilute system (\(\psi\sim 5\times 10^{-4}\), see optical micrograph in figure 9(d)). As shown in figure 9(e) and figure 9(f), as expected, we find that the peak positions of both \(\langle|\mathbf{v}_{\mathrm{inst.}}|\rangle\) and \(\tau_{\mathrm{r}}\) decrease with an increase in \(\psi\) due to the increased inter-particle interactions. Also, with an increase in \(\psi\), we find that the peaks of the probability distribution curves narrow down, suggesting that the behavior of particles in the system is almost identical, indicating an onset of dynamical ordering in the system (see figure S2(c) in the Supporting Information for the raw data of \(\langle|\mathbf{v}_{\mathrm{inst.}}|\rangle\) and \(\tau_{\mathrm{r}}\)). The uniformity in the particles' behavior is likely the coarse-grained effect of the increase in pairwise and multi-JP interactions (both physical and far-field) in a system with largely evenly distributed JPs in the 2-D space. Further insights into the collective response can be achieved through more dedicated experimental investigations.

Figure 8: For a representative _C-T_ collision (\(\beta=4.28\)\(\mu\)m): (a) X-Y trajectories of the JPs, color-coded with instantaneous speeds. (b) Time-lapse micrographs demonstrating the sequence of the JPs' position and orientation. (c) Variation of the instantaneous speed \(|\mathbf{v}_{\mathrm{inst.}}|\) and orientation \(\theta_{\mathbf{n}}\). (d) Illustration of the effect of phoretic slips on JPs in collisions: when two identical active JPs approach with a Cis orientation at high \(\beta\), the differences in the surface flow on either side of the JPs result in a net propulsive torque, forcing them to rotate in specific directions. (e) Schematic illustration of the difference in alignment of the two JPs with respect to the center-to-center line, measured in terms of \(\phi\). For a representative _C-C_ collision (\(\beta=0.62\)\(\mu\)m): (f) X-Y trajectories of the JPs, color-coded with instantaneous speeds. (g) Time-lapse micrographs demonstrating the sequence of the JPs' position and orientation. (h) Variation of the instantaneous speed \(|\mathbf{v}_{\mathrm{inst.}}|\) and orientation \(\theta_{\mathbf{n}}\). (i) Illustration of the effect of phoretic slips on JPs in collisions when two identical active JPs approach with a Cis orientation at low \(\beta\).

## IV Summary

In this study, we experimentally investigate the pair-wise interactions of H\({}_{2}\)O\({}_{2}\) fuelled active SiO\({}_{2}\)-Pt JPs of identical size. It was observed that in almost all collisions, the JPs approach with their SiO\({}_{2}\) hemispheres coming in contact first.
Subsequently, they rotate (about an axis perpendicular to the bottom wall) and slide along each other, eventually aligning their Janus planes, mostly from opposite sides. On achieving such an orientation, the JPs detach from each other. Based on the approach orientation of the JPs' normal vectors with respect to the center-to-center line, the collisions have been broadly classified as _Cis, Trans, Ortho, and Head-on_. The interaction duration and the overall impact of the collision are contingent upon the approach orientation and the velocities of the active JPs involved. In some instances, even without physical contact, JPs are influenced by another approaching JP, and such interactions are designated as _non-contact_ collisions. Based on our observations from all the different kinds of pair interactions, a phase diagram has been constructed (see figure 10). The plot serves as a useful guide in predicting the nature of the collision and thus the interaction outcome, based on the approach orientation quantified in terms of the angle \(\theta_{\text{app.}}\) (Y-axis) and the overlap of JPs captured by \(\beta\) (X-axis). Here low \(\beta\) represents glancing contact and high \(\beta\) represents _Head-on_ like collisions. In accordance with the established behavior of a chemically active JP near a solid wall, the collision dynamics (both rotational and translational) was found to be affected by the nearby JPs. This influence stems from the alteration of the propulsive torque (due to viscous stress associated with phoretic slip) and/or the chemical torque acting on the JP. When the approach orientation positions the reactive Pt side of a JP in the vicinity of the SiO\({}_{2}\) (_i.e. Ortho_) or the Pt side (_i.e. Cis_) of the other JP, the interactions are mostly influenced by the distribution of the chemical fields. In \(Head-on\) and \(Trans\) approaches, since the reactive Pt sides of the JPs remain oriented away from each other, during the approach the interactions are mostly influenced by the modification in the propulsive torques. This rotation brings the JPs into an \(Ortho\) orientation, initiating the influence of chemical fields, eventually separating with a \(Trans\) orientation. To this end, we did not observe any concrete evidence of far-field hydrodynamics playing a significant role in the pair interactions. The effect of approach orientation on the onset of chemical interactions is further supported using a simple qualitative estimate of the chemical torques on the JPs, which is detailed in the appendix section.

Figure 9: (a) Representative trajectories (\(\sim 15\) s) of two active JPs participating in a _non-contact_ interaction. The scale bar indicates a length of 5 \(\mu\)m. (b) Comparison of theoretical (Stokes-Einstein) and experimental time scales required for deflection. (c) Representative optical micrograph images of separate _non-contact_ interactions at the instant of the least inter-particle distance \(d_{\text{cc, min.}}\). The JP pair marked with an asterisk corresponds to the trajectory shown in Figure (a). The scale bars indicate a length of 5 \(\mu\)m. (d) Optical micrographs depicting a dilute (top) and a populated (bottom) system of active JPs. The scale bars in the optical micrographs indicate a 25 \(\mu\)m length. (e,f) Fitted log-normal probability distribution curves of the average instantaneous speed \(\langle|\mathbf{v}_{\text{inst.}}|\rangle\) and reorientation timescale \(\tau_{\text{r}}\) for systems with different particle area fraction \(\psi\).
Comprehending the pairwise interactions among active JPs stands as a crucial cornerstone in our efforts to grasp the intricate dynamics of JPs within densely populated environments characterized by multiple-body interactions, which we have briefly discussed towards the end of section 3.5. Additional experiments are needed to further understand the onset of dynamic assemblies because the pairwise investigation presented here and the one reported by Sharan _et al._ report an eventual scatter, whereas numerous experiments outlined in the Introduction section report assemblies and dynamical structures [72]. This might be due to the use of Au\(-\)Pt particles that operate predominantly via self-electrophoresis: electrochemical decomposition of H\({}_{2}\)O\({}_{2}\) generates electrostatic fields that act on top of the chemical and hydrodynamic fields, probably resulting in enhanced attraction between the JPs [5]. Indeed, pair-interaction studies on such rod-like particles have shown an array of robust pair-wise assemblies [86; 87]. To the best of our knowledge, such isolated pair-wise interactions have not yet been examined for spherical Au\(-\)Pt JPs. Understanding their assembly in contrast to rod-shaped JPs can facilitate ideas for unique functionality because their sphericity can offer higher symmetry & isotropy in dynamical structures than those made of rods (which may facilitate more anisotropic structures); furthermore, spherical shapes offer the highest surface area. Additionally, studying multi-body interactions of the current Pt\(-\)SiO\({}_{2}\) particles may also introduce additional features that can lead to clustering, as was also shown theoretically by Varma and Michelin [70]: three-body interactions impart an additional drift to JPs that is absent in pairwise interactions. We shall address these aspects in our upcoming study. While our study provides conclusive experimental evidence about the dominant role of chemical interactions in governing the pair-encounters of SiO\({}_{2}\)-Pt JPs, the exact quantification of the competing and coupled chemo-hydrodynamic effects requires dedicated simulation studies. However, we believe that upon changing the fuel concentration, thickness of the Pt coating, confinement of the optical cell, and particle size disparity, the relative strengths of the two interactions can be tuned, resulting in a modified response. In addition, by altering the mode of swimming, either by changing the active surface coverage [68; 81] or by altering the surface chemistry of the JPs [72], a distinct yet connected response can be observed.

Figure 10: (a) Schematics illustrating the approach angle \(\theta_{\text{app.}}\) for representative _Cis_ and _Trans_ orientations. (b) Phase diagram based on the approach angle \(\theta_{\text{app.}}\) and the overlap of JPs captured by \(\beta\).

## Appendix: Estimating the chemical torque

Results in the main text show that in certain orientations where the Pt sides are oriented away, the chemical interactions remain weaker. It is only when the orientations are such that the Pt side of either active JP is in the vicinity of the other JP that the chemical interaction-induced reorientations become substantial. To support this argument, for a pair of active JPs (see schematic shown in figure 11) we performed simple calculations to obtain qualitative insights into the chemical torque induced by mobility anisotropy, which implicitly assumes that a. self-interactions are weaker than those imposed by JP\({}_{2}\) on JP\({}_{1}\) (or vice-versa), and b.
chemical field from JP\({}_{2}\) acts as linear gradients across JP\({}_{1}\). Considering only the chemical impact of one JP on the other, we first calculate the solute field around JP\({}_{2}\), which points to its propulsion vector (\(\mathbf{n}_{2}\)) in the negative \(z\)-axis. The self-propulsion speed (\(U^{*}\)) of a Janus particle of size (\(a^{*}\)) \(\sim 5\mu\)m is \(\sim 2-6\times 10^{-6}\) m/s. For solute diffusion coefficient \(D^{*}_{\rm solute}\sim 10^{-9}\) m\({}^{2}\)/s, the associated Peclet number (\({\rm Pe}=U^{*}a^{*}/D^{*}\)) is small \(\sim 10^{-2}\), and thus we neglect the advective effects. Furthermore, we assume that the system is quasi-steady in the concentration field: the time scale of reorientation is larger than that of solute diffusion around the particle (\(a^{*}\,^{2}/D^{*}_{\rm solute}\sim 10^{-2}s\)). Consequently, the disturbance concentration field around JP\({}_{2}\) is governed by the Laplace equation; the boundary conditions are governed by a step flux at the particle surface and a decaying condition at the infinity: \[\nabla^{2}c=0, \tag{1a}\] \[\left.\frac{\partial c}{\partial r}\right|_{r=1}=\mathcal{A}( \theta)=\left\{\begin{array}{ll}\mathcal{A}_{+}=-1&\quad\theta\leq\theta_{c }\\ \mathcal{A}_{-}=0&\quad\theta>\theta_{c}\end{array}\right.\text{ and }\] (1b) \[c\to 0\,\text{ as }r\to\infty. \tag{1c}\] The negative sign in the boundary condition indicates that the solute gradient decreases as we move away from the reactive side. In the above equations, length and concentration are non-dimensionalized using \(a^{*}\), \(|\mathcal{A}^{*}|a^{*}/D^{*}\). Here, \(|\mathcal{A}^{*}|\) is the maximum magnitude of dimensional activity in the units of Mm\({}^{-2}\)s\({}^{-1}\). Following Golestanian _et al._[79], we obtain concentration field around JP\({}_{2}\) as \[c(r,\theta)=\sum_{n=0}^{\infty}\frac{-\mathcal{A}_{n}}{(n+1)}\,\frac{P_{n}( \cos\theta)}{r^{n+1}}, \tag{2}\] where \(P_{n}\) is the \(n\)th order Legendre polynomial and \(\mathcal{A}_{n}\) are the coefficients of activity distribution: \(\mathcal{A}(\theta)=\sum_{n=0}^{\infty}\mathcal{A}_{n}P_{n}(cos\theta)\). These coefficients are found by taking an inner product with the Legendre polynomials and are obtained as \[\mathcal{A}_{0} =\frac{(1-\cos\theta_{c})}{2}\,\,\,\,\text{and}\] \[\mathcal{A}_{n} =\frac{-1}{2}(P_{n+1}(\cos\theta_{c})-P_{n-1}(\cos\theta_{c})) \,\,\text{for }n\geq 1. \tag{3}\] In such a self-generated solute field, a JP will translate due to slip on its surface: \(\mathcal{M}\mathbf{\nabla}_{s}c\), where \(\mathbf{\nabla}_{s}\) is the surface gradient vector and \(\mathcal{M}\) is the surface mobility of solute, non-dimensionalized by \(\mathcal{M}^{*}\) (\(\frac{k_{B}T_{a^{*}_{\rm solute}}}{2\mu}\sim 10^{-32}\,\text{m}^{5}\text{s}^{-1}\)) [5]. The mobility coefficient is generally positive for both Pt and SiO\({}_{2}\) (depicting repulsive solute-surface interactions). This slip is axisymmetric (only having \(\mathbf{e}_{\theta}\) component) and can not generate a rotation velocity even if mobility differences between the two halves exist. In addition to the anisotropic mobility distribution, the particle needs to experience a solute field that is not axisymmetric. Pair interactions with another JP break this symmetry, yielding a chemical torque, which imparts a rotation velocity to the JP in order for it to remain torque-free. 
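The series solution above is easy to evaluate numerically. The sketch below is a minimal implementation of Eqs. (1)-(2): it projects the step activity profile onto Legendre modes by direct quadrature (equivalent to the closed-form coefficients of Eq. (3) up to the sign convention of \(\mathcal{A}\)) and sums the series for \(c(r,\theta)\). The truncation order and the illustrative coverage angle \(\theta_{c}=\pi/2\) are arbitrary choices here, not values taken from the experiments.

```python
import numpy as np
from scipy.special import eval_legendre

# Solute field around an active JP (Eqs. (1)-(2)): step activity A(theta) = -1 on the
# reactive Pt cap (theta <= theta_c) and 0 on the inert side; no advection, quasi-steady.
# Activity coefficients are obtained by direct Legendre projection of the step profile.

def activity_coefficients(theta_c, n_max=80, n_quad=400):
    x, w = np.polynomial.legendre.leggauss(n_quad)      # Gauss-Legendre nodes on [-1, 1]
    A_of_x = np.where(x >= np.cos(theta_c), -1.0, 0.0)  # theta <= theta_c  <=>  cos(theta) >= cos(theta_c)
    return np.array([(2 * n + 1) / 2.0 * np.sum(w * A_of_x * eval_legendre(n, x))
                     for n in range(n_max + 1)])

def concentration(r, theta, theta_c=np.pi / 2, n_max=80):
    """Disturbance concentration c(r, theta) of Eq. (2); r in particle radii (r >= 1)."""
    A = activity_coefficients(theta_c, n_max)
    n = np.arange(n_max + 1)
    return np.sum(-A / (n + 1) * eval_legendre(n, np.cos(theta)) / r ** (n + 1))

# Example: for half coverage, the solute excess is largest just outside the reactive
# pole and decays towards the inert side and with distance from the particle.
for th_deg in (0, 90, 180):
    print(f"theta = {th_deg:3d} deg : c(r = 1.1) = {concentration(1.1, np.radians(th_deg)):+.4f}")
```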
We now evaluate the torque experienced by JP\({}_{1}\) from the solute field of JP\({}_{2}\) (2) at a non-dimensional distance of \(2+\epsilon\), where \(\epsilon\) is the minimum gap between the particles (such that \(\epsilon\leq 0.5\)). The primary torque experienced would be due to the mobility difference at the two halves (\(\mathcal{M}_{+}\neq\mathcal{M}_{-}\)). For Pt-SiO\({}_{2}\), Campbell _et al._ showed a slip anisotropy on the two halves of the particle, suggesting a mobility difference of up to a factor of 3 [82]. Here we consider the mobility anisotropy as \(\mathcal{M}_{+}=3\mathcal{M}_{-}=3\). First, we evaluate the chemical gradient around an imaginary JP\({}_{1}\) as shown in figure 11(b). We shift the coordinates from the centre of JP\({}_{2}\) to JP\({}_{1}\) by using the following transformation as replacement: \(z_{2}=z_{1}-(2+\epsilon)\cos\phi_{2},\,\,x_{2}=x_{1}-(2+\epsilon)\sin\phi_{2}\). These coordinates are finally rotated to account for the tilt of JP\({}_{1}\) with respect to negative \(z\)-axis (\(\Phi=\pi-\phi_{1}-\phi_{2}\)): \(z_{1}=x\cos\Phi-z\sin\Phi,\,\,x_{1}=x\sin\Phi+z\cos\Phi\). We then estimate the gradient across this imaginary JP\({}_{1}\) by evaluating the solute concentration at the \(r=1\). This provides us with maximum and minimum concentration and their respective \(\theta\), which helps us formulate the passive diffusiophoretic problem in the external gradient (\(\gamma=(c_{\rm max}-c_{\rm min})/2\)) of JP\({}_{2}\). This passive diffusiophoresis problem yields the solution as[84; 85]: \[\tilde{c}=\gamma\frac{z\cos\Theta+x\cos\Theta}{(x^{2}+y^{2}+z^{2})^{3/2}}, \tag{4}\] where \(\Theta\) is the angle directing the gradient from minimum to maximum value across JP\({}_{1}\); \(\tilde{c}\) is the approximate solute concentration field around JP\({}_{1}\) due to an external gradient from JP\({}_{2}\). This field is not axisymmetric and can induce a chemical torque if \(\mathcal{M}_{+}\neq\mathcal{M}_{-}\)[79]. The instantaneous chemical torque at each configuration of JP\({}_{1}\) is evaluated using the expression derived by Stone and Samuel [88] for microswimmers: \[\mathbf{T} = -\int_{S}\mathbf{e}_{r}\times\mathbf{u}_{\text{slip}}\,\mathrm{d}S=-\int_{S }\mathbf{e}_{r}\times[\mathcal{M}(\theta)\mathbf{\nabla}_{s}\vec{c}]\,\mathrm{d}S \tag{5}\] \[= -\int_{0}^{2\pi}\int_{0}^{\pi}\mathcal{M}(\theta)\left[\frac{ \partial\tilde{c}}{\partial\theta}(\mathbf{e}_{r}\times\mathbf{e}_{\theta})\right.\] \[\left.+\frac{1}{\sin\theta}\frac{\partial\tilde{c}}{\partial \varphi}(\mathbf{e}_{r}\times\mathbf{e}_{\varphi})\right]_{r=1}\sin\theta\,\mathrm{d} \theta\mathrm{d}\varphi.\] These chemical torques are evaluated for many configurations of JP\({}_{1}\) around JP\({}_{2}\) such that \(\phi_{2}\in[0,\pi]\) and \(\phi_{1}\in[-\pi,\pi]\), and consequently provides Figure 11(c,d). Figure 11(c) shows the chemical torque contours for JP\({}_{1}\) in contact with JP\({}_{2}\) (\(\epsilon=0\)). The dashed region shows the configurations that were observed in the 120 experimental runs of the current study. In the contour plot, we first note that there are positive and negative peaks, representing _Trans_ and _Cis_ configurations: JP\({}_{1}\) in a _Trans_ orientation will experience a clockwise (positive) torque to orient its SiO\({}_{2}\) side towards the solute abundant region, whereas during _Cis_ it will rotate counter-clockwise (negative). The _Head-on_ orientations (\(\phi_{1}=\phi_{2}=0\)) have zero chemical torque due to symmetry. 
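The surface integral in Eq. (5) is likewise straightforward to evaluate numerically. The sketch below does so for the idealized configuration described above: a step mobility (\(\mathcal{M}_{+}=3\), \(\mathcal{M}_{-}=1\)) and a locally linear solute gradient of strength \(\gamma\) whose direction makes an angle \(\Theta\) with the particle axis, so that the surface field is \(\tilde{c}|_{r=1}=\gamma(\cos\Theta\cos\theta+\sin\Theta\sin\theta\cos\varphi)\) (taking the in-plane component of Eq. (4) to carry \(\sin\Theta\)). For this idealized case the integral can be done in closed form, \(T_{y}=-\pi\gamma(\mathcal{M}_{+}-\mathcal{M}_{-})\sin\Theta\sin^{2}\theta_{c}\), so the chemical torque vanishes when the gradient is aligned with the particle axis and is largest when it is perpendicular to it, consistent with the vanishing torque found above for the symmetric _Head-on_ configuration. The values of \(\gamma\), \(\theta_{c}\) and the mobility contrast below are placeholders.

```python
import numpy as np

# Numerical evaluation of the chemical torque of Eq. (5),
#   T = -int_S e_r x [M(theta) grad_s c] dS,
# for a step surface mobility and a locally linear solute gradient of strength gamma
# tilted by an angle Theta from the particle axis:
#   c|_{r=1} = gamma * (cos(Theta) cos(theta) + sin(Theta) sin(theta) cos(phi)).

M_PLUS, M_MINUS, THETA_C = 3.0, 1.0, np.pi / 2   # placeholder mobility contrast and coverage

def chemical_torque_y(Theta, gamma=1.0, n_theta=400, n_phi=400):
    dth, dph = np.pi / n_theta, 2 * np.pi / n_phi
    th = (np.arange(n_theta) + 0.5) * dth        # midpoint rule in theta
    ph = (np.arange(n_phi) + 0.5) * dph          # midpoint rule in phi
    TH, PH = np.meshgrid(th, ph, indexing="ij")
    M = np.where(TH <= THETA_C, M_PLUS, M_MINUS)
    # tangential derivatives of the surface field
    dc_dth = gamma * (-np.cos(Theta) * np.sin(TH) + np.sin(Theta) * np.cos(TH) * np.cos(PH))
    dc_dph_over_sin = -gamma * np.sin(Theta) * np.sin(PH)
    # y-component of e_r x (M grad_s c), times the sin(theta) surface element
    integrand = M * (dc_dth * np.cos(PH) - dc_dph_over_sin * np.cos(TH) * np.sin(PH)) * np.sin(TH)
    return -integrand.sum() * dth * dph

for Theta_deg in (0, 30, 60, 90):
    Theta = np.radians(Theta_deg)
    exact = -np.pi * (M_PLUS - M_MINUS) * np.sin(Theta) * np.sin(THETA_C) ** 2
    print(f"Theta = {Theta_deg:2d} deg : T_y (quadrature) = {chemical_torque_y(Theta):+.4f}, "
          f"closed form = {exact:+.4f}")
```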
The _Ortho_ orientation (\(\phi_{1}=0\), \(\phi_{2}=\pi/2\), as shown in figure 11(b)) experiences a CCw chemical torque. Additionally, figure 11(d) shows that increasing the interparticle distance lowers the chemical torque. We also note that the contour in figure 11(c) is not symmetric about \(\phi_{1}=0\); it is skewed in such a manner that contacts on either immediate side of _Ortho_ (in the neighborhood of \(\phi_{2}=\pi/2\)) shall experience a negative (CCw) torque. This can be responsible for most _Cis_ and _Ortho_ collisions favoring _Trans_ departure as shown in section 3. To alternatively illustrate this skewness, figure 11(e) demonstrates the variation in chemical torque along \(\phi_{1}\) for two configurations: \(\phi_{2}=0\) (contact with JP\({}_{1}\) is along \(-\mathbf{e}_{z}\)) and \(\phi_{2}=\pi/4\). For the former, the chemical torque is symmetric with respect to \(\phi_{1}=0\) (_Head-on_ configuration as depicted in the embedded schematic). However, the latter does not exhibit a symmetry about \(\phi_{1}=0\), and the configuration in the embedded schematic shall experience a non-zero torque on JP\({}_{1}\), due to mobility anisotropy. Finally, we note that out of all realized configurations, the _Head-on_ configuration appears to be the state of lowest chemical torque. This chemical stability might be responsible for most collisions approaching each other with high \(\beta\) as shown in figure 4(c).

Figure 11: (a) Schematic of two Janus particles (JP) interacting chemically that yields an instantaneous chemical torque. The angles from the center-to-center dotted line to the propulsive axes of JP\({}_{1}\) and JP\({}_{2}\) form the \(\phi_{1}\) and \(\phi_{2}\) angles, respectively. The length of the center-to-center line is \(2a^{*}+\epsilon^{*}\), where \(\epsilon^{*}\) is the minimum gap between two JPs and \(a^{*}\) denotes the particle radius (with * representing dimensionality). (b) Illustration of various configurations of JP\({}_{1}\) around a JP\({}_{2}\) that is fixed with \(\mathbf{n}_{2}=-\mathbf{e}_{z}\). To explore all possible cases, we consider \(\phi_{2}\in[0,\pi]\) and \(\phi_{1}\in[-\pi,\pi]\); three example cases are shown of _Trans, Ortho_, and _Head-on_. Here we opt for a right-handed coordinate system with the \(y-\)axis pointing into the plane. (c) Chemical torque (T) on JP\({}_{1}\) due to JP\({}_{2}\) at \(\epsilon^{*}=0\) and (d) \(\epsilon^{*}=0.5a^{*}\) (dashed white lines depict the region of collision configurations realized in the current experiments). (e) Chemical torque on JP\({}_{1}\) for various tilts (\(\phi_{1}\)) at two contact points with JP\({}_{2}\) (\(\phi_{2}=\pi/4\) and 0), shown by dash-dot lines in (c). The schematics embedded in (e) represent the configuration of blue and red points for \(\phi_{2}=\pi/4\) and \(\phi_{2}=0\), respectively.

## VI Supporting Information

Representative movies:

- S1: Motion of an isolated active JP
- S2: JPs in _O-T_ collision
- S3: JPs in _H-T_ collision
- S4: JPs in _T-T_ collision
- S5: JPs in _C-T_ collision
- S6: JPs in _C-C_ collision
- S7: JPs in a _non-contact_ near-field interaction

Also included: mean square displacement and normalized velocity autocorrelation function curves of isolated active JPs; cumulative frequency plots (\(t_{contact}\), \(\beta\), \(\langle|\mathbf{v}_{\mathrm{inst.}}|\rangle\), and \(\tau_{\mathrm{r}}\)) and the corresponding function fits.

###### Acknowledgements.
The authors acknowledge the funding received from the Department of Science and Technology (SR/FST/ETII-055/2013) and the Science and Engineering Research Board (Grant numbers SB/S2/RJN-105/2017 and ECR/2018/000401), India. The authors also thank Dr. Harshwardhan Katkar from the Indian Institute of Technology Kanpur for his helpful discussions.
2309.15068
Gluon-gluon fusion contribution to the productions of three gauge bosons at the LHC
Productions of multiple gauge bosons at the LHC are sensitive to triple or quartic gauge couplings and thus provide a sensitive test for the electroweak sector of the Standard Model and allow for a probe of new physics. In this work we calculate the gluon-gluon initial state contribution to the productions of three gauge bosons ($Z\gamma\gamma$, $ZZ\gamma$ and $W^+W^-\gamma$) at the LHC, which is formally part of the NNLO corrections relative to the LO quark-antiquark channels. For each process we present the ratio between the gluon-gluon channels contribution and the quark-antiquark channels contribution. We found that such a ratio for $Z\gamma\gamma$ ($ZZ\gamma$) is of the order of $10^{-3}$ ($10^{-4}$), much smaller than the corresponding ratio for the diboson production due to the decrease of the gluon PDF when more particles appear in the final states. These small ratios imply that the gluon-gluon fusion contribution is phenomenologically negligible for the productions of $Z\gamma\gamma$ and $ZZ\gamma$. However, for $W^+W^-\gamma$ production, the ratio is about 5\%, which is of the same order of magnitude as the ratio for $W^+W^-$ production due to the big cancellation between the amplitudes of quark-antiquark channels. While such an effect can be neglected currently at the LHC, it may be accessible at the HL-LHC.
Jianpeng Dai, Zhenghong Hu, Tao Liu, Jin Min Yang
2023-09-26T17:00:26Z
http://arxiv.org/abs/2309.15068v2
# Gluon-gluon fusion contribution to the productions of three gauge bosons at the LHC ###### Abstract Productions of multiple gauge bosons at the LHC are sensitive to triple or quartic gauge couplings and thus provide a sensitive test for the electroweak sector of the Standard Model and allow for a probe of new physics. In this work we calculate the gluon-gluon initiate state contribution to the productions of three gauge bosons (\(Z\gamma\gamma\), \(ZZ\gamma\) and \(W^{+}W^{-}\gamma\)) at the LHC, which is formally part of NNLO effects compared to the LO quark-antiquark channels corrections. For each process we present the ratio between the gluon-gluon channels contribution and the quark-antiquark channels contribution. We found that such a ratio for \(Z\gamma\gamma\) (\(ZZ\gamma\)) is of the order of \(10^{-3}\) (\(10^{-4}\)), much smaller than the corresponding ratio for the diboson production due to the decrease of gluon PDF when more particles appear in the final states. These small ratios imply that gluon-gluon fusion contribution is phenomenological negligible for the productions of \(Z\gamma\gamma\) and \(ZZ\gamma\). However, for \(W^{+}W^{-}\gamma\) production, the ratio is about 5%, which is of the same order of magnitude as the ratio for \(W^{+}W^{-}\) production due to the big cancellation between the amplitudes of quark-antiquark channels. While such an effect can be neglected currently at the LHC, it may be accessible at the HL-LHC. ## 1 Introduction Productions of multiple gauge bosons at the LHC are sensitive to triple or quartic gauge couplings at tree level of scattering amplitudes, and thus provide a sensitive test for the electroweak (EW) sector of the Standard Model (SM) besides vector boson scattering (VBS) processes [1]. Any deviation from the SM prediction would be an indication of new physics beyond the SM (BSM). Also, they could be important backgrounds for many SM and BSM processes, e.g., \(W/Z\) boson plus two photons for Higgs production in association with \(W/Z\) boson where the Higgs decays to two photons. In contrast to diboson productions, triboson processes are generally quite rare if the leptonic decay channels are considered (the hadronic final states would have huge QCD backgrounds at hadron colliders). Recently, ATLAS and CMS observed some productions of three gauge bosons for the first time from proton-proton collisions with an unprecedented integrated luminosity, such as the productions of three massive gauge bosons [2; 3; 4; 5], one massive plus two massless photons [6; 7; 8], and two massive plus one massless photon [9; 10]. On the other hand, the SM lagrangian is expanded to include high dimensional operators to parameterize BSM effects in the SM effective field theory (SMEFT) [11; 12; 13], which provides a convenient way to understand correlations between various experimental results and has been widely used in both experimental and theoretical studies. Some analyses for the diboson, triboson and VBS processes have been performed in the framework of SMEFT [14; 15; 16; 17; 18; 19; 20; 21; 22]. Before discussing triboson productions at the LHC, we first take a look at diboson productions. It was found that the gluon-gluon initial state channels could contribute \(\mathcal{O}(10\%)\) to the leading order cross section which comes from the quark-antiquark channels [23], if the total charge of the produced diboson vanishes, i.e., \(\{\gamma\gamma\;\;Z\gamma,\;ZZ,\;W^{+}W^{-}\}\). 
All the external particles in the gluon-gluon channels are connected to a closed fermion loop and they are formally next-to-next-to-leading order (NNLO) corrections, while the large gluon flux in the parton distribution function (PDF) would compensate the loop factor \((\alpha_{s}/\pi)^{2}\) suppression. Then for triboson productions it is also expected that there may be similar non-negligible contributions from gluon-gluon fusion, which is one motivation of this work. We will evaluate the contribution of gluon-gluon fusion to the neutral-charge production processes, \(gg\to\{Z\gamma\gamma\;\ ZZ\gamma,\ W^{+}W^{-}\gamma\}\) at the parton level. The NLO QCD corrections to such processes from quark-antiquark channels with leptonic decays can be found in [24; 25]. Since there are no technical problems for evaluating one-loop five-point Feynman integrals, in our analysis we will try to understand the numerical results through their relations with diboson production at the LHC. Furthermore, for the \(gg\to\gamma\gamma\gamma\) amplitude, besides the gluon-gluon fusion contribution, we know from Furry theorem that there is at least one axial-vector coupling for each Feynman diagram to have non-vanishing effects. It means that in the three photon amplitudes we will have an overall anti-symmetric tensor \(\epsilon_{\mu\nu\rho\sigma}\), which first appears at the two-loop level. As for massive vector bosons, there are axial vector couplings at the leading loop and the structures of the amplitudes will become much more complicated. So, an explicit calculation is necessary for phenomenological analysis. This work is organized as follows. In the next section some details of the calculation will be described and the results will be shown in three subsections. Finally, the conclusion is made in Section 3. ## 2 Calculations and results In our calculation we use MadGraph5_aMC@NLO with version 3.4.2[26] for Monte Carlo simulations. We also use FeynArts and FormCalc[27; 28; 29] to cross-check and to get the detailed information of the physical amplitudes. Due to the numerical instability problem caused by the inverse Gram determinants in the conventional Passarino-Veltman reduction [30], we adopt the reduction scheme proposed in [31; 32] for one-loop five-point tensor integrals, which has been implemented in the public code Collier[33]. For parton distribution functions, we use LHAPDF6[34] with NNPDF3.0 set [35] at LO (with \(\alpha_{s}(m_{Z})=0.1247\)) and NNLO (with \(\alpha_{s}(m_{Z})=0.1190\)) fit for quark channels and gluon channels respectively. The factorization and renormalization scales are set to be the same as the dynamical partonic center-of-mass energy, \(\mu_{R}=\mu_{F}=\sqrt{\hat{s}}\). The revelent parameters used in the evaluation are \[\begin{split} m_{b}&=4.7\ \mathrm{GeV},\qquad m_{Z} =91.188\ \mathrm{GeV},\\ m_{t}&=173\ \mathrm{GeV},\qquad m_{W}=80.419\ \mathrm{GeV},\\ m_{h}&=125\ \mathrm{GeV},\qquad G_{F}=1.16639 \times 10^{-5}\ \mathrm{GeV}^{-2},\\ \alpha&=\frac{1}{132.507}.\end{split} \tag{1}\] Other quarks not listed above are thought to be massless. The collision energy \(\sqrt{s}\) is set to be \(13\ \mathrm{TeV}\). We use the following basic cuts for photons: \[p_{T}^{\gamma}>p_{T,min}^{\gamma},\ \ |\eta^{\gamma}|<2.37,\ \Delta R_{\gamma \gamma}>0.4. \tag{2}\] But we do not apply any cuts on massive vector bosons (\(Z\) and \(W^{\pm}\)). Here \(p_{T,min}^{\gamma}\) is chosen as a free parameter to see its impact on the total cross sections. 
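As an illustration of how the photon selection of Eq. (2) acts on generated events, a minimal sketch is given below. The four-vectors and helper functions are purely illustrative and are not part of the authors' MadGraph5_aMC@NLO or MadAnalysis5 tool chain.

```python
import math

# Photon selection of Eq. (2): pT > pT_min, |eta| < 2.37, Delta R(gamma, gamma) > 0.4.
# Photons are given as (px, py, pz, E) four-momenta in GeV (hypothetical values).

def pt(p):   return math.hypot(p[0], p[1])
def phi(p):  return math.atan2(p[1], p[0])
def eta(p):
    pmag = math.sqrt(p[0] ** 2 + p[1] ** 2 + p[2] ** 2)
    return 0.5 * math.log((pmag + p[2]) / (pmag - p[2]))

def delta_r(p1, p2):
    dphi = abs(phi(p1) - phi(p2))
    if dphi > math.pi:
        dphi = 2 * math.pi - dphi
    return math.hypot(eta(p1) - eta(p2), dphi)

def passes_photon_cuts(photons, pt_min=10.0):
    if any(pt(p) <= pt_min or abs(eta(p)) >= 2.37 for p in photons):
        return False
    return all(delta_r(a, b) > 0.4
               for i, a in enumerate(photons) for b in photons[i + 1:])

# Example event with two photons (hypothetical momenta):
photons = [(25.0, 5.0, 40.0, 47.4), (-12.0, 18.0, -10.0, 23.9)]
print(passes_photon_cuts(photons, pt_min=10.0))   # -> True
```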
### \(Z\gamma\gamma\) and \(Z\gamma\) productions

We start with \(pp\to Z\gamma\gamma\), which was measured by the CMS and ATLAS Collaborations recently [6; 7]. These two experimental groups both used MadGraph5_aMC@NLO in their analysis. Here we choose \(Z\gamma\) production as the reference process for comparison, since it is naively expected that the phase space would not change much by an additional photon and hence the gluon initiated channels may provide contributions of the same order of magnitude for both diboson and triboson productions, i.e. \(\sigma^{gg}(Z\gamma\gamma)/\sigma^{q\bar{q}}(Z\gamma\gamma)\simeq\sigma^{gg}(Z\gamma)/\sigma^{q\bar{q}}(Z\gamma)\). Higher-order corrections to \(Z\gamma\) and \(Z\gamma\gamma\) productions at the LHC have also been calculated in various directions, e.g., the NLO corrections to \(pp\to Z\gamma\gamma\) with the leptonic decays of the \(Z\)-boson have been studied in [24]. Since we are only interested in the ratio \(\sigma^{gg}/\sigma^{q\bar{q}}\), for simplicity only the tree-level contribution of the quark-antiquark channels and the one-loop contribution of the gluon-gluon channels are considered in the following analysis. The typical Feynman diagrams contributing to \(pp\to\{Z\gamma,~{}Z\gamma\gamma\}\) are shown in Fig. 1. Total cross sections at different \(p_{T,min}^{\gamma}\) are given in Table 1.

\begin{table} \begin{tabular}{|c|c|c|c|c|} \hline \(p_{T,min}^{\gamma}\) [GeV] & \(\sigma^{q\bar{q}}(Z\gamma)\) [pb] & \(\sigma^{gg}(Z\gamma)\) [pb] & \(\sigma^{q\bar{q}}(Z\gamma\gamma)\) [fb] & \(\sigma^{gg}(Z\gamma\gamma)\) [\(\times 10^{-2}\) fb] \\ \hline 10 & 72.96 & 0.8191 & 146.8 & 20.36 \\ 20 & 29.70 & 0.5774 & 39.12 & 8.269 \\ 30 & 15.32 & 0.3988 & 16.41 & 4.016 \\ 40 & 8.914 & 0.2685 & 8.591 & 2.224 \\ 50 & 5.573 & 0.1816 & 5.058 & 1.347 \\ 60 & 3.692 & 0.1235 & 3.272 & 0.8702 \\ 70 & 2.549 & 0.08440 & 2.257 & 0.5964 \\ 80 & 1.824 & 0.05918 & 1.643 & 0.4281 \\ 90 & 1.341 & 0.04144 & 1.227 & 0.3180 \\ 200 & 0.123 & 0.002188 & 0.1522 & 0.03515 \\ 300 & 0.0295 & 0.0003510 & 0.04486 & 0.007709 \\ \hline \end{tabular} \end{table} Table 1: Total cross sections for \(pp\to\{Z\gamma,Z\gamma\gamma\}\) at different \(p_{T,min}^{\gamma}\), using the cuts of Eq. (2) for photons. The superscript of \(\sigma\) represents different channels.

In Fig. 2 we show the ratio of \(\sigma^{gg}\) to \(\sigma^{q\bar{q}}\) as a function of \(p_{T,min}^{\gamma}\). It is easy to find that the quark-antiquark channels dominate in the low \(p_{T}^{\gamma}\) region, and the ratios \(\sigma^{gg}/\sigma^{q\bar{q}}\) reach maximum values at moderate values of \(p_{T}^{\gamma}\). Although not shown explicitly in Fig. 2, the ratio for \(Z\gamma\gamma\) production also decreases when \(p_{T}^{\gamma}\) gets large, which could be easily found in Table 1. This behavior is totally determined by the quark and gluon PDFs at the hadron collider, which could be easily checked numerically. The PDFs of the particles which are phenomenologically important for the evaluation are shown in Fig. 3. Another direct observation from the results is that the ratio for \(Z\gamma\) production is about 10 times larger than that for \(Z\gamma\gamma\). Then we need to understand why \(\sigma^{gg}(Z\gamma\gamma)\) is so small. As mentioned in the introduction, the amplitudes of \(gg\to Z\gamma\gamma\) and \(gg\to Z\gamma\) are totally different from each other. With the help of \(C\)-parity, one knows that the axial vector interaction between the \(Z\)-boson and quarks only contributes to the former amplitude, while the vector part fully contributes to the latter.
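For a quick numerical cross-check of the trend just described, the ratios \(\sigma^{gg}/\sigma^{q\bar{q}}\) for both final states can be formed directly from the Table 1 entries; the short sketch below simply reproduces the ratio curves of Fig. 2 from the tabulated cross sections (mind the \(10^{-2}\) fb unit of the last column).

```python
# Ratios sigma_gg / sigma_qqbar built from the Table 1 entries.
pt_min = [10, 20, 30, 40, 50, 60, 70, 80, 90, 200, 300]                                      # GeV
qq_za  = [72.96, 29.70, 15.32, 8.914, 5.573, 3.692, 2.549, 1.824, 1.341, 0.123, 0.0295]      # pb
gg_za  = [0.8191, 0.5774, 0.3988, 0.2685, 0.1816, 0.1235, 0.08440, 0.05918, 0.04144,
          0.002188, 0.0003510]                                                               # pb
qq_zaa = [146.8, 39.12, 16.41, 8.591, 5.058, 3.272, 2.257, 1.643, 1.227, 0.1522, 0.04486]    # fb
gg_zaa = [x * 1e-2 for x in [20.36, 8.269, 4.016, 2.224, 1.347, 0.8702, 0.5964, 0.4281,
                             0.3180, 0.03515, 0.007709]]                                     # fb (table quotes 1e-2 fb)

for pt, qq1, gg1, qq2, gg2 in zip(pt_min, qq_za, gg_za, qq_zaa, gg_zaa):
    print(f"pT,min = {pt:3d} GeV : gg/qq (Z a) = {gg1 / qq1:.3f}   gg/qq (Z a a) = {gg2 / qq2:.4f}")

# The Z-gamma ratio peaks at the percent level around pT ~ 50-90 GeV, while the
# Z-gamma-gamma ratio stays at the few-per-mille level, i.e. roughly an order of
# magnitude smaller, as stated in the text.
```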
From the Feynman rules of \(Z\)-boson couplings to up-type quarks \(\frac{g}{4\cos\theta_{W}}\gamma^{\mu}\left(1-\frac{8}{3}\sin^{2}\theta_{W}-\gamma_{5}\right)\) and to down-type quarks \(\frac{g}{4\cos\theta_{W}}\gamma^{\mu}\left(-1+\frac{4}{3}\sin^{2}\theta_{W}+\gamma_{5}\right)\), one can directly see that the axial coupling is larger in magnitude than the vector one in the bracket. As is well known, the top quark only provides a sizable contribution to the diboson and triboson amplitudes at high invariant masses of the final states. Thus, we only consider the effects of light quarks in the analysis. From the above arguments, it is rather easy to see that the amplitudes of \(gg\to Z\gamma\gamma\) are proportional to \(Q_{q}^{2}A_{q}\), where \(Q_{q}\) denotes the electric charge of the quark and \(A_{q}\) represents the axial vector coupling between quarks and the \(Z\)-boson. The fact \(A_{u}=-A_{d}\) leads to a cancellation between up-type and down-type quark loops. On the other hand, the vector interaction with the \(Z\)-boson, parameterized by \(V_{q}\), provides non-vanishing amplitudes for \(gg\to Z\gamma\) which are proportional to \(Q_{q}V_{q}\), and \(Q_{q}V_{q}\) has the same sign for all the quarks. In order to exclude possible internal cancellations that happen between Feynman diagrams with different orderings of the external legs, we define a new parameter \(R(q)=\sigma^{*}(q)/\sigma(q)\), where \(\sigma(q)\) is the ordinary cross section while the amplitudes in \(\sigma^{*}(q)\) are replaced by their absolute values for each Feynman diagram and all other parts in \(\sigma^{*}\) are exactly the same as in \(\sigma\). Here \(q\) denotes the corresponding quark loops in the calculation. Of course, only axial vector interactions with the \(Z\)-boson are considered for \(Z\gamma\gamma\) and vector interactions for \(gg\to Z\gamma\). The values of \(R(q)\) at \(p_{T,min}^{\gamma}=50\) GeV are shown in Table 2. These numerical results confirm the above analysis since \((|Q_{u}|^{2}+|Q_{d}|^{2})^{2}/(Q_{u}^{2}-Q_{d}^{2})^{2}\) is just equal to the ratio \(R(u,d)/R(u)\). We could also find that the degree of cancellation is similar for these two processes when only one type of quarks is taken into account. Obviously, the cancellation between different quarks cannot explain why the ratio \(\sigma^{gg}/\sigma^{q\bar{q}}\) is so suppressed for \(Z\gamma\gamma\) production.

Figure 3: Left: Parton distribution functions \(xf(x)\) for quarks, antiquarks, and gluons in the proton. Right: Parton distribution functions \(xf(x)\) for gluons and the \(u\) quark at small longitudinal fractions. These values are obtained from NNPDF3.0 [35] at \(Q=m_{Z}\).

Now we turn to the PDFs. We generate 10000 events for \(Z\gamma\gamma\) and \(Z\gamma\) by MadGraph5, and then use MadAnalysis5 [36] to get the event numbers \(N_{reg}\) in different longitudinal fraction regions. The ratio \(N_{reg}/N_{tot}\) (\(N_{tot}\) is the number of total events) is shown in Fig. 4 with \(p_{T,min}^{\gamma}=10\) GeV as an example. Here the distribution for \(Z\gamma\) is concentrated in the low fraction region with a peak around \(x=0.4\%\).
When an extra photon is added in the final states, the shape of the ratio distribution becomes more flat and the peak moves to \(x=0.7\%\). From Fig. 3 we see that \(f(0.004)\simeq\frac{5}{2}f(0.007)\). And in contrast to gluon, there is little change to the quark PDF in the region of small \(x\). So we can conclude that the difference of the ratios \(\sigma^{gg}/\sigma^{q\bar{q}}\) shown in Fig. 2 is mainly due to the suppressed gluon PDF for \(Z\gamma\gamma\) production. ### \(Zz\gamma\) and \(Zz\) productions Compared to \(Z\gamma\gamma\) production, the \(ZZ\gamma\) production is harder to measure at the LHC due to its lower production rate and the extra suppression factor of \(Z\)-boson decay. Although there are no published experimental results till now, it is still considered in this work for completeness. Following the same logic as in the preceding subsection, we choose \(ZZ\) as its references process with the following LO cross section at the 13 TeV LHC: \[\begin{split}\sigma^{q\bar{q}}(ZZ)&=10.51\text{ pb},\\ \sigma^{gg}(ZZ)&=0.9347\text{ pb}.\end{split} \tag{3}\] Naively, one would expect that the ratio \(\sigma^{gg}(ZZ)/\sigma^{q\bar{q}}(ZZ)\), which is approximately equal to 9%, should be much smaller than the ratio for the \(Z\gamma\) production. Compared to the \(gg\to Z\gamma\) production, an extra massive \(Z\)-boson requires a larger \(x\) for gluon PDF and thus would reduce the total cross section. Seemingly there is a contradiction between the numerical results and our arguments. To understand the above puzzle, a close look at the amplitudes is necessary. First, after replacing one photon with \(Z\)-boson in Fig. 1, one obtains the corresponding \(ZZ\gamma\) production and \(ZZ\) production Feynman diagrams. Other diagrams which contain Higgs propagators are shown in Fig. 5. Obviously, the amplitudes for triboson production in this figure vanishes due to \(C\)-parity. Then the amplitudes of \(gg\to ZZ\gamma\) should have similar structures as \(gg\to Z\gamma\gamma\). The only difference comes from the coupling constants, which are proportional to \(Q_{q}V_{q}A_{q}\) and cannot bring significant change to the total cross section. About the right diagram of Fig 5, seemingly its amplitude should be suppressed by the heavy top quark mass. But in real calculations, at least one quark mass has to be picked out in the numerator from the fermion propagators and so no quark mass is left at the leading approximation of the amplitudes. The same property has also been observed in the processes of single and double Higgs productions at the LHC. Although this extra amplitude will not be suppressed by the heavy quark mass, it is found that this contribution to the total cross section is small and cannot balance the effect of gluon PDF from numerical calculations. The real reason for relative large \(\sigma^{gg}(ZZ)/\sigma^{q\bar{q}}(ZZ)\) is that the amplitudes of \(gg\to Z\gamma\) are proportional to \(Q_{q}V_{q}\) and the corresponding \(ZZ\) amplitudes without Higgs propagators are proportional to \(V_{q}^{2}+A_{q}^{2}\). The factor \(V_{q}^{2}+A_{q}^{2}\) in \(ZZ\) production leads to about a factor of 10 enhancement to the cross section compared with \(Z\gamma\), which just compensates the suppression by gluon PDF. As for the \(ZZ\gamma\) production, since there are no such an enhancement at the amplitude level, the ratio \(\sigma^{gg}(ZZ\gamma)/\sigma^{q\bar{q}}(ZZ\gamma)\) should remain small as expected. Now we display the numerical results. 
In Table 3 we show the results of \(ZZ\gamma\) production at different \(p_{T,min}^{\gamma}\). The ratio \(\sigma^{gg}/\sigma^{q\bar{q}}\) as a function of \(p_{T,min}^{\gamma}\) is shown in Fig. 6. From these results, we find that \(\sigma^{gg}(ZZ\gamma)/\sigma^{q\bar{q}}(ZZ\gamma)\) is about one order of magnitude smaller than \(\sigma^{gg}(Z\gamma\gamma)/\sigma^{q\bar{q}}(Z\gamma\gamma)\), which could also be explained by the gluon PDF. ### \(W^{+}W^{-}\gamma\) and \(W^{+}W^{-}\) productions From the analysis in the preceding subsections, one may expect that the calculation for \(W^{+}W^{-}\gamma\) and \(W^{+}W^{-}\) productions would be rather simple, which is not case as shown in he following. We start with the cross section of \(W^{+}W^{-}\) production, which is given by \[\sigma^{q\bar{q}}(W^{+}W^{-}) =72.35\ \text{pb},\] \[\sigma^{gg}(W^{+}W^{-}) =2.873\ \text{pb}. \tag{4}\] Similar as \(ZZ\) and \(ZZ\gamma\) productions, there are new types of Feynman diagrams besides the ones plotted in Fig. 1. These new Feynman diagrams contributing to \(pp\to W^{+}W^{-}\gamma\) and \(pp\to W^{+}W^{-}\) in the unitary gauge are shown in Fig. 7. Besides the ordinary interactions which are already encountered in the previous examples, the triple and quartic gauge boson interactions also appear in these new diagrams. Now the complicated amplitude structures make it hard to get any conclusion about their total cross sections before numerical calculations. What we can only say is that if the contribution from Fig. 7 is neglected, the ratio \(\sigma^{gg}/\sigma^{q\bar{q}}\) for the triboson production should have the same order of magnitude as \(ZZ\gamma\) production. Taking \(\sigma^{q\bar{q}}(W^{+}W^{-})\) as an example, we find that there is a big cancellation between the \(t\)-channel and \(s\)-channel amplitudes through explicit calculations. Since similar cancellations also happen for the more complicated process \(q\bar{q}\to W^{+}W^{-}\gamma\) and the results of \(W^{+}W^{-}\gamma\) will be shown explicitly later, here we skip the simple proof for \(W^{+}W^{-}\). \begin{table} \begin{tabular}{|c|c|c|} \hline \(p_{T,min}^{\gamma}\) [GeV] & \(\sigma^{q\bar{q}}(ZZ\gamma)\) [fb] & \(\sigma^{gg}(ZZ\gamma)\) [\(\times 10^{-4}\) fb] \\ \hline 10 & 45.46 & 25.12 \\ 20 & 25.56 & 21.62 \\ 30 & 17.03 & 17.80 \\ 40 & 12.04 & 14.54 \\ 50 & 9.008 & 11.81 \\ 60 & 6.936 & 9.772 \\ 70 & 5.474 & 8.159 \\ 80 & 4.441 & 6.850 \\ 90 & 3.601 & 5.826 \\ \hline \end{tabular} \end{table} Table 3: Total cross section for \(ZZ\gamma\) production at the 13 TeV LHC with different \(p_{T,min}^{\gamma}\), using the cuts of Eq. (2) for photons. The results for \(ZZ\) production are shown in Eq. (3). Figure 5: Additional Feynman diagrams contributing to \(pp\to\{ZZ\gamma,ZZ\}\) besides Fig. 1 where one photon is replaced by a \(Z\)-boson. Before going to the numerical results, we want to emphasis that the subtracted amplitudes should not be gauge invariant and the corresponding cross sections which have no physical meanings are just used to understand the differences between the triboson processes discussed in this work. Figure 6: The ratio \(\sigma^{gg}(ZZ\gamma)/\sigma^{q\bar{q}}(ZZ\gamma)\) at the 13 TeV LHC as a function of \(p_{T,min}^{\gamma}\). The cross section for \(W^{+}W^{-}\gamma\) production at different \(p_{T,min}^{\gamma}\) is shown in Table 4. Here \(\sigma_{F1}\) represents the contribution which only comes from the Feynman diagrams plotted in Fig. 1. 
The ratio between the gluon-gluon channel and the quark-antiquark channel is shown in Fig. 8. From these results we see that the ratio \(\sigma^{gg}/\sigma^{q\bar{q}}\) can reach 5% due to the cancellation in \(q\bar{q}\to W^{+}W^{-}\gamma\). Due to the large couplings between quarks and the \(W\)-boson, the cross sections \(\sigma_{F1}^{gg,q\bar{q}}\) are much larger than the corresponding cross sections of \(ZZ\gamma\) production. On the other hand, for \(\sigma_{F1}^{gg}/\sigma_{F1}^{q\bar{q}}\) we get the same order as for the processes with \(Z\)-bosons, as expected, since the ratio is insensitive to the interactions between quarks and gauge bosons. From the experimental side, the measured fiducial cross section for \(W^{+}W^{-}\gamma\) production with an integrated luminosity of 138 fb\({}^{-1}\) [10] at the 13 TeV LHC is in good agreement with the NLO QCD prediction. The relative experimental error is around 28% and it surpasses the gluon-gluon channel contribution, which is about 5% of the LO value. At the High Luminosity LHC (HL-LHC) with \(\sqrt{s}=14\) TeV and a luminosity of 3 ab\({}^{-1}\), the experimental error could be reduced to a few percent. Meanwhile, with the increase of the center-of-mass energy, the contribution from the gluon-gluon channel will become more important. Thus the gluon-gluon channel contribution to \(W^{+}W^{-}\gamma\) production should be considered in future analyses. While for \(Z\gamma\gamma\) and \(ZZ\gamma\), the ratio \(\sigma_{gg}/\sigma_{q\bar{q}}\) is much smaller and thus the gluon-gluon channel contribution could be safely neglected.

\begin{table} \begin{tabular}{|c|c|c|c|c|} \hline \(p_{T,min}^{\gamma}\) [GeV] & \(\sigma^{q\bar{q}}(W^{+}W^{-}\gamma)\) [fb] & \(\sigma_{F1}^{q\bar{q}}(W^{+}W^{-}\gamma)\) [fb] & \(\sigma^{gg}(W^{+}W^{-}\gamma)\) [fb] & \(\sigma_{F1}^{gg}(W^{+}W^{-}\gamma)\) [fb] \\ \hline 10 & 291.0 & 6776 & 12.87 & 5.203 \\ 20 & 159.5 & 4698 & 7.928 & 4.548 \\ 30 & 103.8 & 3680 & 5.519 & 3.972 \\ 40 & 74.56 & 3046 & 4.046 & 3.443 \\ 50 & 56.37 & 2582 & 3.085 & 3.003 \\ 60 & 44.42 & 2262 & 2.400 & 2.619 \\ 70 & 35.72 & 2008 & 1.904 & 2.305 \\ 80 & 29.53 & 1801 & 1.534 & 2.027 \\ 90 & 24.63 & 1635 & 1.247 & 1.792 \\ \hline \end{tabular} \end{table} Table 4: Total cross sections for \(W^{+}W^{-}\gamma\) production at different \(p_{T,min}^{\gamma}\) in \(pp\) collisions, using the cuts of Eq. (2) for photons. The subscript "\(F1\)" represents the cross sections only considering the Feynman diagrams in Fig. 1(a). The results for the production of \(W^{+}W^{-}\) are given in Eq. (4).

## 3 Conclusion

We calculated the gluon-gluon initial state contribution to the productions of three gauge bosons at the 13 TeV LHC, which is formally part of the NNLO effects compared to the LO quark-antiquark channels. To understand the obtained results, the ratio between the gluon-gluon channel contribution and the quark-antiquark channel contribution was presented, and three different diboson production processes were chosen for comparative studies. We found that the ratio \(\sigma_{gg}/\sigma_{q\bar{q}}\) for \(Z\gamma\gamma\) (\(ZZ\gamma\)) production is of the order of \(10^{-3}\) (\(10^{-4}\)), much smaller than the corresponding ratio of diboson production due to the decrease of the gluon PDF when more particles appear in the final states. These tiny ratios imply that the gluon-gluon fusion contribution is phenomenologically negligible for these two processes.
However, for \(W^{+}W^{-}\gamma\) production, the ratio \(\sigma^{gg}/\sigma^{q\bar{q}}\) can reach about 5%, of the same order of magnitude as the ratio for \(W^{+}W^{-}\), because of the big cancellation between the amplitudes of the quark-antiquark channels. Due to the large experimental uncertainty on the fiducial cross section, currently such gluon-gluon fusion effects can be safely neglected, while at the HL-LHC these effects may be accessible and should be considered.

Figure 8: The ratio \(\sigma_{gg}/\sigma_{q\bar{q}}\) versus \(p_{T,min}^{\gamma}\). In the upper panel we considered all diagrams shown in Fig. 1 and Fig. 7 for \(W^{+}W^{-}\gamma\) and \(W^{+}W^{-}\) productions. In the lower panel we only considered the diagrams in Fig. 1 for \(W^{+}W^{-}\gamma\) production.

###### Acknowledgements.

This work was supported in part by IHEP under Grant No. Y9515570U1, by the National Natural Science Foundation of China (NNSFC) under grant Nos. 12375082, 12135013, 11821505, 12075300 and 12335005, by the Peng-Huan-Wu Theoretical Physics Innovation Center (12047503), by the CAS Center for Excellence in Particle Physics (CCEPP), and by the Key Research Program of the Chinese Academy of Sciences, Grant No. XDPB15.
2302.00071
Solar coronal density turbulence and magnetic field strength at the source regions of two successive metric type II radio bursts
We report spectral and polarimeter observations of two weak, low frequency (${\approx}$85-60\,MHz) solar coronal type II radio bursts that occurred on 2020 May 29 within a time interval ${\approx}$2\,min. The bursts had fine structures, and were due to harmonic plasma emission. Our analysis indicates that the magnetohydrodynamic (MHD) shocks responsible for the 1st and 2nd type II bursts were generated by the leading edge (LE) of an extreme-ultraviolet (EUV) flux rope/coronal mass ejection (CME) and interaction of its flank with a neighbouring coronal structure, respectively. The CME deflected from the radial direction by ${\approx}25^{\arcdeg}$ during propagation in the near-Sun corona. The estimated power spectral density (PSD) and magnetic field strength ($B$) near the location of the 1st burst at heliocentric distance $r{\approx}1.35R_{\odot}$ are $\rm {\approx}2{\times}10^{-3}\,W^{2}m$ and ${\approx}$1.8\,G, respectively. The corresponding values for the 2nd burst at the same $r$ are $\rm {\approx}10^{-3}\,W^{2}m$ and ${\approx}$0.9\,G. The significant spatial scales of the coronal turbulence at the location of the two type II bursts are ${\approx}$62\,-\,1\,Mm. Our conclusions from the present work are that the turbulence and magnetic field strength in the coronal region near the CME LE are higher compared to the corresponding values close to its flank. The derived estimates of the two parameters correspond to the same $r$ for both the CME LE and its flank, with a delay of ${\approx}$2\,min for the latter.
R. Ramesh, C. Kathiravan, Anshu Kumari
2023-01-31T20:09:39Z
http://arxiv.org/abs/2302.00071v1
Solar coronal density turbulence and magnetic field strength at the source regions of two successive metric type II radio bursts ###### Abstract We report spectral and polarimeter observations of two weak, low frequency (\(\approx\)85-60 MHz) solar coronal type II radio bursts that occurred on 2020 May 29 within a time interval \(\approx\)2 min. The bursts had fine structures, and were due to harmonic plasma emission. Our analysis indicates that the magnetohydrodynamic (MHD) shocks responsible for the 1st and 2nd type II bursts were generated by the leading edge (LE) of an extreme-ultraviolet (EUV) flux rope/coronal mass ejection (CME) and interaction of its flank with a neighbouring coronal structure, respectively. The CME deflected from the radial direction by \(\approx\)25\({}^{\arcdeg}\) during propagation in the near-Sun corona. The estimated power spectral density (PSD) and magnetic field strength (\(B\)) near the location of the 1st burst at heliocentric distance \(r\)\(\approx\)1.35\(R_{\odot}\) are \(\approx\)2\(\times\)10\({}^{-3}\) W\({}^{2}\)m and \(\approx\)1.8 G, respectively. The corresponding values for the 2nd burst at the same \(r\) are \(\approx\)10\({}^{-3}\) W\({}^{2}\)m and \(\approx\)0.9 G. The significant spatial scales of the coronal turbulence at the location of the two type II bursts are \(\approx\)62 - 1 Mm. Our conclusions from the present work are that the turbulence and magnetic field strength in the coronal region near the CME LE are higher compared to the corresponding values close to its flank. The derived estimates of the two parameters correspond to the same \(r\) for both the CME LE and its flank, with a delay of \(\approx\)2 min for the latter. Sun: activity; Sun: corona; Sun: coronal mass ejections: CMEs; Sun: radio radiation 0000-0002-4880-8800]R. Ramesh 0000-0002-4072-3886]C. Kathiravan 0000-0002-4880-7888]Anshu Kumari ## 1 Introduction Solar type II radio bursts appear in the spectrograph records as slowly drifting emission lanes from high to low frequencies. They are due to plasma oscillations caused by the electrons accelerated at the MHD shocks propagating outward in the solar atmosphere. These shocks are caused by coronal mass ejections (CMEs) and/or flares. The frequency drift rate (\(\sim\)0.5 MHz s\({}^{-1}\)) of the bursts result from the decrease of electron density (\(N_{e}\)) and hence the plasma frequency (\(f_{p}\)), with increasing \(r\). The detailed characteristics of type II bursts could be found in Nelson & Melrose (1985); Mann et al. (1995); Aurass (1997); Gopalswamy (2006); Nindos et al. (2011). Sometimes two type II bursts occur in quick succession within a time interval of \(\sim\)10 min. They were first reported by Robinson & Sheridan (1982). The occurrence of such events are attributed to either two successive flares or two successive CMEs or a flare and CME, or leading edge (LE) and flank of a CME (Mancuso & Raymond, 2004; Shanmugaraju et al., 2005; Subramanian & Ebenezer, 2006; Cho et al., 2008, 2011; Hariharan et al., 2015; Lv et al., 2017; Koval et al., 2021). The CME driven type II bursts could occur at locations along the front of the shock wherever appropriate conditions for electron acceleration are satisfied (Knock & Cairns, 2005; Kouloumvakos et al., 2021; Jebaraj et al., 2021; Ramesh et al., 2022). Statistical study using two-dimensional imaging observations of coronal type II bursts observed near the solar limb by Ramesh et al. 
(2012) indicate that they are located within the angular range \(\lesssim\)46\({}^{\circ}\) from the central position angle of the LE of the associated CMEs. Occasionally type II bursts show fine structure in both time and frequency domains. The bandwidth of emission is related to the size scales of the density inhomogeneities or turbulence in the corona (see e.g. Mugundhan et al., 2017). The observed angular broadening of the 'radio' Sun at low frequencies is considered to be due to scattering of radio waves by similar inhomogenities (Sastry, 1994; Ramesh et al., 2006; Thejappa & MacDowall, 2008; Zhang et al., 2022). The spatial scales of such inhomogeneities has been recently reported by Carley et al. (2021) using observations of the fine structures in type II bursts. The distribution follows a power law with spectral index in the range -1.7 to -2.0 at \(r\)\(\approx\)2\(R_{\odot}\) which is close to the value of \(-\)5/3 expected of fully developed Kolmogorov-like turbulence. Note that the power spectrum analysis mentioned above is carried out by first converting the frequency range of observation to heliocentric distance range using a coronal density model. Then, autocorrelation of the radio flux (which will be a function of heliocentric distance after the aforementioned conversion) and its Fourier transformation are carried out (see e.g. Chen et al., 2018). Moving further, it is known that plasma emission in a magnetic field gets split as ordinary (\(O\)) and extraordinary (\(X\)) modes. Since the propagation characteristics of these two modes are different, there will be a resultant circular polarization (Melrose & Sy, 1972). In the case of harmonic plasma emission, the associated \(B\) can be estimated in a relatively simple manner (see e.g. Melrose et al. (1980); Zlotnik (1981)). Several such estimates of \(B\) using observations of weak circularly polarized emission from harmonic type II bursts are there in the literature (Hariharan et al., 2014; Kumari et al., 2017, 2019; Ramesh et al., 2022; Ramesh & Kathiravan, 2022). The above mentioned work by various authors indicate that PSD and/or \(B\) are useful parameters to compare successive type II bursts. But our current knowledge is very limited. For e.g. there are only a few published reports of \(B\) at different locations along a coronal shock close to the Sun. Using ultraviolet spectra and white-light observations of a partial 'halo' CME in the plane of the sky, Bemporad et al. (2014) showed that \(B\) near the LE (flank) of the CME at \(r\)\(\approx\)2.6\(R_{\odot}\) (2.3\(R_{\odot}\)) is \(\approx\)0.21 G (0.24 G). Koval et al. (2021) reported spectral observations of two 'fractured' type II bursts due to the interaction of the nose of a rising CME/shock with a pseudo streamer, and its flank with a flux tube. The estimated \(B\) values from the two bursts at \(r\)\(\approx\)2.6\(R_{\odot}\) were \(\approx\)0.8 & 1 G, respectively. Hence the present work. ## 2 Observations The radio spectral data were obtained with the GAuribidanur Pulsar System (GAPS, Kshitij et al., 2022) in the Gauribidanur Observatory (Ramesh, 2011; Ramesh et al., 2014) located about 100 km north of Bangalore1. The front-end of GAPS has an one-dimensional array of sixteen log-periodic dipole antennas (LPDA, Ramesh et al., 1998) set up along a North-South baseline. The frequency range of operation is 85 - 45 MHz. 
The half-power width of the array response pattern ('beam') for observations near the zenith is \(\approx\)110\({}^{\circ}\times\)3\({}^{\circ}\) (right ascension, \(\rm R.A.\times declination\), decl.). The width in the direction of R.A. is frequency independent. Along declination, it is at the highest frequency of operation, i.e. 85 MHz. The observations were carried out with a Field Programmable Gate Array (FPGA) based digital back-end receiver system (Mugundhan et al., 2018) over the aforementioned frequency range with a sampling rate of \(\approx\)90 MHz. Data acquisition were simultaneous at all the frequencies. The spectral bandwidth and integration time are \(\approx\)44 kHz and \(\approx\)4 msec, respectively (see Kshitij et al., 2022). For polarization data, we used observations with the Gauribidanur RAdio Spectro-Polarimeter (GRASP, Kishore et al., 2015). It has two LPDAs in orthogonal orientation to each other (Sasikumar Raja et al., 2013) for observations of Stokes I & V emission. The response pattern of each LPDA is wide with half-power width \(\approx\)80\({}^{\circ}\) in both right ascension and declination, independent of frequency. The antenna and the receiver systems are routinely calibrated by carrying out observations in the direction of the Galactic center as described in Kishore et al. (2015). The minimum degree of circular polarization (\(dcp\)=\(|V|/I\)) detectable with GRASP is \(\lesssim\)0.01. Linear polarization from the solar atmosphere is absent at low radio frequencies (Grognard & McLean, 1973; Morosan et al., 2022). For information on CMEs, we made use of the catalog generated from observations in white-light with the Large Angle and Spectrometric Coronagraph C2 (LASCO C2, Brueckner et al., 1995) onboard the SOlar and Heliospheric Observatory (SOHO)2. For information on the associated solar surface activity, we used data obtained in Extreme Ultra-Violet (EUV) at 193A with the Atmospheric Imaging Assembly (AIA, Lemen et al., 2012) on board the Solar Dynamics Observatory (SDO). Footnote 2: [https://cdaw.gsfc.nasa.gov/CME_list/](https://cdaw.gsfc.nasa.gov/CME_list/) Figure 1 shows the GAPS observations of a type III burst followed by successive type II radio bursts from the solar corona on 2020 May 29. The overall bandwidths of the two type II bursts are limited. While the start frequency of the 1st type II burst seems to be \(\gtrsim\)80 MHz, its end frequency is \(\approx\)62 MHz. Compared to this, the frequency range of the 2nd type II burst is \(\approx\)75 - 62 MHz. This is consistent with the statistical result that the start frequency of the 2nd type II burst in successive type II bursts is always lesser than that of the 1st type II burst (Shanmugaraju et al., 2005; Subramanian & Ebenezer, 2006). The two type II bursts occurred during the time intervals \(\approx\)07:24:30 - 07:26:30 UT and \(\approx\)07:27:30 - 07:28:30 UT, respectively. They were associated with a M1.1 class GOES soft X-flare observed during the interval \(\approx\)07:13 - 07:28 UT. The maximum in the flare emission occurred at \(\approx\)07:24 UT. The flare location was at N32E893 near the east limb of the Sun. This indicates that the type II bursts in Figure 1 must be due to harmonic plasma emission since the corresponding fundamental (F) component from limb events as in the present case are likely to be occulted by the overlying corona and hence do not reach the observer. The directivity of F-component is also limited (see e.g. Nelson & Melrose, 1985). 
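The \(dcp\) analysis that follows reduces, in essence, to forming \(|V|/I\) from the GRASP Stokes records, smoothing the noisy curve with a least squares fit, and reading off the local maxima after removing the DC offset. A minimal sketch of such a step is given below; the input file, the polynomial order of the fit and the peak-finding threshold are illustrative assumptions and are not part of the GRASP pipeline.
```python
import numpy as np
from scipy.signal import find_peaks

# Hypothetical input: time (s since start of scan) and calibrated Stokes I, V
# light curves integrated over 65-70 MHz (file name/format assumed for illustration).
time, stokes_i, stokes_v = np.loadtxt("grasp_stokes_iv.txt", unpack=True)

dcp = np.abs(stokes_v) / stokes_i            # degree of circular polarization, |V|/I

# Least squares polynomial fit to smooth the low signal-to-noise dcp curve
# (the polynomial order is an assumption of this sketch).
fit = np.polyval(np.polyfit(time, dcp, deg=12), time)

# Remove the DC offset and locate the maxima associated with the bursts
fit -= np.median(fit)
peaks, _ = find_peaks(fit, prominence=0.02)
for p in peaks:
    print(f"t = {time[p]:7.1f} s   dcp = {fit[p]:.2f}")
```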
Figure 2 shows the \(dcp\) obtained using the GRASP observations integrated over the frequency range \(\approx\)65 - 70 MHz during the same time interval as in Figure 1. The signal-to-noise ratio is poor due to the limited sensitivity of GRASP. Hence we used a least squares fit for the observed data points. It shows maxima in the \(dcp\) near \(\approx\)07:24:30 UT, \(\approx\)07:26 UT, & \(\approx\)07:28 UT (indicated by arrow marks). These correspond to the type III, 1st and 2nd type II bursts in the GAPS dynamic spectrum in Figure 1, respectively. The \(dcp\) values of the aforementioned maxima (after subtracting the DC offset in the data) are \(\approx\)0.27, 0.14, and 0.07, respectively. These are consistent with the earlier reports on \(dcp\) for type III & II bursts (Dulk & Suzuki, 1980; Ramesh et al., 2010; Sasikumar Raja & Ramesh, 2013; Hariharan et al., 2015; Kumari et al., 2017, 2019). Figure 1: GAPS observations of transient radio emission from the solar corona on 2020 May 29. The fast drifting emission close to \(\approx\)07:24:30 UT is a type III burst. The relatively slow drifting emission during the intervals \(\approx\)07:24:30 - 07:26:30 UT and \(\approx\)07:27:30 - 07:28:30 UT are successive type II radio bursts. Figure 2: GRASP observations (65-70 MHz) of the \(dcp\) of the type III burst and successive type II bursts in Figure 1. The red colour profile is the least squares fit to the data points. An inspection of the SDO/AIA-193A running difference image obtained at \(\approx\)07:25 UT indicates that the 1st type II burst was associated with an EUV flux rope like structure (indicated by the red arrow in the left panel of Figure 3) which propagated outwards from the same location as the flare mentioned above. Its position angle (PA, measured counter-clockwise from the solar north) is \(\approx\)50\({}^{\arcdeg}\), and its estimated linear speed is \(\approx\)477 km/s in the SDO/AIA-193A field-of-view (FoV). The estimated speed of the MHD shock associated with the two bursts is \(\approx\)506\(\pm\)33 km/s according to the commonly used \(N_{e}\) models for the solar corona (Baumbach, 1937; Allen, 1947; Newkirk, 1961). We used a density multiplier of 0.5 in the aforesaid models in order to match the speed of the EUV disturbance mentioned above. Since the present observations are close to the sunspot minimum period, use of the above density multiplier is justified (see e.g. Newkirk, 1967; Ramesh et al., 2020). Note that the shock speeds obtained using other \(N_{e}\) models were different despite attempts with different density multipliers. The leading edge (LE) of the flux rope was at \(r\)\(\approx\)1.29\(R_{\odot}\) at \(\approx\)07:25 UT when the 1st type II burst in Figure 1 was observed near 75 MHz. According to the \(N_{e}\) models mentioned above, the plasma level corresponding to 37.5 MHz (the F-component) should be at \(r\)\(\approx\)1.35\(\pm\)0.01\(R_{\odot}\). This is reasonable considering that low-frequency radio observations during the recent sunspot minimum period in 2019 indicate that the same plasma level in the background corona should be at \(r\)\(\approx\)1.24\(R_{\odot}\) (see e.g. Ramesh et al., 2020). The shock and the type II burst are expected to be located ahead of the associated propagating disturbance, and the shock front, respectively. Gopalswamy et al.
(2012) showed that for a propagating coronal disturbance like the EUV flux rope with LE at \(r\)\(\approx\)1.30\(R_{\odot}\), the associated shock could be ahead by \(\approx\)0.15\(R_{\odot}\) (shock standoff distance). According to the statistical results of Suresh et al. (2016), the standoff distance should be 0.16\(\pm\)0.1\(R_{\odot}\) near \(r\)\(\approx\)1.3\(R_{\odot}\). Similar statistical work by Kim et al. (2012) indicates \(\approx\)0.2\(R_{\odot}\) at the same distance. The expected locations of the type II burst (i.e. 37.5 MHz plasma level) and the flux rope LE in the present case correspond to a shock standoff distance of \(\approx\)0.06\(R_{\odot}\). This is consistent with the aforementioned results. Hence we believe that the 1st type II burst is due to the LE of the EUV flux rope in the left panel of Figure 3. Due to technical reasons, we could not have coordinated imaging observations with the Gauribidanur radioheliograph (Ramesh et al., 2014) for the present event to verify the location of the burst. Similar observations were not available elsewhere also. According to the SOHO/LASCO CME catalog, a CME was observed on 2020 May 29 at \(\approx\)08:00 UT with LE at \(r\)\(\approx\)3.1\(R_{\odot}\). Its measurement position angle (MPA) and angular width were \(\approx\)63\({}^{\circ}\) and \(\approx\)37\({}^{\circ}\), respectively4. The narrow bandwidth of the type II bursts is reasonably consistent with the latter (see e.g. Ramesh et al., 2022). The MPA of the LE was \(\approx\)75\({}^{\circ}\) at \(r\)\(\approx\)5.8\(R_{\odot}\). The CME had a linear speed of \(\approx\)337 km/s and deceleration of \(\approx\) -13.2m/s\({}^{2}\) in the SOHO-LASCO FoV. But, its initial speed in the range \(r\)\(\approx\)1-2\(R_{\odot}\) as per the quadratic fit to its height-time measurements is \(\approx\)420 km/s5. This is close to the propagation speed of the aforementioned EUV flux rope (i.e. \(\approx\)477 km/s) in the present case. So, we think that the EUV flux rope in Figure 3 is the near-Sun signature of the CME. Footnote 4: [https://cdaw.gsfc.nasa.gov/CME_list/UNIVERSAL/2020_05/yht/20200529.080005.w037n.v0337.p069g.yht](https://cdaw.gsfc.nasa.gov/CME_list/UNIVERSAL/2020_05/yht/20200529.080005.w037n.v0337.p069g.yht) Footnote 5: [https://cdaw.gsfc.nasa.gov/CME_list/UNIVERSAL/2020_05/htpng/20200529.080005.p069g.htp.html](https://cdaw.gsfc.nasa.gov/CME_list/UNIVERSAL/2020_05/htpng/20200529.080005.p069g.htp.html) The SDO/AIA-193A running difference image obtained at \(\approx\)07:28 UT shows an upward rising coronal loop at \(r\)\(\approx\)1.21\(R_{\odot}\) and PA\(\approx\)40\({}^{\circ}\) near the northern flank of the same flux rope associated with the 1st type II burst (see cyan arrow in the right panel of Figure 3). The onset time of the 2nd type II burst in Figure 1 (\(\approx\)07:27:30 UT) at \(\approx\)75 MHz was close to the aforementioned epoch. Since the plasma layer of the F-component (37.5 MHz) of the burst is expected to be at \(r\)\(\approx\)1.35\(\pm\)0.01\(R_{\odot}\), the shock standoff distance in this case is \(\approx\)0.14\(R_{\odot}\). This is within the range of the similar values mentioned in the literature (see previous paragraph). No other CMEs or propagating coronal disturbances were observed during the time interval between the two bursts in Figure 1. Any possibility of association between the 2nd type II burst and X-ray flare mentioned earlier is also minimal since the latter had almost ended when the burst was observed (see e.g. 
Classen & Aurass 2002; Ramesh et al., 2010). We further find from the different observations that: (i) the temporal correlation between the onset of the 2nd type II burst and the movement of the coronal loop located near the flank of the EUV flux rope is similar to the association between the 1st type II burst and the LE of the same EUV flux rope; (ii) the time interval between the appearance of the EUV flux rope and the beginning of the coronal loop movement (\(\approx\)3.0 min) at nearly the same location (the difference between the respective \(r\) values is only \(\approx\)0.08\(R_{\odot}\)) is approximately equal to the delay between the start times of the 1st and 2nd type II bursts (\(\approx\)2.5 min) at the same frequency, i.e. 75 MHz. Considering all the above details, we think that the coronal loop motion mentioned above is due to interaction between the earlier erupted EUV flux rope and the adjacent loops, and that this resulted in the 2nd type II burst at the flank of the flux rope/CME (see e.g. Reiner et al., 2003; Cho et al., 2007, 2011; Feng et al., 2012; Chen et al., 2014; Hariharan et al., 2015). The very close temporal correspondence (\(\lesssim\)30 sec) between the onset of the coronal loop movement at the flank of the EUV flux rope and the 2nd type II burst strengthens this reasoning. Note that the statistical results of Cho et al. (2008) indicate a time difference of \(\lesssim\)2 min between CME flank-streamer interaction and the start time of the associated type II bursts. The gradual southward tilt (towards the equator) of the CME from PA\(\approx\)50\({}^{{}^{\circ}}\) at \(r\)\(\approx\)1.3\(R_{\odot}\) to PA\(\approx\)63\({}^{{}^{\circ}}\) at \(r\)\(\approx\)3.1\(R_{\odot}\) and then PA\(\approx\)75\({}^{{}^{\circ}}\) at \(r\)\(\approx\)5.8\(R_{\odot}\) with time (see previous paragraph) hints at deflected propagation of the CME. This is further potential evidence for the aforesaid interaction with the coronal loop, particularly since the latter was at the northern flank of the CME/EUV flux rope and the CME deflection was towards the south (see e.g. Wang et al., 2011). Since it is a limb event, the above changes in PA are expected to be free of projection effects. Figure 3: LEFT: SDO/AIA-193Å observations of the EUV flux rope (indicated by the red arrow) on 2020 May 29 at \(\approx\)07:25 UT. Solar north is straight up and east is to the left. The image shown corresponds to the north-east quadrant of the Sun. The solar limb is indicated by the curved red line. The lower right corner of the image is the center of the Sun. RIGHT: Same as the image in the left panel, but observed at \(\approx\)07:28 UT. The cyan arrow indicates the rising EUV loop. ## 3 Analysis and Results The fine structures and circular polarization exhibited by the successive type II bursts in Figure 1 are related to the coronal density turbulence and the magnetic field in the source region of the bursts, respectively. To infer the former, we estimated the PSD of the two bursts at different epochs as described in Chen et al. (2018) and Carley et al. (2021). The average PSDs for the 1st and 2nd type II bursts are shown in Figure 4. The slope is \(\approx\)-1.85 for both bursts. This is the same as the mean value at \(r{\approx}2R_{\odot}\) reported by Carley et al. (2021) for typical non-successive type II bursts.
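The PSD estimate used here follows the procedure of Chen et al. (2018) and Carley et al. (2021): the observing frequencies are converted to heliocentric distances with a density model, the radio flux is autocorrelated along that distance axis, the autocorrelation is Fourier transformed, and a power law is fitted to the result. The sketch below illustrates these steps for a single time step of the dynamic spectrum; the input file, the choice of the Newkirk (1961) model with a density multiplier of 0.5, and the uniform resampling step are assumptions made for the example and do not reproduce the exact pipeline used for Figure 4.
```python
import numpy as np

def newkirk_radius(f_mhz, harmonic=2, multiplier=0.5):
    """Heliocentric distance (R_sun) of the plasma level for an observed frequency,
    assuming a Newkirk (1961) density model scaled by `multiplier` (sketch assumption)."""
    f_p = f_mhz / harmonic                      # fundamental plasma frequency (MHz)
    n_e = (f_p / 8.98e-3) ** 2                  # cm^-3, from f_p [MHz] = 8.98e-3 sqrt(n_e)
    return 4.32 / np.log10(n_e / (multiplier * 4.2e4))

# Hypothetical input: flux versus frequency at one time step of the dynamic spectrum
freq_mhz, flux = np.loadtxt("typeII_spectrum_slice.txt", unpack=True)

r = newkirk_radius(freq_mhz)                    # R_sun
order = np.argsort(r)
r, flux = r[order], flux[order]

# Resample onto a uniform distance grid before the autocorrelation
r_uni = np.linspace(r.min(), r.max(), r.size)
flux_uni = np.interp(r_uni, r, flux)
flux_uni -= flux_uni.mean()

acf = np.correlate(flux_uni, flux_uni, mode="full")[flux_uni.size - 1:]
psd = np.abs(np.fft.rfft(acf))
dr = r_uni[1] - r_uni[0]
k = 2.0 * np.pi * np.fft.rfftfreq(acf.size, d=dr)   # wavenumber in R_sun^-1

# Power-law fit in log-log space (slope ~ -5/3 for Kolmogorov-like turbulence)
good = (k > 0) & (psd > 0)
slope, intercept = np.polyfit(np.log10(k[good]), np.log10(psd[good]), 1)
print(f"spectral index = {slope:.2f}")
```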
But the amplitude of the PSD corresponding to the 2nd type II burst (\({\approx}10^{-3}\) W\({}^{2}\)m) is \({\approx}2\times\) lesser than that of the 1st type II burst (\({\approx}2{\times}10^{-3}\) W\({}^{2}\)m). An inspection of Figure 2 indicates that the \(dcp\) of the 2nd type II burst too is \({\approx}2\times\) lesser compared to that of the 1st type II burst (\({\approx}0.07\) & 0.14, respectively). Note that for harmonic plasma emission and low values of \(dcp\) as in the present case, \(B{=}\frac{f_{p}\times dcp}{2.8\,a(\theta)}\)(Melrose et al., 1980; Willes & Melrose, 1997). \(f_{p}\) (MHz) is the plasma frequency corresponding to fundamental component, and \(a(\theta)\) depends on the angle of emission relative to the magnetic field direction. \(\theta\) can be approximated to the heliographic longitude of the associated active region (Dulk & Suzuki, 1980). In the present case \(\theta{\approx}89^{{}^{\circ}}\). For such near limb location, \(a(\theta){\gtrsim}10\)(Dulk & McLean, 1978). The GRASP observations of \(dcp\) in Figure 2 correspond to harmonic plasma emission in the frequency range 65-70 MHz. This implies \(f_{p}{\approx}34\) MHz for both the 1st and 2nd type II bursts. Substituting for \(dcp\) in the above relation, we get \(B{\lesssim}1.8\) & 0.9 G for the 1st and 2nd type II bursts, respectively. Note that the factor of two difference between the \(B\) values of the two type II bursts in the present case is independent of \(a(\theta)\). The latter is the same for both the 1st and 2nd type II bursts as they are associated with the LE and flank of the same flux rope, respectively, as discussed in Section 2 (see Figure 3 also). According to the quasi-2D turbulence models, interaction between emerging and evolving loops in the'magnetic carpet' on the solar surface generates turbulence which is transferred into the corona and beyond (see e.g. Zank et al., 2021). So, an increase or decrease in the magnetic field should lead to a corresponding change in turbulence (see e.g. Potherat & Klein, 2017). Observations indicate that the level of turbulence in the solar wind varies with the Sun's magnetic field (Janardhan et al., 2015; Sasikumar Raja et al., 2019). For e.g., there was a \({\approx}17\%\) decrease in the global solar photospheric magnetic field during the period 1992-2018. The solar wind density turbulence decreased by \({\approx}23\%\) in the same time interval. Therefore, the \({\approx}50\%\) decrease in the PSD of the 2nd type II burst (w.r.t. the 1st type II burst) in the present case must be due to its \(B\) being lower by \({\approx}50\%\) compared to that of the 1st type II burst. The 1st and 2nd type II bursts are associated with the LE and flank of the EUV flux rope as mentioned in Section 2. Hence, the above results imply that the PSD and \(B\) near the flux rope LE are \({\approx}2\times\) higher than the corresponding values near its flank. This could be because the LE is above the associated active region and the flank is outside the region. Note that the LE and flank of the flux rope are separated by \({\approx}10^{\circ}\) as mentioned earlier (see Section 2). The results reported in Cho et al. (2007) indicate similar \({\approx}2\times\) lower \(B\) in the flank region of a CME as compared to its LE. 
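The spatial scales quoted below for the two bursts follow from the fitted wavenumber ranges through \(2\pi\)/wavenumber; a short check of that conversion (taking \(R_{\odot}\approx\)696 Mm) is shown here.
```python
import numpy as np

R_SUN_MM = 696.0  # solar radius in Mm (approximate)

def turbulence_scales(k_min, k_max):
    """Spatial scales (Mm) for a wavenumber range given in R_sun^-1, via lambda = 2*pi/k."""
    k = np.array([k_min, k_max])
    return 2.0 * np.pi / k * R_SUN_MM

print(turbulence_scales(70.0, 4500.0))   # ~[62, 1] Mm   (1st type II burst)
print(turbulence_scales(70.0, 3000.0))   # ~[62, 1.5] Mm (2nd type II burst)
```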
Ray tracing calculations for polarization of thermal free-free radio emission from the solar corona with a density enhancement near the limb by Sastry (2009) also indicate that the \(dcp\) is lower by \({\approx}2\times\) when \(B\) in the enhancement is correspondingly reduced. Note that lower \(B\) near the flank of the flux rope implies a lower Alfvén speed (\(v_{A}\)) which favours shock formation in that region of the corona (see e.g. Jebaraj et al., 2021; Kouloumvakos et al., 2021). The power law in Figure 4 is in the wavenumber range \({\approx}70\) - \(4500R_{\odot}^{-1}\) (1st type II burst) and \({\approx}70\) - \(3000R_{\odot}^{-1}\) (2nd type II burst), for PSD above the 5% significance level. The corresponding ranges for the spatial scales in the turbulence (i.e. \(2\pi\)/wavenumber) are \({\approx}62\) - \(1\) Mm and \({\approx}62\) - \(1.5\) Mm, respectively. A type II burst is generally expected to be located at the shock ahead of the associated propagating disturbance as mentioned before. So, the aforementioned turbulence is supposed to have existed in the coronal environment where the two type II bursts occurred in the present case. The upper limits are smaller than the outer scale of turbulence \(\approx\)278 Mm at \(r\)\(\approx\)2\(R_{\odot}\) (Bird et al., 2002; Mohan, 2021). The lower limits are greater than the dissipation scale of the turbulent density fluctuations at nearly the same \(r\) (see e.g. Sasikumar Raja et al., 2019). We would like to note here that the individual density irregularities reported earlier from the observed angular broadening of the Crab nebula at low radio frequencies due to its occultation by the solar corona are of size \(\sim\)1 Mm (see e.g. Ramesh et al., 2001). ## 4 Conclusions We have reported spectral and polarimeter observations of two weak, successive low-frequency (\(\approx\)85 - 60 MHz) type II radio bursts in the solar corona. Our results indicate that the 1st and 2nd type II bursts were generated by the leading edge of a flux rope / CME, and interaction of its flank with a neighbouring structure, respectively. The power spectral density and magnetic field strength of the 2nd type II burst (CME flank) are \(\approx\)2\(\times\) lower than those of the 1st type II burst (CME LE) at the same \(r\). Considering that estimates of magnetic field strength from low-frequency radio observations of circularly polarized harmonic plasma emission as described in the present work are relatively easy to obtain, coordinated observations using ground- and space-based observing facilities with higher spectral and temporal resolutions (see e.g. Hariharan et al., 2016) would be useful to understand the turbulence, magnetic field, etc. associated with the CMEs. Such studies are expected to be important since there are reports that interplanetary CMEs with a turbulent sheath region ahead of their LE drive stronger geomagnetic activity (Kilpua et al., 2021). Note that in the case of near-Sun observations, the diffuse structure observed ahead of the bright CME front near the Sun in some cases is regarded as the shock sheath (see e.g. Feng et al., 2013). Moving further, we also found that the CME deflected away from the radial direction, most likely after the aforesaid interaction. Such CMEs provide a useful reference for space weather forecasting, especially for CME arrival and geoeffectiveness (Wang et al., 2020). This suggests a possible working hypothesis for future research, i.e.
whether sensitive observations of weak, successive coronal type II radio bursts as reported in the present work can be proxies for deflected CMEs close to the Sun. A larger data set of similar events is needed to verify this. High cadence white-light observations in the range 1.05\(\lesssim\)\(r\)\(\lesssim\)3\(R_{\odot}\) (where the low-frequency coronal type II radio bursts as reported in the current work generally occur) with the Visible Emission Line Coronagraph (VELC, Singh et al., 2011) on board ADITYA-L1, the soon-to-be-launched first Indian space solar mission, are expected to be helpful in this connection. Figure 4: LEFT: Power spectral density (PSD) corresponding to the 1st type II burst in Figure 1. The inclined blue ‘dashed’ line is the least squares fit to the estimated PSD. Its slope is \(\approx\)-1.85. The horizontal green ‘dashed’ line indicates the 5% significance level. RIGHT: Same as the image in the left panel, but corresponds to the 2nd type II burst in Figure 1. The unit for PSD in the present case is W\({}^{2}\)m. We are grateful to the Gauribidanur Observatory team for their help in the observations and upkeep of the facilities. Indrajit V. Barve, M. Rajesh, and K. P. Santosh are acknowledged for their contributions to the present work. The SOHO/LASCO CME catalog is generated and maintained at the CDAW Data Center by NASA and the Catholic University of America in cooperation with the Naval Research Laboratory. The SDO/AIA data are courtesy of the NASA/SDO and the AIA science teams. We thank the referee for his/her comments that helped us to describe the results more clearly.
2308.16648
Generate Your Own Scotland: Satellite Image Generation Conditioned on Maps
Despite recent advancements in image generation, diffusion models still remain largely underexplored in Earth Observation. In this paper we show that state-of-the-art pretrained diffusion models can be conditioned on cartographic data to generate realistic satellite images. We provide two large datasets of paired OpenStreetMap images and satellite views over the region of Mainland Scotland and the Central Belt. We train a ControlNet model and qualitatively evaluate the results, demonstrating that both image quality and map fidelity are possible. Finally, we provide some insights on the opportunities and challenges of applying these models for remote sensing. Our model weights and code for creating the dataset are publicly available at https://github.com/miquel-espinosa/map-sat.
Miguel Espinosa, Elliot J. Crowley
2023-08-31T11:44:40Z
http://arxiv.org/abs/2308.16648v1
# Generate Your Own Scotland: Satellite Image Generation Conditioned on Maps ###### Abstract Despite recent advancements in image generation, diffusion models still remain largely underexplored in Earth Observation. In this paper we show that state-of-the-art pretrained diffusion models can be conditioned on cartographic data to generate realistic satellite images. We provide two large datasets of paired OpenStreetMap images and satellite views over the region of Mainland Scotland and the Central Belt. We train a ControlNet model and qualitatively evaluate the results, demonstrating that both image quality and map fidelity are possible. Finally, we provide some insights on the opportunities and challenges of applying these models for remote sensing. Our model weights and code for creating the datasets are publicly available at [https://github.com/mique1-espinosa/map-sat](https://github.com/mique1-espinosa/map-sat). ## 1 Introduction Earth Observation (EO) is a rapidly expanding field that uses computer vision, machine learning, and image processing to gain insights into Earth's surface changes [1, 2]. For this purpose, it is crucial to extract meaningful information from diverse and often noisy data sources. Recently, the use of maps in EO has gained attention due to their abstract representation capabilities [3, 4]. However, the use of cartographic data in remote sensing still remains largely unexplored. Maps, such as OpenStreetMap (OSM) [4], offer high-quality information about roads, buildings, railways, and more, that can enhance EO analyses when paired with satellite images [1, 2]. Generative models, specifically diffusion models, have shown great potential in different sectors, e.g. for medical imaging [1]. As the quality of the generated images improves, it is important for the remote sensing community to adopt these methods, not only for their ability to augment datasets and create realistic synthetic images but also for their implications in distinguishing real from artificially generated or manipulated content [4], thus, promoting its responsible and ethical use. Our work combines OSM maps and generative models to synthesise realistic satellite views. We make two main contributions. First, we create a large dataset of image pairs, combining map data and satellite imagery. With this dataset, we highlight the importance of using different data sources in EO; specifically, we demonstrate new possibilities when using cartographic data. Second, we show that advanced generative models can be used effectively in the domain of satellite imagery. For this, we train ControlNet [E], a state-of-the-art model, to generate high-quality high-fidelity images. We demonstrate how this generation can be controlled and conditioned with different input data, such as maps. With this study, we hope to spark new research interests in this direction. ## 2 Background Generative models have significantly improved in recent years. Several works have explored their use for synthetic image generation [E], image-to-image translation [L], and data augmentation [D]. However, in the EO domain the focus has predominantly been on more traditional models, such as Generative Adversarial Networks (GANs) [D]. While GANs have Figure 1: Examples of synthetic sat. images generated with diffusion models conditioned on OSM maps (test set). The real sat. images are provided as reference (2nd column) but they are not used at inference. We cover a wide landscape diversity (urbanised and rural areas). 
shown notable results in multiple EO tasks (super resolution,,, and sharpening,, haze or cloud removal ), they suffer from training instability and model collapse, which can lead to the generation of low quality images []. Recent developments in generative modeling have opened new avenues for research. Particularly, diffusion models have emerged as promising alternatives, using stochastic processes to model the data distribution. Previous work has explored the use of diffusion models in the EO domain for diverse downstream applications such as super resolution, change detection, and image augmentation []. Recent work such as ControlNet [] allows for better control over the generation process by adding input conditions while still produce high-quality results. The use of such conditioned diffusion models in remote sensing remains unexplored, creating a gap that our work aims to address. In the multi-modal context, previous work has explored the use of paired datasets combining different types of remote sensing data [],,,. However, the use of cartographic maps as an additional data source remains underexplored. ## 3 Datasets To demonstrate the effectiveness of pretrained diffusion models in remote sensing, we construct a specific dataset for the training procedure. Instructions for the dataset creation and the code used can be found in [https://github.com/miquel-espinosa/map-at](https://github.com/miquel-espinosa/map-at). The multi-modal dataset pairs \(256\times 256\) OSM image tiles with corresponding \(256\times 256\) World Imagery [] satellite image tiles. We use a fixed text prompt "_Convert this OpenStreetMap into its satellite view_" for the pretrained SD model. The area considered in this study is mainland Scotland. The sampling strategy consists of random sampling over a pre-defined region. We carry out experiments on multiple datasets, that is, sampling across different regions (Figure 2): (1) all of Mainland Scotland, and (2) the Central Belt region. The motivation behind sampling across different regions is to account for unbalanced geographic features; Mainland Scotland is dominated by rural areas, forests, mountain ranges, and fields whereas the Central Belt region has a much larger representation of human-made structures like buildings, roads, and other features found in larger cities. The Mainland Scotland dataset contains 78,414 training pairs of images, and the Central Belt dataset 68,195 training pairs (with an additional 20% of test pairs for each case). We use OpenStreetMap tiles and World Imagery satellite images, both at a zoom level of 17. For the central belt region, we explore two products from the free World Imagery service as provided by ArcGis Online: the latest World Imagery version and the older Clarity version (deprecated) []. We find that the Clarity version retains more detail and higher image quality, so we train our models on both versions for a comparative evaluation (note that World Imagery products are composites compiled from different sources and providers, resulting in varying resolutions across locations). ## 4 Method We use the ControlNet architecture to train a model capable of generating realistic satellite images from OSM tiles. Before detailing the specifics of our approach, we provide a brief overview of the ControlNet method. ### ControlNet Overview ControlNet [63] is an architecture designed to augment pretrained image diffusion models by allowing task-specific conditioning. 
It has the ability to manipulate the input conditions of _neural network blocks_, thereby controlling the diffusion process. Intuitively, it can be seen as a way of injecting explicit guidance on the denoising process, conditioning the outputs on some reference image, in addition to the text prompt. A _network block_ in this context refers to any set of neural layers grouped as a frequently-used unit for building networks, such as a ResNet block, conv-bn-relu block, and transformer block, among others. Given a feature map \(x\in\mathbb{R}^{h\times w\times c}\) where \(\{h,w,c\}\) represent height, width, and channel numbers respectively, a neural network block \(\mathcal{F}(\cdot;\theta)\) with a set of parameters \(\theta\) transforms \(x\) into another feature map \(y\) via the relation \(y=\mathcal{F}(x;\theta)\). Crucially, as Figure 3 illustrates, ControlNet keeps the parameters \(\theta\) locked, cloning it into a trainable copy \(\theta_{c}\) which is trained with an external condition vector \(c\). The idea behind making such copies instead of directly training the original weights is to mitigate overfitting risks in small datasets and being able to reuse larger models trained on billions of images. An important innovation is the introduction of a _zero convolution_ layer to connect the frozen network blocks and the trainable copies (Figure 3). Zero convolution is a \(1\times 1\) convolution layer with both weight and bias initialised as zeros. Note that ControlNet initially will not affect the original network at all, but as it is trained, it will gradually start to influence the generation with the external condition vectors. Figure 2: Sampling regions used for the dataset construction. We visualise some pair examples (map, satellite img). Mainland Scotland is largely rural, whereas the central belt has build up cities including Edinburgh and Glasgow. ### ControlNet for Satellite Image Synthesis We use the ControlNet architecture, along with a large pretrained diffusion model (Stable Diffusion) to translate OpenStreetMap images into realistic satellite images. We follow the same training process as in the original ControlNet architecture [5]. Our model progressively denoises images in the perceptual latent space to generate samples. It learns to predict the noise added to the noisy image, and this learning objective is used in the fine-tuning process of the entire pipeline. As the Stable Diffusion (SD) [] weights are locked, the gradient computation on the SD model can be avoided, which accelerates the training process and saves on GPU memory. Leveraging a large pretrained diffusion model not only improves computational efficiency, but also yields higher-quality results. ### Training and inference details We carry out multiple experiments with different pretrained large diffusion backbones. Specifically we experiment with two different versions from Stable Diffusion: v.1-5, and v.2-1. We find that SD version v.1-5 tends to give better results. Experiments are run on a cluster node of 8 A100 40GB GPUs. The batch size is set to 2048 for 250 epochs. The training time is approximately 8 hours and the learning rate is kept constant at 0.00001. During inference, images are sampled with 50 inference steps (further increasing the number of inference steps doesn't have a noticeable impact on image quality), and it takes 2-3 seconds per image. 
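The locked-copy construction described above can be made concrete with a short PyTorch sketch. This is a minimal illustration of the idea rather than the training code used in this work: the wrapped block, the channel count and all module names are assumptions, and the actual ControlNet conditions the full Stable Diffusion U-Net encoder instead of a single block.
```python
import copy
import torch
import torch.nn as nn

def zero_conv(channels: int) -> nn.Conv2d:
    """1x1 convolution with weight and bias initialised to zero."""
    conv = nn.Conv2d(channels, channels, kernel_size=1)
    nn.init.zeros_(conv.weight)
    nn.init.zeros_(conv.bias)
    return conv

class ControlledBlock(nn.Module):
    """A frozen network block plus a trainable copy fed with the external
    condition c through zero convolutions (hypothetical, simplified layout)."""

    def __init__(self, block: nn.Module, channels: int):
        super().__init__()
        self.locked = block
        for p in self.locked.parameters():           # original parameters stay locked
            p.requires_grad_(False)
        self.trainable_copy = copy.deepcopy(block)   # trainable clone of the block
        self.zero_in = zero_conv(channels)
        self.zero_out = zero_conv(channels)

    def forward(self, x: torch.Tensor, c: torch.Tensor) -> torch.Tensor:
        y = self.locked(x)                                # frozen branch
        y_c = self.trainable_copy(x + self.zero_in(c))    # conditioned branch
        return y + self.zero_out(y_c)                     # identical to y at initialisation

# Toy usage: a conv-bn-relu block controlled by a map-like condition tensor
block = nn.Sequential(nn.Conv2d(64, 64, 3, padding=1), nn.BatchNorm2d(64), nn.ReLU())
controlled = ControlledBlock(block, channels=64)
x = torch.randn(2, 64, 32, 32)
c = torch.randn(2, 64, 32, 32)
print(controlled(x, c).shape)   # torch.Size([2, 64, 32, 32])
```
Because both zero convolutions start at zero, the controlled block reproduces the frozen model exactly at the first training step and only gradually lets the map condition influence the generation. For sampling with the published checkpoints, the diffusers classes `ControlNetModel` and `StableDiffusionControlNetPipeline` would be the natural entry point, assuming the weights are available in (or converted to) that format.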
The best performing model, trained on the Central Belt dataset, is publicly available at [https://huggingface.co/mespinosami/controlearth](https://huggingface.co/mespinosami/controlearth). We also publish the model trained on Mainland Scotland at [https://huggingface.co/mespinosami/controlearth-sct](https://huggingface.co/mespinosami/controlearth-sct). ## 5 Analysis We carry out a qualitative analysis of our results, mainly involving the visual inspection of the generated satellite images. This lets us evaluate more subjective elements such as colour consistency, spatial coherence and feature representation, which are often hard to quantify. Figure 3: ControlNet network blocks with “zero convolutions” (\(1\times 1\) convolution layer with both weight and bias initialised to zeros). Figure adapted from the original work [5]. Figure 4: Examples of synthetic satellite images from the trained model conditioned on maps. All images shown correspond to the test set. The real satellite images are provided as reference (second column) but they are not used at inference. Rows 1-4 show agricultural land, forests and bare areas. Rows 5-8 illustrate water bodies at varying sizes. Rows 9-11 correspond to different man-made structures, which condition the generation with more intricate patterns. We include a selection of successful examples in Figure 1 and Figure 4 that demonstrate the model's capabilities under different conditions (best viewed up close, in colour). One of the desirable behaviours that the trained model exhibits is the diversity of samples given the same map. This shows that the model has learnt to encode the variances found in the map classes (e.g. agricultural crop), thus successfully capturing the complexity of the dataset instead of collapsing all generations to the same image. For example, rows 1-4 in Fig. 4 illustrate seasonality changes in the different samples. Similarly, other variances are also perceivable, such as weather phenomena, lighting conditions and human activity. Sampling with high diversity can be used as a data augmentation technique, ensuring intraclass invariance for tasks such as classification. Rows 5-8 are examples of water bodies of multiple sizes, such as rivers, human-made canals, and open sea in coastal regions. Lastly, rows 9-11 show urban areas and more elaborate human-made patterns which the model is able to closely follow. As discussed in Section 3, we train the same model on two different versions of the central belt dataset (one with the updated World Imagery product, and the other with the deprecated Clarity version). Figure 5 provides a comparative visual analysis of two identical ControlNet models, both subjected to the same training parameters but trained on the two distinct datasets. As can be observed, the deprecated Clarity product shows finer details and superior image quality. Therefore, it becomes evident that the quality of the learned representations is heavily influenced by the quality of the training data employed. Figure 5: We illustrate the quality differences when training the same model with World Imagery and Clarity datasets over the Central Belt area. GT stands for Ground Truth, i.e. the real satellite images. ### Failure cases Some failure cases are shown in Figure 6. Large roads, especially those with lanes and straight lines, are found challenging by our model. Equally, intersections and road overpasses are difficult to generate coherently.
Rivers are easily mistaken by roads in some of the samples, and we show a failure case for a larger water body, where it is confused by a building (possibly due to its polygonal shape). Lastly, we also visualise railroads as challenging scenarios. These occurrences can largely be attributed to the under-representation in our dataset of the specific scenarios. Figure 6: Failure cases for more challenging scenarios (which usually correspond to under-represented cases in the dataset, such as larger railways, coastal regions, or road intersections). The real satellite images are shown in the second column for comparative purposes. ## 6 Discussion The use of generative diffusion models in remote sensing still remains in its early stages. However, the results presented in this study highlight their potential. _Opportunities_: This approach allows for multiple applications. It enables the enhancement of existing datasets, by extending the number of samples. This is particularly useful for low-data regimes or scenarios where data collection can be expensive. Similarly, it can be utilised in the data augmentation step of any training pipeline. Given the diversity and realism of the generated samples it is a strong tool to ensure robustness and generalisation in models. Furthermore, the ability to synthesise high-resolution images that closely follow a specified layout (i.e. map) can be used to complement private datasets, providing a means to increase data accessibility without compromising confidentiality. Lastly, there exist multiple image-to-image use cases where this method could prove useful, for instance cloud or haze removal. _Challenges_: As the quality of synthesised satellite imagery improves, concerns around misuse and the propagation of fake satellite images arise. The creation of fake satellite images or its manipulation could have harmful consequences in emergency situations, or in geopolitical events. Alongside the development of this technology, there needs to be a concurrent effort on creating regulations and ethical guidelines. On the other hand, our method is capable of creating adversarial samples (i.e. fake satellite images that resemble realistic ones), thus, it can be leveraged to create adversarial datasets. Such datasets could be used to train models for the detection of fake or manipulated satellite imagery. _Future work:_ The current method struggles with finer structures and undersampled classes (see Section 5.1 for more details), providing room for improvement in those scenarios. Secondly, we aim to expand the current dataset by: including a wider set of modalities increasing the representational diversity (such as GIS information, DEMs, land cover data, more varied text prompts), expand its geographical coverage (to more diverse habitats and climatic regions), and develop a new sampling strategy (based on land cover maps and population density). A more complete dataset will allow for the improvement on the challenging situations across a wider range of regions. And a multi-modal dataset will enable to condition the generation process on other data modalities. Furthermore, it remains unexplored the possibilities of using different and more diverse text prompts in the generation process (for instance, for controlling seasonality changes or other weather conditions). Finally, another exciting direction is enabling consistent generation of larger maps with a smooth tiling transition. 
We plan to explore iterative hierarchical generation or style conditioning as possible methodologies to achieve this objective. Such method would open possibilities for artists and content creators. ## 7 Conclusion We have demonstrated that state-of-the-art diffusion models can be used to generate realistic satellite images conditioned on maps. For this purpose, we create a large dataset containing pairs of maps and satellite images for Mainland Scotland and the Central Belt regions. With such dataset, we successfully train ControlNet models and provide insights on the results obtained. Finally, we outline some possible directions for improvements, and discuss the potential of generative methods in the field of EO. Acknowledgements Miguel Espinosa was supported through a Centre for Satellite Data in Environmental Science (SENSE) CDT studentship (NE/T00939X/1). This work used JASMIN, the UK's collaborative data analysis environment [https://jasmin.ac.uk](https://jasmin.ac.uk) [[]]. The authors are grateful to Tom Lee for his helpful comments.
2309.10110
Topological modes and spectral flows in inhomogeneous PT-symmetric continuous media
In classical Hermitian continuous media, the spectral-flow index of topological modes is linked to the bulk topology via index theorem. However, the interface between two bulks is usually non-Hermitian due to the inhomogeneities of system parameters. We show that the connection between topological modes and bulk topology still exists despite the non-Hermiticity at the interface if the system is endowed with PT symmetry. The theoretical framework developed is applied to the Hall magnetohydrodynamic model to identify a topological mode called topological Alfv\'{e}n-sound wave in magnetized plasmas.
Yichen Fu, Hong Qin
2023-09-18T19:35:09Z
http://arxiv.org/abs/2309.10110v2
# Topological modes and spectral flows in inhomogeneous PT-symmetric continuous media ###### Abstract In Hermitian continuous media, the spectral-flow index of topological edge modes is linked to the bulk topology via the index theorem. However, most inhomogeneous continuous media in classical fluids and plasmas are non-Hermitian. We show that the connection between topological edge modes and bulk topology still exists in these non-Hermitian continuous media if the systems are PT-symmetric and asymptotically Hermitian. The theoretical framework developed is applied to the Hall magnetohydrodynamic model to identify a topological edge mode called the topological Alfvén-sound wave in magnetized plasmas.
2309.08993
Fast laser field reconstruction method based on a Gerchberg-Saxton algorithm with mode decomposition
Knowledge of the electric field of femtosecond, high intensity laser pulses is of paramount importance to study the interaction of this class of lasers with matter. A novel, hybrid method to reconstruct the laser field from fluence measurements in the transverse plane at multiple positions along the propagation axis is presented, combining a Hermite-Gauss modes decomposition and elements of the Gerchberg-Saxton algorithm. The proposed Gerchberg-Saxton algorithm with modes decomposition (GSA-MD) takes into account the pointing instabilities of high intensity laser systems by tuning the centers of the HG modes. Furthermore, it quickly builds a field description by progressively increasing the number of modes and thus the accuracy of the field reconstruction. The results of field reconstruction using the GSA-MD are shown to be in excellent agreement with experimental measurements from two different high-peak power laser facilities.
Ioaquin Moulanier, Lewis Thomas Dickson, Francesco Massimo, Brigitte Cros
2023-09-16T13:19:05Z
http://arxiv.org/abs/2309.08993v1
Fast laser field reconstruction method based on a Gerchberg-Saxton algorithm with mode decomposition ###### Abstract Knowledge of the electric field of femtosecond, high intensity laser pulses is of paramount importance to study the interaction of this class of lasers with matter. A novel, hybrid method to reconstruct the laser field from fluence measurements in the transverse plane at multiple positions along the propagation axis is presented, combining a Hermite-Gauss modes decomposition and elements of the Gerchberg-Saxton algorithm. The proposed Gerchberg-Saxton algorithm with modes decomposition (GSA-MD) takes into account the pointing instabilities of high intensity laser systems by tuning the centers of the HG modes. Furthermore, it quickly builds a field description by progressively increasing the number of modes and thus the accuracy of the field reconstruction. The results of field reconstruction using the GSA-MD are shown to be in excellent agreement with experimental measurements from two different high-peak power laser facilities. ## I Introduction High intensity femtosecond laser pulses generated through chirped pulse amplification [1] are frequently affected by intensity and wavefront aberrations and fluctuations originating from multiple causes, e.g. thermal effects or imperfections of optical systems, inhomogeneities in the amplifying crystals' doping [2], or air turbulence [3]. In addition, phase instabilities may result in pointing fluctuations and lack of symmetry of energy distribution in the focal volume [4; 5]. An illustrative example of transverse asymmetry is shown in Fig. 1, where the measured fluence of a 23 TW, 38 fs laser pulse on the top row is compared to the calculated fluence of a cylindrically symmetric flattened Gaussian transverse laser field distribution [6] in the bottom row. Figure 1a) shows that even in the focal plane, the transverse fluence distribution is asymmetric. At a larger distance from the focal plane (Fig. 1b), the imperfections in the fluence distribution become even more pronounced. In addition, spatio-temporal coupling (STC) of phase aberrations [7; 8] reduce the quality of ultra-short high intensity laser pulses by increasing their duration and decreasing their peak intensity [8; 9; 10; 11]. Due to the nonlinear nature of the interaction of high intensity lasers with plasmas, these imperfections can decrease the laser peak intensity in the focal plane [12] and degrade its symmetry [13], leading to lower performances e.g. for high harmonic generation [14] or laser wakefield acceleration (LWFA) [4; 15; 16]. These imperfections need to be mitigated in future applications of high intensity lasers like strong field quantum electrodynamics [17; 18], where reaching ultra high intensities and stable focusing is crucial. The study (and correction [3; 12]) of transverse aberrations requires intensity and wavefront measurements. However, measuring the wavefront of an intense, short laser pulse [19] is more difficult than measuring the transverse laser fluence. For this reason, numerical methods to reconstruct the laser pulse wavefronts from fluence measurements are of paramount importance. An important class of algorithms to retrieve the laser field from fluence measurements in two (or more) transverse planes along the propagation axis originates from the Gerchberg-Saxton algorithm (GSA) [20; 21; 22; 23; 24]. 
In the basic formulation of the algorithm [20], the fluences measured at plane positions \(z_{0}\) and \(z_{1}\) (assuming a laser propagation along the \(z\) direction) are used to build a progressively more accurate estimate of the field phase at \(z_{0}\), starting from a random phase distribution at \(z_{0}\). The algorithm, which performs an alternating field reconstruction at the two planes, is repeated until a stopping criterion is met, e.g. reaching a certain number of iterations, or reaching a certain value of a chosen reconstruction error metric. In the original article presenting the GSA it is shown that this error will decrease with the number of iterations [20], however the rate of convergence Figure 1: Top row: an example of high intensity laser fluence map measured in an experiment : (a) at focus - (b) at 1500 \(\mu m\) from the focal plane. Bottom row: fluence corresponding to a 10th order Flattened Gaussian laser field distribution with the same energy : (c) at focus - (d) at 1500 \(\mu m\) from the focal plane. At each position, the maximum fluence has been normalized to 1. is undefined. Modifications of the original algorithm can yield a quicker convergence [23]. Another important class of algorithms aims at reconstructing the field through an expansion with basis functions, e.g. the Nijboer-Zernike basis [25, 26, 27, 28]. The algorithms in [29, 30] use an expansion in Hermite-Gauss (HG) modes to reconstruct the HG modal content of a signal, under some assumptions (e.g. finite modal content, knowledge of the HG modes spot sizes). Since the analytical expression of the basis functions is known, these methods are often quicker than those derived from the GSA. In this article, a hybrid field reconstruction method, called in the following Gerchberg-Saxton Algorithm with Modes Decomposition (GSA-MD), is presented. The GSA-MD combines field expansion in HG modes and some concepts of GSA algorithms, i.e. an iterative procedure, the phase extraction of the propagated field and the combination of this phase with the field amplitude measured at different planes. Whereas the original GSA [20] and e.g. the algorithm in [27] are limited to fluence measurements in only two planes, 3D GSA variants in multi-plane propagation problems have been demonstrated [31, 32, 24]. The GSA-MD can be used to reconstruct the electric field without any restriction on the number of planes. The GSA-MD addresses the uncertainty resulting from pointing instabilities affecting the fluence measurements by separating two problems: I) the field reconstruction, i.e. finding the coefficients in its HG modes decomposition, and II) the optimization of the choice of HG modes centers used in I) to reduce the reconstruction error. Compared to previous versions of the GSA, the GSA-MD has several additional advantages. As discussed in the following section, the conceptual separation of the two problems I) and II) avoids a direct, computationally prohibitive field reconstruction procedure. It will be shown that, in cases of interest, the number of unknowns in the proposed method is considerably lower than the number of unknowns with a classic GSA. Other advantages of the GSA-MD are related to its flexibility. 
For example, depending on the type of field distributions, different techniques can be independently used to solve the two mentioned problems, e.g other analytically known paraxial basis functions instead of the Hermite-Gauss modes can be used to address problem I), and various optimization algorithms can be used to address problem II). Furthermore, using an expansion in HG modes in problem I) allows to choose the number of modes. It will be shown that this degree of freedom allows to perform a quick estimate of the HG modes coefficients with a low number of modes. This estimate can be subsequently refined using a higher number of modes, yielding an overall quicker field reconstruction. Finally, as it will be discussed in the following, the most computationally expensive steps of the GSA-MD can in principle be easily parallelized, since they act on independent HG modes. This is an advantage compared to the classic GSA, where the corresponding propagation steps are performed with Fourier transforms [20], which are not easily parallelized. An example application of the GSA is LWFA [33, 34], where it has been shown that including the GSA-reconstructed laser field in Particle in Cell simulations [35] can greatly improve the agreement between simulations and measurements in the highly nonlinear regimes of laser-plasma interaction inherent to this field [15, 16]. The application of the proposed GSA-MD to LWFA modeling has been first presented in [5]. In that reference it is shown that including a laser field reconstruction obtained with the GSA-MD in LWFA simulations considerably improves the agreement between simulated and measured energy-divergence electron spectra, compared to using simulations with ideal laser field distributions (as those in the bottom row in Fig. 1). Here, a more detailed description of the field reconstruction method used is reported. The GSA-MD in this article neglects the STC that may be present in the laser field. Future work may address the reconstruction of the laser field taking into account also these spatio-temporal imperfections. The article is organised as follows. In the second section, an overview of the GSA-MD, including the description of the solutions to problems I) and II), is presented. In the third section, the results of the GSA-MD on two data-sets are shown. These two data-sets are made of fluence measurements at multiple planes performed at the Lund Laser Centre (LLC) and Apollon laser system in 2021. Figure 2: Example of set of 3 fluence images \(F_{exp}(x,y,z_{k})\) and notations used for the GSA-MD calculation: \(z\) axis is the propagation axis originating from the center of energy of an image chosen as a reference (here \(k=0\)). White dashed line are the search areas \(S_{k}\) defined for the mode center tuning described in subsection 2 II.2. The mode centers in plane \(k\), (\(x_{0,k}\),\(y_{0,k}\)), are searched within \(S_{k}\) and do not necessarily lie on the same \(z\) axis. The fluence images come from different laser shots. The plane at \(z=z_{0}\) is the focal plane. In this case it is the position of the first available measurement along the propagation axis, but in the general case the position \(z=z_{0}\) may lie between the positions \(z_{k}\) of other measurement planes. **Disclaimer :** This manuscript is the accepted version of the article with the same name, published in JOSA B, Vol. 40, Issue 9, pp. 2450-2461 (2023). 
The published article is accessible at [https://doi.org/10.1364/JOSAB.489884](https://doi.org/10.1364/JOSAB.489884) ## II Overview of the field reconstruction method The proposed GSA-MD aims to reconstruct the laser field of an electromagnetic wave propagating in the \(z\) direction from experimentally obtained fluence images \(F_{exp}(x,y,z_{k})\), measured at different longitudinal distances \(z_{k}\) from the focal plane and obtained from different shots of the same laser system, as illustrated in Fig. 2. A laser pulse with carrier angular frequency \(\omega_{0}\) and with negligible STC, propagating in the \(z\) direction, can be described as a plane wave with transverse electric field \(\mathcal{E}(x,y,z)\) and transverse complex envelope \(\mathrm{E}(x,y,z)\) modulated by a temporal profile \(T\left(t-\frac{z}{c}\right)\): \[\mathcal{E}(x,y,z)=\mathrm{Re}\left\{\mathrm{E}(x,y,z)T\left(t-\frac{z}{c} \right)\exp\left[i\omega_{0}\left(t-\frac{z}{c}\right)\right]\right\}, \tag{1}\] where \(c\) is the velocity of light in vacuum. Under the paraxial approximation, the laser field complex envelope can be decomposed as a sum of Hermite-Gauss (HG) modes: \[\mathrm{E}(x,y,z)=\sum_{m,n}^{N_{m},N_{n}}C_{mn}HG_{mn}(x,x_{0},y,y_{0},z), \tag{2}\] where the modes \(HG_{m,n}(x,x_{0},y,y_{0},z)\) are orthonormal and \(N_{m}\) and \(N_{n}\) are the number of modes in the \(x\) and \(y\) directions respectively for the HG modes expansion. The centers of the HG modes in the \(x\) and \(y\) directions are respectively \(x_{0}\) and \(y_{0}\). The values of these centers are not specified _a priori_, and are part of the unknowns for the GSA-MD. The HG modes of Eq. (2) are defined as [36]: \[HG_{m,n}(x,x_{0},y,y_{0},z) =HG_{m}(x,x_{0},z)\,HG_{n}(y,y_{0},z)\,\exp\left[i\Phi(z)\right]\] \[HG_{m}(x,x_{0},z) =A_{m}\,h_{m}\left[\sqrt{2}\frac{(x-x_{0})}{w_{x}(z)}\right]\exp \left[-\frac{(x-x_{0})^{2}}{w_{x}^{2}(z)}\right]\] \[\times\exp\left[-ik_{0}\frac{(x-x_{0})^{2}}{2R_{x}(z)}\right];\] \[HG_{n}(y,y_{0},z) =A_{n}\,h_{n}\left[\sqrt{2}\frac{(y-y_{0})}{w_{y}(z)}\right]\exp \left[-\frac{(y-y_{0})^{2}}{w_{y}^{2}(z)}\right]\] \[\times\exp\left[-ik_{0}\frac{(y-y_{0})^{2}}{2R_{y}(z)}\right];\] \[\frac{w_{x}(z)}{w_{0,x}} =\sqrt{1+\left(\frac{z}{Z_{x}}\right)^{2}};\frac{w_{y}(z)}{w_{0,y} }=\sqrt{1+\left(\frac{z}{Z_{y}}\right)^{2}};\] \[A_{m} =\left(w_{x}(z)2^{m-1/2}m!\,\sqrt{\pi}\right)^{-1/2};\] \[A_{n} =\left(w_{y}(z)2^{n-1/2}n!\,\sqrt{\pi}\right)^{-1/2};\] \[R_{x}(z) =z+\left(\frac{Z_{x}^{2}}{z}\right);R_{y}(z)=z+\left(\frac{Z_{y}^ {2}}{z}\right);\] \[\Phi(z) =\Phi_{x}(z)+\Phi_{y}(z);\] \[\Phi_{x}(z) =\left(m+\frac{1}{2}\right)\tan^{-1}\left(\frac{z}{Z_{x}}\right);\] \[\Phi_{y}(z) =\left(n+\frac{1}{2}\right)\tan^{-1}\left(\frac{z}{Z_{y}}\right), \tag{3}\] where \(h_{k}\) is the Hermite polynomial of order \(k\). The waists \(w_{0x}=(2Z_{x}/k_{0})^{1/2}\), \(w_{0y}=(2Z_{y}/k_{0})^{1/2}\) of the HG modes in the \(x\), \(y\) directions are chosen small enough to let the mode field reach negligible values at the borders of the measured images, and large enough to have Rayleigh lengths \(Z_{x}\) and \(Z_{y}\) which allow propagation up to the measurement planes. They may not be equal to the waists of a Gaussian fit of the fluences. The plane \(z=0\) is chosen as the focal plane, i.e. where \(w_{x}=w_{0,x}\) and \(w_{y}=w_{0,y}\). The uncertainty \(\Delta_{z}\) on the focal plane position is taken into account in subsection 2.1.1. The real and imaginary parts of the HG coefficients \(C_{mn}\) are the unknowns. 
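Equations (2)-(3) translate directly into a few lines of numpy/scipy. The sketch below is a minimal illustration of the decomposition (scalar \(z\), units chosen by the user, and a hypothetical coefficient matrix \(C_{mn}\)); it is not the implementation used for the results of this paper.
```python
import numpy as np
from math import factorial
from scipy.special import eval_hermite

def hg_1d(m, x, x0, z, w0, k0):
    """1D factor HG_m(x, x0, z) of Eq. (3), without the Gouy phase."""
    Z = 0.5 * k0 * w0**2                        # Rayleigh length Z_x
    w = w0 * np.sqrt(1.0 + (z / Z)**2)          # spot size w_x(z)
    inv_R = z / (z**2 + Z**2)                   # 1/R_x(z), finite at z = 0
    A = (w * 2**(m - 0.5) * factorial(m) * np.sqrt(np.pi))**-0.5
    u = np.sqrt(2.0) * (x - x0) / w
    return (A * eval_hermite(m, u)
            * np.exp(-((x - x0)**2) / w**2)
            * np.exp(-0.5j * k0 * (x - x0)**2 * inv_R))

def hg_mode(m, n, x, y, x0, y0, z, w0x, w0y, k0):
    """2D mode HG_{m,n} of Eq. (3), including the Gouy phase Phi(z)."""
    Zx, Zy = 0.5 * k0 * w0x**2, 0.5 * k0 * w0y**2
    gouy = (m + 0.5) * np.arctan(z / Zx) + (n + 0.5) * np.arctan(z / Zy)
    return hg_1d(m, x, x0, z, w0x, k0) * hg_1d(n, y, y0, z, w0y, k0) * np.exp(1j * gouy)

def envelope(C, x, y, x0, y0, z, w0x, w0y, k0):
    """Complex envelope E(x, y, z) of Eq. (2) from a coefficient matrix C[m, n]."""
    X, Y = np.meshgrid(x, y, indexing="ij")
    E = np.zeros_like(X, dtype=complex)
    for m in range(C.shape[0]):
        for n in range(C.shape[1]):
            E += C[m, n] * hg_mode(m, n, X, Y, x0, y0, z, w0x, w0y, k0)
    return E

# Toy usage with hypothetical parameters (metres): 30 micron waists at 800 nm, z = 1 mm
k0 = 2.0 * np.pi / 0.8e-6
x = y = np.linspace(-200e-6, 200e-6, 256)
C = np.zeros((3, 3), dtype=complex); C[0, 0] = 1.0
fluence = np.abs(envelope(C, x, y, 0.0, 0.0, 1e-3, 30e-6, 30e-6, k0))**2
```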
Uncertainties in the laser fluence measurements arise from shot-to-shot fluctuations, since transverse laser images taken at different positions with the same detector require different shots. The quality of the field reconstruction depends on the reproducibility of the laser properties from shot to shot. Therefore, the field reconstruction consists in fitting fluence images to infer the corresponding laser field's amplitude and phase, taking into account shot-to-shot wavefront and pointing fluctuations. In the following, this process is referred to as the reconstruction of the laser field. The measured fluence images are preprocessed as follows: first the background value is subtracted, then fluence values below a fixed threshold are put to zero, and each image is smoothed by pre-projecting it on a high number of HG modes assuming a phase uniformly equal to zero. The energy distribution centroids in \(x\), \(y\) are calculated for each position \(z_{k}\). Then, each measured image is recentered on its centroid. Finally, the fluence of the measured images is divided by a fixed normalizing energy value \(E_{norm}\). The proposed GSA-MD aims at minimizing an error \(\chi^{2}\) associated with the field reconstruction, defined as: \[\chi^{2}=\sum_{k=0}^{N_{images}-1}\frac{\sqrt{\sum_{i_{x},i_{y}}^{N_{pix_{x}},N_{pix_{y}}}\left(F_{exp}(x,y,z_{k})-F_{fit}(x,y,z_{k})\right)^{2}}}{N_{images}\sum_{i_{x},i_{y}}^{N_{pix_{x}},N_{pix_{y}}}F_{exp}(x,y,z_{k})} \tag{4}\] where \(N_{pix_{x}},N_{pix_{y}}\) are the number of pixels of the image in the \(x\) and \(y\) directions, \(F_{exp}\) and \(F_{fit}\) are the measured and reconstructed fluences, and \(z_{k}\) are the positions of the \(N_{images}\) measured images used for the reconstruction. \(\chi^{2}\) in Eq. (4) quantifies the error between the measured fluence data and the reconstructed fluence images. Although other error metrics can be chosen, without loss of generality it is assumed in the following that the chosen error metric is the \(\chi^{2}\) in Eq. (4). The evaluation of Eq. (4) is computationally expensive in typical conditions of interest, for example using 3 images with \(N_{pix_{x}}\times N_{pix_{y}}=1000\times 1000\) pixels. Besides, the number of unknowns in Eq. (2), i.e. the real and imaginary parts of the reconstruction coefficients \(C_{mn}\), is \(2\times N_{m}\times N_{n}\), with typical values of \(N_{m}=N_{n}\) of the order of 30, yielding 1800 unknowns. Furthermore, while the centers of the HG modes reconstruction of Eq. (2) in the plane \(z_{0}\) can be fixed at the point of maximum fluence at \(z=z_{0}\), the error of the reconstruction depends also on the chosen HG modes centers \((x_{0,k},y_{0,k})\) in the other planes \(z_{k}\). Thus, the choice of these centers must be optimized as well. If they are counted as additional degrees of freedom in the field reconstruction, the total number of unknowns grows by \(2\times(N_{images}-1)\). For the sake of comparison, it is worth noting that for a field reconstruction with a GSA, the number of unknowns (the phase values of each pixel) would be \(N_{pix_{x}}\times N_{pix_{y}}\), i.e. \(10^{6}\) in the previous example. Therefore, in these conditions a direct minimization of \(\chi^{2}\), optimizing at the same time the HG coefficients \(C_{mn}\) and the HG centers \(x_{0,k}\), \(y_{0,k}\), would be too computationally expensive. 
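For reference, the error metric of Eq. (4) amounts to a few lines of NumPy. The sketch below uses our own variable names and assumes the measured and reconstructed fluences are stored as arrays of shape (N_images, N_pix_x, N_pix_y).

```python
import numpy as np

def chi2(F_exp, F_fit):
    """Eq. (4): reconstruction error for fluence stacks of shape (N_images, N_pix_x, N_pix_y)."""
    n_images = F_exp.shape[0]
    return sum(np.sqrt(np.sum((F_exp[k] - F_fit[k]) ** 2))
               / (n_images * np.sum(F_exp[k]))
               for k in range(n_images))
```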
The GSA-MD proposed in this article separates the search of the HG coefficients \(C_{mn}\) for given values of the HG modes centers \((x_{0,k},y_{0,k})\), and the search for the values of these centers that minimize the reconstruction error \(\chi^{2}\). An additional advantage of this two-fold strategy is that the techniques used to address each of these two problems can be chosen independently. For example, basis functions different from the HG modes could in principle be used to find the expansion coefficients, without changing the technique used to optimize the mode centers. This conceptual separation of the two mentioned problems is illustrated in Fig. 3, which gives an overview of the GSA-MD. The input of the GSA-MD is the fluence data \(F_{exp}(x,y,z_{k})\), measured in the transverse planes at positions \(z_{k}\). After preprocessing the fluence data, an initialization step is performed, which consists in finding an initial approximation of the HG coefficients \(C_{mn}\) starting from an initial phase \(\psi_{0}(x,x_{0,0},y,y_{0,0})\) and an initial value for the HG modes centers \((x_{0,k},y_{0,k})\). Then, for fixed values of the HG modes centers \((x_{0,k},y_{0,k})\), the HG coefficients \(C_{mn}\) estimates are improved iteratively. This update of the \(C_{mn}\) coefficients is summarized in Algorithm 1 and detailed in the next section. The resulting reconstruction error \(\chi^{2}\) in Eq. (4) is then computed. Afterwards, the HG modes centers \((x_{0,k},y_{0,k})\) can be changed in order to reduce the error \(\chi^{2}\), and the \(C_{mn}\) are updated using these new centers. If the new \(\chi^{2}\) is lower than the minimum error \(\chi^{2}_{min}\) found in this loop, the new \(\chi^{2}\) substitutes the minimum error \(\chi^{2}_{min}\). A stopping criterion for this loop is chosen, e.g. reaching a maximum number of loop iterations or when the minimum error \(\chi^{2}_{min}\) is reduced below a desired value. When the GSA-MD exits this loop, the resulting outputs will be values of the HG modes centers \((x_{0,k},y_{0,k})\) and of the HG coefficients \(C_{mn}\) that can be used to reconstruct the electric field at the planes \(z_{k}\) using Eqs. (2), (3). The next subsections describe the update of the \(C_{mn}\) coefficients (performed with fixed HG modes centers) and the search for the best choice of the HG mode centers. Figure 3: Schematic overview of the proposed GSA-MD to reconstruct the laser field. The yellow rectangle contains Algorithm 1, detailed in Section II.1. The tuning of the HG mode centers (blue dashed rectangle), performed to reduce the reconstruction error \(\chi^{2}\), is described in Section II.2. ### Calculation of the Hermite-Gauss modes coefficients In this section an iterative algorithm is presented to find the HG coefficients \(C_{mn}\) of Eq. (2) that fit the laser transverse electric field, once the HG modes centers \((x_{0,k},y_{0,k})\) and waists \(w_{0,x}\), \(w_{0,y}\) are kept fixed, i.e. the algorithm in the yellow rectangle of Fig. 3. Assuming that no STC are present in the laser field, once the temporal profile \(T(t-z/c)\) in Eq. 
(1) for the laser field is known (or a hypothesis on its shape is assumed), a linear relation between the experimentally measured fluence \(F_{exp}(x,y,z)\) and local intensity \(I(x,y,z)\) can be easily obtained, i.e. \(F_{exp}(x,y,z)=I(x,y,z)\cdot\tau\), where \(\tau\) is a characteristic duration of the laser pulse and the local intensity is defined as \(I(x,y,z)=\frac{c\varepsilon_{0}}{2}|\mathrm{E}(x,y,z)|^{2}\). A complex envelope E of the transverse electric field at position \(z\) can thus be defined from a phase map \(\psi(x,y)\) and an experimental fluence map \(F_{exp}(x,y,z)\): \[\mathrm{E}(x,y,z)=\sqrt{\frac{2}{c\tau\varepsilon_{0}}F_{exp}(x,y,z_{0})}\, \exp\{[i\psi(x,y)]\}, \tag{5}\] where \(\varepsilon_{0}\) is the vacuum permittivity. As in the classic GSA, this operation is performed at the available measurement planes combining the intensity \(I\), expressed in this article in terms of measured fluence \(F_{exp}\) after assuming a temporal profile, and the estimated phase map \(\psi(x,y)\). Using this definition, the calculation of the HG coefficients \(C_{mn}\) for the field reconstruction is summarized by the pseudocode in Algorithm 1, which is described in the following. First, an initial estimate of the coefficients is computed (step 1). This first estimate can be obtained from a first projection of \(\sqrt{\frac{2}{c\tau\varepsilon_{0}}F_{exp}(x,y,z_{0})}\exp\left[\psi_{0}(x,x_ {0,0},y,y_{0,0})\right]\) over the HG modes with an initial choice of the modes centers \(x_{0,k}\), \(y_{0,k}\) and initial phase \(\psi_{0}(x,x_{0,0},y,y_{0,0})\). For the results presented in this article, to improve the convergence of the field reconstruction, an initial quadratic phase \(\psi_{0}(x,x_{0,0},y,y_{0,0})\) was used (similar to the initial phase proposed in [37]): \[\psi_{0}(x,x_{0,0},y,y_{0,0})=k_{0}\frac{(x-x_{0,0})^{2}+(y-y_{0,0})^{2}}{2 \Delta_{z}\left[1+\left(\frac{k_{0}}{2}\frac{w_{0}^{2}}{\Delta_{z}}\right)^{2 }\right]}, \tag{6}\] where \(w_{0,Gauss}\) is the estimated waist of a Gaussian fit of the measured fluence map \(F_{exp}(x,y,z_{0})\). This initial phase represents the phase of a Gaussian beam with waist \(w_{0}\) and carrier frequency \(\omega_{0}\), at a distance \(\Delta_{z}\), which is the uncertainty on the focal plane \(z\) position. After this initialization, at each iteration \(iter\) of the algorithm, the estimated expansion of \(\mathrm{E}(x,y,z_{k})\) in HG modes \(HG_{mn}(x,x_{0,k},y,y_{0,k},z_{k})\) is computed at each position from \(z_{0}\) to \(z_{N_{images}-1}\), using the known expressions of the HG modes [36] (Eq. (3)) and the estimated coefficients \(C_{mn}\), using Eq. (2) (step 2). The phase map \(\psi(x,y)\) is then found as \(\arg\left[\mathrm{E}(x,y,z_{k})\right]\) (step 3). 
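The amplitude-phase combination of Eq. (5) and the initial quadratic phase of Eq. (6) are straightforward to express in code. The following sketch uses our own function names; `F_exp` is a measured fluence map on the grid `(X, Y)`, `tau` is the assumed characteristic pulse duration, and `delta_z` is the focal-plane uncertainty \(\Delta_{z}\).

```python
import numpy as np
from scipy.constants import c, epsilon_0

def envelope_from_fluence(F_exp, psi, tau):
    """Eq. (5): complex envelope with amplitude from the measured fluence and phase psi."""
    return np.sqrt(2.0 * F_exp / (c * tau * epsilon_0)) * np.exp(1j * psi)

def initial_quadratic_phase(X, Y, x0, y0, k0, w0, delta_z):
    """Eq. (6): phase of a Gaussian beam of waist w0 observed at a distance delta_z from focus."""
    r2 = (X - x0) ** 2 + (Y - y0) ** 2
    return k0 * r2 / (2.0 * delta_z * (1.0 + (0.5 * k0 * w0 ** 2 / delta_z) ** 2))
```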
``` procedureField reconstruction 1) Find an initial estimate of \(C_{mn}\); for\((iter=0;\;\;iter<N_{iter};\;\;iter++)\)do for\((k=0;\;\;k<N_{images};\;\;k++)\)do 2) \(\mathrm{E}(x,y,z_{k})=\) \(=\sum_{m,n}C_{mn}HG_{mn}(x,x_{0,k},y,y_{0,k},z_{k})\); 3) \(\psi(x,y)=\arg\left[\mathrm{E}(x,y,z_{k})\right]\); 4) \(\mathrm{E}_{\mathrm{new}}(x,y,z_{k})=\) \(=\sqrt{\frac{2}{c\tau\varepsilon_{0}}F_{exp}(x,y,z_{k})}\exp\{[i\psi(x,y)]\}\); 5) \(\delta(x,y,z_{k})=\) \(=\frac{\sqrt{\frac{2}{c\tau\varepsilon_{0}}F_{exp}(x,y,z_{k})}-\left|\mathrm{ E}(x,y,z_{k})\right|}{\max\left[\sqrt{\frac{2}{c\tau\varepsilon_{0}}F_{exp}(x,y,z_{k})} \right]}\); \(\mathrm{E}_{\mathrm{new}}(x,y,z_{k})=\mathrm{E}_{\mathrm{new}}(x,y,z_{k})\exp\{ [\delta(x,y,z_{k})]\}\); 6) \(C_{mn,k}=\mathrm{Proj}\mathrm{E}_{\mathrm{new}}(x,y,z_{k})\), \(HG_{mn}(x,x_{0,k},y,y_{0,k},z_{k})\)]; 7) \(C_{mn,k}=C_{mn,k}\sqrt{\frac{F_{tot}}{\sum_{k}\left|C_{mn,k}\right|^{2}}}\); 8) \(C_{mn}=\frac{1}{2}\left(C_{mn}+C_{mn,k}\right)\); 9) \(C_{mn}=C_{mn}\sqrt{\frac{F_{tot}}{\sum_{k}\left|C_{mn}\right|^{2}}}\); endfor if\((iter\%5==0)\) and \((iter\geq 5)\)then 10) \(\chi^{2}_{grad}=\frac{\chi^{2}(iter)-\chi^{2}(iter-5)}{\chi^{2}(iter-5)}\); if\((\chi^{2}_{grad}<0.02)\)then \(iter_{break}=iter\); break; endif endif endfor endprocedure ``` **Algorithm 1** Algorithm to find the coefficients \(C_{mn}\) of the Hermite-Gauss modes \(HG_{mn}\) from \(N_{images}\) experimental fluence images \(F_{exp}\) measured at planes \(x_{k}\), with \(k=0,...,N_{images}-1\). The HG modes centers \((x_{0,k},y_{0,k})\) are set at the start of the algorithm and kept fixed. Steps 6-9 are repeated for each of the mode indices \(m\), \(n\). This algorithm corresponds to the yellow rectangle of Fig. 3. In step 4, an updated value of the complex electric field \(\mathrm{E}_{\mathrm{new}}\) can be estimated using the measured fluence \(F(x,y,z_{k})\) and the phase \(\psi(x,y)\), using Eq. (5). The exponent \(\delta(x,y,z_{k})\) of an exponential correction factor \(\exp\left[\delta(x,y,z_{k})\right]\) is calculated on each point of the grid. The resulting correction factor is equal to one at the points where the measured and reconstructed field amplitude are equal and its value is higher where the two amplitudes differ. The field E\({}_{\rm new}\) is multiplied by this correction factor (step 5). In [38] it has been shown that this correction improves the convergence of a GSA as well as the signal to noise ratio of its reconstruction. The projection of the corrected E\({}_{\rm new}\) on the HG modes at \(z_{k}\) gives a new estimate \(C_{mn,k}\) for the HG coefficients (step 6), which is combined with the previous estimate of \(C_{mn}\) (step 8). The projection of a function \(f(x,y,z_{k})\) on the HG modes at \(z_{k}\) mentioned in step 6 is defined as: \[{\rm Proj}[f(x,y,z_{k}),HG_{mn}(x,x_{0,k},y,y_{0,k},z_{k})]=\] \[=\int_{-L_{x}/2}^{L_{x}/2}\!\int_{-L_{y}/2}^{L_{y}/2}\!f(x,y,z_{k} )HG^{*}_{mn}(x,x_{0,k},y,y_{0,k},z_{k})dx\,dy, \tag{7}\] where \((L_{x},L_{y})\) are the data grid length along each axis. Normalizations are performed on the estimated coefficients in the intermediate steps 7 and 9 to ensure that the total fluence \(F_{tot}\) remains constant. Steps 6-9 are repeated for each index \(m\), \(n\) of the modes used in the field reconstruction. In step 10), starting from \(iter=0\) and every 5 iterations, the \(\chi^{2}\) error is evaluated. 
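For concreteness, the core of Algorithm 1 (steps 2 to 9, looped over the measurement planes) can be sketched as follows. This is our own illustrative code, not the authors' implementation: it reuses `hg_1d`, `gouy` and `synthesize_field` from the sketch after Eq. (3), `F_tot` denotes the fixed normalization target for \(\sum_{m,n}|C_{mn}|^{2}\) assumed in steps 7 and 9, and `centers` holds the fixed HG mode centers \((x_{0,k},y_{0,k})\).

```python
import numpy as np
from scipy.constants import c, epsilon_0

def project(f, x, y, z, x0, y0, w0x, w0y, k0, Nm, Nn):
    """Eq. (7): discrete projection of f(x, y) on the HG modes at plane z."""
    dx, dy = x[1] - x[0], y[1] - y[0]
    X, Y = np.meshgrid(x, y, indexing="ij")
    C = np.zeros((Nm, Nn), dtype=complex)
    for m in range(Nm):
        hx = hg_1d(m, X, x0, z, w0x, k0)
        for n in range(Nn):
            hy = hg_1d(n, Y, y0, z, w0y, k0)
            mode = hx * hy * np.exp(1j * gouy(m, n, z, w0x, w0y, k0))
            C[m, n] = np.sum(f * np.conj(mode)) * dx * dy
    return C

def algorithm1_iteration(C, F_exp, z_planes, centers, x, y, w0x, w0y, k0, tau, F_tot):
    """One pass of steps 2)-9) of Algorithm 1 over all measurement planes."""
    for k, zk in enumerate(z_planes):
        x0, y0 = centers[k]
        E = synthesize_field(C, x, y, zk, x0, y0, w0x, w0y, k0)          # step 2
        psi = np.angle(E)                                                # step 3
        A_meas = np.sqrt(2.0 * F_exp[k] / (c * tau * epsilon_0))
        E_new = A_meas * np.exp(1j * psi)                                # step 4
        delta = (A_meas - np.abs(E)) / A_meas.max()                      # step 5
        E_new = E_new * np.exp(delta)                                    # amplitude correction factor
        C_k = project(E_new, x, y, zk, x0, y0, w0x, w0y, k0, *C.shape)   # step 6
        C_k *= np.sqrt(F_tot / np.sum(np.abs(C_k) ** 2))                 # step 7
        C = 0.5 * (C + C_k)                                              # step 8
        C = C * np.sqrt(F_tot / np.sum(np.abs(C) ** 2))                  # step 9
    return C
```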
If, at a given iteration \(iter\), the error gradient \(\chi^{2}_{grad}\) is less than 2%, then the Algorithm 1 loop is stopped and the last iteration is recorded as \(iter_{break}\). It is worth noting that the most computationally expensive operations of the algorithm are step 2, i.e. the reconstruction of the field with propagated HG modes, and step 6, i.e. the projection over the HG modes. This consideration highlights an advantage of the GSA-MD compared to the classic GSA: these two steps can be easily parallelized, since the treatment of each mode can be performed in parallel, with step 2 only requiring a final summation of the contribution of each mode. The use of mode expansion yields two additional advantages compared to a classic GSA. First, in principle another set of basis functions can be used instead of the HG modes, depending on the application. Second, the number of modes can be chosen in order to find the desired compromise between reconstruction accuracy and computation time. This latter flexibility will be illustrated in subsection II.2. As stated at the start of this subsection, in the algorithm it was assumed that the HG modes centers \((x_{0,k},y_{0,k})\) were set. The next subsection describes how the choice of these centers can be improved to reduce the reconstruction error. ### Tuning the centers of the Hermite-Gauss modes The error of the reconstruction algorithm of subsection II.1 is sensitive to the choice of the HG mode centers \((x_{0,k},y_{0,k})\). Thus, as shown in Fig. 3, the field reconstruction in Algorithm 1 can be repeated with different \((x_{0,k},y_{0,k})\) chosen within a search area \(S_{k}\) at each plane \(z_{k}\) (see Fig. 2) in order to find the values which minimize (or at least reduce) the reconstruction error. The separation of the HG coefficient estimation in Algorithm 1 from this tuning of the HG mode centers \((x_{0,k},y_{0,k})\) allows one to choose among many optimization algorithms to minimize the error \(\chi^{2}\). For example, Bayesian optimization [39] was used for the results presented in section III. In the following, this general minimization process is referred to as the center tuning, which is stopped when a chosen criterion is met, e.g. when a certain target value of \(\chi^{2}\) is reached, or when a total number of iterations \(N_{tuning}\) is completed. In general, the quality of the field reconstruction is sensitive to the combination of the main parameters of the GSA-MD, namely \(N_{m}\), \(N_{n}\), \(N_{iter}\), \(N_{tuning}\) and the size of the projection grid. Increasing these parameters yields a longer computing time for the field reconstruction in Algorithm 1 and the center tuning. They can be set depending on the quality of the available fluence data (e.g. degree of asymmetry) in order to find a compromise between reconstruction accuracy and the computing time required by the minimization of the error \(\chi^{2}\). As previously mentioned, decomposing the field with HG modes introduces a flexibility in the choice of the number of modes \(N_{m}\), \(N_{n}\) (along the \(x\) and \(y\) directions respectively) used for the reconstruction in Algorithm 1. This flexibility can be used to speed up the center tuning, as explained in the next section. ## III Results In this section the results of the GSA-MD, applied on laser data collected at the LLC (peak power in the data 23 TW, pulse duration 38 fs), and on the Apollon laser system in the commissioning phase (peak power in the data 400 TW, pulse duration 25 fs), are presented. 
For both campaigns, fluence measurements were performed using a CCD camera equipped with a microscope objective, which was translated along the laser axis in the focal volume in vacuum. For these measurements, the laser beam was fully amplified to nominal energy, then attenuated by several reflections from glass surfaces before compression, in order to characterize the quality of the high intensity beam. At every position of the camera along the laser axis, multiple measurements were made in order to evaluate the shot-to-shot fluctuations of the laser. The pointing stability for both data-sets is characterised by the shot-to-shot fluctuations of the fluence centroids normalized by the estimated laser waist, \(\delta\overline{x}/w_{0,Gauss}\), \(\delta\overline{y}/w_{0,Gauss}\), where \(w_{0,Gauss}\) is the estimated Gaussian fit's waist. For the LLC data-set, \(\delta\overline{x}/w_{0,Gauss}=16\) % and \(\delta\overline{y}/w_{0,Gauss}=8\) %, with \(w_{0,Gauss}=15\) \(\mu\)m. For the Apollon data-set, the shot-to-shot pointing instability is higher: \(\delta\overline{x}/w_{0,Gauss}=100\) % and \(\delta\overline{y}/w_{0,Gauss}=40\) %, with \(w_{0,Gauss}=16\) \(\mu\)m. It will be shown that the GSA-MD can reconstruct the laser field from the fluence data of these two different laser systems. Figure 4 describes the procedure used to obtain the results presented in this section, for the LLC and Apollon data-sets. This procedure exploits the GSA-MD's flexibility in choosing the number of modes for the field reconstruction. Performing the center tuning introduced in Fig. 3 with a high number of HG modes would have been computationally expensive. Thus, the center tuning has been separated into two successive phases (blue dashed rectangles of Fig. 4) that share the same Algorithm 1 and the same minimization method for the error \(\chi^{2}\) (Bayesian Optimization in this case), but with a different number of HG modes \(N_{m}\), \(N_{n}\). The first phase, referred to as the Educated Guess (EG), consists of a center tuning with \(N_{tuning,EG}\) iterations, each using \(N_{m,EG}\) and \(N_{n,EG}\) modes set low enough to quickly execute Algorithm 1. This EG phase can be initialized by setting \(x_{0,k}=y_{0,k}=0\) as initial centers and Eq. (6) as the initial phase \(\psi_{0}(x,0,y,0)\). This EG phase yields an initial estimate of the HG centers \(x_{0,k}\) and \(y_{0,k}\). The phase \(\psi_{0}(x,x_{0,0},y,y_{0,0})\) of Eq. (6) is reinitialized with the optimized centers (\(x_{0,0}\), \(y_{0,0}\)) tuned in the EG. Using these centers, this phase and \(F_{exp}(x,y,z_{0})\), a projection of \(\sqrt{\frac{2}{c\tau\varepsilon_{0}}F_{exp}(x,y,z_{0})}\exp\left[i\psi_{0}(x,x_{0,0},y,y_{0,0})\right]\) over the HG modes yields a more accurate estimate of the \(C_{mn}\) coefficients, even with a different number of modes. This estimate is used to initialize a second center tuning phase, called Refined Search (RS), which is performed with a higher number of HG modes \(N_{m,RS}\) and \(N_{tuning,RS}\) center tuning iterations, using a narrower search area for the HG centers. For the results with the LLC data-set, in Eq. (6), \(w_{0}=15\ \mu\)m and \(\Delta_{z}=0.25\) mm. For the Apollon data-set, \(w_{0}=16\ \mu\)m and \(\Delta_{z}=0.3\) mm. For both data-sets, \(w_{0,x}=w_{0,y}=20\ \mu\)m has been used for the HG mode waists. The implementation of the GSA-MD used for this article is written in Python. The most time-consuming steps of Algorithm 1, steps 2) and 6), are compiled and parallelized with Numba. 
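A schematic sketch of this two-phase center tuning is given below. It is our own code, not the authors' implementation: it assumes the ask/tell interface of scikit-optimize and a user-supplied `reconstruction_error(centers, n_modes)` function that runs Algorithm 1 (for example with the helpers sketched in Section II) and returns the \(\chi^{2}\) of Eq. (4). All names and the example search widths are illustrative.

```python
from skopt import Optimizer
from skopt.space import Real

def tune_centers(reconstruction_error, n_planes, half_width, n_calls, n_modes,
                 initial_centers=None, n_parallel=3):
    """Bayesian tuning of the HG mode centers (x0_k, y0_k) over all planes."""
    init = initial_centers or [(0.0, 0.0)] * n_planes
    # One (x0, y0) pair per plane, searched in a square of side 2*half_width
    # around the corresponding initial center.
    space = []
    for cx, cy in init:
        space += [Real(cx - half_width, cx + half_width),
                  Real(cy - half_width, cy + half_width)]
    opt = Optimizer(space, base_estimator="GP", acq_func="gp_hedge",
                    acq_func_kwargs={"xi": 0.0})
    best_err, best_centers = float("inf"), list(init)
    for _ in range(n_calls):
        candidates = opt.ask(n_points=n_parallel)   # evaluated concurrently in practice
        errors = [reconstruction_error(
            [(c[2 * k], c[2 * k + 1]) for k in range(n_planes)], n_modes)
            for c in candidates]
        opt.tell(candidates, errors)
        for c, e in zip(candidates, errors):
            if e < best_err:
                best_err = e
                best_centers = [(c[2 * k], c[2 * k + 1]) for k in range(n_planes)]
    return best_err, best_centers

# Educated Guess with few modes, then Refined Search around the EG centers
# (numbers below mirror the LLC settings and are purely illustrative).
# chi2_eg, centers_eg = tune_centers(err_fn, n_planes=4, half_width=10e-6,
#                                    n_calls=300, n_modes=10)
# chi2_rs, centers_rs = tune_centers(err_fn, n_planes=4, half_width=5e-6,
#                                    n_calls=300, n_modes=30,
#                                    initial_centers=centers_eg)
```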
To obtain the presented results, the HG mode center tuning in both EG and RS phases was performed through Bayesian Optimization [39] of the function \(\chi^{2}\) defined in Eq. (4). At each iteration of the Bayesian Optimization, multiple values of the HG centers are chosen in parallel to execute Algorithm 1 and compute the corresponding values of \(\chi^{2}\). Each parallel execution of Algorithm 1, corresponding to different values of the HG centers, is distributed between the available computing threads. In the Bayesian Optimization algorithm, these new values of the HG centers are chosen within the search areas \(S_{EG}\), and \(S_{RS}\), for the Educated Guess and Refined Search, respectively. Each evaluation of \(\chi^{2}\) corresponding to different values of the HG centers is used by the Bayesian Optimization algorithm to build a surrogate model for the function \(\chi^{2}\). The probability distribution of possible \(\chi^{2}\) values is modeled by a Gaussian Process with mean and standard deviation. The covariance matrix of the process, or kernel, defines the correlation between the evaluated points \(\chi^{2}\) score and the estimated values for non-evaluated points. The minimum error \(\chi^{2}_{min}\) is updated each time a new minimum for the error \(\chi^{2}\) is found during the iterations of the error minimization process. In both EG and RS phases, the Bayesian Optimization uses an implementation of the standard linear regression model with Gaussian noise introduced in Algorithm 2.1 of [40]. The "1.0 * RBF(1.0)" kernel, present in the Python library \(\text{scikit}-\text{optimize}\)[41], was used, with RBF being the radial basis function kernel. To choose the next candidate centers to evaluate, an acquisition function is used, which calculates the point with the optimum combination of the mean and uncertainty values from the Gaussian process via a combination of the Expected Improvement, Negative Probability of Improvement and Lower Confidence Bound acquisition functions described in [42]. Based on a scoring value of these functions, one of the proposed centers is chosen for the evaluation. The Bayesian Optimization is initiated with \(skopt.Optimizer\), \(\xi=0\), which skews heavily the Expected Improvement towards exploitation of previous evaluated points. The other parameters are fixed to their default values in the \(\text{scikit}-\text{optimize}\) library. Table 1 summarizes the parameters of the two data-sets and the parameters used for the reconstruction, described also in the following subsections. The results of the GSA-MD applied on the two data-sets will be presented. Figure 4: Schematic description of the tuning of HG mode centers that was used to reduce the minimum field reconstruction error \(\chi^{2}_{min}\) for the LLC and Apollon data-sets. **Disclaimer :** This manuscript is the accepted version of the article with the same name, published in JOSA B, Vol. 40, Issue 9, pp. 2450-2461 (2023). The published article is accessible at [https://doi.org/10.1364/JJOSAB.489884](https://doi.org/10.1364/JJOSAB.489884) ### Field reconstruction for the LLC data-set With the LLC system, the average energy per shot collected in 2021 for the data used in this article is 872 mJ, for an average laser pulse duration of 38 fs, which represents a peak power \(P_{0}=23\) TW. 
The central wavelength is \(\lambda_{0}=0.8\)\(\mu\)m, and the waist of a Gaussian fit of the data measured in the focal plane is estimated at \(w_{0,Gauss}=15\)\(\mu\)m, which sets the Rayleigh length of the Gaussian fit to \(z_{R}\simeq 0.9\) mm. The LLC data-set used for the algorithm is a set of 4 transverse fluence profiles \(F_{exp}(x,y,z_{k})\) at \(z_{0,1,2,3}=0\), 500, 1000 and 1500 \(\mu\)m. For a given position \(z_{k}\), the fluence profile \(F_{exp}(x,y,z_{k})\) is randomly selected among 15 individual shot measurements for \(k\neq 2\) and 17 shots for \(k=2\). For each individual shot, the average background over a \(100\times 100\) pixels region far from the transverse focal spot energy has been subtracted. Then, for each averaged image, the fluence has been filtered setting values below 1 % of the absolute maximum to zero. Each measured distribution has then been smoothed by projecting them onto HG modes with \(N_{m}=N_{n}=40\). The projecting box over which the HG modes are fitted is a square grid of \(351\times 351\) pixels (397 \(\mu\)m \(\times\)397 \(\mu\)m) centered on the centroid of the fluence map in the focal plane (\(z=z_{0}\)). The size of the box is determined to ensure that the HG modes, whose characteristic transverse extension scales with \(w_{0,x}\sqrt{m}\), \(w_{0,y}\sqrt{n}\) in the transverse directions, decay to 0 before reaching the grid boundaries in the plane further from focus. For the Educated Guess, \(N_{tuning,EG}=300\), \(N_{m,EG}=N_{n,EG}=10\) and a search area \(S_{EG}=(\ 20\ \mu\)m\(\times\ 20\ \mu\)m), centered around the centroid of the fluence distribution at \(z=z_{0}\) was chosen. For the Refined Search, \(N_{tuning,RS}=300\), \(N_{m,RS}=N_{n,RS}=30\) and a search area \(S_{RS}=(\ 10\ \mu\)m\(\times\ 10\ \mu\)m) centered around the calibrated centers found by the Educated Guess were chosen. The final results of the GSA-MD calculation for the LLC data-set are shown in Figs. 5 and 6. Figure 5 shows the measured fluence images and the reconstructed fluence distributions at four positions along the propagation axis. Comparison of the images shows that the main features of the LLC data-set are well reconstructed by the GSA-MD calculation, in particular the asymmetries of the distribution at \(z_{2}=1.1\,z_{R}\) [Figs. 5 e), f)] and \(z_{3}=1.7\,z_{R}\) [Figs. 5 g), h)]. In Figure 6 the measured fluences in the \(z_{k}\) planes and the corresponding reconstructed fluences are compared on 1D plots, for the data shown in Fig. 5. For each \(z_{k}\) position, the fluence is plotted along the axis \(x\) (top panel) and axis \(y\) (bottom panel) directions, where the maximum measured fluence lie. Each line plot in the \(x\) (resp. \(y\)) direction is an average over 3 pixels in the \(y\) (resp. \(x\)) direction. The maximum relative differences on the measured fluence's amplitude in x and y are 2.4% at \(z_{0}\), 10% at \(z_{1}\), 9.2% at \(z_{2}\) and 2.7% at \(z_{3}\), which shows a good agreement in high intensity areas. 
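For illustration, the per-image preprocessing described above (background subtraction, thresholding at a fraction of the maximum, smoothing by projection on a large HG basis with a flat phase, recentering on the centroid, and normalization) could look as follows. This is our own sketch, reusing the `project` and `synthesize_field` helpers sketched in Section II, and it assumes a uniform grid centered on \(x=y=0\).

```python
import numpy as np

def preprocess(F_raw, x, y, w0x, w0y, k0, background_roi=np.s_[:100, :100],
               threshold=0.01, n_smooth=40, e_norm=1.0):
    """Background subtraction, thresholding, HG smoothing, recentering, normalization."""
    # 1) subtract the mean background estimated far from the focal spot
    F = F_raw - F_raw[background_roi].mean()
    # 2) set to zero the values below `threshold` of the absolute maximum
    F[F < threshold * F.max()] = 0.0
    # 3) smooth by projecting the amplitude sqrt(F) (flat phase) on a large HG basis at z = 0
    C = project(np.sqrt(F), x, y, 0.0, 0.0, 0.0, w0x, w0y, k0, n_smooth, n_smooth)
    F = np.abs(synthesize_field(C, x, y, 0.0, 0.0, 0.0, w0x, w0y, k0)) ** 2
    # 4) recenter on the energy centroid (integer-pixel circular shift, grid centered on 0)
    dx, dy = x[1] - x[0], y[1] - y[0]
    X, Y = np.meshgrid(x, y, indexing="ij")
    sx = int(round(float((X * F).sum() / F.sum()) / dx))
    sy = int(round(float((Y * F).sum() / F.sum()) / dy))
    F = np.roll(np.roll(F, -sx, axis=0), -sy, axis=1)
    # 5) divide by a fixed normalizing energy value E_norm
    return F / e_norm
```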
The evolution of the minimum error \(\chi^{2}_{min}\) obtained \begin{table} \begin{tabular}{c c c} \hline Parameter & LLC data-set & Apollon data-set \\ \hline \(\lambda_{0}\) & 0.8 \(\mu\)m & 0.8 \(\mu\)m \\ Peak power \(P_{0}\) & 23 TW & 400 TW \\ Mean energy/shot & 0.872 J & 4.8 J \\ \(\delta\bar{x}/w_{0,Gauss}\), \(\delta\bar{y}/w_{0,Gauss}\) & 16 \%, 8 \% & 100 \%, 40 \% \\ \(z\) & \([0,0.5,1,1.5]\) mm & \([0,-1.8,1.2]\) mm \\ \(N_{pix_{x}}\times N_{pix_{y}}\) & 351\(\pm\)351 & 301\(\times\)301 \\ Pixel size & 1.13 \(\mu\)m & 0.85 \(\mu\)m \\ Estimated \(w_{0,Gauss}\) & 15 \(\mu\)m & 16 \(\mu\)m \\ \(\Delta_{z}\) & 0.25 mm & 0.3 mm \\ \(w_{0,x}=w_{0,y}\) & 20 \(\mu\)m & 20 \(\mu\)m \\ \(N_{m,EG}\), \(N_{n,EG}\) & 10, 10 & 10,10 \\ \(N_{m,RS}\), \(N_{n,RS}\) & 30, 30 & 40,40 \\ \(S_{EG}\) & 20 \(\mu\)m \(\times\ 20\)\(\mu\)m & 100 \(\mu\)m \(\times\ 100\)\(\mu\)m \\ \(S_{RS}\) & 10 \(\mu\)m \(\times\ 10\)\(\mu\)m & 20 \(\mu\)m \(\times\ 20\)\(\mu\)m \\ \(N_{iter}\) & 50 & 50 \\ \(N_{tuning,EG}\), \(N_{tuning,RS}\) & 300, 300 & 300, 300 \\ Computing time, EG & 19 minutes & 18 minutes \\ Computing time, RS & 42 minutes & 57 minutes \\ \hline \end{tabular} \end{table} Table 1: Data-sets and reconstruction parameters: carrier wavelength \(\lambda_{0}\), peak power \(P_{0}\), mean energy per laser shot, shot,-to-shot relative pulse centroid position fluctuations \(\delta\bar{x}/w_{0,Gauss}\) and \(\delta\bar{y}/w_{0,Gauss}\), position \(z\) of the fluence measurement planes (\(z=0\) is the focal plane), number of pixels in the fluence images, pixel size, estimated Gaussian fit’s waist \(w_{0,Gauss}\), uncertainty of the focal plane position \(\Delta_{z}\), waists \(w_{0,x}=w_{0,y}\) for the HG modes, number of modes \(N_{m}\), and \(N_{n}\) in the \(x\) and \(y\) direction for the EG (RS) phase \(N_{m,EG}\), \(N_{n,EG}\) (\(N_{m,RS}\), \(N_{n,RS}\)), search area \(S_{EG}\) (\(S_{RS}\)) for the centers of the EG (RS) phase, number of iterations \(N_{iter}\) for Algorithm 1, number of center tuning iterations for the EG (RS) phase \(N_{tuning,EG}\) (\(N_{tuning,RS}\)), computing time for the EG and RS phases. Figure 5: Measured fluence distribution of the LLC data-set (upper row) and corresponding reconstructed distributions after center tuning (lower row). From left to right, the positions of the image planes along the propagation axis are : a, b) \(z_{0}=0\)\(\mu\)m; c, d) \(z_{1}=500\)\(\mu\)m (\(0.6\,z_{R}\)); e, f) \(z_{2}=1000\)\(\mu\)m (\(1.1\,z_{R}\)); g, h) \(z_{3}=1500\)\(\mu\)m (\(1.7\,z_{R}\)). For each position \(z_{k}\), the fluence has been normalized to the maximum of the corresponding measured fluence. during the center tuning is plotted for the EG and RS phases successively in Figure 7. The tuning of the HG centers leads to a reduction of \(\chi^{2}_{min}\) from \(2.26\times 10^{-3}\) to \(2.05\times 10^{-3}\) during the EG phase, which corresponds to a 9% reduction. Using the optimized centers (\(x_{0,k}\,y_{0,k}\)) obtained with the EG as input of the RS yields \(\chi^{2}_{min}=2.02\times 10^{-3}\) at the start of RS the phase. This sudden reduction of \(\chi^{2}_{min}\) between the end of the EG phase and the start of the RS phase is due to the higher number of HG modes used in the RS, which yields a more accurate field reconstruction and thus a lower \(\chi^{2}_{min}\). The calculated HG coefficients at the end of the Refined Search can be used to quantify the degree of asymmetry of the data-set. 
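This figure of merit, the share of \(\sum_{m,n}|C_{mn}|^{2}\) carried by the lowest-order modes, can be computed directly from the reconstructed coefficient matrix, as in the short sketch below (our notation; `C` is the matrix of reconstructed HG coefficients).

```python
import numpy as np

def mode_energy_fraction(C, N):
    """Share of the total sum |C_mn|^2 carried by the modes with m < N and n < N."""
    return float(np.sum(np.abs(C[:N, :N]) ** 2) / np.sum(np.abs(C) ** 2))
```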
For \(N_{m}=N_{n}=10\), the partial sum \(\sum_{m=0}^{N_{m}}\sum_{m=0}^{N_{n}}\left|C_{m,n}\right|^{2}\) reaches 97% of the sum obtained using all HG coefficients. During the RS, \(\chi^{2}_{min}\) decreases from \(\chi^{2}_{min}=2.02\times 10^{-3}\) to \(\chi^{2}_{min}=1.89\times 10^{-3}\), which corresponds to a 6% reduction. This shows that for this data-set, the EG alone is sufficient to find HG centers yielding a minimized error. It is important to use a high number of modes for a better reconstruction, as shown by the gap between the end of EG and start of RS. To find the optimum centers with an RS phase, it may be necessary to adjust the parameters of the Bayesian Optimization itself to minimize the computational cost of the RS. For the LLC data-set, both the EG and RS phases to obtain the results presented in Figs. 5, 6, 7 were performed on a laptop with CPU Intel i7-12700h, 64 GB RAM. The Bayesian Optimization phases were performed with 3 concurrent working threads. In the EG phase, the required computing time was 19 minutes, and 42 minutes during the RS phase. Figure 8: Measured fluence distribution of the Apollon data-set (upper row) and corresponding reconstructed distributions after the center tuning (lower row). From left to right, the positions of the image planes along the propagation axis are : a, b) \(z_{1}=-1800\)\(\mu\)m (\(-1.8\,z_{R}\)); c, d) \(z_{0}=0\)\(\mu\)m; e, f) \(z_{1}=1200\)\(\mu\)m (\(1.2\,z_{R}\)). For each position \(z_{k}\), the fluence has been normalized to the maximum of the corresponding measured fluence. Figure 6: Fluence profiles along the \(x\) (upper row) and \(y\) (lower row) directions, averaged over 3 pixels (3.39 \(\mu\)m) centered around the measured fluence maximum’s position in \(y\) and \(x\). Each profile has been normalized to the measured fluence maximum at \(z_{k}\). For a given position \(z_{k}\), the blue dashed line is the measured fluence from the LLC data-set and the red dashed line is the reconstructed fluence profile. From left to right, relative positions to the focal plane are : a, b) \(z_{0}=0\)\(\mu\)m; c, d) \(z_{1}=500\)\(\mu\)m (\(0.6\,z_{R}\)); e, f) \(z_{2}=1000\)\(\mu\)m (\(1.1\,z_{R}\)); g, h) \(z_{3}=1500\)\(\mu\)m (\(1.7\,z_{R}\)) Figure 7: Evolution of the minimum error \(\chi^{2}_{min}\) obtained during the center tuning of the GSA-MD applied to the LLC data-set as a function of the tuning iteration \(n_{tuning}\), with \(N_{iter}=50\) in Algorithm 1 for both the EG and RS phase. The blue curve is the evolution of \(\chi^{2}_{min}\) in the EG phase with \(N_{m}=N_{n}=10\) and the red curve is the evolution of \(\chi^{2}_{min}\) in the RS phase with \(N_{m}=N_{n}=30\). ### Pixel reconstruction for the Apollon data-set For the Apollon data-set, the average shot energy is 4.8 J, for an average laser pulse duration of 25 fs, which represents a peak power \(P_{0}=400\) TW. The central wavelength is \(\lambda_{0}=0.8\)\(\mu\)m, and the waist of a Gaussian fit of the data measured in the focal plane is estimated at \(w_{0,Gauss}=16\)\(\mu\)m, which sets the Rayleigh length of the Gaussian fit to \(z_{R}\simeq 1\) mm. The Apollon data-set to reconstruct is a set of 3 individual transverse fluence distributions \(F_{exp}(x,y,z_{k})\) at \(z_{0,1,2}=0\), \(-1800\), \(1200\)\(\mu\)m. Note that with this data-set the \(z_{0}\) is the focal plane position, which is not the first position available on the propagation axis. 
Due to high shot to shot fluctuations, for a given position \(z_{k}\), the fluence profile \(F_{exp}(x,y,z_{k})\) has been picked randomly among 4 images for \(k=0\), and among 2 images for \(k\neq 0\). The set of images over which the GSA-MD was performed is the same as in [5]. The same process as the one used for the LLC data-set has been performed. The same GSA-MD with Bayesian Optimization of the HG centers used for the LLC data-set was applied to the data of the Apollon Commissioning phase. The size of the projecting grid was set at 301\(\times\)301 pixels, and number of modes in the RS phase to \(N_{n}=N_{m}=40\). Compared to the LLC data-set, the relative pointing instability of the Apollon data-set is of the order of seven times larger (see Table 1). Thus, the search areas for the center tuning were chosen to be broader intervals compared to the search areas with the LLC data-set: \(S_{EG}=\) ( 100 \(\mu\)m\(\times\) 100 \(\mu\)m) centered around the centroid of the fluence distribution at \(z=z_{0}\), and \(S_{RS}=\) ( 20 \(\mu\)m\(\times\) 20 \(\mu\)m) centered around the calibrated centers found by the Educated Guess. In both EG and RS phases, the number of iterations for the center tuning was set to \(N_{tuning,EG}=N_{tuning,RS}=300\). The results of the GSA-MD with HG centers optimization as well as the convergence of \(\chi^{2}\) for the Apollon data-set are displayed in Figs. 8, 9, 10 respectively. For this application of the GSA-MD, again parallelized over 3 threads on the same laptop used with the LLC data-set, the EG phase took 18 minutes and the RS phase took 57 minutes. In Figure 8, the 2D comparison between the measured and reconstructed fluences shows a good agreement in the energy distribution of measurements and reconstructions. In Figure 9, the comparison between measured 1D profiles and reconstructed profiles at the measured fluence's maximum shows a good agreement in the amplitude. The maximum relative differences on the measured fluence's amplitude in x and y are 2.7% at \(z_{0}\), 2.9% at \(z_{1}\) and 0.8% at \(z_{2}\). In Figure 10, the evolution of the minimum error \(\chi^{2}_{min}\) over the center tuning process is reported. The relative \(\chi^{2}_{min}\) gap when going from the EG to RS phase at \(n_{tuning}=300\) is larger than for the results with the LLC data-set (see Fig. 7), due to the greater difference in the number of HG modes used in the EG and RS phase. For the Apollon data-set, setting \(N_{m}=N_{n}=10\), the partial sum \(\sum_{m=0}^{N_{n}}\sum_{n=0}^{N_{n}}\left|C_{m,n}\right|^{2}\) reaches only 90% of the sum obtained using all HG coefficients, while for the LLC data-set this number reaches 97%. This highlights the importance of using a high number of HG modes used for the GSA-MD calculation, especially in the RS. In this later phase, \(\chi^{2}_{min}\) is decreased by 18%, which is on par with the decrease of the EG (23%). In comparison to the LLC data-set, the Refined Search phase of the Apollon Figure 10: Evolution of the minimum error \(\chi^{2}_{min}\) obtained during the center tuning of the GSA-MD applied to the Apollon data-set as a function of the tuning iteration \(n_{tuning}\), with \(N_{iter}=50\) in Algorithm 1 for both the EG and RS phase. The blue curve is the evolution of \(\chi^{2}_{min}\) in the EG phase with \(N_{m}=N_{n}=10\) and the red curve is the evolution of \(\chi^{2}_{min}\) in the RS phase with \(N_{m}=N_{n}=40\). 
Figure 9: Fluence profiles along the \(x\) (upper row) and \(y\) (lower row) directions, respectively averaged over 3 pixels (2.55 \(\mu\)m) centered around the measured fluence maximum’s position in \(y\) and \(x\). Each profile has been normalized to the measured fluence maximum at \(z_{k}\). For a given position \(z_{k}\), the blue dashed line is the measured fluence from the Apollon data-set and the red dashed line is the reconstructed fluence profile. From left to right, relative positions to the focal plane are : a, b) \(z_{1}=-1800\)\(\mu\)m (\(-1.8\,z_{R}\)); c, d) \(z_{0}=0\)\(\mu\)m; e, f) \(z_{1}=1200\)\(\mu\)m (\(1.2\,z_{R}\)). data-set GSA-MD has a quicker convergence of the reconstruction error. The difference stems from a higher sum share when fixing \(N_{m}\), \(N_{n}=10\) for the LLC data-set. ### Comparison with a version of the Gerchberg-Saxton algorithm without modes decomposition In this section the performances of the GSA-MD are compared to those of a version of the GSA that uses the Fresnel Transform for the propagation of the electric field [43]. The flowchart of this implementation is the same as in the 3D Gerchberg-Saxton variant of [24], except for the amplitude constraint which here is Step 5) of Algorithm 1. To compare the results of the GSA-MD with this GSA version (for brevity referred to as "GSA" in the following), the Apollon data-set was used. The GSA has been performed with \(z_{0}\) defined as the reference plane, and \(z_{1,2}\) as the image planes. The GSA-MD has been performed with \(N_{m}=N_{n}=40\) and without origin tuning, and with \(N_{m}=N_{n}=40\) and origin tuning. The same maximum number of iterations, i.e. \(N_{iter}=50\) was set for the GSA and for the Algorithm 1 for the GSA-MD. The results for the GSA and the 2 runs of GSA-MD (without and with origin tuning) are displayed in Fig. 11. Although the reconstructions displayed in Figs. 11.(b)-(d) are qualitatively similar, the reconstructed fluence distributions obtained with the GSA in \(z_{1}\) and \(z_{2}\) of Fig. 11.(b) are noisier than the ones from Figs. 11.(c) and (d) obtained with the GSA-MD. To quantify this noise across the planes \(z_{k}\), the error \(\chi_{k}^{2}\) was measured for each plane. It is defined as : \[\chi_{k}^{2}=\frac{\sqrt{\sum_{i_{x},i_{y}}^{N_{pix}}\left(F_{exp}(x,y,z_{k})- F_{fit}(x,y,z_{k})\right)^{2}}}{\sum_{i_{x},i_{y}}^{N_{pix},N_{pixy}}F_{exp}(x,y,z_{k})}. \tag{8}\] By definition \(\chi^{2}\) defined in Eq. 4 is the average of the errors \(\chi_{k}^{2}\) of all planes, i.e. \(\chi^{2}=\frac{1}{N_{images}}\sum_{k=0}^{N_{images}-1}\chi_{k}^{2}\). The performances of the GSA and of the GSA-MD without and with origin tuning are reported in Table 2. Note that some of the data reported in the third column of Table 2 appear in the third column of Table 1. The GSA-MD with \(N_{m}=N_{n}=40\) and no origin tuning yields a \(\chi^{2}\) error 9% lower than the GSA variant. With the origin tuning, the \(\chi^{2}\) error of the GSA-MD becomes 35% lower. Furthermore, the reconstructed profiles in \(z_{1}\) and \(z_{2}\) of Fig. 11.(b) are noisier than their GSA-MD counterparts. This difference results into higher values of \(\chi_{1}^{2}\) and \(\chi_{2}^{2}\). The difference between the maximum \(\chi_{k}^{2}\) and the minimum \(\chi_{k}^{2}\) across the planes is equal to 89%, 51%, 39% of the average error \(\chi^{2}\) for the GSA, GSA-MD without and with origin tuning respectively. 
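In code, the per-plane error of Eq. (8) and its average, which coincides with the \(\chi^{2}\) of Eq. (4), can be written as follows (our notation; the fluence stacks are ordered by plane index \(k\)).

```python
import numpy as np

def chi2_k(F_exp_k, F_fit_k):
    """Eq. (8): normalized residual of the reconstructed fluence in a single plane z_k."""
    return np.sqrt(np.sum((F_exp_k - F_fit_k) ** 2)) / np.sum(F_exp_k)

def chi2_mean(F_exp, F_fit):
    """Average of the per-plane errors, i.e. the chi^2 of Eq. (4)."""
    return float(np.mean([chi2_k(Fe, Ff) for Fe, Ff in zip(F_exp, F_fit)]))
```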
To summarize, the GSA-MD without origin tuning and \(N_{m}=N_{n}=40\) has an execution time of the order of ten seconds, while the GSA has an execution time of 3.8 s. With \(N_{m}=N_{n}=10\), the GSA-MD without origin tuning performs in a shorter execution time of 2.7 s and \(\chi^{2}=3.2\times 10^{-3}\) (this case is not included in Table 2 and Fig. 11). Additionally, the considered GSA-MD results with \(N_{m}=N_{n}=40\) yield a lower reconstruction error, a more uniform distribution of the reconstruction errors \(\chi_{k}^{2}\) across the planes, and smoother distributions in \(z_{1,2}\). Using the origin tuning in the GSA-MD makes the distribution of the reconstruction errors \(\chi_{k}^{2}\) even more uniform across the planes. ## IV Conclusions A fast, flexible Gerchberg-Saxton algorithm with Hermite-Gauss mode decomposition to reconstruct the laser field was presented. In this algorithm, as in a 3D Gerchberg-Saxton Algorithm, the fluence data from multiple planes is used to iteratively build a description of the laser pulse (amplitude and phase). This knowledge can be used to study, and possibly correct, the imperfections of high intensity laser pulses and their effect in laser-plasma interaction. Compared to a Gerchberg-Saxton algorithm using Fourier-transform propagators, the use of modes in the proposed algorithm introduces some flexibility. Since the measured fluences come from different shots, often with wavefront and pointing instabilities, tuning the centers of the modes makes it possible to reduce the error associated with the field reconstruction. Changing the number of modes allows one to reach the desired compromise between reconstruction error and the computation time required for the reconstruction. These features of the algorithm have been demonstrated by showing the reconstruction of the laser field of two very different high intensity lasers, the Lund Laser Centre (LLC) system and the Apollon facility in the commissioning phase. \begin{table} \begin{tabular}{c c c c} \hline Parameter & GSA & GSA-MD & GSA-MD \\ & & (\(N_{m,n}=40\), & (\(N_{m,n}=40\), \\ & & without origin tuning) & with origin tuning) \\ \hline \(N_{iter}\) & 50 & 50 & 50 \\ Total time & 3.8 s & 13.6 s & 1 h 15 min \\ \(\chi^{2}\) (\(\times 10^{-3}\)) & 2.50 & 2.28 & 1.61 \\ \(\chi_{0}^{2}\) (\(\times 10^{-3}\)) & 1.94 & 2.76 & 1.98 \\ \(\chi_{1}^{2}\) (\(\times 10^{-3}\)) & 3.89 & 2.48 & 1.52 \\ \(\chi_{2}^{2}\) (\(\times 10^{-3}\)) & 1.67 & 1.60 & 1.35 \\ \hline \end{tabular} \end{table} Table 2: Performances on the Apollon data-set of the GSA and of the GSA-MD without and with origin tuning. The value \(\chi_{k}^{2}\) is the reconstruction error in the plane \(z_{k}\).
2308.16884
The Belebele Benchmark: a Parallel Reading Comprehension Dataset in 122 Language Variants
We present Belebele, a multiple-choice machine reading comprehension (MRC) dataset spanning 122 language variants. Significantly expanding the language coverage of natural language understanding (NLU) benchmarks, this dataset enables the evaluation of text models in high-, medium-, and low-resource languages. Each question is based on a short passage from the Flores-200 dataset and has four multiple-choice answers. The questions were carefully curated to discriminate between models with different levels of general language comprehension. The English dataset on its own proves difficult enough to challenge state-of-the-art language models. Being fully parallel, this dataset enables direct comparison of model performance across all languages. We use this dataset to evaluate the capabilities of multilingual masked language models (MLMs) and large language models (LLMs). We present extensive results and find that despite significant cross-lingual transfer in English-centric LLMs, much smaller MLMs pretrained on balanced multilingual data still understand far more languages. We also observe that larger vocabulary size and conscious vocabulary construction correlate with better performance on low-resource languages. Overall, Belebele opens up new avenues for evaluating and analyzing the multilingual capabilities of NLP systems.
Lucas Bandarkar, Davis Liang, Benjamin Muller, Mikel Artetxe, Satya Narayan Shukla, Donald Husa, Naman Goyal, Abhinandan Krishnan, Luke Zettlemoyer, Madian Khabsa
2023-08-31T17:43:08Z
http://arxiv.org/abs/2308.16884v2
# The Belebele Benchmark: a Parallel Reading Comprehension Dataset in 122 Language Variants ###### Abstract We present Belebele, a multiple-choice machine reading comprehension (MRC) dataset spanning 122 language variants. Significantly expanding the language coverage of natural language understanding (NLU) benchmarks, this dataset enables the evaluation of text models in high-, medium-, and low-resource languages. Each question is based on a short passage from the FloRes-200 dataset and has four multiple-choice answers. The questions were carefully curated to discriminate between models with different levels of general language comprehension. The English dataset on its own proves difficult enough to challenge state-of-the-art language models. Being fully parallel, this dataset enables direct comparison of model performance across all languages. We use this dataset to evaluate the capabilities of multilingual masked language models (MLMs) and large language models (LLMs). We present extensive results and find that despite significant cross-lingual transfer in English-centric LLMs, much smaller MLMs pretrained on balanced multilingual data still understand far more languages. We also observe that larger vocabulary size and conscious vocabulary construction correlate with better performance on low-resource languages. Overall, Belebele opens up new avenues for evaluating and analyzing the multilingual capabilities of NLP systems. ## 1 Introduction The absence of high-quality, parallel evaluation benchmarks is a major obstacle in assessing the text comprehension capabilities of multilingual models. NLP datasets with high language coverage do exist, such as FloRes-200 Team et al. (2022), but they primarily focus on machine translation. Popular multilingual evaluation benchmarks, such as multilingual question answering Lewis et al. (2020); Clark et al. (2020), natural language inference (NLI) Conneau et al. (2018), summarization Ladhak et al. (2020); Hasan et al. (2021), and reasoning datasets Ponti et al. (2020); Lin et al. (2021) altogether only cover around 30 languages. And while understanding and generative text services are used across the globe in 100+ languages, the lack of labeled data poses a major obstacle to building functional systems in most languages. To date, there exists no massively-multilingual evaluation dataset for natural language understanding (NLU). Simultaneously, large language models (LLMs) have become increasingly popular. Certain LLMs, like BLOOM Scao et al. (2022), are trained on multilingual data and tout their innate multilingual capabilities. Others like GPT-3 Brown et al. (2020) and Llama Touvron et al. (2023) have demonstrated multilingual competence despite their training data being predominantly English-centric. Systems are often built first for high-resource languages due to data availability, existing academic work, organizational priorities, and even typological similarity to English Joshi et al. (2020); Team et al. (2022). Even so, LLMs benefit from pretraining data that is linguistically diverse, intentionally or not, as well as from cross-lingual transfer Zoph et al. (2016); Artetxe et al. (2020); Muller et al. (2021). But how multilingual are these models really? Beyond LLMs, significant scientific progress needs to be made before NLP systems can be built effectively and efficiently in low-resource languages. 
Many modeling techniques are being presented as language-_agnostic_ but have only truly been evaluated in a small number of languages Bender (2011), risking not being applicable to typologically diverse phenomena Bender (2009). We believe that a large-scale, parallel, and discriminative NLU dataset is crucial for studying the multilingual capabilities of such models and understanding how the technological disparity between high- and low-resource languages is evolving. In this paper, we present a fundamental natural language understanding benchmark to evaluate language models across 122 language variants from around the world, called Belebele. The dataset contains 900 unique multiple-choice reading comprehension questions, each associated with one of 488 distinct passages. The questions have been carefully crafted to discriminate between models with different levels of competence in language comprehension. While the questions do not necessarily require higher levels of knowledge or reasoning, they favor generalizable NLU models and deliberately punish biased models. The English questions on their own present a significant challenge to numerous models, while humans are capable of answering the questions with near-perfect accuracy. The wide-ranging model results make this a discriminative NLU task similar to popular LLM benchmarks like MMLU Hendrycks et al. (2021). The first of its scale, Belebele is parallel across all languages, facilitating a direct comparison of model performance across all languages. The dataset covers typologically diverse high-, moderate-, and low-resource languages across 29 scripts and 27 language families. In addition, seven languages are included in two separate scripts, resulting in one of the first NLP benchmarks for the romanized variants of Hindi, Urdu, Bengali, Nepali, and Sinhala. We further detail our data collection process and the resulting corpus in Section 3. The dataset enables evaluation of monolingual and multilingual models, but the parallel nature also enables the evaluation of cross-lingual textual representations in a number of cross-lingual settings. The task can be evaluated via full fine-tuning by assembling a training set from related QA datasets. We demonstrate this in Section 4 with several masked language models (MLMs) on both cross-lingual transfer from English fine-tuning and translate-train-all. For LLMs, we evaluate several models using five-shot in-context learning and also instruction-tuned models via zero-shot (in-language and translate-test). We discuss our results in Section 5. This paper's contributions are the following: * We release Belebele, the first parallel Reading Comprehension evaluation benchmark for 122 languages. * We present baseline results on Belebele across the 122 languages for MLMs and LLMs in numerous settings. * Thanks to Belebele, we find that while large vocabulary size and balanced pretraining data correlate with the highest model performance on medium- and low-resource languages, even English-centric LLMs can go a long way and generalize to over 30 languages. ## 2 Background ### Cross-Lingual Evaluation Benchmarks There are several cross-lingual evaluation benchmarks for NLU that are parallel across numerous languages and enable monolingual, multilingual, or cross-lingual evaluation, such as XNLI Conneau et al. (2018), XQuAD Artetxe et al. (2020), or MLQA Lewis et al. (2020). Mintaka Sen et al. (2022) is designed with LLMs in mind, presenting a more difficult QA task in 9 languages. Beyond QA, XL-Sum Hasan et al. 
(2021) is an analogous dataset in the domain of abstractive summarization. However, all these datasets together cover under 30 languages, most of which are high- or medium-resource. MASSIVE FitzGerald et al. (2023) is a large NLU dataset covering 51 languages, but in the domain of spoken conversational agents. Our work undertakes the challenge of expanding existing cross-lingual evaluations to 122 languages, for many of which no NLU benchmark currently exists. CCQA Huber et al. (2022) is the most multilingual QA dataset available, generating questions from Common Crawl at scale. However, this dataset is intended to be used for in-domain pretraining and is not a parallel evaluation benchmark. Similarly, NER Pan et al. (2017) has broad language coverage and TyDiQA Clark et al. (2020) is a commonly used evaluation benchmark but neither are parallel. ### Non-English Machine Reading Comprehension While the question-answering portion varies, machine reading comprehension (MRC) tasks are defined by the closed-book passage provided to answer each question. Naturally, a big majority of MRC datasets are in English, such as TriviaQA Joshi et al. (2017) and the collection of bAbI tasks Weston et al. (2016). However, there has been a rise in non-English MRC datasets in recent years. Monolingual closed-book MRC datasets now exist in Arabic Mozannar et al. (2019), Bulgarian Hardalov et al. (2019), French d'Hoffschmidt et al. (2020), German Moller et al. (2021), Hindi Anuranjana et al. (2019); Gupta et al. (2018), Italian Croce et al. (2018), Russian Efimov et al. (2020); Shavrina et al. (2020), and Tibetan Sun et al. (2021), amongst others. Many were created using translation and so are parallel with an English QA dataset, often SQuAD Rajpurkar et al. (2016). However, Belebele covers all these languages at once and many more. ### Multiple Choice QA Compared to extractive QA, multiple-choice question-answering is a less common form of MRC. The most similar English datasets to Belebele are those with multiple-choice question based on a provided paragraph in a particular domain. RACE Lai et al. (2017) and \begin{table} \begin{tabular}{l l l r l r} \hline \hline \multicolumn{1}{c}{} & \multicolumn{3}{c}{**Belelele Statistics**} & \multicolumn{2}{c}{Question statistics} \\ \hline Total Number & 122 & Distinct Passages & 488 & Distinct Questions & 900 \\ Distinct Languages (ignoring script) & 115 & Questions per passage & 1-2 & Multiple-choice answers (num correct) per question & 4 (1) \\ Language Families & 27 & Avg. words per passage (std) & 79.1 (26.2) & Avg. words per question (std) & 12.9 (4.0) \\ Scripts & 29 & Avg. sentences per passage (std) & 4.1 (1.4) & Avg. words per answer (std) & 4.2 (2.9) \\ \hline \hline \end{tabular} \end{table} Table 1: Language and Text Information for Belebele. Average length statistics are computed on the English split. CLEF Entrance Exams (Penas et al., 2014) are large fitting datasets made from exam questions for English learners. MCTTest(Richardson et al., 2013) was built specifically for ML systems, but is intended to similarly include diverse reasoning phenomena. MultiRC(Khashabi et al., 2018) emphasizes multi-sentence reasoning for its questions, and provides a multiple-choice dataset with any number of correct answers. Agarwal and Mannem (2011) provides a system to generate fill-in-the-blank multiple-choice questions for a text corpus. MovieQA(Tapaswi et al., 2015) and MCScript2.0(Ostermann et al., 2019) contain theater or movie scripts. 
SciIQ(Welbl et al., 2017) and OpenBookQA(Mihaylov et al., 2018) are open-book multiple choice datasets, that also have associations to passages. In comparison to more straightforward MRC tasks, COPA (Roemmele et al., 2011), SWAG (Zellers et al., 2018), and RecLor(Yu et al., 2020) consist of multiple-choice questions that require higher-level commonsense reasoning to answer. EXAMS (Hardalov et al., 2020) is a parallel multiple-choice QA dataset in 28 languages. However, it differs from our dataset in that passages are not provided and answering questions requires multilingual knowledge transfer and reasoning. ### FLoRes-200 The FLoRes-200 Machine Translation Benchmark (Goyal et al., 2022; Team et al., 2022) is a dataset parallel across 200 languages. The dataset was constructed by sourcing English passages from Wikiews, Wikijunior, and WikiVoyage. The translations were performed by native speakers with high English fluency and translation experience. Translators were instructed to maintain informative and standardized content while handling named entities, abbreviations, idiomatic expressions, and pronouns appropriately. The passages in the Belebele corpus are directly sourced from FLoRes. ## 3 The Belebele Dataset We opted to create multiple-choice questions and answers in English and then translate, as opposed to creating resources natively in each language. Many of the advantages to this approach outlined in Conneau et al. (2018) remain. Most importantly, this leads to significantly more similar sets of samples across languages, enabling direct score comparison. The process for creating the dataset is summarized in Figure 2. ### Creation of Multiple Choice Questions & Answers To create the Belebele dataset, we first construct a question-answering dataset in English. Amongst machine reading comprehension tasks, we select multiple-choice questions (MCQs) because it would lead to the fairest evaluation across languages. In comparison to span extraction, which is more sensitive to morphological differences, MCQs enable the ability to scale to many languages when translating from English. In addition, MCQs enable us to better center the questions on information explicitly stated in the passage, as yes/no or entailment (NLI) questions can be easier to answer with external knowledge held in pretrained models. In order for the questions to discriminate solely Figure 1: A sample passage from the dataset in 4 different languages, displayed alongside its two questions. between different levels of language understanding, we intentionally create questions that do not require higher levels of information processing, such as multi-hop or commonsense reasoning. Constructing high quality MCQs depends most importantly on creating strong negatives that are neither obviously wrong nor possibly correct Agarwal and Mannem (2011); Richardson et al. (2013). We do not want the dataset to be easy enough for biased models (e.g. models that use shortcuts or pattern-match) Boyd-Graber and Borschinger (2020). In setting up this annotation, we consider the protocols proposed in Bowman et al. (2020) and the warnings of annotation shortcomings mentioned in Malaviya et al. (2022). In a process similar to what Nangia et al. (2021) advocates for, we implement an iterative procedure with the Language Service Provider (LSP) for this involved data collection task. We engaged in 5 total iterations, providing and receiving feedback each time. 
Annotators were instructed on the similarities and differences between how ML models and humans approach QA datasets, which we felt substantially improved the quality of the data. Our final guidelines include both important points, such as requiring the correct response to be unambiguous, as well as particularized rules such as _no double negatives_ Mihaylov et al. (2018). For each rule we provided annotators with a good and bad example to illustrate. An abridged version of our guidelines can be found in the Appendix A.2.1. ### Quality Assurance At each iteration, we evaluate whether or not returned samples satisfy the minimum quality bar through a mix of manual inspection and automatic inspection. At every step, we manually verified a sample of questions to understand how well the annotators were on the same page with us about the guidelines. While time-consuming, manual verification was the most assured way to provide proper feedback to the annotators, notably on the difficulty of the questions created. To complement the manual inspection of a subset of questions, we use programmatic methods to evaluate all questions. Based on the findings in Malaviya et al. (2022), we create low-level features to identify overly easy questions or low-effort strategies employed by annotators. For example, we evaluate the lexical overlap (n-gram precision) between different combinations of the texts associated with a question to evaluate whether the question is answerable by a biased model. This allows us to see if the question can be answered without the passage, without the question, or with only one sentence in the passage. We also identified patterns associated with heuristic solvability, such as only one of the answers being extracted directly from the passage, enabling easy answering due to lack of plausibility for wrong answers. These low-level features allow us to (1) determine whether an annotation iteration was up to par, (2) filter out questions that failed these heuristic checks (for the final iteration, about 20% were filtered out), and (3) compare to other MCQ datasets. We run statistical t-tests to ensure the distribution of these features for correct answers is no different than for wrong answers. The final collection has p-value 0.81, in comparison to MCTest which largely fails this t-test (p-value < 0.01). We also train a naive logistic regression model to answer using only these low-level features and find that for MCTest, it can achieve accuracy up to 0.44. On our 900 questions, the best the naive model could achieve was 0.28, only slightly better than random (0.25). ### Translating the Corpus Belebele was created end-to-end without the use of machine translation technology, relying solely on experts fluent in English and the target language. For all languages included in the corpus, the context passages were taken directly from the FLoRes-200 dataset, with the exception of Hindi, Bengali, Urdu, Nepali, and Sinhala in the Latin script. For these 5 Indo-Aryan languages, their romanization is not included in FLoRes-200 while being very prevalent on the modern Internet. Figure 2: Flowchart illustrating the dataset creation process starting from the FLoRes-200 passages via Language Service Provider (LSP) annotation. We thus additionally transliterate from the native to Latin script using IndicXlit Madhani et al. (2022) and have annotators proofread. As a result, much like Modern Standard Arabic, these languages are present in two forms in the Belebele corpus. 
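To make the lexical-overlap check concrete, the snippet below is a minimal sketch of how such a low-level feature could be computed; the function names, whitespace tokenization, and the 0.2 margin are illustrative assumptions rather than the exact filtering code used for Belebele.

```
from collections import Counter

def ngrams(tokens, n):
    # All contiguous n-grams of a token list
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def ngram_precision(candidate, reference, n=2):
    # Fraction of the candidate's n-grams that also appear in the reference text
    cand = Counter(ngrams(candidate.lower().split(), n))
    ref = Counter(ngrams(reference.lower().split(), n))
    overlap = sum(min(count, ref[gram]) for gram, count in cand.items())
    total = sum(cand.values())
    return overlap / total if total else 0.0

def looks_heuristically_solvable(passage, answers, correct_idx, n=2, margin=0.2):
    # Flag questions whose correct answer overlaps the passage far more than every distractor,
    # i.e. questions a biased model could answer without real comprehension
    scores = [ngram_precision(answer, passage, n) for answer in answers]
    best_distractor = max(score for i, score in enumerate(scores) if i != correct_idx)
    return scores[correct_idx] - best_distractor > margin
```

Questions flagged by a check of this kind would be candidates for filtering or rewriting, in the spirit of the roughly 20% of final-iteration questions removed by the heuristic checks described above.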
In order for the questions and answers to properly pair with the translated FLoRes passages, the latter were provided to the annotators. We specifically instructed annotators to align potentially ambiguous translations with the original passages. While Clark et al. (2020) warns that this forced alignment could increase 'translationese', it is necessary to ensure equivalent question difficulty across languages. The modifications to the translation guidelines can be found in Appendix A.2.2. All translations were proofread and edited by an additional annotator. Annotators raised several quality issues or inconsistencies in the original translated passages. For each, we deliberated with the LSP to establish acceptable translations of the question, while still maximizing alignment with the passage. Following the translations of the questions and answers to all languages, we form the final Belebele corpus by combining the passages, questions, and answers in each language (including English). ### The Belebele Dataset in Summary Belebele contains 900 questions, each with exactly 4 multiple-choice answers and one correct answer. Most passages have two associated questions, but some have only one. In total, there are 488 distinct passages, none belonging to the hidden FLoRes test set. Parallel across 122 languages, the corpus therefore contains a total of 109,800 rows. We display a sample passage in four languages in Fig. 1. We also provide a training and development set (see Section 4.2). Because of the careful annotation procedure and quality checks, the MCQs discriminate levels of text comprehension competence. They often include paraphrasing and strong negatives in order to elude pattern-matching models seeking giveaways. Questions often additionally require understanding multiple sentences. However, answering does not require presumptions or external knowledge as is required in more difficult reasoning datasets. For example, Q1 in Fig. 1 is unambiguous. _Food_, _mates_, and _flying_ are all mentioned in the passage, but a careful read reveals the wings folding back is only associated with _hiding spaces_. To confidently rule out the other candidate answers, it is required to understand three sentences. In general, we find all questions to be answerable by humans fluent in the target language, but not without focused reading (see Section 5.1). As can be seen in Fig. 1, the passages, questions, and answers are aligned in semantic meaning and formality. This therefore poses an equivalent challenge in all languages. It also enables models with semantic representations aligned across languages to answer questions when passage, question, and answer are presented in different languages. Since FLoRes includes passages in 83 additional languages, we can even evaluate reading comprehension in these languages by asking the questions in English. ## 4 Experiments Thanks to Belebele, we are able to evaluate numerous models and establish baseline performances across 122 language variants. We compare performance between popular multilingual MLMs and LLMs in several settings such as fine-tuning, few-shot in-context learning, and zero-shot prompting. For all, accuracy is the central metric. With 4 candidate answers for each question, the expected accuracy for sequence classification models that guess randomly is 0.25. However, the accuracy can be lower than this for sequence-to-sequence models (e.g. instructed models) that are evaluated in exact-match scenarios. 
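As a concrete illustration of the accuracy metric in the exact-match setting, the sketch below normalizes free-form generations to a single letter before comparing against the gold answer; the specific normalization rules (stripping parentheses or a leading phrase) are assumptions mirroring the post-processing described later, not the authors' exact evaluation script.

```
import re

def normalize_to_letter(prediction):
    # Map generations such as "(A)" or "The correct answer is B" to one of A/B/C/D
    text = prediction.replace("The correct answer is", "").strip()
    match = re.search(r"[ABCD]", text)
    return match.group(0) if match else None

def exact_match_accuracy(predictions, gold_letters):
    # Unparseable generations count as wrong, so accuracy can fall below the 0.25 random baseline
    correct = sum(normalize_to_letter(p) == g for p, g in zip(predictions, gold_letters))
    return correct / len(gold_letters)
```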
### Evaluated Models Masked Language Models (MLMs). We evaluate three different models, XLM-V (Liang et al., 2023), InfoXLM (Chi et al., 2021), and XLM-R (Conneau et al., 2020). All the evaluated MLMs have been pretrained on data specifically designed to include multilingual text in about 100 languages. The pretraining data in high-resource languages is typically down-sampled while low-resource languages are up-sampled in order to favor multilingual performance (Conneau et al., 2020). In addition, the subword tokenizers (Kudo and Richardson, 2018) of all these models are trained on the multilingual corpora, making them better suited for multilingual text. Large Language Models. We evaluate GPT3.5-turbo, Falcon and Llama (1 and 2). GPT3.5-turbo is a model optimized for chat based on GPT-3 (Brown et al., 2020) available through OpenAI APIs2. Limited details have been disclosed about the pretraining and fine-tuning data and its language distribution. Llama 1 (Touvron et al., 2023) is a collection of decoder-only transformer models trained on 1.4T tokens of Web-Crawled data.3 Llama 2 (Touvron et al., 2023) improved upon Llama 1 models by pretraining on about 2T tokens of web-crawled data. We evaluate the 7B, 13B, 30B, and 65B pretrained checkpoints for Llama 1. We evaluate Llama 2 70B in both its pre-trained version and its chat version that was instruction-fine-tuned (Ouyang et al., 2022) for safe dialog purposes (a.k.a. Llama-2-chat). Falcon is pretrained on one trillion extensively filtered web-crawled samples (Penedo et al., 2023). We evaluate the 40B variant. Footnote 2: [https://platform.openai.com/docs/models/overview](https://platform.openai.com/docs/models/overview) Llama 1 was reportedly trained on English data and 19 languages written in the Latin and Cyrillic scripts. Non-English text accounts for less than 4.5% of the pretraining corpus (Touvron et al., 2023). Llama 2 pretraining data consists of 89.7% English data (Touvron et al., 2023). The rest corresponds to text identified as belonging to 26 languages (e.g. German, Vietnamese, Indonesian, etc.) and 8.4% of unidentified data.4 Both series use the same BPE-based tokenizers (Kudo and Richardson, 2018). We note that unknown Unicode characters are split into bytes so Llama models avoid out-of-vocabulary errors. Footnote 4: See Table 10 in Touvron et al. (2023b) for a full list of the identified languages We also experimented with other multilingual decoder-only language models such as XGLM (Lin et al., 2022) and BLOOM (7B) (Scao et al., 2022). Still, these models do not perform better than chance on Belebele, so we do not report their performance. ### Fine-tuning, Few-shot and Zero-shot Evaluation English Training data. The Belebele dataset is intended to be used only as a test set, and not for training. Therefore, for models that require additional task finetuning, we instead propose using an assembled training set consisting of samples from pre-existing multiple-choice QA datasets in English. We considered diverse datasets, and determined the most compatible to be RACE (Lai et al., 2017), SciQ (Welbl et al., 2017), MultiRC (Khashabi et al., 2018), MCTest (Richardson et al., 2013), MCScript2.0 (Ostermann et al., 2019), and ReClor (Yu et al., 2020). For each of the 6 datasets, we unpack and restructure the passages and questions from their respective formats. We then filter out less suitable samples (e.g. 
questions with multiple correct answers) and experiment with different strata to train the best RoBERTa-base model (Liu et al., 2019) evaluated on the English set. In the end, the dataset comprises 67.5k training samples and 3.7k development samples, more than half of which are from RACE. We provide a script5 to reconstruct this dataset for anyone to perform task finetuning. Footnote 5: [https://github.com/facebookresearch/belebele](https://github.com/facebookresearch/belebele) Fine-tuning in Cross-Lingual Transfer and Translate-Train Settings. For evaluating all three MLMs, we add a multiple-choice classification head and fine-tune the entire model. We finetune in two settings: (1) in English, evaluating cross-lingual transfer, and (2) on machine-translated samples of the training set in all the target languages, evaluating each language (translate-train-all). We use machine translation on passages, questions, and answers separately. For each training run, we only do one epoch and limit the training and validation sample to 650k. For both settings, the development set is used for hyperparameter search and we evaluate the two best training runs on Belebele. Five-shot In-Context Learning. We evaluate the pre-trained Llama 1 and 2 as well as Falcon 40B in the five-shot setting. Examples are sampled from the English training set and prompted to the model (following the template P: <passage> \n Q: <question> \n A: <mc answer 1> \n B: <mc answer 2> \n C: <mc answer 3> \n D: <mc answer 4> \n Answer: <Correct answer letter>). We report the average scores over 3 runs. In this setting, we perform prediction by picking the answer within {A, B, C, D} that has the highest probability relative to the others. Figure 3: Belebele Results in 122 languages. We compare four models in two settings and see the difference between intentionally multilingual models and models with English-centric data. GPT3.5-turbo performs the best on the top 20 languages, but after 40-50, its performance falls far behind InfoXLM and XLM-V. Similarly, InfoXLM outperforms XLM-V in the first 40 languages, but XLM-V proves more capable on the long tail of languages. Note that the language order can change the plot considerably, here we choose median accuracy. Zero-shot Evaluation. We evaluate both GPT3.5 and Llama-2-chat in the zero-shot setting by describing the task in natural language. We present the passage, question, and four possible answers, and instruct the model to provide the letter "A", "B", "C" or "D" as the answer. The instructions are given in English for all languages. We perform post-processing steps and accept answers predicted as e.g. "(A)" instead of "A".6 In addition, we evaluate prompting Llama-2-chat (70B) with instructions that are machine translated to the target language from English. Conversely, we evaluate the model in the translate-test setting, where the passages, questions, and answers are machine translated to English and prompted to the model. This setting allows us to compare in-language comprehension to using machine translation, as is common in many multilingual systems. Footnote 6: For Llama-2-chat we additionally remove the prefix "The correct answer is ". ## 5 Results We provide a summary table of results in Table 2 and all results in Appendix A.3. ### How difficult is Belebele? As discussed in Section 3, the questions in Belebele are intentionally difficult. 
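The following sketch shows one way the five-shot template and letter-probability scoring described above could be implemented with the Hugging Face transformers library; the checkpoint name, helper names, and the detail of scoring only the single next token after "Answer:" are assumptions for illustration, not the authors' exact evaluation harness.

```
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

LETTERS = ["A", "B", "C", "D"]

def format_example(passage, question, answers, gold_letter=None):
    # Follows the template above: passage, question, four lettered answers, then "Answer:"
    lines = [f"P: {passage}", f"Q: {question}"]
    lines += [f"{letter}: {answer}" for letter, answer in zip(LETTERS, answers)]
    lines.append(f"Answer: {gold_letter}" if gold_letter else "Answer:")
    return "\n".join(lines)

def predict_letter(model, tokenizer, few_shot_examples, sample):
    # Concatenate five solved English examples with the unsolved target question
    prompt = "\n\n".join(format_example(*ex) for ex in few_shot_examples)
    prompt += "\n\n" + format_example(sample["passage"], sample["question"], sample["answers"])
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        next_token_logits = model(**inputs).logits[0, -1]
    # Compare the logits of the four answer letters and pick the most probable one
    letter_ids = [tokenizer.encode(" " + letter, add_special_tokens=False)[-1] for letter in LETTERS]
    return LETTERS[int(torch.argmax(next_token_logits[letter_ids]))]

# Example usage (checkpoint name is an assumption):
# tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
# model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")
# letter = predict_letter(model, tokenizer, five_examples, test_sample)
```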
While the primary challenge of this dataset is its multilinguality, we see that empirically, the English questions are able to shed light on the varying NLU capabilities of models. With full finetuning, we achieved a maximum accuracy of \(71.7\) in English with the RoBERTa-base model, significantly less than the \(90.9\) achieved by Llama 2 70B in five-shot. Between Llama 1 models of different size, we see a significant range of results, as the 7B model only achieves a score of \(37.3\) (See Section 5.2). To establish human performance, 4 authors each randomly sampled around 30 English MCQs and answered with focus in a blind test, achieving mean \(97.6\) accuracy7. This is significantly higher than any of the models evaluated, implying the task presents a particular challenge for models and there is room to improve. In comparison, Nangia and Bowman (2019) conservatively estimate human performance on XNLI to be \(92.8\) on the English questions (i.e. MNLI Williams et al. (2018)). Footnote 7: 95% CI for all 900 questions = \([93.1,99.5]\) When comparing model performance on Belebele with XNLI, we find very high correlation. In the translate-train-all setting, XLM-V, InfoXLM, and XLM-R all perform about 10 accuracy points lower on Belebele than the translate-train8 accuracies on XNLI reported in their respective papers Liang et al. (2023); Chi et al. (2021). But overall, across the 15 languages and all three models, we find a correlation in accuracy of \(r=0.85\). Footnote 8: In traditional translate-train, the model is finetuned on translated training inputs for each language _individually_. \begin{table} \begin{tabular}{c c|c|c c c c c} \hline \hline **Model** & **Size/Variant** & **Vocab size** & **AVG** & \(\mathbf{\%\geq 50}\) & \(\mathbf{\%\geq 70}\) & **eng\_Latin** & **non-Eng AVG** \\ \hline \multicolumn{8}{l}{_5-Shot In-Context Learning (examples in English)_} \\ \hline Llama 1 & 7B & 32K & 27.7 & 0.0\% & 0.0\% & 37.3 & 27.6 \\ Llama 1 & 13B & 32K & 30.4 & 0.8\% & 0.0\% & 53.3 & 30.2 \\ Llama 1 & 30B & 32K & 36.2 & 18.0\% & 0.8\% & 73.1 & 35.9 \\ Llama 1 & 65B & 32K & 40.9 & 25.4\% & 12.3\% & 82.5 & 40.5 \\ Llama 2 base & 70B & 32K & **48.0** & **38.5\%** & **26.2\%** & **90.9** & **47.7** \\ Falcon & 40B & 65K & 37.3 & 16.4\% & 1.6\% & 77.2 & 36.9 \\ \hline \multicolumn{8}{l}{_Zero-Shot for Instructed Models (English instructions)_} \\ \hline Llama-2-chat & 70B & 32K & 41.5 & 27.0\% & 2.5\% & 78.8 & 41.2 \\ GPT3.5-turbo & unk & 100K & **51.1** & **44.2\%** & **29.2\%** & **87.7** & **50.7** \\ \hline \multicolumn{8}{l}{_Full Finetuning in English_} \\ \hline XLM-R & large (550M) & 250K & 54.0 & 64.8\% & 15.6\% & 76.2 & 53.8 \\ XLM-V & large (1.2B) & 902K & 55.6 & **69.7\%** & 21.2\% & 76.2 & 54.9 \\ InfoXLM & large (550M) & 250K & **56.2** & 67.2\% & **28.7\%** & **79.3** & **56.0** \\ \hline \multicolumn{8}{l}{_Translate-Train-All_} \\ \hline XLM-R & large (550M) & 250K & 58.9 & 69.7\% & 36.1\% & 78.7 & 58.8 \\ XLM-V & large (1.2B) & 902K & **60.2** & **76.2\%** & 32.8\% & 77.8 & **60.1** \\ InfoXLM & large (550M) & 250K & 60.0 & 70.5\% & **36.9\%** & **81.2** & 59.8 \\ \hline \hline \end{tabular} \end{table} Table 2: Summary of results on Belebele across models and evaluation settings. % \(\geq\) 50/70 refers to the proportion of languages for which a given model performs above 50/70%. We additionally report Llama-2-chat zero-shot results leveraging translation in Table 3. ### Multilingual Generalization of MLMs and LLMs on Belebele Multilingual generalization of MLMs and LLMs is complex to understand and anticipate. Schematically, the performance of a language model in a given language 
is related to two key factors. (i) First, the amount of pretraining data in the target language. As predicted by the scaling laws Kaplan et al. (2020), performance in a language increases monotonically with the amount of tokens the model is pretrained on. (ii) Second, the cross-lingual transfer happening between languages in the pretraining data (potentially 100+ languages, e.g., with XLM-R) and the target language at inference time Conneau et al. (2020a,b). This transfer is impacted by a combination of typological similarities, token overlap, and script similarity between the pretraining languages and the target language Muller et al. (2021, 2023). In the case of LLMs and MLMs, these two factors are hard to disentangle due to the scale (up to \(\sim\)1T tokens) and the potential language leaks of large-scale pretraining corpora Kreutzer et al. (2022). However, our results on Belebele provide detailed evidence of both these factors impacting the multilingual generalization of the models. Impact of Pretraining Language Distribution. One of the key differences between the MLMs and LLMs evaluated is the pretraining data distribution. All the MLMs we evaluate have a balanced distribution of about 100 languages. In comparison, the Llama models and Falcon are pretrained mainly on English (see Section 4.1). This explains the large performance differences between MLMs and LLMs. For instance, XLM-R reaches 58.8 accuracy on average across all the non-English languages. Llama-2-chat (evaluated in the zero-shot setting) only reaches 41.2. In comparison, Llama-2-chat outperforms XLM-R in English. This difference between the MLMs and LLMs evaluated is illustrated in Fig. 3. However, despite this gap, Llama and Falcon checkpoints perform surprisingly well on a large number of languages. For instance, Llama-2-chat is above 35% accuracy (i.e. 10 points above the random baseline) for 59/122 languages and above 50% accuracy for 33 languages. This shows that English-centric pretrained LLMs are a promising starting point to build multilingual models. Machine Translation for Zero-Shot. Our translate-test evaluations show that using machine translation into English strongly outperforms Llama-2-chat (70B) performance in the original target language. Across 91 evaluated languages, only 2 are non-trivially better in-language (German and Italian), 21 languages are about the same, and translating to English shows better results for 68, none of which are considered high-resource. Compared to Llama-2-chat having zero-shot accuracy above 50% for 33 languages, it has 71 in translate-test (see Section A.3.4). In addition, we evaluate machine-translating the task instructions to the target language. Out of 89 languages evaluated, there are around 25 where the translated instructions were not well understood (i.e. accuracy less than random), correlating strongly with languages that had low scores to begin with. For the rest, the performance relative to receiving the instructions in English is quite split, preventing definitive conclusions. However, languages with the largest accuracy boost from in-language instructions are generally those on the higher end to begin with (e.g. Catalan and Portuguese). 
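As a sketch of the translate-test setting discussed above, the snippet below machine-translates a non-English sample into English before reusing the English zero-shot prompt; the NLLB checkpoint and FLoRes-style language codes are illustrative assumptions, not necessarily the translation system used in the paper.

```
from transformers import pipeline

# Translate French into English; the checkpoint and language codes are assumptions for illustration
translator = pipeline("translation",
                      model="facebook/nllb-200-distilled-600M",
                      src_lang="fra_Latn", tgt_lang="eng_Latn")

def to_english(text):
    return translator(text, max_length=512)[0]["translation_text"]

def translate_test_sample(sample):
    # Machine-translate passage, question, and answers, then answer with the usual English prompt
    return {
        "passage": to_english(sample["passage"]),
        "question": to_english(sample["question"]),
        "answers": [to_english(a) for a in sample["answers"]],
    }
```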
Impact of Sub-Word TokenizationWe reaffirm a correlation between increasing vocabulary size (and proper multilingual vocabulary construction methods such as clustering and capacity assignment) and performance on lower resource languages Liang et al. (2023). In particular, XLM-V has a vocabulary of 900k tokens that is built by de-emphasizing token sharing between languages with little lexical overlap and proper vocabulary capacity allocation for each individual language. XLM-V outperforms XLM-R and InfoXLM (250k vocabulary size) on low-resource languages even though they all have the same architecture and are trained on the same dataset (CC-100). GPT3.5-turbo (100k vocabulary size),9 Falcon (65k vocabulary size), and Llama 2 (32k vocabulary size) all fall off abruptly for medium- and low- resource languages. Larger vocabulary size may explain why Falcon 40B performs equivalent to Llama 1 30B despite having been pretrained on fewer non-English tokens. \begin{table} \begin{tabular}{l l l|c c c c} \hline \hline **Model** & **Variant** & **Eval Setting** & **AVG** & **\% \(\geq\) 50** & **\% \(\geq\) 70** & **eng. Latin** \\ \hline \multicolumn{6}{l}{_Translate-Test (English) on 91 non-English languages in Zero-Shot_} \\ \hline \hline Llama-2-chat & 70B & Translate-Test & **57.1** & **78.0\%** & 2.2\% & 78.8 \\ Llama-2-chat & 70B & In-Language & 44.1 & 35.2\% & 2.2\% & 78.8 \\ \hline \multicolumn{6}{l}{_Translated Instructions in 89 non-English languages Zero-Shot_} \\ \hline \hline Llama-2-chat & 70B & In-Language Translated Instructions & 38.7 & 36.0\% & **7.9\%** & 78.8 \\ Llama-2-chat & 70B & English Instructions & **44.9** & **37.1\%** & 3.4\% & 78.8 \\ \hline \hline \end{tabular} \end{table} Table 3: Results of Llama-2-chat in zero-shot in two machine translation-based evaluation settings. translate-test (passages, questions, answers machine-translated back to English) and target language evaluations with the English zero-shot translations machine-translated to the target language. The traditional setting on the same languages is provided for comparison. % \(\geq\) 50/70 refers to the proportion of languages for which a given model performs above 50/70% Scaling effect on Multilingual GeneralizationWe report in Fig. 4 the impact of model sizes on performance on the Belebele benchmark across six language families in addition to English. We find that scale is critical for Llama to perform reading comprehension. The 7B checkpoint performs slightly above chance in English and poorly for most languages--however, the performance increase significantly with the 13B and even more for 30B parameters. Surprisingly, we find that Llama performs non-trivially in Japanese and Greek (cf. Japonic and Hellenic language families in Fig. 4) while neither is reported to be in the pretraining corpus. However, unlike other language families such as Romance and Germanic families, the performance becomes non-trivial only with the 30B and 65B checkpoints. This suggests that generalizing to distant languages, in reference to English-centered pretraining, requires more parameters. Impact of ScriptComparing the Romanized versions with the original scripts for Hindi, Urdu, Bengali, Sinhala, Nepali, and Modern Standard Arabic, we find that all models except Falcon perform stronger in the native script rather than in the Latin script, as can be seen in Table 7. 
These scripts are allegedly not present in the pretraining data for Llama 2 and Falcon (Touvron et al., 2023; Penedo et al., 2023).10 For the Indo-Aryan languages (i.e. Bengali, Hindi, Nepali, Urdu), we hypothesized cross-lingual transfer would be higher for these languages in the Latin variant since the tokenization will be more suitable and there is opportunity for shared subwords (anchor points) (Conneau et al., 2020). However, this only seems to be the case for Falcon. In the case of Llama-2, the results suggest the model was pretrained on some samples with native script (perhaps due to code-switching sentences or poor language identification). Meanwhile, the "refined" Falcon pretraining dataset may have been less impacted by leaks, resulting in cross-lingual transfer to the Latin script eclipsing the limited grasp of the native script. Further analysis of the language distributions of the pretraining corpora of these models is needed to provide stronger evidence of these interpretations. Footnote 10: We note that thanks to byte fallback, all UTF-8 characters are supported by the Llama tokenizer. Figure 4: Impact of Models' scale (from 7B to 65B parameters of Llama 1) on the performance on Belebele for 6 language families and English. The number of languages in a given family is indicated as (#N). Llama 1 is evaluated in the 5-shot setting with examples sampled from the training data in English. Scores are averaged over 3 runs. ## 6 Discussion Our goal in introducing Belebele is to foster the development of LLMs and NLP systems beyond high-resource languages. However, modern LLMs require trillions of tokens in pretraining to achieve their powerful natural language understanding, fluency, and in-context learning abilities (Brown et al., 2020; Chowdhery et al., 2022; Touvron et al., 2023). Building LLMs for mid- to low-resource languages as capable as in English will therefore require scaling corpus sizes in more languages when feasible (Abadji et al., 2022) or designing competitive data-efficient training techniques such as cross-lingual transfer Artetxe et al. (2020); Pfeiffer et al. (2020); Muller et al. (2020, 2021). To enable this progress, we point to two critical research directions. First, (i) better language identification systems: popular language identification models are trained on a restricted number of languages and domains and only work at the sentence level Bojanowski et al. (2017), limiting their abilities to track languages in code-switched data and embedded text. Second, (ii) we encourage LLM developers to improve reporting on pretraining language distribution. This is necessary for the research community to understand the cross-lingual transfer capabilities of LLMs and to improve NLP system design for low-resource languages. ## 7 Conclusion A fundamental limitation to conducting sound evaluations of the capabilities of language models in low-, or even moderate-, resource languages is the availability of annotated benchmarks. This paper presents a massive dataset, Belebele, consisting of passages and multiple-choice questions evaluating reading comprehension in 122 languages. This benchmark enables critical evaluation of reading comprehension capabilities of LLMs in English and top languages. In addition, the dataset is the first of its kind in many medium- and low-resource languages, enabling unprecedented insight into the multilingual capabilities of language models. 
With all the evaluations and experiments this dataset enables, we hope future work will take a deeper dive into current language models. We hope that Belebele will eventually lead to discoveries into how current model architectures and training methods deal with multilingual data and subsequently how they can be improved. ## Limitations Even with our extensive quality assurance, we warn that "translationese" may cause accuracy on non-English languages to not be directly comparable to English. Often, the _perfect_ translation does not exist. In addition, Belebele was designed to measure the reading comprehension abilities of NLP systems across 122 languages. We specifically align as much as possible with translation choices made in the creation of FLoRes. Therefore, by design, the samples collected do not capture language- and culture-specific phenomena such as formality Ersoy et al. (2023), values Kovac et al. (2023), and aboutness Hershcovich et al. (2022). Following Belebele, building NLP systems inclusive of all cultures and languages will require the release of benchmarks that capture these phenomena. As briefly mentioned in Section 3.3, annotators discovered a few quality issues with FLoRes. Some of them are likely due to style/dialect differences between annotators, but many seem to not be. Such issues are rare, thanks to the extensive quality-assurance loops implemented by the NLLB team and the LSP. However, over the scale of 122 languages a fair number of issues have arisen, especially in lower-resource languages. Since updating the FLoRes dataset is not in scope for this project, we deliberated on each with the LSP to maximize both the appropriateness of the question/answer translations and cross-language consistency. ## 8 Acknowledgements The authors would like to acknowledge the annotators who created the questions and translated the dataset into 122 languages. Notably, we acknowledge the contributions to the annotation process from Adam Hakimi, Brian Bui, Cynthia Gao, Michal Kolestik, Michaela Fiolekova, Pavlina Lukesova, and Mirek Driml. In addition, the authors want to acknowledge Patrick Lewis, Parikshit Iyengar, and Waqar Malik for their support.
2305.19593
Exploring the Vulnerabilities of Machine Learning and Quantum Machine Learning to Adversarial Attacks using a Malware Dataset: A Comparative Analysis
The burgeoning fields of machine learning (ML) and quantum machine learning (QML) have shown remarkable potential in tackling complex problems across various domains. However, their susceptibility to adversarial attacks raises concerns when deploying these systems in security sensitive applications. In this study, we present a comparative analysis of the vulnerability of ML and QML models, specifically conventional neural networks (NN) and quantum neural networks (QNN), to adversarial attacks using a malware dataset. We utilize a software supply chain attack dataset known as ClaMP and develop two distinct models for QNN and NN, employing Pennylane for quantum implementations and TensorFlow and Keras for traditional implementations. Our methodology involves crafting adversarial samples by introducing random noise to a small portion of the dataset and evaluating the impact on the models performance using accuracy, precision, recall, and F1 score metrics. Based on our observations, both ML and QML models exhibit vulnerability to adversarial attacks. While the QNNs accuracy decreases more significantly compared to the NN after the attack, it demonstrates better performance in terms of precision and recall, indicating higher resilience in detecting true positives under adversarial conditions. We also find that adversarial samples crafted for one model type can impair the performance of the other, highlighting the need for robust defense mechanisms. Our study serves as a foundation for future research focused on enhancing the security and resilience of ML and QML models, particularly QNN, given its recent advancements. A more extensive range of experiments will be conducted to better understand the performance and robustness of both models in the face of adversarial attacks.
Mst Shapna Akter, Hossain Shahriar, Iysa Iqbal, MD Hossain, M. A. Karim, Victor Clincy, Razvan Voicu
2023-05-31T06:31:42Z
http://arxiv.org/abs/2305.19593v1
Exploring the Vulnerabilities of Machine Learning and Quantum Machine Learning to Adversarial Attacks using a Malware Dataset: A Comparative Analysis ###### Abstract The burgeoning fields of machine learning (ML) and quantum machine learning (QML) have shown remarkable potential in tackling complex problems across various domains. However, their susceptibility to adversarial attacks raises concerns when deploying these systems in security-sensitive applications. In this study, we present a comparative analysis of the vulnerability of ML and QML models, specifically conventional neural networks (NN) and quantum neural networks (QNN), to adversarial attacks using a malware dataset. We utilize a software supply chain attack dataset known as ClaMP and develop two distinct models for QNN and NN, employing Pennylane for quantum implementations and TensorFlow and Keras for traditional implementations. Our methodology involves crafting adversarial samples by introducing random noise to a small portion of the dataset and evaluating the impact on the models' performance using accuracy, precision, recall, and F1 score metrics. Based on our observations, both ML and QML models exhibit vulnerability to adversarial attacks. While the QNN's accuracy decreases more significantly compared to the NN after the attack, it demonstrates better performance in terms of precision and recall, indicating higher resilience in detecting true positives under adversarial conditions. We also find that adversarial samples crafted for one model type can impair the performance of the other, highlighting the need for robust defense mechanisms. Our study serves as a foundation for future research focused on enhancing the security and resilience of ML and QML models, particularly QNN, given its recent advancements. A more extensive range of experiments will be conducted to better understand the performance and robustness of both models in the face of adversarial attacks. Adversarial Attack, Quantum neural network (QNN), Neural Network (NN), ClaMP, TensorFlow, Pennylane ## I Introduction The growing prevalence of software supply chain security threats has driven researchers to explore innovative approaches to detect and predict vulnerabilities and suspicious behaviors [1]. Software Supply Chain (SSC) attacks occur when cyber threat actors penetrate a vendor's network and insert malicious code that compromises the software before distribution to customers. Such attacks can have severe consequences for software users across sectors by gaining control over the software's regular functionality [2]. Machine Learning (ML) has long been employed as a powerful tool to address these challenges, but the exponential growth of data worldwide necessitates alternative solutions for proactive prevention and early detection of security threats. Quantum Machine Learning (QML), which leverages quantum computing concepts and quantum random access memory (QRAM), has emerged as a promising solution to handle large-scale data processing [3]. These unique characteristics have led to the increasing adoption of quantum computing in various technological fields. However, the vulnerability of ML and QML models to adversarial attacks raises concerns when deploying these systems in security-sensitive applications. Adversarial attacks in machine learning (ML) and quantum machine learning (QML) are malicious attempts to exploit the vulnerabilities of ML and QML models by generating specially crafted input samples, known as adversarial examples. 
These examples are designed to be imperceptibly different from the original data but can cause the models to make incorrect predictions or classifications with high confidence [4]. Adversarial attacks can be classified based on their intended goals, such as targeted attacks, which aim to manipulate the model into assigning a specific incorrect label, and untargeted attacks, which aim to cause any misclassification. Adversarial attacks in QML leverage the unique properties of quantum computing, such as superposition and entanglement, to manipulate the decision-making processes of quantum machine learning models. While research on adversarial attacks in QML is still in its early stages, some studies have demonstrated the existence of adversarial examples in quantum settings and their potential to transfer between classical and quantum models. These attacks pose a significant challenge for the deployment of ML and QML models in security-sensitive applications, such as autonomous vehicles, facial recognition systems, and cybersecurity. Consequently, adversarial attacks undermine the reliability and security of systems powered by ML and QML in critical applications, such as healthcare, finance, and cybersecurity [5]. In recent years, adversarial attacks have led to the spread of misinformation, bypassing facial recognition systems, and even causing autonomous vehicles to misinterpret road signs [6]. As a result, there is a growing interest in developing robust defense mechanisms to mitigate the impact of adversarial attacks and ensure the reliability and security of ML and QML systems [7]. In this study, we conduct a comparative analysis of the susceptibility of ML and QML models, specifically conventional neural networks (NN) and quantum neural networks (QNN), to adversarial attacks using a malware dataset. To the best of our knowledge, this is one of the few studies focusing on the software supply chain vulnerabilities dataset using quantum machine learning. Our research utilizes Pennylane, a quantum computing platform that enables quantum differentiable programming and offers seamless integration with other QML tools, such as IBM Quantum, NumPy, and TensorFlow Quantum. The primary contributions of this research are as follows: [1] We adopt both quantum machine learning and conventional machine learning to conduct experiments on a software supply chain attack dataset. [2] We assess the performance of NN and QNN models under adversarial attack scenarios, comparing their vulnerability and robustness. The rest of the paper is organized as follows: Section II presents a brief overview of related studies on quantum machine learning and traditional machine learning. Section III explains the methodology adopted for our comparative research. Section IV describes the experimental setting and results, including dataset specification and processing. Section V discusses the findings of this paper, focusing on the comparative performance of NN and QNN models under adversarial attacks. Finally, Section VI concludes the paper ## II Background and Literature Review In recent years, machine learning (ML) and quantum machine learning (QML) have gained significant attention in various domains, including security and malware analysis [5, 8]. However, the security of ML and QML models themselves has become a critical concern due to their vulnerability to adversarial attacks [9, 10]. 
This literature review aims to provide an overview of the current state of research in adversarial machine learning and quantum machine learning attacks, focusing on their application to malware datasets and the software supply chain. We also highlight the existing gaps in the literature. Adversarial machine learning attacks have been extensively studied in the literature, with a primary focus on deep learning models. Szegedy et al. [11] demonstrated that deep neural networks can be easily fooled by adding imperceptible perturbations to input data. Goodfellow et al. [4] proposed the fast gradient sign method (FGSM) for generating adversarial examples, which has become a cornerstone in the field. More recently, researchers have examined the vulnerabilities of ML models used for malware analysis. Grosse et al. [12] explored the susceptibility of neural networks to adversarial examples in malware classification and demonstrated that even minor modifications to malware samples can lead to misclassification. Demontis et al. [13] proposed the concept of adversarial attacks on graph-based machine learning models for malware detection, exposing potential vulnerabilities in these models. Finlayson et al. [14] studied the susceptibility of medical machine learning models to adversarial attacks. They demonstrated that these models are vulnerable to attacks that can cause misclassification, and highlighted the potential risks of using such models in clinical settings. The authors also proposed several methods for defending against adversarial attacks in medical machine learning. Aloraini et al. [15] investigated the threat of adversarial attacks in the context of Internet of Things (IoT) devices. They analyzed the security risks posed by adversarial machine learning attacks from an insider's perspective, highlighting the potential harm that can be caused by such attacks in critical IoT applications. The authors proposed several strategies for detecting and preventing adversarial attacks in IoT systems. Alsmadi et al. [16] conducted a literature survey on adversarial machine learning in text processing. They reviewed the current state of the field and highlighted the main challenges and opportunities in this area. The authors also discussed several approaches for defending against adversarial attacks in text processing, including deep learning-based methods, rule-based methods, and hybrid methods. Mumcu et al. [17] investigated the susceptibility of video anomaly detection systems to adversarial attacks. They demonstrated that these systems are vulnerable to attacks that can cause false alarms or stealthy attacks that can go undetected. The authors proposed several defense mechanisms for improving the robustness of video anomaly detection systems against adversarial attacks, including adversarial training and defensive distillation. Quantum machine learning (QML) is an emerging field that leverages the computational power of quantum computers to solve complex machine learning problems [8]. While QML is still in its infancy, researchers have begun to investigate the potential vulnerabilities of QML models to adversarial attacks. West et al. [18] proposed a benchmarking framework for evaluating the adversarial robustness of quantum machine learning models at scale. The authors introduced a novel approach to generating adversarial examples for quantum machine learning models based on the qubit gradient sign method. 
They demonstrated the effectiveness of their benchmarking framework by evaluating the adversarial robustness of several quantum machine learning models on a variety of datasets. The authors also compared the performance of quantum machine learning models with classical machine learning models and discussed the potential applications of adversarially robust quantum machine learning models in various domains, such as finance, chemistry, and cryptography. The proposed benchmarking framework provides a valuable tool for evaluating the robustness of quantum machine learning models and can help to improve the security and reliability of quantum machine learning systems. Suryotrisongko et al. [19] investigated the adversarial robustness of a hybrid quantum-classical deep learning model for detecting domain generation algorithms (DGA) used in botnet attacks. The authors proposed a quantum-classical neural network architecture that combines a classical deep neural network with a quantum neural network, which is used to encode the input data as quantum states. They evaluated the adversarial robustness of their model by generating adversarial examples using several attack methods, including the fast gradient sign method and the Carlini-Wagner attack. The authors demonstrated that their hybrid model is more robust to adversarial attacks compared to the classical deep learning model, and they also proposed a training method based on adversarial training to improve the robustness of their model. The results suggest that hybrid quantum-classical deep learning models can provide a promising approach for detecting botnet attacks, especially in the presence of adversarial attacks. The proposed framework can be extended to other applications that require robust and secure machine learning models. Gong and Deng [20] studied the universal adversarial examples and perturbations for quantum classifiers, which are quantum machine learning models used for classification tasks. The authors introduced a method to generate universal adversarial perturbations that can be applied to multiple quantum classifiers, rather than just a single classifier. They showed that these universal perturbations can be used to construct universal adversarial examples that can fool multiple quantum classifiers with high success rates. The authors also proposed a defense mechanism based on regularization to mitigate the impact of adversarial attacks. They demonstrated the effectiveness of their approach using several quantum classification tasks, including the classification of handwritten digits and the classification of images from the MNIST dataset. The results suggest that universal adversarial examples and perturbations can pose a significant threat to the security and reliability of quantum classifiers, and that defense mechanisms based on regularization can help to improve their robustness. The proposed framework provides a valuable tool for evaluating the security of quantum machine learning models and can help to improve the design of robust and secure quantum classifiers. In the context of malware analysis, several studies have explored the use of ML and QML models to detect and classify malware. Recent works have applied deep learning techniques [21] and graph-based models [22] for malware detection. Meanwhile, QML models have been applied to various security-related tasks, such as intrusion detection and post-quantum cryptography [23]. 
Although adversarial machine learning attacks on malware analysis models have been studied, there is limited research on comparing the vulnerabilities of ML and QML models in this context. Furthermore, the application of adversarial attacks to QML models for malware analysis remains largely unexplored, leaving ample room for research in this area. Additionally, the software supply chain is another essential aspect to consider in the context of ML and QML vulnerabilities. While there have been some efforts to address software supply chain security using ML, the potential impact of adversarial attacks on these models remains unaddressed. Similarly, the exploration of QML-based solutions for securing the software supply chain is still in its early stages, and their potential vulnerabilities to adversarial attacks have yet to be investigated. ## III Methodology In this paper, we utilized Quantum Neural Network (QNN), a subfield of Quantum Machine Learning (QML), to analyze the ClaMP dataset. To ensure that our adversarial attacks are effective, we optimized each part of the dataset created from the ClaMP and introduced perturbed data during fine-tuning. We used Python and Scikit-Learn (sklearn) library functions such as shuffle, index-reset, and drop functions for data preprocessing. After applying shuffle functions, we organized the dataset in ascending order by resetting the index. We converted categorical values into numerical values and normalized all numerical values to maintain a similar scale, as quantum machine learning models require numerical values. The entire dataset comprising 5,210 rows was split into two portions: 80% for training and 20% for fine-tuning the model. For each step, we divided the data into three parts: 60% for training, 20% for validation, and 20% for testing. We added perturbed data to the 20% dataset for fine-tuning the model. The quantum machine learning model was applied to each dataset's separated portions. The features were encoded into quantum states before feeding into the QML model. The QNN framework, which originates from neurocomputing theory, combines machine learning, quantum computing, and artificial neural network concepts. It can be applied for processing neural computing using vast levels of datasets to obtain the expected result. Input data is encoded into a suitable qubit state with a proper number of qubits before being processed through QNN. The qubit state is then modified for a specific number of layers using parameterized rotation and entangling gates, with the predicted value of a Hamilton operator guiding the modified qubit state. The results from the Pauli gates are decoded and translated into applicable output data. A variational quantum circuits-based neural network plays various roles in QNN. The Adam optimizer updates the parameters based on various criteria such as the size of complexity-theoretic measurements, depth, accuracy, and definite features. The number of steps is necessary for solving the issue of in-depth measurement. Precision refers to the setup required to address a variety of challenges. A quantum neural network is composed of input, output, and L hidden layers. The L hidden layer consists of a quantum circuit of the quantum perceptron, which acts on an initial state of the input qubits and produces a mixed state for the output qubits. 
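To make the circuit description above concrete, the following is a minimal PennyLane sketch of a variational quantum circuit of this kind (angle-encoded inputs, layers of parameterized rotations with entangling gates, and a Pauli-Z expectation readout); the qubit count, layer template, and readout choice are illustrative assumptions rather than the exact circuit used in the experiments.

```
import pennylane as qml
from pennylane import numpy as np

n_qubits = 4   # illustrative; the experiments later encode 16 principal components
n_layers = 2
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def qnn_circuit(weights, features):
    # Encode each normalized classical feature as a rotation angle on one qubit
    qml.AngleEmbedding(features, wires=range(n_qubits))
    # Layers of parameterized rotations followed by entangling gates
    qml.StronglyEntanglingLayers(weights, wires=range(n_qubits))
    # Decode the result as the expectation value of Pauli-Z on the first qubit
    return qml.expval(qml.PauliZ(0))

weight_shape = qml.StronglyEntanglingLayers.shape(n_layers=n_layers, n_wires=n_qubits)
weights = np.random.random(size=weight_shape)
output = qnn_circuit(weights, np.array([0.1, 0.5, 0.2, 0.9]))
```

In a hybrid training loop, an optimizer such as Adam would update the rotation parameters to minimize a classical loss computed from this expectation value.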
QNN can perform quantum computation for both the two-input and one-input qubit perceptron, which goes through the quantum-circuit construction with a quantum perceptron on 4 level qubits. The most comprehensive quantum perceptron implements any quantum channel on the input qubits. The precision of p(n) is represented by s (n), d(n), where size is denoted by s(n) and depth is denoted by d(n). Size refers to the number of qubits in the circuit, while depth refers to the longest sequence of gates from input to output. Size and depth are created from gates D and U of precision p(n). A reversible U gate is usually followed by the D gate to eliminate the localization problem. The accuracy of the circuits is denoted by Os(n) for evaluating the robustness of models against adversarial attacks. ## IV Experiment and Results In this section, we provide details of our adversarial attack experiments and results. We start by specifying the dataset used and the data processing techniques employed. In order to design effective adversarial attacks, we define the experimental settings, where we use accuracy [24], precision [25], recall [26], and F-score metrics to evaluate the robustness of models to attacks. Finally, we present the experimental results, which highlight the effectiveness of our adversarial attacks in compromising the performance of the targeted models. ### _Dataset Specification_ We utilized a Quantum Neural Network (QNN) to classify malware using the ClaMP dataset, which consists of two versions: ClaMP_raw and ClaMP_Integrated. ClaMP_raw was generated by aggregating instances from VirusShare, while ClaMP_Integrated contains both malware and benign instances gathered from Windows files. To extract features from the samples, we focused on the portable executable headers since they contain essential information required for the operating system to execute executable files. We collected various raw features from the PE headers, such as File Header (7 features), DOS header (19 features), and Optional header (29 features), using a rule-based method for both the malware and benign samples. Afterwards, significant features were obtained by utilizing the raw features such as entropy, compilation time, and section time. Furthermore, we extracted additional information about the PE file by expanding a collection of raw features from the file header. Subsequently, we chose three categories of features - raw, derived, and expanded - from the ClaMP_Integrated dataset. The dataset consisted of a total of 68 features, including 28 raw, 26 expanded, and 14 derived features [2]. ### _Data Preprocessing_ We applied NN and QNN on ClaMP datasets to inspect the experimented method's comparative performance. We first considered the entire dataset. We separated 80 percent of the data from the 5210 instances to train the model in such a way that the software could supply it with the appropriate instance. It should be noted that within this 80 percent data, 60 percent was separated for training, 20 percent for validation, and 20 percent for testing. The perturbed data was generated by adding random noise to the clean data, altering its features. This mixture of clean and perturbed data was expected to confuse the model and potentially degrade its performance. The algorithm for adding perturbed data to the dataset is presented in Algorithm 1. The function _add_perturbation_ takes two input parameters: the clean data and a noise scaling factor, epsilon. 
The noise is generated using a random number generator that creates an array with the same shape as the input data. This noise is then scaled by the epsilon factor, which determines the magnitude of the perturbations. Finally, the scaled noise is added to the original data, resulting in the perturbed data.
```
import numpy as np

def add_perturbation(data, epsilon):
    # Draw Gaussian noise with the same shape as the input data
    noise = np.random.randn(*data.shape)
    # Scale the noise by epsilon to control the perturbation magnitude
    scaled_noise = epsilon * noise
    # Add the scaled noise to the clean data to obtain the perturbed data
    perturbed_data = data + scaled_noise
    return perturbed_data
```
**Algorithm 1** Function to add random perturbations to input data. ### _Experimental Settings_ The current quantum simulator is unable to handle large dimensions as input, and our dataset has 108 dimensions, making it unsuitable for the simulator. As a result, we employed a dimension reduction technique called Principal Component Analysis (PCA) on this dataset. We applied PCA to the 108-feature vector of the ClaMP dataset to decrease the dimensionality. Due to the existing simulator's qubit number limitations, we chose the top 16 principal components. We first applied the classical neural network to the reduced dataset directly. Following that, we encoded the classical data as quantum circuits, which involves converting all feature values into qubit values for quantum computer processing. We show the circuit produced for an arbitrary sample. These circuits were translated into TensorFlow Quantum (TFQ) format. Subsequently, we designed a model circuit layer for the quantum neural network (QNN), consisting of a two-layer model with a data circuit size that matched the input. This model circuit was then wrapped in a TFQ-Keras model. We transformed the quantum data, fed it to the model, and employed a parametrized quantum layer to train the model circuit on the quantum data. During the training phase, we used the hinge loss objective. We converted the labels to a range of -1 to 1. In the end, we trained the QNN for 100 epochs. ### _Experimental Results_ Our comparative analysis between the classical neural network (NN) model and the quantum neural network (QNN) model is illustrated in Table 1 and Table 2. We compared the performance of a classical neural network (NN) and a quantum neural network (QNN) using several metrics, including accuracy, precision, recall, and F1-score. Before the adversarial attack, we obtained for the NN: Accuracy - 0.54, Precision - 1.00, Recall - 0.57, F1-score - 0.65, and for the QNN: Accuracy - 0.57, Precision - 0.92, Recall - 0.57, F1-score - 0.65. The QNN model exhibited slightly higher accuracy (0.57) compared to the NN model (0.54). This indicates that the QNN model provided marginally better overall performance in terms of correctly predicting the class labels. Both models had similar recall and F1-score values. After the adversarial attack, we obtained for the NN: Accuracy - 0.52, Precision - 0.27, Recall - 0.52, F1-score - 0.36, and for the QNN: Accuracy - 0.45, Precision - 0.47, Recall - 0.90, F1-score - 0.62. The performance of both models degraded, but the impact on each model was different. The NN model experienced a more significant drop in precision (from 1.00 to 0.27), while its accuracy and recall decreased only slightly. The F1-score for the NN model dropped to 0.36, indicating a substantial decrease in overall performance. In contrast, the QNN model experienced a reduction in accuracy (from 0.57 to 0.45) after the adversarial attack but demonstrated a remarkable increase in recall (from 0.57 to 0.90). 
This suggests that the QNN model was better at identifying true positives in the presence of adversarial data. The precision of the QNN model decreased from 0.92 to 0.47, and its F1-score dropped slightly to 0.62. The results indicate that the QNN model displayed greater resilience to adversarial attacks, maintaining a higher F1-score compared to the NN model. The QNN model's ability to achieve a higher recall value in the presence of adversarial data demonstrates its potential for robust performance in real-world applications where noisy or manipulated data might be present. Fig. 1: Demonstrates the quantum neural network with the input parameters and linear entanglement structure. TABLE-1: Results derived before adversarial attack. We evaluated the performance of the models using plots of the confusion matrix (Figures 2 and 3), ROC curve (Figures 4 and 5), and precision-recall curve (Figures 6 and 7). The confusion matrix is a tabular representation that illustrates the distribution of predicted and true class labels, helping to identify the model's strengths and weaknesses in classification tasks. The rows in the confusion matrix represent the actual (true) class labels, while the columns represent the predicted class labels. The main diagonal of the matrix contains the counts of correctly classified instances, also known as true positives (TP) for each class. The off-diagonal elements indicate the misclassifications or errors made by the model [27]. From the confusion matrix in Figure 2, we can observe the following insights: In the clean data scenario, the model performed well with no false positives and correctly classified 21 instances as class 0 and 4 instances as class 1. However, in the perturbed data scenario, the model still had no false positives, but it had a higher false negative rate, misclassifying 3 instances of class 1 as class 0, while correctly classifying only 1 instance of class 0. These results suggest that the adversarial attack caused a significant shift in the model's decision boundary, making it more susceptible to misclassifying some instances. From the confusion matrix in Figure 3, we can observe the following insights: The first confusion matrix represents the distribution of predicted and true class labels for a binary classification problem using clean data, prior to any adversarial attack. The matrix shows that there were 160 instances of class 1 and 210 instances of class 2. The model correctly classified all instances of class 0, resulting in 0 false positives. However, the model misclassified all 210 instances of class 1 as class 2, resulting in 210 false negatives. The second confusion matrix represents the distribution of predicted and true class labels for a binary classification problem using perturbed data after the adversarial attack. The matrix shows that there were 55 instances of class 1 and 39 instances of class 2. The model correctly classified all instances of class 0, resulting in 0 false positives. However, the model misclassified 39 instances of class 1 as class 2, resulting in 39 false negatives. The comparison of the two confusion matrices shows that the adversarial attack had a significant impact on the model's performance. In the clean data scenario, the model was unable to correctly classify any instances of class 1, while in the perturbed data scenario, the model was able to correctly classify some instances of class 1 but still had a high false negative rate. 
The ROC (Receiver Operating Characteristic) curve is a graphical representation that measures the performance of a binary classifier as the discrimination threshold varies. It plots the true positive rate (TPR) against the false positive rate (FPR) at various threshold settings, where the TPR is the proportion of positive instances that are correctly classified and the FPR is the proportion of negative instances that are incorrectly classified as positive. The ROC curve is helpful for measuring the performance of a binary classifier because it visualises the trade-off between the sensitivity (TPR) and specificity (1 - FPR) of the classifier at different threshold settings. A perfect classifier would have a TPR of 1 and an FPR of 0, corresponding to a point in the top left corner of the ROC plot. In contrast, a random classifier would produce a diagonal line from the bottom left to the top right, with an area under the curve (AUC) of 0.5, indicating that the classifier is no better than random guessing. A higher AUC indicates better performance, where an AUC of 1 represents a perfect classifier and an AUC of 0.5 a random one. The ROC curve can also help to determine the optimal threshold setting for the classifier, depending on the desired balance between the TPR and FPR [28].

From the ROC curve in Figure 4, we can observe the following insights. The first ROC curve represents the performance of the QNN model on clean data before any adversarial attack. The AUC value of 0.61 indicates that the model has a moderate ability to separate the positive and negative instances. The curve starts at the bottom left corner (0,0), indicating that the model correctly classified all negative instances but misclassified some positive instances as negative. As the threshold increases, the TPR increases at a faster rate than the FPR, resulting in an upward curve; however, the curve is not very steep, indicating that the model's performance is not very sensitive to changes in the threshold. The second ROC curve represents the performance of the QNN model on perturbed data after the adversarial attack. The AUC value of 0.46 indicates that the model's performance has degraded significantly after the attack. The curve again starts at the bottom left corner (0,0), indicating that the model correctly classified all negative instances but misclassified a significant number of positive instances as negative. As the threshold increases, the TPR increases at a much slower rate than the FPR, resulting in a nearly linear curve with only a slight upward increment towards the top right corner (1,1): the model is able to correctly classify some positive instances, but it is not very sensitive to changes in the threshold. The comparison of the two ROC curves highlights the impact of the adversarial attack on the QNN model: the AUC value decreased significantly from 0.61 to 0.46, indicating that the model's ability to correctly classify positive and negative instances has deteriorated. The second curve also shows a higher FPR for the same TPR, indicating that the model is more prone to false positives and less effective in detecting positive instances. These results suggest that the QNN model needs further improvement in order to be more robust against adversarial attacks, and additional evaluation metrics such as precision, recall, and F1-score should be considered to provide a more comprehensive evaluation of the model's performance.

Fig. 4: Comparison of ROC Curve between Clean Data and Perturbed Data of QNN Model.
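For reference, ROC curves and AUC values such as those reported for Figures 4 and 5 can be computed as in the following sketch; `y_test`, `scores_clean`, and `scores_perturbed` are placeholder names for the true labels and the continuous model outputs.

```
import matplotlib.pyplot as plt
from sklearn.metrics import roc_curve, auc

def plot_roc(y_true, y_score, label):
    fpr, tpr, _ = roc_curve(y_true, y_score)  # TPR and FPR at every threshold
    roc_auc = auc(fpr, tpr)                   # area under the ROC curve
    plt.plot(fpr, tpr, label=f"{label} (AUC = {roc_auc:.2f})")

# plot_roc(y_test, scores_clean, "Clean data")
# plot_roc(y_test, scores_perturbed, "Perturbed data")
# plt.plot([0, 1], [0, 1], linestyle="--")    # random-guessing baseline
# plt.xlabel("False positive rate"); plt.ylabel("True positive rate")
# plt.legend(); plt.show()
```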
From the ROC curve in Figure 5, we can observe the following insights. The two ROC curves represent the performance of the NN model on clean and perturbed data, before and after the adversarial attack, respectively. Both curves have an AUC value of 0.50, which indicates that the model's performance is equivalent to random guessing. The curves are nearly linear, starting at the bottom left corner (0,0) and ending at the top right corner (1,1), meaning that the model has an equal probability of correctly classifying positive and negative instances and is not effective in distinguishing between them. These results suggest that the NN model is not suitable for the binary classification task, and that the adversarial attack did not significantly impact the model's performance.

Fig. 5: Comparison of ROC Curve between Clean Data and Perturbed Data of NN Model.

The precision-recall curve is a graphical representation of the performance of a binary classifier at different thresholds. It plots the precision against the recall at various threshold settings, where precision is the proportion of true positive instances out of all instances predicted as positive, and recall is the proportion of positive instances that are correctly classified. The precision-recall curve is important for performance measurement in ML models because it provides a more informative evaluation of a binary classifier than metrics such as accuracy or F1-score alone, especially on imbalanced datasets where the number of positive instances is significantly smaller than the number of negative instances. The precision-recall curve can also help to determine the optimal threshold setting for the classifier, depending on the desired trade-off between precision and recall: a high precision means that the classifier correctly identifies most positive instances with few false positives, while a high recall means that the classifier correctly identifies most positive instances out of all positive instances [29].
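Analogously, precision-recall curves and their AUC values, as reported for Figures 6 and 7, can be obtained with the sketch below; the variable names are again placeholders.

```
import matplotlib.pyplot as plt
from sklearn.metrics import precision_recall_curve, auc

def plot_pr(y_true, y_score, label):
    precision, recall, _ = precision_recall_curve(y_true, y_score)
    pr_auc = auc(recall, precision)  # area under the precision-recall curve
    plt.plot(recall, precision, label=f"{label} (AUC = {pr_auc:.2f})")

# plot_pr(y_test, scores_clean, "Clean data")
# plot_pr(y_test, scores_perturbed, "Perturbed data")
# plt.xlabel("Recall"); plt.ylabel("Precision"); plt.legend(); plt.show()
```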
From the precision-recall curve in Figure 6, we can observe the following insights. The two precision-recall curves represent the performance of the QNN model on clean and perturbed data, before and after the adversarial attack, respectively. The first curve, for clean data, has an AUC value of 0.79, which indicates that the model's performance in correctly identifying positive instances is relatively good. The curve starts at the top left corner (1.0, 0.0), indicating that the model correctly identified all positive instances, but it had some false positives. As the recall decreases, the precision decreases gradually, resulting in a curve that is mostly flat but with a slight decline towards the bottom right corner (0.0, 1.0). The second curve, for perturbed data after the adversarial attack, has an AUC value of 0.75, which indicates that the model's performance has slightly degraded. The curve starts at the top left corner (1.0, 0.85), indicating that the model correctly identified most positive instances, but it had some false positives. As the precision decreases, the recall increases, resulting in a curve that rises gradually towards the bottom right corner (1.0, 0.6). These results suggest that the adversarial attack had some impact on the QNN model's performance, but the model is still able to effectively identify positive instances, albeit with a higher false-positive rate.

Fig. 6: Comparison of Precision Recall Curve between Clean Data and Perturbed Data of QNN Model.

From the precision-recall curve in Figure 7, we can observe the following insights. The two precision-recall curves represent the performance of the NN model on clean and perturbed data, before and after the adversarial attack, respectively. The first curve, for clean data, has an AUC value of 0.78, which indicates that the model's performance in correctly identifying positive instances is relatively good. The curve starts at the top left corner (1.0, 0.0), indicating that the model correctly identified all positive instances, but it had some false positives. As the precision decreases, the recall decreases gradually, resulting in a curve that is mostly linear but with a slight decline towards the bottom right corner (0.0, 1.0). The second curve, for perturbed data after the adversarial attack, has an AUC value of 0.71, which indicates that the model's performance has slightly degraded. The curve starts at the top left corner (1.0, 0.0), indicating that the model correctly identified all positive instances, but it had some false positives. As the precision decreases, the recall increases, resulting in a curve that decreases linearly towards the bottom right corner (1.0, 0.6). These results suggest that the adversarial attack had some impact on the NN model's performance, but the model is still able to effectively identify positive instances, albeit with a higher false-positive rate.

## V Discussion

In this study, we have conducted a comparative analysis of the performance of classical neural network (NN) and quantum neural network (QNN) models for a binary classification task. Our goal was to evaluate their resilience against adversarial attacks and to understand their potential for real-world applications, where the presence of noisy or manipulated data is likely. We compared the performance of both models using various metrics, including accuracy, precision, recall, and F1-score, on clean and perturbed datasets. Our results indicate that the QNN model exhibited slightly higher accuracy in the clean data scenario compared to the NN model, with similar recall and F1-score values. This suggests that the QNN model provided marginally better overall performance in terms of correctly predicting the class labels. However, after the adversarial attack, the performance of both models degraded, and the impact on each model was different. The NN model experienced a more significant drop in precision, while the QNN model demonstrated a remarkable increase in recall. This indicates that the QNN model was better at identifying true positives in the presence of adversarial data, which demonstrates its potential for robust performance in real-world applications. The comparison of confusion matrices, ROC curves, and precision-recall curves for both models before and after the adversarial attack further supports our findings.
The confusion matrices reveal that the QNN model maintained a higher true positive rate and a lower false negative rate than the NN model after the adversarial attack, indicating its greater resilience. The ROC curves show that the QNN model's ability to distinguish between positive and negative instances deteriorated after the attack, but it still outperformed the NN model. The precision-recall curves indicate that the QNN model's ability to identify positive instances effectively was only slightly degraded by the attack.

## VI Conclusion

This paper presents a novel comparative analysis of the vulnerability of machine learning (ML) and quantum machine learning (QML) models, specifically conventional neural networks (NN) and quantum neural networks (QNN), to adversarial attacks using a malware dataset from the software supply chain domain. The study is among the first to focus on software supply chain vulnerabilities using quantum machine learning, and it employs Pennylane, a cutting-edge quantum computing platform that allows seamless integration with various QML tools. Our contributions include the adoption of both quantum and conventional machine learning approaches to conduct experiments on a software supply chain attack dataset, and an assessment of the performance of NN and QNN models under adversarial attack scenarios, enabling a comparison of their vulnerability and robustness. The findings reveal that both ML and QML models are susceptible to adversarial attacks, but QNNs demonstrate higher resilience in certain aspects. The outcomes of this study contribute to a better understanding of the challenges and opportunities in deploying ML and QML models in security-sensitive domains, and they provide a foundation for further research aimed at enhancing the resilience of these models to adversarial attacks. In future work, we will conduct a more extensive range of experiments to better understand the performance and robustness of both conventional and quantum neural networks under various adversarial conditions.

## Acknowledgement

This work is supported by the National Science Foundation under NSF Awards #2209638 and #2100115. Any opinions, findings, and recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.
2309.10104
Strong greedoid structure of $r$-removed $P$-orderings
Inspired by the notion of \emph{$r$-removed $P$-orderings} introduced in the setting of Dedekind domains by Bhargava \cite{Bha09-1} we study its generalization in the framework of arbitrary (generalised) ultrametric spaces. We show that sets of maximal "$r$-removed perimeter" can be constructed by a greedy algorithm and form a strong greedoid. This gives a simplified proof of several theorems in \cite{Bha09-1} and also generalises the results of \cite{GP21} which considered the case $r=0$ corresponding, in turn, to simple $P$-orderings of \cite{Bha97}.
Dmitrii Krachun, Rozalina Mirgalimova
2023-09-18T19:28:44Z
http://arxiv.org/abs/2309.10104v1
# Strong greedoid structure of \(r\)-removed \(P\)-orderings

###### Abstract

Inspired by the notion of \(r\)_-removed \(P\)-orderings_ introduced in the setting of Dedekind domains by Bhargava [3], we study its generalization in the framework of arbitrary (generalised) ultrametric spaces. We show that sets of maximal "\(r\)-removed perimeter" can be constructed by a greedy algorithm and form a strong greedoid. This gives a simplified proof of several theorems in [3] and also generalises the results of [5], which considered the case \(r=0\) corresponding, in turn, to simple \(P\)-orderings of [2].

## 1 Introduction

Motivated by questions in polynomial function theory, Bhargava [2] introduced the notion of \(P\)_-orderings_ for a subset \(X\) of a Dedekind domain \(D\). The construction is as follows. Given a prime ideal \(P\subset D\), let \(a_{0}\) be an arbitrary element of \(X\) and for \(k=1,2,\dots\) choose \(a_{k}\in X\) to minimize \[\nu_{P}\left((a_{k}-a_{0})(a_{k}-a_{1})\dots(a_{k}-a_{k-1})\right),\] where \(\nu_{P}\) denotes the \(P\)-adic valuation on \(D\). One of the results of [2] is the surprising fact that, although the choice of each \(a_{k}\) is typically non-unique, the sequence of the resulting valuations does not depend on the specific choice of \(\{a_{i}\}\) but only on \(X\) and \(P\).

Later, to study bases of the ring of polynomials with integer-valued divided differences, Bhargava [3] generalised this construction to \(r\)_-removed \(P\)-orderings_. For an \(r\)-removed \(P\)-ordering one again chooses a sequence \(\{a_{i}\}\) of elements from \(X\), but now the first \(r+1\) elements \(a_{0},\dots,a_{r}\) are chosen arbitrarily and then each new element minimizes \[\min_{\begin{subarray}{c}A\subset\{a_{0},\dots,a_{k-1}\}\\ |A|=k-r\end{subarray}}\sum_{a\in A}\nu_{P}(a_{k}-a).\] Again, one of the results of [3] is that the resulting sequence of exponents does not depend on the choice of \(\{a_{i}\}\).

Recently, Grinberg and Petrov [5] generalised the notion of \(P\)-orderings to the context of _ultra triples_, which is a certain extension of ultrametric spaces, obtaining new proofs of several results of [2] and showing that all (prefixes of) \(P\)-orderings form a strong greedoid. A natural question, which has been asked by Bhargava [1], is then whether the same holds for \(r\)-removed \(P\)-orderings. In this paper we consider \(r\)-removed \(P\)-orderings of a finite subset \(C\subset E\) in the general framework of ultra triples and show that the answer is affirmative.

## 2 Basic definitions and constructions

We largely follow the notation used in [5], which we now briefly recall. Throughout the paper, we consider a set \(E\) as our ground set, and refer to the elements of \(E\) as points. For a non-negative integer \(m\), an \(m\)-set means a subset \(A\) of \(E\) with \(|A|=m\), and an \(m\)-permutation means an ordered set \(A=(a_{1},\ldots,a_{m})\) formed by distinct elements of \(E\). Analogously, if \(B\subseteq E\) is a subset and \(m\) is a non-negative integer, an \(m\)-subset of \(B\) means an \(m\)-element subset of \(B\) and an \(m\)-permutation of \(B\) means an ordered set \(A\) formed by \(m\) distinct elements of \(B\). The following definition already appeared in [5].
**Definition 1**.: _An ultra triple is a triple \((E,w,d)\), where \(E\) is a set, \(w:R\rightarrow\mathbb{R}\) is an arbitrary weight function, and \(d:\{(e,f)\in E\times E\ |\ e\neq f\}\rightarrow\mathbb{R}^{1}\) is a distance function satisfying_ * \(d(a,b)=d(b,a)\) _for any two distinct_ \(a,b\in E\)_._ * \(d(a,b)\leq\max\{d(a,c),\,d(b,c)\}\) _for any three distinct_ \(a,b,c\in E\)_._ The inequality above is commonly known as the ultrametric triangle inequality; but unlike the distance function of an ultrametric space, we allow \(d\) to take negative values. The following are formal definitions of the objects already mentioned in the introduction, namely, \(r\)-removed distance, \(r\)-removed perimeter, and an \(r\)-removed \(m\)-permutation. **Definition 2**.: _Let \((E,w,d)\) be an ultra triple, \(C\subseteq E\) be a finite non-empty subset, and \(v\) be any point in \(E\setminus C\). We define \(\operatorname{dist}_{r}(C,v)\) to be the maximum among all possible sums of distances from \(v\) to some \(|C|-r\) points of the set \(C\). If \(|C|\leq r\), we set \(\operatorname{dist}_{r}(C,v):=0\)._ **Definition 3**.: _Let \((E,w,d)\) be an ultra triple. For a permutation \((a_{1},a_{2},\ldots,a_{k})\) of a finite subset \(A\subseteq E\), we define its \(r\)-removed perimeter by_ \[\operatorname{PER}_{r}((a_{1},a_{2},\ldots,a_{k})):=\sum_{a\in A}w(a)+\sum_{i =1}^{k}\operatorname{dist}_{r}(\{a_{1},a_{2},\ldots,a_{i-1}\},a_{i}).\] **Definition 4**.: _For an ultra triple \((E,w,d)\) let \(C\subseteq E\) be a finite set, and \(m\) be a non-negative integer. A greedy \(r\)-removed \(m\)-permutation of \(C\) is a list \((c_{1},c_{2},\ldots,c_{m})\) of \(m\) distinct elements of \(C\) such that for each \(i\in\{1,\ldots,m\}\) and each \(x\in C\setminus\{c_{1},c_{2},\ldots,c_{i-1}\}\), we have_ \[\operatorname{PER}_{r}((c_{1},c_{2},\ldots,c_{i}))\geqslant\operatorname{PER} _{r}((c_{1},c_{2},\ldots,c_{i-1},x)). \tag{1}\] We now define a useful construction that we are going to use in the proofs, it earlier implicitly appeared in [5]. We first recall the following definition from [5]. **Definition 5**.: _Let \((E,w,d)\) be an ultra triple, \(A\subseteq E\) be a finite non-empty subset and \(c\in E\) be any point. We define a subset \(\operatorname{proj}_{A}(c)\) of \(A\) as follows:_ * _If_ \(c\in A\)_, then_ \(\operatorname{proj}_{A}(c):=\{c\}\)_;_ * _If_ \(c\notin A\)_, then_ \(\operatorname{proj}_{A}(c)\) _is the set of all_ \(a\in A\) _that minimize the distance_ \(d(c,a)\) Now we extend it in the following way **Definition 6**.: _Let \((E,w,d)\) be an ultra triple, \(C=\{c_{1},c_{2},\ldots,c_{k}\}\subseteq E\) be a finite ordered set and \(A\) be any \(n\)-subset of \(E\), where \(n\geq k\). We define \(k\)-permutation \((v_{1},v_{2},\ldots,v_{k})\) of \(A\) recursively as follows: \(v_{i}\) is defined to be a projection of \(c_{i}\) onto \(A\setminus\{v_{1},v_{2},\ldots,v_{i-1}\}\) for each \(i=1,2,\ldots,k\). We denote \((v_{1},v_{2},\ldots,v_{k})\) by \(\operatorname{proj}(C\to A)\). These projections \(v_{i}\) may be non-unique, in which case we take arbitrary element of the projection set._ There are three important observations about these constructions that we now make. The first proposition already appeared in [5, Lemma 2.13] and we give its short proof for completeness. **Proposition 1**.: _Let \((E,w,d)\) be an ultra triple and \(A\subseteq E\) be a non-empty finite set. 
Then for a point \(c\in E\), its projection \(b\in\operatorname{proj}_{A}(c)\) and any \(x\in A\setminus\{b\}\) we have \(d(b,x)\leq d(c,x)\)._ Proof.: If \(c\in A\) then \(b=c\) and we trivially have an equality. Otherwise, since \(x\in A\), by the definition of the projection we have \(d(c,x)\geq d(c,b)\) and so by the ultrametric triangle inequality we have \[d(b,x)\leq\max\left\{d(c,b),d(c,x)\right\}=d(c,x).\] **Proposition 2**.: _Let \((E,w,d)\) be an ultra triple, \(C=(c_{1},c_{2},\ldots,c_{k})\subseteq E\) be a finite ordered set and \(A\) be a \(n\)-subset of \(E\), with \(n\geq k\). Denote \(\operatorname{proj}(C\to A)\) by \((v_{1},v_{2},\ldots,v_{k})\). Then for each \(j\in\{1,2,\ldots,k\}\) one has_ \[(A\setminus\{v_{1},\ldots,v_{j}\})\cap\{c_{1},c_{2},\ldots,c_{j}\}=\varnothing.\] Proof.: Arguing by contradiction we assume for some \(i\leq j\leq k\) that \(c_{i}\in A\setminus\{v_{1},\ldots,v_{j}\}\). This implies that \(c_{i}\in A\setminus\{v_{1},v_{2},\ldots,v_{i-1}\}\). By definition this means that \(v_{i}:=\operatorname{proj}_{A\setminus\{v_{1},v_{2},\ldots,v_{i-1}\}}(c_{i})= c_{i}\). Hence, \(v_{i}=c_{i}\in A\setminus\{v_{1},\ldots,v_{j}\}\) which is impossible. **Proposition 3**.: _Let \((E,w,d)\) be an ultra triple, \(C=(c_{1},c_{2},\ldots,c_{k})\subseteq E\) be a finite ordered set and \(A\) be a \(n\)-subset of \(E\), with \(n>k\). Then for each \(v\in A\setminus\operatorname{proj}(C\to A)\)_ \[\operatorname{dist}_{r}(\operatorname{proj}(C\to A),v)\leq\operatorname{dist}_ {r}(C,v).\] Proof.: Denote \(\operatorname{proj}(C\to A)\) by \((v_{1},v_{2},\ldots,v_{k})\). The statement of the proposition would follow from the inequality \(d(v_{i},v)\leq d(c_{i},v)\) for each \(i\in\{1,2,\ldots,k\}\). But since \(v\in A\setminus\{v_{1},v_{2},\ldots,v_{i-1}\}\) this inequality is given by Proposition 1 applied to the set \(A\setminus\{v_{1},v_{2},\ldots,v_{i-1}\}\) and points \(c_{i}\), \(v_{i}=\operatorname{proj}_{A\setminus\{v_{1},v_{2},\ldots,v_{i-1}\}}(c_{i})\) and \(v\). ## 3 Perimeter and greedy \(r\)-removed \(m\)-permutations We first prove that any two permutations of a given set have the same \(r\)-removed perimeter. **Lemma 1**.: _Any two permutations of a finite set \(A\subseteq E\) have the same \(r\)-removed perimeter._ Proof.: It suffices to prove the statement for pairs of permutations which differ by one transposition. The general case is reduced to it by consecutive transpositions. Let us prove the statement for permutations \((a_{1},\ldots,a_{t},a_{t+1},\ldots a_{k})\) and \((a_{1},\ldots,a_{t-1},a_{t+1},a_{t},\ldots,a_{k})\). Denote by \(C\) the set \(\{a_{1},a_{2},\ldots a_{t-1}\}\). Many summands from the definition of \(r\)-removed perimeter coincide, all that remains to prove is \[\operatorname{dist}_{r}(C,a_{t})+\operatorname{dist}_{r}(C\cup\{a_{t}\},a_{t+ 1})=\operatorname{dist}_{r}(C,a_{t+1})+\operatorname{dist}_{r}(C\cup\{a_{t+1}\}, a_{t}). \tag{2}\] If \(t\leq r\), both sides are \(0\). Otherwise, we let \(z=d(a_{t},a_{t+1})\), \(x_{j}=d(a_{t},a_{j})\) and \(y_{j}=d(a_{t+1},a_{j})\), where \(j=1,\ldots,t-1\). In what follows we only consider triangles of the form \(a_{t}a_{t+1}a_{j}\) for some \(j=1,2,\ldots,t-1\) and use ultrametric triangle inequality for them. We colour triangles with two sides strictly greater than \(z\) in red, in which case \(x_{j}=y_{j}\) by the ultrametric inequality. Triangles coloured in red correspond to some largest distances from points \(a_{t}\) and \(a_{t+1}\) to the set \(C\) which coincide. 
In any other triangle we must have \(x_{i}=z\geqslant y_{i}\) or \(y_{i}=z\geqslant x_{i}\). If there are at least \(t-r\) red triangles, then \(\operatorname{dist}_{r}(C,a_{t})=\operatorname{dist}_{r}(C,a_{t+1})\), \(\operatorname{dist}_{r}(C\cup\{a_{t}\},a_{t+1})=\operatorname{dist}_{r}(C\cup \{a_{t+1}\},a_{t})\) and (2) is true. If there are less than \(t-r\) red triangles, then \(\operatorname{dist}_{r}(C\cup\{a_{t}\},a_{t+1})=\operatorname{dist}_{r}(C,a_{t +1})+z\) and \(\operatorname{dist}_{r}(C\cup\{a_{t+1}\},a_{t})=\operatorname{dist}_{r}(C,a_{t })+z\). By substituting these expressions into (2), we again get an equality. In light of this lemma we have the following definition **Definition 7**.: _For a finite subset \(A\subseteq E\), we define its \(r\)-removed perimeter \(\operatorname{PER}_{r}(A)\) to be the common \(r\)-removed perimeter of all permutations of \(A\)._ **Remark 1**.: _For the case \(r=0\), the \(r\)-removed perimeter is the sum of the distances between all unordered pairs of points plus the sum of the weight function of all points. This case was considered in [5]._ **Theorem 1**.: _Let \((E,w,d)\) be an ultra triple and \(C\subseteq E\) be a finite subset, and \(m\) and \(r\) be non-negative integers. Let \((c_{1},c_{2},\ldots,c_{m})\) be any greedy \(r\)-removed \(m\)-permutation of \(C\). Then, for each \(k\in\{0,1,...,m\}\), the set \(\{c_{1},c_{2},\ldots,c_{k}\}\) has maximum \(r\)-removed perimeter among all \(k\)-subsets of \(C\)._ Proof.: Given a greedy \(r\)-removed \(m\)-permutation \((c_{1},c_{2},\ldots,c_{m})\), we want to prove that for any \(k\)-subset \(A\subseteq C\), \(\operatorname{PER}_{r}(A)\leq\operatorname{PER}_{r}(\{c_{1},c_{2},\ldots,c_{k }\})\). We induct on \(k\). For \(k=0\) both perimeters are \(0\), and so the inequality is trivially true. For the induction step from \(k-1\) to \(k\), let \((v_{1},v_{2},\ldots,v_{k}):=\operatorname{proj}(\{c_{1},c_{2},\ldots,c_{k}\} \to A)\), which is an ordering of \(A\). Then by Proposition 2, \[v_{k}\notin\{c_{1},c_{2},\ldots,c_{k-1}\}.\] By induction hypothesis we know that \[\operatorname{PER}_{r}(\{v_{1},v_{2},\ldots,v_{k-1}\})\leq\operatorname{PER}_ {r}(\{c_{1},c_{2},\ldots,c_{k-1}\}),\] and so to complete the induction step it suffices to show that \[\operatorname{PER}_{r}(A)-\operatorname{PER}_{r}(\{v_{1},v_{2},\ldots,v_{k-1} \})\leq\operatorname{PER}_{r}(\{c_{1},c_{2},\ldots,c_{k}\})-\operatorname{PER} _{r}(\{c_{1},c_{2},\ldots,c_{k-1}\}),\] which, implicitly using Lemma 1, can be equivalently written as \[w\left(v_{k}\right)+\mathrm{dist}_{r}(\{v_{1},v_{2},\ldots,v_{k-1}\},v_{k})\leq w \left(c_{k}\right)+\mathrm{dist}_{r}(\{c_{1},c_{2},\ldots,c_{k-1}\},c_{k}). \tag{3}\] We now turn to proving (3). Since \(v_{k}\in A\setminus\{c_{1},c_{2},\ldots,c_{k-1}\}\subseteq C\setminus\{c_{1}, c_{2},\ldots,c_{k-1}\}\) (recall that \(A\subseteq C\)), we have \(\mathrm{PER}_{r}\{c_{1},c_{2},\ldots,c_{k-1},v_{k}\}\leq\mathrm{PER}_{r}\{c_{ 1},c_{2},\ldots,c_{k}\}\) by the definition of a greedy \(r\)-removed \(m\)-permutation. 
Subtracting \(\mathrm{PER}_{r}(\{c_{1},c_{2},\ldots,c_{k-1}\})\) from both sides we arrive at \[w\left(v_{k}\right)+\mathrm{dist}_{r}(\{c_{1},c_{2},\ldots,c_{k-1}\},v_{k}) \leq w\left(c_{k}\right)+\mathrm{dist}_{r}(\{c_{1},c_{2},\ldots,c_{k-1}\},c_{k})\] And so to deduce (3) it remains to show that \[\mathrm{dist}_{r}(\{v_{1},v_{2},\ldots,v_{k-1}\},v_{k})\leq\mathrm{dist}_{r}( \{c_{1},c_{2},\ldots,c_{k-1}\},v_{k}).\] Which is nothing else but the statement of Proposition 3 for the point \(v=v_{k}\) and \(\{v_{1},v_{2},\ldots,v_{k-1}\}=\mathrm{proj}(\{c_{1},c_{2},\ldots,c_{k-1}\} \to A)\). **Remark 2**.: _It follows from the proof that if the equality_ \[\mathrm{PER}_{r}(\{v_{1},v_{2},\ldots,v_{k}\})=\mathrm{PER}_{r}(\{c_{1},c_{2},\ldots,c_{k}\})\] _holds for \((v_{1},v_{2},\ldots,v_{k}):=\mathrm{proj}(\{c_{1},c_{2},\ldots,c_{k}\}\to A)\), then for each \(j<k\) one also has an equality \(\mathrm{PER}_{r}(\{v_{1},v_{2},\ldots,v_{j}\})=\mathrm{PER}_{r}(\{c_{1},c_{2},\ldots,c_{j}\})\)._ **Corollary 1.1**.: _Let \(C\subseteq E\) be a set, \(m\) and \(r\) be a non-negative integers, \(j\in\{1,2,\ldots m\}\). If \((c_{1},c_{2},\ldots,c_{m})\) is a greedy \(r\)-removed \(m\)-permutation of \(C\), then the number_ \[w\left(c_{j}\right)+\mathrm{dist}_{r}(\{c_{1},c_{2},\ldots,c_{j-1}\},c_{j})\] _does not depend on the choice of this greedy \(r\)-removed \(m\)-permutation but only depends on \(C,r\) and \(j\)._ Proof.: By Theorem 1, for each \(k\leq m\) the set \(\{c_{1},c_{2},\ldots,c_{k}\}\) has maximum perimeter among all \(k\)-subsets of \(C\), which implies that \(\mathrm{PER}_{r}(\{c_{1},c_{2},\ldots,c_{k}\})\) does not depend on the choice of the greedy \(r\)-removed \(m\)-permutation of \(C\). It remains to note that \[w(c_{j})+\mathrm{dist}_{r}(\{c_{1},c_{2},\ldots,c_{j-1}\},c_{j})=\mathrm{PER} _{r}(\{c_{1},c_{2},\ldots,c_{j}\})-\mathrm{PER}_{r}(\{c_{1},c_{2},\ldots,c_{j-1 }\}).\] **Remark 3**.: _As a special case of this corollary we obtain the results of [3, Theorems 3, 4, 30]. Indeed, for a Dedekind domain \(D\), a prime ideal \(P\subset D\), and \(h\in\mathbb{Z}_{\geq 0}\), the distance function \(d_{P,h}(a,b):=-\max(h,\nu_{P}(a-b))\) satisfies the ultrametric triangle inequality and so the result follows from Corollary 1.1 applied to an ultra triple \((S,w\equiv 0,d_{P,h})\)._ We now prove the converse of Theorem 1, namely, that any set of maximal \(r\)-removed perimeter is a prefix of some greedy \(r\)-removed \(m\)-permutation. **Theorem 2**.: _Let \((E,w,d)\) be an ultra triple, \(C\subseteq E\) be a finite set, and \(m\) be a non-negative integer such that \(|C|\geqslant m\). For \(k\in\{0,1,\ldots,m\}\) let \(A\) be a \(k\)-subset of \(C\) having maximum \(r\)-removed perimeter (among all \(k\)-subsets of \(C\)). Then, there exists a greedy \(r\)-removed \(m\)-permutation of \(C\) for which \(A\) is a prefix of this permutation._ Proof.: Choose an arbitrary greedy \(r\)-removed \(m\)-permutation \((c_{1},c_{2},\ldots,c_{m})\) of \(C\) by starting with any point and continuing the sequence greedily choosing elements from the remaining points. By Theorem 1, the set \((c_{1},c_{2},\ldots,c_{k})\) has maximum perimeter among all \(k\)-subsets of \(C\). Hence, \(\operatorname{PER}_{r}(A)=\operatorname{PER}_{r}(\{c_{1},c_{2},\ldots,c_{k}\})\) since the set \(A\) also has maximum \(r\)-removed perimeter among them. Let \((v_{1},v_{2},\ldots,v_{k}):=\operatorname{proj}(\{c_{1},c_{2},\ldots,c_{k}\} \to A)\). 
What we want to prove is that there exists a greedy \(r\)-removed \(m\) permutation of \(C\) which starts from \((v_{1},v_{2},\ldots,v_{k})\), which is equivalent to checking that for each \(p\leq k\) the point \(v_{p}\) maximizes \[w(x)+\operatorname{dist}_{r}(\{v_{1},\ldots,v_{p-1}\},x)\] over all \(x\in C\setminus\{v_{1},\ldots,v_{p-1}\}\). As mentioned in Remark 2, the fact that \(\operatorname{PER}_{r}(\{v_{1},v_{2},\ldots,v_{k}\})=\operatorname{PER}_{r}( \{c_{1},c_{2},\ldots,c_{k}\})\) implies that for each \(j\leq k\) we have \(\operatorname{PER}_{r}(\{v_{1},v_{2},\ldots,v_{j}\})=\operatorname{PER}_{r}( \{c_{1},c_{2},\ldots,c_{j}\})\). In particular, this holds for \(j=p\). Now, arguing by contradiction we assume that there exists \(x\in C\setminus\{v_{1},\ldots,v_{p-1}\}\) such that \[w(x)+\operatorname{dist}_{r}(\{v_{1},\ldots,v_{p-1}\},x)>w(v_{p})+\operatorname {dist}_{r}(\{v_{1},\ldots,v_{p-1}\},v_{p}).\] This would mean that \[\operatorname{PER}_{r}(\{v_{1},\ldots,v_{p-1},x\})>\operatorname{PER}_{r}(\{ v_{1},\ldots,v_{p-1},v_{p}\})=\operatorname{PER}_{r}(\{c_{1},\ldots,c_{p-1},c_{p}\}),\] contradicting the fact that \(\{c_{1},\ldots,c_{p-1},c_{p}\}\) has the largest \(r\)-removed perimeter among all subsets of \(C\) of size \(p\). ## 4 Strong greedoid of maximum perimeter sets In [5] it was shown that sets maximizing the perimeter (i.e. \(r\)-removed perimeter with \(r=0\)) form a strong greedoid. In this section we generalize this statement to all \(r\geq 0\). We start by recalling the relevant definitions from the theory of greedoids. **Definition 8**.: _A collection \(\mathcal{F}\subseteq 2^{E}\) of subsets of a finite set \(E\) is called a **greedoid** (on the ground set \(E\)) if it satisfies the following three axioms:_ 1. \(\varnothing\in\mathcal{F}\)_._ 2. _If_ \(A\in\mathcal{F}\) _satisfies_ \(|A|>0\)_, then there exists_ \(a\in A\) _such that_ \(A\setminus a\in\mathcal{F}\)_._ 3. _If_ \(A,B\in\mathcal{F}\) _satisfy_ \(|A|=|B|+1\)_, then there exists_ \(a\in A\setminus B\) _such that_ \(B\cup a\in\mathcal{F}\)_._ _A greedoid \(\mathcal{F}\) on a ground set \(E\) is called a **strong greedoid** (also known as "Gauss greedoid") if it additionally satisfies the following axiom:_ _._ * _If_ \(A,B\in\mathcal{F}\) _satisfy_ \(|A|=|B|+1\)_, then there exists_ \(a\in A\setminus B\) _such that_ \(B\cup a\in\mathcal{F}\) _and_ \(A\setminus a\in\mathcal{F}\)_._ There are several equivalent definitions of a greedoid in the literature, ours is taken from [6, Section IV.1]. Specifically, our axioms (i) and (iii) align with conditions (1.4) and (1.6) in [6, Section IV.1], while axioms (i) and (ii) establish \((E,\mathcal{F})\) as an accessible set system. The definition of a strong greedoid can be found in [4]. Now we assume that the set \(E\) is finite. The following theorem shows that sets with maximal \(r\)-removed perimeter form a strong greedoid. **Theorem 3**.: _Let \((E,w,d)\) be an ultra triple on a finite ground set and \(\mathcal{F}_{r}\) denote the collection of subsets \(A\subseteq E\) that have maximum \(r\)-removed perimeter among all \(|A|\)-sets:_ \[\mathcal{F}_{r}:=\{A\subseteq E\ |\ \operatorname{PER}_{r}(A)\geq\operatorname{PER }_{r}(B)\text{ for all }B\subseteq E\text{ satisfying }|B|=|A|\}.\] _Then \(\mathcal{F}_{r}\) is a strong greedoid on the ground set \(E\)._ We start by proving the following lemma. **Lemma 2**.: _Let \(A\) and \(B\) be two subsets of \(E\) such that \(|A|=|B|+1\). 
Then, there exists \(u\in A\setminus B\) satisfying_ \[\operatorname{PER}_{r}(A\setminus u)+\operatorname{PER}_{r}(B\cup u)\geq \operatorname{PER}_{r}(A)+\operatorname{PER}_{r}(B) \tag{4}\] Proof.: Let \(k=|B|\) and so \(|A|=k+1\). With a slight abuse of notation we denote by \(B\) an arbitrary ordered set formed by elements of \(B\), which we fix from now on. Define \((v_{1},v_{2},\ldots,v_{k}):=\operatorname{proj}(B\to A)\) and let \(u\) be the unique element of \(A\setminus\{v_{1},v_{2},\ldots,v_{k}\}\). By Lemma 2 we have \(u\notin B\). We now want to prove (4) for this choice of \(u\). Subtructing \(\operatorname{PER}_{r}(A\setminus u)+\operatorname{PER}_{r}(B)+w(u)\) from both sides we arrive at an equivalent inequality \[\operatorname{dist}_{r}(B,u)\leq\operatorname{dist}_{r}(A\setminus u,u),\] which is simply the result of Proposition 3 applied to \(u\) and \(A\setminus u=\operatorname{proj}(B\to A)\). Proof of Theorem 3.: First note that property (i) is trivial, and (iii) immediately follows from (iv). Furthermore, since \(E\) is finite, for any \(s\leq|E|\) there exists \(B\in\mathcal{F}_{r}\) with \(|B|=s\), and so by choosing arbitrary \(B\in\mathcal{F}_{r}\) with \(|B|=|A|-1\) we can deduce (ii) from (iv). To prove (iv) we use Lemma 2 to construct \(u\in A\setminus B\) satisfying (4). Since \(A\in\mathcal{F}_{r}\) we must have \(\operatorname{PER}_{r}(B\cup u)\leq\operatorname{PER}_{r}(A)\). Similarly, \(B\in\mathcal{F}_{r}\) implies \(\operatorname{PER}_{r}(A\setminus u)\leq\operatorname{PER}_{r}(B)\). Together with (4) these two inequalities immediately imply that \[\operatorname{PER}_{r}(A\setminus u)=\operatorname{PER}_{r}(B),\qquad \operatorname{PER}_{r}(B\cup u)=\operatorname{PER}_{r}(A),\] which means that both \(A\setminus u\) and \(B\cup u\) are in \(\mathcal{F}_{r}\). This shows that \(\mathcal{F}_{r}\) is a strong greedoid. Other perimeters Instead of the \(r\)-removed distance \(\operatorname{dist}_{r}(C,v)\) we could start from some other notion of a distance from a point to a set, call it \(\operatorname{dist}(C,v)\), and define \(\operatorname{PER}(\{a_{1},\ldots,a_{n}\})\) of an ordered set \(A:=(a_{1},\ldots,a_{n})\) by setting \(\operatorname{PER}(A):=\sum_{a\in A}w(a)+\sum_{i=1}^{k}\operatorname{dist}(\{a_ {1},a_{2},\ldots,a_{i-1}\},a_{i})\). Tracking the proofs of Theorems 1, 2 and 3 we see that the only two properties of the \(\operatorname{dist}\) functions that we use are given by the following **Definition 9**.: _We say that a function \(\operatorname{dist}\) satisfies property \(\mathbf{S}\) if for any set \(\{c_{1},c_{2},\ldots,c_{n}\}=:C\subseteq E\) and \(x,y\in E\setminus C\) one has_ 1. \(\operatorname{dist}(C,x)+\operatorname{dist}(C\cup\{x\},y)=\operatorname{ dist}(C,y)+\operatorname{dist}(C\cup\{y\},x)\)_;_ 2. _If_ \(d(c_{i},x)\leq d(c_{i},y)\) _for each_ \(i\in\{1,2,\ldots,n\}\)_, then_ \[\operatorname{dist}(\{c_{1},c_{2},\ldots,c_{n}\},x)\leq\operatorname{dist}(\{c _{1},c_{2},\ldots,c_{n}\},y).\] **Remark 4**.: _Indeed, property \((\mathbf{S1})\) is used in Lemma 1 to prove that the perimeter of a set is well-defined, and is, in fact, equivalent to this lemma. Property \((\mathbf{S2})\) is used in the proofs of Theorems 1 and 2._ We now give a large family of distances satisfying property \(\mathbf{S}\). **Lemma 3**.: _Let \((E,w,d)\) be an ultra triple and \(f=(f_{n})_{n=1}^{\infty}\) be a sequence of non-decreasing functions. 
For a set \(\{c_{1},\ldots,c_{n}\}=C\subseteq E\) and \(x\in E\setminus C\), let \(\operatorname{dist}_{f}(C,x):=\sum_{i=1}^{n}f_{i}(d_{i})\), where \((d_{1},d_{2},\ldots,d_{n})\) is the ordered set of values \(d(c_{1},x),\ldots,d(c_{n},x)\) arranged in non-decreasing order. Then \(\operatorname{dist}_{f}\) satisfies property \(\mathbf{S}\)._ Proof.: We first check \((\mathbf{S1})\). Since the property is linear in \(f=(f_{n})_{n=1}^{\infty}\), it suffices to check it for \(\operatorname{dist}_{f}\) with \[f_{j}:=\begin{cases}0&j\leq r;\\ g&j>r.\end{cases}\] Where \(g:\mathbb{R}\to\mathbb{R}\) is some fixed non-decreasing function. Indeed, one easily sees that the linear span of these sequences contains the whole cone of possible sequences of non-decreasing functions. But for this specific choice of \(f\), the distance \(\operatorname{dist}_{f}\) is nothing else but the \(r\)-removed distance for the ultra triple \((E,w,d_{g})\) where \(d_{g}\) is given by \(d_{g}(a,b):=g(d(a,b))^{2}\) and so the equality **(S1)** is given by (2) from the proof of Lemma 1. To check **(S2)** it suffices to note that if one \(n\)-tuple is point-wise smaller than another, then the same remains true after each of the \(n\)-tuples is sorted from the smallest value to the largest, and then use the fact that each \(f_{j}\) is non-decreasing. For sufficiently large spaces and under certain natural conditions on the distance function \(\operatorname{dist}\) from a point to a set we manage to prove the reverse of Lemma 3. To avoid stating technical conditions we prove the result for the space \((\mathbb{Z},w,-\nu_{p})\) in which distance between points \(a,b\in\mathbb{Z}\) is given by \(-\nu_{p}(a-b)\), where \(\nu_{p}\) stands for the \(p\)-adic valuation and we further assume that \(p>2\). **Lemma 4**.: _Consider an ultra triple \((\mathbb{Z},w,-\nu_{p})\) with arbitrary weight function \(w\) and \(p>2\). Assume that the value of the distance function \(\operatorname{dist}(C,x)\) from a point \(x\) to a set \(C\) depends only on the set of distances from \(x\) to the points of \(C\) and that \(\operatorname{dist}\) satisfies property \(\mathbf{S}\). Then \(\operatorname{dist}\equiv\operatorname{dist}_{f}\) for some sequence of non-decreasing functions \(f=(f_{n})_{n=1}^{\infty}\)._ Proof.: By assumption there exists a sequence of symmetric functions \(g_{n}:\mathbb{Z}_{\leq 0}^{n}\to\mathbb{R}\) indexed by \(n\geq 1\), such that for any point \(x\) and any set \(\{c_{1},\ldots,c_{n}\}\) not containing \(x\) we have \[\operatorname{dist}(x,\{c_{1},\ldots,c_{n}\})=g_{n}(d(c_{1},x),\ldots,d(c_{n},x)).\] We want to prove the existence of a sequence of non-decreasing functions \((f_{n})_{n=1}^{\infty}\) such that for any non-positive integers \(d_{1}\leq d_{2}\leq\cdots\leq d_{n}\) \[g_{n}(d_{1},d_{2},\ldots d_{n})=\sum_{j=1}^{n}f_{j}(d_{j}). \tag{5}\] We prove the existence of functions \(f_{j}\) by induction on \(j\), and for the base case we set \(f_{1}:=g_{1}\) which is non-decreasing by the second condition of property \(\mathbf{S}\). Now assume that \(f_{1},\ldots,f_{m-1}\) are already defined in such a way that (5) is satisfied for all \(n<m\) and we want to define \(f_{m}\). For each \(d\in\mathbb{Z}_{\leq 0}\) we set \[f_{m}(d):=g_{m}(d,d,\ldots,d)-g_{m-1}(d,d,\ldots,d),\] where we have \(m\) arguments equal to \(d\) in the first case and \(m-1\) arguments equal to \(d\) in the second. First, we check that (5) is satisfied for \(n=m\). 
For this, given non-positive integers \(d_{1}\leq d_{2}\leq\cdots\leq d_{n}\) we consider two points \(x,y\in\mathbb{Z}\) with \(d(x,y)=d_{n}\) and a set of points \(C:=\{c_{1},\ldots,c_{n-1}\}\) such that for each \(j=1,\ldots,n-1\) we have \(d_{j}:=d(c_{j},x)\) and \(d(c_{j},y)=d_{n}\). The existence of such a set follows from the property of \(\mathbb{Z}\) with the \(p\)-adic distance (where \(p>2\)) which guarantees that for any two points \(a,b\in\mathbb{Z}\) and any \(\ell\leq d(a,b)\) there exists \(c\in\mathbb{Z}\) such that \(d(a,c)=\ell\) and \(d(b,c)=d(a,b)\). Using (\(\mathbf{S1}\)) we write \[g(d_{1},\ldots,d_{n})=\operatorname{dist}(C\cup\{y\},x)=\operatorname{dist}(C,x)+\left(\operatorname{dist}(C\cup\{x\},y)-\operatorname{dist}(C,y)\right),\] and it remains to observe that the difference in parentheses is equal to \(f_{n}(d_{n})\) by the definition of \(f_{n}\) and \[\operatorname{dist}(C,x)=\sum_{j=1}^{n-1}f_{j}(d_{j})\] by induction hypothesis. Second, we show that \(f_{m}\) is non-decreasing. Let \(\ell_{1},\ell_{2}\in\mathbb{Z}_{\leq 0}\) satisfy \(\ell_{1}\leq\ell_{2}\). By (5) we have \[f_{m}(\ell_{2})-f_{m}(\ell_{1})=g_{m}(\ell_{1},\ldots,\ell_{1},\ell_{2})-g_{m }(\ell_{1},\ldots,\ell_{1},\ell_{1})\] and so \(f_{m}(\ell_{2})\geq f_{m}(\ell_{1})\) directly follows from (\(\mathbf{S2}\)). **Acknowledgements:** We would like to thank Fedor Petrov for his guidance in this project.
2309.07478
Direct Text to Speech Translation System using Acoustic Units
This paper proposes a direct text to speech translation system using discrete acoustic units. This framework employs text in different source languages as input to generate speech in the target language without the need for text transcriptions in this language. Motivated by the success of acoustic units in previous works for direct speech to speech translation systems, we use the same pipeline to extract the acoustic units using a speech encoder combined with a clustering algorithm. Once units are obtained, an encoder-decoder architecture is trained to predict them. Then a vocoder generates speech from units. Our approach for direct text to speech translation was tested on the new CVSS corpus with two different text mBART models employed as initialisation. The systems presented report competitive performance for most of the language pairs evaluated. Besides, results show a remarkable improvement when initialising our proposed architecture with a model pre-trained with more languages.
Victoria Mingote, Pablo Gimeno, Luis Vicente, Sameer Khurana, Antoine Laurent, Jarod Duret
2023-09-14T07:35:14Z
http://arxiv.org/abs/2309.07478v1
# Direct Text to Speech Translation System using Acoustic Units ###### Abstract This paper proposes a direct text to speech translation system using discrete acoustic units. This framework employs text in different source languages as input to generate speech in the target language without the need for text transcriptions in this language. Motivated by the success of acoustic units in previous works for direct speech to speech translation systems, we use the same pipeline to extract the acoustic units using a speech encoder combined with a clustering algorithm. Once units are obtained, an encoder-decoder architecture is trained to predict them. Then a vocoder generates speech from units. Our approach for direct text to speech translation was tested on the new CVSS corpus with two different text mBART models employed as initialisation. The systems presented report competitive performance for most of the language pairs evaluated. Besides, results show a remarkable improvement when initialising our proposed architecture with a model pre-trained with more languages. Acoustic Units, CVSS corpus, Direct Text to Speech Translation, mBART ## I Introduction During the last years, the huge increase in the available unlabelled data for text and speech in all languages of the world has led to the need to develop powerful new approaches to process this data. Also, recent advances in self-supervised learning have provided the opportunity to benefit from this data and produce general-purpose representations. These representations can be employed for different tasks and languages with impressive results, e.g. for speech processing using XLS-R [1] or for text processing with mBART [2, 3] and mT5 [4]. Moreover, recently many works have focused on the development of multilingual and also multimodal systems, such as mSLAM [5] and SAMU-XLSR [6]. These systems aim to reduce communication problems between people speaking and writing different languages, especially in the case of under-resourced languages. Previous works have established state-of-the-art performance on a variety of text and speech downstream tasks including machine translation, specifically for the text to text and speech to text translation tasks. However, research interest in speech to speech and text to speech translation tasks is still growing, as these tasks remain a major challenge due to the scarcity of labelled data for fine-tuning the systems. These tasks seek to convert speech and text generated in a source language into speech in another target language. In the case of conventional speech to speech translation, systems rely on a cascade approach that translates speech into text using Automatic Speech Recognition (ASR) followed by text to text machine translation, or a speech to text system. In both cases, after the mentioned steps, a speech synthesis model is applied to generate speech in the target language. The conventional systems mentioned above achieve high performance, but these systems are text-centric. Thus, having speech in one language as input, an intermediate text representation in the target language has to be obtained as a preliminary step to generate speech. Therefore, the idea of direct speech to speech translation without relying on intermediate text representation has been recently explored in the literature [7, 8]. This approach has shown great computational benefits compared to the cascade approach. 
Nevertheless, a performance gap can still be observed due to the challenges of simultaneously learning the alignment between two languages and the process of correctly mapping spectrograms from source to target languages. To tackle the existing gap, the research described in [9, 10] has proposed a direct speech to speech translation system which is trained to predict a set of discrete acoustic units extracted from the target speech. In addition to the direct speech to speech system, these works have introduced a text to speech translation part using discrete acoustic units. However, these works apply text to unit translation to the output of an ASR system. Hence, the proposed approach is not considered a direct text to speech system, as it does not take an original text input directly to produce the output speech. Moreover, the performance of this system could be influenced by the use of the output of the ASR module as input, the quality of which may affect the subsequent steps. On the other hand, considering the limitations that still exist in direct translation and the relevance of multimodal and multilingual systems, [11, 12] have developed a system for speech to speech and text to speech translation. In this system, a common fixed representation for speech and text is built to carry out zero-shot cross-modal translation. Unlike previous works, where text to unit translation systems were used only combined with ASR, this paper describes an implementation of a framework for generating speech in a given language from text input in a different language. The task can then be formally defined as a direct text to speech translation task. Applying this framework, we use text as source input to obtain discrete acoustic units as intermediate representations to generate speech. Thus, this text framework allows us to generate the same discrete units as using speech as input. The use of this framework could be useful for different real applications. For instance, text to speech translation could be employed as a data augmentation technique for low resource languages or to create audio versions of written content, such as podcasts or story-telling services from texts. Furthermore, in this work, we have also analyzed the effect of using two pre-trained models with a different number of languages as encoder-decoder for the fine-tuning of our direct text to speech system in a new corpus called Common Voice-based Speech-to-Speech (CVSS) translation [13]. This new CVSS dataset has recently been released to address the issues of scarcity in end-to-end labelled data for direct speech to speech and text to speech translation. In addition, the number of languages in similar previous works has also been limited to mostly high-resource languages with 10 different languages. However, with this new dataset, the text to speech translation task has been evaluated on more than 20 input languages. This paper is laid out as follows. Section II provides a review of the existing approaches which inspire this work, and introduces the proposed direct text to speech framework using acoustic units. The experimental setup is detailed in Section III, focusing on the data and the evaluation protocol. Results and discussions are given in Section IV. Finally, conclusions and future lines are presented in Section V. 
## II Proposed Method ### _Preliminaries: Direct Speech to Speech Translation_ Nowadays, there is an expanding line of research in direct speech to speech translation in which the development carried out in [9, 10] has had a great impact. These works have introduced the first systems based on real speech data as target. Thus, instead of predicting continuous spectrograms as in [7, 8], discrete units learned from self-supervised representations of the target speech are predicted. The system proposed is an encoder-decoder based on a sequence-to-sequence transformer model for speech-to-unit translation. To create the system described in [9, 10], two different blocks are integrated. First, a multilingual Hidden unit BERT (mHuBERT) [14] is employed to extract representations from the target speech that are then discretized using a quantizer model. mHuBERT was chosen as generator due to its superior performance across different speech tasks compared to other unsupervised models. By extracting the discrete units with this approach, the encoder-decoder speech to unit translation model can be trained using the units as target sequence. In a second step, and once this model is trained, the target speech is generated from the discrete units. ### _Direct Text to Speech Translation_ _Overview:_: In view of the success achieved by the use of acoustic units for direct speech to speech translation systems in the preliminary works, this work presents a framework to apply the same approach for direct text to speech translation. On the other hand, the need for multilingual and multimodal systems has also motivated several state-of-the-art translation systems where speech and text are permitted as input. Therefore, we propose a multilingual framework in which text data is employed as the input source to predict discrete acoustic units as target without the need to know the transcription in the target language. This aspect is especially relevant in low resource languages, where finding text-speech transcription pairs can be difficult. In addition, the application of the approach presented in this section can be seen as a data augmentation strategy to be used in the case of these languages with scarcity of available resources. As illustrated in Figure 1, an encoder-decoder architecture is used to perform the direct text to speech translation system. Since the conversion of text inputs into acoustic units can be considered as a machine translation task, we have used a pre-trained text model as initialisation for our encoder-decoder architecture. Namely, we have considered multilingual BART (mBART) model in its two variations, mBART25 and mBART50 [2, 3]. The main difference between both models is the number of languages used in the training process. After initialisation, the full architecture is fine-tuned on the text to acoustic unit translation task. The units employed as targets for this training have previously been extracted with an acoustic unit discovery system. Finally, in inference, the HiFi GAN [15] unit to speech vocoder is applied to generate target speech utterances. This unit-based vocoder is a modified version of the original HiFi-GAN neural vocoder presented in [16]. For this model, we have used the pre-trained English vocoder available at this link1. This last part corresponds to the orange block in Figure 1 and could be shared with a direct speech to speech system. 
Footnote 1: [https://github.com/facebookresearch/fairseq/blob/main/examples/speech_to_speech/docs/textless_s2st_real_data.md](https://github.com/facebookresearch/fairseq/blob/main/examples/speech_to_speech/docs/textless_s2st_real_data.md) _Learning.:_: To train the direct text to speech translation system, pairs of examples \((x_{S},u_{L})\) are used where \(x_{S}\) is the source text in any of the multiple languages employed, and \(u_{L}\) is set of acoustic units extracted from the target speech. The generation of these units is carried out by a pre-trained Fig. 1: Direct text to speech translation system, obtaining acoustic units with source text data in any language to generate target speech in English language. mHuBERT model [10] and a k-means quantizer1. Concerning the mHuBERT model, it is based on the HuBERT Base architecture trained using a combination of English, Spanish, and French data from VoxPopuli [17]. Speech representations are learned in a self-supervised way using unlabelled data as explained in [14, 18]. After that, a k-means quantizer is applied to the representations learned in the layer \(11th\) of the mHuBERT model to generate discrete labels or units. This layer is chosen as done in similar direct translation works [10]. Several papers have shown that HuBERT like models provide the most meaningful phonetic and word information towards higher layers of the model [19, 20]. Footnote 1: [https://github.com/hugging-face/](https://github.com/hugging-face/) To carry out the k-means quantizer process, the two following steps are applied. First, for training, \(N\) centroids are learned using a fraction of the training data. After that, in inference time, the output of the quantizer is chosen as the index of the centroid minimising the euclidean distance between the input embedding and \(N\) centroids learned. In this case, the number of k-means clusters employed is 1000 as done in [10]. Moreover, the discrete unit sequences extracted from the k-means algorithm could have consecutive repetitions of the same units. Therefore, to generate the final target units, the original unit sequences are collapsed to convert consecutive equal units into one single unit (e.g., 1 1 2 2 3 3 \(\rightarrow\) 1 2 3). This reduction has been applied since the work described in [9] showed that collapsing unit sequences did not lead to a decrease in performance and was more efficient. As these target units are discrete, the text to unit translation system is trained to minimize the cross entropy loss between the predicted and real units using label smoothing with a probability of \(0.2\). Hyperparameters.As optimizer for the fine-tuning process, we have employed the Adam optimizer with \(\epsilon=1e-6\), \(\beta_{1}=0.9\), \(\beta_{2}=0.98\), learning rate \(3e-5\), and polynomial learning rate decay scheduling. The model is trained using the fairseq toolkit [21] with a dropout of 0.3 and an attention dropout value of 0.1. The training process was carried out employing 8 V100 (32 GB) NVIDIA GPUs. ## III Experimental Setup ### _Data_ For the direct text to speech translation task, two stages have been carried out. Initially, reference acoustic units are extracted and then text to speech framework is trained using them as targets. To develop both stages, the following data from the new CVSS translation corpus [13] are employed. Acoustic Units.For obtaining the acoustic units, the English audios from the CVSS-C (canonical voice) dataset have been used as target speech. 
These target audios are forwarded through the acoustic unit discovery system based on mHuBERT model and k-means clustering approach to obtain the discrete unit representations. Direct Text to Speech Translation.Once the acoustic units are obtained, they are employed as targets to train the direct text to unit translation system. Considering that the CVSS dataset also provides the text transcription for the input audios, we have used this dataset to perform 21 languages to EN text to speech translation tasks. ### _Evaluation_ Aiming to evaluate the text to speech translation task, and considering that it is not feasible to directly compare two audio signals, we adopt a similar framework as the one described in [8] to evaluate the translation quality of the generated speech. This setup is described in Figure 2. As it can be seen, an ASR system is used to generate transcriptions for the target speech. The ASR system used2 is an open-source English model based on wav2vec 2.0 features trained through a self-training objective [22]. The evaluation metric shown in our results is then computed as the BLEU score between the obtained transcriptions and the reference text which is normalized in CVSS to perform this standard evaluation. This metric provides an objective measure of speech intelligibility and translation quality. Footnote 2: [https://huggingface.co/facebook/wav2vec2-large-960h-lv60-self](https://huggingface.co/facebook/wav2vec2-large-960h-lv60-self) ## IV Results and Discussion As mentioned above, to build the direct text to speech translation system, we have explored different models as initialisation for the encoder-decoder architecture. Therefore, we have conducted experiments to evaluate the proposed approach using pre-trained mBART25 and mBART50 models. In addition, we also developed a cascade system in order to have a reference system for comparison. This system is composed of a machine translation module based on the mBART50 model followed by a speech synthesis module implemented using tacotron2 [23]. Fig. 3: BLEU results on CVSS test set, comparing the cascade and two mBART models used as encoder-decoder initialisation and divided into groups of languages according to the number of resources available for each of them. Fig. 2: Text to speech evaluation pipeline, using an ASR model to generate hypothesis text and compare with reference text to obtain BLEU scores. Figure 3 presents the BLEU scores in the test partition of the CVSS dataset for our proposed direct text-to-speech system and cascade approach. In this figure, the performance is shown separately for high, medium and low resource languages. We have considered high resource languages as those with more than 100h, and low resource languages as those with less than 10h of training data. Moreover, the average of the results is also presented. These results show that the best proposed approach achieves performance close to the cascade system. Furthermore, our direct text to speech system has the advantage that it does not need to know the transcription in the target language, while the cascade system needs it to perform the whole translation process. Note that, if we focus on the two alternatives for the direct text to speech system, a large performance improvement in all splits is observed when the mBART50 model is used as a pre-training model to initialize our encoder-decoder pipeline. 
For a more in-depth analysis of the differences between the two mBART models employed, Figure 4 shows the results for each of the 21 languages available in the CVSS dataset. The figure shows that performance improves for all languages when using mBART50. In addition, the improvement is particularly remarkable for the following translation pairs: fa-en, pt-en, mn-en, sv-en, sl-en, ta-en, and id-en, marked with \(1\) in the figure. This large improvement is explained by the fact that these languages are not included in mBART25 but are part of the training languages of mBART50. Note that even languages such as Catalan (ca) and Welsh (cy), marked with \(2\), which are not included in either mBART25 or mBART50, benefit from the larger number of languages in the second model and improve their results. To quantify these graphical results, we computed the improvement achieved in the three language sets. An average relative improvement of \(40\%\) in BLEU score is achieved for the languages employed in the pre-training of both mBART models. For the languages included only in mBART50, an average relative improvement of \(501\%\) is obtained, while for the languages not present in either of the two, mBART50 achieves an average improvement of \(136\%\). These results underline that pre-training the multilingual model on more languages, as in mBART50, has a strong impact on the newly included languages, and that this increased multilinguality also helps to improve results for languages not seen during pre-training. ## V Conclusions and Future Works In this paper, we have presented a new approach for direct text to speech translation. The approach is based on an encoder-decoder framework that takes text as input and discrete acoustic units as the target sequence. Hence, multilingual text to speech translation can be performed without explicit knowledge of the text transcription in the target language. The system presented in this paper could be used for different applications, such as generating audio books from texts in different languages. Moreover, the proposed framework could be applied to generate augmented data in order to expand datasets for low resource languages. The evaluation of this proposal was carried out on the new CVSS dataset, confirming the strong performance of this approach for speech generation. In these experiments, we have also shown an improvement in performance when the model used as initialisation for the encoder-decoder architecture has been pre-trained on more of the languages in the CVSS translation pairs. This suggests that cross-lingual learning can significantly benefit low resource languages in the text to speech translation task. The promising results achieved with the proposed system open an interesting line of research, and future work will focus on combining our direct text to speech framework with a direct speech to speech framework. In this way, a multimodal system could be built in which the source input could be either speech or text, since both modalities produce the same discrete acoustic units and can thus generate the target speech. Considering that only speech in the target language is needed, further work could also explore the use of target languages other than English. Fig. 4: BLEU results on the CVSS test partition for each available language. \({}^{1}\) Languages not present in mBART-25 but present in mBART-50. \({}^{2}\) Languages not present in mBART-25 or mBART-50.
2310.00274
AfriSpeech-200: Pan-African Accented Speech Dataset for Clinical and General Domain ASR
Africa has a very low doctor-to-patient ratio. At very busy clinics, doctors could see 30+ patients per day -- a heavy patient burden compared with developed countries -- but productivity tools such as clinical automatic speech recognition (ASR) are lacking for these overworked clinicians. However, clinical ASR is mature, even ubiquitous, in developed nations, and clinician-reported performance of commercial clinical ASR systems is generally satisfactory. Furthermore, the recent performance of general domain ASR is approaching human accuracy. However, several gaps exist. Several publications have highlighted racial bias with speech-to-text algorithms and performance on minority accents lags significantly. To our knowledge, there is no publicly available research or benchmark on accented African clinical ASR, and speech data is non-existent for the majority of African accents. We release AfriSpeech, 200hrs of Pan-African English speech, 67,577 clips from 2,463 unique speakers across 120 indigenous accents from 13 countries for clinical and general domain ASR, a benchmark test set, with publicly available pre-trained models with SOTA performance on the AfriSpeech benchmark.
Tobi Olatunji, Tejumade Afonja, Aditya Yadavalli, Chris Chinenye Emezue, Sahib Singh, Bonaventure F. P. Dossou, Joanne Osuchukwu, Salomey Osei, Atnafu Lambebo Tonja, Naome Etori, Clinton Mbataku
2023-09-30T06:38:43Z
http://arxiv.org/abs/2310.00274v1
# AfriSpeech-200: Pan-African Accented Speech Dataset for Clinical and General Domain ASR ###### Abstract Africa has a very low doctor-to-patient ratio. At very busy clinics, doctors could see 30+ patients per day- a heavy patient burden compared with developed countries-but productivity tools such as clinical automatic speech recognition (ASR) are lacking for these overworked clinicians. However, clinical ASR is mature, even ubiquitous, in developed nations, and clinician-reported performance of commercial clinical ASR systems is generally satisfactory. Furthermore, the recent performance of general domain ASR is approaching human accuracy. However, several gaps exist. Several publications have highlighted racial bias with speech-to-text algorithms and performance on minority accents lags significantly. To our knowledge, there is no publicly available research or benchmark on accented African clinical ASR, and speech data is non-existent for the majority of African accents. We release AfriSpeech, 200hrs of Pan-African English speech, 67,577 clips from 2,463 unique speakers across 120 indigenous accents from 13 countries for clinical and general domain ASR, a benchmark test set, with publicly available pre-trained models with SOTA performance on the AfriSpeech benchmark. ## 1 Introduction The African continent and the nearby islands constitute one-fourth of the land surface of the earth (Lodhi, 1993). Approximately 1.3 billion people live in Africa, which is about 18% of the world's population (Wikipedia contributors, 2023). Of the estimated 7,000+ languages and dialects in the world, over 3,000 languages are found in Africa (Wikipedia contributors, 2023; Heine and Nurse, 2000). Despite its large and predominantly young population, Africa bears a significant proportion of the global disease burden (de Graft Aikins et al., 2010) with multiple socioeconomic factors contributing to high mortality and morbidity rates (Baingana and Bos, 2006). Healthcare systems are overburdened and underfunded in many African countries (Oleribe et al., 2019; Naicker et al., 2009; Nkomazana et al., 2015), struggling to cope with the increasing demand for services, while at the same time facing significant shortages of trained health workers (who; Ahmat et al., 2022; Naicker et al., 2010; Nkomazana et al., 2015; Kinfu et al., 2009; Etori et al., 2023). A recent study conducted by Ahmat et al. (2022) in 47 African countries shows that the region has a ratio of 1.55 health workers (physicians, nurses, and midwives) per 1000 people 3x less than the WHO-recommended density of 4.45 health workers per 1000 people. While technology can help mitigate some of these problems, Bukachi and Pakenham-Walsh (2007) and Manyati and Mutsau (2021) aptly show that although Africa has enjoyed massive growth in mobile technology, telecommunication, and internet penetration over the past two decades, healthcare technology lags significantly. A 2019 systematic review on the use of Automatic Speech Recognition (ASR) for clinical documentation in the US from 1990 to 2018 by Blackley et al. (2019) and other similar studies (Goss et al., 2019; Blackley et al., 2020; Ahlgrim et al., 2016; Vogel et al., 2015) showed that the use of speech recognition led to a 19-92% decrease in mean documentation time, 50.3-100% decrease in turnaround time, and 17% improvement in documentation quality. 
However, in the African context, the lack of training datasets for many of the 3000+ languages and accents in the continent remains an obstacle in developing and adopting robust speech recognition systems for the general domain and for clinical ASR in particular (Doumbouya et al., 2021; Siminyu et al., 2021; Babirye et al., 2022; Ogayo et al., 2022). While recent efforts have begun to turn this tide for the majority of African languages like Swahili, Kinyarwanda, and Yoruba (Gutkin et al., 2020; Dossou and Emezue, 2021; Olaleye et al., 2022), over a thousand African languages and accents remain excluded from global speech research advancements. Recent single-digit word error rates (WER) (Chen et al., 2022; Radford et al., 2022; Hsu et al., 2021; Baevski et al., 2020) in multiple SOTA publications and benchmarks on Librispeech (Panayotov et al., 2015), TED-LIUM3 (Hernandez et al., 2018), and other datasets using architectures like Wav2vec2 (Baevski et al., 2020), Conformer (Gulati et al., 2020), Transducer, and Whisper (Radford et al., 2022) contrast significantly with ASR performance for African accented speech (Gutkin et al., 2020; Dossou and Emezue, 2021) (see Figure 2). We explore whether curating a large pan-African speech corpus might unlock comparable single-digit performance on African accents. We restrict this investigation to accented speech in English because English is the official language for the medical record in most Anglophone African countries, expanding the utility of this work to multiple Anglophone African countries. Our contributions are as follows: * We present _AfriSpeech-2001_, the first and most diverse open-source pan-African accented English speech corpus for clinical and general domain ASR, providing 200.70 hrs of accented speech, 67,577 speech-transcript pairs in 120 African accents across 13 countries, a benchmark dataset that paves the way for out-of-distribution, few-shot and zero-shot analyses on very-low-resource accents. 2 Footnote 1: [https://huggingface.co/datasets/tobiolatunji/afrispeech-200](https://huggingface.co/datasets/tobiolatunji/afrispeech-200) 2. We present a templating framework to augment existing corpora with native African proper nouns and evaluate multiple SOTA pretrained models and leading commercial ASR systems on our benchmark dataset. We provide in-depth analysis of selected models to explain their failure modes and offer helpful insights. * We fine-tune the best-performing open-source models and achieve SOTA performance on the AfriSpeech benchmark dataset (108 African accents) as well as show promising zero-shot performance on very low-resource accents. We provide best models3 as publicly available pre-trained checkpoints. Footnote 2: AfriSpeech-200 is licensed under a CC BY-NC-SA 4.0 license Footnote 3: [https://huggingface.co/Seyfelislem/afrispeech_large_A100](https://huggingface.co/Seyfelislem/afrispeech_large_A100) ## 2 Related Work With the advent of large multilingual speech datasets (Panayotov et al., 2015; Javed et al., 2021; Chen et al., 2021; Ardila et al., 2020; Valk and Alumae, 2021), various research groups have proposed large self-supervised speech models such as wav2vec (Schneider et al., 2019), vq-wav2vec (Baevski et al., 2020), wav2vec 2.0 (Baevski et al., 2020), HuBERT (Hsu et al., 2021), XLSR (Conneau et al., 2021), and XLS-R (Babu et al., 2022). 
These models achieved state-of-the-art performance on many downstream tasks such as automatic speech recognition (ASR), automatic speech translation (AST), and language identification. However, most existing systems still perform poorly on accented speech (Javed et al., 2022). Koenecke et al. (2020) further showed that popular commercial ASR systems - like Amazon, Apple, Google, IBM, and Microsoft - exhibit substantial racial disparities in their speech recognition capabilities. Most ASR systems work best for native English speakers and their accuracy plummets dramatically with non-native English speakers (Hassan et al., 2022; Prasad and Jyothi, 2020). To enhance the performance of accented speech recognition, various methods have been proposed, which can be categorized into modeling and dataset approaches. On the modeling front, there have been efforts such as dialect-aware ASR models (Yadavalli et al., 2022), domain adversarial training (DAT) (Sun et al., 2018), combining DAT with transfer learning (Chen et al., 2020), using voice conversion (VC) (Zhang et al., 2022), combining VC with speed perturbation (Zhang et al., 2022), and accent pre-training (Acc-PT) (Das et al., 2021). These efforts, however, produced marginal improvements and still exhibit poor generalization capabilities. Datasets have played a major role in improving ASR performance. The current SOTA in ASR (Radford et al., 2022) demonstrated the superior utility of large supervised datasets. Therefore, to bridge the ASR performance gap for African accented speech, multiple dataset creation efforts (Doumbouya et al., 2021; Siminyu et al., 2021; Babirye et al., 2022; Ogayo et al., 2022; Gutkin et al., 2020; Dossou and Emezue, 2021; Afonja et al., 2021; Kamper and Niesler, 2011; Ibejih et al., 2022) have been established. However, many of these datasets are limited in size and diversity. For example, Common Voice (Ardila et al., 2020) contains less than 10 hours of African English speech, Li et al. (2021) evaluates on 50 hrs of African accented English (not released), Sanabria et al. (2023) provides 40 hrs of accented English, less than 20% is African. Kamper and Niesler (2011); De Wet et al. (2007) are limited to a few South African accents, and Ibejih et al. (2022) contains less than 8 hours, while Afonja et al. (2021) contains less than 2 hours of accented African English speech. Furthermore, there are no available benchmarks for clinical ASR for African languages, creating a need for evaluation datasets that help identify areas of improvement in this domain. While previous works have primarily focused on adapting Western accents to African accents, to the best of our knowledge, there has been limited research specifically addressing domain adaptation from a general domain to the clinical domain in the African context. In this regard, our work is the first attempt to bridge this gap and tackle the unique challenges associated with adapting accented African English ASR systems to the clinical domain. ## 3 AfriSpeech Dataset We introduce AfriSpeech, a Pan-African accented English speech dataset for clinical and general domain ASR crowd-sourced from 2,463 African speakers, 200.70 hrs with an average audio duration of 10.7 seconds. Speaker, gender, age group, and clip domain distributions are shown in Table 2. In the following subsections, we describe the dataset creation process. ### Focus Languages We conducted an investigation on 120 African accents across 13 countries including the United States and Turkey. 
These accents originate from languages that belong to five language families, as documented by Eberhard Eberhard et al. (2019): Afro-Asiatic, Indo-European, Khoe-Kwadi Hainum (), Niger-Congo, and Nilo-Saharan. This selection represents the diverse linguistic landscape across western, eastern, and southern Africa. In Table 1, we provide an overview of the number of clips, speakers, and hours of data per country, with Nigerian accents comprising 67% of the dataset. Since some languages are spoken across several countries (e.g., Swahili, isiZulu, Hausa, and Lunganda), accents are not unique to countries. ### Obtaining AfriSpeech Transcripts Neural network models learn concepts from training data. Where the training data is predominantly Western (e.g. Common Voice Ardila et al. (2019)), the resulting ASR systems fail to capture important pan-African contexts. For example, ASR systems fail weofully at transcribing African names like "Ogochukwu" (Igbo), "Malaika" (Swahili), or "Uwimana" (Rwandan), while excellently transcribing Western names like "Lauren" and "Bryan"-representative of the bias in their training corpora. To solve the problem of scarce African-centric text in the general and clinical domains, we created AfriSpeech using the following strategies. #### 3.2.1 Finding Available Transcripts Our first task was to supplement existing large multi-domain corpora with African-centric text. \begin{table} \begin{tabular}{l|r|r|r} \hline **Country** & **Clips** & **Speakers** & **Hours** \\ \hline Nigeria & 45875 & 1979 & 142.40 \\ Kenya & 8304 & 137 & 20.89 \\ South Africa & 7870 & 223 & 22.69 \\ Ghana & 2018 & 37 & 5.16 \\ Botswana & 1391 & 38 & 3.96 \\ Uganda & 1092 & 26 & 2.89 \\ Rwanda & 469 & 9 & 1.47 \\ United States4 & 219 & 5 & 0.53 \\ Turkey5 & 66 & 1 & 0.18 \\ Zimbabwe & 63 & 3 & 0.18 \\ Malawi & 60 & 1 & 0.15 \\ Tanzania & 51 & 2 & 0.18 \\ Lesotho & 7 & 1 & 0.02 \\ \hline \end{tabular} \end{table} Table 1: Contributions by Country showing speakers, number of clips, and speech duration in seconds and hours. Our first target was **Wikitext-103**(Merity et al., 2016), a collection of over 100 million tokens extracted from the set of verified "good" and "featured" articles on Wikipedia curated by Salesforce. We split this corpus on sentence boundaries and randomly sampled sentences for our transcript corpus. Our next strategy was **web scraping**. We crawled and scraped major African news websites across multiple African countries on topics like politics, entertainment, sports, religion, education, etc. In contrast to Wiki-text, the resulting corpus contained several African names, cities, and highly relevant vocabulary applicable to real-world use cases for downstream ASR. By scraping health-focused websites and health sections of news websites, we were able to get content from the clinical domain, albeit very little. To increase clinical content representation, we focused on two multi-specialty biomedical datasets: **PubMed**(Wheeler et al., 2007) and **NCBI disease**(Dogan et al., 2014). We split these corpora on sentence boundaries and randomly sampled sentences for our transcript corpus. #### 3.2.2 Finding African Entities We sourced for African-centric entities in two places: first, we leveraged an existing database of over 90,000 African names from the transatlantic slave trade between 1808 and 1863 (Anderson et al., 2013), which increased our coverage of African names, phonemes, and morphemes. We then used Okagbue et al. 
(2017)'s dataset of 965 Igbo names collected to reflect the dialectal classification of Igbo people and supplemented it with 1,000 more Nigerian names from other cultures such as Yoruba, Hausa, Fulani, Tiv, Efik, Ibibio, etc. These names were obtained from freely available textbooks, online baby name websites, oral interviews, published articles, and online forums like Instagram and Twitter. Finally, we obtained a list of African cities from Wikipedia (Wikipedia contributors, 2023c). #### 3.2.3 AfriSpeech Templates The web scraping corpus was highly relevant but small. In the larger biomedical and Wikitext datasets, African content was sparse. We, therefore, sought to increase the utility of the curated corpora by creating "Africanized" versions. Several studies have demonstrated the utility of "templates" as an effective way to create richer, more expressive training datasets, especially for Question-Answering and prompt engineering (Pawar and Shrawankar, 2016; Brown et al., 2020; Yao et al., 2022) and named entity recognition (Davody et al., 2022). Inspired by this approach, we augment our dataset by sampling sentences from the corpora described above in addition to template sentences contributed by professional clinicians, hand-crafting a total of 140 template sentences. For each template sentence, we masked proper nouns (first names, last names, organizations, and cities), replacing them with their corresponding NER tags [PER, ORG, LOC]. We then randomly replaced the masked tokens with African-centric entities: African names and cities, derived from Section 3.2.2 above, as well as common tropical diseases and medications. Each template sentence was reused 200 times. A random subset was sampled, sent as prompts for recording, and included with this release. Templated sentences represent approximately 30% of this corpus. ### Audio Recording Collection: Inspired by Common-Voice (Ardila et al., 2019) and SaudiDB (Afonja et al., 2021), we developed and deployed a web-based application in Python/Flask (Figure 1) to collect crowd-sourced speech samples. The application also facilitates tracking of completion status, user demographics, reviews, and quality control. The app presents randomly selected sentences (prompts) to the speakers and prompts them to record their voices while reading the text. The speech recordings are persisted as mono-channel, 16-bit wav files with a 48 kHz sampling rate. Post-processing tasks were performed on the audio recordings to remove samples shorter than 2 seconds and longer than 17 seconds. Figure 1: Intron Online Recording platform. Raw unedited samples are provided as part of this release. Speakers in this dataset have been de-identified. Demographic information available includes gender, age group, accent, and country. Annotation Instructions: Recorder demographics are presented in Table 2. Instructions were provided to crowd-sourced recorders as detailed in Appendix A.2. Notably, the recorders were instructed to read punctuation marks in full and encouraged to use their natural accent. ### Quality Control Projects: Transcripts were bucketed into projects to separate clinical from general domain prompts. This approach maximized the time value of clinician contributors, focusing their efforts more on medical prompts. Reviewers: We hired a team of human reviewers who up-voted or down-voted clips to indicate quality. Text feedback was also provided to recorders in 30% of cases where negative feedback was indicated.
The text feedback contained the reason for the down-vote and was intended to help recorders improve future recording quality. Guest Clip Review:New recorders were admitted as guests and allowed to record a maximum of 200 clips before quality review. 10 to 30 clips were reviewed per guest and those who passed review were promoted to a "Paid" status. Paid Clip Review:In the paid category, users were allowed a maximum of 200 clips before a temporary pause for quality check. During the temporary suspension, reviewers randomly reviewed 10% of the speech samples provided and positive, negative, or text feedback was provided. Access was restored if quality remained satisfactory, or users were blacklisted if over 30% of clips reviewed were down-voted. Delisting Problematic Sentences:Where an audio clip receives a down-vote, the corresponding sentence is released for re-recording by a different user. If a clip recorded for the same sentence receives a second down-vote, the transcript itself is blacklisted. ## 4 Experiments ### Data AfriSpeech-200 is a manually reviewed and curated subset, representing 7% of the total AfriSpeech dataset, intended as an initial public release to stimulate research into African clinical and general domain ASR for accents with little or no representation in speech research. Table 1 shows the distribution of clips, unique speakers, and hours by country. As shown in Table 3, the train, test, and development sets are bucketed such that any given speaker may appear in only one. This ensures that contributors seen at train time are not seen at test time, which would skew the results. ### Benchmarks We compare SOTA open-source pre-trained ASR models: Whisper (Radford et al., 2022), Wav2vec2 (Baevski et al., 2020), XLSR (Babu et al., 2022), Hubert (Hsu et al., 2021), WavLM (Chen et al., 2022), Conformer (Gulati et al., 2020), and CRDNN-RNNLM (Ravanelli et al., 2021) with commercial clinical and non-clinical ASR systems. We refer readers to read the respective papers for details on pretraining corpora, model architecture, and hyperparameters. For each model, we compare performance (WER) on Librspeech test-clean par \begin{table} \begin{tabular}{|l|l|} \hline **Speaker Gender Ratios - \# Clip \%** \\ \hline Female & 57.11\% \\ Male & 42.41\% \\ Other/Unknown & 0.48\% \\ \hline **Speaker Age Groups - \# Clips** \\ \hline \textless{}18yrs & 1,264 (1.87\%) \\ 19-25 & 36,728 (54.35\%) \\ 26-40 & 18,366 (27.18\%) \\ 41-55 & 10,374 (15.35\%) \\ \textgreater{}56yrs & 563 (0.83\%) \\ Unknown & 282 (0.42\%) \\ \hline **Clip Domain - \# Clips** \\ \hline Clinical & 41,765 (61.80\%) \\ General & 25,812 (38.20\%) \\ \hline \end{tabular} \end{table} Table 2: Dataset statistics. \begin{table} \begin{tabular}{l|c|c|c} \hline **Item** & **Train** & **Dev** & **Test** \\ \hline \# Speakers & 1466 & 247 & 750 \\ \# Hours & 173.4 & 8.74 & 18.77 \\ \# Accents & 71 & 45 & 108 \\ Avg secs/speaker & 425.80 & 127.32 & 90.08 \\ clips/speaker & 39.56 & 13.08 & 8.46 \\ speakers/accent & 20.65 & 5.49 & 6.94 \\ secs/accent & 8791.96 & 698.82 & 625.55 \\ \# general domain & 21682 & 1407 & 2723 \\ \# clinical domain & 36318 & 1824 & 3623 \\ \hline \end{tabular} \end{table} Table 3: Dataset splits showing speakers, number of clips, and speech duration in Train/Dev/Test splits. tition (Panayotov et al., 2015) with WER on the AfriSpeech dev and test sets. Single-run results are provided. 
### Fine-tuning Based on the benchmark results in Table 4 and GPU memory constraints, 2 top performing open-source model architectures were selected for fine-tuning. Although commercial ASR systems outperformed many open-source models, they are excluded from fine-tuning experiments because their model architectures and underlying pre/post-processing logic are unknown. **Selected Model Architectures** 1. wav2vec-large-xlsr-53 (Grosman, 2021): an Encoder-decoder architecture with CNN-based feature extractor, code book, and transformer-based encoder, 378.9M parameters; LR 1e-4. 2. whisper-medium (Radford et al., 2022): a Decoder-only multi-task architecture, 789.9m parameters; LR 2.5e-4. For each model, we fine-tuned with FP16, AdamW (Loshchilov and Hutter, 2017), batch size of 16, for 10 epochs, with a linear learning rate decay to zero after a warmup over the first 10% of iterations. We fine-tune and evaluate on 3 domains: (1) **general** (25,812 clips), (2) **clinical** (41,765 clips), and (3) **both** (67,577 clips). We train on each domain and test across all 3 domains to investigate the effect of out-of-domain data on model performance. XLSR models were trained on a single Tesla T4 GPU with 16GB GPU memory while Whisper and Conformer models were trained on RTX8000 GPU with 48GB GPU memory. Fine-tuning took 24-48 hrs for all domains. ### Model Vocabulary Most pre-trained models define a limited vocabulary of only Latin alphabets with no numbers or punctuations (Baevski et al., 2020). In stark contrast, numbers are critical in healthcare, e.g. blood pressure 130/80mmHg, or Lab results 0.428 mmol/L. Eliminating all numerical references in clinical text is dangerous and counterproductive. Post-processing to convert all numerical values to long form is imperfect so we retain numbers in their original form. For fine-tuning experiments, we define an alphanumeric vocabulary with semantically important punctuations, characters, and symbols commonly used in medical practice (colon, question mark, plus, etc). ### Evaluation We report our results as WER on AfriSpeech dev and test sets in addition to domain and accent-specific performance. Results are compared with Librispeech (Panayotov et al., 2015) test set performance. We also report the zero-shot performance of fine-tuned models on unseen accents in the test set. ## 5 Results and Discussion ### Africa-centric Fine-tuning Improves Robustness As shown in Table 4, compared with its pre-trained version, xlsr-53 fine-tuned on general domain speech (AfriSpeech-general) yields 53.4% relative improvement. Xlsr-53 fine-tuned on clinical domain speech (AfriSpeech-clinical) yields 52.6%, and xlsr-53 fine-tuned on the combined domains (AfriSpeech-all) yields 49.1% relative improvement. The trend is similar with pre-trained Whisper-medium, yielding 32.6% relative improvement on the general domain, 32.1% on the clinical domain, and 34.9% when finetuned on combined domains. ### Training Data Bias In the Open-Source section of Table 4, AfriSpeech dev and test set performance correlates with the number and diversity of pre-training datasets. For example, Wav2vec2 models trained exclusively on Librispeech significantly underperform when compared with those trained on multiple (Baevski et al., 2020) or multilingual corpora (Babu et al., 2022). Models trained on Multilingual or multi-task corpora (Radford et al., 2022; Gulati et al., 2020) learn more useful representations, are more linguistically diverse, are more robust, and generalize better to accented speech. 
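Referring back to the fine-tuning recipe in Section 4.3 (FP16, AdamW, batch size 16, 10 epochs, linear decay to zero after a warmup over the first 10% of steps), the setup maps roughly onto HuggingFace `TrainingArguments` as sketched below. This is an illustration rather than the authors' exact training code; the output path is a placeholder.

```python
from transformers import TrainingArguments
from jiwer import wer

training_args = TrainingArguments(
    output_dir="afrispeech-finetune",   # placeholder path
    per_device_train_batch_size=16,
    num_train_epochs=10,
    learning_rate=1e-4,                 # 1e-4 for xlsr-53, 2.5e-4 for whisper-medium
    lr_scheduler_type="linear",         # linear decay to zero ...
    warmup_ratio=0.1,                   # ... after warmup over the first 10% of steps
    fp16=True,
)

# The reported metric is word error rate (WER) between reference and hypothesis.
print(wer("the reference transcript", "the hypothesis transcript"))
```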
### Clinical ASR is Sensitive to Model Vocabulary As mentioned in Section 4.4, most ASR models tend to transcribe numbers in their extended forms, which have a detrimental effect on their WER as shown in Table 4, particularly in the clinical domain where numerical values need to be transcribed accurately (column 6 & 9). However, ASR models with a larger vocabulary, such as Whisper, Commercial ASR models, and our fine-tuned models, demonstrate superior performance by effectively transcribing numbers in clinical speech and converting them into correct numeric representations. ### _Punctuation Prediction is Critical for Clinically Useful ASR_ Medical documents typically follow preset sequence and formatting, for example, patient history, general examination, laboratory investigation, etc., separated by new lines, section titles, or semi-colons. Punctuation commands such as "Next line", "full stop" (.), "query" (?), "comma" (.), "colon" (:) are frequently used in healthcare dictations to add structure to documents. ASR systems without support for such commands force clinicians to review every line of the ASR transcript to add/revise punctuations and document structure, prolonging documentation time and patient wait time Sunkara et al. (2020). As a result, commercial clinical ASR systems supporting these commands are preferable and outperform general-purpose models. ### _Commercial ASR APIs are Not So Global_ The 3 large commercial ASR systems evaluated in this study have global presence. Millions of African Android users have access to Voice typing through the Google keyboard and Microsoft Word users have access to its ASR engine. Table 6 compares the performance of these ASR APIs \begin{table} \begin{tabular}{l|l|l|l|l|l|l|l|l|l} \hline \hline Model & Params & \begin{tabular}{l} Training/Five-tuning \\ Corpora \\ \end{tabular} & \begin{tabular}{l} 1-kcm \\ \end{tabular} & \multicolumn{3}{c|}{Dev (45 accoms)} & \multicolumn{3}{c}{Text (108 accoms)} \\ \cline{3-10} & & & General & Clinical & Both & General & Clinical & Both \\ \hline Open-Source SOTA Models & & & & & & & & \\ openai/whipee-medium & 1550QM & Multi,600k hrs & 0.167 & 0.253 & 0.287 & 0.261 & 0.240 & 0.375 & 0.306 \\ openai/whipee-medium & 769QM & Multi,600k hrs & 0.166 & 0.246 & 0.300 & 0.273 & 0.276 & 0.392 & 0.332 \\ openai/whipee-medium-en & 769QM & Multi,600k hrs & 0.169 & 0.267 & 0.315 & 0.291 & 0.304 & 0.444 & 0.358 \\ openai/whipee-small & 244M & Multi,600k hrs & 0.167 & 0.313 & 0.372 & 0.343 & 0.330 & 0.455 & 0.391 \\ openai/whipee-small & 244QM & Multi,600k hrs & 0.167 & 0.319 & 0.384 & 0.352 & 0.350 & 0.482 & 0.414 \\ nvidia/cn-out-sem-cai-large & 118M & Multi, 10 & 0.210 & 0.410 & 0.486 & 0.448 & - & - & - \\ nvidia/cn-out-sem-cai-large & 139M & Multi, 10 & 0.150 & 0.408 & 0.477 & 0.443 & - & - & - \\ 
\end{tabular} \end{table} Table 4: WER of open-source, commercial, and fine-tuned ASR models on LibriSpeech test-clean and the AfriSpeech dev and test sets (the remaining rows were garbled during extraction and are omitted here). on majority African accents and we show that despite their global presence, performance lags significantly on some of Africa's most populous accents like Swahili and Yoruba. ### Domain Adaptation Pre-trained whisper models performed better on general domain speech (AfriSpeech-general) when compared with the clinical domain, demonstrating the relative domain-driven difference in difficulty despite the robust training data for Whisper models (680k hours, 90 languages). Cross-domain fine-tuning yields significant gains helping to somewhat bridge this gap. Our results agree with prior work on domain adaptation Sun et al. (2017); Abdelwahab and Busso (2015) showing that models trained exclusively on clinical data improve when general domain data is added. Whisper shows 9% relative improvement on the clinical domain with the addition of general domain data. However, this trend is reversed with general domain data. Adding speech from the clinical domain leads to a 3% and 18.2% relative drop for Whisper and xlsr-53 respectively. Domain adaptation is no silver bullet. Care must be taken to apply this approach where benefits outweigh risks. ### Accent-level Performance Table 6 shows test set performance on the top 23 AfriSpeech accents grouped by their language families.
We report the results for open-source, commercial, and fine-tuned ASR models. Fine-tuned models (ours) average relative improvement is 26.7% over the open-source ASR models and 36.5% over the commercial ASR models. For several accents, we observe that the whisper model fine-tuned with our AfriSpeech dataset shows the best overall performance with an average relative improvement of 16.2% across all accents, except in 4 South African languages (Zulu, isiZulu6, Tswana, Afrikaans), Luo, and Kinyarwanda, where the fine-tuned model under-performs compared to the pretrained whisper model and commercial Azure model performs best on Luo accent. Although counter-intuitive, it is possible these accents are highly represented in Whisper pre-training data and require further investigation. Footnote 6: We note that both Zulu and isiZulu are the same but they are labeled differently in our dataset. We further discuss this in the limitations section. ### Zero-Shot Performance We further explore generalizability to unseen accents, i.e., out-of-distribution (OOD) accents. Table 5 shows the results for the top 20 OOD accents in the test set. We observe an impressive 44.4% relative performance improvement across all OOD accents with our fine-tuned Whisper model compared to the baselines and 49.8% average relative improvement over the commercial models (Azure, GCP, AWS). These results demonstrate significant generalizability gains are achievable with better training data diversity. \begin{table} \begin{tabular}{l|l|c|c|c|c|c|c|c|c} \hline \hline \multirow{2}{*}{Accent} & \multirow{2}{*}{Country} & \multirow{2}{*}{Tout Samples} & \multirow{2}{*}{Train Samples} & \multicolumn{2}{c|}{**Open Source**} & \multicolumn{2}{c|}{**Cammeral**} & \multicolumn{2}{c|}{**Ours, Pietuned**} \\ & & & & \multicolumn{1}{c|}{slix-53} & \multicolumn{1}{c|}{wisper} & Azure & GCP & AWS & XLSR & Whisper \\ \hline \hline Niger-Congo & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} \\ \hline Yoruba & [NO] & 575 & 14233 & 0.576 & 0.327 & 0.364 & 0.581 & 0.421 & 0.291 & **0.218** \\ Swahili & [KI, ZU, ZU, ZA] & 485 & 5484 & 0.448 & 0.192 & 0.307 & 0.486 & 0.305 & 0.244 & **0.181** \\ Iglbo & [NO] & 3190 & 8608 & 0.564 & 0.338 & 0.393 & 0.563 & 0.441 & 0.273 & **0.197** \\ Zulu & [TR, LS, ZA] & 156 & 1309 & 0.471 & **0.213** & 0.329 & 0.477 & 0.345 & 0.315 & 0.237 \\ Srivastava & [BW, ZA] & 96 & 1275 & 0.448 & **0.208** & 0.288 & 0.446 & 0.320 & 0.295 & 0.234 \\ Isiulu & [ZA] & 88 & 779 & 0.457 & **0.182** & 0.254 & 0.406 & 0.292 & 0.265 & 0.206 \\ Ijay & [NO] & 77 & 2371 & 0.608 & 0.364 & 0.372 & 0.671 & 0.446 & 0.321 & **0.238** \\ Lujay & [KE] & 69 & 426 & 0.538 & 0.310 & 0.548 & 0.489 & 0.427 & 0.296 & 0.245 \\ Tui & [GH] & 54 & 1321 & 0.504 & 0.184 & 0.382 & 0.510 & 0.361 & 0.236 & **0.177** \\ Idona & [NO] & 53 & 1767 & 0.607 & 0.384 & 0.424 & 0.629 & 0.543 & 0.294 & **0.243** \\ Loganda & [KE, UO, BW] & 44 & 529 & 0.525 & 0.320 & 0.362 & 0.526 & 0.378 & 0.381 & **0.277** \\ Twana & [BW, ZA] & 34 & 289 & 0.362 & **0.184** & 0.265 & 0.425 & 0.245 & 0.267 & 0.249 & 0.241 \\ Akan (tanto) & [GH] & 29 & 230 & 0.732 & 0.418 & 0.425 & 0.283 & 0.604 & 0.290 & **0.197** \\ Kkwya & [KE] & 24 & 163 & 0.406 & 0.160 & 0.275 & 0.387 & 0.330 & 0.221 & **0.126** \\ Xhoua & [ZA] & 17 & 342 & 0.498 & 0.265 & 0.322 & 0.332 & 0.389 & 0.518 & **0.237** \\ Sopedi & [ZA] & 17 & 
176 & 0.651 & 0.373 & 0.394 & 0.699 & 0.458 & 0.414 & **0.285** \\ Kiswahili & [KE] & 16 & 811 & 0.466 & **0.159** & 0.389 & 0.394 & 0.274 & 0.173 & 0.163 \\ Ultchue & [NO] & 15 & 578 & 0.551 & 0.378 & 0.423 & 0.678 & 0.423 & 0.345 & **0.210** \\ Nemple & [NO] & 14 & 546 & 0.571 & 0.352 & 0.449 & 0.556 & 0.449 & 0.372 & **0.296** \\ Kinyarwanda & [RW] & 14 & 439 & 0.495 & **0.216** & 0.338 & 0.527 & 0.437 & 0.369 & 0.311 \\ \hline Afkaans & [NG] & 168 & 5453 & 0.627 & 0.358 & 0.457 & 0.633 & 0.488 & 0.230 & **0.243** \\ \hline Infka-European & [ZA] & 49 & 1911 & 0.373 & **0.142** & 0.202 & 0.443 & 0.209 & 0.283 & 0.211 \\ \hline Nlo-Sahman & [UO, Kit] & 12 & 179 & 0.411 & 0.234 & **0.229** & 0.343 & 0.343 & 0.309 & 0.234 \\ \hline \hline \end{tabular} \end{table} Table 6: Test set performance per accent for open-source, commercial, and fine-tuned ASR models. ### Take SOTA LibriSpeech Results with a Grain of Salt Figure 2 contrasts LibriSpeech and AfriSpeech WER for several models. Many ASR leaderboards rank ASR models based on single-digit LibriSpeech Panayotov et al. (2015) WER. Pretrained ASR models, therefore, overfit to LibriSpeech at the expense of robust ASR performance for all people. As seen in Table 4, several models are 3-10x worse on African accented speech with the exception of multi-lingual or multi-task models like Whisper, Conformer, and XLSR. ## 6 Limitations and Future Work Limited clinical Subdomains:Although this dataset includes a variety of clinical text, several specialties are not represented. As a result, ASR performance may vary between clinical specialties. Read Speech:All audio samples in this release are read based on text prompts. Without appropriate augmentation, ASR Models trained on this dataset may underperform with conversational or spontaneous speech. North-African Accentsare not included in this work. Because of the distinct nature of those accents, performance on sub-Saharan accents may not necessarily generalize to the Northern African Region. Self-reported Accents:Similar to CommonVoice, recorders self-report their native tongue in free-text making it difficult to map to ISO-3 in all cases. Some users also reported their accents as "French", "English", "South African English", or a combination of accents. Although we attempted to clean and normalize the self-reported languages, this process was by no means perfect. As a result, accent names sometimes overlap e.g. Zulu and IsiZulu. Further cleanup could be done to consolidate these closely related accents. The dataset release will therefore include a normalized accent field for each sample. Medical Abbreviations are Inconsistent:Since crowd-sourced recorders had varying levels of familiarity with the prompts, abbreviations like "Breast CA" may be pronounced fully as "Breast Cancer" or "Breast see-A". Since abbreviations abound in medical text and WER is not robust to such idiosyncrasies, models with correct predictions, e.g. "Breast Cancer" are sometimes wrongly penalized where the transcript reads "Breast CA". Integrating ASR in Healthcare Settings is Challenging:Cloud-based ASR presents some well-known challenges in healthcare. Privacy is a major concern as there is a risk of unauthorized or malicious third-party access to confidential patient information. Furthermore, the perceived higher value of healthcare data among malefactors also heightens security risks for hospitals and ASR vendors. 
Additionally, unethical ASR vendors could misuse confidential data for model training and development without proper consent. ## 7 Ethical Considerations While clinical ASR models can improve productivity for clinicians, they can also increase documentation errors, especially through incorrect transcription of numbers, fractions, dates, and proper nouns, which have legal, safety, and prognostic implications in healthcare. We caution clinicians to use ASR with full discretion and review transcripts carefully before final submission into the medical record. We release AfriSpeech hoping that it will be beneficial to clinical and non-clinical use cases within and outside Africa and will improve ASR performance for accented speech; however, since its transcripts draw on publicly available datasets, it may contain biases. We do not have access to reviewers who are native speakers of most of the languages covered in AfriSpeech and who could provide a rigorous review of self-reported accents. This hinders our ability to investigate samples from all languages. We hope that future users of the dataset will further investigate AfriSpeech's utility and quality for their languages. ## Acknowledgments Tobi Olatunji acknowledges Intron Health for providing the dataset and compute resources. Chris Chinenye Emezue acknowledges the support of the Mila - Quebec AI Institute for compute resources.
2309.12075
Prompt Tuned Embedding Classification for Multi-Label Industry Sector Allocation
Prompt Tuning is emerging as a scalable and cost-effective method to fine-tune Pretrained Language Models (PLMs), which are often referred to as Large Language Models (LLMs). This study benchmarks the performance and computational efficiency of Prompt Tuning and baselines for multi-label text classification. This is applied to the challenging task of classifying companies into an investment firm's proprietary industry taxonomy, supporting their thematic investment strategy. Text-to-text classification is frequently reported to outperform task-specific classification heads, but has several limitations when applied to a multi-label classification problem where each label consists of multiple tokens: (a) Generated labels may not match any label in the label taxonomy; (b) The fine-tuning process lacks permutation invariance and is sensitive to the order of the provided labels; (c) The model provides binary decisions rather than appropriate confidence scores. Limitation (a) is addressed by applying constrained decoding using Trie Search, which slightly improves classification performance. All limitations (a), (b), and (c) are addressed by replacing the PLM's language head with a classification head, which is referred to as Prompt Tuned Embedding Classification (PTEC). This improves performance significantly, while also reducing computational costs during inference. In our industrial application, the training data is skewed towards well-known companies. We confirm that the model's performance is consistent across both well-known and less-known companies. Our overall results indicate the continuing need to adapt state-of-the-art methods to domain-specific tasks, even in the era of PLMs with strong generalization abilities. We release our codebase and a benchmarking dataset at https://github.com/EQTPartners/PTEC.
Valentin Leonhard Buchner, Lele Cao, Jan-Christoph Kalo, Vilhelm von Ehrenheim
2023-09-21T13:45:32Z
http://arxiv.org/abs/2309.12075v3
# Prompt Tuned Embedding Classification for Multi-Label Industry Sector Allocation ###### Abstract Prompt Tuning is emerging as a scalable and cost-effective method to fine-tune Pretrained Language Models (PLMs), which are often referred to as Large Language Models (LLMs). This study benchmarks the performance and computational efficiency of Prompt Tuning and baselines for multi-label text classification. This is applied to the challenging task of classifying companies into an investment firm's proprietary industry taxonomy, supporting their thematic investment strategy. Text-to-text classification is frequently reported to outperform task-specific classification heads, but has several limitations when applied to a multi-label classification problem where each label consists of multiple tokens: (a) Generated labels may not match any label in the label taxonomy; (b) The fine-tuning process lacks permutation invariance and is sensitive to the order of the provided labels; (c) The model provides binary decisions rather than appropriate confidence scores. Limitation (a) is addressed by applying constrained decoding using Trie Search, which slightly improves classification performance. All limitations (a), (b), and (c) are addressed by replacing the PLM's language head with a classification head, which is referred to as Prompt Tuned Embedding Classification (PTEC). This improves performance significantly, while also reducing computational costs during inference. In our industrial application, the training data is skewed towards well-known companies. We confirm that the model's performance is consistent across both well-known and less-known companies. Our overall results indicate the continuing need to adapt state-of-the-art methods to domain-specific tasks, even in the era of PLMs with strong generalization abilities. We release our codebase and a benchmarking dataset at [https://github.com/EQTPartners/PTEC](https://github.com/EQTPartners/PTEC). Large Language Models, Prompt Tuning, Natural Language Processing, Industry Classification, Thematic Investment ## I Introduction Thematic investment is a popular strategy within the investment industry, involving narrowing down on certain macrotrends, like "Circular Economy & Sustainable Materials". For this purpose, investment professionals constantly keep an eye on the market and stay alert to relevant emerging trends. Once they identified a future-proof and high-potential trend, they need to map out an overview of companies that operate within this trend. This involves analyzing a large amount of data which exists mostly in unstructured, natural language descriptions. For instance, we may have a company with the name "Vinted", the description "Operator of an online marketplace platform intended to provide second-hand fashion products", and the keywords "C2C Marketplace, Clothing And Apparel, Clothing Marketplace, [...], Secondhand Clothing, Used Clothes Exchange". Ideally, we would want to map this company to the industry sectors "Marketplaces" and "Circular Economy & Sustainable Materials". This mapping of companies to industry sectors helps in assigning the right team to monitor the companies belonging to a given sector. This team sources the most promising deals and potentially makes investments. Considering that an investment firm would not want to miss out on an investment opportunity among the \(>10M\) companies listed in its database, mapping companies to their industry sector becomes a very labor-intensive task. 
This complexity increases when taking into account that some companies may belong to multiple sectors, as shown in the example above. To ease the workload on investment professionals, machine learning can support this process by defining this as a multi-label text classification task: Given a natural language description of company \(X\), classify it into one or multiple industries from industry sector taxonomy \(T=\{Y_{1},Y_{2},\ldots,Y_{n}\}\). Investment professionals then only need to annotate a small set of companies that belong to the sector of their interest, and predictions can be made to populate the sector with additional companies from the investment firm's database. This model would create a competitive advantage during the deal-sourcing process, giving an investment firm the ability to track a wider range of potential investments. Further, these predictions can be used as the foundation for additional machine learning tasks. For instance, another model could rank the companies predicted to fall within a sector according to their chances of being an interesting investment target. While there exists a variety of machine learning solutions for multi-label text classification, the nature of the industrial application described above encompasses a few additional challenges: * **Scarce annotations:** Assuming that an investment firm wants to identify macro-trends that fit into its overall strategy it will rely on its own, proprietary industry taxonomy. Well-experienced investment professionals familiar with the proprietary taxonomy must perform data annotation, which means only a small fraction of the relevant data can be labeled. Considering that the framework should cover between \(50\) and \(300\) industries, this leaves only a few annotations per industry. * **Imbalanced annotations:** Investment professionals will primarily annotate investment opportunities belonging to the industries of their interest. As a result of this focus, annotations follow a long-tail distribution as encountered in many large-scale multi-label classification problems [1]. There may be a few hundred annotations for some industries, but almost no annotations for other industries. * **Large and heterogeneous inference dataset:** Given that an investment firm does not want to miss a relevant investment opportunity, it may monitor and predict industries for \(>10M\) organizations. This inference data may be out-of-distribution (OOD) compared to the annotated dataset in terms of language and informativeness of the descriptions. * **Dynamic industry taxonomy and training data:** The industry taxonomy, the company information, and their annotations may change frequently. For instance, a new industry sector may be introduced, company information updated, or a company might shift its core business from SaaP to SaaS 1. Therefore, continuous re-training and inference are necessary. Footnote 1: SaaP (Software as a Product) refers to traditional software products that are purchased with a one-time fee and installed on individual machines. SaaS (Software as a Service), on the other hand, is a cloud-based service where users access software applications over the internet, typically through a subscription model. Common approaches to text classification often require large amounts of annotated training data while struggling to generalize well to unseen data [2]. 
Pretrained Language Models (PLMs), also frequently referred to as Large Language Models (LLMs), approach these issues by leveraging self-supervised learning techniques on vast amounts of high-quality, unlabelled data. This pretraining enables PLMs to generalize well to unseen data and to serve as powerful base models which can be fine-tuned on smaller, annotated datasets [3]. Nevertheless, fine-tuning PLMs can result in the "catastrophic forgetting" of pretraining knowledge [4], and incurs high computational costs [5, 6]. Computational costs become more relevant especially as PLM performance on downstream tasks increases with model size [7], making high-performing, large state-of-the-art PLMs prohibitively expensive to fine-tune. These challenges can be addressed with Parameter-Efficient Fine-Tuning (PEFT, [8]) techniques such as Prompt Tuning [9, 10, 11]. Prompt Tuning reduces the number of fine-tuned parameters to a fraction of the PLM's parameters by selectively fine-tuning a _soft prompt_ prepended to the tokenized and embedded input text. This approach not only decreases the computational cost of fine-tuning but also avoids the "catastrophic forgetting" of the PLM's pretraining knowledge, as the PLM's parameters remain unchanged [9, 10, 11]. Furthermore, it allows for multi-tasking within a single batch by appending a distinct _soft prompt_ for each task [11]. Consequently, Prompt Tuning offers a promising solution to the issues of computational efficiency and knowledge preservation in the context of PLMs. This study estimates the scalability, efficiency, and performance of Prompt Tuning on a real-world industry classification problem. This is done in comparison with widely adopted baseline methods such as \(N\)-shot Prompting [12, 13], text embedding classification [14], embedding similarity search [15], and parameter-free classification with compressors [16]. However, Prompt Tuning as a text-to-text classification approach encounters additional limitations on multi-label tasks as discussed in Subsection II-D. In view of this, we enhance Prompt Tuning by (1) applying constrained decoding with Trie Search [17, 18] and (2) replacing the language head with a task-specific classification head. This results in the following contributions: * Extending the Trie Search decoding method [17] to prevent the model from predicting the same label multiple times in a row, similar to [19]. * Introducing Prompt Tuned Embedding Classification (PTEC), which optimizes the _soft prompt_ and the classification head with differential learning rates. * Comparing the performance and computational costs of Prompt Tuning, Prompt Tuning with Trie Search, PTEC, and baseline methods on two datasets: the proprietary _Industry-Sector_ classification task, and the public _Hate-Speech_ classification benchmark. * An empirical demonstration that evaluating the PLM on data the model has more pretraining knowledge about does not result in an overestimation of classification performance when deploying the PLM on data it has less pretraining knowledge about. * The codebase and the _Hate-Speech_ dataset (see IV-C) are made publicly available at [https://github.com/EQTPartners/PTEC](https://github.com/EQTPartners/PTEC). This paper first provides an outline of existing approaches to text classification and discusses their strengths and limitations. Further, we propose constrained Trie Search decoding and PTEC and motivate how these approaches may counteract the identified limitations.
Subsequently, we describe our experimental setup in detail and compare the efficiency and performance of the current and proposed methods. ## II Related Methods To evaluate our proposed methods, we implement simple machine learning methods commonly used for text classification as baselines. This section first introduces these baselines, and then elaborates on Prompt Tuning as a state-of-the-art parameter-efficient fine-tuning technique. ### _Parameter-Free Classification with Compressors_ A very simple and recently introduced approach to text classification makes use of compression algorithms such as gzip [16]. It builds on the idea that lossless compressors represent information as efficiently as possible by assigning shorter codes to the symbols that occur more frequently. Similar strings of text (for instance the descriptions of two similar companies) can be assumed to share more symbols. Consequently, the compressed length of the concatenation of two similar descriptions can be expected to be shorter than the compressed length of two distinct descriptions [16]. This logic can be used to calculate a low-computation compressor-based distance metric for nearest-neighbors classification methods [15, 20]. Hence, this method may be a computationally highly efficient implementation, but should not be expected to achieve competitive performance. ### _In-Context Learning_ In-Context Learning (ICL), also frequently referred to as \(N\)-shot prompting, involves including \(N\) examples of input-output pairs in the prompt preceding the input in question [12, 13]. This is an attractive approach to text classification problems because it does not require any fine-tuning or training step. \(0\)-shot prompting relies on the world knowledge the PLM acquired during pretraining [12]. It can therefore be expected to struggle on tasks that require new sets of knowledge, for example when classifying text into a new or proprietary taxonomy that it did not encounter during pretraining. Few-shot prompting with \(n>0\) can counteract this by providing training examples within each prompt. For instance, we may prompt the PLM with the input: "Anticimex, with description \(D_{a}\) and keywords \(K_{a}\) belongs to the 'Pest Control' industry. BBS Automation, with description \(D_{b}\) and keywords \(K_{b}\) belongs to the 'Industrial Automation or IIoT' industry. To which industry sector(s) belongs Vinted, with description \(D_{v}\) and keywords \(K_{v}\)?'2 Footnote 2: To be considerate of our reader’s time, this is a shortened version of the prompting method we used for our \(N\)-shot experiments. The complete prompting method can be found in our codebase. This method may improve task performance without requiring a training step, but has several limitations: 1. The limited number of input tokens, also referred to as the PLM's context window. This restricts the number of examples \(N\) that can be provided. This constraint is particularly problematic for lengthy text sequences [21]. 2. When classification needs to be performed over a large label space of \(>50\) multi-token labels, such as our industry taxonomy, it becomes infeasible to list a sample for each label. Providing a random subset of industries as samples often leads to samples that have few similarities with the company to be classified. When providing a list of possible classes, the PLM frequently fails to select from this list. 3. 
Performance is very sensitive to the formulation of the prompt, such as the wording or the order of the examples [22, 23]. Consequently, it usually does not perform as well as fine-tuning [12]. 4. It is not yet very well understood how ICL learns. As ICL also learns from incorrect example targets, it may rely on learning the data distribution rather than the input-target mapping [13]. 5. While ICL incurs no computational cost for fine-tuning, inference becomes computationally more expensive because the examples have to be provided every time an input is processed. When inference has to be run over a large dataset, as is the case in our real-world industry classification use case, this additional compute cost can result in ICL taking up more computational resources than PEFT methods [24]. Consequently, ICL methods are simple to implement, but may struggle on problems consisting of long input texts and a large label space, as is the case in our _IndustrySector_ classification task. Simultaneously, they may be computationally more expensive during deployment when working with a large inference dataset. ### _Embedding Proximity_ Another approach to text classification that does not require PLM fine-tuning is to employ text embeddings from a PLM, achieved by either averaging over the token embeddings or extracting the CLS token [14]. The generated text embeddings can then be used as input features for a separate classification model. In terms of parameter utilization, the most efficient approaches are K-Nearest Neighbors (KNN) or Radius Nearest Neighbors (RadiusNN). These methods classify a text embedding based on the labels of the nearest training examples in the embedding space, eliminating the need for additional parameter training [15, 20]. Alternatively, text embeddings could be used as input to a neural classification layer, which can be trained to perform the respective classification task [25]. These methods may achieve competitive performance while relying on well-established methodology. ### _Prompt Tuning_ To achieve similar performance to fine-tuning while reducing computational costs, various PEFT methods have been proposed, such as Pattern-Exploiting Training [21], Prefix-Tuning [26], Low-Rank Adaptation (LoRA, [27]), Soft Prompting [28], Word-level Adversarial Reprogramming [29], and Prompt Tuning [9, 11, 30]. All these methods follow very similar approaches and significantly reduce the number of trainable parameters as compared to fine-tuning the complete set of parameters of a PLM. Prompt Tuning involves training the smallest number of parameters (\(<0.1\%\)), while still being reported to outperform fine-tuning [31]. Prompt Tuning prepends the token embeddings of the input text with a _soft prompt_ that consists of a sequence of virtual token embeddings, as displayed in Fig. 1. During fine-tuning, the PLM's parameters remain unchanged and only the _soft prompt_ is optimized with methods such as gradient descent. This has several advantages: 1. Lower need for computational resources, as the number of updated parameters and tracked gradients is reduced by multiple orders of magnitude. 2. As Prompt Tuning only manipulates the input fed into the PLM, it provides the unique possibility to combine different tasks or individualized models in one batch, as opposed to loading multiple PLMs into memory and running one batch per task [11, 26]. This can facilitate the deployment of many custom machine learning models simultaneously. 3.
Keeping the PLM's parameters unchanged also circumvents the risk of "catastrophic forgetting" of the knowledge acquired by the PLM during pretraining [9]. For instance, when fine-tuning all parameters of a multilingual PLM on an English-language dataset, the PLM may forget how to understand all languages other than English. If, however, we only fine-tune the _soft prompt_ prepended to the input text, the PLM's performance on our fine-tuning dataset will improve, while it retains the ability to understand the languages it learned during pretraining. This preserves the PLM's ability to generalize to data not present in the fine-tuning dataset [9, 26, 30]. Prompt Tuning is therefore a novel approach to text classification that promises competitive performance with PLM fine-tuning while reducing computational costs and maintaining generalizability. ### _Text-to-Text Classification for Multi-Label Tasks_ Prompt Tuning is often used in conjunction with text-to-text classification, a method that has historically outperformed other text classification methods on public benchmarks [3]. This performance improvement follows the intuition that text generation based on input text is more similar to the PLM's pretraining objectives, which are often next word or masked sequence prediction [3]. Instead of using a classification method to explicitly predict classes, text-to-text classification relies on the PLM's language head to predict a probability distribution over the token vocabulary. For instance, when addressing a sentiment classification task, the predicted probability for the token "good" can be used as a proxy for predicting positive sentiment, while the predicted probability for the token "bad" can be used as a proxy for negative sentiment [3]. Text-to-text classification can be implemented for multi-label problems by generating labels sequentially [17, 32]. The PLM's language head then generates the predicted labels separated by a separator token (SEP) and stops generating labels once it reaches the end-of-sequence (EOS) token. In the case of our industry classification task, the PLM would receive the input text "Vinted, with description \(D_{v}\) and keywords \(K_{v}\), belongs to the sector(s) ", and the PLM would be expected to continue the sentence with the tokens ("Market", "places", "[SEP]", "Circular", "Economy", "&", "Sustainable", "Materials", "[EOS]"). This creates the additional challenge that the labels have to be provided in a certain order while fine-tuning the PLM or the _soft prompt_. This order is needed because the predicted token probabilities are compared to the target tokens to calculate the loss and update the weights during gradient descent. The target tokens need to be defined by arranging the class labels in an arbitrary order, separated by the SEP token. It would theoretically be possible to compare the predicted token probabilities to all possible combinations of target tokens that would include the correct class labels in different orders. However, optimizing for the average loss over all these combinations may not result in cohesive class label generation. Optimizing for the minimum loss over all combinations may result in a different combination being used at each epoch, which may result in incohesive learning. This results in the following limitations of performing multi-label text classification with multi-token class titles: (a)
Especially if the labels do not obviously follow from the input, for instance, if they belong to a taxonomy unknown to the model, the model might produce incorrect labels with similar meanings. When applying our proprietary taxonomy, this could result in the model predicting "Specialized Software" instead of "Vertical Software". (b) The labels have to be provided in a certain order. While ordering the labels from most frequent to least frequent results in the best performance [1, 32], this order is arbitrary. Notably, this means that the model would be discouraged from providing the correct labels in a different order than predefined. (c) The model returns a binary decision for each class. This does not allow the sensitivity of the predictions to be controlled by adjusting the prediction threshold [33]. While the probability of a generated label could be extracted from its token probabilities, this would not serve well as a confidence score - for instance, the confidence for the second generated label represents \(P(Y_{2}|X,Y_{1})\), while \(P(Y_{2}|X)\) would be relevant to extract [33]. ## III Proposed Methods This section will first describe details of how Prompt Tuning with unconstrained text-to-text classification is implemented in this study. Then, we will describe two alternative architectures developed to counteract the limitations listed in the previous section: constrained decoding with Trie Search, and PTEC. Fig. 1: A schematic overview of Prompt Tuning, where \(SP_{\theta}\) refers to the trainable _soft prompt_ matrix, \(X_{\text{input}}\) to the tokenized and embedded input text, and \(PLM_{\phi}\) to the PLM with the frozen parameters \(\phi\). \(\Delta\) refers to the gradient used to update \(SP_{\theta}\). ### _Prompt Tuning with Unconstrained Text-to-Text Classification_ Our set-up follows the architecture described in Sections II-D and II-E. Since the labels need to be provided in a predefined order during training, we sort the labels for each sample in descending order of their frequency in the training data. This sorting has been confirmed to provide the best performance [1, 32]. Previous research has investigated the most effective methods to initialize the _soft prompt's_ weights [11, 26]. While it is intuitive to initialize the weights randomly, performance can be improved by initializing the virtual tokens of the _soft prompt_ with the embeddings of tokens drawn from the PLM's vocabulary. An additional improvement in performance can be gained by using the embeddings of tokens relevant to the classification task, such as the possible class labels [11, 26]. During Prompt Tuning, the _soft prompt_ token weights will be updated and do not remain equivalent to the tokens used for initialization. Nevertheless, it appears to be beneficial for task performance if the _soft prompt_ tokens are close to the vocabulary's embedding representations. _Soft prompt_ tokens close to the class labels possibly prime the PLM to restrict its output to the valid class labels [11]. Given that our industrial use case may contain \(50\) to \(300\) labels, and each industry would be described by \(2\) to \(10\) tokens, this would require a _soft prompt_ of \(100\) to \(3000\) tokens. However, a long _soft prompt_ would take up valuable input tokens of the context window and additionally result in increased training and inference costs. This limitation becomes more apparent when we consider that the complexity of the PLM's self-attention increases quadratically with input length.
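A minimal sketch of this label-aware initialization, assuming a Hugging Face-style causal PLM (the model name, label list, and prompt length below are illustrative placeholders rather than the exact configuration used in this study):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "bigscience/bloom-1b7"  # placeholder; any causal PLM with an input-embedding matrix works
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

labels = ["Marketplaces", "Healthcare IT", "Circular Economy & Sustainable Materials"]
p = 20  # soft prompt length in virtual tokens

# Pool of token ids that occur in the class labels.
label_token_ids = [t for lab in labels
                   for t in tokenizer(lab, add_special_tokens=False)["input_ids"]]

# Sample p label tokens and copy their input embeddings as the trainable soft prompt.
sampled = torch.tensor(label_token_ids)[torch.randint(len(label_token_ids), (p,))]
embedding_matrix = model.get_input_embeddings().weight          # (vocab_size, d)
soft_prompt = torch.nn.Parameter(embedding_matrix[sampled].detach().clone())  # (p, d)

# Only the soft prompt receives gradients; the PLM itself remains frozen.
for param in model.parameters():
    param.requires_grad_(False)
```

During each forward pass, `soft_prompt` is concatenated in front of the embedded input tokens before they are fed to the frozen PLM.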
Because of this quadratic cost, we randomly sample \(p\) tokens from all available label tokens to include them in the _soft prompt_, where \(p\) is the _soft prompt_'s token length. Lastly, careful attention has to be paid to the loss calculation when performing mini-batch gradient descent. As PyTorch's [34] cross-entropy loss function by default averages the loss over all label tokens in a batch, industries with names consisting of more tokens ("Circular Economy & Sustainable Materials") would have a larger influence on the batch loss than industries with shorter names ("Marketplaces"). This may result in the model learning industries with longer names better than industries with shorter names. Consequently, the cross-entropy loss calculation is adjusted as follows: for each example in a batch, the loss is averaged over its label tokens, and then all individual losses are averaged over the batch. This is denoted in (1), where \(L\) is the aggregated loss of the batch, \(N\) is the number of examples in the batch, \(y_{i}\) denotes the label tokens of the i-th example in the batch, \(|y_{i}|\) is the length of the label of the i-th example measured in its number of tokens, \(y_{ij}\) is the target value of the j-th token of the i-th label, and \(p_{ij}\) is the predicted probability of the j-th token of the i-th label. \[L=-\frac{1}{N}\sum_{i=1}^{N}\left(\frac{1}{|y_{i}|}\sum_{j=1}^{|y_{i}|}y_{ij}\log(p_{ij})\right) \tag{1}\] ### _Prompt Tuning + Trie Search_ To address limitation (a) from Section II-E, constrained decoding methods can be applied to generate only valid labels [17, 18]. One such constrained decoding method is Trie Search, which relies on the construction of a label trie. A minimal example of a possible label trie is displayed on the right-hand side of Fig. 2. It organizes all possible target labels in a trie structure, and can be traversed from its root node (BOS) to its leaf nodes (EOS or SEP) to retrieve a valid label. This trie is used to guide the PLM during label generation. Instead of selecting the most likely token from the complete vocabulary, the PLM will only be allowed to generate tokens from the options provided by the label trie. For the first generated token, the PLM can choose between the initial tokens of all labels, and at each subsequent token generation step, it will only be allowed to generate valid follow-up tokens from the label space. Trie Search is applied to multi-label classification by generating labels sequentially, separated by the SEP token. After reaching a leaf node of the label trie, the PLM can choose between generating the SEP or the EOS token. Whenever the EOS token is generated, no further labels will be predicted. When the SEP token is generated, Trie Search restarts from the trie's root [17]. However, this may still result in the same label being generated multiple times in a row, especially considering that PLMs have been reported to repeat token sequences [35]. We extend the existing multi-label Trie Search solution [17] to mitigate repetitive label generation: a unique set of predicted labels is enforced by deleting a label from the label trie after it was generated, similar to a method proposed by [19]. Notably, Trie Search only addresses limitation (a), but presents its own constraints. The application of Trie Search is solely during the inference stage; it cannot be applied during training.
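A simplified sketch of such a constrained decoder is shown below; for brevity, the trie is represented as a set of label token sequences scanned by prefix (an explicit trie imposes the same constraint more efficiently), and, mirroring the extension described above, a label is removed once it has been generated. The `allowed_next` output can be used to mask the language head's logits at each step, for example through a function in the style of Hugging Face's `prefix_allowed_tokens_fn`.

```python
class LabelTrie:
    """Constrains generation to valid, not-yet-generated labels (simplified sketch)."""

    def __init__(self, label_token_ids, sep_id, eos_id):
        self.sep_id, self.eos_id = sep_id, eos_id
        self.remaining = {tuple(seq) for seq in label_token_ids}

    def allowed_next(self, tokens_since_last_sep):
        """Token ids that may follow the tokens generated since the last SEP."""
        prefix = tuple(tokens_since_last_sep)
        options, completed = set(), False
        for label in self.remaining:
            if label[:len(prefix)] == prefix:
                if len(label) == len(prefix):
                    completed = True                  # a full remaining label was produced
                else:
                    options.add(label[len(prefix)])   # valid follow-up token
        if completed:
            options.update({self.sep_id, self.eos_id})
        return sorted(options) if options else [self.eos_id]

    def consume(self, finished_label_tokens):
        """Delete a generated label so it cannot be produced a second time."""
        self.remaining.discard(tuple(finished_label_tokens))
```

Returning to the loss in (1), a compact PyTorch sketch of the per-example averaging is the following, assuming the target label tokens are right-padded with a padding id that should be excluded from the average:

```python
import torch.nn.functional as F

def per_example_label_loss(logits, target_ids, pad_id):
    """Cross-entropy averaged over each example's label tokens, then over the batch, cf. (1)."""
    # logits: (batch, seq, vocab); target_ids: (batch, seq), padded with pad_id
    token_loss = F.cross_entropy(logits.transpose(1, 2), target_ids,
                                 ignore_index=pad_id, reduction="none")  # (batch, seq)
    mask = (target_ids != pad_id).float()
    per_example = (token_loss * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1.0)
    return per_example.mean()
```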
The reason Trie Search cannot be used at training time is that the training loss in (1) is calculated from the model's predicted probabilities for the given target tokens. Moreover, limitations (b) and (c) remain unaddressed by this method. During training, labels still need to be provided in an arbitrary order (b), and Trie Search also does not facilitate the calculation of appropriate confidence scores (c). ### _Prompt Tuned Embedding Classification (PTEC)_ PTEC addresses all limitations (a), (b), and (c) mentioned in Section II-E simultaneously by combining Prompt Tuning with Embedding Classification rather than text-to-text classification. This is done by using a single linear layer with a sigmoid activation function to process the text embeddings generated by the Prompt Tuned PLM. The output of this sigmoid activation function can then be thresholded to assign predictions. This can be written as: \[p=\begin{cases}1&\text{if }\sigma(\mathbf{We}+\mathbf{b})\geq\tau\\ 0&\text{otherwise}\end{cases} \tag{2}\] where \[\mathbf{e}=\text{PLM}_{\phi}(\text{SP}_{\theta}\oplus\mathbf{X}_{\text{input}}) \tag{3}\] Here, \(\mathbf{e}\) is the text embedding returned by the PLM parameterized with \(\phi\) and Prompt Tuned with _soft prompt_ SP consisting of the parameters \(\theta\). \(\mathbf{X}_{\text{input}}\) is the tokenized and embedded input text, and \(\tau\) refers to the applied threshold. This threshold can be selected based on the desired levels of precision or recall. Weight matrix \(\mathbf{W}\in\mathbb{R}^{l\times d}\) and bias vector \(\mathbf{b}\in\mathbb{R}^{l}\) belong to the single classification layer, where \(d\) is the dimensionality of the PLM's embedding vector and \(l\) is the number of labels. When using a PLM with an embedding vector dimensionality of \(4096\), such as LLaMa 7B, and with \(50\) to \(300\) labels in the label space, this will result in a classification head containing only \(204,850\) to \(1,229,100\) additional parameters, which remains multiple orders of magnitude below a multi-billion-parameter PLM. As the task-specific classification layer is randomly initialized, it will be optimized simultaneously with the _soft prompt_, while the rest of the PLM's parameters remain frozen. Comparable methods have been implemented for Named Entity Recognition [30] and for multi-class text classification [29]. Similar to [11], we empirically noted that a _soft prompt_ usually requires a relatively high learning rate, while a classification head performs better with a lower learning rate. Accordingly, PTEC was implemented using differential learning rates for the _soft prompt_ and the classification head. Training this classification layer in addition to the _soft prompt_ has the benefits of: (a) always predicting valid industries (even when labels appear counter-intuitive); (b) not relying on an arbitrary label order; and (c) retrieving appropriate confidence scores which can be used to adjust precision and recall to user needs. In addition to addressing these limitations of text-to-text multi-label classification, PTEC has the advantage of a faster inference time, as it only requires one forward pass per prediction, compared to one forward pass per predicted token.
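A minimal PyTorch sketch of the PTEC head in (2) and (3) is given below; the soft prompt, the pooled embedding, and the learning rates are illustrative stand-ins rather than the exact values and pooling strategy used in this work.

```python
import torch
import torch.nn as nn

class PTECHead(nn.Module):
    """Single sigmoid classification layer over the prompt-tuned PLM embedding, cf. (2)."""

    def __init__(self, embed_dim, num_labels):
        super().__init__()
        self.linear = nn.Linear(embed_dim, num_labels)

    def forward(self, pooled_embedding):                      # (batch, embed_dim)
        return torch.sigmoid(self.linear(pooled_embedding))   # (batch, num_labels)

head = PTECHead(embed_dim=4096, num_labels=76)
soft_prompt = nn.Parameter(torch.randn(20, 4096))  # see the initialization sketch above

# Differential learning rates: high for the soft prompt, low for the head; the PLM stays frozen.
optimizer = torch.optim.AdamW([
    {"params": [soft_prompt], "lr": 1e-2},         # illustrative, not the tuned values
    {"params": head.parameters(), "lr": 1e-4},
])

# Thresholded multi-label prediction; training would minimize binary cross-entropy
# between the scores and the multi-hot industry targets.
pooled_embedding = torch.randn(8, 4096)            # stand-in for the PLM's pooled hidden states
scores = head(pooled_embedding)                    # confidence scores in [0, 1]
predictions = (scores >= 0.5).int()                # tau = 0.5; shift tau to trade precision vs. recall
```

Because the head outputs independent per-label scores, the threshold \(\tau\) can be moved along the precision-recall trade-off without retraining.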
## IV Experiments ### _Evaluation Task_ All experimental methods are evaluated on the multi-label text classification task of assigning one or multiple industries to each company based on its name, description, and keywords relating to its core activities. Finding the model \(f^{*}\) which minimizes loss function \(L\) based on company name \(C\), company description \(D\), and keywords \(K\) can be denoted as (4), where \(Y\) is a company's true industry classification. \[f^{*}=\arg\min_{f\in\mathcal{F}}\sum_{i=1}^{n}L(f(C_{i},D_{i},K_{i}),Y_{i}) \tag{4}\] ### _Industry-Sector Dataset_ Based on EQT's proprietary database, we constructed a dataset of around \(5500\) companies. Each company is annotated with \(1\) to \(4\) of \(76\) different industries, where each industry is labeled at least \(20\) times. For each company, its legal name, keywords relating to the company's core activities, and a free text description are available, which are referred to as _company information_. This company information is concatenated to one text which is used as the input prompt in all experiments. The taxonomy of relevant industry sectors is created by investment professionals based on the industries belonging to macro-trends of their interest. These investment professionals also annotated the companies in this dataset with their corresponding industries. Next to the working title of the given industries, such as "Healthcare IT", a more descriptive set of tags was extracted from databases such as Crunchbase [36] and PitchBook [37], resulting in industry titles such as "healthcare software practice management platform service clinical patient management technology". It should be noted that the average number of labels per example is \(1.1\). This indicates that while the problem, in theory, is a multi-label classification problem, most examples in our dataset are not exhaustively annotated and only carry one label (see Fig. 3(c)). Fig. 2: A schematic comparison of Prompt Tuning (PT), Prompt Tuning with Trie Search (PT + TS), and PTEC. Note that in this example, _Healthcare Software_ would not be a valid label name. The correct label would be _Healthcare IT_. The dataset is split into \(75\%\) training set, \(10\%\) validation set, and \(15\%\) test set. Fig. 3(d) shows the highly imbalanced, long-tail class distribution. For instance, some industries occur only \(\sim 25\) times, while the most frequent industry occurs \(>300\) times. Importantly, this distribution only shows the classes included in the _IndustrySector_ dataset, and our database contains many more classes with even fewer annotations. To ensure that each industry in the _IndustrySector_ dataset is represented in similar proportions in all splits, and with a minimum frequency in both validation and test split, stratification is performed using multi-label stratified shuffle splitting, as proposed by [38]. During this process, it is ensured that each industry is represented at least \(2\) times in the validation set, \(3\) times in the test set, and \(15\) times in the training set. Since the complexity of the PLM's self-attention mechanism increases quadratically with prompt length, long input prompts will easily result in out-of-memory (OOM) errors. Therefore, descriptions and keyword lists which consist of more than \(1000\) characters are summarized using the \(250M\) parameter instruction fine-tuned FLAN T5 model [39], such that no input prompt exceeds a length of \(1000\) characters. The result of this summarizing step is displayed in Fig. 3(a) and Fig. 3(b). ### _Public HateSpeech Classification Benchmark_ To ensure our methodology and results can be reproduced and generalized to similar tasks in other domains, we constructed a public benchmarking dataset.
For this purpose, we opted for a hatespeech classification dataset as used in [40]. The task of this dataset is to classify social media comments into different kinds of hatespeech, where each comment can have one or multiple labels. This dataset was chosen because it is structurally very similar to our _IndustrySector_ dataset: It covers a set of \(22\) different classes, its data is highly imbalanced, and the length of the social media comments is similarly distributed as the length of the company descriptions. It should be noted that we could only find a subset of the original dataset used in [40], and this subset is substantially smaller and differently distributed from the dataset described in [40]. For instance, classes in our subset are heavily imbalanced, while [40] had reported on an almost perfectly balanced class distribution. This also implies that our results cannot directly be compared with [40]. Nevertheless, this benchmark serves as a possibility for other researchers to verify our methodology and results. The constructed _HateSpeech_ dataset can be found in our released codebase. ### _Model Training_ To compare the performance of \(N\)-shot prompting, KNN, RadiusNN, Parameter-Free Classification with gzip, Embedding Classification, Prompt Tuning, Prompt Tuning with Trie Search, and PTEC, these methods are compared using the \(7B\) parameter version of LLaMa (LLaMa 7B, [41]) and the \(1.7B\) parameter version of Bloom (Bloom 1B7, [42]). All models involving Prompt Tuning are trained using the AdamW optimizer. Additionally, Prompt Tuning is implemented with learning rate warm-up using PyTorch's OneCycleLR scheduler, and PTEC is trained with exponential learning rate decay using PyTorch's ExponentialLR scheduler. For both PTEC and the Embedding Classification baseline, we introduce as few additional parameters as possible to achieve a fair evaluation of the PLM's capabilities as proposed by [43]. Therefore, in both cases, a single linear layer with a sigmoid activation function is used as a classification head. As noted in Sections IV-B and IV-C, both the _IndustrySector_ and the _HateSpeech_ dataset are significantly imbalanced. Several strategies were experimented with to ensure that the model learns all industries equally well, rather than focusing on the most frequent industries: (1) Class weights are calculated for each class with \(n_{\text{max}}/n_{\text{class}}\), where \(n_{\text{max}}\) is the number of examples for the most frequent class and \(n_{\text{class}}\) is the number of examples for the class the weight is calculated for. The loss for each instance is weighted by its class weight before updating the gradients. (2) Data augmentation is applied to the minority classes to simultaneously upsample them and create a balanced training dataset. This can be done using paraphrasing [44]. In our case, the PLM is provided with the company information of a minority sample, and instructed to create information for a new company with a similar business model and belonging to the same industries. To reduce the computational costs of data augmentation, an augmented dataset is precomputed and retrieved upon training, rather than augmenting the data during the data loading process. ### _Hyperparameter Tuning_ The hyperparameters for all methods were optimized using Bayesian Optimization [45] with \(25\) random initializations of hyperparameter combinations and \(15\) iterations of Bayesian Optimization. 
Hyperparameters such as the learning rate and weight decay were searched on a logarithmic scale, such that the probability of sampling values from the interval \([0.01\leq x\leq 0.1]\) equals the probability of sampling values from \([0.001\leq x\leq 0.01]\), given that both intervals are included in the searched hyperparameter space. For the KNN and RadiusNN methods, the optimal hyperparameter values have large variability between different models. For this reason, if a hyperparameter was close to the boundary of the searched hyperparameter space, Bayesian Optimization was continued with a broader hyperparameter range. An overview of the optimized hyperparameters, the scale of searching, and the ranges of hyperparameter values searched is provided in Table I. Hyperparameter tuning was performed using the validation set, while all results reported in Section V were calculated over the test set. While the maximum batch size fitting on one A100 GPU was used for model training, an effective batch size of \(32\) was used for gradient updates. Threshold \(\tau\) mentioned in (2) is not considered a hyperparameter, since we automatically select the value that optimizes the F1 score. ### _Metrics_ The goal of an industry classification model is to provide investment professionals with an accurate overview of the companies belonging to a given industry of interest. The solution to this problem needs to find an appropriate balance between precision and recall, and should perform similarly well for all industries. Consequently, the macro-averaged F1 score was used to compare model performance. To investigate model performance variability over multiple runs, each model is run \(3\) times with different random seeds, and the standard deviation in macro-averaged F1 score is reported. We are not only interested in optimal performance, but also want to find a model which is cost-effective when frequently retraining it and running inference over a large database, as is the case in our _IndustrySector_ classification use case. Therefore, we additionally report on the computational resources required for fine-tuning on our training dataset and for inference over \(10M\) companies. As a measure of computational resources needed, we report on the consumed floating point operations (FLOPs). Compared to alternative measures such as execution time or energy consumption [46], FLOPs are an optimal metric for comparison due to their independence from the hardware used. Common formulas to estimate the FLOPs consumption of neural network training do not encompass edge cases such as Prompt Tuning. Hence, FLOPs were measured on representative samples using PyTorch's profiler [34], and the results were extrapolated to the full training and inference process. KNN and RadiusNN were implemented using the scikit-learn library [47], which does not provide a profiler to measure FLOPs. Their inference FLOPs were instead estimated as: \[\text{FLOPs}\approx E(T+I)+3(D\cdot T\cdot I) \tag{5}\] Here, \(D\) represents the dimensionality of the text embeddings, \(T\) denotes the number of training samples, \(I\) indicates the number of inference samples, and \(E\) the FLOPs required to embed one example. This equation can be derived as follows: The term \(E(T+I)\) refers to calculating the embeddings for the training and inference samples, and \(3(D\cdot T\cdot I)\) estimates the number of floating point operations (FLOPs) for performing classification with the KNN and RadiusNN algorithms.
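Evaluated directly, the estimate in (5) is a one-line computation; the values plugged in below are illustrative placeholders rather than the measured quantities.

```python
def knn_inference_flops(E, T, I, D):
    """FLOPs ~ E*(T + I) + 3*D*T*I, cf. (5): embedding cost plus pairwise Euclidean distances."""
    return E * (T + I) + 3 * D * T * I

# Illustrative values: profiled per-example embedding cost E, |train| = T, |inference| = I, embedding dim D.
print(f"{knn_inference_flops(E=1e12, T=4_000, I=10_000_000, D=4096):.2e}")
```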
The average value of \(E\) in this estimate is obtained by measuring the FLOPs used for generating one embedding with PyTorch's profiler. For both KNN and RadiusNN, each inference embedding is compared with every training embedding. The term \(3\cdot D\) corresponds to calculating the Euclidean distance between two embeddings. This calculation involves the subtraction of one embedding from the other (\(D\) FLOPs), squaring each element of the new vector (\(D\) FLOPs), taking the sum of these values (\(D-1\) FLOPs), and finally taking the square root of this sum (\(1\) FLOP). As this is done once for each pair of training and inference examples, the distance calculations will need \(3(D\cdot T\cdot I)\) FLOPs in total. As this is only an estimate, the exact number can vary based on the specifics of the operations used. While the formula provided here assumes a brute-force method for KNN and RadiusNN, it is important to note that more efficient methods are often employed in practice, especially in popular machine learning libraries such as scikit-learn [47]. Fig. 3: Distributions of (a) original description lengths, (b) preprocessed description lengths, (c) number of labels per example, and (d) number of examples per label. True computational resources required by KNN and RadiusNN methods may therefore be lower than estimated in this paper. However, this estimation provides a general idea of the computational resources needed. For both RadiusNN and KNN, the FLOPs used for calculating the text embeddings of the training data are considered as 'training' FLOPs. ### _The Impact of Pretraining Knowledge_ In the specific use case of industry classification, relevant companies were annotated by investment professionals, and once deployed, inference should be performed on a database of \(\sim 10M\) companies. As the annotated companies were not selected at random, but rather depending on the interest of a given investment professional, they may be structurally different from the companies in the inference data. PLMs are pretrained using large corpora, and may have already encountered information about certain companies during pretraining. Given that a PLM has likely encountered more information about more popular companies during pretraining, it may be able to perform a certain downstream task better for these more popular companies than for less-known companies. For instance, if the PLM was pretrained on text containing information about the company _Vinted_, the resulting pretraining knowledge might already be sufficient to classify _Vinted_ into an industry. If this is the case, and if the annotated dataset differs systematically from the inference dataset in such a way that it contains more popular companies, this would result in the performance of our model being overestimated. Consequently, we included an additional experiment to estimate whether and to what extent we might overestimate performance on the inference dataset. For this purpose, we first gained an impression of whether a PLM may have pretraining knowledge about a certain company simply by prompting it to indicate this, following the logic that PLMs mostly know what they know [48]. We did so for all companies in the test set and conducted a nonparametric Mann-Whitney U test [49] to test the hypothesis \(H_{1}\) that _classification performance is higher for the companies the PLM indicates to have pretraining knowledge about_.
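A sketch of this group comparison using SciPy is shown below; the per-company scores are placeholders for whichever per-example performance measure is compared between the two groups.

```python
import numpy as np
from scipy.stats import mannwhitneyu

# Per-company classification scores, split by whether the PLM indicated pretraining knowledge.
scores_known = np.array([0.9, 1.0, 0.5, 0.8])         # placeholder values
scores_unknown = np.array([0.7, 0.6, 1.0, 0.4, 0.8])  # placeholder values

# One-sided test of H1: performance is higher for companies known from pretraining.
u_stat, p_value = mannwhitneyu(scores_known, scores_unknown, alternative="greater")
print(u_stat, p_value)
```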
### _Inter-rater Agreement_ As it was noted in Section IV-B that most examples in our _IndustrySector_ dataset are not exhaustively annotated, an exhaustive list of labels was created for a representative subsample of the test set (\(N=104\)). This subsample was obtained using multi-label stratified shuffle splitting [38], and annotated by \(3\) independent professional raters. To investigate the subjectivity of the labels, chance-corrected inter-annotater agreement was calculated using Cohen's kappa (\(\kappa\), [50]). As this addresses a multi-label classification problem, \(\kappa\) was calculated for each label individually and then averaged over all labels. ## V Results ### _Model Performance and Computational Cost_ This section presents the experimental results comparing performance and computational costs of Prompt Tuning, Prompt Tuning with Trie Search, PTEC, and baseline methods. Fig. 4 plots model performance versus computational cost for both training on our _IndustrySector_ dataset and inference over \(10M\) companies using LLaMa 7B [41] and Bloom 1B7 [42]. For a more precise inspection of the results, the computational costs of training and inference are shown in Table II. It can be observed that PTEC achieves the best performance, while being more efficient than other Prompt Tuning methods for both training and inference. It also shows less variability between runs than Prompt Tuning with text-to-text classification, which becomes especially relevant for Bloom 1B7. Trie Search only has minimal influence on text-to-text classification performance for Prompt Tuning, while this effect is more extreme for \(N\)-shot prompting. Using a classification head results in comparable performance to Prompt Tuning with text-to-text classification while requiring considerably fewer computational resources. KNN and RadiusNN methods require a similar amount of computational resources but perform worse, which can be attributed to their lower parameter utilization. Although \(N\)-shot prompting does not require any training FLOPs, it requires a considerably higher amount of inference FLOPs, while also achieving the worst performance. Methods such as PTEC offer the advantage of predicting appropriate confidence scores. This attribute is evident in Fig. 5, which displays the Receiver Operating Characteristic (ROC) curves for multiple methods, and their equivalent Precision-Recall (PR) curves. These confidence scores allow for selecting a threshold, which would be equivalent to selecting a point on the method's ROC or PR curve. This can be done by setting the corresponding value for \(\tau\) in (2). This helps to select the appropriate trade-off between True Positive and False Positive Rate, in the case of the ROC curve, or precision and recall, in the case of the PR curve. Note that while the ROC curve is not sensitive to class imbalance, the PR curve shows this sensitivity. ### _Ablation Study_ Table III displays an overview of the techniques used by the methods experimented with in this study. Considering the results from this perspective, Prompt Tuning adds considerable performance gain to both text-to-text classification and Embedding Classification. However, Trie Search only improves performance for \(N\)-shot prompting, while it does not show an effect when used in combination with Prompt Tuning. ### _Data Augmentation_ Table III additionally shows the performance of the methods applied in this experiment when using original and augmented data. 
It can be observed that Nearest Neighbors methods benefit from data augmentation, while the same does not hold true for all other methods. ### _The Impact of Pretraining Knowledge_ Of the \(839\) companies in the test split of the _Industry-Sector_ dataset, \(159\) companies were indicated to be known from pretraining, while \(680\) companies were indicated not to be known. A qualitative analysis confirmed that known companies had more information about themselves available online, while unknown companies were more difficult to find information about. A Mann-Whitney U test indicated that differences between both groups were nonsignificant at a p-value of \(0.243\) (U = \(50993.5\); r = \(0.0385\)). This means that \(H_{1}\), namely that _classification performance is higher for the companies the PLM indicates to have pretraining knowledge about_, is not supported. Effectively, this indicates that we likely do not overestimate performance on the inference dataset, even though this possibility cannot be completely excluded. ### _Inter-rater Agreement_ The moderate inter-rater agreements shown in Table IV verify the subjectivity of our _Industry-Sector_ classification task. From \(104\) annotated companies, the annotators only perfectly agreed on the labels of 6 companies. An example of a company that achieved perfect agreement is "QuantumComp" with the description "A developer of a quantum computing system..." and the keywords "quantum computing, [...], quantum computing hardware." This example was unanimously labeled as "DeepTech" by all raters, PTEC, and the gold annotations. Fig. 4: Comparison of performance and computational cost for both training on our _Industry-Sector_ dataset and inference over \(10M\) companies. Error bars refer to the standard deviation in the macro-averaged F1 score over \(3\) runs. Invisible error bars indicate 0 standard deviation. Note that PT and PT + TS overlap for LLaMa 7B, resulting in only one marker being visible. The horizontal line refers to the performance of Parameter-Free Classification with gzip. Abbreviations as defined in Table I. An example of a company that each rater assigned a unique set of labels to is "ObID" with the description "Physical entity identification..." and the keywords "information technology, [...], internet". Rater1 labeled ObID as "Identity and Access Management", Rater2 as "Other Industrial Automation or IIoT", Rater3 as "Compliance & Admin Services", "Inspection", "Governance, Risk management and Compliance Software", and "Insurtech", and the PTEC model as "Other Vertical Software". The annotated gold label was "Governance, Risk management and Compliance Software". Analyzing Table IV, which displays the agreement between the three human raters, PTEC, and the gold annotations on the subsample described in IV-H, it can be observed that PTEC shows higher agreement with the gold annotations (\(\kappa=0.562\)), but lower agreement with human raters. This indicates that while PTEC successfully learns to predict the gold annotations it is trained on, it agrees less with human raters. However, it has to be noted that the raters had access to the gold annotations while labeling, and their labels may have been influenced by the gold annotations. This provides a possible explanation for the higher agreement between the raters and the gold annotations. ### _Public Benchmarking_ We achieved very similar results on our public _HateSpeech_ dataset, as shown in Table V.
The most notable difference is that for LLaMa 7B, Prompt Tuning outperforms PTEC, as will be discussed in Section VI. For both models, Trie Search seems to decrease the performance of the Prompt Tuned PLM, while it slightly improves the performance for N-shot prompting of Bloom 1B7. Additional results on the _HateSpeech_ dataset can be found in the repository released with this paper. ## VI Discussion The reported results indicate that in all configurations, Prompt Tuning outperforms baseline methods. Specifically, when working with a proprietary and nonintuitive industry taxonomy, a classification head may outperform a PLM's language head. Fig. 5: ROC and PR curves comparing multiple methods using LLaMa 7B as a PLM. Methods that cannot be thresholded are displayed as individual points. AUROC = Area Under the ROC curve; AP = Average Precision. Other abbreviations as defined in Table I. This is in contrast to findings that text-to-text classification with PLMs outperforms classification heads [3], but several arguments can be made to explain this circumstance: (1) Text-to-text classification often performs better because the PLM's language head is pretrained on large datasets. These pretraining datasets often contain examples of the relevant task, for instance, whether an expression contains positive or negative sentiment. Pretraining data might already contain similar or identical examples to the evaluation data [51], especially when dealing with well-known benchmarking datasets. However, in our case, the industry taxonomy is proprietary and highly specific. This results in the model not having encoded the input-target mapping during pretraining, and therefore not being able to use its pretraining knowledge on this task. The superior performance of Prompt Tuning with text-to-text classification using a more powerful LLM on the less domain-specific labels of the _HateSpeech_ benchmark provides additional evidence for this interpretation. (2) While most tasks used to evaluate text-to-text classification can be reduced to single-token targets ("good" or "bad"), the industry taxonomy used in this study is too complex to reduce classes to single tokens. As this results in the model not only needing to learn the correct industry allocation but also the order of the industries' label tokens, this creates a more complex label space. (3) Text-to-text classification might not generally be superior to a classification head, but might be more often reported on due to a publication bias, as is commonly reported in medical research [52]: A finding that the recently emerging text-to-text classification method performs worse than a well-established classification head is not worth publishing, while the opposite is. A relevant observation made in this study is the high variability of text-to-text classification performance when using Bloom 1B7 as a PLM, visible in the large error bars in Fig. 4. This is in line with recent research showing that models from the Bloom family produce the most inconsistent summaries, as judged by other language models [53]. Further, Trie Search appears to be able to improve performance only when no Prompt Tuning takes place. This indicates that Prompt Tuning can correctly learn to predict only valid labels, such that Trie Search does not result in any additional performance gain. Comparing the results on our proprietary _IndustrySector_ dataset with the results on the _HateSpeech_ benchmark gives some additional insights.
Most importantly, the superior performance of Prompt Tuning on the _HateSpeech_ dataset when using LLaMa 7B may be due to more intuitive and less ambiguous label names that do not belong to a proprietary and previously unseen taxonomy. It is thus possible that the PLM has encountered examples of the data and the label classes during pretraining. This may also explain why Trie Search does not result in improved performance but rather constrains the PLM's language head. This observation supports the interpretation that PTEC is especially efficient when working with proprietary and domain-specific data. As a limitation of this study, it should be mentioned that there are different ways of implementing Prompt Tuning, and that resources did not suffice to investigate other PEFT methods. For instance, Prompt Tuning as a text-to-text classification method could be implemented by representing each label with only one representative token, which may have resolved several limitations of text-to-text classification for multi-token labels. Future research into this problem could experiment with other PEFT methods such as Low-Rank Adaptation (LoRA) [27]. To address hierarchical industry taxonomies, hierarchical classification techniques [54] and hyperbolic embeddings may be a promising direction [55]. Considering that our dataset was not exhaustively annotated, methods focusing on positive and unlabeled data problems [56] could be investigated. Lastly, since one challenge in using PLMs was our highly domain-specific dataset, an additional self-supervised pretraining step on domain-specific data may increase performance. ## VII Conclusion This study benchmarked the computational cost and multi-label text classification performance of Prompt Tuning as a parameter-efficient alternative to fine-tuning all PLM parameters. To address the limitations of a text-to-text approach on multi-label classification problems, Prompt Tuning was extended with Trie Search as a constrained decoding strategy, and with Embedding Classification as an alternative to a text-to-text approach. Results indicate that Prompt Tuning can outperform popular text classification approaches on our domain-specific industry classification task, but both performance and efficiency can be further improved by combining Prompt Tuning with Embedding Classification. Trie Search only shows incremental performance gains when used with \(N\)-shot prompting, but does not show effects when used in combination with Prompt Tuning. These findings highlight the limitations of text-to-text classification methods, especially when applied to complex and highly domain-specific downstream tasks. Benchmarking our results on the _HateSpeech_ classification dataset confirmed this interpretation, as Prompt Tuning with text-to-text classification showed better performance on a less domain-specific and public dataset. Further performance gains could be achieved by applying better-performing PEFT methods, or by including an additional pretraining step on domain-specific data. This study's results highlight the continuing need to adapt state-of-the-art methods to domain-specific tasks, even in the era of PLMs with strong generalization abilities.
## Acknowledgment The authors thank Peter Bloem for his critical input during discussions, Armin Catovic, Mark Granroth-Wilding, Dhiana Deva Cavalcanti Rocha, Andrew McCornack, Zineb Senane, and Melvin Karlsson for providing their valuable opinions, and Sebastian Stan and Bengt Sjogren for setting up the infrastructure necessary for running experiments.
2309.04617
Scalable resolvent analysis for three-dimensional flows
Resolvent analysis is a powerful tool for studying coherent structures in turbulent flows. However, its application beyond canonical flows with symmetries that can be used to simplify the problem to inherently three-dimensional flows and other large systems has been hindered by the computational cost of computing resolvent modes. In particular, the CPU and memory requirements of state-of-the-art algorithms scale poorly with the problem dimension, i.e., the number of discrete degrees of freedom. In this paper, we present RSVD-$\Delta t$, a novel approach that overcomes these limitations by combining randomized singular value decomposition with an optimized time-stepping method for computing the action of the resolvent operator. Critically, the CPU cost and memory requirements of the algorithm scale linearly with the problem dimension, and we develop additional strategies to minimize these costs and control errors. We validate the algorithm using a Ginzburg-Landau test problem and demonstrate its low cost and improved scaling using a three-dimensional discretization of a turbulent jet. Lastly, we use it to study the impact of low-speed streaks on the development of Kelvin-Helmholtz wavepackets in the jet via secondary stability analysis, a problem that would have been intractable using previous algorithms.
Ali Farghadan, Eduardo Martini, Aaron Towne
2023-09-08T22:13:46Z
http://arxiv.org/abs/2309.04617v1
# Scalable resolvent analysis for three-dimensional flows ###### Abstract Resolvent analysis is a powerful tool for studying coherent structures in turbulent flows. However, its application beyond canonical flows with symmetries that can be used to simplify the problem to inherently three-dimensional flows and other large systems has been hindered by the computational cost of computing resolvent modes. In particular, the CPU and memory requirements of state-of-the-art algorithms scale poorly with the problem dimension, _i.e._, the number of discrete degrees of freedom. In this paper, we present RSVD-\(\Delta t\), a novel approach that overcomes these limitations by combining randomized singular value decomposition with an optimized time-stepping method for computing the action of the resolvent operator. Critically, the CPU cost and memory requirements of the algorithm scale linearly with the problem dimension, and we develop additional strategies to minimize these costs and control errors. We validate the algorithm using a Ginzburg-Landau test problem and demonstrate its low cost and improved scaling using a three-dimensional discretization of a turbulent jet. Lastly, we use it to study the impact of low-speed streaks on the development of Kelvin-Helmholtz wavepackets in the jet via secondary stability analysis, a problem that would have been intractable using previous algorithms. ## 1 Introduction Turbulent flows are characterized by chaotic and disorganized motions, but recurring dominant patterns can play a significant role in laminar to turbulent transition (Schmid & Henningson, 2001) and sustaining turbulence (McKeon, 2017). These coherent structures can be seen as the foundational building blocks of turbulence, and modal analysis is an important tool for identifying and understanding these structures (Taira _et al._, 2017). Popular data-driven methods include proper orthogonal decomposition (POD) (Sirovich, 1987_a_), dynamic mode decomposition (DMD) (Schmid, 2010), and spectral proper orthogonal decomposition (SPOD) (Lumley, 1967; Towne _et al._, 2018). In particular, SPOD identifies energy-ranked, single-frequency structures that evolve coherently in space and time. Resolvent (or input-output) analysis originates from classical control theory (Dunford and Schwartz, 1958; Kato, 2013) and has become arguably the most important operator-theoretic modal decomposition techniques in fluid mechanics (McKeon and Sharma, 2010; Taira _et al._, 2017; Jovanovic, 2021). Resolvent analysis has been applied to a wide variety of flows, including canonical wall-bounded flows (Dawson and McKeon, 2019; Morra _et al._, 2019), turbulent jets (Jeun _et al._, 2016; Schmidt _et al._, 2018; Lesshafft _et al._, 2019; Pickering _et al._, 2020), and airfoils (Thomareis and Papadakis, 2018; Yeh _et al._, 2020). It has been used for diverse tasks including design optimization (Chavarin and Luhar, 2020; Ran _et al._, 2021), receptivity analysis (Kamal _et al._, 2023; Cook and Nichols, 2023), and flow control (Yeh and Taira, 2019; Towne _et al._, 2020; Martini _et al._, 2020, 2022). Singular value decomposition (SVD) of the resolvent operator is at the heart of input-output-based studies. The left singular vectors of the resolvent operator, known as the response modes, are often related to the coherent motions in the flow (Towne _et al._, 2018; McKeon and Sharma, 2010). 
Specifically, the resolvent modes associated with the largest singular values provide an approximation of the leading SPOD modes (Towne _et al._, 2018) and, in some cases, capture the majority of the power spectral density (PSD) of the flow (Symon _et al._, 2019). The right singular vectors, also known as the forcing modes, describe the optimal inputs that lead to the most amplified responses, characterized by the largest singular values (gains), and offer information about the mechanisms driving these responses. Resolvent analysis can be computationally demanding. Two steps constitute most of the cost: \((i)\) forming the resolvent operator, which involves computing an inverse, and \((ii)\) performing the SVD. Both steps nominally scale like \(O(N^{3})\), where \(N\) is the state dimension. State-of-the-art methods, described below, improve on this scaling, but the computational cost remains a strong function of the state dimension \(N\). The state dimension, in turn, depends acutely on the number of spatial dimensions that must be numerically discretized. While the linearized Navier-Stokes equations are nominally three-dimensional, they can be simplified by expanding the flow variables into Fourier modes in homogenous dimensions, _i.e._, those in which the base flow about which the equations are linearized does not vary. This markedly reduces the size of the discretized operators that must be manipulated, decreasing the computational cost. Accordingly, inherently three-dimensional flows that do not contain homogeneous directions or other simplifying symmetries are particularly challenging. Recent advancements aim to overcome these two computational bottlenecks. The second bottleneck can be alleviated by using efficient algorithms to compute only the SVD modes with the largest singular values, which are typically of primary interest, rather than the complete decomposition. Standard methods like power iteration and various Arnoldi methods have been frequently applied for this purpose. More recently, randomized singular value decomposition (RSVD) (Halko _et al._, 2011) has been shown to further reduce the cost of resolvent analysis of one- (Moarref _et al._, 2013) and two-dimensional (Ribeiro _et al._, 2020) problems. Regarding the first bottleneck, forming the resolvent operator by computing an inverse is feasible only for small systems, _e.g._, one-dimensional ones. Fortunately, the aforementioned SVD algorithms do not require direct access to the resolvent operator, but rather its action on a specified forcing vector, _i.e._, the result of applying the resolvent operator to that vector. Accordingly, we can recast the first bottleneck in terms of the computational cost of computing the action of the resolvent operator on a vector. The standard approach for doing so is to solve a linear system whose solution yields the action of the resolvent operator on the right-hand-side vector via LU decomposition of the inverse of the resolvent operator (which can be directly formed in terms of the linearized Navier-Stokes operator; see SS3 for details). The computational cost of this approach typically scales like \(O(N^{1.5})\) or \(O(N^{2})\) for two- and three-dimensional problems, respectively, which is tolerable for most two-dimensional problems but quickly becomes intractable for three-dimensional problems. Numerous authors have used this LU-based approach along with Arnoldi methods (Sipp & Marquet, 2013; Jeun _et al._, 2016; Schmidt _et al._, 2018; Karban _et al._, 2020). 
Brynjell-Rahkola _et al._ (2017) used LU decomposition along with a power iteration with a Laplace preconditioner to increase the convergence rate of the resolvent modes. More recently, Ribeiro _et al._ (2020) used LU decomposition along with RSVD, which we call "RSVD-LU" in this study, and demonstrated significant CPU savings compared to using an Arnoldi iteration. However, the poor cost scaling of the LU decomposition with problem size \(N\) remains a limiting factor, impeding the investigation of three-dimensional flows and other large systems. Resolvent modes can be computed at a reduced cost for slowly varying flows, _i.e._, flows whose mean changes gradually in some spatial direction, by using spatial marching methods to approximate the action of the resolvent operator. Spatial marching methods approximately evolve perturbations in the slowly-varying direction. The best-known spatial marching method is the parabolized stability equations (PSE), but the inherent ill-posedness of PSE (Li & Malik, 1996) requires deleterious regularization that makes it ill-suited to compute resolvent modes in most cases (Towne _et al._, 2019). One exception is very low frequencies, where PSE has been used to compute resolvent modes corresponding to boundary-layer streaks (Sasaki _et al._, 2022). The one-way Navier-Stokes (OWNS) equations (Towne & Colonius, 2015) overcome many of the limitations of PSE; they are formally well-posed and capture the complete downstream response of the flow. The original formulation did not include a right-hand-side forcing on the linearized equations, which is fundamental to resolvent analysis. This was addressed by a second OWNS variant formulated in terms of a projection operator that splits both the solution and forcing into upstream- and downstream-traveling components (Towne _et al._, 2022). This method has been combined with a power-iteration approach to accurately and efficiently approximate resolvent modes for a range of slowly varying flows ranging from incompressible boundary layers to supersonic jets to hypersonic boundary layers. Recently, the cost of this approach was further reduced by a new recursive OWNS formulation (Zhu & Towne, 2023). The fundamental limitation of OWNS-based approaches is their restriction to (mostly) canonical flows that contain a slowly varying direction. Several data-driven methods for computing resolvent modes have been proposed, which avoid working directly with the resolvent operator at all. Towne _et al._ (2015) and Towne (2016) introduced empirical resolvent decomposition (ERD). Starting with data in the form of a set of forcing and response pairs, ERD solves an optimization problem to identify modes within the span of the data that maximizes the gain. Another recent approach uses dynamic mode decomposition (DMD) (Schmid, 2010) to estimate the resolvent modes from data (Herrmann _et al._, 2021). This approach benefits from the advancements in DMD (Schmid, 2022) and is robust, but to accurately approximate the resolvent modes, many random initial conditions may need to be simulated. Barthel _et al._ (2022) recently proposed a reformulation of resolvent analysis called variational resolvent analysis (VRA). Using the same mathematics that underly ERD, VRA computes resolvent modes by solving a Rayleigh quotient, avoiding the inverse that appears in the definition of the resolvent operator. To make the method computationally advantageous, the response modes are constrained to lie within the span of some other reduced-order basis. 
Barthel _et al._ (2022) obtain this basis from a series of locally parallel resolvent analyses; if the basis is taken from data, VRA becomes ERD. VRA showed speed-up compared to standard approaches for a canonical boundary layer, but it remains to be investigated for more complex scenarios where an effective basis is not evident. Time-stepping methods offer an alternative approach to overcome the first bottleneck (these methods are sometimes referred to as "matrix-free" approaches, as forming the LNS operator is not necessary). The central idea is to obtain the action of the resolvent operator on a vector by solving the linearized equations in the time domain. A pioneering study by Monokrousos _et al._ (2010) used time stepping along with power iteration to compute resolvent modes of a flat-plate boundary-layer flow. Modes at a particular frequency of interest were computed by forcing the linearized equations exclusively at that frequency and time stepping until a steady-state solution was obtained. Gomez _et al._ (2016) proposed an iterative procedure for updating the initial conditions to reduce the time required to reach the steady-state solution. This resulted in an 80% reduction of CPU time for a test problem, but only the leading mode at each frequency was obtained. Martini _et al._ (2021) introduced two additional variations of time-stepping approaches for computing resolvent modes with improved efficiency. The first, referred to as the transient response method, evaluates the transient response of the LNS to compact forcing. The second variation, known as the steady-state response method, computes the steady-state solution of the LNS when it is forced with a set of harmonic frequencies. Both methods allow all frequencies of interest to be simultaneously computed by isolating each frequency in the flow response using a discrete Fourier transform. Additionally, the steady-state method can be easily paired with more advanced SVD algorithms (_e.g._, Arnoldi, rather than power iteration) to obtain multiple resolvent modes at each frequency. Time-stepping methods for computing resolvent modes are potentially powerful because they obtain the action of the resolvent operator without the need for inverses or LU decomposition. Indeed, we will show that time-stepping methods can achieve linear cost scaling with the problem dimension \(N\). However, achieving this potential and overall low CPU and memory costs requires careful consideration of numerous factors. In this paper, we present a novel approach, abbreviated as "RSVD-\(\Delta t\)", that combines the benefits of RSVD with the advantages of time stepping. In short, the method eliminates the bottleneck in the RSVD-LU approach created by the LU decomposition by obtaining the action of the resolvent operator via an optimized time-stepping approach. All frequencies of interest are computed simultaneously using a steady-state response approach as in Martini _et al._ (2021). Additionally, we develop a novel technique to remove the undesired transient component of the response, shortening the temporal interval over which the equations are integrated and reducing the CPU cost by an order of magnitude in most cases. To minimize memory usage, we utilize streaming calculations for transferring data between the Fourier and time domains. The RSVD-\(\Delta t\) algorithm is shown to exhibit linear scalability both in terms of computational complexity and memory requirements and can be efficiently parallelized. 
Overall, these capabilities allow us to compute resolvent modes for three-dimensional flows and other large systems that were previously out of reach. In the remainder of the paper, we provide a brief review of the formulation and computation of resolvent analysis in SS2, discuss the RSVD-LU algorithm in SS3, explain the time-stepping method in SS4, and introduce our RSVD-\(\Delta t\) algorithm in SS5. An overview of the computational complexity of all approaches is given in SS6, the sources of errors of our algorithm are detailed in SS7, and approaches to optimize the algorithm are developed in SS8. Two test cases are defined in SS9 to validate, examine and compare the accuracy and performance of RSVD-\(\Delta t\) against other approaches. In SS10, we use RSVD-\(\Delta t\) to study the impact of streaks on the Kelvin-Helmholtz wavepackets in a jet. Concluding remarks are made in SS11. ## 2 Resolvent analysis ### Formulation Our starting point is the compressible Navier-Stokes equations written as \[\frac{\partial\mathbf{q}}{\partial t}=\mathbf{\mathcal{N}}(\mathbf{q}), \tag{2.1}\] where the nonlinear Navier-Stokes operator \(\boldsymbol{\mathcal{N}}\) acts on the state vector \(\boldsymbol{q}\in\mathbb{C}^{N}\), which describes the flow discretized in all inhomogeneous directions. A standard Reynolds decomposition \[\boldsymbol{q}(\boldsymbol{x},t)=\bar{\boldsymbol{q}}(\boldsymbol{x})+\boldsymbol {q}^{\prime}(\boldsymbol{x},t) \tag{2.2}\] partitions the flow state into the time-averaged mean \(\bar{\boldsymbol{q}}\) and the fluctuation \(\boldsymbol{q}^{\prime}\). Substituting (2.2) into (2.1) yields \[\frac{\partial\boldsymbol{q}^{\prime}}{\partial t}=\boldsymbol{A}(\bar{ \boldsymbol{q}})\boldsymbol{q}^{\prime}+\boldsymbol{B}\boldsymbol{f}^{\prime}( \bar{\boldsymbol{q}},\boldsymbol{q}^{\prime}), \tag{2.3}\] \[\boldsymbol{y}^{\prime}=\boldsymbol{\mathcal{C}}\boldsymbol{q}^{\prime},\] where \(\boldsymbol{A}\in\mathbb{C}^{N\times N}\) is the linearized Navier-Stokes (LNS) operator, \(\boldsymbol{B}\in\mathbb{C}^{N\times N_{f}}\) is an input matrix that can be used to restrict the forcing \(\boldsymbol{f}^{\prime}\in\mathbb{C}^{N_{f}}\), and \(\boldsymbol{\mathcal{C}}\in\mathbb{C}^{N_{y}\times N}\) is an output matrix that extracts the output of interest \(\boldsymbol{y}^{\prime}\in\mathbb{C}^{N_{y}}\) from the state. The forcing \(\boldsymbol{f}^{\prime}\) can represent an exogenous forcing and/or the nonlinear perturbation terms from the Navier-Stokes equations. Resolvent analysis is most natural when \(\boldsymbol{\mathcal{A}}\) is stable, _i.e._, all of its eigenvalues lie in the left-half plane. If A is unstable, discounting can be used to obtain a stable system (Jovanovic, 2004; Yeh & Taira, 2019). We assume that, if necessary, discounting has already been performed so that \(\boldsymbol{\mathcal{A}}\) is strictly stable. Resolvent analysis seeks the forcing that produces the largest steady-state response. Since the steady state is of interest, the solution can be obtained in the frequency domain. Taking the Fourier transform \[\mathcal{F}(\cdot)=\hat{(\cdot)}(\omega)=\int_{-\infty}^{+\infty}(\cdot)e^{- \mathrm{i}\omega t}\,dt \tag{2.4}\] of (2.3) and solving for the output yields \[\hat{\boldsymbol{y}}(\omega)=\boldsymbol{R}(\omega)\hat{\boldsymbol{f}}( \omega), \tag{2.5}\] where \(\omega\) is the frequency and \(\hat{(\cdot)}\) denotes the frequency counterpart of the time domain vector. 
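For readers who want the intermediate algebra made explicit, the step leading to (2.5) follows directly from the definitions above: Fourier transforming (2.3) and eliminating the state gives \[\mathrm{i}\omega\hat{\boldsymbol{q}}=\boldsymbol{A}\hat{\boldsymbol{q}}+\boldsymbol{B}\hat{\boldsymbol{f}}\quad\Longrightarrow\quad\hat{\boldsymbol{q}}=(\mathrm{i}\omega\boldsymbol{I}-\boldsymbol{A})^{-1}\boldsymbol{B}\hat{\boldsymbol{f}},\qquad\hat{\boldsymbol{y}}=\boldsymbol{\mathcal{C}}\hat{\boldsymbol{q}}=\boldsymbol{\mathcal{C}}(\mathrm{i}\omega\boldsymbol{I}-\boldsymbol{A})^{-1}\boldsymbol{B}\hat{\boldsymbol{f}},\] which is (2.5) with the resolvent operator defined next.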
The resolvent operator \[\boldsymbol{R}=\boldsymbol{\mathcal{C}}(\mathrm{i}\omega\boldsymbol{l}- \boldsymbol{\mathcal{A}})^{-1}\boldsymbol{\mathcal{B}} \tag{2.6}\] maps the input forcing to the output response (here, \(\mathrm{i}=\sqrt{-1}\) and \(\boldsymbol{l}\) is the identity matrix.) The optimization problem for the most amplified forcing is formally defined as maximizing \[\sigma=\frac{||\hat{\boldsymbol{y}}||_{q}}{||\hat{\boldsymbol{f}}||_{f}}=\frac {||\boldsymbol{R}\hat{\boldsymbol{f}}||_{q}}{||\hat{\boldsymbol{f}}||_{f}}, \tag{2.7}\] where \(||\boldsymbol{x}||_{f}^{2}=\ \langle\boldsymbol{x},\boldsymbol{x}\rangle_{f}\ = \boldsymbol{x}^{*}\boldsymbol{W}_{f}\boldsymbol{x}\) computes the \(f\)-norm of any vector \(\boldsymbol{x}\) and \((\cdot)^{*}\) denotes the conjugate transpose. \(\boldsymbol{W}_{f}\) is a weight matrix that accounts for numerical quadrature and allows us to define arbitrary norms. Note that input and output norms can be different, _i.e._, \(||\cdot||_{q}=||\cdot||_{f}\) is not required. For notational brevity, we assume identity matrices for the weight, input, and output matrices in what follows. The minor adjustments to our algorithm to accommodate non-identity weight, input, and output matrices are outlined in Appendix A. Solving the Rayleigh quotient (2.7) is equivalent to computing the SVD of the resolvent operator (Stewart, 1993) \[\boldsymbol{R}=\boldsymbol{U}\boldsymbol{\Sigma}\boldsymbol{V}^{*}, \tag{2.8}\] where \(\boldsymbol{\Sigma}\) contains the singular values (aka _gains_), and \(\boldsymbol{V}\) and \(\boldsymbol{U}\) are right and left singular vectors corresponding to input and output vectors (aka forcing and response _modes_), respectively. ### Computation Computing resolvent modes by following the definitions from the previous SS involves two computationally intensive steps: (\(i\)) forming the resolvent operator by computing the inverse in (2.6) and (\(ii\)) computing the full singular value decomposition in (2.8). Both of these steps nominally require \(O(N^{3})\) operations. This is workable for one-dimensional problems, _e.g._, a channel flow (Moarref _et al._, 2013), but quickly becomes intractable for two- and three-dimensional problems. Instead, most applications of resolvent analysis to two-dimensional problems have adopted an alternative approach that leverages LU decomposition and iterative eigenvalue solvers (Sipp & Marquet, 2013; Jeun _et al._, 2016; Schmidt _et al._, 2018; Thomareis & Papadakis, 2018; Karban _et al._, 2020). This approach utilizes a mathematical equivalence to compute the resolvent modes faster than the natural approach. 
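As a concrete, purely illustrative example of the natural approach, the short Python sketch below forms the resolvent operator explicitly for a small, randomly generated stable operator and computes its gains with a full SVD; the operator, dimensions, and frequency are arbitrary choices, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N, omega = 200, 1.5                      # illustrative size and frequency

# Small stable stand-in for the LNS matrix A: a random matrix shifted so that
# (with high probability) all eigenvalues lie in the left half-plane.
A = rng.standard_normal((N, N)) / np.sqrt(N) - 1.5 * np.eye(N)
B = np.eye(N)                            # identity input/output matrices for simplicity
C = np.eye(N)

# "Natural" approach: form R = C (i*omega*I - A)^{-1} B explicitly, then take its SVD.
R = C @ np.linalg.solve(1j * omega * np.eye(N) - A, B)   # O(N^3)
U, sigma, Vh = np.linalg.svd(R)                           # O(N^3)

print("three leading gains:", sigma[:3])
```

Both the inverse (here realized as a dense solve) and the SVD scale like \(O(N^{3})\), which is precisely the cost barrier discussed above; the methods that follow avoid one or both of these steps.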
It is straightforward to verify that computing the right singular vectors of the resolvent operator is equivalent to computing the eigenvectors of \(\boldsymbol{R}^{*}\boldsymbol{R}\), whose eigenvalues are the squares of the corresponding gains, so the leading resolvent modes can be obtained with iterative eigenvalue solvers that require only the action of \(\boldsymbol{R}\) and \(\boldsymbol{R}^{*}\) on vectors rather than the resolvent operator itself. ## 3 The RSVD-LU algorithm ### Randomized SVD Randomized SVD (RSVD) provides an inexpensive approximation of the \(k\) leading singular values and vectors of a matrix; the procedure, referred to as Algorithm 1 in what follows, is that of Halko _et al._ (2011). The first step is to sample the range of \(R\) by forming its sketch (line 3) \[\textbf{{Y}}=\textbf{{R}}\boldsymbol{\Theta}, \tag{3.1}\] where \(\boldsymbol{\Theta}\) is a dense random test matrix (line 2) with \(k\ll N\) columns that determines the number of leading modes to be approximated. Increasing the number of test vectors slightly beyond the desired number of modes enhances the accuracy of the leading modes. A feature of high-dimensional random vectors is that they form an orthonormal set with high probability (Vershynin, 2018), such that, on average, \(\boldsymbol{\Theta}\) projects uniformly onto all of the right singular vectors of \(R\). Therefore, the sketch preserves the leading left singular vectors of \(R\). An orthonormal basis \(Q\) for the sketch is obtained via QR decomposition (line 6), which is then used to sample the image of \(R\) (line 7) as \[\textbf{{S}}=\textbf{{Q}}^{*}\textbf{{R}}. \tag{3.2}\] Computing the SVD of \(S\) (line 8), which is inexpensive due to its reduced dimension, provides an approximation of the \(k\) leading right singular vectors \(V\) and singular values \(\boldsymbol{\Sigma}\) of \(R\). Finally, the corresponding approximations of the left singular vectors of \(R\) can be recovered as \(\textbf{{U}}=\textbf{{Q}}\tilde{\textbf{{U}}}\) (line 9). RSVD accurately estimates the leading modes for matrices with rapidly decaying singular values. For systems with slowly decaying singular values, performing \(q\) optional power iterations (lines 4-5 and Algorithm 2) enhances the accuracy of the estimates. The rationale of power iteration is to increase the effective gap between singular values within the sketch by exponentiating them, since \[(\textbf{{R}}\textbf{{R}}^{*})^{q}\textbf{{Y}}=(\textbf{{U}}\boldsymbol{\Sigma}(\textbf{{V}}^{*}\textbf{{V}})\boldsymbol{\Sigma}\textbf{{U}}^{*})^{q}\textbf{{Y}}=(\textbf{{U}}\boldsymbol{\Sigma}^{2}\textbf{{U}}^{*})^{q}\textbf{{Y}}=(\textbf{{U}}\boldsymbol{\Sigma}^{2q}\textbf{{U}}^{*})\textbf{{Y}}. \tag{3.3}\] Raising the singular values to a high power artificially accelerates the decay rate of the singular values of \(R\), improving the effectiveness of the RSVD algorithm. The QR factorizations improve numerical stability, as discussed by Halko _et al._ (2011). ``` 1:Input parameters:\(\textbf{{R}},\textbf{{Y}},q\) 2:for\(i=1:q\)do 3:\(\textbf{{Q}}\leftarrow\mathrm{qr}(\textbf{{Y}})\)\(\triangleright\) For stabilization purposes 4:\(\textbf{{Y}}\leftarrow\textbf{{R}}^{*}\textbf{{Q}}\)\(\triangleright\) Sample the image of \(R\) 5:\(\textbf{{Q}}\leftarrow\mathrm{qr}(\textbf{{Y}})\)\(\triangleright\) For stabilization purposes 6:\(\textbf{{Y}}\leftarrow\textbf{{R}}\textbf{{Q}}\)\(\triangleright\) Sample the range of \(R\) 7:Output parameter:\(\textbf{{Y}}\) ``` **Algorithm 2** Power iteration ### RSVD for resolvent analysis The algorithm outlined in the previous section assumes direct access to the matrix \(R\). In the context of resolvent analysis, \(R\) is defined in terms of an inverse, which should be avoided. Ribeiro _et al._ (2020) addressed this challenge by adopting the approach developed by Jeun _et al._ (2016) for computing resolvent modes using an Arnoldi algorithm. 
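To make the procedure concrete, here is a minimal NumPy sketch of the RSVD steps just described, including the optional power iteration; it operates on any explicitly available matrix and is meant only as an illustration (all names, sizes, and the test matrix are arbitrary), since for resolvent analysis \(R\) is never formed explicitly.

```python
import numpy as np

def rsvd(R, k=10, q=1, seed=0):
    """Illustrative randomized SVD with q optional power iterations.

    Only products with R and its conjugate transpose are required, which is
    what later allows these products to be replaced by linear solves or
    time stepping.
    """
    rng = np.random.default_rng(seed)
    N = R.shape[1]
    Theta = rng.standard_normal((N, k)) + 1j * rng.standard_normal((N, k))
    Y = R @ Theta                              # sketch of the range of R (line 3)
    for _ in range(q):                         # power iteration (lines 4-5, Algorithm 2)
        Q, _ = np.linalg.qr(Y)                 # stabilization
        Y = R.conj().T @ Q                     # sample the image of R
        Q, _ = np.linalg.qr(Y)                 # stabilization
        Y = R @ Q                              # sample the range of R
    Q, _ = np.linalg.qr(Y)                     # orthonormal basis of the sketch (line 6)
    S = Q.conj().T @ R                         # sample the image of R (line 7)
    U_tilde, sigma, Vh = np.linalg.svd(S, full_matrices=False)   # small SVD (line 8)
    return Q @ U_tilde, sigma, Vh.conj().T     # recover left singular vectors (line 9)

# Quick check on a random matrix with decaying singular values (illustrative).
rng = np.random.default_rng(1)
M = rng.standard_normal((300, 300)) * np.exp(-np.arange(300) / 20.0)
U, s, V = rsvd(M, k=5, q=2)
print("rsvd estimates :", s[:3])
print("exact values   :", np.linalg.svd(M, compute_uv=False)[:3])
```

The next subsection explains how the two products involving \(R\) in such a procedure are realized without ever forming \(R\).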
The idea is to replace multiplication of \(R\) or \(\textbf{{R}}^{*}\) by solving an equivalent linear system. For example, \(\textbf{{Y}}=\textbf{{R}}\boldsymbol{\Theta}\) (line 3 of Algorithm 1) can be obtained by solving the linear system \[(\mathrm{i}\omega\textbf{{I}}-\textbf{{A}})\textbf{{Y}}=\boldsymbol{\Theta} \tag{3.4}\] since \(\mathbf{R}^{-1}=(\mathrm{i}\omega\mathbf{l}-\mathbf{A})\). Similarly, \(\mathbf{S}=\mathbf{Q}^{*}\mathbf{R}\) (line 7 of Algorithm 1) can be replaced with solving \[(\mathrm{i}\omega\mathbf{l}-\mathbf{A})^{*}\mathbf{S}^{*}=\mathbf{Q}. \tag{3.5}\] The same concept can be used to replace multiplication by \(\mathbf{R}\) and \(\mathbf{R}^{*}\) in Algorithm 2. Typically, the linear systems are solved by computing an LU decomposition \[(\mathrm{i}\omega\mathbf{l}-\mathbf{A})=\mathbf{L}\mathbf{P}, \tag{3.6}\] where \(\mathbf{L}\) and \(\mathbf{P}\) are the lower and upper triangular matrices (we use \(\mathbf{P}\) to denote the upper triangular matrix instead of \(\mathbf{U}\) to avoid confusion with the left singular vectors). The same LU decomposition can be used also for \((\mathrm{i}\omega\mathbf{l}-\mathbf{A})^{*}\) since \[(\mathrm{i}\omega\mathbf{l}-\mathbf{A})^{*}=(\mathbf{L}\mathbf{P})^{*}=\mathbf{P}^{*}\mathbf{L}^{*}. \tag{3.7}\] Solving these linear systems is indeed significantly less computationally demanding than computing the inverse of \((\mathrm{i}\omega\mathbf{l}-\mathbf{A})\) to form \(\mathbf{R}\) and performing subsequent matrix-matrix multiplication in the RSVD algorithm. The remaining steps of the algorithm incur negligible computational costs and are not altered. In the remainder of our paper, we will use the term "RSVD-LU" to refer to the modified version of RSVD that is compatible with resolvent analysis (Ribeiro _et al._, 2020). ## 4 Computing resolvent modes using time stepping An alternative class of methods for computing resolvent modes utilizes time stepping. This idea was first proposed by Monokrousos _et al._ (2010) and recently was improved upon by Martini _et al._ (2021), who introduced two methods: the transient response method and the steady-state response method. The latter was found to be better suited for complex algorithms, and we will employ and extend this method in the present paper. ### The action of the resolvent operator via time stepping The central idea of the time-stepping approach is to obtain the action of the resolvent operator on a vector (or matrix) by solving the linear system that underlies the resolvent operator in the time domain. In this context, the action of a matrix \(\mathbf{R}\) on a vector (or matrix) \(\mathbf{b}\) is defined as follows; Given \(\mathbf{b}\), our objective is to compute \(\mathbf{x}=\mathbf{R}\mathbf{b}\), which is equivalent to solving the linear system \(\mathbf{R}^{-1}\mathbf{x}=\mathbf{b}\) for \(\mathbf{x}\). Starting with a harmonically forced ordinary differential equation (ODE) \[\frac{d\mathbf{q}}{dt}=\mathbf{A}\mathbf{q}+\mathbf{f}, \tag{4.1}\] where \[\mathbf{f}(t)=\hat{\mathbf{f}}e^{\mathrm{i}\omega t} \tag{4.2}\] is the harmonic forcing with frequency \(\omega\in\mathbb{R}\) and \(\hat{\mathbf{f}}\in\mathbb{C}^{N}\) is an arbitrary vector. The steady-state response of (4.1) is \[\mathbf{q}(t)=\hat{\mathbf{q}}_{s}e^{\mathrm{i}\omega t}, \tag{4.3}\] where \[\hat{\mathbf{q}}_{s}=(\mathrm{i}\omega\mathbf{l}-\mathbf{A})^{-1}\hat{\mathbf{f}}=\mathbf{R}\hat{ \mathbf{f}} \tag{4.4}\] is the Fourier-domain solution. 
Therefore, the action of \(\mathbf{R}\) can be obtained by computing the steady-state solution \(\mathbf{q}(t)\) of (4.1) and subsequently taking a Fourier transform to obtain \(\hat{\mathbf{q}}_{s}\). Similarly, the action of \(\mathbf{R}^{*}\) can be obtained by computing the steady-state response \(\mathbf{z}(t)\) of the adjoint equation \[-\frac{d\mathbf{z}}{dt} =\mathbf{\mathcal{A}}^{*}\mathbf{z}+\mathbf{f}, \tag{4.5}\] \[\mathbf{f} =\hat{\mathbf{f}}e^{\mathrm{i}\omega t},\] backward in time and taking a Fourier transform to obtain \[\hat{\mathbf{z}}_{s}=(-\mathrm{i}\omega\mathbf{I}-\mathbf{\mathcal{A}}^{*})^{-1}\hat{\mathbf{f}}=\mathbf{R}^{*}\hat{\mathbf{f}}. \tag{4.6}\] The arbitrary harmonic forcing term \(\hat{\mathbf{f}}\) can be a matrix instead of a vector by defining \(\hat{\mathbf{F}}\in\mathbb{C}^{N\times k}\). In that case, each column of the solutions \(\boldsymbol{Q}\) and \(\boldsymbol{Z}\) corresponds to one specific column of the forcing matrix. ### Direct and adjoint actions for a range of frequencies This section describes an important contribution from Martini _et al._ (2021) that allows us to compute the action of the resolvent operator for a set of desired frequencies while time-stepping the equations only once. Integrating (4.1) typically generates a transient response of length \(T_{t}\) before the desired steady-state solution is obtained, as shown in figure 1. The length of \(T_{t}\) affects the length of time stepping and the accuracy of the output, as discussed in SS7.2.2. The discrete nature of time stepping encourages the usage of the discrete Fourier transform (DFT), where \(\hat{\mathbf{q}}_{s}(\omega)\) can be obtained for a base frequency, \(\omega_{min}\), and its harmonics, \(n\omega_{min}\), where \(n\in\mathbb{Z}\). The DFT necessitates a specific time length of \(T_{s}=2\pi/\omega_{min}\) in order to accurately resolve the longest wavelength of interest, while the number of snapshots within the steady-state period \(T_{s}\) determines the highest frequency that can be resolved. In order to compute resolvent modes for all frequencies of interest \[\Omega=\{0,\pm\omega_{min},\pm 2\omega_{min},\pm 3\omega_{min},...,\pm\omega_{max}\}, \tag{4.7}\] where \(\omega_{max}\) represents the highest frequency of interest, the forcing term \[\mathbf{f}=\sum_{\omega_{j}\in\Omega}\hat{\mathbf{f}}_{j}e^{\mathrm{i}\omega_{j}t} \tag{4.8}\] must include all frequencies in \(\Omega\). Figure 1: Schematic of the response waveform. The solution contains a transient portion of length \(T_{t}\) before the steady-state solution of period \(T_{s}\) is achieved. The numerical solution contains \(N_{s}\) time steps of size \(dt\) within one period of the steady-state solution, but only \(N_{\omega}\) points with \(\Delta t\) spacing are required to decompose the \(N_{\omega}\) frequencies of interest without aliasing. The minimum number of snapshots within the \(T_{s}\)-period is \(N_{\omega}=2\lceil\frac{\omega_{max}}{\omega_{min}}\rceil\) according to Nyquist's theorem (Nyquist, 1928). Performing time integration of (4.1) results in computing \(N_{s}\) steady-state snapshots within the \(T_{s}\)-period, where typically \(N_{s}\geq N_{\omega}\), as the time step (\(dt\)) is chosen to ensure sufficient integration accuracy. Ultimately, by choosing \(N_{\omega}\) steady-state snapshots, we can determine the Fourier coefficients by taking a DFT. 
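As a small sanity check on this idea (an illustrative sketch, not the authors' implementation), the following Python code applies the action of the resolvent operator at a single frequency by marching a randomly generated stable system with a classical RK4 scheme, extracting the steady-state Fourier coefficient, and comparing it with the direct solve; the operator, sizes, and integration lengths are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
N, omega = 64, 2.0

# Small stable stand-in for the LNS operator and a random forcing direction.
A = rng.standard_normal((N, N)) / np.sqrt(N) - 1.5 * np.eye(N)
f_hat = rng.standard_normal(N) + 1j * rng.standard_normal(N)

def rhs(q, t):
    """Right-hand side of dq/dt = A q + f_hat * exp(i*omega*t)."""
    return A @ q + f_hat * np.exp(1j * omega * t)

def rk4_step(q, t, dt):
    k1 = rhs(q, t)
    k2 = rhs(q + 0.5 * dt * k1, t + 0.5 * dt)
    k3 = rhs(q + 0.5 * dt * k2, t + 0.5 * dt)
    k4 = rhs(q + dt * k3, t + dt)
    return q + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

T_s = 2.0 * np.pi / omega            # one period of the steady-state response
n_per_period = 200
dt = T_s / n_per_period              # chosen for accuracy, not by a CFL condition

q, t = np.zeros(N, dtype=complex), 0.0
for _ in range(40 * n_per_period):   # march through the transient, whose length
    q = rk4_step(q, t, dt)           # is set by the least-damped eigenvalue of A
    t += dt

# One period of steady-state snapshots -> Fourier coefficient at omega.
q_hat = np.zeros(N, dtype=complex)
for _ in range(n_per_period):
    q_hat += q * np.exp(-1j * omega * t) * dt / T_s
    q = rk4_step(q, t, dt)
    t += dt

q_direct = np.linalg.solve(1j * omega * np.eye(N) - A, f_hat)
print("relative error:", np.linalg.norm(q_hat - q_direct) / np.linalg.norm(q_direct))
```

In the multi-frequency setting described above, the forcing would contain every \(\omega\in\Omega\), and a single integration followed by a DFT of \(N_{\omega}\) equally spaced steady-state snapshots would recover all responses at once.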
To elaborate on this choice of snapshots, assume a set of snapshots \(\mathbf{\mathcal{Q}}_{N_{s}}=\{\mathbf{q}_{1},\mathbf{q}_{2},\mathbf{q}_{3},...,\mathbf{q}_{N_{s}}\}\) (analogous to the pink dots in figure 1), where \(\mathbf{q}_{j}\) represents the \(j^{th}\) steady-state snapshot in the time domain. The fast Fourier transform (FFT) can efficiently compute \(\hat{\mathbf{\mathcal{Q}}}_{N_{s}}=\{\hat{\mathbf{q}}_{1},\hat{\mathbf{q}}_{2},\hat{\mathbf{q}}_{3},...,\hat{\mathbf{q}}_{N_{s}}\}\). However, the maximum resolved frequency within \(\hat{\mathbf{\mathcal{Q}}}_{N_{s}}\) surpasses \(\omega_{max}\) since typically \(N_{\omega}\sim O(10^{2})\), and \(N_{s}\sim O(10^{3}-10^{5})\). Therefore, an optimal choice that resolves all \(\omega\in\Omega\) without aliasing is to consider \(N_{\omega}\) equally spaced snapshots in \(\mathbf{\mathcal{Q}}_{N_{\omega}}=\{\mathbf{q}_{1},\mathbf{q}_{2},\mathbf{q}_{3},...,\mathbf{q}_{N_{\omega}}\}\) (analogous to the cyan dots in figure 1). Taking the FFT of \(\mathbf{\mathcal{Q}}_{N_{\omega}}\) yields \(\hat{\mathbf{\mathcal{Q}}}_{N_{\omega}}=\{\hat{\mathbf{q}}_{1},\hat{\mathbf{q}}_{2},\hat{\mathbf{q}}_{3},...,\hat{\mathbf{q}}_{N_{\omega}}\}\), where each member \(\hat{\mathbf{q}}_{j}\) represents the solution to \((\mathrm{i}\omega_{j}\mathbf{I}-\mathbf{\mathsf{A}})\hat{\mathbf{q}}_{j}=\hat{\mathbf{f}}_{j}\), with \(\omega_{j}\in\Omega\). To avoid leakage, the equidistant snapshots within \(\mathbf{\mathcal{Q}}_{N_{\omega}}\) need to span the entire \(T_{s}\) period, _i.e._, \[T_{s}=dt\times N_{s}=\Delta t\times N_{\omega}. \tag{4.9}\] For a given pair \((\omega_{min},\omega_{max})\), \[\Delta t=\frac{T_{s}}{N_{\omega}}=\frac{2\pi/\omega_{min}}{2\lceil\frac{\omega_{max}}{\omega_{min}}\rceil} \tag{4.10}\] is predetermined, so \(dt\) must be selected such that \(\frac{N_{s}}{N_{\omega}}\in\mathbb{N}\). Figure 2 demonstrates the equivalence between computing the action of \(\mathbf{R}\) for a range of frequencies in both the RSVD-LU and RSVD-\(\Delta t\) algorithms. Starting from the LNS equations, the upper route involves applying a Fourier transform before solving \(N_{\omega}\) decoupled linear systems to compute the action of the resolvent operator on \(N_{\omega}\) forcing inputs. The bottom route involves integrating the LNS equations in the time domain, followed by a Fourier transform to generate the same output as the upper route. All frequencies of interest, \(\omega\in\Omega\), are included in the forcing so that the time stepping is performed only once, and the response at each frequency is obtained using a DFT or FFT. Figure 2: Flowchart depicting the action of \(\mathbf{R}\) on \(N_{\omega}\) inputs for the RSVD-LU (upper route) and the RSVD-\(\Delta t\) (bottom route) algorithms. Both routes produce the same result, but the bottom route is computationally advantageous for large systems. ## 5 RSVD-\(\Delta t\): RSVD with time stepping Our algorithm, which we refer to as RSVD-\(\Delta t\), uses time stepping to eliminate the computational bottleneck within the RSVD algorithm for large systems. Specifically, solving the direct and adjoint LNS equations to apply the action of \(\boldsymbol{R}\) and \(\boldsymbol{R}^{*}\) circumvents the need for LU decomposition, improving the scaling of the algorithm (see SS6) and enabling resolvent analysis for the large systems typical of three-dimensional flows. RSVD-\(\Delta t\) is outlined in Algorithm 3 and described in what follows. 
``` 1:Input parameters:\(\boldsymbol{\mathsf{A}},k,q,\Omega\), TSS, \(dt,T_{t}\) 2:\(\hat{\boldsymbol{\Theta}}\leftarrow\operatorname{randn}(N,k,N_{\omega})\)\(\triangleright\) Create random test matrices 3:\(\hat{\boldsymbol{Y}}\leftarrow\operatorname{DirectAction}(\boldsymbol{ \mathsf{A}},\hat{\boldsymbol{\Theta}},\text{TSS},dt,T_{t})\)\(\triangleright\) Sample the range of \(R\) 4:if\(q>0\)then\(\triangleright\) Optional power iteration 5:\(\hat{\boldsymbol{Y}}\leftarrow\operatorname{PI}(\boldsymbol{\mathsf{A}},\hat{ \boldsymbol{Y}},q,\text{TSS},dt,T_{t})\)\(\triangleright\) Algorithm 2 with time stepping 6:\(\hat{\boldsymbol{\Theta}}\leftarrow\operatorname{qr}_{\Omega}(\hat{ \boldsymbol{Y}})\)\(\triangleright\) Build the orthonormal subspace \(\hat{\boldsymbol{\Theta}}\) 7:\(\boldsymbol{\mathsf{S}}\leftarrow\operatorname{AdjointAction}(\boldsymbol{ \mathsf{A}}^{*},\hat{\boldsymbol{\Theta}},\text{TSS},dt,T_{t})\)\(\triangleright\) Sample the image of \(R\) 8:\((\hat{\boldsymbol{U}},\boldsymbol{\Sigma},\boldsymbol{V})\leftarrow\operatorname {svd}_{\Omega}(\boldsymbol{\mathsf{S}})\)\(\triangleright\) Obtain \(\boldsymbol{\Sigma}\), \(V\) 9:\(\boldsymbol{U}\leftarrow\ (\hat{\boldsymbol{\Theta}}\hat{\boldsymbol{U}})_{\Omega}\)\(\triangleright\) Recover \(U\) 10:Output parameters:\(\boldsymbol{U},\boldsymbol{\Sigma},\boldsymbol{V}\) for all \(\omega\in\Omega\)Algorithm 3: \(k,q,\Omega\) are common parameters with RSVD. \((\cdot)_{\Omega}\) means the function is separately applied to each \(\omega\in\Omega\), and TSS is an abbreviation for time-stepping schemes (_e.g._, backward Euler) ``` **Algorithm 3** RSVD-\(\Delta t\) As in the standard RSVD algorithm, the first step is to create random forcing matrices to sketch \(R\). Since our time-stepping approach computes all frequencies of interest at once, a separate test matrix \(\hat{\boldsymbol{\Theta}}\in\mathbb{C}^{N\times k}\) is generated for each frequency \(\omega\in\Omega\) (line 2). Next (line 3), the DirectAction function solves the LNS equations forced by the set of test matrices in the time domain to obtain the sketch \(\hat{\boldsymbol{Y}}\) of the resolvent operator \(R\) for all \(\omega\in\Omega\). Line 4 checks whether or not power iteration is desired, and if so (_i.e._, \(q>0\)), line 5 jumps to algorithm 2 to increase the accuracy of resolvent modes. All instances of applying the action of the resolvent operator or its adjoint in Algorithm 2 are performed via time stepping. In line 6, an orthonormal subspace \(\hat{\boldsymbol{\mathsf{O}}}\) is constructed for the sketch at each frequency via QR decomposition. Note that the \(\Omega\) subscript indicates that the operation is performed separately for each frequency \(\omega\in\Omega\). Next, in line 7, the AdjointAction function solves the adjoint LNS equations forced by the set of \(\hat{\boldsymbol{\mathsf{O}}}\) matrices in the time domain to sample the image of the resolvent operator \(R\) for all \(\omega\in\Omega\). Finally, the estimates of the k leading right singular vector \(V\) and gains \(\boldsymbol{\Sigma}\) are obtained via an economy SVD of the \(N\times k\) matrix \(\hat{\boldsymbol{\mathsf{S}}}\) (line 8), and left singular vectors \(U\) are recovered in line 9. ## 6 Computational complexity The primary advantage of the RSVD-\(\Delta t\) algorithm is its reduced computational cost. 
In this section, we discuss the CPU and memory cost scaling of applying the action of the resolvent operator via time stepping and compare it to LU-based approaches, as summarized in table 1. We assume that the LNS equations are discretized using a sparse scheme such as finite differences, finite volume, or finite elements. Once the linearized operator \(A\) is constructed, the goal is to solve the linear system given by \[(\mathrm{i}\omega\boldsymbol{I}-\boldsymbol{\mathsf{A}})\boldsymbol{x}=\boldsymbol{b} \tag{6.1}\] to compute the action of \(\boldsymbol{R}\) on \(\boldsymbol{b}\). ### CPU cost Direct solvers find the solution of (6.1) to machine precision. A common approach is to find the LU decomposition of (\(\mathrm{i}\omega\boldsymbol{l}-\boldsymbol{\mathsf{A}}\)) and solve the decomposed system via back substitution. The process of computing lower and upper triangular matrices with full or partial pivoting can be extremely expensive for large systems (Duff _et al._, 2017) and is often the dominant cost of solving a linear system (Marquet & Larsson, 2015). Once the LU decomposition is obtained, solving the LU-decomposed system is typically comparatively inexpensive. The theoretical cost scaling of LU decomposition of the sparse matrices that arise from collocation-based discretization methods (like finite differences) is \(O(N^{1.5})\) and \(O(N^{2})\) for two-dimensional and three-dimensional systems, respectively (Amestoy _et al._, 2019). The larger scaling exponent and number of grid points present in a three-dimensional problem make the LU decomposition of the corresponding linear operator costly. Optimized algorithms for computing LU decomposition are available in open-source software packages such as LAPACK (Anderson _et al._, 1999), MUMPS (Amestoy _et al._, 2001), PARDISO (Schenk _et al._, 2001), and Hypre (Falgout & Yang, 2002), which are designed to leverage massive parallelization. The LU decomposition becomes increasingly dominant (compared to solving the LU-decomposed system or other algorithmic steps) as the size of the system increases for both the standard Arnoldi-based method and the RSVD-LU algorithm, reducing the computational advantage of the latter. Iterative solvers contain convergence criteria that can be adjusted to reduce computational cost at the expense of a less accurate solution. The performance of iterative solvers strongly depends on the condition number \(\kappa\), the ratio between the largest and smallest singular values of a matrix. Matrices with condition numbers greater than \(\sim 10^{4}\) are considered to be ill-conditioned (Saad, 2003_b_), which can cause slow convergence and numerical stability issues for iterative solvers (Skeel, 1979). The LNS operator \(\boldsymbol{\mathsf{A}}\) is typically a sparse but ill-conditioned matrix. When \(\omega\) is small, (\(\mathrm{i}\omega\boldsymbol{l}-\boldsymbol{\mathsf{A}}\)) inherits the ill-conditioning of \(\boldsymbol{\mathsf{A}}\), making the use of an iterative solver challenging. The conditioning improves as \(\omega\) increases, so the lowest frequencies control the overall cost of using an iterative method to compute resolvent modes. In addition to the condition number, other properties such as the size, sparsity pattern, and density (or sparsity ratio) of a matrix can also ease or aggravate the situation (Trefethen & Bau III, 1997). 
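To illustrate the direct-solver route in code (a minimal sketch with an arbitrary sparse operator, not the authors' implementation), the factorization of \((\mathrm{i}\omega\boldsymbol{I}-\boldsymbol{\mathsf{A}})\) is computed once and then reused for both the direct and adjoint actions:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

N, omega = 1000, 1.0

# Sparse, stable 1-D diffusion-like operator (an arbitrary stand-in for the LNS operator).
A = sp.diags([np.ones(N - 1), -2.0 * np.ones(N), np.ones(N - 1)],
             [-1, 0, 1], format="csc") * 1e-3 * (N + 1) ** 2

# One sparse LU factorization of (i*omega*I - A) serves both directions,
# since its conjugate transpose is (LP)^* = P^* L^*.
lu = spla.splu((1j * omega * sp.identity(N, format="csc") - A).tocsc())

rng = np.random.default_rng(0)
f_hat = rng.standard_normal(N) + 1j * rng.standard_normal(N)

y = lu.solve(f_hat)               # action of R on f_hat
z = lu.solve(f_hat, trans="H")    # action of R^* on f_hat (conjugate-transpose solve)

# Residual check of the direct action.
print(np.linalg.norm((1j * omega * sp.identity(N) - A) @ y - f_hat))
```

Here the factorization itself is the expensive step; its cost and memory grow super-linearly with \(N\) as in table 1, while each subsequent triangular solve is comparatively cheap.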
\begin{table} \begin{tabular}{c c c c} Problem size & Action of \(\boldsymbol{R}\) & CPU time & Memory \\ Two-dimensional & time stepping & \(O(N)\) & \(O(N)\) \\ & LU decomposition & \(O(N^{1.5})\) & \(O(N^{1.2})\) \\ Three-dimensional & time stepping & \(O(N)\) & \(O(N)\) \\ & LU decomposition & \(O(N^{2})\) & \(O(N^{1.6})\) \\ \end{tabular} \end{table} Table 1: The scaling of CPU time and memory requirements with respect to \(N\) for computing the action of \(\boldsymbol{R}\) (or \(\boldsymbol{R}^{*}\)) using time stepping and LU decomposition. In principle, iterative solvers are attractive when solving (6.1) up to machine precision is unnecessary, as is the case when using the RSVD algorithm, which is already an approximation. The main challenge remains the typically high condition number of (\(\mathrm{i}\omega\boldsymbol{l}-\boldsymbol{\mathsf{A}}\)), as explained above. One potential solution is the common practice of using a preconditioner (Saad, 2003_a_). Preconditioners are matrices that are multiplied on the left, right, or both sides of the target matrix to decrease its condition number and thus increase the convergence of iterative solvers. The methods of computing preconditioners and numerous related theories and practices are neatly summarized in a few surveys (Axelsson, 1985; Benzi, 2002; Pearson & Pestana, 2020). Despite numerous developments in this area, effective preconditioners do not exist for all matrices, including many LNS operators. Accordingly, direct methods/LU decompositions are almost always used to solve (6.1) when computing resolvent modes (Moarref _et al._, 2013; Jeun _et al._, 2016; Schmidt _et al._, 2018; Ribeiro _et al._, 2020). The cost of time-stepping methods relies on integrating the LNS equations in the time domain. Time-stepping of ODEs (such as the one in (4.1)) has a long history and is a mature field (Hairer _et al._, 1993; Wanner & Hairer, 1996; Trefethen & Bau III, 1997). Herein, two classes - implicit and explicit integration schemes - are available and widely used in the scientific computing community. Implicit integrators possess better stability properties but require a system of the form \[\boldsymbol{\mathcal{A}}\boldsymbol{x}=\boldsymbol{b} \tag{6.2}\] be solved at every iteration. Here, \(\boldsymbol{b}\in\mathbb{C}^{N\times k}\) is a function of the solution at previous time steps and the exogenous forcing (if present), and \(\boldsymbol{\mathcal{A}}\in\mathbb{C}^{N\times N}\) is the temporally discretized operator, which is a function of the linear operator \(\boldsymbol{\mathsf{A}}\). For example, \(\boldsymbol{\mathcal{A}}\) can be written as a first-order polynomial of the form \(\boldsymbol{\mathcal{A}}=c_{1}\boldsymbol{I}+c_{2}\boldsymbol{\mathsf{A}}\) for multi-step methods, where the constants are determined by the integration scheme and time step, _e.g._, \(\boldsymbol{\mathcal{A}}=\boldsymbol{I}-dt\boldsymbol{\mathsf{A}}\) for backward Euler. A superficial comparison between (6.2) and (6.1) indicates that implicit time steppers suffer from the same issues elaborated above. However, the key difference is that \(\boldsymbol{\mathsf{A}}\) is multiplied by the (small) time step \(dt\), so the ill-conditioning of \(\boldsymbol{\mathsf{A}}\) is largely overwhelmed by the ideal conditioning of the identity matrix \(\boldsymbol{I}\). This improved conditioning makes possible the application of iterative solvers. 
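A tiny numerical illustration of this conditioning argument (with an arbitrary diagonal toy operator, not taken from the paper):

```python
import numpy as np

N, omega, dt = 400, 1.0, 1e-3

# Stiff, stable toy operator: widely spread negative eigenvalues mimic the
# ill-conditioning of a discretized linearized Navier-Stokes operator.
A = np.diag(-np.logspace(0, 6, N))          # eigenvalues from -1 to -1e6

kappa_freq = np.linalg.cond(1j * omega * np.eye(N) - A)   # frequency-domain operator
kappa_step = np.linalg.cond(np.eye(N) - dt * A)           # backward-Euler operator

print(f"cond(i*omega*I - A) ~ {kappa_freq:.1e}")   # O(1e6): hard for iterative solvers
print(f"cond(I - dt*A)      ~ {kappa_step:.1e}")   # O(1e3): far more tractable
```

Shrinking \(dt\) further improves the conditioning of the implicit-step operator, at the cost of taking more steps.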
For explicit integrators, the solution at each time step is an explicit function of the solution (and exogenous forcing) at previous time steps. Accordingly, a solution of a linear system is not required, and each step contains only inexpensive sparse matrix-vector products for a linear ODE such as (6.1), making each step rapidly computable. The downside of explicit methods is that they are less numerically stable and often require many small steps to ensure stability for stiff systems (Suli & Mayers, 2003). Nevertheless, the drastically smaller cost of each step for explicit integrators often outweighs the disadvantage of requiring many small steps, and many computational fluid dynamics codes are equipped with explicit integrators such as Runge-Kutta schemes. Explicit integrators involve repeatedly multiplying the sparse matrix \(\boldsymbol{\mathcal{A}}\) with vectors during the time-stepping process, which scales like \(O(N)\). Generating forcing input and transforming responses to Fourier space are also \(O(N)\) operations (see SS8.1.1). The time step is chosen to control the error associated with the highest frequency of interest, rather than being determined by a CFL condition as discussed in SS7.2.1. By fixing the time step and time-stepping scheme while varying \(N\), it is evident that explicit integrators scale linearly with dimension. Implicit integrators, on the other hand, require at least one LU decomposition of \(\boldsymbol{\mathcal{A}}\) for direct solvers or a preconditioner for indirect solvers, which are not \(O(N)\) operations. However, this one-time cost is often small enough that it is overwhelmed by other operations such that the observed computational complexity remains \(O(N)\). ### Memory requirements Supercomputers and parallel solvers can keep the hope of computing the LU decomposition of massive and poorly conditioned systems alive; however, massive calculations require massive storage, and memory becomes the top issue (Davis _et al._, 2016). Generally, direct solvers are more robust than iterative solvers but can consume significant memory due to the fill-in process of factorization (Marquet & Larsson, 2015). The memory requirement associated with LU decomposition for resolvent analysis has been empirically observed to scale like \(O(N^{1.2})\) and \(O(N^{1.6})\) for two-dimensional and three-dimensional systems, respectively (Towne _et al._, 2022). The exponents are not guaranteed and can become better or worse depending on the system of interest. Explicit integration schemes have certain advantages over implicit integration schemes. Explicit schemes typically do not require much space for sparse matrix-vector products. The required memory is mainly used to store the forcing and response modes in Fourier space which scales like \(O(N)\), as will be discussed in SS8.1.1. On the other hand, implicit integration schemes, in addition to the Fourier space matrices, require memory for solving (6.2), which depends heavily on the sparsity of the LU-decomposed matrices or the iterative methods employed. For some systems, these methods may scale worse than \(O(N)\), resulting in increased memory requirements. ### Matrix-free implementation So far, we have assumed that the LNS matrix \(\boldsymbol{\mathsf{A}}\) is explicitly formed. 
In contrast to the standard frequency-domain approaches including the RSVD-LU algorithm, our time-stepping approach can be applied in a matrix-free manner using any code with linear direct and adjoint capabilities without explicitly forming \(\boldsymbol{\mathsf{A}}\)(de Pando _et al._, 2012; Martini _et al._, 2021). In this case, the cost scaling of our algorithm will follow that of the underlying Navier-Stokes code, which is again typically linear with the problem dimension. ## 7 Sources of error in the RSVD-\(\Delta t\) algorithm Next, we identify sources of error within the RSVD-\(\Delta t\) algorithm, which stem from the RSVD approximation and the time-stepping approach used to compute the action of \(\boldsymbol{\mathsf{R}}\). By effectively addressing these sources of error, the RSVD-\(\Delta t\) method can be optimized for improved efficiency. ### RSVD approximation RSVD offers estimates of the resolvent modes rather than exact ground truth. The accuracy of these estimates is extensively discussed in Halko _et al._ (2011), and it naturally depends on the gain separation. As mentioned earlier, incorporating power iteration and employing a few extra test vectors beyond the desired number of modes can improve the accuracy of the resolvent modes. In many cases, the approximation error of RSVD is the primary source of error in RSVD-\(\Delta t\), such that it accurately reproduces the results of the RSVD-LU algorithm. ### Time stepping sources of error When computing the action of \(\boldsymbol{\mathsf{R}}\) and \(\boldsymbol{\mathsf{R}}^{*}\) using time stepping, two types of errors are introduced in addition to the RSVD approximation. #### 7.2.1 Truncation error The first source of time-stepping error is the truncation error of the numerical integration schemes used to solve the time-domain equations. Common approaches include classical numerical integration schemes such as Runge-Kutta, implicit/explicit Euler, Adams-Moulton family, and others (Hairer _et al._, 1993; Wanner & Hairer, 1996). These methods introduce truncation errors resulting from the approximation of Taylor series expansions. Hence, a chosen time step introduces an expected truncation error, with higher-order schemes providing greater precision. Local truncation error (LTE) is derived for ODEs as \[LTE=C\frac{d^{p}f(t)}{dt^{p}}O(dt^{p}), \tag{7.1}\] where \(C\) is a constant, and \(p\) is the order of the time-stepping scheme. In this study, our focus is on ODEs with harmonic forcing \(f(t)=\hat{f}e^{\mathrm{i}\omega t}\). Substituting the forcing term into (7.1), we observe that \[LTE\propto O((\omega dt)^{p}). \tag{7.2}\] This equation indicates that for a fixed time step \(dt\), the error in the computed resolvent modes will be frequency dependent and vary as \(\omega^{p}\). Therefore, in addition to satisfying any stability constraints, the time step \(dt\) must be selected such that \(\omega_{max}dt\) is sufficiently small to obtain accurate resolvent modes up to the maximum desired frequency \(\omega_{max}\). #### 7.2.2 Transient error The second source of time-stepping error arises from the unwanted transient response. The solution of (4.1) can be written as a sum of its transient and steady-state components, \[\mathbf{q}(t)=\mathbf{q}_{t}(t)+\mathbf{q}_{s}(t), \tag{7.3}\] where the transient part \(\mathbf{q}_{t}\) decays to zero as \(t\to\infty\) and the steady-state part \(\mathbf{q}_{s}\) is \(T\)-periodic, _i.e._, \(\mathbf{q}_{s}(t+T)=\mathbf{q}_{s}(t)\). 
Taking the Fourier transform of each part leads to \[\hat{\mathbf{q}}(\omega)=\hat{\mathbf{q}}_{t}(\omega)+\hat{\mathbf{q}}_{s}(\omega). \tag{7.4}\] Only the steady-state solution is desired, so any non-zero transient part constitutes an error in our representation of the action of the resolvent operator (or its adjoint) on the prescribed forcing. The transient response can be understood as the response of the system to an initial condition that is not synced with the forcing applied to the system. It may initially grow for non-normal systems like the LNS equations (Schmid, 2007) but eventually decays at the rate of the least-damped eigenvalue of \(\mathbf{\mathsf{A}}\). We define the transient error as the ratio between the norms of the transient and steady-state responses, \[\epsilon=\frac{||\mathbf{q}_{t}||}{||\mathbf{q}_{s}||}, \tag{7.5}\] where the \(l^{2}\)-norms can be replaced with \(||\cdot||_{q}\) for non-identity weight matrices. In cases where we solve (4.1) with a zero initial condition (which is often the case), _i.e._, \(\mathbf{q}(0)=\mathbf{q}_{t}(0)+\mathbf{q}_{s}(0)=0\), the transient error is initially one, \[\epsilon(0)=\frac{||\mathbf{q}_{t}(0)||}{||\mathbf{q}_{s}(0)||}=1. \tag{7.6}\] In the long term, the transient error approaches zero, \[\lim_{t\to\infty}\epsilon(t)=\lim_{t\to\infty}\frac{||\mathbf{q}_{t}(t)||}{||\mathbf{ q}_{s}(t)||}=0, \tag{7.7}\] since \(||\mathbf{q}_{s}||\) remains bounded. The eigenspectrum of the linearized system \(\mathbf{\mathsf{A}}\) provides insights into the long-term response of the homogeneous system. Any initial perturbation will eventually follow the least-damped mode. However, in practice, computing the eigenspectrum of \(\mathbf{\mathsf{A}}\) is challenging, especially for large systems. Even obtaining a small number of eigenvalues using the Krylov-Schur method can be cumbersome. Therefore, a practical approach to understanding the long-term behavior of a system is to simulate the homogeneous ODE \[\frac{d\mathbf{q}_{h}}{dt}-\mathbf{A}\mathbf{q}_{h}=0, \tag{7.8}\] initialized with a random state (Eriksson & Rizzi, 1985; Edwards _et al._, 1994). A random perturbation represents a worst-case scenario, as it excites all the slow modes of \(\mathbf{\mathsf{A}}\). By monitoring the norm of \(\mathbf{q}_{h}\) over time, we can estimate the slowest decay rate, which corresponds to the real part of the least-damped eigenvalue of \(\mathbf{\mathsf{A}}\). This also gives us an indication of the expected magnitude of the transient error. Performing a DFT on one cycle of the transient response allows us to determine the anticipated level of transient error within the desired frequency range. While it is possible to simply wait for the transient error to naturally decay over time, this approach comes with increased CPU cost, as it requires longer simulation durations. In SS8.2, we will present an efficient method to achieve a smaller transient error within a shorter time frame. ## 8 Optimizing the RSVD-\(\Delta t\) algorithm In this section, we present several approaches aimed at reducing the CPU cost and memory requirements of the RSVD-\(\Delta t\) algorithm. These approaches, combined with the improved cost scaling of RSVD-\(\Delta t\) compared to the RSVD-LU algorithm as discussed in SS6, are crucial in facilitating affordable resolvent analysis of complex three-dimensional flows. ### Minimizing memory requirements First, we describe several strategies to minimize the memory required to compute resolvent modes for a given problem. 
Figure 3: Schematic of the action of \(\mathbf{R}\) with (a) FFT/iFFT and (b) streaming DFT/iDFT methods to transform between the Fourier and time domains. #### 8.1.1 Streaming Fourier sums A straightforward implementation of computing the action of \(\boldsymbol{R}\) (or \(\boldsymbol{R}^{*}\)) via time stepping entails (\(i\)) transferring the forcing from Fourier space to the time domain, \(\hat{\boldsymbol{F}}\stackrel{{\mathrm{iFFT}}}{{\longrightarrow}} \boldsymbol{F}\), (\(ii\)) performing integration to obtain the steady-state solutions saved with a specific time interval, as explained in SS4.2, and (\(iii\)) transferring the response back to frequency space, \(\boldsymbol{Q}\stackrel{{\mathrm{FFT}}}{{\longrightarrow}} \hat{\boldsymbol{Q}}\). A schematic of these steps is displayed in figure 3(a). The first step requires zero-padding \(\hat{\boldsymbol{F}}\in\mathbb{C}^{N\times k\times N_{\omega}}\) since \(\boldsymbol{F}\in\mathbb{C}^{N\times k\times N_{s}}\) is required at all \(N_{s}\gg N_{\omega}\) points in the period associated with the time step \(dt\ll\Delta t\) required for accurate time stepping. The iFFT is computationally efficient but storing its output requires a minimum memory allocation of \(O(NkN_{s})\), excluding space for the iFFT calculations themselves. \(\hat{\boldsymbol{F}}\) is automatically discarded before proceeding to the second step. In step (\(ii\)), \(\boldsymbol{f}_{j}\in\boldsymbol{F}\) is used to force the linear system at each time step until the transient ends, and the steady-state responses are stored in \(\boldsymbol{Q}\). After integration, \(\boldsymbol{F}\) is no longer needed and is removed. Lastly, obtaining \(\hat{\boldsymbol{Q}}\) from \(\boldsymbol{Q}\) using an FFT requires an \(O(NkN_{\omega})\) space to store the output. Overall, a minimum memory allocation of \(O(NkN_{s})+O(NkN_{\omega})\) is necessary to store both \(\boldsymbol{F}\) and \(\boldsymbol{Q}\) simultaneously. The memory requirements of this process can be significantly reduced by leveraging streaming Fourier sums, as in the streaming SPOD algorithm proposed by Schmidt and Towne (2019). This procedure is shown schematically in figure 3(b). In the streaming approach, a new forcing snapshot is created before each time step and promptly removed afterward. Also, the contribution to the Fourier modes of the response is computed only at specific time steps, after which the snapshot of the solution can be discarded. This eliminates the need to permanently store any data in the time domain, reducing the memory requirement to \(2\times O(NkN_{\omega})\) for storing \(\hat{\boldsymbol{F}}\) and \(\hat{\boldsymbol{Q}}\). The streaming implementation utilizes the DFT formulation to create forcing inputs and compute the effect of steady-state response data on the ensemble of Fourier coefficients, as demonstrated in the following. At each time step, the instantaneous forcing is created from its Fourier mode using the definition of the inverse Fourier transform, \[\boldsymbol{f}_{p}=\sum_{s=1}^{N_{\omega}}\boldsymbol{\mathsf{Z}}_{ps}^{ \prime}\hat{\boldsymbol{f}}_{s}, \tag{8.1}\] where \(\boldsymbol{\mathsf{Z}}_{ps}^{\prime}=exp(-2\pi\mathrm{i}/N_{s})^{(p-1)(s-1)}\). The integer \(p\) (\(1\leq p\leq N_{s}\)) specifies the phase of the periodic forcing at the current time step. Here, \(\hat{\boldsymbol{f}}_{s}\in\mathbb{C}^{N\times k\times N_{\omega}}\) denotes Fourier modes that are accessible from memory. 
The sum is taken over every \(\omega\in\Omega\), and it outputs the \(p^{th}\) time domain snapshot \(\boldsymbol{f}_{p}\in\mathbb{C}^{N\times k}\). This process continues in a loop of size \(N_{s}\) until the transient is passed and steady-state data is computed. The response Fourier modes can be computed from the time-domain steady-state solutions in a similar streaming fashion. Following the definition of the DFT, each temporal snapshot \(\boldsymbol{q}_{l}\) within the steady-state response contributes to each Fourier mode according to the partial sum \[\left[\hat{\boldsymbol{q}}_{s}\right]_{r}=\left[\hat{\boldsymbol{q}}_{s}\right]_{r-1}+\boldsymbol{\mathsf{Z}}_{rs}\boldsymbol{q}_{r}=\sum_{l=1}^{r}\boldsymbol{\mathsf{Z}}_{ls}\boldsymbol{q}_{l}, \tag{8.2}\] where \(\boldsymbol{\mathsf{Z}}_{ls}=exp(-2\pi\mathrm{i}/N_{\omega})^{(l-1)(s-1)},1\leq(l,s)\leq N_{\omega}\). Here, \(\left[\hat{\boldsymbol{q}}_{s}\right]_{r}\) represents the sum of contributions up to \(\boldsymbol{q}_{r}\), which is the \(r^{th}\) steady-state response and can be discarded after adding its contribution to the sum. The partial sum is complete once \(r=N_{\omega}\), _i.e._, the effect of all \(N_{\omega}\) steady-state snapshots is included. A subtle but important difference between the iDFT matrix \(\boldsymbol{\mathsf{Z}}^{\prime}\in\mathbb{C}^{N_{s}\times N_{\omega}}\) and the DFT matrix \(\boldsymbol{\mathsf{Z}}\in\mathbb{C}^{N_{\omega}\times N_{\omega}}\) is their sizes: \(\boldsymbol{\mathsf{Z}}^{\prime}\) is used to generate \(N_{s}\) temporal snapshots of the forcing from \(N_{\omega}\) Fourier modes, while \(\boldsymbol{\mathsf{Z}}\) is used to convert \(N_{\omega}\) temporal snapshots of the steady-state solution into \(N_{\omega}\) Fourier modes. The streaming process for the adjoint equations is identical, except that the equations are integrated backward in time and the indices within the Fourier sums are adjusted accordingly. The CPU time and memory requirements of the FFT/iFFT and streaming DFT/iDFT approaches are summarized in table 2. Although the streaming method incurs slightly higher CPU cost due to the efficiency of the FFT algorithm, this CPU overhead is negligible compared to the cost of taking a time step. Moreover, the memory savings of the streaming method can be substantial; the ratio of the memory required by the iFFT and streaming iDFT methods used to create the forcing snapshots scales like \(O(N_{s}/N_{\omega})\), where \(N_{\omega}\sim O(10^{2})\) and \(N_{s}\sim O(10^{3}-10^{5})\) are typical values. Overall, the substantial memory benefit of the streaming method outweighs the small CPU penalty, especially for large systems.

#### 8.1.2 Optimal cost for real-valued matrices

The linear operator \(\mathbf{\mathsf{A}}\) is often real-valued, in which case the memory requirements can be further reduced.
Assuming \(\mathbf{\mathsf{R}}=(\mathrm{i}\omega\mathbf{\mathsf{I}}-\mathbf{\mathsf{A}})^{-1}=\mathbf{ \mathsf{U}}\mathbf{\Sigma}\mathbf{V}^{*}\), the resolvent operator corresponding to \(-\omega\) can be written as \[\mathbf{\mathsf{R}}_{-\omega}=(-\mathrm{i}\omega\mathbf{\mathsf{I}}-\mathbf{\mathsf{A}})^ {-1}=(\overline{\mathrm{i}\omega}\mathbf{\mathsf{I}}-\overline{\mathbf{\mathsf{A}}})^ {-1}=\overline{(\mathrm{i}\omega\mathbf{\mathsf{I}}-\mathbf{\mathsf{A}})^{-1}}=\overline {\mathbf{\mathsf{R}}}_{\omega}=\overline{\mathbf{\mathsf{U}}}\mathbf{\Sigma}\overline{\bm {V}}^{*}, \tag{8.3}\] where \(\bar{(\cdot)}\) denotes the complex conjugate and \(\mathbf{\mathsf{A}}=\overline{\mathbf{\mathsf{A}}}\) when \(\mathbf{\mathsf{A}}\) is real-valued. Equation (8.3) proves that the gains of positive and negative frequencies are symmetric and the resolvent modes are complex conjugates of one another. Therefore, computing the resolvent modes for positive \(\omega\in\Omega\) naturally provides results for negative frequencies. This symmetry halves the CPU cost for the RSVD-LU algorithm but does not reduce the memory requirement. On the other hand, in the case of RSVD-\(\Delta t\), the memory requirements are halved, but there is no significant reduction in the CPU, as further elaborated. Since the frequencies of interest become \(\Omega_{+}=\{0,+\omega_{min},+2\omega_{min},...,+\omega_{max}\}\), the total number of frequencies becomes \(\lfloor\frac{N_{\omega}}{2}\rfloor+1\). In this scenario, only Fourier coefficients corresponding to \(\omega\in\Omega_{+}\) are saved and the memory storage required for both input and output matrices (\(\mathbf{\mathsf{\tilde{F}}}\) and \(\mathbf{\mathsf{\hat{O}}}\) discussed in SS8.1.1) is halved. In terms of CPU, generating the forcing and computing the response is twice as fast but the speed-up is not significant as the time stepping remains identical to the complex-valued case. #### 8.1.3 An additional option for reducing memory If additional memory savings are required, the memory requirements of RSVD-\(\Delta t\) can be sharply reduced by dividing the frequencies of interest into multiple sets at the expense of additional CPU \begin{table} \begin{tabular}{c c c} \(\mathcal{F}/\mathcal{F}^{-1}\) & CPU time & Memory \\ iFFT & \(Nk\times O(N_{s}log(N_{s}))\) & \(O(NkN_{s})\) \\ FFT & \(Nk\times O(N_{\omega}log(N_{\omega}))\) & \(O(NkN_{\omega})\) \\ Streaming iDFT & \(Nk\times O(N_{\mathrm{total}}N_{\omega})\) & \(O(NkN_{\omega})\) \\ Streaming DFT & \(Nk\times O(N_{\omega}^{2})\) & \(O(NkN_{\omega})\) \\ \end{tabular} \end{table} Table 2: Comparison of CPU time and memory requirements using FFT/iFFT and streaming DFT/iDFT methods transfer back and forth between Fourier space and time domain. \(N_{\mathrm{total}}=N_{t}+N_{s}\) is the total number of time steps including transient and steady-state parts. cost. For instance, when the frequencies are divided into \(d\) equal groups, the memory requirement is reduced by a factor of \(d\). The penalty of doing so is that the CPU time scales proportionally with \(d\), since the entire algorithm needs to be repeated for each group of frequencies. The RSVD-LU algorithm offers no such opportunity to reduce memory requirements, _e.g._, to make a particular calculation possible on a given computer, at the expense of higher CPU cost. 
### Minimizing the CPU cost: efficient transient removal Within the time-stepping process, the removal of the transient responses is crucial and is naturally accomplished through the long-time integration of (4.1), as discussed in SS7.2.2. Nonetheless, certain LNS operators exhibit a painfully slow decay rate, resulting in lengthy transient durations and costly time stepping. Therefore, we present an efficient transient removal strategy to minimize the CPU cost. Our strategy uses the differing evolution of the steady state and transient parts of the solution to directly compute and remove the transient from the solution. Considering two solutions of (4.1), \(\mathbf{q}_{1}=\mathbf{q}(t_{1})\) and \(\mathbf{q}_{2}=\mathbf{q}(t_{1}+\Delta t)\), we can express them in terms of their steady-state and transient parts, as in (7.3), as \[\mathbf{q}_{1} =\mathbf{q}_{s,1}+\mathbf{q}_{t,1}, \tag{8.4}\] \[\mathbf{q}_{2} =\mathbf{q}_{s,2}+\mathbf{q}_{t,2},\] where \(\mathbf{q}_{s,1},\mathbf{q}_{s,2},\mathbf{q}_{t,1}\), and \(\mathbf{q}_{t,2}\) are four unknowns. Applying a prescribed forcing in (4.1) at a single frequency \(\omega\) yields \[\mathbf{q}_{s,2}=\mathbf{q}_{s,1}e^{\mathrm{i}\omega\Delta t}. \tag{8.5}\] Also, the transient response follows the form of a homogenous response, resulting in \[\mathbf{q}_{t,2}=e^{\mathbf{\mathsf{A}}\Delta t}\mathbf{q}_{t,1}. \tag{8.6}\] Simplifying (8.4), (8.5), and (8.6) for \(\mathbf{q}_{t,1}\), we obtain \[(\mathbf{l}-e^{-\mathrm{i}\omega\Delta t}e^{\mathbf{\mathsf{A}}\Delta t})\mathbf{q}_{t,1} =\mathbf{b}, \tag{8.7}\] where \(\mathbf{b}=\mathbf{q}_{1}-\mathbf{q}_{2}e^{-\mathrm{i}\omega\Delta t}\) is known from the time-stepping solution. Equation (8.7) holds for any two points in time with arbitrary separation \(\Delta t\). The exact steady-state solution with no transient error is obtained by solving (8.7) for \(\mathbf{q}_{t,1}\) and using (8.4) to obtain \(\mathbf{q}_{s,1}=\mathbf{q}_{1}-\mathbf{q}_{t,1}\). The prescribed forcing in RSVD-\(\Delta t\) consists of a range of frequencies, hence, it requires a pre-processing step to enable the transient removal strategy. We utilize \(\mathbf{\mathsf{Q}}=\{\mathbf{q}_{1},\mathbf{q}_{2},\mathbf{q}_{3},...,\mathbf{q}_{N_{\omega}}\}\) to construct \(\mathbf{\hat{\mathsf{O}}}\in\mathbb{C}^{N\times N_{\omega}}\), where the snapshots are equidistant with a time interval of \(\Delta t\). Additionally, we define \(\mathbf{\mathsf{O}}^{\Delta t}=\{\mathbf{q}_{2},\mathbf{q}_{3},\mathbf{q}_{4},...,\mathbf{q}_{N_{ \omega}+1}\}\) as a shifted matrix, resulting in \(\mathbf{\hat{\mathsf{O}}}^{\Delta t}\in\mathbb{C}^{N\times N_{\omega}}\). Here, \(\hat{\mathbf{q}}_{j}\in\mathbf{\hat{\mathsf{O}}}\) represents \(\mathbf{q}_{1}\) in the above equations, while \(\hat{\mathbf{q}}_{j}^{\Delta t}\in\mathbf{\hat{\mathsf{O}}}^{\Delta t}\) represents \(\mathbf{q}_{2}\), both oscillating at the same frequency. Therefore, a single time stepping is sufficient to obtain (8.7) for all \(\omega\in\Omega\). Solving (8.7) can be computationally expensive, particularly for large systems, even if we assume that computing \(e^{\mathbf{\mathsf{A}}\Delta t}\) is feasible. To address this issue, one possible approach is to choose a small \(\Delta t\) and expand the exponential term as \(e^{\mathbf{\mathsf{A}}\Delta t}=\sum_{j}\frac{(\mathbf{\mathsf{A}}\Delta t)^{j}}{j!}\). However, this leads to solving a similar linear system to (6.1), which we wish to avoid. 
Another approach is to leverage iterative methods (_e.g._, GMRES) when \(\Delta t\) is sufficiently large. Although the solution may converge within a reasonable time frame, solving similar systems needs to be repeated for all test vectors and frequencies. To overcome these challenges, we propose employing Petrov-Galerkin (or Galerkin) projection to obtain an affordable, approximate solution of (8.7). Consider a low-dimensional representation of the transient response as \[\mathbf{q}_{t,1}=\mathbf{\phi}\mathbf{\beta}_{1}, \tag{8.8}\] where \(\mathbf{\phi}\in\mathbb{C}^{N\times r}\), with \(r\ll N\), is an orthonormal test basis spanning the transient response and \(\mathbf{\beta}_{1}\in\mathbb{C}^{r}\) represents the coefficients describing the transient in this basis. By substituting (8.8) into (8.7), the linear system \[(\mathbf{l}-e^{-\mathrm{i}\omega\Delta t}e^{\mathbf{\mathsf{A}}\Delta t})\mathbf{\phi}\mathbf{ \beta}_{1}=\mathbf{b} \tag{8.9}\] is overdetermined. Petrov-Galerkin projection with trial basis \(\mathbf{\psi}\in\mathbb{C}^{N\times r}\) is employed to close (8.9), giving \[\mathbf{\psi}^{*}(\mathbf{l}-e^{-\mathrm{i}\omega\Delta t}e^{\mathbf{\mathsf{A}}\Delta t}) \mathbf{\phi}\mathbf{\beta}_{1}=\mathbf{\psi}^{*}\mathbf{b}. \tag{8.10}\] Solving (8.10) for \(\mathbf{\beta}\) and inserting the solution into (8.8) yields \[\mathbf{q}_{t,1}=\mathbf{\phi}(\mathbf{\psi}^{*}\mathbf{\phi}-e^{-\mathrm{i}\omega\Delta t} \tilde{\mathbf{M}})^{-1}\mathbf{\psi}^{*}\mathbf{b}, \tag{8.11}\] where \[\tilde{\mathbf{M}}=\mathbf{\psi}^{*}e^{\mathbf{\mathsf{A}}\Delta t}\mathbf{\phi}\in\mathbb{C} ^{r\times r} \tag{8.12}\] is a reduced matrix that maps the coefficients. The advantage of this strategy is that it allows for the computation of the inverse of \((\mathbf{\psi}^{*}\mathbf{\phi}-e^{-\mathrm{i}\omega\Delta t}\tilde{\mathbf{M}})\) due to its reduced dimension. Obtaining \(\tilde{\mathbf{M}}\) is also an efficient process, involving two steps: \((i)\) integrating the columns of \(\mathbf{\phi}\) over \(\Delta t\), and \((ii)\) projecting \(e^{\mathbf{\mathsf{A}}\Delta t}\mathbf{\phi}\) onto the columns of \(\mathbf{\psi}\). The construction cost of \(\tilde{\mathbf{M}}\) for each \(\omega\in\Omega\) is primarily determined by the first step. Specifically, when the number of columns in \(\mathbf{\phi}\) is \(r=N_{\omega}\) and \(\Delta t=T_{s}/N_{\omega}\), the total cost of constructing \(\tilde{\mathbf{M}}\) for all \(\omega\in\Omega\) is equivalent to integrating the LNS equations for an additional \(T_{s}\) duration. Galerkin projection is a special case of the above procedure in which the test and trial functions are the same, _i.e._, \(\mathbf{\phi}\) is also the trial function. Using this strategy with either Galerkin or Petrov-Galerkin projections, the accuracy of the solution relies on the ability of the column space of \(\mathbf{\phi}\) to adequately span the transient response. Thus, the challenge lies in constructing an appropriate basis to accurately capture the transient behavior. Before the introduction of appropriate test bases, we note that one can construct a new \(\mathbf{\phi}\) for each \(\omega\in\Omega\), however, the bases that we define later are universal for all frequencies. Hence, the reduced matrix \(\tilde{\mathbf{M}}\) is constructed once for all frequencies. Subsequently, (8.11) obtains transient responses at each frequency and updates the steady-state responses. 
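To make the procedure concrete, the following minimal Python sketch reproduces the transient-removal step of (8.4)-(8.12) for a small synthetic stable operator, using Galerkin projection (\(\mathbf{\psi}=\mathbf{\phi}\)). All matrices, dimensions, and parameter values below are illustrative stand-ins rather than part of our implementation; here \(\mathbf{\phi}\) is built from a few slowly decaying directions of \(\mathbf{\mathsf{A}}\), one of the test-basis choices discussed next.

```python
import numpy as np
from scipy.linalg import expm

# Illustrative transient removal, (8.4)-(8.12), with Galerkin projection (psi = phi).
rng = np.random.default_rng(0)
n, r = 60, 10                      # state dimension and size of the test basis (toy values)
omega, t1, dT = 1.3, 2.0, 0.5      # forcing frequency, first snapshot time, snapshot separation

# Small stable matrix standing in for the LNS operator.
A = rng.standard_normal((n, n))
A -= (np.max(np.linalg.eigvals(A).real) + 0.2) * np.eye(n)

# Harmonic forcing f(t) = f_hat exp(i omega t); steady state q_s(t) = (i omega I - A)^{-1} f_hat exp(i omega t).
f_hat = rng.standard_normal(n) + 1j * rng.standard_normal(n)
c = np.linalg.solve(1j * omega * np.eye(n) - A, f_hat)

def q_full(t):
    # Full solution started from q(0) = 0, so the transient is expm(A t) @ (-c).
    return c * np.exp(1j * omega * t) + expm(A * t) @ (-c)

q1, q2 = q_full(t1), q_full(t1 + dT)
b = q1 - q2 * np.exp(-1j * omega * dT)                  # right-hand side of (8.7)

# Test basis: orthonormalized least-damped eigenvectors of A (one possible choice).
lam, V = np.linalg.eig(A)
phi = np.linalg.qr(V[:, np.argsort(-lam.real)[:r]])[0]
M = phi.conj().T @ expm(A * dT) @ phi                   # reduced matrix, (8.12)
beta = np.linalg.solve(np.eye(r) - np.exp(-1j * omega * dT) * M, phi.conj().T @ b)
q_t1 = phi @ beta                                       # approximate transient, (8.8)-(8.11)

# Compare the transient error before and after removal against the exact steady state.
q_s1 = c * np.exp(1j * omega * t1)
print("error before removal:", np.linalg.norm(q1 - q_s1) / np.linalg.norm(q_s1))
print("error after removal: ", np.linalg.norm(q1 - q_t1 - q_s1) / np.linalg.norm(q_s1))
```

The same algebra applies unchanged when the two snapshots come from the RSVD-\(\Delta t\) time-stepping run and \(\mathbf{\phi}\) is replaced by any of the test bases described below.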
Given the rapid decay of most terms in the transient response, it is advantageous to utilize the least-damped eigenvectors of \(\mathbf{\mathsf{A}}\) as the chosen test basis. By excluding the least-damped eigenvectors, we effectively increase the decay rate of the transient response. Let \(\lambda_{1}\) denote the least-damped eigenvalue of \(\mathbf{\mathsf{A}}\), with \(\mathbf{V}_{1}\) representing the corresponding eigenvector. We define \(\mathbf{\phi}=\mathbf{V}_{1}\), thereby removing the transient component projected onto \(\mathbf{V}_{1}\). As a result, the norm of the updated transient, obtained by subtracting this projection, follows the decay rate associated with the second least-damped eigenvalue of \(\mathbf{\mathsf{A}}\). Similarly, the test basis \(\mathbf{\phi}\) can encompass the first \(r-1\) least-damped eigenvectors, \(\mathbf{\phi}=\mathrm{orth}\{\mathbf{V}_{1},\mathbf{V}_{2},...,\mathbf{V}_{r-1}\}\), leading to a decay rate governed by the \(r^{th}\) least-damped eigenvalue of \(\mathbf{\mathsf{A}}\). For this particular test basis, Petrov-Galerkin projection can be utilized, where \(\mathbf{\psi}\) incorporates the adjoint eigenvectors. This approach ensures the complete elimination of transient projection onto the least-damped modes. To be clear, this procedure does not eliminate the impact of these modes on the steady-state response, but only on the transient response. The main challenge associated with this test basis is the computational cost of computing the least-damped eigenvectors (and adjoint eigenvectors in the case of Petrov-Galerkin projection), especially for large systems, even when using algorithms designed for this purpose, _e.g._, Krylov-based methods (Eriksson & Rizzi, 1985; Edwards _et al._, 1994). Overall, the least-damped modes of \(\boldsymbol{\mathsf{A}}\) are most helpful for systems that suffer from only a few slowly decaying modes. Another powerful test basis is formed by stacking the snapshots into a matrix during the integration of the LNS equations, resulting in \(\boldsymbol{\phi}=\operatorname{orth}\{\boldsymbol{q}_{1},\boldsymbol{q}_{2}, \boldsymbol{q}_{3},...,\boldsymbol{q}_{r}\}\) (an orthogonalization of the matrix of snapshots). Specifically, \(\boldsymbol{\phi}\) can be constructed as the union of \(\boldsymbol{\hat{\mathsf{O}}}\) and \(\boldsymbol{\hat{\mathsf{O}}}^{\Delta t}\) as a reliable test basis. Performing QR decomposition on this matrix is essential to ensure orthogonality. As the LNS equations are allowed to run for a longer duration, \(\boldsymbol{\phi}\) becomes an increasingly effective test basis, providing improved estimates of the transient responses across all frequencies \(\omega\in\Omega\). We have observed that this basis is particularly accurate for higher frequencies compared to lower ones. A feature of our transient-removal approach is its flexibility in incorporating multiple test bases. For instance, by considering the matrix of least-damped eigenvectors of \(\boldsymbol{\mathsf{A}}\) in \(\boldsymbol{\phi}_{1}\) and the on-the-fly snapshots in \(\boldsymbol{\phi}_{2}\), a combined test basis \(\boldsymbol{\phi}=\boldsymbol{\phi}_{1}\cup\boldsymbol{\phi}_{2}\) can be constructed and orthogonalized. The combination of test bases, with \(\boldsymbol{\phi}_{2}\) being highly effective at higher frequencies, offers benefits at lower frequencies. The expected transient error remaining before and after applying our transient removal approach can be estimated using a preprocessing step. 
We begin by integrating the homogeneous system (7.8) using a random initial condition with unit norm. By employing (8.4), (8.5), and (8.6), and assuming \(\boldsymbol{q}_{s}=0\), we can apply either Petrov-Galerkin or Galerkin projection to calculate the updated transient norms. This approach is feasible when \(\boldsymbol{\phi}\) does not depend on real-time simulation, such as when it represents the matrix of least-damped eigenvectors. However, if \(\boldsymbol{\phi}\) consists of snapshots, we must generate synthetic snapshots. To accomplish this, we set \(\boldsymbol{q}_{s,0}=-\boldsymbol{q}_{t,0}\) to ensure the initial snapshot \(\boldsymbol{q}_{0}=\boldsymbol{q}_{s,0}+\boldsymbol{q}_{t,0}\) equals zero. Subsequent snapshots are obtained by superimposing the transient responses (from the homogeneous simulation) onto steady-state responses generated as \(\boldsymbol{q}_{s,j}=e^{\mathrm{i}\omega j\Delta t}\boldsymbol{q}_{s,1}\), where \(\Delta t\) is the time-distance between snapshots. Using this technique, we can construct \(\boldsymbol{\phi}\) for varying periods and assess the efficacy of the transient removal strategy. The updated transient error, similar to (7.5), is computed as the ratio of norms between the updated transient and steady-state responses, which monotonically decreases after the transient growth phase. This iterative process is performed for all \(\omega\in\Omega\), necessitating the generation of fresh snapshots for the steady-state responses while keeping the transient response fixed. The computational expense associated with obtaining this _a priori_ error estimate is primarily determined by the integration of the homogeneous system and typically constitutes less than 5% of the overall cost of executing the complete algorithm for computing the resolvent modes. We illustrate the application of this strategy using various test bases in SS9. ## 9 Test cases In this section, the RSVD-\(\Delta t\) algorithm is tested using two problems. First, the accuracy of the algorithm and the effectiveness of the transient removal strategy are verified using the complex Ginzburg-Landau equation. Second, the computational efficiency and scalability of the algorithm are demonstrated and compared to that of the RSVD-LU algorithm using a three-dimensional discretization of a round jet. ### Complex Ginzburg-Landau equation The complex Ginzburg-Landau equation was initially derived for analytical studies of Poiseuille flow (Stewartson & Stuart, 1971) and has subsequently been used more generally as a convenient model of a flow susceptible to non-modal amplification (Hunt & Crighton, 1991; Bagheri _et al._, 2009; Chen & Rowley, 2011; Cavalieri _et al._, 2019). Here, we use it as an inexpensive test case to validate our algorithm. The complex Ginzburg-Landau system follows the form of (2.3) with \[\begin{split}\boldsymbol{\mathsf{A}}=-\nu\frac{\partial}{ \partial x}+\gamma\frac{\partial^{2}}{\partial x^{2}}+\mu(x),\\ \mu(x)=(\mu_{0}-c_{\mu}^{2})+\frac{\mu_{2}}{2}x^{2},\\ \boldsymbol{\mathsf{B}}=\boldsymbol{\mathcal{C}}=\boldsymbol{ \mathsf{I}}.\end{split} \tag{9.1}\] Following Bagheri _et al._ (2009), we set \(\gamma=1-\mathrm{i},\nu=2+0.2\mathrm{i},\mu_{0}=0.38,c_{\mu}=0.2\), and \(\mu_{2}=-0.01\). These parameters ensure global stability and provide a large gain separation between the leading mode and the rest of the modes at the peak frequency (Bagheri _et al._, 2009). 
To explicitly build the \(\boldsymbol{\mathsf{A}}\) operator, a central finite difference method is used to discretize \(x\in[-100,100]\) using \(N=500\) grid points. The domain is sufficiently extended in both \(\pm x\) directions such that it resembles infinite boundaries (Bagheri _et al._, 2009), and the weight matrix \(\boldsymbol{\mathsf{W}}\) is set to the identity on account of the uniform grid. #### 9.1.1 RSVD-\(\Delta t\) validation: assessing the transient and truncation errors The RSVD-\(\Delta t\) outcome must replicate the RSVD-LU outcome up to machine precision when cutting both sources of errors described in SS7.2. Truncation error depends on the integration scheme and the time step, while the transient error depends on the length of the simulation. Therefore, using a tiny time step with a high-order integration scheme and a lengthy transient duration should eliminate the errors due to time integration. Time-stepping errors are investigated by setting the number of test vectors to \(k=1\) and power iterations to \(q=0\). These minimal values are used since including additional test vectors or power iterations have no effect on the time-stepping error. The desired set of frequencies is \(\Omega\in[-4,4]\) with \(\Delta\omega=0.05\). The gains of the Ginzburg-Landau system are computed using RSVD and RSVD-\(\Delta t\) Figure 4: Relative error between gains computed using the RSVD-LU and RSVD-\(\Delta t\) algorithms for the Ginzburg-Landau problem: (a) \(T_{t}=5000\) and {TSS, \(dt\)} = {BDF4, 0.1} (purple), {BDF4, 0.01} (red), (BDF6, 0.01) (green), and {BDF6, 0.001} (blue) varies; (b) {BDF6, 0.001} is fixed and \(T_{t}\) varies as 500 (purple), 1000 (red), 2500 (green), and 5000 (blue). In (a), the exponents m are shown for the best-fit exponential within \(\omega\in[0.6,4]\). and the relative errors for various cases are shown in figure 4. The minimum error is near machine precision when BDF6, \(dt=10^{-3}\), and \(T_{t}=5000\) is used, validating the RSVD-\(\Delta t\) algorithm. By decreasing the order of the integration scheme or increasing the time step, the truncation error becomes larger, and hence, the error in the computed gains becomes larger. In figure 4(a), the transient length is held fixed at \(T_{t}=5000\) and the gains are obtained using {BDF6, \(dt=10^{-2}\)}, {BDF4, \(dt=10^{-2}\)}, and {BDF4, \(dt=10^{-1}\)}. For all four cases, the relative error is around \(O(10^{-13})\) at \(\omega=0\), confirming that the transient effect is negligible. Moving away from zero frequency, the errors increase like \(O(\omega^{\sim 4})\) and \(O(\omega^{\sim 6})\) for the BDF4 and BDF6 schemes, respectively, consistent with the theoretical asymptotic estimates in SS7.2. Figure 4(b) displays how the length of time that the transient is allowed to decay can affect the accuracy of the gains as a function of frequency. This time, the time-stepping scheme of {BDF6, \(dt=10^{-3}\)} is held fixed, ensuring negligible truncation error, and the transient lengths are varied as \(T_{t}=\{500,1000,2500,5000\}\). Smaller values of \(T_{t}\) leave more transient residual in the steady-state response. The resulting relative gain errors show that the whole frequency spectrum is affected quite similarly. Longer transient lengths lead to smaller gain errors with a similar trend. The frequency distribution of the transient error depends on the eigenspectrum of the system. 
For example, a cluster of weakly damped modes around a specific frequency can lead to a peak transient error localized at the same frequency. In §9.2, the peak transient error for the jet flows occurs near zero frequency.

#### 9.1.2 Efficient transient removal

In this section, we demonstrate the transient removal strategy proposed in §8.2. We apply this strategy to the same Ginzburg-Landau system for the same \(\Omega\) range described above and compare the results to the RSVD-LU results as a reference. The eigenspectrum of the Ginzburg-Landau operator is shown in figure 5(a), and the three least-damped (and thus slowest decaying) modes have decay rates of \(\lambda_{1,r}=-0.008,\lambda_{2,r}=-0.163\), and \(\lambda_{3,r}=-0.318\), respectively, where the subscript \(r\) indicates the real part of the eigenvalue \(\lambda\).

Figure 5: Transient removal for the Ginzburg-Landau test problem: (a) spectrum of the Ginzburg-Landau operator with a zoomed-in view of the three least-damped eigenvalues; (b) transient error measurement: the blue curve represents the original decay, while the green, red, and purple curves depict the decay using Galerkin projection with \(\mathbf{\phi}\) of \(\mathbf{V}_{1}\), \(\{\mathbf{V}_{1},\mathbf{V}_{2}\}\), and a matrix of snapshots, respectively; (c) relative error comparison between the RSVD-\(\Delta t\) and RSVD-LU algorithms. Solid horizontal lines in (c) represent the expected transient error arising from the transient norm at the end of \(T_{t}\) (the black vertical line in (b)).

Figure 5(b) depicts the transient norm as a function of time, where \(\epsilon\) is measured as follows: we initially obtain the _true_ steady-state solution by integrating (4.1) for a very long time at \(\omega=0.5\) (similar results hold for other frequencies), ensuring that the natural decay has eliminated the transient response to machine precision, and we use this steady-state response to measure the transient errors. The natural decay in this system occurs slowly, as illustrated in figure 5(b). By defining \(\mathbf{\phi}_{1}\) as \(\mathbf{V}_{1}\) and utilizing Galerkin projection, we remove the fraction of the transient decaying at the rate of \(e^{\lambda_{1}t}\), resulting in a noticeable change in the decay slope. Including the two least-damped modes with \(\mathbf{\phi}_{2}=\{\mathbf{V}_{1},\mathbf{V}_{2}\}\) further steepens the decay rate, aligning closely with the corresponding least-damped eigenvalues shown in figure 5(a). However, it is the matrix of snapshots that proves to be the most effective, completely eliminating the transient within a short period of time. We employ \(\{\text{BDF6},\,dt=10^{-2}\}\) to compute gains using RSVD-\(\Delta t\), considering three cases of transient removal that are halted at \(T_{t}=75\): \((i)\) natural decay, \((ii)\) Galerkin projection with \(\mathbf{\phi}_{1}\), and \((iii)\) Galerkin projection with \(\mathbf{\phi}_{2}\). The error is measured as the relative difference in gain between the RSVD-LU and RSVD-\(\Delta t\) algorithms, as depicted in figure 5(c). The plot clearly illustrates that smaller transient errors lead to reduced gain errors. In the first two cases, the transient error dominates, while in the third case, the transient error balances with the truncation error at lower frequencies, with truncation dominating at higher frequencies. Our findings indicate that the matrix of snapshots is an effective basis for representing and removing the transient.
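As a concrete illustration of the preprocessing described in §7.2.2 and used throughout this section, the sketch below builds a central-finite-difference Ginzburg-Landau operator with the parameters of §9.1 and estimates its slowest decay rate by integrating the homogeneous system (7.8) from a random initial condition. The RK4 integrator, time step, and integration horizon are chosen for this sketch only and are not the settings used for the results above.

```python
import numpy as np

# Build the Ginzburg-Landau operator of (9.1) with central finite differences and estimate
# the real part of its least-damped eigenvalue from the homogeneous response, as in (7.8).
N = 500
x = np.linspace(-100.0, 100.0, N)
dx = x[1] - x[0]
nu, gamma = 2 + 0.2j, 1 - 1j
mu0, c_mu, mu2 = 0.38, 0.2, -0.01
mu = (mu0 - c_mu**2) + 0.5 * mu2 * x**2

e = np.ones(N)
D1 = (np.diag(e[:-1], 1) - np.diag(e[:-1], -1)) / (2 * dx)                 # first derivative
D2 = (np.diag(e[:-1], 1) - 2 * np.diag(e) + np.diag(e[:-1], -1)) / dx**2   # second derivative
A = -nu * D1 + gamma * D2 + np.diag(mu)        # zero boundary values implied by the stencils

# Integrate dq/dt = A q with RK4 from a random unit-norm initial condition.
rng = np.random.default_rng(1)
q = rng.standard_normal(N) + 1j * rng.standard_normal(N)
q /= np.linalg.norm(q)
dt, T = 0.02, 300.0
times, norms = [], []
t = 0.0
while t < T:
    k1 = A @ q
    k2 = A @ (q + 0.5 * dt * k1)
    k3 = A @ (q + 0.5 * dt * k2)
    k4 = A @ (q + dt * k3)
    q = q + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
    t += dt
    times.append(t)
    norms.append(np.linalg.norm(q))

# The late-time slope of log||q_h|| estimates the slowest decay rate of A.
times, norms = np.array(times), np.array(norms)
fit = times > 0.6 * T
slope = np.polyfit(times[fit], np.log(norms[fit]), 1)[0]
print("fitted decay rate:", slope)
print("least-damped eigenvalue (real part):", np.max(np.linalg.eigvals(A).real))
```

The two printed values should agree with each other and, for this resolution, lie close to the value \(\lambda_{1,r}\approx-0.008\) quoted above.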
#### 9.1.3 Impact of power iteration Finally, we explore the impact of the number of power iterations \(q\) on the accuracy of the solution. For both the RSVD-LU and RSVD-\(\Delta t\) algorithms, we set \(k=6\) and vary \(q\) from \(0\) to \(2\). Additionally, Figure 6: Impact of power iteration on the Ginzburg-Landau gains: (a-c) the gains of the first three optimal modes using SVD (line) and RSVD-\(\Delta t\) (circle); and (d-e) the relative error between them. (a,d), (b,e), and (c,f) correspond to \(q\) of \(0\), \(1\), and \(2\), respectively. Black lines in (d-f) show the relative error between the RSVD-LU algorithm and SVD for reference. RSVD-\(\Delta t\) uses a BDF4 integrator with \(dt=0.001\) and \(T_{t}=100\), and transients are reduced by removing the least-damped eigenvalue, leading to an expected overall time-stepping error of \(O(10^{-8})\) according to Figure 5(c). A standard Arnoldi-based approach is used to provide a ground-truth reference for defining the error. The leading three singular values and corresponding relative errors are shown in figure 6. One power iteration leads to a noticeable accuracy improvement. As expected, using one or more power iterations substantially improves the accuracy of both the RSVD-LU and RSVD-\(\Delta t\) algorithms. The optimal singular value in particular improves dramatically for frequencies with a large gap between the optimal and suboptimal modes. The RSVD-LU errors approach machine precision near the peak frequency, while the RSVD-\(\Delta t\) errors saturate at the floor set by the choice of integration parameters. For the rest of the modes and frequencies, the relative error between the RSVD-LU and RSVD-\(\Delta t\) algorithms is smaller than the relative error between the RSVD-LU algorithm and the ground truth, so the relative errors are identical. We have found using one power iteration to be sufficient for most problems, and we recommend this as a default value for our algorithm. ### Round turbulent jet Second, a round jet is used to demonstrate the reduced cost and improved scaling of our algorithm. The mean flow is obtained from a large eddy simulation (LES) using the "Charles" compressible flow solver developed by Cascade Technologies (Bres _et al._, 2017, 2018), for Mach number \(M=\frac{U_{j}}{a}=0.4\) and Reynolds number \(Re=\frac{U_{j}D_{j}}{\nu_{j}}=0.45\times 10^{6}\). Here, \(U_{j}\) is the mean centerline velocity at the nozzle exit, \(a\) is the ambient speed of sound, \(\nu_{j}\) is the kinematic viscosity at the nozzle exit, and \(D_{j}\) is the diameter of the nozzle. Validation of the LES simulation against experimental results and more details on the numerical setup are available in Bres _et al._ (2018). The computation of the three-dimensional resolvent modes is performed within a region of interest defined by \(x\in[0,20]\) and \(y\times z\in[-4,4]\times[-4,4]\). The spatial discretization of this region is accomplished using a grid with dimensions of \(400\times 140\times 140\), respectively. The mean flow is obtained by revolving the axisymmetric mean flow around the streamwise axis, as depicted in figure 7. The domain is large enough to accommodate sizable low-frequency structures, and the mesh is resolved to capture structures that emerge in the response modes up to Strouhal (\(St\)) number of 1, Figure 7: The mean streamwise velocity of the axisymmetric jet, three-dimensional round jet, and jet with streaks. 
The jet with streaks is obtained by adding the streaks with an azimuthal wavenumber of 6 to the mean flow of the round jet. Here, \(St=\frac{\omega D_{j}}{2\pi U_{j}}\) is the non-dimensional frequency. The range of \(St\in[0,1]\) is wide enough to include the most important physical phenomena captured by resolvent analysis (Schmidt _et al._, 2018). The effective \(Re\) is reduced to 1000 to account for un-modeled Reynolds stresses (Pickering _et al._, 2021), and the effect of \(Re\) is thoroughly investigated and reported in Schmidt _et al._ (2017). The LNS equations are expressed in terms of specific volume, the three velocity components, and pressure, which can be compactly represented as \(\mathbf{q}(\mathbf{x},t)=(\mathbf{\xi},\mathbf{u}_{x},\mathbf{u}_{r},\mathbf{u}_{\theta},\mathbf{p})^{T}(\mathbf{x},\mathbf{r},\mathbf{\theta},t)\). The three-dimensional state in the frequency domain is \[\mathbf{q}^{\prime}(\mathbf{x},\mathbf{y},\mathbf{z},t)=\sum_{\omega}\hat{\mathbf{q}}_{\omega}(\mathbf{x},\mathbf{y},\mathbf{z})e^{\mathrm{i}\omega t} \tag{9.2}\] and each mode is characterized by its frequency \(\omega\). To validate our three-dimensional results, we also perform an axisymmetric resolvent analysis of the same jet for a set of azimuthal wavenumbers in which the symmetry in the azimuthal direction is exploited. The mean flow is obtained on the symmetry plane with cylindrical coordinates \((x,r)\). The axisymmetric state \[\mathbf{q}^{\prime}(\mathbf{x},\mathbf{r},\mathbf{\theta},t)=\sum_{m,\omega}\hat{\mathbf{q}}_{m,\omega}(\mathbf{x},\mathbf{r})e^{\mathrm{i}m\theta}e^{\mathrm{i}\omega t} \tag{9.3}\] is characterized by the pair \((m,\omega)\), where \(m\) denotes the azimuthal wavenumber. The domain of interest for resolvent analysis is \(x\times r\in[0,20]\times[0,4]\), surrounded by a sponge region, which is spatially discretized using fourth-order summation-by-parts finite differences (Mattsson & Nordstrom, 2004) with \(400\times 100\) grid points in the streamwise and radial directions, respectively. A grid-convergence study verifies that the relative error between the gains computed on this mesh and on a mesh with twice the number of grid points is less than 1-10% for \(0\leq St\leq 1\). The remaining parameters are kept the same as in the three-dimensional discretization of the jet. Figure 8 shows the gains (squared singular values) for \(m=0,1,2,3\). The dominant mechanisms for each wavenumber are analyzed in detail in Schmidt _et al._ (2017) and Pickering _et al._ (2021). The optimal mode for \(m=0\) and \(St\geq 0.2\) corresponds to the Kelvin-Helmholtz (KH) instability. At \(m=0\), the KH modes are overtaken by Orr-type modes for \(St<0.2\). At \(m>0\), streaks become the dominant response and continue to prevail as the primary instability at low frequencies \(St\to 0\). The KH modes remain the most amplified response for the higher \(St\)-range when \(m>0\), causing the large separation between the leading mode and suboptimal modes. Similar gain trends are found in Schmidt _et al._ (2018) and Pickering _et al._ (2020) for the same wavenumbers, demonstrating the robustness of the outcome even though the computational domains, \(Re\), state vector, sponge regions, and boundary conditions are slightly different.

Figure 8: Three leading gains of the axisymmetric jet for four azimuthal wavenumbers.

The gains and corresponding modes of the axisymmetric jet are used as a baseline for comparison to the three-dimensional jet.
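As an aside on the setup, the following sketch shows one way the axisymmetric mean flow can be revolved around the streamwise axis onto the Cartesian grid quoted above. The \(400\times 140\times 140\) grid matches the discretization described earlier, while the analytical profile is only a placeholder for the LES mean flow; the routine names and shapes are assumptions of this sketch, not our implementation.

```python
import numpy as np
from scipy.interpolate import interp1d

# Revolve an axisymmetric mean field ubar(x, r) onto the Cartesian (x, y, z) grid
# used for the three-dimensional resolvent calculation. The profile below is a
# placeholder; in practice ubar comes from the LES mean flow.
nx, nr = 400, 100
x = np.linspace(0.0, 20.0, nx)
r = np.linspace(0.0, 4.0, nr)
ubar_xr = np.exp(-(r[None, :] / (1.0 + 0.1 * x[:, None])) ** 2)   # jet-like placeholder

ny = nz = 140
y = np.linspace(-4.0, 4.0, ny)
z = np.linspace(-4.0, 4.0, nz)
Y, Z = np.meshgrid(y, z, indexing="ij")
R = np.hypot(Y, Z)                       # radius of every point in the cross-plane

ubar_xyz = np.empty((nx, ny, nz))
for i in range(nx):
    # 1-D interpolation in r at each streamwise station; points beyond r_max keep the outer value.
    f = interp1d(r, ubar_xr[i], bounds_error=False, fill_value=ubar_xr[i, -1])
    ubar_xyz[i] = f(R)

print(ubar_xyz.shape)                    # (400, 140, 140)
```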
#### 9.2.1 Resolvent modes for the jet

Resolvent modes for the three-dimensional round jet are computed for the same range of \(St\in[0,1]\) with \(\Delta St=0.05\). The six leading modes are of interest, so we set \(k=10\) and \(q=1\). For the RSVD-\(\Delta t\) algorithm, we use the classical \(4^{th}\) order Runge-Kutta (RK4) integrator with \(dt=0.00625\). The steady-state interval is \(T_{s}=20\). Figure 9 shows the expected transient error in the time and frequency domains. The transient initially grows in time before slowly decaying in figure 9(a). The resulting error in the frequency domain obtained from selecting each colored segment for computing resolvent modes is shown in figure 9(b). Our transient removal strategy, using Galerkin projection with the matrix of snapshots, drastically reduces these errors for \(St>0\), as indicated by the dashed lines. We select a transient duration of \(T_{t}\approx 2T_{s}\) (green segment), for which the transient removal strategy brings the transient error below \(1\%\) for \(St>0\).

Figure 9: Transient error estimates for the jet in (a) the time domain and (b) the frequency domain. Each colored period represents the duration utilized for obtaining norms in the frequency domain as shown in (b). Solid lines represent the natural decay, while dashed lines correspond to the transient removal strategy using Galerkin projection with the matrix of snapshots.

Figure 10 compares the gains of two-dimensional and three-dimensional discretizations of the jet. Due to the azimuthal symmetry of the problem, the gains of the three-dimensional problem are expected to be the union of the gains from the axisymmetric problem (Sirovich, 1987_b_). Since higher wavenumbers (\(m>3\)) have lower gains (Pickering _et al._, 2021), the union of the first four azimuthal wavenumbers is enough to match the leading modes of the three-dimensional system. The azimuthal symmetry makes modes corresponding to \(m\neq 0\) appear in pairs for the three-dimensional problem. The six computed modes appear in pairs for \(St\leq 0.3\), after which the gain of the \(m=0\) mode becomes large enough to appear for the three-dimensional problem. Up to \(St=0.8\), the largest gains are associated with \(m=\pm 1\). All of the modes that appear for the three-dimensional problem are KH modes; many more resolvent modes would need to be computed to capture Orr modes that are buried beneath a slew of KH modes for each azimuthal wavenumber. The close match between the computed three-dimensional modes and the set of two-dimensional modes verifies that the three-dimensional calculations are properly capturing the known physics for this problem. The small mismatch at frequencies close to \(St=1\) is due to mild under-resolution of the grid for the compact structures that appear at these frequencies. Figure 11 shows the pressure response modes at four \((St,m)\) pairs (other components such as velocity yield similar observations). Each panel shows, for one \((St,m)\) pair, contours of the two-dimensional mode computed leveraging symmetry, isocontours of the corresponding three-dimensional mode, and contours for cross sections of the three-dimensional mode in the \(x-y\) and \(y-z\) planes. These images show the wavepacket form of the modes, confirm the classification of each three-dimensional mode with a particular azimuthal wavenumber, and illustrate the match between the symmetric and three-dimensional results.
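For readers reproducing the cross-section analysis, the short sketch below shows how the azimuthal content of a three-dimensional mode can be identified by interpolating a cross-plane slice onto a circle and taking an FFT in the azimuthal angle. The slice here is synthetic, with a prescribed \(|m|=2\) pattern, rather than an actual jet mode, and the sampling radius is an arbitrary choice.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Classify the azimuthal wavenumber of a three-dimensional mode by sampling a
# cross-plane slice on a circle and taking an FFT in the azimuthal angle.
ny = nz = 140
y = np.linspace(-4.0, 4.0, ny)
z = np.linspace(-4.0, 4.0, nz)
Y, Z = np.meshgrid(y, z, indexing="ij")
R, TH = np.hypot(Y, Z), np.arctan2(Z, Y)
p_slice = np.exp(-(R - 1.0) ** 2) * np.cos(2 * TH)      # synthetic slice with |m| = 2

interp = RegularGridInterpolator((y, z), p_slice)
n_theta, r0 = 256, 1.0
theta = np.linspace(0.0, 2 * np.pi, n_theta, endpoint=False)
samples = interp(np.column_stack((r0 * np.cos(theta), r0 * np.sin(theta))))

spectrum = np.abs(np.fft.rfft(samples)) / n_theta
print("dominant |m|:", int(np.argmax(spectrum)))        # expected: 2 for this synthetic slice
```

The same procedure is used below for the jet with streaks, where several azimuthal wavenumbers appear within a single mode.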
As noted by Martini _et al._ (2021), symmetries such as the azimuthal homogeneity of the jet produce pairs of modes with equal gain that can be arbitrarily combined (under the constraint of orthogonality) to produce equally valid mode pairs. For visualization purposes, we have adjusted the phase and summed the mode pairs to best match the modes from the axisymmetric calculations.

Figure 10: Resolvent gains for the jet: (a) the union of the axisymmetric jet gains; (b) the optimal gains of the axisymmetric jet corresponding to various values of \(m\) (dashed lines) overlaid on top of the six leading gains for the three-dimensional discretization (solid lines).

Figure 11: Four groups of axisymmetric and three-dimensional pressure modes are shown, including axisymmetric views, three-dimensional iso-volume representations, and \(x-y\) plane snapshots of the round jet. Cross-sections at \(x=5\) confirm the azimuthal wavenumber of the three-dimensional results. Color bar ranges are adjusted for visualization.

#### 9.2.2 Computational complexity comparison

We showcase the superior computational efficiency and scalability of the RSVD-\(\Delta t\) algorithm compared to the RSVD-LU algorithm using the three-dimensional jet. We set \(k=10\), \(N_{\omega}=21\), and \(q=0\) for both algorithms and \(dt=0.00625\), \(T_{t}=2T_{s}\), and \(T_{s}=20\) in the RSVD-\(\Delta t\) algorithm, as in §9.2.1. The reported costs for the RSVD-LU algorithm include only a single LU decomposition and the two solutions of the LU-decomposed system (once for the direct system and once for the adjoint system) at each frequency of interest, highlighting the LU decomposition as the primary bottleneck in the RSVD-LU algorithm and similar methods utilizing LU decomposition to solve (6.1). The reported costs for RSVD-\(\Delta t\) encompass the entire algorithm, including one extra period of time-stepping duration to account for the transient removal strategy, as explained in §8.2. The RSVD-\(\Delta t\) algorithm is implemented using PETSc (Balay _et al._, 2019), while the LU decomposition in the RSVD-LU algorithm utilizes PETSc in conjunction with the MUMPS (Amestoy _et al._, 2001) external package. All calculations are performed on one processor such that wall-time functions as a proxy for CPU time. The measured CPU times for both algorithms are shown in figure 12(a) as a function of the state dimension \(N\). The RSVD-LU algorithm scales poorly, in fact exceeding the theoretical scaling of \(O(N^{2})\) for three-dimensional flows (refer to §6) due to poor performance at low frequencies that has also been noted in other studies (Pickering _et al._, 2020). In contrast, the RSVD-\(\Delta t\) algorithm achieves (near) linear scaling, \(O(N^{1.1})\), confirming its scalability to large problems. Similar observations can be made about the memory requirements of the two algorithms, shown in figure 12(b). The observed \(O(N^{1.5})\) memory scaling for the RSVD-LU algorithm is better than its CPU counterpart, but it is still the main barrier to applying the RSVD-LU algorithm when the state dimension is of the order of 10 million or higher. The peak RAM usage is determined entirely by the LU decomposition and drops after the decomposed matrices are obtained. On the other hand, the memory scaling for the RSVD-\(\Delta t\) algorithm is exactly linear in the state dimension \(N\), consistent with the theoretical scaling determined in §6.
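The extrapolation used in the next paragraph amounts to a least-squares power-law fit in log-log coordinates; a minimal sketch is given below, with placeholder sample points rather than the measured data of figure 12.

```python
import numpy as np

# Fit log(cost) vs log(N) and evaluate the best-fit power law at a larger N.
# The sample points below are illustrative placeholders, not the measured data.
N = np.array([1e5, 4e5, 1.6e6, 6.4e6])          # state dimensions (placeholder)
cpu_hours = np.array([2.0, 9.0, 40.0, 180.0])   # measured costs (placeholder)

slope, intercept = np.polyfit(np.log(N), np.log(cpu_hours), 1)
print(f"fitted scaling: cost ~ N^{slope:.2f}")

N_target = 39e6                                  # state dimension of the three-dimensional jet
print(f"extrapolated cost at N = {N_target:.1e}: "
      f"{np.exp(intercept) * N_target**slope:.0f} CPU-hours")
```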
The range of \(N\) in figure 12 was selected to make the scaling study tractable for the RSVD-LU algorithm, but the corresponding grids are under-resolved. Table 3 compares the costs of RSVD-LU and RSVD-\(\Delta t\) for a more realistic state dimension \(N\approx 39\) million (5 state variables \(\times\) a \([400\times 140^{2}]\) grid), which was used for the three-dimensional calculations in SS9.2.2, and \(N_{\omega}=21\), \(k=10\), and \(q=1\). The CPU and memory requirements of the RSVD-LU algorithm are intractable for this problem, so we estimate these costs by extrapolating the best-fit lines in figure 12. On the other hand, for RSVD-\(\Delta t\), the CPU time and memory usage are directly taken from our simulation, which employed 300 parallel cores. Computing the action of the resolvent operator in the RSVD-LU algorithm involves both LU decomposition and solving the decomposed system, with both being extrapolated but the latter not depicted in figure 12. This implies that for \(q=1\), the CPU time includes a single LU decomposition and three times solving the LU-decomposed system. The RSVD-LU algorithm exhibits a CPU time that is more than three orders of magnitude higher than that of the RSVD-\(\Delta t\) algorithm. Specifically, using 300 cores, the wall-time for RSVD-\(\Delta t\) is approximately 61 hours (\(<3\) days), while the RSVD-LU algorithm requires over 75 300 000 CPU-hours, which translates to around 251 000 hours (\(\sim 28\) years) wall-time. This disparity becomes even more pronounced as \(N\) increases due to the linear CPU scaling of RSVD-\(\Delta t\) and the quadratic scaling of the RSVD-LU algorithm for three-dimensional problems. Table 3 confirms that the time-stepping process accounts for nearly all of the CPU time in RSVD-\(\Delta t\). The memory improvements of the RSVD-\(\Delta t\) algorithm are arguably even more important. The memory usage in the RSVD-LU algorithm exceeds that of RSVD-\(\Delta t\) by more than two orders Figure 12: Computational cost as a function of the state dimension \(N\) for the three-dimensional jet: (a) CPU-hours and (b) memory usage for the RSVD-LU (red) and RSVD-\(\Delta t\) (blue) algorithms. of magnitude. The minimum memory requirement for LU calculations surpasses 130 TB for the three-dimensional jet flow. This amount of memory is more than can be accessed even on most high-performance-computing clusters. In contrast, the memory usage in RSVD-\(\Delta t\) is optimized to store only three matrices of size \(N\times k\times N_{\omega}\), which can be accurately estimated based on the size of each float number in C/C++. For instance, with \(N\approx 39\) million, \(k=10\), and \(N_{\omega}=21\), the RAM consumption for these matrices amounts to \(\sim 0.75\) TB (using double precision with 64-bit indices). Moreover, the RAM requirements of our algorithm can be further reduced at the expense of higher CPU cost if necessary as proposed in SS8.1.3, while no such trade-off exists for the RSVD-LU algorithm. ## 10 Application: jet with streaks Finally, we apply the RSVD-\(\Delta t\) algorithm to study the impact of streaks on other coherent structures within a turbulent jet. This is a fully three-dimensional problem for which results obtained using other algorithms are not available. Streaks - elongated regions of low-velocity fluid - have historically been observed and studied in turbulent channel flows (see McKeon (2017) and Jimenez (2018) and the references therein). 
More recently, in unbounded shear flows such as round jets, streaks have been shown to be generated via the evolution of optimal initial conditions that maximize the transient energy growth (Jimenez-Gonzalez & Brancher, 2017). Nogueira _et al._ (2019) and Pickering _et al._ (2020) showed that streaks emerge as the dominant structures in the SPOD and resolvent spectra of jets at very low frequencies when \(m\geq 1\). Streaks are produced via a lift-up mechanism applied to the rolls or streamwise vortices that are usually excited near the nozzle exit. The presence of streaks within turbulence modifies the flow quite significantly. In particular, optimal streaks have been shown to stabilize KH wavepackets in a parallel plane shear layer (Marant & Cossu, 2018) and Tollmien-Schlichting waves in the Blasius boundary layer (Cossu & Brandt, 2002). Similar findings on a high-speed turbulent jet by Wang _et al._ (2021) demonstrate the stabilizing effects of finite-amplitude streaks on KH wavepackets. In this study, we investigate the impact of streaks on the linear amplification and spatial structure of the Kelvin-Helmholtz wavepackets described by the leading resolvent modes via a secondary stability analysis. The streaks that will be added to the mean flow are obtained from an initial resolvent analysis of the mean flow; specifically, the streaks are the optimal resolvent response at very low frequencies (Pickering _et al._, 2020). Due to the symmetry of the mean jet, streaks obtained from data via SPOD or computed using resolvent analysis are associated with a particular azimuthal wavenumber. Accordingly, we compute the streaks using our axisymmetric code, which produces the same results as the three-dimensional code but at a lower cost. We compute them for \((St,l)=(0,6)\), where \(l\) denotes the azimuthal periodicity of the streaks. This choice of \(l=6\) corresponds to one of the main cases studied in Wang _et al._ (2021). The updated mean flow with the streaks added has 6-fold rotational symmetry and, following Sinha _et al._ (2016), can be written as \[\bar{\mathbf{q}}(x,r,\theta)=\sum_{j=-\infty}^{\infty}\hat{\bar{\mathbf{q}}}_{lj}(x,r)e^{ilj\theta}. \tag{10.1}\] They proved that, after plugging the Fourier ansatz of the resulting mean flow into the LNS equations, given an azimuthal wavenumber \(m\), the associated axisymmetric mode \(\hat{\mathbf{q}}_{m,\omega}\) can only couple with \(\hat{\mathbf{q}}_{m-lj,\omega}\) for \(j\in\mathbb{Z}\). In our problem, \(l=6\), and sorting the modes by their lowest azimuthal wavenumbers, we expect coupling of modes in sets \(\mathbf{q}_{\omega}^{L}=\{\hat{\mathbf{q}}_{L-lj,\omega}\}_{j=-\infty}^{j=\infty}\), where \(L=\{-2,-1,0,1,2,3\}\) includes all possibilities. Indexing in this manner implies that the modes with \(L=0,3\) are unpaired, while \(L=\pm 1,\pm 2\) will show up in pairs in the three-dimensional setup due to symmetry. The streaks' shape and amplitude are sensitive to a few parameters, including the viscosity (or, equivalently, the turbulent Reynolds number or eddy-viscosity model if desired) and the forcing region. In lieu of a more complex eddy-viscosity model, we use a constant turbulent Reynolds number of \(Re=1000\). This value is close to the optimal frequency-dependent value determined by Pickering _et al._ (2021) for \(St=0\), as well as for most of our frequency range of interest, \(St\in[0,1]\), for the secondary stability problem.
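The coupling rule stated above is easy to tabulate; the snippet below lists, for each \(L\), the azimuthal wavenumbers (truncated at \(|j|\leq 2\)) that can appear together in a single mode when \(l=6\). The function name and truncation are choices made for this illustration.

```python
def coupled_wavenumbers(L, l=6, j_max=2):
    """Azimuthal wavenumbers L - l*j, |j| <= j_max, that may couple with L."""
    return sorted(L - l * j for j in range(-j_max, j_max + 1))

for L in (-2, -1, 0, 1, 2, 3):
    print(L, coupled_wavenumbers(L))
# For example, L = 3 couples with {..., -3, 3, 9, ...} and L = 1 with {..., -5, 1, 7, ...},
# consistent with the dominant wavenumber pairs identified in the mode spectra below.
```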
Additionally, the forcing region of the resolvent analysis used to compute the streaks must be limited to obtain streaks of finite streamwise length. If the domain is not limited, the forcing rolls that generate these streaks sustain them throughout the domain. After some trial and error, we limited the forcing region to \(x,r\in[0,1]\times[0,1]\), which produced streaks with a location of peak amplitude (\(x\in[5,6]\)) and overall shape consistent with the streak SPOD modes obtained by Nogueira _et al._ (2019). Once the axisymmetric streaks are computed, the three-dimensional streaks are obtained by revolving them around the \(x-\)axis with phase \(e^{il\theta}\) (see figure 7). A tuning variable is the amplitude (or strength) of the streaks. The amplitude is defined as the ratio of the peak streamwise velocity of streaks over \(M\). According to Wang _et al._ (2021), the amplitude of these structures grows linearly over time. Therefore, no _correct_ constant amplitude exists for our secondary analysis. The amplitude of streaks in our paper is set to \(40\%\), which is large enough to affect the modes compared to the round jet. The region of interest and grid points along with all the other parameters are the same as for the round jet. RSVD-\(\Delta t\) is used to compute the resolvent modes for the modified mean flow. The number of test vectors is \(k=10\) and the gains are reported after \(q=2\) power iterations. The first few leading modes converged after the first power iteration, but an extra power iteration is performed to ensure convergence since no ground truth results are available for comparison. The frequency range \(St\in[0,1]\) and discretization \(\Delta St=0.05\) are the same as used for the round jet in SS9.2. The time-stepping scheme is RK4 with \(dt=0.00625\). Transient errors are held below \(1\%\) for \(St>0\) via transient removal strategy using Galerkin projection with the matrix of snapshots with a duration \(T_{t}=3T_{s}\). The gains for the round jet and jet with streaks are compared in figure 13(a). The streaks have increased the gains by orders of magnitude for \(St<0.5\). Some of the gains appear in pairs, indicating mode pairs analogous to those described for the round jet, which arise due to the six-fold symmetry of the mean jet with streaks. The match occurs between the first and second suboptimal in addition to the third and fourth suboptimal modes. All modes almost coincide at \(St=0.35\) and continue decaying as \(St\) increases. The optimal, first, third, and fifth suboptimal pressure response modes at \(St=0.2\), where Figure 13: Results for the jet with streaks: (a) resolvent gains for the round jet (solid line) and jet with streaks (dashed line); (b-e) the optimal, first, third, and fifth suboptimal pressure responses at \(St=0.2\); (g-j) contours of the pressure responses on cross-section at \(x=8.5\) corresponding to (b-e), respectively. Fourier transforms are taken along the black circles shown to obtain the corresponding azimuthal wavenumber spectra for each mode shown in (f). the leading gain is maximum, are shown in figure 13. The second and fourth suboptimal modes are not shown since they are pairs with the first and third suboptimal modes, respectively. The three-dimensional iso-surfaces show KH wavepackets that are significantly altered by the streaks; characterizing the modes with the indexes defined earlier requires deeper investigation. To this end, cross-section contours at \(x=8.5\) are plotted. 
These plots are more complicated than the round jet due to the coupling between multiple azimuthal wavenumbers. We interpolate the pressure field on the circles shown on each contour plot to demonstrate the coupling azimuthal wavenumbers. Taking an FFT of the extracted data, the normalized coefficients are plotted against \(m\) in 13(f). This plot shows that the optimal mode is comprised of \(L=3\) with a larger weight and \(L+l=3+6=9\) with a smaller weight, which is consistent with our axisymmetric analysis. The first suboptimal mode includes \((L,L-l)=(2,-4)\), and its pair contains \((L,L+l)=(-2,4)\), so both couplings and pairings are as expected. Similarly, the third mode is a coupling between \((L,L-l)=(1,-5)\), and the fourth mode is with \((L,L+l)=(-1,5)\). Lastly, the fifth mode is unpaired and captures the \((L,L+l)=(0,6)\) azimuthal wavenumbers with a small signature of \(L+2l=12\). From the perspective of computational cost, the jet with streaks is similar to the three-dimensional discretization of the round jet. Utilizing the RSVD-LU algorithm for the same grid with state dimension \(N\approx 39\) million, the anticipated CPU time surpasses 75 million hours, as discussed in SS9.2.2. Nevertheless, leveraging RSVD-\(\Delta t\) with \(q=2\) enabled us to complete the analysis within 37 thousand CPU-hours. Our computations used 300 cores, which results in a wall time of 28 years for the RSVD-LU algorithm and 123 hours for our algorithm. Additionally, memory requirements amount to more than 130 TB for the RSVD-LU algorithm and 0.75 TB for ours. It is safe to say that this analysis would have been intractable using previous algorithms, demonstrating the promise of the RSVD-\(\Delta t\) algorithm for extending the applicability of resolvent analysis to new problems in fluid mechanics. ## 11 Conclusions This paper introduces RSVD-\(\Delta t\), a novel algorithm designed for efficient computation of global resolvent modes in high-dimensional systems, particularly in the context of three-dimensional flows. By leveraging a time-stepping approach, RSVD-\(\Delta t\) eliminates the reliance on LU decomposition that often hampers the scalability of current state-of-the-art algorithms. As a result, RSVD-\(\Delta t\) not only enhances scalability but also extends the applicability of resolvent analysis to three-dimensional systems, overcoming previous computational limitations. Scalability is of utmost importance for algorithms dealing with high-dimensional flows, and RSVD-\(\Delta t\) excels in this regard. In contrast, the decomposition of \((\mathrm{i}\omega\boldsymbol{l}-\boldsymbol{\mathsf{A}})\) into lower and upper matrices poses a significant computational challenge for the RSVD-LU algorithm, limiting its scalability with \(O(N^{2})\) scaling for 3D problems. The CPU demand of RSVD-\(\Delta t\), on the other hand, exhibits linear proportionality to the state dimension. In addition to CPU considerations, memory requirements play a crucial role in computing resolvent modes for large systems. The LU decomposition of \((\mathrm{i}\omega\boldsymbol{l}-\boldsymbol{\mathsf{A}})\) is the primary contributor to peak memory usage in the RSVD-LU and other common algorithms. In contrast, the RSVD-\(\Delta t\) algorithm primarily utilizes RAM to store input and output matrices in Fourier space, resulting in linear growth of memory consumption with dimension. 
To minimize the required memory, we utilize streaming calculations, which maintains low memory requirements with minimal computational impact. If memory limitations persist, the set of desired frequencies can be split into \(d\) groups to further reduce the required memory by a factor of \(d\). The RSVD-\(\Delta t\) algorithm contains three sources of errors, each of which can be controlled by carefully selecting method parameters. The first arises from the RSVD approximation inherited from the RSVD algorithm. This error can be significantly reduced by employing power iteration and utilizing more test vectors than the desired number. The second source of error stems from the time integration method employed to compute the action of \(\boldsymbol{R}\) and \(\boldsymbol{R}^{*}\). Time-stepping errors encompass the transient response and truncation error. Truncation error arises from the numerical integration of the LNS equations and can be managed through careful selection of the time-stepping scheme and time step. The transient response emerges when the initial condition is not synchronized with the applied forcing, decaying over time but potentially requiring many periods to become sufficiently small. To expedite the removal of transients, a novel strategy is introduced involving the decomposition of snapshots into transient and steady-state components, with subsequent solving of equations for the transient. This computation is facilitated through Petrov-Galerkin and Galerkin projections. To ensure optimal performance, it is important to maintain a balance between truncation and transient errors. Focusing too much on reducing one source significantly while neglecting the other can lead to a waste of CPU time without an impact on the outcome. Also, keeping both errors smaller than the RSVD approximation error will not improve the accuracy of RSVD-\(\Delta t\) with respect to SVD-based (true) results. By effectively eliminating both truncation and transient errors up to machine precision, RSVD-\(\Delta t\) has been validated against the RSVD-LU algorithm using the complex Ginzburg-Landau equation. The RSVD-\(\Delta t\) algorithm is particularly valuable for analyzing three-dimensional flows, where other algorithms become impractical. The superior scalability of the RSVD-\(\Delta t\) algorithm leads to an increasingly pronounced disparity in computational complexity compared to the RSVD-LU algorithm as the value of \(N\) grows larger. As an example, we consider a moderately large state dimension of \(N\approx 39\) million. Using the RSVD-LU algorithm for this problem would require an estimated 75 million CPU-hours and 130 TB of RAM. In contrast, the RSVD-\(\Delta t\) algorithm required just 18,000 CPU-hours and 0.75 TB of RAM, a reduction of three and two orders of magnitude, respectively. In general, the benefits of the RSVD-\(\Delta t\) algorithm are most pronounced for three dimensional flows and other large systems, while little advantage is gained for simple one- and two-dimensional flows. Lastly, we leveraged the novel capabilities of the RSVD-\(\Delta t\) algorithm to investigate the influence of streaks within the turbulent jet on the KH wavepackets. Through a secondary stability analysis in which the steady streaks are added to the axisymmetric mean flow, we showed the significant impact of the streaks on the KH wavepackets. 
This included a substantial increase in gains within the range \(St\in[0,0.5]\), a change in the most amplified azimuthal wavenumber, and coupling of multiple azimuthal wavenumbers in some of the modes. Given the recently demonstrated presence of streaks in real jets, these findings warrant further investigation in the future. Our algorithm also has several implementation advantages. Our time-stepping approach enables matrix-free implementation, eliminating the explicit formation of the LNS matrix \(\boldsymbol{\mathsf{A}}\), instead directly utilizing built-in linear direct and adjoint capabilities available within many existing codes. All operations within our RSVD-\(\Delta t\) algorithm are amenable to efficient parallelization; we have optimized our implementation of the algorithm for parallel computing using the PETSc (Balay _et al._, 2019) and SLEPc (Hernandez _et al._, 2005) environments, facilitating full utilization of the computational power offered by modern high-performance clusters. Moreover, our code is designed to leverage GPUs, enabling the delegation of compute-intensive tasks to the GPU architecture for quicker and more efficient calculations. Finally, the efficiency and accuracy of the RSVD-\(\Delta t\) algorithm could be further enhanced by incorporating strategies developed for the RSVD-LU algorithm. Notably, techniques proposed by Ribeiro _et al._ (2020) and House _et al._ (2022) can be integrated into our approach to use physical insight to select the initial test vectors instead of relying on entirely random ones. ## Acknowledgements We would like to express our gratitude to Andre Cavalieri for his invaluable feedback, insights, and contributions. We also acknowledge the University of Michigan's Great Lakes cluster for providing the essential computational resources that enabled us to conduct all computations for this research. A.F. and A.T. gratefully acknowledge funding for this work from the Michigan Institute for Computational Discovery and Engineering (MICDE) and AFOSR award number FA9550-20-1-0214. ## Appendix A RSVD-\(\Delta t\) for the weighted resolvent operator For the sake of notational brevity, we have described resolvent analysis and the RSVD-\(\Delta t\) algorithm in the absence of non-identity input, output, and weight matrices in the main text (see §2). In this appendix, we briefly explain the modifications required to include these additional matrices. In this case, solving the generalized Rayleigh quotient (2.7) is equivalent to computing the SVD of the weighted resolvent operator (Towne _et al._, 2018) \[\tilde{\mathbf{\mathsf{H}}}=\mathbf{\mathsf{W}}_{q}^{1/2}\mathbf{\mathcal{C}}(\mathrm{i}\omega\mathbf{\mathsf{I}}-\mathbf{\mathsf{A}})^{-1}\mathbf{\mathsf{B}}\mathbf{\mathsf{W}}_{f}^{-1/2},\] (A.1a) \[\tilde{\mathbf{\mathsf{H}}}=\tilde{\mathbf{U}}\mathbf{\Sigma}\tilde{\mathbf{V}}^{*},\] (A.1b) and further \[\mathbf{\mathsf{U}}=\mathbf{\mathsf{W}}_{q}^{-1/2}\tilde{\mathbf{U}},\] (A.2) \[\mathbf{\mathsf{V}}=\mathbf{\mathsf{W}}_{f}^{-1/2}\tilde{\mathbf{V}},\] where \(\mathbf{\Sigma}\) contains the gains, and \(\mathbf{\mathsf{V}}\) and \(\mathbf{\mathsf{U}}\) are forcing and response modes, respectively. The resolvent operator is recovered as \[\mathbf{\mathsf{R}}=\mathbf{U}\mathbf{\Sigma}\mathbf{\mathsf{V}}^{*}\mathbf{\mathsf{W}}_{f}.\] (A.3) Time-stepping can effectively act as a surrogate for the action of the weighted resolvent operator \(\tilde{\mathbf{\mathsf{H}}}\) (or equivalently \(\tilde{\mathbf{\mathsf{H}}}^{*}\)).
In other words, our objective is to compute \[\hat{\mathbf{y}}=\tilde{\mathbf{\mathsf{R}}}\hat{\mathbf{f}}=\mathbf{\mathsf{W}}_{q}^{1/2}\mathbf{ \mathcal{C}}(\mathrm{i}\omega\mathbf{\mathsf{I}}-\mathbf{\mathsf{A}})^{-1}\mathbf{\mathsf{ B}}\mathbf{\mathsf{W}}_{f}^{-1/2}\hat{\mathbf{f}}\] (A.4) for all \(\omega\in\Omega\) using time stepping. The process begins by computing the product between \(\hat{\mathbf{f}}_{W}=\mathbf{\mathsf{W}}_{f}^{-1/2}\hat{\mathbf{f}}\) in Fourier space, followed by \(\hat{\mathbf{f}}_{W,B}=\mathbf{\mathsf{B}}\hat{\mathbf{f}}_{W}\). The products involving weight and input/output matrices are efficiently executed due to their sparsity. These operations are conducted for all \(\omega\in\Omega\) to obtain \(\tilde{\mathbf{\mathsf{F}}}_{W,B}\). Subsequently, the action of \((\mathrm{i}\omega\mathbf{\mathsf{I}}-\mathbf{\mathsf{A}})^{-1}\) is computed on \(\tilde{\mathbf{\mathsf{F}}}_{W,B}\) using time stepping to yield \(\hat{\mathbf{Y}}\). The resulting output undergoes \(\hat{\mathbf{y}}_{C}=\mathbf{\mathcal{C}}\hat{\mathbf{y}}\) and \(\hat{\mathbf{y}}_{C,W}=\mathbf{\mathsf{W}}_{q}^{1/2}\hat{\mathbf{y}}_{C}\), which are repeated for all frequencies to obtain \(\hat{\mathbf{Y}}_{C,W}\). Figure 14 visually illustrates the order of calculations for \(\mathbf{\mathsf{R}}\) in the top row and \(\tilde{\mathbf{\mathsf{R}}}\) in the bottom row. An analogous process is utilized to compute the action of \(\tilde{\mathbf{\mathsf{H}}}^{*}\). ## Appendix B Removing the least-damped modes using eigenvalues only The transient removal strategies described in SS8.2 require a basis for the transient, either in the form of eigenvectors for the least-damped eigenvalues or data. In this appendix, we outline an alternative procedure to expedite the decay of transients that that uses knowledge of the least-damped eigenvalues themselves. Considering two solutions of (4.1), \(\mathbf{q}_{1}=\mathbf{q}(t_{1})\) and \(\mathbf{q}_{2}=\mathbf{q}(t_{1}+\Delta t)\), we can express them in terms of their steady-state and transient parts as \[\begin{split}\mathbf{q}_{1}&=\mathbf{q}_{s,1}+\mathbf{q}_{t,1}, \\ \mathbf{q}_{2}&=\mathbf{q}_{s,2}+\mathbf{q}_{t,2},\end{split}\] (B.1) where \(\mathbf{q}_{s,1},\mathbf{q}_{s,2},\mathbf{q}_{t,1}\), and \(\mathbf{q}_{t,2}\) are four unknowns. The transient parts can be written as \[\begin{split}\mathbf{q}_{t,1}&=\mathbf{q}_{\lambda_{1},1}+ \mathbf{q}_{rest,1},\\ \mathbf{q}_{t,2}&=\mathbf{q}_{\lambda_{1},2}+\mathbf{q}_{rest,2}, \end{split}\] (B.2) where we assume the unknowns \(\mathbf{q}_{\lambda_{1},j}\) evolve as \(\sim e^{\lambda_{1}t}\), where \(\lambda_{1}\) is the least-damped eigenvalue. Hence, \[\mathbf{q}_{\lambda_{1},2}=\mathbf{q}_{\lambda_{1},1}e^{\lambda_{1}\Delta t},\] (B.3) where \(\mathbf{q}_{\lambda_{1},j}\) is essentially the projection of the transient response onto the least-damped eigenmode of \(\mathbf{A}\) at \(t=t_{j}\). The steady-state evolution at a prescribed forcing at a single frequency \(\omega\) follows (8.5). Therefore, in case of \(||\mathbf{q}_{rest,j}||=0\), the system of equations is deterministic and \(\mathbf{q}_{t,1}\) can be found as \[\mathbf{q}_{t,1}=\frac{\mathbf{b}}{c},\] (B.4) where \(\mathbf{b}=\mathbf{q}_{1}-\mathbf{q}_{2}e^{-\mathrm{i}\omega\Delta t}\) is known from the time stepping and \(c=1-e^{(\lambda_{1}-\mathrm{i}\omega)\Delta t}\) is constant. 
Otherwise, _i.e._, \(||\mathbf{q}_{rest,j}||\neq 0\), by simplifying terms, the transient part can be written as \[\mathbf{q}_{t,1}=\frac{\mathbf{b}}{c}-\frac{(1-c)\mathbf{q}_{rest,1}-\mathbf{q}_{rest,2}e^{- \mathrm{i}\omega\Delta t}}{c}.\] (B.5) Based on the fundamental assumption, the second term, which is unknown, decays faster than \(e^{\lambda_{1},t}\). Therefore, by removing the first term \(\frac{\mathbf{b}}{c}\), which is known, the residual eventually follows the second least-damped eigenvalue. If the forcing term encompasses a range of frequencies, the same relationships remain valid for each frequency after undergoing a DFT, and \(\frac{\mathbf{b}}{c}\) can be separately eliminated for each \(\omega\in\Omega\). Note that the eigenvector associated with \(\lambda_{1}\) was never used. This procedure can be generalized to target the d least-damped eigenmodes of \(\mathbf{A}\). The solution at each time with arbitrary distances can be expanded as \[\mathbf{q}_{l}=\mathbf{q}_{s,l}+\sum_{j=1}^{d}\mathbf{q}_{\lambda_{j},l}+\mathbf{q}_{rest,l},\] (B.6) for \(1\leq l\leq d+1\). Utilizing the same relationships, we can eliminate the slowest components, ensuring that the residual term decays faster than all \(d\) modes. This procedure is developed to steepen the decay rate and shorten the transient length to meet the desired accuracy. The outcomes of this procedure closely resemble the output of the efficient transient strategy using Galerkin projection with the least-damped eigenmodes as the basis. The transient error can be estimated in a similar manner as described for the projection-based approach. Figure 14: The schematic of computing the action of \(\mathbf{R}\) on top and the action of \(\tilde{\mathbf{R}}\) on the bottom row.
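As a small illustrative sketch of the single-frequency relation (B.4) above (variable names are ours; we assume \(\lambda_{1}\) is known and that the residual \(\mathbf{q}_{rest,j}\) is negligible), the least-damped transient component can be estimated from two snapshots separated by \(\Delta t\) and subtracted to obtain an improved steady-state estimate:

```python
import numpy as np

def remove_least_damped_transient(q1, q2, omega, lam1, dt):
    """Estimate and remove the transient associated with the least-damped
    eigenvalue lam1, following (B.4), for forcing at a single frequency omega.

    q1, q2 : snapshots at times t1 and t1 + dt (complex arrays)
    lam1   : least-damped eigenvalue of A (complex, Re(lam1) < 0)
    """
    b = q1 - q2 * np.exp(-1j * omega * dt)        # known from time stepping
    c = 1.0 - np.exp((lam1 - 1j * omega) * dt)    # scalar constant
    q_t1 = b / c                                  # transient estimate at t1
    return q1 - q_t1, q_t1                        # steady-state estimate, transient
```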
2301.00253
Segmented Composite Design of Robust Single-Qubit Quantum Gates
Error mitigation schemes and error-correcting codes have been the center of much effort in quantum information processing research over the last few decades. While most of the successful proposed schemes for error mitigation are perturbative in the noise and assume deterministic systematic errors, studies of the problem considering the full noise and errors distribution are still scarce. In this work, we introduce an error mitigation scheme for robust single-qubit unitary gates based on composite segmented design, which accounts for the full distribution of the physical noise and errors in the system. We provide two optimization approaches to construct these robust segmented gates: perturbative and non-perturbative, that addresses all orders of errors. We demonstrate our scheme in the photonics realm for the dual-rail directional couplers realization. We show that the 3-segmented composite design for the fundamental single-qubits unitary operations reduces the error by an order of magnitude for a realistic distribution of errors, and that the two approaches are compatible for small errors. This is shown to significantly reduce the overhead of modern error correction codes. Our methods are rather general and can be applied to other realizations of quantum information processing units.
Ido Kaplan, Muhammad Erew, Yonatan Piasetzky, Moshe Goldstein, Yaron Oz, Haim Suchowski
2022-12-31T17:00:24Z
http://arxiv.org/abs/2301.00253v2
# Segmented Composite Design of Robust Single-Qubit Quantum Gates ###### Abstract Error mitigation schemes and error-correcting codes have been the center of much effort in quantum information processing research over the last few decades. While most of the successful proposed schemes for error mitigation are perturbative in the noise and assume deterministic systematic errors, studies of the problem considering the full noise and errors distribution are still scarce. In this work, we introduce an error mitigation scheme for robust single-qubit unitary gates based on composite segmented design, which accounts for the full distribution of the physical noise and errors in the system. We provide two optimization approaches to construct these robust segmented gates: perturbative and non-perturbative, that addresses all orders of errors. We demonstrate our scheme in the photonics realm for the dual-rail directional couplers realization. We show that the 3-segmented composite design for the fundamental single-qubits unitary operations reduces the error by an order of magnitude for a realistic distribution of errors, and that the two approaches are compatible for small errors. This is shown to significantly reduce the overhead of modern error correction codes. Our methods are rather general and can be applied to other realizations of quantum information processing units. ## I Introduction The potential exponential speedup for solving hard computational problems and the possible real-time capability to decrypt classical encryption protocols are the driving forces behind the tremendous research effort invested in quantum information processing (QIP) and quantum computing [1; 2; 3; 4]. Over the last several decades, major theoretical breakthroughs have been achieved, developing quantum algorithms that serve a variety of problems and fields, including algorithms for combinatorial optimization, quantum machine learning, decryption protocols, and variational quantum algorithms to find the ground state energy of Hamiltonian systems such as molecules [5; 6]. Yet, the realization of a quantum information processor is still far away. The major obstacles lie in the inherent systematic errors and stochastic noise of the physical building blocks, which influence state preparation through the measurement process or the unitary operations (gates), the basic ingredients of any quantum algorithm. The problem of errors and noise is usually dealt with by error mitigation schemes or error-correcting codes. In the former, one attempts to reduce the error using various algorithmic schemes, typically with a small overhead [7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19]. In the latter, one constructs logical qubits (or quantum gates) using many physical qubits, with redundancy and significant overhead that ensures that the logical qubit significantly outperforms the physical qubit [20]. Most relevant error-correcting codes are stabilizer codes [7], a prime example being the surface code, having relative tolerance to local errors [8; 9]. Yet, the capability of fault-tolerant quantum computation of the surface code is conditioned: The probability of errors has to be under certain thresholds for each operation, e.g., single-qubit gates or double-qubit gates [7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19]. High-fidelity physical gate are thus extremely important for realizing a useful error-correcting code. 
An important step towards fault-tolerant quantum computation is to increase the fidelity of single quantum operations, the single unitary gates, which are fundamental building blocks of QIP. This is challenging in the experimental realizations of QIP, where the slightest fabrication defects or an inaccurate coupling strength can lead to errors that include deviations from target driving amplitudes and frequencies. In the past few years, several studies suggested schemes for enhancing the robustness of state-to-state processes [21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31], and for devising robust unitary gates in various realizations of QIPs [32; 33; 34; 35; 36; 37; 38; 39]. One of the leading concepts of robust designs is based on the principles of composite pulse sequences used in atomic physics and nuclear magnetic resonance, which utilize a sequence composed of constant pulses to minimize the errors through the evolution of the quantum systems [21; 22; 23; 24; 40]. These techniques basically use a perturbative expansion of the gate's operation in small deterministic systematic errors, and mitigate the errors order by order. Usually, these schemes deal with varying one parameter of the Hamiltonian. Recently, an expansion of the technique has been provided to include the full parameter space. Specifically, in integrated photonic based QIP, which utilizes photons as low-noise carriers of quantum information in the dual-rail representation, fabrication may cause geometrical errors that influence mainly the Hamiltonian's diagonal part. A recently proposed robust solution for state-to-state directional couplers based on composite segmented couplers of different widths [31] was experimentally demonstrated [39], showing that the design approach does not require modifications to the fabrication protocols. However, all these proposed composite schemes deal with deterministic errors and noise, whereas noise is random by its nature, with randomness inherited from the quantum world, thermodynamic fluctuations, and from errors in manufacturing, preparation, and measurement. These issues become even more acute when dealing with a realization of robust unitary gate operations needed to allow full operation and control of quantum information processors, with a high-enough accuracy to comply with a specific target design for each physical realization. For photonic-based realizations, for example, this target accuracy is the fourth decimal point [41; 42; 43; 44], a target that places stringent fabrication tolerances on process parameters such as etching depth, waveguide widths, etc., which are challenging to meet in practice. While the currently known perturbative schemes have succeeded in constructing robust gates for the realization of robust unitary gate operations, treatment of the statistical nature of noise and errors is still lacking. Here we present a scheme for robust unitary operations. The scheme, based on segmented composite design, offers a way to design high-fidelity single-qubit quantum unitary gates. In contrast to previous demonstrations of robust unitary gate designs, we provide protocols to devise robust unitary gate operations considering the statistical nature of noise and errors in physical systems. We show that by knowing the appropriate model of systematic errors of a specific physical realization of single-qubit gates, robust gates can be constructed by composing them rather than by correcting them through an operation before or after.
In devising robust unitary gates, we follow two design paths. The first one is based on a perturbative approach, where we reduce the fully correlated error order by order in perturbation theory. The second is a non-perturbative method, where we search for the local maxima of the fidelity cost function so that we optimize while accounting for all orders of errors or their variances simultaneously. As an example, we apply both methods to the photonic dual-rail realization, providing robust high-fidelity unitary solutions to different single-qubit gates, including the fundamental \(X\), \(X^{\frac{1}{2}}\), \(X^{\frac{1}{6}}\) and \(H\) gates. We demonstrate that the unitary segmented solutions are effective and compatible in practical scenarios of directional-couplers realizations, and are far more robust to systematic errors as compared to uniform couplers. Furthermore, we present the advantage of utilizing optimized segmented couplers in reducing the logical error in the quantum circuit. While we take the integrated photonics path-encoded qubits realization as an example to illustrate the strengths of the scheme on-chip building blocks for quantum applications, the method is rather general and can be applied to any other realization of a QIP device. Our paper is organized as follows: In Section II we present the single qubit gates and our methods for designing robust ones for a general statistical error model, and illustrate them for an example error model. In Section III, we describe how single qubit gates are physically realized in integrated photonics, describe the error model in the integrated photonics realization, and, using our methods, find and design several robust gates according to a statistical model of fabrication errors in the manufacturing process. We further show how the logical error in a surface code, as a consequence, would behave given our solutions and an error model. In Section IV, we summarize and discuss our results. In the appendixes, we provide details of the calculations and further information on various solutions. ## II Method and Illustration on a Reduced Error Model ### Single Qubit Quantum Gates and Fidelity The time evolution of a general qubit system \(\left\{\left|0\right\rangle,\left|1\right\rangle\right\}\) is governed by the Schrodinger equation: \[i\partial_{t}\left(\begin{array}{c}c_{1}\left(t\right)\\ c_{2}\left(t\right)\end{array}\right)=\left(\begin{array}{cc}-\Delta\left( t\right)&\Omega^{*}\left(t\right)\\ \Omega\left(t\right)&\Delta\left(t\right)\end{array}\right)\left(\begin{array} []{c}c_{1}\left(t\right)\\ c_{2}\left(t\right)\end{array}\right), \tag{1}\] where \(c_{1}\left(t\right)\) and \(c_{2}\left(t\right)\) are the probability amplitudes at time \(t\) of the states \(\left|0\right\rangle\) and \(\left|1\right\rangle\) respectively, \(\Omega\left(t\right)\) is the (complex) Rabi frequency, \(\Delta\left(t\right)\) is the (real) detuning, and we set \(\hbar=1\). The unitary propagator of such a system is: \[U\left(t,0\right)=\mathcal{T}\left[\exp\left[-i\int_{0}^{t}\begin{pmatrix}- \Delta\left(t^{\prime}\right)&\Omega^{*}\left(t^{\prime}\right)\\ \Omega\left(t^{\prime}\right)&\Delta\left(t^{\prime}\right)\end{pmatrix} \right]dt^{\prime}\right]. \tag{2}\] When \(\Omega\) and \(\Delta\) are independent of time, the propagator simplifies to: \[U\left(t,0\right)=\exp\left[-it\begin{pmatrix}-\Delta&\Omega^{*}\\ \Omega&\Delta\end{pmatrix}\right]. 
\tag{3}\] Using physical systems that follow such SU(2) dynamics, one can implement various single qubit gates. However, when one considers noise in the physical system, the implemented gate deviates from the desired one. In order to quantify how far the noisy gate is from the desired one, we will consider the metric provided by the fidelity \(F\) of the gate \(U(\mathbf{\epsilon})\), which is defined as \[F(U_{\rm ideal},U(\mathbf{\epsilon}))=\frac{1}{2}|\text{Tr}(U_{\rm ideal}^{\dagger}U(\mathbf{\epsilon}))|, \tag{4}\] where \(U_{\rm ideal}\) is the desired ideal unitary gate given by Eq. (3), and \(U(\mathbf{\epsilon})\) is its actual physical realization, which depends on a set of jointly distributed random errors, \(\mathbf{\epsilon}=\{\epsilon^{a}\}_{a=1}^{m}\). This fidelity takes values in the interval \([0,1]\), where \(1\) corresponds to the case of no errors, and \(0\) corresponds to the case of maximal deviation from the desired matrix, such as getting \(iY\) instead of \(iX\). The goal in this work is to increase the expectation value of the fidelity: \[\bar{F}=\mathbb{E}_{\mathbf{\epsilon}}[F(U_{\rm ideal},U(\mathbf{\epsilon}))]. \tag{5}\] The relevant statistical error model should be taken depending on the specific physical realization of the gates, where one considers the quantum errors, thermodynamic errors, and the errors of manufacturing, preparation, and measurement. Maximizing the mean fidelity over a wide error range is crucial for fault-tolerant computation, as mentioned in the introduction, since a certain threshold for the resulting physical error probability has to be achieved. ### Constructing Robust Composite Gates The method that we employ to design robust gates is to compose pulses or segments. The reasoning behind this approach is the natural assumption that the relevant errors are highly correlated, and this correlation can be applied to cancel errors with appropriately tuned designs. Consider an ideal unitary gate \(U_{\mathrm{ideal}}\), as well as its actual noisy segmented realization \(U^{(N)}=\prod_{k=1}^{N}U_{k}(\mathbf{\epsilon}_{k})\), where \(\mathbf{\epsilon}_{k}\) is the random error vector of the \(k^{th}\) segment, which includes \(m\) errors: \(\mathbf{\epsilon}_{k}=\{\epsilon_{k}^{a}\}_{a=1}^{m}\). All the errors are jointly distributed random variables. Each segment \(U_{k}\) without errors is as in Eq. (3). The goal is to increase the expectation value of the fidelity in Eq. (5). In our analysis, we employ two methods. The first one is perturbative in the error random variables, where we consider them to be fully correlated and design the segmented gate such that we cancel the errors order by order in perturbation theory. More specifically, we construct analytical solutions of 3-segmented designs that cancel the first order error term. Clearly, cancellation of higher order error terms requires a larger number of segments. The second method is non-perturbative, where we consider Eq. (5) as a cost function to be maximized. While these two methods are compatible for small errors or small variances of errors, as will be seen, the non-perturbative approach also offers a path for addressing large values of the random error variances, where the optimization takes into account all orders in the errors simultaneously.
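As a concrete illustration of the quantities above (a minimal numerical sketch under our own conventions, not the code used for the results in this paper; the segment-ordering convention and the fully correlated offset added to the detuning are illustrative assumptions), a composite gate built from constant segments of the form of Eq. (3) can be assembled and its fidelity (4) and expected fidelity (5) estimated by Monte Carlo:

```python
import numpy as np
from scipy.linalg import expm

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def segment(omega, delta, t, eps=0.0):
    """One constant segment, Eq. (3), with a systematic detuning offset eps."""
    return expm(-1j * t * (omega * X - (delta + eps) * Z))

def composite(params, eps=0.0):
    """N-segment gate: product of segments, with params[0] acting first."""
    U = np.eye(2, dtype=complex)
    for omega, delta, t in params:
        U = segment(omega, delta, t, eps) @ U
    return U

def fidelity(U_ideal, U):
    """Gate fidelity of Eq. (4)."""
    return 0.5 * abs(np.trace(U_ideal.conj().T @ U))

def mean_fidelity(U_ideal, params, sigma, n_samples=5000, seed=0):
    """Monte Carlo estimate of Eq. (5) for a fully correlated error eps ~ N(0, sigma^2)."""
    rng = np.random.default_rng(seed)
    return np.mean([fidelity(U_ideal, composite(params, e))
                    for e in rng.normal(0.0, sigma, n_samples)])

# example: a single uniform segment implementing -iX, exposed to detuning errors
params_uniform = [(np.pi / 2, 0.0, 1.0)]   # Omega * t = pi/2, Delta = 0
print(mean_fidelity(1j * X, params_uniform, sigma=0.1))
```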
### Example: A detuning Error Model In order to illustrate our methods in a relatively simple case, we consider first a physical system which allows only real \(\Omega\)'s, and we assume a single error random variable \(\epsilon_{k}=\epsilon,k=1,...,N\), which is a systematic error in \(\Delta\), and neglect the error in \(\Omega\). We further assume that the errors of the different segments are fully correlated. This assumption describes well the errors in several quantum and classical systems that follow such \(\mathrm{SU}\left(2\right)\) dynamics, such as gates of trapped ions [25; 26], sum-frequency generation [45], atomic systems [27; 28; 29], etc. The \(N\)-segmented gate reads: \[U^{(N)}=\prod_{k=1}^{N}U\left(\Omega_{k},\Delta_{k},t_{k},\epsilon\right)\, \tag{6}\] where \[U\left(\Omega,\Delta,t,\epsilon\right)=e^{-it\left(\Omega X-\Delta Z-\epsilon Z \right)}\, \tag{7}\] and \(t_{k},\Omega_{k},\Delta_{k}\) are the length of the \(k^{th}\)-segment, its coupling, and its detuning, respectively. The \(N\)-segmented gate fidelity (4) is \(F(U^{(N)}(0),U^{(N)}(\epsilon))\). #### ii.2.1 Perturbative Method In the perturbative approach, we consider the error in the quantum gate \[E(\epsilon)=U(\epsilon)-U(0)=\sum_{k>0}E_{k}\epsilon^{k}. \tag{8}\] The task is to design an \(N\)-segmented gate \(U\) such that \(E=O(\epsilon^{n})\) for a given \(n\). In many practical realizations, it is sufficient to take \(n=2\). Note that since \(E(\epsilon)\) is a function of a random variable instance, removing the linear order term is not simply removing the expectation value of \(\epsilon\). Let us take for example \(U=X\), and construct the gate up to an overall phase, that is, \(iX\). We need to find \(N\) and \(\{\Omega_{k}\}_{k=1}^{N},\{\Delta_{k}\}_{k=1}^{N},\{t_{k}\}_{k=1}^{N}\) such that \(U^{(N)}\left(0\right)=iX\), \(\left.\frac{\partial E^{(N)}(\epsilon)}{\partial\epsilon}\right|_{\epsilon=0} =0\,\ \left.\frac{\partial^{2}E^{(N)}(\epsilon)}{\partial\epsilon^{2}} \right|_{\epsilon=0}=0\), etc. Using the unitarity of each propagator one can simplify the equations and reduce their complexity. In Appendix A we present analytical robust solutions of the equations for the gates of the form \((iX)^{\frac{1}{n}}\), where \(n\) is a positive integer, employing three segments, and two solutions of the \(iX\) gate using four segments with the same coupling constant. We also compare in this appendix our solutions' fidelity to that of a single segment for \(n=1,2,3\). #### ii.2.2 Non-Perturbative Method In the non-perturbative approach, we search for a maximum of a cost function (minimum of a loss function) by optimization. In order to simplify the optimization process, we split the loss function into two subfunctions, the _invalid range loss subfunction_ and the _robust fidelity loss subfunction_. The former ensures that the parameters we obtain are within their allowed range (for instance, the length of the waveguides cannot be negative) by strongly penalizing deviations from it. The robust fidelity loss subfunction calculates the fidelity for a range of \(N\) error values between \(-3\sigma\) to \(3\sigma\), \(\sigma\) being the standard deviation, and weighs these fidelities according to the assumed error distribution (for instance, a normal distribution). An overall minus sign is added in order for the algorithm to minimize this value and thus maximize the overall fidelity. 
For example, the loss function used for errors that have a normal distribution is: \[\text{Loss}=1-\sum_{x=-3\sigma}^{3\sigma}\frac{e^{-\frac{x^{2}}{2 \sigma^{2}}}}{\text{Dist}_{\text{sum}}}\cdot F(U_{\text{ideal}},U_{\text{ optimization}})(x)\] \[+\mu\cdot\sum_{i=0}^{N-1}\sum_{j=0}^{m-1}\max(0,p_{j,i}-p_{j}^{ \text{Max}})+\max(0,p_{j}^{\text{Min}}-p_{j,i})\,\] The former sum in the loss function is discrete: between each pair of subsequent \(x\) values there's an interval of \(\frac{1}{n}\), where \(n+1\) is the number of samples used to estimate the integral. The value \(\text{Dist}_{\text{sum}}=\sum_{x=-3\sigma}^{x=3\sigma}e^{-\frac{x^{2}}{2\sigma^ {2}}}\) is used to normalize the distribution function, which guarantees that the robust fidelity loss subfunction's minimal value is \(0\). \(\mu\) defines the weight ratio between the loss subfunctions (set to be in the range of \([5,100]\)). There are \(N\) segments used, with \(m\) parameters per segment (for instance, in this model, \(m=3\) because there are three parameters: \(\Omega,\Delta,t\)). \(p\) is the selected parameter values. \(p_{j}^{\text{Min}}\) and \(p_{j}^{\text{Max}}\) are chosen by physical limitations (for instance, \(t^{\text{Min}}=0\), since the length of the waveguides cannot be negative). By minimizing these two subfunctions, we obtain physically feasible parameters which minimize the fidelity loss for errors between \(-3\sigma\) to \(3\sigma\) weighted by the given error distribution. Furthermore, the optimizer we used is the Adam optimizer with a learning rate of \(10^{-3}\). Examples of non-perturbative solutions for the detuning error model and their simulations can be seen in Appendix B. We show in Fig. 1(a),(b) one solution on the Bloch sphere compared to the uniform gate, as well as how errors affect the result of the gate for two different initial states for each case (uniform and composite). Further details regarding the optimization process are described in Appendix C. ## III Robust segmented gates in integrated photonics As there are several realizations of quantum gates and each one has an appropriate statistical model of errors, we choose, the following, to apply our methods to the photonic realm [42, 43, 44]. This realm, which utilizes photons as excellent low-noise carriers of quantum information, requires that unitary gate comply with a target design to a fourth decimal point accuracy[41]. ### Directional Couplers as Gates and their Error Model According to the coupled-mode theory, the propagation of the pair of electrical fields \(E_{1,2}\) in a directional coupler of a fixed cross-section is described exactly by Eqs. (1) and (3), where the actual matrix elements that describe the dynamics along the two waveguides are the mode propagation constants' mismatch \(\Delta\beta\) and the interaction coupling \(\kappa\) between the two waveguides [45]. The coupling coefficient \(\kappa\) between the waveguides is equivalent to the off-diagonal term \(\Omega\). The mode mismatch between the mode index \(\Delta\beta=\beta_{1}-\beta_{2}\) is equivalent to the diagonal term \(\Delta\), and the propagation length \(z\), is equivalent to the evolution time \(t\). The coupling \(\kappa\) is largely determined by the distance between the cores. Figure 1: Naïve vs. composite gate. (a-b) Bloch sphere representation of a robust composite \(-iX\) Gate. 
The plot provides a schematic description of two different states on Bloch sphere and the trajectories they follow under the uniform \(-iX\) gate (in continuous red), and under the segmented gate presented in Table V (in continuous blue, turquoise, and purple). In dashed lines, we show the trajectories the states follow when an error \(\epsilon=0.17\Omega\) occurs simultaneously in all detunings. In black, we depict the torque vector of the uniform gate around which the states rotate under the \(-iX\) operation. One can see the robustness of the segmented gate against such errors compared to the uniform regular gate. We show this for two different initial states, \(\ket{0}\) and \(\cos\left(\frac{\pi}{8}\right)\ket{0}+\sin\left(\frac{\pi}{8}\right)\ket{1}\), to emphasize that the whole gate is robust, not only the complete population transfer between \(\ket{0}\) and \(\ket{1}\). In other words, the \(-iX\) gate is robust for any initial state; i.e. not only the magnitude of the element \(U_{12}\) of the representative matrix of the operation is robust (against errors in the physical system) but also its phase, as well as the phase of the element \(U_{11}\). (c-d) Dual rail photonic realization of unitary gates. Schematic 2D top view illustrations of standard and composite gates respectively, based on the directional couplers realization of gates in integrated photonics. We denote the waveguides widths \(w_{1},w_{2}\), the length \(t\) and the gap \(g\). In our analysis of the functional dependence of the coupling, the detuning, and the relevant error model, we solve for the realization of single-mode silicon-on-insulator rib waveguides. The parameters \(\Omega\) and \(\Delta\) in Eqs. (1) and (3) are in fact functions of the following physical parameters, some of which are depicted in Fig. 1(c),(d): 1. \(h_{1}\), \(h_{2}\) -- Etching depths, 2. \(w_{1}\), \(w_{2}\) -- The widths of the waveguides, 3. \(H_{1}\), \(H_{2}\) -- The heights of the waveguides, 4. \(g\) -- The gap between the waveguides, 5. \(T\) -- The temperature, 6. \(\lambda\) -- The wavelength. waveguide devices can support low-loss bends, down to some finite radius, mostly determined by the refraction index contrast between the core and the cladding of the waveguide. Below this radius, significant losses occur due to scattering from wall roughness and radiation loss from the curvature of the waveguide [46]. Typical Silicon on Insulator (SOI) devices usually allow a bend radius to be no smaller than roughly 10 microns. This results in difficulty in applying significant gap changes abruptly (i.e., within a distance that is considerably less than the length of a segment). Thus, in our designs we aimed for a fixed gap for all the different segments. For a fixed gap, etching depth, temperature, wavelength, and waveguide heights: \[\Omega=\kappa\left(w_{1},w_{2}\right), \tag{10a}\] \[\Delta=\Delta\beta\left(w_{1},w_{2}\right). \tag{10b}\] In order to estimate these functions, i.e., the mode propagation mismatch and coupling coefficients as functions of the geometric components for a desired range of values, we used the coupled mode theory approximation, Lumerical simulations, and known fitting methods. For details, see Appendix D. We assume that for the desired set of widths of the waveguides, they all have the same error, i.e. they are fully correlated, and this error is distributed normally: \[\delta w\sim\mathcal{N}\left(0,\sigma^{2}\right), \tag{11}\] independently of the value of the desired set of widths. 
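To connect the width-error model (11) with the Gaussian-weighted robust-fidelity loss introduced in Section II, the loss evaluation can be sketched as below. This is a schematic, not the optimization code used here: `kappa` and `delta_beta` stand for the fitted interpolation functions of Eq. (10), the factor of \(1/2\) on the mode mismatch is a coupled-mode convention we assume, and both the parameter-range penalty term and the bend-coupling matrices discussed in the next subsection are omitted.

```python
import numpy as np
from scipy.linalg import expm

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def coupler_gate(widths, lengths, dw, kappa, delta_beta):
    """Composite directional coupler under a fully correlated width error dw.

    widths     : list of (w1_k, w2_k) per segment
    lengths    : list of segment lengths z_k
    kappa, delta_beta : callables (w1, w2) -> coupling / mode mismatch
                        (placeholders for the interpolation functions of Eq. (10))
    """
    U = np.eye(2, dtype=complex)
    for (w1, w2), z in zip(widths, lengths):
        Om = kappa(w1 + dw, w2 + dw)
        Db = delta_beta(w1 + dw, w2 + dw)
        U = expm(-1j * z * (Om * X - 0.5 * Db * Z)) @ U   # assumed sign/factor convention
    return U

def robust_fidelity_loss(U_ideal, widths, lengths, kappa, delta_beta, sigma, n=25):
    """Gaussian-weighted fidelity loss over dw in [-3 sigma, 3 sigma] (Section II)."""
    dws = np.linspace(-3 * sigma, 3 * sigma, n)
    weights = np.exp(-dws**2 / (2 * sigma**2))
    weights /= weights.sum()                              # normalization (Dist_sum)
    fids = [0.5 * abs(np.trace(U_ideal.conj().T
                               @ coupler_gate(widths, lengths, dw, kappa, delta_beta)))
            for dw in dws]
    return 1.0 - float(np.dot(weights, fids))
```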
We describe our perturbative method in Section III.2.1, and the non-perturbative numerical search in Section III.2.2. ### Methodology Using the interpolation functions for the dependence of \(\kappa\) and \(\Delta\beta\) on the parameters, and assuming all segments have the same error in widths, the error model can be dealt with perturbatively in a simple way. We define the \(k\)th segment of the \(N\)-segmented gate by \[U_{k}=e^{-i\frac{k}{2}\left(\kappa(w_{1k}+\delta w,w_{2k}+\delta w)X-\Delta \beta(w_{1k}+\delta w,w_{2k}+\delta w)Z\right)}, \tag{12}\] where \(z_{k},w_{1k},w_{2k}\) are its length and widths respectively. The \(N\)-composite gate reads: \[U^{(N)}\left(\delta w\right)=U_{c}\left(\prod_{k=1}^{N}U_{k}\right)U_{c}, \tag{13}\] where the matrix \(U_{c}\) represents the non-zero coupling effect that occurs when the two waveguides are brought closer and taken further away. We model this effect as another segment at the beginning and the end of the composite segment design, with zero detuning, given by \[U_{c}=\cos(\theta_{c})I-i\sin(\theta_{c})X. \tag{14}\] The parameter \(\theta_{c}=0.232\) was determined numerically following [47], and verified experimentally by fabricating various zero detuning directional couplers with an identical cross-section, measuring the coupling ratio, extrapolating the coupling ratio to zero coupling length, and finally estimating the amount of coupling that occurs only from initiating and terminating the interaction. #### iii.2.1 Perturbative Method We seek solutions that make the derivatives vanish: \[\left.\frac{\partial^{j}U^{(N)}\left(\delta w\right)}{\partial\delta w^{j}} \right|_{\delta w=0}=0, \tag{15}\] for \(j=1,2,3,\cdots\) When we find a solution, we will provide a plot of its fidelity based on this simplified model, and a plot of its fidelity based on the model described in Section III.1. One can see from Fig. 3 that it is sufficient for our purposes to work with the former one. This is due to the assumption that \(\sigma<20\) nm, which is much smaller than \(w_{1}\), \(w_{2}\). #### iii.2.2 Non-Perturbative Method For the numeric approach, we set a correlated error distribution in the waveguide widths, multiply the matrices \(M_{i}\) in stage 4 of the robust fidelity loss calculation described in Appendix C by \(U_{c}\) on both sides to simulate the coupling effect before and after the waveguides enter the directional coupler, and use the interpolation functions in order to translate between the geometric parameters and \(\kappa\) and \(\Delta\beta\). We then optimize the geometric parameters of our segmented design, by using a stochastic gradient-based optimization method. Lastly, we analyze and verify the resulting segments in Lumerical. We also correct small discrepancies in the segment lengths that may arise between the coupled mode theory-based coupling approximation (Appendix D) and the more accurate two-waveguides simulations with Lumerical. The method for fixing this discrepancy is also explained in Appendix D. The numerical non-perturbative method is described in Fig. 2. ### Solutions In this section, we present selected results for robust gates generated using both the perturbative and non-perturbative approaches. We compare their fidelities to uniform coupler fidelity values that implement the same gate up to a global phase. The uniform coupler parameters and segmented coupler parameters, both perturbative and non-perturbative, are presented in Appendix E in tables 6 and 7. #### iii.3.1 Random error simulations In Figs. 
3 and 4 we compare the uniform and segmented gates robustness by using width errors sampled randomly from a normal error distribution. In both simulations, \(10^{5}\) values were sampled for improved accuracy. The segmented designs maintain a higher fidelity, even when the average width error in the waveguides is greater. #### iii.3.2 Deterministic error simulations In Fig. 5 we compare the uniform and segmented gates robustness for fixed deterministic errors. In the simulations, the fidelity of both uniform and segmented couplers was calculated for multiple error values between -20 nm and 20 nm, that is, between -3 and 3 standard deviations. As shown for the random error simulations, the fidelity of the segmented design is far more robust in comparison to the uniform one. Furthermore, for both the perturbative and non-perturbative approaches, the difference between the fidelity of the uniform and segmented couplers increases in a parabolic fashion as \(\sigma\) increases. As seen clearly in the figures in this section, the segmented designs are more robust than the uniform ones, having higher fidelity mean and lower fidelity variance. ### Logical Error The error reduction shown in Figs. 3 and 5 demonstrates the mitigation of correlated physical errors, and is evidently important during this era of quantum computation, called NISQ (Noisy Intermediate-Scale Quantum) [48], as such a reduction of the error by an order of magnitude allows an order of magnitude increase in the number of operations one could perform before noise takes over the computation. Moreover, error mitigation is also of much relevance to fault-tolerant quantum computers, where a quantum error-correcting code is implemented. Consider for instance the surface code (for a review see [49]). The logical error \(P_{L}\) is related to the physical error \(p\) by the empirical formula: \[P_{L}\sim(p/p_{\rm th})^{d_{e}}\, \tag{16}\] where \(p_{\rm th}\) is the surface code threshold and is estimated as 0.57%, \(d\) is the size of the surface array, and \(d_{e}=(d+1)/2\) is the code distance. Using Eq. (16), we can estimate how close the uniform and segmented couplers' error rates are to the empirical surface code threshold for single-qubit gates. In order to estimate the physical error rate \(p\), we used a large number of single logical qubit states \(\psi\) (10,000 uniformly randomly generated \(\psi\) states were used for each error rate estimation) and calculated \[p_{\psi}=|\langle U\psi,U_{\rm ideal}\psi\rangle|. \tag{17}\] We then selected the minimal value within this range as our physical error rate: \[p=\min_{\psi}(p_{\psi}). \tag{18}\] The results of this numerical estimate can be seen in Fig. 6, where the parameters used for this optimization are given in table 7. In (a) the physical error rates for uniform and segmented couplers are smaller than the threshold, meaning the logical error in both couplers can be reduced by using the surface code error correction. However, the ratio between the logical error rates is significant and rises exponentially for increasing \(d\) values. This implies that, by using the segmented coupler, one can perform error correction efficiently and with fewer resources (fewer physical qubits and quantum gates). In (b), while the physical error rate for the segmented coupler is still smaller than the threshold, the physical error rate for the uniform coupler is not.
This means that errors generated in the uniform coupler cannot be corrected using the error-correcting surface code. Note that this result is obtained when the width error is set to be very large (above 8 standard deviations). In these simulations, we used a circuit model-based quantum error correction code and not a measurement-based quantum computation model [50]. While this can lead to inaccuracies, since the optimization model itself is applicable to other qubit implementations (where, instead of segmented couplers, we can use, e.g., composite pulses), we expect these results to be qualitatively correct for photonic systems.

Figure 5: The fidelity of the perturbative and non-perturbative composite gates compared to uniform gates for a fixed error value in the waveguide widths, \(\delta w\). In (a) the ideal gate is \(X\) and in (b) the ideal gate is the Hadamard gate.

Figure 6: The logical error rate \(P_{L}\) as a function of the physical error rate \(p\), tested on the X gate using a surface code. In (a) the width error is set to be 3 standard deviations while in (b) the width error is set to be \(\sim 15\%\) of the nominal width (roughly 9 standard deviations). The dashed vertical line denotes the threshold of the quantum error correcting code.

## IV Conclusions To conclude, we presented a segmentation method for the design of robust unitary gates, which are resilient against random systematic errors that follow a general correlated probability distribution. We provided two approaches to construct these robust segmented gates: a perturbative approach and a non-perturbative approach, and demonstrated them in the photonic realm for the directional coupler realization of the gates. Specifically, we constructed robust designs against correlated Normally distributed width errors for the \(X\), \(X^{\frac{1}{2}}\), \(X^{\frac{1}{3}}\), and \(H\) gates. In the perturbative approach, the error matrix \(E(\epsilon)\) has been expanded order by order, and the geometric parameters of the segmented coupler were chosen so that the first two orders are eliminated. In the non-perturbative approach, the geometric parameters of the segmented coupler are generated by an optimization algorithm, which increases the fidelity of the segmented coupler given an input error distribution. The parameters generated by both approaches were compared to the uniform coupler's fidelity in two types of simulations: probabilistic simulations and deterministic simulations. In the probabilistic simulations, the mean fidelity and fidelity standard deviation are estimated for a range of different standard deviations used for the error distribution. In the deterministic simulations, the fidelity was calculated for a range of deterministic errors between -3 and 3 standard deviations (width error between -20 nm and 20 nm). In these simulations, both approaches presented parameters for segmented couplers, which were far more robust to systematic errors compared to the uniform coupler. The perturbative approach parameters showed robustness both in mean fidelity and in fidelity variance, and likewise for the non-perturbative approach for the deterministic simulations. In the last section of the results, we presented how using optimized segmented couplers can greatly reduce the logical error rate \(P_{L}\) in the quantum circuit.
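As a rough numerical illustration of the scaling in Eq. (16) (the physical error rates used below are hypothetical placeholders, not values extracted from our simulations):

```python
def logical_error_rate(p, d, p_th=0.0057):
    """Empirical surface-code scaling of Eq. (16): P_L ~ (p / p_th)^((d+1)/2)."""
    return (p / p_th) ** ((d + 1) / 2)

# hypothetical physical error rates for a uniform and a segmented coupler
p_uniform, p_segmented = 4e-3, 4e-4
for d in (3, 5, 7, 9):
    ratio = logical_error_rate(p_uniform, d) / logical_error_rate(p_segmented, d)
    print(d, ratio)   # the advantage of the segmented design grows exponentially with d
```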
For a width error of 20 nm (roughly \(\sim 5\%\) of the average waveguide width), both the uniform and segmented coupler were below the threshold of the quantum error correcting code, meaning both could potentially be corrected by the error correcting codes, but the uniform coupler required many more resources (qubits and quantum gates) to do so. For a larger width error of 60 nm (roughly \(\sim 15\%\) of the average waveguide width), the segmented coupler was still below the threshold; however, the uniform coupler was not, suggesting that it could not be corrected by a quantum error correcting code such as the surface code. The approaches shown in this paper were specifically used for directional couplers, but the algorithms presented are by no means limited to optic-based quantum computation -- they are applicable to any quantum computing hardware, making both approaches (perturbative and non-perturbative) relevant in different quantum gate implementations as well. Furthermore, while this paper concentrates on single-qubit gates and correlated errors, we expect the methodology to apply to multi-qubit gates or gates with partial error correlation between the segments, which are worthy topics for future studies. Another important topic for further research is the evaluation of how segmented couplers can impact the success rate of state-of-the-art quantum error correcting codes, such as the surface code; this can demonstrate how the correction of systematic errors can increase the success rates of modern quantum algorithms. We believe that our segmented design for unitary gate operations and the design methods that were provided here could serve as fundamental elements and operations in many physical realizations of quantum information processing and quantum computing. ## Acknowledgements Our work has been supported by the Israel Science Foundation (ISF) and the Directorate for Defense Research and Development (DDR&D) grant No. 3427/21. M.G. has been further supported by the US-Israel Binational Science Foundation (BSF) Grant No. 2020072. The work of Y.O. is supported in part by an ISF Center of Excellence.
2309.16213
Almost global solutions of 1D nonlinear Klein-Gordon equations with small weakly decaying initial data
It has been known that if the initial data decay sufficiently fast at space infinity, then 1D Klein-Gordon equations with quadratic nonlinearity admit classical solutions up to time $e^{C/\epsilon^2}$ while $e^{C/\epsilon^2}$ is also the upper bound of the lifespan, where $C>0$ is some suitable constant and $\epsilon>0$ is the size of the initial data. In this paper, we will focus on the 1D nonlinear Klein-Gordon equations with weakly decaying initial data. It is shown that if the $H^s$-Sobolev norm with $(1+|x|)^{1/2+}$ weight of the initial data is small, then the almost global solutions exist; if the initial $H^s$-Sobolev norm with $(1+|x|)^{1/2}$ weight is small, then for any $M>0$, the solutions exist on $[0,\epsilon^{-M}]$. Our proof is based on the dispersive estimate with a suitable $Z$-norm and a delicate analysis on the phase function.
Fei Hou, Fei Tao, Huicheng Yin
2023-09-28T07:37:34Z
http://arxiv.org/abs/2309.16213v1
Almost global solutions of 1D nonlinear Klein-Gordon equations with small weakly decaying initial data ###### Abstract It has been known that if the initial data decay sufficiently fast at space infinity, then 1D Klein-Gordon equations with quadratic nonlinearity admit classical solutions up to time \(e^{C/\epsilon^{2}}\) while \(e^{C/\epsilon^{2}}\) is also the upper bound of the lifespan, where \(C>0\) is some suitable constant and \(\epsilon>0\) is the size of the initial data. In this paper, we will focus on the 1D nonlinear Klein-Gordon equations with weakly decaying initial data. It is shown that if the \(H^{s}\)-Sobolev norm with \((1+|x|)^{1/2+}\) weight of the initial data is small, then the almost global solutions exist; if the initial \(H^{s}\)-Sobolev norm with \((1+|x|)^{1/2}\) weight is small, then for any \(M>0\), the solutions exist on \([0,\epsilon^{-M}]\). Our proof is based on the dispersive estimate with a suitable \(Z\)-norm and a delicate analysis on the phase function. **Keywords:** 1D Klein-Gordon equation, weakly decaying initial data, dispersive estimate, \(Z\)-norm. **2020 Mathematical Subject Classification.** 35L70, 35L72. ###### Contents * 1 Introduction * 2 Preliminaries * 2.1 Littlewood-Paley decomposition and definition of \(Z_{\alpha}\)-norm * 2.2 Linear dispersive estimate * 2.3 Two technical Lemmas * 3 Reduction * 3.1 First normal form transformation * 3.2 Partial second normal form transformation * 4 * 4 Energy estimate and continuity of \(Z_{\alpha}\)-norm * 4.1 Energy estimate * 4.2 Continuity of \(Z_{\alpha}\)-norm * 5 Estimate of \(Z_{\alpha}\)-norm * 5.1 Estimate of the cubic nonlinearity \(\mathcal{C}_{k}(s)\) * 5.2 Estimates of the quartic and higher order nonlinearities * 5.3 Estimates of the boundary term \(\mathcal{B}_{k}\) * 6 Proofs of Theorem 1.1 and Corollaries 1.2 and 1.3 * A Estimates of multi-linear Fourier multipliers ## 1 Introduction Consider the Cauchy problem of the following semilinear Klein-Gordon equation \[\begin{cases}\square u+u=F(u,\partial u),\quad(t,x)\in[0,\infty)\times \mathbb{R}^{d},\\ (u,\partial_{t}u)(0,x)=(u_{0},u_{1})(x),\end{cases} \tag{1.1}\] where \(\square=\partial_{t}^{2}-\Delta\), \(\Delta=\sum_{j=1}^{d}\partial_{j}^{2}\), \(x=(x^{1},\cdots,x^{d})\in\mathbb{R}^{d}\), \(d\geq 1\), \(\partial_{0}=\partial_{t}\), \(\partial_{j}=\partial_{x^{j}}\) for \(j=1,\cdots,d\), \(\partial_{x}=(\partial_{1},\cdots,\partial_{n})\), \(\partial=(\partial_{0},\partial_{x})\), \(u\) is real valued, \((u_{0},u_{1})\in H^{s+1}(\mathbb{R}^{d})\times H^{s}(\mathbb{R}^{d})\) with \(s>\frac{d}{2}\) being suitably large numbers, \(\varepsilon=\|u_{0}\|_{H^{s+1}(\mathbb{R}^{d})}+\|u_{1}\|_{H^{s}(\mathbb{R}^{d} )}>0\) is sufficiently small, and the smooth nonlinearity \(F(u,\partial u)\) is quadratic on \((u,\partial u)\). Under the assumption of null condition for \(F(u,\partial u)\), the authors in [4] prove that the solution \(u\in C([0,T_{\varepsilon}),H^{s+1}(\mathbb{R}^{d}))\cap C^{1}([0,T_{ \varepsilon}),H^{s}(\mathbb{R}^{d}))\) of (1.1) exists, where \(T_{\varepsilon}\geq Ce^{Ce^{\varepsilon-\mu}}\) for \(\mu=1\) if \(d\geq 3\), and \(\mu=2/3\) if \(d=2\). In addition, for \(d=1\), the lifespan \(T_{\varepsilon}\geq\frac{C}{\varepsilon^{4}|\ln\varepsilon|^{6}}\) of (1.1) is shown in [2]. 
Recently, without the restriction of null condition for \(F(u,\partial u)\), the authors in [8] have established that the existence time of the solution \(u\in C([0,T_{\varepsilon}),H^{s+1}(\mathbb{R}^{d}))\cap C^{1}([0,T_{ \varepsilon}),H^{s}(\mathbb{R}^{d}))\) to (1.1) can be improved to \(T_{\varepsilon}=+\infty\) if \(d\geq 3\), \(T_{\varepsilon}\geq e^{C\varepsilon^{-2}}\) if \(d=2\) and \(T_{\varepsilon}\geq\frac{C}{\varepsilon^{4}}\) if \(d=1\). Moreover, for \(d=2\) and any fixed number \(\beta>0\), if \[\tilde{\varepsilon}=\|u_{0}\|_{H^{N+1}(\mathbb{R}^{2})}+\|u_{1}\|_{H^{N}( \mathbb{R}^{2})}+\|(1+|x|)^{\beta}u_{0}\|_{L^{2}(\mathbb{R}^{2})}+\|(1+|x|)^{ \beta}u_{1}\|_{L^{2}(\mathbb{R}^{2})} \tag{1.2}\] is sufficiently small, where \(N\geq 12\), then it is proved in [8] that (1.1) has a global small classical solution \(u\in C([0,\infty),H^{N+1}(\mathbb{R}^{2}))\cap C^{1}([0,\infty),H^{N}( \mathbb{R}^{2}))\). In the present paper, we are concerned with the 1D case of (1.1), that is, \[\begin{cases}\partial_{t}^{2}u-\partial_{x}^{2}u+u=F(u,\partial u),\quad(t,x )\in[0,\infty)\times\mathbb{R},\\ (u,\partial_{t}u)(0,x)=(u_{0},u_{1})(x).\end{cases} \tag{1.3}\] Our main results can be stated as follows. **Theorem 1.1**.: _Let \(N\geq 27\) and \(\alpha\in(0,1/2]\). There are two positive constants \(\varepsilon_{0}\) and \(\kappa_{0}\) such that if \((u_{0},u_{1})\) satisfies_ \[\varepsilon:=\|u_{0}\|_{H^{N+1}(\mathbb{R})}+\|u_{1}\|_{H^{N}(\mathbb{R})}+\|( \Lambda u_{0},u_{1})\|_{Z_{\alpha}}\leq\varepsilon_{0}, \tag{1.4}\] _where \(\Lambda:=(1-\partial_{x}^{2})^{1/2}\) and \(\|\cdot\|_{Z_{\alpha}}\) is defined by (2.1) below, then (1.3) has a unique classical solution \(u\in C([0,T_{\alpha,\varepsilon}],H^{N+1}(\mathbb{R}))\cap C^{1}([0,T_{\alpha, \varepsilon}],H^{N}(\mathbb{R}))\) with_ \[T_{\alpha,\varepsilon}=\begin{cases}e^{\kappa_{0}/\varepsilon^{2}}-1,&\qquad \alpha=1/2,\\ \frac{\kappa_{0}}{\varepsilon^{\frac{2}{1-2\alpha}}},&\qquad\alpha\in(0,1/2). \end{cases} \tag{1.5}\] _Moreover, there is a positive constant \(C\) such that_ \[\|(\Lambda u,\partial_{t}u)(t,\cdot)\|_{L^{\infty}(\mathbb{R})}\leq C \varepsilon(1+t)^{-\alpha}. \tag{1.6}\] **Corollary 1.2**.: _Let \(N\geq 27\). There are two positive constants \(\epsilon_{1}\) and \(\kappa_{1}\) such that for any \(\beta>1/2\), if \((u_{0},u_{1})\) satisfies_ \[\epsilon:=\|u_{0}\|_{H^{N+1}(\mathbb{R})}+\|u_{1}\|_{H^{N}(\mathbb{R})}+\| \langle x\rangle^{\beta}\Lambda^{14}(\Lambda u_{0},u_{1})\|_{L^{2}(\mathbb{R} )}\leq\epsilon_{1},\] _where \(\langle x\rangle=\sqrt{1+x^{2}}\), then (1.3) has a unique classical solution \(u\in C([0,e^{\kappa_{1}/\epsilon^{2}}-1],H^{N+1}(\mathbb{R}))\cap C^{1}([0,e^ {\kappa_{1}/\epsilon^{2}}-1],H^{N}(\mathbb{R}))\)._ **Corollary 1.3**.: _Let \(N\geq 27\). 
For any \(M>0\), there is \(\epsilon_{2}>0\), such that if \((u_{0},u_{1})\) satisfies_ \[\epsilon:=\|u_{0}\|_{H^{N+1}(\mathbb{R})}+\|u_{1}\|_{H^{N}(\mathbb{R})}+\| \langle x\rangle^{1/2}\Lambda^{14}(\Lambda u_{0},u_{1})\|_{L^{2}(\mathbb{R})} \leq\epsilon_{2},\] _then (1.3) has a unique classical solution \(u\in C([0,\epsilon^{-M}],H^{N+1}(\mathbb{R}))\cap C^{1}([0,\epsilon^{-M}],H^{ N}(\mathbb{R}))\)._ **Remark 1.1**.: For the Cauchy problem \[\begin{cases}\partial_{t}^{2}u-\partial_{x}^{2}u+u=(\partial_{t}u)^{2} \partial_{x}u,\\ (u,\partial_{t}u)(0,x)=\varepsilon(\tilde{u}_{0},\tilde{u}_{1})(x),\end{cases} \tag{1.7}\] where \((\tilde{u}_{0},\tilde{u}_{1})\in C_{0}^{\infty}([-R,R])\), [7, Proposition 7.8.8] proved that the lifespan \(T_{\varepsilon}\leq R(e^{\frac{2}{\sigma\varepsilon^{2}}}-1)\) holds if \(\sigma=\int_{\mathbb{R}}\tilde{u}_{0}^{\prime}(x)\tilde{u}_{1}(x)dx>0\). Note that problem (1.3) contains the case (1.7), then the upper bound \(T_{1/2,\varepsilon}=e^{\kappa_{0}/\varepsilon^{2}}-1\) in Theorem 1.1 and \(T_{\varepsilon}=e^{\kappa_{1}/\epsilon^{2}}-1\) in Corollary 1.2 are optimal. **Remark 1.2**.: Although the lifespan \(T_{\alpha,\varepsilon}\) in Theorem 1.1 may be not optimal for \(\alpha\in(0,1/2)\), it suffices to obtain Corollary 1.3. **Remark 1.3**.: By the definition of \(Z_{\alpha}\)-norm in (2.1) below, there exists some positive constant \(C>0\) such that \[\|f\|_{Z_{1/2}}\leq C\|(1+|x|)^{1/2+}\Lambda^{14}f\|_{L^{2}}\text{ and }\|f\|_{Z_{\alpha}}\leq C\|(1+|x|)^{1/2}\Lambda^{14}f\|_{L^{2}}\text{ for }\alpha\in(0,1/2). \tag{1.8}\] One can see the details in the proofs for Corollaries 1.2 and 1.3 of SS6. **Remark 1.4**.: When the small data \((u_{0},u_{1})(x)\) decay sufficiently fast, the analogous result to Corollary 1.2 has been obtained for problem (1.3) in [12] by the vector field method. It is pointed out that our Corollary 1.2 only requires the smallness of \(H^{s}\)-Sobolev norm with \(\langle x\rangle^{1/2+}\) weights of \((u_{0},u_{1})\), which leads to the failure of vector field method since \(\|x\partial_{x}(u_{0},u_{1})\|_{L^{2}(\mathbb{R})}\) can become infinite. **Remark 1.5**.: Consider 1D quasilinear Klein-Gordon equation \[\begin{cases}\partial_{t}^{2}v-\partial_{x}^{2}v+v=P(v,\partial v,\partial_{tx}^ {2}v,\partial_{x}^{2}v),\quad(t,x)\in[0,\infty)\times\mathbb{R},\\ (v,\partial_{t}v)(0,x)=\delta(v_{0},v_{1})(x),\end{cases} \tag{1.9}\] where \(\delta>0\) is small, \(P(v,\partial v,\partial_{tx}^{2}v,\partial_{x}^{2}v)\) is smooth on its arguments and linear with respect to \((\partial_{tx}^{2}v,\partial_{x}^{2}v)\), moreover, \(P\) vanishes at least at order 2 at 0. In [3], under the null condition of \(P(v,\partial v,\partial_{tx}^{2}v,\partial_{x}^{2}v)\) and \((v_{0},v_{1})(x)\in C_{0}^{\infty}(\mathbb{R})\), the author shows that (1.9) has a global small solution. When \(P(v,\partial v,\partial_{tx}^{2}v,\partial_{x}^{2}v)\) is a homogeneous polynomial of degree 3 in \((v,\partial v,\partial_{tx}^{2}v,\partial_{x}^{2}v)\), affine in \((\partial_{tx}^{2}v,\partial_{x}^{2}v)\), if there exists an integer \(s\) sufficiently large such that \[\|v_{0}\|_{H^{s+1}(\mathbb{R})}+\|v_{1}\|_{H^{s}(\mathbb{R})}+\|xv_{0}\|_{H^{2 }(\mathbb{R})}+\|xv_{1}\|_{H^{1}(\mathbb{R})}\leq 1, \tag{1.10}\] it is proved in [17] that (1.9) admits a global small solution under the null condition of \(P(v,\partial v,\partial_{tx}^{2}v,\partial_{x}^{2}v)\). 
By (1.10), \((v_{0},v_{1})\) decays as \(\langle x\rangle^{-1}\) at infinity, which implies that the method of Klainerman vector fields can be applied in [17]. **Remark 1.6**.: When \(d\geq 2\), it is well known that problem (1.1) with rapidly decaying and small initial data \((u_{0},u_{1})\) has a global smooth solution, see [14, 15, 10, 16]. **Remark 1.7**.: For 1D or 2D irrotational Euler-Poisson systems, when the \(H^{s}\)-Sobolev norms with \(1+|x|\) weight of initial data are small, the authors in [6] or [11] have proved the global existence of small solutions, respectively. In this paper, we prove the almost global existence of problem (1.3) with quadratic nonlinearity and small \(H^{s}\)-Sobolev norm with lower order \(\langle x\rangle^{1/2+}\) weight. It is expected that 1D or 2D irrotational Euler-Poisson systems still have global solutions when the corresponding initial data with the lower order weight \(\langle x\rangle^{1/2+}\) or \(\langle x\rangle^{0+}\) are small. We now give some comments and illustrations on the proof of Theorem 1.1. Note that the vector field method in [14, 10, 12] will produce quite high order \(\langle x\rangle\) weight in the resulting Sobolev norm of the initial data, which is not suitable for the proof of Theorem 1.1 with the initial data of lower order \(\langle x\rangle^{1/2+}\) weight. Motivated by the Fourier analysis methods as in [6, 9, 11, 15], at first, we will transform the quadratic nonlinearity of (1.3) into the cubic nonlinearity. For this end, we set \[U:=(\partial_{t}+i\Lambda)u.\] Then (1.3) can be reduced to the following half Klein-Gordon equation \[(\partial_{t}-i\Lambda)U=\mathcal{N}(U), \tag{1.11}\] where \(\mathcal{N}(U)\) is at least quadratic in \(U\). Denote the profile \[V:=V_{+}=e^{-it\Lambda}U,\quad V_{-}:=\overline{V}. \tag{1.12}\] Applying Fourier transformation to (1.11) yields \[\hat{V}(t,\xi)=\hat{V}(0,\xi)+\sum_{\mu_{1},\mu_{2}=\pm}\int_{0}^{t}\int_{ \xi_{1}+\xi_{2}=\xi}e^{is\Phi_{\mu_{1}\mu_{2}}}m_{2}(\xi_{1},\xi_{2})\hat{V}_{ \mu_{1}}(s,\xi_{1})\hat{V}_{\mu_{2}}(s,\xi_{2})d\xi_{1}ds+\text{other terms}, \tag{1.13}\] where \(\hat{V}(t,\xi)=(\mathscr{F}_{x}V(t,x))(t,\xi)\), \(m_{2}(\xi_{1},\xi_{2})\) is some Fourier multiplier and \[\Phi_{\mu_{1}\mu_{2}}=\Phi_{\mu_{1}\mu_{2}}(\xi_{1},\xi_{2}):=-\Lambda(\xi_{1}+ \xi_{2})+\mu_{1}\Lambda(\xi_{1})+\mu_{2}\Lambda(\xi_{2}),\quad\Lambda(\xi)= \sqrt{1+\xi^{2}},\quad\xi\in\mathbb{R}\,.\] Note that \(\Phi_{\mu_{1}\mu_{2}}\neq 0\) for equation (1.3). Then one can integrate by parts in time \(s\) in (1.13) and utilize (1.11) to obtain \[\hat{V}(t,\xi)=\hat{V}(0,\xi)+\sum_{\begin{subarray}{c}(\mu_{1}, \mu_{2},\mu_{3})\in\{(+++),\\ (++-),(+--),(---)\}\\ \times\hat{V}_{\mu_{2}}(s,\xi_{2})\hat{V}_{\mu_{3}}(s,\xi_{3})d\xi_{1}d\xi_{2 }ds+\mathrm{other\ terms},\end{subarray}}\int_{0}^{t}\iint_{\xi_{1}+\xi_{2}+\xi_ {3}=\xi}e^{is\Phi_{\mu_{1}\mu_{2}\mu_{3}}}\,m_{3}(\xi_{1},\xi_{2},\xi_{3}) \hat{V}_{\mu_{1}}(s,\xi_{1}) \tag{1.14}\] \[\times\hat{V}_{\mu_{2}}(s,\xi_{2})\hat{V}_{\mu_{3}}(s,\xi_{3})d \xi_{1}d\xi_{2}ds+\mathrm{other\ terms},\] where \(m_{3}(\xi_{1},\xi_{2},\xi_{3})\) is the resulting Fourier multiplier and \[\Phi_{\mu_{1}\mu_{2}\mu_{3}}(\xi_{1},\xi_{2},\xi_{3}):=-\Lambda(\xi_{1}+\xi_{2 }+\xi_{3})+\mu_{1}\Lambda(\xi_{1})+\mu_{2}\Lambda(\xi_{2})+\mu_{3}\Lambda(\xi _{3}). \tag{1.15}\] Through the normal form transformation (see details in Section 3.1), one can simply consider problem (1.3) with the cubic nonlinearity. 
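For the reader's convenience, we record the elementary identity behind this first reduction (a schematic version of the computation carried out in Section 3.1). Since \(\Phi_{\mu_{1}\mu_{2}}\neq 0\), one may write \(e^{is\Phi_{\mu_{1}\mu_{2}}}=(i\Phi_{\mu_{1}\mu_{2}})^{-1}\partial_{s}e^{is\Phi_{\mu_{1}\mu_{2}}}\), and hence, after integrating by parts in \(s\),
\[
\int_{0}^{t}\int_{\xi_{1}+\xi_{2}=\xi}e^{is\Phi_{\mu_{1}\mu_{2}}}m_{2}\,\hat{V}_{\mu_{1}}\hat{V}_{\mu_{2}}\,d\xi_{1}ds
=\Big[\int_{\xi_{1}+\xi_{2}=\xi}\frac{e^{is\Phi_{\mu_{1}\mu_{2}}}m_{2}}{i\Phi_{\mu_{1}\mu_{2}}}\,\hat{V}_{\mu_{1}}\hat{V}_{\mu_{2}}\,d\xi_{1}\Big]_{s=0}^{s=t}
-\int_{0}^{t}\int_{\xi_{1}+\xi_{2}=\xi}\frac{e^{is\Phi_{\mu_{1}\mu_{2}}}m_{2}}{i\Phi_{\mu_{1}\mu_{2}}}\,\partial_{s}\big(\hat{V}_{\mu_{1}}\hat{V}_{\mu_{2}}\big)\,d\xi_{1}ds.
\]
By (1.11) and (1.12), each time derivative \(\partial_{s}\hat{V}_{\mu_{l}}\) produces at least one additional factor of \(\hat{V}\), so the last integral is at least cubic; this is exactly how (1.13) is upgraded to (1.14).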
Based on this, applying the standard energy method, one can obtain that there are some positive constants \(C\) and \(N^{\prime}\) such that \[\frac{d}{dt}\|U(t)\|_{H^{N}(\mathbb{R})}\leq C\|U(t)\|_{W^{N^{\prime},\infty}(\mathbb{R})}^{2}\|U(t)\|_{H^{N}(\mathbb{R})}. \tag{1.16}\] To derive the sufficient time-decay of \(\|U(t)\|_{W^{N^{\prime},\infty}}\), we first consider the corresponding linear problem of (1.3) \[\begin{cases}\partial_{t}^{2}u_{lin}-\partial_{x}^{2}u_{lin}+u_{lin}=0,\quad(t,x)\in[0,\infty)\times\mathbb{R},\\ (u_{lin},\partial_{t}u_{lin})(0,x)=(u_{0},u_{1})(x).\end{cases} \tag{1.17}\] The solution to (1.17) can be expressed as \[u_{lin}(t)=\frac{(e^{it\Lambda}+e^{-it\Lambda})u_{0}}{2}+\frac{(e^{it\Lambda}-e^{-it\Lambda})\Lambda^{-1}u_{1}}{2i}. \tag{1.18}\] Note that by the standard dispersive estimate of \(e^{\pm it\Lambda}\) (see (2.2) below), one has \[\|e^{\pm it\Lambda}f\|_{L^{\infty}(\mathbb{R})}\leq C(1+t)^{-1/2}\|\Lambda^{3/2+}f\|_{L^{1}(\mathbb{R})}. \tag{1.19}\] Under the weakly decaying initial data of Theorem 1.1, it is necessary to employ the \(Z_{\alpha}\)-norm instead of the \(L^{1}(\mathbb{R})\) norm on the right hand side of (1.19), which yields \[\|u_{lin}(t)\|_{W^{N^{\prime},\infty}(\mathbb{R})}\leq C(1+t)^{-\alpha}\|(u_{0},\Lambda^{-1}u_{1})\|_{Z_{\alpha}},\quad\alpha\in(0,1/2]. \tag{1.20}\] Similarly, for the solution \(u(t)\) to the nonlinear problem (1.3), we can arrive at \[\|U(t)\|_{W^{N^{\prime},\infty}(\mathbb{R})}\leq C(1+t)^{-\alpha}\|V(t)\|_{Z_{\alpha}},\quad\alpha\in(0,1/2], \tag{1.21}\] where \(V\) is defined in (1.12). The remaining task is to show \(\|V(t)\|_{Z_{\alpha}}\leq C\varepsilon\). Inspired by [9, 11], we will give a precise analysis of the related cubic nonlinearity and perform a suitable normal form transformation once again. Note that for \((\mu_{1},\mu_{2},\mu_{3})\in\{(+++),(+--),(---)\}\), the phase \(\Phi_{\mu_{1}\mu_{2}\mu_{3}}\) does not vanish and the cubic nonlinearity can be further transformed into a quartic one. For the bad cubic nonlinearity \(\hat{V}_{+}(s,\xi_{1})\hat{V}_{+}(s,\xi_{2})\hat{V}_{-}(s,\xi_{3})\), the corresponding phase in (1.14) is \[\begin{split}&\Phi_{bad}(\xi,\eta,\zeta)=\Phi_{++-}(\xi_{1},\xi_{2},\xi_{3})=-\Lambda(\xi)+\Lambda(\xi-\eta)+\Lambda(\eta-\zeta)-\Lambda(\zeta),\\ &\xi_{1}=\xi-\eta,\qquad\xi_{2}=\eta-\zeta,\qquad\xi_{3}=\zeta.\end{split} \tag{1.22}\] To handle this bad phase, we write (1.14) in the physical space as \[\begin{split} V(t,x)=V(0,x)+\frac{1}{(2\pi)^{3}}\int_{0}^{t}\iiint_{\mathbb{R}^{3}}K_{bad}&(x-x_{1},x-x_{2},x-x_{3})V_{+}(s,x_{1})V_{+}(s,x_{2})\\ &\times V_{-}(s,x_{3})dx_{1}dx_{2}dx_{3}ds+\text{other terms},\end{split} \tag{1.23}\] where the Schwartz kernel \(K_{bad}\) is given by \[\begin{split}& K_{bad}(x-x_{1},x-x_{2},x-x_{3})=\iiint_{\mathbb{R}^{3}}e^{i\Psi_{bad}}\times\{\text{other terms}\}d\xi d\eta d\zeta,\\ &\Psi_{bad}=s\Phi_{bad}(\xi,\eta,\zeta)+\xi(x-x_{1})+\eta(x_{1}-x_{2})+\zeta(x_{2}-x_{3}).\end{split} \tag{1.24}\] Therefore, in order to estimate \(\|V(t)\|_{Z_{\alpha}}\), the key points are to analyze the phase \(\Psi_{bad}\) and further to treat the Schwartz kernel \(K_{bad}\).
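To see why the interaction \((++-)\) is indeed bad, note that \(\Phi_{bad}\) vanishes identically on two resonant hypersurfaces. Since \(\Lambda\) is even, taking \(\eta=0\) (that is, \(\xi_{2}=-\xi_{3}\)) or \(\eta=\xi+\zeta\) (that is, \(\xi_{1}=-\xi_{3}\)) in (1.22) gives
\[
\Phi_{bad}(\xi,0,\zeta)=-\Lambda(\xi)+\Lambda(\xi)+\Lambda(-\zeta)-\Lambda(\zeta)=0,
\qquad
\Phi_{bad}(\xi,\xi+\zeta,\zeta)=-\Lambda(\xi)+\Lambda(-\zeta)+\Lambda(\xi)-\Lambda(\zeta)=0.
\]
Consequently the factor \(\Phi_{bad}^{-1}\), which is available for the good sign combinations, cannot be used here, and the \((++-)\) interaction has to be treated directly through the phase \(\Psi_{bad}\) and the kernel \(K_{bad}\).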
For this purpose, according to the relations \(\xi_{1}=\xi-\eta\), \(\xi_{2}=\eta-\zeta\) and \(\xi_{3}=\zeta\), the following cases are distinguished: \[\begin{array}{cccc}&\xi-\eta&\eta-\zeta&\zeta\\ \text{case }(\text{LLH})&\text{low}&\text{low}&\text{high}\\ \text{case }(\text{HLL})&\text{high}&\text{low}&\text{low}\\ \text{case }(\text{LHL})&\text{low}&\text{high}&\text{low}\\ \text{case }(\text{HLH})&\text{high}&\text{low}&\text{high}\\ \text{case }(\text{Oth})&&\text{other cases}\end{array} \tag{1.25}\] In the case (LLH), one has \(|\xi-\eta|,|\eta-\zeta|\ll|\zeta|\) and \(\Phi_{bad}\neq 0\). Then the related cubic nonlinearity can be transformed into a quartic one. For the cases (HLL), (LHL), (HLH) and (Oth), it is required to precisely compute the critical points of \(\Psi_{bad}\). However, this is a hard task since \(\partial_{\xi,\zeta}\Psi_{bad}\) depends on the space-time locations as well as the frequencies: \[\begin{split}&\partial_{\xi}\Psi_{bad}=x-x_{1}+s(\Lambda^{\prime}(\xi-\eta)-\Lambda^{\prime}(\xi))=x-x_{1}-s\eta\Lambda^{\prime\prime}(\xi-r_{1}\eta),\\ &\partial_{\zeta}\Psi_{bad}=x_{2}-x_{3}+s(\Lambda^{\prime}(\zeta-\eta)-\Lambda^{\prime}(\zeta))=x_{2}-x_{3}-s\eta\Lambda^{\prime\prime}(\zeta-r_{2}\eta),\end{split} \tag{1.26}\] where \(r_{1},r_{2}\in[0,1]\) and \(\Lambda^{\prime\prime}(y)=(1+y^{2})^{-3/2}\) with \(y\in\mathbb{R}\). On the other hand, in order to analyze the critical points of \(\Psi_{bad}\) in (1.26), the Littlewood-Paley decompositions both in the physical and frequency spaces are applied, which leads to the introduction of the related \(Z_{\alpha}\)-norm. Note that by a careful discussion of the relations between \(s\eta\) and the other factors in (1.26), a suitable classification is made in terms of the relative size of the space-time locations and the frequencies. Roughly speaking, the classification distinguishes the region near the possible critical points of \(\Psi_{bad}\) from the region away from them. Near the possible critical points, the \(Z_{\alpha}\)-norm estimate of the cubic nonlinearity can be treated by the dispersive estimate (1.21) together with a bootstrap assumption on \(\|V(t)\|_{Z_{\alpha}}\). Away from the critical points, the stationary phase method is applied. Nevertheless, many involved and technical computations are needed. For example, in the case (HLL) with \(|\eta|\ll|\xi|\), by the observation \(\Lambda^{\prime\prime}(\xi-r_{1}\eta)\approx(1+|\xi|)^{-3}\), the \(L_{x}^{\infty}\) norm of some related high frequency term can be obtained; in the case (HLH), due to the different distances from the zero points of \(\partial_{\xi}\Psi_{bad}\), the high-frequency, intermediate-frequency and low-frequency parts of the kernel \(K_{bad}\) are treated separately: for the high-frequency and low-frequency parts, since the corresponding frequencies are away from the zero points of \(\partial_{\xi}\Psi_{bad}\), the stationary phase argument with respect to the \(\xi\) variable can be implemented; for the intermediate-frequency part, the zero points of \(\partial_{\xi}\Psi_{bad}\) and \(\partial_{\zeta}\Psi_{bad}\) will be considered simultaneously so that the space-decay rate of \(K_{bad}\) can be obtained. Next we explain why the technical analysis of the related phase \(\Psi_{bad}\) in the 2D case of [9] is difficult to apply directly in our setting. In the 2D case, a time-decay estimate faster than the 1D estimate (1.21) is available: \[\|U(t)\|_{W^{N^{\prime},\infty}(\mathbb{R}^{2})}\leq C(1+t)^{-1}\|V(t)\|_{Z_{1}}.
\tag{1.27}\] Due to (1.8), the estimate of \(\|V(t)\|_{Z_{1}}\) in (1.27) roughly comes down to that of \(\|(1+|x|)^{1+}\Lambda^{v}V\|_{L^{2}(\mathbb{R}^{2})}\) for some suitable number \(\upsilon>0\). To this end, two kinds of regions for \(|x|\geq s^{\theta}\) and \(|x|\leq s^{\theta}\) with \(\theta\in(0,1)\) are divided, respectively. For \(|x|\leq s^{\theta}\), the authors in [9] obtain that for \(\theta\in(0,1)\), \[\begin{split}\|(1+|x|)^{1+}\Lambda^{\upsilon}V(t)\|_{L^{2}(|x| \leq s^{\theta})}&\leq C\int_{0}^{t}(1+s)^{\theta^{+}}\|U(s)\|_{W^ {N^{\prime},\infty}(\mathbb{R}^{2})}^{2}\|U(s)\|_{H^{N}}ds+\mathrm{other\ terms}\\ &\leq C\varepsilon^{3}\int_{0}^{t}(1+s)^{-2+\theta^{+}}ds+ \mathrm{other\ terms}\\ &\leq C\varepsilon^{3}+\mathrm{other\ terms},\end{split} \tag{1.28}\] which yields the smallness estimate of \(\|V(t)\|_{Z_{1}}\) when \(|x|\leq s^{\theta}\). However, in our problem (1.3), if taking the case of \(\alpha=1/2\) as an instance, by \(\|U(t)\|_{W^{N^{\prime},\infty}(\mathbb{R})}\leq C(1+t)^{-1/2}\|V(t)\|_{Z_{1 /2}}\) and \(\|V(t)\|_{Z_{1/2}}\leq C\|(1+|x|)^{1/2+}\Lambda^{\upsilon}V\|_{L^{2}(\mathbb{R})}\), then similarly to (1.28), one has that for \(\theta>0\), \[\begin{split}\|(1+|x|)^{1/2+}\Lambda^{\upsilon}V(T_{1/2, \varepsilon})\|_{L^{2}(|x|\leq s^{\theta})}&\leq C\varepsilon^{3 }\int_{0}^{T_{1/2,\varepsilon}}(1+s)^{\theta^{+}/2-1}ds+\mathrm{other\ terms}\\ &\leq C\varepsilon^{3}(1+T_{1/2,\varepsilon})^{\theta^{+}/2}+ \mathrm{other\ terms}.\end{split} \tag{1.29}\] This means that \(T_{1/2,\varepsilon}\leq\varepsilon^{-\frac{4}{\theta^{+}}}\) holds in order to guarantee the smallness of \(\|V(t)\|_{Z_{1}}\), which is too crude by comparison with \(T_{1/2,\varepsilon}\sim e^{\kappa_{0}/\varepsilon^{2}}\) in (1.5) of Theorem 1.1. This is the reason that we have to give more delicate analysis on the related phase \(\Psi_{bad}\) in (1.24). Based on all the above analysis, the estimate of the \(Z_{\alpha}\)-norm of the cubic nonlinearity in (1.23) will be finished. On the other hand, the treatments for the quartic nonlinearity and other terms in (1.23) are much easier. Finally, the bootstrap assumption of \(\|V(t)\|_{Z_{\alpha}}\) can be closed and then Theorem 1.1 is proved. The paper is organized as follows. In Section 2, some preliminaries such as the Littlewood-Paley decomposition, the definition of \(Z_{\alpha}\)-norm, the linear dispersive estimate and two useful lemmas are illustrated. By the normal form transformations, a reformulation of (1.3) will be derived in Section 3. In Section 4, some energy estimates and the continuity of the \(Z_{\alpha}\)-norm are established. In Section 5, the related \(Z_{\alpha}\)-norm is estimated. In Section 6, we complete the proofs of Theorem 1.1 and Corollaries 1.2-1.3. In addition, the estimates on some resulting multilinear Fourier multipliers are given in Appendix. Preliminaries ### Littlewood-Paley decomposition and definition of \(Z_{\alpha}\)-norm For the integral function \(f(x)\) on \(\mathbb{R}\), its Fourier transformation is defined as \[\hat{f}(\xi):=\mathscr{F}_{x}f(\xi)=\int_{\mathbb{R}}e^{-ix\xi}f(x)dx.\] Choosing a smooth cut-off function \(\psi:\mathbb{R}\to[0,1]\), which equals 1 on \([-5/4,5/4]\) and vanishes outside \([-8/5,8/5]\), we set \[\psi_{k}(x):=\psi(|x|/2^{k})-\psi(|x|/2^{k-1}),\quad k\in\mathbb{ Z},k\geq 0,\] \[\psi_{-1}(x):=1-\sum_{k\geq 0}\psi_{k}(x)=\psi(2|x|),\quad\psi_{I}: =\sum_{k\in I\cap\mathbb{Z}\cap[-1,\infty)}\psi_{k},\] where \(I\) is any interval of \(\mathbb{R}\). 
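We note in passing why the two expressions for \(\psi_{-1}\) coincide: by telescoping, for each fixed \(x\in\mathbb{R}\) one has \(\sum_{k=0}^{K}\psi_{k}(x)=\psi(|x|/2^{K})-\psi(2|x|)\), and \(\psi(|x|/2^{K})=1\) once \(2^{K}\geq|x|\) because \(\psi\equiv 1\) on \([-5/4,5/4]\); hence \(1-\sum_{k\geq 0}\psi_{k}(x)=\psi(2|x|)\) and the family \(\{\psi_{k}\}_{k\geq-1}\) forms a partition of unity on \(\mathbb{R}\).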
Let \(P_{k}\) be the Littlewood-Paley projection onto frequency \(2^{k}\) \[\mathscr{F}(P_{k}f)(\xi):=\psi_{k}(\xi)\mathscr{F}f(\xi),\quad k\in\mathbb{Z},k\geq-1.\] For any interval \(I\), \(P_{I}\) is defined by \[P_{I}f:=\sum_{k\in I\cap\mathbb{Z}\cap[-1,\infty)}P_{k}f.\] We introduce the following dyadic decomposition in the physical space \(\mathbb{R}\): \[(Q_{j}f)(x):=\psi_{j}(x)f(x),\qquad j\in\mathbb{Z},j\geq-1.\] Inspired by [9], we define the \(Z_{\alpha}\)-norm of \(f\) as \[\|f\|_{Z_{\alpha}}:=\sum_{j,k\geq-1}2^{j\alpha+N_{1}k}\|Q_{j}P_{k}f\|_{L^{2}(\mathbb{R})},\qquad\alpha\in(0,1/2],\ N_{1}=12. \tag{2.1}\] Let \[Z_{\alpha}:=\{f\in L^{2}(\mathbb{R}):\|f\|_{Z_{\alpha}}<\infty\}\] and \(\|(g,h)\|_{Z_{\alpha}}:=\|g\|_{Z_{\alpha}}+\|h\|_{Z_{\alpha}}\). Throughout the whole paper, for non-negative quantities \(f\) and \(g\), \(f\lesssim g\) and \(f\gtrsim g\) mean \(f\leq Cg\) and \(f\geq Cg\) with \(C>0\) being a generic constant. ### Linear dispersive estimate **Lemma 2.1** (Linear dispersive estimate).: _For any function \(f\), integer \(k\geq-1\) and \(t\geq 0\), it holds that_ \[\|P_{k}e^{\pm it\Lambda}f\|_{L^{\infty}(\mathbb{R})}\lesssim 2^{3k/2}(1+t)^{-1/2}\|P_{k}f\|_{L^{1}(\mathbb{R})}. \tag{2.2}\] _Moreover, for \(\beta\in[0,1/2]\) and \(j\geq-1\), one has_ \[\|P_{k}e^{\pm it\Lambda}Q_{j}f\|_{L^{\infty}(\mathbb{R})}\lesssim 2^{k/2+2k\beta+j\beta}(1+t)^{-\beta}\|Q_{j}f\|_{L^{2}(\mathbb{R})}. \tag{2.3}\] Proof.: Note that \[\psi_{k}(x)=\psi_{k}(x)\psi_{[[k]]}(x), \tag{2.4}\] where \([[k]]:=[k-1,k+1]\). Then one has \[\begin{split} P_{k}e^{it\Lambda}f(x)&=(2\pi)^{-1}\int_{\mathbb{R}}\mathcal{K}_{k}(t,x-y)P_{k}f(y)dy,\\ \mathcal{K}_{k}(t,x)&:=\int_{\mathbb{R}}e^{i(x\xi+t\langle\xi\rangle)}\psi_{[[k]]}(\xi)d\xi.\end{split} \tag{2.5}\] According to Corollaries 2.36 and 2.38 in [13], for any \(t\geq 1\), it holds that \[\|\mathcal{K}_{k}(t,x)\|_{L^{\infty}(\mathbb{R})}\lesssim 2^{3k/2}t^{-1/2}. \tag{2.6}\] For \(0\leq t\leq 1\), we easily have \[\|\mathcal{K}_{k}(t,x)\|_{L^{\infty}(\mathbb{R})}\lesssim\int_{\mathbb{R}}\psi_{[[k]]}(\xi)d\xi\lesssim 2^{k}.\] This, together with (2.5), (2.6) and Young's inequality, leads to \[\|P_{k}e^{it\Lambda}f\|_{L^{\infty}(\mathbb{R})}\lesssim\|\mathcal{K}_{k}\|_{L^{\infty}(\mathbb{R})}\|P_{k}f\|_{L^{1}(\mathbb{R})}\lesssim 2^{3k/2}(1+t)^{-1/2}\|P_{k}f\|_{L^{1}(\mathbb{R})}.\] In addition, the estimate of \(\|P_{k}e^{-it\Lambda}f\|_{L^{\infty}(\mathbb{R})}\) is analogous. Thus, (2.2) is achieved. Next we turn to the proof of (2.3).
It follows from the Bernstein inequality such as [1, Lemma 2.1] and the unitarity of \(e^{\pm it\Lambda}\) that \[\|P_{k}e^{\pm it\Lambda}Q_{j}f\|_{L^{\infty}(\mathbb{R})}\lesssim 2^{k/2}\|P_{k}e ^{\pm it\Lambda}Q_{j}f\|_{L^{2}(\mathbb{R})}\lesssim 2^{k/2}\|Q_{j}f\|_{L^{2}( \mathbb{R})}.\] On the other hand, (2.2) implies \[\|P_{k}e^{\pm it\Lambda}Q_{j}f\|_{L^{\infty}(\mathbb{R})}\lesssim 2^{3k/2}(1+t )^{-1/2}\|Q_{j}f\|_{L^{1}(\mathbb{R})}\lesssim 2^{3k/2+j/2}(1+t)^{-1/2}\|Q_{j}f\|_{L ^{2}(\mathbb{R})}.\] Therefore, \[\|P_{k}e^{\pm it\Lambda}Q_{j}f\|_{L^{\infty}(\mathbb{R})} =(\|P_{k}e^{\pm it\Lambda}Q_{j}f\|_{L^{\infty}(\mathbb{R})})^{1- 2\beta}(\|P_{k}e^{\pm it\Lambda}Q_{j}f\|_{L^{\infty}(\mathbb{R})})^{2\beta}\] \[\lesssim(2^{k/2})^{1-2\beta}(2^{3k/2+j/2}(1+t)^{-1/2})^{2\beta} \|Q_{j}f\|_{L^{2}(\mathbb{R})}\] \[\lesssim 2^{k/2+2k\beta+j\beta}(1+t)^{-\beta}\|Q_{j}f\|_{L^{2}( \mathbb{R})}.\] **Lemma 2.2**.: _For any function \(f\), integer \(k\geq-1\), \(t\geq 0\) and \(p\in[2,+\infty]\), it holds that_ \[\|P_{k}e^{\pm it\Lambda}Q_{j}f\|_{L^{p}(\mathbb{R})}\lesssim\Big{(}\frac{2^{3 k+j}}{1+t}\Big{)}^{1/2-1/p}\|Q_{j}f\|_{L^{2}(\mathbb{R})}. \tag{2.7}\] Proof.: Note that \[\|P_{k}e^{\pm it\Lambda}f\|_{L^{2}(\mathbb{R})}=\|P_{k}f\|_{L^{2}(\mathbb{R})} \lesssim\|f\|_{L^{2}(\mathbb{R})}. \tag{2.8}\] Applying the Riesz-Thorin interpolation theorem to (2.2) and (2.8) yields \[\|P_{k}e^{\pm it\Lambda}f\|_{L^{p}(\mathbb{R})}\lesssim\Big{(}\frac{2^{3k/2}}{ \sqrt{1+t}}\Big{)}^{1-2/p}\|f\|_{L^{p^{\prime}}(\mathbb{R})},\] where \(\frac{1}{p^{\prime}}=1-\frac{1}{p}\). Therefore, we achieve from (2.4) that \[\|P_{k}e^{\pm it\Lambda}Q_{j}f\|_{L^{p}(\mathbb{R})} \lesssim\Big{(}\frac{2^{3k}}{1+t}\Big{)}^{1/2-1/p}\|Q_{j}f\|_{L^{p^ {\prime}}(\mathbb{R})}\] \[\lesssim\Big{(}\frac{2^{3k}}{1+t}\Big{)}^{1/2-1/p}\|\psi_{[[j]]} Q_{j}f\|_{L^{p^{\prime}}(\mathbb{R})}\] \[\lesssim\Big{(}\frac{2^{3k}}{1+t}\Big{)}^{1/2-1/p}\|\psi_{[[j]]} \|_{L^{2p/(p-2)}(\mathbb{R})}\|Q_{j}f\|_{L^{2}(\mathbb{R})}\] \[\lesssim\Big{(}\frac{2^{3k}}{1+t}\Big{)}^{1/2-1/p}2^{j(1/2-1/p)} \|Q_{j}f\|_{L^{2}(\mathbb{R})},\] which derives (2.7). ### Two technical Lemmas **Lemma 2.3**.: _For \(\mu_{1},\mu_{2},\mu_{3}=\pm\), define_ \[\Phi_{\mu_{1}\mu_{2}}(\xi_{1},\xi_{2}) :=-\Lambda(\xi_{1}+\xi_{2})+\mu_{1}\Lambda(\xi_{1})+\mu_{2} \Lambda(\xi_{2}), \tag{2.9}\] \[\Phi_{\mu_{1}\mu_{2}\mu_{3}}(\xi_{1},\xi_{2},\xi_{3}) :=-\Lambda(\xi_{1}+\xi_{2}+\xi_{3}))+\mu_{1}\Lambda(\xi_{1})+\mu _{2}\Lambda(\xi_{2})+\mu_{3}\Lambda(\xi_{3}).\] _For \(\mu_{1},\mu_{2}=\pm\) and \(l\geq 1\), one has_ \[|\Phi_{\mu_{1}\mu_{2}}^{-1}(\xi_{1},\xi_{2})|\lesssim 1+\min\{|\xi_{1}+\xi_{2} |,|\xi_{1}|,|\xi_{2}|\},|\partial_{\xi_{1},\xi_{2}}^{l}\Phi_{\mu_{1}\mu_{2}}( \xi_{1},\xi_{2})|\lesssim\min\{1,|\Phi_{\mu_{1}\mu_{2}}(\xi_{1},\xi_{2})|\} \tag{2.10}\] _and_ \[|\partial_{\xi_{1},\xi_{2}}^{l}\Phi_{\mu_{1}\mu_{2}}^{-1}(\xi_{1},\xi_{2})| \lesssim|\Phi_{\mu_{1}\mu_{2}}^{-1}(\xi_{1},\xi_{2})|. \tag{2.11}\] _For \((\mu_{1},\mu_{2},\mu_{3})\in A_{\Phi}^{good}:=\{(+++),(+--),(---)\}\), one has_ \[|\Phi_{\mu_{1}\mu_{2}\mu_{3}}^{-1}(\xi_{1},\xi_{2},\xi_{3})|\lesssim 1+\min\{| \xi_{1}+\xi_{2}+\xi_{3}|,|\xi_{1}|,|\xi_{2}|,|\xi_{3}|\}. \tag{2.12}\] Proof.: The proof of (2.10) can be found in Lemma 5.1 of [9]. Meanwhile, (2.11) is a consequence of (2.10). For inequality (2.12), see (4.47) in [9]. 
Note that although all these related inequalities in [9] are derived for \(\xi_{1},\xi_{2},\xi_{3}\in\mathbb{R}^{2}\), it is easy to check that these inequalities still hold for \(\xi_{1},\xi_{2},\xi_{3}\in\mathbb{R}\). **Lemma 2.4** (Holder inequality).: _For any functions \(f_{1},f_{2},f_{3},f_{4}\) on \(\mathbb{R}\) and \(p,q_{1},q_{2},q_{3},q_{4}\in[1,\infty]\), one has_ \[\Big{\|}\iint_{\mathbb{R}^{2}}K(x-x_{1},x-x_{2})f_{1}(x_{1})f_{2} (x_{2})dx_{1}dx_{2}\Big{\|}_{L^{p}_{x}(\mathbb{R})}\] \[\leq \|K(\cdot,\cdot)\|_{L^{1}(\mathbb{R}^{2})}\|f_{1}\|_{L^{q_{1}}}\| f_{2}\|_{L^{q_{2}}},\qquad\frac{1}{p}=\frac{1}{q_{1}}+\frac{1}{q_{2}},\] \[\Big{\|}\iiint_{\mathbb{R}^{3}}K(x-x_{1},x-x_{2},x-x_{3})f_{1}(x_{ 1})f_{2}(x_{2})f_{3}(x_{3})dx_{1}dx_{2}dx_{3}\Big{\|}_{L^{p}_{x}(\mathbb{R})}\] \[\leq \|K(\cdot,\cdot,\cdot)\|_{L^{1}(\mathbb{R}^{3})}\|f_{1}\|_{L^{q_ {1}}}\|f_{2}\|_{L^{q_{2}}}\|f_{3}\|_{L^{q_{3}}},\qquad\frac{1}{p}=\frac{1}{q_{1} }+\frac{1}{q_{2}}+\frac{1}{q_{3}},\] \[\Big{\|}\iiint_{\mathbb{R}^{4}}K(x-x_{1},x-x_{2},x-x_{3},x-x_{4}) f_{1}(x_{1})f_{2}(x_{2})f_{3}(x_{3})f_{4}(x_{4})dx_{1}dx_{2}dx_{3}dx_{4}\Big{\|}_{L^{p}_{ x}(\mathbb{R})}\] \[\leq \|K(\cdot,\cdot,\cdot)\|_{L^{1}(\mathbb{R}^{4})}\|f_{1}\|_{L^{q_ {1}}}\|f_{2}\|_{L^{q_{2}}}\|f_{3}\|_{L^{q_{3}}}\|f_{4}\|_{L^{q_{4}}},\qquad\frac{ 1}{p}=\frac{1}{q_{1}}+\frac{1}{q_{2}}+\frac{1}{q_{3}}+\frac{1}{q_{4}}. \tag{2.13}\] Proof.: (2.13) can be directly derived from the Minkowski inequality and the Holder inequality, or see Lemma 2.3 in [14]. Denote \[\begin{split}\mathcal{X}_{k}&=\mathcal{X}_{k}^{1} \cup\mathcal{X}_{k}^{2},\qquad\mathcal{Y}_{k}=\mathcal{Y}_{k}^{1}\cup\mathcal{ Y}_{k}^{2},\\ \mathcal{X}_{k}^{1}&=\{(k_{1},k_{2})\in\mathbb{Z}^{ 2}:k_{1},k_{2}\geq-1,|\max\{k_{1},k_{2}\}-k|\leq 8\},\\ \mathcal{X}_{k}^{2}&=\{(k_{1},k_{2})\in\mathbb{Z}^{ 2}:k_{1},k_{2}\geq-1,\max\{k_{1},k_{2}\}\geq k+8,|k_{1}-k_{2}|\leq 8\},\\ \mathcal{Y}_{k}^{1}&=\{(k_{1},k_{2},k_{3})\in \mathbb{Z}^{3}:k_{1},k_{2},k_{3}\geq-1,|\max\{k_{1},k_{2},k_{3}\}-k|\leq 4\}, \\ \mathcal{Y}_{k}^{2}&=\{(k_{1},k_{2},k_{3})\in \mathbb{Z}^{3}:k_{1},k_{2},k_{3}\geq-1,k+4\leq\max\{k_{1},k_{2},k_{3}\}\leq \mathrm{med}\{k_{1},k_{2},k_{3}\}+4\}.\end{split} \tag{2.14}\] As in [9, page 784,799], if \(P_{k}(P_{k_{1}}fP_{k_{2}}g)\neq 0\) and \(P_{k}(P_{k_{1}}fP_{k_{2}}gP_{k_{3}}h)\neq 0\), one then has \((k_{1},k_{2})\in\mathcal{X}_{k}\) and \((k_{1},k_{2},k_{3})\in\mathcal{Y}_{k}\), respectively. ## 3 Reduction ### First normal form transformation Based on (2.10), we are devoted to transforming the quadratic nonlinearity in (1.3) into the cubic one. Denote \[U_{\pm}:=(\partial_{t}\pm i\Lambda)u,\quad U:=U_{+}. 
\tag{3.1}\] For functions \(m_{2}(\xi_{1},\xi_{2}):\mathbb{R}^{2}\to\mathbb{C}\) and \(m_{3}(\xi_{1},\xi_{2},\xi_{3}):\mathbb{R}^{3}\to\mathbb{C}\), define the following multi-linear pseudoproduct operators: \[\begin{split} T_{m_{2}}(f,g)&:=\mathscr{F}_{\xi}^{ -1}\Big{(}(2\pi)^{-2}\int_{\mathbb{R}}m_{2}(\xi-\eta,\eta)\hat{f}(\xi-\eta) \hat{g}(\eta)d\eta\Big{)},\\ T_{m_{3}}(f,g,h)&:=\mathscr{F}_{\xi}^{-1}\Big{(}(2 \pi)^{-3}\iint_{\mathbb{R}^{2}}m_{3}(\xi-\eta,\eta-\zeta,\zeta)\hat{f}(\xi- \eta)\hat{g}(\eta-\zeta)\hat{h}(\zeta)d\eta d\zeta\Big{)}.\end{split} \tag{3.2}\] Then (1.3) is reduced to \[(\partial_{t}-i\Lambda)U=\mathcal{N}(U),\qquad\partial_{t}V(t,x)=e^{-it \Lambda}\mathcal{N}(U), \tag{3.3}\] where \(V=V_{+}\) and \(V_{-}\) are defined in (1.12), \(\mathcal{N}(U)\) is given by \[\mathcal{N}(U):=\sum_{\mu_{1},\mu_{2}=\pm}T_{a_{\mu_{1}\mu_{2}}}(U_{\mu_{1}},U _{\mu_{2}})+\sum_{\mu_{1},\mu_{2},\mu_{3}=\pm}T_{b_{\mu_{1}\mu_{2}\mu_{3}}}(U_{ \mu_{1}},U_{\mu_{2}},U_{\mu_{3}})+\mathcal{N}_{4}(U), \tag{3.4}\] here \(a_{\mu_{1}\mu_{2}}=a_{\mu_{1}\mu_{2}}(\xi_{1},\xi_{2})\) is a linear combination of the products of the following terms \[1,\frac{1}{\Lambda(\xi_{1})},\frac{1}{\Lambda(\xi_{2})},\frac{\xi_{1}}{\Lambda( \xi_{1})},\frac{\xi_{2}}{\Lambda(\xi_{2})}, \tag{3.5}\] \(b_{\mu_{1}\mu_{2}\mu_{3}}=b_{\mu_{1}\mu_{2}\mu_{3}}(\xi_{1},\xi_{2},\xi_{3})\) is a linear combination of the products of \[1,\frac{1}{\Lambda(\xi_{1})},\frac{1}{\Lambda(\xi_{2})},\frac{1}{\Lambda(\xi_{3 })},\frac{\xi_{1}}{\Lambda(\xi_{1})},\frac{\xi_{2}}{\Lambda(\xi_{2})},\frac{ \xi_{3}}{\Lambda(\xi_{3})}, \tag{3.6}\] and the nonlinearity \(\mathcal{N}_{4}(U)\) is at least quartic in \(U\). Applying the Fourier transformation to (3.3) and solving the resulting equation yield \[\begin{split}\hat{V}(t,\xi)&=\hat{V}(0,\xi)+\int_{0} ^{t}e^{-is\Lambda(\xi)}\widehat{\mathcal{N}_{4}(U)}(s,\xi)ds\\ &\quad+\sum_{\mu_{1},\mu_{2}=\pm}\int_{0}^{t}\int_{\mathbb{R}}e^{ is\Phi_{\mu_{1}\mu_{2}}}a_{\mu_{1}\mu_{2}}\hat{V}_{\mu_{1}}(s,\xi-\eta)\hat{V}_{\mu_{2} }(s,\eta)d\eta ds,\\ &\quad+\sum_{\mu_{1},\mu_{2},\mu_{3}=\pm}\int_{0}^{t}\iint_{ \mathbb{R}^{2}}e^{is\Phi_{\mu_{1}\mu_{2}\mu_{3}}}b_{\mu_{1}\mu_{2}\mu_{3}}\hat {V}_{\mu_{1}}(s,\xi-\eta)\hat{V}_{\mu_{2}}(s,\eta-\zeta)\hat{V}_{\mu_{3}}(s, \zeta)d\eta d\zeta ds,\end{split} \tag{3.7}\] where \(\Phi_{\mu_{1}\mu_{2}}\) and \(\Phi_{\mu_{1}\mu_{2}\mu_{3}}\) are defined by (2.9). 
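Before carrying this out, we mention that the non-resonance bound (2.10), which drives the integration by parts below, is easy to test numerically. The following minimal Python sketch (not part of the proof, only a sanity check on random samples) evaluates \(|\Phi_{\mu_{1}\mu_{2}}|\cdot(1+\min\{|\xi_{1}+\xi_{2}|,|\xi_{1}|,|\xi_{2}|\})\) for all four sign choices; a strictly positive minimum over the samples is consistent with (2.10), though of course it does not prove it.

```python
import numpy as np

rng = np.random.default_rng(0)

def Lam(x):
    # Lambda(xi) = sqrt(1 + xi^2), the Klein-Gordon dispersion relation
    return np.sqrt(1.0 + x ** 2)

def Phi(mu1, mu2, xi1, xi2):
    # quadratic phase Phi_{mu1 mu2} from (2.9)
    return -Lam(xi1 + xi2) + mu1 * Lam(xi1) + mu2 * Lam(xi2)

n = 10 ** 6
xi1 = rng.uniform(-100.0, 100.0, n)
xi2 = rng.uniform(-100.0, 100.0, n)
weight = 1.0 + np.minimum(np.abs(xi1 + xi2), np.minimum(np.abs(xi1), np.abs(xi2)))

for mu1 in (+1, -1):
    for mu2 in (+1, -1):
        # a strictly positive minimum of |Phi| * (1 + min{...}) on the samples
        # is consistent with the non-resonance bound (2.10)
        ratio = np.abs(Phi(mu1, mu2, xi1, xi2)) * weight
        print(f"mu1={mu1:+d}, mu2={mu2:+d}: min |Phi|*(1+min) = {ratio.min():.3f}")
```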
Thanks to (2.10), through integrating by parts in \(s\) for the second line of (3.7), we arrive at \[\begin{split}\hat{V}(t,\xi)&=\hat{V}(0,\xi)+\int_{0} ^{t}e^{-is\Lambda(\xi)}\widehat{\mathcal{N}_{4}(U)}(s,\xi)ds\\ &\quad-i\sum_{\mu_{1},\mu_{2}=\pm}\mathscr{F}(e^{-is\Lambda}T_{ \Phi_{\mu_{1}\mu_{2}}^{-1}a_{\mu_{1}\mu_{2}}}(U_{\mu_{1}},U_{\mu_{2}}))(s,\xi) \Big{|}_{s=0}^{t}\\ &\quad+i\sum_{\mu_{1},\mu_{2}=\pm}\int_{0}^{t}\int_{\mathbb{R}}e^ {is\Phi_{\mu_{1}\mu_{2}}}\Phi_{\mu_{1}\mu_{2}}^{-1}a_{\mu_{1}\mu_{2}}\frac{d}{ ds}\Big{(}\hat{V}_{\mu_{1}}(s,\xi-\eta)\hat{V}_{\mu_{2}}(s,\eta)\Big{)}d\eta ds \\ &\quad+\sum_{\mu_{1},\mu_{2},\mu_{3}=\pm}\int_{0}^{t}\iint_{ \mathbb{R}^{2}}e^{is\Phi_{\mu_{1}\mu_{2}\mu_{3}}}b_{\mu_{1}\mu_{2}\mu_{3}}\hat {V}_{\mu_{1}}(s,\xi-\eta)\hat{V}_{\mu_{2}}(s,\eta-\zeta)\hat{V}_{\mu_{3}}(s, \zeta)d\eta d\zeta ds.\end{split}\] Returning to the physical space, one has \[\begin{split}& V(t,x)=V(0,x)+\int_{0}^{t}e^{-is\Lambda}\mathcal{ N}_{4}(U)ds-i\sum_{\mu_{1},\mu_{2}=\pm}e^{-is\Lambda}T_{\Phi_{\mu_{1}\mu_{2}}^{-1}a_{ \mu_{1}\mu_{2}}}(U_{\mu_{1}},U_{\mu_{2}})\Big{|}_{s=0}^{t}\\ &\quad+i\sum_{\mu_{1},\mu_{2}=\pm}\int_{0}^{t}e^{-is\Lambda} \Big{\{}T_{\Phi_{\mu_{1}\mu_{2}}^{-1}a_{\mu_{1}\mu_{2}}}(e^{is\mu_{1}\Lambda} \partial_{t}V_{\mu_{1}},U_{\mu_{2}})+T_{\Phi_{\mu_{1}\mu_{2}}^{-1}a_{\mu_{1} \mu_{2}}}(U_{\mu_{1}},e^{is\mu_{2}\Lambda}\partial_{t}V_{\mu_{2}})\Big{\}}ds \\ &\quad+\sum_{\mu_{1},\mu_{2},\mu_{3}=\pm}\int_{0}^{t}e^{-is \Lambda}T_{b_{\mu_{1}\mu_{2}\mu_{3}}}(U_{\mu_{1}},U_{\mu_{2}},U_{\mu_{3}})ds. \end{split} \tag{3.8}\] Set \[\begin{split}\mathcal{N}_{3}(U)&=\mathcal{N}_{4}(U)+ \sum_{\mu_{1},\mu_{2},\mu_{3}=\pm}T_{b_{\mu_{1}\mu_{2}\mu_{3}}}(U_{\mu_{1}},U_{ \mu_{2}},U_{\mu_{3}}),\\ \mathcal{N}_{3,+}(U)&=\mathcal{N}_{3}(U),\qquad \mathcal{N}_{3,-}(U)=\overline{\mathcal{N}_{3}(U)}.\end{split} \tag{3.9}\] For \(\nu=\pm\), \[\partial_{t}V_{\nu}=e^{-it\nu\Lambda}(\mathcal{N}_{3,\nu}(U)+\sum_{\mu_{1}, \mu_{2}=\pm}T_{a_{\nu_{1}\mu_{2}}^{I}}(U_{\mu_{1}},U_{\mu_{2}})), \tag{3.10}\] where \[a_{+\mu_{1}\mu_{2}}^{I}=a_{\mu_{1}\mu_{2}},\qquad a_{-\mu_{1}\mu_{2}}^{I}(\xi_{ 1},\xi_{2})=\overline{a_{-\mu_{1},-\mu_{2}}(-\xi_{1},-\xi_{2})}. 
\tag{3.11}\] Substituting (3.10) into (3.8) derives \[\begin{split} V(t,x)&=V(0,x)+\int_{0}^{t}e^{-is\Lambda} \mathcal{N}_{4}^{I}(U)ds\\ &\quad-i\sum_{\mu_{1},\mu_{2}=\pm}e^{-is\Lambda}T_{\Phi_{\mu_{1} \mu_{2}}^{-1}a_{\mu_{1}\mu_{2}}}(U_{\mu_{1}},U_{\mu_{2}})(s,x)\Big{|}_{s=0}^{t }\\ &\quad+\sum_{(\mu_{1},\mu_{2},\mu_{3})\in A_{\Phi}}\int_{0}^{t}e^{ -is\Lambda}T_{m_{\mu_{1}\mu_{2}\mu_{3}}}(U_{\mu_{1}},U_{\mu_{2}},U_{\mu_{3}})ds,\end{split} \tag{3.12}\] where \(A_{\Phi}:=\{(+++),(++-),(---),(---)\}\), \[\mathcal{N}_{4}^{I}(U)=\mathcal{N}_{4}(U)+\sum_{\mu,\nu=\pm}(T_{\Phi_{\mu\nu}^ {-1}a_{\mu\nu}}(\mathcal{N}_{3,\mu}(U),U_{\nu})+T_{\Phi_{\mu\nu}^{-1}a_{\mu \nu}}(U_{\mu},\mathcal{N}_{3,\nu}(U))) \tag{3.13}\] and \[m_{\mu_{1}\mu_{2}\mu_{3}}=m_{\mu_{1}\mu_{2}\mu_{3}}^{I}+m_{\mu_{1}\mu_{2}\mu_ {3}}^{II} \tag{3.14}\] with \[\begin{split}& m_{+++}^{I}(\xi_{1},\xi_{2},\xi_{3})=b_{+++}^{I}( \xi_{1},\xi_{2},\xi_{3}),\\ & m_{++-}^{I}(\xi_{1},\xi_{2},\xi_{3})=b_{++-}^{I}(\xi_{1},\xi_{2},\xi_{3})+b_{+-+}^{I}(\xi_{1},\xi_{3},\xi_{2})+b_{-++}^{I}(\xi_{3},\xi_{2},\xi_ {1}),\\ & m_{+--}^{I}(\xi_{1},\xi_{2},\xi_{3})=b_{+--}^{I}(\xi_{1},\xi_{2 },\xi_{3})+b_{-+-}^{I}(\xi_{2},\xi_{1},\xi_{3})+b_{--+}^{I}(\xi_{3},\xi_{2}, \xi_{1}),\\ & m_{---}^{I}(\xi_{1},\xi_{2},\xi_{3})=b_{---}^{I}(\xi_{1},\xi_{2 },\xi_{3}),\\ & m_{+++}^{II}(\xi_{1},\xi_{2},\xi_{3})=b_{+++}(\xi_{1},\xi_{2}, \xi_{3}),\\ & m_{++-}^{II}(\xi_{1},\xi_{2},\xi_{3})=b_{++-}(\xi_{1},\xi_{2}, \xi_{3})+b_{+-+}(\xi_{1},\xi_{3},\xi_{2})+b_{-++}(\xi_{3},\xi_{2},\xi_{1}),\\ & m_{+--}^{II}(\xi_{1},\xi_{2},\xi_{3})=b_{+--}(\xi_{1},\xi_{2}, \xi_{3})+b_{-+-}(\xi_{2},\xi_{1},\xi_{3})+b_{--+}(\xi_{3},\xi_{2},\xi_{1}),\\ & m_{---}^{II}(\xi_{1},\xi_{2},\xi_{3})=b_{---}(\xi_{1},\xi_{2}, \xi_{3}),\\ & b_{\sigma\mu_{1}\mu_{2}}^{I}(\xi_{1},\xi_{2},\xi_{3})=i\sum_{ \mu=\pm}(\Phi_{\mu\sigma}^{-1}a_{\mu\sigma})(\xi_{2}+\xi_{3},\xi_{1})a_{\mu\mu_ {1}\mu_{2}}^{I}(\xi_{2},\xi_{3})\\ &\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad+i\sum_{ \nu=\pm}(\Phi_{\sigma\nu}^{-1}a_{\sigma\nu})(\xi_{1},\xi_{2}+\xi_{3})a_{\nu\mu _{1}\mu_{2}}^{I}(\xi_{2},\xi_{3}),\quad\sigma,\mu_{1},\mu_{2}=\pm.\end{split} \tag{3.15}\] ### Partial second normal form transformation We require the second normal form to transform some parts of the cubic nonlinearity in (3.12) into the quartic one. Note that if \(\max\{k_{1},k_{2}\}\leq k_{3}-O(1)\) with \(O(1)\) being a fixed and large enough number, one then has \[\begin{split}|\Phi_{++-}(\xi_{1},\xi_{2},\xi_{3})|& =|-\Lambda(\xi_{1}+\xi_{2}+\xi_{3})+\Lambda(\xi_{1})+\Lambda(\xi_{2 })-\Lambda(\xi_{3})|\\ &\geq\Lambda(\xi_{3})/2\approx 2^{k_{3}},\end{split} \tag{3.16}\] where \(|\xi_{l}|\approx 2^{k_{l}},l=1,2,3\). 
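For completeness, we sketch why (3.16) holds. Since \(|\Lambda^{\prime}|\leq 1\) and \(\Lambda(y)\leq 1+|y|\), one has \(\Lambda(\xi_{1}+\xi_{2}+\xi_{3})\geq\Lambda(\xi_{3})-|\xi_{1}|-|\xi_{2}|\) and \(\Lambda(\xi_{1})+\Lambda(\xi_{2})\leq 2+|\xi_{1}|+|\xi_{2}|\), so that
\[
-\Phi_{++-}(\xi_{1},\xi_{2},\xi_{3})=\Lambda(\xi_{1}+\xi_{2}+\xi_{3})+\Lambda(\xi_{3})-\Lambda(\xi_{1})-\Lambda(\xi_{2})\geq 2\Lambda(\xi_{3})-2\big(1+|\xi_{1}|+|\xi_{2}|\big).
\]
When \(\max\{k_{1},k_{2}\}\leq k_{3}-O(1)\) (which, because \(k_{1},k_{2}\geq-1\), also forces \(2^{k_{3}}\gtrsim 2^{O(1)}\)), one has \(1+|\xi_{1}|+|\xi_{2}|\lesssim 2^{k_{3}-O(1)}\lesssim 2^{-O(1)}\Lambda(\xi_{3})\), and hence \(-\Phi_{++-}\geq(2-C2^{-O(1)})\Lambda(\xi_{3})\geq\Lambda(\xi_{3})/2\) once the fixed number \(O(1)\) is chosen large enough, which gives (3.16).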
Acting \(P_{k}\) to (3.12), together with (2.14), yields that \[\begin{split} P_{k}V&(t,x)=P_{k}V(0,x)+\int_{0}^{t}e ^{-is\Lambda}P_{k}\mathcal{N}_{4}^{I}(U)ds-i\sum_{\mu_{1},\mu_{2}=\pm}e^{-is \Lambda}P_{k}T_{\Phi_{\mu_{1}\mu_{2}}^{-1}a_{\mu_{1}\mu_{2}}}(U_{\mu_{1}},U_{ \mu_{2}})\Big{|}_{s=0}^{t}\\ &+\sum_{(\mu_{1},\mu_{2},\mu_{3})\in A_{\Phi}^{good}}\sum_{(k_{1}, k_{2},k_{3})\in\mathcal{Y}_{k}}\int_{0}^{t}e^{-is\Lambda}P_{k}T_{m_{\mu_{1}\mu_{2} \mu_{3}}}(P_{k_{1}}U_{\mu_{1}},P_{k_{2}}U_{\mu_{2}},P_{k_{3}}U_{\mu_{3}})ds\\ &+\sum_{\begin{subarray}{c}(k_{1},k_{2},k_{3})\in\mathcal{Y}_{k},\\ \max\{k_{1},k_{2}\}\leq k_{3}-O(1)\end{subarray}}\int_{0}^{t}e^{-is\Lambda}P_{k} T_{m_{++-}}(P_{k_{1}}U,P_{k_{2}}U,P_{k_{3}}U_{-})ds\\ &+\sum_{\begin{subarray}{c}(k_{1},k_{2},k_{3})\in\mathcal{Y}_{k},\\ \max\{k_{1},k_{2}\}\geq k_{3}-O(1)\end{subarray}}\int_{0}^{t}e^{-is\Lambda}P_{k} T_{m_{++-}}(P_{k_{1}}U,P_{k_{2}}U,P_{k_{3}}U_{-})ds,\end{split} \tag{3.17}\] where \(A_{\Phi}^{good}:=\{(+++),(+--),(---)\}\). Analogously to (3.8), from (2.12) and (3.16), we can transform the cubic nonlinearities in the second and third lines of (3.17) into the corresponding quartic form. Then \[P_{k}V(t,x)=P_{k}V(0,x)+\mathcal{B}_{k}+\int_{0}^{t}(\mathcal{C}_{k}(s)+ \mathcal{Q}_{k}(s)+P_{k}e^{-is\Lambda}\mathcal{N}_{4}^{I}(U))ds, \tag{3.18}\] where the boundary term \(\mathcal{B}_{k}\), the cubic nonlinearity \(\mathcal{C}_{k}(s)\) and the quartic nonlinearity \(\mathcal{Q}_{k}(s)\) are respectively \[\begin{split}\mathcal{B}_{k}&:=-i\sum_{\mu_{1}, \mu_{2}=\pm}e^{-is\Lambda}P_{k}T_{\Phi_{\mu_{1}\mu_{2}}^{-1}a_{\mu_{1}\mu_{2}} }(U_{\mu_{1}},U_{\mu_{2}})\Big{|}_{s=0}^{t}\\ &-i\sum_{(\mu_{1},\mu_{2},\mu_{3})\in A_{\Phi}^{good}}\sum_{(k_{1 },k_{2},k_{3})\in\mathcal{Y}_{k}}e^{-is\Lambda}P_{k}T_{\Phi_{\mu_{1}\mu_{2} \mu_{3}}^{-1}m_{\mu_{1}\mu_{2}\mu_{3}}}(P_{k_{1}}U_{\mu_{1}},P_{k_{2}}U_{\mu_ {2}},P_{k_{3}}U_{\mu_{3}})\Big{|}_{s=0}^{t}\\ &-i\sum_{\begin{subarray}{c}(k_{1},k_{2},k_{3})\in\mathcal{Y}_{k},\\ \max\{k_{1},k_{2}\}\leq k_{3}-O(1)\end{subarray}}e^{-is\Lambda}P_{k}T_{\Phi_{ ++-}^{-1}m_{++-}}(P_{k_{1}}U,P_{k_{2}}U,P_{k_{3}}U_{-})\Big{|}_{s=0}^{t},\end{split} \tag{3.19}\] \[\mathcal{C}_{k}(t):=\sum_{\begin{subarray}{c}(k_{1},k_{2},k_{3})\in\mathcal{Y} _{k},\\ (\mu_{1},\mu_{2},\mu_{3})\in A_{\Phi}^{good}\end{subarray}}e^{-it\Lambda}P_{k} T_{m_{++-}}(P_{k_{1}}U,P_{k_{2}}U,P_{k_{3}}U_{-}), \tag{3.20}\] \[\begin{split}\mathcal{Q}_{k}(t)&:=i\sum_{ \begin{subarray}{c}(k_{1},k_{2},k_{3})\in\mathcal{Y}_{k},\\ (\mu_{1},\mu_{2},\mu_{3})\in A_{\Phi}^{good}\end{subarray}}P_{k}e^{-it\Lambda} \Big{\{}T_{\Phi_{\mu_{1}\mu_{2}\mu_{3}}^{-1}m_{\mu_{1}\mu_{2}\mu_{3}}}(e^{it \mu_{1}\Lambda}P_{k_{1}}\partial_{t}V_{\mu_{1}},P_{k_{2}}U_{\mu_{2}},P_{k_{3}}U _{\mu_{3}})\\ &+T_{\Phi_{\mu_{1}\mu_{2}\mu_{3}}^{-1}m_{\mu_{1}\mu_{2}\mu_{3}}}(P _{k_{1}}U_{\mu_{1}},P_{k_{2}}U_{\mu_{2}},e^{it\mu_{3}\Lambda}P_{k_{3}}\partial_{t }V_{\mu_{3}})\Big{\}}\\ &+i\sum_{\begin{subarray}{c}(k_{1},k_{2},k_{3})\in\mathcal{Y}_{k},\\ \max\{k_{1},k_{2}\}\leq k_{3}-O(1)\end{subarray}}P_{k}e^{-it\Lambda}\Big{\{}T_{ \Phi_{++-}^{-1}m_{++-}}(e^{it\Lambda}P_{k_{1}}\partial_{t}V,P_{k_{2}}U,P_{k_{3} }U_{-}) \tag{3.21}\] Energy estimate and continuity of \(Z_{\alpha}\)-norm ### Energy estimate **Lemma 4.1**.: _Let \(N\geq 27\). 
Suppose that \(U\) is defined by (3.1) and \(\left\|U(t)\right\|_{H^{N}(\mathbb{R})}\) is small, one then has that for \(t\geq 0\),_ \[\begin{split}\left\|U(t)\right\|_{H^{N}(\mathbb{R})}& \lesssim\left\|U(0)\right\|_{H^{N}(\mathbb{R})}+\left\|U(0)\right\|_{H^{N}( \mathbb{R})}^{2}+\left\|U(0)\right\|_{H^{N}(\mathbb{R})}^{3}\\ &+\int_{0}^{t}\sum_{k\geq-1}2^{k(7+1/4)}\|P_{k}U(s)\|_{L^{\infty} }\|U(s)\|_{W^{1,\infty}}\|U(s)\|_{H^{N}(\mathbb{R})}ds.\end{split} \tag{4.1}\] Proof.: By (2.14), (3.12) and the unitarity of \(e^{-is\Lambda}\), we have \[\begin{split}\|P_{k}(V(t)-V(0))\|_{L^{2}}& \lesssim\sum_{(k_{1},k_{2})\in\mathcal{X}_{k}}(J_{kk_{1}k_{2}}(0)+J_{kk_{1}k_ {2}}(t))\\ &\quad+\int_{0}^{t}(\|P_{k}\mathcal{N}_{4}^{I}(U)\|_{L^{2}}+\sum_ {(k_{1},k_{2},k_{3})\in\mathcal{Y}_{k}}J_{kk_{1}k_{2}k_{3}}(s))ds,\end{split} \tag{4.2}\] where \[\begin{split} J_{kk_{1}k_{2}}(t)&:=\sum_{\mu_{1}, \mu_{2}=\pm}\|P_{k}T_{\Phi_{\mu_{1}\mu_{2}}^{-1}a_{\mu_{1}\mu_{2}}}(P_{k_{1}}U _{\mu_{1}},P_{k_{2}}U_{\mu_{2}}))(t)\|_{L^{2}},\\ J_{kk_{1}k_{2}k_{3}}(s)&:=\sum_{(\mu_{1},\mu_{2},\mu _{3})\in A_{\Phi}}\|P_{k}T_{m_{\mu_{1}\mu_{2}\mu_{3}}}(P_{k_{1}}U_{\mu_{1}},P_ {k_{2}}U_{\mu_{2}},P_{k_{3}}U_{\mu_{3}})(s)\|_{L^{2}}.\end{split} \tag{4.3}\] **(A) Estimate of \(J_{kk_{1}k_{2}}(t)\)** It only suffices to deal with the case of \(k_{1}\leq k_{2}\) in \(\mathcal{X}_{k}\) for \(J_{kk_{1}k_{2}}(t)\) since the treatment on the case of \(k_{1}\geq k_{2}\) is completely similar. Applying (A.1a) and the Bernstein inequality yields \[\begin{split} J_{kk_{1}k_{2}}(t)&\lesssim\sum_{\mu_{ 1},\mu_{2}=\pm}\|T_{\Phi_{\mu_{1}\mu_{2}}^{-1}a_{\mu_{1}\mu_{2}}}(P_{k_{1}}U_{ \mu_{1}},P_{k_{2}}U_{\mu_{2}}))(t)\|_{L^{2}}\\ &\lesssim 2^{5k_{1}}\|P_{k_{1}}U(t)\|_{L^{\infty}}\|P_{k_{2}}U(t)\|_{L^{2}} \\ &\lesssim 2^{k_{1}(5+\frac{1}{2})}\|P_{k_{1}}U(t)\|_{L^{2}}\|P_{k_{2}} U(t)\|_{L^{2}}.\end{split}\] Then \[\begin{split}&\left\|2^{kN}\sum_{(k_{1},k_{2})\in\mathcal{X}_{k}}J_{ kk_{1}k_{2}}(t)\right\|_{\ell_{k}^{2}}\\ &\lesssim\Big{\|}\sum_{(k_{1},k_{2})\in\mathcal{X}_{k}^{1}}2^{k_{ 2}N}J_{kk_{1}k_{2}}(t)\Big{\|}_{\ell_{k}^{2}}+\Big{\|}\sum_{(k_{1},k_{2})\in \mathcal{X}_{k}^{2}}2^{k_{2}(N-1/8)-k/8+k_{1}/4}J_{kk_{1}k_{2}}(t)\Big{\|}_{ \ell_{k}^{2}}\\ &\lesssim\sum_{k_{1}\geq-1}2^{k_{1}(5+\frac{1}{2})}\|P_{k_{1}}U(t )\|_{L^{2}}\Big{\|}2^{k_{2}N}\|P_{k_{2}}U(t)\|_{L^{2}}\Big{\|}_{\ell_{k_{2}}^{2} }\\ &\quad+\sum_{k_{1}\geq-1}2^{k_{1}(5+\frac{3}{4})}\|P_{k_{1}}U(t )\|_{L^{2}}\|U(t)\|_{H^{N}}\\ &\lesssim\left\|U(t)\right\|_{H^{N}}^{2},\end{split} \tag{4.4}\] where \(\|A_{k}\|_{\ell_{k}^{p}}=(\sum_{k\geq-1}A_{k}^{p})^{1/p}\) with \(p\geq 1\). **(B) Estimate of \(J_{kk_{1}k_{2}k_{3}}(s)\)** Without loss of generality, \(k_{1}\leq k_{2}\leq k_{3}\) is assumed in \(J_{kk_{1}k_{2}k_{3}}(s)\). It follows from (A.7) that \[J_{kk_{1}k_{2}k_{3}}(s)\lesssim 2^{7k_{2}}\|P_{k_{1}}U(s)\|_{L^{\infty}}\|P_{k_{2 }}U(s)\|_{L^{\infty}}\|P_{k_{3}}U(s)\|_{L^{2}}.\] Similarly to (4.4), one can achieve \[\Big{\|}2^{kN}\sum_{(k_{1},k_{2},k_{3})\in\mathcal{Y}_{k}}J_{kk_{1}k_{2}k_{3}} (s)\Big{\|}_{\ell_{k}^{2}}\lesssim\sum_{k_{2}\geq-1}2^{k_{2}(7+1/4)}\|P_{k_{2} }U(s)\|_{L^{\infty}}\|U(s)\|_{W^{1,\infty}}\|U(s)\|_{H^{N}}. \tag{4.5}\] **(C) Estimate of \(P_{k}\mathcal{N}_{4}^{I}(U)\)** Note that \[\Big{\|}2^{kN}\|P_{k}\mathcal{N}_{4}^{I}(U)\|_{L^{2}}\Big{\|}_{\ell_{k}^{2}} \lesssim\sum_{k_{2}\geq-1}2^{k_{2}(7+1/4)}\|P_{k_{2}}U(s)\|_{L^{\infty}}\|U(s) \|_{W^{1,\infty}}\|U(s)\|_{H^{N}}. 
\tag{4.6}\] It follows from (4.2)-(4.6) that \[\|V(t)-V(0)\|_{H^{N}} \lesssim\Big{\|}2^{kN}\|P_{k}(V(t)-V(0))\|_{L^{2}}\Big{\|}_{\ell _{k}^{2}}\lesssim\|U(0)\|_{H^{N}(\mathbb{R}^{d})}^{2}+\|U(t)\|_{H^{N}(\mathbb{ R}^{d})}^{2}\] \[\quad+\int_{0}^{t}\,\sum_{k_{2}\geq-1}2^{k_{2}(7+1/4)}\|P_{k_{2}}U (s)\|_{L^{\infty}}\|U(s)\|_{W^{1,\infty}}\|U(s)\|_{H^{N}(\mathbb{R}^{d})}ds.\] On the other hand, the unitarity of \(e^{it\Lambda}\) ensures \[\|U(t)\|_{H^{N}}\lesssim\|V(t)\|_{H^{N}}\lesssim\|V(0)\|_{H^{N}}+\|V(t)-V(0)\| _{H^{N}}.\] Therefore, (4.1) is proved. ### Continuity of \(Z_{\alpha}\)-norm In order to take a continuation argument later, the following continuous property of \(Z_{\alpha}\)-norm is required. **Proposition 4.2** (Continuity and boundedness of \(Z_{\alpha}\)-norm).: _Assume that \(u\in C([0,T_{0}],H^{N+1}(\mathbb{R}))\cap C^{1}([0,T_{0}],H^{N}(\mathbb{R}))\) is a solution of problem (1.3). Define \(U\) as in (3.1) with the property \(U_{0}=U(0)\in Z_{\alpha}\). Then it holds that_ \[\sup_{t\in[0,T_{0}]}\|e^{-it\Lambda}U(t)\|_{Z_{\alpha}}\leq C\Big{(}T_{0},\|U _{0}\|_{Z_{\alpha}},\sup_{t\in[0,T_{0}]}\|U(t)\|_{H^{N}(\mathbb{R})}\Big{)}. \tag{4.7}\] _Moreover, the mapping \(t\mapsto e^{-it\Lambda}U(t)\) is continuous from \([0,T_{0}]\) to \(Z_{\alpha}\)._ Proof.: Let \(C>0\) denote the sufficiently large generic constant that depends only on \(T_{0}\), \(\|U_{0}\|_{Z_{\alpha}}\) and \(\sup_{t\in[0,T_{0}]}\|U(t)\|_{H^{N}(\mathbb{R})}\). For integer \(J\geq 0\) and \(f\in H^{N}(\mathbb{R})\), define \[\|f\|_{Z_{\alpha}^{J}}:=\sum_{j,k\geq-1}2^{\min\{j\alpha,J\}+N_{1}k}\|Q_{j}P_{ k}f\|_{L^{2}(\mathbb{R})},\qquad\alpha\in(0,1/2]. \tag{4.8}\] This obviously means that there is a constant \(C_{J}>0\) which depends on \(J\) such that \[\|f\|_{Z^{J}_{\alpha}}\leq\|f\|_{Z_{\alpha}},\qquad\|f\|_{Z^{J}_{\alpha}}\leq C_{ J}\|f\|_{H^{N}(\mathbb{R})}.\] As in (3.20) of [9], we shall show that when \(t,t^{\prime}\in[0,T_{0}]\) with \(0\leq t^{\prime}-t\leq 1\), for any \(J\geq 0\), one has \[\|e^{-it^{\prime}\Lambda}U(t^{\prime})-e^{-it\Lambda}U(t)\|_{Z^{J}_{\alpha}} \leq C|t^{\prime}-t|\Big{(}1+\sup_{s\in[t,t^{\prime}]}\|e^{-is\Lambda}U(s)\|_{Z ^{J}_{\alpha}}\Big{)}. \tag{4.9}\] Note that under (4.9), for any \(t,t^{\prime}\in[0,T_{0}]\), \[\sup_{t\in[0,T_{0}]}\|e^{-it\Lambda}U(t)\|_{Z^{J}_{\alpha}}\leq C,\qquad\|e^{ -it^{\prime}\Lambda}U(t^{\prime})-e^{-it\Lambda}U(t)\|_{Z^{J}_{\alpha}}\leq C |t^{\prime}-t| \tag{4.10}\] hold uniformly in \(J\). Subsequently, letting \(J\to\infty\) in (4.8) and (4.10) yields the results in (4.7). 
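For completeness, here is a minimal sketch (in our notation) of how (4.9) yields the first bound in (4.10) uniformly in \(J\). Set \(A(t):=\|e^{-it\Lambda}U(t)\|_{Z_{\alpha}^{J}}\), \(M(t):=\sup_{[0,t]}A\), and pick the step \(\delta:=\min\{1,(2C)^{-1}\}\). Then (4.9) gives, for every \(t\),
\[
M(t+\delta)\leq M(t)+C\delta\big(1+M(t+\delta)\big)\leq M(t)+\tfrac{1}{2}\big(1+M(t+\delta)\big),
\qquad\text{hence}\qquad M(t+\delta)\leq 2M(t)+1,
\]
and iterating \(\lceil T_{0}/\delta\rceil\) times yields \(M(T_{0})\leq 2^{\lceil T_{0}/\delta\rceil}\big(\|U_{0}\|_{Z_{\alpha}}+1\big)\), a bound that depends only on \(T_{0}\), \(\|U_{0}\|_{Z_{\alpha}}\) and \(\sup_{[0,T_{0}]}\|U(t)\|_{H^{N}(\mathbb{R})}\) (through \(C\)), but not on \(J\); the second bound in (4.10) then follows from (4.9) combined with this uniform bound. It therefore remains to establish (4.9).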
Integrating (3.3) and (3.4) over \([t,t^{\prime}]\) yields \[\begin{split} V(t^{\prime})-V(t)&=\int_{t}^{t^{ \prime}}e^{-is\Lambda}\mathcal{N}_{4}(U)ds+\sum_{\mu_{1},\mu_{2}=\pm}\int_{t}^ {t^{\prime}}e^{-is\Lambda}T_{a_{\mu_{1}\mu_{2}}}(U_{\mu_{1}},U_{\mu_{2}})(s)ds \\ &\quad+\sum_{\mu_{1},\mu_{2},\mu_{3}=\pm}\int_{t}^{t^{\prime}}e^{ -is\Lambda}T_{b_{\mu_{1}\mu_{2}\mu_{3}}}(U_{\mu_{1}},U_{\mu_{2}},U_{\mu_{3}})( s)ds.\end{split} \tag{4.11}\] Since (4.9) is equivalent to \[\|V(t^{\prime})-V(t)\|_{Z^{J}_{\alpha}}\leq C|t^{\prime}-t|\Big{(}1+\sup_{s \in[t,t^{\prime}]}\|V(s)\|_{Z^{J}_{\alpha}}\Big{)}, \tag{4.12}\] then (4.11), (4.12) as well as (4.9) will be obtained if there hold for \(s\in[t,t^{\prime}]\) and \(\mu_{1},\mu_{2},\mu_{3}=\pm\): \[\|e^{-is\Lambda}T_{a_{\mu_{1}\mu_{2}}}(U_{\mu_{1}},U_{\mu_{2}})\|_ {Z^{J}_{\alpha}} \leq C\Big{(}1+\sup_{s\in[t,t^{\prime}]}\|V(s)\|_{Z^{J}_{\alpha}} \Big{)}, \tag{4.13a}\] \[\|e^{-is\Lambda}T_{b_{\mu_{1}\mu_{2}\mu_{3}}}(U_{\mu_{1}},U_{\mu_{2 }},U_{\mu_{3}})\|_{Z^{J}_{\alpha}} \leq C\Big{(}1+\sup_{s\in[t,t^{\prime}]}\|V(s)\|_{Z^{J}_{\alpha}} \Big{)},\] (4.13b) \[\|e^{-is\Lambda}\mathcal{N}_{4}(U)\|_{Z^{J}_{\alpha}} \leq C\Big{(}1+\sup_{s\in[t,t^{\prime}]}\|V(s)\|_{Z^{J}_{\alpha}} \Big{)}. \tag{4.13c}\] Next, we prove (4.13a). Let \(C(T_{0})>0\) be a large constant to be determined later. **Case 1.**\(j\leq C(T_{0})\) We now establish \[\sum_{-1\leq j\leq C(T_{0}),k\geq-1}2^{\min\{j\alpha,J\}+N_{1}k}\|Q_{j}P_{k}e^ {-is\Lambda}T_{a_{\mu_{1}\mu_{2}}}(U_{\mu_{1}},U_{\mu_{2}})\|_{L^{2}(\mathbb{R} )}\leq C. \tag{4.14}\] By (2.14), one has \[P_{k}T_{a_{\mu_{1}\mu_{2}}}(U_{\mu_{1}},U_{\mu_{2}})=\sum_{(k_{1},k_{2})\in \mathcal{X}_{k}}P_{k}T_{a_{\mu_{1}\mu_{2}}}(P_{k_{1}}U_{\mu_{1}},P_{k_{2}}U_{ \mu_{2}}).\] Without loss of generality, \(k_{1}\geq k_{2}\) is assumed. In addition, \(2^{k}\lesssim 2^{k_{1}}\) holds true. Then it follows from (A.1b) and the Bernstein inequality that \[\sum_{-1\leq j\leq C(T_{0}),k\geq-1}2^{\min\{j\alpha,J\}+N_{1}k}\|Q _{j}P_{k}e^{-is\Lambda}T_{a_{\mu_{1}\mu_{2}}}(U_{\mu_{1}},U_{\mu_{2}})\|_{L^{2}( \mathbb{R})}\] \[\leq C\sum_{k_{1},k_{2}\geq-1}2^{j\alpha+N_{1}k_{1}}\|T_{a_{\mu_{ 1}\mu_{2}}}(P_{k_{1}}U_{\mu_{1}},P_{k_{2}}U_{\mu_{2}})\|_{L^{2}}\] \[\leq C\sum_{k_{1},k_{2}\geq-1}2^{N_{1}k_{1}}\|P_{k_{1}}U_{\mu_{1} }\|_{L^{2}}\|P_{k_{2}}U_{\mu_{2}}\|_{L^{\infty}}\] \[\leq C\sum_{k_{1},k_{2}\geq-1}2^{(N_{1}-N)k_{1}+k_{2}/2}\|U_{\mu_ {1}}\|_{H^{N}}\|P_{k_{2}}U_{\mu_{2}}\|_{L^{2}}\] \[\leq C\sum_{k_{1},k_{2}\geq-1}2^{(N_{1}-N)(k_{1}+k_{2})}\|U\|_{H^ {N}}^{2}\leq C,\] which derives (4.14). **Case 2.**\(j\geq C(T_{0})\) In this case, we establish \[\sum_{j\geq C(T_{0}),k\geq-1}2^{\min\{j\alpha,J\}+N_{1}k}\|Q_{j}P_{k}e^{-is \Lambda}T_{a_{\mu_{1}\mu_{2}}}(U_{\mu_{1}},U_{\mu_{2}})\|_{L^{2}(\mathbb{R})} \leq C. \tag{4.15}\] By virtue of (2.4), one has \[\begin{split}& Q_{j}P_{k}e^{-is\Lambda}T_{a_{\mu_{1}\mu_{2}}}(U_{ \mu_{1}},U_{\mu_{2}})=\sum_{j_{1},j_{2}\geq-1}\sum_{(k_{1},k_{2})\in\mathcal{ X}_{k}}J_{kk_{1}k_{2}}^{jj_{1}j_{2}},\\ & J_{kk_{1}k_{2}}^{jj_{1}j_{2}}:=Q_{j}P_{k}e^{-is\Lambda}T_{a_{\mu _{1}\mu_{2}}}(e^{is\mu_{1}\Lambda}P_{[[k_{1}]]}Q_{j_{1}}P_{k_{1}}V_{\mu_{1}},e ^{is\mu_{2}\Lambda}P_{[[k_{2}]]}Q_{j_{2}}P_{k_{2}}V_{\mu_{2}}).\end{split} \tag{4.16}\] As in Case 1, \(k_{1}\geq k_{2}\) is assumed. 
Note that \(J_{kk_{1}k_{2}}^{jj_{1}j_{2}}\) can be written as \[J_{kk_{1}k_{2}}^{jj_{1}j_{2}}(t,x)=(2\pi)^{-2}\psi_{j}(x)\iint_{\mathbb{R}^{2 }}K_{0}(x-x_{1},x-x_{2})Q_{j_{1}}P_{k_{1}}V_{\mu_{1}}(s,x_{1})Q_{j_{2}}P_{k_{2} }V_{\mu_{2}}(s,x_{2})dx_{1}dx_{2}, \tag{4.17}\] where \[\begin{split}& K_{0}(x-x_{1},x-x_{2})=\iint_{\mathbb{R}^{2}}e^{i \Psi_{0}}a_{\mu_{1}\mu_{2}}(\xi_{1},\xi_{2})\psi_{k}(\xi_{1}+\xi_{2})\psi_{[[k _{1}]]}(\xi_{1})\psi_{[[k_{2}]]}(\xi_{2})d\xi_{1}d\xi_{2},\\ &\Psi_{0}=s(-\Lambda(\xi_{1}+\xi_{2})+\mu_{1}\Lambda(\xi_{1})+\mu _{2}\Lambda(\xi_{2}))+\xi_{1}(x-x_{1})+\xi_{2}(x-x_{2}).\end{split} \tag{4.18}\] If \(C(T_{0})>0\) is sufficiently large, when \(j\geq C(T_{0})\) and \(s\in[0,T_{0}]\), then the possible critical points of the phase \(\Psi_{0}\) in (4.18) are contained in the scope of \(\max\{|j-j_{1}|,|j-j_{2}|\}\leq O(1)\). The proof of (4.15) will be separated into such two subcases: \(\max\{|j-j_{1}|,|j-j_{2}|\}\geq O(1)\) and \(\max\{|j-j_{1}|,|j-j_{2}|\}\leq O(1)\). **Subcase 2.1.**\(\max\{|j-j_{1}|,|j-j_{2}|\}\geq O(1)\) Denote the operator \(\mathcal{L}_{0}\) and its adjoint operator \(\mathcal{L}_{0}^{*}\) as \[\begin{split}\mathcal{L}_{0}&:=-i(|\partial_{\xi_{1} }\Psi_{0}|^{2}+|\partial_{\xi_{2}}\Psi_{0}|^{2})^{-1}(\partial_{\xi_{1}}\Psi_{ 0}\partial_{\xi_{1}}+\partial_{\xi_{2}}\Psi_{0}\partial_{\xi_{2}}),\\ \mathcal{L}_{0}^{*}&:=i\partial_{\xi_{1}}\Big{(} \frac{\partial_{\xi_{1}}\Psi_{0}}{|\partial_{\xi_{1}}\Psi_{0}|^{2}+|\partial _{\xi_{2}}\Psi_{0}|^{2}}\Big{)}+i\partial_{\xi_{2}}\Big{(}\frac{\partial_{ \xi_{2}}\Psi_{0}}{|\partial_{\xi_{1}}\Psi_{0}|^{2}+|\partial_{\xi_{2}}\Psi_{0}|^ {2}}\Big{)}.\end{split}\] Then \(\mathcal{L}_{0}e^{i\Psi_{0}}=e^{i\Psi_{0}}\). The fact of \(|\Lambda^{\prime}(y)|\leq 1\) and the condition of \(\max\{|j-j_{1}|,|j-j_{2}|\}\geq O(1)\) for \(j\geq C(T_{0})\) with large \(C(T_{0})\) lead to \[|\partial_{\xi_{1}}\Psi_{0}|+|\partial_{\xi_{2}}\Psi_{0}|\gtrsim|x-x_{1}|+|x-x_ {2}|\gtrsim 2^{\max\{j,j_{1},j_{2}\}}.\] On the other hand, \(|\Lambda^{(l)}(y)|\lesssim 1\) holds for \(l\geq 1\), which yields \[|\partial_{\xi_{1}}^{l}\Psi_{0}|+|\partial_{\xi_{2}}^{l}\Psi_{0}|\lesssim s \lesssim T_{0}\qquad\text{for }l\geq 2.\] By the method of stationary phase, we can achieve \[|K_{0}(x-x_{1},x-x_{2})|\] \[= \Big{|}\iint_{\mathbb{R}^{2}}\mathcal{L}_{0}^{4}(e^{i\Psi_{0}}) a_{\mu_{1}\mu_{2}}(\xi_{1},\xi_{2})\psi_{k}(\xi_{1}+\xi_{2})\psi_{[[k_{1}]]}( \xi_{1})\psi_{[[k_{2}]]}(\xi_{2})d\xi_{1}d\xi_{2}\Big{|}\] \[\lesssim \iint_{\mathbb{R}^{2}}|(\mathcal{L}_{0}^{*})^{4}[a_{\mu_{1}\mu_{2 }}(\xi_{1},\xi_{2})\psi_{k}(\xi_{1}+\xi_{2})\psi_{[[k_{1}]]}(\xi_{1})\psi_{[[k _{2}]]}(\xi_{2})]|d\xi_{1}d\xi_{2}\] \[\lesssim 2^{k_{1}+k_{2}-\max\{j,j_{1},j_{2}\}}(1+|x-x_{1}|+|x-x_ {2}|)^{-3}.\] This, together with the Holder inequality (13), the Bernstein inequality and (4.17), implies \[\|J^{jj_{1}j_{2}}_{kk_{1}k_{2}}\|_{L^{2}(\mathbb{R})} \lesssim\|K_{0}(\cdot,\cdot)\|_{L^{1}(\mathbb{R}^{2})}\|Q_{j_{1}} P_{k_{1}}V_{\mu_{1}}\|_{L^{2}}\|Q_{j_{2}}P_{k_{2}}V_{\mu_{2}}\|_{L^{\infty}}\] \[\lesssim 2^{k_{1}+k_{2}-\max\{j,j_{1},j_{2}\}}\|P_{k_{1}}V_{\mu_{1}} \|_{L^{2}}\|P_{k_{2}}V_{\mu_{2}}\|_{L^{\infty}}\] \[\lesssim 2^{k_{1}(1-N)+3k_{2}/2-\max\{j,j_{1},j_{2}\}}\|V_{\mu_{1}} \|_{H^{N}}\|P_{k_{2}}V_{\mu_{2}}\|_{L^{2}}\] \[\lesssim 2^{(k_{1}+k_{2})(2-N)-\max\{j,j_{1},j_{2}\}}\|U\|_{H^{N}}^{2}.\] Therefore, one arrives at \[\sum_{j\geq C(T_{0}),k\geq-1}2^{\min\{j\alpha,J\}+N_{1}k}\sum_{ \begin{subarray}{c}j_{1},j_{2}\geq-1,\\ \max\{|j-j_{1}|,|j-j_{2}|\}\geq 
O(1)\end{subarray}}\sum_{(k_{1},k_{2})\in \mathcal{X}_{k}}\|J^{jj_{1}j_{2}}_{kk_{1}k_{2}}\|_{L^{2}(\mathbb{R})}\leq C. \tag{4.19}\] **Subcase 2.2**.: \(\max\{|j-j_{1}|,|j-j_{2}|\}\leq O(1)\) Applying (A.1b) to \(J^{jj_{1}j_{2}}_{kk_{1}k_{2}}\) in (4.16) directly yields \[\|J^{jj_{1}j_{2}}_{kk_{1}k_{2}}\|_{L^{2}(\mathbb{R})} \lesssim\|T_{a_{\mu_{1}\mu_{2}}}(e^{is\mu_{1}\Lambda}P_{[[k_{1}]]} Q_{j_{1}}P_{k_{1}}V_{\mu_{1}},e^{is\mu_{2}\Lambda}P_{[[k_{2}]]}Q_{j_{2}}P_{k_{2}}V_{ \mu_{2}})\|_{L^{2}(\mathbb{R})}\] \[\lesssim\|Q_{j_{1}}P_{k_{1}}V_{\mu_{1}}\|_{L^{2}}\|e^{is\mu_{2} \Lambda}P_{[[k_{2}]]}Q_{j_{2}}P_{k_{2}}V_{\mu_{2}})\|_{L^{\infty}}\] \[\lesssim 2^{k_{2}/2}\|Q_{j_{1}}P_{k_{1}}V_{\mu_{1}}\|_{L^{2}}\|P_{k_{ 2}}V_{\mu_{2}}\|_{L^{2}},\] where we have used (2.3) with \(\beta=0\). Due to \(2^{k}\lesssim 2^{k_{1}}\) and \(\max\{|j-j_{1}|,|j-j_{2}|\}\leq O(1)\), then \[\sum_{j\geq C(T_{0}),k\geq-1}2^{\min\{j\alpha,J\}+N_{1}k}\sum_{ \begin{subarray}{c}j_{1},j_{2}\geq-1,\\ \max\{|j-j_{1}|,|j-j_{2}|\}\leq O(1)\end{subarray}}\sum_{(k_{1},k_{2})\in \mathcal{X}_{k}}\|J^{jj_{1}j_{2}}_{kk_{1}k_{2}}\|_{L^{2}(\mathbb{R})}\] \[\lesssim\sum_{j_{1},k_{1},k_{2}\geq-1}2^{\min\{j_{1}\alpha,J\}+N_ {1}k_{1}}\|J^{jj_{1}j_{2}}_{kk_{1}k_{2}}\|_{L^{2}(\mathbb{R})}\] \[\lesssim\sum_{j_{1},k_{1},k_{2}\geq-1}2^{\min\{j_{1}\alpha,J\}+N_ {1}k_{1}+k_{2}/2}\|Q_{j_{1}}P_{k_{1}}V_{\mu_{1}}\|_{L^{2}}\|P_{k_{2}}V_{\mu_{2}} \|_{L^{2}}\] \[\lesssim\|V\|_{Z^{J}_{J}}\|U\|_{H^{N}}.\] This, together with (4.16) and (4.19), yields (4.15). In addition, (4.13a) follows from (4.14) and (4.15). Note that only the small value solution problem (1.3) is studied, then the cubic and higher order nonlinear terms do not cause any additional difficulties. Then the proofs of (4.13b) and (4.13c) are omitted here. ## 5 Estimate of \(Z_{\alpha}\)-norm In this section, suppose that the following bootstrap assumption holds for \(\alpha\in(0,1/2]\) and \(t\in[0,T_{\alpha,\varepsilon}]\), \[\|V(t)\|_{H^{N}(\mathbb{R})}+\|V(t)\|_{Z_{\alpha}}\leq\varepsilon_{1}. \tag{5.1}\] This, together with (2.1), implies \[\sup_{k\geq-1}2^{kN}\|P_{k}V(t)\|_{L^{2}(\mathbb{R})}+\sum_{j,k\geq-1}2^{j \alpha+N_{1}k}\|Q_{j}P_{k}V(t)\|_{L^{2}(\mathbb{R})}\lesssim\varepsilon_{1}. \tag{5.2}\] Acting \(Q_{j}\) to (3.18) yields \[Q_{j}P_{k}V(t,x)=Q_{j}P_{k}V(0,x)+Q_{j}\mathcal{B}_{k}+\int_{0}^{t}Q_{j}( \mathcal{C}_{k}(s)+\mathcal{Q}_{k}(s)+P_{k}e^{-is\Lambda}\mathcal{N}_{4}^{I} (U))ds, \tag{5.3}\] where \(\mathcal{B}_{k}\), \(\mathcal{C}_{k}\), \(\mathcal{Q}_{k}\) and \(\mathcal{N}_{4}^{I}(U)\) are defined by (3.19), (3.20), (3.21) and (3.13), respectively. ### Estimate of the cubic nonlinearity \(\mathcal{C}_{k}(s)\) **Lemma 5.1**.: _Under the bootstrap assumption (5.2), it holds that for \(\alpha\in(0,1/2]\) and \(t\geq 0\),_ \[\sum_{j,k\geq-1}2^{j\alpha+N_{1}k}\|Q_{j}\mathcal{C}_{k}(t)\|_{L^{2}(\mathbb{ R})}\lesssim\varepsilon_{1}^{3}(1+t)^{-2\alpha}. \tag{5.4}\] We point out that the key point for proving (5.4) is to analyze the corresponding Schwartz kernel of \(\mathcal{C}_{k}(s)\) according to the space-time locations and the frequencies. 
For this purpose, by (2.4) and (3.20), we rewrite \(Q_{j}\mathcal{C}_{k}(t)\) as \[Q_{j}\mathcal{C}_{k}(t)=\sum_{j_{1},j_{2},j_{3}\geq-1}\sum_{ \begin{subarray}{c}(k_{1},k_{2},k_{3})\in\mathcal{Y}_{k},\\ \max\{k_{1},k_{2}\}\geq k_{3}-O(1)\end{subarray}}I_{kk_{1}k_{2}k_{3}}^{jj_{1} j_{2}j_{3}}, \tag{5.5}\] where \[I_{kk_{1}k_{2}k_{3}}^{jj_{1}j_{2}j_{3}} :=Q_{j}P_{k}e^{-it\Lambda}T_{m++-}(e^{it\Lambda}P_{[[k_{1}]]} \mathcal{V}_{1},e^{it\Lambda}P_{[[k_{2}]]}\mathcal{V}_{2},e^{-it\Lambda}P_{[[ k_{3}]]}\mathcal{V}_{3}), \tag{5.6}\] \[\mathcal{V}_{1} :=Q_{j_{1}}P_{k_{1}}V,\quad\mathcal{V}_{2}:=Q_{j_{2}}P_{k_{2}}V, \quad\mathcal{V}_{3}:=Q_{j_{3}}P_{k_{3}}V_{-}.\] The proof of Lemma 5.1 will be separated into the following two parts in terms of the space-time locations: outside of the cone and inside of the cone, respectively. **Lemma 5.2** (Outside of cone).: _Under the bootstrap assumption (5.2), it holds that for \(\alpha\in(0,1/2]\) and \(t\geq 0\),_ \[\sum_{\begin{subarray}{c}j,j_{1},j_{2},j_{3},k\geq-1,\\ (k_{1},k_{2},k_{3})\in\mathcal{Y}_{k},\\ \max\{k_{1},k_{2}\}\geq k_{3}-O(1)\end{subarray}}2^{j\alpha+N_{1}k}\|I_{kk_{1 }k_{2}k_{3}}^{j_{1}j_{2}j_{3}}\boldsymbol{I}_{I_{out}}(t)\|_{L^{2}(\mathbb{R}) }\lesssim\varepsilon_{1}^{3}(1+t)^{-2\alpha}, \tag{5.7}\] _where \(I_{out}:=\{t\geq 0:\max\{j,j_{1},j_{2},j_{3}\}\geq\log_{2}(1+t)+O(1)\}\) and_ \[\boldsymbol{I}_{I}(t):=\begin{cases}1,&\quad t\in I,\\ 0,&\quad t\not\in I.\end{cases} \tag{5.8}\] **Lemma 5.3** (Inside of cone).: _Under the bootstrap assumption (5.2), one has that for \(\alpha\in(0,1/2]\) and \(t\geq 0\),_ \[\sum_{\begin{subarray}{c}j,j_{1},j_{2},j_{3},k\geq-1,\\ (k_{1},k_{2},k_{3})\in\mathcal{Y}_{k},\\ \max\{k_{1},k_{2}\}\geq k_{3}-O(1)\end{subarray}}2^{j\alpha+N_{1}k}\|I_{kk_{1 }k_{2}k_{3}}^{j_{1}j_{2}j_{3}}\boldsymbol{I}_{I_{in}}(t)\|_{L^{2}(\mathbb{R}) }\lesssim\varepsilon_{1}^{3}(1+t)^{-2\alpha}, \tag{5.9}\] _where \(I_{in}:=\{t\geq 0:\max\{j,j_{1},j_{2},j_{3}\}\leq\log_{2}(1+t)+O(1)\}\)._ It is obvious that Lemma 5.1 comes from Lemmas 5.2 and 5.3 directly. Proof of Lemma 5.2.: According to the definition (5.6), we have \[I_{kk_{1}k_{2}k_{3}}^{j_{1}j_{2}j_{3}}(t,x)=(2\pi)^{-3}\psi_{j}( x)\iiint_{\mathbb{R}^{3}}K_{1}(x-x_{1},x-x_{2},x-x_{3})\mathcal{V}_{1}(t,x_{1}) \tag{5.10}\] \[\times\mathcal{V}_{2}(t,x_{2})\mathcal{V}_{3}(t,x_{3})dx_{1}dx_{ 2}dx_{3},\] where \[K_{1}(x-x_{1},x-x_{2},x-x_{3})=\iiint_{\mathbb{R}^{3}}e^{i\Psi_{ 1}}m_{++-}(\xi_{1},\xi_{2},\xi_{3})\psi_{k}(\xi_{1}+\xi_{2}+\xi_{3}) \tag{5.11}\] \[\times\psi_{[[k_{1}]]}(\xi_{1})\psi_{[[k_{2}]]}(\xi_{2})\psi_{[[ k_{3}]]}(\xi_{3})d\xi_{1}d\xi_{2}d\xi_{3},\] \[\Psi_{1}=t(-\Lambda(\xi_{1}+\xi_{2}+\xi_{3})+\Lambda(\xi_{1})+ \Lambda(\xi_{2})-\Lambda(\xi_{3}))\] \[+\xi_{1}(x-x_{1})+\xi_{2}(x-x_{2})+\xi_{3}(x-x_{3}).\] If \(x\in\operatorname{supp}\psi_{j}\), \(x_{l}\in\operatorname{supp}\psi_{l}\) (\(l=1,2,3\)) and \(\max\{j,j_{1},j_{2},j_{3}\}\geq\log_{2}(1+t)+O(1)\), then the possible critical points of phase \(\Psi_{1}\) in (5.11) are contained in \(\max\limits_{l=1,2,3}|j-j_{l}|\leq O(1)\). Based on this, the proof of (5.7) will be separated into such two cases: \(\max\limits_{l=1,2,3}|j-j_{l}|\geq O(1)\) and \(\max\limits_{l=1,2,3}|j-j_{l}|\leq O(1)\). **Case 1.**\(\max\limits_{l=1,2,3}|j-j_{l}|\geq O(1)\) Set \[\mathcal{L}_{1}:=-i(|\partial_{\xi_{1}}\Psi_{1}|^{2}+|\partial_{\xi_{2}}\Psi_ {1}|^{2}+|\partial_{\xi_{3}}\Psi_{1}|^{2})^{-1}\sum_{l=1}^{3}\partial_{\xi_{l }}\Psi_{1}\partial_{\xi_{l}}.\] Then \(\mathcal{L}_{1}e^{i\Psi_{1}}=e^{i\Psi_{1}}\). 
In addition, the adjoint operator of \(\mathcal{L}_{1}\) is \[\mathcal{L}_{1}^{*}:=i\sum_{l=1}^{3}\partial_{\xi_{l}}\Big{(}\frac{\partial_{ \xi_{l}}\Psi_{1}}{|\partial_{\xi_{1}}\Psi_{1}|^{2}+|\partial_{\xi_{2}}\Psi_{1} |^{2}+|\partial_{\xi_{3}}\Psi_{1}|^{2}}\Big{)}.\] The conditions \(\max\{j,j_{1},j_{2},j_{3}\}\geq\log_{2}(1+t)+O(1)\) and \(\max\limits_{l=1,2,3}|j-j_{l}|\geq O(1)\) ensure that if \(x\in\operatorname{supp}\psi_{j}\), \(x_{l}\in\operatorname{supp}\psi_{l}\), \(l=1,2,3\), then it holds that \[|x-x_{1}|+|x-x_{2}|+|x-x_{3}| \geq 2^{O(1)}(1+t),\] \[|x-x_{1}|+|x-x_{2}|+|x-x_{3}| \gtrsim 2^{\max\{j,j_{1},j_{2},j_{3}\}}.\] This, together with \(|\Lambda^{\prime}(y)|\leq 1\), yields \[(|\partial_{\xi_{1}}\Psi_{1}|^{2}+|\partial_{\xi_{2}}\Psi_{1}|^{2 }+|\partial_{\xi_{3}}\Psi_{1}|^{2})^{1/2} \gtrsim|x-x_{1}|+|x-x_{2}|+|x-x_{3}| \tag{5.12}\] \[\gtrsim\max\{1+t,2^{\max\{j,j_{1},j_{2},j_{3}\}}\}.\] On the other hand, for \(l\geq 2\), one obtains from (5.11) that \[|\partial_{\xi_{1},\xi_{2},\xi_{3}}^{l}\Psi_{1}|\lesssim t. \tag{5.13}\] Without loss of generality, \(\max\{k_{1},k_{2},k_{3}\}=k_{1}\) is assumed. By the method of stationary phase and (5.12), (5.13), (A.11), we arrive at \[|K_{1}(x-x_{1},x-x_{2},x-x_{3})|\] \[= \Big{|}\iiint_{\mathbb{R}^{3}}\mathcal{L}_{1}^{7}(e^{i\Psi_{1}}) m_{++-}(\xi_{1},\xi_{2},\xi_{3})\psi_{k}(\xi_{1}+\xi_{2}+\xi_{3})\psi_{[[k_{1}]]}( \xi_{1})\psi_{[[k_{2}]]}(\xi_{2})\psi_{[[k_{3}]]}(\xi_{3})d\xi_{1}d\xi_{2}d\xi _{3}\Big{|}\] \[\lesssim \iiint_{\mathbb{R}^{3}}|(\mathcal{L}_{1}^{*})^{7}[m_{++-}(\xi_{1}, \xi_{2},\xi_{3})\psi_{k}(\xi_{1}+\xi_{2}+\xi_{3})\psi_{[[k_{1}]]}(\xi_{1})\psi _{[[k_{2}]]}(\xi_{2})\psi_{[[k_{3}]]}(\xi_{3})]|d\xi_{1}d\xi_{2}d\xi_{3}\] \[\lesssim 2^{k_{1}+k_{2}+k_{3}+\max\{k_{1},k_{2},k_{3}\}}(1+|x-x_{1} |+|x-x_{2}|+|x-x_{3}|)^{-7}\] \[\lesssim 2^{4\max\{k_{1},k_{2},k_{3}\}-\max\{j,j_{1},j_{2},j_{3} \}}(1+t)^{-2}(1+|x-x_{1}|+|x-x_{2}|+|x-x_{3}|)^{-4}.\] This, together with (5.2), (5.10), the Holder inequality (2.13) and the Bernstein inequality, leads to \[\|I_{kk_{1}k_{2}k_{3}}^{jj_{1}j_{2}j_{3}}\|_{L^{2}(\mathbb{R})} \lesssim\|K_{1}(\cdot,\cdot,\cdot)\|_{L^{1}(\mathbb{R}^{3})}\| \mathcal{V}_{1}\|_{L^{2}}\|\mathcal{V}_{2}\|_{L^{\infty}}\|\mathcal{V}_{3}\|_{ L^{\infty}} \tag{5.14}\] \[\lesssim 2^{4\max\{k_{1},k_{2},k_{3}\}-\max\{j,j_{1},j_{2},j_{3} \}}(1+t)^{-2}\|\mathcal{V}_{1}\|_{L^{2}}\|\mathcal{V}_{2}\|_{L^{\infty}}\| \mathcal{V}_{3}\|_{L^{\infty}}\] \[\lesssim 2^{4\max\{k_{1},k_{2},k_{3}\}-\max\{j,j_{1},j_{2},j_{3} \}}(1+t)^{-2}\|P_{k_{1}}V\|_{L^{2}}\|P_{k_{2}}V\|_{L^{\infty}}\|P_{k_{3}}V_{-}\| _{L^{\infty}}\] \[\lesssim 2^{4k_{1}+(k_{2}+k_{3})/2-\max\{j,j_{1},j_{2},j_{3} \}}(1+t)^{-2}\|P_{k_{1}}V\|_{L^{2}}\|P_{k_{2}}V\|_{L^{2}}\|P_{k_{3}}V_{-}\|_{L ^{2}}\] \[\lesssim \varepsilon_{1}^{3}2^{(4-N)(k_{1}+k_{2}+k_{3})-2j/3-(j_{1}+j_{2}+j _{3})/9}(1+t)^{-2}.\] Combining (5.14) with \(N\geq N_{1}+5\) implies \[\sum_{\begin{subarray}{c}j,j_{1},j_{2},j_{3},k\geq-1,\\ \max\limits_{l=1,2,3}|j-j_{l}|\geq O(1)\max\{k_{1},k_{2},k_{3}\}\in\mathcal{Y }_{k},\\ \end{subarray}}2^{j\alpha+N_{1}k}\|I_{kk_{1}k_{2}k_{3}}^{jj_{1}j_{2}j_{3}} \mathbf{1}_{I_{out}}(t)\|_{L^{2}(\mathbb{R})}\lesssim\varepsilon_{1}^{3}(1+t)^{-2}. \tag{5.15}\] **Case 2.**\(\max\limits_{l=1,2,3}|j-j_{l}|\leq O(1)\) Without loss of generality, \(\max\{k_{1},k_{2},k_{3}\}=k_{1}\) and \(\operatorname{med}\{k_{1},k_{2},k_{3}\}=k_{2}\) are assumed. 
Applying (A.7) to (5.6) yields \[\|I_{kk_{1}k_{2}k_{3}}^{jj_{1}j_{2}j_{3}}\|_{L^{2}}\lesssim 2^{7k_{2}}\| \mathcal{V}_{1}\|_{L^{2}}\|e^{it\Lambda}P_{[[k_{2}]]}\mathcal{V}_{2}\|_{L^{ \infty}}\|e^{-it\Lambda}P_{[[k_{3}]]}\mathcal{V}_{3}\|_{L^{\infty}}. \tag{5.16}\] By a similar argument of (4.4), one can conclude from (2.3) with \(\beta=\alpha\), the assumption (5.2), (5.16) and the condition \(N_{1}\geq 9\) that \[\begin{split}\sum_{\begin{subarray}{c}j,j_{1},j_{2},j_{3},k\geq-1,\\ l=1,2,3\end{subarray}}\sum_{\begin{subarray}{c}(k_{1},k_{2},k_{3})\in\mathcal{Y} _{k},\\ |j-j_{l}|\leq O(1)\max\{k_{1},k_{2}\}\geq k_{3}-O(1)\end{subarray}}2^{j\alpha +N_{1}k}\|I_{kk_{1}k_{2}k_{3}}^{j_{1}j_{2}j_{3}}\mathbf{1}_{I_{out}}(t)\|_{L^{2 }(\mathbb{R})}\\ \lesssim(1+t)^{-2\alpha}\sum_{\begin{subarray}{c}j_{1},j_{2},j_{3 }\geq-1,\\ k_{1},k_{2},k_{3}\geq-1\end{subarray}}2^{(j_{1}+j_{2}+j_{3})\alpha+N_{1}k_{1}+ 9(k_{2}+k_{3})}\|Q_{j_{1}}P_{k_{1}}V\|_{L^{2}}\|Q_{j_{2}}P_{k_{2}}V\|_{L^{2}} \|Q_{j_{3}}P_{k_{3}}V\|_{L^{2}}\\ \lesssim\varepsilon_{1}^{3}(1+t)^{-2\alpha}.\end{split} \tag{5.17}\] Collecting (5.15) and (5.17) derives (5.7). Proof of Lemma 5.3.: At first, we deal with the case of \(t\geq 1\). At this time, (5.10) can be reformulated as \[\begin{split} I_{kk_{1}k_{2}k_{3}}^{j_{1}j_{2}j_{3}}(t,x)=(2\pi) ^{-3}\psi_{j}(x)\iiint_{\mathbb{R}^{3}}K_{2}&(x-x_{1},x-x_{2},x- x_{3})\mathcal{V}_{1}(t,x_{1})\\ &\times\mathcal{V}_{2}(t,x_{2})\mathcal{V}_{3}(t,x_{3})dx_{1}dx_ {2}dx_{3},\end{split} \tag{5.18}\] where \[\begin{split}& K_{2}(x-x_{1},x-x_{2},x-x_{3})=\iiint_{\mathbb{R} ^{3}}e^{i\Psi_{2}}m_{++-}\psi_{k}(\xi)\psi_{[[k_{1}]]}(\xi-\eta)\psi_{[[k_{2} ]]}(\eta-\zeta)\\ &\times\psi_{[[k_{3}]]}(\zeta)d\xi d\eta d\zeta,\\ &\Psi_{2}=t\Phi+\xi(x-x_{1})+\eta(x_{1}-x_{2})+\zeta(x_{2}-x_{3}), \\ &\Phi=\Phi(\xi,\eta,\zeta)=-\Lambda(\xi)+\Lambda(\xi-\eta)+ \Lambda(\eta-\zeta)-\Lambda(\zeta).\end{split} \tag{5.19}\] The proof of (5.9) will be separated into two cases: \(k_{3}-O(1)\leq\max\{k_{1},k_{2}\}\leq k_{3}\) and \(\max\{k_{1},k_{2}\}\geq k_{3}\). Due to the symmetry, it is convenient to assume \(\max\{k_{1},k_{2}\}=k_{1}\). **Case 1.**\(k_{3}-O(1)\leq k_{1}\leq k_{3}\) To control the factor \(2^{j\alpha}\) in (5.9), we will treat such two cases of \(j\leq\max\{j_{1},j_{2},j_{3}\}+O(1)\) and \(j\geq\max\{j_{1},j_{2},j_{3}\}+O(1)\), respectively. In addition, note that \(2^{k}\lesssim 2^{\max\{k_{1},k_{2},k_{3}\}}\lesssim 2^{k_{1}}\) holds. **Case 1.1.**\(j\leq\max\{j_{1},j_{2},j_{3}\}+O(1)\) For convenience, \(\max\{j_{1},j_{2},j_{3}\}=j_{2}\) is assumed. 
By utilizing (A.7) as (5.16), one can obtain \[\begin{split}\|I_{kk_{1}k_{2}k_{3}}^{jj_{1}j_{2}j_{3}}\|_{L^{2}( \mathbb{R})}&\lesssim\|T_{m_{++-}}(e^{it\Lambda}P_{[[k_{1}]]} \mathcal{V}_{1},e^{it\Lambda}P_{[[k_{2}]]}\mathcal{V}_{2},e^{-it\Lambda}P_{[[k_ {3}]]}\mathcal{V}_{3})\|_{L^{2}(\mathbb{R})}\\ &\lesssim 2^{7k_{1}}\|e^{it\Lambda}P_{[[k_{1}]]}\mathcal{V}_{1}\|_{L^{ \infty}}\|\mathcal{V}_{2}\|_{L^{2}}\|e^{-it\Lambda}P_{[[k_{3}]]}\mathcal{V}_{3} \|_{L^{\infty}}.\end{split} \tag{5.20}\] Therefore, it follows from (2.3) with \(\beta=\alpha\) and \(N_{1}\geq 10\) that \[\sum_{\begin{subarray}{c}j,j_{1},j_{2},j_{3},k\geq-1,\\ j\leq\max\{j_{1},j_{2},j_{3}\}+O(1)\end{subarray}}\sum_{\begin{subarray}{c}(k_{1 },k_{2},k_{3})\in\mathcal{Y}_{k},\\ k_{3}-O(1)\leq k_{1}\leq k_{3}\end{subarray}}2^{j\alpha+N_{1}k}\|I_{kk_{1}k_{2} k_{3}}^{j_{1}j_{2}j_{3}}\mathbf{1}_{I_{in}}(t)\|_{L^{2}(\mathbb{R})}\] \[\lesssim\sum_{\begin{subarray}{c}j_{1},j_{2},j_{3}\geq-1,\\ k_{1},k_{2},k_{3}\geq-1,\\ |k_{1}-k_{3}|\leq O(1)\end{subarray}}\sum_{j\leq j_{2}+O(1)}2^{k_{1}(N_{1}+7)+ j\alpha}\|e^{it\Lambda}P_{[[k_{1}]]}\mathcal{Y}_{1}\|_{L^{\infty}}\|Q_{j_{2}}P_{k_{2}}V \|_{L^{2}}\|e^{-it\Lambda}P_{[[k_{3}]]}\mathcal{V}_{3}\|_{L^{\infty}}\] \[\lesssim(1+t)^{-2\alpha}\sum_{\begin{subarray}{c}j_{1},j_{2},j_{3 }\geq-1,\\ k_{1},k_{2},k_{3}\geq-1,\\ |k_{1}-k_{3}|\leq O(1)\end{subarray}}2^{k_{1}(N_{1}+10)+(j_{1}+j_{2}+j_{3}) \alpha}\|Q_{j_{1}}P_{k_{1}}V\|_{L^{2}}\|Q_{j_{2}}P_{k_{2}}V\|_{L^{2}}\|Q_{j_{ 3}}P_{k_{3}}V\|_{L^{2}}\] \[\lesssim\varepsilon_{1}^{3}(1+t)^{-2\alpha}.\] **Case 1.2**.: \(j\geq\max\{j_{1},j_{2},j_{3}\}+O(1)\) At first, we discuss the possible critical points of the phase \(\Psi_{2}\) in (5.18). Note that \[\partial_{\xi}\Phi =\Lambda^{\prime}(\xi-\eta)-\Lambda^{\prime}(\xi)=-\eta\Lambda^{ \prime\prime}(\xi-r_{1}\eta),\quad r_{1}\in[0,1],\] \[\partial_{\zeta}\Phi =\Lambda^{\prime}(\zeta-\eta)-\Lambda^{\prime}(\zeta)=-\eta \Lambda^{\prime\prime}(\zeta-r_{2}\eta),\quad r_{2}\in[0,1], \tag{5.22}\] \[\Lambda^{\prime\prime}(x) =(1+x^{2})^{-3/2}.\] By \(|\xi|\approx 2^{k}\), \(|\xi-\eta|\approx 2^{k_{1}}\), \(|\eta-\zeta|\approx 2^{k_{2}}\), \(|\zeta|\approx 2^{k_{3}}\) and \[|\xi-r_{1}\eta|=|r_{1}(\xi-\eta)+(1-r_{1})\xi|\lesssim 2^{\max\{k,k_{1}\}} \lesssim 2^{k_{1}},\] one has \[2^{-3k_{1}}|\eta|\lesssim|\partial_{\xi}\Phi|,|\partial_{\zeta}\Phi|\lesssim| \eta|. \tag{5.23}\] On the other hand, direct computation shows \[\partial_{\xi}\Psi_{2}=t\partial_{\xi}\Phi+x-x_{1},\qquad\partial_{\zeta}\Psi _{2}=t\partial_{\zeta}\Phi+x_{2}-x_{3}. \tag{5.24}\] It is noticed that the condition \(j\geq\max\{j_{1},j_{2},j_{3}\}+O(1)\) ensures \(|x-x_{1}|\approx 2^{j}\). In view of (5.23) and (5.24), in order to give a precise analysis on the related Schwarz kernel \(K_{2}\) in (5.18), one needs to discuss the scope of frequency \(\eta\). Note that when \(2^{-3k_{1}}t|\eta|\gg 2^{j}\), \(|\partial_{\xi}\Psi_{2}|\geq t|\partial_{\xi}\Phi|-|x-x_{1}|\) has a lower bound; when \(t|\eta|\ll 2^{j}\), \(|\partial_{\xi}\Psi_{2}|\geq|x-x_{1}|-t|\partial_{\xi}\Phi|\) also has a lower bound. 
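In quantitative form, since \(|x-x_{1}|\approx 2^{j}\) in the present case and \(2^{-3k_{1}}|\eta|\lesssim|\partial_{\xi}\Phi|\lesssim|\eta|\) by (5.23), one has, for a large constant \(M_{1}>0\) as in (5.25) below,
\[|\partial_{\xi}\Psi_{2}|\geq t|\partial_{\xi}\Phi|-|x-x_{1}|\gtrsim 2^{-3k_{1}}t|\eta|\quad\text{if }2^{-3k_{1}}t|\eta|\geq 2^{j+M_{1}},\]
\[|\partial_{\xi}\Psi_{2}|\geq|x-x_{1}|-t|\partial_{\xi}\Phi|\gtrsim 2^{j}\quad\text{if }t|\eta|\leq 2^{j-M_{1}},\]
which is recorded as (5.31) below.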
Based on this, for a fixed and large enough number \(M_{1}>0\), we now introduce \[\chi_{high}^{I}(\eta) =\chi\Big{(}\frac{t|\eta|}{2^{j+3k_{1}+M_{1}}}\Big{)},\qquad \qquad\chi_{low}^{I}(\eta)=1-\chi\Big{(}\frac{t|\eta|}{2^{j-M_{1}}}\Big{)}, \tag{5.25}\] \[\chi_{med}^{I}(\eta) =(1-\chi_{high}^{I}(\eta))(1-\chi_{low}^{I}(\eta)),\] where the cut-off function \(\chi\) with \(\chi(s)\in C^{\infty}(\mathbb{R})\) and \(0\leq\chi(s)\leq 1\) is defined as \[\chi(s)=\begin{cases}0,&s\leq 1,\\ 1,&s\geq 2.\end{cases} \tag{5.26}\] If \(M_{1}\geq 3\), one then easily knows \[\begin{split}&\operatorname{supp}\chi^{I}_{high}\subset\{t|\eta|\geq 2 ^{j+3k_{1}+M_{1}}\},\qquad\operatorname{supp}\chi^{I}_{low}\subset\{t|\eta|\leq 2 ^{j-M_{1}+1}\},\\ &\operatorname{supp}\chi^{I}_{high}\cap\operatorname{supp}\chi^{I }_{low}=\emptyset,\\ &\operatorname{supp}\chi^{I}_{med}\subset\{2^{j-M_{1}}\leq t| \eta|\leq 2^{j+3k_{1}+M_{1}+1}\}.\end{split} \tag{5.27}\] The remaining work is to deal with the case of the medium frequency mode \(2^{j}\lesssim t|\eta|\lesssim 2^{j+3k_{1}}\), where the corresponding phase \(\Psi_{2}\) may have critical points. On \(\operatorname{supp}\chi^{I}_{med}\), \(\eta\) will be separated into the sub-high and sub-low modes according to the property of \(\partial_{\zeta}\Psi_{2}=0\). Note that \(|x_{2}-x_{3}|\) has an upper bound \(2^{\max\{j_{2},j_{3}\}}\). For the sub-high frequency mode \(t|\eta|\geq 2^{\max\{j_{2},j_{3}\}+3k_{1}+M_{1}}\), we see \(|\partial_{\zeta}\Psi_{2}|\geq t|\partial_{\zeta}\Phi|-|x_{2}-x_{3}|\), which means that there is no critical point for \(\Psi_{2}\). For the sub-low frequency mode \(t|\eta|\leq 2^{\max\{j_{2},j_{3}\}+3k_{1}+M_{1}}\), it follows from the third line of (5.27) that \(j\leq\max\{j_{2},j_{3}\}+3k_{1}+2M_{1}\). Based on this, the scope of \(j\) in Case 1.2 will be separated into \(j\leq\max\{j_{2},j_{3}\}+3k_{1}+2M_{1}\) and \(j\geq\max\{j_{2},j_{3}\}+3k_{1}+2M_{1}\). **Case 1.2.1.**\(j\leq\max\{j_{2},j_{3}\}+3k_{1}+2M_{1}\) Without loss of generality, \(\max\{j_{2},j_{3}\}=j_{2}\) is assumed. 
Similarly to (5.21), one has that for \(N_{1}\geq 12\), \[\begin{split}&\sum_{\begin{subarray}{c}j_{1},j_{2},j_{3},k\geq-1, \\ j\geq\max\{j_{1},j_{2},j_{3}\}+O(1),\\ j\leq\max\{j_{2},j_{3}\}+3k_{3}+2M_{1}\end{subarray}}\sum_{\begin{subarray}{ c}(k_{1},k_{2},k_{3})\in\mathcal{Y}_{k},\\ k_{3}-O(1)\leq k_{1}\leq k_{3}\end{subarray}}2^{j\alpha+N_{1}k}\|I_{kk_{1}k_{2} k_{3}}^{jj_{1}j_{2}j_{3}}\mathbf{1}_{I_{in}}(t)\|_{L^{2}(\mathbb{R})}\\ &\lesssim\sum_{\begin{subarray}{c}j_{1},j_{2},j_{3}>-1,\\ k_{1},k_{2},k_{3}\geq-1,\\ |k_{1}-k_{3}|\leq O(1)\end{subarray}}\sum_{\begin{subarray}{c}j_{\leq 2}+3k_{1}+2M_{1} \\ k_{1},k_{2},k_{3}\geq-1,\\ k_{1}-k_{3}|\leq O(1)\end{subarray}}2^{k_{1}(N_{1}+7)+j\alpha}\|e^{it\Lambda}P _{[[k_{1}]]}\mathcal{Y}_{1}\|_{L^{\infty}}\|Q_{j_{2}}P_{k_{2}}V\|_{L^{2}}\|e^{ -it\Lambda}P_{[[k_{3}]]}\mathcal{Y}_{3}\|_{L^{\infty}}\\ &\lesssim(1+t)^{-2\alpha}\sum_{\begin{subarray}{c}j_{1},j_{2},j_{3} \geq-1,\\ k_{1},k_{2},k_{3}\geq-1,\\ |k_{1}-k_{3}|\leq O(1)\end{subarray}}2^{k_{1}(N_{1}+12)+(j_{1}+j_{2}+j_{3}) \alpha}\|Q_{j_{1}}P_{k_{1}}V\|_{L^{2}}\|Q_{j_{2}}P_{k_{2}}V\|_{L^{2}}\|Q_{j_{3} }P_{k_{3}}V\|_{L^{2}}\\ &\lesssim\varepsilon_{1}^{3}(1+t)^{-2\alpha}.\end{split} \tag{5.28}\] **Case 1.2.2.**\(j\geq\max\{j_{2},j_{3}\}+3k_{1}+2M_{1}\) In terms of \[\chi^{I}_{high}(\eta)+\chi^{I}_{low}(\eta)+\chi^{I}_{med}(\eta)=1,\] the Schwartz kernel \(K_{2}\) in (5.18) can be separated as \[\begin{split}& K_{2}=K^{I}_{high}+K^{I}_{low}+K^{I}_{med},\\ & K^{I}_{\Xi}=\iiint_{\mathbb{R}^{3}}\chi^{I}_{\Xi}(\eta)e^{i\Psi_ {2}}m_{++-}\psi_{k}(\xi)\psi_{[[k_{1}]]}(\xi-\eta)\psi_{[[k_{2}]]}(\eta-\xi) \psi_{[[k_{3}]]}(\zeta)d\xi d\eta d\zeta,\end{split} \tag{5.29}\] where \(\Xi\in\{high,low,med\}\). \((A_{1})\) **Estimates of \(K^{I}_{high}\) and \(K^{I}_{low}\)** Set \[\mathcal{L}_{2}=-i(\partial_{\xi}\Psi_{2})^{-1}\partial_{\xi},\qquad\mathcal{L} ^{*}_{2}=\partial_{\xi}\Big{(}\frac{i}{\partial_{\xi}\Psi_{2}}\Big{)}. \tag{5.30}\] Then \(\mathcal{L}_{2}e^{i\Psi_{2}}=e^{i\Psi_{2}}\). Collecting (5.23), (5.24), (5.27) with \(M_{1}>0\) large enough yields \[|\partial_{\xi}\Psi_{2}| \gtrsim\max\{2^{-3k_{1}}t|\eta|,2^{j}\}, \eta\in\operatorname{supp}\chi^{I}_{high}, \tag{5.31}\] \[|\partial_{\xi}\Psi_{2}| \gtrsim 2^{j}\gtrsim t|\eta|, \eta\in\operatorname{supp}\chi^{I}_{low}.\] On the other hand, for \(l\geq 2\), (5.22) implies \[|\partial_{\xi}^{I}\Psi_{2}|=|t\partial_{\xi}^{I}\Phi|=|t\eta\Lambda^{(l+1)}( \xi-\tilde{r}_{1}\eta)|\lesssim t|\eta|,\quad\tilde{r}_{1}\in[0,1]. \tag{5.32}\] Applying the method of stationary phase, we arrive at \[|K^{I}_{high}| =\Big{|}\iiint_{\mathbb{R}^{3}}\chi^{I}_{high}(\eta)\mathcal{L}_ {2}^{5}(e^{i\Psi_{2}})m_{++-}\psi_{k}(\xi)\psi_{[[k_{1}]]}(\xi-\eta)\psi_{[[ k_{2}]]}(\eta-\zeta)\psi_{[[k_{3}]]}(\zeta)d\xi d\eta d\zeta\Big{|} \tag{5.33}\] \[\lesssim\iiint_{\mathbb{R}^{3}}\chi^{I}_{high}(\eta)\Big{|}( \mathcal{L}_{2}^{*})^{5}[m_{++-}\psi_{k}(\xi)\psi_{[[k_{1}]]}(\xi-\eta)\psi_ {[[k_{2}]]}(\eta-\zeta)\psi_{[[k_{3}]]}(\zeta)]\Big{|}d\xi d\eta d\zeta.\] In view of (A.11), the worst term \((\mathcal{L}_{2}^{*})^{5}[\cdots]\) in (5.33) can be estimated by (5.31) and (5.32) as follows \[\frac{|\partial_{\xi}^{2}\Psi_{2}|^{5}}{(\partial_{\xi}\Psi_{2})^{10}}\lesssim \frac{t^{5}|\eta|^{5}}{(\partial_{\xi}\Psi_{2})^{10}}\lesssim 2^{21k_{1}-3j}t^{-2}\eta^{-2}, \eta\in\operatorname{supp}\chi^{I}_{high}. \tag{5.34}\] Note that \(\chi^{I}_{high}(\eta)\) vanishes in a neighbourhood of the origin. 
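More precisely, since \(\chi(s)=0\) for \(s\leq 1\), the definition (5.25) shows that \(\chi^{I}_{high}(\eta)=0\) for \(|\eta|\leq 2^{j+3k_{1}+M_{1}}t^{-1}\), while \(\chi'\big(t|\eta|2^{-j-3k_{1}-M_{1}}\big)\) is supported in a set of measure \(\lesssim 2^{j+3k_{1}}t^{-1}\). In particular,
\[\int_{\mathbb{R}}\Big{|}\chi'\Big{(}\frac{t|\eta|}{2^{j+3k_{1}+M_{1}}}\Big{)}\Big{|}d\eta\lesssim\frac{2^{j+3k_{1}}}{t},\]
which is the bound used in the last step of (5.35) below.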
Then it follows from the integration by parts in \(\eta\) and (5.26)-(5.27) that \[\Big{|}\iiint_{\mathbb{R}^{3}}\psi_{[[k_{1}]]}(\xi-\eta)\psi_{[[ k_{2}]]}(\eta-\zeta)\psi_{[[k_{3}]]}(\zeta)\chi^{I}_{high}(\eta)\eta^{-2}d\xi d \eta d\zeta\Big{|} \tag{5.35}\] \[= \Big{|}\iiint_{\mathbb{R}^{3}}\psi_{[[k_{1}]]}(\tilde{\xi})\psi_{ [[k_{2}]]}(\eta-\zeta)\psi_{[[k_{3}]]}(\zeta)\chi^{I}_{high}(\eta)\eta^{-2}d \tilde{\xi}d\eta d\zeta\Big{|}\] \[\lesssim 2^{k_{1}}\Big{|}\iint_{\mathbb{R}^{2}}\psi_{[[k_{2}]]}(\eta- \zeta)\psi_{[[k_{3}]]}(\zeta)\chi^{I}_{high}(\eta)d(-\eta^{-1})\ d\zeta\Big{|}\] \[= 2^{k_{1}}\Big{|}\iint_{\mathbb{R}^{2}}\partial_{\eta}(\psi_{[[k_{ 2}]]}(\eta-\zeta)\chi^{I}_{high}(\eta))\psi_{[[k_{3}]]}(\zeta)\eta^{-1}d\eta d\zeta \Big{|}\] \[\lesssim \frac{t}{2^{j+2k_{1}}}\Big{\{}2^{k_{3}}\int_{\mathbb{R}}|\partial _{\eta}(\chi^{I}_{high}(\eta))|d\eta+\iint_{\mathbb{R}^{2}}|\partial_{\eta}( \psi_{[[k_{2}]]}(\eta-\zeta))|\psi_{[[k_{3}]]}(\zeta)d\eta d\zeta\Big{\}}\] \[\lesssim \frac{t}{2^{j+2k_{1}}}\Big{\{}t2^{k_{3}-j-3k_{1}}\int_{\mathbb{R} }|\chi^{\prime}\Big{(}\frac{t|\eta|}{2^{j+3k_{1}+M_{1}}}\Big{)}|d\eta+\iint_{ \mathbb{R}^{2}}|\partial_{\eta}(\psi_{[[k_{2}]]}(\eta-\zeta))|\psi_{[[k_{3}]] }(\zeta)d\eta d\zeta\Big{\}}\] \[\lesssim \frac{t2^{k_{3}}}{2^{j+2k_{1}}}.\] This, together with (5.33), (5.34), (A.11) and the condition \(j\geq\max\{j_{1},j_{2},j_{3}\}+O(1)\), yields \[|K^{I}_{high}|\lesssim 2^{21k_{1}-2j/3}t^{-1}(1+|x-x_{1}|+|x-x_{2}|+|x-x_{3}|)^{-10 /3}. \tag{5.36}\] Next, we turn to the estimate of \(K^{I}_{low}\). For \(\eta\in\operatorname{supp}\chi^{I}_{low}\), one has \(|\eta|\lesssim 2^{j}t^{-1}\) and \(\frac{|\partial_{\xi}^{2}\Psi_{2}|^{5}}{(\partial_{\xi}\Psi_{2})^{10}} \lesssim 2^{-5j}\). Thus, we can get from (5.27) and (A.11) that \[|K^{I}_{low}| \lesssim\iiint_{\mathbb{R}^{3}}\chi^{I}_{low}(\eta)|(\mathcal{L}_{ 2}^{*})^{5}[m_{++-}\psi_{k}(\xi)\psi_{[[k_{1}]]}(\xi-\eta)\psi_{[[k_{2}]]}(\eta -\zeta)\psi_{[[k_{3}]]}(\zeta)]|d\xi d\eta d\zeta \tag{5.37}\] \[\lesssim 2^{k_{1}-5j}\sum_{l=0}^{5}\iiint_{\mathbb{R}^{3}}| \partial_{\xi}^{l}(\psi_{k}(\xi)\psi_{[[k_{1}]]}(\xi-\eta))|\psi_{[[k_{3}]]}( \zeta)\chi^{I}_{low}(\eta)d\xi d\eta d\zeta\] \[\lesssim 2^{3k_{1}-4j}t^{-1}\lesssim 2^{3k_{1}-2j/3}t^{-1}(1+|x-x_{1}|+|x-x _{2}|+|x-x_{3}|)^{-10/3}.\] \((B_{1})\) **Estimate of \(K^{I}_{med}\)** Set \[\tilde{\mathcal{L}}_{2}=-i(\partial_{\zeta}\Psi_{2})^{-1}\partial_{\zeta},\qquad \tilde{\mathcal{L}}_{2}^{*}=\partial_{\zeta}\Big{(}\frac{i\cdot}{\partial_{ \zeta}\Psi_{2}}\Big{)}. \tag{5.38}\] Then \(\tilde{\mathcal{L}}_{2}e^{i\Psi_{2}}=e^{i\Psi_{2}}\). 
The condition of \(j\geq\max\{j_{2},j_{3}\}+3k_{1}+2M_{1}\) and (5.23), (5.24), (5.27) with \(M_{1}>0\) large enough ensure that \[|\partial_{\zeta}\Psi_{2}|\gtrsim 2^{-3k_{1}}t|\eta|\gtrsim 2^{j-3k_{1}},\quad \eta\in\operatorname{supp}\chi^{I}_{med}.\] Note that analogously to (5.32)-(5.37), one has \[\begin{split}&|\partial_{\zeta}^{l}\Psi_{2}|=|t\eta\Lambda^{(l+ 1)}(\zeta-\tilde{r}_{2}\eta)|\lesssim t|\eta|,\quad\tilde{r}_{2}\in[0,1],\quad l \geq 2,\\ &\frac{|\partial_{\zeta}^{2}\Psi_{2}|^{l}}{|\partial_{\zeta} \Psi_{2}|^{l+5}}\lesssim\frac{t^{l}|\eta|^{l}}{(\partial_{\zeta}\Psi_{2})^{l+ 5}}\lesssim 2^{30k_{1}-5j},\quad l=0,\cdots,5,\end{split} \tag{5.39}\] and \[\begin{split}|K^{I}_{med}|&\lesssim\iiint_{\mathbb{ R}^{3}}\chi^{I}_{med}(\eta)|(\tilde{\mathcal{L}}_{2}^{*})^{5}[m_{++-}\psi_{k}( \xi)\psi_{[[k_{1}]]}(\xi-\eta)\psi_{[[k_{2}]]}(\eta-\zeta)\psi_{[[k_{3}]]}( \zeta)]|d\xi d\eta d\zeta\\ &\lesssim 2^{31k_{1}-5j}\sum_{l=0}^{5}\iiint_{\mathbb{R}^{3}}| \partial_{\zeta}^{l}(\psi_{[[k_{2}]]}(\eta-\zeta)\psi_{[[k_{3}]]}(\zeta))| \psi_{[[k_{1}]]}(\xi-\eta)\chi^{I}_{med}(\eta)d\xi d\eta d\zeta\\ &\lesssim 2^{36k_{1}-2j/3}t^{-1}(1+|x-x_{1}|+|x-x_{2}|+|x-x_{3}|)^{ -10/3}.\end{split} \tag{5.40}\] Thus, combining (5.29), (5.36), (5.37), (5.40) with \(2N\geq N_{1}+37\) implies \[\begin{split}&\sum_{\begin{subarray}{c}j,j_{1},j_{2},j_{3},k \geq-1,\\ j\geq\max\{j_{1},j_{2},j_{3}\}+O(1),\\ j\geq\max\{j_{2},j_{3}\}+3k_{3}+2M_{1}\end{subarray}}\sum_{ \begin{subarray}{c}(k_{1},k_{2},k_{3})\in\mathcal{Y}_{k},\\ k_{3}-O(1)\leq k_{1}\leq k_{3}\end{subarray}}2^{j\alpha+N_{1}k}\|I^{jj_{1}j_{2}j _{3}}_{kk_{1}k_{2}k_{3}}\mathbf{1}_{I_{in}}(t)\|_{L^{2}(\mathbb{R})}\\ &\lesssim\sum_{\begin{subarray}{c}j,k_{1},k_{2},k_{3}\geq-1,\\ |k_{1}-k_{3}|\leq O(1)\end{subarray}}2^{k_{1}(N_{1}+36)-j/6}t^{-1}(2+j)^{3}\|P _{k_{1}}V\|_{L^{2}}\|P_{k_{2}}V\|_{L^{2}}\|P_{k_{3}}V\|_{L^{2}}\\ &\lesssim\varepsilon_{1}^{3}(1+t)^{-1}.\end{split} \tag{5.41}\] Finally, collecting (5.21), (5.28) and (5.41) leads to \[\sum_{\begin{subarray}{c}j,j_{1},j_{2},j_{3},k\geq-1\\ k_{3}-O(1)\leq k_{1}\leq k_{3}\end{subarray}}2^{j\alpha+N_{1}k}\|I^{jj_{1}j_{2}j _{3}}_{kk_{1}k_{2}k_{3}}\mathbf{1}_{I_{in}}(t)\|_{L^{2}(\mathbb{R})} \lesssim\varepsilon_{1}^{3}(1+t)^{-2\alpha}, \tag{5.42}\] which finishes the proof of (5.9) for Case 1 and \(t\geq 1\). **Case 2.**\(k_{1}\geq k_{3}\) For \(\max\{k_{2},k_{3}\}\geq k_{1}-O(1)\), since the related treatment is analogous to that in Case 1, the related details are omitted. Next, we deal with the case of \(\max\{k_{2},k_{3}\}\leq k_{1}-O(1)\). At this time, \(|k-k_{1}|\leq O(1)\) and \(\operatorname{med}\{k_{1},k_{2},k_{3}\}=\max\{k_{2},k_{3}\}\) hold. Similarly to Case 1, we now analyze the critical points of \(\Psi_{2}\) in (5.18). If \(j\geq j_{1}+O(1)\), then one has \(|x-x_{1}|\approx 2^{j}\). On the other hand, it holds that \[|\eta|\leq|\zeta-\eta|+|\zeta|\lesssim 2^{\max\{k_{2},k_{3}\}}\ll|\xi|\approx 2^{k _{1}}.\] This, together with (5.22), yields \[|\partial_{\xi}\Phi|\approx 2^{-3k_{1}}|\eta|. \tag{5.43}\] In addition, (5.22) and \(|\zeta-r_{2}\eta|=|r_{2}(\zeta-\eta)+(1-r_{2})\zeta|\lesssim 2^{\max\{k_{2},k_{3}\}}\) show that \[2^{-3\max\{k_{2},k_{3}\}}|\eta|\lesssim|\partial_{\zeta}\Phi|\lesssim|\eta|. \tag{5.44}\] As in Case 1 with (5.43) and (5.44) instead of (5.23), we next discuss the frequency \(\eta\) so that the kernel \(K_{2}\) in (5.18) can be estimated. 
For the low frequency mode \(t|\eta|2^{-3k_{1}}\ll 2^{j}\), one has \(|\partial_{\xi}\Psi_{2}|\geq|x-x_{1}|-t|\partial_{\xi}\Phi|\), which implies that there is no critical point for \(\Psi_{2}\). For the high frequency mode \(t|\eta|2^{-3k_{1}}\gtrsim 2^{j}\), (5.44) shows that the critical points of \(\Psi_{2}\) are contained in the scope of \(\max\{j_{2},j_{3}\}\geq j+3k_{1}-3\max\{k_{2},k_{3}\}-O(1)\). Based on this, we write \[\begin{split}& K_{2}=K_{high}^{II}+K_{low}^{II},\\ & K_{high}^{II}=\iiint\!\!\!\int_{\mathbb{R}^{3}}\chi_{high}^{II}( \eta)e^{i\Psi_{2}}m_{++-}\psi_{k}(\xi)\psi_{[[k_{1}]]}(\xi-\eta)\psi_{[[k_{2} ]]}(\eta-\zeta)\psi_{[[k_{3}]]}(\zeta)d\xi d\eta d\zeta,\\ & K_{low}^{II}=\iiint\!\!\!\int_{\mathbb{R}^{3}}\chi_{low}^{II}( \eta)e^{i\Psi_{2}}m_{++-}\psi_{k}(\xi)\psi_{[[k_{1}]]}(\xi-\eta)\psi_{[[k_{2} ]]}(\eta-\zeta)\psi_{[[k_{3}]]}(\zeta)d\xi d\eta d\zeta,\end{split} \tag{5.45}\] where \[\chi_{high}^{II}(\eta)=\chi\Big{(}\frac{t|\eta|}{2^{j+3k_{1}-M_{2}}}\Big{)}, \quad\chi_{low}^{II}(\eta)=1-\chi\Big{(}\frac{t|\eta|}{2^{j+3k_{1}-M_{2}}} \Big{)},\] \(\chi\) is defined by (5.26), and \(M_{2}>0\) is a fixed and large enough number. Then one has \[\operatorname{supp}\chi_{high}^{II}\subset\{t|\eta|\geq 2^{j+3k_{1}-M_{2}}\}, \quad\operatorname{supp}\chi_{low}^{II}\subset\{t|\eta|\leq 2^{j+3k_{1}-M_{2}+1}\}. \tag{5.46}\] **Case 2.1.**\(j\geq j_{1}+O(1)\) **and**\(\max\{j_{2},j_{3}\}\leq j+3k_{1}-3\max\{k_{2},k_{3}\}-2M_{2}\) \((A_{2})\) **Estimate of \(K_{high}^{II}\)** For \(\eta\in\operatorname{supp}\chi_{high}^{II}\), the condition of \(\max\{j_{2},j_{3}\}\leq j+3k_{1}-3\max\{k_{2},k_{3}\}-2M_{2}\), (5.44) and (5.46) ensure \[t|\partial_{\zeta}\Phi|\gtrsim 2^{-3\max\{k_{2},k_{3}\}}t|\eta|\gtrsim 2^{j+3k_{1} -3\max\{k_{2},k_{3}\}-M_{2}}\gtrsim 2^{\max\{j_{2},j_{3}\}+M_{2}}.\] This, together with (5.24) and large \(M_{2}>0\), leads to \[|\partial_{\zeta}\Psi_{2}|\gtrsim t|\partial_{\zeta}\Phi|\gtrsim\max\{2^{-3 \max\{k_{2},k_{3}\}}t|\eta|,2^{j+3k_{1}-3\max\{k_{2},k_{3}\}}\},\quad\eta\in \operatorname{supp}\chi_{high}^{II}. \tag{5.47}\] On the other hand, one has \[1+|x-x_{1}|+|x-x_{2}|+|x-x_{3}|\lesssim 2^{\max\{j,j_{2},j_{3}\}}\lesssim 2^{j+3k _{1}-3\max\{k_{2},k_{3}\}}. \tag{5.48}\] It follows from the first line of (5.39) and (5.47) that \[\frac{|\partial_{\zeta}^{2}\Psi_{2}|^{5}}{(\partial_{\zeta}\Psi_{2})^{10}} \lesssim\frac{t^{5}|\eta|^{5}}{(\partial_{\zeta}\Psi_{2})^{10}}\lesssim 2^{30 \max\{k_{2},k_{3}\}-3j-9k_{1}}t^{-2}\eta^{-2}.\] As in Case 1.2.2, we can achieve \[\begin{split}|K^{II}_{high}|&\lesssim\iiint_{\mathbb{R}^{ 3}}\chi^{II}_{high}(\eta)|(\tilde{\mathcal{L}}_{2}^{*})^{5}[m_{++-}\psi_{k}( \xi)\psi_{[[k_{1}]]}(\xi-\eta)\psi_{[[k_{2}]]}(\eta-\zeta)\psi_{[[k_{3}]]}( \zeta)]|d\xi d\eta d\zeta\\ &\lesssim 2^{31\max\{k_{2},k_{3}\}-3j-9k_{1}}t^{-2}\sum_{l=0}^{5} \iiint_{\mathbb{R}^{3}}|\partial^{l}_{\zeta}(\psi_{[[k_{2}]]}(\eta-\zeta)\psi_{ [[k_{3}]]}(\zeta))|\\ &\hskip 113.811024pt\times\psi_{[[k_{1}]]}(\xi-\eta)\chi^{II}_{ high}(\eta)\eta^{-2}d\xi d\eta d\zeta,\\ &\lesssim 2^{32\max\{k_{2},k_{3}\}-4j-11k_{1}}t^{-1}\\ &\lesssim 2^{21\max\{k_{2},k_{3}\}-2j/3}t^{-1}(1+|x-x_{1}|+|x-x_{2}| +|x-x_{3}|)^{-10/3}.\end{split} \tag{5.49}\] where \(\tilde{\mathcal{L}}_{2}\) is defined by (5.38) and (5.48) is used. \((B_{2})\) **Estimate of \(K^{II}_{low}\)** By (5.24), (5.43) and (5.46), we have \[|\partial_{\xi}\Psi_{2}|\gtrsim\max\{2^{j},t|\eta|2^{-3k_{1}}\},\qquad\eta\in \operatorname{supp}\chi^{II}_{low}. 
\tag{5.50}\] In addition, one has from (5.22) and (5.46) that \[|\partial^{l}_{\xi}\Psi_{2}|=|t\partial^{l}_{\xi}\Phi|=|t\eta\Lambda^{(l+1)}( \xi-\tilde{r}_{1}\eta)|\lesssim 2^{-(l+2)k_{1}}t|\eta|\lesssim|\partial_{\xi} \Psi_{2}|,\quad l\geq 2. \tag{5.51}\] Based on (5.50)-(5.51), we conclude from (5.46) that \[\begin{split}|K^{II}_{low}|&\lesssim\iiint_{\mathbb{R }^{3}}\chi^{II}_{low}(\eta)|(\mathcal{L}_{2}^{*})^{5}[m_{++-}\psi_{k}(\xi)\psi_ {[[k_{1}]]}(\xi-\eta)\psi_{[[k_{2}]]}(\eta-\zeta)\psi_{[[k_{3}]]}(\zeta)]|d \xi d\eta d\zeta\\ &\lesssim 2^{\max\{k_{2},k_{3}\}-5j}\sum_{l=0}^{5}\iiint_{ \mathbb{R}^{3}}|\partial^{l}_{\xi}(\psi_{k}(\xi)\psi_{[[k_{1}]]}(\xi-\eta))| \psi_{[[k_{3}]]}(\zeta)\chi^{I}_{low}(\eta)d\xi d\eta d\zeta\\ &\lesssim 2^{2\max\{k_{2},k_{3}\}+4k_{1}-4j}t^{-1}\\ &\lesssim 2^{14k_{1}-2j/3}t^{-1}(1+|x-x_{1}|+|x-x_{2}|+|x-x_{3}|)^{-10/ 3},\end{split} \tag{5.52}\] where \(\mathcal{L}_{2}\) is defined by (5.30) and (5.48) is used. Combining (5.49) and (5.52) with \(N\geq N_{1}+15\) yields \[\begin{split}\sum_{\begin{subarray}{c}j,j_{1},j_{2},j_{3},k \geq-1,\\ j\geq j_{1}+O(1),\\ \max\{j_{2},j_{3}\}\leq j+3k_{1}-3\max\{k_{2},k_{3}\}-2M_{2}\end{subarray}} \sum_{\begin{subarray}{c}(k_{1},k_{2},k_{3})\in\mathcal{Y}_{k},\\ k_{1}\geq k_{3},\max\{k_{2},k_{3}\}\leq k_{1}-O(1)\end{subarray}}2^{j\alpha+N_{1} k}\|I^{jj_{1}j_{2}j_{3}}_{kk_{1}k_{2}k_{3}}\mathbf{1}_{I_{in}}(t)\|_{L^{2}( \mathbb{R})}\\ \lesssim\sum_{\begin{subarray}{c}j,k_{1},k_{2},k_{3}\geq-1\end{subarray}} 2^{k_{1}(N_{1}+14)+8\max\{k_{2},k_{3}\}-j/6}t^{-1}(5+j+k_{1})^{3}\|P_{k_{1}}V \|_{L^{2}}\|P_{k_{2}}V\|_{L^{2}}\|P_{k_{3}}V\|_{L^{2}}\\ \lesssim\varepsilon_{1}^{3}(1+t)^{-1}.\end{split} \tag{5.53}\] **Case 2.2.**\(j\leq j_{1}+O(1)\) **or**\(\max\{j_{2},j_{3}\}\geq j+3k_{1}-3\max\{k_{2},k_{3}\}-2M_{2}\) **Case 2.2.1.**\(\max\{j_{2},j_{3}\}\geq j+3k_{1}-3\max\{k_{2},k_{3}\}-2M_{2}\) Without loss of generality, \(\max\{j_{2},j_{3}\}=j_{2}\) is assumed. When \(\alpha=1/2\), by the assumption (5.2) of \(\|Q_{j_{2}}P_{k_{2}}V\|_{L^{2}}\), the produced factor \(2^{-j_{2}/2}\) will provide the number \(2^{-j/2}\) with an additional \(2^{-3k_{1}/2}\) regularity. This can compensate the loss of regularity which is caused by \(\|e^{it\Lambda}P_{[k_{1}]}\mathcal{V}_{1}\|_{L^{\infty}}\) and (2.3). 
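In explicit terms, the constraint of Case 2.2.1 together with \(\max\{j_{2},j_{3}\}=j_{2}\) gives \(j\leq j_{2}+3\max\{k_{2},k_{3}\}-3k_{1}+2M_{2}\), and hence
\[2^{j/2}\lesssim 2^{j_{2}/2}\,2^{\frac{3}{2}\max\{k_{2},k_{3}\}-\frac{3}{2}k_{1}},\]
so the weight \(2^{j_{2}/2}\) controlled by (5.2) indeed produces the factor \(2^{j/2}\) together with the extra gain \(2^{-3k_{1}/2}\), up to harmless powers of \(2^{\max\{k_{2},k_{3}\}}\).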
Similarly to (5.20) and (5.21), from (2.3) with \(\beta=1/2\), (5.2) and (A.7) with \(N_{1}\geq 10\), one has \[\sum_{\begin{subarray}{c}j,j_{1},j_{2},j_{3},k\geq-1,\\ \max\{j_{2},j_{3}\}\geq j+3k_{1}-3\max\{k_{2},k_{3}\}-2M_{2}\end{subarray}}\sum_ {\begin{subarray}{c}(k_{1},k_{2},k_{3})\in\mathcal{Y}_{k},\\ k_{1}\geq k_{3},\max\{k_{2},k_{3}\}\leq k_{1}-O(1)\end{subarray}}2^{j/2+N_{1}k }\|I_{kk_{1}k_{2}k_{3}}^{jj_{1}j_{2}j_{3}}\mathbf{1}_{I_{in}}(t)\|_{L^{2}( \mathbb{R})}\] \[\lesssim\sum_{\begin{subarray}{c}j_{1},j_{2},j_{3}\geq-1,\\ k_{1},k_{2},k_{3}\geq-1\end{subarray}}2^{17\max\{k_{2},k_{3}\}/2+k_{1}(N_{1}-3 /2)+j_{2}/2}\|e^{it\Lambda}P_{[[k_{1}]]}\mathcal{Y}_{1}\|_{L^{\infty}}\] \[\lesssim\sum_{\begin{subarray}{c}j_{1},j_{2},j_{3}\geq-1,\\ k_{1},k_{2},k_{3}\geq-1\end{subarray}}2^{17\max\{k_{2},k_{3}\}/2+k_{1}(N_{1}-3 /2)+j_{2}/2}\|e^{it\Lambda}P_{[[k_{1}]]}\mathcal{Y}_{1}\|_{L^{\infty}}\|e^{- it\Lambda}P_{[[k_{3}]]}\mathcal{Y}_{3}\|_{L^{\infty}}\|Q_{j_{2}}P_{k_{2}}V\|_{L^{2}}\] \[\lesssim\sum_{\begin{subarray}{c}j_{1},j_{2},j_{3}\geq-1,\\ k_{1},k_{2},k_{3}\geq-1\end{subarray}}2^{k_{1}N_{1}+10\max\{k_{2},k_{3}\}+(j_{1} +j_{2}+j_{3})/2}(1+t)^{-1}\|Q_{j_{1}}P_{k_{1}}V\|_{L^{2}}\|Q_{j_{2}}P_{k_{2}}V \|_{L^{2}}\|Q_{j_{3}}P_{k_{3}}V\|_{L^{2}}\] \[\lesssim\varepsilon_{1}^{3}(1+t)^{-1}. \tag{5.54}\] When \(\alpha\in(0,1/2)\), instead of (5.54), applying (2.3) to \(P_{k_{3}}V_{-}\) with \(\beta=\alpha\), (2.7) to \(P_{k_{1}}V\) with \(p=2/(1-2\alpha)\) and the Bernstein inequality to \(P_{[[k_{2}]]}Q_{j_{2}}P_{k_{2}}V\) leads to \[\sum_{\begin{subarray}{c}j,j_{1},j_{2},j_{3},k\geq-1,\\ \max\{j_{2},j_{3}\}\geq j+3k_{1}-3\max\{k_{2},k_{3}\}-2M_{2}\end{subarray}}\sum_ {\begin{subarray}{c}(k_{1},k_{2},k_{3})\in\mathcal{Y}_{k},\\ k_{1}\geq k_{3},\max\{k_{2},k_{3}\}\leq k_{1}-O(1)\end{subarray}}2^{j\alpha+N_{1 }k}\|I_{kk_{1}k_{2}k_{3}}^{jj_{1}j_{2}j_{3}}\mathbf{1}_{I_{in}}(t)\|_{L^{2}( \mathbb{R})}\] \[\lesssim\sum_{\begin{subarray}{c}j_{1},j_{2},j_{3}\geq-1,\\ k_{1},k_{2},k_{3}\geq-1\end{subarray}}\sum_{\begin{subarray}{c}j\leq j_{2}-3k_{ 1}+3\max\{k_{2},k_{3}\}+2M_{2}\end{subarray}}2^{7\max\{k_{2},k_{3}\}+k_{1}N _{1}+j\alpha}\|e^{it\Lambda}P_{[[k_{1}]]}Q_{j_{1}}P_{k_{1}}V\|_{L^{2/(1-2\alpha)}}\] \[\times\|e^{it\Lambda}P_{[[k_{2}]]}Q_{j_{2}}P_{k_{2}}V\|_{L^{1/ \alpha}}\|e^{-it\Lambda}P_{[[k_{3}]]}Q_{j_{3}}P_{k_{3}}V_{-}\|_{L^{\infty}}\] \[\lesssim\sum_{\begin{subarray}{c}j_{1},j_{2},j_{3}\geq-1,\\ k_{1},k_{2},k_{3}\geq-1\end{subarray}}2^{17\max\{k_{2},k_{3}\}/2+k_{1}(N_{1}-3 \alpha)+j_{2}\alpha+k_{2}/2}\|e^{it\Lambda}P_{[[k_{1}]]}Q_{j_{1}}P_{k_{1}}V\|_ {L^{2/(1-2\alpha)}} \tag{5.55}\] \[\times\|e^{it\Lambda}P_{[[k_{2}]]}Q_{j_{2}}P_{k_{2}}V\|_{L^{2}}\| e^{-it\Lambda}P_{[[k_{3}]]}Q_{j_{3}}P_{k_{3}}V_{-}\|_{L^{\infty}}\] \[\lesssim t^{-2\alpha}\sum_{\begin{subarray}{c}j_{1},j_{2},j_{3} \geq-1,\\ k_{1},k_{2},k_{3}\geq-1\end{subarray}}2^{k_{1}N_{1}+11\max\{k_{2},k_{3}\}+(j_{1} +j_{2}+j_{3})\alpha}\|Q_{j_{1}}P_{k_{1}}V\|_{L^{2}}\|Q_{j_{2}}P_{k_{2}}V\|_{L ^{2}}\|Q_{j_{3}}P_{k_{3}}V\|_{L^{2}}\] \[\lesssim\varepsilon_{1}^{3}(1+t)^{-2\alpha},\] where (5.2) is used. 
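As a quick check of the exponents in (5.55) (here we assume that (2.7) is the \(L^{p}\) dispersive estimate with decay rate \(t^{-(\frac{1}{2}-\frac{1}{p})}\)): the Hölder exponents satisfy
\[\frac{1-2\alpha}{2}+\alpha+0=\frac{1}{2},\]
so the product lies in \(L^{2}\), while the \(L^{2/(1-2\alpha)}\) bound on \(e^{it\Lambda}P_{[[k_{1}]]}Q_{j_{1}}P_{k_{1}}V\) and the \(L^{\infty}\) bound from (2.3) with \(\beta=\alpha\) each contribute a decay of order \(t^{-\alpha}\), which combine to the stated rate \(t^{-2\alpha}\).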
**Case 2.2.2.**\(j\leq j_{1}+O(1)\) Analogously to Case 2.2.1, by utilizing (2.3) with \(\beta=\alpha\), one can achieve \[\sum_{\begin{subarray}{c}j,j_{1},j_{2},j_{3},k\geq-1,\\ j\leq j_{1}+O(1)\end{subarray}}\sum_{\begin{subarray}{c}(k_{1},k_{2},k_{3})\in \mathcal{Y}_{k},\\ k_{1}\geq k_{3},\max\{k_{2},k_{3}\}\leq k_{1}-O(1)\end{subarray}}2^{j\alpha+N_{1 }k}\|I_{kk_{1}k_{2}k_{3}}^{jj_{1}j_{2}j_{3}}\mathbf{1}_{I_{in}}(t)\|_{L^{2}( \mathbb{R})}\] \[\lesssim\sum_{\begin{subarray}{c}j_{1},j_{2},j_{3}\geq-1,\\ k_{1},k_{2},k_{3}\geq-1\end{subarray}}\sum_{j\leq j_{1}+O(1)}2^{7\max\{k_{2},k_{ 3}\}+k_{1}N_{1}+j\alpha}\|Q_{j_{1}}P_{k_{1}}V\|_{L^{2}}\|e^{it\Lambda}P_{[[k_{2 }]]}\mathcal{V}_{2}\|_{L^{\infty}}\|e^{-it\Lambda}P_{[[k_{3}]]}\mathcal{V}_{3} \|_{L^{\infty}}\] \[\lesssim\sum_{\begin{subarray}{c}j_{1},j_{2},j_{3}\geq-1,\\ k_{1},k_{2},k_{3}\geq-1\end{subarray}}2^{7\max\{k_{2},k_{3}\}+k_{1}N_{1}+j_{1} \alpha}\|Q_{j_{1}}P_{k_{1}}V\|_{L^{2}}\|e^{it\Lambda}P_{[[k_{2}]]}\mathcal{V}_{ 2}\|_{L^{\infty}}\|e^{-it\Lambda}P_{[[k_{3}]]}\mathcal{V}_{3}\|_{L^{\infty}}\] \[\lesssim(1+t)^{-2\alpha}\sum_{\begin{subarray}{c}j_{1},j_{2},j_{3 }\geq-1,\\ k_{1},k_{2},k_{3}\geq-1\end{subarray}}2^{k_{1}N_{1}+10\max\{k_{2},k_{3}\}+(j_{1 }+j_{2}+j_{3})\alpha}\|Q_{j_{1}}P_{k_{1}}V\|_{L^{2}}\|Q_{j_{2}}P_{k_{2}}V\|_{L ^{2}}\|Q_{j_{3}}P_{k_{3}}V\|_{L^{2}}\] \[\lesssim\varepsilon_{1}^{3}(1+t)^{-2\alpha}. \tag{5.56}\] Collecting (5.42) and (5.53)-(5.56) implies (5.9) for \(t\geq 1\). At last, we turn to the proof of (5.9) for \(t\leq 1\). For \(t\leq 1\), note that \(j\leq\log_{2}(1+t)+O(1)\leq O(1)\). Then the related treatments are similar to those in Case 1.1 (5.21) and Case 2.2.2 (5.56), respectively. This completes the proof of (5.9). ### Estimates of the quartic and higher order nonlinearities **Lemma 5.4**.: _Under the bootstrap assumption (5.2), it holds that for \(\alpha\in(0,1/2]\) and \(t\geq 0\),_ \[\sum_{j,k\geq-1}2^{j\alpha+N_{1}k}\Big{(}\|Q_{j}\mathcal{Q}_{k}(t)\|_{L^{2}( \mathbb{R})}+\|Q_{j}P_{k}e^{-it\Lambda}\mathcal{N}_{4}^{I}(U)\|_{L^{2}(\mathbb{ R})}\Big{)}\lesssim\varepsilon_{1}^{4}(1+t)^{-2\alpha}. \tag{5.57}\] Proof.: Set \[\mathcal{Q}_{k}^{I}=\sum_{\begin{subarray}{c}(k_{1},k_{2},k_{3})\in\mathcal{V }_{k},\\ (\mu_{1},\mu_{2},\mu_{3})\in A_{\Phi}^{good}\end{subarray}}e^{-it\Lambda}P_{k}T _{\Phi_{\mu_{1}\mu_{2}\mu_{3}}^{-1}m_{\mu_{1}\mu_{2}\mu_{3}}}(P_{k_{1}}U_{\mu_ {1}},P_{k_{2}}U_{\mu_{2}},e^{it\mu_{3}\Lambda}P_{k_{3}}\partial_{t}V_{\mu_{3}}), \tag{5.58}\] which comes from the third term in the expression of \(\mathcal{Q}_{k}\). 
Substituting (3.10) into (5.58) yields \[\mathcal{Q}_{k}^{I}=\mathcal{Q}_{k}^{II}+\mathcal{N}_{5,k}(U), \tag{5.59}\] where \[\mathcal{Q}_{k}^{II}=\sum_{\begin{subarray}{c}(k_{1},k_{2},k_{3}) \in\mathcal{V}_{k},\\ (\mu_{1},\mu_{2},\mu_{3})\in A_{\Phi}^{good}\end{subarray}}\sum_{\begin{subarray} {c}(k_{4},k_{5})\in\mathcal{K}_{k_{3}},\\ \nu_{1},\nu_{2}=\pm\end{subarray}}e^{-it\Lambda}P_{k}T_{\Phi_{\mu_{1}\mu_{2}\mu_ {3}}^{-1}m_{\mu_{1}\mu_{2}\mu_{3}}}(P_{k_{1}}U_{\mu_{1}},P_{k_{2}}U_{\mu_{2}},\] \[P_{k_{3}}T_{a_{\mu_{3}\nu_{1}\nu_{2}}^{I}}(P_{k_{4}}U_{\nu_{1}}, P_{k_{5}}U_{\nu_{2}}))\] \[=\sum_{\begin{subarray}{c}(k_{1},k_{2},k_{3})\in\mathcal{V}_{k}, \\ (\mu_{1},\mu_{2},\mu_{3})\in A_{\Phi}^{good}\end{subarray}}\sum_{\begin{subarray} {c}(k_{4},k_{5})\in\mathcal{K}_{k_{3}},\\ \nu_{1},\nu_{2}=\pm\end{subarray}}\sum_{\begin{subarray}{c}(k_{4},k_{5})\in \mathcal{K}_{k_{3}},\\ \nu_{1},\nu_{2}=\pm\end{subarray}}e^{-it\Lambda}P_{k}T_{\Phi_{\mu_{1}\mu_{2}\mu_ {3}}^{-1}m_{\mu_{1}\mu_{2}\mu_{3}}}(P_{[[k_{1}]]}e^{it\mu_{1}\Lambda}Q_{j_{1}}P _{k_{1}}V_{\mu_{1}}, \tag{5.60}\] \[P_{[[k_{2}]]}e^{it\mu_{2}\Lambda}Q_{j_{2}}P_{k_{2}}V_{\mu_{2}},P_{k _{3}}T_{a_{\mu_{3}\nu_{1}\nu_{2}}^{I}}(P_{[[k_{4}]]}e^{it\nu_{1}\Lambda}Q_{j_{3}} P_{k_{4}}V_{\nu_{1}},P_{[[k_{5}]]}e^{it\nu_{2}\Lambda}Q_{j_{4}}P_{k_{5}}V_{\nu_{2}})),\] \[\mathcal{N}_{5,k}(U)=\sum_{\begin{subarray}{c}(k_{1},k_{2},k_{3})\in\mathcal{Y}_{k },\\ (\mu_{1},\mu_{2},\mu_{3})\in A^{good}_{\Phi}\end{subarray}}e^{-it\Lambda}P_{k}T_ {\Phi^{-1}_{\mu_{1}\mu_{2}\mu_{3}}m_{\mu_{1}\mu_{2}\mu_{3}}}(P_{k_{1}}U_{\mu_{ 1}},P_{k_{2}}U_{\mu_{2}},P_{k_{3}}\mathcal{N}_{3,\mu_{3}}(U)), \tag{5.61}\] and \(\mathcal{N}_{3,\mu_{3}}(U)\) is defined by (3.9). Let \[\mathcal{Q}_{q} :=Q_{j}P_{k}e^{-it\Lambda}T_{\Phi^{-1}_{\mu_{1}\mu_{2}\mu_{3}}m_{ \mu_{1}\mu_{2}\mu_{3}}}(P_{[[k_{1}]]}e^{it\mu_{1}\Lambda}\mathcal{V}_{1},P_{[[ k_{2}]]}e^{it\mu_{2}\Lambda}\mathcal{V}_{2},\] \[\qquad\qquad\qquad\qquad P_{k_{3}}T_{a^{I}_{\mu_{3}\nu_{1}\nu_{2} }}(P_{[[k_{4}]]}e^{it\nu_{1}\Lambda}\mathcal{V}_{3},P_{[[k_{5}]]}e^{it\nu_{2} \Lambda}\mathcal{V}_{4})), \tag{5.62}\] \[\mathcal{V}_{1} :=Q_{j_{1}}P_{k_{1}}V_{\mu_{1}},\,\mathcal{V}_{2}:=Q_{j_{2}}P_{k_ {2}}V_{\mu_{2}},\,\mathcal{V}_{3}:=Q_{j_{3}}P_{k_{4}}V_{\nu_{1}},\,\mathcal{V} _{4}:=Q_{j_{4}}P_{k_{5}}V_{\nu_{2}}.\] Analogous to the estimates in Lemmas 5.2 and 5.3 for the cubic nonlinearity \(\mathcal{C}_{k}(s)\), the proof of (5.57) will be also separated into two cases. **Case 1.**\(\max\{j,j_{1},j_{2},j_{3},j_{4}\}\leq\log_{2}(1+t)+O(1)\) Comparing to Lemma 5.3, the appeared factor \(2^{j\alpha}\) in this case can be controlled by the additional \((1+t)^{-\alpha}\) decay, which is produced by the quartic nonlinearity. In addition, due to \((k_{1},k_{2},k_{3})\in\mathcal{Y}_{k}\) and \((k_{4},k_{5})\in\mathcal{X}_{k_{3}}\), one can see that \(2^{k}\lesssim 2^{\max\{k_{1},k_{2},k_{3}\}}\) and \(2^{k_{3}}\lesssim 2^{\max\{k_{4},k_{5}\}}\) hold. Next we treat \(\mathcal{Q}_{q}\) according to the differences of frequencies. **Case 1.1.**\(\max\{k_{1},k_{2},k_{3}\}=k_{1}\) In this case, \(\mathrm{med}\{k_{1},k_{2},k_{3}\}=\max\{k_{2},k_{3}\}\). 
Applying (A.1b) and (A.8), one then has \[\|\mathcal{Q}_{q}\|_{L^{2}(\mathbb{R})} \lesssim 2^{8\max\{k_{2},k_{3}\}}\|\mathcal{V}_{1}\|_{L^{2}}\|P_{[[ k_{2}]]}e^{it\mu_{2}\Lambda}\mathcal{V}_{2}\|_{L^{\infty}}\|T_{a^{I}_{\mu_{3} \nu_{1}\nu_{2}}}(P_{[[k_{4}]]}e^{it\nu_{1}\Lambda}\mathcal{V}_{3},P_{[[k_{5}]]} e^{it\nu_{2}\Lambda}\mathcal{V}_{4})\|_{L^{\infty}}\] \[\lesssim 2^{8\max\{k_{2},k_{3}\}}\|\mathcal{V}_{1}\|_{L^{2}}\|P_{[[ k_{2}]]}e^{it\mu_{2}\Lambda}\mathcal{V}_{2}\|_{L^{\infty}}\|P_{[[k_{4}]]}e^{it\nu_{1} \Lambda}\mathcal{V}_{3}\|_{L^{\infty}}\|P_{[[k_{5}]]}e^{it\nu_{2}\Lambda} \mathcal{V}_{4}\|_{L^{\infty}}. \tag{5.63}\] Therefore, it can be deduced from (2.3) with \(\beta=\alpha\), (5.2) and (5.63) that \[\sum_{\begin{subarray}{c}(k_{1},k_{2},k_{3})\in\mathcal{Y}_{k},\\ (\mu_{1},\mu_{2},\mu_{3})\in A^{good}_{\Phi},\\ \max\{k_{1},k_{2},k_{3}\}=k_{1}\end{subarray}}\sum_{\begin{subarray}{c}(k_{4 },k_{5})\in\mathcal{X}_{k_{3}},\\ \nu_{1},\nu_{2}=\pm\end{subarray}}2^{k_{1}N_{1}}(1+t)^{\alpha}\|\mathcal{Q}_{q }\mathbf{1}_{I_{in4}}(t)\|_{L^{2}(\mathbb{R})}\] \[\lesssim\sum_{\begin{subarray}{c}(k_{1},k_{2},k_{3})\in\mathcal{Y }_{k},\\ (k_{4},k_{5})\in\mathcal{X}_{k_{3}},\\ j_{1},j_{2},j_{3},j_{4}\geq-1\end{subarray}}\sum_{\begin{subarray}{c}(k_{1}, \mu_{2},\mu_{3})\in A^{good}_{\Phi},\\ \nu_{1},\nu_{2}=\pm\end{subarray}}2^{k_{1}N_{1}}(1+t)^{\alpha}\|\mathcal{Q}_{q }\mathbf{1}_{I_{in4}}(t)\|_{L^{2}(\mathbb{R})}\] \[\lesssim(1+t)^{-2\alpha}\sum_{\begin{subarray}{c}k_{1},k_{2},k_{4 },k_{5}\geq-1,\\ j_{1},j_{2},j_{3},j_{4}\geq-1\end{subarray}}2^{k_{1}N_{1}+8\max\{k_{2},k_{4},k_{5 }\}+2(k_{2}+k_{4}+k_{5})+\alpha(j_{2}+j_{3}+j_{4})}\] \[\qquad\qquad\qquad\qquad\times\|Q_{j_{1}}P_{k_{1}}V\|_{L^{2}}\|Q_ {j_{2}}P_{k_{2}}V\|_{L^{2}}\|Q_{j_{3}}P_{k_{4}}V\|_{L^{2}}\|Q_{j_{4}}P_{k_{5}}V\| _{L^{2}}\] \[\lesssim\varepsilon_{1}^{4}(1+t)^{-2\alpha},\] where \(I_{in4}:=\{t\geq 0:\max\{j,j_{1},j_{2},j_{3},j_{4}\}\leq\log_{2}(1+t)+O(1)\}\). **Case 1.2.**\(\max\{k_{1},k_{2},k_{3}\}=k_{2}\) Since the related treatment is similar to that in Case 1.1, the details are omitted here. **Case 1.3**.: \(\max\{k_{1},k_{2},k_{3}\}=k_{3}\) In this case, \(\mathrm{med}\{k_{1},k_{2},k_{3}\}=\max\{k_{1},k_{2}\}\) holds. For convenience, assume \(\max\{k_{4},k_{5}\}=k_{5}\). 
Instead of (5.63), we have \[\|\mathcal{Q}_{q}\|_{L^{2}(\mathbb{R})} \lesssim 2^{8\max\{k_{1},k_{2}\}}\|\mathscr{V}_{1}\|_{L^{\infty}}\| \mathscr{V}_{2}\|_{L^{\infty}}\|T_{a^{I}_{\mu_{3}\nu_{1}\nu_{2}}}(P_{[[k_{4}]] }e^{it\nu_{1}\Lambda}\mathscr{V}_{3},P_{[[k_{5}]]}e^{it\nu_{2}\Lambda}\mathscr{ V}_{4})\|_{L^{2}}\] \[\lesssim 2^{8\max\{k_{1},k_{2}\}}\|P_{[[k_{1}]]}e^{it\mu_{1} \Lambda}\mathscr{V}_{1}\|_{L^{\infty}}\|P_{[[k_{2}]]}e^{it\mu_{2}\Lambda} \mathscr{V}_{2}\|_{L^{\infty}}\|P_{[[k_{4}]]}e^{it\nu_{1}\Lambda}\mathscr{V}_{ 3}\|_{L^{\infty}}\|\mathscr{V}_{4}\|_{L^{2}}.\] Analogously to (5.64), we can achieve \[\sum_{\begin{subarray}{c}(k_{1},k_{2},k_{3})\in\mathcal{Y}_{k}, \\ (\mu_{1},\mu_{2},\mu_{3})\in A^{good}_{\Phi},\\ \max\{k_{1},k_{2},k_{3}\}=k_{3}\end{subarray}}\sum_{\begin{subarray}{c}(k_{4},k_{5})\in\mathcal{X}_{k_{3}},\\ \mu_{1},\nu_{2}=\pm\end{subarray}}2^{j\alpha+N_{1}k}\|\mathcal{Q}_{q}\mathbf{1} _{I_{in4}}(t)\|_{L^{2}(\mathbb{R})} \tag{5.65}\] \[\lesssim(1+t)^{-2\alpha}\sum_{\begin{subarray}{c}k_{1},k_{2},k_{4 },k_{5}\geq-1,\\ j_{1},j_{2},j_{3},j_{4}\geq-1\end{subarray}}2^{k_{5}N_{1}+8\max\{k_{1},k_{2},k_ {4}\}+2(k_{1}+k_{2}+k_{4})+\alpha(j_{1}+j_{2}+j_{3})}\] \[\qquad\qquad\qquad\qquad\times\|Q_{j_{1}}P_{k_{1}}V\|_{L^{2}}\|Q_ {j_{2}}P_{k_{2}}V\|_{L^{2}}\|Q_{j_{3}}P_{k_{4}}V\|_{L^{2}}\|Q_{j_{4}}P_{k_{5}}V \|_{L^{2}}\] \[\lesssim\varepsilon_{1}^{4}(1+t)^{-2\alpha}.\] Collecting (5.64) and (5.65) yields \[\sum_{\begin{subarray}{c}(k_{1},k_{2},k_{3})\in\mathcal{Y}_{k}, \\ (\mu_{1},\mu_{2},\mu_{3})\in A^{good}_{\Phi}\end{subarray}}\sum_{\begin{subarray} {c}(k_{4},k_{5})\in\mathcal{X}_{k_{3}},\\ \nu_{1},\nu_{2}=\pm\end{subarray}}\sum_{\begin{subarray}{c}j_{1},j_{2},j_{3}, j_{4}\geq-1,\\ j,k\geq-1\end{subarray}}2^{j\alpha+N_{1}k}\|\mathcal{Q}_{q}\mathbf{1}_{I_{in4}}(t) \|_{L^{2}(\mathbb{R})}\lesssim\varepsilon_{1}^{4}(1+t)^{-2\alpha}. \tag{5.66}\] **Case 2**.: \(\max\{j,j_{1},j_{2},j_{3},j_{4}\}\geq\log_{2}(1+t)+O(1)\) As in Lemma 5.2, the related treatments will be separated into the following two cases. **Case 2.1**.: \(\max\limits_{l=1,2,3,4}|j-j_{l}|\leq O(1)\) In this case, one can take the treatment as in Case 1, where the only difference is that the appeared factor \(2^{j\alpha}\) can be absorbed by \(2^{j_{1}\alpha}\) in (5.64) or \(2^{j_{4}\alpha}\) in (5.65). Then we arrive at \[\sum_{\begin{subarray}{c}(k_{1},k_{2},k_{3})\in\mathcal{Y}_{k}, \\ (\mu_{1},\mu_{2},\mu_{3})\in A^{good}_{\Phi}\end{subarray}}\sum_{\begin{subarray} {c}(k_{4},k_{5})\in\mathcal{X}_{k_{3}},\\ \nu_{1},\nu_{2}=\pm\end{subarray}}\sum_{\begin{subarray}{c}j_{1},j_{2},j_{3}, j_{4}\geq-1,\\ \max_{l=1,2,3,4}|j-j_{l}|\leq O(1)\end{subarray}}\sum_{j,k\geq-1}2^{j\alpha+N_{1}k}\| \mathcal{Q}_{q}\mathbf{1}_{I_{out4}}(t)\|_{L^{2}(\mathbb{R})}\lesssim \varepsilon_{1}^{4}(1+t)^{-2\alpha}, \tag{5.67}\] where \(I_{out4}:=\{t\geq 0:\max\{j,j_{1},j_{2},j_{3},j_{4}\}\geq\log_{2}(1+t)+O(1)\}\). 
**Case 2.2**.: \(\max\limits_{l=1,2,3,4}|j-j_{l}|\geq O(1)\) Analogously to (5.10), \(I_{4}\) can be rewritten as \[\mathcal{Q}_{q}(t,x)=(2\pi)^{-4}\psi_{j}(x)\int_{\mathbb{R}^{4}}K_{4}(x-x_{1},x-x_{2},x-x_{3},x-x_{4})\mathscr{V}_{1}(t,x_{1})\mathscr{V}_{2}(t,x_{2}) \tag{5.68}\] \[\qquad\qquad\qquad\qquad\qquad\times\mathscr{V}_{3}(t,x_{3}) \mathscr{V}_{3}(t,x_{4})dx_{1}dx_{2}dx_{3}dx_{4},\] where \[\begin{split} K_{4}(x-x_{1},x-x_{2},x-x_{3},x-x_{4}):=\int_{\mathbb{R }^{4}}e^{i\Psi_{4}}m_{4}(\xi_{1},\xi_{2},\xi_{3},\xi_{4})d\xi_{1}d\xi_{2}d\xi_{ 3}d\xi_{4},\\ \Psi_{4}:=t(-\Lambda(\xi_{1}+\xi_{2}+\xi_{3}+\xi_{4})+\mu_{1} \Lambda(\xi_{1})+\mu_{2}\Lambda(\xi_{2})+\nu_{1}\Lambda(\xi_{3})+\nu_{2} \Lambda(\xi_{4}))\\ +\xi_{1}(x-x_{1})+\xi_{2}(x-x_{2})+\xi_{3}(x-x_{3})+\xi_{4}(x-x_ {4}),\\ m_{4}(\xi_{1},\xi_{2},\xi_{3},\xi_{4}):=(\Phi_{\mu_{1}\mu_{2}\mu_ {3}}^{-1}m_{\mu_{1}\mu_{2}\mu_{3}})(\xi_{1},\xi_{2},\xi_{3}+\xi_{4})a^{I}_{\mu _{3}\nu_{1}\nu_{2}}(\xi_{3},\xi_{4})\psi_{k}(\xi_{1}+\xi_{2}+\xi_{3}+\xi_{4})\\ \times\psi_{k_{3}}(\xi_{3}+\xi_{4})\psi_{[[k_{1}]]}(\xi_{1})\psi_ {[[k_{2}]]}(\xi_{2})\psi_{[[k_{4}]]}(\xi_{3})\psi_{[[k_{5}]]}(\xi_{4}).\end{split} \tag{5.69}\] Denote \[\mathcal{L}_{4}:=-i(\sum_{l=1}^{4}|\partial_{\xi_{l}}\Psi_{4}|^{2})^{-1}\sum_{l =1}^{4}\partial_{\xi_{l}}\Psi_{4}\partial_{\xi_{l}}.\] Then \(\mathcal{L}_{4}e^{i\Psi_{4}}=e^{i\Psi_{4}}\) holds and its adjoint operator \(\mathcal{L}_{4}^{*}\) is \[\mathcal{L}_{4}^{*}:=i\sum_{l=1}^{4}\partial_{\xi_{l}}\Big{(}\frac{\partial_{ \xi_{l}}\Psi_{4}\cdot}{\sum_{l=1}^{4}|\partial_{\xi_{l}}\Psi_{4}|^{2}}\Big{)}.\] The conditions \(\max\{j,j_{1},j_{2},j_{3},j_{4}\}\geq\log_{2}(1+t)+O(1)\) and \(\max_{l=1,2,3,4}|j-j_{l}|\geq O(1)\) show that when \(x\in\operatorname{supp}\psi_{j}\), \(x_{l}\in\operatorname{supp}\psi_{l}\), \(l=1,2,3,4\), it holds that \[\begin{split}|x-x_{1}|+|x-x_{2}|+|x-x_{3}|+|x-x_{4}|& \geq 2^{O(1)}(1+t),\\ |x-x_{1}|+|x-x_{2}|+|x-x_{3}|+|x-x_{4}|&\gtrsim 2^{\max\{j,j _{1},j_{2},j_{3},j_{4}\}}.\end{split}\] This, together with \(|\Lambda^{\prime}(y)|\leq 1\), leads to \[\begin{split}(\sum_{l=1}^{4}|\partial_{\xi_{l}}\Psi_{4}|^{2})^{1/ 2}&\gtrsim|x-x_{1}|+|x-x_{2}|+|x-x_{3}|+|x-x_{4}|\\ &\gtrsim\max\{1+t,2^{\max\{j,j_{1},j_{2},j_{3},j_{4}\}}\}.\end{split} \tag{5.70}\] On the other hand, one obtains from (2.12) and (5.68) that for \((\mu_{1},\mu_{2},\mu_{3})\in A^{good}_{\Phi}\), \[\begin{split}|\partial_{\xi_{1},\xi_{2},\xi_{3},\xi_{4}}^{l}\Phi _{\mu_{1}\mu_{2}\mu_{3}}^{-1}(\xi_{1},\xi_{2},\xi_{3}+\xi_{4})|& \lesssim 2^{(l+1)\max\{k_{1},k_{2},k_{4},k_{5}\}},\quad l\geq 0,\\ |\partial_{\xi_{1},\xi_{2},\xi_{3},\xi_{4}}^{l}\Psi_{4}|& \lesssim t,\quad l\geq 2,\end{split} \tag{5.71}\] where \(|\xi_{1}|\approx 2^{k_{1}}\), \(|\xi_{2}|\approx 2^{k_{2}}\), \(|\xi_{3}|\approx 2^{k_{4}}\) and \(|\xi_{4}|\approx 2^{k_{5}}\). Without loss of generality, \(\max\{k_{1},k_{2},k_{4},k_{5}\}=k_{1}\) is assumed. 
By the method of stationary phase and (5.68)-(5.71), (A.11), we have \[\begin{split}&|K_{4}(x-x_{1},x-x_{2},x-x_{3},x-x_{4})|\\ =&\Big{|}\int_{\mathbb{R}^{4}}\mathcal{L}_{4}^{8}(e^{i \Psi_{4}})m_{4}(\xi_{1},\xi_{2},\xi_{3},\xi_{4})d\xi_{1}d\xi_{2}d\xi_{3}d\xi_ {4}\Big{|}\\ \lesssim&\int_{\mathbb{R}^{4}}|(\mathcal{L}_{4}^{*}) ^{8}m_{4}(\xi_{1},\xi_{2},\xi_{3},\xi_{4})|d\xi_{1}d\xi_{2}d\xi_{3}d\xi_{4} \\ \lesssim& 2^{k_{1}+k_{2}+k_{4}+k_{5}+10\max\{k_{1},k_{2 },k_{4},k_{5}\}}\Big{(}1+\sum_{i=1}^{4}|x-x_{i}|\Big{)}^{-8}\\ \lesssim& 2^{11k_{1}+k_{2}+k_{4}+k_{5}-\max\{j,j_{1},j _{2},j_{3},j_{4}\}}(1+t)^{-2}\Big{(}1+\sum_{i=1}^{4}|x-x_{i}|\Big{)}^{-5}. \end{split}\] Similarly to (5.14), \[\|\mathcal{Q}_{q}(t)\|_{L^{2}(\mathbb{R})}\lesssim\varepsilon_{1}^{4}2^{(11-N)(k_{ 1}+k_{2}+k_{4}+k_{5})-5j/9-(j_{1}+j_{2}+j_{3}+j_{4})/9}(1+t)^{-2}.\] This, together with the condition \(N\geq N_{1}+12\), yields \[\sum_{\begin{subarray}{c}(k_{1},k_{2},k_{3})\in\mathcal{Y}_{k},\\ (\mu_{1},\mu_{2},\mu_{3})\in A^{good}_{\Phi}\end{subarray}}\sum_{\begin{subarray} {c}(k_{4},k_{5})\in\mathcal{X}_{k_{3}},\\ \nu_{1},\nu_{2}=\pm\end{subarray}}\sum_{\begin{subarray}{c}j_{1},j_{2},j_{3}, j_{4}\geq-1,\\ \max_{l=1,2,3,4}|j-j_{l}|\leq O(1)\end{subarray}}\sum_{j,k\geq-1}2^{j\alpha+N_{1} k}\|\mathcal{Q}_{q}\mathbf{1}_{I_{out}}(t)\|_{L^{2}(\mathbb{R})}\lesssim \varepsilon_{1}^{4}(1+t)^{-2}. \tag{5.72}\] Combining (5.60), (5.62), (5.66), (5.67) and (5.72) leads to \[\sum_{j,k\geq-1}2^{j\alpha+N_{1}k}\|Q_{j}\mathcal{Q}_{k}^{II}(t)\|_{L^{2}( \mathbb{R})}\lesssim\varepsilon_{1}^{4}(1+t)^{-2\alpha}. \tag{5.73}\] Note that the estimate (5.73) also holds for \(\mathcal{N}_{5,k}(U)\) defined by (5.61) with the first inequality of (A.7), here we omit the details. Thus, we achieve \[\sum_{j,k\geq-1}2^{j\alpha+N_{1}k}\|Q_{j}\mathcal{Q}_{k}^{I}(t)\|_{L^{2}( \mathbb{R})}\lesssim\varepsilon_{1}^{4}(1+t)^{-2\alpha}. \tag{5.74}\] With (A.26), one can get the estimate (5.74) for the other terms in \(\mathcal{Q}_{k}\). The estimate for \(P_{k}e^{-it\Lambda}\mathcal{N}_{4}^{I}(U)\) defined by (3.13) is the same. Therefore, the proof of (5.57) is completed. ### Estimates of the boundary term \(\mathcal{B}_{k}\) **Lemma 5.5**.: _Under the bootstrap assumption (5.2), it holds that for \(\alpha\in(0,1/2]\) and \(t\geq 0\),_ \[\sum_{j,k\geq-1}2^{j\alpha+N_{1}k}\|Q_{j}\mathcal{B}_{k}\|_{L^{2}(\mathbb{R})} \lesssim\varepsilon_{1}^{2}. \tag{5.75}\] Proof.: Denote \[\mathcal{B}_{k}^{I} :=-i\sum_{\mu_{1},\mu_{2}=\pm}e^{-is\Lambda}P_{k}T_{\Phi_{\mu_{1} \mu_{2}}^{-1}a_{\mu_{1}\mu_{2}}}(U_{\mu_{1}},U_{\mu_{2}})\Big{|}_{s=0}^{t}, \tag{5.76}\] \[\mathcal{B}_{k}^{II} :=-i\sum_{\begin{subarray}{c}(\mu_{1},\mu_{2},\mu_{3})\in A^{good }_{\Phi}\end{subarray}}\sum_{(k_{1},k_{2},k_{3})\in\mathcal{Y}_{k}}e^{-is \Lambda}P_{k}T_{\Phi_{\mu_{1}\mu_{2}\mu_{3}}^{-1}m_{\mu_{1}\mu_{2}\mu_{3}}}(P _{k_{1}}U_{\mu_{1}},P_{k_{2}}U_{\mu_{2}},P_{k_{3}}U_{\mu_{3}})\Big{|}_{s=0}^{t}\] \[\qquad-i\sum_{\begin{subarray}{c}(k_{1},k_{2},k_{3})\in\mathcal{Y }_{k},\\ \max\{k_{1},k_{2}\}\leq k_{3}-O(1)\end{subarray}}e^{-is\Lambda}P_{k}T_{\Phi_{+ -}^{-1}m_{++-}}(P_{k_{1}}U,P_{k_{2}}U,P_{k_{3}}U_{-})\Big{|}_{s=0}^{t}.\] Then \(\mathcal{B}_{k}=\mathcal{B}_{k}^{I}+\mathcal{B}_{k}^{II}\). Next we prove \[\sum_{j,k\geq-1}2^{j\alpha+N_{1}k}\|Q_{j}\mathcal{B}_{k}^{I}\|_{L^{2}(\mathbb{R })}\lesssim\varepsilon_{1}^{2}. 
\tag{5.77}\] By virtue of (2.4), one can find that \[\begin{split}& Q_{j}B_{k}^{I}=-i\sum_{j_{1},j_{2}\geq-1}\sum_{(k_{1}, k_{2})\in\mathcal{X}_{k}}\mathcal{B}_{kk_{1}k_{2}}^{jj_{1}j_{2}},\\ &\mathcal{B}_{kk_{1}k_{2}}^{jj_{1}j_{2}}:=Q_{j}P_{k}e^{-it\Lambda} T_{\Phi_{\mu\nu}^{-1}a_{\mu\nu}}(e^{it\mu\Lambda}P_{[[k_{1}]]}Q_{j_{1}}P_{k_{1}}V_{ \mu},e^{it\nu\Lambda}P_{[[k_{2}]]}Q_{j_{2}}P_{k_{2}}V_{\nu}).\end{split} \tag{5.78}\] The proof of (5.77) will be separated into two cases as in Lemma 5.4 and \(k_{1}\geq k_{2}\) is assumed. **Case 1.**\(\max\{j,j_{1},j_{2}\}\leq\log_{2}(1+t)+O(1)\) It can be concluded from (2.3), (5.2) and (A.1a) that \[\begin{split}&\sum_{j,k\geq-1}2^{j\alpha+N_{1}k}\|\sum_{ \begin{subarray}{c}j_{1},j_{2}\geq-1,\\ (k_{1},k_{2})\in\mathcal{X}_{k}\end{subarray}}\sum_{\begin{subarray}{c}\max \{j,j_{1},j_{2}\}\leq\log_{2}(1+t)+O(1),\\ \max\{|j-j_{1}|,|j-j_{2}|\}\leq O(1)\end{subarray}}\mathcal{B}_{kk_{1}k_{2}}^ {jj_{1}j_{2}}\|_{L^{2}}\\ &\lesssim\sum_{j_{1},j_{2},k_{1},k_{2}\geq-1}2^{N_{1}k_{1}+5k_{2}} (1+t)^{\alpha}\|Q_{j_{1}}P_{k_{1}}V\|_{L^{2}}\|e^{it\nu\Lambda}P_{[[k_{2}]]}Q_ {j_{2}}P_{k_{2}}V_{\nu}\|_{L^{\infty}}\\ &\lesssim\sum_{j_{1},j_{2},k_{1},k_{2}\geq-1}2^{N_{1}k_{1}+13k_{2} /2+j_{2}\alpha}\|Q_{j_{1}}P_{k_{1}}V\|_{L^{2}}\|Q_{j_{2}}P_{k_{2}}V\|_{L^{2}}\\ &\lesssim\!\!\varepsilon_{1}^{2}.\end{split} \tag{5.79}\] **Case 2.**\(\max\{j,j_{1},j_{2}\}\geq\log_{2}(1+t)+O(1)\) **Case 2.1.**\(\max\{|j-j_{1}|,|j-j_{2}|\}\leq O(1)\) By the Bernstein inequality, (5.2) and (A.1a), one has that \[\begin{split}&\sum_{j,k\geq-1}2^{j\alpha+N_{1}k}\|\sum_{ \begin{subarray}{c}j_{1},j_{2}\geq-1,\\ (k_{1},k_{2})\in\mathcal{X}_{k}\end{subarray}}\sum_{\begin{subarray}{c}\max \{j,j_{1},j_{2}\}\geq\log_{2}(1+t)+O(1),\\ \max\{|j-j_{1}|,|j-j_{2}|\}\leq O(1)\end{subarray}}\mathcal{B}_{kk_{1}k_{2}}^ {jj_{1}j_{2}}\|_{L^{2}}\\ &\lesssim\!\!\sum_{j_{1},k_{1},k_{2}\geq-1}2^{j_{1}\alpha+N_{1}k_{ 1}+5k_{2}}\|Q_{j_{1}}P_{k_{1}}V_{\mu}\|_{L^{2}}\|e^{it\nu\Lambda}P_{[[k_{2}]]}Q_ {j_{2}}P_{k_{2}}V_{\nu}\|_{L^{\infty}}\\ &\lesssim\!\!\varepsilon_{1}^{2}.\end{split} \tag{5.80}\] **Case 2.2.**\(\max\{|j-j_{1}|,|j-j_{2}|\}\geq O(1)\) It is noted that \(\mathcal{B}_{kk_{1}k_{2}}^{jj_{1}j_{2}}\) can be rewritten as \[\begin{split}&\mathcal{B}_{kk_{1}k_{2}}^{jj_{1}j_{2}}(t,x)=(2\pi)^ {-2}\psi_{j}(x)\iint_{\mathbb{R}^{2}}K_{5}(x-x_{1},x-x_{2})Q_{j_{1}}P_{k_{1}}V_{ \mu}(t,x_{1})Q_{j_{2}}P_{k_{2}}V_{\nu}(t,x_{2})dx_{1}dx_{2},\\ & K_{5}(x-x_{1},x-x_{2}):=\iint_{\mathbb{R}^{2}}e^{i\Psi_{5}}( \Phi_{\mu\nu}^{-1}a_{\mu\nu})(\xi_{1},\xi_{2})\psi_{k}(\xi_{1}+\xi_{2})\psi_{[ [k_{1}]]}(\xi_{1})\psi_{[[k_{2}]]}(\xi_{2})d\xi_{1}d\xi_{2},\\ &\Psi_{5}:=t(-\Lambda(\xi_{1}+\xi_{2})+\mu\Lambda(\xi_{1})+\nu \Lambda(\xi_{2}))+\xi_{1}(x-x_{1})+\xi_{2}(x-x_{2}).\end{split}\] By (2.10), (2.11) and (3.5), we have \[|\partial_{\xi_{1},\xi_{2}}^{l}(\Phi_{\mu\nu}^{-1}a_{\mu\nu})|\lesssim 2^{k_{2}}, \qquad l\geq 0,\] where \(|\xi_{1}|\approx 2^{k_{1}}\) and \(|\xi_{2}|\approx 2^{k_{2}}\). 
When \(\max\{j,j_{1},j_{2}\}\geq\log_{2}(1+t)+O(1)\) and \(\max\{|j-j_{1}|,|j-j_{2}|\}\geq O(1)\), for \(x\in\operatorname{supp}\psi_{j}\), \(x_{1}\in\operatorname{supp}\psi_{j_{1}}\) and \(x_{2}\in\operatorname{supp}\psi_{j_{2}}\), one can see that \[|x-x_{1}|+|x-x_{2}|\geq 2^{O(1)}(1+t),\qquad|x-x_{1}|+|x-x_{2}|\gtrsim 2^{\max\{j,j_{1}, j_{2}\}}.\] This ensures \[|\partial_{\xi_{1}}\Psi_{5}|+|\partial_{\xi_{2}}\Psi_{5}|\gtrsim|x-x_{1}|+|x-x_{2} |\gtrsim\max\{1+t,2^{\max\{j,j_{1},j_{2}\}}\}.\] Let \[\mathcal{L}_{5} :=-i(|\partial_{\xi_{1}}\Psi_{5}|^{2}+|\partial_{\xi_{2}}\Psi_{5} |^{2})^{-1}(\partial_{\xi_{1}}\Psi_{5}\partial_{\xi_{1}}+\partial_{\xi_{2}} \Psi_{5}\partial_{\xi_{2}}),\] \[\mathcal{L}_{5}^{*} :=i\sum_{l=1}^{2}\partial_{\xi_{l}}\Big{(}\frac{\partial_{\xi_{l }}\Psi_{5}}{|\partial_{\xi_{1}}\Psi_{5}|^{2}+|\partial_{\xi_{2}}\Psi_{5}|^{2}} \Big{)}.\] Then \(L_{5}e^{i\Psi_{5}}=e^{i\Psi_{5}}\). It follows from the method of stationary phase that \[|K_{5}(x-x_{1},x-x_{2})|\] \[=\Big{|}\iint_{\mathbb{R}^{2}}\mathcal{L}_{5}^{4}(e^{i\Psi_{5}}) (\Phi_{\mu\nu}^{-1}a_{\mu\nu})(\xi_{1},\xi_{2})\psi_{k}(\xi_{1}+\xi_{2})\psi_{ [[k_{1}]]}(\xi_{1})\psi_{[[k_{2}]]}(\xi_{2})d\xi_{1}d\xi_{2}\Big{|}\] \[\lesssim\iint_{\mathbb{R}^{2}}|(\mathcal{L}_{5}^{*})^{4}[(\Phi_{ \mu\nu}^{-1}a_{\mu\nu})(\xi_{1},\xi_{2})\psi_{k}(\xi_{1}+\xi_{2})\psi_{[[k_{1} ]]}(\xi_{1})\psi_{[[k_{2}]]}(\xi_{2})]|d\xi_{1}d\xi_{2}\] \[\lesssim 2^{k_{1}+2k_{2}-\max\{j,j_{1},j_{2}\}}(1+|x-x_{1}|+|x-x_{ 2}|)^{-3}.\] This, together with the Holder inequality (2.13), the Bernstein inequality and (5.2), leads to \[\|\mathcal{B}_{kk_{1}k_{2}}^{jj_{1}j_{2}}(t)\|_{L^{2}} \lesssim 2^{k_{1}+2k_{2}-\max\{j,j_{1},j_{2}\}}\|P_{k_{1}}V_{\mu}\|_ {L^{2}}\|P_{k_{2}}V_{\nu}\|_{L^{\infty}}\] \[\lesssim 2^{(k_{1}+k_{2})(3-N)-\max\{j,j_{1},j_{2}\}}\varepsilon_{1 }^{2}.\] Therefore, \[\sum_{j,k\geq-1}2^{j\alpha+N_{1}k}\|\sum_{\begin{subarray}{c}j_{1},j_{2}>-1, \\ (k_{1},k_{2})\in\mathcal{X}_{k}\end{subarray}}\sum_{\begin{subarray}{c}\max\{j,j_{1},j_{2}\}\geq\log_{2}(1+t)+O(1),\\ \max\{|j-j_{1}|,|j-j_{2}\}\geq O(1)\end{subarray}}\mathcal{B}_{kk_{1}k_{2}}^ {jj_{1}j_{2}}\|_{L^{2}}\lesssim\varepsilon_{1}^{2}. \tag{5.81}\] Substituting (5.79)-(5.81) into (5.78) derives (5.77). The estimate (5.77) also holds for \(\mathcal{B}_{k}^{II}\). Thus, (5.75) is proved. ## 6 Proofs of Theorem 1.1 and Corollaries 1.2 and 1.3 Proof of Theorem 1.1.: Suppose that the bootstrap assumption (5.1) holds for \(\alpha\in(0,1/2]\) and \(t\in[0,T_{\alpha,\varepsilon}]\). Next we show that the upper bound \(\varepsilon_{1}\) can be improved to \(\frac{3}{4}\varepsilon_{1}\) in (5.1). At first, we deal with \(\|V(t)\|_{H^{N}(\mathbb{R})}=\|U(t)\|_{H^{N}(\mathbb{R})}\). It can be concluded from (2.3) with \(\beta=\alpha\) and (5.2) that \[\|U(s)\|_{W^{1,\infty}}+\sum_{k\geq-1}2^{k(7+1/4)}\|P_{k}U(s)\|_{L ^{\infty}}\] \[\lesssim\sum_{j,k\geq-1}2^{k(7+1/4)}\|P_{[k-1,k+1]}e^{-is\Lambda} Q_{j}P_{k}V(s)\|_{L^{\infty}}\] \[\lesssim(1+s)^{-\alpha}\sum_{j,k\geq-1}2^{k(8+3/4)+\alpha j}\|Q_{j} P_{k}V(s)\|_{L^{2}}\] \[\lesssim\varepsilon_{1}(1+s)^{-\alpha}.\] This, together with (1.4), (4.1) and (5.2), yields that for \(t\in[0,T_{\alpha,\varepsilon}]\), \[\|U(t)\|_{H^{N}(\mathbb{R})}\lesssim\begin{cases}\varepsilon+\varepsilon_{1}^{2} +\varepsilon_{1}^{3}\ln(1+t),&\alpha=1/2,\\ \varepsilon+\varepsilon_{1}^{2}+\varepsilon_{1}^{3}t^{1-2\alpha},&\alpha\in( 0,1/2).\end{cases}\] We now turn to the estimate of \(\|V(t)\|_{Z_{\alpha}}\). 
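The two time regimes in the above \(H^{N}\) bound, and in the \(Z_{\alpha}\) bound below, essentially reflect the elementary estimate
\[\int_{0}^{t}(1+s)^{-2\alpha}ds\lesssim\begin{cases}\ln(1+t),&\alpha=1/2,\\ (1+t)^{1-2\alpha},&\alpha\in(0,1/2),\end{cases}\]
with an implicit constant that may depend on \(\alpha\), applied to contributions decaying like \((1+s)^{-2\alpha}\); this is the source of the logarithmic growth at \(\alpha=1/2\) and of the power \(t^{1-2\alpha}\) for \(\alpha\in(0,1/2)\).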
Note that for \(t\in[0,T_{\alpha,\varepsilon}]\), (1.4), (5.3), Lemmas 5.1, 5.4 and 5.5 show \[\|V(t)\|_{Z_{\alpha}}\lesssim\begin{cases}\varepsilon+\varepsilon_{1}^{2}+ \varepsilon_{1}^{3}\ln(1+t),&\alpha=1/2,\\ \varepsilon+\varepsilon_{1}^{2}+\varepsilon_{1}^{3}t^{1-2\alpha},&\alpha\in( 0,1/2).\end{cases}\] Thus, there is a constant \(C_{1}\geq 1\) such that for \(t\in[0,T_{\alpha,\varepsilon}]\), \[\|V(t)\|_{H^{N}(\mathbb{R})}+\|V(t)\|_{Z_{\alpha}}\leq\begin{cases}C_{1}( \varepsilon+\varepsilon_{1}^{2}+\varepsilon_{1}^{3}\ln(1+t)),&\alpha=1/2,\\ C_{1}(\varepsilon+\varepsilon_{1}^{2}+\varepsilon_{1}^{3}t^{1-2\alpha}),& \alpha\in(0,1/2).\end{cases} \tag{6.1}\] Choosing \(\varepsilon_{1}=4C_{1}\varepsilon\), \(\varepsilon_{0}=\frac{1}{16C_{1}^{2}}\) and \[\kappa_{0}=\begin{cases}\frac{1}{64C_{1}^{3}},&\alpha=1/2,\\ \frac{1}{(64C_{1}^{3})^{\frac{1}{1-2\alpha}}},&\alpha\in(0,1/2),\end{cases}\] then (6.1) shows that for \(t\in[0,T_{\alpha,\varepsilon}]\), \[\|V(t)\|_{H^{N}(\mathbb{R})}+\|V(t)\|_{Z_{\alpha}}\leq\frac{1}{4}\varepsilon_ {1}+\frac{1}{4}\varepsilon_{1}+\frac{1}{4}\varepsilon_{1}=\frac{3}{4} \varepsilon_{1}. \tag{6.2}\] This, together with the local existence of classical solution to (1.3) and Proposition 4.2, yields that (1.3) admits a unique classical solution \(u\in C([0,T_{\alpha,\varepsilon}],H^{N+1}(\mathbb{R}))\cap C^{1}([0,T_{ \alpha,\varepsilon}],H^{N}(\mathbb{R}))\). Moreover, (1.6) is a result of (2.3), (3.1) and (6.2). Proof of Corollary 1.2.: At first, we consider the case of \(\beta\in(1/2,1]\) and compute \(\|(\Lambda u_{0},u_{1})\|_{Z_{1/2}}\). For any \(\beta\in(1/2,1]\) and function \(f\), one obtains from (2.1) that \[\|f\|_{Z_{1/2}} =\sum_{j,k\geq-1}2^{j(1/2-\beta)}2^{j\beta+12k}\|Q_{j}P_{k}f\|_{L^ {2}}\] \[\lesssim\sum_{k\geq-1}2^{12k}\Big{(}\sum_{j\geq-1}2^{j(1-2\beta)} \Big{)}^{1/2}\|2^{j\beta}\|Q_{j}P_{k}f\|_{L^{2}}\|_{\ell_{j}^{2}}.\] The fact of \(\|2^{j\beta}\|Q_{j}g\|_{L^{2}}\|_{\ell_{j}^{2}}\approx\|\langle x\rangle^{ \beta}g\|_{L^{2}}\) leads to \[\|f\|_{Z_{1/2}} \lesssim\frac{1}{\sqrt{1-2^{1-2\beta}}}\sum_{k\geq-1}2^{12k}\| \langle x\rangle^{\beta}P_{k}f\|_{L^{2}} \tag{6.3}\] \[\lesssim\frac{1}{\sqrt{2\beta-1}}\sum_{k\geq-1}2^{12k}\|\langle x \rangle^{\beta}P_{k}\Lambda^{-14}\Lambda^{14}f\|_{L^{2}}.\] Note that \[\begin{split}(P_{k}\Lambda^{-14}g)(x)&=\int_{\mathbb{R}} \mathcal{K}(x-y)g(y)dy,\\ \mathcal{K}(x-y)&=\frac{1}{2\pi}\int_{\mathbb{R}}e^{ i\xi(x-y)}\frac{\psi_{k}(\xi)}{(1+\xi^{2})^{7}}d\xi.\end{split} \tag{6.4}\] It follows from the stationary method that \[|\mathcal{K}(x-y)|\lesssim 2^{-13k}(1+2^{k}|x-y|)^{-3}.\] This, together with (6.3), (6.4) and Young's inequality, derives that \[\begin{split}\|f\|_{Z_{1/2}}&\lesssim\frac{1}{\sqrt{ 2\beta-1}}\sum_{k\geq-1}2^{12k}\Big{\|}\int_{\mathbb{R}}\langle x-y\rangle^{ \beta}|\mathcal{K}(x-y)|\langle y\rangle^{\beta}|(\Lambda^{14}f)(y)|dy\Big{\|} _{L^{2}_{x}}\\ \lesssim&\frac{1}{\sqrt{2\beta-1}}\sum_{k\geq-1}2^{ 12k}\|\langle\cdot\rangle^{\beta}\mathcal{K}(\cdot)\|_{L^{1}(\mathbb{R})}\| \langle x\rangle^{\beta}\Lambda^{14}f\|_{L^{2}_{x}}\\ \lesssim&\frac{1}{\sqrt{2\beta-1}}\|\langle x \rangle^{\beta}\Lambda^{14}f\|_{L^{2}_{x}}.\end{split}\] Hence, there is a positive constant \(C_{2}>0\) such that \[\varepsilon=\|u_{0}\|_{H^{N+1}(\mathbb{R})}+\|u_{1}\|_{H^{N}(\mathbb{R})}+\|( \Lambda u_{0},u_{1})\|_{Z_{1/2}}\leq\frac{C_{2}\epsilon}{\sqrt{2\beta-1}},\] which yields \[T_{1/2,\varepsilon}=e^{\kappa_{0}/\varepsilon^{2}}-1\geq e^{\frac{\kappa_{0}( 2\beta-1)}{C_{2}^{2}\epsilon^{2}}}-1.\] Choosing 
\(\epsilon_{1}=\frac{\varepsilon_{0}\sqrt{2\beta-1}}{C_{2}}\) and \(\kappa_{1}=\frac{\kappa_{0}(2\beta-1)}{C_{2}^{2}}\). For \(\epsilon\leq\epsilon_{1}\), (1.3) admits a unique classical solution \(u\in C([0,e^{\kappa_{1}/\epsilon^{2}}-1],H^{N+1}(\mathbb{R}))\cap C^{1}([0,e^{ \kappa_{1}/\epsilon^{2}}-1],H^{N}(\mathbb{R}))\). If \(\beta>1\), one can find that \(\|\langle x\rangle\Lambda^{14}f\|_{L^{2}}\leq\|\langle x\rangle^{\beta} \Lambda^{14}f\|_{L^{2}}\) and further Corollary 1.2 holds. Proof of Corollary 1.3.: Similarly to the proof of (6.3), it holds that for any \(\beta\in(0,1/2)\), \[\|f\|_{Z_{\beta}}\lesssim\frac{1}{\sqrt{1-2\beta}}\|\langle x\rangle^{\frac{ 1}{2}}\Lambda^{14}f\|_{L^{2}}.\] Note that there is a positive constant \(C_{3}\) such that \[\varepsilon=\|u_{0}\|_{H^{N+1}(\mathbb{R})}+\|u_{1}\|_{H^{N}(\mathbb{R})}+\|( \Lambda u_{0},u_{1})\|_{Z_{\beta}}\leq\frac{C_{3}\epsilon}{\sqrt{1-2\beta}},\] which yields \[T_{\beta,\varepsilon}=\frac{\kappa_{0}}{\varepsilon^{\frac{2}{1-2\beta}}} \geq\frac{\kappa_{0}(1-2\beta)^{\frac{1}{1-2\beta}}}{(C_{3}\epsilon)^{\frac{ 2}{1-2\beta}}}.\] Since there exists \(\beta\in(0,1/2)\) such that \(\beta\geq 1/2-\frac{1}{M+1}\), then by the choice of \(\epsilon_{2}=\min\{\frac{\varepsilon_{0}\sqrt{1-2\beta}}{C_{3}},\frac{\kappa_ {0}(1-2\beta)^{\frac{1}{1-2\beta}}}{(C_{3})^{\frac{2}{1-2\beta}}}\}\) and for \(\epsilon\leq\epsilon_{2}\), (1.3) admits a unique classical solution \(u\in C([0,\epsilon^{-M}],H^{N+1}(\mathbb{R}))\cap C^{1}([0,\epsilon^{-M}],H^{ N}(\mathbb{R}))\). Estimates of multi-linear Fourier multipliers **Lemma A.1**.: _Suppose that \(T_{m_{2}}(f,g)\) is defined by (3.2) with functions \(f,g\) on \(\mathbb{R}\). For any \(k_{1},k_{2}\geq-1\) and \(p,q,r\in[1,\infty]\) satisfying \(1/p=1/q+1/r\), it holds that_ \[\|T_{\Phi_{\mu_{1}\mu_{2}}^{-1}a_{\mu_{1}\mu_{2}}}(P_{k_{1}}f,P_{k _{2}}g)\|_{L^{p}(\mathbb{R})}\lesssim 2^{5\min\{k_{1},k_{2}\}}\|P_{k_{1}}f\|_{L^{q}( \mathbb{R})}\|P_{k_{2}}g\|_{L^{r}(\mathbb{R})},\] (A.1a) \[\|T_{a_{\mu_{1}\mu_{2}}}(P_{k_{1}}f,P_{k_{2}}g)\|_{L^{p}(\mathbb{ R})}+\|T_{a_{\sigma\mu_{1}\mu_{2}}}(P_{k_{1}}f,P_{k_{2}}g)\|_{L^{p}( \mathbb{R})}\lesssim\|P_{k_{1}}f\|_{L^{q}(\mathbb{R})}\|P_{k_{2}}g\|_{L^{r}( \mathbb{R})},\] (A.1b) _where \(\Phi_{\mu_{1}\mu_{2}}\), \(a_{\mu_{1}\mu_{2}}\) and \(a_{\sigma\mu_{1}\mu_{2}}\) are defined by (2.9), (3.5) and (3.11), respectively._ Proof.: According to (2.4) and the definition of the multi-linear pseudoproduct operator (3.2), we have \[\begin{split}& T_{m_{2}}(P_{k_{1}}f,P_{k_{2}}g)(x)=(2\pi)^{-2} \iint_{\mathbb{R}^{2}}\mathcal{K}(x-y,x-z)P_{k_{1}}f(y)P_{k_{2}}g(z)dydz,\\ &\mathcal{K}(y,z)=\iint_{\mathbb{R}^{2}}e^{i(y\xi_{1}+z\xi_{2})}m _{2}(\xi_{1},\xi_{2})\psi_{k_{1}k_{2}}(\xi_{1},\xi_{2})d\xi_{1}d\xi_{2},\\ &\psi_{k_{1}k_{2}}(\xi_{1},\xi_{2}):=\psi_{[k_{1}-1,k_{1}+1]}(\xi _{1})\psi_{[k_{2}-1,k_{2}-1]}(\xi_{2}).\end{split}\] (A.2) As in Lemma 3.3 of [5], the \(L^{1}\) norm of the Schwartz kernel \(\mathcal{K}(y,z)\) can be bounded by \[\begin{split}&\|\mathcal{K}(y,z)\|_{L^{1}(\mathbb{R}^{2})}\lesssim \|(1+|2^{k_{1}}y|+|2^{k_{2}}z|)^{2}\mathcal{K}(y,z)\|_{L^{2}(\mathbb{R}^{2})} \|(1+|2^{k_{1}}y|+|2^{k_{2}}z|)^{-2}\|_{L^{2}(\mathbb{R}^{2})}\\ &\lesssim\sum_{l=0}^{2}(2^{lk_{1}}\|\psi_{k_{1}k_{2}}(\xi_{1}, \xi_{2})\partial_{\xi_{1}}^{l}m_{2}(\xi_{1},\xi_{2})\|_{L^{\infty}}+2^{lk_{2} }\|\psi_{k_{1}k_{2}}(\xi_{1},\xi_{2})\partial_{\xi_{2}}^{l}m_{2}(\xi_{1},\xi_ {2})\|_{L^{\infty}}).\end{split}\] (A.3) Inspired by Lemma 4.5 in [18], we next show 
\[(1+|\xi_{1}|)^{l}|\partial_{\xi_{1}}^{l}\Phi_{\mu_{1}\mu_{2}}^{-1}(\xi_{1}, \xi_{2})|+(1+|\xi_{2}|)^{l}|\partial_{\xi_{2}}^{l}\Phi_{\mu_{1}\mu_{2}}^{-1}( \xi_{1},\xi_{2})|\lesssim(1+\min\{|\xi_{1}|,|\xi_{2}|\})^{2l+1},l\geq 0,\] (A.4) which yields \[\sum_{l=0}^{2}(2^{lk_{1}}|\psi_{k_{1}k_{2}}(\xi_{1},\xi_{2})\partial_{\xi_{1}} ^{l}\Phi_{\mu_{1}\mu_{2}}^{-1}(\xi_{1},\xi_{2})|+2^{lk_{2}}|\psi_{k_{1}k_{2}}( \xi_{1},\xi_{2})\partial_{\xi_{2}}^{l}\Phi_{\mu_{1}\mu_{2}}^{-1}(\xi_{1},\xi_ {2})|)\lesssim 2^{5\min\{k_{1},k_{2}\}}.\] (A.5) It is pointed out that the analogous result to (A.5) has been obtained in [8] for space dimensions \(d\geq 2\). However, we require the more precise estimate (A.4) for 1D case, which will be utilized in the next lemma. Note that (3.5) and (3.11) imply \[\begin{split}&\sum_{l=0}^{2}(2^{lk_{1}}|\psi_{k_{1}k_{2}}(\xi_{1}, \xi_{2})\partial_{\xi_{1}}^{l}a_{\mu_{1}\mu_{2}}(\xi_{1},\xi_{2})|+2^{lk_{2}}| \psi_{k_{1}k_{2}}(\xi_{1},\xi_{2})\partial_{\xi_{2}}^{l}a_{\mu_{1}\mu_{2}}(\xi_{ 1},\xi_{2})|)\lesssim 1,\\ &\sum_{l=0}^{2}(2^{lk_{1}}|\psi_{k_{1}k_{2}}(\xi_{1},\xi_{2}) \partial_{\xi_{1}}^{l}a_{\sigma\mu_{1}\mu_{2}}(\xi_{1},\xi_{2})|+2^{lk_{2}}| \psi_{k_{1}k_{2}}(\xi_{1},\xi_{2})\partial_{\xi_{2}}^{l}a_{\sigma\mu_{1}\mu_{2}}( \xi_{1},\xi_{2})|)\lesssim 1.\end{split}\] (A.6) On the other hand, if (A.4) has been proved, then it follows from (A.2), (A.3), (A.5), (A.6) and the Holder inequality (2.13) that (A.1a) and (A.1b) hold. Without loss of generality, \(|\xi_{1}|\leq|\xi_{2}|\) is assumed since the case of \(|\xi_{1}|\geq|\xi_{2}|\) can be treated analogously. The estimate on the first term of left hand side in (A.4) follows from \(|\partial_{\xi_{1}}^{l}\Phi_{\mu_{1}\mu_{2}}^{-1}(\xi_{1},\xi_{2})|\lesssim|\Phi_{ \mu_{1}\mu_{2}}^{-1}(\xi_{1},\xi_{2})|\lesssim 1+|\xi_{1}|\) due to (2.11). In addition, the second term of left hand side in (A.4) can be easily shown for the case of \(|\xi_{1}|\geq 2^{-10}|\xi_{2}|\). We next deal with the second term in (A.4) for \(|\xi_{1}|\leq 2^{-10}|\xi_{2}|\) and \(|\xi_{2}|\geq 1\). For \(\partial_{\xi_{2}}^{l}\Phi_{\mu+}\) with \(l\geq 1\), there is some \(r\in[0,1]\) such that \[|\partial_{\xi_{2}}^{l}\Phi_{\mu+}(\xi_{1},\xi_{2})|=|-\Lambda^{(l)}(\xi_{1}+ \xi_{2})+\Lambda^{(l)}(\xi_{2})|=|\xi_{1}\Lambda^{(l+1)}(r\xi_{1}+\xi_{2})| \lesssim|\xi_{1}|(1+|\xi_{2}|)^{-l},\] which derives \((1+|\xi_{2}|)^{l}|\partial_{\xi_{2}}^{l}\Phi_{\mu+}(\xi_{1},\xi_{2})|\lesssim 1 +|\xi_{1}|\). By (2.10) and Leibnitz's rules, one has \[(1+|\xi_{2}|)^{l}|\partial_{\xi_{2}}^{l}\Phi_{\mu+}^{-1}(\xi_{1},\xi_{2})| \lesssim(1+|\xi_{1}|)^{2l+1},\quad l\geq 0.\] This yields (A.4) and (A.5) for \(\mu_{2}=+\). For \(\partial_{\xi_{2}}^{l}\Phi_{\mu-}\), according to the definition (2.9), it is known that there is a positive constant \(C>0\) such that \[-\Phi_{\mu-}(\xi_{1},\xi_{2})=\Lambda(\xi_{1}+\xi_{2})-\mu\Lambda(\xi_{1})+ \Lambda(\xi_{2})\geq\Lambda(\xi_{1}+\xi_{2})\geq C|\xi_{2}|.\] When \(l\geq 1\), \(|\partial_{\xi_{2}}^{l}\Phi_{\mu-}(\xi_{1},\xi_{2})|=|\Lambda^{(l)}(\xi_{1}+ \xi_{2})+\Lambda^{(l)}(\xi_{2})|\lesssim|\xi_{2}|^{1-l}\) holds. Analogously, for \(l\geq 0\), one has \(|\partial_{\xi_{2}}^{l}\Phi_{\mu-}^{-1}(\xi_{1},\xi_{2})|\lesssim|\xi_{2}|^{-1-l}\), which implies (A.4) for \(\mu_{2}=-\). **Lemma A.2**.: _Suppose that \(T_{m_{3}}(f,g,h)\) is defined by (3.2) with functions \(f,g,h\) on \(\mathbb{R}\). 
For any \(k_{1},k_{2},k_{3}\geq-1\) and \(p,q_{1},q_{2},q_{3}\in[1,\infty]\) satisfying \(1/p=1/q_{1}+1/q_{2}+1/q_{3}\), it holds that_ \[\|T_{b_{\mu_{1}\mu_{2}\mu_{3}}}(P_{k_{1}}f,P_{k_{2}}g,P_{k_{3}}h) \|_{L^{p}(\mathbb{R})} \lesssim\|P_{k_{1}}f\|_{L^{q_{1}}(\mathbb{R})}\|P_{k_{2}}g\|_{L^{ q_{2}}(\mathbb{R})}\|P_{k_{3}}h\|_{L^{q_{3}}(\mathbb{R})},\] (A.7) \[\|T_{m_{\mu_{1}\mu_{2}\mu_{3}}}(P_{k_{1}}f,P_{k_{2}}g,P_{k_{3}}h) \|_{L^{p}(\mathbb{R})} \lesssim 2^{7\,\mathrm{med}\{k_{1},k_{2},k_{3}\}}\|P_{k_{1}}f\|_{L^{q_ {1}}(\mathbb{R})}\] \[\qquad\times\|P_{k_{2}}g\|_{L^{q_{2}}(\mathbb{R})}\|P_{k_{3}}h\| _{L^{q_{3}}(\mathbb{R})},\] _where \(b_{\mu_{1}\mu_{2}\mu_{3}}\) and \(m_{\mu_{1}\mu_{2}\mu_{3}}\) are defined by (3.6) and (3.14), respectively. For \((\mu_{1},\mu_{2},\mu_{3})\in\{(+++),(+--),(---)\}\), one has_ \[\|T_{\Phi_{\mu_{1}\mu_{2}\mu_{3}}^{-1}m_{\mu_{1}\mu_{2}\mu_{3}}}(P _{k_{1}}f,P_{k_{2}}g,P_{k_{3}}h)\|_{L^{p}(\mathbb{R})} \lesssim 2^{8\,\mathrm{med}\{k_{1},k_{2},k_{3}\}}\|P_{k_{1}}f\|_{L^{q_ {1}}(\mathbb{R})}\] (A.8) \[\qquad\qquad\qquad\qquad\qquad\qquad\times\|P_{k_{2}}g\|_{L^{q_{2 }}(\mathbb{R})}\|P_{k_{3}}h\|_{L^{q_{3}}(\mathbb{R})},\] _where \(\Phi_{\mu_{1}\mu_{2}\mu_{3}}\) is defined by (2.9)._ Proof.: Similarly to (A.2) and (A.3), we have \[\begin{split}& T_{m_{3}}(P_{k_{1}}f,P_{k_{2}}g,P_{k_{3}}h)(x)=(2 \pi)^{-3}\iiint_{\mathbb{R}^{3}}\mathcal{K}(x-x_{1},x-x_{2},x-x_{3})P_{k_{1}}f(x_ {1})\\ &\qquad\qquad\qquad\qquad\qquad\qquad\times P_{k_{2}}g(x_{2})P_{k _{3}}h(x_{3})dx_{1}dx_{2}dx_{3},\\ &\mathcal{K}(x_{1},x_{2},x_{3})=\iiint_{\mathbb{R}^{3}}e^{i(x_{1} \xi_{1}+x_{2}\xi_{2}+x_{3}\xi_{3})}m_{3}(\xi_{1},\xi_{2},\xi_{3})\psi_{k_{1}k_ {2}k_{3}}(\xi_{1},\xi_{2},\xi_{3})d\xi_{1}d\xi_{2}d\xi_{3},\\ &\psi_{k_{1}k_{2}k_{3}}(\xi_{1},\xi_{2},\xi_{3}):=\psi_{[k_{1}-1,k _{1}+1]}(\xi_{1})\psi_{[k_{2}-1,k_{2}-1]}(\xi_{2})\psi_{[k_{3}-1,k_{3}-1]}( \xi_{3})\end{split}\] (A.9) and \[\begin{split}&\|\mathcal{K}(x_{1},x_{2},x_{3})\|_{L^{1}(\mathbb{R}^{3})} \\ &\lesssim\|(1+|2^{k_{1}}x_{1}|+|2^{k_{2}}x_{2}|+|2^{k_{3}}x_{3}| )^{2}\mathcal{K}\|_{L^{2}(\mathbb{R}^{3})}\|(1+|2^{k_{1}}x_{1}|+|2^{k_{2}}x_{2}| +|2^{k_{3}}x_{3}|)^{-2}\|_{L^{2}(\mathbb{R}^{3})}\\ &\lesssim\sum_{l=0}^{2}\sum_{\epsilon=1}^{3}2^{lk_{\epsilon}}\|\psi_ {k_{1}k_{2}k_{3}}(\xi_{1},\xi_{2},\xi_{3})\partial_{\xi_{i}}^{l}m_{3}(\xi_{1}, \xi_{2},\xi_{3})\|_{L^{\infty}}.\end{split}\] (A.10) According to the definition (3.6), one has \[\sum_{l=0}^{2}\sum_{\iota=1}^{3}2^{lk_{\iota}}\|\psi_{k_{1}k_{2}k_{3}}(\xi_{1}, \xi_{2},\xi_{3})\partial_{\xi_{\iota}}^{l}b_{\mu_{1}\mu_{2}\mu_{3}}(\xi_{1},\xi_ {2},\xi_{3})\|_{L^{\infty}}\lesssim 1.\] This, together with (A.9) and (A.10), yields the first inequality of (A.7). In the remaining part, we focus on the proof for the second inequality of (A.7) and (A.8). 
For \(l\geq 0\), one can calculate from (2.11) and the definition (3.14) to obtain \[\begin{split}&|\partial_{\xi_{1},\xi_{2},\xi_{3}}^{l}m_{\mu_{1} \mu_{2}\mu_{3}}(\xi_{1},\xi_{2},\xi_{3})|\\ &\lesssim 1+\min\{|\xi_{1}|,|\xi_{2}+\xi_{3}|\}+\min\{|\xi_{2}|,| \xi_{1}+\xi_{3}|\}+\min\{|\xi_{3}|,|\xi_{1}+\xi_{2}|\}\\ &\lesssim 2^{\mathrm{med}\{k_{1},k_{2},k_{3}\}}.\end{split}\] (A.11) If \(\mathrm{med}\{k_{1},k_{2},k_{3}\}\geq\max\{k_{1},k_{2},k_{3}\}-O(1)\), then it is deduced from (A.11) that \[\begin{split}&\sum_{l=0}^{2}\sum_{\iota=1}^{3}2^{lk_{\iota}}\| \psi_{k_{1}k_{2}k_{3}}(\xi_{1},\xi_{2},\xi_{3})\partial_{\xi_{\iota}}^{l}m_{ \mu_{1}\mu_{2}\mu_{3}}(\xi_{1},\xi_{2},\xi_{3})\|_{L^{\infty}}\\ &\lesssim 2^{2\max\{k_{1},k_{2},k_{3}\}}\max_{\iota=1,2,3}\sum_{l=0} ^{2}\|\partial_{\xi_{\iota}}^{l}m_{\mu_{1}\mu_{2}\mu_{3}}(\xi_{1},\xi_{2},\xi _{3})\|_{L^{\infty}}\\ &\lesssim 2^{3\,\mathrm{med}\{k_{1},k_{2},k_{3}\}}.\end{split}\] (A.12) For \(l\geq 1\), \(|\Lambda^{(l)}(y)|\lesssim 1\) and further \(|\partial_{\xi_{1},\xi_{2},\xi_{3}}^{l}\Phi_{\mu_{1}\mu_{2}\mu_{3}}|\lesssim 1\) hold. For \((\mu_{1},\mu_{2},\mu_{3})\in\{(+++),(+--),(---)\}\), it follows from (2.12) that \[|\partial_{\xi_{1},\xi_{2},\xi_{3}}^{l}\Phi_{\mu_{1}\mu_{2}\mu_{3}}^{-1}| \lesssim\sum_{l_{1}=1}^{l}(|\Phi_{\mu_{1}\mu_{2}\mu_{3}}|)^{-1-l_{1}}\lesssim 2 ^{(l+1)\min\{k_{1},k_{2},k_{3}\}}.\] (A.13) Therefore, (A.9)-(A.13) together with the Holder inequality imply the second inequality of (A.7) and (A.8) for the case of \(\mathrm{med}\{k_{1},k_{2},k_{3}\}\geq\max\{k_{1},k_{2},k_{3}\}-O(1)\). Next, we turn to the proof of the second inequality in (A.7) and (A.8) for the case of \(\mathrm{med}\{k_{1},k_{2},k_{3}\}\leq\max\{k_{1},k_{2},k_{3}\}-O(1)\). To this end, we are devoted to establishing the following estimate \[\sum_{\iota=1}^{3}2^{lk_{\iota}}\|\psi_{k_{1}k_{2}k_{3}}(\xi_{1},\xi_{2},\xi_{3 })\partial_{\xi_{\iota}}^{l}m_{\mu_{1}\mu_{2}\mu_{3}}(\xi_{1},\xi_{2},\xi_{3}) \|_{L^{\infty}}\lesssim 2^{(3l+1)\,\mathrm{med}\{k_{1},k_{2},k_{3}\}},\quad l \geq 0.\] (A.14) This, together with (A.9), (A.10) and the Holder inequality, will imply the second inequality in (A.7) for the case of \(\mathrm{med}\{k_{1},k_{2},k_{3}\}\leq\max\{k_{1},k_{2},k_{3}\}-O(1)\). 
Note that by the definition (3.14), \(m_{\mu_{1}\mu_{2}\mu_{3}}^{II}(\xi_{1},\xi_{2},\xi_{3})\) is a linear combination of the products of (3.6) and one then has \[\sum_{\iota=1}^{3}2^{lk_{\iota}}\|\psi_{k_{1}k_{2}k_{3}}(\xi_{1},\xi_{2},\xi_{ 3})\partial_{\xi_{\iota}}^{l}m_{\mu_{1}\mu_{2}\mu_{3}}^{II}(\xi_{1},\xi_{2}, \xi_{3})\|_{L^{\infty}}\lesssim 1,\quad l\geq 0.\] (A.15) Meanwhile, \(m_{\mu_{1}\mu_{2}\mu_{3}}^{I}(\xi_{1},\xi_{2},\xi_{3})\) is a linear combination of trinomial products of \(a_{\sigma_{1}\sigma_{2}}\), \(\tilde{a}_{\nu_{1}\nu_{2}\nu_{3}}\) and \[\Phi_{\mu\nu}^{-1}(\xi_{1},\xi_{2}+\xi_{3}),\Phi_{\mu\nu}^{-1}(\xi_{2},\xi_{1}+ \xi_{3}),\Phi_{\mu\nu}^{-1}(\xi_{3},\xi_{1}+\xi_{2}).\] (A.16) Based on (A.4), we now show \[\sum_{\iota=1}^{3}2^{lk_{\iota}}\|\psi_{k_{1}k_{2}k_{3}}(\xi_{1},\xi_{2},\xi_{3}) \partial_{\xi_{\iota}}^{l}(\Phi_{\mu\nu}^{-1}(\xi_{1},\xi_{2}+\xi_{3}))\|_{L^{ \infty}}\lesssim 2^{(3l+1)\operatorname{med}\{k_{1},k_{2},k_{3}\}},\quad l\geq 0.\] (A.17) Denote \[\tilde{\Phi}(\xi_{1},\xi_{2},\xi_{3})=\Phi_{\mu\nu}^{-1}(\xi_{1},\xi_{2}+\xi_{3 }).\] If \(\max\{k_{1},k_{2},k_{3}\}=k_{1}\), one then has \(|\xi_{2}+\xi_{3}|\lesssim|\xi_{1}|\), \(|\xi_{2}+\xi_{3}|\lesssim 2^{\max\{k_{2},k_{3}\}}\) and \(\max\{k_{2},k_{3}\}=\operatorname{med}\{k_{1},k_{2},k_{3}\}\). Therefore, it follows from (A.4) that \[(1+|\xi_{1}|)^{l}|\partial_{\xi_{1}}^{l}\tilde{\Phi}(\xi_{1},\xi _{2},\xi_{3})| =(1+|\xi_{1}|)^{l}|\partial_{\xi_{1}}^{l}\Phi_{\mu\nu}^{-1}(\xi_{1 },\xi_{2}+\xi_{3})|\] \[\lesssim(1+|\xi_{2}+\xi_{3}|)^{2l+1},\] (A.18) \[\lesssim 2^{(2l+1)\operatorname{med}\{k_{1},k_{2},k_{3}\}}.\] On the other hand, we have \[\partial_{\xi_{2}}^{l}\tilde{\Phi}(\xi_{1},\xi_{2},\xi_{3})=\partial_{\xi_{3}} ^{l}\tilde{\Phi}(\xi_{1},\xi_{2},\xi_{3})=\partial_{\xi_{2}}^{l}\Phi_{\mu\nu} ^{-1}(\xi_{1},\xi_{2}+\xi_{3}),\] which yields \[(1+|\xi_{2}|)^{l}|\partial_{\xi_{2}}^{l}\tilde{\Phi}(\xi_{1},\xi _{2},\xi_{3})|+(1+|\xi_{3}|)^{l}|\partial_{\xi_{3}}^{l}\tilde{\Phi}(\xi_{1}, \xi_{2},\xi_{3})|\] \[\lesssim 2^{l\max\{k_{2},k_{3}\}}|\partial_{\xi_{2}}^{l}\Phi_{\mu \nu}^{-1}(\xi_{1},\xi_{2}+\xi_{3})|\] (A.19) \[\lesssim 2^{(3l+1)\operatorname{med}\{k_{1},k_{2},k_{3}\}}.\] If \(\max\{k_{1},k_{2},k_{3}\}=k_{2}\), by \(\operatorname{med}\{k_{1},k_{2},k_{3}\}\leq\max\{k_{1},k_{2},k_{3}\}-O(1)\), one then has \(k_{3}\leq k_{2}-O(1)\). Hence, \(|\xi_{2}+\xi_{3}|\approx|\xi_{2}|\gtrsim|\xi_{1}|\). Similarly to (A.18) and (A.19), we can obtain \[(1+|\xi_{1}|)^{l}|\partial_{\xi_{1}}^{l}\tilde{\Phi}(\xi_{1},\xi _{2},\xi_{3})| =(1+|\xi_{1}|)^{l}|\partial_{\xi_{1}}^{l}\Phi_{\mu\nu}^{-1}(\xi_{1 },\xi_{2}+\xi_{3})|\] \[\lesssim(1+|\xi_{1}|)^{2l+1},\] (A.20) \[\lesssim 2^{(2l+1)\operatorname{med}\{k_{1},k_{2},k_{3}\}}\] and \[(1+|\xi_{2}|)^{l}|\partial_{\xi_{2}}^{l}\tilde{\Phi}(\xi_{1},\xi _{2},\xi_{3})|+(1+|\xi_{3}|)^{l}|\partial_{\xi_{3}}^{l}\tilde{\Phi}(\xi_{1}, \xi_{2},\xi_{3})|\] \[\lesssim(1+|\xi_{2}+\xi_{3}|)^{l}|\partial_{\xi_{2}}^{l}\Phi_{\mu \nu}^{-1}(\xi_{1},\xi_{2}+\xi_{3})|\] (A.21) \[\lesssim 2^{(2l+1)\operatorname{med}\{k_{1},k_{2},k_{3}\}}.\] For \(\max\{k_{1},k_{2},k_{3}\}=k_{3}\), (A.20) and (A.21) still hold by the analogous proof for the case of \(\max\{k_{1},k_{2},\)\(k_{3}\}=k_{2}\). Collecting (A.18)-(A.21) yields (A.17). With the same argument, (A.17) also holds for the other two terms in (A.16). Thus, (A.14) is achieved by (A.15) and (A.17). At last, we prove (A.8) for the case of \(\operatorname{med}\{k_{1},k_{2},k_{3}\}\leq\max\{k_{1},k_{2},k_{3}\}-O(1)\). 
For this purpose, it requires to establish the following estimates \[\sum_{\iota=1}^{3}2^{lk_{\iota}}\|\psi_{k_{1}k_{2}k_{3}}(\xi_{1},\xi_{2},\xi_{3 })\partial_{\xi_{\iota}}^{l}\Phi_{\mu_{1}\mu_{2}\mu_{3}}^{-1}(\xi_{1},\xi_{2}, \xi_{3})\|_{L^{\infty}}\lesssim 2^{(2l+1)\operatorname{med}\{k_{1},k_{2},k_{3}\}},\] (A.22) where \((\mu_{1},\mu_{2},\mu_{3})\in\{(+++),(+--),(---)\}\) and \(\operatorname{med}\{k_{1},k_{2},k_{3}\}\leq\max\{k_{1},k_{2},k_{3}\}-O(1)\). Combining (A.14) and (A.22) leads to \[\sum_{l=0}^{2}\sum_{\iota=1}^{3}2^{lk_{\iota}}\|\psi_{k_{1}k_{2}k_{3}}(\xi_{1}, \xi_{2},\xi_{3})\partial_{\xi_{\iota}}^{l}(\Phi_{\mu_{1}\mu_{2}\mu_{2}}^{-1}m_{ \mu_{1}\mu_{2}\mu_{2}})(\xi_{1},\xi_{2},\xi_{3})\|_{L^{\infty}}\lesssim 2^{8 \operatorname{med}\{k_{1},k_{2},k_{3}\}},\] which yields (A.8) for the case of \(\operatorname{med}\{k_{1},k_{2},k_{3}\}\leq\max\{k_{1},k_{2},k_{3}\}-O(1)\). If \(\max\{k_{1},k_{2},k_{3}\}=k_{1}\), one then has \(|\xi_{2}|,|\xi_{3}|\ll|\xi_{1}|\). Similarly to Lemma A.1, for \(\partial_{\xi_{1}}^{l}\Phi_{+\mu_{2}\mu_{3}}^{-1}\) with \(l\geq 1\), there is some \(r\in[0,1]\) such that \[|\partial_{\xi_{1}}^{l}\Phi_{+\mu_{2}\mu_{3}}(\xi_{1},\xi_{2}, \xi_{3})| =|\Lambda^{(l)}(\xi_{1})-\Lambda^{(l)}(\xi_{1}+\xi_{2}+\xi_{3})|\] \[=|(\xi_{2}+\xi_{3})\Lambda^{(l+1)}(\xi_{1}+r(\xi_{2}+\xi_{3}))|\] \[\lesssim 2^{\operatorname{med}\{k_{1},k_{2},k_{3}\}}(1+|\xi_{1}|) ^{-l}.\] This together with (2.12) derives \[(1+|\xi_{1}|)^{l}|\partial_{\xi_{1}}^{l}\Phi_{+\mu_{2}\mu_{3}}^{-1}(\xi_{1}, \xi_{2},\xi_{3})|\lesssim 2^{(2l+1)\operatorname{med}\{k_{1},k_{2},k_{3}\}}.\] (A.23) For \(\partial_{\xi_{1}}^{l}\Phi_{-\mu_{2}\mu_{3}}^{-1}\), we have \[-\Phi_{-\mu_{2}\mu_{3}}(\xi_{1},\xi_{2},\xi_{3}) =\Lambda(\xi_{1}+\xi_{2}+\xi_{3})+\Lambda(\xi_{1})-\mu_{2}\Lambda (\xi_{2})-\mu_{3}\Lambda(\xi_{3})\] \[\geq\Lambda(\xi_{1})\gtrsim 1+|\xi_{1}|\] and \[|\partial_{\xi_{1}}^{l}\Phi_{-\mu_{2}\mu_{3}}(\xi_{1},\xi_{2},\xi_{3})|=| \Lambda^{(l)}(\xi_{1}+\xi_{2}+\xi_{3})+\Lambda^{(l)}(\xi_{1})|\lesssim(1+|\xi _{1}|)^{1-l},\quad l\geq 1.\] Thereby, \[|\partial_{\xi_{1}}^{l}\Phi_{-\mu_{2}\mu_{3}}^{-1}|\lesssim(1+|\xi_{1}|)^{-1-l}.\] Together with (A.23), we can achieve \[(1+|\xi_{1}|)^{l}|\partial_{\xi_{1}}^{l}\Phi_{\mu_{1}\mu_{2}\mu_{3}}^{-1}(\xi_ {1},\xi_{2},\xi_{3})|\lesssim 2^{(2l+1)\operatorname{med}\{k_{1},k_{2},k_{3}\}}.\] (A.24) On the other hand, (A.13) implies \[(1+|\xi_{2}|)^{l}|\partial_{\xi_{2}}^{l}\Phi_{\mu_{1}\mu_{2}\mu_{3}}^{-1}(\xi_ {1},\xi_{2},\xi_{3})|+(1+|\xi_{3}|)^{l}|\partial_{\xi_{3}}^{l}\Phi_{\mu_{1}\mu _{2}\mu_{3}}^{-1}(\xi_{1},\xi_{2},\xi_{3})|\lesssim 2^{(2l+1)\operatorname{med}\{k_{1},k_{2},k_ {3}\}}.\] (A.25) Collecting (A.24) and (A.25) derives (A.22) for the case of \(\max\{k_{1},k_{2},k_{3}\}=k_{1}\). The proof of (A.22) for the case of \(\max\{k_{1},k_{2},k_{3}\}=k_{2}\) or \(\max\{k_{1},k_{2},k_{3}\}=k_{3}\) can be completed analogously. **Lemma A.3**.: _Suppose that \(T_{m_{3}}(f,g,h)\) is defined by (3.2) with functions \(f,g,h\) on \(\mathbb{R}\). 
For any \(k_{1},k_{2},k_{3}\geq-1\) and \(p,q_{1},q_{2},q_{3}\in[1,\infty]\) satisfying \(\max\{k_{1},k_{2}\}\leq k_{3}-O(1)\), \(1/p=1/q_{1}+1/q_{2}+1/q_{3}\), it holds that_ \[\|T_{\Phi_{++-}^{-1}m_{++-}}(P_{k_{1}}f,P_{k_{2}}g,P_{k_{3}}h)\|_{L^{p}(\mathbb{R})} \lesssim 2^{7\max\{k_{1},k_{2}\}}\|P_{k_{1}}f\|_{L^{q_{1}}(\mathbb{R})}\] (A.26) \[\qquad\qquad\times\|P_{k_{2}}g\|_{L^{q_{2}}(\mathbb{R})}\|P_{k_{3}}h\|_{L^{q_{3}}(\mathbb{R})}.\] Proof.: It follows from a direct computation that for \(\iota=1,2,3\), \[|\partial_{\xi_{\iota}}\Phi^{-1}_{++-}|\lesssim|\partial\Phi_{++-}||\Phi_{++-}|^{-2}\lesssim 2^{-2k_{3}},\] \[|\partial^{2}_{\xi_{\iota}}\Phi^{-1}_{++-}|\lesssim|\partial^{2}\Phi_{++-}||\Phi_{++-}|^{-2}+|\partial\Phi_{++-}|^{2}|\Phi_{++-}|^{-3}\lesssim 2^{-2k_{3}},\] where we have used (3.16) and the fact that \(|\partial^{l}_{\xi_{1},\xi_{2},\xi_{3}}\Phi_{++-}|\lesssim 1\) for \(l\geq 1\). Thus, one can obtain \[\sum_{l=0}^{2}\sum_{\iota=1}^{3}(1+|\xi_{\iota}|)^{l}|\partial^{l}_{\xi_{\iota}}\Phi^{-1}_{++-}(\xi_{1},\xi_{2},\xi_{3})|\lesssim 1.\] This, together with (A.9), (A.10) and (A.14), leads to (A.26).
2306.17763
The dynamics of crack front waves in 3D material failure
Crack front waves (FWs) are dynamic objects that propagate along moving crack fronts in 3D materials. We study FW dynamics in the framework of a 3D phase-field framework that features a rate-dependent fracture energy $\Gamma(v)$ ($v$ is the crack propagation velocity) and intrinsic lengthscales, and quantitatively reproduces the high-speed oscillatory instability in the quasi-2D limit. We show that in-plane FWs feature a rather weak time dependence, with decay rate that increases with $d\Gamma(v)/dv\!>\!0$, and largely retain their properties upon FW-FW interactions, similarly to a related experimentally-observed solitonic behavior. Driving in-plane FWs into the nonlinear regime, we find that they propagate slower than predicted by a linear perturbation theory. Finally, by introducing small out-of-plane symmetry-breaking perturbations, coupled in- and out-of-plane FWs are excited, but the out-of-plane component decays under pure tensile loading. Yet, including a small anti-plane loading component gives rise to persistent coupled in- and out-of-plane FWs.
Sanhita Das, Yuri Lubomirsky, Eran Bouchbinder
2023-06-30T16:13:36Z
http://arxiv.org/abs/2306.17763v1
# The dynamics of crack front waves in 3D material failure ###### Abstract _Introduction._--Material failure is a highly complex phenomenon, involving multiple scales, strong spatial localization and nonlinear dissipation. It is mediated by the propagation of cracks, which feature nearly singular stresses near their edges [1; 2]. In brittle materials, they reach velocities comparable to elastic wave-speeds, hence also experience strong inertial effects. In thin, quasi-2D samples, a crack is viewed as a nearly singular point that propagates in a 2D plane and leaves behind it a broken line. In thick, fully-3D samples, a crack is a nearly singular front (line) that evolves in a 3D space and leaves behind it a broken surface. While significant recent progress has been made in understanding dynamic fracture in 2D [3; 4; 5; 6], our general understanding of dynamic fracture in 3D remains incomplete [7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36]. A qualitative feature that distinguishes 2D from 3D material failure is the emergence of crack front waves (FWs) in the latter. FWs are compact objects that persistently propagate along crack fronts [8; 9; 10; 11; 12; 13; 14; 15]. In the most general case, FWs feature both a component in the main crack plane and an out-of-plane component [12; 13; 14]. A linear perturbation theory of singular tensile cracks, featuring no intrinsic lengthscales and rate-independent fracture-related dissipation, predicts the existence of non-dispersive in-plane FWs, whose velocity is close to the Rayleigh wave-speed \(c_{{}_{\rm R}}\)[9; 10]. An extended linear perturbation theory also predicts the existence of non-dispersive out-of-plane FWs in the same velocity range [25], albeit to linear order the in- and out-of-plane components are decoupled. Here, we study FWs in a 3D theoretical-computational framework that has recently quantitatively predicted the high-speed oscillatory instability in 2D [4; 5; 6]. It is based on a phase-field approach to fracture [37; 38; 39; 40; 41; 42; 43; 44], where large scale elastic deformations -- described by an elastic energy density \(e(\mathbf{u})\) (here \(\mathbf{u}(\mathbf{x},t)\) is the displacement field) -- are coupled on smaller scales near the crack edge to an auxiliary scalar field -- the phase-field \(\phi(\mathbf{x},t)\) -- that mathematically mimics material breakage. The main merit of the approach is that the dissipative dynamics of \(\phi(\mathbf{x},t)\) spontaneously generate the traction-free boundary conditions defining a crack, and consequently select its trajectory and velocity \(v\). Moreover, it also incorporates intrinsic lengthscales near the crack edge -- most notably a dissipation length \(\xi\) (sometimes termed the "process zone" size [1; 2]) and possibly a nonlinear elastic length \(\ell_{\rm nl}\) (embodied in \(e(\mathbf{u})\)[3; 4; 5; 6]) -- absent in singular crack models, and a rate-dependent fracture energy \(\Gamma(v)\) Figure 1: (a) The high-speed oscillatory instability observed in 3D phase-field simulation with \(L_{z}\!=\!6\xi\). The crack propagates in the \(x\) direction, a tensile (mode I) loading is applied in the \(y\) direction and traction-free boundary conditions are employed in \(z\). Plotted is the phase-field \(\phi(\mathbf{x},t)\!=\!1/2\) iso-surface. 
(b) A steady-state planar crack under tensile loading in a thick 3D system (with periodic boundary conditions in \(z\)) interacts with a tough spherical asperity (see text for details). that accompanies the regularization of the edge singularity. _The theoretical-computational framework and the quasi-2D limit._-- We consider a homogeneous elastic material in 3D, where \(L_{z}\) is the thickness in the \(z\) direction, \(L_{y}\) is the height in the tensile loading \(y\) direction and \(x\) is the crack propagation direction (we employ a treadmill procedure to obtain very long propagation distances using a finite simulation box length \(L_{x}\)[6]). We use a constitutively-linear energy density \(e(\mathbf{u})=\frac{1}{2}\lambda\operatorname{tr}^{2}(\mathbf{E})+\mu\operatorname{tr}(\mathbf{E}^{2})\), with Lamé coefficients \(\lambda\) and \(\mu\) (shear modulus), and where \(\mathbf{E}=\frac{1}{2}[\mathbf{\nabla}\mathbf{u}+(\mathbf{\nabla}\mathbf{u})^{\mathrm{T}}+(\mathbf{\nabla}\mathbf{u})^{\mathrm{T}}\mathbf{\nabla}\mathbf{u}]\) is the Green-Lagrange metric strain tensor. The latter ensures rotational invariance, yet it introduces geometric nonlinearities (last term on the right-hand-side). However, the associated nonlinear elastic lengthscale \(\ell_{\mathrm{nl}}\) remains small (unless otherwise stated [45]), such that we essentially consider a linear elastic material and the dissipation length \(\xi\) is the only relevant intrinsic lengthscale. The latter emerges once \(e(\mathbf{u})\) is coupled to the phase-field \(\phi(\mathbf{x},t)\)[4, 5, 6]. Applying this framework in 2D, \(L_{z}=0\), the high-speed oscillatory instability -- upon which a straight crack loses stability in favor of an oscillatory crack when surpassing a critical velocity close to \(c_{\mathrm{n}}\) -- was predicted, in quantitative agreement with thin-sample experiments [46, 47, 48, 3, 4, 5, 6]. In Fig. 1a, we present a high-speed oscillatory instability in a thin 3D material, \(L_{z}>0\), where all quantities -- including the wavelength of oscillations -- agree with their 2D counterparts. These results support the validity of the 3D framework as it features the correct quasi-2D limit. Next, we aim at exciting FWs and studying their dynamics. We consider thick systems (with \(L_{z}/\xi\gg 1\) and periodic boundary conditions along \(z\)), see Fig. 1b. Loading boundary conditions \(u_{i}(x,y=0,z)\) and \(u_{i}(x,y=L_{y},z)\) are applied. In most, but not all, cases (see below), we apply tensile boundary conditions \(u_{y}(x,y=0,z)=-u_{y}(x,y=L_{y},z)=\delta/2\), resulting in mode I cracks initially located at the \(y=L_{y}/2\) plane. The tensile strain \(\delta/L_{y}\) translates into a crack driving force \(G\) (energy release rate) [1, 2, 17], which is balanced by a rate-dependent fracture energy \(\Gamma(v)\). The latter features \(d\Gamma(v)/dv>0\), whose magnitude depends on the relaxation/dissipation timescale \(\tau\) of the phase-field \(\phi\)[6], through the dimensionless parameter \(\beta\equiv\tau c_{\mathrm{s}}/\xi\) (where \(c_{\mathrm{s}}\) is the shear wave-speed). The entire theoretical-computational framework depends on two dimensionless parameters, \(\beta\) and \(e_{\mathrm{c}}/\mu\), where \(e_{\mathrm{c}}\) is the onset of dissipation energy density [6]. FWs are excited by allowing a steady-state crack front to interact with tough spherical asperities (one or more), see Fig. 1b.
Each spherical asperity is characterized by a radius \(R\) and a dimensionless fracture energy contrast \(\delta\Gamma\equiv\Delta\Gamma/\Gamma_{0}>0\), where \(\Gamma_{0}\equiv\Gamma(v\to 0)\). The position of the asperities with respect to the crack plane, \(y=L_{y}/2\), determines the type of perturbation induced, i.e. in-plane or coupled in- and out-of-plane perturbations. The resulting perturbed crack front is then described by an evolving line \(\mathbf{f}(z,t)=(f_{x}(z,t),f_{y}(z,t))\) parameterized by the \(z\) coordinate and time \(t\) (assuming no topological changes take place). Here, \(f_{x}(z,t)\) is the in-plane component and \(f_{y}(z,t)\) is the out-of-plane component, and an unperturbed tensile crack corresponds to \(\mathbf{f}(z,t)=(vt,0)\). _The dynamics of in-plane FWs._--In-plane FWs are excited by placing a single asperity whose center coincides with the crack plane, \(y=L_{y}/2\) (cf. Fig. 1b). The tough asperity locally retards the crack front, leading to a local increase in the front curvature and \(G\)[7, 27]. The front then breaks the asperity (cf. Fig. 1b), leading to a subsequent velocity overshoot \(\Delta v_{\mathrm{os}}(t)\) ahead of the asperity (cf. Fig. 2a). To quantify in-plane FWs dynamics, we employ \(v_{x}(z,t)\equiv\partial_{t}f_{x}(z,t)\), typically with respect Figure 2: (a) Equal time interval snapshots of \(v_{x}(z,t)-\langle v_{x}(z,t)\rangle\) (normalized and shifted for visual clarity [45]) during in-plane FWs formation and propagation (time snapshots correspond to \(t=968,1023,1068\,\xi/c_{\mathrm{s}}\)). The velocity overshoot \(\Delta v_{\mathrm{os}}\), and FW amplitude \(\Delta v_{z}\), width \(\Delta z\) and propagation velocity \(c_{\mathrm{FW}}\) are all marked (see also text). FWs were generated using \(v=0.6c_{\mathrm{s}}\), \(R=6\xi\) and \(\delta\Gamma=0.6\), and feature \(c_{\mathrm{FW}}=0.977c_{\mathrm{n}}\). (b) \(\Delta v_{\mathrm{os}}(t)/\langle v_{x}(z,t)\rangle\) and \(\Delta v_{x}(t)/\langle v_{x}(z,t)\rangle\) (see legend). See also **MovieS1**-**MovieS2** (Download Supplementary Movies). to \(\langle v_{x}(z,t)\rangle\approx v\), where \(\langle\cdot\rangle\) corresponds to an average along \(z\) (unless otherwise stated). Strictly speaking, the physically relevant quantity is the normal front velocity, \(v_{\perp}(z,t)=v_{x}(z,t)/\!\sqrt{1+(\partial_{z}f_{x}(z,t))^{2}}\). However, for our purposes here \(v_{x}(z,t)\) itself is sufficient. After \(\Delta v_{\rm oz}(t)\) reaches a maximum, it decays to zero (cf. Fig. 2b) and a pair of in-plane FWs is generated. Each FW features an amplitude \(\Delta v_{x}(t)\) (defined as the crest-to-trough difference), a width \(\Delta z(t)\) (the corresponding crest-to-trough \(z\) distance) and a propagation velocity \(c_{\rm FW}\) (in the laboratory frame of reference), all marked in Fig. 2a. The dimensionless FW amplitude \(\Delta v_{x}(t)/\langle v_{x}(z,t)\rangle\) is plotted in Fig. 2b. The FW inherits its scale from \(R\), as shown in [45]. A linear perturbation theory [9], developed to leading order in \(|\partial_{z}f_{x}(z,t)|\ll 1\), predicted the existence of non-dispersive in-plane FWs, in the absence of intrinsic lengthscales (\(\xi\to 0\)) and for a rate-independent fracture energy (\(d\Gamma(v)/dv=0\)). The theory predicts \(0.94<c_{\rm FW}(v)/c_{\rm R}<1\) (when \(v\) varies between 0 and \(c_{\rm R}\)). These predictions have been subsequently supported by boundary-integral method simulations of a rate-independent cohesive crack model [10]. 
In [9], an effective crack propagation equation of motion has been conjectured for the \(d\Gamma(v)/dv\neq 0\) case, suggesting that for \(d\Gamma(v)/dv>0\) in-plane FWs undergo some form of attenuation during propagation. As materials feature a rate-dependent fracture energy \(\Gamma(v)\), it is important to shed light on this physical issue. Our framework naturally enables it as \(d\Gamma(v)/dv\) is directly controlled by \(\beta\). The evolution of the FW amplitude \(\Delta v_{x}(t)/\langle v_{x}(z,t)\rangle\) presented in Fig. 2 corresponds to very weak rate dependence, shown in Fig. 3a for \(\beta=0.28\). Such a flat \(\Gamma(v)\) is characteristic of nearly ideally brittle materials such as silica glass (cf. the experimental data in Fig. 2b of [49]). \(\Delta v_{x}(t)/\langle v_{x}(z,t)\rangle\) in this case, presented again in the inset of Fig. 3, reveals a weak linear attenuation proportional to \(1-(t-t_{0})/T\), where \(c_{\rm s}T/\xi\simeq 1210\). However, while our system width \(L_{z}\) is large enough to resolve FW propagation distances several times larger than their characteristic width \(\Delta z\) (cf. Fig. 2a), the overall propagation time \(\Delta t\) prior to FW-FW interaction (through the periodic boundary condition, to be discussed below) is \(\Delta t\sim\mathcal{O}(100)\) (cf. Fig. 2b), implying \(\Delta t\ll T\). Consequently, the presented results cannot tell apart an exponential decay from a linear one as \(\exp[-\Delta t/T]\simeq 1-\Delta t/T\) for \(\Delta t\ll T\). To address this point, and more generally the effect of the magnitude of \(d\Gamma(v)/dv\) on in-plane FW dynamics, we increased \(\beta\) by an order of magnitude, setting it to \(\beta=2.8\). The resulting \(\Gamma(v)\), shown in Fig. 3 (previously reported for our model in 2D [6]), indeed reveals a significantly larger \(d\Gamma(v)/dv\), nearly a factor 5 larger than that for \(\beta=0.28\). The emerging \(d\Gamma(v)/dv\) is similar to the one observed in brittle polymers (e.g., PMMA, cf. Fig. 2a in [49]) and in brittle elastomers (e.g., polyacrylamide, cf. Fig. 2B in [50]). The corresponding \(\Delta v_{x}(t)/\langle v_{x}(z,t)\rangle\) is shown in the inset of Fig. 3, again following a linear attenuation proportional to \(1-(t-t_{0})/T\), this time with \(c_{\rm s}T/\xi\simeq 208\). Since in this case \(\Delta t\) is comparable to \(T\), the results support a linear decay, in turn implying that in-plane FWs may propagate many times their characteristic width \(\Delta z\) even in materials with a finite \(d\Gamma(v)/dv\). Moreover, we note that the decay rate \(1/T\) varies between the two \(\beta\) values by a factor that is comparable to the corresponding variability in \(d\Gamma(v)/dv\), indeed suggesting a relation between these two physical quantities [9]. We next consider the FW velocity \(c_{\rm FW}\) and the possible effect of \(\Delta v_{x}(t)/\langle v_{x}(z,t)\rangle\) on it. As explained above, the linear perturbation theory of [9] predicts \(0.94\!<\!c_{\rm FW}/c_{\rm R}\!<\!1\). Consequently, we expect our excited in-plane FWs to feature \(c_{\rm FW}/c_{\rm R}\) within this range when \(\Delta v_{x}(t)/\langle v_{x}(z,t)\rangle\) is small. This is indeed the case in Fig. 4, where the dimensionless FW amplitude is controlled by systematically varying \(v\), and the asperity parameters \(R\) and \(\delta\Gamma\) (in fact, we find that the amplitude varies linearly with \(\delta\Gamma\) for fixed \(v\) and \(R\)[45]). 
However, when the amplitude is no longer small, apparently beyond the linear perturbation regime, we find that \(c_{\rm FW}/c_{\rm R}\) decreases below 0.94, indicating that nonlinear effects tend to slow down in-plane FWs. Finally, we take advantage of the \(z\)-periodic boundary conditions to study FW-FW interactions. In Fig. 5a, we present the interaction dynamics between the in-plane FWs previously shown in Fig. 2a. It is observed that the FWs retain their overall shape after the interaction, yet during the interaction they do not feature a linear superposition. This behavior is quantified in Fig. 5b, where \(\Delta v_{x}(t)/\langle v_{x}(z,t)\rangle\) is plotted before, during and after FW-FW interaction (before and after the interaction it is identical for the two non-interacting FWs). In this case, it is observed that before and after the FW-FW interaction, each FW follows the very same weak linear decay previously presented in Fig. 2b (see superimposed dashed line) and nearly drops to zero during the interaction. This soliton-like behavior is reminiscent of similar experimental observations made in relation to coupled in- and out-of-plane FWs [12; 13; 14], which are discussed next. _Coupled in- and out-of-plane FWs._--Experimentally, FWs have been observed through their fractographic signature on postmortem fracture surfaces [12; 13; 14; 15], i.e. the Figure 5: (a) Equal time interval snapshots (see \(y\) axis label) revealing the interaction of the two FWs previously shown in Fig. 2a. For improved visibility, we rotate the system along the \(z\) axis by \(L_{z}/2\) such that the interaction event takes place in the middle of the system. (b) \(\Delta v_{x}(t)/\langle v_{x}(z,t)\rangle\) for the dynamics shown in panel (a), the dashed line is a guide to the eye. See text for discussion. See also **MovieS3** (Download Supplementary Movies). Figure 6: A pair of coupled in- and out-of-plane FWs triggered for \(v\!=\!0.4c_{\rm a}\) and \(\beta\!=\!0.28\) using two adjacent asperities, each characterized by \(R\!=\!6\xi\) and \(\delta\Gamma\!=\!0.4\). To generate an out-of-plane perturbation, one asperity is shifted by \((\delta y\!=\!-2\xi,\delta z\!=\!-2\xi)\) relative to the middle of the crack front and the other by \((\delta y\!=\!2\xi,\delta z\!=\!2\xi)\). A small anti-plane loading component is included, resulting in a mode-mixity (mode III/I) level of 3% (see text for discussion). Plotted are \(f_{y}(z,t)\) (green, multiplied by 10, see left \(y\) axis) and \(f_{x}(z,t)\) (brown, right \(y\) axis) at equal time intervals. FWs persist through a FW-FW interaction, here taking place at the edges \((z\!=\!0,350\xi)\) and propagate at \(c_{\rm FW}\!=\!0.961c_{\rm R}\). See also **MovieS4**-**MovieS6** (Download Supplementary Movies). observed FWs featured nonlinearly coupled in- and out-of-plane components, where both \(f_{x}(z,t)\) and \(f_{y}(z,t)\) are non-zero and apparently propagate at the same \(c_{{}_{\rm FW}}\). FWs in the experiments were excited by huge perturbations, 3-4 orders of magnitude larger than the out-of-plane component of the generated FWs [13; 14], which in itself was comparable to the fracture dissipation length \(\xi\). For example, asperity sizes of \(100\!-\!1000\mu\)m gave rise to FWs with an out-of-plane component of \(0.1\mu\)m in silica glass [13], whose fracture dissipation (process zone) size is estimated to be in the tens of nanometers range [51]. 
Coupled in- and out-of-plane FWs are also spontaneously triggered by micro-branching events [14; 15], likely to be "large perturbations" as well. Due to computational limitations -- most notably on the magnitude of \(L_{y}\) -- we are not able to resolve this huge span in scales between the triggering perturbation and the resulting out-of-plane component. Consequently, the out-of-plane perturbations accessible to us are rather small. In particular, we perturbed the initially planar crack by a pair of adjacent asperities, one slightly shifted above the crack plane and one below, breaking the up-down symmetry. Such perturbations excite both in- and out-of-plane crack front components, but the latter decays after a short transient (while the former persists [45]). To understand if the latter observation is exclusively due to computational limitations (in resolving finite perturbations and the associated scale separation) or whether other physical factors are at play, we considered the recent experiments of [35]. It was shown therein that out-of-plane crack surface structures -- most notably surface steps [31; 35; 36] -- might crucially depend on the existence of small, weakly experimentally controlled, anti-plane loading component (mode III, anti-symmetric loading in the \(z\) direction, e.g., due to small misalignment between the crack plane and the tensile axis). To test the possibility that a small amount of mode-micity (mode III/I) might play a role in generating persistent coupled in- and out-of-plane FWs, we introduced a mode-mixity level of 3%, i.e. \(u_{z}(x,y\!\!=\!\!0,z)\!=\!\!-u_{z}(x,y\!\!=\!\!L_{y},z)\!=\!0.03\,|u_{y}(x,y \!=\!L_{y},z)|\) into the above-described calculations. The results are presented in Fig. 6, revealing persistent propagation of a pair of coupled in- and out-of-plane FWs, featuring non-zero \(f_{x}(z,t)\) and \(f_{y}(z,t)\) that propagate at \(c_{{}_{\rm FW}}\!=\!0.961c_{{}_{\rm R}}\). The amplitude of \(f_{y}(z,t)\) is tiny, a small fraction of \(\xi\) (yet it varies systematically with mode-mixity [45]). Moreover, it is an order of magnitude small than that of \(f_{x}(z,t)\) (notice the two \(y\) axis labels in Fig. 6). Interestingly, this observation is consistent with experimental estimates [13] that suggest that \(\partial_{t}f_{y}(z,t)\) is much smaller than \(\partial_{t}f_{x}(z,t)\) (estimated using real-time measurements of in-plane crack velocity fluctuations at \(z\!=\!0\) and \(z\!=\!L_{z}\)[13]). Overall, the observed coupled in- and out-of-plane FWs propagating at \(c_{{}_{\rm FW}}\!=\!0.961c_{{}_{\rm R}}\) with a small out-of-plane component, which also persist through FW-FW interactions, is reminiscent of several key experimental findings [12; 13; 14]. It remains to be seen whether a small mode-mixity, which is physically realistic, is an essential ingredient. One manifestation of it, which can be tested experimentally, is that the out-of-plane amplitude of the pair of FWs has opposite signs, see Fig. 6. _Summary and outlook_.--Our results demonstrate that the same framework that quantitatively predicts the high-speed oscillatory instability in thin materials, also provides deep insight into FW dynamics in thick, fully 3D materials. The effect of realistic rate-dependent fracture energy \(d\Gamma(v)/dv\!>\!0\) on the propagation of in-plane FWs is elucidated, as well as their solitonic nature and the effect of nonlinear amplitudes on their velocity. 
Persistent coupled in- and out-of-plane FWs, similar to experimental observations, are demonstrated once a small anti-plane (mode III) loading component is added to the dominant tensile (mode I) loading component. Our findings give rise to pressing questions and subsequent investigation directions, most notably in relation to out-of-plane crack structures such as micro-branching events and surface faceting [17; 31]. The roles of mode-micity fluctuations in nominally tensile failure and of realistic material disorder/heterogeneity (we focused on homogeneous materials, discrete asperities were just introduced to generate FWs) should be particularly considered. In addition, improved computational capabilities (e.g. based on multi-GPU implementations) should be developed in order to obtain better scale separation, which in turn may allow to understand the effect of finite out-of-plane perturbations on 3D crack dynamics. _Acknowledgements_ This work has been supported by the United States-Israel Binational Science Foundation (BSF, grant no. 2018603). E.B. acknowledges support from the Ben May Center for Chemical Theory and Computation, and the Harold Perlman Family. **Supplemental Materials for:** **"The dynamics of crack front waves in 3D material failure"** The goal here is to provide some technical details regarding the 3D computational framework employed in the manuscript and to offer some additional supporting data. ### The 3D phase-field model and its numerical implementation The 3D theoretical-computational framework we employed is identical to the 2D phase-field model presented in great detail in [6], extended to 3D. To the best of our knowledge, this framework is the only one that quantitatively predicted the high-speed oscillatory and tip-splitting instabilities in 2D dynamic fracture [4; 5; 6], and hence should serve as a basis for a 3D theory of material failure. For completeness, we briefly write down here the model's defining equations, and provide some details about the employed boundary conditions and numerical implementation in 3D. The starting point is the Lagrangian \(L\!=\!T-U\), where the potential energy \(U\) and kinetic energy \(T\) are given as \[U = \int\left[\frac{1}{2}\kappa\left(\nabla\phi\right)^{2}+g(\phi)\,e (\mathbf{u})+w(\phi)\,e_{\rm c}\right]dV\,\] (S1) \[T = \int\!\frac{1}{2}f(\phi)\,\rho\left(\partial_{t}\mathbf{u}\right)^{2 }dV\,\] (S2) in terms of the displacement vector field \(\mathbf{u}(\mathbf{x},t)\) and the scalar phase-field \(\phi(\mathbf{x},t)\). \(dV\) is a volume differential and the integration extends over the entire system. An intact/unbroken material corresponds to \(\phi\!=\!1\), for which \(g(1)=f(1)\!=\!1\) and \(w(1)\!=\!0\). It describes a non-dissipative, elastic response characterized by an energy density \(e(\mathbf{u})\) on large lengthscales away from a crack edge (we use in this document 'crack edge', which includes both 'crack tip' in 2D and 'crack front' in 3D). The crack edge is accompanied by a large concentration of elastic energy, eventually leading to material failure, i.e. to the loss of load-bearing capacity. This process is mathematically accounted for in the phase-field approach by the field \(\phi(\mathbf{x},t)\), which smoothly varies from \(\phi\!=\!1\) (inact/unbroken material) to \(\phi\!=\!0\) (fully broken material), and by the degradation functions \(g(\phi)\), \(f(\phi)\) and \(w(\phi)\) that depend on it. The onset of dissipation is related to the strain energy density threshold \(e_{\rm c}\) in Eq. (S1). 
As \(\phi\) decreases from unity, \(g(\phi)\) is chosen such that it decreases towards zero and \(w(\phi)\) is chosen such that it increases towards unity. This process mimics the conversion of elastic strain energy into fracture energy, where the broken \(\phi\!=\!0\) phase/state becomes energetically favorable from the perspective of minimizing \(U\) in Eq. (S1). Throughout this work, we operationally define the crack faces, and hence also the crack front, based on the \(\phi(\mathbf{x},t)\!=\!1/2\) iso-surface. For \(\phi\!=\!0\), the material has lost its load-bearing capacity and traction-free boundary conditions are achieved. This process is associated with a lengthscale, which emerges from the combination of the energetic penalty of developing \(\phi\) gradients, as accounted for by the first contribution to \(U\) in Eq. (S1) that is proportional to \(\kappa\), and the \(\phi\)-dependent elastic energy density threshold for failure \((1-w(\phi))e_{\rm c}\). Consequently, the characteristic length scale is \(\xi\!\equiv\!\sqrt{\kappa/2e_{\rm c}}\), setting the size of the dissipation zone near the crack edge. The degradation functions we employed, following [6], are \(f(\phi)\!=\!g(\phi)\!=\!\phi^{4}\) and \(w(\phi)\!=\!1-\phi\). Note that the choice \(f(\phi)\!=\!g(\phi)\), where \(f(\phi)\) appears in the kinetic energy of Eq. (S2), ensures that elastic wave-speeds inside the dissipation zone remain constant, as extensively discussed in [4; 5; 6]. To account for fracture-related dissipation, the Lagrangian \(L\!=\!T-U\) of Eqs. (S1)-(S2) is supplemented with the following dissipation function (directly related to the phase-field \(\phi(\mathbf{x},t)\)) \[D\equiv\frac{1}{2\chi}\int\left(\partial_{t}\phi\right)^{2}dV\,\] (S3) where \(\chi\) is a dissipation rate coefficient that determines the rate-dependence of the fracture energy \(\Gamma(v)\). The quasi-static fracture energy, \(\Gamma_{0}\!=\!\Gamma(v\!\to\!0)\), is proportional to \(e_{\rm c}\xi\)[6]. The evolution of \(\phi(\mathbf{x},t)\) and \(\mathbf{u}(\mathbf{x},t)\) is derived from Lagrange's equations \[\frac{\partial}{\partial t}\left[\frac{\delta L}{\delta\left(\partial\psi/\partial t\right)}\right]-\frac{\delta L}{\delta\psi}+\frac{\delta D}{\delta\left(\partial\psi/\partial t\right)}=0\,\] (S4) where \(\psi\!=\!(\phi,u_{x},u_{y},u_{z})\), i.e. \(\mathbf{u}\!=\!(u_{x},u_{y},u_{z})\) are the components of the displacement vector field. As explained in the manuscript, we employed the following constitutively-linear elastic energy density \[e(\mathbf{u})=\frac{1}{2}\lambda\,{\rm tr}^{2}(\mathbf{E})+\mu\,{\rm tr}(\mathbf{E}^{2})\,\] (S5) where \(\mathbf{E}\!=\!\frac{1}{2}[\mathbf{\nabla}\mathbf{u}+(\mathbf{\nabla}\mathbf{u})^{\rm T}+(\mathbf{\nabla}\mathbf{u})^{\rm T}\mathbf{\nabla}\mathbf{u}]\) is the Green-Lagrange metric strain tensor, and \(\lambda\) and \(\mu\) (shear modulus) are the Lamé coefficients. We set \(\lambda\!=\!2\mu\) in all of our calculations. Using Eqs. (S1)-(S3), with Eq. (S5), inside Eq. (S4) fully defines our field equations in 3D (that should be solved in a given 3D domain, and supplemented with proper initial and boundary conditions, as described below). The resulting equations are nondimensionalized by expressing length in units of \(\xi\), time in units of \(\xi/c_{\rm s}\), energy density in units of \(\mu\) and the mass density \(\rho\) in units of \(\mu/c_{\rm s}^{2}\) (\(c_{\rm s}\) is the shear wave-speed).
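For orientation, the model ingredients specified above can be collected in a few lines. The following C++ sketch is purely illustrative (the function names are ours and are not part of the actual solver); it evaluates the degradation functions \(f(\phi)\!=\!g(\phi)\!=\!\phi^{4}\) and \(w(\phi)\!=\!1-\phi\), the local (non-gradient) part of the energy density in Eq. (S1), the dissipation length \(\xi\!=\!\sqrt{\kappa/2e_{\rm c}}\) and the dimensionless parameter \(\beta\!=\!\tau c_{\rm s}/\xi\).

```cpp
#include <cmath>

// Illustrative only: closure relations of the phase-field model quoted above.
// Degradation functions, f(phi) = g(phi) = phi^4 and w(phi) = 1 - phi.
double g_of_phi(double phi) { return phi * phi * phi * phi; }
double f_of_phi(double phi) { return g_of_phi(phi); }
double w_of_phi(double phi) { return 1.0 - phi; }

// Local (non-gradient) part of the potential energy density in Eq. (S1):
// g(phi)*e(u) + w(phi)*e_c, where e_elastic is the elastic energy density e(u)
// at the point in question; the gradient term 0.5*kappa*|grad(phi)|^2 is omitted here.
double local_energy_density(double phi, double e_elastic, double e_c) {
  return g_of_phi(phi) * e_elastic + w_of_phi(phi) * e_c;
}

// Dissipation length xi = sqrt(kappa/(2*e_c)) and the dimensionless group beta = tau*c_s/xi.
double dissipation_length(double kappa, double e_c) { return std::sqrt(kappa / (2.0 * e_c)); }
double beta_parameter(double tau, double c_s, double kappa, double e_c) {
  return tau * c_s / dissipation_length(kappa, e_c);
}
```

With these relations, \(\Gamma_{0}\!\sim\!e_{\rm c}\xi\), and the two dimensionless groups \(\beta\) and \(e_{\rm c}/\mu\) discussed next follow directly.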
Once done, the dimensionless set of equations depends on two dimensionless parameters: \(e_{\rm c}/\mu\) (the ratio between the dissipation onset threshold \(e_{\rm c}\) and a characteristic elastic modulus) and on \(\beta\!=\!\tau\,c_{\rm s}/\xi\) (where we defined \(\tau\!\equiv\!(2\chi e_{\rm c})^{-1}\)), which controls the \(v\)-dependence of the fracture energy, \(\Gamma(v)\), as discussed in the manuscript. As discussed extensively in [4; 5; 6], near crack edge elastic nonlinearity -- embodied in Eq. (S5) in the Green-Lagrange strain tensor \(\mathbf{E}\) -- gives rise to a nonlinear elastic lengthscale \(\ell_{\rm nl}\) that scales as \(\ell_{\rm nl}/\xi\sim e_{\rm c}/\mu\). In the calculations in the context of the high-speed oscillatory instability, cf. Fig. 1a in the manuscript, we set \(e_{\rm c}/\mu\!=\!0.02\). The latter leads to a sizable nonlinear elastic lengthscale \(\ell_{\rm nl}\) in the ultra-high crack propagation velocities regime considered therein (\(v\!\rightarrow\!c_{\rm n}\)), which controls the wavelength of oscillations (note, though, that it was shown [6] that the high-speed oscillatory instability persists also in the limit \(\ell_{\rm nl}/\xi\!\rightarrow\!0\), where the wavelength is controlled by \(\xi\)). In the rest of our calculations, where the dynamics of crack front waves (FWs) were of interest, we focused on a linear elastic behavior, where \(\ell_{\rm nl}\) is negligibly small. The latter is ensured by setting \(e_{\rm c}/\mu\!=\!0.005\) and considering \(v\!\leq\!0.7c_{\rm s}\). Consequently, as stated in the manuscript, in all of our FW-related calculations, the material is essentially linear elastic and the only relevant intrinsic lengthscale is the dissipation length \(\xi\). The rate of dissipation parameter \(\beta\) was varied between \(\beta\!=\!0.28\) and \(\beta\!=\!2.8\), as discussed in the manuscript. Our calculations were performed in boxes of length \(L_{x}\) in the crack propagation direction \(x\), height \(L_{y}\) in the loading direction \(y\) and \(L_{z}\) in the thickness direction \(z\). In all of our calculations, we set \(L_{x}\!=\!150\xi\). However, we employed a treadmill procedure (as explained in [6]), which allows to simulate very large crack propagation distances. Consequently, our system is effectively infinite in the crack propagation direction. In Fig. 2a in the manuscript, where our focus was on testing the reproducibility of the high-speed oscillatory instability in the thin, quasi-2D limit, we used \(L_{z}\!=\!6\xi\) and a large \(L_{y}\). This calculation also employed traction-free boundary conditions at \(z\!=\!0\) and \(z\!=\!L_{z}\). In the rest of our calculations, which focused on FW dynamics, we were interested in thick systems. To that aim, we used \(L_{z}\!=\!350\xi\) (note that in the illustrative Fig. 1b in the manuscript, we showed a smaller \(L_{z}\) for visual clarity) and periodic boundary conditions in \(z\). Due to the enormous computational cost involved in our large-scale calculations, employing such a large \(L_{z}\) implies that \(L_{y}\) is rather constrained. In all of the FW calculations we used \(L_{y}\!=\!150\xi\). The loading conditions at \(y\!=\!0\) and \(y\!=\!L_{y}\) are discussed in the manuscript. Note that the crack propagation velocity \(v\) is set by controlling the crack driving force \(G\) (through the loading conditions), following energy balance \(\Gamma(v)\!=\!G\). The resulting field equations corresponding to Eqs. (S4), cf. Eqs. 
(A.1)-(A.3) in [6], are spatially discretized in 3D on a cubic grid with a discretization size \(\Delta x\!=\!\Delta y\!=\!\Delta z\!=\!0.25\xi\), following the same spatial discretization scheme described in [6], straightforwardly extended from 2D to 3D. The temporal discretization (at any spatial grid point) involves different schemes for the scalar phase-field \(\phi\) and the vectorial displacement field \(\mathbf{u}\). For the former, we employ a simple forward Euler scheme \(\phi_{n+1}\!=\!\phi_{n}\!+\!\dot{\phi}_{n}\Delta t\) as in [6], where the subscript \(n\) refers to the current time step, \(t_{n}\!=\!n\Delta t\), with \(\Delta t\) being the discrete time step size. For \(\mathbf{u}\), we developed a specifically-adapted Velocity Verlet scheme. As in the conventional Velocity Verlet scheme [52], the displacement \(\mathbf{u}_{n+1}\) is given to second order in \(\Delta t\) as \(\mathbf{u}_{n+1}\!=\!\mathbf{u}_{n}+\mathbf{v}_{n}\Delta t+\frac{1}{2}\mathbf{a}_{n}\Delta t^{2}\), in terms of \(\mathbf{u}_{n}\), the velocity \(\mathbf{v}_{n}\) and the acceleration \(\mathbf{a}_{n}\). The appearance of the degradation function \(f(\phi)\) in the kinetic energy in Eq. (S2) implies that \(\mathbf{a}_{n+1}\) depends on \(\mathbf{v}_{n+1}\) itself (cf. Eq. (A.3) in [6]), and hence the conventional Velocity Verlet [52] expression for \(\mathbf{v}_{n+1}\), i.e. \(\mathbf{v}_{n+1}\!=\!\mathbf{v}_{n}\!+\!\frac{1}{2}(\mathbf{a}_{n}\!+\!\mathbf{a}_{n+1})\Delta t\), cannot be used (since, as explained, \(\mathbf{a}_{n+1}\) depends on \(\mathbf{v}_{n+1}\)). Instead, we defined an auxiliary acceleration \(\tilde{\mathbf{a}}_{n+1}\) that was estimated using an auxiliary velocity \(\tilde{\mathbf{v}}_{n+1}\!=\!\mathbf{v}_{n}\!+\!\mathbf{a}_{n}\Delta t\), from which we estimated \(\mathbf{v}_{n+1}\) according to \(\mathbf{v}_{n+1}\!=\!\mathbf{v}_{n}\!+\!\frac{1}{2}(\mathbf{a}_{n}+\tilde{\mathbf{a}}_{n+1}) \Delta t\). This specifically-adapted Velocity Verlet scheme involved the estimation of the auxiliary acceleration \(\tilde{\mathbf{a}}_{n+1}\), which entails the computation of the divergence of the stress tensor (cf. Eq. (A.3) in [6]). The latter, whose computation is a serious bottleneck, was reused to evaluate \(\mathbf{a}_{n+1}\) at the next time step. This reuse of the divergence of the stress gives rise to more than a two-fold speedup in run-times compared to the temporal discretization scheme used in [6], which is essential for the very demanding 3D computations. Finally, the time step size \(\Delta t\) is set according to the \(\beta\) parameter, taking into account the associated stability condition of the diffusion-like \(\dot{\phi}\) equation (\(\Delta t\) of course also satisfies the CFL condition, which is less stringent in our case). All of our calculations are perform on a single GPU (NVIDIA TeslaV100_SXM2, QuadroRTX8000 or QuadroRTX6000) available on WEXAC (Weizmann EX-Ascale Cluster), which is a large-scale supercomputing resource at Weizmann Institute of Science. Our computations are very demanding in terms of memory, typically involving \(\sim\!40\)GB of memory per simulation. Consequently, all data analysis has to be performed on the fly, as it is simply not practical to save snapshots of the fields. To that end, we used Matlab's C++ engine that enables to execute Matlab scripts during run-time. 
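Returning to the temporal discretization described above, the update rule can be summarized concretely. The following C++ sketch shows one explicit time step for a single scalar field component on a flattened grid; it is an illustration only, with hypothetical callbacks `phiRate` and `accel` standing in for the discretized right-hand sides (cf. Eqs. (A.1)-(A.3) of [6]), and it is not the actual GPU implementation.

```cpp
#include <cstddef>
#include <functional>
#include <vector>

using Field = std::vector<double>;  // one field component on the flattened 3D grid

// One explicit time step of the scheme described above (illustrative sketch only):
// forward Euler for the phase field and the adapted Velocity Verlet for a
// displacement component. `phiRate` and `accel` are placeholders for the
// evaluation of d(phi)/dt and of the acceleration, respectively.
void timeStep(Field& phi, Field& u, Field& v, Field& a, double dt,
              const std::function<Field(const Field&, const Field&)>& phiRate,
              const std::function<Field(const Field&, const Field&, const Field&)>& accel) {
  const Field dphi = phiRate(phi, u);
  for (std::size_t i = 0; i < phi.size(); ++i) {
    phi[i] += dphi[i] * dt;                    // phi_{n+1} = phi_n + (dphi/dt)_n * dt
    u[i] += v[i] * dt + 0.5 * a[i] * dt * dt;  // u_{n+1} = u_n + v_n*dt + 0.5*a_n*dt^2
  }
  Field vAux(v.size());
  for (std::size_t i = 0; i < v.size(); ++i) {
    vAux[i] = v[i] + a[i] * dt;                // auxiliary velocity v~_{n+1} = v_n + a_n*dt
  }
  const Field aAux = accel(u, vAux, phi);      // auxiliary acceleration a~_{n+1}
  for (std::size_t i = 0; i < v.size(); ++i) {
    v[i] += 0.5 * (a[i] + aAux[i]) * dt;       // v_{n+1} = v_n + 0.5*(a_n + a~_{n+1})*dt
  }
  // a_{n+1}: in the actual implementation the divergence of the stress computed
  // for a~_{n+1} is reused here, which is the source of the reported speedup.
  a = accel(u, v, phi);
}
```

Splitting the velocity update around the auxiliary acceleration is what accommodates the dependence of \(\mathbf{a}_{n+1}\) on \(\mathbf{v}_{n+1}\) without an implicit solve.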
In order to maximize performance, our computational platform is entirely implemented using C/C++ and CUDA, with typical simulation times of a few days per simulation, depending on the parameters. **A**. _FWs generation and discrete heterogeneities/asperities_ As explained in the manuscript, FWs generation involves 3 parameters, the steady-state crack front velocity \(v\), the asperity radius \(R\) and its dimensionless fracture energy contrast \(\delta\Gamma\!\equiv\!\Delta\Gamma/\Gamma_{0}\). To obtain a steadily propagating crack, we first introduced a planar crack and iteratively relaxed the elastic fields until reaching a mechanical equilibrium state under a prescribed loading. The latter corresponds to a given crack driving force \(G\). Then, the crack was allowed to propagate until reaching a steady-state according to energy balance \(\Gamma(v)\!=\!G\), as explained above. FWs are excited by allowing the steadily propagating planar crack to interact with discrete heterogeneities in the form of tough spherical asperities. To generate asperities, we introduce an auxiliary static (quenched) "noise field" \(\zeta(\mathbf{x})\), which can be coupled to any physical parameter in the fracture problem. This coupling is achieved by transforming an originally spatially uniform parameter \(\alpha_{0}\) into a field of the form \(\alpha(\mathbf{x})=\alpha_{0}[1+\alpha_{\zeta}\zeta(\mathbf{x})]\), where \(\alpha_{{}_{\zeta}}\) is a coupling coefficient. We applied this formulation to the fracture energy, whose quasi-static value scales as \(\Gamma_{0}\sim e_{\text{c}}\xi\sim\sqrt{\kappa e_{\text{c}}}\), by simultaneously coupling \(\kappa\), \(e_{\text{c}}\) and \(\chi\) to \(\zeta(\mathbf{x})\), while keeping \(\xi\sim\sqrt{\kappa e_{\text{c}}}\) and \(\tau\sim(\chi e_{\text{c}})^{-1}\) fixed. This choice ensures that \(\beta\!=\!\tau e_{\text{s}}/\xi\!\sim\!(\chi e_{\text{c}}\xi)^{-1}\) remains fixed, i.e. the asperities feature an overall dimensionless fracture energy contrast \(\delta\Gamma\!\equiv\!\Delta\Gamma/\Gamma_{0}\) (controlled by \(\kappa e_{\text{c}}\)) compared to the homogeneous surrounding material, but the very same fracture rate dependence \(d\Gamma(v)/dv\) (controlled by \(\beta\)). Finally, discrete spherical asperities are obtained by choosing \(\zeta(\mathbf{x})\) with a compact support in the form \(\zeta(\mathbf{x})\!=\!(1-|\mathbf{x}-\mathbf{x}_{0}|/R)^{5}\) for \(|\mathbf{x}-\mathbf{x}_{0}|\!\leq\!R\) and \(\zeta(\mathbf{x})\!=\!0\) elsewhere. Here \(\mathbf{x}_{0}\) is the location of the center of the asperity and \(R\) is its radius, as defined in the manuscript. Asperities are allowed to overlap by simply summing the contributions of the individual asperities to the noise field. ### Additional supporting results In this section, we provide additional supporting results that are referred to in the manuscript. First, in Fig. S1 we show that in-plane FWs approximately inherit their scale, both amplitude and width, from the asperity size \(R\). This is similar to experimental findings reported in relation to the out-of-plane component of FWs [12; 13; 14]. In Fig. 2 in the manuscript, we showed that FW generation is accompanied by an initial velocity overshoot \(\Delta v_{{}_{\alpha}}(t)\) that develops ahead of the asperity, after the latter is broken. We found that the maximal velocity overshoot, \(\max[\Delta v_{{}_{\alpha}}]\), controls the amplitude \(\Delta v_{x}\) of the generated FW. 
We also found that \(\Delta v_{\rm os}\) varies approximately linearly with \(\delta\Gamma\) for fixed \(v\) and \(R\) (not shown). In Fig. S2, we show that \(\Delta v_{x}\) varies predominantly linearly with \(\max[\Delta v_{\rm os}]\), when the latter is varied by varying \(\delta\Gamma\) for fixed \(v\) and \(R\).

### Supporting movies

A major merit of the employed 3D computational framework is that it enables tracking crack evolution in 3D in real (computer) time. Consequently, we supplement the results presented in the manuscript with movies of the corresponding 3D dynamics. The Supplemental Materials include 6 movies, which can be downloaded from this link: Download Supplementary Movies, described as follows:

* **MovieS1**: A movie that shows FW generation and propagation prior to FW-FW interaction, following Fig. 2a in the manuscript. In the latter, equal time interval snapshots were presented. The snapshots therein were shifted according to \(0.006\,c_{\text{s}}t/\xi\) to demonstrate FW propagation.
* **MovieS2**: The same calculation as in MovieS1 and Fig. 2 in the manuscript, here showing the phase field \(\phi(\mathbf{x},t)=1/2\) iso-surface. Note the different scales of the axes.
* **MovieS3**: A movie that corresponds to the FW-FW interaction shown in Fig. 5a in the manuscript. In the latter, equal time interval snapshots were presented. The snapshots therein were shifted according to \(0.004\mathrm{c_{s}}(t-t_{0})/\xi\) to demonstrate FW propagation.
* **MovieS4**: A movie that corresponds to the coupled in- and out-of-plane perturbation induced by two asperities as in Fig. 6 in the manuscript, albeit under pure mode I (no mode III). The movie shows that coupled in- and out-of-plane components are generated by the perturbation, but that the out-of-plane component decays, while the in-plane component persistently propagates.
* **MovieS5**: A movie that corresponds to Fig. 6 in the manuscript, i.e. it is identical to MovieS4, but with a mode-mixity (mode III/I) of 3%. Note that in Fig. 6 in the manuscript, snapshots corresponding to the left \(y\) axis were shifted according to \(0.05\times 0.4\,\mathrm{c_{s}}(t-t_{0})/\xi\), while those corresponding to the right \(y\) axis were shifted according to \(0.4\,\mathrm{c_{s}}(t-t_{0})/\xi\).
* **MovieS6**: The same as MovieS5, but with a mode-mixity (mode III/I) of 5%. The resulting coupled in- and out-of-plane FW features an out-of-plane component that approximately scales with the level of mode-mixity.
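Finally, as a purely illustrative complement to the asperity construction described in the FW-generation subsection above (the names below are ours, and the snippet is not the production CUDA code), the quenched noise field \(\zeta(\mathbf{x})\!=\!(1-|\mathbf{x}-\mathbf{x}_{0}|/R)^{5}\) and the resulting modulated material parameter \(\alpha(\mathbf{x})=\alpha_{0}[1+\alpha_{\zeta}\zeta(\mathbf{x})]\) can be evaluated as:

```cpp
#include <array>
#include <cmath>
#include <vector>

// One tough spherical asperity: center x0 and radius R (in units of xi).
struct Asperity {
  std::array<double, 3> x0;
  double R;
};

// Compactly supported noise field zeta(x) = (1 - |x - x0|/R)^5 inside an asperity
// and 0 elsewhere; overlapping asperities are summed, as described above.
double zeta(const std::array<double, 3>& x, const std::vector<Asperity>& asperities) {
  double z = 0.0;
  for (const auto& a : asperities) {
    const double dx = x[0] - a.x0[0];
    const double dy = x[1] - a.x0[1];
    const double dz = x[2] - a.x0[2];
    const double r = std::sqrt(dx * dx + dy * dy + dz * dz);
    if (r <= a.R) z += std::pow(1.0 - r / a.R, 5.0);
  }
  return z;
}

// A spatially uniform parameter alpha0 (e.g. kappa, e_c or chi) becomes the field
// alpha(x) = alpha0 * (1 + alpha_zeta * zeta(x)).
double modulated_parameter(double alpha0, double alpha_zeta,
                           const std::array<double, 3>& x,
                           const std::vector<Asperity>& asperities) {
  return alpha0 * (1.0 + alpha_zeta * zeta(x, asperities));
}
```

Coupling \(\kappa\), \(e_{\text{c}}\) and \(\chi\) through the same \(\zeta(\mathbf{x})\), as explained above, keeps \(\xi\) and \(\tau\) (and hence \(\beta\)) fixed while raising the local fracture energy by the contrast \(\delta\Gamma\).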
2309.10689
ReShader: View-Dependent Highlights for Single Image View-Synthesis
In recent years, novel view synthesis from a single image has seen significant progress thanks to the rapid advancements in 3D scene representation and image inpainting techniques. While the current approaches are able to synthesize geometrically consistent novel views, they often do not handle the view-dependent effects properly. Specifically, the highlights in their synthesized images usually appear to be glued to the surfaces, making the novel views unrealistic. To address this major problem, we make a key observation that the process of synthesizing novel views requires changing the shading of the pixels based on the novel camera, and moving them to appropriate locations. Therefore, we propose to split the view synthesis process into two independent tasks of pixel reshading and relocation. During the reshading process, we take the single image as the input and adjust its shading based on the novel camera. This reshaded image is then used as the input to an existing view synthesis method to relocate the pixels and produce the final novel view image. We propose to use a neural network to perform reshading and generate a large set of synthetic input-reshaded pairs to train our network. We demonstrate that our approach produces plausible novel view images with realistic moving highlights on a variety of real world scenes.
Avinash Paliwal, Brandon Nguyen, Andrii Tsarov, Nima Khademi Kalantari
2023-09-19T15:23:52Z
http://arxiv.org/abs/2309.10689v3
# ReShader: View-Dependent Highlights for Single Image View-Synthesis

###### Abstract.
In recent years, novel view synthesis from a single image has seen significant progress thanks to the rapid advancements in 3D scene representation and image inpainting techniques. While the current approaches are able to synthesize geometrically consistent novel views, they often do not handle the view-dependent effects properly. Specifically, the highlights in their synthesized images usually appear to be glued to the surfaces, making the novel views unrealistic. To address this major problem, we make a key observation that the process of synthesizing novel views requires changing the shading of the pixels based on the novel camera, and moving them to appropriate locations. Therefore, we propose to split the view synthesis process into two independent tasks of pixel reshading and relocation. During the reshading process, we take the single image as the input and adjust its shading based on the novel camera. This reshaded image is then used as the input to an existing view synthesis method to relocate the pixels and produce the final novel view image. We propose to use a neural network to perform reshading and generate a large set of synthetic input-reshaded pairs to train our network. We demonstrate that our approach produces plausible novel view images with realistic moving highlights on a variety of real world scenes.

Key words and phrases: View synthesis, neural network, reshading, relocation
## 1. Introduction

The rapid advancements in 3D scene representation and image inpainting techniques have led to remarkable progress in single image view synthesis in recent years. Despite this, the existing techniques focus on producing geometrically consistent novel views and mostly ignore the view-dependent effects. For example, a number of techniques [11, 12] handle this application in a _modular_ manner. These approaches estimate the depth from the input and use it to decompose the scene into multiple layers. These depth layers are then warped to the novel view and composed together to form the final image. Unfortunately, these methods treat the highlights, which are quite common in real scenes, as textures and warp them to the novel views along with other areas. Therefore, as shown in Fig. 2, the highlights in their synthesized views appear to be glued to the surfaces, making their results unrealistic. On the other hand, several approaches [13, 14, 15] handle this problem by learning the process in an _end-to-end_ manner. These techniques learn the entire view synthesis pipeline either directly [14], or through various scene representations, such as neural radiance fields (NeRF) [23] and multiplane images (MPI) [13, 12]. Although they could potentially handle the view-dependent effects, these techniques often struggle to properly reconstruct the moving highlights. Our main observation is that both the shading and projected pixel location of a 3D surface point change between the input and novel view images. Modular approaches overlook the view-dependent shading, focusing solely on pixel relocation. The end-to-end approaches, on the other hand, aim to learn to move the pixels and change their shading within a unified system. However, the majority of effort is dedicated to learning pixel relocation, as the contribution of the shading mismatch to their training loss is often minimal. Guided by this observation, our key contribution is to break down the novel view synthesis process into two tasks: pixel reshading and relocation (see Fig. 1). During the reshading process, we only adjust the shading of the input image according to the novel camera.
We then perform pixel relocation on the reshaded image, using the modular method by Wang et al. [2022], to obtain the final novel view image. We propose to learn the reshading process using a neural network that takes a single image as well as the relative novel camera position as the input and produces the reshaded image. Since there are no publicly available datasets of input-reshaded image pairs, we render a large number of synthetic image pairs for training. We train our reshading network on this newly introduced dataset using a perceptual loss to ensure producing plausible, but detailed reshaded images. We demonstrate that our method produces high-quality novel view images with plausible moving highlights on a wide range of real scenes. ## 2. Related Work The problem of view synthesis has been extensively studied and many powerful multi- and single-image methods have been developed [16, 17, 18, 19]. A complete literature review is beyond the scope of this paper. Here, we mainly focus on approaches that use a single image as the input. We also discuss image relighting methods as they are relevant to the focus of our paper. ### Single Image View Synthesis We discuss these approaches by categorizing them into two classes of modular and end-to-end. The modular methods [11, 12, 13, 14, 15] break down the process into multiple components and address each component separately. Specifically, these techniques divide the view synthesis pipeline into depth estimation, image warping, and image inpainting. The individual methods differ in how they handle each stage of the pipeline. For example, Niklaus et al. [14] train a depth estimation network and use it to directly reproject the input image to the novel view. On the other hand, Shih et al. [14] obtain the depth using an existing method [14] and reconstructs layered depth image (LDI) representation [22] to warp the input image to the novel view. These techniques, however, primarily focus on pixel relocation and overlook the pixel reshading process. As a result, they produce results with incorrect view-dependent effects, where the highlights appear to be glued to the surfaces (see Fig. 2). A category of modular methods focus on handling the view-dependent effects by first decomposing the image(s) into multiple layers (e.g., diffuse and reflective), warping each layer separately, and blending them to generate the final image. However, most of these techniques are either specifically designed for rendering [10, 16] where ground truth scene information (e.g., geometry and material) is available, or require multiple images [15, 12, 13]. In contrast to the modular approaches, a number of techniques [13, 14, 15, 16, 17] attempt to learn the entire view synthesis process in an end-to-end manner. Zhou et al. [21] propose to estimate optical flows at novel views and use the estimated flow to backward warp the input image. The flow estimation network is trained by minimizing the loss between the synthesized and ground truth novel view images. Srinivasan et al. [14] propose to estimate a light field from a single image using a convolutional neural network (CNN). Several approaches use a network to estimate intermediate representations, such as point cloud [23], multiplane images (MP) [13, 12, 14], and neural radiance field (NeRF) [23]. Since these approaches perform Figure 2. We compare our results against 3D Moments by Wang et al. [2022]. 3D Moments reconstructs the novel image by moving the input pixels according to their depth values. 
As such, the highlights are treated as textures and appear to be glued to the wooden table. Our approach, however, is able to properly move the highlights over the table. The red crosses mark the same location on the table. Note that the cross is inside the highlight in the input and 3D Moment’s results, but it appears to be outside the highlight in our results. end-to-end training, they could potentially learn to handle the view-dependent effects. However, highlights are usually concentrated in small regions, and thus the shading mismatch does not significantly contribute to the loss function. As such, these methods often are not able to produce results with proper moving highlights. Recently, several approaches [12, 13, 14, 15, 16] have proposed to address this problem using diffusion models [15]. Some of these techniques [13, 14] produce novel view images of only single objects or simple scenes. Others [12, 13] handle complex scenes and produce impressive walkthroughs from a single image. However, when synthesizing views that are relatively close to the input, the quality of their synthesized images are not on par with the existing modular or MPI-based techniques. ### Image Relighting Image relighting is the process of reconstructing images of a scene under different illumination. This problem is highly related to inverse rendering where the aim is to estimate the image formation factors (e.g., shape, reflectance, lighting) of a scene. Several methods propose to handle this application either by directly estimating the relit images [11], estimating the individual factors [11], or by utilizing NeRF [15, 16, 17, 18]. However, these approaches focus on simple scenes or single objects, and require multiple images as the input. For more complex scenes, Philip et al. [16] propose a relighting approach for outdoor scenes, while Philip et al. [16] and Wu et al. [17] focus on indoor scenes. However, both of these techniques use several images of the scene as the input. Several techniques [10, 17, 18, 19] propose to estimate all the image formation factors including shape, reflectance, and lighting, from a single image. Sengupta et al. [16] propose an inverse rendering network to estimate albedo, normal, and a single environment lighting. Li et al. [16] extend this work to estimate per-pixel lighting, as well as roughness and depth. Wang et al. [16] further propose to estimate 3D lighting of the scene through volumetric spherical Gaussian. Moreover, Li et al. [17] present a holistic scene reconstruction system that estimates the reflectance, shape, and parametric 3D lighting. These techniques demonstrate impressive results for object insertion, material editing, and dramatic lighting change [10] (e.g., covering a window). While they could potentially be used to perform pixel reshading, these methods do not meet the quality requirement for our application. ## 3. Algorithm Given a single RGB image \(I\), captured with a camera at location \(\mathbf{c}\), our primary goal is to synthesize an image \(I^{\prime}\) from a novel view \(\mathbf{c}^{\prime}\). Similar to most existing methods [10, 10], we assume the depth can be obtained with a reasonable accuracy using single image depth estimation techniques [10]. We begin by discussing the rendering equation [13], a reasonably expressive rendering model, to describe the relationship between the input and novel view images. 
Formally, the rendering equation describes the total outgoing radiance \(L_{o}(\mathbf{x},\omega_{o})\) at a 3D point \(\mathbf{x}\) along the viewing direction \(\omega_{o}\) as follows: \[L_{o}(\mathbf{x},\omega_{o})=L_{e}(\mathbf{x},\omega_{o})+\int_{\Omega}f_{r}(\mathbf{x},\omega_{o},\omega_{i})\ L_{i}(\mathbf{x},\omega_{i})\ \cos(\theta_{i})\ d\omega_{i}, \tag{1}\] where \(L_{e}\) and \(L_{i}\) are the emitted and incoming radiances, respectively, \(\omega_{i}\) is the incoming direction, and \(f_{r}\) is the bidirectional reflectance distribution function (BRDF). Moreover, \(\theta_{i}\) is the angle between \(\omega_{i}\) and the surface normal, and the integral is taken over the entire hemisphere \(\Omega\) over the surface point. As shown in Fig. 3, the appearance of a surface point \(\mathbf{x}\) in the input and novel images is determined by the outgoing radiance \(L_{o}(\mathbf{x},\omega_{o}^{\mathbf{x}-\mathbf{c}})\) and \(L_{o}(\mathbf{x},\omega_{o}^{\mathbf{x}-\mathbf{c}^{\prime}})\), respectively. Here, \(\omega_{o}^{\mathbf{x}-\mathbf{c}}\) is the direction from the surface point to the input camera location \(\mathbf{c}\), while \(\omega_{o}^{\mathbf{x}-\mathbf{c}^{\prime}}\) represents the direction to the novel camera at position \(\mathbf{c}^{\prime}\). Based on this analysis, we observe that the appearance of point \(\mathbf{x}\) in the input and novel images differs in two major ways: **1)** The point \(\mathbf{x}\) appears with different shadings in the input and novel images as its appearance is determined by \(L_{o}(\mathbf{x},\omega_{o}^{\mathbf{x}-\mathbf{c}})\) and \(L_{o}(\mathbf{x},\omega_{o}^{\mathbf{x}-\mathbf{c}^{\prime}})\), respectively. **2)** The location of this point in the two images is different; \(\mathbf{p_{x}}\) and \(\mathbf{p_{x}^{\prime}}\) in the input and novel images, respectively. This is determined by the intersection of the rays along directions \(\omega_{o}^{\mathbf{x}-\mathbf{c}}\) and \(\omega_{o}^{\mathbf{x}-\mathbf{c}^{\prime}}\) with the image planes of the input and novel cameras, respectively. Therefore, we can describe the view synthesis process through two tasks of pixel reshading and relocation, as shown in Fig. 4. Figure 4. We show an input and a novel view image. The same point on the table appears at different locations and with different shadings in the input and novel images. Therefore, the view synthesis process can be divided into two tasks of pixel reshading and relocation. Figure 3. We visualize the image formation process for the input (\(\mathbf{c}\)) and novel (\(\mathbf{c}^{\prime}\)) cameras. A surface point \(\mathbf{x}\) appears at two different locations (\(\mathbf{p_{x}}\) and \(\mathbf{p_{x}^{\prime}}\)) in the input and novel images. Moreover, the shading of point \(\mathbf{x}\) in the two images is determined by \(L_{o}(\mathbf{x},\omega_{o}^{\mathbf{x}-\mathbf{c}})\) and \(L_{o}(\mathbf{x},\omega_{o}^{\mathbf{x}-\mathbf{c}^{\prime}})\), and thus is different. Note that the incoming radiance \(L_{i}\), surface normal (and consequently \(\theta_{i}\)), and the BRDF (shown with the curly black line) are the same for both the input and novel view images. Existing modular approaches (Jampani et al., 2021; Shih et al., 2020) synthesize novel view images by warping the input image to the novel view using the input depth. As such, they mainly focus on the pixel relocation task and ignore the pixel reshading process, which is responsible for the view-dependent effects.
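To make the geometric part of this observation concrete, the two outgoing directions can be computed per pixel from the input depth and the relative novel camera position alone. Below is a minimal sketch assuming a simple pinhole camera with known intrinsics (the intrinsics matrix `K` and all variable names are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def outgoing_directions(depth, K, novel_cam_pos):
    """For each pixel, unproject to a 3D point x (input camera at the origin), then
    return unit directions from x to the input camera and to the novel camera,
    i.e. omega_o^{x-c} and omega_o^{x-c'}."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w) + 0.5, np.arange(h) + 0.5)
    pix = np.stack([u, v, np.ones_like(u)], axis=-1)       # homogeneous pixel coords
    rays = pix @ np.linalg.inv(K).T                        # camera-space rays
    x = rays * depth[..., None]                            # 3D points in the input camera frame
    to_input = -x                                          # input camera sits at the origin
    to_novel = np.asarray(novel_cam_pos, dtype=float) - x  # relative novel camera position
    to_input /= np.linalg.norm(to_input, axis=-1, keepdims=True)
    to_novel /= np.linalg.norm(to_novel, axis=-1, keepdims=True)
    return to_input, to_novel

# Example: a flat depth map and a novel camera shifted 0.2 units to the right.
K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
w_in, w_out = outgoing_directions(np.full((480, 640), 2.0), K, (0.2, 0.0, 0.0))
```

In these terms, reshading amounts to re-evaluating Eq. (1) along `w_out` instead of `w_in` at every pixel, while relocation amounts to reprojecting the points \(\mathbf{x}\) into the novel image plane.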
The end-to-end systems (Han et al., 2022; Li and Kalantari, 2020), on the other hand, attempt to learn both pixel reshading and relocation processes by minimizing the loss between the estimated and ground truth novel view images. However, these systems often ignore the pixel reshading task as the contribution of the shading differences to the appearance loss is small; view-dependent highlight are often concentrated in small regions in each scene. As such, these techniques are not able to properly handle the view-dependent effects. To address this problem, we propose to treat pixel reshading and relocation as two independent tasks. Specifically, we first adjust the shading of the input image according to the novel view camera. We then use the reshaded image as the input to the approach by Wang et al. (2022) to relocate the pixels and produce the final image. Below we discuss our approach in detail. ### Pixel Reshading Our goal is to take the input image \(I\) and produce a reshaded image \(I_{t}\) that has the same shading as the novel view image. This necessitates changing the shading of input pixel \(\mathbf{p_{x}}\) from \(L_{0}(\mathbf{x},\omega_{0}^{\mathbf{x-c}})\), to the shading of the corresponding pixel in the novel image \(\mathbf{p_{x}^{\prime}}\), i.e., \(L_{0}(\mathbf{x},\omega_{0}^{\mathbf{x-c^{\prime}}})\). Note that at this stage, we are not interested in pixel relocation and reshading occurs in the input camera frame. According to the rendering equation (Eq. 1), performing the reshading process requires estimating various components: the lighting \(L_{e}\) (emitters), material properties \(f_{r}\), incoming radiance from all directions going through the hemisphere \(L_{t}\), and the normals (to compute \(\theta_{t}\)). Once these quantities are estimated, it is possible to recompute the shading of pixel \(\mathbf{p_{x}}\) in the input image, by evaluating the integral in Eq. 1 using the outgoing direction of the corresponding pixel in the novel view image \(\omega_{0}^{\mathbf{x-c^{\prime}}}\). Note that the outgoing direction can be easily inferred from the input depth and the camera positions (provided relatively to avoid the need for estimating the input camera pose). Unfortunately, estimating all of the aforementioned quantities from a single image is an extremely challenging problem. While there are existing techniques (Li et al., 2020, 2022; Sengupta et al., 2019; Wang et al., 2021) that estimate these various factors to a great extent, the quality of their re-rendered images falls short of the requirements for our view synthesis application. Therefore, we instead propose to directly learn the reshaded image from the input image using a neural network. Although simple, as shown in Sec. 4 and in the supplementary video, our method is able to handle this challenging problem reasonably well and produce results with plausible moving highlights. In the following sections, we describe our dataset, inputs, architecture, and training process. ### Dataset To train our reshading network, we need a dataset of input-reshaded image pairs, which is currently not available. Unfortunately, obtaining such a dataset from real scenes is extremely challenging. Capturing the reshaded image necessitates taking a picture of the scene from the input camera view, but with the light rays going towards a different camera. 
One potential solution is to take a large number of images of a scene and use neural radiance field (NeRF) (Mildenhall et al., 2020) to reconstruct the radiance field of the scene. This radiance field can then be used to produce the reshaded images. However, generating a large scale dataset using this approach is difficult. Additionally, even the state-of-the-art approaches (Kerbl et al., 2023; Kopanas et al., 2022; Verbin et al., 2022) struggle to produce high-quality view-dependent effects on arbitrary surfaces. Figure 5. We visualize our modification to the path tracer to render the re-shaded images. We trace a primary ray to find the first intersection from the input camera. We then find the ray from the novel camera to this point (novel primary ray). This ray is then used for shading computation at the intersection point and generation of the secondary ray. Figure 6. For each training example in our dataset, we store the input and ground truth reshaded images, as well as the depth and validity mask. The red arrows point to the highlights in the input image that are moved in the reshaded image. Note that the objects in the reshaded image are in the same location as the input image, since reshading happens in the input camera frame. Small areas in the reshaded image (indicated by the green arrow) contain incorrect shading. We mask these out using the validity mask in our training loss. Therefore, we propose to generate our input-reshaded image pairs synthetically. Specifically, we use the Tungsten renderer [14] and render our input images using a large number of samples per pixel. We then slightly modify the path tracer to obtain the corresponding reshaded images, as shown in Fig. 5. To do this, we trace primary rays from the input camera (input primary ray) to find the first intersection points. We then calculate the rays connecting the novel camera to these intersection point (novel primary ray). These novel primary rays are then used for shading and generating all the additional secondary rays. An example input-reshaded image pair from our dataset is shown in Fig. 6 (top row). Note that some regions from the input image are occluded in the novel camera. We could easily detect and mask these areas by performing a visibility check with the novel primary ray. However, we choose not to do so to provide more content for our network to learn from. Most of the occluded areas will be shaded correctly as if they are not obscured from the camera. However, small regions (see the green arrow in Fig. 6), typically along the boundaries of objects, will be incorrectly shaded. These are the cases where the angle between the surface normal and novel primary ray is greater than 90 degrees. We detect these regions and create a validity mask, as shown in Fig. 6, which is used to mask out such areas when computing our training loss. Note that since we are using Monte Carlo rendering, each pixel is rendered by tracing a large number of rays. We mark a pixel as invalid if any of such rays does not satisfy our constraint. This is why the line in the validity mask appears to be thicker than the problematic region in the reshaded image. We use the above approach to generate our synthetic dataset using 9 scenes, shown in Fig. 7, provided by Bitterli [2016]. For each scene, we render 200 input-reshaded pairs by randomly placing the input and novel cameras inside the scene. We randomly choose the novel cameras inside a sphere, centered on the input camera, with radii ranging from 0.1 to 0.3. 
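Returning to the path-tracer modification sketched in Fig. 5, the per-hit logic is simple: the input primary ray only selects the surface point, while the novel primary ray is used for shading and for spawning secondary rays, and hits whose normal faces away from the novel camera are flagged for the validity mask. A minimal sketch of this step is given below (the function and variable names are illustrative, not those of the Tungsten renderer):

```python
import numpy as np

def novel_primary_ray(hit_point, hit_normal, novel_cam_pos):
    """Given the first intersection of an input primary ray, return the direction of the
    corresponding novel primary ray (from the novel camera towards the hit point) and a
    validity flag. The hit is invalid when the angle between the surface normal and the
    direction towards the novel camera exceeds 90 degrees."""
    to_hit = np.asarray(hit_point, dtype=float) - np.asarray(novel_cam_pos, dtype=float)
    to_hit /= np.linalg.norm(to_hit)
    # Shading and all secondary rays at this hit use `to_hit` as the incident direction.
    valid = float(np.dot(hit_normal, -to_hit)) > 0.0
    return to_hit, valid

# Example: a hit on an upward-facing floor, with the novel camera slightly to the right.
direction, valid = novel_primary_ray(
    hit_point=(0.0, 0.0, 2.0), hit_normal=(0.0, 1.0, 0.0), novel_cam_pos=(0.2, 0.5, 0.0))
```

In the actual dataset generation, a pixel is marked invalid in the mask if any of its Monte Carlo samples fails this test, which is why the masked band in Fig. 6 is slightly wider than the visibly mis-shaded region.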
Note that since all the scenes have similar global scale, the chosen radius range corresponds to a reasonable and uniform camera movement in all the training scenes. For every image pair, we randomly change the texture and material properties of the objects in the scene. By default, most scenes only use the environment map as the light source. To increase the robustness of our approach, we add multiple random colored orbs into the scene at random locations. We render 1280 x 720 high dynamic range (HDR) images with 8K samples per pixel and for each example, we store the input and reshaded images, as well as the depth, validity mask, and the metadata of the cameras. Our training data for one example is shown in Fig. 6. ### Inputs For our network to be able to properly reshade an input image, we need to provide the depth information along with the novel camera position to our network. The novel camera position is a 3-channel vector containing position of the novel camera relative to the input camera. Similar to most current single image view synthesis methods, we estimate the depth map using an existing single image depth estimation method [19, 202] in our implementation). Instead of passing the depth to our network, however, we first convert it to disparity. We then scale it by a factor of 1/4 and clamp it to one. This ensures that the disparity is in the range [0, 1] and it covers the depth from 0.25 to infinity. Moreover, we apply frequency encoding [13] with 5 frequencies (11 channels; original plus 5 sines and 5 cosines) to the input disparity to allow the network to effectively use the disparity, particularly for far away regions. Frequency encoding essentially increases the resolution of the disparity, while remaining in the range [0, 1]; similar disparity values will have significantly different representation in the frequency domain. To summarize, we use the input RGB image, frequency encoded disparity map, and the relative novel camera position as the input to our network to produce the reshaded image. The effect of using the disparity map and frequency encoding are shown in Figs. 11 and 12, respectively. ### Architecture We utilize a UNet [17] style encoder-decoder style architecture consisting of 5 downsampling/upsampling layers. The encoder takes the input image and frequency encoded disparity (3+11 channels) and produces a bottleneck feature map of size \(H/32\times W/32\times 512\), where \(H\) and \(W\) are the height and width of the input image, respectively. The three channel novel camera position vector is converted to a 125-channel feature vector using a multilayer perceptron (MLP) with a series of fully connected layers. This feature vector is then concatenated with the original 3-channel Figure 8. We show the architecture of our reshading network. Each convolutional layers (shown in orange) is followed by a LeakyReLU activation, except the last layer that uses tanh activation. We use average \(2\times 2\) pooling for downsampling, while we use bilinear upsampling to increase the resolution. We use an MLP to convert the 3 channel novel camera position vector to a 125-channel feature vector. We then concatenate the original camera position vector with this feature vector. The result is then replicated and attached to the bottleneck feature map. The dashed lines represent skip connections. Note that our network estimate the residual image which is added to the input to obtain the reshaded image. Figure 7. Scenes used to create the synthetic dataset. 
camera position vector to produce our novel camera features. This is then replicated and concatenated with the bottleneck feature map from the encoder (map of size \(H/32\times W/32\times 640\)). The concatenated feature map is then used as the input to the decoder to produce a 3-channel residual image. The residual is then added to the input to produce the reshaded image. Our architecture is shown in Fig. 8.

### Training

We perform a series of augmentations to improve the generalization ability of our network. We take \(384\times 384\) random crops of the HDR synthetic dataset and convert the input and ground truth reshaded pairs to low dynamic range images by applying random exposure (scale factor between 3 and 10) and gamma correction (\(\gamma\) between 2.2 and 5). In addition, we randomly scale the disparity by a factor of \(f\) and the camera position by a factor of \(1/f\) simultaneously. This increases the range of scene scales in our training data. Since this problem is highly ill-posed, we perform the training using a combination of \(\mathcal{L}_{1}\) and perceptual losses. Specifically, our loss consists of the following three terms: \[\mathcal{L}=\lambda_{1}\mathcal{L}_{1}+\lambda_{\text{vgg}}\mathcal{L}_{\text{VGG}}+\lambda_{\text{style}}\mathcal{L}_{\text{style}}, \tag{2}\] where the first term is the \(\mathcal{L}_{1}\) loss between the estimated and ground truth reshaded images and is defined as follows: \[\mathcal{L}_{1}=\|\tilde{I}_{\text{s}}-I_{\text{s}}\|_{1}. \tag{3}\] Moreover, the second term is a perceptual VGG-based loss and is defined as: \[\mathcal{L}_{\text{VGG}}=\|\phi(\tilde{I}_{\text{s}})-\phi(I_{\text{s}})\|_{2}^{2}, \tag{4}\] where \(\phi\) represents the output features from the conv4_4 layer of VGG-19 (Simonyan and Zisserman, 2014). Furthermore, the third term is a perceptual VGG-based style loss and is defined as: \[\mathcal{L}_{\text{style}}=\|G(\phi(\tilde{I}_{\text{s}}))-G(\phi(I_{\text{s}}))\|_{2}^{2}, \tag{5}\] where \(G\) computes the Gram matrix of the VGG features extracted from the estimated and ground truth reshaded images. Finally, \(\lambda_{1}\), \(\lambda_{\text{vgg}}\), and \(\lambda_{\text{style}}\) define the weight of each term in Eq. 2 and we set them to 1e-1, 1e-2, and 1, respectively. Note that we multiply the estimated and ground truth reshaded images by the validity mask before computing each loss term.

### Pixel Relocation

Once our reshading network is trained, we can use it to reshade the input image during inference. We then use the reshaded image as the input to the approach by Wang et al. (2022) to reconstruct the final novel view image. This approach is designed to perform view and time interpolation using near duplicate photos. However, all the operations related to view synthesis utilize a single image. Therefore, we isolate the view synthesis component and use it to generate novel views from a single image. The view synthesis component of this approach is an enhanced version of the technique by Shih et al. (2020). Specifically, using the depth, this method first constructs a layered depth image (LDI) representation (Shade et al., 1998). It then inpaints the occluded regions and produces LDI features using a network. The LDI features are then warped to the novel view and combined using a subsequent network to produce the final image. Note that our reshaded image is different for each view, which could potentially change the inpainting results, and consequently affect the coherency of the synthesized views.
However, we did not observe this effect in practice. As shown in the supplementary video, our results are coherent. We note that our approach can be combined with any view synthesis technique that focuses on pixel relocation. We demonstrate this in Table 1, where we examine the performance of our approach using Shih et al.'s method (2020) (3D Photo) for pixel relocation. Figure 9. We show comparisons against SVMPI (Tucker and Snavely, 2020) and 3D Moments (Wang et al., 2022). Only our approach is able to move the highlights in different views. Note that we carefully select the insets to cover roughly the same regions in the two views to be able to demonstrate the view-dependent effects.

## 4. Results

We implement our approach in PyTorch and use Adam (Kingma and Ba, 2015) with the default parameters for training. We use a learning rate of 1e-4 for 300K iterations and 1e-5 for another 200K iterations. Our training takes 5 days on an Nvidia 2080 Ti GPU. We compare our approach against single image view synthesis approaches by Tucker and Snavely (2020) (SVMPI) and Wang et al. (2022) (3D Moments). SVMPI is trained in an end-to-end manner on a multi-view dataset and ideally should be able to handle the view-dependent effects. On the other hand, 3D Moments, which we use for pixel relocation, is a modular approach that is not able to move the highlights. We use the code provided by the authors to generate the results. We use images from several datasets, including Holopix50K (Hua et al., 2020), Open Images V7 (Kuznetsova et al., 2020) and Shiny (Wizadwongsa et al., 2021). Here, we show the image results, but the differences can be better observed in the supplementary video. In Fig. 9, we show comparisons against the other techniques on five scenes. For each scene, we show the results for two different views. We have carefully selected the insets, so they roughly cover the same region in the two views. Therefore, each approach's ability to adjust the shading based on the view can be observed by comparing the two views. Overall, 3D Moments produces results where the shading of the two views is almost identical. In some cases, SVMPI slightly alters the position of the highlights, but when doing so, it disturbs the texture underneath. Additionally, it produces slightly overblurred results. Our approach, on the other hand, produces detailed images with moving highlights. For example, in the first and fourth rows, our approach moves the highlight to the right and left, respectively, when transitioning from view 1 to 2. Note that our method does not leak the highlights to the dark region in the top row and the diffuse key fob in the fourth row. In the second row, our method produces results with slightly darker shading in the second view, while keeping the underlying texture intact. Finally, in the third and last rows, our approach is able to properly move the highlights (to the left from view 1 to 2) on the red structure and the burger bun, respectively. Furthermore, we numerically compare our view synthesis results against the other techniques on three synthetic scenes in Table 1. To demonstrate that our approach can be used with any pixel relocation method, we show results with both 3D Moments (Wang et al., 2022) and Shih et al.'s approach (2020) (3D Photo). As seen, our approach improves the performance of both modular relocation methods. Note that SSIM and LPIPS are highly sensitive to the textures, but are not sensitive to the smooth highlights.
As such, these metrics do not fully reflect our quality improvement. Moreover, we evaluate our reshading network in isolation (see Table 2), by measuring the error between our synthesized and ground truth reshaded images. By appropriately moving the highlights, our approach produces results that are significantly closer to the ground truth than the input images (without reshading). This is shown visually in Fig. 10 for the Modern Hall example. Our approach properly moves the highlights (top row), and thus is able to synthesize a novel view image that better matches the ground truth than 3D Moments (bottom row). Next, we discuss the effect of several design choices in our approach numerically (Table 3) and visually (Figs. 11, 12, and 13). In Fig. 11, we demonstrate that without the disparity as the input, our reshading network is not able to detect the depth discontinuities and smears the shading of the tomato on the bowl. Moreover, as shown \begin{table} \begin{tabular}{l l l l l} \hline \hline Scene & Method & PSNR\(\uparrow\) & SSIM\(\uparrow\) & LPIPS\(\downarrow\) \\ \hline \hline \multirow{4}{*}{Veach Ajar} & SVMPI & 22.72 & 0.877 & 0.0428 \\ \cline{2-5} & 3D Photo & 30.06 & **0.962** & 0.0200 \\ & 3D Photo + Ours & **30.70** & **0.962** & 0.0198 \\ \cline{2-5} & 3D Moments & 29.78 & **0.962** & 0.0149 \\ & 3D Moments + Ours & 30.41 & **0.962** & **0.0147** \\ \hline \multirow{4}{*}{Bathroom} & SVMPI & 20.27 & 0.602 & 0.1255 \\ \cline{2-5} & 3D Photo & 29.96 & 0.907 & 0.0329 \\ \cline{1-1} & 3D Photo + Ours & 30.84 & 0.910 & 0.0323 \\ \cline{1-1} & 3D Moments & 32.03 & 0.951 & 0.0284 \\ \cline{1-1} & 3D Moments + Ours & **33.12** & **0.953** & **0.0281** \\ \hline \multirow{4}{*}{Modern Hall} & SVMPI & 22.73 & 0.763 & 0.0759 \\ \cline{1-1} & 3D Photo & 32.63 & 0.950 & 0.0230 \\ \cline{1-1} & 3D Photo + Ours & **32.99** & 0.951 & 0.0229 \\ \cline{1-1} & 3D Moments & 30.98 & 0.951 & 0.0197 \\ \cline{1-1} & 3D Moments + Ours & 31.21 & **0.953** & **0.0196** \\ \hline \hline \end{tabular} \end{table} Table 1. We show numerical comparisons against the other approaches on three synthetic scenes by evaluating the error between the ground truth and novel view images in terms of PSNR, SSIM, and LPIPS. \begin{table} \begin{tabular}{l l l l l} \hline \hline Scene & Method & PSNR\(\uparrow\) & SSIM\(\uparrow\) & LPIPS\(\downarrow\) \\ \hline \hline \multirow{4}{*}{Veach Ajar} & Input & 35.45 & 0.993 & 0.0012 \\ & Ours & **40.10** & **0.994** & **0.0008** \\ \hline \multirow{2}{*}{Bathroom} & Input & 36.50 & 0.991 & 0.0024 \\ & Ours & **41.20** & **0.992** & **0.0020** \\ \hline \multirow{2}{*}{Modern Hall} & Input & 39.99 & 0.989 & 0.0015 \\ & Ours & **42.71** & **0.989** & **0.0012** \\ \hline \hline \end{tabular} \end{table} Table 2. We numerically evaluate the effect of reshading in isolation. Our reshading network produces results that are closer to the ground truth than the input. Figure 10. We show our reshading (top) and view synthesis (bottom) results on the Modern Hall scene. Our approach is able to properly move the highlights during the reshading process (top) and produce novel view images that better match the ground truth than existing techniques (bottom). in Fig. 12, without frequency encoding, our network has difficulty handling the objects that are far away and incorrectly changes their shading. Finally, in Fig. 13 we show the result of directly concatenating the camera pose with the bottleneck features (w/o MLP). 
As seen, without the MLP, our network cannot effectively utilize the camera information and incorrectly changes the shading of the background areas.

## 5. Limitations

Although we have demonstrated that our simple network can produce reasonable results, this is an extremely challenging problem and, as shown in Fig. 14, our approach has several limitations. For example, we are currently not able to handle highly specular surfaces, such as mirrors. As shown in Fig. 14 (mirror on the right wall), our technique is not able to correctly move the content inside the mirror between the two reshaded images. Additionally, in cases where the light sources are very close to diffuse surfaces, they create strong saturated regions (see the area underneath the mirror). In these cases, our reshading network interprets these regions as highlights and moves them between different views.

## 6. Conclusion

We have presented a method to handle view-dependent effects in single image novel view synthesis. Specifically, we propose to split the task of view synthesis into pixel reshading and relocation processes and treat them independently. We use a network to adjust the shading of the input image according to the novel camera. We then use the reshaded image as the input to an existing view synthesis method to perform the pixel relocation task. We demonstrate that our method produces plausible results with view-dependent highlights that are better than the existing methods.

###### Acknowledgements.
The authors would like to thank the reviewers for their comments and suggestions. This work was funded by Leia Inc. (contract #415290). Portions of this research were conducted with the advanced computing resources provided by Texas A&M High Performance Research Computing.
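For readers who want a concrete reference for the training objective of Eq. (2), the following is a minimal PyTorch sketch of the masked \(\mathcal{L}_{1}\) + VGG + style loss. The weights (1e-1, 1e-2, 1) and the conv4_4 feature layer follow the text; the exact torchvision layer index, the Gram normalization, and all names are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn.functional as F
import torchvision

class ReshadeLoss(torch.nn.Module):
    """Masked L1 + VGG perceptual + Gram-matrix style loss (cf. Eqs. 2-5)."""
    def __init__(self):
        super().__init__()
        # Features up to conv4_4 of VGG-19 (index 25 in torchvision's feature stack is assumed).
        vgg = torchvision.models.vgg19(weights="IMAGENET1K_V1").features[:26]
        for p in vgg.parameters():
            p.requires_grad_(False)
        self.vgg = vgg.eval()

    @staticmethod
    def gram(feat):
        b, c, h, w = feat.shape
        f = feat.reshape(b, c, h * w)
        return f @ f.transpose(1, 2) / (c * h * w)

    def forward(self, pred, target, valid_mask):
        # Invalid (incorrectly shaded) pixels are masked out before every term.
        pred, target = pred * valid_mask, target * valid_mask
        l1 = (pred - target).abs().mean()
        fp, ft = self.vgg(pred), self.vgg(target)
        perceptual = F.mse_loss(fp, ft)
        style = F.mse_loss(self.gram(fp), self.gram(ft))
        return 0.1 * l1 + 0.01 * perceptual + 1.0 * style
```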
2309.15929
All orders factorization and the Coulomb problem
In the limit of large nuclear charge, $Z\gg 1$, or small lepton velocity, $\beta \ll 1$, Coulomb corrections to nuclear beta decay and related processes are enhanced as $Z\alpha/\beta$ and become large or even non-perturbative (with $\alpha$ the QED fine structure constant). We provide a constructive demonstration of factorization to all orders in perturbation theory for these processes and compute the all-orders hard and soft functions appearing in the factorization formula. We clarify the relationship between effective field theory amplitudes and historical treatments of beta decay in terms of a Fermi function.
Richard J. Hill, Ryan Plestid
2023-09-27T18:10:38Z
http://arxiv.org/abs/2309.15929v2
# Callt-Th-2023-034 ###### Abstract In the limit of large nuclear charge, \(Z\gg 1\), or small lepton velocity, \(\beta\ll 1\), Coulomb corrections to nuclear beta decay and related processes are enhanced as \(Z\alpha/\beta\) and become large or even non-perturbative (with \(\alpha\) the QED fine structure constant). We provide a constructive demonstration of factorization to all orders in perturbation theory for these processes and compute the all-orders hard and soft functions appearing in the factorization formula. We clarify the relationship between effective field theory amplitudes and historical treatments of beta decay in terms of a Fermi function. ###### Contents * 1 Introduction * 2 Coulomb corrections and contact interactions * 3 Schrodinger-Coulomb problem * 3.1 Factorization * 3.2 Soft factor * 3.3 Hard factor * 3.4 Renormalization * 3.5 Wavefunction solution and all-orders hard function * 4 Dirac-Coulomb problem * 4.1 Factorization * 4.2 UV contribution from a charge form factor * 4.3 UV contribution with finite distance regulator * 4.4 Wavefunction solution and all-orders hard function * 5 Discussion * A Loop integrals * A.1 Two loop integrals * A.1.1 Scalar Integrals * A.1.2 Vector and Tensor Integrals * A.2 Three loop integrals * B Wavefunction solution: Schrodinger-Coulomb * C Wavefunction solution: Dirac-Coulomb * D All orders UV function with a finite-distance regulator Introduction The Coulomb field of a nucleus can have dramatic consequences for low energy phenomena. Relative to other QED effects, Coulomb corrections are large because _i)_ they are enhanced by the charge of the nucleus [1, 2, 3, 4], _ii)_ they are enhanced at low relative velocity [5, 6, 7, 8], and _iii)_ loop integrals receive systematic \(\pi\)-enhancements [9]. Many precision experiments involve leptons interacting with nuclei [10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44], and require the systematic treatment of Coulomb corrections and their interplay with other subleading effects [45, 46, 47, 48, 49, 50, 51]. Factorization theorems underlie much of our ability to retain theoretical control in precision measurements involving nucleons, nuclei, and other hadrons [52, 53, 54, 55, 56, 57]. Factorization arises from the separation of different energy scales involved in a physical process, with the components in the factorization formula identified with contributions from each scale [58, 59]. In terms of a sequence of effective field theories (EFTs), the components are identified as the corresponding sequence of matching coefficients, and the final low-energy matrix element. Historically, Coulomb corrections have been understood not in terms of EFT, but by appealing to wavefunction methods i.e., solutions of the Dirac or Schrodinger equation [2, 60, 61, 62]. Combining modern EFT techniques with high order Coulomb corrections is crucial for precision measurements, and in particular for nuclear beta decays [50]. In this paper, we demonstrate factorization for radiative corrections induced by photon exchange between charged leptons and a static Coulomb field, and compute explicit all-orders expressions for the components of the factorization formula. We describe how traditional wavefunction methods are related to dimensionally regulated Feynman integrals order by order in perturbation theory. 
Using this correspondence, and a new all-orders calculation of the short-distance region, we extract the universal \(\overline{\rm MS}\) Coulomb corrections to the matrix element for a contact interaction (as is relevant for nuclear beta decays) to all orders in perturbation theory. The remainder of the paper is organized as follows. Section 2 introduces notation for Coulomb corrections from a diagrammatic perspective. Section 3 considers the Schrodinger-Coulomb problem and establishes the correspondence between wavefunctions and the diagrammatic expansions. Section 4 considers the Dirac-Coulomb problem and extracts the relevant EFT matrix element to all orders in perturbation theory. Section 5 highlights new and interesting features of the preceding analysis and comments on phenomenological applications. ## 2 Coulomb corrections and contact interactions Consider a reaction that takes place via an effective contact interaction in the vicinity of a heavy particle with charge \(Z\). The outgoing charged particles ("leptons") can exchange photons with the heavy particle ("nucleus"). For neutral current processes (i.e., when the initial and final nuclear states have the same charge \(Z\)), QED radiative corrections can be straightforwardly organized as a series in powers of \(\alpha\), \(Z\alpha\) and \(Z^{2}\alpha\), with each power being separately QED gauge invariant.1 We will consider the static limit, in which the particle of charge \(Z\) in the initial and final state is heavy, so that recoil corrections can be neglected. For low momentum probes satisfying \(|{\bf p}|\ll 1/R\) with \(R\) the charge radius of the heavy particle, the point-like limit is applicable and universal corrections to the amplitude can be computed using Feynman rules for a static external Coulomb field [64]. In this static limit, terms \(\sim Z^{m}\alpha^{n}\) vanish for \(m>n\)[65]. In the following we consider the leading series of terms, \(\sim(Z\alpha)^{n}\), for \(n\geq 0\). Footnote 1: For charged current process, e.g., \(A[Z+1]\to B[Z]+\ell^{+}+\nu_{\ell}\), the same Coulomb factor describes the leading \(Z\)-enhanced contributions. See Ref. [63] for a discussion of how subleading contributions are organized. As an explicit example, consider di-lepton production via a short-range neutral current in some nuclear decay:2 Footnote 2: An electromagnetic \(E0\) transition can mimic the same phenomenology if both \(e^{+}\) and \(e^{-}\) are non-relativistic, such that the virtual photon that mediates the transition is far off-shell. \[A(v_{A})\to B(v_{B})\ +\ell^{-}({\bf p}_{1})\ +\ell^{+}({\bf p}_{2})\, \tag{1}\] where states \(A\) and \(B\) have charge \(Z\), and \(v_{B}^{\mu}=v_{A}^{\mu}=v^{\mu}=(1,{\bf 0})\) which defines the static limit. 
As discussed above, this can be reduced to an external field problem describing the production of a di-lepton pair in a static Coulomb field, with the amplitude \(\mathcal{M}\) given by the contact interaction dressed by photon exchanges between the outgoing leptons and the static source. [The diagrammatic representation of this expansion and the accompanying Eqs. (2)-(4), rendered as embedded figures in the original, are not recoverable here.]
In order to execute this program, we phrase the problem in terms of factorization of momentum space amplitudes, using dimensional regularization in the \(\overline{\text{MS}}\) scheme. Coulomb corrected amplitudes can then be matched consistently to underlying quark-level Lagrangians, and model-dependent position-space wavefunctions are replaced by a systematic expansion in EFT operators. Footnote 3: An exception is the non-relativistic Schrodinger Coulomb wavefunction, which is UV finite to all orders. ## 3 Schrodinger-Coulomb problem Consider the quantum mechanical corrections to a tree-level process for a final-state particle of mass \(m\) and electric charge \((-e)\) scattering from a Coulomb potential with source charge \((+Ze)\): (we suppress the overall tree-level amplitude factor) \[\begin{split}\mathcal{M}=\sum_{n=0}^{\infty}\mathcal{M}^{(n)}= \sum_{n=0}^{\infty}(2mZe^{2})^{n}\int\frac{\mathrm{d}^{D}L_{1}}{(2\pi)^{D}} \int\frac{\mathrm{d}^{D}L_{2}}{(2\pi)^{D}}\cdots\int\frac{\mathrm{d}^{D}L_{n} }{(2\pi)^{D}}\frac{1}{\mathbf{L}_{1}^{2}+\lambda^{2}}\frac{1}{(\mathbf{L}_{1}- \mathbf{p})^{2}-\mathbf{p}^{2}-\mathrm{i}0}\\ \times\frac{1}{(\mathbf{L}_{1}-\mathbf{L}_{2})^{2}+\lambda^{2}} \frac{1}{(\mathbf{L}_{2}-\mathbf{p})^{2}-\mathbf{p}^{2}-\mathrm{i}0}\cdots \frac{1}{(\mathbf{L}_{n-1}-\mathbf{L}_{n})^{2}+\lambda^{2}}\frac{1}{(\mathbf{ L}_{n}-\mathbf{p})^{2}-\mathbf{p}^{2}-\mathrm{i}0}\,.\end{split} \tag{6}\] Here, \(D=3-2\epsilon\) is the spatial dimension with dimensional regularization parameter \(\epsilon\), and \(\lambda\) is a photon mass regulator. The Schrodinger-Coulomb problem describes the limit \(p\ll\Lambda_{\text{UV}}\ll m\), where \(\Lambda_{\text{UV}}~{}\sim R^{-1}\) denotes the scale of nuclear or hadronic structure. The amplitude (6) can be evaluated at each order in perturbation theory. With the photon mass regulator in place, the integrals in Eq. (6) are UV and IR finite at \(\epsilon\to 0\). By convention \(\mathcal{M}^{(0)}=1\) and at one-loop order, \[\mathcal{M}^{(1)}=2mZe^{2}\int\frac{\mathrm{d}^{D}L_{1}}{(2\pi)^{D}}\frac{1}{ \mathbf{L}^{2}+\lambda^{2}}\frac{1}{(\mathbf{L}-\mathbf{p})^{2}-\mathbf{p}^{2 }-\mathrm{i}0}\rightarrow\frac{\mathrm{i}m}{p}\frac{Ze^{2}}{4\pi}\left(\log \frac{2p}{\lambda}-\frac{\mathrm{i}\pi}{2}\right)\,, \tag{7}\] where the final expression denotes the limit \(\epsilon\to 0\), \(\lambda\to 0\). ### Factorization Two momentum regions [58, 59] are relevant in the integrals (6): the soft region with \(|\mathbf{L}|\sim\lambda\); and the hard region with \(|\mathbf{L}|\sim p\). Neglecting power corrections in \(\lambda/p\), the amplitude may be written \[\mathcal{M}=\mathcal{M}_{S}\mathcal{M}_{H}\,. \tag{8}\] In the language of effective operators, \(\mathcal{M}_{H}\) represents a matching coefficient and \(\mathcal{M}_{S}\) represents a low-energy operator matrix element, when the full theory represented by Eq. (6) is matched onto a low-energy theory containing only soft degrees of freedom.4 Footnote 4: In applications, IR divergences are regulated by physical scales associated with e.g., bremsstrahlung radiation or screening effects from atomic electrons [62]. It is interesting to note that a photon mass mimics the Yukawa potential typical of the Thomas-Fermi model of atomic screening [66, 67]. ### Soft factor The soft limit of Eq. 
(6) is readily seen to exponentiate, yielding the soft factor to all orders [68, 69], \[\mathcal{M}_{S}^{(n)}=\frac{1}{n!}\left(\mathcal{M}_{S}^{(1)}\right)^{n}\,, \tag{9}\] where the one-loop result is \[\mathcal{M}_{S}^{(1)}=2mZe^{2}\,\int\frac{\mathrm{d}^{D}L}{(2\pi)^{D}}\frac{1}{ \mathbf{L}^{2}+\lambda^{2}}\frac{1}{-2\mathbf{p}\cdot\mathbf{L}-\mathrm{i}0}= \frac{\mathrm{i}m}{p}\frac{Ze^{2}}{(4\pi)^{1-\epsilon}}\Gamma(1+\epsilon) \lambda^{-2\epsilon}\frac{1}{2\epsilon}\,. \tag{10}\] ### Hard factor The hard factor can similarly be evaluated explicitly, order by order in perturbation theory. The hard momentum region is isolated by expanding at \(\mathbf{L}^{2}\gg\lambda^{2}\). At first order, \[\begin{split}\mathcal{M}_{H}^{(1)}&=2mZe^{2}\int \frac{\mathrm{d}^{D}L}{(2\pi)^{D}}\frac{1}{\mathbf{L}^{2}}\frac{1}{(\mathbf{L}- \mathbf{p})^{2}-\mathbf{p}^{2}-\mathrm{i}0}=\frac{\mathrm{i}m}{p}\frac{Ze^{2} }{4\pi}\left[\frac{(16\pi)^{\epsilon}\Gamma(\frac{1}{2}+\epsilon)}{\sqrt{\pi} }\right](-4p^{2}-\mathrm{i}0)^{-\epsilon}\left(\frac{-1}{2\epsilon}\right)\\ &=\left[\frac{\mathrm{i}Z\overline{\alpha}}{\beta}(-4p^{2}/\mu^{2 }-\mathrm{i}0)^{-\epsilon}\right]\left[\frac{-1}{2\epsilon}\right]\,,\end{split} \tag{11}\] where5\(\beta=p/m\) and the \(\overline{\mathrm{MS}}\) coupling \(\overline{\alpha}\) is related to the bare charge \(e\) in \(D=3-2\epsilon\) dimensions as6 Footnote 5: For the relativistic case, we use \(\beta=p/E\) to denote the usual relativistic velocity. Footnote 6: Other common definitions in the literature are \(\mu^{2\epsilon}4\pi\overline{\alpha}(\mu)/e^{2}=(4\pi)^{\epsilon}\Gamma(1+\epsilon)\) or \(\mu^{2\epsilon}4\pi\overline{\alpha}(\mu)/e^{2}=(4\pi)^{\epsilon}\exp(- \gamma_{\mathrm{E}\epsilon})\). The choice in Eq. (12) is convenient for expressions arising from loop integrals in three dimensions. These definitions only differ at order \(\epsilon^{2}\) and therefore yield identical expressions for the renormalized amplitudes that we consider. \[\mu^{2\epsilon}\overline{\alpha}(\mu)=\frac{e^{2}}{4\pi}\left[\frac{(16\pi)^ {\epsilon}\Gamma(\frac{1}{2}+\epsilon)}{\sqrt{\pi}}\right]\,. \tag{12}\] At \(\epsilon\to 0\), it is readily seen that \[\mathcal{M}^{(1)}=\mathcal{M}_{S}^{(1)}+\mathcal{M}_{H}^{(1)}\,. \tag{13}\] At second order \[\begin{split}\mathcal{M}_{H}^{(2)}&=(2mZe^{2})^{2} \int\frac{\mathrm{d}^{D}L_{1}}{(2\pi)^{D}}\int\frac{\mathrm{d}^{D}L_{2}}{(2 \pi)^{D}}\frac{1}{\mathbf{L}_{1}^{2}}\frac{1}{(\mathbf{L}_{1}-\mathbf{p})^{2}- \mathbf{p}^{2}-\mathrm{i}0}\frac{1}{(\mathbf{L}_{1}-\mathbf{L}_{2})^{2}}\frac {1}{(\mathbf{L}_{2}-\mathbf{p})^{2}-\mathbf{p}^{2}-\mathrm{i}0}\\ &=\left[\frac{\mathrm{i}Z\overline{\alpha}}{\beta}(-4p^{2}/\mu^{2 }-\mathrm{i}0)^{-\epsilon}\right]^{2}\left[\frac{1}{8\epsilon^{2}}+\frac{\pi^ {2}}{12}+5\zeta(3)\epsilon+\mathcal{O}(\epsilon^{2})\right]\,,\end{split} \tag{14}\] where the integral is evaluated in Appendix A. 
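As an aside, the one-loop relation (13) is simple to verify symbolically: the \(1/\epsilon\) poles cancel between the bare soft and hard pieces and the finite remainder reproduces Eq. (7). The following sympy sketch is illustrative only and is not part of the derivation; it works in units of the overall factor \(\mathrm{i}Z\overline{\alpha}/\beta\), and the soft piece (10) has been rewritten with the Legendre duplication formula so that the coupling normalization of Eq. (12) appears as \(\Gamma(1+\epsilon)^{2}/\Gamma(1+2\epsilon)\):

```python
import sympy as sp

# eps = dimensional regulator, p = lepton momentum, lam = photon mass, mu = MSbar scale.
eps, p, lam, mu = sp.symbols('epsilon p lambda mu', positive=True)

# One-loop soft (10) and hard (11) pieces, stripped of the common factor i*Z*alphabar/beta.
M_S1 = (mu**2 / lam**2)**eps * sp.gamma(1 + eps)**2 / sp.gamma(1 + 2*eps) / (2*eps)
M_H1 = (4*p**2 / mu**2)**(-eps) * sp.exp(sp.I*sp.pi*eps) * (-1) / (2*eps)

finite = sp.series(M_S1 + M_H1, eps, 0, 1).removeO()   # the 1/eps poles cancel in the sum
target = sp.log(4*p**2 / lam**2)/2 - sp.I*sp.pi/2       # = log(2p/lambda) - i*pi/2, cf. Eq. (7)
print(sp.simplify(sp.expand_log(finite - target)))      # -> 0
```

The same bookkeeping extends to the higher-order checks of Eqs. (14) and (15), at the cost of typing in the corresponding coefficients.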
At third order, \[\begin{split}\mathcal{M}_{H}^{(3)}&=(2mZe^{2})^{3} \int\frac{\mathrm{d}^{D}L_{1}}{(2\pi)^{D}}\int\frac{\mathrm{d}^{D}L_{2}}{(2 \pi)^{D}}\int\frac{\mathrm{d}^{D}L_{3}}{(2\pi)^{D}}\frac{1}{\mathbf{L}_{1}^{2} }\frac{1}{(\mathbf{L}_{1}-\mathbf{p})^{2}-\mathbf{p}^{2}-\mathrm{i}0}\frac{1 }{(\mathbf{L}_{1}-\mathbf{L}_{2})^{2}}\frac{1}{(\mathbf{L}_{2}-\mathbf{p})^{2 }-\mathbf{p}^{2}-\mathrm{i}0}\times\\ &\qquad\times\frac{1}{(\mathbf{L}_{2}-\mathbf{L}_{3})^{2}}\frac{1 }{(\mathbf{L}_{3}-\mathbf{p})^{2}-\mathbf{p}^{2}-\mathrm{i}0}\\ &=\left[\frac{\mathrm{i}Z\overline{\alpha}}{\beta}(-4p^{2}/\mu^{2 }-\mathrm{i}0)^{-\epsilon}\right]^{3}\left[\frac{-1}{48\epsilon^{3}}-\frac{\pi ^{2}}{24\epsilon}-\frac{13\zeta(3)}{6}+\mathcal{O}(\epsilon)\right]\,.\end{split} \tag{15}\] The evaluation of this integral is also performed in Appendix A. At higher-loop order, direct evaluation of integrals becomes increasingly difficult. We will see how wavefunction methods provide a closed-form expression for arbitrary loop order. ### Renormalization Before turning to the all-orders discussion, we present the renormalized hard matching coefficient through three-loop order in the \(\overline{\mathrm{MS}}\) scheme. Identifying the above amplitudes as bare matching coefficients, \(\mathcal{M}_{H}\equiv\mathcal{M}_{H}^{\mathrm{bare}}\), writing \[\mathcal{M}_{H}^{\mathrm{bare}}=\mathcal{Z}^{-1}\mathcal{M}_{H}(\mu)\,, \tag{16}\] and requiring that \(\mathcal{Z}(\mu)\) has only \(1/\epsilon\) terms when expressed in terms of \(\overline{\alpha}\), we find \[\mathcal{Z}^{-1}=1+\sum_{n=1}^{\infty}\left(\frac{Z\overline{\alpha}}{\beta} \right)^{n}z^{(n)}\,, \tag{17}\] with \[z^{(1)}=\frac{-\mathrm{i}}{2\epsilon}\,,\quad z^{(2)}=\frac{-1}{8\epsilon^{2}}\,, \quad z^{(3)}=\frac{\mathrm{i}}{48\epsilon^{3}}\,. \tag{18}\] The renormalized matching coefficient (at \(\epsilon=0\)) is then \[\begin{split}\mathcal{M}_{H}(\mu)&=1+\frac{Z\alpha} {\beta}\bigg{(}\frac{\pi}{2}+\mathrm{i}\log\frac{2p}{\mu}\bigg{)}+\bigg{(} \frac{Z\alpha}{\beta}\bigg{)}^{2}\left(\frac{\pi^{2}}{24}+\frac{\mathrm{i}\pi} {2}\log\frac{2p}{\mu}-\frac{1}{2}\log^{2}\frac{2p}{\mu}\right)\\ &\quad+\bigg{(}\frac{Z\alpha}{\beta}\bigg{)}^{3}\left(-\frac{\pi^{ 3}}{48}-\frac{\mathrm{i}\zeta(3)}{3}+\frac{\mathrm{i}\pi^{2}}{24}\log\frac{2p} {\mu}-\frac{\pi}{4}\log^{2}\frac{2p}{\mu}-\frac{\mathrm{i}}{6}\log^{3}\frac{2p }{\mu}\right)+\mathcal{O}(\alpha^{4})\,,\end{split} \tag{19}\] where \(\overline{\alpha}\) reduces to the on-shell QED coupling \(\alpha\) at \(\epsilon\to 0\) (recall that there are no dynamical leptons in the non-relativistic theory). Since the product \(\mathcal{M}_{S}\mathcal{M}_{H}\) is UV and IR finite (at \(\lambda\neq 0\)), the quantity \(\mathcal{Z}\) is identical to the (\(\overline{\mathrm{MS}}\)) operator renormalization constant for the soft operator, \[\mathcal{M}_{S}^{\mathrm{bare}}=\mathcal{Z}\mathcal{M}_{S}(\mu)\,. \tag{20}\] From the explicit form of Eqs. (9) and (10), the renormalization constant to all orders is given by \[\mathcal{Z}=\exp\left(\frac{\mathrm{i}Z\overline{\alpha}}{2\beta\epsilon} \right)\,, \tag{21}\] in agreement through three-loop order with Eq. (18). The renormalized soft function is \[\mathcal{M}_{S}(\mu)=\exp\left(\frac{\mathrm{i}Z\alpha}{\beta}\log\frac{\mu}{ \lambda}\right)\,. \tag{22}\] ### Wavefunction solution and all-orders hard function We recognize Eq. 
(6) as the perturbative expansion of the position-space wavefunction evaluated at \(\mathbf{r}=0\) for a particle scattered by a Coulomb source and described by the Hamiltonian, \[H=\frac{p^{2}}{2m}-\frac{Z\alpha}{r}\mathrm{e}^{-\lambda r}\,. \tag{23}\] The all-orders solution at leading power is (see Appendix B), \[\mathcal{M}=[\psi^{(-)}(0)]^{*}=\Gamma\left(1-\frac{\mathrm{i}Z\alpha}{\beta }\right)\exp\left[\frac{Z\alpha}{\beta}\left(\frac{\pi}{2}+\mathrm{i}\log \frac{2p}{\lambda}-\mathrm{i}\gamma_{\mathrm{E}}\right)\,\right]+\mathcal{O} \bigg{(}\frac{\lambda}{p}\bigg{)}\, \tag{24}\] where \(\psi^{(-)}\) denotes the scattering solution that matches asymptotically to a plane wave plus an ingoing spherical wave. Combining Eqs. (22) and (24) we obtain the closed form result \[\mathcal{M}_{H}(\mu)=\frac{\mathcal{M}}{\mathcal{M}_{S}(\mu)}=\Gamma\left(1- \frac{\mathrm{i}Z\alpha}{\beta}\right)\exp\left[\frac{Z\alpha}{\beta}\left( \frac{\pi}{2}+\mathrm{i}\log\frac{2p}{\mu}-\mathrm{i}\gamma_{\mathrm{E}} \right)\,\right]. \tag{25}\] This result reproduces the above results, _cf._ Eq. (19), through three-loop order. ## 4 Dirac-Coulomb problem In place of Eq. (6), consider the amplitudes for a relativistic fermion in the Coulomb field of an extended object with a charge form factor \(F(\mathbf{L}^{2})\) \[\begin{split}\bar{u}(p)\mathcal{M}=&\sum_{n=0}^{ \infty}(Ze^{2})^{n}\int\frac{\mathrm{d}^{D}L_{1}}{(2\pi)^{D}}\int\frac{ \mathrm{d}^{D}L_{2}}{(2\pi)^{D}}\cdots\int\frac{\mathrm{d}^{D}L_{n}}{(2\pi)^{D} }\\ &\times\frac{F(\mathbf{L}_{1}^{2})}{\mathbf{L}_{1}^{2}+\lambda^{2 }}\frac{1}{(\mathbf{L}_{1}-\mathbf{p})^{2}-\mathbf{p}^{2}-\mathrm{i}0}\frac{F( (\mathbf{L}_{1}-\mathbf{L}_{2})^{2})}{(\mathbf{L}_{1}-\mathbf{L}_{2})^{2}+ \lambda^{2}}\frac{1}{(\mathbf{L}_{2}-\mathbf{p})^{2}-\mathbf{p}^{2}-\mathrm{i} 0}\cdots\\ &\times\frac{F((\mathbf{L}_{n-1}-\mathbf{L}_{n-2})^{2})}{( \mathbf{L}_{n-1}-\mathbf{L}_{n})^{2}+\lambda^{2}}\frac{1}{(\mathbf{L}_{n}- \mathbf{p})^{2}-\mathbf{p}^{2}-\mathrm{i}0}\\ &\times\bar{u}(p)\gamma^{0}(\not{p}-\not{L}_{1}+m)\gamma^{0}( \not{p}-\not{L}_{2}+m)\cdots\gamma^{0}(\not{p}-\not{L}_{n}+m)\,.\end{split} \tag{26}\] The Dirac-Coulomb problem corresponds to the hierarchy \(p\sim m\ll\Lambda_{\rm UV}\). For \(F({\bf L}^{2})=1\), \(E=m\), and \(\not{p}-\not{L}_{i}+m\to 2m\), the amplitude reduces to the Schrodinger Coulomb problem (6). The fermionic case represented by Eq. (26) involves nontrivial Dirac structure, and a dependence on UV momentum scales \(|{\bf L}|\gg p\). In the limit of a point-like source we have \(F({\bf L}^{2})=1\). Similar to the Schrodinger-Coulomb case, we first consider the low-order contributions. At one-loop, for \(\lambda\to 0\) and \(\epsilon\to 0\), \[\begin{split}{\cal M}^{(1)}&=2EZe^{2}\int\frac{{ \rm d}^{D}L}{(2\pi)^{D}}\frac{1}{{\bf L}^{2}+\lambda^{2}}\frac{1}{({\bf L}-{\bf p })^{2}-{\bf p}^{2}-{\rm i}0}\left[1-\frac{1}{2E}\gamma^{0}\not{L}\right]\\ &\to\frac{{\rm i}Z\overline{\alpha}}{\beta}\left[\left(\log\frac{ 2p}{\lambda}-\frac{{\rm i}\pi}{2}\right)+\frac{1}{2}\left(\frac{m\gamma^{0}}{E }-1\right)\right]\,.\end{split} \tag{27}\] Similar to Eq. (8), we can express the result, up to \(\lambda/E\) power corrections as the product of soft and hard factors, with \({\cal M}_{S}\) as in Eq. (10), and \({\cal M}_{H}\) now containing two different Dirac structures, \[{\cal M}_{H}={\cal M}_{H1}+\left(\frac{m\gamma^{0}}{E}-1\right){ \cal M}_{H2}\,. 
\tag{28}\] At tree level, the hard factor is given by \[{\cal M}_{H1}^{(0)}=1\,,\qquad{\cal M}_{H2}^{(0)}=0\,, \tag{29}\] and at one loop, \[{\cal M}_{H1}^{(1)}=\left[\frac{{\rm i}Z\overline{\alpha}}{\beta} (-4p^{2}/\mu^{2}-{\rm i}0)^{-\epsilon}\right]\left[\frac{-1}{2\epsilon} \right]\,,\quad{\cal M}_{H2}^{(1)}=\left[\frac{{\rm i}Z\overline{\alpha}}{ \beta}(-4p^{2}/\mu^{2}-{\rm i}0)^{-\epsilon}\right]\left[\frac{1}{2(1-2 \epsilon)}\right]\,. \tag{30}\] At two loop order, using integrals from Appendix A, \[{\cal M}_{H1}^{(2)} =\left[\frac{{\rm i}Z\overline{\alpha}}{\beta}(-4p^{2}/\mu^{2}-{ \rm i}0)^{-\epsilon}\right]^{2}\left[\frac{1}{8\epsilon^{2}}+\frac{\pi^{2}}{1 2}+\beta^{2}\left(\frac{-1}{8\epsilon}-\frac{5}{4}\right)+{\cal O}(\epsilon) \right]\,,\] \[{\cal M}_{H2}^{(2)} =\left[\frac{{\rm i}Z\overline{\alpha}}{\beta}(-4p^{2}/\mu^{2}-{ \rm i}0)^{-\epsilon}\right]^{2}\left[\frac{-1}{4\epsilon}-\frac{1}{2}+{\cal O} (\epsilon)\right]\,. \tag{31}\] ### Factorization The integrals in Eq. (26) are UV divergent by power counting when \(F({\bf L}^{2})=1\). The explicit computations above show that \({\cal M}_{S}{\cal M}_{H}\) is UV divergent beginning at two loop order, indicating sensitivity to short distance physics. Regulating UV divergences with \(F({\bf L}^{2})\) introduces a new UV scale, and a corresponding momentum region in loop diagrams with \(|{\bf L}|\sim\Lambda_{\rm UV}\gg p\). The factorization formula is \[{\cal M}={\cal M}_{S}{\cal M}_{H}{\cal M}_{\rm UV}\,. \tag{32}\] In the following, we compute the explicit form of \({\cal M}_{\rm UV}\) using an illustrative charge form factor. We then introduce an alternative finite-distance regulator that permits an all-orders solution of \({\cal M}_{\rm UV}\). Combined with an all-orders solution for the total amplitude \({\cal M}\) using the same UV regulator, and the all-orders solution of \({\cal M}_{S}\), we then extract \({\cal M}_{H}\) to all orders in perturbation theory. ### UV contribution from a charge form factor In dimensional regularization, the factor \({\cal M}_{\rm UV}\) is computed by setting \(\lambda=p=0\). For simplicity, we take \[F({\bf L}^{2})=\frac{\Lambda_{\rm UV}^{2}}{\Lambda_{\rm UV}^{2} +{\bf L}^{2}}\,. \tag{33}\] At one loop order, \[{\cal M}_{\rm UV}^{(1)}=Ze^{2}\int\frac{{\rm d}^{D}L}{(2\pi)^{D }}\frac{F({\bf L}^{2})}{({\bf L}^{2})^{2}}\gamma^{0}\mathbf{\gamma} \cdot{\bf L}=0\,. \tag{34}\] Nontrivial contributions begin at two-loop order, \[\mathcal{M}^{(2)}_{\rm UV} =(Ze^{2})^{2}\int\frac{{\rm d}^{D}L_{1}}{(2\pi)^{D}}\int\frac{{\rm d }^{D}L_{2}}{(2\pi)^{D}}\frac{F({\bf L}_{1}^{2})}{({\bf L}_{1}^{2})^{2}}\frac{F( ({\bf L}_{1}-{\bf L}_{2})^{2})}{{\bf L}_{2}^{2}({\bf L}_{1}-{\bf L}_{2})^{2}} \gamma^{0}\mathbf{\gamma}\cdot{\bf L}_{1}\gamma^{0}\mathbf{\gamma}\cdot{\bf L}_{2}\] \[=\left[Z\overline{\alpha}\left(\mu/\Lambda_{\rm UV}\right)^{2 \epsilon}\right]^{2}\left[-\frac{1}{8\epsilon}-\frac{1}{2}+\mathcal{O}( \epsilon)\right]\,. \tag{35}\] Let us compute renormalized expressions through two-loop order. In the \(\overline{\rm MS}\) scheme, the renormalized soft function is again given by Eq. (22), \[\mathcal{M}_{S}(\mu_{S})=1+\frac{{\rm i}Z\alpha}{\beta}\log\frac{\mu_{S}}{ \lambda}-\frac{(Z\alpha)^{2}}{2\beta^{2}}\log^{2}\frac{\mu_{S}}{\lambda}+ \mathcal{O}(\alpha^{3})\,. 
\tag{36}\] The renormalized hard function through two loop order is \[\mathcal{M}_{H}(\mu_{S},\mu_{H}) =1+\frac{Z\alpha}{\beta}\bigg{[}{\rm i}\left(\log\frac{2p}{\mu_{S} }-\frac{{\rm i}\pi}{2}\right)+\frac{{\rm i}}{2}\left(\frac{m}{E}\gamma^{0}-1 \right)\bigg{]}+\left(\frac{Z\alpha}{\beta}\right)^{2}\left\{\frac{-\pi^{2}}{ 12}-\frac{1}{2}\left(\log\frac{2p}{\mu_{S}}-\frac{{\rm i}\pi}{2}\right)^{2}\right. \tag{37}\] \[\quad-\left.\frac{1}{2}\left(\log\frac{2p}{\mu_{S}}-\frac{{\rm i }\pi}{2}\right)\left(\frac{m}{E}\gamma^{0}-1\right)+\left[\frac{5}{4}-\frac{1} {2}\left(\log\frac{2p}{\mu_{H}}-\frac{{\rm i}\pi}{2}\right)\right]\beta^{2} \right\}+\mathcal{O}(\alpha^{3})\,,\] and the renormalized UV function for the form factor in Eq. (33) is \[\mathcal{M}_{\rm UV}(\mu_{H})=1+(Z\alpha)^{2}\left[-\frac{1}{2}-\frac{1}{2} \log\frac{\mu_{H}}{\Lambda_{\rm UV}}\right]+\mathcal{O}(\alpha^{3})\,. \tag{38}\] It is readily checked that with the explicit results (36), (37) and (38), the product (32) is independent of \(\mu_{S}\) and \(\mu_{H}\) through two loop order. ### UV contribution with finite distance regulator Consider the series of amplitudes representing the perturbative expansion of the Dirac wavefunction at finite distance: \[\bar{u}({\bf p})\mathcal{M}_{\bf r}=\sum_{n=0}^{\infty}(Ze^{2})^{n} \int\frac{{\rm d}^{D}L_{1}}{(2\pi)^{D}}\int\frac{{\rm d}^{D}L_{2}}{(2 \pi)^{D}}\cdots\int\frac{{\rm d}^{D}L_{n}}{(2\pi)^{\rm D}}{\rm e}^{-{\rm i}{ \bf L}_{n}\cdot{\bf r}}\frac{1}{{\bf L}_{1}^{2}+\lambda^{2}}\frac{1}{({\bf L} _{1}-{\bf p})^{2}-{\bf p}^{2}-{\rm i}0} \tag{39}\] \[\times\frac{1}{({\bf L}_{1}-{\bf L}_{2})^{2}+\lambda^{2}}\frac{1} {({\bf L}_{2}-{\bf p})^{2}-{\bf p}^{2}-{\rm i}0}\cdots\frac{1}{({\bf L}_{n-1} -{\bf L}_{n})^{2}+\lambda^{2}}\frac{1}{({\bf L}_{n}-{\bf p})^{2}-{\bf p}^{2}- {\rm i}0}\] \[\times\bar{u}(p)\gamma^{0}(\not{p}-\hat{L}_{1}+m)\gamma^{0}(\not{ p}-\hat{L}_{2}+m)\cdots\gamma^{0}(\not{p}-\hat{L}_{n}+m)\,.\] For loop momentum \(|{\bf L}|\gg 1/|{\bf r}|\) the rapid oscillations of the exponential regulate the integral, and the finite distance \(r\) acts as UV regulator. In the limit \(1/r\gg p\), the amplitudes are described by the factorization theorem Eq. (32). The finite distance regulator is convenient since regulated amplitudes correspond to coordinate space solutions of the Dirac equation, which for \(|{\bf p}|\ll 1/r\) have a closed form solution (_cf._ Appendix C). We may relate the finite distance regulator scheme to a conventional \(\overline{\rm MS}\)-regulated amplitude by applying the method of regions [58, 59]. The finite-distance regulated amplitude \(\mathcal{M}_{\bf r}\) satisfies the factorization theorem (32), \[\mathcal{M}_{\bf r}=\mathcal{M}_{S}\mathcal{M}_{H}\mathcal{M}_{\rm UV}({\bf r})\, \tag{40}\] where the UV matching coefficient depends on \({\bf r}\). We will now show that \(\mathcal{M}_{\rm UV}({\bf r})\) can be computed to all orders in perturbation theory. This fact is related to the structure of the loop integrals with a finite distance regulator, Eq. (39), as compared to a charge form factor, Eq. (26): the regulator affects only the final ( \({\rm d}^{D}L_{n}\) ) loop integration, so that all of the preceding integrals are recursively one-loop. 
Details are presented in Appendix D, with the results for bare amplitudes at arbitrary even and odd orders in perturbation theory respectively: \[\mathcal{M}^{(2n)}_{\rm UV}=\frac{(-1)^{n}}{n!}\bigg{(}\frac{(Z\widetilde{ \alpha})^{2}/8}{\epsilon}\bigg{)}^{n}\,\left[\prod_{m=0}^{n-1}\frac{1}{1+2m \epsilon}\right]\,, \tag{41}\] \[{\cal M}_{\rm UV}^{(2n+1)}=\frac{(-1)^{n}}{n!}\bigg{(}\frac{(Z\widetilde{\alpha}) ^{2}/8}{\epsilon}\bigg{)}^{n}\left[\prod_{m=0}^{n}\frac{1}{1+2m\epsilon}\right] \times\left[-Z\widetilde{\alpha}\frac{{\rm i}\gamma_{0}\boldsymbol{\gamma} \cdot\hat{\mathbf{r}}}{2}\right]. \tag{42}\] The quantity \(\widetilde{\alpha}\) is given in terms of the \(\overline{\rm MS}\) coupling \(\overline{\alpha}\) in Eq. (12), by \[\widetilde{\alpha}\equiv\overline{\alpha}\times\bigg{(}\frac{\mu^{2}r^{2}}{1 6}\bigg{)}^{\epsilon}\frac{\Gamma\left(\frac{1}{2}-\epsilon\right)}{\Gamma \left(\frac{1}{2}+\epsilon\right)}\,. \tag{43}\] As discussed in Appendix D, both series can be expressed in closed form for arbitrary nonzero \(\epsilon\) in terms of Bessel functions. The \(\overline{\rm MS}\) renormalization constant can also be computed in closed form. A careful treatment of the small-\(\epsilon\) asymptotics of the bare amplitudes, _cf._ Appendix D, then yields the all-orders result, \[{\cal M}_{\rm UV}(\mu)=(\mu r\ {\rm e}^{\gamma_{\rm E}})^{\eta-1}\frac{1+ \eta}{2\sqrt{\eta}}\bigg{[}1-\frac{Z\alpha}{1+\eta}{\rm i}\gamma_{0} \boldsymbol{\gamma}\cdot\hat{\mathbf{r}}\bigg{]}\, \tag{44}\] where \(\eta=\sqrt{1-(Z\alpha)^{2}}\). The result (44) is renormalized in the \(\overline{\rm MS}\) scheme using the coupling defined in Eq. (12). ### Wavefunction solution and all-orders hard function The amplitude (39) is related to the perturbative expansion of a solution to the Dirac equation, \[\left(-{\rm i}\gamma^{0}\boldsymbol{\gamma}\cdot\boldsymbol{\partial}+m\gamma ^{0}-\frac{Z\alpha}{r}{\rm e}^{-\lambda r}\right)\psi=E\psi\,, \tag{45}\] namely: \[\bar{u}(p){\cal M}=[\psi^{(-)}(-\boldsymbol{r})]^{\dagger}\gamma^{0}\,, \tag{46}\] where \(\psi^{(-)}(\boldsymbol{r})\) denotes the solution that is asymptotically a plane wave plus incoming spherical wave. The solution, ignoring power corrections in \(\lambda/p\) and \(p/r^{-1}\), is \[\psi^{(-)}(\boldsymbol{r})={\rm e}^{{\rm i}\phi}\sqrt{\frac{E+\eta m}{E+m}} \sqrt{F(Z,E,r)}\bigg{(}1+\frac{{\rm i}Z\alpha}{1+\eta}\gamma^{0}\boldsymbol{ \gamma}\cdot\hat{\boldsymbol{r}}\bigg{)}\bigg{[}\frac{1+M}{2}+\frac{1-M}{2} \gamma^{0}\bigg{]}u(\boldsymbol{p})\,. \tag{47}\] Here \(F(Z,E,r)\) is the Fermi function, \[F(Z,E,r)=\frac{2(1+\eta)}{[\Gamma(2\eta+1)]^{2}}|\Gamma(\eta+{\rm i}\xi)|^{2} {\rm e}^{\pi\xi}(2pr)^{2(\eta-1)}\,, \tag{48}\] the phase factor \(e^{{\rm i}\phi}\) is given by \[{\rm e}^{{\rm i}\phi}={\rm e}^{-{\rm i}\xi\left(\log\frac{2p}{\lambda}-\gamma _{\rm E}\right)+{\rm i}(\eta-1)\frac{\pi}{2}}\frac{\Gamma(\eta+{\rm i}\xi)}{| \Gamma(\eta+{\rm i}\xi)|}\sqrt{\frac{\eta+{\rm i}\xi}{1+{\rm i}\xi}\frac{m}{E }}\,, \tag{49}\] and the quantity \(M\) is given by \[M=\frac{E+m}{E+\eta m}\left(1+{\rm i}\xi\frac{m}{E}\right)\,. 
\tag{50}\] In the Dirac (i.e., "Bjorken and Drell") basis for \(\gamma^{\mu}\) and with relativistic normalization \(u(\boldsymbol{p})^{\dagger}u(\boldsymbol{p})=2E\), the expression is \[\psi^{(-)}(\boldsymbol{r})={\rm e}^{{\rm i}\phi}\sqrt{F(Z,E,r)}\bigg{(}1+ \frac{{\rm i}Z\alpha}{1+\eta}\gamma^{0}\boldsymbol{\gamma}\cdot\hat{\boldsymbol {r}}\bigg{)}U(\boldsymbol{p})\,, \tag{51}\] where \[U(\boldsymbol{p})=\sqrt{E+\eta m}\left(\begin{array}{c}1\\ \left(1+{\rm i}\xi\frac{m}{E}\right)\frac{\boldsymbol{\sigma}\cdot\boldsymbol{ p}}{E+\eta m}\end{array}\right)\chi\,, \tag{52}\] and \(\chi\) is a two-component spinor. Using Eq. (46), the explicit all-orders results for \(\mathcal{M}_{S}\) in Eq. (22), and \(\mathcal{M}_{\rm UV}\) in Eq. (44), the hard function appearing in the factorization formula (32) is \[\begin{split}\mathcal{M}_{H}(\mu_{S},\mu_{H})&= \mathcal{M}_{S}^{-1}(\mu_{S})\mathcal{M}\mathcal{M}_{\rm UV}^{-1}(\mu_{H})\\ &=\mathrm{e}^{\frac{\pi\xi}{2}+\mathrm{i}\xi\left(\log\frac{2p}{ \mu_{S}}-\gamma_{\rm E}\right)-\mathrm{i}(\eta-1)\frac{\pi}{2}}\frac{2\Gamma( \eta-\mathrm{i}\xi)}{\Gamma(2\eta+1)}\sqrt{\frac{\eta-\mathrm{i}\xi}{1-\mathrm{ i}\xi\frac{m}{E}}}\sqrt{\frac{E+\eta m}{E+m}}\sqrt{\frac{2\eta}{1+\eta}}\left( \frac{2p\mathrm{e}^{-\gamma_{\rm E}}}{\mu_{H}}\right)^{\eta-1}\\ &\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\times\left[\frac{1+M^{*}}{2}+\frac{1-M^{*}}{2} \gamma^{0}\right].\end{split} \tag{53}\] The amplitude has been explicitly decomposed into separate factors depending on a single scale, \(\lambda\), \(p\), or \(r^{-1}\) (here we are not distinguishing the scales \(p\), \(m\), and \(E\)). We remark that the explicit appearance of \(\exp(\gamma_{\rm E})\) accompanying \(2p/\mu\) in Eq. (53) may seem unexpected since the hard amplitude must match conventional \(\overline{\mathrm{MS}}\) renormalized amplitudes order by order in perturbation theory. However, these factors cancel against implicit factors7 of \(\gamma_{\rm E}\) from the expansion of \(\Gamma(\eta-\mathrm{i}\xi)/\Gamma(1+2\eta)\). Footnote 7: This can be seen most easily by noting that the two perturbative parameters that appear are \(\eta-1\sim\mathcal{O}([Z\alpha]^{2})\) and \(\xi\sim\mathcal{O}(Z\alpha)\). Then, using \(\Gamma(1+2\eta)=2\eta(2\eta-1)\Gamma(1+2(\eta-1))\) and \(\log\Gamma(1+z)=-\log(1+z)+z(1-\gamma_{\rm E})+\sum_{m=0}^{\infty}(-1)^{n}( \zeta(n)-1)\frac{z^{n}}{n}\), it is easy to show that the combination \(\mathrm{e}^{-\mathrm{i}\xi\gamma_{\rm E}}\Gamma(\eta-\mathrm{i}\xi)\mathrm{e} ^{-(\eta-1)\gamma_{\rm E}}/\Gamma(1+2\eta)\) contains no factors of \(\gamma_{\rm E}\) at any order in perturbation theory. Given Eq. (53) we can extract the anomalous dimension for contact operators to all orders in \(Z\alpha\). We differentiate \(\mathcal{M}_{H}\) with respect to \(\mu_{H}\) and obtain \[\gamma_{\mathcal{O}}=\sqrt{1-(Z\alpha)^{2}}-1\,. \tag{54}\] This is the contribution to the anomalous dimension from each light-particle leg. For example the operator mediating Eq. (1) has an anomalous dimension of \(2\gamma_{\mathcal{O}}\). For an operator mediating a beta decay, \(A[Z+1]\to B[Z]+\ell^{+}\nu\), Eq. (54) is the leading-\(Z\) contribution to the anomalous dimension [50, 51]. 
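The cancellation of \(\gamma_{\rm E}\) noted in footnote 7 can also be checked directly. In the sketch below (an illustrative symbolic check, not taken from the paper), the two perturbative parameters \(\eta-1\) and \(\xi\) are treated as independent small quantities \(x\) and \(y\), exactly as suggested in the footnote, and the Euler constant is replaced by a free symbol \(g\) so that its disappearance from the expansion can be tested:

```python
import sympy as sp

x, y, g = sp.symbols('x y g')            # x ~ eta - 1, y ~ xi, g stands in for gamma_E
combo = sp.exp(-sp.I*y*g - x*g) * sp.gamma(1 + x - sp.I*y) / sp.gamma(3 + 2*x)

# Double expansion in the two small parameters (a few orders is enough to see the pattern).
expn = combo.series(x, 0, 3).removeO().series(y, 0, 3).removeO()
expn = expn.subs(sp.EulerGamma, g)       # the gamma-function expansion introduces EulerGamma

# No Euler-Mascheroni constant survives in any coefficient of the double expansion:
print(sp.simplify(sp.diff(sp.expand(expn), g)))   # -> 0
```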
Including the one-loop beta function with \(n_{f}\) dynamical fermions, the scale dependence of contact operators can be obtained in closed form \[\int_{\alpha_{L}}^{\alpha_{H}}\mathrm{d}\alpha^{\prime}\;\frac{\gamma_{H}( \alpha^{\prime})}{\beta(\alpha^{\prime})}=\int_{\alpha_{L}}^{\alpha_{H}} \mathrm{d}\alpha^{\prime}\;\frac{\sqrt{1-Z^{2}\alpha^{\prime 2}}-1}{\frac{2n_{f}}{3 \pi}\alpha^{\prime 2}}=\frac{3\pi}{2n_{f}}\bigg{\{}\frac{1-\eta_{H}}{\alpha_{H}}- \frac{1-\eta_{L}}{\alpha_{L}}-Z[\arcsin(Z\alpha_{H})-\arcsin(Z\alpha_{L})] \bigg{\}}\;, \tag{55}\] where we have introduced the notation \(\eta_{L,H}=\eta(\alpha_{L,H})\). This expression is useful when analyzing QED radiative corrections for the beta decays of heavy nuclei [50]. The hard function (53), describes the limit \(p\sim m\ll\Lambda_{\rm UV}\), where \(\Lambda_{\rm UV}\;\sim R^{-1}\) denotes the scale of nuclear or hadronic structure. When the lepton is non-relativistic, \(p\ll m\ll\Lambda_{\rm UV}\), it is convenient to expand the hard function as \[\mathcal{M}_{H}=\mathcal{M}_{H}^{+}P_{+}+\mathcal{M}_{H}^{-}P_{-}\,, \tag{56}\] where \(P_{\pm}=(1\pm\gamma^{0})/2\). Allowing for arbitrary values of \(\xi\), we find through second order in \(\beta\), \[\begin{split}\mathcal{M}_{H}^{+}&=\mathrm{e}^{ \frac{\pi\xi}{2}+\mathrm{i}\xi\left(\log\frac{2p}{\mu_{S}}-\gamma_{\rm E} \right)}\Gamma(1-\mathrm{i}\xi)\bigg{\{}1+\beta^{2}\bigg{[}-\frac{\mathrm{i}} {4}\xi+\xi^{2}\left(-\frac{1}{2}\log\frac{2p}{\mu_{H}}+\frac{5}{4}+\frac{ \mathrm{i}\pi}{4}-\frac{\gamma_{\rm E}}{2}-\frac{1}{2}\psi(1-\mathrm{i}\xi) \right)\bigg{]}\bigg{\}}\,,\\ \mathcal{M}_{H}^{-}&=\mathrm{e}^{\frac{\pi\xi}{2}+ \mathrm{i}\xi\left(\log\frac{2p}{\mu_{S}}-\gamma_{\rm E}\right)}\Gamma(2- \mathrm{i}\xi)\bigg{\{}1+\beta^{2}\bigg{[}\frac{\mathrm{i}}{4}\xi+\xi^{2}\left( -\frac{1}{2}\log\frac{2p}{\mu_{H}}+\frac{3}{2}+\frac{\mathrm{i}\pi}{4}-\frac{ \gamma_{\rm E}}{2}-\frac{1}{2}\psi(2-\mathrm{i}\xi)\right)\bigg{]}\bigg{\}}\,, \end{split} \tag{57}\] where \(\psi\) denotes the digamma function, \(\psi(x)=\Gamma^{\prime}(x)/\Gamma(x)\). At each order in \(\beta^{2}\), the expressions (57) sum an infinite series of terms involving powers \(\xi^{n}\). At \(\beta\to 0\), the leading term for the "large" upper component \(\mathcal{M}_{H}^{+}\) reduces to the Schrodinger-Coulomb result (25). ## 5 Discussion The formula (32), and its non-relativistic analog (8), provides an all-orders explicit demonstration of factorization for the Coulomb problem. We find that Coulomb corrections factorize among different legs for a contact interaction (see Section 2). The universal hard matching coefficient in this formula, \(\mathcal{M}_{H}\) in Eq. (53), can be applied to different processes, and large logarithms can be summed to all orders using renormalization group methods. The non-relativistic limit for \(p\ll m\ll\Lambda_{\rm UV}\) is given by Eq. (57). By identifying the amplitudes as quantum field theory objects in a standard regularization scheme (i.e., \(\overline{\mathrm{MS}}\) scheme in dimensional regularization), we can systematically compute subleading perturbative contributions and match to hadronic and nuclear matrix elements. More detailed discussions of these points are presented elsewhere [9, 50, 51]. 
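The closed form for the running in Eq. (55) above is easy to validate numerically. The sketch below is purely illustrative: the values of \(Z\), \(n_f\), and the endpoint couplings are chosen arbitrarily, and the comparison is against direct quadrature of \(\gamma_{\mathcal{O}}(\alpha^{\prime})/\beta(\alpha^{\prime})\) with the one-loop beta function \(\beta(\alpha)=\tfrac{2n_{f}}{3\pi}\alpha^{2}\):

```python
import numpy as np
from scipy.integrate import quad

def rg_integral_closed(alpha_L, alpha_H, Z, n_f):
    """Closed form of Eq. (55)."""
    eta = lambda a: np.sqrt(1.0 - (Z*a)**2)
    return (3*np.pi/(2*n_f)) * ((1 - eta(alpha_H))/alpha_H - (1 - eta(alpha_L))/alpha_L
                                - Z*(np.arcsin(Z*alpha_H) - np.arcsin(Z*alpha_L)))

def rg_integral_numeric(alpha_L, alpha_H, Z, n_f):
    """Direct quadrature of gamma_O(alpha')/beta(alpha')."""
    integrand = lambda a: (np.sqrt(1.0 - (Z*a)**2) - 1.0) / ((2*n_f/(3*np.pi)) * a**2)
    val, _ = quad(integrand, alpha_L, alpha_H)
    return val

# Illustrative numbers only: Z = 20, n_f = 10 dynamical fermions, coupling running between two scales.
args = (1/137.036, 1/130.0, 20, 10)
print(rg_integral_closed(*args), rg_integral_numeric(*args))   # the two numbers agree
```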
It is interesting to note that for unpolarized observables to beta decay, the spin-summed matrix element squared,8 Footnote 8: Explicitly we define \(\left\langle|\mathcal{M}_{H}|^{2}\right\rangle:=\sum_{\rm spins}|\bar{u} \mathcal{M}_{H}\gamma_{0}P_{L}v|^{2}\Big{/}\sum_{\rm spins}|\bar{u}\gamma_{0} P_{L}v|^{2}\) where \(\gamma_{0}P_{L}=\gamma_{\mu}v^{\mu}P_{L}\) is the tree-level Dirac structure, with \(v_{\mu}=(1,0,0,0)\). \[\left\langle|\mathcal{M}_{H}|^{2}\right\rangle=F(Z,E)\big{|}_{r_{H}}\times \frac{4\eta}{(1+\eta)^{2}}\,, \tag{58}\] differs from the historically defined Fermi function even when evaluated at \(r_{H}^{-1}=\mu_{H}e^{\gamma_{E}}\). We observe that finite-distance regulated amplitudes have special algebraic properties that allow for explicit all-orders expressions, for both bare and renormalized matrix elements as shown explicitly in Eqs. (41), (42) and (44). This example of all-orders renormalization may be of formal interest. As an illustration of how the formalism applies to different processes, let us return to the Eq. (1). For definiteness, suppose that the neutral current reaction is mediated by exchange of a vector boson of mass \(m_{B}\). The tree-level amplitude depicted in Eq. (2) takes the form \[\mathcal{M}^{\rm tree}=\frac{m_{B}^{2}}{m_{B}^{2}-(p_{1}+p_{2})^{2}}\bar{u}( \mathbf{p}_{1})\Gamma^{\rm tree}v(\mathbf{p}_{2})\,, \tag{59}\] where \(\Gamma^{\rm tree}=\gamma^{0}(A+B\gamma_{5})\) for some numbers \(A\) and \(B\). When \(\Lambda\gg m_{B}\gg p\), the boson mass plays the role of UV regulator.9 The factorization formula describing the infinite sum in Eq. (2), is Footnote 9: We have in mind a \(Z^{\prime}\) boson extending the Standard Model. The amplitudes are equivalent to a Standard Model \(Z\) boson in the formal limit \(\Lambda_{\rm nuc}\gg m_{Z}\gg m_{e}\). \[\mathcal{M}=\bar{u}(\mathbf{p}_{1})\mathcal{M}_{S}(\mathbf{p}_{1})\mathcal{M}_{H}(\bm {p}_{1})\mathcal{M}_{\rm UV}\overline{\mathcal{M}}_{H}(\mathbf{p}_{2})\overline{ \mathcal{M}}_{S}(\mathbf{p}_{2})v(\mathbf{p}_{2})\,. \tag{60}\] Here the conjugate amplitude is denoted \(\ \overline{\mathcal{M}}=\gamma^{0}\mathcal{M}^{\dagger}\gamma^{0}\). It is straightforward to compute \(\mathcal{M}_{\rm UV}\) from the diagrams in Eq. (2), neglecting charged lepton masses and momenta. Through two-loop order, after \(\overline{\rm MS}\) renormalization, \[\mathcal{M}_{\rm UV}(\mu)=\Gamma^{\rm tree}\left[1+(Z\alpha)^{2}\left(\frac{ 1}{2}\log\frac{\mu^{2}}{m_{B}^{2}}-\frac{3}{4}\right)\right]\,. \tag{61}\] It is readily seen that the scale dependence of \(\mathcal{M}_{\rm UV}(\mu_{H})\) cancels against the product of \(\mathcal{M}_{H}(\mu_{H})\) for the charged leptons. An important application of the formalism presented above is to the description of precision nuclear beta decay, e.g., for \(|V_{ud}|\) determination [36] and tests of first row CKM unitarity [23, 24, 26, 36, 40, 41, 42, 43, 44, 70, 71]. Consider the decay of a heavy atom to a negatively charged ion, a positron, and a neutrino [72, 25, 30, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87], \[A\to\ I^{-}+e^{+}+\nu_{e}\,. \tag{62}\] Beta decays are a complicated multi-scale problem, involving energies from the weak scale \(\sim 100\) GeV, down to scales set by atomic screening \(\sim 100\ {\rm eV}\). The embedding of Coulomb corrections in a broader EFT framework is crucial for the systematic separation of physical scales and computation of QED radiative corrections. 
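The mismatch noted in Eq. (58) between \(\left\langle|\mathcal{M}_{H}|^{2}\right\rangle\) and the historically defined Fermi function is easy to quantify numerically. The sketch below is illustrative only: the charge, lepton energy, and matching scale are chosen arbitrarily, and natural units with the lepton mass set to one are assumed:

```python
import mpmath as mp

alpha = 1/mp.mpf('137.036')

def fermi_function(Z, E, p, r):
    """Fermi function F(Z, E, r) of Eq. (48), in units of the lepton mass."""
    eta = mp.sqrt(1 - (Z*alpha)**2)
    xi = Z*alpha*E/p                       # xi = Z*alpha/beta with beta = p/E
    return (2*(1 + eta)/mp.gamma(2*eta + 1)**2
            * abs(mp.gamma(eta + 1j*xi))**2 * mp.exp(mp.pi*xi) * (2*p*r)**(2*(eta - 1)))

# Illustrative kinematics only: Z = 20, lepton energy E = 1.5 m, mu_H = 100 m (a nuclear-size scale).
Z, E, mu_H = 20, mp.mpf('1.5'), mp.mpf(100)
p = mp.sqrt(E**2 - 1)
r_H = 1/(mu_H*mp.exp(mp.euler))            # r_H^{-1} = mu_H * exp(gamma_E)
eta = mp.sqrt(1 - (Z*alpha)**2)
print(fermi_function(Z, E, p, r_H))        # F(Z, E)|_{r_H} entering Eq. (58)
print(4*eta/(1 + eta)**2)                  # the extra factor in Eq. (58)
```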
For charged current processes such as beta decays, the charge-mismatch between the initial and final heavy particle (i.e., nucleus) introduce sub-leading effects whose analysis can be substantially simplified using eikonal algebra [63]. Systematic evaluation of these subleading corrections differ from previous phenomenological approaches; detailed calculations are presented elsewhere [50, 51]. **Acknowledgements** We thank Susan Gardner for useful discussions, and the Neutrino Theory Network for sponsoring RP's visit to U. Kentucky in 2018. RP thanks Benoit Assi, Florian Herren, and Robert Szafron for helpful discussions. This work was supported by the U.S. Department of Energy, Office of Science, Office of High Energy Physics, under Awards DE-SC0019095. Fermilab is operated by Fermi Research Alliance, LLC under Contract No. DE-AC02-07CH11359 with the United States Department of Energy. Part of this research was performed at the Kavli Institute for Theoretical Physics which is supported in part by the National Science Foundation under Grant No. NSF PHY-1748958 and at the Aspen Center for Physics, which is supported by National Science Foundation grant PHY-1607611. This work is supported by the U.S. Department of Energy, Office of Science, Office of High Energy Physics, under Award Number DE-SC0011632 and by the Walter Burke Institute for Theoretical Physics. RP acknowledges support from the U.S. Department of Energy, Office of Science, Office of High Energy Physics, under Award Number DE-SC0011632 and the Neutrino Theory Network Program Grant under Award Number DE-AC02-07CH11359. Loop integrals We collect here some results for loop integrals that are used in the main text. Integrals are defined in Euclidean \(D\)-dimensional space with \(D=3-2\epsilon\). ### Two loop integrals #### a.1.1 Scalar Integrals Consider the two-loop integral \[J(a_{1},a_{2},a_{3},a_{4},a_{5})=\int\frac{\mathrm{d}^{D}L_{2}}{(2\pi)^{D}} \frac{\mathrm{d}^{D}L_{1}}{(2\pi)^{D}}\frac{1}{[\mathbf{L}_{2}^{2}]^{a_{1}}} \frac{1}{[(\mathbf{p}-\mathbf{L}_{2})^{2}-\mathbf{p}^{2}]^{a_{2}}}\frac{1}{[ \mathbf{L}_{1}^{2}]^{a_{3}}}\frac{1}{[(\mathbf{p}-\mathbf{L}_{1})^{2}-\mathbf{ p}^{2}]^{a_{4}}}\frac{1}{[(\mathbf{L}_{1}-\mathbf{L}_{2})^{2}]^{a_{5}}}\,. \tag{63}\] Using that the integral of a total derivative vanishes in dimensional regularization, and inserting \((\partial/\partial L_{2}^{i})L_{2}^{i}\) and \((\partial/\partial L_{2}^{i})L_{1}^{i}\) under the integral, yields the following "integration by parts" [88] relation, \[0=D-a_{1}-a_{2}-2a_{5}-a_{1}\mathbf{1}^{+}(\mathbf{5}^{-}-\mathbf{3}^{-})-a_{ 2}\mathbf{2}^{+}(\mathbf{5}^{-}-\mathbf{4}^{-})\,, \tag{64}\] where we use the shorthand \(\mathbf{m}^{\pm}\) to denote the raising or lowering indices in \(J\), e.g., \(\mathbf{2}^{\pm}J(a_{1},a_{2},a_{3},a_{4},a_{5})=J(a_{1},a_{2}\pm 1,a_{3},a_{4},a_{5})\). In particular, for the two-loop integral appearing in Eq. 
(14), \[J(0,1,1,1,1)=\frac{1}{D-3}\left[J(0,2,1,1,0)-J(0,2,1,0,1)\right]\,, \tag{65}\] where the integrals on the right-hand side are recursively one-loop and are readily evaluated: \[J(0,2,1,1,0)=(-p^{2}-\mathrm{i}0)^{-1-2\epsilon}\left[\frac{ \Gamma\left(\frac{1}{2}+\epsilon\right)}{(4\pi)^{\frac{3}{2}-\epsilon}}\right] ^{2}\left(\frac{-1}{2\epsilon}\right)\, \tag{66}\] \[J(0,2,1,0,1)=(-p^{2}-\mathrm{i}0)^{-1-2\epsilon}\left[\frac{ \Gamma\left(\frac{1}{2}+\epsilon\right)}{(4\pi)^{\frac{3}{2}-\epsilon}} \right]^{2}\frac{\Gamma\left(\frac{1}{2}-\epsilon\right)^{2}\Gamma(1+2\epsilon )\Gamma(-4\epsilon)}{\Gamma\left(\frac{1}{2}+\epsilon\right)\Gamma(1-2 \epsilon)\Gamma\left(\frac{1}{2}-3\epsilon\right)}\,. \tag{67}\] #### a.1.2 Vector and Tensor Integrals In the evaluation of the hard function for a relativistic lepton, Eq. (31), we encounter the following two loop integrals, \[J^{i}(a_{1},a_{2},a_{3},a_{4},a_{5})=\int\frac{\mathrm{d}^{D}L_{ 2}}{(2\pi)^{D}}\frac{\mathrm{d}^{D}L_{1}}{(2\pi)^{D}}\ \ \frac{L_{2}^{i}}{[\mathbf{L}_{2}^{2}]^{a_{1}}}\frac{1}{[(\mathbf{p}-\mathbf{L} _{2})^{2}-\mathbf{p}^{2}]^{a_{2}}}\frac{1}{[\mathbf{L}_{1}^{2}]^{a_{3}}} \frac{1}{[(\mathbf{p}-\mathbf{L}_{1})^{2}-\mathbf{p}^{2}]^{a_{4}}}\frac{1}{[( \mathbf{L}_{1}-\mathbf{L}_{2})^{2}]^{a_{5}}}\, \tag{68}\] \[J^{ij}(a_{1},a_{2},a_{3},a_{4},a_{5})=\int\frac{\mathrm{d}^{D}L_{ 2}}{(2\pi)^{D}}\frac{\mathrm{d}^{D}L_{1}}{(2\pi)^{D}}\ \ \frac{L_{2}^{i}L_{1}^{j}}{[\mathbf{L}_{2}^{2}]^{a_{1}}} \frac{1}{[(\mathbf{p}-\mathbf{L}_{2})^{2}-\mathbf{p}^{2}]^{a_{2}}}\frac{1}{[ \mathbf{L}_{1}^{2}]^{a_{3}}}\frac{1}{[(\mathbf{p}-\mathbf{L}_{1})^{2}- \mathbf{p}^{2}]^{a_{4}}}\frac{1}{[(\mathbf{L}_{1}-\mathbf{L}_{2})^{2}]^{a_{5}}}\,. \tag{69}\] In particular, we require the contractions \(p^{i}J^{i}(1,1,0,1,1)\), \(p^{i}J^{i}(0,1,1,1,1)\), and \(\delta^{ij}J^{ij}(0,1,1,1,1)\), which by partial-fractioning can be written, \[2p^{i}J^{i}(1,1,0,1,1)=J(0,1,0,1,1)-J(0,1,1,0,1)\,,\] \[2p^{i}J^{i}(0,1,1,1,1)=J(-1,1,1,1,1)-J(0,0,1,1,1)\,,\] \[2\delta^{ij}J^{ij}(0,1,1,1,1)=J(0,1,0,1,1)+J(-1,1,1,1,1)-J(0,1,1, 1,0)\,. \tag{70}\] Applying the integration by parts identity (64) yields \[J(-1,1,1,1,1)=\frac{1}{D-2}\big{[}-J(0,1,1,1,0)+J(0,1,0,1,1)+J(-1,2,1,1,0)-J(-1,2,1,0,1)\big{]}\,. 
\tag{71}\] The remaining integrals are recursively one-loop and are given by \[J(-1,2,1,0,1) =(-p^{2}-\mathrm{i}0)^{-2\epsilon}\left[\frac{\Gamma\left(\frac{1}{2 }+\epsilon\right)}{(4\pi)^{\frac{3}{2}-\epsilon}}\right]^{2}\frac{\Gamma\left( \frac{1}{2}-\epsilon\right)^{2}\Gamma(2\epsilon)\Gamma(2-4\epsilon)}{\Gamma \left(\frac{1}{2}+\epsilon\right)\Gamma(1-2\epsilon)\Gamma\left(\frac{3}{2}-3 \epsilon\right)}\,, \tag{72}\] \[J(-1,2,1,1,0) =(-p^{2}-\mathrm{i}0)^{-2\epsilon}\left[\frac{\Gamma\left(\frac{1 }{2}+\epsilon\right)}{(4\pi)^{\frac{3}{2}-\epsilon}}\right]^{2}\frac{2(1- \epsilon)}{\epsilon(1-2\epsilon)}\,,\] \[J(0,0,1,1,1) =0\,,\] \[J(0,1,0,1,1) =(-p^{2}-\mathrm{i}0)^{-2\epsilon}\left[\frac{\Gamma\left(\frac{1 }{2}+\epsilon\right)}{(4\pi)^{\frac{3}{2}-\epsilon}}\right]^{2}\frac{1}{ \epsilon(1-2\epsilon)}\,,\] \[J(0,1,1,0,1) =(-p^{2}-\mathrm{i}0)^{-2\epsilon}\left[\frac{\Gamma\left(\frac{1 }{2}+\epsilon\right)}{(4\pi)^{\frac{3}{2}-\epsilon}}\right]^{2}\frac{\Gamma \left(\frac{1}{2}-\epsilon\right)^{2}\Gamma(2\epsilon)\Gamma(1-4\epsilon)}{ \Gamma\left(\frac{1}{2}+\epsilon\right)\Gamma(1-2\epsilon)\Gamma\left(\frac{3} {2}-3\epsilon\right)}\,,\] \[J(0,1,1,1,0) =(-p^{2}-\mathrm{i}0)^{-2\epsilon}\left[\frac{\Gamma\left(\frac{1 }{2}+\epsilon\right)}{(4\pi)^{\frac{3}{2}-\epsilon}}\right]^{2}\frac{1}{ \epsilon(1-2\epsilon)}\,.\] ### Three loop integrals Consider the three-loop integral, \[I(a_{2},a_{4},a_{5})=\int\frac{\mathrm{d}^{D}L_{1}}{(2\pi)^{D}} \!\!\int\frac{\mathrm{d}^{D}L_{2}}{(2\pi)^{D}}\!\!\int\frac{\mathrm{d}^{D}L_{3} }{(2\pi)^{D}} \tag{73}\] \[\frac{1}{\mathbf{L}_{1}^{2}}\frac{1}{(\mathbf{L}_{1}-\mathbf{p})^ {2}-\mathbf{p}^{2}}\frac{1}{(\mathbf{L}_{1}-\mathbf{L}_{2})^{2}}\frac{1}{[( \mathbf{L}_{2}-\mathbf{p})^{2}-\mathbf{p}^{2}]^{a_{4}}}\frac{1}{[(\mathbf{L}_ {2}-\mathbf{L}_{3})^{2}]^{a_{5}}}\frac{1}{[(\mathbf{L}_{3}-\mathbf{p})^{2}- \mathbf{p}^{2}]^{a_{2}}}\,.\] Integration by parts identities are (_cf._ Eq. (64) at \(a_{1}=0\)), \[0=D-a_{2}-2a_{5}-a_{2}\mathbf{2}^{+}(\mathbf{5}^{-}-\mathbf{4}^{-})\,, \tag{74}\] so that the integral of interest in Eq. (15) is \[I(1,1,1)=\frac{1}{D-3}\left[I(2,1,0)-I(2,0,1)\right]\,. \tag{75}\] The first integral in Eq. (75) is given by the product of two- and one-loop integrals, \[I(2,1,0) =\left[\int\frac{\mathrm{d}^{D}L_{1}}{(2\pi)^{D}}\!\!\int\frac{ \mathrm{d}^{D}L_{2}}{(2\pi)^{D}}\frac{1}{\mathbf{L}_{1}^{2}}\frac{1}{( \mathbf{L}_{1}-\mathbf{p})^{2}-\mathbf{p}^{2}}\frac{1}{(\mathbf{L}_{1}- \mathbf{L}_{2})^{2}}\frac{1}{(\mathbf{L}_{2}-\mathbf{p})^{2}-\mathbf{p}^{2}} \right]\left[\int\frac{\mathrm{d}^{D}L_{3}}{(2\pi)^{D}}\frac{1}{[(\mathbf{L}_ {3}-\mathbf{p})^{2}-\mathbf{p}^{2}]^{2}}\right] \tag{76}\] \[=J(0,1,1,1,1)(-p^{2}-\mathrm{i}0)^{-\frac{1}{2}-\epsilon}\frac{ \Gamma\left(\frac{1}{2}+\epsilon\right)}{(4\pi)^{\frac{3}{2}-\epsilon}}\,,\] where \(J(0,1,1,1,1)\) is evaluated above. The second integral in Eq. 
(75) is recursively two-loop, \[I(2,0,1) =\int\frac{\mathrm{d}^{D}L_{1}}{(2\pi)^{D}}\!\!\int\frac{\mathrm{d} ^{D}L_{3}}{(2\pi)^{D}}\frac{1}{\mathbf{L}_{1}^{2}}\frac{1}{(\mathbf{L}_{1}- \mathbf{p})^{2}-\mathbf{p}^{2}}\frac{1}{[(\mathbf{L}_{3}-\mathbf{p})^{2}- \mathbf{p}^{2}]^{2}}\left[\int\frac{\mathrm{d}^{D}L_{2}}{(2\pi)^{D}}\frac{1}{ (\mathbf{L}_{1}-\mathbf{L}_{2})^{2}}\frac{1}{(\mathbf{L}_{2}-\mathbf{L}_{3})^{ 2}}\right] \tag{77}\] \[=\int\frac{\mathrm{d}^{D}L_{1}}{(2\pi)^{D}}\!\!\int\frac{\mathrm{d} ^{D}L_{3}}{(2\pi)^{D}}\frac{1}{\mathbf{L}_{1}^{2}}\frac{1}{(\mathbf{L}_{1}- \mathbf{p})^{2}-\mathbf{p}^{2}}\frac{[(\mathbf{L}_{1}-\mathbf{L}_{3})^{2}]^{- \frac{1}{2}-\epsilon}}{[(\mathbf{L}_{3}-\mathbf{p})^{2}-\mathbf{p}^{2}]^{2}} \times\frac{\Gamma\left(\frac{1}{2}+\epsilon\right)}{(4\pi)^{\frac{3}{2}- \epsilon}}B\left(\frac{1}{2}-\epsilon,\frac{1}{2}-\epsilon\right)\] \[=\frac{\Gamma\left(\frac{1}{2}+\epsilon\right)}{(4\pi)^{\frac{3}{2 }-\epsilon}}B\left(\frac{1}{2}-\epsilon,\frac{1}{2}-\epsilon\right)J\left(0,2,1,1,\frac{1}{2}+\epsilon\right)\,.\] To evaluate \(J\left(0,2,1,1,\frac{1}{2}+\epsilon\right)\), we first perform the \(\mathbf{L}_{3}\) integral in Eq. (77), \[\int\frac{\mathrm{d}^{D}L_{3}}{(2\pi)^{D}}\frac{1}{[(\mathbf{L}_{3}-\mathbf{p})^{ 2}-\mathbf{p}^{2}]^{2}}[(\mathbf{L}_{1}-\mathbf{L}_{3})^{2}]^{-\frac{1}{2}- \epsilon}=\frac{\Gamma(1+2\epsilon)}{\Gamma\left(\frac{1}{2}+\epsilon\right)(4 \pi)^{D/2}}\int_{0}^{1}\mathrm{d}x\,x^{-2\epsilon}(1-x)^{-\frac{3}{2}-\epsilon} \left[(\mathbf{L}_{1}-\mathbf{p})^{2}-\frac{\mathbf{p}^{2}}{1-x}\right]^{-1-2 \epsilon}\,. \tag{78}\] so that \[J\left(0,2,1,1,\frac{1}{2}+\epsilon\right)=\frac{\Gamma(1+2\epsilon)}{\Gamma \left(\frac{1}{2}+\epsilon\right)(4\pi)^{D/2}}\int_{0}^{1}\mathrm{d}x\,x^{-2 \epsilon}(1-x)^{-\frac{3}{2}-\epsilon}K(1,1,1+2\epsilon)\,, \tag{79}\] where we introduce \[K(a_{1},a_{2},a_{3})=\int\frac{\mathrm{d}^{D}L}{(2\pi)^{D}}\frac{1}{[\mathbf{ L}^{2}]^{a_{1}}}\frac{1}{[(\mathbf{L}-\mathbf{p})^{2}-\mathbf{p}^{2}]^{a_{2}}} \frac{1}{[(\mathbf{L}-\mathbf{p})^{2}-\mathbf{p}^{2}/(1-x)]^{a_{3}}}\,. \tag{80}\] Integration by parts for \(K\) yields \[0=D-2a_{1}-a_{2}-a_{2}\mathbf{2}^{+}\mathbf{1}^{-}-a_{3}\mathbf{3}^{+}( \mathbf{1}^{-}+\mathbf{2}^{-})\,, \tag{81}\] so that \[K(1,1,1+2\epsilon)=\frac{1}{D-3}\left\{K(0,2,1+2\epsilon)+(1+2 \epsilon)\left[K(0,1,2+2\epsilon)+K(1,0,2+2\epsilon)\right]\right\}\,. \tag{82}\] As a function of the integration variable \(x\) in Eq. (79), the terms on the right side of Eq. 
(82) are \[K(0,2,1+2\epsilon) =\frac{(-p^{2})^{-\frac{3}{2}-3\epsilon}}{(4\pi)^{D/2}}\frac{ \Gamma\left(\frac{3}{2}+3\epsilon\right)}{\Gamma(1+2\epsilon)}\int_{0}^{1} \mathrm{d}z\,z(1-z)^{2\epsilon}\left(z+\frac{1-z}{1-x}\right)^{-\frac{3}{2}-3 \epsilon}\,, \tag{83}\] \[K(0,1,2+2\epsilon) =\frac{(-p^{2})^{-\frac{3}{2}-3\epsilon}}{(4\pi)^{D/2}}\frac{ \Gamma\left(\frac{3}{2}+3\epsilon\right)}{\Gamma(2+2\epsilon)}\int_{0}^{1} \mathrm{d}z\,(1-z)^{1+2\epsilon}\left(z+\frac{1-z}{1-x}\right)^{-\frac{3}{2}- 3\epsilon}\,,\] \[K(1,0,2+2\epsilon) =\frac{(-p^{2})^{-\frac{3}{2}-3\epsilon}}{(4\pi)^{D/2}}\frac{ \Gamma\left(\frac{3}{2}+3\epsilon\right)}{\Gamma(2+2\epsilon)}\int_{0}^{1} \mathrm{d}z\,(1-z)^{1+2\epsilon}\left(-z(1-z)+\frac{1-z}{1-x}\right)^{-\frac{ 3}{2}-3\epsilon}\,.\] Each integral may be evaluated as a series in \(\epsilon\), yielding, \[J\left(0,2,1,1,\frac{1}{2}+\epsilon\right) =\frac{\Gamma\left(\frac{3}{2}+3\epsilon\right)(-p^{2})^{-\frac{3 }{2}-3\epsilon}}{\Gamma\left(\frac{1}{2}+\epsilon\right)(4\pi)^{D}}\frac{-1}{ 2\epsilon}\left\{\frac{-1}{3\epsilon}+\frac{4}{3}\log 2+2+\left(\frac{5\pi^{2}}{9}- \frac{8}{3}\log^{2}2-8\log 2-12\right)\epsilon\right. \tag{84}\] \[\quad+\left[-\frac{62\zeta(3)}{3}-\frac{10\pi^{2}}{3}+72+\frac{ 32}{9}\log^{3}2+16\log^{2}2+\left(-\frac{20\pi^{2}}{9}+48\right)\log 2 \right]\epsilon^{2}+\mathcal{O}(\epsilon^{3})\right\}.\] ## Appendix B Wavefunction solution: Schrodinger-Coulomb Consider the Lippmann-Schwinger equation and its related Born series for the solution of the Schrodinger equation \[\psi_{\mathbf{p}}^{(\pm)}(\mathbf{x})=\langle\mathbf{x}|\left(1+ \frac{1}{E-\hat{H}_{0}\pm\mathrm{i}0}\hat{V}+\frac{1}{E-\hat{H}_{0}\pm\mathrm{ i}0}\hat{V}\frac{1}{E-\hat{H}_{0}\pm\mathrm{i}0}\hat{V}+\ldots\right)|\mathbf{p} \rangle\,, \tag{85}\] where \(\hat{H}_{0}=\hat{\mathbf{p}}^{2}/(2m)\) is the free Hamiltonian and \(\hat{V}=V(\hat{\mathbf{x}})\) is the potential. For a finite range potential, the \(+\mathrm{i}0\)\((-\mathrm{i}0)\) prescription in Eq. (85) corresponds to a plane wave plus outgoing (incoming) spherical wave at large distance. Inserting a complete set of momentum eigenstates we arrive at \[\begin{split}\psi_{\mathbf{p}}^{(\pm)}(\mathbf{x})=\mathrm{e}^{ \mathrm{i}\mathbf{p}\cdot\mathbf{x}}\bigg{[}& 1+\int\frac{\mathrm{d}^{3}L}{(2\pi)^{3}}\mathrm{e}^{ \mathrm{i}\mathbf{L}\cdot\mathbf{x}}\frac{-2m}{2\mathbf{p}\cdot\mathbf{L}+ \mathbf{L}^{2}\mp\mathrm{i}0}\tilde{V}(\mathbf{L})\\ &+\int\frac{\mathrm{d}^{3}L_{1}}{(2\pi)^{3}}\frac{\mathrm{d}^{3}L_{ 2}}{(2\pi)^{3}}\mathrm{e}^{\mathrm{i}\mathbf{L}\cdot\mathbf{x}}\frac{-2m}{2 \mathbf{p}\cdot\mathbf{L}_{2}+\mathbf{L}_{2}^{2}\mp\mathrm{i}0}\tilde{V}( \mathbf{L}_{2}-\mathbf{L}_{1})\frac{-2m}{2\mathbf{p}\cdot\mathbf{L}_{1}+ \mathbf{L}_{1}^{2}\mp\mathrm{i}0}\tilde{V}(\mathbf{L}_{1})+\ldots\bigg{]}\,, \end{split} \tag{86}\] where \(\tilde{V}(\mathbf{L})=\int\mathrm{d}^{3}x\)\(\mathrm{e}^{\mathrm{i}\mathbf{L}\cdot\mathbf{x}}V(\mathbf{x})\) is the potential in momentum space. In particular, for a Yukawa potential, \(V(\mathbf{x})=(-Ze^{2})\exp(-\lambda|\mathbf{x}|)/(4\pi|\mathbf{x}|)\), we have \(\tilde{V}(\mathbf{L})=-Ze^{2}/(\mathbf{L}^{2}+\lambda^{2})\). Setting \(\mathbf{x}\to 0\) and choosing the outgoing \(+\mathrm{i}0\) prescription, the wavefunction \(\psi_{\mathbf{p}}^{(+)}(0)\) provides an all-orders solution for the amplitude Eq. (6). 
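Before constructing the wavefunction explicitly, it is a useful cross-check (not part of the paper's derivation) that the closed-form value obtained at the end of this appendix, Eq. (94), which is Eq. (24) of the main text, reproduces the renormalized hard coefficients of Eq. (19) once the soft factor (22) is divided out. A sympy sketch through \(\mathcal{O}(\xi^{3})\), writing \(L=\log(2p/\mu)\):

```python
import sympy as sp

xi, L = sp.symbols('xi L', real=True)    # L = log(2p/mu)

# Closed form (24)/(94) divided by the soft factor (22).
ratio = sp.gamma(1 - sp.I*xi) * sp.exp(xi*(sp.pi/2 + sp.I*L - sp.I*sp.EulerGamma))

# Renormalized hard coefficients of Eq. (19), typed in directly with Z*alpha/beta = xi.
target = (1 + xi*(sp.pi/2 + sp.I*L)
            + xi**2*(sp.pi**2/24 + sp.I*sp.pi*L/2 - L**2/2)
            + xi**3*(-sp.pi**3/48 - sp.I*sp.zeta(3)/3 + sp.I*sp.pi**2*L/24
                     - sp.pi*L**2/4 - sp.I*L**3/6))

print(sp.simplify(sp.expand(sp.series(ratio, xi, 0, 4).removeO() - target)))   # -> 0
```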
Let us solve the Schrodinger equation, \[\bigg{[}-\frac{1}{2m}\nabla^{2}-\frac{Z\alpha}{r}\mathrm{e}^{-\lambda r}\bigg{]} \psi(\mathbf{x})=\frac{\mathbf{p}^{2}}{2m}\psi(\mathbf{x})\, \tag{87}\] in the limit where \(\lambda\ll|\mathbf{p}|\) (but to all orders in \(Z\alpha\)). Here \(r=|\mathbf{x}|\). Let us write \(\psi_{\mathbf{p}}(\mathbf{x};\lambda)=\mathrm{e}^{\mathrm{i}\mathbf{p}\cdot \mathbf{x}}F_{\mathbf{p}}(\mathbf{x},\lambda)\). Choosing \(\mathbf{p}\) along the \(\hat{\mathbf{z}}\) direction, \(\mathbf{p}=p\hat{\mathbf{z}}\), we look for the solution that reduces to \(F=1\) at \(z\to-\infty\) to obtain \(\psi^{(+)}\), and the solution that reduces to \(F=1\) at \(z\to+\infty\) for \(\psi^{(-)}\). The differential equation for \(F\) is \[\bigg{[}-\frac{1}{2}\nabla^{2}-\mathrm{i}\mathbf{p}\cdot\nabla-\frac{mZ\alpha }{r}\mathrm{e}^{-\lambda r}\bigg{]}F(\mathbf{x})=0\,. \tag{88}\] We may now apply boundary layer theory [89], solving for solutions at short and long distances and matching the solutions in their common domain of validity \(p^{-1}\ll r\ll\lambda^{-1}\). For \(r\ll\lambda^{-1}\), the Schrodinger equation is \[\bigg{[}-\frac{1}{2}\frac{\nabla^{2}}{p^{2}}-\mathrm{i}\frac{ \hat{\mathbf{p}}\cdot\vec{\nabla}}{p}-\frac{\xi}{pr}\bigg{]}\,F_{<}=0\,, \tag{89}\] with solution \[F_{<}^{(+)}(\mathbf{x})=N(p,\lambda)_{1}F_{1}(\mathrm{i}\xi,1, \mathrm{i}p(r-z))\,, \tag{90}\] where \(\,{}_{1}F_{1}(a,b,c)\) is the confluent hypergeometric function. For \(r\gg p^{-1}\), the Schrodinger equation is \[\left[-\mathrm{i}\frac{\hat{\mathbf{p}}\cdot\vec{\nabla}}{\lambda}-\frac{\xi} {\lambda r}\mathrm{e}^{-\lambda r}\right]F_{>}=0\,, \tag{91}\] with solution \[F_{>}^{(+)}(\mathbf{x})=\exp\bigg{[}\mathrm{i}\xi\int_{-\infty}^{z}dz^{\prime }\,,\frac{\mathrm{e}^{-\lambda\sqrt{z^{\prime 2}+r^{2}-z^{2}}}}{\sqrt{z^{\prime 2}+r^{2}-z^{ 2}}}\bigg{]}\,. \tag{92}\] In the overlap region \(p^{-1}\ll r\ll\lambda^{-1}\), the respective solutions can be expanded as \[F_{<}^{(+)}\to N(p,\lambda)\frac{1}{\Gamma(1-\mathrm{i}\xi)} \exp\bigg{\{}-\frac{\pi\xi}{2}-\mathrm{i}\xi\log[p(r-z)]\bigg{\}}\,,\] \[F_{>}^{(+)}\to\exp\bigg{\{}\mathrm{i}\xi\left[-\log\frac{\lambda (r-z)}{2}-\gamma_{\mathrm{E}}\right]\bigg{\}}\,. \tag{93}\] Identifying \(F_{<}^{(+)}=F_{>}^{(+)}\) in the overlap region, and using that \({}_{1}F_{1}(a,b,0)=1\), we have, up to \(\lambda/p\) power corrections, \[\psi_{\mathbf{p}}^{(+)}(\mathbf{x}=0)=N(p,\lambda)=\Gamma(1- \mathrm{i}\xi)\exp\bigg{\{}\frac{\pi}{2}\xi+\mathrm{i}\xi\left[\log\frac{2p}{ \lambda}-\gamma_{\mathrm{E}}\right]\bigg{\}}\,. \tag{94}\] The incoming solution \(\psi_{\mathbf{p}}^{(-)}(\mathbf{x})\) is given by \(F^{(-)}(\mathbf{x})=[F^{(+)}(-\mathbf{x})]^{*}\). ## Appendix C Wavefunction solution: Dirac-Coulomb The Dirac equation can be similarly shown to have a Lippmann-Schwinger solution and associated Born series. Let us define \(\Phi(\mathbf{x})=u(\mathbf{p})\mathrm{e}^{\mathrm{i}\mathbf{p}\cdot\mathbf{x}}\), where \(u(\mathbf{p})\) is a Dirac spinor. 
The solution of the Dirac equation with a potential can be written as \[\psi^{(\pm)}(\mathbf{x})=\bigg{[}1 +\int\frac{\mathrm{d}^{3}L}{(2\pi)^{3}}\ \mathrm{e}^{\mathrm{i}\mathbf{L}\cdot\mathbf{x}}\frac{1}{\not{p}+\dot{L}-m\pm \mathrm{i}0}\gamma_{0}\tilde{V}(\mathbf{L}) \tag{95}\] \[+\int\frac{\mathrm{d}^{3}L_{2}}{(2\pi)^{3}}\frac{\mathrm{d}^{3}L_{ 1}}{(2\pi)^{3}}\ \mathrm{e}^{\mathrm{i}\mathbf{L}_{2}\cdot\mathbf{x}}\frac{1}{\not{p}+\dot{L}_{ 2}-m\pm\mathrm{i}0}\gamma_{0}\tilde{V}(\mathbf{L}_{1}-\mathbf{L}_{2})\frac{1}{ \not{p}+\dot{L}_{1}-m\pm\mathrm{i}0}\gamma_{0}\tilde{V}(\mathbf{L}_{1})+... \bigg{]}\Phi(\mathbf{x})\,.\] The amplitude of interest, Eq. (39), is given by \(\bar{u}({\bf p}){\cal M}_{\bf r}=\overline{\psi}^{(-)}(-{\bf r})=[\psi^{(-)}(-{\bf r })]^{\dagger}\gamma^{0}\). We require the solution \(\psi^{(-)}\) with a small but non-zero photon mass \(\lambda\). References [90, 91] present the angular momentum components for the strict \(\lambda=0\) solution, which is related to our problem by a normalization that must be computed. To determine the complete solution including \(\lambda\) dependence, we identify this solution with \(\psi^{(-)}_{<}\), up to a normalization that is fixed by matching to \(\psi^{(-)}_{>}\) in the overlapping region of validity. For simplicity we perform the matching by projecting onto the \(S\)-wave component of the outgoing spherical wave. Let us consider the upper components of \(\psi^{(\pm)}\) in the Dirac basis for \(\gamma^{\mu}\), and introduce \[\frac{1+\gamma_{0}}{2}\psi^{(\pm)}({\bf x})={\rm e}^{{\rm i}{\bf p }\cdot{\bf x}}F^{(\pm)}_{\bf p}({\bf x},\lambda)\left(\begin{array}{c}\chi \\ 0\end{array}\right)\,, \tag{96}\] where \(\chi\) is a 2-component spinor. Similar to the Schrodinger-Coulomb problem, we look for solutions \(F_{>}\) when \(r\gg p^{-1}\), and \(F_{<}\) when \(r\ll\lambda^{-1}\). The large-distance solution obeys an identical equation to the Schrodinger-Coulomb problem (with \(\xi=Z\alpha/\beta\) and \(\beta=p/E\) representing the relativistic velocity). The solution for \(F^{(-)}_{>}\) is given in Appendix B, and for the matching we require the small-\(r\) limit. Considering the outgoing spherical wave component, the \(S\)-wave projection is \[\psi^{(-)}_{>}\to\frac{{\rm e}^{ipr}}{2{\rm i}pr}\exp\left[-{\rm i }\xi\left(\log\frac{2p}{\lambda}-\gamma_{E}\right)+{\rm i}\xi\log(2pr)\right]\,. \tag{97}\] The relevant component of the small-\(r\) solution involves the quantity [90, 92] \[C(p,\lambda)\,f_{-1}(pr)=C(p,\lambda)\,{\rm e}^{\frac{\pi\xi}{2 }}\frac{|\Gamma(\eta+{\rm i}\xi)|}{\Gamma(2\eta+1)}(2pr)^{\eta-1}\bigg{\{}{ \rm e}^{-{\rm i}pr+{\rm i}\kappa}(\eta+{\rm i}\xi)_{1}F_{1}(\eta+1+{\rm i}\xi,2\eta+1,2{\rm i}pr)+{\rm c.c.}\bigg{\}}\,, \tag{98}\] where \(\exp({\rm i}\kappa)=\sqrt{(1+{\rm i}m\xi/E)/(\eta+{\rm i}\xi)}\). From the large-\(r\) limit of this expression, taking the outgoing spherical wave component, we have \[\psi^{(-)}_{<}\to C(p,\lambda)\frac{|\Gamma(\eta+{\rm i}\xi)|}{ \Gamma(\eta+{\rm i}\xi)}\frac{{\rm e}^{{\rm i}pr}}{2{\rm i}pr}\exp\Big{[}{\rm i }\xi\log(2pr)-{\rm i}(\eta-1)\frac{\pi}{2}+{\rm i}\kappa\Big{]}\,\,. \tag{99}\] Comparison of Eqs. (97) and (99) in the overlap region \(p^{-1}\ll r\ll\lambda^{-1}\) determines \(C(p,\lambda)\). Using \({}_{1}\!F_{1}(a,b,0)=1\), and taking the \(r\to 0\) limit of the complete solution, we have (_cf._ Eq. (16) of Ref. 
[92]) \[\begin{split}\lim_{r\to 0}\psi^{(-)}({\bf x};\lambda)={\rm e}^{{\rm i }\xi[-\log(2p/\lambda)+\gamma_{\rm n}]}&{\rm e}^{\pi\xi/2} \Gamma(\eta+{\rm i}\xi)\times\frac{1+\eta+{\rm i}\xi\big{(}1-\frac{m}{E}\big{)} }{\Gamma(1+2\eta)}{\rm e}^{-{\rm i}(1-\eta)\pi/2}(2pr)^{\eta-1}\\ &\times\bigg{[}1+\frac{Z\alpha}{1+\eta}\frac{{\rm i}\gamma_{0} \boldsymbol{\gamma}\cdot{\bf x}}{|{\bf x}|}\bigg{]}\bigg{[}\bigg{(}\frac{1+M} {2}\bigg{)}+\bigg{(}\frac{1-M}{2}\bigg{)}\gamma_{0}\bigg{]}u({\bf p})\,,\end{split} \tag{100}\] where \[M=\frac{E+m}{E+\eta m}\Big{(}1+{\rm i}\xi\frac{m}{E}\Big{)}\,. \tag{101}\] ## Appendix D All orders UV function with a finite-distance regulator We can compute the UV matching coefficient introduced in Eq. (40) by setting \(\lambda=p=0\) and evaluating the remaining integrals using dimensional regularization. Examining the perturbative series we find that the (bare, unrenormalized) UV matrix element has the following structure \[{\cal M}^{\rm bare}_{\rm UV}=F^{\rm bare}_{1}-F^{\rm bare}_{2} \times\frac{{\rm i}\gamma_{0}\boldsymbol{\gamma}\cdot{\bf x}}{2|{\bf x}|}\,, \tag{102}\] where \[F^{\rm bare}_{1}=\sum_{n=0}^{\infty}(Ze^{2})^{2n}{\cal I}^{(n)}_{1} \,,\quad F^{\rm bare}_{2}\times\frac{{\rm i}\gamma_{0}\boldsymbol{\gamma}\cdot {\bf x}}{2|{\bf x}|}=(-1)\times\sum_{n=0}^{\infty}(Ze^{2})^{2n+1}{\cal I}^{(n)}_ {2}\,. \tag{103}\] In particular, all even orders of perturbation theory contribute to \(F_{1}\) and all odd orders to \(F_{2}\). The lowest order loop integrals are given by \({\cal I}_{1}^{(0)}=1\), \[{\cal I}_{1}^{(1)} =\int\frac{{\rm d}^{D}L_{1}}{(2\pi)^{D}}\frac{{\rm d}^{D}L_{2}}{(2 \pi)^{D}}\ {\rm e}^{-{\rm i}{\bf L}_{2}\cdot{\bf x}}\ \frac{\gamma_{0}{\boldsymbol{\gamma}}\cdot{\bf L}_{2}}{{\bf L}_{2}^{2}}\frac{ 1}{({\bf L}_{2}-{\bf L}_{1})^{2}}\frac{\gamma_{0}{\boldsymbol{\gamma}}\cdot{ \bf L}_{1}}{{\bf L}_{1}^{2}}\frac{1}{{\bf L}_{1}^{2}}\,,\] \[{\cal I}_{2}^{(0)} =\int\frac{{\rm d}^{D}L_{1}}{(2\pi)^{D}}\ {\rm e}^{-{\rm i}{\bf L}_{1} \cdot{\bf x}}\ \frac{\gamma_{0}{\boldsymbol{\gamma}}\cdot{\bf L}_{1}}{{\bf L}_{1}^{2}}\frac{ 1}{{\bf L}_{1}^{2}}\,, \tag{104}\] and for higher orders, \[{\cal I}_{1}^{(n)} =\int\frac{{\rm d}^{D}L_{2n}}{(2\pi)^{D}}\ {\rm e}^{-{\rm i}{\bf L}_{2n }\cdot{\bf x}}\frac{\gamma_{0}{\boldsymbol{\gamma}}\cdot{\bf L}_{2n}}{{\bf L}_ {2n}^{2}}\Bigg{[}\prod_{i=2}^{2n-1}\ \int\frac{{\rm d}^{D}L_{i}}{(2\pi)^{D}}\frac{\gamma_{0}{ \boldsymbol{\gamma}}\cdot{\bf L}_{i}}{{\bf L}_{i}^{2}}\frac{1}{({\bf L}_{i}-{ \bf L}_{i+1})^{2}}\Bigg{]}\int\frac{{\rm d}^{D}L_{1}}{(2\pi)^{D}}\frac{1}{({ \bf L}_{2}-{\bf L}_{1})^{2}}\frac{\gamma_{0}{\boldsymbol{\gamma}}\cdot{\bf L} _{1}}{{\bf L}_{1}^{2}}\frac{1}{{\bf L}_{1}^{2}}\, \tag{105}\] \[{\cal I}_{2}^{(n)} =\int\frac{{\rm d}^{D}L_{2n+1}}{(2\pi)^{D}}\ {\rm e}^{-{\rm i}{\bf L}_{2n +1}\cdot{\bf x}}\frac{\gamma_{0}{\boldsymbol{\gamma}}\cdot{\bf L}_{2n+1}}{{\bf L }_{2n+1}^{2}}\Bigg{[}\prod_{i=2}^{2n}\int\frac{{\rm d}^{D}L_{i}}{(2\pi)^{D}} \frac{\gamma_{0}{\boldsymbol{\gamma}}\cdot{\bf L}_{i}}{{\bf L}_{i}^{2}}\frac{ 1}{({\bf L}_{i}-{\bf L}_{i+1})^{2}}\Bigg{]}\int\frac{{\rm d}^{D}L_{1}}{(2\pi)^ {D}}\frac{1}{({\bf L}_{2}-{\bf L}_{1})^{2}}\frac{\gamma_{0}{\boldsymbol{ \gamma}}\cdot{\bf L}_{1}}{{\bf L}_{1}^{2}}\frac{1}{{\bf L}_{1}^{2}}\,.\] These integrals are recursively one-loop, and can be evaluated by repeated use of the following identity: \[\begin{split} C(\nu_{j})\frac{1}{({\bf L}_{2j+1}^{2})^{\nu_{j+1}- 1}}&=\int\frac{{\rm d}^{D}L_{2j-1}}{(2\pi)^{D}}\frac{{\rm d}^{D}L _{2j}}{(2\pi)^{D}}\frac{1}{({\bf L}_{2j+1}-{\bf 
L}_{2j})^{2}}\frac{\gamma_{0}{ \boldsymbol{\gamma}}\cdot{\bf L}_{2j}}{{\bf L}_{2j}^{2}}\frac{1}{({\bf L}_{2j}- {\bf L}_{2j-1})^{2}}\frac{\gamma_{0}{\boldsymbol{\gamma}}\cdot{\bf L}_{2j-1}}{ ({\bf L}_{2j-1}^{2})^{\nu_{j}}}\\ &=\frac{1}{(4\pi)^{D}}\frac{\Gamma(\nu_{j}+2-D)}{\Gamma(\nu_{j})} B(\tfrac{D}{2}-1,1+\tfrac{D}{2}-\nu_{j})B(\tfrac{D}{2}-1,D-\nu_{j}-1)\Bigg{(} \frac{1}{{\bf L}_{2j+1}^{2}}\Bigg{)}^{\nu_{j}+2-D}\,\end{split} \tag{106}\] where \(\nu_{1}=2\) and \(\nu_{j+1}=\nu_{j}+3-D\) so that \(\nu_{j}=2+2(j-1)\epsilon\). The final integral involving \({\rm e}^{-{\rm i}{\bf L}_{2n}\cdot{\bf x}}\) for \({\cal I}_{1}^{(n)}\) (or \({\rm e}^{-{\rm i}{\bf L}_{2n+1}\cdot{\bf x}}\) for \({\cal I}_{2}^{(n)}\)) can be evaluated with a Schwinger parameter, yielding \[\begin{split}{\cal I}_{1}^{(n)}&=\Bigg{[}\prod_{j=1}^{n -1}C(\nu_{j})\Bigg{]}\times\frac{\Gamma(D-\nu_{n}-1)}{(4\pi)^{D}\Gamma(\nu_{n} )}B(\tfrac{D}{2}-1,1+\tfrac{D}{2}-\nu_{n})\bigg{(}\frac{{\bf x}^{2}}{4}\bigg{)} ^{\nu_{n}+1-D}\,\\ {\cal I}_{2}^{(n)}&=\Bigg{[}\prod_{j=1}^{n}C(\nu_{j} )\Bigg{]}\Bigg{[}\frac{2\Gamma(\tfrac{D}{2}-\nu_{n+1}+1)}{(4\pi)^{D/2}\Gamma( \nu_{n+1})}\Bigg{]}\Bigg{[}\frac{{\bf x}^{2}}{4}\Bigg{]}^{\nu_{n+1}-(D+1)/2} \times\ \frac{-{\rm i}\gamma_{0}{\boldsymbol{\gamma}}\cdot{\bf x}}{2|{\bf x}|}\,.\end{split} \tag{107}\] Using the properties of the Gamma function, the functions \(F_{1}\) and \(F_{2}\) can be shown to have the following series expansion \[\begin{split} F_{1}^{\rm bare}&=\sum_{n=0}^{\infty} \tilde{g}^{n}\frac{(-1)^{n}}{n!}\bigg{(}\frac{1}{\epsilon}\bigg{)}^{n}\ \Bigg{[}\prod_{m=0}^{n-1}\frac{1}{(1+2m\epsilon)}\Bigg{]}\,,\\ F_{2}^{\rm bare}&=Z\widetilde{\alpha}\sum_{n=0}^{ \infty}\tilde{g}^{n}\frac{(-1)^{n}}{n!}\bigg{(}\frac{1}{\epsilon}\bigg{)}^{n} \ \Bigg{[}\prod_{m=0}^{n}\frac{1}{(1+2m\epsilon)}\Bigg{]}\,,\end{split} \tag{108}\] where in terms of \(\overline{\alpha}(\mu)\) in Eq. (12) we define \[\tilde{g}=\frac{(Z\widetilde{\alpha})^{2}}{8} \tag{109}\] with \[\widetilde{\alpha}=\overline{\alpha}\left(\frac{\mu^{2}r^{2}}{16}\right)^{ \epsilon}\frac{\Gamma(\tfrac{1}{2}-\epsilon)}{\Gamma(\tfrac{1}{2}+\epsilon)}= \overline{\alpha}\left(\mu r{\rm e}^{\gamma_{\rm E}}\right)^{2\epsilon}\left[1+{ \cal O}(\epsilon^{2})\right]\,. \tag{110}\] In particular, when expressed in terms of \(\widetilde{\alpha}\), the coefficients in the perturbative expansion of \(F_{i}^{\rm bare}\) are expressible entirely as rational functions of \(\epsilon\). Choosing \(\mu=(r{\rm e}^{\gamma_{\rm E}})^{-1}\) so that \(\widetilde{\alpha}\) can be identified with the \(\overline{\rm MS}\) coupling, we find that the \(\overline{\rm MS}\) operator renormalization constant can be written as \(\exp[\frac{1}{\epsilon}\sum_{n}a_{n}\tilde{g}^{n}]\) for some numbers10\(a_{n}\). The sequence of coefficients can be related to the Catalan numbers \({\cal C}(n)=(2n)!/(n!(n+1)!)\).11 The series in the exponent then converges, and is given by Footnote 10: The leading orders obtained by direct evaluation from Eq. (108) are \[{\cal Z}=\exp\left[\frac{1}{\epsilon}\bigg{(}\tilde{g}+\tilde{g}^{2}+\frac{8 \tilde{g}^{3}}{3}+10\tilde{g}^{4}+\frac{224\tilde{g}^{5}}{5}+224\tilde{g}^{6}+ \frac{8448\tilde{g}^{7}}{7}+6864\tilde{g}^{8}+\frac{366080\tilde{g}^{9}}{9}+ \frac{1244672\tilde{g}^{10}}{5}\bigg{)}+\ldots\right]. 
\tag{111}\] We have checked explicitly to \(16^{\rm th}\) order in \(\tilde{g}\) that the renormalization constant can be written as \(\exp\left[\frac{1}{\epsilon}\sum_{n}a_{n}\tilde{g}^{n}\right]\), consistent with the explicit all-orders expressions in Eqs. (116) and (117). \[\log({\cal Z})=\frac{1}{\epsilon}\sum_{n=0}^{\infty}\frac{2^{n}{\cal C}(n)}{n+ 1}\tilde{g}^{n+1}=\frac{1}{2\epsilon}\left[-\sqrt{1-8\tilde{g}}+\log\left( \sqrt{1-8\tilde{g}}+1\right)+1-\log(2)\right]\,. \tag{112}\] The series in Eq. (108) can also be summed, and converges for any nonzero \(\epsilon\). The answer is given by \[F_{1}^{\rm bare} =2^{\frac{1}{4\epsilon}-\frac{1}{2}}\left(\frac{\sqrt{\tilde{g}} }{\epsilon}\right)^{1-\frac{1}{2\epsilon}}\Gamma\left(\frac{1}{2\epsilon} \right)J_{\frac{1}{2\epsilon}-1}\left(\frac{\sqrt{8}\sqrt{\tilde{g}}}{ \epsilon}\right)\, \tag{113}\] \[(Z\widetilde{\alpha})^{-1}F_{2}^{\rm bare} =2^{\frac{1}{4\epsilon}}\left(\frac{\sqrt{\tilde{g}}}{\epsilon} \right)^{-\frac{1}{2\epsilon}}\Gamma\left(1+\frac{1}{2\epsilon}\right)J_{ \frac{1}{2\epsilon}}\left(\frac{\sqrt{8}\sqrt{\tilde{g}}}{\epsilon}\right). \tag{114}\] Using Eqs. (113) and (114) we can see how renormalization works at all orders in the coupling. We require the \(\epsilon\to 0\) asymptotic behavior of the Bessel functions. The relevant identity is (_cf._ Eq. (10.20.4) of Ref. [94]) \[\lim_{\nu\to\infty}J_{\nu}(\nu z)\sim\frac{\sqrt[4]{\frac{4\zeta(z)}{1-z^{2}} }{\rm Ai}\left(\nu^{2/3}\zeta(z)\right)}{\sqrt[3]{\nu}}\quad{\rm with}\quad \zeta(z)=\left[\frac{3}{2}\left(-\sqrt{1-z^{2}}+\log\left(\sqrt{1-z^{2}}+1 \right)-\log(z)\right)\right]^{2/3}\,, \tag{115}\] where we adopt the notation of Ref. [94] and use \(\sim\) to denote "asymptotic to". Using this identity, Sterling's approximation, and the large argument limit of the Airy function it is straightforward to show that \[(Z\widetilde{\alpha})^{-1}F_{2}^{\rm bare}\sim\left(\frac{1}{1-8\tilde{g}} \right)^{1/4}\ \exp\left[\frac{1}{2\epsilon}\left(\sqrt{1-8\tilde{g}}-\log\left(\sqrt{1-8 \tilde{g}}+1\right)-1+\log(2)\right)\right]\quad{\rm as}\quad\epsilon\to 0\,. \tag{116}\] For \(F_{1}\) it is convenient to introduce \(1/2\epsilon^{\prime}=1-1/2\epsilon\) and \(\tilde{g}^{\prime}=\tilde{g}(1+2\epsilon^{\prime})\). We then find \[F_{1}^{\rm bare}\sim\left(\frac{1}{1-8\tilde{g}^{\prime}}\right)^{1/4}\ \exp\left[\frac{1}{2\epsilon^{\prime}}\left(\sqrt{1-8\tilde{g}^{\prime}}-\log \left(\sqrt{1-8\tilde{g}^{\prime}}+1\right)-1+\log(2)\right)\right]\quad{\rm as }\quad\epsilon\to 0\,. \tag{117}\] Notice that the form of \({\cal Z}\) that we obtained from recognizing the infinite sequence using our perturbative result is precisely what is needed for all-orders renormalization, _cf._ Eqs. (112) and (116). We find \[F_{1}|_{\mu=(re^{\gamma_{\rm E}})^{-1}} =\lim_{\epsilon\to 0}{\cal Z}F_{1}^{\rm bare}=\frac{\sqrt{1-(Z \alpha)^{2}}+1}{2}\bigg{(}\frac{1}{1-(Z\alpha)^{2}}\bigg{)}^{1/4}\, \tag{118}\] \[F_{2}|_{\mu=(re^{\gamma_{\rm E}})^{-1}} =\lim_{\epsilon\to 0}{\cal Z}F_{2}^{\rm bare}=Z\alpha\bigg{(} \frac{1}{1-(Z\alpha)^{2}}\bigg{)}^{1/4}\,. 
\tag{119}\] The \(\mu\) dependence of the renormalized coefficient functions \(F_{i}\) is governed by the anomalous dimension, \[\frac{{\rm d}}{{\rm d}\log\mu}F_{i}=\gamma_{\cal O}F_{i}\,, \tag{120}\] and \(\gamma_{\cal O}\) is determined by the coefficient of \(1/\epsilon\) in the corresponding \(\overline{\rm MS}\) operator renormalization constant: \[{\cal Z}=\sum_{m=0}^{\infty}\frac{1}{\epsilon^{m}}{\cal Z}_{m}\,,\quad\gamma=-2 \alpha\frac{\partial}{\partial\alpha}{\cal Z}_{1}\,. \tag{121}\] Using the explicit form of \({\cal Z}\) we have, to all orders in the coupling, \[{\cal Z}_{1}=\frac{1}{2}\biggl{[}1-\sqrt{1-(Z\alpha)^{2}}+\log\frac{1+\sqrt{1-(Z \alpha)^{2}}}{2}\biggr{]}\,, \tag{122}\] and so taking the derivative, _cf._ Eq. (54), we find \[\gamma_{\cal O}=\sqrt{1-(Z\alpha)^{2}}-1\,. \tag{123}\] Using the solution of Eq. (120) with initial condition Eq. (118), the amplitude (102) after \(\overline{\rm MS}\) renormalization is \[{\cal M}_{\rm UV}^{R}(\mu)=(\mu r{\rm e}^{\gamma_{\rm E}})^{\eta-1}\frac{1+\eta }{2\sqrt{\eta}}\biggl{[}1-\frac{Z\alpha}{1+\eta}\frac{{\rm i}\gamma_{0}\mathbf{\gamma}\cdot{\bf x}}{|{\bf x}|}\biggr{]}\, \tag{124}\] where we have used \(\eta=\sqrt{1-(Z\alpha)^{2}}\).
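As a numerical cross-check of the resummation above, the low-order coefficients quoted in Eq. (111) and the closed form in Eq. (112) can be verified directly. The following is a minimal sketch in plain Python (the value of \(\tilde{g}\) is an arbitrary choice inside the radius of convergence \(8\tilde{g}<1\); it is not tied to any physical input of the paper):

```python
from fractions import Fraction
from math import comb, log, sqrt

def catalan(n: int) -> int:
    # Catalan numbers C(n) = (2n)! / (n! (n+1)!)
    return comb(2 * n, n) // (n + 1)

# Coefficients of g~^(n+1) in the exponent of Z, cf. Eq. (111):
# expected 1, 1, 8/3, 10, 224/5, 224, 8448/7, 6864, 366080/9, 1244672/5
coeffs = [Fraction(2**n * catalan(n), n + 1) for n in range(10)]
print(coeffs)

# Check of the resummation in Eq. (112):
# sum_n 2^n C(n)/(n+1) g~^(n+1)  ==  [ -sqrt(1-8g~) + log(sqrt(1-8g~)+1) + 1 - log 2 ] / 2
g = 0.02  # sample coupling with 8*g < 1 so the series converges
series = sum(2**n * catalan(n) / (n + 1) * g ** (n + 1) for n in range(80))
closed = 0.5 * (-sqrt(1 - 8 * g) + log(sqrt(1 - 8 * g) + 1) + 1 - log(2))
print(series, closed)  # the two numbers agree to machine precision
```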
2309.04824
Correcting sampling biases via importance reweighting for spatial modeling
In machine learning models, the estimation of errors is often complex due to distribution bias, particularly in spatial data such as those found in environmental studies. We introduce an approach based on the ideas of importance sampling to obtain an unbiased estimate of the target error. By accounting for the difference between the target distribution and the distribution of the available data, our method reweights the error at each sample point and neutralizes the shift. The importance sampling technique and kernel density estimation are used for the reweighting. We validate the effectiveness of our approach using artificial data that resemble real-world spatial datasets. Our findings demonstrate the advantages of the proposed approach for the estimation of the target error, offering a solution to the distribution shift problem. The overall error of the predictions dropped from 7% to just 2%, and it decreases further for larger samples.
Boris Prokhorov, Diana Koldasbayeva, Alexey Zaytsev
2023-09-09T15:36:28Z
http://arxiv.org/abs/2309.04824v2
# Correcting sampling biases via importance reweighting for spatial modeling ###### Abstract In machine learning models, the estimation of errors is often complex due to distribution bias, particularly in spatial data such as those found in environmental studies. We introduce an approach based on the ideas of importance sampling to obtain an unbiased estimate of the target error. By taking into account difference between desirable error and available data, our method reweights errors at each sample point and neutralizes the shift. Importance sampling technique and kernel density estimation were used for reweighting. We validate the effectiveness of our approach using artificial data that resemble real-world spatial datasets. Our findings demonstrate advantages of the proposed approach for the estimation of the target error, offering a solution to a distribution shift problem. Overall error of predictions dropped from 7% to just 2% and it gets smaller for larger samples. Keywords:Importance sampling, spatial modeling, Gaussian mixture, model validation. ## 1 Introduction In the rapidly advancing field of machine learning (ML), the accuracy and reliability of models are paramount. However, the path to improving these models faces a major hurdle: the issue of distributional bias. This bias leads to a skewed estimation of errors, consequently obstructing the assessment of the error in ML models. The problem is further magnified when dealing with spatial data, where the relationship between variables can be intricate and non-linear, and available observations are far from (usually uniform) target distribution. We can observe this problem in spatio-temporal data related to measurements at particular points [5, 1] or aggregation of events such as earthquakes [6, 4]. In certain fields, particularly in environmental modeling, this challenge is not merely theoretical but manifests itself in tangible ways [7]. Several research studies illustrate how distribution bias can distort models, resulting in erroneous predictions and potentially misguided decisions [14]. Such examples stress the need for methodologies that can deliver an accurate estimate of the target error, free from the influences of bias. We present a novel approach that seeks to overcome the limitations posed by distribution bias. Our contributions are two-fold: * Unbiased Estimation of Target Error: By employing an approach inspired by importance sampling, we propose an unbiased estimate of the target error. This statistical technique allows us to re-weight the sampled data, effectively eliminating the bias and providing a more accurate representation of the underlying distribution. * Validation Using Artificial Data: To rigorously evaluate our approach, we utilize artificial data that closely mimic real-world spatial data used for ecological modeling. Through this validation, we can assess the efficacy of our proposed method, demonstrating its robustness in accurately estimating the target error even in the presence of complex spatial distributions for available data. ## 2 Related works Spatial sampling techniques have evolved significantly, creating practical datasets for real-world spatial analysis[11, 18, 1]. Our research addresses the challenges of working with arbitrary spatial samples. In this context, error estimation becomes a critical concern, and spatial cross-validation emerges as a key tool. It allows us to assess the predictive accuracy of spatial models across diverse spatial locations. 
Customization of cross-validation is necessary to handle specific data structures effectively. One of these challenges is spatial autocorrelation (SAC), which introduces bias by linking measurements from nearby spatial points. Spatial cross-validation helps mitigate SAC's influence by dividing data into training and validation sets. This division reduces bias and significantly improves the reliability of model results. Comparative studies highlight the superior performance of spatial cross-validation over traditional random cross-validation approaches. For instance, the study [13] demonstrated through simulations and case studies that spatial cross-validation consistently outperforms random cross-validation, particularly when predicting to new data or new predictor space, or when selecting causal predictors. Another study [12] underscored the significance of acknowledging SAC's influence, showcasing that neglecting SAC can lead to overoptimistic assessments of model predictive power and emphasizing the need for accurate ecological mapping on a larger scale. Notably, the study [15] exhibited substantial performance differences, with up to 47 percent disparities, between bias-reduced spatial cross-validation and overoptimistic random cross-validation settings. This further accentuates the imperative to account for SAC's influence in achieving accurate model evaluations. The integration of spatial cross-validation and an awareness of SAC plays a pivotal role in enhancing the credibility and robustness of model assessments across various domains, underlining the necessity of tailored cross-validation techniques when dealing with intricate spatial data structures.

To address distribution bias in spatial modeling, optimizing sampling designs is essential. In a relevant study [18], researchers focused on soil mapping with random forest. They found that minimizing mean squared prediction error (MSE) through optimized sampling significantly enhanced accuracy, particularly for smaller samples. However, this approach relies on known soil values across all locations and is suited for subsampling existing datasets. For larger samples, a uniform spread in feature space is recommended. This highlights the sensitivity of comparing sampling strategies, especially with limited validation data. An alternative approach is to define the range of applicability of a model [10]. However, it requires designing an uncertainty estimation procedure, which is hard to do for all existing classes of machine learning methods, including linear models [3], gradient boosting [9], and neural networks [8]. Originally introduced for the estimation of statistics of complex distributions [2], importance sampling addresses the bias introduced by the data distribution [17], while spatial cross-validation addresses the bias introduced by SAC. Importance sampling principles could be applied to enhance spatial cross-validation by mitigating the bias introduced by spatial dependencies and ensuring more accurate assessments of spatial models. We note that existing methods for fighting distribution bias in error estimation are empirical. Thus, there is no guarantee that they will reduce the error. In contrast, a properly developed principled approach would be a universal solution that does not require tuning hard-to-guess hyperparameter values.
## 3 Methods ### Importance Sampling #### 3.1.1 Target error One can measure an error of a model at point \(x\) \[e(x)=(f(x)-\hat{f}(x))^{2}\] where \(f(x)\) is the true value of a function and \(\hat{f}(x)\) is our estimate. We are interested in the risk estimate \(R(e,p)\) for \(x\sim p(x)\) in an area \(X\subset\mathbb{R}^{d}\): \[R(e,p)=\int_{X}e(x)p(x)dx.\] A natural unbiased estimate of the \(R(e,p)\) is the Monte-Carlo estimate \[\hat{R}(e,p)=\frac{1}{n}\sum_{i=1}^{n}e(x_{i}),x_{i}\sim p(x). \tag{1}\] Here and for all distributions below we consider their constraint to \(X\). #### 3.1.1 Error bias In reality we don't have points \(x_{i}\) from the distribution \(p(x)\). Instead we have points from another distribution \(g(x)\). So, we have the estimate of the form: \[\hat{R}(e,g)=\frac{1}{n}\sum_{i=1}^{n}e(x_{i}),x_{i}\sim g(x).\] An unbiased estimate of \(\hat{R}(e,p)\) is: \[R_{I}(e,p,g)=\frac{1}{n}\sum_{i=1}^{n}e(x_{i})\frac{p(x_{i})}{g(x_{i})},x_{i} \sim g(x). \tag{2}\] The similar idea leads to the importance sampling procedure [2], and it is easy to see that the provided estimate is unbiased. Really, \[\mathbb{E}\frac{1}{n}\sum_{i=1}^{n}e(x_{i})\frac{p(x_{i})}{q(x_{i})}=\int_{X} e(x)\frac{p(x)}{g(x)}g(x)dx=\int_{X}e(x)p(x)dx=R(e,p).\] If \(x\) is low-dimensional (e.g. for spatial data), we can estimate \(g(x)\) from data. As \(p(x)\) we typically consider a uniform distribution. So, we obtain a pretty accurate ratio \(\frac{p(x)}{g(x)}\). Consequently, we get an estimate \(\hat{R}_{I}(e,p,g)\): \[\hat{R}_{I}(e,p,g)=\frac{1}{n}\sum_{i=1}^{n}e(x_{i})\frac{p(x_{i})}{\hat{g}(x _{i})},x_{i}\sim g(x). \tag{3}\] ### Spatial Data #### 3.2.1 GMM Spatial datasets often exhibit complex and multi-modal distributions, where different regions of the space may exhibit different statistical patterns. It is also widely known that spatial observations usually form spikes (for example around cities) and such spikes are well modeled by Bell curves. In order to align with these two concepts, we adopted the following model. Gaussian mixture model (GMM, for details see [2]) is composed of multiple weighted Gaussian curves and can effectively capture multi-modal nature of spatial distribution. By assuming that the dataset is a sample from a mixture of Gaussian distributions, GMM provides a means to represent the underlying spatial patterns and heterogeneity. Having that said, we further consider \(g\) as Gaussian mixture model. #### 3.2.2 Additional restrictions In our study we naturally assume that it is theoretically possible to estimate target error from given dataset. This implies there is no 'holes' with respect to \(p\) - areas with sufficient positive \(p\)-measure and absence of sample points inside it. Experiments ### Configuration For validation of our approach we conducted several numerical experiments. We used uniform distribution on the square \([0;100)\times[0;100)\) as \(p\), as we expect that we are interested in estimation of the error of the uniform distribution of points. For \(g\) we used a Gaussian mixture. It consists of 20 randomly-centred components with each gaussian's covariance matrix chosen far from singular (it's eigenvalues \(\geq 100\)) which assures the property of 'no holes' in the sample. An example of generated GMM data is in Figure 1. We selected these parameters of distribution to match a typical patter of the real data. 
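For reference, the three estimators compared in these experiments — \(MCE\) from Eq. (1), \(ISE\) from Eq. (2), and \(ISE_g\) from Eq. (3) — can be sketched in a few lines. The snippet below is a minimal illustration only: the Gaussian kernel follows the text, but the use of scikit-learn's `KernelDensity` and the bandwidth value are assumptions about tooling, not choices reported in the paper.

```python
import numpy as np
from sklearn.neighbors import KernelDensity

def mce(errors):
    """Eq. (1): naive Monte-Carlo estimate, valid only if the samples follow p."""
    return np.mean(errors)

def ise(errors, p_vals, g_vals):
    """Eq. (2): importance-sampling estimate using the true sampling density g."""
    return np.mean(errors * p_vals / g_vals)

def ise_kde(errors, points, p_vals, bandwidth=5.0):
    """Eq. (3): as ise(), but with g estimated from the sample by Gaussian KDE."""
    kde = KernelDensity(kernel="gaussian", bandwidth=bandwidth).fit(points)
    g_hat = np.exp(kde.score_samples(points))  # score_samples returns log-density
    return np.mean(errors * p_vals / g_hat)

# For the uniform target p on [0, 100) x [0, 100): p(x) = 1 / 100**2 for every x.
```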
Since we are not interested in the accuracy of the model but in the accuracy of error estimate, the true function \(f\) is a random linear or a random mixture of RBF (Gaussian functions). We refer to them as Linear and GMM true functions correspondingly. We consider an estimate \(\hat{f}\) is either another random linear function or a Gradient boosting regression for the GMM case. Number of samples for estimation is 10000. We consider a baseline approach \(MCE\) (Monte Carlo Error) and two variants of our \(ISE\) (Importance Sampling Error) method. \(MCE\) approach correspond to the equation (1) in the assumption that \(p\) and \(g\) are close to each other universally adopted in the literature. Basic \(ISE\) assumes knowledge of the true distribution \(g\), while for \(ISE_{g}\) we estimate the density \(g\) from data using KDE with Gaussian kernel [16]. For \(ISE\) we use the equation (2), and for \(ISE_{g}\) we use the equation (3). ### Validation procedure As a metric for real error estimation we used Mean Absolute Percentage Error (MAPE) and Root Mean Square Error (RMSE) scores relative to the true error based on 100 test-run results. Together they describe both relative and absolute accuracy of predictions. ### Main results Table 1 shows MAPE for estimation of the errors. We see, that our approach for the given experiment provides better estimation of the true error for both Linear and GMM true function. The advantage of it is observed even in the case, when we use an estimation of \(g\) instead of the true density. ### Detailed analysis For detailed analysis we provide deeper investigation on the dependence of the quality of the error estimation for different number of samples and different target functions. Figure 2 reveals that with growth of number of samples \(MCE\) stays biased for about 0.07 in average. Meanwhile \(ISE\) in average gets closer and closer to real error, i.e. it is unbiased. We also can note, that for extremely small sample sizes due to high variance \(ISE\) gives worse estimations. Figures 3 show low variance in the errors, providing strong evidence that Importance Sampling outperforms baseline, even if we use estimation of the density instead of the true density. ### Summary Experiments clearly show that \(ISE\) and \(ISE_{g}\) outperform \(MCE\) under mild theoretical assumptions. We hope that it will also be true for real datasets, especially when sample size is increased. In any case, more research is required to assess applicability of the proposed method. Another uncovered part is the significance of precise error estimation. Experiments show that biased approach \begin{table} \begin{tabular}{c c c c} \hline Function \(MCE\) (baseline) & \(ISE\) & \(ISE_{e}\) \\ \hline Linear & 6.2 & **1.9** & 2 \\ GMM & 3.4 & **1.5** & 2.3 \\ \hline \end{tabular} \end{table} Table 1: MAPE relative to real error in percents. The best value is highlighted in **bold** font. Figure 1: GMM sample example for artificial two dimensional spatial data different by only 7% from the real error in average and impact of such miscalulation on the model training is unknown. Whether this impact is radical or not, one should keep in mind the possibility of distribution shift and subsequent model bias. ## 5 Conclusions We considered the problem of error evaluation for geo-spatial data models and propose a method that provides unbiased estimates. 
By addressing the persistent issue of distribution bias, we open new avenues for improved modeling, particularly in fields where spatial data plays a central role. Furthermore, the principles laid out in this study provide a foundation for future research, encouraging the development of even more refined techniques that can adapt and evolve in the face of ever-changing data landscapes. #### 5.0.1 Acknowledgements This work was supported by the Russian Foundation for Basic Research grant 21-51-12005 NNiO_a. Figure 2: \(ISE\) vs \(MCE\) with different sample sizes
2309.12920
VERITAS follow-up observation of the BL Lac blazar B2 1811+31 2020 Flare
VERITAS is an imaging atmospheric Cherenkov telescope (IACT) array most sensitive to gamma rays in the very-high-energy (VHE) energy band (85 GeV - 30 TeV). As a part of its active galactic nuclei (AGN) program, VERITAS focuses on the identification and follow-up of AGN flares reported by other multiwavelength observatories. Between October 15th and October 19th, 2020, VERITAS followed up on the Fermi-LAT and MAGIC detections of a flare of the intermediate-frequency-peaked BL Lacertae (IBL) object, B2 1811+31, located at a redshift of z=0.117. In this work, we present preliminary scientific results from the analysis of B2 1811+31's 2020 flare, including the corresponding Fermi-LAT light curve and VERITAS detection analysis.
Pablo Drake, Colin Adams
2023-09-22T15:19:46Z
http://arxiv.org/abs/2309.12920v1
# VERITAS follow-up observation of the BL Lac blazar B2 1811+31 2020 Flare ###### Abstract: VERITAS is an imaging atmospheric Cherenkov telescope (IACT) array most sensitive to gamma rays in the very-high-energy (VHE) energy band (85 GeV - 30 TeV). As a part of its active galactic nuclei (AGN) program, VERITAS focuses on the identification and follow-up of AGN flares reported by other multiwavelength observatories. Between October 15th and October 19th, 2020, VERITAS followed up on the _Fermi_-LAT and MAGIC detections of a flare of the intermediate-frequency-peaked BL Lacertae (IBL) object, B2 1811+31, located at a redshift of z=0.117. In this work, we present preliminary scientific results from the analysis of B2 1811+31's 2020 flare, including the corresponding _Fermi_-LAT light curve and VERITAS detection analysis. ## 1 Introduction On 1 October 2020, the Large Area Telescope (LAT), one of the two instruments on the _Fermi_ Gamma-ray Space Telescope, measured an 11-factor flux increase in the daily averaged gamma-ray flux (E>100 MeV) of 4FGL J1813.5+3144 (referred to in this work as B2 1811+31), relative to the average flux reported in the fourth _Fermi_-LAT catalog (4FGL). This event was sent out as an alert [1] and prompted a multi-wavelength campaign from the optical band [2] to very high energy (VHE, E > 100 GeV) gamma-rays. In fact, follow-up observations led to the first detection of this blazar in VHE by the MAGIC telescopes [3], reported on October 13th, 2020. Two days later, VERITAS, a ground-based gamma-ray detector sensitive to photons in the VHE, 85 GeV - 30 TeV range, started a 5-night campaign that observed the source from October 15th to October 19th [4]. It resulted in a preliminary 7\(\sigma\) detection, that, after our updated analysis, amounted to an 8.5\(\sigma\) detection of B2 1811+31 with 4.35 hours of observations. In this work, we characterize the evolution of the B2 1811+31 2020 flare with _Fermi_-LAT to understand the parallel evolution of the source in HE and VHE wavelengths. We contextualize VERITAS's detection of the source within the longer evolution of the flare light curve. ## 2 Observations ### VERITAS VERITAS [5], the Very Energetic Radiation Imaging Telescope Array System, is a ground-based gamma-ray detector located at the Fred Lawrence Whipple Observatory (FLWO) in southern Arizona. The VERITAS array comprises four 12-meter imaging atmospheric Cherenkov telescopes. Each telescope has a Davies-Cotton-design segmented mirror dish with 345 facets, and each dish is equipped with a 499 PMT camera, with a total field of view of 3.5\({}^{\circ}\). The 68% containment radius for a 1 TeV photon is < 0.1\({}^{\circ}\), and the pointing accuracy is < 50\({}^{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{ \rm{\rm{\rm{ \rm{ }}}}}}}}}}}}}}}}\). In its current configuration, VERITAS provides a 5\(\sigma\) detection of a source with flux 1% that of the Crab Nebula in about 25 hours of observations [6]. VERITAS data has been analyzed with the standard VERITAS software VEGAS [7]. VERITAS devotes around half of its observation time to detect, follow up on, and monitor AGN sources [8]. VERITAS's AGN program thus allocates about 600 hours of good-weather time, each year to this task. One of the main focuses of the AGN program is the discovery and follow up observations of new VHE sources, comprising \(\sim\)40% of AGN observation time. Most of these correspond to Target of Opportunity (ToO) observations triggered by any other multi-wavelength partner. 
The 2020 VERITAS B2 1811+31 monitoring is a relevant example of a successful ToO observation campaign. After a HE detection of enhanced activity by _Fermi_-LAT and a 6\(\sigma\) VHE detection by MAGIC, VERITAS started observing the source on October 15th, 2020. These observations spanned four consecutive nights, adding up to 4.5 good-quality hours. Measurements were performed using the standard "wobble" observation mode, with a 0.5\({}^{\circ}\)offset [9]. Our analysis of these observations yielded an 8.5\(\sigma\) detection with an integral flux above the energy threshold of 200 GeV of \((1.74\pm 0.36)\times 10^{-11}\) cm\({}^{-2}\) s\({}^{-1}\). Figure 1 shows the corresponding significance map for B2 1811+31 during the 2020 flare. ### Fermi-LAT _Fermi_-LAT is a large-area pair-conversion telescope aboard the _Fermi_ Gamma-ray Space Telescope. _Fermi_-LAT is sensitive to photons within the energy range from 20 MeV to 300 GeV. _Fermi_-LAT's main operational status is its survey mode, in which the LAT completes a comprehensive survey of the entire sky every 3 hours. Our analysis of data obtained by _Fermi_-LAT was carried out using the fermipy[10] Python package (version 1.1.4). We carried out two different temporal analyses of B2 1811+31: one that spanned the whole 12 years of LAT data (December 2008-December 2022), and one that focused on 2020 (January 2020-February 2021). Both analyses were carried out with a region of interest of 15\({}^{\circ}\) around the source, considering "source" class events (evclass=128) from both the front and back (evtype=3), and with energies between 100 MeV and 300 GeV. Binned likelihood analyses were performed adopting the 4FGL catalog [11] specifications for sources in the region of interest. Our fit freed the spectral parameters of all sources within 5\({}^{\circ}\) of B2 1811+31, and of all sources with TS\(\geq\)5 in the region of interest. The normalization of the isotropic and galactic diffuse components were also fit as free parameters. B2 1811+31 was significantly detected by _Fermi_-LAT in the full dataset analysis, with a TS of 1603.36 (\(\sqrt{TS}\sim\sigma\)), assuming a power-law model. Similarly, for the 2020 analysis, a TS of 1767.12 was found. ## 3 Light Curve Analysis Light curves were computed for both temporal analyses of _Fermi_-LAT data, in order to identify a flaring period, and characterize the time evolution of said flare. For the light curve that employed Figure 1: Smoothed significance sky map for the region of interest. The white circles indicate regions excluded from the background estimation, corresponding to known sources or bright stars. B2 1811+31's full dataset, we computed 14-day time bins, 359 in total. In the case of the 2020 light curve, we employed 2-day time bins, amounting to 201 bins in total. All bins in both light curves were subjected to a validation technique, with the intention of removing those bins that hadn't properly converged. We first represented the ratio \(\frac{Flux}{\Delta Flux}\) versus the significance (\(\sqrt{|TS|}\)) for each light curve bin. We expect a proportional relation between the ratio \(\frac{Flux}{\Delta Flux}\) and the significance [[13]]. Individual data points that deviated from said proportional relation were identified as having TS\(<\)0.01. Most of these bins had negative TS, a further indication of a fit convergence problem. All these flagged bins were removed from the light curve. 
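A minimal sketch of this bin-validation step is given below (plain Python over per-bin arrays; the array names are illustrative, not taken from the analysis code):

```python
import numpy as np

def validate_bins(flux, flux_err, ts, ts_min=0.01):
    """Keep only light-curve bins whose likelihood fit converged (TS >= 0.01).

    Bins with TS < 0.01 (most of them with negative TS) are the ones that fall
    off the expected proportionality between flux/flux_err and sqrt(|TS|).
    """
    good = ts >= ts_min
    return flux[good], flux_err[good], ts[good]
```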
The remaining bins were analyzed using a Bayesian Blocks statistical method, setting the false positive rate \(p_{0}\) to 0.0027 (the value equivalent to 5\(\sigma\) using Equation 13 of Scargle et al. (2013)) [12]. We defined flares in two complementary ways, following the practice in Valverde (2020) [13]. In our first method, the data were first recursively fit to a constant function, initially the mean flux of all light curve data points. We defined the quiescent state as points that did not deviate from the mean flux more than 3\(\sigma\), following equation 5.1 in Valverde (2020) [13]. We then recursively used the mean flux of points in the quiescent state as our constant function. We repeated this process until the quiescent state set of light curve points and the outlier set of points were fixed. If three consecutive bins were found in the outlier set, we considered that a flux. Our second method made use of the Bayesian Block analysis. For this method, we integrated the flare selection technique presented in Yoshida (2022) [14]. We defined the quiescent flux as the Bayesian block with lowest flux that contained more data points that the mean number of data points per block. We established the flare threshold flux as the quiescent flux plus five times the mean flux uncertainty for all light curve bins (equation 1 in [14]). All Bayesian blocks whose mean flux is above this threshold are considered flaring states. When applying both of these methods to the full dataset light curve, represented in Figure 2, we find a complete agreement in terms of flare definition. Our second method identifies a flare spanning the third and four Bayesian Blocks, while our first method identifies all points in those Figure 2: _Fermi_-LAT B2 1811+31 12-year full dataset light curve, with 14-day bins. Orange arrows represent 95% Upper Limits, for bins whose TS\(<\)4. Bins with TS\(<\)0.01 were excluded from the analysis. Black lines represent Bayesian Block fluxes, with gray shading marking the 1 standard deviation interval. two blocks, except for a single bin, as outliers, in flaring levels of flux. The fifth block is rejected from being part of the flare, but the first method finds that one out of the two bins comprised by this block is at abnormally high flux levels, showing a diffuse boundary regarding the end of the flare. An intention of better defining the flare limits prompted us to carry out a more detailed analysis of the 2020 flare with finer binning. Looking at Figure 3, we can identify a complex evolution within the 2020 flare, with no one clear exponential rise and decay. Applying our first method, we could define as the sole flare within this period the first five bins in the fifth Bayesian block, that are contained within the period in which VERITAS detected the source (shaded in red). However, no individual Bayesian block is detected as flaring following our second method. During various periods of elevated flux emissions (second, fourth and fifth Bayesian blocks in Figure 3), there are repeated instances of flux maxima occurring in isolated bins. This phenomenon points at a significant flux variability not being captured by the Bayesian Block analysis. A potential explanation for this discrepancy is that said variability could be occurring at daily timescales, sporadically throughout the six month flaring period. A closer inspection of these moments of extreme variability could inform us of mechanisms behind this flaring pattern. 
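For clarity, the two flare-selection procedures can be sketched as follows. The snippet is an illustration of our reading of the criteria in Valverde (2020) [13] and Yoshida (2022) [14], not the exact analysis code; the block edges themselves can be obtained, for example, with astropy's Bayesian Blocks implementation using the "measures" fitness and \(p_{0}=0.0027\).

```python
import numpy as np

def quiescent_outlier_split(flux, flux_err, n_sigma=3.0):
    """Method 1: iteratively separate quiescent bins from outliers.

    Start from the mean of all bins, flag bins deviating by more than
    n_sigma * flux_err, recompute the mean from the quiescent bins only,
    and repeat until the two sets no longer change. A flare is then
    identified as (at least) three consecutive bins in the outlier set.
    """
    quiescent = np.ones(len(flux), dtype=bool)
    while True:
        mean_flux = flux[quiescent].mean()
        new_quiescent = np.abs(flux - mean_flux) <= n_sigma * flux_err
        if np.array_equal(new_quiescent, quiescent):
            return quiescent, ~quiescent
        quiescent = new_quiescent

def bayesian_block_flares(block_fluxes, block_npoints, mean_flux_err):
    """Method 2: flag Bayesian Blocks above quiescent flux + 5 * mean flux uncertainty.

    The quiescent flux is the lowest-flux block containing more data points
    than the average number of points per block.
    Block edges can be computed with, e.g.,
    astropy.stats.bayesian_blocks(t, flux, flux_err, fitness='measures', p0=0.0027).
    """
    eligible = block_npoints > np.mean(block_npoints)
    quiescent_flux = block_fluxes[eligible].min()
    threshold = quiescent_flux + 5.0 * mean_flux_err
    return block_fluxes > threshold
```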
## 4 Conclusions This work presents an analysis of the October 2020 VERITAS observations of B2 1811+31, along with a larger summary of the source's flaring period. The analysis of HE _Fermi_-LAT frequencies show an interesting flare evolution that deviates from longer exponential rise and decay patterns. Instead, day-scale variability is observed, superimposed on the longer flare trends represented by the Bayesian Block analysis. In this context, it is especially interesting to note that the VERITAS detection of the source took place during a time of elevated flux in _Fermi_ frequencies, confirming the multiwavelength nature of the detected VHE flare. A further analysis of the source, Figure 3: _Fermi_-LAT B2 1811+31 2020 light curve, with 2-day bins. Orange arrows represent 95% Upper Limits, for bins whose TS\(<\)4. Bins with TS\(<\)0.01 were excluded from the analysis. Black lines represent Bayesian Block fluxes, with gray shading marking the 1 Standard Deviation interval. Vertical light red shading indicates the time of VERITAS observations. tracing the development of the flare in different frequencies could help explain both its unusual variability and the presence of VHE emission. Such an analysis, along with a multiwavelength SED, will be presented in an upcoming publication. ## Acknowledgments This research is supported by grants from the U.S. Department of Energy Office of Science, the U.S. National Science Foundation and the Smithsonian Institution, by NSERC in Canada, and by the Helmholtz Association in Germany. This research used resources provided by the Open Science Grid, which is supported by the National Science Foundation and the U.S. Department of Energy's Office of Science, and resources of the National Energy Research Scientific Computing Center (NERSC), a U.S. Department of Energy Office of Science User Facility operated under Contract No. DE-AC02-05CH11231. We acknowledge the excellent work of the technical support staff at the Fred Lawrence Whipple Observatory and at the collaborating institutions in the construction and operation of the instrument.
2309.14389
Analyzing the Efficacy of an LLM-Only Approach for Image-based Document Question Answering
Recent document question answering models consist of two key components: the vision encoder, which captures layout and visual elements in images, and a Large Language Model (LLM) that helps contextualize questions to the image and supplements them with external world knowledge to generate accurate answers. However, the relative contributions of the vision encoder and the language model in these tasks remain unclear. This is especially interesting given the effectiveness of instruction-tuned LLMs, which exhibit remarkable adaptability to new tasks. To this end, we explore the following aspects in this work: (1) The efficacy of an LLM-only approach on document question answering tasks (2) strategies for serializing textual information within document images and feeding it directly to an instruction-tuned LLM, thus bypassing the need for an explicit vision encoder (3) thorough quantitative analysis on the feasibility of such an approach. Our comprehensive analysis encompasses six diverse benchmark datasets, utilizing LLMs of varying scales. Our findings reveal that a strategy exclusively reliant on the LLM yields results that are on par with or closely approach state-of-the-art performance across a range of datasets. We posit that this evaluation framework will serve as a guiding resource for selecting appropriate datasets for future research endeavors that emphasize the fundamental importance of layout and image content information.
Nidhi Hegde, Sujoy Paul, Gagan Madan, Gaurav Aggarwal
2023-09-25T07:01:16Z
http://arxiv.org/abs/2309.14389v1
# Analyzing the Efficacy of an LLM-Only Approach for Image-based Document Question Answering ###### Abstract Recent document question answering models consist of two key components: the vision encoder, which captures layout and visual elements in images, and a Large Language Model (LLM) that helps contextualize questions to the image and supplements them with external world knowledge to generate accurate answers. However, the relative contributions of the vision encoder and the language model in these tasks remain unclear. This is especially interesting given the effectiveness of instruction-tuned LLMs, which exhibit remarkable adaptability to new tasks. To this end, we explore the following aspects in this work: (1) The efficacy of an LLM-only approach on document question answering tasks (2) strategies for serializing textual information within document images and feeding it directly to an instruction-tuned LLM, thus bypassing the need for an explicit vision encoder (3) thorough quantitative analysis on the feasibility of such an approach. Our comprehensive analysis encompasses six diverse benchmark datasets, utilizing LLMs of varying scales. Our findings reveal that a strategy exclusively reliant on the LLM yields results that are on par with or closely approach state-of-the-art performance across a range of datasets. We posit that this evaluation framework will serve as a guiding resource for selecting appropriate datasets for future research endeavors that emphasize the fundamental importance of layout and image content information. ## 1 Introduction Document Question Answering (DQA) [15, 23] is the task of answering questions on images like receipts, papers, forms, charts, or even natural images with textual information in the scene. This essentially requires the model to develop an understanding of multiple modalities: (1) extracting useful features from raw images (2) extracting layout information (3) combining it with world knowledge to answer questions. A lot of recent vision-language models focus on training large end-to-end models from scratch [2, 13, 17, 28] aiming to utilize all the underlying modalities. These models typically consist of three kinds of architectures: encoder-decoder [18, 25], encoder-only [9] and decoder-only [4, 6]. The input to these models, i.e., the image and the question, is processed in three ways - (1) raw image features are extracted using 2D patch based tokens retaining the layout information (2) text is detected using optical character recognition (OCR) [19, 10], and then encoded as tokens with spatial position information (3) the Figure 1: (a) **Reading Order** is an interesting way of serializing text information. An example of reading order can be observed by following the trajectory of the green line. (b) Analyzing the performance w.r.t. SOTA of a language-only model with only reading ordered text input. question is decomposed into tokens using purely language based tokenizers. These networks are typically pre-trained individually or end-to-end with self-supervised or weakly-supervised losses, and then finally finetuned for the end task of DQA on labeled training data. While recent advancements in document-image question answering have relied on combining multiple modalities, the impact of the individual components is unclear. For example, it is difficult to say whether a newly proposed method improves due to a better vision encoder, a stronger language model with better reasoning capabilities, or both. 
In some of these tasks, the language model's knowledge may be enough to answer a question, even without the image content or layout, while in others, the image content and/or layout could be truly useful. In this work, we analyse the task of document image question answering from an LLM-only perspective, without encoding the visual components or any layout information. As humans, we read text entities in images serially, then use the layout information to correlate these entities, constructing a readable sequence of text in a specific order, called the "reading order". An example of reading order is shown in Figure 0(a). Recent advances in reading order prediction on documents [29, 31] provide a way to obtain this serialized text from 2D images. This text can be fed to text-only large language models, such as Flan models [7], which have been instruction-tuned to solve text-only question-answering tasks. Such a setup does not involve any vision encoder or position information other than the reading order information. We show that this approach can perform well, coming close to the state-of-the-art methods that explicitly encode visual information (Figure 0(b)). This suggests that language models can be effective for DQA, even when access to visual information is limited. In this work, we study the challenging problem of DQA by viewing it purely as a language modeling task. Specifically, we explore how well can pre-trained LLM's answer questions about images, using only the text in the image serialized using the reading order. We perform ablation studies, qualitative and quantitative analysis to understand when text-only language models work well and when they need more information from a vision encoder. Our results suggest that text-only language models can be effective for DQA, but they may need more information for certain types of tasks. Our work can serve as a good yardstick for future research on DQA, to decide whether a task can be solved purely through text-based methods or if it needs additional image and layout information. Note that the goal of this work is not propose an algorithm without a vision encoder that works well for all tasks and question types, nor is it to reduce the computation cost of these visual-language models (VLM's). Rather the goal is to analyze how well does tokenizing the textual information in an image using only reading order along with pre-trained language models, helps in solving question answering tasks. Overall, our contributions are as follows: 1. We model document question answering as a pure language modeling task, with images tokenized to text using reading order. 2. We study how well language models can perform on document question answering tasks without using a vision encoder. 3. We perform quantitative analysis correlating the performance of the proposed model with factors, like the reading order perplexity, task lengths, answer contents, _etc._, to identify the principles guiding the success or failure of language-only models. ## 2 Related Work We categorize the prior works in this domain into two categories - OCR-based methods and OCR-free methods. **OCR-based methods** typically entail two forms of input - the raw image and text tokens, which are obtained via Optical Character Recognition (OCR) along with their respective positions. Models such as LayoutLM [13], LaTR [2], FormNet [16], DocFormer [1], UDOP [28], and M4C [12] are classified within this category. Methods, such as M4C, also incorporate additional inputs like detected objects. 
Notably, OCR-Based Methods encompass strategies for understanding both scene-text and documents. The majority of these methods employ complex techniques to fuse text and image tokens in order to enhance learning capabilities. For instance, in the UDOP model [28], each detected text token is coupled with the corresponding image token prior to being sent to the transformer encoder. The FormNet model [16] introduces an extra GCN layer to the input text tokens before transmitting them to the transformer layer. **OCR-Free Methods**: These strategies do not utilize any explicit OCR data, instead, they solely depend on the visual information in the image and let the network learn the text information implicitly. Often they build over a pre-training step that makes it easy for the network to learn text information. The DONUT model [15] presents an OCR-free Visual Document Understanding (VDU) model which is grounded in a transformer architecture. It employs reading order prediction as a pre-training task. The Pix2Struct model [17] incorporates the question as part of the image by appending it to the image header. As a pre-training objective, it learns to predict HTML which can render the image. Despite its high performance on various downstream document understanding tasks, it falls short on tasks that are text-heavy. The PaLI model [5] is reliant on an encoder-decoder architecture, utilizing a ViT [11] based encoder to encode the visual components in the image. It is also trained to predict text tokens in the image. A drawback of OCR-free approaches is that they usually require the vision encoder to operate at a significantly higher resolution (to be able to "read" the text), which can substantially increase the computational cost. In summary, contemporary document understanding methods either rely solely on visual data or a combination of visual features and text (OCR). In our work, we explore the capabilities and limitations of text-only models to perform Visual Question Answering (VQA) on documents using solely the textual context obtained from an OCR pipeline, given in a specific order. ## 3 Modeling DocumentQA as an LLM-only Task In this section, we discuss the design of our experimental setup: the process of tokenizing 2D information into a 1D token stream, and converting a multi-modal task into a text-only language modeling task. **Background.** Typical DocumentQA frameworks capture layout information in document images by either tokenizing raw images into patches using a ViT-style architecture and/or detecting words using OCR, which are then sent as tokens with spatial position embeddings. These two tokenization schemes have proven effective for solving document QA tasks. However, our primary focus in this paper is to evaluate the performance of a large language model in solving the task without using any image or absolute position information. **Reading Order.** The reading order of text in documents, refers to the organization of text in an image into a coherent and meaningful 1-dimensional sequence. Optical character recognition (OCR) engines typically identify regions of text and determine the correct input text order based on the layout analysis. The challenge of detecting the correct reading order varies greatly depending on the complexity of the input text layout [29]. Existing reading order extraction models [31, 8] in the literature are generally language agnostic, relying on layout information rather than content features to predict the order. 
These models are trained on curated data, such as books where the reading order is well-defined, or manually annotated data from in-the-wild images. In our work, we use reading order to convert data in 2D images into a token stream by concatenating the text. Although the reading order may not be perfect in complex documents, our analysis demonstrates that this method works reasonably well for a large number of datasets used in the literature. Moreover, we show that domain knowledge about the dataset can be incorporated to improve the reading order using rule-based heuristics, leading to better QA task performance. **Visual QA as a text-based task.** We design document image question-answering as a purely language modeling task. We obtain the text in the reading order using [29] and concatenate them to form a string \(\mathcal{C}\), which is then sent to a large language model for QA. We refer to this textual representation as the "OCR context" throughout the paper. Using this context, we prompt the large-language model with a template that includes the OCR context, question, and a placeholder for the answer: \[\text{Context: }\mathcal{C},\text{Question: }\mathcal{Q},\text{Answer: }\underline{\ \ **InfographicsVQA**[22] is a dataset of infographic documents that include text, visual and graphical information with questions from tables, figures and visualizations. Questions often involve combining multiple cues, making this one of the more challenging datasets for document understanding. The metric used for this dataset is Average Normalized Levenshtein Score (ANLS). **ChartQA**[21] is a dataset of questions on visual representations of tabular data (like bar charts, pie diagrams, _etc._). The metric used for this dataset is relaxed accuracy, i.e., exact match but tolerating 5% of numerical error. **AI2D**[14] consists of multiple choice questions on illustrative diagrams spanning various scientific domains. The metric used for this dataset is exact match accuracy. ### Models and Implementation Details We evaluate the performance of four Flan-T5 [7] model sizes - base (B) with 250M parameters, large (L) with 780M parameters, XL with 3B parameters and XXL with 11B parameters. The B, L and XL variants are fine-tuned for 200K steps with a batch size of \(512\) and the XXL variant is fine-tuned with a batch size of \(128\) for 400K steps. The input sequence length is set to \(1024\) for most datasets, except for OCR-VQA, which has an input length of \(128\). The target sequence length is set to \(32\) for all datasets. These sequence length calculations are based on token length analysis of the respective datasets. A learning rate of \(0.001\) and a weight decay of \(0.1\) are used throughout the fine-tuning process. Also note that no explicit hyperparameter tuning is conducted for any of the reported results. All experiments are done in the T5X framework [26] on Google Cloud TPUs. ### Zero-Shot DocumentQA Given the Flan-T5 models have been instruction-tuned on a collection of tasks, they generally perform well on new tasks. We first analyze their zero-shot performance across all datasets and model sizes, with results shown in Table 2. We see a clear trend of increasing performance with larger model sizes. It is important to note that all state-of-the-art (SOTA) models have been fine-tuned using either visual or layout information, or both. However, the zero-shot performance of FlanT5-XXL is reasonably good, considering it lacks a vision encoder and has not been fine-tuned on the dataset. 
For instance, we see that the zero-shot performance for InfoVQA is just \(4.2\%\) behind the finetuned performance of Pix2Struct-B (refer Table 3), and is not far behind UDOP, the SOTA model, with only a \(13.4\%\) difference. Better prompting techniques than Eqn 1 could potentially further enhance performance, as has been the case with similar large language models (LLMs) in the literature. \begin{table} \begin{tabular}{l c c c|c c c} \hline \hline & \multicolumn{3}{c|}{No. of samples} & \multicolumn{3}{c}{Median no. of characters} \\ & Train & Val & Test & Text & Question & Answer \\ \hline OCR-VQA [24] & 801k & 100k & 100k & 115 & 31 & 12 \\ DocVQA [23] & 39.5k & 5.3k & 5.1k & 1055 & 41 & 10 \\ InfoVQA [22] & 23.9k & 2.8k & 3.2k & 1567 & 65 & 5 \\ TextVQA [27] & 34.6k & 5k & 5.7k & 71 & 33 & 7 \\ ChartQA [21] & 28.3k & 1.9k & 2.5k & 279 & 64 & 4 \\ AI2D [14] & 12.3k & 120 & 3k & 100 & 47 & 7 \\ \hline \hline \end{tabular} \end{table} Table 1: Details of datasets used in this paper for experiments. Figure 2: **Using LLMs for Document Question Answering on Images.** We first extract text from the image using an off-the-shelf OCR engine along with the reading order [29], i.e., the order a human is likely to read the text in the image. This is then serialized to a string, and sent to the encoder along with the question. The answer is then obtained from the output of the decoder. Note that we use FlanT5 [7] text-to-text model, which has been instruction tuned on a variety of tasks. The green line segments on the input images denote the reading order into which the text in the image is serialized and fed to the encoder. ### Finetuning for DocumentQA In this analysis, we finetune the pre-trained FlanT5 models on the training set of the individual datasets, with details provided in Section 4.2. We present the results in Table 3. As expected, the fine-tuned models yield significantly better performance than zero-shot performance. In some cases such as OCR-VQA and InfoVQA, the finetuned model performs very close to the SOTA methods. For OCR-VQA, it reaches a performance of \(70.5\%\), only \(0.8\%\) behind the SOTA, and for InfoVQA, it falls behind by \(2\%\). In most cases, this LLM only approach outperforms recent baselines from the literature that utilize a vision encoder. For instance, it surpasses LaTR-L by \(2.5\%\) on TextVQA and Pix2Struct-L by \(5.2\%\) on InfoVQA, MATCHA by \(25.6\%\) on AI2D, and so on. This suggests that a lot of questions in these datasets can be answered by an LLM-only model, without any visual inputs or detailed layout information. Hence, a strong language model can do a good job in understanding the question, retrieving information from world knowledge and combining it with the context to come up with an answer. We also observe a consistent performance improvement as we go from FlanT5-B to FlanT5-XXL which suggests that larger models tend to perform better for such tasks. It is also interesting to note that for an easier task, such as OCR-VQA, the performance improvement we observe by scaling the language model is much smaller than a complex task such as InfoVQA. Overall, the reading order + LLM scheme falls short only by a few percentages compared to state-of-the-art methods in literature, all of which include a vision encoder. This prompts us to investigate the factors behind the cases where an LLM-only scheme fails for DocumentQA and whether it is possible to overcome these limitations. 
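To make the setup concrete, the whole LLM-only pipeline of Eqn 1 reduces to serializing the OCR words in reading order and prompting a text-to-text model. The sketch below is a minimal zero-shot illustration using the public google/flan-t5-base checkpoint through Hugging Face transformers; this is a tooling assumption, since the experiments reported here were run in the T5X framework on TPUs.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-base")
model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-base")

def answer(ocr_words_in_reading_order, question, max_new_tokens=32):
    """Serialize OCR text in reading order and prompt the LLM (Eqn 1)."""
    context = " ".join(ocr_words_in_reading_order)
    prompt = f"Context: {context} Question: {question} Answer:"
    inputs = tokenizer(prompt, return_tensors="pt", truncation=True, max_length=1024)
    output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

# Example usage with words already ordered by a reading-order model such as [29]:
# answer(["Total", "Amount", "$", "42.50"], "What is the total amount?")
```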
We identify three key factors that seem to play a key a role in determining the effectiveness of an LLM-only setup for DocumentQA tasks: (1) **Reading order quality**, (2) **OCR Context Length**, and (3) **Presence of answer in the image text**. In the subsequent sections, we examine these factors through empirical studies to demonstrate how these factors correlate with the performance. ### Reading Order Quality A good reading order is crucial to obtain the right context \(\mathcal{C}\) in the scheme outlined in Eqn 1. While it may be hard to design a universally optimal strategy to get reading orders that work well for different tasks, it is possible to design task-specific heuristics to improve a given reading order. With this in mind, we come up with a basic reading order strategy and test it on DocVQA. The basic reading order strategy can be explained as follows: OCR provides a set of bounding boxes where the text appears in the image. We begin with the uppermost and left \begin{table} \begin{tabular}{l c c c c c c c} \hline \hline Model & Params & OCR-VQA [24] & DocVQA [23] & InfoVQA [22] & TextVQA [27] & ChartQA [21] & AI2D [14] \\ \hline \hline \multirow{2}{*}{FanT5-B} & 250M & 34.2 & 33.6 & 14.1 & 32.5 & 10.5 & 22.8 \\ & 780M & 36.0 & 35.1 & 21.3 & 36.7 & 12.1 & 24.0 \\ FlanT5-XL & 3B & 38.3 & 48.4 & 27.3 & 42.3 & 14.8 & 41.3 \\ FlanT5-XXL & 11B & 42.4 & 54.1 & 34.0 & 44.0 & 20.5 & 50.8 \\ \hline \hline \end{tabular} \end{table} Table 2: **Zero-shot** performance of FlanT5 models for document QA with reading order input \begin{table} \begin{tabular}{l l c c c c c} \hline \hline Model & Oracle-VQA [24] & DocVQA [23] & InfoVQA [22] & TextVQA [27] & ChartQA [21] & AI2D [14] \\ \hline \multirow{6}{*}{FanT5-L} & LaTR-B [2] & 67.5 & - & - & 59.5 & - & - \\ & LaTR-L [2] & - & - & - & 61.1 & - & - \\ & Layout LM3V [13] & - & 83.4 & - & - & - & - \\ & DONUT [15] & - & 67.5 & - & - & - & - \\ & Pix2Struct-B [17] & 69.4 & 72.1 & 38.2 & - & 56.0 & 40.9 \\ & Pix2Struct-L [17] & **71.3** & 76.6 & 40.0 & - & 58.6 & 42.1 \\ & UDP [28] & - & **84.7** & **47.4** & - & - & - \\ & PAL-1-3B [5] & - & - & - & 60.1 & - & - \\ & PAL-17B [5] & - & - & - & **71.8** & - & - \\ & MATCHA [20] & 68.9 & 74.2 & 37.2 & - & **64.2** & 42.6 \\ & DisAv [30] & - & - & - & - & - & **76.2** \\ \hline \hline \multirow{6}{*}{FanT5-B (250M)} & FlanT5-B (250M) & 67.2 & 69.2 & 31.7 & 54.5 & 45.8 & 58.7 \\ & FlanT5-L (780M) & 68.5 & 73.6 & 37.4 & 58.4 & 50.7 & 64.4 \\ \cline{1-1} & FlanT5-XL (3B) & 69.6 & 77.2 & 42.2 & 61.4 & 52.2 & 68.0 \\ \cline{1-1} & FlanT5-XXL (11B) & **70.5** & **78.5** & **45.2** & **63.6** & **54.2** & **68.2** \\ \hline \hline \end{tabular} \end{table} Table 3: **Finetuned** results on six datasets and comparisons with the state-of-the-art results from literature. All the methods in the top portion use a vision encoder and/or position information of the text in the images, whereas the bottom section only use a sequence of reading ordered text as input. The best results in each group are highlighted in bold. most bounding box. Then, we proceed to the next bounding box along the width, and get all the bounding boxes whose centroids fall within a specific threshold from the current one. Due to similarity of this technique to raster scanning of 2D images, we label this strategy as _raster scan_. An example of the same if shown in Figure 4. 
Using this basic strategy, we observe a significant improvement in performance, indicating the importance of a proper reading order for document understanding tasks (Figure 2(a)). It is important to note that such a basic strategy may not be as effective for other kinds of documents. The goal of this experiment is just to demonstrate the effect of an appropriately chosen/crafted reading order to enhance performance. If the documents involved are somewhat homogeneous in nature, coming up with an appropriate reading order heuristic may not be too hard. As an ablation, we shuffle the reading order and provide the shuffled text as input to the LLM, which is then fine-tuned on the shuffled reading order text. As shown in (Figure 2(b)), the performance declines considerably, which is not unexpected. This further highlights the significance of reading order for the proposed scheme of pure text-based modeling without a vision encoder. For completeness, we compare different choices of reading orders across different model sizes (Figure 2(a)), and different datasets (Figure 2(b)). The standard reading order refers to [29], which is what we use for most of the experiments unless otherwise mentioned. We also observe that the effect of reading order importance increases significantly with larger context lengths. This is evident from Figure 4(c) where we observe that the difference in performance between standard and random reading order is significantly higher for datasets with longer context lengths. From these experiments and observations, we can conclude that while layout information may be important, using a simple ordering based information (reading order) can be enough to answer a major chunk of the questions in these datasets. So ideally, to understand if an algorithm is able to extract better layout information from documents, reading order can act as a strong baseline. ### Effect of OCR Context Length In DocumentQA tasks, the model must derive reasoning from the content of the image, which includes both text and the image itself. As the density of text in the image increases, it becomes more challenging for the model to extract the answer. We compare the median lengths of the text tokens in images for the set of correctly and incorrectly predicted answers to empirically verify this hypothesis. We plot these statistics across different datasets in Figure 4(a). To display the numbers on the same plot, we normalize them with the median length of tokens for the entire dataset. The results, in Figure 4(a), indicate that the model makes more errors when the text length in the image is longer. For InfoVQA, note that the median length of incorrect answers is significantly higher than the set of correct answers in comparison to other datasets. For datasets such as TextVQA and AI2D, the difference in context length between incorrect and correct answers is not high indicating that may not be a factor behind the correctness. This is intuitive as in these datasets, there are a lot of questions which need visual understanding that goes beyond the text content of the image, such as the visual characteristics of objects in the image, their relative positions and so on. ### What type of questions cannot be answered? By design, our method can only answer questions if the text content of the image contains the answer, or questions which can be answered using only world knowledge. 
### What type of questions cannot be answered?

By design, our method can only answer questions for which the text content of the image contains the answer, or questions that can be answered using only world knowledge. Therefore, we develop a metric to determine the percentage of questions that can be answered using only text, i.e., what fraction of the answers lies within the serialized text for a given example. Based on whether our model correctly or incorrectly predicts the answer, we average over the dataset to obtain two values: one for the correctly predicted set and the other for the incorrect set. We compare these values in Figure 4(b). As evident from the plot, for the set of incorrect answers, the percentage of answers present in the image as text is lower than for the set of correct answers. Note that we remove all Yes/No questions before calculating these numbers. Also, for OCR-VQA, we remove the genre classification questions, since these have to be inferred and not directly extracted from the context.

Figure 3: (a) The impact of using a better reading order **on different model sizes** for the DocVQA dataset. Note that _Raster Scan_ is a strategy we specifically design for DocVQA, which outperforms the standard reading order obtained from the OCR pipeline. (b) The comparison of how the choice of the reading order influences performance **across different datasets** for FlanT5-XXL. We see that InfoVQA is affected the most by shuffling the reading order. Values plotted are against the respective metrics used for the datasets.

## 5 Discussion

In this section, we discuss some challenges and metrics that correlate strongly with model performance for an LLM-only document image question-answering model.

Reading Order Perplexity. As evident from Table 3, even without using a vision encoder, the performance is quite high. However, in some cases either we do not have enough information from the text, or the reading order itself is so jumbled that a language-only model may find it difficult to answer the question. Intuitively, we expect the performance of any language model on a question answering task to be strongly correlated with the kind of data it has been trained on. In order to quantify this in the form of a heuristic, we define a metric called _Reading Order Perplexity_.

**Definition:** For a language model \(M\), we define reading order perplexity as the perplexity of the predicted tokens given the context, i.e., the reading order text passed to the model for question answering. Formally, given a language model \(M\), context \(C\), and a sequence of predicted tokens \((p_{1},p_{2},\ldots,p_{n})\), we define the Reading Order Perplexity (\(ROP\)) as:

\[ROP_{M}=\exp\Big(-\frac{1}{n}\sum_{i=1}^{n}\log P_{M}(p_{i}\mid p_{1:i-1},C)\Big)\]

where \(P_{M}(p_{i}\mid p_{1:i-1},C)\) represents the likelihood of the \(i^{th}\) token predicted by the model \(M\). Based on this definition, we define _Zero-Shot Perplexity_ as the average reading order perplexity of the predictions over the dataset in a zero-shot setting, i.e., with a pre-trained model.

Figure 4: Text from two different reading orders along with the answer for the same question in both cases. As we can see for this example, the standard reading order is jumbled up, whereas the raster-scan-based reading order does a better job at extracting the information from the text, and thus enables the LLM to answer the question correctly.

Figure 5: (a) **Normalized length of text in image** for the sets of correct and incorrect answers. (b) **Percentage of answers in text** for the correct and incorrect sets per dataset. Results are on FlanT5-B. (c) Difference in performance between normal and random reading order with median context length. The difference is much higher when the context length is larger.
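As a concrete illustration, the definition above reduces to the following computation once per-token log-likelihoods are available; the function names and the log-probability interface are illustrative assumptions, not a specific library API.

```python
import math
from typing import List

def reading_order_perplexity(token_logprobs: List[float]) -> float:
    """Reading Order Perplexity (ROP) for one prediction.
    token_logprobs[i] is assumed to be log P_M(p_i | p_1..i-1, C), the model's
    log-likelihood of the i-th predicted token given the reading-order context C
    and the previously predicted tokens."""
    n = len(token_logprobs)
    return math.exp(-sum(token_logprobs) / n)

def zero_shot_perplexity(per_example_logprobs: List[List[float]]) -> float:
    """Average ROP over a dataset, computed with a pre-trained (not fine-tuned) model."""
    rops = [reading_order_perplexity(lp) for lp in per_example_logprobs]
    return sum(rops) / len(rops)

# A confidently predicted 3-token answer vs. an uncertain one.
print(reading_order_perplexity([-0.1, -0.2, -0.1]))   # ~1.14 (low perplexity)
print(reading_order_perplexity([-2.3, -1.9, -2.7]))   # ~9.97 (high perplexity)
```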
We calculate the mean perplexity of the two sets - correct and incorrect predictions - and plot them in Figure 6(a) for all the datasets. As evident from the chart, zero-shot perplexity correlates strongly with model performance, with significantly lower values for correct predictions than for incorrect predictions. We also observe a strong correlation between zero-shot perplexity and the fine-tuned model's performance, as shown in Figure 6(b). Interestingly, the SOTA performance also seems to show a strong correlation with zero-shot perplexity, with lower values of perplexity corresponding to better scores overall. This seems to indicate that even for visual language models, which include image and layout features, the key to unlocking better results might be stronger language models. We also believe zero-shot perplexity serves as an easy shortcut to estimating model potential and overall performance on unseen tasks.

Key Observations. The type of reasoning needed to answer image document question answering tasks is one of five types (as shown in Figure 6): (a) direct extraction from the content of the image, (b) reasoning based on the text content of the image along with world knowledge, (c) answering solely based on the visual content of the image, (d) answering with no content of the image but only world knowledge, and (e) reasoning based on the visual as well as the text content of the image (layout understanding falls under this category). An LLM-only model, as employed here, is able to address question types (a), (b) and (d) exclusively. The notably strong performance across various datasets implies that a major chunk of questions fall within these categories, which do not even need visual reasoning. For these questions, scaling the language model appears more effective than scaling the vision encoder. In order to judge whether a vision encoder is able to extract the useful features necessary to answer questions, we should be looking at the question-answer types in (c) and (e). Hence, the reading-order-based, language-only scheme used in this paper can serve as a question segregator to create subsets of datasets on which the improvements of a new vision encoder should be shown.

## 6 Conclusion

In this paper, we analyze the contribution of an LLM in image document question answering tasks by modeling them as LLM-only tasks, without any vision encoder. Specifically, we show that by sending the text present in an image as an ordered set of tokens to an LLM, we can achieve near-SOTA performance on a variety of benchmark visual question answering datasets that involve text in images. We analyze multiple factors which play important roles in the effectiveness of such an LLM-only model. We hope that this can guide the choice of models for practitioners, and the choice of datasets for researchers developing models for which a vision encoder is truly essential.

Figure 6: Example document image question-answer pairs for different types of questions.

Figure 7: (a) **Perplexity** of zero-shot prediction for the correct and incorrect sets across all datasets. (b) SOTA and full fine-tuned performance with zero-shot perplexity of FlanT5-B.
2309.08978
Empowering In-Browser Deep Learning Inference on Edge Devices with Just-in-Time Kernel Optimizations
Web is increasingly becoming the primary platform to deliver AI services onto edge devices, making in-browser deep learning (DL) inference more prominent. Nevertheless, the heterogeneity of edge devices, combined with the underdeveloped state of Web hardware acceleration practices, hinders current in-browser inference from achieving its full performance potential on target devices. To address this issue, this paper presents the pioneering in-browser inference system, nnJIT, which enables just-in-time (JIT) auto-generation of optimized computing kernels for edge devices. nnJIT is built upon two novel techniques that significantly reduce kernel search and compilation overhead while steadily improving performance: Tensor-Web Compiling Co-Design lowers compiling costs by around 100X through eliminating redundant and ineffective compiling passes; Web-Specific Lite Kernel Optimization Space reduces kernel tuning costs by focusing on Web programming requirements and efficient device resource utilization, pruning the optimization space from millions to only dozens. nnJIT is evaluated for modern models, e.g., BART, T5, and Llama 2, on a range of edge devices including laptops and smartphones using different browsers and hardware from ARM, Intel, AMD and Nvidia. The results show that nnJIT can run up to 8.2X faster within 30 seconds compared to the existing baselines.
Fucheng Jia, Shiqi Jiang, Ting Cao, Wei Cui, Tianrui Xia, Xu Cao, Yuanchun Li, Deyu Zhang, Ju Ren, Yunxin Liu, Lili Qiu, Mao Yang
2023-09-16T12:29:25Z
http://arxiv.org/abs/2309.08978v2
Accelerating In-Browser Deep Learning Inference on Diverse Edge Clients through Just-in-Time Kernel Optimizations

###### Abstract.

Web applications are increasingly becoming the primary platform for AI service delivery, making in-browser deep learning (DL) inference more prominent. However, current in-browser inference systems fail to effectively utilize advanced web programming techniques and customize kernels for various client devices, leading to suboptimal performance. To address the issues, this paper presents the first in-browser inference system, nn-JIT.web, which enables just-in-time (JIT) auto-generation of optimized kernels for both CPUs and GPUs during inference. The system achieves this by using two novel web programming techniques that can significantly reduce kernel generation time, compared to other tensor compilers such as TVM, while maintaining or even improving performance. The first technique, _Tensor-Web Compiling Co-Design_, lowers compiling costs by unifying tensor and web compiling and eliminating redundant and ineffective compiling passes. The second technique, _Web-Specific Lite Kernel Optimization Space Design_, reduces kernel tuning costs by focusing on web programming requirements and efficient hardware resource utilization, limiting the optimization space to only dozens. nn-JIT.web is evaluated for modern transformer models on a range of client devices, including the mainstream CPUs and GPUs from ARM, Intel, AMD and Nvidia. Results show that nn-JIT.web can run up to 8.2\(\times\) faster within 30 seconds compared to the baselines across various models.

+ Footnote †: \({}^{\dagger}\) Ting Cao is the corresponding author.

## 1. Introduction

Web applications are increasingly becoming the primary means to deliver AI services, such as ChatGPT (Chen et al., 2017), Stable Diffusion (Chen et al., 2017), Web LLM (Zhou et al., 2018) and the suite of AI services within M365 for Web (Chen et al., 2017). This AI deployment shift is attributed to the compelling advantages of Web applications, including cross-platform execution (a Web application can run on any device with a browser); _click and run_ deployment, with no need for installation; and simplicity of maintenance, since application updates can be made available to users in a timely manner. With this shift, there is a surge of interest in performing DNN inference directly within Web browsers, _i.e._, in-browser inference. In-browser inference can provide a more responsive user experience and enhanced privacy protection by avoiding round-trips to the cloud, as well as reduce the expense of cloud computing resources for serving a large number of clients. In-browser inference is made viable by the continuous advances in Web programming techniques, such as the recently introduced WebAssembly (abbreviated Wasm) (Chen et al., 2017) and WebGPU (Chen et al., 2018), as well as the fast-growing computing capabilities of client devices. However, current in-browser inference systems, such as TensorFlow.js (Wang et al., 2018), ONNX Runtime Web (Wang et al., 2018), WebDNN (Chen et al., 2018), and brain.js (Chen et al., 2018), suffer from two major drawbacks, leading to inferior performance.
Firstly, these systems lag behind advanced web programming techniques, as they require handwritten kernels for each web programming backend _e.g._, JavaScript, Wasm, WebGL(Chen et al., 2018). Integrating a new back-end necessitates significant rewriting efforts, resulting in limited support for emerging technologies like WebGPU(Chen et al., 2018). Secondly, their predefined kernels do not account for hardware diversity, causing a _one-for-all_ approach that delivers poor performance across various client devices. As we will show in the paper, our proposed device-customized kernels demonstrate a potential speed-up of several times. To address these challenges, tensor compiling techniques such as TVM(Chen et al., 2018), Ansor(Wang et al., 2018) and FlexTensor (Wang et al., 2018) can be employed to automatically generate customized kernels without manual efforts. However, tensor compilers necessitate _ahead-of-time_ kernel generation for known hardware, due to the hours even days of kernel generation cost and the requirement of on-device kernel evaluation. This approach is more practical for limited target devices, such as those in cloud environments. Unfortunately, Web applications are intentionally designed to operate on a wide range of hardware and software environments, encompassing diverse CPUs, GPUs, and OS. Generating kernels ahead-of-time for each hardware is impractical. Therefore, achieving optimal in-browser inference performance for each client device without manual intervention remains an unresolved challenge. To tackle it, we rethink the specialties of Web. Compared to native inference systems, in-browser inference offers the distinct advantage of _online kernel updating_. Furthermore, in-browser inference typically runs repeatedly over an duration, such as for video and document processing. This distinctive feature provides the opportunity and time budget for just-in-time (JIT) kernel customization after encountering the actual device. Based on this insight, we present nn-JIT.web, the first in-browser DNN inference system with the unique ability to automatically generate and continuously improve customized kernels during inference for target devices, leading to a gradual speedup towards optimal performance. Both CPU and GPU are supported through generating kernels in the state-of-the-art (SOTA) Web programming interfaces respectively, _i.e.,_ Wasm for CPU and WebGPU for GPU. To realize this system, the key challenge lies in enabling JIT generation of optimized kernels, a feat that has never been accomplished before. Current tensor compilers perform _compiling_ and _kernel tuning_ processes to identify reasonable kernels. Tensor computations are implemented as nested multi-level loops to compute each tensor element. Various loop arrangements, such as different tiling sizes, unrolling factors, and loop orders, result in a kernel optimization space. The kernel tuning process iteratively selects and evaluates potential candidates from this space to find optimal ones. The evaluation of each candidate invokes the compiling process to generate executable codes and run on the target device. As discussed in related papers [21, 36], the lengthy time required to generate optimized kernels is due to 1) the compiling cost and 2) the vast kernel optimization space. Compiling each candidate can take minutes, as numerous transforming passes are needed for both tensor level and target language level, _e.g.,_ Wasm. 
The extensive optimization space prolongs the kernel tuning process in searching for reasonable candidates. To reduce the space, Romou[21] eliminates candidates that overuse hardware resources. Although this approach can reduce the space by 99%, the number of remaining candidates is still on the order of 10K. Roller[36] selects promising candidates by building a hardware performance model for known hardware, which is impractical for the diverse client devices found in Web environments. nn-JIT.web can facilitate JIT generation of optimized kernels based on our key findings of Web programming, that can reduce the compiling cost and kernel optimization space. 1) Web programming interface is designed with simple instruction sets and execution model for running efficiency and security, which does not require complex compiling optimizations. Moreover, mostly compiling optimizations for Web programming interface are overlapped with kernel optimization space, _e.g.,_ loop unrolling, rendering them unnecessary. 2) Strict Web requirements for security and portability convey consistent performance pattern across devices, _e.g.,_ costly memory allocation. This consistency removes the need for related candidates in the kernel optimization space to be evaluated on target devices. Based on the two findings, we propose two novel techniques accordingly. The first is _Tensor-Web compiling co-design_. Taking Wasm compilation as an example. Rather than the separated tensor-level and target-language (_i.e.,_ Wasm) level compiling, nn-JIT.web employs a unified compiling pipeline from tensor IR (Intermediate Representation) directly to Wasm IR, which completely eliminates the required invocation of LLVM Wasm backend or Emscripten [14] for separated Wasm compiling. The unified pipeline co-designs the tensor and Wasm compiling optimizations to avoid redundant and ineffective ones. The optimizations in LLVM Wasm backend is the best covered in the kernel optimization space. Only the optimizations closely related to Wasm instructions are kept to apply on the Wasm IR. This new compiling pipeline dramatically reduces the cost per candidate, from minutes to milliseconds. The second technique is _Web-specific lite kernel optimization space design_, guided by two principles: Web programming requirements and efficient utilization of hardware resources. As Web requirements cause consistent performance patterns across devices, to identify their impact on the kernel optimization space, we compose a microbenchmark suite that traverses the tensor compiling primitives (code transformations conducted on tensor IR to generate kernels) such as loop order and unroll, in a _one-variable-at-a-time_ manner. The suite is evaluated offline to identify the efficient primitive configurations. This can reduce the space size to tens of thousands. The hardware utilization is mostly decided by the tile sizes of a kernel implementation. An efficient hardware utilization requires the tile size to balance the contention between improved parallel hardware execution and reduced low-level memory accesses. This is _inconsistent_ across hardware depending on the hardware specs. We therefore use the heuristic-based method to select promising tile sizes to be in the kernel optimization space, to evaluate on the target device during JIT. By the guidelines, the number of candidates in space is reduced to only dozens. Based on the two techniques, we develop the nn-JIT.web. 
After the initial model and kernels are downloaded to run on the target client device, nn-JIT.web generates the lite kernel optimization space. Candidates in the space are compiled one-by-one using our unified compiling pipeline and evaluated on the client device, interleaved with the inference process, with limited overhead. Better kernels are continuously replaced online, gradually approaching the optimal. Considering the large number of clients on Web, candidate evaluation results and generated kernels is also crowdsourced from ones with similar hardware to achieve optimal kernels much faster. nn-JIT.web is implemented for both Wasm for CPUs and WebGPU for GPUs. Wasm is already supported by mainstream browsers, and CPUs are ubiquitous in client devices; thus, we prioritize Wasm support. WebGPU, although still in its early stages, shows great promise. Thanks to our JIT kernel generation, nn-JIT.web is the first general inference system that supports WebGPU for complex models, serving as a strong showcase for our advantages. nn-JIT.web is evaluated on representative transformer models, with suitable size to run in-browser on client devices, including encoder model RoBERTa (Rasmasmari et al., 2019), encoder-ecoder model BART (Rasmari et al., 2019) and T5 (Rasmari et al., 2019), and decoder model GPT-2 (Rasmari et al., 2019). Evaluation platforms cover a range of mobile and desktop hardware, including ARM CPU (Cortex-A76 and A78), Intel CPU (I9 12900H), AMD CPU (Ryzen 5800H), Intel GPU (HD 630), AMD GPU (Radeon), and Nvidia GPUs (RTX 3050, 3000, 3070Ti). The results show that within 30 seconds, nn-JIT.web can achieve average 26.65 times faster kernels compared to the baseline, and 2.36 times faster model inference. To summarize, our main contributions include: * This paper proposes the first in-browser inference system that enables JIT optimized kernel generation. * The Tensor-Web compiling co-design avoids the ineffective and redundant optimizations, reducing the compiling cost from minutes to milliseconds. * The Web-specific lite kernel space design is guided by both Web programming requirement and efficient utilization of hardware resource, reducing the optimization space from millions to dozens. * The evaluation is done on modern transformer models and a range of client devices, achieving up to 8.2\(\times\) speedup, compared to SOTA inference frameworks. ## 2. Background and Motivation ### DL Inference in Web Browsers Enabling DL inference in modern Web browsers is nontrivial (Rasmari et al., 2019). Due to the security considerations, the sandbox mechanism is widely used within browsers, which isolates Web applications, scripts, and other contents from the underlying system. The sandbox environment prevents malicious code from accessing and modifying system resources and settings, meanwhile it also restricts the usage of the sophisticated native DNN inference libraries, such as Eigen (Han et al., 2015) for the CPU and cuBLAS (Rasmari et al., 2019) for the GPU. To make DL inference in browsers possible, alternative programming interfaces, hence backends, are proposed to use. JavaScript (JavaScript, 2018) is firstly leveraged to implement DL kernels and graphs in Web DL frameworks (Rasmari et al., 2019). JavaScript has no-static data type and no vectorization support. Although some efforts like V8 Engine (Han et al., 2015) could significantly accelerate JavaScript code, the DL execution with it is still extremely inefficient in JavaScript environment. 
To cope with it, WebAssembly (Wasm) (Brower et al., 2016) is considered. Wasm is a compact binary format. Its runtime is a portable virtual machine running on the CPU. Fig. 1 shows the Wasm implementation in browsers. Wasm code is delivered in low-level bytecode, which can be decoded and executed more efficiently in the virtual machine. The bytecode needs to be validated for security. What's more, Wasm also takes advantage of advanced features of modern CPUs, _e.g._, Single Instruction Multiple Data (SIMD). Therefore, it provides much better inference performance than JavaScript. Wasm is language-agnostic. High-level programming language like C and C++ could be compiled into Wasm bytecode. GPUs could also be utilized within browsers. For instance, WebGL has been integrated in TensorFlow.js. WebGL provides a set of JavaScript interfaces to access GPU that originally enable rendering 3D graphics on Web pages. It is based on OpenGL ES 2.0 (Doming et al., 2018), a subset of OpenGL (Rasmari et al., 2019). Thus, certain functionalities are not available. Meanwhile, as a rendering library, it failed to utilize the computation pipelines in modern GPUs due to limited instructions for computation. To unleash the power of GPU, WebGPU, the successor of WebGL, is proposed. In addition to graphics rendering, WebGPU provides stronger computation ability, driving computation intensive DL kernels to execute more efficiently. WebGPU Shading Language (WGSL) is used to program. Fig. 1 shows the implementation of WebGPU in browsers. While running in browser, the WebGPU kernel is translated to native GPU APIs to run, such as Vulkan (Vulkan, 2018). For portability, WebGPU also specifies limitations for the hardware usage. The validator is again to check the kernel for security. Taking the advantages of the backends above, Web DL frameworks including TensorFlow.js (TF.js) and Onnx Runtime Web (Ort-Web), enable end-to-end in-browser inference for pretrained DL models. They all have relatively mature support for Wasm, and start to support WebGPU. The DL Figure 1. The Wasm and WebGPU support in browser. kernels shipped within these frameworks are usually handwritten or ported from native DL frameworks, _e.g.,_ TensorFlow [28]. To optimize the kernels, DL compilers such as TVM [10] are also extended to support generating and optimizing kernels implemented in Wasm and even WebGPU automatically. However, generating kernels for Web usually takes significantly long time, _e.g.,_ nearly 2 hours for one Matrix Multiplication (MatMul) kernel. Besides that, the performance of tuned kernels are almost far from the optimal. We would discuss the issue in detail in the followings. ### Inference Performance Issues To understand in depth the DL inference performance in browsers, we conduct the preliminary study, specifically we measure the inference latency of a typical DL kernel, MatMul, to demonstrate the potential performance issues for DL inference in browsers. We have the following observations: **The one-for-all kernels are suboptimal across devices.** Web applications are running on millions of devices equipped with diverse hardware. Different hardware prefers different kernel implementations. However, instead of designing customized kernels for each type of devices, at present the SOTA in-browsers inference frameworks deliver kernels in the way of a one-for-all style. For instance, TF.js and ORT-Web ship handwritten kernels on Wasm and WebGPU. 
We execute the one-for-all MatMul kernels from TF.js, ORT-web (which only supports Wasm), and pre-tuned AutoTVM (without tuning on the target device) on an AMD 5800H desktop CPU, ARM Cortex-A78/A76 mobile CPUs, Nvidia 3000/3070Ti GPUs and an Intel 630 GPU. The inference latency is illustrated in Fig. 2. The results indicate that the performance of pre-defined kernels is suboptimal compared to our device-customized kernels. Moreover, a single pre-defined kernel exhibits a wide range of performance gaps on different devices. For instance, the kernel from TF.js demonstrates a slowdown ranging from as little as 2% to as much as 146% when compared to customized ones. Similarly, without tuning, the generated kernel from the tensor compiler TVM shows a slowdown of 19% to multiple times, depending on the device. These results highlight the need for customized kernels tailored to each device.

Figure 2. The normalized kernel latency of handwritten, pre-tuned, and our online-tuned MatMul kernels ([M,K,N]=[640,768,2304]) on eight devices.

**The one-for-each kernels are currently impractical in Web scenarios**. Based on the measurements presented above, one might consider generating kernels in a one-for-each style. However, this solution remains infeasible. We assessed the optimized kernel generation time of TVM for a MatMul kernel on an AMD 5800H CPU device. It took nearly 2 hours to identify the kernel with high performance (29.2 GFLOP/sec), with 437 tuning rounds. Typically, a deployed model contains several tens of kernels. Clearly, the one-for-each approach is impractical, particularly for Web scenarios where client diversity is substantial.

Figure 3. The generated MatMul kernel ([M,K,N]=[640,768,2304]) performance and generation time of TVM on AMD 5800H CPU.

The prolonged optimized kernel generation cost is due to two primary causes: the exceedingly large kernel optimization space and the bloated tensor compilation process. Fig. 4 illustrates a common tensor compiler pipeline. For a tensor compiler, the tensor computation is defined in a domain-specific language. Its potential kernel implementations, which compose a kernel optimization space, are defined by _primitives_ and their corresponding configurations. A primitive is a kind of code transformation for the tensor IR, _e.g.,_ loop unroll. A candidate from the kernel space can be described by a sequence of primitives and their configurations. The compiling process can then follow these primitives to conduct IR transformations and generate the kernel. After that, the target language compiler, _e.g.,_ LLVM, can be called to compile the kernel into executables for the target device.

Figure 4. A common tensor compiler pipeline.

Due to the combinatorial blowup of loop arrangements, the kernel optimization space is huge. Our analysis shows the size of a naive space for a MatMul (384\(\times\)768\(\times\)768) in WebGPU is around 42 M. Smart searching algorithms and hardware performance models are normally employed to select only promising candidates for actual compilation and evaluation on the target device. Even so, thousands of candidates generally need to be evaluated before finding an optimized kernel implementation. The compiling cost for each candidate is around seconds to minutes depending on the kernel quality. The total optimized kernel generation cost will be hours. Therefore, to reduce the optimized kernel generation cost for JIT, we need to reduce the compiling cost for each candidate, and reduce the number of candidates in the space.
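For illustration, one candidate in such a space can be written down as a handful of scheduling primitives and their configurations. The sketch below uses TVM's te schedule API with arbitrary tile, order, unroll, and vectorization choices; the concrete primitives and factors are only examples, not the templates used by any particular framework.

```python
# One point in the kernel optimization space for a MatMul, expressed as
# scheduling primitives applied to a tensor expression (TVM te API).
import tvm
from tvm import te

M, K, N = 640, 768, 2304
A = te.placeholder((M, K), name="A")
B = te.placeholder((K, N), name="B")
k = te.reduce_axis((0, K), name="k")
C = te.compute((M, N), lambda i, j: te.sum(A[i, k] * B[k, j], axis=k), name="C")

s = te.create_schedule(C.op)
i, j = s[C].op.axis
io, ii = s[C].split(i, factor=32)     # tiling: one choice among many factors
jo, ji = s[C].split(j, factor=64)
ko, ki = s[C].split(k, factor=8)
s[C].reorder(io, jo, ko, ki, ii, ji)  # loop order: another tunable dimension
s[C].unroll(ii)                       # unrolling factor
s[C].vectorize(ji)                    # SIMD width exposed to the backend codegen

print(tvm.lower(s, [A, B, C], simple_mode=True))  # inspect the resulting loop nest
```

Each such candidate must be compiled and measured, which is exactly what makes a naive space prohibitively expensive to explore online.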
To achieve this, we propose nn-JIT.web. In the following sections, we will introduce the design principles and key techniques of nn-JIT.web.

## 3. nn-JIT.web Overview

Fig. 5 gives the overview of nn-JIT.web. It consists of four modules: the _tensor JIT compiler_ for online kernel generation; the _inference engine_ for executing inference tasks in the browser; the _microbenchmark suite_ for offline exploration of the consistent primitive settings; and the _kernel database_ for storing customized kernels tailored to known devices. The whole kernel generation and inference process facilitated by nn-JIT.web operates on both the cloud and clients, as follows. During the initialization phase, the browser on the client downloads the web page, the inference engine, the model and the initial kernels. The model encloses the weights and the optimized model graph (_e.g.,_ operator-fused) ready to deploy. The _inference engine_ parses the model graph, registers the kernel for each operator to execute, and manages the memory usage. The initial kernels are determined by the server, using the client device indicator, _e.g.,_ device name and ids. If the hardware on the client has been explored, the optimal kernels from the _kernel database_ on the server are used. Otherwise, pre-defined and uncustomized kernels are used, and the JIT phase is triggered. During the JIT phase, the _tensor JIT compiler_ on the server composes the lite kernel optimization space for each operator type. The compiler then generates the kernel for each candidate within the space. Between the server and the client, a kernel queue is established. Once a kernel is generated on the server, it is pushed to the client via the queue. On the client, inference is executed repeatedly. Between every inference, the _inference engine_ retrieves one kernel from the queue and measures its latency. Based on the measurement, the newly retrieved kernel may be re-registered if it is significantly faster than the currently registered one, ensuring that the more efficient kernel is utilized in subsequent inferences. After testing all the kernels in the queue, the best kernel along with the measurement results is reported to the server. The server then updates the _kernel database_ according to the reports. In accordance with our design, the _tensor JIT compiler_ of nn-JIT.web is lightweight and can run either on the cloud or directly on clients. In our current implementation, we deploy it on the cloud, as this enables kernel reuse. Optimal kernels discovered by one device can be seamlessly shared with other devices possessing the same hardware through the cloud, thereby facilitating the concept of _crowdsourcing_. Furthermore, compared to cloud inference, which necessitates uploading raw user input data, only performance profiles are sent to the server. This approach aligns with common Web practices to enhance user experience and protect privacy. The online kernel generation combined with JIT-styled inference ensures optimal performance on the Web. To facilitate this, we propose two key techniques that significantly reduce the kernel generation cost, _e.g.,_ from hours to milliseconds for a single kernel. In the following sections, we will introduce these techniques in detail.
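The client-side portion of this workflow reduces to a measure-and-swap loop interleaved with inference. The sketch below mirrors that logic in Python-style pseudocode; the engine interface, queue, and threshold are illustrative placeholders, and the actual runtime is implemented in JavaScript inside the browser.

```python
def jit_inference_loop(engine, kernel_queue, report_to_server, speedup_threshold=1.05):
    """Interleave inference with evaluation of JIT-generated kernel candidates.

    `engine` is assumed to expose run(), measure(kernel) and register(op, kernel);
    `kernel_queue` yields (op_name, kernel) pairs pushed by the server-side compiler.
    """
    best = {}  # op_name -> (kernel, latency_ms)
    while True:
        engine.run()                            # serve the next inference request
        if kernel_queue.empty():
            continue
        op_name, candidate = kernel_queue.get()
        latency = engine.measure(candidate)     # time the candidate between inferences
        current = best.get(op_name)
        if current is None or current[1] > latency * speedup_threshold:
            engine.register(op_name, candidate) # swap in the significantly faster kernel
            best[op_name] = (candidate, latency)
        if kernel_queue.empty():                # all candidates tried: report results
            report_to_server(best)
```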
## 4. Streamline Compilation Pipeline Through Tensor-Web Co-Design

Each candidate in the kernel optimization space needs to be compiled and evaluated on the client device. Current compiling takes minutes to complete. Even if the space only has dozens of candidates, the total compiling would take hours, which cannot support online optimized kernel generation. To reduce the cost, this section introduces the possibilities of removing target-related compiling (Sec. 4.1), mapping directly from tensor-level IR to Wasm IR (Sec. 4.2), and keeping only the necessary optimization passes on Wasm IR (Sec. 4.3). Sec. 4.4 will briefly discuss the compiling pipeline for WebGPU.

Figure 5. Overview of nn-JIT.web.

Figure 6. Our unified Tensor+Web compiling compared to conventional separated Tensor and Web compiling.

### Unify Tensor-Web Compiling

**Costly target-related compiling.** As shown in Fig. 6, the conventional compiling process of tensor compilers consists of two separate main steps: the tensor-level compilation and the subsequent target-related compilation (_e.g._, Wasm). Generally, they are designed separately by different communities, each with their own specific purpose. Tensor compilation transforms the tensor-level IR according to the primitives and configurations of a picked candidate from the kernel optimization space, to generate a mapping of the tensor computation to a loop arrangement. This process is independent of the target execution environment. Target-related compilation, on the other hand, aims to generate efficient executables for the target environment from arbitrary high-level programs. Therefore, after tensor-level compiling generates the loop, a separate target-related compiling library such as LLVM is normally invoked to generate the executables. As these target-related compiling libraries are built to compile arbitrary general-purpose programs, there are many compiling passes, taking a long time to complete. Specifically, for Wasm as shown in Fig. 1, the target execution environment is the Wasm virtual machine running within the browser. The target-related compiling library is LLVM or Emscripten, which compiles the tensor-level loop to Wasm bytecode. The Wasm-related compiling by LLVM/Emscripten also contributes the majority of the total tensor compilation cost.

**Feasibility of eliminating target-related compiling.** We therefore explore the possibility of eliminating this target-related compiling, by identifying two opportunities. Firstly, we could remove _ineffective optimizations_. From the target perspective, Wasm is designed with a simple expression-based instruction set and a stack-based execution model (Wasm, 2017), for the purpose of easy decoding, running efficiency, and security. Consequently, many sophisticated compiling optimizations are not effective or necessary, such as the ones for register allocation, instruction reordering, and memory disambiguation. Secondly, we could remove _duplicated optimizations_. From the tensor perspective, the kernel optimization space, which includes numerous possible kernel implementations, already encompasses many of the target-related compiling optimizations. For example, the unrolled loop generated by an LLVM optimization pass is very likely also included in the kernel optimization space, which will be evaluated as well. The separated tensor and Wasm compiling cannot avoid the redundancy. In addition, the tensor computation defined by the tensor domain-specific languages does not need the complex compiling optimizations for general-purpose programs, such as dead code elimination. Thus, the pipeline could be further streamlined.
**Unified Tensor-Web compiling.** The analysis above prompts us to redesign the tensor compiling pipeline, which unifies the tensor and Wasm compiling as shown in Fig. 6. It removes the separated target compiling invocation, and compiles tensor IR directly to the target executables _e.g._, Wasm bytecode. As a premise, Wasm is designed to be the compiling target of any high-level languages, including C and C++. It can also be the target of tensor-level IR. The optimization passes of different level IR's are co-designed, retaining only the necessary and non-repetitive ones. Through analyzing the generated code performance, we find almost all the optimization passes in LLVM can be covered in kernel optimization space. Only the ones closely related to Wasm instruction definition will be additionally needed to apply on the Wasm IR as the figure shows. These passes are very light weighted, taking about 100 ms to complete, tens or even hundreds of times less than calling LLVM. ### Lower Tensor to Wasm The primary challenge in directly converting tensor IR to Wasm IR involves determining how to effectively map the statement-based high-level tensor IR to the expression- and stack-based low-level Wasm IR. Wasm has only been lowered from LLVM IR before, which is also a lower level IR, facilitating the transformation. For example, LLVM has already lowered the high-level for statement. Figure 7. Lower tensor IR to Wasm IR for MatMul. Fig. 7 uses a code snippet to illustrate the differences between the two IRs by using the MatMul implementation as an example. The tensor IR is represented as a sequence of statements, such as the for loop statement. Wasm, on the other hand, is composed of a sequence of expressions (enclosed by the parenthesis in the figure). Each expression is evaluated to produce a value. Wasm is implemented as a stack-based machine, in which instructions manipulate an implicit operand stack, popping argument values and pushing result values. This design is to fit the sandboxed and resource-limited environment of browsers. **Map for statement to an expression block.** To map the tensor IR to Wasm, we traverse the tensor IR AST (abstract Syntax Tree) to transform the sequence of statements to sequence of expressions. The difficulty lies in handling the for statement. Since Wasm does not utilize statements, we construct a nested sequence of expressions as a block enclosed by the Wasm loop&end instructions for this statement. As we show in Algorithm 1, the sub-expressions, _e.g.,_ loop variable calculation, are created while traversing the for node of the tensor IR AST (line 2-7). Then the expressions will be nested together as the execution order of the stack (line 8-10). During execution, the br_if will pop the condition result from the stack, and decide whether to branch to the loop label (line 5). The loop instruction introduces an implicit label, which serves as the target of the branch instruction. During the actual stack execution, the loop instruction pushes a new entry onto the control stack, and record the stack height. If the branch is taken, the stack pops up to the block's height before and proceed to the end of the block. 
```
Algorithm 1: Lowering a for statement of tensor IR to a Wasm loop expression.
input : forNode of tensor IR
output: loopExpr of Wasm IR

 1  // for (loopVar = forNode.begin; loopVar < forNode.end; loopVar += forNode.stride) body
 2  loopVar     <- createWasmVar();
 3  initLoopVar <- makeLocalSet(loopVar, forNode.begin);
 4  ltExpr      <- makeBinary(LtU, loopVar, forNode.end);
 5  brIfExpr    <- makeBrIf(ltExpr, loopLabel);
 6  bodyExpr    <- lowerToWasm(forNode.body);
 7  incExpr     <- makeLocalSet(loopVar, makeBinary(Add, loopVar, forNode.stride));
 8  blockExpr   <- makeBlock(bodyExpr, incExpr, brIfExpr);
 9  loopExpr    <- makeLoop(loopLabel, blockExpr);
10  return makeBlock(initLoopVar, loopExpr);
```

The reduction of compilation time for WebGPU is achieved through the tensor optimization space design, which will be discussed in Sec. 5.
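To make the loop mapping of Algorithm 1 concrete, the sketch below emits the Wasm text form of a counted loop directly, following the block/loop/br_if structure described above. The emitter and its output formatting are illustrative only and are not the actual WasmModuleNode implementation.

```python
def emit_wasm_counted_loop(var: str, begin: int, end: int, stride: int, body: str) -> str:
    """Emit Wasm text for: for (var = begin; var < end; var += stride) { body }.
    The loop label is the implicit branch target of br_if; while the condition
    still holds, br_if jumps back to the start of the loop block."""
    return "\n".join([
        f"(local.set ${var} (i32.const {begin}))",
        "(loop $for_loop",
        f"  {body}",
        f"  (local.set ${var} (i32.add (local.get ${var}) (i32.const {stride})))",
        f"  (br_if $for_loop (i32.lt_u (local.get ${var}) (i32.const {end})))",
        ")",
    ])

print(emit_wasm_counted_loop("i", 0, 128, 1, "(nop)"))
```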
## 5. Accelerate Kernel Tuning with Web-Specific Lite Space

To reduce the vast kernel optimization space, we propose the web-specific lite kernel space design based on two guidelines: the Web-specific requirements (Sec. 5.1), and the efficient utilization of hardware resources (Sec. 5.2). Existing works (Han et al., 2017; Wang et al., 2018) also aim to shrink the kernel optimization space for inference on native hardware. However, these spaces are either still too large to be evaluated online or require pre-defined hardware performance models. We will show that for in-browser inference, considering the two guidelines leads to a lite space of just a few dozen candidates, which can be evaluated online. Moreover, the numerous web application clients offer a unique opportunity to crowdsource the globally optimal kernel (Sec. 5.3).

### Web-guided Offline Space Reduction

Web programming is aimed at achieving portability and security. For instance, both Wasm and WebGPU implement rigorous validation processes to prevent malicious or erroneous code, such as type errors, memory overflow, out-of-bounds access, and invalid jumps. These specialties convey consistent kernel performance patterns across devices. The related kernel implementations do not need to be evaluated on every device, which can significantly reduce the number of candidates within the kernel optimization space.

**Performance pattern of Web programming**. To illustrate the performance impact, Fig. 9 compares MatMul latency with different primitive settings, _i.e.,_ cache_read on and off for Wasm, and unroll on and off for WebGPU, as examples. The performance shows the same pattern across devices: disabled cache_read and enabled unroll always achieve better performance. Moreover, these settings run counter to the common settings for native kernels. The reasons are as follows. The cache_read primitive creates a small buffer that can reside in different memory levels. As a nested loop in a kernel is mapped to various levels of tiling on the hardware, the small buffer can load a tile to improve data locality. For native kernel execution, cache_read does improve performance on many devices. However, when it comes to Wasm kernels, the performance is reduced on all tested devices, as shown in Fig. 9. This decrease in performance is attributed to the costly Wasm validation process for memory allocation. The unroll primitive explicitly unrolls the loop to reduce loop-related overheads. In native inference, the unroll primitive does not impact kernel performance on many devices, as the native GPU compiler can conduct loop unrolling as needed. However, WebGPU only triggers a weak level of compiling optimization in the native GPU compiler to facilitate the quick response of web applications. As a result, the unroll primitive needs to be specifically set during tensor compiling to achieve better performance.

**Discovery of Web-consistent primitive settings**. Although we have demonstrated two typical examples of primitive settings, it remains challenging to discover all such primitives with cross-device consistent settings. To minimize human effort, we propose a microbenchmark to automatically detect these primitives. The benchmarking is a one-time effort, as it is only related to the Web techniques used for the backends, such as Wasm and WebGPU. The microbenchmark suite automatically traverses all the primitives for a common-sized MatMul kernel (specifically with a shape of 4K\(\times\)4K\(\times\)4K in practice).
The _one variable at a time_ method is used to change the setting of only one primitive at a time, such as cache_read on/off. The suite is evaluated offline on multiple testing client devices, and we then compare the measurements across these devices. If the results are consistent, we set the primitives accordingly, _e.g.,_ cache_read off and unroll on. Consequently, we can fix these settings when constructing the kernel search space, hence reducing the space. If the results are inconsistent, we consider them device-dependent primitives. These are processed in the device-guided online space construction module, allowing for adjustments based on specific device characteristics to optimize performance.

Figure 9. The MatMul kernel ([M,K,N]=[640,768,2304]) latency comparison of different primitive settings. The advantageous setting is consistent across devices.

### Device-guided Lite Space Building

The microbenchmark results remove the device-consistent settings from the kernel optimization space. The remaining ones are inconsistent across devices, and this space is still large, on the order of tens of thousands of candidates. This section uses a formulation of kernel hardware usage together with heuristics to build the lite space of promising candidates for JIT evaluation on target devices.

**Rationale for heuristics and formulation.** A tensor computation is mapped to a loop arrangement in a kernel implementation, and further mapped to tiles in the hardware during execution. As shown in Fig. 10, the innermost loop tile is mapped to the registers, and the second-level loop tile is mapped to the L1 cache/shared memory. Therefore, the tile size prominently decides the hardware usage of the kernel implementation, and thus the kernel performance. Fundamentally, a tile size with efficient hardware usage must balance 1) the use of parallel computation units for fast computation and 2) the use of advanced memory storage, _e.g.,_ shared memory, for fast data accesses. However, the two are normally in conflict with each other. More threads can better saturate the parallel computing units and hide memory access stalls, but more threads can also overuse the registers and shared memory. On the other hand, thread counts that fit within the registers can underuse the parallel units. The sweet spot between the two depends on the target hardware and running environment, such as the size of the advanced memories, the computation and memory bandwidth, and the quality of compiling, and therefore has to be found by tuning on the device.

**Heuristics for efficient hardware usage.** We therefore formulate the hardware usage based on the tile sizes, and set heuristics to pick the ones with potentially efficient hardware usage and good performance, as shown in Table 1. These form the kernel optimization space for our JIT compilation, and are evaluated interleaved with the DNN inferences on client devices. The formulas only need the client device type to know the hardware limits on memory size, register size, and number of cores. This device type can be read through the Web programming interface, and no other prior knowledge of the devices is needed. From the calculation, the candidates that prioritize higher-level storage (_i.e.,_ registers) usage (WebGPU heuristic 4) within the storage size (Wasm heuristic 2 and WebGPU heuristic 2) are put in the space. They are evaluated in order from the ones with the maximum number of activated blocks to the ones with the minimum (WebGPU heuristic 4). The other heuristics are based on the Wasm and WebGPU specifications.
Note that in the actual calculation, there is a relaxation ratio for the hardware resources, since other variables in the kernel will also use registers. Finally, by applying both the Web-guided space reduction and the device-guided space building, our web-specific lite kernel optimization space only includes a few dozen candidates, six orders of magnitude smaller than the naive search space, which makes it feasible to evaluate online.

### Crowdsourcing and Kernel Zoo

We have constructed a lite kernel optimization space. During deployment, we recognize a potential issue arising from the diverse nature of deployment environments, including various background workloads and hardware utilization levels. This may cause variance in the assessed latency, potentially affecting our choice of the optimal kernel. To mitigate these concerns, we propose an _extended_ kernel space.

**Extended kernel space.** For each device, we enhance the lite kernel space using an exploration-and-exploitation approach. The extended kernel space typically comprises two sets of candidates: 1) the exploration set, which includes the original lite kernel set and may be empty if optimal candidates for the device have already been discovered; 2) the exploitation set, obtained from the _crowdsourcing_ module, which gathers and sends optimal kernel candidates to new devices with similar hardware specifications for further validation. Overall, the number of extended candidates is approximately one-tenth of the lite kernel space.

\begin{table} \begin{tabular}{|c|l|} \hline Params. (Symbols follow Fig. 10) & \(DataBits\): bits of data; \(x0\), \(r0\), \(y0\): the inner-most tile sizes; \(x1\), \(r1\), \(y1\): the second-level tile sizes; \(WarpSize\): the warp size; \(Register_{core}\): the register number per core; \(L1Cache_{core}\): the L1 cache size per core; \(Warp_{core}\): the warp number per core \\
\hline \(SIMDLength\) & \(x0\cdot DataBits\) \\ \hline \(Thread_{core}\) & \((x1/x0)\cdot(y1/y0)\) \\ \hline \end{tabular} \end{table} Table 1. Formulation of the kernel hardware usage based on tile sizes, used by the heuristics that build the lite space (symbols follow Fig. 10).
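The sketch below illustrates the flavor of such tile-size pruning for a GPU kernel: enumerate power-of-two tiles, keep only those whose register and thread footprint fit the device limits, and rank the survivors by a crude proxy for the number of active blocks. The resource formulas, limits, and factors here are simplified stand-ins for the formulation in Table 1, not the exact heuristics.

```python
from itertools import product

def lite_tile_space(max_threads_per_block=256, registers_per_core=65536, max_candidates=24):
    """Enumerate promising (x0, y0, x1, y1) tiles for a MatMul-like GPU kernel:
    (x0, y0) is the per-thread register tile, (x1, y1) the per-block tile.
    Keep only configurations that fit the device, ordered so configurations
    keeping more blocks active are evaluated first on the client."""
    candidates = []
    for x0, y0 in product([2, 4, 8], repeat=2):                  # register-level tile
        for x1, y1 in product([32, 64, 128, 256], repeat=2):     # block-level tile
            if x1 % x0 or y1 % y0:
                continue
            threads = (x1 // x0) * (y1 // y0)                    # threads per block
            regs = threads * x0 * y0                             # register values per block
            if threads > max_threads_per_block or regs > registers_per_core:
                continue                                         # over-uses hardware resources
            active_blocks = registers_per_core // regs           # crude occupancy proxy
            candidates.append(((x0, y0, x1, y1), active_blocks))
    candidates.sort(key=lambda c: -c[1])                         # max active blocks first
    return [cfg for cfg, _ in candidates[:max_candidates]]

print(len(lite_tile_space()))
```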
**Crowdsourcing and the kernel dataset.** The diverse nature of web clients provides us with the opportunity to engage in _crowdsourcing_. The fundamental concept is that the searched optimal kernel implementations can be shared among devices with identical hardware. To facilitate this, we employ two designs: 1) we leverage the hardware ids as well as profiled hardware primitives as a criterion to ascertain whether devices can share the same generated optimal kernel. In particular, we form the primitive vector as \(\vec{\rho}=\langle\rho_{i}\rangle,\rho_{i}\in\{0,1\}\), where \(\rho_{i}\) denotes the \(i\)-th primitive obtained from the microbenchmark. 2) In order to identify the best generated kernel, we adopt a majority voting strategy: clients submit the top-N (5 in our implementation) fastest implementations for a given kernel along with their ranked weights. We also introduce the kernel dataset. We take as the primary index key \((t,s,\vec{\rho},id)\), where \(t\) is the kernel type, \(s\) denotes the kernel shape, \(\vec{\rho}\) is the primitive vector and \(id\) is the device id.

## 6. Implementation

The implementation of nn-JIT.web is based on TVM (Krishnam et al., 2017) and Binaryen (Krishnam et al., 2017). We use the native GPU driver to compile WebGPU. To implement the Tensor-Web co-designed compilation pipeline in nn-JIT.web, we introduce a new compilation target for Wasm in TVM. Specifically, we develop the WasmModuleNode to enable lowering tensor intermediate representation (IR) to Wasm IR. We implement two crucial functions, wasm::Builder and wasm::ModuleWriter, to construct Wasm IR and compile the Wasm binary. To create the lite kernel space for nn-JIT.web, we extend TVM by incorporating web-specific scheduling templates. In these templates, we set the configurations for web-consistent primitives and define the search space for device-dependent primitive configurations selected by heuristics. Table 2 shows an example lite kernel optimization space we build for a MatMul. To implement the microbenchmark suite, we use a 4K\(\times\)4K\(\times\)4K MatMul with different primitive settings to develop the evaluated kernels and compile them using the compilation toolchain of nn-JIT.web. We also adapt the in-browser inference runtime based on TVM, which enables importing graph definitions and weights from both TF.js and Ort-Web. Overall, nn-JIT.web comprises 2085 new lines of Python code, 1671 new lines of C++ code, and 564 new lines of JavaScript code.
## 7. Evaluation

### Experiment Setup

**Hardware.** We conduct experiments on 8 desktop and mobile devices, including AMD Ryzen 5800H CPU, Intel I9-12900 CPU, ARM Cortex-A78/A76 CPU and NVIDIA RTX 3000/3070 Ti GPU, AMD Radeon GPU, Intel HD 630 GPU. We fix the maximum frequency on the selected devices to ensure consistent performance measurements.

**Kernels and models.** We evaluate nn-JIT.web on modern transformer models, including RoBERTa (Rao et al., 2017), BART (Krishnam et al., 2017), GPT-2 (Radeon et al., 2018), and T5 (Rao et al., 2018). The kernel evaluation uses typically sized kernels from these models, as listed in Table 3, including MatMul and BatchMatMul with different shapes. For the sequence-to-sequence models, such as GPT-2 and T5, we fix the input length at 384 to obtain comparable results across devices and models.

**Baselines.** We compare nn-JIT.web with three in-browser DL inference frameworks as baselines: TF.js, ORT-Web and pre-tuned AutoTVM (Krishnam et al., 2017). For TF.js, we use version 3.21.0. For ORT-Web, we use version 1.14.0. For AutoTVM, we use the default kernel space, search algorithm (_i.e._, XGBoost) and tuning trials (_i.e._, 1000) to generate and tune the kernels. The tuning targets are the Intel I7-10700 CPU for Wasm and the NVIDIA 3050 GPU for WebGPU kernels, neither of which is included in our test devices. The evaluation is conducted in a Chrome browser, version 111.0.5555.

**Metrics.** We use the performance.now() function, a JavaScript API, to measure the latency of kernels and models running with Wasm, and the writeTimestamp function of the WebGPU API to measure the latency on WebGPU. Each kernel and model is evaluated with one warmup run and 50 executions, and the averaged latency is reported.

### Overall Performance

**Kernel Performance.** Fig. 11 demonstrates the latency of the tested kernels on the selected CPUs and GPUs, comparing the baselines with nn-JIT.web. We observe a significant speedup. On CPUs with Wasm, nn-JIT.web achieves an average speedup of 42.57\(\times\), and on GPUs with WebGPU, it accelerates kernel executions by an average of 2.77\(\times\). Specifically, nn-JIT.web outperforms TF.js by 6.26\(\times\) on Wasm and 1.68\(\times\) on WebGPU. When compared to pre-tuned AutoTVM, the speedup is 119.92\(\times\) on CPUs and 3.86\(\times\) on GPUs. The inference speedup of nn-JIT.web is mainly due to efficient kernel tuning for specific hardware, whereas _one-for-all_ kernel approaches, including TF.js, ORT-Web as well as pre-tuned AutoTVM, fall short in this regard.

\begin{table} \begin{tabular}{|l|l|} \hline **Major Primitives** & **Configures (symbols follow Table 1)** \\ \hline Cache Read & [Yes, No] \\ Cache Write & Yes \\ Reorder & \(r2,y2,x2,r1,y1,x1,r0,y0,x0\) \\ Bind & \(y2\rightarrow\) block.y, \(x2\rightarrow\) block.x, \(y1\rightarrow\) thread.y, \(x1\rightarrow\) thread.x \\ Unroll & \(y0\) \\ Vectorize & \(x0\) \\ Tile Size & \((r0,y0,x0)\in[4,...,32]\), \((r1,y1,x1)\in[32,...,256]\) \\ \hline \end{tabular} \end{table} Table 2. The web-specific lite space for a MatMul kernel (M=K=N=4096) on WebGPU.

\begin{table} \begin{tabular}{|l|l|l|l|} \hline **ID** & **Kernel Type** & **Kernel Size** & **Model** \\ \hline K0 & MatMul & M=384, K=768, N=768 & RoBERTa \\ \hline K1 & MatMul & M=640, K=768, N=3072 & GPT-2 \\ \hline K2 & BatchMatMul & B=12, M=384, K=384, N=64 & BART \\ \hline K3 & BatchMatMul & B=120, M=64, K=64, N=64 & GPT-2 \\ \hline \end{tabular} \end{table} Table 3. Evaluated kernel types and shapes.
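As a small worked example of how the latency numbers above relate to the throughput figures reported next, the snippet below derives GFLOPs and speedups from measured latencies for the kernel shapes of Table 3. It is purely illustrative: the latency values used here are placeholders, not measurements from this evaluation.

```python
def matmul_gflops(m: int, k: int, n: int, latency_ms: float, batch: int = 1) -> float:
    """Throughput in GFLOPs for a (batched) MatMul, counting 2*M*K*N FLOPs per matrix product."""
    flops = 2.0 * batch * m * k * n
    return flops / (latency_ms * 1e-3) / 1e9

# Kernel K0 from Table 3 (M=384, K=768, N=768); the 2.0 ms latency is a placeholder value.
print(f"K0: {matmul_gflops(384, 768, 768, latency_ms=2.0):.1f} GFLOPs")

# Speedup of one implementation over another is simply the ratio of their latencies.
baseline_ms, tuned_ms = 12.0, 2.0  # placeholder numbers
print(f"speedup: {baseline_ms / tuned_ms:.2f}x")
```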
Figure12 showcases the kernel performance in GFLOPs and the JIT tuning rounds on selected CPUs and GPUs. We use the K2 kernel configuration, as detailed in Table3. As shown, nn-JIT.web attains optimal performance on CPUs with Wasm after 10\(\sim\)32 tuning rounds, while 25\(\sim\)40 rounds are needed to achieve peak performance on GPUs. This can be attributed to the web-specific life space. Moreover, our compilation pipeline optimization ensures that each tuning round takes only about 500ms for Wasm and 100ms for WebGPU, based on our evaluation. We also compare nn-JIT.web with the SOTA _one-for-each_ kernel approach. We use AutoTVM to generate kernel candidates and perform JIT inference as JIT TVM. The kernel configuration employed is K0, as described in Table3. The comparison is illustrated in Figure13. As shown, on the Nvidia 3050, nn-JIT.web reaches near-peak performance (1159 GFLOPs) within 25 rounds, while AutoTVM lags behind at 350 GFLOPs. On the Intel I7, nn-JIT.web finds the best kernel implementation at the 8th round, demonstrating a significant performance improvement of approximately 2.80\(\times\) compared to AutoTVM. Our evaluation suggests that AutoTVM would need 1106 additional tuning rounds to achieve its optimal performance. **Model performance.** We continue to evaluate the end-to-end model performance achieved by nn-JIT.web and other baselines. In Figure 14, we denote RoBERTa as M0, BART as M1, GPT-2 as M2, and T5 as M3. As illustrated, nn-JIT.web attains more than 3.13\(\times\) and 1.36\(\times\) speedup on average across the tested models compared to TF.js and ORT-Web on CPU with Wasm, respectively. Notably, on the AMD 5800H CPU, nn-JIT.web improves by up to 8.27\(\times\) on M3 compared to TF.js, while up to 5.64\(\times\) on Intel I9. Compared to pre-tuned AutoTVM, the achieved speedup is approximately 1.93\(\times\) with Wasm and 1.51\(\times\) with WebGPU. As TF.js and ORT-Web cannot support all kernels in the tested models with WebGPU, we do not report their model latency. The JIT kernel optimization on models may take longer than on individual kernels, but it remains efficient. As shown in Figure 15, for BART, nn-JIT.web takes approximately 5.5 seconds to discover the optimal kernel implementations for the Nvidia 3000 GPU. For CPUs, peak performance is just achieved after around 17.8s and 22.1s for the Intel i9 and ARM A76, respectively. Figure 11. Kernel latency executed with TF.js, ORT-web, pre-tuned AutoTVM as well as nn-JIT.web. Figure 12. Kernel performance improvements with the JIT kernel optimization rounds on different devices. Figure 13. Kernel performance improvements along with the JIT tuning rounds with nn-JIT.web and AutoTVM. ### Evaluation of Tensor-Web Co-Designed Compilation Pipeline Next, we analyze nn-JIT.web and evaluate each design component, beginning with the optimized compilation pipeline. Table 4 presents the compiling latency and achieved kernel latency for both the baseline and nn-JIT.web on AMD 5800H CPU. We use AutoTVM's conventional compilation pipeline as the baseline. For our optimized pipeline, we examine three optimization passes, namely offset load/store pass, combined instruction pass, and load/store to variable, to assess their individual contributions to the optimized pipeline. The same kernel implementation is used for all cases. As demonstrated, our compilation pipeline with all optimizations is over up to 125.8\(\times\) faster than the baseline, while maintaining a similar kernel inference latency (76ms and 74ms). 
Furthermore, our pipeline with three optimization passes results in a significant performance improvement of 166%, 185%, and 216%, respectively, with the compiling overhead increasing by 25%. ### Evaluation of Web-Specific Lite Space Next, we assess the design of the web-specific life kernel space, which significantly reduces the kernel optimization space. We examine two typical kernels, MatMul and BatchMatMul, and compare their kernel optimization spaces in AutoTVM and nn-JIT.web. The results are presented in Table 5. Notably, the web-specific lite kernel space size is, on average, around 0.0013% and 0.000068% of the AutoTVM space on Wasm and WebGPU, respectively, decreasing search candidates from millions to dozens. In combination with our optimized compilation pipeline, nn-JIT.web reduces the overall kernel generation cost from hours to milliseconds, enabling JIT-powered inference in web browsers. ### Overhead nn-JIT.web enables JIT kernel optimization with minimal overhead. For example, the microbenchmark is a one-time effort executed in the offline stage, taking less than 1 second on a AMD Ryzen 5800H CPU according to our measurements. During JIT inference, kernels are sequentially pushed from the server to clients. The compiled kernel sizes range between 5\(\sim\)30KiB, which does not add a significant burden to the network load. To evaluate the newly arriving kernels, \begin{table} \begin{tabular}{|l|c|c|} \hline & **Compilation Latency (sec)** & **Inference Latency (ms)** \\ \hline Conventional Compilation & 5.8\(\sim\)62.9 & 76 \\ \hline Ours w/o opt. passes & 0.4\(\sim\)0.5 & 234 \\ \hline Ours w/ offset load/store & 0.5\(\sim\)0.6 & 88 \\ \hline Ours w/ offset load/store & \multirow{2}{*}{0.5\(\sim\)0.6} & \multirow{2}{*}{82} \\ \(\delta\) combined instruction & & \\ \hline Ours w/ offset load/store & \multirow{2}{*}{0.5\(\sim\)0.6} & \multirow{2}{*}{74} \\ \(\delta\) combined instruction & & \\ \(\delta\) load/store to variable & & \\ \hline \end{tabular} \end{table} Table 4. The compiling latency and achieved kernel latency of nn-JIT.web with different optimization passes. Figure 14. Model latency executed with TF.js, ORT-web, pre-tuned AutoTVM as well as nn-JIT.web. Figure 15. Model performance improvements with the JIT kernel optimization time on different devices. \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline \multirow{2}{*}{**Kernel Type (Size)**} & \multicolumn{2}{c|}{**AutoTVM**} & \multicolumn{2}{c|}{**nn-JIT.web**} \\ \cline{2-5} & WASM & WebGPU & WASM & WebGPU \\ \hline MatMul & \multirow{2}{*}{2,099,520} & \multirow{2}{*}{42,768,000} & \multirow{2}{*}{10\(\sim\)32} & \multirow{2}{*}{41} \\ (M=384,K=768,N=768) & & & & \\ \hline BatchMatMul & \multirow{2}{*}{2,694,384} & \multirow{2}{*}{74,131,200} & \multirow{2}{*}{10\(\sim\)32} & \multirow{2}{*}{30} \\ (B=12,M=384,K=384,N=64) & & & & \\ \hline \end{tabular} \end{table} Table 5. Kernel space of AutoTVM and nn-JIT.web. the client typically takes around 69-728ms for most kernels based on our assessment, which is nearly imperceptible. ## 8. Related Works **DL kernel generation.** Many works focus on automatically searching and generating optimal kernel implementations from a vast space. TVM (Krizhevsky et al., 2017) generates DL kernels based on the space of manual schedule templates and a learned cost model to search for the best kernel implementation. 
The subsequent work, Ansor (Sutskever et al., 2019), generates higher-performance DL kernels than TVM without manual schedule templates and reduces the average search time. Romou (Romou, 2019) supports new primitives to generate mobile-GPU-friendly DL kernels and accelerates kernel generation through hardware-aware search space pruning. Roller (Roller, 2019) generates DL kernels using an _r_Tile-construction-based approach, significantly reducing search time. Triton (Romou et al., 2019) is a DL kernel generator that extends LLVM-IR and adds an additional tile-level optimization pass, achieving high DL kernel performance. TLP (Romou et al., 2019) is a DL-based cost model that leverages schedule primitive features to speed up DL kernel search. However, none of these works address the online optimal kernel generation issue for in-browser DL inference. They all fail to provide the lightweight kernel space and compilation pipeline, which meet the requirements of JIT inference on the Web. **In-Browser DL inference.** The emergence of DL frameworks, such as TensorFlow.js (Tran et al., 2019) and ONNX Runtime Web (Romou et al., 2019), has significantly contributed to making in-browser DL inference a reality. TensorFlow.js, proposed by Google, is an open-source library that enables the deployment of DL models in browsers or on Node.js. It supports JavaScript, Wasm, WebGL, and WebGPU. ONNX Runtime Web, another open-source library proposed by Microsoft, facilitates in-browser DL inference by processing models in ONNX format. However, it only supports Wasm and WebGL backends. ## 9. Conclusion In this paper, we present nn-JIT.web, the first in-browser inference system that enables JIT optimized kernel generation, supporting Wasm and WebGPU. Our evaluation shows that nn-JIT.web accelerates inference by an average of 26.65\(\times\) compared to TF.js, ORT-Web, and AutoTVM, while maintaining minimal compilation overhead.
2309.05290
Solving Systems of Linear Equations: HHL from a Tensor Networks Perspective
We present an algorithm for solving systems of linear equations based on the HHL algorithm with a novel qudits methodology, a generalization of the qubits with more states, to reduce the number of gates to be applied and the amount of resources. Based on this idea, we perform a quantum-inspired version on tensor networks, taking advantage of their ability to perform non-unitary operations such as projection. The main novelty of this proposal is to perform a simulation as efficient as possible of the HHL algorithm in order to benchmark the algorithm steps according to its input parameters and the input matrix. Finally, we use this algorithm to obtain a solution for the harmonic oscillator with an external force, the forced damped oscillator and the 2D static heat equation differential equations.
Alejandro Mata Ali, Iñigo Perez Delgado, Marina Ristol Roura, Aitor Moreno Fdez. de Leceta, Sebastián V. Romero
2023-09-11T08:18:41Z
http://arxiv.org/abs/2309.05290v3
# Solving Systems of Linear Equations: HHL from a Tensor Networks Perspective ###### Abstract We present an algorithm for solving systems of linear equations based on the HHL algorithm with a novel qudits methodology, a generalization of the qubits with more states, to reduce the number of gates to be applied and the amount of resources. Based on this idea, we will perform a quantum-inspired version on tensor networks, taking advantage of their ability to perform non-unitary operations such as projection. Finally, we will use this algorithm to obtain a solution for the harmonic oscillator with an external force, the forced damped oscillator and the 2D static heat equation differential equations. ## 1 Introduction Solving linear equation systems \(A\vec{x}=\vec{b}\) is a fundamental problem in many areas of science and engineering. Classical methods for solving these equations, such as Gaussian elimination and LU decomposition [1], have been widely used and optimized for decades. However, as the size of the system grows, classical methods become computationally expensive and inefficient. One of the most efficient classical methods is the conjugate gradient method (CG)[2], which is of order \(O(Ns\kappa\log\left(\frac{1}{\epsilon}\right))\) for a matrix \(N\times N\) with a maximum of \(s\) no null elements per row, \(\kappa\equiv\frac{\lambda_{max}}{\lambda_{min}}\), being \(\lambda\) the eigenvalues of the matrix \(A\), and \(\epsilon\) the error. Quantum computers offer the potential to solve some of the most challenging problems more efficiently than classical computers. In particular, the HHL algorithm proposed by Harrow, Hassidim, and Lloyd in 2008 [3] is a method for solving linear equations that runs in polynomial time, where the polynomial depends logarithmically on the size of the system. However, it is intended for the calculation of expected values in \(O\left(\log(N)s^{2}\kappa^{2}/\epsilon\right)\), since it loses its advantage in the case of wanting to obtain the solution directly. Recently, there has been growing interest in using qudits and tensor networks [4] to implement different quantum algorithms. Qudits are generalized qubits, with more than 2 basis states, while tensor networks provide an efficient way to represent and manipulate high-dimensional quantum states with low entanglement and calculate quickly with them in classical computers [5][6]. In this paper, we propose a novel approach for solving linear equations using qudits and tensor networks. We demonstrate how this approach can be used to efficiently solve linear systems with a large number of variables, and we compare the performance of our approach with existing quantum and classical methods. Our results show that our approach can achieve a promising performance in computational efficiency for solving linear equations and simulate the HHL process without quantum noise. ## 2 HHL Algorithm Firstly, we will briefly introduce the standard HHL algorithm in qubits in order to better understand the algorithm we will formalize. This algorithm solves the system of linear equations \[A\vec{x}=\vec{b}, \tag{1}\] where \(A\) is an invertible matrix \(N\times N\), \(\vec{x}\) is the vector we want to obtain and \(\vec{b}\) is another vector, both with dimension \(N\). For this algorithm we will need \(n\) qubits so that \(2^{n}\geq N\) to encode the vector \(\vec{b}\), \(n_{c}\) clock qubits to encode the eigenvalues of \(A\) and one auxiliary qubit for the inversion. The whole process can be summarized in Fig. 1. 
We encode the state \(\vec{b}\) in the amplitudes of the \(n\) state qubits \[|b\rangle=\sum_{i=0}^{2^{n}-1}b_{i}|i\rangle=\sum_{i=0}^{2^{n}-1}\beta_{i}|u_{i}\rangle \tag{2}\] where \(b_{i}\) are the normalized components of vector \(\vec{b}\), \(|i\rangle\) are the binary computational bases and \(|u_{i}\rangle\) the eigenvector associated to the eigenvalue \(\lambda_{i}\) of \(A\). We use a operator \(b\) to initialize it. It is important to note that the difference between \(N\) and \(2^{n}\) will be covered by \(0\) in the vector and a matrix proportional to the identity in A, wasting resources. Moreover, we need a method to generate this state \(|b\rangle\). The second thing we will do is calculate the operator \[U=e^{iA^{\prime}t}, \tag{3}\] being \(A^{\prime}=A\) the matrix if it is hermitian, and \[A^{\prime}=\begin{pmatrix}0&A\\ A^{\dagger}&0\end{pmatrix} \tag{4}\] if it is not. In this case the problem is \[A^{\prime}\begin{pmatrix}0\\ \vec{x}\end{pmatrix}=\begin{pmatrix}\vec{b}\\ 0\end{pmatrix} \tag{5}\] We will assume \(U\) can be calculated and implemented efficiently. With this operator \(U\) implemented, we perform a Quantum Phase Estimation (QPE) to encode the eigenvalues of \(A^{\prime}\) in the clock qubits. Now we apply an inversion operator, which rotates the probability of the auxiliary qubit so that it is divided by the value of the eigenvalue encoded by the QPE. The next step is making a post-selection, keeping only the state if the auxiliary qubit outputs a \(1\), followed by an inverse QPE to clean the eigenvalue qubits. At the end, we have the \(\vec{x}\)-state normalized in the amplitudes, \[|x\rangle=\frac{1}{\mathcal{N}}\sum_{i=0}^{2^{n}-1}\frac{\beta_{i}}{\lambda_ {i}}|u_{i}\rangle=\frac{1}{\mathcal{N}}\sum_{i=0}^{2^{n}-1}x_{i}|i\rangle, \tag{6}\] with a normalization constant \(\mathcal{N}\) and omitting the ancilla and clock qubits. To obtain the full state, we have to first obtain the amplitude distribution, so the HHL is usually applied to obtain the expected value of some quantity with respect to the solution. The main problems of the algorithm are: 1. Large amount and waste of resources due to the difference between the size of the problem and the qubits to encode it. 2. Circuit depth and errors introduced by the QPE. 3. We do not get \(\vec{x}\) directly, and if we extract it, we get it with a sign ambiguity for each of its elements. 4. Preparing state \(\vec{b}\) may not be trivial, just like making the inversion operator or making the U-operator. ## 3 Qudit quantum algorithm To try to overcome the two first problems of the HHL, we will formalize a qudit version of the algorithm. To do so, we will assume that there are quantum computers which implement the basic qudit gates at our disposal are those of paper [7]. The first thing we will do is to encode the state \(\vec{b}\) in a single qudit. In case the qudit does not have enough states available, we will encode it in a number of qudits that allows us to encode Figure 1: Quantum HHL in qubits. it in a way analogous to the case of qubits. We will assume in the following that we only need one qudit with \(N\) base states in order to clearly explain the algorithm. Now we will need a way to simulate the \(U=e^{iA^{\prime}t}\) operator, which would depend on the particular case we are dealing with. With this, we will make the following circuit in Fig. 2. With a single qudit of dimension \(m=2^{n_{c}}\) we can do the QPE as in [7] and encode the eigenvalues in its basis states amplitudes. 
However, we could use more qudits. If we use 2 qudits to encode \(m\) values, each one will have to have dimension \(\sqrt{m}\). The inverter will be exactly the same as in the case of qubits, but instead of having a control-non-control series, we will have a control \(i\) that will apply the rotation gate on the ancilla if we have the value \(i\) in the qudit. We do the post-selection and if we get \(\left|1\right\rangle\), we perform the inverse QPE to clear the qudit of the eigenvalues. With this we can see that we reduce the number of SWAP gates needed and the QPE is performed with a lower number of gates. Also, we waste less resources, as we can better adjust the dimensionality of the quantum system with respect to the equation to be solved. Moreover, in the best case scenario we only need two qudits and one qubit. Still, we could do more to solve the other problems, so let's try to tackle them with the quantum-inspired technique of tensor networks, avoiding the gate errors from the quantum devices and extract directly the \(\vec{x}\). ## 4 Tensor Networks Algorithm We are going to convert the qudit circuit into a tensor network, so that it returns the vector \(\vec{x}\) directly. This method will be a tensor network HHL (TN HHL). Since in tensor networks we do not need the normalization, we will not normalize the state \(\left|b\right\rangle\). As it is not normalized, the result state will not be normalized either, so we will not have to rescale it. Moreover, the state can be prepared exactly in a single operation, defining the node \(\vec{b}\). The QPE is performed contracting the uniform superposition state with the Quantum Fourier (QFT) Transform in the QPE, so we have a matrix \(H_{m}\) with dimension \(m\times m\) for \(m\) eigenvalues with elements \[\left(H_{m}\right)_{ab}=e^{2\pi i\frac{ab}{m}} \tag{7}\] without normalization. The inverter will be a non-unitary operator whose non-zero elements are \[\left(\mathrm{inv}_{m}\right)_{i,j}=\left\{\begin{array}{ll}\nicefrac{{1}}{{ i}}&\text{ if }i=j\neq 0\text{ and if }i\leq\frac{m}{2}\\ \frac{1}{i-m}&\text{ if }i=j\text{ and if }i>\frac{m}{2}\end{array}\right. \tag{8}\] In order to be able to encode negative eigenvalues due to the cyclic property of the imaginary exponential. If we want more positive or more negative eigenvalues, we must change the proportion of \(i\)-values which are translated as positive or negative eigenvalues. The phase kickback operators can also be obtained exactly from \(U\). This tensor \(P\) would be \[\left(P_{m}\right)_{i,j,k}=\left(U^{j}\right)_{i,k};\left(P_{m}^{-1}\right)_ {i,j,k}=\left(\left(U^{-1}\right)^{j}\right)_{k,i}. \tag{9}\] This tensors are contracted through their \(j\) index with the \(H_{m}\) and \(H_{m}^{-1}\) tensors for doing the QPE. With these tensors, we can get our result by contraction the tensor network in Fig. 3 a) \[\sum_{a,b,c,d,e,f}b_{a}P_{abc}H_{bd}^{-1}\mathrm{inv}_{de}H_{ef}P_{cfi}^{-1}= \frac{x_{i}m^{2}}{t} \tag{10}\] with \(b,c,d,e,f\in\left[0,m-1\right]\) and \(a,c,i\in\left[0,N-1\right]\). The most efficient contraction for this tensor network has complexity \(O\left(N^{2}m\right)+O\left(m^{3}\right)+O\left(Nm^{2}\right)\). However, the construction of the \(P_{m}\) tensor requires \(O(N^{3}m)\). Figure 2: Quantum HHL in qudits. We could avoid this problem defining a tensor \[(W_{m})_{ijk}=\left(\vec{b}U^{i-j}\right)_{k} \tag{11}\] with a cost \(O(N^{2}m)\). So, we have to contract the tensor network in Fig. 
3 b) \[\sum_{a,b,c,d}W_{abi}H_{ac}^{-1}\mathrm{inv}_{cd}H_{db}=\frac{x_{i}m^{2}}{t} \tag{12}\] which could be contracted in \(O(m^{3})+O(Nm^{2})\). However, if we precalculate the contraction of \(H_{m}\), \(H_{m}^{-1}\) and \(inv_{m}\) to use it every time, we could avoid the \(O(m^{3})\). Finally, the complexity of the method is \(O(N^{2}m)+O(m^{2}N)+O(m^{3})\). The memory cost is \(O(m^{2}N)+O(N^{2})\), being the first term associated with the \(W_{m}\) tensor and the second term associated with the \(U\) matrix. We can also compute the inverse of \(A^{\prime}\) just by erasing the \(b\) node from Fig. 3 a) and doing the contraction, with a cost \(O(m^{3})+O(N^{3}m)+O(N^{2}m^{2})\). ## 5 Comparison of advantages and disadvantages We will compare the advantages and disadvantages of this algorithm in tensor networks against the conjugate gradient and quantum HHL in Table 1. We assume \(m=O(\kappa)\). \(m\) will also depend on the error bounds we want. Greater \(m\) implies lower error bounds if we adjust properly the \(t\). However, we have not being able to determine this scaling, but probably is a similar scaling as in the original HHL. Notice we have not made us of the properties of the sparse matrices, as in the CG or the original HHL. ### TN HHL vs CG We can see our algorithm is slower than the CG. However, the TN HHL can invert the matrix \(A\). Moreover, both algorithms benefit from efficient matrix product algorithms. Also, if we have the eigenvalues, in the TN HHL we can change the \(H_{m}\) and \(W_{m}\) for wasting less resources using \(m=n_{\lambda}\), being \(n_{\lambda}\) the number of eigenvalues. ### TN HHL vs HHL The advantages of this TN HHL method over traditional quantum HHL are: 1. We do not waste resources, the QFT is a simple matrix and we do not need SWAP gates. 2. We can obtain the solution directly or some of its elements and we can get the right signs from our solutions. Also, we can obtain the inverse matrix. 3. We do not have to generate a complicated circuit for the initialization of the state and the time evolution of the phase kickback. Nor do we have the errors introduced by quantum gates. 4. We don't use the post-selection. However, when it comes to computing the expected value, it is indeed significantly less efficient in complexity. ## 6 Simulation We will test the effectiveness of the method when performing numerical simulations. We will solve the forced harmonic oscillator, the forced damped oscillator and the 2D static heat equation with sources. ### Forced harmonic oscillator The differential equation we want to solve will be \[\frac{d^{2}x}{dt^{2}}+\frac{k}{m}x=F(t) \tag{13}\] \[x_{0}=x(t=0);\qquad x_{T}=x(t=T)\] where \(F(t)\) is the time-dependent external force. For the experiments we will use a force \(F(t)=C\sin(\nu t)\). Figure 3: Tensor network equivalent to HHL. a) Original way. b) Efficient way. We use the discretization with \(n\) time-steps, being \(\Omega=-2+\frac{k}{m}(\Delta t)^{2}\) and \(F_{j}=(\Delta t)^{2}C\sin(\nu j\Delta t)\). \[\begin{pmatrix}\Omega&1&0&\cdots&0&0\\ 1&\Omega&1&\cdots&0&0\\ 0&1&\Omega&\cdots&0&0\\ \vdots&\vdots&\vdots&\ddots&\vdots&\vdots\\ 0&0&0&\cdots&1&\Omega\end{pmatrix}\begin{pmatrix}x_{1}\\ x_{2}\\ x_{3}\\ \vdots\\ x_{n}\end{pmatrix}=\begin{pmatrix}F_{1}-x_{0}\\ F_{2}\\ F_{3}\\ F_{n}-x_{T}\end{pmatrix} \tag{14}\] The result of inverting this system gives us the result in Fig. 4. As hyperparameters of the method we use \(m=2000\) and \(t=6000\). 
The root mean square error of our tensor network from the exact inversion was \(1.8\times 10^{-5}\) and took 1.52 s to run, compared with the 1.9 ms of the Tensorflow method. ### Forced damped oscillator The differential equation we want to solve will be \[\frac{d^{2}x}{dt^{2}}+\gamma\frac{dx}{dt}+\frac{k}{m}x =F(t) \tag{15}\] \[x_{0}=x(t=0); x_{T} =x(t=T)\] where \(F(t)\) is the time-dependent external force and \(\gamma\) is the damp coefficient. As in Ssec. 6.1, for the experiments we will use a force \(F(t)=C\sin(\nu t)\). We use the discretization with n time-steps \[\begin{pmatrix}\beta_{0}&\beta_{+}&0&\cdots&0&0\\ \beta_{-}&\beta_{0}&\beta_{+}&\cdots&0&0\\ 0&\beta_{-}&\beta_{0}&\cdots&0&0\\ \vdots&\vdots&\vdots&\ddots&\vdots&\vdots\\ 0&0&0&\cdots&\beta_{-}&\beta_{0}\end{pmatrix}\begin{pmatrix}x_{1}\\ x_{2}\\ x_{3}\\ \vdots\\ x_{n}\end{pmatrix}=\begin{pmatrix}F_{1}-\beta_{-}x_{0}\\ F_{2}\\ F_{3}\\ F_{3}\\ F_{n}-\beta_{+}x_{T}\end{pmatrix} \tag{16}\] where \(\beta_{-}=1-\gamma\frac{\Delta t}{2}\), \(\beta_{+}=1+\gamma\frac{\Delta t}{2}\) and \(\beta_{0}=-2+\frac{k}{m}(\Delta t)^{2}\). This matrix is not hermitian, so we apply (4) and solve (5). The result of inverting this matrix gives us the result in Fig. 5. As hyperparameters of the method we use \(m=2000\) and \(t=1.1\times 10^{4}\). The root mean square error of our tensor network from the exact inversion was \(5.7\times 10^{-3}\) and took 10.6 s to run, compared with the 2.6 ms of the Tensorflow method. \begin{table} \begin{tabular}{|l|l|l|l|} \hline **Algorithm** & **Inversion**\(A^{-1}\) & **Solution**\(\vec{x}=A^{-1}\vec{b}\) & **Expectation value**\(\vec{x}^{T}M\vec{x}\) \\ \hline **CG** & - & \(O(Ns\kappa\log(1/\epsilon))\) & \(O(N\max(s\kappa\log(1/\epsilon),s^{\prime}))\) \\ \hline **HHL** & - & - & \(O(\log(N)s^{2}\kappa^{2}/\epsilon)\) \\ \hline **TN HHL** & \(O(N^{2}\kappa^{2})+O(N^{3}\kappa)+O(\kappa^{3})\) & \(O(N\kappa^{2})+O(N^{2}\kappa)+O(\kappa^{3})\) & \(O(\max(N^{2}\kappa,N\kappa^{2},\kappa^{3},Ns^{\prime}))\) \\ \hline \end{tabular} \end{table} Table 1: Computational times to invert a \(N\times N\) matrix \(A\), obtain the solution of \(\vec{x}=\vec{b}\) and compute an expected value \(\langle x|M|x\rangle\). \(s\) is the maximum number of non-zero elements per row of A and \(s^{\prime}\) is the maximum number of non-zero elements per row of M. Figure 4: Solving the forced harmonic oscillator system with equation (14). In blue the inversion performed with Tensorflow and in red the inversion performed with the tensor network. Figure 5: Solving the forced damped oscillator system with equation (16). In blue the inversion performed with Tensorflow and in red the inversion performed with the tensor network. ### Static two dimensional heat equation with sources The differential equation we want to solve will be \[\kappa\left(\frac{d^{2}u}{dx^{2}}+\frac{d^{2}u}{dy^{2}}\right)=-S(x,y) \tag{17}\] \[u_{x1}=u(0,y);\quad u_{x2}=u(L_{x},y)\] \[u_{y}=u(x,0);\quad u_{y2}=u(x,L_{y})\] where \(S(x,y)\) is the position-dependent external source. For the experiments we will use a source \(S(x,y)=10\sin(2\pi\frac{xy}{L_{x}L_{y}})\). We will use the discretization \[u_{j+1,k}+u_{j-1,k}+u_{j,k+1}+u_{j,k-1}-4u_{jk}=-(\Delta x)^{2}S_{jk} \tag{18}\] We convert the 2-dimensional space into a line, create the matrix and obtain the following result in Figs. 6 and 7. As hyperparameters we use \(m=2000\) and \(t=100\). 
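To make the contraction in Eqs. (7)-(12) easier to follow, here is a minimal NumPy/SciPy sketch that builds a small instance of the discretized oscillator system of Eq. (14) and solves it with the tensor-network contraction. It is a toy illustration under our own choice of sizes (not the paper's \(m=2000\), \(t=6000\)), with zero boundary values, and the global constant is fixed by rescaling against a direct solve; accuracy improves as \(m\) grows and \(t\) is matched to the spectrum of \(A\).

```python
import numpy as np
from scipy.linalg import expm

# Toy sizes chosen for readability: grid points, clock dimension, evolution time.
n, m, t = 8, 256, 1.0
dt, k_over_m, C, nu = 1.0, 0.8, 1.0, 1.0

# Tridiagonal system of Eq. (14): Omega on the diagonal, 1 on the off-diagonals.
Omega = -2.0 + k_over_m * dt**2
A = np.diag(np.full(n, Omega)) + np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
b = dt**2 * C * np.sin(nu * dt * np.arange(1, n + 1))   # boundary values x_0 = x_T = 0

U = expm(1j * A * t)                                     # A is symmetric, so A' = A (Eq. (3))

# Eq. (7): unnormalized Fourier matrix H_m and its conjugate H_m^{-1}.
idx = np.arange(m)
H = np.exp(2j * np.pi * np.outer(idx, idx) / m)
H_inv = np.conj(H)

# Eq. (8): diagonal inversion operator; the upper half encodes negative eigenvalues.
diag = np.zeros(m)
diag[1 : m // 2 + 1] = 1.0 / np.arange(1, m // 2 + 1)
diag[m // 2 + 1 :] = 1.0 / (np.arange(m // 2 + 1, m) - m)
inv = np.diag(diag)

# Eq. (11): W[i, j, :] = b @ U^(i-j), built from successive powers of U and U^{-1}.
Uinv = U.conj().T
w = {0: b.astype(complex)}
for p in range(1, m):
    w[p] = w[p - 1] @ U
    w[-p] = w[-(p - 1)] @ Uinv
W = np.array([[w[i - j] for j in range(m)] for i in range(m)])   # shape (m, m, n)

# Eq. (12): contract the clock part first, then W; the result is proportional to x.
K = H_inv @ inv @ H
x_tn = np.real(np.einsum("abi,ab->i", W, K) * t / m**2)

# Fix the global constant by least-squares rescaling against a direct solve.
x_ref = np.linalg.solve(A, b)
scale = np.dot(x_tn, x_ref) / np.dot(x_tn, x_tn)
print("relative deviation:", np.linalg.norm(scale * x_tn - x_ref) / np.linalg.norm(x_ref))
```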
The root mean square error of our tensor network from the exact inversion was \(1\times 10^{-4}\) and took 37.9 s to run, compared with the 11 ms of the Tensorflow method. ## 7 Conclusions We have seen that our algorithm offers both a way to invert matrices, to solve linear equations and to perform numerical simulations based on it. We have also observed that its scaling is remarkably good with the size of the matrix to be inverted, while it can be realized on classical computers and accelerated with GPUs. An advantage of this method is that it allows to observe the best possible theoretical result due to a quantum HHL, as it simulates what should happen without gate errors, post-selection problems or inaccuracies in state creation. However, we have observed that the effective computational speed is remarkably low compared to methods already implemented in libraries such as Tensorflow or Numpy, mainly due to the creation time of the tensors. The continuation of this line of research could be to find a way to improve the efficiency of the method in general by taking advantage of the characteristics of the tensors used, to improve the parallelisation of the calculations, to specialize it on tridiagonal matrices or extend it to complex eigenvalues. ## Acknowledgement The research leading to this paper has received funding from the Q4Real project (Quantum Computing for Real Industries), HAZITEK 2022, no. ZE-2022/00033.
2309.12934
TOPFORMER: Topology-Aware Authorship Attribution of Deepfake Texts with Diverse Writing Styles
Recent advances in Large Language Models (LLMs) have enabled the generation of open-ended high-quality texts, that are non-trivial to distinguish from human-written texts. We refer to such LLM-generated texts as deepfake texts. There are currently over 72K text generation models in the huggingface model repo. As such, users with malicious intent can easily use these open-sourced LLMs to generate harmful texts and dis/misinformation at scale. To mitigate this problem, a computational method to determine if a given text is a deepfake text or not is desired--i.e., Turing Test (TT). In particular, in this work, we investigate the more general version of the problem, known as Authorship Attribution (AA), in a multi-class setting--i.e., not only determining if a given text is a deepfake text or not but also being able to pinpoint which LLM is the author. We propose TopFormer to improve existing AA solutions by capturing more linguistic patterns in deepfake texts by including a Topological Data Analysis (TDA) layer in the Transformer-based model. We show the benefits of having a TDA layer when dealing with imbalanced, and multi-style datasets, by extracting TDA features from the reshaped $pooled\_output$ of our backbone as input. This Transformer-based model captures contextual representations (i.e., semantic and syntactic linguistic features), while TDA captures the shape and structure of data (i.e., linguistic structures). Finally, TopFormer, outperforms all baselines in all 3 datasets, achieving up to 7\% increase in Macro F1 score. Our code and datasets are available at: https://github.com/AdaUchendu/topformer
Adaku Uchendu, Thai Le, Dongwon Lee
2023-09-22T15:32:49Z
http://arxiv.org/abs/2309.12934v3
# TopRoBERTa: Topology-Aware Authorship Attribution ###### Abstract Recent advances in Large Language Models (LLMs) have enabled the generation of open-ended high-quality texts, that are non-trivial to distinguish from human-written texts. We refer to such LLM-generated texts as _deepfake texts_. There are currently over 11K text generation models in the huggingface model repo. As such, users with malicious intent can easily use these open-sourced LLMs to generate harmful texts and misinformation at scale. To mitigate this problem, a computational method to determine if a given text is a deepfake text or not is desired-i.e., Turing Test (TT). In particular, in this work, we investigate the more general version of the problem, known as _Authorship Attribution (AA)_, in a multi-class setting-i.e., not only determining if a given text is a deepfake text or not but also being able to pinpoint which LLM is the author. We propose **TopRoBERTa** to improve existing AA solutions by capturing more linguistic patterns in deepfake texts by including a Topological Data Analysis (TDA) layer in the RoBERTa model. We show the benefits of having a TDA layer when dealing with noisy, imbalanced, and heterogeneous datasets, by extracting TDA features from the reshaped _pooled_output_ of RoBERTa as input. We use RoBERTa to capture contextual representations (i.e., semantic and syntactic linguistic features), while using TDA to capture the shape and structure of data (i.e., linguistic structures). Finally, **TopRoBERTa**, outperforms the vanilla RoBERTa in 2/3 datasets, achieving up to 7% increase in Macro F1 score. deepfake text, authorship attribution, Topological Data Analysis (TDA), RoBERTa, BERT ## 1 Introduction Recent Large Language Models (LLMs) now have a trillion parameters and are able to generate more human-like texts. These larger models pose a few difficulties, the more glaring being that reproducibility is very difficult and expensive. This allows LLMs to remain black-box, with their limitations maliciously exploited. Some of these limitations include: toxic and hate speech generation (Sheng et al., 2021), plagiarism (Lee et al., 2023), and memorization of sensitive information (Carlini et al., 2021) which allow LLMs to be easily exploited to generate authentic-looking and convincing misinformation (Uchendu et al., 2021). The first step to mitigate these limitations of LLMs starts with our ability to determine whether a text is generated by a particular LLM (**deepfake text**) or not. This task is known as _Turing Test (TT)_(Uchendu et al., 2021). Further, in this work, generalizing the TT problem further, we are interested in not only determining if a text is a deepfake text or not but also pinpointing which LLM is the author, so-called the **Authorship Attribution** (AA) problem (Uchendu et al., 2023). See the illustration of AA in Figure 1. The AA problem in a multi-class setting has not been as rigorously studied as the TT problem has. Naturally, AA is substantially more challenging than TT. However, with the ever-increasing number of popular LLMs that people can use, we believe that it is no longer sufficient to just ask the binary question of "is this written by human or machine?" Solving the AA problem enables more fine-grained and targeted defense tools for users and platforms (e.g., a detector specifically trained and designed to detect ChatGPT deepfake texts). 
AA solutions can also help researchers and policymakers understand the capacity and limitations of different LLMs, and study which LLMs are more vulnerable to misuse and abuse in what context (e.g., political propaganda, terrorism recruitment, phishing). To distinguish deepfake texts from human-written Figure 1: Illustration of the Authorship Attribution (AA) problem with multiple authors - human and many LLM authors. texts, researchers have proposed several solutions, both utilizing supervised and unsupervised machine learning (Uchendu et al., 2023). In the supervised learning setting, researchers have developed _stylometric_, _deep learning_, and _hybrid_ solutions for detecting deepfake texts. Further, in the unsupervised learning setting, _statistical_ solutions such as watermarking approaches (Kirchenbauer et al., 2023) have been developed. Intuitively, deep learning and hybrid-based techniques achieve the best performance in terms of accuracy. However, in terms of adversarial robustness, statistical-based techniques are the most robust models, with hybrid models taking second/first place in adversarial robustness (Uchendu et al., 2023). To that end, we propose a hybrid solution which is an ensemble of statistical and deep learning-based techniques to get both benefits - good performance and robustness. We hypothesize that if our model has adversarial robustness properties, it could also be noise-resistant and thus be robust to out-of-distribution and imbalanced datasets. Thus, we propose **TopRoBERTa**, an ensemble of RoBERTa (Liu et al., 2019) and Topological Data Analysis (TDA) techniques. First, RoBERTa is used as the base model because it still remains the SOTA for extracting features from text and also because it has over 20K more tokens in its vocabulary than BERT. We apply TDA techniques to the task of deepfake text detection because it is able to capture the true shape of data, in spite of noise (Port et al., 2018, 2019; Munch, 2017; Turkes et al., 2022). To achieve accurate _deepfake text detection_, we need sufficient data, however due to the expense and restrictive access issues with SOTA LLMs, it is difficult to get sufficiently sized datasets in this field. Consequently, most datasets that exist are grossly imbalanced because they reflect the real world, where there are more human-written texts than deepfake texts. These issues tend to make deepfake text datasets noisy, making TDA a suitable application for _deepfake text detection_. ## 2 Related Work ### Authorship Attribution of Deepfake Texts Since 2019, there have been several efforts to mitigate the malicious uses of LLM-generated texts (deepfake texts) by way of detection. As this task is non-trivial, different techniques have been attempted and can be split into - _stylometric_, _deep learning_, _statistical-based_, and _hybrid classifiers_ (ensemble of two or more of the previous types), as well as _human-based approaches_(Uchendu et al., 2023). Furthermore, this task which is modeled as an Authorship Attribution (AA) task of human vs. either one or several deepfake text generators is studied as either a binary (_Turing Test_) or multi-class problem. The most popular is the binary class problem as there is more data for it than for multi-class. Thus, for the _stylometric_ solution, only two techniques have been proposed - Linguistic model (Uchendu et al., 2020) and Feature-based detector (Frohling and Zubiaga, 2021). 
Next for the _deep learning_ solutions, researchers usually fine-tune BERT models and BERT variants (Uchendu et al., 2021; Ippolito et al., 2020). However, as fine-tuning can be computationally expensive and requires a lot of data which does not always exist, some researchers have proposed an unsupervised technique - _statistical-based_ solutions (Gehrmann et al., 2019; Mitchell et al., 2023). However, while deep learning-based classifiers perform very well, they are highly susceptible to adversarial perturbations (Uchendu et al., 2023). Therefore, researchers propose _hybrid-based_ solutions that fuse both _deep learning_ and _statistical-based_ solutions to improve performance and robustness (Kushnareva et al., 2021; Liu et al., 2022). Lastly, for the _human-based_ approaches to improve human detection of deepfake texts, researchers have utilized two main techniques - training (Gehrmann et al., 2019; Uchendu et al., 2023) and not training (Zellers et al., 2019; Ippolito et al., 2020). ### TDA applications in NLP Topological Data Analysis (TDA) is a technique used to quantify shape and structure in data. Due to this unique ability to obtain the true shape of data, in spite of noise, it has been implemented in machine learning problems. More recently, the NLP field has seen a recent uptake in TDA applications due to its benefits. TDA has been previously applied to detecting children and adolescent writing (Zhu, 2013), law documents analysis (Savle et al., 2019), movie genre analysis (Doshi and Zadrozny, 2018), and explanation of syntactic structures of different language families (Port et al., 2018, 2019). More recently, TDA techniques have been applied to the deepfake text detection problem (Kushnareva et al., 2021). However, they collect the statistical summaries of the TDA representations of BERT attention weights, represented as a directed and undirected graph. Using these representations, they classify deepfake texts with Logistic regression for the binary task - human vs. deepfake. Therefore, for our technique, we train an end-to-end transformer-based model - BERT & RoBERTa with a TDA layer using the BERT or RoBERTa representations as the fine-tuning process continues. Next, (Perez and Reinauer, 2022) uses a similar technique as (Kushnareva et al., 2021) to show that TDA can improve the robustness of BERT. Finally, TDA has also been applied to representing documents as story trees (Haghigh atkhah et al., 2022), detecting contradictions in texts (Wu et al., 2022), examining the linguistic acceptability judgments of texts (Cherniavskii et al., 2022), finding loops in logic (Tymochko et al., 2020), speech processing (Tulchinskii et al., 2022), finding loops in logic (Tymochko et al., 2020), detecting fraudulent papers by examining their titles and abstracts (Tymochko et al., 2021), and extracting dialogue terms with Transfer learning (Vukovic et al., 2022). ## 3 TDA features Topology is defined as "the study of geometric properties and spatial relations unaffected by the continuous change of shape or size of figures," according to the Oxford Dictionary. Topological Data Analysis (TDA) is a "collection of powerful tools that have the ability to quantify shape and structure in data"1. There are two main TDA techniques - persistent homology and mapper. We will only focus on persistent homology. Persistent homology is a TDA technique used to find topological patterns of the data (Tymochko et al., 2020). 
Footnote 1: [https://www.indicative.com/resource/topological-data-analysis/](https://www.indicative.com/resource/topological-data-analysis/) This technique takes in the data and represents it as a point cloud, such that each point is enclosed by a circle. For this analysis, the aim is to extract the persistent features of the data using simplicial complexes. These formations extract features which are holes in different dimensions, represented as _betti numbers_ (\(\beta_{d}\), \(d\)-dimension). The holes in 0-Dimension (\(\beta_{0}\)), 1-Dimension (\(\beta_{1}\)) and 2-Dimension (\(\beta_{2}\)), are called connected components, loops/tunnels, and voids, respectively. Finally, the TDA features recorded are the \(birth\) (formation of holes), \(death\) (deformation or the closing of holes), and persistence features in different dimensions. Persistence is defined as the length of time it took a feature to die \((death-birth)\). This means that if a point touches another point then one of the points/features has died. The \(death\) is recorded with the radii value at which the points overlap. In addition, due to all the shifts and changes, from the 1-Dimension and upwards, some features may appear - a new hole, and this feature is recorded as a \(birth\). The \(birth\) feature is the radii at which it appeared. ## 4 TopRoBERTa: Topology-Aware Deepfake Text Detector To build this TDA-infused RoBERTa model, we focus on the four layers needed to convert vanilla-RoBERTa to Topological-RoBERTa - (1) pre-trained weights of the RoBERTa model, (2) dropout layer with probability \(p\)=0.3, (3) Topological layer for calculating, and (4) Linear transformation layer. See Figure 2 for a flow chart describing the architecture of TopRoBERTa with the 4 layers. To train our end-to-end Topological-RoBERTa model, we first fine-tune RoBERTa-base model. As we fine-tune the model, we take the \(pooled\_output\) which is a \(1\times 768\) vector containing the latent representations of RoBERTa. We find that RoBERTa weights are richer than BERT because it is a robustly trained BERT model and has over 20K more vocabulary size than BERT. These latent representations capture word-level and sentence-level relationships, thus extracting contextual representations (Liu et al., 2019). Due to the contextual representations captured, RoBERTa weights essentially extract semantic and syntactic linguistic features (Tenney et al., 2019; Reif et al., 2019). Next, we pass this \(pooled\_output\) which is a Figure 2: Flowchart of the Topological classification algorithm. The Red frame indicates our methodology and technique to transform a Vanilla Transformer-based model to a Topological Transformer-based model. \(1\times 768\) vector into a regularization layer, called \(dropout\). This \(dropout\) layer drops a pre-defined percentage (\(30\%\) in our case) of our \(pooled\_output\) to make our model more generalizable and less likely to overfit. This yields an output \(dropout(pooled\_output)\) with the same dimensions as the input - \(1\times 768\) vector. Before, we use our regularized output - \(dropout(pooled\_output)\) as input for the Topological layer2, we first reshape it from 1D \(\rightarrow\) 2D. TDA requires at least a 2D matrix to construct simplicial complexes that persistent homology technique uses to extract the \(birth\) and \(death\) of TDA features (connected components, specifically) (Munch, 2017). 
This is because the simplicial complexes can only be extracted from the point cloud (which is a scatterplot of the dataset) and to get this point cloud we need a dataset with 2-coordinates. We include this Topological layer in RoBERTa because: (1) RoBERTa has richer latent representations than BERT (Liu et al., 2019); (2) TDA is robust to noise, out-of-distribution, and limited data (Turkes et al., 2022); (3) TDA is able to capture more features that other feature extraction techniques cannot capture (Port et al., 2018, 2019); and (4) TDA extracts the true structure of data points (Munch, 2017). To convert the regularized weights from 1D \(\rightarrow\) 2D is non-trivial because we need to get the best shape to obtain useful TDA features which are stable and uniform (vectors of the same length) across all input of a particular dataset. Therefore, we try different 2D sizes and find that the closer it is to a square matrix, the more stable the TDA features are. Stable in this context means that for every input, the TDA layer outputs the same number of features in a vector. Therefore, we convert the \(1\times 768\) vector to a \(24\times 32\) matrix, since it is the closest to a square matrix as 768 is not a perfect square. We also find through experimentation that when row \(>\) column, TDA features are unstable. Unstable for our task means that the Topological layer output different vector sizes of TDA features given the input. Also, sometimes the feature vector can only contain \(nan\) values based on the input which means that it was unable to extract TDA features. Thus, we find that \(pooled\_output\) must be reshaped such that row \(\leq\) column and \(24\times 32\) satisfies this claim. Finally, using the 2D matrix as input to our Topological layer, we obtain the 0-Dimension features following the process illustrated in Section 3. This yields a \(23\times 3\). These 3 columns represent the \(birth\) time, \(death\) time, and persistence features, respectively. Persistence is defined as the length of time it took a feature to die. Next, this 2D matrix is flattened to a vector size of \(1\times 69\) so it can be easily concatenated with the 1D \(dropout(pooled\_output)\). See Figure 3 for an illustration of how the TDA features are extracted. We interpret these TDA features as capturing linguistic structure, as it is capturing the structure and shape of textual data. Footnote 2: [https://github.com/aidos-lab/pytorch-topological/tree/main](https://github.com/aidos-lab/pytorch-topological/tree/main) Lastly, we concatenate the regularized RoBERTa weights (\(dropout(pooled\_output)\)) of size \(1\times 768\) with the TDA features of size \(1\times 69\). This yields a vector of size \(1\times 837\). Thus, this \(1\times 837\) vector serves as input for the final layer of feature transformation, the Linear layer. The Linear layer's latent space increases from 768 to 837 in order to take the concatenated vector as input. TDA increases the latent space by 69 dimensions. However, we observe that unlike other TDA-based Transformer classifiers (Kushnareva et al., 2021; Perez and Reinauer, 2022), which use attention weights in which the size is dependent on the length of text, our TDA technique increases Figure 3: Illustration of how we extract the TDA features using the reshaped RoBERTa regularized weights as input. First, we reshape the regularized \(pooled\_output\) from \(1\times 768\) dimensions to \(24\times 32\) and use this 2D matrix as input for the Topological layer. 
The Topological layer treats this 2D matrix as a point cloud plot and extracts TDA features (\(birth\)\(\&\)\(death\)). Next, these TDA features are plotted in a figure known as _Persistent Diagram_, where the \(birth\) features are on the \(x\)-axis and \(death\) features are on the \(y\)-axis. While we plot the features from the 0-Dimension (connected components) and 1-Dimension (loops), only 0-Dimension features are used for our classification task. the latent space minimally. Finally, the output of this Linear layer is a vector that is the size of \(batch\_size\times number\_of\_labels\). Thus, if we have \(batch\_size=16\) and \(number\_of\_labels=20\), we obtain a vector of size: \(16\times 20\). Finally, we pass this vector as input into the softmax layer for multi-class classification. Finally, TDA features are compatible with non-TDA features (McGuire et al., 2023), making it a suitable technique for extracting subtle linguistic patterns that distinguish deepfake texts from human-written ones. Thus, **TopRoBERTa** captures semantic, syntactic, and structural linguistic features. ## 5 Experiments ### Datasets #### 5.1.1 TuringBench TuringBench (Uchendu et al., 2021) dataset is a news (mostly politics) dataset comprised of both human-written and deepfake texts. It has 20 labels - 1 human & 19 deepfake text generators. We cleaned the dataset further and skewed the dataset to maintain a more realistic real-world scenario. This skewed version contains 100% of the human examples and only 10% of each of the 19 deepfake text examples. See Table 1 for the train, validation, and test splits of the TuringBench dataset. #### 5.1.2 SynSciPass SynSciPass (Rosati, 2022) dataset is comprised of scientific articles, authored, by both human and deepfake authors. In addition to being grossly imbalanced (i.e., 79K examples for human & 600-850 examples for deepfake labels), the SynSciPass dataset is noisy. This is because, unlike the TuringBench dataset where all the deepfake texts are generated with open-ended text-generators like GPT-3, SynSciPass's deepfake labels are generated with 3 types of text-generators. These are open-ended generators, translators like Google translate (e.g. English \(\rightarrow\) Spanish \(\rightarrow\) English), and paraphrasers like SCIgen and Pegasus. Using these different text-generation techniques introduces a noisiness in this dataset. We use the 12 labels - 1 human & 11 deepfake text-generators. However, due to the different NLG methods employed, this data also has 4 heterogeneous labels - human, generators, translators, and paraphrasers. See Table 1 for the train, validation, and test splits of SynSciPass dataset. #### 5.1.3 M4 This is a Multi-lingual Multi-domain dataset comprised of news, Wikipedia, WikiHow, Reddit, arXiv, PeerRead and Web question answering (QA) styles of writing (Wang et al., 2023). However, to fairly evaluate our models which are English mono-lingual models, we only used the English dataset. This made the dataset less of a multi-domain dataset since most of the English texts have similar forms of writing. The deepfake texts are generated by ChatGPT (GPT-3.5-turbo), GPT-3.5-davinci, Flan-T5, Dolly-V2, and Cohere LLMs. Since the goal is to evaluate the robustness of our TDA models on skewed datasets, we took only 10% of the deepfake labels vs. 100% human. See Table 1 for the train, validation, and test splits. 
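Before turning to the training setup, we give a minimal, self-contained sketch of the TDA-augmented classification head described in Section 4 (PyTorch + SciPy). It is illustrative only: the paper extracts 0-dimensional features with the pytorch-topological package inside an end-to-end fine-tuned RoBERTa, whereas this sketch omits the encoder, uses a non-differentiable single-linkage/MST routine for the 0-dimensional persistence, and all names are ours.

```python
import numpy as np
import torch
import torch.nn as nn
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import pdist, squareform

def zero_dim_persistence(points: np.ndarray) -> np.ndarray:
    """0-dimensional persistence of a point cloud via single linkage / MST: every connected
    component is born at 0 and dies at an MST edge length, giving (n_points - 1) finite
    (birth, death, persistence) triples."""
    dists = squareform(pdist(points))
    mst = minimum_spanning_tree(dists).toarray()
    deaths = np.sort(mst[mst > 0])                     # 23 finite deaths for 24 points
    births = np.zeros_like(deaths)
    return np.stack([births, deaths, deaths - births], axis=1)   # shape (23, 3)

class TopoClassificationHead(nn.Module):
    """Sketch of the TDA-augmented head: dropout(pooled_output) -> reshape to 24x32 ->
    0-dim persistence (23x3 -> 69) -> concat with the 768-dim vector -> Linear(837, labels)."""

    def __init__(self, num_labels: int, hidden: int = 768, rows: int = 24, cols: int = 32):
        super().__init__()
        self.dropout = nn.Dropout(p=0.3)
        self.rows, self.cols = rows, cols
        tda_dim = (rows - 1) * 3                        # 23 * 3 = 69
        self.classifier = nn.Linear(hidden + tda_dim, num_labels)   # 837 -> num_labels

    def forward(self, pooled_output: torch.Tensor) -> torch.Tensor:
        h = self.dropout(pooled_output)                 # (batch, 768)
        tda_feats = []
        for row in h.detach().cpu().numpy():            # non-differentiable TDA branch
            cloud = row.reshape(self.rows, self.cols)    # 24 points in 32 dimensions
            tda_feats.append(zero_dim_persistence(cloud).reshape(-1))   # (69,)
        tda = torch.as_tensor(np.stack(tda_feats), dtype=h.dtype, device=h.device)
        return self.classifier(torch.cat([h, tda], dim=1))   # logits, (batch, num_labels)

# Example: plug the head on top of any encoder's pooled_output.
head = TopoClassificationHead(num_labels=20)
logits = head(torch.randn(4, 768))
print(logits.shape)   # torch.Size([4, 20])
```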
### Authorship Attribution We train all the models with the same hyperparameters & parameters - dropout probability \(p=0.3\), learning rate of \(2e-5\), cross-entropy loss, batch size of 16 and 5 epochs. Also, tested with other loss functions (contrastive loss, topological loss, and Gaussian loss) and found cross-entropy to be the best. See models: * **BERT:** We use BERT-base cased pre-trained model. * **TopBERT:** We add a Topological layer to the BERT model described above and follow the process described in Section 4 and Figure 2. * **Gaussian-BERT:** A BERT-base model with a Gaussian layer to add Gaussian noise to the weights. The hypothesis is that if TopBERT achieves superior performance randomly then adding a Gaussian layer should have a similar effect. * **RoBERTa:** We use RoBERTa-base pre-trained model. * **TopRoBERTa:** We add a Topological layer to the RoBERTa model and follow the process described in Section 4 and Figure 2. * **Gaussian-RoBERTa:** This is similar to the _Gaussian-BERT_ architecture, except we use RoBERTa-base instead of BERT-base. These AA models are evaluated with established evaluation metrics for machine learning - Precision, Recall, Accuracy, Macro F1 score, Weighted F1 score. However, since the datasets are imbalanced, we focus on the Macro F1 score which we use to calculate the percentage gains for the classification task. ## 6 Results Our proposed models - TopBERT and TopRoBERTa are evaluated on their ability \begin{table} \begin{tabular}{c|c|c|c|c} \hline \hline **Dataset** & **Train** & **Validation** & **Test** & **\# Labels** \\ \hline TuringBench & 16K & 5.4K & 2.7K & 20 \\ SynSciPass & 87K & 10K & 10K & 12 \\ M4 & 15K & 4.4K & 2.2K & 6 \\ \hline \hline \end{tabular} \end{table} Table 1: Dataset summary statistics to more accurately attribute human- vs. deepfake-authored articles to their true authors. We specifically used an imbalance dataset - TuringBench, a noisy & imbalanced dataset - SynSciPass, and an imbalanced & multi-domain dataset - M4. From Tables 2, 3 and 4, we observe that TopRoBERTa excels in the AA task when the data is more erratic like the noisy and multi-domain datasets - SynSciPass and M4. TopRoBERTa outperforms RoBERTa by 1% and 4% for SynSciPass and M4, respectively. While underperforming by 1% for the TuringBench dataset. However, TopBERT outperforms BERT by 1% and 4% for the TuringBench and SynSciPass, respectively. But underperforms BERT by 0.3% for the SynSciPass dataset. While TopRoBERTa underperforms RoBERTa by 30% for TuringBench, outperforms RoBERTa by 2% for SynSciPass, and outperforms by 0.3% for M4. Also, TopBERT is observed to underperform BERT by 1%, 3%, and perform about the same as BERT for TuringBench, SynSciPass, and M4, respectively. Furthermore, we observe that Gaussian-BERT underperforms BERT for all 3 datasets. However, Gaussian-RoBERTa underperforms RoBERTa for TuringBench and outperforms RoBERTa for SynSciPass and M4. ## 7 Further Analysis into SynSciPass TopRBERTa outperforms other models by a larger margin when evaluated on the SynSciPass dataset. Since TopRoBERTa's superiority is more evident in this dataset, we run further analysis to discover when TopRoBERTa works well and when it does not since SynSciPass contains heterogeneous labels. Heterogeneous labels refer to a situation where the labels in classification tasks are diverse and encompass multiple distinct categories or types. Our hypothesis is that the TDA layer is most advantageous when labels are not only noisy but heterogeneous as well. 
Thus, we perform 2 different types of authorship attribution: (1) AA of heterogeneous labels, which are the 4 labels - Human vs. Generators vs. Translators vs. Paraphrasers; (2) AA of each heterogeneous subset (making the labels homogeneous). We group each classification sub-task by the 3 different types of deepfake text generators - human vs. generators has 6 labels, human vs. translators has 4 labels, and human vs. paraphrasers has 4 labels. For task (1), due to the noisy and heterogeneous nature of the dataset with the 4 labels, we observe up to a 7% increase in performance from TopRoBERTa in Table 5. However, TopBERT performs similarly to BERT for this task, further confirming the superiority of TopRoBERTa. For task (2), the results confirm our hypothesis: since the labels are more homogeneous, the improvements of TopRoBERTa are marginal, and even in the cases where it underperforms, the drop is marginal as well. \begin{table} \begin{tabular}{c|c|c|c|c|c|c} \hline **MODEL** & **Precision** & **Recall** & **Accuracy** & **Weighted F1** & **Macro F1** & **\% Gain** \\ \hline BERT & 0.7247 & 0.7202 & 0.8115 & 0.8083 & 0.7152 & - \\ Gaussian-BERT & 0.6987 & 0.7013 & 0.7964 & 0.7947 & 0.6975 & 1\% \(\downarrow\) \\ TopBERT & 0.7394 & 0.7358 & 0.8219 & 0.8177 & 0.7300 & 1\% \(\uparrow\) \\ \hline RoBERTa & **0.7555** & **0.7505** & **0.8333** & **0.8275** & **0.7433** & - \\ Gaussian-RoBERTa & 0.6500 & 0.6487 & 0.7609 & 0.7588 & 0.6452 & 9\% \(\downarrow\) \\ TopRoBERTa & 0.7499 & 0.7452 & 0.8283 & 0.8200 & 0.7313 & 1\% \(\downarrow\) \\ \hline \end{tabular} \end{table} Table 2: TuringBench Authorship Attribution results. The best performance is **boldfaced** and the second best is underlined. The percentage gains reported in the _% Gain_ column are calculated from the Macro F1. \begin{table} \begin{tabular}{c|c|c|c|c|c|c} \hline **MODEL** & **Precision** & **Recall** & **Accuracy** & **Weighted F1** & **Macro F1** & **\% Gain** \\ \hline BERT & 0.8585 & 0.8148 & 0.9791 & 0.9785 & 0.8327 & - \\ Gaussian-BERT & 0.8404 & 0.7709 & 0.9745 & 0.9735 & 0.7933 & 4\% \(\downarrow\) \\ TopBERT & 0.8682 & 0.8298 & 0.9807 & 0.9802 & 0.8471 & 2\% \(\uparrow\) \\ \hline RoBERTa & 0.9012 & 0.8554 & 0.9853 & 0.9846 & 0.8719 & - \\ Gaussian-RoBERTa & 0.8929 & 0.8809 & 0.9872 & 0.9870 & 0.8847 & 1\% \(\uparrow\) \\ TopRoBERTa & **0.9177** & **0.8978** & **0.9892** & **0.9890** & **0.9058** & 4\% \(\uparrow\) \\ \hline \end{tabular} \end{table} Table 3: SynSciPass Authorship Attribution results. The best performance is **boldfaced** and the second best is underlined. The percentage gains reported in the _% Gain_ column are calculated from the Macro F1. \begin{table} \begin{tabular}{c|c|c|c|c|c|c} \hline **MODEL** & **Precision** & **Recall** & **Accuracy** & **Weighted F1** & **Macro F1** & **\% Gain** \\ \hline BERT & 0.8628 & 0.9082 & 0.9347 & 0.9362 & 0.8814 & - \\ Gaussian-BERT & 0.8341 & 0.8955 & 0.9164 & 0.9198 & 0.8580 & 2\% \(\downarrow\) \\ TopBERT & 0.8548 & 0.9052 & 0.9307 & 0.9319 & 0.8764 & 0.5\% \(\downarrow\) \\ \hline RoBERTa & 0.9657 & 0.9681 & 0.9852 & 0.9853 & 0.9656 & - \\ Gaussian-RoBERTa & 0.9644 & 0.9710 & 0.9844 & 0.9845 & 0.9673 & 0.2\% \(\uparrow\) \\ TopRoBERTa & **0.9745** & **0.9762** & **0.9893** & **0.9893** & **0.9748** & 1\% \(\uparrow\) \\ \hline \end{tabular} \end{table} Table 4: M4 Authorship Attribution results. The best performance is **boldfaced** and the second best is underlined. The percentage gains reported in the _% Gain_ column are calculated from the Macro F1.
## 8 Ablation study We compare our **TopRoBERTa** with another technique for adding a TDA layer. We call this other technique **TopRoBERTa_attn** and, for this analysis, we call our **TopRoBERTa** model **TopRoBERTa_pool**. The same naming conventions are applied for **TopBERT_pool** and **TopBERT_attn**. TopRoBERTa_attn uses the same architecture as TopRoBERTa_pool, except we use the attention weights, which are of size \(Max\_length\times 768\), as input for the Topological layer. This technique is inspired by [23, 14], who use the directed and undirected graphs of the attention weights. Thus, instead of increasing the computational cost by building graphs with the attention weights, we use the attention weights directly as input for extracting the TDA features. This technique provides a fairer comparison to TopRoBERTa_pool. Finally, we compare the following models: TopBERT_attn, TopBERT_pool, TopRoBERTa_attn, and TopRoBERTa_pool. All models use the same hyper-parameters and parameters reported in Section 5.2, except for TopRoBERTa_attn on TuringBench, for which we had to change the learning rate to \(1e-5\) and the batch size to \(8\) to achieve more stable results. ## 9 Discussion See below for the observed strengths and weaknesses of adding a Topological layer to a Transformer-based model: ### Improvement with TDA is not random As RoBERTa is a black-box model, to confirm that TopRoBERTa's performance is not due to training with the right kind of "noise," we compare with Gaussian models - Gaussian-BERT & Gaussian-RoBERTa. The hypothesis is that if TopRoBERTa's performance is due to noise, then Gaussian models trained with noise should perform similarly. We observe from Tables 2, 3, and 4 that Gaussian-BERT underperforms BERT for all 3 datasets, whereas Gaussian-RoBERTa underperforms RoBERTa for TuringBench but outperforms it on the SynSciPass and M4 datasets, which are the more heterogeneous datasets. Meanwhile, TopBERT and TopRoBERTa each outperform their base models on 2 out of 3 datasets. Thus, based on the performance of TopRoBERTa and TopBERT, their improvement is not random. \begin{table} \begin{tabular}{c c c c c c c} \hline \hline **MODEL** & **Precision** & **Recall** & **Accuracy** & **Weighted F1** & **Macro F1** & **\% Gains** \\ \hline BERT & 0.9576 & 0.9264 & 0.9895 & 0.9894 & 0.9414 & - \\ TopBERT & 0.9616 & 0.9221 & 0.9895 & 0.9893 & 0.9412 & 0.02\% \(\downarrow\) \\ \hline RoBERTa & 0.9601 & 0.8767 & 0.9869 & 0.9857 & 0.9064 & - \\ TopRoBERTa & **0.9865** & **0.9638** & **0.9960** & **0.9959** & **0.9746** & 7\% \(\uparrow\) \\ \hline \hline \end{tabular} \end{table} Table 5: Heterogeneous labels - SynSciPass Authorship Attribution results with the 4 types of Text Generators as authors - Human vs. Generators vs. Translators vs. Paraphrasers. The best performance is **boldfaced**. The percentage gains reported in the _% Gain_ column are calculated from the Macro F1. \begin{table} \begin{tabular}{c c c c c c c c c} \hline \hline **Human vs.** & **\# Labels** & **MODEL** & **Precision** & **Recall** & **Accuracy** & **Weighted F1** & **Macro F1** & **\% Gains** \\ \hline \multirow{2}{*}{Generators} & \multirow{2}{*}{6} & RoBERTa & **0.9247** & 0.9215 & 0.9951 & 0.9950 & 0.9223 & - \\ & & TopRoBERTa & 0.9229 & **0.9266** & **0.9952** & **0.9952** & **0.9231** & 0.08\% \(\uparrow\) \\ \hline \multirow{2}{*}{Translators} & \multirow{2}{*}{4} & RoBERTa & 0.8713 & 0.8699 & 0.9954 & 0.9954 & 0.8697 & - \\ & & TopRoBERTa & **0.8846** & **0.8718** & **0.9956** & **0.9956** & **0.8761** & 1\% \(\uparrow\) \\ \hline \multirow{2}{*}{Paraphrasers} & \multirow{2}{*}{4} & RoBERTa & **0.9959** & 0.945 & **0.9980** & **0.9980** & **0.9680** & - \\ & & TopRoBERTa & 0.9796 & **0.9500** & 0.9978 & 0.9978 & 0.9641 & 0.4\% \(\downarrow\) \\ \hline \hline \end{tabular} \end{table} Table 6: Homogeneous labels - SynSciPass Authorship Attribution results for Human vs. only authors in each category. For instance, Human vs. Paraphrasers has 4 labels - Human vs. Spinbot vs. 2 Pegasus-finetuned models. The best performance is **boldfaced**. The percentage gains reported in the _% Gain_ column are calculated from the Macro F1. Figure 4: PCA plots from RoBERTa and TopRoBERTa training embeddings for the SynSciPass dataset. The **black** clusters are the human labels and the other clusters are the deepfake text labels. ### TopRoBERTa performs best on Heterogeneous data To evaluate the robustness of TopRoBERTa to noisy data with heterogeneous labels, we run further experiments to compare vanilla Transformer-based models with Topological Transformer-based models. For this task, we use the 4 labels of the SynSciPass dataset - Humans vs. Generators vs. Translators vs. Paraphrasers. The observed 7% increase in performance from TopRoBERTa in Table 5 suggests that the TDA layer is advantageous when data is erratic. TopBERT achieves similar performance to BERT for this task, further confirming the superiority of TopRoBERTa. Furthermore, we extract the PCA features of the RoBERTa and TopRoBERTa embeddings of the SynSciPass training set with 12 labels in Figure 4. We observe more distinct clusters and a different shape in TopRoBERTa's plot. This further suggests that for this heterogeneous dataset, TDA is able to extract additional features for more accurate classification of labels. Next, we investigate the case when labels are homogeneous. We find that TopRoBERTa achieves a minimal increase over the base model in all but the _Human vs. Paraphrasers_ subtask (-0.4%). This suggests that TDA works best when labels are heterogeneous. ### Reshaped \(pooled\_output\) is a better TDA input than attention weights Most researchers that apply TDA to NLP tasks commonly use word2vec embeddings or attention maps as input to extract TDA features (Perez and Reinauer, 2022; Kushnareva et al., 2021; Wu et al., 2022; Haghighatkhah et al., 2022; Doshi and Zadrozny, 2018; Savle et al., 2019). However, we use reshaped pooled outputs because, while using the attention weights as input is more intuitive, the \(pooled\_output\) contains richer features for classification than the attention weights. Furthermore, in agreement with (Kushnareva et al., 2021), we also find that using attention weights to extract TDA features is unstable. This could potentially be the reason why previous studies first constructed directed and undirected graphs with the attention weights prior to TDA feature extraction to encourage stability (Perez and Reinauer, 2022; Kushnareva et al., 2021). Nevertheless, these techniques significantly increase computational costs. The attention weights are large matrices, such as \(400\times 768\) for TuringBench and \(512\times 768\) for the SynSciPass and M4 datasets.
After TDA feature extraction, the flattened vectors are \(1\times 1197\) for TuringBench and \(1\times 1533\) for the SynSciPass and M4 datasets. These increase the latent space of the Linear layer by a large margin - from 768 to 1965 for TuringBench and to 2301 for the SynSciPass & M4 datasets. Additionally, to maintain consistent dimensions for the TDA features, we employ normalization techniques when the TDA features for some articles differ from the majority. In contrast, our technique increases the dimensions of the Linear layer from 768 to 837 for all datasets without requiring normalization. The inconsistent performance of TopBERT_attn and TopRoBERTa_attn further supports the superiority of using the \(pooled\_output\) as input for obtaining TDA features compared to the attention weights. ## 10 Conclusion & Future Work We propose a novel solution to accurately attribute the authorship of deepfake vs. human texts - **TopRoBERTa**. This technique entails adding a Topological layer to the RoBERTa-base model, such that the Linear layer's input is a concatenation of the RoBERTa regularized weights and the TDA features. We find that our novel technique, TopRoBERTa, performs best when data is noisy, grossly imbalanced, and heterogeneous. We observe that TopRoBERTa outperforms all other models in 2/3 datasets (i.e., the heterogeneous and multi-domain datasets). Lastly, in the future, we will scrutinize our models under stricter constraints, such as evaluation on adversarial robustness (known as _Authorship Obfuscation_) and on out-of-distribution datasets, such as low-resource languages, multi-lingual, imbalanced, and insufficiently sized datasets. \begin{table} \begin{tabular}{c|c|c|c|c|c|c} \hline \hline & \multicolumn{2}{c}{**TuringBench**} & \multicolumn{2}{c}{**SynSciPass**} & \multicolumn{2}{c}{**M4**} \\ \hline **MODEL** & **Macro F1** & **\% Gain** & **Macro F1** & **\% Gain** & **Macro F1** & **\% Gain** \\ \hline BERT & 0.7152 & - & 0.8327 & - & 0.8814 & - \\ TopBERT\_attn & 0.7084 & 1\(\%\downarrow\) & 0.8001 & 3\(\%\downarrow\) & 0.8817 & 0.03\(\%\uparrow\) \\ TopBERT\_pool & 0.7300 & 1\(\%\uparrow\) & 0.8471 & 2\(\%\uparrow\) & 0.8764 & 0.5\(\%\downarrow\) \\ \hline RoBERTa & 0.7433 & - & 0.8719 & - & 0.9656 & - \\ TopRoBERTa\_attn & 0.6642 & 8\(\%\downarrow\) & 0.8923 & 2\(\%\uparrow\) & 0.9739 & 0.85\(\%\uparrow\) \\ TopRoBERTa\_pool & 0.7313 & 1\(\%\downarrow\) & 0.9058 & 4\(\%\uparrow\) & 0.9748 & 1\(\%\uparrow\) \\ \hline \hline \end{tabular} \end{table} Table 7: Ablation study results.
2309.11583
Electromagnetic Theory with Quantum Principal Bundles
The aim of this paper is to formulate a {\it non--commutative geometrical} version of the classical electromagnetic field theory in the vacuum with the Moyal--Weyl algebra as the space--time by using the theory of quantum principal bundles and quantum principal connections. As a result we will present the correct Maxwell equations in the vacuum of the model, in which we can appreciate the existence of electric and magnetic charges and currents. Finally, in the fourth section we are going to present a {\it mathematical model} for which there are instantons that are not solutions of the corresponding Yang--Mills equation.
Gustavo Amilcar Saldaña Moncada
2023-09-20T18:44:27Z
http://arxiv.org/abs/2309.11583v5
# Electromagnetic theory with quantum principal bundles ###### Abstract. The aim of this paper is to formulate a _non-commutative geometrical_ version of the Electromagnetic Theory in the Moyal-Weyl algebra by using the theory of quantum principal bundles and quantum principal connections. As a result, we prove the existence of electric and magnetic charges and currents in the Maxwell equations in the vacuum. Of course, these completely new terms have to be incorporated in the Lagrangian of the free electromagnetic field system. In addition, and in order to show the generality of the theory of quantum principal bundles in non-commutative gauge theory, we develop an Electromagnetic Theory formulation using two gauge fields, in accordance with the idea of Cabibbo-Ferrari. To accomplish our purposes, we will _dualize_ the geometrical formulation of electromagnetism. _MSC 2010:_ 46L87, 58B99. _Keywords:_ Quantum principal bundles, quantum principal connections, electromagnetic theory, gauge theory. ## 1. Introduction The relationship between Geometry and Physics is well known, particularly when we deal with the Electromagnetic Theory [1]. Indeed, one of the most general starting points of this theory in the vacuum is to consider it as a Yang-Mills theory for the trivial principal \(U(1)\)-bundle over the Minkowski space-time: \(\mathbb{R}^{4}\) with the metric \(\eta=\operatorname{diag}(-1,1,1,1)\)[1]. In Appendix A we give a brief summary of this well-known development. In particular, we comment on how the (second) Bianchi identity (see Equation A.3) gives rise to the Gauss Law for magnetism (Equation A.8) and the Faraday equation (Equation A.9); while critical points of the Yang-Mills functional (see Equation A.4), a functional that measures the square norm of the curvature of a principal connection, give rise to the Gauss Law (Equation A.10) and the Ampere equation (Equation A.11). It is worth remarking that in Equation A.4 (or equivalently, Equations A.10, A.11) we are looking for critical points of the Yang-Mills functional, so in principle, not every principal connection (or equivalently, not every 1-form potential) satisfies it. Equations A.10, A.11 are usually called _dynamical equations_ because they govern the dynamics of the electric and magnetic fields. On the other hand, Equation A.3 (or equivalently, Equations A.8, A.9) is always satisfied for every potential 1-form. This equation can be thought of as a condition imposed on the electromagnetic field by the Minkowski space-time and the group \(U(1)\); so Equations A.8, A.9 are usually called _geometrical (or topological) equations_, reflecting the fact that they come from the Bianchi identity, a geometrical equation. This paper aims to recreate the geometrical formulation of the Electromagnetic Theory shown in Appendix A by using quantum principal bundles in a concrete non-commutative space-time: the Moyal-Weyl algebra. Of course, throughout the text we will show the link between our formulation and the common results on this subject in the literature, for example in [5, 6, 7, 8, 9, 10, 11]. The importance of this paper lies in this link: following our formulation we are able to recreate previous results and obtain more (see Remarks 3.6, 3.7 and Equation 3.43), as well as open the opportunity of using quantum principal bundles in \(SU(2)\)- or \(SU(3)\)-non-commutative gauge theories.
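For orientation (this reminder is ours and is not reproduced from Appendix A, which is omitted in this excerpt), in standard tensor notation the two classical statements referred to above read \[dF^{\omega}=0\;\Longleftrightarrow\;\partial_{[\alpha}F_{\mu\nu]}=0,\] which encodes the Gauss Law for magnetism and the Faraday equation, and \[\delta_{A}\int_{\mathbb{R}^{4}}F_{\mu\nu}F^{\mu\nu}\,\mathrm{dvol}=0\;\Longrightarrow\;\partial_{\mu}F^{\mu\nu}=0,\] which encodes the Gauss Law and the Ampere equation in the vacuum.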
In particular, we are going to derive the correct electromagnetic geometrical equations by using the non-commutative Bianchi identity, and the correct electromagnetic dynamical equations by identifying critical points of the non-commutative Yang-Mills functional, which is now a functional that measures the square norm of the curvature of a quantum principal connection. With that, we will present the correct non-commutative Maxwell equations as well as the correct Lagrangian of the system and its quantization. An interesting result of this formulation is the fact that the four-potential produces electric charges and currents in the vacuum, and _magnetic charges and currents in the vacuum too_. This turns the photon field into a kind of _dyon gauge field_. Clearly, the current term has to be incorporated in the Lagrangian of the electromagnetic field system, as we present in Equation 3.43, and this is the most important result of this research because, at this moment, the current term is not taken into account in the literature [5, 6, 7, 8, 9, 10, 11]. Moreover, in order to illustrate once more how powerful a tool the theory of quantum principal bundles is in non-commutative gauge theories (the reader should remember the link between them in the _classical_ case [1]), and based on the fact that in the non-commutative case the photon field produces electric and magnetic charges/currents, we are going to develop a toy model of the Electromagnetic Theory in the Moyal-Weyl algebra with two gauge boson fields: one associated to electric charge and current densities, the photon field, and the other one associated to magnetic charge and current densities, a kind of _magnetic photon field_. In addition, having two gauge boson fields in the Electromagnetic Theory leads to full symmetry between the electric field and the magnetic field. The idea of using two gauge fields in the Electromagnetic Theory comes from the formalism of Cabibbo-Ferrari [12, 13]. Nevertheless, in the _classical_ context, this leads to a theory with \(U(1)\times U(1)\) as gauge group. In the framework of quantum principal bundles it is possible to describe two gauge fields with only one \(U(1)\) (by using a \(2D\)-dimensional differential calculi of \(U(1)\)), which seems to be the correct group of symmetries of the Electromagnetic Theory. This paper breaks down into 6 sections. In the second one we build the quantum principal \(U(1)\)-bundle used throughout the whole work; in particular, we present the two differential calculi of \(U(1)\) used for our two electromagnetic models. The third section is the heart of this paper: in it we present the non-commutative Maxwell equations for our two models by using only the geometry of the spaces. It is worth mentioning that in this section we also give a covariant formulation of both models, as well as the correct Lagrangian densities. Following the _classical_ case, in the fourth section we quantize our two models. The last section contains some concluding comments and, as mentioned before, Appendix \(A\) gives a brief summary of the geometrical formulation of the Electromagnetic Theory. To accomplish our purpose, we are going to use M. Durdevich's theory of quantum principal bundles and quantum principal connections ([14, 15, 16, 17]) and the general formulation of Yang-Mills equations presented in [18, 19], which has been tested in other quantum bundles with several exciting and interesting results [20, 21].
It is worth remarking that the theory shown in [18, 19] was formulated in the most general way; it was not created for the particular case of the quantum bundle used in this paper. Of course, we will continue with the notation presented in those papers. Finally, it is worth mentioning that we have chosen Durdevich's framework to develop this paper instead of [4, 22, 23] because of its purely geometrical/algebraic formulation and because of its generality in terms of differential calculi and quantum connections: this theory allows one to work with almost every differential calculus on the quantum spaces and with any quantum principal connection. This generality in terms of differential calculi and connections makes for a richer theory (at least in the context of Yang-Mills equations, in which connections play the main role). In Non-Commutative Geometry it is common to use the word _quantum_ as a synonym of _non-commutative_, and we will sometimes use it in this paper. On the other hand, in Physics it is common to use the word _non-commutative_ to denote theories with non-abelian groups, which is not the case of this paper; so we trust the reader will not confuse these terms. ## 2. The Quantum Principal \(U(1)\)-Bundle In this section we present the quantum bundle on which we will work. Since we are not interested in changing the topology of the spaces nor the bundle, we have to consider a trivial quantum principal \(U(1)\)-bundle. The general theory of this kind of bundle can be checked in [15, 17]. ### A Non-Commutative Minkowski Space-Time Let us start by considering the Minkowski space-time \((\mathbb{R}^{4},\eta=\operatorname{diag}(1,-1,-1,-1))\) and its space of complex-valued smooth functions \(C^{\infty}_{\mathbb{C}}(\mathbb{R}^{4})\). By choosing a \(4\times 4\) antisymmetric matrix \((\theta^{\mu\nu})\in M_{4}(\mathbb{R})\), it is possible to take \(C^{\infty}_{\mathbb{C}}(\mathbb{R}^{4})[[\theta^{\mu\nu}]]\), the formal power series in \(\theta^{\mu\nu}\) with coefficients in \(C^{\infty}_{\mathbb{C}}(\mathbb{R}^{4})\). Finally, it is possible to apply a \(\theta^{\mu\nu}\)-twist on \(C^{\infty}_{\mathbb{C}}(\mathbb{R}^{4})[[\theta^{\mu\nu}]]\) by defining a new product: for every \(f\), \(h\in C^{\infty}_{\mathbb{C}}(\mathbb{R}^{4})[[\theta^{\mu\nu}]]\) we define \[f\cdot h:=m\circ\exp\left(\frac{i\,\theta^{\mu\nu}}{2}\frac{\partial}{ \partial x^{\mu}}\otimes\frac{\partial}{\partial x^{\nu}}\right)(f\otimes h), \tag{2.1}\] where \(m\) denotes the usual product on \(C^{\infty}_{\mathbb{C}}(\mathbb{R}^{4})[[\theta^{\mu\nu}]]\). Explicitly we have \[f\cdot h:=\left(fh+\sum_{\mu,\nu}\frac{i\,\theta^{\mu\nu}}{2}\frac{\partial f }{\partial x^{\mu}}\frac{\partial h}{\partial x^{\nu}}+\sum_{\mu,\nu,\alpha,\beta}\frac{i^{2}\,\theta^{\mu\nu}\,\theta^{\alpha\beta}}{8}\frac{\partial^{2} f}{\partial x^{\alpha}\,\partial x^{\mu}}\frac{\partial^{2} h}{\partial x^{\beta}\,\partial x^{\nu}}+\cdots\right).\] With this new product, \(C^{\infty}_{\mathbb{C}}(\mathbb{R}^{4})[[\theta^{\mu\nu}]]\) is a non-commutative unital \(*\)-algebra ([6]), where the unit is \(\mathbb{1}(x)=1\) for all \(x\) and the \(*\) operation is complex conjugation. It is worth mentioning that \[[x^{\mu},x^{\nu}]:=x^{\mu}\cdot x^{\nu}-x^{\nu}\cdot x^{\mu}=i\,\theta^{\mu \nu}\,\mathbb{1}. \tag{2.2}\] This non-commutative unital \(*\)-algebra receives the name of the Moyal-Weyl algebra ([6]) and it will be our quantum space-time, which we will denote by \(M\).
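As a small illustrative check (ours, not part of the paper), the commutation relation 2.2 can be verified directly from the expansion of the Moyal product 2.1: for the coordinate functions all second and higher derivatives vanish, so the first-order term in \(\theta^{\mu\nu}\) is exact. The sketch below does this with sympy in two coordinates for brevity; the symbol names are ours.

```python
# Minimal sketch (ours, not from the paper): verify [x^0, x^1] = i*theta^{01}*1
# using the Moyal product of Eq. 2.1 truncated at first order in theta, which
# is exact for coordinate functions since their second derivatives vanish.
import sympy as sp

x0, x1 = sp.symbols('x0 x1', real=True)
theta = sp.Symbol('theta', real=True)   # theta^{01} = -theta^{10} = theta
TH = ((0, theta), (-theta, 0))          # antisymmetric matrix (theta^{mu nu})
COORDS = (x0, x1)

def moyal(f, g):
    """Moyal product f . g up to first order in theta."""
    result = f * g
    for mu, xm in enumerate(COORDS):
        for nu, xn in enumerate(COORDS):
            result += sp.I * sp.Rational(1, 2) * TH[mu][nu] * sp.diff(f, xm) * sp.diff(g, xn)
    return sp.expand(result)

commutator = moyal(x0, x1) - moyal(x1, x0)
print(commutator)   # prints I*theta, i.e. [x^0, x^1] = i*theta^{01}, as in Eq. 2.2
```

The same truncation applied to general smooth functions reproduces the first-order term of the explicit expansion displayed above.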
The next step is to extend the Moyal product \(\cdot\) to the space of differential forms [6]. In fact, let us take \(\Omega^{\bullet}_{\mathbb{C}}(\mathbb{R}^{4})[[\theta^{\mu\nu}]]\) the space of formal power series in \(\theta^{\mu\nu}\) with coefficient in the algebra of complex-valued differential forms. Now Equation 2.1 is easily extended to \(\Omega^{\bullet}_{\mathbb{C}}(\mathbb{R}^{4})[[\theta^{\mu\nu}]]\) by considering the action of \(\frac{\partial}{\partial x^{\gamma}}\) on forms by means of the Lie derivative. We will denoted by \(\Omega^{\bullet}(M)\) the space \(\Omega^{\bullet}_{\mathbb{C}}(\mathbb{R}^{4})[[\theta^{\mu\nu}]]\) with the Moyal product \(\cdot\) extended. This graded differential \(*\)-algebra is going to play the role of _quantum differential forms_ on our quantum space-time \(M\). In accordance with [19], in order to apply its theory there are certain structures that previously we have to define on \(\Omega^{\bullet}(M)\). Let us define the following left quantum Pseudo-Riemmanian metric on \(M\) \[\{\langle-,-\rangle^{k}_{\mathrm{L}}:\Omega^{k}(M)\times\Omega^{k}(M) \longrightarrow M\mid k=0,1,2,3,4\} \tag{2.3}\] such that \(\langle f,h\rangle^{0}_{\mathrm{L}}=f\cdot h^{*}\) and for \(k=1,2,3,4\) we extend the usual metric of the de Rham differential algebra of the Minkowski space-time. For example \[\langle\sum_{\mu=0}^{3}f_{\mu}\,dx^{\mu},\sum_{\nu=0}^{3}h_{\nu}\,dx^{\nu} \rangle^{1}_{\mathrm{L}}=\sum_{\mu,\nu=0}^{3}\eta^{\mu\,\nu}f_{\mu}\cdot h^{* }_{\nu}=f_{0}\cdot h^{*}_{0}-f_{1}\cdot h^{*}_{1}-f_{2}\cdot h^{*}_{2}-f_{3} \cdot h^{*}_{3}\] and \[\langle f\,\mathrm{dvol},h\,\mathrm{dvol}\rangle^{4}_{\mathrm{L}}=f\cdot h^{*},\] where \[\mathrm{dvol}:=dx^{0}\wedge dx^{1}\wedge dx^{2}\wedge dx^{3} \tag{2.4}\] is the volume form. Furthermore, by postulating the orthogonality between quantum forms of different degrees, we can induce Pseudo-Riemannian structures in the whole graded space; so we will not use superscripts anymore. By taking the usual integral operator on the Moyal-Weyl algebra \[\int_{M}\mathrm{dvol}\] we can define the left quantum Hodge pseudo inner product1 Footnote 1: Whenever the integrals converge and taking into a count the corresponding equivalence classes. \[\langle-|-\rangle_{\mathrm{L}}:=\int_{M}\langle-,-\rangle_{\mathrm{L}}\, \mathrm{dvol}. 
\tag{2.5}\] Furthermore, and according to [19], the left quantum Hodge operator is the antilinear \(M\)-isomorphism \[\star_{\mathrm{L}}:\Omega^{k}(M)\longrightarrow\Omega^{4-k}(M) \tag{2.6}\] that satisfies \[\hat{\mu}\cdot(\star_{\mathrm{L}}\mu)=\langle\hat{\mu},\mu\rangle_{\mathrm{L} }\,\mathrm{dvol}.\] Explicitly, for our case in the canonical basis we have \[\star_{\mathrm{L}}\mathbb{1}=\mathrm{dvol},\ \star_{\mathrm{L}}\mathrm{dvol}=- \mathbb{1}, \tag{2.7}\] \[\begin{array}{c}\star_{\rm L}dx^{0}=dx^{1}\wedge dx^{2}\wedge dx^{3},\quad \star_{\rm L}dx^{1}=dx^{0}\wedge dx^{2}\wedge dx^{3},\\ \star_{\rm L}dx^{2}=-dx^{0}\wedge dx^{1}\wedge dx^{3},\quad\star_{\rm L}dx^{3}= dx^{0}\wedge dx^{1}\wedge dx^{2},\\ \\ \star_{\rm L}dx^{0}\wedge dx^{1}=-dx^{2}\wedge dx^{3},\quad\star_{\rm L}dx^{0} \wedge dx^{2}=dx^{1}\wedge dx^{3},\quad\star_{\rm L}dx^{0}\wedge dx^{3}=-dx^{1} \wedge dx^{2},\\ \\ \star_{\rm L}dx^{1}\wedge dx^{2}=dx^{0}\wedge dx^{3},\quad\star_{\rm L}dx^{1} \wedge dx^{3}=-dx^{0}\wedge dx^{2},\quad\star_{\rm L}dx^{2}\wedge dx^{3}=dx^{0 }\wedge dx^{1},\\ \\ \star_{\rm L}dx^{1}\wedge dx^{2}\wedge dx^{3}=dx^{0},\quad\star_{\rm L}dx^{0} \wedge dx^{2}\wedge dx^{3}=dx^{1},\\ \\ \star_{\rm L}dx^{0}\wedge dx^{1}\wedge dx^{3}=-dx^{2},\quad\star_{\rm L}dx^{0} \wedge dx^{1}\wedge dx^{2}=dx^{3}.\end{array} \tag{2.8}\] Here \[d:\Omega^{k}(M)\longrightarrow\Omega^{k+1}(M)\] is the differential operator and of course, in all cases we have \(\star_{\rm L}^{2}=(-1)^{k(4-k)+1}\,{\rm id}\). Finally the left quantum codifferential is defined as the operator \[d^{\star_{\rm L}}:=(-1)^{k+1}\,\star_{\rm L}^{-1}\,\circ d\,\circ\,\star_{\rm L }:\Omega^{k+1}(M)\longrightarrow\Omega^{k}(M), \tag{2.11}\] and it is the formal adjoint operator of \(d\) with respect to \(\langle-|-\rangle_{\rm L}\)[19]. It is worth mentioning that by considering the right structure as \[\langle\hat{\mu},\mu\rangle_{\rm R}:=\langle\hat{\mu}^{*},\mu^{*}\rangle_{\rm L} \tag{2.12}\] we get \(\langle-|-\rangle_{\rm R}\), \(\star_{\rm R}:=*\circ\star_{\rm L}\circ*\) and \(d^{\star_{\rm R}}:=(-1)^{k+1}\,\star_{\rm R}^{-1}\,\circ d\,\circ\,\star_{\rm R }=*\circ d^{\star_{\rm L}}\circ*\), which is the formal adjoint operator of \(d\) with respect to \(\langle-|-\rangle_{\rm R}\)[19]. ### The Quantum Group of \(U(1)\) and its Differential calculi Let us start this subsection by considering the \(*\)-Hopf algebra of the polynomial Laurent algebra (\(G=\mathbb{C}[z,z^{-1}],\phi,\epsilon,\kappa\)), where \(z^{-1}=z^{*}\), \(\phi\) is the coproduct, \(\epsilon\) is the counity and \(\kappa\) the coinverse. These operations are given by \[\phi(z^{n})=z^{n}\otimes z^{n},\qquad\epsilon(z^{n})=1,\qquad\kappa(z^{n})=z^{ n*}.\] The space \(G\) will play the role of the quantum structure group of our bundle. The next step is to find differential calculi on \(G\) different to the well-known algebra of _classical_ differential forms in order to create our two models. The reasons of these changes will be explored in whole text. #### 2.2.1. The Non-Standard \(1d\)-Differential Calculi In accordance with [17, 24], a bicovariant \(*\)-First Order Differential calculi (\(*\)-FODC) can be defined by an Ad-invariant right ideal \(\mathcal{R}\subseteq\mathrm{Ker}(\epsilon)\) such that \(\kappa(\mathcal{R})^{*}\subseteq\mathcal{R}\). 
In this way, let us consider any \(*\)-FODC \[(\Gamma,d) \tag{2.13}\] such that the set of invariant elements or the _quantum (dual) Lie algebra_\({}_{\rm inv}\Gamma\) satisfies \[\dim_{\mathbb{C}}({}_{\rm inv}\Gamma)=1,\qquad z\,\pi(z)=-\pi(z)\,z,\] where \(\pi:\mathrm{Ker}(\epsilon)\longrightarrow{}_{\rm inv}\Gamma\) is the quantum germs map given by \(\pi(g)=\kappa(g^{(1)})dg^{(2)}\) for all \(g\in\mathrm{Ker}(\epsilon)\) with \(\phi(g)=g^{(1)}\otimes g^{(2)}\) (in Sweedler's notation) [17, 24]. Notice \[{}_{\rm inv}\Gamma={\rm span}_{\mathbb{C}}\{\vartheta:=i\,\pi(z)\}. \tag{2.14}\] Also we can calculate the adjoint (co)action of \(G\) on \({}_{\rm inv}\Gamma\) \[\begin{split}{\rm ad}:{}_{\rm inv}\Gamma&\longrightarrow {}_{\rm inv}\Gamma\otimes G\\ \vartheta&\longmapsto\vartheta\otimes 1\end{split} \tag{2.15}\] because of \[{\rm ad}(\pi(g))=((\pi\otimes{\rm id})\circ{\rm Ad})(g)=\pi(g)\otimes 1\] for all \(g\in G\), where \({\rm Ad}:G\longrightarrow G\otimes G\) is the adjoint (right co)action on \(G\)[14, 15, 17]. In Durdevich's framework of quantum principal bundles we have to take the universal differential envelope \(*\)-calculi [14, 15, 17] \[(\Gamma^{\wedge},d,*). \tag{2.16}\] This graded differential \(*\)-algebra will play the role of _quantum differential forms_ of \(U(1)\) in the first model that we will present. This space has the particularity that \[\Gamma^{\wedge\,2}\cong G\otimes{}_{\rm inv}\Gamma^{\wedge\,2},\] where \[{}_{\rm inv}\Gamma^{\wedge\,2}={\rm span}_{\mathbb{C}}\{\vartheta\,\vartheta\}; \tag{2.17}\] so there are quantum differential forms of grade \(2\). Moreover, there is not a top grade, which is a big difference with the algebra of _classical_ differential forms of \(U(1)\). The main reason to use the universal differential envelope \(*\)-calculi instead of, for example, the universal differential calculi is the fact that \((\Gamma^{\wedge},d,*)\) allows to extend the structure of \(*\)-Hopf algebra to any grade and it is maximal with this property; reflecting the _classical_ fact the tangent bundle of a Lie group is also a Lie Group. The structure of graded differential \(*\)-Hopf algebra will be denoted by the same symbols. In this specific case we have ([14, 15, 17]) \[\vartheta^{*}=\vartheta,\qquad d\vartheta=i\vartheta\,\vartheta. \tag{2.18}\] #### 2.2.2. A \(2d\)-differential calculi Now let \(q\in\mathbb{R}-\{0,1\}\), \(\mathcal{L}={\rm span}_{\mathbb{C}}\{z,z^{-1}\}\) and its linear dual space \(\hat{\mathcal{L}}:={\rm span}_{\mathbb{C}}\{\theta_{-},\theta_{+}\}\), where \(\theta_{-}(z)=1\), \(\theta_{-}(z^{-1})=0\), \(\theta_{+}(z)=0\), \(\theta_{+}(z^{-1})=1\). The map \[\varpi:{\rm Ker}(\epsilon)\longrightarrow\hat{\mathcal{L}},\qquad g \longmapsto\varpi(g),\] where \(\varpi(g)(x)=\mathcal{Q}(g\otimes x)\) with \(\mathcal{Q}\) such that \(\mathcal{Q}(z^{m}\otimes z^{n})=q^{2mn}\) for all \(m\), \(n\in\mathbb{Z}\); defines \[(\Gamma,d), \tag{2.19}\] a \(*\)-First Order Differential calculi (\(*\)-FODC) by means of its space of invariant elements, or equivalently, its (dual) quantum Lie algebra ([4, 24, 17]) \[{}_{\rm inv}\Gamma:=\frac{{\rm Ker}(\epsilon)}{{\rm Ker}(\varpi)}. \tag{2.20}\] It is worth remarking that \(\dim({}_{\rm inv}\Gamma)=2\), a big difference with the classical case in which \(\dim\left(\frac{{\rm Ker}(\epsilon)}{{\rm Ker}^{2}(\epsilon)}\right)=\dim( \mathfrak{u}(\mathfrak{1}))=1\) and clearly a linear basis of the quantum Lie algebra is given by \[\beta:=\{\vartheta^{e}:=-i\,\theta_{-},\,\vartheta^{m}:=-i\,\theta_{+}\}. 
\tag{2.21}\] By considering the quantum germs map [17, 24] \[\pi:\mathrm{Ker}(\epsilon)\longrightarrow{}_{\mathrm{inv}}\Gamma \tag{2.22}\] it is easy to look for relations, such as \[\pi(z^{n}) =i(q^{2n}-1)\vartheta^{e}+i(q^{-2n}-1)\vartheta^{m},\] \[\pi(z^{-n}) =i(q^{-2n}-1)\vartheta^{e}+i(q^{2n}-1)\vartheta^{m}. \tag{2.23}\] Also we can calculate the adjoint (co)action of \(G\) on \({}_{\mathrm{inv}}\Gamma\) \[\mathrm{ad}:{}_{\mathrm{inv}}\Gamma \longrightarrow{}_{\mathrm{inv}}\Gamma\otimes G\] \[\vartheta \longmapsto\vartheta\otimes 1 \tag{2.24}\] given by \[\mathrm{ad}(\pi(g))=((\pi\otimes\mathrm{id})\circ\mathrm{Ad})(g)=\pi(g) \otimes 1\] for all \(g\in G\). Now we have to take the universal differential envelope \(*\)-calculi [14, 15, 17] \[(\Gamma^{\wedge},d,*). \tag{2.25}\] This graded differential \(*\)-algebra will play the role of _quantum differential forms_ of \(U(1)\) in the second that we will present. To conclude this subsection we are to present some relations in this algebra \[\vartheta^{e*}=\vartheta^{e},\quad\vartheta^{m*}=\vartheta^{m} \tag{2.26}\] \[d\pi(z)=d\pi(z^{-1})=-\frac{(q^{2}-1)^{2}}{q^{2}}(\vartheta^{e}\vartheta^{m} +\vartheta^{m}\vartheta^{e}), \tag{2.27}\] \[d\vartheta^{e}=d\vartheta^{m}=i(\vartheta^{e}\vartheta^{m}+\vartheta^{m} \vartheta^{e}). \tag{2.28}\] ### The Quantum Bundle Finally we have all the ingredients to build the trivial quantum bundle that we will used in the rest of this paper. Let \[\zeta=(GM:=M\otimes G,M,_{GM}\Phi:=\mathrm{id}\otimes\phi) \tag{2.29}\] be the trivial quantum principal \(U(1)\)-bundle over \(M\)[15, 17]. Now we can also take the trivial differential calculi on the bundle ([15, 17]): \[\Omega^{\bullet}(GM)=\Omega^{\bullet}(M)\otimes\Gamma^{\wedge},\ \ _{\Omega}\Psi:= \mathrm{id}\otimes\phi:\Omega^{\bullet}(GM)\longrightarrow\Omega^{\bullet}(M) \otimes\Gamma^{\wedge} \tag{2.30}\] (here \(\phi\) is the extension of the coproduct in \(\Gamma^{\wedge}\)). The graded differential \(*\)-algebra \(\Omega^{\bullet}(GM)\) will play the role of _quantum differential forms_ on the total quantum space \(GM\). It is worth mentioning that \[\mathrm{Hor}^{\bullet}GM:=\Omega^{\bullet}(M)\otimes G \tag{2.31}\] and of course the graded differential \(*\)-subalgebra of forms on the base quantum space matches with \(\Omega^{\bullet}(M)\). In accordance with the general theory, the restriction of \({}_{\Omega}\Psi\) in \(\mathrm{Hor}^{\bullet}GM\) is a (co)representation of \(G\) on \(\mathrm{Hor}^{\bullet}GM\)[15, 17]. This map will be denoted by \({}_{\mathrm{H}}\Phi\). In the light of [15, 17], the set of quantum principal connections (qpcs) \[\{\omega:{}_{\mathrm{inv}}\Gamma\longrightarrow\Omega^{1}(GM)\} \tag{2.32}\] on trivial bundles is in bijection with the space of _non-commutative gauge potentials_ \[\{A^{\omega}:{}_{\rm inv}\Gamma\longrightarrow\Omega^{1}(M)\mid A^{\omega}\ \mbox{is linear}\} \tag{2.33}\] by means of \[\omega(\theta)=A^{\omega}(\theta)\otimes\mathbb{1}+\mathbb{1}\otimes\theta\] for all \(\theta\in{}_{\rm inv}\Gamma\). For \(A^{\omega}=0\) the corresponding qpc is called the trivial qpc and it is regular and multiplicative [15, 17]. **Proposition 2.1**.: _The trivial qpc is the only regular connection._ Proof.: Let us assume that \(\omega\) is a regular qpc. Then according to [15, 17] its corresponding non-commutative gauge potential has to satisfies \[A^{\omega}(\theta\circ g)=\epsilon(g)A^{\omega}(\theta),\] for all \(\theta\in{}_{\rm inv}\Gamma\) and all \(g\in G\). 
So it is enough to evaluate the last equality in \(g=z\) to find that it is satisfied if and only if \(A^{\omega}=0\). Despite the ease of the previous proof, the last proposition is quite important for our purpose because every (_classical_) principal connection is regular [15]; so the last result tells that up to the trivial qpc, there is no _classical_ counterpart of any qpc, i.e., up to the trivial qpc, all results in the rest of this paper will be completely _quantum_, they will not have _classical_ analogues. In order to talk about the curvature of a qpc and develop the theory we need to consider the two different cases given by our two different calculi of \(G\). #### 2.3.1. The Non-Standard \(1d\)-Differential Calculi First of all, it is necessary to choose a embedded differential \(\delta:{}_{\rm inv}\Gamma\longrightarrow{}_{\rm inv}\Gamma\otimes{}_{\rm inv}\Gamma\)[15, 17]. Such maps have to satisfy \(M\circ({\rm ad}\otimes{\rm ad})\circ\delta=(\delta\otimes{\rm id}_{G})\circ\delta\) and \(\delta(\theta)=\theta^{(1)}\otimes\theta^{(2)}\), \(\delta(\theta^{*})=-\theta^{(2)\,*}\otimes\theta^{(1)\,*}\) if \(d\theta=\theta^{(1)}\theta^{(2)}\), where \(M(\theta_{1},g_{1},\theta_{2},g_{2})=(\theta_{1},\theta_{2},g_{1}g_{2})\) for every \(\theta_{1}\), \(\theta_{2}\in{}_{\rm inv}\Gamma\) and every \(g_{1}\), \(g_{2}\in G\). In other words, choosing \(\delta\) is choosing a compatible (with respect to the differential structure) way to embed \({}_{\rm inv}\Gamma\) into \({}_{\rm inv}\Gamma\otimes{}_{\rm inv}\Gamma\). In our case, the only embedded differential \[\delta:{}_{\rm inv}\Gamma\longrightarrow{}_{\rm inv}\Gamma\otimes{}_{\rm inv}\Gamma \tag{2.34}\] is given by \[\delta(\vartheta)=i\vartheta\otimes\vartheta.\] With this, the space of curvatures of qpcs \[\{R^{\omega}:{}_{\rm inv}\Gamma\longrightarrow\Omega^{2}(GM)\} \tag{2.35}\] is in bijection with the space of _non-commutative fields strength_ \[\{F^{\omega}:=dA^{\omega}-\langle A^{\omega},A^{\omega}\rangle:{}_{\rm inv} \Gamma\longrightarrow\Omega^{2}(M)\}, \tag{2.36}\] where for a given algebra \(X\), we have \[\langle B,C\rangle=m\circ(B\otimes C)\circ\delta\] with \(m\) the product map and \(B,C:{}_{\rm inv}\Gamma\longrightarrow X\). The reader can compare this relation between qpcs and their curvatures in terms of potentials (\(A^{\omega}\longleftrightarrow F^{\omega}=dA^{\omega}-\langle A^{\omega},A^{ \omega}\rangle\) with Equation A.2 (\(A^{\omega}\longleftrightarrow F^{\omega}=dA^{\omega}\)). The embedded differential map in the definition of the curvature of a qpc allows to see it as a gauge field in the general case [15, 17]. Following the classical case, let us consider the non-commutative gauge potential \(A^{\omega}\) given by \[A^{\omega}(\vartheta)=\phi\,dx^{0}-A_{1}\,dx^{1}-A_{2}\,dx^{2}-A_{3}\,dx^{3}. 
\tag{2.37}\] In this way, the non-commutative field strength is defined by \[F^{\omega}(\vartheta)=dA^{\omega}(\vartheta)-iA^{\omega}(\vartheta)\wedge A^ {\omega}(\vartheta) \tag{2.38}\] and in terms of coordinates we have \[F^{\omega}(\vartheta)=\sum_{0\leq\mu<\nu\leq 3}F_{\mu\nu}\,dx^{\mu}\wedge dx^{ \nu}\;\;\mbox{where}\;\;(F_{\mu\nu})=\begin{pmatrix}0&D_{1}&D_{2}&D_{3}\\ -D_{1}&0&-H_{3}&H_{2}\\ -D_{2}&H_{3}&0&-H_{1}\\ -D_{3}&-H_{2}&H_{1}&0\end{pmatrix}, \tag{2.39}\] where \[\mathbf{D}=(D_{1},D_{2},D_{3}):=\mathbf{E}+i[\phi,\mathbf{A}],\qquad\mathbf{ E}:=-\frac{\partial\mathbf{A}}{\partial x^{0}}-\nabla\phi \tag{2.40}\] and \[\mathbf{H}=(H_{1},H_{2},H_{3}):=\mathbf{B}+i\mathbf{A}\times\mathbf{A},\qquad \mathbf{B}:=\nabla\times\mathbf{A}, \tag{2.41}\] with \(\mathbf{A}=(A_{1},A_{2},A_{3})\). The definition of the commutators can be deduced by the context and we have considered that in the cross product the multiplication of elements always start from up to down. The field \(\mathbf{D}\), \(\mathbf{H}\) will be consider as the non-commutative electric field and the non-commutative magnetic field, respectively. Also \(F_{\mu\nu}\) will be called the non-commutative electromagnetic tensor field. #### 2.3.2. The \(2d\)-Differential calculi For this other case the only embedded differential \[\delta:_{\mathrm{inv}}\Gamma\longrightarrow_{\mathrm{inv}}\Gamma\otimes_{ \mathrm{inv}}\Gamma \tag{2.42}\] is given by \[\delta(\vartheta^{e})=\delta(\vartheta^{m})=i(\vartheta^{e}\otimes\vartheta^{ m}+\vartheta^{m}\otimes\vartheta^{e}).\] With this, the space of curvature of qpcs \[\{R^{\omega}:_{\mathrm{inv}}\Gamma\longrightarrow\Omega^{2}(GM)\} \tag{2.43}\] is in bijection with the space of _non-commutative fields strength_ \[\{F^{\omega}:=dA^{\omega}-\langle A^{\omega},A^{\omega}\rangle:_{\mathrm{inv}} \Gamma\longrightarrow\Omega^{2}(M)\}, \tag{2.44}\] where for a given algebra \(X\), we have \[\langle B,C\rangle=m\circ(B\otimes C)\circ\delta\] with \(m\) the product map and \(B,C:_{\mathrm{inv}}\Gamma\longrightarrow X\). The reader can compare this relation between qpcs and their curvatures in terms of potentials (\(A^{\omega}\longleftrightarrow F^{\omega}=dA^{\omega}-\langle A^{\omega},A^{ \omega}\rangle\)) with Equation A.2 (\(A^{\omega}\longleftrightarrow F^{\omega}=dA^{\omega}\)). In this way, let us calculate the non-commutative field strength \(F^{\omega}\) of the non-commutative gauge potential \(A^{\omega}\) given by \[A^{\omega}(\vartheta^{e})=\phi^{e}\,dx^{0}-A_{1}^{e}\,dx^{1}-A_{2}^{e}\,dx^{2}- A_{3}^{e}\,dx^{3}, \tag{2.45}\] \[A^{\omega}(\vartheta^{m})=\phi^{m}dx^{0}\,-A_{1}^{m}\,dx^{1}-A_{2}^{m}\,dx^{2} -A_{3}^{m}\,dx^{3} \tag{2.46}\] Thus \[F^{\omega}(\vartheta^{e})=dA^{\omega}(\vartheta^{e})-iA^{\omega}(\vartheta^{ e})\wedge A^{\omega}(\vartheta^{m})-iA^{\omega}(\vartheta^{m})\wedge A^{\omega}( \vartheta^{e}), \tag{2.47}\] \[F^{\omega}(\vartheta^{m})=dA^{\omega}(\vartheta^{m})-iA^{\omega}(\vartheta^{e })\wedge A^{\omega}(\vartheta^{m})-iA^{\omega}(\vartheta^{m})\wedge A^{\omega} (\vartheta^{e}). 
\tag{2.48}\] In terms of coordinates we get \[F^{\omega}(\vartheta^{e})=\sum_{0\leq\mu<\nu\leq 3}F^{e}_{\mu\nu}\,dx^{\mu} \wedge dx^{\nu}\ \ \text{where}\ \ (F^{e}_{\mu\nu})=\begin{pmatrix}0&D_{1}^{e}&D_{2}^{e}&D_{3}^{e}\\ -D_{1}^{e}&0&-H_{3}^{e}&H_{2}^{e}\\ -D_{2}^{e}&H_{3}^{e}&0&-H_{1}^{e}\\ -D_{3}^{e}&-H_{2}^{e}&H_{1}^{e}&0\end{pmatrix}, \tag{2.49}\] \[\mathbf{D}^{e}=(D_{1}^{e},D_{2}^{e},D_{3}^{e}):=\mathbf{E}^{e}+i[\phi^{m}, \mathbf{A}^{e}]+i[\phi^{e},\mathbf{A}^{m}],\ \ \ \ \mathbf{E}^{e}:=-\frac{\partial\mathbf{A}^{e}}{\partial x^{0}}-\nabla\phi^{e} \tag{2.50}\] and \[\mathbf{H}^{e}=(H_{1}^{e},H_{2}^{e},H_{3}^{e}):=\mathbf{B}^{e}+i\mathbf{A}^{e} \times\mathbf{A}^{m}+i\mathbf{A}^{m}\times\mathbf{A}^{e},\ \ \ \ \mathbf{B}^{e}:=\nabla\times\mathbf{A}^{e}, \tag{2.51}\] where \(\mathbf{A}^{e}=(A_{1}^{e},A_{2}^{e},A_{3}^{e})\) and \(\mathbf{A}^{m}=(A_{1}^{m},A_{2}^{m},A_{3}^{m})\); while \[F^{\omega}(\vartheta^{m})=\sum_{0\leq\mu<\nu\leq 3}F^{m}_{\mu\nu}\,dx^{\mu} \wedge dx^{\nu}\ \ \text{where}\ \ (F^{m}_{\mu\nu})=\begin{pmatrix}0&H_{1}^{m}&H_{2}^{m}&H_{3}^{m}\\ -H_{1}^{m}&0&-D_{3}^{m}&D_{2}^{m}\\ -H_{2}^{m}&D_{3}^{m}&0&-D_{1}^{m}\\ -H_{3}^{m}&-D_{2}^{+}&D_{1}^{m}&0\end{pmatrix}, \tag{2.52}\] \[\mathbf{H}^{m}=(H_{1}^{m},H_{2}^{m},H_{3}^{m}):=\mathbf{B}^{m}+i[\phi^{m}, \mathbf{A}^{e}]+i[\phi^{e},\mathbf{A}^{m}],\ \ \ \ \mathbf{B}^{m}:=-\frac{\partial\mathbf{A}^{m}}{\partial x^{0}}-\nabla\phi^{m} \tag{2.53}\] and \[\mathbf{D}^{m}=(D_{1}^{m},D_{2}^{m},D_{3}^{m}):=\mathbf{E}^{m}+i\mathbf{A}^{e }\times\mathbf{A}^{m}+i\mathbf{A}^{m}\times\mathbf{A}^{e},\ \ \ \ \mathbf{E}^{m}:=\nabla\times\mathbf{A}^{m}. \tag{2.54}\] The notation chosen is not a coincidence: we will consider that \(A^{\omega}(\vartheta^{e})\) is _the non-commutative electric potential \(1\)-form_; \((F^{e}_{\mu,\nu})\), \(\mathbf{D}^{e}\) and \(\mathbf{H}^{e}\) are _the non-commutative electromagnetic tensor field, the non-commutative electric field and the non-commutative magnetic field_ generated by \(A^{\omega}(\vartheta^{e})\), respectively; and \(\mathbf{E}^{e}\) and \(\mathbf{B}^{e}\) are their corresponding _classical_ parts. In the same way, we will consider that \(A^{\omega}(\vartheta^{m})\) is _the non-commutative magnetic potential \(1\)-form_; \((F^{m}_{\mu,\nu})\), \(\mathbf{D}^{m}\) and \(\mathbf{H}^{m}\) are _the non-commutative magnetoelectric tensor field, the non-commutative electric field and the non-commutative magnetic field_ generated by \(A^{\omega}(\vartheta^{m})\), respectively; and \(\mathbf{E}^{m}\) and \(\mathbf{B}^{m}\) are their corresponding _classical_ parts. ## 3. Non-Commutative Maxwell Equations Just like we have exposed at the beginning of this paper, Maxwell equations in the vacuum comes from the Bianchi identity and critical points of the Yang-Mills functional. In this sections we are going to recreate that process in order to find their _quantum_ counterparts on our bundle. ### Non-Commutative Geometrical Equations In accordance with [15, 19], every qpc satisfies the _non-commutative Bianchi identity_, which is \[(D^{\omega}-S^{\omega})R^{\omega}=\langle\omega,\langle\omega,\omega\rangle \rangle-\langle\langle\omega,\omega\rangle,\omega\rangle, \tag{3.1}\] where \(D^{\omega}\) is the covariant derivative and the operator \(S^{\omega}\) measures the _lack of regularity_ of the qpc, in the sense that \(S^{\omega}=0\) when \(\omega\) is regular. Furthermore, when \(\omega\) is multiplicative [15, 17], the right-hand side of the last equation is equal to \(0\). 
In summary, when \(\omega\) is regular and multiplicative, for example, for _classical_ principal connections, Equation 3.1 turns into the well-known _classical_ Bianchi identity \(D^{\omega}R^{\omega}=0\). #### 3.1.1. The Non-Standard \(1d\)-Differential Calculi For this calculi we have **Theorem 3.1**.: _The Equation 3.1 in terms of the non-commutative field strength \(F^{\omega}\) is_ \[(d-d^{S^{\omega}})F^{\omega}=0, \tag{3.2}\] _where_ \[d^{S^{\omega}}\tau:_{\rm inv}\Gamma\longrightarrow\Omega^{k+1}(M) \tag{3.3}\] _is given by_ \[d^{S^{\omega}}\tau(\vartheta)=i[A^{\omega}(\vartheta),\tau(\vartheta)]^{ \partial}:=i(A^{\omega}(\vartheta)\wedge\tau(\vartheta)-(-1)^{k}\tau( \vartheta)\wedge A^{\omega}(\vartheta))\quad\in\quad\Omega^{k+1}(M)\] _for all linear maps \(\tau:_{\rm inv}\Gamma\longrightarrow\Omega^{k}(M)\)._ Proof.: The general definition of the operator \(S^{\omega}\) is given by ([15, 19]) \[S^{\omega}\circ\tau:=\langle\omega,\tau\rangle-(-1)^{k}\langle\tau,\omega \rangle-(-1)^{k}[\tau,\omega]:_{\rm inv}\Gamma\longrightarrow{\rm Hor}^{k+1}( GM),\] where \[[\tau,\omega]:=m\circ(\tau\otimes\omega)\circ({\rm id}_{{}_{\rm inv}\Gamma} \otimes\pi)\circ{\rm ad}\] and \(\tau:_{\rm inv}\Gamma\longrightarrow{\rm Hor}^{k}(GM)\) such that \({}_{\rm H}\Phi\circ\tau=(\tau\otimes{\rm id}_{G})\circ{\rm ad}\). By Equation 2.15, every such \(\tau\) can be viewed as \(\tau:_{\rm inv}\Gamma\longrightarrow\Omega^{k}(M)\) and \([\tau,\omega]=0\). Hence we get \[d^{S^{\omega}}\tau:=S^{\omega}\circ\tau=\langle\omega,\tau\rangle-(-1)^{k} \langle\tau,\omega\rangle:_{\rm inv}\Gamma\longrightarrow\Omega^{k+1}(M).\] A direct calculation taking into account our embedded differential \(\delta\) (Equation 2.34) shows \[d^{S^{\omega}}\tau(\vartheta)=i[A^{\omega}(\vartheta),\tau(\vartheta)]^{ \partial}:=i(A^{\omega}(\vartheta)\wedge\tau(\vartheta)-(-1)^{k}\tau( \vartheta)\wedge A^{\omega}(\vartheta)).\] In the same way, by Equation 2.15 the covariant derivative of every qpc reduces to the differential \(d:\Omega^{k}(M)\longrightarrow\Omega^{k+1}(M)\) for every \(\tau\). Finally, \[(\langle\omega,\langle\omega,\omega\rangle\rangle-\langle\langle\omega,\omega \rangle,\omega\rangle)(\vartheta)=A^{\omega}(\vartheta)\cdot A^{\omega}( \vartheta)\cdot A^{\omega}(\vartheta)-A^{\omega}(\vartheta)\cdot A^{\omega}( \vartheta)\cdot A^{\omega}(\vartheta)=0\] and therefore, the Equation 3.1 for the non-commutative field strength is \(dF^{\omega}=d^{S^{\omega}}F^{\omega}\). In the same way, Equation 3.2 is given by \[dF^{\omega}(\vartheta)=i[A^{\omega}(\vartheta),dF^{\omega}(\vartheta)]=i[A^{ \omega}(\vartheta),dA^{\omega}(\vartheta)], \tag{3.4}\] which is the _non-commutative geometrical equation_ for our trivial quantum bundle with a non-standard \(1D\)-dimensional differential calculi of \(U(1)\). The reader can compare the last equation with Equation A.3. It is worth mentioning that like in the _classical_ case, Equation 3.4 is satisfied by all qpcs! 
so in general, \[dF^{\omega}(\vartheta)\neq 0.\] In the light of Equation 2.37 and following the _classical_ case, the left-hand side of the Equation 3.4 is \[\nabla\cdot{\bf H}\qquad\mbox{ and }\qquad\nabla\times{\bf D}+\frac{\partial{\bf H }}{\partial x^{0}};\] while the right-hand side is \[i[{\bf B},{\bf A}]\qquad\mbox{ and }\qquad i[\phi,{\bf B}]-i({\bf E}\times{\bf A }+{\bf A}\times{\bf E}).\] Hence, Equation 3.4 becomes into the _non-commutative Gauss Law for magnetism_ and the _non-commutative Faraday equation_ \[\nabla\cdot{\bf H}=\rho^{m}, \tag{3.5}\] \[\nabla\times{\bf D}+\frac{\partial{\bf H}}{\partial x^{0}}=-{\bf j}^{m}. \tag{3.6}\] For these equations we have defined the magnetic charge density like \[\rho^{m}:=i[{\bf B},{\bf A}] \tag{3.7}\] and the magnetic current density like \[-{\bf j}^{m}:=i[\phi,{\bf B}]-i({\bf E}\times{\bf A}+{\bf A}\times{\bf E}) \tag{3.8}\] In the last subsection of this section we will show that \(\rho^{m}\) and \({\bf j}^{m}\) can be interpreted like magnetic charges and currents. Again, the definition of the commutators can be deduced by the context. The reader can compare the Equations 3.5, 3.6 with their _classical counterparts_ in Equations A.8, A.9. It is worth remarking that Equation 3.5 tells the existence of _magnetic charges_ for the non-commutative gauge potential \(A^{\omega}(\vartheta)\); while Equation 3.6 tells the existence of _magnetic currents_, but both of them in the vacuum! and like in the _classical case_, this is only a consequence of the geometry of our spaces. Notice that the magnetic charge and current depend on the interaction of \(A^{\omega}(\vartheta)\) with the _classical_ part of the non-commutative field strength \(F^{\omega}(\vartheta)\). #### 3.1.2. The \(2d\)-Differential calculi For this calculi we have **Theorem 3.2**.: _The Equation 3.1 in terms of the non-commutative field strength \(F^{\omega}\) is_ \[(d-d^{S^{\omega}})F^{\omega}=\langle A^{\omega},\langle A^{\omega},A^{\omega} \rangle\rangle-\langle\langle A^{\omega},A^{\omega}\rangle,A^{\omega}\rangle, \tag{3.9}\] _where_ \[d^{S^{\omega}}\tau:_{\rm inv}\Gamma\longrightarrow\Omega^{k+1}(M) \tag{3.10}\] _is given by_ \[d^{S^{\omega}}\tau(\vartheta^{e})=d^{S^{\omega}}\tau(\vartheta^{m})=[A^{\omega}( \vartheta^{m}),\tau(\vartheta^{e})]^{\partial}+[A^{\omega}(\vartheta^{e}),\tau (\vartheta^{m})]^{\partial}\quad\in\quad\Omega^{k+1}(M)\] _for all linear maps \(\tau:{{}_{\rm inv}}\Gamma\longrightarrow\Omega^{k}(M)\), with \([-,-]^{\partial}\) the graded-commutator._ Proof.: By Equation 2.24 and like in Theorem 3.1, we have that \[d^{S^{\omega}}\tau:=S^{\omega}\circ\tau=\langle\omega,\tau\rangle-(-1)^{k} \langle\tau,\omega\rangle:{{}_{\rm inv}}\Gamma\longrightarrow\Omega^{k+1}(M).\] In the same way, by Equation 2.24 the covariant derivative of every qpc reduces to the differential \(d:\Omega^{k}(M)\longrightarrow\Omega^{k+1}(M)\) for every \(\tau\). Finally for every \(\vartheta\in{{}_{\rm inv}}\Gamma\) \[(\langle\omega,\langle\omega,\omega\rangle\rangle-\langle\langle\omega,\omega \rangle,\omega\rangle)(\vartheta)=(\langle A^{\omega},\langle A^{\omega},A^{ \omega}\rangle\rangle-\langle\langle A^{\omega},A^{\omega}\rangle,A^{\omega }\rangle)(\vartheta)\] and therefore, the Equation 3.1 for the non-commutative field strength is the Equation 3.9. 
A direct calculation shows that Equation 3.9 for \(\vartheta^{e}\) and \(\vartheta^{m}\) is \[\begin{split} dF^{\omega}(\vartheta^{e})=dF^{\omega}(\vartheta^ {m})&=-d(A^{\omega}(\vartheta^{e})\cdot A^{\omega}(\vartheta^{m}) )-d(A^{\omega}(\vartheta^{m})\cdot A^{\omega}(\vartheta^{e})).\\ &=[A^{\omega}(\vartheta^{e}),dA^{\omega}(\vartheta^{m})]+[A^{ \omega}(\vartheta^{m}),dA^{\omega}(\vartheta^{e})]\end{split} \tag{3.11}\] which are the _non-commutative geometrical equations_ for our trivial quantum bundle with a \(2D\)-dimensional differential calculi of \(U(1)\). The reader can compare the last equation with Equation A.3. It is worth mentioning that like in the _classical_ case, Equation 3.4 is satisfied by all qpcs! In the light of Equation 2.45 and following the _classical_ case, the left-hand side of Equation 3.11 is \[\nabla\cdot\mathbf{H}^{e}\qquad\text{ and }\qquad\nabla\times\mathbf{D}^{e}+ \frac{\partial\mathbf{H}^{e}}{\partial x^{0}}\] for \(A^{\omega}(\vartheta^{e})\); while \[\nabla\cdot\mathbf{D}^{m}\quad\text{ and }\quad\nabla\times\mathbf{H}^{m}+ \frac{\partial\mathbf{D}^{m}}{\partial x^{0}}\] for \(A^{\omega}(\vartheta^{m})\). At the same time, the right-hand side of Equation 3.11 is \[i[\mathbf{B}^{e},\mathbf{A}^{m}]+i[\mathbf{E}^{m},\mathbf{A}^{e}]\quad\text{ and }\quad i[\phi^{m},\mathbf{B}^{e}]+i[\phi^{e},\mathbf{E}^{m}]-i(\mathbf{E}^{e} \times\mathbf{A}^{m}+\mathbf{A}^{m}\times\mathbf{E}^{e}+\mathbf{B}^{m}\times \mathbf{A}^{e}+\mathbf{A}^{e}\times\mathbf{B}^{m})\] for \(A^{\omega}(\vartheta^{e})\) and \(A^{\omega}(\vartheta^{m})\). Hence, Equation 3.11 becomes for \(A^{\omega}(\vartheta^{e})\) into \[\nabla\cdot\mathbf{H}^{e}=\rho, \tag{3.12}\] \[\nabla\times\mathbf{D}^{e}+\frac{\partial\mathbf{H}^{e}}{\partial x^{0}}=- \mathbf{j}. \tag{3.13}\] and for \(A^{\omega}(\vartheta^{m})\) becomes into \[\nabla\cdot\mathbf{D}^{m}=\rho, \tag{3.14}\] \[\nabla\times\mathbf{H}^{m}+\frac{\partial\mathbf{D}^{m}}{\partial x^{0}}=- \mathbf{j}. \tag{3.15}\] For these equations we have defined the magnetic and electric charge density generated by \(A^{\omega}(\vartheta^{e})\) and \(A^{\omega}(\vartheta^{m})\) respectively, as \[\rho=i[\mathbf{B}^{e},\mathbf{A}^{m}]+i[\mathbf{E}^{m},\mathbf{A}^{e}]; \tag{3.16}\] and the magnetic current density and the electric current density generated by \(A^{\omega}(\vartheta^{e})\) and \(A^{\omega}(\vartheta^{m})\) respectively, as \[-\mathbf{j}=i[\phi^{m},\mathbf{B}^{e}]+i[\phi^{e},\mathbf{E}^{m}]-i(\mathbf{E} ^{e}\times\mathbf{A}^{m}+\mathbf{A}^{m}\times\mathbf{E}^{e}+\mathbf{B}^{m} \times\mathbf{A}^{e}+\mathbf{A}^{e}\times\mathbf{B}^{m}). \tag{3.17}\] The reader can compare the set of Equations 3.12-3.15 with their _classical counterparts_: Equations A.8, A.9. In the last subsection of this section we will show that \(\rho\) and \(\mathbf{j}\) can be interpreted like magnetic charges and currents At this point there are a couple of comments to do about the last equations. First of all, it is important to notice that Equation 3.12 tells the existence of _magnetic charges!_ for the non-commutative electric potential \(1\)-form; while Equation 3.14 tells the existence of _electric charges_ for the non-commutative magnetic potential \(1\)-form. Like in the _classical case_, all of this is a consequence of the geometry of our spaces. Second, \(\{\mathbf{H}^{e},\mathbf{D}^{m}\}\) and \(\{\{\mathbf{D}^{e},\mathbf{H}^{e}\},\{\mathbf{D}^{m},\mathbf{H}^{m}\}\}\) satisfied the same equalities! 
and they depend on a non-trivial way of both non-commutative potentials and the _classical_ fields: there are electric/magnetic charges and electric/magnetic current densities due to self-interaction in the vacuum!. ### Non-Commutative Dynamical Equations In accordance with [19], a qpc \(\omega\) is a Yang-Mills qpc, i.e., a critical point of the non-commutative Yang-Mills functional (a functional that measures the square norm of the curvature of a qpc) if and only if \[\langle\Upsilon_{\mathrm{ad}}\circ\lambda\,|\,(d^{\nabla^{\omega}_{\mathrm{ad }}\star_{\mathrm{L}}}-d^{S^{\omega}\star_{\mathrm{L}}})R^{\omega}\rangle_{ \mathrm{L}}+\langle\widetilde{\Upsilon}_{\mathrm{ad}}\circ\widehat{\lambda}\,| \,(d^{\widehat{\nabla}^{\omega}_{\mathrm{ad}}\star_{\mathrm{R}}}-d^{\widehat {S}^{\omega}\star_{\mathrm{R}}})\widehat{R}^{\omega}\rangle_{\mathrm{R}}=0 \tag{3.18}\] for all \(\lambda\in\overrightarrow{\mathfrak{qpc}(\zeta)}\), where \(d^{\nabla^{\omega}_{\mathrm{ad}}\star_{\mathrm{L}}}\) is the formal adjoint operator of the exterior derivative of the induced quantum linear connection on the left associated quantum vector bundle of ad (just like in the _classical_ case); while \(d^{S^{\omega}\star_{\mathrm{L}}}\) is the formal adjoint operator of \(\Upsilon_{\mathrm{ad}}\circ S^{\omega}\circ\Upsilon_{\mathrm{ad}}^{-1}\)[19]. The operators \(d^{\nabla^{\omega}_{\mathrm{ad}}\star_{\mathrm{R}}}\) and \(d^{\widehat{S}^{\omega}\star_{\mathrm{R}}}\) are defined in an analogues way for the right structures. Here we are considering that \((d^{\nabla^{\omega}_{\mathrm{ad}}\star_{\mathrm{L}}}-d^{S^{\omega}\star_{ \mathrm{L}}})R^{\omega}:=(d^{\nabla^{\omega}_{\mathrm{ad}}\star_{\mathrm{L}}} -d^{S^{\omega}\star_{\mathrm{L}}})\circ\Upsilon_{\mathrm{ad}}\circ R^{\omega}\), \((d^{\widehat{\nabla}^{\omega}_{\mathrm{ad}}\star_{\mathrm{R}}}-d^{\widehat {S}^{\omega}\star_{\mathrm{R}}})\widehat{R}^{\omega}:=(d^{\widehat{\nabla}^{ \omega}_{\mathrm{ad}}\star_{\mathrm{R}}}-d^{\widehat{S}^{\omega}\star_{ \mathrm{R}}})\circ\widetilde{\Upsilon}_{\mathrm{ad}}\circ\widehat{R}^{\omega}\) and \(\widehat{\lambda}=\ast\circ\lambda\circ\ast\), \(\widehat{R}^{\omega}=\ast\circ R^{\omega}\circ\ast\). #### 3.2.1. The Non-Standard \(1d\)-Differential Calculi For this calculi we have **Theorem 3.3**.: _The Equation 3.18 in terms of the non-commutative field strength \(F^{\omega}\) is_ \[(d^{\star_{\mathrm{L}}}-d^{S^{\omega}\star_{\mathrm{L}}})F^{\omega}=0, \tag{3.19}\] _where \(d^{\star_{\mathrm{L}}}\) is the left quantum codifferential (see Equation 2.11) and \(d^{S^{\omega}\star_{\mathrm{L}}}\) is the formal adjoint operator of \(d^{S^{\omega}}\) with respect to \(\langle-|-\rangle_{\mathrm{L}}\). 
Concretely_ \[d^{S^{\omega}\star_{\mathrm{L}}}\tau:_{\mathrm{inv}}\Gamma\longrightarrow\Omega^{k}(M) \tag{3.20}\] _is given by_ \[d^{S^{\omega}\star_{\mathrm{L}}}\tau(\vartheta)=(-1)^{k}\,i\,\star_{\mathrm{L}}^{-1}\left([A^{\omega}(\vartheta),\star_{\mathrm{L}}\tau(\vartheta)]^{\vartheta}\right),\] _for all linear maps \(\tau:_{\mathrm{inv}}\Gamma\longrightarrow\Omega^{k+1}(M)\)._ Proof.: By Equation 2.15, we get that \[\langle\Upsilon_{\mathrm{ad}}\circ\lambda\,|\,(d^{\nabla^{\omega}_{\mathrm{ad}}\star_{\mathrm{L}}}-d^{S^{\omega}\star_{\mathrm{L}}})R^{\omega}\rangle_{\mathrm{L}}=\langle\widetilde{\Upsilon}_{\mathrm{ad}}\circ\widehat{\lambda}\,|\,(d^{\widehat{\nabla}^{\omega}_{\mathrm{ad}}\star_{\mathrm{R}}}-d^{\widehat{S}^{\omega}\star_{\mathrm{R}}})\widehat{R}^{\omega}\rangle_{\mathrm{R}}\] and since Equation 3.18 has to be satisfied by all \(\lambda\), we have that Equation 3.18 is equivalent to \[(d^{\nabla^{\omega}_{\mathrm{ad}}\star_{\mathrm{L}}}-d^{S^{\omega}\star_{\mathrm{L}}})R^{\omega}=0.\] Like in Theorem 3.1, in terms of \(F^{\omega}\) the last equation is \[(d^{\star_{\mathrm{L}}}-d^{S^{\omega}\star_{\mathrm{L}}})F^{\omega}=0\] and a large and tedious direct calculation proves that \(d^{S^{\omega}\star_{\mathrm{L}}}\tau(\vartheta)=(-1)^{k}\,i\,\star_{\mathrm{L}}^{-1}\bigl{(}[A^{\omega}(\vartheta),\star_{\mathrm{L}}\tau(\vartheta)]^{\vartheta}\bigr{)}\) is actually the formal adjoint operator of \(d^{S^{\omega}}\). The reader can compare Equation 3.19 with Equation A.4. It is worth mentioning that, unlike in the _classical_ case, not all qpcs satisfy Equation 3.19. In the light of Equation 2.37 and following the _classical_ case, the left-hand side of Equation 3.19 is \[\nabla\cdot\mathbf{D}\qquad\text{ and }\qquad\nabla\times\mathbf{H}-\frac{\partial\mathbf{D}}{\partial x^{0}};\] while the right-hand side is \[i[\mathbf{D},\mathbf{A}^{*}]\qquad\text{ and }\qquad i[\mathbf{D},\phi^{*}]-i(\mathbf{H}\times\mathbf{A}^{*}+\mathbf{A}^{*}\times\mathbf{H}),\] where \(\mathbf{A}^{*}=(A_{1}^{*},A_{2}^{*},A_{3}^{*})\). Therefore, Equation 3.19 becomes the _non-commutative Gauss Law_ and the _non-commutative Ampere equation_ \[\nabla\cdot\mathbf{D}=\rho^{e}, \tag{3.21}\] \[\nabla\times\mathbf{H}-\frac{\partial\mathbf{D}}{\partial x^{0}}=\mathbf{j}^{e}. \tag{3.22}\] Here we have defined the electric charge density and the electric current density by \[\rho^{e}=i[\mathbf{D},\mathbf{A}^{*}], \tag{3.23}\] and \[\mathbf{j}^{e}=i[\mathbf{D},\phi^{*}]-i(\mathbf{H}\times\mathbf{A}^{*}+\mathbf{A}^{*}\times\mathbf{H}). \tag{3.24}\] In summary, Equations 3.5-3.6 and 3.21-3.22 are the _non-commutative Maxwell equations in the vacuum_ associated to the non-commutative gauge potential \(A^{\omega}\) in the quantum space-time \(M\). It is worth remarking again that even in the vacuum there are electric/magnetic charges as well as magnetic/electric current densities, which are generated by non-trivial self-interactions on \(M\). Finally, since the divergence of the curl is zero we can find _conservation laws_ for all cases \[\nabla\cdot\mathbf{j}^{e}+\frac{\partial\rho^{e}}{\partial x^{0}}=0,\qquad\nabla\cdot\mathbf{j}^{m}+\frac{\partial\rho^{m}}{\partial x^{0}}=0. \tag{3.25}\] #### 3.2.2.
The \(2d\)-Differential calculi For this calculus we have **Theorem 3.4**.: _Equation 3.18 in terms of the non-commutative field strength \(F^{\omega}\) is_ \[(d^{\star_{\mathrm{L}}}-d^{S^{\omega}\star_{\mathrm{L}}})F^{\omega}=0, \tag{3.26}\] _where \(d^{\star_{\mathrm{L}}}\) is the left quantum codifferential (see Equation 2.11) and \(d^{S^{\omega}\star_{\mathrm{L}}}\) is the formal adjoint operator of \(d^{S^{\omega}}\) with respect to \(\langle-|-\rangle_{\mathrm{L}}\). Concretely_ \[d^{S^{\omega}\star_{\mathrm{L}}}\tau:_{\mathrm{inv}}\Gamma\longrightarrow\Omega^{k}(M) \tag{3.27}\] _is given by_ \[d^{S^{\omega}\star_{\mathrm{L}}}\tau(\vartheta^{e})=(-1)^{k}\,i\,\star_{\mathrm{L}}^{-1}\left([A^{\omega}(\vartheta^{m}),\star_{\mathrm{L}}\tau(\vartheta^{m})]^{\vartheta}+[A^{\omega}(\vartheta^{m}),\star_{\mathrm{L}}\tau(\vartheta^{e})]^{\vartheta}\right),\] \[d^{S^{\omega}\star_{\rm L}}\tau(\vartheta^{m})=(-1)^{k}\,i\,\star_{\rm L}^{-1}\left([A^{\omega}(\vartheta^{e}),\star_{\rm L}\tau(\vartheta^{m})]^{\partial}+[A^{\omega}(\vartheta^{e}),\star_{\rm L}\tau(\vartheta^{e})]^{\partial}\right),\] _for all linear maps \(\tau:{}_{\rm inv}\Gamma\longrightarrow\Omega^{k+1}(M)\)._ The proof of the last theorem is completely similar to the one presented for Theorem 3.3, so we omit it. The reader can compare Equation 3.26 with Equation A.4. It is worth mentioning that, unlike in the _classical_ case, not all qpcs satisfy Equation 3.26. Similarly to our other cases, by Equation 2.45, the left-hand side of Equation 3.26 becomes \[\nabla\cdot{\bf D}^{e}\qquad\mbox{ and }\qquad\nabla\times{\bf H}^{e}-\frac{\partial{\bf D}^{e}}{\partial x^{0}}\] for \(A^{\omega}(\vartheta^{e})\), and for \(A^{\omega}(\vartheta^{m})\) we get \[\nabla\cdot{\bf H}^{m}\qquad\mbox{ and }\qquad\nabla\times{\bf D}^{m}-\frac{\partial{\bf H}^{m}}{\partial x^{0}}.\] In addition, the right-hand side of Equation 3.26 is \[i[{\bf D}^{e}+{\bf H}^{m},{\bf A}^{m*}]\quad\mbox{ and }\quad i[{\bf D}^{e}+{\bf H}^{m},\phi^{m*}]-i(({\bf H}^{e}+{\bf D}^{m})\times{\bf A}^{m*}+{\bf A}^{m*}\times({\bf H}^{e}+{\bf D}^{m}))\] for \(A^{\omega}(\vartheta^{e})\), and \[i[{\bf D}^{e}+{\bf H}^{m},{\bf A}^{e*}]\quad\mbox{ and }\quad i[{\bf D}^{e}+{\bf H}^{m},\phi^{e*}]-i(({\bf H}^{e}+{\bf D}^{m})\times{\bf A}^{e*}+{\bf A}^{e*}\times({\bf H}^{e}+{\bf D}^{m}))\] for \(A^{\omega}(\vartheta^{m})\), where \({\bf A}^{e*}=(A^{e*}_{1},A^{e*}_{2},A^{e*}_{3})\), \({\bf A}^{m*}=(A^{m*}_{1},A^{m*}_{2},A^{m*}_{3})\). Therefore, Equation 3.26 for \(A^{\omega}(\vartheta^{e})\) becomes \[\nabla\cdot{\bf D}^{e}=\rho^{e}, \tag{3.28}\] \[\nabla\times{\bf H}^{e}-\frac{\partial{\bf D}^{e}}{\partial x^{0}}={\bf j}^{e} \tag{3.29}\] and for \(A^{\omega}(\vartheta^{m})\) we get \[\nabla\cdot{\bf H}^{m}=\rho^{m}, \tag{3.30}\] \[\nabla\times{\bf D}^{m}-\frac{\partial{\bf H}^{m}}{\partial x^{0}}={\bf j}^{m}.
\tag{3.31}\] Here we have defined the electric and magnetic charge density generated by \(A^{\omega}(\vartheta^{e})\) and \(A^{\omega}(\vartheta^{m})\), respectively \[\rho^{e}=i[{\bf D}^{e}+{\bf H}^{m},{\bf A}^{m*}],\qquad\rho^{m}=i[{\bf D}^{e}+{\bf H}^{m},{\bf A}^{e*}], \tag{3.32}\] and the electric current density and the magnetic current density generated by \(A^{\omega}(\vartheta^{e})\) and \(A^{\omega}(\vartheta^{m})\), respectively \[{\bf j}^{e}=i[{\bf D}^{e}+{\bf H}^{m},\phi^{m*}]-i(({\bf H}^{e}+{\bf D}^{m})\times{\bf A}^{m*}+{\bf A}^{m*}\times({\bf H}^{e}+{\bf D}^{m})), \tag{3.33}\] \[{\bf j}^{m}=i[{\bf D}^{e}+{\bf H}^{m},\phi^{e*}]-i(({\bf H}^{e}+{\bf D}^{m})\times{\bf A}^{e*}+{\bf A}^{e*}\times({\bf H}^{e}+{\bf D}^{m})). \tag{3.34}\] The reader can compare Equations 3.28-3.31 with their _classical_ counterparts: Equations A.10, A.11. It is worth remarking again that even in the vacuum there are electric/magnetic charges as well as magnetic/electric current densities, which are generated by non-trivial self-interactions on \(M\). In summary, Equations 3.12-3.15 and 3.28-3.31 are the _non-commutative Maxwell equations in the vacuum_ associated to the non-commutative gauge potential \(A^{\omega}\) in the quantum space-time \(M\): there are \(4\) for the non-commutative electric potential \(1\)-form \(A^{\omega}(\vartheta^{e})\), and another \(4\) for the non-commutative magnetic potential \(1\)-form \(A^{\omega}(\vartheta^{m})\). Since now we have \(4\) new Maxwell equations and all of them are coupled, it is necessary to present a non-trivial solution; in particular, we are going to present a solution for which the non-commutative electric potential \(1\)-form produces a non-zero magnetic charge density (and hence, the non-commutative magnetic potential \(1\)-form produces a non-zero electric charge density). In fact, taking \(\theta^{\mu\nu}=0\) for all \(\mu\), \(\nu\) except \(\theta^{23}\), let us consider the non-commutative gauge potential \[A^{\omega}:{}_{\rm inv}\Gamma\longrightarrow\Omega^{1}(M) \tag{3.35}\] defined by \[A^{\omega}(\vartheta^{e})=x^{3}\,dx^{2}\] \[A^{\omega}(\vartheta^{m})=x^{1}x^{2}\,dx^{3}.\] In this way \[E^{e}=(0,0,0),\quad B^{e}=(1,0,0),\] \[D^{e}=(0,0,0),\quad H^{e}=(1+\theta^{23}x^{1},0,0),\] and \[B^{m}=(0,0,0),\quad E^{m}=(-x^{1},x^{2},0),\] \[H^{m}=(0,0,0),\quad D^{m}=(-(1-\theta^{23})x^{1},x^{2},0).\] Thus the non-commutative Maxwell equations are satisfied: \[\nabla\cdot{\bf H}^{e}=\theta^{23},\qquad\nabla\cdot{\bf D}^{e}=0,\] \[\nabla\times{\bf D}^{e}+\frac{\partial{\bf H}^{e}}{\partial x^{0}}=0,\qquad\nabla\times{\bf H}^{e}-\frac{\partial{\bf D}^{e}}{\partial x^{0}}=0;\] and \[\nabla\cdot{\bf D}^{m}=\theta^{23},\qquad\nabla\cdot{\bf H}^{m}=0,\] \[\nabla\times{\bf H}^{m}+\frac{\partial{\bf D}^{m}}{\partial x^{0}}=0,\qquad\nabla\times{\bf D}^{m}-\frac{\partial{\bf H}^{m}}{\partial x^{0}}=0.\] This solution is consistent with the zero slope limit of string theory: \(\theta^{0j}=0\) for \(j=1,2,3\) [25]. **Remark 3.5**.: _The general theory on which we are based ([18], [19]) works with qpcs in the most general way. However, there is a special kind of qpcs called real qpcs which satisfy_ \[\omega(\theta^{*})=\omega(\theta)^{*}\] _for all \(\theta\in{}_{\rm inv}\Gamma\).
This kind of qpc is the only one that other papers consider, for example [14, 15, 17]; and in the classical case, only real principal connections have physical meaning, so maybe in the non-commutative case only real qpcs have physical meaning as well._ _For our quantum bundle this condition becomes_ \[A^{\omega}(\theta^{*})=A^{\omega}(\theta)^{*}\] _for all \(\theta\in{}_{\rm inv}\Gamma\) and due to Equation 2.26, real qpcs fulfill_ \[A^{\omega}(\vartheta^{e})^{*}=A^{\omega}(\vartheta^{e}),\qquad A^{\omega}(\vartheta^{m})^{*}=A^{\omega}(\vartheta^{m}). \tag{3.36}\] _It is worth mentioning that the explicit solution presented above comes from a real qpc and the charge densities are real constants. Of course, there are more solutions; this was only an example._ Finally, since the divergence of the curls is zero we can find _conservation laws_ for all cases \[\nabla\cdot\mathbf{j}^{e}+\frac{\partial\rho^{e}}{\partial x^{0}}=0,\qquad\nabla\cdot\mathbf{j}^{m}+\frac{\partial\rho^{m}}{\partial x^{0}}=0,\qquad\nabla\cdot\mathbf{j}+\frac{\partial\rho}{\partial x^{0}}=0. \tag{3.37}\] ### Covariant Formulation In this section we pass to the common physics index notation with the metric \(\eta_{\mu\nu}=\operatorname{diag}(1,-1,-1,-1)\). #### 3.3.1. The Non-Standard \(1d\)-Differential Calculi Let us consider \[A_{\mu}=(\phi,-\mathbf{A}) \tag{3.38}\] and then Equation 2.39 is given by \[F_{\mu\nu}=\partial_{\mu}A_{\nu}-\partial_{\nu}A_{\mu}-i[A_{\mu},A_{\nu}]. \tag{3.39}\] The geometrical equation (Equation 3.4) can be written as \[\partial_{\mu}\widetilde{F}^{\mu\nu}=J^{m\,\nu}\qquad\text{ where }\qquad J^{m\,\nu}=(\rho^{m},\mathbf{j}^{m})=i[A_{\mu},\widetilde{F}^{\mu\nu}]=i[A_{\mu},\widetilde{F}^{\mu\nu}_{\text{classical}}]. \tag{3.40}\] Here, we consider that \[\widetilde{F}^{\mu\nu}=\frac{1}{2}\epsilon^{\mu\nu\alpha\beta}F_{\alpha\beta} \tag{3.41}\] is the dual electromagnetic tensor field, with \(\epsilon^{\mu\nu\alpha\beta}\) the Levi-Civita symbol, and \(\widetilde{F}^{\mu\nu}_{\text{classical}}\) is the _classical_ part of \(\widetilde{F}^{\mu\nu}\), i.e., the part of \(\widetilde{F}^{\mu\nu}\) only with \(\mathbf{E}\) and \(\mathbf{B}\). The reader can compare Equation 3.40 with its _classical_ counterpart, Equation A.14. On the other hand, the dynamical equation (Equation 3.9) can be written as \[\partial_{\mu}F^{\mu\nu}=J^{e\,\nu}\qquad\text{ where }\qquad J^{e\,\nu}=(\rho^{e},\mathbf{j}^{e})=i[A_{\mu}^{*},F^{\mu\nu}] \tag{3.42}\] with \(A_{\mu}^{*}=(\phi^{*},-\mathbf{A}^{*})\). The reader can compare the last equation with its _classical_ counterpart, Equation A.15. Notice that the form of both four-currents is like that of currents in _classical_ non-abelian gauge theories. **Remark 3.6**.: _In Equation 3.39, the term_ \[i[A_{\mu},A_{\nu}]\] _comes from the general definition of the curvature for this quantum bundle with this specific differential calculus. In other papers, like [11, 8], this term comes from the non-commutativity of \(M\) in the definition of the gauge curvature_ \[i[D_{\mu},D_{\nu}].\] **Remark 3.7**.: _If we consider the algebra of differential forms of \(U(1)\), we would get_ \[F_{\mu\nu}=\partial_{\mu}A_{\nu}-\partial_{\nu}A_{\mu}\] _even in \(M\)! because in this case the only embedded differential is \(\delta=0\) and hence the curvature is given by \(F^{\omega}=dA^{\omega}\) (see Equation 2.36).
The presence of \(\delta\) in the definition of the curvature is needed to see the curvature as a gauge field in the general theory [14, 15, 17]._ _So, there is a discrepancy between the gauge curvature and the definition of the curvature of a principal connection in the non-commutative case. This discrepancy is only solved when we consider the non-standard \(1D\) differential calculus of \(U(1)\), so this calculus seems to be the correct mathematical environment in the non-commutative context. In other words, in papers like [8, 11] the authors were working with the non-standard differential calculus without knowing it and hence, with a non-zero \(S^{\omega}\)._ _Another reason to consider that our mathematical formulation is the correct one is [10]. In this paper the authors take a variation on \(A_{\mu}\) and get Equation 3.42 for real qpcs (see Equation 3.36). We arrived at the same result in [19] but in a general context: varying the qpc in the non-commutative Yang-Mills functional on a general quantum principal bundle. Furthermore, Equation 3.40 can be obtained by considering the Jacobi identity on \(M\)_ \[0=\frac{1}{2}\epsilon^{\mu\nu\alpha\beta}[D_{\nu},[D_{\alpha},D_{\beta}]].\] _All of this shows that our mathematical formulation is the correct one in the non-commutative case._ By the last Remark, the correct Yang-Mills Lagrangian density of the system (free photon field) is \[\mathcal{L}_{\rm YM}=-\frac{1}{2}\left(F^{\mu\nu}\cdot F_{\mu\nu}^{*}\right)-A_{\mu}\cdot J^{e\,*\,\mu}-J^{e\,\mu}\cdot A_{\mu}^{*} \tag{3.43}\] and of course, the Euler-Lagrange equations of the last Lagrangian reduce to Equation 3.42. The addition of the current term in the previous Lagrangian is the most important result of this paper. First of all, it is a proof of the importance of the theory of quantum principal bundles in non-commutative gauge theory: this term has been found only with the correct formulation of the Bianchi identity, as well as the correct formulation of the Yang-Mills equation in quantum principal bundles. Finally, it is completely new: it has not been taken into account in the literature [5, 6, 7, 8, 9, 10, 11]. In accordance with [10], twisted gauge transformations are symmetries of the last Lagrangian. Furthermore, the covariant formulation allows us to verify that our equations are covariant under the correct set of Lorentz transformations, for example, the ones that transform the matrix \((\theta^{\mu\nu})\) correctly [26]. On the other hand, solutions in terms of the four-potential in Lorentz gauge \[\partial_{\mu}A^{\mu}=0 \tag{3.44}\] are given by \[\partial_{\mu}\partial^{\mu}A^{\nu}=i[A_{\mu},F^{\mu\nu}]+i[A^{\mu},\partial_{\mu}A^{\nu}]; \tag{3.45}\] while the conservation laws are \[\partial_{\mu}J^{e\,\mu}=0,\qquad\partial_{\mu}J^{m\,\mu}=0. \tag{3.46}\] **Remark 3.8**.: _The whole classical Electromagnetic Theory (Appendix A) can be recovered by considering_ \[\theta^{\mu\nu}\longrightarrow 0,\] _even with the non-standard \(1D\) differential calculus of \(U(1)\), because by the graded commutativity of \(\Omega^{\bullet}(M)\) we would have \(F^{\omega}=dA^{\omega}\) (see Equation 2.36)._ Finally, it is worth mentioning that there is not a complete symmetry between \(\mathbf{D}\) and \(\mathbf{H}\), in the sense that the electric four-current depends on the configuration of the four-potential while the magnetic four-current does not: this four-current exists only by the presence of the electromagnetic field in the quantum space-time \(M\).
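As a heuristic remark (assuming that the product on \(M\) is the Moyal-Weyl star product and using its standard expansion in powers of \(\theta^{\mu\nu}\), which is an assumption of this illustration rather than a statement from the general theory), the commutator of two functions satisfies \[[f,g]=f\star g-g\star f=i\,\theta^{\alpha\beta}\,(\partial_{\alpha}f)(\partial_{\beta}g)+O(\theta^{3}),\] so a current such as \(J^{e\,\nu}=i[A_{\mu}^{*},F^{\mu\nu}]\) is at least of first order in \(\theta^{\mu\nu}\) and vanishes in the commutative limit, in agreement with Remark 3.8.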
Furthermore, when we quantize the model (next section), there will be only one gauge field: \(A_{\mu}\), which is typically identified with the photon field. In other words, the photon field in the non-commutative case turns into a kind of dyon gauge field in the sense that it produces electric and magnetic charges and currents by itself in the vacuum. #### 3.3.2. The \(2d\)-Differential calculi Following the last subsection we get \[A^{e}_{\mu}=(\phi^{e},-\mathbf{A^{e}}),\qquad A^{m}_{\mu}=(\phi^{m},-\mathbf{A^{m}}). \tag{3.47}\] Thus, the non-commutative electromagnetic field is given by \[F^{e}_{\mu\nu}=\partial_{\mu}A^{e}_{\nu}-\partial_{\nu}A^{e}_{\mu}-i[A^{e}_{\mu},A^{m}_{\nu}]-i[A^{m}_{\mu},A^{e}_{\nu}]; \tag{3.48}\] while the non-commutative magnetoelectric field is \[F^{m}_{\mu\nu}=\partial_{\mu}A^{m}_{\nu}-\partial_{\nu}A^{m}_{\mu}-i[A^{e}_{\mu},A^{m}_{\nu}]-i[A^{m}_{\mu},A^{e}_{\nu}]. \tag{3.49}\] The non-commutative geometrical equations (Equation 3.11) can be expressed as \[\partial_{\mu}\widetilde{F}^{e\,\mu\nu}=\partial_{\mu}\widetilde{F}^{m\,\mu\nu}=J^{\nu}, \tag{3.50}\] where \[\widetilde{F}^{e\,\mu\nu}=\frac{1}{2}\epsilon^{\mu\nu\alpha\beta}F^{e}_{\alpha\beta},\qquad\widetilde{F}^{m\,\mu\nu}=\frac{1}{2}\epsilon^{\mu\nu\alpha\beta}F^{m}_{\alpha\beta}, \tag{3.51}\] and \[J^{\nu}=(\rho,\mathbf{j}).\] Notice that \[J^{\nu}=i[A^{e}_{\mu},\widetilde{F}^{m\,\mu\nu}_{\text{classical}}]+i[A^{m}_{\mu},\widetilde{F}^{e\,\mu\nu}_{\text{classical}}], \tag{3.52}\] where \(\widetilde{F}^{m\,\mu\nu}_{\text{classical}}\), \(\widetilde{F}^{e\,\mu\nu}_{\text{classical}}\) are the _classical_ parts of the tensors \(\widetilde{F}^{m\,\mu\nu}\), \(\widetilde{F}^{e\,\mu\nu}\). The reader can compare Equation 3.50 with its _classical_ counterpart, Equation A.14, and it is worth remembering that Equation 3.50 is always satisfied, no matter the values of \(A^{e}_{\mu}\), \(A^{m}_{\mu}\). On the other hand, the non-commutative dynamical equations (Equation 3.26) are \[\partial_{\mu}F^{e\,\mu\nu}=J^{e\,\nu}, \tag{3.53}\] \[\partial_{\mu}F^{m\,\mu\nu}=J^{m\,\nu}, \tag{3.54}\] where \[J^{e\,\nu}=(\rho^{e},\mathbf{j}^{e})\quad\text{ and }\quad J^{m\,\nu}=(\rho^{m},\mathbf{j}^{m}).\] Notice that \[J^{e\,\nu}=i[A^{m\,*}_{\mu},F^{e\,\mu\nu}+F^{m\,\mu\nu}],\qquad J^{m\,\nu}=i[A^{e\,*}_{\mu},F^{e\,\mu\nu}+F^{m\,\mu\nu}], \tag{3.55}\] where \(A^{e\,*}_{\mu}=(\phi^{e*},-\mathbf{A^{e*}})\), \(A^{m\,*}_{\mu}=(\phi^{m*},-\mathbf{A^{m*}})\). The reader can compare Equations 3.53 and 3.54 with their _classical_ counterpart, Equation A.15. Conservation laws are given by \[\partial_{\mu}J^{e\,\mu}=0,\qquad\partial_{\mu}J^{m\,\mu}=0,\qquad\partial_{\mu}J^{\mu}=0; \tag{3.56}\] while solutions in terms of potentials, in the Lorentz gauge \[\partial_{\mu}A^{e\,\mu}=0,\qquad\partial_{\mu}A^{m\,\mu}=0,\] are given by \[\partial_{\mu}\partial^{\mu}A^{e\,\nu}=i[A_{\mu}^{m\,*},F^{e\,\mu\nu}+F^{m\,\mu\nu}]+i[A^{e\,\mu},\partial_{\mu}A^{m\,\nu}]+i[A^{m\,\mu},\partial_{\mu}A^{e\,\nu}], \tag{3.57}\] \[\partial_{\mu}\partial^{\mu}A^{m\,\nu}=i[A_{\mu}^{e\,*},F^{e\,\mu\nu}+F^{m\,\mu\nu}]+i[A^{e\,\mu},\partial_{\mu}A^{m\,\nu}]+i[A^{m\,\mu},\partial_{\mu}A^{e\,\nu}]. \tag{3.58}\] Now we are going to analyze _non-commutative symmetries_. First of all, it is worth mentioning that the non-commutative Maxwell equations are invariant under the following transformation \[A_{\mu}^{e}\;\longleftrightarrow A_{\mu}^{m}.
\tag{3.59}\] In other words \[\mathbf{D}^{e}\longrightarrow\mathbf{H}^{m},\;\;\mathbf{E}^{e}\longrightarrow\mathbf{B}^{m},\qquad\;\mathbf{H}^{m}\longrightarrow\mathbf{D}^{e},\;\;\mathbf{B}^{m}\longrightarrow\mathbf{E}^{e}\] \[\mathbf{H}^{e}\longrightarrow\mathbf{D}^{m},\;\;\mathbf{B}^{e}\longrightarrow\mathbf{E}^{m},\qquad\;\mathbf{D}^{m}\longrightarrow\mathbf{H}^{e},\;\;\mathbf{E}^{m}\longrightarrow\mathbf{B}^{e},\] \[\rho^{e}\longrightarrow\rho^{m},\;\;\mathbf{J}^{e}\longrightarrow\mathbf{J}^{m},\qquad\;\;\;\rho^{m}\longrightarrow\rho^{e},\;\;\mathbf{J}^{m}\longrightarrow\mathbf{J}^{e}.\] This symmetry produces a kind of _non-commutative_ dual transformation, and it comes from the interchange of the ordered basis \(\beta=\{\vartheta^{e},\vartheta^{m}\}\;\longleftrightarrow\;-\beta=\{\vartheta^{m},\vartheta^{e}\}\); so it is a \(\mathbb{Z}_{2}\)-symmetry (see subsection 2.2). In other words, there is full symmetry between the electric field and the magnetic field in the non-commutative Maxwell equations. We define the Yang-Mills Lagrangian density of the system as \[\mathcal{L}_{\mathrm{YM}}=\mathcal{L}^{e}+\mathcal{L}^{m}, \tag{3.60}\] where \[\mathcal{L}^{e}=-\frac{1}{2}\left(F^{e\,\mu\nu}\cdot F_{\mu\nu}^{e\,*}\right)-A_{\mu}^{e}\cdot J^{e\,*\,\mu}-J^{e\,\mu}\cdot A_{\mu}^{e\,*} \tag{3.61}\] and \[\mathcal{L}^{m}=\frac{1}{2}\left(F^{m\,\mu\nu}\cdot F_{\mu\nu}^{m\,*}\right)+A_{\mu}^{m}\cdot J^{m\,*\,\mu}+J^{m\,\mu}\cdot A_{\mu}^{m\,*}. \tag{3.62}\] Of course, the Euler-Lagrange equations for \(A_{\mu}^{e}\) and \(A_{\mu}^{m}\) (or \(A_{\mu}^{e\,*}\) and \(A_{\mu}^{m\,*}\)) reduce to Equations 3.53, 3.54. The minus sign in \(\mathcal{L}^{m}\) with respect to \(\mathcal{L}^{e}\) breaks the symmetry between the electric field and the magnetic field; however, the Euler-Lagrange equations are still the correct ones, and we have the following. **Proposition 3.9**.: _Let \(\omega\) be a real qpc (see Equation 3.36). Then the Yang-Mills action_ \[\mathcal{S}_{\mathrm{YM}}:=\int_{M}\mathcal{L}_{\mathrm{YM}}\,\mathrm{dvol} \tag{3.63}\] _possesses the classical gauge transformation symmetry (see Equation A.16), i.e., \(\mathcal{S}_{\mathrm{YM}}\) is invariant under the transformation_ \[A_{\mu}^{e}\longrightarrow A_{\mu}^{e\,\prime}:=A_{\mu}^{e}+\partial_{\mu}\chi,\qquad A_{\mu}^{m}\longrightarrow A_{\mu}^{m\,\prime}:=A_{\mu}^{m}+\partial_{\mu}\chi \tag{3.64}\] _for any real-valued \(\chi\in M\)._ The proof of the previous proposition is only a direct but long calculation, taking into account that under Equation 3.64 \[F^{e}_{\mu\nu}{}^{\prime} =\partial_{\mu}A^{e}_{\nu}{}^{\prime}-\partial_{\nu}A^{e}_{\mu}{}^{\prime}-i[A^{e}_{\mu}{}^{\prime},A^{m}_{\nu}{}^{\prime}]-i[A^{m}_{\mu}{}^{\prime},A^{e}_{\nu}{}^{\prime}],\] \[F^{m}_{\mu\nu}{}^{\prime} =\partial_{\mu}A^{m}_{\nu}{}^{\prime}-\partial_{\nu}A^{m}_{\mu}{}^{\prime}-i[A^{e}_{\mu}{}^{\prime},A^{m}_{\nu}{}^{\prime}]-i[A^{m}_{\mu}{}^{\prime},A^{e}_{\nu}{}^{\prime}]\] and using the cyclic property of the integral; so we omit it. The previous covariant formulation allows us to conclude that our equations are covariant under the correct set of Lorentz transformations, for example, the ones that transform the matrix \((\theta^{\mu\nu})\) correctly [26]. In accordance with [10], twisted gauge transformations can leave the real action \(\mathcal{S}_{\rm YM}\) invariant.
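As a mechanical cross-check of the explicit solution presented at the end of Subsection 3.2.2, the divergence and curl relations quoted there can be verified symbolically. The following is a minimal sympy sketch (an illustration only, not part of the original derivation), taking the field components at face value and treating \(\theta^{23}\) as a real constant; since the solution is static, the \(\partial/\partial x^{0}\) terms vanish identically.

```python
import sympy as sp

x1, x2, x3, theta = sp.symbols('x1 x2 x3 theta23', real=True)

def div(F):
    return sp.diff(F[0], x1) + sp.diff(F[1], x2) + sp.diff(F[2], x3)

def curl(F):
    return (sp.diff(F[2], x2) - sp.diff(F[1], x3),
            sp.diff(F[0], x3) - sp.diff(F[2], x1),
            sp.diff(F[1], x1) - sp.diff(F[0], x2))

# Field components of the example solution (all static, so d/dx^0 terms drop out).
He = (1 + theta*x1, sp.Integer(0), sp.Integer(0))      # H^e
De = (sp.Integer(0), sp.Integer(0), sp.Integer(0))     # D^e
Hm = (sp.Integer(0), sp.Integer(0), sp.Integer(0))     # H^m
Dm = (-(1 - theta)*x1, x2, sp.Integer(0))              # D^m

# Divergences reproduce the constant charge densities theta^23 (and zero), as stated.
assert sp.simplify(div(He) - theta) == 0 and sp.simplify(div(De)) == 0
assert sp.simplify(div(Dm) - theta) == 0 and sp.simplify(div(Hm)) == 0
# All curls vanish, so the remaining non-commutative Maxwell equations hold as well.
assert all(sp.simplify(c) == 0 for c in curl(De) + curl(He) + curl(Hm) + curl(Dm))
```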
**Remark 3.10**.: _In the light of Remark 3.7, \(A^{e}_{\mu}\) and \(A^{m}_{\mu}\) could have any physical sense only when the Yang-Mills action becomes invariant under the classical gauge transformations of the electromagnetic theory._ **Remark 3.11**.: _The whole classical Electromagnetic Theory (Appendix A) can be recovered by considering_ \[\theta^{\mu\nu}\longrightarrow 0\qquad\text{ and }\qquad A^{m}_{\mu}\longrightarrow 0.\] ## 4. Quantization #### 4.0.1. The Non-Standard \(1d\)-Differential Calculi #### 4.0.2. The \(2d\)-Differential calculi [MISSING_PAGE_POST] All of these are reasons to keep the research going. Particularly, it is possible to add matter fields in our framework. In addition, all of these motivate the development of a non-commutative \(SU(2)\)-gauge theory or a non-commutative \(SU(3)\)-gauge theory. Moreover, the general formulation presented in [18, 19] allows one to work with other quantum space-times, not only with the Moyal-Weyl algebra [27]. The operator \(S^{\omega}\) is non-zero only in the non-commutative context, and for that reason it completely changes the mathematical formulation of these kinds of models. It is worth mentioning that this operator appears naturally in the general non-commutative Bianchi identity ([15]) and it also appears naturally when varying the qpc in the non-commutative Yang-Mills functional ([19]), so it is essential to consider it in non-commutative gauge theory, just as this paper shows. ## Appendix A Classical Maxwell Equations Consider a principal \(U(1)\)-bundle over the Minkowski space-time: \(\mathbb{R}^{4}\) with the metric \(\eta=\operatorname{diag}(1,-1,-1,-1)\). Since \(\mathbb{R}^{4}\) is contractible, every bundle over it is trivializable, so without loss of generality, let us consider the trivial principal bundle (A.1) \[\operatorname{proy}:\mathbb{R}^{4}\times U(1)\longrightarrow\mathbb{R}^{4}.\] In this case, principal connections \[\omega:T(\mathbb{R}^{4}\times U(1))\longrightarrow\mathfrak{u}(1)\] are in bijection with globally defined \(\mathfrak{u}(1)\)-valued differential 1-forms: \(i\,A^{\omega}\), where \(i=\sqrt{-1}\) and \[A^{\omega}=\sum_{\mu=0}^{3}A_{\mu}\,dx^{\mu}\;\in\;\Omega^{1}(\mathbb{R}^{4})\] is usually called _the potential 1-form_; and the vector \((A_{0},A_{1},A_{2},A_{3})\) created by the coefficients of \(A^{\omega}\) receives the name of _four-vector potential_. For this case the curvature form of \(\omega\) \[R^{\omega}:\Omega^{2}(\mathbb{R}^{4}\times U(1))\longrightarrow\mathfrak{u}(1)\] is related to the potential 1-form by means of the (de Rham) differential, i.e., curvature forms are in bijection with (A.2) \[dA^{\omega}=:F^{\omega}\;\in\;\Omega^{2}(\mathbb{R}^{4}).\] In general every principal connection has to satisfy the (second) Bianchi identity; however, due to the fact that \(U(1)\) is an _abelian_ group, for our case this identity reduces to ([1]) (A.3) \[0=dF^{\omega}=d^{2}A^{\omega},\] which is a trivial relation taking into account the properties of \(d\). Nevertheless it is worth remarking again that the last equation arises from: the relation between the curvature and the 1-form potential, the Bianchi identity and the commutativity of \(U(1)\); all of this in the context of the graded-commutative de Rham algebra \(\Omega^{\bullet}(\mathbb{R}^{4})\).
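For completeness, the triviality of Equation A.3 can be spelled out in coordinates: \[dF^{\omega}=d^{2}A^{\omega}=\sum_{\lambda,\mu,\nu}\partial_{\lambda}\partial_{\mu}A_{\nu}\,dx^{\lambda}\wedge dx^{\mu}\wedge dx^{\nu}=0,\] since \(\partial_{\lambda}\partial_{\mu}A_{\nu}\) is symmetric in \(\lambda\), \(\mu\) while \(dx^{\lambda}\wedge dx^{\mu}\) is antisymmetric.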
On the other hand, the Yang-Mills functional is a functional from the space of principal connections to \(\mathbb{R}\) which measures the squared norm of the curvature of a principal connection, and the Yang-Mills equation comes from a variational principle in which we look for critical points. For our case, again by the relation between the curvature and the 1-form potential and the commutativity of \(U(1)\), the Yang-Mills equation is ([1]) (A.4) \[0=d^{\star}F^{\omega}=d^{\star}dA^{\omega},\] where \(d^{\star}=(-1)^{k}\star^{-1}\circ d\,\circ\star\) is the codifferential, the formal adjoint2 operator of \(d\), and \(\star\) is the Hodge star operator [1]. Footnote 2: Whenever the integrals converge. Now from Equations A.3, A.4 it is possible to obtain the Maxwell equations [1]. In fact, by choosing \((A_{0},A_{1},A_{2},A_{3})=(\phi,-\mathbf{A})\) with \(\mathbf{A}\) a vector field in \(\mathbb{R}^{3}\), Equation A.2 becomes (A.5) \[F^{\omega}=\sum_{0\leq\mu<\nu\leq 3}F_{\mu\nu}\,dx^{\mu}\wedge dx^{\nu}\quad\text{where}\quad(F_{\mu\nu})=\begin{pmatrix}0&E_{1}&E_{2}&E_{3}\\ -E_{1}&0&-B_{3}&B_{2}\\ -E_{2}&B_{3}&0&-B_{1}\\ -E_{3}&-B_{2}&B_{1}&0\end{pmatrix},\] where (A.6) \[\mathbf{E}=(E_{1},E_{2},E_{3}):=-\frac{\partial\mathbf{A}}{\partial x^{0}}-\nabla\phi\] is the _electric field_, (A.7) \[\mathbf{B}=(B_{1},B_{2},B_{3}):=\nabla\times\mathbf{A}\] is the _magnetic field_, and the 2-form \((F_{\mu\nu})\) is called _the electromagnetic field tensor._ By substituting the value of \(F^{\omega}\) in Equation A.3 we get the Gauss Law for magnetism and the Faraday equation (A.8) \[\nabla\cdot\mathbf{B}=0,\] (A.9) \[\nabla\times\mathbf{E}+\frac{\partial\mathbf{B}}{\partial x^{0}}=0;\] while by substituting it in Equation A.4 we get the Gauss Law and the Ampere equation (A.10) \[\nabla\cdot\mathbf{E}=0\] (A.11) \[\nabla\times\mathbf{B}-\frac{\partial\mathbf{E}}{\partial x^{0}}=0.\] Equations A.8-A.11 are the Maxwell equations in the vacuum [1]. \(\phi\) receives the name of _the electric potential_ or _scalar potential_; while \(\mathbf{A}\) receives the name of _the magnetic potential_ or _vector potential_. In index notation we have (A.12) \[A_{\mu}=(\phi,-\mathbf{A})\] and (A.13) \[F_{\mu\nu}=\partial_{\mu}A_{\nu}-\partial_{\nu}A_{\mu}.\] The geometrical equation can be written as (A.14) \[\partial_{\mu}\widetilde{F}^{\mu\nu}=0,\] and the dynamical equation can be written as (A.15) \[\partial_{\mu}F^{\mu\nu}=0,\] where \[F^{\mu\nu}=\eta^{\mu\alpha}\eta^{\nu\beta}F_{\alpha\beta},\qquad\widetilde{F}^{\mu\nu}=\frac{1}{2}\epsilon^{\mu\nu\alpha\beta}F_{\alpha\beta},\] with \(\epsilon^{\mu\nu\alpha\beta}\) the Levi-Civita symbol and \(\eta_{\mu\nu}=\eta^{\mu\nu}=\mathrm{diag}(1,-1,-1,-1)\). The tensor \(\widetilde{F}^{\mu\nu}\) receives the name of the dual electromagnetic field tensor. It is worth mentioning that Equation A.3 shows explicitly the gauge transformation symmetry: (A.16) \[A^{\omega}\longrightarrow A^{\omega\prime}:=A^{\omega}+d\chi\quad\text{ or in index notation }\quad A^{\prime}_{\mu}:=A_{\mu}+\partial_{\mu}\chi,\] for any \(\chi\in C^{\infty}(\mathbb{R}^{4})\).
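As a worked instance of this symmetry, in index notation the invariance is immediate from Equation A.13: \[F^{\prime}_{\mu\nu}=\partial_{\mu}(A_{\nu}+\partial_{\nu}\chi)-\partial_{\nu}(A_{\mu}+\partial_{\mu}\chi)=F_{\mu\nu}+(\partial_{\mu}\partial_{\nu}-\partial_{\nu}\partial_{\mu})\chi=F_{\mu\nu},\] so the electromagnetic field tensor, and with it Equations A.8-A.15, are unchanged under the transformation A.16.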
2309.10845
Water absorption in the transmission spectrum of the water-world candidate GJ9827d
Recent work on the characterization of small exoplanets has allowed us to accumulate growing evidence that the sub-Neptunes with radii greater than $\sim2.5\,R_\oplus$ often host H$_2$/He-dominated atmospheres both from measurements of their low bulk densities and direct detections of their low mean-molecular-mass atmospheres. However, the smaller sub-Neptunes in the 1.5-2.2 R$_\oplus$ size regime are much less understood, and often have bulk densities that can be explained either by the H$_2$/He-rich scenario, or by a volatile-dominated composition known as the "water world" scenario. Here, we report the detection of water vapor in the transmission spectrum of the $1.96\pm0.08$ R$_\oplus$ sub-Neptune GJ9827d obtained with the Hubble Space Telescope. We observed 11 HST/WFC3 transits of GJ9827d and find an absorption feature at 1.4$\mu$m in its transit spectrum, which is best explained (at 3.39$\sigma$) by the presence of water in GJ9827d's atmosphere. We further show that this feature cannot be caused by unnoculted star spots during the transits by combining an analysis of the K2 photometry and transit light-source effect retrievals. We reveal that the water absorption feature can be similarly well explained by a small amount of water vapor in a cloudy H$_2$/He atmosphere, or by a water vapor envelope on GJ9827d. Given that recent studies have inferred an important mass-loss rate ($>0.5\,$M$_\oplus$/Gyr) for GJ9827d making it unlikely to retain a H-dominated envelope, our findings highlight GJ9827d as a promising water world candidate that could host a volatile-dominated atmosphere. This water detection also makes GJ9827d the smallest exoplanet with an atmospheric molecular detection to date.
Pierre-Alexis Roy, Björn Benneke, Caroline Piaulet, Michael A. Gully-Santiago, Ian J. M. Crossfield, Caroline V. Morley, Laura Kreidberg, Thomas Mikal-Evans, Jonathan Brande, Simon Delisle, Thomas P. Greene, Kevin K. Hardegree-Ullman, Travis Barman, Jessie L. Christiansen, Diana Dragomir, Jonathan J. Fortney, Andrew W. Howard, Molly R. Kosiarek, Joshua D. Lothringer
2023-09-19T18:00:03Z
http://arxiv.org/abs/2309.10845v1
# Water absorption in the transmission spectrum of the water-world candidate GJ 9827 d ###### Abstract Recent work on the characterization of small exoplanets has allowed us to accumulate growing evidence that the sub-Neptunes with radii greater than \(\sim 2.5\,R_{\oplus}\) often host H\({}_{2}\)/He-dominated atmospheres both from measurements of their low bulk densities and direct detections of their low mean-molecular-mass atmospheres. However, the smaller sub-Neptunes in the 1.5-2.2 R\({}_{\oplus}\) size regime are much less understood, and often have bulk densities that can be explained either by the H\({}_{2}\)/He-rich scenario, or by a volatile-dominated composition known as the "water world" scenario. Here, we report the detection of water vapor in the transmission spectrum of the \(1.96\pm 0.08\) R\({}_{\oplus}\) sub-Neptune GJ 9827 d obtained with the Hubble Space Telescope. We observed 11 HST/WFC3 transits of GJ 9827 d and find an absorption feature at 1.4\(\mu\)m in its transit spectrum, which is best explained (at 3.39\(\sigma\)) by the presence of water in GJ 9827 d's atmosphere. We further show that this feature cannot be caused by unocculted star spots during the transits by combining an analysis of the K2 photometry and transit light-source effect retrievals. We reveal that the water absorption feature can be similarly well explained by a small amount of water vapor in a cloudy H\({}_{2}\)/He atmosphere, or by a water vapor envelope on GJ 9827 d. Given that recent studies have inferred an important mass-loss rate (\(>0.5\) M\({}_{\oplus}\)/Gyr) for GJ 9827 d making it unlikely to retain a H-dominated envelope, our findings highlight GJ 9827 d as a promising water world candidate that could host a volatile-dominated atmosphere. This water detection also makes GJ 9827 d the smallest exoplanet with an atmospheric molecular detection to date. Exoplanets (498); Exoplanet atmospheres (487); Planetary atmospheres (1244) ## 1 Introduction While many questions remain regarding the nature of sub-Neptune exoplanets, the last decade of transmission spectroscopy with the Hubble Space Telescope (HST) has shown that the larger sub-Neptunes are often best described by H\({}_{2}\)/He-dominated atmospheres (e.g., Benneke et al., 2019, 2020, 2022). However, this picture is much less clear when considering sub-Neptunes that are in the smaller 1.5-2.2 R\({}_{\oplus}\) size-regime, near the radius valley (Fulton et al., 2017; Fulton and Petigura, 2018; Van Eylen et al., 2018; Hardegree-Ullman et al., 2020). These planets have bulk densities than can be explained by either the H\({}_{2}\)/He-rich sub-Neptune scenario, or by a volatile-dominated composition, where water (or another molecule of similar mean-molecular-weight) sub-plants H\({}_{2}\) and He as the most abundant atmospheric species (Rogers and Seager, 2010; Luque and Palle, 2022; Rogers et al., 2023). This type of exoplanet has been long theorized and is referred to as "water world" (Adams et al., 2008; Acuna et al., 2022). These smaller sub-Neptunes, which are often inconsistent with extended H-dominated atmospheres with large scale heights (Aguichine et al., 2021; Piaulet et al., 2022), are also found in a smaller mass regime than the larger sub-Neptunes, making these close-in planets much more exposed to mass-loss processes (Owen, 2019), and thus more likely to have lost their H\({}_{2}\) and He envelope over their lifetime. 
A recent study found a first line of evidence for the existence of such volatile-rich water worlds in the super-Earth Kepler-138 d, by combining a thorough interior analysis of the planet with mass-loss estimates, effectively showing that this super-Earth cannot be purely rocky, but that it also cannot retain a hydrogen layer (Piaulet et al., 2022). However, the direct spectroscopic confirmation of a volatile-rich high mean-molecular-weight atmosphere on a water world candidate still eludes us, and such a result would provide a new line of evidence for the water worlds. The discovery of the transiting sub-Neptune GJ 9827 d (Niraula et al., 2017; Rodriguez et al., 2018) represents a rich opportunity to characterize the atmosphere of a warm sub-Neptune via transmission spectroscopy and to deepen our understanding of this potential water world (Aguichine et al., 2021). Rapidly orbiting (6.2 days) a low-mass K6V star with a size of \(1.96\pm 0.08\,R_{\oplus}\), a mass of \(3.4\pm 0.6\,M_{\oplus}\)(Kosiarek et al., 2021), and a zero-albedo equilibrium temperature of \(680\pm 25\) K (Rodriguez et al., 2018), GJ 9827 d allows us to obtain a high signal-to-noise ratio (S/N) in transmission spectroscopy, and to add a precious new target to the sample of sub-Neptunes with transit spectra. While JWST now allows to observe the eclipses and phase curves of small exoplanets deeper in the infrared (e.g., Kempton et al., 2023), transit spectroscopy remains the best method to obtain in-depth looks into the atmospheres of sub-Neptunes and potential water worlds with HST, as they rarely are hot enough to provide high S/N in the near-infrared (hot Neptune desert; Owen and Lai, 2018). While the average density of GJ 9827 d has now been constrained (\(>3\sigma\)) by numerous RV studies of the system (Prieto-Arranz et al., 2018; Rice et al., 2019; Kosiarek et al., 2021), there still remains ambiguity regarding its bulk composition, as its density could be explained by a range of compositions from an extended H\({}_{2}\)/He layer to a water world composition with a \(\sim\)20 % water mass fraction (Aguichine et al., 2021). However, the high irradiation of the planet, the old age of the system (Kosiarek et al., 2021) and the non-detections of HeI and H\(\alpha\) absorption from the ground (Kasper et al., 2020; Carleo et al., 2021; Krishnamurthy et al., 2023) make it unlikely that GJ 9827 d would have retained a primordial H-dominated envelope to date. In this work, we present the most precise look yet at GJ 9827 d via transmission spectroscopy with the Hubble Space Telescope Wide Field Camera 3 (HST/WFC3), and reveal a water absorption feature in its transmission spectrum. In Section 2, we present the observations obtained for this study and we describe the data analysis in Section 3. Section 4 presents our atmospheric analysis of the HST transit spectrum and the related results are presented in Section 5. We end by discussing our findings and presenting our conclusions in Section 6. ## 2 Observations and Data Reduction GJ 9827 d was observed transiting its host star 11 times between December 2017 and December 2020 with the Wide Field Camera 3 (WFC3) onboard the Hubble Space Telescope (HST) as part of the mini-Neptune atmosphere diversity survey (GO 15333: PI Crossfield). The G141 grism was used in order to obtain the transmission spectrum of the sub-Neptune over the 1.1-1.7 \(\mu\)m range. 
Each of the 11 HST/WFC3 transit observations consisted of \(\sim\)3 hours of observing time spanning three telescope orbits with \(\sim\)1 hour gaps between them. Each transit observation is thus composed of one orbit before, during, and after transit. The transit time series were obtained with the G141 grism using the spatial scan mode. In order to optimize the duty cycle of our observation, we used both the forward and backward detector scans. We discard one of the transits from our analysis (November 1st 2019) since a pointing maneuver cut orbit 1 short before the ramp was stabilized, effectively carrying an unusually strong ramp to orbit 2 and polluting the in-transit observations. We reduce the observations following standard procedures for HST/WFC3 observations (details in Benneke et al., 2019, 2019). In order to minimize the background contribution, we subtract consecutive reads up-the-ramp and then add together the background subtracted frames. We then construct flat-fielded images from the flat-field data product provided by STScI. We use a normalized row-added flux template in order to remove and replace outlier pixels in our frames. We follow Benneke et al. (2019) in order to correct for the slight slanted shape of the trace on the detector, which is introduced by the spatial scan mode, using a trapezoidal shape integration scheme for the wavelength bins, which we choose to be 30 nm wide. Our flux integration does not perform presmoothing and does account for partial pixels along the trapezoidal bin boundaries. Finally, in order to account for the small drift of the star across the detector during the observations, we account for a small position shift which is measured in each frame. ## 3 Data Analysis We perform the light-curve fitting of our 10 transits of GJ 9827 d individually using the ExoTEP framework (Benneke et al., 2017, 2019, 2019). We use ExoTEP to jointly fit the transit model with a systematics model and a photometric noise parameter in a Markov Chain Monte Carlo scheme (Foreman-Mackey et al., 2013). We decide to fit the transits individually since they display variability in the transit timings (see Figures 1, 2, and Section 6.1). Each visit in our data set consists of three HST orbits which do not cover the full transit duration of GJ 9827 d (Figure 1). Figure 1: All 10 HST/WFC3 broadband light-curve fits of the transits of GJ 9827 d. **Left:** Systematics-corrected and normalized broadband light curves for the 10 transits of GJ 9827 d (data points). Each visit is centered around the fitted transit time for that visit. The best-fitting transit model is also shown as the grey line. **Right:** Residuals of the broadband light-curve fits shown on the left. The in-transit observations only occur during the second orbit of each visit, either observing the ingress, the middle of transit, or the egress (Figure 1). For that reason, we cannot obtain reliable constraints on the orbital parameters out of the partial transit observations and decide to use a fixed orbital solution during the fits (\(b\)=0.91, \(a/R_{\star}=19.88\)). Since the visits do not include a burn-in orbit, we cannot follow the standard procedure to discard the first orbit, which displays a stronger ramp in time as the detector is still settling (e.g., Kreidberg et al., 2014).
We rather choose to keep the 2-4 last points of orbit 1 in each visit (Figure 1), as the strong ramp has stabilized by then, and it provides essential baseline information, especially for visits that only have mid-transit or egress data in orbit 2 (Figure 1). For all orbits, we discard the first forward and backward scans. Because GJ 9827 is a close-in, near-resonant system, some of the visits in our data set also include transits of GJ 9827 b. Given the partial coverage of our visits, we simply remove the points where GJ 9827 b is expected to transit, which affects visits 5 and 10. Visits 6 and 7 also include a transit of planet b, but it happens in the first orbit which is already mostly discarded. Finally, we remove the last 5 points of visit 3 since they are clear outliers. ### White-light-curve fit We fit for systematics trends in the normalized transit light curves simultaneously with the transit model using an analytical model that allows for a linear slope throughout the visit duration and an exponential ramp in each orbit. Following previous work (e.g., Kreidberg et al., 2014, Benneke et al., 2019) we use the following parameterization to account for these systematics: \[S_{\rm model}(t)=(c\,S(t)+v\,t_{v})\times(1-e^{(-a\,t_{\rm orb}-b-d)}). \tag{1}\] Here, \(c\) is the normalization constant, \(v\) is the linear slope throughout the visit, \(a\) and \(b\) are the rate and amplitude of the exponential ramp in each orbit, and \(d\) is an offset only for first orbit reads. \(S(t)\) is a function that is equal to 1 for forward scans and to \(s\) for backward scans, allowing to correct for the offset between forward and backward scans. Finally \(t_{v}\) is the time since the start of the visit, while \(t_{orb}\) is the time since the start of the orbit. We produce our transit light-curve models using the Batman package (Kreidberg, 2015). Since we are not trying to fit the orbital solution, the only two astrophysical parameters that we are fitting in the transit light curve are the transit depth, and the mid-transit time (Figure 2). For the limb darkening, we use the Exotic-LD package (Grant and Wakeford, 2022) to compute the coefficients using 1D stellar models (Kurucz, 1993) and the quadratic limb darkening law. The impact parameter and semi-major axis are set to the values in Niraula et al. (2017). We obtain the posterior distribution on our parameters by running a MCMC analysis individually for each of the 10 visits. We use four walkers per parameter and all priors are uniform. The 10 white-light-curve fits are shown in Figure 1. ### Spectroscopic light-curve fit We use the white-light-curve fits results in terms of the systematics model to pre-correct the light curves in each spectral bin (divide-white method; Stevenson et al., 2014). Thus, we divide the spectroscopic light curves by the white-light-curve best-fitting systematics model before starting the fitting. We produce our spectroscopic transit light-curve models similarly as in the white-light-curve case, but we now keep the mid-transit time fixed to the best-fitting value found by the white-light-curve fit. The limb darkening is again modelled with Exotic-LD, and this time, our systematics model is a three-parameter linear slope with trace position (measured during the data reduction, see Section 2). We thus obtain 10 transmission spectra, one from each visit, by running an MCMC analysis on each. 
We again use four walkers per parameter and uniform priors on all parameters. Figure 2: Observed mid-transit time (black points) for each of the 10 transits compared to a fitted linear ephemeris to all transit timings presented (except the two timings with large uncertainties; \(T_{0,BJD}=2459185.987\pm 0.002\), \(P=6.20186\pm 0.00002\) d). The ephemeris derived from the K2 campaign is shown (blue; Niraula et al., 2017) along with the Spitzer transit for this planet (red; Kosiarek et al., 2021). Our 10 HST/WFC3 transits suggest that there are statistically significant TTVs in the orbit of GJ 9827 d of the order of 5-10 minutes. ### Combining all the visits together We compute a weighted average of our 10 individual transmission spectra to obtain our final transmission spectrum of GJ 9827 d. In order to verify the robustness of our spectrum, and to ensure that it is not affected by that variability in the observations, we compare the relative transit depth in each channel, for each visit (Figure 3). From each spectrum, we subtract the weighted average (across wavelengths) of said spectrum, essentially making it a relative transit spectrum centered around zero. We then subtract the relative combined spectrum of all visits (also centered around zero) from each individual spectrum to effectively center each spectroscopic transit depth around zero. We then inspect this relative transit depth for each spectroscopic bin and for each visit, in order to ensure that the points in our final transmission spectrum are not affected by outliers (Figure 3). Figure 3: Individual relative transit depths compared to the relative final transmission spectrum. For each of the 10 transmission spectra, we obtain the relative transit spectrum by subtracting the average across wavelengths. We then subtract the relative final transmission spectrum from the individual relative spectra, and display these transit depths for all visits in each channel. The relative transit depths are consistent across the 10 visits with the final relative spectrum. Points that are significantly away from zero (dotted lines) have larger error bars which makes them less important in the weighted average and thus do not bias our transmission spectrum. The uncertainty on the combined spectrum is also displayed as the grey region. In the top left panel, the broadband transit depths are shown, centered on the average across the 10 visits, highlighting the variability in the observed broadband transit depths from the different visits. We find that for each spectroscopic channel, all visits mostly agree with the weighted average within error bars, and points that are inconsistent have larger error bars, which makes them much less important in the weighted average, since the weight of each point is inversely proportional to the uncertainty squared (Figure 3). The final average transmission spectrum is presented in Table 1 and in Figure 4. We decide to discard the last spectroscopic channel (1.67-1.70 \(\mu\)m) since it is systematically lower than the rest of the spectrum, and is near the edge of the trace on the detector where the data is less reliable. ## 4 Atmospheric Modeling We perform atmosphere retrievals on our GJ 9827 d transmission spectrum using the SCARLET framework (Benneke and Seager, 2012, 2013, Knutson et al., 2014, Kreidberg et al., 2014, Benneke, 2015, Benneke et al., 2019, 2019, 2021, 2021, 2022). SCARLET parameterizes the molecular abundances, the cloud deck pressure and the temperature to fit our spectrum.
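For concreteness, the free parameters and priors described in the following paragraphs can be summarized schematically as follows; this is an illustrative sketch only, with parameter names and the prior-transform helper chosen by us, not the actual SCARLET implementation.

```python
import numpy as np
from scipy.stats import norm

# Schematic summary of the seven free parameters of the free-chemistry retrieval
# (values taken from the text); H2 and He are not free parameters and fill the
# remainder of the atmosphere.
free_parameters = {
    # log10 volume mixing ratios, log-uniform between 1e-10 and 1
    "log_H2O": ("uniform", -10.0, 0.0),
    "log_CH4": ("uniform", -10.0, 0.0),
    "log_CO2": ("uniform", -10.0, 0.0),
    "log_CO":  ("uniform", -10.0, 0.0),
    "log_N2":  ("uniform", -10.0, 0.0),
    # isothermal temperature [K], Gaussian prior centered on the equilibrium temperature
    "T_iso": ("gaussian", 680.0, 100.0),
    # log10 cloud-top pressure [mbar], log-uniform between 1e-4 and 1e5 mbar
    "log_Pcloud": ("uniform", -4.0, 5.0),
}

def prior_transform(u):
    """Map unit-cube samples (as used in nested sampling) to physical parameter values."""
    x = np.empty_like(u)
    for i, (kind, a, b) in enumerate(free_parameters.values()):
        if kind == "uniform":
            x[i] = a + (b - a) * u[i]
        else:  # "gaussian": a is the mean, b the standard deviation
            x[i] = norm.ppf(u[i], loc=a, scale=b)
    return x

# Example: one random draw from the joint prior.
print(prior_transform(np.random.uniform(size=len(free_parameters))))
```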
SCARLET uses a Bayesian nested sampling analysis (with single ellipsoid sampling; Skilling, 2004, 2006) to obtain the posterior probability distribution of our parameter space and the Bayesian evidence of our models. \begin{table} \begin{tabular}{l c c c} \hline \hline Instrument & Wavelength & Depth & \(\pm 1\sigma\) \\ & [\(\mu\)m] & [ppm] & [ppm] \\ \hline HST/WFC3 & 1.10 – 1.13 & 941.3 & 59.7 \\ & 1.13 – 1.16 & 1003.3 & 28.3 \\ & 1.16 – 1.19 & 939.6 & 24.5 \\ & 1.19 – 1.22 & 945.5 & 23.4 \\ & 1.22 – 1.25 & 987.7 & 22.4 \\ & 1.25 – 1.28 & 965.7 & 18.8 \\ & 1.28 – 1.31 & 921.0 & 22.3 \\ & 1.31 – 1.34 & 998.7 & 20.3 \\ & 1.34 – 1.37 & 1018.9 & 22.1 \\ & 1.37 – 1.40 & 986.8 & 21.9 \\ & 1.40 – 1.43 & 1015.4 & 21.2 \\ & 1.43 – 1.46 & 960.4 & 22.1 \\ & 1.46 – 1.49 & 982.5 & 23.0 \\ & 1.49 – 1.52 & 992.2 & 22.5 \\ & 1.52 – 1.55 & 953.8 & 21.3 \\ & 1.55 – 1.58 & 935.3 & 22.8 \\ & 1.58 – 1.61 & 919.3 & 23.5 \\ & 1.61 – 1.64 & 944.0 & 19.1 \\ & 1.64 – 1.67 & 941.2 & 21.9 \\ \hline \hline \end{tabular} \end{table} Table 1: HST/WFC3 near-infrared combined spectrum of GJ 9827 d Figure 4: Water detection in the transmission spectrum of GJ 9827 d. **Left:** Transmission spectrum of GJ 9827 d (black points) shown with our model transmission spectra constraints from the nested sampling atmosphere retrieval (blue) and from the photometry-informed “transit light-source effect” retrieval (orange). The dark blue and light blue shaded regions show the \(1\sigma\) and \(2\sigma\) Bayesian credible intervals from the atmosphere retrieval respectively. The atmospheric median transmission model is shown in blue and the best-fitting model is shown in red. The best-fitting TLS model is shown in orange along with the \(1\sigma\) and \(2\sigma\) Bayesian credible intervals in light orange. **Right:** Joint constraints on the cloud-top pressure versus the water mixing ratio derived from our Scarlet well-mixed retrieval. The colored shading describes the normalized probability density as a function of the water mixing ratio (assuming uniform vertical profiles) of the atmosphere, and of the cloud-top pressure. The black contours highlight the 1, 2 and \(3\sigma\) Bayesian credible regions. The water abundance relative to a solar composition atmosphere is shown on the top axis. The posterior probability distribution allows for multiple atmospheric scenarios ranging from H\({}_{2}\)/He envelopes with small amounts of water to water-dominated envelopes. The blue points identify two representative samples of these two scenarios which are displayed in Figure 6. For each set of parameters, SCARLET produces a forward atmosphere model in hydrostatic equilibrium (Benneke, 2015), computes the opacity associated to each molecule throughout the 40 vertical (pressure) layers of the model, computes the transmission spectrum for that model and finally performs the likelihood evaluation. The model transmission spectra produced at each step have a resolution of 16000 and are then convolved to the wavelength bins of the observed spectrum. The molecules considered in our retrieval are H\({}_{2}\)O, CH\({}_{4}\), CO\({}_{2}\), CO and N\({}_{2}\), as well as H\({}_{2}\) and He which are not parameterized and rather fill up the atmosphere (Benneke & Seager, 2013). We assume well-mixed vertical chemical profiles, where the abundances of molecules do not vary throughout the atmosphere. We choose a log-uniform prior on Figure 5: Posterior probability distributions of the free parameters used in the SCARLET free chemistry retrieval. 
The diagonal panels show the marginalized probability distributions of all individual parameters, whereas the off-diagonal panels show the marginalized probability distributions for each pair of parameters as colored shading. The 1, 2, and 3\(\sigma\) contours are shown in the 2D posteriors. Water is the only molecule detected in our retrieval analysis. the abundance of each molecule ranging from 10\({}^{-10}\) to 1 in volume mixing ratio. We assume a constant temperature structure throughout the atmosphere and use a Gaussian prior centered on the planet's equilibrium temperature (680 \(\pm\) 100 K; Rodriguez et al., 2018) on that parameter. The parameterization also includes a cloud deck top pressure, which blocks all light rays going through that pressure level. We again use a log-uniform prior on that parameter from 10\({}^{-4}\) mbar to 10\({}^{5}\) mbar. Thus, our atmosphere retrieval includes seven free parameters in total (or less when molecules are removed, see Table 2). ## 5 Results The observed transit spectrum of GJ 9827 d displays a water absorption feature at 1.4 \(\mu\)m (Figure 4). Qualitatively, the transit depths in the spectrum are deeper in the 1.4 \(\mu\)m water band followed by a downward slope that follows the wing of the water absorption feature. Quantitatively, a Bayesian model comparison analysis of our well-mixed retrievals (Benneke and Seager, 2013) prefers models that include the presence and absorption of water with a significance of 3.39\(\sigma\) (Bayes Factor = 72.52; Table 2) compared to models that do not include water. ### Metallicity-clouds degeneracy Our free chemistry retrievals show that the data can both be explained by a water-rich atmosphere, where water is the most abundant molecule, as well as with a H\({}_{2}\)/He-dominated atmosphere that still contains a small amount of water (Figure 4). At 1\(\sigma\), models with a water mixing ratio between 0.02% and 80% are preferred by the spectrum. When compared to the amount of water in a solar metallicity envelope, we see that this interval in abundance ranges from 1 \(\times\) solar metallicity models, which are dominated by H\({}_{2}\) and He gas, to 1000 \(\times\) solar metallicity models, where water is now the principal species (Figure 4). In the cases where water is present in small amounts, a cloud deck is needed to explain the observed transit spectrum (Figures 4, 5). This is due to the fact that the observed spectrum does not display the large amplitude expected from cloud-free models (Figure 6). This need for clouds in low mean-molecular-weight atmospheres is also seen in the marginalized probability distributions of the other molecules (Figure 5), since their spectral features are inconsistent with the observed spectrum, and thus must be muted by clouds in order to yield high-likelihood models. In cases where the water is more abundant and becomes the principal molecule, the spectra naturally display muted features because of the high density of the atmospheres and lower atmospheric scale height, thus removing the need for high clouds. In this water-rich scenario, the constraint on the cloud-top pressure disappears, explaining the observed posterior distribution (all deep cloud decks become equally consistent; Figure 4). ### Upper limits on other molecules While methane is known to display a similar 1.4 \(\mu\)m absorption feature as water (Bezard et al., 2022), it remains disfavored in our free chemistry retrieval analysis. 
The spectral signatures of methane not only include a feature around 1.4 \(\mu\)m, but also at 1.2 \(\mu\)m and at 1.7 \(\mu\)m, as shown by SCARLET forward atmosphere models for a pure methane envelope and for a solar composition atmosphere (Figure 6). However, the observed transmission spectrum of GJ 9827 d does not display these absorption features at 1.2 and 1.7 \(\mu\)m (Figure 6) and is in better agreement with the water models that display a smaller feature at 1.2 \(\mu\)m, no absorption at 1.7 \(\mu\)m, and a broader feature at 1.4 \(\mu\)m. In order to obtain chemically consistent models that agree with the observed transit spectrum, the carbon-to-oxygen (C/O) ratio must be decreased in order to favor O-bearing molecules (here, water) and a cloud deck must be included to mute the spectral amplitude of the water absorption features. Such models (e.g. C/O = 0.1, Metallicity = 100 \(\times\) solar and pCloud = 1 mbar) yield qualitatively and quantitatively similar transmission spectra to those favored by our free chemistry retrieval (Figures 4, 6).

No other molecule besides water is statistically detected by our retrievals (Table 2, Figure 5). However, we can derive upper limits on their abundances based on our results from the free chemistry retrievals, either from the non-detection of specific spectral features (e.g., CH\({}_{4}\)) or because too much of any one species increases the mean-molecular-weight of the atmosphere, eventually yielding a flat spectrum (e.g., N\({}_{2}\)). Thus, our retrievals allow us to constrain the upper limits on the mixing ratios of CH\({}_{4}\), CO\({}_{2}\), CO and N\({}_{2}\) to 3.04, 19.4, 52.5 and 60.4 % at 3\(\sigma\) significance.

\begin{table} \begin{tabular}{l c c c} \hline \hline Retrieval model & Evidence & Bayes Factor & N\({}_{\sigma}\) \\ & ln(Z\({}_{i}\)) & B = Z\({}_{ref}\)/Z\({}_{i}\) & \\ \hline All molecules + clouds & -90.68 & ref. & ref. \\ H\({}_{2}\)O removed & -94.96 & 72.52 & 3.39 \\ CH\({}_{4}\) removed & -90.24 & 0.65 & 0.90 \\ CO\({}_{2}\) removed & -90.45 & 0.79 & 0.90 \\ CO removed & -90.60 & 0.92 & 0.90 \\ N\({}_{2}\) removed & -90.73 & 1.06 & 1.14 \\ Clouds removed & -90.78 & 1.10 & 1.23 \\ H\({}_{2}\)O, CH\({}_{4}\) removed & -94.88 & 66.92 & 3.36 \\ Flat spectrum & -97.11 & 620.76 & 4.01 \\ \hline \hline \end{tabular} \end{table} Table 2: Bayesian model comparison results from our SCARLET atmosphere retrievals in the free chemistry setting.

### Significance of a featureless spectrum

In order to evaluate how our spectrum deviates from a featureless spectrum, we compute the deviation from the best-fitting straight line using \(\chi^{2}\) statistics. Using the binned version (bottom of Figure 6), we obtain that the transmission spectrum of GJ 9827 d deviates from a straight line at 3.24\(\sigma\). In order to assess how our water detection compares to a featureless flat spectrum within the Bayesian paradigm, we compute the Bayesian evidence of a one-parameter flat line model \(\mathcal{Z}_{\rm flat}\). Given the simplicity of this one-parameter model, we do not need to use SCARLET nested sampling to obtain that value, and rather numerically estimate it via the following analytical solution: \[\begin{split}\mathcal{Z}_{\text{flat}}=\left(\frac{1}{\sqrt{2\pi}}\right)^{N}\times\left(\prod_{i=1}^{N}\frac{1}{\sigma_{i}}\right)\times\left(\frac{1}{\theta_{2}-\theta_{1}}\right)\\ \times\int_{\theta_{1}}^{\theta_{2}}\exp\left(-\sum_{i=1}^{N}\frac{(D_{i}-\theta)^{2}}{2\sigma_{i}^{2}}\right)d\theta,\end{split} \tag{2}\] where \(N\) is the number of points in the spectrum, \(\sigma_{i}\) is the uncertainty of the \(i^{\text{th}}\) point of the spectrum denoted \(D_{i}\), and where \(\theta_{1}\) and \(\theta_{2}\) are the limits of our uniform prior on the transit depth parameter \(\theta\). This allows us to show that our atmosphere model is preferred to the flat spectrum model at 4.01\(\sigma\) (Bayes Factor = 620.76; Table 2).

Figure 6: HST/WFC3 transmission spectrum of GJ 9827 d (data points) along with SCARLET forward atmosphere models (colored lines). **Top:** Two samples from our well-mixed retrieval analysis (Figure 4) are shown, representing the mini-Neptune scenario with a cloudy H\({}_{2}\)/He atmosphere composed of \(\sim\)1% water (pale blue) and a water world scenario with a water-rich atmosphere (dark blue). **Middle:** A secondary atmosphere model for a pure methane envelope is also shown (red) in order to highlight the methane absorption features. Chemically consistent models (still assuming a uniform temperature profile) are shown for a cloud-free, solar composition case (solar metallicity, solar C/O; orange) and for a cloudy case with C/O=0.1 and a 100\(\times\) solar metallicity (green). The observed spectrum is inconsistent with cloud-free low-metallicity scenarios and prefers water absorption features to methane absorption features, mainly around 1.2 and 1.65\(\,\mu\)m. The strength of the features in the spectrum is also displayed in units of H/He scale heights (right axis). **Bottom:** The best-fit model from the retrieval analysis is shown (pale blue), along with the transmission spectrum of the same model once the water opacity is turned off (grey). The contribution of water opacity to the spectral signatures is highlighted in blue. We also present a binned version of the transmission spectrum where points are binned together by pair with the exception of the blue-most channel.

### Ruling out stellar contamination

The Transit Light Source Effect (TLS, Rackham et al., 2018) can mimic water features at the 20 ppm level or more for modestly spotted stars under certain observational configurations, and could thus create the feature observed in our transmission spectrum. The best constraint available for the starspot coverage and contrast for GJ 9827 comes from the K2 Campaign 12 (C12) lightcurve, with an SFF-derived (Vanderburg and Johnson, 2014) peak-to-valley amplitude of 0.45%, slightly lower than the typical 0.7% for K6 spectral types (Rackham et al., 2019). Coarse scaling laws can relate the observed K2 amplitude to the surface coverage of starspots, under assumptions of size, number, and location of spots on a stellar surface (Rackham et al., 2018; Guo et al., 2018). For a K6 spectral type, 0.45% amplitude variations relate (conservatively) to spot-covering fractions of 1-4% (Rackham et al., 2019). Thus, we adopt a spot contrast typical of a K6 star and a filling factor of 1-4% (Rackham et al., 2019). Under these assumptions, a planet with a 1% transit depth could expect an H\({}_{2}\)O contamination of \(<\)15 ppm from unocculted spots. GJ 9827 d's much smaller transit depth of \(<0.1\%\) would therefore yield a negligible TLS water contamination of \(<1.5\) ppm. We perform a retrieval on our combined transit spectrum in which we fit for TLS parameters rather than for atmospheric parameters.
We use the 1-4% spots filling factor as a prior in this TLS retrieval, and the other parameters are the photospheric temperature, the spots' temperature difference and a scaling factor. Our TLS models simulate unocculted star spots by averaging the stellar spectrum of the star's photosphere with the spectrum of the cooler spots (weighted by the spots coverage) based on Phoenix stellar models (Husser et al., 2013). It then simulates the transit of an airless planet to obtain the stellar-contaminated transit spectrum. When using the photometry-derived prior, we find that the TLS fitting is restricted to flat models which indicate that stellar contamination is not expected for that system (Figure 4). Repeating the same TLS retrieval with nonrestrictive uniform priors on all parameters further demonstrates that the signal cannot be caused by the star. We run the same TLS retrieval as described above, but without using the 1-4% spots filling factor prior, to see under what stellar conditions the signal can be explained by the star. We find that, in order for the TLS to reproduce the signal in our spectrum, not only does the spots parameters need to be unrealistically large (73% spots coverage and \(<-794\) K spot temperature difference), but the model needs to adopt a strong positive ramp towards short wavelengths (as in Moran et al., 2023), which becomes inconsistent with the K2 transit depth measurement. We thus conclude that stellar contamination cannot explain the feature in the transmission spectrum of GJ 9827 d, and that the water detection comes from the planet's atmosphere. ## 6 Discussion and Conclusions The HST/WFC3 transmission spectrum of GJ 9827 d presented in this work provides a precious target in the population of sub-Neptune exoplanets for which we have precise transmission spectra; and highlights GJ 9827 d as the smallest exoplanet with an unambiguous atmospheric molecular detection to date. Compared to the other characterized sub-Neptunes, our detection of a \(\sim\)1 H/He scale height water feature (Figure 6) makes it stronger than for similarly hot sub-Neptunes, although it remains broadly consistent with the previously observed trend where hotter sub-Neptunes display stronger H\({}_{2}\)O amplitudes (Crossfield and Kreidberg, 2017). Our analysis of the 10 observations of GJ 9827 d's transits also revealed some variability in this planet's orbit, which is to be considered for further monitoring of the system with state-of-the-art facilities such as JWST. Finally, our detection of a water feature in GJ 9827 d's transit spectrum provides the first detection of water in the atmosphere of a potential water world, which, when combined with GJ 9827 d's large mass-loss rate, provides a first line of evidence for this sub-Neptune hosting a water-steam dominated atmosphere. ### Variability in the transits of GJ 9827 d The analysis of the 10 transits of GJ 9827 d with HST revealed a significant variability in the transit timings observed from one visit to the other (Figure 2). While this variation is not surprising for a near-resonant system and did not impact the features observed in the transit spectrum (as shown by our analysis of the relative spectra; Figure 3), it still is in contrast with the previously observed TTVs for this system which were of the order of \(\sim\)3 minutes (Niraula et al., 2017). 
However, the 5-10 minutes TTVs observed for planet d in this work are consistent with an independent study of the TTVs of the GJ 9827 system combining all photometric and radial velocity data (Livingston et al., in prep.). The limited number of in-transit data points in the time series in this program could also explain the range of transit depths observed in our results. As described earlier, each HST orbit displays an exponential ramp in time that is fitted by our systematics model. This ramp has a much stronger effect in the first few integrations than in the last few integrations of each orbit, when it has settled. Thus, HST/WFC3 observations inherently provide better quality observations towards the end of each orbit. When considering the individual visits in our program, it seems that egress visits yield deeper transit depths than mid-transit or ingress visits (Figures 1, 3). This could then be explained by the fact that the different visits in our data set have varying in-transit data quality depending on whether the late-orbit integrations are in the transit (ingress and mid-transit visits) or are in the baseline (egress visits). For instance, visit 6, which is an egress observation, displays a deeper transit depth than in the other visits. However, the relative shape of the transmission spectrum, which is the quantity of interest for this work, is consistent with the other visits (Figure 3). Another potential source of the TTV and transit depth variability observed in our program is star-spot crossing. GJ 9827 has been shown to display quasi-periodic flux (\(\sim\)0.45%) variations with a period of \(\sim\)30 days (Rodriguez et al., 2018, Teske et al., 2018, Prieto-Arranz et al., 2018, Rice et al., 2019, Kosiarek et al., 2021). If these stellar variations were caused by stellar spots, then using a fixed orbital solution and a transit model that does not include the effect of spots in our light-curve fits could lead to biases in our retrieved parameters. Because of the variability discussed above, we decided to fix the orbital solution and limb darkening coefficients for the light-curve fits in our program (Section 3). In order to ensure that the limb darkening coefficients and stellar parameters chosen do not affect our atmospheric inference, we repeat our light-curve fits for multiple assumptions on the limb darkening. Using quadratic limb darkening coefficients, we reproduce the same fitting but using a 3D stellar model (Magic et al., 2015) when computing the coefficients. We further try the light-curve fits by varying the effective stellar temperature to the \(+1\sigma\) and \(-1\sigma\) values of that parameter (Kosiarek et al., 2021). In all cases, we find that the limb darkening and choice of stellar parameters only affect the retrieved spectrum with a constant offset throughout the wavelength range, and that the relative spectra all show the water absorption feature and are all consistent within \(1\sigma\). This thus shows that our choice for the stellar and limb darkening parameters does not affect the shape of the transmission spectrum, and subsequently, our atmospheric analysis. Similarly, given the difficult observational setting, we test the robustness of the spectrum to the systematics models used by trying two alternative light-curve fitting methods. First, we start from the divide-white corrected spectroscopic light curves and jointly fit a relative transmission spectrum. 
To do so, we jointly fit (across visits) the relative transit depth in each channel, where the broadband average of the individually fitted spectra is subtracted for each visit (since there sometimes are discrepancies between the white-light-curve transit depths, and average spectral depths in HST/WFC3 data). Secondly, we repeat the method presented in Section 3, but using the RECTE systematics model and orbit 1 (and not using the divide-white method; Zhou et al., 2017) for the seven visits which are not affected by transits of planet b in orbits 1 or 2. This gives us 7 transmission spectra that we combine with a weighted average. Both of these methods produce relative transmission spectra that are consistent within uncertainties with the one presented in Table 1. The spectrum obtained from the RECTE models has increased uncertainties, both from the smaller number of visits and from increased fitted scatter in some visits, but is still in agreement with the other two spectra. The spectrum presented in this work is thus robust to the choice of systematics model. ### Water in the envelope of a potential water world The water detection in the transit spectrum of GJ 9827 d makes it the first water world candidate with an atmospheric water detection consistent with a water-rich envelope. It thus positions itself in the sample of potential water worlds, with other small sub-Neptunes and super-Earths such as Kepler-138 d (Piaulet et al., 2022), L 98-59 d (Kostov et al., 2019), TOI-1685 b (Bluhm et al., 2021), GJ 3090 d (Almenara et al., 2022), TOI-270 d (Gunther et al., 2019, Mikal-Evans et al., 2023). In contrast, a water feature was also detected in the transit spectrum of TOI-270 d (Mikal-Evans et al., 2023), but the analysis revealed that the H-rich atmosphere scenario was favored for this sub Neptune, showing that the line can be fine between a mini-Neptune and a water world. With its small mass of 3.42 M\({}_{\oplus}\)(Kosiarek et al., 2021) and its proximity to its host star (6.2 d orbit), the estimated mass-loss rate of GJ 9827 d is \(>\)0.5 M\({}_{\oplus}\)/Gyr (Krishnamurthy et al., 2023). With an estimated age around 6 Gyr (Kosiarek et al., 2021), GJ 9827 d is unlikely to retain an extended H\({}_{2}\)/He envelope today. Furthermore, monitoring of GJ 9827 d's spectrum in the search of H\(\alpha\) and HeI signatures with Keck/NIRSPEC (Kasper et al., 2020) CARMENES (Carleo et al., 2021) and IRD (Krishnamurthy et al., 2023) resulted in no evidence of an extended H\({}_{2}\)/He atmosphere around the planet. Hence, the H-rich scenario with a smaller water abundance would provide a somewhat contradictory statement to the previous studies that observed GJ 9827 d from the ground. However, the water-rich scenario can both explain the observed HST transit spectrum, as well as the non-detection of H\(\alpha\)/HeI lines from ground-based studies. The water-dominated envelope is thus the compositional scenario that explains all of the data at hand on this system in the most natural way. In this water-rich scenario, GJ 9827 d would thus represent a larger, hotter, close-in version of the icy moons of the giant planets in the solar system. Indeed, water is believed to be the dominant volatile of the icy moons of the solar system (Schubert et al., 2004). GJ 9827 d could then have formed outside of the water ice line, where water ice is available in large amounts as a planetary building block (Mousis et al., 2019, Venturini et al., 2020). 
It could then have migrated towards its current stable near-resonant orbit, during which the increasingly important stellar irradiation would have driven an important H\({}_{2}\)/He loss, and it would be observed today with a high mean-molecular-weight water vapor atmosphere due to its warm temperature (Adams et al., 2008, Pierrehumbert, 2023) and its H\({}_{2}\)/He depletion. While our transmission spectrum cannot unambiguously distinguish between the H-rich and H-depleted scenarios, we have provided the first water detection in the envelope of a water world candidate, making it a key target for further monitoring with JWST. Transmission spectroscopy of GJ 9827 d with NIRISS/SOSS and NIRSpec/G395H would provide the high-precision continuous viewing of the full transit of the planet that is needed to explain the variability observed with HST, as well as provide a more precise transmission spectrum that could not only probe the water absorption bands, but also probe for the presence of carbon bearing species like CO and CO\({}_{2}\) above 4 \(\mu\)m. A JWST transmission spectrum of GJ 9827 d would thus lift the degeneracy observed in our study (Figure 4) and potentially confirm the water world nature of this sub-Neptune, simultaneously yielding the first direct detection of a water vapor dominated envelope. All of the data presented in this paper were obtained from the Mikulski Archive for Space Telescopes (MAST) at the Space Telescope Science Institute. The specific observations analyzed can be accessed via 10.17909/dvqh-2r48. We wish to thank the reviewer for the insightful comments which enhanced the quality of our manuscript. P.-A. R. further thanks L. Bazinet, L.-P. Coulombe and S. Pelletier for their help, comments and ideas throughout the multiple iterations of this work. This work is based on observations with the NASA/ESA HST, obtained at the Space Telescope Science Institute (STScI) operated by AURA, Inc. P.-A.R. and B.B. acknowledge financial support from the Natural Sciences and Engineering Research Council (NSERC) of Canada. P.-A.R. further acknowledges support from the University of Montreal, and from the Trottier institute for exoplanets (iREx). B.B. also acknowledges financial support from the Fond de Recherche Quebecois-Nature et Technologie (FRQNT; Quebec).
2306.17777
Canonizing Graphs of Bounded Rank-Width in Parallel via Weisfeiler--Leman
In this paper, we show that computing canonical labelings of graphs of bounded rank-width is in $\textsf{TC}^{2}$. Our approach builds on the framework of K\"obler & Verbitsky (CSR 2008), who established the analogous result for graphs of bounded treewidth. Here, we use the framework of Grohe & Neuen (ACM Trans. Comput. Log., 2023) to enumerate separators via split-pairs and flip functions. In order to control the depth of our circuit, we leverage the fact that any graph of rank-width $k$ admits a rank decomposition of width $\leq 2k$ and height $O(\log n)$ (Courcelle & Kant\'e, WG 2007). This allows us to utilize an idea from Wagner (CSR 2011) of tracking the depth of the recursion in our computation. Furthermore, after splitting the graph into connected components, it is necessary to decide isomorphism of said components in $\textsf{TC}^{1}$. To this end, we extend the work of Grohe & Neuen (ibid.) to show that the $(6k+3)$-dimensional Weisfeiler--Leman (WL) algorithm can identify graphs of rank-width $k$ using only $O(\log n)$ rounds. As a consequence, we obtain that graphs of bounded rank-width are identified by $\textsf{FO} + \textsf{C}$ formulas with $6k+4$ variables and quantifier depth $O(\log n)$. Prior to this paper, isomorphism testing for graphs of bounded rank-width was not known to be in $\textsf{NC}$.
Michael Levet, Puck Rombach, Nicholas Sieger
2023-06-30T16:30:24Z
http://arxiv.org/abs/2306.17777v3
# Logarithmic Weisfeiler-Leman Identifies All Graphs of Bounded Rank Width+ ###### Abstract In this paper, we extend the work of Grohe & Neuen (_ACM T. Comput. Log._, 2023) to show that the \((6k+3)\)-dimensional Weisfeiler-Leman (WL) algorithm can identify graphs of rank width \(k\) using only \(O(\log n)\) rounds. As a consequence, we obtain that graphs of bounded rank width are identified by \(\mathsf{FO}+\mathsf{C}\) formulas with \(6k+4\) variables and quantifier depth \(O(\log n)\). Furthermore, in light of the parallel WL implementation due to Grohe & Verbitsky (ICALP 2006), we obtain \(\mathsf{TC}^{1}\) upper bounds for isomorphism testing of graphs of bounded rank width. Prior to this paper, isomorphism testing for graphs of bounded rank width was not known to be in \(\mathsf{NC}\). Introduction The Graph Isomorphism problem (GI) takes as input two graphs \(G\) and \(H\), and asks if there exists an isomorphism \(\varphi:V(G)\to V(H)\). GI is in particular conjectured to be NP-intermediate; that is, belonging to NP but neither in P nor NP-complete [10]. Algorithmically, the best known upper-bound is \(n^{\Theta(\log^{2}n)}\), due to Babai [1]. It remains open as to whether GI belongs to P. There is considerable evidence suggesting that GI is not NP-complete [11, 1, 12, 13, 14, 15]. Recent works [16, 17] have established that for any field \(\mathbb{F}\), GI belongs to \(\mathbb{F}\)-Tensor Isomorphism (T\({}_{\mathbb{F}}\)). When \(\mathbb{F}\) is finite, \(\mathsf{T}_{\mathbb{F}}\subseteq\mathsf{NP}\cap\mathsf{coAM}\).1 In contrast, the best known lower-bound for GI is DET [18], which contains NL and is a subclass of TC1. It is thus natural to inquire as to families of graphs where isomorphism testing is decidable in complexity classes contained within DET. Footnote 1: We refer the reader to [17, Remark 3.4] for discussion on \(\mathsf{T}_{\mathbb{F}}\) when \(\mathbb{F}\) is infinite. There has been considerable work on efficient algorithms for several families of graphs. In particular, polynomial-time isomorphism tests are known for several families of graphs such as graphs of bounded degree [10, 1, 12, 13, 14, 15], planar graphs [16, 17, 18], graphs of bounded genus [18, 19], graphs of bounded tree width [19], all classes excluding a fixed graph as a minor [12, 13] or even a topological subgraph [14], and recently graphs of bounded rank width [16, 17]. The cases of planar graphs [10, 12], \(\mathsf{LN}^{+}09\), JKMT03], graphs of bounded genus [1], and graphs of bounded treewidth [13] are even known to be L-complete. The \(k\)-dimensional Weisfeiler-Leman algorithm (\(k\)-WL) serves as a key combinatorial tool in GI. It works by iteratively coloring \(k\)-tuples of vertices in an isomorphism-invariant manner. On its own, Weisfeiler-Leman serves as an efficient polynomial-time isomorphism test for several families of graphs, including trees [1, 10], planar graphs [16, 15], graphs of bounded genus [13, 14], and graphs for which a specified minor \(H\) is forbidden [13]. We also note that \(1\)-WL identifies almost all graphs [1, 16] and \(2\)-WL identifies almost all regular graphs [1, 15]. In the case of graphs of bounded treewidth [13, 14] and planar graphs [14, 15, 16], Weisfeiler-Leman serves even as an NC isomorphism test. Despite the success of WL as an isomorphism test, it is insufficient to place GI into P [16, 15]. Nonetheless, WL remains an active area of research. 
For instance, Babai's quasipolynomial-time algorithm [1] combines \(O(\log n)\)-WL with group theoretic techniques. Graphs of bounded rank width have only recently received attention from the perspective of isomorphism testing. Grohe & Schweitzer [16] gave the first polynomial-time isomorphism test for graphs of bounded rank width. In particular, their isomorphism test ran in time \(n^{f(k)}\), where \(f(k)\) was a non-elementary function of the rank width \(k\). Subsequently, Grohe & Neuen [14] showed that graphs of rank width \(k\) have Weisfeiler-Leman dimension \(\leq 3k+5\), which yields an \(O(n^{3k+6}\log n)\)-time isomorphism test. In particular, it is open as to whether graphs of bounded rank width admit NC or FPT isomorphism tests. This is in contrast to graphs of bounded treewidth, where NC [19, 12, 13, 14] and FPT [1, 16] isomorphism tests are well-known. **Main Results.** In this paper, we investigate the parallel and descriptive complexities of identifying graphs of bounded rank width, using the Weisfeiler-Leman algorithm. **Theorem 1.1**.: _The \((6k+3)\)-dimensional Weisfeiler-Leman algorithm identifies graphs of rank width \(k\) in \(O(\log n)\) rounds._ Combining Theorem 1.1 with the parallel WL implementation of Grohe & Verbitsky [19], we obtain the first NC bound for isomorphism testing of graphs of bounded rank width. **Corollary 1.2**.: _Let \(G\) be a graph of rank width \(O(1)\), and let \(H\) be arbitrary. We can decide isomorphism between \(G\) and \(H\) in \(\mathsf{TC}^{1}\)._ Furthermore, in light of the close connections between Weisfeiler-Leman and \(\mathsf{FO}+\mathsf{C}\) [1, 18], we obtain the following corollary. **Corollary 1.3**.: _For every graph \(G\) of rank width at most \(k\), there is a sentence \(\varphi_{G}\in\mathcal{C}_{6k+4,O(\log n)}\) that characterizes \(G\) up to isomorphism. That is, whenever \(H\not\cong G\), we have that \(G\models\varphi_{G}\) and \(H\not\models\varphi_{G}\)._ ## 2 Preliminaries ### Weisfeiler-Leman We begin by recalling the Weisfeiler-Leman algorithm for graphs, which computes an isomorphism-invariant coloring. Let \(G\) be a graph, and let \(k\geq 2\) be an integer. The \(k\)-dimensional Weisfeiler-Leman, or \(k\)-WL, algorithm begins by constructing an initial coloring \(\chi_{0}:V(G)^{k}\to\mathcal{K}\), where \(\mathcal{K}\) is our set of colors, by assigning each \(k\)-tuple a color based on its isomorphism type. That is, two \(k\)-tuples \((v_{1},\ldots,v_{k})\) and \((u_{1},\ldots,u_{k})\) receive the same color under \(\chi_{0}\) if and only if the map \(v_{i}\mapsto u_{i}\) (for all \(i\in[k]\)) is an isomorphism of the induced subgraphs \(G[\{v_{1},\ldots,v_{k}\}]\) and \(G[\{u_{1},\ldots,u_{k}\}]\) and for all \(i,j\), \(v_{i}=v_{j}\Leftrightarrow u_{i}=u_{j}\). For \(r\geq 0\), the coloring computed at the \(r\)th iteration of Weisfeiler-Leman is refined as follows. For a \(k\)-tuple \(\overline{v}=(v_{1},\ldots,v_{k})\) and a vertex \(x\in V(G)\), define \[\overline{v}(v_{i}/x)=(v_{1},\ldots,v_{i-1},x,v_{i+1},\ldots,v_{k}).\] The coloring computed at the \((r+1)\)st iteration, denoted \(\chi_{r+1}\), stores the color of the given \(k\)-tuple \(\overline{v}\) at the \(r\)th iteration, as well as the colors under \(\chi_{r}\) of the \(k\)-tuples obtained by substituting a single vertex in \(\overline{v}\) for another vertex \(x\). We examine this multiset of colors over all such vertices \(x\).
This is formalized as follows: \[\chi_{r+1}(\overline{v})= (\chi_{r}(\overline{v}),\{\hskip-1.0pt\{(\chi_{r}(\overline{v}(v_ {1}/x)),\ldots,\chi_{r}(\overline{v}(v_{k}/x))\big{|}x\in V(G)\}\hskip-1.0pt\}),\] where \(\{\hskip-1.0pt\{\cdot\}\hskip-1.0pt\}\) denotes a multiset. Note that the coloring \(\chi_{r}\) computed at iteration \(r\) induces a partition of \(V(G)^{k}\) into color classes. The Weisfeiler-Leman algorithm terminates when this partition is not refined, that is, when the partition induced by \(\chi_{r+1}\) is identical to that induced by \(\chi_{r}\). The final coloring is referred to as the _stable coloring_, which we denote \(\chi_{\infty}:=\chi_{r}\). The \(1\)-dimensional Weisfeiler-Leman algorithm, sometimes referred to as _Color Refinement_, works nearly identically. Two vertices of \(G\) receive the same initial color if and only if they have the same degree. For the refinement step, we have that: \[\chi_{r+1}(u)=(\chi_{r}(u),\{\hskip-1.0pt\{\chi_{r}(v):v\in N(u)\}\hskip-1.0pt \}).\] We have that \(1\)-WL terminates when the partition on the vertices is not refined. As we are interested in both the Weisfeiler-Leman dimension and the number of rounds, we will use the following notation. **Definition 2.1**.: Let \(k\geq 1\) and \(r\geq 1\) be integers. The \((k,r)\)-WL algorithm is obtained by running \(k\)-WL for \(r\) rounds. Here, the initial coloring counts as the first round. Let \(S\) be a sequence of vertices. The _individualize-and-refine_ paradigm works first by assigning each vertex in \(S\) a unique color. We then run \((k,r)\)-WL on this colored graph. We denote the coloring computed by \((k,r)\)-WL after individualizing \(S\) as \(\chi_{k,r}^{S}\). **Remark 2.2**.: Grohe & Verbitsky [10] previously showed that for fixed \(k\), the classical \(k\)-dimensional Weisfeiler-Leman algorithm for graphs can be effectively parallelized. Precisely, each iteration (including the initial coloring) can be implemented using a logspace uniform \(\mathsf{TC}^{0}\) circuit. In the case of the count-free \(k\)-WL algorithm, each round can be implemented using a logspace uniform \(\mathsf{AC}^{0}\) circuit. ### Pebbling Game We recall the bijective pebble game introduced by [10, 11] for WL on graphs. This game is often used to show that two graphs \(X\) and \(Y\) cannot be distinguished by \(k\)-WL. The game is an Ehrenfeucht-Fraisse game (c.f., [1, 1]), with two players: Spoiler and Duplicator. We begin with \(k+1\) pairs of pebbles. Prior to the start of the game, each pebble pair \((p_{i},p_{i}^{\prime})\) is initially placed either beside the graphs or on a given pair of vertices \(v_{i}\mapsto v_{i}^{\prime}\) (where \(v_{i}\in V(X),v_{i}^{\prime}\in V(Y)\)). We refer to this initial configuration for \(X\) as \(\overline{v}\), and this initial configuration for \(Y\) as \(\overline{v^{\prime}}\). Each round \(r\) proceeds as follows. 1. Spoiler picks up a pair of pebbles \((p_{i},p_{i}^{\prime})\). 2. We check the winning condition, which will be formalized later. 3. Duplicator chooses a bijection \(f_{r}:V(X)\to V(Y)\) (we emphasize that the bijection chosen depends on the round- and, implicitly, the pebbling configuration at the start of said round). 4. Spoiler places \(p_{i}\) on some vertex \(v\in V(X)\). Then \(p_{i}^{\prime}\) is placed on \(f(v)\). Let \(v_{1},\ldots,v_{m}\) be the vertices of \(X\) pebbled at the end of step 1 at round \(r\) of the game, and let \(v_{1}^{\prime},\ldots,v_{m}^{\prime}\) be the corresponding pebbled vertices of \(Y\). 
Spoiler wins precisely if the map \(v_{\ell}\mapsto v_{\ell}^{\prime}\) does not extend to an isomorphism of the induced subgraphs \(X[\{v_{1},\ldots,v_{m}\}]\) and \(Y[\{v_{1}^{\prime},\ldots,v_{m}^{\prime}\}]\). Duplicator wins otherwise. Spoiler wins, by definition, at round 0 if \(X\) and \(Y\) do not have the same number of vertices. We note that \(\overline{v}\) and \(\overline{v^{\prime}}\) are not distinguished by the first \(r\) rounds of \(k\)-WL if and only if Duplicator wins the first \(r\) rounds of the \((k+1)\)-pebble game [1, 2]. **Remark 2.3**.: In our work, we explicitly control for both pebbles and rounds. In our theorem statements, we state explicitly the number of pebbles on the board. So if Spoiler can win with \(k\) pebbles on the board, then we are playing in the \((k+1)\)-pebble game. Note that \(k\)-WL corresponds to \(k\) pebbles on the board. ### Logics We recall key notions of first-order logic. We have a countable set of variables \(\{x_{1},x_{2},\ldots\}\). Formulas are defined inductively. For the basis, we have that \(x_{i}=x_{j}\) is a formula for all pairs of variables. Now if \(\varphi\) is a formula, then so are the following: \(\varphi\land\varphi,\varphi\lor\varphi,\neg\varphi,\exists x_{i}\,\varphi\), and \(\forall x_{i}\,\varphi\). In order to define logics on graphs, we add a relation \(E(x,y)\), where \(E(x,y)=1\) if and only if \(\{x,y\}\) is an edge of our graph, and 0 otherwise. In keeping with the conventions of [1], we refer to the first-order logic with relation \(E\) as \(\mathcal{L}\) and its \(k\)-variable fragment as \(\mathcal{L}_{k}\). We refer to the logic obtained from \(\mathcal{L}\) by adding the counting quantifiers \(\exists^{\geq n}x\,\varphi\) (there exist at least \(n\) elements \(x\) that satisfy \(\varphi\)) and \(\exists!n\,x\,\varphi\) (there exist exactly \(n\) elements \(x\) that satisfy \(\varphi\)) as \(\mathcal{C}\), and to its \(k\)-variable fragment as \(\mathcal{C}_{k}\). The _quantifier depth_ of a formula \(\varphi\) (belonging to either \(\mathcal{L}\) or \(\mathcal{C}\)) is the depth of its quantifier nesting. We denote the quantifier depth of \(\varphi\) as \(\operatorname{\mathrm{qd}}(\varphi)\). This is defined inductively as follows. * If \(\varphi\) is atomic, then \(\operatorname{\mathrm{qd}}(\varphi)=0\). * \(\operatorname{\mathrm{qd}}(\neg\varphi)=\operatorname{\mathrm{qd}}(\varphi)\). * \(\operatorname{\mathrm{qd}}(\varphi_{1}\lor\varphi_{2})=\operatorname{\mathrm{qd}}(\varphi_{1}\land\varphi_{2})=\max\{\operatorname{\mathrm{qd}}(\varphi_{1}),\operatorname{\mathrm{qd}}(\varphi_{2})\}\). * \(\operatorname{\mathrm{qd}}(Qx\,\varphi)=\operatorname{\mathrm{qd}}(\varphi)+1\), where \(Q\) is a quantifier in the logic. We denote the fragment of \(\mathcal{L}_{k}\) (respectively, \(\mathcal{C}_{k}\)) where the formulas have quantifier depth at most \(r\) as \(\mathcal{L}_{k,r}\) (respectively, \(\mathcal{C}_{k,r}\)). Let \(\overline{v}\in V(X)^{k},\overline{v^{\prime}}\in V(Y)^{k}\). We note that \(\overline{v},\overline{v^{\prime}}\) are distinguished by \((k,r)\)-WL if and only if there exists a formula \(\varphi\in\mathcal{C}_{k+1,r}\) such that \((X,\overline{v})\models\varphi\) and \((Y,\overline{v^{\prime}})\not\models\varphi\) [1, 1]. ## 3 Weisfeiler-Leman for Graphs of Bounded Rank Width ### Preliminaries **Rank Width.** Oum & Seymour [1] introduced the rank width parameter to measure the width of a certain hierarchical decomposition of graphs.
The goal is to intuitively split the vertices of a graph along cuts of low complexity in a hierarchical fashion. Here, the complexity is the \(\mathbb{F}_{2}\)-rank of the matrix capturing the adjacencies crossing the cut. Precisely, let \(G\) be a graph, and let \(X,Y\subseteq V(G)\). Define \(M(X,Y)\in\mathbb{F}_{2}^{X\times Y}\) to be the matrix where \((M(X,Y))_{uv}=1\) if and only if \(uv\in E(G)\). That is, \(M(X,Y)\) is the submatrix of the adjacency matrix whose rows are indexed by \(X\) and whose columns are indexed by \(Y\). Denote \(\rho(X):=\operatorname{\mathrm{rk}}_{\mathbb{F}_{2}}(M(X,\overline{X}))\). A _rank decomposition_ of \(G\) is a tuple \((T,\gamma)\), where \(T\) is a binary rooted tree and \(\gamma:V(T)\to 2^{V(G)}\) satisfying the following: * For the root \(r\) of \(T\), \(\gamma(r)=V(G)\), * For an internal node \(t\in V(T)\), denote the children of \(t\) as \(s_{1},s_{2}\). For every internal node \(t\), we have that \(\gamma(t)=\gamma(s_{1})\cup\gamma(s_{2})\), and \(\gamma(s_{1})\cap\gamma(s_{2})=\emptyset\). * For any leaf \(t\in V(T)\), \(|\gamma(t)|=1\). **Remark 3.1**.: Let \(L(T)\) be the set of leaves of \(T\). Instead of providing \(\gamma\), we can equivalently define a bijection \(f:V(G)\to L(T)\). By the second condition of a rank width decomposition, \(f\) completely determines \(\gamma\). The _width_ of a rank decomposition \((T,\gamma)\) is: \[\operatorname{wd}(T,\gamma):=\max\{\rho_{G}(\gamma(t)):t\in V(T)\}.\] The _rank width_ of a graph \(G\) is: \[\operatorname{rw}(G):=\min\{\operatorname{wd}(T,\gamma):(T,\gamma)\text{ is a rank decomposition of }G\}.\] The parameter rank width is closely related to the parameter clique width, introduced by Courcelle & Olariu [1]. Oum & Seymour [1] showed that: \[\operatorname{rw}(G)\leq\operatorname{cw}(G)\leq 2^{\operatorname{rw}(G)+1}-1.\] Denote \(\operatorname{tw}(G)\) to be the treewidth of \(G\). Oum [2] showed that \(\operatorname{rw}(G)\leq\operatorname{tw}(G)+1\). Note that \(\operatorname{tw}(G)\) cannot be bounded in terms of \(\operatorname{rw}(G)\); for instance, the complete graph \(K_{n}\) has \(\operatorname{rw}(K_{n})=1\) but \(\operatorname{tw}(K_{n})=n-1\). **Connectivity Lemma**.: We establish a helper lemma, which effectively states that Duplicator must respect connected components of pebbled vertices. **Lemma 3.2**.: _Let \(G,H\) be graphs. Suppose that \((u,v)\mapsto(u^{\prime},v^{\prime})\) have been pebbled. Furthermore, suppose that \(u,v\) belong to the same connected component of \(G\), while \(u^{\prime},v^{\prime}\) belong to different connected components of \(H\). Then Spoiler can win using \(1\) additional pebble and \(O(\log n)\) rounds._ Proof.: Let \(P\) be a shortest \(u-v\) path in \(G\). Spoiler begins by pebbling a midpoint \(w\) of \(P\). Let \(w^{\prime}\) be Duplicator's response. As \(u^{\prime},v^{\prime}\) belong to different components, we may assume without loss of generality that \(w^{\prime}\) belongs to the component containing \(u^{\prime}\). Let \(P_{wv}\) be the \(w-v\) path within \(P\). At the next round, Spoiler picks up the pebble on \(u\) and iterates on the above argument using \(P_{wv}\). This argument only occurs finitely many times until we hit a base case, where \(wv\) is an edge of \(G\), while \(w^{\prime}v^{\prime}\) is not an edge of \(H\). At each round, we are cutting the size of the path in half. Thus, at most \(\log_{2}(n)+1\) rounds are required. Observe that only one additional pebble was used. The result now follows. 
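To make the cut-rank function \(\rho_{G}\) concrete, the following minimal Python sketch (ours, not code from the paper) computes \(\rho_{G}(X)=\operatorname{rk}_{\mathbb{F}_{2}}(M(X,\overline{X}))\) for a small graph given as an adjacency dictionary; the choice of representation and the function names are assumptions made purely for illustration. Rows of \(M(X,\overline{X})\) are encoded as integer bitmasks and the rank is obtained by Gaussian elimination over \(\mathbb{F}_{2}\).

```python
# Illustrative sketch only: cut-rank rho_G(X) = rank over F_2 of M(X, V \ X).
# The graph is a dict mapping each vertex to its set of neighbours.
def gf2_rank(rows):
    """Rank over F_2 of a list of row vectors encoded as integer bitmasks."""
    rank = 0
    rows = [r for r in rows if r]
    while rows:
        pivot = rows.pop()
        rank += 1
        low_bit = pivot & -pivot          # pivot column = lowest set bit of the pivot row
        rows = [r ^ pivot if r & low_bit else r for r in rows]
        rows = [r for r in rows if r]
    return rank

def cut_rank(adj, X):
    """rho_G(X): F_2-rank of the adjacency matrix between X and its complement."""
    complement = sorted(set(adj) - set(X))
    col = {w: i for i, w in enumerate(complement)}
    rows = []
    for v in X:
        row = 0
        for w in adj[v]:
            if w in col:
                row |= 1 << col[w]
        rows.append(row)
    return gf2_rank(rows)

# Example: in the complete graph K_4 every non-trivial cut has cut-rank 1,
# which is why rw(K_n) = 1 even though tw(K_n) = n - 1.
k4 = {v: {u for u in range(4) if u != v} for v in range(4)}
assert cut_rank(k4, {0, 1}) == 1
```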
### Split Pairs and Flip Functions In designing a pebbling strategy for graphs of bounded rank width, Grohe & Neuen [1] sought to pebble a set of vertices \(X\subseteq V(G)\) such that \(\rho(X)\leq k\) and pebbling \(X\) partitions the remaining vertices into sets \(C_{1},\ldots,C_{\ell}\) that can be treated independently. Furthermore, we want for each \(i\in[\ell]\) that either \(C_{i}\subseteq X\) or \(C_{i}\subseteq\overline{X}\). As there can be many edges between \(X\) and \(\overline{X}\), this is hard to accomplish in general. To this end, Grohe & Neuen [1] utilized split pairs and flip functions, which we will recall shortly. Let \(G\) be a graph, and let \(X\subseteq V(G)\). For \(v,w\in X\), define the equivalence relation \(v\approx_{X}w\) to hold precisely if \(N(v)\cap\overline{X}=N(w)\cap\overline{X}\). For \(v\in X\), define \(\operatorname{vec}_{X}(v):=\left(a_{vw}\right)_{w\in\overline{X}}\in\mathbb{F}_{2}^{\overline{X}}\), where \(a_{vw}=1\) if and only if \(vw\in E(G)\). Observe that \(v\approx_{X}w\) if and only if \(\operatorname{vec}_{X}(v)=\operatorname{vec}_{X}(w)\). Now for \(S\subseteq X\), define \(\operatorname{vec}_{X}(S):=\{\operatorname{vec}_{X}(v):v\in S\}\). **Definition 3.3**.: Let \(G\) be a graph, and let \(X\subseteq V(G)\). A pair \((A,B)\) is a _split pair_ for \(X\) if the following hold: * \(A\subseteq X\) and \(B\subseteq\overline{X}\), * \(\operatorname{vec}_{X}(A)\) forms an \(\mathbb{F}_{2}\)-linear basis for \(\left\langle\operatorname{vec}_{X}(X)\right\rangle\), and * \(\operatorname{vec}_{\overline{X}}(B)\) forms an \(\mathbb{F}_{2}\)-linear basis for \(\langle\operatorname{vec}_{\overline{X}}(\overline{X})\rangle\). Observe that \(|A|=\rho_{G}(X)=\rho_{G}(\overline{X})=|B|\). Now if \((A,B)\) is a split pair for \(X\), then \((B,A)\) is a split pair for \(\overline{X}\). We define \((\emptyset,\emptyset)\) to be a split pair for \(X=V(G)\). An _ordered split pair_ for \(X\) is a pair \(((a_{1},\ldots,a_{k}),(b_{1},\ldots,b_{k}))\) such that \((\{a_{1},\ldots,a_{k}\},\{b_{1},\ldots,b_{k}\})\) is a split pair for \(X\). **Lemma 3.4** ([13, Lemma 3.3]).: _Let \(G\) be a graph, and let \(X\subseteq V(G)\). Let \((A,B)\) be a split pair for \(X\). If \(v,w\in X\) satisfy \(N(v)\cap B=N(w)\cap B\), then \(v\approx_{X}w\). Similarly, if \(v^{\prime},w^{\prime}\in\overline{X}\) satisfy \(N(v^{\prime})\cap A=N(w^{\prime})\cap A\), then \(v^{\prime}\approx_{\overline{X}}w^{\prime}\)._ **Remark 3.5**.: Suppose we individualize a vertex \(v\). Let \(u\in N(v)\) and \(w\not\in N(v)\). After two rounds of Color Refinement (the initial coloring and a single refinement step), we have that \(\chi_{1,2}(u)\neq\chi_{1,2}(w)\). Using this observation, we obtain the following. **Corollary 3.6** (Compare rounds c.f. [13, Corollary 3.4]).: _Let \(G\) be a graph, and let \(X\subseteq V(G)\). Let \((\overline{a},\overline{b})\) be an ordered split pair for \(X\). Let \(u,w\in X\) be such that \(\chi_{1,2}^{(\overline{a},\overline{b})}(u)=\chi_{1,2}^{(\overline{a},\overline{b})}(w)\). Then \(u\approx_{X}w\). Similarly, for every \(u^{\prime},w^{\prime}\in\overline{X}\) satisfying \(\chi_{1,2}^{(\overline{a},\overline{b})}(u^{\prime})=\chi_{1,2}^{(\overline{a},\overline{b})}(w^{\prime})\), we have that \(u^{\prime}\approx_{\overline{X}}w^{\prime}\)._ The goal now is to use a split pair to partition the graph, so that each part can be handled independently.
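Continuing the sketch above, Definition 3.3 suggests a simple greedy procedure for extracting a split pair: scan the vertices of \(X\) (respectively \(\overline{X}\)) and keep those whose vectors \(\operatorname{vec}_{X}(v)\) extend an \(\mathbb{F}_{2}\)-basis maintained in echelon form. The code below is an illustrative sketch under the same adjacency-dictionary representation; the function names are ours, not the paper's.

```python
# Illustrative sketch: greedily computing a split pair (A, B) for X (Definition 3.3).
# vec_X(v) is encoded as the bitmask of v's neighbours in the complement of X.
def neighbourhood_vector(adj, v, col):
    vec = 0
    for w in adj[v]:
        if w in col:
            vec |= 1 << col[w]
    return vec

def basis_vertices(adj, side, other_side):
    """Vertices of `side` whose vectors form an F_2-basis of <vec(side)>."""
    col = {w: i for i, w in enumerate(sorted(other_side))}
    echelon = {}                 # leading bit position -> reduced basis vector
    chosen = []
    for v in sorted(side):
        vec = neighbourhood_vector(adj, v, col)
        while vec:
            lead = vec.bit_length() - 1
            if lead not in echelon:      # vec is independent of the span so far
                echelon[lead] = vec
                chosen.append(v)
                break
            vec ^= echelon[lead]         # reduce against the basis and continue
    return chosen

def split_pair(adj, X):
    X = set(X)
    complement = set(adj) - X
    A = basis_vertices(adj, X, complement)    # basis for <vec_X(X)>
    B = basis_vertices(adj, complement, X)    # basis for <vec_{X-bar}(X-bar)>
    return A, B
```

By construction, \(|A|=|B|\) equals the cut-rank \(\rho_{G}(X)\), matching the observation following Definition 3.3.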
We argue that individualizing a split pair and applying \(2\) rounds of Color Refinement suffice to yield such a partition. We follow the strategy of Grohe & Neuen [13] in using a flip function. **Definition 3.7** ([13, Definition 3.5]).: Let \(G(V,E,\chi)\) be a vertex-colored graph, with \(\chi:V\to\mathcal{C}\) our coloring. A _flip function_ for \(G\) is a function \(f:\mathcal{C}\times\mathcal{C}\to\{0,1\}\) such that \(f(c,c^{\prime})=f(c^{\prime},c)\) for all \(c,c^{\prime}\in\mathcal{C}\). For a colored graph \(G(V,E,\chi)\) and a flip function \(f\), define the _flipped graph_\(G^{f}(V,E^{f},\chi)\) where: \[E^{f} =\{vw:vw\in E(G)\text{ and }f(\chi(v),\chi(w))=0\}\] \[\cup\{vw:v\neq w,vw\not\in E(G),\text{ and }f(\chi(v),\chi(w))=1\}.\] Denote \(\operatorname{Comp}(G,f)\subseteq 2^{V(G)}\) be the set of vertex sets of the connected components of \(G^{f}\). Observe that \(\operatorname{Comp}(G,f)\) forms a partition of \(V(G)\). **Lemma 3.8** (Compare rounds c.f. [13, Lemma 3.6]).: _Let \(G(V,E,\chi)\) be a colored graph, and let \(X\subseteq V(G)\). Let \((\overline{a},\overline{b})\) be a split pair for \(X\). Let \(k\geq 1,r\geq 2\), and \(G^{\prime}:=G(V,E,\chi_{k,r}^{(\overline{a},\overline{b}),G})\). There exists a flip function \(f\) for \(G^{\prime}\) such that for every \(C\in\operatorname{Comp}(G^{\prime},f)\), we have that either \(C\subseteq X\) or \(C\subseteq\overline{X}\)._ Proof (Sketch).: In the proof of [13, Lemma 3.6], Grohe & Neuen rely solely on the fact that for any vertex \(v\), the counting \((1,\infty)\)-WL can detect \(N(v)\cap(\overline{a}\cup\overline{b})\) (where here, we abuse notation to refer to \((\overline{a}\cup\overline{b})\) as the set of vertices in \((\overline{a},\overline{b})\)). As we are dealing with graphs of rank width \(k\), \((\overline{a},\overline{b})\) consists of \(O(k)\) vertices. So the counting \((1,2)\)-WL can detect \(N(v)\cap(\overline{a}\cup\overline{b})\). Thus, using \((1,2)\)-WL in place of the counting \((1,\infty)\)-WL, the proof of [13, Lemma 3.6] holds, _mutatis mutandis_. Grohe & Neuen [13] previously showed that the flip function preserves isomorphism. Precisely, they established the following. **Lemma 3.9** ([13, Lemma 3.9]).: _Let \(G,G^{\prime}\) be two colored graphs, and let \(f\) be a flip function for \(G,G^{\prime}\). Let \(\varphi:V(G)\to V(G^{\prime})\) be a bijection. We have that \(\varphi\in\text{Iso}(G,G^{\prime})\) if and only if \(\varphi\in\text{Iso}(G^{f},(G^{\prime})^{f})\)._ Grohe & Neuen [13, Lemma 3.10] also showed that the flip function respects the stable-coloring of Weisfeiler-Leman. While the statement of [13, Lemma 3.10] did not control for rounds, the result holds even when considering rounds. **Lemma 3.10** ([13, Lemma 3.10]).: _Let \(G,G^{\prime}\) be two colored graphs, and let \(f\) be a flip function for \(G,G^{\prime}\). We have that \((k,r)\)-WL distinguishes \(G,G^{\prime}\) if and only if \((k,r)\)-WL distinguishes \(G^{f},(G^{\prime})^{f}\)._ Proof.: Let \(\overline{v},\overline{w}\) be configurations in the \((k+1)\)-pebble game. We have that the map \(v_{i}\mapsto w_{i}\) is a marked isomorphism of the induced subgraphs \(G[\overline{v}]\) and \(G^{\prime}[\overline{w}]\) if and only if this map is an isomorphism of the induced subgraphs \(G^{f}[\overline{v}]\) and \((G^{\prime})^{f}[\overline{w}]\). **Corollary 3.11** (Compare rounds c.f. 
[13, Corollary 3.12]).: _Let \(G(V,E,\chi),G^{\prime}(V^{\prime},E^{\prime},\chi^{\prime})\) be vertex-colored graphs, and let \(f\) be a flip function for \(G,G^{\prime}\). Let \(\overline{v}\in V^{k},\overline{v^{\prime}}\in(V^{\prime})^{k}\). Let \(C\) be a connected component of \(G^{f}\) such that \(\chi(u)\neq\chi(w)\) for all \(u\in C\) and all \(w\in V\setminus C\). Let \(C^{\prime}\) be a connected component such that \(\chi^{\prime}(u^{\prime})\neq\chi^{\prime}(w^{\prime})\) for all \(u^{\prime}\in C^{\prime}\) and \(w^{\prime}\in V^{\prime}\setminus C^{\prime}\). Let \(r\geq 1\). Suppose that:_ \[(G[C],\chi_{1,r}^{\overline{v},G})\not\cong(G^{\prime}[C^{\prime}],\chi_{1,r}^ {\overline{v^{\prime}},G^{\prime}}).\] _Let \(\overline{w}:=C\cap\overline{v}\) and \(\overline{w^{\prime}}:=C^{\prime}\cap\overline{v^{\prime}}\). Then either:_ \[(G[C],\chi_{1,r}^{\overline{w},G})\not\cong(G[C^{\prime}],\chi_{1,r}^{ \overline{w^{\prime}},G^{\prime}}),\] _or \(r\) rounds of Color Refinement distinguishes \((G,\chi^{\overline{v}})\) from \((G^{\prime},(\chi^{\prime})^{\overline{v^{\prime}}})\)._ Proof.: The proof is by contrapositive. Let \(I=\{i\in[k]:v_{i}\in C\}\), and let \(I^{\prime}=\{i\in[k]:v^{\prime}_{i}\in C^{\prime}\}\). Suppose that \((1,r)\)-WL fails to distinguish \((G,\chi^{\overline{v}})\) and \((G^{\prime},(\chi^{\prime})^{\overline{v^{\prime}}})\). Then by Lemma 3.9, \((1,r)\)-WL fails to distinguish \((G^{f},\chi^{\overline{v}})\) and \(((G^{\prime})^{f},(\chi^{\prime})^{\overline{v^{\prime}}})\). So we have that \(I=I^{\prime}\). Now suppose that: \[(G[C],\chi_{1,r}^{\overline{w},G})\cong(G[C^{\prime}],\chi_{1,r}^{\overline{w^ {\prime}},G^{\prime}}).\] As \(I=I^{\prime}\), it follows that: \[(G[C],\chi_{1,r}^{\overline{v},G})\cong(G[C^{\prime}],\chi_{1,r}^{\overline{v ^{\prime}},G^{\prime}}).\] As 1-WL only takes into account the neighbors of a given vertex in the refinement step, we have by induction that for each \(i\leq r\): \[(G[C],\chi_{1,i}^{\overline{v},G})\cong(G[C^{\prime}],\chi_{1,i}^{\overline{v ^{\prime}},G^{\prime}}).\] The result now follows. ### WL for Graphs of Bounded Rank Width Our goal in this section is to establish the following. **Theorem 3.12**.: _Let \(G\) be a graph of rank width \(k\), and let \(H\) be an arbitrary graph such that \(G\not\cong H\). We have that the \((6k+3,O(\log n))\)-WL algorithm will distinguish \(G\) from \(H\)._ **Definition 3.13** ([13, Definition 4.1]).: Let \(G\) be a graph, and let \(X,X_{1},X_{2}\subseteq V(G)\) such that \(X=X_{1}\sqcup X_{2}\). Let \((A,B)\) be a split pair for \(X\), and let \((A_{i},B_{i})\)\((i=1,2)\) be a split pair for \(X_{i}\). We say that \((A_{i},B_{i})\) are _nice_ with respect to \((A,B)\) if the following conditions hold: * \(A\cap X_{i}\subseteq A_{i}\) for each \(i\in\{1,2\}\), and * \(B_{2}\cap X_{1}\subseteq A_{1}\) and similarly \(B_{1}\cap X_{2}\subseteq A_{2}\). A triple of ordered split pairs is nice if the underlying triple of unordered split pairs is nice. **Lemma 3.14** ([13, Lemma 4.2]).: _Let \(G\) be a graph, and let \(X,X_{1},X_{2}\subseteq V(G)\) such that \(X=X_{1}\sqcup X_{2}\). Let \((A,B)\) be a split pair for \(X\). There exist nice split pairs \((A_{i},B_{i})\) for \(X_{i}\) (\(i=1,2\)) such that additionally \(B_{i}\cap\overline{X_{i}}\subseteq B\)._ **Definition 3.15**.: Let \(G\) be a graph. A _component partition_ of \(G\) is a partition \(\mathcal{P}\) of \(V(G)\) such that every connected component appears in exactly one block of \(\mathcal{P}\). 
That is, for every connected component \(C\) of \(G\), there exists a \(P\in\mathcal{P}\) such that \(C\subseteq P\). **Lemma 3.16** ([13, Observation 4.3]).: _Let \(G,H\) be two non-isomorphic graphs, and let \(\mathcal{P},\mathcal{Q}\) be component partitions of \(G,H\) respectively. Let \(\sigma:V(G)\to V(H)\) be a bijection. There exists a vertex \(v\) such that \(G[P]\not\cong H[Q]\), where \(P\in\mathcal{P}\) is the unique set containing \(v\) and \(Q\in\mathcal{Q}\) is the unique set containing \(\sigma(v)\)._ We now prove Theorem 3.12. Proof of Theorem 3.12.: We follow the strategy of [13, Theorem 4.4]. Let \(G(V,E,\chi_{G})\) be a colored graph of rank width \(\leq k\), and let \(H\) be an arbitrary graph such that \(G\not\cong H\). By [14, Theorem 5], \(G\) admits a rank decomposition \((T,\gamma)\) of width at most \(2k\) where \(T\) has height at most \(3\cdot(\log(n)+1)\). We will show that Spoiler has a winning strategy in the \(6k+3\) pebble game in \(O(\log n)\) rounds. In a similar manner as in the proof of [13, Theorem 4.4], we will first argue that \(12k+5\) pebbles suffice, and then show how to improve the bound to use only \(6k+3\) pebbles. Spoiler's strategy is to play along the rank decomposition \((T,\gamma)\) starting from the root. As Spoiler proceeds down the tree, the non-isomorphism is confined to increasingly smaller parts of \(G\) and \(H\). At a node \(t\in V(T)\), Spoiler pebbles a split pair \((\overline{a},\overline{b})\) of \(X=\gamma(t)\). Now to confine the non-isomorphism, Spoiler identifies, after individualizing the split pair and performing two steps of Color Refinement- the initial coloring and a single refinement step, a pair of non-isomorphic components \(C\subseteq X,C^{\prime}\subseteq V(H)\) in the flipped graphs \(G^{f}\) and \(H^{f}\). In particular, Spoiler seeks to find such components \(C\) and \(C^{\prime}\) such that \(C\) is increasingly further from the root of \(T\). Once Spoiler reaches a leaf node of \(T\), Spoiler can quickly win. Spoiler places a pebble on a vertex in \(C\) and its image in \(C^{\prime}\), under Duplicator's bijection at the given round. We note that the two rounds of Color Refinement suffice for WL to detect the partitioning induced by the flip function (see Lemma 3.8), though it is not sufficiently powerful to detect the connected components of \(G^{f}\) and \(H^{f}\). In the argument below, we will technically consider graphs where the refinement step uses \((2,O(\log n))\)-WL. This ensures that after individualizing a vertex on a given component \(C\), that the vertices of \(C\) receive different colors than those of \(V(G)\setminus C\). This will eventually happen, and so in the pebble game characterization, we can continue to descend along \(T\) as if the vertices of \(C\) have been distinguished from \(V(G)\setminus C\). This is a key point where our strategy deviates from that of [13, Theorem 4.4]. 
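Before the formal induction, the component-splitting step that this strategy relies on (Definition 3.7 and Lemma 3.8) can be made concrete. The following sketch is ours and purely illustrative: it builds the flipped graph \(G^{f}\) from a vertex coloring \(\chi\) and a symmetric flip function \(f\), and returns \(\operatorname{Comp}(G,f)\), the vertex sets of the connected components of \(G^{f}\); the input representation is an assumption for the example.

```python
# Illustrative sketch: the flipped graph G^f (Definition 3.7) and Comp(G, f).
from itertools import combinations

def flipped_edges(vertices, edges, colour, f):
    """Edges of G^f: an edge of G survives iff f = 0; a non-edge is added iff f = 1."""
    E = {frozenset(e) for e in edges}
    Ef = set()
    for u, v in combinations(vertices, 2):
        present = frozenset((u, v)) in E
        flipped = f(colour[u], colour[v]) == 1
        if present != flipped:           # XOR of adjacency and flip value
            Ef.add(frozenset((u, v)))
    return Ef

def comp(vertices, Ef):
    """Comp(G, f): vertex sets of the connected components of the flipped graph."""
    adj = {v: set() for v in vertices}
    for e in Ef:
        u, v = tuple(e)
        adj[u].add(v)
        adj[v].add(u)
    remaining, parts = set(vertices), []
    while remaining:
        stack = [remaining.pop()]
        part = set(stack)
        while stack:
            x = stack.pop()
            for y in adj[x]:
                if y in remaining:
                    remaining.discard(y)
                    part.add(y)
                    stack.append(y)
        parts.append(part)
    return parts
```

Lemma 3.8 guarantees that, for a suitable flip function obtained from an individualized split pair, every such component lies entirely inside \(X\) or entirely inside \(\overline{X}\).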
Suppose that at a given round, we have the pebbled configuration \(((\overline{a},\overline{b},v),(\overline{a^{\prime}},\overline{b^{\prime}},v^{\prime}))\), where: * There exists a \(t\in T\) such that \((\overline{a},\overline{b})\) is a split pair for \(\gamma(t)\), * \(v\in\gamma(t)\), and let \(C\) be the component containing \(v\); and * \(v^{\prime}\) belongs to some component \(C^{\prime}\) of \(H\) such that: \[(G[C],\chi^{(\overline{a},\overline{b},v)}_{2,O(\log n)})\not\cong(H[C^{ \prime}],\chi^{(\overline{a^{\prime}},\overline{b^{\prime}},v)}_{2,O(\log n)}).\] Observe that this configuration uses at most \(4k+1\) pebbles (\(2k\) pebbles for \(\overline{a}\), \(2k\) pebbles for \(\overline{b}\), and a single pebble for \(v\)). Now by Lemma 3.8, we may assume that \(C\subseteq\gamma(t)\). Grohe & Neuen [13, Theorem 4.4] observed that \(\chi^{\overline{a},\overline{b},v}_{\infty}(u_{1})\neq\chi^{\overline{a}, \overline{b},v}_{\infty}(u_{2})\) for all \(u_{1}\in C\) and all \(u_{2}\in V(G)\setminus C\). However, in order to obtain the analogous result using \(O(\log n)\) rounds, Lemma 3.2 provides that we need \(2\)-WL. Note that we will be using WL of dimension \(\geq 9\). Furthermore, \(T\) has height \(3\cdot(\log(n)+1)\), we will be running \((6k+3)\)-WL for \(\geq 3\cdot(\log(n)+1)\) rounds. Thus, we may assume without loss of generality that Duplicator selects bijections that map \(C\mapsto C^{\prime}\) (setwise). By definition, if \(t\in V(T)\) is the root node, then \(\gamma(t)=V(G)\). So the empty configuration is a split pair for \(\gamma(t)\). We now show by induction on \(|\gamma(t)|\) that Spoiler can win from such a position. Suppose \(t\) is a leaf node. Then \(|\gamma(t)|=1\). In this case, \(C=\{v\}\). If \((G[C],\chi^{(\overline{a},\overline{b},v)}_{1,2})\not\cong(H[C^{\prime}],\chi^ {(\overline{a^{\prime}},\overline{b^{\prime}},v^{\prime})}_{1,2})\), then either (i) \(v\) and \(v^{\prime}\) are assigned different colors, or (ii) \(|C^{\prime}|>1\). In either case, Spoiler wins with at most \(1\) additional pebble and \(1\) additional round. For the inductive step, suppose \(|\gamma(t)|>1\). Let \(t_{1},t_{2}\) be the children of \(t\) in \(T\), and let \(X_{i}:=\gamma(t_{i})\) (\(i=1,2\)). By Lemma 3.14, there exist nice split pairs \((\overline{a}_{i},\overline{b}_{i})\) for \(t_{i}\). Spoiler pebbles \((\overline{a}_{1},\overline{b}_{1},\overline{a}_{2},\overline{b}_{2})\). Let \((\overline{a^{\prime}_{1}},\overline{b^{\prime}_{1}},\overline{a^{\prime}_{2} },\overline{b^{\prime}_{2}})\) be Duplicator's response. Let \(\alpha:=(\overline{a},\overline{b},\overline{a}_{1},\overline{b}_{1}, \overline{a}_{2},\overline{b}_{2},v)\), and let \(\alpha^{\prime}:=(\overline{a^{\prime}},\overline{b^{\prime}},\overline{a^{ \prime}_{1}},\overline{b^{\prime}_{1}},\overline{a^{\prime}_{2}},\overline{b^{ \prime}_{2}},v^{\prime})\). As Grohe & Neuen [13, Theorem 4.1] noted, the intuitive advantage of pebbling nice split pairs is that we can remove the pebbles from \(\overline{a},\overline{b}\) and \(\overline{a}_{3-i},\overline{b}_{3-i}\) without unpebbling some element of \(X_{i}\). We will use this later in our analysis. Let \(f_{i}\) be the flip function with respect to the split pair \((\overline{a}_{i},\overline{b}_{i})\) for \(X_{i}\) obtained via Lemma 3.8. Let \(f:V(G)\to V(H)\) be the bijection that Duplicator selects. We may assume that \(f(C)=C^{\prime}\); otherwise, there exists a vertex \(w\in C\) such that \(f(w)\not\in C^{\prime}\). 
Spoiler places a pebble on \(w\mapsto f(w)\). Now by Lemma 3.2, Spoiler wins using one more pebble and \(O(\log n)\) rounds. Now without loss of generality, suppose \(v\in X_{1}\). Let \(S:=\{C_{1},\ldots,C_{p}\}\) be the components of \(\operatorname{Comp}(G,f_{1})\) that intersect non-trivially with \(C\). Let \(S^{\prime}:=\{C^{\prime}_{1},\ldots,C^{\prime}_{p}\}\) be the analogous set of components in \(\operatorname{Comp}(H,f_{1})\) that intersect non-trivially with \(C^{\prime}\). By Lemma 3.9, we have that: \[((G^{f})[C],\chi_{2,O(\log n)}^{\overline{\alpha}})\not\cong((H^{f})[C^{\prime }],\chi_{2,O(\log n)}^{\overline{\alpha^{\prime}}}).\] Let \(f^{\prime}:V(G)\to V(H)\) be the bijection that Duplicator selects. Now by Lemma 3.16, there exists some \(w\in C\) such that \[((G^{f})[C\cap C_{i}],\chi_{2,O(\log n)}^{\overline{\alpha}})\not\cong((H^{f}) [C^{\prime}\cap C^{\prime}_{i}],\chi_{2,O(\log n)}^{\overline{\alpha^{\prime}} }),\] where \(i\in[p]\) is the unique index such that \(C_{i}\) contains \(w\). We label \(C^{\prime}_{i}\) to be the unique component of \(S^{\prime}\) containing \(f^{\prime}(w)\). By Lemma 3.9, we have that: \[(G[C\cap C_{i}],\chi_{2,O(\log n)}^{\overline{\alpha}})\not\cong(H[C^{\prime} \cap C^{\prime}_{i}],\chi_{2,O(\log n)}^{\overline{\alpha^{\prime}}}).\] By Lemma 3.8, we have that \(C_{i}\subseteq X_{1}\) or \(C_{i}\subseteq\overline{X_{1}}\). In particular, \(C\cap C_{i}\subseteq X_{1}\) or \(C\cap C_{i}\subseteq X_{2}\). We consider the following cases. * **Case 1:** Suppose that \(C\cap C_{i}\subseteq X_{1}\). As the configuration \(\overline{\alpha}\) has been pebbled, we have by Lemma 3.2 that \(\chi_{2,O(\log n)}^{\overline{\alpha}}((u_{1},u_{1}))\neq\chi_{2,O(\log n)}^{ \overline{\alpha}}((u_{2},u_{2}))\) for all \(u_{1}\in C_{i}\) and all \(u_{2}\in V(G)\setminus C_{i}\). It follows that: \[(G[C_{i}],\chi_{2,O(\log n)}^{\overline{\alpha}})\not\cong(H[C^{\prime}_{i}], \chi_{2,O(\log n)}^{\overline{\alpha^{\prime}}}).\] If we do not have both \(v\in C_{i}\) and \(v^{\prime}\in C^{\prime}_{i}\), then Spoiler pebbles \(w\mapsto f^{\prime}(w)\). Define: \[\overline{\alpha}_{1}:=\begin{cases}\overline{\alpha}&:v\in C_{i}\text{ and }v^{\prime}\in C^{\prime}_{i},\\ (\overline{\alpha},w)&:\text{otherwise}.\end{cases}\] Define \(\overline{\alpha^{\prime}_{1}}\) analogously. Observe that: \[(G[C_{i}],\chi_{2,O(\log n)}^{\overline{\alpha}_{1}})\not\cong(H[C^{\prime}_{ i}],\chi_{2,O(\log n)}^{\overline{\alpha^{\prime}}_{1}}).\] We again consider the flip function \(f_{1}\). We have that \(C_{i}\) forms a connected component in \(G^{f_{1}}\), and similarly \(C^{\prime}_{i}\) forms a connected component in \(H^{f_{1}}\). Thus, we may remove pebbles from outside of \(C_{i}\) (respectively \(C^{\prime}_{i}\)) without changing whether \(\chi_{2,O(\log n)}\) distinguishes \(G[C_{i}]\) and \(H[C^{\prime}_{i}]\). So we may remove all pebbles \(\overline{a},\overline{b},\overline{a}_{2},\overline{b}_{2}\). If \(w\mapsto f^{\prime}(w)\) was additionally pebbled, we may also remove the pebble pair \(v\mapsto v^{\prime}\). For convenience, we label: \[z=\begin{cases}v&:v\in C_{i}\text{ and }v^{\prime}\in C^{\prime}_{i},\\ w&:\text{otherwise}.\end{cases}\] Define \(z^{\prime}\) analogously. 
So now we have that: \[(G[C_{i}],\chi_{2,O(\log n)}^{(\overline{a}_{1},\overline{b}_{1},z)})\not \cong(H[C^{\prime}_{i}],\chi_{2,O(\log n)}^{(\overline{a}_{1}^{\prime}, \overline{b}_{1}^{\prime},z^{\prime})});\] otherwise, by Corollary 3.11, Spoiler can win with 2 pebbles (reusing pebbles we have removed) and \(O(\log n)\) additional rounds. Thus, we have not used any additional pebbles in this case. We now apply the inductive hypothesis to \(t_{1}\) to deduce that Spoiler wins from \(((\overline{a}_{1},\overline{b}_{1},z),(\overline{a^{\prime}_{1}},\overline{b ^{\prime}_{1}},z^{\prime}))\). * **Case 2:** Suppose \(C\cap C_{i}\subseteq X_{2}\). As \(X_{1}\) is defined with respect to the flip function \(f_{1}\), this case is not symmetric with respect to Case 1. Define \(M:=C\cap C_{i}\) and \(M^{\prime}:=C^{\prime}\cap C^{\prime}_{i}\). Spoiler begins by pebbling \(w\mapsto f^{\prime}(w)\). Now consider the flip function \(f_{2}\). As the configuration \(\overline{\alpha}\) has been pebbled and by Lemma 3.2, we have that \(\chi_{2,O(\log n)}^{\overline{\alpha}}((u_{1},u_{1}))\neq\chi_{2,O(\log n)}^{ \overline{\alpha}}((u_{2},u_{2}))\) for all \(u_{1}\in M\) and all \(u_{2}\in V(G)\setminus M\). Let \(f^{\prime\prime}:V(G)\to V(H)\) be the bijection that Duplicator selects at the next round. As \(\overline{\alpha}\) has been pebbled, we may assume that \(f^{\prime\prime}(M)=M^{\prime}\); otherwise, Spoiler pebbles some element of \(M\) that does not map to an element of \(M^{\prime}\). Then by Lemma 3.2, Spoiler wins with \(1\) pebble (removing a pebble of \(\overline{\alpha}_{1}\)) and \(O(\log n)\) additional rounds. Let \(\{D_{1},\ldots,D_{q}\}\) be the components of \(\operatorname{Comp}(G,f_{2})\) that intersect \(M\) non-trivially, and define \(\{D^{\prime}_{1},\ldots,D^{\prime}_{q}\}\) to be the components of \(\operatorname{Comp}(H,f_{2})\) that intersect \(M^{\prime}\) non-trivially. By Lemma 3.9 and as \(\overline{\alpha},w\) have been pebbled, we have that: \[(G^{f_{2}}[M],\chi^{(\overline{\alpha},w)}_{2,O(\log n)})\not\cong(H^{f_{2}}[ M^{\prime}],\chi^{(\overline{\alpha},w)}_{2,O(\log n)}).\] By Lemma 3.16, there exists some \(z\in M\) such that: \[(G^{f_{2}}[M\cap D_{j}],\chi^{(\overline{\alpha},w)}_{2,O(\log n)})\not\cong(H ^{f_{2}}[M^{\prime}\cap D^{\prime}_{j}],\chi^{(\overline{\alpha},w)}_{2,O(\log n )}),\] where \(j\in[q]\) is the unique component containing \(z\). Let \(D^{\prime}_{j}\) be the corresponding component containing \(f^{\prime\prime}(z)\). It follows that: \[(G^{f_{2}}[M\cap D_{j}],\chi^{(\overline{\alpha},w,z)}_{2,O(\log n)})\not\cong (H^{f_{2}}[M^{\prime}\cap D^{\prime}_{j}],\chi^{(\overline{\alpha},w,f^{ \prime\prime}(z))}_{2,O(\log n)}),\] Applying Lemma 3.9 again, we have that: \[(G[M\cap D_{j}],\chi^{(\overline{\alpha},w,z)}_{2,O(\log n)})\not\cong(H[M^{ \prime}\cap D^{\prime}_{j}],\chi^{(\overline{\alpha},w,f^{\prime\prime}(z))}_ {2,O(\log n)}).\] Now if \(w\in D_{i}\) and \(w^{\prime}\in D^{\prime}_{i}\), Spoiler does not place an additional pebble. Otherwise, Spoiler pebbles \(z\mapsto f^{\prime\prime}(z)\). For convenience, we define: \[x=\begin{cases}w&:w\in D_{j}\text{ and }w^{\prime}\in D^{\prime}_{j}\\ z&:\text{otherwise.}\end{cases}\] Define \(x^{\prime}\) analogously. 
Now as \[(G[M\cap D_{j}],\chi^{(\overline{\alpha},w,z)}_{2,O(\log n)})\not\cong(H[M^{ \prime}\cap D^{\prime}_{j}],\chi^{(\overline{\alpha},w,f^{\prime\prime}(z))}_ {2,O(\log n)}),\] we have that: \[(G[M\cap D_{j}],\chi^{(\overline{\alpha},w,x)}_{2,O(\log n)})\not\cong(H[M^{ \prime}\cap D^{\prime}_{j}],\chi^{(\overline{\alpha},w,x)}_{2,O(\log n)}).\] Now in the flipped graph \(G^{f_{2}}\), \(D_{i}\) forms a connected component. Similarly, in \(H^{f_{2}}\), \(D^{\prime}_{i}\) forms a connected component. So removing any pebbles from outside of \(D_{i}\) (respectively, \(D^{\prime}_{i}\)) does not affect whether \(\chi_{2,O(\log n)}\) distinguishes \(D_{i}\) from \(V(G)\setminus D_{i}\) (respectively, whether \(\chi_{2,O(\log n)}\) distinguishes \(D^{\prime}_{i}\) from \(V(H)\setminus D^{\prime}_{i}\)). So we may remove all pebbles, so that only \((\overline{a}_{2},\overline{b}_{2},x)\mapsto(\overline{a^{\prime}_{2}}, \overline{b^{\prime}_{2}},x^{\prime})\) remain pebbled and still obtain that: \[(G[D_{j}],\chi^{(\overline{a}_{2},\overline{b}_{2}x)}_{2,O(\log n)})\not\cong (H[D^{\prime}_{j}],\chi^{(\overline{a^{\prime}_{2}},\overline{b^{\prime}_{2}}, x^{\prime})}_{2,O(\log n)}).\] Otherwise, Spoiler wins using \(2\) pebbles we have removed and \(O(\log n)\) additional rounds. We now apply the inductive hypothesis to \(t_{2}\), to deduce that Spoiler wins from \(((\overline{a}_{2},\overline{b}_{2},z),(\overline{a^{\prime}_{2}},\overline{b ^{\prime}_{2}},z^{\prime}))\). So by induction, Spoiler has a winning strategy. It remains to analyze the round and pebble complexities. We first claim that only \(O(\log n)\) rounds are necessary. At each node of the host tree, only a constant number of rounds are necessary unless (i) Duplicator selects a bijection that does not respect connected components, or (ii) we apply Corollary 3.11. Note that either case can only happen once. If Duplicator selects a bijection that does not respect connected components, then by Lemma 3.2, Spoiler wins with \(O(\log n)\) rounds. Thus, as the height of \(T\) is \(O(\log n)\), only \(O(\log n)\) rounds are necessary. We will now analyze the number of pebbles, following the same strategy as Grohe & Neuen [10]. We can pebble \((\overline{a},\overline{b},\overline{a}_{1},\overline{b}_{1},\overline{a}_{2}, \overline{b}_{2})\) using \(12k\) pebbles, as \((T,\gamma)\) has width at most \(2k\). As \(\overline{a}\subseteq\overline{a}_{1}\cup\overline{a}_{2}\), we need not pebble \(\overline{a}\) and so can use only \(10k\) pebbles. By Lemma 3.14, Spoiler can choose nice split pairs \((\overline{a},\overline{b})\) and \((\overline{a}_{i},\overline{b}_{i})\) such that additionally \(\overline{b}_{i}\cap\overline{X}\subseteq\overline{b}\). So \(\overline{b}_{i}\subseteq\overline{b}\cup\overline{a}_{3-i}\). This brings us down to \(6k\) pebbles. At most two of \(x,v,w\) are pebbled at a given round. By Lemma 3.14, we can remove pebbles from \(b_{2}\) in Case 1 or \(b_{1}\) in Case 2. So only one additional pebble is necessary. Furthermore, if Duplicator selects bijections that do not respect connected components corresponding to pebbled vertices, then Duplicator can remove pebbles from \(b_{2}\) in Case 1 or \(b_{1}\) in Case 2 to win (see Lemma 3.2). So at most \(1\) additional pebble is required. Thus, Spoiler has a winning strategy with \(6k+3\) pebbles on the board and \(O(\log n)\) rounds. The result follows. 
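For readability, one way to tally the pebble count just described (this is only a restatement of the bounds in the preceding paragraph, not an additional argument) is: start from the \(12k\) pebbles needed for \((\overline{a},\overline{b},\overline{a}_{1},\overline{b}_{1},\overline{a}_{2},\overline{b}_{2})\), save \(2k\) since \(\overline{a}\) need not be pebbled, save another \(4k\) since \(\overline{b}_{i}\subseteq\overline{b}\cup\overline{a}_{3-i}\), and finally account for at most three further pebbles (the at most two of \(x,v,w\) on the board, plus the one extra pebble discussed above):
\[
12k\;\xrightarrow{\;\overline{a}\subseteq\overline{a}_{1}\cup\overline{a}_{2}\;}\;10k\;\xrightarrow{\;\overline{b}_{i}\subseteq\overline{b}\cup\overline{a}_{3-i}\;}\;6k\;\xrightarrow{\;x,v,w\text{ and one extra pebble}\;}\;6k+3.
\]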
**Remark 3.17**.: If we use a rank decomposition of width \(k\), then we obtain a slight improvement upon [10, Theorem 4.1], using \((3k+3)\)-WL (without controlling for rounds) rather than \((3k+4)\)-WL. **Corollary 3.18**.: _The Weisfeiler-Leman dimension of graphs of rank width \(k\) is \(\leq 3k+3\)._
## 4 Conclusion
We showed that the \((6k+3,O(\log n))\)-WL algorithm identifies graphs of bounded rank width. As a consequence, we obtain a \(\mathsf{TC}^{1}\) upper bound for isomorphism testing of graphs of bounded rank width. In the process, we also improved the Weisfeiler-Leman dimension from \(3k+4\) [10] to \(3k+3\), though it is not known whether even \((3k+4)\)-WL can identify graphs of bounded rank width in \(O(\log n)\) rounds. We conclude with several open questions. It would be of interest to close the gap between the \((3k+3)\)-WL bound, where the iteration number is not controlled, and our \((6k+3)\)-WL upper bound, which achieves \(O(\log n)\) rounds. One possible strategy would be to show that there exists a rank decomposition of width \(k\) whose host tree has height \(O(\log n)\). **Question 4.1**.: Let \(G\) be a graph of rank width \(k\). Does there exist a rank decomposition \((T,\gamma)\) of width \(k\) such that \(T\) has height \(O(\log n)\)? Courcelle & Kante [11] showed that a rank decomposition of width \(2k\) exists with a host tree of height \(3(\log(n)+1)\). Decreasing the width to some \(k\leq c\leq 2k\) at the cost of increasing the height of the host tree by a constant factor would immediately yield improvements. More generally, in light of the correspondence between WL and \(\mathsf{FO}+\mathsf{C}\), the width of the rank decomposition corresponds to the number of variables, and the depth of the host tree corresponds to the quantifier depth in formulas characterizing these graphs. Thus, controlling the trade-off between the width of the rank decomposition and the height of the host tree would directly translate into a trade-off between the number of variables and the quantifier depth in our logical formula. We note that isomorphism testing for graphs of bounded treewidth [1] is \(\mathsf{L}\)-complete under \(\mathsf{AC}^{0}\)-reductions. As graphs of bounded treewidth also have bounded rank width, we have that isomorphism testing for graphs of bounded rank width is \(\mathsf{L}\)-hard under \(\mathsf{AC}^{0}\)-reductions. We thus ask the following. **Question 4.2**.: Is isomorphism testing of graphs of bounded rank width \(\mathsf{L}\)-complete? It is natural to ask whether the standard counting Weisfeiler-Leman algorithm can identify graphs of bounded rank width in \(o(\log n)\) rounds. While we lack good lower bounds for the standard counting WL, there are lower bounds against the constant-dimensional _count-free_ WL. **Lemma 4.3**.: _The count-free \((O(1),\operatorname{poly}\log\log n)\)-WL is unable to identify graphs of bounded treewidth (and hence, graphs of bounded rank width)._ Proof.: As isomorphism testing for graphs of bounded treewidth is \(\mathsf{L}\)-complete under \(\mathsf{AC}^{0}\)-reductions [1], we have that Parity is \(\mathsf{AC}^{0}\)-reducible to isomorphism testing for graphs of bounded treewidth. If the count-free \((O(1),\operatorname{poly}\log\log n)\)-WL were able to identify graphs of bounded treewidth, then isomorphism testing for graphs of bounded treewidth, and hence Parity, would belong to \(\mathsf{FO}(\operatorname{poly}\log\log n)\).
However, \(\mathsf{FO}(\operatorname{poly}\log\log n)\) cannot compute Parity [12], a contradiction. On the other hand, we are not aware whether isomorphism testing of trees can be done using a logspace uniform \(\mathsf{TC}\) circuit of depth \(O(\log\log n)\). A negative answer would establish a lower bound of \(\omega(\log\log n)\) rounds for the standard counting WL to identify trees, planar graphs, graphs of bounded treewidth, and graphs of bounded rank width. **Question 4.4**.: Is isomorphism testing of trees decidable using a logspace uniform \(\mathsf{TC}\) circuit of depth \(O(\log\log n)\)? It was already known that GI parameterized by rank width is in \(\mathsf{XP}\) [13, 14]. While our results improve the parallel complexity, they do not improve the parameterized complexity. **Question 4.5**.: Does isomorphism testing of graphs parameterized by rank width belong to \(\mathsf{FPT}\)?
2309.15915
Zero-Shot and Few-Shot Video Question Answering with Multi-Modal Prompts
Recent vision-language models are driven by large-scale pretrained models. However, adapting pretrained models on limited data presents challenges such as overfitting, catastrophic forgetting, and the cross-modal gap between vision and language. We introduce a parameter-efficient method to address these challenges, combining multimodal prompt learning and a transformer-based mapping network, while keeping the pretrained models frozen. Our experiments on several video question answering benchmarks demonstrate the superiority of our approach in terms of performance and parameter efficiency on both zero-shot and few-shot settings. Our code is available at https://engindeniz.github.io/vitis.
Deniz Engin, Yannis Avrithis
2023-09-27T18:00:09Z
http://arxiv.org/abs/2309.15915v1
# Zero-Shot and Few-Shot Video Question Answering with Multi-Modal Prompts ###### Abstract Recent vision-language models are driven by large-scale pretrained models. However, adapting pretrained models on limited data presents challenges such as overfitting, catastrophic forgetting, and the cross-modal gap between vision and language. We introduce a parameter-efficient method to address these challenges, combining multimodal prompt learning and a transformer-based mapping network, while keeping the pretrained models frozen. Our experiments on several video question answering benchmarks demonstrate the superiority of our approach in terms of performance and parameter efficiency on both zero-shot and few-shot settings. Our code is available at [https://engindeniz.github.io/vitis](https://engindeniz.github.io/vitis). ## 1 Introduction Recent vision-language models have shown remarkable progress, driven by transformer-based _large-scale pretrained models_[10, 39, 9, 38, 17, 45, 44]. These models have been incorporated into video understanding methods, including _video question answering (VideoQA)_, through multimodal fusion on _large-scale multimodal datasets_[41, 3, 60]. However, adapting pretrained models to video-language tasks on limited data is challenging. This is because of the gap between the visual and language modalities and, more importantly, because finetuning the entire model on limited data can lead to overfitting and forgetting previously acquired knowledge. To address the gap between modalities, transformer-based mapping networks have been employed between frozen vision and language models [42, 16, 1]. These networks map visual features to an appropriate embedding space before they are given as input to the language models. To address overfitting, parameter-efficient adaptation methods have been explored,,, _prompt learning_[35, 37, 36] and _adapter layers_[18] on frozen pretrained models. These approaches preserve the generalization of large-scale models while reducing the number of trainable parameters. In this work, we investigate the adaptation of large-scale visual-language models to VideoQA under scarcity of training data. Inspired by FrozenBiLM [57], we incorporate visual inputs to a frozen language model using lightweight learnable adapter layers. Beyond that, we introduce a novel _visual mapping network_ that summarizes the video input while allowing for temporal interaction, inspired by [42, 20]. In addition, we introduce _multimodal prompt learning_, which diminishes the number of stored parameters when finetuning in the few-shot setting. We call our model _VideoQA with Multi-Modal Prompts_ (ViTiS). We pretrain trainable parameters of ViTiS, _i.e. visual mapping network, adapter layers, visual and text prompts_, under the _masked language modeling_ (MLM) objective on video-text pairs collected from the web, while the vision and language models are kept frozen. We evaluate ViTiS in the zero-shot and few-shot settings. For the latter, we fine-tune the model on downstream VideoQA tasks, using two approaches: (i) fine-tuning all trainable parameters, which are 8% of the total model parameters, (ii) fine-tuning only the prompts, which are 0.8% of all trainable parameters and a mere 0.06% of the total model parameters. Our extensive experimental results on multiple open-ended VideoQA datasets demonstrate that ViTiS outperforms prior methods while requiring fine-tuning of only a few parameters for each dataset in few-shot settings. 
In addition, our visual mapping network contributes to better alignment and understanding of multimodal inputs, improving performance in both zero-shot and few-shot settings. Our contributions can be summarized as follows: 1. We introduce _multimodal prompt learning_ to few-shot VideoQA for the first time, fine-tuning as low as 0.06% of model parameters on downstream tasks. 2. We introduce a _visual mapping network_ to VideoQA, mapping video input to the text embedding space, while supporting temporal interaction. 3. We experimentally demonstrate the strong performance of ViTiS on multiple VideoQA datasets in both zero-shot and few-shot settings. ## 2 Related Work Video question answeringRecent advances in vision-language models benefit from pretrained foundation models, including vision-only [10, 39] language-only [9, 38, 17, 45] and vision-language [44]. Recent video understanding methods, including VideoQA, incorporate these models by leveraging large-scale multimodal data [41, 3, 60] with different pretraining objectives, _e.g._, _masked language modeling_, _masked image modeling_, or _predicting the next word_, to perform single or multiple vision-language tasks [48, 33, 28, 12, 55, 60, 31, 57, 1, 59, 8, 51, 34, 19, 13]. Adapting pretrained vision-language models to downstream tasks relies on fully supervised fine-tuning on VideoQA datasets in general [50, 53, 21, 29, 58, 33, 14]. Few recent works address the challenge of limited data by focusing on zero-shot [55, 56, 57, 1, 59, 32, 34] and few-shot [57, 1] open-ended VideoQA tasks. Our work is similar to [57] in leveraging a frozen video encoder and language model with adapter layers. Beyond that, we introduce a transformer-based visual mapping network between the two models, allowing for temporal interaction. In addition, we incorporate multimodal prompt learning, allowing for efficient fine-tuning in few-shot settings. Parameter-efficient trainingAs the size of large-scale pretrained models grows, adapting them efficiently on limited data without overfitting in an emerging research problem. A common solution is _adapters_, introduced by [18] and employed for vision-language tasks [11, 57, 49]. Another common solution is _prompting_, referring to inserting tokens to the input to guide pretrained models on downstream tasks. Prompts can be handcrafted (discrete) [4] or learned (continuous vectors) [35]. Pretrained language models demonstrate remarkable generalization to zero-shot settings with handcrafted prompts [4]. Prompt learning is introduced initially in natural language processing tasks [35, 30, 37, 36, 43, 40] and subsequently adopted in vision [22, 2] and vision-language models. In the latter case, prompts are introduced to text encoders [62, 61], or both text and vision encoders [24, 52, 27, 46], called _multimodal_. Learnable prompts can be inserted at the input level [35] and/or deep layers [36, 22]. Few recent works employ prompt learning for video understanding [23, 63, 49] and multimodal prompt learning for video classification [52, 46]. We introduce multimodal prompt learning to few-shot VideoQA for the first time. ## 3 Method The proposed method, ViTiS, is illustrated in Figure 1(a), consisting of a frozen video encoder, a visual mapping network, a frozen text embedding layer and a frozen language model that includes learnable text prompts and adapter layers. 
Given an input video \(X^{v}\), represented as a sequence of frames, and a question \(X^{q}\), represented as a sequence of tokens, the problem is to predict an answer \(X^{a}\) that is another sequence of tokens. The model takes the concatenated sequence \(X^{t}=(X^{q},X^{a})\) as input text. Parts of \(X^{t}\) may be masked, for example \(X^{a}\) is masked at inference. Video encoderThe input video is represented by a sequence of \(T\) frames, \(X^{v}=(x^{v}_{1},\dots,x^{v}_{T})\). This sequence is encoded into the _frame features_ \[Y^{v}:=f^{v}(X^{v})=(y^{v}_{1},\dots,y^{v}_{T})\in\mathbb{R}^{D\times T} \tag{1}\] by a frozen pretrained _video encoder_\(f^{v}\), where \(D\) is the embedding dimension. Visual mapping networkA _visual mapping network_\(f^{m}\) maps the frame features \(Y^{v}\) to the same space as the text embeddings. The mapping is facilitated by a set of \(M\)_learnable visual prompts_\(P^{v}\in\mathbb{R}^{D\times M}\), which are given as input along with \(Y^{v}\), to obtain the _video embeddings_ \[Z^{v}:=f^{m}(P^{v},Y^{v})\in\mathbb{R}^{D\times M}. \tag{2}\] As shown in Figure 1(c), the architecture of \(f^{m}\) is based on Perceiver [20], where the latent array corresponds to our learnable visual prompts \(P^{v}\). It consists of \(L\) blocks defined as \[Z_{\ell}:=\textsc{sa}_{\ell}(\textsc{ca}_{\ell}(Z_{\ell-1},Y^{v}))\in\mathbb{R }^{D\times M} \tag{3}\] for \(\ell=1,\dots,L\), with input \(Z_{0}=P^{v}\). Each block \(\ell\) maps the latent vectors \(Z_{\ell-1}\) first by cross attention \(\textsc{ca}_{\ell}\) with the frame features \(Y^{v}\) and then by self-attention \(\textsc{sa}_{\ell}\) to obtain \(Z_{\ell}\). In cross attention, \(Z_{\ell-1}\) serves as query and \(Y^{v}\) as key and value. We thus iteratively extract information from the frame features \(Y^{v}\) into the latent vectors, which are initialized by the visual prompts. The output video embeddings are \(Z^{v}=Z_{L}\in\mathbb{R}^{D\times M}\). To allow modeling of temporal relations within the video, learnable _temporal position embeddings_ are added to \(Y^{v}\) before \(f^{m}\). Text embeddingThe input text is tokenized into a sequence of \(S\) tokens, \(X^{t}=(x^{t}_{1},\dots,x^{t}_{S})\). This sequence is mapped by a frozen _text embedding layer_\(f^{t}\) to the text embedding space, \[Z^{t}:=f^{t}(X^{t})=(z^{t}_{1},\dots,z^{t}_{S})\in\mathbb{R}^{D\times S}. \tag{4}\] One or more tokens are masked, in which case they are replaced by a learnable mask token. Language modelWe concatenate video and text embeddings into a single input sequence \((Z^{v},Z^{t})\in\mathbb{R}^{D\times K}\) of length \(K=M+S\). We then feed this sequence to a transformer-based bidirectional language model \(f\) to obtain an output sequence \[f(Z^{v},Z^{t})\in\mathbb{R}^{D\times K} \tag{5}\] of the same length. Finally, a classifier head \(g\) maps the output sequence to logit vectors over a vocabulary \(U\). The logit vectors corresponding to masked tokens are selected to apply the loss function at training or make predictions at inference. Both \(f\) and \(g\) are pretrained and kept frozen. However, as shown in Figure 1(b), \(f\) is adapted by means of learnable deep text prompts and adapters, described next. Text promptsTo reduce the number of fine-tuned parameters at downstream tasks, we introduce attention-level text prompts in self-attention blocks at each layer of the language model, referred to as _deep text prompt learning_[36]. 
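As an illustration of Eqs. (2)–(3), the visual mapping network can be sketched as follows. This is a minimal PyTorch-style sketch with our own class and variable names; normalization, dropout, and feed-forward sublayers of an actual Perceiver-style implementation [20] are omitted, so it should be read as an outline rather than the released code.

```python
import torch
import torch.nn as nn

class VisualMappingBlock(nn.Module):
    """One block of Eq. (3): cross-attention to the frame features, then self-attention."""

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.self_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, latents: torch.Tensor, frames: torch.Tensor) -> torch.Tensor:
        # ca_l: the latents (initialized from the visual prompts) query the frame features Y^v.
        latents, _ = self.cross_attn(query=latents, key=frames, value=frames)
        # sa_l: self-attention among the latents.
        latents, _ = self.self_attn(query=latents, key=latents, value=latents)
        return latents


class VisualMappingNetwork(nn.Module):
    """Perceiver-style mapping of T frame features to M video embeddings Z^v (Eq. 2)."""

    def __init__(self, dim: int = 1536, num_latents: int = 10,
                 num_layers: int = 2, max_frames: int = 10):
        super().__init__()
        self.visual_prompts = nn.Parameter(0.02 * torch.randn(num_latents, dim))  # P^v
        self.temporal_pos = nn.Parameter(torch.zeros(max_frames, dim))            # temporal positions
        self.blocks = nn.ModuleList([VisualMappingBlock(dim) for _ in range(num_layers)])

    def forward(self, frame_features: torch.Tensor) -> torch.Tensor:
        # frame_features: (batch, T, dim), already projected to the text embedding width D.
        frames = frame_features + self.temporal_pos[: frame_features.size(1)]
        latents = self.visual_prompts.expand(frame_features.size(0), -1, -1)      # Z_0 = P^v
        for block in self.blocks:
            latents = block(latents, frames)
        return latents                                                            # (batch, M, dim)


# Example: map T=10 frame features of width 1536 to M=10 video embeddings.
z_v = VisualMappingNetwork()(torch.randn(2, 10, 1536))
print(z_v.shape)  # torch.Size([2, 10, 1536])
```

The key design point reflected here is that the number of output video embeddings is fixed by the number of learnable visual prompts (\(M=10\)), independently of the number of input frames, while the cross-attention lets the latents aggregate temporal information from all frames.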
Given a sequence \(Z\in\mathbb{R}^{D\times K}\) of token embeddings as input to a self-attention layer of the language model \(f\), we prepend two sequences of _learnable text prompts_\(P_{K},P_{V}\in\mathbb{R}^{N\times D}\) to the key and value respectively: \[Q:=W_{Q}Z\quad K:=[P_{K}\;\;W_{K}Z]\quad V:=[P_{V}\;\;W_{V}Z], \tag{6}\] where \(W_{Q},W_{K},W_{V}\in\mathbb{R}^{D\times D}\) are the query, key and value projections respectively. The output sequence length does not change since it is determined by the query, where we do not prepend prompts. There is one pair of variables \(P_{K},P_{V}\) for each layer of \(f\), collectively denoted as \(P^{t}\). These variables are either defined as parameters directly or parametrized by means of projections as discussed next. Text prompt parametrizationInstead of defining text prompts as parameters directly, we discuss here an alternative parametrization using projections. We first generate a sequence of input prompts \(P^{i}\in\mathbb{R}^{D^{\prime}\times N}\) and then we project it as follows: \[P^{t}:=WP^{i}\in\mathbb{R}^{2CD\times N}, \tag{7}\] where \(W\in\mathbb{R}^{2CD\times D^{\prime}}\), \(C\) is the number of layers of the language model \(f\) and \(D\) its embedding dimension. Then, \(P^{t}\) can be reshaped as a \(2\times C\times D\times N\) tensor, representing one pair of sequences \(P_{K},P_{V}\in\mathbb{R}^{D\times N}\) for every layer of \(f\). After training, the input sequence \(P^{i}\) and projection matrix \(W\) are discarded and we only keep \(P^{t}\). This allows us to fine-tune fewer parameters at downstream tasks, which is beneficial when data is limited. AdaptersFollowing [57], we add adapter layers to the language model \(f\). Given a sequence \(Z\in\mathbb{R}^{D\times K}\) of token embeddings, an adapter layer \(A\) maps it through a bottleneck dimension \(d\) with a residual connection: \[A(Z):=Z+W_{2}h(W_{1}Z)\in\mathbb{R}^{D\times K}, \tag{8}\] where \(W_{1}\in\mathbb{R}^{d\times D}\), \(W_{2}\in\mathbb{R}^{D\times d}\), and \(h\) is the \(\mathrm{relu}\) activation function. We insert an adapter module after the self-attention layer and the feed-forward layer, preceding LayerNorm in each layer of \(f\). Training and inferenceOur model is trained using the _masked language modeling_ (MLM) objective, where one or more tokens of \(X^{t}\) are masked and the corresponding outputs are predicted over a vocabulary \(U\). The parameters of the visual encoder \(f^{v}\), text embedding layer \(f^{t}\) Figure 1: (a) ViTiS consists of a frozen video encoder \(f^{v}\), a visual mapping network \(f^{m}\), a frozen text embedding layer \(f^{t}\), a frozen language model \(f\) and a frozen classifier head \(g\). Given input video frames \(X^{v}\) and text \(X^{t}\), \(f^{v}\) extracts frame features and \(f^{m}\) maps them to the same space as the text embeddings obtained by \(f^{t}\). Then, \(f\) takes the video and text embeddings \(Z^{v}\), \(Z^{t}\) as input and predicts the masked input tokens. (b) The _language model_ incorporates learnable text prompts in the key and value of multi-head-attention and adapter layers after each self-attention and feed-forward layer, before LayerNorm. (c) Our _visual mapping network_ consists of a number of layers, each performing cross-attention between learnable visual prompts and video frame features followed by self-attention. language model \(f\) and classifier head \(g\) are pretrained and kept frozen. 
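For concreteness, the trainable modules inserted into the frozen language model, namely the deep text prompts of Eq. (6) and the adapters of Eq. (8), can be sketched as follows. This is a minimal single-head PyTorch-style illustration under our own naming and initialization; the actual model uses DeBERTa's multi-head attention, and LayerNorm and dropout are omitted here.

```python
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Bottleneck adapter of Eq. (8): A(Z) = Z + W2 relu(W1 Z)."""

    def __init__(self, dim: int = 1536, bottleneck: int = 192):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)   # W1
        self.up = nn.Linear(bottleneck, dim)     # W2

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return z + self.up(torch.relu(self.down(z)))


class PromptedSelfAttention(nn.Module):
    """Single-head sketch of Eq. (6): learnable prompts prepended to the projected keys/values."""

    def __init__(self, dim: int = 1536, num_prompts: int = 10):
        super().__init__()
        self.w_q = nn.Linear(dim, dim, bias=False)                      # W_Q
        self.w_k = nn.Linear(dim, dim, bias=False)                      # W_K
        self.w_v = nn.Linear(dim, dim, bias=False)                      # W_V
        self.p_k = nn.Parameter(0.02 * torch.randn(num_prompts, dim))   # P_K
        self.p_v = nn.Parameter(0.02 * torch.randn(num_prompts, dim))   # P_V

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        b, _, d = z.shape
        q = self.w_q(z)                                                  # W_Q Z
        k = torch.cat([self.p_k.expand(b, -1, -1), self.w_k(z)], dim=1)  # [P_K ; W_K Z]
        v = torch.cat([self.p_v.expand(b, -1, -1), self.w_v(z)], dim=1)  # [P_V ; W_V Z]
        attn = torch.softmax(q @ k.transpose(1, 2) / d ** 0.5, dim=-1)
        return attn @ v   # output keeps the query length; prompts only widen keys/values


# Example: a sequence of K = M + S = 266 token embeddings passes through both modules.
z = torch.randn(2, 266, 1536)
print(Adapter()(PromptedSelfAttention()(z)).shape)  # torch.Size([2, 266, 1536])
```

In the actual language model, each of the \(C=24\) layers has its own pair \((P_{K},P_{V})\) and applies an adapter after both the self-attention and the feed-forward sublayer, before LayerNorm; only these prompts and adapters, together with the visual prompts and the visual mapping network, receive gradients.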
Only the newly introduced parameters, that is, visual prompts \(P^{v}\), visual mapping network \(f^{v}\), text prompts \(P^{t}\) and adapter layers, are optimized on video-text pairs. We then fine-tune these parameters or a smaller subset on downstream video question answering tasks, where \(X^{t}=(X^{q},X^{a})\) consists of a question-answer pair and masking applies to the \(X^{a}\) only. At inference, \(X^{a}\) is masked and the corresponding output yields a prediction. ## 4 Experiments ### Datasets PretrainingWe use WebVid2M [3] for pretraining, consisting of \(2.5\)M video-caption pairs scraped from the internet. The domain is open and the captions are manually generated. The average video duration is \(18\) seconds and the average caption word count is \(12\). Downstream tasksDownstream dataset statistics are given in Table 1. Following [57], we use 1% of the training data for fine-tuning in the few-shot setting. MSRVTT-QA [53] is an extension of MSR-VTT [54], where question-answer pairs are automatically generated from video descriptions. MSVD-QA [53] is based on MSVD [7] and question-answers pairs are automatically generated as in MSRVTT-QA. ActivityNet-QA [58] is derived from ActivityNet [6]. The average video duration is \(180\)s. TGIF-QA [21] comprises several tasks, including FRAME-QA, where the question can be answered from one of the frames in a GIF. In this work, TGIF-QA refers only to Frame-QA. ### Implementation details Architecture detailsThe _frozen video encoder_ is CLIP ViT-L/14 [10, 44], trained with contrastive loss on \(400\)M image-text pairs. We uniformly sample \(T=10\) frames located at least 1 second apart and each frame is resized to \(224\times 224\) pixels; if the video is shorter than \(10\) seconds, we zero-pad up to \(T=10\) frames. The encoder then extracts one feature vector per frame of the dimension of \(768\), followed by a linear projection to \(D=1536\) dimensions. The _visual mapping network_ has \(L=2\) layers, each with a cross-attention and a self-attention, having \(8\) heads and embedding dimension \(D=1536\). We use \(M=10\) learnable visual prompt vectors of dimension \(D=1536\). The _text tokenizer_ is based on SentencePiece [26] with a vocabulary \(U\) of size \(128\)k. The _frozen language model_ is DeBERTa-V2-XLarge [17], trained using MLM on \(160\)G text data, following [57]. The model has \(C=24\) layers, \(24\) attention heads, and embedding dimension \(D=1536\), resulting in \(900\)M parameters. For the _adapter layers_[18], we set \(d=D/8=192\) by following [57]. For _text prompts_, we use \(N=10\) learnable text prompt vectors, \(D^{\prime}=D/8=192\), and \(C=24\). Downstream input designWe limit the length of text sequences to \(S=256\) tokens for pretraining and zero-shot experiments and \(S=128\) tokens for downstream experiments. We adopt the input design of [57] as follows: "[CLS] Question: \(<\)Question\(>\)? Answer: [MASK]. Subtitles: \(<\)Subtitles\(>\) [SEP]". Subtitles are optional and if available, their token sequence \(X^{s}\) is incorporated into the input. In this case, the text input sequence becomes \(X^{t}=(X^{q},X^{a},X^{s})\). Answer vocabularyThe answer vocabulary \(U\) is constructed by selecting the top 1k most frequent answers from the training set for the zero-shot setting, following [57, 60]. Another vocabulary is formed by including answers that occur at least twice in the training set for the few-shot setting, as defined in [57]. 
Questions with answers outside the vocabulary are excluded from the training process and are assessed as incorrect during evaluation. To report results for the few-shot setting, we choose the vocabulary that yields the best performance on the validation set. Answer embeddingThe classifier head of the frozen language model includes more tokens than required for downstream training. To address this, by following [57], we define a task-specific classification head by keeping the weights of the pretrained head associated with the answer vocabulary. At inference, we provide one mask token at the input, regardless of the ground truth answer length, and we obtain one output logit vector. For multi-token answers, we take the average of the logits corresponding to the ground truth words from the vocabulary. Evaluation MetricsWe report top-1 accuracy on public test sets for all downstream tasks, except TGIF-QA where we report on the validation set unless otherwise specified. Training settingsWe use the Adam optimizer [25] with \(\beta=(0.9,0.95)\) in all experiments. We decay the learning rate using a linear schedule with the warm-up in the first 10% of the iterations. We use dropout with probability \(0.1\) in the language model, adapter layers, text prompts, and visual mapping network. We adopt automatic mixed precision training for all experiments. \begin{table} \begin{tabular}{l c c c c c} \hline \hline \multirow{2}{*}{Dataset} & \multirow{2}{*}{Videos} & \multicolumn{4}{c}{QA Pairs} \\ \cline{3-6} & & Train & Val & Test & Total \\ \hline MSRVTT-QA [53] & 10k & 159k & 12k & 73k & 244k \\ MSVD-QA [53] & 2k & 31k & 6.5k & 13k & 50.5k \\ ActivityNet-QA [58] & 5.8k & 32k & 18k & 8k & 58k \\ TGIF-QA [21] & 40k & 39k & – & 13k & 53k \\ \hline \hline \end{tabular} \end{table} Table 1: Downstream dataset statistics. We _pretrain_ for \(10\) epochs on WebVid2M with a batch size of \(128\) on \(8\) NVIDIA Tesla V100 GPUs, amounting to \(20\) hours total training time. The base learning rate is \(2\times 10^{-5}\) and the learning rate for visual and text prompts is separately set to \(10^{-3}\). For _fine-tuning_ on each downstream dataset, we train for \(20\) epochs with a batch size of \(32\) on \(4\) NVIDIA Tesla V100 GPUs. The base learning rate is searched over \(5\) values in the interval \([10^{-5},5\times 10^{-5}]\), while the learning rate for visual and text prompts is kept at \(10^{-3}\). For _prompt-only fine-tuning_, the base learning rate is searched over \(3\) values in the interval \([10^{-3},3\times 10^{-3}]\). ### Ablation We conduct an ablation study in the few-shot setting. Model designIn Table 2, we analyze the effect of different components in the model design. We observe that changing the baseline from a linear layer to _our visual mapping network_ without adapters increases the performance by a large margin in most datasets (row 1\(\rightarrow\)5). By adding _text prompts_ to any model design (row 1\(\rightarrow\)2, 3\(\rightarrow\)4, 5\(\rightarrow\)6, 7\(\rightarrow\)8), the performance increases for all datasets. The improvement is vast in the absence of adapters. The model design that includes a linear mapping network and adapter layers (row 3) corresponds to FrozenBiLM [57] trained on WebVid2M. While using only our visual mapping network and text prompts (row 6) already works better than FrozenBiLM trained on WebVid2M, we further improve performance by incorporating adapter layers: our full model (row 8) achieves the best performance overall. 
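Before examining the handcrafted prompts themselves, the following toy sketch illustrates the downstream input design and the multi-token answer scoring described in Section 4.2 (the template string, the tiny vocabulary, and the logits below are illustrative placeholders chosen by us, not actual tokenizer or model outputs):

```python
import torch

def build_input(question: str, subtitles: str | None = None) -> str:
    """Downstream input design; [MASK] marks the answer slot predicted by the language model."""
    text = f"[CLS] Question: {question}? Answer: [MASK]."
    if subtitles is not None:
        text += f" Subtitles: {subtitles}"
    return text + " [SEP]"

def score_answers(mask_logits: torch.Tensor, answer_vocab: dict[str, list[int]]) -> str:
    """Score each candidate answer at the single [MASK] position.

    Multi-token answers are scored by averaging the logits of their tokens,
    as in the answer-embedding scheme described above.
    """
    scores = {ans: mask_logits[token_ids].mean().item()
              for ans, token_ids in answer_vocab.items()}
    return max(scores, key=scores.get)

# Toy example: logits over a 5-token vocabulary produced at the [MASK] position.
vocab = {"dog": [0], "cat": [1], "play guitar": [2, 3]}
print(build_input("what is the animal doing"))
print(score_answers(torch.tensor([0.1, 0.4, 0.9, 0.7, 0.2]), vocab))  # -> "play guitar"
```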
Handcrafted prompts We explore the use of handcrafted prompts in the input text. In Table 5 and Table 6, we consider four different input designs for zero-shot and few-shot settings, respectively: (i) no handcrafted prompts, (ii) placed before the question, (iii) placed just before the [MASK] token (answer), and (iv) placed just before the question, answer and subtitles. In _zero-shot_, handcrafted prompts are beneficial due to the absence of task-specific learning for downstream tasks. As shown in Table 5, the absence of handcrafted prompts drastically reduces the performance (row 1), highlighting their necessity. Moreover, the position of the handcrafted prompt has a significant impact on the performance. More specifically, the location of the "Answer" prompt affects the results by a large margin (row 2\(\rightarrow\)3), even leading to worse performance than the absence of handcrafted prompts (row 1\(\rightarrow\)2). The presence of an "Answer" prompt just before the [MASK] token yields better performance in two input designs (rows 3 & 4). Although the impact of using handcrafted text prompts is relatively small in _few-shot_ experiments compared to zero-shot experiments, they still improve performance, particularly on the MSRVTT-QA and TGIF-QA datasets, as shown in Table 6. Placing handcrafted prompts at the beginning (row 2), as is the case for learnable text prompts, leads to lower performance. The best performance is achieved when handcrafted prompts are placed just before the question, answer, and subtitles (row 4). Therefore, we choose to place handcrafted prompts according to row 4 for both settings. By contrast, _learnable prompts_ are all placed at the beginning. We empirically observe that other choices, _e.g._ placing half at the beginning of the input and half just before the [MASK] token, are inferior.
### Results
Zero-shot A comparison with state-of-the-art methods on open-ended zero-shot VideoQA is given in Table 7. We observe strong performance of our method across different VideoQA benchmarks, despite using significantly less pretraining data compared to other methods. Our performance on ActivityNet-QA [58] is on par with FrozenBiLM [57]. Lavender [34] employs a multi-task training approach, transforming different vision-language tasks into MLM. Reserve [59] uses GPT-3 [5] to convert questions into masked sentences. Flamingo [1] uses a frozen auto-regressive language model trained on an extreme-scale dataset. By contrast, our method leverages a lighter frozen language model trained on 2.5M video-text pairs. BLIP [32] is pretrained on the VQA dataset [15], which is not directly comparable as our setting does not involve training on QA pairs. Similarly, Just Ask [55, 56] uses automatically generated visual question answering datasets. Although these datasets are not annotated by humans, the model is still trained on the specific task. For comparison with few-shot in-context learning methods, we randomly draw 10 tasks of 32 examples each and fine-tune only the prompts, that is, \(0.75\)M parameters, on each task for 5 epochs and report mean and standard deviation. This can be considered as _test-time prompt tuning_ [47] using task-specific annotated data. Table 8 shows the results of few-shot in-context learning. Flamingo [1] uses a frozen auto-regressive language model with trainable cross-attention layers that incorporate vision and language input, trained on an extreme-scale dataset. The Flamingo-3B, Flamingo-9B, and Flamingo-80B have \(1.4\)B, \(1.8\)B, and \(10\)B learned parameters, respectively, in addition to the frozen language model.
By contrast, our method uses a lighter frozen language model and lighter adaptation modules, resulting in only \(101\)M parameters to learn, and our training data is a relatively small amount of video-text pairs. Despite this, our method outperforms Flamingo-3B [1] on MSRVTT-QA and is on par with MSVD-QA. ## 5 Conclusion In this work, we explored the adaptation of large-scale pretrained vision and language models for VideoQA under scarcity of data. We introduced multi-modal prompt learning and a visual mapping network to address challenges in such adaptation. Our method consistently outperforms prior works, while requiring minimal parameter fine-tuning in few-shot VideoQA. AcknowledgementsThis work was granted access to the HPC resources of IDRIS under the allocation 2022-AD011012263R2 made by GENCI. \begin{table} \begin{tabular}{l c c c c c c c} \hline \hline \multirow{2}{*}{Method} & \multirow{2}{*}{Sub} & \multicolumn{3}{c}{\#Training} & \multirow{2}{*}{Msrvtt-QA} & \multirow{2}{*}{Msvd-QA} & \multirow{2}{*}{ANet-QA} & \multirow{2}{*}{Tgif-QA} \\ & & & & & & & \\ \hline CLIP* [44] & & 400M & - & & 2.1 & 7.2 & 1.2 & 3.6 \\ Reserve [59] & ✓ & - & 20M & & 5.8 & - & - & - \\ Lavender [34] & & 3M & 2.5M & & 4.5 & 11.6 & - & 16.7 \\ Flamingo-3B [1] & & 2.3B & 27M & & 11.0 & 27.5 & - & - \\ Flamingo-9B [1] & & 2.3B & 27M & & 13.7 & 30.2 & - & - \\ Flamingo [1] & & 2.3B & 27M & & 17.4 & 35.6 & - & - \\ FrozenBiLM [57] & ✓ & - & 10M & & 16.7 & 33.8 & **25.9** & 41.9 \\ \hline Just Ask [55] & & 69M & - & ✓ & 2.9 & 7.5 & 12.2 & - \\ Just Ask [56] & & 69M & 3M & ✓ & 5.6 & 13.5 & 12.3 & - \\ BLIP [32] & & 129M & - & ✓ & 19.2 & 35.2 & - & - \\ \hline ViTiS (Ours) & & - & 2.5M & & **18.2** & **36.2** & 25.0 & **45.5** \\ ViTiS (Ours) & ✓ & - & 2.5M & & 18.1 & 36.1 & 25.5 & **45.5** \\ \hline \hline \end{tabular} \end{table} Table 7: _Zero-shot VideoQA_ top-1 accuracy on test sets, except TGIF-QA on the validation set. Number of pretraining data: image-text/video-text pairs. Sub: subtitle input. VQA: visual question answer pairs. ANet-QA: ActivityNet-QA. CLIP: CLIP ViT-L/14. Flamingo: Flamingo-80B. We gray out methods trained on VQA pairs, which are not directly comparable. *: CLIP results taken from [57]. \begin{table} \begin{tabular}{l c c c c c c c} \hline \hline \multirow{2}{*}{Method} & \multirow{2}{*}{\#Shot} & \multicolumn{3}{c}{\#Pre-Training} & \multirow{2}{*}{Msrvtt-QA} & \multirow{2}{*}{Msvd-QA} & \multirow{2}{*}{ANet-QA} & \multirow{2}{*}{Tgif-QA} \\ & & & & & & & \\ & & & & & & & \\ \hline Flamingo-3B [1] & 32 & 2.3B & 27M & 1.4B & 25.6 & 42.6 & – & – \\ Flamingo-9B [1] & 32 & 2.3B & 27M & 1.8B & 29.4 & 47.2 & – & – \\ Flamingo-80B [1] & 32 & 2.3B & 27M & 10B & 31.0 & 52.3 & – & – \\ \hline ViTiS (Ours) & 32 & – & 2.5M & 101M & 27.0\(\pm\)1.0 & 41.9\(\pm\)0.8 & 28.7\(\pm\)1.3 & 52.2\(\pm\)1.2 \\ \hline \hline \end{tabular} \end{table} Table 8: _Few-shot VideoQA in-context learning_. Mean and standard deviation of top-1 accuracy on test sets, except TGIF-QA on the validation set, over 10 32-shot tasks drawn at random. Only our model involves parameter updates; we fine-tune 0.75M params. Number of pretraining data: image-text/video-text pairs. ANet-QA: ActivityNet-QA. 
\begin{table} \begin{tabular}{l c c c c c c} \hline \hline \multirow{2}{*}{Method} & Trained & \#Trained & Msrvtt & Msvd & ANet & Tgif \\ & Modules & Params & -QA & -QA & -QA & -QA \\ \hline FrozenBiLM [57] & ATP & 30M & 36.0 & 46.5 & 33.2 & 55.1 \\ ViTiS (Ours) & ATP & 101M & 36.5 & 47.6 & 33.1 & 55.7 \\ ViTiS (Ours) & Prompts & 0.75M & **36.9** & **47.8** & **34.2** & **56.2** \\ \hline \hline \end{tabular} \end{table} Table 9: _Few-shot VideoQA_ top-1 accuracy on test sets, except TGIF-QA on the validation set. Number of trained parameters: fine-tuned on the downstream dataset, using 1% of training data. ATP: All trainable parameters. ANet-QA: ActivityNet-QA.
2301.00284
Square Root Normal Fields for Lipschitz surfaces and the Wasserstein Fisher Rao metric
The Square Root Normal Field (SRNF) framework is a method in the area of shape analysis that defines a (pseudo) distance between unparametrized surfaces. For piecewise linear (PL) surfaces it was recently proved that the SRNF distance between unparametrized surfaces is equivalent to the Wasserstein Fisher Rao (WFR) metric on the space of finitely supported measures on $S^2$. In the present article we extend this point of view to a much larger set of surfaces; we show that the SRNF distance on the space of Lipschitz surfaces is equivalent to the WFR distance between Borel measures on $S^2$. For the space of spherical surfaces this result directly allows us to characterize the non-injectivity and the (closure of the) image of the SRNF transform. In the last part of the paper we further generalize this result by showing that the WFR metric for general measure spaces can be interpreted as an optimization problem over the diffeomorphism group of an independent background space.
Emmanuel Hartman, Martin Bauer, Eric Klassen
2022-12-31T20:48:15Z
http://arxiv.org/abs/2301.00284v2
# Square Root Normal Fields for Lipschitz Surfaces and the Wasserstein Fisher Rao Metric ###### Abstract The Square Root Normal Field (SRNF) framework is a method in the area of shape analysis that defines a (pseudo) distance between unparametrized surfaces. For piecewise linear (PL) surfaces it was recently proved that the SRNF distance between unparametrized surfaces is equivalent to the Wasserstein Fisher Rao (WFR) metric on the space of finitely supported measures on \(S^{2}\). In the present article we extend this point of view to a much larger set of surfaces; we show that the SRNF distance on the space of Lipschitz surfaces is equivalent to the WFR distance between Borel measures on \(S^{2}\). For the space of spherical surfaces this result directly allows us to characterize the non-injectivity and the (closure of the) image of the SRNF transform. In the last part of the paper we further generalize this result by showing that the WFR metric for general measure spaces can be interpreted as an optimization problem over the diffeomorphism group of an independent background space. ## 1 Introduction The investigations of this article are motivated by applications in the area of mathematical shape analysis, which seeks to quantify differences, perform classification, and explain variability for populations of shapes [51, 40, 13, 28]. More specifically, the results of this article concern the Square Root Normal Field distance [16] on the space of surfaces and the Wasserstein Fisher Rao metric [9, 26] from unbalanced optimal transport. Before we describe the contributions of the current work in more detail, we will briefly summarize some results from these two areas. **Shape analysis of surfaces:** For the purpose of this article we consider a shape to be a parametrized surface or curve in \(\mathbb{R}^{d}\), where we identify two objects if they only differ by a translation and/or a reparametrization. In practice, it is often of interest to mod out by further shape preserving group actions, such as the groups of rotations or scalings. To keep the presentation simple, we will ignore these additional finite dimensional groups. Consequently, the resulting shape space is an infinite dimensional, non-linear (quotient) space, which makes the application of statistical techniques to analyse these types of data a highly challenging task. A common approach to overcome these difficulties can be found in the area of geometric statistics [35, 36], in which one develops statistical frameworks based on (Riemannian) geometry. In the context of shape analysis of surfaces or curves, a variety of different metrics have been proposed for this purpose; this includes metrics induced by (right-invariant) metrics on diffeomorphism groups [51, 31] and reparametrization invariant metrics on the space of immersions [40, 3, 30], which are directly related to the investigations of the present article as we will explain next. In the latter approach the calculation of the distance (similarity) between two shapes reduces to two tasks: calculating the geodesic distance on the space of immersions (parametrized surfaces or curves, resp.) and minimizing over the action of the shape preserving group actions, i.e., diffeomorphisms of the parameter space and translations in \(\mathbb{R}^{d}\). 
In general there do not exist any explicit formulas for geodesics and thus computing solutions to the geodesic boundary value problems (and thus of the distance) is a highly non-trivial task and usually has to be solved using numerical optimization techniques, see eg. [14, 2]. For specific examples of Riemannian metrics, however, simplifying transformations have been developed that allow for explicit calculations of geodesics and geodesic distance. This includes in particular the family of \(G^{a,b}\)-metrics on the space of curves [5, 34, 33, 50], a family of first order Sobolev type metrics, that are often called _elastic_ metrics due to their connections to linear elasticity theory; see eg. [33, 8, 5]. For the specific choice of parameters \(a=1\), \(b=1/2\) the corresponding transformation is the so-called Square-Root-Velocity (SRV) transform [39], which is widely used in applications; see [40] and the references therein. The advantage of this transformation is that it reduces the shape comparison problem to a single optimization over the shape preserving group actions, i.e., in the setting of the present article over reparametrizations and translations. This computational simplification has led to both the development of efficient algorithms [49, 12, 39] and to analytic results on existence of minimizers and optimal parametrizations [7, 24, 44]. The family of elastic \(G^{a,b}\) metrics has a natural generalization to a four parameter family of metrics on the space of surfaces [42]. Similarly to the case of curves, simplifying transformations have also been proposed in this more complicated situation [19, 20, 16, 41]. Notably, as a generalization of the SRV transform, the Square Root Normal Field (SRNF) transformation [16] has been introduced. In contrast to the situation for curves, the corresponding Riemannian metric for this transformation is degenerate and, furthermore, it only leads to a first order approximation of the geodesic distance. Nonetheless it defines a reparametrization invariant (pseudo-) distance on the space of surfaces, which still allows for efficient computations using several methods of approximating the optimization over the diffeomorphism group [23, 4] and has proven successful in several applications, see [21, 17, 29, 22]. **Unbalanced Optimal transport:** The second core theme of the present article can be found in the theory of optimal transport (OT). Since Monge's formulation of OT as a non-convex optimization problem in the space of transport maps, many formulations of the problem have been proposed to give insight to the theoretical properties of the problem as well as efficient methods for computing the solution, see [45, 46] for a comprehensive overview on the field. In classical optimal transport theory one considers normalized (probability) distributions. It is, however, important for many applications to relax this normalization assumption and compute transportation plans between arbitrary positive measures. Motivated by this observation the theory of optimal transport has been extended to measures with different masses. This field, called unbalanced optimal transport, has seen rapid developments in the past years and several different frameworks have been proposed [9, 25, 27, 37]. Among them is the Wasserstein Fisher Rao (WFR) distance, an interpolating distance between the quadratic Wasserstein metric and the Fisher-Rao metric, that was introduced independently by [9] and [26]. 
The WFR distance has been applied to a variety of problems where it is more natural to consider optimal transport in an unbalanced setting. These applications range from color transfer [10], to earthquake epicenter location [52] and document semantic similarity metrics [47]. Because of the growing field of applications, several algorithms have been proposed to compute the Wasserstein Fisher Rao metric. A variation on the popular Sinkhorn algorithm to solve for an entropy regularized version of the distance was proposed by [10] and an alternating minimization algorithm that computes an exact solution was introduced in [6]. ### Contributions of the article Recently a new and surprising relationship between these two areas (shape analysis and unbalanced optimal transport) has been found. Namely, in [6] it has been shown that for triangulated surfaces the calculation of the SRNF shape distance can be reduced to calculating the WFR distance between their corresponding surface area measures. The presentation in [6] was entirely focused on the discrete (PL) setting and the proof of the result essentially reduced to algebraic considerations. In the first part of the present article we build the analytical tools to extend this result to the infinite dimensional setting, which contains in particular the original setup of the SRNF distance; the space of smooth surfaces. The main result of this part of our article - cf. Theorem 3 - shows that the SRNF shape distance between any two Lipschitz surfaces is equal to the WFR distance between their surface area measures. As a direct consequence of this result we are able to answer two fundamental questions regarding the SRNF transform: since the inception of the SRNF transform, it has been understood that the map is neither injective nor surjective [16]. Characterizing the image and non-injectivity have, however, remained open problems. Recently a first degeneracy result in the context of closed surfaces has been found [18]. Using our equivalence result we are able to obtain a characterization of the closure of the image of this transform - cf. Theorem 3.6 - and a new strong degeneracy result of the corresponding distance (non-injectivity of the transform, resp.) - cf. Theorem 3.8. In the second part we further explore the equivalence result for more general unbalanced optimal transport problems. Generalizations of some of the intermediate results of the first part allow us to offer a novel formulation of the WFR metric as a diffeomorphic optimization problem - cf. Theorem 4.1. Whereas the main result of the first part of the article relates the WFR on \(S^{2}\) with a specific choice of parameter to a diffeomorphic optimization problem, we here extend this relationship to the WFR with any choice of parameter defined on any connected, compact, oriented Riemannian manifold, \(N\). Notably, the space of diffeomorphisms we have to optimize over does not depend on \(N\), but can be chosen as the diffeomorphism group of some background manifold, that only needs to be of dimension greater than or equal to two. ## Acknowledgements The authors thank FX Vialard and Cy Maor for useful discussions during the preparation of this manuscript. M. Bauer was supported by NSF-grants 1912037 and 1953244 and by FWF grant P 35813-N. E. Hartman was supported by NSF grant DMS-1953244. 
## 2 Preliminaries
### The Wasserstein Fisher Rao Distance
In the following, we will summarize the Kantorovich formulation of the Wasserstein Fisher Rao distance, as introduced in [11] for measures on a smooth, connected, compact, oriented Riemannian manifold, \(N\). Therefore we denote by \(\mathcal{M}(N)\) the space of finite Borel measures on \(N\). In the Kantorovich formulation of the Wasserstein-Fisher-Rao distance, we will define a functional on the space of semi-couplings. Therefore we first recall the definition of a semi-coupling: **Definition 2.1** (Semi-couplings [11]). Given \(\mu,\nu\in\mathcal{M}(N)\) the set of all semi-couplings from \(\mu\) to \(\nu\) is given by \[\Gamma(\mu,\nu)=\left\{(\gamma_{0},\gamma_{1})\in\mathcal{M}(N\times N)^{2}|(\mathrm{Proj}_{0})_{\#}\gamma_{0}=\mu,(\mathrm{Proj}_{1})_{\#}\gamma_{1}=\nu\right\}.\] To define the Wasserstein-Fisher-Rao distance from \(\mu\) to \(\nu\) we define a functional on the space of semi-couplings from \(\mu\) to \(\nu\). Let \(d\) denote the geodesic distance on \(N\) and \(\delta\in(0,\infty)\). We consider the functional \[J_{\delta}:\Gamma(\mu,\nu)\to\mathbb{R}\] \[(\gamma_{1},\gamma_{2})\mapsto 4\delta^{2}\left(\mu(N)+\nu(N)-2\int_{N\times N}\frac{\sqrt{\gamma_{1}\gamma_{2}}}{\gamma}(u,v)\overline{\cos}(d(u,v)/2\delta)d\gamma(u,v)\right)\] where \(\gamma\in\mathcal{M}(N\times N)\) such that \(\gamma_{1},\gamma_{2}\ll\gamma\). Note that in the case where \(N=S^{2}\), we have \(d(u,v)=\cos^{-1}(u\cdot v)\). Thus for \(\delta=\frac{1}{2}\), \[J_{\delta}(\gamma_{1},\gamma_{2})=\int_{S^{2}\times S^{2}}\left|\sqrt{\frac{\gamma_{1}}{\gamma}(u,v)}u-\sqrt{\frac{\gamma_{2}}{\gamma}(u,v)}v\right|^{2}d\gamma(u,v). \tag{1}\] **Definition 2.2** (Wasserstein-Fisher-Rao Distance [11, 26]). The Wasserstein-Fisher-Rao Distance on \(\mathcal{M}(N)\) is given by \[\mathrm{WFR}_{\delta}:\mathcal{M}(N)\times\mathcal{M}(N)\to\mathbb{R}^{\geq 0}\text{ defined via} \tag{2}\] \[(\mu,\nu)\mapsto\inf_{(\gamma_{0},\gamma_{1})\in\Gamma(\mu,\nu)}\sqrt{J_{\delta}(\gamma_{0},\gamma_{1})}. \tag{3}\] Some results in this article will specifically apply to the case where \(\delta=1/2\). To simplify our notation, we define \(J:=J_{1/2}\) and \(\mathrm{WFR}:=\mathrm{WFR}_{1/2}\).
### The Square Root Normal Field Shape Distance
In mathematical shape analysis, one defines metrics that measure the differences between geometric objects [51, 3, 40, 13]. In this article we consider geometric objects described by unparameterized surfaces, which are elements of an infinite dimensional non-linear space modulo several finite and infinite dimensional group actions. As a result, computations in this space are difficult and even simple statistical operations are not well defined. Riemannian geometry can help to overcome these challenges. In such a framework, one considers the space of all surfaces as an infinite dimensional manifold and equips it with a Riemannian metric that is invariant to the group action, which allows one to consider the induced metric on the quotient space. For our purposes we will consider immersions of a smooth, connected, compact, oriented Riemannian 2-dimensional manifold, \(M\), with or without boundary. We denote the space of all Lipschitz immersions of \(M\) into \(\mathbb{R}^{3}\) by \(\operatorname{Imm}(M,\mathbb{R}^{3})\), i.e., \[\operatorname{Imm}(M,\mathbb{R}^{3})=\{f\in W^{1,\infty}(M,\mathbb{R}^{3}):Tf\text{ is inj. a.e.}\}\;. \tag{4}\] As we are interested in unparametrized surfaces, we have to factor out the action of the group of diffeomorphisms.
In the context of Lipschitz immersions the natural group of reparametrizations for us to consider is the group of all orientation preserving, bi-Lipschitz diffeomorphisms: \[\Gamma(M)=\{\gamma\in W^{1,\infty}(M,M):\;\gamma^{-1}\in W^{1,\infty}(M,M),|D \gamma|>0\text{ a.e.}\},\] where \(|D\gamma|\) denotes the Jacobian determinant of \(\gamma\), which is well-defined as \(D\gamma\in L^{\infty}\). Note that this reparametrization group acts by composition from the right on \(\operatorname{Imm}(M,\mathbb{R}^{3})\). In addition to the action by the reparametrization group, we also want to identify surfaces that only differ by a translation. This leads us to consider the following quotient space: \[\mathcal{S}:=\operatorname{Imm}(M,\mathbb{R}^{3})/(\Gamma(M)\times\operatorname {trans}) \tag{5}\] In the following we will equip \(\operatorname{Imm}(M)\) with a reparameterization invariant distance; the so called square root normal field (SRNF) distance. The SRNF map (distance resp.) was originally introduced by Jermyn et al. in [15] for the space of smooth immersions, but it naturally extends to the space of all Lipschitz surfaces, as demonstrated in [6]. We now recall the definition of this distance. For any given \(f\in\operatorname{Imm}(M,\mathbb{R}^{3})\), the orientation on \(M\) allows us to consider the unit normal vector field \(n_{f}:M\to\mathbb{R}^{3}\), which is well-defined as an element of \(L^{\infty}(M,\mathbb{R}^{3})\). Furthermore, let \(\{v,w\}\) be an orthonormal basis of \(T_{x}M\). Then for any \(f\in\operatorname{Imm}(M,\mathbb{R}^{3})\) we can define the area multiplication factor at \(x\in M\) via \(a_{f}(x)=|df_{x}(v)\times df_{x}(w)|\). The SRNF map is then given by \[\Phi:\operatorname{Imm}(M,\mathbb{R}^{3})/\operatorname{translations} \to L^{2}(M,\mathbb{R}^{3}) \tag{6}\] \[f \mapsto q_{f}\text{ where }q_{f}(x):=\sqrt{a_{f}(x)}\;n_{f}(x). \tag{7}\] From this transform we define a distance on \(\operatorname{Imm}(M,\mathbb{R}^{3})/\operatorname{translations}\) by \[d_{\operatorname{Imm}}(f_{1},f_{2})=\|\Phi(f_{1})-\Phi(f_{2})\|_{L^{2}}.\] Next we consider a right-action of \(\Gamma(M)\) on \(L^{2}(M,\mathbb{R}^{3})\) that is compatible with the mapping \(\Phi\). For \(q\in L^{2}(M,\mathbb{R}^{3})\) and \(\gamma\in\Gamma(M)\) we let \[(q*\gamma)(x)=\sqrt{|D_{\gamma}(x)|}q(\gamma(x)). \tag{8}\] It is easy to check that the action of \(\Gamma(M)\) on \(L^{2}(M,\mathbb{R}^{3})\) is by linear isometries and that for any \(f\in\operatorname{Imm}\) and \(\gamma\in\Gamma\), \[\Phi(f)*\gamma=\Phi(f\circ\gamma).\] Thus, it follows that the SRNF distance on \(\mathrm{Imm}(M,\mathbb{R}^{3})\) is invariant with respect to this action and thus it descends to a (pseudo) distance on the quotient space \(\mathcal{S}\), which is given by \[d_{\mathcal{S}}([f_{1}],[f_{2}])=\inf_{\gamma\in\Gamma(M)}d(f_{1},f_{2}\circ \gamma),\qquad[f_{1}],[f_{2}]\in\mathcal{S}(M)\] As we will see later the induced (pseudo) distance on the quotient space is highly degenerate. ### Equivalence of WFR and SRNF in the piecewise linear category In [6] a surprising equivalence of the WFR and SRNF distance was shown: for piecewise linear surfaces it was proved that the SRNF distance can be reduced to the WFR distance between finitely supported measures. 
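To make the transform concrete, note that the pointwise value \(q_{f}(x)=\sqrt{a_{f}(x)}\,n_{f}(x)\) depends only on the differential \(df_{x}\). The following short numerical sketch is our own illustration (not part of the original framework of [15]); it assumes that the images \(df_{x}(v),df_{x}(w)\) of an orthonormal basis \(\{v,w\}\) of \(T_{x}M\) are given as vectors in \(\mathbb{R}^{3}\).

```python
import numpy as np

def srnf_value(df_v: np.ndarray, df_w: np.ndarray) -> np.ndarray:
    """Pointwise SRNF value q_f(x) = sqrt(a_f(x)) * n_f(x).

    df_v, df_w: images under df_x of an orthonormal basis {v, w} of T_x M,
    given as vectors in R^3.
    """
    cross = np.cross(df_v, df_w)          # normal direction; its length is a_f(x)
    area = np.linalg.norm(cross)          # area multiplication factor a_f(x)
    if area == 0.0:
        return np.zeros(3)                # degenerate point
    return np.sqrt(area) * cross / area   # sqrt(a_f(x)) * unit normal n_f(x)

# Example: the planar patch f(u1,u2) = (u1, u2, 0) has df(v) = e1, df(w) = e2,
# so a_f = 1 and q_f = (0, 0, 1).
print(srnf_value(np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])))
```

For a piecewise linear surface the differential is constant on each face, so \(q_{f}\) is piecewise constant; this is the setting of the equivalence result recalled next.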
To formulate this result in detail we first associate to every \(q\in L^{2}(M,\mathbb{R}^{3})\) a measure on \(S^{2}\); namely, for any open \(U\subseteq S^{2}\), we define \[q^{*}U=\{x\in M|q(x)\neq 0\text{ and }q(x)/|q(x)|\in U\}\] and define the map \[L^{2}(M,\mathbb{R}^{3})\to\mathcal{M}(S^{2})\text{ via }q \mapsto\mu_{q}\] \[\text{ where for }U\subseteq S^{2},\mu_{q}(U)=\int_{q^{*}U}q(x) \cdot q(x)dm.\] The result proved in [6] is then formulated as: **Theorem 2.3**: _Given two piecewise linear surfaces \(S_{1}\) and \(S_{2}\) parameterized by \(f\) and \(g\), the SRNF shape distance can be computed as an unbalanced transport problem. More precisely, we have_ \[d_{\mathcal{S}}([f],[g])=\inf_{\gamma\in\Gamma(M)}\|q_{f}-q_{g}*\gamma\|= \mathrm{WFR}(\mu_{q_{f}},\mu_{q_{g}}).\] _where \(q_{f}\) and \(q_{g}\) are the SRNFs of \(f\) and \(g\) respectively._ In the next section we will extend this result of to all Lipschitz immersions (Borel-measures, resp.). ## 3 The SRNF distance For the goal of extending the result of Theorem 2.3 to all Lipschitz surfaces, we will consider specifically \(\delta=\frac{1}{2}\) in the definition of the WFR metric. ### Equivalence of the WFR and SRNF distances Our main result of this section is the following theorem, which is slightly stronger than the desired equivalence result. **Theorem 3.1**: _Given \(q_{1},q_{2}\in L^{2}(M,\mathbb{R}^{3})\),_ \[\inf_{\gamma\in\Gamma(M)}\|q_{1}-q_{2}*\gamma\|_{L^{2}}=\mathrm{WFR}(\mu_{q_{ 1}},\mu_{q_{2}}).\] _In particular, given \(f,g\in W^{1,\infty}(M,\mathbb{R}^{3})\) we can calculate their SRNF distance as an unbalanced OMT problem via_ \[d_{\mathcal{S}}([f],[g])=\mathrm{WFR}(\mu_{q_{f}},\mu_{q_{g}}),\] _where \(q_{f}\) and \(q_{g}\) are the SRNFs of \(f\) and \(g\) respectively._ **Remark 1**: _Note, that as a direct consequence of Theorem 3.1 we can also conclude the extension of Theorem 2.3 to the original setup of the SRNF distance, the space of all smooth surfaces._ The proof of Theorem 3.1 relies on a series of technical lemmas, which we will show next. **Lemma 3.2**: _Let \(X,Y\) be topological spaces and \(\rho:X\to Y\) be a measurable function with respect to the Borel \(\sigma\)-algebras. If \(\mu,\mu_{1}\in\mathcal{M}(X)\), \(\gamma,\gamma_{1}\in\mathcal{M}(Y)\) such that \(\mu_{1}\ll\mu\), \(\gamma=\rho_{*}\mu\), and \(\gamma_{1}=\rho_{*}\mu_{1}\), then \(\gamma_{1}\ll\gamma\). Furthermore, \(\frac{\mu_{1}}{\mu}=\frac{\gamma_{1}}{\gamma}\circ\rho\) almost everywhere._ Let \(U\subseteq Y\) open such that \(\gamma(U)=0\). By definition, \(\mu(\rho^{-1}(U))=0\). Since \(\mu_{1}\ll\mu\), \(\mu_{1}(\rho^{-1}(U))=0\). Therefore, \(\gamma_{1}(U)=0\). This proves \(\gamma_{1}\ll\gamma\). Following the definitions of the Radon-Nikodym derivatives, pushforwards, and the change of variables formula, we obtain \[\int_{\rho^{-1}(U)}\frac{\mu_{1}}{\mu}d\mu=\int_{\rho^{-1}(U)}d\mu_{1}=\int_{ U}d\gamma_{1}=\int_{U}\frac{\gamma_{1}}{\gamma}d\gamma=\int_{\rho^{-1}(U)} \frac{\gamma_{1}}{\gamma}\circ\rho\,d\mu.\] Thus, \(\frac{\mu_{1}}{\mu}=\frac{\gamma_{1}}{\gamma}\circ\rho\) almost everywhere. Given \(q\in L^{2}(M,\mathbb{R}^{3})\) we can define a function from \(M\) to \(S^{2}\) that takes every point \(x\in M\) to the unit vector in the direction of \(q(x)\). As a matter of defining this function on every point, we can canonically choose the north pole of \(S^{2}\) for points where \(q(x)=0\). 
**Definition 3.3**: _For \(q\in L^{2}(M,\mathbb{R}^{3})\) we define the unit vector map of \(q\) as_ \[\overline{q}:M \to S^{2}\text{ given by}\] \[x \mapsto\begin{cases}\frac{q(x)}{|q(x)|}&\text{if }q(x)\neq 0\\ (1,0,0)&\text{otherwise}\end{cases}.\] Note that since \(q\in L^{2}(M,\mathbb{R}^{3})\), it follows that \(\overline{q}:M\to S^{2}\) is measurable. Let \(q\in L^{2}(M,\mathbb{R}^{3})\). We can define a measure, \(\nu_{q}\in\mathcal{M}(M)\), via \[\nu_{q}(U)=\int_{U}|q(x)|^{2}dm\] for all open \(U\subseteq M\). Note that \(\nu_{q}\ll m\) and \(\frac{\nu_{q}}{m}=|q|^{2}\). Further, we can equivalently define \(\mu_{q}\) as the pushforward of \(\nu_{q}\) via \(\overline{q}\). **Lemma 3.4**: _Let \(q\in L^{2}(M,\mathbb{R}^{3})\) and \(\mu_{q}\in\mathcal{M}(S^{2})\) be the measure associated with \(q\). Then \(\mu_{q}=\overline{q}_{*}\nu_{q}\)._ Proof.: Let \(U\subseteq S^{2}\) be open and define \(M_{0}=\{x\in M\,|\,q(x)=0\}\). If \((1,0,0)\not\in U\), then \(\overline{q}^{-1}(U)=q^{*}(U)\) and thus \[\overline{q}_{*}\nu_{q}(U)=\int_{\overline{q}^{-1}(U)}|q(x)|^{2}dm=\int_{q^{*}(U)}|q(x)|^{2}dm=\mu_{q}(U).\] If \((1,0,0)\in U\), then \(\overline{q}^{-1}(U)=q^{*}(U)\cup M_{0}\), and since \(q\) vanishes on \(M_{0}\), \[\overline{q}_{*}\nu_{q}(U)=\int_{\overline{q}^{-1}(U)}|q(x)|^{2}dm=\int_{q^{*}(U)}|q(x)|^{2}dm+\int_{M_{0}}|q(x)|^{2}dm=\mu_{q}(U).\] Leveraging what we have proven above, we may show a key continuity result that will then allow us to complete the proof of the main theorem. **Lemma 3.5**: _The map \((L^{2}(M,\mathbb{R}^{3}),\|\cdot\|_{L^{2}})\to(\mathcal{M}(S^{2}),\mathrm{WFR})\) defined via \(q\mapsto\mu_{q}\) given by Equation (2.3) is Lipschitz continuous with Lipschitz constant \(K=1\)._ Proof.: Let \(q_{1},q_{2}\in L^{2}(M,\mathbb{R}^{3})\). For any semi-coupling \((\gamma_{1},\gamma_{2})\in\Gamma(\mu_{q_{1}},\mu_{q_{2}})\), \[\text{WFR}(\mu_{q_{1}},\mu_{q_{2}})\leq\sqrt{J_{\delta}(\gamma_{1},\gamma_{2})}.\] Thus, to prove the lemma we must construct \((\gamma_{1},\gamma_{2})\in\Gamma(\mu_{q_{1}},\mu_{q_{2}})\) such that \(J_{\delta}(\gamma_{1},\gamma_{2})=\|q_{1}-q_{2}\|_{L^{2}}^{2}\). To construct such a semi-coupling we first define \(\rho:M\to S^{2}\times S^{2}\) using the unit vector maps of \(q_{1}\) and \(q_{2}\) on the first and second factor respectively, i.e., \(\rho(x)=\left(\overline{q_{1}}(x),\overline{q_{2}}(x)\right).\) Since \(\overline{q_{1}}\) and \(\overline{q_{2}}\) are individually measurable, so is \(\rho\). We can then define \(\gamma_{1},\gamma_{2}\in\mathcal{M}(S^{2}\times S^{2})\) via \(\gamma_{1}=\rho_{*}\nu_{q_{1}}\) and \(\gamma_{2}=\rho_{*}\nu_{q_{2}}\). **Claim 1**: _The pair of measures \((\gamma_{1},\gamma_{2})\) is a semi-coupling from \(\mu_{q_{1}}\) to \(\mu_{q_{2}}\)._ Proof of claim.: Let \(U\subseteq S^{2}\) be open. Then \[\gamma_{1}(U\times S^{2})=\nu_{q_{1}}\left(\rho^{-1}(U\times S^{2})\right)=\nu_{q_{1}}\left(\overline{q_{1}}^{-1}(U)\cap\overline{q_{2}}^{-1}(S^{2})\right)=\nu_{q_{1}}\left(\overline{q_{1}}^{-1}(U)\right)=\mu_{q_{1}}(U)\] and \[\gamma_{2}(S^{2}\times U)=\nu_{q_{2}}\left(\rho^{-1}(S^{2}\times U)\right)=\nu_{q_{2}}\left(\overline{q_{1}}^{-1}(S^{2})\cap\overline{q_{2}}^{-1}(U)\right)=\nu_{q_{2}}\left(\overline{q_{2}}^{-1}(U)\right)=\mu_{q_{2}}(U).\] So \((\gamma_{1},\gamma_{2})\) is a semi-coupling from \(\mu_{q_{1}}\) to \(\mu_{q_{2}}\). 
Recall from the definition of the functional \(J_{\delta}\) that we need to construct \(\gamma\in\mathcal{M}(S^{2}\times S^{2})\) such that \(\gamma_{1},\gamma_{2}\ll\gamma\). Define \(\gamma=\rho_{*}m\). We know \(\nu_{q_{1}},\nu_{q_{2}}\ll m\). Thus, by Lemma 3.2, \(\gamma_{1},\gamma_{2}\ll\gamma\). Furthermore, \[|q_{1}|^{2}=\frac{\nu_{q_{1}}}{m}=\frac{\gamma_{1}}{\gamma}\circ\rho\text{ a.e.}\qquad\text{and}\qquad|q_{2}|^{2}=\frac{\nu_{q_{2}}}{m}=\frac{\gamma_{2}}{\gamma}\circ\rho\text{ a.e.}\] So, \[J_{\delta}(\gamma_{1},\gamma_{2})= \int_{S^{2}\times S^{2}}\left|\sqrt{\frac{\gamma_{1}}{\gamma}(u,v)}u-\sqrt{\frac{\gamma_{2}}{\gamma}(u,v)}v\right|^{2}d\gamma(u,v)\] \[= \int_{S^{2}\times S^{2}}\frac{\gamma_{1}}{\gamma}(u,v)d\gamma(u,v)+\int_{S^{2}\times S^{2}}\frac{\gamma_{2}}{\gamma}(u,v)d\gamma(u,v)-2\int_{S^{2}\times S^{2}}\frac{\sqrt{\gamma_{1}\gamma_{2}}}{\gamma}(u,v)\langle u,v\rangle d\gamma(u,v)\] \[= \int_{\rho^{-1}(S^{2}\times S^{2})}\frac{\gamma_{1}}{\gamma}\circ\rho(x)\,dm+\int_{\rho^{-1}(S^{2}\times S^{2})}\frac{\gamma_{2}}{\gamma}\circ\rho(x)\,dm-2\int_{\rho^{-1}(S^{2}\times S^{2})}\sqrt{\frac{\gamma_{1}}{\gamma}\circ\rho(x)}\sqrt{\frac{\gamma_{2}}{\gamma}\circ\rho(x)}\,\langle\overline{q_{1}}(x),\overline{q_{2}}(x)\rangle\,dm\] \[= \int_{M}|q_{1}(x)|^{2}dm+\int_{M}|q_{2}(x)|^{2}dm-2\int_{M}|q_{1}(x)||q_{2}(x)|\left\langle\frac{q_{1}(x)}{|q_{1}(x)|},\frac{q_{2}(x)}{|q_{2}(x)|}\right\rangle dm\] \[= \|q_{1}-q_{2}\|_{L^{2}}^{2}.\] Thus, \[\text{WFR}(\mu_{q_{1}},\mu_{q_{2}})\leq\sqrt{J_{\delta}(\gamma_{1},\gamma_{2})}=1\cdot\|q_{1}-q_{2}\|_{L^{2}}.\] We are now ready to conclude the proof of Theorem 3.1: Proof of Theorem 3.1.: Let \(q_{1},q_{2}\in L^{2}(M,\mathbb{R}^{3})\) and let \(\epsilon>0\). By density of piecewise constant functions in \(L^{2}\), let \(p_{1},p_{2}\) be piecewise constant functions such that \(\|q_{1}-p_{1}\|_{L^{2}}<\epsilon/4\) and \(\|q_{2}-p_{2}\|_{L^{2}}<\epsilon/4\). 
Therefore, \[\inf_{\gamma\in\Gamma(M)}\|q_{1}-p_{1}\star\gamma\|_{L^{2}},\inf_{\gamma\in \Gamma(M)}\|q_{2}-p_{2}\star\gamma\|_{L^{2}},\,\text{WFR}(\mu_{q_{1}},\mu_{p_{ 1}}),\,\text{WFR}(\mu_{q_{2}},\mu_{p_{2}})<\epsilon/4.\] Thus, \[\inf_{\gamma\in\Gamma(M)}\|q_{1}-q_{2}*\gamma\|_{L^{2}} \leq\inf_{\gamma\in\Gamma(M)}\|q_{1}-p_{1}*\gamma\|_{L^{2}}+\inf_{ \gamma\in\Gamma(M)}\|p_{2}-q_{2}*\gamma\|_{L^{2}}\] \[\qquad\qquad+\inf_{\gamma\in\Gamma(M)}\|p_{1}-p_{2}*\gamma\|_{L^{ 2}}\] \[\leq\epsilon/2+\inf_{\gamma\in\Gamma(M)}\|p_{1}-p_{2}*\gamma\|_{L^ {2}}\] \[=\epsilon/2+\mathrm{WFR}(\mu_{p_{1}},\mu_{p_{2}})\] \[\leq\epsilon/2+\mathrm{WFR}(\mu_{q_{1}},\mu_{p_{1}})+\mathrm{WFR} (\mu_{p_{2}},\mu_{q_{2}})+\mathrm{WFR}(\mu_{q_{1}},\mu_{q_{2}})\] \[\leq\epsilon+\mathrm{WFR}(\mu_{q_{1}},\mu_{q_{2}})\] and \[\mathrm{WFR}(\mu_{q_{1}},\mu_{q_{2}}) \leq\mathrm{WFR}(\mu_{p_{1}},\mu_{p_{2}})+\mathrm{WFR}(\mu_{q_{1} },\mu_{p_{1}})+\mathrm{WFR}(\mu_{p_{2}},\mu_{q_{2}})\] \[\leq\mathrm{WFR}(\mu_{p_{2}},\mu_{q_{2}})+\epsilon/2\] \[=\inf_{\gamma\in\Gamma(M)}\|p_{1}-p_{2}*\gamma\|_{L^{2}}+\epsilon/2\] \[\leq\inf_{\gamma\in\Gamma(M)}\|q_{1}-p_{1}*\gamma\|_{L^{2}}+\inf _{\gamma\in\Gamma(M)}\|p_{2}-q_{2}*\gamma\|_{L^{2}}\] \[\qquad\qquad+\inf_{\gamma\in\Gamma(M)}\|q_{1}-q_{2}*\gamma\|_{L^{ 2}}+\epsilon/2\] \[\leq\inf_{\gamma\in\Gamma(M)}\|q_{1}-q_{2}*\gamma\|_{L^{2}}+\epsilon.\] So, \[\mathrm{WFR}(\mu_{q_{1}},\mu_{q_{2}})-\epsilon\leq\inf_{\gamma\in\Gamma(M)}\| q_{1}-q_{2}*\gamma\|_{L^{2}}\leq\mathrm{WFR}(\mu_{q_{1}},\mu_{q_{2}})+\epsilon.\] Taking \(\epsilon\to 0\) we can conclude \(\inf_{\gamma\in\Gamma(M)}\|q_{1}-q_{2}*\gamma\|_{L^{2}}=\mathrm{WFR}(\mu_{q_{ 1}},\mu_{q_{2}})\). ### Characterizing the closure of the image of the SRNF map Our equivalence result will also allow us to characterize the (closure of the) image of the SRNF map \(\Phi\) in the context of spherical surfaces: Let \(f\in\mathrm{Imm}(S^{2},\mathbb{R}^{3})\) and let \(q=\Phi(f)\in L^{2}(S^{2},\mathbb{R}^{3})\). Then \(q\) satisfies the closure condition \(\int_{S^{2}}q(x)|q(x)|dm=0\). Moreover, the closure of the image of \(\Phi\) is given by the set \[\mathcal{U}:=\left\{q\in L^{2}(S^{2},\mathbb{R}^{3})\text{ such that }\int_{S^{2}}q(x)|q(x)|dm=0\right\}.\] To prove this result we will need a classical theorem from geometric measure theory and the study of convex polyhedra, which we will recall next: **Theorem** **(Minkowski's Theorem [1, 32, 38])**: Let \(\mu\in\mathcal{M}(S^{2})\) such that the support of \(\mu\) is not concentrated on a great circle and \[\int_{S^{2}}x\,d\mu(x)=0.\] Then, there exists a unique (up to translation) convex body whose surface area measure is \(\mu\). Moreover, if \(\mu\) is finitely supported then the convex body is a polytope. Proof of Theorem 3.2.: Let \(f\in\operatorname{Imm}(S^{2},\mathbb{R}^{3})\) and \(q_{f}=\Phi(f)\). Let \(S=f(S^{2})\) and \(V\) be the surface enclosed by \(S\). Therefore, \[\int_{S^{2}}q_{f}(x)|q_{f}(x)|dm=\int_{S^{2}}a_{f}(x)n_{f}(x)dm=\int_{S}n_{f}dS.\] Thus, this is the integral of the normal vector of a closed surface in \(\mathbb{R}^{3}\). A simple application of the divergence theorem shows that the integral of the normal vector of the closed surface is zero. To see this, let \(\{e_{i}\}_{i=1}^{3}\) be the unit basis vectors of \(\mathbb{R}^{3}\). For \(i=1,2,3\), \[\int_{S}(n_{f}\cdot e_{i})\,dS=\int_{V}(\nabla\cdot e_{i})\,dV=0.\] Therefore, \(\int_{S^{2}}q_{f}(x)|q_{f}(x)|dm=0\) and the image of \(\Phi\) is contained in \(\mathcal{U}\). 
To prove the converse direction, let \(q\in\mathcal{U}\) and fix \(\epsilon>0\). We aim to construct a convex body \(f\) with \(\mu_{q_{f}}\) arbitrarily close to \(\mu_{q}\). By the definition of \(\mathcal{U}\) the measure \(\mu_{q}\) satisfies \(\int_{S^{2}}n\,d\mu_{q}(n)=0\). Since finitely supported measures are dense with respect to the WFR metric, we can choose a finitely supported measure \(\overline{\mu_{q}}\) such that \(\int_{S^{2}}n\,d\overline{\mu_{q}}(n)=0\) and \(\operatorname{WFR}(\mu_{q},\overline{\mu_{q}})<\epsilon/3\). If the support of \(\overline{\mu_{q}}\) is not concentrated on a great circle we can invoke Minkowski's theorem and the result follows. For the general case we will slightly deform the measure as follows. Define \[\hat{\mu_{q}}:=\overline{\mu_{q}}+\sum_{i=1}^{3}\frac{\epsilon}{18}\delta_{e_{i}}+\sum_{i=1}^{3}\frac{\epsilon}{18}\delta_{-e_{i}},\] where \(\{e_{i}\}_{i=1}^{3}\) is the set of unit basis vectors of \(\mathbb{R}^{3}\). Then \(\hat{\mu_{q}}\) is a finitely supported measure, satisfies \(\int_{S^{2}}n\,d\hat{\mu_{q}}(n)=0\), and is not supported on a single great circle. Moreover, \(\operatorname{WFR}(\overline{\mu_{q}},\hat{\mu_{q}})<\epsilon/3\). By Minkowski's Theorem there exists a convex polytope with surface area measure given by \(\hat{\mu_{q}}\). Let \(f\in W^{1,\infty}(S^{2},\mathbb{R}^{3})\) be the PL spherical parameterization of this convex body, so that \(\mu_{q_{f}}=\hat{\mu_{q}}\). Thus, by Theorem 3.1, there exists \(\gamma\in\Gamma(M)\) such that \(\|q_{f}-q*\gamma\|_{L^{2}}<\operatorname{WFR}(\mu_{q_{f}},\mu_{q})+\epsilon/3\). Therefore, \[\|q_{f}-q*\gamma\|_{L^{2}}\leq\operatorname{WFR}(\mu_{q_{f}},\mu_{q})+\epsilon/3=\operatorname{WFR}(\hat{\mu_{q}},\mu_{q})+\epsilon/3\leq\operatorname{WFR}(\hat{\mu_{q}},\overline{\mu_{q}})+\operatorname{WFR}(\overline{\mu_{q}},\mu_{q})+\epsilon/3<\epsilon,\] which concludes the proof. 

### Characterizing the degeneracy of the SRNF distance 

As a second important consequence of our equivalence result we can give a detailed proof of the degeneracy of the SRNF distance for smooth surfaces. Degeneracy results were studied in [18], and the degeneracy was further characterized for certain PL surfaces in [6]. Here we will generalize the characterization of [6] to smooth surfaces: For any smooth, regular surface \(f\in C^{\infty}(S^{2},\mathbb{R}^{3})\cap\operatorname{Imm}(S^{2},\mathbb{R}^{3})\) there exists a unique (up to translations) convex body, parameterized by some \(f_{1}\), that is indistinguishable from \(f\) by the SRNF shape distance, i.e., \(d_{\mathcal{S}}([f],[f_{1}])=0\). Proof.: Let \(f\in C^{\infty}(S^{2},\mathbb{R}^{3})\cap\operatorname{Imm}(S^{2},\mathbb{R}^{3})\) be a regular surface. By [43, Prop. 4.33] the Gauss map of \(f\) is surjective. Thus the support of \(\mu_{q_{f}}\) is not concentrated on a single great circle of \(S^{2}\). Furthermore, \(\int_{S^{2}}q_{f}(x)|q_{f}(x)|dm=0\). Thus, by Minkowski's Theorem, there exists a unique convex body (up to translation) with surface area measure given by \(\mu_{q_{f}}\). By Theorem 3.1, the surface \(f\) and this convex body are SRNF distance \(0\) from each other. 

## 4 The WFR metric as a diffeomorphic optimization problem 

In this section, we will generalize the results of the previous sections to the Wasserstein Fisher Rao distance on any manifold and for any coefficient \(\delta\), thus characterizing the Wasserstein Fisher Rao distance as a diffeomorphic optimization problem. Let \(N\) be a smooth, connected, compact, oriented Riemannian manifold. 
Define the cone over \(N\) via \(\mathcal{C}(N):=(N\times\mathbb{R}^{\geq 0})/(N\times\{0\})\). If we let \(d\) denote the geodesic distance on \(N\) and fix some \(\delta\in(0,\infty)\), then we can define a metric on \(\mathcal{C}(N)\) via \[d_{\mathcal{C}(N)}((n_{1},r_{1}),(n_{2},r_{2}))^{2}=4\delta^{2}r_{1}^{2}+4\delta^{2}r_{2}^{2}-8\delta^{2}r_{1}r_{2}\overline{\cos}(d(n_{1},n_{2})/2\delta).\] Let \(M\) be another smooth, connected, compact, oriented Riemannian manifold. Any function \(q:M\to\mathcal{C}(N)\) can be decomposed into component functions by \(q(x)=(\overline{q}(x),q^{\circ}(x))\), where \(\overline{q}:M\to N\) and \(q^{\circ}:M\to\mathbb{R}^{\geq 0}\). We can thus define \[\hat{q}:M\to\mathbb{R}^{\geq 0}\text{ via }\hat{q}(x)=\sqrt{2}\delta\,q^{\circ}(x)\text{ for all }x\in M.\] Given \(q_{1},q_{2}:M\to\mathcal{C}(N)\), the \(L^{2}\) distance between \(q_{1}\) and \(q_{2}\) is given by \[d_{L^{2}}(q_{1},q_{2})^{2}=\int_{M}d_{\mathcal{C}(N)}(q_{1}(x),q_{2}(x))^{2}dm.\] By decomposing \(q_{1}\) and \(q_{2}\), we can alternatively write \[d_{L^{2}}(q_{1},q_{2})^{2}=\int_{M}\hat{q_{1}}(x)^{2}dm+\int_{M}\hat{q_{2}}(x)^{2}dm-2\int_{M}\hat{q_{1}}(x)\hat{q_{2}}(x)\overline{\cos}(d(\overline{q_{1}}(x),\overline{q_{2}}(x))/2\delta)dm. \tag{4.1}\] The \(L^{2}\) cost of a function \(q:M\to\mathcal{C}(N)\) is defined as the distance from \(q\) to the function that maps all of \(M\) to the cone point. In particular, using the decomposition of \(q\), this distance is given by \[d_{L^{2}}(0,q)^{2}=\int_{M}\hat{q}(x)^{2}\,dm.\] Thus, we define the space of \(L^{2}\)-functions from \(M\) to \(\mathcal{C}(N)\) as \[L^{2}(M,\mathcal{C}(N)):=\{q:M\to\mathcal{C}(N)\text{ s.t. }d_{L^{2}}(0,q)^{2}<\infty\}\] and we equip \(L^{2}(M,\mathcal{C}(N))\) with the metric \(d_{L^{2}}\). We define the right action of the diffeomorphisms of \(M\) on \(L^{2}(M,\mathcal{C}(N))\) component-wise. We treat \(\hat{q}\) as a half-density and define the action of \(\Gamma(M)\) on this component as the action on half-densities. Thus, we define the action of \(\Gamma(M)\) on \(L^{2}(M,\mathcal{C}(N))\) via \[L^{2}(M,\mathcal{C}(N))\times\Gamma(M) \to L^{2}(M,\mathcal{C}(N)),\] \[\left((\overline{q},\hat{q}),\gamma\right) \mapsto\left(\overline{q}\circ\gamma,\hat{q}\circ\gamma\cdot\sqrt{|D\gamma|}\right).\] The main result of this section is to show that the Wasserstein Fisher Rao distance can be written as the distance between the orbits associated with the measures: **Theorem 4.1**.: _Let \(N\) be a smooth connected compact Riemannian manifold and \(M\) be a smooth connected compact Riemannian manifold of dimension 2 or higher._ 1. _For all_ \(\mu_{1},\mu_{2}\in\mathcal{M}(N)\) _and_ \(q_{1},q_{2}\in L^{2}(M,\mathcal{C}(N))\) _such that_ \(\mu_{1}=\overline{q_{1}}_{*}\nu_{q_{1}}\) _and_ \(\mu_{2}=\overline{q_{2}}_{*}\nu_{q_{2}}\) _we have_ \[\mathrm{WFR}_{\delta}(\mu_{1},\mu_{2})=\inf_{\gamma\in\Gamma(M)}d_{L^{2}}(q_{1},q_{2}*\gamma).\] 2. _Moreover, for all_ \(\mu\in\mathcal{M}(N)\) _there exists_ \(q\in L^{2}(M,\mathcal{C}(N))\) _such that_ \(\mu=\overline{q}_{*}\nu_{q}\)_. If_ \(\mu\) _is a finitely supported measure given by_ \(\mu=\sum_{i=1}^{n}a_{i}\delta_{u_{i}}\)_, then one can choose_ \(q\) _piecewise constant. 
More specifically, the function_ \(q\) _given by_ \[q(x)=\begin{cases}\left(u_{j},\sqrt{\frac{a_{j}}{\text{area}(\sigma_{j})}}\right)&\text{ if }x\in\sigma_{j}\text{ with }1\leq j\leq n\\ \left(u_{1},0\right)&\text{ if }x\in\sigma_{j}\text{ with }n<j\leq m\end{cases},\] _where_ \(\{\sigma_{j}\}_{j=1}^{m}\) _is a subdivision of the canonical triangulation of_ \(M\) _with_ \(m\geq n\)_, satisfies_ \(\mu=\overline{q}_{*}\nu_{q}\)_. Before we are able to prove this theorem, we will again establish several technical lemmas. To this end, we consider specific measures associated with functions \(q\in L^{2}(M,\mathcal{C}(N))\). First, we define \(\nu_{q}\in\mathcal{M}(M)\) such that for any open \(U\subseteq M\) \[\nu_{q}(U)=\int_{U}\hat{q}(x)^{2}dm.\] Note that \(\nu_{q}\ll m\) and \(\frac{\nu_{q}}{m}=\hat{q}^{2}\). Further, we can define a pushforward of \(\nu_{q}\) via \(\overline{q}\). In particular, for every \(q\in L^{2}(M,\mathcal{C}(N))\), we can define a Borel measure on \(N\) given by \(\mu_{q}:=\overline{q}_{*}\nu_{q}.\) In other words, for all open \(U\subseteq N\) \[\mu_{q}(U)=\int_{\overline{q}^{-1}(U)}\hat{q}^{2}(x)dm.\] Now we will show that the orbit of any \(q\in L^{2}(M,\mathcal{C}(N))\) under the action of \(\Gamma(M)\) is mapped to the same measure on \(N\). **Lemma 4.2**: _Let \(q\in L^{2}(M,\mathcal{C}(N))\). Then for all \(\gamma\in\Gamma(M),\)\(\mu_{q}=\mu_{q*\gamma}.\)_ Proof.: Let \(U\subseteq N\) be open. Then \[\mu_{q*\gamma}(U) =\int_{\gamma^{-1}(\overline{q}^{-1}(U))}(\hat{q}\circ\gamma(x)\cdot\sqrt{|D\gamma|})^{2}dm\] \[=\int_{\gamma^{-1}(\overline{q}^{-1}(U))}\hat{q}\circ\gamma(x)^{2}\cdot|D\gamma|dm=\int_{\overline{q}^{-1}(U)}\hat{q}(x)^{2}dm=\mu_{q}(U).\] Therefore, we can map each orbit of \(q\in L^{2}(M,\mathcal{C}(N))\) under the half-density action of \(\Gamma(M)\) to a measure on \(N\). As in the previous section, we will first show the result for piecewise constant functions and extend by continuity. We prove the piecewise constant case in the following lemma. **Lemma 4.3**: _Let \(d\geq 2\) and \(M\) be a smooth, connected, compact, oriented Riemannian \(d\)-dimensional manifold with or without boundary. Given two piecewise constant functions \(q_{1},q_{2}:M\rightarrow\mathcal{C}(N)\),_ \[\inf_{\gamma\in\Gamma(M)}d_{L^{2}}(q_{1},q_{2}*\gamma)=\mathrm{WFR}_{\delta}(\mu_{q_{1}},\mu_{q_{2}}).\] Proof.: Let \(\{\sigma_{i}\}_{i=1}^{m}\) and \(\{\tau_{j}\}_{j=1}^{n}\) be triangulations of \(M\) such that \(q_{1}\) is constant on each \(\sigma_{i}\) and \(q_{2}\) is constant on each \(\tau_{j}\). Let \(\hat{q_{1}}:M\rightarrow\mathbb{R},\)\(\overline{q_{1}}:M\to N\) be the decomposition of \(q_{1}\) and \(\hat{q_{2}}:M\rightarrow\mathbb{R},\)\(\overline{q_{2}}:M\to N\) be the decomposition of \(q_{2}\). Write \(u_{i}\) for the value of \(\overline{q_{1}}\) on \(\sigma_{i}\), \(v_{j}\) for the value of \(\overline{q_{2}}\) on \(\tau_{j}\), and set \(a_{i}=\nu_{q_{1}}(\sigma_{i})\) and \(b_{j}=\nu_{q_{2}}(\tau_{j})\). Define a function \(\langle\cdot,\cdot\rangle:N\times N\rightarrow\mathbb{R}\) given via \(\langle u,v\rangle=\overline{\cos}(d(u,v)/2\delta).\) A brief computation shows \[\inf_{\gamma\in\Gamma(M)}d_{L^{2}}^{2}(q_{1},q_{2}*\gamma)=\sum_{i=1}^{m}a_{i}+\sum_{j=1}^{n}b_{j}-2\sup_{\gamma\in\Gamma(M)}\int_{M}\hat{q_{1}}(x)\hat{q_{2}}(\gamma(x))\sqrt{|D\gamma|}\langle\overline{q_{1}}(x),\overline{q_{2}}(\gamma(x))\rangle dm.\] Let \(\mathcal{A}\) be the set of all discrete semi-couplings from \(\mu_{q_{1}}\) to \(\mu_{q_{2}}\). 
Recall \[\mathrm{WFR}_{\delta}(\mu_{q_{1}},\mu_{q_{2}})^{2}=\sum_{i=1}^{m}a_{i}+\sum_{j =1}^{n}b_{j}-2\sup_{(A,B)\in\mathcal{A}}\sum_{i=1}^{m}\sum_{j=1}^{n}\sqrt{A_{ ij}B_{ij}}\langle u_{i},v_{j}\rangle\] Therefore, the theorem is equivalent to showing \[\sup_{(A,B)\in\mathcal{A}}\sum_{i=1}^{m}\sum_{j=1}^{n}\sqrt{A_{ij}B_{ij}} \langle u_{i},v_{j}\rangle=\sup_{\gamma\in\Gamma(S^{2})}\int_{M}\hat{q_{1}}(x )\hat{q_{2}}(\gamma(x))\sqrt{|D\gamma|}\langle\overline{q_{1}}(x),\overline{q_ {2}}(\gamma(x))\rangle dm.\] **Claim 2**.: _Assume that \((A,B)\) is a discrete semi-coupling from \(\mu_{q_{1}}\) to \(\mu_{q_{2}}\). Then for all \(\epsilon>0\) there is a PL homeomorphism \(\gamma:M\to M\) such that_ \[\left|\int_{M}\hat{q_{1}}(x)\hat{q_{2}}(\gamma(x))\sqrt{|D\gamma|}\langle\overline {q_{1}}(x),\overline{q_{2}}(\gamma(x))\rangle dm-\sum_{i,j}\sqrt{A_{ij}B_{ij}} \langle u_{i},v_{j}\rangle\right|<\epsilon.\] Proof of Claim 2.: Let \((A,B)\) be a discrete semi-coupling from \(\mu_{q_{1}}\) to \(\mu_{q_{2}}\) such that for each \(1\leq i\leq m\) and \(1\leq j\leq n\), \(A_{ij},B_{ij}>0\). We will first prove the claim for this restricted case and extend it to all semi-couplings by continuity. First we choose a real number \(r\in(0,1)\). For each \(1\leq i\leq m\), subdivide \(\sigma_{i}\) into \(n\) smaller \(d\)-simplexes \(\sigma_{ij}\) such that \(\hat{q_{1}}^{2}=A_{ij}/m(\sigma_{ij})\). Similarly, for each \(1\leq j\leq n\), subdivide \(\tau_{j}\) into \(m\) smaller \(d\)-simplexes \(\tau_{ij}\) such that \(\hat{q_{2}}^{2}=B_{ij}/m(\tau_{ij})\). For each \(1\leq i\leq m\) and \(1\leq j\leq n\), choose a smaller \(d\)-simplex \(\tilde{\sigma}_{ij}\), whose closure is contained in the interior of \(\sigma_{ij}\), such that \(m(\tilde{\sigma}_{ij})=rm(\sigma_{ij})\). Similarly, for each \(1\leq i\leq m\) and \(1\leq j\leq n\), choose a smaller \(d\)-simplex \(\tilde{\tau}_{ij}\), whose closure is contained in the interior of \(\tau_{ij}\), such that \(m(\tilde{\tau}_{ij})=rm(\tau_{ij})\). We now construct an orientation preserving PL homeomorphism \(\gamma_{r}:M\to M\). First, for each \(1\leq i\leq m\) and \(1\leq j\leq n\), define \(\gamma_{r}:\tilde{\sigma}_{ij}\rightarrow\tilde{\tau}_{ij}\) to be a PL orientation preserving homeomorphism with constant area multiplication factor, \(|D_{\gamma_{r}}|=m(\tau_{ij})/m(\sigma_{ij})\). Note that \[M-\left(\bigcup_{i=1}^{m}\bigcup_{j=1}^{n}\tilde{\sigma}_{ij}^{\circ}\right) \text{ is homeomorphic to }M-\left(\bigcup_{i=1}^{m}\bigcup_{j=1}^{n}\tilde{\tau}_{ij}^{\circ} \right).\] Hence, we can extend the homeomorphism \(\gamma_{r}\) defined on the \(\tilde{\sigma}_{ij}\)'s to a homeomorphism from \(M\) to \(M\). Note that on each \(\tilde{\sigma}_{ij}\), \(\hat{q_{2}}^{2}(\gamma_{r}(x))|D\gamma_{r}|=B_{ij}/m(\sigma_{ij})\). Write \(M=M_{1}\cup M_{2}\), where \(M_{1}=\bigcup_{i=1}^{m}\bigcup_{j=1}^{n}\tilde{\sigma}_{ij}\) and \(M_{2}=\overline{M-M_{1}}\). 
A simple computation shows \[\int_{M_{1}}\hat{q_{1}}(x)\hat{q_{2}}(\gamma_{r}(x))\sqrt{|D \gamma_{r}|}\langle\overline{q_{1}}(x),\overline{q_{2}}(\gamma_{r}(x)) \rangle dm\\ =\sum_{i=1}^{m}\sum_{j=1}^{n}\int_{\tilde{\sigma}_{ij}}\hat{q_{1 }}(x)\hat{q_{2}}(\gamma_{r}(x))\sqrt{|D\gamma_{r}|}\langle\overline{q_{1}}(x), \overline{q_{2}}(\gamma_{r}(x))\rangle dm\\ =\sum_{i=1}^{m}\sum_{j=1}^{n}\frac{\sqrt{A_{ij}B_{ij}}}{m(\sigma _{ij})}m(\tilde{\sigma}_{ij})\langle u_{i},v_{j}\rangle=\sum_{i=1}^{m}\sum_{j =1}^{n}\sqrt{rA_{ij}}\sqrt{rB_{ij}}\langle u_{i},v_{j}\rangle.\] Meanwhile by the Schwarz inequality, \[\left|\int_{M_{2}}\hat{q_{1}}(x)\hat{q_{2}}(\gamma_{r}(x))\sqrt{ |D\gamma_{r}|}\langle\overline{q_{1}}(x),\overline{q_{2}}(\gamma_{r}(x)) \rangle dm\right|\leq\int_{M_{2}}\hat{q_{1}}(x)\hat{q_{2}}(\gamma_{r}(x))\sqrt {|D\gamma_{r}|}dm\\ \leq\sqrt{\int_{M_{2}}\hat{q_{1}}^{2}dm}\sqrt{\int_{M_{2}}\hat{q _{2}}^{2}(\gamma_{r}(x))|D\gamma_{r}|}dm=\sqrt{(1-r)\int_{M}\hat{q_{1}}^{2}dm }\sqrt{(1-r)\int_{M}\hat{q_{2}}^{2}dm}.\] So as we let \(r\to 1\), \[\int_{M_{1}}\hat{q_{1}}(x)\hat{q_{2}}(\gamma_{r}(x))\sqrt{|D\gamma_{r}|} \langle\overline{q_{1}}(x),\overline{q_{2}}(\gamma_{r}(x))\rangle dm\to \sum_{i=1}^{m}\sum_{j=1}^{n}\sqrt{A_{ij}B_{ij}}\langle u_{i},v_{j}\rangle\] and \[\int_{M_{2}}\hat{q_{1}}(x)\hat{q_{2}}(\gamma_{r}(x))\sqrt{|D\gamma_{r}|}\langle \overline{q_{1}}(x),\overline{q_{2}}(\gamma_{r}(x))\rangle dm\to 0.\] Hence, \[\int_{M}\hat{q_{1}}(x)\hat{q_{2}}(\gamma_{r}(x))\sqrt{|D\gamma_{r}|} \langle\overline{q_{1}}(x),\overline{q_{2}}(\gamma_{r}(x))\rangle dm\to\sum_{i=1 }^{m}\sum_{j=1}^{n}\sqrt{A_{ij}B_{ij}}\langle u_{i},v_{j}\rangle.\] Thus Claim 2 follows for the case in which for each \(1\leq i\leq m\) and \(1\leq j\leq n\), \(A_{ij}>0\) and \(B_{ij}>0\). The general case then follows immediately from the continuity of \[\sum_{i=1}^{m}\sum_{j=1}^{n}\sqrt{A_{ij}B_{ij}}\langle u_{i},v_{j}\rangle\] as a function of \((A,B)\). This completes the proof of Claim 2. It follows that \[\sup_{\gamma\in\Gamma(S^{2})}\int_{M}\hat{q_{1}}(x)\hat{q_{2}}(x) \langle\overline{q_{1}}(x),\overline{q_{2}}(x)\rangle dm\geq\sup_{(A,B)\in \mathcal{A}}\sum_{i=1}^{m}\sum_{j=1}^{n}\sqrt{A_{ij}B_{ij}}\langle u_{i},v_{ j}\rangle.\] We are left to show the opposite inequality. **Claim 3**: _Assume \(\gamma\) is a PL-homeomorphism from \(M\) to \(M\), then there exists a discrete semi-coupling \((A,B)\) such that_ \[\sup_{\gamma\in\Gamma(M)}\int_{M}\hat{q_{1}}(x)\hat{q_{2}}(\gamma (x))\sqrt{|D\gamma|}\langle\overline{q_{1}}(x),\overline{q_{2}}(\gamma(x)) \rangle dm\leq\sup_{(A,B)\in\mathcal{A}}\sum_{i=1}^{m}\sum_{j=1}^{n}\sqrt{A_{ ij}B_{ij}}\langle u_{i},v_{j}\rangle.\] _Proof of Claim 3._ Let \(\gamma:M\to M\) be an orientation preserving PL homeomorphism. For \(1\leq i\leq m\) and \(1\leq j\leq n\), define \(\sigma_{ij}=\gamma^{-1}(\tau_{j})\cap\sigma_{i}\) and define \(\tau_{ij}=\gamma(\sigma_{ij})\). Now define two \((m+1)\times(n+1)\) matrices \(A\) and \(B\) via: * For \(1\leq i\leq m\) and \(1\leq j\leq n\), \(A_{ij}=\int_{\sigma_{ij}}\hat{q_{1}}^{2}dm\) and \(B_{ij}=\int_{\tau_{ij}}\hat{q_{2}}^{2}dm\). * For \(0\leq i\leq m\), \(B_{0i}=0\) and \(A_{i0}=a_{i}-\sum_{j=1}^{n}\int_{\sigma_{ij}}\hat{q_{1}}^{2}dm\). * For \(0\leq j\leq n\), \(A_{j0}=0\) and \(B_{0j}=b_{j}-\sum_{i=1}^{m}\int_{\tau_{ij}}\hat{q_{2}}^{2}dm\). The pair of matrices \((A,B)\) is a discrete semi-coupling from \(\mu_{q_{1}}\) to \(\mu_{q_{2}}\) by construction. 
We say that \((A,B)\) is the semi-coupling corresponding to the homeomorphism \(\gamma\). Denote the area multiplication factor of \(\gamma\) on \(\sigma_{ij}\) by \(m_{ij}\). Then by the Schwarz inequality, \[\int_{\sigma_{ij}}\hat{q_{1}}(x)\hat{q_{2}}(\gamma(x))\sqrt{|D\gamma|}\langle u_{i},v_{j}\rangle dm\leq\sqrt{\int_{\sigma_{ij}}\hat{q_{1}}^{2}(x)dm}\sqrt{\int_{\sigma_{ij}}\hat{q_{2}}^{2}(\gamma(x))|D\gamma|dm}\,\langle u_{i},v_{j}\rangle\\ =\sqrt{\int_{\sigma_{ij}}\hat{q_{1}}^{2}(x)dm}\sqrt{\int_{\tau_{ij}}\hat{q_{2}}^{2}(x)dm}\,\langle u_{i},v_{j}\rangle=\sqrt{A_{ij}}\sqrt{B_{ij}}\langle u_{i},v_{j}\rangle.\] Summing over all \(i\) and \(j\) we obtain: \[\int_{M}\hat{q_{1}}(x)\hat{q_{2}}(\gamma(x))\sqrt{|D\gamma|}\langle\overline{q_{1}}(x),\overline{q_{2}}(\gamma(x))\rangle dm\\ =\sum_{i,j}\int_{\sigma_{ij}}\hat{q_{1}}(x)\hat{q_{2}}(\gamma(x))\sqrt{|D\gamma|}\langle\overline{q_{1}}(x),\overline{q_{2}}(\gamma(x))\rangle dm\leq\sum_{i,j}\sqrt{A_{ij}}\sqrt{B_{ij}}\langle u_{i},v_{j}\rangle.\] This completes the proof of Claim 3. It follows that \[\sup_{\gamma\in\Gamma(M)}\int_{M}\hat{q_{1}}(x)\hat{q_{2}}(\gamma(x))\sqrt{|D\gamma|}\langle\overline{q_{1}}(x),\overline{q_{2}}(\gamma(x))\rangle dm\leq\sup_{(A,B)\in\mathcal{A}}\sum_{i=1}^{m}\sum_{j=1}^{n}\sqrt{A_{ij}B_{ij}}\langle u_{i},v_{j}\rangle,\] and thus the lemma is proved. To extend the results to all of \(L^{2}(M,\mathcal{C}(N))\) we will need the following continuity result: **Lemma 4.4**: _The map \((L^{2}(M,\mathcal{C}(N)),d_{L^{2}})\to(\mathcal{M}(N),\mathrm{WFR}_{\delta})\) defined via \(q\mapsto\overline{q}_{*}\nu_{q}\) is Lipschitz continuous with Lipschitz constant \(K=1\)._ Proof.: Let \(q_{1},q_{2}\in L^{2}(M,\mathcal{C}(N))\), \(\mu_{q_{1}}=\overline{q_{1}}_{*}\nu_{q_{1}}\), and \(\mu_{q_{2}}=\overline{q_{2}}_{*}\nu_{q_{2}}\). For any semi-coupling \((\gamma_{1},\gamma_{2})\in\Gamma(\mu_{q_{1}},\mu_{q_{2}})\), \[\mathrm{WFR}_{\delta}(\mu_{q_{1}},\mu_{q_{2}})\leq\sqrt{J_{\delta}(\gamma_{1},\gamma_{2})}.\] Thus, to prove the lemma we must construct \((\gamma_{1},\gamma_{2})\in\Gamma(\mu_{q_{1}},\mu_{q_{2}})\) such that \(J_{\delta}(\gamma_{1},\gamma_{2})=d_{L^{2}}(q_{1},q_{2})^{2}\). To construct such a semi-coupling we first define \(\rho:M\to N\times N\) via the first component maps of \(q_{1}\) and \(q_{2}\) on the first and second factor respectively, i.e., \(\rho(x)=\left(\overline{q_{1}}(x),\overline{q_{2}}(x)\right).\) Since \(\overline{q_{1}}\) and \(\overline{q_{2}}\) are individually measurable, so is \(\rho\). We can then define \(\gamma_{1},\gamma_{2}\in\mathcal{M}(N\times N)\) via \(\gamma_{1}=\rho_{*}\nu_{q_{1}}\) and \(\gamma_{2}=\rho_{*}\nu_{q_{2}}\). **Claim 4**: _The pair of measures \((\gamma_{1},\gamma_{2})\) is a semi-coupling from \(\mu_{q_{1}}\) to \(\mu_{q_{2}}\)._ _Proof of claim._ Let \(U\subseteq N\) be open. Then \[\gamma_{1}(U\times N)=\nu_{q_{1}}\left(\rho^{-1}(U\times N)\right)=\nu_{q_{1}}\left(\overline{q_{1}}^{-1}(U)\cap\overline{q_{2}}^{-1}(N)\right)=\nu_{q_{1}}\left(\overline{q_{1}}^{-1}(U)\right)=\mu_{q_{1}}(U)\] and \[\gamma_{2}(N\times U)=\nu_{q_{2}}\left(\rho^{-1}(N\times U)\right)=\nu_{q_{2}}\left(\overline{q_{1}}^{-1}(N)\cap\overline{q_{2}}^{-1}(U)\right)=\nu_{q_{2}}\left(\overline{q_{2}}^{-1}(U)\right)=\mu_{q_{2}}(U).\] So \((\gamma_{1},\gamma_{2})\) is a semi-coupling from \(\mu_{q_{1}}\) to \(\mu_{q_{2}}\). 
Recall from the definition of the functional \(J_{\delta}\) that we need to construct \(\gamma\in\mathcal{M}(N\times N)\) such that \(\gamma_{1},\gamma_{2}\ll\gamma\). Define \(\gamma=\rho_{*}m\). We know \(\nu_{q_{1}},\nu_{q_{2}}\ll m\). Thus, by Lemma 3.2, \(\gamma_{1},\gamma_{2}\ll\gamma\). Furthermore, \[\hat{q_{1}}^{2}=\frac{\nu_{q_{1}}}{m}=\frac{\gamma_{1}}{\gamma}\circ\rho\ \text{a.e.}\qquad\text{and}\qquad\hat{q_{2}}^{2}=\frac{\nu_{q_{2}}}{m}=\frac{\gamma_{2}}{\gamma}\circ\rho\ \text{a.e.}\] So, \[J_{\delta}(\gamma_{1},\gamma_{2})= \mu_{q_{1}}(N)+\mu_{q_{2}}(N)-2\int_{N\times N}\frac{\sqrt{\gamma_{1}\gamma_{2}}}{\gamma}(u,v)\overline{\cos}(d(u,v)/2\delta)d\gamma(u,v)\] \[= \int_{N\times N}\frac{\gamma_{1}}{\gamma}\,d\gamma+\int_{N\times N}\frac{\gamma_{2}}{\gamma}\,d\gamma-2\int_{N\times N}\sqrt{\frac{\gamma_{1}}{\gamma}(u,v)\frac{\gamma_{2}}{\gamma}(u,v)}\,\overline{\cos}(d(u,v)/2\delta)\,d\gamma(u,v)\] \[= \int_{\rho^{-1}(N\times N)}\frac{\gamma_{1}}{\gamma}\circ\rho\,dm+\int_{\rho^{-1}(N\times N)}\frac{\gamma_{2}}{\gamma}\circ\rho\,dm-2\int_{\rho^{-1}(N\times N)}\sqrt{\frac{\gamma_{1}}{\gamma}\circ\rho(x)\,\frac{\gamma_{2}}{\gamma}\circ\rho(x)}\,\overline{\cos}(d(\overline{q_{1}}(x),\overline{q_{2}}(x))/2\delta)\,dm\] \[= \int_{M}\hat{q_{1}}(x)^{2}\,dm+\int_{M}\hat{q_{2}}(x)^{2}\,dm-2\int_{M}\hat{q_{1}}(x)\hat{q_{2}}(x)\overline{\cos}(d(\overline{q_{1}},\overline{q_{2}})/2\delta)dm=d_{L^{2}}(q_{1},q_{2})^{2}.\] Thus, \[\mathrm{WFR}_{\delta}(\mu_{q_{1}},\mu_{q_{2}})\leq\sqrt{J_{\delta}(\gamma_{1},\gamma_{2})}=1\cdot d_{L^{2}}(q_{1},q_{2}).\] Finally, we can leverage this continuity result to complete the proof of Theorem 4.1. Proof of Theorem 4.1.: Let \(\mu_{1},\mu_{2}\in\mathcal{M}(N)\) and \(q_{1},q_{2}\in L^{2}(M,\mathcal{C}(N))\) such that \(\mu_{1}=\overline{q_{1}}_{*}\nu_{q_{1}}\) and \(\mu_{2}=\overline{q_{2}}_{*}\nu_{q_{2}}\). By an argument analogous to the proof of Theorem 3.1 we can conclude \[\inf_{\gamma\in\Gamma(M)}d_{L^{2}}(q_{1},q_{2}*\gamma)=\mathrm{WFR}_{\delta}(\mu_{1},\mu_{2}).\] This concludes the proof of part a.). Let \(\mu=\sum_{i=1}^{n}a_{i}\delta_{u_{i}}\) be a finitely supported measure on \(N\). By [48], \(M\) admits a canonical PL structure. Let \(m\geq n\) and subdivide the triangulation of \(M\) into \(m\) simplices given by \(\sigma_{j}\) for \(1\leq j\leq m\). Let \(x\in M\). Then there exists \(1\leq j\leq m\) such that \(x\in\sigma_{j}\), and we define \[q(x)=\begin{cases}\left(u_{j},\sqrt{\frac{a_{j}}{\mathrm{area}(\sigma_{j})}}\right)&\text{ if }1\leq j\leq n\\ (u_{1},0)&\text{ if }n<j\leq m\end{cases}.\] Let \(U\subseteq N\) be open; then \(\mu(U)=\sum\limits_{i|u_{i}\in U}a_{i}\). Meanwhile, \(\overline{q}^{-1}(U)=\bigsqcup\limits_{i|u_{i}\in U}\sigma_{i}\). Thus, \[\int_{\overline{q}^{-1}(U)}\hat{q}^{2}(x)dm=\sum_{i|u_{i}\in U}\int_{\sigma_{i}}\frac{a_{i}}{\mathrm{area}(\sigma_{i})}dm=\sum_{i|u_{i}\in U}a_{i},\] so \(\mu=\overline{q}_{*}\nu_{q}\). To complete the proof of part b.) we will extend the result to the whole space by continuity. For any \(\mu\in\mathcal{M}(N)\), let \(\{\mu_{n}\}\subseteq\mathcal{M}(N)\) be a sequence of finitely supported measures that converges to \(\mu\) with respect to the Wasserstein Fisher Rao distance. In particular, \(\{\mu_{n}\}\) is Cauchy with respect to \(\mathrm{WFR}_{\delta}\). 
Note that for all \(n\in\mathbb{N}\), there exists a piecewise constant \(q_{n}\in L^{2}(M,\mathcal{C}(N))\) satisfying \[\mu_{n}(U)=\int_{\overline{q_{n}}^{-1}(U)}\hat{q_{n}}(x)^{2}dm.\] Thus, we can construct a sequence of functions given by \(q_{0}^{*}=q_{0}\) and, for all \(n\in\mathbb{N}\), \(q^{*}_{n+1}=q_{n+1}*\gamma_{n}\), where \(\gamma_{n}\) is a PL homeomorphism from \(M\) to \(M\) such that \[d_{L^{2}}(q_{n}^{*},q_{n+1}*\gamma_{n})\leq\mathrm{WFR}_{\delta}(\mu_{n},\mu_{n+1})+\frac{1}{2^{n}}.\] Note that the existence of such a \(\gamma_{n}\) is guaranteed by Lemma 4.3. Since \(\{\mu_{n}\}\) is Cauchy with respect to \(\mathrm{WFR}_{\delta}\), it follows that \(\{q_{n}^{*}\}\) is Cauchy with respect to \(d_{L^{2}}\). By completeness of \((L^{2}(M,\mathcal{C}(N)),d_{L^{2}})\), there exists a limit \(q\in L^{2}(M,\mathcal{C}(N))\). Let \(U\subseteq N\) be open. By Lemma 4.2 we have \(\mu_{n}=\mu_{q_{n}^{*}}\), and thus \[\mu(U)= \lim_{n\to\infty}\mu_{n}(U)=\lim_{n\to\infty}\int_{\overline{q_{n}^{*}}^{-1}(U)}\hat{q_{n}^{*}}(x)^{2}dm=\lim_{n\to\infty}\int_{M}\hat{q_{n}^{*}}(x)^{2}\chi_{\overline{q_{n}^{*}}^{-1}(U)}dm\] \[= \int_{M}\lim_{n\to\infty}\hat{q_{n}^{*}}(x)^{2}\chi_{\overline{q_{n}^{*}}^{-1}(U)}dm=\int_{M}\hat{q}(x)^{2}\chi_{\overline{q}^{-1}(U)}dm=\int_{\overline{q}^{-1}(U)}\hat{q}(x)^{2}dm.\] Thus, \(\mu=\overline{q}_{*}\nu_{q}\). This completes the proof of part b.) of the theorem.
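To make the discrete semi-coupling formulation used throughout this section concrete, we close with a minimal numerical sketch (ours, not the authors') that evaluates the functional \(\sum_{i}a_{i}+\sum_{j}b_{j}-2\sum_{i,j}\sqrt{A_{ij}B_{ij}}\,\overline{\cos}(d(u_{i},v_{j})/2\delta)\) recalled in the proof of Lemma 4.3, for finitely supported measures on \(S^{2}\). The helper names and the convention \(\overline{\cos}(x)=\cos(\min(x,\pi/2))\) are illustrative assumptions; \(\mathrm{WFR}_{\delta}\) itself is obtained by optimizing this quantity over all discrete semi-couplings, and that optimization is not performed here.

```python
# Sketch: evaluate the discrete semi-coupling functional behind WFR_delta
# for finitely supported measures on S^2 (illustrative conventions only).
import numpy as np

def cosbar(x):
    # assumed truncated cosine; the paper's \overline{cos} may use a different convention
    return np.cos(np.minimum(x, np.pi / 2))

def discrete_J(A, B, a, b, u, v, delta=0.5):
    """a, b: masses of mu_1 = sum_i a_i delta_{u_i} and mu_2 = sum_j b_j delta_{v_j};
    u, v: support points as unit vectors in R^3 (points of S^2);
    A, B: the (i, j) blocks of a discrete semi-coupling, i.e. nonnegative matrices
          whose row/column marginals recover a and b.
    WFR_delta(mu_1, mu_2)^2 is the infimum of this functional over all
    discrete semi-couplings; here we only evaluate it for a given (A, B)."""
    d = np.arccos(np.clip(u @ v.T, -1.0, 1.0))        # geodesic distance on S^2
    gain = np.sqrt(A * B) * cosbar(d / (2.0 * delta))
    return a.sum() + b.sum() - 2.0 * gain.sum()
```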
2309.06969
Setting the Right Expectations: Algorithmic Recourse Over Time
Algorithmic systems are often called upon to assist in high-stakes decision making. In light of this, algorithmic recourse, the principle wherein individuals should be able to take action against an undesirable outcome made by an algorithmic system, is receiving growing attention. The bulk of the literature on algorithmic recourse to-date focuses primarily on how to provide recourse to a single individual, overlooking a critical element: the effects of a continuously changing context. Disregarding these effects on recourse is a significant oversight, since, in almost all cases, recourse consists of an individual making a first, unfavorable attempt, and then being given an opportunity to make one or several attempts at a later date - when the context might have changed. This can create false expectations, as initial recourse recommendations may become less reliable over time due to model drift and competition for access to the favorable outcome between individuals. In this work we propose an agent-based simulation framework for studying the effects of a continuously changing environment on algorithmic recourse. In particular, we identify two main effects that can alter the reliability of recourse for individuals represented by the agents: (1) competition with other agents acting upon recourse, and (2) competition with new agents entering the environment. Our findings highlight that only a small set of specific parameterizations result in algorithmic recourse that is reliable for agents over time. Consequently, we argue that substantial additional work is needed to understand recourse reliability over time, and to develop recourse methods that reward agents' effort.
Joao Fonseca, Andrew Bell, Carlo Abrate, Francesco Bonchi, Julia Stoyanovich
2023-09-13T14:04:15Z
http://arxiv.org/abs/2309.06969v1
# Setting the Right Expectations: Algorithmic Recourse Over Time ###### Abstract. Algorithmic systems are often called upon to assist in high-stakes decision making. In light of this, _algorithmic recourse_, the principle wherein individuals should be able to take action against an undesirable outcome made by an algorithmic system, is receiving growing attention. The bulk of the literature on algorithmic recourse to-date focuses primarily on how to provide recourse to a single individual, overlooking a critical element: the effects of a continuously changing context. Disregarding these effects on recourse is a significant oversight, since, in almost all cases, recourse consists of an individual making a first, unfavorable attempt, and then being given an opportunity to make one or several attempts _at a later date_ -- when the context might have changed. This can create false expectations, as initial recourse recommendations may become less reliable over time due to model drift and competition for access to the favorable outcome between individuals. In this work we propose an agent-based simulation framework for studying the effects of a continuously changing environment on algorithmic recourse. In particular, we identify two main effects that can alter the _reliability of recourse_ for individuals represented by the agents: (1) competition with other agents acting upon recourse, and (2) competition with new agents entering the environment. Our findings highlight that only a small set of specific parameterizations result in algorithmic recourse that is reliable for agents over time. Consequently, we argue that substantial additional work is needed to understand recourse reliability over time, and to develop recourse methods that reward agents' effort. + Footnote †: [ *]Both authors contributed equally ## 1. Introduction Artificial intelligence (AI) systems are becoming increasingly common in consequential decision-making settings such as healthcare (Brockman et al., 2017; Brockman et al., 2018), finance (Fonseca et al., 2018), and hiring (Fonseca et al., 2018; Fonseca et al., 2018; Fonseca et al., 2018). While these systems have the capacity to significantly improve people's lives, they can also have adverse consequences, such as erroneous decision-making (Sant and Loi, and Barocas et al., further highlighting the importance of continued work aimed at closing this critical research gap (Barcos et al., 2016; Li et al., 2017). As an example, consider the lending setting, wherein an AI system denies an individual's application for a loan but provides information on what that individual can do to be approved for the loan _if they apply again at a later date_(Louis et al., 2017). The individual may be told that their loan application was denied because their credit score is 50 points lower than necessary. One could imagine that it takes the individual 6 months to a year to improve their credit score -- which is enough time for the criteria for approving the loan to change. As a result, the initial recommendation of _"improving your credit score by 50 points"_ may have set false expectations. There are numerous reasons why selection criteria -- and the reliability of recourse recommendations -- can change over time (Louis et al., 2017; Li et al., 2017; Li et al., 2017; Li et al., 2017). In this work, we look at competitive effects that arise from having a multi-agent resource-constrained setting. 
In particular, we identify two main effects: (1) other agents are acting upon recourse recommendations they have received, and (2) new agents are entering the system. An illustration of the competitive effects and their impact on recourse can be seen in Figure 1. **Research questions, contributions, and roadmap.** Motivated by the scenarios described above, this work aims to develop a framework for multi-agent multi-time-step analysis of algorithmic recourse. We seek to answer the following research questions: 1. What metrics can be used to evaluate the reliability of algorithmic recourse over time? 2. Under what conditions is algorithmic recourse most reliable for individuals? 3. How can system designers set better expectations for what will happen to individuals, if they follow algorithmic recourse recommendations? We tackle these questions with the help of an agent-based simulation framework, in which a population of agents, each having its own features, applies for access to a scarce resource, and a black-box model decides on the outcomes. Agents obtaining a positive outcome exit the system, while those obtaining a negative outcome are given recourse recommendations, allowing them to adapt their features before applying again at the next time-step. Our framework is guided by a rich set of stochastic variables, including how flexible/willing an agent is to adapt and how many new agents are joining the system at each time step to compete for limited resources. To the best of our knowledge, this is the only work examining algorithmic recourse under resource-constrained settings using agent-based modeling. We are also among the first authors to study recourse over many time steps, rather than just examining one or two points in time. Moreover, we define a _recourse reliability_ metric, quantifying the probability that individuals will receive a favorable outcome, given that they took the recommended recourse. We also use our framework to assess how different parameterizations of the setting affect this metric. Our findings highlight that only a small set of specific parameterizations results in algorithmic recourse that is reliable for individuals over time. As a preview of our results, Figure 2 shows that, under 9 different parameterizations of our framework, one rarely observes reliable recourse over 50 time-steps. Our work highlights a crucial socio-technical implication: Systems that provide recourse recommendations _without considering potential shifts in the threshold for positive outcomes_ can create unrealistic expectations for individuals. In some cases, these recommendations may even be harmful by falsely promising rewards to those who seek recourse based on recommendations. This calls for substantial additional work on understanding recourse reliability in multi-agent multi-timestep settings, and on developing recourse methods that reward agents' effort. The rest of this paper is organized as follows: * We review related work on algorithmic recourse (Section 2). * We formalize multi-agent algorithmic recourse, propose a simulation framework for evaluating recourse over time, and define a recourse reliability metric (Section 3). * We present detailed analysis that confirms the importance of considering temporal effects in recourse (Section 4). * We provide an in-depth discussion of the practical implications of our observations (Section 5). ## 2. 
Related Work Algorithmic recourse is critically important for three reasons: first, as mentioned, it ascribes agency to individuals against adverse outcomes, including outcomes that are either incorrect (and inefficient) or discriminatory (Li et al., 2017; Li et al., 2017; Li et al., 2017). As such, many have argued that providing individuals with recourse is morally good and equitable, and therefore consistent with ethical computing (Li et al., 2017; Li et al., 2017). Second, from the perspective of the owners of AI systems, recourse can improve the overall accuracy and reliability of a system's outcomes. Third, algorithmic recourse will likely become legally necessary with the passing of legislation like the European Union AI Act (Li et al., 2017; Li et al., 2017). There are two main approaches used for providing algorithmic recourse in practice. The first and simpler approach is to provide recourse based on "contrastive explanations," which find changes that can be made to an individual's profile (i.e., the _feature space_) to flip their outcome from unfavorable to favorable (Li et al., 2017; Li et al., 2017). These recommended changes are sometimes called "interventions." 

Figure 2. Comparison of recourse reliability (metric defined in Section 3.5) over time under different simulated models of agent behavior. Based on the behaviors defined in Section 3.3, (a) shows simulations where agents' behavior is modeled under continuous adaptation with constant effort; (b) shows simulations under continuous adaptation with flexible effort. 

While useful in many settings, a shortcoming of this approach is that it does not consider the normative meaning of features. This weakness can lead to recourse recommendations that are disconnected from real-world actions (Bahdan et al., 2017; Krizhevsky et al., 2017), like recommending to an individual that they should "lower their age" to receive a different outcome. The second set of methods, "causal recourse methods," uses structural causal models (Krizhevsky et al., 2017; Krizhevsky et al., 2017) to account for downstream effects when changing a smaller set of an individual's features (also referred to as "consequential recommendations" (Krizhevsky et al., 2017)). The advantage of causal recourse methods is that they produce smaller, easier-to-achieve intervention sets for individuals (Krizhevsky et al., 2017) that are better connected with real-world actions and settings. 

### Temporal effects on recourse 

To the best of our knowledge, the impact of time in algorithmic recourse is understudied. As noted earlier, we see this as a significant research gap, because _time is inherent to algorithmic recourse itself_. We now discuss several studies that begin to fill this gap. Ferrario and Loi, motivated by the idea that machine learning models are often unstable in real-world settings, studied the impact of retraining models on the validity of counterfactual explanations over time (Ferrario and Loi, 2017). In their work, they developed a method using counterfactual data augmentation to improve the robustness of recourse recommendations and prevent so-called _Unfortunate Counterfactual Events (UCEs)_, which occur when an individual is given a recourse recommendation at time-step \(t\), but, due to model updates, the recommendation is invalid at time-step \(t+\delta\). 
Other researchers have more generally studied the robustness of counterfactual explanations due to changes in the underlying settings, but, notably, they did not emphasize temporal effects as the reason for these changes (Krizhevsky et al., 2017; Krizhevsky et al., 2017; Krizhevsky et al., 2017; Krizhevsky et al., 2017; Krizhevsky et al., 2017). Rawal et al. analyzed the impact of distribution shifts in the data on algorithmic recourse recommendations (Krizhevsky et al., 2017). They did not make any assumptions about the origin of these distribution shifts, and considered temporal shifts, geo-spatial shifts, and shifts due to data correction. An important finding of this work is that there are theoretical trade-offs between minimizing the cost of a recourse recommendations and ensuring robustness to distribution shifts. Under this trade-off, Pawelczyk et al. proposed a method for generating counterfactual explanations that allows individuals to set preferences and navigate this trade-off for themselves (Krizhevsky et al., 2017). We further refer to the work of Pawelczyk et al., and Ferrario and Loi in Section 3.5. The methods by Upadhyay et al. (Upadhyay et al., 2017) and Rawal and Lakkaraju (Rawal and Lakkaraju, 2017) use adversarial training to provide recourse recommendations that are robust to model shifts. They quantify the probability of recourse being invalidated given the possibility of model shifts. This is achieved by defining a set of plausible model shifts based on perturbations introduced in the parameter or gradient space. This analysis applied multiple data drift scenarios over two time steps. Our work fills an important gap left by existing work: Rather than focusing specifically on the robustness of counterfactual explanations, we look to characterize the reliability of algorithmic recourse more generally. The framework we propose is agnostic to the method used to generate recourse recommendations. Furthermore, we not only consider temporal effects in algorithmic recourse, but also study them alongside multi-agent effects, and in resource-constrained environments. ### Recourse in multi-agent settings The vast majority of work on both contrastive explanations and causal recourse methods has focused on single-agent settings, save for a few exceptions (Bahdan et al., 2017; Krizhevsky et al., 2017; Krizhevsky et al., 2017; Udogan et al., 2017). One notable work on multi-agent algorithmic recourse is by O'Brien and Kim (O'Brien and Kim, 2016; O'Brien and Kim, 2016), who adapted concepts from game theory literature to algorithmic recourse. They defined metrics like "Social-Welfare-Efficient recourse" and "Pareto-efficient recourse," which measure the effect of one agent taking recourse on the population of agents as a whole. Their analysis of multi-agent interactions, done through the lens of the prisoners' dilemma problem, leads to the following key conclusions: The improvement of an individual (or subgroup) tends to result in a loss in social welfare, while no scenario results in an improvement of both social welfare and the principal agent's. This finding highlights an open question: "When should algorithmic recourse be provided?" and is related to insights from Barocas et al. (Barocas et al., 2016), who suggest that recourse should not be presented to someone if it encourages an action that would be harmful to them. Another relevant line of work is by Altmeyer et al. 
(Altmeyer et al., 2016), who studied how recourse-based multi-agent interactions affect model drift over multiple time-steps. Their analysis shows significant model drift effects, which changed the threshold for reciving a positive outcome over time. This work has similarities to our own, however, their conclusions only hold under a restricted set of assumptions, namely, that (1) an agent who follows recourse recommendations is guaranteed a positive outcome; (2) the model is retrained at every time-step, using all prior data; and (3) agents are capable of taking as much recourse-based action as they need, and make the exact changes recommended to them. Importantly, violations of the first assumption could lead to unrealistic outcomes (e.g., decreasing an agent's age), or, in the case of the loan example, to significant monetary losses (Udogan et al., 2017). Our contribution departs from O'Brien and Kim (O'Brien and Kim, 2016), and Altmeyer et al. (Altmeyer et al., 2016), as we propose a more realistic agent-based framework, allowing agents to act on recourse in different ways, and accommodating more flexible population dynamics, with new agents joining and winning agents exiting the system. There are two other distinctions present in our simulation framework. First, at each iteration, we provide recourse to all agents who did not receive a favorable outcome, and they take recourse action based on a function describing their actions (also potentially choosing not to act). In contrast, in Altmeyer et al. (Altmeyer et al., 2016), a batch of agents is selected and provided recourse. Second, Altmeyer et al. (Altmeyer et al., 2016) focus on gradient-based counterfactual search, while our formulation is agnostic to the counterfactual search method. ## 3. Proposed Framework We study the problem of recourse in binary classification, where receiving the positive outcome corresponds to an agent gaining access to a desirable resource. Note that from here forward we use "agent" to mean an individual who is receiving an outcome from a system and possibly taking recourse. We start from the well-studied recourse problem in the static setting for a single agent, and then move to a more realistic setting in which multiple agents are competing for access to scarce resources (i.e., the number of agents competing exceeds the number of available positive outcomes). **Our goal in developing this framework was to create a way to realistically simulate algorithmic recourse in a multi-agent, multi-time step environment.** ### Recourse for a Single Agent Consider a single agent, described by a set of features, \(x\in\mathcal{X}\), and a black-box classifier \(f:\mathcal{X}\rightarrow\{0,1\}\) that is used to generate an outcome for the agent. If an agent \(x\) receives a negative outcome from the classifier \(f\), then the outcome is supplemented with a recommendation (or an explanation) on what they can change in their feature space to receive a positive outcome instead. Generating this recommendation is the goal of the single-agent recourse problem. 
Definition 1 (Single-agent recourse problem).: Given a black-box classifier \(f:\mathcal{X}\rightarrow\{0,1\}\) and an agent \(x\in\mathcal{X}\) for which \(f(x)=0\) (a negative outcome), find a new configuration of the agent \(x^{\prime}\in\mathcal{X}\) that results in \(f(x^{\prime})=1\) (a positive outcome), and is associated with the lowest cost \(c(x,x^{\prime})\) of making the change: \[\begin{split} x^{\prime}=\operatorname*{arg\,min}_{x^{\prime}}& c(x,x^{\prime})\\ s.t.& f(x^{\prime})=1\\ & x^{\prime}\in\mathcal{X}\end{split} \tag{1}\] Use the configuration \(x^{\prime}\) to return a recommendation to agent \(x\) regarding the adjustment they need to make to their features to receive a positive outcome. To make this definition more concrete, consider a scenario where agents are applying for a loan from a bank (a common example used when discussing recourse). For our purposes, let an agent \(x\in\mathcal{X}\) have two features: their credit rating \(x_{1}\) and annual income \(x_{2}\). Suppose the agent applies for the loan, but is ultimately denied based on the output of a risk assessment tool, represented by \(f:\mathcal{X}\rightarrow\{0,1\}\). If the bank decides to offer recourse to \(x\), they may suggest that credit rating \(x_{1}\) should be improved by \(\delta_{1}\) or income \(x_{2}\) be improved by \(\delta_{2}\), or both, minimizing cost \(c:\mathcal{X}\times\mathcal{X}\rightarrow[0,1]\) for the agent, so that \(f(x^{\prime})=1\). Various methods for solving the problem of Definition 1 have been proposed (Friedman and Rafter, 2001; Rafter and Rafter, 2001; Rafter and Rafter, 2001). 

### Multi-Agent Recourse 

Suppose next that agents belong to a population \(P\), and that they are all competing for access to a scarce resource. For example, there may be \(|P|=N\) loan applicants, but the bank is only able to lend money to \(k\ll N\) of them. Because of the existence of the resource constraint \(k\), there is no fixed threshold that can be used to determine whether the output of \(f(x)\) corresponds to a positive or a negative outcome. Instead, we construct the black-box model so that, rather than returning a binary outcome, it returns a score: \(f:\mathcal{X}\rightarrow[0,1]\). This score is used to rank agents from highest to lowest, and the top-\(k\) agents \(P^{k}\subseteq P\) are then selected to receive the positive outcome. While there is _no fixed threshold_ associated with the positive outcome, we can use the \(k^{th}\) highest score \(s\) as a threshold for receiving the positive outcome. We write this formally below: Definition 2 (Multi-agent recourse problem).: Given a black-box classifier \(f:\mathcal{X}\rightarrow[0,1]\), a population of agents \(P\), and a resource constraint \(k\), compute the score threshold \(s\) for receiving the positive outcome. For each agent \(x\in\mathcal{X}\) for which \(f(x)<s\) (a negative outcome), find a new counterfactual configuration of the agent \(x^{\prime}\in\mathcal{X}\) that results in \(f(x^{\prime})\geq s\) (a positive outcome), and is associated with the lowest cost \(c(x,x^{\prime})\) of making the change: \[\begin{split} x^{\prime}=\operatorname*{arg\,min}_{x^{\prime}}& c(x,x^{\prime})\\ s.t.& f(x^{\prime})\geq s\\ & x^{\prime}\in\mathcal{X}\end{split} \tag{2}\] Use the configuration \(x^{\prime}\) for each agent \(x\) for which \(f(x)<s\) to return a recommendation regarding the adjustments they need to make to their features to receive the positive outcome. 
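As a concrete illustration of Definition 2, the following minimal sketch (ours, not the authors' implementation) computes the score threshold \(s\) as the \(k^{th}\) highest score and returns a lowest-cost recommendation for a simple linear scorer with squared-\(L_{2}\) cost. The linear form of \(f\), the cost choice, and all function names are illustrative assumptions.

```python
# Sketch of the multi-agent recourse problem (Definition 2) for a linear
# scorer f(x) = w.x and squared-L2 cost; names and model are illustrative.
import numpy as np

def score_threshold(scores, k):
    """The k-th highest score acts as the (implicit) threshold s."""
    return np.sort(scores)[-k]

def recourse_recommendation(x, w, s):
    """Cheapest x' (in L2) with w.x' >= s: move along w just far enough."""
    gap = s - w @ x
    if gap <= 0:                     # already above the threshold
        return x.copy()
    return x + (gap / (w @ w)) * w

# toy example: 100 agents with 2 features, k = 10 available resources
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
w = np.array([0.7, 0.3])
scores = X @ w
s = score_threshold(scores, k=10)
recs = [recourse_recommendation(x, w, s) for x in X if x @ w < s]
```

For this particular scorer and cost the minimizer has a closed form (a step along \(w\)); for general models and costs, Equation (2) would instead be solved with counterfactual-search methods such as those cited after Definition 1.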
### Modeling Agents' Behaviors 

Having shown how recourse can be modeled for multiple agents, we now turn our attention to modeling the behavior of a population of agents over time. We consider two important, realistic considerations about agents' behavior with respect to recourse: 1. _How faithfully an agent follows the recourse recommendation._ Agents may follow the recourse recommendation _exactly_, or they may outperform (or underperform) the recommendation. For example, returning to the loan example described in Section 3.1, one could imagine that if an agent is told to increase their credit score by 50 points, they may do so exactly, or they may actually increase their score by 40 points, or by 60 points. We call this consideration **"adaptation."1** Footnote 1: Pawelczyk et al. use the terms _prescribed recourse_ and _implemented recourse_ to refer to a similar concept. 2. _The likelihood of an agent to take any action._ An agent that receives a recourse recommendation may or may not act on it. The likelihood that an agent will attempt a recourse action is determined by several factors, such as their implicit willingness to make changes and the amount of effort the action requires. In the loan example, if an agent is told to increase their credit score by 20 points, they may be more likely to make the effort as opposed to being told to increase it by 200 points. We call this consideration **"effort."** **Adaptation.** The single-agent recourse problem (Definition 1) assumes that, when a recommendation for recourse is provided, an agent will change their features _exactly according to the recommendation_ they receive. Here, we relax this assumption and consider cases where there is uncertainty regarding whether an agent will, in fact, change their configuration in the way that the recommendation suggests. We model an agent's actions in changing their features as a function \(a:\mathcal{X}\times\mathcal{X}\rightarrow\mathcal{X}\) that produces a configuration \(x^{\prime\prime}=a(x,x^{\prime})\), with the goal of reaching or exceeding the score threshold \(s\). We consider three cases: (1) \(x\) follows the suggested recourse recommendation exactly, i.e., \(a(x,x^{\prime})=x^{\prime}\) with \(f(x^{\prime})=s\); (2) \(x\) declines to follow the recommendation, i.e., \(a(x,x^{\prime})=x\), with \(f(x)<s\); (3) \(x\) changes their features in a way that is feasible but different than the recourse recommendation, i.e., \(a(x,x^{\prime})=x^{\prime\prime}\) with score \(f(x^{\prime\prime})\) that may be greater than or less than \(s\). In our simulation framework, we model this behavior using the adaptation parameter that has two settings: _binary_ and _continuous_. The _binary_ setting matches the classical view of recourse found in the literature, and assumes that all agents that take recourse action _exactly match the recommendation_ (their new feature configuration is \(x^{\prime}\)). Under this setting, an agent acts on the recourse recommendation with some probability \(p\) (i.e., \(a(x,x^{\prime})=x^{\prime}\)), and retains the original value of their features with probability \(1-p\) (i.e., \(a(x,x^{\prime})=x\)). The _continuous_ setting accounts for what we believe is a more realistic modeling of behavior: that an agent makes progress towards a recourse recommendation, and may out- or under-perform the recommendation at any given time-step. In this setting, the agent produces configuration \(x^{\prime\prime}\) according to some probabilistic model (e.g., a Gaussian distribution). 
When taking actions, a value \(\delta_{x}\) is sampled from this distribution, resulting in a new configuration \(x^{\prime\prime}\), and a new score \(f(x^{\prime\prime})=f(x)+\delta_{x}\) is computed. **Effort.** As described earlier, _effort_ reflects an agent's likelihood to take action and change their feature space. Under our simulation, this parameter has two settings: _constant_ and _flexible_. The setting _constant_ means that an agent has an implicit willingness to act on recourse that is determined _a priori_ and is intrinsic to the agent. Regardless of what is happening in the environment, an agent has some probability \(p\) that they will act on a recourse recommendation. The setting _flexible_ means that the amount of effort required for a recourse recommendation determines the probability that an agent will take action and change their features. In other words, the less effort is required, the more likely it is for an agent to act (and vice versa). In this case, the agents' probability to act on recourse is sampled from \(\text{dist}(f(x),s)\). To encode effort into our simulation, we use the parameter \(l\in\mathbb{R}\), and explain how it is used in each setting in Section 4.2. \(l\) is an additive factor, and intuitively, it can be thought of as the agent-level willingness to take recourse action. **Difficulty of acting on recourse recommendations.** An additional consideration we use to model agent behavior is defining a global parameter that controls the _difficulty of acting on a recourse recommendation_. For example, one can imagine that it is easier to act on a recourse recommendation when it is related to signing up for a social media account versus improving one's credit score. The parameter \(g\in[0,1]\) is set _a priori_ for the simulation. Values of \(g\) closer to \(1.0\) indicate a setting where it is easier for all agents to successfully get recourse. **Adaptation and effort combined.** Since both adaptation and effort have two settings, there are four possible settings to model agents' behavior. These are illustrated and described in Figure 3. For better understanding, let us consider two of the settings in detail (a code sketch covering the combinations follows the list): * _Binary adaptation with constant effort._ In this case, an agent's willingness to take action is determined _a priori_, and they will make the _exact_ changes to their features per the recommended recourse they receive. * _Continuous adaptation with flexible effort._ In this case, an agent's willingness to take action is determined by how much effort they need to make. Further, how much they change their features is also probabilistic, and may result in them meeting, out-performing or under-performing the recourse recommendation.
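The sketch below illustrates one way these behavior settings could be encoded. The specific form of the flexible-effort willingness here is an illustrative assumption (a clipped distance to the threshold), not the exact formula used later in the experiments, and the Gaussian step in the continuous branch is likewise only one possible probabilistic model.

```python
import numpy as np

rng = np.random.default_rng(0)

def willingness(fx, s, effort="constant", base_p=0.5):
    """Probability that an agent acts at all.
    'constant': fixed a-priori probability intrinsic to the agent;
    'flexible': grows as the agent's score fx approaches the threshold s
    (less effort required -> more likely to act)."""
    if effort == "constant":
        return base_p
    return float(np.clip(1.0 - (s - fx), 0.0, 1.0))   # illustrative choice

def act(x, x_prime, fx, s, adaptation="binary", effort="constant", sigma=0.1):
    """Agent behavior a(x, x'): the configuration after (possibly) acting."""
    if rng.random() > willingness(fx, s, effort):
        return x                          # agent declines to act
    if adaptation == "binary":
        return x_prime                    # follows the recommendation exactly
    # continuous: makes progress toward x', possibly over- or under-shooting
    step = rng.normal(loc=1.0, scale=sigma)
    return x + step * (x_prime - x)
```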
### Simulation Framework We consider an environment where agents compete for access to a scarce resource repeatedly, at discrete time steps \(t_{0},\dots,t_{n}\). The number of resources at each time step is given by \(k\), which determines the number of positive outcomes (e.g., loans) that can be assigned at each time step. The simulation begins with an initial population of agents \(P_{0}\), all with synthetically-generated features. At each time step, every agent receives a score from a machine learning classifier \(f:\mathcal{X}\rightarrow[0,1]\), representing their attempt at a positive outcome (e.g., applying for a loan). The agents with the top-\(k\) scores at each time step receive a positive outcome and exit the environment (e.g., they receive a loan). Agents that receive a negative outcome are given recourse recommendations from a function \(r:\mathcal{X}\rightarrow\mathcal{X}\), and have the chance to act on those recommendations at a later time step. The likelihood an agent will "take action" and change their features is governed by an "agent behavior" function \(a_{l,g}:\mathcal{X}\times\mathcal{X}\rightarrow\mathcal{X}\). Note that both the likelihood of taking action (or not) _and_ the amount of action taken are governed by \(a_{l,g}\). We encode \(l\) and \(g\) as described in Section 3.3 as hyperparameters for the adaptation function \(a_{l,g}\). Recall that \(g\in[0,1]\) determines the global difficulty of achieving recourse and is set _a priori_ for the simulation. We let \(l\in[0,1]\) be defined separately for each agent when they enter the simulation, and it reflects each agent's individual willingness to take action. We draw \(l\) from a random distribution, and a higher value of \(l\) means that an agent is more willing to act upon a recourse recommendation. Note that \(l\) is mutable from one time-step to another. For example, under the _flexible_ setting (see Figure 3), if an agent finds themselves closer to the score threshold, their willingness to take action may increase. Figure 3. Different types of agent behavior. The agent (red) receives a recommendation (yellow) and takes action (blue). See Section 3.3 for a detailed description of the parameters _adaptation_ and _effort_. Importantly, the population of agents is not fixed: at each time step, new agents (i.e., new loan applicants) join the environment. We model these dynamics by representing the population of agents over time as a sequence \(P=\{P_{t_{0}},\ldots,P_{t_{n}}\}\). At the end of the simulation, we retrieve \(P\) that contains the final state of each agent.
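A condensed sketch of this loop is shown below. Here `recommend`, `behave`, and `new_agents` stand in for the recourse generator \(r\), the agent-behavior function \(a_{l,g}\), and the sampler of incoming agents; they are placeholders rather than the exact implementation in the public repository.

```python
import numpy as np

def run_simulation(P0, f, recommend, behave, new_agents, k=10, T=50):
    """Sketch of the environment loop: score agents, grant the top-k a positive
    outcome, give recourse recommendations to the rest, let them (maybe) act,
    then add newly arriving agents."""
    P, history = P0.copy(), []
    for t in range(T):
        scores = f(P)                              # f: X -> [0, 1], vectorized
        order = np.argsort(scores)[::-1]
        s_t = scores[order[k - 1]]                 # threshold = k-th highest score
        history.append({"t": t, "threshold": s_t, "winners": P[order[:k]]})
        remaining = P[order[k:]]                   # top-k agents exit the environment
        acted = [behave(x, recommend(x, s_t), f(x[None])[0], s_t)
                 for x in remaining]               # negative outcome -> recourse
        P = np.vstack([np.asarray(acted), new_agents(t)])
    return history
```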
### Quantifying the Reliability of Recourse As discussed in Sections 1 and 2, prior work on time-related effects of recourse is very limited. To the best of our knowledge, only two recourse reliability metrics have been defined to-date. Ferrario and Loi propose measuring the reliability of recourse using _"Unfortunate Counterfactual Events (UCEs)"_ that occur when a counterfactual explanation used as a recourse recommendation becomes invalidated at a later time due to updates in the underlying machine learning model (Ferrario and Loi, 2017). Pawelczyk et al. define a metric called _Recourse Invalidation Rate_ that quantifies the probability that a recourse recommendation becomes invalidated for a single individual due to changes in the way recourse is implemented by that individual (Rasmaglia et al., 2017). In this paper, we propose an alternate metric that quantifies how well recourse recommendations meet individuals' expectations. In other words, we offer a metric that quantifies whether the "promise" of recourse matches reality. We call this metric **recourse reliability**, and see it as an important contribution to the literature, for two reasons: First, unlike previous metrics, it quantifies _system-level_ behavior, as opposed to focusing on recourse for a single individual. Second, our metric is not tied to counterfactual explanations, and is agnostic to the way the underlying recourse recommendation is generated. For example, it can be used to quantify the reliability of _principal reason explanations_ (Bordes and McAllester, 2017). **The impact of competitive effects on the score threshold for a positive outcome.** Consider the simulation framework described in Section 3.4. At each time step \(t>0\), an agent that received a negative outcome in the previous time step has a chance of altering their feature space in such a way that it either meets or exceeds the previous score threshold for a positive outcome \(s_{t}\). Typically, an agent that carries out this behavior would expect to receive a positive outcome at the next time step, \(t+1\). However, due to competitive effects (described in Figure 1) the agent may not receive a positive outcome. Specifically, we consider two competitive effects: 1. Agents already present in the environment may act on recourse recommendations and exceed the score of other agents. 2. New agents entering the environment may have scores that exceed the score of other agents. In both cases, it is possible that the score threshold for a positive outcome increases so that \(s_{t+1}>s_{t}\). Intuitively, the score threshold can only remain constant if the number of agents whose scores exceed the previous threshold (whether by acting on recourse or by newly entering the environment) is equal to the resource constraint \(k\). This is shown in the following equality: \[\frac{\mathbb{E}[\sum_{x_{t}\in P_{t}}\mathbb{1}_{[f(x_{t})>s_{t-1}]}]}{k}=1 \tag{3}\] Importantly, if the equality _does not hold_, then the score threshold for a positive outcome may increase at future time steps. As a result, the reliability of algorithmic recourse recommendations cannot be guaranteed. **Defining recourse reliability**. Let us denote the set of agents that changed their features per a recourse recommendation and _met or exceeded the score threshold \(s_{t-1}\)_ as \(C_{t}=\{x_{t}\in A_{t}|f(x_{t})\geq s_{t-1}\}\), where \(A_{t}\) denotes the agents that acted on a recourse recommendation at time step \(t\). The agents \(C_{t}\) successfully acted on a recourse recommendation and thus _expect to receive a positive outcome_. Recall that \(P^{k}_{t}\) is the set of agents that received a positive outcome at time step \(t\). Using this notation, we can define **recourse reliability at time t** as: \[RR_{t}=\frac{|C_{t}\cap P^{k}_{t}|}{|C_{t}|} \tag{4}\] Stated plainly, recourse reliability at time \(t\) is the proportion of agents who acted on recourse and _received a positive outcome_, out of all those agents who acted on recourse and _expected a positive outcome_. In this way, recourse reliability is a measure of how well recourse expectations are met for agents.
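Both quantities are straightforward to compute from the simulation state; a minimal sketch (with illustrative names) follows.

```python
import numpy as np

def recourse_reliability(acted_ids, acted_scores, prev_threshold, winner_ids):
    """RR_t = |C_t ∩ P^k_t| / |C_t| (Eq. 4).
    C_t: agents that acted on recourse and reached the previous threshold,
    so they *expect* a positive outcome; winner_ids: agents that got one."""
    C_t = {i for i, s in zip(acted_ids, acted_scores) if s >= prev_threshold}
    if not C_t:
        return 1.0            # vacuously reliable: nobody expected a positive outcome
    return len(C_t & set(winner_ids)) / len(C_t)

def threshold_is_stable(scores, prev_threshold, k):
    """Eq. 3: the threshold can only stay put when the number of agents above
    the previous threshold matches the number of available resources k."""
    return np.isclose((scores > prev_threshold).sum() / k, 1.0)
```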
## 4. Empirical Analysis To better understand the behavior of recourse reliability, we conducted extensive empirical analysis using the simulation framework described in Section 3.4. Our analyses also allow us to demonstrate how the various concepts defined previously, like adaptation, effort, and competitive effects, impact algorithmic recourse. The following analyses were executed using Python with its standard libraries. Our implementation and all supporting code (including the empirical analysis reported here) are publicly available on GitHub2. Footnote 2: [https://github.com/joaopfonseca/recourse-game/](https://github.com/joaopfonseca/recourse-game/) We report results of simulations using the parameters summarized in Table 1. All reported results represent 20 executions with different initial random seeds, saved for reproducibility. ### Experimental Setup Table 1 summarizes the parameters of the simulations used in our experimental evaluation. We refer to Figure 4 for a visual summary of the components of the framework, and their interactions. \begin{table} \begin{tabular}{l l l} \hline \hline Symbol & Parameter & Settings \\ \hline \(p\) & Initial number of agents & 100 \\ \(k\) & Number of favorable outcomes per time step & 10 \\ \(n\) & Number of new agents per time step & \(\{0.8k,0.9k,k,1.1k,1.2k\}\) \\ \(a_{l,g}(x,x^{\prime})\) & Describes _adaptation_ and _effort_ of agents & [binary, continuous] \(\times\) [constant, flexible] \\ \(l\) & Agent-level (local) willingness of acting on recourse & \([0,1]\) \\ \(g\) & Global ease of acting on recourse & \([0,1]\) \\ \(T\) & Number of time steps & 50 \\ \hline \hline \end{tabular} \end{table} Table 1. Summary of simulation parameters _Population._ Agents are generated over a 2-dimensional feature space, sampled independently at random. Each feature is sampled from a Gaussian distribution, where \(x=(x_{1},x_{2})\) and \(x_{i}\sim\mathcal{N}(\mu=0.5,\sigma=0.3)\), \(i=1,2\). All simulations have an initial population of \(p=100\) agents. Of these, \(k=10\) receive the favorable outcome at every time step and exit the system. New agents \(n\) enter the simulation at every time step, where the number of new agents is fixed per simulation, and varies between 8 and 12 across simulations. _Classifier._ The framework described in Section 3.4 is model-agnostic, but for our experiments we used a simple logistic regression classifier to determine the score for each agent at each time step. The target variable \(y_{i}\) for the prediction task is created randomly using a binomial distribution. _Calculating recourse recommendations._ Our simulation framework is also agnostic to the method used for generating recourse recommendations for each agent. We use a simple approach to generate single-agent counterfactual explanations for linear classification, based on the work by Ustun et al. (2018) and its open source implementation. ### Experimental Results We present our results based on the different types of agents' behavior, and show full results for three parameter settings: 1. Binary adaptation with constant effort. 2. Continuous adaptation with constant effort. 3. Continuous adaptation with flexible effort. We do not report results for the binary adaptation with flexible effort setting because it does not contain unique insights as compared to the other settings. Further, we do not explore the effect of varying \(l\) within each experiment (i.e., agent-level willingness) but instead define \(l\)_a priori_ for each setting. We do this because we are primarily interested in the impact of the (global) difficulty of recourse and of the number of new applicants at each time-step. #### 4.2.1. Binary adaptation with constant effort. Recall that binary adaptation means that all agents \(x\) will adapt to match the counterfactual configuration \(x^{\prime}\) based on some fixed probability. We express this probability in the following way: \[a_{l,g}(x_{t},x_{t}^{\prime})=(1-l)\times x_{t}+l\times x_{t}^{\prime} \tag{5}\] We sample \(l\) from a Bernoulli distribution where \(l\sim Bernoulli(g)\).
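As a sketch of this configuration (using scikit-learn for the scoring model, which matches the description above; the variable names are illustrative and the random target is only for demonstration):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Population and resources of Table 1: p = 100 agents with two Gaussian
# features, k = 10 favorable outcomes per step, g set a priori.
p, k, g = 100, 10, 0.5
X = rng.normal(loc=0.5, scale=0.3, size=(p, 2))
y = rng.binomial(1, 0.5, size=p)                 # random binary target

clf = LogisticRegression().fit(X, y)             # scores f(x) in [0, 1]
scores = clf.predict_proba(X)[:, 1]

def binary_adaptation(x, x_prime, g=g):
    """Eq. (5): a(x, x') = (1 - l) * x + l * x' with l ~ Bernoulli(g);
    the agent either copies the recommendation exactly or does nothing."""
    l = rng.binomial(1, g)
    return (1 - l) * x + l * x_prime
```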
Figure 5 shows the score threshold for a positive outcome \(s_{t}\) and the recourse reliability \(RR_{t}\) over 50 time-steps, varying the number of agents taking recourse action and the number of new agents entering at each time step. There are several observations that can be gleaned from the figure. First, over nearly all settings, the threshold \(s_{t}\) is relatively constant, with imperceptible drifts over time. Second, and significantly, the recourse reliability \(RR_{t}\) always tends to decrease (at different rates). Third, in settings where \(g\) is approximately 0.5 or greater and the number of new agents is approximately \(0.9\times k\) or greater, the \(RR_{t}\) is close to 0. This is because with so many agents adapting, the environment becomes incredibly competitive and it is difficult for agents to achieve a successful recourse event. Figure 5. Binary adaptation with constant effort. Score threshold values for a positive outcome (blue lines) and recourse reliability (red lines) over 50 time steps (error bars given by running 20 simulations with different random seeds). The \(y\)-axis in each subgraph is the score of individual agents, and the \(x\)-axis in each subgraph is the time step. The large \(y\)-axis is the difficulty of acting on recourse \(g\), and the large \(x\)-axis is the number of new agents at each time step \(N_{t}\). #### 4.2.2. Continuous adaptation with constant effort. Recall that constant effort is based on the idea that each agent has an _a priori_ willingness to act on a recourse recommendation, and it does not change over time. We sample \(l\) from a uniform distribution where \(l\sim\mathcal{U}(0,1)\), and as described earlier, \(g\) is an input parameter of the simulation. Figure 6 is analogous to Figure 5. Notice that in this case, the volatility of the recourse reliability score \(RR_{t}\) varies with a more discernible pattern: although the score threshold for a positive outcome \(s_{t}\) also varies with both parameters, it always decays when the number of new agents (8 or 9) is lower than the number of favorable outcomes (10). Overall, the recourse reliability scores follow non-linear trajectories with high volatility. Figure 6. Continuous adaptation with constant effort. Identical details as in Figure 5. #### 4.2.3. Continuous adaptation with flexible effort. Recall that flexible effort means that an agent's willingness to take action on recourse is based on the agent's distance to the score threshold for a favorable outcome. It follows the assumption that an agent with a score closer to the score threshold will have a greater incentive to exert effort as compared to an agent with a lower score. In this case, we define the agent-level willingness \(l\) as follows: \[l=\frac{1}{s_{t}-f(x)} \tag{6}\] Figure 7 is analogous to Figure 5. Although one observes lower volatility of both the score threshold and the recourse reliability as compared to the continuous adaptation with constant effort setting (Section 4.2.2), some parameter settings yield high volatility. Specifically, larger \(g\) and lower \(N_{t}\) lead to lower recourse reliability. #### 4.2.4. Summary of experimental results. Generally, we observe that recourse reliability will decrease as the global difficulty \(g\) increases (i.e., it is harder for agents to act on recourse), and as the number of agents increases above the resource constraint \(k\). Intuitively, this reflects the idea that a more competitive environment will result in lower recourse reliability. _Adaptation._ In terms of agent behavior, holding all other parameters constant, binary adaptation has a negative impact on recourse reliability overall as compared to continuous adaptation. Again, our intuition is that binary adaptation models a more competitive environment since agents are "quicker" to implement recourse. Also, in this setting, agents often find themselves having their score tied with others since all agents are changing their score to _exactly_ match the threshold.
_Effort._ In a similar fashion, flexible effort has a more negative impact on recourse reliability as compared to constant effort, likely due to making the environment more competitive. Under flexible effort, agents that are closer to the threshold will act on recourse and pass the threshold more frequently. ## 5. Discussion In this paper, we developed a framework that simulates recourse in a _multi-agent environment over time_. In the experiments conducted using our framework, we found that, in the vast majority of cases, the score threshold for a positive outcome either decreases or increases over time. Both of these cases can be harmful to the agents: if the threshold increases over time, then recourse reliability suffers (i.e., some of the agents that were promised a positive outcome will not receive one even if they reach the threshold). If the threshold decreases over time, then recourse recommendations are reliable, but the effort of agents acting on recourse may be wasted (i.e., they worked harder than necessary to achieve the positive outcome). Importantly, _across all of our simulations_, we found that the score threshold rarely remains stable over time, and, further, that threshold stability, which is desirable from the point of view of the agents, is the most difficult to parameterize for. Our observations have a significant socio-technical implication: **In systems that administer recourse _without considering possible changes over time,_ individuals are at best being given unrealistic expectations about what will happen to them if they follow recommendations for recourse, and, at worst, the system designers are being irresponsible --and possibly damaging-- by falsely promising a reward to individuals for their efforts.** This finding supports concerns by Barocas et al. and other authors about the need to study recourse under the assumption that the system will change over time (Barocas et al., 2015; Barocas et al., 2015; Barocas et al., 2015). For a real-world example, let us consider college admissions. There is substantial evidence that the selectivity of US universities changes over time (Barocas et al., 2015). One can imagine a scenario where an applicant is denied admission to their top-choice university in one year, and is recommended a set of improvements to make to their application (e.g., increase their SAT score, increase the number of volunteering hours). However, these improvements may turn out to be insufficient for admission when the individual re-applies to the same university one or two years later. **Guidance for system-level decision-makers.** Our framework can be used to provide guidance to system-level decision-makers like banks, colleges, and governments. The parameters of our framework -- the number of new individuals at each time step, the number of individuals that re-apply at each time step, and the expected level of improvement among the individuals who re-apply -- can all be easily measured in practice. Once these parameters are known, our framework can anticipate changes in the threshold over time, and provide insight regarding the time-step at which specific changes are expected to occur. Another use case for our framework is that, if any of these values can only be estimated rather than measured, one could run many experiments sweeping over a broad range of values and scenarios to better understand possible outcomes. There are two concrete actions that decision-makers can take based on empirical insights.
First, recourse reliability can be measured and used to provide an uncertainty estimate to individuals undertaking recourse action. For example, the system could generate a recommendation in the following way: "If you make the following \(X\) changes by time \(t\), then there is a \(Y\%\) chance that you will receive a positive outcome." Second, if it is possible to adjust resource constraints (e.g., the number of loans being given, or the number of spots in an incoming college class), our framework can inform what setting of the resource constraint maximizes recourse reliability. For example, colleges could estimate the ideal incoming class-size that achieves a stable threshold for admissions. Figure 7. Continuous adaptation with flexible effort. Identical details as in Figure 5. ## 6. Conclusions, Limitations, and Future Work In this paper, we sought to close a significant gap in the literature, and conducted multi-agent, multi-time step analysis of algorithmic recourse within competitive environments. We developed a simulation framework that allows the configuration of a diverse set of components, such as incorporating external interventions/shocks, problem-specific adaptation functions, introduction of new agents and resource constraints, and opens up opportunities to study how these settings affect the reliability of algorithmic recourse over time. The software implementation of our framework is available in a public GitHub repository. The major finding of our work is that recourse is only reliable under a very specific set of conditions, leading to an important insight for people who design recourse methods, and for people who receive recourse recommendations. It is our hope that this paper will lead to more robust and reliable algorithmic recourse methods. **Limitations and future work.** In this paper, we presented an agent-based framework that has undergone rigorous testing using _simulated data_. Our future plans involve evaluating the practical applicability of our insights by incorporating real-world data and deployment scenarios. A further limitation of our work is that our adaptation functions attempt to model human behavior, which is complex and not always rational. In future work, we plan to further validate these functions, and to develop methods that learn adaptation behavior from historical data. Further future work involves designing additional metrics, and exploring the impact of different feature distributions, and distributions of adaptation and effort, on recourse reliability, efficiency, and fairness. ## Acknowledgments This research was supported in part by NSF Awards No. 1916505 and 192265, by the NSF Graduate Research Fellowship under Award No. DGE-2234660, by research grants from the Portuguese Foundation for Science and Technology ("Fundação para a Ciência e a Tecnologia") references SFRH/BD/151473/2021 and UIDB/04152/2020, and by the New York University Center for Responsible AI.
2309.08944
Universal Metric Learning with Parameter-Efficient Transfer Learning
A common practice in metric learning is to train and test an embedding model for each dataset. This dataset-specific approach fails to simulate real-world scenarios that involve multiple heterogeneous distributions of data. In this regard, we introduce a novel metric learning paradigm, called Universal Metric Learning (UML), which learns a unified distance metric capable of capturing relations across multiple data distributions. UML presents new challenges, such as imbalanced data distribution and bias towards dominant distributions. To address these challenges, we propose Parameter-efficient Universal Metric leArning (PUMA), which consists of a pre-trained frozen model and two additional modules, stochastic adapter and prompt pool. These modules enable to capture dataset-specific knowledge while avoiding bias towards dominant distributions. Additionally, we compile a new universal metric learning benchmark with a total of 8 different datasets. PUMA outperformed the state-of-the-art dataset-specific models while using about 69 times fewer trainable parameters.
Sungyeon Kim, Donghyun Kim, Suha Kwak
2023-09-16T10:34:01Z
http://arxiv.org/abs/2309.08944v1
# Universal Metric Learning with Parameter-Efficient Transfer Learning ###### Abstract A common practice in metric learning is to train and test an embedding model for each dataset. This dataset-specific approach fails to simulate real-world scenarios that involve multiple heterogeneous distributions of data. In this regard, we introduce a novel metric learning paradigm, called Universal Metric Learning (UML), which learns a unified distance metric capable of capturing relations across multiple data distributions. UML presents new challenges, such as imbalanced data distribution and bias towards dominant distributions. To address these challenges, we propose Parameter-efficient Universal Metric leArning (PUMA), which consists of a pre-trained frozen model and two additional modules, a stochastic adapter and a prompt pool. These modules enable the model to capture dataset-specific knowledge while avoiding bias towards dominant distributions. Additionally, we compile a new universal metric learning benchmark with a total of 8 different datasets. PUMA outperformed the state-of-the-art dataset-specific models while using about 69 times fewer trainable parameters. ## 1 Introduction Learning semantic distance metrics has been playing a key role in computer vision applications including content-based image retrieval (Kim et al., 2019; Movshovitz-Attias et al., 2017; Sohn, 2016; Song et al., 2016), face verification (Liu et al., 2017; Schroff et al., 2015), person re-ID (Chen et al., 2017; Xiao et al., 2017), few-shot learning (Qiao et al., 2019; Snell et al., 2017; Sung et al., 2018), and representation learning (Kim et al., 2019; Wang and Gupta, 2015; Zagoruyko and Komodakis, 2015). Deep metric learning stands out as the prominent method for learning semantic distance metrics. It aims to learn highly nonlinear distance metrics through deep neural networks that approximate the actual underlying semantic similarity between samples. While metric learning methods have achieved remarkable progress, they focus on learning metrics unique to a specific dataset under the assumption that both training and test datasets share a common distribution. However, real-world applications often violate this assumption and involve multiple heterogeneous data distributions. For instance, users of a retrieval system may query data of substantially different semantics that form multiple diverse distributions. To tackle this issue using conventional methods, it is imperative to train multiple models as shown in Fig. 1 (a) and subsequently combine them through ensemble techniques or toggle between the models based on the query. Such procedures are not only arduous but also demand a significant amount of computational resources. In this paper, we introduce a new metric learning paradigm, called **Universal Metric Learning (UML)**. UML aims to learn a unified distance metric capable of capturing semantic similarity across multiple data distributions. Instead of multiple models for each dataset, UML trains a single model on a dataset that unifies multiple datasets to create a universal embedding space. UML opens a fascinating new direction towards metric learning in the wild, but at the same time comes with technical challenges not found in conventional metric learning. First, integrating multiple datasets results in a highly imbalanced data distribution. Data imbalance is a natural phenomenon and is a well-known challenge in many recognition tasks, as it can induce bias and hinder performance.
In addition to common imbalance issues such as class imbalance tackled by recent work (Liu et al., 2019; Zhong et al., 2021), UML introduces a more complex and unique challenge caused by _dataset imbalance_ when datasets to be integrated are of substantially different sizes. Our study reveals that a naive fine-tuning on multiple datasets as a whole, depicted in Fig. 1 (b), results in models strongly biased towards large datasets. Second, key features for discriminating between classes may vary across datasets. For example, color is useful for differentiating between bird species but harmful for distinguishing vehicle types. Thus, models should learn to recognize both dataset-specific discriminative features and common discriminative features to achieve UML. To address these challenges, we propose a novel approach called **Parameter-efficient Universal Metric leArning (PUMA)**, which is a completely different direction from existing metric learning. PUMA aims to capture universal semantic similarity through a single embedding model while mitigating the imbalance issues. To achieve this, we draw inspiration from recent advances in parameter-efficient transfer learning in natural language processing (Houlsby et al., 2019; Pfeiffer et al., 2020; He et al., 2021; Li and Liang, 2021). Our key idea is to freeze the parameters of a model pre-trained on a large-scale dataset, thereby preserving its generalization capability, and learn dataset-specific knowledge from the unified dataset with a minimal number of additional parameters. Specifically, PUMA is built on Vision Transformer (ViT) and incorporates two additional modules, namely a _stochastic adapter_ and a _prompt pool_ (see Fig. 1(c)). The stochastic adapter is a lightweight module that operates in parallel with the transformer block, and its operation is stochastically switched off during training. It enables the pre-trained model to adapt, while avoiding being biased towards a specific data distribution by randomly providing either adapted features or features from the pre-trained model. It is parameter-efficient yet effective, and improves performance across all data without bias when scaling up its capacity. Meanwhile, the prompt pool is used to build a conditional prompt that accounts for distinct characteristics of each dataset on the fly. To be specific, the prompt pool is a set of prompts organized in a key-value memory, and given the input feature, the conditional prompt is generated by aggregating relevant prompts in the pool using an attention mechanism. The conditional prompt is added to the input sequence of the transformer, allowing for more targeted adaptation. We compile a new universal metric learning benchmark with a total of 8 datasets from different tasks. Our method largely outperforms models trained on multiple datasets using conventional metric learning techniques, and also outperforms most of the models trained on each dataset (_i.e._, dataset-specific models) with only 1.5% trainable parameters and 13.9% of the total number of parameters compared to theirs, as shown in Fig. 2. In addition, we demonstrate that our method can also be utilized as a strong few-shot learner. Figure 1: **Comparison between conventional and universal metric learning methods. (a) Conventional metric learning employs separate models for individual datasets, incurring significant computational and memory costs as data diversity grows. (b) A naive solution is to fine-tune the model on a merged dataset, but this often leads to a severe bias towards major data distributions. (c) In contrast, our method excels on all datasets with just one model. This is highly resource-efficient as it enables one-time learning and evaluation of data with diverse distributions using a single model.**
Figure 2: Our single model trained on the 8 datasets outperforms all existing models devoted to each dataset while using about 69 times fewer trainable parameters. ## 2 Related Work **Deep Metric Learning.** It aims to learn a metric function that approximates the underlying semantic similarity of data by pulling semantically similar samples (positive) closer to the anchor and pushing dissimilar samples (negative) away. To achieve this goal, the development of loss functions has been the main focus of this field, typically classified into pair-based and proxy-based losses. Pair-based losses consider relations between pairs (Wu et al., 2017; Bromley et al., 1994; Chopra et al., 2005; Hadsell et al., 2006), triplets (Wang et al., 2014; Schroff et al., 2015) or higher-order tuples of samples (Song et al., 2016; Sohn, 2016; Wang et al., 2019; Ba et al., 2017). They can capture the fine-grained relations among samples, but they suffer from increased training complexity as the number of training samples increases. Proxy-based losses address the complexity issue by introducing learnable parameters called proxies to represent the training data of the same class. They greatly reduce the complexity of examining the relations between all data by considering those between data and proxies. In this direction, the approaches have used proxies to approximate the pair-based loss (Movshovitz-Attias et al., 2017; Kim et al., 2020; Qian et al., 2019) or have modified the cross-entropy loss (Deng et al., 2019; Wang et al., 2018; Zhai and Wu, 2018; Teh et al., 2020). Although there have been remarkable advances in metric learning so far, all existing metric learning methods deal only with a specific distribution within one dataset. Orthogonal to them, we first shed light on the new problem of learning a _universal metric_ contained in multiple distributions and explore ways to address it. While prior work mostly focuses on the design of loss functions, we also explore the impact of architectural choices in metric learning. **Parameter-efficient Transfer Learning.** Large-scale pre-trained models have shown significant improvements across various downstream tasks. As the model size and the number of tasks grow, parameter-efficient transfer learning approaches (_e.g._, (Hu et al., 2021; Rebuffi et al., 2017; Houlsby et al., 2019; Pfeiffer et al., 2020; He et al., 2021)) have been developed to adapt to diverse downstream tasks by updating only a small fraction/number of learnable parameters while fully utilizing the knowledge of the pre-trained model without catastrophic forgetting. Especially in the field of NLP, low-rank adaptation (Hu et al., 2021) has been proposed to approximate the parameter update, and light-weight adapter modules (Houlsby et al., 2019; Pfeiffer et al., 2020) can be inserted between pre-trained layers during fine-tuning. Prefix/prompt tuning (Lester et al., 2021; Li and Liang, 2021; Wang et al., 2022; Smith et al., 2022) has been introduced where additional learnable tokens (soft prompts) are added during fine-tuning while keeping the backbone frozen.
In contrast to prior deep metric learning work, we utilize parameter-efficient transfer learning for our proposed universal metric learning setup to learn universal representations across different data distributions in a single model while preventing bias and catastrophic forgetting, which outperforms even full fine-tuning baselines. ## 3 Universal Metric Learning In this section, we first review conventional metric learning. We then introduce the Universal Metric Learning (UML) setting and discuss its technical challenges. ### Revisiting Conventional Metric Learning Metric learning is the task of learning a distance function that captures the semantic similarity between samples in a given dataset \(S\). The goal of metric learning is to learn a distance function \(d\) that satisfies the constraint \[d(x,x^{+};\theta)<d(x,x^{-};\theta),\quad\forall(x,x^{+},x^{-}) \tag{1}\] where \(x^{+}\) and \(x^{-}\) denote a positive sample that belongs to the same class as \(x\) and a negative sample that does not, respectively, and \(\theta\) represents the model parameters. Deep metric learning achieves this by learning a high-dimensional embedding function \(f(\cdot,\theta)\) for individual data, and employing Euclidean or cosine distance to calculate the distance between embedding vectors. Note that metric learning seeks generalization to classes unseen in training. The conventional setup thus employs a set of classes \(C_{t}\) and their labeled samples \(S_{t}=\{(x_{t},y_{t})\mid y_{t}\in C_{t}\}\) for training, and evaluates a trained embedding model on a set of unseen classes \(S_{u}=\{(x_{u},y_{u})\mid y_{u}\in C_{u}\}\) where \(C_{t}\;\cap\;C_{u}=\varnothing\) and \(S_{t}\;\cup\;S_{u}=S\). This convention only considers generalization within a single dataset, in which both training and test data are sampled from the same distribution. ### Problem Formulation of UML Universal Metric Learning (UML) is an extension of the conventional one and tackles the challenging and practical problem of dealing with **multiple datasets with different data distributions**, using a single embedding model. The goal of UML is to learn a distance metric that can effectively capture semantic similarity between samples across multiple datasets while maintaining the intra-class compactness and inter-class separability within each dataset. In UML, a model is trained _as if it were given a single dataset, without knowing that multiple datasets are combined_, making it highly suitable for both a large dataset with a multimodal distribution and multiple small datasets in real-world applications. In the UML setting, we assume \(N_{s}\) datasets, denoted as \(S^{1},S^{2},\cdots,S^{N_{s}}\), and define the unified dataset such that \(\mathbb{S}=\bigcup_{i=1}^{N_{s}}S^{i}\). To learn a universal embedding function, UML leverages a unified training dataset \(\mathbb{S}_{t}=\bigcup_{i=1}^{N_{s}}S^{i}_{t}\), which aggregates training data from all datasets. We evaluate the learned universal distance function in two different ways to assess its generalization capability. First, we evaluate it on the unified unseen data \(\mathbb{S}_{u}=\bigcup_{i=1}^{N_{s}}S^{i}_{u}\), denoted by universal accuracy, to show whether it captures semantic similarity across multiple data distributions. Second, we evaluate the distance function on the unseen data of each dataset \(S^{i}_{u}\) separately, enabling us to assess its ability to grasp the specific semantic similarity for each dataset.
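A minimal sketch of this data setup is given below; the dataset variables are illustrative and each dataset is treated simply as a list of (image, label) pairs, which is an assumption rather than the benchmark's actual data format.

```python
def unify(datasets):
    """Merge datasets into one labeled pool with disjoint class ids,
    so the model sees a single dataset with no dataset identifiers."""
    unified, offset = [], 0
    for data in datasets:
        classes = {y for _, y in data}
        unified += [(x, y + offset) for x, y in data]
        offset += len(classes)
    return unified

# Unified training pool \mathbb{S}_t built from the seen classes of each dataset.
train_pool = unify([cub_train, cars_train, sop_train])
# Evaluation: (i) unified unseen classes for universal accuracy,
# (ii) each S^i_u separately for dataset-specific accuracy.
universal_eval = unify([cub_test, cars_test, sop_test])
per_dataset_eval = [cub_test, cars_test, sop_test]
```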
### Technical Challenges UML encounters a new challenge - _highly imbalanced distribution caused by the integration of multiple datasets_. Data imbalance is a common and well-known issue in a large variety of vision tasks. However, UML presents a more complex and unique challenge, where the entire data distribution becomes long-tailed due to class imbalance, and also has dataset imbalance caused by integrating datasets of substantially different sizes, as illustrated in Fig. 5. This issue is especially critical in metric learning since a large dataset can cause the negative samples to be dominated by samples from that dataset, thus hindering the learning of other datasets. In contrast to standard imbalance problems, having many samples for a class does not necessarily lead to high accuracy; rather, models can be biased towards datasets with a small number of samples per class, such as SOP. Therefore, we need a method that effectively captures knowledge from all datasets without introducing biases. Another challenge for UML is that _the class-discriminative features are not shared across all datasets_. This challenge arises due to the disparity between different data distributions since each distribution has its own characteristics that define relations between its samples, which could conflict with those of the other distributions. For instance, while color may be a crucial characteristic for differentiating between bird species, it may impede distinguishing between different vehicle types. Thus, training with a unified dataset may lead to two potential problems. First, if the model focuses on class-discriminative features that are specific to a certain data distribution, it may have a negative impact on datasets where those features are not relevant. Second, if the model attends to the commonalities shared among all datasets, its discriminability for fine-grained differences between samples may diminish. Moreover, UML still has a challenge in _generalization to unseen classes_, inherited from conventional metric learning. However, this challenge becomes even more difficult as UML deals with diverse imbalanced distributions. ## 4 Proposed Method To tackle the UML problem, we propose a novel approach, named Parameter-efficient Universal Metric leArning (PUMA). In contrast to conventional metric learning methods that fine-tune the entire model parameters, PUMA does not tune a pre-trained model but keeps its generalization capability across diverse data distributions. Instead, we leverage small additional modules that can learn dataset-specific knowledge from the unified dataset. As shown in Fig. 3, PUMA uses pre-trained vision transformers (ViT) as a backbone, and employs a stochastic adapter and a prompt pool for conditional prompt learning as the additional modules, which are detailed in this section. ### Preliminaries: ViT ViT is composed of a patch embedding layer and an encoder with \(L\) sequential transformer layers. The patch embedding layer splits the input image \(x\) into image patch embeddings \(E\in\mathbb{R}^{N_{e}\times D}\), where \(N_{e}\) denotes the number of patch embeddings and \(D\) is the embedding dimension. The input sequence of the transformer encoder is formed by appending the image patch embeddings to a learnable class token embedding \(e_{\text{cls}}\in\mathbb{R}^{D}\), as follows: \[z_{0}=[e_{\text{cls}},E].
\tag{2}\] Each transformer layer consists of multi-headed self-attention (MSA) and multilayer perceptron (MLP) blocks, with layer normalization (LN) applied before every block and residual connections after every block: \[z_{\ell}^{\prime}=\text{MSA}(\text{LN}(z_{\ell-1}))+z_{\ell-1},\quad\ell=1\dots L, \tag{3}\] \[z_{\ell}=\text{MLP}(\text{LN}(z_{\ell}^{\prime}))+z_{\ell}^{\prime},\quad\ell=1\dots L.\] ### Stochastic Adapter To effectively adapt the model to the unified dataset without being biased to the large dataset, we propose a stochastic adapter. While adding learnable parameters shared by all data enables adaptation, this could cause the additional parameters to be biased towards the major distribution due to the imbalanced distribution issue in UML. Our core idea is a stochastic adaptation, which allows the embedding space to consider both the generalizable features of a pre-trained model and the adapted features, rather than solely relying on the adapted feature. This allows the embedding space to be unbiased to the major data distribution, while providing the capacity to learn knowledge specific to each dataset. Our adapter has a bottleneck structure for parameter-efficiency and is connected in parallel with all transformer blocks. The adapter consists of a down-projection layer \(W_{\text{down}}\in\mathbb{R}^{D\times r}\), a ReLU activation layer, and an up-projection layer \(W_{\text{up}}\in\mathbb{R}^{r\times D}\), where \(r<D\) is the bottleneck dimension. As shown in Fig. 3, within a transformer layer, two adapters are placed in parallel, one with the MSA block and the other with the MLP block. Given the input of the \(\ell\)-th transformer layer and the output of the \(\ell\)-th MSA block, the outputs of the adapters are produced as follows: \[\tilde{z}_{\ell}^{\prime}=\text{ReLU}(\text{LN}(z_{\ell-1})\cdot W_{\text{down}}^{\prime})\cdot W_{\text{up}}^{\prime} \tag{4}\] \[\tilde{z}_{\ell}=\text{ReLU}(\text{LN}(z_{\ell}^{\prime})\cdot W_{\text{down}})\cdot W_{\text{up}},\] where \(W_{\text{down}}^{\prime}\) and \(W_{\text{up}}^{\prime}\) have the same shapes as \(W_{\text{down}}\) and \(W_{\text{up}}\), respectively. The output features of the adapter are multiplied by a random binary mask and fused with the output of the transformer block through a residual connection: \[z_{\ell}^{\prime}=\text{MSA}(\text{LN}(z_{\ell-1}))+z_{\ell-1}+\gamma_{\ell}^{\prime}\cdot\tilde{z}_{\ell}^{\prime}, \tag{5}\] \[z_{\ell}=\text{MLP}(\text{LN}(z_{\ell}^{\prime}))+z_{\ell}^{\prime}+\gamma_{\ell}\cdot\tilde{z}_{\ell},\] where \(\gamma^{\prime}_{\ell}\) and \(\gamma_{\ell}\) are independent variables drawn from \(\text{Bernoulli}(p)\), and \(p\) is the keep probability of the stochastic adapter. Figure 3: **An overview of Parameter-efficient Universal Metric leArning (PUMA).** PUMA consists of two learnable modules: a stochastic adapter (Sec. 4.2) and a prompt pool (Sec. 4.3). The input image uses the output of the transformer’s embedding layer as a query, and creates a conditional prompt by integrating relevant prompts through an attention mechanism. It is combined with image embeddings and the class token and fed into the transformer. The modified input is embedded through a transformer coupled with a stochastic adapter, a learnable bottleneck module that turns on stochastically during training.
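A minimal PyTorch sketch of Eqs. (4)-(5) is given below. It is an illustration rather than the authors' released implementation: the adapter here owns its LayerNorm for self-containedness, and the behavior of the Bernoulli gate at inference time is not specified in the text, so taking the expectation is used as one possible choice.

```python
import torch
import torch.nn as nn

class StochasticAdapter(nn.Module):
    """Bottleneck adapter: LN -> down-projection (D -> r) -> ReLU ->
    up-projection (r -> D), gated by a Bernoulli(p) draw during training."""
    def __init__(self, dim=384, r=128, keep_prob=0.5):
        super().__init__()
        self.norm = nn.LayerNorm(dim)        # assumption: adapter-local LN
        self.down = nn.Linear(dim, r)
        self.up = nn.Linear(r, dim)
        self.keep_prob = keep_prob

    def forward(self, z):
        delta = self.up(torch.relu(self.down(self.norm(z))))
        if self.training:                    # gamma ~ Bernoulli(p), as in Eq. (5)
            gamma = torch.bernoulli(torch.tensor(self.keep_prob, device=z.device))
            return gamma * delta
        return self.keep_prob * delta        # assumption: expected output at test time

# Usage inside one transformer layer (sketch of Eq. (5)):
#   z_p = msa(ln1(z)) + z + adapter_msa(z)           # adapter in parallel with MSA
#   z   = mlp(ln2(z_p)) + z_p + adapter_mlp(z_p)     # adapter in parallel with MLP
```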
### Conditional Prompt Learning In addition, we propose conditional prompt learning to learn discriminative features for each dataset. We assume that images within each dataset exhibit shared characteristics compared to images from other datasets. Our goal is to learn and leverage prompts relevant to the input data among the set of prompts through the attention mechanism. Note that the prompt in ViT denotes the learnable input token parameters as in (Jia et al., 2022; Wang et al., 2022; Smith et al., 2022). To achieve this, we first need a query feature that encodes the input image \(x\). Query features should be able to grasp the data distribution of the input image, and should require little computation to obtain. Considering these constraints, we design a simple query feature for \(x\) by using a pooling operation on its image patch embeddings \(E\) in Sec. 4.1: \[q=\text{AvgPool}(E)+\text{MaxPool}(E),\quad q\in\mathbb{R}^{D}. \tag{6}\] Then, we introduce a _prompt pool_, a storage that contains prompts together with extra parameters for input-conditioning. Specifically, we denote a single prompt by \(P_{m}\in\mathbb{R}^{N_{p}\times D}\), where \(N_{p}\) is the number of learnable embeddings, and a prompt pool with \(M\) prompts is given as: \[\mathbf{P}=\{(P_{1},K_{1},A_{1}),\cdots,(P_{M},K_{M},A_{M})\}, \tag{7}\] where \(K_{m}\in\mathbb{R}^{D}\) denotes the key of a prompt, and \(A_{m}\in\mathbb{R}^{D}\) is its feature attention vector, a learnable parameter that helps concentrate on specific feature dimensions for the corresponding prompt. We produce the attended query and define the cosine similarity between it and the prompt key as a weight vector, which is given by \[\alpha_{m}=s(q\otimes A_{m},K_{m}), \tag{8}\] where \(s(\cdot,\cdot)\) denotes the cosine similarity between two vectors and \(\otimes\) denotes the Hadamard product operation over the feature dimension. The conditional prompt of input image \(x\) is calculated as a weighted sum of prompts: \[\hat{P}=\sum_{m=1}^{M}\alpha_{m}P_{m}, \tag{9}\] Finally, it is inserted into the input sequence of the transformer encoder: \[z_{0}=[e_{\text{cls}},\hat{P},E]. \tag{10}\] This process allows each prompt to condition images based on their specific data distributions, as depicted in Fig. 4. Notably, while the CUB dataset demonstrates a strong tendency to align with the relevant NABird dataset, it distinctly prefers different prompts compared to the In-shop dataset. Figure 4: The average similarity between input queries and prompts for each dataset. The \(x\)-axis represents the prompt index.
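The pool and the input-conditioned prompt of Eqs. (6)-(10) can be sketched in PyTorch as follows; the default sizes follow the settings reported in the experiments, and the initialization scheme is an illustrative assumption rather than the released code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PromptPool(nn.Module):
    """Pool of M prompts with keys K_m and feature-attention vectors A_m;
    the input-conditioned prompt is a cosine-similarity-weighted sum of the
    pool (Eqs. (7)-(9)), to be prepended to the transformer input (Eq. (10))."""
    def __init__(self, dim=384, n_tokens=8, pool_size=20):
        super().__init__()
        self.prompts = nn.Parameter(torch.randn(pool_size, n_tokens, dim) * 0.02)
        self.keys = nn.Parameter(torch.randn(pool_size, dim) * 0.02)
        self.attn = nn.Parameter(torch.ones(pool_size, dim))

    def forward(self, patch_emb):                                  # (B, N_e, D)
        q = patch_emb.mean(dim=1) + patch_emb.max(dim=1).values    # Eq. (6)
        attended = q.unsqueeze(1) * self.attn.unsqueeze(0)         # q ⊗ A_m
        alpha = F.cosine_similarity(attended, self.keys.unsqueeze(0), dim=-1)  # Eq. (8)
        return torch.einsum("bm,mnd->bnd", alpha, self.prompts)    # Eq. (9)

# Usage sketch for Eq. (10):
#   z0 = torch.cat([cls_token, prompt_pool(E), E], dim=1)
```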
## 5 Experiments ### Experimental Setup **Datasets.** In the UML setting, we employ a combination of eight datasets. These comprise four widely recognized benchmarks: CUB (Welinder et al., 2010), Cars-196 (Krause et al., 2013), Stanford Online Products (SOP) (Song et al., 2016), and In-shop Clothes Retrieval (In-Shop) (Liu et al., 2016). Alongside these, we incorporate four other fine-grained datasets: NABirds (Van Horn et al., 2015), Dogs (Khosla et al., 2011), Flowers (Nilsback & Zisserman, 2008), and Aircraft (Maji et al., 2013). The overall dataset statistics are in Table 1. The combined dataset encompasses 141,404 training images and 148,595 testing images. Notably, this dataset exhibits imbalanced data distributions, with a significant portion of images from large-scale datasets such as SOP and In-Shop, as shown in Fig. 5. We also provide results on the conventional benchmarks in Appendix D.1. \begin{table} \begin{tabular}{l c c c c c c c c c} \hline \hline & CUB & Cars & SOP & In-Shop & NABirds & Dogs & Flowers & Aircraft & Total \\ \hline Train Samples & 5.8K & 8.0K & 59.5K & 25.8K & 22.9K & 10.6K & 3.5K & 5K & 141.4K \\ Train Classes & 100 & 98 & 11.3K & 3.9K & 278 & 60 & 51 & 50 & 15.9K \\ \hline Test Samples & 5.9K & 8.1K & 60.5K & 28.7K & 25.6K & 6.9K & 4.7K & 8K & 148.5K \\ Test Classes & 100 & 98 & 11.3K & 3.9K & 277 & 60 & 51 & 50 & 15.5K \\ \hline \hline \end{tabular} \end{table} Table 1: Dataset Statistics. The numbers represent the number of images and classes used for training and testing. Figure 5: The number of samples in each category. Each color represents its dataset. **Baselines.** We benchmark our method against three distinct learning strategies. The models trained exclusively on individual datasets are termed **dataset-specific models**, while models trained on multiple datasets are termed **universal models**. * **Dataset-specific models by full fine-tuning**: These models employ conventional metric learning protocols, where every parameter in the backbone and the embedding layer is fully updated. For this approach, we utilize a range of renowned metric learning loss functions: Triplet (Schroff et al., 2015), Margin (Wu et al., 2017), MS (Wang et al., 2019), Proxy-Anchor (PA) (Kim et al., 2020), SoftTriple (Qian et al., 2019), CosFace (Wang et al., 2018), ArcFace (Deng et al., 2019), CurricularFace (Huang et al., 2020), and Hyp (Ermolov et al., 2022). Notably, each model is trained specifically for individual datasets. * **Universal models by full fine-tuning**: The models are fully fine-tuned using the aforementioned loss functions and utilize data from multiple datasets. * **Universal models by parameter-efficient fine-tuning**: The models update a subset of backbone parameters or add new trainable parameters to the backbone during the fine-tuning. We explore two techniques focusing on the embedding layer: training solely the linear embedding layer (Linear Emb.) and the embedding layer enriched with a 3-layered multilayer perceptron (MLP-3 Emb.). Further, we consider three prominent parameter-efficient tuning strategies: VPT (Jia et al., 2022), LoRA (Hu et al., 2021), and AdaptFormer (Chen et al., 2022). LoRA and AdaptFormer are scaled to the same number of parameters as our method. **Implementation Details.** For fair comparisons, all models are evaluated using the same backbone, ViT-S/16 (Dosovitskiy et al., 2021) pre-trained on ImageNet-21K (Deng et al., 2009) as done in Hyp (Ermolov et al., 2022). We change the size of its last linear layer to \(128\), and \(L_{2}\)-normalize the output embedding vector. We set the parameters for the stochastic adapter to \(r=128\) and \(p=0.5\), and for the conditional prompt to \(N_{p}=8\) and \(M=20\). Unless otherwise specified, we adopt the CurricularFace loss (Huang et al., 2020) as a metric learning loss for parameter-efficient fine-tuning methods, including our method. We also ablate different loss functions on our method in Appendix B.1. More implementation details can be found in Appendix A. **Evaluation protocol.** We measure the performance using Recall@1 in the main paper, with additional results for R@_k_, MAP@R, and RP in Appendix D.2. We report the **dataset-specific accuracy** using the individual query and gallery sets of each dataset, and also calculate two kinds of **universal accuracy**: the unified accuracy using unified query and gallery sets, and the harmonic mean of the individual accuracies. To evaluate the unified performance of dataset-specific models, we use an ensemble approach, averaging the embedding vectors of all dataset-specific models.
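For clarity, the sketch below shows how the reported numbers could be computed from \(L_2\)-normalized embeddings; the helper names are illustrative, and when the query and gallery sets coincide (as in some benchmarks) the self-match must additionally be excluded.

```python
import numpy as np

def recall_at_1(emb, labels, query_idx, gallery_idx):
    """Recall@1: fraction of queries whose nearest gallery neighbor
    (cosine similarity on L2-normalized embeddings) shares their label.
    Assumes query and gallery are disjoint index sets."""
    q, g = emb[query_idx], emb[gallery_idx]
    nn_label = labels[gallery_idx][(q @ g.T).argmax(axis=1)]
    return float((nn_label == labels[query_idx]).mean())

def harmonic_mean(per_dataset_r1):
    """Harmonic mean over per-dataset Recall@1 (one of the two universal metrics);
    the other universal metric is Recall@1 on the unified query/gallery sets."""
    r = np.asarray(per_dataset_r1, dtype=float)
    return len(r) / (1.0 / r).sum()
```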
### Results Table 2 shows Recall@1 performance with a total of eight datasets. We note that the total number of parameters of dataset-specific models increases as the number of datasets increases. **(1)** Our results show that PUMA surpasses all compared dataset-specific models (Table 2(a)) in terms of universal accuracy. Moreover, our method outperforms dataset-specific models in all cases except for In-Shop and Dog, while not using hyperparameters selected for each respective dataset. Surprisingly, our method accomplishes this level of performance while _using up to 69 times fewer trainable parameters_ than previous techniques. This indicates that PUMA can be trained with limited resources and can easily be scaled up to a larger model and more datasets. Furthermore, even without emphasizing parameter efficiency, the outcomes highlight that our method can be a promising alternative to existing dataset-specific metric learning approaches. **(2)** While universal models by full fine-tuning (Table 2(b)) suffer significant performance degradation on small datasets, PUMA consistently achieves high performance across all datasets. Consequently, our results show that PUMA surpasses all compared methods both in terms of dataset-specific accuracy and universal accuracy. It enhances the best performance of universal models in the unified accuracy and harmonic mean accuracy by 3.4% and 4.6%, respectively. **(3)** Among various parameter-efficient fine-tuning methods (Table 2(c)), only our approach manages to outperform the majority of fully fine-tuned models. Models like Linear Embedding and VPT, which employ fewer learnable parameters, notably underperform on datasets such as Cars, SOP, In-Shop, and Aircraft, where substantial domain-specific knowledge is required. Both AdaptFormer and LoRA, which use a similar number of parameters as our method, similarly show biases towards large datasets akin to full fine-tuning. **Discussion on Losses in UML.** We observe that loss functions in UML exhibit different behaviors compared to conventional metric learning. This is due to the significant changes in the intra-class similarity and inter-class separability across datasets, making the effect of loss design and its hyperparameters critical. The choice of margin in a loss greatly affects performance, with large margins (_e.g._, ArcFace) leading to significant drops in performance. In contrast, sophisticated losses such as SoftTriple, which uses multiple proxies, and CurricularFace, which dynamically considers hard negatives, demonstrate superior performance. In addition, pair-based losses that require negative mining do not perform well in the UML setting due to the emergence of a large number of easy negatives. See Appendix C.1 for further discussion.
\begin{table} \begin{tabular}{l c c c c c c c c c c} \hline \hline & \multicolumn{2}{c}{**Params (M)**} & \multicolumn{5}{c}{**Dataset-Specific Accuracy**} & \multicolumn{3}{c}{**Universal Accuracy**} \\ \cline{2-11} Methods & Train / Total & CUB & Cars & SOP & InShop & NABird & Dog & Flowers & Aircraft & Unified & Harmonic \\ \hline \multicolumn{11}{l}{(a) _Dataset-specific models by full fine-tuning_} \\ \hline Triplet & 173.7 / 173.7 & 81.1 & 75.2 & 80.2 & 87.4 & 75.2 & 81.0 & 99.1 & 64.7 & 57.6 & 79.4 \\ Margin & 173.7 / 173.7 & 79.4 & 78.0 & 79.8 & 86.0 & 74.6 & 80.3 & 99.0 & 66.8 & 58.1 & 79.6 \\ MS & 173.7 / 173.7 & 80.0 & 83.7 & 81.4 & 90.8 & 68.1 & 75.8 & 97.4 & 64.7 & 61.6 & 78.9 \\ PA & 173.7 / 173.7 & 80.2 & 83.7 & 84.4 & 91.5 & 69.6 & 84.2 & 99.0 & 67.9 & 57.4 & 81.4 \\ SoftTriple & 173.7 / 173.7 & 80.5 & 80.0 & 82.9 & 88.7 & 75.9 & 82.1 & 99.4 & 65.4 & 63.4 & 80.8 \\ CosFace & 173.7 / 173.7 & 78.8 & 83.2 & 83.2 & 89.6 & 71.4 & 79.2 & 99.2 & 61.4 & 61.5 & 79.3 \\ ArcFace & 173.7 / 173.7 & 76.8 & 79.4 & 83.4 & 90.3 & 61.0 & 76.1 & 99.2 & 60.0 & 58.8 & 76.2 \\ CurricularFace & 173.7 / 173.7 & 79.7 & 81.3 & 83.2 & 88.2 & 75.3 & 81.2 & 99.1 & 63.9 & 62.9 & 80.4 \\ Hyp & 173.7 / 173.7 & 77.8 & 78.2 & 83.6 & 91.5 & 71.0 & 72.6 & 98.7 & 65.7 & 15.7 & 78.8 \\ \hline \multicolumn{11}{l}{(b) _Universal models by full fine-tuning_} \\ \hline Triplet & 21.7 / 21.7 & 74.5 & 35.4 & 80.2 & 85.7 & 68.2 & 77.1 & 98.7 & 40.9 & 72.0 & 57.7 \\ Margin & 21.7 / 21.7 & 72.5 & 36.7 & 80.0 & 84.1 & 67.4 & 74.8 & 98.5 & 40.4 & 71.6 & 57.4 \\ MS & 21.7 / 21.7 & 66.3 & 22.9 & 78.9 & 87.2 & 58.6 & 69.8 & 97.3 & 31.5 & 67.8 & 47.3 \\ PA & 21.7 / 21.7 & 77.2 & 73.1 & 83.7 & **91.9** & 71.5 & 78.1 & 96.4 & 62.7 & 77.9 & 71.0 \\ SoftTriple & 21.7 / 21.7 & 78.9 & 77.0 & 81.3 & 88.6 & 73.8 & 79.3 & 99.1 & 64.4 & 77.6 & 72.7 \\ CosFace & 21.7 / 21.7 & 74.2 & 73.5 & 82.5 & 90.0 & 69.7 & 74.1 & 98.7 & 59.7 & 76.6 & 69.6 \\ ArcFace & 21.7 / 21.7 & 70.8 & 25.9 & 63.9 & 85.9 & 64.0 & 70.3 & 97.2 & 31.7 & 59.2 & 47.2 \\ CurricularFace & 21.7 / 21.7 & 78.3 & 77.9 & 82.0 & 89.1 & 73.0 & 79.3 & 99.1 & 65.6 & 77.9 & 79.5 \\ Hyp & 21.7 / 21.7 & 79.2 & 60.6 & 83.5 & 90.9 & 73.6 & 81.9 & 99.1 & 56.3 & 77.7 & 69.4 \\ \hline \multicolumn{11}{l}{(c) _Universal models by parameter-efficient fine-tuning_} \\ \hline Linear Emb. & 0.1 / 21.8 & 82.1 & 49.7 & 70.5 & 65.5 & 77.9 & **86.2** & 99.1 & 47.6 & 69.3 & 68.2 \\ MLP-3 Emb. & 5.3 / 27.0 & 57.5 & 29.7 & 63.1 & 63.2 & 50.6 & 64.5 & 93.6 & 32.8 & 56.5 & 50.3 \\ VPT & 0.1 / 21.8 & 82.8 & 51.0 & 74.7 & 72.9 & 78.3 & 85.5 & 99.2 & 50.8 & 72.3 & 70.8 \\ LoRA & 2.4 / 24.1 & 77.0 & 70.9 & 81.3 & 86.2 & 70.8 & 79.1 & 98.9 & 59.7 & 76.1 & 76.5 \\ AdaptFormer & 2.4 / 24.1 & 77.0 & 77.0 & 83.7 & 90.7 & 72.3 & 78.5 & 99.0 & 63.9 & 78.5 & 79.0 \\ Ours & 2.5 / 24.2 & **83.9** & **84.3** & **84.0** & 89.8 & **79.2** & **84.1** & **99.3** & **72.6** & **81.3** & **84.1** \\ \hline \multicolumn{11}{l}{(c) _Universal models by parameter-efficient fine-tuning_} \\ \hline Linear Emb. & 0.1 / 21.8 & 82.1 & 49.7 & 70.5 & 65.5 & 77.9 & **86.2** & 99.1 & 47.6 & 69.3 & 68.2 \\ MLP-3 Emb. 
& 5.3 / 27.0 & 57.5 & 29.7 & 63.1 & 63.2 & 50.6 & 64.5 & 93.6 & 32.8 & 56.5 & 50.3 \\ VPT & 0.1 / 21.8 & 82.8 & 51.0 & 74.7 & 72.9 & 78.3 & 85.5 & 99.2 & 50.8 & 72.3 & 70.8 \\ LoRA & 2.4 / 24.1 & 77.0 & 70.9 & 81.3 & 86.2 & 70.8 & 79.1 & 98.9 & 59.7 & 76.1 & 76.5 \\ AdaptFormer & 2.4 / 24.1 & 77.0 & 77.0 & 83.7 & 90.7 & 72.3 & 78.5 & 99.0 & 63.9 & 78.5 & 79.0 \\ Ours & 2.5 / 24.2 & **83.9** & **84.3** & **84.0** & 89.8 & **79.2** & **84.1** & **99.3** & **72.6** & **81.3** & **84.1** \\ \hline \hline \end{tabular} \end{table} Table 2: Recall@1 of metric learning baselines and ours on the 8 datasets. Their network architecture is ViT-S/16 (Dosovitskiy et al., 2021) with 128 embedding dimensions. **Bold** denotes the best of the models within the entire table. Figure 6: Accuracy in Recall@1 of few-shot metric learning on the 8 datasets. Except for the pre-trained model ViT model, all others are trained with proxy-anchor loss. **Few-shot Metric Learning.** We additionally demonstrate the data-efficient adaptation capability of PUMA, by exploring few-shot learning in Fig. 6. Different from the prior few-shot learning approaches (_e.g._, (Chen et al., 2019; Jung et al., 2022)), we train models with few-shot labels on the training classes and evaluate them on unseen classes in a zero-shot manner. The results demonstrate that PUMA shows better performance than linear probing, which is considered a strong few-shot learning baseline (Tian et al., 2020). Moreover, even with an increasing number of shots, PUMA outperforms the fine-tuning model in terms of harmonic mean accuracy. ### Ablation Study **Ablation Study on Each Component.** Table 3 shows an extensive ablation study to analyze the effectiveness of each module in PUMA. We observe that employing a conditional prompt outperforms using a single prompt across multiple datasets. This highlights the adaptability inherent to our conditional prompt, allowing it to seamlessly accommodate varying data characteristics. Adapters, with more parameters compared to prompts, exhibit a pronounced impact on performance. In contrast to conventional static adapters, which introduce bias on larger datasets, our stochastic adapters consistently enhance results across all datasets. Notably, combining existing methods yields minimal performance gains, but the combination of the proposed modules boosts overall performance. Appendix B.2 shows more detailed ablation studies. **PUMA with Different Backbones and Dimension Scales.** Table 4 presents an evaluation of performance across different embedding dimensions and backbone architectures. Given a fixed model configuration, our method consistently outperforms models fine-tuned with the state-of-the-art universal method, CurricularFace. Remarkably, our method surpasses baselines employing larger embedding dimensions or more complex backbone models with over three times the number of parameters. ## 6 Conclusion Previous work on deep metric learning has primarily concentrated on developing loss functions for dataset-specific models. However, this approach creates scalability challenges as AI applications usually accommodate diverse data distributions in the real world. We investigate Universal Metric Learning, which enables a single model to manage multiple data distributions. In UML, existing metric learning baselines suffer from imbalanced data distributions. 
To address this issue, we propose to use parameter-efficient tuning that is simple and lightweight, which achieves state-of-the-art performance using only a single model. We hope our research will inspire future investigations into bridging the gap between metric learning and real-world application. \begin{table} \begin{tabular}{l c c c c c c c c c c c c} \hline \hline Methods & Arch. & TotalParam. & CUB & Cars & SOP & InShop & NABird & Dog & Flowers & Aircraft & Unif. & Harm. \\ \hline CurricularFace & ViT-S/16\({}^{128}\) & 21.7M & 78.3 & 77.9 & 82.0 & 89.1 & 73.0 & 79.3 & 99.1 & 65.6 & 77.9 & 79.5 \\ Ours & ViT-S/16\({}^{128}\) & 24.2M & **83.9** & **84.3** & **84.0** & **89.8** & **79.2** & **84.1** & **99.3** & **72.6** & **81.3** & **84.1** \\ \hline CurricularFace & ViT-S/16\({}^{12}\) & 21.9M & 80.5 & 80.8 & 83.2 & 89.6 & 75.5 & 80.8 & 99.2 & 68.7 & 79.6 & 81.4 \\ Ours & ViT-S/16\({}^{124}\) & 24.3M & **84.6** & **85.7** & **85.1** & **90.9** & **80.1** & **84.5** & **99.3** & **74.4** & **82.3** & **85.1** \\ \hline CurricularFace & ViT-B/16\({}^{128}\) & 85.9M & 79.2 & 80.4 & 84.1 & 91.3 & 75.6 & 80.3 & 99.1 & 69.2 & 80.0 & 81.5 \\ Ours & ViT-B/16\({}^{128}\) & 90.8M & **85.7** & **88.2** & **86.0** & **92.2** & **83.0** & **87.5** & **99.4** & **78.8** & **84.0** & **87.2** \\ \hline \hline \end{tabular} \end{table} Table 4: Recall@1 of our method compared to the best universal model (_i.e._, CurricularFace) using different backbones and embedding dimensions. Superscripts are embedding dimensions. \begin{table} \begin{tabular}{c c c c c c c c c c c c c c} \hline \hline Prompt & Adapter & Train & \multicolumn{8}{c}{Dataset-Specific Accuracy} & Universal Accuracy \\ \cline{3-13} Sing. & Cond. & Stat. & Stoc. & Param. & CUB & Cars & SOP & InShop & NABird & Dog & Flowers & Aircraft & Unified & Harmonic \\ \hline ✓ & ✗ & ✗ & ✗ & 0.05M & **82.8** & 51.0 & 74.7 & 72.9 & 78.3 & 85.5 & 99.2 & 50.8 & 72.3 & 70.8 \\ ✗ & ✓ & ✗ & ✗ & 0.13M & **82.8** & **54.7** & **76.8** & **76.6** & **78.7** & **85.8** & **99.3** & **53.6** & **74.0** & **73.0** \\ \hline ✗ & ✗ & ✗ & ✗ & 2.41M & 74.5 & 81.3 & 83.7 & **90.4** & 74.5 & 81.3 & 99.0 & 66.3 & 79.4 & 80.8 \\ ✗ & ✗ & ✗ & ✓ & 2.41M & **83.6** & **83.9** & **83.8** & 89.9 & **79.2** & **84.6** & **99.4** & **71.9** & **81.1** & **83.9** \\ \hline ✓ & ✗ & ✓ & ✗ & 2.41M & 79.6 & 80.0 & 83.8 & **90.3** & 73.7 & 81.0 & 99.0 & 65.4 & 79.3 & 80.5 \\ \hline ✗ & ✓ & ✗ & ✓ & 2.49M & **83.9** & **84.3** & **84.0** & 89.8 & **79.2** & **84.1** & **99.3** & **72.6** & **81.3** & **84.1** \\ \hline \hline \end{tabular} \end{table} Table 3: Ablation study on each component of PUMA. “Sing.” denotes single prompt (\(M=1\)), “Cond.” denotes our conditional prompt (\(M=20\)), “Stat.” denotes adapter with \(p=1\), and “Stoc.” denotes our conditional adapter with \(p=0.5\).
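To make the distinction between the static ("Stat.") and stochastic ("Stoc.") adapters in Table 3 concrete, one plausible reading is sketched below: a bottleneck adapter whose residual branch is taken with probability \(p\) during training, with \(p=1\) recovering the static adapter. This is a hedged sketch rather than the PUMA implementation; the projection sizes, the per-batch gating granularity and the test-time behaviour are assumptions.

```python
import torch
import torch.nn as nn

class StochasticAdapter(nn.Module):
    """Bottleneck adapter whose residual branch is only active with
    probability p during training (sketch; p=1 gives a static adapter)."""

    def __init__(self, dim: int, bottleneck: int = 64, p: float = 0.5):
        super().__init__()
        self.p = p
        self.down, self.up = nn.Linear(dim, bottleneck), nn.Linear(bottleneck, dim)
        self.act = nn.GELU()
        nn.init.zeros_(self.up.weight)  # the branch starts as an identity mapping
        nn.init.zeros_(self.up.bias)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        delta = self.up(self.act(self.down(x)))
        if self.training:
            gate = float(torch.rand(()) < self.p)  # skip the branch with prob. 1 - p
            return x + gate * delta
        return x + delta  # assumed: branch always on at evaluation time

# Toy usage on a batch of ViT tokens.
adapter = StochasticAdapter(dim=384, p=0.5)
print(adapter(torch.randn(2, 197, 384)).shape)  # torch.Size([2, 197, 384])
```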
2309.03805
Mapping of CNNs on multi-core RRAM-based CIM architectures
RRAM-based multi-core systems improve the energy efficiency and performance of CNNs. However, the distributed parallel execution of convolutional layers causes critical data dependencies that limit the potential speedup. This paper presents synchronization techniques for parallel inference of convolutional layers on RRAM-based CIM architectures. We propose an architecture optimization that enables efficient data exchange and discuss the impact of different architecture setups on performance. The corresponding compiler algorithms are optimized for high speedup and low memory consumption during CNN inference. We achieve more than 99% of the theoretical acceleration limit with a marginal data transmission overhead of less than 4% for state-of-the-art CNN benchmarks.
Rebecca Pelke, Nils Bosbach, Jose Cubero, Felix Staudigl, Rainer Leupers, Jan Moritz Joseph
2023-09-07T15:58:49Z
http://arxiv.org/abs/2309.03805v4
# Mapping of CNNs on multi-core RRAM-based CIM architectures ###### Abstract Resistive random access memory (RRAM)-based multi-core systems improve the energy efficiency and performance of convolutional neural networks (CNNs). Thereby, the distributed parallel execution of convolutional layers causes critical data dependencies that limit the potential speedup. This paper presents synchronization techniques for parallel inference of convolutional layers on RRAM-based computing-in-memory (CIM) architectures. We propose an architecture optimization that enables efficient data exchange and discuss the impact of different architecture setups on the performance. The corresponding compiler algorithms are optimized for high speedup and low memory consumption during CNN inference. We achieve more than 99 % of the theoretical acceleration limit with a marginal data transmission overhead of less than 4 % for state-of-the-art CNN benchmarks. CNN, RRAM, CIM, weight mapping + Footnote †: publicationid: pubid: 979-8-3503-2599-7/23/$31.00 ©2023 IEEE ## I Introduction In recent years, the broad use of convolutional neural networks (CNNs) in computer vision applications yielded an ever-growing demand for efficient architectures to handle these compute- and data-intensive workloads. Due to the massive parallelism and reuse capabilities in CNNs, these applications are not only executed on classical von-Neumann architectures but also on specialized hardware including graphics processing units (GPUs) and tensor processing units (TPUs). Today, CNN accelerators exist in various form factors, from power-efficient edge devices to hyper-scaled compute clusters. Despite the extensive innovation sparked by the ubiquitous use of CNNs, all these custom architectures suffer from one major performance limitation, namely, moving data from the system's main memory to the compute units, and vice versa [1]. In other words, CNN accelerators suffer from the von-Neumann bottleneck [2]. Novel computing-in-memory (CIM) technologies, such as resistive random access memory (RRAM), promise to tackle this bottleneck by unifying memory and computation unit [3]. These designs offer a significant advantage over CMOS-based designs in terms of memory capacity, device density, and power consumption [4]. Previous works presented accelerator architectures that use RRAM crossbars as matrix-vector multiplication (MVM) units [5, 6, 7, 8]. These architectures are designed hierarchically to scale from single MVM units to complex multi-core systems. To achieve maximum flexibility and scalability, the MVM units are often embedded in CIM cores, which can communicate with each other over a bus system [6, 7]. The accelerators aim at a weight stationary data flow, i.e., the weights of the CNN are statically assigned to RRAM crossbars [9]. This requires the development of new concepts in the compiler domain. The authors of [10, 11, 12, 13, 14, 15] investigated the translation of conv2D operations to MVMs. They focused on the mapping of kernel weights to RRAM crossbars. To achieve a high speedup, the kernel weights of one layer must be split across multiple CIM cores for parallel processing. This causes critical data dependencies between cores, which are often neglected. Synchronization techniques are needed to resolve these dependencies. They must be considered in the context of the underlying architecture. The authors of [6] proposed a centrally organized synchronization technique. This scheme requires a high amount of non-RRAM memory. 
In our work, we enable efficient, low-overhead parallel execution of CNN layers on multi-core RRAM-based CIM architectures. We use decentralized synchronization methods to minimize memory consumption with marginal data traffic overhead. This includes the following contributions: * An architecture optimized for efficient, decentralized, and event-based communication. * Compiler algorithms that achieve more than 99 % of the theoretical acceleration limit for conv2D layers. * A cycle-accurate simulator to analyze the influence of different algorithms and architecture parameters. Fig. 1 illustrates our evaluation framework. The architecture simulator is used to validate and evaluate the proposed algorithms (see Fig. 1(a)). The specification allows setting different architecture parameters to investigate their influence on the CNN inference. The compiler receives a CNN model and an architecture specification as input and generates code that can be executed on the simulator (see Fig. 1(b)). Fig. 1: Evaluation framework containing architecture (a) and compiler (b). ## II Background ### _RRAM crossbars_ A RRAM device is a non-volatile emerging memory that stores a conductance value. Multiple RRAM devices can be arranged in crossbar structures to enable in-memory computing [16]. On RRAM crossbars, MVMs can be performed in the analog domain in \(\mathcal{O}(1)\)[17]. The weights of the neural network are stored in the crossbar. By applying the input values as voltages, currents are generated that correspond to the result of the MVM. Modern RRAM crossbars have been fabricated in different sizes, e.g., \(64\times 64\)[6], \(128\times 128\)[7]. ### _Weight mapping_ Performing MVMs is significantly faster than programming the crossbar cells [5]. If the accelerator provides a sufficient number of crossbars, the weights should therefore be programmed only once to ensure an efficient inference phase. Conv2D layers are well suited for the execution on RRAM crossbars since they can be translated into MVMs and the matrices can be reused multiple times. This has been investigated in previous works. One of the first methods, im2col [10], assigns kernel values to crossbar cells with the densest RRAM cell occupancy. In other approaches, the crossbar is more sparsely packed and kernel values are duplicated to increase the input reuse [11, 13]. Since the im2col scheme requires the least number of RRAM cells, an extended im2col method is used in this work, which splits the kernel values of one layer over several crossbars [14, 15]. ### _RRAM-based CIM architectures_ Several RRAM-based CIM architectures have been presented [6, 7, 8, 18]. They aim at efficient and parallel execution of MVMs. Besides different interconnect models, they mainly differ in how autonomously the CIM cores can operate. It ranges from simple MVM units to powerful instruction set architecture (ISA)-based cores [19]. Simple MVM units can be driven and synchronized by a central control unit. Autonomous cores, on the other hand, can execute workloads independently and do not need to be actively controlled. This makes them more flexible and performant. They require a more advanced synchronization procedure, which is addressed in this work. ### _Synchronization techniques_ The parallel execution of layers causes critical data dependencies that can lead to incorrect results (see Section IV-A). This can be avoided at the cost of performance loss by executing the critical sections in sequence [13] (see Section IV-B). 
For parallel execution, synchronization methods are required that need to be supported by the architecture. The authors of [6] introduced a central synchronization scheme. In their architecture, several _tiles_ form the accelerator. A _tile_ is structured similar to the architecture in Fig. 1(a) and contains a controller, several CIM cores, and shared memory. The synchronization is solved centrally by extending the shared memory with an attribute buffer. This attribute buffer contains two attributes for each data entry, _valid_ and _count_. A memory controller maintains the attributes to ensure the correct exchange of data. In this solution, a significant amount of memory is needed to store the attributes. For 64 kB of data in the shared memory, 32 K attributes are required [6]. We improve on this idea by proposing a decentralized synchronization scheme that requires significantly less memory. ## III Architecture We model a multi-core system as a reference architecture (see Fig. 1(a)). The CIM cores are connected by a bus and use shared memory to exchange their data. In this work, the bus system is used to execute one layer. To be able to execute whole CNNs, the system can simply be duplicated. Architecture parameters, e.g., the number of cores and the size of the crossbars, can be specified variably in our model to investigate their influence on the performance (see Section V-C). Fig. 2 illustrates the CIM core model including data flow, instructions, and dimensioning. The buffer sizes depend on the matrix size that the MVM unit can process. Cores act as initiators and targets of bus transactions. As initiators, they can, e.g., perform LOAD and STORE operations. As a target, they can receive _config_ parameters. The general purpose execution unit (GPEU) can perform arithmetic operations and some activation functions like ReLU and LeakyReLU. Cores operate in two phases. In the setup phase, the CPU configures the CIM cores. The instructions are loaded and kernel values are programmed into crossbar cells. In the inference phase, the CNN layer is executed. After all calculations are completed, an interrupt is signaled to the CPU. ## IV Compiler Our compiler is written in Python to enable simple proofs of concept. It compiles conv2D and dense layers from Tensorflow models and generates a _bin_ and a _cfg_ file for each layer depending on the architecture specification. The _cfg_ file is interpreted by the CPU to configure the CIM cores in the setup phase. The _bin_ file is loaded into the shared memory of the CIM cores. It contains an instructions section for each core separately in case not all instructions fit into the instruction memory of the core. It provides placeholders for the input feature map (IFM) and output feature map (OFM) of the layer. In the following, mapping and synchronization techniques Fig. 2: CIM core architecture, data flow, instructions, and dimensioning. are discussed for the conv2D operation. Dense layers can be treated analogously. ### _Operation remapping_ Fig. 3(a) shows the main components of a conv2D layer, i.e., IFM, OFM, and kernels. Fig. 3(b) illustrates the im2col scheme [15]. The unrolled kernels form a matrix, which can then be multiplied by \(O_{X}\cdot O_{Y}\) unrolled vectors from the IFM. State-of-the-art CNN layers are often too big to be stored in a single crossbar. The kernel values of one layer have to be split over several crossbars (red lines) [10, 14]. Fig. 3(c) shows the assignment of kernel values to cores. 
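For illustration, the im2col view described here can be written out in a few lines of NumPy. The sketch below covers the basic scheme for stride 1 and no padding and is not the compiler of Fig. 1(b); it simply shows how each of the \(O_{X}\cdot O_{Y}\) output positions becomes one MVM against the unrolled kernel matrix.

```python
import numpy as np

def im2col_conv2d(ifm, kernels):
    """conv2D (stride 1, no padding) rewritten as O_X*O_Y matrix-vector
    multiplications, mirroring the im2col view of Fig. 3(a)-(b).

    ifm:     (H, W, KZ)            input feature map
    kernels: (KY, KX, KZ, KNUM)    kernel tensor in HWIO layout
    """
    H, W, KZ = ifm.shape
    KY, KX, _, KNUM = kernels.shape
    OY, OX = H - KY + 1, W - KX + 1

    # Unrolled kernel matrix: one column per output channel.
    kmat = kernels.reshape(KY * KX * KZ, KNUM)

    # One unrolled input vector per output position.
    cols = np.empty((OY * OX, KY * KX * KZ))
    for oy in range(OY):
        for ox in range(OX):
            cols[oy * OX + ox] = ifm[oy:oy + KY, ox:ox + KX, :].reshape(-1)

    return (cols @ kmat).reshape(OY, OX, KNUM)

# Toy layer: 10x10x16 input, 3x3 kernels, 8 output channels.
rng = np.random.default_rng(0)
ifm, kernels = rng.random((10, 10, 16)), rng.random((3, 3, 16, 8))
ofm = im2col_conv2d(ifm, kernels)
# Cross-check one output value against the direct definition of conv2D.
assert np.isclose(ofm[2, 3, 5], np.sum(ifm[2:5, 3:6, :] * kernels[:, :, :, 5]))
print(ofm.shape)  # (8, 8, 8)
```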
In the setup phase, the CPU loads the IFM to the associated placeholder in the shared memory. Bias values are initially written to the placeholder of the OFM. The GPEU is used to accumulate the bias values and to accomplish the activation function. We extend the multi-core im2col scheme by assigning two group IDs to each core. All cores sharing the same vertical group (VG) ID operate on the same values of the IFM. All cores sharing the same horizontal group (HG) ID generate partial results for the same values in the OFM that have to be accumulated. In the following, the cores are denoted as \(C_{HG,VG}\). While the IFM is read-only, both, read and write accesses are performed on the OFM. Considering \(M\times N\) crossbars and \((K_{Y},K_{X},K_{Z},K_{NUM})\) conv2D kernels (HWIO layout), the total number of needed cores \(C_{NUM}\) is \[C_{NUM}=\underbrace{\left\lceil\frac{K_{X}\cdot K_{Y}\cdot K_{Z}}{N}\right\rceil }_{=:P_{V}}\cdot\underbrace{\left\lceil\frac{K_{NUM}}{M}\right\rceil}_{=:P_{ H}}.\] The area in the shared memory dedicated to the OFM is reused for the exchange of partial results. This keeps the CIM cores lean since the required buffer sizes and synchronization complexity remain minimal. As a consequence, all cores sharing the same HG ID operate on the same OFM locations in the shared memory. The access must be regulated to avoid race conditions. Hence, a synchronization technique is required to ensure that all partial results are accumulated correctly. This means \(P_{H}\) sets of \(P_{V}\) cores need to be synchronized for \(O_{V,NUM}=O_{X}\cdot O_{Y}\) different output vectors of size \(M\). ### _Synchronization schemes_ All output vectors of the OFM stored in the shared memory can be treated as a resource that may only be owned by one core at any time. Fig. 4 illustrates the proposed synchronization techniques for the example of three conflicting cores \(C_{0,0},C_{0,1}\), and \(C_{0,P_{V}-1}=C_{0,2}\) (\(P_{H}=1,P_{V}=3\)) with \(12\) different output vectors, i.e., \(12\) different resources. To calculate correctly, each core must have owned each resource once to accumulate its partial results. In the following, the different parallelization and synchronization schemes are presented. A red _sync_ line means that the core that releases a resource notifies (CALL) its successor. That is the core that will receive the resource next. The successor must wait (WAIT) for the notification. #### Iii-B1 Sequential synchronization This scheme is the most basic one. It is used in [12, 13]. Conflicting cores operate sequentially and not in parallel, which eliminates the need for complex synchronization procedures. In the example, \(C_{0,0}\) gets all resources first. After it has completed all calculations, the next core, \(C_{0,1}\), is allowed to operate. The first core, \(C_{0,0}\), accumulates the bias values and the last core, \(C_{0,P_{V}-1}\), applies the activation function to all output vectors (see Fig. 4(a)). In the following, we propose two schemes, _linear_ and _cyclic_ synchronization, to achieve parallel processing, i.e., cores of the same HG can operate in parallel. #### Iii-B2 Linear synchronization The cores process the output vectors in the same order, starting from \(C_{0,0}\) to \(C_{0,2}\) in the example. Core \(C_{0,0}\) has no predecessor and \(C_{0,2}\) has no successor. Core \(C_{0,0}\) accumulates the bias values and \(C_{0,P_{V}-1}\) applies the activation function for all output vectors (see Fig. 4(b)). 
In this case, the number of CALL and WAIT operations is \[P_{H}\cdot O_{V,NUM}\cdot\left(P_{V}-1\right).\] #### Iii-B3 Cyclic synchronization In cyclic synchronization, tasks are distributed as fair as possible among the cores. Each core has exactly one predecessor and one successor. The output vectors are processed cyclically by the cores. The core that first gains access to an output vector accumulates the bias values to its partial results. The core that receives access to an output vector last executes the activation function (see Fig. 4(c)). As a result, the execution of the activation functions and the addition of the bias values are shared equally among all cores. In this case, the number of CALL and WAIT operations is \[P_{H}\cdot\left\lceil\frac{O_{V,NUM}}{P_{V}}\right\rceil\cdot P_{V}\cdot\left( P_{V}-1\right).\] Fig. 3: Translation of a conv2D operation (a) into multiple MVMs with matrix dimension \(M\times N\) (b) and distribution of kernel matrix values to cores (c). ### _Sequence number_ We extend each core with a single register to enable parallelization and synchronization on the architecture side. That is illustrated in Fig. 4(b) and Fig. 4(c). This register contains a sequence number (SEQ_NR) which is writable for other cores. The initial value is \(0\). A CALL operation increments the sequence number of the successor core (blue line). When executing a WAIT operation, the core waits for its sequence number to reach at least a certain value. Fig. 4(d) shows the pseudo instructions. Three cases are distinguished. In the first case, the core has no predecessor for the output it is working on. In the second case, the core has both, a predecessor and a successor. The last case describes the scenario in which a core is the last one operating on an output. ## V Results We evaluate our proposed parallelization techniques in terms of speedup gain and overhead costs using the conv2D layers of Mobilenet [20] and ResNet-18 [21]. Those layers are also found in other CNNs. The synchronization methods do not affect the accuracy of the CNNs, which is therefore not examined here. The conv2D layers are compiled for different architecture parameters and synchronization schemes to investigate their effects on the performance. ### _Architecture simulator_ Compiled layers are executed on our SystemC/TLM-2.0-based simulator to verify the compiler concepts and algorithms. To obtain realistic latency values and enable architectural exploration on a high abstraction level, the TLM-2.0 non-blocking interface is used in combination with the approximately-timed coding style [22]. We use a multi-initiator-multi-target AXI4 bus interconnect [23]. The AXI4 bus protocol [24] supports burst transactions and out-of-order transaction completion, which are beneficial features for data-intensive and highly parallel workloads. We capture relevant data during runtime to evaluate the impact of different architecture parameters and synchronization methods on the performance [25]. ### _Performance speedup_ To evaluate the parallelization and synchronization methods, we examine the speedup of the linear (Fig. 4(b)) and cyclic schemes (Fig. 4(c)). The speedup always refers to the latency of the corresponding sequential version [12, 13] (Fig. 4(a)). \[S_{LINEAR}=\frac{t_{SEQUENTIAL}}{t_{LINEAR}},S_{CYCLIC}=\frac{t_{SEQUENTIAL}}{t_{CYCLIC}}\] The variable \(t_{X}\) denotes the latency of the inference of a conv2D layer using scheme \(X\). 
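The core-count formula above and the CALL/WAIT counts of the two parallel schemes can be combined into a short sizing sketch (the layer shape below is illustrative, not one of the benchmark layers). The cyclic scheme issues slightly more CALLs whenever \(O_{V,NUM}\) is not a multiple of \(P_{V}\), in exchange for spreading the bias and activation work evenly across the cores.

```python
import math

def mapping_costs(KY, KX, KZ, KNUM, OY, OX, M, N):
    """Cores and synchronization operations for one conv2D layer mapped
    onto M x N crossbars, following the formulas in the text."""
    P_V = math.ceil(KY * KX * KZ / N)   # cores per horizontal group
    P_H = math.ceil(KNUM / M)           # number of horizontal groups
    C_NUM = P_V * P_H                   # total cores needed for the layer
    O_V_NUM = OY * OX                   # output vectors of size M

    calls_linear = P_H * O_V_NUM * (P_V - 1)
    calls_cyclic = P_H * math.ceil(O_V_NUM / P_V) * P_V * (P_V - 1)
    return P_V, C_NUM, calls_linear, calls_cyclic

# Example: 3x3x128 kernels, 256 output channels, 28x28 OFM, 64x64 crossbars.
print(mapping_costs(3, 3, 128, 256, 28, 28, 64, 64))  # (18, 72, 53312, 53856)
```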
An upper bound for the maximum achievable speedup is \(P_{V}\), i.e., all conflicting cores run in parallel without synchronization overhead (see Section IV-B). Fig. 5 shows the speedup of the linear (blue) and the cyclic synchronization method (red) for different conv2D layers of Mobilenet. The shapes of the used layers (Layer #) are listed in Table I. The crossbar dimensions are \(32\times 32\) and \(64\times 64\). This, in combination with the kernel shape of the layer, determines the number of needed cores. The upper bound for the maximum achievable speedup (\(P_{V}\)) is indicated by the dashed lines. The speedup increases when reducing the crossbar size. A reduction from \(64\times 64\) to \(32\times 32\) crossbars results in a speedup of at most \(2\times\) referred to the corresponding sequential scheme. Up to \(4\times\) more cores are required which increases the bus utilization and synchronization complexity. This means that higher speedups can be achieved at the cost of higher numbers of cores and larger bus widths. Fig. 5 also reveals the speedup which can be achieved for conv2D layers depending on the bus width and crossbar dimension. The figure demonstrates that the speedup limit can be reached even for small bus widths (4 B) when the total number of cores is small (\(\leq 32\)). Using a large bus width of 32 B, up to \(128\) cores can operate in parallel. Beyond this, the speedup limit cannot be reached. Reaching the speedup limit for sufficiently high bus widths proves that the synchronization presented in this work does not cause long waiting times. The highest speedup that is achieved is \(16\times\) for layer \(5\). The speedup of the cyclic method is slightly higher compared to the linear method because the instructions are distributed more evenly among the cores (see Section IV-B). Thus, the linear Fig. 4: Sequential computation of OFM without synchronization scheme (a), parallel computation of OFM with linear (b) and cyclic (c) synchronization for conflicting cores \(C_{0,0},C_{0,1}\) and \(C_{0,2}\), pseudo instructions for parallel computation of a conv2D operation (d). method should be preferred due to its simple implementation. ### _Bus width_ We previously demonstrated that the synchronization methods are very efficient as the speedup limit can almost be attained. However, this limit can only be achieved when the bus is sufficiently wide which prevents it from becoming a bottleneck. Fig. 6 shows to which extent the speedup limit can be reached depending on the crossbar dimension, bus width, and the number of cores. Each line represents one combination of crossbar dimension and bus width. For every combination, conv2D layers from the Mobilenet and ResNet-18 architecture were compiled and simulated. The data from Fig. 6 can be used to determine two things, the appropriate number of cores for a given crossbar dimension and bus width or a reasonable bus width for a given number of cores and crossbar dimension. In general, the smaller the bus width, the lower the number of cores the bus can handle without becoming a bottleneck. For small bus widths (4 B, red line), only a maximum of \(16\) cores are worthwhile to achieve more than 90 % of the speedup limit, with 64 B (blue line) the system can contain up to \(512\) cores. If the crossbar dimension is halved, i.e., the number of required cores is quadrupled, then the bus width should at least be doubled to achieve similar performance. For a given number of cores, Fig. 
6 can be used to determine a suitable combination of crossbar dimension and bus width. ### _Area overhead_ Apart from the performance, the overhead caused by synchronization needs to be considered. A major advantage of the methods presented in this work is that synchronization can be realized by simple register accesses. Only one register per CIM core is needed. Assuming that one of the 32 K attributes from [6] requires one byte of memory (see Section II-D), our approach saves at least 87.5 % of the synchronization memory, since with a maximum of \(1024\) cores and 4 B per register, only 4 kB of memory is required. ### _Synchronization overhead_ Synchronization requires additional operations. In contrast to the WAIT operation, the CALL operation must be transferred over the bus, which increases bus traffic. The smaller the crossbar size, the more cores are needed, and the more CALL operations have to be executed. Fig. 7 shows that the bus traffic caused by CALL operations is small compared to the data values transferred over the bus. For CALL operations with a size of 4 B and data values with a size of 1 B, the overhead is less than 4 % when using \(32\times 32\) Fig. 5: Speedup vs. maximum achievable speedup (dashed lines) of the linear and cyclic synchronization for different layers, crossbar dimensions, and bus widths. The number of cores depends on the layer and crossbar dimension. It refers to \(32\times 32\) crossbars (left entry) and \(64\times 64\) crossbars (right entry). Fig. 6: Speedup divided by speedup limit of cyclic synchronization scheme for different layers, bus widths, and crossbar dimensions. crossbars, less than 2 % when using \(64\times 64\) crossbars, and less than 1 % when using \(128\times 128\) crossbars. Table II shows the absolute number of LOAD and STORE operations. Note that the number of loaded values is greater than the number of values in the IFM and the number of stored values is greater than the number of values in the OFM. The reason for this is that the exchange of partial results and the loading of bias values are also counted. Some input values are loaded multiple times because the same input values are multiplied by different kernel values (see [13]). ## VI Conclusion This paper proposes efficient, low-overhead synchronization techniques to enable the parallel execution of single layers on RRAM-based multi-core CIM architectures. We introduce an architecture that supports synchronization and data exchange in a decentralized and event-based manner. The synchronization mechanisms require significantly less memory compared to the state of the art. On the compiler side, we generate code for different architecture setups and evaluate them on a simulator. By exploiting the synchronization mechanisms of the architecture, we achieve more than 99 % of the theoretical acceleration limit for conv2D layers of state-of-the-art CNNs with less than 4 % additional bus traffic. This work contributes to understanding the challenges of mapping CNNs to multi-core CIM systems. The presented techniques can be used as building blocks for compilers to enable parallel inference of CNN layers. As a future step, data dependencies between different layers must be considered to enable full system-level integration.
2305.19550
Spotlight Attention: Robust Object-Centric Learning With a Spatial Locality Prior
The aim of object-centric vision is to construct an explicit representation of the objects in a scene. This representation is obtained via a set of interchangeable modules called \emph{slots} or \emph{object files} that compete for local patches of an image. The competition has a weak inductive bias to preserve spatial continuity; consequently, one slot may claim patches scattered diffusely throughout the image. In contrast, the inductive bias of human vision is strong, to the degree that attention has classically been described with a spotlight metaphor. We incorporate a spatial-locality prior into state-of-the-art object-centric vision models and obtain significant improvements in segmenting objects in both synthetic and real-world datasets. Similar to human visual attention, the combination of image content and spatial constraints yield robust unsupervised object-centric learning, including less sensitivity to model hyperparameters.
Ayush Chakravarthy, Trang Nguyen, Anirudh Goyal, Yoshua Bengio, Michael C. Mozer
2023-05-31T04:35:50Z
http://arxiv.org/abs/2305.19550v1
# Spotlight Attention: Robust Object-Centric Learning With a Spatial Locality Prior ###### Abstract The aim of object-centric vision is to construct an explicit representation of the objects in a scene. This representation is obtained via a set of interchangeable modules called _slots_ or _object files_ that compete for local patches of an image. The competition has a weak inductive bias to preserve spatial continuity; consequently, one slot may claim patches scattered diffusely throughout the image. In contrast, the inductive bias of human vision is strong, to the degree that attention has classically been described with a spotlight metaphor. We incorporate a spatial-locality prior into state-of-the-art object-centric vision models and obtain significant improvements in segmenting objects in both synthetic and real-world datasets. Similar to human visual attention, the combination of image content and spatial constraints yield robust unsupervised object-centric learning, including less sensitivity to model hyperparameters. ## 1 Introduction Learning about objects and their interactions is a cornerstone of human cognition (Spelke and Kinzler, 2007). Understanding the nature of objects and their properties is necessary to achieve symbol-like mental representations (Whitehead, 1928) and systematicity of reasoning (Fodor and Pylyshyn, 1988). Although language has a natural tokenization that supports systematicity (Chakravarthy et al., 2022), progress in visual reasoning in Artificial Intelligence hinges on tokenizing visual input. Visual tokenization corresponds to the problem of _object-centric representation learning_--learning to partition images into a set of discrete slots in an unsupervised manner. The desiderata for these slots are that they induce a bijection with the objects in an image and are interchangeable. With no or limited supervision, object-centric representation learning is very difficult (Greff et al., 2020) and requires appropriate forms of inductive bias (Scholkopf et al., 2021; Ke et al., 2021; Goyal and Bengio, 2022). The biases explored for building models with object-centric representation have been based on instance based segmentation (Greff et al., 2017, 2019; Locatello et al., 2020), sequential object extraction (Gregor et al., 2015; Burgess et al., 2019; Engelcke et al., 2021; Goyal et al., 2021), invariance (Crawford and Pineau, 2019; Lin et al., 2020; Jiang et al., 2020; Biza et al., 2023), type-token distinction (Bao et al., 2022, 2023; Goyal et al., 2020) and the sparsity of interactions among different slots (Goyal et al., 2021, 2021). However, none these methods leverage a bias that is considered fundamental to visual attention in the psychological literature: the preference for spatial continuity of an attended region. Traditionally, attention was considered to be a _spotlight_ on a region of interest in the image (Posner, 1980). The spotlight metaphor was extended to be a zoom lens (LaBerge, 1983), allowing for large or small regions, but nonetheless the regions needed to be convex. The metaphor was further extended to allow the selected region to blanket around a shape (Mozer, 1988). In the psychological literature, even the notion of _object-based attention_(Duncan, 1984) is spatial in nature (Vecera and Farah, 1994), although it allows noncontiguous features of an object to be selected together e.g., if two ends of an occluded object are visible (Zemel et al., 2002). 
Although current techniques in object-centric representation learning make available a positional encoding to guide the mapping between image patches and slots, the slot extraction process itself has no explicit pressure to choose patches in a spatially contiguous manner. Consequently, one slot may claim patches scattered diffusely throughout the image. In this paper, we introduce an inductive bias in the form of a _spatial locality prior_ (_SLP_) that encourages slots to select spatially contiguous patches in the input image. **Summary.** We present an algorithm that can be incorporated into models for object-centric representation learning that biases slot assignments based on spatial locality. We show consistent improvements in the quality of object representations for three object-centric architectures, eight distinct data sets, both synthetic and natural, and multiple different performance measures that have been used in the literature. We show that the SLP also makes the baseline models more robust to hyperparameter selection and supports robust out-of-distribution generalization. ## 2 Background Unsupervised Object-Centric Learning.Our work falls into the line of research in machine vision known as unsupervised object-centric representation learning (Eslami et al., 2016; Greff et al., 2017, 2019; Burgess et al., 2019; Goyal et al., 2020; Lin et al., 2020; Locatello et al., 2020; Engelcke et al., 2021; Singh et al., 2022; Jia et al., 2023). Broadly, this work has the objective of mapping image elements to a low-dimensional set of objects, or _slots_, where the goal is for grouped image elements to be semantically similar. As'semantic similarity' is an ambiguous objective, several additional sources of information have been explored including video sequences (Jiang et al., 2020; Weis et al., 2021; Singh et al., 2022; Traub et al., 2023) and optical flow (Kipf et al., 2022; Elsayed et al., 2022; Bao et al., 2022; Bao et al., 2023). However, as pointed out by Yang and Yang (2022), such methods suffer from problems in scaling to large real-world datasets. Toward alleviating this problem, recent work has moved from CNNs to Transformer-based models which have greater expressivity (Singh et al., 2022; Seitzer et al., 2023). Nonetheless, the basic mechanism of slot assignment in Slot Attention (Locatello et al., 2020) and its variants (Chang et al., 2022; Jia et al., 2023) has proven a critical building block to unsupervised object-centric learning. Spatially-biased OCL.The key novelty of our work is a spatial-proximity-based mechanism that modulates the slot-pixel assignment. One might argue that because current methods for object centric representation learning includes explicit positional encodings in its input (Locatello et al., 2020; Goyal et al., 2020; Zhou et al., 2021), the existing methods have sufficient information to discover the principle of spatial coherence. However, the results we present clearly indicate otherwise, or at least that spatial constraints have yet to be fully exploited. Other work has incorporated spatial information into models, such as using shape priors for weak supervision (Elich et al., 2020), spatial position as a means to bootstrap learning (Kim et al., 2023), and incorporating spatial symmetries (Biza et al., 2023). Furthermore, in the context of video, SaVi (Kipf et al., 2022) and SaVi++ (Elsayed et al., 2022) use ground-truth spatial information such as center of mass and bounding boxes extracted from the first frame of the video. 
However, our method is the first to use spatial constraints to steer the slot-pixel assignment with no supervision. ## 3 Method ### Augmenting Key-Query Match with Spatial Bias The input image is first pre-processed by an _image encoder_. This processing preserves the image topography, yielding an embedding at each image patch \(p\), which corresponds to an \((x,y)\) position in a coarse grid over the image. (The embedding is intended to encode high-level visual features, and thus the grid of patches is coarser than the input dimensions in pixels.) The embedding at each patch \(p\) is mapped to a key, \(\mathbf{\kappa}_{p}\), which is matched to a query from each slot \(k\), denoted \(\mathbf{q}_{k}\). The evidence supporting a key-query match is \[\gamma_{kp}=\frac{\mathbf{q}_{k}^{\mathrm{T}}\mathbf{\kappa}_{p}}{\sqrt{d}},\] where \(d\) is the dimensionality of the query and key vectors. In various different methods of object centric representation learning (Goyal et al., 2020; Locatello et al., 2020; Goyal et al., 2021), slots compete for each grid position via a softmax renormalization of the match scores \(\gamma_{kp}\). Here, we introduce a _Spatial Locality Prior_ to modulate the competition among slots. This prior takes the form of an additive term, \(\alpha_{kp}\), in the softmax: \[s_{kp}=\mathrm{softmax}_{k}\big{(}\gamma_{kp}+\alpha_{kp}\big{)},\] where \(s_{kp}\) is the affinity between position \(p\) and slot \(k\), and \(\mathrm{softmax}_{k}\) denotes the \(k\)'th element of the softmax vector. We use \(\mathbf{s}_{k}\equiv\{s_{k}\}\) to denote the distribution of activation (or attention) over positions for a given slot \(k\). The \(\mathbf{\alpha}\) matrix is used to bias this distribution by spatial locality. ### Encouraging Spatial Locality Conditioned on an input, the \(\mathbf{\alpha}\) matrix is determined by a constraint satisfaction process (CSP) that encourages a roughly spotlight-like distribution of activation over positions in \(\mathbf{s}_{k}\) for each slot \(k\). Additionally, the CSP discourages overlap in the spotlights of any pair of slots. The CSP converges on \(\mathbf{\alpha}\) by iterative gradient-descent steps in a loss that penalizes spotlights that are non-compact and overlapping. The spotlight associated with each slot \(k\) is characterized by its center, \(\mathbf{m}_{k}\), and isotropic spread, \(v_{k}\), defined to be the central tendency and variance of \(\mathbf{s}_{k}\): \[\mathbf{m}_{k}=\frac{\sum_{p}s_{kp}\mathbf{p}}{\sum_{p}s_{kp}}\quad\text{and}\quad v_{ k}=\frac{\sum_{p}s_{kp}|\mathbf{p}-\mathbf{m}_{k}|^{2}}{\sum_{p}s_{kp}} \tag{1}\] Note that while \(\sum_{k}s_{kp}=1\) due to the softmax, the sum over positions is not normalized. The CSP's loss consists of two terms. First, a penalty is imposed to the degree that each pair of slots fail to have spatially distinct attentional profiles, as characterized by a distance measure summed over slot pairs: \[\mathcal{L}_{\mathrm{distinct}}=\sum_{k,k^{\prime}>k}\exp\left(-\frac{|\bm {m}_{k}-\mathbf{m}_{k^{\prime}}|^{2}}{v_{k}+v_{k^{\prime}}}\right) \tag{2}\] This loss is designed to push apart the slot means (the numerator term) relative to the intra-slot variance (the denominator term). 
Second, to prevent degenerate solutions in which attention collapses to a point, we impose a penalty on the Froebenius norm of \(\mathbf{\alpha}\): \[\mathcal{L}_{\mathrm{norm}}=\sum_{k,p}\alpha_{kp}^{2} \tag{3}\] The overall loss \(\mathcal{L}=\mathcal{L}_{\mathrm{distinct}}+\lambda\mathcal{L}_{\mathrm{ norm}}\) is minimized with respect to \(\mathbf{\alpha}\) from an initial state \(\mathbf{\alpha}^{0}\), which we discuss next. ### Learning Initial State Through Bilevel Optimization To break symmetry, it is vital to learn an initial state \(\mathbf{\alpha}^{0}\) which partitions the image by dispersing initial slot means across the image. We take inspiration from Jia et al. (2023) and perform meta-learning to determine \(\mathbf{\alpha}^{0}\). On each training trial, after \(j\) steps of the CSP optimization, we obtain an approximately optimal attentional bias, let's call it \(\mathbf{\alpha}^{*}\). We detach \(\mathbf{\alpha}^{*}\) and optimize for \(\mathbf{\alpha}^{0}\) on the last CSP step. We use the straight-through estimator (Bengio et al., 2013; van den Oord et al., 2017) to additionally propagate gradients into \(\mathbf{\alpha}^{0}\). Through this design, we are able to learn generalized dataset-wide statistics about \(\mathbb{E}[\mathbf{m}_{k}]\) and \(\mathbb{E}[v_{k}]\) for each slot \(k\). Note that this procedure breaks slot symmetry because \(\mathbf{\alpha}^{0}\) assigns slots to default regions of the image. ``` \(S\sim\mathcal{N}(\mu,\sigma)\) \(Z=\textbf{LayerNorm}(Z)\) \(\boldsymbol{\alpha}=\boldsymbol{\alpha}^{0}/\left\|\boldsymbol{\alpha}^{0}\right\|_ {2}\) for\(i=1\ldots T_{\textit{slot}}\)do \(S=\textbf{LayerNorm}(S)\) \(L=\frac{1}{\sqrt{d}}q(S)\cdot k(Z)^{\mathsf{T}}\) for\(j=1\ldots T_{\textit{spat}}\)do if\(j=T_{\textit{spat}}-1\)then \(\boldsymbol{\alpha}=\textbf{StopGradient}(\boldsymbol{\alpha})+\boldsymbol{ \alpha}^{0}-\textbf{StopGradient}(\boldsymbol{\alpha}^{0})\) endif \(A=\textbf{Softmax}(L+\boldsymbol{\alpha},\text{axis}=\text{"slots"})\) \(m,v=\textbf{ComputeDistribution}(Z,A)\)\(\triangleright\) Compute Equation 1 \(l=\textbf{GetLoss}(\boldsymbol{\alpha},m,v)\)\(\triangleright\) Compute Equations 2 and 3 \(\boldsymbol{\alpha}=\boldsymbol{\alpha}-\boldsymbol{\alpha}_{\mathbf{tr}}\cdot \frac{\partial l}{\partial\boldsymbol{\alpha}}\) endfor \(A=\textbf{Softmax}(L+\boldsymbol{\alpha},\text{axis}=\text{"slots"})\) \(A=A\) / \(\textbf{sum}(A,\text{axis}=\text{"embeddings"})\) \(U=A\cdot v(Z)\) for\(n=1\ldots N\)in paralleldo \(S_{n}=\textbf{GRU}(S_{n},U_{n})\) \(S_{n}+=\textbf{MLP}(\textbf{LayerNorm}(S_{n}))\) endfor endfor return\(S\) ``` **Algorithm 1 Spatial Locality Prior.** The algorithm takes image embedding features \(Z\in\mathbb{R}^{N\times C}\); the number of slots, \(K\); the number of slot-update iterations, \(T_{\textit{slot}}\); the number of spatial-update iterations, \(T_{\textit{spat}}\); and the projection dimensionality, \(d\). The learned model parameters are: the learned projections \(q,k,v\) each with dimensionality \(d\), the alpha initialization \(\boldsymbol{\alpha}^{0}\in\mathbb{R}^{K\times N}\); the **GRU** and the **MLP** layers; and a Gaussian mean and variance \(\mu,\sigma\in\mathbb{R}^{d}\). ## 4 Experiments In this section, we evaluate the benefit of incorporating the spatial-locality prior into Slot Attention, as well as into two recent object-centric methods that build upon Slot Attention to achieve state-of-the-art performance, BO-QSA (Jia et al., 2023) and DINOSAUR (Seitzer et al., 2023). 
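Before turning to the results, the inner constraint-satisfaction loop of Algorithm 1, i.e. the gradient steps on \(\mathbf{\alpha}\) that implement Equations 1-3, can be restated as a small PyTorch sketch. This is an illustrative rewrite from the equations rather than the released code: the step size, \(\lambda\), the number of steps and the grid construction are assumptions, and the straight-through update of \(\mathbf{\alpha}^{0}\) and the GRU/MLP slot refinement are omitted.

```python
import torch

def slp_inner_loop(logits, alpha0, positions, lam=0.1, step=1.0, n_steps=3):
    """Constraint-satisfaction pass over the spatial bias alpha (sketch).

    logits:    (K, P) slot-to-patch match scores gamma_kp
    alpha0:    (K, P) learned initialisation of the spatial bias
    positions: (P, 2) (x, y) coordinates of the patches
    Returns the biased slot-patch affinities s_kp.
    """
    alpha = (alpha0 / alpha0.norm()).clone().requires_grad_(True)
    for _ in range(n_steps):
        s = torch.softmax(logits + alpha, dim=0)            # slots compete per patch
        mass = s.sum(dim=1, keepdim=True)                    # (K, 1)
        m = (s @ positions) / mass                           # Eq. 1: spotlight centres
        d2 = ((positions[None] - m[:, None]) ** 2).sum(-1)   # (K, P) squared distances
        v = (s * d2).sum(dim=1) / mass.squeeze(1)            # Eq. 1: spotlight spreads
        pair_d2 = ((m[:, None] - m[None]) ** 2).sum(-1)      # Eq. 2: overlap penalty
        pair_v = v[:, None] + v[None]
        iu = torch.triu_indices(len(m), len(m), offset=1)
        l_distinct = torch.exp(-pair_d2[iu[0], iu[1]] / pair_v[iu[0], iu[1]]).sum()
        l_norm = (alpha ** 2).sum()                          # Eq. 3: avoid collapse
        (grad,) = torch.autograd.grad(l_distinct + lam * l_norm, alpha)
        alpha = (alpha - step * grad).detach().requires_grad_(True)
    return torch.softmax(logits + alpha, dim=0)

# Toy example: 4 slots competing over an 8x8 grid of patches.
K, H = 4, 8
ys, xs = torch.meshgrid(torch.arange(H), torch.arange(H), indexing="ij")
pos = torch.stack([xs, ys], dim=-1).reshape(-1, 2).float()
s = slp_inner_loop(torch.randn(K, H * H), torch.randn(K, H * H), pos)
print(s.shape)  # torch.Size([4, 64])
```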
We find that for all three models, across eight diverse datasets, the spatial-locality prior boosts performance. We refer to the base models (Slot Attention, BO-QSA, and DINOSAUR) augmented with the spatial locality prior (hereafter, _SLP_) by adding the modifier '+SLP' to the name. All comparisons we report use a given base model with and without SLP for highly controlled experimentation. Here we include details about our datasets, architectural decisions, and evaluation methods; but hyperparameters and other simulation details are presented in the Supplementary Materials. All experiments were run on a single 50GB Quatro RTX 8000 GPU. ### Object Discovery in Synthetic Images For synthetic datasets, we focus on the task of Object Discovery (Burgess et al., 2019), which is to produce a set of masks that cover each of the objects that appear in the image. We first isolate the effect of SLP through experimenting with vanilla Slot Attention (Locatello et al., 2020) on CLEVR6 (Johnson et al., 2016), ObjectsRoom (Kabra et al., 2019), MultiSprites (Burgess et al., 2019), ShapeStacks (Groth et al., 2018), and ClevrTex (Karazija et al., 2021). To show that SLP works for other variants of Slot Attention, we examine BO-QSA (Jia et al., 2023) on ShapeStacks, ObjectsRoom, and ClevrTex datasets. We primarily focus on Foreground-ARI (FG-ARI) (Hubert and Arabie, 1985) as our dependent measure of performance. We then show results with DINOSAUR (Seitzer et al., 2023) on the MoVi-C and MoVi-E datasets (Greff et al., 2022), where we evaluate using FG-ARI and mean-best-overlap (mBO) (Pont-Tuset et al., 2015). #### 4.1.1 Vanilla Slot Attention _Methodology._ In order to isolate the effect of SLP, we setup Slot Attention as described in Locatello et al. (2020), with a Mixture Decoder from Watters et al. (2019) and a 4-layer CNN encoder, on the object-discovery task. SLP is integrated into Slot Attention as described in Algorithm 1. Slots are initialized with a learned Gaussian mean and variance. _Results._ Table 0(a) presents results for object discovery on synthetic datasets. Slot Attention + SLP yields improvements on MultiSprites, ShapeStacks, and, most notably, on ClevrTex. We obtain an almost 10% improvement on ClevrTex; given challenging views with diverse textures and complicated lighting effects, Slot Attention itself is not able to effectively segment the objects. SLP does not completely solve the task, but it is a significant step forward. Further, in Table 2, we show that SLP allows an underparametrized Slot Attention with 7 slots to match the 11 slot baseline. Additionally, the improvement on the 7 slot experiment is robust as a two-tailed t-test yielded \(t(3)=3.61,p=.02\). Another limitation of vanilla Slot Attention is its fragility as the number of slot-update iterations across training and evaluation phase. In Table 0(b), we show that SLP effectively solves this problem. In this Table, we manipulate both the number of slot-update iterations (\(T_{slot}\)) and the number of spatial-update iterations (\(T_{pat}\))--the columns and rows of the table, respectively. As the number of slot-update iterations increases, performance of Slot Attention drops but Slot Attention + SLP is not systematically affected. The Table also indicates that a single SLP iteration--minimal additional computation--provides a sufficient bias to improve FG-ARI. The first and second columns of Figure 0(a) show three sample Clevr6 images and their reconstruction, respectively. 
To the right of each image pair is the \(\mathbf{\alpha}_{k}\) spatial-attention biases for each of the \(k\in\{1...7\}\) slots (the dark images), and the the post-competition masked slot representations for each slot (color images). Note that the spatial-attention biases do not reflect the final selection, but rather the fact that SLP is encouraging slots to claim patches near already claimed patches. For some objects, especially in cases where the object seems to be distant, the spatial bias seems important, but not for all objects in these simple images. \begin{table} \begin{tabular}{l c c c c} \hline \hline Method & **ObjectsRoom** & **MultiSprites** & **ShapeStacks** & **ClevTex** \\ \hline MONet Burgess et al. (2019) & 0.54 \(\pm\) 0.05 & 0.89 \(\pm\) 0.05 & 0.70 \(\pm\) 0.11 & 0.19 \(\pm\) 0.05 \\ GENESIS-V2 Engelcke et al. (2021) & 0.86 \(\pm\) 0.05 & 0.52 \(\pm\) 0.15 & 0.81 \(\pm\) 0.05 & 0.31 \(\pm\) 0.20 \\ Slot Attention Locatello et al. (2020) & 0.86 \(\pm\) 0.14 & 0.91 \(\pm\) 0.10 & 0.80 \(\pm\) 0.08 & 0.62 \(\pm\) 0.08 \\ \hline Slot Attention + SLP & 0.87 \(\pm\) 0.05 & 0.94 \(\pm\) 0.05 & 0.83 \(\pm\) 0.05 & 0.71 \(\pm\) 0.05 \\ \hline \hline \end{tabular} \end{table} Table 1: Foreground ARI (%) Segmentation Accuracy (mean \(\pm\) 1 SEM across 3 replications of each simulation) \begin{table} \begin{tabular}{l c c c} \hline \hline Model & 7 Slots & 11 Slots \\ \hline Slot Attention & 0.45 \(\pm\) 0.23 & 0.62 \(\pm\) 0.08 \\ Slot Attention + SLP & 0.65 \(\pm\) 0.08 & 0.71 \(\pm\) 0.05 \\ \hline \hline \end{tabular} \end{table} Table 2: FG-ARI (%) Accuracy (mean \(\pm\) 1 SEM across 3 runs) Varying number of slots for ClevrTex Figure 1b visualizes the meta-learned initial spatial bias distribution, \(\mathbf{\alpha}^{0}\), for the 7 slots. The initialization essentially carves up the image, ignoring regions of the image that never contain objects. Note that these initial biases do not determine the final slot assignments, as one can see by inspection of the spatial distribution of objects claimed by a given slot in Figure 1a. Figure 2 presents ClevrTex examples comparing Slot Attention and Slot Attention (SA) + SLP via image reconstructions and slot extractions. SA + SLP's image reconstruction is far more accurate as compared to that of Slot Attention, with the clearest difference being the quality of the background scene and the overall image sharpness. This can be explained by the observation that _empty_ slots of SA + SLP do a much better job of representing the background pattern. We also note that the slot decompositions are effectively localized with SA + SLP, even though the precise boundaries of the object are not sharp. #### 4.1.2 Bo-Qsa _Methodology._ Here, the setup is identical to that in Section 4.1.1, the only difference being that the slots are set to learnable queries which are themselves learned through gradient updates via the straight-through estimator (Bengio et al., 2013). _Results._ Table 3 presents FG-ARI scores for BO-QSA. Consistent with our previous results (4.1.1), we observe the largest improvements on ClevrTex, while also observing statistically reliable improvements on ShapeStacks and ObjectsRoom. 
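For reference, FG-ARI here follows the convention standard in this line of work: the adjusted Rand index is computed only over pixels belonging to the ground-truth foreground, so that the large background segment cannot dominate the score. A minimal sketch with scikit-learn, assuming masks are encoded as per-pixel integer ids with 0 reserved for background in the ground truth:

```python
import numpy as np
from sklearn.metrics import adjusted_rand_score

def fg_ari(true_mask, pred_mask):
    """Adjusted Rand index restricted to ground-truth foreground pixels.

    true_mask: (H, W) integer object ids, 0 = background
    pred_mask: (H, W) integer slot ids predicted by the model
    """
    fg = true_mask.reshape(-1) > 0                    # drop background pixels
    return adjusted_rand_score(true_mask.reshape(-1)[fg],
                               pred_mask.reshape(-1)[fg])

# Toy example: two objects; the model splits one of them across two slots.
gt = np.zeros((4, 4), dtype=int)
gt[:2, :2], gt[2:, 2:] = 1, 2
pred = np.zeros((4, 4), dtype=int)
pred[:2, :2], pred[2:, 2:] = 3, 4                     # slot ids need not match object ids
pred[3, 3] = 5                                        # part of object 2 leaks into a third slot
print(round(fg_ari(gt, pred), 3))
```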
#### 4.1.3 Dinosaur _Methodology._ We closely follow the setup of DINOSAUR (Seitzer et al., 2023), using a frozen pre-trained ViT encoder (Kolesnikov et al., 2021), Slot Attention with \(11\) slots for MoVi-C and \(24\) slots for MoVi-E, a Transformer Decoder similar to SLATE (Singh et al., 2022) and feature reconstruction loss as the primary learning signal instead of image reconstruction loss. Figure 1: Results from Slot Attention + SLP Results.Table 4 compares measures of object discovery for MoVi-C and MoVi-E datasets on DINOSAUR and our augmented variant with SLP. On both data sets and two performance measures--FG-ARI and mBO--DINOSAUR + SLP reliably outperforms DINOSAUR. ### Object Discovery in Real-World Images For real-world data, following Jia et al. (2023) we use two tasks to evaluate BO-QSA + SLP: unsupervised foreground extraction and unsupervised multi-object segmentation. For unsupervised foreground extraction, we experiment on the CUB Wah et al. (2011), Stanford Dogs Khosla et al. (2012), and Stanford Cars Krause et al. (2013) datasets and evaluate using Intersection-over-Union \begin{table} \begin{tabular}{l c c c c} \hline \hline Method & \multicolumn{2}{c}{**MoVi-C**} & \multicolumn{2}{c}{**MoVi-E**} \\ \cline{2-5} & FG-ARI & mBO & FG-ARI & mBO \\ \hline DINOSAUR (ViT-B/8) & 68.9 \(\pm\)0.3 & 38.0 \(\pm\)0.2 & 65.1 \(\pm\)0.6 & 33.5 \(\pm\)0.1 \\ DINOSAUR + SLP & 72.8 \(\pm\)0.6 & 41.5 \(\pm\)0.3 & 70.4 \(\pm\)0.4 & 35.9 \(\pm\)0.2 \\ \hline \hline \end{tabular} \end{table} Table 4: FG-ARI and mBO measures of object discovery on MoVi-C and MoVi-E with DINOSAUR Seitzer et al. (2023) (mean \(\pm\) 1 SEM across 3 runs) \begin{table} \begin{tabular}{l c c c} \hline \hline Method & **ShapeStacks** & **ObjectsRoom** & **ClevTex** \\ \hline MONet Burgess et al. (2019) & 0.70 \(\pm\) 0.11 & 0.54 \(\pm\) 0.05 & 0.19 \(\pm\) 0.05 \\ GENESIS-V2 Engelcke et al. (2021) & 0.81 \(\pm\) 0.05 & 0.86 \(\pm\) 0.05 & 0.31 \(\pm\) 0.20 \\ SLATE Singh et al. (2022) & 0.65 \(\pm\) 0.10 & 0.57 \(\pm\) 0.10 & 0.73 \(\pm\) 0.05 \\ I-SA Chang et al. (2022) & 0.90 \(\pm\) 0.08 & 0.85 \(\pm\) 0.05 & 0.78 \(\pm\) 0.10 \\ BO-QSA Jia et al. (2023) & 0.93 \(\pm\) 0.05 & 0.87 \(\pm\) 0.05 & 0.80 \(\pm\) 0.08 \\ \hline BO-QSA + SLP & 0.95 \(\pm\) 0.08 & 0.93 \(\pm\) 0.05 & 0.87 \(\pm\) 0.05 \\ \hline \hline \end{tabular} \end{table} Table 3: Foreground ARI (%) Segmentation Accuracy (mean \(\pm\) 1 SEM across 3 replications of each simulation) BO-QSA Jia et al. (2023) augmented with SLP Figure 2: ClevrTex examples: Visualizing Image Reconstructions and Slot Extractions for Slot Attention (SA) and Slot Attention + SLP (SA + SLP) (IoU) and Dice evaluation metrics. For unsupervised multi-object segmentation, we experiment on COCO [11] and ScanNet [12] datasets and evaluate using the metrics followed in Yang and Yang [2022] _Methodology._ As our base model, we use BO-QSA [13] with the SLATE encoder-decoder setup [23], which consists of a \(4\)-layer CNN encoder and a Transformer decoder in a dVAE setup. The slots are initialized as learned embeddings and the initializations are optimized directly as in Jia et al. [2023]. During evaluation, we select predicted foreground as the one with the maximum intersection between slot's mask prediction and ground-truth foreground mask. _Results._ Table 4(a) presents results for unsupervised foreground extraction and Table 4(b) presents results for unsupervised multi-object segmentation. 
On all data sets and on both tasks, SLP consistently improves the performance of BO-QSA, the most significant improvement being on the Dice metric for the Stanford Dogs dataset. It appears that I-SA [14] marginally outperforms \begin{table} \begin{tabular}{l c c c c c c c} \hline \hline Method & \multicolumn{3}{c}{**CUB**} & \multicolumn{3}{c}{**Stanford Dogs**} & \multicolumn{3}{c}{**Stanford Cars**} \\ \cline{2-7} & IoU & Dice & IoU & Dice & IoU & Dice \\ \hline ReDO [14] & 0.46 & 0.60 & 0.55 & 0.70 & 0.52 & 0.68 \\ IODINE [1] & 0.30 & 0.44 & 0.54 & 0.67 & 0.51 & 0.67 \\ OneGAN [1] & 0.55 & 0.69 & 0.71 & 0.81 & 0.71 & 0.82 \\ SLATE [23] & 0.36 & 0.51 & 0.62 & 0.76 & 0.75 & 0.85 \\ I-SA [14] & 0.63 & 0.72 & 0.80 & 0.89 & 0.85 & 0.92 \\ BO-QSA [13] & 0.61 \(\pm\)0.12 & 0.74 \(\pm\)0.12 & 0.78 \(\pm\)0.12 & 0.68 \(\pm\)0.04 & 0.76 \(\pm\)0.05 & 0.86 \(\pm\)0.05 \\ \hline BO-QSA + SLP & 0.68 \(\pm\)0.02 & 0.80 \(\pm\)0.02 & 0.78 \(\pm\)0.12 & 0.87 \(\pm\)0.12 & 0.82 \(\pm\)0.05 & 0.91 \(\pm\)0.05 \\ \hline \hline \end{tabular} \end{table} Table 5: Real-World Image Experiments Figure 3: Slot assignments for BO-QSA + SLP on COCO and ScanNet. BO-QSA + SLP on Stanford Dogs and Stanford Cars. _However_, the baselines reported in Table 4(a)--including I-SA results--are the best performing replication of multiple runs, whereas we report mean performance across three replications for BO-QSA and BO-QSA + SLP. Figure 3 presents visualizations of slot assignments for several COCO images (first and second rows) and ScanNet images (third and fourth rows). ### Out-of-Distribution Generalization One potential weakness of our model is the explicit learning of \(\mathbf{\alpha}^{0}\) based on training-set-wide statistics of the likely spatial locations of objects. This initialization may result in poor OOD generalization. However, we observe that this issue does not adversely affect our model. As suggested by Figure 0(b), learning \(\mathbf{\alpha}^{0}\) does help break symmetry: the learned \(\mathbf{\alpha}\) disperses the slot means across the scene while maintaining a high slot variance. As a result, the attention distribution forces the slots to model different objects in the scene. However, since the learned initialization is normalized before further optimization, the effects of an excessively large \(\mathbf{\alpha}^{0}\) initialization should be effectively nullified. As evidence in support of our conjecture, we tested Slot Attention with SLP and initialized slots as learned embeddings as in Section 4.1.2. We evaluate object discovery on the ClevrTex-OOD split variants and present results in Table 6. Empirically, as we see improvements both in FG-ARI and in MSE Reconstruction loss, we are able to prove that learning dataset wide statistics merely has the effect of learning an \(\mathbf{\alpha}^{0}\) initialization that supports symmetry breaking between slots. ## 5 Discussion In this paper, we proposed a spatial-locality prior that is consistent with both human visual attention and statistics of objects in natural images. Incorporating this prior into unsupervised object-centric models biases slot decompositions of the visual scene when direct evidence for objects and their boundaries is weak. The result is improved models that are more robust to hyperparameter selection and that yield better object segmentations. 
We show consistent improvements with three object-centric architectures (Slot Attention, BO-QSA, and DINOSAUR), eight distinct data sets, and various performance measures that have been used in the literature, including FG-ARI, mBO, IoU, and Dice. In all cases, models incorporating SLP advance state-of-the-art performance. _Limitations and Future Work._ A key limitation of the proposed method--as well as of any slot-based object-centric model, is the hard requirement to specify the maximum number of slots that the model can represent. Another limitation of SLP in particular is the increased computational complexity due to the bi-level optimization algorithm. Fortunately, we have found that a single spatial iteration, which is relatively efficient, yields significant benefits. In future we hope to extend the proposed method from static images to video streams. SLP should be even more fruitful for video streams where the spatial modulations, \(\mathbf{\alpha}\), inferred for one frame should be a suitable initialization point for the next frame. We also hope to extend the method to include the depth dimension of spatial attention, allowing SLP to operate in depth and to thereby predict occlusions. \begin{table} \begin{tabular}{l c c} \hline \hline Method & \multicolumn{2}{c}{**ClevTex-OOD**} \\ \cline{2-3} & FG-ARI & MSE \\ \hline MONet [20] & 0.37 \(\pm\) 0.05 & 409 \(\pm\) 0.5 \\ GENESIS-V2 [19] & 0.29 \(\pm\) 0.19 & 539 \(\pm\) 7.0 \\ Slot-Attention [13] & 0.58 \(\pm\) 0.05 & 487 \(\pm\) 2.3 \\ I-SA [2] & 0.83 \(\pm\) 0.05 & 241 \(\pm\) 1.1 \\ BO-QSA [17] & 0.86 \(\pm\) 0.05 & 265 \(\pm\) 2.9 \\ \hline BO-QSA + SLP & 0.88 \(\pm\) 0.05 & 243 \(\pm\) 1.6 \\ \hline \hline \end{tabular} \end{table} Table 6: FG-ARI (%) and Mean-Squared Error (MSE) BO-QSA Jia et al. [2023] with and without SLP (mean \(\pm\) 1 SEM across 3 experiment trials) Acknowledgement This research was enabled in part by compute resources provided by Mila (mila.quebec). We would like to thank Vedant Shah and Aniket Didolkar for reviewing early versions of the manuscript. We would also like to thank Mihir Prabhudesai and Katerina Fragkiadaki for useful discussions.
2309.08950
Active shape control by plants in dynamic environments
Plants are a paradigm for active shape control in response to stimuli. For instance, it is well-known that a tilted plant will eventually straighten vertically, demonstrating the influence of both an external stimulus, gravity, and an internal stimulus, proprioception. These effects can be modulated when a potted plant is additionally rotated along the plant's axis, as in a rotating clinostat, leading to intricate shapes. We use a morphoelastic model for the response of growing plants to study the joint effect of both stimuli at all rotation speeds. In the absence of rotation, we identify a universal planar shape towards which all shoots eventually converge. With rotation, we demonstrate the existence of a stable family of three-dimensional dynamic equilibria where the plant axis is fixed in space. Further, the effect of axial growth is to induce steady behaviors, such as solitary waves. Overall, this study offers new insight into the complex out-of-equilibrium dynamics of a plant in three dimensions and further establishes that internal stimuli in active materials are key for robust shape control.
Hadrien Oliveri, Derek E. Moulton, Heather A. Harrington, Alain Goriely
2023-09-16T10:56:00Z
http://arxiv.org/abs/2309.08950v1
# Active shape control by plants in dynamic environments ###### Abstract Plants are a paradigm for active shape control in response to stimuli. For instance, it is well-known that a tilted plant will eventually straighten vertically, demonstrating the influence of both an external stimulus, gravity, and an internal stimulus, proprioception. These effects can be modulated when a potted plant is additionally rotated along the plant's axis, as in a rotating clinostat, leading to intricate shapes. We use a morpholeastic model for the response of growing plants to study the joint effect of both stimuli at all rotation speeds. In the absence of rotation, we identify a universal planar shape towards which all shoots eventually converge. With rotation, we demonstrate the existence of a stable family of three-dimensional dynamic equilibria where the plant axis is fixed in space. Further, the effect of axial growth is to induce steady behaviors, such as solitary waves. Overall, this study offers new insight into the complex out-of-equilibrium dynamics of a plant in three dimensions and further establishes that internal stimuli in active materials are key for robust shape control. Active materials are characterized by their ability to adapt to external stimuli, often manifested by changes in shape. A paradigm of this adaptability is observed in the growth patterns of plant shoots, which exhibit remarkable sensitivity not only to their environment (e.g. light, gravity, wind) [1] but also, intriguingly, to their own evolving shapes, a phenomenon called _proprioception_[2; 3]. We show that this synergistic response to multiple stimuli serves as a robust mechanism for plants to maintain structural integrity in highly dynamic environments. An important type of response in plant shoots is _gravitropism_ [Fig. 1(a)], the tendency to react and orient their growth against the direction of gravity [4]. While modifying gravity experimentally is challenging, it is possible to nullify its influence by rotating the plant sufficiently fast in a _clinostat_[5], shown in Fig. 1(b). This device, patented by Julius von Sachs circa 1880 [6; 7], imparts a constant rotational motion to the plant, thereby cyclically altering the relative direction of gravity. To simulate weightlessness, the clinostat must rotate at a relatively high angular speed \(\omega\), compared to the response of the plant, allowing for the averaging out of gravity's influence over multiple rotations [8]. In such a case, the plant grows straight. Further, the general observation that growing shoots tend to straighten in the absence of other influences, indicates another well-established necessary response, called _autotropism_, the tendency to minimize curvature during growth [9]. Under slower rotations, the relative influence of autotropism and gravitropism can be gauged by varying the angular speed, leading to the possibility of complex three-dimensional shapes that we study here. The first model for the gravitropic response of slender shoots was formulated by Sachs in 1879 [7]. The _sine law_ states that the rate of change of curvature at a point is given by the sine of the inclination angle \(\theta(s,t)\) between the tangent to the shoot centerline and the vertical direction, where \(s\) is the arclength from the base and \(t\) is the time [Fig. 1(a)]. 
Recalling that the curvature is the arclength derivative of this angle, the sine law can be expressed as \[\dot{\theta}^{\prime}+\alpha\sin\theta=0, \tag{1}\] with \(\alpha\) a rate constant; and where \((\ )^{\prime}\) and \((\ )^{\prime}\) denote differentiation w.r.t. \(s\) and \(t\), respectively. Notably, unbeknownst@sachs-his successors, the sine law is an instance of the celebrated sine-Gordon equation, a fully integrable system with a conservative structure [11]; in Figure 1: (a) A potted plant realigns itself with gravity when tilted horizontally. (b) In a clinostat, the effect of gravity is nullified at sufficient angular speed. In both cases, the plant’s axis lies in a plane. (Adapted from [10]) fact, the sine law is the earliest appearance of this equation as a physical model. While the sine law is the starting point of many augmented models [9, 12, 13, 14, 15, 16], it is restricted to planar motion and does not include autotropism, which is necessary for shoots to eventually straighten [9, 17]. Here, we follow the plant tropism framework developed in [1] to model the clinostatting plant in three dimensions as an unshearable and inextensible _morphoelastic rod_[18, 19] of length \(\ell\). We neglect self-weight and centrifugal effects, which is valid for small shoots and slow rotation (i.e. \(\rho g\ell^{3}\ll B\) and \(\rho\omega^{2}\ell^{4}\ll B\), with \(B\) and \(\rho\) denoting the bending stiffness and the linear density, respectively). In this case, the shoot assumes its stress-free shape. In the first scenario studied here, we also neglect the axial growth of the shoot and focus on curvature generation through tissue growth and remodeling. Thus, the shoot has a constant length (we address elongation at a later stage). Model.-The centerline of a rod is a spatial curve \(\mathbf{r}(s,t)=x(s,t)\mathbf{i}+y(s,t)\mathbf{j}+z(s,t)\mathbf{k}\), parameterized here by its arclength \(s\in[0,\ell]\) (\(s=0\) at the base) at time \(t\geq 0\); where \(\{\mathbf{i},\mathbf{j},\mathbf{k}\}\) is the canonical basis of \(\mathbb{R}^{3}\), with \(\mathbf{k}\) pointing upward against the gravity direction (Fig. 2). The Frenet-Serret frame \(\{\mathbf{t},\mathbf{n},\mathbf{b}\}\), is built from the tangent vector \(\mathbf{t}:=\mathbf{r}^{\prime}\) and the unit normal and binormal vectors, \(\mathbf{n}\) and \(\mathbf{b}\), defined through \[\mathbf{t}^{\prime}=\kappa\mathbf{n},\quad\mathbf{n}^{\prime}=\tau\mathbf{b}- \kappa\mathbf{t},\quad\mathbf{b}^{\prime}=-\tau\mathbf{n}, \tag{2}\] where \(\kappa\) and \(\tau\) are the curvature and torsion, respectively. In addition to its centerline, a rod is equipped with a right-handed orthonormal director basis \(\mathbf{d}_{1}(s,t)\), \(\mathbf{d}_{2}(s,t)\), and \(\mathbf{d}_{3}(s,t)=\mathbf{t}(s,t)\)[19] that obeys \[\mathbf{d}_{i}^{\prime}=\mathbf{u}\times\mathbf{d}_{i},\quad\dot{\mathbf{d} }_{i}=\mathbf{w}\times\mathbf{d}_{i},\quad i=1,\,2,\,3. \tag{3}\] The Darboux vector \(\mathbf{u}\) and spin vector \(\mathbf{w}\) obey the compatibility condition \[\mathbf{\dot{u}}-\mathbf{w}^{\prime}=\mathbf{w}\times\mathbf{u}. \tag{4}\] In gravitropism, gravisensing mechanisms activate pathways that result in differential growth of the cells [2, 20, 21, 22, 23, 24, 25]. Changes in curvature then occur when cells on the bottom side of the shoot extend faster than those on the upper side [26, 1, 23]. 
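To make the planar dynamics concrete, the following minimal sketch (our own, not the authors' Mathematica code) integrates the sine law (1) supplemented with an autotropic relaxation term of rate \(\beta\), of the kind that appears in the generalized law (5) below; setting \(\beta=0\) recovers Eq. (1). The discretisation choices (uniform arclength grid, forward Euler in time) are ours.

```python
import numpy as np

# Planar auto-gravitropic dynamics:
#     d(kappa)/dt = -alpha*sin(theta) - beta*kappa,   kappa = d(theta)/ds,
# for a shoot of unit length clamped at the base with inclination theta0 from the vertical.
alpha, beta = 1.0, 0.2          # gravitropic and autotropic rates (beta = 0 gives Eq. (1))
n, ds = 200, 1.0 / 200          # arclength grid
theta0 = np.pi / 2              # horizontally clamped base, as in Fig. 1(a)
kappa = np.zeros(n)             # initially straight shoot
dt, steps = 1e-3, 20000

for _ in range(steps):
    theta = theta0 + np.cumsum(kappa) * ds        # inclination along the shoot
    kappa += dt * (-alpha * np.sin(theta) - beta * kappa)

# reconstruct the centreline from the tangent t = (sin(theta), cos(theta))
theta = theta0 + np.cumsum(kappa) * ds
x, z = np.cumsum(np.sin(theta)) * ds, np.cumsum(np.cos(theta)) * ds
print(f"tip inclination {np.degrees(theta[-1]):.2f} deg, tip at ({x[-1]:.3f}, {z[-1]:.3f})")
```

With \(\beta>0\) the tip inclination relaxes towards the vertical, consistent with the straightening behavior discussed above.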
Assuming local growth laws for both gravitropism and autotropism leads, through dimensional reduction [27, 1], to a generalization of the sine law that includes autotropism and three-dimensional effects (Appendix A): \[\mathbf{\dot{u}}+\mathbf{u}\times\mathbf{w}=\alpha\,\mathbf{t}\times\mathbf{k }-\beta\,\mathbf{u}. \tag{5}\] Here, \(\mathbf{u}\times\mathbf{w}\) accounts for the passive advection of \(\mathbf{u}\) by the spin vector \(\mathbf{w}\). The first term in the r.h.s accounts for gravitropism with rate constant \(\alpha\). The second term models autotropism, with rate constant \(\beta\), and leads to an exponential decay in time of the curvature in the absence of other effects. This equation reduces to the sine law in the planar case when \(\beta=0\) and no rotation is imposed. The relative strength of gravitropism and autotropism is captured by the dimensionless _bending number_\(\lambda:=\alpha\ell/\beta\)[9, 14]. Moreover, given the constitutive hypothesis that the local growth of the cells is parallel to the axis, we have [27] \[\mathbf{u}\cdot\mathbf{t}=0. \tag{6}\] The evolution of the tangent vector along the shoot is given by Eq. (3): \[\mathbf{t}^{\prime}=\mathbf{u}\times\mathbf{t}. \tag{7}\] Eqs. (4) to (7) form a closed system for \(\mathbf{u}\), \(\mathbf{w}\) and \(\mathbf{t}\) which, given appropriate initial and boundary conditions, fully captures the shape and evolution of the shoot. For comparison, our model is the _three-dimensional_, _nonlinear_ generalization of the standard 'AC model', which has been validated experimentally in numerous genera [9]. In particular, our approach is general enough to include complex movements such as clinostatting, enforced through a non-zero spin \(\mathbf{w}(0,t)\neq\mathbf{0}\) at the base. as shown in Fig. 3(a). On rescaling all lengths by the _auto-gravitropic length_\(\ell_{\text{ag}}:=\ell/\lambda\), we obtain a universal curve [see Fig. 3(b)]: \[\tilde{z}\!=\!\log(\sin\theta_{0})\!-\!\log(\sin(\theta_{0}\!-\! \tilde{x})),\hskip 14.226378pt0\leq\tilde{x}<\theta_{0}. \tag{8}\] We refer to this curve as the _simple caulinoid_ (from Latin _caulis_, meaning stem). Next, we consider a clinostat imparting a counterclockwise rotation around the horizontal axis \(\mathbf{i}\) with period \(T=2\pi/\omega\). In this case, the boundary conditions are \(\mathbf{t}(0,t)=\mathbf{i}\) and \(\mathbf{w}(0,t)=\omega\mathbf{i}\). By definition, at equilibrium, we have \(\dot{\mathbf{w}}=\dot{\mathbf{u}}=\mathbf{\dot{t}}=\mathbf{0}\), which gives \(\mathbf{w}=\omega\mathbf{t}\). In this configuration, the shoot revolves at constant angular velocity \(\omega\) about a _fixed_ centerline [Fig. 3(a)] with tangent vector given by (Appendix B.2) \[\mathbf{\tilde{t}}(s)= \frac{\cos\Lambda s}{\cosh\Theta s}\,\mathbf{i}-\frac{\sin \Lambda s}{\cosh\Theta s}\,\mathbf{j}+\tanh(\Theta s)\,\mathbf{k}, \tag{9}\] where \(\Lambda:=\alpha\omega/(\omega^{2}+\beta^{2})\) and \(\Theta:=\alpha\beta/(\omega^{2}+\beta^{2})\). The curvature, \(\tilde{\kappa}(s)=\sqrt{\Theta^{2}+\Lambda^{2}}\;\mathrm{sech}\,\Theta s\), and torsion, \(\tilde{\tau}(s)=-\Lambda\tanh\Theta s\), of this general caulinoid satisfy \[\frac{\tilde{\kappa}^{2}}{\Theta^{2}+\Lambda^{2}}+\frac{\tilde{\tau}^{2}}{ \Lambda^{2}}=1. \tag{10}\] Thus, along an equilibrium solution, starting from \(\tilde{\tau}(0)=0\) at the base, the torsion increases while the curvature decreases along an ellipse in the curvature-torsion plane. 
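The closed-form equilibrium tangent (9) is easy to evaluate numerically. The sketch below (our own illustration, with arbitrary parameter values) reconstructs the centreline by integrating the tangent, checks the curvature–torsion ellipse (10), and checks the height \(\tilde{z}(1)=\log(\cosh\Theta)/\Theta\) obtained by integrating the vertical component of (9).

```python
import numpy as np

# Equilibrium "caulinoid" tangent of Eq. (9) and the ellipse relation of Eq. (10).
alpha, beta, omega = 5.0, 1.0, 1.0
Lam = alpha * omega / (omega**2 + beta**2)       # winding density Lambda
The = alpha * beta / (omega**2 + beta**2)        # rise density Theta

s = np.linspace(0.0, 1.0, 2001)
t = np.stack([np.cos(Lam * s) / np.cosh(The * s),
              -np.sin(Lam * s) / np.cosh(The * s),
              np.tanh(The * s)], axis=1)

# centreline r(s) by trapezoidal integration of the tangent
ds = s[1] - s[0]
r = np.vstack([np.zeros(3), np.cumsum(0.5 * (t[1:] + t[:-1]) * ds, axis=0)])

# closed-form curvature and torsion lie on the ellipse (10)
kappa = np.sqrt(The**2 + Lam**2) / np.cosh(The * s)
tau = -Lam * np.tanh(The * s)
residual = kappa**2 / (The**2 + Lam**2) + tau**2 / Lam**2 - 1.0
print("max |ellipse residual|:", np.abs(residual).max())
print("height z(1):", r[-1, 2], " vs  log(cosh Theta)/Theta:", np.log(np.cosh(The)) / The)
```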
In physical space, the centerline follows a modulated left-handed helix that gradually uncoils away from the base towards the vertical, and we can interpret \(\Lambda\) and \(\Theta\) as the curve's _winding_ and _rise_ densities [Fig. 4(b)]. In the limit \(\omega\to 0\), we have \(\Lambda=0\) and \(\Theta=1/\ell_{\text{ag}}\), recovering the planar case discussed above. When \(\omega\rightarrow\infty\), the plant remains straight with \(\Lambda=\Theta=0\) [Fig. 4(c)]. The equilibrium curve is uniquely determined by \(\Lambda\) and \(\Theta\). Experimentally, given \(\omega\), both parameters \(\alpha\) and \(\beta\) can thus be estimated uniquely from the centerline (unlike in the planar case), e.g. by using the height of the plant \(H=\tilde{z}(1)=\log(\cosh\Theta)/\Theta\) and the radius of the caulinoid at the base \(R=1/\Lambda\). A numerical linear stability analysis of the full system (Appendix D) conducted across a wide range of realistic parameters \(\lambda\in[0.1,100]\) [Fig. 3(c)], consistent with reported values [9, 14, 28], reveals that, for \(\beta>0\), the equilibrium solution is linearly stable. Further, the local dynamics near the base can be obtained asymptotically, showing that the Darboux vector spirals towards its equilibrium value with a typical exponential decay \(\mathrm{e}^{-\beta t}\) [see Fig. 4(d) and Movie 1]. In the limit case \(\beta=0\) but with \(\omega\neq 0\)[1], the equilibrium solution is a segment of a horizontal circle of radius \(\omega/\alpha\). Here, however, the previous stability result does not apply and the shoot orbits around the equilibrium [see Fig. 4(d) and Movie 2]. ## 4 _Shoot elongation_ Plants also lengthen due to the coordinated expansion of the cells along the central axis. Generally, this primary growth is mostly confined to a region close to the apex [29]. To model elongation, including apical dominance, we assume that both the topic response and axial growth gradually diminish as we move away from the apex with exponential decay of characteristic length \(\delta\) and with growth \(\Gamma_{0}\) and auto-gravitropic rates, \(\beta\) and \(\alpha\), at the tip (Appendix E). In this case, the system supports a traveling front solution connecting a flat base to a steady apical structure migrating forward at a speed \(c=\Gamma_{0}\delta\) [see Fig. 5(a) and Movie 3]. The shape of this solitary wave can be described in terms of an initial value problem that can be integrated numerically. Fig. 5(b) shows example solutions obtained for various rotation speeds and bending numbers \(\lambda\). An interesting limit is \(\ell\ll\delta\) (uniform growth rates along the shoot). Assuming a timescale separation \(\beta\gg\Gamma_{0}\), and noting that \(\Lambda\) and \(\Theta\) are independent of \(\ell\), we see that the shoot's shape will progress quasi-statically, spreading itself uniformly along a unique caulinoid [see Fig. 5(d) Figure 3: Steady solutions in the absence of clinostatting. (a) Horizontal clamp \(|\theta_{0}|=\pi/2\) and upside-down clamp \(|\theta_{0}|\rightarrow\pi^{-}\) for various \(\lambda\). (b) The equilibrium solution is a simple caulinoid [Eq. (8)] parameterized by \(\theta_{0}\). Dashed lines show the horizontal and upside-down solutions. (c) Shape adopted by a wheat coleoptile (adapted from [9], with courtesy from B. Moulia) with overlaid caulinoid. and Movie 4]. 
The existence of these solutions demonstrates that steady configurations are a robust property of the system that can persist even upon significant elongation. ## 3 Discussion The clinostat holds a significant place in plant physics, addressing a precise technical challenge: simulating weightlessness by effectively 'confusing' the plant through fast rotation. At lower speeds, the interaction between rotation, gravitropism, and autotropism reveals more subtle behaviors. A distinct property of this system is the universal existence of a _dynamic_ equilibrium where the shoot revolves around a steady centerline, the caulinoid. This equilibrium is dynamic as it requires cyclic deformations in the material to maintain this configuration as rotation is applied. In contrast to the classic planar case, whose equilibrium is determined solely by \(\lambda\) (Fig. 3), this solution is _uniquely_ characterized through two dimensionless numbers \(\alpha\ell/\omega\) and \(\beta/\omega\). When the plant undergoes elongation, two distinct behaviors emerge: solitary waves when growth, autotropism and gravitropism are confined to the tip; or stationary elongation along a unique caulinoid when the shoot grows uniformly. In conclusion, we predict that a clinostatting shoot will naturally assume the sole shape that enables it to counterbalance rotation and minimize its overall movement in the laboratory frame, strikingly, even in the absence of a dedicated rotation-sensing mechanism. The importance of proprioception in plant posture control is now well established [2, 3, 9, 16, 30, 31]. We further showed that the role of proprioception, in the form of autotropism, is crucial in stabilizing the clinostatting shoot, as its absence would lead to non-steady behaviors [1]. Physically, autotropism acts as a damping mechanism in curvature space, hence providing a stabilization mechanism. The exact caulinoid solutions Figure 4: Dynamic equilibrium of a rotating plant. (a) The material revolves at angular speed \(\omega\) around a fixed centerline. (b) Example equilibrium configurations obtained for various values of \(\Lambda\) and \(\Theta\). (c) Dependency of the equilibrium solution on \(\omega\). Blue rods show the limits \(\omega\to 0\) (no rotation) and \(\omega\to\infty\) (standard clinostat experiment). The surface shows the set of equilibrium solutions obtained for finite values of \(\omega/\beta\) (\(\lambda=5\)). (d) Course of the apex in two cases, \(\beta>0\), with convergence to equilibrium (\(\beta=\omega/5\)), and \(\beta=0\), after convergence to a limit cycle (in both cases \(\alpha=\omega\)). Dashed line shows the corresponding equilibrium solution. Figure 5: Growth. (a) Examples of simulated growing shoots (parameters: \(\alpha=\omega\) and \(5\omega\); \(\beta=\omega\); \(\delta=\ell\); \(\Gamma_{0}=\omega/10\)). (b, c) Solitary wave profiles computed for different (b) bending numbers \(\lambda\); and (c) rotation speeds \(\omega\). The labels \(*\) and \(**\) indicate corresponding sets of parameters between the simulation and the asymptotic profile. (d) Uniform growth rate (\(\delta\gg\ell\)): The plant spreads along a unique caulinoid (here, \(\Lambda=5\), \(\Theta=1\), \(\Gamma_{0}=\omega/10\)). may be difficult to observe experimentally with precision as it would require pristine conditions. Further, in plants, heterogeneity, stochasticity and other tropic responses also play a role. Yet, these ideal solutions present a new paradigm for the study of plant shapes and the design of experiments. 
They can be further generalized to include other effects, such as light, or elasticity [1]. They demonstrate that the coupling of internal and external stimuli is key for shape control, a problem of general importance in biology with direct implications for non-living active materials. A.G. acknowledges support from the Engineering and Physical Sciences Research Council of Great Britain under Research Grant No. EP/R020205/1. H.A.H. acknowledges support from the Royal Society under University Research Fellowship No. URF/R/211032. For the purpose of Open Access, the author has applied a CC BY public copyright license to any Author Accepted Manuscript (AAM) version arising from this submission. ## Appendix A Kinetics of curvature evolution The auto-gravitropic governing law, derived in [1], reads in vector form: \[\dot{\mathbf{u}}=\alpha\mathbf{t}\times\mathbf{k}-\beta\mathbf{u}. \tag{10}\] Here, we have used Antman's sans-serif notations [32] to denote vector field attached to a curve and expressed in the local material frame \(\mathbf{d}_{i}\), i.e. for a vector field \(\mathbf{u}(s,t)\), we write \[\mathbf{u}(s,t)=\sum_{i=1}^{3}\mathbf{u}_{i}(s,t)\mathbf{d}_{i}(s,t), \tag{11}\] and \(\mathbf{u}=(\mathbf{u}_{1},\mathbf{u}_{2},\mathbf{u}_{3})\) then denotes the vector of local coordinates \(\mathbf{u}_{i}=\mathbf{u}\cdot\mathbf{d}_{i}\) (in particular, \(\mathbf{t}=\mathbf{d}_{3}\) implies \(\mathbf{t}=(0,0,1)\)). Thus, Eq. (10) expresses the evolution of curvatures from a local point of view, i.e. in a reference frame attached to the material. In our case, since gravity is important, it is convenient to express the dynamics in the non-rotating, laboratory frame (indeed, the equilibrium solutions are naturally expressed in the laboratory frame). Therefore, we differentiate Eq. (11) with respect to time, and use \(\dot{\mathbf{d}}_{i}=\mathbf{w}\times\mathbf{d}_{i}\) [Eq. (3)], to obtain the kinematic relation \[\dot{\mathbf{u}}+\mathbf{u}\times\mathbf{w}=\sum_{i=1}^{3}\dot{\mathbf{u}}_{i }\mathbf{d}_{i}. \tag{12}\] Using the rotational invariance of the cross product, Eqs. (10) and (12) directly provide the expression given in Eq. (5) for \(\dot{\mathbf{u}}\). ## Appendix B Equilibrium solutions ### Without rotation We derive the equilibrium solutions for the non-rotating case (\(\omega=0\)). Here, we choose \(\ell\equiv 1\) as a reference length unit. Setting \(\dot{\mathbf{u}}=\mathbf{0}\) in Eq. (5) provides \(\ddot{\mathbf{u}}=\lambda\mathbf{\tilde{t}}\times\mathbf{k}\), which can be substituted into Eq. (7) to obtain \[\mathbf{\tilde{t}}^{\prime}=\lambda(\mathbf{\tilde{t}}\times\mathbf{k})\times \mathbf{\tilde{t}}. \tag{13}\] Provided an initial tilt \(0\leq\theta_{0}<\pi\), such that \(\mathbf{\tilde{t}}(0)=\sin\theta_{0}\,\mathbf{i}+\cos\theta_{0}\,\mathbf{k}\), we integrate this equation and derive the tangent \[\mathbf{\tilde{t}}(s) =\frac{\sin\theta_{0}}{\cos\theta_{0}\sinh\lambda s+\cosh\lambda s }\,\mathbf{i}\] \[+\frac{(\cos\theta_{0}+1)\mathbf{e}^{2\lambda s}-1+\cos\theta_{0} }{(\cos\theta_{0}+1)\mathbf{e}^{2\lambda s}+1-\cos\theta_{0}}\,\mathbf{k}. \tag{14}\] Integrating once more gives the position vector \(\mathbf{\tilde{r}}(s)=\tilde{x}(s)\,\mathbf{i}+\tilde{z}(s)\,\mathbf{k}\): \[\lambda\tilde{x}(s)=\theta_{0}-2\operatorname{arccot}\bigg{(}\mathbf{e}^{ \lambda s}\cot\frac{\theta_{0}}{2}\bigg{)}, \tag{15a}\] \[\lambda\tilde{z}(s)=\log\big{[}1+\cos^{2}(\theta_{0}/2)(\mathbf{e}^{2 \lambda s}-1)\big{]}-\lambda s. \tag{15b}\] Inverting Eq. 
(15a) and rescaling all lengths as \(\tilde{x}\rightarrow\tilde{x}/\lambda\), \(\tilde{z}\rightarrow\tilde{z}/\lambda\), we obtain an implicit relation between \(\tilde{z}\) and \(\tilde{x}\) [Eq. (8)], which corresponds to a universal equilibrium shape for all orientations \(\theta_{0}\) of the shoot. ### With rotation Next, we derive the equilibrium solution for a plant undergoing rotation (\(\omega>0\)). To determine the equilibrium shape, we posit \(\dot{\mathbf{w}}=\mathbf{\tilde{t}}=\mathbf{\tilde{u}}=\mathbf{0}\). Eqs. (3) and (4) directly provide that \(\mathbf{\tilde{w}}=\omega\mathbf{\tilde{t}}\). Substituting this ansatz into Eq. (5), we obtain \[\beta\mathbf{\tilde{u}}=\mathbf{\tilde{t}}\times(\alpha\mathbf{k}+\omega \mathbf{\tilde{u}}). \tag{16}\] On inverting this identity, we can express \(\mathbf{\tilde{u}}\) as an explicit function of \(\mathbf{\tilde{t}}\), given by \[\mathbf{\tilde{u}}=(\Lambda\tilde{t}_{1}\tilde{t}_{3}+\Theta\tilde {t}_{2})\,\mathbf{i} +(\Lambda\tilde{t}_{2}\tilde{t}_{3}-\Theta\tilde{t}_{1})\,\mathbf{j}\] \[+\Lambda(\tilde{t}_{3}^{2}-1)\,\mathbf{k}, \tag{17}\] with \(\mathbf{\tilde{t}}=\tilde{t}_{1}\,\mathbf{i}+\tilde{t}_{2}\,\mathbf{j}+ \tilde{t}_{3}\,\mathbf{k}\). Substituting this last expression into Eq. (7) and integrating it, we obtain the expression for the tangent given by Eq. (9). Remarkably, we can integrate the tangent to obtain an exact parameterization of the centerline \(\mathbf{\overline{r}}\), in terms of the hypergeometric function \({}_{2}F_{1}\), the harmonic number \(H_{n}\) and the polygamma function of order zero \(\psi^{(0)}\): \[\tilde{x}(s) = \frac{\mathrm{e}^{s(\Theta+\mathrm{i}\Lambda)}}{\Theta+\mathrm{i} \Lambda}\,_{2}F_{1}\left(1,\frac{\Theta+\mathrm{i}\Lambda}{2\Theta};\frac{3 \Theta+\mathrm{i}\Lambda}{2\Theta};-\mathrm{e}^{2s\Theta}\right)+\frac{ \mathrm{e}^{s(\Theta-\mathrm{i}\Lambda)}}{\Theta-\mathrm{i}\Lambda}\,_{2}F_{1} \left(1,\frac{\Theta-\mathrm{i}\Lambda}{2\Theta};\frac{3\Theta-\mathrm{i} \Lambda}{2\Theta};-\mathrm{e}^{2s\Theta}\right) \tag{6a}\] \[- \frac{\pi}{2\Theta}\operatorname{sech}\left(\frac{\pi\Lambda}{2 \Theta}\right),\] \[\tilde{y}(s) = \frac{\mathrm{i}}{4\Theta}\left[\psi^{(0)}\left(\frac{\Theta+ \mathrm{i}\Lambda}{4\Theta}\right)-\psi^{(0)}\left(\frac{3\Theta+\mathrm{i} \Lambda}{4\Theta}\right)+H_{-\frac{\Theta+\mathrm{i}\Lambda}{4\Theta}}-H_{- \frac{3\Theta+\mathrm{i}\Lambda}{4\Theta}}\right]\] (6b) \[+ \frac{\mathrm{e}^{s(\Theta+\mathrm{i}\Lambda)}}{\Lambda-\mathrm{i }\Theta}\,_{2}F_{1}\left(1,\frac{\Theta+\mathrm{i}\Lambda}{2\Theta};\frac{3 \Theta+\mathrm{i}\Lambda}{2\Theta};-\mathrm{e}^{2s\Theta}\right)+\frac{\mathrm{ e}^{s(\Theta-\mathrm{i}\Lambda)}}{\Lambda+\mathrm{i}\Theta}\,_{2}F_{1}\left(1, \frac{\Theta-\mathrm{i}\Lambda}{2\Theta};\frac{3\Theta-\mathrm{i}\Lambda}{2 \Theta};-\mathrm{e}^{2s\Theta}\right),\] \[\tilde{z}(s) = \frac{\log(\cosh(\Theta s))}{\Theta}. \tag{6c}\] ## Appendix C Numerical resolution of the nonlinear system We use a method based on Chebyshev polynomials to integrate numerically the nonlinear system given by Eqs. (4) to (7). We first remark that the system, albeit originally defined for \(s\in[0,1]\), can be extended naturally to \(s\in[-1,1]\) (by considering two 'twin' shoots oriented opposite to each other with respect to the plane \(y\)-\(z\)). Here, the extended equilibrium solution is invariant with respect to the mirror symmetry \(x\to-x\), \(s\to-s\). 
This situation is ideal for using Chebyshev polynomials of the first kind \(T_{n}\)[33] as they are defined canonically on \([-1,1]\). Thus, we consider the truncated Chebyshev expansions for the variables \[\mathbf{t} \approx \sum_{n=0}^{N}\mathbf{T}^{n}T_{n}, \tag{7a}\] \[\mathbf{w} \approx \sum_{n=0}^{N}\mathbf{W}^{n}T_{n},\] (7b) \[\mathbf{u} \approx \sum_{n=0}^{N}\mathbf{U}^{n}T_{n}, \tag{7c}\] with \(N\) a positive integer. Figure 6: Set of equilibrium solutions. The colored surface plot sweeps solutions for a range of \(\Lambda\) (with \(\Theta=1\)). Red solid lines show the course of the shoot tip \(s=0\) as \(\Lambda\) varies, and for different values of \(\Theta>1\), with height given by \(h(\Theta)=\log(\cosh\Theta)/\Theta\). Red dashed line shows the tip position for \(\Lambda=0\) as a function of \(\Theta\), given by \((x(\Theta),z(\Theta))=(\mathrm{gd}\,\Theta/\Theta,h(\Theta))\). Inset shows the path of the solution in the \(\kappa\)–\(\tau\) plane [Eq. (10)]. The formal solutions for \(\mathbf{t}\) and \(\mathbf{w}\), \[\mathbf{w}=\omega\mathbf{i}+\alpha\int_{0}^{s}\mathbf{t}\times\mathbf{k}-\beta \int_{0}^{s}\mathbf{u}, \tag{10}\] can be decomposed on the Chebyshev basis as follows. From the products \(T_{n}T_{m}=(T_{n+m}+T_{|n-m|})/2\)[33], we derive the expansion of the cross products, i.e., for any vector field \(\mathbf{a}\) and \(\mathbf{b}\) with respective Chebyshev coefficients \(\mathbf{A}^{n}\) and \(\mathbf{B}^{n}\), we have \[\mathbf{a}\times\mathbf{b}=\frac{1}{2}\sum_{p=0}^{\infty}(\mathbf{A}^{p} \times\mathbf{B}^{p}+\mathbf{A}^{p}\times\mathbf{B}^{-p})T_{0}+\frac{1}{2}\sum _{n=1}^{\infty}\sum_{p=0}^{n}(\mathbf{A}^{p}\times\mathbf{B}^{n-p}+\mathbf{A}^ {p}\times\mathbf{B}^{n+p}+\mathbf{A}^{n+p}\times\mathbf{B}^{p})T_{n}. \tag{11}\] For integration, we use the recurrence formulae [33] \[\int T_{0}=T_{1};\quad\int T_{1}=\frac{1}{4}(T_{2}+T_{0}); \tag{12a}\] \[\int T_{n}=\frac{1}{2}(\frac{T_{n+1}}{n+1}-\frac{T_{n-1}}{n-1}),\quad\forall n \geq 2, \tag{12b}\] to obtain \[\int\mathbf{a}=\frac{\mathbf{A}^{1}}{4}T_{0}+\mathbf{A}^{0}T_{1}+\sum_{n=2}^ {\infty}\frac{\mathbf{A}^{n-1}-\mathbf{A}^{n+1}}{2n}T_{n}. \tag{13}\] Conveniently, integration corresponds to a linear operation on the \(\mathbf{A}^{n}\), whose matrix can be precomputed. Given the coefficients \(\mathbf{U}^{n}\), the Chebyshev expansion of Eq. (10) yields a linear system that can be inverted to obtain the \(\mathbf{T}^{n}\). Then, the \(\mathbf{W}^{n}\) are obtained by direct integration, using Eq. (13). After expressing the \(\mathbf{T}^{n}\) and \(\mathbf{W}^{n}\) as functions of the \(\mathbf{U}^{n}\), we obtain a dynamical system of the form \[\dot{\mathbb{U}}=\mathcal{F}(\mathbb{U}), \tag{14}\] where \(\mathbb{U}\) is the \(3(N+1)\)-dimensional vector formed by the concatenation of the \(\mathbf{U}^{n}\); and \(\mathcal{F}\) is a second-degree polynomial vector that is evaluated numerically. Provided appropriate initial conditions, Eq. (14) can be integrated numerically using a standard IVP solver (here we used _Mathematica_'s built-in routine _NDSolve_). A general problem is to find an initial condition for \(\mathbf{u}\) that satisfies the orthogonality condition, Eq. (6). Indeed, by differentiating \(\mathbf{u}\cdot\mathbf{t}\) with respect to time and using Eq. (5), we observe that \[\frac{\partial}{\partial t}(\mathbf{u}\cdot\mathbf{t})=-\beta\mathbf{u}\cdot \mathbf{t}. \tag{15}\] Since \(\beta>0\), Eq. (6) is a stable property, in particular, if Eq. 
(6) is satisfied at \(t=0\), it will be automatically satisfied at all times \(t\). Note that, if \(\mathbf{u}\cdot\mathbf{t}=0\), then we have automatically \[\mathbf{u}=\mathbf{t}\times\mathbf{t}^{\prime} \tag{16}\] (the converse is trivial). Thus a suitable initial condition can always be found by first defining a curve and its tangent \(\mathbf{t}\); and then obtaining \(\mathbf{u}\) through Eq. (16). Once an initial configuration is defined, the initial Chebyshev coefficients for \(\mathbb{U}(0)\) are computed efficiently by means of the discrete cosine transform [34]. ## Appendix D Stability ### Asymptotic analysis near the base To gain insight into the dynamics of the shoot and its stability, it is useful to first restrict our attention to the base of the plant, \(s=0\), where \(\mathbf{t}(0,t)=\mathbf{i}\) and \(\mathbf{w}(0,t)=\omega\mathbf{i}\). Letting \(\mathbf{U}(t)=\mathbf{u}(0,t)\), Eq. (5) reduces to \[\dot{U}_{2}=-\alpha-\beta U_{2}-\omega U_{3},\quad\dot{U}_{3}=\omega U_{2}- \beta U_{3}. \tag{17}\] with \(\mathbf{U}=U_{2}\,\mathbf{j}+U_{3}\,\mathbf{k}\). We have \(U_{1}=0\) by Eq. (6). The system admits a unique fixed point \((U_{2},U_{3})=(-\Theta,-\Lambda)\) (this is simply the equilibrium curvatures at the origin derived in Appendix B.2), associated with a pair of conjugate eigenvalues \(-\beta\pm\omega\mathbf{i}\) with negative real part: The fixed point is a spiral sink associated with a decaying amplitude \(\sim\mathbf{e}^{-\beta t}\) and rotation speed \(\omega\). When \(\beta=0\) the fixed point is a center and the solution orbits around the fixed point. We can extend this analysis to higher orders in \(s>0\) in principle (that is, expanding all variables in orders of \(s\) and performing a regular perturbation). For instance, Fig. 7 shows the second-order approximation of the solution taken at \(s=0.25\). The second-order estimate converges towards equilibrium when \(\beta>0\) and \(s\ll 1\) (in the case \(\beta=0\) however, there is a secular term that must be treated by a dedicated method, but we leave this problem outside the scope of this study, focusing on the physiologically relevant case \(\beta>0\)). ### Linear stability analysis The previous analysis provides insight into the dynamics of the system; however, in principle, it is valid only near the base. To complement that approach, we perform a linear stability analysis of the equilibrium solution. Therefore, we take the first variation of Eqs. (4) to (7) around the base equilibrium solution derived in Appendix B.2. Rearranging the terms, we obtain: \[\delta\mathbf{t}^{\prime}=\delta\mathbf{u}\times\mathbf{\tilde{t}}+\mathbf{ \tilde{u}}\times\delta\mathbf{t}, \tag{2a}\] \[\delta\mathbf{w}^{\prime}=\alpha\,\delta\mathbf{t}\times\mathbf{k}-\beta\delta \mathbf{u},\] (2b) \[\delta\mathbf{\tilde{u}}=\delta\mathbf{w}^{\prime}+\delta\mathbf{w}\times \mathbf{\tilde{u}}+\mathbf{\tilde{w}}\times\delta\mathbf{u}, \tag{2c}\] with the conditions \[\mathbf{\tilde{u}}\cdot\delta\mathbf{t}=-\delta\mathbf{u}\cdot\mathbf{\tilde{t }},\quad\mathbf{\tilde{t}}\cdot\delta\mathbf{t}=0. \tag{3}\] The boundary conditions at \(s=0\) fix the values of \(\mathbf{t}(0,t)\) and \(\mathbf{w}(0,t)\), thus, \[\delta\mathbf{t}(0,t)=\mathbf{0},\quad\delta\mathbf{w}(0,t)=\mathbf{0}. \tag{4}\] We start by solving Eq. (2a). As can be seen, a linearly independent basis of solutions for the homogeneous part of Eq. (2a) is provided by the \(\mathbf{d}_{i}\) at equilibrium (defined up to an arbitrary rotation of the clinostat). 
A particular solution is then obtained by means of variation of constants. For a given \(\delta\mathbf{u}\), the solutions to Eqs. (2a), (2b) and (4) are: \[\delta\mathbf{t}=\mathbf{d}_{1}\int_{0}^{s}\delta\mathbf{u}\cdot\mathbf{d}_{2 }-\mathbf{d}_{2}\int_{0}^{s}\delta\mathbf{u}\cdot\mathbf{d}_{1}, \tag{5a}\] \[\delta\mathbf{w}=-\alpha\mathbf{k}\times\int_{0}^{s}\delta\mathbf{t}-\beta\int_ {0}^{s}\delta\mathbf{u}. \tag{5b}\] Lastly, we perform a Chebyshev spectral analysis of the linearized system. Namely, expanding Eqs. (2c) and (5) as in Appendix C, we obtain a linear dynamical system \[\delta\dot{\mathbb{U}}=\mathbf{L}\delta\mathbb{U} \tag{6}\] for the Chebyshev coefficients \(\delta\mathbb{U}\). Note that, since the orthogonality constraint, Eq. (3), is stable by Eq. (7), we need not consider it in the stability analysis, as coordinates orthogonal to the constraint surface will vanish. The complex eigenvalues of \(\mathbf{L}\) can be computed numerically; specifically, the system is linearly stable if all the real parts \(\omega_{i}\in\mathbb{R}^{3(N+1)}\) of these eigenvalues are negative. Here, the system appears to be stable for all values of \(\lambda\) and \(\omega\) tested. The results are consistent with the dynamics predicted in Appendix D.1, which is dominated by a decay rate of order \(\mathrm{e}^{-\beta t}\). ## Appendix E Shoot elongation ### General model To model growth, we introduce the standard growth multiplier \(\gamma:=\partial s/\partial s_{0}\) which connects the arclength \(s_{0}\in[0,\ell_{0}]\) in the initial configuration of the shoot, to the arclength \(s\in[0,\ell(t)]\) in the current, grown configuration [19]. To account for apical dominance, we assume that Figure 7: Example course of the Darboux vector \(\mathbf{u}(t)\) in the plane \(\mathbf{j}\)-\(\mathbf{k}\), computed asymptotically (to second order in \(s\)) near the base (\(s=0.25\)). The asymptotic solution spirals towards an equilibrium value (\(\alpha=4\omega\), \(\beta=0.2\omega\)). Blue and orange dots show the exact value of \(\mathbf{u}\) at equilibrium at \(s\), and its second-order approximation, respectively. growth and curvature generation mostly happen within a finite distal section of the stem of length \(\delta\). Therefore, we introduce an activation function: \[a(s_{0},t)=f(\ell(t)-s(s_{0},t)), \tag{10}\] with \(f(\sigma)=\mathrm{e}^{-\sigma/\delta}\), modeling the slowing down of growths as we move away from the tip of the shoot, located at \(\ell(t)=s(\ell_{0},t)\). Accordingly, we assume an exponential growth kinetics given by [19] \[\Gamma:=\frac{\dot{\gamma}}{\gamma}=\Gamma_{0}\,a(s_{0},t), \tag{11}\] which captures a type of growth where all cells in a small portion of the tissue expand and proliferate at the same rate. Similarly, we define the rates of curvature generation \(A(s_{0},t)=\alpha a(s_{0},t)\), and \(B(s_{0},t)=\beta a(s_{0},t)\). Note that the model can be easily adapted to include richer apical growth models, e.g. sigmoids [35], however, we do not expect any significant qualitative change in the results. On integrating the standard kinematic relation \(\partial\dot{s}/\partial s=\Gamma\) using Eqs. (10) and (11), we obtain \[\dot{s}=c\mathrm{e}^{-\ell/\delta}(\mathrm{e}^{s/\delta}-1), \tag{12}\] with \(c:=\Gamma_{0}\delta\) a characteristic speed; and where \(\ell\) is governed by \[\dot{\ell}=c\left(1-\mathrm{e}^{-\ell/\delta}\right), \tag{13}\] as a particular case of Eq. (12). 
Provided the initial condition \(\ell(0)=\ell_{0}\equiv 1\), the previous equation integrates as \[\ell(t)=\delta\log((\mathrm{e}^{1/\delta}-1)\,\mathrm{e}^{\Gamma_{0}t}+1). \tag{14}\] Integrating Eq. (12) with Eq. (14) then gives \[s(s_{0},t)=\delta\log\left[\frac{1}{2}-\frac{1}{2}\tanh\left(\frac{\Gamma_{0} t}{2}+\frac{1}{2\delta}+\arctan\!\left(1-2\mathrm{e}^{s_{0}/\delta}\right)- \frac{1}{2}\log\left((\mathrm{e}^{1/\delta}-1)\mathrm{e}^{\Gamma_{0}t}+1 \right)\right)\right]. \tag{15}\] Thus, \[f(s_{0},t)=\left[\exp(\frac{ct+1-s_{0}}{\delta})-\mathrm{e}^{\Gamma_{0}t}+1 \right]^{-1}, \tag{16}\] and \[\gamma(s_{0},t)=\left[\left(1-\mathrm{e}^{\Gamma_{0}t}\right)\exp(\frac{s_{0 }-ct-1}{\delta})+1\right]^{-1}. \tag{17}\] In the context of a growing spatial domain, one must differentiate between the material (_Lagrangian_) derivative, denoted with an overdot \(\dot{\mathbf{u}}\), and the _Eulerian_ derivative denoted \(\partial\mathbf{u}/\partial t\), and such that \[\dot{\mathbf{u}}=\frac{\partial\mathbf{u}}{\partial t}+\dot{s}\frac{\partial \mathbf{u}}{\partial s}. \tag{18}\] The vectors \(\mathbf{u}\) and \(\mathbf{w}\) are defined here in the Eulerian sense, namely such that \[\frac{\partial\mathbf{t}}{\partial s}=\mathbf{u}\times\mathbf{t},\quad\frac{ \partial\mathbf{t}}{\partial t}=\mathbf{w}\times\mathbf{t}, \tag{19}\] Figure 8: Numerical linear stability analysis. Density plot showing the value of the largest real part \(\omega_{i}\) of the eigenvalues of Eq. (15) (to generate this plot, the system was re-expressed in terms of the dimensionless time \(\omega t\)). This shows that the dynamics is dominated by a decay rate of order \(\mathrm{e}^{-\beta t}\) as expected from Appendix D.1. with the compatibility condition \[\frac{\partial\mathbf{u}}{\partial t}-\frac{\partial\mathbf{w}}{\partial s}= \mathbf{w}\times\mathbf{u}. \tag{11}\] In contrast, the Lagrangian spin vector, \(\mathbf{p}=\mathbf{w}+\dot{s}\mathbf{u}\), is associated with \[\mathbf{\dot{t}}=\mathbf{p}\times\mathbf{t}. \tag{12}\] The revised governing equations, including growth, are then \[\mathbf{t}^{\prime}=\gamma\mathbf{u}\times\mathbf{t}, \tag{13a}\] \[\mathbf{p}^{\prime}=\gamma(A\mathbf{t}\times\mathbf{k}-B\mathbf{u}),\] (13b) \[\mathbf{\dot{u}}+\mathbf{u}\times\mathbf{p}+\Gamma\mathbf{u}= \mathbf{p}^{\prime}/\gamma, \tag{13c}\] where \((\ )^{\prime}\) denotes a derivative with respect to the Lagrangian coordinate \(s_{0}\). The extra term \(\Gamma\mathbf{u}\) accounts for the passive decrease of curvature due to axial stretch. The presence of the factor \(\gamma\) simply results from the chain rule, as we have expressed the system with respect to \(s_{0}\). ### Solitary waves To derive the shape of self-similar, traveling-front solutions we introduce the co-moving coordinate \(\sigma:=\ell-s\), measuring the arclength from the apex, with the base located at \(\sigma=\ell\rightarrow\infty\). Setting \(\partial\mathbf{u}/\partial t=\mathbf{0}\), Eq. 
(13) becomes upon this change of coordinate: \[\frac{\partial\mathbf{t}}{\partial\sigma}=\mathbf{t}\times\mathbf{u}, \tag{14a}\] \[\frac{\partial\mathbf{p}}{\partial\sigma}=f(\sigma)(\alpha\mathbf{k}\times \mathbf{t}+\beta\mathbf{u}),\] (14b) \[cf(\sigma)\frac{\partial\mathbf{u}}{\partial\sigma}+\frac{\partial\mathbf{p}}{ \partial\sigma}=\mathbf{p}\times\mathbf{u}-\Gamma\mathbf{u}, \tag{14c}\] with the conditions \(\lim\limits_{\sigma\rightarrow\infty}\mathbf{t}=\mathbf{i}\), \(\lim\limits_{\sigma\rightarrow\infty}\mathbf{p}=\omega\mathbf{i}\) and \(\lim\limits_{\sigma\rightarrow\infty}\mathbf{u}=\mathbf{0}\). In practice, the system can be integrated for \(\sigma\in[0,\Sigma]\) with \(\Sigma\gg\delta\), and with boundary conditions expressed at \(\Sigma\). There is however a removable singularity at \(\sigma\rightarrow\infty\), as \(f(\sigma)\) is transcendentally small, which causes numerical difficulties in Eq. (14c). To alleviate this issue, we consider perturbed boundary conditions of the form \(\mathbf{t}(\Sigma)=\mathbf{i}+\mathbf{\epsilon}_{t}(\Sigma)\), \(\mathbf{p}(\Sigma)=\omega\mathbf{i}+\mathbf{\epsilon}_{p}(\Sigma)\), and \(\mathbf{u}(\Sigma)=\mathbf{\epsilon}_{u}(\Sigma)\), where \(\mathbf{\epsilon}_{t}\), \(\mathbf{\epsilon}_{p}\) and \(\mathbf{\epsilon}_{u}\) denote small perturbations from the boundary conditions at \(\sigma=\infty\). Expanding Eq. (14) and keeping only the higher order non-zero terms allows to solve for \(\mathbf{\epsilon}_{t}\), \(\mathbf{\epsilon}_{p}\) and \(\mathbf{\epsilon}_{u}\), in order to express the perturbed boundary values [Fig. 5(b) is obtained with \(\Sigma\approx 5\delta\)]. ## Appendix F Code availability All numerical methods were implemented in _Wolfram Mathematica 13.0_. Source code will be made publicly available upon acceptance of the manuscript for publication. ## Appendix G Supplementary files Movie 1.An example rotating shoot converging towards equilibrium [parameters as in Fig. 4(d), with \(\beta=\omega/5\)]. Movie 2.In the absence of autotropism, a shoot will orbit around a caulinoid [parameters as in Fig. 4(d), with \(\beta=0\)]. Movie 3.Example traveling solution [parameters as in Fig. 5(a), left-hand side simulation]. Movie 4.Uniform growth along a caulinoid [\(\alpha=5\omega\), \(\beta=\omega\), \(\Gamma_{0}=\omega/10\), \(\delta=100\)]. Figure 9: Kymograph showing the apical growth field. Lines show the trajectories of the material points with initial arclength emphasized by colors.
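As a small consistency check of the growth kinematics in Appendix E, one can integrate Eq. (13) numerically and compare with the closed form (14). The sketch below is our own illustration in Python (the paper's code is in Mathematica); the parameter values are arbitrary.

```python
import numpy as np

# Integrate  dl/dt = c*(1 - exp(-l/delta)),  c = Gamma0*delta,  l(0) = 1     [Eq. (13)]
# by forward Euler and compare with the closed form
#            l(t) = delta*log((exp(1/delta) - 1)*exp(Gamma0*t) + 1)          [Eq. (14)]
Gamma0, delta = 0.1, 0.5
c = Gamma0 * delta

t_end, dt = 20.0, 1e-3
l = 1.0
for _ in range(int(t_end / dt)):
    l += dt * c * (1.0 - np.exp(-l / delta))

l_exact = delta * np.log((np.exp(1.0 / delta) - 1.0) * np.exp(Gamma0 * t_end) + 1.0)
print(f"numerical l({t_end:g}) = {l:.6f}   closed form = {l_exact:.6f}")
```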
2309.10879
On integration with respect to filter
Using the concept of a filter, we propose a generalization of the Riemann integral: integration with respect to a filter. We study this notion and demonstrate various properties and phenomena of filter integration.
Dmytro Seliutin
2023-09-19T19:00:53Z
http://arxiv.org/abs/2309.10879v1
# On integration with respect to filter ###### Abstract. Using a concept of filter we propose one generalization of Riemann integral, that is integration with respect to filter. We study this problem, demonstrate different properties and phenomena of filter integration. Key words and phrases:integral, filter, ideal ## 1. Introduction Let us remind main concepts which we use in this paper. Throughout this article \(\Omega\) stand for a non-empty set. Non-empty family of subsets \(\mathfrak{F}\subset 2^{\Omega}\) is called _filter on \(\Omega\)_, if \(\mathfrak{F}\) satisfies the following axioms: 1. \(\emptyset\notin\mathfrak{F}\); 2. if \(A,\ B\in\mathfrak{F}\) then \(A\cap B\in\mathfrak{F}\); 3. if \(A\in\mathfrak{F}\) and \(D\supset A\) then \(D\in\mathfrak{F}\). Also very useful for us is a concept of filter base. Non-empty family of subsets \(\mathfrak{B}\subset 2^{\Omega}\) is called _filter base on \(\Omega\)_, if \(\emptyset\notin\mathfrak{B}\) and for every \(A,\ B\in\mathfrak{B}\) there exists \(C\in\mathfrak{B}\) such that \(C\subset A\cap B\). We say that filter base _generates filter_\(\mathfrak{F}\) if and only if for each \(A\in\mathfrak{F}\) there is \(B\in\mathfrak{B}\) such that \(B\subset A\). Let \(X\) be a topological vector space, \(f:X\to\mathbb{R}\) be a function. For \(t\in X\) denote \(\mathcal{O}(t)\) the family of all neighbourhoods of \(t\). Let \(\mathfrak{F}\) be a filter on \(X\), \(y\in\mathbb{R}\). Function \(f\) is said to be _convergent to \(y\) over filter \(\mathfrak{F}\)_ (denote \(y=\lim\limits_{\mathfrak{F}}f\)), if for each \(U\in\mathcal{O}(y)\) there exists \(A\in\mathfrak{F}\) such that for each \(t\in A\) the following holds true: \(f(t)\in U\). We refers, for example, to [1] for more information about filter and related concepts. The concept of filter is a very powerful tool for studying different properties of general topological vector spaces. For example, in [3] authors study convergence over ideal, generated by the modular function. Ideal is a concept dual to filter. In [2] we study completeness and its generalization using filters. In this article we refer our attention to classical Riemann integral. Let us remind how we can construct this object. Let \([a,b]\subset\mathbb{R}\), let \(f:[a,b]\to\mathbb{R}\) be a continuous function. Denote \(\Pi=\{a\leqslant\xi_{1}\leqslant\xi_{2}\leqslant...\leqslant\xi_{n}=b\}\) the partition of \([a,b]\), in other words, \(\underset{k=1}{n}[\xi_{k-1},\xi_{k}]=[a,b]\). Consider also the set \(T=\{t_{1},t_{2},...,t_{n}\}\) such that for each \(k=1,2,...,n\ t_{k}\in[\xi_{k-1},\xi_{k}]\). Let us call the pair \((\Pi,T)\) by the _tagged partition on the segment_. Denote \(d(\Pi)\)_the diameter of the \(\Pi\)_ - maximum length of \([\xi_{k-1},\xi_{k}]\), where \(k=1,2,...,n\). Let us recall that function \(f\) is said to be _Riemann integrable_ if there exist the limit \(I=\lim\limits_{d(\Pi)\to 0}\sum\limits_{k=1}^{n}f(t_{k})\cdot|\xi_{k}-\xi_{k-1}|\), and we call this limit the Riemann integral of the function \(f\), and write \(I=\int\limits_{a}^{b}f(t)dt\). We know many different properties of this integral, for example linearity, integration on subsegment of \([a,b]\) etc. If we look at the definition of Riemann integral more attentively, we realize that, in fact we can use one special filter and obtain desirable result. In next section we are going to develop this idea. ## 2. Integration with respect to filter Just for simplicity we are going to consider functions, defined on \([0,1]\). 
Let \(f:[0,1]\to\mathbb{R}\) be a function. As above, denote \(\Pi=\{a\leqslant\xi_{1}\leqslant\xi_{2}\leqslant...\leqslant\xi_{n}=b\}\) the partition of \([0,1]\), in other words, \(\underset{k=1}{\overset{n}{\cup}}[\xi_{k-1},\xi_{k}]=[0,1]\). Consider also the set \(T=\{t_{1},t_{2},...,t_{n}\}\) such that for each \(k=1,2,...,n\)\(t_{k}\in[\xi_{k-1},\xi_{k}]\). For \(k=1,2,...,n\) denote \(\Delta_{k}:=|\xi_{k}-\xi_{k-1}|\). Denote also \(\mathrm{TP}[0,1]\) the set of all tagged partition of \([0,1]\). For a tagged partition \((\Pi,T)\in\mathrm{TP}[0,1]\) denote \[S(f,\Pi,T)=\sum\limits_{k=1}^{n}f(t_{k})\Delta_{k}.\] Now we are going to introduce the central definition of this paper **Definition 1**.: Let \(f:[0,1]\to\mathbb{R}\) be a function, \(\mathfrak{F}\) be a filter on \(\mathrm{TP}[0,1]\). We say that \(f\) is _integrable over filter_\(\mathfrak{F}\) (\(\mathfrak{F}\)-integrable for short), if there exists \(I\in\mathbb{R}\) such that \(I=\lim\limits_{\mathfrak{F}}S(f,\Pi,T)\). The number \(I\) is called _the \(\mathfrak{F}\)-integral of the \(f\)_ (denote \(I=\int\limits_{0}^{1}fd\mathfrak{F}\))_._ _Remark 2_.: The fact that \(f\) is \(\mathfrak{F}\)-integrable we will write as follows: \[f\in\mathrm{Int}(\mathfrak{F}).\] _Remark 3_.: Using Definition 1 we can construct the Riemann integral as follows. Let \(\delta>0\) be a real positive number. Denote \[P_{<\delta}=\{(\Pi,T)\in\mathrm{TP}[0,1]:\ d(\Pi)<\delta\},\] where \(d(\Pi)\) stands for diameter of \(\Pi\). Consider now \[\mathfrak{B}_{<\delta}=\{P_{<\delta}:\delta>0\}.\] It is easy to check that \(\mathfrak{B}_{<\delta}\) is a filter base. Denote \(\mathfrak{F}_{<\delta}\) filter generate by \(\mathfrak{B}_{<\delta}\). Let \(f:[0,1]\to\mathbb{R}\) be a function. Then \(f\) is integrable by Riemann if there exists the limit \(\lim\limits_{\mathfrak{F}_{<\delta}}S(f,\Pi,T)\). Bellow we study different properties of filter integration. **Definition 4**.: Let \(X\) be a non-empty set, \(f:X\to\mathbb{R}\) be a function, and \(\mathfrak{F}\) be a filter on \(X\). We say that \(f\) is _bounded with respect to \(\mathfrak{F}\)_ (\(\mathfrak{F}\)-bounded for short), if there is \(C>0\) such that there exists \(A\in\mathfrak{F}\) such that for every \(t\in A\ |f(t)|<C\). The following lemma is very simple, but for readers convenient we present its proof. **Lemma 5**.: _Let \(X\) be a non-empty set, \(f:X\to\mathbb{R}\) be a function, and \(\mathfrak{F}\) be a filter on \(X\). Suppose that there exists \(I\in\mathbb{R}\), \(I=\lim\limits_{\mathfrak{F}}f\). Then \(f\) is \(\mathfrak{F}\)-bounded._ Proof.: We know that \(I=\lim\limits_{\mathfrak{F}}f\). It means that for every \(\varepsilon>0\) there exists \(A\in\mathfrak{F}\) such that for all \(t\in A\ |f(t)-I|<\varepsilon\). Consider \[|f(t)|-|I|\leqslant|f(t)-I|<\varepsilon.\] In other words, \(|f(t)|\leqslant|I|+\varepsilon\). Then just put \(C:=|I|+\varepsilon\). The next theorem generalizes well-know fact about Riemann integral: if function in integrable by Riemann then it's bounded. **Theorem 6**.: _Let \(\mathfrak{F}\) be a filter on TP\([0,1]\), \(f:[0,1]\to\mathbb{R}\) be a function, and \(f\in\text{Int}(\mathfrak{F})\). Then \(S(f,\Pi,T)\) is \(\mathfrak{F}\)-bounded._ Proof.: Just use Lemma 5. Let us formulate well-known fact about Riemann integral, using filters. **Theorem 7**.: _Let \(f:[0,1]\to\mathbb{R}\), there exists \(\lim\limits_{\mathfrak{F}<\delta}S(f,\Pi,T)\). 
Then \(f\) is bounded, in other words, there is \(C>0\) such that for all \(t\in[0,1]\ |f(t)|\leqslant C\)._ The next theorem is natural generalization of the Theorem 7. **Theorem 8**.: _Let \(f:[0,1]\to\mathbb{R}\), let \(\mathfrak{F}\) be a filter on \(TP[0,1]\) such that for every \(A\in\mathfrak{F}\) there exists \(B\in\mathfrak{F}_{<\delta}\) such that \(B\subset A\) and let there exists \(I\in\mathbb{R}\) such that \(I=\lim\limits_{\mathfrak{F}}S(f,\Pi,T)\). Then \(C>0\) such that for each \(t\in[0,1]\) we have \(|f(t)|<C\)._ Proof.: There exists \(I\in\mathbb{R}\) such that \(I=\lim\limits_{\mathfrak{F}}S(f,\Pi,T)\Leftrightarrow\) for all \(\varepsilon>0\) there exists \(A\in\mathfrak{F}\) such that for all \((\Pi,T)\in A\ |S(f,\Pi,T)-I|<\varepsilon\). We know that for \(A\in\mathfrak{F}\) there is \(B\in\mathfrak{F}_{<\delta}\) such that \(B\subset A\), then, particularly, for all \(\varepsilon>0\) there exists \(A\in\mathfrak{F}\) there is \(B\in\mathfrak{F}_{<\delta}\) such that \(B\subset A\) such that for all \((\Pi,T)\in B\ |S(f,\Pi,T)-I|<\varepsilon\Rightarrow\) for all \(\varepsilon>0\) there exists \(B\in\mathfrak{F}_{<\delta}\) such that for all \((\Pi,T)\in B\ |S(f,\Pi,T)-I|<\varepsilon\stackrel{{\text{Theorem \ref{thm:1}}}}{{\Rightarrow}}C>0\) such that for each \(t\in[0,1]\) we have \(|f(t)|<C\), in other words, \(f\) is bounded. Now we are going to demonstrate that filter integration has additive property. To demonstrate this we proof next easy two lemmas. **Lemma 9**.: _Let \(X\) be a non-empty set, \(f,\ g:X\to\mathbb{R}\) be a functions, and \(\mathfrak{F}\) be a filter on \(X\). Let \(x=\lim\limits_{\mathfrak{F}}f\), \(y=\lim\limits_{\mathfrak{F}}g\). Then \(\lim\limits_{\mathfrak{F}}(f+g)=x+y\)._ Proof.: We know that \(x=\lim\limits_{\mathfrak{F}}f\), so for each \(U\in\mathcal{O}(x)\) there is \(A\in\mathfrak{F}\) such that \(f(A)\subset U\). Analogically, \(y=\lim\limits_{\mathfrak{F}}f\), it means that for each \(V\in\mathcal{O}(x)\) there is \(B\in\mathfrak{F}\) such that \(f(B)\subset V\). We have to demonstrate that for each \(W\in\mathcal{O}(x+y)\) there exists \(C\in\mathfrak{F}\) such that \((f+g)(C)\subset W\). Let fix \(W\in\mathcal{O}(x+y)\). Then there exist \(W_{1}\in\mathcal{O}(x)\) and \(W_{2}\in\mathcal{O}(y)\) such that \(W\supset W_{1}+W_{2}\). Then there are \(C_{1},\ C_{2}\in\mathfrak{F}\) such that \(f(C_{1})\subset W_{1}\) and \(f(C_{2})\subset W_{2}\). Denote \(C:=C_{1}\cap C_{2}\). Clearly that \(C\in\mathfrak{F}\). So \[(f+g)(C)=f(C)+g(C)\subset W_{1}+W_{2}\subset W.\] **Lemma 10**.: _Let \(X\) be a non-empty set, \(f:X\to\mathbb{R}\) be a function, \(\mathfrak{F}\) be a filter on \(X\), and \(\alpha\in\mathbb{R}\). Let \(x=\lim\limits_{\mathfrak{F}}f\). Then \(\lim\limits_{\mathfrak{F}}\alpha f=\alpha x\)._ Proof.: \(x=\lim\limits_{\mathfrak{F}}f\), it means that for each \(U\in\mathcal{O}(x)\) there is \(A\in\mathfrak{F}\) such that \(f(A)\subset U\). We have to demonstrate that for all \(V\in\mathcal{O}(\alpha x)\) there is \(B\in\mathfrak{F}\) such that \((\alpha f)(B)\subset V\). Suppose that \(\alpha\neq 0\). The case \(\alpha=0\) is obvious. Remark that if \(W\in\mathcal{O}(x)\) then \(\alpha W\in\mathcal{O}(\alpha x)\). So just put \(B:=A\). Then \((\alpha f)(B)=\alpha f(B)\subset\alpha U\in\mathcal{O}(\alpha x)\). 
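Remark 3 identifies integration over the filter \(\mathfrak{F}_{<\delta}\) with the classical Riemann integral. The short sketch below (our own illustration, not part of the paper) evaluates the sums \(S(f,\Pi,T)\) on uniform tagged partitions with randomly placed tags; as the diameter \(1/n\) shrinks, the sums approach \(\int_0^1 t^2\,dt = 1/3\), as Definition 1 requires for \(\mathfrak{F}_{<\delta}\)-integrability.

```python
import random

def S(f, pts, tags):
    """Riemann sum S(f, Pi, T) for a tagged partition of [0, 1] given by the
    partition points pts and the tags (one tag per subinterval)."""
    return sum(f(t) * (b - a) for a, b, t in zip(pts, pts[1:], tags))

def uniform_tagged_partition(n):
    """A tagged partition (Pi, T) with d(Pi) = 1/n and one random tag per
    subinterval, hence an element of P_{<delta} for every delta > 1/n."""
    pts = [k / n for k in range(n + 1)]
    tags = [random.uniform(pts[k], pts[k + 1]) for k in range(n)]
    return pts, tags

random.seed(0)
f = lambda t: t * t                              # its Riemann integral over [0, 1] is 1/3
for n in (10, 100, 1000, 10000):
    pts, tags = uniform_tagged_partition(n)
    print(f"d(Pi) = 1/{n:<6d} |S(f, Pi, T) - 1/3| = {abs(S(f, pts, tags) - 1/3):.2e}")
```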
**Theorem 11**.: _Let \(\mathfrak{F}\) be a filter on TP\([0,1]\), \(f,g:[0,1]\to\mathbb{R}\) be a functions, \(f\in\text{Int}(\mathfrak{F})\), \(\alpha,\ \beta\in\mathbb{R}\), and \(f\in\text{Int}(\mathfrak{F})\) and \(g\in\text{Int}(\mathfrak{F})\). Then \((\alpha f+\beta g)\in\text{Int}(\mathfrak{F})\)_ Proof.: Just use Lemmas 9 and 10. ## 3. Integration with respect to different filters In the previous section we've studied arithmetic properties of integral over filter and problems deals with boundedness. This section is devoted to integration over different filters and its relations. For \((\Pi,T)\in\text{TP}[0,1]\) and \(t\in T\) we denote \(\Delta(t)\) length of the element of partition of \(\Pi\) which covers \(t\). Let \((\Pi_{1},T_{1}),(\Pi_{2},T_{2})\) be partitions of \([0,1]\). Consider \[\rho((\Pi_{1},T_{1}),(\Pi_{2},T_{2}))=\] \[\sum_{t\in T_{1}\cap T_{2}}|\Delta_{1}(t)-\Delta_{2}(t)|+\sum_{T _{1}\setminus T_{2}}\Delta_{1}(t)+\sum_{T_{2}\setminus T_{1}}\Delta_{2}(t). \tag{3.1}\] For easy using of concept defined in Equation 3.1 consider \(\mathbb{F}:[0,1]\to l_{1}[0,1]\), such that \(\mathbb{F}(t)=e_{t}\), where \[e_{t}(\tau)=\begin{cases}1,\text{if }\tau=t;\\ 0,\text{otherwise}.\end{cases}\] It is clearly then that \[\rho((\Pi_{1},T_{1}),(\Pi_{2},T_{2}))=||S(\mathbb{F},\Pi_{1},T_{1})-S(\mathbb{F}, \Pi_{2},T_{2})||.\] Now we are going to demonstrate that the mapping \(\rho\), defined above, is a metric, or distance between two tagged partitions. **Proposition 1**.: _Consider \(\rho:\text{TP}[0,1]\times\text{TP}[0,1]\to\mathbb{R}\), \(\rho((\Pi_{1},T_{1}),(\Pi_{2},T_{2}))=||S(\mathbb{F},\Pi_{1},T_{1})-S(\mathbb{ F},\Pi_{2},T_{2})||\). Then \(\rho\) satisfies all metric axioms._ Proof.: 1. let \((\Pi_{1},T_{1})=(\Pi_{2},T_{2})\). It is clear that in this case \(\rho((\Pi_{1},T_{1}),(\Pi_{2},T_{2}))=0\); 2. let \(\rho((\Pi_{1},T_{1}),(\Pi_{2},T_{2}))=0\). Then \(\rho((\Pi_{1},T_{1}),(\Pi_{2},T_{2}))=\sum\limits_{t\in T_{1}\cap T_{2}}| \Delta_{1}(t)-\Delta_{2}(t)|+\sum\limits_{T_{1}\setminus T_{2}}\Delta_{1}(t)+ \sum\limits_{T_{2}\setminus T_{1}}\Delta_{2}(t)=0\). We have a sum of non-negative numbers equals to \(0\). This means that * \(\forall t\in T_{1}\cap T_{2}\ |\Delta_{1}(t)-\Delta_{2}(t)|=0\Rightarrow \forall t\in T_{1}\cap T_{2}\ \Delta_{1}(t)=\Delta_{2}(t)\); * \(\forall t\in T_{1}\setminus T_{2}\ \Delta_{1}(t)=0\); * \(\forall t\in T_{2}\setminus T_{1}\ \Delta_{2}(t)=0\); \(\Rightarrow(\Pi_{1},T_{1})=(\Pi_{2},T_{2})\). 3. consider \((\Pi_{1},T_{1}),(\Pi_{2},T_{2}),(\Pi_{3},T_{3})\). Then \[\rho((\Pi_{1},T_{1}),(\Pi_{2},T_{2}))=\] \[||S(\mathbb{F},\Pi_{1},T_{1})-S(\mathbb{F},\Pi_{2},T_{2})+S( \mathbb{F},\Pi_{3},T_{3})-S(\mathbb{F},\Pi_{3},T_{3})||\leqslant\] \[||S(\mathbb{F},\Pi_{1},T_{1})-S(\mathbb{F},\Pi_{3},T_{3})||+||S( \mathbb{F},\Pi_{3},T_{3})-S(\mathbb{F},\Pi_{2},T_{2})||=\] \[\rho((\Pi_{1},T_{1}),(\Pi_{3},T_{3}))+\rho((\Pi_{3},T_{3}),(\Pi_{ 2},T_{2}))\] Now we introduce very important concept. **Definition 12**.: Let \(\mathfrak{F}_{1},\mathfrak{F}_{2}\) be filters on \(TP[0,1]\). We say that \(\mathfrak{F}_{2}\)\(\rho\)-_dominates_ filter \(\mathfrak{F}_{1}\) (\(\mathfrak{F}_{2}\succ_{\rho}\mathfrak{F}_{1}\)), if for every \(\varepsilon<0\) and for each \(A_{1}\in\mathfrak{F}_{1}\) there exists \(A_{2}\in\mathfrak{F}_{2}\) such that for all \((\Pi_{2},T_{2})\in A_{2}\) there is \((\Pi_{1},T_{1})\in A_{1}\) such that \(\rho((\Pi_{1},T_{1}),(\Pi_{2},T_{2}))<\varepsilon\). **Proposition 2**.: _Let \(\mathfrak{F}_{2}\supset\mathfrak{F}_{1}\). 
Then \(\mathfrak{F}_{2}\)\(\rho\)-dominates \(\mathfrak{F}_{1}\)._ Proof.: As \(\mathfrak{F}_{2}\supset\mathfrak{F}_{1}\) we obtain that if \(A\in\mathfrak{F}_{1}\) then \(A\in\mathfrak{F}_{2}\). Consider an arbitrary \(\varepsilon>0\). Then for every \(A_{1}\in\mathfrak{F}_{1}\) there is \(A_{2}\in\mathfrak{F}_{2}\), \(A_{2}:=A_{1}\), such that for each \((\Pi_{2},T_{2})\in A_{2}\) there exists \((\Pi_{1},T_{1})\in A_{1}\), \((\Pi_{1},T_{1}):=(\Pi_{2},T_{2})\), such that \(\rho\left((\Pi_{1},T_{1}),(\Pi_{2},T_{2})\right)=\rho\left((\Pi_{2},T_{2}),(\Pi_{2},T_{2})\right)=0<\varepsilon\). The previous proposition shows that \(\rho\)-dominance generates an order relation on the filters on \(\text{TP}[0,1]\) and is a more general concept than the relation of inclusion. It is clear that if \(\mathfrak{F}_{1}\subset\mathfrak{F}_{2}\) and \(f\in\text{Int}(\mathfrak{F}_{1})\) then \(f\in\text{Int}(\mathfrak{F}_{2})\) - just use the definition of the limit of a function over a filter. So we can formulate the next easy proposition. **Proposition 3**.: _Let \(f:[0,1]\to\mathbb{R}\) be a function, \(\mathfrak{F}_{1},\ \mathfrak{F}_{2}\) be filters on TP\([0,1]\) such that \(\mathfrak{F}_{1}\subset\mathfrak{F}_{2}\) and \(f\in\text{Int}(\mathfrak{F}_{1})\). Then \(f\in\text{Int}(\mathfrak{F}_{2})\)._ **Theorem 13**.: _Let \(\mathfrak{F}_{1},\mathfrak{F}_{2}\) be filters on \(TP[0,1]\). Let \(f:[0,1]\to\mathbb{R}\) be a bounded function. Let \(I=\lim\limits_{\mathfrak{F}_{1}}S(f,\Pi,T)\) and \(\mathfrak{F}_{2}\succ_{\rho}\mathfrak{F}_{1}\). Then \(I=\lim\limits_{\mathfrak{F}_{2}}S(f,\Pi,T)\)._ Proof.: Denote \(C:=\sup\limits_{t\in[0,1]}|f(t)|\). We have to prove that for every \(\varepsilon>0\) there exists \(B\in\mathfrak{F}_{2}\) such that for each \((\Pi_{B},T_{B})\in B\) we have \(|S(f,\Pi_{B},T_{B})-I|<\varepsilon\). We know that for every \(\varepsilon>0\) there exists \(A\in\mathfrak{F}_{1}\) such that for each \((\Pi_{1},T_{1})\in A\) we have \(|S(f,\Pi_{1},T_{1})-I|<\varepsilon\). Now for an arbitrary \(\varepsilon>0\) and the set \(A\in\mathfrak{F}_{1}\) found above one can find \(A_{2}\in\mathfrak{F}_{2}\) such that for all \((\Pi_{2},T_{2})\in A_{2}\) there is \((\Pi_{1},T_{1})\in A\) such that \(\rho((\Pi_{1},T_{1}),(\Pi_{2},T_{2}))<\varepsilon\). Then put \(B:=A_{2}\). Then for all \((\Pi_{B},T_{B})\in B\) there is \((\Pi_{1},T_{1})\in A\) such that \[|S(f,\Pi_{B},T_{B})-I|=\] \[|S(f,\Pi_{B},T_{B})-S(f,\Pi_{1},T_{1})+S(f,\Pi_{1},T_{1})-I|\leqslant\] \[|S(f,\Pi_{B},T_{B})-S(f,\Pi_{1},T_{1})|+|S(f,\Pi_{1},T_{1})-I|<\] \[\sum_{t\in T_{B}\cap T_{1}}|f(t)|\cdot|\Delta_{B}(t)-\Delta_{1}(t)|+\sum_{t\in T_{B}\setminus T_{1}}|f(t)|\cdot\Delta_{B}(t)+\] \[\sum_{t\in T_{1}\setminus T_{B}}|f(t)|\cdot\Delta_{1}(t)+\varepsilon\leqslant C\cdot\rho((\Pi_{B},T_{B}),(\Pi_{1},T_{1}))+\varepsilon\leqslant\] \[C\varepsilon+\varepsilon=\varepsilon(1+C).\] ## 4. Exactly tagged filters In this part of our paper we consider problems dealing with filter integration of unbounded functions. **Definition 14**.: Let \(\mathfrak{B}\) be a filter base on \(TP[0,1]\). We say that \(\mathfrak{B}\) is _exactly tagged_ if there exists \(A\subset[0,1]\), a strictly decreasing sequence of numbers, such that for each \(B\in\mathfrak{B}\) and for every \((\Pi,T)\in B\) we have that \(T\cap A=\emptyset\). **Definition 15**.: We say that a filter \(\mathfrak{F}\) on \(TP[0,1]\) is _exactly tagged_ if there exists an exactly tagged base \(\mathfrak{B}\) of \(\mathfrak{F}\). 
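As a side illustration of the distance \(\rho\) between tagged partitions introduced in (3.1) and Proposition 1, here is a small numerical sketch. The helper names and the toy partitions are illustrative choices of mine; a tag lying exactly on a partition point is, by convention, assigned to the subinterval on its right.

```python
import numpy as np

def tag_lengths(points, tags):
    """Map each tag t to Delta(t), the length of the subinterval of Pi covering t."""
    points = np.asarray(points)
    out = {}
    for t in tags:
        # index of the covering subinterval; clip so that a tag at 1.0 belongs to the last one
        i = min(np.searchsorted(points, t, side="right") - 1, len(points) - 2)
        out[t] = points[i + 1] - points[i]
    return out

def rho(part1, part2):
    """The distance (3.1) between two tagged partitions (Pi_1, T_1) and (Pi_2, T_2)."""
    d1, d2 = tag_lengths(*part1), tag_lengths(*part2)
    common = d1.keys() & d2.keys()
    return (sum(abs(d1[t] - d2[t]) for t in common)
            + sum(d1[t] for t in d1.keys() - common)
            + sum(d2[t] for t in d2.keys() - common))

# Two tagged partitions of [0,1]; they share only the tag 0.25.
p1 = ([0.0, 0.5, 1.0], [0.25, 0.75])
p2 = ([0.0, 0.25, 0.5, 1.0], [0.1, 0.25, 0.9])
print(rho(p1, p1))   # 0.0
print(rho(p1, p2))   # |0.5 - 0.25| + 0.5 + (0.25 + 0.5) = 1.5
```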
**Theorem 16**.: _If a filter \(\mathfrak{F}\) on \(TP[0,1]\) is exactly tagged then there exists an unbounded function \(f:[0,1]\to\mathbb{R}\) such that \(f\in Int(\mathfrak{F})\)._ Proof.: Denote \(\mathbb{N}^{-1}=\left\{\frac{1}{n}\right\}_{n\in\mathbb{N}}\) and consider the following filter base \(\mathfrak{B}=(B_{n})_{n\in\mathbb{N}}\) on \(TP[0,1]\): \(B_{1}=\left\{(\Pi,T):T\cap\mathbb{N}^{-1}=\emptyset\text{ and }d(\Pi)<1\right\}\); \[B_{2} =\bigg{\{}(\Pi,T):T\cap\mathbb{N}^{-1}=\emptyset\text{ and }d(\Pi)<\frac{1}{2}\bigg{\}};\] \[B_{3} =\bigg{\{}(\Pi,T):T\cap\mathbb{N}^{-1}=\emptyset\text{ and }d(\Pi)<\frac{1}{3}\bigg{\}};\] \[...\] \[B_{m} =\bigg{\{}(\Pi,T):T\cap\mathbb{N}^{-1}=\emptyset\text{ and }d(\Pi)<\frac{1}{m}\bigg{\}}.\] Consider now \[f(t)=\begin{cases}n,\text{ if }t=\frac{1}{n},\ n\in\mathbb{N}\\ 0,\text{ otherwise}\end{cases}.\] Then for each \(n\in\mathbb{N}\) and for every \((\Pi,T)\in B_{n}\) we have that \(S(f,\Pi,T)=0\), so \(\lim\limits_{\mathfrak{B}}S(f,\Pi,T)=0\). For a tagged partition \((\Pi,T)\) of \([0,1]\) and \(\tau\in[0,1]\) denote by \(\ell(\Pi,T,\tau)\) the number which is equal to the length of the segment \(\Delta\in\Pi\) for which \(\tau\in\Delta\), if \(\tau\in T\). If \(\tau\notin T\), we put \(\ell(\Pi,T,\tau)=0\). In this notation \[S(f,\Pi,T)=\sum\limits_{t\in[0,1]}f(t)\ell(\Pi,T,t).\] **Theorem 17**.: _For a filter \(\mathfrak{F}\) on \(TP[0,1]\) the following assertions are equivalent:_ 1. _There exists an unbounded function_ \(f:[0,1]\to[0,+\infty)\) _such that_ \(S(f,\Pi,T)\) _is_ \(\mathfrak{F}\)_-bounded;_ 2. _There exists a countable subset_ \(\{t_{n}\}_{n\in\mathbb{N}}\subset[0,1]\) _such that there is_ \(A\in\mathfrak{F}\) _such that for every_ \((\Pi,T)\in A\)__ \[\sum\limits_{n\in\mathbb{N}}n\cdot\ell(\Pi,T,t_{n})<1.\] Proof.: **(1)\(\Rightarrow\)(2)**: Let \(f\) be a non-negative, unbounded function on \([0,1]\) such that there is \(C>0\) and \(B\in\mathfrak{F}\) such that for each \((\Pi,T)\in B\) we have \(\sum\limits_{t\in[0,1]}f(t)\cdot\ell(\Pi,T,t)<C\). As \(f\) is unbounded, there exists \((\alpha_{n})\subset[0,1]\) such that for every \(n\in\mathbb{N}\)\(f(\alpha_{n})\geqslant Cn\). Then, taking \(A:=B\in\mathfrak{F}\), for all \((\Pi,T)\in A\) we obtain: \[\sum\limits_{n\in\mathbb{N}}n\cdot\ell(\Pi,T,\alpha_{n})\leqslant\sum\limits_{n\in\mathbb{N}}\frac{f(\alpha_{n})}{C}\cdot\ell(\Pi,T,\alpha_{n})\leqslant\] \[\frac{1}{C}\sum\limits_{t\in[0,1]}f(t)\cdot\ell(\Pi,T,t)<\frac{1}{C}\cdot C=1,\] so (2) holds with \(t_{n}:=\alpha_{n}\). **(2)\(\Rightarrow\)(1)**: Suppose there exists a countable subset \(\{t_{n}\}_{n\in\mathbb{N}}\subset[0,1]\) such that there is \(A\in\mathfrak{F}\) such that for every \((\Pi,T)\in A\)\(\sum\limits_{n\in\mathbb{N}}n\cdot\ell(\Pi,T,t_{n})<1\). Consider the function \[f(t)=\begin{cases}n,\text{ if }t=t_{n},\ n\in\mathbb{N}\\ 0,\text{ if }t\neq t_{n}\text{ for all }n\end{cases}.\] Obviously, \(f(t)\) is unbounded. Then, taking \(B:=A\in\mathfrak{F}\), for every \((\Pi,T)\in B\) \[\sum_{t\in[0,1]}f(t)\cdot\ell(\Pi,T,t) \leqslant\sum_{n\in\mathbb{N}}f(t_{n})\cdot\ell(\Pi,T,t_{n})\leqslant\] \[\sum_{n\in\mathbb{N}}n\cdot\ell(\Pi,T,t_{n})<1,\] so \(S(f,\Pi,T)\) is \(\mathfrak{F}\)-bounded. ## 5. 
Integration over filter on a subsegment Our next goal is as follows: if a function \(f\) is integrable on \([0,1]\) over a filter \(\mathfrak{F}\) on \(\operatorname{TP}[0,1]\) then for an arbitrary \([\alpha,\beta]\subset[0,1]\) the function \(f\) is integrable on \([\alpha,\beta]\) over the filter \(\mathfrak{F}\). To achieve this purpose we need to construct a restriction of the filter \(\mathfrak{F}\) to a subsegment \([\alpha,\beta]\subset[0,1]\). Now we present how such a restriction can be constructed. Consider an arbitrary \([\alpha,\beta]\subset[0,1]\). We consider only \(T\) such that \(T\cap(\alpha,\beta)\neq\emptyset\). Consider an arbitrary \((\Pi,T)\in TP[0,1]\). We have four cases: 1. \(\min\{T\cap(\alpha,\beta)\}>\min\{\Pi\cap(\alpha,\beta)\}\) \(\max\{T\cap(\alpha,\beta)\}<\max\{\Pi\cap(\alpha,\beta)\}\); 2. \(\min\{T\cap(\alpha,\beta)\}>\max\{\Pi\cap(0,\alpha)\}\) \(\max\{T\cap(\alpha,\beta)\}<\max\{\Pi\cap(\alpha,\beta)\}\); 3. \(\min\{T\cap(\alpha,\beta)\}>\min\{\Pi\cap(\alpha,\beta)\}\) \(\max\{T\cap(\alpha,\beta)\}<\min\{\Pi\cap(\beta,1)\}\); 4. \(\min\{T\cap(\alpha,\beta)\}>\max\{\Pi\cap(0,\alpha)\}\) \(\max\{T\cap(\alpha,\beta)\}<\min\{\Pi\cap(\beta,1)\}\). We have to construct a restriction of \((\Pi,T)\) to \([\alpha,\beta]\). In each of the four described cases we obtain such a \((\Pi_{k},T_{k})\in TP[\alpha,\beta]\), \(k=1,2,3,4\): 1. \(\Pi_{1}=\left(\Pi\setminus\left((\Pi\cap[0,\alpha))\cup(\Pi\cap(\beta,1))\cup\min\{\Pi\cap(\alpha,\beta)\}\cup\max\{\Pi\cap(\alpha,\beta)\}\right)\right)\cup\{\alpha,\beta\}\) \(T_{1}=T\setminus\left((T\cap[0,\alpha))\cup(T\cap(\beta,1])\right)\); 2. \(\Pi_{2}=\left(\Pi\setminus\left((\Pi\cap[0,\alpha))\cup\max\{\Pi\cap(\alpha,\beta)\}\cup(\Pi\cap(\beta,1))\right)\right)\cup\{\alpha,\beta\}\) \(T_{2}=T_{1}\); 3. \(\Pi_{3}=\left(\Pi\setminus\left((\Pi\cap[0,\alpha))\cup\min\{\Pi\cap(\alpha,\beta)\}\cup(\Pi\cap[\beta,1))\right)\right)\cup\{\alpha,\beta\}\) \(T_{3}=T_{1}\); 4. \(\Pi_{4}=\left(\Pi\setminus\left((\Pi\cap[0,\alpha))\cup(\Pi\cap[\beta,1))\right)\right)\cup\{\alpha,\beta\}\) \(T_{4}=T_{1}\). Now if we have an arbitrary filter \(\mathfrak{F}\) on \(TP[0,1]\) we can construct a filter \(\mathfrak{F}_{[\alpha,\beta]}\) on \(TP[\alpha,\beta]\), induced by \(\mathfrak{F}\), in the following way: consider an arbitrary \(A\in\mathfrak{F}\) and for each \((\Pi,T)\in A\) apply the algorithm described above. For each \(A\in\mathfrak{F}\) denote by \(A^{\beta}_{\alpha}\) the restriction of \(A\) to \([\alpha,\beta]\) described above. **Definition 18**.: Let \(\mathfrak{F}\) be a filter on \(TP[0,1]\), \([\alpha,\beta]\subset[0,1]\). We call the filter \(\mathfrak{F}\)_\([\alpha,\beta]\)-complemented_ if for each \(A\in\mathfrak{F}\), for every \((\Pi_{1},T_{1}),\ (\Pi_{2},T_{2})\in A^{\beta}_{\alpha}\) there exist \((\Pi^{*},T^{*})\in TP[0,\alpha]\) and \((\Pi^{**},T^{**})\in TP[\beta,1]\) such that \[(\Pi^{*},T^{*})\cup(\Pi_{1},T_{1})\cup(\Pi^{**},T^{**})\in A,\] \[(\Pi^{*},T^{*})\cup(\Pi_{2},T_{2})\cup(\Pi^{**},T^{**})\in A.\] Here we present the promised result about filter integration on a subsegment. **Theorem 19**.: _Let \(f:[0,1]\rightarrow\mathbb{R}\), \(\mathfrak{F}\) be a filter on \(TP[0,1]\) such that for each \([\alpha,\beta]\subset[0,1]\)\(\mathfrak{F}\) is \([\alpha,\beta]\)-complemented. Let \(f\) be integrable on \([0,1]\) with respect to \(\mathfrak{F}\). 
Then for every \([\alpha,\beta]\subset[0,1]\)\(f\) is integrable on \([\alpha,\beta]\) with respect to \(\mathfrak{F}\)._ Proof.: We know that for an arbitrary \(\varepsilon>0\) there exists \(A\in\mathfrak{F}\) such that for all \((\Pi_{1},T_{1}),(\Pi_{2},T_{2})\in A\) we have: \(|S(f,\Pi_{1},T_{1})-S(f,\Pi_{2},T_{2})|<\varepsilon\). Let us fix \(\varepsilon>0\) and consider an arbitrary \([\alpha,\beta]\subset[0,1]\). For \(A\in\mathfrak{F}\) consider arbitrary \((\Pi^{1},T^{1}),(\Pi^{2},T^{2})\in A^{\beta}_{\alpha}\). As \(\mathfrak{F}\) is \([\alpha,\beta]\)-complemented we can find \((\Pi^{*},T^{*})\in TP[0,\alpha]\) and \((\Pi^{**},T^{**})\in TP[\beta,1]\) such that \((\Pi_{11},T_{11}):=(\Pi^{*},T^{*})\cup(\Pi^{1},T^{1})\cup(\Pi^{**},T^{**})\in A\) and \((\Pi_{22},T_{22}):=(\Pi^{*},T^{*})\cup(\Pi^{2},T^{2})\cup(\Pi^{**},T^{**})\in A\). Then \(\varepsilon>|S(f,\Pi_{11},T_{11})-S(f,\Pi_{22},T_{22})|=|S(f,\Pi^{1},T^{1})-S(f,\Pi^{2},T^{2})|\), since the contributions of the common parts \((\Pi^{*},T^{*})\) and \((\Pi^{**},T^{**})\) cancel. Hence the sums \(S(f,\cdot,\cdot)\) satisfy the Cauchy condition on the restrictions \(A^{\beta}_{\alpha}\), which gives the integrability of \(f\) on \([\alpha,\beta]\). **Acknowledgment.** This paper is partially supported by a grant from the Akhiezer Foundation (Kharkiv). The author is thankful to his parents for their support and to his scientific adviser, Professor Vladimir Kadets, for his constant help with this project. The author also thanks the Defense Forces of Ukraine for the defence and the fight against the Russian aggressors.
2306.00237
Persistence of chimera states and the challenge for synchronization in real-world networks
The emergence of order in nature manifests in different phenomena, with synchronization being one of the most representative examples. Understanding the role played by the interactions between the constituting parts of a complex system in synchronization has become a pivotal research question bridging network science and dynamical systems. Particular attention has been paid to the emergence of chimera states, where subsets of synchronized oscillations coexist with asynchronous ones. Such coexistence of coherence and incoherence is a perfect example where order and disorder can persist in a long-lasting regime. Although considerable progress has been made in recent years to understand such coherent and (coexisting) incoherent states, how they manifest in real-world networks remains to be addressed. Based on a symmetry-breaking mechanism, in this paper, we shed light on the role that non-normality, a ubiquitous structural property of real networks, has in the emergence of several diverse dynamical phenomena, e.g., amplitude chimeras or oscillon patterns. Specifically, we demonstrate that the prevalence of source or leader nodes in networks leads to the manifestation of phase chimera states. Throughout the paper, we emphasize that non-normality poses ongoing challenges to global synchronization and is instrumental in the emergence of chimera states.
Riccardo Muolo, Joseph D. O'Brien, Timoteo Carletti, Malbor Asllani
2023-05-31T23:17:21Z
http://arxiv.org/abs/2306.00237v1
# Persistence of chimera states and the challenge for synchronization in real-world networks ###### Abstract **The emergence of order in nature manifests in different phenomena, with synchronization being one of the most representative examples. Understanding the role played by the interactions between the constituting parts of a complex system in synchronization has become a pivotal research question bridging network science and dynamical systems. Particular attention has been paid to the emergence of chimera states, where subsets of synchronized oscillations coexist with asynchronous ones. Such coexistence of coherence and incoherence is a perfect example where order and disorder can persist in a long-lasting regime. Although considerable progress has been made in recent years to understand such coherent and (coexisting) incoherent states, how they manifest in real-world networks remains to be addressed. Based on a symmetry-breaking mechanism, in this paper, we shed light on the role that non-normality, a ubiquitous structural property of real networks, has in the emergence of several diverse dynamical phenomena, e.g., amplitude chimeras or oscillon patterns. Specifically, we demonstrate that the prevalence of source or leader nodes in networks leads to the manifestation of phase chimera states. Throughout the paper, we emphasize that non-normality poses ongoing challenges to global synchronization and is instrumental in the emergence of chimera states.** ## I Introduction Many natural and artificial systems are made by numerous interacting entities that exhibit collective dynamics which cannot be simply inferred as the sum of the parts [1]. Such a structural peculiarity has given rise to the complexity science where systems are modelled through networks of interacting individuals [2]. One of the most emblematic emergent behaviors is synchronization, characterized by coherent oscillations of the basic interconnected dynamical entities, by which the system is composed [3; 4]. Such a phenomenon is observed in many settings in nature, from the synchronous firing of neurons in the brain [5] to the simultaneous flashing of fireflies during mating season [6]. It has also been proven crucial in the functioning of various human-made systems, including power grids [7] and communication networks [8], to name a few examples. A prominent model to study the synchronization problem was introduced by Kuramoto [9; 10], based on the presence of phase variables, i.e., angles, associated to an ensemble of coupled oscillators whose dynamical behavior can be controlled by varying the coupling strength and/or the network topology. Interestingly, as the coupling strength goes beyond a given threshold or the network satisfies specific conditions in terms of links density or interactions topology, the system goes through a phase transition from an asynchronous, i.e., the angles evolutions are uncorrelated with one another, to a partial or fully synchronized regime, i.e., where the oscillators behave in unison. Latter, Kuramoto & Battogtokh, observed a fascinating behavior of the model: under very specific setting of parameters and initial conditions, coherent and incoherent states can simultaneously coexist [11]. Such peculiar phenomenon, subsequently baptized _chimera state_ by Abrams & Strogatz [12], (inspired by the mythological creature Chimera whose body was composed by parts of different animals), triggered an effervescent interest of the scientific community which continues until the present day. 
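As a concrete point of reference for the synchronization transition recalled above, the following is a minimal numerical sketch of the Kuramoto model with all-to-all coupling; the parameter values, the Euler integration, and the function name are illustrative choices of mine and are not taken from the papers cited here.

```python
import numpy as np

def kuramoto(theta0, omega, K, A, dt=0.01, steps=5000):
    """Euler integration of d(theta_i)/dt = omega_i + (K/N) * sum_j A_ij sin(theta_j - theta_i)."""
    theta, N = theta0.copy(), len(theta0)
    for _ in range(steps):
        phase_diff = np.sin(theta[None, :] - theta[:, None])   # entry (i, j) = sin(theta_j - theta_i)
        theta += dt * (omega + (K / N) * np.sum(A * phase_diff, axis=1))
    return theta

rng = np.random.default_rng(1)
N = 100
A = np.ones((N, N)) - np.eye(N)                 # all-to-all coupling
omega = rng.normal(0.0, 0.5, N)                 # natural frequencies
theta0 = rng.uniform(0, 2 * np.pi, N)
for K in (0.1, 0.5, 2.0):                       # below and above the synchronization threshold
    theta = kuramoto(theta0, omega, K, A)
    r = abs(np.mean(np.exp(1j * theta)))        # order parameter: ~0 incoherent, ~1 coherent
    print(f"K={K}: r={r:.2f}")
```

The jump of the order parameter \(r\) as the coupling \(K\) crosses its critical value is the phase transition from asynchrony to (partial) synchrony described in the text.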
The main reason is that chimeras are one of the handful of examples where order represented by synchrony and disorder by asynchrony coexist simultaneously. The study of chimera states has been a prolific topic of research in the past 20 years [13] and many different kinds of such states have been discovered and the original idea further generalized, but two features seem to be common for all them: chimera is a long-lasting but still transient state, i.e., they fade away after a finite amount of time, and they are not robust with respect to the choice of initial conditions [14]. Both these aspects have inspired researchers to look for alternative ways to produce stable chimera states in a broader and less restrictive environment. Achieving such a goal is of paramount importance in order to explain the in creasingly frequent occurrence of chimera states in scenarios where coherent-incoherent patterns appear to be widespread. The first of them is the case of brain dynamics where researchers have shown that neuronal networks manifest simultaneously coherent and incoherent synchronization in the fMRI detected brain activity [15]. Another very recent result is that related to the flashing of fireflies where, contrary to common belief, partial synchrony and chimera states do exist [16]. Based on these premises, in this paper we propose a theory for the emergence of coherent-incoherent patterns grounded on non-normality, an ubiquitous structural property of real-world networks for which the adjacency matrix (or other related operators) are (highly) non-normal, i.e., \(\mathbf{A}\mathbf{A}^{T}\neq\mathbf{A}^{T}\mathbf{A}\)[17]. Using a systematic method recently introduced [18; 19], we show that robust chimera states naturally arise in empirical networks. We emphasize that such patterns are reminiscent of spectral features of real networks, directly inherited by the strong non-normality known to characterize such networks [20; 21]. Significantly, non-normality is pervasive in both natural and human-made networks spanning from the microscopic (including neuronal [22; 23; 24; 25; 26; 27; 28; 29; 30], genetic [31; 32; 33; 34; 35], metabolic [36], protein-protein interactions [37], etc.,) to the macroscopic world (offline and online social networks [38; 39; 40; 41; 42; 43; 44; 45; 46], food webs [47; 48; 49; 50; 51; 52; 53; 54; 55; 56; 57; 58; 59; 60; 61; 62; 63; 64; 65], animal interactions [66; 67; 68; 69; 70; 71; 72; 73], informational [74; 75; 76; 77; 78; 79; 80; 81; 82; 83], economical [79; 80; 83; 84; 85; 86; 87; 88; 89; 90], etc). In recent years, the scientific community has recognized the importance of non-normal networks, and has highlighted their impact in the dynamics of complex systems [20; 21; 84; 85; 86; 87; 89; 91; 90]. It is well known that non-normality strongly influences the dynamical behavior in the linear regime. For instance, the non-orthogonality of the eigenvectors of a stable non-normal linear system drives the orbits far from the starting condition by following a transient growth which precedes the asymptotic exponential decay due to the stability assumption. In particular, it has been shown that the basin of attraction of homogeneous stationary and periodic oscillatory states drastically reduces its size, hence the transient growth and the stabilizing action of the nonlinearities provide an explanation for the emergence of new stable equilibria [84; 85; 89]. 
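The transient amplification mentioned above can be reproduced with a two-dimensional toy example: both matrices below are linearly stable with the same eigenvalues, but only the non-normal one shows a large transient growth before the asymptotic decay. The matrices are illustrative placeholders of mine and are not taken from any of the cited networks.

```python
import numpy as np
from scipy.linalg import expm

# Two stable matrices with identical eigenvalues (-1, -1): one normal, one strongly non-normal.
normal = np.array([[-1.0, 0.0],
                   [0.0, -1.0]])
non_normal = np.array([[-1.0, 50.0],      # the large off-diagonal entry makes A A^T != A^T A
                       [0.0, -1.0]])

print(np.allclose(non_normal @ non_normal.T, non_normal.T @ non_normal))   # False: non-normal

x0 = np.array([0.0, 1.0])
for t in (0.0, 0.5, 1.0, 2.0, 5.0):
    amp_normal = np.linalg.norm(expm(normal * t) @ x0)
    amp_non_normal = np.linalg.norm(expm(non_normal * t) @ x0)
    print(f"t={t}: normal {amp_normal:.3f}, non-normal {amp_non_normal:.3f}")
# The non-normal system is transiently amplified by more than an order of magnitude before decaying.
```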
Such peculiar dynamical behavior advises handling linear stability methods such as the Master Stability Function [91] with particular care [89]. The mathematical approach we use in this paper to show the omnipresence of coexisting coherent-incoherent states is rooted in the fact that the strong non-normality of complex networks is immediately related to a strong directedness of their structure [20], which translates into an almost triangular shape of the adjacency matrix after suitably relabeling the nodes, i.e., by performing an invariant permutation of its rows and columns [20; 21]. From here, it can be readily demonstrated that such an (almost) triangular matrix will also possess an (almost) triangular matrix of eigenvectors (see Methods for a mathematical derivation). Based on a symmetry-breaking mechanism (schematically represented in Fig. 1\(a\)), we first show that sufficiently small perturbations around a fully synchronized unstable state, evolve from it following in the linear regime the unstable eigenvectors, namely the ones corresponding to the unstable modes. Before we enter into the details of the method, we want to emphasize that our definition of chimera state will be based on the presence of at least a subset of nodes of ordered behavior and at least a subset with a disordered one. In this paper by _ordered_, we mean that some of the network's nodes share simultaneously the same values of a specific observable, either the amplitude or the phase, defined accordingly to the case under consideration. In the following we will consider only cases where the network is split into at least one cluster of nodes with an oscillatory state and the remaining ones with a different dynamics. Now coming back to our problem, due to the nearly triangular structure of the matrix operators, the entries of its eigenvectors contain many zeros and generally randomly scattered non-zero entries. Hence the perturbations corresponding to the zero entries, will remain near the original limit cycle while the others grow substantially with considerably different amplitudes. Once the amplitudes of these nodes reach a certain threshold, the nonlinear terms enter into action by "freezing" such linear growth and consequently giving life to the amplitude chimera state [92], characterized by the presence of in-phase oscillations but with significantly different amplitudes. The merits of the approach is that the final patterns, alongside being predictable, are also stable, following known results in pattern formation theory and at the same time require only setting the parameters of the model but otherwise are independent of the initial conditions [19]. Non-normality in real complex systems manifests itself in other forms characterized by particular local structural features. In [21], authors have shown that once the network non-normality reaches a given threshold, all the terminal Strongly Connected Component (SCC) within the system are replaced by _leader_ nodes, namely nodes with only incoming (sink) or outgoing (source) edges (see Fig. 1\(b\))). The coexistence of both strongly synchronized clusters (SSCs) and leader nodes has been found to be highly improbable in all 124 networks studied in the dataset, with only one exception [21]. Therefore, we consider the presence of leader nodes as a distinctive characteristic indicating strong non-normality. 
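Leader nodes can be read off directly from the adjacency matrix: a source has no incoming edges and a sink has no outgoing ones. Below is a small bookkeeping sketch of mine, using the convention adopted later in Eq. (1) that \(A_{ij}=1\) denotes an edge from node \(j\) to node \(i\); the toy graph is only an example.

```python
import numpy as np

def leader_nodes(A):
    """Classify leader nodes of a directed graph.

    Convention: A[i, j] = 1 means an edge from node j to node i, so
    in-degree = row sum and out-degree = column sum.
    """
    A = np.asarray(A)
    k_in, k_out = A.sum(axis=1), A.sum(axis=0)
    sources = np.where((k_in == 0) & (k_out > 0))[0]   # only outgoing edges
    sinks = np.where((k_out == 0) & (k_in > 0))[0]     # only incoming edges
    return sources, sinks

# Small directed example: node 0 only sends, node 3 only receives.
A = np.array([[0, 0, 0, 0],
              [1, 0, 1, 0],
              [1, 0, 0, 0],
              [0, 1, 1, 0]])
src, snk = leader_nodes(A)
print("source leaders:", src)   # [0]
print("sink leaders:", snk)     # [3]
```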
Based on this concept, we show that in a strongly stable regime, source (leader) nodes will retain any initial phase perturbation assigned to them as being directly uncoupled from the rest of the network. The remaining nodes instead may potentially absorb not only all the perturbations they had initially, but also those coming from the source nodes. Consequently, leader nodes will act as a set of phase-disordered oscillators, while the remaining nodes in the network may eventually form a synchronized cluster. In this scenario, all nodes will maintain a constant amplitude without undergoing any changes. The coexistence of partially organized patterns similar to the chimera scenario has been previously observed also on continuous support. This is the case of oscillon patterns which occur in granular media and are basically localized oscillatory particles, not necessarily synchronized states, surrounded by a neighborhood of uniform stationary ones [93, 94]. They have also been observed in networked systems [95]. In this paper, we show that such states arise naturally in real networks, however, in contrast with the previous scenario where node dynamics are oscillatory, we hereby assume the nodes' variables initially sit on a stationary equilibrium. Through the same mechanism described above, following an oscillatory instability the nodes of the network will begin to oscillate and then stabilize because of the nonlinearities. The contribution of the non-normal network is given by the eigenvectors of the coupling matrix which will localize such oscillations in a subset of nodes, corresponding to the non-zero entries, resulting in the emergence of an oscillon state. Before we move to the mathematical treatment of our study, we find it necessary to emphasize that while we predominantly employ a symmetry-breaking method to address the emergence of chimera states in this paper, the results extend qualitatively similarly for parameters beyond the technical validity of the method. In the Supplementary Material (SM), we show that even for larger perturbations, distinct chimera patterns resembling the shape of the unstable eigenvectors can still occur. ## II Symmetry-breaking method We start by considering a coupled autonomous dynamical system composed of \(N\) variables \(\mathbf{x}_{i}=\left(x_{i}^{(1)},x_{i}^{(2)},\ldots,x_{i}^{(N)}\right)\), hereafter referred to as observables or species, associated to the \(i\)-th node, for \(i=1,\ldots,\Omega\), and evolving because of the coupling given by: \[\frac{d\,x_{i}^{(\gamma)}}{dt}=f_{\gamma}(\mathbf{x}_{i})+D_{\gamma}\sum_{j=1}^{\Omega}A_{ij}\left(x_{j}^{(\gamma)}-x_{i}^{(\gamma)}\right), \tag{1}\] \[\forall i=1,\ldots,\Omega\,\,\,\text{and}\,\,\,\forall\gamma=1,\ldots,N\] where \(f_{\gamma}(\cdot)\) stands for the nonlinear term of interaction of the \(N\) variables associated to the \(\gamma-\)th equation, \(A_{ij}\) represents the \((i,j)\) entry of the adjacency matrix encoding the coupling topology, i.e., \(A_{ij}=1\) if there is an edge from \(j\) to \(i\) and zero otherwise, and \(D_{\gamma}\) describes the coupling strength. Throughout this paper we will consider identical node dynamics, e.g., identical oscillators for the whole network, although further progress to the case of non-identical oscillators is, in principle, possible [96, 97]. The linear coupling between nodes resembles, for instance, the gap junction in neuronal systems where the observables tend to minimize the differences between each other [98; 99]. 
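A minimal sketch of how the right-hand side of Eq. (1) can be assembled and integrated numerically is given below; the two-species reaction term, the random directed network, and all parameter values are illustrative placeholders rather than the settings used in the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

def coupled_rhs(X, f_local, D, A):
    """Right-hand side of Eq. (1).

    X: (Omega, N) array, row i holds the N species of node i.
    f_local: maps an (Omega, N) state to the (Omega, N) local reaction terms.
    D: length-N array of coupling strengths D_gamma.
    A: (Omega, Omega) adjacency matrix with A[i, j] = 1 for an edge from j to i.
    """
    k_in = A.sum(axis=1, keepdims=True)                 # in-degrees
    diffusion = A @ X - k_in * X                        # sum_j A_ij (x_j - x_i), species-wise
    return f_local(X) + diffusion * D[None, :]

def f_local(X):
    x, y = X[:, 0], X[:, 1]
    return np.column_stack([1 - 2 * x + x**2 * y, x - x**2 * y])   # an illustrative reaction term

rng = np.random.default_rng(2)
Omega = 10
A = (rng.random((Omega, Omega)) < 0.2).astype(float)
np.fill_diagonal(A, 0)
D = np.array([0.01, 0.2])
X0 = 1.0 + 0.01 * rng.standard_normal((Omega, 2))

sol = solve_ivp(lambda t, x: coupled_rhs(x.reshape(Omega, 2), f_local, D, A).ravel(),
                (0.0, 50.0), X0.ravel(), rtol=1e-8)
print(sol.y[:, -1].reshape(Omega, 2))                   # node states at the final time
```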
In a compact form we can write such coupling through the graph Laplacian as \(\sum_{j}L_{ij}x_{j}^{(\gamma)}=\sum_{j}A_{ij}\left(x_{j}^{(\gamma)}-x_{i}^{( \gamma)}\right)\) with entries \(L_{ij}=A_{ij}-k_{i}^{in}\delta_{ij}\) where \(k_{i}^{in}\) is the in-degree, i.e., the number of incoming links of node \(i\). The symmetry-breaking method consists on performing a linear stability analysis in order to study either the amplification or damping of a small perturbation starting from a uniform equilibrium state. The initial state can either be a limit cycle, to which all nodes are synchronized, or a homogeneous fixed point. For generality, we will consider here the case of a homogeneous periodic solution for all the nodes \(x_{i}^{(\gamma)}(t)=\hat{x}^{(\gamma)}(t)\) and then linearize the system around it. However, it is always possible to constrain such assumptions to the case of a stationary uniform steady state. If we denote the perturbations by \(\delta x_{i}^{(\gamma)}(t)\) then the linearized equations read Figure 1: **Schematic representation of chimera pattern formation and leader nodes.****a)** The process follows a symmetry-breaking mechanism where the perturbation leave the unstable synchronized manifold (red empty circle) following the direction of the critical eigenvector (red arrow solid and shaded) and reaches the stable chimera state (blue filled circle) through a (quasi)linear orbit (black dashed line). **b)** Once the non-normality reaches a given threshold the terminal Strongly Connected Components (SCC) (orange curve) disappear to make space to the leader nodes (green curve). as: \[\frac{d\,\delta x_{i}^{(\gamma)}}{dt}=\sum_{\eta=1}^{N}\frac{\partial f_{\gamma}}{ \partial x^{(\eta)}}\delta x_{i}^{(\eta)}+D_{\gamma}\sum_{j=1}^{\Omega}L_{ij} \delta x_{j}^{(\gamma)}, \tag{2}\] \[\forall i=1,\dots,\Omega\;\;\text{and}\;\;\forall\gamma=1,\dots,N\] where we have now made use of the Laplacian notation and the partial derivatives \(\partial f_{\gamma}/\partial x^{(\eta)}\) are calculated on the homogeneous periodic solutions \(\dot{x}^{(\gamma)}(t)\). To make analytical progress we will decouple the above equations and for this we need to consider an expansion of the perturbations along the eigenvectors of the Laplacian matrix, i.e., \(\delta x_{i}^{(\gamma)}=\sum_{\alpha=1}^{\Omega}\xi_{\alpha}^{(\gamma)}(t) \Phi_{i}^{(\alpha)}\). The latter operation is only possible when a basis of Laplacian eigenvectors exists [100]; let us observe that this is always the case for a normal matrix where the eigenvectors are orthogonal, but might not be necessarily true for the non-normal ones. We come back to this problem again in the SM, but for hereby we assume that although non-orthogonal, the eigenvectors are still linearly independent. This yields to the variational equations: \[\frac{d\,\xi_{\alpha}^{(\gamma)}}{dt}=\sum_{\eta=1}^{N}\frac{\partial f_{ \gamma}}{\partial x^{(\eta)}}\xi_{\alpha}^{(\eta)}+D_{\gamma}\Lambda^{(\alpha )}\xi_{\alpha}^{(\gamma)}, \tag{3}\] \[\forall\alpha=1,\dots,\Omega\;\;\text{and}\;\;\forall\gamma=1,\dots,N\] where now the dynamics of the nodes are uncoupled. In general, unless we perturb a fixed point, for each value of \(\alpha\) we will be dealing with a time-dependent Jacobian making this way the system non-autonomous. 
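The short sketch below builds the Laplacian \(L_{ij}=A_{ij}-k_{i}^{in}\delta_{ij}\) for a random directed network and expands a perturbation on its (generally non-orthogonal) eigenvector basis, as assumed in the derivation above; the random graph is only an illustration and, for simplicity, the expansion is written for a single species per node.

```python
import numpy as np

rng = np.random.default_rng(3)
Omega = 8
A = (rng.random((Omega, Omega)) < 0.3).astype(float)
np.fill_diagonal(A, 0)

k_in = A.sum(axis=1)
L = A - np.diag(k_in)                      # L_ij = A_ij - k_i^in * delta_ij

Lam, Phi = np.linalg.eig(L)                # columns of Phi are the eigenvectors Phi^(alpha)
print("zero eigenvalue present:", bool(np.isclose(Lam, 0).any()))   # the uniform mode of L

# Expansion of a perturbation on the eigenvector basis:
# delta_x = sum_alpha xi_alpha Phi^(alpha)  <=>  xi = Phi^{-1} delta_x (when a basis exists).
delta_x = rng.standard_normal(Omega)
xi = np.linalg.solve(Phi, delta_x)
print("reconstruction ok:", bool(np.allclose(Phi @ xi, delta_x)))
```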
Hence, to study the stability of each linear system (3), we need to resort to the Maximum Lyapunov Exponent (MLE) which consists of finding the value of \(\lambda_{\alpha}\) for which the evolution of the linearized system \(\xi_{\alpha}^{(\gamma)}(t)\) fits an exponential trajectory \(c_{\alpha}^{(\gamma)}e^{\lambda_{\alpha}t}\) where \(c_{\alpha}^{(\gamma)}\) is the initial value for each of the species involved. For the case of a fixed point instead, we look for an expansion of the form \(\delta x_{i}^{(\gamma)}=\sum_{\alpha=1}^{\Omega}c_{\alpha}^{(\gamma)}e^{ \lambda_{\alpha}t}\Phi_{i}^{(\alpha)}\) where \(\lambda_{\alpha}\) is the growth (decay) rate. Using the above expansion and collecting the terms corresponding to each eigenvector we obtain the following eigenvalue problem: \[\left(\begin{array}{ccc}\frac{\partial f_{1}}{\partial x^{1}}+D_{1}\Lambda^ {(\alpha)}-\lambda_{\alpha}&\dots&&\frac{\partial f_{1}}{\partial x^{N}}\\ \vdots&&\ddots&&\vdots\\ \frac{\partial f_{N}}{\partial x^{1}}&&\dots&\frac{\partial f_{N}}{\partial x ^{N}}+D_{N}\Lambda^{(\alpha)}-\lambda_{\alpha}\end{array}\right)\!\!\left( \begin{array}{c}c_{\alpha}^{(1)}\\ \vdots\\ c_{\alpha}^{(N)}\end{array}\right)\] \[=0,\;\;\;\;\;\forall\alpha=1,\dots,\Omega \tag{4}\] where \(\Lambda^{(\alpha)}\) represents the Laplacian \(\alpha-\)th eigenvalue, i.e., \(\sum_{\alpha}L_{ij}\Phi_{j}^{(\alpha)}=\Lambda^{(\alpha)}\Phi_{i}^{(\alpha)}\). Notice that, as before, problem (2) transformes into solving \(\Omega\) independent linear systems (4). The linear stability approach followed so far is known in network science as Master Stability Function [91], and as the name implies, it has primarily been utilized to study the conditions under which the synchronized manifold remains stable. However, recently it has been used to achieve the opposite goal, i.e., to establish the conditions for which such stability is broken in such a way to give rise to new interesting states such as cluster synchronization, chimera states or modular Turing patterns [18; 19]. In practical terms, if a single eigenvalue (respectively Lyapunov exponent) takes a (small) positive value, let's say \(\lambda_{M}>0\), then the perturbation will initially exponentially grow with an amplitude weighted by the entries of the corresponding eigenvector \(\Phi_{i}^{(M)}\). Simultaneously, the nonlinear terms also grow, leading to the saturation of the pattern that has already formed in the linear regime. Additionally, multiple unstable modes will concurrently contribute and compete with each other in shaping the final patterns. This approach is grounded on established results of weakly nonlinear analysis of pattern formation when the perturbed states are either fixed points or limit cycles and is based on a multiple-scale perturbative analysis. It can be shown that under the constraint of being in close proximity to the bifurcation threshold [101] (and also the uncoupled limit cycles should be similarly close to the threshold due to being fixed points) a normal form known as the Ginzburg-Landau equation is obtained for the weakly coupled dynamical systems [102; 103]. It describes the amplitude evolution of the patterns and corresponds to a supercritical pitchfork bifurcation. Throughout this paper we will simple refer to such results paying careful attention to satisfying the necessary conditions without recalling the mathematical details (the interested reader can refer to the literature cited hereby) [10; 101; 102; 103; 104]. 
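As a rough illustration of the Maximum Lyapunov Exponent computation described above, the sketch below integrates the reference orbit together with the variational system (3) for a single real Laplacian eigenvalue and extracts the exponential growth rate by repeated renormalization. For concreteness it uses the Brusselator nonlinearity that the paper adopts in the next section; the integration times, tolerances and the sample eigenvalue are arbitrary choices of mine, and complex Laplacian eigenvalues are not handled.

```python
import numpy as np
from scipy.integrate import solve_ivp

b, c = 2.5, 1.0                               # Brusselator parameters as in Fig. 2
D = np.array([0.0168, 0.2112])                # coupling strengths (D_u, D_v) as in Fig. 2

def local(z):
    x, y = z
    return np.array([1 - (b + 1) * x + c * x**2 * y, b * x - c * x**2 * y])

def jac(z):
    x, y = z
    return np.array([[-(b + 1) + 2 * c * x * y, c * x**2],
                     [b - 2 * c * x * y, -c * x**2]])

def mle(lam_alpha, T=200.0, chunk=1.0, eps=1e-6):
    """Maximum Lyapunov exponent of the variational equation (3) for one Laplacian eigenvalue."""
    def rhs(t, z):
        ref, xi = z[:2], z[2:]
        return np.concatenate([local(ref), (jac(ref) + lam_alpha * np.diag(D)) @ xi])

    z = np.array([1.2, b / c, eps, 0.0])      # reference state near the limit cycle + tiny perturbation
    log_growth = 0.0
    for _ in range(int(T / chunk)):
        z = solve_ivp(rhs, (0.0, chunk), z, rtol=1e-8, atol=1e-10).y[:, -1]
        norm = np.linalg.norm(z[2:])
        log_growth += np.log(norm / eps)      # accumulate the growth, then renormalize
        z[2:] *= eps / norm
    return log_growth / T

print(mle(0.0))     # ~0: the mode along the synchronized manifold
print(mle(-1.5))    # the sign decides whether this transverse mode is stable or unstable
```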
Another advantage of this method is that, due to the continuity of the phase transition (i.e., supercritical bifurcation) the nonlinear heterogeneous states formed at the new equilibrium, are expected to be not only similar to the eigenfunctions of selected modes, but also stable in their own right [105]. This is at odds with the "classic" chimeras in that they are stable rather than transitory and independent of the initial conditions, i.e., once the parameters are set selecting the unstable modes, any perturbation sufficiently small for the sake of the linearization approach yields robust chimera patterns. In the following sections, we will analyze various scenarios in which this method allows us to understand the emergence of different chimera states, driven by the presence of non-normality. The symmetry-breaking method described in this section is pivotal for generating chimera patterns. However, it is restricted to a certain regime of parameters. In SM we show that same result extends also to the case where such restrictions are removed. For instance, when the perturbations are moderately small (but larger than those considered in this paper) the MSF is no longer effective in describing the evolution of the state of the system to the asymptotic case due to the tiny basin of attraction of the synchronized manifold (see [84; 85; 89]). In this case, the MSF takes negative values and the instability is driven by the non-normal dynamics. Specifically, there is transient growth for a short period of time, which can potentially lead to a permanent instability in the system. Although we can no longer predict the exact form of the final state, as we shall see in the SM (e.g., Fig. SM1), amplitude chimera patterns still persist in the large time regime. ## III Emergence of amplitude chimera states Building upon the foundations established in the previous section, we will now investigate the formation of amplitude chimera patterns in a specific network selected from a set of empirical ones [21]. An amplitude chimera is a hybrid state characterized by a mixture of behaviors. In this state, all the oscillators within the network exhibit synchronization in terms of their phases (and consequently frequencies). However, this order is disrupted when considering the amplitude variable, as the network divides into subsets of nodes that share either the same or different magnitudes of oscillations. Such states have been first discovered by Zakharova et al., for the case of coupled Stuart-Landau oscillators on a symmetric network [92]. In the next section we will present numerical results obtained for the Brusselator model with a (weakly) diffusive coupling. Such a model is characterized by the following local (node) dynamics: \[\begin{cases}f(x_{i},y_{i})=1-(b+1)x_{i}+cx_{i}^{2}y_{i}\\ g(x_{i},y_{i})=bx_{i}-cx_{i}^{2}y_{i}\end{cases} \tag{5}\] where \(b\) and \(c\) are positive parameters. For simplicity of notation we have renamed the nonlinear terms as \(f(\cdot,\cdot)\) and \(g(\cdot,\cdot)\) and the two involving variables as \(x\) and \(y\), respectively. The Brusselator model can exhibit either a fixed point or a limit cycle. Assuming that we are considering the latter regime, we set the model parameters (see Fig. 2) in a way that selects a single unstable mode and then trigger the system with random perturbations that are transverse to the limit cycle homogeneous equilibrium, i.e., the eigenvector \((1,\dots,1)^{\top}\). 
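For reference, a short sketch of the regimes implied by Eq. (5): the homogeneous fixed point of the Brusselator is \((x^{*},y^{*})=(1,b/c)\), and the uncoupled node undergoes a Hopf bifurcation when the trace of the local Jacobian changes sign, i.e., when \(b>1+c\). The check below simply evaluates that standard condition; it is a sketch of my own, not the authors' code.

```python
import numpy as np

def brusselator_regime(b, c):
    """Classify the uncoupled Brusselator node of Eq. (5) via its fixed point (1, b/c)."""
    x, y = 1.0, b / c
    J = np.array([[-(b + 1) + 2 * c * x * y, c * x**2],
                  [b - 2 * c * x * y, -c * x**2]])
    trace, det = np.trace(J), np.linalg.det(J)
    stable = trace < 0 and det > 0
    return trace, "stable fixed point" if stable else "limit-cycle (oscillatory) regime"

for b, c in [(1.5, 1.0), (2.5, 1.0)]:          # below and above the Hopf threshold b = 1 + c
    trace, regime = brusselator_regime(b, c)
    print(f"b={b}, c={c}: trace={trace:.2f} -> {regime}")
```

With \(b=2.5\) and \(c=1\), the values used in Fig. 2, the trace is positive and the node sits in the oscillatory regime assumed in this section.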
With this in mind, we now focus on analyzing the structure and dynamics in the _macaque competition_ network [106] whose adjacency matrix, as shown in Fig. 2 panel \(a)\), is very close to upper triangular. It is important to note that as a preliminary step we perform node relabeling on this network by appropriately permuting the row and the columns, in order for the network to follow a hierarchical structure, using a method developed in Refs. [20; 21]. It consists in assigning the first labels to the sink (resp. source) nodes and then assigning successive labels to the nodes immediately connected to them through incoming links. This process repeats for the remaining nodes following the hierarchy of the Directed Acyclic Graph (DAG) structure. The edges of the network are thus organized in such a way to fill the upper triangle of the adjacency matrix. Once the pool of nodes is exhausted, we take into account the remaining links which disrupt the perfect DAG structure and these are placed on the lower triangular part. Such distinct features of the network structure will significantly impact almost entirely on the Laplacian matrix whose difference with the corresponding adjacency matrix consists only on the diagonal. The key ingredient such a particular feature of the structure adds to the symmetry breaking method, is based on a simple mathematical fact: the eigenvectors of a triangular matrix, once permuted in the proper way, will form the columns of a matrix which is also triangular (see Methods). Consequently the expectation is that a non-normal network whose adjacency or other matrix-related operators (e.g., Laplacian) are almost triangular will also have an almost triangular eigenvectors matrix. Such an assertion is validated numerically in the Supplementary Material considering many different examples of real-world networks. This is also the case of the network under consideration as can be clearly noticed in panel \(b)\) of Fig. 2. Observe in particular, the uniform entries of the eleventh eigenvector indicate that it corresponds to the zero eigenvalue of the Laplacian. Once we have isolated a single unstable mode in the Master Stability Function (panel \(c)\)) corresponding to the eighth eigenvector (indicated by the magenta box), the evolution of the pattern will initially follow closely that of the critical eigenvector, as depicted in panel \(d\)). The shape of the final pattern will be determined by the combined contribution of linear and nonlinear terms and although we have deliberately chosen the parameters that are close to the requirements of the method (but not excessively close to challenge the limits of our approach!), the nonlinear pattern exhibits a remarkable resemblance to the unstable eigenvector. The main difference is that the symbols in the current snapshot are flipped compared to the eigenvector entries due to the oscillatory behavior of the pattern. The last two panels \(e)\) and \(f)\) show the temporal evolution of the pattern formation emphasizing the amplitude chimera state. As a last comment, we want to point out that the formation of the chimera states is based on the assumption of the partial disorder in the eigenvector entries which for different structural reasons might not always be the case as can be noticed in the tenth eigenvector where a cluster of entries (yellow color) can be observed. 
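One simple way to approximate the hierarchical relabeling described at the beginning of this section is a greedy peeling that labels (near-)sink nodes first and then moves up the hierarchy. The actual procedure of Refs. [20; 21] is more refined, so the sketch below, with my own helper names and toy graph, is only meant to show how such a permutation pushes edges into the upper triangle.

```python
import numpy as np

def hierarchical_order(A):
    """Greedy relabeling: peel off (near-)sink nodes first, then the nodes feeding into them.

    Convention as in Eq. (1): A[i, j] = 1 is an edge j -> i, so the out-degree of j is the
    j-th column sum. After the permutation, most edges j -> i satisfy label(i) < label(j),
    i.e. they sit in the upper triangle of the reordered adjacency matrix.
    """
    A = np.asarray(A)
    order, remaining = [], set(range(A.shape[0]))
    while remaining:
        out_to_remaining = {j: sum(A[i, j] for i in remaining if i != j) for j in remaining}
        best = min(remaining, key=lambda j: (out_to_remaining[j], j))   # fewest outgoing edges left
        order.append(best)
        remaining.remove(best)
    return order

# Toy directed network: an almost perfect hierarchy 3 -> 2 -> 1 -> 0 plus one backward link 0 -> 3.
A = np.array([[0, 1, 1, 0],
              [0, 0, 1, 1],
              [0, 0, 0, 1],
              [1, 0, 0, 0]])
order = hierarchical_order(A)
B = A[np.ix_(order, order)]
print(order, "upper:", int(np.triu(B, 1).sum()), "lower:", int(np.tril(B, -1).sum()))  # 5 vs 1
```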
Although such clusters might exist, the only neat cluster which is always present in the eigenvectors is that of the zero entries immediately related to the almost triangularity of the Laplacian matrix. ## IV Hybrid chimera patterns: the oscillon states Chimera states have been traditionally related to models of coupled dynamical systems where each node represents an oscillator characterized by an intrinsic phase Figure 2: **Emergence of amplitude chimera state.****a)** The almost triangular adjacency matrix of the _macaques competition_ network [106] ordered hierarchically. **b)** The matrix where the columns are the Laplacian eigenvectors of the same network where the magenta rectangle shows the critical eigenvector. **c)** The Master Stability Function (MSF) zoomed around the critical eigenvalue (the whole MSF is shown in the inset). **d)** The comparison of the critical eigenvector (magenta stars) vs. the normalized amplitudes of the initial evolution (blue circles) and the final pattern (green diamonds). With the aid of the dashed lines, it is possible to notice that the shape of the final pattern is flipped (due to the choice of snapshot time) compared to the critical eigenvector, but otherwise is similar to it. **e)** The time series for each oscillator (zoomed, lower part and the complete evolution, upper part). In the inset of the upper part the standard deviation for the first 4 oscillators (red curve) and the last 12 ones (blue curve) is shown. **f)** The colormap representation of the oscillators dynamics evolution. The parameters for the Brusselator are \(b=2.5\), \(c=1\), \(D_{u}=0.0168\), and \(D_{v}=0.2112\) and the colorbars quantify either the magnitudes of the matrices entries or the oscillators amplitudes. variable to which, as in our case, might also be associated with the amplitude variable. Nevertheless, very recently an alternative mechanism was developed to show that chimera states and cluster synchronization can also emerge in system where the nodes, when uncoupled, are in a fixed point rather than limit cycle regime [19]. Following a symmetry-breaking mechanism as the one described in this paper, the authors of Ref. [19], showed that due to an oscillatory instability, namely where the eigenvalues of the extended Jacobian have both real and imaginary parts, not only can global oscillations occur, but they can also self-organize in clusters of coherent or incoherent oscillatory patterns. Recalling that no oscillatory dynamics was imposed at the node level, the cause for the emergence of such behavior is found on the global coupling which is due either to the directedness of the network considered or a minimum of 3 observables (e.g., species) per node can generate the coherent-incoherent oscillations. In [19], a key ingredient to have clusters of similar and different entries in the Laplacian eigenvectors was to consider modular networks which have the peculiarity of having eigenvectors with clusters of similar entries corresponding to the modules' nodes. As it can be intuited, such a role in non-normal networks is played by the zero entries of the eigenvectors, however, with a striking difference as we will discuss in the following. In this section, we present the emergence of oscillon patterns in a network support which consists of oscillations localized in a subset of nodes while the rest have a uniform stationary state [94; 95]. The definition of such patterns, observed initially in granular media experiments, is extended to the network domain. 
These patterns consist of an oscillatory localized pattern surrounded by a stationary homogeneous neighborhood [93]. With this aim, we need to consider a 3 species model which in our case is described by the following set of equations: \[\begin{cases}f(x_{i},y_{i},z_{i})=-c_{1}x_{i}y_{i}^{2}+c_{3}z_{i}^{2}-c_{7}x_ {i}/(q+x_{i})\\ g(x_{i},y_{i},z_{i})=c_{2}u_{i}y_{i}^{2}-c_{4}y_{i}+c_{8}\\ h(x_{i},y_{i},z_{i})=c_{5}x_{i}-c_{6}z_{i}\end{cases} \tag{6}\] introduced by Zhabotinsky _et al._, [107; 108], where \(q=10^{-4}\) and the rest of the constants \(c_{1}-c_{8}\) represent parameters. Let us notice also that the reason why we are considering a multiple species model for the oscillatory instability rather than obtaining it solely from the directedness of the network [109], is due to empirical evidence. Real networks with strong non-normality have been observed to have a significantly small (or sometimes even absent) imaginary part in their spectrum compared to the real part [20]. The latter has been shown analytically and numerically to contribute to a lesser extent to the emergence of global oscillations [84; 89; 109]. Starting from these premises in Fig. 4 panel \(a)\) the MSF of the Zhabotinsky model in the _macaques competition_ network [106] is presented, where in the inset the imaginary part of the extended Jacobian eigenvalues is displayed. We recall that in this case the Jacobian matrix is time-independent making the linearized dynamical system autonomous and the Master Stability Function reduces to the well-known dispersion relation [110]. Consequently, the spectrum of the Jacobian can be obtained analytically through solving the eigenvalue problem (4). In panel \(b)\) it is possible to appreciate the result of this approach where the critical eigenvectors corresponding to the unstable eigenvalues shown in the previous panel, are plotted versus the initial and final patterns. The temporal evolution of the oscillon pattern is presented in the consecutive panels \(c)\) and \(d)\). Following the amplification of the Laplacian eigenvector entries, several nodes will jump from their original position at the fixed point and start oscillating. Notice that in this case the final pattern is jointly shaped by the third and fifth eigenvectors. However, most of the nodes of the network in this case will remain in an (almost) stationary regime by keeping their original state at the fixed point. In summary, the network will exhibit an oscillon state, where localized nodes exhibit oscillatory behavior while the surrounding nodes remain stationary. ## V Are phase chimeras ubiquitous in the real world or is global synchronization elusive? Another prominent example of coherent-incoherent patterns is that of phase chimera states which historically precedes the amplitude states discussed in the previous section [11; 12]. In the context of non-normal networks, the explanation for their emergence is based not only on a different mechanism but also a local structural property (compared to the _global_ non-normality discussed earlier), that of the leader nodes which is ubiquitous for empirical networks. This time we will consider the _ants dominance_ network [66], which is made of a total of 16 nodes (individuals) 9 of which are source (leader) nodes indicated in red in Fig. 4\(a)\). For the Brusselator model set in the oscillator regime, we select parameters for which the system is strongly stable [111], panel \(b)\). 
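Coming back to the fixed-point analysis above: since the Jacobian is constant there, the Master Stability Function reduces to the dispersion relation, i.e., the largest real part of the eigenvalues of \(J^{*}+\Lambda^{(\alpha)}\,\mathrm{diag}(D)\) for each Laplacian eigenvalue \(\Lambda^{(\alpha)}\). The sketch below computes it on a random directed network; the \(3\times 3\) Jacobian and the coupling strengths are placeholder values of mine, not the fitted Zhabotinsky parameters, and an oscillatory (wave) instability would additionally require the unstable eigenvalues to carry a nonzero imaginary part.

```python
import numpy as np

def dispersion_relation(J_star, D, L):
    """Largest real part of eig(J* + Lambda_alpha * diag(D)) for every Laplacian eigenvalue."""
    Lam = np.linalg.eigvals(L)
    growth = np.array([np.linalg.eigvals(J_star + lam * np.diag(D)).real.max() for lam in Lam])
    return Lam, growth

# Placeholder linearly stable 3-species Jacobian and coupling strengths (purely illustrative).
J_star = np.array([[0.2, -1.0, 0.5],
                   [1.0, -0.5, 0.0],
                   [1.0, 0.0, -1.0]])
D = np.array([0.0, 0.0, 0.1])

rng = np.random.default_rng(4)
Omega = 16
A = (rng.random((Omega, Omega)) < 0.2).astype(float)
np.fill_diagonal(A, 0)
L = A - np.diag(A.sum(axis=1))                     # L_ij = A_ij - k_i^in delta_ij

Lam, growth = dispersion_relation(J_star, D, L)
print("largest growth rate over all modes:", growth.max())
print("unstable modes:", np.where(growth > 1e-9)[0])   # may be empty for these placeholder values
```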
Despite the algebraic degeneracy in the spectrum (the Laplacian has 9 repeated zeros), the MSF formalism can still be applied due to a full basis of eigenvectors. However, if we perturb the network of coupled Brusselators with some moderate perturbations, we notice that the time series presented in panels \(c)\) and \(d)\) display a clear (phase) chimera behavior where although all nodes share the same exact amplitude, the oscillators corresponding to the source ones are out of sync compared to the rest of network (except for node 2) which is very well synchronized. The observed behavior can be attributed to the presence of a source node that is disconnected from the rest of the network in terms of incoming links. When the system is perturbed and the source node is initially triggered, it eventually settles back to its original limit cycle. The settling phase might differ from the initial phase which was the same for all the network. On the other hand, such set of nodes will act as forcing terms in our dynamical system and consequently are expected to influence the rest of the group. This is for instance the case for node 2 which is also a leader node, but a sink rather than a source one. It is immediately "forced" by the source node 12 which in combination with the intrinsic phase of node 2 yields as a result a different final phase. However, this is not always the case, since the set of nodes forced by the source leaders might form a densely connected cluster with each other resulting in disturbances coming from the source nodes being inhibited or averaged across the cluster. This is the case for the nodes 1 and 3-7, which form a robust synchronized cluster. Consequently the entire network splits in two groups of coherent and incoherent oscillators resulting in a classic phase chimera state. As a final note, it is worth noting that although the system under consideration is not asymptotically stable, it still exhibits Lyapunov stability as far as the orbits of the perturbed system will stay forever near the initial equilibrium point [112]. The scenario of phase chimeras illustrated in this section might not always be the case. In fact, the different phase impulses arriving from the source nodes can keep the rest of the network away from a synchronized state. The role of source nodes, similarly to the mechanism described above, has also been recently studied in Ref. [113] in term of synchronization robustness, although no mention was made related to a chimera-like phenomenon. Based on the bowtie architecture of many Figure 3: **Emergence of oscillon patterns.****a)** The Master Stability Function (dispersion relation) real (main, red stars) and imaginary (inset, blue circles) parts for the _macaques competition_ network [106]. The critical eigenvalues correspond to the third and the fifth eigenvectors shown in Fig. 1 \(b)\). **b)** Comparison between the critical eigenvectors 3 (magenta stars) and 5 (red crosses), respectively, and the normalized amplitudes of the initial evolution (blue circles) and the final pattern (green diamonds). **c)** The time series for each nodes where it can be noticed that for the nodes from 7 to 16, (almost) no oscillations are present. **d)** The colormap representation of the evolution of the oscillon patterns. The colorbar quantifies the oscillators amplitudes. The parameters for the Zhabotinsky model are \(c_{1}=c_{3}=28.5\), \(c_{2}=c_{4}=15.5\), \(c_{5}=c_{6}=1\), \(c_{7}=25.65\), \(c_{8}=3.1\), \(D_{x}=D_{y}=0\) and \(D_{z}=0.1\). 
complex networks, the authors raise the question that synchronization might be difficult to achieve in practice. However, the question we address in this paper is not about the mechanism itself, which is intuitively straightforward. Instead, our focus is on the prevalence of leader source nodes in real networks and the implications this has for the emergence of coherent-incoherent patterns. Also an important difference with [113] is that our results exclude the presence of any synchronized source cluster on the network since source SCC do not exist when leaders are present [114]. This empirical evidence, together with the fact that the regime of parameters for which this behavior occurs is considerably large, i.e., the only requirement is that the system should be stable, unavoidably poses the question if chimera states are indeed much more present in real-world systems than initially anticipated. The recent trend of experimental studies focused on chimera states further strengthen this assertion [15, 16]. Another notable conclusion that arises from this analysis is the significant challenge associated with achieving global synchronization in real networks. Instead, the attainable level of synchronization is often limited to local subnetworks, manifesting as a chimera state. However, it is also worthmentioning that such conclusions are obtained from the current state of art models of synchronization dynamics and that there is need for more sophisticated ones to better understand such phenomenon. ## VI Discussion and Conclusions Most natural systems are characterized by complex in Figure 4: **Emergence of phase chimera states.****a)** The graphic representation of the _ants dominance_ network [66] ordered hierarchically. **b)** The Master Stability Function shows no (local) instability, however, there are 9 zero overlaying eigenvalues in the MSF curve. **c)** The time series for each oscillator zoomed (lower part) and the complete evolution (upper part). **d)** The colormap representation of the oscillators dynamics evolution with the synchronized clusters emphasized with the magenta rectangle. The colorbar quantifies the oscillators amplitudes. Notice that due to the random initial perturbation of the system, some of the sources belong to the synchronized cluster just by chance. The parameters for the Brusselator model are \(b=2.5\), \(c=1\), \(D_{u}=0.0175\), and \(D_{v}=0.075\). teracting structures which collectively play a crucial role in their system dynamics. Aiming to understand common structural features of the interaction networks, it was recently discovered that a vast majority of them manifest a strong non-normality [20]. From the structural point of view, this implies that the majority of real networks exhibit characteristics akin to directed acyclic graphs (DAGs), thus possessing a strong hierarchical topology [20; 21]. As a consequence, from a mathematical perspective, adjacency matrices and related operators representing such networks, are close to being triangular where perfectly triangular corresponds to a DAG network. In other words, the eigenvectors of their matrix operators have patterns where many entries are zero, and the non-zero entries are generally scattered randomly without a specific structure. Based on this remarkable characteristic of real world networks, in this paper we have presented a systematic mechanism for the generation of chimera patterns. 
We prove that the spontaneous emergence of amplitude chimera states can be explained by a symmetry-breaking mechanism according to which the final patterns resemble the structure of the eigenvectors of the unstable mode(s) selected from the synchronized manifold. While synchronization generally requires individual (coupled) dynamical systems to exhibit an oscillatory behavior, recent studies have shown that it is still possible to obtain synchronized-desynchronized collective oscillations by destabilizing a homogeneous fixed point [19]. In this novel scenario, a three-variable model is a prerequisite for generating an oscillatory instability responsible for the emergence of global oscillations. In the case of non-normal networks, a hybrid chimera pattern arises, where unsynchronized oscillations coexist with stationary nodes. We have named such patterns oscillons due to being reminiscent to the localized and isolated oscillations that emerge in granular media [94]. Non-normal networks observed in nature, have shown to possess another distinct feature, the presence of nodes that are sources (or sinks), named as leader nodes [21]. The peculiarity is their omnipresence in non-normal networks and the fact that no other strongly connected components coexist in the same network. In the context of synchronization, source nodes serve as forcing terms that have the potential to disrupt synchronization at any level within a subnetwork. Consequently, leader (source) nodes pose a constant threat to the dynamics of synchronization. Building upon this empirical observation, we demonstrate that regardless of the chosen parameters, a stable system of coupled oscillators can still exhibit phase chimera states. This occurs because source nodes, being disconnected from the influence of the rest of the network, retain the phase shift resulting from the initial perturbation. Meanwhile, the remaining nodes may potentially return to a synchronized regime, effectively absorbing perturbations originating from the source nodes. In conclusion, in this work our contribution is multi-fold. Firstly, we present a unique approach to explain the emergence of several patterns such as amplitude and phase chimeras also as oscillon states all sharing the common feature of coexistence of coherent and incoherent states. This mechanism is grounded on the spectral properties characterizing real-world networks. Secondly, we put to the fore the problem that the presence of non-normality in empirical networks poses a persistent challenge to understanding their synchronization behavior, which indicates the need for more advanced models and approaches to comprehensively grasp and explain these phenomena. ## Acknowledgements The work of R.M. is supported by a FRIA-FNRS PhD fellowship, grant FC 33443, funded by the Walloon region. R.M. and J.D.O'B. acknowledge funding from the Bridge Grant of the yrCSS. This work was partly supported by Science Foundation Ireland under Grant number 16/IA/4470. ## Methods **Theorem 1**.: _The right eigenvectors of a triangular matrix \(\mathbf{A}_{n\times n}\) form a triangular matrix \(\mathbf{P}_{n\times n}\) when considered as columns of \(\mathbf{P}\) and ordered according to the eigenvalues of \(\mathbf{A}\)._ _Proof_: We will prove the results by considering that the vector with the first entry non-zero is an eigenvector of the first eigenvalue, the vectors with the first two entries non-zero is an eigenvector of the second eigenvalue and so on. 
So if we consider an upper-triangular matrix \(\mathbf{A}\) with entries \(a_{i,j}\), non-zero only for \(i\leq j\), then the vector \(\mathbf{v}_{1}=\left[1,0,\ldots,0\right]^{T}\) is the eigenvector corresponding to the eigenvalue \(a_{1,1}\) (recall that the diagonal entries are the eigenvalues of \(\mathbf{A}\)). In fact, \[\begin{cases}\left(a_{1,1}-a_{1,1}\right)\times 1+a_{1,2}\times 0+\cdots+a_{1,n}\times 0=0\\ \left(a_{i,i}-a_{1,1}\right)\times 0+a_{i,i+1}\times 0+\cdots+a_{i,n}\times 0=0,\end{cases}\;i>1\,.\] For a general eigenvector with the first \(m\leq n\) entries non-zero, \(\mathbf{v}_{m}=\left[x_{1},\ldots,x_{m},0,\ldots,0\right]^{T}\), we have \[\left(\mathbf{A}_{m\times m}-a_{m,m}\mathbf{I}_{m\times m}\right)\mathbf{v}_{m}=\mathbf{0}\] where \(\mathbf{A}_{m\times m}\) is the triangular block matrix made from the first \(m\) rows and the first \(m\) columns of \(\mathbf{A}\). Clearly, since at least one eigenvalue, i.e., diagonal entry, of the matrix \(\mathbf{A}_{m\times m}-a_{m,m}\mathbf{I}_{m\times m}\) is zero, the determinant is also zero and thus the system always has a non-trivial solution \(\mathbf{v}_{m}\). If instead of the eigenvalue \(a_{m,m}\) we had chosen \(a_{i,i}\), \(i\neq m\), then we would have had \(\left(a_{m,m}-a_{i,i}\right)x_{m}=0\) and thus \(x_{m}=0\) if \(a_{m,m}\neq a_{i,i}\). This would then correspond to considering the previous eigenvector \(\mathbf{v}_{m-1}\).
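To make the constructive argument above concrete, the following short numerical sketch (our own illustration, not code from the paper) builds the eigenvectors of a random upper-triangular matrix exactly as in the proof, by solving the singular leading block \(\left(\mathbf{A}_{m\times m}-a_{m,m}\mathbf{I}_{m\times m}\right)\mathbf{v}_{m}=\mathbf{0}\) for each \(m\), and then checks that the resulting eigenvector matrix \(\mathbf{P}\) is itself upper triangular. It assumes distinct diagonal entries, so that each null space is one-dimensional.

```python
import numpy as np
from scipy.linalg import null_space

rng = np.random.default_rng(0)
n = 5
A = np.triu(rng.normal(size=(n, n)))      # random upper-triangular matrix
np.fill_diagonal(A, np.arange(1, n + 1))  # distinct eigenvalues 1..n on the diagonal

P = np.zeros((n, n))
for m in range(n):
    lam = A[m, m]
    # Solve (A_block - lam I) v = 0 on the leading principal block containing lam, as in the proof.
    block = A[: m + 1, : m + 1] - lam * np.eye(m + 1)
    v = null_space(block)[:, 0]           # non-trivial solution exists since the block is singular
    P[: m + 1, m] = v                     # entries below row m stay zero by construction

# Each column of P is an eigenvector of the full matrix A ...
assert np.allclose(A @ P, P @ np.diag(np.diag(A)))
# ... and P is itself upper triangular, as stated in Theorem 1.
assert np.allclose(np.tril(P, k=-1), 0.0)
```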
2301.00290
BARVINN: Arbitrary Precision DNN Accelerator Controlled by a RISC-V CPU
We present a DNN accelerator that allows inference at arbitrary precision with dedicated processing elements that are configurable at the bit level. Our DNN accelerator has 8 Processing Elements controlled by a RISC-V controller with a combined 8.2 TMACs of computational power when implemented with the recent Alveo U250 FPGA platform. We develop a code generator tool that ingests CNN models in ONNX format and generates an executable command stream for the RISC-V controller. We demonstrate the scalable throughput of our accelerator by running different DNN kernels and models when different quantization levels are selected. Compared to other low precision accelerators, our accelerator provides run time programmability without hardware reconfiguration and can accelerate DNNs with multiple quantization levels, regardless of the target FPGA size. BARVINN is an open source project and it is available at https://github.com/hossein1387/BARVINN.
Mohammadhossein Askarihemmat, Sean Wagner, Olexa Bilaniuk, Yassine Hariri, Yvon Savaria, Jean-Pierre David
2022-12-31T21:04:00Z
http://arxiv.org/abs/2301.00290v1
# BARVINN: Arbitrary Precision DNN Accelerator Controlled by a RISC-V CPU ###### Abstract. We present a DNN accelerator that allows inference at arbitrary precision with dedicated processing elements that are configurable at the bit level. Our DNN accelerator has 8 Processing Elements controlled by a RISC-V controller with a combined 8.2 TMACs of computational power when implemented with the recent Alveo U250 FPGA platform. We develop a code generator tool that ingests CNN models in ONNX format and generates an executable command stream for the RISC-V controller. We demonstrate the scalable throughput of our accelerator by running different DNN kernels and models when different quantization levels are selected. Compared to other low precision accelerators, our accelerator provides run time programmability without hardware reconfiguration and can accelerate DNNs with multiple quantization levels, regardless of the target FPGA size. BARVINN is an open source project and it is available at [https://github.com/hossein1387/BARVINN](https://github.com/hossein1387/BARVINN). neural networks, hardware acceleration, FPGA, low-precision
## 1. Introduction Deep neural networks (DNNs) traditionally rely on floating
2309.14558
Bicriteria Approximation Algorithms for the Submodular Cover Problem
In this paper, we consider the optimization problem Submodular Cover (SCP), which is to find a minimum cardinality subset of a finite universe $U$ such that the value of a submodular function $f$ is above an input threshold $\tau$. In particular, we consider several variants of SCP including the general case, the case where $f$ is additionally assumed to be monotone, and finally the case where $f$ is a regularized monotone submodular function. Our most significant contributions are that: (i) We propose a scalable algorithm for monotone SCP that achieves nearly the same approximation guarantees as the standard greedy algorithm in significantly faster time; (ii) We are the first to develop an algorithm for general SCP that achieves a solution arbitrarily close to being feasible; and finally (iii) we are the first to develop algorithms for regularized SCP. Our algorithms are then demonstrated to be effective in an extensive experimental section on data summarization and graph cut, two applications of SCP.
Wenjing Chen, Victoria G. Crawford
2023-09-25T22:04:18Z
http://arxiv.org/abs/2309.14558v1
# Bicriteria Approximation Algorithms for the Submodular Cover Problem ###### Abstract In this paper, we consider the optimization problem Submodular Cover (SCP), which is to find a minimum cardinality subset of a finite universe \(U\) such that the value of a submodular function \(f\) is above an input threshold \(\tau\). In particular, we consider several variants of SCP including the general case, the case where \(f\) is additionally assumed to be monotone, and finally the case where \(f\) is a regularized monotone submodular function. Our most significant contributions are that: (i) We propose a scalable algorithm for monotone SCP that achieves nearly the same approximation guarantees as the standard greedy algorithm in significantly faster time; (ii) We are the first to develop an algorithm for general SCP that achieves a solution arbitrarily close to being feasible; and finally (iii) we are the first to develop algorithms for regularized SCP. Our algorithms are then demonstrated to be effective in an extensive experimental section on data summarization and graph cut, two applications of SCP. ## 1 Introduction Submodularity captures a diminishing returns property of set functions: Let \(f:2^{U}\rightarrow\mathbb{R}\) be defined over subsets of a universe \(U\) of size \(n\). Then \(f\) is _submodular_ if for all \(A\subseteq B\subseteq U\) and \(x\notin B\), \(f(A\cup\{x\})-f(A)\geq f(B\cup\{x\})-f(B)\). Examples of submodular set functions include cut functions in graphs (Balkanski et al., 2018), information-theoretic quantities like entropy and mutual information (Iyer et al., 2021), determinantal point processes (Gillenwater et al., 2012), and coverage functions (Bateni et al., 2017). Submodular set functions arise in many important real-world applications including active learning (Kothawade et al., 2021, 2022), partial label learning (Bao et al., 2022), structured pruning of neural networks (El Halabi et al., 2022), data summarization (Tschiatschek et al., 2014), and client selection in federated learning (Balakrishnan et al., 2022). While the majority of existing work has focused on developing approximation algorithms to maximize a submodular function subject to some constraint (Nemhauser et al., 1978; Mirzasoleiman et al., 2015; Harshaw et al., 2019; Buchbinder et al., 2014), in this paper we focus on developing algorithms for the related optimization problem of Submodular Cover (SCP), defined as follows. **Problem 1** (Submodular Cover (SCP)).: _Let \(f:2^{U}\rightarrow\mathbb{R}_{\geq 0}\) be a nonnegative submodular set function defined over subsets of the ground set \(U\) of size \(n\). Given threshold \(\tau\), SCP is to find \(\text{argmin}\{|X|:f(X)\geq\tau\}\) if such a set exists._ SCP captures applications where we seek to achieve a certain value of \(f\) in as few elements as possible. For example, consider data summarization, where a submodular function \(f\) is formulated to measure how effectively a subset \(X\) summarizes the entire dataset \(U\)(Tschiatschek et al., 2014). Then if we set \(\tau=\max\{f(X):X\subseteq U\}\), SCP asks to find the set of minimum size in \(U\) that achieves the maximum effectiveness as a summary. Another example is when expected advertising revenue functions are formulated over subsets of a social network (Hartline et al., 2008), then SCP asks how we can reach a certain amount of revenue while picking as small a subset of users as possible. 
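As a concrete toy illustration of Problem 1 (our own example with made-up URLs and topics, not data or code from the paper), the snippet below encodes the data summarization setting mentioned above: \(f(X)\) counts the distinct topics covered by a set \(X\) of URLs, which is a monotone submodular coverage function, and SCP asks for the smallest set reaching a coverage threshold \(\tau\). Brute force is used only because the universe is tiny.

```python
# Toy SCP instance: U is a set of URLs, f(X) counts the distinct topics covered by X.
from itertools import combinations

topics_of = {                       # hypothetical toy data
    "url_a": {"sports", "news"},
    "url_b": {"news", "tech"},
    "url_c": {"tech"},
    "url_d": {"music", "sports"},
}
U = list(topics_of)

def f(X):
    """Number of distinct topics covered by the URLs in X (monotone submodular)."""
    covered = set()
    for u in X:
        covered |= topics_of[u]
    return len(covered)

tau = 4  # require all four topics to be covered
# Brute-force SCP (viable only because the universe is tiny): smallest X with f(X) >= tau.
best = next(X for k in range(len(U) + 1) for X in combinations(U, k) if f(X) >= tau)
print(best, f(best))  # ('url_b', 'url_d') covers all 4 topics
```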
In this paper, we propose and analyze algorithms for several variants of SCP including the general case, the case where \(f\) is assumed to be monotone1, and finally when \(f\) is a regularized monotone submodular function (and potentially takes on negative values). In particular, the contributions of this paper are: Footnote 1: A set function \(f\) is monotone if for all \(A\subseteq B\subseteq U\), \(f(A)\leq f(B)\). * We first address the need for scalable algorithms for SCP where \(f\) is assumed to be monotone. While the greedy algorithm attains the best possible approximation guarantee for monotone SCP (MSCP) (Feige, 1998), it makes \(O(n^{2})\) queries of \(f\), which may be impractical in many applications. We propose two algorithms for MSCP which achieve nearly the same theoretical guarantee as the greedy algorithm but only make \(O(n\ln(n))\) queries of \(f\). In addition, we extend the work of Iyer and Bilmes (2013) to a method of converting fast randomized approximation algorithms for the dual cardinality-constrained monotone submodular maximization problem (MSMP) into approximation algorithms for MSCP. * Next, we address the need for algorithms that can produce nearly feasible solutions to the general SCP problem. In particular, we provide the first algorithm for SCP that, with input \(\epsilon>0\), returns a solution \(S\) that is guaranteed to satisfy: (i) \(f(S)\geq(1-\epsilon)\tau\); and (ii) \(|S|=O(|OPT|/\epsilon)\) where \(OPT\) is an optimal solution to the instance. A caveat for our algorithm is that it is not necessarily polynomial time and requires an exact solution to SMP on an instance of size \(O(|OPT|/\epsilon^{2})\). * Third, we are the first to consider _regularized SCP_ (RSCP). RSCP is where the objective \(f=g-c\), where \(g\) is a nonnegative, monotone, and submodular function and \(c\) is a modular cost penalty function. \(f\) is not necessarily monotone and potentially takes on negative values, and therefore this new problem doesn't fall under the general SCP problem. We develop a method of converting algorithms for the dual regularized submodular maximization problem (Harshaw et al., 2019) into ones for RSCP. We then propose the first algorithm for RSCP, which is a greedy algorithm using queries to a distorted version of \(f=g-c\). * Finally, we conduct an experimental analysis of our algorithms for MSCP and general SCP on instances of data summarization and graph cut. We find that our algorithms for MSCP achieve a large speedup compared to the standard greedy approach, and we explore the pros and cons of each relative to the other. We also find that our algorithm for general SCP is practical for our applications despite not being guaranteed to run in polynomially many queries of \(f\). A table summarizing all of our algorithmic contributions can be found in the appendix. We now provide a number of preliminary definitions and notations that will be used throughout the paper. 
### Preliminary definitions We first provide a number of preliminary definitions that will be used throughout the paper: (i) The Submodular Maximization Problem (SMP) is the dual optimization problem to SCP defined by, given budget \(\kappa\), find \(\operatorname*{argmax}\{f(X):X\subseteq U,|X|\leq\kappa\}\); (ii) Monotone SCP (MSCP) is the version of SCP where \(f\) is additionally assumed to be monotone; (iii) Regularized SCP (RSCP) is a related problem to SCP where \(f=g-c\) and \(g\) is monotone, submodular, and nonnegative, while \(c\) is a modular2 nonnegative cost function; (iv) \(OPT\) is used to refer to the optimal solution to the instance of SCP that should be clear from the context; (v) \(OPT_{SM}\) is used to refer to the optimal solution to the instance of SMP that should be clear from the context; (vi) An \((\alpha,\beta)\)-bicriteria approximation algorithm for SCP returns a solution \(X\) such that \(|X|\leq\alpha|OPT|\) and \(f(X)\geq\beta\tau\). An \((\alpha,\beta)\)-bicriteria approximation algorithm for SMP returns a solution \(X\) such that \(f(X)\geq\alpha f(OPT)\) and \(|X|\leq\beta\kappa\). Notice that the approximation on the objective is first, and the approximation on the constraint is second; (vii) The marginal gain of adding an element \(u\in U\) to a set \(S\subseteq U\) is denoted as \(\Delta f(S,u)=f(S\cup u)-f(S)\); (viii) The function \(f_{\tau}=\min\{f,\tau\}\). ### Related Work MSCP is the most studied variant of SCP (Wolsey, 1982; Wan et al., 2010; Mirzasoleiman et al., 2015; Crawford et al., 2019; Wolsey, 1982). The standard greedy algorithm produces a logarithmic approximation guarantee for MSCP in \(O(n^{2})\) queries of \(f\) (Wolsey, 1982), and this is the best approximation guarantee that we can expect unless NP has \(n^{\mathcal{O}(\log(\log(n)))}\)-time deterministic algorithms (Feige, 1998). One version of the greedy algorithm for MSCP works as follows: A set \(S\) is initialized to be \(\emptyset\). Iteratively, the element \(\text{argmax}\{\Delta f(S,x):x\in U\}\) is added to \(S\) until \(f(S)\) reaches \((1-\epsilon)\tau\). It has previously been shown that this is a \((\ln(1/\epsilon),1-\epsilon)\)-bicriteria approximation algorithm (Krause et al., 2008). Beyond greedy algorithms, algorithms for the distributed setting (Mirzasoleiman et al., 2015; Wolsey, 1982) as well as the streaming setting (Norouzi-Fard et al., 2016) for MSCP have been proposed. On the other hand, developing algorithms for SCP in full generality is more difficult since monotonicity of \(f\) is not assumed. The standard greedy algorithm does not have any non-trivial approximation guarantee for SCP. In fact, to the best of our knowledge, no greedy-like algorithms have been found to be very useful for SCP. Recently, Crawford (2023) considered SCP and proved that it is not possible to develop an algorithm that guarantees \(f(X)\geq\tau/2\) for SCP in polynomially many queries of \(f\) assuming the value oracle model. On the other hand, algorithmic techniques that are used for SMP in the streaming setting (Alaluf et al., 2022) proved to be useful for SCP. In particular, Crawford (2023) proposed an algorithm using techniques related to those of Alaluf et al. that achieves a \((O(1/\epsilon^{2}),1/2-\epsilon)\)-bicriteria approximation guarantee for SCP in polynomially many queries of \(f\). We also take an approach inspired by the streaming algorithm of Alaluf et al., but sacrifice efficiency in order to find a solution for SCP that is arbitrarily close to being feasible. 
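A minimal sketch of the greedy routine just described (our own illustrative implementation; the paper's pseudocode and analysis remain the authority):

```python
# Greedy bicriteria routine for monotone SCP: start from the empty set and repeatedly
# add the element of largest marginal gain until f(S) >= (1 - eps) * tau.
def greedy_cover(U, f, tau, eps):
    U, S = set(U), set()
    while f(S) < (1 - eps) * tau and S != U:
        x = max(U - S, key=lambda x: f(S | {x}) - f(S))  # largest marginal gain
        S.add(x)
    return S

# Usage with the toy coverage function from the earlier sketch:
#   greedy_cover(U, f, tau=4, eps=0.0)  ->  a small set covering all topics
```

Each iteration scans every remaining element, which is the \(O(n^{2})\)-query behavior that the faster algorithms proposed in this paper are designed to avoid.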
SMP is the dual optimization problem to SCP, and has received relatively more attention than SCP (Nemhauser et al., 1978; Badanidiyuru and Vondrak, 2014; Mirzasoleiman et al., 2015; Feige et al., 2011; Buchbinder et al., 2014; Alaluf et al., 2022). Iyer and Bilmes (2013) proposed a method of converting algorithms for SMP to ones for SCP. In particular, given a deterministic \((\gamma,\beta)\)-bicriteria approximation algorithm for SMP, the algorithm convert (see pseudocode in the appendix) proposed by Iyer and Bilmes produces a deterministic \(((1+\alpha)\beta,\gamma)\)-bicriteria approximation algorithm for SCP. The algorithm works by making \(\log_{1+\alpha}(n)\) guesses for \(|OPT|\) (which is unknown in SCP), running the SMP algorithm with the budget set to each guess, and returning the smallest solution with \(f\) value above \(\gamma\tau\). However, this approach is limited by the approximation guarantees of existing algorithms for SMP. The best \(\gamma\) for monotone SMP is \(1-1/e\), and the best for general SMP where \(f\) is not assumed to be monotone is significantly lower (Gharan and Vondrak, 2011). Several of the algorithms that we propose in this paper do generally follow the model of convert in that they rely on guesses of \(|OPT|\), but are different because they: (i) Implicitly use bicriteria approximation algorithms for SMP which have better guarantees on the objective (\(\gamma\)) because they do not necessarily return a feasible solution; (ii) Are more efficient with respect to the number of queries of \(f\), since convert potentially wastes many queries of \(f\) by doing essentially the same behavior for different guesses of \(|OPT|\). ## 2 Algorithms and Theoretical Guarantees In this section, we present and theoretically analyze our algorithms for several variants of SCP. In particular, in Section 2.1 we first consider MSCP. We present a method of converting randomized algorithms for SMP to algorithms for SCP, and then we present the algorithms thresh-greedy-c and stoch-greedy-c for MSCP, which both have lower query complexity compared to the standard greedy algorithm. Next, we consider the general problem of SCP in Section 2.2. We present the algorithm stream-c for SCP, which produces a solution with \(f\) value arbitrarily close to \(\tau\), but does not necessarily make polynomially many queries to \(f\). Finally, in Section 2.3, we consider RSCP. We present a method of converting algorithms for regularized SMP to ones for RSCP, and then introduce the algorithm distorted-bi for RSCP. ### Monotone submodular cover In this section, we develop and analyze approximation algorithms for MSCP. The greedy algorithm is a tight \((\ln(1/\epsilon),1-\epsilon)\)-bicriteria approximation algorithm for MSCP (Krause et al., 2008). However, the greedy algorithm makes \(O(n^{2})\) queries of \(f\), which is impractical in many application settings with large \(U\) and/or when queries of \(f\) are costly [Mirzasoleiman et al., 2015a]. Motivated by this, we propose and analyze the algorithms thresh-greedy-c and stoch-greedy-c for MSCP which give about the same bicriteria approximation guarantees but in many fewer queries of \(f\). We first describe thresh-greedy-c. thresh-greedy-c is closely related to the existing threshold greedy algorithm for monotone SMP [Badanidiyuru and Vondrak, 2014], and therefore we relegate the pseudocode of thresh-greedy-c to the appendix and only include a brief discussion here. 
At each iteration of thresh-greedy-c, instead of picking the element with highest marginal gain into \(S\), it adds all elements in \(U\) with marginal gain above a threshold, \(w\). \(w\) is initialized to \(\max_{u\in U}f(\{u\})\), and is decreased by a factor of \((1-\epsilon/2)\) when the algorithm proceeds to the next iteration. thresh-greedy-c adds elements to a solution \(S\) until \(f(S)\) reaches \((1-\epsilon)\tau\), which is shown to happen in at most \(\ln(2/\epsilon)|OPT|+1\) elements in the proof of Theorem 1. We now state the theoretical guarantees of thresh-greedy-c in Theorem 1. **Theorem 1**.: \(\texttt{thresh-greedy-c}\) _produces a solution with \((\ln(2/\epsilon)+1,1-\epsilon)\)-bicriteria approximation guarantee to MSCP, in \(O(\frac{n}{\epsilon}\log(\frac{n}{\epsilon}))\) number of queries of \(f\)._ Another method of speeding up the standard greedy algorithm is by introducing randomization, as has been done for monotone SMP in the stochastic greedy algorithm [Mirzasoleiman et al., 2015a]. A natural question is whether a randomized algorithm for monotone SMP can be converted into an algorithm for MSCP using the algorithm convert of Iyer and Bilmes. However, convert relies on a deterministic approximation guarantee. We now introduce a new algorithm called convert-rand that is analogous to convert but runs the SMP algorithm \(O(\ln(n)\ln(1/\delta))\) times in order to have the approximation guarantee hold with high probability. Pseudocode for convert-rand, as well as a proof of Theorem 2 can be found in the appendix. **Theorem 2**.: _Any randomized \((1-\epsilon/2,\beta)\)-bicriteria approximation algorithm for monotone SMP that runs in time \(\mathcal{T}(n)\) where \(\gamma\) holds only in expectation can be converted into a \(((1+\alpha)\beta,1-\epsilon)\)-bicriteria approximation algorithm for MSCP that runs in time \(O(\log_{1+\alpha}(|OPT|)\ln(1/\delta)\mathcal{T}(n))\) where \(\gamma\) holds with probability at least \(1-\delta\)._ Therefore by applying Theorem 2 to the stochastic greedy algorithm of Mirzasoleiman et al., we have a \((1+\alpha,1-1/e-\epsilon)\)-bicriteria approximation algorithm for MSCP with high probability in \(O(n\log_{1+\alpha}(|OPT|)\ln(1/\delta)\ln(1/\epsilon))\) queries of \(f\). However, a factor of \(1-1/e-\epsilon\) of \(\tau\) is not very close to feasible, and further the convert-rand method wastes many queries of \(f\) essentially doing the same computations for different guesses of \(|OPT|\). Therefore we focus the rest of this section on developing an algorithm, stoch-greedy-c, that uses the techniques of the stochastic greedy algorithm more directly for MSCP. The idea behind the stochastic greedy algorithm for SMP is that instead of computing the marginal gains of all elements at each iteration, we take a uniformly random sampled subset from \(U\) and pick the element with the highest marginal gain among the sampled subset. If the sampled subset is sufficiently large, in particular of size at least \((n/\kappa)\ln(1/\epsilon)\) where \(\kappa\) is the budget for the instance of SMP, then with high probability a uniformly random element of \(OPT_{SM}\) will appear in the sampled subset and the marginal gain of adding the element is nearly the same as the standard greedy algorithm in expectation. However, in SMP we know that \(|OPT_{SM}|=\kappa\), but in MSCP \(|OPT|\) is unknown. Therefore it is not obvious how to apply this technique in a more direct way than convert-rand. 
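The sampling step that this idea builds on can be sketched as follows (a schematic illustration of the stochastic greedy rule of Mirzasoleiman et al. as described above, not the authors' stoch-greedy-c code, which additionally maintains several solutions and a growing guess \(g\) of \(|OPT|\)):

```python
import math
import random

# Single stochastic greedy step: instead of scanning all of U, sample a set R of size
# ceil((n / kappa) * ln(1 / eps)) and add the element of R with the largest marginal gain.
def stochastic_greedy_step(U, S, f, kappa, eps, rng=random):
    remaining = [x for x in U if x not in S]          # assumes S != U
    k = min(len(remaining), math.ceil(len(U) / kappa * math.log(1 / eps)))
    R = rng.sample(remaining, k)
    return max(R, key=lambda x: f(S | {x}) - f(S))
```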
We now introduce our algorithm stoch-greedy-c for MSCP, pseudocode for which is provided in Algorithm 1. stoch-greedy-c takes as input \(\epsilon>0\), \(\delta>0\), \(\alpha>0\), and an instance of MSCP. stoch-greedy-c keeps track of \(O(\ln(1/\delta))\) possibly overlapping solutions \(S_{1},S_{2},...\) throughout a sequence of iterations. stoch-greedy-c also keeps track of an estimate of \(|OPT|\), \(g\). During each iteration, for each solution \(S_{i}\), stoch-greedy-c uniformly randomly and independently samples a set \(R\) of size \(\min\{n,(n/g)\ln(3/\epsilon)\}\) and adds \(u=\operatorname{argmax}\{\Delta f_{\tau}(S_{i},x):x\in R\}\) to \(S_{i}\). Every time \(\frac{\alpha}{1+\alpha}\ln(3/\epsilon)g\) elements have been added to each \(S_{i}\), \(g\) is increased by a factor of \(1+\alpha\). stoch-greedy-c stops once there exists an \(S_{i}\) such that \(f(S_{i})\geq(1-\epsilon)\tau\), and returns this solution. We now state the theoretical results for stoch-greedy-c in Theorem 3. **Theorem 3**.: _Suppose that stoch-greedy-c is run for an instance of MSCP. Then with probability at least \(1-\delta\), stoch-greedy-c outputs a solution \(S\) that satisfies a \(((1+\alpha)\lceil\ln(3/\epsilon)\rceil,1-\epsilon)\)-bicriteria approximation guarantee in at most \(O\left(\frac{\alpha}{1+\alpha}n\ln(1/\delta)\ln^{2}(3/\epsilon)\log_{1+\alpha}(|OPT|)\right)\) queries of \(f\)._ Compared to thresh-greedy-c, stoch-greedy-c has a better dependence on \(\epsilon\) in terms of the number of queries made to \(f\). In addition, it is possible to extend the stochastic greedy algorithm of Mirzasoleiman et al. to a \((1-\epsilon,\ln(1/\epsilon))\)-bicriteria approximation algorithm for SMP and then use convert (see the appendix). However, stoch-greedy-c would still have strictly fewer queries of \(f\) by a factor of \(\frac{\alpha}{1+\alpha}\) compared to this approach because convert does essentially the same computations for different guesses of \(|OPT|\). In order to prove Theorem 3, we first need Lemma 1 below, which states that as long as \(g\leq(1+\alpha)|OPT|\), the marginal gain of adding \(u\) in Line 6 is about the same as that of the standard greedy algorithm in expectation. **Lemma 1**.: _Consider any of the sets \(S_{i}\) at the beginning of an iteration on Line 4 where \(g\leq(1+\alpha)|OPT|\). Then if \(u_{i}\) is the random element that will be added on Line 6, we have that \(\mathbb{E}[\Delta f_{\tau}(S_{i},u_{i})]\geq\frac{1-\epsilon/3}{(1+\alpha)|OPT|}(\tau-f_{\tau}(S_{i}))\)._ Further, Lemma 2 below uses Lemma 1 to show that by the time \(g\) reaches \((1+\alpha)|OPT|\), \(\mathbb{E}[f_{\tau}(S_{i})]\geq(1-\frac{\epsilon}{2})\tau\) for all \(i\). **Lemma 2**.: _Once \(r\) reaches \((1+\alpha)\lceil\ln(3/\epsilon)|OPT|\rceil\), we have that \(\mathbb{E}[f_{\tau}(S_{i})]\geq\left(1-\frac{\epsilon}{2}\right)\tau\) for all \(i\)._ Finally, because there are \(O(\ln(1/\delta))\) solutions, by the time \(g\) reaches \((1+\alpha)|OPT|\), there exists \(i\) such that \(f(S_{i})\geq(1-\epsilon)\tau\) with probability at least \(1-\delta\) by using concentration bounds, which is stated in Lemma 3. 
**Lemma 3**.: _With probability at least \(1-\delta\), once \(r\) reaches \((1+\alpha)\lceil\ln(3/\epsilon)|OPT|\rceil\), we have that \(\max_{i}f(S_{i})\geq(1-\epsilon)\tau\)._ Lemma 3 allows us to keep increasing \(g\) by a factor of \((1+\alpha)\) periodically, because intuitively the longer we keep adding elements, the bigger we know that \(|OPT|\) must be since the algorithm is still running and none of the solution sets has reached \((1-\epsilon)\tau\) yet. The proofs of Lemmas 1 and 2, and of Theorem 3, can be found in the appendix. ### Non-monotone submodular cover In this section, we introduce and theoretically analyze the algorithm stream-c for SCP in the general setting, where \(f\) is not assumed to be monotone. In the general setting, the standard greedy algorithm does not have a non-trivial approximation guarantee for SCP. In addition, it has previously been shown that it is not possible for an algorithm to guarantee that \(f(X)\geq\tau/2\) for SCP, where \(X\) is its returned solution, in polynomially many queries of \(f\) assuming the value oracle model (Crawford, 2023). Our algorithm stream-c _does_ produce a solution \(X\) that is guaranteed to satisfy \(f(X)\geq(1-\epsilon)\tau\), but relies on solving an instance of SMP exactly on a set of size \(O(|OPT|/\epsilon^{2})\). Despite not being polynomial time, stream-c is practical in many instances of SCP because: (i) \(|OPT|\) may be relatively small; and (ii) the instance of SMP may be relatively easy to solve, e.g., \(f\) may be very close to monotone on the instance of SMP even if it was very non-monotone on the original instance of SCP. These aspects of stream-c are further explored in Section 3. We now describe stream-c, pseudocode for which can be found in Algorithm 2. stream-c takes as input \(\epsilon>0\), \(\alpha>0\), and an instance of SCP. stream-c takes sequential passes through the universe \(U\) (Line 4), with each pass corresponding to a new guess of \(|OPT|\), \(g\). \(g\) is initialized as \(1+\alpha\), and at the end of each pass is increased by a factor of \(1+\alpha\). Throughout stream-c, a subset of elements of \(U\) is stored into \(2/\epsilon\) disjoint sets, \(S_{1},...,S_{2/\epsilon}\). An element \(u\) is stored in at most one set \(S_{j}\) if both of the following are true: (i) \(|S_{j}|<2g/\epsilon\); (ii) adding \(u\) is sufficiently beneficial for increasing the \(f\) value of \(S_{j}\), i.e., \(\Delta f(S_{j},u)\geq\epsilon\tau/(2g)\). If no such \(S_{j}\) exists, \(u\) is discarded. At the end of each pass, stream-c finds \(S=\text{argmax}\{f(X):X\subseteq\cup S_{i},|X|\leq 2g/\epsilon\}\) on Line 7. If \(f(S)\geq(1-\epsilon)\tau\), then \(S\) is returned and stream-c terminates. We now present the theoretical guarantees of stream-c in Theorem 4. **Theorem 4**.: _Suppose that stream-c is run for an instance of SCP. Then stream-c returns \(S\) such that \(f(S)\geq(1-\epsilon)\tau\) and \(|S|\leq(1+\alpha)(2/\epsilon)|OPT|\) in at most_ \[\log_{1+\alpha}(|OPT|)\left(\frac{2n}{\epsilon}+\mathcal{T}\left((1+\alpha)\left(\frac{4}{\epsilon^{2}}|OPT|\right)\right)\right)\] _queries of \(f\), where \(\mathcal{T}(m)\) is the number of queries to \(f\) of the algorithm for SMP used on Line 7 of Algorithm 2 on an input set of size \(m\)._ The key idea for proving Theorem 4 is that by the time \(g\) is in the region \([|OPT|,(1+\alpha)|OPT|]\), there exists a subset \(X\subseteq\cup S_{i}\) such that \(|X|\leq 2g/\epsilon\) and \(f(X)\geq(1-\epsilon)\tau\). 
In fact, it is shown in the proof of Lemma 4 in the appendix that the set \(X\) is \(S_{t}\cup(\cup_{i}S_{i}\cap OPT)\) for a certain one of the sets \(S_{t}\). Then when we solve the instance of SMP on Line 7, we find a set that has these same properties as \(X\), and stream-c returns this set and terminates. Because \(g\leq(1+\alpha)|OPT|\), the properties described in Theorem 4 hold. Further notice that \(|\cup_{i}S_{i}|\leq 2(1+\alpha)|OPT|/\epsilon^{2}\) at all times before stream-c exits, which implies the bounded query complexity in Theorem 4. The key idea for proving Theorem 4 is stated below in Lemma 4 and proven in the appendix. **Lemma 4**.: _By the time that \(g\) reaches the region \([|OPT|,(1+\alpha)|OPT|]\) and the loop on Line 4 of stream-c has completed, there exists a set \(X\subseteq\cup S_{i}\) of size at most \(2(1+\alpha)|OPT|/\epsilon\) such that \(f(X)\geq(1-\epsilon)\tau\)._ ### Regularized monotone submodular cover The final class of submodular functions we consider takes the form \(f=g-c\), where \(g\) is monotone, submodular, and nonnegative, while \(c\) is a modular, nonnegative penalty cost function; this problem is called RSCP. In this case, \(f\) may take on negative values and therefore this class of submodular functions does not fit into general SCP. \(f\) may also be nonmonotone. Existing theoretical guarantees for the dual problem of regularized SMP are in a different form than those of typical approximation algorithms (Harshaw et al., 2019; Kazemi et al., 2021), which we will describe in more detail below, and as a result convert cannot be used. Motivated by this, we first develop an algorithm, convert-reg, that takes algorithms for regularized SMP and converts them into algorithms for RSCP. Next, we propose a generalization of the distorted greedy algorithm of Harshaw et al. for regularized SMP, called distorted-bi, that can be used along with convert-reg to produce an algorithm for RSCP. Existing proposed algorithms for regularized SMP have guarantees of the following form: Given budget \(\kappa\), the regularized SMP algorithm is guaranteed to return a set \(S\) such that \(|S|\leq\kappa\) and \(g(S)-c(S)\geq\gamma g(OPT_{SM})-c(OPT_{SM})\) where \(\gamma\) is some value less than 1, e.g., \(1-1/e\) for the distorted greedy algorithm of Harshaw et al. A guarantee of this form means convert cannot be used (the check on Line 2 of the pseudocode for convert in the appendix is the problem). Motivated by this, we provide convert-reg for these different types of theoretical guarantees. We now describe convert-reg, pseudocode for which can be found in Algorithm 3. convert-reg takes as input an algorithm reg for regularized SMP, and \(\alpha>0\). convert-reg repeatedly makes guesses for \(|OPT|\), \(\kappa\). For each guess \(\kappa\), the algorithm reg is run on an instance of SMP with objective \(g-(\gamma/\beta)c\) and budget \(\kappa\). Once \(g-(\gamma/\beta)c\) reaches \(\gamma\tau\), convert-reg exits. The theoretical guarantees of convert-reg are stated below in Theorem 5 and proven in the appendix. Theorem 5 makes a slightly stronger assumption on reg than its approximation guarantees relative to \(OPT_{SM}\). In particular, it is assumed that it returns a solution satisfying \(|S|\leq\rho\kappa\) and \(g(S)-c(S)\geq\gamma g(X)-c(X)\)_for all \(X\subseteq U\) such that \(|X|\leq\kappa\)_, not just for \(OPT_{SM}\). However, this is true of many algorithms for regularized SMP, including the distorted greedy algorithm of Harshaw et al. 
The idea behind running reg with the objective \(g-\frac{\gamma}{\beta}c\) instead of the actual objective \(g-c\), is that by the time \(\kappa\) is a good guess of \(|OPT|\), it is shown in the proof of Theorem 5 that with this different objective \(g(S)-\frac{\gamma}{\beta}c(S)\geq\gamma\tau\). **Theorem 5**.: _Suppose that we have an algorithm reg for regularized SMP, and given budget \(\kappa\) reg is guaranteed to return a set \(S\) of cardinality at most \(\rho\kappa\) such that \(g(S)-c(S)\geq\gamma g(X)-\beta c(X)\) for all \(X\) such that \(|X|\leq\kappa\), in time \(T(n)\). Then the algorithm convert-reg using reg as a subroutine returns a set \(S\) in time \(O(\log_{1+\alpha}(nT)(n))\) such that \(|S|\leq(1+\alpha)\rho|OPT|\) and \(g(S)-\frac{\gamma}{\beta}c(S)\geq\gamma\tau\)._ If we use convert-reg on the distorted greedy algorithm of Harshaw et al., we end up with an algorithm for RSCP that is guaranteed to return a set \(S\) such that \(|S|\leq(1+\alpha)|OPT|\) and \(g(S)-(1-1/e)c(S)\geq(1-1/e)\tau\). If we set \(c=0\), then the problem setting reduces to MSCP and the distorted greedy algorithm of Harshaw et al. (2019) is equivalent to the standard greedy algorithm. However, our approximation guarantee does not reduce to the \((\ln(1/\epsilon),1-\epsilon)\)-bicriteria approximation guarantee that would be preferable. A more intuitive result would be one that converges to that of the standard greedy algorithm as \(c\) goes to 0. Motivated by this, we now propose an extension of the distorted greedy algorithm of Harshaw et al. (2019) for regularized SMP, distorted-bi, that accomplishes this. We now describe distorted-bi, pseudocode for which can be found in the appendix. distorted-bi takes as input an instance of regularized SMP and \(\epsilon>0\). distorted-bi is related to the standard greedy algorithm, but instead of making queries to \(g-c\), distorted-bi queries a distorted version of \(g-c\) that de-emphasizes \(g\) compared to \(c\), and evolves over time. In particular, when element \(i\) is being added to the solution set, we choose the element of maximum marginal gain, provided it is positive, to the objective \[\Phi_{i}(X)=\left(1-\frac{1}{\kappa}\right)^{\ln(1/\epsilon)\kappa-i}g(X)-c(X).\] The theoretical guarantees of distorted-bi are now presented in Theorem 6, and the proof of Theorem 6 can be found in the appendix. **Theorem 6**.: _Suppose that distorted-bi is run for an instance of regularized SMP. Then distorted-bi produces a solution \(S\) in \(O(n\kappa\ln(1/\epsilon))\) queries of \(f\) such that \(|S|\leq\ln(1/\epsilon)\kappa\) and for all \(X\subseteq U\) such that \(|X|\leq\kappa\), \(g(S)-c(S)\geq(1-\epsilon)g(X)-\ln(1/\epsilon)c(X)\)._ Therefore by running convert-reg with distorted-bi as a subroutine for regularized SMP, we end up with an algorithm for regularized RSCP that is guaranteed to return a set \(S\) such that \(|S|\leq(1+\alpha)\ln(1/\epsilon)|OPT|\) and \(g(S)-(1-\epsilon)c(S)/\ln(1/\epsilon)\geq(1-\epsilon)\tau\) in \(O((1+\alpha)n|OPT|\log_{1+\alpha}(n)\log(1/\epsilon))\) queries of \(f\). ## 3 Experiments In this section, we experimentally evaluate the algorithms proposed in Sections 2.1 and 2.2 on real instances of SCP. In particular, the emphasis of Section 3.1 is on evaluation of our algorithm stoch-greedy-c on instances of data summarization, an application of MSCP. Next, we evaluate stream-c on instances of graph cut, an application of SCP that is not monotone, in Section 3.2. 
Additional details about the applications, setup, and results can be found in the appendix. ### Monotone submodular objective We first compare the solutions returned by stoch-greedy-c ("SG"), greedy-c ("G"), thresh-greedy-c ("TG"), and convert using the _bicriteria_ extension of the stochastic greedy algorithm of Mirzasoleiman et al. (see appendix) ("SG2") on instances of data summarization. The data summarization instance featured here in the main paper is the delicious dataset of URLs tagged with topics, and \(f\) takes a subset of URLs to the number of distinct topics represented by those URLs (\(n=5000\) with \(8356\) tags) [Soleiman and Miller, 2016]. Additional datasets are explored in the appendix. We run the algorithms with input \(\epsilon\) in the range \((0,0.15)\) and threshold values between 0 and \(f(U)\) (\(f(U)\) is the total number of tags). When \(\epsilon\) is varied, \(\tau\) is fixed at \(0.6f(U)\). When \(\tau\) is varied, \(\epsilon\) is fixed at \(0.2\). The parameter \(\alpha\) is set to be \(0.1\) and the initial guess of \(|OPT|\) for stoch-greedy-c and convert is set to be \(\tau/\max_{s}f(s)\). The results in terms of the \(f\) values and size of the solutions are presented in Figure 1(a) and 1(b). From the plots, one can see that the \(f\) values and size of solutions returned by stoch-greedy-c, greedy-c, and thresh-greedy-c are nearly the same, and are smaller than the ones returned by convert. This is unsurprising, because the theoretical guarantees on \(f\) and size are about the same for the different algorithms, but convert tends to perform closer to its worst case guarantee on size. The number of queries to \(f\) for different \(\epsilon\) and \(\tau\) is depicted in Figures 1(d) and 1(c). Recall that the theoretical worst case number of queries to \(f\) for stoch-greedy-c, greedy-c, thresh-greedy-c, and convert are \(O((\alpha/(1+\alpha))n\ln^{2}(1/\epsilon)\log_{1+\alpha}(|OPT|))\), \(O(n\ln(1/\epsilon)|OPT|)\), \(O(n\log(|OPT|/\epsilon)/\epsilon)\), and \(O(n\ln^{2}(1/\epsilon)\log_{1+\alpha}(|OPT|))\), respectively. As expected based on these theoretical guarantees, greedy-c does the worst and increases rapidly as \(\tau\) (and therefore \(|OPT|\)) increases. thresh-greedy-c tends to do worse compared to stoch-greedy-c and convert as \(\epsilon\) gets smaller. stoch-greedy-c consistently performs the fastest out of all of the algorithms. ### Non-Monotone Submodular Objective We now analyze the performance of stream-c on several instances of graph cut over real social network data. The universe \(U\) is all nodes in the network, and \(f\) is the number of edges between a set and its complement. The network featured in the main paper is the email-EuAll dataset (\(n=265214\), 420045 edges) from the SNAP large network collection [Leskovec and Sosic, 2016] and additional datasets can be found in the appendix. We run stream-c with input \(\epsilon\) in the range \((0,0.5)\) and threshold values between 0 and \(f(X)\) where \(X\) is a solution returned by the unconstrained submodular maximization algorithm of Buchbinder et al. (2015) on the instance. When \(\epsilon\) is varied, \(\tau\) is fixed at \(0.9f(X)\). When \(\tau\) is varied, \(\epsilon\) is fixed at \(0.15\). We compare the performance of stream-c using several possible algorithms for the subroutine of SMP over \(\cup S_{i}\) (see line 7 in Algorithm 2), including a polynomial time approximation algorithm and an unconstrained submodular maximization algorithm. 
In particular, we use the random greedy approximation algorithm for SMP that is proposed in Buchbinder et al. (2014) ("RG"), and the double greedy approximation algorithm for unconstrained submodular maximization proposed in Buchbinder et al. (2015) ("DG"). Random greedy and double greedy are both approximation algorithms (\(1/e\) in expectation and \(1/2\) in expectation, respectively), and therefore the stopping conditions are set to be \(\frac{(1-\epsilon)\tau}{e}\) and \(\frac{(1-\epsilon)\tau}{2}\), respectively. We also consider an exact algorithm ("EX"), which is essentially a greedy heuristic followed by an exact search of all (exponentially many) possible solutions if the greedy fails. On instances where the exact algorithm was unable to complete in a time period of \(5\) minutes, we did not include a data point. We further discuss the use of these algorithms in the appendix. Before introducing the fourth subroutine, we discuss an interesting pattern that we saw in our instances of graph cut. We noticed that it was often the case that: (i) \(\cup S_{i}\) tended to be small compared to its upper bound and in fact typically \(|\cup S_{i}|\) was smaller than the SMP constraint, making the subroutine an instance of unconstrained submodular maximization; (ii) The majority of elements (if not all) were "monotone" in the sense that for many \(x\in\cup S_{i}\), \(\Delta f(\cup S_{i}\setminus x,x)\geq 0\). Let \(M\subseteq\cup S_{i}\) be the set of monotone elements. Then if (i) holds, the instance of submodular maximization is equivalent to \(\arg\max_{X\subseteq\cup S_{i}\setminus M}f(X\cup M)\). If \(M\) is large in \(\cup S_{i}\), this new problem instance is relatively easy to solve exactly. This motivates our fourth algorithm, fast-exact ("F-EX"), used on instances where (i) holds, which separates \(\cup S_{i}\) into monotone and non-monotone elements and searches for the best subset amongst the non-monotone elements in a similar manner to the plain exact algorithm. We explore to what extent properties (i) and (ii) hold on different instances, as well as give additional details about the fast exact algorithm, in the appendix. The results in terms of the \(f\) values and size of the output solutions returned by the four algorithms are plotted in Figure 1(e) and Figure 1(f). From the plots, one can see that the \(f\) values satisfy that \(f(S_{\text{exact}})\approx f(S_{\text{f-exact}})>f(S_{\text{DG}})>f(S_{\text{RG}})\). This is due to the stopping conditions for each algorithm, which follow from each algorithm's approximation guarantee on \(f\) of \(1-\epsilon\), \(1-\epsilon\), \(1/2\), and \(1/e\), respectively. On the other hand, the size mirrors the \(f\) value, since it tends to be the case that reaching a higher \(f\) value requires more elements from \(U\). The number of queries made by the algorithms can be seen in Figure 1(h) and 1(g). As expected, the exact algorithms make more queries compared to the approximation algorithms, and in some cases "EX" doesn't even finish. However, by taking advantage of the properties (i) and (ii) discussed above, "F-EX" is able to run even for smaller \(\epsilon\). Therefore, depending on the application, an exact algorithm on the relatively small set \(\cup S_{i}\) may be a practical choice in order to achieve a solution that is very close to feasible. 
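For the graph cut experiments, both the objective and the "monotone element" check exploited by fast-exact are easy to write down; the snippet below is an illustrative sketch on a made-up toy graph, not the experimental code.

```python
# Graph cut objective f(X) = number of edges between X and its complement, and the
# "monotone element" check Delta f(pool \ {x}, x) >= 0 used to shrink the exact search.
edges = [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4)]   # hypothetical small graph

def cut(X):
    X = set(X)
    return sum(1 for (u, v) in edges if (u in X) != (v in X))

pool = {0, 1, 2, 3}                                # stand-in for the stored union of the S_i
monotone = {x for x in pool if cut(pool) - cut(pool - {x}) >= 0}
nonmonotone = pool - monotone
# By submodularity, elements of `monotone` can always be kept in the solution, so an
# exact search only has to enumerate subsets of the remaining non-monotone elements.
```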
Figure 1: The experimental results of running the monotone algorithms on instances of data summarization on the delicious URL dataset (“cover”) and running stream-c on the instances of graph cut on the email-EuAll graph (“eeull”).
2309.04993
Thermal photon measurements at PHENIX
Photons are emitted at all stages of relativistic heavy-ion collisions and do not interact with the medium strongly. With access to the versatility of RHIC, measurements of low momentum direct photons are made possible across different system size and beam energies. An excess of direct photons, above prompt photon production from hard scattering processes, is observed for a system size corresponding to $dN_{ch}/d\eta$ of 20-30, with a large azimuthal anisotropy and a characteristic dependence on collision centrality. After subtracting the prompt photon component, the inverse slope of the spectrum is continuously increasing with the effective temperature ranging from 250 MeV/c at $p_{T}$ of 1-2 GeV/c to about 400 MeV/c at 2-4 GeV/c. Within the experimental uncertainty, there is no indication of a system size dependence of the inverse slope. In this proceeding, results from Au+Au collisions from the PHENIX experiment will be presented.
Roli Esha
2023-09-10T10:59:35Z
http://arxiv.org/abs/2309.04993v1
# Thermal photon measurements at PHENIX ###### Abstract: Photons are emitted at all stages of relativistic heavy-ion collisions and do not interact with the medium strongly. With access to the versatility of RHIC, measurements of low momentum direct photons are made possible across different system size and beam energies. An excess of direct photons, above prompt photon production from hard scattering processes, is observed for a system size corresponding to \(dN_{ch}/d\eta\) of 20-30, with a large azimuthal anisotropy and a characteristic dependence on collision centrality. After subtracting the prompt photon component, the inverse slope of the spectrum is continuously increasing with the effective temperature ranging from 250 MeV/c at \(p_{T}\) of 1-2 GeV/c to about 400 MeV/c at 2-4 GeV/c. Within the experimental uncertainty, there is no indication of a system size dependence of the inverse slope. In this proceeding, results from Au+Au collisions from the PHENIX experiment will be presented. ## 1 Introduction By virtue of being color neutral, photons provide snapshots of the space-time evolution of the hot and dense medium produced in relativistic heavy-ion collisions. Direct photons, defined as those which do not come from hadronic decays, are sensitive to the temperature of the medium, and their measurement helps constrain the initial conditions as well as the sources of photon production and their emission rates. All photon sources can be classified into two groups: decay photons, which make up approximately 80-90% of the total photon yield, and direct photons. Direct photons, in turn, can be further categorized into two subcategories: prompt and non-prompt photons. Prompt photons originate from sources akin to those found in \(p\)+\(p\) collisions and their yield scales with the number of binary collisions. In addition to the well-known thermal sources originating from the Hadron Gas and the Quark-Gluon Plasma (QGP) phase, other sources that contribute to non-prompt direct photons include interactions between jets and the surrounding medium, as well as emissions occurring during the pre-equilibrium state. As the system evolves, it undergoes expansion and cooling. Consequently, earlier phases are characterized by higher temperatures and are more likely to dominate the emissions at higher transverse momentum (\(p_{T}\)). The wealth of data and an optimized detector configuration have enabled PHENIX to measure direct photons across 7 systems and 3 collision energies over a large \(p_{T}\) range. To ensure robustness and accuracy, three distinct methods have been employed: the calorimeter method, the virtual photon method, and the external conversion method. In this proceeding, results from the external conversion method used for analyzing the direct photons from the years 2010 [1] and 2014 [2] will be discussed. ## 2 Direct photons Photon conversions on the backplane of the Hadron Blind detector (HBD) were analyzed for Au+Au collisions recorded in 2010 at \(\sqrt{s_{{}_{NN}}}\) = 39 GeV and \(\sqrt{s_{{}_{NN}}}\) = 62.4 GeV, and the corresponding direct photon spectra as a function of \(p_{T}\) are shown in Fig. 1 (left) with the \(T_{AA}\)-scaled pQCD curve shown as a solid line. The substantial increase in the statistics for Au+Au collisions at \(\sqrt{s_{{}_{NN}}}\) = 200 GeV allowed for a more detailed and differential measurement of direct photons. Instead of the HBD, which was removed, conversions in the layers of a new Silicon Vertex tracker, with a material budget of around 13%, were analyzed. 
The spectrum for every 20% collision centrality is shown in Fig. 1 (right). Remarkably, the results obtained from this new analysis exhibit excellent agreement with all the previous measurements conducted by PHENIX. Having established the direct photon spectra, the subsequent step involves comprehending the dependence on collision centrality and investigating the shape of the spectrum. **Universal scaling**: To further investigate the centrality dependence, the integrated yields, in Figure 2 (left), are plotted as a function of charged particle multiplicity at midrapidity for various collision systems and energies, spanning almost 2 orders of magnitude. Notably, a universal scaling behavior is observed across all \(A\) + \(A\) collision systems, exhibiting a trend similar to that of scaled \(p\)+\(p\) collisions but with yields approximately 10 times larger. **Effective temperature**: The effective temperature can be determined by estimating the local inverse slope of the spectrum. To gain a deeper understanding of the similarities in the low-\(p_{T}\) direct photon spectrum across different collision energies, the spectrum is fitted in various \(p_{T}\) ranges. The resulting values of the effective temperature (\(T_{\rm eff}\)) extracted from these fits are displayed in Figure 2 (right). The consistency in extracted \(T_{\rm eff}\) within different collision energies across various fit ranges implies that there are common sources responsible for direct photon production, regardless of the specific collision energy. ## 3 Non-prompt direct photons Non-prompt direct photons are radiation emitted during the collision from the hot and expanding fireball; they are estimated by subtracting the \(N_{\rm coll}\)-scaled \(p\)+\(p\) fit from the direct photon spectrum. The non-prompt direct photon spectra are shown for every 20% collision centrality in Fig. 3. These measurements have significantly expanded their scope both in terms of \(p_{T}\) coverage and centrality compared to previous publications. **Universal scaling**: In order to investigate the centrality dependence, the scaling power, \(\alpha\), is extracted by fitting the integrated yield as a function of charged particle multiplicity at midrapidity. Figure 4 (left) displays the variation of \(\alpha\) as a function of \(p_{T}\) for six non-overlapping \(p_{T}\) ranges for non-prompt direct and direct photon spectra. Below 3 GeV/\(c\), the direct photon spectra are primarily influenced by non-prompt direct photon sources, resulting in similar values for \(\alpha\). However, as \(p_{T}\) increases, the \(\alpha\) values begin to diverge, although it should be noted that the non-prompt direct photon spectra suffer from limited statistical precision. Experimental findings indicate that \(\alpha\) remains relatively independent of \(p_{T}\), contrary to the theoretical expectations that \(\alpha\) increases as the system transitions to higher \(p_{T}\) where the production is dominated by the QGP phase [3]. Figure 1: Invariant yield of direct photons as a function of \(p_{T}\) for Au+Au at \(\sqrt{s_{{}_{NN}}}\) = 39 GeV and \(\sqrt{s_{{}_{NN}}}\) = 62.4 GeV (left) and for \(\sqrt{s_{{}_{NN}}}\) = 200 GeV (right) for different collision centralities. Figure 2: Integrated yield of direct photons as a function of system size (left) and the inverse slope of the spectrum as a function of collision energy for different \(p_{T}\) ranges (right).
**Effective temperature**: The shape of the non-prompt direct photon spectrum is not described by a single exponential but rather has a continuously increasing inverse slope with \(p_{T}\). To quantify this changing slope, the non-prompt direct photon spectra are fitted with exponentials in two distinct \(p_{T}\) ranges, as depicted in Figure 4 (right). The slopes are found to be consistent with a constant value and independent of the collision centrality. The average value of \(T_{\rm eff}\) rises from 200 MeV to approximately 400 MeV within the \(p_{T}\) range of 0.8 to 4 GeV/\(c\). The variation in \(T_{\rm eff}\) is not surprising, as the underlying spectra integrate the entire evolution of the expanding fireball, encompassing its earliest pre-equilibrium state, the Quark-Gluon Plasma (QGP) phase, the transition to the hadron gas phase, and subsequent expansion and cooling until hadrons cease interacting with each other. Consequently, contributions from the earliest phase are expected to dominate the spectra at higher \(p_{T}\) values, which is consistent with the observation of an increasing \(T_{\rm eff}\) with \(p_{T}\). Figure 3: Invariant yield of non-prompt direct photons as a function of \(p_{T}\) for Au+Au at \(\sqrt{s_{{}_{NN}}}\) = 200 GeV for different collision centralities. Figure 4: Scaling power \(\alpha\), as a function of \(p_{T}\) (left) and the inverse slope of the non-prompt direct photon spectrum as a function of system size for different \(p_{T}\) ranges (right). ## 4 Comparisons to theory In Figure 5 (left), the measured non-prompt direct photon spectra are compared to recent theoretical calculations that utilize a hybrid model, taking into account contributions from the pre-equilibrium state [4]. The bottom panel of the figure presents the ratio of the measurements to the combined thermal and pre-equilibrium contributions predicted by the model. According to the calculations, the pre-equilibrium radiation is expected to become the dominant source of non-prompt direct photons above a \(p_{T}\) of 3 GeV/\(c\). While the shape of the spectra is well-reproduced by the model, the overall yield predicted by the model falls short, particularly below 2 GeV/\(c\), where the measured yields appear to be approximately 2 to 3 times larger. In order to further explore the shape of the non-prompt direct photon spectra, they are smoothed using a machine-learning-based regression algorithm, a Multi-Layer Perceptron, applied to the PHENIX data [2][5]. The inverse slope is extracted by numerical differentiation and is shown in Fig. 5 (right). It can be argued that with increasing \(p_{T}\), the contribution from the pre-equilibrium phase may be important. ## 5 Direct photons in small systems For small systems, direct photons are measured for \(p\)+\(p\) and \(p\)+Au collisions at 200 GeV using the data collected in 2015, utilizing external conversions in the layers of the Silicon Vertex detectors. In Fig. 6, the red data points obtained from external conversions in \(p\)+\(p\) collisions align well with previous PHENIX measurements employing internal conversions. The solid line represents the \(N_{\rm coll}\)-scaled fit to the \(p\)+\(p\) spectrum, which serves as a reference baseline. Notably, the lowest \(p_{T}\) points for the most central \(p\)+Au collisions exhibit indications of an excess in the observed yields of direct photons. Figure 5: Comparison of the non-prompt direct photon spectrum for Au+Au at \(\sqrt{s_{{}_{NN}}}\) = 200 GeV with theoretical calculations. 
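As a rough illustration of how an inverse slope such as \(T_{\rm eff}\) is extracted from a spectrum (a synthetic example of ours, not PHENIX data or analysis code), one can fit \(A\,e^{-p_{T}/T_{\rm eff}}\) to the yield in a restricted \(p_{T}\) window:

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic illustration only: fit an exponential to a fake invariant-yield spectrum
# in a limited pT window and read off the local inverse slope T_eff; values are made up.
def spectrum(pT, A, T_eff):
    return A * np.exp(-pT / T_eff)

pT = np.linspace(0.8, 2.0, 13)                                     # GeV/c, hypothetical window
rng = np.random.default_rng(1)
yields = spectrum(pT, 5.0, 0.25) * rng.normal(1.0, 0.05, pT.size)  # fake spectrum, ~5% scatter

popt, _ = curve_fit(spectrum, pT, yields, p0=(1.0, 0.3))
print(f"T_eff = {popt[1]:.3f} GeV")                                # close to the injected 0.25 GeV
```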
## 6 Summary

In summary, the results of direct photon measurements in Au+Au collisions at 39, 62.4, and 200 GeV are discussed, together with a more detailed analysis of non-prompt direct photons in high-statistics Au+Au collisions at 200 GeV. The integrated direct photon yields exhibit a universal scaling with charged particle multiplicity at midrapidity that holds regardless of collision centrality, collision energy, or collision system. Furthermore, the scaling power \(\alpha\) shows an insignificant dependence on \(p_{T}\) for both direct and non-prompt direct photons. Both the direct and non-prompt direct photon spectra exhibit an inverse slope that increases with \(p_{T}\). Recent theoretical calculations that include pre-equilibrium contributions appear to reduce the discrepancy between theory and observation.
2307.00081
Bose-Einstein condensation of photons in a vertical-cavity surface-emitting laser
Many bosons can occupy a single quantum state without a limit. This state is described by quantum-mechanical Bose-Einstein statistics, which allows the formation of a Bose-Einstein condensate at low temperatures and high particle densities. Photons, historically the first considered bosonic gas, were late to show this phenomenon, which was observed in rhodamine-filled microlaser cavities and doped fiber cavities. These more recent findings have raised the natural question as to whether condensation is common in laser systems, with potential technological applications. Here, we show the Bose-Einstein condensation of photons in a broad-area vertical-cavity surface-emitting laser with positive cavity mode-gain peak energy detuning. We observed a Bose-Einstein condensate in the fundamental transversal optical mode at the critical phase-space density. The experimental results follow the equation of state for a two-dimensional gas of bosons in thermal equilibrium, although the extracted spectral temperatures were lower than those of the device. This is interpreted as originating from the driven-dissipative nature of the device and the stimulated cooling effect. In contrast, non-equilibrium lasing action is observed in the higher-order modes in a negatively detuned device. Our work opens the way for the potential exploration of superfluid physics of interacting photons mediated by semiconductor optical non-linearities. It also shows great promise for enabling single-mode high-power emission from a large aperture device.
Maciej Pieczarka, Marcin Gębski, Aleksandra N. Piasecka, James A. Lott, Axel Pelster, Michał Wasiak, Tomasz Czyszanowski
2023-06-30T18:35:15Z
http://arxiv.org/abs/2307.00081v2
# Bose-Einstein condensation of photons in a vertical-cavity surface-emitting laser ###### Abstract Many bosons can occupy a single quantum state without a limit. This state is described by quantum-mechanical Bose-Einstein statistics, which allows the formation of a Bose-Einstein condensate at low temperatures and high particle densities. Photons, historically the first considered bosonic gas, were late to show this phenomenon, which was observed in rhodamine-filled microlaser cavities and doped fiber cavities. These more recent findings have raised the natural question as to whether condensation is common in laser systems, with potential technological applications. Here, we show the Bose-Einstein condensation of photons in a broad-area vertical-cavity surface-emitting laser with positive cavity mode-gain peak energy detuning. We observed a Bose-Einstein condensate in the fundamental transversal optical mode at the critical phase-space density. The experimental results follow the equation of state for a two-dimensional gas of bosons in thermal equilibrium, although the extracted spectral temperatures were lower than those of the device. This is interpreted as originating from the driven-dissipative nature of the device and the stimulated cooling effect. In contrast, non-equilibrium lasing action is observed in the higher-order modes in a negatively detuned device. Our work opens the way for the potential exploration of superfluid physics of interacting photons mediated by semiconductor optical non-linearities. It also shows great promise for enabling single-mode high-power emission from a large aperture device. At the beginning of the 20th century, Albert Einstein extended the statistical theory of Satyendra Nath Bose to describe massive particles and made the pioneering prediction of the Bose-Einstein condensate (BEC) below a critical temperature [1]. BEC is characterized by both saturation of occupation in the excited states and condensation in the ground energy state of the system [2]. Seventy years after its theoretical prediction, this macroscopic quantum phenomenon was first observed directly in dilute clouds of atomic gases at temperatures close to absolute zero [3; 4]. The reason for such a low critical temperature is that it is inversely proportional to the mass of the boson. Therefore, a heavy particle gas must be extremely cold to reach the transition point. However, if we consider the mass as the parameter of energy dispersion, then we can find a bosonic quasiparticle described with a dispersion of large curvature, and hence with a quite small effective mass, which enables condensation at elevated temperatures. This concept has been realized in a variety of bosonic quasiparticle systems, such as magnons [5], excitons [6; 7; 8], and plasmons [9], as well as hybrid excitations of strongly coupled systems of exciton and photons, namely cavity polaritons [10; 11]. Photons, on the other hand, have been out of the picture for many years because they represent a massless gas with linear energy dispersion and a trivial, null ground state. In principle, the number of particles is not conserved, i.e. in a blackbody radiation model in thermal equilibrium the chemical potential vanishes, and therefore condensation cannot occur. Nevertheless, over years of research many analogies have been drawn between laser physics and atomic BEC physics, yielding a more detailed understanding of these two worlds. 
Eventually, a system that meets all the requirements of an equilibrium photon BEC was obtained in a laboratory tabletop system of a microcavity filled with a rhodamine solution [12]. Remarkably, this system clearly demonstrated many textbook effects of a non-interacting condensate of bosons, from thermodynamic and caloric properties [13; 14] to quantum-statistical effects [15; 16]. Moreover, the driven-dissipative nature of this system beyond equilibrium has been demonstrated [17], and the phase boundaries between photon BECs and non-equilibrium lasing have been investigated extensively [18; 19]. However, rhodamine-based photon BECs are limited by their weak and slow thermo-optical nonlinearity [20], which has so far prevented the observation of static or dynamic superfluid effects. Pioneering observations have stimulated the search for BEC conditions in other laser systems, such as fiber cavities [21] and semiconductor lasers [22; 23; 24], to enable true technological applications outside of the laboratory environment and to find a material system with non-negligible and fast non-linearities. Here, we demonstrate a photon BEC in a well-established semiconductor device, a large aperture electrically driven vertical-cavity surface-emitting laser (VCSEL) at room temperature. By testing devices with different energy detunings between the cavity fundamental mode \(\varepsilon_{0}\) and the quantum well (QW) fundamental transition \(\varepsilon_{\text{QW}}\), defined as \(\Delta=\varepsilon_{0}-\varepsilon_{\text{QW}}\), we found a homogeneous BEC of photons with a thermalized spectrum. This occurred for both \(\Delta>0\) and standard non-equilibrium laser operation at higher-order modes in another device of the same geometry but with \(\Delta<0\). In the BEC regime, we found that the photonic gas thermalizes to temperatures below the temperature of the VCSEL, suggesting that it is not fully equilibrated with the optical gain medium. Nevertheless, the extracted temperatures, chemical potentials, and photon densities allowed us to experimentally determine the equation of state (EOS), which follows the behavior of a 2D Bose gas in thermal equilibrium. The device under study is an epitaxially grown oxide-confined VCSEL with a large \(23\,\mathrm{\SIUnitSymbolMicro m}\)-diameter aperture, emitting around \(980\,\mathrm{nm}\). The VCSEL is designed for simultaneous high bandwidth, high optical output power, and moderate to high wall plug efficiency [25, 26] (see Methods for details). We drive our semiconductor device at room temperature with direct current, by applying a constant voltage across the laser diode (see Fig. 1(a)). This sets the non-equilibrium distribution of carriers in the QW region, as the separation of the quasi-Fermi levels for electrons in the conduction band states \(\mu_{c}\) and holes in the valence band states \(\mu_{v}\) is proportional to the applied voltage. Due to the sub-picosecond relaxation of carriers within the bands [27], the electrons and holes are in equilibrium with the device. Hence, both gases can be described with separate Fermi distributions, with different quasi-Fermi levels setting the occupation in both bands (see Fig. 1(b) [28]). Let us assess the essential conditions for obtaining a photon BEC in a VCSEL. In electrically driven semiconductors, excited electrons and holes can recombine, emitting photons. 
Thus, the condition of chemical equilibrium can be established if the chemical potential of photons is equal to \(\mu=\mu_{c}-\mu_{v}\), in close analogy to a photochemical reaction [29]. This well-defined chemical potential of a photon gas is essential for obtaining a BEC at equilibrium. Another key ingredient is the detailed balance condition between emission and absorption, which was explored in the first demonstrations of photon BECs based on organic laser dyes [12]. This condition is also met in semiconductors, where the ratio between emission and absorption rates is expressed as the van Roosbroeck-Shockley relation \(R_{\text{abs}}(\varepsilon)/R_{\text{em}}(\varepsilon)=\exp(\frac{\varepsilon-\mu}{k_{B}T})\) (see Methods) [28, 30]. Hence, the thermalization of light occurs after a few cycles of spontaneous emission and absorption events before the photons escape the cavity through the mirror. Such energy exchange with the active medium enables the photon gas to establish both a chemical potential and a temperature. Eventually, it leads to a modified Bose-Einstein (BE) distribution of photons, which can be derived from the laser rate equations (see Methods): \[N(\varepsilon)=\frac{1}{\exp(\frac{\varepsilon-\mu}{k_{B}T})-1+\Gamma(\varepsilon)}\,. \tag{1}\] Here, the correction parameter \(\Gamma(\varepsilon)=\gamma(\varepsilon)/R_{\text{em}}(\varepsilon)\) represents the ratio of the photon decay rate of the passive cavity, \(\gamma\), to the spontaneous emission rate \(R_{\text{em}}\) into the photon mode at a given energy \(\varepsilon\). Consequently, this correction parameter can be treated as a measure of the degree of thermalization. It is expected to be small if many photon emission-absorption cycles occur before the photons escape the cavity. In this limit, Equation (1) approaches the Bose-Einstein distribution. Based on our numerical modeling and experimental measurements, we estimated this ratio for the fundamental mode at \(\Gamma(\varepsilon_{0})\approx 0.008\) (see Methods and Supplementary Information for details), ensuring that we obtained a thermalized photon gas in our system.

Figure 1: **Basic properties of a VCSEL.** **(a)** Scheme of the investigated VCSEL devices with all main components indicated by arrows. **(b)** Simplified picture of the conduction and valence subbands confined in the QWs expressed in the in-plane wavevector (left). The occupations of the conduction band \(f_{c}\) and the valence band states \(f_{v}\) are expressed with Fermi-Dirac distributions of different quasi-Fermi levels \(\mu_{c}\) and \(\mu_{v}\), respectively. \(\varepsilon_{0}\) is the energy of the fundamental cavity mode, which is larger than the semiconductor band gap. **(c)** Output power-current-voltage (LIV) characteristics of the BEC device.

According to standard semiconductor laser theory, the Bernard-Duraffourg condition [31], which is essential for non-equilibrium lasing, is met when the value of the chemical potential exceeds the energy of an optical mode, \(\mu>\varepsilon\). This creates a positive optical gain at this energy [28], so thermalization is expected to dominate below this limit. Therefore, we probed devices with different cavity-QW gain detunings \(\Delta\). We used the side effect of epitaxial growth that the resulting layers are not homogeneous throughout the entire wafer and have a tendency to become thinner towards the edge [32; 33; 34]. Phenomenologically, this affects the cavity energy shifts more than the spectral shifts of the gain.
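As a brief numerical aside on Equation (1): for \(\Gamma\to 0\) it reduces to the ordinary Bose-Einstein distribution, and the quoted degree of thermalization follows directly from the cavity decay rate and emission rate given later in the Methods. The sketch below makes both points explicit; the mode energies and temperature used for the toy occupations are illustrative, not fitted values.

```python
import numpy as np

K_B = 8.617333262e-5                       # Boltzmann constant, eV/K

def modified_bose_einstein(eps_minus_mu_ev, temperature_k, gamma):
    """Eq. (1): N = 1 / (exp((eps - mu)/kB T) - 1 + Gamma)."""
    return 1.0 / (np.exp(eps_minus_mu_ev / (K_B * temperature_k)) - 1.0 + gamma)

# Degree of thermalization for the ground mode, Gamma = gamma / R_em,
# from the cavity lifetime (3.04 ps) and emission rate (42 1/ps) quoted in the Methods.
gamma0 = 1.0 / 3.04                        # photon decay rate, 1/ps
r_em0 = 42.0                               # spontaneous emission rate, 1/ps
gamma_param = gamma0 / r_em0
print(f"Gamma(eps0) = {gamma_param:.4f}")  # ~0.008, as quoted in the text

# Occupations of a few modes above the chemical potential (illustrative energies, T = 293 K).
eps_minus_mu = np.array([0.2e-3, 1.0e-3, 3.0e-3])   # eV
for g in (0.0, gamma_param, 0.5):
    occ = modified_bose_einstein(eps_minus_mu, 293.0, g)
    print(f"Gamma = {g:5.3f} ->", np.round(occ, 2))
```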
Thus, close to the center of the three-inch wafer we probe the device with \(\Delta<0\), which is the standard designed detuning for high-power and high-temperature lasing operation, which we denote as the lasing device. In contrast, close to the wafer edge the detuning becomes positive, and the device is expected to operate in the thermalized BEC regime, which we denote as the BEC device. Although we cannot directly measure the precise value of the detuning, we observed a stark contrast in the performance of devices from these distant positions on the sample, supporting our assumptions. The electrical and total output power characteristics on the driving current of the BEC device are shown in Fig. 1(c), and the results of the lasing device are summarized in the Supplementary Information. The data show all the standard features of a laser, the electrical characteristics of a diode, and the emission threshold current \(I_{\rm th}\). However, the device is characterized by significant spontaneous emission below \(I_{\rm th}\). Therefore, the information contained in the spectral characteristics of the device must be examined to distinguish between a BEC and a lasing state. To this end, we performed an analysis of the VCSEL spectral features, especially the distribution of occupations in the respective energy states. The investigated devices have large electrical apertures, resulting in a quasi-continuum of transversal optical modes (or in-plane energy states). Thus, photons in the resonator can be described by a parabolic dispersion in the in-plane direction \(\varepsilon_{k}=\varepsilon_{0}+\frac{\hbar^{2}k^{2}}{2m_{\rm ph}}\) with an effective mass \(m_{\rm ph}\). In our device \(m_{\rm ph}\approx 2.75\cdot 10^{-5}\,m_{e}\), where \(m_{e}\) is the mass of the free Figure 2: **Momentum-space and real-space spectra of the BEC and a laser device**. **(a)** Scheme of the experimental setup used for momentum-space imaging. The back focal plane of the microscope objective is imaged onto the entrance slit of the monochromator, and then it is dispersed to the CCD camera, enabling probing of the spectral information at the center cut of the momentum space. **(b)** Momentum-space spectrum of the BEC device below (\(I=5\,\)mA) and **(c)** above (\(I=8.5\,\)mA) the condensation threshold showing the narrowing from thermal distribution to the ground state \(k_{\parallel}\approx 0\). **(d)** Momentum-space spectrum of the lasing device in the higher-order mode above the lasing threshold (\(I=6.3\,\)mA). **(e)**,**(f)** Real-space spectra of the BEC device below and above the threshold showing homogeneity of the gas. **(g)** Real-space spectrum of the lasing device that presents the domination of the higher-order mode. All color scales are logarithmic to enhance the visibility of high-energy states. Insets in **(b-g)** represent the normalized energy-integrated spectra in linear scale. electron. We employed the back focal plane (Fourier space or far-field) imaging technique to directly access the momentum dispersion, as shown schematically in Fig. 2(a). The image is directed onto the monochromator slit, allowing for spectral analysis of the momentum dispersions. Momentum dispersion below \(I<I_{\mathrm{th}}\) is presented in Fig. 2(b). It shows thermalized distribution in momentum space, following the expected parabolic dispersion. The most distinguishing feature is observed above the threshold \(I>I_{\mathrm{th}}\) (see Fig. 
2(c)), where the fundamental mode at \(k_{\parallel}=0\) dominates the spectrum. This is unusual behavior for such a large aperture resonator, as lasing in higher-order modes is commonplace [35]. We obtain this standard behavior in our lasing device with negative detuning, where right above the threshold current lasing in a higher-order mode is detected, together with a distinctive splitting in the momentum space (see Fig. 2(d)). This crucial difference between the BEC and the lasing devices is confirmed in the spatially resolved spectra (near field), since in the case of BEC behavior we are dealing with a spatially homogeneous gas of photons, presented in Figs. 2(e),(f), where condensation occurs in the fundamental transversal optical mode (ground state) of the system. In contrast, the lasing device operates in the higher-order mode, which is distributed closer to the aperture perimeter where the current density and optical gain are higher (see Fig. 2(g)) [35; 36; 37]. We further explore the thermodynamic properties of the photon gas in the BEC device, by extracting the occupancies of the respective transversal energy states. Hence, we integrate the momentum-space electroluminescence data taking into account the density of states, the estimated photon lifetimes, and the efficiency of the optical setup (see Methods for details). The experimental energy distributions at different driving currents are presented in Fig. 3(a). All data were successfully fitted with the BE distributions of Eq. (1) by assuming a negligible \(\Gamma\). Additional verification of the BEC distribution was also carried out, representing the data in logarithmic form, by transforming Eq. (1) as \(\ln[1+1/N(\varepsilon)]=\varepsilon/(k_{B}T)-\mu/(k_{B}T)\), which results as a linear function of energy (see Fig. 3(b)). The data resemble the textbook behavior of a Bose condensed gas, such as massive occupation and threshold-like dependency of the ground-state occupancy Figure 3: **Experimental energy distributions and thermodynamic quantities (a)** Solid lines represent energy distributions extracted from the momentum spectra for different driving currents. **(b)** The same data is represented in logarithmic form (see text). In **(a),(b)** the energy scale is expressed with respect to the energy in the ground mode. The dashed lines are the fits of the BE distribution to the experimental data. The error bars, representing 95% confidence intervals, are depicted as shaded regions. **(c)** Population of the ground state (\(N_{0}\)) and excited states (\(N_{T}\)) extracted from the experimental spectra. The dashed line is the linear fit above the condensation threshold to calculate the critical density (\(N_{C}\)). The inset shows the zoom-in into the low-number region of the main plot. **(d)** Thermodynamic quantities, effective chemical potential \(\mu_{\mathrm{eff}}\) and temperature \(T_{\mathrm{eff}}\) extracted from fitting the experimental distributions, as a function of driving current. The temperature of the heat sink is \(T=293\,\mathrm{K}\). Error bars in **(c),(d)** represent the 95% confidence intervals. \(N_{0}\) as a function of the total number of particles, along with saturation of the excited states \(N_{T}\). These effects can be seen in the distributions in Fig. 3(a). Figure 3(c) summarizes the corresponding values of \(N_{0}\), \(N_{T}\). However, the thermal tails do not have the same slopes, which is more evident in Fig. 3(b). 
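The logarithmic form used in Fig. 3(b) also suggests a simple recipe for extracting the effective parameters: \(\ln[1+1/N(\varepsilon)]\) is a straight line in \(\varepsilon\) with slope \(1/(k_{B}T_{\rm eff})\) and intercept \(-\mu_{\rm eff}/(k_{B}T_{\rm eff})\). The sketch below applies this linearization to synthetic occupations generated from a Bose-Einstein distribution; the input temperature and chemical potential are placeholder values, not the measured distributions.

```python
import numpy as np

K_B = 8.617333262e-5                                  # Boltzmann constant, eV/K

# Synthetic occupations drawn from a Bose-Einstein distribution (placeholder values).
t_true, mu_true = 250.0, -1.0e-3                      # K, eV (relative to the ground mode)
eps = np.linspace(0.0, 6e-3, 25)                      # mode energies above the ground mode (eV)
occ = 1.0 / (np.exp((eps - mu_true) / (K_B * t_true)) - 1.0)

# Linearize: ln(1 + 1/N) = eps/(kB T_eff) - mu_eff/(kB T_eff), then fit a straight line.
y = np.log(1.0 + 1.0 / occ)
slope, intercept = np.polyfit(eps, y, 1)
t_eff = 1.0 / (K_B * slope)
mu_eff = -intercept / slope
print(f"T_eff  = {t_eff:.0f} K  (input {t_true} K)")
print(f"mu_eff = {mu_eff*1e3:.2f} meV  (input {mu_true*1e3:.2f} meV)")
```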
The unequal thermal tails imply that, although the photons seem to be equilibrated, the temperature of the gas is not the same at different driving currents. Therefore, we denote the fitting parameters of the BE distribution as an effective chemical potential \(\mu_{\text{eff}}\) and temperature \(T_{\text{eff}}\), because these may not be equal to those set by the device conditions. Importantly, the geometry of the device imposes an inhomogeneous current density across the aperture. Therefore, the chemical potential set by the quasi-Fermi levels and the temperature vary slightly in space. The thermodynamic properties of the photon gas are a result of the spatially averaged overlap of the optical modes with the inhomogeneous QW active medium [38]. The results of the fits to the experimental data are presented in Fig. 3(d). The effective chemical potential is always negative with respect to the fundamental mode energy and approaches zero when condensation occurs, supporting BEC behavior for an ideal gas. On the other hand, the effective temperature is a monotonic function of the driving current and saturates above the condensation transition to \(T_{\text{eff}}\approx 250\,\text{K}\), which is approximately \(T_{\text{eff}}/T\approx 0.85\) compared to the temperature of the heat sink \(T=293\,\text{K}\). Note that the actual temperature of the active region is expected to be even slightly higher due to heating effects in the device (see Supplementary Information). From the data in Fig. 3(c), we experimentally extracted the critical particle number for condensation, \(N_{C}^{\text{exp}}=2604\pm 91\). This value is expected at a condensation temperature of \(T\approx 220\,\text{K}\), which is in line with the experimental value extracted from Fig. 3(d) at the condensation threshold, \(T_{\text{eff}}\approx 223\,\text{K}\). All of these results suggest that we are dealing with a photonic gas that is not in full thermal and chemical equilibrium with the reservoir, which is the active region of the device. Equilibration to temperatures lower than the reservoir by stimulated cooling has recently been predicted for driven-dissipative bosonic condensates in the fast thermalization limit, in a quantum model taking into account all correlations between states [39]. An experimental indication of the stimulated cooling effect can be seen in our data, as the occupations of the excited states are above unity in the condensed regime according to Fig. 3(c), and there is a saturation of \(T_{\text{eff}}\) above the condensation threshold in Fig. 3(d). Therefore, it is interesting to examine what the EOS of the probed photon condensate is and whether it follows the EOS for a 2D Bose gas. The latter is written in the thermodynamic limit as \[\mathcal{D}=-\ln\left[1-\exp\left(\frac{\mu}{k_{B}T}\right)\right]\,, \tag{2}\] where \(\mathcal{D}=n\lambda_{T}^{2}\) represents the dimensionless phase-space density. The photon density is defined by \(n=N/(\pi R^{2})\), with \(\pi R^{2}\) denoting the surface area of the aperture and \(R\) being its radius, while the thermal de Broglie wavelength of photons reads \(\lambda_{T}=\sqrt{(2\pi\hbar^{2})/(m_{\text{ph}}k_{B}T)}\). The EOS is expressed in normalized quantities \(\mathcal{D}\) and \(\tilde{\mu}=\mu/(k_{B}T)\), hence the properties of the 2D bosonic gas are expected to be universal [40; 41]. The measured EOS, expressed by the experimental values \(\mu_{\text{eff}}\) and \(T_{\text{eff}}\), is presented in Fig. 4.
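A small numerical sketch of the quantities entering Eq. (2): the thermal de Broglie wavelength for the quoted photon effective mass, the phase-space density for an aperture of radius \(R\), and the ideal-gas EOS value at a given \(\mu/(k_{B}T)\). The particle number and normalized chemical potential chosen below are illustrative, not the measured values.

```python
import numpy as np

HBAR = 1.054571817e-34     # J s
K_B = 1.380649e-23         # J/K
M_E = 9.1093837015e-31     # kg

m_ph = 2.75e-5 * M_E       # photon effective mass quoted in the text
T = 250.0                  # K, of the order of the fitted T_eff
R = 11.5e-6                # m, aperture radius (23 um diameter)

# Thermal de Broglie wavelength: lambda_T = sqrt(2 pi hbar^2 / (m_ph kB T)).
lam_T = np.sqrt(2.0 * np.pi * HBAR**2 / (m_ph * K_B * T))
print(f"lambda_T = {lam_T*1e6:.2f} um")

# Phase-space density D = n * lambda_T^2 for an illustrative particle number N.
N = 2600.0                                           # of the order of the quoted N_C
n = N / (np.pi * R**2)
print(f"D (from N and aperture area) = {n * lam_T**2:.2f}")

# Ideal 2D Bose gas EOS in the thermodynamic limit, D = -ln(1 - exp(mu/kB T)).
mu_tilde = -0.05                                     # illustrative mu/(kB T)
print(f"D (EOS at mu/kBT = {mu_tilde}) = {-np.log(1.0 - np.exp(mu_tilde)):.2f}")
```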
The data follow the equilibrium EOS, but with a larger slope in comparison to the thermodynamic limit. This can be explained by the finite collection angle of the collection optics in our setup, which is represented by the numerical aperture (NA) of the microscope objective. We cannot detect energies emitted beyond the maximal angle. Numerical calculations confirm the observations, as we computed the phase-space density for a finite number of states defined by the NA. The results of the calculations presented in Fig. 4 as a solid line perfectly match the experimental data. The discrepancy of the experimental data from the EOS in the thermodynamic limit is analogous to previous reports [42], where the finite trap depth was given to explain the lower than expected phase-space density. In our case, the energies of all possible transversal modes in the VCSEL are expected to go beyond the values dictated by the objective NA. Figure 4: **Determination of EOS** Points are extracted from the experimental data based on \(T_{\text{eff}}\) and \(\mu_{\text{eff}}\) (see main text). The dashed line is the theoretical EOS for a 2D Bose gas in the thermodynamic limit. The solid line is calculated by taking into account the finite collection angle of the optical setup. Error bars represent the 95% confidence intervals. We have demonstrated that emission from a positively detuned VCSEL has the properties of a homogeneous 2D Bose-Einstein condensed gas of photons in a finite system. The measured nonequilibrium nature of the gas can be a signature of reaching the fast, stimulated thermalization limit, because the cavity is characterized by a relatively short photon lifetime. Photon condensation in semiconductor resonators offers the possibility of observing the superfluidity of a weakly interacting Bose gas. Photon interactions are expected to be mediated by semiconductor non-linearity, which is significantly enhanced by the cavity and has a subpicosecond-order response time [43; 44]. There are no clear indications of such interactions in our data, because the cavity energy shifts are dominated by the current- and temperature-induced changes in the refractive index. Further studies are needed, focused on probing the hydrodynamics of the condensed photons directly, by perturbing them from the steady state [45; 46]. Nevertheless, in addition to material non-linearities, the dissipative nature of the photon gas encourages further studies of phase ordering [47] and universal scaling in a 2D geometry [48; 49] and signs of non-Hermitian effects [50]. Another direction for future work is to test the fluctuations of the non-equilibrium BEC and to compare it to the BEC in thermal equilibrium [51; 52] as well as to standard VCSEL operation [53; 54]. The mature technology of semiconductor VCSELs offers the possibility of utilizing the BEC regime to achieve single-mode emission from large aperture devices characterized by excellent beam quality, without the need for sophisticated additional fabrication and processing of the laser mesa [55; 56; 57]. BEC VCSELs could also be applied in more complex lattice geometries, to study topological effects in well-controlled current-operated devices at room temperature [58]. ## Methods ### Thermalization of photons in a semiconductor laser The principles of light absorption and recombination in an excited semiconductor QW, depicted in Fig. 
1(b), can be described by the following transition rates [28; 38] for emission \[R_{\mathrm{em}}(\varepsilon)=R(\varepsilon)f_{c}(\varepsilon,T,\mu_{c})\big{[} 1-f_{v}(\varepsilon,T,\mu_{v})\big{]} \tag{3}\] and absorption \[R_{\mathrm{abs}}(\varepsilon)=R(\varepsilon)f_{v}(\varepsilon,T,\mu_{v}) \big{[}1-f_{c}(\varepsilon,T,\mu_{c})\big{]}\,, \tag{4}\] where \(f_{c,v}=\{\exp\left[(\varepsilon-\mu_{c,v})/(k_{B}T)\right]+1\}^{-1}\) denote the thermalized Fermi-Dirac distributions of electrons in the conduction and holes in the valence bands, respectively. \(R(\varepsilon)\) stands for the transition rate at energy \(\varepsilon\), taking into account the photonic and electronic density of states, the overlap of the optical modes with the active medium, and the intrinsic properties of the active medium itself [38]. The natural consequence in semiconductors is the van Roosbroeck-Schockley relation, which appears, after some algebra, from the relation \[\frac{R_{\mathrm{abs}}(\varepsilon)}{R_{\mathrm{em}}(\varepsilon)}=\exp \left(\frac{\varepsilon-\mu}{k_{B}T}\right) \tag{5}\] with \(\mu=\mu_{c}-\mu_{v}\)[28; 29; 38]. Now, the rate equation for the occupation of an optical mode at \(\varepsilon\) is expressed as \[\frac{d}{dt}N(\varepsilon)=R_{\mathrm{em}}(\varepsilon)\big{[}N(\varepsilon) +1\big{]}-\big{[}R_{\mathrm{abs}}(\varepsilon)+\gamma(\varepsilon)\big{]}N(\varepsilon) \tag{6}\] where \(\gamma(\varepsilon)=1/\tau(\varepsilon)\) denotes the decay rate of a photon from an empty cavity at \(\varepsilon\). Thus, the resulting steady-state solution gives \[N(\varepsilon)=\frac{R_{\mathrm{em}}(\varepsilon)}{\gamma(\varepsilon)-[R_{ \mathrm{em}}(\varepsilon)-R_{\mathrm{abs}}(\varepsilon)]}\,. \tag{7}\] After dividing both nominator and denominator by \(R_{\mathrm{em}}(\varepsilon)\) as well as using the van Roosbroeck-Schockley relation (5) we obtain for \(N(\varepsilon)\) the result of Equation (1). This amounts to a Bose-Einstein distribution with the correction parameter \(\Gamma(\varepsilon)=\gamma(\varepsilon)/R_{\mathrm{em}}(\varepsilon)\). We estimated this correction parameter \(\Gamma(\varepsilon_{0})\) for the fundamental mode \(\varepsilon_{0}\) of the device as follows. The decay rate of a photon from an empty cavity follows from the decay time calculated from the realistic numerical model: \(\gamma(\varepsilon_{0})=1/\tau(\varepsilon_{0})=1/\left(3.04\text{ ps}^{-1}\right)\approx 0.33\text{ ps}^{-1}\) (see Supplementary information). We are able to determine the value of \(R_{\mathrm{em}}(\varepsilon_{0})=42\pm 3\text{ ps}^{-1}\) close to the threshold by measuring the linewidth dependence of the ground mode as a function of occupation below the condensation threshold [59]. With this, we obtain the value \(\Gamma(\varepsilon_{0})\approx 0.008\) as mentioned above. ### Sample The VCSEL epitaxial structure is designed for high-speed data communication at \(980\,\mathrm{nm}\). The epitaxial structure is monolithically grown on an n-doped GaAs substrate. The multi-quantum well (MQW) active region is composed of \(5\text{ In}_{0.23}\text{Ga}_{0.77}\text{As}\) QWs and \(6\text{ GaAs}_{0.86}\text{P}_{0.14}\) barriers centered in \(\text{Al}_{x}\text{Ga}_{1-x}\text{As}\) cavity graded from \(x=0.38\) to \(0.80\) with an optical cavity thickness of \(\lambda/2\). The cavity is sandwiched by \(15.5\)-pair \(\text{GaAs}/\text{Al}_{0.9}\text{Ga}_{0.1}\text{As}\) top and \(37\)-pair bottom distributed Bragg reflector (DBR) mirrors. 
The top and bottom DBRs are C-doped for the p-type and Si-doped for the n-type, respectively. In both mirrors, graded interfaces are incorporated for lower electrical resistance of the structure. Importantly, two \(20\,\mathrm{nm}\) thick \(\text{Al}_{0.98}\text{Ga}_{0.02}\text{As}\) layers are placed to form oxide apertures in the first nodes of the standing wave at the top and bottom of the cavity. These oxide layers are halfway in the optical cavity and halfway in the first pair of layers in the DBRs. The VCSELs are processed using standard top-down photolithography. In the first step, the Ti/Pt/Au p-type contact rings are deposited with the use of electron beam deposition (E-beam). The mesa structures are then patterned and etched using inductively coupled plasma reactive-ion etching (ICP-RIE) in a Cl\({}_{2}\)/BCl\({}_{3}\)-based plasma. After etching, current confinement apertures are formed by selective wet thermal oxidation of Al\({}_{0.98}\)Ga\({}_{0.02}\)As layers in an oxidation oven in a nitrogen atmosphere with overpressure of water vapor and at high temperature (420\({}^{\circ}\)C). In the following step, horseshoe-shaped Ni/AuGe/Au n-type contact pads are deposited and annealed in a rapid thermal processing (RTP) furnace. The structures are then planarized with the use of a spin-on dielectric polymer of benzocyclobutene (BCB). The BCB layer is patterned with the use of photolithography and RIE etching in a CF\({}_{4}\)-based plasma to selectively open surface areas to the bias to the p- and n-type contacts. In the final step, the ground-signal-ground (GSG) Cr/Pt/Au contact pads are deposited. ### Experimental setup The sample used in this study was a fully processed quarter of the whole epitaxial wafer. The sample was placed on a thermo-electrically cooled plate (Thorlabs PTC1) with an additional temperature sensor buried inside a custom heatsink plate mounted on the top, to ensure control of the temperature in the close vicinity of the sample. The temperature of the heatsink was set to 20 \({}^{\circ}\)C throughout all experiments. The temperature-controlled plate was placed on a manual translation stage. The sample was contacted by a microwave probe (GGB Picoprobe 40A) located in an additional manual translation stage. The devices were driven with a direct current by a stabilized precise source/measure unit (Keysight B2901B). The device emission was collected using an infinity-corrected objective of NA = 0.65 (Mitutoyo M Plan Apo NIR HR 50x). As described in the main text, to measure the momentum spectra (far field) we imaged the back focal plane of the objective with a set of achromatic lenses onto the 0.3 m-focal length monochromator entrance slit (Princeton Instruments Acton SP-2300i) and the electroluminescence signal was dispersed through a grating (1200 grooves/mm) onto an electron-multiplied charge-coupled device (EMCCD - Teledyne Princeton Instruments ProEM-HS:1024BX3). To record the spatially resolved spectra (near field), one of the lenses was removed from the optical path, which enabled projection of the real-space image onto the monochromator slit. This lens was mounted on a flip mount, allowing quick and convenient switching between the two measurement modes of the setup. ### Analysis of the momentum space Taking advantage of homogeneous emission from the BEC device, we determined the thermodynamic properties of the photon gas from the momentum space. 
We extracted the mean photon occupation distribution by integrating the momentum space emission, using the standard procedure used in cavity-polariton physics [10, 60]. The mean number of photons collected at a CCD pixel row representing a chosen \(k\)-state is represented as follows: \[N_{\mathrm{ph}}(k)=\eta\frac{dN_{\mathrm{CCD}}(k)}{dt}\tau(k), \tag{8}\] where \(\eta\) is the calibrated collection efficiency of our setup, \(dN_{\mathrm{CCD}}(k)/dt\) is the count rate per second on the CCD camera pixel, and \(\tau(k)\) is the photon lifetime at \(k\). The photon lifetime was estimated from the experiment by extracting the emission linewidth \(\Delta\varepsilon_{k}=\hbar/\tau(k)\)[61] by fitting a Lorentzian function to the data from a \(k\)-state pixel row. Subsequently, the occupation number at the \(k\)-state is calculated taking into account the number of states subtended by a pixel at \(k\)-position in cylindrical coordinates \(N_{\mathrm{st}}(k)=k\Delta k\Delta\phi(4\pi/S)^{-1}\), where \(S\) is the surface area of the device aperture. The number of states in momentum space was confirmed by numerical simulations of the optical modes confined in the device (see Supplementary Information). The final expression is the following: \[N(\varepsilon(k))=\frac{N_{\mathrm{ph}}(k)}{N_{\mathrm{st}}(k)}=\frac{4\pi^{2 }\eta}{2k\Delta k\Delta\phi S}\frac{dN_{\mathrm{CCD}}(k)}{dt}\tau(k), \tag{9}\] which also considers the spin degeneracy 2 of all states, as our experiment was not polarization resolved. The energy \(\varepsilon(k)\) of the measured \(k\)-state is extracted from the fitted Lorentzian peak. ###### Acknowledgements. We gratefully thank Maciej Dems for his support in improving the numerical simulation codes used in this work and Milan Radonjic for valuable discussions. MP and ANP acknowledge support from the Polish National Science Center, grant Sonata no. 2020/39/D/ST3/03546. TC acknowledges the project Sonata Bis no. 2015/18/E/ST7/00572 from the Polish National Science Center, within which the VCSELs used in this work were fabricated. AP acknowledges financial support by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) via the Collaborative Research Center SFB/TR185 (Project No. 277625399). ## Author contributions MP conceived this research project. MP and ANP conducted the experiments, and MP performed the detailed data analysis. JAL designed the epitaxial structure and provided the planar wafer sample. MG designed the laser mesa outline and performed all fabrication steps. TC performed the numerical modeling of the devices. MP, AP, MW, and TC contributed to the theoretical analysis and interpretation of the data. All authors discussed the results. MP wrote the manuscript with input from all authors.
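As a closing illustration of the momentum-space analysis described in the Methods above (Eqs. (8)-(9)), the sketch below converts toy CCD count rates and fitted linewidths into mode occupations. All input numbers (count rates, linewidths, collection efficiency, pixel sizes in \(k\)-space) are invented placeholders, not calibration values from the experiment.

```python
import numpy as np

HBAR_EV_S = 6.582119569e-16          # hbar in eV*s

def occupation_per_state(count_rate, tau_s, eta, k, dk, dphi, S):
    """Eq. (9): N(eps(k)) = 4 pi^2 eta / (2 k dk dphi S) * (dN_CCD/dt) * tau(k),
    i.e. photons collected at a k-pixel (Eq. (8)) divided by the number of states
    the pixel subtends, including the factor 2 spin degeneracy."""
    n_ph = eta * count_rate * tau_s                        # Eq. (8): photons in the mode
    n_states = 2.0 * k * dk * dphi * S / (4.0 * np.pi**2)  # states subtended by the pixel
    return n_ph / n_states

# Placeholder inputs for a few k-pixels (illustrative only).
k = np.array([0.2e6, 0.6e6, 1.0e6])           # 1/m
dk, dphi = 0.05e6, 0.4                         # pixel size in k and collected azimuth (rad)
S = np.pi * (11.5e-6) ** 2                     # aperture area for a 23 um diameter
eta = 1.0e-3                                   # assumed setup collection efficiency
count_rate = np.array([5.0e6, 2.0e6, 0.5e6])   # CCD counts per second per pixel row
linewidth_ev = np.array([2.0e-4, 2.5e-4, 3.0e-4])
tau_s = HBAR_EV_S / linewidth_ev               # photon lifetime from the fitted linewidth

print(occupation_per_state(count_rate, tau_s, eta, k, dk, dphi, S))
```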
2308.00162
Soft matter physics of the ground beneath our feet
Inspired by presentations by the authors during a workshop organized at the Princeton Center for Theoretical Science (PCTS) in January 2022, we present a perspective on some of the outstanding questions related to the "physics of the ground beneath our feet." These identified challenges are intrinsically shared with the field of Soft Matter but also have unique aspects when the natural environment is studied.
Anne Voigtländer, Morgane Houssais, Karol A. Bacik, Ian C. Bourg, Justin C. Burton, Karen E. Daniels, Sujit S. Datta, Emanuela Del Gado, Nakul S. Deshpande, Olivier Devauchelle, Behrooz Ferdowsi, Rachel Glade, Lucas Goehring, Ian J. Hewitt, Douglas Jerolmack, Ruben Juanes, Arshad Kudrolli, Ching-Yao Lai, Wei Li, Claire Masteller, Kavinda Nissanka, Allan M. Rubin, Howard A. Stone, Jenny Suckale, Nathalie M. Vriend, John S. Wettlaufer, Judy Q. Yang
2023-07-31T21:40:21Z
http://arxiv.org/abs/2308.00162v1
# Soft matter physics of the ground beneath our feet

###### Abstract

Inspired by presentations by the authors during a workshop organized at the Princeton Center for Theoretical Science (PCTS) in January 2022, we present a perspective on some of the outstanding questions related to the "physics of the ground beneath our feet." These identified challenges are intrinsically shared with the field of Soft Matter but also have unique aspects when the natural environment is studied.

## I Introduction

The ground beneath our feet is, to a first approximation, a granular, porous material characterized by a time-dependent solid volume fraction \(\phi(t)\), and associated porosity \(s(t)=1-\phi(t)\) occupied by a water volume fraction \(w(t)\). Over time, the ground surface water content can vary from 0 % (dry) to 100 % (fully saturated, and sometimes overflowing), through intermediate \(w\) values (partially saturated).
Studies of the physical and chemical dynamics of such a system - coarsely, a heterogeneous and fragile "sponge" or pile of sticky grains - give rise to a multitude of Soft Matter problems. Echoing the richness of the ground, a multitude of disciplines investigate the thin layer making up the Earth's surface, including the Earth and environmental sciences, engineering, physics, chemistry, and ecology. Different disciplines tend to develop different cultures and approaches, as well as different motivations, constraints, and even notations. As a result, a wide and complementary range of foci regarding temporal and spatial scales, concepts, laboratory, and field study methods coexist. Nevertheless, scientific events for active exchanges between these different communities studying active processes related to the Earth's surface remain rare. Consequently, a workshop was organized at the Princeton Center for Theoretical Science (PCTS) in January 2022. The participants considered concepts and challenges in and out of well-controlled laboratory experiments and models, and the fantastic messiness one can confront, and be enriched by, while studying the natural environment in the field (see Fig. 1). The authors contributed presentations of recent and diverse results, which represent only a small fraction of the many outstanding questions related to the field of "physics of the ground". From conversations amongst the participants there emerged useful clarification on the mechanics of various systems over different length and time scales, field site location and conditions, process interactions, and feedbacks. Fundamental questions were at the heart of the discussion, such as: How can the complexity of nature's dynamics and mechanics be simplified, measured, downscaled, and structured to fit into a model, or are such simplifications not possible? How do we treat intermittency in natural phenomena? What does near-criticality look like in nature, and over what scales should we consider it? How can the development of complex material rheology frameworks be of future use?

Fig. 1: (a) Overview sketch of the four challenges of modeling the soft matter physics of the ground identified in this paper: (I) modeling processes from the grain scale; (II) measuring and capturing the ground dynamics near critical states; (III) connecting laboratory and theory results to the field-scale observations; (IV) understanding and taking into account the many effects of life. (b) In a given element of the ground, subjected to normal stresses (orange arrows) and a mean groundwater flow (blue arrows), the soft matter physics of the ground encompasses simultaneously multiple phases, processes, dimensions, and scales, which can have various expressions at the Earth's surface, e.g., (c) an Antarctica map showing ice-shelf areas vulnerable to hydro-fracture (marked in red) in a warming world, by Lai _et al._ [7]; (d) a photograph (taken April 2018) of a crack that appeared on Jan 3, 2018 at Rattlesnake Ridge, near Union Gap, WA, roughly 200 km from the site of the 2014 Oso landslide, with industrial infrastructure in the foreground (photo credit: Shawn Gust, Yakima Herald-Republic via AP); and (e) a photograph of a mound of grains built by ants (imaggeo/EGU). To model the multi-dynamics of the ground, diverse methods, concepts, and approaches are used to link the ground's properties, constraints, structures, mechanics, and observational data. For example, models of (f) effective groundwater flow, or (g) rheological behavior in experiments and simulations, rely on assumptions relative to (h) porous flow and (i) contact force networks. (j) Individual motions in these networks are constrained by the properties and mechanisms of the phases involved [8]. Often a phase can both be a constituent of the bulk system and, also, define an interface where chemical reactions occur, like a \(CO_{2}\) bubble in contact with a water meniscus in a silica nanopore (k, courtesy of I.C. Bourg). An expression of these interactions: a mineral particle such as a natural quartz sand grain (l) can display a chemo-mechanically altered surface topography (m), as seen in the scanning electron microscopy images (courtesy of A. Voigtländer). Such nanoscale phenomena can in turn affect the effective rheology of the ground and groundwater flow.
Our intention with this survey is to be both broad and specific, highlighting some of the ongoing fronts of the scientific research on the ground beneath our feet, to inspire future new and collaborative works. The paper is organized in four sections, corresponding to four major scientific challenges (see Fig. 1). These challenges are shared intrinsically with the field of Soft Matter, yet they also have unique aspects when one studies the natural environment. Each contribution within the distinct sections - corresponding to material presented during the workshop - introduces a selection of specific outstanding questions and ongoing efforts to tackle them. _I - The challenge of modeling from the grain scale._ The ground is essentially a range of partially-wet particulate (hence porous) systems where chemical, physical, and biological processes occur (see right side of Fig. 1). Modeling each of these processes from the smaller scale is both fundamental and essential to building larger-scale and multi-parameter, multi-physics models. _II - The challenge of near-criticality._ The Earth's surface constantly evolves and sets itself near its material yield criterion (typically, a critical shear stress \(\sigma_{c}\)). As a result, we mainly live in and observe a quasi-static environment - an apparently stable ground - but one which is often about to fail (see Fig. 1c, d). Surveying and predicting the ground's mechanical behavior, in particular during the rare times it deforms plastically (e.g., during floods and landslides), requires good understanding and modeling of its near-critical behavior. This challenge demands advancements in both fundamental physics understanding and the development of new methods of quantitative observation. _III - The challenge of bridging scales._ All Earth (near-) surface processes occur at given length and time scales. Some mechanisms are universal across a wide range of spatial and temporal scales, some structures are hierarchical and emergent, and some material properties are bounded with typical magnitudes, lengths, and characteristic times or rates. Knowledge of how and when to bridge scales (see Fig. 1), mathematically, numerically, and methodologically, from the laboratory to field sites (and vice versa), is a difficult key to forge, but one that can unlock advanced predictive capabilities. _IV - The challenge(s) of life._ Living matter has the unique property of reproducing itself, and grows or decays over time. The self-propelled motion of many organisms is another specific impact of life on the environment.
In the end, many organisms contribute to the constant alteration of their surroundings, while also depending on it (see Fig. 1d, e). The interconnections between all forms of life - including human life - and the dynamics of the ground, although obeying the laws of physics, bring additional complexity and carry a large number of original and pressing questions. ## 1 The challenge of modeling from the grain scale Understanding the time dynamics of a process from the scale of a single grain provides clear definitions of transients, steady state dynamics (e.g., rheology), fundamental causes for instability growth, and how interactions with other processes can be tackled. Studying and modeling a single process in the ground at the grain scale requires identifying, visualizing or parameterizing, and quantifying the relevant elementary phenomena happening at that scale. This section presents a few examples of ongoing efforts in this domain based on experimental, numerical, and analytical approaches (see Fig. 1). In particular, Section 1.1 highlights some technological advancements that allow in-situ observation and quantification of dynamics in porous materials. Section 1.2 presents recent technical advances in analog models to visualize granular interactions, including emergent force chains, and feedbacks in poroelastic granular materials. The authors in section 1.3 show that grain-scale models of granular flow can produce transients in frictional behavior similar to those observed in laboratory rock and gouge friction experiments, behavior for which a first-principles understanding is currently lacking. Section 1.4 presents novel ways to model the complex behavior of natural colloidal materials such as clay using simulations coupling porous flow, chemistry and mechanics. A different approach to grasp the dynamics and interactions from particles to bulk behavior is non-linear statistical physics, as presented in section 1.5 in a model of the dynamics of grains trapped within ice sheets. ### In situ visualization of soft matter dynamics in granular and porous media - S. S. Datta Advancements in our understanding of "Soft Earth Geosciences" are, in many ways, being driven by experimental advances in visualizing grain-scale processes. In particular, tricks from physical chemistry and colloidal science, coupled with developments in microscopy and imaging science, have yielded unprecedented ability to visualize the dynamics of soft materials in models of complex and crowded environments akin to the porous soils, sediments, aquifers, and reservoirs in the ground beneath our feet. In these cases, the environment alters the material, the material itself alters the environment - and these coupled dynamics give rise to new behaviors that challenge current understanding. For example, despite its importance in energy, environmental, manufacturing, separations, industrial, and microfluidic processes, prediction and control of complex fluid transport in porous media is challenging and often operates by trial and error. Even basic prediction of where injected fluid distributes through a porous medium, and what the associated macroscopic resistance to flow is, remains elusive: how physicochemical interactions dictate fluid microstructure and transport is poorly understood due to their time-dependent and multi-scale nature. Disentangling these interactions through in situ visualization is therefore an exciting frontier of research. 
While even basic characterization has traditionally been difficult due to the opacity and complexity of most three-dimensional (3D) environments, confocal microscopy of refractive index-matched fluids and model solid media now enables researchers to directly visualize soft matter dynamics in 3D porous media with controlled pore structures and chemistries [13]. Below, we summarize selected projects from our lab that used this approach to develop insights in five areas. A common feature is the ability to simultaneously probe pore space topology, dynamic changes in fluid microstructure, multi-scale flow patterns, and macroscopic transport - which provides a way to directly connect phenomena across multiple scales. (i) _Polymer solution flow instabilities._ As they squeeze through tight pores, polymers can deform, resist deformation due to their elasticity, and alter subsequent transport [14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32]. Direct visualization revealed that these coupled effects lead to unexpected spatial and temporal fluctuations in the transport of polymer solutions that are often applied in processes such as groundwater remediation and oil recovery [30, 29, 33]. Moreover, analysis of the measured pore-scale flow fields quantitatively established that the pore-scale onset of these chaotic fluctuations generates a strong anomalous increase in the macroscopic flow resistance [34] - a phenomenon that is well-documented, but that had eluded explanation for \(>\)50 years [35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45]. Given that the macroscopic flow resistance is one of the most fundamental descriptors of fluid flow, these findings not only help deepen understanding of polymer solution flows but also provide quantitative guidelines to inform their geophysical applications at large scales (e.g., [46]). (ii) _Particulate transport._ Particulate transport underlies a wide array of processes in porous media that affect our everyday lives, ranging from the beneficial - e.g., groundwater remediation [47, 48, 49] - to the harmful, such as the migration of microplastics, contaminants, and pathogens in the environment [50, 51, 52]. As colloidal particles navigate a tortuous porous medium, they can alter the medium in turn by depositing onto its solid matrix, making prediction of macroscopic particle distributions challenging [53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69]. Direct visualization enabled identification of the fundamental mechanisms by which particles are distributed throughout a porous medium, demonstrating that the interplay between hydrodynamic and colloidal interaction forces controls this process [70]. Moreover, it enabled characterization of how interactions between particles and trapped non-aqueous fluids influence subsequent transport (Fig. 2a) [9]. These results help shed light on the multi-scale interactions between fluids, particles, and porous media that have traditionally been represented in black-box models using "lumped" empirical parameters - guiding the development of more accurate and generalizable models that could be applied in diverse geophysical settings. (iii) _Water-absorbent hydrogels._ Hydrogels are elastic networks of hydrophilic polymers that can absorb large quantities of water. 
They, therefore, hold tremendous promise as water reservoirs for plant roots in dry soils, potentially reducing the burden of irrigation in agriculture [71, 72, 73, 74, 75, 76, 77, 78, 79, 80] - which is critically important given the increasing threat of water scarcity for a growing world population. However, this application requires hydrogel water absorption and swelling to be predictable and controllable. Using direct visualization of hydrogel swelling in granular media akin to soil, we demonstrated that confinement in a granular medium can dramatically hinder the ability of hydrogels to absorb water [91]. These studies now help to elucidate how environmental factors such as soil structure, chemistry, water saturation, and mechanics influence hydrogel water absorption, providing physical principles to ultimately guide the design of hydrogels whose swelling and mechanics are optimized for use in a given environment [92]. In related research, hydrogel packings themselves acted as models of shrinkable granular media e.g., soft clay-rich soils, whose deformations influence the integrity of built structures and barriers for waste isolation [93, 94, 95]. Deforming such a soft porous material alters fluid transport through its pores, which in turn further deforms the material. Using direct visualization of this coupling between fluid transport and solid deformations, we have shown how material physicochemical properties that regulate fluid permeability and mechanical deformations, as well as interactions with external boundaries, together control how these materials swell/shrink and deform [10], fracture [96], and potentially even self-heal [10] (Fig. 2b) - providing new insights into the desiccation of soft earth materials. (iv) _Immiscible fluid displacement._ The displacement of a fluid from a porous medium by another immiscible fluid underlies groundwater contamination and remediation, subsurface carbon sequestration, oil and gas migration and recovery, and moisture infiltration in/drying of soil and wood [97, 98, 99, 100, 101]. While these Fig. 2: In situ visualization of soft matter dynamics in granular and porous media. (a) Large-scale confocal micrograph taken inside a 3D porous medium (section through solid matrix shown by black circles), showing trapped oil (additional black) and deposited colloidal particles (red) in the pore space (from [9]). (b) Self-healing of a cracked packing of hydrogel beads; color shows fluorescence due to an excited dye that has diffused within the hydrogel beads, with an intensity that increases with bead shrinkage (from [10]). (c) Schematic showing 3D printing of bacteria inside a porous granular hydrogel matrix. (d) Superimposed experimental confocal micrographs (different colors show different times) of bacteria spreading collectively from a 3D-printed population with an undulatory initial structure; the spreading cells smooth out these morphological perturbations. (\(\lambda\)) and (\(\overline{5}\)) refer to the undulation wavelength and hydrogel matrix mean pore size, respectively (from [11]). (e) Magnified view of a front of bacteria spreading by chemotaxis in a crowded, porous, granular hydrogel matrix (from [12]). 
processes are well-studied in homogeneous porous media with randomly-distributed pores of different sizes and similar surface chemistries [103, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125], many naturally-occurring media have structural and chemical heterogeneities, such as pore size gradients, strata of different permeabilities, and regions of differing surface chemistry [126, 127, 128, 129]. Direct visualization in 2D and 3D porous media revealed that such heterogeneities fundamentally alter fluid displacement pathways and dynamics. In particular, a pore size or surface energy gradient can either suppress or exacerbate both capillary fingering [130] and viscous fingering [131], distinct interfacial instabilities that typically arise in homogeneous media. Furthermore, for the case of stratified media, visualization revealed that immiscible fluid displacement is spatially heterogeneous, with different strata being invaded at different rates [132], leading to differing amounts of fluid removal - phenomena that are not predicted by typically-used, spatially-averaged models of fluid flow, but are captured by new theoretical models inspired by the experiments [133]. (v) _Bacterial communities._ Bacterial communities in the ground beneath our feet critically impact our everyday lives: they degrade contaminants, fix nitrogen, help sustain plant growth, and decompose organic matter [134, 135, 136, 137, 138, 139, 140, 141, 142, 143]. However, despite their ubiquity and importance, how such bacterial communities spread, self-organize and stably function, and interact with their surroundings is poorly understood. Laboratory studies typically focus on bacteria in liquid culture or at flat interfaces; however, these do not exhibit many of the features of environmental communities that inhabit soils or sediments in terrestrial environments. Thus, we have developed "transparent soils" using e.g., granular hydrogels that enable the behavior of bacteria to be probed in 3D granular media over length scales ranging from the single-cell to the community scale [144] (Fig. 2c,d,e). This capability revealed that current understanding of bacterial motility - which is based on studies performed in bulk liquid - is incomplete: for example, confinement in a crowded medium fundamentally alters how bacteria move, both at the single cell [145] and population scales [12], in previously unknown ways. Ultimately, these results could guide the development of new theoretical models [146] to more accurately predict the motion and growth of bacterial populations in complex environments akin to those in the ground beneath our feet - potentially helping to provide quantitative guidelines for the control of these dynamics in processes ranging from bioremediation to agriculture. These examples highlight the utility of direct visualization of soft matter dynamics in model porous media in shedding new light on problems in soft earth geosciences. Moving forward, it will be important for researchers to continue to develop new imaging approaches to access, e.g., 3D fluid flow fields. In addition, a useful direction for future research will be to examine soft matter dynamics in granular and porous media with additional complexities such as deformability and rearrangements of the granular matrix, and different grain shapes, sizes, surface chemistries, and packing geometries. 
While a great deal of empirical evidence indicates that these factors strongly alter soft matter dynamics in complex environments, unifying principles that describe how they do so remain lacking; this research could provide a way to discover them. Not only will this work deepen fundamental understanding of soft matter dynamics in geoscientific settings, but it will also enable researchers to develop guidelines for the application of existing soft materials and complex fluids, as well as principles for the formulation of new materials and fluids, e.g., in controlling solute transport and transport-limited chemical reactions in environmental remediation, as well as other industrial and environmental processes more broadly.

### Force chains underpin emergent poromechanical behavior in granular media - W. Li & R. Juanes

Photoelasticity has a long history as a technique to quantify internal stresses in solid bodies [148], but it has traditionally been applied to granular media consisting of cylindrical (usually circular) disks [149, 150]. This particle geometry has the advantage of allowing for precise quantification of stresses [151], but the disadvantage that it prevents connectivity within the pore space, thus severely restricting its use as an analogue of permeable porous media, where fluid flow and mechanical deformation are often strongly coupled [152]. This is because it is effective stress - the fraction of the total stress that is transmitted through the solid skeleton - that controls the mechanical behavior of porous media, from land subsidence due to groundwater pumping to the cohesion of sand in sandcastles [153]. Karl von Terzaghi, father of soil mechanics, introduced the concept of effective stress a century ago [154, 155]. Until recently, however, this physical quantity could only be calculated by subtracting pore pressure from the normal total stress, or inferred from its "effect", typically the solid skeleton deformation. For a proper analogy of a porous medium in terms of pore geometry, connectivity and morphology, a pack of 3D particles, such as spheres, should be used. Extending photoelasticity to such systems, however, requires developing a method to manufacture residual-stress-free photoelastic particles, and obtaining quantitative information on the forces acting on the 3D particles. During the workshop we presented a fabrication process, similar to "squeeze casting", to produce millimeter-scale residual-stress-free photoelastic particles (spheres and other shapes, such as icosahedra) with high geometric accuracy (Fig. 3a,b). The combined photoelastic response from light intensity and light color permits a rough quantification of forces acting on the particles over a wide range of forces. We then presented a first application of the new technique, coined photoporomechanics [147], to illustrate the evolution of effective stress during vertical consolidation (Fig. 3c,d): a process by which the stresses caused by a sudden load are gradually transmitted through a fluid-filled granular pack as the fluid drains and excess pore pressures dissipate. We showed that compaction of the granular pack is concomitant with the emergence of particle-particle force networks, which originate at the top boundary (where the pore fluid seeps out) and propagate downwards through the pack as the pore pressure gradually dissipates (Fig. 3e).

Fig. 3: Consolidation test using photoporomechanics. (a) Millimeter-size photoelastic particles in two different shapes (spheres and icosahedra) under white light. (b) Photoelastic particles under a circular polariscope. The polystyrene ruler, having residual stress, shows color stripes; the particles, being residual-stress-free, are hardly visible (from Li et al. [147]). (c) Experimental setup for the 1D consolidation experiment. A granular pack of fluid-saturated photoelastic spheres is loaded suddenly with a constant weight, while the video, deformation, and excess pore pressure are recorded. (d) Detailed schematic of the consolidation cell. Two glass plates are glued with a 2 mm thick U-shaped spacer where the beads are inserted to form a monolayer pack. The excess pore pressure is measured at the bottom of the cell with a pressure sensor. The pore fluid fills the cell to provide a constant-pressure boundary condition. A piston made of a 1.8 mm acrylic plate (with slots cut out to reduce resistance to fluid flow) allows the fluid to seep out of the cell (from Li et al. [147]). (e) A snapshot of the photoelastic response of the granular pack during the consolidation test. The force chains - which quantify the Terzaghi stress in the granular pack - develop from the top boundary, then progress downwards through the pack as the pore-fluid pressure diffuses upwards.
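For readers from outside poromechanics, the effective stress principle invoked above can be written, in its simplest textbook form (stated here for orientation rather than as the specific formulation used in [147]), as

\[
\sigma' = \sigma - p ,
\]

where \(\sigma\) is the total stress and \(p\) the pore-fluid pressure. In Terzaghi's classical one-dimensional consolidation problem, the excess pore pressure \(u(z,t)\) then relaxes diffusively, \(\partial u/\partial t = c_v\,\partial^2 u/\partial z^2\) with \(c_v\) the consolidation coefficient, so that the applied load is progressively transferred to the grain skeleton - and hence to force chains - as \(u\) dissipates.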
This extension of photoelasticity to 3D particles provides a powerful experimental model system to study the strong coupling of solid and fluid in granular media that takes place in geoscience processes like landslides [156], gas vents from ocean sediments [157], and injection-induced seismicity [158]. This is especially attractive in three dimensions, where (while long-standing issues related to the interpretation of light transmission in fully-3D stress fields [159] still need to be resolved) the method can form the basis for force-chain tomography [160].

### The role of granular flow in fault friction - _A. M. Rubin & B. Ferdowsi_

Faults in the Earth are invariably filled with granular material (gouge) derived from wear of the surrounding rock [163, 164]. How the properties of this heterogeneous gouge layer control fault slip at both elastodynamic and quasi-static sliding speeds is not understood [165, 166, 167, 168, 169]. Numerical models of fault slip require a constitutive law for fault friction. The current state-of-the-art, originally conceived for two rough surfaces in contact but observed to apply to sheared gouge layers as well, falls under the heading of "rate- and state-dependent" friction. In this formalism, the friction coefficient (the ratio of shear to normal stress during sliding) depends upon the fault sliding rate (or strain rate), and a more nebulous property termed "state". State is conventionally thought to reflect a combination of the true contact area and the intrinsic strength of those contacts. Also conventionally, the state dependence is thought to be due to time-dependent plastic flow or chemical bonding at those contacts, although the opaque nature of rock makes the origin of state evolution difficult to decipher. How state evolves for surfaces not at steady-state sliding is parameterized by "state evolution laws" that are largely empirical, yet still do not adequately describe all the relevant features of laboratory experiments. Numerical simulations of faults obeying rate-state friction show that the precise description of state evolution, or the transient friction between sliding surfaces when not at steady state, significantly influences processes that Earth scientists care about (e.g., earthquake nucleation [170]).
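For orientation, the standard (Dieterich-Ruina) form of rate- and state-dependent friction - written here in its generic textbook form, which is not necessarily the exact parameterization used in the studies cited in this section - reads

\[
\mu = \mu_0 + a\,\ln\frac{V}{V_0} + b\,\ln\frac{V_0\,\theta}{D_c},
\qquad
\frac{d\theta}{dt} = 1 - \frac{V\theta}{D_c} \;\;\text{("Aging" law)}
\quad\text{or}\quad
\frac{d\theta}{dt} = -\frac{V\theta}{D_c}\,\ln\frac{V\theta}{D_c} \;\;\text{("Slip" law)},
\]

where \(V\) is the sliding velocity, \(\theta\) the state variable (with steady-state value \(D_c/V\)), \(a\) the direct-effect coefficient, \(b\) the state-evolution coefficient, and \(D_c\) a characteristic slip distance.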
The lack of an accurate or physics-based description of state evolution thus severely hampers our ability to extrapolate the results of numerical models of fault slip to the Earth. Recent Discrete Element Method (DEM) simulations of a granular gouge layer show that much of the phenomenology of transient rock and gouge friction seen in laboratory experiments (both the rate-dependence and the state-dependence) can be reproduced by numerical models in which this dependence arises only from momentum transfer between the grains, with no chemical reactions or time-dependent plasticity at grain/grain contacts [161]. Panel (a) of Figure 4 shows a snapshot from a granular DEM simulation designed to mimic laboratory rock friction experiments. A 2D layer, periodic in the \(x\) and \(y\) directions, is sheared between two rigid parallel plates. A specified velocity history in the \(x\) direction is applied to a very stiff spring attached to the upper plate, while a constant normal stress of 5 MPa is maintained in the \(z\) direction. A constant sliding friction acts at grain/grain contacts. Panel (b) shows the friction signal, relative to the future steady-state value, following simulated velocity steps of \(\pm\) 1 and 2 orders of magnitude. At the time of the velocity step, there is an abrupt change in stress of the same sign as that of the step (the "direct velocity effect" of rate-state friction), followed by an exponential decay to a new steady-state value (the "state evolution effect"). The magnitudes of the direct and evolution effects are approximately proportional to the logarithm of the velocity jump, with an e-folding strain for friction evolution of \(\sim\)0.13. These results are similar to those from laboratory friction experiments on rock and many other materials. The solid black lines in Figure 4c indicate the friction signals, relative to the prior steady state value, following velocity steps of \(\pm\) 1 and 2 orders of magnitude from experiments on synthetic quartz gouge [162].
The magnitudes of the logarithmic rate- and state-dependence in the DEM and lab experiments are similar to within a factor of \(\sim\)2 (there is some rounding and diminishing of the peaks in panel (c) not present in panel (b) because the elastic stiffness of the lab system is smaller). The red lines in Figure 4c are a fit to the data using the empirical "Slip" version of the rate-state friction equations [162, 164, 171], using a single set of parameter values for all 4 steps. The source of the rate- and state-dependence in the DEM, which lacks time-dependence at the contact scale, remains an area of active investigation. It appears to be possible to understand the direct strain-rate dependence semi-quantitatively in terms of an Arrhenius process, with the kinetic energy of the grains playing the role of the molecular kinetic energy in the classical understanding of rate-state friction [172], as grains hop from one potential well to another [161]. Although the nature of granular friction has been studied extensively in the physics and engineering literature, most of this work concerns friction during steady flow [173, 174, 175, 176, 177, 178, 179]. The transient frictional properties of granular flow thus represent a rich and underexplored field of interest to Earth scientists, physicists, and engineers.

Fig. 4: (a) Snapshot from a DEM simulation of a sheared granular layer [161]. Grains are spherical, polydisperse, and have elastic properties appropriate for glass beads. Colors indicate grain velocity in the \(x\) direction, relative to the load-point velocity, averaged over an upper-plate sliding displacement of 1 mean grain diameter. (b) The friction signal, relative to the future steady-state value, following simulated velocity steps of \(\pm\) 1 and 2 orders of magnitude from an initial velocity of 0.01 \(m/s\) [161]. Slip distance is defined to be zero at the time of the step. (c) Solid black lines indicate the friction signals, relative to the prior steady-state value, following velocity steps of \(\pm\) 1 and 2 orders of magnitude at sliding speeds from 1 to 100 \(\mu m/s\), from experiments conducted in the Penn State Rock and Sediment Mechanics Lab [162]. The starting material is a 3-mm-thick layer of synthetic quartz gouge, with particles ranging from 50-150 \(\mu m\) in diameter (shear ultimately localizes to a narrower zone where particles have been comminuted, a process not modeled in the DEM). The synthetic quartz gouge is nearly steady-state velocity neutral, whereas the DEM is steady-state velocity strengthening. The red lines are a fit to the data using the empirical "Slip" version of the rate-state friction equations ("direct effect" coefficient a=0.0073; "state evolution effect" coefficient b=0.0075; e-folding slip distance \(D_{c}\)=12.2 \(\mu m\)).

### Coupled flow and mechanics of clays and muds - _I. C. Bourg_

A recurrent theme in efforts to predict the physics of the ground is the existence of complex couplings between hydraulic, mechanical, and chemical phenomena in fine-grained natural materials often referred to as clays or muds [180, 181, 182]. These materials include a variety of phases including phyllosilicate minerals, nanocrystalline Al and Fe oxides, soil organic matter, and biofilms [183]. A common feature of these materials is that they consist of assemblages of colloidal particles with dimensions \(\sim 10^{3}\) times smaller than the sand or silt grains discussed in previous sections. Despite vast chemical and microstructural differences, these phases exhibit common properties - low permeability, cohesion, chemomechanical couplings, a significant yield stress - suggesting that these features are associated with particle dimensions on the order of nanometers and with interactions across thin fluid films. An important manifestation of the impact of clays and muds is that for most sediments and sedimentary rocks, a 30% increase in clay content causes a \(10^{6}\)-fold decrease in permeability [184]. This large impact on permeability can transform the ground from a material where fluids readily flow to a material where they do not, drastically modifying subsurface hydrology and poromechanics [185].
For example, the ability of mud flows to carry suspended debris is sensitive to the effective stress (section 1.2) on the mud solid framework, which depends on the extent to which external stresses cause fluid expulsion vs. a fluid pressure increase [186, 187]. A second illustration of the complex properties and impacts of clays and muds is that these materials can transition from non-cohesive to cohesive mechanics depending on conditions. Whereas interparticle interactions in coarser-grained materials predominantly consist of repulsive grain contact forces, potentially supplemented by attraction due to capillary fluid menisci [188], interparticle interactions in clayey materials involve a variety of attractive and repulsive interactions across thin water films - including osmotic, electrostatic, van der Waals, hydration, and excluded volume effects - with different length scales and sensitivities to particle shape, surface charge, and solution chemistry [189, 190]. These complex water-mediated interactions play key roles in the emergence of complex dynamic properties in clayey media, including cohesion, yield stress, and thixotropy (with implications, for example, for the impact of clay in fault slip, section 1.3, and in debris flow, section 2.1), yet they remain insufficiently understood to predict these dynamics [191, 192]. Finally, a third illustration of the impact of clays is their inhibition of the transition from ductile to brittle mechanics during soil freezing or sediment diagenesis. Because of their high specific surface area and hygroscopic nature, clays can hold relatively large amounts of water (on the order of half of their volume or more) with most water molecules located in direct contact with the nearest surface [189]. This gives rise to distinct aqueous chemistries, including inhibition of freezing (section 1.5) and complex impacts on cementation that can cause a persistence of ductile mechanics despite exposure to below-freezing temperatures [193, 194] or deep geologic burial [195, 196]. A key challenge in efforts to resolve the processes outlined above from the grain scale is that they involve a large separation in length scales between the grain scale discussed in previous sections - associated with sand grains, grain contacts, force chains, and microbial processes with characteristic scales on the order of \(10^{-6}\) to \(10^{-4}\) m - and the scale of clay colloidal interactions discussed above, on the order of \(10^{-9}\) to \(10^{-7}\) m.
With the exception of idealized subsurface materials such as pure sand or clay, efforts to understand the physics of the ground at the grain scale are inherently multiscale because of the ubiquitous co-existence of clay or mud and coarser-grained material with more than three orders of magnitude separation in grain size [182, 184, 192].

### Polar nonequilibrium statistical physics - _J. S. Wettlaufer_

All phases of matter are separated by interfaces. Nonetheless, it is tempting, and indeed common, to ignore their energies and the associated implications. This, despite the fact that interfaces can control the large-scale phase behavior of all materials, from multiphase flows to the deformation of polycrystals. Interfacial influences on melting and freezing control the phase behavior of all materials; modern research shows that a layer of water exists on the surface of ice, even at temperatures well below freezing. These unfrozen films can influence everything from the slipperiness of glaciers to the electrification of thunderclouds [198]. In cold climates, roads are salted in winter, harnessing the freezing point depression of impurities. Each salt crystal, however, abuts an ice surface where the phase change occurs. Less commonly thought of, but equally important, are other mechanisms that can extend the equilibrium domain of a liquid phase into the solid region of the normal phase diagram. The causes of this "premelting", which, in addition to impurities [199], include surface melting, interface curvature and substrate disorder, allow for the persistence of water at interfaces well below the bulk melting point. The thickness of the liquid film depends on the temperature, impurities, material properties (intermolecular forces) and geometry. A temperature gradient is accompanied by a thermo-molecular pressure gradient that drives the unfrozen interfacial liquid from high to low temperatures, and hence particles in ice, as shown in Figure 5, migrate from low to high temperatures.

Fig. 5: The interface between ice and inert or living particles is separated by a so-called premelted water film below the bulk melting point. (a) Perspective view of a few active particles embedded inside ice, against which they premelt, and which experience an external temperature gradient \(\nabla T\); this creates a thermomolecular pressure gradient driving the flow of liquid from high to low temperatures, so that particles translate from low to high temperatures. (b) An expanded view of one active particle inside the solid. The radius of the particle is \(R\), the black arrow shows the drift velocity induced by the temperature gradient, and the red arrow denotes the activity given by an active force (from [197]).

Such premelting dynamics are operative in a wide range of settings, from the heaving of frozen ground and planetary regolith, to the scavenging of atmospheric trace gases by snow and the redistribution of climate proxies in ice sheets, to the collisional processes in protoplanetary disks. Moreover, the unfrozen films act both as a refuge for biota and a transport mechanism for nutrients, waste and the biota themselves. New research considers such processes in the framework of active matter, wherein particles are endowed with intrinsic mobility mimicking life, and addresses the interplay between a wide range of problems, from extremophiles of both terrestrial and exobiological relevance to ecological dynamics in Earth's cryosphere. For example, biota are found in glaciers, ice sheets, and permafrost, evolving in a complex mobile environment facilitated or hindered by a range of bulk and surface interactions. Survival strategies, such as producing exopolymeric substances and antifreeze glycoproteins, that enhance the interfacial water also facilitate biocompatibility. Such phenomena can be cast in the stochastic framework of active Ornstein-Uhlenbeck dynamics and chemotaxis [197], to find that, for an attractive (repulsive) nutrient source, thermomolecular motion is enhanced (suppressed) by biolocomotion. This phenomenon is essential for understanding the persistence of life at low temperatures.
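For orientation, the thermomolecular pressure gradient mentioned above is often written, in its simplest form (neglecting curvature, disjoining-pressure details, and impurity effects, so this is a schematic rather than the precise relation used in [197]), as

\[
\nabla p_T \simeq \frac{\rho\, q_m}{T_m}\,\nabla T ,
\]

where \(\rho\) is the (nearly equal) density of ice or water, \(q_m\) the latent heat of melting per unit mass, and \(T_m\) the bulk melting temperature; the lubrication flow induced in the premelted film is what translates embedded particles up the temperature gradient.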
## 2 The challenge of near-criticality

The study of the near-critical behavior of materials has a long history and encompasses problems such as glass transition and deformation, shear-thickening behavior of suspensions, particulate material jamming, solid creep, and fracture dynamics. Most of these systems exhibit behaviors that are non-linear functions (e.g., see Fig. 1) of the system's temperature, applied stresses, and density of grains or atoms. Material failure in the environment shares these common features and presents specific challenges. Section 2.1 lists the most fundamental and concrete soft matter problems embedded in the pressing question of "how does a hillslope material turn into a debris flow?" It also reviews some recent experimental results on hillslope creep dynamics. The Earth's surface, like any soft condensed matter near a failure criterion, is generally far from equilibrium and far from isotropic. Finding insightful measurements of such system responses, and how to use them practically to predict material failure, has been a crucial scientific endeavor. Section 2.2 presents recent experimental results on this front and their implications for further developing failure prediction in the environment. Different types of environments exhibit near-critical behavior, and they have been recorded and analyzed in different ways: for example, section 2.3 presents a recent highly-resolved spatial and temporal recording of iceberg collective dynamics along the coast of Greenland; section 2.4 presents the challenge of observing and modeling river bed dynamics from flood to flood. In both natural systems, as in the experiments presented in section 2.2, the stress - or energy - landscape in the system appears significantly changed after a failure event, leading to hysteretic behavior. Finally, section 2.5 presents some of the major questions in modeling volcanic processes from the fundamental scale of mineral crystals and gas bubbles.

### Landscapes of glass - _D. Jerolmack and N. Deshpande_

Students of Physics have long been attracted to other fields of study, while the frontiers of Physics can also be advanced by challenges arising from other fields. Biophysics is now a recognized discipline within Physics where the essential elements of life - far-from-equilibrium dynamics, self-organization, soft materials and complex fluids - have required novel developments in theory and experiments from Physics [200]. Geoscience also presents novel problems, patterns, materials, and extreme conditions that have drawn the attention of some physicists. But the field of "Geophysics" is not endemic in physics departments and education.
The traditional field of Geophysics can be roughly split into two areas. The first, "Solid Earth Geophysics" is concerned with the application of solid mechanics to measuring and modeling rocks and ice over a variety of length and time scales [201]. The second, "Geophysical Fluid Dynamics", is deeply rooted in applied mathematics and its application to atmospheric and ocean dynamics [202]. We recognize that there is a large gap between these two areas that we tentatively call "Soft Earth Geophysics". This field is concerned with the behaviors of Earth's granular materials - from clays to boulders - and their interactions with each other, biological materials, and fluids [203]. These materials transit many regimes: thermal to athermal and reactive to inert (clay to sand), jammed and creeping to dilute and flowing, and soft and porous to hard and dense. Often these regimes are mixed, and/or systems repeatedly transition among them. Certainly, there are many scientists and engineers at work on these problems, in fields such as geotechnical engineering, hydrology, and others. But physicists with expertise in soft matter, statistical mechanics, and nonlinear dynamics have not organized themselves to attack these problems with the formality associated with fields such as Biophysics. Consider the formation of post-wildfire debris flows, which is an increasingly frequent and deadly hazard. Debris flows are highly concentrated slurries of soil and water that form on steep hillslopes [204]. Predicting the conditions that will trigger debris flows, and assessing the hazard associated with their runout, still relies largely on empirical relations derived from observations of previous flows. The challenges for understanding the failure and dynamics of debris flows represent frontier challenges in soft matter science: (i) Some wildfires are known to leave behind a hydrophobic layer beneath the surface, which may help to confine rainfall to a shallow surface layer of soil that accelerates saturation and failure [205, 206, 207]. Rapid wetting of surface soils may also create strong capillary pressure gradients that regulate soil failure and erosion style. Progress in this problem will require understanding of wetting under extreme conditions, and the influence of interfacial soil properties on infiltration and capillarity. (ii) Debris flows may form by an unjamming transition in which soil experiences a sudden loss of rigidity associated with a decrease in volume fraction; i.e., a landslide [208]. However, they may also form by progressive soil entrainment that increases volume fraction until it reaches close to the jamming point [207, 209]. The conditions that lead to one or the other mechanism are not known. (iii) The rheology of debris flows is certainly non-Newtonian; generally, debris flows appear to be yield stress materials with some degree of shear thinning [210, 211]. However, rheology appears to be extraordinarily sensitive to the concentrations of clay and sand [212]. Concepts of jamming and lubrication are just beginning to be applied to heterogeneous debris-flow materials, and offer some hope to explain and even collapse the variability observed in disparate studies [213]. (iv) Debris flows entrain large boulders that migrate to the front of the flow and act as a battering ram [214, 215]. Whether this is the result of granular segregation like the Brazil nut effect or a consequence of phase separation of granular (boulder) and liquid (mud) materials, is unknown. 
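As a point of reference for challenge (iii) above, a common empirical description of yield-stress, shear-thinning slurries - offered here as a generic constitutive sketch, not as a law established for debris-flow materials by the studies cited - is the Herschel-Bulkley relation

\[
\tau = \tau_y + K\,\dot{\gamma}^{\,n}, \qquad n < 1 \;\text{for shear thinning},
\]

where \(\tau_y\) is the yield stress, \(K\) the consistency, and \(n\) the flow index; the difficulty noted above is that all three parameters appear to depend sensitively on clay and sand concentrations and on the degree of jamming.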
A major limitation in understanding failure and yield of Earth materials is that models still rely exclusively on a Mohr-Coulomb criterion [155]. We know from observations, however, that sub-yield creep is pervasive in hillslope soils [216, 217, 218, 219]. Gravity-driven creep of (athermal) granular materials has not been examined in Physics until recently [220, 221, 222, 223]. Surprisingly, it has been shown that even an undisturbed sandpile creeps; even more surprising, relaxation by creep is very similar to aging in a glass following application of a stress [224]. These dynamics can be modulated by disturbances: tapping the pile hardens the bed and accelerates aging of the granular material, while heating it can reverse aging and sustain high creep rates. The dynamics suggest that mechanical noise in granular materials may play a role akin to thermal fluctuations in glasses. This is a frontier topic in the physics of amorphous solids, and gives rise to a host of new questions. Patterns of localized strain in the relaxing granular pile are similar to numerical simulations of metallic glasses, and also to large-scale strain fields around a ruptured fault following an earthquake [225]. What is the role of material properties in setting the length and time scales of localized strain? Earth materials are subject to a wide spectrum of forcing - hydrologic, seismic, biogenic, etc. Which kinds of forcing harden granular landscapes, and which ones set them up for failure? Finally, we observe in the field that in some locations soil creep is widely distributed within the bulk, while in other locations strain is highly localized along failure planes to create "earthflows" [226]. What determines whether creep is spatially localized or diffuse, and can we understand earthflow dynamics as creep along a shear band? ### Rigidity, nonlocality, and acoustics in dense granular materials - _K. Daniels_ Forecasting when Earth's critical zone will flow - whether through creep [224], flow [213], or catastrophic failure [227] - underlies many of the problems presented in this paper. Within the soft matter physics community, these questions have been addressed as questions of rigidity: how resistance to flow arises from the particle-scale to the meso-scale and to the system scale. Within a granular or amorphous material, internal stresses are transmitted by a heterogeneous network of forces known as force chains, as shown in Figure 6. This network provides the material with its global rigidity, and several techniques exist for probing the spatio-temporal evolution of rigidity at various scales. Physicists have constructed models based on nearly-perfect particles residing within an energy landscape of valid states [228], as well as simplified models comparing the number of constraints to the number of degrees of freedom [229]. For simplified laboratory systems, in spite of their dissipative nature and the difficulty of defining a frictional failure criterion, it now appears that the energy- and constraint-based frameworks both predict the same regions as being rigid or not [230]. When passively listening to acoustic emissions transmitted through the material, the statistical distribution of the resulting vibrational modes subtly shifts as a laboratory granular material approaches its point of failure [231], as would be predicted for model materials developing low-frequency vibrational modes as they approach a state with zero rigidity [232]. 
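For reference, the Mohr-Coulomb criterion mentioned at the start of this subsection is, in its standard textbook form,

\[
\tau_f = c + \sigma' \tan\phi ,
\]

with cohesion \(c\), effective normal stress \(\sigma'\), and friction angle \(\phi\). The sub-yield creep and aging described above occur at shear stresses below this nominal failure envelope, which is precisely why a threshold criterion alone cannot capture them.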
Finally, for models of disordered solids - networks manufactured to have a disordered network of thin beams - it is possible to forecast the most likely failure locations using only the meso-scale topology of the network's connectivity, without including any mechanical information [233]. It remains an open question whether these frameworks can translate to the rough, heterogeneous, anisotropic particles and wet environments necessary to understand geophysical dynamics. For instance, is it possible to measure a quantity like the density of vibrational modes [234] using seismometers or strain sensors? When a hillslope or glacier progresses towards a point of failure, do similar hallmarks forecast likely failure locations and times? Already, network science has been used to evaluate kinematic data obtained from ground-based radar, interpreted in light of the underlying micro-mechanics of granular failure, to successfully forecast the location and time of granular failure [235].

Fig. 6: An image of photoelastic disks resisting a shear force applied by the roughened boundary visible at the lower right, imaged with a darkfield polariscope [140, 150]. These methods allow for the quantitative determination of the vector contact forces between particles when performed in monochromatic light. In this image, the brighter particles are those carrying more force, while the darker particles carry little force. Under increased shear, the chains of forces buckle and rearrange (Source: E. Berthier, F. Fazelpour, C. Kirberger, NC State Physics).

### Floating granular materials - _J. Burton and K. Nissanka_

A single drop of soap can drastically affect the fluid dynamics of a liquid interface. Surfactant molecules spread rapidly, drive flows, and provide stability and elasticity. Although the Earth's oceans, seas, and rivers cover immense length scales, granular collections of ice, trees, organisms, and pollutants can shield the air/water interface. Such floating granular materials often jam in converging flows or narrowing geometries, creating hazards or ephemeral perturbations to the dynamics of Earth's aquatic interfaces. Examples include logjams [237], river ice [238], sea ice [239], and volcanic pumice [240]. In biological systems, granular rafts can be formed intentionally to survive flooding, as in the case of fire ants [241]. Although the fractional coverage of Earth's water bodies with floating granular materials is small, they can be exceedingly important, since crucial veins of transport can become quickly jammed with buoyant terrestrial debris. During the workshop, we showcased an outsized example of this behavior: ice melange, a buoyant agglomeration of icebergs and sea ice that forms in the narrow fjords of Greenland (Fig. 7a). Ice melange is perhaps the world's largest granular material [242, 243, 244], with individual clasts ranging from 10s to 100s of meters in size. As ice melange is slowly pushed through fjords that are many kilometers wide, it jams, buckles, and breaks as friction from the rocky walls transmits stress to the buoyant interior. Importantly, ice melange has recently been shown to affect ice-sheet mass losses by inhibiting iceberg calving [236]. During calving, cubic-kilometer-sized icebergs are fractured from the main glacier and discharged into the ice melange. Surprisingly, centimeter-scale iceberg displacements can be measured with ground-based radar every 3 minutes.
The measurements revealed that a period of incoherent granular flow preceded iceberg calving events (Fig. 7b), representing an important first step towards real-time detection of failure in geophysical granular flows. Within the context of floating granular materials, there remain a few key challenges. These materials are very sensitive to particle shape and confinement, both of which are essential for their ability to jam and transmit stress. Also, these materials can interact with the water, e.g., melting ice drives stratified flows from below. Finally, laboratory studies combined with continuum modeling using granular rheologies are needed to provide a larger-scale picture of how floating granular materials shape and respond to their dynamic environment.

Fig. 7: Adapted from Cassotto et al. [236]. (a) View of ice melange at Jakobshavn Isbræ on the western coast of Greenland. The ground-based radar visible in the center of the image is a few meters in size and is perched on the rocky cliffs above the fjord. (b) Top-left: divergence of the velocity field under steady flow. Red areas represent extension of the flow, and blue areas represent compression. Overall, the field is smoothly varying. Top-right: variation of the velocity field in the black rectangle after subtracting the mean of the underlying steady flow. Bottom-left: divergence of the velocity field less than 1 hour before a calving event (the fracture and discharge of a cubic-kilometer-sized iceberg from the glacier into the ice melange). The divergence field is rapidly-varying and noisy. Bottom-right: variation of the velocity field before calving, showing heterogeneous flow patterns.

### How rivers remember - _C. Masteller_

Erosion and morphological change in gravel-bed rivers require bedload transport, or the transport of sediment by rolling, sliding, or saltating close to the riverbed. Almost all existing model predictions of bedload transport rates are underpinned by the degree to which flow conditions exceed some threshold value in dimensionless shear stress [245, 246, 247, 248] representing the initiation of motion of sediment particles, e.g., [249, 250], or some "reference" transport rate, e.g., [251, 252]. For gravel-bed rivers, the bulk of sediment transport occurs close to these thresholds [251, 253, 254], making estimates of the threshold for particle motion critical for accurate predictions of fluvial transport rates. Further, there is a strong link between the width of a river during bankfull flow, or a flow that fills a channel to the top of its banks, and the entrainment threshold of bed and bank sediments. This model of channel stability, referred to here as "near-threshold channel" theory (NTC), was formalized by Parker [251] and has been validated within coarse-grained alluvial and bedrock-influenced rivers [253, 254, 255, 256]. Most bedload transport models that use dimensionless shear stress, also termed critical Shields stress, \(\sigma_{c}^{*}\), as the threshold parameter, have assumed a constant value near \(\sigma_{c}^{*}=0.045\). However, Buffington and Montgomery [257] demonstrated that \(\sigma_{c}^{*}\) varied systematically with the ratio of the median sediment grain size to flow depth, and argued that a universal threshold should be applied with caution. More recent work has explored how flow and grain interactions lead to inherent variability in the threshold for particle motion. Reid et al. [258] first suggested the influence of antecedent flows based on field-based bedload transport monitoring, hypothesizing that longer inter-flood durations led to increases in \(\sigma_{c}^{*}\) and reduced sediment transport rates. We present experiments that confirm that the magnitude and
duration of inter-event flows affect \(\sigma_{c}^{*}\) evolution. We show that with little to no active sediment transport, grain-scale changes in interlocking and subtle surface reorganization increase particle resistance to motion [259]. We suggest that the increase in particle resistance under inter-event flows is akin to granular creep and compaction of granular materials under low to moderate shear rates. In response to higher magnitude flows (e.g., floods), surface reorganization of the bed leads to a decrease in particle entrainment thresholds via an increase in surface roughness, akin to dilation of granular materials under high shear rates. We show that the magnitude of antecedent flows was the dominant control on the evolution of \(\sigma_{c}^{*}\) in the Erlenbach torrent in Switzerland, with a secondary, short-lived duration effect [260]. Consistent with experiments, these direct measurements of the onset of motion showed increases in critical Shields stress with increasing inter-event flow magnitude. The study also explored responses of \(\sigma_{c}^{*}\) to higher magnitude, sediment-transporting flows. As with below-threshold flow, strengthening effects were also observed following low to intermediate-magnitude bedload-transporting floods. However, following high-magnitude flows, the threshold for motion decreased. Masteller et al. [260] hypothesized that the transition from bed strengthening to bed weakening was associated with a transition from local rearrangement of particles to more intense transport, which acts to significantly disrupt bed structure via particle collisions or long-distance particle transport. To capture these variations in particle erosion thresholds, we develop a flow history-dependent model in which \(\sigma_{c}^{*}\) evolves through time as a function of bed shear stress [261]. We calibrate the model to a 23-year record of flow and bedload transport at the Erlenbach. We demonstrate that the model predicts field-based \(\sigma_{c}^{*}\) values more accurately than the assumption of a constant \(\sigma_{c}^{*}\). We suggest that this model may be more generally applied to capture time-varying erosion thresholds in gravel-bed rivers. There remain outstanding challenges: (i) What constraints does a deformable boundary (channel container) place on variations in bed shear stress, and by extension, evolution of particle entrainment thresholds? Work by Cunez et al. [222] demonstrates that dilation occurs at shear stresses well above those commonly observed to result in channel widening - suggesting that bed disruption or weakening may be infrequent or buffered by channel width adjustments. How do adjustments in particle entrainment thresholds impact thresholds for channel widening?
(ii) Definitive links between granular processes observed in physical experiments and field observations in gravel-bed rivers are precluded by measurement limitations related to natural grains, including complex shape (e.g., [262]), observations of grain arrangement limited to surface topography, and limits of fine-scale measurement capabilities in the field, as dynamic changes in grain-scale topography or erosion thresholds are relatively small and thus challenging to measure. There is potential for the application of geophysical methods, including environmental seismology (e.g., [263, 264, 265, 266]) and distributed acoustic systems [267], but these methods are still largely unexplored in a fluvial context.

### Why do persistently degassing volcanoes erupt? - J. Suckale

Not all volcanic eruptions are rare. According to the Volcano Watch by the United States Geological Survey, dozens of volcanoes erupt every day, and often the same ones. These volcanoes are commonly referred to as persistently active. Persistently active volcanoes are able to maintain an open connection between the magma storage regions and the surface vent, creating a dynamic system that erupts frequently. Their volcanic activity spans a wide spectrum from continuous passive degassing to intermittent explosive or effusive eruptions, with more violent, paroxysmal eruptions emerging with little or no clear precursory activity [269, 270]. The transitions between different eruptive regimes are sudden and unpredictable, creating large uncertainty in risk assessments [271]. An example is Kilauea volcano, Hawaii, where the continual and slow oozing-out of magma during year-long eruptions is disrupted episodically by dramatic, day-long lava fountains that propel an explosive spray of molten clasts and ash hundreds of meters into the air [272, 273, 274, 275]. Current classifications of these eruptive regimes are largely phenomenological. To make progress towards predicting changes in eruptive behavior, we need to underpin this existing typology with an improved understanding of the physical processes that could cause a regime transition. In fact, the National Academies declared the development of multi-scale models that capture critical processes and can be tested against field data to be one of the three grand challenges in modern volcanology [276]. At the heart of the question about eruptive regime transitions is the challenge of near-criticality. Most of the time, persistently active volcanoes are not erupting, yet they still emit copious quantities of gas and thermal energy [277, 278, 279, 280, 281, 282]; why not always? Near-criticality could provide a valuable framework for understanding why seemingly small increases in gas flux, pressure or crystallinity could lead to a sudden and dramatic change in behavior: for example, petrological data show that the uppermost few hundred meters of the plumbing system at Stromboli volcano, Italy, are composed of a highly crystalline magmatic mush with a solid fraction of 45-60% [283, 284, 285]. This mush is prone to tensile failure beneath the observed vent locations, driven by gas overpressure and the tectonic stress field, suggesting that Strombolian eruptions could be related to a transition from flow to failure [286].

Fig. 8: Photo of a lava fountain during the 1959 Kilauea Iki eruption (courtesy of USGS) in the background, and a photo of a crystal cluster later identified in erupted samples by Schwindinger and Anderson [268].

A transition from distributed flow to localized failure can also
occur in flow configurations with low crystallinity. One process that could trigger this transition even in a largely fluid system is the hydrodynamic interaction between individual crystals [287]. The hydrodynamic interactions between crystals are amplified by the high viscosity of magmatic melts, roughly five to twelve orders of magnitude higher than water, because it implies that individual crystals interact hydrodynamically over spatial distances many orders of magnitude larger than their size. These long-scale hydrodynamic interactions between individual crystals favor self-organizing behavior, reflected in spatially variable crystal distributions. This self-organization depends on the ambient, pure-fluid flow field and at the same time modifies it [287, 288, 289]. The diversity of physical processes that can disrupt conduit flow may be reflected in the diversity of observed eruptive regimes, but we will never know unless we can test different models directly against data. Some of the most precious clues may emerge from the smallest scales, namely individual crystals or bubbles, often at the micrometer scale. While large-scale data sets like seismicity, crustal deformation or heat and gas flux measurements at volcanoes provide impressive testimony of syn-eruptive processes, crystal-scale data may record at least some pre-eruptive processes directly [290]. Lending a helping hand in preserving this information is the glass transition, another aspect of near-criticality in natural systems. Once the eruption starts, the melt in the conduit quenches to a glass, freezing-in the crystals and bubbles it contains. Figure 8 shows an example of a lava fountain during the 1959 eruption at Klauea Iki, Hawaii, and a close-up photo of the crystal clusters later found in erupted samples [268]. A detailed analysis of the crystal cluster showed that the cluster formed by two crystals drifting together during flow and intergrowing over time [268]. The puzzling aspect of these crystal clusters is the abundance of relatively large misalignment angles separating the two crystals [291]. A smaller angle would be hydrodynamically more favorable, but is only observed in a surprisingly small percentage of clusters. Reanalyzing this data, we showed that the angles between these clustered crystals could be produced by exposure of the individual crystals to a traveling wave in the conduit prior to eruption [290]. In linear-shear flows, crystals tumble along in Jeffery orbits [292], but wavy flows align crystals [293, 294] onto a preferential angle that depends on both the flow conditions and the crystal geometry. From the crystal geometry, it is possible to infer that the observed high percentage of large misalignment angles is indicative of a downward propagating wave in a volcanic conduit with low crystallinity [290]. The inferred crystallinity is consistent with the lower range of observed crystallinities [295] and consistent with the possibility that a spatially heterogeneous arrangement of crystals inside the volcanic conduit could trigger a transition from flow to sliding [287]. Many questions regarding the eruptive behavior of persistently active volcanoes remain. Making progress on these is a challenge that benefits from the input of fields outside of classical volcanology, including but not limited to multi-phase flow, non-linear system dynamics, thermodynamics, and numerical analysis. 
It is also a challenge that touches on several themes discussed later in this paper, particularly the challenge of modeling the grain scale and the challenge of bridging scales. As many other natural systems, volcanic systems span an enormous range of physical conditions and scales from microns to hundreds of kilometers. ## 3 The challenge of bridging scales Scaling analysis tries to identify and match relevant physical regimes at the different spatial and temporal scales, and is at the heart of Earth science. One can upscale model results to predict field observations, or downscale geophysical results in explaining them by essential processes studied in the laboratory (Fig. 1). Identifying and scaling the essential physics for each pattern is challenging at times, because the wide range of spatial and temporal scales involved gives rise to several regimes, and the same patterns can result from different mechanisms. To test the validity of models across scales, analyzing the characteristics of patterns, in the time series of, or shapes produced by, landscape mechanical behavior is often key. Mathematical modeling can offer pathways to bridge laboratory observation to field observation scales in analyzing the dynamics of regimes and emergence of the associated patterns. Section 3.1 highlights that mechanisms of viscous and elastic deformations might differ in the temporal and spatial domains, in the specific case of Antarctic ice dynamics. It is shown that brittle fracture dominates in local and momentary observations, but ice sheets appear to flow viscously when observed for years over large areas. Section 3.2 tests if rainfall time signals, modulated in space via groundwater flow, can be approximated by averaging over the domains. Such results bring subtle questions to a common approach of bridging scales by averaging over a grid point. Section 3.3 considers the complex fluid dynamics at the bottom of glaciers, and how to reproduce the evolution of subglacial channel systems, coupling models of sediment transport and ice melting. Section 3.4 focuses on understanding how physicochemical mechanisms in the ground result in the striking formation of surficial salt patterns. Section 3.5 presents a scaling analysis and remote sensing measurements of periglacial soil patterns and investigates their relation to fluid-flow instabilities. Finally, the dynamics of dunes - fragile but perpetual forms in deserts - remains challenging to predict; section 3.6 uses laboratory-scale experiments to show how dunes persist and set a length scale in landscapes by interacting, attracting, and repelling each other. ### Ice cracks in a warming climate - Y. _Lai_ Interactions between fluids [298, 299, 300, 301], elasticity [302, 303, 304, 305, 306, 307, 308, 309, 310, 311], sediments [312, 313, 314], granular flows [242], and porous flows [315, 316, 317] are ubiquitous in the polar regions. Ice sheets and ice shelves are viscous gravity currents spreading above bedrock and ocean [300], respectively. Ice flows as a viscous fluid (e.g., glaciers) at longer timescales but breaks as a solid at shorter timescales (e.g., iceberg calving [318, 319]). Because the mass loss of ice sheets contributes to the rising sea levels, it is important to understand the fate of ice sheets in a changing climate. In this workshop, we highlighted a few processes involving interactions between fluids and solids with important implications for ice-sheet dynamics. We also identified some open questions to be addressed. 
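A conventional way to rationalize this dual viscous/brittle behavior - given here as a generic scaling argument rather than the specific rheology used in the studies cited in this section - is to compare the loading or observation timescale with a Maxwell relaxation time:

\[
t_M \sim \frac{\eta}{E}, \qquad \mathrm{De} = \frac{t_M}{t_{\mathrm{obs}}},
\]

where \(\eta\) is an effective ice viscosity and \(E\) an elastic modulus. Rapid loading (\(\mathrm{De} \gg 1\)) probes an elastic, fracture-prone solid, whereas slow loading (\(\mathrm{De} \ll 1\)) probes viscous, gravity-current-like flow.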
A major open question affecting future sea levels is whether meltwater-driven fracturing of ice shelves will significantly impact the future loss of the Antarctic Ice Sheet. Atmospheric warming threatens to accelerate the retreat of the Antarctic Ice Sheet by increasing surface melting and facilitating hydrofracturing [320], where meltwater flows into and enlarges fractures on ice shelves [321, 322], potentially triggering ice-shelf collapse [320, 323, 324] and acceleration of sea-level rise [325]. In this context, we combined theory and deep-learning to develop a stability diagram for Antarctic fractures [7]. Figure 9a illustrates a theoretical prediction of the stability of Antarctic fractures depending on the ice thickness, ice toughness, and glaciological stresses on ice shelves. To compare observations with theory, we trained a deep convolutional neural network to detect continent-wide fracture features on ice shelves [7]. We find that most ice-shelf locations that the deep neural network detects as fractures, shown as points in Figure 9a, lie in the parameter regime where our theory predicts stable fractures (gray triangle), and are consistent with the fracture theory. We find that if climate warms and causes the Antarctic ice surface to melt, large portions of Antarctic ice shelves will likely collapse due to hydrofracture [7]. Besides theory and field observations, in this workshop, we also explored the unique contribution of analogue experiments to the understanding of ice-sheet processes. The benefit of analogue experiments is that the parameters can be well controlled. We connected the findings in analogue experiments with the large-scale geophysical observations by matching the relevant nondimensional parameters. We discussed the use of an analogue experiment to mimic the formation and relaxation of a water-filled "blister" (Fig. 9b) beneath an ice sheet due to the injection of meltwater [296, 326]. The analogue experiment [297] (Fig. 9c) validated a mathematical model describing meltwater leaking from a pressurized "blister" into the surrounding water network (modeled as a porous substrate) beneath the ice sheet (modeled as an elastic sheet). The mathematical model has been used to constrain the hydrological property of the water network beneath the ice sheets, which is otherwise difficult to measure [296]. Many unanswered questions are to be explored, such as the processes governing the catastrophic collapse of ice shelves, including the mechanisms responsible for the periodic undulations observed in satellite imagery (Fig. 9d). The surface periodic undulations are highly correlated with locations of basal crevasses [327, 328, 329]. Importantly, the undulation spacing is relevant to the size of the icebergs (Fig. 9e). The types of mechanical instabilities [330] that give rise to these periodic spacing are still poorly understood. How the complex rheology impacts the mechanical instabilities, the disintegration of ice shelves, and the dynamics of ice sheets, is still to be investigated. ### Can we average the rainfall signal? - O. Devauchelle Rainwater infiltrates into the ground, until it reaches the water table, where the porous matrix is saturated with groundwater (Figure 10). There, it begins the slow underground travel that will eventually bring it back to the surface, where it will join a stream, and run to the sea. How long does the underground part of this travel take? 
Clearly, the residence time of water in an aquifer is \(\tau=\frac{V}{RA}\), where \(V\) is the groundwater volume, \(A\) is the area of the catchment, and \(R\) is the rainfall rate [331] (typically expressed in mm year\({}^{-1}\)). Residence time is thus tantamount to storage. It is also a prime control on the biological and chemical reactions that weather the porous matrix [332, 333], and a good estimate of the time it takes for groundwater to recover from pollution. As a first approximation, we can average the rainfall signal over the years, and treat its mean \(\langle R\rangle\) as a steady forcing of the groundwater flow. The resulting Darcy problem is then stationary, and amenable to classic fluid mechanics. For illustration, Figure 10 shows the stationary flow of groundwater through deep, unconfined aquifers that discharge into neighboring streams [331]. Fig. 10: Flow in unconfined aquifers, of different volumes/heights (increasing from (a) to (c)) of hydraulic conductivity \(K\) recharged by a constant rainfall \(R\) (analytical solution [334]). Rainwater infiltrates into the ground (blue arrows), and joins the water table (solid blue line). From there, it follows the groundwater flow lines (dashed blue lines) until it reaches the outlet (orange line), where it seeps into a river. The river flows towards the reader. The solid black lines are impervious (left: groundwater divide, right: axis of symmetry). All lengths are made dimensionless with the distance that separates the river from the divide. There remains only one dimensionless parameter in this problem: \(R/K\). As the rainfall rate increases, the water table rises. The domain over which the flow equations need to be solved thus expands, and this makes the problem non-linear. Fig. 9: (a) Fracture stability diagram for Antarctic ice-shelf fractures. Most ice-shelf fractures identified by a neural network on Antarctic satellite imagery (red dots) lie in the stable-fracture regime [7]. (b) Formation of a water-filled "blister" at the bottom of the ice sheet after a lake drains [296]. (c) Analogue laboratory experiment mimicking a water-filled blister beneath an ice sheet relaxing on a porous water network [297]. (d) Satellite image showing undulation patterns on ice shelves due to fracture formations on the Thwaites Ice Shelf in 1996. (e) Same region as (d), in 2014 when the ice shelf was broken into icebergs ((d, e) are from Landsat imagery). Even in steady-state, therefore, the residence time of water in an aquifer is not just inversely proportional to the rainfall rate, because the volume of groundwater needs to accommodate the flux it carries (\(V\) is a function of \(R\)). In reality, of course, the rainfall signal is intermittent, and so is the groundwater flow it induces [335]. Since this problem is non-linear, we cannot expect that the time average will gracefully propagate through the equations, as it would in a linear system. There is no reason to believe, therefore, that the steady flow of Figure 10 is the average of the actual groundwater flow. To find the latter, we generally need to solve the non-stationary problem, and average the result over time - a procedure far more costly than solving the steady-state problem. In short, finding the average groundwater flow is a difficult problem, because it is not just the solution of the averaged equations. This issue, which might seem anecdotal, is in fact ubiquitous. One simply needs to consider a non-linear system driven by some fluctuating forcing - rainfall or temperature. In cold climates, for instance, the soil cycles through freezing and thawing (Glade, Sec. 3.5), which obviously modulates its rheology, and therefore its downward creep (Jerolmack and Deshpande, Sec. 2.1) [223]. Could this parametric forcing explain why some soil patterns appear only in the Arctic, as suggested by J. Burton during the 2022 PCTS workshop? In other words, fluctuations do not always average out. In Earth sciences, this might be the rule rather than the exception.
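To see concretely why the mean forcing does not yield the mean response, one can integrate a deliberately minimal toy model - a lumped, Dupuit-like aquifer whose steady-state water table is a square-root (hence nonlinear) function of the recharge. All names and parameter values below are illustrative assumptions for this sketch, not quantities from the contribution above.

```python
import numpy as np

# Toy lumped (Dupuit-like) aquifer: the water-table height h obeys
#   phi * dh/dt = R(t) - K * h**2 / L**2,
# so its steady state, h* = L * sqrt(R / K), is a nonlinear function of the recharge R.
phi, K, L = 0.3, 1e-4, 10.0         # porosity, hydraulic conductivity (m/s), aquifer length (m)
R_mean = 1e-8                        # mean recharge (m/s), roughly 0.3 m per year

dt = 3600.0                          # one-hour time step (s)
n_steps = 20 * 365 * 24              # twenty years
t = np.arange(n_steps) * dt
wet = (t / 86400.0) % 365.0 < 30.0   # a 30-day wet season each year...
rain = np.where(wet, R_mean * 365.0 / 30.0, 0.0)  # ...carrying all of the annual recharge

h = np.empty(n_steps)
h[0] = L * np.sqrt(R_mean / K)       # start at the steady state for the mean recharge
for i in range(1, n_steps):
    h[i] = h[i - 1] + dt / phi * (rain[i - 1] - K * h[i - 1] ** 2 / L ** 2)

print("time-averaged water table, intermittent rain:", h[n_steps // 2:].mean())
print("steady-state water table at the mean rain:   ", L * np.sqrt(R_mean / K))
```

With these (assumed) numbers the two printed values differ by roughly a factor of two: the response to the mean recharge is not the mean of the fluctuating response. For a linear aquifer the two would coincide, which is exactly the point made above.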
### Subglacial plumbing systems - _I. Hewitt_ Increased glacier and ice-sheet melting is an obvious consequence of climate warming, with significant impacts for sea-level rise and for water resources in mountainous regions. Vast quantities of meltwater are transported beneath the ice, along the interface between ice and the underlying bedrock or till, driven out towards the ocean by the overlying weight of the ice. With little opportunity for direct observations, various conceptual theories for how to envisage the subglacial drainage system have been developed. There are similarities, and some important differences, to sub-aerial water flow and stream formation. Open questions abound about the relevant physics, and how it can be modeled. In particular, these include the role of erosion, deposition, ice-melting, and ice creep in enlarging and contracting the space available for water flow. There are potentially useful analogies with other "deformable" or "reactive" porous media, and scope for an increased role for analogue laboratory experiments. An important aspect of these systems is their temporal evolution - it is inferred from tracing experiments that there is a massive expansion of the drainage system during the summer melt season (due to dissipation-driven melting of the basal ice), but that this subsequently collapses (due to the viscous "creep" of the ice) during the winter [336]. The system is believed to transition between a relatively low permeability system in which water moves through porous sediments or "linked cavities", and a more efficient river-like network of channels [337]. The channels can be incised both upwards into the ice [338] and downwards into the sediments [339]. One aspect of these channels that provoked some discussion at the workshop is that they are believed to be responsible for depositing eskers (long, sinuous ridges of sand and gravel) during the last deglaciation [340]. A model was presented in which these form through continual deposition at the widening mouth of the channel as the ice-sheet margin retreats (see Fig. 11), with various suggestions made for how the sediment size distributions within the eskers might be used to test this hypothesis. Fig. 11: Suggested formation mechanism for an esker. Eskers are long ridges of sediment, found particularly in areas of Canada and Scandinavia, which were deposited as the ice-sheets retreated at the end of the last glacial period. Sediments are deposited as the flow velocity in a water-filled subglacial channel decreases near the retreating ice margin. With a better understanding of this formation mechanism, they could tell us more about the plumbing system under present-day ice sheets. ### Patterns in dry salt lakes - _L. Goehring_ Dry salt lakes, playas and salt pans represent some of the most extreme environments on Earth. They form in dry terminal basins where groundwater collects just beneath the surface and where evaporation dominates over precipitation [341]. The otherworldly landscapes that result are ones of beautifully ordered polygonal patterns in a surface salt crust, and an inspiration to fantastic settings like Star Wars' planet Crait.
Found worldwide, some noteworthy dry salt lakes include Badwater Basin in Death Valley (CA, USA, see Fig. 12a), Salar de Uyuni (Bolivia), Dasht-e Kavir (Iran) and Sua Pan (Botswana). The example of Qaidam Basin, China, has also been studied as an analogue for strikingly similar features found on Mars [342]. Although fracture [343] and buckling [344] of the surface crust are associated with these features, until recently, no clear mechanism has been able to accurately explain the emergent spatial and temporal scales of the salt crust patterns. The main challenge to any such explanation involves identifying a mechanism specific to salt lake environments that can account for the consistent growth of 1-3 m wide closed polygonal features in the crust [345, 346], over timescales of a few months [345, 347], and in a way that is insensitive to the exact salt chemistry and soil composition of any particular lake site. In order to predict the formation and dynamics of salt crust patterns, an intimate link between these dynamics and the convection of salty water beneath the soil has been proposed, where convection cells template the crust pattern [348, 349]. Convection in porous media is itself well-studied, with a variety of approaches and applications summarized in a recent, extensive review [350]. In the context of salt crusts, the connection is made to the particular problem of convection in the presence of a through-flow of fluid. This problem was originally raised in the context of geysers [351], but has since been developed to explain the subsurface flows observed at playas or dry salt lakes [352] and sabkhas, which are evaporite pans near tidal flats [353]. Briefly, the resulting model considers the Darcy flow of water in the porous sand or sediment of a dry lake, where the water table remains close to the surface [348, 349]. The water contains salt, which accumulates at the evaporating surface. The salt moves advectively with the water, and diffusively along any concentration gradients. It adds to the density of the water, providing buoyancy forces that can drive additional flows. As boundary conditions, there is a continuous loss of water at the surface, caused by evaporation, and the groundwater is recharged from below by some distant reservoir. This leads to the accumulation of salt-rich, denser water near the surface, which can be unstable to convection. The convective dynamics are captured by a single dimensionless group, the Rayleigh number, which describes the ratio of convective to diffusive effects. Essentially, this group describes the vigor of any convection [348], as it also characterizes the speed of the convective flows, relative to the background flows caused by the surface evaporation. Building on a body of recent field observations [345, 346, 349], this model of salt playas allows for the dynamic evolution of convection cells, which then modulate the salt flux into the surface crust [349]. As confirmed by direct field data of crust growth rates [345] and subsurface flow patterns [349], it predicts that surface salts will accumulate fastest above down-welling flows that spontaneously arrange into a polygonal network (see Fig. 12b,c) when simulated in large, three-dimensional domains [349].
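As a sense of scale, and following the spirit of the through-flow convection problem described above, the vigor of convection can be estimated by comparing the buoyancy-driven Darcy velocity with the evaporation velocity. The numbers below are rough, illustrative values rather than data from any particular field site (and different studies include porosity and other order-one factors in slightly different ways):
\[
\mathrm{Ra}\;\sim\;\frac{\Delta\rho\,g\,k}{\mu\,E}\;\approx\;\frac{(200\ \mathrm{kg\,m^{-3}})(9.8\ \mathrm{m\,s^{-2}})(10^{-11}\ \mathrm{m^{2}})}{(10^{-3}\ \mathrm{Pa\,s})(10^{-8}\ \mathrm{m\,s^{-1}})}\;\approx\;2\times10^{3},
\]
taking \(\Delta\rho\) as the density excess of near-saturated brine over the recharging groundwater, \(k\) as a typical sand permeability, and \(E\approx1\) mm day\({}^{-1}\) as the evaporation rate. Values of \(\mathrm{Ra}\) far above the onset threshold indicate that buoyancy-driven downwellings easily outrun the background evaporative through-flow, consistent with the vigorous convection invoked above.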
When the model parameters are constrained by relevant field data, it also accurately accounts for the observed growth rates of the polygons, and their remarkably consistent size, which arises naturally from a balance between evaporation rates and salt diffusivity. Further development of these ideas would require considering more carefully the feedback between the crust and the groundwater flows, and accounting for how the development of differences in crust thickness will, in turn, influence local evaporation rates [348, 354, 355]. This effort would not only contribute to our understanding of these patterns but also to environmental work. For example, Owens Lake has been the focus of a decades-long remediation effort to reduce dust formation off of dry lake surfaces, which is linked to the dynamics of the salt crusts [354]. However, even without further elaboration, the convective model demonstrates how the self-organization of flows beneath our feet can naturally explain the emergent length scales and time scales of salt polygons in nature. Fig. 12: A convective model of salt polygons in dry salt lakes. (a) The dry lake surface at _e.g._ Badwater Basin, Death Valley (CA, USA), is covered by a pattern of ridges in an approximately 10 cm thick crust lying over moist sandy soil. Here, the polygonal features are typically about 1.5 meters across. The model of buoyancy-driven flows used to simulate the emergent length scales and time scales of pattern formation at such sites predicts (b) salt flux into the crust (and hence crust growth rates) and (c) salinity profiles in the soil beneath. Panels (b, c) courtesy of and copyright Matthew Threadgold. ### Large-scale arctic soil patterns analogous to small-scale fluid flow instabilities - _R. Glade_ A key challenge in linking soft matter physics with Earth science lies in dealing with the high degree of heterogeneity in natural landscapes (e.g., [358, 359]). In cold landscapes, icy soil composed of an ever-changing mixture of heterogeneous sediment grains, liquid water, and ice demonstrates the complexity of Earth materials [5]. Soil in these settings moves downhill at slow rates of millimeters to centimeters per year due to freeze-thaw processes (e.g., [360]); over time, the soil self-organizes into distinct finger-like patterns known as solifluction lobes, with wavelengths of tens to hundreds of meters (Fig. 13). Inspired by contact line instabilities in thin film fluids (e.g., paint dripping down a wall) that form due to competition between viscous forces and surface tension (e.g., [356]), we develop a theoretical prediction for the solifluction lobe wavelength that aims to connect grain-scale cohesion and fluid-like motion of the soil to large-scale pattern development, while acknowledging the importance of natural heterogeneity. We find that, similar to surface-tension-dominated flows, competition between body forces and resisting forces (here in the form of enhanced soil cohesion at raised soil fronts) may drive pattern formation. Allowing for a hydrostatic component to account for large-scale topographic roughness not present in thin films, we find a new scaling relation that predicts the cross-slope wavelength (\(\lambda_{c}\)) varies as a function of soil thickness (\(h\)), topographic slope (\(\sin\theta\)), and a length scale characterizing spatial variations in cohesion (\(\gamma\)): \(\lambda_{c}\propto\sqrt{h\gamma/\sin\theta}\).
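Because the prefactor and the cohesive length scale \(\gamma\) are not independently constrained, the most robust use of this scaling is in ratio form, which is close to how the field comparison below is made (with \(\gamma\) assumed constant). As a purely illustrative reading of the relation, with no new data implied:
\[
\frac{\lambda_{c,2}}{\lambda_{c,1}}=\sqrt{\frac{h_{2}\,\sin\theta_{1}}{h_{1}\,\sin\theta_{2}}},
\]
so that, for fixed \(\gamma\), doubling the soil thickness at constant slope should widen the lobes by a factor \(\sqrt{2}\approx1.4\), while doubling the slope at constant thickness should narrow them by the same factor. Conversely, measured values of \(\lambda_{c}\), \(h\), and \(\theta\) constrain \(\gamma\) only up to the unknown order-one prefactor.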
Using remote sensing data of thousands of solifluction lobes across Norway, we find that although solifluction wavelength data contain a large amount of scatter typical of field data, our theoretical prediction is able to predict average wavelengths [357]. Using a long-term climate dataset, we also find that average lobe heights and wavelengths increase with elevation, pointing toward a broad climate control on solifluction patterns and illustrating the possible importance of external driving factors in addition to smaller-scale soil dynamics. This work demonstrates that even granular material in the non-inertial regime can exhibit instabilities fundamentally similar to those found in small-scale systems, at time and length scales orders of magnitude larger than previously observed. The presence of these patterns only in cold landscapes suggests that the exceptionally large amount of heterogeneity found in icy landscapes may allow for the development of sub-critical instabilities in non-inertial flows. Our findings point toward the need to address key knowledge gaps that impact our ability to understand landscape dynamics through a soft matter lens. First, we lack adequate rheological models that can account for (i) the non-inertial regime [211, 361], (ii) heterogeneity in grain size and material properties [362], (iii) cohesion between grains [363], and (iv) the presence of liquid water and ice [364]. While our highly simplified treatment of soil creep as a viscous fluid works surprisingly well to explain average pattern wavelengths, without a more accurate representation of soil rheology we are severely limited in our predictive capabilities. Second, field observations of soil transport processes are difficult to obtain because they operate over such long timescales, though recent advances show promise for obtaining high-resolution surface deformation of slow-moving solifluction lobes [365]. This illustrates the necessity for a collaborative, holistic approach that incorporates theory, laboratory experiments, numerical modeling, and field observations to better bridge grain-scale dynamics with landscape-scale processes and patterns. Fig. 13: Background: Aerial image of solifluction lobes with wavelength \(\lambda_{c}\) of tens of meters in Norway (Image credit: The Norwegian Mapping Authority). Top left inset: fluid contact line instability in a laboratory experiment with a wavelength of centimeters [356]. Bottom left inset: \(\lambda_{c}\) plotted against lobe thickness divided by topographic slope, \(h/\sin\theta\), measured from a remote-sensing derived digital elevation model. Blue color indicates the number of data points in each hexagonal bin. Black dots represent average wavelengths binned by \(h/\sin\theta\). Red line is the theoretical prediction from the fluid-inspired scaling analysis, \(\lambda_{c}\propto\sqrt{h\gamma/\sin\theta}\). Because we have no constraints on the cohesive length scale, \(\gamma\), we assume it is constant in our plot. Figure modified from [357]. ### Bedform dynamics: interaction, attraction and repulsion of dunes - _N. M. Vriend and K. A. Bacik_ In desert landscapes, we observe individual sand dunes of different sizes, with a characteristic length scale of up to kilometers, which seamlessly interact with each other and their environment [366, 367, 368]. As migrating sand dunes notoriously bury human-made infrastructure and lead to degradation of arable land, this interaction has important practical implications too [1, 369]. From a physical point of view, the evolution of a sedimentary surface is a result of an intricate coupling between the turbulent flow and the granular bed [370, 371]. Relevant scales of motion in this system span several orders of magnitude, from sediment transport, through dune migration, to large-scale organization of a dune field [372]. During the workshop, we focused on the system-level dynamics, which in the field occur over decades and thus are difficult to investigate in detail. However, in the lab, by using appropriately scaled miniature subaqueous dunes, we can investigate the key physical processes in a matter of hours. Specifically, we discussed three research questions, recently investigated within a new laboratory experiment (Fig. 14), which is uniquely suited for investigating long-term dynamics due to its circular quasi-2D geometry [373].
First, we explored the pairwise interaction between two dunes, leading to either coalescence (merging), ejection (sediment exchange) [374], or wake-induced repulsion of bedforms [373], and presented a phase-space diagram outlining the possible interaction outcomes derived from experiments and cellular automaton simulations [375]. Second, as a first step towards understanding the system-level dynamics of a dune field, we studied the long-term behavior of a periodic two-dune system [376]: is it always attracted to a symmetric state with two identical equi-spaced dunes, or are there conditions where the symmetry is spontaneously broken? We found that the key to understanding the dynamics is "turbulence": for flows with a relatively low turbulence intensity, the dunes will display fast-slow dynamics before settling at a stable equilibrium, but for high levels of turbulence an asymmetric attractor appears. This indicates that, at least in theory, the hydrodynamic coupling between neighboring dunes can either promote or inhibit regular organization of a dune field. Lastly, by placing idealized objects in the path of our model dunes, we directly addressed the engineering challenge of dune-obstacle interaction. We observed that both the size and the shape of the obstacle determine whether the dune is blocked or able to overcome the obstacle and reform on the other side [377], and once again, we discovered the importance of turbulent flow structures. Indeed, we showed that the outcome of the dune-obstacle interaction can be predicted with a simple data-driven tool based on the modal decomposition of the flow field around the obstacle (without sediment or dunes present). In summary, we revealed surprising connections between the rapid small-scale processes (such as turbulent fluctuations interacting with a granular interface) and the slow large-scale evolution of sedimentary landscapes. Remarkably, by taking advantage of scaling laws, we have managed to investigate such processes in a controlled laboratory experiment, but we are hopeful that with the advance of remote sensing, one day we will be able to validate our predictions with observational data. Fig. 14: Two equal-size miniature dunes, placed 45\({}^{\circ}\) apart within the periodic annular channel, start migrating due to the drag imposed by the water current, and over long times converge to a symmetric antipodal configuration [373]. ## 4 The challenge(s) of life Life has an impact on the ground dynamics and is impacted by the physicochemical evolution of the ground. Over time, plants, fungi, and burrowing animals alter the ground composition, constitutive structure, and consequently, its mechanical properties (e.g., Fig. 1e). Sometimes the presence of life makes the ground more cohesive and enhances its resistance to erosion, sometimes its own dynamics pull apart or alter the ground such that it is destabilized.
Such puzzling observations illustrate why it is crucial to isolate and study the biophysical processes happening in the ground. We present here two examples of such phenomena: in section 4.1, recent results and outlooks on the dynamics and impact of invertebrates (such as worms) in the ground are presented, while section 4.2 highlights a newly recognized effect bacteria can have on fluid dynamics in porous media. Finally, section 4.3 reminds us that human beings, via the building of our infrastructure, are also a major part of the life-driven disturbance of the Earth's ground and atmosphere. It presents recent results to support a fundamentally different usage of the ground as a human resource. ### Intruder dynamics in granular sediments - _A. Kudrolli_ The ground beneath us is constantly shaped by animal and human activities that can further impact movement of fluids and erosion [378]. Expedonic and endopedonic activities leading to the creation of mounds, voids, and burrows in loose sediments, besides anthropogenic activities leading to desertification, and trawling for resources on the ocean floor, are problems of great importance in ecosystem management. To overcome the opacity of granular matter, where many of these activities occur, X-ray imaging and index-matching techniques have been employed to understand locomotion strategies from undulatory to peristaltic motion in situ [379, 380]. Besides water jets and fluidization, various strategies from plastic grain rearrangements to sand fluidization and burrow extension by fracture have been uncovered, depending on depth, compaction, and grain size [381]. Considerable work is underway to relate the observed locomotion speeds to the gaits employed, based on the rheology of the medium, and to develop models starting from resistive force theory and slender-body theory introduced in the context of viscously dominated fluid dynamics [382, 383]. Further work is required to extend rheological models developed under uniform flow and shear conditions to the time-dependent and unsteady flows encountered in such dynamics. These considerations have motivated studies of the drag encountered by spherical and cylindrical solid intruders moving across a granular bed [384, 385, 386, 387]. These studies have found that the non-dimensional Inertial Number and Viscous Number, used to characterize the properties of granular matter and granular suspensions under uniform shear rate conditions, can be extended to unsteady cases by using an effective shear rate set by the length scale and speed of the intruder. Visualization of the flow of the granular medium has revealed that the flow is more narrowly confined around the intruder, with far greater slip at the solid-medium interface compared with a viscous fluid [388, 385]. Granular flow around an intruder was found to result in far greater drag anisotropy compared to viscous flow, which is important for drag-assisted propulsion, with still greater anisotropy while considering grains with negligible surface friction [387]. The experiments robustly support the increase of drag with overburden pressure in a granular bed and the scaling of drag with projected cross-sectional area in the case of simple intruder shapes such as spheres and cylinders.
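Schematically, and with order-one choices that differ between the studies cited above, the extension works by replacing the imposed shear rate with an effective rate built from the intruder speed \(U\) and size \(D\):
\[
\dot{\gamma}_{\mathrm{eff}}\sim\frac{U}{D},\qquad I=\frac{\dot{\gamma}_{\mathrm{eff}}\,d}{\sqrt{P/\rho_{g}}},\qquad J=\frac{\eta_{f}\,\dot{\gamma}_{\mathrm{eff}}}{P},
\]
where \(d\) and \(\rho_{g}\) are the grain diameter and density, \(P\) is the local confining (overburden) pressure, and \(\eta_{f}\) is the viscosity of the pore fluid in the suspension case. Small \(I\) or \(J\) then corresponds to the quasi-static regime in which rate-independent, frictional drag dominates.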
However, wakes generated by more complex or multi-component shapes were found to lead to non-additive drag. In particular, two rods moving in tandem [388], are observed to present a drag as a function of separation distance that is different compared to that in viscous fluids. While drag acting on the leading and following rod in viscous fluids at low Reynolds numbers are essentially the same, in a granular material the drag acting on the leading rod exceeds that acting on the following rod even in the quasi-static limit [388]. These studies both point to the complexity and nuanced nature of granular matter encountered while moving or digging in them. While the above discussion has focused on the limit where the intruder is strong enough to move the material, a complementary limit is where the intruder cannot move the material, but is restricted to moving within the pore spaces. Thigmotactic behavior, as in motion along the edges of surfaces due to sensory feedback and environmental cues, can play an important role in determining transport [389]. The importance of body shape, stroke, and topology is receiving attention in determining the dynamics of bacteria as discussed in the following section by Stone and Yang. The second part of the presentation focused on the dynamics of centimeter-scale oligochaeta _Lumbriculus variegatus_ in model porous media, where its natural strokes were hindered by the tight passages between idealized grains. A persistent random walk model along boundaries was found to capture the observed time-distributions to escape the dynamical traps posed by the pore-throats [390]. Active polymer models where simple steric interactions are emphasized have a significant role to play in determining general principles of transport in this limit [391, 392, 393]. ### Bacteria-mediated processes in porous media - H. A. Stone and J. Q. Yang The ground under our feet is home to a wide range of life that forms the basis for the agriculture on which we all depend for food, for the plants and trees that regulate atmospheric carbon dioxide and oxygen, and for an enormous range of organisms, from bacteria, algae, and fungi to ants, worms, etc. Microorganisms are central to many of the underground processes. In recent years, the fate of carbon stored in soil has become an important frontier research area: can soils act as a "negative emission" technology, serving as a reservoir for carbon released from fossil fuel combustion, or will carbon, possibly long-stored in soils, be released as the climate warms [394]? Of course, there are many types of soils and environmental conditions [395]. Models used to project future climate have wide variability for the contributions of soil carbon to projections of atmospheric CO2, even differing in the sign of the effect. Motivated by these questions, in Stone's talk he described two laboratory studies, led by Yang, of bacteria-mediated processes in porous media, one bearing on the soil carbon storage question and the other identifying a previously unrecognized transport process for bacteria in partially saturated porous media. These kinds of problems have natural links to the topics discussed in challenge 1 on modeling fundamental processes beginning at the particle scale, since several of these themes probe dynamics and transport in porous systems, e.g., the contributions of Bourg (4), Datta (1), and Li and Juanes (2). 
In addition, the bacteria-mediated processes discussed in this summary naturally involve properties of clay, as discussed in challenge 4 by del Gado (4.3), and dynamics of particles in porous environments, which has links to the contribution of Kudrolli (4.1). The first half of the presentation at the workshop summarized a microfluidic experimental study, which incorporated important elements of the soil carbon problem, including clay aggregates, different molecular weight carbon molecules, bacteria, and enzymes, embedded in flows of water [396]. Confocal microscopy was used to document the space and time dependence of how molecules adsorb onto and diffuse into and out of transparent clay aggregates. Smaller molecular weight molecules displayed reversible transport, while the larger molecular weight molecules were quasi-irreversibly adsorbed, i.e., diffusion into the aggregates occurred rapidly (within minutes), but most molecules remained adsorbed even after flushing for tens of hours with water. Bacteria were too large to fit into the nano-size pore space of the aggregates, and so accumulated on the outside of clay aggregates, whereas enzymes were shown to effectively penetrate the small pores, where they broke down and released the trapped large molecular weight sugars. A brief summary was given of typical models of soil carbon storage, and the experimental results were used to suggest improvements to the models. In the second half of the presentation, we documented a previously unreported mechanism of transport of bacterial cells in unsaturated porous media. In particular, experiments were presented showing that surfactant-producing bacteria cause changes in the wettability (to a hydrophilic state) of an initially hydrophobic substrate, which, through a capillary pressure change, causes millimeter-per-hour fluid flow, comparable to rapid bacterial swimming speeds, along corners of a model chamber [397]. Similar experimental observations of bacterial transport were then demonstrated in a porous medium of packed angular grains, which served as a model soil. The dynamics are controlled by quorum sensing, which regulates biosurfactant production. This transport process can also lead to movement of non-motile bacteria in the solution. The results suggest that this kind of surfactant-driven transport through changes in wettability, instead of the better-known Marangoni motion, may be relevant to natural porous environments. Subsurface life and transport processes are poorly understood, in part because they are difficult to visualize and monitor in space and time. They are important problems that will benefit from designing lab-scale experiments and can help provide insights about soil and surface processes relevant to agriculture, sustainability, water, energy, and Earth surface dynamics. Microfluidic tools [398, 399] and new experimental approaches can help to shed light on these questions. ### Nanoscale forces in hydrated clays and the physics of sustainable construction materials - _E. Del Gado_ The ground beneath our feet is also a unique, and sustainable, source of construction materials (Fig. 15a,b). Clay soils and other Earth materials have been used for construction over the centuries [402, 403]. Examples of Earth-based architecture, from the most modest to the most monumental ones, are available on all continents and in all climates.
15 % of the architectural sites recognized as part of the UNESCO world heritage are entirely or partially built with soils and sediments, demonstrating the durability of these materials and construction techniques. Even in this century, around 50 % of the world population lives in buildings made of raw soils and completely natural clays. Construction materials alone make up a sizeable portion of the GHG emissions of the entire construction sector, since, independently of building operations, they are currently responsible for close to 11 % of the world's global \(CO_{2}\) emissions [404]. Most of the carbon footprint is in cement production for concrete, with the latter being the most used synthetic material on Earth, due to its centrality to construction technologies and the built infrastructure. Among immediately implementable strategies that would allow for substantial decarbonization of the cement industry, greener cement mixtures based on reduced cement content and partial substitution with clays and natural soils are probably the most interesting and valuable option [405, 406]. Hence, the soil beneath our feet and the world's oldest construction material is also probably the most ecologically responsible and a potentially novel source of more sustainable construction technologies. Clay sensitivity to salinity, pH, moisture, and stresses, which is central to cohesive strength, stability of soil and building foundations, originates from the nanoscale physical chemistry and ionic composition of clay layers, from which larger-scale structures with complex pore networks and load bearing properties grow [184, 407]. In hydrated clays, nanoscale surface forces develop from the accumulation and confinement of ions in solution between charged surfaces, a phenomenon which also controls the cohesion of hydrated cement, and it is well-known in soft matter, ranging from colloidal materials to biological systems [408]. However, the cohesive forces that develop during hydration of cement and clays are strongly ion-specific and dramatically depend on confinement and humidity conditions. Hence, they cannot be properly captured by existing mean-field theoretical descriptions used in other cases for surface forces in ionic solutions, raising a number of outstanding fundamental questions on the nanoscale physics of confined ion and water [409, 410, 400]. Increasing confinement and surface charge densities promote ion-water structures, distinct from bulk ion hydration shells, that become strongly anisotropic, persistent, and self-organized into optimized nearly solid-like assemblies. Under these conditions, the dramatically reduced dielectric screening of water and the highly organized water-ion structures (Fig. 15c) lead to strongly attractive interactions between charged surfaces. Molecular simulations effectively fill the gap with the experimental characterization of cohesive forces in hydrated clays and cement, providing novel insight into the strong ionic correlations that govern them, to be used in continuum theories and larger scale studies [409, 410, 400]. The nanoscale forces, in fact, together with the non-equilibrium environmental conditions, eventually determine the growth of microscale grain assemblies, layered meso-phases, and porous structures that can be obtained through coarse-grained simulations (Fig. 
15d) and govern the rheology and mechanics of clay-based soil and construction materials, providing the missing link from the nanoscale physical chemistry to the meso-scale aggregation kinetics and morphological variability of soil and clay-based binders [401, 412, 411]. Understanding how mechanical and rheological behaviors emerge in porous clay matrices that gel and solidify starting from nanoscale forces, which are chemically specific and sensitive to non-equilibrium conditions and environmental reactivity, is an area rich with challenging scientific questions, where soft matter scientists are poised to contribute with key insights. Fig. 15: Earth construction materials, examples, and scientific questions: (a) rammed earth constructions, (b) rural brick kiln, (c) water-ion structures from semi-atomistic simulations of Ca ion solutions confined between charged surfaces [400], and (d) mesoscale porous structures of cement hydrate gels from coarse-grained simulations, where particle sizes are on the order of 10-50 nm (see bar) [401]. ## Discussion In each contribution associated with the four challenges identified above, a unique soft matter and geoscience research topic was addressed. The outstanding open questions raised by each author show commonalities that relate to the multi-dimensional, multi-scale, multi-phase, and multi-process character of the ground beneath our feet. Some of the most crucial shared questions are briefly summarized and discussed here. The opaque and time-dependent nature of the ground profoundly hinders our capacity to observe and understand it. In the laboratory, advancements in microscopic visualization and experimental techniques are, on the one hand, key to remediating this difficulty (see sections 1.1, 4.2); yet illuminating all spatial dimensions, while allowing temporal dependencies, is a technical frontier. On the other hand, at the cost of simplifications, properties, stress and flow fields in 2D-3D granular porous media can be visualized and quantified (see sections 1.2, 1.3, and 2.2). In the field, the time-dependent and temporally variable nature of surface and subsurface processes requires additional efforts for monitoring (see sections 2.4, 3.3, and 3.5). Extensions and advancements in the use of geophysical techniques, such as seismology, hold promise for tracking the temporal evolution of, e.g., transport processes [264, 413]. Simultaneously, temporally resolved remote sensing techniques are making details of large-scale phenomena, like ice shelf dynamics, observable (see sections 2.3, and 3.1). Though the ground is becoming more observable, seemingly basic questions of when grains start moving, where particles jam, or where and when landslides rupture, are still wide open. Especially in natural systems, identifying the essential conditions (see sections 1.4, and 1.5), material properties (see sections 2.2, 2.5, and 4.3), and processes (see sections 1.3, 2.3, and 2.4) to understand and model the bulk dynamics remains a research frontier. Beyond the relevant components, modeling approaches still struggle to represent key relations (e.g., how microscale processes change the bulk observations, or how to integrate over local conditions or time to obtain bulk mechanical behavior), feedbacks (how macrostructures affect the microscale conditions, or co-dependent processes and scales [221]), and temporal evolution.
While computational advancements enable the modeling of complex systems, coupled dynamics require further efforts in developing the proper way to implement relationships governing elementary phenomena (see sections 1.5, 3.2, 3.4, and 4.3), including life-related ones (see sections 1.1, 4.1, and 4.2). Fundamental open questions, for many of the topics addressed here, emerge from the complex interplay of several phases. The dynamics of, for example, three-phase systems, where liquid-air-grain interfaces evolve over time, are key to a better understanding of how natural rafts, such as ice melanges, behave (see section 2.3), how landslides start and stop (see section 2.1), and how groundwater evaporates into salt crusts (see section 3.4). Conceptually taming these interactions and feedbacks to understand systems with more than two phases challenges concepts, experiments, and observation throughout the Earth sciences (e.g., [414, 415, 416, 417, 21]). Understanding and modeling contact lines, capillary stresses, and the resulting cohesion in particulate materials is a vast and ongoing effort even for the case of a single-phase liquid in contact with another fluid and solid particles [188, 418]. More often than not, flows in the ground are themselves suspensions of minerals (see sections 1.1, and 1.4) or biological (active) particles (see sections 4.2, and 4.1), while the solid part of the ground itself can be composed of vastly different materials in terms of wettability (see sections 1.4, and 4.2), density (see sections 1.5, and 2.5), and rigidity (see section 2.2). Especially where life is involved and shapes its surroundings (see sections 1.1, 4.1, and 4.2), or where landforms and patterns of different scales interact (see sections 2.5, 3.4, 3.5, and 3.6), modeling the ground accurately requires solving some of the most difficult questions of multi-phase systems. Another recurrent question for many of the addressed topics is how to model transience in material state and behavior. The ground is heterogeneous in composition and therefore presents diverse physicochemical behaviors and properties, such as rigidity (see section 2.2), plasticity (see section 1.4), or viscosity (see section 2.5). As the ground evolves over time under external physicochemical forcings, it also undergoes changes in its bulk dynamics. Such transients may result in instability growth and produce characteristic patterns. Direct observables of transience are often only the resulting expressions of dynamics hidden in the ground, like ordered patterns in salt lakes (see section 3.4), arctic soils (see section 3.5), ice shelves (see sections 3.1, and 2.3), or glacial deposits (see section 3.3). Linking these dynamics to climatic changes, geodynamics, as well as anthropogenic and biogenic activities, poses further outstanding open questions, including topics related to subsurface storage and transport of gases and fluids (see sections 3.2, and 2.5), or the anticipation of ruptures that can turn into devastating natural hazards (see sections 1.3, and 2.1). ## Conclusion Most Earth surface materials can be categorized as Soft Materials. Studying their dynamics necessitates a diversity of approaches; understanding the ground is thus intrinsically a highly interdisciplinary field.
Following a PCTS workshop, we addressed here a broad outline of the "physics of the ground beneath our feet", considering the ground within the broad category of Soft Matter and all the variety of challenges one faces when trying to model its behavior. We identified four major challenges: (I) modeling from the grain scale, (II) near-criticality, (III) bridging scales, and (IV) life, which structure this paper (see Figure 1). Within each section, contributions by the workshop participants presented examples of works tackling each challenge and open questions related to their specific topic. We hope for this collective effort to provide a new broad perspective on the field and invite more Soft Matter scientists to study the fascinating ground we live and build our future on. ## Author Contributions All authors were involved in the conceptualization of this review article. The writing was administered by AV and MH, including drafting the opening and closing sections. All authors were involved in both writing the original draft of their individual sections and reviewing and editing the entire article. ## Conflicts of interest There are no conflicts to declare. ## Acknowledgements AV and MH are indebted to Abhinindra Singh, for fruitful discussions on the early design of the manuscript. SSD, ICB, C-Y Lai, and HAS thank the Princeton Center for Theoretical Science at Princeton University for their support of the 2022 workshop that stimulated the writing of this paper. In particular, we are grateful for the help provided by Charlene Borsack, both leading up to and during the workshop. HAS and JQY thank the High Meadows Environmental Institute (Princeton University) and the NSF (grant MCB-1853602). These projects were led by Judy Yang (first a postdoc at Princeton and now a Professor at the University of Minnesota) and the wonderful collaborations included Niki Abbasi, Bonnie Bassler, Ian Bourg, Zemer Gitai, Joe Sanfilippo, and Xinning Zhang. SSD acknowledges funding from the High Meadows Environmental Institute and Andlinger Center for Energy and Environment (Princeton University), NSF Grant CBET-1941716, a Camille Dreyfus Teacher-Scholar Award of the Camille and Henry Dreyfus Foundation, and the Princeton Center for Complex Materials, a Materials Research Science and Engineering Center supported by NSF grants DMR-1420541 and DMR-2011750. OD is indebted to E. Lajeunesse, V. Jules, A. Guerin and F. Metrivier for sustained and enjoyable collaboration on groundwater flows. ICB thanks NSF Grant EAR-2150797 and DOE Grant DE-SC0018419 for financial support. Notes and references